Catching the Deviations
Automating document review boosts speed and accuracy while sharply reducing the risk of non-compliance
In the insurance industry, document generation is typically complex: hundreds of rules and thousands of templates, all based on a plethora of text modules. The problem is that this process is subject to constant change, usually driven by new insurance terms and conditions or new products. Text and layout then need to be modified and checked (text modules added, switched, or removed).
Deviations can also arise from release changes to the systems that generate the text. Here, the challenge is to review every document for legal and technical accuracy: Do the documents look exactly the same after the update? Do they still meet all corporate identity (CI) and compliance criteria in the new software version? For many insurers, reviewing documents after software updates makes up the lion's share of quality assurance work.
Content and appearance check – everything automated
But considering the complexity of the task, how is it possible to check every document, in all its versions, reliably? Many insurers assign extra employees to the task. In practice, however, they usually just print a random document (the candidate) and the related template (the reference) and compare the two with the naked eye.
Of course, that is hardly 100% reliable. Yet reliable quality assurance is essential for every insurer. Software tools that perform automated tests are both more efficient and more dependable: they validate documents against a defined set of rules, compare them with one another directly, and identify how changes affect the quality of the final document. The advantage is that documents are checked not only for visual deviations but also for deviations in content at the textual level. Software solutions of this type detect even the smallest outliers, including those that cannot be seen with the naked eye.
Not every deviation is important
All sorts of such checking tools are available, and several even offer two critical benefits: They are designed for high throughput and also allow for defining tolerance limits and excluding areas from the check. This batch processing ability and the flexibility to define check criteria are definitely advantageous to insurance companies with high volumes of documents containing a lot of variable data.
Incidentally, these checking tools also resolve an inevitable conflict: on the one hand, covering as much of the document as possible (keeping the risk of undetected errors as low as possible); on the other, skipping sections that are irrelevant for quality (avoiding unnecessary effort). The check is designed to identify how changes affect the content and layout of a document. In the final analysis, not every change is relevant.
Test result as PDF and XML
The following is one possible scenario that some insurance companies actually use. The responsible department defines the test cases (review tasks) and provides the criteria for testing. The IT department handles technical execution. That way, the reviewer can concentrate on the content without having to worry about the actual execution, which runs automatically.
The document is then checked for content: Are the data correct (XML format)? Are the correct text modules used? Do they fit together? In other words, the system analyzes whether the specific document contains the values defined for the corresponding template. If the XML dataset has not yet been saved, it is created as a new review task in the correspondence system. For many insurance companies, the automatic generation of test cases is an important component of document generation.
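The content check described above can be sketched as a simple comparison of field values in an XML dataset against the values expected for a given template. This is a minimal illustration, not the actual checking software; the element names and dataset structure are assumptions.

```python
import xml.etree.ElementTree as ET

def check_content(dataset_xml: str, expected: dict) -> list:
    """Compare field values in an XML dataset against the values
    expected for the template; return a list of deviations as
    (field, expected_value, actual_value) tuples."""
    root = ET.fromstring(dataset_xml)
    deviations = []
    for field, expected_value in expected.items():
        node = root.find(field)
        actual = node.text if node is not None else None
        if actual != expected_value:
            deviations.append((field, expected_value, actual))
    return deviations

# Hypothetical dataset for a policy document
dataset = """<document>
  <policy_number>4711</policy_number>
  <module>GTC-2024</module>
</document>"""

print(check_content(dataset, {"policy_number": "4711", "module": "GTC-2023"}))
# → [('module', 'GTC-2023', 'GTC-2024')]  (wrong terms-and-conditions module)
```

A real correspondence system would of course resolve the expected values from the template definition rather than from a hard-coded dictionary.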
The test itself is performed by the review software, which is controlled by a separate application. The application checks the database connected to the text system for any outstanding review jobs and then passes the data needed for the test to the checking software. Such information may include: Where are the reference and candidate documents? Which users are involved? Is there an existing comparison profile? Where should the test result be saved?
Other review parameters may include: Should a full-text comparison be run or just a structural analysis? Which areas of the document should be excluded from the comparison? In short, all the criteria can be passed to the testing software through the separate control application. The test results are automatically transferred to the database. The reviewer can then display the results on screen as a PDF comparison file or an XML file that shows deviations down to the pixel level.
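The job of the control application at this step is essentially to assemble the review parameters from a job record and hand them to the checking software. The following sketch shows that assembly step; all field names and defaults are illustrative assumptions, not the actual interface.

```python
# Sketch of the control application's parameter assembly.
# Field names ("reference_path", "comparison_profile", ...) are hypothetical.

def build_review_params(job: dict) -> dict:
    """Turn an outstanding review job (as fetched from the database)
    into the parameter set passed to the checking software."""
    return {
        "reference": job["reference_path"],            # where is the reference?
        "candidate": job["candidate_path"],            # where is the candidate?
        "profile": job.get("comparison_profile", "default"),
        "mode": "fulltext" if job.get("full_text", True) else "structure",
        "masked_areas": job.get("masked_areas", []),   # areas excluded from the check
        "result_target": job.get("result_target", "db://review_results"),
    }

job = {
    "reference_path": "/refs/policy_v1.pdf",
    "candidate_path": "/out/policy_v2.pdf",
    "full_text": False,  # structural analysis only
}
print(build_review_params(job)["mode"])  # → structure
```

In production, a loop would poll the database for such job records at fixed intervals and write each result back, so the reviewer never triggers a comparison manually.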
The software lists which documents were correct, where any changes occurred, and which of them affected the quality of the document. If the deviations did not affect CI, compliance, or technical accuracy, the checked documents (candidates) are released. The application is configured to automatically retrieve and make available the test results at definable intervals.
First the organization, then IT
But there are organizational and technical issues within document processing to clarify before implementing the checking software. What type of test result is needed (PDF differences file or XML file)? At what point during document production should the test be conducted? Which parameters are important, and which can be ignored (tolerance)? These questions must be answered by the specialist departments. Implementation should therefore always be a joint project involving the specialist departments, the IT department, and the central output instance, if there is one.
Learn more about the benefits and advantages of automated document checking
DocBridge Delta is a platform-independent and scalable application for comparing documents in a variety of ways. It performs rule validation (CI, compliance), the direct comparison of documents at the visual and content level, and also supports regression, iteration, and conversion testing. In a direct document comparison, the application lists the differences found in log files and displays them graphically on the screen (visual comparison).
For a visual comparison, each document is converted into a pixel image of equal resolution and the resulting raster images are compared, as if on a light table, where the two documents are overlaid to reveal the differences between them. The software shows the places where the two versions deviate. Users are thus able to make the necessary changes exactly where needed.
The software reads in two files, e.g. in AFP, PDF or PostScript, compares the original to the modified one and displays the differences found in just seconds. Comparisons at the pixel level identify the changes and their location. Comparisons at the structural level evaluate character sequences of the text, font attributes and other properties that affect the output.
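A structural comparison of the kind just described works not on pixels but on the sequence of text runs with their output-relevant attributes. The sketch below assumes each run is a (text, font, size) tuple; that representation is an illustration, not the software's internal model, and runs beyond the shorter document are ignored for brevity.

```python
def structural_diff(reference_runs, candidate_runs):
    """Compare two sequences of text runs (text, font, size) and return
    the positions where any attribute affecting the output differs."""
    return [
        (i, ref, cand)
        for i, (ref, cand) in enumerate(zip(reference_runs, candidate_runs))
        if ref != cand
    ]

ref  = [("Dear customer,", "Arial", 10), ("Your policy has changed.", "Arial", 10)]
cand = [("Dear customer,", "Arial", 10), ("Your policy has changed.", "Arial", 9)]
print(structural_diff(ref, cand))
# → the second run differs: same text, but the font size dropped from 10 to 9
```

Note that a pure pixel check would also catch this deviation, but only the structural check can report *what* changed (a font attribute) rather than merely *where*.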
One advantage of DocBridge Delta is that country-specific quality criteria can be saved as sets of rules, and certain areas of a document (variable fields such as address, customer number, etc.) can be excluded from the test (masking). The software also allows the highest possible testing tolerance (fuzzy methodology) without neglecting the absolute accuracy of the content, corporate identity (fonts, layout, etc.) and compliance (legal obligations). The tolerance parameters can be set as needed.
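Masking and tolerance can be combined in one check: deviations inside masked rectangles (variable fields such as the address block) are ignored, and up to a configurable number of stray deviating pixels is accepted (a fuzzy check). The sketch below illustrates the principle only; the rectangle format and threshold semantics are assumptions.

```python
def compare_with_masking(reference, candidate, masks, tolerance=0):
    """Pixel comparison that ignores masked rectangles (x0, y0, x1, y1)
    and accepts up to `tolerance` deviating pixels outside the masks.
    Returns (passed, list of unmasked deviations)."""
    def masked(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in masks)

    diffs = [
        (x, y)
        for y, (row_ref, row_cand) in enumerate(zip(reference, candidate))
        for x, (a, b) in enumerate(zip(row_ref, row_cand))
        if a != b and not masked(x, y)
    ]
    return len(diffs) <= tolerance, diffs

# The top-left region holds the (variable) address -> mask it out.
ref  = [[1, 0, 0],
        [0, 0, 0]]
cand = [[0, 0, 0],
        [0, 0, 1]]
ok, diffs = compare_with_masking(ref, cand, masks=[(0, 0, 1, 0)], tolerance=0)
print(ok, diffs)  # → False [(2, 1)]  (the address change is ignored, the other is not)
```

Raising `tolerance` relaxes the check without touching the masks, which is how the conflict between completeness and effort described earlier can be tuned per document class.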
DocBridge Delta is able to test for both intended and unintended changes, even exposing issues that are not clearly visible but potentially problematic. It can compare files of the same type or of different types against each other. The software offers two different checking methods: an interactive interface for ad-hoc testing as well as command-line input for automating the process.
The Compart solution is also designed for robust use and automation and can be customized to suit user requirements (department, development/IT, production). Intuitive operation makes even complex tests relatively easy to configure. The solution supports existing output management structures (legacy) and can be used at any point during the document production process.