Getting Better Recognition Results with OCR Segmentation
Today’s high-speed forms processing workflows depend on accurate character recognition to capture data from document images. Rather than manually reviewing forms and entering data by hand, optical character recognition (OCR) and intelligent character recognition (ICR) allow developers to automate the data capture process while also cutting down on human error. Thanks to OCR segmentation, these tools are able to read a wide range of character types to keep forms workflows moving efficiently.
Recognizing Fonts
Deploying OCR to capture data is a complex undertaking due to the immense diversity of fonts in use. Modern character recognition software focuses on identifying the pixel patterns associated with specific characters rather than matching characters against existing font libraries. This gives it the flexibility needed to discern multiple font types, but problems can still arise from spacing issues that make it difficult to tell where one character ends and another begins.
Fonts generally come in one of two forms that impact how much space each character occupies. “Fixed” or “monospaced” fonts are uniformly spaced so that every character takes up the exact same amount of space on the page. While not quite as popular now in the era of word processing software and digital printing, fixed fonts were once the standard form of typeface due to the technical limitations of printing presses and typewriters. On a traditional typewriter, for example, characters were evenly spaced because each typebar (or striker) was a standardized size.
From an OCR standpoint, fixed fonts are easier to read because they can be neatly segmented. Each segmented character is the same size, no matter what letters, numbers, or symbols are used. In the example below, the amount of space occupied by the characters is determined by the number of characters used, not the shape of the characters themselves. This makes it easy to break the text down into a segmented grid for accurate recognition.
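To make that idea concrete, here is a minimal sketch of fixed-pitch segmentation. It is not SmartZone's internal code; it simply assumes a binarized text-line image stored as a NumPy array and a known character width in pixels, then slices the line into equal cells.

```python
import numpy as np

def segment_fixed_pitch(line_img: np.ndarray, pitch_px: int) -> list:
    """Split a binarized text-line image into equal-width character cells.

    This only works for fixed (monospaced) fonts, where every glyph
    occupies exactly pitch_px columns of the image.
    """
    _, width = line_img.shape
    cells = []
    for left in range(0, width - pitch_px + 1, pitch_px):
        cells.append(line_img[:, left:left + pitch_px])
    return cells

# A 32-pixel-tall line holding ten 16-pixel-wide characters splits into ten cells.
line = np.zeros((32, 160), dtype=np.uint8)
print(len(segment_fixed_pitch(line, pitch_px=16)))  # -> 10
```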
“Proportional” fonts, however, are not uniformly spaced. The amount of space taken up by each character is determined by the shape of the character itself. So while a w takes up the same space as an i in a fixed font, it takes up much more space in a proportional font.
The inherent characteristics of proportional fonts make them more difficult to segment cleanly. Since each character occupies a variable amount of space, each segmentation box needs to be a different width. In the example below, applying a standardized segmentation grid to the text would fail to cleanly separate individual characters, even though both lines contain exactly the same number of characters.
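Segmenting a proportional font therefore has to find the boundaries in the image itself rather than on a fixed grid. One common approach, sketched below, is to look at the vertical projection of the line (how much ink each column contains) and cut wherever the ink stops. The function name and conventions here are illustrative only and are not taken from any particular SDK.

```python
import numpy as np

def segment_by_gaps(line_img: np.ndarray) -> list:
    """Segment a binarized line image (nonzero = ink) on blank columns.

    Instead of a fixed grid, character boundaries are placed wherever a
    column contains no ink, so each box can be a different width -- which
    is exactly what proportional fonts require.
    """
    has_ink = (line_img > 0).any(axis=0)  # vertical projection: which columns hold ink
    cells, start = [], None
    for x, inked in enumerate(has_ink):
        if inked and start is None:
            start = x                              # a character begins
        elif not inked and start is not None:
            cells.append(line_img[:, start:x])     # it ends at the first blank column
            start = None
    if start is not None:
        cells.append(line_img[:, start:])          # character running to the image edge
    return cells
```

Note that this simple rule depends on there being at least one fully blank column between characters, which is exactly what the next problem takes away.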
Yet another font challenge comes from “kerning,” which reduces the space between certain characters to allow them to overlap. Frequently used in printing, kerning makes for an aesthetically pleasing font, but it can create serious headaches for OCR data capture because many characters don’t separate cleanly. In the example below, small portions of the W and the A overlap, which could create confusion for an OCR engine as it analyzes pixel data. While the overlap is very slight in this example, many fonts feature far more extreme kerning.
In order to get a clean reading of printed text for more accurate recognition results, OCR engines like the one built into Accusoft’s SmartZone SDK utilize segmentation to take an image and split it into several smaller images before applying recognition. This allows the engine to isolate characters from one another to get a clean reading without any stray pixels that could impact recognition results.
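Conceptually, the pipeline looks something like the short sketch below. It reuses the segment_by_gaps helper from the earlier example and takes a placeholder classify_glyph callable to stand in for whatever recognition step the engine applies to each isolated sub-image; neither name comes from the SmartZone API.

```python
def recognize_line(line_img, classify_glyph):
    """Segment first, then recognize each isolated character image.

    classify_glyph is a stand-in for the engine's per-character recognition
    step; segmentation's job is just to hand it clean, isolated sub-images.
    """
    return "".join(classify_glyph(cell) for cell in segment_by_gaps(line_img))
```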
Much of this process is handled automatically by the software. SmartZone, for instance, has OCR segmentation settings and properties that are managed internally based on the image at hand. In some cases, however, those controls may need to be adjusted manually to ensure the highest level of accuracy. If a specific font routinely returns failed or low-confidence recognition results, it may be necessary to use the OCR segmentation properties to adjust for font characteristics like spaces, overlaps (kerning), and blob size (which determines which pixels are classified as noise).
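As an illustration of what a blob-size threshold does conceptually (the actual property names and behavior in SmartZone will differ), the sketch below discards any connected ink region smaller than a given pixel count before recognition, using SciPy's connected-component labeling.

```python
import numpy as np
from scipy import ndimage

def remove_small_blobs(binary_img: np.ndarray, min_blob_px: int) -> np.ndarray:
    """Zero out connected ink regions smaller than min_blob_px pixels.

    This is the idea behind a blob-size threshold: specks of scanner noise
    fall below it and are discarded, while real character strokes survive
    and move on to recognition.
    """
    mask = binary_img > 0
    labels, count = ndimage.label(mask)            # connected-component labeling
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    small = [i + 1 for i, size in enumerate(sizes) if size < min_blob_px]
    return np.where(np.isin(labels, small), 0, binary_img)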
Applying ICR Segmentation
All of the challenges associated with cleanly segmenting printed text are magnified when it comes to hand printed text. Characters are rarely spaced or even shaped consistently, especially when they’re drawn without the guidance of comb lines that provide clear separation for the person completing a form.
Since ICR engines read characters as individual glyphs, they can become confused if overlapping characters are interpreted as a single glyph. In the example below, there is a slight overlap between the A and the c, while the cross elements of the f and t are merged to form the impression of a single character.
SmartZone’s ICR segmentation properties can be used to pull apart overlapping characters and split merged characters for more accurate recognition results. This is also important for maintaining a consistent character count. If the ICR engine isn’t accounting for overlapped and merged characters, it could return fewer character results than are actually present in the image.
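The splitting step can be pictured roughly as follows. This is only a conceptual sketch rather than SmartZone's actual algorithm, and the max_char_width parameter is an assumed value describing how wide a single handwritten character can reasonably be.

```python
import numpy as np

def split_merged_pair(glyph_img: np.ndarray, max_char_width: int) -> list:
    """Split a suspiciously wide glyph image at its weakest column.

    When a segmented "character" is wider than any single character should
    be (for example a handwritten A and c that touch), cutting where the
    vertical projection is thinnest usually lands on the join point, so the
    recognizer sees two glyphs and the character count stays correct.
    """
    _, width = glyph_img.shape
    if width <= max_char_width:
        return [glyph_img]                          # normal width: leave it alone
    profile = (glyph_img > 0).sum(axis=0)           # ink per column
    margin = width // 4                             # keep the cut away from the edges
    cut = margin + int(np.argmin(profile[margin:width - margin]))
    return [glyph_img[:, :cut], glyph_img[:, cut:]]
```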
Enhance Your Forms Data Capture with SmartZone
Accusoft’s SmartZone SDK supports both zonal and full page OCR/ICR for forms processing workflows to quickly and accurately capture information from document images. When incorporated into a forms workflow and integrated with identification and alignment tools like the ones found in FormSuite, users can streamline data capture and processing by extracting text and routing it to the appropriate databases or application tools. SmartZone’s OCR supports 77 distinct languages from around the world, including a variety of Asian and Cyrillic characters. For a hands-on look at how SmartZone can enhance your data capture workflow, download a free trial today.