Character Recognition (OCR) Against Complex Backgrounds


This example shows how to detect regions in an image that contain text. This is a common task performed on unstructured scenes. Unstructured scenes are images that contain undetermined or random scenarios. For example, you can detect and recognize text automatically from captured video to alert a driver about a road sign. This is different than structured scenes, which contain known scenarios where the position of text is known beforehand.

Segmenting text from an unstructured scene greatly helps with additional tasks such as optical character recognition (OCR). The automated text detection algorithm in this example detects a large number of text region candidates and progressively removes those less likely to contain text.

- Step 1: Detect Candidate Text Regions Using MSER
- Step 2: Remove Non-Text Regions Based On Basic Geometric Properties
- Step 3: Remove Non-Text Regions Based On Stroke Width Variation
- Step 4: Merge Text Regions For Final Detection Result
- Step 5: Recognize Detected Text Using OCR
- References

Step 1: Detect Candidate Text Regions Using MSER

The MSER feature detector works well for finding text regions [1]. It works well for text because the consistent color and high contrast of text lead to stable intensity profiles.

Use the detectMSERFeatures function to find all the regions within the image and plot these results. Notice that there are many non-text regions detected alongside the text.

colorImage = imread('handicapSign.jpg');
I = rgb2gray(colorImage);

% Detect MSER regions.
[mserRegions] = detectMSERFeatures(I, ...
    'RegionAreaRange', [200 8000], 'ThresholdDelta', 4);

figure
imshow(I)
hold on
plot(mserRegions, 'showPixelList', true, 'showEllipses', false)
title('MSER regions')
hold off

Step 2: Remove Non-Text Regions Based On Basic Geometric Properties

Although the MSER algorithm picks out most of the text, it also detects many other stable regions in the image that are not text. You can use a rule-based approach to remove non-text regions. For example, geometric properties of text can be used to filter out non-text regions using simple thresholds. Alternatively, you can use a machine learning approach to train a text vs. non-text classifier. Typically, a combination of the two approaches produces better results [4]. This example uses a simple rule-based approach to filter non-text regions based on geometric properties.
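This example takes the rule-based route, but to make the alternative concrete, a machine-learning variant could look roughly like the sketch below. Everything in it is a placeholder: trainingStats and trainingLabels stand for regionprops output and hand-labelled text/non-text flags for some training set, and fitcsvm (from the Statistics and Machine Learning Toolbox) is only one possible choice of classifier.

% Hypothetical training data: geometric properties of labelled regions.
% trainingStats is assumed regionprops output; trainingLabels is a logical
% vector (true = text region). Neither is defined elsewhere in this example.
trainingBBoxes = vertcat(trainingStats.BoundingBox);
trainingFeatures = [trainingBBoxes(:,3)./trainingBBoxes(:,4), ...  % aspect ratio
                    [trainingStats.Eccentricity]', ...
                    [trainingStats.Solidity]', ...
                    [trainingStats.Extent]', ...
                    [trainingStats.EulerNumber]'];

% Train a binary text vs. non-text classifier.
textClassifier = fitcsvm(trainingFeatures, trainingLabels);

% New candidate regions would then be kept or discarded with
% predict(textClassifier, candidateFeatures) instead of fixed thresholds.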

There are several geometric properties that are good for discriminating between text and non-text regions [2,3], including:

- Aspect ratio
- Eccentricity
- Euler number
- Extent
- Solidity

Use regionprops to measure a few of these properties and then remove regions based on their property values.

% First, convert the x,y pixel location data within mserRegions into linear
% indices as required by regionprops.
sz = size(I);
pixelIdxList = cellfun(@(xy)sub2ind(sz, xy(:,2), xy(:,1)), ...
    mserRegions.PixelList, 'UniformOutput', false);

% Next, pack the data into a connected component struct.
mserConnComp.Connectivity = 8;
mserConnComp.ImageSize = sz;
mserConnComp.NumObjects = mserRegions.Count;
mserConnComp.PixelIdxList = pixelIdxList;

% Use regionprops to measure MSER properties
mserStats = regionprops(mserConnComp, 'BoundingBox', 'Eccentricity', ...
    'Solidity', 'Extent', 'Euler', 'Image');

% Compute the aspect ratio using bounding box data.
bbox = vertcat(mserStats.BoundingBox);
w = bbox(:,3);
h = bbox(:,4);
aspectRatio = w./h;

% Threshold the data to determine which regions to remove. These thresholds
% may need to be tuned for other images.
filterIdx = aspectRatio' > 3;
filterIdx = filterIdx | [mserStats.Eccentricity] > .995;
filterIdx = filterIdx | [mserStats.Solidity] < .3;
filterIdx = filterIdx | [mserStats.Extent] < 0.2 | [mserStats.Extent] > 0.9;
filterIdx = filterIdx | [mserStats.EulerNumber] < -4;

% Remove regions

mserStats(filterIdx) = [];
mserRegions(filterIdx) = [];

% Show remaining regions
figure
imshow(I)
hold on
plot(mserRegions, 'showPixelList', true, 'showEllipses', false)
title('After Removing Non-Text Regions Based On Geometric Properties')
hold off

Step 3: Remove Non-Text Regions Based On Stroke Width Variation

Another common metric used to discriminate between text and non-text is stroke width. Stroke width is a measure of the width of the curves and lines that make up a character. Text regions tend to have little stroke width variation, whereas non-text regions tend to have larger variations. To help understand how the stroke width can be used to remove non-text regions, estimate the stroke width of one of the detected MSER regions. You can do this by using a distance transform and binary thinning operation [3].

% Get a binary image of a region, and pad it to avoid boundary effects
% during the stroke width computation.
regionImage = mserStats(6).Image;

regionImage = padarray(regionImage, [1 1]);

% Compute the stroke width image.

distanceImage = bwdist(~regionImage);

skeletonImage = bwmorph(regionImage, 'thin', inf);

strokeWidthImage = distanceImage;

strokeWidthImage(~skeletonImage) = 0;

% Show the region image alongside the stroke width image.
figure

subplot(1,2,1)
imagesc(regionImage)
title('Region Image')

subplot(1,2,2)
imagesc(strokeWidthImage)
title('Stroke Width Image')

In the images shown above, notice how the stroke width image has very little variation over most of the region. This indicates that the region is more likely to be a text region because the lines and curves that make up the region all have similar widths, which is a common characteristic of human readable text.

In order to use stroke width variation to remove non-text regions using a threshold value, the variation over the entire region must be quantified into a single metric as follows:

% Compute the stroke width variation metric
strokeWidthValues = distanceImage(skeletonImage);
strokeWidthMetric = std(strokeWidthValues)/mean(strokeWidthValues);
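As a quick sanity check on what this ratio measures (it is the coefficient of variation of the stroke widths sampled along the skeleton), the following made-up numbers, which are not taken from the example image, show the two cases:

% Illustrative stroke width samples (hypothetical values, not from the image).
uniformWidths = [3 3 3 4 3];               % character-like: nearly constant width
blobWidths    = [2 5 9 14 3];              % blob-like: widely varying width
std(uniformWidths)/mean(uniformWidths)     % about 0.14 -- kept by the 0.4 threshold used below
std(blobWidths)/mean(blobWidths)           % about 0.75 -- removed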

Then, a threshold can be applied to remove the non-text regions. Note that this threshold value may require tuning for images with different font styles.

% Threshold the stroke width variation metric
strokeWidthThreshold = 0.4;
strokeWidthFilterIdx = strokeWidthMetric > strokeWidthThreshold;

The procedure shown above must be applied separately to each detected MSER region. The following for-loop processes all the regions, and then shows the results of removing the non-text regions using stroke width variation.

% Process the remaining regions
for j = 1:numel(mserStats)

    regionImage = mserStats(j).Image;
    regionImage = padarray(regionImage, [1 1], 0);

    distanceImage = bwdist(~regionImage);
    skeletonImage = bwmorph(regionImage, 'thin', inf);

    strokeWidthValues = distanceImage(skeletonImage);
    strokeWidthMetric = std(strokeWidthValues)/mean(strokeWidthValues);

    strokeWidthFilterIdx(j) = strokeWidthMetric > strokeWidthThreshold;

end

% Remove regions based on the stroke width variation
mserRegions(strokeWidthFilterIdx) = [];
mserStats(strokeWidthFilterIdx) = [];

% Show remaining regions
figure
imshow(I)
hold on
plot(mserRegions, 'showPixelList', true, 'showEllipses', false)
title('After Removing Non-Text Regions Based On Stroke Width Variation')
hold off

Step 4: Merge Text Regions For Final Detection Result
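The remaining MSER regions typically correspond to individual characters, while a useful detection result needs word- or line-level boxes. One common way to get there, sketched below under the assumption that the Computer Vision Toolbox functions bboxOverlapRatio and insertShape are available, is to expand each region's bounding box slightly and merge boxes that overlap; the 2% expansion factor and the rule that discards single-box groups are illustrative choices rather than values taken from this document.

% Get bounding boxes for the remaining regions and convert from the
% [x y width height] format to [xmin ymin xmax ymax].
bboxes = vertcat(mserStats.BoundingBox);
xmin = bboxes(:,1);
ymin = bboxes(:,2);
xmax = xmin + bboxes(:,3) - 1;
ymax = ymin + bboxes(:,4) - 1;

% Expand the boxes slightly so that neighboring characters overlap, then
% clip them to the image bounds. The 2% expansion is an illustrative value.
expansionAmount = 0.02;
xmin = (1 - expansionAmount) * xmin;
ymin = (1 - expansionAmount) * ymin;
xmax = (1 + expansionAmount) * xmax;
ymax = (1 + expansionAmount) * ymax;
xmin = max(xmin, 1);
ymin = max(ymin, 1);
xmax = min(xmax, size(I,2));
ymax = min(ymax, size(I,1));
expandedBBoxes = [xmin ymin xmax-xmin+1 ymax-ymin+1];

% Group overlapping boxes: treat each box as a graph node, connect boxes
% whose overlap ratio is non-zero, and take the connected components.
overlapRatio = bboxOverlapRatio(expandedBBoxes, expandedBBoxes);
n = size(overlapRatio, 1);
overlapRatio(1:n+1:n^2) = 0;          % ignore each box's overlap with itself
g = graph(overlapRatio);
componentIndices = conncomp(g);

% Merge each group into a single bounding box and drop groups that contain
% only one box, since isolated regions are often false detections.
xmin = accumarray(componentIndices', xmin, [], @min);
ymin = accumarray(componentIndices', ymin, [], @min);
xmax = accumarray(componentIndices', xmax, [], @max);
ymax = accumarray(componentIndices', ymax, [], @max);
textBBoxes = [xmin ymin xmax-xmin+1 ymax-ymin+1];
numRegionsInGroup = accumarray(componentIndices', 1);
textBBoxes(numRegionsInGroup == 1, :) = [];

% Show the merged detection result.
figure
imshow(insertShape(colorImage, 'Rectangle', textBBoxes, 'LineWidth', 3))
title('Detected Text')

Step 5 of the outline, recognizing the detected text, can then be carried out by passing the merged boxes to the ocr function, for example ocrtxt = ocr(I, textBBoxes); the recognized strings are available in [ocrtxt.Text].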
