Understanding and Applying Machine Vision, Part 7



Figure 8.19 Erosion operation (courtesy of ERIM).

Figure 8.20 Duality of dilation and erosion (courtesy of ERIM).


Figure 8.21 Mathematical morphology in processing of PC board images (courtesy of General Scanning/SVS).


The results of the first operation, closing, are depicted in Figure 8.22(b). A difference image is shown in Figure 8.22(c).

A threshold is taken on the difference image [Figure 8.22(d)]. Filtering based on shape then distinguishes the noise from the scratch; that is, the detail that can fit in a structuring element in the shape of a ring is identified in Figure 8.22(e) and subtracted [Figure 8.22(f)]. Figure 8.22(g) depicts the segmented scratch displayed on the original image.

Figure 8.22a–g Morphology operating on a scratch on a manifold (courtesy of Machine Vision International).
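The closing, difference, and threshold steps above can be sketched with SciPy's grayscale morphology routines. This is a minimal illustration, not the processing behind Figure 8.22: the synthetic image, the 3 × 3 structuring-element size, and the threshold value are invented, and the ring-shaped shape filter is omitted.

```python
import numpy as np
from scipy.ndimage import grey_closing

# Synthetic scene: bright surface (value 200) with a thin dark scratch.
img = np.full((20, 20), 200, dtype=np.int32)
img[10, 2:18] = 50  # the "scratch" (one row of dark pixels)

# Closing (dilation then erosion) fills dark defects smaller than the
# structuring element, so the scratch disappears from the closed image.
closed = grey_closing(img, size=(3, 3))

# The difference image is large only where closing changed the picture,
# i.e. at the defect, regardless of the background level.
diff = closed - img

# Thresholding the difference segments the scratch.
scratch = diff > 100
```

Because the defect is isolated by the difference step, the threshold here does not depend on absolute scene brightness, which is the practical appeal of this sequence.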

Figure 8.22 Continued.

Often pixel counting is used for tasks other than the main application problem such as threshold selection (as already described), checking part location, verifying the image, etc.

Edge Finding

Finding the location of edges in the image is basic to the majority of the image-processing algorithms in use. Edge finding can be one of two types: binary or enhanced (gray-scale) edge. Binary-edge-finding methods examine a black-and-white image and find the X-Y locations of certain edges. These edges are white-to-black or black-to-white transitions. One technique requires the user to position scan lines, or tools ("gates"), in the image (Fig. 8.23). The system then finds all edges along the tools and reports their X-Y coordinates.

Figure 8.23 Edge finding; circles are edge locations found.

These gates operate like a one-dimensional window, starting at one end and recording the coordinates of any transitions. Many times they can be programmed to respond to only one polarity of edge, only edges "n" pixels apart, or other qualifiers. Like windows, some systems allow orientation at angles, while others do not.
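A gate of this kind reduces to a one-dimensional scan for transitions, which can be sketched in a few lines of NumPy. The gate data below is invented for illustration:

```python
import numpy as np

# One "gate": a 1-D scan line of binarized video (0 = black, 1 = white).
gate = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0])

# Transitions occur where adjacent pixels differ; we report the index of
# the first pixel after each edge, separated by polarity.
d = np.diff(gate)
rising = np.where(d == 1)[0] + 1    # black-to-white edges
falling = np.where(d == -1)[0] + 1  # white-to-black edges
```

Splitting the result by sign of the difference is what lets a gate respond to only one polarity of edge, as described above.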

Because they are one-dimensional, and also because the video is binarized, any edge data collected using gates should be verified. This verification can be done by using many different gates and combining the results by averaging, throwing out erratic results, and so on. Other features, such as pixel counting, are often used to verify binary edge data.

Gray-scale edge finding is very closely tied to gray-scale edge enhancement; in fact, the two are usually combined into one operation and are not available as separate steps. Generally, the edge-enhanced (gradient) picture is analyzed to find the location of the maximum gradient, or slope.


This is identified as the "edge." The coordinates of these edge points are taken as features and used as inputs for classification. Sometimes the edge picture is thresholded to create a binary image, so the edges are "thick." A "thinning" or "skeletonizing" algorithm is then used to give single-pixel-wide edges. The first method, finding the gradient maximum, gives the true edge but usually requires knowing the direction of the edge (at least approximately). The second method, thresholding the gradient and thinning, may not find the true edge location if the edge is not uniform in slope. The error is usually less than one pixel, depending on the image capture setup; thus, thresholding the gradient image is a very common gray-scale edge-locating method.
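Locating the maximum-gradient position along a one-dimensional profile might look like the sketch below; the profile values are invented:

```python
import numpy as np

# Gray-level profile across an edge, dark to bright.
profile = np.array([10, 12, 11, 14, 60, 118, 122, 120], dtype=float)

# Approximate the slope with central differences and take the position
# of the steepest slope as the edge location.
grad = np.gradient(profile)
edge = int(np.argmax(np.abs(grad)))
```

Using the absolute value lets the same code locate both dark-to-bright and bright-to-dark edges.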

Some systems use algorithms that produce edge location coordinates directly from the gray-scale data, combining filters, gradient approximations, and neighborhood operations into one step. These are usually tool-based (or window-based) systems. Many even "learn" edges by storing characteristics such as the strength of the edge, its shape, and the intensity pattern in the neighborhood. These characteristics can be used to ensure that the desired edge is being found at run-time.


Figure 8.25 Thresholded segmentation.

A pixel gray-value histogram (Table 4.1) analysis display can provide the operator with some guidance in setting the threshold. In some systems the threshold is adaptive; the operator sets its relative level, but the setting for the specific analysis is adjusted based on the pixel gray-scale histogram of the scene itself.

Once thresholded, processing and analysis are based on a binary image. An alternative to thresholded segmentation is segmentation based on regions: areas in an image whose pixels share a common set of properties. For example, all gray-scale values 0–25 are characterized as one region; 25–30, 30–40, and so on, as others.

TABLE 8.1 Uses for Histograms

Binary threshold setting
Multiple-threshold setting
Automatic iris control
Histogram equalization (display)
Signature analysis
Exclusion of high and low pixels
Texture analysis (local intensity spectra)
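One simple form of histogram-driven adaptive thresholding splits the scene's pixels into dark and bright populations and places the threshold between their means. This is only one of many possible rules, and the scene values below are invented:

```python
import numpy as np

# A small bimodal "scene": dark part on a bright background.
scene = np.array([[ 20,  25,  22, 200],
                  [ 21,  24,  23, 210],
                  [ 19,  26, 205, 215],
                  [198, 202, 207, 212]])

# Split pixels into dark and bright populations about the overall mean,
# then set the threshold midway between the population means.
lo = scene[scene < scene.mean()].mean()
hi = scene[scene >= scene.mean()].mean()
threshold = (lo + hi) / 2.0
binary = scene > threshold
```

Because the threshold is re-derived from each scene's own histogram, it tracks gradual changes in overall illumination, which is the point of the adaptive schemes described above.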

SRI analysis is a popular set of shape-feature extraction algorithms. The algorithms operate on a binary image by identifying "blobs" in the image and generating geometric features for each blob. Blobs can be nested (as in a part with a hole; the part is a blob, and the hole is a blob also). SRI analysis has several distinct advantages: the features it generates are appropriate for many applications; most features are derived independent of part location or orientation; and it lends itself well to a "teach by show" approach (teach the system by showing it a part).


SRI, or "connectivity" analysis as it is often called, requires a binary image. However, it deals only with edge points, so the image is often "run-length" encoded prior to analysis. Starting at the top left of the screen (Figure 8.26) and moving across the line, pixels are counted until the first edge is reached. This count is stored, along with the "color" of the pixels (B or W), and the counter is reset to zero. At the end of the line, one should have a set of "lengths" that add up to the image size (shown below), where 0 = black and 1 = white.


These sets of like pixels are called runs, so this line of binary video has been encoded by the lengths of its runs; hence, "run-length encoding." Run-length encoding is a simple function to perform with a hardware circuit, or can be done fairly quickly in software.
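The encoding just described can be sketched as a short function; the sample line is invented:

```python
def run_length_encode(line):
    """Encode one line of binary video as (color, length) runs."""
    runs = []
    color, count = line[0], 0
    for pixel in line:
        if pixel == color:
            count += 1
        else:
            # Edge reached: store the finished run, reset the counter.
            runs.append((color, count))
            color, count = pixel, 1
    runs.append((color, count))  # the final run
    return runs

line = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
runs = run_length_encode(line)
```

As the text notes, the run lengths of any line always sum to the image width, which makes a handy consistency check.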

Figure 8.26 Shape features.

Each line in the image is run-length encoded, and the runs are all stored. The runs describe the objects in the scene. The image in Figure 8.26 may appear as:


Note how the left "blob" split into two vertical sections. Similarly, as one works down the image, some blobs may "combine" into one. The keys to SRI are the algorithms that keep track of blobs and sub-blobs (holes) and that generate features from the run lengths associated with each blob. From these codes many features can be derived. The area of a blob is the total of all runs in that blob.

By similar operations, the algorithms derive:

Second moments of inertia
Eccentricity ("roundness")
Minimum circumscribed rectangle
Major and minor axes
Maximum and minimum radii, etc.
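A library-based sketch of connectivity analysis is shown below, using scipy.ndimage rather than the run-length bookkeeping the text describes; the test image is invented, and only area and centroid are computed here, not the full SRI feature set:

```python
import numpy as np
from scipy import ndimage

# Binary image with two blobs.
img = np.zeros((8, 8), dtype=int)
img[1:4, 1:4] = 1   # a 3 x 3 square blob
img[6, 1:7] = 1     # a 1 x 6 line blob

# Label connected components ("blobs"), then derive per-blob features.
labels, n = ndimage.label(img)
areas = ndimage.sum(img, labels, index=range(1, n + 1))
centroids = ndimage.center_of_mass(img, labels, index=range(1, n + 1))
```

The features come out independent of where each blob sits in the frame, which is the property that makes this style of analysis suit "teach by show" systems.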


Figure 8.27 Connectivity/blob analysis (courtesy of International Robotmation Intelligence).


Figure 8.28 Geometric features used to sort chain link (courtesy of Octek).

Figures 8.27 and 8.28 are examples of the SRI approach. Knowing all these features for every blob in the scene would be enough to satisfy most applications; for this reason, SRI analysis is a powerful tool. However, it has two important drawbacks that must be understood.

1. Binary image: the algorithms operate only on a binary image. The binarization process must be designed to produce the best possible binary image under the worst possible circumstances. Additionally, extra attention needs to be paid to the image-verifying task; it is easy for SRI analysis to produce spurious data because of a poor binary image.

2. Speed: because it (typically) examines the whole image, and because so many features are calculated during analysis, the algorithms tend to take more time than some other methods. This can be addressed by windowing to restrict processing and by calculating only the necessary features. Most SRI-based systems allow unnecessary features to be disabled via a software switch, cutting processing time.

Some systems using SRI have a "teach by show" approach. In the teach mode, a part is imaged and processed. In interactive mode, the desired features are stored. At run time, these features are used to discriminate part types, to reject nonconforming parts, or to find part position. The advantage is that the features are found directly from a sample part, without additional operator interaction.


For the case of a binary image and template, the match process is fairly simple. The total number of differing pixels is evaluated; the smaller this number, the better the match. If one XORs the image and the template, white results where the pixel values are different and black where they are the same (see Fig. 8.29).
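The XOR-and-count match quality can be sketched in a few lines; the cross-shaped template is invented for illustration:

```python
import numpy as np

template = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]])  # binary cross

# The same pattern with one pixel flipped, standing in for an image window.
patch = template.copy()
patch[0, 0] = 1

# XOR gives 1 (white) where pixels differ; match % counts the agreements.
diff = np.bitwise_xor(template, patch)
match_pct = 100.0 * (diff.size - diff.sum()) / diff.size
```

With one of nine pixels differing, the match works out to about 88.9%, and an identical window would score 100%.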

Figure 8.29 Binary image and template.


Figure 8.30 Location 1. Total number of pixels = 25, number of black pixels = 21, match % = 21/25 × 100% = 84%.

Figure 8.31 Location 2. Number of black pixels = 13, match % = 13/25 × 100% = 52%.


Figure 8.32 Evaluation of percent match at each location and matrix formed.

Evaluate the match at several different locations in the image, recording each result with its location. For location #1, template pixel (0, 0) is at image pixel (0, 0) (see Fig. 8.30).

Now shift the template one pixel to the right. For location #2, template pixel (0, 0) is at image pixel (0, 1) (see Fig. 8.31). Notice that one can only shift the reference four more pixels to the right. The number of different displacements in one dimension is equal to the size of the image, minus the size of the template in that dimension, plus one. Mathematically, the number of different locations possible for an m × n image and an i × j template is (m - i + 1) × (n - j + 1).

If this process is continued exhaustively, evaluating the percent match at each location and forming a matrix of the results, Figure 8.32 is obtained.
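The exhaustive search that produces the match matrix can be sketched as follows; the 6 × 6 image and 2 × 2 template are invented, with the template embedded at a known offset:

```python
import numpy as np

def match_matrix(image, template):
    """Match % at every (row, col) offset of the template inside the image."""
    m, n = image.shape
    i, j = template.shape
    # (m - i + 1) x (n - j + 1) offsets, as derived in the text.
    out = np.zeros((m - i + 1, n - j + 1))
    for r in range(m - i + 1):
        for c in range(n - j + 1):
            window = image[r:r + i, c:c + j]
            same = np.sum(window == template)  # complement of XOR-and-count
            out[r, c] = 100.0 * same / template.size
    return out

image = np.zeros((6, 6), dtype=int)
template = np.array([[1, 0],
                     [0, 1]])
image[3:5, 2:4] = template  # embed the pattern at offset (3, 2)

scores = match_matrix(image, template)
best = np.unravel_index(np.argmax(scores), scores.shape)
```

Note that the empty (all-black) windows score 50% against this half-white template, illustrating the text's point that unrelated patterns match to about 50% under this quality measure.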


From this, the best match is at (4, 5) in the image. This is the pixel in the image that corresponds to pixel (0, 0) in the template. Therefore, it is also an offset that can be added to any pixel coordinates in the template to find their location in the image. This simple example brings out several points:

A full XOR and pixel-counting operation must be done at each offset. If a match is tried at every possible location (exhaustive search), the time requirements are considerable.

A 100% match occurs only for an exact match. The closer the match, the higher the percentage.

Two unrelated patterns (random noise) will match to about 50% using this quality calculation (XOR and count). The only way to get a 0% match would be to have a reverse image (for example, a white cross on a black field).

What useful information is obtained from the pattern matching process?

Most importantly:

1. The location of the template in the image.

2. The quality of match between the image and the template.

The location can be used for several things. In robot guidance applications, the part's X-Y coordinates are often the answer to the application problem. For most inspection systems, the material handling does not hold part position closely enough; pattern matching can be used to find the part's location, which is then used to direct other processing steps. For instance, the location found by pattern matching will often relocate a set of windows so that they fall over certain areas of the part.

The quality of match can be used to verify the image. If the quality is lower than expected, something may be wrong with the part (mislocated, damaged, etc.) or the image (bad lighting or threshold). Also, more than one pattern matching process may occur, each using a reference corresponding to a different part number. The reference that gives the best match identifies the part number of the image; this can be used to sort mixed parts.

In reality, parts can not only translate (X and Y) but also rotate (θ). Therefore, a third variable must be introduced. At each location (X and Y), the template must be rotated through 360 degrees and the match at each angle evaluated. This gives the system the ability to find parts at any orientation as well as any position. However, the computational load is quite substantial for an exhaustive search approach. Assuming that the match is evaluated at each location (every pixel) and at every angle (1-degree increments) for a 256 × 256 image and a 64 × 64 template, we must make (256 - 64 + 1) × (256 - 64 + 1) × 360 ≈ 13.4 million match operations, each one over 64 × 64 pixels!

The most common approach to this problem is to use a "rough search" first, followed by a "fine search." The rough search may examine only every tenth pixel in X and Y, and every 30 degrees in θ, reducing the count to roughly 20 × 20 × 12 ≈ 4800 match operations. Then, starting at the "most promising" point(s) found by the rough search, a fine search is conducted over a limited area. This search uses increments of 1 pixel and the smallest angle increment. The best match thus found is the final result. For most images, this procedure works very well. It also provides a guide to selecting a reference: to aid the rough search, the template should contain some large features (which should, of course, be unambiguous, so they are not confused with undesired features); for the fine search, the template should also contain some small, sharp details.
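The rough-then-fine strategy can be sketched in one dimension; the score function below is a hypothetical stand-in for "match % at offset x," with a single smooth peak at an invented offset of 137:

```python
# Hypothetical match-quality function: peaks at offset 137.
def score(x):
    return 100.0 - abs(x - 137) * 0.5

# Rough search: evaluate only every tenth offset.
coarse = range(0, 200, 10)
best_coarse = max(coarse, key=score)

# Fine search: single-pixel steps in a window around the rough winner.
fine = range(best_coarse - 10, best_coarse + 11)
best = max(fine, key=score)
```

The fine search recovers the true peak even though the rough search never sampled it, at a small fraction of the cost of an exhaustive scan. A real implementation must assume the match surface is smooth enough that the rough grid does not step over the peak entirely.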


Figure 8.33 Running template through image.

The rotated search is an interesting problem, since it calls for some interpolation to lay the rotated template over the image (Fig. 8.33). An often-used technique is binary pattern matching of edge pictures. In this case, both the template and the image have been edge enhanced and thresholded (but usually not skeletonized). This allows pattern recognition to be used on scenes that cannot be binarized simply. Normalization of the gray-scale images or auto-thresholding of the edges may be necessary to get consistent edge pictures.

In Figure 8.33 the template was given. In practice, the template is loaded during a "teaching" phase: a part is imaged by the system and a nominal (0, 0) point chosen, and the appropriate-size subimage is extracted from the image and stored in memory as the template. Usually the teach phase allows matching with the part to check the template, loading of multiple references for part-sorting applications, and viewing of the reference for further verification.

Sometimes the system allows "masking" of irrelevant portions of the template. For instance, if the part may or may not have a hole in a certain location, and one does not want its presence or absence to affect the quality of the match, the hole can be "masked" out; pattern matching will then consider only the rest of the template.
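Masking amounts to excluding "don't care" pixels from the XOR-and-count calculation, as in this sketch (template, mask, and window are invented):

```python
import numpy as np

template = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]])

# Mask: 1 marks pixels to compare, 0 marks "don't care" pixels
# (e.g. the site of an optional hole).
mask = np.ones_like(template)
mask[1, 1] = 0  # ignore the center pixel

patch = template.copy()
patch[1, 1] = 0  # the "optional hole" is absent in this window

# Masked-out pixels never count as different, and the match percentage
# is taken over the compared pixels only.
diff = np.bitwise_xor(template, patch) * mask
match_pct = 100.0 * (mask.sum() - diff.sum()) / mask.sum()
```

Here the only differing pixel is masked out, so the window still scores a perfect match.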

Gray-Scale Pattern Matching

This operation is similar to binary pattern matching, except that the part and the template contain gray-scale values. The measure of goodness of fit is then based on the differences between pixel values. A "square root of the sum of squares" quantity is appropriate, for instance:

fit = sqrt( Σ [I(x, y) - T(x, y)]² )

where I is the image window, T is the template, and the sum runs over all template pixels.
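A square-root-of-sum-of-squares fit measure of this kind can be computed directly; the pixel values below are invented:

```python
import numpy as np

template = np.array([[10.0, 20.0],
                     [30.0, 40.0]])
patch = np.array([[12.0, 18.0],
                  [30.0, 44.0]])

# Root of the summed squared pixel differences: 0 means a perfect match,
# and larger values mean a worse fit.
fit = np.sqrt(np.sum((patch - template) ** 2))
```

Unlike the binary match percentage, this is a distance, so the system accepts a match when the value falls below a tolerance rather than above one.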
