
Digital Image Processing document, Part 17 (PDF)


DOCUMENT INFORMATION

Basic information

Title: Image Segmentation
Author: William K. Pratt
Publisher: John Wiley & Sons, Inc.
Subject: Digital Image Processing
Type: Book
Year of publication: 2001
City: Hoboken
Pages: 37
File size: 806.44 KB


Contents



The definition of segmentation adopted in this chapter is deliberately restrictive; no contextual information is utilized in the segmentation. Furthermore, segmentation does not involve classifying each segment. The segmenter only subdivides an image; it does not attempt to recognize the individual segments or their relationships to one another.

There is no theory of image segmentation. As a consequence, no single standard method of image segmentation has emerged. Rather, there is a collection of ad hoc methods that have received some degree of popularity. Because the methods are ad hoc, it would be useful to have some means of assessing their performance. Haralick and Shapiro (1) have established the following qualitative guideline for a good image segmentation: “Regions of an image segmentation should be uniform and homogeneous with respect to some characteristic such as gray tone or texture. Region interiors should be simple and without many small holes. Adjacent regions of a segmentation should have significantly different values with respect to the characteristic on which they are uniform. Boundaries of each segment should be simple, not ragged, and must be spatially accurate.” Unfortunately, no quantitative image segmentation performance metric has been developed.

Several generic methods of image segmentation are described in the following sections. Because of their complexity, it is not feasible to describe all the details of the various algorithms. Surveys of image segmentation methods are given in References 1 to 6.

Digital Image Processing: PIKS Inside, Third Edition, by William K. Pratt
Copyright © 2001 John Wiley & Sons, Inc.
ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)


17.1 AMPLITUDE SEGMENTATION METHODS

This section considers several image segmentation methods based on the thresholding of luminance or color components of an image. An amplitude projection segmentation technique is also discussed.

17.1.1 Bilevel Luminance Thresholding

Many images can be characterized as containing some object of interest of reasonably uniform brightness placed against a background of differing brightness. Typical examples include handwritten and typewritten text, microscope biomedical samples, and airplanes on a runway. For such images, luminance is a distinguishing feature that can be utilized to segment the object from its background. If an object of interest is white against a black background, or vice versa, it is a trivial task to set a midgray threshold to segment the object from the background. Practical problems occur, however, when the observed image is subject to noise and when both the object and background assume some broad range of gray scales. Another frequent difficulty is that the background may be nonuniform.

Figure 17.1-1a shows a digitized typewritten text consisting of dark letters against a lighter background. A gray scale histogram of the text is presented in Figure 17.1-1b. The expected bimodality of the histogram is masked by the relatively large percentage of background pixels. Figure 17.1-1c to e are threshold displays in which all pixels brighter than the threshold are mapped to unity display luminance and all the remaining pixels below the threshold are mapped to the zero level of display luminance. The photographs illustrate a common problem associated with image thresholding. If the threshold is set too low, portions of the letters are deleted (the stem of the letter “p” is fragmented). Conversely, if the threshold is set too high, object artifacts result (the loop of the letter “e” is filled in).
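As a minimal sketch of the bilevel thresholding described above (not code from the text), the fragment below maps pixels brighter than a normalized threshold T to unity and all others to zero; the threshold value and the normalized input range are assumptions.

```python
import numpy as np

def bilevel_threshold(image, T=0.5):
    """Map pixels brighter than threshold T to 1, all others to 0.

    `image` is assumed to be a 2-D array with values normalized to [0, 1].
    """
    return (image > T).astype(np.uint8)

# Example: threshold a synthetic gradient image at midgray.
img = np.linspace(0.0, 1.0, 256).reshape(16, 16)
mask = bilevel_threshold(img, T=0.5)
```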

Several analytic approaches to the setting of a luminance threshold have been proposed (7,8). One method is to set the gray scale threshold at a level such that the cumulative gray scale count matches an a priori assumption of the gray scale probability distribution (9). For example, it may be known that black characters cover 25% of the area of a typewritten page. Thus, the threshold level on the image might be set such that the quartile of pixels with the lowest luminance are judged to be black. Another approach to luminance threshold selection is to set the threshold at the minimum point of the histogram between its bimodal peaks (10). Determination of the minimum is often difficult because of the jaggedness of the histogram. A solution to this problem is to fit the histogram values between the peaks with some analytic function and then obtain its minimum by differentiation. For example, let y and x represent the histogram ordinate and abscissa, respectively. Then the quadratic curve

    y = ax^2 + bx + c    (17.1-1)


FIGURE 17.1-1 Luminance thresholding segmentation of typewritten text.

(c) High threshold, T = 0.67; (d) Medium threshold, T = 0.50; (e) Low threshold, T = 0.10; (f) Histogram, Laplacian mask


where a, b, and c are constants provides a simple histogram approximation in the vicinity of the histogram valley. The minimum histogram valley occurs for

    x = -b / (2a)

Papamarkos and Gatos (11) have extended this concept for threshold selection.
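The valley-fitting rule above can be illustrated with a short sketch: a quadratic is fitted by least squares to the histogram counts between two assumed peak locations, and the threshold is taken at the fitted minimum x = -b/(2a). The use of np.polyfit and the synthetic example are illustrative assumptions, not the procedure of references (9) to (11).

```python
import numpy as np

def valley_threshold(hist, peak_lo, peak_hi):
    """Fit y = a*x^2 + b*x + c to the histogram values between two peaks
    and return the abscissa of the fitted minimum, x = -b / (2a).

    `hist` is a 1-D array of bin counts; `peak_lo` and `peak_hi` are the
    (assumed known) bin indices of the two histogram modes.
    """
    x = np.arange(peak_lo, peak_hi + 1)
    y = hist[peak_lo:peak_hi + 1].astype(float)
    a, b, c = np.polyfit(x, y, 2)          # least-squares quadratic fit
    return -b / (2.0 * a)                  # valley location (threshold)

# Example with a synthetic bimodal histogram.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(samples, bins=256, range=(0, 256))
T = valley_threshold(hist, peak_lo=60, peak_hi=180)
```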

Weska et al. (12) have suggested the use of a Laplacian operator to aid in luminance threshold selection. As defined in Eq. 15.3-1, the Laplacian forms the spatial second partial derivative of an image. Consider an image region in the vicinity of an object in which the luminance increases from a low plateau level to a higher plateau level in a smooth ramplike fashion. In the flat regions and along the ramp, the Laplacian is zero. Large positive values of the Laplacian will occur in the transition region from the low plateau to the ramp; large negative values will be produced in the transition from the ramp to the high plateau. A gray scale histogram formed of only those pixels of the original image that lie at coordinates corresponding to very high or low values of the Laplacian tends to be bimodal with a distinctive valley between the peaks. Figure 17.1-1f shows the histogram of the text image of Figure 17.1-1a after the Laplacian mask operation.
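A sketch of this Laplacian-assisted histogram follows. A standard 4-neighbor Laplacian kernel and a keep-fraction of 10% are assumptions for illustration; the chapter's own operator (Eq. 15.3-1) is not reproduced here, and the input is assumed to be an 8-bit image.

```python
import numpy as np

def laplacian_selected_histogram(image, keep_fraction=0.1, bins=256):
    """Histogram only those pixels whose Laplacian magnitude is large.

    A 4-neighbor Laplacian is assumed for illustration, applied with periodic
    boundaries via np.roll (adequate for a sketch).  The input is assumed to
    be an 8-bit gray scale image.
    """
    f = image.astype(float)
    lap = (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0) +
           np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f)
    mag = np.abs(lap)
    cutoff = np.quantile(mag, 1.0 - keep_fraction)   # keep the top fraction
    selected = f[mag >= cutoff]
    hist, edges = np.histogram(selected, bins=bins, range=(0, 256))
    return hist, edges
```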

If the background of an image is nonuniform, it often is necessary to adapt the luminance threshold to the mean luminance level (13,14). This can be accomplished by subdividing the image into small blocks and determining the best threshold level for each block by the methods discussed previously. Threshold levels for each pixel may then be determined by interpolation between the block centers. Yankowitz and Bruckstein (15) have proposed an adaptive thresholding method in which a threshold surface is obtained by interpolating an image only at points where its gradient is large.
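The block-adaptive idea can be sketched as follows. The block size and the per-block rule (a block mean, standing in for the valley-finding methods above) are assumptions, and for brevity the threshold surface is piecewise constant rather than interpolated between block centers as the text describes.

```python
import numpy as np

def adaptive_block_threshold(image, block=32):
    """Adaptive thresholding sketch: estimate one threshold per block, expand
    the block thresholds to a full-resolution surface, then compare.

    The per-block rule (block mean) is only a stand-in for the valley-finding
    methods discussed above, and the surface is piecewise constant instead of
    interpolated; both simplifications are assumptions of this sketch.
    """
    h, w = image.shape
    ny, nx = -(-h // block), -(-w // block)        # ceiling division
    t_blocks = np.empty((ny, nx))
    for i in range(ny):
        for j in range(nx):
            patch = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            t_blocks[i, j] = patch.mean()
    # Expand each block threshold over its block (piecewise-constant surface).
    t_surface = np.kron(t_blocks, np.ones((block, block)))[:h, :w]
    return (image > t_surface).astype(np.uint8)
```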

17.1.2 Multilevel Luminance Thresholding

Effective segmentation can be achieved in some classes of images by a recursive multilevel thresholding method suggested by Tomita et al. (16). In the first stage of the process, the image is thresholded to separate brighter regions from darker regions by locating a minimum between luminance modes of the histogram. Then histograms are formed of each of the segmented parts. If these histograms are not unimodal, the parts are thresholded again. The process continues until the histogram of a part becomes unimodal. Figures 17.1-2 to 17.1-4 provide an example of this form of amplitude segmentation in which the peppers image is segmented into four gray scale segments.
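The following sketch is loosely patterned on this recursive procedure: find a valley between the two largest histogram modes of a region, split at the valley, and recurse until each part's histogram looks unimodal. The smoothing width, the peak-significance rule, and the 8-bit gray scale range are all assumptions of the sketch, not details from Tomita et al. (16).

```python
import numpy as np

def histogram_valley(values, bins=256, vrange=(0, 256), smooth_width=9):
    """Gray level at the deepest valley between the two largest smoothed
    histogram modes, or None if the histogram looks unimodal."""
    hist, edges = np.histogram(values, bins=bins, range=vrange)
    smooth = np.convolve(hist, np.ones(smooth_width) / smooth_width, mode="same")
    peaks = [i for i in range(1, bins - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
             and smooth[i] > 0.05 * smooth.max()]       # ignore tiny bumps
    if len(peaks) < 2:
        return None
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i])[-2:])
    valley = p1 + int(np.argmin(smooth[p1:p2 + 1]))
    return edges[valley]

def recursive_threshold(image, mask=None, labels=None, counter=None):
    """Recursively split regions at histogram valleys until each region's
    histogram is unimodal; returns an integer label image (a sketch only)."""
    if mask is None:
        mask = np.ones(image.shape, bool)
        labels = np.zeros(image.shape, np.int32)
        counter = [0]
    t = histogram_valley(image[mask])
    if t is not None:
        low, high = mask & (image <= t), mask & (image > t)
        if low.any() and high.any():                    # genuine split
            recursive_threshold(image, low, labels, counter)
            recursive_threshold(image, high, labels, counter)
            return labels
    labels[mask] = counter[0]                           # unimodal region
    counter[0] += 1
    return labels
```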

17.1.3 Multilevel Color Component Thresholding

The multilevel luminance thresholding concept can be extended to the segmentation of color and multispectral images. Ohlander et al. (17,18) have developed a segmentation scheme for natural color images based on multidimensional thresholding of color images represented by their RGB color components, their luma/chroma YIQ components, and by a set of nonstandard color components, loosely called intensity, hue, and saturation.


FIGURE 17.1-2 Multilevel luminance thresholding image segmentation of the peppers_mon image; first-level segmentation. (a) Original; (b) Original histogram; (c) Segment 0; (d) Segment 0 histogram; (e) Segment 1; (f) Segment 1 histogram.


Figure 17.1-5 provides an example of the property histograms of these nine color components for a scene. The histograms have been measured over those parts of the original scene that are relatively devoid of texture: the nonbusy parts of the scene. This important step of the segmentation process is necessary to avoid false segmentation of homogeneous textured regions into many isolated parts. If the property histograms are not all unimodal, an ad hoc procedure is invoked to determine the best property and the best level for thresholding of that property. The first candidate is image intensity. Other candidates are selected on a priority basis, depending on contrast level and location of the histogram modes. After a threshold level has been determined, the image is subdivided into its segmented parts. The procedure is then repeated on each part until the resulting property histograms become unimodal or the segmentation reaches a reasonable stage of separation under manual surveillance.
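A small sketch of the property set behind this scheme is given below: it stacks R, G, B, an NTSC-style Y, I, Q triple, and rough intensity, hue, and saturation measures. The coefficients and the hue/saturation formulas are common textbook stand-ins, not necessarily the exact components used by Ohlander et al.; each plane's histogram (taken over non-busy pixels) would then be tested for bimodality and thresholded recursively, as in the earlier sketch.

```python
import numpy as np

def color_property_planes(rgb):
    """Per-pixel property planes for Ohlander-style color segmentation:
    R, G, B; an NTSC-style Y, I, Q triple; and rough intensity, hue, and
    saturation measures.  All formulas here are illustrative stand-ins."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i_ = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + 1e-6)
    hue = np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)
    return {"R": r, "G": g, "B": b, "Y": y, "I": i_, "Q": q,
            "intensity": intensity, "hue": hue, "saturation": saturation}
```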

FIGURE 17.1-3 Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 0 branch. (a) Segment 00; (b) Segment 00 histogram; (c) Segment 01; (d) Segment 01 histogram.


Ohlander's segmentation technique using multidimensional thresholding aided by texture discrimination has proved quite effective in simulation tests. However, a large part of the segmentation control has been performed by a human operator; human judgment, predicated on trial threshold setting results, is required for guidance.

In Ohlander's segmentation method, the nine property values are obviously interdependent. The YIQ and intensity components are linear combinations of RGB; the hue and saturation measurements are nonlinear functions of RGB. This observation raises several questions. What types of linear and nonlinear transformations of RGB are best for segmentation? Ohta et al. (19) suggest an approximation to the spectral Karhunen–Loeve transform. How many property values should be used? What is the best form of property thresholding? Perhaps answers to these last two questions may be forthcoming from a study of clustering techniques in pattern recognition (20).

FIGURE 17.1-4 Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 1 branch. (a) Segment 10; (b) Segment 10 histogram; (c) Segment 11; (d) Segment 11 histogram.


Property value histograms are really the marginal histograms of a joint histogram of property values. Clustering methods can be utilized to specify multidimensional decision boundaries for segmentation. This approach permits utilization of all the property values for segmentation and inherently recognizes their respective cross correlation. The following section discusses clustering methods of image segmentation.

FIGURE 17.1-5 Typical property histograms for color image segmentation.


17.1.4 Amplitude Projection

Image segments can sometimes be effectively isolated by forming the average amplitude projections of an image along its rows and columns (21,22). The horizontal (row) and vertical (column) projections, defined in Eqs. 17.1-2 and 17.1-3, are the average image amplitude along each row and along each column, respectively. Figure 17.1-6 illustrates an application of gray scale projection segmentation of an image. The rectangularly shaped segment can be further delimited by taking projections over oblique angles.
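A minimal sketch of the row and column average projections follows; the bound-finding rule and its margin are assumptions for illustration, not the procedure of references (21,22).

```python
import numpy as np

def amplitude_projections(image):
    """Average-amplitude projections of a 2-D image: one value per row
    (horizontal projection) and one per column (vertical projection)."""
    f = image.astype(float)
    row_proj = f.mean(axis=1)    # average along each row
    col_proj = f.mean(axis=0)    # average along each column
    return row_proj, col_proj

def projection_bounds(proj, background_level, margin=5.0):
    """Crude segment delimitation: first and last indices where the
    projection departs from the background level by more than `margin`
    (an illustrative rule, not the text's)."""
    idx = np.nonzero(np.abs(proj - background_level) > margin)[0]
    return (int(idx[0]), int(idx[-1])) if idx.size else None
```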

FIGURE 17.1-6 Gray scale projection image segmentation of a toy tank image.

(c) Segmentation; (d) Column projection.


17.2 CLUSTERING SEGMENTATION METHODS

One of the earliest examples of image segmentation, by Haralick and Kelly (23) using data clustering, was the subdivision of multispectral aerial images of agricultural land into regions containing the same type of land cover. The clustering segmentation concept is simple; however, it is usually computationally intensive. Consider a vector of measurements at each pixel coordinate (j, k) in an image. The measurements could be point multispectral values, point color components, and derived color components, as in the Ohlander approach described previously, or they could be neighborhood feature measurements such as the moving window mean, standard deviation, and mode, as discussed in Section 16.2. If the measurement set is to be effective for image segmentation, data collected at various pixels within a segment of common attribute should be similar. That is, the data should be tightly clustered in an N-dimensional measurement space. If this condition holds, the segmenter design task becomes one of subdividing the N-dimensional measurement space into mutually exclusive compartments, each of which envelops typical data clusters for each image segment. Figure 17.2-1 illustrates the concept for two features. In the segmentation process, if a measurement vector for a pixel falls within a measurement space compartment, the pixel is assigned the segment name or label of that compartment.
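The measurement-space idea can be sketched with a plain k-means loop that assigns each pixel's feature vector to its nearest cluster center. This is not the Coleman–Andrews algorithm described below (which also determines the number of clusters); the feature choice and the value of K are assumptions.

```python
import numpy as np

def kmeans_segment(features, k=4, iters=20, seed=0):
    """Cluster per-pixel feature vectors with plain k-means and return a
    label image.  `features` has shape (rows, cols, n_features)."""
    h, w, d = features.shape
    x = features.reshape(-1, d).astype(float)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest cluster center.
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster goes empty.
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels.reshape(h, w)

# Example features: gray level plus a crude local mean of a random image.
img = np.random.default_rng(1).random((64, 64))
local_mean = np.stack([np.roll(img, s, axis) for axis in (0, 1)
                       for s in (-1, 1)]).mean(axis=0)
segmentation = kmeans_segment(np.stack([img, local_mean], axis=-1), k=3)
```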

Coleman and Andrews (24) have developed a robust and relatively efficient image segmentation clustering algorithm. Figure 17.2-2 is a flowchart that describes a simplified version of the algorithm for segmentation of monochrome images. The first stage of the algorithm involves feature computation. In one set of experiments, Coleman and Andrews used 12 mode measurements in square windows of size 1, 3, 7, and 15 pixels. The next step in the algorithm is the clustering stage, in which the optimum number of clusters is determined along with the feature space center of each cluster. In the segmenter, a given feature vector is assigned to its closest cluster center.

FIGURE 17.2-1 Data clustering for two feature measurements.


The cluster computation algorithm begins by establishing two initial trial cluster centers. All feature vectors of an image are assigned to their closest cluster center. Next, the number of cluster centers is successively increased by one, and a clustering quality factor is computed at each iteration until its maximum value is determined. This establishes the optimum number of clusters. When the number of clusters is incremented by one, the new cluster center becomes the feature vector that is farthest from its closest cluster center. The factor is defined as

(17.2-1)

where the two matrices appearing in Eq. 17.2-1 are the within-cluster and between-cluster scatter matrices, respectively, and tr{·} denotes the trace of a matrix. The within-cluster scatter matrix is computed as

(17.2-2)

where K is the number of clusters, M_k is the number of vector elements in the kth cluster, x_i is a vector element in the kth cluster, the cluster mean is the mean of the elements of the kth cluster, and S_k is the set of elements in the kth cluster. The between-cluster scatter matrix is


where M denotes the number of pixels to be clustered. Coleman and Andrews (24) have obtained subjectively good results for their clustering algorithm in the segmentation of monochrome and color images.
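A hedged sketch of the scatter-matrix bookkeeping behind the quality factor is given below. The normalizations and the way the two traces are combined are assumptions of the sketch, since the bodies of Eqs. 17.2-1 and 17.2-2 are not reproduced above.

```python
import numpy as np

def scatter_traces(x, labels):
    """Within- and between-cluster scatter matrices for feature vectors `x`
    (shape: n_samples x n_features) under an integer cluster labeling, plus
    a quality score built from their traces.

    The normalizations and the product used for the score are conventional
    pattern-recognition choices, stated as assumptions rather than Eq. 17.2-1.
    """
    overall_mean = x.mean(axis=0)
    k = int(labels.max()) + 1
    d = x.shape[1]
    s_within = np.zeros((d, d))
    s_between = np.zeros((d, d))
    for j in range(k):
        members = x[labels == j]
        if len(members) == 0:
            continue
        mean_j = members.mean(axis=0)
        diff = members - mean_j
        s_within += diff.T @ diff / len(members)
        dev = (mean_j - overall_mean)[:, None]
        s_between += len(members) * (dev @ dev.T)
    s_within /= k
    s_between /= len(x)
    quality = np.trace(s_within) * np.trace(s_between)  # one plausible combination
    return s_within, s_between, quality
```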

17.3 REGION SEGMENTATION METHODS

The amplitude and clustering methods described in the preceding sections are based on point properties of an image. The logical extension, as first suggested by Muerle and Allen (25), is to utilize spatial properties of an image for segmentation.

17.3.1 Region Growing

Region growing is one of the conceptually simplest approaches to image segmentation; neighboring pixels of similar amplitude are grouped together to form a segmented region. However, in practice, constraints, some of which are reasonably complex, must be placed on the growth pattern to achieve acceptable results.

Brice and Fenema (26) have developed a region-growing method based on a set of simple growth rules. In the first stage of the process, pairs of quantized pixels are combined together in groups called atomic regions if they are of the same amplitude and are four-connected.

Two heuristic rules are next invoked to dissolve weak boundaries between atomic regions. Referring to Figure 17.3-1, let R1 and R2 be two adjacent regions with perimeters P1 and P2, respectively, which have previously been merged. After the initial stages of region growing, a region may contain previously merged subregions of different amplitude values. Also, let C denote the length of the common boundary and let D represent the length of that portion of C for which the amplitude difference Y across the boundary is smaller than a significance factor. The regions R1 and R2 are then merged if

    D / min{P1, P2} > ε2

where ε2 is a constant typically set at 1/2. This heuristic prevents merger of adjacent regions of the same approximate size, but permits smaller regions to be absorbed into larger regions. The second rule merges weak common boundaries remaining after application of the first rule. Adjacent regions are merged if

    D / C > ε3

where ε3 is a constant typically set at 3/4.
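The two merge tests can be written compactly as below; the function interface and the way the region statistics are assumed to be precomputed are illustrative assumptions, while the default thresholds simply restate the values above.

```python
def should_merge(p1, p2, c, d, eps2=0.5, eps3=0.75):
    """Brice-Fenema-style merge tests for two adjacent regions.

    p1, p2: region perimeters; c: common boundary length; d: length of the
    weak part of the common boundary (amplitude difference below the
    significance factor).  Region bookkeeping is assumed done elsewhere.
    """
    rule1 = d / min(p1, p2) > eps2   # absorb small regions into large ones
    rule2 = d / c > eps3             # dissolve weak common boundaries
    return rule1 or rule2
```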

17.3.2 Split and Merge

Split and merge image segmentation techniques (29) are based on a quad tree data representation whereby a square image segment is broken (split) into four quadrants if the original image segment is nonuniform in attribute. If four neighboring squares are found to be uniform, they are replaced (merged) by a single square composed of the four adjacent squares.

In principle, the split and merge process could start at the full image level and initiate split operations. This approach tends to be computationally intensive. Conversely, beginning at the individual pixel level and making initial merges has the drawback that region uniformity measures are limited at the single pixel level. Initializing the split and merge process at an intermediate level enables the use of more powerful uniformity tests without excessive computation.

The simplest uniformity measure is to compute the difference between the largest and smallest pixels of a segment. Fukada (30) has proposed the segment variance as a uniformity measure. Chen and Pavlidis (31) suggest more complex statistical measures of uniformity. The basic split and merge process tends to produce rather blocky segments because of the rule that square blocks are either split or merged. Horowitz and Pavlidis (32) have proposed a modification of the basic process whereby adjacent pairs of regions are merged if they are sufficiently uniform.
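Only the split half of split-and-merge is sketched below, using the simplest uniformity measure mentioned above (the gray-level range of a block); the parameter values and the square, power-of-two image shape are assumptions, and the merge pass is omitted for brevity.

```python
import numpy as np

def quadtree_split(image, y=0, x=0, size=None, max_range=16, min_size=4, leaves=None):
    """Recursive quad-tree splitting: a square block is split into four
    quadrants whenever its gray-level range exceeds `max_range`.

    Returns a list of (y, x, size) leaf blocks.  Assumes a square image
    whose side is a power of two; thresholds are illustrative.
    """
    if leaves is None:
        size = image.shape[0]
        leaves = []
    block = image[y:y + size, x:x + size]
    uniform = (int(block.max()) - int(block.min())) <= max_range
    if uniform or size <= min_size:
        leaves.append((y, x, size))
        return leaves
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_split(image, y + dy, x + dx, half, max_range, min_size, leaves)
    return leaves

# Example on a random 64 x 64 image with values 0..255.
img = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
blocks = quadtree_split(img)
```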

17.3.3 Watershed

Topographic and hydrology concepts have proved useful in the development of region segmentation methods (33–36). In this context, a monochrome image is considered to be an altitude surface in which high-amplitude pixels correspond to ridge points, and low-amplitude pixels correspond to valley points. If a drop of water were to fall on any point of the altitude surface, it would move to a lower altitude until it reached a local altitude minimum. The accumulation of water in the vicinity of a local minimum is called a catchment basin. All points that drain into a common catchment basin are part of the same watershed. A valley is a region that is surrounded by a ridge. A ridge is the locus of maximum gradient of the altitude surface.

There are two basic algorithmic approaches to the computation of the watershed of an image: rainfall and flooding.

In the rainfall approach, local minima are found throughout the image. Each local minimum is given a unique tag. Adjacent local minima are combined with a unique tag. Next, a conceptual water drop is placed at each untagged pixel. The drop moves to its lower-amplitude neighbor until it reaches a tagged pixel, at which time it assumes the tag value. Figure 17.3-2 illustrates a section of a digital image encompassing a watershed in which the local minimum pixel is black and the dashed line indicates the path of a water drop to the local minimum.
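A simplified sketch of the rainfall idea is given below: from each pixel, the steepest-descent path through 4-neighbors is followed down to a local minimum, and every pixel on the path receives that minimum's tag. Plateau handling and the merging of adjacent minima are treated only crudely; this is an illustration, not a production watershed algorithm.

```python
import numpy as np

def rainfall_watershed(image):
    """Label each pixel by the local minimum reached by steepest descent
    through its 4-neighbors (crude rainfall watershed sketch)."""
    h, w = image.shape
    f = image.astype(float)
    labels = -np.ones((h, w), dtype=np.int64)
    next_label = 0

    def lowest_neighbor(y, x):
        best = (f[y, x], y, x)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and f[ny, nx] < best[0]:
                best = (f[ny, nx], ny, nx)
        return best[1], best[2]

    for y in range(h):
        for x in range(w):
            path = [(y, x)]
            cy, cx = y, x
            while labels[cy, cx] < 0:
                ny, nx = lowest_neighbor(cy, cx)
                if (ny, nx) == (cy, cx):        # reached a local minimum
                    labels[cy, cx] = next_label
                    next_label += 1
                    break
                cy, cx = ny, nx
                path.append((cy, cx))
            tag = labels[cy, cx]
            for py, px in path:                 # tag the whole descent path
                labels[py, px] = tag
    return labels
```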

In the flooding approach, conceptual single pixel holes are pierced at each local minimum, and the amplitude surface is lowered into a large body of water. The water enters the holes and proceeds to fill each catchment basin. If a basin is about to overflow, a conceptual dam is built on its surrounding ridge line to a height equal to the highest-altitude ridge point. Figure 17.3-3 shows a profile of the filling process of a catchment basin (37). Figure 17.3-4 is an example of watershed segmentation provided by Moga and Gabbouj (38).

Figure 17.3-2 Rainfall watershed.


Figure 17.3-3 Profile of catchment basin filling.

FIGURE 17.3-4 Watershed image segmentation of the peppers_mon image. Courtesy of Alina N. Moga and M. Gabbouj, Tampere University of Technology, Finland.


Simple watershed algorithms tend to produce results that are oversegmented (39). Najman and Schmitt (37) have applied morphological methods in their watershed algorithm to reduce oversegmentation. Wright and Acton (40) have performed watershed segmentation on a pyramid of different spatial resolutions to avoid oversegmentation.

17.4 BOUNDARY DETECTION

It is possible to segment an image into regions of common attribute by detecting the boundary of each region for which there is a significant change in attribute across the boundary. Boundary detection can be accomplished by means of edge detection as described in Chapter 15. Figure 17.4-1 illustrates the segmentation of a projectile from its background. In this example, an 11 × 11 derivative of Gaussian edge detector is used to locate the boundary of the projectile.

FIGURE 17.4-1 Boundary detection image segmentation of the projectile image. (a) Original.


Region boundaries obtained by edge detection may often be broken. Edge linking techniques can be employed to bridge short gaps in such a region boundary.

17.4.1 Curve-Fitting Edge Linking

In some instances, edge map points of a broken segment boundary can be linked together to form a closed contour by curve-fitting methods. If a priori information is available as to the expected shape of a region in an image (e.g., a rectangle or a circle), the fit may be made directly to that closed contour. For more complex-shaped regions, as illustrated in Figure 17.4-2, it is usually necessary to break up the supposed closed contour into chains with broken links. One such chain, shown in Figure 17.4-2 starting at point A and ending at point B, contains a single broken link. Classical curve-fitting methods (29) such as Bezier polynomial or spline fitting can be used to fit the broken chain.

In their book, Duda and Hart (41) credit Forsen as being the developer of a simple piecewise linear curve-fitting procedure called the iterative endpoint fit. In the first stage of the algorithm, illustrated in Figure 17.4-3, data endpoints A and B are connected by a straight line. The point of greatest departure from the straight line (point C) is examined. If the separation of this point is too large, the point becomes an anchor point for two straight-line segments (A to C and C to B). The procedure then continues until the data points are well fitted by line segments. The principal advantage of the algorithm is its simplicity; its disadvantage is error caused by incorrect data points. Ramer (42) has used a technique similar to the iterated endpoint procedure to determine a polynomial approximation to an arbitrary-shaped closed curve. Pavlidis and Horowitz (43) have developed related algorithms for polygonal curve fitting. The curve-fitting approach is reasonably effective for simply structured objects. Difficulties occur when an image contains many overlapping objects and its corresponding edge map contains branch structures.
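A compact sketch of the iterative endpoint fit follows; the tolerance value and the ordered-chain input format are assumptions of the sketch.

```python
import numpy as np

def iterative_endpoint_fit(points, tol=2.0):
    """Piecewise-linear fit of an ordered point chain by the iterative
    endpoint method: connect the two endpoints, find the point farthest from
    that chord, and split there if its distance exceeds `tol`.

    Returns the indices of the retained anchor points.
    """
    pts = np.asarray(points, dtype=float)

    def point_line_distance(p, a, b):
        ab, ap = b - a, p - a
        if np.allclose(ab, 0):
            return float(np.linalg.norm(ap))
        # Perpendicular distance: |2-D cross product| / chord length.
        return abs(ab[0] * ap[1] - ab[1] * ap[0]) / float(np.linalg.norm(ab))

    def fit(lo, hi):
        if hi - lo < 2:
            return [lo, hi]
        d = np.array([point_line_distance(pts[i], pts[lo], pts[hi])
                      for i in range(lo + 1, hi)])
        if d.max() <= tol:
            return [lo, hi]
        worst = lo + 1 + int(d.argmax())         # new anchor point
        return fit(lo, worst)[:-1] + fit(worst, hi)

    return fit(0, len(pts) - 1)

# Example: a noisy "L"-shaped chain of 2-D points.
chain = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (3.1, 1), (3, 2), (2.9, 3)]
anchors = iterative_endpoint_fit(chain, tol=0.3)
```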

FIGURE 17.4-2 Region boundary with missing links indicated by dashed lines.


17.4.2 Heuristic Edge-Linking Methods

The edge segmentation technique developed by Roberts (44) is typical of the philosophy of many heuristic edge-linking methods. In Roberts' method, edge gradients are examined in 4 × 4 pixel blocks. The pixel whose gradient magnitude is largest is declared a tentative edge point if its magnitude is greater than a threshold value. Then north-, east-, south-, and west-oriented lines of length 5 are fitted to the gradient data about the tentative edge point. If the ratio of the best fit to the worst fit, measured in terms of the fit correlation, is greater than a second threshold, the tentative edge point is declared valid, and it is assigned the direction of the best fit. Next, straight lines are fitted between pairs of edge points if they are in adjacent 4 × 4 blocks and if the line direction is within ±23 degrees of the edge direction of either edge point. Those points failing to meet the linking criteria are discarded. A typical boundary at this stage, shown in Figure 17.4-4a, will contain gaps and multiply connected edge points. Small triangles are eliminated by deleting the longest side; small

FIGURE 17.4-3 Iterative endpoint curve fitting.

