


Detection of Regions of Interest

Although a physician or a radiologist, of necessity, will carefully examine an image on hand in its entirety, more often than not, diagnostic features of interest manifest themselves in local regions. It is uncommon that a condition or disease will alter an image over its entire spatial extent. In a screening situation, the radiologist scans the entire image and searches for features that could be associated with disease. In a diagnostic situation, the medical expert concentrates on the region of suspected abnormality, and examines its characteristics to decide if the region exhibits signs related to a particular disease.

In the CAD environment, one of the roles of image processing would be to detect the region of interest (ROI) for a given, specific, screening or diagnostic application. Once the ROIs have been detected, the subsequent tasks would relate to the characterization of the regions and their classification into one of several categories. A few examples of ROIs in different biomedical imaging and image analysis applications are listed below.

Cells in cervical-smear test images (Papanicolaou or Pap-smear test) [272, 273]

Calcifications in mammograms [274]

Tumors and masses in mammograms [275, 276, 277]

The pectoral muscle in mammograms [278]

The breast outline or skin-air boundary in mammograms [279]

The fibroglandular disc in mammograms [280]

The airway tree in lungs

The arterial tree in lungs

The arterial tree of the left ventricle, and constricted parts of the same due to plaque development

Segmentation is the process that divides an image into its constituent parts, objects, or ROIs. Segmentation is an essential step before the description, recognition, or classification of an image or its constituents. Two major approaches to image segmentation are based on the detection of the following characteristics:


Discontinuity: abrupt changes in gray level (corresponding to edges) are detected.

Similarity: homogeneous parts are detected, based on gray-level thresholding, region growing, and region splitting/merging.

Depending upon the nature of the images and the ROIs, we may attempt to detect the edges of the ROIs (if distinct edges are present), or we may attempt to grow regions to approximate the ROIs. It should be borne in mind that, in some cases, an ROI may be composed of several disjoint component areas (for example, a tumor that has metastasized into neighboring regions, and calcifications in a cluster). Edges that are detected may include disconnected parts that may have to be matched and joined. We shall explore several techniques of this nature in the present chapter.

Notwithstanding the stated interest in local regions as above, applications do exist where entire images need to be analyzed for global changes in patterns: for example, changes in the orientational structure of collagen fibers in ligaments (see Figure 1.8), and bilateral asymmetry in mammograms (see Section 8.9). Furthermore, in the case of clustered calcifications in mammograms, cells in cervical smears, and other examples of images with multicomponent ROIs, analysis may commence with the detection of single units of the pattern of interest, but several such units present in a given image may need to be analyzed, separately and together, in order to reach a decision regarding the case.

5.1 Thresholding and Binarization

If the gray levels of the objects of interest in an image are known from prior knowledge, or can be determined from the histogram of the given image, the image may be thresholded to detect the features of interest and reject other details. For example, if it is known that the objects of interest in the image have gray-level values greater than L1, we could create a binary image for display as

\[
g(m, n) = \begin{cases} 0 & \text{if } f(m, n) \le L_1 \\ 255 & \text{if } f(m, n) > L_1, \end{cases} \tag{5.1}
\]

where f(m, n) is the original image, g(m, n) is the thresholded image to be displayed, and the display range is [0, 255]. See also Section 4.4.1.

Methods for the derivation of optimal thresholds are described in Sections 5.4.1, 8.3.2, and 8.7.2.
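As a rough sketch of Equation 5.1 (not the code used to produce the figures in this chapter), the following NumPy fragment binarizes an 8-bit image for display, assuming the image is already available as a 2D array and that the threshold L1 is chosen by the user:

```python
import numpy as np

def threshold_binarize(f: np.ndarray, L1: float) -> np.ndarray:
    """Binarize an image as in Equation 5.1: pixels above L1 map to 255,
    all others to 0 (display range [0, 255])."""
    return np.where(f > L1, 255, 0).astype(np.uint8)

# Hypothetical usage with a synthetic test image:
f = np.random.randint(0, 256, size=(64, 64))
g = threshold_binarize(f, L1=180)
```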

Example: Figure 5.1 (a) shows a TEM image of a ligament sample demonstrating collagen fibers in cross-section; see Section 1.4. Inspection of the histogram of the image (shown in Figure 2.12) shows that the sections of the collagen fibers in the image have gray-level values less than about 180; values greater than this level represent the brighter background in the image. The histogram also indicates that the gray-level ranges of the collagen-fiber regions and the background overlap significantly. Figure 5.1 (b) shows a thresholded version of the image in (a), with all pixels less than 180 appearing in black, and all pixels above this level appearing in white. This operation is the same as the thresholding operation given by Equation 5.1, but in the opposite sense. Most of the collagen fiber sections have been detected by the thresholding operation. However, some of the segmented regions are incomplete or contain holes, whereas some parts that appear to be separate and distinct in the original image have been merged in the result. An optimal threshold derived using the methods described in Sections 5.4.1, 8.3.2, and 8.7.2 could lead to better results.

5.2 Detection of Isolated Points and Lines

Isolated points may exist in images due to noise or due to the presence of small particles in the image. The detection of isolated points is useful in noise removal and the analysis of particles. The following convolution mask may be used to detect isolated points [8]:

\[
\frac{1}{8}\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}.
\]

The operation computes the difference between the current pixel at the center of the mask and the average of its 8-connected neighbors. (The mask could also be seen as a generalized version of the Laplacian mask in Equation 2.83.) The result of the mask operation could be thresholded to detect isolated pixels, where the difference computed would be large.

Straight lines or line segments oriented at 0°, 45°, 90°, and 135° may be detected by using the following 3×3 convolution masks [8]:

\[
\begin{bmatrix} -1 & -1 & -1 \\ 2 & 2 & 2 \\ -1 & -1 & -1 \end{bmatrix}, \quad
\begin{bmatrix} -1 & -1 & 2 \\ -1 & 2 & -1 \\ 2 & -1 & -1 \end{bmatrix}, \quad
\begin{bmatrix} -1 & 2 & -1 \\ -1 & 2 & -1 \\ -1 & 2 & -1 \end{bmatrix}, \quad
\begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}.
\]
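The following sketch illustrates how such masks could be applied in practice using SciPy; the thresholds are arbitrary illustrative values, and the mask values follow the forms given above:

```python
import numpy as np
from scipy.ndimage import convolve

# Isolated-point mask: difference between a pixel and the average of its 8 neighbors.
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]]) / 8.0

# Line-detection masks for 0, 45, 90, and 135 degrees.
line_masks = {
    0:   np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    45:  np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    90:  np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    135: np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def detect_points(f, threshold=50.0):
    """Flag pixels whose point-mask response exceeds an (illustrative) threshold."""
    response = convolve(f.astype(float), point_mask, mode='nearest')
    return np.abs(response) > threshold

def detect_lines(f, angle, threshold=100.0):
    """Flag pixels responding strongly to the line mask of the given angle."""
    response = convolve(f.astype(float), line_masks[angle], mode='nearest')
    return np.abs(response) > threshold
```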


FIGURE 5.1
(a) TEM image of collagen fibers in a scar-tissue sample from a rabbit ligament at a magnification of approximately 30,000. See also Figure 1.5. Image courtesy of C.B. Frank, Department of Surgery, University of Calgary. See Figure 2.12 for the histogram of the image. (b) Image in (a) thresholded at the gray level of 180.


5.3 Edge Detection

One of the approaches to the detection of an ROI is to detect its edges. The HVS is particularly sensitive to edges and gradients, and some theories and experiments indicate that the detection of edges plays an important role in the detection of objects and analysis of scenes [122, 281, 282].

In Section 2.11.1 on the properties of the Fourier transform, we saw that the first-order derivatives and the Laplacian relate to the edges in the image. Furthermore, we saw that these space-domain operators have equivalent formulations in the frequency domain as highpass filters with gain that is proportional to frequency in a linear or quadratic manner. The enhancement techniques described in Sections 4.6 and 4.7 further strengthen the relationship between edges, gradients, and high-frequency spectral components. We shall now explore how these approaches may be extended to detect the edges or contours of objects or regions.

(Note: Some authors consider edge extraction to be a type of image enhancement.)

5.3.1 Convolution mask operators for edge detection

An edge is characterized by a large change in the gray level from one side to the other, in a particular direction dependent upon the orientation of the edge. Gradients or derivatives measure the rate of change, and hence could serve as the basis for the development of methods for edge detection.

The first derivatives in the x and y directions, approximated by the first differences, are given by (using matrix notation)

\[
f'_{yb}(m, n) \approx f(m, n) - f(m-1, n),
\]
\[
f'_{xb}(m, n) \approx f(m, n) - f(m, n-1), \tag{5.4}
\]

where the additional subscript b indicates a backward-difference operation. Because causality is usually not a matter of concern in image processing, the differences may also be defined as

\[
f'_{yf}(m, n) \approx f(m+1, n) - f(m, n),
\]
\[
f'_{xf}(m, n) \approx f(m, n+1) - f(m, n), \tag{5.5}
\]

where the additional subscript f indicates a forward-difference operation. A limitation of the operators as above is that they are based upon the values of only two pixels; this makes the operators susceptible to noise or spurious pixel values. A simple approach to design robust operators and reduce the sensitivity to noise is to incorporate averaging over multiple measurements. Averaging the two definitions of the derivatives in Equations 5.4 and 5.5, we get

\[
f'_{ya}(m, n) \approx 0.5\,[f(m+1, n) - f(m-1, n)],
\]
\[
f'_{xa}(m, n) \approx 0.5\,[f(m, n+1) - f(m, n-1)], \tag{5.6}
\]

where the additional subscript a indicates the inclusion of averaging.

In image processing, it is also desirable to express operators in terms of odd-sized masks that may be centered upon the pixel being processed. The Prewitt operators take these considerations into account with the following 3×3 masks for the horizontal and vertical derivatives Gx and Gy, respectively:

\[
G_x: \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
G_y: \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}.
\]

The gradient magnitude may be computed as

\[
G_f(m, n) = \sqrt{G_{fx}^2(m, n) + G_{fy}^2(m, n)}, \tag{5.9}
\]

where

\[
G_{fx}(m, n) = (f * G_x)(m, n) \tag{5.10}
\]

and

\[
G_{fy}(m, n) = (f * G_y)(m, n). \tag{5.11}
\]

If the magnitude is to be scaled for display or thresholded for the detection of edges, the square-root operation may be dropped, or the magnitude approximated as |G_fx| + |G_fy| in order to save computation.


The Sobel operators are similar to the Prewitt operators, but include larger weights for the pixels in the row or column of the pixel being processed, as follows:

\[
G_x: \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
G_y: \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.
\]

Edges oriented at 45° and 135° may be detected by using rotated versions of the masks as above. The Prewitt operators for the detection of diagonal edges are

\[
G_{45^\circ}: \begin{bmatrix} 0 & 1 & 1 \\ -1 & 0 & 1 \\ -1 & -1 & 0 \end{bmatrix}, \qquad
G_{135^\circ}: \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -1 \end{bmatrix}.
\]

Similar masks may be derived for the Sobel operator.

(Note: The positive and negative signs of the elements in the masks above may be interchanged to obtain operators that detect gradients in the opposite directions. This step is not necessary if directions are considered in the range 0° to 180° only, or if only the magnitudes of the gradients are required.)

Observe that the sum of all of the weights in the masks above is zero. This indicates that the operation being performed is a derivative or gradient operation, which leads to zero output values in areas of constant gray level, and the loss of intensity information.

The Roberts operator uses 2×2 neighborhoods to compute cross-differences:

\[
g(m, n) = |f(m+1, n+1) - f(m, n)| + |f(m+1, n) - f(m, n+1)|, \tag{5.17}
\]

with the indices in matrix-indexing notation. The individual differences may also be squared, and the square root of their sum taken to be the net gradient. The advantage of the Roberts operator is that it is a forward-looking operator, as a result of which the result may be written in the same array as the input image. This was advantageous when computer memory was expensive and in short supply.
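A brief sketch of gradient computation with the Prewitt or Sobel masks, using the |Gfx| + |Gfy| approximation mentioned above, follows; the mask conventions match those assumed earlier in this section, and this is an illustrative fragment rather than the code used for the figures:

```python
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y   = SOBEL_X.T

def gradient_magnitude(f, gx_mask=PREWITT_X, gy_mask=PREWITT_Y, approximate=True):
    """Return the gradient magnitude of image f using the given derivative masks."""
    f = f.astype(float)
    gx = convolve(f, gx_mask, mode='nearest')   # horizontal derivative (cf. Eq. 5.10)
    gy = convolve(f, gy_mask, mode='nearest')   # vertical derivative (cf. Eq. 5.11)
    if approximate:
        return np.abs(gx) + np.abs(gy)          # cheaper approximation
    return np.sqrt(gx**2 + gy**2)               # cf. Eq. 5.9
```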


Examples: Figure 5.2 (a) shows the Shapes test image. Part (b) of the figure shows the gradient magnitude image, obtained by combining, as in Equation 5.9, the horizontal and vertical derivatives, shown in parts (c) and (d) of the figure, respectively. The image in part (c) presents high values (positive or negative) at vertical edges only; horizontally oriented edges have been deleted by the horizontal derivative operator. The image in part (d) shows high output at horizontal edges, with the vertically oriented edges having been removed by the vertical derivative operator. The test image has strong edges for most of the objects present, which are clearly depicted in the derivative images; however, the derivative images show the edges of a few objects that are not readily apparent in the original image as well. Parts (e) and (f) of the figure show the derivatives at 45° and 135°, respectively; the images indicate the diagonal edges present in the image.

Figures 5.3, 5.4, and 5.5 show similar sets of results for the clock, the knee MR, and the chest X-ray test images, respectively. In the derivatives of the clock image, observe that the numeral "1" has been obliterated by the vertical derivative operator [Figure 5.3 (d)], but gives rise to high output values for the horizontal derivative [Figure 5.3 (c)]. The clock image has the minute hand oriented at approximately 135° with respect to the horizontal; this feature has been completely removed by the 135° derivative operator, as shown in Figure 5.3 (f), but has been enhanced by the 45° derivative operator, as shown in Figure 5.3 (e). The knee MR image contains sharp boundaries that are depicted well in the derivative images in Figure 5.4. The derivative images of the chest X-ray image in Figure 5.5 indicate large values at the boundaries of the image, but depict the internal details with weak derivative values, indicative of the smooth nature of the image.

5.3.2 The Laplacian of Gaussian

Although the Laplacian is a gradient operator, it should be recognized that it is a second-order difference operator. As we observed in Sections 2.11.1 and 4.6, this leads to double-edged outputs with positive and negative values at each edge; this property is demonstrated further by the example in Figure 5.6 (see also Figure 4.26). The Laplacian has the advantage of being omnidirectional, that is, being sensitive to edges in all directions; however, it is not possible to derive the angle of an edge from the result. The operator is also sensitive to noise because there is no averaging included in the operator; the gain in the frequency domain increases quadratically with frequency, causing significant amplification of high-frequency noise components. For these reasons, the Laplacian is not directly useful in edge detection.

The double-edged output of the Laplacian indicates an important property of the operator: the result possesses a zero-crossing in between the positive and negative outputs across an edge; the property holds even when the edge in the original image is significantly blurred. This property is useful in the development of robust edge detectors.


FIGURE 5.2
(a) Shapes test image. (b) Gradient magnitude, display range [0, 400] out of [0, 765]. (c) Horizontal derivative, display range [-200, 200] out of [-765, 765]. (d) Vertical derivative, display range [-200, 200] out of [-765, 765]. (e) 45° derivative, display range [-200, 200] out of [-765, 765]. (f) 135° derivative, display range [-200, 200] out of [-765, 765].


FIGURE 5.3
(a) Clock test image. (b) Gradient magnitude, display range [0, 100] out of [0, 545]. (c) Horizontal derivative, display range [-100, 100] out of [-538, 519]. (d) Vertical derivative, display range [-100, 100] out of [-446, 545]. (e) 45° derivative, display range [-100, 100] out of [-514, 440]. (f) 135° derivative, display range [-100, 100] out of [-431, 535].


FIGURE 5.4
(a) Knee MR image. (b) Gradient magnitude, display range [0, 400] out of [0, 698]. (c) Horizontal derivative, display range [-200, 200] out of [-596, 496]. (d) Vertical derivative, display range [-200, 200] out of [-617, 698]. (e) 45° derivative, display range [-200, 200] out of [-562, 503]. (f) 135° derivative, display range [-200, 200] out of [-432, 528].


FIGURE 5.5
(a) Part of a chest X-ray image. (b) Gradient magnitude, display range [0, 50] out of [0, 699]. (c) Horizontal derivative, display range [-50, 50] out of [-286, 573]. (d) Vertical derivative, display range [-50, 50] out of [-699, 661]. (e) 45° derivative, display range [-50, 50] out of [-452, 466]. (f) 135° derivative, display range [-50, 50] out of [-466, 442].


The noise sensitivity of the Laplacian may be reduced by including a smoothing operator. A scalable smoothing operator could be defined in terms of a 2D Gaussian function, with the variance controlling the spatial extent or width of the smoothing function. Combining the Laplacian and the Gaussian, we obtain the popular Laplacian-of-Gaussian (LoG) operator

\[
\nabla^2 g(x, y) = \frac{1}{\pi\sigma^4}\left[\frac{x^2 + y^2}{2\sigma^2} - 1\right]\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right),
\]

where σ² is the variance of the Gaussian and x² + y² is the squared distance from the center of the operator. The LoG function is isotropic and has positive and negative values. Due to its shape, it is often referred to as the Mexican hat or sombrero.

Figure 5.7 shows the LoG operator in image and mesh-plot formats; the basic Gaussian used to derive the LoG function is also shown for reference. The Fourier magnitude spectra of the Gaussian and LoG functions are also shown in the figure. It should be observed that, whereas the Gaussian is a lowpass filter (which is also a 2D Gaussian in the frequency domain), the LoG function is a bandpass filter. The width of the filters is controlled by the parameter σ of the Gaussian.

Figure 5.8 shows profiles of the LoG and the related Gaussian for two values of σ. Figure 5.9 shows the profiles of the Fourier transforms of the functions in Figure 5.8. The profiles clearly demonstrate the nature of the functions and their filtering characteristics.

An approximation to the LoG operator is provided by taking the difference between two Gaussians of appropriate variances; this operator is known as the difference-of-Gaussians or DoG operator [282].

Examples: Figure 5.10 shows the Shapes test image, the LoG of the image with σ = 1 pixel, and the locations of the zero-crossings of the LoG of the image with σ = 1 pixel and σ = 2 pixels. The zero-crossings indicate the locations of the edges in the image. The use of a large value for σ reduces the effect of noise, but also causes smoothing of the edges and corners, as well as the loss of the minor details present in the image.
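A minimal sketch of LoG filtering and zero-crossing detection follows; it uses SciPy's gaussian_laplace and a simple sign-change test, which is only one of several possible ways to mark zero-crossings:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(f, sigma=1.0):
    """Return a boolean map of zero-crossings of the LoG of image f at scale sigma."""
    log = gaussian_laplace(f.astype(float), sigma)
    zc = np.zeros_like(log, dtype=bool)
    # Mark a pixel if the LoG changes sign with respect to its right or lower neighbor.
    zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    return zc
```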


Figure 5.11 shows the clock image, its LoG, and the zero-crossings of the LoG with σ = 1 pixel and σ = 2 pixels. The results illustrate the performance of the LoG operator in the presence of noise.

Figures 5.12, 5.13, and 5.14 show similar sets of results for the myocyte image, the knee MR image, and the chest X-ray test images. Comparative analysis of the scales of the details present in the images and the zero-crossings of the LoG for different values of σ indicates the importance of selecting values of the σ parameter in accordance with the scale of the details to be detected.

5.3.3 Scale-space methods for multiscale edge detection

Marr and Hildreth [281, 282] suggested that physical phenomena may be detected simultaneously over several channels tuned to different spatial sizes or scales, with an approach known as spatial coincidence. An intensity change that is due to a single physical phenomenon is indicated by zero-crossing segments present in independent channels over a certain range of scales, with the segments having the same position and orientation in each channel. A significant intensity change indicates the presence of a major event that is registered as a physical boundary, and is recognized as a single physical phenomenon. The boundaries of a significant physical pattern should be present over several channels, suggesting that the use of techniques based on zero-crossings generated from filters of different scales could be more effective than the conventional (single-scale) methods for edge detection; see, for example, Figure 5.13.

Zero-crossings and scale-space: The multichannel model for the HVS [283] and the Marr-Hildreth spatial-coincidence assumption [281] led to the development of methods for the detection of edges based upon multiscale analysis performed with filters of different scales. Marr and Hildreth proposed heuristic rules to combine information from the different channels in a multichannel vision model; they suggested the use of a bank of LoG filters with several values of σ, which may be represented as {∇²g_σ(x, y)}, with σ > 0.

A method for obtaining information in images across a continuum of scales was suggested by Witkin [284], who introduced the concept of scale-space. The method rapidly gained considerable interest, and has been explored further by several researchers in image processing and analysis [285, 286, 287, 288, 289]. The scale-space of an image f(x, y) is defined as the set of all zero-crossings of its LoG:

\[
\{(x, y, \sigma) : [\nabla^2 g_\sigma(x, y)] * f(x, y) = 0\}. \tag{5.22}
\]

As the scale σ varies from 0 to ∞, this set forms continuous surfaces in the (x, y, σ) scale-space.

Several important scale-space concepts apply to 1D and 2D signals. It has been shown that the scale-space of almost all signals filtered by a Gaussian determines the signal uniquely up to a scaling constant [285] (except for noise-contaminated signals and some special functions [290]). The importance of this property lies in the fact that, theoretically, for almost all signals, no information is lost by working in the scale-space instead of the image domain. This property plays an important role in image understanding [291], image reconstruction from zero-crossings [285, 292], and image analysis using the scale-space approach [288]. Furthermore, it has also been shown that the Gaussian does not create additional zero-crossings as the scale σ increases beyond a certain limit, and that the Gaussian is the only filter with this desirable scaling behavior [285].

Based on the spatial-coincidence assumption, Witkin [284] proposed a 1D stability analysis method for the extraction of primitive events that occur over a large range of scales. The primitive events were organized into a qualitative signal description representing the major events in the signal. Assuming that zero-crossing curves do not cross one another (which was later proven to be incorrect by Katz [293]), Witkin defined the stability of a signal interval as the scale range over which the signal interval exists; major events could then be captured via stability analysis. However, due to the complex topological nature of spatial zero-crossings, it is often difficult to directly extend Witkin's 1D stability analysis method to 2D image analysis. The following problems affect Witkin's method for stability analysis:

It has been shown that zero-crossing curves do cross one another [293].

It has been shown that real (authentic) zero-crossings could turn into false (phantom) zero-crossings as the scale σ increases [294]. Use of the complete scale-space (with σ ranging from 0 to ∞) may introduce errors in certain applications; an appropriate scale-space using only a finite range of scales could be more effective.

For 2D signals (images), the scale-space consists of zero-crossing surfaces that are more complex than the zero-crossing curves for 1D signals. The zero-crossing surfaces may split and merge as the scale varies (decreases or increases, respectively).

As a consequence of the above, there is no simple topological region associated with a zero-crossing surface, and tracing a zero-crossing surface across scales becomes computationally difficult.

Liu et al. [295] proposed an alternative definition of zero-crossing surface stability in terms of important spatial boundaries. In this approach, a spatial boundary is defined as a region of steep gradient and high contrast, and is well-defined if it has no neighboring boundaries within a given range. This definition of spatial boundaries is consistent with the Marr-Hildreth spatial-coincidence assumption. Furthermore, stability maps [288] associated with the scale-space are used. A relaxation algorithm is included in the process to generate zero-crossing maps.

In the method of Liu et al. [295], the discrete scale-space approach is used to construct a representation of a given image in terms of a stability map, which is a measure of pattern boundary persistence over a range of filter scales. For a given image f(x, y), a set of zero-crossing maps is generated by convolving the image with the set of isotropic functions ∇²g(x, y, σ_i), 1 ≤ i ≤ N. It was indicated that N = 8 sampled σ_i values ranging from 1 to 8 pixels were adequate for the application considered. Ideally, one would expect a pattern boundary to be accurately located over all of the scales. However, it has been shown [296, 297] that the accuracy of zero-crossing localization depends upon the width of the central excitatory region of the filter (defined as w_i = 2√2 σ_i [298]). Chen and Medioni [299] proposed a 1D method for localization of zero-crossings that works well for ideal step edges and image patterns with sharp contrast; however, the method may not be effective for the construction of the spatial scale-space for real-life images with poor and variable contrast. Instead of directly matching all the zero-crossing locations at a point (x, y) over the zero-crossing maps, Liu et al. proposed a criterion C(σ_i) that is a function of the scale σ_i to define a neighborhood in which the matching procedure is performed at a particular scale:

\[
C(\sigma_i) = \{(x_0, y_0) : x - \alpha\sigma_i \le x_0 \le x + \alpha\sigma_i,\; y - \alpha\sigma_i \le y_0 \le y + \alpha\sigma_i\}, \tag{5.23}
\]

where (x_0, y_0) are the actual locations of the zero-crossings, (x, y) is the pixel location at which the filters are being applied, and α is a constant to be determined experimentally (α = 1 was used by Liu et al.). Therefore, if a zero-crossing (x, y, σ_i) is found in the neighborhood defined by C(σ_i), an arbitrary constant β is assigned to a function S_i(x, y), which otherwise is assigned a zero, that is,

\[
S_i(x, y) = \begin{cases} \beta & \text{if } (x, y, \sigma_i) \in C(\sigma_i) \\ 0 & \text{otherwise,} \end{cases} \tag{5.24}
\]

where the subscript i corresponds to the ith scale σ_i.

Applying Equations 5.23 and 5.24 to the set of zero-crossings detected, a set of adjusted zero-crossing maps {S_1(x, y), S_2(x, y), ..., S_N(x, y)} is obtained, where N is the number of scales. The adjusted zero-crossing maps are used to construct the zero-crossing stability map Λ(x, y) as

\[
\Lambda(x, y) = \sum_{i=1}^{N} S_i(x, y). \tag{5.25}
\]

The values of Λ(x, y) are, in principle, a measure of boundary stability through the filter scales. Marr and Hildreth [281] and Marr and Poggio [300] suggested that directional detection of zero-crossings be performed after the LoG operator has been applied to the image.
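As a simplified illustration of Equation 5.25, the following sketch counts, at each pixel, the number of scales at which a LoG zero-crossing occurs (that is, with the constant set to 1); it omits the neighborhood matching of Equations 5.23 and 5.24 and uses illustrative σ values:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def stability_map(f, sigmas=(1, 2, 3, 4, 5, 6, 7, 8)):
    """Simplified stability map: per pixel, count the scales at which the LoG
    changes sign with respect to the right or lower neighbor."""
    f = f.astype(float)
    count = np.zeros(f.shape, dtype=int)
    for s in sigmas:
        log = gaussian_laplace(f, s)
        zc = np.zeros_like(log, dtype=bool)
        zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
        zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
        count += zc.astype(int)
    return count
```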


According to the spatial-coincidence assumption, a true boundary should be high in contrast and have relatively large Λ values at the corresponding locations. Furthermore, there should be no other edges within a given neighborhood. Thus, if, in a neighborhood of (x, y), nonzero stability map values exist only along the orientation of a local segment of the stability map that crosses (x, y), then Λ(x, y) may be considered to signify a stable edge pixel at (x, y). On the other hand, if many nonzero stability map values are present at different directions, Λ(x, y) indicates an insignificant boundary pixel at (x, y).

In other words, a consistent stability indexing method (in the sense of the spatial-coincidence assumption) should take neighboring stability indices into account. Based upon this argument, Liu et al. proposed a relative stability index ρ(x, y), computed from the stability map where Λ(x, y) ≠ 0, as follows. In a neighborhood of (x, y), if m nonzero values are found, Λ(x, y) is relabeled as l_0, and the rest of the values Λ(x_k, y_k) are relabeled as l_k, k = 1, ..., m - 1; see Figure 5.15. In order to avoid using elements in the neighborhood that belong to the same edge, those (x_0, y_0) having the same orientation as that of l_0 are not included in the computation of ρ(x, y). Based upon these requirements, the relative stability index ρ(x, y) is defined as

\[
\rho(x, y) = \frac{l_0}{l_0 + \sum_{k=1}^{m-1} \omega_k\, l_k}, \tag{5.26}
\]

where ω_k = exp(-d_k²), d_k = √[(x - x_k)² + (y - y_k)²], and (x_k, y_k) are the locations of l_k. It should be noted that 0 ≤ ρ(x, y) ≤ 1, and that the value of ρ(x, y) is governed by the geometrical distribution of the neighboring stability index values.

Stability of zero-crossings: Liu et al. [295] observed that the use of zero-crossings to indicate the presence of edges is reliable as long as the edges are well-separated; otherwise, the problem of false zero-crossings could arise [301]. The problem with using zero-crossings to localize edges is that the zeros of the second derivative of a function localize the extrema in the first derivative of the function; the extrema include the local minima and maxima in the first derivative of the function, whereas only the local maxima indicate the presence of edge points. Intuitively, those zero-crossings that correspond to the minima of the first derivative are not associated with edge points at all. In image analysis based upon the notion of zero-crossings, it is desirable to be able to distinguish real zero-crossings from false ones, and to discard the false zero-crossings. Motivated by the work of Richter and Ullman [301], Clark [294] conducted an extensive study on the problem of false and real zero-crossings, and proposed that zero-crossings may be classified as real if φ(x, y) < 0 and false if φ(x, y) > 0, where

\[
\phi(x, y) = \nabla[\nabla^2 p(x, y)] \cdot \nabla p(x, y),
\]

where · denotes the dot product, p(x, y) is a smoothed version of the given image, such as p(x, y) = g(x, y) * f(x, y),

\[
\nabla p(x, y) = \left[\frac{\partial p}{\partial x}, \frac{\partial p}{\partial y}\right]^T, \quad \text{and} \quad
\nabla[\nabla^2 p(x, y)] = \left[\frac{\partial (\nabla^2 p)}{\partial x}, \frac{\partial (\nabla^2 p)}{\partial y}\right]^T.
\]

Liu et al. included this step in their method to detect true zero-crossings.

Example: Figure 5.16 (a) shows an SEM image of collagen fibers in a ligament scar-tissue sample. Parts (b) to (d) of the figure show the zero-crossing maps obtained with σ = 1, 4, and 8 pixels. The result in (b) contains several spurious or insignificant zero-crossings, whereas that in (d) contains smoothed edges of only the major regions of the image. Part (e) shows the stability map, which indicates the edges of the major objects in the image. The stability map was used to detect the collagen fibers in the image. See Section 8.5 for details on directional analysis of oriented patterns by further processing of the stability map, where methods for the directional analysis of collagen fibers are described.

5.3.4 Canny's method

Canny derived an optimal edge-detection filter based upon criteria of good detection, good localization, and a single response to a single edge, the last criterion being characterized by the distance between the adjacent maxima in the output. A basic filter derived using the criteria mentioned above was approximated by the first derivative of a Gaussian. Procedures were proposed to incorporate multiscale analysis and directional filters to facilitate efficient detection of edges at all orientations and scales, including adaptive thresholding with hysteresis. The LoG filter is nondirectional, whereas Canny's method selectively evaluates a directional derivative across each edge. By avoiding derivatives at other angles that would not contribute to edge detection but increase the effects of noise, Canny's method could lead to better results than the LoG filter.

5.3.5 Fourier-domain methods for edge detection

In Section 4.7, we saw that highpass filters may be applied in the Fourier domain to extract the edges in the given image. However, the inclusion of all of the high-frequency components present in the image could lead to noisy results. Reduction of high-frequency noise suggests the use of bandpass filters, which may be easily implemented as a cascade of a lowpass filter with a highpass filter. In the frequency domain, such a cascade of filters results in the multiplication of the corresponding transfer functions. Because edges are often weak or blurred in images, some form of enhancement of the corresponding frequency components would also be desirable. This argument leads us to the LoG filter: a combination of the Laplacian, which is a high-frequency-emphasis filter with its gain quadratically proportional to frequency, with a Gaussian lowpass filter.


FIGURE 5.16
(a) SEM image of a ligament scar-tissue sample. (b) to (d) Zero-crossing locations detected using the LoG operator with σ = 1, 4, and 8 pixels, respectively. (e) The stability map, depicting the major edges present in the image. Reproduced with permission from Z.-Q. Liu, R.M. Rangayyan, and C.B. Frank, "Statistical analysis of collagen alignment in ligaments by scale-space analysis", IEEE Transactions on Biomedical Engineering, 38(6):580-588, 1991. © IEEE.


The methods and results presented in Section 5.3.2 demonstrate the edge-detection capabilities of the LoG filter, which may be easily implemented in the frequency domain. Frequency-domain implementation using the FFT may be computationally advantageous when the LoG function is specified with a large spatial array, which would be required in the case of large values of σ.
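A sketch of a frequency-domain LoG implementation follows; the transfer function combines a quadratic (Laplacian-like) gain with a Gaussian lowpass filter, and its normalization is only illustrative (it differs from a sampled spatial LoG kernel by a constant factor):

```python
import numpy as np

def log_filter_fft(f, sigma=2.0):
    """Apply a Laplacian-of-Gaussian bandpass filter in the frequency domain."""
    f = f.astype(float)
    rows, cols = f.shape
    u = np.fft.fftfreq(rows)[:, None]          # vertical spatial frequencies
    v = np.fft.fftfreq(cols)[None, :]          # horizontal spatial frequencies
    r2 = u**2 + v**2
    # Quadratic (Laplacian) gain cascaded with a Gaussian lowpass transfer function.
    H = -4.0 * np.pi**2 * r2 * np.exp(-2.0 * np.pi**2 * sigma**2 * r2)
    F = np.fft.fft2(f)
    return np.real(np.fft.ifft2(F * H))
```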

Several other line-detection and edge-detection methods, such as Gabor filters (see Sections 5.10, 8.4, 8.9, and 8.10) and fan filters (see Section 8.3), may also be implemented in the frequency domain with advantages.

5.3.6 Edge linking

The results of most methods for edge detection are almost always discontinuous, and need to be processed further to link disjoint segments and obtain complete representations of the boundaries of ROIs. Two principal properties that may be used to establish the similarity of edge pixels from gradient images are the following [8]:

The strength of the gradient: a point (x_0, y_0) in a neighborhood of (x, y) is similar in gradient magnitude to the point (x, y) if

\[
\| G(x, y) - G(x_0, y_0) \| \le T, \tag{5.27}
\]

where G(x, y) is the gradient vector of the given image f(x, y) at (x, y) and T is a threshold.

The direction of the gradient: a point (x_0, y_0) in a neighborhood of (x, y) is similar in gradient direction to the point (x, y) if

\[
| \theta(x, y) - \theta(x_0, y_0) | \le A, \tag{5.28}
\]

where θ(x, y) is the direction of the gradient vector at (x, y) and A is an angle threshold.

3×3 or 5×5 neighborhoods may be used for checking pixels for similarity in their gradients as above. Further processing steps may include linking of edge segments separated by small breaks and deleting isolated short segments. See Section 5.10.2 (page 493) for the details of an edge analysis method known as edge-flow propagation.
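The following sketch applies the two similarity tests to a pair of pixel locations; the Sobel operators, thresholds, and angle-wrapping rule are illustrative choices rather than prescribed values:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_pixel_similarity(f, p, q, mag_thresh=25.0, ang_thresh_deg=15.0):
    """Check whether pixels p and q of image f are similar in gradient
    magnitude (cf. Eq. 5.27) and direction (cf. Eq. 5.28)."""
    f = f.astype(float)
    gx = sobel(f, axis=1)      # horizontal derivative
    gy = sobel(f, axis=0)      # vertical derivative
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx))

    dmag = abs(mag[p] - mag[q])
    dang = abs(ang[p] - ang[q])
    dang = min(dang, 360.0 - dang)             # wrap the angular difference
    return (dmag <= mag_thresh) and (dang <= ang_thresh_deg)

# Hypothetical usage: compare two neighboring pixel locations.
# similar = edge_pixel_similarity(image, (10, 10), (10, 11))
```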


5.4 Segmentation and Region Growing

Dividing an image into regions that could correspond to structural units, objects of interest, or ROIs is an important prerequisite for most techniques for image analysis. Whereas a human observer may, by merely looking at a displayed image, readily recognize its structural components, computer analysis of an image requires algorithmic analysis of the array of image pixel values before arriving at conclusions about the content of the image. Computer analysis of images usually starts with segmentation, which reduces pixel data to region-based information about the objects and structures present in the image [303, 304, 305, 306, 307].

Image segmentation techniques may be classified into four main categories:

thresholding techniques [8, 306],

boundary-based methods [303, 308],

region-based methods [8, 123, 274, 309, 310, 311, 312], and

hybrid techniques [313, 314, 315, 316, 317] that combine boundary and region criteria.

Thresholding methods are based upon the assumption that all pixels whose values lie within a certain range belong to the same class; see Section 5.1. The threshold may be determined based upon the valleys in the histogram of the image; however, identifying thresholds to segment objects is not easy even with optimal thresholding techniques [8, 306]. Moreover, because thresholding algorithms are solely based upon pixel values and neglect all of the spatial information in the image, their accuracy of segmentation is limited; furthermore, thresholding algorithms do not cope well with noise or blurring at object boundaries.

Boundary-based techniques make use of the property that, usually, pixel values change rapidly at the boundaries between regions. The methods start by detecting intensity discontinuities lying at the boundaries between objects and their backgrounds, typically through a gradient operation. High values of the output provide candidate pixels for region boundaries, which must then be processed to produce closed curves representing the boundaries between regions, as well as to remove the effects of noise and discontinuities due to nonuniform illumination and other effects. Although edge-linking algorithms have been proposed to assemble edge pixels into a meaningful set of object boundaries (see Section 5.3.6), such as local similarity analysis, Hough-transform-based global analysis, and global processing via graph-theoretic techniques [8], the accurate conversion of disjoint sets of edge pixels to closed-loop boundaries of ROIs is a difficult task.


Region-based methods, which are complements of the boundary-based approach, rely on the postulate that neighboring pixels within a region have similar values. Region-based segmentation algorithms may be divided into two groups: region splitting and merging, and region growing.

Segmentation techniques in the region splitting and merging category initially subdivide the given image into a set of arbitrary, disjoint regions, and then merge and/or split the regions in an attempt to satisfy some prespecified conditions.

Region growing is a procedure that groups pixels into regions. The simplest of region-growing approaches is pixel aggregation, which starts with a seed pixel and grows a region by appending spatially connected neighboring pixels that meet a certain homogeneity criterion. Different homogeneity criteria will lead to regions with different characteristics. It is important, as well as difficult, to select an appropriate homogeneity criterion in order to obtain regions that are appropriate for the application on hand.

Typical algorithms in the group of hybrid techniques refine image segmentation by integration of boundary and region information; a proper combination of boundary and region information may produce better segmentation results than those obtained by either method on its own. For example, the morphological watershed method [315] is generally applied to a gradient image, which can be viewed as a topographic map with boundaries between regions as ridges. Consequently, segmentation is equivalent to flooding the topography from the seed pixels, with region boundaries being erected to keep water from the different seed pixels from merging. Such an algorithm is guaranteed to produce closed boundaries, which is known to be a major problem with boundary-based methods. However, because the success of this type of an algorithm relies on the accuracy of the edge-detection procedure, it encounters difficulties with images in which regions are both noisy and have blurred or indistinct boundaries. Another interesting method within this category, called variable-order surface fitting [313], starts with a coarse segmentation of the given image into several surface-curvature-sign primitives (for example, pit, peak, and ridge), which are then refined by an iterative region-growing method based on variable-order surface fitting. This method, however, may only be suitable to the class of images where the image contents vary considerably.

The main difficulty with region-based segmentation schemes lies in the selection of a homogeneity criterion. Region-based segmentation algorithms have been proposed using statistical homogeneity criteria based on regional feature analysis [312], Bayesian probability modeling of images [318], Markov random fields [319], and seed-controlled homogeneity competition [311]. Segmentation algorithms could also rely on homogeneity criteria with respect to gray level, color, texture, or surface measures.


5.4.1 Optimal thresholding

Suppose it is known a priori that the given image consists of only two principal brightness levels, with the prior probabilities P_1 and P_2. Consider the situation where natural variations or noise modify the two gray levels to distributions represented by Gaussian PDFs p_1(x) and p_2(x), where x represents the gray level. The PDF of the image gray levels is then [8]

\[
p(x) = P_1\, p_1(x) + P_2\, p_2(x). \tag{5.29}
\]

Suppose that the dark regions in the image correspond to the background, and the bright regions to the objects of interest. Then, all pixels below a threshold T may be considered to belong to the background, and all pixels above T may be considered as pixels belonging to the object of interest. The probability of erroneous classification is then

\[
P_e(T) = P_1 \int_{T}^{\infty} p_1(x)\, dx + P_2 \int_{-\infty}^{T} p_2(x)\, dx. \tag{5.30}
\]

To find the optimal threshold, we may differentiate P_e(T) with respect to T and equate the result to zero, which leads to

\[
P_1\, p_1(T) = P_2\, p_2(T). \tag{5.31}
\]

Applying this result to the Gaussian PDFs gives (after taking logarithms and some simplification) the quadratic equation [8]

\[
A\,T^2 + B\,T + C = 0, \tag{5.32}
\]

where A = σ_1² - σ_2², B = 2(μ_1σ_2² - μ_2σ_1²), and C = σ_1²μ_2² - σ_2²μ_1² + 2σ_1²σ_2² ln(σ_2P_1/σ_1P_2), with μ_1, μ_2 and σ_1², σ_2² being the means and variances of the two Gaussian PDFs. The possibility of two solutions indicates that it may require two thresholds to obtain the optimal threshold.


If σ_1² = σ_2² = σ², a single threshold may be used, given by

\[
T = \frac{\mu_1 + \mu_2}{2} + \frac{\sigma^2}{\mu_1 - \mu_2}\, \ln\!\left(\frac{P_2}{P_1}\right). \tag{5.33}
\]

If P_1 = P_2, the optimal threshold is the average of the two means.
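A small sketch evaluating the equal-variance threshold of Equation 5.33, using assumed (illustrative) class statistics:

```python
import numpy as np

def optimal_threshold_equal_variance(mu1, mu2, sigma, P1, P2):
    """Optimal threshold for two Gaussian classes with equal variance (cf. Eq. 5.33)."""
    return 0.5 * (mu1 + mu2) + (sigma**2 / (mu1 - mu2)) * np.log(P2 / P1)

# Illustrative values: background (class 1) around 100, objects (class 2) around 180.
T = optimal_threshold_equal_variance(mu1=100.0, mu2=180.0, sigma=15.0, P1=0.7, P2=0.3)
```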

Thresholding using boundary characteristics: The number of pixels covered by the objects of interest to be segmented from an image is almost always a small fraction of the total number of pixels in the image; the gray-level histogram of the image is then likely to be almost unimodal. The histogram may be made closer to being bimodal if only the pixels on or near the boundaries of the object regions are considered.

The selection and characterization of the edge or boundary pixels may be achieved by using gradient and Laplacian operators as follows [8]:

\[
s(x, y) = \begin{cases}
0 & \text{if } \nabla f(x, y) < T \\
L_+ & \text{if } \nabla f(x, y) \ge T \text{ and } \nabla^2 f(x, y) \ge 0 \\
L_- & \text{if } \nabla f(x, y) \ge T \text{ and } \nabla^2 f(x, y) < 0,
\end{cases}
\]

where ∇f(x, y) is a gradient and ∇²f(x, y) is the Laplacian of the given image f(x, y), T is a threshold, and 0, L_+, L_- represent three distinct gray levels. In the resulting image, the pixels that are not on an edge are set to zero, the pixels on the darker sides of edges are set to L_+, and the pixels on the lighter sides of edges are set to L_-. This information may be used not only to detect objects and edges, but also to identify the leading and trailing edges of objects (with reference to the scanning direction).
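A sketch of the three-level coding described above, using Sobel gradients and the Laplacian from SciPy; the threshold T and the levels L_+ and L_- are arbitrary illustrative choices:

```python
import numpy as np
from scipy.ndimage import sobel, laplace

def three_level_edge_code(f, T=50.0, L_plus=255, L_minus=128):
    """Label non-edge pixels 0, dark-side edge pixels L_plus, and
    light-side edge pixels L_minus, based on the gradient and the Laplacian."""
    f = f.astype(float)
    grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    lap = laplace(f)
    s = np.zeros(f.shape, dtype=np.uint8)
    s[(grad >= T) & (lap >= 0)] = L_plus    # darker side of an edge
    s[(grad >= T) & (lap < 0)]  = L_minus   # lighter side of an edge
    return s
```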

See Section 8.3.2 for a description of Otsu's method of deriving the optimal threshold for binarizing a given image; see also Section 8.7.2 for discussions on a few other methods to derive thresholds.

5.4.2 Region-oriented segmentation of images

Let R represent the region spanning the entire space of the given image. Segmentation may be viewed as a process that partitions R into n subregions R_1, R_2, ..., R_n, such that:

the union of all R_i, i = 1, 2, ..., n, equals R;

each R_i is a connected region;

R_i ∩ R_j = ∅, ∀ i, j, i ≠ j;

P(R_i) = TRUE, for i = 1, 2, ..., n (for example, all pixels within a region have the same intensity);

P(R_i ∪ R_j) = FALSE, ∀ i, j, i ≠ j (for example, the intensities of the pixels in different regions are different);

where P(R_i) is a logical predicate defined over the points in the set R_i, and ∅ is the null set.

A simple algorithm for region growing by pixel aggregation based upon the similarity of a local property is as follows:

Start with a seed pixel (or a set of seed pixels).

Append to each pixel in the region those of its 4-connected or 8-connected neighbors that have properties (gray level, color, etc.) that are similar to those of the seed.

Stop when the region cannot be grown any further.

The results of an algorithm as above depend upon the procedure used to select the seed pixels and the measures of similarity or inclusion criteria used. The results may also depend upon the method used to traverse the image, that is, the sequence in which neighboring pixels are checked for inclusion.

5.4.3 Splitting and merging of regions

Instead of using seeds to grow regions or global thresholds to separate an image into regions, one could initially divide the given image arbitrarily into a set of disjoint regions, and then split and/or merge the regions using conditions or predicates P.

A general split/merge procedure is as follows [8]. Assuming the image to be square, subdivide the entire image R successively into smaller and smaller quadrant regions such that, for any region R_i, P(R_i) = TRUE. In other words, if P(R) = FALSE, divide the image into quadrants; if P is FALSE for any quadrant, subdivide that quadrant into subquadrants. Iterate the procedure until no further changes are made, or a stopping criterion is reached. The splitting technique may be represented as a quadtree. Difficulties could exist in selecting an appropriate predicate P.

Because the splitting procedure could result in adjacent regions that are similar, a merging step would be required, which may be specified as follows: merge two adjacent regions R_i and R_k if P(R_i ∪ R_k) = TRUE. Iterate until no further merging is possible.
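A compact sketch of the splitting step follows, using the standard deviation of a block as the predicate P and assuming a square image whose side divides evenly into quadrants down to a minimum block size; the merging pass over adjacent leaves is omitted, and the tolerance is illustrative:

```python
import numpy as np

def quadtree_split(f, tol=10.0, min_size=8, origin=(0, 0)):
    """Recursively split a square image into quadrants until each block
    satisfies P: standard deviation <= tol. Returns (row, col, size) leaves."""
    size = f.shape[0]
    if size <= min_size or np.std(f) <= tol:        # P(R) is TRUE: stop splitting
        return [(origin[0], origin[1], size)]
    h = size // 2
    leaves = []
    for dr in (0, h):
        for dc in (0, h):
            block = f[dr:dr + h, dc:dc + h]
            leaves += quadtree_split(block, tol, min_size,
                                     (origin[0] + dr, origin[1] + dc))
    return leaves
```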

5.4.4 Region growing using an additive tolerance

A commonly used region-growing scheme is pixel aggregation [8, 123]. The method compares the properties of spatially connected neighboring pixels with those of the seed pixel; the properties used are determined by homogeneity criteria. For intensity-based image segmentation, the simplest property is the pixel gray level. The term "additive tolerance level" stands for the permitted absolute gray-level difference between the neighboring pixels and the seed pixel: a neighboring pixel f(m, n) is appended to the region if its absolute gray-level difference with respect to the seed pixel is within the additive tolerance level T:

\[
|f(m, n) - f_{seed}| \le T, \tag{5.36}
\]

where f_seed denotes the gray level of the seed pixel. In a variant of the method, a neighboring pixel is compared with the mean gray level, called the running mean μ_Rc, of the region being grown at its current stage, R_c. This criterion may be represented as

\[
|f(m, n) - \mu_{R_c}| \le T, \tag{5.37}
\]

where

\[
\mu_{R_c} = \frac{1}{N_c} \sum_{(m, n) \in R_c} f(m, n), \tag{5.38}
\]

where N_c is the number of pixels in R_c. Figure 5.17 (d) shows the result obtained with the running-mean algorithm by using the same additive tolerance level as before (T = 3). With the running-mean criterion, no matter which pixel is selected as the seed, the same final region is obtained in the present example, as long as the seed pixel is within the region, which is the central highlighted area in Figure 5.17 (d).
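A sketch of additive-tolerance pixel aggregation with the running-mean criterion of Equations 5.37 and 5.38, using 4-connectivity and a breadth-first traversal (as noted above, the traversal order can influence the result; T = 3 matches the value used in the example):

```python
import numpy as np
from collections import deque

def grow_region(f, seed, T=3.0):
    """Grow a region from `seed` (row, col): a 4-connected neighbor is appended
    if its gray level is within T of the running mean of the region (cf. Eq. 5.37)."""
    f = f.astype(float)
    region = np.zeros(f.shape, dtype=bool)
    region[seed] = True
    total, count = f[seed], 1                      # running sum and pixel count
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < f.shape[0] and 0 <= nc < f.shape[1] and not region[nr, nc]:
                if abs(f[nr, nc] - total / count) <= T:    # running-mean criterion
                    region[nr, nc] = True
                    total += f[nr, nc]
                    count += 1
                    queue.append((nr, nc))
    return region
```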

In the simple scheme described above, the seed pixel is always used to check the incoming neighboring pixels, even though most of them are not spatially connected or close to the seed. Such a region-growing procedure may fail when a seed pixel is inappropriately located at a noisy pixel. Shen [320, 321, 322] suggested the use of the "current center" pixel as the reference instead of the seed pixel that was used to commence the region-growing procedure. For example, the shaded area shown in Figure 5.18 represents a region being grown. After the pixel C is appended to the region, its 4-connected neighbors (labeled as Ni, i = 1, 2, 3, 4) or 8-connected neighbors (marked as Ni, i = 1, 2, ..., 8) would be checked for inclusion in the region, using C as the reference. The pixel C is called the current center pixel. However, because some of the neighboring pixels (N1 and N5 in the illustration in Figure 5.18) are already included in the region shown, only N2, N3, and N4 in the case of 4-connectivity, or N2, N3, N4, N6, N7, and N8 in the case of 8-connectivity, are compared with their current center pixel C for region growing, rather than with the original seed pixel.


FIGURE 5.17
Arrays of pixel values illustrating region growing by pixel aggregation on a small test image, with a region of gray levels of approximately 124 to 128, a background of approximately 100 to 102, and a marked seed pixel; the panels show the regions grown under different criteria.


For the example shown in Figure 5.17, this procedure generates the same result as shown in Figure 5.17 (d), independent of the location of the seed pixel (within the ROI), when using the same additive tolerance level (T = 3).

FIGURE 5.18
Schematic illustration of region growing using the current center pixel C: the shaded area represents the region grown so far, with the seed pixel and the neighbors N1 to N8 of C labeled.

5.4.5 Region growing using a multiplicative tolerance

In addition to the sensitivity of the region to seed pixel selection with additive-tolerance region growing, the additive tolerance level or absolute difference in gray level T is not a good criterion for region growing: an additive tolerance level of 3, while appropriate for a seed pixel value or running mean of, for example, 127, may not be suitable when the seed pixel gray level or running mean is at a different level, such as 230 or 10. In order to address this problem, a relative difference, based upon a multiplicative tolerance level τ, could be employed. Then, the criterion for region growing could be defined as

\[
\frac{|f(m, n) - \mu_{R_c}|}{\mu_{R_c}} \le \tau
\]

or

\[
\frac{|f(m, n) - \mu_{R_c}|}{f(m, n) + \mu_{R_c}} \le \tau,
\]

where f(m, n) is the gray level of the current pixel being checked for inclusion, and μ_Rc could stand for the original seed pixel value, the current center pixel value, or the running-mean gray level. Observe that the two equations above are comparable to the definitions of simultaneous contrast in Equations 2.7 and 2.8.
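A small sketch of the two relative-difference tests as reconstructed above; the choice of denominator and the tolerance value are assumptions for illustration, and the reference value may be the seed value, the current center value, or the running mean:

```python
def within_multiplicative_tolerance(value, reference, tau, symmetric=False):
    """Relative-difference inclusion test for region growing.

    With symmetric=False the difference is normalized by the reference alone;
    with symmetric=True it is normalized by the sum of the two values."""
    diff = abs(value - reference)
    denom = (value + reference) if symmetric else reference
    return diff <= tau * denom

# Illustrative use: tau = 0.03 admits a pixel of 124 against a running mean of 127.
ok = within_multiplicative_tolerance(124.0, 127.0, tau=0.03)
```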

The additive and multiplicative tolerance levels both determine the maximum gray-level deviation allowed within a region, and any deviation less than this level is considered to be an intrinsic property of the region, or to be noise. Multiplicative tolerance is meaningful when related to the SNR of a region (or image), whereas additive tolerance has a direct connection with the standard deviation of the pixels within the region or a given image.

5.4.6 Analysis of region growing in the presence of noise

In order to analyze the performance of region-growing methods in the presence of noise, let us assume that the given image g may be modeled as an ideal image f plus a noise image η, where f consists of a series of strictly uniform, disjoint, or nonoverlapping regions R_i, i = 1, 2, ..., k, and η includes their corresponding noise parts η_i, i = 1, 2, ..., k. Mathematically, the image may be expressed as g = f + η, with

\[
\eta = \bigcup_i \eta_i, \quad i = 1, 2, \ldots, k. \tag{5.44}
\]

A strictly uniform region R_i is composed of a set of connected pixels f(m, n) at positions (m, n) whose values equal a constant μ_i, that is,

\[
R_i = \{(m, n) \,|\, f(m, n) = \mu_i\}. \tag{5.45}
\]

The set of regions R_i, i = 1, 2, ..., k, is what we expect to obtain as the result of segmentation. Suppose that the noise parts η_i, i = 1, 2, ..., k, are composed of white noise with zero mean and standard deviation σ_i; then, we have

\[
g = \bigcup_i (R_i + \eta_i), \quad i = 1, 2, \ldots, k. \tag{5.46}
\]


If

\[
\sigma_1 \simeq \sigma_2 \simeq \cdots \simeq \sigma_k \simeq \sigma, \tag{5.49}
\]

where the symbol ≃ represents statistical similarity, the image g may be described as

\[
g \simeq \bigcup_i R_i + \eta, \quad i = 1, 2, \ldots, k, \tag{5.50}
\]

and

\[
f = \bigcup_i R_i \simeq g - \eta, \quad i = 1, 2, \ldots, k. \tag{5.51}
\]

Additive-tolerance region growing is well-suited for segmentation of this special type of image, and an additive tolerance level solely determined by σ may be used globally over the image. However, such special cases are rare in real images. A given image generally has to be modeled as in Equation 5.46, where multiplicative-tolerance region growing may be more suitable, with the expectation that a global multiplicative tolerance level can be derived for all of the regions in the given image, because the multiplicative tolerance level could be made a function of σ_i and μ_i that is directly related to the SNR, which can be defined as 10 log_10(μ_i²/σ_i²). The derivation of an appropriate tolerance level is the problem that the adaptive region-growing method of Shen et al. [274, 320] attempts to solve.

When human observers attempt to recognize an object in an image, they are likely to use the information available in both an object of interest and its surroundings. Unlike most of the reported boundary detection methods, the HVS detects object boundaries based not only on the information around the boundary itself, such as the pixel variance and gradient around or across the boundary, but also on the characteristics of the object. Shen et al. [274, 320] proposed using the information contained within a region as well as its relationship with the surrounding background to determine the appropriate tolerance level to grow a region. Such information could be represented by a set of features characterizing the region and its background. With increasing tolerance levels obtained using a certain step size, it could be expected that the
