DIGITAL IMAGE PROCESSING, Fourth Edition (Part 9)


16.5 VISUAL TEXTURE DISCRIMINATION

A discrete stochastic field is an array of numbers that are randomly distributed in amplitude and governed by some joint probability density (12,13). When converted to light intensities, such fields can be made to approximate natural textures surprisingly well by control of the generating probability density. This technique is useful for generating realistic appearing artificial scenes for applications such as airplane flight simulators. Stochastic texture fields are also an extremely useful tool for investigating human perception of texture as a guide to the development of texture feature extraction methods.

In the early 1960s, Julesz (14) attempted to determine the parameters of stochastic texture fields of perceptual importance. This study was extended later by Julesz et al. (15–17). Further extensions of Julesz's work have been made by Pollack (18),

FIGURE 16.4-2 Brodatz texture fields.


Purks and Richards (19) and Pratt et al. (20). These studies have provided considerable insight into the mechanism of human visual perception and have led to some useful quantitative texture measurement methods.

Figure 16.5-1 is a model for stochastic texture generation. In this model, an array of independent, identically distributed random variables passes through a linear or nonlinear spatial operator to produce a stochastic texture array. By controlling the form of the generating probability density and the spatial operator, it is possible to create texture fields with specified statistical properties. Consider a continuous amplitude pixel $x_0$ at some coordinate of the array, and let the set $\{z_1, z_2, \ldots, z_J\}$ denote neighboring pixels, not necessarily nearest geometric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fashion. The conditional probability density of $x_0$ conditioned on the state of its neighbors is given by

$p(x_0 \mid z_1, z_2, \ldots, z_J) = \dfrac{p(x_0, z_1, \ldots, z_J)}{p(z_1, \ldots, z_J)}$   (16.5-1)

The first-order density employs no conditioning, the second-order density implies that J = 1, the third-order density implies that J = 2, and so on.
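The generation model of Figure 16.5-1 can be illustrated with a short sketch. The kernel, field size and uniform generating density below are illustrative assumptions, not values from the text; any other linear or nonlinear spatial operator could be substituted.

```python
import numpy as np
from scipy.signal import convolve2d

def generate_texture(shape=(256, 256), seed=0):
    """Sketch of the Figure 16.5-1 model: an i.i.d. random array is
    passed through a spatial operator to produce a stochastic texture."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(size=shape)            # i.i.d. generating process
    kernel = np.array([[1, 2, 1],          # illustrative linear low-pass
                       [2, 4, 2],          # operator; it controls the
                       [1, 2, 1]]) / 16.0  # spatial correlation
    return convolve2d(w, kernel, mode='same', boundary='symm')
```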

16.5.1 Julesz Texture Fields

In his pioneering texture discrimination experiments, Julesz utilized Markov process state methods to create stochastic texture arrays independently along rows of the array. The family of Julesz stochastic arrays is defined below (13).

1. Notation. Let $x_{-1}$ denote a row neighbor of pixel $x_0$, and let P(m), for m = 1, 2, ..., M, denote a desired probability generating function.

2. First-order process. Set $x_0 = n$, where n is an integer generated according to a desired probability function P(m). The resulting pixel probability is simply $P(x_0 = m) = P(m)$.


3. Second-order process. Set $x_0 = \mathrm{MOD}\{x_{-1} + n, M\}$, where the modulus function $\mathrm{MOD}\{p, q\}$ denotes the remainder of the integer division p/q for integers p and q. This gives a first-order probability

$P(m) = \dfrac{1}{M}$   (16.5-3a)

and a transition probability

$P(x_0 = m_1 \mid x_{-1} = m_2) = P(\mathrm{MOD}\{m_1 - m_2, M\})$   (16.5-3b)

4. Third-order process. The pixel $x_0$ is set as a function of its two row neighbors $x_{-1}$ and $x_{-2}$, and the governing probabilities then become

(16.5-4a)

(16.5-4b)

(16.5-4c)

This process has the interesting property that pixel pairs along a row are independent, and consequently, the process is spatially uncorrelated.
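A minimal sketch of the second-order (modulo-M Markov) row process as reconstructed above; the uniform generator density, field size and level count M are assumptions for illustration.

```python
import numpy as np

def julesz_second_order(rows=256, cols=256, M=8, P=None, seed=0):
    """Sketch of a Julesz-style second-order row process: each pixel is
    the modulo-M sum of its left neighbor and an independent random
    integer n drawn with probability P(m); rows are generated
    independently of one another."""
    rng = np.random.default_rng(seed)
    if P is None:                                # illustrative generator
        P = np.full(M, 1.0 / M)
    x = np.zeros((rows, cols), dtype=int)
    x[:, 0] = rng.integers(0, M, size=rows)      # seed each row
    n = rng.choice(M, size=(rows, cols), p=P)    # i.i.d. increments
    for j in range(1, cols):
        x[:, j] = (x[:, j - 1] + n[:, j]) % M    # MOD{x_{-1} + n, M}
    return x
```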

Figure 16.5-2 contains several examples of Julesz texture field discrimination tests performed by Pratt et al. (20). In these tests, the textures were generated according to the presentation format of Figure 16.5-3. In these and subsequent visual texture discrimination tests, the perceptual differences are often small. Proper discrimination testing should be performed using high-quality photographic transparencies, prints or electronic displays. The following moments were used as simple indicators of differences between the generating distributions and densities of the stochastic fields:

$\eta = E\{x_0\}$   (16.5-5a)

$\sigma^2 = E\{[x_0 - \eta]^2\}$   (16.5-5b)

$\theta = \dfrac{E\{[x_0 - \eta][x_{-1} - \eta][x_{-2} - \eta]\}}{\sigma^3}$   (16.5-5c)

The examples of Figure 16.5-2a and b indicate that texture field pairs differing in their first- and second-order distributions can be discriminated. The example of Figure 16.5-2c supports the conjecture, attributed to Julesz, that differences in third-order, and presumably higher-order, distribution texture fields cannot be perceived provided that their first- and second-order distributions are pairwise identical.

FIGURE 16.5-2 Field comparison of Julesz stochastic fields: (a) different first order.


16.5.2 Pratt, Faugeras and Gagalowicz Texture Fields

Pratt et al. (20) have extended the work of Julesz et al. (14–17) in an attempt to study the discrimination ability of spatially correlated stochastic texture fields. A class of Gaussian fields was generated according to the conditional probability density

$p(x_0 \mid z_1, \ldots, z_J) = \dfrac{(2\pi)^{-(J+1)/2}\,|K_{J+1}|^{-1/2}\exp\left\{-\frac{1}{2}(v_{J+1} - \eta_{J+1})^T (K_{J+1})^{-1}(v_{J+1} - \eta_{J+1})\right\}}{(2\pi)^{-J/2}\,|K_J|^{-1/2}\exp\left\{-\frac{1}{2}(v_J - \eta_J)^T (K_J)^{-1}(v_J - \eta_J)\right\}}$   (16.5-6a)

where $v_{J+1} = [x_0, z_1, \ldots, z_J]^T$ is the vector of pixels under consideration (16.5-6b) and $\eta_{J+1} = E\{v_{J+1}\}$ is its mean vector (16.5-6c).

The covariance matrix of Eq. 16.5-6a is of the parametric form

FIGURE 16.5-3 Presentation format for visual texture discrimination experiments.

where the parameters denote correlation lag terms. Figure 16.5-4 presents an example of the row correlation functions used in the texture field comparison tests described below.

Figures 16.5-5 and 16.5-6 contain examples of Gaussian texture field comparison tests. In Figure 16.5-5, the first-order densities are set equal, but the second-order nearest neighbor conditional densities differ according to the covariance function plot of Figure 16.5-4a. Visual discrimination can be made in Figure 16.5-5, in which the correlation parameter differs by 20%. Visual discrimination has been found to be marginal when the correlation factor differs by less than 10% (20). The first- and second-order densities of each field are fixed in Figure 16.5-6, and the third-order

FIGURE 16.5-4 Row correlation factors for stochastic field generation. Dashed line, field A; solid line, field B. (b) Constrained third-order density.


conditional densities differ according to the plan of Figure 16.5-4b. Visual discrimination is possible. The test of Figure 16.5-6 seemingly provides a counterexample to the Julesz conjecture. However, the second-order density pairs are not necessarily equal for an arbitrary neighbor, and therefore the conditions necessary to disprove Julesz's conjecture are violated.

To test the Julesz conjecture for realistically appearing texture fields, it is necessary to generate a pair of fields with identical first-order densities, identical

FIGURE 16.5-5 Field comparison of Gaussian stochastic fields with different second-order densities.

FIGURE 16.5-6 Field comparison of Gaussian stochastic fields with different third-order densities.


Markovian type second-order densities, and differing third-order densities for every pair of similar observation points in both fields. An example of such a pair of fields is presented in Figure 16.5-7 for a non-Gaussian generating process (19). In this example, the texture appears identical in both fields, thus supporting the Julesz conjecture.

Gagalowicz has succeeded in generating a pair of texture fields that disprove the Julesz conjecture (21). However, the counterexample, shown in Figure 16.5-8, is not very realistic in appearance. Thus, it seems likely that if a statistically based

FIGURE 16.5-7 Field comparison of correlated Julesz stochastic fields with identical first- and second-order densities, but different third-order densities.

FIGURE 16.5-8 Gagalowicz counterexample. Moments: $\eta_A = 0.500$, $\eta_B = 0.500$; $\sigma_A = 0.167$, $\sigma_B = 0.167$; $\alpha_A = 0.850$, $\alpha_B = 0.850$; $\theta_A = 0.040$, $\theta_B = -0.027$.


Figure 16.5-9 presents a field comparison of correlated stochastic fields with identical means, variances and autocorrelation functions, but different nth-order probability densities. Visual discrimination is readily accomplished between the fields. This leads to the conclusion that these low-order moment measurements, by themselves, are not always sufficient to distinguish texture fields.

16.6 TEXTURE FEATURES

As noted in Section 16.4, there is no commonly accepted quantitative definition of visual texture. As a consequence, researchers seeking a quantitative texture measure have been forced to search intuitively for texture features, and then attempt to evaluate their performance by techniques such as those presented in Section 16.1. The following subsections describe several texture features of historical and practical importance. References 22 to 24 provide surveys on image texture feature extraction. Randen and Husoy (25) have performed a comprehensive study of many texture feature extraction methods.

FIGURE 16.5-9 Field comparison of correlated stochastic fields with identical means, variances and autocorrelation functions, but different nth-order probability densities generated by different processing of the same input field. The input array consists of uniform random variables raised to the 256th power. Moments: $\eta_A = 0.413$, $\eta_B = 0.412$; $\sigma_A = 0.078$, $\sigma_B = 0.078$; $\alpha_A = 0.915$, $\alpha_B = 0.917$; $\theta_A = 1.512$, $\theta_B = 0.006$.


woodland regions extracted from aerial photographs. On the other hand, Fourier spectral analysis has proved successful (28,29) in the detection and classification of coal miner's black lung disease, which appears as diffuse textural deviations from the norm.

16.6.2 Edge Detection Methods

Rosenfeld and Troy (30) have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array $E(j, k)$ is produced by some edge detector such that $E(j, k) = 1$ for a detected edge and $E(j, k) = 0$ otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. This texture measure is defined as the number of detected edge pixels within a measurement window, normalized by the window area.

16.6.3 Autocorrelation Methods

The autocorrelation function is defined as


for computation over a window with pixel lags $(m, n)$. Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift than will a region of fine texture. Thus, texture coarseness should be proportional to the spread of the autocorrelation function. Faugeras and Pratt (5) have proposed the following set of autocorrelation spread measures: S(2, 0) and S(0, 2), the cross-relation S(1, 1) and the second-degree spread S(2, 2).
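A sketch of these spread measures, assuming the autocorrelation is estimated over nonnegative lags and the spreads are ordinary moment sums of the normalized autocorrelation mass; the book's exact normalization may differ.

```python
import numpy as np

def autocorrelation_spread(f, max_lag=8):
    """Sketch of the Faugeras-Pratt spread measures S(u, v) computed
    from an estimated, normalized autocorrelation function."""
    f = f - f.mean()
    rows, cols = f.shape
    A = np.zeros((max_lag + 1, max_lag + 1))
    for m in range(max_lag + 1):
        for n in range(max_lag + 1):
            A[m, n] = (f[m:, n:] * f[:rows - m, :cols - n]).mean()
    A /= A[0, 0]                          # normalize to A(0, 0) = 1
    m, n = np.meshgrid(np.arange(max_lag + 1), np.arange(max_lag + 1),
                       indexing='ij')
    def S(u, v):                          # moment spread of the mass
        return (m**u * n**v * A).sum()
    return S(2, 0), S(0, 2), S(1, 1), S(2, 2)
```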

Figure 16.6-1 shows perspective views of the autocorrelation functions of the four Brodatz texture examples (5). Bhattacharyya distance measurements of these texture fields, performed by Faugeras and Pratt (5), are presented in Table 16.6-1. These B-distance measurements indicate that the autocorrelation shape features are marginally adequate for the set of four shape features, but unacceptable for fewer features. Tests by Faugeras and Pratt (5) verify that the B-distances are low for

FIGURE 16.6-1 Perspective views of autocorrelation functions of Brodatz texture fields.



the stochastic field pairs of Figure 16.5-9, which have the same autocorrelation functions but are visually distinct.

16.6.4 Decorrelation Methods

Stochastic texture fields generated by the model of Figure 16.5-1 can be described quite compactly by specification of the spatial operator and the stationary first-order probability density p(W) of the independent, identically distributed generating process. This observation has led to a texture feature extraction procedure, developed by Faugeras and Pratt (5), in which an attempt has been made to invert the model and estimate its parameters. Figure 16.6-2 is a block diagram of their decorrelation method of texture feature extraction. In the first step of the method, the spatial autocorrelation function is measured over a texture field to be analyzed. The autocorrelation function is then used to develop a whitening filter, with an impulse response derived using techniques described in Section 19.2. The whitening filter is a special type of decorrelation operator. It is used to generate the whitened field

(16.6-4)

This whitened field, which is spatially uncorrelated, can be utilized as an estimate of the independent generating process by forming its first-order histogram.
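A sketch of the decorrelation step, implemented here as frequency-domain spectral flattening rather than the Section 19.2 whitening-filter design used in the text; the histogram bin count is an arbitrary choice.

```python
import numpy as np

def whiten(f, eps=1e-8):
    """Sketch: flatten the power spectrum so the output field is
    approximately spatially uncorrelated, then take its first-order
    histogram as a feature vector."""
    F = np.fft.fft2(f - f.mean())
    power = np.abs(F)**2
    white = np.real(np.fft.ifft2(F / np.sqrt(power + eps)))
    hist, _ = np.histogram(white, bins=32, density=True)
    return white, hist
```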


FIGURE 16.6-2 Decorrelation method of texture feature extraction.

FIGURE 16.6-3 Whitened Brodatz texture fields.

Trang 14

If the spatial operator were known exactly, then, in principle, it could be used to identify the generating probability density from the texture observation. But the whitened field estimate can only be used to identify the autocorrelation function, which, of course, is already known. As a consequence, the texture generation model cannot be inverted. However, the shape of the histogram of the whitened field, augmented by the shape of the autocorrelation function, has proved to provide useful texture features.

Figure 16.6-3 shows the whitened texture fields of the Brodatz test images. Figure 16.6-4 provides plots of their histograms. The whitened fields are observed to be visually distinctive; their histograms are also different from one another.

Tables 16.6-2 and 16.6-3 list, respectively, the B-distance measurements for histogram shape features alone, and for histogram and autocorrelation shape features. The B-distance is relatively low for some of the test textures for histogram-only features. A combination of the autocorrelation shape and histogram shape features provides good results, as noted in Table 16.6-3.

An obvious disadvantage of the decorrelation method of texture measurement, as just described, is the large amount of computation involved in generating the whitening operator. An alternative is to use an approximate decorrelation operator. Two candidates, investigated by Faugeras and Pratt (5), are the Laplacian and Sobel gradients. Figure 16.6-5 shows the resultant decorrelated fields for these operators. The B-distance measurements using the Laplacian and Sobel gradients are presented in Tables 16.6-2 and 16.6-3. These tests indicate that the whitening operator is superior, on average, to the Laplacian operator. But the Sobel operator yields the largest average and largest


16.6.5 Dependency Matrix Methods

Haralick et al. (7) have proposed a number of textural features based on the joint amplitude histogram of pairs of pixels. If an image region contains fine texture, the two-dimensional histogram of pixel pairs will tend to be uniform, and for coarse texture, the histogram values will be skewed toward the diagonal of the histogram. Consider a pair of pixels that are separated by r radial units at an angle $\theta$ with respect to the horizontal axis. Let the two-dimensional histogram measurement of an image be taken over some window, where each pixel is quantized over a range of gray levels. The two-dimensional histogram can be considered as an estimate of the joint probability distribution

(16.6-5)

FIGURE 16.6-5 Laplacian and Sobel gradients of Brodatz texture fields: (a) Laplacian, sand; (b) Sobel, sand; (c) Laplacian, raffia; (d) Sobel, raffia.


For each member of the parameter set $(r, \theta)$, the two-dimensional histogram may be regarded as an array of numbers relating the measured statistical dependency of pixel pairs. Such arrays have been called a gray scale dependency matrix or a co-occurrence matrix. Because a histogram array must be accumulated for each image point and separation set under consideration, it is usually computationally necessary to restrict the angular and radial separation to a limited number of values. Figure 16.6-6 illustrates the geometrical relationships of histogram measurements made for four radial separation points and angles of $0$, $\pi/4$, $\pi/2$ and $3\pi/4$ radians under the assumption of angular symmetry.

To obtain statistical confidence in estimation of the joint probability distribution, the histogram must contain a reasonably large average occupancy level. This can be achieved either by restricting the number of amplitude quantization levels or by utilizing a relatively large measurement window. The former approach results in a loss of accuracy in the measurement of low-amplitude texture, while the latter approach causes errors if the texture changes over the large window. A typical compromise is to use 16 gray levels and a window of about 30 to 50 pixels on each side. Perspective views of joint amplitude histograms of two texture fields are presented in Figure 16.6-7.
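A sketch of a gray scale dependency (co-occurrence) matrix for a single displacement, with the $(r, \theta)$ separation expressed as integer row/column offsets and the 16-level quantization suggested above, plus the inertia spread indicator discussed next.

```python
import numpy as np

def cooccurrence(image, dr, dc, levels=16):
    """Sketch of a co-occurrence matrix for displacement (dr, dc)."""
    q = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    H, W = q.shape
    a = q[max(0, -dr):H - max(0, dr), max(0, -dc):W - max(0, dc)]
    b = q[max(0, dr):H - max(0, -dr), max(0, dc):W - max(0, -dc)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)   # accumulate pair counts
    return P / P.sum()                        # joint probability estimate

def inertia(P):
    """Inertia (contrast) spread indicator: sum of (a - b)^2 P(a, b)."""
    a, b = np.indices(P.shape)
    return ((a - b)**2 * P).sum()
```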

For a given separation set, the histogram obtained for fine texture tends to be more uniformly dispersed than the histogram for coarse texture. Texture coarseness can be measured in terms of the relative spread of histogram occupancy cells about the main diagonal of the histogram. Haralick et al. (7) have proposed a number of spread indicators for texture measurement. Several of these have been presented in Section 16.2. As an example, the inertia function of Eq. 16.2-15 results in a texture measure of the form

$T(j, k, r, \theta) = \sum_{a}\sum_{b}(a - b)^2\,P(a, b; j, k, r, \theta)$

where $P(a, b; j, k, r, \theta)$ denotes the measured joint probability of gray levels a and b at separation $(r, \theta)$ about image point $(j, k)$.


If the textural region of interest is suspected to be angularly invariant, it is reasonable to average over the measurement angles of a particular measure to produce the mean textural measure (23)

$M_T(j, k, r) = \dfrac{1}{N_\theta}\sum_{\theta} T(j, k, r, \theta)$

where $N_\theta$ is the number of measurement angles.

FIGURE 16.6-7 Perspective views of gray scale dependency matrices for r = 4, θ = 0: (a) grass; (b) dependency matrix, grass; (c) ivy; (d) dependency matrix, ivy.


Another useful measurement is the angular independent spread defined by

(16.6-9)

16.6.6 Microstructure Methods

Examination of the whitened, Laplacian and Sobel gradient texture fields of Figures 16.6-3 and 16.6-5 reveals that they appear to accentuate the microstructure of the texture. This observation was the basis of a texture feature extraction scheme developed by Laws (31), and described in Figure 16.6-8. Laws proposed that the set of nine $3 \times 3$ pixel impulse response arrays shown in Figure 16.6-9 be convolved with a texture field to accentuate its microstructure. The ith microstructure array is defined as

(16.6-10)

Then, the energy of these microstructure arrays is measured by forming their moving window standard deviation according to Eq. 16.2-2, over a window that contains a few cycles of the repetitive texture.

FIGURE 16.6-8 Laws microstructure texture feature extraction method.


Figure 16.6-10 shows a mosaic of several Brodatz texture fields that have been used to test the Laws feature extraction method. Note that some of the texture fields appear twice in the mosaic. Figure 16.6-11 illustrates the texture arrays. In classification tests of the Brodatz textures performed by Laws (31), the correct texture was identified in nearly 90% of the trials.

Many of the microstructure detection operators of Figure 16.6-9 have been encountered previously in this book: the pyramid average, the Sobel horizontal and vertical gradients, the weighted line horizontal and vertical gradients and the cross second derivative. The nine Laws operators form a basis set that can be generated from all outer product combinations of the three vectors

$v_1 = \dfrac{1}{4}\,[1 \;\; 2 \;\; 1]^T$   (16.6-11a)

$v_2 = \dfrac{1}{2}\,[-1 \;\; 0 \;\; 1]^T$   (16.6-11b)

$v_3 = \dfrac{1}{2}\,[-1 \;\; 2 \;\; -1]^T$   (16.6-11c)

Alternatively, the Chebyshev basis set proposed by Haralick (32) for edge detection, as described in Section 16.3.3, can be used for texture measurement. The first Chebyshev basis vector is

$v_1 = \dfrac{1}{3}\,[1 \;\; 1 \;\; 1]^T$   (16.6-12)

The other two are identical to Eqs. 16.6-11b and 16.6-11c. The Laws procedure can be extended by using larger size Chebyshev arrays or other types of basis arrays (33).

Ade (34) has suggested a microstructure texture feature extraction procedure similar in nature to the Laws method, which is based on a principal components transformation of a texture sample. In the development of this transformation, pixels within a $3 \times 3$ neighborhood are regarded as being column stacked into a $9 \times 1$ vector, as shown in Figure 16.6-12a. Then a $9 \times 9$ covariance matrix K that specifies all pairwise covariance relationships of pixels within the stacked vector is estimated from a set of prototype texture fields. Next, a $9 \times 9$ transformation matrix T that diagonalizes the covariance matrix K is computed, as described in Eq. 5.5-8.

FIGURE 16.6-10 Mosaic of Brodatz texture fields.


FIGURE 16.6-11 Laws microstructure texture features.


The rows of T are the eigenvectors of the principal components transformation. Each eigenvector is then cast into a $3 \times 3$ impulse response array by the destacking operation of Eq. 5.3-4. The resulting nine eigenmatrices are then used in place of the Laws fixed impulse response arrays, as shown in Figure 16.6-8. Ade (34,35) has computed eigenmatrices for a Brodatz texture field and a cloth sample. Interestingly, these eigenmatrices are similar in structure to the Laws arrays.

Manian et al. (36) have also developed a variant of the Laws microstructure method. With reference to Figure 16.6-8, they use the six $2 \times 2$ impulse response arrays, called logical operators, shown in Figure 16.6-13. The standard deviation measurement is over a $5 \times 5$ pixel window. Next, features are extracted from the standard deviation measurements using the four zonal filter feature masks of Figure 16.3-1. They report good classification results for the Brodatz texture set (36).

FIGURE 16.6-11 (continued) Laws microstructure texture features: (i) Laws no. 9.


FIGURE 16.6-13 Logical operator impulse response arrays.

16.6.7 Gabor Filter Methods

The microstructure method of texture feature extraction is not easily scalable; microstructure arrays must be derived to match the inherent periodicity of each texture to be characterized. Bovik et al. (37–39) have utilized Gabor filters (40) as an efficient means of scaling the impulse response function arrays of Figure 16.6-8 to the texture periodicity.

FIGURE 16.6-12 Neighborhood covariance relationships.


where F is a scaling factor. The Gaussian component is

$G(x, y) = \dfrac{1}{2\pi\lambda\sigma^2}\exp\left\{-\dfrac{(x/\lambda)^2 + y^2}{2\sigma^2}\right\}$   (16.6-13)

where $\sigma$ is the Gaussian spread factor and $\lambda$ is the aspect ratio between the x and y axes. The rotation of coordinates is specified by

$x' = x\cos\phi + y\sin\phi, \qquad y' = -x\sin\phi + y\cos\phi$   (16.6-14)

where $\phi$ is the orientation angle with respect to the x axis. The continuous domain filter transfer function is given by (38)

$H(u, v) = \exp\{-2\pi^2\sigma^2[\lambda^2(u' - F)^2 + (v')^2]\}$   (16.6-15)

Figure 16.6-14 shows the relationship between the real and imaginary components of the impulse response array and the magnitude of the transfer function (38). The impulse response array is composed of sine-wave gratings within the elliptical region. The half energy profile of the transfer function is shown in gray.

Grigorescu et al. (41) have performed a comprehensive comparison of Gabor filter texture features. In the comparative study of texture classification methods by Randen and Husoy (25), the Gabor filter method, like many other methods, gave mixed results. It performed well on some texture samples, but poorly on others.


16.6.8 Transform and Wavelet Methods

The Fourier spectra method of texture feature extraction can be generalized to other unitary transforms. The concept is straightforward. A texture sample is subdivided into pixel arrays, and a unitary transform is performed for each array, yielding a feature vector. The window size needs to be large enough to contain several cycles of the texture periodicity.
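A sketch of this transform approach, using a 2-D DCT as the unitary transform and mean coefficient magnitudes as the feature vector; both choices are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn

def block_transform_features(f, block=32):
    """Sketch: tile the texture sample into block x block arrays, apply
    a unitary transform (here a 2-D DCT), and average the coefficient
    magnitudes into a feature vector."""
    h, w = (f.shape[0] // block) * block, (f.shape[1] // block) * block
    tiles = f[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block, block)
    coeffs = np.stack([np.abs(dctn(t, norm='ortho')) for t in tiles])
    return coeffs.mean(axis=0).ravel()
```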

Mallat (42) has used the discrete wavelet transform, based on Haar wavelets (see Section 8.4.2), as a means of generating texture feature vectors. Improved results have been obtained by Unser (43), who has used a complete Haar-based

FIGURE 16.6-14 Relationship between impulse response array and transfer function of a Gabor filter.


singular-value decomposition of a texture sample. In this method, a texture sample is treated as a matrix X, and the amplitude-ordered set of singular values s(n), for n = 1, 2, ..., N, is computed, as described in Appendix A1.2. If the elements of X are spatially unrelated to one another, the singular values tend to be uniformly distributed in amplitude. On the other hand, if the elements of X are highly structured, the singular-value distribution tends to be skewed such that the lower-order singular values are much larger than the higher-order ones.

Figure 16.6-15 contains measurements of the singular-value distributions of the four Brodatz textures performed by Ashjari (44). In this experiment, the pixel texture originals were first subjected to a statistical rescaling process to produce four normalized texture images whose first-order distributions were Gaussian with identical moments. Next, these normalized texture images were subdivided into 196 non-overlapping pixel blocks, and an SVD transformation was taken of each block. Figure 16.6-15 is a plot of the average value of each singular value. The shape of the singular-value distributions can be quantified by the one-dimensional shape descriptors defined in Section 16.2. Table 16.6-4 lists Bhattacharyya distance measurements obtained by Ashjari (44) for the mean, standard deviation, skewness and kurtosis shape descriptors. For this experiment, the B-distances are relatively high, and therefore good classification results should be expected.

TABLE 16.6-4 Bhattacharyya Distance of SVD Texture Feature Sets for Prototype Texture Fields: SVD Features


REFERENCES

1. H. C. Andrews, Introduction to Mathematical Techniques in Pattern Recognition, Wiley-Interscience, New York, 1972.
2. R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, New York, 2001.
3. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed., Academic Press, New York, 1990.
4. W. S. Meisel, Computer-Oriented Approaches to Pattern Recognition, Academic Press, New York, 1972.
5. O. D. Faugeras and W. K. Pratt, "Decorrelation Methods of Texture Feature Extraction," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-2, 4, July 1980, 323–332.
6. R. O. Duda, "Image Data Extraction," unpublished notes, July 1975.
7. R. M. Haralick, K. Shanmugan and I. Dinstein, "Texture Features for Image Classification," IEEE Trans. Systems, Man and Cybernetics, SMC-3, November 1973, 610–621.
8. G. G. Lendaris and G. L. Stanley, "Diffraction Pattern Sampling for Automatic Pattern Recognition," Proc. IEEE, 58, 2, February 1970, 198–216.
9. R. M. Pickett, "Visual Analysis of Texture in the Detection and Recognition of Objects," in Picture Processing and Psychopictorics, B. C. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 289–308.
10. J. K. Hawkins, "Textural Properties for Pattern Recognition," in Picture Processing and Psychopictorics, B. C. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 347–370.
17. B. Julesz, "Experiments in the Visual Perception of Texture," Scientific American, 232, 4, April 1975, 2–11.
18. I. Pollack, Perceptual Psychophysics, 13, 1973, 276–280.
19. S. R. Purks and W. Richards, "Visual Texture Discrimination Using Random-Dot Patterns," J. Optical Society America, 67, 6, June 1977, 765–771.
20. W. K. Pratt, O. D. Faugeras and A. Gagalowicz, "Visual Discrimination of Stochastic Texture Fields," IEEE Trans. Systems, Man and Cybernetics, SMC-8, 11, November 1978, 796–804.
21. A. Gagalowicz, "Stochastic Texture Fields Synthesis from a priori Given Second Order Statistics," Proc. IEEE Computer Society Conf. Pattern Recognition and Image Processing, Chicago, IL, August 1979, 376–381.
22. E. L. Hall et al., "A Survey of Preprocessing and Feature Extraction Techniques for Radiographic Images," IEEE Trans. Computers, C-20, 9, September 1971, 1032–1044.
23. R. M. Haralick, "Statistical and Structural Approach to Texture," Proc. IEEE, 67, 5, May 1979, 786–804.
24. T. R. Reed and J. M. H. duBuf, "A Review of Recent Texture Segmentation and Feature Extraction Techniques," CVGIP: Image Understanding, 57, May 1993, 358–372.
25. T. Randen and J. H. Husoy, "Filtering for Classification: A Comparative Study," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-21, 4, April 1999, 291–310.
26. A. Rosenfeld, "Automatic Recognition of Basic Terrain Types from Aerial Photographs," Photogrammic Engineering, 28, 1, March 1962, 115–132.
27. J. M. Coggins and A. K. Jain, "A Spatial Filtering Approach to Texture Analysis," Pattern Recognition Letters, 3, 3, 1985, 195–203.
28. R. P. Kruger, W. B. Thompson and A. F. Turner, "Computer Diagnosis of Pneumoconiosis," IEEE Trans. Systems, Man and Cybernetics, SMC-4, 1, January 1974, 40–49.
29. R. N. Sutton and E. L. Hall, "Texture Measures for Automatic Classification of Pulmonary Disease," IEEE Trans. Computers, C-21, July 1972, 667–676.
30. A. Rosenfeld and E. B. Troy, "Visual Texture Analysis," Proc. UMR–Mervin J. Kelly Communications Conference, University of Missouri–Rolla, Rolla, MO, October 1970, Sec. 10-1.
31. K. I. Laws, "Textured Image Segmentation," USCIPI Report 940, University of Southern California, Image Processing Institute, Los Angeles, January 1980.
32. R. M. Haralick, "Digital Step Edges from Zero Crossing of Second Directional Derivatives," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 1, January 1984, 58–68.
33. M. Unser and M. Eden, "Multiresolution Feature Extraction and Selection for Texture Segmentation," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-11, 7, July 1989, 717–728.
34. F. Ade, "Characterization of Textures by Eigenfilters," Signal Processing, September 1983.
35. F. Ade, "Application of Principal Components Analysis to the Inspection of Industrial Goods," Proc. SPIE International Technical Conference/Europe, Geneva, April 1983.
36. V. Manian, R. Vasquez and P. Katiyar, "Texture Classification Using Logical Operators," IEEE Trans. Image Processing, 9, 10, October 2000, 1693–1703.
37. M. Clark and A. C. Bovik, "Texture Discrimination Using a Model of Visual Cortex," Proc. IEEE International Conference on Systems, Man and Cybernetics, Atlanta, GA, 1986.
38. A. C. Bovik, M. Clark and W. S. Geisler, "Multichannel Texture Analysis Using Localized Spatial Filters," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-12, 1, January 1990, 55–73.
39. A. C. Bovik, "Analysis of Multichannel Narrow-Band Filters for Image Texture Segmentation," IEEE Trans. Signal Processing, 39, 9, September 1991, 2025–2043.
40. D. Gabor, "Theory of Communication," J. Institute of Electrical Engineers, 93, 1946, 429–457.
41. S. Grigorescu, N. Petkov and P. Kruizinga, "Comparison of Texture Features Based on Gabor Filters," IEEE Trans. Image Processing, 11, 10, October 2002, 1160–1167.
42. S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-11, 7, July 1989, 674–693.
43. M. Unser, "Texture Classification and Segmentation Using Wavelet Frames," IEEE Trans. Image Processing, IP-4, 11, November 1995, 1549–1560.
44. B. Ashjari, "Singular Value Decomposition Texture Measurement for Image Classification," Ph.D. dissertation, University of Southern California, Department of Electrical Engineering, Los Angeles, February 1982.


17 IMAGE SEGMENTATION

Digital Image Processing: PIKS Scientific Inside, Fourth Edition, by William K. Pratt. Copyright © 2007 by John Wiley & Sons, Inc.

Segmentation of an image entails the division or separation of the image into regions of similar attribute. The most basic attribute for segmentation is image luminance amplitude for a monochrome image and color components for a color image. Image edges and texture are also useful attributes for segmentation.

The definition of segmentation adopted in this chapter is deliberately restrictive; no contextual information is utilized in the segmentation. Furthermore, segmentation does not involve classifying each segment. The segmenter only subdivides an image; it does not attempt to recognize the individual segments or their relationships to one another.

There is no theory of image segmentation. As a consequence, no single standard method of image segmentation has emerged. Rather, there are a collection of ad hoc methods that have received some degree of popularity. Because the methods are ad hoc, it would be useful to have some means of assessing their performance. Haralick and Shapiro (1) have established the following qualitative guideline for a good image segmentation: "Regions of an image segmentation should be uniform and homogeneous with respect to some characteristic such as gray tone or texture. Region interiors should be simple and without many small holes. Adjacent regions of a segmentation should have significantly different values with respect to the characteristic on which they are uniform. Boundaries of each segment should be simple, not ragged, and must be spatially accurate." Unfortunately, no quantitative image segmentation performance metric has been developed.

17.1 AMPLITUDE SEGMENTATION

17.1.1 Bilevel Luminance Thresholding

Many images can be characterized as containing some object of interest of reasonably uniform brightness placed against a background of differing brightness. Typical examples include handwritten and typewritten text, microscope biomedical samples and airplanes on a runway. For such images, luminance is a distinguishing feature that can be utilized to segment the object from its background. If an object of interest is white against a black background, or vice versa, it is a trivial task to set a midgray threshold to segment the object from the background. Practical problems occur, however, when the observed image is subject to noise and when both the object and background assume some broad range of gray scales. Another frequent difficulty is that the background may be nonuniform.

Figure 17.1-1a shows a digitized typewritten text consisting of dark letters against a lighter background. A gray scale histogram of the text is presented in Figure 17.1-1b. The expected bimodality of the histogram is masked by the relatively large percentage of background pixels. Figures 17.1-1c to e are threshold displays in which all pixels brighter than the threshold are mapped to unity display luminance and all the remaining pixels below the threshold are mapped to the zero level of display luminance. The photographs illustrate a common problem associated with image thresholding. If the threshold is set too low, portions of the letters are deleted (the stem of the letter "p" is fragmented). Conversely, if the threshold is set too high, object artifacts result (the loop of the letter "e" is filled in).

Several analytic approaches to the setting of a luminance threshold have been proposed (3,9). One method is to set the gray scale threshold at a level such that the cumulative gray scale count matches an a priori assumption of the gray scale probability distribution (10). For example, it may be known that black characters cover 25% of the area of a typewritten page. Thus, the threshold level on the image might be set such that the quartile of pixels with the lowest luminance are judged to be black. Another approach to luminance threshold selection is to set the threshold at the minimum point of the histogram between its bimodal peaks (11). Determination of the minimum is often difficult because of the jaggedness of the histogram.
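The cumulative-count rule can be written in one line; the 25% ink coverage below is the illustrative figure from the text.

```python
import numpy as np

def p_tile_threshold(image, black_fraction=0.25):
    """Sketch of the cumulative-count method: pick the threshold so that
    a known fraction of pixels (e.g., 25% ink) falls below it."""
    return np.quantile(image, black_fraction)
```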


FIGURE 17.1-1 Luminance thresholding segmentation of typewritten text: (a) gray scale text; (b) histogram; (c) high threshold, T = 0.67; (d) medium threshold, T = 0.50; (e) low threshold, T = 0.10; (f) histogram, Laplacian mask.


A global threshold can be determined by minimization of some difference measure between an image to be segmented and its test segments. Otsu (13) has developed a thresholding algorithm using the Euclidean difference. Sahoo et al. (6) have reported that the Otsu method is the best global thresholding technique among those that they tested.
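A sketch of Otsu's criterion, choosing the histogram threshold that maximizes the between-class variance of the bilevel segmentation; the bin count is an assumption.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Sketch of Otsu's method via the histogram of the image."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mu_t * w0 - mu)**2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0   # guard empty classes
    return centers[np.argmax(between)]
```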

Weska et al. (14) have suggested the use of a Laplacian operator to aid in luminance threshold selection. As defined in Eq. 15.3-1, the Laplacian forms the spatial second partial derivative of an image. Consider an image region in the vicinity of an object in which the luminance increases from a low plateau level to a higher plateau level in a smooth ramplike fashion. In the flat regions and along the ramp, the Laplacian is zero. Large positive values of the Laplacian will occur in the transition region from the low plateau to the ramp; large negative values will be produced in the transition from the ramp to the high plateau. A gray scale histogram formed of only those pixels of the original image that lie at coordinates corresponding to very high or low values of the Laplacian tends to be bimodal, with a distinctive valley between the peaks. Figure 17.1-1f shows the histogram of the text image of Figure 17.1-1a after the Laplacian mask operation.

If the background of an image is nonuniform, it often is necessary to adapt the luminance threshold to the mean luminance level (15,16). This can be accomplished by subdividing the image into small blocks and determining the best threshold level for each block by the methods discussed previously. Threshold levels for each pixel may then be determined by interpolation between the block centers. Yankowitz and Bruckstein (17) have proposed an adaptive thresholding method in which a threshold surface is obtained by interpolating an image only at points where its gradient is large.
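A sketch of the block-adaptive scheme: per-block thresholds (computed here with the otsu_threshold function from the previous sketch) are interpolated into a full-resolution threshold surface. The block size and bilinear interpolation are illustrative choices.

```python
import numpy as np
from scipy.ndimage import zoom

def adaptive_threshold(image, block=64):
    """Sketch: one threshold per block, interpolated between block
    centers to form a threshold surface."""
    h, w = image.shape
    nb_r, nb_c = max(1, h // block), max(1, w // block)
    t = np.empty((nb_r, nb_c))
    for i in range(nb_r):
        for j in range(nb_c):
            t[i, j] = otsu_threshold(image[i * block:(i + 1) * block,
                                           j * block:(j + 1) * block])
    surface = zoom(t, (h / nb_r, w / nb_c), order=1)  # bilinear
    return image > surface[:h, :w]
```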

17.1.2 Multilevel Luminance Thresholding

Effective segmentation can be achieved in some classes of images by a recursive multilevel thresholding method suggested by Tomita et al. (18). In the first stage of the process, the image is thresholded to separate brighter regions from darker


FIGURE 17.1-2 Multilevel luminance thresholding image segmentation of the peppers_mon image; first-level segmentation: (a) original; (b) original histogram; (c) segment 0; (d) segment 0 histogram; (e) segment 1; (f) segment 1 histogram.


regions by locating a minimum between luminance modes of the histogram. Then histograms are formed of each of the segmented parts. If these histograms are not unimodal, the parts are thresholded again. The process continues until the histogram of a part becomes unimodal. Figures 17.1-2 to 17.1-4 provide an example of this form of amplitude segmentation, in which the peppers image is segmented into four gray scale segments.
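A sketch of this recursive scheme. The valley detection below (split between the two largest histogram peaks, stop when no interior valley is found or the region is small) is a crude stand-in for a proper unimodality test.

```python
import numpy as np

def recursive_segment(image, mask=None, min_count=1000):
    """Sketch of a Tomita-style recursion: threshold a region at a
    histogram valley, then re-examine each part until unimodal."""
    if mask is None:
        mask = np.ones(image.shape, dtype=bool)
    values = image[mask]
    hist, edges = np.histogram(values, bins=64)
    peaks = [i for i in range(1, 63)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    if len(peaks) < 2 or values.size < min_count:
        return [mask]                    # treat as unimodal: one segment
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    valley = p1 + np.argmin(hist[p1:p2 + 1])
    t = edges[valley]
    lo, hi = mask & (image <= t), mask & (image > t)
    if not lo.any() or not hi.any():     # degenerate split: stop
        return [mask]
    return recursive_segment(image, lo, min_count) + \
           recursive_segment(image, hi, min_count)
```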

Several methods have been proposed for the selection of multilevel thresholds. The methods of Reddi et al. (19) and Kapur et al. (20) are based upon image histograms. References 21 to 23 are more recent proposals for threshold selection.

FIGURE 17.1-3 Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 0 branch: (a) segment 00; (b) segment 00 histogram; (c) segment 01; (d) segment 01 histogram.


17.1.3 Multilevel Color Component Thresholding

The multilevel luminance thresholding concept can be extended to the segmentation of color and multispectral images. Ohlander et al. (24,25) have developed a segmentation scheme for natural color images based on multidimensional thresholding of color images represented by their RGB color components, their luma/chroma YIQ components and by a set of nonstandard color components, loosely called intensity, hue and saturation. Figure 17.1-5 provides an example of the property histograms of these nine color components for a scene. The histograms have been measured over those parts of the original scene that are relatively devoid of texture: the nonbusy parts of the scene. This important step of the segmentation process is necessary to avoid false segmentation of homogeneous textured regions into many isolated parts. If the property histograms are not all unimodal, an ad hoc procedure is

FIGURE 17.1-4 Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 1 branch: (a) segment 10; (b) segment 10 histogram; (c) segment 11; (d) segment 11 histogram.


FIGURE 17.1-5 Typical property histograms for color image segmentation.
