The Essential Guide to Image Processing, Part 18



19.5 Approaches for Color and Multispectral Images

FIGURE 19.10

Canny edge detector of Eq. (19.22) applied after Gaussian smoothing over a range of σ: (a) σ = 0.5; (b) σ = 1; (c) σ = 2; and (d) σ = 4. The thresholds are fixed in each case at T_U = 10 and T_L = 4.

The only computational cost beyond that for grayscale images is incurred in obtaining the luminance component image, if necessary. In many color spaces, such as YIQ, HSL, CIELUV, and CIELAB, the luminance image is simply one of the components in that representation. For others, such as RGB, computing the luminance image is usually easy and efficient. The main drawback to luminance-only processing is that important edges are often not confined to the luminance component. Therefore, a gray level difference in the luminance component is often not the most appropriate criterion for edge detection in color images.


FIGURE 19.11

Canny edge detector of Eq. (19.22) applied after Gaussian smoothing with σ = 2: (a) T_U = 10, T_L = 1; (b) T_U = T_L = 10; (c) T_U = 20, T_L = 1; (d) T_U = T_L = 20. As T_L is changed, notice the effect on the results of hysteresis thresholding.

Another rather obvious approach is to apply a desired edge detection method separately to each color component and construct a cumulative edge map. One possibility for overall gradient magnitude, shown here for the RGB color space, combines the component gradient magnitudes [24]:

|∇f_c(x, y)| = |∇f_R(x, y)| + |∇f_G(x, y)| + |∇f_B(x, y)|.

The results, however, are biased according to the properties of the particular color space used. It is often important to employ a color space that is appropriate for the target
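This magnitude-sum combination can be sketched in a few lines of NumPy (a sketch, not the book's code; the function name and toy image are ours):

```python
import numpy as np

def component_gradient_sum(rgb):
    """Cumulative gradient magnitude: sum of per-channel gradient magnitudes."""
    total = np.zeros(rgb.shape[:2])
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[..., c].astype(float))
        total += np.hypot(gx, gy)
    return total

# A synthetic image with a vertical step edge in the red channel only.
img = np.zeros((8, 8, 3))
img[:, 4:, 0] = 1.0
mag = component_gradient_sum(img)
```

Because the channels are summed after taking magnitudes, an edge present in any single channel survives in the combined map.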


application. For example, edge detection that is intended to approximate the human visual system's behavior should utilize a color space having a perceptual basis, such as CIELUV or perhaps HSL. Another complication is the fact that the components' gradient vectors may not always be similarly oriented, making the search for local maxima of |∇f_c| along the gradient direction more difficult. If a total gradient image were to be computed by summing the color component gradient vectors, not just their magnitudes, then inconsistent orientations of the component gradients could destructively interfere and nullify some edges.

Vector approaches to color edge detection, while generally less computationally efficient, tend to have better theoretical justification. Euclidean distance in color space between the color vectors of a given pixel and its neighbors can be a good basis for an edge detector [24]. For the RGB case, the magnitude of the vector gradient is as follows:

|∇f_c(x, y)| = √( |∇f_R(x, y)|² + |∇f_G(x, y)|² + |∇f_B(x, y)|² ).
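The root-sum-of-squares form differs from the plain sum only in how the channel magnitudes are pooled; a minimal sketch (our function name and test image):

```python
import numpy as np

def vector_gradient_magnitude(rgb):
    """Root-sum-of-squares pooling of per-channel gradient magnitudes (RGB case)."""
    sq = np.zeros(rgb.shape[:2])
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[..., c].astype(float))
        sq += gx**2 + gy**2
    return np.sqrt(sq)

img = np.zeros((8, 8, 3))
img[:, 4:, :] = 1.0          # the same step edge in all three channels
mag = vector_gradient_magnitude(img)
```

With identical channels, each per-channel magnitude is 0.5 at the edge, so the pooled value is √(3 · 0.25) ≈ 0.866 rather than the 1.5 the plain sum would give.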

Trahanias and Venetsanopoulos [29] described the use of vector order statistics as the basis for color edge detection. A later paper by Scharcanski and Venetsanopoulos [26] furthered the concept. While not strictly founded on the gradient or Laplacian, their techniques are effective and worth mention here because of their vector bases. The basic idea is to look for changes in local vector statistics, particularly vector dispersion, to indicate the presence of edges.

Multispectral images can have many components, complicating the edge detection problem even further. Cebrián et al. [6] describe several methods that are useful for multispectral images having any number of components. Their description uses the second directional derivative in the gradient direction as the basis for the edge detector, but other types of detectors can be used instead. The components-average method forms a grayscale image by averaging all components, which have first been Gaussian-smoothed, and then finds the edges in that image. The method generally works well because multispectral images tend to have high correlation between components. However, it is possible for edge information to diminish or vanish if the components destructively interfere.
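The components-average pipeline (smooth each band, average, detect) can be sketched as follows, here with a simple gradient-threshold detector standing in for the second-directional-derivative detector the authors use (the function, threshold, and 4-band toy image are our assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def components_average_edges(img, sigma=1.0, thresh=0.1):
    """Smooth each band, average all bands into one grayscale image, then
    threshold its gradient magnitude to get a binary edge map."""
    smoothed = np.stack([gaussian_filter(img[..., c].astype(float), sigma)
                         for c in range(img.shape[2])])
    avg = smoothed.mean(axis=0)
    gy, gx = np.gradient(avg)
    return np.hypot(gx, gy) > thresh

img = np.zeros((16, 16, 4))   # a 4-band "multispectral" image
img[:, 8:, :] = 1.0           # correlated step edge in every band
edges = components_average_edges(img, sigma=1.0, thresh=0.1)
```

When the bands are highly correlated, as here, the average reinforces the edge; bands of opposite polarity would instead cancel in the average, which is the failure mode noted above.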

Cumani [8] explored operators for computing the vector gradient and created an edge detection approach based on combining the component gradients. A multispectral contrast function is defined, and the image is searched for pixels having maximal directional contrast. Cumani's method does not always detect edges present in the component bands, but it better avoids the problem of destructive interference between bands.

The maximal gradient method constructs a single gradient image from the component images [6]. The overall gradient image's magnitude and direction values at a given pixel are those of the component having the greatest gradient magnitude at that pixel. Some edges can be missed by the maximal gradient technique because they may be swamped by differently oriented, stronger edges present in another band.
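The per-pixel winner-take-all selection can be sketched directly (a sketch under our naming; two bands with edges of different strength illustrate the selection):

```python
import numpy as np

def maximal_gradient(img):
    """Per-pixel gradient magnitude/direction taken from whichever band has
    the largest gradient magnitude at that pixel."""
    mags, gxs, gys = [], [], []
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[..., c].astype(float))
        mags.append(np.hypot(gx, gy)); gxs.append(gx); gys.append(gy)
    mags, gxs, gys = np.stack(mags), np.stack(gxs), np.stack(gys)
    best = mags.argmax(axis=0)            # winning band index per pixel
    i, j = np.indices(best.shape)
    mag = mags[best, i, j]
    direction = np.arctan2(gys[best, i, j], gxs[best, i, j])
    return mag, direction

img = np.zeros((8, 8, 2))
img[:, 4:, 0] = 1.0     # strong vertical edge in band 0
img[:, 4:, 1] = 0.2     # weaker copy of the same edge in band 1
mag, direction = maximal_gradient(img)
```

Band 0 wins everywhere along the edge here; an edge living only in the losing band at a pixel is discarded, which is exactly the swamping failure described above.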

The method of combining component edge maps is the least efficient because an edge map must first be computed for every band. On the positive side, this method is capable of detecting any edge that is detectable in at least one component image. Combination of component edge maps into a single result is made more difficult by the edge location errors induced by Gaussian smoothing done in advance. The superimposed edges can become smeared in width because of the accumulated uncertainty in edge localization. A thinning step applied during the combination procedure can greatly reduce this edge blurring problem.
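The combination itself is a logical OR of the per-band binary maps (thinning, which we omit here, would follow); a minimal sketch with our own toy edge maps:

```python
import numpy as np

def combine_edge_maps(edge_maps):
    """OR-combine per-band binary edge maps: an edge detectable in any one
    band survives in the combined result."""
    return np.logical_or.reduce(edge_maps)

e1 = np.zeros((6, 6), bool); e1[:, 2] = True   # edge found only in band 1
e2 = np.zeros((6, 6), bool); e2[2, :] = True   # edge found only in band 2
combined = combine_edge_maps([e1, e2])
```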

19.6 SUMMARY

Gray level edge detection is most commonly performed by convolving an image, f, with a filter that is somehow based on the idea of the derivative. Conceptually, edges can be revealed by locating either the local extrema of the first derivative of f or the zero-crossings of its second derivative. The gradient and the Laplacian are the primary derivative-based functions used to construct such edge-detection filters. The gradient, ∇, is a 2D extension of the first derivative, while the Laplacian, ∇², acts as a 2D second derivative. A variety of edge detection algorithms and techniques have been developed that are based on the gradient or Laplacian in some way. Like any type of derivative-based filter, ones based on these two functions tend to be very sensitive to noise. Edge location errors, false edges, and broken or missing edge segments are often problems with edge detection applied to noisy images. For gradient techniques, thresholding is a common way to suppress noise and can be done adaptively for better results. Gaussian smoothing is also very helpful for noise suppression, especially when second-derivative methods such as the Laplacian are used. The Laplacian of Gaussian approach can also provide edge information over a range of scales, helping to further improve detection accuracy and noise suppression as well as providing clues that may be useful during subsequent processing.

Recent comparisons of various edge detectors have been made by Heath et al. [13] and Bowyer et al. [4]. They have concluded that the subjective quality of the results of various edge detectors applied to real images is quite dependent on the images themselves. Thus, there is no single edge detector that produces a consistently best overall result. Furthermore, they found it difficult to predict the best choice of edge detector for a given situation.

REFERENCES

[1] D. H. Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982.
[2] V. Berzins. Accuracy of Laplacian edge detectors. Comput. Vis. Graph. Image Process., 27:195–210, 1984.
[3] A. C. Bovik, T. S. Huang, and D. C. Munson, Jr. The effect of median filtering on edge estimation and detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-9:181–194, 1987.
[4] K. Bowyer, C. Kranenburg, and S. Dougherty. Edge detector evaluation using empirical ROC curves. Comput. Vis. Image Underst., 84:77–103, 2001.
[5] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8:679–698, 1986.
[6] M. Cebrián, M. Perez-Luque, and G. Cisneros. Edge detection alternatives for multispectral remote sensing images. In Proceedings of the 8th Scandinavian Conference on Image Analysis, Vol. 2, 1047–1054. NOBIM-Norwegian Soc. Image Process. & Pattern Recognition, Tromso, Norway, 1993.
[7] J. S. Chen, A. Huertas, and G. Medioni. Very fast convolution with Laplacian-of-Gaussian masks. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 293–298. IEEE, New York, 1986.
[8] A. Cumani. Edge detection in multispectral images. Comput. Vis. Graph. Image Process.: Graph. Models Image Process., 53:40–51, 1991.
[9] L. Ding and A. Goshtasby. On the Canny edge detector. Pattern Recognit., 34:721–725, 2001.
[10] R. M. Haralick and L. G. Shapiro. Computer and Robot Vision, Vol. 1. Addison-Wesley, Reading, MA, 1992.
[11] Q. Ji and R. M. Haralick. Efficient facet edge detection and quantitative performance evaluation. Pattern Recognit., 35:689–700, 2002.
[12] R. C. Hardie and C. G. Boncelet. Gradient-based edge detection using nonlinear edge enhancing prefilters. IEEE Trans. Image Process., 4:1572–1577, 1995.
[13] M. Heath, S. Sarkar, T. Sanocki, and K. Bowyer. Comparison of edge detectors: a methodology and initial study. Comput. Vis. Image Underst., 69(1):38–54, 1998.
[14] A. Huertas and G. Medioni. Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(5):651–664, 1986.
[15] S. R. Gunn. On the discrete representation of the Laplacian of Gaussian. Pattern Recognit., 32:1463–1472, 1999.
[16] A. K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[17] J. S. Lim. Two-Dimensional Signal and Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1990.
[18] D. Marr. Vision. W. H. Freeman, New York, 1982.
[19] D. Marr and E. Hildreth. Theory of edge detection. Proc. R. Soc. Lond. B, 207:187–217, 1980.
[20] B. Mathieu, P. Melchior, A. Oustaloup, and Ch. Ceyral. Fractional differentiation for edge detection. Signal Processing, 83:2421–2432, 2003.
[21] J. Merron and M. Brady. Isotropic gradient estimation. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 652–659. IEEE, New York, 1996.
[22] V. S. Nalwa and T. O. Binford. On detecting edges. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(6):699–714, 1986.
[23] W. K. Pratt. Digital Image Processing, 2nd ed. Wiley, New York, 1991.
[24] S. J. Sangwine and R. E. N. Horne, editors. The Colour Image Processing Handbook. Chapman and Hall, London, 1998.
[25] S. Sarkar and K. L. Boyer. Optimal infinite impulse response zero crossing based edge detectors. Comput. Vis. Graph. Image Process.: Image Underst., 54(2):224–243, 1991.
[26] J. Scharcanski and A. N. Venetsanopoulos. Edge detection of color images using directional operators. IEEE Trans. Circuits Syst. Video Technol., 7(2):397–401, 1997.
[27] P. Siohan, D. Pele, and V. Ouvrard. Two design techniques for 2-D FIR LoG filters. In M. Kunt, editor, Proc. SPIE, Visual Communications and Image Processing, Vol. 1360, 970–981, 1990.
[28] V. Torre and T. A. Poggio. On edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(2):147–163, 1986.
[29] P. E. Trahanias and A. N. Venetsanopoulos. Color edge detection using vector order statistics. IEEE Trans. Image Process., 2(2):259–264, 1993.
[30] A. P. Witkin. Scale-space filtering. In Proc. Int. Joint Conf. Artif. Intell., 1019–1022. William Kaufmann Inc., Karlsruhe, Germany, 1983.
[31] D. Ziou and S. Wang. Isotropic processing for gradient estimation. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 660–665. IEEE, New York, 1996.


20
Diffusion Partial Differential Equations for Edge Detection

Scott T. Acton
University of Virginia

20.1 INTRODUCTION AND MOTIVATION

20.1.1 Partial Differential Equations in Image and Video Processing

The collision of imaging and differential equations makes sense. Without motion or change of scene or changes within the scene, imaging is worthless. First, consider a static environment—we would not need vision in this environment, as the components of the scene are unchanging. In a dynamic environment, however, vision becomes the most valuable sense. Second, consider a constant-valued image with no internal changes or edges. Such an image is devoid of value in the information-theoretic sense.

The need for imaging is based on the presence of change. The mechanism for change in both time and space is described and governed by differential equations.

The partial differential equations (PDEs) of interest in this chapter enact diffusion. In chemistry or heat transfer, diffusion is a process that equilibrates concentration differences without creating or destroying mass. In image and video processing, we can consider the mass to be the pixel intensities or the gradient magnitudes, for example.

These important differential equations are PDEs, since they contain partial derivatives with respect to spatial coordinates and time. These equations, especially in the case of anisotropic diffusion, are nonlinear PDEs since the diffusion coefficient is typically nonlinear.

20.1.2 Edges and Anisotropic Diffusion

Sudden, sustained changes in image intensity are called edges. We know that the human visual system makes extensive use of edges to perform visual tasks such as object recognition [1]. Humans can recognize complex 3D objects using only line drawings or image edge information. Similarly, the extraction of edges from digital imagery allows a valuable abstraction of information and a reduction in processing and storage costs. Most


definitions of image edges involve some concept of feature scale. Edges are said to exist at certain scales—edges from detail existing at fine scales and edges from the boundaries of large objects existing at large scales. Furthermore, large-scale edges exist at fine scales, leading to a notion of edge causality.

In order to locate edges of various scales within an image, it is desirable to have an image operator that computes a scaled version of a particular image or frame in a video sequence. This operator should preserve the position of such edges and facilitate the extraction of the edge map through the scale space. The tool of isotropic diffusion, a linear lowpass filtering process, is not able to preserve the position of important edges through the scale space. Anisotropic diffusion, however, meets this criterion and has been used effectively in conjunction with edge detection.

The main benefit of anisotropic diffusion is edge preservation through the image smoothing process. Anisotropic diffusion yields intra-region smoothing, not inter-region smoothing, by impeding diffusion at the image edges. The anisotropic diffusion process can be used to retain image features of a specified scale. Furthermore, the localized computation of anisotropic diffusion allows efficient implementation on a locally-interconnected computer architecture. Caselles et al. furnish additional motivation for using diffusion in image and video processing [2]. The diffusion methods use localized models where discrete filters become PDEs as the sample spacing goes to zero. The PDE framework allows various properties to be proved or disproved, including stability, locality, causality, and the existence and uniqueness of solutions. Through the established tools of numerical analysis, high degrees of accuracy and stability are possible.

In this chapter, we introduce diffusion for image and video processing. We specifically concentrate on the implementation of anisotropic diffusion, providing several alternatives for the diffusion coefficient and the diffusion PDE. Energy-based variational diffusion techniques are also reviewed. Recent advances in anisotropic diffusion processes, including multiresolution techniques, multispectral techniques, and techniques for ultrasound and radar imagery, are discussed. Finally, the extraction of image edges after anisotropic diffusion is addressed, and vector diffusion processes for attracting active contours to boundaries are examined.

20.2 BACKGROUND ON DIFFUSION

20.2.1 Scale Space and Isotropic Diffusion

In order to introduce the diffusion-based processing methods and the associated processes of edge detection, let us define some notation. Let I represent an image with real-valued intensity I(x) at position x in the domain Ω. When defining the PDEs for diffusion, let I_t be the image at time t with intensities I_t(x). Corresponding with image I is the edge map e—the image of "edge pixels" e(x) with Boolean range (0 = no edge, 1 = edge), or real-valued range e(x) ∈ [0, 1]. The set of edge positions in an image is denoted by Ψ.

The concept of scale space is at the heart of diffusion-based image and video processing. A scale space is a collection of images that begins with the original, fine scale image and progresses toward more coarse scale representations. Using a scale space, important image processing tasks such as hierarchical searches, image coding, and image segmentation may be efficiently realized. Implicit in the creation of a scale space is the scale generating filter. Traditionally, linear filters have been used to scale an image. In fact, the scale space of Witkin [3] can be derived using a Gaussian filter:

I_t = G(x; t) * I,

where G(x; t) is a Gaussian kernel whose standard deviation grows with the scale parameter t; equivalently, I_t solves the isotropic heat (diffusion) equation ∂I_t/∂t = ∇²I_t with I_0 = I.
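Building such a scale space amounts to smoothing the image with Gaussians of increasing width (a sketch; the particular scales and test image are arbitrary choices of ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Witkin-style linear scale space: one Gaussian-smoothed copy of the
    image per scale, coarser as sigma grows."""
    return [gaussian_filter(img.astype(float), sigma) for sigma in sigmas]

rng = np.random.default_rng(0)
noisy = rng.standard_normal((32, 32))
space = gaussian_scale_space(noisy)
```

Each successive image is a lowpass-filtered version of the last, so fine detail (and noise variance) drops monotonically with scale.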

The Marr-Hildreth paradigm uses a Gaussian scale space to define multiscale edge detection. Using the Gaussian-convolved (or diffused) images, one may detect edges by applying the Laplacian operator and then finding zero-crossings [5]. This popular method of edge detection, called the Laplacian-of-a-Gaussian (LoG), is strongly motivated by the biological vision system. However, the edges detected from isotropic diffusion (Gaussian scale space) suffer from artifacts such as corner rounding and from edge localization error (deviation in detected edge position from the "true" edge position). The localization errors increase with increased scale, precluding straightforward multiscale image/video analysis. As a result, many researchers have pursued anisotropic diffusion as a viable alternative for generating images suitable for edge detection. This chapter focuses on such methods.

20.2.1.1 Anisotropic Diffusion

The main idea behind anisotropic diffusion is the introduction of a function that inhibits smoothing at the image edges. This function, called the diffusion coefficient c(x), encourages intra-region smoothing over inter-region smoothing. For example, if c(x) is constant at all locations, then smoothing progresses in an isotropic manner. If c(x) is allowed to vary according to the local image gradient, we have anisotropic diffusion. A basic anisotropic diffusion PDE is

∂I_t(x)/∂t = div{ c(x) ∇I_t(x) },

with I_0 = I [6].


The discrete formulation proposed in [6] will be used as a general framework for implementation of anisotropic diffusion in this chapter. Here the image intensities are updated according to

I_{t+ΔT}(x) = I_t(x) + ΔT Σ_d c_d(x) ∇I_d(x),

where d ranges over the diffusion directions, c_d(x) is the diffusion coefficient for direction d, and ΔT is the time step—for stability, ΔT ≤ 1/2 in the 1D case, and ΔT ≤ 1/4 in the 2D case using four diffusion directions. For 1D discrete-domain signals, the simple differences ∇I_d(x) with respect to the "western" and "eastern" neighbors, respectively (neighbors to the left and right), are defined by

∇I_W(x) = [I(x − h1) − I(x)] / h1

and

∇I_E(x) = [I(x + h2) − I(x)] / h2.

The parameters h1 and h2 define the sample spacing used to estimate the directional derivatives. For the 2D case, the diffusion directions include the "northern" and "southern" directions (up and down), as well as the "western" and "eastern" directions (left and right). Given the motivation and basic definition of diffusion-based processing, we will now define several implementations of anisotropic diffusion that can be applied for edge extraction.
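The discrete update above can be sketched end-to-end in NumPy (a sketch under our assumptions: unit sample spacing h1 = h2 = 1, an exponential diffusion coefficient, and a toy step-edge image):

```python
import numpy as np

def perona_malik(img, k=0.1, dt=0.2, iters=20):
    """Discrete 2D anisotropic diffusion in the framework of [6]: simple
    four-neighbor differences, one diffusion coefficient per direction, and
    an explicit update with time step dt (dt <= 1/4 for stability with four
    diffusion directions)."""
    I = img.astype(float).copy()
    for _ in range(iters):
        # simple differences toward the N, S, E, W neighbors (h1 = h2 = 1);
        # border differences are zeroed so nothing diffuses off the image
        dN = np.roll(I, 1, axis=0) - I; dN[0, :] = 0
        dS = np.roll(I, -1, axis=0) - I; dS[-1, :] = 0
        dE = np.roll(I, -1, axis=1) - I; dE[:, -1] = 0
        dW = np.roll(I, 1, axis=1) - I; dW[:, 0] = 0
        # exponential diffusion coefficient evaluated per direction
        I += dt * sum(np.exp(-(d / k) ** 2) * d for d in (dN, dS, dE, dW))
    return I

img = np.zeros((16, 16))
img[:, 8:] = 1.0                            # a strong step edge
img = img + 0.01 * np.cos(np.arange(16.0))  # small deterministic ripple
out = perona_malik(img, k=0.1, dt=0.2, iters=20)
```

The large step (difference ≈ 1 ≫ k) gets a near-zero coefficient and survives, while the small ripple (differences ≪ k) diffuses away.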

20.3 ANISOTROPIC DIFFUSION TECHNIQUES

20.3.1 The Diffusion Coefficient

The link between edge detection and anisotropic diffusion is found in the edge-preserving nature of anisotropic diffusion. The function that impedes smoothing at the edges is the diffusion coefficient. Therefore, the selection of the diffusion coefficient is the most critical step in performing diffusion-based edge detection. We will review several possible variants of the diffusion coefficient and discuss the associated positive and negative attributes.

To simplify the notation, we will denote the diffusion coefficient at location x by c(x) in the continuous case. For the discrete-domain case, c_d(x) represents the diffusion coefficient for direction d at location x. Although the diffusion coefficients here are defined using c(x) for the continuous case, the functions are equivalent in the discrete-domain case of c_d(x). Typically c(x) is a nonincreasing function of |∇I(x)|, the gradient magnitude at position x. As such, we often refer to the diffusion coefficient as c(|∇I(x)|). For small values of |∇I(x)|, c(x) tends to unity. As |∇I(x)| increases, c(x) decreases to zero. Teboul et al. [7] establish three conditions for edge-preserving diffusion coefficients. These conditions are (1) lim_{|∇I(x)|→0} c(x) = M, where 0 < M < ∞, (2) lim_{|∇I(x)|→∞} c(x) = 0,


and (3) c(x) is a strictly decreasing function of |∇I(x)|. Property 1 ensures isotropic smoothing in regions of similar intensity, while property 2 preserves edges. The third property is given in order to avoid numerical instability. While most of the coefficients discussed here obey the first two properties, not all formulations obey the third property.

In [6], Perona and Malik propose

c(x) = exp[ −( |∇I(x)| / k )² ]  (20.9)

and

c(x) = 1 / [ 1 + ( |∇I(x)| / k )² ]  (20.10)

as diffusion coefficients. Diffusion operations using (20.9) and (20.10) have the ability to sharpen edges (backward diffusion), and are inexpensive to compute. However, these diffusion coefficients are unable to remove heavy-tailed noise and create "staircase" artifacts [8, 9]. See the example of smoothing using (20.9) on the noisy image in Fig. 20.1(a), producing the result in Fig. 20.1(b). In this case, the anisotropic diffusion operation leaves several outliers in the resultant image. A similar problem is observed in Fig. 20.2(b), using the corrupted image in Fig. 20.2(a) as input. You et al. have also shown that (20.9) and (20.10) lead to an ill-posed diffusion—a small perturbation in the data may cause a significant change in the final result [10].
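Assuming the standard Perona-Malik forms of (20.9) and (20.10), the limit and monotonicity conditions above can be checked numerically (a sketch; k = 10 is an arbitrary choice of ours):

```python
import numpy as np

k = 10.0  # gradient-magnitude threshold parameter

def c_exp(g):
    """Exponential coefficient, Eq. (20.9)-style."""
    return np.exp(-(g / k) ** 2)

def c_frac(g):
    """Rational coefficient, Eq. (20.10)-style."""
    return 1.0 / (1.0 + (g / k) ** 2)

g = np.linspace(0.0, 200.0, 2001)  # a range of gradient magnitudes
```

Both coefficients equal 1 at zero gradient (property 1 with M = 1), decay toward 0 for large gradients (property 2), and decrease monotonically (property 3).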

The inability of anisotropic diffusion to denoise an image has been addressed by Catte et al. [11] and Alvarez et al. [12]. Their regularized diffusion operation uses a modification of the gradient image used to compute the diffusion coefficients. In this case, a Gaussian-convolved version of the image is employed in computing diffusion coefficients. Using the same basic form as (20.9), we have

c(x) = exp[ −( |∇(G * I_t)(x)| / k )² ],  (20.11)

where G is a Gaussian smoothing kernel. This method can be used to rapidly eliminate noise in the image, as shown in Fig. 20.1(c). In this case, the diffusion is well posed and converges to a unique result, under certain conditions [11]. Drawbacks of this diffusion coefficient implementation include the additional computational burden of filtering at each step and the introduction of a linear filter into the edge-preserving anisotropic diffusion approach. The loss of sharpness due to the linear filter is evident in Fig. 20.2(c). Although the noise is eradicated, the edges are softened and blotching artifacts appear in the background of this example result.
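A sketch of the regularized coefficient (assuming the exponential form of (20.9) applied to a Gaussian-smoothed gradient; the impulse-noise comparison below is our own illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_coefficients(I, k=0.1, sigma=1.0):
    """Catte-style regularization: compute the diffusion coefficient from the
    gradient of a Gaussian-smoothed copy of the image, not the raw image."""
    Is = gaussian_filter(I.astype(float), sigma)
    gy, gx = np.gradient(Is)
    return np.exp(-(np.hypot(gx, gy) / k) ** 2)

I = np.zeros((16, 16))
I[8, 8] = 5.0                      # a single impulse-noise pixel
gy0, gx0 = np.gradient(I)
c_raw = np.exp(-(np.hypot(gx0, gy0) / 0.1) ** 2)   # unregularized version
c_reg = regularized_coefficients(I, k=0.1, sigma=1.0)
```

Note the failure mode of the raw coefficient: the impulse pixel itself has zero central difference, so c_raw treats it as flat, while the regularized coefficient responds to the smoothed structure in a whole neighborhood around it.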

Another modified gradient implementation, called morphological anisotropic diffusion, can be formed by substituting a morphologically filtered version of the image, (20.12), into (20.11), where B is a structuring element of size m × m, I ◦ B is the morphological opening of I by B, and I • B is the morphological closing of I by B. In [13], the open-close and close-open filters were used in an alternating manner between iterations, thus reducing grayscale bias of the open-close and close-open filters. As the result in Fig. 20.1(d) demonstrates, the morphological anisotropic diffusion method can be used to eliminate noise and insignificant features while preserving edges. Morphological


anisotropic diffusion has the advantage of selecting feature scale (by specifying the structuring element B) and selecting the gradient magnitude threshold, whereas previous anisotropic diffusions, such as (20.9) and (20.10), only allowed selection of the gradient magnitude threshold.
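A hypothetical sketch of the idea (our own simplification: a single open-close pass stands in for the alternating open-close/close-open scheme of [13]):

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphological_coefficient(I, k=0.1, size=3):
    """Diffusion coefficient computed from the gradient of an open-close
    filtered image: features smaller than the structuring element vanish
    from the gradient estimate, so they no longer block diffusion."""
    smooth = grey_closing(grey_opening(I, size=size), size=size)
    gy, gx = np.gradient(smooth)
    return np.exp(-(np.hypot(gx, gy) / k) ** 2)

I = np.zeros((16, 16))
I[:, 8:] = 1.0        # a large-scale step edge (preserved by opening/closing)
I[4, 2] = 3.0         # a small bright feature the 3x3 opening removes
c = morphological_coefficient(I, k=0.1, size=3)
```

The coefficient stays near zero across the step edge (diffusion blocked) but near one at the removed small feature (diffusion proceeds), which is exactly the scale-selection behavior described above.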

You et al. introduce the following diffusion coefficient in [10]:

Trang 14

FIGURE 20.2

(a) Corrupted "cameraman" image (Laplacian noise, SNR = 13 dB) used as input for the results in Figs. 20.2(b)–(e); (b) after 8 iterations of anisotropic diffusion with (20.9), k = 25; (c) after 8 iterations of anisotropic diffusion with (20.11) and (20.12), k = 25; (d) after 75 iterations of anisotropic diffusion with (20.14), T = 6, ε = 1, p = 0.5; (e) after 15 iterations of multigrid anisotropic diffusion with (20.11) and (20.12), k = 6 [35].

where the parameters are constrained by ε > 0 and 0 < p < 1, and T is a threshold on the gradient magnitude, similar to k in (20.9). This approach has the benefits of avoiding staircase artifacts and removing impulse noise. The main drawback is computational expense. As seen in Fig. 20.2(d), anisotropic diffusion with this diffusion coefficient succeeds in removing noise and retaining important features from Fig. 20.2(a), but requires a significant number of updates.

The robust diffusion coefficient (20.16) behaves differently: whereas the standard anisotropic diffusion coefficient as in (20.9) continues to smooth over edges while iterating, the robust formulation (20.16) preserves edges of a prescribed scale and effectively stops diffusion.

Here seven important versions of the diffusion coefficient were given that involve tradeoffs between solution quality, solution expense, and convergence behavior. Other research in the diffusion area focuses on the diffusion PDE itself. The next section reveals significant modifications to the anisotropic diffusion PDE that affect fidelity to the input image, edge quality, and convergence properties.

20.3.2 The Diffusion PDE

In addition to the basic anisotropic diffusion PDE given in Section 20.1.2, other diffusion mechanisms may be employed to adaptively filter an image for edge detection. Nordstrom [18] used an additional term to maintain fidelity to the input image, to avoid the selection of a stopping time, and to avoid termination of the diffusion at a trivial solution, such as a constant image. This PDE is given by

∂I_t(x)/∂t − div{ c(x) ∇I_t(x) } = I_0(x) − I_t(x).  (20.17)

Obviously, the right-hand side I_0(x) − I_t(x) enforces an additional constraint that penalizes deviation from the input image.
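One explicit step of (20.17) can be sketched by adding the fidelity term to the discrete diffusion update of [6] (a sketch with our own parameter choices and toy input):

```python
import numpy as np

def nordstrom_step(I, I0, k=0.1, dt=0.2):
    """One explicit update of Eq. (20.17): four-direction anisotropic
    diffusion plus the fidelity term I0 - I penalizing drift from the input."""
    dN = np.roll(I, 1, axis=0) - I; dN[0, :] = 0
    dS = np.roll(I, -1, axis=0) - I; dS[-1, :] = 0
    dE = np.roll(I, -1, axis=1) - I; dE[:, -1] = 0
    dW = np.roll(I, 1, axis=1) - I; dW[:, 0] = 0
    diffusion = sum(np.exp(-(d / k) ** 2) * d for d in (dN, dS, dE, dW))
    return I + dt * (diffusion + (I0 - I))

I0 = np.zeros((8, 8))
I0[:, 4:] = 1.0                          # ideal step-edge input image
I = I0 + 0.05 * np.cos(np.arange(8.0))   # mildly perturbed starting point
for _ in range(200):
    I = nordstrom_step(I, I0)
```

Even after many iterations the solution stays near I0 rather than flattening toward a constant image: the fidelity term supplies a nontrivial steady state, so no stopping time needs to be chosen.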

Just as Canny [19] modified the LoG edge detection technique by detecting zero-crossings of the Laplacian only in the direction of the gradient, a similar edge-sensitive approach can be taken with anisotropic diffusion. Here, the boundary-preserving diffusion is executed only in the direction orthogonal to the gradient direction, whereas the standard anisotropic diffusion schemes impede diffusion across the edge. If the rate of change of intensity is set proportional to the second partial derivative in the direction orthogonal to the gradient (called τ), we have

∂I_t(x)/∂t = ∂²I_t(x)/∂τ².

This anisotropic diffusion model is called mean curvature motion, because it induces a diffusion in which the connected components of the image level sets of the solution image move in proportion to the boundary mean curvature. Several effective edge-preserving diffusion methods have arisen from this framework, including [20] and [21]. Alvarez et al. [12] have used the mean curvature method in tandem with the regularized diffusion coefficient of (20.11) and (20.12). The result is a processing method that preserves the causality of edges through scale space. For edge-based hierarchical searches and multiscale analyses, the edge causality property is extremely important.
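A standard finite-difference sketch of the τ-direction diffusion, using the identity I_ττ = (I_xx I_y² − 2 I_x I_y I_xy + I_yy I_x²)/|∇I|² (the time step, smoothing, and disk test image are our choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def curvature_motion_step(I, dt=0.1, eps=1e-8):
    """One explicit step of mean curvature motion: diffusion acts only along
    tau, the direction orthogonal to the gradient (the level-line tangent)."""
    Iy, Ix = np.gradient(I)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    num = Ixx * Iy ** 2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix ** 2
    return I + dt * num / (Ix ** 2 + Iy ** 2 + eps)

# A smoothed bright disk: its level curves are circles, and curvature
# motion moves circles inward, so the bright mass erodes over time.
y, x = np.mgrid[:32, :32]
disk = ((x - 16.0) ** 2 + (y - 16.0) ** 2 < 64).astype(float)
I = gaussian_filter(disk, 1.0)
I2 = I.copy()
for _ in range(30):
    I2 = curvature_motion_step(I2)
```

Straight edges (zero curvature) are left untouched by this flow, which is why it preserves boundary position while still rounding away small, highly curved detail.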

The mean curvature method has also been given a graph theoretic interpretation [22, 23]. Yezzi [23] treats the image as a graph in ℝⁿ—a typical 2D grayscale image would be a surface in ℝ³ where the image intensity is the third parameter, and each pixel is a graph node. Hence a color image could be considered a surface in ℝ⁵. The curvature motion of the graphs can be used as a model for smoothing and edge detection. For example, let a 3D graph s be defined by s(x) = s(x, y) = [x, y, I(x, y)] for the 2D image I.
