Springer Series in Materials Science 123

Editors: R. Hull, R.M. Osgood, Jr., J. Parisi, H. Warlimont

The Springer Series in Materials Science covers the complete spectrum of materials physics, including fundamental principles, physical properties, materials theory and design. Recognizing the increasing importance of materials science in future device technologies, the book titles in this series reflect the state-of-the-art in understanding and controlling the structure and properties of all important classes of materials.

Please view available titles in Springer Series in Materials Science on series homepage http://www.springer.com/series/856
Roman Louban

Image Processing of Edge and Surface Defects
Theoretical Basis of Adaptive Algorithms with Numerous Practical Applications

With 118 Figures
Professor R.M. Osgood, Jr.
Microelectronics Science Laboratory
Department of Electrical Engineering
Columbia University
Seeley W. Mudd Building
New York, NY 10027, USA

Professor Jürgen Parisi
Universität Oldenburg, Fachbereich Physik
Abt. Energie- und Halbleiterforschung
Carl-von-Ossietzky-Straße 9–11
26129 Oldenburg, Germany

Professor Hans Warlimont
DSL Dresden Material-Innovation GmbH
Pirnaer Landstr. 176
01257 Dresden, Germany
Springer Series in Materials Science ISSN 0933-033X
ISBN 978-3-642-00682-1        e-ISBN 978-3-642-00683-8
DOI 10.1007/978-3-642-00683-8
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2009929025

© Springer-Verlag Berlin Heidelberg 2009

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface

The human ability to recognize objects on various backgrounds is amazing. Many times, industrial image processing has tried to imitate this ability by its own techniques.

This book discusses the recognition of defects on free-form edges and homogeneous surfaces. My many years of experience have shown that such a task can be solved efficiently only under particular conditions. Inevitably, the following questions must be answered: How did the defect come about? How and why is a person able to recognize a specific defect? In short, one needs an analysis of the process of defect creation as well as an analysis of its detection.

As soon as the principle of these processes is understood, the processes can be described mathematically on the basis of an appropriate physical model and can then be captured in an algorithm for defect detection. This approach can be described as "image processing from a physicist's perspective". I have successfully used this approach in the development of several industrial image processing systems and improved upon them in the course of time. I would like to present the achieved results in a hands-on book on the basis of edge-based algorithms for defect detection on edges and surfaces.
I would like to thank all who have supported me in writing this book.

My special thanks go to Charlotte Helzle, Managing Director, hema electronic GmbH, Aalen, Germany. During my 12 years of cooperation with that company, I have had the opportunity to transform many projects in industrial image processing from proof of concept to the development stage and bring them into service. I would also like to thank Professor Joachim P. Spatz, Managing Director, Department of New Materials and Biosystems at the Max Planck Institute for Metals Research, Stuttgart, Germany, who gave me permission to use the corresponding applications of adaptive algorithms as illustrative examples in my book. I thank the foundation All M.C. Escher works, Cordon Art, Baarn, Holland, and the magazine Qualität und Zuverlässigkeit from Carl Hanser for permitting me to use their images as illustrations in this book.

My personal thanks go to Michael Rohrbacher, my former supervisor and a good friend, for having incessantly supported and encouraged me. I thank Jürgen Kraus for the creative support in the development of the Christo function, which plays a fundamental role in defect detection.
I especially thank my children, Anna Louban, who is a student at the University of Konstanz, Germany, and Ilia Louban, who is a doctoral candidate at the Institute for Physical Chemistry, Biochemistry Group, University of Heidelberg, Germany, as well as another doctoral candidate of the Institute for Physical Chemistry, Patrick Hiel, for thoroughly proofreading the entire book and for their numerous suggestions for improvement. I also would like to express my sincere thanks to Konstantin Sigal and Alexandra Lyon, without whose help the English version of this book would not have been possible.

I sincerely thank the employees of Springer, particularly Dr. habil. Claus E. Ascheron, Executive Editor Physics, for taking personal interest in this book and for the support in every phase of its creation.

I thank all readers in advance for their suggestions for improvement and compliments.
June 2009
Contents

1 Introduction
   1.1 What Does an Image Processing Task Look Like?
   1.2 Conventional Methods of Defect Recognition
       1.2.1 Structural Analysis
       1.2.2 Edge-Based Segmentation with Pre-defined Thresholds
   1.3 Adaptive Edge-Based Object Detection

2 Edge Detection
   2.1 Detection of an Edge
       2.1.1 Single Edge
       2.1.2 Double Edge
       2.1.3 Multiple Edges
   2.2 Non-Linear Approximation as Edge Compensation

3 Defect Detection on an Edge
   3.1 Defect Recognition on a Regular Contour
   3.2 Defect Detection on a Dented Wheel Contour
   3.3 Recognition of a Defect on a Free-Form Contour
       3.3.1 Fundamentals on Morphological Enveloping Filtering
       3.3.2 Defect Recognition on a Linear Edge Using an Envelope Filter
       3.3.3 Defect Recognition on a Free-Form Edge Using an Envelope Filter

4 Defect Detection on an Inhomogeneous High-Contrast Surface
   4.1 Defect Edge
   4.2 Defect Recognition
       4.2.1 Detection of Potential Defect Positions
       4.2.2 100% Defect Positions
       4.2.3 How Many 100% Defect Positions Must a Real Defect Have?
       4.2.4 Evaluation of Detected Defects
   4.3 Setup of Adaptivity Parameters of the SDD Algorithm
   4.4 Industrial Applications
       4.4.1 Surface Inspection of a Massive Metallic Part
       4.4.2 Surface Inspection of a Deep-Drawn Metallic Part
       4.4.3 Inspection of Non-Metallic Surfaces
       4.4.4 Position Determination of a Welded Joint
       4.4.5 Robot-Assisted Surface Inspection

5 Defect Detection on an Inhomogeneous Structured Surface
   5.1 How to Search for a Blob?
   5.2 Adaptive Blob Detection
       5.2.1 Adaptivity Level 1
       5.2.2 Further Adaptivity Levels
   5.3 Setup of Adaptivity Parameters of the ABD Algorithm
   5.4 Industrial Applications
       5.4.1 Cell Inspection using Microscopy
       5.4.2 Inspection of a Cold-Rolled Strip Surface
       5.4.3 Inspection of a Wooden Surface

6 Defect Detection in Turbo Mode
   6.1 What is the Quickest Way to Inspect a Surface?
   6.2 How to Optimize the Turbo Technique?

7 Adaptive Edge and Defect Detection as a basis for Automated Lumber Classification and Optimisation
   7.1 How to Grade a Wood Cutting?
       7.1.1 Boundary Conditions
       7.1.2 Most Important Lumber Terms
   7.2 Traditional Grading Methods
       7.2.1 Defect-Related Grading
       7.2.2 Grading by Sound Wood Cuttings
   7.3 Flexible Lumber Grading
       7.3.1 Adaptive Edge and Defect Detection
       7.3.2 Defect-Free Areas: From "Spaghetti" to "Cutting"
       7.3.3 Simple Lumber Classification Using only Four Parameters
       7.3.4 The 3-Metres Principle
       7.3.5 Grading of Lumber with Red Heart
   7.4 The System for Automatic Classification and Sorting of Hardwood Lumber
       7.4.1 Structure of the Vision system
       7.4.2 User Interface

8 Object Detection on Images Captured Using a Special Equipment
   8.1 Evaluation of HDR Images
   8.2 Evaluation of X-ray Images

9 Before an Image Processing System is Used
   9.1 Calibration
       9.1.1 Evaluation Parameters
       9.1.2 Industrial Applications
   9.2 Geometrical Calibration
       9.2.1 h-Calibration
       9.2.2 l-Calibration
   9.3 Smallest Detectable Objects
       9.3.1 Technical Pre-Condition for Minimal Object Size
       9.3.2 Minimum Detectable Objects in Human Perception

References

Index
1 Introduction

… a defect on a surface. Often, the surface to be inspected is inhomogeneous and of high contrast. Brightness fluctuations on the surface are common. Still, all defects need to be detected irrespective of other problems and without identifying regular objects as defects.

There are a number of image processing systems that are able to carry out surface inspection more or less successfully. However, the requirements of industry are growing so rapidly and on such a large scale that existing systems can no longer satisfy the demand. The reason for this is not the computing capacity of an image processing system but the methods used for the recognition of defects.

This book will present an approach to this problem that allows the development of an algorithm suitable for the recognition of a surface defect. This algorithm has been implemented as C-library functions for Seelector by hema electronic GmbH (a digital signal processing image processing system) [1] and as plug-ins for NeuroCheck (a PC image processing system) [2] and has been successfully tested in several applications. This algorithm will be presented in this book and demonstrated with numerous examples.
1.1 What Does an Image Processing Task Look Like?
As with any task, preparation is of paramount importance. Thus a problem with a correct definition is already half solved. Unfortunately, in the field of surface inspection, a detailed and, above all, correct definition of the defects to be detected is far from satisfactory. Typically, all defects are captured by photography, and they are logged into a defect catalogue. A further description of these defects is often performed in a formal way, where size, form, orientation, and, at best, brightness of a defect are taken into consideration. But, when a more tangible defect definition is asked for, there is a "detailed" explanation: "Well, can't you see it?!" [2] This is true: what you see is usually enough for a human. Human beings learn to detect defects according to their characteristic features, of which they are not explicitly aware, and are able to recognize them even if those defects were not explicitly defined earlier. All this is done in the background of this process according to a "program" that has been developed and refined in the course of human evolution. But how could an image processing system, which is a machine, achieve such a performance? When speaking of this, you will often have heard a well-intended piece of advice: "Don't you bother, the computer will do it!" But the problem is that a computer must be programmed by a human first.

Well, how does a human see? What are the features of an object that he really perceives?

Let us take a look at the famous picture "Waterfall" by M.C. Escher (Fig. 1.1). At first sight, the water is flowing upward, which is impossible according to the rules of gravity. The artist and our minds play tricks on us. But if we take a closer look, we are able to understand how this illusion is created. Which features of the picture are true to reality, and how do we recognize them? We know that water never flows upward, and thanks to our knowledge of physics we do not believe the illusion. This helps us to get behind the painter's tricks and to perceive the features of the picture that are unobtrusive but "valid".

The same applies for defect recognition: because of a formal defect description, many image processing methods refer to formal features of the required defect.

But the creation of a defect is a physical process. The properties of the damaged material and the processing deformations induced by surface damage determine the appearance of a defect. The characteristic features created in this way enable the explicit recognition of such a defect. This is why the analysis of the physical nature of a defect is a basic part of the approach to defect recognition presented in this book.

In order to stress the difference between this and conventional methods of surface defect recognition, we shall first give a review of these methods. More detailed information on conventional methods is given in several books on digital image processing, e.g., [3].

Fig. 1.1. M.C. Escher's "Wasserfall". © 2009 The M.C. Escher Company, Holland. All rights reserved
1.2 Conventional Methods of Defect Recognition
1.2.1 Structural Analysis

… the reason why they fail to recognize non-artificial objects, which are never identical to the reference objects and are in an inhomogeneous environment. The number of pseudo-defects increases rapidly.

Furthermore, the number of features necessary for the recognition of objects increases so vastly that a control of such recognition systems is almost impossible. More than 1500 textural features are currently used for defect recognition [7].

The support by neural networks is of little help. Consequently, the so-called feature clouds in a multi-dimensional feature space become more and more blurred as the number of learned objects increases, so that the defect recognition capability of an image processing system decreases, whereas the recognition of pseudo-defects increases. The following example illustrates this process. A fork with four spikes and a knife (Fig. 1.2a) are two completely different objects that an image processing system can learn to recognize and perfectly separate by using a structural analysis software. Let us expand the terms "fork" and "knife" as follows. First, we add a more slender three-spike fork to the four-spike fork, then another and even more slender and longer two-spike fork and a meat fork. The knives are added to include more and more slender, shorter, and unusual knives: e.g., a cheese knife with cut-outs in the middle of the blade and two horns at the tip (Fig. 1.2b). We make the image processing system learn all the new objects. Despite the fact that this expansion has led to a major change in appearance of the objects in question, a person still considers the ensemble as two different groups of objects: forks and knives. An image processing system, however, even supported by neural networks, may assign the meat fork and the cheese knife to the same object class, as each one of these is a boundary object of its group. The reason for this is both the high similarity of these objects and the immense deviation of the remaining learned objects from one another within every reference group. Two different groups are classed as one, and once the following test is complete, the four-spike fork and a knife will be incorrectly classed as related objects.

Fig. 1.2. Structural analysis of objects: (a) reference objects, (b) expanded object range

Another method used to detect a defect on a sample image is edge-based segmentation [3]. Here the detection of edges of an object plays a major role. The most sophisticated general edge localization is done by transforming the entire image to an edge image using different cut-off filters.

Besides the high computing effort, this method is at a disadvantage in that the image is processed, i.e., changed by filtering, which adversely affects the edge detection itself and the respective results. Some edges cannot even be detected due to insufficient contrast, whereas random areas with sufficient illumination gradient are wrongly detected as edges.
1.2.2 Edge-Based Segmentation with Pre-defined Thresholds
Another technique [8] requires an initial binarization of the image. After binarization, an edge is first detected and then the object is scanned. In order to binarize, a threshold must be determined. It can be either pre-defined or calculated on the basis of the content of the image.

The pre-defined threshold, however, does not consider variances, e.g., illumination fluctuations that can occur either on a series of consecutive captures or in different areas of the current image. In this case, the inspection image cannot be properly binarized, which means that the edges are then incorrectly detected.

It is possible to adapt the object detection process to the inspection image by calculating the threshold directly from the contents of the image. A histogram is used [9, 10] to display the frequency of individual grey scale values occurring in the image. A binarization threshold can then be determined with the absolute or local maximum or minimum of the histogram. This technique can be refined by increasing the number of iterations [11].
If the histogram is captured on an image section that is too large, individual details of this section will be adversely affected, which also applies to the edges located there. Consequently, those edges tend to get blurred or shifted. On the contrary, if image sections are chosen too small, no exact recognition of the correct minimum or maximum is possible, as the number of test pixels is too low. Therefore, the split area cannot be binarized correctly. The process of splitting the image to be binarized into appropriate sections [9] can be optimized only by experimental means. The binarization result of the image then depends on the pre-determined splitting of the image. The technique loses its flexibility.

In order to determine the appropriate binarization threshold, a series of binary images captured with falling or rising thresholds can be evaluated [10]. This is, however, very laborious and time consuming and, above all, possible only in a very limited number of cases.

Furthermore, binarization of the image affects the recognition of objects and thus distorts it, as does any other image filtering process. In addition, since it is based only on the variation of the grey scale value, this technique causes highly increased pseudo-defect recognition. Therefore, edge recognition or object recognition should be carried out only on the basis of the original grey-scale image.
One of the best known methods for the detection of an object in an image is the segmentation based on contour tracing (so-called blob analysis) [12]. This may be a dark object on a bright surface or a bright object on a dark surface. In order to simplify the discussion, we will generally focus on a dark object on a bright surface; in the other case, the image can be inverted. The blob analysis is carried out in a test area, where the first pixel that is part of an object is determined along a scanning line. Normally, the scanning line is placed over all rows of the test area. The first detected object point, called the starting point, has to show a brightness that lies below the surface brightness and above the object brightness, whereas the previous pixel should show a brightness above the surface brightness. From the detected starting point, the object contour can be further detected by means of conventional contour tracing algorithms. Contour tracing can be carried out using the minimum value of the surface brightness, where all pixels that are part of the object will have a brightness which is lower than this value.
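The starting-point search described above can be sketched as follows; the names and the two fixed thresholds (minimum surface brightness and maximum object brightness) are illustrative assumptions, not the cited implementation:

    import numpy as np

    def find_starting_point(img: np.ndarray, i_min_surf: int, i_max_obj: int):
        """Scan the test area row by row and return the first pixel whose
        brightness lies between object and surface brightness while the
        preceding pixel on the scanning line still belongs to the surface."""
        height, width = img.shape
        for y in range(height):                 # one scanning line per row
            for x in range(1, width):
                prev, cur = int(img[y, x - 1]), int(img[y, x])
                if prev >= i_min_surf and i_max_obj < cur < i_min_surf:
                    return (y, x)               # starting point for contour tracing
        return None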
Blob analysis, however, uses fixed thresholds, which cannot ensure reliable defect recognition on a structured inhomogeneous surface. A simplified version of this technique [2], where the minimum surface brightness threshold is identical with the maximum defect brightness threshold, which amounts to a binarization of the image, is even less appropriate.

To summarize, it can be stated that neither formal characteristics nor pre-defined brightness variances of a defect can be assumed as its explicit recognition features. This is why the methods described above cannot ensure a flexible and at the same time explicit defect recognition on an inhomogeneous surface that shows global and local brightness variances.
1.3 Adaptive Edge-Based Object Detection
The task therefore is to create such a technique of defect recognition. To achieve this, we need a wholly different approach to this problem. Instead of trying various formal image-processing methods for defect recognition, the background of defect recognition must be analysed, taking into account the physical aspects of defect formation and human sight behaviour. In doing so, new characteristic features can be detected, to which the technique of defect recognition must correspond. An explicit "genetic fingerprint" of a defect must be acquired. These characteristic features must not be dependent on defect and surface size, shape and orientation, or brightness. An explicit defect recognition is ensured on the basis of these characteristic features.

What is it then that decisively differentiates between a defective and a faultless surface? In case of a defect, there is always a boundary between the defect and the defect-free surface – a material edge. For example, this edge can be identified on an angular grinding of a metallic surface that has a crack (Fig. 1.3) [13]. The roughness profile of the test surface shows the same result (Fig. 1.4a). An intact surface cannot show such edges (Fig. 1.4b).

Fig. 1.3. Angular grinding of a metallic surface with a crack

Fig. 1.4. Roughness profile of (a) a defective and (b) an intact metallic surface

So, the creation of a material edge is determined by the physical properties of the material and by the development of the damage process. Therefore, defect recognition can be done on the basis of the defect edge detection, independently of the brightness variations at the defect edge. Global and local brightness conditions have to be taken into account. Later, all detected objects have to be analysed according to their further features and eventually to their sizes, and sorted accordingly. This is the reason why the methods below for detection and recognition of surface defects are called methods of adaptive edge-based object detection.

Edge detection, which plays a major role in defect recognition, must of course be the first to be thoroughly investigated and described. However, we will discuss it in a very general way to ensure that the findings can be used for recognition of different edges under different environmental conditions.
2 Edge Detection

… of major economic importance.

In industrial image processing, an entire block of the above-mentioned techniques is used. Here the central point is to detect whether there is an edge in the test area at all and to localize the edge when it is known to exist. Most edge recognition methods [2, 3], however, presume that an edge does already exist in the test area, and the task is to detect it as precisely as possible. In reality, and primarily in defect detection, the potential location of an edge must be determined first. Only then can an edge be successfully scanned for and located.

Besides that, real boundary conditions can aggravate the detection of an edge, such as brightness fluctuations of the scanned edge (e.g., different local brightness values at the edge, as with wood), sharpness of the edge representation (e.g., a cant), and the complexity of the edge (e.g., a double edge as in a wood board with bark).
2.1 Detection of an Edge
One of the most frequently used methods of direct edge detection from a grey scale image is based on a pre-determined edge model [3] and concerns the situation where the edge location must be known in advance. Nevertheless, it will be presented here in order to stress the difference to the technique that will be described later.

Usually, a scan for edges within a certain edge model occurs along scanning lines in a certain direction. The criteria for the detection of an edge result from the grey scale profile along a scanning line. Here two edge directions are differentiated: rising edges and falling edges. One speaks of a falling edge when the grey scale profile runs from bright to dark; otherwise, it is a rising edge.
A typical technique uses the following parameters (a minimal sketch of this conventional search is given below):

• Edge height: in order to detect a valid edge, there must be a minimum difference of grey scale values along a scanning line. This is called the edge height.
• Edge length: the edge length value describes the length over which the minimum difference of grey scale values determined by the edge height must occur.
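The following sketch of such a conventional edge search along one scanning line uses invented parameter names (edge_height, edge_length); it only illustrates the fixed-threshold approach and is not code from the book:

    import numpy as np

    def find_edge_fixed(profile, edge_height: float, edge_length: int):
        """Return the first index at which the grey values rise by at least
        edge_height within edge_length pixels (rising edge, fixed thresholds)."""
        p = np.asarray(profile, dtype=float)
        for i in range(len(p) - edge_length):
            if p[i + edge_length] - p[i] >= edge_height:
                return i
        return None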
As these parameters remain unchanged for every image to be inspected, there can be no dynamic adaptation of the inspection features to the current image. Thus, a general highlighting of edges on an inhomogeneous surface will result in missing real defects and in massive recognition of pseudo-defects.

Other conventional techniques of image processing are also known, e.g., using a histogram or a grey scale profile or combining the two for edge detection. Here, the significant parameters also must be generally pre-determined. Therefore these techniques are still not capable of providing a flexible and at the same time explicit edge detection on an inhomogeneous image.

To achieve this, the histogram of the test area and the grey scale profile captured along the scanning line within the test area must be investigated and analysed on a substantially more precise physical basis. This physical background can be explained in the detection of a single-level edge, which will be referred to as a single edge below.
2.1.1 Single Edge
It is known that the intensity distribution in a light beam shows a Gaussian profile [14]. As a surface can be regarded as a light source because of its reflection, a brightness distribution that runs from the surface over the edge to the background can be described by a Gaussian distribution [3]. This model has shown to be the best for the exact calculation of the edge position with sub-pixel accuracy [15]. This is why a Gaussian profile can be assumed for the description of the grey scale profile and its differentiation (the brightness gradient).

A histogram is a frequency distribution of the brightness on a background. In natural images, the content usually has a falling amplitude spectrum, whereas the noise has an approximately constant spectrum in the frequency range. The histogram therefore, like the grey-scale profile, shows a Gaussian profile [16] or a profile resulting from several Gaussian distributions [3].

The brightness value that occurs most frequently on the surface of an object can be defined as the brightness of that object. This technique, as opposed to, for example, the mean value technique, explicitly and reliably determines the real brightness I_surf (surface) of a test surface, as it is perceived by a human observer. However, this applies only if a fault with a specific brightness value does not feature a higher or comparable frequency. In this case, it is not possible to differentiate the fault from the main surface (e.g., a checker board). If the test histogram is represented by a very noisy curve, this histogram can be analysed so that the search position of the surface brightness I_surf can be determined according to the centre of mass.
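A sketch of this choice of object brightness, assuming an 8-bit grey-scale region (the centre-of-mass fallback for very noisy histograms is included; the names are illustrative):

    import numpy as np

    def surface_brightness(region: np.ndarray, noisy: bool = False) -> float:
        """Brightness of a surface as the most frequent grey value of its
        histogram; for a very noisy histogram the centre of mass is used."""
        hist = np.bincount(region.ravel(), minlength=256).astype(float)
        if noisy:
            return float(np.sum(np.arange(256) * hist) / hist.sum())
        return float(np.argmax(hist))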
The same applies for a background with brightness I_bgrd (background). Generally, one can assume that the test edge separates the dark background from a bright surface (Fig. 2.1). If not, the roles of the surface and of the background have to be interchanged.

The position of the edge is scanned on the grey scale profile (Fig. 2.2). This is created along a scanning line which begins on a dark background area and runs to the bright surface area, all within a test area, e.g., a rectangle (Fig. 2.1). The test results of the histogram (Fig. 2.3) from the test area are used here simultaneously.

Using the captured histogram (Fig. 2.3) from the test area (Fig. 2.1), the surface brightness I_surf as well as the background brightness I_bgrd can be determined. Here, it is important to determine a typical brightness separation value I_sprt (separation) to be able to separate the corresponding parts of the histogram (background and surface) from one another (Fig. 2.3). The methods for the determination of this separation value, of the surface brightness I_surf, and of the background brightness I_bgrd will be outlined later on.

The edge localization is done within a test distance L_0 along a scanning line, while the local maximum brightness I_max and the local brightness increase ΔI are determined at the test distance L_0 and compared to the edge-specific minimum brightness I_0 and to the edge-specific minimum brightness increase ΔI_0 (minimum difference of the grey scale values). The length of the test distance L_0, the edge-specific minimum brightness value I_0, and the edge-specific brightness increase ΔI_0 are calculated using the brightness values of the test area.
Fig. 2.1. On-edge detection methods (scheme)

Fig. 2.2. Grey-scale profile across an edge

Fig. 2.3. Histogram of the test area
The examination of the histogram is followed by a curve sketching of the captured grey scale profile. Since, as assumed, the grey scale profile shows a Gaussian profile, this curve represents a normal distribution according to Gauss, and thus an exponential function, showing certain correlations between characteristic points.

The grey scale Gaussian profile can be described as follows [17]:

    I(x) = I_surf · exp(−x² / (2σ²)),                                  (2.1)

where I(x) is the current brightness of the test point at the distance x from the profile maximum; I_surf is the surface brightness at the profile maximum; x is the distance of the test point from the profile maximum point; and σ is the Gaussian profile's standard deviation.
The most important points on the grey scale profile according to this technique are the points that lie at a distance corresponding to the single or double standard deviation σ from the maximum of the profile (Fig. 2.2).

Starting at the maximum of the profile, the single standard deviation σ marks the turning point of the grey scale profile I_turn (turn point), indicating that theoretically there may be an edge:

    I_turn = I(σ) = I_surf · exp(−1/2) = I_surf · ξ_1,                 (2.2)

where ξ_1 ≈ 0.607 is the turning point coefficient. The double standard deviation 2σ marks the point with the grey scale profile intensity of an edge; this is where the background is located:

    I_bgrd = I(2σ) = I_surf · exp(−2) = I_surf · ξ_2,                  (2.4)

where ξ_2 ≈ 0.135 is the edge point coefficient. The distance between the points with the values I_turn and I_bgrd also corresponds to the standard deviation σ and is therefore strictly dependent on the respective grey scale profile (Fig. 2.2). The ratio of these values is, however, constant for all possible grey scale profiles crossing an edge and represents a minimum brightness factor η_0. It follows from (2.1) to (2.4) that

    η_0 = I_bgrd / I_turn = ξ_2 / ξ_1 = exp(−3/2) ≈ 0.223.             (2.6)
So the brightness factor η_0 defines the minimum ratio of the brightness values of the background and the (so far) theoretical edge position. The brightness at the first possible edge location can be defined as the edge-specific minimum brightness I_0. Thus the point where the grey scale profile shows the edge-specific minimum brightness I_0 is considered the third important point of the Gaussian analysis profile.
Remarkably, the brightness factor η_0 represents a constant that is important beyond image processing. Generally speaking, this constant indicates the presence of a passing or a crossover if the corresponding process is a Markov process or, in other words, whether it shows the Gaussian distribution [17, 18]. This phenomenon occurs in a number of real-world situations.

The most widely known example is the 80–20 rule, also known as the Pareto principle [19]. This states that the first 20% of an effort is responsible for 80% of the result, and the other 20% of the result requires the remaining 80% of the overall effort. According to another example from economics, 75% of all world trade is turned over among 25% of the global population. These cases describe the beginning of a qualitative change in a quantitative process, with the limit lying between 20 and 25%. Thus, the constant η_0 ≈ 0.223 can be understood as a universal constant which marks the limit of this change. With regard to the grey value profile that is oriented at 90° to an edge, this constant precisely and reliably determines the place at which the test edge can be located.
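The characteristic points and the constant η_0 follow directly from the Gaussian model (2.1); the short numerical check below merely illustrates these relations (values and names are for demonstration only):

    import math

    def gauss_profile(x: float, i_surf: float, sigma: float) -> float:
        """Grey-scale profile (2.1) across an edge."""
        return i_surf * math.exp(-x ** 2 / (2.0 * sigma ** 2))

    i_surf, sigma = 200.0, 5.0
    i_turn = gauss_profile(sigma, i_surf, sigma)        # turning point at x = sigma
    i_bgrd = gauss_profile(2 * sigma, i_surf, sigma)    # background point at x = 2 sigma
    eta0 = i_bgrd / i_turn                              # = exp(-3/2)
    print(round(i_turn / i_surf, 3), round(i_bgrd / i_surf, 3), round(eta0, 3))
    # prints: 0.607 0.135 0.223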
In order to determine the edge, a minimum test distance L_0 is defined inside which the test edge can be located, so that the high-interference areas neighbouring the background or the surface lie outside this distance. Since an edge means an ascent of the grey scale curve, the grey scale profile must show an edge-specific minimal brightness increase ΔI_0 (difference of grey values), found at the edge within the test distance L_0.
The length of the test distance L_0 must not be less than the distance between the turn point I_turn and the edge point (background brightness I_bgrd), ensuring that the position of the edge is definitely within the test distance L_0. This distance corresponds to the standard deviation σ of the grey scale profile. At the same time, the test distance L_0 must not exceed double the standard deviation σ. Otherwise, the test distance L_0 becomes larger than the entire transition area between the background and the surface (Fig. 2.2). This is the reason why the following condition for the test distance L_0 must be met:

    σ ≤ L_0 ≤ 2σ.                                                      (2.7)
In order to determine the parameters I_0 and ΔI_0, the following limiting cases can be considered. If the final point of the test distance L_0 has already reached the surface, i.e. the local maximum brightness I_max corresponds to the surface brightness I_surf, the edge is still within this test distance. Then the local minimum brightness I_min does not yet correspond to the surface brightness (I_min < I_surf) and exceeds the background brightness (I_min > I_bgrd). It indicates a transition area between the background and the surface (Fig. 2.2) where an edge can lie, and thus determines the edge-specific minimum brightness I_0:

    I_0 = I_surf · η_0.                                                (2.13)
If the end point of the test distance L_0 is already in the position that corresponds to the brightness I_0, and the starting point is still on the background, then, for a real edge, the condition (2.9) must be met first. This means that the search for an edge must not begin until the local brightness increase ΔI within the test distance L_0 has reached the edge-specific brightness increase ΔI_0. Assuming that the starting point of the test distance L_0 is in an extreme position where the brightness value is zero, the edge-specific minimum brightness increase ΔI_0 determining the start condition for the edge localization can be defined as follows:

    ΔI_0 = I_0 − 0 = I_0 = I_surf · η_0.                               (2.14)
An edge can therefore be present only between the two following positions. The first position corresponds to the end point of the test distance L_0 when the conditions (2.8) and (2.9) are simultaneously met within the test distance L_0 for the first time. The second position corresponds to the starting point of the test distance L_0 when the conditions (2.8) and (2.9) are simultaneously met within the test distance L_0 for the last time.
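A sketch of this bracketing along the grey scale profile, assuming that the two conditions are I_max ≥ I_0 and ΔI ≥ ΔI_0 within the test distance (as the surrounding text suggests; names and details are illustrative):

    import numpy as np

    def bracket_edge(profile, l0: int, i0: float, d_i0: float):
        """Positions between which the edge can lie: end of the first and start
        of the last window of length L0 whose local maximum reaches I0 and
        whose local brightness rise reaches dI0."""
        p = np.asarray(profile, dtype=float)
        hits = []
        for s in range(len(p) - l0):
            w = p[s:s + l0 + 1]
            if w.max() >= i0 and (w.max() - w.min()) >= d_i0:
                hits.append(s)
        if not hits:
            return None
        return hits[0] + l0, hits[-1]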
Since the grey scale profile is represented by a Gaussian function, the following is implied by (2.13) and (2.1), under the condition that the end point of the test distance L_0 has the brightness I_surf and the starting point has the brightness I_0:

    L_0 = σ · √3 ≈ 1.73 σ.

This corresponds to the condition for the length of the test distance L_0 (2.7) as assumed above and demonstrates at the same time that this length is image-specific and characteristic of the curve.
The edge-specific minimum brightness I_0 must be higher than the potential noise-induced upward deviations of the background brightness I_bgrd (Fig. 2.3). At the same time, it has to feature the edge-specific minimum brightness increase ΔI_0 that is defined by the surface brightness I_surf and by the minimum brightness factor η_0. For this reason, further parameters might be useful for edge recognition (Fig. 2.3): the lower (I_sprt_dark) and the upper (I_sprt_light) brightness separation values as well as the lower (δ_dark) and the upper (δ_light) safety clearances determining these brightness separation values.

The lower (I_sprt_dark) and upper (I_sprt_light) brightness separation values separate the surface and the background areas on the grey scale value profile (Fig. 2.2) and on the histogram (Fig. 2.3) from the other areas that can be adversely affected by possible interferences. This allows for a practical curve sketching that is, for example, needed for the determination of the standard deviation σ of the grey scale value.
As the edge-specific minimum brightness I_0 is assumed to be the lower brightness separation value I_sprt_dark (Fig. 2.2), the following applies according to (2.13):

    I_sprt_dark = I_0 = I_surf · η_0.                                  (2.17)

The value range [I_bgrd, I_sprt_dark] can be defined as a security zone in the background area. This value range corresponds to the safety clearance δ_dark between the background and the position from which on an edge can be present.
Analogously, the value range [I_sprt_light, I_surf] can be defined as a security zone in the surface area. This value range corresponds to the safety clearance δ_light. The difference between the surface brightness I_surf and the upper brightness separation value I_sprt_light will be guaranteed by the edge-specific minimum brightness increase ΔI_0:

    I_surf − I_sprt_light = ΔI_0.                                      (2.18)

From (2.18) and (2.14), it follows that

    I_sprt_light = I_surf − ΔI_0 = I_surf · (1 − η_0).                 (2.19)
The exact edge position as well as the standard deviation σ, which is called the half-edge width, can then be located using well-known calculation methods within the range [I_sprt_dark, I_sprt_light] at a much higher accuracy, even at sub-pixel accuracy.

However, interferences that occur within the range [I_sprt_dark, I_sprt_light] can provoke local ascents on the test grey scale profile, which leads to a false edge detection or even to no detection at all if the conventional calculation methods are applied. The described technique offers an excellent opportunity here to analyse the collected data as a Gaussian curve using the method of least squares. As the background brightness I_bgrd and the surface brightness I_surf are both already known at this point, only two parameters, i.e., the half-edge width σ and the position of the profile maximum x_0, remain to be calculated.
The half-edge width σ can be calculated using an auxiliary distance l_0 which corresponds to the distance between the positions where the profile reaches the values I_sprt_dark and I_sprt_light for the first time. As the grey scale profile is a Gaussian function, it follows from (2.19) and (2.1) that these positions lie at approximately 0.71σ and 1.73σ from the profile maximum, so that l_0 ≈ 1.02σ. The auxiliary distance l_0 can therefore be assumed as the standard deviation σ of the grey scale profile without loss of generality:

    σ ≈ l_0.
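A sketch of the least-squares fit of the Gaussian model to the profile samples between the two separation values (scipy is used here purely for convenience; this is not the book's implementation):

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_edge(x, i, i_surf: float, sigma_guess: float, x0_guess: float):
        """Fit I(x) = I_surf * exp(-(x - x0)^2 / (2 sigma^2)) to the samples
        between I_sprt_dark and I_sprt_light; I_surf is already known, so only
        the half-edge width sigma and the maximum position x0 are free."""
        def model(xs, sigma, x0):
            return i_surf * np.exp(-(xs - x0) ** 2 / (2.0 * sigma ** 2))
        (sigma, x0), _ = curve_fit(model, np.asarray(x, float),
                                   np.asarray(i, float),
                                   p0=(sigma_guess, x0_guess))
        return abs(sigma), x0   # half-edge width and sub-pixel maximum position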
As soon as all parameters for the curve sketching of the grey scale profile are defined, the question arises: where should the necessary background brightness I_bgrd and the surface brightness I_surf come from? For this, a histogram of the inspection area can be used that displays the brightness values of the background as well as the brightness values of the surface (Fig. 2.3). In order to be able to separate the corresponding areas from each other, the brightness separation value I_sprt is required. This value is derived from the turn point I_turn of the grey scale profile (Fig. 2.2), which obviously has a brightness value lying between the background brightness I_bgrd and the surface brightness I_surf (whose ratios are defined in (2.2) and (2.4)).
For the definition of the brightness separation value I_sprt, which serves only for orientation in this particular case, one could use the general brightness values of the background I_bgrd_gen or of the surface I_surf_gen instead of the background brightness I_bgrd or the surface brightness I_surf. The former can be determined in a separate area containing only the background or only the surface by the use of a histogram. From (2.2), (2.4), and (2.6) it follows that

    I_sprt = I_bgrd_gen / η_0.                                         (2.25)

After separating the histogram of the test area (Fig. 2.1) in two, the brightness values for the background lie in the histogram range [I_dark, I_sprt] and those for the surface in the range [I_sprt, I_light], from which the background brightness I_bgrd and the surface brightness I_surf can be determined. They can then be refined more precisely by calculating further local separation values I_sprt_dark and I_sprt_light on the basis of the already calculated values I_bgrd and I_surf according to (2.17) and (2.19). From the newly defined ranges [I_dark, I_sprt_dark] and [I_sprt, I_sprt_light], the background brightness I_bgrd and the surface brightness I_surf have to be recalculated.
At this point, the main issues concerning edge detection can be resolved. An edge is present in the test area only if the corresponding condition is fulfilled there. If this condition is met, the edge localization is run within the distance where the grey scale profile shows brightness values within the range [I_sprt_dark, I_sprt_light].
But if the object and the background have little brightness difference, such as in the case of non-edged lumber, the condition (2.18) is no longer valid. Then the edge-specific minimal brightness increase ΔI_0 is calculated on the basis of a standardized Gaussian profile with consideration of the existing background brightness I_bgrd:

    ΔI_0 = (I_surf − I_bgrd) · η_0.                                    (2.27)

Thus the lower brightness separation value I_sprt_dark from (2.17) and the upper brightness separation value I_sprt_light from (2.18) are calculated with the edge-specific minimum brightness I_0:

    I_0 = I_bgrd + ΔI_0 = I_bgrd + (I_surf − I_bgrd) · η_0;            (2.28)
    I_sprt_light = I_surf − ΔI_0 = I_surf − (I_surf − I_bgrd) · η_0.   (2.29)
The half-edge width σ and an exact edge position can now be determined by the above technique without loss of generality.

In all cases, the test surface can be assumed to be a luminescent source, and a curve sketching of the grey scale profile according to the rules described above can be carried out. The important thing to note is that the histogram and the grey scale profile are both treated as Gaussian distributions for edge detection and evaluation and are used together as source data. In this way, the physical background of edge formation is taken into consideration, and with it the pre-conditions for dynamically determining image-specific and edge-characteristic parameters are created, which are adapted to global and local brightness conditions. A technique for adaptive edge detection [20] is thus defined.
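Putting the pieces together, an adaptive single-edge detection along one scanning line might look like the following sketch. All names are mine, the crude histogram separation by the mean value is only a stand-in for the I_sprt relation derived above, and the code illustrates the described procedure rather than reproducing the book's C library:

    import math
    import numpy as np

    ETA0 = math.exp(-1.5)                      # brightness factor eta_0 ~ 0.223

    def adaptive_edge(profile, test_area):
        """Adaptive edge search: all thresholds are derived from the current
        test area instead of being fixed in advance."""
        area = np.asarray(test_area, dtype=np.uint8)
        hist = np.bincount(area.ravel(), minlength=256).astype(float)

        # separate the histogram into a background and a surface part
        i_sprt = max(1, int(area.mean()))
        i_bgrd = float(np.argmax(hist[:i_sprt]))            # background brightness
        i_surf = float(i_sprt + np.argmax(hist[i_sprt:]))   # surface brightness

        # edge-specific parameters, cf. (2.27)-(2.29)
        d_i0 = (i_surf - i_bgrd) * ETA0
        i_sprt_dark = i_bgrd + d_i0
        i_sprt_light = i_surf - d_i0

        # edge candidates: profile samples inside [I_sprt_dark, I_sprt_light]
        p = np.asarray(profile, dtype=float)
        idx = np.where((p >= i_sprt_dark) & (p <= i_sprt_light))[0]
        return None if idx.size == 0 else (int(idx[0]), int(idx[-1]))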
Trang 30An edge detection on a wooden board will be presented as an example forthis technique As can be seen on the test image, different local brightnessvalues can occur at the edges of the board (Fig.2.4a) Furthermore, for differ-ent reasons (varying processing quality, unevenness of the board, etc.) theseedges are displayed with varying sharpness in some places.
The current image shows an edge recognition that has been carried outfrom the background to the surface The test area of the edge detection hasbeen moved downwards from above The iteration step of the test area can
be defined as required: from a continuous down to a sufficiently accurate scan
of the required board edge The height of the test area must be a least threepixels in order to ensure a representative mean value for the set-up of thegrey scale profile, according to Shannon’s theorem [3] With a higher number
of scanning pixels, the edge detection can be severely affected by randomlyoccurring interferences
The results of the edge detection are shown on the processing image (Fig. 2.4b), where all detected edge positions have been marked green.

However, if different interference objects, like chips, are captured in the image in front of the test edges (Fig. 2.5a), a major problem can occur: such objects can be detected instead of the required board edge (Fig. 2.5b).

Fig. 2.4. Edge detection on a wooden board: (a) source image and (b) the resulting image (detected edges)

Fig. 2.5. Edge detection on a wooden board in an environment with many interferences: (a) source image; (b) resulting image (without error checking, detected edges); and (c) resulting image (with error checking, detected edges)
If the security zone following a detected edge position shows a surface brightness whose deviations do not fall below the threshold I_sprt_light, the detected edge is valid. Otherwise, the edge localization is continued. The length of the security zone and the threshold are determined from the current image, so that the technique keeps its adaptivity. The length of the security zone can be defined as a multiple of the local half-edge width σ_loc. At the same time, this length must take into account the maximum width of the occurring interferences. This ensures a secure and explicit edge detection on an image with numerous interferences (Fig. 2.5c).
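The security-zone check against interference objects such as chips could be sketched as follows (the factor used for the zone length is an assumption; the names are illustrative):

    import numpy as np

    def edge_is_valid(profile, edge_pos: int, sigma_loc: float,
                      i_sprt_light: float, zone_factor: float = 3.0) -> bool:
        """Accept a detected edge only if the security zone behind it really
        shows surface brightness, i.e. does not drop below I_sprt_light."""
        zone_len = int(zone_factor * sigma_loc)   # multiple of local half-edge width
        zone = np.asarray(profile, dtype=float)[edge_pos + 1:edge_pos + 1 + zone_len]
        return zone.size > 0 and float(zone.min()) >= i_sprt_light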
The individual erroneous or undetected edge positions can be corrected on the basis of other positions that have been detected correctly. A corresponding routine will be presented in Sect. 2.2.
2.1.2 Double Edge
A much bigger challenge is a double edge. It occurs when edge detection is carried out on the outer side of a non-square-edged sawn board with bark (Fig. 2.6).
The technique for the detection of a single edge described above must be extended in order to be used for the detection of a complex edge, e.g., a double edge. In this case, the background and the surface of the wood are no longer connected, since they are separated by an intermediate zone (bark). Therefore, the brightness values I_surf and I_bgrd (Figs. 2.7 and 2.8) cannot be regarded as interdependent values. The two required edges are detected separately in the corresponding transition areas (Fig. 2.8). But in order to do this, one needs separate edge-specific minimum values for the brightness I_0 and the brightness increase ΔI_0 in every transition area to be inspected. Furthermore, for the calculation of the corresponding area-specific parameters, the overall brightness separation value I_sprt is required. These parameters can be acquired from two additional image areas containing only the background or only the surface, respectively. The overall brightness separation value I_sprt can be calculated according to (2.25), and then, by analysing the corresponding histogram (Fig. 2.8), it can be used for the determination of the background brightness I_bgrd or the surface brightness I_surf in the test area. This area must be large enough (e.g., a multiple of the expected bark width) in order to capture the background and the surface on the acquired histogram.
It is easy to understand that no area-specific surface brightness I_surf can be determined for the edge between the background and the bark (Fig. 2.7). This is the reason why all further area-specific parameters must be derived from the background brightness I_bgrd.
The brightness I_surf_virt can be regarded as a virtual surface brightness. Together with the background brightness I_bgrd, it forms a grey scale profile for the edge between the background and the bark. According to (2.4), it can be defined as follows:

    I_surf_virt = I_bgrd / ξ_2.                                        (2.31)

Fig. 2.6. Schematic presentation of the double edge detection technique: (a) plan view and (b) cross-sectional view

Fig. 2.7. Grey-scale profile across a double edge

Fig. 2.8. Histogram of the test area at the double edge
The area-characteristic and edge-specific minimum brightness I_0_bgrd for this transition can then be calculated with (2.13), (2.31), and (2.6):

    I_0_bgrd = I_surf_virt · η_0 = I_bgrd · η_0 / ξ_2 = I_bgrd / ξ_1.  (2.32)

Accordingly, the area-characteristic as well as edge-specific minimum brightness increase ΔI_0_bgrd is defined according to (2.14), (2.31), and (2.6):

    ΔI_0_bgrd = I_0_bgrd = I_bgrd / ξ_1.                               (2.33)
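Assuming the coefficient values used above (ξ_1 ≈ exp(−1/2), ξ_2 ≈ exp(−2), η_0 ≈ exp(−3/2); the displayed equations of the original are not fully preserved here, so this is a hedged reconstruction), the area-specific parameters of the background–bark transition can be sketched as:

    import math

    XI1 = math.exp(-0.5)     # turning point coefficient
    XI2 = math.exp(-2.0)     # edge point coefficient
    ETA0 = math.exp(-1.5)    # brightness factor

    def bark_edge_parameters(i_bgrd: float):
        """Area-specific parameters of the background-bark edge, derived from
        the background brightness alone (no real surface exists there)."""
        i_surf_virt = i_bgrd / XI2        # virtual surface brightness, cf. (2.31)
        i0_bgrd = i_surf_virt * ETA0      # edge-specific minimum brightness
        d_i0_bgrd = i0_bgrd               # minimum brightness increase, = I_bgrd / XI1
        return i_surf_virt, i0_bgrd, d_i0_bgrd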
Also, no area-characteristic background brightness I_bgrd can be calculated for the edge between the bark and the surface (Fig. 2.7). In this case, the area-characteristic values for the edge-specific minimum brightness I_0_surf and the edge-specific minimum brightness increase ΔI_0_surf can be calculated on the basis of the surface brightness I_surf. The calculation of these values can be carried out in the same way as was done for the edge between the background and the bark. However, it must be ensured that the calculations are not affected by the portions of the histogram (Fig. 2.8) and of the grey scale profile (Fig. 2.7) that constitute the brightness values of the intermediate zone (bark). In order to avoid this and still fulfil the condition (2.25), the corresponding area-characteristic parameters are defined accordingly. This ensures that the scan for the bark surface cannot start in an inappropriate area, but only in a place where the required edge must be located.

The high effectiveness of the new adaptive technique can be demonstrated on square-edged and on non-square-edged wooden boards.
A wooden board shows a typical double edge on the outer bark side (Fig. 2.9a, c). But the edges between the background and the bark are wavy and hardly visible on a dark background. These edges can be seen with the naked eye only on a negative image (Fig. 2.9b, d), whereas the detection of the double edges of a wooden board is done on the source image. The results (Fig. 2.9) show that a good detection of double edges can be achieved by the extended adaptive technique.

The few edge positions that have been incorrectly detected or not detected (especially on the difficult edges between the background and the bark) can be corrected based on the other correctly detected positions. A corresponding routine will be presented in Sect. 2.2.
The detection of edges on square-edged boards with the same technique produces even better results (Fig. 2.10), with both probable edges detected at the same position.

So an edge, which may be a double edge or a single edge (often the case with wooden boards that are squared on one side), can be detected by the same adaptive technique. This technique thus ensures secure and precise edge detection.
2.1.3 Multiple Edges
A further complexity in edge detection arises with multiple edges, that is, edges composed of more than two single edges. Analogously to the detection of a double edge, each one of them can be detected beginning either on the wood surface or on the background.

Fig. 2.9. Detection of a double edge on non-square-edged boards: (a, c) resulting image (detected edges: background–bark, bark–wood); (b, d) negative of the resulting image
The first possibility is carried out as follows. The brightest edge is detected with the above technique for the recognition of a double edge. The first edge that is recognized shows a brightness that can be used as the local surface brightness I_surf_loc for the detection of the next darker edge. With this chain localization, all other edges are detected. However, all search steps should be matched to the background brightness I_bgrd, so that the edge detection with this method does not run out beyond the background.
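The chain localization of a multiple edge (brightest edge first) might be sketched as follows; detect_edge stands for any single-edge detector that works with a given local surface and background brightness, and all names are illustrative:

    def detect_multiple_edges(profile, i_surf: float, i_bgrd: float, detect_edge):
        """Chain localization: detect the brightest edge first, then use its
        brightness as local surface brightness for the next darker edge, until
        the search would run out beyond the background."""
        edges, i_surf_loc = [], i_surf
        while i_surf_loc > i_bgrd:
            result = detect_edge(profile, i_surf_loc, i_bgrd)   # (position, edge brightness)
            if result is None:
                break
            pos, i_edge = result
            edges.append(pos)
            if i_edge <= i_bgrd:          # do not run out beyond the background
                break
            i_surf_loc = i_edge           # local surface brightness for the next edge
        return edges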
The second possibility of edge detection in a complex multiple edge is based on the recognition of the darkest edge with the above technique of edge recognition. The first edge recognized shows a brightness that can then be used as the local background brightness for the detection of the next brighter edge, and so on.

It is practical to control this extended technique with adaptivity parameters whose values lie in the range [0–1] instead of using the turning point coefficient ξ_1, the edge point coefficient ξ_2, and the brightness factor η_0 directly. An adaptive edge detection can now be carried out for single edges as well as for multiple edges, ensuring a reliable, flexible, and explicit detection of the edges.
2.2 Non-Linear Approximation as Edge Compensation
In the detection of an edge that can be mathematically described in a definitive manner by a curve (e.g., a straight line, circle, ellipse, and so on), the replacement of runaways (outliers) or non-detected points is rather easy and is well documented in the literature [21]. The smoothing of a curve can also be carried out using statistical considerations and calculations [22]. Here, the curve to be smoothed will represent a dataset that is twice continuously differentiable and continuously and monotonously growing.

If edge detection on a natural product, e.g., a wood board, is required, there is no definite curve available for the edge description. The mathematical technique has to be adapted to a previously undefined curve.

In practice, e.g., in the wood industry, the evaluation of such edges has been carried out by humans only, and therefore represents an "intuitive" edge tracing. This is an experience-based technique that smoothens individual runaways in order to achieve a relatively long and continuous edge curve for the measurement of the board. Since such a technique is state of the art according to various standards [23–26], the mathematical compensation of detected edge points is carried out, in a way, in a human manner.

As an example of such a smoothing, all the edge points of an outer edge of a non-square-edged wooden board detected by the adaptive technique (Sect. 2.1) (Fig. 2.9c) can be taken. Each one of the four detected edges shows the following:
• single non-detected points;
• points matching most points of the detected contour;
• points not matching the points of the contour. These points correspond to allowed natural deviations of the wood edge (e.g., the course of the grown wood edge);
• random points that do not belong to the contour (runaways).
In order to smoothen the detected contour curve, all missing points, the points representing runaways, and the negligible deviations must be filtered out and replaced by matching points. The greater deviations with greater influence on the course of the curve must not be filtered out, but have to determine the run of the edge curve. These conditions can be fulfilled if the test board does not show a great bend (banana form); then all true edge points have a natural deviation from the edge line which complies with the Gaussian distribution. Since bent boards are cast out anyway in the pre-run, this assumption is valid. The total length of an edge, which can be displayed as an alignment of all points, forms the basis for a statistical evaluation referring to the Gaussian distribution.
From all detected points, an offset straight line and the corresponding standard deviation are calculated by using the method of least squares. The position of all points is then checked in relation to the offset straight line.

As is well known, the result of this analysis is a Gaussian distribution of point coordinates, as produced by physical measurements. Therefore a security measure P_Φ of 68.3% exists for the natural deviation of the points that must be located in the admissible deviation range. This deviation range has a width corresponding to the standard deviation of all detected points from the offset straight line [17]. This means that the test distance has a natural course only if the detected deviations do not amount to more than 31.7% of the total length, which corresponds to an insecurity of 1 − P_Φ. In these cases, they are tolerated. Otherwise, the deviation points must be replaced by appropriate replacement points.
The replacement of points that have not been detected, or of points with a deviation that is too strong, is done in different ways, depending on the position of the missing point.

First, all missing points that are surrounded by two valid points are replaced, regardless of their distance to the replacement point. The coordinates of such a point are individually calculated using a straight line that connects the surrounding valid points.

Later on, the missing points outside of the valid point range are replaced. This is also done by using an averaged straight line. But this time the corresponding straight line is calculated by the method of least squares on the basis of all valid points, both the old ones and the newly calculated ones.
However, this technique replaces only the big runaways. The corresponding fluctuations are as possible on the local test distances as they are on the whole test distance; however, they are much smaller there. The fluctuations can be corrected by carrying out the previously described technique not only globally but also locally. This produces a repeated optimization that regards any further local test distance as an entire test distance. The reduction of the test distance can be regarded as a transformation from chaos to order according to the period-doubling scenario of fractal theory [27]. Therefore, the bifurcation of test distances is done using a continuous periodicity of 2^i. The last local test distance must contain at least 10 points in order to produce a statistically significant point set [17]. Moreover, all optimization steps provide a certain overlapping of local test distances, so that the optimization process ensures a seamless edge course. Since in any test distance the share of representative points must correspond to at least 68.3% of all existing points, the overlapping of two adjacent local test distances can, at maximum, contain the remaining 31.7% of all points.
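A sketch of the described compensation: a least-squares offset line, a tolerance band of one standard deviation (about 68.3% of the points), replacement of the runaways, and a repetition of the procedure on overlapping halves of the test distance. The simple linear replacement and all names are illustrative assumptions:

    import numpy as np

    def smooth_edge(y, min_pts: int = 10) -> np.ndarray:
        """Replace runaways against the least-squares offset line, then repeat
        the pass on overlapping halves (period doubling of test distances)."""
        y = np.asarray(y, dtype=float).copy()
        x = np.arange(len(y))

        def one_pass(sl: slice) -> None:
            xs, ys = x[sl], y[sl]
            if xs.size < 2:
                return
            a, b = np.polyfit(xs, ys, 1)             # offset straight line
            resid = ys - (a * xs + b)
            ok = np.abs(resid) <= resid.std()        # ~68.3% tolerance band
            if ok.sum() >= 2 and (~ok).any():
                a, b = np.polyfit(xs[ok], ys[ok], 1) # refit on valid points only
                y[sl][~ok] = a * xs[~ok] + b         # replace the runaways

        segments = [slice(0, len(y))]
        while segments:
            nxt = []
            for sl in segments:
                one_pass(sl)
                half = (sl.stop - sl.start) // 2
                if half >= min_pts:                  # bifurcation with periodicity 2**i
                    overlap = max(1, int(0.317 * half))
                    nxt.append(slice(sl.start, sl.start + half + overlap))
                    nxt.append(slice(sl.stop - half - overlap, sl.stop))
            segments = nxt
        return y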
This means that the suggested technique represents a non-linear approximation of the edge point alignment with overlapping, performing an edge compensation as an adaptive fractal optimization process. The results of applying this technique to the edge detection on the outer side of non-square-edged boards (Fig. 2.9a, c) are shown in Fig. 2.11. They show that, using an adaptive fractal optimisation process, an exact adaptation and completion of the detected edge points to the real course of the edge, and thus an "intuitive" edge tracing, can be achieved.

Fig. 2.11. Non-linear approximation of detected edges on non-square-edged boards: (a, c) resulting image (detected edges: background–bark, bark–wood); (b, d) negative of the resulting image