Digital Image Processing
The scope of our treatment of digital image processing includes recognition of individual image regions, which we call objects or patterns.
The approaches to pattern recognition are divided into two principal areas:
Decision-theoretic: deals with patterns described by quantitative descriptors, such as length, area, and texture.
Structural: deals with patterns best described by qualitative descriptors, such as relational descriptors.
II Patterns and pattern classes
A pattern is an arrangement of descriptors. The name feature is often used in pattern recognition to denote a descriptor.
A pattern class is a family of patterns that share some common properties.
ω1, ω2, ..., ωK denote the pattern classes, where K is the number of classes.
Pattern recognition by machine involves techniques for assigning patterns to their respective classes automatically (and with as little human intervention as possible).
Three common pattern arrangements used in practice are:
Vectors: for quantitative descriptions
Strings and trees: for qualitative descriptions
Pattern vectors are represented by bold lowercase letters, such as x, y, and z, and take the form x = (x1, x2, ..., xn)T, where each component xi is the ith descriptor.
Another example: we can form pattern vectors by letting x1 = r(θ1), ..., xn = r(θn). The vectors then become points in n-dimensional space.
In some applications, pattern characteristics are best described by structural relationships.
For example, fingerprint recognition is based on the interrelationships of print features. Together with their relative sizes and locations, these features are primitive components that describe fingerprint ridge properties, such as abrupt endings, branching, and disconnected segments.
Recognition problems of this type, in which not only quantitative measures about each feature but also the spatial relationships between the features determine class membership, generally are best solved by structural approaches.
III Recognition Based on Decision-Theoretic Methods
Decision-theoretic approaches to recognition are based on the use of decision functions.
Let x = (x1, x2, ..., xn)T represent an n-dimensional pattern vector, and let ω1, ω2, ..., ωW denote W pattern classes.
The basic problem in decision-theoretic pattern recognition is to find W decision functions d1(x), d2(x), ..., dW(x) with the property that, if pattern x belongs to class ωi, then
di(x) > dj(x)   for j = 1, 2, ..., W; j ≠ i   (12.2-1)
The decision boundary separating class ωi from class ωj is given by values of x for which di(x) = dj(x), or, equivalently, by values of x for which
di(x) - dj(x) = 0
Common practice is to identify the decision boundary between two classes by the single function dij(x) = di(x) - dj(x) = 0. Thus dij(x) > 0 for patterns of class ωi and dij(x) < 0 for patterns of class ωj.
III.1 Matching by minimum distance classifier
Suppose that we define the prototype of each pattern class to be the mean vector of the patterns of that class:
mj = (1/Nj) Σ x,  with the sum taken over x in ωj,  j = 1, 2, ..., W
where Nj is the number of pattern vectors from class ωj.
Using the Euclidean distance to determine closeness reduces the problem to computing the distance measures
Dj(x) = ||x - mj||,  j = 1, 2, ..., W
We assign x to class ωj if Dj(x) is the smallest distance.
It is not difficult to show that selecting the smallest distance is equivalent to evaluating the functions
dj(x) = xT mj - (1/2) mjT mj,  j = 1, 2, ..., W
and assigning x to class ωj if dj(x) yields the largest numerical value. This formulation agrees with the concept of a decision function as defined in Eq. (12.2-1).
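As a minimal sketch of this classifier (not part of the original slides), the following Python code estimates the prototypes mj from training data and evaluates the decision functions dj(x); the function names and the use of NumPy are assumptions.

import numpy as np

def train_min_distance(patterns, labels):
    # Prototype of each class: the mean vector m_j of the training patterns of that class.
    labels = np.asarray(labels)
    return {c: patterns[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_min_distance(x, prototypes):
    # Evaluate d_j(x) = x^T m_j - 0.5 m_j^T m_j and pick the class with the largest value,
    # which is equivalent to picking the smallest Euclidean distance D_j(x) = ||x - m_j||.
    scores = {c: x @ m - 0.5 * (m @ m) for c, m in prototypes.items()}
    return max(scores, key=scores.get)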
III.2 Matching by Correlation
The problem is to find matches of a subimage w(s,t) of size J x K within an image f(x,y) of size M x N, where we assume that J <= M and K <= N.
In its simplest form, the correlation between f(x,y) and w(x,y) is
c(x,y) = Σs Σt w(s,t) f(x+s, y+t)   (12.2-7)
for x = 0, 1, ..., M-1 and y = 0, 1, ..., N-1, where the summation is taken over the image region where w and f overlap.
We move w around the image area, giving the function c(x,y). The maximum value(s) of c indicate(s) the position(s) where w best matches f.
The correlation function given in Eq. (12.2-7) has the disadvantage of being sensitive to changes in the amplitude of f and w. For example, doubling all values of f doubles the value of c(x,y).
Another approach is to perform matching via the correlation coefficient, which is defined as
γ(x,y) = Σs Σt [w(s,t) - w̄][f(x+s, y+t) - f̄(x,y)] / { Σs Σt [w(s,t) - w̄]^2  Σs Σt [f(x+s, y+t) - f̄(x,y)]^2 }^(1/2)   (12.2-8)
where x = 0, 1, ..., M-1, y = 0, 1, ..., N-1, w̄ is the average value of the pixels in w, f̄(x,y) is the average value of f in the region coincident with the current location of w, and the summations are taken over the coordinates common to both f and w.
The correlation coefficient γ(x,y) is scaled in the range -1 to 1, independent of scale changes in the amplitude of f and w.
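A minimal sketch of template matching with the correlation coefficient of Eq. (12.2-8) follows (it is not from the original slides; the function name, the brute-force double loop, and the restriction to positions where w lies entirely inside f are assumptions):

import numpy as np

def correlation_coefficient_map(f, w):
    # Slide the template w over the image f and compute gamma(x, y) at every position
    # where w fits entirely inside f.
    M, N = f.shape
    J, K = w.shape
    w_zero = w - w.mean()
    w_energy = np.sum(w_zero ** 2)
    gamma = np.zeros((M - J + 1, N - K + 1))
    for x in range(M - J + 1):
        for y in range(N - K + 1):
            region = f[x:x + J, y:y + K]
            r_zero = region - region.mean()
            denom = np.sqrt(w_energy * np.sum(r_zero ** 2))
            gamma[x, y] = np.sum(w_zero * r_zero) / denom if denom > 0 else 0.0
    return gamma

# The best match is at the position of the maximum of gamma:
# x0, y0 = np.unravel_index(np.argmax(gamma), gamma.shape)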
III.4 Optimum Statistical Classifiers
Foundation
Denote:
p(ωj/x): the probability that a particular pattern x comes from class ωj.
Lkj: the loss incurred when the pattern actually comes from class ωk but the classifier decides that it came from class ωj.
Then the average loss incurred in assigning x to class ωj is
rj(x) = Σ(k=1..W) Lkj p(ωk/x)   (12.2-9)
This quantity is often called the conditional average risk or loss in decision-theory terminology.
We know that p(A/B) = [p(A) p(B/A)] / p(B). Using this expression, we write Eq. (12.2-9) in the form
rj(x) = (1/p(x)) Σ(k=1..W) Lkj p(x/ωk) P(ωk)   (12.2-10)
where p(x/ωk) is the probability density function of the patterns from class ωk and P(ωk) is the probability of occurrence of class ωk.
Because 1/p(x) is positive and common to all the rj(x), it can be dropped from Eq. (12.2-10), and rj(x) then becomes
rj(x) = Σ(k=1..W) Lkj p(x/ωk) P(ωk)   (12.2-11)
The classifier has W possible classes to choose from for any given unknown pattern. If it computes r1(x), r2(x), ..., rW(x) for each pattern x and assigns the pattern to the class with the smallest loss, the total average loss with respect to all decisions will be minimum.
The classifier that minimizes the total average loss is called the Bayes classifier.
Thus the Bayes classifier assigns an unknown pattern x to class ωi if ri(x) < rj(x) for j = 1, 2, ..., W; j ≠ i. In other words, x is assigned to class ωi if
Σ(k=1..W) Lki p(x/ωk) P(ωk) < Σ(q=1..W) Lqj p(x/ωq) P(ωq)   for all j ≠ i   (12.2-12)
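As an illustration only (not from the slides), a direct implementation of Eqs. (12.2-11) and (12.2-12) for a general loss matrix might look as follows; the function name and the argument layout are assumptions:

def bayes_assign(densities, priors, L):
    # densities[k] = p(x / w_k) evaluated at the pattern x, priors[k] = P(w_k),
    # L[k][j] = loss incurred when x comes from class w_k but is assigned to class w_j.
    W = len(priors)
    r = [sum(L[k][j] * densities[k] * priors[k] for k in range(W)) for j in range(W)]
    # Return the class index i with the smallest conditional average risk r_i(x).
    return min(range(W), key=lambda j: r[j])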
The "loss" for a correct decision is assigned a value of 0 and the loss for any incorrect decision is assigned a value of 1. Under these conditions, the loss function becomes
Lij = 1 - δij   (12.2-13)
where δij = 1 if i = j and δij = 0 otherwise. Substituting this loss function into Eq. (12.2-11) gives rj(x) = p(x) - p(x/ωj) P(ωj), so the Bayes classifier assigns x to class ωi if p(x/ωi) P(ωi) > p(x/ωj) P(ωj) for all j ≠ i; that is, the decision functions are
dj(x) = p(x/ωj) P(ωj),  j = 1, 2, ..., W   (12.2-17)
The decision functions given in Eq. (12.2-17) are optimal in the sense that they minimize the average loss due to misclassification.
However, to use them we have to know:
The probability density functions of the patterns in each class, and
The probability of occurrence of each class.
The second requirement is not a problem. For instance, if all classes are equally likely to occur, then P(ωi) = 1/W. Even if this condition is not true, these probabilities generally can be inferred from knowledge of the problem.
Estimation of the probability density functions p(x/ωi) is another matter. If the pattern vectors x are n-dimensional, then p(x/ωi) is a function of n variables, which, if its form is not known, requires methods from multivariate probability theory for its estimation. These methods are difficult to apply in practice.
For these reasons, use of the Bayes classifier generally is based on assuming an analytic expression for the various density functions and then estimating the necessary parameters from sample patterns of each class. By far the most prevalent form assumed for p(x/ωi) is the Gaussian probability density function.
Bayes classifier for Gaussian pattern classes
Let us consider a 1-D problem (n = 1) involving two pattern classes (W = 2) governed by Gaussian densities, with means m1 and m2 and standard deviations σ1 and σ2, respectively. From Eq. (12.2-17) the Bayes decision functions have the form
dj(x) = p(x/ωj) P(ωj) = (1 / (sqrt(2π) σj)) exp[-(x - mj)^2 / (2 σj^2)] P(ωj),  j = 1, 2
Fig. 12.10 shows a plot of the probability density functions for the two classes. The boundary between the two classes is a single point, denoted x0, such that d1(x0) = d2(x0).
If the two classes are equally likely to occur, then P(ω1) = P(ω2) = 1/2, and the decision boundary is the value of x0 for which p(x0/ω1) = p(x0/ω2).
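A minimal sketch of the two-class 1-D case (not from the slides; the parameter values and function name are assumptions):

import numpy as np

def bayes_decision_1d(x, m, sigma, prior):
    # d_j(x) = p(x / w_j) P(w_j) for a 1-D Gaussian density with mean m and standard deviation sigma.
    return prior * np.exp(-(x - m) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Hypothetical example: classify x = 1.7 with equally likely classes.
d1 = bayes_decision_1d(1.7, m=1.0, sigma=0.5, prior=0.5)
d2 = bayes_decision_1d(1.7, m=3.0, sigma=1.0, prior=0.5)
label = 1 if d1 > d2 else 2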
In the n-dimensional case, the Gaussian density of the vectors in the jth pattern class has the form
p(x/ωj) = (1 / ((2π)^(n/2) |Cj|^(1/2))) exp[-(1/2)(x - mj)T Cj^-1 (x - mj)]
where mj is the mean vector and Cj is the covariance matrix of class ωj, approximated from the Nj training patterns of that class as
mj = (1/Nj) Σ x   and   Cj = (1/Nj) Σ x xT - mj mjT
with the sums taken over x in ωj.
Because of the exponential form of the Gaussian density, working with the natural logarithm of the decision function is more convenient. In other words, we can use the form
dj(x) = ln[p(x/ωj) P(ωj)] = ln p(x/ωj) + ln P(ωj)
from which it follows that
dj(x) = ln P(ωj) - (1/2) ln|Cj| - (1/2)(x - mj)T Cj^-1 (x - mj)
where the constant term (n/2) ln 2π has been dropped because it is the same for all classes.
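The n-dimensional Gaussian Bayes classifier can be sketched as follows (not from the slides; the function names, the estimation of priors from class frequencies, and the assumption of nonsingular covariance matrices are mine):

import numpy as np

def gaussian_bayes_train(patterns, labels):
    # Estimate m_j, C_j, and P(w_j) for each class from the rows of `patterns`.
    labels = np.asarray(labels)
    params = {}
    for c in np.unique(labels):
        X = patterns[labels == c]
        m = X.mean(axis=0)
        C = (X.T @ X) / len(X) - np.outer(m, m)      # C_j = (1/N_j) sum x x^T - m_j m_j^T
        params[c] = (m, C, len(X) / len(patterns))   # (mean, covariance, prior)
    return params

def gaussian_bayes_classify(x, params):
    # Assign x to the class with the largest log decision function d_j(x).
    def d(m, C, prior):
        diff = x - m
        return np.log(prior) - 0.5 * np.log(np.linalg.det(C)) - 0.5 * diff @ np.linalg.solve(C, diff)
    return max(params, key=lambda c: d(*params[c]))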
IV. Neural Networks
The approaches discussed in the preceding sections are based on the use of sample patterns to estimate statistical parameters.
The patterns used to estimate these parameters usually are called training patterns, and a set of such patterns from each class is called a training set.
The process by which a training set is used to obtain decision functions is called learning or training.
The statistical properties of the pattern classes in a problem often are unknown or cannot be estimated.
In practice, such decision-theoretic problems are best handled by methods that yield the required decision functions directly through training.
One approach is to organize nonlinear computing elements (called neurons) into a network that classifies an input pattern.
The resulting models are referred to by various names: neural networks, neurocomputers, parallel distributed processing (PDP) models, neuromorphic systems, layered self-adaptive networks, and connectionist models.
Here we use the name neural networks, or neural nets. We use these networks as vehicles for adaptively developing the coefficients of decision functions via successive presentations of training sets of patterns.
IV.1 Perceptron
The simplest of neural networks is the perceptron. In its most basic form, the perceptron learns a linear decision function that dichotomizes two linearly separable training sets.
Fig. 12.14 shows schematically the perceptron model for two pattern classes.
The response of this basic device is based on a weighted sum of its inputs; that is,
d(x) = Σ(i=1..n) wi xi + wn+1   (12.2-29)
which is a linear decision function with respect to the components of the pattern vectors.
The coefficients wi, i = 1, 2, ..., n, n+1, are called weights.
The function that maps the output of the summing junction into the final output of the device sometimes is called the activation function.
When d(x) > 0, the activation function causes the output of the perceptron to be +1, indicating that the pattern x was recognized as belonging to class ω1. The reverse is true when d(x) < 0.
When d(x) = 0, x lies on the decision surface separating the two pattern classes. The decision boundary implemented by the perceptron is obtained by setting Eq. (12.2-29) equal to zero:
d(x) = Σ(i=1..n) wi xi + wn+1 = 0
which is the equation of a hyperplane in n-dimensional pattern space. Geometrically, the first n coefficients establish the orientation of the hyperplane, whereas the last coefficient, wn+1, is proportional to the perpendicular distance from the origin to the hyperplane.
If we denote yi = xi for i = 1, 2, ..., n, and yn+1 = 1, then Eq. (12.2-29) becomes
d(y) = Σ(i=1..n+1) wi yi = wT y
where y = (y1, y2, ..., yn, 1)T is now an augmented pattern vector and w = (w1, w2, ..., wn, wn+1)T is called the weight vector.
The problem is how to establish the weight vector.
Training algorithms
Linearly separable classes: a simple, iterative algorithm for obtaining a solution weight vector for two linearly separable training sets follows. For two training sets of augmented pattern vectors belonging to pattern classes ω1 and ω2, respectively, let w(1) be the initial weight vector, chosen arbitrarily. At the kth iterative step, if y(k) belongs to ω1 and wT(k) y(k) <= 0, replace w(k) by w(k+1) = w(k) + c y(k); if y(k) belongs to ω2 and wT(k) y(k) >= 0, replace w(k) by w(k+1) = w(k) - c y(k); otherwise, leave w(k) unchanged.
The correction increment c is assumed to be positive and, for now, to be constant.
This algorithm sometimes is referred to as the fixed increment correction rule.
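A minimal sketch of the fixed increment correction rule follows (not from the slides; the function name, the zero initial weight vector, and the stopping criterion of one error-free pass are assumptions):

import numpy as np

def perceptron_train(X1, X2, c=1.0, max_epochs=1000):
    # X1, X2: arrays of (non-augmented) pattern vectors for classes w1 and w2.
    # Augment the patterns with a trailing 1 so that d(y) = w^T y.
    Y1 = np.hstack([X1, np.ones((len(X1), 1))])
    Y2 = np.hstack([X2, np.ones((len(X2), 1))])
    w = np.zeros(Y1.shape[1])                     # initial weight vector w(1)
    for _ in range(max_epochs):
        corrections = 0
        for y in Y1:                              # y from class w1: we want w^T y > 0
            if w @ y <= 0:
                w = w + c * y
                corrections += 1
        for y in Y2:                              # y from class w2: we want w^T y < 0
            if w @ y >= 0:
                w = w - c * y
                corrections += 1
        if corrections == 0:                      # converged: all training patterns classified correctly
            break
    return w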
Nonseparable classes: one of the early methods for training the perceptron when the classes are not linearly separable is the Widrow-Hoff, or least-mean-square (LMS), delta rule; the method minimizes the error between the actual and the desired response at any training step.
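A minimal sketch of the delta rule under the usual choice of desired responses r = +1 for ω1 and r = -1 for ω2 (the function name, learning rate, and fixed epoch count are assumptions):

import numpy as np

def lms_delta_train(X1, X2, alpha=0.1, epochs=100):
    # Widrow-Hoff (LMS) delta rule: reduce the squared error between the desired
    # response r and the actual response w^T y at each training step.
    Y = np.vstack([np.hstack([X1, np.ones((len(X1), 1))]),
                   np.hstack([X2, np.ones((len(X2), 1))])])
    r = np.hstack([np.ones(len(X1)), -np.ones(len(X2))])   # desired responses
    w = np.zeros(Y.shape[1])
    for _ in range(epochs):
        for y, target in zip(Y, r):
            error = target - w @ y
            w = w + alpha * error * y                       # w(k+1) = w(k) + alpha * e(k) * y(k)
    return w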
IV.2 Multilayer Neural Networks
V.1 Matching Shape Numbers
This method is used for the comparison of region boundaries that are described in terms of shape numbers.
The degree of similarity, k, between two region boundaries (shapes) is defined as the largest order for which their shape numbers still coincide. For example, let a and b denote the shape numbers of closed boundaries represented by 4-directional chain codes. These two shapes have a degree of similarity k if
sj(a) = sj(b)   for j = 4, 6, 8, ..., k
sj(a) ≠ sj(b)   for j = k+2, k+4, ...
where s indicates shape number and the subscript indicates order.
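As an illustration only (not from the slides), the degree of similarity can be computed from precomputed shape numbers of increasing order; the data layout (a dictionary mapping each even order to that boundary's shape number) is an assumption:

def degree_of_similarity(shape_numbers_a, shape_numbers_b):
    # Return the degree of similarity k: the largest order for which the shape numbers
    # of boundaries a and b still coincide. Each argument maps an even order j (4, 6, 8, ...)
    # to the boundary's shape number at that order, e.g. a string of chain-code digits.
    k = 0
    for j in sorted(shape_numbers_a):
        if j in shape_numbers_b and shape_numbers_a[j] == shape_numbers_b[j]:
            k = j
        else:
            break
    return k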
V.2 String Matching
Suppose that two region boundaries, a and b, are coded into strings denoted a1a2...an and b1b2...bm, respectively.
Let α represent the number of matches between the two strings, where a match occurs in the kth position if ak = bk. The number of symbols that do not match is
β = max(|a|, |b|) - α
where |arg| is the length (number of symbols) of the string in the argument. It can be shown that β = 0 if and only if a and b are identical; that is, a1a2...an ≡ b1b2...bm (and n = m).
A simple measure of similarity between a and b is the ratio
R = α / β = α / (max(|a|, |b|) - α)
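A minimal sketch of this similarity measure (not from the slides; returning infinity for identical strings is an assumption consistent with β = 0):

def string_similarity(a, b):
    # R = alpha / beta, where alpha is the number of positions in which the strings match
    # and beta = max(|a|, |b|) - alpha.
    alpha = sum(1 for x, y in zip(a, b) if x == y)       # matches in corresponding positions
    beta = max(len(a), len(b)) - alpha
    return float('inf') if beta == 0 else alpha / beta   # R grows as the boundaries become more similar

# Example: similarity between two 4-directional chain-code strings (hypothetical data).
print(string_similarity("0123012", "0123212"))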
V.3 Syntactic Recognition of Strings
Use of semantics
Automata as string recognizers