Lsegmin = 4 pyramid pixels, or 16 original image pixels. Figure 5.35 shows the results for column search (top left), for row search (top right), and the superimposed results, where pixels missing in one reconstructed image have been added from the other one, if available.
The number of blobs to be handled is at least one order of magnitude smaller than for the full representation underlying Figure 5.34. For a human observer, recognizing the road scene is not difficult despite the missing pixels. Since homogeneous regions in road scenes tend to be more extended horizontally, the superposition 'column over row' (bottom right) yields the more natural-looking result. Note, however, that up to now no merging of blob results from one stripe to the next has been done by the program. When humans look at a scene, they cannot help doing this, apparently without special effort. For example, nobody will have trouble recognizing the road by its almost homogeneously shaded gray values. The transition from 1-D blobs in separate stripes to 2-D blobs in the image and to a 3-D surface in the outside world are the next steps of interpretation in machine vision.
5.3.2.5 Extended Shading Models in Image Regions
The 1-D blob results from stripe analysis are stored in a list for each stripe and are accumulated over the entire image. Each blob is characterized by
1. the image coordinates of its starting point (row, respectively column, number and its position jref in it),
2. its extension Lseg in the search direction,
3. the average intensity Ic at its center, and
4. the average gradient components of the intensity, au and av (a minimal record structure is sketched below).
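To make the bookkeeping concrete, a minimal record collecting these four attributes is sketched below; the class and field names are illustrative assumptions, not the original implementation.

```python
from dataclasses import dataclass

@dataclass
class Blob1D:
    """One homogeneously shaded segment found along a search stripe."""
    stripe: int     # row (or column) number of the search stripe
    j_ref: int      # starting position of the segment within the stripe
    l_seg: float    # extension Lseg in the search direction
    i_c: float      # average intensity at the segment center
    a_u: float      # average intensity gradient component in u
    a_v: float      # average intensity gradient component in v

    @property
    def u_center(self) -> float:
        # cg-position of the segment along the search direction
        return self.j_ref + self.l_seg / 2.0
```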
This allows easy merging of the results of two neighboring stripes. Figure 5.36a shows the start of 1-D blob merging when the threshold conditions for a merger are satisfied in the region of overlap in adjacent stripes: (1) The amount of overlap should exceed a lower bound, say, two or three pixels. (2) The difference in image intensity at the center of overlap should be small. Since the 1-D blobs are given by their cg-position (ubi = jref + Lseg,i/2), their 'weights' (proportional to the segment length Lseg,i), and their intensity gradients, the intensities at the center of overlap can be computed in both stripes (Icovl1 and Icovl2) from the distance between the blob center and the center of overlap, exploiting the gradient information. This yields the condition

|Icovl1 − Icovl2| < DelIthreshMerg.    (5.37)

Condition (3) for merging is that the intensity gradients should also lie within small common bounds (difference < DelSlopeThrsh, see Table 5.1).

Figure 5.36 Merging of overlapping 1-D blobs in adjacent stripes to a 2-D blob when intensity and gradient components match within threshold limits (δuS and δvS denote the resulting shifts of the common center of gravity): (a) merging of the first two 1-D blobs to a 2-D blob; (b) recursive merging of a 2-D blob with an overlapping 1-D blob to an extended 2-D blob.
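A compact sketch of these three merge tests, using the Blob1D record from the sketch above, might look as follows; the function name, the default numbers, and the overlap bookkeeping are illustrative assumptions, while the thresholds correspond to DelIthreshMerg and DelSlopeThrsh from the text.

```python
def may_merge(b1, b2, min_overlap=2.0,
              del_i_thresh_merg=8.0, del_slope_thrsh=0.05):
    """Test the three conditions for merging 1-D blobs of adjacent stripes."""
    # (1) overlap along the search direction must exceed a lower bound
    lo = max(b1.j_ref, b2.j_ref)
    hi = min(b1.j_ref + b1.l_seg, b2.j_ref + b2.l_seg)
    if hi - lo < min_overlap:
        return False

    # (2) intensities extrapolated to the center of overlap must be similar;
    #     each blob's center intensity is corrected by its own gradient over
    #     the distance from the blob center to the overlap center (Eq. 5.37)
    u_ovl = 0.5 * (lo + hi)
    i_ovl1 = b1.i_c + b1.a_u * (u_ovl - b1.u_center)
    i_ovl2 = b2.i_c + b2.a_u * (u_ovl - b2.u_center)
    if abs(i_ovl1 - i_ovl2) > del_i_thresh_merg:
        return False

    # (3) the gradient components themselves must also match closely
    return (abs(b1.a_u - b2.a_u) < del_slope_thrsh and
            abs(b1.a_v - b2.a_v) < del_slope_thrsh)
```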
If these conditions are all satisfied, the position of the new cg after the merger is computed from a balance of moments on the line connecting the cg's of the regions to be merged; the new cg of the combined area S2D thus has to lie on this line. This yields the equation (see Figure 5.36a)

Lseg1 · δuS = Lseg2 · (Δucg − δuS),

and, solved for the shift δuS with S2D = Lseg1 + Lseg2, the relation

δuS = Lseg2 · Δucg / S2D

is obtained, where Δucg is the distance between the two cg's in the u-direction. The same is true for the v-component.
Figure 5.36b shows the same procedure for merging an existing 2-D blob, given by its weight S2D, the cg-position cg2D, and the segment boundaries in the last stripe. To have easy access to the latter data, the last stripe is kept in memory for one additional stripe evaluation loop even after the merger to 2-D blobs has been finished. The equations for the shift in cg are identical to those above if Lseg1 is replaced by S2D,old. The case shown in Figure 5.36b demonstrates that the position of the cg is not necessarily inside the 2-D blob region.
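Since the new center of gravity is simply a weighted mean along the connecting line, one small routine covers both the first merger (weights Lseg1, Lseg2) and the recursive case (weight S2D of the existing 2-D blob); coordinates and weights are assumed here to be plain numbers.

```python
def merge_cg(u1, v1, w1, u2, v2, w2):
    """Balance of moments: the new cg lies on the line connecting both cg's;
    w1 and w2 are the 'weights' (segment lengths or accumulated S2D)."""
    s_2d = w1 + w2
    du = w2 * (u2 - u1) / s_2d          # shift of the cg away from region 1
    dv = w2 * (v2 - v1) / s_2d
    return u1 + du, v1 + dv, s_2d

# first merger of two 1-D blobs (weights = segment lengths) ...
u_cg, v_cg, s_2d = merge_cg(10.0, 4.0, 12.0, 11.5, 5.0, 9.0)
# ... and recursive merger of the resulting 2-D blob with a further 1-D blob
u_cg, v_cg, s_2d = merge_cg(u_cg, v_cg, s_2d, 12.0, 6.0, 10.0)
```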
A 2-D blob is finished when no area of overlap is found in the new stripe any more. The size S2D of the 2-D blob is finally given by the sum of the Lseg values of all stripes merged. The contour of the 2-D blob is given by the concatenated lower and upper bounds of the 1-D blobs merged. Minimum (umin, vmin) and maximum values (umax, vmax) of the coordinates yield the encasing box of area

(umax − umin) · (vmax − vmin).

The position of the 2-D blob is given by the coordinates of its center of gravity, ucg and vcg. This robust feature makes highly visible blobs attractive for tracking.
5.3.2.6 Image Analysis on Two Scales
Since coarse resolution may be sufficient for the near range and the sky, fine-scale image analysis can be confined to that part of the image containing regions further away. After the road has been identified nearby, the boundaries of these image regions can be described easily around the subject's lane as looking like a 'pencil tip' (possibly bent). Figure 5.37 shows results demonstrating that with highest resolution (within the white rectangles), almost no image details are lost, both for the horizontal (left) and the vertical search (right).
The size and position of the white rectangle can be adjusted according to the actual situation, depending on the scene content analyzed by higher system levels. Conveniently, the upper left and lower right corners need to be given to define the rectangle; in general, the region of high resolution should be symmetrical around the horizon and around the center of the subject's lane at the look-ahead distance of interest.

Figure 5.37 Foveal-peripheral differentiation of image analysis shown by the 'imagined scene' reconstructed from symbolic representations on different scales: outer part (4,4,1,1), inner part (1,1,1,1), from video fields compressed 2:1 after processing; left: horizontal search, right: vertical search, with the Hofmann operator.
5.3.3 The Corner Detection Algorithm
Many different types of nonlinearities may occur on different scales. For a long time, so-called 2-D features have been studied that allow avoiding the "aperture problem"; this problem occurs for features that are well defined in only one of the two degrees of freedom, like edges (sliding along the edge). Since general texture analysis requires significantly more computing power than is currently available for real-time applications in the general case, we will concentrate on those points of interest which allow reliable recognition and computation of feature flow [Moravec 1979; Harris, Stephens 1988; Tomasi, Kanade 1991; Haralick, Shapiro 1993].
5.3.3.1 Background for Corner Detection
Based on the references just mentioned, the following algorithm for corner detection, fitting into the mask scheme for planar approximation of the intensity function, has been derived and proven efficient. The structural matrix N (Equation 5.42) has been defined with the terms from Equations 5.17 and 5.18. Note that, compared to the terms used by the previously named authors, the entries on the main diagonal are formed from local gradients (in and between half-stripes), while those on the cross-diagonal are twice the product of the gradient components of the mask (averages of the local values). With Equation 5.18, this corresponds to half the sum of all four cross-products.
It can thus be seen that the normalized second eigenvalue λ2N and the circularity q are different expressions for the same property. In both terms, the absolute magnitudes of the eigenvalues are lost. Threshold values for corner points are chosen as lower limits for the determinant detN = w and the circularity q:

w > wmin  and  q > qmin.
Harris was the first to use the eigenvalues of the structural matrix for threshold definition. For each location in the image, he defined the performance value

RH = detN − D · (traceN)² = λ1 · λ2 − D · (λ1 + λ2)².

Corner candidates are points for which RH ≥ 0 is valid; larger values of D yield fewer corners and vice versa. Values around D = 0.04 to 0.06 are recommended. This condition on RH is equivalent to (from Equations 5.44, 5.53, and 5.54)

q ≥ 4 · D.
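As a minimal sketch, the Harris test for one mask position then reduces to a single expression; the variable names are assumptions.

```python
def harris_response(det_n: float, trace_n: float, d: float = 0.05) -> float:
    """R_H = detN - D*(traceN)^2 = lam1*lam2 - D*(lam1 + lam2)^2;
    positions with R_H >= 0 are corner candidates (recommended D = 0.04..0.06)."""
    return det_n - d * trace_n ** 2
```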
Kanade et al. (1991) (KLT) use the following corner criterion: After a smoothing step, the gradients are computed over a region D·D (2 ≤ D ≤ 10 pixels). The reference frame for the structural matrix is rotated so that the larger eigenvalue λ1 points in the direction of the steepest gradient in the region; λ1 is thus normal to a possible edge direction. A corner is assumed to exist if λ2 is sufficiently large (above a threshold value λ2thr). From the relation detN = λ1 · λ2, the corresponding value λ2KLT can be determined; candidates with smaller λ2KLT values are deleted. The threshold value has to be derived from a histogram of λ2 by experience in the domain. For larger D, the corners tend to move away from the correct position.
5.3.3.2 Specific Items in Connection with Local Planar Intensity Models
Let us first have a look at the meaning of the threshold terms circularity (q in Equation 5.50) and traceN (Equation 5.55) as well as the normalized second eigenvalue (λ2N in Equation 5.49) for the specific case of four symmetrical regions in a 2 × 2 mask, as given in Figure 5.20. Let the perfect rectangular corner in intensity distribution as in Figure 5.38b be given by the local gradients fr1 = fc1 = 0 and fr2 = fc2 = −K. Then the global gradient components are fr = fc = −K/2. The determinant (Equation 5.44) then has the value detN = 3/4 · K⁴. The term Q (Equation 5.47) becomes Q = K², and the "circularity" q according to Equation 5.51 is

q = 4 · detN / traceN² = 3/4 = 0.75.

The two eigenvalues of the structure matrix are λ1 = 1.5 · K² and λ2 = 0.5 · K², so that traceN = 2Q = 2 · K²; this yields the normalized second eigenvalue λ2N = 1/3. Table 5.2 contains this case as the second row. Other special cases according to the intensity distributions given in Figure 5.38 are also shown. The maximum circularity of 1 occurs for the checkerboard corner in Figure 5.38a and row 1 in Table 5.2; the normalized second eigenvalue also assumes its maximal value of 1 in this case. The case of Figure 5.38c (third row in the table) shows the more general situation with three different intensity levels in the mask region. Here, circularity is still close to 1 and λ2N is above 0.8. The case in Figure 5.38e with constant average mask intensity in the stripe is shown in row 5 of Table 5.2: Circularity is rather high at q = 8/9 ≈ 0.89 and λ2N = 0.5. Note that from the intensity and gradient values of the whole mask, this feature can only be detected by gz (IM and gy remain constant along the search path).
Figure 5.38 Local intensity gradients on mel level for the calculation of circularity q in corner selection: (a) ideal checkerboard corner, q = 1; (b) ideal single corner, q = 0.75; (c) slightly more general case (three intensity levels, closer to planar); (d) ideal shading in one direction only (linear case for interpolation, q ≈ 0); (e) demanding (idealized) corner feature for extraction (see text).

By setting the minimum required circularity qmin as the threshold value for acceptance to qmin = 0.7, all significant cases of intensity corners will be picked. Figure 5.38d shows an almost planar intensity surface with gradients −K in the column direction and a very small gradient ±ε in the row direction (K >> ε). In this case, all characteristic values (detN, circularity q, and the normalized second eigenvalue λ2N) go to zero (row 4 in the table). The last case in Table 5.2 shows the special planar intensity distribution with the same value −K for all local and global gradients; this corresponds to Figure 5.20c. It can be seen that circularity and λ2N are zero; this nice feature for the general planar case is achieved through the factor 2 on the cross-diagonal of the structure matrix, Equation 5.42.
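The numbers of the second row of Table 5.2 can be reproduced with a few lines of code. The construction of the structure matrix below follows the verbal description given above (sums of the squared local gradients on the main diagonal, twice the product of the mask-averaged gradient components on the cross-diagonal); it is meant as an illustrative sketch of Equation 5.42, not as the exact book formula.

```python
import numpy as np

def structure_matrix(f_r, f_c):
    """f_r, f_c: local gradient components of the mask regions."""
    f_r, f_c = np.asarray(f_r, float), np.asarray(f_c, float)
    n_rr = np.sum(f_r ** 2)               # main diagonal: local gradients
    n_cc = np.sum(f_c ** 2)
    n_rc = 2.0 * f_r.mean() * f_c.mean()  # cross diagonal: 2 * mask gradients
    return np.array([[n_rr, n_rc], [n_rc, n_cc]])

K = 1.0
# ideal single corner (Figure 5.38b): fr1 = fc1 = 0, fr2 = fc2 = -K
N = structure_matrix([0.0, -K], [0.0, -K])
det_n, trace_n = np.linalg.det(N), np.trace(N)
q = 4.0 * det_n / trace_n ** 2               # circularity
lam = np.sort(np.linalg.eigvalsh(N))[::-1]   # lam1 >= lam2
lam2_n = lam[1] / lam[0]                     # normalized second eigenvalue
print(det_n, q, lam2_n)   # 0.75*K**4, 0.75, 1/3  (row 2 of Table 5.2)
```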
When too many corner candidates are found, it is possible to reduce their number not by lifting qmin but by introducing another threshold value, traceNmin, that limits the sum of the two eigenvalues. According to the main diagonals of Equations 5.42 and 5.46, this means prescribing a minimal value for the sum of the squares of all local gradients in the mask.
Table 5.2 Some special cases demonstrating the characteristic values of the structure matrix in corner selection as a function of a single gradient value K; traceN is twice the value of Q (column 4). The columns list the example, the local gradient values, detN (Equation 5.44), the term Q (Equation 5.47), the circularity q, λ1, λ2N = λ2/λ1, and the corresponding figure. The rows correspond to the cases of Figure 5.38 discussed in the text; for the almost planar case in row 4, the characteristic values go to zero (of order ε²/K²).
This parameter depends on the absolute magnitude of the gradients and thus has to be adapted to the actual situation at hand. It is interesting to note that the planarity check (on 2-D curvatures in the intensity space) for interpolating a tangent plane to the actual intensity data has a similar effect as a lower bound on the threshold value traceNmin.
5.3.4 Examples of Road Scenes
Figure 5.39 (left) shows the nonplanar regions found in horizontal search (white bars) with ErrMax = 3%. Of these, only the locations marked by cyan crosses have been found to satisfy the corner conditions qmin = 0.6 and traceNmin = 0.11. The figure on the right-hand side shows results with the same parameters except for a reduction of the threshold value to traceNmin = 0.09, which leaves an increased number of corner candidates (over 60% more). Note that all oblique edges (showing minor corners from digitization), which were picked by the nonplanarity check, did not pass the corner test (no crosses in either figure). The crosses mark corner candidates; from neighboring candidates, the strongest has yet to be selected by comparing results from different scales. mc = 2 and nc = 1 means that two original pixels are averaged to a single cell value; nine of those cells form a mask element (18 pixels), so that the entire mask covers 18 × 4 = 72 original pixels.

Figure 5.39 Corner candidates derived from regions with planar interpolation residues > 3% (white bars) with parameters (m, n, mc, nc) = (3,3,2,1). The circularity threshold qmin = 0.6 eliminates most of the candidates stemming from digitized edges (like lane markings). The number of corner candidates can be reduced by lifting the threshold on the sum of the eigenvalues, traceNmin, from 0.09 (right image: 103, 121 candidates) to 0.11 (left image: 63, 72 candidates); cyan = row search, red = column search.
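The pixel bookkeeping behind these numbers is easily checked; the helper below is an illustrative assumption with the 2 × 2 arrangement of mask elements hard-wired, reproducing the 72 original pixels quoted above.

```python
def mask_pixel_coverage(m, n, mc, nc, elements_per_mask=4):
    cell_pixels = mc * nc                  # original pixels averaged per cell
    element_pixels = m * n * cell_pixels   # pixels covered by one mask element
    return elements_per_mask * element_pixels

print(mask_pixel_coverage(3, 3, 2, 1))     # 4 * 18 = 72 original pixels
```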
Figure 5.40 demonstrates all results obtainable by the unified blob-edge-corner method (UBM) in a busy highway scene in one pass: The upper left subfigure shows the original full video image with shadows from the cars on the right-hand side. The image is analyzed on the pixel level with mask elements of size four pixels (total mask = 16 pixels). Recall that masks are shifted in steps of 1 in the search direction and in steps of mel size in the stripe direction. About 10^5 masks result for the evaluation of each image. The lower two subfigures show the small nonplanarity regions detected (about 1540), marked by white bars. In the left figure, the edge elements extracted in row search (yellow, ≈ 1000) and in column search (red, 3214) are superimposed. Even the shadow boundaries of the vehicles and the reflections from the subject's own motor hood (lower part) are picked. The circularity threshold qmin = 0.6 and traceNmin = 0.2 filter 58 corner candidates out of the 1540 nonplanar mask results; row and column search yield almost identical results (lower right). More candidates can be found by lowering ErrMax and traceNmin.

Combining edge elements to lines and smooth curves, and merging 1-D blobs to 2-D (regional) blobs, will drastically reduce the number of features. These compound features are more easily tracked by prediction-error feedback over time. Sets of features moving in conjunction, e.g., blobs with adjacent edges and corners, are indications of objects in the real world; for these objects, motion can be predicted and changes in feature appearance can be expected (see the following chapters). Computing power is becoming available for handling the features mentioned in several image streams in parallel. With these tools, machine vision is maturing for application to rather complex scenes with multiple moving objects. However, quite a bit of development work has yet to be done.
Figure 5.41 shows the full set of features extractable by the unified blob-edge-corner method (UBM) superimposed. The image processing parameters were: MaxErr = 4%; m = n = 3, mc = 2, nc = 1 (i.e., 3,3,2,1); anglefact = 0.8 and IntGradMin = 0.02 for edge detection; qmin = 0.7 and traceNmin = 0.06 for corner detection; and Lsegmin = 4, VarLim = 64 for shaded blobs. The features extracted were 130 corner candidates, 1078 nonplanar regions (1.7%), 4223 roughly vertical edge elements, 5918 roughly horizontal edge elements, and 1492 linearly shaded intensity blobs from row search plus 1869 from column search; the latter have been used only partially, to fill gaps remaining from the row search. The nonplanar regions remaining are the white areas.
Only an image with several colors can convey the information contained to a human observer. The entire image is reconstructed from the symbolic representations of the features stored. The combination of linearly shaded blobs with edges and corners alleviates the generation of good object hypotheses, especially when characteristic sub-objects such as wheels can be recognized. With the background knowledge that wheels are circular (for smooth running on flat ground) with their center on a horizontal axis in 3-D space, the elliptical appearance in the image allows immediate determination of the aspect angle without any reference to the body on which the wheel is mounted. Knowing some state variables such as the aspect angle reduces the search space for object instantiation at the beginning of the recognition process after detection.

Figure 5.40 Features extracted with the unified blob-edge-corner method (UBM): bidirectionally nonplanar intensity distributions (white regions in the lower two subfigures, ~1540), edge elements and corner candidates (column search in red), and linearly shaded blobs. One vertical and one horizontal example is shown (gray straight lines in the upper right subfigure, with dotted lines connecting to the intensity profiles between the images). Red and green are the intensity profiles in the two half-stripes used in UBM; about 4600 1-D blobs resulted, yielding an average of 15 blobs per stripe. The top right subfigure is reconstructed from symbolically represented features only (no original pixel values). Collections of features moving in conjunction designate objects in the world.

Figure 5.41 "Imagined" feature set extracted with the unified blob-edge-corner method (UBM): linearly shaded blobs (gray areas), horizontally (green) and vertically (red) extracted edges, corners (blue crosses), and nonhomogeneous regions (white).
5.4 Statistics of Photometric Properties of Images
According to the results of the planar shading models (Section 5.3.2.4), a host of information is now available for analyzing the distribution of image intensities in order to adjust the parameters of image processing to the lighting conditions [Hofmann 2004]. For each image stripe, characteristic values are given with the parameters of the shading models of each segment. Let us assume that the intensity function of a stripe can be described by nS segments. Then the average intensity bS of the entire stripe over all segments i of length li and average local intensity bi is given by

bS = Σi (li · bi) / Σi li .

The extreme intensity values follow from the extremes of the individual segments as

minS = Mini(mini)  and  maxS = Maxi(maxi).

The difference between both expressions yields the dynamic range in intensity HS within an image stripe, respectively HG within an image region. The dynamic range in intensity of a single segment is given by Hi = maxi − mini; the average dynamic range within a stripe, respectively in an image region, then follows as the average of these segment ranges.
If the maximal or minimal intensity value is to be less sensitive to single outliers in intensity, the maximal, respectively minimal, value over all average values bi of the segments may be used, that is, Maxi(bi) and Mini(bi); alternatively, the corresponding variances may be evaluated.
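A small sketch of these stripe statistics is given below, assuming each segment is available as a tuple (length, mean, min, max); the length weighting of the mean intensity follows the formula above, the remaining quantities are straightforward.

```python
def stripe_statistics(segments):
    """segments: list of (l_i, b_i, min_i, max_i), one tuple per 1-D segment."""
    total_len = sum(l for l, _, _, _ in segments)
    b_s = sum(l * b for l, b, _, _ in segments) / total_len   # weighted mean
    min_s = min(mn for _, _, mn, _ in segments)               # stripe extremes
    max_s = max(mx for _, _, _, mx in segments)
    h_s = max_s - min_s                                       # dynamic range
    h_avg = sum(mx - mn for _, _, mn, mx in segments) / len(segments)
    b_min = min(b for _, b, _, _ in segments)   # outlier-insensitive extremes
    b_max = max(b for _, b, _, _ in segments)
    return b_s, min_s, max_s, h_s, h_avg, b_min, b_max
```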
The next section describes a procedure for finding transformations between images of two cameras looking at the same region (as in stereovision) to alleviate joint image (stereo) interpretation.
5.4.1 Intensity Corrections for Image Pairs
This section uses some of the statistical values defined previously. Two images of a stereo camera pair are given that have different image intensity distributions due to slightly different apertures, gain values, and shutter times, which are independently and automatically controlled over time (Figure 5.42a, b). The cameras have approximately parallel optical axes and the same focal length; they look at the same scene. Therefore, it can be assumed that segmentation of image regions will yield similar results except for absolute image intensities. The histograms of the image intensities are shown in the left-hand part of the figure. The right (lower) stereo image is darker than the left one (top). The challenge is to find a transformation procedure which allows comparing image intensities on both sides of edges all over the image. The lower subfigure (c) shows the final result; it will be discussed after the transformation procedure has been derived.
At first, the characteristic photometric properties of the image areas within the white rectangle are evaluated in both images by the stripe scheme described. The left and right bars in Figure 5.43a, b show the characteristic parameters considered. In the very bright areas of both images (top), saturation occurs; this harsh nonlinearity ruins the possibility of a smooth transformation in these regions. The intensity transformation rule is to be derived using five support points: bMinG, bdunkelG, bG, bhellG, and bMaxG (dunkel = dark, hell = bright) of the marked left and right image regions. The full functional relationship is approximated by interpolation of these values with a fourth-order polynomial. The central upper part of Figure 5.43 shows the resulting function as a curve; the lower part shows the scaling factors as a function of the intensity values. The support points are marked as dots. Figure 5.42c shows the adapted histogram on the left-hand side and the resulting image on the right-hand side. It can be seen that after the transformation, the intensity distributions in both images have become much more similar. Even though this transformation is only a coarse approximation, it shows that it can alleviate the evaluation of image information and the establishment of feature correspondence.
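A minimal version of such an intensity adaptation, assuming the five corresponding support intensities of both images have already been determined, can use a fourth-order interpolating polynomial; the numerical support values below are hypothetical.

```python
import numpy as np

# five support intensities (minimum, dark, mean, bright, maximum) of the
# right (darker) and the left (brighter) image; values are hypothetical
b_right = np.array([12.0, 45.0, 92.0, 150.0, 230.0])
b_left = np.array([18.0, 60.0, 115.0, 175.0, 245.0])

coeffs = np.polyfit(b_right, b_left, deg=4)  # interpolating 4th-order polynomial

def adapt_intensity(img_right):
    """Map right-image intensities onto the intensity scale of the left image."""
    out = np.polyval(coeffs, img_right.astype(float))
    return np.clip(out, 0, 255).astype(np.uint8)
```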
Figure 5.42 Images of different brightness of a stereo system with corresponding histograms of the intensity: (a) left image, (b) right-hand-side image, and (c) right-hand image adapted to the intensity distribution of the left-hand image after the intensity transformation described (see Figure 5.43; after [Hofmann 2004]).
Figure 5.43 Statistical photometric characteristics of (a) the left-hand, and (b) the
right-hand stereo image (Figure 5.42); the functional transformation of intensities shown in the center minimizes differences in the intensity histogram (after [Hofmann 2004]).
The largest deviations occur at high intensity values (right end of the histogram); fortunately, this is irrelevant for road-scene interpretation since the corresponding regions belong to the sky.

To become less dependent on the intensity values of single blob features in two or more images, it may often be advantageous to use the intensity ratios of several blob features (recognized with some uncertainty) together with their relative locations to confirm an object hypothesis.
5.4.2 Finding Corresponding Features
The ultimate goal of feature extraction is to recognize and track objects in the real world, that is, in the scene observed. Simplifying feature extraction by reducing, at first, the search spaces to image stripes (a 1-D task with only local lateral extent) generates a difficult second step of merging results from stripes into regional characteristics (2-D in the image plane). However, we are not so much interested in (virtual) objects in the image plane (as computational vision predominantly is) but in recognizing 3-D objects moving in 3-D space over time! Therefore, all the knowledge about motion continuity in space and time, in both translational and rotational degrees of freedom, has to be brought to bear as early as possible. Self-occlusion and partial occlusion by other objects have to be taken into account and observed from the beginning. Perspective mapping is the link from the spatio-temporal motion of 3-D objects in 3-D space to the motion of groups of features in images from different cameras.

So the difficult task after basic feature extraction is to find combinations of features belonging to the same object in the physical world and to recognize these features reliably in an image sequence from the same camera and/or in parallel images from several cameras covering the same region in the physical world, maybe under slightly different aspect conditions. Thus, finding corresponding features is a basic task for interpretation; the following are major challenges to be solved in dynamic scene understanding:
- Chaining of neighboring features to edges and merging local regions to homogeneous, more global ones.
- Selecting the best suited feature from a group of candidates for prediction error feedback in recursive tracking (see below).
- Finding the corresponding features in sequences of images for determining feature flow, a powerful tool for motion understanding.
- Finding corresponding features in parallel images from different cameras for stereo interpretation to recover depth information lost in single images.
The rich information derived in previous sections from the stripewise intensity approximation of one-dimensional segments alleviates the comparisons necessary for establishing correspondence, that is, for quantifying similarity. Depending on whether homogeneous segments or segment boundaries (edges) are treated, different criteria for quantifying similarity can be used.

For segment boundaries as features, the type of intensity change (bright to dark or vice versa), the position and the orientation of the edge, as well as the ratio of average intensities on the right-hand (R) and left-hand (L) sides are compared. Additionally, the average intensities and segment lengths of adjacent segments may be checked for judging a feature in the context of neighboring features.

For homogeneous segments as features, the average segment intensity, average gradient direction, segment length, and the type of transition at the boundaries (dark-to-bright or bright-to-dark) are compared. Since long segments in two neighboring image stripes may have been subdivided in one stripe but not in the other (due to effects of thresholding in the extraction procedure), chaining (concatenation) procedures should be able to recover from these arbitrary effects according to criteria to be specified. Similarly, chaining rules for directed edges should be able to close gaps if necessary, that is, if the remaining parameters allow a consistent interpretation.
5.4.3 Grouping of Edge Features to Extended Edges
The stripewise evaluation of image features discussed in Section 5.3 yields (besides the nonplanar regions with potential corners) lists of corners, oriented edges, and homogeneously shaded segments. These lists, together with the corresponding index vectors, allow fast navigation in the feature database while retaining neighborhood relationships. The index vectors contain, for each search path, the corresponding image row (respectively, column), the index of the first segment in the list of results, and the number of segments in each search path.
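One possible realization of such index vectors is sketched below; the names are assumptions, but the idea is that one entry per search path stores the image line, the index of its first segment in the flat result list, and the segment count, so all features of a stripe can be fetched without scanning.

```python
from typing import List, NamedTuple

class StripeIndex(NamedTuple):
    image_line: int   # image row (or column) of this search path
    first: int        # index of the first segment in the flat feature list
    count: int        # number of segments found along this search path

def segments_of_stripe(features: List, index: List[StripeIndex], stripe: int):
    """Return all segment records of one search path without scanning the list."""
    entry = index[stripe]
    return features[entry.first:entry.first + entry.count]
```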
As an example, results of the concatenation of edge elements (for column search) are shown in Figure 5.44, lower right (from [Hofmann 2004]); the steps required are discussed in the sequel. Analogous to image evaluation, concatenation proceeds in the search path direction [top-down, see the narrow rectangle near (a)] and from left to right. The figure at the top shows the original video field with the large white rectangle marking the part evaluated; the near sky and the motor hood are left off. All features from this region are stored in the feature database. The lower two subfigures are based on these data only.

At the lower left, a full reconstruction of image intensity is shown, based on the complete set of shading models on a fine scale and disregarding nonlinear image intensity elements like corners and edges; these are taken here only for starting new blob segments. The number of segments becomes very large if the quality of the reconstructed image is required to please human observers. However, if edges from neighboring stripes are grouped together, the resulting extended line features allow reducing the number of shaded patches needed to satisfy a human observer and to appreciate the result of image understanding by machine vision.
Starting from the data structure for the edge feature, an entry in the neighboring search stripe is looked for which approximately satisfies the collinearity condition with the stored edge direction at a distance corresponding to the width of the search stripe. To accept correspondence, the properties of a candidate edge point have to be similar to the average properties of the edge elements already accepted for chaining. To evaluate similarity, criteria like the Mahalanobis distance may be computed, which allow weighting the contributions of the different parameters taken into account. A threshold value then has to be satisfied for a candidate to be accepted as sufficiently similar.
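A sketch of such a candidate selection with a Mahalanobis-weighted similarity measure is given below; the composition of the property vector and the gating value are assumptions.

```python
import numpy as np

def best_chain_candidate(chain_mean, chain_cov_inv, candidates, gate=3.0):
    """Pick the edge element of the next stripe most similar to the running
    average of the chain; returns None if no candidate passes the gate.

    chain_mean:    averaged chain properties, e.g. [predicted position,
                   direction, right/left intensity ratio]
    chain_cov_inv: inverse covariance weighting the parameter deviations
    candidates:    property vectors of edge points in the neighboring stripe
    """
    best, best_d = None, gate
    for cand in candidates:
        diff = np.asarray(cand, float) - np.asarray(chain_mean, float)
        d = float(np.sqrt(diff @ chain_cov_inv @ diff))  # Mahalanobis distance
        if d < best_d:
            best, best_d = cand, d
    return best
```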
Figure 5.44 From features in stripes (here vertical) to feature aggregations over the 2-D image in the direction of object recognition (single video field, 768 pixels per row): (a) reduced vertical range of interest (motor hood and sky skipped); (b) image of the scene as internally represented by symbolically stored feature descriptions ('imagined' world); (c) concatenated edge features, which together with homogeneously shaded areas form the basis for object hypothesis generation and object tracking over time.