
The Essential Guide to Image Processing – P24




(Plots: average recognition rate (%), from 0 to 100, versus camera index; each panel compares the top 1 and top 5 matches.)

FIGURE 24.8

The average recognition rates across illumination (top row) and across poses (bottom row) for three cases. Case (a) shows the average recognition rate (averaging over all illuminations/poses and all gallery sets) obtained by the proposed algorithm using the top n matches. Case (b) shows the average recognition rate (averaging over all illuminations/poses for the gallery set (c27, f11) only) obtained by the proposed algorithm using the top n matches. Case (c) shows the average recognition rate (averaging over all illuminations/poses and all gallery sets) obtained by the "Eigenface" algorithm using the top n matches.


24.4 Face Modeling and Verification Across Age Progression

robust to aging effects. Researchers from psychophysics laid the foundations for studies related to facial aging effects. D'Arcy Thompson studied morphogenesis by means of geometric coordinate transformations, and Todd et al. [81] identified certain forms of force configurations that, when applied on 2D face profiles, induce facial aging effects. Figure 24.9 illustrates the effect of applying the "revised" cardioidal strain transformation model on profile faces. The aforementioned transformation model is said to reflect the remodeling of fluid-filled spherical objects under applied pressure. O'Toole et al. [82] studied the effects of facial wrinkles in increasing the perceived age of faces, and later work developed an age-difference classifier with the objective of developing systems that could perform face verification across age progression. The results from many such studies highlight the importance of developing computational models that characterize both growth-related shape variations and textural variations, such as wrinkles and other skin artifacts, in developing a facial aging model.

In this section, we present computational models that characterize the shape variations that faces undergo during different stages of growth. Facial shape variations due to aging can be observed by means of facial feature drifts and progressive variations in the shape of facial contours across ages. While facial shape variations during the formative years are primarily due to craniofacial growth, during adulthood facial shape variations are predominantly driven by the changing physical properties of facial muscles. Hence, we propose shape variation models for each of the age groups that best account for the factors that induce such variations.

(R0, θ) (R1, θ)

FIGURE 24.9

(a) Remodeling of a fluid-filled spherical object; (b) facial growth simulated on the profile of a child's face using the "revised" cardioidal strain transformations.


24.4.1 Shape Transformation Model for Young Individuals [60]

Drawing inspiration from the "revised" cardioidal strain transformation model, we model facial growth in polar coordinates: let (R0, θ0) and (R1, θ1) denote the angular coordinates of a point on the surface of the object before and after the transformation, and let k denote a growth-related constant.
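For reference, the "revised" cardioidal strain transformation referred to above is commonly written in the following polar form in the craniofacial growth literature [80, 81]; it is restated here for concreteness:

```latex
\theta_1 = \theta_0, \qquad
R_1 = R_0\,\bigl(1 + k\,(1 - \cos\theta_0)\bigr)
```

Because the radial displacement grows with (1 − cos θ0), points low on the face (θ0 near π, with the origin near the crown) move outward more than points near the top of the head, mimicking the relative outgrowth of the lower face during childhood.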

Face anthropometric studies provide measurements of different facial features across ages. Age-based facial measurements extracted across different facial features play a crucial role in developing the proposed growth model. Figure 24.10


FIGURE 24.10

Face anthropometry: of the 57 facial landmarks defined in [83], we choose 24 landmarks, illustrated above, for our study. We further illustrate some of the key facial measurements that were used to develop the growth model.

illustrates the 24 facial landmarks and some of the important facial measurements that were used in our study.

Developing the craniofacial growth model amounts to identifying the growth parameters associated with different facial features. Let the facial growth parameters of the "revised" cardioidal strain transformation model that correspond to the facial landmarks designated by [n, sn, ls, sto, li, sl, gn, en, ex, ps, pi, zy, al, ch, go] be [k1, k2, . . ., k15]. The facial growth parameters for different age transformations can be computed using anthropometric constraints on facial proportions. The computation of facial growth parameters is formulated as a nonlinear optimization problem. We identified 52 facial proportions that can be reliably estimated using the photogrammetry of frontal face images. Anthropometric constraints based on proportion indices translate into linear and nonlinear constraints on the selected facial growth parameters. While constraints based on proportion indices such as the intercanthal index and nasal index result in linear constraints on the growth parameters, constraints based on proportion indices such as the eye fissure index and orbital width index result in nonlinear constraints on the growth parameters.

Let the constraints derived using the proportion indices be denoted as r1(k) = β1, r2(k) = β2, . . ., rN(k) = βN. The objective function f(k) that needs to be minimized w.r.t. k is built from these constraints (the αj and βi are constants; ci is an age-based proportion index obtained from [83]). We compute the growth parameters that minimize the objective function in an iterative fashion. Next, using the growth parameters computed over the selected facial landmarks, we compute the growth parameters over the entire face region. This is formulated as a scattered data interpolation problem, and age-transformed faces are obtained using the proposed model.
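The iterative estimation described above can be sketched as a small nonlinear least-squares problem. The landmark values, the single proportion target, and the regularization term below are invented for illustration; the chapter's actual formulation uses 24 landmarks and 52 proportion indices.

```python
import numpy as np
from scipy.optimize import least_squares

def grow(R, th, k):
    """'Revised' cardioidal strain: R' = R(1 + k(1 - cos th)), th' = th."""
    return R * (1.0 + k * (1.0 - np.cos(th))), th

def to_xy(R, th):
    """Polar coordinates (origin near the crown) to Cartesian."""
    return np.array([R * np.sin(th), R * np.cos(th)])

# Toy polar landmarks (name, R, theta): nasion, subnasale, gnathion.
landmarks = [("n", 1.00, 0.3), ("sn", 1.10, 1.2), ("gn", 1.25, 2.9)]

def residuals(k, target):
    """One proportion-index constraint r(k) - beta, plus a small Tikhonov
    term on k so this toy problem stays well-posed."""
    pts = {name: to_xy(*grow(R, th, ki))
           for (name, R, th), ki in zip(landmarks, k)}
    ratio = (np.linalg.norm(pts["n"] - pts["sn"])
             / np.linalg.norm(pts["n"] - pts["gn"]))
    return np.concatenate([[ratio - target], 1e-2 * k])

# Iteratively solve for the growth parameters that satisfy the constraint.
sol = least_squares(residuals, x0=np.zeros(3), args=(0.35,))
print(sol.x)
```

In the full model, a solver like this would be run over all 52 proportion constraints, and the per-landmark parameters would then be interpolated over the whole face (scattered data interpolation).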


(Figure: facial growth simulated from an original face image to 16 yrs, shown with the growth parameters computed for the 10 yrs – 16 yrs transformation.)

We propose a facial shape variation model that represents facial feature deformations observed during adulthood as driven by the changing physical properties of the underlying facial muscles. The model is based on the assumption that the degrees of freedom associated with facial feature deformations are directly related to the physical properties and geometric orientations of the underlying facial muscles.



where (x_t0^(i), y_t0^(i)) and (x_t1^(i), y_t1^(i)) correspond to the Cartesian coordinates of the ith facial feature at ages t0 and t1, k^(i) corresponds to a facial growth parameter, and [P_t0^(i)]_x, [P_t0^(i)]_y correspond to the orthogonal components of the pressure applied on the ith facial feature at age t0.

We propose a physically based parametric muscle model for human faces that implicitly accounts for the physical properties, geometric orientations, and functionalities of each of the individual facial muscles. Drawing inspiration from Waters' muscle model [87], we identify three types of facial muscles, namely linear muscles, sheet muscles, and sphincter muscles, based on their functionalities. Further, we propose transformation models for each muscle type.

The following factors are taken into consideration while developing the pressure models. (i) Muscle functionality and gravitational forces: the proposed pressure models reflect muscle functionalities such as the "stretch" operation and the "contraction" operation, and the direction of applied pressure reflects the effects of gravitational forces. (ii) Points of origin and insertion for each muscle: the degrees of freedom associated with muscle deformations are minimum at their points of origin (fixed end) and maximum at their points of insertion (free end). Hence, the deformations induced over a facial feature directly depend on the distance of the facial feature from the point of origin of the underlying muscle. The transformation models proposed for each muscle type are illustrated below.

1. Linear muscle (α, φ)

Linear muscles correspond to the "stretch" operation. These muscles are described by two attributes, namely the muscle length (α) and the muscle orientation w.r.t. the facial axis (φ). The farther a feature is from the muscle's point of origin, the greater the chance that the feature undergoes deformation. Hence, the pressure is modeled such that P^(i) ∝ α^(i) (α^(i) is the distance of the ith feature from the point of origin). The corresponding shape transformation model is:

x_t1^(i) = x_t0^(i) + k [α^(i) sin φ],
y_t1^(i) = y_t0^(i) + k [α^(i) cos φ].

2. Sheet muscle (α, φ, θ, ω)

Sheet muscles correspond to the "stretch" operation as well. They are described by four attributes (muscle length, angles subtended, etc.). The pressure applied on a fiducial feature is modeled as P^(i) ∝ α^(i) sec θ^(i), where α^(i) is the distance of the ith feature from the point(s) of origin of the underlying muscle. The shape transformation model is:

x_t1^(i) = x_t0^(i) + k [α^(i) sec θ^(i) sin(φ + θ^(i))],
y_t1^(i) = y_t0^(i) + k [α^(i) sec θ^(i) cos(φ + θ^(i))].


3. Sphincter muscle (α, β)

The sphincter muscle corresponds to the "contraction/expansion" operation and is described by two attributes. The pressure, modeled as a function of the distance from the point of origin, P^(i) ∝ r^(i)(φ^(i)) cos φ^(i), is directed radially inward/outward:

x_t1^(i) = x_t0^(i) + k [r^(i)(φ^(i)) cos² φ^(i)],
y_t1^(i) = y_t0^(i) + k [r^(i)(φ^(i)) cos φ^(i) sin φ^(i)].
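The per-muscle drift models above are simple to apply once the muscle attributes are known. The sketch below implements the linear- and sheet-muscle cases; the landmark position, muscle length, orientation, and k value are invented for the demonstration.

```python
import numpy as np

def linear_muscle_drift(x, y, alpha, phi, k):
    """Drift of a feature point under a linear muscle: pressure P ∝ alpha,
    directed along the muscle orientation phi w.r.t. the facial axis."""
    return (x + k * alpha * np.sin(phi),
            y + k * alpha * np.cos(phi))

def sheet_muscle_drift(x, y, alpha, phi, theta, k):
    """Drift under a sheet muscle: P ∝ alpha·sec(theta), where theta is the
    angle the feature subtends at the muscle's point of origin."""
    s = alpha / np.cos(theta)          # alpha * sec(theta)
    return (x + k * s * np.sin(phi + theta),
            y + k * s * np.cos(phi + theta))

# A mouth-corner landmark drifting under a hypothetical linear muscle of
# length 0.4, oriented 30 degrees off the facial axis, with k = 0.25.
x1, y1 = linear_muscle_drift(0.0, 0.0, alpha=0.4, phi=np.radians(30), k=0.25)
print(x1, y1)
```

With theta = 0 the sheet-muscle model reduces to the linear-muscle model, which is a quick sanity check on an implementation.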

Figure 24.12 illustrates the muscle-based pressure distributions described above. From a database that comprises 1200 pairs of age-separated face images (predominantly Caucasian), we selected 50 pairs of face images, each undergoing similar age transformations. We extract projective measurements (21 horizontal measurements and 23 vertical measurements) across the facial features. We analyze the intrapair shape transformations from the perspective of weight-loss, weight-gain, and weight-retention and select the appropriate training sets for each case. Again, following an approach similar to that described in the previous section, we compute the muscle parameters by studying the transformation of ratios of facial distances across age transformations.

From a modeling perspective, facial wrinkles and other forms of textural variations observed in aging faces can be characterized on the image domain by means of image gradients. Let (I_t1^(i), I_t2^(i)), 1 ≤ i ≤ N, correspond to pairs of age-separated face images of N individuals undergoing similar age transformations (t1 → t2). In order to study the facial wrinkle variations across the age transformation, we identify four facial regions which tend to have a high propensity toward developing wrinkles, namely the forehead region (W1), the eyebrow region (W2), the nasal region (W3), and the lower chin region (W4). W_n, 1 ≤ n ≤ 4, corresponds to the facial mask that helps isolate the desired facial region. Let ∇I_t1^(i) and ∇I_t2^(i) correspond to the image gradients of the ith image at t1 and t2 years, 1 ≤ i ≤ N. Given a test image J_t1 at t1 years, the image gradient of which is ∇J_t1, we induce textural variations by incorporating the region-based gradient differences that were learned from the set of training images discussed above:

∇J_t2 = ∇J_t1 + (1/N) Σ_i W_n · (∇I_t2^(i) − ∇I_t1^(i)).   (24.24)

Figure 24.13 provides an overview of the proposed facial aging model.
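The region-based gradient-transfer idea can be sketched in a few lines: learn the average masked gradient change from training pairs, then add it to a test image's gradient field. The random arrays below are stand-ins for real face images, and a real system would reintegrate the modified gradient field (e.g., by Poisson reconstruction [88]) to obtain the aged image.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W_, N = 8, 8, 5

I_t1 = rng.random((N, H, W_))                 # training images at age t1
I_t2 = I_t1 + 0.1 * rng.random((N, H, W_))    # same faces at age t2
mask = np.zeros((H, W_))
mask[:3, :] = 1.0                             # W_n: toy "forehead" region

def grad(img):
    """Image gradient as (d/drow, d/dcol) arrays."""
    return np.gradient(img)

# Average masked gradient difference over the training pairs (Eq. 24.24).
d_gy = np.mean([mask * (grad(a)[0] - grad(b)[0])
                for a, b in zip(I_t2, I_t1)], axis=0)
d_gx = np.mean([mask * (grad(a)[1] - grad(b)[1])
                for a, b in zip(I_t2, I_t1)], axis=0)

J_t1 = rng.random((H, W_))                    # test image at t1
gy, gx = grad(J_t1)
gy2, gx2 = gy + d_gy, gx + d_gx               # induced textural variation
print(gy2.shape, gx2.shape)
```

Note that outside the masked region the gradient field is left untouched, so only the wrinkle-prone region acquires the learned variation.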



(Figure 24.12: muscle-based pressure illustration, showing the muscle-based pressure distributions modeled on the facial muscles.)

The proposed facial aging models were used to perform face recognition across age transformations on two databases. The first database comprised age-separated face images of individuals under 18 years of age, and the second comprised age-separated face images of adults. On a database that comprises 260 age-separated image pairs of adults, we perform face recognition across age progression. We adopt PCA to perform recognition across ages under the following three settings: no transformation in shape and texture, performing shape transformation, and performing


(Figure panels: original image (age: 54 years); muscle-based feature drifts; shape transformed for weight-loss and weight-gain; effects of gradient transformation; combined shape and texture transformation.)

FIGURE 24.13

An overview of the proposed facial aging model: facial shape variations induced for the cases of weight-gain and weight-loss are illustrated. Further, the effects of gradient transformations in inducing textural variations are illustrated as well.

TABLE 24.4 Face recognition across ages

shape and textural transformation. Table 24.4 reports the rank 1 recognition score under the three settings. The experimental results highlight the significance of transforming shape and texture when performing face recognition across ages.

A similar performance improvement was observed on the face database that comprises individuals under 18 years of age. For a more detailed account of the experimental results, we refer the readers to our earlier works [60, 61].
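The recognition setup is a standard eigenface-style PCA matcher. The sketch below uses random arrays as stand-ins for gallery and probe face images; in the experiments described above, the probe would first be shape- and texture-transformed before projection.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 64                      # 40 gallery "faces", 64-dim vectors
gallery = rng.random((n, d))

mean = gallery.mean(axis=0)
X = gallery - mean
# Principal subspace via SVD of the centered gallery (eigenfaces).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:10]                    # top-10 eigenfaces

def project(img):
    """Coefficients of an image in the eigenface subspace."""
    return basis @ (img - mean)

def rank1_match(probe):
    """Index of the nearest gallery face in the eigenface subspace."""
    coeffs = X @ basis.T           # gallery coefficients, shape (n, 10)
    d2 = np.linalg.norm(coeffs - project(probe), axis=1)
    return int(np.argmin(d2))

# A probe that is a slightly perturbed copy of gallery face 7 should
# produce a rank-1 match with that face.
probe = gallery[7] + 0.01 * rng.random(d)
print(rank1_match(probe))
```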



REFERENCES

[1] R. Chellappa, C. L. Wilson, and S. Sirohey. Human and machine recognition of faces: a survey. Proc. IEEE, 83:705–740, 1995.

[2] S. Z. Li and A. K. Jain, editors. Handbook of Face Recognition. Springer-Verlag, 2004.

[3] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips. Face recognition: a literature survey. ACM Comput. Surv., 35(4):399–458, 2003.

[4] R. Hietmeyer. Biometric identification promises fast and secure processing of airline passengers. ICAO J., 55(9):10–11, 2000.

[5] P. J. Phillips, R. M. McCabe, and R. Chellappa. Biometric image processing and recognition. Proc. European Signal Process. Conf., Rhodes, Greece, 1998.

[6] P. J. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone. Face recognition vendor test 2002: evaluation report. NISTIR 6965, http://www.frvt.org, 2003.

[7] P. J. Phillips, H. Moon, S. Rizvi, and P. J. Rauss. The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 22:1090–1104, 2000.

[8] A. J. O'Toole. Psychological and neural perspectives on human face recognition. In S. Z. Li and A. K. Jain, editors, Handbook of Face Recognition, Springer, 2004.

[9] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley-Interscience, 2001.

[10] V. Bruce. Recognizing Faces. Lawrence Erlbaum Associates, London, UK, 1988.

[11] M. Kirby and L. Sirovich. Application of the Karhunen-Loève procedure for the characterization of human faces. IEEE Trans. Pattern Anal. Mach. Intell., 12:103–108, 1990.

[12] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cogn. Neurosci., 3:72–86, 1991.

[13] K. Etemad and R. Chellappa. Discriminant analysis for recognition of human face images. J. Opt. Soc. Am. A, 14(8):1724–1733, 1997.

[14] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell., 19:711–720, 1997.

[15] W. Zhao, R. Chellappa, and A. Krishnaswamy. Discriminant analysis of principal components for face recognition. In Proc. Int. Conf. Automatic Face and Gesture Recognit., 336–341, Nara, Japan, 1998.

[16] M. S. Bartlett, H. M. Lades, and T. J. Sejnowski. Independent component representations for face recognition. Proc. Soc. Photo Opt. Instrum. Eng., 3299:528–539, 1998.

[17] P. Penev and J. Atick. Local feature analysis: a general statistical theory for object representation. Netw.: Comput. Neural Syst., 7:477–500, 1996.

[18] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Trans. Pattern Anal. Mach. Intell., 19(7):696–710, 1997.

[19] B. Moghaddam. Principal manifolds and probabilistic subspaces for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell., 24:780–788, 2002.

[20] S. Zhou and R. Chellappa. Multiple-exemplar discriminant analysis for face recognition. In Proc. Int. Conf. Pattern Recognit., Cambridge, UK, August 2004.

[21] J. Li, S. Zhou, and C. Shekhar. A comparison of subspace analysis for face recognition. In IEEE Int. Conf. Acoustics, Speech, and Signal Process. (ICASSP), Hong Kong, China, April 2003.

[22] S. H. Lin, S. Y. Kung, and J. J. Lin. Face recognition/detection by probabilistic decision based neural network. IEEE Trans. Neural Netw., 8:114–132, 1997.


[23] P. J. Phillips. Support vector machines applied to face recognition. Adv. Neural Inf. Process. Syst., 1998.

[28] R. Basri and D. Jacobs. Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell., 25:218–233, 2003.

[29] W. T. Freeman and J. B. Tenenbaum. Learning bilinear models for two-factor problems in vision. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Puerto Rico, 1997.

[30] A. Shashua and T. R. Raviv. The quotient image: class based re-rendering and recognition with varying illuminations. IEEE Trans. Pattern Anal. Mach. Intell., 23:129–139, 2001.

[31] S. Zhou, R. Chellappa, and D. Jacobs. Characterization of human faces under illumination variations using rank, integrability, and symmetry constraints. In European Conf. on Comput. Vis., Prague, Czech Republic, May 2004.

[32] T. Cootes, K. Walker, and C. Taylor. View-based active appearance models. In Proc. of Int. Conf. on Automatic Face and Gesture Recognition, Grenoble, France, 2000.

[33] R. Gross, I. Matthews, and S. Baker. Eigen light-fields and face recognition across pose. In Proc. Int. Conf. Automatic Face and Gesture Recognit., Washington, DC, 2002.

[34] A. Pentland, B. Moghaddam, and T. Starner. View-based and modular eigenspaces for face recognition. In Proc. of IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Seattle, WA, 1994.

[35] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Trans. Pattern Anal. Mach. Intell., 25:1063–1074, 2003.

[36] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell., 23:643–660, 2001.

[37] R. Gross, I. Matthews, and S. Baker. Fisher light-fields for face recognition across pose and illumination. In Proc. of the German Symposium on Pattern Recognit., 2002.

[38] M. Vasilescu and D. Terzopoulos. Multilinear image analysis for facial recognition. In Proc. of Int. Conf. on Pattern Recognit., Quebec City, Canada, 2002.

[39] S. Zhou and R. Chellappa. Illuminating light field: image-based face recognition across illuminations and poses. In Proc. of Int. Conf. on Automatic Face and Gesture Recognit., Seoul, Korea, May 2004.

[40] M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Wurtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Comput., 42(3):300–311, 1993.

[41] B. Knight and A. Johnston. The role of movement in face recognition. Vis. Cogn., 4:265–274, 1997.



[42] B. Li and R. Chellappa. A generic approach to simultaneous tracking and verification in video. IEEE Trans. Image Process., 11(5):530–554, 2002.

[43] V. Krueger and S. Zhou. Exemplar-based face recognition from video. In European Conference on Computer Vision, Copenhagen, Denmark, 2002.

[44] S. Zhou and R. Chellappa. Probabilistic human recognition from video. In European Conf. on Comput. Vis., Vol. 3, 681–697, Copenhagen, Denmark, May 2002.

[45] S. Zhou, V. Krueger, and R. Chellappa. Probabilistic recognition of human faces from video. Comput. Vis. Image Underst., 91:214–245, 2003.

[46] S. Zhou, R. Chellappa, and B. Moghaddam. Visual tracking and recognition using appearance-adaptive models in particle filters. IEEE Trans. Image Process., 11:1434–1456, 2004.

[47] M. J. Lyons, J. Budynek, and S. Akamatsu. Automatic classification of single facial images. IEEE Trans. Pattern Anal. Mach. Intell., 21(12):1357–1362, 1999.

[48] A. Lanitis, C. J. Taylor, and T. F. Cootes. Toward automatic simulation of aging effects on face images. IEEE Trans. Pattern Anal. Mach. Intell., 24:442–455, 2002.

[49] M. J. Black and Y. Yacoob. Recognizing facial expressions in image sequences using local parameterized models of image motion. Int. J. Comput. Vis., 25:23–48, 1997.

[50] Y. Tian, T. Kanade, and J. Cohn. Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell., 23:1–19, 2001.

[51] I. Shimshoni, Y. Moses, and M. Lindenbaum. Shape reconstruction of 3D bilaterally symmetric surfaces. Int. J. Comput. Vis., 39:97–100, 2000.

[52] W. Zhao and R. Chellappa. Symmetric shape from shading using self-ratio image. Int. J. Comput. Vis., 45:55–75, 2001.

[53] R. T. Frankot and R. Chellappa. A method for enforcing integrability in shape from shading problem. IEEE Trans. Pattern Anal. Mach. Intell., 10:439–451, 1988.

[54] T. Kanade. Computer Recognition of Human Faces. Birkhäuser, Basel, Switzerland, and Stuttgart, Germany, 1973.

[55] M. D. Kelly. Visual identification of people by computer. Tech. Rep. AI-130, Stanford AI Project, Stanford, CA, 1970.

[56] T. Vetter and T. Poggio. Linear object classes and image synthesis from a single example image. IEEE Trans. Pattern Anal. Mach. Intell., 19(7):733–742, 1997.

[57] M. Ramachandran, S. Zhou, D. Jhalani, and R. Chellappa. A method for converting a smiling face to a neutral face with applications to face recognition. In IEEE Int. Conf. Acoustics, Speech, and Signal Process. (ICASSP), Philadelphia, USA, March 2005.

[58] H. Ling, S. Soatto, N. Ramanathan, and D. Jacobs. A study of face recognition as people age. In Proc. IEEE Int. Conf. on Comput. Vis. (ICCV), Rio de Janeiro, October 2007.

[59] N. Ramanathan and R. Chellappa. Face verification across age progression. IEEE Trans. Image Process., 15(11):3349–3362, 2006.

[60] N. Ramanathan and R. Chellappa. Modeling age progression in young faces. In Proc. IEEE Comput. Vis. Pattern Recognit. (CVPR), Vol. 1, 387–394, New York, June 2006.

[61] N. Ramanathan and R. Chellappa. Modeling shape and textural variations in aging adult faces. In IEEE Conf. Automatic Face and Gesture Recognition, 2008.

[62] S. Zhou, G. Aggarwal, R. Chellappa, and D. Jacobs. Appearance characterization of linear Lambertian objects, generalized photometric stereo and illumination-invariant face recognition. IEEE Trans. Pattern Anal. Mach. Intell., 29:230–245, 2007.


[63] S. Zhou and R. Chellappa. Image-based face recognition under illumination and pose variations. J. Opt. Soc. Am. A, 22:217–229, 2005.

[64] H. Hayakawa. Photometric stereo under a light source with arbitrary motion. J. Opt. Soc. Am. A, 11(11):3079–3089, 1994.

[65] L. Zhang and D. Samaras. Face recognition under variable lighting using harmonic image exemplars. In IEEE Conf. Comput. Vis. Pattern Recognit., 19–25, 2003.

[66] J. Atick, P. Griffin, and A. Redlich. Statistical approach to shape from shading: reconstruction of 3-dimensional face surfaces from single 2-dimensional images. Neural Comput., 8:1321–1340, 1996.

[67] K. C. Lee, J. Ho, and D. Kriegman. Nine points of light: acquiring subspaces for face recognition under variable lighting. In IEEE Conf. Comput. Vis. Pattern Recognit., 519–526, December 2001.

[68] P. Fua. Regularized bundle adjustment to model heads from image sequences without calibrated data. Int. J. Comput. Vis., 38:153–157, 2000.

[69] Y. Shan, Z. Liu, and Z. Zhang. Model-based bundle adjustment with application to face modeling. In Proc. Int. Conf. Comput. Vis., 645–651, 2001.

[70] A. R. Chowdhury and R. Chellappa. Face reconstruction from video using uncertainty analysis and a generic model. Comput. Vis. Image Underst., 91:188–213, 2003.

[71] G. Qian and R. Chellappa. Structure from motion using sequential Monte Carlo methods. In Proc. IEEE Int. Conf. Comput. Vis., 2:614–621, Vancouver, Canada, 2001.

[72] A. Laurentini. The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell., 16(2):150–162, 1994.

[73] W. Matusik, C. Buehler, R. Raskar, S. Gortler, and L. McMillan. Image-based visual hulls. In Proc. SIGGRAPH, 369–374, New Orleans, LA, USA, 2000.

[74] M. Levoy and P. Hanrahan. Light field rendering. In Proc. SIGGRAPH, 31–42, New Orleans, LA, USA, 1996.

[75] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The lumigraph. In Proc. SIGGRAPH, 43–54, New Orleans, LA, USA, 1996.

[76] M. A. O. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: TensorFaces. In European Conf. Comput. Vis., 2350:447–460, Copenhagen, Denmark, May 2002.

[77] S. Zhou and R. Chellappa. Rank constrained recognition under unknown illumination. In IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures, 2003.

[78] S. Romdhani and T. Vetter. Efficient, robust and accurate fitting of a 3D morphable model. In Proc. IEEE Int. Conf. Comput. Vis., 59–66, Nice, France, 2003.

[79] A. Lanitis, C. J. Taylor, and T. F. Cootes. Automatic interpretation and coding of face images using flexible models. IEEE Trans. Pattern Anal. Mach. Intell., 19(7):743–756, 1997.

[80] J. B. Pittenger and R. E. Shaw. Aging faces as viscal-elastic events: implications for a theory of nonrigid shape perception. J. Exp. Psychol. Hum. Percept. Perform., 1(4):374–382, 1975.

[81] J. T. Todd, L. S. Mark, R. E. Shaw, and J. B. Pittenger. The perception of human growth. Sci. Am., 242(2):132–144, 1980.

[82] A. J. O'Toole, T. Vetter, H. Volz, and E. M. Salter. Three-dimensional caricatures of human heads: distinctiveness and the perception of facial age. Perception, 26:719–732, 1997.

[83] L. G. Farkas. Anthropometry of the Head and Face. Raven Press, New York, 1994.

[84] D. M. Bates and D. G. Watts. Nonlinear Regression Analysis and Its Applications. Wiley, New York, 1988.



[85] F. L. Bookstein. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567–585, 1989.

[86] A. Lanitis. FG-Net aging database.

[87] K. Waters. A muscle model for animating three-dimensional facial expression. Comput. Graph. (ACM), 21:17–24, 1987.

[88] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. Comput. Graph. (ACM), 313–318, 2003.


How Iris Recognition Works

John Daugman

University of Cambridge

Algorithms developed by the author for recognizing persons by their iris patterns are the basis of all current public deployments of iris-based automated biometric identification. To date, many millions of persons across many countries have enrolled with this system, most commonly for expedited border crossing in lieu of passport presentation, but also in government security watch-list screening programs. The recognition principle is the failure of a test of statistical independence on iris phase structure, as encoded by multiscale quadrature Gabor wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy that supports decisions about personal identity with extremely high confidence. These high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification mode") without making false matches, despite so many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. This chapter explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.
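The test-of-statistical-independence principle can be illustrated in miniature: encode local phase with a quadrature (complex) Gabor filter, quantize each response to 2 bits by phase quadrant, and compare codes by fractional Hamming distance, which hovers near 0.5 for independent codes. The 1D signal, filter parameters, and sampling grid below are toy stand-ins for the 2D iris-wavelet demodulation the chapter describes.

```python
import numpy as np

def gabor_phase_bits(signal, freq=0.2, sigma=8.0):
    """Quantize quadrature Gabor responses into a 2-bit-per-sample code."""
    n = np.arange(len(signal))
    bits = []
    for center in range(0, len(signal), 16):
        # Complex (quadrature) Gabor kernel centered at `center`.
        g = (np.exp(-((n - center) ** 2) / (2 * sigma ** 2))
             * np.exp(2j * np.pi * freq * (n - center)))
        z = np.sum(signal * g)
        bits += [int(z.real > 0), int(z.imag > 0)]  # phase quadrant → 2 bits
    return np.array(bits)

def hamming(a, b):
    """Fraction of disagreeing bits; ≈ 0.5 for statistically independent codes."""
    return float(np.mean(a != b))

rng = np.random.default_rng(2)
eye = rng.standard_normal(256)                       # stand-in iris signal
code = gabor_phase_bits(eye)
same = gabor_phase_bits(eye + 0.05 * rng.standard_normal(256))
other = gabor_phase_bits(rng.standard_normal(256))   # a different "eye"
print(hamming(code, same), hamming(code, other))
```

Phase bits from the same eye agree almost everywhere despite noise, while bits from different eyes disagree about half the time, which is exactly the statistical-independence failure/success the matcher tests for.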

25.1 INTRODUCTION

Reliable automatic recognition of persons has long been an attractive goal As in all

pattern recognition problems, the key issue is the relation between interclass and intraclass

variability: objects can be reliably classified only if the variation among different instances

of a given class is less than the variation between different classes For example in face

recognition, difficulties arise from the fact that the face is a changeable social organ

displaying a variety of expressions, as well as being an active 3D object whose image

varies with viewing angle, pose, illumination, accoutrements, and age[4, 5] It has been

shown that even for “mug shot” (pose-invariant) images taken at least one year apart,

even the best algorithms have unacceptably large error rates[6–8] Against this intraclass

(same face) variation, interclass variation is limited because different faces possess the
