Computerized Optical Processes
12.1 INTRODUCTION
For almost 30 years, the silver halide emulsion has been the first choice as the recording medium for holography, speckle interferometry, speckle photography, moiré and optical filtering. Materials such as photoresist, photopolymers and thermoplastic film have also been in use. There are two main reasons for this success. In processes where diffraction is involved (as in holographic reconstruction), a transparency is needed. The other advantage of film is its superior resolution. Film has, however, one big disadvantage: it must undergo some kind of processing. This is time consuming and quite cumbersome, especially in industrial applications.
Electronic cameras (vidicons) were first used as a recording medium in holography at the beginning of the 1970s. In this technique, called TV holography or ESPI (electronic speckle pattern interferometry), the interference fringe pattern is reconstructed electronically. At the beginning of the 1990s, computerized ‘reconstruction’ of the object wave was first demonstrated. This is, however, not a reconstruction in the ordinary sense, but it has proven possible to calculate and display the reconstructed field in any plane by means of a computer. It must be remembered that the electronic camera target can never act as a diffracting element. The success of the CCD-camera/computer combination has also prompted the development of speckle methods such as digital speckle photography (DSP).
The CCD camera has one additional disadvantage compared to silver halide films – its inferior resolution; the size of a pixel element of a 1317 × 1035 pixel CCD camera target is 6.8 µm. When used in DSP, the size σ_s of the speckles imaged onto the target must be greater than twice the pixel pitch p, i.e.

σ_s = 1.22(1 + m)λF ≥ 2p

where m is the camera lens magnification and F the aperture number (see Equation (8.9)). When applied to holography, the distance d between the interference fringes must, according to the Nyquist theorem (see Section 5.8), be greater than 2p:

d = λ/sin α ≥ 2p
Assuming sin α ≈ α, this gives
α ≤ λ/(2p)

where α is the maximum angle between the object and reference waves and λ is the wavelength. For p = 6.8 µm this gives α = 2.7° (λ = 0.6328 µm).
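As a quick numerical check of these sampling limits, the following short sketch (assuming the pixel pitch and wavelength quoted above, and taking σ_s = 1.22(1 + m)λF as the speckle size) evaluates the maximum allowed angle and the aperture number needed in DSP:

```python
import numpy as np

# Assumed values from the text above
wavelength = 0.6328e-6   # He-Ne laser wavelength (m)
p = 6.8e-6               # pixel pitch of the CCD target (m)

# Maximum object-reference angle allowed by the Nyquist condition d > 2p
alpha_max = wavelength / (2 * p)              # radians
print(np.degrees(alpha_max))                  # -> about 2.7 degrees

# Aperture number needed to make the speckles larger than 2p in DSP,
# using sigma_s = 1.22*(1 + m)*lambda*F as the speckle size (an assumed
# form of Equation (8.9))
m = 1.0                                       # assumed unit magnification
F_min = 2 * p / (1.22 * (1 + m) * wavelength)
print(F_min)                                  # -> roughly 8.8
```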
In this chapter we describe the principles of digital holography and digital speckle photography. We also include the more mature method of TV holography.
12.2 TV HOLOGRAPHY (ESPI)
In this technique (also called electronic speckle pattern interferometry, ESPI) the holographic film is replaced by a TV camera as the recording medium (Jones and Wykes 1989). Obviously, the target of a TV camera can be used neither as a holographic storage medium nor for optical reconstruction of a hologram. Therefore the reconstruction process is performed electronically and the object is imaged onto the TV target. Because of the rather low resolution of a standard TV target, the angle between the object and reference waves has to be as small as possible. This means that the reference wave is made in-line with the object wave. A typical TV holography set-up therefore looks like that given in Figure 12.1. Here a reference wave modulating mirror (M1) and a chopper are included, which are necessary only for special purposes in vibration analysis (see Section 6.9).
The basic principles of ESPI were developed almost simultaneously by Macovski et al. (1971) in the USA, Schwomma (1972) in Austria, and Butters and Leendertz (1971) in England. Later the group of Løkberg (1980) in Norway contributed significantly to the field, especially in vibration analysis (Løkberg and Slettemoen 1987).
When the system in Figure 12.1 is applied to vibration analysis, the video store is not needed. As in the analysis in Section 6.9, assume that the object and reference waves on the TV target are described by
Figure 12.1 TV-holography set-up. From Løkberg (1980). (Reproduced by permission of Prof. O. J. Løkberg, Norwegian Institute of Technology, Trondheim)
u_o = U_o e^{iφ_o}  and  u = U e^{iφ}

respectively. For a harmonically vibrating object we have (see Equation (6.47) for g = 2)

u_o(x, t) = U_o e^{i[φ_o(x) + 2kD(x) sin ωt]}
where D(x) is the vibration amplitude at the point of spatial coordinates x and ω is the vibration frequency. The intensity distribution over the TV target becomes

I(x, t) = U² + U_o² + 2U U_o cos[φ − φ_o − 2kD(x) sin ωt]    (12.6)
This spatial intensity distribution is converted into a corresponding time-varying video signal. When the vibration frequency is much higher than the frame frequency of the TV system (25 frames per second, European standard), the intensity observed on the monitor is proportional to Equation (12.6) averaged over one vibration period, i.e. (cf. Equation (6.49))

Ī(x) = U² + U_o² + 2U U_o cos(φ − φ_o) J_0(2kD)    (12.7)
where J_0 is the zeroth-order Bessel function and the bar denotes the time average. Before being displayed on the monitor, the video signal is high-pass filtered and rectified. In the filtering process, the first two terms of Equation (12.7) are removed. After full-wave rectifying we are thus left with

I ∝ |2U U_o cos(φ − φ_o) J_0(2kD)|    (12.8)
Here φ − φ_o represents the phase difference between the reference wave and the wave scattered from the object in its stationary state. The term U U_o cos(φ − φ_o) therefore represents a speckle pattern, and the J_0-function is said to modulate this speckle pattern. Equation (12.8) is quite analogous to Equation (6.51) except that we get a |J_0| dependence instead of a J_0² dependence. The maxima and zeros of the intensity distributions have, however, the same locations in the two cases. A time-average recording of a vibrating turbine blade therefore looks like that shown in Figure 12.2(a) when applying ordinary holography, and that in Figure 12.2(b) when applying TV holography. We see that the main difference between the two fringe patterns is the speckled appearance of the TV holography picture.
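Since the dark fringes of Equation (12.8) occur where J_0(2kD) = 0, the vibration amplitudes at the dark fringes follow directly from the zeros of the Bessel function. A small sketch, assuming a He-Ne wavelength:

```python
import numpy as np
from scipy.special import jn_zeros

wavelength = 0.6328e-6                 # assumed He-Ne wavelength (m)
k = 2 * np.pi / wavelength

# Dark fringes of the |J0(2kD)| pattern: 2*k*D_n equals the n-th zero of J0
zeros = jn_zeros(0, 5)                 # first five zeros of J0
D_dark = zeros / (2 * k)               # corresponding vibration amplitudes (m)
print(D_dark * 1e9)                    # in nanometres: about 121, 278, 436, ...
```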
When applied to static deformations, the video store in Figure 12.1 must be included. This could be a video tape or disc, or, most commonly, a frame grabber (see Section 10.2), in which case the video signal is digitized by an analogue-to-digital converter. Assume that the wave scattered from the object in its initial state at a point on the TV target is described by

u_1 = U_o e^{iφ_o}    (12.9)
After deformation, this wave is changed to

u_2 = U_o e^{i(φ_o + 2kd)}    (12.10)
Figure 12.2 (a) Ordinary holographic and (b) TV-holographic recording of a vibrating turbine blade. (Reproduced by permission of Prof. O. J. Løkberg, Norwegian Institute of Technology, Trondheim)
where d is the out-of-plane displacement and where we have assumed equal field amplitudes in the two cases. Before deformation, the intensity distribution on the TV target is given by

I_1 = U² + U_o² + 2U U_o cos(φ − φ_o)    (12.11)
where U and φ are the amplitude and phase of the reference wave. This distribution is converted into a corresponding video signal and stored in the memory. After the deformation, the intensity and the corresponding video signal are given by

I_2 = U² + U_o² + 2U U_o cos(φ − φ_o − 2kd)    (12.12)
These two signals are then subtracted in real time and rectified, resulting in an intensity distribution on the monitor proportional to
I_1 − I_2 = 2U U_o |cos(φ − φ_o) − cos(φ − φ_o − 2kd)|    (12.13)
The difference signal is also high-pass filtered, removing any unwanted background signal due to slow spatial variations in the reference wave. Apart from the speckle pattern due to the random phase fluctuations φ − φ_o between the object and reference fields, this gives the same fringe patterns as when applying ordinary holography to static deformations. The dark and bright fringes are, however, interchanged; for example, the zero-order dark fringe corresponds to zero displacement.
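A minimal numerical sketch of this subtraction mode, assuming the two digitized frames are available as 2-D NumPy arrays and using a smoothing-based high-pass filter as a stand-in for the analogue filtering of the video signal:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def espi_subtraction_fringes(I1, I2, hp_size=15):
    """Simulated ESPI subtraction mode: subtract the stored frame from the
    live frame, high-pass filter and full-wave rectify (cf. Equation (12.13))."""
    diff = I2.astype(float) - I1.astype(float)
    # Crude high-pass filter: remove the slowly varying background
    diff -= uniform_filter(diff, size=hp_size)
    # Full-wave rectification; fringes are bright where the deformation
    # phase 2kd differs most from a multiple of 2*pi
    return np.abs(diff)
```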
This TV holography system has a lot of advantages. In the first place, the cumbersome, time-consuming development process of the hologram is omitted. The exposure time is quite short (1/25 s), relaxing the stability requirements, and one gets a new hologram every 1/25 s. Among other things, this means that an unsuccessful recording does not have the same serious consequences, and the set-up can be optimized very quickly. A lot of loading conditions can be examined during a relatively short time period. Time-average recordings of vibrating objects at different excitation levels and different frequencies are easily performed.
The interferograms can be photographed directly from the monitor screen or recorded on video tape for later analysis and documentation. TV holography is extremely useful for applications of the reference wave modulation and stroboscopic holography techniques mentioned in Section 6.9. In this way, vibration amplitudes down to a couple of angstroms have been measured. The method has been applied to a lot of different objects, varying from the human ear drum (Løkberg et al. 1979) to car bodies (Malmo and Vikhagen 1988).
When analysing static deformations, the real-time feature of TV holography makes it possible to compensate for rigid-body movements by tilting mirrors in the illumination beam path until a minimum number of fringes appear on the monitor.
12.3 DIGITAL HOLOGRAPHY
In ESPI the object was imaged onto the target of the electronic camera and the interference fringes could be displayed on a monitor. We will now see how the image of the object can be reconstructed digitally when the camera target is exposed to the unfocused interference field between the object and reference waves. The experimental set-up is therefore quite similar to that of standard holography.
The geometry for the description of digital holography is shown in Figure 12.3. We assume the field amplitude u_o(x, y) of the object to exist in the xy-plane. Let the hologram (the camera target) be in the ξη-plane a distance d from the object. Assume that a hologram given in the usual way as (cf. Equation (6.1))

I(ξ, η) = |r|² + |u_o|² + r u_o* + r* u_o    (12.14)
is recorded and stored by the electronic camera. Here u_o and r are the object and reference waves respectively. In standard holography the hologram is reconstructed by illuminating the hologram with the reconstruction wave r. This can of course not be done here. However, we can simulate r(ξ, η) in the ξη-plane by means of the computer and therefore also construct the product I(ξ, η)r(ξ, η). In Chapter 4 we learned that if the field amplitude distribution over a plane is given, then the field amplitude propagated to another point
Figure 12.3
in space is found by summing the contributions from the Huygens wavelets over the aperture. To find the reconstructed field amplitude distribution u_a(x, y) in the xy-plane we therefore apply the Rayleigh–Sommerfeld diffraction formula (Equation (4.7)):

u_a(x, y) = (1/iλ) ∫∫ I(ξ, η) r(ξ, η) (e^{ikρ}/ρ) cos θ dξ dη    (12.15)
with

ρ = [d² + (ξ − x)² + (η − y)²]^{1/2}    (12.16)

We should therefore be able to calculate u_a(x, y) at any distance d from the hologram plane. There are, however, two values of the reconstruction distance of most practical interest: (1) −d, where the virtual image is located (see Section 6.4), and (2) +d, the location of the real image, provided the reference wave is a plane wave. As found in Section 6.4, this demands that the reference and reconstruction waves are identical. With today's powerful computers it is straightforward to calculate the integral in Equation (12.15). However, with some approximations and rearrangements of the integrand, the processing speed can be increased considerably. Below we discuss how to approach this problem.
The first method for solving Equation (12.15) is to apply the Fresnel approximation as described in Section 1.7, that is, to retain the first two terms of a binomial expansion of ρ and to put cos θ = 1. This gives
u_a(x, y) = (e^{ikd}/iλd) ∫∫ I(ξ, η) r(ξ, η) exp{(ik/2d)[(ξ − x)² + (η − y)²]} dξ dη
          = (e^{ikd}/iλd) e^{iπdλ(u² + v²)} ∫∫ I(ξ, η) r(ξ, η) exp{(iπ/dλ)(ξ² + η²)} e^{−i2π(uξ + vη)} dξ dη    (12.17)

where we have introduced the spatial frequencies

u = x/(dλ)  and  v = y/(dλ)    (12.18)
Equation (12.17) can be written as

u_a(x, y) = z(u, v) F{I(ξ, η) r(ξ, η) w(ξ, η)}    (12.19)

The reconstructed field is therefore given as the Fourier transform of I(ξ, η) multiplied by r(ξ, η) and a quadratic phase function

w(ξ, η) = exp{(iπ/dλ)(ξ² + η²)}    (12.20)
Trang 7The evaluated integral is multiplied by a phase function
z(u, v) = exp{ikd} exp{iπd(u2+ v2)} ( 12.21)
In most applications z(u, v) can be neglected, e.g. when only the intensity is of interest, or if only phase differences matter, as in holographic interferometry.
F{f(ξ, η)w(ξ, η)} is often referred to as a Fresnel transformation of f(ξ, η). When d → ∞, w(ξ, η) → 1 and the Fresnel transform reduces to a pure Fourier transform.
A spherical wave from a point (0, 0, −d) is described by

r(ξ, η) = U_r exp{−(iπ/dλ)(ξ² + η²)}    (12.22)
By using this as the reconstruction wave, r · w = constant, and again we get a pure Fourier transform. This case is called lensless Fourier transform holography. Although this method gives a more efficient computation, we lose the possibility of numerical focusing by varying the distance d, since d vanishes from the formula. In Figure 12.4 the Fresnel method is applied.
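As a sketch of how the Fresnel method might be coded (a plane reference wave r = 1, a square N × N hologram and the camera pixel pitch are assumed; the constant phase factor z(u, v) is dropped as discussed above, and the function name is illustrative):

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, pitch, reference=1.0):
    """Numerical reconstruction by the Fresnel (single-FFT) method,
    cf. Equations (12.17)-(12.20)."""
    N = hologram.shape[0]
    # Hologram-plane coordinates (xi, eta), centred on the optical axis
    xi = (np.arange(N) - N // 2) * pitch
    XI, ETA = np.meshgrid(xi, xi)
    # Quadratic phase factor w(xi, eta) of Equation (12.20)
    w = np.exp(1j * np.pi / (d * wavelength) * (XI**2 + ETA**2))
    # Fresnel transform: Fourier transform of I * r * w; the phase factor
    # z(u, v) of Equation (12.21) is omitted, since usually only the
    # intensity or phase differences are of interest
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * reference * w)))

# Hypothetical usage: u_real = fresnel_reconstruct(I, 0.6328e-6, 1.38, 6.8e-6)
# followed by intensity = np.abs(u_real)**2 and phase = np.angle(u_real)
```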
In the second method we first note that the diffraction integral, Equation (12.15), can be written as

u_a(x, y) = ∫∫ I(ξ, η) r(ξ, η) g(x, y, ξ, η) dξ dη    (12.23)
where

g(x, y, ξ, η) = (1/iλ) exp{ik[d² + (ξ − x)² + (η − y)²]^{1/2}} / [d² + (ξ − x)² + (η − y)²]^{1/2}    (12.24)

which means that g(x, y, ξ, η) = g(x − ξ, y − η) and therefore Equation (12.23) can be written as a convolution

u_a = (I · r) ⊗ g    (12.25)
From the convolution theorem (see Appendix B) we therefore have

F{u_a} = F{I · r} · F{g}    (12.26)

By taking the inverse Fourier transform of this result, we get

u_a = F^{-1}{F{I · r} · F{g}}    (12.27)
The Fourier transform of g can be derived analytically (Goodman 1996):

G(u, v) = F{g} = exp{(2πid/λ)[1 − (λu)² − (λv)²]^{1/2}}    (12.28)
and therefore

u_a = F^{-1}{F{I · r} · G}    (12.29)

which saves us one Fourier transform.
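A corresponding sketch of the convolution method, under the same assumptions as the Fresnel sketch above; the transfer function is that of Equation (12.28), with evanescent components (negative values under the square root) suppressed:

```python
import numpy as np

def convolution_reconstruct(hologram, wavelength, d, pitch, reference=1.0):
    """Numerical reconstruction by the convolution method,
    cf. Equations (12.23)-(12.29)."""
    N = hologram.shape[0]
    # Spatial frequencies u, v matching the FFT sampling
    u = np.fft.fftfreq(N, d=pitch)
    U, V = np.meshgrid(u, u)
    arg = 1.0 - (wavelength * U)**2 - (wavelength * V)**2
    # Transfer function G(u, v) of Equation (12.28); negative arguments
    # (evanescent waves) are clipped to zero
    G = np.exp(2j * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram * reference) * G)
```

In contrast to the Fresnel method, the sampling interval of the field reconstructed this way equals the pixel pitch of the hologram, independent of the distance d.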
Figure 12.4 Numerical reconstruction of the real image using the Fresnel method. The bright central spot is due to the spectrum of the plane reference wave. The object was a 10.5 cm high, 6.0 cm wide white plaster bust of the composer J. Brahms placed 138 cm from the camera target. Reproduced by courtesy of O. Skotheim (2001)
Holographic interferometry

An important application of digital holography is in the field of holographic interferometry. Standard methods (see Chapter 6) rely on the extraction of the phase from interference fringes. Digital holography has the advantage of providing direct access to the phase data in the reconstructed wave field. Denoting the reconstructed real (or virtual) wave as

u = |u| e^{iϕ}    (12.30)

we get

ϕ = tan⁻¹[Im{u}/Re{u}]    (12.31)
This is a wrapped phase, and we have to rely on unwrapping techniques as described in Section 11.5. By reconstructing the real wave of the object in states 1 and 2 (e.g. before and after a deformation) we can extract the two phase maps ϕ_1 and ϕ_2 and calculate the phase difference Δϕ = ϕ_1 − ϕ_2.
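In code, the wrapped phase difference between two reconstructed complex fields (for instance obtained with one of the reconstruction sketches above) is conveniently taken from the complex product rather than from two separate arctangents; this is a small sketch, not a prescribed algorithm:

```python
import numpy as np

def wrapped_phase_difference(u1, u2):
    """Wrapped phase difference (phi1 - phi2) between two reconstructed
    complex fields, wrapped into (-pi, pi]; unwrap as in Section 11.5."""
    return np.angle(u1 * np.conj(u2))
```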
12.4 DIGITAL SPECKLE PHOTOGRAPHY
In Section 8.4.2 we learned how to measure the displacement vector from a double-exposed specklegram by illuminating the specklegram with a direct laser beam and observing the resulting Young fringes on a screen (see Figure 8.10). In Section 8.5 we gave a more detailed explanation of this phenomenon. The intensities I_1 and I_2 in the first and second recordings were written as I_1(x, y) = I(x, y) and I_2(x, y) = I(x + d, y). This could be done because we assumed the speckle displacement to be uniform within the area illuminated by the laser beam, and for simplicity we assumed the displacement to be in the x-direction. The Fourier transforms of I_1, I_2 and I were denoted J_1(u, v), J_2(u, v) and J(u, v) respectively. We found (Equation (8.38)) that

J_2(u, v) = J_1(u, v) · e^{i2πud} = J(u, v) · e^{i2πud}    (12.32)
Now we discuss another technique called Digital Speckle Photography (DSP). Here the specklegrams are recorded by an electronic camera. In practice, the image is divided into subimages with a size of, e.g., 8 × 8 pixels. Within each subimage, the speckle displacement is assumed to be constant. Assume that I_1 and I_2 are the intensities recorded in a particular subimage. The corresponding Fourier transforms are then easily calculated by a computer. Let us call this step 1 of our procedure. In step 2 we calculate a new spectrum given as

F(u, v) = (J_1* · J_2 / |J_1 · J_2|) |J_1 · J_2|^α    (12.33)
By using the result from Equation (12.32), we get

F(u, v) = |J(u, v)|^{2α} e^{i2πud}    (12.34)
To this we apply another Fourier transform operation (step 3):

F{F(u, v)} = ∫∫_{−∞}^{∞} |J(u, v)|^{2α} e^{−i2π[u(ξ−d)+vη]} du dv = G_α(ξ − d, η)    (12.35)
where

G_α(ξ, η) = ∫∫_{−∞}^{∞} |J(u, v)|^{2α} e^{−i2π(uξ+vη)} du dv    (12.36)
In practice, G_α(ξ − d, η) emerges as an expanded impulse or correlation peak located at (d, 0) in the second spectral domain. By this method we have obtained the cross-correlation between I_1 and I_2. This procedure therefore gives a more direct method for detecting the displacement d than does the Young fringe method. The parameter α controls the width of the correlation peak. Optimum values range from α = 0 for images characterized by a high spatial frequency content and a high noise level, to α = 0.5 for low-noise images with less fine structure. For α > 0.5 the high-frequency noise is magnified, resulting in an unreliable algorithm.
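The three steps translate almost directly into code. The sketch below handles one subimage pair; the subimage size, the value of α and the use of an inverse FFT in step 3 (so that a pattern shift of +d produces a peak at +d) are implementation choices of this sketch:

```python
import numpy as np

def subimage_displacement(I1, I2, alpha=0.3):
    """Steps 1-3 of the DSP procedure for one subimage pair.

    I1, I2: 2-D intensity arrays of the same subimage before and after
    displacement; alpha: exponent controlling the correlation peak width.
    Returns the integer-pixel displacement (dx, dy)."""
    # Step 1: Fourier transforms of the two recordings
    J1, J2 = np.fft.fft2(I1), np.fft.fft2(I2)
    # Step 2: normalized cross-spectrum, cf. Equation (12.33)
    cross = np.conj(J1) * J2
    F = cross / (np.abs(cross) + 1e-12) * np.abs(cross)**alpha
    # Step 3: back-transform; |G| peaks at the displacement
    G = np.fft.ifft2(F)
    peak = np.unravel_index(np.argmax(np.abs(G)), G.shape)
    # Map wrapped indices to signed displacements
    dy, dx = [s if s <= n // 2 else s - n for s, n in zip(peak, G.shape)]
    return dx, dy
```

Repeating this over all subimages gives the whole-field 2-D displacement described below.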
The last two steps of this procedure cannot be done optically, but are easily performed in a computer. The local displacement vector for each subimage is found by the above procedure, and thereby the 2-D displacement for the whole field can be deduced. DSP is
not restricted to laser speckles. On the contrary, white light speckles are superior to laser speckles when measuring object deformations, mainly because they are more robust to decorrelation.
A versatile method for creating white light speckles when measuring object contours or deformations is to project a random pattern onto the surface by means of an addressable video projector. DSP has also been used in combination with X-rays to measure internal deformations (Synnergren and Goldrein 1999). Here a plane of interest in the material is seeded with grains of an X-ray absorbing material, and a speckled shadow image is cast on the X-ray film.