OPTICAL IMAGING AND SPECTROSCOPY, Part 2


Figure 2.28 Base transmission pattern, tiled mask, and inversion deconvolution for p = 5.

Figure 2.29 Base transmission pattern, tiled mask, and inversion deconvolution for p = 11.


implemented under cyclic boundary conditions rather than using zero padding. In contrast with a pinhole system, the number of pixels in the reconstructed coded aperture image is equal to the number of pixels in the base transmission pattern. Figures 2.31–2.33 are simulations of coded aperture imaging with the 59 × 59-element MURA code. As illustrated in the figure, the measured 59 × 59-element data are strongly positive. For this image the maximum noise-free measurement value is 100, and the minimum value is 58, for a measurement dynamic range of less than 2. We will discuss noise and entropic measures of sensor system performance at various points in this text; in our first encounter with a multiplex measurement system we simply note that isomorphic measurement of the image would produce a much higher measurement dynamic range for this image.
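Numerically, the cyclic decoding described here is a pair of periodic correlations. The sketch below builds the MURA base pattern from the commonly published Gottesman and Fenimore quadratic-residue recipe and checks the delta-like correlation of mask and decoding array; the function names and the final check are our own additions, not the book's code.

```python
import numpy as np

def mura(p):
    """MURA base pattern for prime p (quadratic-residue construction)."""
    squares = {(k * k) % p for k in range(1, p)}          # nonzero residues mod p
    C = np.array([1 if k in squares else -1 for k in range(p)])
    A = np.zeros((p, p), dtype=int)
    A[1:, 0] = 1                                          # first column open for i != 0
    for i in range(1, p):
        for j in range(1, p):
            A[i, j] = 1 if C[i] * C[j] == 1 else 0
    return A

def decoding(A):
    """Decoding array: +1 where A = 1, -1 where A = 0, with G[0, 0] set to +1."""
    G = 2 * A - 1
    G[0, 0] = 1
    return G

def circular_correlate(x, y):
    """Periodic cross-correlation c[k] = sum_n x[n] y[n - k] via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))))

A = mura(59)
psf = circular_correlate(A, decoding(A))
peak = psf[0, 0]
psf[0, 0] = 0
print(peak, np.abs(psf).max())   # sharp peak, near-flat sidelobes under cyclic decoding
```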

In practice, noise sensitivity is a primary concern in coded aperture and other multiplex sensor systems. For the MURA-based coded aperture system, Gottesman and Fenimore [102] argue that the pixel signal-to-noise ratio is


Figure 2.31 Coded aperture imaging simulation with no noise for the 59 × 59-element code of Fig. 2.30.

Figure 2.32 Coded aperture imaging simulation with shot noise for the 59 × 59-element code of Fig. 2.30.


derive the square-root characteristic form of shot noise in particular. For the 59 × 59 MURA aperture, N = 1749. If we assume that the object consists of binary values 1 and 0, the maximum pixel SNR falls from 41 for a point object to 3 for an object with 200 points active. The smiley face object of Fig. 2.31 consists of 155 points. Dependence of the SNR on object complexity is a unique feature of multiplex sensor systems. The equivalent of Eqn. (2.47) for a focal imaging system is

As with the canonical wave and correlation field multiplex systems presented in Sections 10.2 and 6.4.2, coded aperture imaging provides a very high depth of field image but also suffers from the same SNR deterioration in proportion to source complexity.

Figure 2.33 Coded aperture imaging simulation with additive noise for the 59 × 59-element code of Fig. 2.30.

To this point we have considered images as two-dimensional distributions, despite the fact that target objects and the space in which they are embedded are typically three-dimensional. Historically, images were two-dimensional because focal imaging is a plane-to-plane transformation and because photochemical and electronic detector arrays are typically 2D films or focal planes. Using computational image synthesis, however, it is now common to form 3D images from multiplex measurements. Of course, visualization and display of 3D images then presents new and different challenges.

A variety of methods have been applied to 3D imaging, including techniques derived from analogy with biological stereo vision systems and actively illuminated acoustic and optical ranging systems. Each approach has advantages specific to targeted object classes and applications. Ranging and stereo vision are best adapted to opaque objects where the goal is to estimate a surface topology embedded in three dimensions.

The present section and the next briefly overview tomographic methods for multidimensional imaging. These sections rely on analytical techniques and concepts, such as linear transform theory, the Fourier transform, and vector spaces, which are not formally introduced until Chapter 3. The reader unfamiliar with these concepts may find it useful to read the first few sections of that chapter before proceeding. Our survey of computed tomography is necessarily brief; detailed surveys are presented by Kak and Slaney [131] and Buzug [37].

Tomography relies on a simple 3D extension of the density-based object model that we have applied in this chapter. The word tomography is derived from the Greek tomos, meaning slice or section, and graphia, meaning describing. The word predates computational methods and originally referred to an analog technique for imaging a cross section of a moving object. While tomography is sometimes used to refer to any method for measuring 3D distributions (e.g., optical coherence tomography; Section 6.5), computed tomography (CT) generally refers to the projection methods described in this section.

Despite our focus on 3D imaging, we begin by considering tomography of 2D objects using a one-dimensional detector array. 2D analysis is mathematically simpler and is relevant to common X-ray illumination and measurement hardware. 2D slice tomography systems are illustrated in Fig. 2.34. In parallel beam systems, a collimated beam of X rays illuminates the object. The object is rotated in front of the X-ray source, and a one-dimensional detector opposite the source measures the integrated absorption along a line through the object for each ray component.

As always, the object is described by a density function f(x, y). Defining, as illustrated in Fig. 2.35, l to be the distance of a particular ray from the origin, θ to be the angle between a normal to the ray and the x axis, and α to be the distance along the ray, measurements collected by a parallel beam tomography system take the form

g(l, θ) = ∫ f(l cos θ − α sin θ, l sin θ + α cos θ) dα    (2.49)

where g(l, θ) is the Radon transform of f(x, y). The Radon transform is defined for f ∈ L²(Rⁿ) as the integral of f over all hyperplanes of dimension n − 1.
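For simulation, Eqn. (2.49) can be discretized directly: for each angle, sample f along a grid of (l, α) ray coordinates and sum over α. A sketch of ours, with parameter names of our choosing:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radon(f, thetas):
    """Discretize Eqn. (2.49) on an n x n object with unit sample spacing."""
    n = f.shape[0]
    c = (n - 1) / 2.0                  # place the origin at the array center
    ls = np.arange(n) - c              # ray offsets l
    alphas = np.arange(n) - c          # positions alpha along each ray
    g = np.zeros((n, len(thetas)))
    for k, t in enumerate(thetas):
        x = ls[:, None] * np.cos(t) - alphas[None, :] * np.sin(t) + c
        y = ls[:, None] * np.sin(t) + alphas[None, :] * np.cos(t) + c
        # map_coordinates interpolates f at (row, col) = (y, x); outside -> 0
        vals = map_coordinates(f, [y.ravel(), x.ravel()], order=1, cval=0.0)
        g[:, k] = vals.reshape(n, n).sum(axis=1)   # integral over alpha (step = 1)
    return g
```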


Figure 2.34 Tomographic sampling geometries.

Figure 2.35 Projection tomography geometry.


Each hyperplane is defined by a surface normal vector iₙ [in Eqn. (2.49), iₙ = cos θ iₓ + sin θ i_y]. The equation of the hyperplane is x · iₙ = l. By the projection slice theorem, the one-dimensional Fourier transform of g(l, θ) with respect to l yields slices of the Fourier transform of f:

ĝ(u_l, θ) = f̂(u = u_l cos θ, v = u_l sin θ)    (2.53)

where f̂ is the Fourier transform of f. If we sample uniformly in l space along an aperture of length R_s, then Δu_l = 2π/R_s. The sample period along l determines the spatial extent of the sample.

In principle, one could use Eqn. (2.52) to sample the Fourier space of the object and then inverse-transform to estimate the object density. In practice, difficulties in interpolation and sampling in the Fourier space make an alternative algorithm more attractive. The alternative approach is termed convolution–backprojection. The algorithm is as follows:

1. Measure the projections g(l, θ).

2. Fourier-transform to obtain ĝ(u_l, θ).

3. Multiply ĝ(u_l, θ) by the filter |u_l| and inverse-transform. This step consists of convolving g(l, θ) with the inverse transformation of |u_l| (the range of u_l is limited to the maximum frequency sampled). This step produces the filtered function Q(l, θ) = ∫ |u_l| ĝ(u_l, θ) exp(i2πu_l l) du_l.

4. Sum the filtered functions Q(l, θ) interpolated at points l = x cos θ + y sin θ to produce the reconstructed estimate of f. This constitutes the backprojection step.
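The four steps translate directly into numerical code. The sketch below is our own, written against the radon sketch given earlier, with the ramp filter applied by an FFT along l and a simple linear interpolation for the backprojection:

```python
import numpy as np

def fbp(g, thetas):
    """Convolution-backprojection (steps 1-4) for a sinogram g of shape (n, n_theta)."""
    n = g.shape[0]
    # Steps 2-3: filter g along l with |u_l| in the Fourier domain to form Q(l, theta)
    u = np.fft.fftfreq(n)
    Q = np.real(np.fft.ifft(np.fft.fft(g, axis=0) * np.abs(u)[:, None], axis=0))
    # Step 4: backproject Q at l = x cos(theta) + y sin(theta)
    c = (n - 1) / 2.0
    xs = np.arange(n) - c
    X, Y = np.meshgrid(xs, xs)
    f_est = np.zeros((n, n))
    for k, t in enumerate(thetas):
        l = X * np.cos(t) + Y * np.sin(t) + c            # sinogram row coordinate
        l0 = np.clip(np.floor(l).astype(int), 0, n - 2)  # clip at the detector edges
        w = l - l0
        f_est += (1 - w) * Q[l0, k] + w * Q[l0 + 1, k]   # linear interpolation in l
    return f_est * np.pi / len(thetas)                   # d(theta) over [0, pi)

# Usage: thetas = np.linspace(0, np.pi, 180, endpoint=False)
#        f_est = fbp(radon(f, thetas), thetas)
```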

To understand the filtered backprojection approach, we express the inverse Fourier transform relationship

f(x, y) = ∫∫ f̂(u, v) exp[i2π(ux + vy)] du dv    (2.54)


Transforming to cylindrical coordinates in the Fourier plane and applying the projection slice theorem yields

f(x, y) = ∫₀^π ∫₋∞^+∞ f̂(w, θ) exp[i2πw(x cos θ + y sin θ)] |w| dw dθ    (2.57)

where we use the fact that for real-valued f(x, y), f̂(−w, θ) = f̂(w, θ + π). This means that

f(x, y) = ∫₀^π Q(l = x cos θ + y sin θ, θ) dθ    (2.58)

The convolution–backprojection algorithm is illustrated in Fig. 2.36, which shows the Radon transform, the Fourier transform ĝ(u_l, θ), the Fourier transform f̂(u, v), Q(l, θ), and the reconstructed object estimate. Note, as expected from the projection slice theorem, that ĝ(u_l, θ) corresponds to slices of f̂(u, v) "unrolled" around the origin. Edges of the Radon transform are enhanced in Q(l, θ), which is a "highpass" version of g(l, θ).
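The slice correspondence is easy to verify numerically. At θ = 0 the discrete statement is exact: the 1D DFT of the projection onto the x axis equals the v = 0 row of the 2D DFT. A small check of our own construction:

```python
import numpy as np

n = 128
x = np.arange(n) - n / 2
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 + Y**2) / 200.0)              # smooth test object
proj = f.sum(axis=0)                             # g(l, theta = 0): integrate over y
slice_2d = np.fft.fft2(f)[0, :]                  # v = 0 row of the 2D DFT
print(np.allclose(np.fft.fft(proj), slice_2d))   # True
```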

We turn finally to 3D tomography, where we choose to focus on projections measured by a camera. A camera measures a bundle of rays passing through a principal point, (x_o, y_o, z_o). For example, we saw in Section 2.5 that pinhole or coded aperture imaging captures f̂(θ_x, θ_y), where, repeating Eqn. (2.31)

In 3D, tomographic imaging using projections through a sequence of principal points is cone beam tomography. Note that Eqn. (2.59) over a range of vertex points is not the 3D Radon transform. We refer to the transformation based on projections along ray bundles as the X-ray transform. The X-ray transform is closely related to the Radon transform, however, and can be inverted by similar means. The 4D X-ray transform of 3D objects, consisting of projections through all principal points on a sphere surrounding an object, overconstrains the object. Tuy [233] describes reduced vertex paths that produce well-conditioned 3D X-ray transforms of 3D objects.

Figure 2.36 Tomographic imaging with the convolution–backprojection algorithm.

A discussion of cone beam tomography using optical imagers is presented by Marks et al. [168], who apply the circular vertex path inversion algorithm developed by Feldkamp et al. [68]. The algorithm uses 3D convolution–backprojection based on the vertex path geometry and parameters illustrated in Fig. 2.37. Projection data f_F(θ_x, θ_y) are weighted and convolved with separable filters.

Figure 2.37 Cone beam geometry.

Optical sensor design boils down to compromises between mathematically attractive and physically attainable visibilities. In most cases, physical mappings are the starting point of design. So far in this chapter we have encountered two approaches driven primarily by ease of physical implementation (focal and tomographic imaging) and one attempt to introduce artificial coding (coded aperture imaging). The tension between physical and mathematical/coding constraints continues to develop in the remainder of the text.

The ultimate challenge of optical sensor design is to build a system such that the visibility v(A, B) is optimal for the sensor task. 3D optical design, specifying the structure and composition of optical elements in a volume between the object and detector elements, is the ultimate toolkit for visibility coding. Most current optical design, however, is based on sequences of quasiplanar surfaces. In view of the elegance of focal imaging and the computational challenge of 3D design, the planar/ray-based approach is generally appropriate and productive. 3D design will, however, ultimately yield superior system performance.

As a first introduction to the challenges and opportunities of 3D optics, we consider reference structure tomography (RST) as an extension of coded aperture imaging to multiple dimensions [30]. Ironically, to ease explanation and analysis, our discussion of RST is limited to the 2D plane. The basic concept of a reference structure is illustrated in Fig. 2.38. A set of detectors at fixed points observes a radiant object through a set of reference obscurants. A ray through the object is visible at a detector if it does not intersect an obscurant; rays that intersect obscurants are not visible. The reference structure/detector geometry segments the object space into discrete visibility cells. The visibility cells in Fig. 2.38 are indexed by signatures x_i = 1101001…, where x_ij is one if the ith cell is visible to the jth detector and zero otherwise.

Figure 2.38 Reference structure geometry.


As discussed in Section 2.1, the object signal may be decomposed into components f_j, each an integral of the object density over the corresponding visibility cell.

Interesting scaling laws arise from the consideration of RST systems consisting of m detectors and n obscurants in a d-dimensional embedding space. The number of possible signatures (2^m) is much larger than the number of distinct visibility cells (O(m^d n^d)). Agarwal et al. prove a lower bound of order (mn/log(n))^d exists on the minimum number of distinct signatures [2]. This means that despite the fact that different cells have the same signature in Fig. 2.38, as the scale of the system grows, each cell tends to have a unique signature.

The RST system might thus be useful as a point source location system, since a point source hidden in (mn)^d cells would be located in m measurements. Since an efficient system for point source location on N cells requires log N measurements, RST is efficient for this purpose only if m ≪ n. Point source localization may be regarded as an extreme example of compressive sampling or combinatorial group testing. The nature of the object is constrained, in this case by extreme sparsity, such that reconstruction is possible with many fewer measurements than the number of resolution cells. More generally, reference structures may be combined with compressive estimation [38,58] to image multipoint sparse distributions on the object space.

Since the number of signature cells is always greater than m, the linear mapping g = Hf is always ill-conditioned. H is an m × p matrix, where p is the number of signatures realized by a given reference structure. H may be factored into the product H = USV† by singular value decomposition (SVD). U and V are m × m and p × p unitary matrices, and S is an m × p matrix with nonnegative singular values along the diagonal and zeros off the diagonal.

Depending on the design of the reference structure and the statistics of the object, one may find that the pseudoinverse

f_est = H⁺g = VS⁺U†g

is a good estimate of f. We refer to this approach in Section 7.5 as projection of f onto the range of H. The range of H is an m-dimensional subspace of the p-dimensional space spanned by the state of the signature cells. The complement of this subspace in R^p is the null space of H. The use of prior information, such as the fact that f consists of a point source, may allow one to infer the null space structure of f from measurements in the range of H. Brady et al. [30] take this approach a step further by assuming that f is described by a continuous basis rather than the visibility cell mosaic.
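A sketch of the pseudoinverse estimate, using hypothetical dimensions (m = 12 detectors, p = 40 signature cells) and a random binary stand-in for the visibility matrix H:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 12, 40
H = rng.integers(0, 2, size=(m, p)).astype(float)  # stand-in visibility matrix
f_true = np.zeros(p)
f_true[rng.integers(p)] = 1.0                      # sparse (point source) object
g = H @ f_true

U, s, Vt = np.linalg.svd(H, full_matrices=False)
f_pinv = Vt.T @ ((U.T @ g) / s)                    # f_est = V S+ U^T g, range of H
print(np.linalg.norm(H @ f_pinv - g))              # residual ~ 0: g is reproduced
```

The estimate reproduces the measurements exactly, but the component of f_true in the null space of H is lost, which is why prior information (such as sparsity) matters.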

Analysis of the measurement, representation, and analysis spaces requires more mathematical tools than we have yet utilized. We develop such tools in the next chapter and revisit abstract measurement strategies in Section 7.5.

PROBLEMS

2.1 Visibility

(a) Suppose that the half space x > 0 is filled with water while the half space x < 0 is filled with air. Show that the visibility v(A, B) = 1 for all points A and B ∈ R³.

(b) Suppose that the subspace −1 ≤ x ≤ 1 is filled with glass (e.g., a window) while the remainder of space is filled with air. Show that the visibility v(A, B) = 1 for all points A and B ∈ R³.

(c) According to Eqn. (2.5), the transmission angle θ₂ becomes complex for (n₁/n₂) sin θ₁ > 1. In this case, termed total internal reflection, there is no transmitted ray from region 1 into region 2. How do you reconcile the fact that if n₁ > n₂ some rays from region 1 never penetrate region 2 with the claim that all points in region 1 are visible to all points in region 2?

2.2 Materials Dispersion. The dispersive properties of a refractive medium are often modeled using the Sellmeier equation
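As a numerical aside, the Sellmeier form n²(λ) = 1 + Σᵢ Bᵢλ²/(λ² − Cᵢ) is easy to evaluate. The coefficients below are commonly quoted catalog values for Schott BK7 (with λ in micrometers); they are illustrative and are not taken from the text.

```python
import numpy as np

# Commonly published Sellmeier coefficients for Schott BK7 (lambda in microns)
B = np.array([1.03961212, 0.231792344, 1.01046945])
C = np.array([0.00600069867, 0.0200179144, 103.560653])  # microns^2

def n_sellmeier(lam_um):
    """Refractive index n(lambda) from the Sellmeier model."""
    lam2 = lam_um**2
    return np.sqrt(1 + np.sum(B * lam2 / (lam2 - C)))

for lam in (0.4861, 0.5893, 0.6563):   # F, D, and C spectral lines
    print(lam, n_sellmeier(lam))        # ~1.51-1.52 across the visible for BK7
```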


2.3 Microscopy. A modern microscope using an infinity-corrected objective forms an intermediate image at a distance termed the reference focal length. What is the magnification of the objective in terms of the reference focal length and the objective focal length?

2.4 Telescopy. Estimate the mass of a 1-m aperture, 1-m focal length glass refractive lens. Compare your answer with the mass of a 1-m aperture mirror.

2.5 Ray Tracing. Consider an object placed 75 cm in front of a 50-cm focal length lens. A second 50-cm focal length lens is placed 10 cm behind the first. Sketch a ray diagram for this system, showing the intermediate and final image planes. Is the image erect or inverted, real or virtual? Estimate the magnification.

2.6 Paraxial Ray Tracing. What is the ABCD matrix for a parabolic mirror? What is the ray transfer matrix for a Cassegrain telescope? How does the magnification of the telescope appear in the ray transfer matrix?
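As a sketch of how such a ray transfer matrix can be assembled (our own construction, treating each mirror paraxially as a thin focusing element with f = R/2, which holds for a parabolic mirror to first order):

```python
import numpy as np

def mirror(f):
    """Paraxial ABCD matrix of a focusing mirror of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def space(d):
    """Free propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

f1, f2 = 1.0, -0.25            # hypothetical primary and secondary focal lengths
d = f1 + f2                    # confocal spacing makes the system afocal
M = mirror(f2) @ space(d) @ mirror(f1)
print(M)                       # C = 0 (afocal); D = -f1/f2 = 4 is the angular magnification
```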

2.7 Spot Diagrams. Consider a convex lens in the geometry of Fig. 2.10. Assuming that R₁ = R₂ = 20 cm and n = 1.5, write a computer program to generate a spot diagram in the nominal thin lens focal plane for parallel rays incident along the z axis. A spot diagram is a distribution of ray crossing points. For example, use numerical analysis to find the point at which a ray incident on the lens at point (x, y) crosses the focal plane. Such a ray hits the first surface of the lens at point z = (x² + y²)/2R₁ − d₁ and is refracted onto a new ray direction. The refracted ray hits the second surface at the point such that (x, y, z) + C i_t satisfies z = d₂ − (x² + y²)/2R₁. The ray is refracted from this point toward the focal plane. Plot spot diagrams for the refracted rays for diverse input points at various planes near the focal plane to attempt to find the plane of best focus. Is the best focal distance the same as predicted by Eqn. (2.16)? What is the area covered by your tightest spot diagram? (Hint: Since the lens is rotationally symmetric and the rays are incident along the axis, it is sufficient to solve the ray tracing problem in the xz plane and rotate the 1D spot diagram to generate a 2D diagram.)

2.8 Computational Ray Tracing. Automated ray tracing software is the workhorse of modern optical design. The goal of this problem is to write a ray tracing program using a mathematical programming environment (e.g., Matlab or Mathematica). For simplicity, work with rays confined to the xz plane. In a 2D ray tracing program, each ray has a current position in the plane and a current unit vector. To move to the next position, one must solve for the intercept at which the ray hits the next surface in the optical system. One draws a line from the current position to the next position along the current direction and solves for the new unit vector using Snell's law. Snell's law at any surface in the plane is


n₁(i_i × i_n) = n₂(i_t × i_n)

where i_i is a unit vector along the incident ray, i_t is a unit vector along the refracted ray, and i_n is a unit vector parallel to the surface normal. Given i_i and i_n, one solves for i_t using Snell's law and the normalization condition. The nth surface is defined by the curve F_n(x, z) = 0, and the normal at points on the surface is parallel to ∇F_n.
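A small helper of the kind this problem calls for is sketched below; the function name and conventions are ours. It solves the same vector Snell relation by decomposing the incident direction against the surface normal:

```python
import numpy as np

def refract(ii, inrm, n1, n2):
    """Unit refracted direction from unit incident direction ii and unit normal inrm.
    Returns None for total internal reflection."""
    inrm = inrm if np.dot(ii, inrm) < 0 else -inrm   # orient the normal against the ray
    cos_i = -np.dot(ii, inrm)
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)               # Snell: n1 sin(i) = n2 sin(t)
    if sin2_t > 1.0:
        return None                                   # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * ii + (eta * cos_i - cos_t) * inrm    # unit vector by construction

it = refract(np.array([np.sin(0.3), -np.cos(0.3)]), np.array([0.0, 1.0]), 1.0, 1.5)
print(it, np.linalg.norm(it))                         # refracted ray, |it| = 1
```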

Figure 2.39 Ray tracing through arbitrary lenses.


lenses satisfying the general form of Eqn. (2.69) for various values of a, k, d, and D.

2.9 Pinhole Visibility. Derive the geometric visibility for a pinhole camera [Eqn. (2.25)].

2.10 Coded Aperture Imaging. Design and simulate a coded aperture imaging system using a MURA mask. Use a mask with 50 or more resolution cells along each axis.

(a) Plot the transmission mask, the decoding matrix, and the cross-correlation of the two.

(b) Simulate reconstruction of an image of your choice, showing both the measured data and its reconstruction using the MURA decoding matrix.

(c) Add noise to your simulated data in the form of normal and Poisson distributed values at various levels and show reconstruction using the decoding matrix.

(d) Use inverse Fourier filtering to reconstruct the image (using the FFT of the mask as an inverse filter). Plot the FFT of the mask.

(e) Submit your code, images, and a written description of your results.

2.11 Computed Tomography

(a) Plot the functions f₁(x, y) = rect(x) rect(y) and f₂(x, y) = rect(x) rect(y)(1 − rect(x + y) rect(x − y)).

(b) Compute the Radon transforms of f₁(x, y) and f₂(x, y) numerically.

(c) Use a numerical implementation of the convolution–backprojection technique described in Section 2.6 to estimate f₁(x, y) and f₂(x, y) from their Radon transforms.

2.12 Reference Structure Tomography. A 2D reference structure tomography system as sketched in Fig. 2.38 modulates the visibility of an object embedding space using n obscuring disks. The visibility is observed by m detectors.

(a) Present an argument justifying the claim that the number of signature cells is of order m²n².

(b) Assuming prior knowledge that the object is a point source, estimate the number of measurements necessary to find the signature cell occupied by the object.

(c) For what values of m and n would one expect a set of measurements to uniquely identify the signature cell occupied by a point object?


ANALYSIS

…the electric forces can disentangle themselves from material bodies and can continue to subsist as conditions or changes in the state of space.

—H. Hertz [118]

Chapter 2 developed a geometric model for optical propagation and detection and described algorithms for object estimation on the basis of this model. While geometric analysis is of enormous utility in analysis of patterns formed on image or projection planes, it is less useful in analysis of optical signals at arbitrary points in the space between objects and sensors. Analysis of the optical system at all points in space requires the concept of "the optical field." The field describes the state of optical phenomena, such as spectra, coherence, and polarization parameters, independent of sources and detectors.

Representation, analysis, transformation, and measurement of optical fields are the focus of the next four chapters. The present chapter develops a mathematical framework for analysis of the field from the perspective of sensor systems. A distinction may be drawn between optical systems, such as laser resonators and fiber waveguides, involving relatively few spatial channels, and systems, such as imagers and spectrometers, involving a large number of degrees of freedom. The degrees of freedom are typically expressed as pixel values, modal amplitudes, or Fourier components. It is not uncommon for a sensor system to involve 10⁶–10¹² parameters.

To work with such complex fields, this chapter explores mathematical tools drawn from harmonic analysis. We begin to apply these tools to physical field analysis in Chapter 4.


We develop two distinct sets of mathematical tools:

1. Transformation tools, which enable us to analyze propagation of fields from one space to another

2. Sampling tools, which enable us to represent continuous field distributions using discrete sets of numbers

A third set of mathematical tools, signal encoding and estimation from sample data, is considered in Chapters 7 and 8.

This chapter initially considers transformation and sampling in the context of conventional Fourier analysis, meaning that transformations are based on Fourier transfer functions and impulse response kernels and sampling is based on the Whittaker–Shannon sampling theorem. Since the mid-1980s, however, dramatic discoveries have revolutionized harmonic signal analysis. New tools drawn from wavelet theory allow us to compare various mathematical models for fields, including different classes of functions (plane waves, modes, and wavelets) that can be used to represent fields. Different mathematical bases enable us to flexibly distribute discrete components to optimize computational processing efficiency in field analysis.

Section 3.2 broadly describes the nature of fields and field transformations. Sections 3.3–3.5 focus on basic properties of Fourier and Fresnel transformations. We consider conventional Shannon sampling in Section 3.6. Section 3.7 discusses discrete numerical analysis of linear transformations. Sections 3.8–3.10 briefly extend sampling and discrete transformation analysis to include wavelet analysis.

A field is a distribution function defined over a space, meaning that a field value is associated with every point in the space. The optical field is a radiation field, meaning that it propagates through space as a wave. Propagation induces relationships between the field at different points in space and constrains the range of distribution functions that describe physically realizable fields.

Optical phenomena produce a variety of observables at each point in the radiation space. The "data cube" of all possible observables at all points is a naive representation of the information encoded on the optical signal. Well-informed characterization of the optical signal in terms of signal values at discrete points, modal amplitudes, or boundary conditions greatly reduces the complexity of representing and analyzing the field. Our goal in this chapter is to develop the mathematical tools that enable efficient analysis. While our discussion is often abstract, it is important to remember that the mathematical distributions are tied to physical observables. On the other hand, this association need not be particularly direct. We often find it useful to use field distributions that describe functions that are not directly observable in order to develop system models for ultimately observable phenomena.


The state of the optical field is described using distributions defined over various spaces. The most obvious space is Euclidean space–time, R³ × T, and the most obvious field distribution is the electric field E(r, t). The optical field cannot be an arbitrary distribution because physical propagation rules induce correlations between points in space–time. For the field independent of sources and detectors, these relationships are embodied in the Maxwell equations discussed in Chapter 4. The quantum statistical nature of optical field generation and detection is most easily incorporated in these relationships using the coherence theory developed in Chapter 6.

System analysis using field theory consists of the use of physical relationships to determine the field in a region of space on the basis of the specification of the field in a different region. The basic problem is illustrated in Fig. 3.1. The field is specified in one region, such as the (x, y) plane illustrated in the figure. This plane may correspond to the surface of an object or to an aperture through which object fields propagate. Given the field on an input boundary, we combine mathematical tools from this chapter with physical models from Chapter 4 to estimate the field elsewhere. In Fig. 3.1, the particular goal is to find the field on the output (x′, y′) plane.

Mathematically, propagation of the field from the input boundary to an output boundary may be regarded as a transformation from a distribution f(x, y, t) on the input plane to a distribution g(x′, y′, t) on the output plane. For optical fields this transformation is linear.

We represent propagation of the field from one boundary to another by the transformation g(x′, y′) = T{f(x, y)}. The transformation T{·} is linear over its functional domain if for all functions f₁(x, y) and f₂(x, y) in the domain and for all scalars a and b

T{a f₁(x) + b f₂(x)} = a T{f₁(x)} + b T{f₂(x)}    (3.1)

Figure 3.1 Plane-to-plane transformation of optical fields.


Linearity allows transformations to be analyzed using superpositions of basis functions. A function f(x, y) in the range of a discrete basis {fₙ(x, y)} may be expressed as a superposition f(x, y) = Σₙ aₙ fₙ(x, y).


Harmonic bases are particularly useful in the analysis of shift-invariant linear transformations. Letting g(x′, y′) = T{f(x, y)}, T{·} is shift-invariant if

T{f(x − x_o, y − y_o)} = g(x′ − x_o, y′ − y_o)    (3.8)

The impulse response of a shift-invariant transformation takes the form h(x′, x; y′, y) = h(x′ − x, y′ − y), and the integral representation of the transformation is the convolution

g(x′, y′) = ∫∫ f(x, y) h(x′ − x, y′ − y) dx dy    (3.9)

Optical field propagation can usually be modeled as a linear transformation and can sometimes be modeled as a shift-invariant linear transformation. Shift invariance applies in imaging and diffraction in homogeneous media, where a shift in the input object produces a corresponding shift in the output image or diffraction pattern. Shift invariance does not apply to diffraction or refraction through structured optical scatterers or devices. Recalling, for example, field propagation through reference structures from Section 2.7, one easily sees that a shift in the object field produces a complex shift-variant recoding of the object shadow.

Fourier analysis is a powerful tool for analysis of shift-invariant systems. Since shift-invariant imaging and field propagation are central to optical systems, Fourier analysis is ubiquitous in this text. The attraction is that Fourier analysis allows the response of a linear shift-invariant system to be modeled by a multiplicative transfer function (filter). Fourier analysis also arises naturally as an explanation of resonance and color in optical cavities and quantum mechanical interactions. One might say that Fourier analysis is attractive because optical propagation is linear and because field–matter interactions are nonlinear.

Fourier transforms take slightly different forms in different texts; here we represent the Fourier transform f̂(u) of a function f(x) ∈ L²(R) as

f̂(u) = ∫₋∞^+∞ f(x) exp(−i2πux) dx

with the inverse transformation

f(x) = ∫₋∞^+∞ f̂(u) exp(i2πux) du
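Under this sign and 2π convention the Gaussian exp(−πx²) transforms into itself, which gives a quick numerical check of any implementation (a construction of ours for illustration):

```python
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)
f = np.exp(-np.pi * x**2)                        # self-Fourier test function
u = np.array([0.0, 0.5, 1.0])
# Riemann-sum approximation of f_hat(u) = integral f(x) exp(-i 2 pi u x) dx
f_hat = np.array([np.sum(f * np.exp(-2j * np.pi * uu * x)) * dx for uu in u])
print(np.real(f_hat), np.exp(-np.pi * u**2))     # the two rows should agree closely
```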


In optical system analysis one often has recourse to multidimensional Fourier transforms, which one defines for f ∈ L²(Rⁿ) as

f̂(u) = ∫ f(x) exp(−i2πu · x) dx

and

f(x) = F⁻¹{f̂(u)} = ∫ f̂(u) exp(i2πu · x) du


Plancherel’s theorem is easily extended to show that for functions f (x) [ L2(R)and g(x) [ L2(R)

x axis. We then find that


Combining Eqns. (3.14) and (3.19), we note that


Rotation in Two Dimensions. For a function rotated by angle θ, the two-dimensional Fourier transform is

F{f(x′, y′)} = f̂(u cos θ + v sin θ, v cos θ − u sin θ)    (3.39)

Cylindrical Coordinates. Analysis of the Fourier transform in other coordinate systems is also of interest. In optical systems with a well-defined axis of propagation, cylindrical coordinates are of particular interest. In cylindrical coordinates the two-dimensional Fourier transformation is


If the input distribution is circularly symmetric, f(r, θ) = f(r), and the Fourier transformation reduces to the Hankel transformation

f̂(ρ) = 2π ∫₀^∞ f(r) J₀(2πρr) r dr
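A direct quadrature of this transform (our sketch; scipy's j0 supplies the Bessel kernel) again maps the Gaussian exp(−πr²) to itself, since the Hankel transform of a circularly symmetric function equals its 2D Fourier transform:

```python
import numpy as np
from scipy.special import j0

dr = 1e-3
r = np.arange(dr / 2, 20.0, dr)
f = np.exp(-np.pi * r**2)                      # Gaussian test input
rho = np.array([0.0, 0.5, 1.0])
# f_hat(rho) = 2 pi Integral_0^inf f(r) J0(2 pi rho r) r dr, by midpoint rule
f_hat = np.array([2 * np.pi * np.sum(f * j0(2 * np.pi * rk * r) * r) * dr for rk in rho])
print(f_hat, np.exp(-np.pi * rho**2))          # the Gaussian is self-Hankel as well
```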

As mentioned previously, Fourier analysis is particularly attractive when considering linear shift-invariant (LSI) transformations arising in imaging and diffraction. The basis of this attraction arises from the convolution theorem [Eqn. (3.18)]. Recall the integral form of an LSI transformation from Eqn. (3.9): by the convolution theorem, the Fourier transform of the output is a simple product of the input function and the transfer function, rather than an integral transformation. Transfer functions are used extensively in the analysis of optical imaging systems, as discussed in Sections 4.7 and 6.4.

The transfer function plays the role of a filter, meaning that it modulates the spectrum of the input signal, attenuating some spatial frequencies while allowing others to pass.
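As a closing sketch (ours, with an arbitrary Gaussian lowpass standing in for the transfer function), filtering an input distribution amounts to one forward FFT, a pointwise multiplication by the filter, and an inverse FFT:

```python
import numpy as np

n = 256
u = np.fft.fftfreq(n)
U, V = np.meshgrid(u, u)
H = np.exp(-(U**2 + V**2) / (2 * 0.05**2))     # hypothetical lowpass transfer function
f = np.random.default_rng(1).random((n, n))    # input distribution
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))  # output: input spectrum times filter
print(f.std(), g.std())                         # variance drops: high frequencies attenuated
```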
