Signal Processing and Performance Analysis for Imaging Systems

For a listing of recent titles in the Artech House Optoelectronics Series, turn to the back of this book.

Signal Processing and Performance Analysis for Imaging Systems

S. Susan Young, Ronald G. Driggers, Eddie L. Jacobs
Trang 5Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S Library of Congress
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN-13: 978-1-59693-287-6
Cover design by Igor Valdman
© 2008 ARTECH HOUSE, INC.
685 Canton Street
Norwood, MA 02062
All rights reserved Printed and bound in the United States of America No part of this bookmay be reproduced or utilized in any form or by any means, electronic or mechanical, includ-ing photocopying, recording, or by any information storage and retrieval system, withoutpermission in writing from the publisher
All terms mentioned in this book that are known to be trademarks or service marks havebeen appropriately capitalized Artech House cannot attest to the accuracy of this informa-tion Use of a term in this book should not be regarded as affecting the validity of any trade-mark or service mark
10 9 8 7 6 5 4 3 2 1
To our families
Preface

In today's consumer electronics market, where a 5-megapixel camera is no longer considered state-of-the-art, signal and image processing algorithms are real-time and widely used. They stabilize images, provide super-resolution, adjust for detector nonuniformities, reduce noise and blur, and generally improve camera performance for those of us who are not professional photographers. Most of these signal and image processing techniques are company proprietary, and the details of these techniques are never revealed to outside scientists and engineers. In addition, it is not necessary for the performance of these systems (including the algorithms) to be determined, since the metric of success is whether the consumer likes the product and buys the device.
In other imaging communities, such as military imaging systems (which, at a minimum, include visible, image intensifiers, and infrared) and medical imaging devices, it is extremely important to determine the performance of the imaging system, including the signal and image processing techniques. In military imaging systems that involve target acquisition and surveillance/reconnaissance, the performance of an imaging system determines how effectively the warfighter can accomplish his or her mission. In medical systems, the imaging system performance determines how accurately a diagnosis can be provided. Signal and image processing plays a key role in the performance of these imaging systems and, in the past 5 to 10 years, has become a key contributor to increased imaging system performance. There is a great deal of government funding in signal and image processing for imaging system performance, and the literature is full of algorithms developed by universities and government laboratories. There are still a great number of industry algorithms that, overall, are considered company proprietary. We focus on those in the literature and those algorithms that can be generalized in a nonproprietary manner. There are numerous books in the literature on signal and image processing techniques, algorithms, and methods. The majority of these books emphasize the mathematics of image processing and how it is applied to image information. Very few of the books address the overall imaging system performance when signal and image processing is considered a component of the imaging system. Likewise, there are many books in the area of imaging system performance that consider the optics, the detector, and the displays in the system and how the system performance behaves with changes or modifications of these components. There is very little book content where signal and image processing is included as a component of the overall imaging system performance. This is the gap that we have attempted to fill with this book. While algorithm development has exploded in the past 5 to 10 years, the system performance aspects are relatively new and not quite fully understood. While the focus of this book is to help the scientist and engineer begin to understand that these algorithms are really an imaging system component and to help in the system performance prediction of imaging systems with these algorithms, the performance material is new and will undergo dramatic improvements in the next 5 years.

We have chosen to address signal and image processing techniques that are not new, but the real-time implementation in military and medical systems is relatively new, and the performance prediction of systems with these algorithms is definitely new. There are some algorithms that are not addressed, such as electronic stabilization and turbulence correction. There are current programs in algorithm development that will provide great advances in algorithm performance in the next few years, so we decided not to spend time on these particular areas.
It is worth mentioning that there is a community called "computational imaging" where, instead of using signal/image processing to improve the performance of an existing imaging system approach, signal processing is an inherent part of the electro-optical design process for image formation. The field includes unconventional imaging systems and unconventional processing, where the performance of the collective system design is beyond any conventional system approach. In many cases, the resulting image is not important. The goal of the field is to maximize system task performance for a given electro-optical application using nonconventional design rules (with signal processing and electro-optical components) through the exploitation of various degrees of freedom (space, time, spectrum, polarization, dynamic range, and so forth). Leaders in this field include Dennis Healey at DARPA, Ravi Athale at MITRE, Joe Mait at the Army Research Laboratory, Mark Mirotznick at Catholic University, and Dave Brady at Duke University. These researchers and others are forging a new path for the rest of us and have provided some very stimulating experiments and demonstrations in the past 2 or 3 years. We do not address computational imaging in this book, as the design and approach methods are still a matter of research and, as always, it will be some time before system performance is addressed in a quantitative manner.
We would like to thank a number of people for their thoughtful assistance in this work. Dr. Patti Gillespie at the Army Research Laboratory provided inspiration and encouragement for the project. Rich Vollmerhausen has contributed more to military imaging system performance modeling over the past 10 years than any other researcher, and his help was critical to the success of the project. Keith Krapels and Jonathan Fanning both assisted with the super-resolution work. Khoa Dang, Mike Prarie, Richard Moore, Chris Howell, Stephen Burks, and Carl Halford contributed material for the fusion chapter. There are many others who worked signal processing issues and with whom we collaborated through research papers, including: Nicole Devitt, Tana Maurer, Richard Espinola, Patrick O'Shea, Brian Teaney, Louis Larsen, Jim Waterman, Leslie Smith, Jerry Holst, Gene Tener, Jennifer Parks, Dean Scribner, Jonathan Schuler, Penny Warren, Alan Silver, Jim Howe, Jim Hilger, and Phil Perconti. We are grateful for the contributions that all of these people have provided over the years.
We (S. Susan Young and Eddie Jacobs) would like to thank our coauthor, Dr. Ronald G. Driggers, for his suggestion of writing this book and his encouragement in this venture. Our understanding and appreciation of the significance of system performance started from collaborating with him. S. Susan Young would like to thank Dr. Hsien-Che Lee for his guidance and help early in her career in signal and image processing. On a personal side, we authors are very thankful to our families for their support and understanding.
PART I
Basic Principles of Imaging Systems and Performance
CHAPTER 1
Introduction
The "combined" imaging system performance of both hardware (sensor) and software (signal processing) is extremely important. Imaging system hardware is designed primarily to form a high-quality image from source emissions under a large variety of environmental conditions. Signal processing is used to help highlight or extract information from the images that are generated by an imaging system. This processing can be automated for decision-making purposes, or it can be utilized to enhance the visual acuity of a human looking through the imaging system.

Performance measures of an imaging system have been excellent methods for better design and understanding of the imaging system. However, the imaging performance of an imaging system with the aid of signal processing has not been widely considered in the light of improving image quality from imaging systems and signal processing algorithms. Imaging systems can generate images with low contrast, high noise, blurring, or corrupted/lost high-frequency details, among other degradations. How does the image performance of a low-cost imaging system with the aid of signal processing compare with that of an expensive imaging system? Is it worth investing in higher image quality by improving the imaging system hardware or by improving the signal processing software? The topic of this book is to relate the ability to extract information from an imaging system, with the aid of signal processing, to the evaluation of the overall performance of imaging systems.
Understanding the image formation and recording process helps in understanding the factors that affect image performance and therefore helps the design of imaging systems and signal processing algorithms. The image formation process and the sources of image degradation, such as loss of useful high-frequency details, noise, or a low-contrast target environment, are discussed in Chapter 2.
Methods of determining image performance are important tools in determining the merits of imaging systems and signal processing algorithms. Image performance determination can be performed via subjective human perception studies or image performance modeling. Image performance prediction and the role of image performance modeling are also discussed in Chapter 3.
1.3 Signal Processing: Basic Principles and Advanced Applications
The basic signal processing principles, including the Fourier transform, wavelet transform, finite impulse response (FIR) filters, and Fourier-based filters, are discussed in Chapter 4.

In an image formation and recording process, many factors affect sensor performance and image quality, and these can result in loss of high-frequency information or low contrast in an image. Several common causes of low image quality are the following:

• Many low-cost visible and thermal sensors spatially or electronically undersample an image. Undersampling results in aliased imagery in which subtle/detailed information (high-frequency components) is lost.
• An imaging system's blurring function (sometimes called the point spread function, or PSF) is another common factor in the reduction of high-frequency components in the acquired imagery and results in blurred images.
• Low-cost sensors and environmental factors, such as lighting sources or background complexities, result in low-contrast images.
• Focal plane array (FPA) sensors have detector-to-detector variability from the FPA fabrication process, which causes fixed-pattern noise in the acquired imagery.

There are many signal processing applications for the enhancement of imaging system performance. Most of them attempt to enhance the image quality or remove the degradation phenomena. Specifically, these applications try to recover the useful high-frequency components that are lost or corrupted in the image and attempt to suppress the undesired high-frequency components, which are noise. In Chapters 5 to 11, the following classes of signal processing applications are considered:
1. Image resampling;
2. Super-resolution image reconstruction;
3. Image restoration—deblurring;
4. Image contrast enhancement;
5. Nonuniformity correction (NUC);
6. Tone scale;
7. Image fusion.

Image resampling is also called image decimation, or image interpolation, according to whether the goal
is to reduce or enlarge the size (or resolution) of a captured image. It can provide image values that are not recorded by the imaging system but are calculated from the neighboring pixels. Image resampling does not increase the inherent information content in the image, but a poor image display reconstruction function can reduce the overall imaging system performance.

The image resampling algorithms include spatial domain and spatial-frequency domain, or Fourier-based windowing, methods. The important considerations in image resampling include the image resampling model, the image rescale implementation, and the resampling filters, especially the antialias image resampling filter. These algorithms, examples, and image resampling performance measurements are discussed in Chapter 5.
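As a minimal illustration of the Fourier-based resampling idea, the following sketch (Python with NumPy, which this book does not itself use) enlarges an image by zero-padding its centered spectrum; the upsampling factor and the plain zero-padding, rather than the windowed antialias filters of Chapter 5, are illustrative assumptions.

import numpy as np

def fourier_upsample(img, factor):
    """Enlarge an image by zero-padding its centered 2-D spectrum.

    The inherent information content is unchanged; new pixels are interpolated.
    """
    rows, cols = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))

    out_rows, out_cols = factor * rows, factor * cols
    padded = np.zeros((out_rows, out_cols), dtype=complex)
    r0 = (out_rows - rows) // 2
    c0 = (out_cols - cols) // 2
    padded[r0:r0 + rows, c0:c0 + cols] = spectrum

    # Rescale so the mean intensity is preserved after the inverse transform.
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded)) * factor**2)

# Example: enlarge a small synthetic image by a factor of 2.
small = np.random.rand(64, 64)
large = fourier_upsample(small, 2)   # shape (128, 128)

Downsampling would instead crop the spectrum, and a practical implementation would apply a window to the retained spectrum to limit ringing.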
The loss of high-frequency information in an image can be due to many factors. Many low-cost visible and thermal sensors spatially or electronically undersample an image. Undersampling results in aliased imagery in which the high-frequency components are folded into the low-frequency components in the image. Consequently, subtle/detailed information (high-frequency components) is lost in these images. Super-resolution image reconstruction can produce high-resolution images using existing low-cost imaging devices from a sequence, or a few snapshots, of low-resolution images.

Since undersampled images have subpixel shifts between successive frames, they represent different information from the same scene. Therefore, the information that is contained in an undersampled image sequence can be combined to obtain an alias-free (high-resolution) image. Super-resolution image reconstruction from multiple snapshots provides far more detailed information than any interpolated image from a single snapshot.
Figure 1.1 shows an example of a high-resolution (alias-free) infrared image that is obtained from a sequence of low-resolution (aliased) input images having subpixel shifts among them.

Figure 1.1 (a) Input low-resolution (aliased) infrared images having subpixel shifts among them; and (b) output alias-free (high-resolution) image in which the details of tree branches are revealed.
Trang 23The first step in a super-resolution image reconstruction algorithm is to estimatethe supixel shifts of each frame with respect to a reference frame The second step is
to increase the effective spatial sampling by operating on a sequence of tion subpixel-shifted images There are also spatial and spatial frequency domainmethods for the subpixel shift estimation and the generation of the high-resolutionoutput images These algorithms, examples, and the image performance arediscussed in Chapter 6
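A minimal sketch of these two steps is given below, assuming Python with NumPy, whole-pixel phase correlation for the shift estimate, and simple shift-and-add placement onto a finer grid; it illustrates the general idea rather than the specific reconstruction algorithms of Chapter 6.

import numpy as np

def estimate_shift(ref, frame):
    """Whole-pixel shift of `frame` relative to `ref`, estimated by phase correlation."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint back to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def shift_and_add(frames, upsample=2):
    """Register each low-resolution frame to the first one and accumulate on a finer grid."""
    ref = frames[0]
    hi_shape = (ref.shape[0] * upsample, ref.shape[1] * upsample)
    accum = np.zeros(hi_shape)
    count = np.zeros(hi_shape)
    rows, cols = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]), indexing="ij")
    for frame in frames:
        dr, dc = estimate_shift(ref, frame)
        # Map each frame sample to its reference-aligned position on the fine grid.
        r_hi = np.clip(np.round((rows + dr) * upsample).astype(int), 0, hi_shape[0] - 1)
        c_hi = np.clip(np.round((cols + dc) * upsample).astype(int), 0, hi_shape[1] - 1)
        np.add.at(accum, (r_hi, c_hi), frame)
        np.add.at(count, (r_hi, c_hi), 1)
    return accum / np.maximum(count, 1)

A practical algorithm refines the correlation peak to subpixel accuracy and interpolates or deconvolves the accumulated grid rather than simply averaging it.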
An imaging system's blurring function, also called the point spread function (PSF), is another common factor in the reduction of high-frequency components in the image. Image restoration tries to invert this blurring degradation, but within the bandlimit of the imager (i.e., it enhances the spatial frequencies within the imager band). This includes deblurring images that are degraded by the limitations of a sensor or the environment. An estimate or knowledge of the blurring function is essential to the application of these algorithms. One of the most important considerations in designing a deblurring filter is to control noise, since noise is likely to be amplified at high frequencies. The amplification of noise results in undesired artifacts in the output image. Figure 1.2 shows examples of image deblurring. One input image [Figure 1.2(a)] contains blur, and the deblurred version of it [Figure 1.2(b)] removes most of the blur. Another input image [Figure 1.2(c)] contains blur and noise; the noise effect shows up in the deblurred version of it [Figure 1.2(d)]. Image restoration tries to recover the high-frequency information below the diffraction limit while limiting the noise artifacts. The designs of deblurring filters, the noise control mechanisms, examples, and image performance are discussed in Chapter 7.

Figure 1.2 (a) Blurred bar image; (b) deblurred version of (a); (c) blurred bar image with noise added; and (d) deblurred version of (c).
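One common way to invert the blur while keeping noise amplification in check is a Wiener-type filter; the sketch below (Python/NumPy, with an assumed Gaussian PSF and an assumed noise-to-signal constant k) is an illustration of that idea, not the particular filter designs of Chapter 7.

import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Normalized Gaussian blur kernel, centered, with the same shape as the image."""
    r = np.arange(shape[0]) - shape[0] // 2
    c = np.arange(shape[1]) - shape[1] // 2
    rr, cc = np.meshgrid(r, c, indexing="ij")
    psf = np.exp(-(rr**2 + cc**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=0.01):
    """Deblur with a Wiener-type filter W = H* / (|H|^2 + k).

    The constant k (an assumed noise-to-signal level) keeps the filter from
    amplifying noise at frequencies where the blur transfer function H is small.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Usage on a blurred frame g: restored = wiener_deblur(g, gaussian_psf(g.shape), k=0.01)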
Image details can also be enhanced by image contrast enhancement techniques in which certain image edges are emphasized as desired. For an example of a medical application, in diagnosing breast cancer from mammograms, radiologists follow the ductal networks to look for abnormalities. However, the number of ducts and the shape of ductal branches vary with individuals, which makes the visual process of locating the ducts difficult. Image contrast enhancement provides the ability to enhance the appearance of the ductal elements relative to the fatty-tissue surroundings, which helps radiologists visualize abnormalities in mammograms.

Image contrast enhancement methods can be divided into single-scale and multiscale approaches. In the single-scale approach, the image is processed in the original image domain, for example with a simple look-up table. In the multiscale approach, the image is decomposed into multiple resolution scales, and processing is performed in the multiscale domain. Because the information at each scale is adjusted before the image is reconstructed back to the original image intensity domain, the output image contains the desired detail information. The multiscale approach can also be coupled with dynamic range reduction, so that the detail information in different scales can be displayed in one output image. Localized contrast enhancement (LCE) is the process in which these techniques are applied on a local scale for the management of dynamic range in the image. For example, the sky-to-ground interface in infrared imaging can include a huge apparent temperature difference that occupies most of the image dynamic range. Small targets with smaller signals can be lost, while LCE can reduce the large sky-to-ground interface signal and enhance small target signals (see Figure 8.10 later in this book). Details of the algorithms, examples, and image performance are discussed in Chapter 8.
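A small single-scale example is unsharp masking, which boosts detail by adding back a scaled high-frequency residual; the sketch below assumes Python with NumPy and SciPy, and the Gaussian smoothing width and gain are illustrative choices rather than values from Chapter 8.

import numpy as np
from scipy import ndimage

def unsharp_mask(img, gain=1.5, sigma=2.0):
    """Single-scale contrast enhancement: output = input + gain * (input - lowpass(input))."""
    lowpass = ndimage.gaussian_filter(img, sigma)
    return img + gain * (img - lowpass)

# Example: enhance a frame and clip back to its original display range.
# enhanced = np.clip(unsharp_mask(frame), frame.min(), frame.max())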
Focal plane array (FPA) sensors have been used in many commercial and military applications, including both visible and infrared imaging systems, since they have wide spectral responses, compact structures, and cost-effective production. However, each individual photodetector in the FPA has a different photoresponse, due to detector-to-detector variability in the FPA fabrication process [1]. Images that are acquired by an FPA sensor therefore suffer from a common problem known as fixed-pattern noise, or spatial nonuniformity. The technique to compensate for this distortion is called nonuniformity correction (NUC). Figure 1.3 shows an example of a nonuniformity corrected image from an original input image with fixed-pattern noise.

There are two main categories of NUC algorithms, namely, calibration-based and scene-adaptive algorithms. A conventional, calibration-based NUC is the standard two-point calibration, which is also called linear NUC. This algorithm estimates the gain and offset parameters by exposing the FPA to two distinct and uniform irradiance levels. The scene-adaptive NUC uses the data acquired in the video sequence and a motion estimation algorithm to register each point in the scene across all of the image frames. This way, continuous compensation can be applied adaptively for individual detector responses and background changes. These algorithms, examples, and imaging system performance are discussed in Chapter 9.
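The two-point (linear) calibration described above can be sketched as follows (Python/NumPy); the calibration frames, irradiance levels, and variable names are hypothetical.

import numpy as np

def two_point_nuc(cold_frames, hot_frames, level_cold, level_hot):
    """Estimate per-pixel gain and offset from two uniform calibration levels.

    cold_frames and hot_frames are stacks of frames (N, rows, cols) viewing
    uniform sources at irradiance levels level_cold and level_hot.
    """
    mean_cold = cold_frames.mean(axis=0)
    mean_hot = hot_frames.mean(axis=0)
    gain = (level_hot - level_cold) / (mean_hot - mean_cold)
    offset = level_cold - gain * mean_cold
    return gain, offset

def apply_nuc(frame, gain, offset):
    """Linear NUC: corrected = gain * raw + offset."""
    return gain * frame + offset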
Tone scale is a technique that improves the image presentation on an output display medium (softcopy display or hardcopy print). Tone scale is also a mathematical mapping of the image pixel values from the sensor to a region of interest on an output medium. Note that tone scale transforms improve only the appearance of the image, not the image quality itself; the gray value resolution is still the same. However, a proper tone scale allows the characteristic curve of a display system to match the sensitivity of the human eye to enhance image interpretation task performance. There are various tone scale techniques, including piecewise linear tone scale, nonlinear tone scale, and perceptual linearization tone scale. These techniques and a tone scale performance example are discussed in Chapter 10.
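A minimal sketch of a piecewise linear tone scale is given below (Python/NumPy); the breakpoints and the 8-bit display range are illustrative assumptions.

import numpy as np

def piecewise_linear_tone_scale(img,
                                breakpoints_in=(0.0, 0.2, 0.8, 1.0),
                                breakpoints_out=(0.0, 0.05, 0.95, 1.0)):
    """Map normalized pixel values through a piecewise linear curve.

    These example breakpoints compress the extreme shadows and highlights and
    stretch the midtones, where most of the scene information usually lies.
    """
    return np.interp(img, breakpoints_in, breakpoints_out)

# Example: normalize a raw sensor frame, apply the tone scale, and quantize
# for an 8-bit display.
raw = np.random.rand(480, 640)
normalized = (raw - raw.min()) / (raw.max() - raw.min())
display = np.round(piecewise_linear_tone_scale(normalized) * 255).astype(np.uint8)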
Because researchers realize that different sensors provide different signature cues of the scene, image fusion has been receiving additional attention in signal processing. Many applications have been shown to benefit from fusing the images of multiple sensors.

Figure 1.3 (a) Original input image with the fixed-pattern noise shown in the image; and (b) nonuniformity corrected image in which the helicopter in the center is clearly illustrated.

Imaging sensor characteristics are determined by the wavebands that they respond to in the electromagnetic spectrum. Figure 1.4 is a diagram of the electromagnetic spectrum with wavelength indicated in metric length units [2]. The most familiar classifications of wavebands are the radiowave, microwave, infrared, visible, ultraviolet, X-ray, and gamma-ray wavebands. Figure 1.5 shows further subdivided wavebands for broadband sensors [3]. For example, the infrared waveband is divided into near infrared (NIR), shortwave infrared (SWIR), midwave infrared (MWIR), longwave infrared (LWIR), and far infrared. The sensor types are driven by the type of image information that can be exploited within these bands. X-ray sensors can view human bones for disease diagnosis. Microwave and radiowave sensors have good weather penetration in military applications. Infrared sensors detect both temperature and emissivity and are beneficial for night-vision applications. Different subwaveband sensors in the infrared wavebands can provide different information. For example, MWIR sensors respond better to hotter-than-terrestrial objects. LWIR sensors have a better response to overall terrestrial object temperatures, which are around 300 Kelvins (K). Solar clutter is high in the MWIR in the daytime and is negligible in the LWIR. Figure 1.6 shows an example of fusing MWIR and LWIR images. The road cracks are visible in LWIR, but not in MWIR. Similarly, the Sun glint is visible in the MWIR image, but not in LWIR. The fused image shows both the Sun glint and the road cracks.
[Figures 1.4 and 1.5: the electromagnetic spectrum (radiowave, microwave and subbands, infrared, visible) with wavelength λ in micrometers (µm), and the subdivided wavebands for broadband sensors (near- and shortwave infrared, midwave infrared, longwave infrared).]
Trang 27Many questions of image fusion remain unanswered and open to new researchopportunities Some of the questions involve how to select different sensors to pro-vide better image information from the scene; whether different imaging informa-tion can be effectively combined to provide a better cue in the scene; and how to bestcombine the information These issues are presented and examples and imaging sys-tem performance are provided in Chapter 11.
References

[1] Milton, A. F., F. B. Barone, and M. R. Kruer, "Influence of Nonuniformity on Infrared Focal Plane Array Performance," Optical Engineering, Vol. 24, No. 5, 1985, pp. 855–862.
[2] Richards, A., Alien Vision—Exploring the Electromagnetic Spectrum with Imaging Technology, Bellingham, WA: SPIE Press, 2001.
[3] Driggers, R. G., P. Cox, and T. Edwards, Introduction to Infrared and Electro-Optical Systems, Norwood, MA: Artech House, 1999.
Figure 1.6 Fusion of MWIR and LWIR images: the road cracks are clearly visible in LWIR but not in MWIR; the Sun glint is visible in MWIR but not in LWIR; the fused image shows both the Sun glint and the road cracks.
CHAPTER 2
Imaging Systems
In this chapter, basic imaging systems are introduced and the concepts of resolution and sensitivity are explored. This introduction presents helpful background information that is necessary to understand imaging system performance, which is presented in Chapter 3. It also provides a basis for later discussions on the implementation of advanced signal and image processing techniques.
A basic imaging system can be depicted as a cascaded system where the input signal is optical flux from a target and background and the output is an image presented for human consumption. A basic imaging system is shown in Figure 2.1.

The system can begin with the flux leaving the target and the background. For electro-optical systems and more sophisticated treatments of infrared systems, the system can even begin with the illumination of the target with external sources. Regardless, the flux leaving the source traverses the atmosphere as shown. This path includes blur from turbulence and scattering and a reduction in the flux due to atmospheric extinction, such as scattering and absorption, among others. The flux that makes it to the entrance of the optics is then blurred by optical diffraction and aberrations. The flux is also reduced by the optical transmission. The flux is imaged onto a detector array, either scanning or staring. Here, the flux is converted from photons to electrons. There is a quantum efficiency that reduces the signal, and the finite size of the detector imposes a blur on the image. The electronics further reduce, or in some cases enhance, the signal. The display also provides a signal reduction and a blur, due to the finite size of the display element. Finally, the eye consumes the image. The eye has its own inherent blur and noise, which are considered in overall system performance. In some cases, the output of the electronics is processed by an automatic target recognizer (ATR), which is an automated process of detecting and recognizing targets. An even more common process is an aided target recognizer (AiTR), which is more of a cueing process for a human to view the resultant cued image "chips" (a small area containing an object).
tar-All source and background objects above 0K emit electromagnetic radiationassociated with the thermal activity on the surface of the object For terrestrial tem-peratures (around 300K), objects emit a good portion of the electromagnetic flux inthe infrared part of the electromagnetic spectrum This emission of flux is some-times called blackbody thermal emission The human eye views energy only in thevisible portion of the electromagnetic spectrum, where the visible band spans wave-lengths from 0.4 to 0.7 micrometer (µm) Infrared imaging devices convert energy in
11
Trang 29the infrared portion of the electromagnetic spectrum into displayable images in thevisible band for human use.
The infrared spectrum begins at the red end of the visible spectrum where the eyecan no longer sense energy It spans from 0.7 to 100µm The infrared spectrum is,
by common convention, broken into five different bands (this may vary according tothe application/community) The bands are typically defined in the following way:near-infrared (NIR) from 0.7 to 1.0µm, shortwave infrared (SWIR) from 1.0 to 3.0
µm, midwave infrared (MWIR) from 3.0 to 5.0 µm, longwave infrared (LWIR) from8.0 to 14.0 µm, and far infrared (FIR) from 14.0 to 100 µm These bands aredepicted graphically in Figure 2.2 Figure 2.2 shows the atmospheric transmissionfor a 1-kilometer horizontal ground path for a “standard” day in the United States.These types of transmission graphs can be tailored for any condition using sophisti-cated atmospheric models, such as MODTRAN (from http://www.ontar.com).Note that there are many atmospheric “windows” so that an imager designed withsuch a band selection can see through the atmosphere
[Figure 2.1: basic imaging system block diagram, from target and background flux through the atmosphere, optics, detectors, electronics, and display (or an ATR) to human vision.]

[Figure 2.2: waveband definitions (ultraviolet, visible, near- and shortwave infrared, midwave infrared, longwave infrared, far infrared) with the associated atmospheric transmission.]
The primary difference between a visible spectrum camera and an infrared imager is the physical phenomenology of the radiation from the scene being imaged. The energy used by a visible camera is predominantly reflected solar or some other illuminating energy in the visible spectrum. The energy imaged by infrared imagers, commonly known as forward-looking infrareds (FLIRs) in the MWIR and LWIR bands, is primarily self-emitted radiation. From Figure 2.2, the MWIR band has an atmospheric window in the 3- to 5-µm region, and the LWIR band has an atmospheric window in the 8- to 12-µm region. The atmosphere is opaque in the 5- to 8-µm region, so it would be pointless to construct a camera that responds to this waveband.

Figure 2.3 provides images to show the difference in the source of the radiation sensed by the two types of cameras. The visible image on the left side is all light that was provided by the Sun, propagated through Earth's atmosphere, reflected off the objects in the scene, traversed through a second atmospheric path to the sensor, and then imaged with a lens and a visible band detector. A key here is that the objects in the scene are represented by their reflectivity characteristics. The image characteristics can also change with any change in the atmospheric path or in the source characteristics. The atmospheric path characteristics from the Sun to the objects change frequently because the Sun's angle changes throughout the day, plus the weather and cloud conditions change. The visible imager characterization model is a multipath problem that is extremely difficult.

The LWIR image given on the right side of Figure 2.3 is obtained primarily by the emission of radiation by objects in the scene. The amount of electromagnetic flux depends on the temperature and emissivity of the objects. A higher temperature and a higher emissivity correspond to a higher flux. The image shown is white hot; a whiter point in the image corresponds to a higher flux leaving the object. It is interesting to note that trees have a natural self-cooling process, since a high temperature can damage foliage. Objects that have absorbed a large amount of solar energy are hot and are emitting large amounts of infrared radiation. This is sometimes called solar loading.

Figure 2.3 Visible image (reflected flux) and LWIR image (emitted flux) of the same scene. (Images courtesy of NRL Optical Sciences Division.)
The characteristics of the infrared radiation emitted by an object are described
by Planck's blackbody law in terms of spectral radiant emittance:

M(λ, T) = ε(λ) c1 / {λ^5 [exp(c2 / λT) − 1]}   W/(cm^2·µm)

where c1 and c2 are constants of 3.7418 × 10^4 W·µm^4/cm^2 and 1.4388 × 10^4 µm·K. The wavelength, λ, is provided in micrometers, and ε(λ) is the emissivity of the surface. A blackbody source is defined as an object with an emissivity of 1.0 and is considered a perfect emitter. Source emissions of blackbodies at typical terrestrial temperatures are shown in Figure 2.4. Often, in modeling and system performance assessment, the terrestrial background temperature is assumed to be 300K. The source emittance curves are shown for other temperatures for comparison. One curve corresponds to an object colder than the background, and two curves correspond to temperatures hotter than the background.
Planck's equation describes the spectral shape of the source as a function of wavelength. It is readily apparent that the peak shifts to the left (shorter wavelengths) as the body temperature increases. If the temperature of a blackbody were increased to that of the Sun (5,900K), the peak of the spectral shape would decrease to 0.55 µm, or green light (note that this is in the visible band). This peak wavelength is described by Wien's displacement law.
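A brief numerical check of these relationships is sketched below (Python/NumPy); it evaluates the spectral radiant emittance expression above and uses the standard Wien displacement constant of roughly 2,898 µm·K, stated here as an assumption rather than quoted from this chapter.

import numpy as np

C1 = 3.7418e4           # W·µm^4/cm^2
C2 = 1.4388e4           # µm·K
WIEN_CONSTANT = 2898.0  # µm·K, standard value (assumption, not from the text)

def spectral_radiant_emittance(wavelength_um, temp_k, emissivity=1.0):
    """Planck emittance in W/(cm^2·µm) for wavelength in µm and temperature in K."""
    return emissivity * C1 / (wavelength_um**5 * (np.exp(C2 / (wavelength_um * temp_k)) - 1.0))

def peak_wavelength_um(temp_k):
    """Wien's displacement law: wavelength of peak emittance."""
    return WIEN_CONSTANT / temp_k

wavelengths = np.linspace(2.0, 20.0, 1000)
background = spectral_radiant_emittance(wavelengths, 300.0)
target = spectral_radiant_emittance(wavelengths, 302.0)
print(peak_wavelength_um(300.0))        # about 9.7 µm
print(np.max(target - background))      # the small in-band "signal"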
For a terrestrial temperature of 300K, the peak wavelength is around 10 µm. It is important to note that the difference between the blackbody curves is the "signal" in the infrared bands. For an infrared sensor, if the background is at 300K and the target is at 302K, the signal is the difference in flux between the blackbody curves. Signals in the infrared sensor are small, riding on very large amounts of background flux. In the visible band, this is not the case. For example, consider the case of a white target on a black background. The black background is generating no signal, while the white target is generating a maximum signal, given that the sensor gain has been adjusted. Dynamic range may be fully utilized in a visible sensor. For the case of an infrared sensor, a portion of the dynamic range is used by the large background flux radiated by everything in the scene. This flux is never a small value; hence, sensitivity and dynamic range requirements are much more difficult to satisfy in infrared sensors than in visible sensors.
There are three general categories of infrared sensor performance characterizations. The first is sensitivity and the second is resolution. When end-to-end, or human-in-the-loop (HITL), performance is required, the third type of performance characterization describes the visual acuity of an observer through a sensor, which will be discussed in Chapter 3. The former two are both related to the hardware and software that comprise the system, while the latter includes both the sensor and the observer. The first type of measure, sensitivity, is determined through radiometric analysis of the scene/environment and the quantum electronic properties of the detectors. Resolution is determined by analysis of the physical optical properties, the detector array geometry, and other degrading components of the system in much the same manner as complex electronic circuit/signals analysis.

Sensitivity describes how the sensor performs with respect to input signal level. It relates noise characteristics, responsivity of the detector, light gathering of the optics, and the dynamic range/quantization of the sensor. Radiometry describes how much light leaves the object and background and is collected by the detector. Optical design and detector characteristics are of considerable importance in sensor sensitivity analysis. In infrared systems, noise equivalent temperature difference (NETD) is often a first-order description of the system sensitivity. The three-dimensional (3-D) noise model [1] describes more detailed representations of sensitivity parameters. In visible systems, the noise equivalent irradiance (NEI) is a similar term that is used to determine the sensitivity of the system.

The second type of measure is resolution. Resolution is the ability of the sensor to image small targets and to resolve fine detail in large targets. Modulation transfer function (MTF) is the most widely used resolution descriptor in infrared systems. Alternatively, resolution may be specified by a number of descriptive metrics, such as the optical Rayleigh criterion or the instantaneous field-of-view (IFOV) of the detector. While these metrics are component-level descriptions, the system MTF is an all-encompassing function that describes the system resolution.

Sensitivity and resolution can be competing system characteristics, and they are the most important issues in initial studies for a design. For example, given a fixed sensor aperture diameter, an increase in focal length can provide an increase in resolution, but it may decrease sensitivity [1]. Typically, visible band systems have plenty of sensitivity and are resolution-limited, while infrared imagers have been more sensitivity-limited. With staring infrared sensors, the sensitivity has seen significant improvements.

Quite often metrics such as NETD and MTF are considered to be separable. However, in an actual sensor, sensitivity and resolution performance are not independent. As a result, the minimum resolvable temperature difference (MRT or MRTD) or the sensor contrast threshold function (CTF) has become the primary performance metric for infrared systems. MRT and MRC (minimum resolvable contrast) are quantitative performance measures in terms of both sensitivity and resolution. A simple MRT curve is shown in Figure 2.5. The performance is bounded by the sensor's limits and the observer's limits. The temperature difference, or thermal contrast, required to image details in a scene increases as the detail size decreases. The inclusion of observer performance yields a single sensor performance characterization: it describes sensitivity as a function of resolution and includes the human visual system.
2.3 Linear Shift-Invariant (LSI) Imaging Systems

A linear imaging system requires two properties [1, 2]: superposition and scaling. Consider an input scene, i(x, y), and an output image, o(x, y). Given that a linear system is described by the operator L{}, then

o(x, y) = L{a i1(x, y) + b i2(x, y)} = a L{i1(x, y)} + b L{i2(x, y)}

where i1(x, y) and i2(x, y) are input scenes and a and b are constants. Superposition, simply described, means that the image of two scenes, such as a target scene and a background scene, is the sum of the individual scenes imaged separately. The simplest example here is that of a point source, as shown in Figure 2.6. The left side of the figure shows the case where a single point source is imaged, then a second point source is imaged, and the two results are summed to give an image of the two point sources.
The superposition principle states that this sum of point source images would be identical to the resultant image if both point sources were included in the input scene.

The second property, scaling, simply states that an increase in input scene brightness increases the image brightness. Doubling a point source brightness would double the image brightness.

The linear systems approach is extremely important with imaging systems, since any scene can be represented as a collection of weighted point sources. The output image is the collection of the imaging system responses to the point sources.
In continuous (nonsampled) imaging systems, another property is typically assumed: shift-invariance. Sometimes a shift-invariant system is called isoplanatic. Mathematically stated, the response of a shift-invariant system to a shifted input, such as a point source, is a shifted output; that is,

L{i(x − xo, y − yo)} = o(x − xo, y − yo)

where xo and yo are the coordinates of the point source. It does not matter where the point source is located in the scene; the image of the point source will appear to be the same, only shifted in space. The image of the point source does not change with position. If this property is satisfied, the shifting property of the point source, or delta function, can be used,

i(xo, yo) = ∫∫ i(x, y) δ(x − xo, y − yo) dx dy   (2.6)
Trang 35where x1≤ x o ≤ x2and y1≤ y o ≤ y2 The delta function,δ(x−x o , y−y o), is nonzero only
at x o , y oand has an area of unity The delta function is used frequently to describeinfinitesimal sources of light Equation (2.6) states that the value of the input scene
at x o , y o can be written in terms of a weighted delta function We can substitute i(x, y) in (2.6)
where ** denotes the two-dimensional (2-D) convolution The impulse response of
the system, h(x, y), is commonly called the point spread function (PSF) of the
imag-ing system The significance of (2.11) is that the system impulse response is a spatial
filter that is convolved with the input scene to obtain an output image. The simplified LSI imaging system model is shown in Figure 2.7.

The system described here is valid for LSI systems only. This analysis technique is a reasonable description for continuous and well-sampled imaging systems. It is not a good description for an undersampled or a well-designed sampled imaging system. These sampled imaging systems do satisfy the requirements of a linear system, but they do not follow the shift invariance property. The sampling nature of these systems is described later in this chapter. The representation of sampled imaging systems is a modification to this approach.

For completeness, we take the spatial domain linear systems model and convert
it to the spatial frequency domain. Spatial filtering can be accomplished in both domains. Given that x and y are spatial coordinates in units of milliradians, the spatial frequency domain has independent variables of fx and fy, in cycles per milliradian. A spatial input or output function is related to its spectrum by the Fourier transform,

I(fx, fy) = F[i(x, y)]  and  O(fx, fy) = F[o(x, y)]

where capital letters denote spectra in order to simplify analysis descriptions. One of the very important properties of the Fourier transform is that the Fourier transform of a convolution results in a product. Therefore, the spatial convolution described in (2.11) results in a spectrum of

O(fx, fy) = I(fx, fy) H(fx, fy)   (2.15)

Here, the output spectrum is related to the input spectrum by the product of the Fourier transform of the system impulse response. Therefore, the Fourier transform
of the system impulse response is called the transfer function of the system. Multiplication of the input scene spectrum by the transfer function of an imaging system provides the same filtering action as the convolution of the input scene with the imaging system PSF. In imaging systems, the magnitude of the Fourier transform of the system PSF is the modulation transfer function (MTF).

2.4 Imaging System Point Spread Function and Modulation Transfer Function

The system impulse response, or point spread function (PSF), of an imaging system is comprised of component impulse responses, as shown in Figure 2.8. Each of the components in the system contributes to the blurring of the scene. In fact, each of the components has an impulse response that can be applied in the same manner as the system impulse response. The blur attributed to a component may be comprised of a few different physical effects. For example, the optical blur is a combination of the diffraction and aberration effects of the optical system. The detector blur is a combination of the detector shape and the finite time of detector integration as it traverses the scene. It can be shown that the PSF of the system is a combination of the individual impulse responses,

h_system(x, y) = h_atm(x, y) ** h_optics(x, y) ** h_det(x, y) ** h_elec(x, y) ** h_disp(x, y)   (2.16)

so that the total blur, or system PSF, is a combination of the component impulse responses.
The Fourier transform of the system impulse response is called the transfer function of the system. In fact, each of the component impulse responses given in (2.16) has a component transfer function that, when cascaded (multiplied), gives the overall system transfer function; that is,

H_system(fx, fy) = H_atm(fx, fy) H_optics(fx, fy) H_det(fx, fy) H_elec(fx, fy) H_disp(fx, fy)   (2.17)

The remainder of this section describes these component transfer functions, beginning with the optical effects. Also, the transfer function of a system, as given in (2.17), is frequently described without the eye transfer function.
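To make the cascade concrete, the sketch below (Python/NumPy, one dimension only, with made-up Gaussian component MTFs) multiplies component transfer functions into a system transfer function and confirms that applying the product in the frequency domain matches convolving with the component impulse responses one after another, as (2.15) to (2.17) imply.

import numpy as np

N = 256                       # samples across the field
dx = 0.05                     # sample spacing in milliradians
fx = np.fft.fftfreq(N, d=dx)  # spatial frequencies in cycles per milliradian

def gaussian_mtf(freq, f_scale):
    """Illustrative component MTF: a Gaussian roll-off with width f_scale."""
    return np.exp(-(freq / f_scale) ** 2)

# Hypothetical component MTFs (optics, detector, display) cascaded by multiplication.
H_optics = gaussian_mtf(fx, 6.0)
H_det = gaussian_mtf(fx, 4.0)
H_disp = gaussian_mtf(fx, 8.0)
H_system = H_optics * H_det * H_disp          # one-dimensional analog of (2.17)

# Filtering a 1-D scene with the system MTF in the frequency domain ...
scene = np.random.rand(N)
out_freq = np.real(np.fft.ifft(np.fft.fft(scene) * H_system))

# ... equals convolving sequentially with each component impulse response.
def circ_conv(a, b):
    """Circular convolution, so the comparison with the FFT product is exact."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

h_optics = np.real(np.fft.ifft(H_optics))
h_det = np.real(np.fft.ifft(H_det))
h_disp = np.real(np.fft.ifft(H_disp))
out_spatial = circ_conv(circ_conv(circ_conv(scene, h_optics), h_det), h_disp)
print(np.max(np.abs(out_freq - out_spatial)))   # numerically zero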
2.4.1 Optical Filtering
Two filters account for the optical effects in an imaging system: diffraction and aberrations. The diffraction filter accounts for the spreading of the light as it passes an obstruction or an aperture. The diffraction impulse response for an incoherent imaging system with a circular aperture of diameter D is built from the somb function, where λ is the average band wavelength and r = √(x² + y²). The somb (for sombrero) function is given by Gaskill [3] to be

somb(ρ) = 2 J1(πρ) / (πρ)

where J1 is the first-order Bessel function of the first kind. The aberration, or geometric, blur is modeled with a Gaus (Gaussian) function. Note that the scaling values in front of the somb and the Gaus functions are intended to provide a functional area (under the curve) of unity so that no gain is applied to the scene. Examples of the optical impulse responses are given in Figure 2.9, corresponding to a wavelength of 10 µm, an optical diameter of 10 centimeters, and a geometric blur of 0.1 milliradian.

The overall impulse response of the optics is the combined blur of both the diffraction and aberration effects,

h_optics(x, y) = h_diff(x, y) ** h_geom(x, y)   (2.22)

The transfer functions corresponding to these impulse responses are obtained by taking the Fourier transform of the functions given in (2.18) and (2.20). The Fourier transform of the somb is given by Gaskill [3], and it yields the diffraction transfer function.
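For a numerical feel, the sketch below (Python/NumPy) uses the example values above (λ = 10 µm, D = 10 cm, 0.1-mrad geometric blur) with the standard incoherent circular-aperture diffraction MTF and a Gaussian aberration MTF; these closed forms and the blur-to-sigma relation are stated as assumptions, not as equations (2.19) and (2.21) themselves.

import numpy as np

WAVELENGTH_UM = 10.0        # average band wavelength, micrometers
DIAMETER_UM = 10.0e4        # 10-cm aperture expressed in micrometers
CUTOFF = DIAMETER_UM / WAVELENGTH_UM / 1000.0   # cycles per milliradian (= 10)

def diffraction_mtf(f_cyc_per_mrad):
    """Standard incoherent diffraction MTF of a circular aperture (assumed form)."""
    rho = np.clip(np.abs(f_cyc_per_mrad) / CUTOFF, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1.0 - rho**2))

def aberration_mtf(f_cyc_per_mrad, blur_mrad=0.1):
    """Gaussian aberration MTF for an assumed geometric blur (in milliradians)."""
    sigma = blur_mrad / 2.0   # illustrative relation between blur size and sigma
    return np.exp(-2.0 * (np.pi * sigma * f_cyc_per_mrad) ** 2)

freqs = np.linspace(0.0, 12.0, 121)                           # cycles per milliradian
H_optics = diffraction_mtf(freqs) * aberration_mtf(freqs)     # cascaded optics MTF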
2.4.2 Detector Spatial Filters
The detector spatial filter is also comprised of a number of different effects, including spatial integration, sample-and-hold, crosstalk, and responsivity, among others. The two most common effects are spatial integration and sample-and-hold; that is, the overall detector impulse response is the combination (convolution) of these two effects.
The other effects can be included, but they are usually considered negligible unless there is good reason to believe otherwise (i.e., the detector responsivity varies dramatically over the detector).

The detector spatial impulse response is due to the spatial integration of the light over the detector. Since most detectors are rectangular in shape, the rectangle function is used as the spatial model of the detector,

h_det_sp(x, y) = [1 / (DAS_x DAS_y)] rect(x / DAS_x, y / DAS_y)

where DAS_x and DAS_y are the horizontal and vertical detector angular subtenses.
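The transfer function of this rectangular detector model is a sinc in each direction; the short sketch below (Python/NumPy, with an assumed 0.1-mrad detector angular subtense) illustrates it, using the convention sinc(x) = sin(πx)/(πx) that numpy.sinc implements.

import numpy as np

DAS_X = 0.1   # assumed horizontal detector angular subtense, milliradians
DAS_Y = 0.1   # assumed vertical detector angular subtense, milliradians

def detector_mtf(fx, fy):
    """MTF of the rect detector model: |sinc(DAS_x*fx)| * |sinc(DAS_y*fy)|."""
    return np.abs(np.sinc(DAS_X * fx)) * np.abs(np.sinc(DAS_Y * fy))

# The first MTF zero falls at 1/DAS (10 cycles/mrad for a 0.1-mrad detector).
freqs = np.linspace(0.0, 20.0, 201)
print(detector_mtf(freqs, 0.0)[100])   # essentially zero at 10 cycles/mrad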
In the scan, or x, direction, the distance, in milliradians, between samples is usually smaller than the detector angular subtense by a factor called samples per IFOV, or samples per DAS, spdas. The sample-and-hold function can be considered a rectangular