CHAPTER 27 Computer-Assisted Microscopy
27.6.5.1 Deblurring
Weinstein and Castleman pioneered the deblurring of optical section images using a simple method that involves subtracting adjacent plane images that have been blurred with an appropriate defocus psf [69], given by
f̂_j = [ g_j − Σ_{i=1}^{M} ( g_{j−i} ∗ h_{−i} + g_{j+i} ∗ h_i ) ] ∗ k_0,    (27.14)
where f_j is the specimen brightness distribution at focus level j, g_j is the optical section image obtained at level j, h_i is the blurring psf due to being out of focus by the amount i, k_0 is a heuristically designed highpass filter, and ∗ represents the convolution operation. Thus one can partially remove the defocused structures by subtracting the 2M adjacent plane images that have been blurred with the appropriate defocus psf, and convolving the result with a suitable highpass filter k_0. The filter, k_0, and the number, M, of adjacent planes must be selected to give good results. While this technique cannot recover the specimen function exactly, it does improve optical section images at reasonable computational expense. It is often necessary to use only a small number, M, of adjacent planes to remove most of the defocused information. Figure 27.17 shows images from transmitted light microscopy and fluorescence microscopy that have been deblurred using optical sections above and below at a Z-interval of 1 μm.
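Equation (27.14) is straightforward to prototype. In the sketch below, the defocus psf h_i is approximated by a Gaussian whose width grows with the defocus amount, and k_0 by a small Laplacian-style highpass kernel; both are illustrative assumptions, since the true psf depends on the microscope optics and k_0 is left to heuristic design.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def nearest_neighbor_deblur(stack, j, M=1, sigma_per_plane=2.0, k0=None):
    """Approximate the in-focus plane f_j per Eq. (27.14): subtract the
    2M defocus-blurred neighbor planes from g_j, then convolve with a
    highpass filter k0.  The Gaussian psf and the default k0 below are
    illustrative stand-ins, not the chapter's calibrated choices."""
    if k0 is None:
        # heuristic 3x3 highpass kernel standing in for k0
        k0 = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    est = stack[j].astype(float)
    for i in range(1, M + 1):
        # h_{-i} and h_i: blur widens with the defocus amount |i|
        est -= gaussian_filter(stack[j - i].astype(float), sigma_per_plane * i)
        est -= gaussian_filter(stack[j + i].astype(float), sigma_per_plane * i)
    return convolve(est, k0)

# usage: deblur the middle section of a three-plane stack
stack = np.random.default_rng(0).random((3, 64, 64))
deblurred = nearest_neighbor_deblur(stack, j=1, M=1)
```

As the text notes, a small M (here the nearest neighbors only) is often enough to remove most of the defocused information.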
FIGURE 27.17
Deblurring. The top row shows images of FISH-labeled lymphocytes. The left three images are from an optical section stack taken one micron apart. The right image is the middle one, deblurred. The in-plane dots are brighter, while the out-of-plane dots are removed. The bottom row shows transmitted images of May-Giemsa stained blood cells. The left three images are from an optical section stack taken one-half micron apart. The rightmost image is the middle one, deblurred.
27.6 Applications in Clinical Cytogenetics
27.6.5.2 Image Fusion
One effective way to combine a set of deblurred optical section images into a single 2D image containing the detail from each involves the use of the wavelet transform [8].
A linear transformation is defined by a set of basis functions. It represents an image by a set of coefficients that specify what mix of basis functions is required to reconstruct that image. Reconstruction is effected by summing the basis functions in proportions specified by the coefficients. The coefficients thus reflect how much each of the basis functions resembles a component of the original image. If a few of the basis functions match the components of the image, then their coefficients will be large and the other coefficients will be negligible, yielding a very compact representation. The coefficients that correspond to the desired components of the image can be increased in magnitude, prior to reconstruction, to enhance those components.
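This analyze-amplify-reconstruct idea can be sketched with any orthonormal basis; here, purely for illustration, the 1D DCT plays the role of the transform, and the 2.0 threshold and 3x gain are arbitrary choices, not values from the chapter.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
t = np.arange(64) / 64.0
# a signal dominated by one component, plus noise
signal = np.cos(2 * np.pi * 4 * t) + 0.2 * rng.standard_normal(64)

coeffs = dct(signal, norm="ortho")     # analysis: project onto the basis
strong = np.abs(coeffs) > 2.0          # the few coefficients that match a component
coeffs[strong] *= 3.0                  # enhance the matching components
enhanced = idct(coeffs, norm="ortho")  # synthesis: weighted sum of basis functions
```

Because the component matches a basis function well, only a few coefficients exceed the threshold, and amplifying them boosts that component relative to the rest.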
27.6.5.3 Wavelet Design
A wavelet transform is a linear transformation in which the basis functions (except the first) are scaled and shifted versions of one function, called the “mother wavelet.” If the wavelet can be selected to resemble components of the image, then a compact representation results. There is considerable flexibility in the design of basis functions. Thus it is often possible to design wavelet basis functions that are similar to the image components of interest. These components, then, are represented compactly in the transform by relatively few coefficients. These coefficients can be increased in amplitude, at the expense of the remaining components, to enhance the interesting content of the image. Fast algorithms exist for the computation of wavelet transforms.
Mallat’s iterative algorithm for implementing the one-dimensional discrete wavelet transform (DWT) [70, 71] is shown in Fig. 27.18.

FIGURE 27.18
Mallat’s one-dimensional DWT algorithm. The left half shows one step of decomposition, while the right half shows one step of reconstruction. The down and up arrows indicate downsampling and upsampling by a factor of two, respectively. For an orthonormal transform, the two filters on the right are the same as the two on the left. Further steps of decomposition and reconstruction are introduced at the open circle.

In the design of an orthonormal DWT, one begins with a “scaling vector,” h0(k), of even length. The elements of the scaling vector must satisfy certain constraints imposed by invertibility. For example, the elements must sum to √2, their squares must sum to unity, and the sum of the even-numbered elements must equal the sum of the odd-numbered elements [65]. From h0(k) is generated a “wavelet vector”
h1(k) = ±(−1)^k h0(−k).    (27.15)

These two vectors are used as discrete convolution kernels in the system of Fig. 27.18 to implement the DWT. For example, all possible four-element orthonormal scaling vectors are specified by
Select the parameter values (e.g., c1 and c2, above) and then use the cascade algorithm to construct the corresponding scaling function and basic wavelet. These show the form of the basis functions of that wavelet transform. Repeat the process using different parameter values until the desired basis function shape is attained. Then use h0(k) and h1(k) in the 2D version of Mallat’s algorithm to implement the wavelet transform and its inverse.
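These design rules can be checked numerically. The sketch below uses the four-element Daubechies scaling vector as one concrete choice satisfying the constraints; the particular h0 is our example, not one singled out by the chapter.

```python
import numpy as np

def wavelet_vector(h0):
    """h1(k) = (-1)^k h0(-k) (Eq. 27.15), with the time reversal taken
    over the finite support of the even-length scaling vector."""
    h0 = np.asarray(h0, dtype=float)
    signs = (-1.0) ** np.arange(len(h0))
    return signs * h0[::-1]

def satisfies_constraints(h0):
    """Check the invertibility constraints quoted in the text."""
    h0 = np.asarray(h0, dtype=float)
    return (np.isclose(h0.sum(), np.sqrt(2.0))               # sums to sqrt(2)
            and np.isclose((h0 ** 2).sum(), 1.0)             # squares sum to 1
            and np.isclose(h0[0::2].sum(), h0[1::2].sum()))  # even sum = odd sum

# Daubechies-4 scaling vector: one valid four-element choice
s3 = np.sqrt(3.0)
h0 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
h1 = wavelet_vector(h0)
```

The resulting wavelet vector has zero mean and is orthogonal to the scaling vector, as an orthonormal design requires.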
27.6.5.4 Wavelet Fusion
Image fusion is the technique of combining multiple images into one that preserves the interesting detail of each [72]. The wavelet transform affords a convenient way to fuse images. One simply takes, at each coefficient position, the coefficient value having maximum absolute amplitude and then reconstructs an image from all such maximum-amplitude coefficients. If the basis functions match the interesting components of the image, then the fused image will contain the interesting components collected up from all of the input images. The images can be combined in the transform domain by taking the maximum-amplitude coefficient at each coordinate; an inverse wavelet transform of the resulting coefficients then reconstructs the fused image. We found that deblurring prior to wavelet fusion significantly improves the measured sharpness of the processed images. An example of wavelet image fusion using transmitted light and fluorescence images is shown in Fig. 27.19. Optical section deblurring followed by image fusion produced an image in which all of the dots are visible for the fluorescence images. We use these techniques to improve the information content of images from thick samples. Specifically, this technique improves the dot information in acquired FISH images because it incorporates data from focal planes above and below.
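The max-absolute-coefficient rule is easy to sketch with a hand-rolled one-level 2D Haar transform, used here as a stand-in for the biorthogonal wavelet the chapter actually employs; for brevity the lowpass band is fused by the same maximum rule, which is a simplification.

```python
import numpy as np

def haar2(x):
    """One level of a 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0   # row pairs: average
    d = (x[0::2] - x[1::2]) / 2.0   # row pairs: detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Invert haar2 exactly."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(images):
    """Fuse images by keeping the max-absolute-amplitude coefficient
    at each position in each subband, then reconstructing."""
    bands = [haar2(im) for im in images]
    fused = []
    for b in zip(*bands):                    # LL, LH, HL, HH in turn
        stack = np.stack(b)
        idx = np.abs(stack).argmax(axis=0)   # winning image at each position
        fused.append(np.take_along_axis(stack, idx[None], 0)[0])
    return ihaar2(*fused)

# usage: fuse two (deblurred) optical sections
rng = np.random.default_rng(1)
fused = fuse([rng.random((8, 8)), rng.random((8, 8))])
```

In practice one would iterate the decomposition to several levels and use the chapter's biorthogonal 2,2 wavelet rather than a single Haar level.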
27.7 Commercially Available Systems
Computer-assisted microscopy systems can vary in price, sensitivity, and capability. The selection of a system depends upon the experimental applications for which it will be used. Typically, the selection is based on requirements for image resolution, sensitivity, light conditions, image acquisition time, image storage requirements, and, most importantly, the postacquisition image processing and analysis required. Other considerations are the
technical demands of assembling the component hardware and configuring software.
Computerized imaging systems can be assembled from component parts or obtained
from a supplier as a fully integrated system Several companies offer fully integrated
computerized microscopy systems and/or provide customized solutions for specialized
systems.
A brief listing of some of the commercially available systems is provided here.
FIGURE 27.19
Image fusion using transmitted light and fluorescence images. The top row shows FISH-labeled lymphocytes. The left three images are from a deblurred optical section stack taken one micron apart. The right image is the fusion of the three using the biorthogonal 2,2 wavelet transform. Notice that the fused image has all of the dots in focus. The bottom rows demonstrate a similar effect in transmitted light images. The deblurring process, followed by image fusion, enhances image detail.

Applied Precision Inc. (Issaquah, WA) provides a computerized imaging instrument, the DeltaVision™ Restoration Microscopy System, for applications such as 3D time course studies with live cell material. Applied Precision also offers the softWoRx™ Imaging Workstation for post-acquisition image processing such as deconvolution, 3D segmentation, and rendering. Universal Imaging Corp. (West Chester, PA) provides software, including the MetaMorph™, MetaView™, and MetaFluor™ systems, which can be customized for computerized microscopy applications in transmitted light, time lapse studies, and fluorescence microscopy. VayTek Inc. (Fairfield, Iowa) provides an integrated microscopy imaging system, the Proteus™ system, that can be custom configured to any microscopy system. VayTek’s proprietary software for deconvolution and 3D reconstruction, including MicroTome™, VoxBlast™, HazeBuster™, Vtrace™, and Volume-Scan™, can also be custom configured for most current microscopy systems. ChromaVision Medical Systems Inc. (San Juan, CA) provides an Automated Cellular Imaging System that allows cell detection based on color-, size-, and shape-based morphometric features. MetaSystems GmbH (Altlussheim, Germany) provides a computerized microscopy system based on Zeiss optics for scanning and imaging pathology slides, cytogenetic slides for FISH, MFISH, and metaphase detection, oncology slides, and for rare cell detection, primarily from blood, bone marrow, or tissue section samples. Applied Imaging Corp. (Santa Clara, CA), now part of MetaSystems GmbH, Germany, provides fully automated scanning and image analysis systems. Their MDS™ system provides automated slide scanning using brightfield or fluorescent illumination to allow standard karyotyping, FISH, and comparative genomic hybridization, as well as rare cell detection. They also have the Oncopath™ and Ariol SL-50™ image analysis systems for oncology and clinical pathology applications.
The field of automated imaging is also of great interest to pharmaceutical and biotechnology companies. Many are now developing high-throughput and high-content screening platforms for automated analysis of intracellular localization and dynamics of proteins and to view the effects of a drug on living cells more quickly. High-content imaging systems for cell-based assays have proliferated in the past year; examples include Cellomics’ ArrayScan system and KineticScan workstation (Cellomics, Inc., Pittsburgh, PA); Amersham’s INCell Analyzer 1000 and 3000 (Amersham Biosciences Corp., Piscataway, NJ); Acumen Bioscience’s Explorer system (Melbourn, United Kingdom); CompuCyte’s iCyte imaging cytometer and LSC laser scanning cytometer (CompuCyte Corporation, Cambridge, MA); Atto Bioscience’s Pathway HT kinetic cell imaging system (Atto Bioscience Inc., Rockville, MD); Universal Imaging’s Discovery-1 system (Universal Imaging Corporation, Downingtown, PA); and Q3DM’s (now part of Beckman Coulter, San Diego, CA) EIDAQ 100 High-Throughput Microscopy (HTM) system (recently discontinued).
The rapid development of microscopy techniques over the last few decades has been accompanied by similar advances in the development of new fluorescent probes and improvements in automated microscope systems and software. Advanced applications such as deconvolution, FRET, and ion ratio imaging require sophisticated routines for controlling automated microscopes and peripheral devices such as filter wheels, shutters, automated stages, and cameras. Computer-assisted microscopy provides the ability to enhance the speed of microscope data acquisition and data analysis, thus relieving
humans of tedious tasks. Not only is cost efficiency improved, due to the corresponding reduction in labor costs and space, but errors associated with operator bias are also eliminated. Researchers are not only relieved from tedious manual tasks but may also quickly examine thousands of cells, plates, and slides, as well as precisely determine some informative activity against a cell, and collect and mine massive amounts of data. The process is also repeatable and reproducible with a high degree of precision.
27.8 Conclusions

We have described a specific configuration of a computerized fluorescence microscope with applications in clinical cytogenetics. Fetal cell screening from maternal blood has the potential to revolutionize the future of prenatal genetic testing, making noninvasive testing available to all pregnant women. Its clinical realization will be practical only via an automated screening procedure because of the small number of fetal cells available. Specialized slides, based on the grid template, such as the subtelomeric FISH assay, require automated scanning methods to increase the accuracy and efficiency of the screening protocol. Similarly, automated techniques are necessary to allow the quantitative analysis for the measurement of the separation distance for detection of duplicated genes. Thick specimen imaging using deblurring methods allows the detection of cell structures that are distributed throughout the volume of the entire cell. Thus, there are sound reasons for pursuing the goal of automation in medical cytogenetics. Not only does automation increase laboratory throughput, it also decreases laboratories’ costs for performing tests. And as tests become more objective, the liability of laboratories also decreases. The market for comprehensive automated tests is vast in terms of both size (whether measured in test volume or dollars) and potential impact on people’s lives.
The effective commercial use of computer-assisted microscopy and quantitative image analysis requires the careful integration of automated microscopy, high-quality image acquisition, and powerful analytical algorithms that can rationally detect, count, and quantify areas of interest. Typically, the systems should provide walk-away scanning operation with automated slide loaders that can queue several (50 to 200) slides. Additionally, the automated microscopy systems should have the capability to integrate with viewing stations to create a network for reviewing images, analyzing data, and generating reports. There has been an increase in the commercialization of computerized microscopy and high-content imaging systems over the past five years. Clearly, future developments in this field will be of great interest to biotechnology. All signs indicate that superior optical instrumentation and software for cell research are on the development horizon.
ACKNOWLEDGMENTS
We would like to thank Vibeesh Bose, Hyohoon Choi, and Mehul Sampat for their
assistance with the development and testing of the computerized microscopy system.
The development of the automated microscopy system was partially supported by NIH SBIR Grant Nos. HD34719-02, HD38150-02, and GM60894-01.
REFERENCES
[1] M Bravo-Zanoguera, B Massenbach, A Kellner, and J H Price High-performance autofocus
circuit for biological microscopy Rev Sci Instrum., 69:3966–3977, 1998.
[2] J C Oosterwijk, C F Knepfle, W E Mesker, H Vrolijk, W C Sloos, H Pattenier, I Ravkin,
G J van Ommen, H H Kanhai, and H J Tanke Strategies for rare-event detection: an approach
for automated fetal cell detection in maternal blood Am J Hum Genet., 63:1783–1792, 1998.
[3] L A Kamentsky, L D Kamentsky, J A Fletcher, A Kurose, and K Sasaki Methods for automatic multiparameter analysis of fluorescence in situ hybridized specimens with a laser scanning cytometer Cytometry, 27:117–125, 1997.
[4] H Netten, I T Young, L J van Vliet, H J Tanke, H Vrolijk, and W R Sloos FISH and chips:
automation of fluorescent dot counting in interphase cell nuclei Cytometry, 28:1–10, 1997.
[5] I Ravkin and V Temov Automated microscopy system for detection and genetic characterization
of fetal nucleated red blood cells on slides Proc Opt Invest Cells In Vitro In Vivo, 3260:180–191,
1998
[6] D J Stephens and V J Allan Light microscopy techniques for live cell imaging Science, 300:82–86,
2003
[7] T Lehmann, J Bredno, V Metzler, G Brook, and W Nacimiento Computer assisted quantification of axo-somatic buttons at the cell membrane of motorneurons IEEE Trans Biomed Eng., 48:706–717, 2001.
[8] J Lu, D M Healy, and J B Weaver Contrast enhancement of medical images using multiscale edge
representation Opt Eng., 33:2151–2161, 1994.
[9] J S Ploem, A M van Driel-Kulker, L Goyarts-Veldstra, J J Ploem-Zaaijer, N P Verwoerd, and
M van der Zwan Image analysis combined with quantitative cytochemistry Histochemistry,
84:549–555, 1986
[10] E M Slayter and H S Slayter Light and Electron Microscopy Cambridge University Press,
New York, NY, 1992
[11] I T Young Quantitative microscopy IEEE Eng Med Biol., 15:59–66, 1996.
[12] J D Cortese Microscopy paraphernalia: accessories and peripherals boost performance The
Scientist, 14(24):26, 2000.
[13] E Gratton and M J vandeVan Laser sources for confocal microscopy In J B Pawley, editor,
Handbook of Biological Confocal Microscopy Plenum, New York, 69–97, 1995.
[14] Q Wu Autofocusing In Q Wu, F A Merchant, and K R Castleman, editors, Microscope Image
Processing Academic Press, Boston, MA, 441–467, 2008.
[15] E T Johnson and L J Goforth Metaphase spread detection and focus using closed circuit television
J Histochem Cytochem., 22(7):536–545, 1974.
[16] B Dew, T King, and D Mighdoll An automatic microscope system for differential leucocyte
counting J Histochem Cytochem., 22:685–696, 1974.
[17] H Harms and H M Aus Comparison of digital focus criteria for a TV microscope system
Cytometry, 5:236–243, 1984.
[18] F C Groen, I T Young, and G Ligthart A comparison of different focus functions for use in
autofocus algorithms Cytometry, 6(2):81–91, 1985.
[19] F R Boddeke, L J van Vliet, H Netten, and I T Young Autofocusing in microscopy based on the
OTF and sampling Bioimaging, 2:193–203, 1994.
[20] L Firestone, K Cook, K Culp, N Talsania, and K Preston Comparison of autofocus methods for
automated microscopy Cytometry, 12(3):195–206, 1991.
[21] D Vollath Automatic focusing by correlative methods J Microsc., 147:279–288, 1987.
[22] D Vollath The influence of scene parameters and of noise on the behavior of automatic focusing
algorithms J Microsc., 152(2):133–146, 1988.
[23] J F Brenner, B S Dew, J B Horton, T King, P W Neurath, and W D Selles An automated
microscope for cytologic research J Histochem Cytochem., 24:100–111, 1976.
[24] A Erteza Depth of convergence of a sharpness index autofocus system Appl Opt., 15:877–881,
1976
[25] A Erteza Sharpness index and its application to focus control Appl Opt., 16:2273–2278, 1977.
[26] R A Muller and A Buffington Real time correction of atmospherically degraded telescope images
through image sharpening J Opt Soc Am., 64:1200–1210, 1974.
[27] J H Price and D A Gough Comparison of phase-contrast and fluorescence digital autofocus for
scanning microscopy Cytometry, 16(4):283–297, 1994.
[28] J M Geusebroek, F Cornelissen, A W Smeulders, and H Geerts Robust autofocusing in
microscopy Cytometry, 39(1):1–9, 2000.
[29] A Santos, C Ortiz de Solorzano, J J Vaquero, J M Pena, N Malpica, and F del Pozo Evaluation
of autofocus functions in molecular cytogenetic analysis J Microsc., 188(3):264–272, 1997.
[30] J C Russ The Image Processing Handbook CRC Press, Boca Raton, FL, 1994.
[31] K R Castleman, T P Riopka, and Q Wu FISH image analysis IEEE Eng Med Biol., 15(1):67–75,
1996
[32] E R Dougherty and J Astola An Introduction to Nonlinear Image Processing SPIE, Bellingham,
WA, 1994
[33] J Serra Image Analysis and Mathematical Morphology Academic Press, London, 1982.
[34] K R Castleman Digital image color compensation with unequal integration periods Bioimaging,
2:160–162, 1994
[35] K R Castleman and I T Young Fundamentals of microscopy In Q Wu, F A Merchant, and
K R Castleman, editors, Microscope Image Processing Academic Press, Boston, MA, 11–25, 2008.
[36] Y Wang, Q Wu, and K R Castleman Image enhancement In Q Wu, F A Merchant, and
K R Castleman, editors, Microscope Image Processing Academic Press, Boston, MA, 59–78, 2008.
[37] W E Higgins, W J T Spyra, E L Ritman, Y Kim, and F A Spelman Automatic extraction of the
arterial tree from 3-D angiograms IEEE Conf Eng Med Biol., 2:563–564, 1989.
[38] N Niki, Y Kawata, H Satoh, and T Kumazaki 3D imaging of blood vessels using x-ray rotational
angiographic system IEEE Med Imaging Conf., 3:1873–1877, 1993.
[39] C Molina, G Prause, P Radeva, and M Sonka 3-D catheter path reconstruction from biplane
angiograms SPIE, 3338:504–512, 1998.
[40] A Klein, T K Egglin, J S Pollak, F Lee, and A Amini Identifying vascular features with orientation
specific filters and b-spline snakes IEEE Comput Cardiol., 113–116, 1994.
[41] A K Klein, F Lee, and A A Amini Quantitative coronary angiography with deformable spline
models IEEE Trans Med Imaging, 16:468–482, 1997.
[42] D Guo and P Richardson Automatic vessel extraction from angiogram images IEEE Comput.
Cardiol., 25:441–444, 1998.
[43] Y Sato, S Nakajima, N Shiraga, H Atsumi, S Yoshida, T Koller, G Gerig, and R Kikinis 3D
multi-scale line filter for segmentation and visualization of curvilinear structures in medical images IEEE
Med Image Anal., 2:143–168, 1998.
[44] K R Castleman Digital Image Processing Prentice Hall, Englewood Cliffs, NJ, 1996.
[45] T Ridler and S Calvard Picture thresholding using an iterative selection method IEEE Trans Syst.
Man Cybern., 8:629–632, 1978.
[46] W Tsai Moment-preserving thresholding Comput Vis Graph Image Process., 29:377–393, 1985.
[47] N Otsu A threshold selection method from gray-level histograms IEEE Trans Syst Man Cybern.,
9:62–66, 1979
[48] J Kapur, P Sahoo, and A Wong A new method for gray-level picture thresholding using the
entropy of the histogram Comput Vis Graph Image Process., 29(3):273–285, 1985.
[49] Q Wu, and K R Castleman Image segmentation In Q Wu, F A Merchant, and K R Castleman,
editors, Microscope Image Processing Academic Press, Boston, MA, 159–194, 2008.
[50] A C Bovik, editor The Handbook of Image and Video Processing, Chap 4.8, 4.9, and 4.12 Elsevier
Academic Press, 2005
[51] T McInerney and D Terzopoulos Deformable models in medical image analysis: a survey IEEE
Med Image Anal., 1:91–108, 1996.
[52] C Smets, G Verbeeck, P Suetens, and A Oosterlinck A knowledge-based system for the delineation
of blood vessels on subtraction angiograms Pattern Recognit Lett., 8:113–121, 1988.
[53] R Nekovei and Y Sun Back-propagation network and its configuration for blood vessel detection
in angiograms IEEE Trans Neural Netw., 6:64–72, 1995.
[54] L Dorst Discrete Straight Lines: Parameters, Primitives and Properties Delft University Press, Delft,
The Netherlands, 1986
[55] T Y Young, and K Fu Handbook of Pattern Recognition and Image Processing Academic Press,
San Diego, CA, 1986
[56] A K Jain Fundamentals of Digital Image Processing Prentice-Hall, Englewood Cliffs, NJ, 1989.
[57] H Firth, P A Boyd, P Chamberlain, I Z Mackenzie, R H Lindenbaum, and S M Hudson Severe
limb abnormalities after chorionic villus sampling at 56 to 66 days Lancet, 1:762–763, 1991.
[58] D Ganshirt-Ahlert, M Burschyk, H P Garritsen, L Helmer, P Miny, J Horst, H P Schneider, and
W Holzgreve Magnetic cell sorting and the transferrin receptor as potential means of prenatal
diagnosis from maternal blood Am J Obstet Gynecol., 166:1350–1355, 1992.
[59] D W Bianchi, G K Zickwolf, M C Yih, A F Flint, O H Geifman, M S Erikson, and J M Williams Erythroid-specific antibodies enhance detection of fetal nucleated erythrocytes in maternal blood
Prenat Diagn., 13:293–300, 1993.
[60] S Elias, J Price, M Dockter, S Wachtel, A Tharapel, J L Simpson, and K W Klinger First trimester
prenatal diagnosis of trisomy 21 in fetal cells from maternal blood Lancet, 340:1033, 1992.
[61] F A Merchant, S J Aggarwal, K R Diller, K A Bartels, and A C Bovik Three-dimensional distribution of damaged cells in cryopreserved pancreatic islets as determined by laser scanning
confocal microscopy J Microsc., 169:329–338, 1993.
[62] K R Castleman and B S White Dot-Count proportion estimation in FISH specimens Bioimaging,
3:88–93, 1995
[63] F A Merchant and K R Castleman Strategies for automated fetal cell screening Hum Reprod.
Update, 8(6):509–521, 2002.
[64] S J Knight and J Flint Perfect endings: a review of subtelomeric probes and their use in clinical
diagnosis J Med Genet., 37(6):401–409, 2000.
[65] X Hu, A M Burghes, P N Ray, M W Thompson, E G Murphy, and R G Worton Partial gene
duplication in Duchenne and Becker muscular dystrophies J Med Genet., 25:369–376, 1988.
[66] K S Chen, P Manian, T Koeuth, L Potocki, Q Zhao, A C Chinault, C C Lee, and J R Lupski
Homologous recombination of a flanking repeat gene cluster is a mechanism for a common
contiguous gene deletion syndrome Nat Genet., 17:154–163, 1997.
[67] L Potocki, K Chen, S Park, D E Osterholm, M A Withers, V Kimonis, A M Summers,
W S Meschino, K Anyane-Yeboa, C D Kashork, L G Shaffer, and J R Lupski Molecular mechanism for duplication 17p11.2 - the homologous recombination reciprocal of the Smith-Magenis
microdeletion Nat Genet., 24:84–87, 2000.
[68] W H Press, B P Flannery, S A Teukolsky, and W T Vetterling Numerical Recipes in C Cambridge
University Press, New York, 1992
[69] M Weinstein and K R Castleman Reconstructing 3-D specimens from 2-D section images Proc.
SPIE, 26:131–138, 1971.
[70] S Mallat A theory for multiresolution signal decomposition: the wavelet representation IEEE
Trans Pattern Anal Mach Intell., 11:674–693, 1989.
[71] I Daubechies Orthonormal bases of compactly supported wavelets Commun Pure and Appl.
Math., 41:909–996, 1988.
[72] J Aggarwal Multisensor Fusion for Computer Vision Springer-Verlag, New York, NY, 1993.
28
Towards Video Processing
Alan C Bovik
The University of Texas at Austin
Hopefully the reader has found the Essential Guide to Image Processing to be a valuable
resource for understanding the principles of digital image processing, ranging from the
very basic to the more advanced The range of readers interested in the topic is quite
broad, since image processing is vital to nearly every branch of science and engineering,
and increasingly, in our daily lives.
Of course, our experience of images is not limited to the still images that are considered in this Guide. Indeed, much of the richness of visual information is created by scene changes recorded as time-varying visual information. Devices for sensing and recording moving images have been evolving very rapidly in terms of speed, accuracy, and sensitivity, and for nearly every type of available radiation. These time-varying images, regardless of modality, are collectively referred to as video.
Of course, the main application of digital video processing is to provide high-quality visible-light videos for human consumption. The ongoing explosion of digital and high-definition television, videos on the Internet, and wireless video on handheld devices ensures that there will be significant interest in topics in digital video processing for a long time.
Video analysis is still a young field with considerable work left to be done. By mining the rich spatio-temporal information that is available in video, it is possible to analyze the growth or evolutionary properties of dynamic physical phenomena or of living specimens. More broadly, video streams may be analyzed to detect movement for security purposes, for vehicle guidance or navigation, and for tracking moving objects, including people.
Digital video processing encompasses many approaches that derive from the essential principles of digital image processing (of still images) found in this Guide. Indeed, it is best to become conversant in the techniques of digital image processing before embarking on the study of digital video processing. However, there is one important aspect of video processing that significantly distinguishes it from still image processing, makes necessary significant modifications of still image processing methods for adaptation to video, and also requires the development of entirely new processing philosophies. That aspect is motion.
Digital videos are taken from a real world containing 3D objects in motion. These objects in motion project to images that are in motion, meaning that the image intensities and/or colors are in motion at the image plane. Motion has attributes that are both simple and complex. Simple, because most visual motion is relatively smooth, in the sense that the instantaneous velocities of 3D objects do not usually change very quickly. Yet object motion can also be complex, and includes deformations (when objects change shape), occlusions (when one object moves in front of another), acceleration (when objects change their direction), and so on.
It is largely the motion of these 3D objects and their 2D projections that determines our visual experience of the world. The way in which motion is handled in video processing largely determines how videos will be perceived or analyzed. Indeed, one of the first
steps in a large percentage of video processing algorithms is motion estimation, whereby
the movement of intensities or colors is estimated. These motion estimates can be used
in a wide variety of ways for video processing and analysis.
Other ways wherein video presents special challenges relate to the significant increase
in data volume. The extra (temporal) dimension of video implies significant increases in required storage, bandwidth, and processing resources. Naturally, it is of high interest to find efficient algorithms that exploit some of the special characteristics of video, such as temporal redundancy, in video processing.
The companion book to this one, the Essential Guide to Video Processing, explains the
significant problems encountered in video processing, beginning with the essentials of video sampling, through motion estimation and tracking, common processing steps such
as enhancement and interpolation, the extremely important topic of video compression, and on to more advanced topics such as video quality assessment, video networking, video
security, and wireless video. Like the current book, the companion video Guide finishes
with a series of interesting and essential applications including video surveillance, video analysis of faces, medical video processing, and video-speech analysis.
It is our hope that the reader will embark on the second leg of their voyage of discovery into one of the most exciting and timely technological topics of our age. The first leg, digital image processing, while extremely fascinating and important on its own intellectual and practical merits, is in many ways a prelude to the broader, more sophisticated, and more challenging topic of digital video processing.
ACIS, see Automated cellular imaging system
Adaptive elastic string matching algorithm, 669
Adaptive speckle filter, 539
Adaptive wavelet transforms, 486–490
Additive image offset in linear point operations,
Alternating sequential filters (ASF), 301
Analog images as physical functions, 179–180
Analog-to-digital conversion (A/D conversion), 6
AOD, see Average optical density
ARCH models, see Autoregressive conditional
heteroskedastic models
Area openings, 301–302
Arithmetic coding, 399–404, 401t, 402f, 435, 453
  context-based, 403
ART, see Algebraic reconstruction technique
ASF, see Alternating sequential filters
Astronomical imaging, 56
Atmospheric turbulence blur, 330
Attached shadows, 687–688
Authentication
  biometric-based, 650
  image content, for watermarking, 636–641
Autocorrelation function (ACF), 207f, 619, 642
Autofocusing in computer-assisted microscopy, 787–790
  autofocus speed, 789–790
  focus functions, 788–789
  two-phase approach, 790f
Automated cellular imaging system (ACIS), 826
Autoregressive conditional heteroskedastic (ARCH) models, 217
Average optical density (AOD), 45, 50
AWGN, see Additive white Gaussian noise
B
Band denoising functions, 247f
Band thresholding, 244–247
Band weighting, 248–249
Bandpass filters, 229, 229f, 559, 660
Barbara image, 483, 484f, 486, 488, 489f
Basis functions, 212, 213f, 217f
Bayes’ decision rule, 210, 214, 308
Bayes least squares, 210
Bayesian coring, 253
Bayesian model for optimal denoising, 258
Befuddlement theory, 804
Behavioral biometrics, 677, 678t
BER, see Bit error rate
Bilinear interpolation, 66, 284, 285f
Binary image morphology
  boundary detection in, 90–92, 92f
  logical operations, 79–80
  windows, 80–82, 81f
Binary image processing, 33–34
  binary image morphology, 79–92
  image thresholding, 71–77
  morphological filters, 82–90, 83f–86f, 88f, 89f
Binary images
  creation of, 70
  display of, 69, 70f
  morphological filters for, 294–295
  simple device for, 70f
Binary median filter, 90
Binary object detection, 308–309
Bit error rate (BER), 608–609
Bit plane encoding passes, 452–453
  normalization or cleanup pass, 452
  significance propagation, 452
Bitstream organization
  layers, 454–455
  packets, 454–455
Blind embedding schemes, 602
Blind image deconvolution, 324, 374
Blob coloring, 77
Block truncation coding (BTC), 37, 39f
Blur identification algorithms, 343–347
Boundary value problem, 342–343
BSNR, see Blurred signal-to-noise ratio
BTC, see Block truncation coding
Butterworth filter, 235
C
CALIC, see Context-based, adaptive, lossless image code
Cameras, 198–199
  colorimetric recording, 198
  mathematical model for color sensor of, 182
Canny’s edge detector, 516
Capacitance-based fingerprint sensor, 655
CAT, see Computer assisted tomography
269–271
  for impulse noise cleaning, 277, 278f, 279f, 280
  operation, 270f, 271f, 272
  output of, 269, 270f, 281f
Central limit theorem, 150
Cepstrum for motion blur, 345f
Chain coding in binary image processing, 94–96, 96f
Change detection, image differencing for, 63–65
Chaotic watermarks, 618–620
Charge coupled devices (CCDs), 783–784
  in noise model, 160–161
Checkmark, 614
Chromaticity, 185, 186
Chrominance components, 428, 436
CIELab space, 192
Circle of confusion (COC), 329
Circular convolution, 107
Circularly-symmetric watermarks, 623–624
Close filter, morphological, 87–88, 88f
Clustering property, 215
COC, see Circle of confusion
Code-blocks
  contributions to layers, 454f
  definition of, 450
Coding delay, 389
Coding gain, 477
Coefficient denoising functions, 251f
Coefficient thresholding, 250–252
Coefficient weighting, 252–253
Coefficient-to-symbol mapping unit, 424
Color aliasing, 183, 185
  sampling for, 189–191
Color images, 13–15, 16f
  edge detection for, 518
  transform domain watermarking for, 621
Color sampling, 182
Color sensor, 182
  spectral response of, 183
Color signals, sampling of, 193–196
Color vectors, transformation of, 187
Colorimetric recording, 198
Colorimetry, 180–181
Color-matching functions, 184
  CIE RGB, 186
  CIE XYZ, 185f, 187f
  dependence of color, 185–186
  effect of illumination, 188–189
Computed tomography
  iterative reconstruction methods, 767
    Bayesian reconstruction methods,
  mathematical preliminaries for, 747
  nuclear imaging using, 744–746
  rebinning methods in 3D PET, 758–759
Computer-assisted microscopy
  clinical cytogenetic applications
    detection of gene duplications, 808–817
    fetal cell screening in maternal blood, 802–804
    FISH for aneuploidy screening, 817–820
      performance, 817, 819t
    STFISH, 804–808
    thick specimen imaging, 820–824
  for clinical cytogenetics
    hardware, 799–800
    software, 800–802
  commercially available, 825–826
  components of, 779f
  function of, 778
  hardware
    filter control, 781–782
    illumination source, 780–781
    image sensors, 783–784
    X, Y stage positioning and Z-axis motors, 782–783
  image capture, 785
  image processing and analysis software
    background shading, 791–794, 793f
    color compensation, 794–795
    image enhancement, 795
    instrumentation-based errors, 791
    object measurement, 798–799
    segmentation for object identification, 796–798
  imaging software for, 785–786
  software for hardware control, 786
    autofocusing, 787–790
    automated slide scanning, 787
    image capture in, 790
  and user interface, 799
Conditional dilation, iterations of, 304f
Conditional histograms, 216–217, 219f
Cone-beam data, 743, 745
Cone-beam tomography, 761–764, 761f, 762f
Cones of eye, 14
Cones sensitivities, 180, 181f
Conjugate gradient algorithm, 342
Conjugate quadrature filters (CQF), 130
Connected filters for smoothing and simplification, 301–305
Connected operators
  area openings, 301–302
  reconstruction opening, 302–305
Constrained least-squares filter, 336
Constrained least-squares iteration, 363, 366f
Constrained least-squares restoration, 338f
Constrained-length Huffman codes, 398
Content-adaptive watermarking, 635
Context-based, adaptive, lossless image code (CALIC), 413–415, 414f, 415t