Digital Image Processing Document, Part 11 (PDF)


DOCUMENT INFORMATION

Basic information

Title: Image restoration models
Author: William K. Pratt
Field: Digital Image Processing
Category: Textbook
Year of publication: 2001
City: Hoboken, New Jersey
Number of pages: 21
File size: 419.52 KB


Contents



11 IMAGE RESTORATION MODELS

Image restoration may be viewed as an estimation process in which operations are performed on an observed or measured image field to estimate the ideal image field that would be observed if no image degradation were present in an imaging system. Mathematical models are described in this chapter for image degradation in general classes of imaging systems. These models are then utilized in subsequent chapters as a basis for the development of image restoration techniques.

11.1 GENERAL IMAGE RESTORATION MODELS

To design a digital image restoration system effectively, it is necessary to characterize quantitatively the image degradation effects of the physical imaging system, the image digitizer, and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer, and display to determine their response for an arbitrary image field. In some instances it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored. Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation.



Figure 11.1-1 shows a general model of a digital imaging system and restoration process.

FIGURE 11.1-1 Digital image restoration model.

In the model, a continuous image light distribution C(x, y, t, λ), dependent on spatial coordinates (x, y), time (t), and spectral wavelength λ, is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur, and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields F_O^(i)(x, y, t_j) at time instant t_j described by the general relation

F_O^{(i)}(x, y, t_j) = O_P\{ C(x, y, t, \lambda) \}    (11.1-1)

where O_P{·} represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength λ, and the amplitude of the light distribution C. For a monochrome imaging system, there will only be a single output field, while for a natural color imaging system, F_O^(i)(x, y, t_j) may denote the red, green, and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery may also involve several output bands of data.

In the general model of Figure 11.1-1, each observed image field F_O^(i)(x, y, t_j) is digitized, following the techniques outlined in Part 3, to produce an array of image samples F_S^(i)(m_1, m_2, t_j) at each time instant t_j. The output samples of the digitizer are related to the input observed field by

F_S^{(i)}(m_1, m_2, t_j) = O_G\{ F_O^{(i)}(x, y, t_j) \}    (11.1-2)

where O_G{·} is an operator modeling the image digitization process.
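As a concrete numerical sketch of the degradation operator O_P of Eq. 11.1-1 and the digitizer operator O_G of Eq. 11.1-2 (a minimal illustration, assuming a single monochrome frame, a spatially invariant Gaussian blur, additive sensor noise, and plain decimation, none of which are prescribed by the text):

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# "Continuous" ideal light distribution C(x, y), approximated on a fine grid
# (one monochrome frame at a single time instant t_j).
C = np.zeros((256, 256))
C[96:160, 96:160] = 1.0                      # a bright square object

# O_P: physical imaging system model, here a spatially invariant Gaussian
# blur standing in for the optics, plus additive sensor noise.
F_O = gaussian_filter(C, sigma=3.0) + 0.01 * rng.standard_normal(C.shape)

# O_G: image digitizer model, here ideal sampling of the observed field by
# plain decimation with a factor of 4 in each dimension.
F_S = F_O[::4, ::4]

print(F_O.shape, F_S.shape)                  # (256, 256) (64, 64)

Restoration techniques that attempt to invert such a degradation and digitization model are the subject of the subsequent chapters, as noted in the introduction.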

A digital image restoration system that follows produces an output array by the transformation

(11.1-3)

which is defined by the designed restoration operator. Next, the output samples of the digital restoration system are interpolated by the image display system to produce a continuous image estimate. This operation is governed by the relation

(11.1-4)

whose operator models the display transformation.

The function of the digital image restoration system is to compensate for degradations of the physical imaging system, the digitizer, and the image display system to produce an estimate of a hypothetical ideal image field that would be displayed if all physical elements were perfect. The perfect imaging system would produce an ideal image field modeled by

(11.1-5)

which involves a desired temporal and spectral response function, the observation period T, and a desired point and spatial response function.
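As a sketch of how Eqs. 11.1-3 to 11.1-5 can be written in the operator notation of Eqs. 11.1-1 and 11.1-2, assuming restored samples F_K^(i)(k_1, k_2, t_j), a restoration operator O_R{·}, a display operator O_D{·}, a desired temporal and spectral response U_i(t, λ), and a desired point and spatial response operator O_Q{·} (all of these symbols are assumptions introduced here, not necessarily Pratt's printed ones):

\hat{F}_K^{(i)}(k_1, k_2, t_j) = O_R\{ F_S^{(i)}(m_1, m_2, t_j) \}    % restoration, cf. (11.1-3)

\hat{F}_I^{(i)}(x, y, t_j) = O_D\{ \hat{F}_K^{(i)}(k_1, k_2, t_j) \}    % display interpolation, cf. (11.1-4)

F_I^{(i)}(x, y, t_j) = O_Q\Big\{ \int_{t_j - T}^{t_j} \int_{0}^{\infty} C(x, y, t, \lambda) \, U_i(t, \lambda) \, d\lambda \, dt \Big\}    % ideal image field, cf. (11.1-5)

Here T enters as the observation period over which the ideal field integrates the incident light distribution.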

Usually, it will not be possible to restore the observed image perfectly, such that the output image field is identical to the ideal image field. The design objective of the image restoration processor is to minimize some error measure between the two. The discussion here is limited, for the most part, to a consideration of techniques that minimize the mean-square error between the ideal and estimated image fields as defined by

(11.1-6)

where E{ · } denotes the expectation operator. Often, it will be desirable to place side constraints on the error minimization, for example, to require that the image estimate be strictly positive if it is to represent light intensities that are positive. Because the restoration process is to be performed digitally, it is often more convenient to restrict the error measure to discrete points on the ideal and estimated image fields. These discrete arrays are obtained by mathematical models of perfect image digitizers that produce the arrays


(11.1-7b)

It is assumed that continuous image fields are sampled at a spatial period satisfying the Nyquist criterion. Also, quantization error is assumed negligible. It should be noted that the processes indicated by the blocks of Figure 11.1-1 above the dashed division line represent mathematical modeling and are not physical operations performed on physical image fields and arrays. With this discretization of the continuous ideal and estimated image fields, the corresponding mean-square restoration error is defined over the discrete arrays.
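As a sketch of the mean-square error measures just described, assuming the ideal and estimated continuous fields are written F_I^(i)(x, y, t_j) and F̂_I^(i)(x, y, t_j), and their sampled counterparts F_I^(i)(n_1, n_2, t_j) and F̂_I^(i)(n_1, n_2, t_j) (the hat and index symbols are assumptions introduced here):

\mathcal{E} = E\big\{ [ F_I^{(i)}(x, y, t_j) - \hat{F}_I^{(i)}(x, y, t_j) ]^2 \big\}    % continuous criterion, cf. (11.1-6)

\mathcal{E} = E\Big\{ \sum_{n_1} \sum_{n_2} [ F_I^{(i)}(n_1, n_2, t_j) - \hat{F}_I^{(i)}(n_1, n_2, t_j) ]^2 \Big\}    % discrete counterpart

where E{·} is the expectation operator of Eq. 11.1-6.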

11.2 OPTICAL SYSTEMS MODELS

One of the major advances in the field of optics during the past 40 years has been the application of system concepts to optical imaging. Imaging devices consisting of lenses, mirrors, prisms, and so on can be considered to provide a deterministic transformation of an input spatial light distribution to some output spatial light distribution. Also, the system concept can be extended to encompass the spatial propagation of light through free space or some dielectric medium.

In the study of geometric optics, it is assumed that light rays always travel in a straight-line path in a homogeneous medium. By this assumption, a bundle of rays passing through a clear aperture onto a screen produces a geometric light projection of the aperture. However, if the light distribution at the region between the light and


dark areas on the screen is examined in detail, it is found that the boundary is not sharp. This effect is more pronounced as the aperture size is decreased. For a pinhole aperture, the entire screen appears diffusely illuminated. From a simplistic viewpoint, the aperture causes a bending of rays called diffraction. Diffraction of light can be quantitatively characterized by considering light as electromagnetic radiation that satisfies Maxwell's equations. The formulation of a complete theory of optical imaging from the basic electromagnetic principles of diffraction theory is a complex and lengthy task. In the following, only the key points of the formulation are presented; details may be found in References 1 to 3.

Figure 11.2-1 is a diagram of a generalized optical imaging system. A point in the object plane at coordinate (x_o, y_o) of intensity I_o(x_o, y_o) radiates energy toward an imaging system characterized by an entrance pupil, exit pupil, and intervening system transformation. Electromagnetic waves emanating from the optical system are focused to a point (x_i, y_i) on the image plane, producing an intensity I_i(x_i, y_i). The imaging system is said to be diffraction limited if the light distribution at the image plane produced by a point-source object consists of a converging spherical wave whose extent is limited only by the exit pupil. If the wavefront of the electromagnetic radiation emanating from the exit pupil is not spherical, the optical system is said to possess aberrations.

In most optical image formation systems, the optical radiation emitted by an object arises from light transmitted or reflected from an incoherent light source. The image radiation can often be regarded as quasimonochromatic in the sense that the spectral bandwidth of the image radiation detected at the image plane is small with respect to the center wavelength of the radiation. Under these joint assumptions, the imaging system of Figure 11.2-1 will respond as a linear system in terms of the intensity of its input and output fields. The relationship between the image intensity and object intensity for the optical system can then be represented by the superposition integral equation

I_i(x_i, y_i) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x_i, y_i; x_o, y_o) \, I_o(x_o, y_o) \, dx_o \, dy_o

where H(x_i, y_i; x_o, y_o) represents the image intensity response to a point source of light. Often, the intensity impulse response is space invariant and the input–output relationship is given by the convolution equation

I_i(x_i, y_i) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x_i - x_o, y_i - y_o) \, I_o(x_o, y_o) \, dx_o \, dy_o
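A minimal discrete sketch of the superposition and convolution relationships above; the grid size, the hypothetical impulse response H below, and its spatially varying width are illustrative assumptions, not taken from the text:

import numpy as np

N = 32
I_o = np.zeros((N, N))
I_o[8, 8] = 1.0                               # two point sources in the object plane
I_o[24, 20] = 1.0

def H(xi, yi, xo, yo):
    """Hypothetical intensity impulse response H(x_i, y_i; x_o, y_o).
    Its width grows with x_o, so this system is not space invariant."""
    sigma = 1.0 + 2.0 * xo / N
    r2 = (xi - xo) ** 2 + (yi - yo) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

# Discrete superposition: sum the weighted impulse responses of every
# (nonzero) object point at every image point.
I_i = np.zeros((N, N))
for xo, yo in zip(*np.nonzero(I_o)):
    for xi in range(N):
        for yi in range(N):
            I_i[xi, yi] += H(xi, yi, xo, yo) * I_o[xo, yo]

print(round(I_i.sum(), 3))                    # close to 2.0, the total object intensity

When the impulse response depends only on the coordinate differences (x_i - x_o, y_i - y_o), the double sum reduces to a convolution and can be computed with a library routine such as scipy.signal.convolve2d.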

Earth's atmosphere acts as an imaging system for optical radiation traversing a path through the atmosphere. Normally, the index of refraction of the atmosphere remains relatively constant over the optical extent of an object, but in some instances atmospheric turbulence can produce a spatially variable index of

refraction that leads to an effective blurring of any imaged object. An equivalent impulse response

(11.2-6)

where the K_n are constants, has been predicted mathematically and verified by experimentation (5) for long-exposure image formation. For convenience in analysis, the 5/6 exponent is often replaced by unity to obtain a Gaussian-shaped impulse response model of the form

(11.2-7)

where K is an amplitude scaling constant and b_x and b_y are blur-spread factors.

Under the assumption that the impulse response of a physical imaging system is independent of spectral wavelength and time, the observed image field can be modeled by the superposition integral equation

(11.2-8)

where O_C{·} is an operator that models the spectral and temporal characteristics of the physical imaging system. If the impulse response is spatially invariant, the model reduces to the convolution integral equation

F_O^{(i)}(x, y, t_j) = O_C\left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda) \, H(x - \alpha, y - \beta) \, d\alpha \, d\beta \right\}

FIGURE 11.2-2 Cross section of transfer function of a lens. Numbers indicate degree of
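As a sketch of the forms that Eqs. 11.2-6 to 11.2-8 commonly take, by analogy with the convolution equation above and with the long-exposure turbulence literature; the constants K_1 and K_2 and the exact argument of O_C are assumptions here, not necessarily Pratt's printed expressions:

H(x, y) = K_1 \exp\{ -K_2 (x^2 + y^2)^{5/6} \}    % long-exposure turbulence blur, cf. (11.2-6)

H(x, y) = K \exp\left\{ -\frac{x^2}{2 b_x^2} - \frac{y^2}{2 b_y^2} \right\}    % Gaussian approximation, cf. (11.2-7)

F_O^{(i)}(x, y, t_j) = O_C\left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda) \, H(x, y; \alpha, \beta) \, d\alpha \, d\beta \right\}    % space-variant form, cf. (11.2-8)

Setting b_x = b_y gives an isotropic blur, and the convolution equation printed above is the special case H(x, y; α, β) = H(x − α, y − β).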


11.3 PHOTOGRAPHIC PROCESS MODELS

There are many different types of materials and chemical processes that have been utilized for photographic image recording. No attempt is made here either to survey the field of photography or to deeply investigate the physics of photography. References 6 to 8 contain such discussions. Rather, the attempt here is to develop mathematical models of the photographic process in order to characterize quantitatively the photographic components of an imaging system.

11.3.1 Monochromatic Photography

The most common material for photographic image recording is silver halide emulsion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate, or paper backing.

FIGURE 11.3-1 Cross section of silver halide emulsion.

If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains.

The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency.

A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed


to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive transparency whose density is proportional to the original light exposure.

The relationships between light intensity exposing a film and the density of silver grains in a transparency or print can be described quantitatively by sensitometric measurements. Through sensitometry, a model is sought that will predict the spectral light distribution passing through an illuminated transparency or reflected from a print as a function of the spectral light distribution of the exposing light and certain physical parameters of the photographic process. The first stage of the photographic process, that of exposing the silver halide grains, can be modeled to a first-order approximation by the integral equation

(11.3-1)

The dependence of the film response on the exposure time and light intensity individually, rather than on their product alone, is called a reciprocity failure of the film. Another anomaly in exposure prediction is the intermittency effect, in which the exposures for a constant intensity light and for an intermittently flashed light differ even though the incident energy is the same for both sources. Thus, if Eq. 11.3-1 is to be utilized as an exposure model, it is necessary to observe its limitations: the equation is strictly valid only for a fixed exposure time and constant-intensity illumination.
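As a sketch of the kind of first-order exposure model that Eq. 11.3-1 describes, assuming a film spectral sensitivity S(λ) and a fixed exposure time t_E (both symbols are introduced here for illustration, not taken from the text):

X(C) = t_E \int_{0}^{\infty} C(\lambda) \, S(\lambda) \, d\lambda    % exposure as spectrally weighted incident energy, cf. (11.3-1)

Reciprocity failure and the intermittency effect are precisely the situations in which the developed density is not determined by this integrated quantity alone.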

The transmittance of a developed reversal or nonreversal transparency as a function of wavelength can be ideally related to the density of silver grains by the exponential law of absorption as given by

(11.3-2)

where the characteristic density as a function of wavelength is defined for a reference exposure value, and d_e is a variable proportional to the actual exposure. For monochrome transparencies, the characteristic density function is reasonably constant over the visible region. As Eq. 11.3-2 indicates, high silver densities result in low transmittances, and vice versa. It is common practice to change the proportionality constant of Eq. 11.3-2 so that measurements are made in exponent ten units. Thus, the transparency transmittance can be equivalently written as

(11.3-3)


where d_x is the density variable, inversely proportional to exposure, for exponent ten units. From Eq. 11.3-3, it is seen that the photographic density is logarithmically related to the transmittance. Thus,

(11.3-4)

The reflectivity of a photographic print as a function of wavelength is also inversely proportional to its silver density, and follows the exponential law of absorption of Eq. 11.3-2. Thus, from Eqs. 11.3-3 and 11.3-4, one obtains directly

(11.3-5)

(11.3-6)

where d_x is an appropriately evaluated variable proportional to the exposure of the photographic paper.
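As a sketch of the relations that Eqs. 11.3-2 to 11.3-6 express, assuming a characteristic density function D(λ), a transmittance τ(λ), and a print reflectivity r(λ) (these symbols are assumptions introduced here):

\tau(\lambda) = \exp\{ -d_e D(\lambda) \}    % exponential law of absorption, cf. (11.3-2)

\tau(\lambda) = 10^{-d_x D(\lambda)}    % exponent-ten form, cf. (11.3-3)

d_x D(\lambda) = -\log_{10} \tau(\lambda)    % density as the negative log of transmittance, cf. (11.3-4)

r(\lambda) = 10^{-d_x D(\lambda)}    % print reflectivity, cf. (11.3-5)

d_x D(\lambda) = -\log_{10} r(\lambda)    % cf. (11.3-6)

For example, with D(λ) approximately 1 over the visible region, a transmittance of 0.01 corresponds to a density d_x of about 2.0.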

The relational model between photographic density and transmittance or reflectivity is straightforward and reasonably accurate. The major problem is the next step of modeling the relationship between the exposure X(C) and the density variable d_x. Figure 11.3-2a shows a typical curve of the transmittance of a nonreversal transparency

FIGURE 11.3-2 Relationships between transmittance, density, and exposure for a
