
Research Article: Real-Time Multiple Moving Targets Detection from Airborne IR Imagery by Dynamic Gabor Filter and Dynamic Gaussian Detector


Fenghui Yao,1 Guifeng Shao,1 Ali Sekmen,1 and Mohan Malkani2

1 Department of Computer Science, College of Engineering, Technology and Computer Science, Tennessee State University,

3500 John A Merritt Blvd, Nashville, TN 37209, USA

2 Department of Electrical and Computer Engineering, Tennessee State University, Nashville, TN 37209, USA

Correspondence should be addressed to Fenghui Yao, fyao@tnstate.edu

Received 1 February 2010; Revised 18 May 2010; Accepted 29 June 2010

Academic Editor: Jian Zhang

Copyright © 2010 Fenghui Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a robust approach to detect multiple moving targets from aerial infrared (IR) image sequences. The proposed novel method is based on a dynamic Gabor filter and a dynamic Gaussian detector. First, the motion induced by the airborne platform is modeled by a parametric affine transformation, and the IR video is stabilized by eliminating the background motion. A set of feature points is extracted and categorized into inliers and outliers. The inliers are used to estimate the affine transformation parameters, and the outliers are used to localize moving targets. Then, a dynamic Gabor filter is employed to enhance the difference images for more accurate detection and localization of moving targets. The Gabor filter's orientation is dynamically changed according to the orientation of the optical flows. Next, the specular highlights generated by the dynamic Gabor filter are detected. The outliers and specular highlights are fused to identify the moving targets. If a specular highlight lies in an outlier cluster, it corresponds to a target; otherwise, the dynamic Gaussian detector is employed to determine whether the specular highlight corresponds to a target. The detection speed is approximately 2 frames per second, which meets the real-time requirement of many target tracking systems.

1. Introduction

Detection of moving targets in infrared (IR) imagery is a challenging research topic in computer vision. Detecting and localizing a moving target accurately is important for automatic tracking system initialization and recovery from tracking failure. Although many methods have been developed for detecting and tracking targets in visual images (generated by daytime cameras), there exists a limited amount of work on target detection and tracking from IR imagery in the computer vision community [1]. IR images are obtained by sensing the radiation in the IR spectrum, which is either emitted or reflected by the objects in the scene. Due to this property, IR images can provide information which is not available in visual images. However, in comparison to visual images, the images obtained from an IR camera have an extremely low signal-to-noise ratio, which results in limited information for performing detection and tracking tasks. In addition, in airborne IR images, nonrepeatability of the target signature, competing background clutter, lack of a priori information, high ego-motion of the sensor, and artifacts due to weather conditions make detection or tracking of targets even harder. To overcome the shortcomings of the nature of IR imagery, different approaches impose different constraints to provide solutions for a limited number of situations. For instance, several detection methods require that the targets are hot spots which appear as bright regions in the IR images [2–4]. Similarly, some other methods assume that target features do not drastically change over the course of tracking [4–7] or that sensor platforms are stationary [5]. However, in realistic target detection scenarios, none of these assumptions are applicable, and a robust detection method must successfully deal with these problems.

This paper presents an approach for robust real-time target detection in airborne IR imagery. This approach has the following characteristics: (1) it is robust in the presence of high global motion and significant texture in the background; (2) it does not require that targets have constant velocity or acceleration; (3) it does not assume that target features remain unchanged over the course of tracking. There are two contributions in our approach. The first contribution is the dynamic Gabor filter. In airborne IR video, the whole background appears to be moving because of the motion of the airborne platform. Hence, the motion of the targets must be distinguished from the motion of the background. To achieve this, the background motion is modeled by a global parametric transformation, and then a motion image is generated by frame differencing. However, the motion image generated by frame differencing using an IR camera is weaker compared to that of a daytime camera. Especially in the presence of significant texture in the background, a small error in the global motion model estimation accumulates large errors in the motion image. This makes it impossible to detect the target from the motion image directly. To solve this problem, we employ a Gabor filter to enhance the motion image. The orientation of the Gabor filter is changed from frame to frame, and therefore we call it a dynamic Gabor filter. The second contribution is the dynamic Gaussian detector. After applying the dynamic Gabor filter, the target detection problem becomes the detection of specular highlights. We employ both specular highlights and clusters of outliers (the feature points corresponding to the moving objects) to detect the targets. If a specular highlight lies in a cluster of outliers, it is considered a target. Otherwise, the Gaussian detector is applied to determine whether a specular highlight corresponds to a target or not. The orientation of the Gaussian detector is determined by the principal axis of the highlight. Therefore, we call it a dynamic Gaussian detector.

The remainder of the paper is organized as follows. Section 2 provides a literature survey on detecting moving targets in airborne IR videos. In Section 3, the proposed algorithm is described in detail. Section 4 presents the experimental results. Section 5 gives the performance analysis of the proposed algorithm. Conclusions and future work are given in Section 6.

2. Related Work

For the detection of IR targets, many methods use the hot spot technique, which assumes that the target IR radiation is much stronger than the radiation of the background and the noise. The goal of these target detectors is then to detect the center of the region with the highest intensity in the image, which is called a hot spot [1]. The hot spot detectors use various spatial filters to detect the targets in the scene. Chen and Reed modeled the underlying clutter and noise after local demeaning as a whitened Gaussian random process and developed a constant false alarm rate detector using the generalized maximum likelihood ratio [2]. Longmire and Takken developed a spatial filter based on least mean square (LMS) to maximize the signal-to-clutter ratio for a known and fixed clutter environment [3]. Morin presented a multistage infinite impulse response (IIR) filter for detecting dim point targets [8]. Tzannes and Brooks presented a generalized likelihood ratio test (GLRT) solution to detect small (point) targets in a cluttered background when both the target and clutter are moving through the image scene [9]. These methods do not work well in the presence of significant texture in the background because they rely on the assumption that the target IR radiation is much stronger than the radiation of the background and the noise. This assumption is not always satisfied. For instance, Figure 1 shows two IR images with significant texture in the background, each containing three vehicles on a road. The IR radiation from the asphalt concrete road and the street lights is much stronger than that of the vehicle bodies, and the street lights appear in the IR images as hot spots while the vehicles do not.

Yilmaz et al. applied fuzzy clustering, edge fusion, and local texture energy techniques directly to the input IR image to detect the targets [1]. This method works well for IR videos with simple background texture such as ocean or sky. For IR videos like those shown in Figure 1, this method will fail because the textures are complicated and edges run across the entire images. In addition, this algorithm requires an initialization of the target bounding box in the frame where the target first appears. Furthermore, it can only detect and track a single target. Recently, Yin and Collins developed a method to detect and localize moving targets in IR imagery by forward-backward motion history images (MHI) [10]. Motion history images accumulate change detection results with a decay term over a short period of time, that is, a motion history length L. This method can accurately detect the location and shape of multiple moving objects in the presence of significant texture in the background. The drawback of this method is that it is difficult to determine the proper value for the motion history length L. Even if a well-tuned motion history length works well for one input video, it may not work for other input videos. In airborne IR imagery, the moving objects may be small, and their intensity appearance may be camouflaged. To guarantee that the object shape can be detected well, a large L can be selected, but this will lengthen the lag of the target detection system. In this paper, we present a method for target detection in airborne IR imagery, which is motivated by the need to overcome some of the shortcomings of existing algorithms. Our method makes no assumption on target velocity and acceleration, object intensity appearance, or camera motion. It can detect multiple moving targets in the presence of significant texture in the background. Section 3 describes this algorithm in detail.

3. Algorithm Description

The extensive literature survey indicates that moving target detection from stationary cameras has been well researched and various algorithms have been developed. When the camera is mounted on an airborne platform, the whole background of the scene appears to be moving, and the actual motion of the targets must be distinguished from the background motion without any assumption on the velocity and acceleration of the platform. Also, the algorithm must work in real time; that is, time-consuming algorithms that repeatedly process all image pixels are not applicable for this problem.


Figure 1: Two sample IR images with significant texture in the background. (a) Frame 98 in dataset 1; (b) Frame 0 in dataset 3.

To solve these problems, we propose an approach to perform real-time multiple moving target detection in airborne IR imagery. This algorithm can be formulated in the following four steps.

Step 1 (Motion Compensation). It consists of feature point detection, optical flow detection, estimation of the global transformation model parameters, and frame differencing.

Step 2 (Dynamic Gabor Filtering). The frame difference image generated in Step 1 is weak, and it is difficult to detect targets from the frame difference image directly. We employ a Gabor filter to enhance the frame difference image. The orientation of the Gabor filter is dynamically controlled by the orientation of the optical flows; therefore, we call it a dynamic Gabor filter (see the sketch after this overview).

Step 3 (Specular Highlights Detection). After the dynamic Gabor filtering, the image changes appear as strong intensities in the dynamic Gabor filter response. We call these strong intensities specular highlights. The target detection problem then becomes specular highlight detection. The detector employs specular highlight point detection and clustering techniques to identify the center and size of the specular highlights.

Step 4 (Target Localization). If a specular highlight lies in a cluster of outliers, it is considered a target. Otherwise, the Gaussian detector is employed for further discrimination. The orientation of the specular highlight is used to control the orientation of the Gaussian detector; therefore, we call it a dynamic Gaussian detector.

The processing flow of this algorithm is shown in Figure 2. The following describes the above processing steps in detail.
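To make Step 2 concrete, the following minimal sketch (in Python with OpenCV, not code from the paper) builds a Gabor kernel whose orientation follows the dominant direction of the optical flows and applies it to a frame-difference image. The kernel size, sigma, wavelength, and the use of the median flow angle are illustrative assumptions rather than the authors' settings.

import cv2
import numpy as np

def dynamic_gabor_enhance(diff_img, flow_vectors,
                          ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Enhance a frame-difference image with a Gabor filter whose
    orientation tracks the dominant optical-flow direction.

    diff_img     : single-channel frame-difference image
    flow_vectors : (K, 2) array of optical-flow displacement vectors
    The filter parameters are illustrative, not the paper's values.
    """
    # Dominant flow orientation, summarized here by the median angle.
    angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])
    theta = float(np.median(angles))

    # Gabor kernel oriented along the dominant flow direction.
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                lambd, gamma, psi=0, ktype=cv2.CV_32F)
    kernel -= kernel.mean()  # zero-mean kernel so flat regions stay dark

    response = cv2.filter2D(diff_img.astype(np.float32), cv2.CV_32F, kernel)
    return np.abs(response), theta

Recomputing theta for every frame from the current optical flows is what makes the filter "dynamic" in the sense used by the authors.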

3.1. Motion Compensation. Motion compensation is a technique for describing an image in terms of the transformation of a reference image to the current image. The reference image can be the previous image in time.

Figure 2: Processing flow of the algorithm. The input images I_{t−Δ} and I_t pass through motion compensation (feature point detection, inlier and outlier extraction, global model estimation, and motion detection), followed by dynamic Gabor filtering, specular highlight detection, and outlier clustering.


3.1.1. Feature Point Extraction. Feature point extraction is used as the first step of many vision tasks such as tracking, localization, image mapping, and recognition. Hence, many feature point detectors exist in the literature. The Harris corner detector, Shi-Tomasi's corner detector, SUSAN, SIFT, SURF, and FAST are some representative feature point detection algorithms developed over the past two decades. The Harris corner detector [11] computes an approximation to the second derivative of the sum-of-squared-difference (SSD) between a patch around a candidate corner and patches shifted around it. The resulting matrix is

H = [⟨I_x²⟩ ⟨I_x I_y⟩; ⟨I_x I_y⟩ ⟨I_y²⟩],

where I_x and I_y are the image gradients and angle brackets denote averaging performed over the image patch. The corner response is defined as

C = |H| − k(trace H)²,   (2)

where k is a tunable sensitivity parameter. A corner is characterized by a large variation of C in all directions of the vector (x, y). Shi and Tomasi [12] conclude that it is better to use the smallest eigenvalue of H as the corner strength function, that is,

C = min(λ₁, λ₂).   (3)

SUSAN [13] computes self-similarity by looking at the proportion of pixels inside a disc whose intensity is within some threshold of the center (nucleus) value. Pixels closer in value to the nucleus receive a higher weighting. This measure is known as the USAN (Univalue Segment Assimilating Nucleus). A low value of the USAN indicates a corner, since the center pixel is very different from most of its surroundings. A set of rules is used to suppress qualitatively "bad" features, and then local minima of the SUSANs (Smallest USAN) are selected from the remaining candidates. SIFT (Scale Invariant Feature Transform) [14] obtains scale invariance by convolving the image with a Difference of Gaussians (DoG) kernel at multiple scales, retaining locations which are optima in scale as well as space. DoG is used because it is a good approximation of the Laplacian of a Gaussian (LoG) and much faster to compute. SURF (Speeded Up Robust Features) [15] is based on the Hessian matrix, but uses a very basic approximation, just as DoG is a very basic Laplacian-based detector; it relies on integral images to reduce the computation time. The FAST (Features from Accelerated Segment Test) feature detector [16] considers pixels in a Bresenham circle of radius r around the candidate point. If n contiguous pixels are all brighter than the nucleus by at least t, or all darker than the nucleus by t, then the pixel under the nucleus is considered to be a feature. Although r can, in principle, take any value, only a value of 3 is used (corresponding to a circle of 16 pixels circumference), and tests show that the best value of n is 9.
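The two corner-strength measures in (2) and (3) can be computed directly from the averaged structure tensor H. The sketch below is a straightforward NumPy/OpenCV illustration; the Sobel aperture and averaging window are assumed values chosen only for this example.

import cv2
import numpy as np

def corner_responses(gray, window=5, k=0.04):
    """Harris response |H| - k (trace H)^2 and Shi-Tomasi response
    min(lambda1, lambda2), both derived from the structure tensor H."""
    g = gray.astype(np.float32)
    Ix = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)

    # <Ix^2>, <Iy^2>, <Ix Iy>: averaging over the local image patch.
    Sxx = cv2.boxFilter(Ix * Ix, cv2.CV_32F, (window, window))
    Syy = cv2.boxFilter(Iy * Iy, cv2.CV_32F, (window, window))
    Sxy = cv2.boxFilter(Ix * Iy, cv2.CV_32F, (window, window))

    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    harris = det - k * trace * trace                 # Eq. (2)

    # Closed-form smallest eigenvalue of the symmetric 2x2 tensor.
    root = np.sqrt(((Sxx - Syy) * 0.5) ** 2 + Sxy ** 2)
    shi_tomasi = trace * 0.5 - root                  # Eq. (3)
    return harris, shi_tomasi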

For our real-time IR target detection in airborne videos, a fast and reliable feature point detection algorithm is needed. However, the processing time depends on the image contents. To compare the candidate detectors, they were applied to a synthesized image with known ground truth corners (Figure 3); the measured processing times and feature point counts are listed in Table 1.

Table 1: Feature point detectors and their processing times for the synthesized image in Figure 3. Columns: feature point detector, processing time (ms), and number of feature points; for example, the Harris corner detector takes 47 ms and returns 82 feature points.

FAST detects multiple feature points (refer to Figure 3(f)) in the local area of each real corner. The total number of corners detected is 1424, which is much larger than the number of ground truth corners. Furthermore, we tested this algorithm using images from the airborne IR camera; it fails to extract feature points for many images. FAST is therefore not suitable for feature point detection in airborne imagery. (iii) The Harris corner detector is fast, but it missed many ground truth corners; it is not a candidate for our algorithm. (iv) The processing times for SUSAN and Shi-Tomasi's corner detector are almost the same. SUSAN detects more ground truth corners than Shi-Tomasi's method for this synthesized image. To further investigate the robustness of SUSAN and Shi-Tomasi's corner detector, another 640×512 full-color test image was synthesized. This test image contains 252 (14 rows, 18 columns) randomly colored triangles, which form 518 (37×13 + 18 (top) + 19 (bottom)) ground truth corners. The experiment result is shown in Figure 4. Shi-Tomasi's method detected 265 corner points, marked by small red rectangles in Figure 4(a), all of which are ground truth corner points. SUSAN detected 598 corner points, depicted by small red rectangles in Figure 4(b), which include 80 false corner points (refer to the two close small rectangles at the top vertex of some triangles). These false corner points will deteriorate the postprocessing. Furthermore, the robustness of these two detectors is investigated using the IR images from the airborne IR camera shown in Figure 1, in which (a) shows an IR image with complicated content and (b) one with relatively simple content. The experiment results are shown in Figure 5, in which (a) shows the corner points detected by Shi-Tomasi's method and (b) those detected by SUSAN. Although it is difficult to tell which ones are true corner points in Figures 5(a) and 5(b), it is obvious that (b) contains many false corner points. From these results, it is clear that Shi-Tomasi's method is more robust than SUSAN. For more details about the performance evaluation of corner detection algorithms, readers are referred to [17].

From the above results and discussion, this paper employs Shi-Tomasi's method to detect feature points. For the two input images, let P_{t'} = {p^{t'}_1, ..., p^{t'}_M} and P_t = {p^t_1, ..., p^t_N} denote the feature points detected from I_{t'} and I_t, respectively, where I_{t'} is called the previous image and I_t is called the current image or reference image. These feature points are used for optical flow detection.
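In practice the Shi-Tomasi criterion is available directly in OpenCV as goodFeaturesToTrack. A minimal sketch for obtaining P_{t'} and P_t from the two input frames follows; the corner count, quality level, and minimum distance are assumed values, not the settings used by the authors.

import cv2

def detect_feature_points(prev_gray, cur_gray, max_corners=300):
    """Shi-Tomasi feature points P_{t'} and P_t for two grayscale frames."""
    params = dict(maxCorners=max_corners, qualityLevel=0.01, minDistance=7)
    p_prev = cv2.goodFeaturesToTrack(prev_gray, **params)  # P_{t'}
    p_cur = cv2.goodFeaturesToTrack(cur_gray, **params)    # P_t
    return p_prev, p_cur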


Figure 4: Feature points detected by (a) Shi and Tomasi's corner detector and (b) the SUSAN corner detector, for the 640×512 color image.


Figure 5: Feature points detected by (a) Shi and Tomasi's corner detector and (b) the SUSAN corner detector, for the 640×512 color image.

3.1.2. Optical Flow Detection. The optical flow is the apparent motion of the brightness patterns in the image [18]. In our algorithm, the feature points obtained in the previous section are used as the brightness patterns in the definition of optical flow [18]. That is, the task for optical flow detection is to find the corresponding feature point p^t_j in frame I_t for each feature point p^{t'}_i in frame I_{t'}, where i = 1, 2, ..., M and j = 1, 2, ..., N.

There are many optical flow detection algorithms, and there have recently been several new developments on this topic. Black and Anandan [19] proposed a framework based on robust estimation that addresses violations of the brightness constancy and spatial smoothness assumptions caused by multiple motions. Bruhn et al. [20] developed a differential method that combines local methods such as the Lucas-Kanade technique and global methods such as the Horn-Schunck approach. Zitnick et al.'s method is based on statistical modeling of an image pair using constraints on appearance and motion [21]. Bouguet's method is a pyramidal implementation of the Lucas-Kanade technique [22]. The evaluation results of these four algorithms show that Bouguet's method is the best for the interpolation task [23]. As measured by average rank, the best performing algorithms for the ground truth motion are those of Bruhn et al. and Black and Anandan.

In our algorithm, we employed Bouguet's method for optical flow detection. Figures 6(a) and 6(b) show two input images, I_{t'} and I_t. The frame interval, Δ, is an important parameter that affects the quality of the optical flow. If it is too small, the displacement between two consecutive frames is also too small (close to zero), and the optical flow cannot be precisely detected. If it is too large, the error in finding the corresponding feature points in the consecutive frame increases, and the optical flow again cannot be precisely detected. In our airborne videos, the helicopter flew at a very high altitude, and the displacement between consecutive image frames is relatively small. To speed up the algorithm, Δ is set at 3. Experiments show that our algorithm works well for Δ = 1, ..., 4. Figure 6(c) shows the optical flows detected from the feature points {p^{t'}_1, ..., p^{t'}_M} and {p^t_1, ..., p^t_N}, where the optical flows are marked by red line segments and their endpoints by green dots. Let F^{t't} = {F^{t't}_1, F^{t't}_2, ..., F^{t't}_K} denote the detected optical flows. Note that the start point of the ith optical flow, F^{t't}_i, belongs to set P_{t'}, and its endpoint belongs to set P_t. The feature points in sets P_{t'} and P_t from which no optical flow is detected are filtered out. Therefore, after this filtering operation, the number of feature points in the two sets P_{t'} and P_t becomes the same as the number of optical flows in the set F^{t't}, that is, K, and the sets become P_{t'} = {p^{t'}_1, ..., p^{t'}_K} and P_t = {p^t_1, ..., p^t_K}, accordingly. In the following, in order to make the description easier, we consider that the feature points in P_t are sorted so that the start point and endpoint of F^{t't}_i are p^{t'}_i ∈ P_{t'} and p^t_i ∈ P_t, respectively; that is, F^{t't}_i corresponds to the pair p^{t'}_i, p^t_i. Note that there is no need to perform this sorting in the implementation, because the optical flow F^{t't}_i holds the index information for the feature points in sets P_{t'} and P_t.
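A minimal sketch of this step using OpenCV's pyramidal Lucas-Kanade routine (Bouguet's method): the feature points from I_{t'} are tracked into I_t, and points for which no flow is found are filtered out, leaving K matched pairs. The window size and pyramid depth are illustrative assumptions.

import cv2
import numpy as np

def detect_optical_flows(prev_gray, cur_gray, p_prev):
    """Track feature points P_{t'} into the current frame with pyramidal LK.

    p_prev is the (N, 1, 2) float32 array returned by goodFeaturesToTrack.
    Returns the K matched points in both frames and the flow vectors F^{t't}.
    """
    p_cur, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, p_prev, None,
        winSize=(21, 21), maxLevel=3)

    tracked = status.ravel() == 1            # drop points with no flow
    p_prev_k = p_prev[tracked].reshape(-1, 2)
    p_cur_k = p_cur[tracked].reshape(-1, 2)
    flows = p_cur_k - p_prev_k               # optical-flow vectors
    return p_prev_k, p_cur_k, flows

With the frame interval used by the authors (Δ = 3), prev_gray and cur_gray would be frames t − 3 and t.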

3.1.3. Global Parametric Motion Model Estimation

(A) Transformation Model Selection. Motion compensation requires finding the coordinate transformation between two consecutive images. It is important to have a precise description of the coordinate transformation between a pair of images. By applying the appropriate transformation via a warping operation and subtracting the warped image from the reference image, it is possible to construct the frame difference that contains the image changes (motion image). There exist many publications about motion parameter estimation which can be used for motion compensation. A coordinate transformation maps the image coordinates, x′ = (x′, y′)^T, to a new set of coordinates, x = (x, y)^T. Generally, the approach to finding the coordinate transformation relies on assuming that it will take one of the following six models: (1) translation, (2) affine, (3) bilinear, (4) projective, (5) pseudo-perspective, and (6) biquadratic, and then estimating the two to twelve parameters of the chosen model.

The translation model is based on the assumption that the coordinate transformation between frames is only a translation. Although it is easy to implement, it handles large changes due to camera rotation, panning, and tilting very poorly, so this model is not suitable for our purpose. On the other hand, the parameter estimation in the 8-parameter projective model and the 12-parameter biquadratic model becomes complicated, and time-consuming models are not suitable for real-time applications; therefore, our algorithm does not employ these two models either. The following investigates the affine, bilinear, and pseudo-perspective models. Let (x′, y′) denote the feature point coordinates in the previous image, and (x, y) the coordinates in the current image. The affine model is given by

x = a1 x′ + a2 y′ + a5,   y = a3 x′ + a4 y′ + a6.


Figure 7(b) shows the transformed image for the image in Figure 7(a), obtained by applying the bilinear transformation with parameters a1 = a6 = 1.0, a4 = −0.001, and the others (a2, a3, a5, a7, a8) equal to 0.0. For this set of parameters, if a4 is also set to 0.0, no transformation is applied to the original image. However, when a4 is set to −0.001, which corresponds to a4 containing a 1‰ error, the output image is greatly deformed. Similarly, Figure 7(c) shows the transformed image for the image in (a), obtained by applying the pseudo-perspective transformation with parameters a2 = a8 = 1.0, a5 = 0.001, and the others (a1, a3, a4, a6, a7) equal to 0.0. For this set of parameters, if a5 is also set to 0.0, no transformation is applied to the original image. However, when a5 is set to 0.001, which corresponds to a5 containing a 1‰ error, the output image is greatly deformed. These results show that the bilinear model and the pseudo-perspective model are sensitive to parameter errors: a small error in parameter estimation may cause a huge difference in the transformed images. We used the images from the airborne IR camera to test the frame difference based on these two models, and the results are poor. In contrast, the affine transformation captures translation, rotation, and scale, although it cannot capture camera pan and tilt motion. However, in the systems that generate airborne videos, cameras are usually mounted on a moving platform such as a helicopter or a UAV (unmanned aerial vehicle), in which case there is no camera pan and tilt motion. Figure 7(d) shows the transformed image for the image in (a), obtained by applying the affine transformation with parameters a1 = a4 = 1.0, a2 = 0.02, a3 = −0.02, and a5 = a6 = 1.0. This setting corresponds to a2 and a3 containing a 2% error each. Comparing the results in Figures 7(b), 7(c), and 7(d), we can say that even though the parameter estimation error in the affine transformation is 20 times larger than the error in the bilinear or pseudo-perspective transformation (2% in the affine transform versus 1‰ in the bilinear and pseudo-perspective transformations), the image deformation is still tolerable (see Figure 7(d)). This result shows that the affine model is robust to parameter errors. Therefore, in our algorithm, we employ the affine model for motion detection.
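The perturbation experiment described for Figure 7(d) can be reproduced with a few lines of OpenCV. The sketch below warps an image with a near-identity affine matrix whose off-diagonal terms carry the 2% error quoted in the text; how a1...a6 map onto the 2×3 matrix is an assumption about the paper's parameter ordering, not something the excerpt confirms.

import cv2
import numpy as np

def warp_with_affine_error(img):
    """Warp an image with a near-identity affine model whose off-diagonal
    terms (a2, a3) contain a 2% error, as in the Figure 7(d) experiment.
    The placement of a1..a6 inside the 2x3 matrix is assumed."""
    a1, a2, a3, a4, a5, a6 = 1.0, 0.02, -0.02, 1.0, 1.0, 1.0
    M = np.float32([[a1, a2, a5],
                    [a3, a4, a6]])
    h, w = img.shape[:2]
    return cv2.warpAffine(img, M, (w, h))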

(B) Inliers/Outliers Separation. The feature points in sets P_{t'} and P_t are linked one to one by the optical flows F^{t't} = {F^{t't}_1, F^{t't}_2, ..., F^{t't}_K}. Some of the feature points in sets P_{t'} and P_t are associated with the background, and some with the moving targets. The feature points associated with the moving targets are called outliers; those associated with the background are called inliers. To detect the motion image (that is, the image changes) for two consecutive images, the previous image is warped to the current image by the affine transformation, and then the frame difference image is obtained by image subtraction. This operation needs a precise transformation model. To estimate the transformation model precisely, the outliers must be excluded. That is, the feature points need to be categorized into outliers and inliers, and only the inliers are used to estimate the affine transformation parameters. The inliers and outliers are separated automatically by the following algorithm.

Inliers/Outliers Separation Algorithm. (i) Using all feature points in sets P_{t'} and P_t, the six parameters of the affine model are first estimated by the least-squares method [24]. That is, a1, ..., a6 are obtained by solving, in the least-squares sense, the system

x_i = a1 x′_i + a2 y′_i + a5,   y_i = a3 x′_i + a4 y′_i + a6,   i = 1, ..., K,   (7)

where (x′_i, y′_i) ∈ P_{t'} and (x_i, y_i) ∈ P_t. Let A denote the affine model obtained from (7).

(ii) Applying A to the feature points in set P_{t'}, the transformed feature points are obtained. The error for the ith pair is

e_i = ‖A(p^{t'}_i) − p^t_i‖,   i = 1, ..., K,   (8)

where ‖·‖ denotes the norm operation.

(iii) Inliers and outliers are discriminated by thresholding this error, where λ_E is the weighting coefficient that scales the threshold. The value of λ_E depends on the size of the moving targets: the larger the moving targets are, the smaller the value of λ_E needs to be. In airborne IR videos, the moving targets are relatively small because the observer is at high altitude, so λ_E can be relatively large. Experiments show that the value of λ_E can be in the range of 1.0 to 1.4; it is currently set at 1.3.

The algorithm described above is based on the fact that, for the feature points belonging to the moving targets, the error defined in (8) is large because the corresponding feature points move together with the moving targets. Figure 6(c) shows the inliers/outliers separation for the feature points detected from the input images in Figures 6(a) and 6(b); the outliers are marked by blue dots. After this operation, P_{t'} is separated into an inlier set P^{t'}_in = {p^{t'}_1, ..., p^{t'}_{Kin}} and an outlier set P^{t'}_out = {p^{t'}_1, ..., p^{t'}_{Kout}}, and F^{t't} is separated into the optical flows F^{t't}_in = {F^{t't}_1, ..., F^{t't}_{Kin}} corresponding to inliers and the optical flows F^{t't}_out = {F^{t't}_1, ..., F^{t't}_{Kout}} corresponding to outliers, with Kin + Kout = K. Again, in the following, to make the description easier, we assume the inlier points and their optical flows are indexed correspondingly. The inliers are used to estimate the affine transformation parameters, and their optical flows are used in the dynamic Gabor filter (refer to Section 3.2.2). The outliers are used in target localization (refer to Section 3.4.2).
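The separation procedure above can be sketched in a few lines of NumPy: fit a first affine model to all matched pairs by least squares, compute the residual of every pair under that model, and mark pairs with large residuals as outliers. The specific rule used here, comparing each residual against λ_E times the mean residual, is an assumed reading of the paper's thresholding criterion.

import numpy as np

def separate_inliers_outliers(p_prev, p_cur, lambda_e=1.3):
    """Split matched feature points into inliers (background) and outliers
    (moving targets) with a least-squares affine fit.

    p_prev, p_cur : (K, 2) arrays of corresponding points in I_{t'} and I_t.
    A pair is an outlier when its residual exceeds lambda_e times the mean
    residual (assumed form of the paper's criterion).
    """
    K = p_prev.shape[0]
    # Stack the equations x = a1 x' + a2 y' + a5 and y = a3 x' + a4 y' + a6.
    A = np.zeros((2 * K, 6))
    b = np.zeros(2 * K)
    A[0::2, 0] = p_prev[:, 0]; A[0::2, 1] = p_prev[:, 1]; A[0::2, 4] = 1.0
    A[1::2, 2] = p_prev[:, 0]; A[1::2, 3] = p_prev[:, 1]; A[1::2, 5] = 1.0
    b[0::2] = p_cur[:, 0]
    b[1::2] = p_cur[:, 1]

    params, *_ = np.linalg.lstsq(A, b, rcond=None)   # a1 .. a6
    a1, a2, a3, a4, a5, a6 = params

    # Residual e_i = ||A(p'_i) - p_i|| for every pair, as in Eq. (8).
    pred = np.column_stack([a1 * p_prev[:, 0] + a2 * p_prev[:, 1] + a5,
                            a3 * p_prev[:, 0] + a4 * p_prev[:, 1] + a6])
    err = np.linalg.norm(pred - p_cur, axis=1)

    inlier_mask = err <= lambda_e * err.mean()
    return inlier_mask, params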

(C) Affine Transformation Parameter Estimation. There are six parameters in the affine transformation. Three pairs of feature points from P^{t'}_in and P^t_in are needed to estimate these six parameters. However, an affine model determined using only three pairs of feature points might not be accurate. To determine these parameters efficiently and precisely, our method employs the following algorithm.

Affine Model Estimation Algorithm. (1) Randomly choose L triplets of inlier pairs from P^{t'}_in and P^t_in, respectively. For each triplet, an affine model is computed from its three point pairs.


Let A = (A_1, A_2, ..., A_L) represent these L affine models. They are used to determine the best affine model as follows.

(2) The LACC c_ij, a correlation coefficient computed over a local matching area, is used to evaluate whether two feature points are matched. It correlates the intensities of I^{t'}_A (the previous image transformed by the candidate affine model) and I_t over a matching window of half-width m and half-length n centered at the ith and jth feature points (x′_i, y′_i) and (x_j, y_j), normalized by the standard deviations of the two images in the matching window. c_ij ranges from −1 to 1, indicating the similarity from smallest to largest. Once again, as mentioned in Section 3.1.3(B), the optical flows keep the corresponding relation between the ith feature point in P^{t'}_in and the jth feature point in P^t_in; to simplify the description, we simply say that the feature points in P^{t'}_in are matched to those in P^t_in one to one, from 1 to Kin, so c_ij can be rewritten as c_ii. The evaluation value E_l of an affine model A_l is obtained by accumulating these matching scores over the inlier pairs.

(3) The affine model A_b ∈ A whose evaluation value is maximal, that is, E_b = max(E_1, E_2, ..., E_L), is selected as the best affine model. The probability that at least one of the L triplets consists entirely of correct inlier pairs is

p(ε, q, L) = 1 − (1 − ((1 − ε)q)³)^L,   (16)

where ε (< 0.5) is the ratio of the moving target regions to the whole image, and q is the probability that the corresponding points are inliers. The probability that this algorithm picks up the outliers is 1 − p. For example, p ≈ 0.993 when ε = 0.3, q = 0.7, and L = 40, and then 1 − p = 0.007. That is, the probability that the outliers will influence the affine transformation estimation is very low, provided that the moving targets constitute a small area (i.e., less than 50% of the image). For an airborne video camera, this requirement is easily satisfied.
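The following sketch mirrors the triplet-sampling scheme: each of L random triplets of inlier pairs defines a candidate affine model via cv2.getAffineTransform, each candidate is scored, and the best one is kept. Scoring by the mean re-projection error over all inliers is a simplification; the paper scores candidates with the LACC instead. The helper success_probability evaluates Equation (16) and reproduces p ≈ 0.993 for ε = 0.3, q = 0.7, L = 40.

import cv2
import numpy as np

def success_probability(eps, q, L):
    """Equation (16): probability that at least one of the L triplets
    consists entirely of correct inlier correspondences."""
    return 1.0 - (1.0 - ((1.0 - eps) * q) ** 3) ** L

def estimate_best_affine(p_in_prev, p_in_cur, L=40, rng=None):
    """Pick the best of L triplet-based candidate affine models.

    Candidates are scored here by mean re-projection error over all inlier
    pairs; the paper uses a local-area correlation score instead, so this
    is only a simplified stand-in for that evaluation function.
    """
    rng = rng if rng is not None else np.random.default_rng()
    K = p_in_prev.shape[0]
    best_model, best_score = None, np.inf
    for _ in range(L):
        idx = rng.choice(K, size=3, replace=False)
        M = cv2.getAffineTransform(p_in_prev[idx].astype(np.float32),
                                   p_in_cur[idx].astype(np.float32))
        pred = p_in_prev @ M[:, :2].T + M[:, 2]      # apply the 2x3 model
        score = np.linalg.norm(pred - p_in_cur, axis=1).mean()
        if score < best_score:
            best_model, best_score = M, score
    return best_model

print(round(success_probability(0.3, 0.7, 40), 3))   # -> 0.993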

(D) Image Changes Detection. Here, in airborne imagery, the image changes mean the changes caused by the moving targets; we call these image changes motion images. The previous image is transformed by the best affine model A_b and subtracted from the current image. That is, the frame difference is generated by subtracting the warped previous image from the current image; Figure 6(d) shows the frame difference obtained for the input images in Figures 6(a) and 6(b).
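A minimal sketch of this step: warp the previous frame with the best affine model and subtract it from the current frame. Using the absolute difference is an assumption for illustration; the excerpt does not state how the subtraction is signed.

import cv2

def motion_image(prev_gray, cur_gray, best_affine):
    """Warp I_{t'} with the best affine model A_b (a 2x3 matrix) and
    subtract it from I_t to obtain the frame-difference (motion) image."""
    h, w = cur_gray.shape[:2]
    warped_prev = cv2.warpAffine(prev_gray, best_affine, (w, h))
    return cv2.absdiff(cur_gray, warped_prev)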

3.2. Dynamic Gabor Filter

3.2.1. Problems of the Thresholding Algorithms. To detect the targets, the motion image needs to be binarized. Figure 8 shows the binarization results for the frame difference image in Figure 6(d) obtained with three binarization algorithms. Figures 8(a) and 8(b) show the results for a fixed threshold of 10 and 30, respectively. Figure 8(c) shows the output of the adaptive thresholding algorithm based on mean-C, where the window size is 5×5 and the constant C is set to 5.0. Figure 8(d) shows the output of the Gaussian adaptive thresholding algorithm, where the window size is 5×5 and the constant C is set to 10.0. From these binary images, it is difficult to detect targets. By applying morphological operations such as dilation and erosion, it is possible to detect targets from some frame difference images; however, for video sequence processing, this method is not stable. To solve this problem, we need a technique to enhance the frame difference image. Image enhancement is the improvement of digital image quality (e.g., for visual inspection or for machine analysis),
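For reference, the binarization variants compared in Figure 8 correspond to standard OpenCV calls, sketched below with the window size and C values quoted in the text. The input is assumed to be an 8-bit single-channel frame-difference image; the maximum output value of 255 is an illustrative choice.

import cv2

def binarize_variants(diff_img):
    """Fixed and adaptive thresholding of an 8-bit frame-difference image,
    mirroring the comparison in Figure 8."""
    _, fixed10 = cv2.threshold(diff_img, 10, 255, cv2.THRESH_BINARY)
    _, fixed30 = cv2.threshold(diff_img, 30, 255, cv2.THRESH_BINARY)
    mean_c = cv2.adaptiveThreshold(diff_img, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 5, 5.0)
    gauss_c = cv2.adaptiveThreshold(diff_img, 255,
                                    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY, 5, 10.0)
    return fixed10, fixed30, mean_c, gauss_c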
