
Adaptive Particle Filter based on the Kurtosis of Distribution


Adaptive Particle Filter based on the Kurtosis of Distribution

by Songlin Piao

A Thesis Presented to the FACULTY OF THE GRADUATE SCHOOL

HANYANG UNIVERSITY

In Partial Fulfillment of the Requirements for the Degree MASTER OF SCIENCE

in the Department of Electrical and Computer Engineering

February 2011

Copyright 2010 Songlin Piao


Adaptive Particle Filter based on the

Kurtosis of Distribution

by Songlin Piao

Approved as to style and content by:


TABLE OF CONTENTS

I Introduction
1.1 Background
1.2 Related work

II Particle filter
2.1 Auxiliary particle filter
2.2 Gaussian particle filter
2.3 Unscented particle filter
2.4 Rao-Blackwellized particle filter

III Proposed method
3.1 Basic concept
3.2 Concept of Kurtosis
3.3 Proposed method in 1D
3.4 Proposed method in 2D
3.5 Proposed method in 3D
3.6 Proposed method in general case

IV Experiment
4.1 Simulation in 1D
4.2 Simulation in 2D
4.3 Simulation in 3D
4.4 Real particle tracking
4.5 Face tracking


LIST OF FIGURES

3.1 Transition example
3.2 Example in 2D case
3.3 Kurtosis of Gaussians
3.4 Proposed distribution in 1D
3.5 Sampling from the specific probability density function
3.6 Proposed pdf looks similar to a water wave
3.7 Motion vector in spherical system
3.8 Proposed pdf looks similar to a shockwave
4.1 Simulation of fluctuation case
4.2 Simulation of non-fluctuation case
4.3 Simulation in 2D
4.4 Simulation in 3D
4.5 Particle detection result
4.6 Motions in each frame
4.7 Motion angle
4.8 Tracking result
4.9 RMS error comparison
4.10 Face is tracked and detected
4.11 Face is tracked but not detected
4.12 Motion angle
4.13 Speed
4.14 Face tracking result
4.15 Analysis data


LIST OF TABLES

3.1 Kurtosis of Gaussians
6.1 Random number generation test using 5,000,000 samples

Adaptive Particle Filter based on the Kurtosis of Distribution

Songlin Piao
Department of Electrical and Computer Engineering, Hanyang University
Directed by Professor Whoi-Yul Kim

A kurtosis-based adaptive particle filter is presented in this paper. The concept of belief is introduced for each particle during sampling, and the distribution of particles can be adapted according to the belief and motion information, so that the particles track the object with higher accuracy. The belief and motion information can be defined as a distance function of the observation vector. To achieve this, we change the normal re-sampling technique: we introduce a framework in which particles are re-sampled based on this distance function. We demonstrate the advantages of the proposed method in two steps. First, we performed rigorous simulation tests in 1D, 2D, and 3D spaces to show that our method gives better results. Second, we ran experiments on real cases: one is real particle tracking in the hydraulic engineering domain, and the other is face tracking based on color features. At each step, we compared the results to those obtained with the standard particle filter.


I Introduction

The analysis of, and inference about, dynamic systems arise in a wide variety of applications across many disciplines. The Bayesian framework is the most commonly used approach for the study of dynamic systems. Two components are needed to describe a Bayesian framework: first, a process model describing the evolution of the hidden state of the system, and second, a measurement model relating noisy observations to the hidden state. If the noise and the prior distribution of the state variable are Gaussian, the predicted and posterior densities can be described by Gaussian densities. The Kalman filter is one such case, yielding the optimal solution in the MMSE sense. However, there are two major problems when the Bayesian framework is applied to the real world. One is that realistic process and measurement models for real-world dynamic systems are often nonlinear; the other is that the process noise and measurement noise sources can be non-Gaussian. The simultaneous localization and mapping (SLAM) problem [1] in robotics is a typical example. The Kalman filter performs poorly when the linearity and Gaussianity conditions are not satisfied. This has motivated intensive research into nonlinear filters for over 40 years. Nonlinear filtering involves finding suboptimal solutions and may be classified into two major approaches: a local approach, which approximates the posterior density function by some particular form, and a global approach, which computes the posterior density function without making any explicit assumptions about its form [2].


The Gaussian filter is a typical example of the local approach. Various forms of Gaussian filter have been proposed, for example, the extended Kalman filter (EKF), the iterated Kalman filter (IKF), the unscented Kalman filter (UKF), and so on. Another approach to approximating a non-Gaussian density is the Gaussian-sum representation. Based on the fact that non-Gaussian densities can be approximated reasonably well by a finite sum of Gaussian densities, Alspach and Sorenson introduced the Gaussian-sum filter for nonlinear systems [3].

A global approach approximates the densities directly, so that the integrations involved in the Bayesian estimation framework are made as tractable as possible. The particle filter is a special example of this class. It uses a set of randomly chosen samples with associated weights to approximate the true density. As one of the Sequential Monte Carlo (SMC) methods, its posterior approximation becomes more accurate as the number of samples grows. However, the large number of samples often makes SMC methods computationally expensive. Thanks to the development of computer technology, though, their use has become feasible even when the system comprises a very large number of particles [4].

The particle filter, one of the dynamic state estimation techniques, is commonly used in many engineering applications, especially in signal processing and object tracking. Various forms of particle filters and their applications have been proposed. Chang and Lin proposed a 3D model-based tracking algorithm called the progressive particle filter [5]; Gil et al. solved the multi-robot SLAM problem using a Rao-Blackwellized particle filter [6]; Jing and Vadakkepat proposed an interacting MCMC particle filter to track maneuvering targets [7]; Lakaemper and Sobel proposed an approach to building partial correspondences between shapes using a particle filter [8]; Mukherjee and Sengupta proposed a more generalized likelihood function model to obtain higher performance [9]; Pistori et al. combined different auto-adjustable observation models in a particle filter framework to achieve a good trade-off between accuracy and runtime [10]; Sajeeb et al. proposed a semi-analytical particle filter, requiring no Rao-Blackwell marginalization, for state and parameter estimation of nonlinear dynamical systems with additive Gaussian process and observation noises [11].

Shao et al. used a particle-based approach to solve constrained Bayesian state estimation problems [12]; Xu et al. proposed an ant stochastic decision based particle filter to solve the degradation problem that arises when the filter is applied to model-switching dynamic systems [13]; Chung and Furukawa presented a unified framework and control algorithm using particle filters for coordinating multiple pursuers to search for and capture multiple evaders [14]; Handel proved that bootstrap-type Monte Carlo particle filters approximate the optimal nonlinear filter in a time-average sense, uniformly with respect to the time horizon, when the signal is ergodic and the particle system satisfies a tightness property [15]; Hlinomaz and Hong presented a full-rate multiple-model particle filter for track-before-detect and a multi-rate multiple-model track-before-detect particle filter to track low signal-to-noise-ratio targets performing small maneuvers [16]; Hotta presented an adaptive weighting method for combining local classifiers using a particle filter [17]; Lu et al. proposed a method to track and recognize actions of multiple hockey players using the boosted particle filter [18]; Maggio and Cavallaro proposed a framework for multi-target tracking with feedback that accounts for scene contextual information [19]; Moreno et al. addressed the SLAM problem for stereo vision systems under the unified formulation of particle filter methods, based on models computing incremental 6-DoF pose differences from the image flow through a probabilistic visual odometry method [20].

Wang et al. proposed a camshift-guided particle filter for tracking objects in video sequences [21]; Zheng and Bhandarkar proposed an integrated face detection and face tracking algorithm using a boosted adaptive particle filter [22]; Choi and Kim proposed a robust head tracking algorithm with a 3D ellipsoidal head model integrated into the particle filter [23]; Li et al. presented a temporal probabilistic combination of discriminative observers of different lifespans within the particle filter to solve the tracking problem in low-frame-rate video with abrupt motion and poses [24]; Lin et al. presented a particle swarm optimization algorithm to solve the parameter estimation problem for nonlinear dynamic rational filters [25]; Olsson and Rydén studied the asymptotic performance of approximate maximum likelihood estimators for state-space models obtained via sequential Monte Carlo methods [26]; Pantrigo et al. solved multi-dimensional visual tracking problems using a scatter search particle filter [27]; Sankaranarayanan et al. proposed a new time-saving method for implementing the particle filter using the independent Metropolis-Hastings sampler [28]; Simandl and Straka proposed a new functional approach to the auxiliary particle filter so that it provides a closer filtering probability density function in terms of point estimates [29]; Wu et al. proposed a novel fuzzy particle filtering method for online estimation of nonlinear dynamic systems with fuzzy uncertainties [30]; Yee et al. proposed the approximate conditional mean particle filter, a combination of the approximate conditional mean filter and the sequential importance sampling particle filter [31]; Grisetti used a Rao-Blackwellized particle filter to solve the simultaneous localization and mapping (SLAM) problem efficiently by reusing already-computed proposal distributions [1]; Hong and Wicker proposed a multiresolutional particle filter in the spatial domain using thresholded wavelets to significantly reduce the number of particles without losing the full strength of a particle filter [32]; Li and Chua presented a particle filter solution for non-stationary color tracking using a transductive local exploration algorithm [33]; McKenna and Nait-Charif proposed a method to track human motion using auxiliary particle filters and iterated likelihood weighting [34]; Rathi et al. formulated a particle filtering algorithm in the geometric active contour framework that can be used for tracking moving and deforming objects [35]; Shan et al. proposed a real-time hand tracking algorithm using a mean shift embedded particle filter [4]; Clark and Bell proposed a particle PHD filter which propagates the first-order moment of the multi-target posterior instead of the posterior distribution, so that the tracker can track multiple targets in real time [36]; Emoto et al. proposed a cyclic motion model whose state variable is the phase of a motion and estimated these variables using a particle filter [37]; Fearnhead et al. introduced novel particle filters for a class of partially-observed continuous-time dynamic models where the signal is given by a multivariate diffusion process [38]; Tamimi et al. proposed a novel method that can localize mobile robots with omnidirectional vision using a particle filter and a performance-improved SIFT feature [39]; Bolic et al. proposed novel re-sampling algorithms with architectures for efficient distributed implementation of particle filters [40]; Khan et al. integrated a Markov random field into the particle filter to deal with tracking interacting targets [41]; Särkkä et al. proposed a new Rao-Blackwellized particle filtering based algorithm for tracking an unknown number of targets [42]; and Schon et al. implemented the marginalized particle filter by associating one Kalman filter with each particle, thereby reducing the time complexity [43].


II Particle filter

Particle filtering is a technique for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. The purpose of the particle filter is to estimate the states {S_1, ..., S_t} recursively using a sampling technique. To do so, the particle filter approximates the posterior distribution p(S_t | Z_{1:t}) with a set of samples {S_t^{(1)}, ..., S_t^{(p)}}, given the noisy observations Z_1, ..., Z_t. In particle filtering, the probability density of the target state is represented by a set of particles. The posterior density of the target state for a given input image is calculated and represented as a set of particles. In other words, a particle is a hypothesis of the target state, and each hypothesis is evaluated by assessing how well it fits the current input data. Depending on the scores of the hypotheses, the set of hypotheses is updated and regenerated at the next time step. The particle filter consists of two components, a state transition model and an observation model. They can be written as

    Transition model:  S_t = F_t(S_{t-1}, N_t),
    Observation model: Z_t = H_t(S_t, W_t).        (2.1)

The transition function F_t approximates the dynamics of the object being tracked using the previous state S_{t-1} and the system noise N_t. The measurement function H_t models the relationship among the noisy observation Z_t, the hidden state S_t, and the observation noise W_t. We can characterize the transition probability P(S_t | S_{t-1}) with the state transition model, and the likelihood P(Z_t | S_t) with the observation model.
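The transition-observation pair in (2.1) can be exercised with a minimal bootstrap particle filter. The sketch below assumes a 1D random-walk transition and Gaussian observation noise; these are illustrative choices for the demonstration, not the thesis's models. It shows the predict-weight-resample cycle described above:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1D random-walk model:
    S_t = S_{t-1} + N_t,  Z_t = S_t + W_t  (both noises Gaussian)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # draw from the prior p(S_0)
    estimates = []
    for z in observations:
        # Prediction: propagate each particle through the transition model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight each hypothesis by the likelihood p(Z_t | S_t)
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))  # posterior-mean estimate
        # Re-sample: regenerate the hypothesis set according to the weights
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)
```

The posterior mean over the weighted particles is one common point estimate; the resampling line implements the "updated and regenerated" step of the text.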


2.1 Auxiliary particle filter

The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 [44] to improve on some deficiencies of the sequential importance resampling (SIR) algorithm when dealing with tailed observation densities.

Assume that the filtered posterior is described by the following M weighted samples:

    p(x_{t-1} | z_{1:t-1}) ≈ Σ_{i=1}^{M} ω_{t-1}^{(i)} δ(x_{t-1} − x_{t-1}^{(i)}).        (2.2)

Each index is then drawn with a probability proportional to the likelihood of some reference point µ_t^{(i)}, which is in some way related to the transition model x_t | x_{t-1} (for example, the mean, a sample, etc.):

    k^{(i)} ∼ P(i = k | z_t) ∝ ω_{t-1}^{(i)} p(z_t | µ_t^{(i)}).        (2.3)

This is repeated for i = 1, 2, ..., M, and using these indices we can now draw the conditional samples:

    x_t^{(i)} ∼ p(x | x_{t-1}^{k(i)}).        (2.4)

Finally, the weights are updated to account for the mismatch between the likelihood at the actual sample and at the predicted point µ_t^{k(i)}:

    ω_t^{(i)} ∝ p(z_t | x_t^{(i)}) / p(z_t | µ_t^{k(i)}).        (2.5)
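The two-stage selection of (2.3)-(2.5) can be sketched as a single auxiliary-particle-filter step. The transition model, the reference-point choice (the transition mean), and the unit-variance Gaussian propagation noise below are caller-supplied placeholder assumptions, not specifics from the thesis:

```python
import numpy as np

def apf_step(particles, weights, z, transition_mean, likelihood, rng):
    """One auxiliary-particle-filter step, Pitt & Shephard style.
    `transition_mean` maps particles to reference points mu_t^(i) (eq. 2.3);
    `likelihood` evaluates p(z | x). Both are user-supplied model functions."""
    m = len(particles)
    mu = transition_mean(particles)              # reference points mu_t^(i)
    # First stage (eq. 2.3): select ancestor indices k(i) by likelihood at mu
    first_stage = weights * likelihood(z, mu)
    first_stage /= first_stage.sum()
    k = rng.choice(m, m, p=first_stage)
    # Second stage (eq. 2.4): propagate the chosen ancestors; Gaussian noise
    # is an illustrative choice of transition density here
    new_particles = transition_mean(particles[k]) + rng.normal(0.0, 1.0, m)
    # Weight update (eq. 2.5): correct the likelihood mismatch, then normalize
    new_weights = likelihood(z, new_particles) / likelihood(z, mu[k])
    new_weights /= new_weights.sum()
    return new_particles, new_weights
```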

2.2 Gaussian particle filter

The Gaussian particle filter and the Gaussian-sum particle filter were first introduced by Jayesh H. Kotecha et al. in [45]. The Gaussian particle filter approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter [46] and its variants. The underlying assumption is that the predictive and filtering distributions can be approximated as Gaussians. Unlike the EKF, which also assumes that these distributions are Gaussian but employs linearization of the functions in the process and observation equations, the GPF updates the Gaussian approximations using particles.

2.3 Unscented particle filter

The procedure of the recently presented unscented particle filter can be described as follows:

1. Initialization, t = 0:

• For i = 1, · · · , N, draw particles x_0^i ∼ p(x_0) and set t = 1.

2. Importance sampling step:

• For i = 1, use the UKF with the main model to generate the importance distribution N(x̄_t^1, P_t^1) from particle x_{t-1}^1. Sample x_t^1 ∼ N(x̄_t^1, P_t^1).

• For i = 2, 3, · · · , N, use the UKF with the auxiliary model to generate the importance distribution N(x̄_t^i, P_t^i) from particle x_{t-1}^i. Sample x_t^i ∼ N(x̄_t^i, P_t^i).

3. Importance weight step:

• For i = 1, · · · , N, evaluate the importance weights using the likelihood function.

• For i = 1, · · · , N, normalize the importance weights of all the particles.
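Step 3 can be made concrete for a Gaussian proposal such as the UKF-generated N(x̄_t^i, P_t^i). The helper below is a generic sketch, with caller-supplied model functions as assumptions: it evaluates the standard importance weight w ∝ p(z|x) p(x|x_prev) / q(x) and normalizes it.

```python
import numpy as np

def importance_weights(z, x_new, x_prev, proposal_mean, proposal_std,
                       likelihood, transition_pdf):
    """Importance-weight step for a particle filter with a 1D Gaussian
    proposal q = N(proposal_mean, proposal_std^2):
    w ∝ p(z | x) p(x | x_prev) / q(x), followed by normalization."""
    q = (np.exp(-0.5 * ((x_new - proposal_mean) / proposal_std) ** 2)
         / (np.sqrt(2 * np.pi) * proposal_std))       # proposal density q(x)
    w = likelihood(z, x_new) * transition_pdf(x_new, x_prev) / (q + 1e-300)
    return w / w.sum()                                 # normalization step
```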


2.4 Rao-Blackwellized particle filter

The advantage of the Rao-Blackwellized particle filter is that it allows the state variables to be split into two sets, one of them being analytically calculated from the posterior probability of the remaining ones. It has been applied to SLAM, nonlinear regression, multi-target tracking, and appearance and position estimation. In the particle filter framework, as the dimension of the state space grows, sampling in high-dimensional spaces becomes inefficient [47]. However, in some cases the state can be separated into tractable subspaces. If some of these subspaces can be calculated analytically, the size of the space over which the particle filter samples is drastically reduced. This concept was first proposed in [48].

Let us denote the state as s_t and the observation as z_t, where the observations are assumed to be conditionally independent given the process s_t, with marginal distribution p(z_t | s_t). The aim is to estimate the joint posterior distribution p(s_{0:t} | z_{1:t}). This pdf can be written recursively as

    p(s_{0:t} | z_{1:t}) = p(z_t | s_t) p(s_t | s_{t-1}) p(s_{0:t-1} | z_{1:t-1}) / p(z_t | z_{1:t-1}),        (2.6)

where p(z_t | z_{1:t-1}) is a proportionality constant.

In multi-dimensional spaces, the resulting integrals are not always tractable. However, if the hidden variables have structure, we can divide the state s_t into two groups, r_t and k_t, such that p(s_t | s_{t-1}) = p(k_t | r_{t-1:t}, k_{t-1}) p(r_t | r_{t-1}). In that case, we can marginalize out k_{0:t} from the posterior, reducing the dimensionality problem. Following the chain rule, the posterior is decomposed as

    p(r_{0:t}, k_{0:t} | z_{1:t}) = p(k_{0:t} | z_{1:t}, r_{0:t}) p(r_{0:t} | z_{1:t}),        (2.7)

where the marginal posterior distribution p(r_{0:t} | z_{1:t}) satisfies the alternative recursion

    p(r_{0:t} | z_{1:t}) ∝ p(z_t | r_t) p(r_t | r_{t-1}) p(r_{0:t-1} | z_{1:t-1}).        (2.8)


III Proposed method

It is acknowledged that a successful implementation of the particle filter rests on two aspects:

• How to select samples appropriately, i.e., how to avoid degeneration, in which a number of samples are removed from the sample set due to their low importance weights.

• How to design an appropriate proposal distribution that facilitates easy sampling and achieves a large overlap with the true state density function.

This paper focuses on the second aspect. The choice of the proposal importance distribution is one of the critical issues in particle filtering, as the performance of the particle filter depends heavily on it. The optimal proposal importance distribution is q(x_t | x_{0:t-1}, y_{1:t}) = p(x_t | x_{t-1}, y_t), because it fully exploits the information in both x_{t-1} and y_t. In practice, it is impossible to sample from this distribution due to its complexity. The second choice of proposal function is the transition prior, q(x_t | x_{0:t-1}, y_{1:t}) = p(x_t | x_{t-1}), chosen for its ease of sampling. This is the most popular choice. But since this function does not use the latest observation y_t, the performance depends heavily on the variance of the observation noise: when the observation noise variance is small, the performance is poor. The third choice is to use local linearization to generate the proposal importance distribution [49].

We propose a kurtosis-based sampling technique to improve the accuracy of the current particle filter framework. The word 'kurtosis' denotes a measure of the 'peakedness' of the probability distribution of a real-valued random variable. In the particle filter framework, each time state has its own distribution of particles, which can be used to estimate the exact state associated with the observed data. This distribution changes at every time step. In previous work, the distribution of particles is assumed to equal the distribution of the state transition, with the same Gaussian noise assumed during prediction. In this article, a kurtosis-based framework is proposed, in which the distribution of the particles changes according to some meaningful feature, the motion vector in this case.

We denote the state vector at time t as V_t = {v_1, v_2, · · · , v_n} and the observation state vector at time t as Z_t = {z_1, z_2, · · · , z_k}, where n is the dimension of the state and k is the dimension of the observation; usually k is smaller than n. Then the derivative of each observation state can be represented, starting from ∂z_1,


Figure 3.1 Transition example

In the particle filter framework, the relationship between S_t and S_{t-1} can be represented as S_t = f(S_{t-1}) + n_t, where n_t is assumed to follow a constant Gaussian distribution in most cases. When we predict the current state S_t using the velocity information v_{t-1} from the previous state, the relationship can be written as S_t = S_{t-1} + v_{t-1} + n_t. We change the distribution of particles according to the value of v_{t-1}; more specifically, if the absolute value of v_{t-1} becomes larger, the kurtosis of the particles' distribution becomes higher, too. To introduce this concept more easily, we take a simple example. Consider the case where the radius and angle values are known and follow the Gaussian distributions described in (3.2) and (3.3).


(a) Distribution (b) Contour (c) Normal case

Figure 3.2 Example in 2D case

If their distributions are known, then the joint distribution of (x, y) looks like Fig. 3.2(a). In this example, the mean of θ is set to 4.3633 radians (about 250 degrees) and the mean of ρ is set to 6, with their covariance matrix set accordingly. In the normal particle filter, a fixed Gaussian noise is used to predict the object's position in the future state; this is in fact N_t in the translation model of equation (2.1). Instead, in the proposed method, we use a noise model N_t associated with motion information. Before going further, it is necessary to briefly introduce the concept of kurtosis.


3.2 Concept of Kurtosis

Kurtosis is defined as the fourth cumulant divided by the square of the second cumulant, which equals the fourth moment about the mean divided by the square of the variance of the probability distribution, minus 3:

    kurtosis = m_4 / m_2^2 − 3 = [(1/n) Σ_{i=1}^{n} (x_i − x̄)^4] / [(1/n) Σ_{i=1}^{n} (x_i − x̄)^2]^2 − 3,        (3.6)

where m_4 is the fourth sample moment about the mean, m_2 is the second sample moment about the mean, x_i is the i-th value, and x̄ is the sample mean. In the case of the particle filter, the kurtosis of a distribution can be defined similarly as

    kurtosis = [Σ_{i=1}^{n} ω_i (x_i − x̄)^4] / [Σ_{i=1}^{n} ω_i (x_i − x̄)^2]^2 − 3,        (3.7)

where ω_i is the weight of the i-th particle. As mentioned before, kurtosis is a measure of the "peakedness" of a probability distribution. In the case of the particle filter, this concept can be used to measure the new predicted positions of the particles. The kurtosis changes according to the belief of the particles and the length of the motion vector: when the belief of a particle is more reliable and the strength of the motion vector is large, the value of the kurtosis becomes smaller. For a Gaussian distribution, the larger the kurtosis is, the larger the standard deviation is. This is the key concept of the proposed method in this article.

Fig. 3.3 and Table 3.1 show several Gaussian distributions in 1D and their corresponding kurtosis values. It can be seen that the larger the sigma value is, the larger the kurtosis is.
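Formulas (3.6) and (3.7) transcribe directly into code, as sketched below. Note that this classical excess-kurtosis statistic is scale-invariant (near 0 for Gaussian samples of any σ), so the σ-dependent values in Table 3.1 evidently follow the particular normalization used in the thesis rather than this textbook form:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis, eq. (3.6): m4 / m2^2 - 3."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = np.mean((x - m) ** 2)   # second sample moment about the mean
    m4 = np.mean((x - m) ** 4)   # fourth sample moment about the mean
    return m4 / m2 ** 2 - 3.0

def weighted_excess_kurtosis(x, w):
    """Weighted variant for particles, eq. (3.7); weights are normalized."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    m = np.sum(w * x)
    m2 = np.sum(w * (x - m) ** 2)
    m4 = np.sum(w * (x - m) ** 4)
    return m4 / m2 ** 2 - 3.0
```

With uniform weights, (3.7) reduces exactly to (3.6).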


Figure 3.3 Kurtosis of Gaussians

Table 3.1 Kurtosis of Gaussians

color     µ    σ     kurtosis
red       0    1.0   11.17963
blue      0    1.2   14.01556
green     0    1.4   16.85140
yellow    0    1.6   19.68429
magenta   0    1.8   22.48823
black     0    2.0   25.16695


3.3 Proposed method in 1D

In the 1D case, there are only two options for the angle: one is to translate along the positive direction, the other is to translate along the negative direction. If the motion information is known, the position of the current state is at 0, and the state transition satisfies S_{t+1} = S_t + v_t + n_{t+1}, then the probability of S_{t+1} can be represented as

    P_{S_{t+1}}(s) = Σ_{α ∈ {P_p, P_n}} α · (1 / (√(2π) σ)) exp(−(s − µ_α)^2 / (2σ^2)),        (3.8)

    α = { P_p, positive direction (µ_α = +µ),
          P_n, negative direction (µ_α = −µ),        (3.9)

    P_p + P_n = 1.        (3.10)

The state transition distribution can be seen in Fig. 3.4. However, there is a gap at position 0, which is the position of the current state. To solve this problem, we used a cubic spline [50] to smooth five selected points so that the gap disappears. The five points are the two peak points, the two half-middle points from the left and right Gaussian distributions, and the point at 0. The smoothed curve is drawn in green in Fig. 3.4. The left column has means 1.5 and -1.5, the middle column has means 2.0 and -2.0, and the right column has means 2.5 and -2.5. The upper row has standard deviation 1.2 and the lower row has standard deviation 1.8. The positive direction has weight 0.7 and the negative direction has weight 0.3.

For N particles at time step t − 1, {p_{t-1}^1, p_{t-1}^2, · · · , p_{t-1}^N}, the particles are first filtered by the resampling step, and then new particle locations are produced using the proposed probability density function as in Fig. 3.5(a). For example, p_t^i = p_{t-1}^i + q_{t-1}, where q_{t-1} is a draw from the proposed pdf at time step


(a) µ = ±1.5 σ = 1.2 (b) µ = ±2.0 σ = 1.2 (c) µ = ±2.5 σ = 1.2

(d) µ = ±1.5 σ = 1.8 (e) µ = ±2.0 σ = 1.8 (f) µ = ±2.5 σ = 1.8

Figure 3.4 Proposed distribution in 1D

Figure 3.5 Sampling from the specific probability density function


t − 1. All the motion information has already been incorporated into the pdf, and the pdf changes according to the motion information and the belief of the particles. This concept is very different from the normal particle filter, where p_t^i = p_{t-1}^i + v_{t-1} + n_t, with v_{t-1} the speed of the object at time step t − 1 and n_t usually Gaussian noise.
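Ignoring the cubic-spline smoothing at the origin, a draw q_{t-1} from the two-sided proposal of (3.8)-(3.10) can be sketched as a two-component Gaussian mixture. The parameter values below mirror the figure's middle column (µ = ±2.0, σ = 1.2, weights 0.7/0.3) and are otherwise arbitrary:

```python
import numpy as np

def sample_bimodal_motion_pdf(n, mu=2.0, sigma=1.2, p_pos=0.7, seed=0):
    """Draw particle displacements from the two-sided proposal of Sec. 3.3:
    a mixture of N(+mu, sigma^2) with weight p_pos and N(-mu, sigma^2) with
    weight 1 - p_pos. The spline smoothing near 0 is omitted in this sketch."""
    rng = np.random.default_rng(seed)
    # Choose a translation direction per sample: P_p forward, P_n backward
    direction = np.where(rng.random(n) < p_pos, 1.0, -1.0)
    # Displace by a Gaussian centered at the signed motion magnitude
    return direction * mu + rng.normal(0.0, sigma, n)
```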

3.4 Proposed method in 2D

In the 2D case, the proposed distribution looks similar to the wave generated when an object is moving inside the water, Fig. 3.6(b) [51].

We denote the state vector as S_t = {x_t, y_t} and the velocity vector as V_t = {v_t^x, v_t^y} at time step t; we then generate the new state of each sample using S_{t+1}^i = S_t^i + F(V_t), where F is the proposed sampling function based on the motion information at each time step. In the normal particle filter, the equation is S_{t+1}^i = S_t^i + v_t + N_t. The proposed sampling method thus differs from the original sampling method in the prediction step. A big advantage of the proposed sampling method is that it can change the pdf or integrate other factors into the sampling step.
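One possible reading of S_{t+1}^i = S_t^i + F(V_t) is to draw each displacement in polar form around the motion vector, so that probability mass concentrates along the motion direction. The noise scales below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def predict_2d(states, velocity, sigma_rho=0.5, sigma_theta=0.2, seed=0):
    """Prediction step sketch for Sec. 3.4: each particle is displaced by a
    radius/angle pair drawn around the motion vector V_t = (vx, vy)."""
    rng = np.random.default_rng(seed)
    n = len(states)
    speed = np.hypot(*velocity)                  # |V_t|
    heading = np.arctan2(velocity[1], velocity[0])
    rho = rng.normal(speed, sigma_rho, n)        # radius around the speed
    theta = rng.normal(heading, sigma_theta, n)  # angle around the heading
    step = np.column_stack((rho * np.cos(theta), rho * np.sin(theta)))
    return states + step                         # S_{t+1}^i = S_t^i + F(V_t)
```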


(a) Proposed pdf (b) Water wave

Figure 3.6 Proposed pdf looks similar to a water wave

3.5 Proposed method in 3D

In the 3D case, when the state vector is S_t = {x_t, y_t, z_t}, the location of the object at the current time step, the motion vector can be represented in a spherical coordinate system as in Fig. 3.7. The relationship between the spherical coordinates (r, θ, φ) of a point and its Cartesian coordinates (x, y, z) is the standard one,


Figure 3.7 Motion vector in spherical system

The relationship between the proposed pdf and the motion can then be expressed as a joint probability over (ρ, θ, φ).

The proposed pdf is the joint probability of (ρ, θ, φ). If we assume that the probabilities of ρ, θ, and φ are independent of each other and follow Gaussian distributions, then the joint probability of these three variables looks similar to the shape of a shockwave. Fig. 3.8(a) is the proposed pdf when X_t = 2, Y_t = 3, Z_t = 4; it can be seen that the shape of the pdf is similar to the shockwave in Fig. 3.8(b). This means that positions located along the motion direction have higher probability. Of course, the distributions of ρ, θ, and φ are not necessarily Gaussian; they


(a) Proposed pdf (b) Shockwave

Figure 3.8 Proposed pdf looks similar to a shockwave

could be any distribution you want to define. But whatever the distribution is, its kurtosis is changed according to the motion information.
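The spherical-Cartesian mapping used for the motion vector (Fig. 3.7) is standard; the sketch below uses the physics convention, with θ measured from the +z axis, which is an assumption since the thesis's exact convention is not shown in this excerpt:

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Convert a motion vector from spherical (r, theta, phi) to Cartesian
    (x, y, z); theta is the polar angle from +z, phi the azimuth."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """Inverse mapping, recovering (r, theta, phi) from (x, y, z)."""
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(z / r)    # polar angle in [0, pi]
    phi = np.arctan2(y, x)      # azimuth in (-pi, pi]
    return r, theta, phi
```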

3.6 Proposed method in general case

We have proposed a new sampling methodology in 1D, 2D, and 3D spaces. The proposed method can be extended to higher-dimensional spaces. We denote the state vector at time t as S_t = {s_1, s_2, · · · , s_m} and the observation state vector as Z_t = {z_1, z_2, · · · , z_k}, where m is the dimension of the state and k is the dimension of the observation, with k smaller than or equal to m. The problem can then be classified into two groups: one is the case when k is equal to m, and the other is the case when k is smaller than m. The observation state vector can be obtained from the corresponding state vector; we denote this relationship as Z_t = H(S_t). We first need to calculate the gradient of the current observation vector and then calculate the distance D(Z_t) using equation (3.1). Then we calculate the gradient of S_t from the gradient of Z_t,
