Kalman Filter Basics
7.1 INTRODUCTION
7.1.1 What is a Kalman Filter?
It is an extremely effective and versatile procedure for combining noisy sensor outputs to estimate the state of a system with uncertain dynamics.
For our purposes in this book:
The noisy sensors may include GPS receivers and inertial sensors (accelerometers and gyroscopes, typically) but may also include speed sensors (e.g., wheel speeds of land vehicles, water speed sensors for ships, air speed sensors for aircraft, or Doppler radar) and time sensors (clocks).

The system state in question may include the position, velocity, acceleration, attitude, and attitude rate of a vehicle on land, at sea, in the air, or in space, but the system state may include ancillary ``nuisance variables'' for modeling correlated noise sources (e.g., GPS Selective Availability timing errors) and time-varying parameters of the sensors, such as scale factor, output bias, or (for clocks) frequency. Selective Availability has been suspended as of May 1, 2000.
Uncertain dynamics includes unpredictable disturbances of the host vehicle, whether caused by a human operator or by the medium (e.g., winds, surface currents, turns in the road, or terrain changes), but it may also include unpredictable changes in the sensor parameters.
More abstract treatments of the Kalman filter are presented in [18, 19, 40, 46, 67, 69, 71, 72], and a more basic introduction can be found in [31].
7.1.2 How it Works
The Kalman filter maintains two types of variables:
1. Estimated State Vector. The components of the estimated state vector include the following:
(a) The variables of interest (i.e., what we want or need to know, such as position and velocity).
(b) ``Nuisance variables'' that are of no intrinsic interest but may be necessary to the estimation process. These nuisance variables may include, for example, the selective availability errors of the GPS satellites. We generally do not wish to know their values but may be obliged to calculate them to improve the receiver estimate of position.
(c) The Kalman filter state variables for a specific application must include all those system dynamic variables that are measurable by the sensors used in that application. For example, a Kalman filter for a system containing accelerometers and rate gyroscopes must contain acceleration and rotation rate components to which these instruments respond. The acceleration and angular rate components do not have to be those along the sensor input axes, however. The Kalman filter state variables could be the components along locally level earth-fixed coordinates, even though the sensors measure components in vehicle-body-fixed coordinates.
In similar fashion, the Kalman filter state variables for GPS-only navigation must contain the position coordinates of the receiver antenna, but these could be geodetic latitude, longitude, and altitude with respect to a reference ellipsoid, or geocentric latitude, longitude, and altitude with respect to a reference sphere, or ECEF Cartesian coordinates, or ECI coordinates, or any equivalent coordinates.
2. A Covariance Matrix: a Measure of Estimation Uncertainty. The equations used to propagate the covariance matrix (collectively called the Riccati equation) model and manage uncertainty, taking into account how sensor noise and dynamic uncertainty contribute to uncertainty about the estimated system state.
By maintaining an estimate of its own estimation uncertainty and the relative uncertainty in the various sensor outputs, the Kalman filter is able to combine all sensor information ``optimally,'' in the sense that the resulting estimate minimizes any quadratic loss function of estimation error, including the mean-squared value of any linear combination of state estimation errors. The Kalman gain is the optimal weighting matrix for combining new sensor data with a prior estimate to obtain a new estimate.
7.2 STATE AND COVARIANCE CORRECTION
The Kalman filter is a two-step process, the steps of which we call ``prediction'' and ``correction.'' The filter can start with either step, but we will begin by describing the correction step first.
The correction step makes corrections to an estimate, based on new information obtained from sensor measurements.
The Kalman gain matrix K is the crown jewel of Kalman filtering. All the effort of solving the matrix Riccati equation is for the sole purpose of computing the ``optimal'' value of the gain matrix K used for correcting an estimate x̂,

x̂(+) = x̂(−) + K ( z − H x̂(−) ),    (7.1)

based on a measurement z that is a linear function of the state vector x, plus measurement noise.
The derivation begins with background on properties of Gaussian probability distributions and Gaussian likelihood functions, then development of models for noisy sensor outputs and a derivation of the associated maximum-likelihood estimate (MLE) for combining prior estimates with noisy sensor measurements.
7.2.1 Gaussian Probability Density Functions
Probability density functions (PDFs) are nonnegative integrable functions whose integral equals unity (i.e., 1). The density functions of Gaussian probability distributions all have the form
p(x) = ( 1 / √( (2π)^n det P ) ) exp( −(1/2) (x − m)^T P^{-1} (x − m) ),    (7.3)
where n is the dimension of P (i.e., P is an n × n matrix) and the parameters

m ≝ E⟨x⟩,    P ≝ E⟨(x − m)(x − m)^T⟩

are the mean and covariance of the distribution.
The operator E⟨·⟩ is the expectancy operator, also called the expected-value operator.
The notation x ∈ N(m, P) denotes that the variate (i.e., random variable) x is drawn from the Gaussian distribution with mean m and covariance P. Gaussian distributions are also called normal or Laplace distributions.
TABLE 7.1 Analogous Concepts in Three Different Contexts
(probability distribution ↔ likelihood function L)

a Argmax(f) returns the argument x of the function f where f(x) achieves its maximum value. For example, argmax(sin) = π/2 and argmax(cos) = 0.
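Equation 7.3 is straightforward to evaluate numerically. The following sketch is not from the text; it assumes NumPy is available and uses invented values for m and P, simply to illustrate the formula for a two-dimensional x.

```python
import numpy as np

def gaussian_pdf(x, m, P):
    """Evaluate the Gaussian density of Eq. 7.3 at x for mean m and covariance P."""
    n = len(m)
    diff = x - m
    norm = np.sqrt((2.0 * np.pi) ** n * np.linalg.det(P))
    return np.exp(-0.5 * diff @ np.linalg.inv(P) @ diff) / norm

# Invented values, for illustration only
m = np.array([1.0, -2.0])                   # mean vector
P = np.array([[2.0, 0.3], [0.3, 1.0]])      # covariance matrix
print(gaussian_pdf(np.array([1.5, -2.2]), m, P))
```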
7.2.2 Likelihood Functions
Likelihood functions are similar to probability density functions, except that their integrals are not constrained to equal unity, or even required to be finite. They are useful for comparing relative likelihoods and for finding the value of the unknown independent variable x at which the likelihood function achieves its maximum, as illustrated in Fig. 7.1.
7.2.2.1 Gaussian Likelihood Functions Gaussian likelihood functions have the form

L(x) ∝ exp( −(1/2) (x − m)^T Y (x − m) ),

where the parameter Y is called the information matrix of the likelihood function. When Y is nonsingular, it equals the inverse P^{-1} of the corresponding covariance matrix.
7.2.2.2 Scaling of Likelihood Functions Maximum-likelihood estimation is based on the argmax of the likelihood function, but for any positive scalar c > 0,

argmax_x [ c L(x) ] = argmax_x [ L(x) ].
Fig. 7.1 Maximum-likelihood estimate.
That is, positive scaling of likelihood functions will have no effect on the maximum-likelihood estimate. As a consequence, likelihood functions can have arbitrary positive scaling.

7.2.2.3 Independent Likelihood Functions The joint probability P(A & B) of independent events A and B is the product P(A & B) = P(A) P(B). The analogous effect on independent likelihood functions L_A and L_B is the pointwise product. That is, at each ``point'' x,

L_{A&B}(x) = L_A(x) L_B(x).
7.2.2.4 Pointwise Products One of the remarkable attributes of Gaussian likelihood functions is that their pointwise products are also Gaussian likelihood functions, as illustrated in Fig. 7.3.
Given two Gaussian likelihood functions with parameter sets {m_A, Y_A} and {m_B, Y_B}, their pointwise product is a scaled Gaussian likelihood function with parameters {m_{A&B}, Y_{A&B}} such that, for all x,

c exp( −(1/2)(x − m_{A&B})^T Y_{A&B} (x − m_{A&B}) )
    = exp( −(1/2)(x − m_A)^T Y_A (x − m_A) ) exp( −(1/2)(x − m_B)^T Y_B (x − m_B) )    (7.11)

for some positive scaling constant c.
Fig. 7.2 Likelihood without maximum.
One can solve Eq. 7.11 for the parameters {m_{A&B}, Y_{A&B}} as functions of the parameters {m_A, Y_A, m_B, Y_B}. Taking logarithms of both sides of Eq. 7.11 will produce the equation

log c − (1/2)(x − m_{A&B})^T Y_{A&B} (x − m_{A&B})
    = −(1/2)(x − m_A)^T Y_A (x − m_A) − (1/2)(x − m_B)^T Y_B (x − m_B).
Next, taking the first and second derivatives with respect to the independent variable x will produce the equations

Y_{A&B} (x − m_{A&B}) = Y_A (x − m_A) + Y_B (x − m_B),    (7.13)
Y_{A&B} = Y_A + Y_B.    (7.14)

That is, the information matrix Y_{A&B} of the pointwise product is the sum of the information matrices of the component likelihood functions (Y_A and Y_B in Eq. 7.14).
Combined Maximum-Likelihood Estimate is a Weighted Average Equation 7.13 evaluated at x = 0 is

Y_{A&B} m_{A&B} = Y_A m_A + Y_B m_B,

which can be solved for

m_{A&B} = Y_{A&B}^† ( Y_A m_A + Y_B m_B ) = ( Y_A + Y_B )^† ( Y_A m_A + Y_B m_B ),    (7.17)

where † denotes the Moore–Penrose inverse of a matrix (defined in Section B.1.4.7). Equations 7.14 and 7.17 are key results for deriving the formula for Kalman gain. All that remains is to define likelihood function parameters for noisy sensors.
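As a numerical illustration of Eqs. 7.14 and 7.17 (a sketch with invented parameter values, not an example from the text), the pointwise product of two Gaussian likelihoods can be formed by adding their information matrices and taking the weighted average of their means with a Moore–Penrose pseudoinverse:

```python
import numpy as np

def fuse_likelihoods(m_a, Y_a, m_b, Y_b):
    """Pointwise product of two Gaussian likelihoods (Eqs. 7.14 and 7.17)."""
    Y_ab = Y_a + Y_b                                        # combined information matrix
    m_ab = np.linalg.pinv(Y_ab) @ (Y_a @ m_a + Y_b @ m_b)   # weighted-average argmax
    return m_ab, Y_ab

# Invented example: two independent estimates of the same 2-vector
m_a, Y_a = np.array([1.0, 0.0]), np.diag([4.0, 1.0])   # more informative about component 1
m_b, Y_b = np.array([0.0, 1.0]), np.diag([1.0, 4.0])   # more informative about component 2
m_ab, Y_ab = fuse_likelihoods(m_a, Y_a, m_b, Y_b)
print(m_ab)   # closer to m_a in component 1 and closer to m_b in component 2
```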
7.2.3 Noisy Measurement Likelihoods
The term measurements refers to outputs of sensors that are to be used in estimating the argument vector x of a likelihood function. Measurement models represent how these measurements are related to x, including those errors called measurement errors or sensor noise. These models can be expressed in terms of likelihood functions with x as the argument.
7.2.3.1 Measurement Vector The collective output values from a multitude ℓ of sensors are the components of a vector

z ≝ [ z_1  z_2  z_3  ⋯  z_ℓ ]^T,

called the measurement vector, a column vector with ℓ rows.
7.2.3.2 Measurement Sensitivity Matrix We suppose that the measured values z_i are linearly¹ related to the unknown vector x we wish to estimate,

z = H x,

where H is the measurement sensitivity matrix.
7.2.3.3 Measurement Noise Measurement noise is the electronic noise at the output of the sensors. It is assumed to be additive,

z = H x + v,
¹ The Kalman filter is defined in terms of the measurement sensitivity matrix H, but the extended Kalman filter (described in Section 7.6.4) can be defined in terms of a suitably differentiable vector-valued function h(x).
where the measurement noise vector v is assumed to be zero-mean Gaussian noise with known covariance R, that is, v ∈ N(0, R). The corresponding likelihood function of the measurement has information matrix

Y_z = R^{-1},

assuming R is nonsingular.
7.2.3.5 Unknown Vector Likelihoods The same parameters defining measurement likelihoods also define an inferred likelihood function for the true value of the unknown vector, with mean

m_x = H^† z

and information matrix

Y_x = H^T R^{-1} H.

This information matrix can be singular (for example, when there are fewer measurements than unknown state variables), which is not unusual for GPS/INS integration.
7.2.4 Gaussian MLE
7.2.4.1 Variables Gaussian MLE uses the following variables:
x̂, the maximum-likelihood estimate of x. It will always equal the argmax (mean m) of an associated Gaussian likelihood function, but it can have different values:
x̂(−), representing the likelihood function prior to using the measurements.
x̂(+), representing the likelihood function after using the measurements.
P, the covariance matrix of estimation uncertainty. It will always equal the inverse of the information matrix Y of the associated likelihood function. It also can have two values:
P(−), representing the likelihood function prior to using the measurements.
P(+), representing the likelihood function after using the measurements.
z, the vector of measurements
H, the measurement sensitivity matrix
R, the covariance matrix of sensor noise
7.2.4.2 Update Equations The MLE formula for updating the variables x̂ and P to reflect the effect of measurements can be derived from Eqs. 7.14 and 7.17 with initial likelihood parameters

m_A = x̂(−),    (7.27)
Y_A = [P(−)]^{-1},    (7.28)

representing the prior estimate and its information matrix, together with

Y_B = H^T R^{-1} H,    (7.29)

the information matrix of the measurements, and

m_B = H^† z,    (7.30)

where z is the measurement vector.
7.2.4.3 Covariance Update The Gaussian likelihood function with parameters {m_{A&B}, Y_{A&B}} of Eqs. 7.14 and 7.17 then represents the state of knowledge about the unknown vector x combining both sources (i.e., the prior likelihood and the effect of the measurements). That is, the covariance of MLE uncertainty after using the measurements will be

P(+) = Y_{A&B}^{-1} = ( [P(−)]^{-1} + H^T R^{-1} H )^{-1},

and the MLE of x after using the measurements will then be

x̂(+) = m_{A&B}.
Equation 7.14 can be simplified by applying the following general matrix formula:

( A^{-1} + B C^{-1} D )^{-1} = A − A B ( D A B + C )^{-1} D A,    (7.31)
with the substitutions

A^{-1} = Y_A, the prior information matrix for x̂,
A = P(−), the prior covariance matrix for x̂,
B = H^T, the transpose of the measurement sensitivity matrix,
C = R,
D = H, the measurement sensitivity matrix,

so that Eq. 7.31 becomes

P(+) = P(−) − P(−) H^T ( H P(−) H^T + R )^{-1} H P(−),

a form better conditioned for computation.
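The algebraic equivalence of the information form ([P(−)]^{-1} + H^T R^{-1} H)^{-1} and the better-conditioned form above can be checked numerically; the sketch below uses arbitrary invented matrices and is not an example from the text.

```python
import numpy as np

P_prior = np.diag([10.0, 5.0])               # P(-), prior covariance (invented)
H = np.array([[1.0, 0.0]])                   # measurement sensitivity matrix (invented)
R = np.array([[2.0]])                        # measurement noise covariance (invented)

# Information form: P(+) = (P(-)^-1 + H^T R^-1 H)^-1
P_info = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ np.linalg.inv(R) @ H)

# Better-conditioned form: P(+) = P(-) - P(-) H^T (H P(-) H^T + R)^-1 H P(-)
S = H @ P_prior @ H.T + R
P_cond = P_prior - P_prior @ H.T @ np.linalg.inv(S) @ H @ P_prior

print(np.allclose(P_info, P_cond))           # True: the two forms agree
```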
7.2.4.4 Estimate Update Equation 7.17 with substitutions from Eqs. 7.27–7.30 will have the form shown in Eq. 7.37, in which the measurement-derived mean H^† z of Eq. 7.30 enters weighted by the measurement information matrix H^T R^{-1} H. After algebraic simplification, the result can be written in the corrector form

x̂(+) = x̂(−) + P(−) H^T ( H P(−) H^T + R )^{-1} ( z − H x̂(−) ).    (7.45)
7.2.5 Kalman Gain Matrix
Equation 7.45 has the form of Eq. 7.1 with Kalman gain matrix

K = P(−) H^T ( H P(−) H^T + R )^{-1}.    (7.46)
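Equations 7.46 and 7.1, together with the corrected covariance of Eq. 7.36 (listed later in Table 7.2 as Eq. 7.47), make up the correction step. A minimal NumPy sketch of these standard equations, not code from the text:

```python
import numpy as np

def kalman_correct(x_prior, P_prior, z, H, R):
    """Correct a prior estimate with a measurement z (Eqs. 7.46, 7.1, and 7.47)."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain (Eq. 7.46)
    x_post = x_prior + K @ (z - H @ x_prior)   # corrected estimate (Eq. 7.1)
    P_post = P_prior - K @ H @ P_prior         # corrected covariance (Eq. 7.47)
    return x_post, P_post
```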
7.3 STATE AND COVARIANCE PREDICTION
The rest of the Kalman filter is the prediction step, in which the estimate x̂ and its associated covariance matrix of estimation uncertainty P are propagated from one time epoch to another. This is the part where the dynamics of the underlying physical processes come into play. The ``state'' of a dynamic process is a vector of variables that completely specify enough of the initial boundary value conditions for propagating the trajectory of the dynamic process forward in time, and the procedure for propagating that solution forward in time is called ``state prediction.'' The model for propagating the covariance matrix of estimation uncertainty is derived from the model used for propagating the state vector.
7.3.1 State Prediction Models
7.3.1.1 Differential Equation Models Ever since the differential calculus was introduced (more or less simultaneously) by Sir Isaac Newton (1643–1727) and Gottfried Wilhelm Leibnitz (1646–1716), we have been using ordinary differential equations as models for the dynamical behavior of systems of all sorts.
Example 7.1 Differential Equation Model for Harmonic Resonator Dynamical behavior of the one-dimensional damped mass–spring system shown schematically in Fig. 7.4 is modeled by the equation
d²x/dt² + (C_damping / m)(dx/dt) + (C_spring / m) x = w(t)/m,    (7.48)

where m = mass attached to spring and damper,
x = upward displacement of mass from its rest position,
C_spring = spring constant of spring,
C_damping = damping coefficient of dashpot,
w(t) = disturbing force acting on mass.
Systems of First-Order Linear Differential Equations The so-called state space models for dynamic systems replace higher order differential equations with systems of first-order differential equations. This can be done by defining the first n − 1 derivatives of an nth-order differential equation as state variables.
Example 7.2 State Space Model for Harmonic Oscillator Equation 7.48 is a linear second-order (n = 2) differential equation. It can be transformed into a system of two linear first-order differential equations with state variables

x_1 ≝ x (mass displacement),    x_2 ≝ dx/dt (mass velocity),

namely

dx_1/dt = x_2,    (7.49)
dx_2/dt = −(C_spring / m) x_1 − (C_damping / m) x_2 + w(t)/m.    (7.50)
Fig 7.4 Schematic model for dynamic system of Example 7.1.
Systems of linear first-order differential equations represented in ``long-hand'' form as

dx_1/dt = f_11 x_1 + f_12 x_2 + f_13 x_3 + ⋯ + f_1n x_n + w_1,
dx_2/dt = f_21 x_1 + f_22 x_2 + f_23 x_3 + ⋯ + f_2n x_n + w_2,
   ⋮
dx_n/dt = f_n1 x_1 + f_n2 x_2 + f_n3 x_3 + ⋯ + f_nn x_n + w_n,

can be represented more compactly in matrix form as

dx/dt = F x + w,    (7.51)

where

x = [ x_1  x_2  x_3  ⋯  x_n ]^T,    F = [ f_ij ] (an n × n matrix),    w = [ w_1  w_2  w_3  ⋯  w_n ]^T,

respectively.
Example 7.3 Harmonic Resonator Model in Matrix Form For the system of linear differential equations 7.49 and 7.50,

x = [ x_1  x_2 ]^T,    F = [ 0              1
                             −C_spring/m   −C_damping/m ],    w = [ 0   w(t)/m ]^T.    (7.53)
Eigenvalues of Dynamic Coefficient Matrices The coefficient matrix F of a system of linear differential equations dx/dt = Fx + w has effective units of reciprocal time, or frequency (in units of radians per second). It is perhaps then not surprising that the characteristic values (eigenvalues) of F are the characteristic frequencies of the dynamic system represented by the differential equations.
The eigenvalues of an n × n matrix F are the roots λ of its characteristic polynomial

det( λ I − F ) = 0.

The eigenvalues of F have the same interpretation as the poles of the related system transfer function, in that the dynamic system dx/dt = Fx + w is stable if and only if the solutions λ of det(λI − F) = 0 lie in the left half-plane.
Example 7.4 Damping and Resonant Frequency for Underdamped Harmonic Resonator For the dynamic coefficient matrix

F = [ 0              1
      −C_spring/m   −C_damping/m ]

in Example 7.3, the eigenvalues of F are the roots of its characteristic polynomial

det( λI − F ) = det [ λ             −1
                      C_spring/m    λ + C_damping/m ] = λ² + (C_damping/m) λ + C_spring/m,

which are

λ = −C_damping/(2m) ± (1/(2m)) √( C²_damping − 4 m C_spring ).
If the discriminant

C²_damping − 4 m C_spring < 0,

then the mass–spring system is called underdamped, and its eigenvalues are a complex conjugate pair

λ = −1/τ_damping ± i ω_resonant,

with real part

−1/τ_damping = −C_damping/(2m)

and imaginary part

±ω_resonant = ±(1/(2m)) √( 4 m C_spring − C²_damping ).

In terms of the alternative parameters τ_damping and ω_resonant, the dynamic coefficient matrix can be written as

F = [ 0                                    1
      −( ω²_resonant + 1/τ²_damping )    −2/τ_damping ].
So long as the damping coefficient C_damping > 0, the eigenvalues of this system will lie in the left half-plane. In that case, the damped mass–spring system is guaranteed to be stable.
Matrix Exponential Function The matrix exponential function is defined (in Section B.6.4) as

exp(M) ≝ Σ_{k=0}^{∞} M^k / k!.

It has the following properties:
1. The matrix N = exp(M) is always invertible, and N^{-1} = exp(−M).
2. If M is antisymmetric (i.e., its matrix transpose M^T = −M), then N = exp(M) is an orthogonal matrix (i.e., its matrix transpose N^T = N^{-1}).
3. The eigenvalues of N = exp(M) are the (scalar) exponential functions of the eigenvalues of M.
4. If M(s) is an integrable function of a scalar s, then the derivative

d/dt [ exp( ∫_{t0}^{t} M(s) ds ) ] = M(t) exp( ∫_{t0}^{t} M(s) ds ).
This last property can be used to show that the solution of the homogeneous equation dx/dt = F(t) x(t) is

x(t) = exp( ∫_{t0}^{t} F(s) ds ) x(t0),

where x(t0) is the initial value of the state vector x for t = t0.
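The first three properties above are easy to verify numerically. The following sketch is illustrative only; it uses SciPy's matrix exponential and an arbitrary antisymmetric matrix chosen for the purpose.

```python
import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 0.5], [-0.5, 0.0]])      # antisymmetric: M^T = -M
N = expm(M)

print(np.allclose(np.linalg.inv(N), expm(-M)))        # property 1: N^-1 = exp(-M)
print(np.allclose(N.T @ N, np.eye(2)))                # property 2: N is orthogonal
print(np.allclose(np.sort(np.linalg.eigvals(N)),
                  np.sort(np.exp(np.linalg.eigvals(M)))))   # property 3
```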
Time Invariant Systems If the dynamic coefficient matrix F of Eq. 7.51 does not depend on t (time), then the problem is called time invariant. In that case, the solution of the unforced system is

x(t) = exp( F (t − t0) ) x(t0).    (7.56)
If we let …, x_{k−1}, x_k, x_{k+1}, … be the corresponding state vector values of a linear dynamic system at discrete times …, t_{k−1}, t_k, t_{k+1}, …, then each of these values can be determined from the previous value by using Eq. 7.56 in the form

x_k = Φ_{k−1} x_{k−1} + w_{k−1},    (7.59)

where

Φ_{k−1} = exp( F (t_k − t_{k−1}) ).    (7.60)
Equation 7.59 is the discrete-time dynamic system model corresponding to the continuous-time dynamic system model of Eq. 7.51.
The matrix Φ_{k−1} (defined in Eq. 7.60) in the discrete-time model (Eq. 7.59) is called a state transition matrix for the dynamic system defined by F. Note that Φ depends only on F, and not on the dynamic disturbance function w(t).
The noise vectors w_k are the discrete-time analog of the dynamic disturbance function w(t). They depend upon F and w.
Example 7.5 State Transition Matrix for Harmonic Resonator Model The underdamped harmonic resonator model of Example 7.4 has no time-dependent terms in its coefficient matrix (Eq. 7.53), making it a time-invariant model with state transition matrix

Φ = exp(F Δt) = e^{−Δt/τ} [ cos(ωΔt) + sin(ωΔt)/(ωτ)            sin(ωΔt)/ω
                            −( ω² + 1/τ² ) sin(ωΔt)/ω    cos(ωΔt) − sin(ωΔt)/(ωτ) ],    (7.62)

where ω = ω_resonant, the resonant frequency, and
      τ = τ_damping, the damping time constant.
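For a time-invariant model such as this, the state transition matrix can also be computed numerically as exp(FΔt). The sketch below (with invented values of m, C_spring, C_damping, and Δt) uses SciPy's matrix exponential and compares it with the closed form quoted above.

```python
import numpy as np
from scipy.linalg import expm

# Invented resonator parameters, for illustration only
m, C_spring, C_damping, dt = 1.0, 100.0, 2.0, 0.01

F = np.array([[0.0, 1.0],
              [-C_spring / m, -C_damping / m]])
Phi_numeric = expm(F * dt)                       # state transition matrix over dt

# Closed form for the underdamped case, in terms of tau and omega
tau = 2.0 * m / C_damping                        # damping time constant
omega = np.sqrt(C_spring / m - 1.0 / tau**2)     # resonant frequency
c, s = np.cos(omega * dt), np.sin(omega * dt)
Phi_closed = np.exp(-dt / tau) * np.array(
    [[c + s / (omega * tau), s / omega],
     [-(omega**2 + 1.0 / tau**2) * s / omega, c - s / (omega * tau)]])

print(np.allclose(Phi_numeric, Phi_closed))      # True
```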
7.3.2 Covariance Prediction Models
The word stochastic derives from the Greek expression for aiming at a target, indicating some degree of uncertainty in the dynamics of the projectile between launch and impact. That idea has been formalized mathematically as stochastic systems theory, used here for modeling dynamic processes that are not fully predictable.
A stochastic process is a model for the evolution over time of a probability distribution. For Kalman filtering, this can be viewed as the probability distribution of the true state of a dynamic process. When the underlying probability distribution is Gaussian, the distribution is completely specified by its mean and its covariance. The mean will be the estimated value of the state vector, and the covariance matrix represents the mean-squared uncertainty in the estimate. The time update equations of the Kalman filter propagate the estimated value and its associated mean-squared uncertainty forward in time.
7.3.2.1 Zero-Mean White Gaussian Noise Processes A zero-mean white Gaussian noise process in discrete time is a sequence of independent samples …, w_{k−1}, w_k, w_{k+1}, … from a normal probability distribution N(0, Q_k) with zero mean and known finite covariances Q_k. In Kalman filtering, it is not necessary (but not unusual) that the covariance of all samples be the same.
Sampling is called independent if the expected values of the outer products satisfy

E⟨ w_i w_j^T ⟩ = 0 for i ≠ j (and = Q_i for i = j)

for all integer indices i and j of the random process.
Zero-mean white Gaussian noise processes are the fundamental random processes used in Kalman filtering. However, it is not necessary that all noise sources in the modeled sensors and dynamic systems be zero-mean white Gaussian noise processes. It is only necessary that they can be modeled in terms of such processes (to be addressed in Section 7.5).
7.3.2.2 Gaussian Linear Stochastic Processes in Discrete Time A linear stochastic process model in discrete time has the form

x_k = Φ_{k−1} x_{k−1} + w_{k−1},    w_{k−1} ∈ N(0, Q_{k−1}),    (7.64)

where the zero-mean white Gaussian noise sequences {w_k} with covariances Q_k represent the unknown random disturbances. Together, they model the propagation of the necessary statistical properties of the state variable x.
Example 7.6 Harmonic Resonator with White Acceleration Disturbance Noise If the disturbance acting on the harmonic resonator of Examples 7.1–7.5 were zero-mean white acceleration noise with variance σ²_disturbance, then its disturbance noise covariance matrix would have the form

Q = [ 0   0
      0   σ²_disturbance ].
Forces applied to a rigid body, for example, can affect rotational dynamics as well as translational dynamics. This sort of coupling of common disturbance noise sources into different components of the state dynamics can be represented by using a noise distribution matrix G in the form

dx/dt = F x + G w(t),

where the components of w(t) are the common disturbance noise sources and the matrix G represents how these disturbances are distributed among the state vector components.

The covariance of state vector disturbance noise will then have the form G Q_w G^T, where Q_w is the covariance matrix for the white-noise process w(t).
The analogous model in discrete time has the form

x_k = Φ_{k−1} x_{k−1} + G_{k−1} w_{k−1},    (7.67)

where {w_k} is a zero-mean white-noise process in discrete time.
In either case (i.e., continuous or discrete time), it is possible to use the noise distribution matrix for noise scaling as well, so that the components of w_k can be independent, uncorrelated unit normal variates and the noise covariance matrix Q_w = I, the identity matrix.
7.3.2.4 Predictor Equations The linear stochastic process model parameters Φ and Q can be used to calculate how the discrete-time process variables m and P evolve over time. Using Eq. 7.64 and taking expected values,

m_k = Φ_{k−1} m_{k−1},    (7.68)
P_k = Φ_{k−1} P_{k−1} Φ_{k−1}^T + Q_{k−1},    (7.69)

which, applied to the estimate and its uncertainty, give the predicted (a priori) values x̂_k(−) and P_k(−) from the corrected values x̂_{k−1}(+) and P_{k−1}(+).
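A minimal sketch of the resulting prediction step (assuming only the discrete-time model above, with no particular application implied):

```python
import numpy as np

def kalman_predict(x_post, P_post, Phi, Q):
    """Propagate the estimate and its covariance one step ahead (Eqs. 7.68 and 7.69)."""
    x_prior = Phi @ x_post                    # predicted state
    P_prior = Phi @ P_post @ Phi.T + Q        # predicted covariance
    return x_prior, P_prior
```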
7.4.2 Common Terminology
The symbols used in Table 7.2 for the variables and parameters of the Kalman filter are essentially those used in the original paper by Kalman [71], and this notation is fairly common in the literature.
The following are some names commonly used for the symbols in Table 7.2:
H is the measurement sensitivity matrix or observation matrix.
H x̂_k(−) is the predicted measurement.
z − H x̂_k(−), the difference between the measurement vector and the predicted measurement, is the innovations vector.
K is the Kalman gain.
P_k(−) is the predicted or a priori value of estimation covariance.
P_k(+) is the corrected or a posteriori value of estimation covariance.
Q_k is the covariance of dynamic disturbance noise.
R is the covariance of sensor noise or measurement uncertainty.
x̂_k(−) is the predicted or a priori value of the estimated state vector.
x̂_k(+) is the corrected or a posteriori value of the estimated state vector.
z is the measurement vector or observation vector.
7.4.3 Data Flow Diagrams
The matrix-level data flow of the Kalman filter implementation for a time-varying problem is diagrammed in Fig. 7.5, with the inputs shown on the left, the outputs (corrected estimates) on the right, and the symbol z^{-1} representing the unit delay operator.
The dashed lines in the figure enclose two computation loops. The top loop is the estimation loop, with the feedback gain (Kalman gain) coming from the bottom loop.
TABLE 7.2 Essential Kalman Filter Equations

Predictor (Time Updates)
  Predicted state vector:        x̂_k(−) = Φ_{k−1} x̂_{k−1}(+)                          (Eq. 7.68)
  Predicted covariance matrix:   P_k(−) = Φ_{k−1} P_{k−1}(+) Φ_{k−1}^T + Q_{k−1}        (Eq. 7.69)

Corrector (Measurement Updates)
  Kalman gain:                   K_k = P_k(−) H_k^T ( H_k P_k(−) H_k^T + R_k )^{-1}     (Eq. 7.46)
  Corrected state estimate:      x̂_k(+) = x̂_k(−) + K_k ( z_k − H_k x̂_k(−) )           (Eq. 7.1)
  Corrected covariance matrix:   P_k(+) = P_k(−) − K_k H_k P_k(−)                        (Eq. 7.47)
The bottom loop implements the Riccati equation solution used to calculate the Kalman gain. This bottom loop runs ``open loop,'' in that there is no feedback mechanism to stabilize it in the presence of roundoff errors. Numerical instability problems with the Riccati equation propagation loop were discovered soon after the introduction of the Kalman filter.
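The equations of Table 7.2 can be assembled into a complete filter loop. The sketch below simulates the harmonic resonator of the earlier examples with invented noise parameters and noisy position measurements; it is an illustration of the table, not an example from the text.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Invented resonator and noise parameters
m, C_spring, C_damping, dt = 1.0, 100.0, 2.0, 0.01
F = np.array([[0.0, 1.0], [-C_spring / m, -C_damping / m]])
Phi = expm(F * dt)                               # state transition matrix
Q = np.diag([0.0, 1e-4])                         # dynamic disturbance covariance
H = np.array([[1.0, 0.0]])                       # only position is measured
R = np.array([[0.01]])                           # measurement noise covariance

x_true = np.array([1.0, 0.0])
x_hat, P = np.zeros(2), np.eye(2)

for _ in range(500):
    # Simulate the true state and a noisy measurement
    x_true = Phi @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # Predictor (time update)
    x_hat, P = Phi @ x_hat, Phi @ P @ Phi.T + Q
    # Corrector (measurement update)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = P - K @ H @ P

print(x_true, x_hat)    # the estimate should track the true state closely
```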
7.5 ACCOMMODATING CORRELATED NOISE
The fundamental noise processes in the basic Kalman filter model are zero-mean white Gaussian noise processes:

{w_k}, called dynamic disturbance, plant noise, or process noise, and
{v_k}, called sensor noise, measurement noise, or observation noise.
However, it is not necessary that the physical noise processes of the real-world application (either the dynamic disturbance or the sensor noise) be uncorrelated in order to apply Kalman filtering.
For applications with correlated noise sources, it is only necessary to model the correlated noise process ξ_k (whether it appears in v_k or in w_k) using a linear stochastic system model of the sort

ξ_k = Φ_{k−1} ξ_{k−1} + η_{k−1},

where {η_k} is a zero-mean white Gaussian noise process, and then augment the state vector by appending the new variable ξ,

x_augmented = [ x_original
                ξ ],    (7.70)

and modify the parameter matrices Φ, Q, and H accordingly.
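A sketch of the bookkeeping this augmentation involves, assuming for illustration a single exponentially correlated measurement-bias state ξ appended to an invented two-state model (all numerical values are hypothetical):

```python
import numpy as np

# Original two-state model (invented values)
Phi0 = np.array([[1.0, 0.01], [0.0, 1.0]])
Q0 = np.diag([0.0, 1e-4])
H0 = np.array([[1.0, 0.0]])

# Exponentially correlated noise state: xi_k = rho * xi_{k-1} + eta_{k-1}
rho, q_xi = 0.99, 1e-3

# Augmented model: x_aug = [x_original; xi]
Phi = np.block([[Phi0, np.zeros((2, 1))],
                [np.zeros((1, 2)), np.array([[rho]])]])
Q = np.block([[Q0, np.zeros((2, 1))],
              [np.zeros((1, 2)), np.array([[q_xi]])]])
H = np.hstack([H0, np.array([[1.0]])])    # the correlated bias adds to the measurement

print(Phi.shape, Q.shape, H.shape)        # (3, 3) (3, 3) (1, 3)
```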
7.5.1 Correlated Noise Models
7.5.1.1 Autocovariance Functions Correlation of a random sequence {ξ_k} is characterized by its discrete-time autocovariance function P_{ξ}(Δk), a function of the delay index Δk defined as

P_{ξ}(Δk) ≝ E_k⟨ (ξ_k − m_ξ)(ξ_{k+Δk} − m_ξ)^T ⟩,    (7.71)

where m_ξ is the mean value of the random sequence {ξ_k}.
For white-noise processes,

P(Δk) = 0 for Δk ≠ 0,    P(0) = C,    (7.72)

where C is the covariance of the process.
7.5.1.2 Random Walks Random walks, also called Wiener processes, are cumulative sums of white-noise processes {w_k}:

ξ_k = ξ_{k−1} + w_{k−1},

a stochastic process model with state transition matrix Φ = I, an identity matrix. Random walks are notoriously unstable, in the sense that the covariance of the variate ξ_k grows linearly with k and without bound as k → ∞. In general, if any of the eigenvalues of a state transition matrix fall on or outside the unit circle in the complex plane (as they all do for identity matrices), the variate of the stochastic process can fail to have a finite steady-state covariance matrix. However, the covariance of uncertainty in the estimated system state vector can still converge to a finite steady-state value, even if the process itself is unstable. Methods for determining whether estimation uncertainties will diverge are presented in Chapter 8.
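The unbounded linear growth of a random walk's variance is easy to see by simulation; this is an illustrative sketch with an invented step variance, not an example from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
q, steps, runs = 1.0, 200, 2000              # step variance, walk length, Monte Carlo runs

# Many independent random walks: cumulative sums of white noise
walks = np.cumsum(rng.normal(0.0, np.sqrt(q), size=(runs, steps)), axis=1)
print(np.var(walks[:, -1]))                  # approximately q * steps; grows without bound
```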
7.5.1.3 Exponentially Correlated Noise Exponentially correlated random processes have finite, constant steady-state covariances. A scalar exponentially correlated random process {x_k} has a model of the sort

x_k = ρ x_{k−1} + w_{k−1},    |ρ| < 1,

where Q is the variance of the scalar zero-mean white-noise process {w_k}. The autocovariance sequence of an exponentially correlated random process in discrete time has the general form

P(Δk) = σ² e^{−|Δk|/N_c},

which falls off exponentially on either side of its peak value σ² (the process variance) at Δk = 0. The parameter N_c is called the correlation number of the process, where N_c = τ/Δt for correlation time τ and sample interval Δt.
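A discrete-time exponentially correlated process of this kind can be simulated directly. In the sketch below the parameter values are invented; the one-step coefficient is taken as ρ = exp(−1/N_c), and the driving-noise variance is chosen using the steady-state relation σ² = Q/(1 − ρ²) for the scalar model above.

```python
import numpy as np

rng = np.random.default_rng(2)
Nc = 50                           # correlation number (correlation time / sample interval)
rho = np.exp(-1.0 / Nc)           # one-step correlation coefficient
sigma2 = 4.0                      # desired steady-state process variance (invented)
Q = sigma2 * (1.0 - rho**2)       # driving white-noise variance giving that steady state

x = np.zeros(20000)
for k in range(1, len(x)):
    x[k] = rho * x[k - 1] + rng.normal(0.0, np.sqrt(Q))

print(np.var(x[1000:]))           # close to sigma2 once the transient has died out
```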
7.5.1.4 Harmonic Noise Harmonic noise includes identifiable frequency components, such as those from AC power or from mechanical or electrical resonances. A stochastic process model for such sources has already been developed in the examples of this chapter.
7.5.1.5 SA Noise Autocorrelation A clock dithering algorithm is described in U.S. Patent 4,646,032 [134], including a parametric model of the autocorrelation function (autocovariance function divided by variance) of the resulting timing errors. SA was suspended on May 1, 2000, but can be turned on again. Knowledge of the dithering algorithm does not necessarily give the user any advantage, but there is at least a suspicion that this may be the algorithm used for SA dithering of the individual GPS satellite time references. Its theoretical autocorrelation function is plotted in Fig. 7.6 along with an exponential correlation curve. The two are scaled to coincide at the autocorrelation coefficient value of 1/e ≈ 0.36787944…, the argument at which correlation time is defined. Unlike exponentially correlated noise, this source has greater short-term correlation and less long-term correlation. The correlation time of SA errors determined from GPS signal analysis is on the order of 10²–10³ s. It is possible that the actual correlation time is variable, which might explain the range of values reported in the literature.
Although this is not an exponential autocorrelation function, it could perhaps be modeled as such.
Fig 7.6 Autocorrelation function for pseudonoise algorithm.