On Extended RLS Lattice Adaptive Variants:
Error-Feedback, Normalized, and
Array-Based Recursions
Ricardo Merched
Signal Processing Laboratory (LPS), Department of Electronics and Computer Engineering, Federal University of Rio de Janeiro, P.O. Box 68504, Rio de Janeiro, RJ 21945-970, Brazil
Email: merched@lps.ufrj.br
Received 12 May 2004; Revised 10 November 2004; Recommended for Publication by Hideaki Sakai
Error-feedback, normalized, and array-based recursions represent equivalent RLS lattice adaptive filters which are known to offer better numerical properties under finite-precision implementations. This is the case when the underlying data structure arises from a tapped-delay-line model for the input signal. On the other hand, in the context of a more general orthonormality-based input model, these variants have not yet been derived and their behavior under finite precision is unknown. This paper develops several lattice structures for the exponentially weighted RLS problem under orthonormality-based data structures, including error-feedback, normalized, and array-based forms. It turns out that, besides being nonminimal, the new recursions present unstable modes as well as hyperbolic rotations, so that the well-known good numerical properties observed in the FIR case no longer hold. We verify via simulations that, compared to the standard extended lattice equations, these variants do not improve the robustness to quantization, unlike what is normally expected for FIR models.
Keywords and phrases: RLS algorithm, orthonormal model, lattice, regularized least squares.
1 INTRODUCTION
In a recent paper [1], a new framework for exploiting data structure in recursive least-squares (RLS) problems was introduced. As a result, we have shown how to derive RLS lattice recursions for more general orthonormal networks other than tapped-delay-line implementations [2]. As is well known, the original fast RLS algorithms are obtained by exploiting the shift-structure property of the successive rows of the input data matrix fed to the adaptive algorithm. That is, consider two successive regression (row) vectors $\{u_{M,N}, u_{M,N+1}\}$ of order $M$, say,
$$u_{M,N} = \begin{bmatrix} u_0(N) & u_1(N) & \cdots & u_{M-1}(N) \end{bmatrix} = \begin{bmatrix} u_{M-1,N} & u_{M-1}(N) \end{bmatrix},$$
$$u_{M,N+1} = \begin{bmatrix} u_0(N+1) & u_1(N+1) & \cdots & u_{M-1}(N+1) \end{bmatrix} = \begin{bmatrix} u_0(N+1) & \bar{u}_{M-1,N+1} \end{bmatrix}. \tag{1}$$
By recognizing that in tapped-delay-line models we have
$$\bar{u}_{M-1,N+1} = u_{M-1,N}, \tag{2}$$
one can exploit this relation to obtain the LS solution in a fast manner. The key to extending this concept to the more general structures in [1, 3] was to show that, although the above equality no longer holds for general orthonormal models, it is still possible to relate the entries of $\{u_{M,N}, u_{M,N+1}\}$ as
$$\bar{u}_{M-1,N+1} = u_{M,N} \Phi_M, \tag{3}$$
where $\Phi_M$ is an $M \times (M-1)$ structured matrix induced by the underlying orthonormal model. Figure 1 illustrates the structure for which the RLS lattice algorithm of [1] was derived. These recursions constitute what we will refer to in this paper as
the a-posteriori-based lattice algorithm, since all these recursions are based on a posteriori estimation errors. Now, it is a well-understood fact that several other equivalent lattice structures exist for RLS filters that result from tapped-delay-line models. These alternative implementations are known in the literature as error-feedback, array-based (also referred to as QRD lattice), and normalized lattice algorithms (see, e.g., [4, 5, 6, 7, 8]). In [9], all such variants were further extended to the special case of Laguerre-based filters, as we have explained in [1]. Although all these forms are theoretically equivalent, they tend to exhibit different performances when considered under finite-precision effects.
In this paper, we will derive all such equivalent lattice implementations for input data models based on the structure
Figure 1: Transversal orthonormal structure for adaptive filtering. (The figure shows the input $s(N)$ driving a cascade of first-order all-pass sections $(z^{-1} - a_m^*)/(1 - a_m z^{-1})$, $m = 0, 1, \ldots, M-1$, whose outputs are scaled by $A_m/(1 - a_m z^{-1})$ and combined through the weight vector $w_{M,N}$ to form the estimate $\hat{d}(N)$.)
of Figure 1. The use of orthonormal bases can provide several advantages. For example, in some situations, long FIR models can be replaced by shorter, compact all-pass-like models such as Laguerre filters (see, e.g., [10, 11]). From the adaptive filtering point of view, this can represent large savings in computational complexity. The conventional IIR adaptive methods [12, 13] present serious problems of stability, local minima, and slow convergence, and in this sense the use of orthogonal bases offers a stable and global solution, due to their fixed pole locations. Moreover, orthonormality guarantees good numerical conditioning for the underlying estimation problem, in contrast to other equivalent system descriptions (such as the fixed-denominator model and the partial-fraction representation; see further [2]). The most important application of such structured RLS problems is in the field of line echo cancelation corresponding to long channels, whereby FIR models can be replaced by short orthonormal IIR models. Other applications include channel-estimate-based equalization schemes, where the feedforward linear equalizer can be similarly replaced by an orthonormal IIR structure.
After obtaining the new algorithms, we will verify their performance through computer simulations under finite-precision arithmetic. As a result, the new forms turn out to exhibit an unstable behavior. Besides the nonminimality of their corresponding algorithm states, they present unstable modes or hyperbolic rotations in their recursions, unlike the corresponding fast variants for FIR models (the latter, in contrast, are free from hyperbolic rotations and unstable modes, and present better numerical properties, despite nonminimality). As a consequence, the new variants do not show improvement in robustness to the quantization effect, compared to the standard RLS lattice recursions of [1], which remain the only reliable extended lattice structure. This discussion on the numerical effects is provided in Section 9.
However, before starting our presentation, we call the reader's attention to an important point. Our main goal in this paper is the development of the equivalent RLS recursions that are natural extensions of the FIR case, and to present some preliminary comparisons based on computer simulations. A complete analytical error analysis for each of these algorithms is not a simple task and is beyond the scope of this paper. Nevertheless, the derivation of the algorithms is by itself a starting point for further development and improvements on such variants, which is a subject for future research. Moreover, we will provide a brief review and discussion of the minimality and backward consistency properties in order to explain (up to a certain extent) the stability of these variants from the point of view of error propagation. This will be pursued in Section 9, while commenting on the sources of numerical errors in each case.
Notation. In this paper, $A \oplus B$ is the same as $\mathrm{diag}\{A, B\}$. We also denote by ${}^*$ the conjugate transpose of a vector. Since we will be dealing with order-recursive variables, we will write, for example, $H_{M,N}$ for the order-$M$ data matrix up to time $N$. The same goes for $u_{M,N}$, $e_M(N)$, and so on.
2 A MODIFIED RLS ALGORITHM
We first provide a brief review of the regularized least-squares problem, but with a slight modification in the definitions of the desired vector, denoted by $y_N$, and the weighting matrix $W_N$. Thus, given a column vector $y_N \in \mathbb{C}^{N+1}$ and a data matrix $H_{M,N} \in \mathbb{C}^{(N+1) \times M}$, the exponentially weighted least-squares problem seeks the column vector $w_M \in \mathbb{C}^M$ that solves
$$\min_{w_M} \left[ \lambda^{N+1} w_M^* \Pi_M^{-1} w_M + \left\| d_N - H_{M,N} w_M \right\|^2_{W_N} \right], \tag{4}$$
where $\Pi_M$ is a positive regularization matrix, $W_N = (\lambda^N \oplus \lambda^{N-1} \oplus \cdots \oplus \lambda \oplus t)$ is a weighting matrix defined in terms of a forgetting factor $\lambda$ satisfying $0 < \lambda < 1$, and $t$ is an arbitrary scaling factor. The symbol ${}^*$ denotes complex conjugate transposition. Moreover, we define $d_N$ as a growing-length vector whose entries are assumed to change according to the following rule:
$$d_N = \begin{bmatrix} \theta d_{N-1} \\ d(N) \end{bmatrix} \tag{5}$$
for some scalar $\theta$.¹ The individual rows of $H_{M,N}$ are denoted by $\{u_i\}$:
$$H_{M,N} = \begin{bmatrix} u_{M,0} \\ u_{M,1} \\ \vdots \\ u_{M,N} \end{bmatrix}. \tag{6}$$
Note that the regularized problem in (4) can be conveniently written as
$$\min_{w_M} \left\| \begin{bmatrix} 0_L \\ d_N \end{bmatrix} - \begin{bmatrix} A_{M,L} \\ H_{M,N} \end{bmatrix} w_M \right\|^2_{\bar{W}_N}, \tag{7}$$
where $\bar{W}_N = (\lambda^{N+L} \oplus \lambda^{N+L-1} \oplus \cdots \oplus t)$, and where we have factored $\Pi_M^{-1}$ as
$$\Pi_M^{-1} = A_{M,L}^* \left( \lambda^{L-1} \oplus \cdots \oplus \lambda \oplus 1 \right) A_{M,L} \tag{8}$$
for some matrix $A_{M,L}$, the diagonal factor matching the first $L$ entries of $\bar{W}_N$ scaled by $\lambda^{-(N+1)}$. This assumes that the incoming data has started at some point in the past, depending on the number of rows $L$ of $A_{M,L}$ (see [1]). Hence, defining the extended quantities
$$\mathcal{H}_{M,N} \triangleq \begin{bmatrix} A_{M,L} \\ H_{M,N} \end{bmatrix} = \begin{bmatrix} x_{0,-1} & x_{1,-1} & \cdots & x_{M-1,-1} \\ h_{0,N} & h_{1,N} & \cdots & h_{M-1,N} \end{bmatrix} = \begin{bmatrix} x_{0,N} & x_{1,N} & \cdots & x_{M-1,N} \end{bmatrix}, \tag{9}$$
where $x_{i,-1}$ represents a column of $A_{M,L}$ and $h_{i,N}$ denotes a column of $H_{M,N}$, as well as
$$y_N = \begin{bmatrix} 0_L \\ d_N \end{bmatrix}, \tag{10}$$
we can express (4) as a pure least-squares problem:
$$\min_{w_M} \left\| y_N - \mathcal{H}_{M,N} w_M \right\|^2_{\bar{W}_N}. \tag{11}$$
Therefore, the optimal solution of (11), denoted by $w_{M,N}$, is given by
$$w_{M,N} = P_{M,N} \mathcal{H}_{M,N}^* \bar{W}_N y_N, \tag{12}$$
where
$$P_{M,N} = \left( \mathcal{H}_{M,N}^* \bar{W}_N \mathcal{H}_{M,N} \right)^{-1}. \tag{13}$$
We denote the projection of $y_N$ onto the range space of $\mathcal{H}_{M,N}$ by $\hat{y}_{M,N} = \mathcal{H}_{M,N} w_{M,N}$. The corresponding a posteriori estimation error vector is given by $e_N = y_N - \mathcal{H}_{M,N} w_{M,N}$.
¹ The reason for the introduction of the scalars $\{\theta, t\}$ will become clear shortly. The classical recursive least-squares (RLS) problem corresponds to the special choice $\theta = t = 1$.
Now let $w_{M,N-1}$ be the solution to a similar LS problem with the variables $\{y_N, \mathcal{H}_{M,N}, \bar{W}_N, \lambda^{N+1}\}$ in (4) replaced by $\{y_{N-1}, \mathcal{H}_{M,N-1}, \bar{W}_{N-1}, \lambda^N\}$. That is,
$$w_{M,N-1} = \left( \mathcal{H}_{M,N-1}^* \bar{W}_{N-1} \mathcal{H}_{M,N-1} \right)^{-1} \mathcal{H}_{M,N-1}^* \bar{W}_{N-1} y_{N-1}. \tag{14}$$
Using (5) and the fact that
$$\mathcal{H}_{M,N} = \begin{bmatrix} \mathcal{H}_{M,N-1} \\ u_{M,N} \end{bmatrix}, \tag{15}$$
in addition to the matrix inversion formula, it is straightforward to verify that the following (modified) RLS recursions hold:
$$\begin{aligned}
\gamma_M^{-1}(N) &= 1 + t \lambda^{-1} u_{M,N} P_{M,N-1} u_{M,N}^*, \\
g_{M,N} &= \lambda^{-1} P_{M,N-1} u_{M,N}^* \gamma_M(N), \\
\epsilon_M(N) &= d(N) - \theta u_{M,N} w_{M,N-1}, \\
w_{M,N} &= \theta w_{M,N-1} + t g_{M,N} \epsilon_M(N), \\
P_{M,N} &= \lambda^{-1} P_{M,N-1} - g_{M,N} \gamma_M^{-1}(N) g_{M,N}^*,
\end{aligned} \tag{16}$$
with $w_{M,-1} = 0_M$ and $P_{M,-1} = \Pi_M$. These recursions tell us how to update the weight estimate $w_{M,N}$ in time. The well-known exponentially weighted RLS algorithm corresponds to the special choice $\theta = t = 1$. The introduction of the scalars $\{\theta, t\}$ allows for a level of generality that is convenient for our purposes in the coming sections.
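For concreteness, a minimal NumPy sketch of one time step of (16) follows. It is an illustration under naming conventions of our own (modified_rls_step, lam, and so on), not part of the algorithm listings of this paper:

```python
import numpy as np

def modified_rls_step(w, P, u, d, lam=0.99, theta=1.0, t=1.0):
    """One time update of the modified RLS recursions (16).

    w : weight estimate w_{M,N-1}, shape (M, 1)
    P : matrix P_{M,N-1}, shape (M, M)
    u : regression row vector u_{M,N}, shape (1, M)
    d : desired sample d(N)
    The classical exponentially weighted RLS algorithm is theta = t = 1.
    """
    uh = u.conj().T                                    # u*_{M,N}
    gamma_inv = 1.0 + (t / lam) * (u @ P @ uh).item()  # 1/gamma_M(N)
    g = (P @ uh) / (lam * gamma_inv)                   # gain g_{M,N}
    eps = d - theta * (u @ w).item()                   # a priori error
    w_new = theta * w + t * g * eps                    # weight update
    P_new = P / lam - gamma_inv * (g @ g.conj().T)     # update of P_{M,N}
    return w_new, P_new, eps
```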
3 STANDARD LATTICE RECURSIONS
Algorithm 1 lists the standard extended RLS lattice recursions of [1], which solve the RLS problem when the underlying input regression vectors arise from the orthonormal network of Figure 1. The matrix $\Pi_M$, as well as all the initialization variables, is obtained according to an offline procedure described in [1]. The main step in this initialization procedure is the computation of $\Pi_M$, which remains unchanged for the new recursions we will present in the next sections. The reader should refer to [1] for the details of its computation. Figure 2 illustrates the structure of the $m$th section of this extended lattice algorithm.
4 ERROR-FEEDBACK LATTICE FILTERS
Observe that all the reflection coefficients defined for the a-posteriori-based lattice algorithm are computed as a ratio in which the numerator and denominator are updated via separate recursions. An error-feedback form is one that replaces the individual recursions for the numerator and denominator quantities by equivalent recursions for the reflection coefficients themselves. In principle, one could derive the recursions algebraically as follows. Consider, for instance,
$$\kappa_M(N) = \frac{\rho_M(N)}{\zeta_M^b(N)}. \tag{17}$$
Initialization. For $m = 0$ to $M$, set
  $\zeta_m^f(-1) = \mu$ ($\mu$ a small positive number)
  $\delta_m(-1) = \rho_m(-1) = v_m(-1) = b_m(-1) = 0$
  $\zeta_m^b(-1) = \pi_m - c_m^* \Pi_m c_m$
  $\zeta_m^{\breve{b}}(-1) = \breve{\pi}_{m+1} - \breve{c}_m^* \bar{\Pi}_m \breve{c}_m$
  $\sigma_m = \lambda \zeta_m^{\breve{b}}(-1)/\zeta_m^f(-1)$
  $\chi_m(-1) = a_m \phi_m^* \Pi_m c_m + A_m$
  $\zeta_m^{\bar{b}}(-1) = \zeta_{m+1}^b(-1)$
For $N \geq 0$, repeat:
  $\gamma_0(N) = \bar{\gamma}_0(N) = 1$, $\quad f_0(N) = b_0(N) = s(N)$
  $v_0(N) = 0$, $\quad e_0(N) = d(N)$
  For $m = 0$ to $M-1$, repeat:
    $\zeta_m^{\breve{b}}(N) = \sigma_m \bar{\gamma}_m(N) \zeta_m^f(N-1)$
    $\kappa_m^{\bar{b}}(N) = \zeta_m^{\breve{b}}(N) \chi_{m+1}(N-1)$
    $\bar{b}_m(N) = a_{m+1} b_{m+1}(N-1) + \kappa_m^{\bar{b}}(N) v_{m+1}(N-1)$
    $\zeta_m^f(N) = \lambda \zeta_m^f(N-1) + |f_m(N)|^2/\bar{\gamma}_m(N)$
    $\zeta_m^b(N) = \lambda \zeta_m^b(N-1) + |b_m(N)|^2/\gamma_m(N)$
    $\zeta_m^{\bar{b}}(N) = \lambda \zeta_m^{\bar{b}}(N-1) + |\bar{b}_m(N)|^2/\bar{\gamma}_m(N)$
    $\chi_m(N) = \chi_m(N-1) + a_m v_m^*(N) \beta_m(N)$
    $\delta_m(N) = \lambda \delta_m(N-1) + f_m^*(N) \bar{b}_m(N)/\bar{\gamma}_m(N)$
    $\rho_m(N) = \lambda \rho_m(N-1) + e_m^*(N) b_m(N)/\gamma_m(N)$
    $\gamma_{m+1}(N) = \gamma_m(N) - |b_m(N)|^2/\zeta_m^b(N)$
    $\bar{\gamma}_{m+1}(N) = \bar{\gamma}_m(N) - |\bar{b}_m(N)|^2/\zeta_m^{\bar{b}}(N)$
    $\kappa_m^v(N) = \chi_m^*(N)/\zeta_m^b(N)$, $\quad \kappa_m(N) = \rho_m(N)/\zeta_m^b(N)$
    $\kappa_m^b(N) = \delta_m(N)/\zeta_m^f(N)$, $\quad \kappa_m^f(N) = \delta_m^*(N)/\zeta_m^{\bar{b}}(N)$
    $v_{m+1}(N) = -a_m^* v_m(N) + \kappa_m^v(N) b_m(N)$
    $e_{m+1}(N) = e_m(N) - \kappa_m(N) b_m(N)$
    $b_{m+1}(N) = \bar{b}_m(N) - \kappa_m^b(N) f_m(N)$
    $f_{m+1}(N) = f_m(N) - \kappa_m^f(N) \bar{b}_m(N)$
Alternative recursions:
  $\zeta_{m+1}^f(N) = \zeta_m^f(N) - |\delta_m(N)|^2/\zeta_m^{\bar{b}}(N)$
  $\zeta_{m+1}^b(N) = \zeta_m^{\bar{b}}(N) - |\delta_m(N)|^2/\zeta_m^f(N)$
  $\zeta_m^v(N) = \lambda^{-1} \zeta_m^v(N-1) - |v_m(N)|^2/\gamma_m(N)$
  $\zeta_m^{\bar{b}}(N) = |a_{m+1}|^2 \zeta_{m+1}^b(N-1) + \zeta_m^{\breve{b}}(N) |\chi_{m+1}(N-1)|^2$
  $\bar{\gamma}_m(N) = \gamma_{m+1}(N-1) + \zeta_m^{\breve{b}}(N) |v_{m+1}(N-1)|^2$
Algorithm 1: Standard extended RLS lattice recursions.
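To make the per-section data flow of Algorithm 1 concrete, the following Python sketch (with names of our own) performs the error order updates of one lattice section, assuming the reflection coefficients for time $N$ have already been computed:

```python
import numpy as np

def lattice_section_update(f_m, b_m, v_m, e_m, b_next_prev, v_next_prev,
                           a_m, a_next, k_v, k, k_b, k_f, k_bbar):
    """Error order updates of one section of Algorithm 1.

    f_m, b_m, v_m, e_m       : order-m errors at time N
    b_next_prev, v_next_prev : b_{m+1}(N-1) and v_{m+1}(N-1)
    a_m, a_next              : poles a_m and a_{m+1} of the model
    k_*                      : reflection coefficients at time N
    """
    b_bar = a_next * b_next_prev + k_bbar * v_next_prev  # \bar{b}_m(N)
    v_next = -np.conj(a_m) * v_m + k_v * b_m             # v_{m+1}(N)
    e_next = e_m - k * b_m                               # e_{m+1}(N)
    b_next = b_bar - k_b * f_m                           # b_{m+1}(N)
    f_next = f_m - k_f * b_bar                           # f_{m+1}(N)
    return f_next, b_next, v_next, e_next
```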
From the listing of the a-posteriori-based lattice filter in Algorithm 1, substituting the time updates for $\rho_M(N)$ and $\zeta_M^b(N)$ into the expression for $\kappa_M(N)$ leads to
$$\kappa_M(N) = \frac{\lambda \rho_M(N-1) + e_M^*(N) b_M(N)/\gamma_M(N)}{\lambda \zeta_M^b(N-1) + |b_M(N)|^2/\gamma_M(N)}, \tag{18}$$
and some algebra will result in a relation between $\kappa_M(N)$ and $\kappa_M(N-1)$.
Figure 2: A lattice section. (The section maps the inputs $\{f_m(N), b_m(N), v_m(N), e_m(N)\}$ to the outputs $\{f_{m+1}(N), b_{m+1}(N), v_{m+1}(N), e_{m+1}(N)\}$ through the coefficients $\{-a_m^*, a_{m+1}, \kappa_m^v(N), \kappa_m^{\bar{b}}(N), \kappa_m^f(N), \kappa_m^b(N), \kappa_m(N)\}$ and two delay elements $z^{-1}$.)
We will not pursue this algebraic procedure here. Instead, we will follow the arguments used in [9], which highlight the interpretation of the reflection coefficients in terms of a least-squares problem. This will allow us to invoke the recursions we have already established for the modified RLS problem of Section 2 and to arrive at the recursions for the reflection coefficients almost by inspection.
4.1 A priori estimation errors
One form of error-feedback algorithm is based on a priori, as opposed to a posteriori, estimation errors. They are defined as
$$\begin{aligned}
\beta_{M+1,N} &= x_{M+1,N} - \mathcal{H}_{M+1,N} w_{M+1,N-1}^b, \\
\bar{\beta}_{M,N} &= x_{M+1,N} - \bar{\mathcal{H}}_{M,N} \bar{w}_{M,N-1}^b, \\
\alpha_{M+1,N} &= x_{0,N} - \bar{\mathcal{H}}_{M,N} w_{M,N-1}^f, \\
\epsilon_{M,N} &= y_N - \mathcal{H}_{M,N} w_{M,N-1},
\end{aligned} \tag{19}$$
where now the a posteriori weight vector $w_{M,N}^f$, for example, is replaced by $w_{M,N-1}^f$. That is, these recursions are similar to the ones used for the a posteriori errors $\{e_{M,N}, \bar{b}_{M,N}, b_{M+1,N}, f_{M+1,N}\}$, with the only difference lying in the use of prior weight vector estimates.
Following the same arguments as in Section III of [1], it can be verified that the last entries of these errors satisfy the following order-update relations in terms of the same reflection coefficients $\{\kappa_M(N), \kappa_M^f(N), \kappa_M^b(N)\}$:
$$\begin{aligned}
\epsilon_{M+1}(N) &= \epsilon_M(N) - \kappa_M(N-1) \beta_M(N), \\
\beta_{M+1}(N) &= \bar{\beta}_M(N) - \kappa_M^b(N-1) \alpha_M(N), \\
\alpha_{M+1}(N) &= \alpha_M(N) - \kappa_M^f(N-1) \bar{\beta}_M(N),
\end{aligned} \tag{20}$$
Trang 5where{ κ M f (N), κb
M(N), κM(N)}can be updated as
κ M f (N)= κ M f(N−1) +β¯∗ M(N) ¯γM(N)
ζ M ¯b(N) α M+1(N),
κ b M(N)= κ b M(N−1) +α ∗ M(N) ¯γM(N)
ζ M f(N) β M+1(N),
κ M(N)= κ M(N−1) +β ∗ M(N)γM(N)
ζ M b(N) M+1(N)
(21)
The above recursions are well known and they are obtained
regardless of data structure
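As a simple illustration of the error-feedback idea, the updates in (21) modify the reflection coefficients directly from the estimation errors, rather than through separate numerator and denominator recursions as in (17). A minimal sketch, again under our own naming:

```python
import numpy as np

def reflection_updates(k_f, k_b, k, alpha, beta, beta_bar,
                       gamma, gamma_bar, zf, zb, zbbar,
                       alpha_next, beta_next, eps_next):
    """Error-feedback time updates (21) of the reflection coefficients."""
    k_f_new = k_f + np.conj(beta_bar) * gamma_bar / zbbar * alpha_next
    k_b_new = k_b + np.conj(alpha) * gamma_bar / zf * beta_next
    k_new = k + np.conj(beta) * gamma / zb * eps_next
    return k_f_new, k_b_new, k_new
```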
Now, recall that the a-posteriori-based algorithm still requires the recursions for $\{\bar{b}_M(N), v_M(N)\}$, where $v_M(N)$ is referred to as the a posteriori rescue variable. As we will see in the upcoming sections, similar arguments will also lead to the quantities $\{\bar{\beta}_M(N), \nu_M(N)\}$, where $\nu_M(N)$ will be similarly defined as the a priori rescue variable corresponding to $v_M(N)$. These in turn will allow us to obtain recursions for their corresponding reflection coefficients $\{\kappa_M^{\bar{b}}(N), \kappa_M^v(N)\}$. Moreover, we will verify that $\nu_M(N)$ is the actual rescue quantity used in the fixed-order fast transversal algorithm, which is based on a priori estimation errors.
4.2 Exploiting data structure
The procedure to find a recursion for $\bar{\beta}_{M,N}$ follows similarly to the one for the a posteriori error $\bar{b}_{M,N}$. Thus, beginning from its definition,
$$\begin{aligned}
\bar{\beta}_{M,N} &= x_{M+1,N} - \bar{\mathcal{H}}_{M,N} \bar{P}_{M,N-1} \bar{\mathcal{H}}_{M,N-1}^* \bar{W}_{N-1} x_{M+1,N-1} \\
&= x_{M+1,N} - \begin{bmatrix} 0 \\ \mathcal{H}_{M+1,N-1} \end{bmatrix} \Phi_{M+1} \bar{P}_{M,N-1} \Phi_{M+1}^* \begin{bmatrix} 0 & \mathcal{H}_{M+1,N-2}^* \end{bmatrix} \bar{W}_{N-1} x_{M+1,N-1}, \tag{22}
\end{aligned}$$
where $\Phi_{M+1}$ is the matrix that relates $\{\mathcal{H}_{M+1,N-1}, \bar{\mathcal{H}}_{M,N}\}$. Using the following relations in (22) (see [1]):
$$\begin{aligned}
\Phi_{M+1} \bar{P}_{M,N-1} \Phi_{M+1}^* &= P_{M+1,N-2} - \zeta_M^{\breve{b}}(N-1) P_{M+1,N-2} \phi_{M+1} \phi_{M+1}^* P_{M+1,N-2}, \\
x_{M+1,N-1} &= a_{M+1} \begin{bmatrix} 0 \\ x_{M+1,N-2} \end{bmatrix} + \frac{A_{M+1}}{A_M} \left( \begin{bmatrix} 0 \\ x_{M,N-2} \end{bmatrix} - a_M^* x_{M,N-1} \right),
\end{aligned} \tag{23}$$
we obtain, after some algebra,
$$\bar{\beta}_M(N) = a_{M+1} \beta_{M+1}(N-1) + \zeta_M^{\breve{b}}(N-1) \chi_{M+1}(N-2) \, \lambda k_{M+1,N-1}^* \phi_{M+1}, \tag{24}$$
where
$$k_{M,N} = g_{M,N} \gamma_M^{-1}(N) \tag{25}$$
is the normalized gain vector, defined by the corresponding fast fixed-order recursions. Thus, defining the a priori rescue variable
$$\nu_{M+1}(N-1) \triangleq k_{M+1,N-1}^* \phi_{M+1}, \tag{26}$$
we have
$$\bar{\beta}_M(N) = a_{M+1} \beta_{M+1}(N-1) + \lambda \kappa_M^{\bar{b}}(N-1) \nu_{M+1}(N-1). \tag{27}$$
In order to obtain a recursion for $\nu_M(N)$, consider the order-update recursion for $k_{M,N}$, that is,
$$k_{M+1,N} = \begin{bmatrix} k_{M,N} \\ 0 \end{bmatrix} + \frac{\beta_M^*(N)}{\lambda \zeta_M^b(N-1)} \begin{bmatrix} -w_{M,N-1}^b \\ 1 \end{bmatrix}. \tag{28}$$
Taking the complex conjugate transpose of (28) and multiplying it from the right by $\phi_{M+1}$, we get
$$\nu_{M+1}(N) = -a_M^* \nu_M(N) + \lambda^{-1} \kappa_M^v(N-1) \beta_M(N). \tag{29}$$
Of course, an equivalent recursion for $\chi_M(N)$ can be obtained by considering the time update for $w_{M,N}^b$, which can be written as
$$\begin{bmatrix} -w_{M,N}^b \\ 1 \end{bmatrix} = \begin{bmatrix} -w_{M,N-1}^b \\ 1 \end{bmatrix} - \begin{bmatrix} k_{M,N} \\ 0 \end{bmatrix} b_M(N). \tag{30}$$
Hence, multiplying (30) from the left by $\phi_{M+1}^*$, we get
$$\chi_M(N) = \chi_M(N-1) + a_M \nu_M^*(N) b_M(N). \tag{31}$$
Now, it only remains to find recursions for the reflection coefficients $\{\kappa_M^{\bar{b}}(N), \kappa_M^v(N)\}$.
4.3 Time updates for $\{\kappa_M^{\bar{b}}(N), \kappa_M^v(N)\}$
We now obtain time relations for the reflection coefficients by exploiting the fact that these coefficients can be regarded as least-squares solutions of order one [9, 14].
We begin with the reflection coefficient
$$\kappa_M^{\bar{b}}(N) = \zeta_M^{\breve{b}}(N) \chi_{M+1}(N-1) = \frac{\chi_{M+1}(N-1)}{\zeta_{M+1}^v(N-1)}, \tag{32}$$
where, from (31) and Section 5.1 of [1], the numerator and denominator quantities satisfy
$$\begin{aligned}
\chi_M(N) &= \chi_M(N-1) + a_M \nu_M^*(N) b_M(N), \\
\zeta_M^v(N) &= \lambda^{-1} \zeta_M^v(N-1) - \frac{|v_M(N)|^2}{\gamma_M(N)}.
\end{aligned} \tag{33}$$
Now define the angle-normalized errors
$$\tilde{b}_M(N) \triangleq \frac{b_M(N)}{\gamma_M^{1/2}(N)} = \beta_M(N) \gamma_M^{1/2}(N), \qquad \tilde{v}_M(N) \triangleq \frac{v_M(N)}{\gamma_M^{1/2}(N)} = \nu_M(N) \gamma_M^{1/2}(N) \tag{34}$$
in terms of the square root of the conversion factor $\gamma_M(N)$ (throughout, a tilde denotes an angle-normalized quantity). It then follows from the above time updates for $\chi_M(N)$ and $\zeta_M^v(N)$ that $\{\chi_M(N), \zeta_M^v(N)\}$ can be recognized as the inner products
$$\chi_M(N) = a_M \tilde{v}_{M,N}^* \tilde{b}_{M,N}, \qquad \zeta_M^v(N) = \tilde{v}_{M,N}^* \bar{W}_N^{-1} \tilde{v}_{M,N}, \tag{35}$$
which are written in terms of the following vectors of angle-normalized prediction errors:
$$\tilde{b}_{M,N} \triangleq \begin{bmatrix} \tilde{b}_M(-L) \\ \tilde{b}_M(-L+1) \\ \vdots \\ \tilde{b}_M(N) \end{bmatrix}, \qquad \tilde{v}_{M,N} \triangleq \begin{bmatrix} \tilde{v}_M(-L) \\ \tilde{v}_M(-L+1) \\ \vdots \\ \tilde{v}_M(N) \end{bmatrix}. \tag{36}$$
In this way, the defining relation (32) for $\kappa_M^{\bar{b}}(N)$ can be written as
$$\kappa_M^{\bar{b}}(N) = \left( \tilde{v}_{M+1,N-1}^* \bar{W}_{N-1}^{-1} \tilde{v}_{M+1,N-1} \right)^{-1} \tilde{v}_{M+1,N-1}^* \bar{W}_{N-1}^{-1} \left( a_{M+1} \bar{W}_{N-1} \tilde{b}_{M,N-1} \right), \tag{37}$$
which shows that $\kappa_M^{\bar{b}}(N)$ can be interpreted as the solution of a first-order weighted least-squares problem, namely that of projecting the vector $(a_{M+1} \bar{W}_{N-1} \tilde{b}_{M,N-1})$ onto the vector $\tilde{v}_{M+1,N-1}$. This simple observation shows that $\kappa_M^{\bar{b}}(N)$ can be readily time updated by invoking the modified RLS recursion introduced in Section 2. That is, by making the identifications $\theta \to \lambda$ and $t \to -1$, we have
$$\begin{aligned}
\kappa_M^{\bar{b}}(N) &= \lambda \kappa_M^{\bar{b}}(N-1) - \frac{v_{M+1}^*(N-1)}{\zeta_M^v(N)} \left[ -a_{M+1} b_{M+1}(N-1) - \lambda v_{M+1}(N-1) \kappa_M^{\bar{b}}(N-1) \right] \\
&= \lambda \kappa_M^{\bar{b}}(N-1) + \frac{v_{M+1}^*(N-1)}{\zeta_M^v(N)} \left[ a_{M+1} \beta_{M+1}(N-1) + \lambda \nu_{M+1}(N-1) \kappa_M^{\bar{b}}(N-1) \right] \\
&= \lambda \kappa_M^{\bar{b}}(N-1) + \zeta_M^{\breve{b}}(N) v_{M+1}^*(N-1) \bar{\beta}_M(N). \tag{38}
\end{aligned}$$
This last equation is obtained from the update for $\bar{\beta}_M(N)$ in (27). Similarly, the weight $\kappa_M^v(N)$ can be expressed as
$$\kappa_M^v(N) = \left( \tilde{b}_{M,N}^* \bar{W}_N \tilde{b}_{M,N} \right)^{-1} \tilde{b}_{M,N}^* \bar{W}_N \left( a_M^* \bar{W}_N^{-1} \tilde{v}_{M,N} \right), \tag{39}$$
and therefore, making the identifications $\theta \to \lambda^{-1}$ and $t \to 1$, we can justify the following time update:
$$\begin{aligned}
\kappa_M^v(N) &= \lambda^{-1} \kappa_M^v(N-1) + \frac{b_M^*(N)}{\zeta_M^b(N)} \left[ a_M^* v_M(N) - \lambda^{-1} b_M(N) \kappa_M^v(N-1) \right] \\
&= \lambda^{-1} \kappa_M^v(N-1) - \frac{b_M^*(N)}{\zeta_M^b(N)} \nu_{M+1}(N). \tag{40}
\end{aligned}$$
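In code, the compact form of (40) reads as follows (a sketch with our own names). Note the $\lambda^{-1}$ factor acting on the previous coefficient, which is one of the potential sources of instability discussed in Section 9:

```python
import numpy as np

def kappa_v_update(k_v_prev, b, nu_next, zb, lam):
    """Error-feedback time update (40) for kappa^v_M(N)."""
    return k_v_prev / lam - np.conj(b) / zb * nu_next
```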
A similar approach will also lead to the time updates of $\{\kappa_M^f(N), \kappa_M^b(N), \kappa_M(N)\}$ defined previously. Algorithm 2 shows the a-priori-based lattice recursions with error feedback.²
5 A-POSTERIORI-BASED REFLECTION COEFFICIENT RECURSIONS
Alternative recursions for the reflection coefficients $\{\kappa_M^v(N), \kappa_M^f(N), \kappa_M^b(N), \kappa_M(N)\}$ that are based on a posteriori errors can also be obtained. The resulting reflection coefficient updates possess the advantage of avoiding the multiplicative factor $\lambda^{-1}$ in the corresponding error-feedback recursions, which represents a potential source of instability of the algorithm.
Thus, consider for example the first equality of (38). It can be written as
$$\kappa_M^{\bar{b}}(N) = \left[ 1 + \frac{|v_{M+1}(N-1)|^2}{\zeta_M^v(N)} \right] \lambda \kappa_M^{\bar{b}}(N-1) + \frac{a_{M+1} v_{M+1}^*(N-1) b_{M+1}(N-1)}{\gamma_{M+1}(N-1) \zeta_M^v(N)}. \tag{41}$$
Recalling that $\bar{\gamma}_M(N)$ has the update
$$\bar{\gamma}_M(N) = \gamma_{M+1}(N-1) + \frac{|v_{M+1}(N-1)|^2}{\zeta_M^v(N)}, \tag{42}$$
we have that
$$\frac{\bar{\gamma}_M(N)}{\gamma_{M+1}(N-1)} = \frac{\zeta_{M+1}^v(N-2)}{\lambda \zeta_{M+1}^v(N-1)} = \frac{\zeta_M^{\breve{b}}(N)}{\lambda \zeta_M^{\breve{b}}(N-1)} = 1 + \frac{|v_{M+1}(N-1)|^2}{\zeta_M^v(N)}, \tag{43}$$
² Observe that the standard lattice filter obtained in [1] performs feedback of several estimation error quantities from a higher-order problem, for example, $b_{M+1}(N-1)$, into the computation of $\bar{b}_M(N)$. The definition of error feedback in fast adaptive filters, however, refers to the feedback of such estimation errors into the computation of the reflection coefficients themselves.
Initialization. For $m = 0$ to $M$, set ($\mu$ a small positive number)
  $\kappa_m(-1) = \kappa_m^b(-1) = \kappa_m^f(-1) = \nu_m(-1) = \beta_m(-1) = 0$
  $\zeta_m^f(-1) = \mu$
  $\zeta_m^b(-1) = \pi_m - c_m^* \Pi_m c_m$
  $\zeta_m^{\breve{b}}(-1) = \breve{\pi}_{m+1} - \breve{c}_m^* \bar{\Pi}_m \breve{c}_m$
  $\sigma_m = \lambda \zeta_m^{\breve{b}}(-1)/\zeta_m^f(-1)$
  $\chi_m(-1) = a_m \phi_m^* \Pi_m c_m + A_m$
  $\kappa_m^{\bar{b}}(-1) = \zeta_m^{\breve{b}}(-1) \chi_m(-1)$
  $\kappa_m^v(-1) = \chi_m^*(-1)/\zeta_m^b(-1)$
  $\zeta_m^{\bar{b}}(-1) = \zeta_{m+1}^b(-1)$
For $N \geq 0$, repeat:
  $\gamma_0(N) = \bar{\gamma}_0(N) = 1$, $\quad \alpha_0(N) = \beta_0(N) = s(N)$
  $\nu_0(N) = 0$, $\quad \epsilon_0(N) = d(N)$
  For $m = 0$ to $M-1$, repeat:
    $\zeta_m^{\breve{b}}(N) = \sigma_m \bar{\gamma}_m(N) \zeta_m^f(N-1)$
    $\bar{\beta}_m(N) = a_{m+1} \beta_{m+1}(N-1) + \lambda \kappa_m^{\bar{b}}(N-1) \nu_{m+1}(N-1)$
    $\kappa_m^{\bar{b}}(N) = \lambda \kappa_m^{\bar{b}}(N-1) + \zeta_m^{\breve{b}}(N) v_{m+1}^*(N-1) \bar{\beta}_m(N)$
    $\zeta_m^f(N) = \lambda \zeta_m^f(N-1) + |\alpha_m(N)|^2 \bar{\gamma}_m(N)$
    $\zeta_m^b(N) = \lambda \zeta_m^b(N-1) + |\beta_m(N)|^2 \gamma_m(N)$
    $\zeta_m^{\bar{b}}(N) = \lambda \zeta_m^{\bar{b}}(N-1) + |\bar{\beta}_m(N)|^2 \bar{\gamma}_m(N)$
    $\nu_{m+1}(N) = -a_m^* \nu_m(N) + \lambda^{-1} \kappa_m^v(N-1) \beta_m(N)$
    $\epsilon_{m+1}(N) = \epsilon_m(N) - \kappa_m(N-1) \beta_m(N)$
    $\beta_{m+1}(N) = \bar{\beta}_m(N) - \kappa_m^b(N-1) \alpha_m(N)$
    $\alpha_{m+1}(N) = \alpha_m(N) - \kappa_m^f(N-1) \bar{\beta}_m(N)$
    $\kappa_m^v(N) = \lambda^{-1} \kappa_m^v(N-1) - \dfrac{\beta_m^*(N) \gamma_m(N)}{\zeta_m^b(N)} \nu_{m+1}(N)$
    $\kappa_m^f(N) = \kappa_m^f(N-1) + \dfrac{\bar{\beta}_m^*(N) \bar{\gamma}_m(N)}{\zeta_m^{\bar{b}}(N)} \alpha_{m+1}(N)$
    $\kappa_m^b(N) = \kappa_m^b(N-1) + \dfrac{\alpha_m^*(N) \bar{\gamma}_m(N)}{\zeta_m^f(N)} \beta_{m+1}(N)$
    $\kappa_m(N) = \kappa_m(N-1) + \dfrac{\beta_m^*(N) \gamma_m(N)}{\zeta_m^b(N)} \epsilon_{m+1}(N)$
    $\gamma_{m+1}(N) = \gamma_m(N) - |b_m(N)|^2/\zeta_m^b(N)$
    $\bar{\gamma}_{m+1}(N) = \bar{\gamma}_m(N) - |\bar{b}_m(N)|^2/\zeta_m^{\bar{b}}(N)$
Algorithm 2: The a-priori-based extended RLS lattice filter with error feedback.
so that we can write (41) as
$$\kappa_M^{\bar{b}}(N) = \lambda \frac{\bar{\gamma}_M(N)}{\gamma_{M+1}(N-1)} \left[ \kappa_M^{\bar{b}}(N-1) + \frac{a_{M+1} \zeta_M^{\breve{b}}(N-1) v_{M+1}^*(N-1) b_{M+1}(N-1)}{\gamma_{M+1}(N-1)} \right]. \tag{44}$$
In a similar fashion, we can obtain the following recursion for $\kappa_M^v(N)$ from (40):
$$\kappa_M^v(N) = \frac{\gamma_{M+1}(N)}{\lambda \gamma_M(N)} \left[ \kappa_M^v(N-1) + \frac{a_M^* b_M^*(N) v_M(N)}{\gamma_M(N) \zeta_M^b(N-1)} \right], \tag{45}$$
where we have used the fact that
$$\frac{\gamma_{M+1}(N)}{\gamma_M(N)} = \frac{\lambda \zeta_M^b(N-1)}{\zeta_M^b(N)} = 1 - \frac{|\tilde{b}_M(N)|^2}{\zeta_M^b(N)}. \tag{46}$$
We can thus derive similar updates for the other reflection coefficients. Algorithm 3 is the resulting a-posteriori-based algorithm.
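For comparison with (40), a sketch of the a-posteriori-based update (45) in code (our own naming); here $\lambda^{-1}$ enters only through the conversion-factor ratio of (46):

```python
import numpy as np

def kappa_v_posteriori(k_v_prev, b, v, a_m, gamma, gamma_next, zb_prev, lam):
    """A-posteriori-based time update (45) for kappa^v_M(N)."""
    corr = np.conj(a_m) * np.conj(b) * v / (gamma * zb_prev)
    return gamma_next / (lam * gamma) * (k_v_prev + corr)
```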
6 NORMALIZED EXTENDED RLS LATTICE ALGORITHM
A normalized lattice algorithm is an equivalent variant that replaces each pair of cross-reflection coefficient updates, that is, $\{\kappa_M^f(N), \kappa_M^b(N)\}$ and $\{\kappa_M^v(N), \kappa_M^{\bar{b}}(N)\}$, by alternative updates based on single coefficients, which we denote by $\{\eta_M(N)\}$ and $\{\varphi_M(N)\}$. This is possible because each pair is related to a single parameter: $\{\kappa_M^f(N), \kappa_M^b(N)\}$ to $\delta_M(N)$, and $\{\kappa_M^v(N), \kappa_M^{\bar{b}}(N)\}$ to $\chi_M(N)$. The reflection coefficient $\kappa_M(N)$ is likewise replaced by $\omega_M(N)$.
6.1 Recursion for $\eta_M(N)$
We start by defining the coefficient
$$\eta_M(N) \triangleq \frac{\delta_M^*(N)}{\zeta_M^{\bar{b}/2}(N) \zeta_M^{f/2}(N)} \tag{47}$$
along with the normalized prediction errors (denoted by a prime)
$$\begin{aligned}
b_M'(N) &\triangleq \frac{b_M(N)}{\gamma_M^{1/2}(N) \zeta_M^{b/2}(N)}, & f_M'(N) &\triangleq \frac{f_M(N)}{\bar{\gamma}_M^{1/2}(N) \zeta_M^{f/2}(N)}, \\
\bar{b}_M'(N) &\triangleq \frac{\bar{b}_M(N)}{\bar{\gamma}_M^{1/2}(N) \zeta_M^{\bar{b}/2}(N)}, & v_M'(N) &\triangleq \frac{v_M(N)}{\gamma_M^{1/2}(N) \zeta_M^{v/2}(N)}.
\end{aligned} \tag{48}$$
Now, referring to Algorithm 1, we substitute the updating equation for $\alpha_{M+1}(N)$ into the recursion for $\kappa_M^f(N)$. This yields
$$\kappa_M^f(N) = \kappa_M^f(N-1) \left[ 1 - |\bar{b}_M'(N)|^2 \right] + \frac{f_M(N) \bar{b}_M^*(N)}{\zeta_M^{\bar{b}}(N) \bar{\gamma}_M(N)}. \tag{49}$$
Initialization. For $m = 0$ to $M$, set ($\mu$ a small positive number)
  $\kappa_m(-1) = \kappa_m^b(-1) = \kappa_m^f(-1) = \nu_m(-1) = \beta_m(-1) = 0$
  $\zeta_m^f(-1) = \mu$
  $\zeta_m^b(-1) = \pi_m - c_m^* \Pi_m c_m$
  $\zeta_m^{\breve{b}}(-1) = \breve{\pi}_{m+1} - \breve{c}_m^* \bar{\Pi}_m \breve{c}_m$
  $\sigma_m = \lambda \zeta_m^{\breve{b}}(-1)/\zeta_m^f(-1)$
  $\chi_m(-1) = a_m \phi_m^* \Pi_m c_m + A_m$
  $\kappa_m^{\bar{b}}(-1) = \zeta_m^{\breve{b}}(-1) \chi_m(-1)$
  $\kappa_m^v(-1) = \chi_m^*(-1)/\zeta_m^b(-1)$
  $\zeta_m^{\bar{b}}(-1) = \zeta_{m+1}^b(-1)$
For $N \geq 0$, repeat:
  $\gamma_0(N) = \bar{\gamma}_0(N) = 1$, $\quad f_0(N) = b_0(N) = s(N)$
  $v_0(N) = 0$, $\quad e_0(N) = d(N)$
  For $m = 0$ to $M-1$, repeat:
    $\zeta_m^{\breve{b}}(N) = \sigma_m \bar{\gamma}_m(N) \zeta_m^f(N-1)$
    $\kappa_m^{\bar{b}}(N) = \lambda \dfrac{\bar{\gamma}_m(N)}{\gamma_{m+1}(N-1)} \left[ \kappa_m^{\bar{b}}(N-1) + \dfrac{a_{m+1} \zeta_m^{\breve{b}}(N-1) v_{m+1}^*(N-1) b_{m+1}(N-1)}{\gamma_{m+1}(N-1)} \right]$
    $\bar{b}_m(N) = a_{m+1} b_{m+1}(N-1) + \kappa_m^{\bar{b}}(N) v_{m+1}(N-1)$
    $\zeta_m^f(N) = \lambda \zeta_m^f(N-1) + |f_m(N)|^2/\bar{\gamma}_m(N)$
    $\zeta_m^b(N) = \lambda \zeta_m^b(N-1) + |b_m(N)|^2/\gamma_m(N)$
    $\zeta_m^{\bar{b}}(N) = \lambda \zeta_m^{\bar{b}}(N-1) + |\bar{b}_m(N)|^2/\bar{\gamma}_m(N)$
    $\gamma_{m+1}(N) = \gamma_m(N) - |b_m(N)|^2/\zeta_m^b(N)$
    $\bar{\gamma}_{m+1}(N) = \bar{\gamma}_m(N) - |\bar{b}_m(N)|^2/\zeta_m^{\bar{b}}(N)$
    $\kappa_m^v(N) = \dfrac{\gamma_{m+1}(N)}{\lambda \gamma_m(N)} \left[ \kappa_m^v(N-1) + \dfrac{a_m^* b_m^*(N) v_m(N)}{\gamma_m(N) \zeta_m^b(N-1)} \right]$
    $\kappa_m^b(N) = \dfrac{\gamma_{m+1}(N)}{\bar{\gamma}_m(N)} \left[ \kappa_m^b(N-1) + \dfrac{f_m^*(N) \bar{b}_m(N)}{\bar{\gamma}_m(N) \zeta_m^f(N-1)} \right]$
    $\kappa_m^f(N) = \dfrac{\bar{\gamma}_{m+1}(N)}{\bar{\gamma}_m(N)} \left[ \kappa_m^f(N-1) + \dfrac{\bar{b}_m^*(N) f_m(N)}{\bar{\gamma}_m(N) \zeta_m^{\bar{b}}(N-1)} \right]$
    $\kappa_m(N) = \dfrac{\gamma_{m+1}(N)}{\gamma_m(N)} \left[ \kappa_m(N-1) + \dfrac{b_m^*(N) e_m(N)}{\gamma_m(N) \zeta_m^b(N-1)} \right]$
    $v_{m+1}(N) = -a_m^* v_m(N) + \kappa_m^v(N) b_m(N)$
    $e_{m+1}(N) = e_m(N) - \kappa_m(N) b_m(N)$
    $b_{m+1}(N) = \bar{b}_m(N) - \kappa_m^b(N) f_m(N)$
    $f_{m+1}(N) = f_m(N) - \kappa_m^f(N) \bar{b}_m(N)$
Algorithm 3: The a-posteriori-based extended RLS lattice filter with direct reflection coefficient updates.
Multiplying both sides by the ratio $\zeta_M^{\bar{b}/2}(N)/\zeta_M^{f/2}(N)$, we obtain
$$\eta_M(N) = \frac{\zeta_M^{\bar{b}/2}(N)}{\zeta_M^{f/2}(N)} \kappa_M^f(N-1) \left[ 1 - |\bar{b}_M'(N)|^2 \right] + f_M'(N) \bar{b}_M'^*(N). \tag{50}$$
However, from the time-update recursions for $\zeta_M^{\bar{b}}(N)$ and $\zeta_M^f(N)$, the following relations hold:
$$\zeta_M^{\bar{b}/2}(N) = \frac{\lambda^{1/2} \zeta_M^{\bar{b}/2}(N-1)}{\sqrt{1 - |\bar{b}_M'(N)|^2}}, \qquad \zeta_M^{f/2}(N) = \frac{\lambda^{1/2} \zeta_M^{f/2}(N-1)}{\sqrt{1 - |f_M'(N)|^2}}. \tag{51}$$
Substituting these equations into (50), we obtain the desired time-update recursion for the first reflection coefficient:
$$\eta_M(N) = \eta_M(N-1) \sqrt{1 - |\bar{b}_M'(N)|^2} \sqrt{1 - |f_M'(N)|^2} + f_M'(N) \bar{b}_M'^*(N). \tag{52}$$
This recursion is in terms of the normalized errors $\{\bar{b}_M'(N), f_M'(N)\}$. We thus need to determine order updates for these errors. Dividing the order-update equation for $b_{M+1}(N)$ by $\zeta_{M+1}^{b/2}(N) \gamma_{M+1}^{1/2}(N)$, we obtain
$$b_{M+1}'(N) = \frac{\bar{b}_M(N) - \kappa_M^b(N) f_M(N)}{\zeta_{M+1}^{b/2}(N) \gamma_{M+1}^{1/2}(N)}. \tag{53}$$
Using the order-update relation for $\zeta_M^b(N)$, we also have
$$\zeta_{M+1}^b(N) = \zeta_M^{\bar{b}}(N) \left[ 1 - |\eta_M(N)|^2 \right]. \tag{54}$$
In addition, the relation for $\gamma_M(N)$,
$$\gamma_{M+1}(N) = \bar{\gamma}_M(N) - \frac{|f_M(N)|^2}{\zeta_M^f(N)}, \tag{55}$$
can be written as
$$\gamma_{M+1}(N) = \bar{\gamma}_M(N) \left[ 1 - |f_M'(N)|^2 \right]. \tag{56}$$
Therefore, substituting (54) and (56) into (53), we obtain
$$b_{M+1}'(N) = \frac{\bar{b}_M'(N) - \eta_M^*(N) f_M'(N)}{\sqrt{1 - |f_M'(N)|^2} \sqrt{1 - |\eta_M(N)|^2}}. \tag{57}$$
Similarly, using the order updates for $f_{M+1}(N)$, $\zeta_M^f(N)$, and $\bar{\gamma}_M(N)$, we obtain
$$f_{M+1}'(N) = \frac{f_M'(N) - \eta_M(N) \bar{b}_M'(N)}{\sqrt{1 - |\bar{b}_M'(N)|^2} \sqrt{1 - |\eta_M(N)|^2}}. \tag{58}$$
6.2 Recursion for $\omega_M(N)$
In a similar vein, we introduce the normalized error
$$e_M'(N) \triangleq \frac{e_M(N)}{\gamma_M^{1/2}(N) \zeta_M^{1/2}(N)} \tag{59}$$
and the coefficient
$$\omega_M(N) \triangleq \frac{\rho_M(N)}{\zeta_M^{b/2}(N) \zeta_M^{1/2}(N)}. \tag{60}$$
Using the order updates for $\zeta_M(N)$ and $\gamma_M(N)$, we can establish the following recursion:
$$e_{M+1}'(N) = \frac{e_M'(N) - \omega_M(N) b_M'(N)}{\sqrt{1 - |b_M'(N)|^2} \sqrt{1 - |\omega_M(N)|^2}}. \tag{61}$$
To obtain a time update for $\omega_M(N)$, we first substitute the recursion for $e_{M+1}(N)$ into the time update for $\kappa_M(N)$. Then, multiplying the resulting equation by the ratio $\zeta_M^{b/2}(N)/\zeta_M^{1/2}(N)$, and using the time updates for $\zeta_M^b(N)$ and $\zeta_M(N)$, we obtain
$$\omega_M(N) = \sqrt{1 - |b_M'(N)|^2} \sqrt{1 - |e_M'(N)|^2} \, \omega_M(N-1) + b_M'^*(N) e_M'(N). \tag{62}$$
Note that when $\bar{b}_M(N) = b_M(N-1)$, the recursions derived so far collapse to the well-known FIR normalized RLS lattice algorithm. For general structures, however, we need to derive a recursion for the normalized variable $\bar{b}_M'(N)$ as well. This can be achieved by normalizing the order update for $\bar{b}_M(N)$:
$$\bar{b}_M'(N) = \frac{a_{M+1} b_{M+1}(N-1) + \kappa_M^{\bar{b}}(N) v_{M+1}(N-1)}{\zeta_M^{\bar{b}/2}(N) \bar{\gamma}_M^{1/2}(N)}. \tag{63}$$
In order to simplify this equation, we need to relate $\zeta_M^{\bar{b}}(N)$ to $\zeta_{M+1}^b(N-1)$, and $\bar{\gamma}_M(N)$ to $\gamma_{M+1}(N-1)$. Recalling the alternative update for $\zeta_M^{\bar{b}}(N)$:
$$\zeta_M^{\bar{b}}(N) = |a_{M+1}|^2 \zeta_{M+1}^b(N-1) + \frac{|\chi_{M+1}(N-1)|^2}{\zeta_{M+1}^v(N-1)}, \tag{64}$$
we get
$$\zeta_M^{\bar{b}}(N) = \zeta_{M+1}^b(N-1) \left[ |a_{M+1}|^2 + |\varphi_{M+1}(N-1)|^2 \right], \tag{65}$$
where we have defined the reflection coefficient
$$\varphi_M(N) \triangleq \frac{\chi_M(N)}{\zeta_M^{b/2}(N) \zeta_M^{v/2}(N)}. \tag{66}$$
In order to relate $\{\bar{\gamma}_M(N), \gamma_{M+1}(N-1)\}$, we resort to the alternative relation of Algorithm 1:
$$\bar{\gamma}_M(N) = \gamma_{M+1}(N-1) + \zeta_M^{\breve{b}}(N) |v_{M+1}(N-1)|^2, \tag{67}$$
which can be written as
$$\bar{\gamma}_M(N) = \gamma_{M+1}(N-1) \left[ 1 + |v_{M+1}'(N-1)|^2 \right]. \tag{68}$$
Substituting (65) and (68) into (63), we obtain
$$\bar{b}_M'(N) = \frac{a_{M+1} b_{M+1}'(N-1) + \varphi_{M+1}(N-1) v_{M+1}'(N-1)}{\sqrt{1 + |v_{M+1}'(N-1)|^2} \sqrt{|a_{M+1}|^2 + |\varphi_{M+1}(N-1)|^2}}. \tag{69}$$
This equation requires an order update for the normalized quantity $v_M'(N)$. From the order update for $v_M(N)$, we can write
$$v_{M+1}'(N) = \frac{-a_M^* v_M(N) + \kappa_M^v(N) b_M(N)}{\zeta_{M+1}^{v/2}(N) \gamma_{M+1}^{1/2}(N)}. \tag{70}$$
Similarly to (63), we need to relate $\{\zeta_{M+1}^v(N), \gamma_{M+1}(N)\}$ with $\{\zeta_M^v(N), \gamma_M(N)\}$. Thus, recall that these quantities satisfy the following order updates:
$$\begin{aligned}
\zeta_{M+1}^v(N) &= |a_M|^2 \zeta_M^v(N) + \frac{|\chi_M(N)|^2}{\zeta_M^b(N)}, \\
\gamma_{M+1}(N) &= \gamma_M(N) - \frac{|b_M(N)|^2}{\zeta_M^b(N)},
\end{aligned} \tag{71}$$
which lead to the following relations:
$$\begin{aligned}
\zeta_{M+1}^v(N) &= \zeta_M^v(N) \left[ |a_M|^2 + |\varphi_M(N)|^2 \right], \\
\gamma_{M+1}(N) &= \gamma_M(N) \left[ 1 - |b_M'(N)|^2 \right].
\end{aligned} \tag{72}$$
Taking the square root on both sides of (72) and substituting into (70), we get
$$v_{M+1}'(N) = \frac{-a_M^* v_M'(N) + \varphi_M^*(N) b_M'(N)}{\sqrt{1 - |b_M'(N)|^2} \sqrt{|a_M|^2 + |\varphi_M(N)|^2}}. \tag{73}$$
6.3 Recursion for $\varphi_M(N)$
This is the only remaining recursion; $\varphi_M(N)$ was defined via (66). To derive an update for it, we proceed similarly to the recursions for $\{\eta_M(N), \omega_M(N)\}$. First, we substitute the update for $\nu_{M+1}(N)$ into the update for $\kappa_M^v(N)$ in Algorithm 1. This gives
$$\kappa_M^v(N) = \lambda^{-1} \left[ 1 - |b_M'(N)|^2 \right] \kappa_M^v(N-1) + \frac{a_M^* b_M^*(N) v_M(N)}{\zeta_M^b(N)}. \tag{74}$$
Then, multiplying the above equation by $\zeta_M^{b/2}(N)/\zeta_M^{v/2}(N)$ and using the fact that
$$\zeta_M^{b/2}(N) = \frac{\lambda^{1/2} \zeta_M^{b/2}(N-1)}{\sqrt{1 - |b_M'(N)|^2}}, \qquad \zeta_M^{v/2}(N) = \frac{\lambda^{-1/2} \zeta_M^{v/2}(N-1)}{\sqrt{1 + |v_M'(N)|^2}} \tag{75}$$
(see the equalities in (43) and (46)), we get
$$\varphi_M(N) = \sqrt{1 + |v_M'(N)|^2} \sqrt{1 - |b_M'(N)|^2} \, \varphi_M(N-1) + a_M^* b_M'^*(N) v_M'(N). \tag{76}$$
Algorithm 4 is the resulting normalized extended RLS lattice algorithm. For compactness of notation, and in order to save computations, we introduced the variables
$$\left\{ r_M^b(N), r_M^f(N), r_M^e(N), r_M^v(N), r_M^{\varphi}(N), r_M^{\bar{b}}(N), r_M^{\eta}(N), r_M^{\omega}(N) \right\}. \tag{77}$$
Note that the normalized algorithm returns the normalized least-squares residual $e_{M+1}'(N)$. The original error $e_{M+1}(N)$ can be easily recovered, since the normalization factor can be computed recursively by
$$\zeta_{M+1}^{1/2}(N) \gamma_{M+1}^{1/2}(N) = r_M^b(N) r_M^{\omega}(N) \zeta_M^{1/2}(N) \gamma_M^{1/2}(N). \tag{78}$$
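For instance, a sketch (our own naming) of how $e_{M+1}(N)$ could be recovered while propagating the normalization factor through (78):

```python
def recover_residual(e_norm_next, norm_factor, r_b, r_w):
    """Recover e_{M+1}(N) from the normalized residual e'_{M+1}(N).

    norm_factor : zeta_M^{1/2}(N) * gamma_M^{1/2}(N)
    r_b, r_w    : the variables r_M^b(N) and r_M^w(N) of Algorithm 4
    """
    norm_factor_next = r_b * r_w * norm_factor   # recursion (78)
    return e_norm_next * norm_factor_next, norm_factor_next
```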
7 ARRAY-BASED LATTICE ALGORITHM
We now derive another equivalent lattice form, albeit one that is described in terms of compact arrays.
To arrive at the array form, we first define the following quantities:
$$\begin{aligned}
q_M^b(N) &\triangleq \frac{\delta_M(N)}{\zeta_M^{\bar{b}/2}(N)}, & q_M^f(N) &\triangleq \frac{\delta_M^*(N)}{\zeta_M^{f/2}(N)}, \\
q_M^{\bar{b}}(N) &\triangleq \frac{\chi_M(N)}{\zeta_M^{v/2}(N)}, & q_M^v(N) &\triangleq \frac{\chi_M^*(N)}{\zeta_M^{b/2}(N)}.
\end{aligned} \tag{79}$$
Initialization. For $m = 0$ to $M$, set ($\mu$ a small positive number)
  $\eta_m(-1) = \omega_m(-1) = b_m'(-1) = v_m'(-1) = 0$
  $\zeta_m^b(-1) = \pi_m - c_m^* \Pi_m c_m$
  $\varphi_{m+1}(-1) = \left[ \dfrac{\breve{\pi}_{m+1} - \breve{c}_m^* \bar{\Pi}_m \breve{c}_m}{\zeta_{m+1}^b(-1)} \right]^{1/2} \left( a_m \phi_m^* \Pi_m c_m + A_m \right)$
For $N \geq 0$, repeat:
  $\zeta_0^b(N) = \lambda \zeta_0^b(N-1) + |s(N)|^2$
  $\zeta_0(N) = \lambda \zeta_0(N-1) + |d(N)|^2$
  $b_0'(N) = f_0'(N) = s(N)/\zeta_0^{b/2}(N)$
  $e_0'(N) = d(N)/\zeta_0^{1/2}(N)$
  $v_0'(N) = 0$
  For $m = 0$ to $M-1$, repeat:
    $r_m^b(N) = \sqrt{1 - |b_m'(N)|^2}$, $\quad r_m^f(N) = \sqrt{1 - |f_m'(N)|^2}$
    $r_m^e(N) = \sqrt{1 - |e_m'(N)|^2}$, $\quad r_m^v(N) = \sqrt{1 + |v_m'(N)|^2}$
    $\varphi_m(N) = r_m^v(N) r_m^b(N) \varphi_m(N-1) + a_m^* b_m'^*(N) v_m'(N)$
    $r_m^{\varphi}(N) = \sqrt{|a_m|^2 + |\varphi_m(N)|^2}$
    $\bar{b}_m'(N) = \dfrac{a_{m+1} b_{m+1}'(N-1) + \varphi_{m+1}(N-1) v_{m+1}'(N-1)}{r_{m+1}^v(N-1) r_{m+1}^{\varphi}(N-1)}$
    $r_m^{\bar{b}}(N) = \sqrt{1 - |\bar{b}_m'(N)|^2}$
    $\eta_m(N) = r_m^{\bar{b}}(N) r_m^f(N) \eta_m(N-1) + f_m'(N) \bar{b}_m'^*(N)$
    $r_m^{\eta}(N) = \sqrt{1 - |\eta_m(N)|^2}$
    $\omega_m(N) = r_m^b(N) r_m^e(N) \omega_m(N-1) + b_m'^*(N) e_m'(N)$
    $r_m^{\omega}(N) = \sqrt{1 - |\omega_m(N)|^2}$
    $v_{m+1}'(N) = \dfrac{-a_m^* v_m'(N) + \varphi_m^*(N) b_m'(N)}{r_m^b(N) r_m^{\varphi}(N)}$
    $e_{m+1}'(N) = \dfrac{e_m'(N) - \omega_m(N) b_m'(N)}{r_m^b(N) r_m^{\omega}(N)}$
    $b_{m+1}'(N) = \dfrac{\bar{b}_m'(N) - \eta_m^*(N) f_m'(N)}{r_m^f(N) r_m^{\eta}(N)}$
    $f_{m+1}'(N) = \dfrac{f_m'(N) - \eta_m(N) \bar{b}_m'(N)}{r_m^{\bar{b}}(N) r_m^{\eta}(N)}$
Algorithm 4: Normalized extended RLS lattice filter.
The second step is to rewrite all the recursions in terms of the angle-normalized prediction errors $\{\tilde{b}_M(N), \tilde{e}_M(N), \tilde{v}_M(N), \tilde{\bar{b}}_M(N)\}$ defined before, for example,
$$\begin{aligned}
\chi_M(N) &= \chi_M(N-1) + a_M \tilde{v}_M^*(N) \tilde{b}_M(N), \\
\zeta_M^v(N) &= \lambda^{-1} \zeta_M^v(N-1) - |\tilde{v}_M(N)|^2, \\
\zeta_M^b(N) &= \lambda \zeta_M^b(N-1) + |\tilde{b}_M(N)|^2.
\end{aligned} \tag{80}$$
The third step is to implement a unitary (Givens) transformation $\Theta_M$ that lower triangularizes the following pre-array of numbers:
$$\begin{bmatrix} \lambda^{1/2} \zeta_M^{b/2}(N-1) & \tilde{b}_M^*(N) \\ \lambda^{-1/2} q_M^{v*}(N-1) & a_M \tilde{v}_M^*(N) \end{bmatrix} \Theta_M = \begin{bmatrix} m & 0 \\ n & p \end{bmatrix}. \tag{81}$$
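To illustrate the rotation step, here is a sketch of a $2 \times 2$ unitary (Givens) transformation that annihilates the top-right entry of a pre-array such as the one in (81). The function name is ours, and the routine assumes a nonzero (1,1) entry, which holds here since $\lambda^{1/2} \zeta_M^{b/2}(N-1) > 0$:

```python
import numpy as np

def givens_lower_triangularize(pre):
    """Compute a unitary Theta that zeros pre[0, 1], so that
    pre @ Theta is lower triangular, as required by (81)."""
    x, y = pre[0, 0], pre[0, 1]
    r = np.sqrt(abs(x) ** 2 + abs(y) ** 2)
    c = abs(x) / r                        # cosine (real)
    s = np.conj(y) * (x / abs(x)) / r     # complex sine
    theta = np.array([[c, -np.conj(s)],
                      [s, c]])            # satisfies theta @ theta^H = I
    return pre @ theta, theta
```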