Fig. 17. Scenario for the detection of a CW-LFM signal hidden by a DS-BPSK interference.
Fig. 18. PSD of the noise, the CW-LFM signal (SOI) and the DS-BPSK interference. SNR and INR fixed at 0 dB.
Linear Frequency Modulated (CW-LFM) radar signal which is being transmitted by a hostile equipment and is, therefore, unknown. This signal is intentionally transmitted in the spectral band of a known Direct-Sequence Binary Phase Shift Keying (DS-BPSK) spread-spectrum communication signal, with the aim of hindering its detection. The SOI sweeps a total bandwidth of BW = 20 MHz with a sweeping time T_p = 0.5 ms, while the DS-BPSK interfering signal employs a chip rate 1/T_c = 10.23 Mcps. The PSDs of the SOI, the interference and the noise are shown in Fig. 18.
The structure of the FRESH filter is designed based on a previous study of the performance of the sub-optimal FRESH filters. As a result, the interference rejection system incorporates an adaptive FRESH filter consisting of 5 branches, each one using an FIR filter of 1024 coefficients. The frequency shifts of the FRESH filter correspond to the 5 cycle frequencies of the DS-BPSK interference with the highest spectral correlation level, which are (Gardner et al., 1987):
{±1/T_c} for the input x(t), and {2f_c, 2f_c ± 1/T_c} for the complex conjugate of the input x*(t), where f_c is the carrier frequency and T_c is the chip duration of the DS-BPSK interference. The adaptive algorithm used is the Fast-Block Least Mean Squares (FB-LMS) algorithm (Haykin, 2001), with a convergence factor μ = 1.
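To make the branch structure concrete, the following Python sketch builds such a filter: each branch frequency-shifts the input (or its complex conjugate) by one cycle frequency and passes it through an adaptive FIR filter, and the branch outputs are summed. The function name, the normalized values of f_c and 1/T_c, the tap count and the sample-by-sample LMS update are illustrative assumptions only; the system described above uses 1024-tap branches adapted with the block frequency-domain FB-LMS algorithm rather than this simplified update.

import numpy as np

def fresh_lms(x, d, branches, n_taps=64, mu=1e-3):
    """Sample-by-sample adaptive FRESH filter (illustrative sketch).

    x        : complex input containing SOI + interference + noise
    d        : desired (reference) signal the filter tries to reproduce
    branches : list of (cycle_frequency, use_conjugate) pairs, frequencies normalized to fs = 1
    """
    idx = np.arange(len(x))
    # Frequency-shifted (and possibly conjugated) versions of the input, one per branch
    shifted = [(np.conj(x) if conj else x) * np.exp(2j * np.pi * alpha * idx)
               for alpha, conj in branches]
    w = np.zeros((len(branches), n_taps), dtype=complex)   # one FIR filter per branch
    u = np.zeros((len(branches), n_taps), dtype=complex)   # per-branch tapped delay lines
    y = np.zeros(len(x), dtype=complex)
    for k in range(len(x)):
        for b in range(len(branches)):
            u[b] = np.roll(u[b], 1)
            u[b, 0] = shifted[b][k]
        y[k] = sum(np.vdot(w[b], u[b]) for b in range(len(branches)))  # sum of w_b^H u_b
        e = d[k] - y[k]
        for b in range(len(branches)):
            w[b] += mu * np.conj(e) * u[b]                  # complex LMS update per branch
    return y, w

# Cycle frequencies of the DS-BPSK interference (assumed normalized values, for illustration):
# {+-1/Tc} for x(t) and {2fc, 2fc +- 1/Tc} for x*(t).
inv_Tc, fc = 0.1, 0.25
branches = [(+inv_Tc, False), (-inv_Tc, False),
            (2 * fc, True), (2 * fc + inv_Tc, True), (2 * fc - inv_Tc, True)]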
Next, we present some simulation results on the interception system performance obtained after the training interval has finished. Firstly, the improvement in the SIR obtained at the output of the interference rejection system is represented in Fig. 19, as a function of the input Signal-to-Interference-plus-Noise Ratio (SINR), for several values of Interference-to-Noise Ratio (INR).

Fig. 19. Simulated SIR improvement as a function of the input SINR and INR.

Fig. 20. Simulated INR at the output of the interference rejection system.

Two main results are revealed in Fig. 19:
1. The SIR improvement tends to zero as the SINR increases, because the SOI is powerful enough to mask the interference, which makes the FRESH filter fail to estimate the interference. This is corroborated by Fig. 20, where the simulated INR at the output of the interference rejection system is shown. For a high enough input SINR, the output INR matches the input INR, indicating that the FRESH filter cannot extract any of the interference power. This allows us to define a "useful region", where the interference rejection system obtains a significant SIR improvement. In our example, the useful region comprises
Fig. 21. Degradation of the PFA when the interference is present (with the interference rejection system); panel (b) corresponds to the atomic decomposition detector.
interference becomes masked by noise if the input INR is low enough (i.e. INR = 0 dB). As the input INR increases, the interference rejection system increases its effectiveness, so that the output INR decreases. That is why the output INR is lower for INR = 10 dB and INR = 20 dB than for INR = 0 dB.
The output INR can provide an idea about the degradation of the probability of false alarm (PFA) of the detector (the probability of detection when the SOI is not present). However, each particular detector is affected in a different way. We shall illustrate this fact with two different detectors. The first one is an energy detector (ED), which consists of comparing the total energy to a detection threshold set to attain a desired PFA. The second one is a detector based on atomic decomposition (AD), such as that proposed in (López Risueño et al., 2003). This detector exhibits an excellent performance for LFM signals, such as the SOI considered in our case study. The AD-based detector can be thought of as a bank of correlators or matched filters, each one matched to a chirplet (a signal with Gaussian envelope and LFM), whose maximum output is compared to a detection threshold. Both detectors process the signal by blocks, taking a decision every 1024 samples.
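As a rough sketch of these two decision rules (not the actual detectors of López Risueño et al., 2003; the chirplet grid, envelope width and thresholds below are placeholder assumptions), the test statistics for one 1024-sample block could be computed as follows:

import numpy as np

BLOCK = 1024

def energy_statistic(block):
    # ED: total energy of the block, compared against a threshold set for the desired PFA
    return np.sum(np.abs(block) ** 2)

def chirplet(n, f0, rate, sigma):
    # Chirplet: Gaussian envelope with linear frequency modulation, centered in the block
    t = np.arange(n) - n / 2.0
    g = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t ** 2))
    return g / np.linalg.norm(g)

def ad_statistic(block, dictionary):
    # AD-type detector: maximum output of a bank of chirplet-matched correlators
    return max(np.abs(np.vdot(c, block)) for c in dictionary)

# A hypothetical, very coarse chirplet dictionary; a practical detector uses a much denser grid
dictionary = [chirplet(BLOCK, f0, rate, sigma=BLOCK / 6.0)
              for f0 in np.linspace(-0.4, 0.4, 9)
              for rate in np.linspace(-2e-4, 2e-4, 5)]

def detect(block, thr_ed, thr_ad):
    # Returns the (ED, AD) decisions for one block
    return energy_statistic(block) > thr_ed, ad_statistic(block, dictionary) > thr_ad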
Fig. 21 shows the degraded PFA of the whole interception system in the presence of the interference, when the detection threshold has been determined for an input consisting of only noise. The curves clearly show the different dependence of the PFA of both detectors on the input INR. The energy detector exhibits a higher sensitivity to the interference than the AD-based one. Thus, the AD-based detector visibly degrades its PFA only for an input INR = 40 dB and above. On the contrary, the ED always exhibits a degraded PFA in the presence of the interference due to the energy excess, which is proportional to the output INR shown in Fig. 20.
Finally, we end this application example by showing the sensitivity improvement of the interception system obtained thanks to the interference rejection system. The sensitivity is defined as the SNR at the input of the interception system required to attain an objective probability of detection (PD = 90%), for a given probability of false alarm (PFA = 10⁻⁶). Thus, the detection threshold takes a different value depending on the input INR so that the PFA holds for all the INR values. The simulation results are gathered in Table 2, with all values expressed in dB. At each INR value, the sensitivities of both detectors, AD and ED, with and without the interference rejection system, are shown. The sensitivity improvement obtained by using the interference rejection system is also shown.
As can be seen, the improvement is very significant and proves the benefit of using the interference rejection system. Moreover, the improvement is higher for increasing input INR. However, there is still a sensitivity degradation as the INR increases, due to an increase in the detection threshold and/or a distortion of the SOI produced by the interference rejection system because of the signal leakage at the FRESH output (the latter only applies to AD, since ED is insensitive to the signal waveform). And, as expected, the AD-based detector outperforms the ED (López Risueño et al., 2003).
8 Summary
This chapter has described the theory of adaptive FRESH filtering. FRESH filters represent a comprehensible implementation of LAPTV filters, which are the optimum filters for estimating or extracting signal information when the signals are modelled as almost-cyclostationary stochastic processes. When dealing with complex signals, both the signal and its complex conjugate must be filtered, resulting in WLAPTV filters. The knowledge required for the design of optimal FRESH filters is rarely available beforehand in practice, which leads to the incorporation of adaptive schemes. Since FRESH filters consist of a set of LTI filters, classical adaptive algorithms can be applied by simply using the stationarized versions of the inputs of these LTI filters, which are obtained by time-averaging their statistics. Then, the optimal set of LTI filters is given by multidimensional Wiener filter theory.

In addition, thanks to their properties of signal separation in the cycle frequency domain, adaptive FRESH filters can operate blindly, that is, without a reference of the desired signal, by simply using as frequency shifts the cycle frequencies belonging uniquely to the cyclic spectrum of the desired signal. Furthermore, adaptive FRESH filters have the advantage of being able to compensate small errors in their frequency shifts, which can be present in practice due to non-ideal effects such as Doppler or the stability of the oscillators. In this case, the convergence rate of the adaptive algorithm must be carefully chosen in order to simultaneously minimize the gradient noise and the lag errors. The chapter closes with an application example, in which an adaptive FRESH filter is used to suppress known interferences in an interception system for unknown hidden signals, demonstrating the potential of adaptive FRESH filters in this field of application.
9 References
Adlard, J., Tozer, T & Burr, A (1999) Interference rejection in impulsive noise for
VLF communications, IEEE Military Communications Conference Proceedings, 1999, MILCOM 1999, pp 296–300.
Agee, B G., Schell, S V & Gardner, W A (1990) Spectral self-coherence restoral: A new
approach to blind adaptive signal extraction using antenna arrays, Proceedings of the IEEE 78(4): 753–767.
Brown, W A (1987) On the Theory of Cyclostationary Signals, Ph.D dissertation.
Chen, Y & Liang, T (2010) Application study of BA-FRESH filtering technique for
communication anti-jamming, IEEE 10th International Conference on Signal Processing (ICSP), pp 287–290.
Chevalier, P & Blin, A (2007) Widely linear MVDR beamformers for the reception of an
unknown signal corrupted by noncircular interferences, IEEE Transactions on Signal Processing 55(11): 5323–5336.
Chevalier, P & Maurice, A (1997) Constrained beamforming for cyclostationary signals,
International Conference on Acoustics, Speech, and Signal Processing, ICASSP-97.
Chevalier, P & Pipon, F (2006) New insights into optimal widely linear array
receivers for the demodulation of BPSK, MSK, and GMSK signals corrupted by
noncircular interferences - Application to SAIC, IEEE Transactions on Signal Processing
54(3): 870–883
Corduneanu, C (1968) Almost Periodic Functions, Interscience Publishers.
Franks, L E (1994) Polyperiodic linear filtering, in W A Gardner (ed.), Cyclostationarity in
Communications and Signal Processing, IEEE Press.
Gameiro, A (2000) Capacity enhancement of DS-CDMA synchronous channels by
frequency-shift filtering, Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications.
Gardner, W A (1978) Stationarizable random processes, IEEE Transactions on Information
Theory 24(1): 8–22.
Gardner, W A (1986) Introduction to Random Processes with Applications to Signals and Systems,
Macmillan Publishing Company
Gardner, W A (1987) Spectral correlation of modulated signals: Part I – analog modulation,
IEEE Transactions on Communications 35(6): 584–594.
Gardner, W A (1991) Exploitation of spectral redundancy in cyclostationary signals, IEEE
Signal Processing Magazine 8(2): 14–36.
Gardner, W A (1993) Cyclic Wiener filtering: Theory and method, IEEE Transactions on
Communications 41(1): 151–163.
Gardner, W A (1994) Cyclostationarity in Communications and Signal Processing, IEEE Press.
Gardner, W A., Brown, W A & Chen, C K (1987) Spectral correlation of modulated signals:
Part II – digital modulation, IEEE Transactions on Communications 35(6): 595–601.
Gardner, W A & Franks, L E (1975) Characterization of cyclostationary random signal
processes, IEEE Transactions on Information Theory 21(1): 4–14.
Gelli, G & Verde, F (2000) Blind LPTV joint equalization and interference suppression,
Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '00.
Giannakis, G B (1998) Cyclostationary signal analysis, in V K Madisetti & D Williams (eds),
The Digital Signal Processing Handbook, CRC Press.
Gonçalves, L & Gameiro, A (2002) Frequency shift based multiple access interference
canceller for multirate UMTS-TDD systems, The 13th IEEE International Symposium
on Personal, Indoor and Mobile Radio Communications.
Haykin, S (2001) Adaptive Filter Theory, Prentice Hall.
Hu, Y., Xia, W & Shen, L (2007) Study of anti-interference mechanism of multiple WPANs
accessing into a HAN, International Symposium on Intelligent Signal Processing and Communication Systems.
Jianhui, P., Zhongfu, Y & Xu, X (2006) A novel robust cyclostationary beamformer based
on conjugate gradient algorithm, 2006 International Conference on Communications, Circuits and Systems Proceedings, Vol 2, pp 777–780.
Lee, J H & Lee, Y T (1999) Robust adaptive array beamforming for cyclostationary
signals under cycle frequency error, IEEE Transactions on Antennas and Propagation
47(2): 233–241
Lee, J H., Lee, Y T & Shih, W H (2000) Efficient robust adaptive beamforming for
cyclostationary signals, IEEE Transactions on Signal Processing 48(7): 1893–1901.
Li, X & Ouyang, S (2009) One reduced-rank blind fresh filter for spectral overlapping
interference signal extraction and DSP implementation, International Workshop on Intelligent Systems and Applications, ISA, pp 1–4.
Loeffler, C M & Burrus, C S (1978) Optimal design of periodically time-varying and
multirate digital filters, IEEE Transactions on Acoustics, Speech and Signal Processing
66(1): 51–83
López Risueño, G., Grajal, J & Yeste-Ojeda, O A (2003) Atomic decomposition-based
radar complex signal interception, IEE Proceedings on Radar, Sonar and Navigation
150: 323–331
Martin, V., Chabert, M & Lacaze, B (2007) Digital watermarking of natural images based
on LPTV filters, 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2007.
Mirbagheri, A., Plataniotis, K & Pasupathy, S (2006) An enhanced widely linear
CDMA receiver with OQPSK modulation, IEEE Transactions on Communications
54(2): 261–272
Napolitano, A & Spooner, C M (2001) Cyclic spectral analysis of continuous-phase
modulated signals, IEEE Transactions on Signal Processing 49(1): 30–44.
Ngan, L Y., Ouyang, S & Ching, P C (2004) Reduced-rank blind adaptive frequency-shift
filtering for signal extraction, IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, Vol 2.
Petrus, P & Reed, J H (1995) Time dependent adaptive arrays, IEEE Signal Processing Letters
2(12): 219–222
Picinbono, B & Chevalier, P (1995) Widely linear estimation with complex data, IEEE
Transactions on Signal Processing 43(8): 2030–2033.
Reed, J H & Hsia, T C (1990) The performance of time-dependent adaptive filters for
interference rejection, IEEE Transactions on Acoustics, Speech and Signal Processing
38(8): 1373–1385
Schell, S V & Gardner, W A (1990) Progress on signal-selective direction finding, Fifth ASSP
Workshop on Spectrum Estimation and Modeling, pp 144–148.
Schmidt, R O (1986) Multiple emitter location and signal parameter estimation, IEEE
Transactions on Antennas and Propagation 34(3): 276–280.
Van Trees, H L (1968) Detection, Estimation, and Modulation Theory, Vol 1, John Wiley and
Sons
Whitehead, J & Takawira, F (2004) Low complexity constant modulus based cyclic blind
adaptive multiuser detection, AFRICON, 2004 7th AFRICON Conference in Africa,
Vol 1, pp 115–120
Whitehead, J & Takawira, F (2005) Blind adaptive multiuser detection for periodically time
varying interference suppression [DS-CDMA system applications], IEEE Wireless Communications and Networking Conference, 2005, Vol 1, pp 273–279.
Widrow, B., Glover, Jr., J R., McCool, J M., Kaunitz, J., Williams, C S., Hearn, R H., Zeidler,
J R., Dong, Jr., E & Goodlin, R C (1975) Adaptive noise cancelling: Principles and
applications, Proceedings of the IEEE 63(12): 1692–1717.
Widrow, B., McCool, J M., Larimore, M G & Johnson, Jr., C R (1976) Stationary and
nonstationary learning characteristics of the LMS adaptive filter, Proceedings of the IEEE 64(8): 1151–1162.
Wong, H E & Chambers, J A (1996) Two-stage interference immune blind equaliser which
exploits cyclostationary statistics, Electronics Letters 32(19): 1763–1764.
Yeste-Ojeda, O A & Grajal, J (2008) Cyclostationarity-based signal separation in interceptors
based on a single sensor, IEEE Radar Conference 2008, pp 1–6.
Yeste-Ojeda, O A & Grajal, J (2010) Adaptive-FRESH filters for compensation of
cycle-frequency errors, IEEE Transactions on Signal Processing 58(1): 1–10.
Zadeh, L A (1950) Frequency analysis of variable networks, Proceedings of the I.R.E.
pp 291–299
Zhang, H., Abdi, A & Haimovich, A (2006) Reduced-rank multi-antenna cyclic Wiener
filtering for interference cancellation, Military Communications Conference, 2006 MILCOM 2006.
Zhang, J., Liao, G & Wang, J (2004) A novel robust adaptive beamforming for cyclostationary
signals, The 7th International Conference on Signal Processing, ICSP, Vol 1, pp 339–342.
Zhang, J., Wong, K M., Luo, Z Q & Ching, P C (1999) Blind adaptive FRESH filtering for
signal extraction, IEEE Transactions on Signal Processing 47(5): 1397–1402.
Transient Analysis of a Combination
of Two Adaptive Filters
Tõnu Trump
Department of Radio and Telecommunication Engineering,
Tallinn University of Technology
Estonia
1 Introduction
The Least Mean Square (LMS) algorithm is probably the most popular adaptive algorithm. Since its introduction in Widrow & Hoff (1960), the algorithm has been widely used in many applications like system identification, communication channel equalization, signal prediction, sensor array processing, medical applications, echo and noise cancellation, etc. The popularity of the algorithm is due to its low complexity but also due to its good properties like e.g. robustness Haykin (2002); Sayed (2008). Let us explain the algorithm with the example of Figure 1.
Fig. 1. Usage of an adaptive filter for system identification.
As seen from Figure 1, the input signal to the unknown plant is x(n) and the output signal from the plant is d(n). We call the signal d(n) the desired signal. The signal d(n) also contains noise and possible nonlinear effects of the plant. We would like to estimate the impulse response of the plant by observing its input and output signals. To do so we connect an adaptive filter in parallel with the plant. The adaptive filter is a linear filter with output signal y(n). We then compare the output signals of the plant and the adaptive filter and form the error signal e(n). Obviously one would like the error signal to be as small as possible in some sense. The LMS algorithm achieves this by minimizing the mean squared error, but doing so in an instantaneous fashion. If we collect the impulse response coefficients of our adaptive filter computed at iteration n into a vector w(n) and the input signal samples into a vector x(n), the LMS algorithm updates the weight vector estimate at each iteration as
w(n) = w(n − 1) + μ e*(n) x(n),   (1)

where μ is the step size of the algorithm. One can see that the weight update is in fact a low-pass filter with transfer function

extent of averaging is small and we get a less reliable estimate, but we get it relatively fast. When designing an adaptive algorithm, one thus faces a trade-off between the initial convergence speed and the mean-square error in steady state. In the case of algorithms belonging to the Least Mean Square family this trade-off is controlled by the step-size parameter. A large step size leads to fast initial convergence, but the algorithm also exhibits a large mean-square error in the steady state; on the contrary, a small step size slows down the convergence but results in a small steady-state error.
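This trade-off is easy to reproduce numerically. The following sketch runs the complex LMS update of equation (1) twice on the same system identification problem with a large and a small step size; the plant, the noise level and the step-size values are made-up illustrative choices, not taken from this chapter.

import numpy as np

def lms(x, d, n_taps, mu):
    # Complex LMS: w(n) = w(n-1) + mu * conj(e(n)) * x(n), with e(n) = d(n) - w^H(n-1) x(n)
    w = np.zeros(n_taps, dtype=complex)
    u = np.zeros(n_taps, dtype=complex)
    e = np.zeros(len(x), dtype=complex)
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]
        e[n] = d[n] - np.vdot(w, u)
        w += mu * np.conj(e[n]) * u
    return w, e

rng = np.random.default_rng(0)
N, taps = 20000, 8
w_o = rng.standard_normal(taps) + 1j * rng.standard_normal(taps)   # unknown plant (assumed FIR)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.03 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
d = np.convolve(x, w_o)[:N] + noise

_, e_fast = lms(x, d, taps, mu=0.02)    # fast initial convergence, larger steady-state error
_, e_slow = lms(x, d, taps, mu=0.002)   # slow initial convergence, smaller steady-state error
print(np.mean(np.abs(e_fast[-2000:]) ** 2), np.mean(np.abs(e_slow[-2000:]) ** 2))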
Variable step size adaptive schemes offer a possible solution, allowing to achieve both fast initial convergence and low steady-state misadjustment Arenas-Garcia et al. (1997); Harris et al. (1986); Kwong & Johnston (1992); Matthews & Xie (1993); Shin & Sayed (2004). How successful these schemes are depends on how well the algorithm is able to estimate the distance of the adaptive filter weights from the optimal solution. The variable step size algorithms use different criteria for calculating the proper step size at any given time instance. For example, squared instantaneous errors have been used in Kwong & Johnston (1992), and the squared autocorrelation of errors at adjacent time instances has been used in Arenas-Garcia et al. (1997). The reference Matthews & Xie (1993) investigates an algorithm that changes the time-varying convergence parameters in such a way that the change is proportional to the negative of the gradient of the squared estimation error with respect to the convergence parameter. In reference Shin & Sayed (2004) the norm of the projected weight error vector is used as a criterion to determine how close the adaptive filter is to its optimum performance.
Recently there has been an interest in a combination scheme that is able to optimize the trade-off between convergence speed and steady-state error Martinez-Ramon et al. (2002). The scheme consists of two adaptive filters that are simultaneously applied to the same input signals, as depicted in Figure 2. One of the filters has a large step size allowing fast convergence and the other one has a small step size for a small steady-state error. The outputs of the filters are combined through a mixing parameter λ. The performance of this scheme has been studied for some parameter update schemes Arenas-Garcia et al. (2006); Bershad et al. (2008);
Candido et al. (2010); Silva et al. (2010). The reference Arenas-Garcia et al. (2006) uses a convex combination, i.e. λ is constrained to lie between 0 and 1. The references Silva et al. (2010) and Candido et al. (2010) present transient analyses of slightly modified versions of this scheme. The parameter λ is in those papers found using an LMS-type adaptive scheme and possibly computing the sigmoidal function of the result. The reference Bershad et al. (2008) takes another approach, computing the mixing parameter using an affine combination. This paper uses the ratio of time averages of the instantaneous errors of the filters. The error function of the ratio is then computed to obtain λ.
In Mandic et al. (2007) a convex combination of two adaptive filters with different adaptation schemes has been investigated with the aim to improve the steady-state characteristics. One of the adaptive filters in that paper uses the LMS algorithm and the other one the Generalized Normalized Gradient Descent algorithm. The combination parameter λ is computed using stochastic gradient adaptation. In Zhang & Chambers (2006) the convex combination of two adaptive filters is applied in a variable filter length scheme to gain improvements in low SNR conditions. In Kim et al. (2008) the combination has been used to join two affine projection filters with different regularization parameters. The work Fathiyan & Eshghi (2009) uses the combination on parallel binary structured LMS algorithms. These three works use the LMS-like scheme of Azpicueta-Ruiz et al. (2008b) to compute λ.
It should be noted that schemes involving two filters have been proposed earlier Armbruster (1992); Ochiai (1977). However, in those early schemes only one of the filters has been adaptive while the other one has used fixed filter weights. Updating of the fixed filter has been accomplished by copying all the coefficients from the adaptive filter whenever the adaptive filter has been performing better than the fixed one.
In this chapter we compute the mixing parameter λ from the output signals of the individual filters. The scheme was independently proposed in Trump (2009a) and Azpicueta-Ruiz et al. (2008a); its steady-state performance was investigated in Trump (2009b) and its tracking performance in Trump (2009c). The way of calculating the mixing parameter is optimal in the sense that it results from minimization of the mean-squared error of the combined filter. In the main body of this chapter we present a transient analysis of the algorithm. We will assume throughout the chapter that the signals are complex-valued and that the combination scheme uses two LMS adaptive filters. Italic, bold face lower case and bold face upper case letters will be used for scalars, column vectors and matrices, respectively. The superscript * denotes complex conjugation and the superscript H Hermitian transposition of a matrix. The operator E[·] denotes mathematical expectation, tr[·] stands for the trace of a matrix and Re{·} denotes the real part of a complex number.
Fig. 2. The combined adaptive filter.
The input process is assumed to be a zero mean wide sense stationary Gaussian process. The desired signal d(n) is a sum of the output of the filter to be identified and the Gaussian, zero mean i.i.d. measurement noise v(n). We assume that the measurement noise is statistically independent of all the other signals. μ_i is the step size of the i-th adaptive filter. We assume without loss of generality that μ_1 > μ_2. The case μ_1 = μ_2 is not interesting, as in this case the two filters remain equal and the combination reduces to a single filter.
The outputs of the two adaptive filters are combined according to

y(n) = λ(n) y_1(n) + (1 − λ(n)) y_2(n),

where y_i(n) = w_i^H(n − 1) x(n) and the mixing parameter λ can be any real number.
We define the a priori system error signal as the difference between the output signal of the true system at time n, given by y_o(n) = w_o^H x(n) = d(n) − v(n), and the output signal of our
Setting the derivative to zero results in
λ(n) = E[Re{(d(n) − y_2(n)) (y_1(n) − y_2(n))*}] / E[|y_1(n) − y_2(n)|²],   (8)
where we have replaced the true system output signal y_o(n) by its observable noisy version d(n). Note, however, that because we have made the standard assumption that the input signal x(n) and the measurement noise v(n) are independent random processes, this can be done without introducing any error into our calculations.

The denominator of equation (8) comprises the expectation of the squared difference of the two filter output signals. This quantity can be very small or even zero, particularly in the beginning of adaptation if the two step sizes are close to each other. Correspondingly, λ computed directly from (8) may be large. To avoid this from happening, we add a small regularization constant to the denominator of (8).
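In a practical implementation the expectations in (8) have to be replaced by estimates. The sketch below uses exponentially weighted time averages and a small regularization constant in the denominator; the smoothing factor, the regularization value and the step sizes are illustrative assumptions rather than the settings analysed in this chapter.

import numpy as np

def combine_two_lms(x, d, n_taps, mu1, mu2, beta=0.99, eps=1e-6):
    # Combination of a fast (mu1) and a slow (mu2) LMS filter; mu1 > mu2 is assumed
    w1 = np.zeros(n_taps, dtype=complex)
    w2 = np.zeros(n_taps, dtype=complex)
    u = np.zeros(n_taps, dtype=complex)
    num = 0.0                      # running estimate of E[Re{(d - y2)(y1 - y2)*}]
    den = 0.0                      # running estimate of E[|y1 - y2|^2]
    y = np.zeros(len(x), dtype=complex)
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]
        y1 = np.vdot(w1, u)                       # w1^H(n-1) x(n)
        y2 = np.vdot(w2, u)
        num = beta * num + (1 - beta) * np.real((d[n] - y2) * np.conj(y1 - y2))
        den = beta * den + (1 - beta) * np.abs(y1 - y2) ** 2
        lam = num / (den + eps)                   # regularized version of equation (8)
        y[n] = lam * y1 + (1 - lam) * y2          # combined output
        w1 += mu1 * np.conj(d[n] - y1) * u        # each component filter adapts on its own error
        w2 += mu2 * np.conj(d[n] - y2) * u
    return y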
3 Transient analysis
In this section we are interested in finding expressions that characterize the transient performance of the combined algorithm, i.e. we intend to derive formulae that characterize the entire course of adaptation of the algorithm. Before we can proceed we need, however, to introduce some notation. First, let us denote the weight error vector of the i-th filter as

w̃_i(n) = w_o − w_i(n).
In what follows we often drop the explicit time index n, as we have done in (15), if it is not necessary to avoid confusion.

E[w̃_k^H(n − 1) x(n) x^H(n) w̃_l(n − 1)] in order to reveal the time evolution of EMSE(n) and λ(n). To do so we, however, concentrate first on the mean square deviation defined in (11).
For a single LMS filter we have, after subtraction of (3) from w_o and expressing e_i(n) through the error of the corresponding Wiener filter e_o(n),

w̃_i(n) = (I − μ_i x x^H) w̃_i(n − 1) − μ_i x e_o*(n).   (17)
We next approximate the outer product of input signal vectors by its correlation matrix, x x^H ≈ R_x. The approximation is justified by the fact that with a small step size the weight error update of the LMS algorithm (17) behaves like a low-pass filter with a low cutoff frequency. With this approximation we have

w̃_i(n) = (I − μ_i R_x) w̃_i(n − 1) − μ_i x e_o*(n).   (18)

This means in fact that we apply the small step size theory Haykin (2002) even if the assumption of a small step size is not really true for the fast adapting filter. In our simulation study we will see, however, that the assumption works rather well in practice.
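A quick numerical check of this approximation might look as follows. The sketch averages the mean square deviation of the exact weight error recursion (17) over independent runs and compares its decay with the squared norm of the mean weight error predicted by the deterministic part of (18); the white input, plant and noise level are arbitrary illustrative assumptions, and the simulated curve settles at a small noise floor that the noise-free mean recursion does not model.

import numpy as np

rng = np.random.default_rng(1)
N, taps, mu, runs = 3000, 4, 0.02, 200
w_o = np.array([1.0, 0.5, -0.3, 0.1], dtype=complex)    # assumed plant; adaptive filter starts from zero
Rx = np.eye(taps)                                       # white unit-power input: R_x = I

msd_sim = np.zeros(N)                                   # E ||w~(n)||^2 estimated over runs
for r in range(runs):
    x = rng.standard_normal(N + taps)
    wt = w_o.copy()                                     # w~(0) = w_o - w(0) = w_o
    for n in range(N):
        u = x[n:n + taps][::-1]
        e_o = 0.05 * rng.standard_normal()              # Wiener-filter error (measurement noise)
        wt = wt - mu * np.outer(u, u) @ wt - mu * u * np.conj(e_o)   # exact recursion (17)
        msd_sim[n] += np.linalg.norm(wt) ** 2 / runs

wt = w_o.copy()                                         # squared norm of the mean, from (18)
mean_norm_sq = np.zeros(N)
for n in range(N):
    wt = (np.eye(taps) - mu * Rx) @ wt                  # the noise term has zero mean and drops out
    mean_norm_sq[n] = np.linalg.norm(wt) ** 2

print(msd_sim[::500])
print(mean_norm_sq[::500])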
Let us now define the eigendecomposition of the correlation matrix as

R_x = Q Ω Q^H,   (19)

where Q is a unitary matrix whose columns are the orthogonal eigenvectors of R_x and Ω is a diagonal matrix having the eigenvalues associated with the corresponding eigenvectors on its main diagonal. We also define the transformed weight error vector as

v_i(n) = Q^H w̃_i(n),   (20)

and the transformed last term of equation (18) as

p_i(n) = μ_i Q^H x e_o*(n).   (21)

Then we can rewrite equation (18), after multiplying both sides by Q^H from the left, as
The first term in the above is zero due to the principle of orthogonality and the second term equals R J_min. Hence we are left with
where ω_m is the m-th eigenvalue of R_x, and v_{i,m} and p_{i,m} are the m-th components of the vectors v_i and p_i, respectively.
We immediately see that the mean value of v_{i,m}(n) equals

E[v_{i,m}(n)] = (1 − μ_i ω_m)^n E[v_{i,m}(0)],

as the vector p_i has zero mean.
The expected values of v_{i,m}(n) exhibit no oscillations, as the correlation matrix R_x is positive semidefinite with all its eigenvalues being nonnegative real numbers. In order for the LMS algorithm to converge in the mean, the weight errors need to decrease with time. This will happen if
|1 − μ_i ω_m| < 1   (28)
for all m. In that case all the natural modes of the algorithm die out as the number of iterations n approaches infinity. The condition needs to be fulfilled for all the eigenvalues ω_m and is obviously satisfied if the condition is met for the maximum eigenvalue. It follows that the individual step size μ_i needs to be selected such that the double inequality

0 < μ_i < 2/λ_max   (29)

is satisfied.
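For a known (or estimated) correlation matrix this bound is straightforward to evaluate; the following small example uses an arbitrary exponentially correlated input as an illustration:

import numpy as np

# Example correlation matrix of an exponentially correlated input (correlation coefficient 0.8)
N = 8
Rx = 0.8 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
lam_max = np.max(np.linalg.eigvalsh(Rx))
print("step size must satisfy 0 < mu <", 2.0 / lam_max)
# Since lam_max <= tr(Rx), the trace gives a more conservative but easily estimated bound
print("conservative bound    0 < mu <", 2.0 / np.trace(Rx))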
In some applications the input signal correlation matrix and its eigenvalues are not known a priori. In this case it may be convenient to use the fact that

tr{R_x} = Σ_{i=0}^{N−1} r_x(i, i),

where r_x(i, i) is the i-th diagonal element of the matrix R_x. Then we can normalize the step size with the instantaneous estimate of the trace of the correlation matrix, x^H(n)x(n), to get the so-called normalized LMS algorithm. The normalized LMS algorithm uses the normalized step size