• Adaptive structures
• The least mean square (LMS) algorithm
• Programming examples using C and TMS320C3x code
Adaptive filters are best used in cases where signal conditions or system parameters are slowly changing and the filter is to be adjusted to compensate for this change. The least mean square (LMS) criterion is a search algorithm that can be used to provide the strategy for adjusting the filter coefficients. Programming examples are included to give a basic intuitive understanding of adaptive filters.

7.1 INTRODUCTION
In conventional FIR and IIR digital filters, it is assumed that the process parameters that determine the filter characteristics are known. They may vary with time, but the nature of the variation is assumed to be known. In many practical problems, there may be a large uncertainty in some parameters because of inadequate prior test data about the process. Some parameters might be expected to change with time, but the exact nature of the change is not predictable. In such cases, it is highly desirable to design the filter to be self-learning, so that it can adapt itself to the situation at hand.

The coefficients of an adaptive filter are adjusted to compensate for changes in the input signal, the output signal, or the system parameters. Instead of being rigid, an adaptive system can learn the signal characteristics and track slow changes. An adaptive filter can be very useful when there is uncertainty about the characteristics of a signal or when these characteristics change.
Figure 7.1 shows a basic adaptive filter structure in which the adaptive filter's output y is compared with a desired signal d to yield an error signal e, which is fed back to the adaptive filter. The coefficients of the adaptive filter are adjusted, or optimized, using a least mean square (LMS) algorithm based on the error signal.
We will discuss here only the LMS searching algorithm with a linear combiner (FIR filter), although there are several strategies for performing adaptive filtering.

The output of the adaptive filter in Figure 7.1 is
           N–1
    y(n) =  Σ  w_k(n) x(n – k)                                    (7.1)
           k=0
where w_k(n) represent the N weights or coefficients for a specific time n. The convolution equation (7.1) was implemented in Chapter 4 in conjunction with FIR filtering. It is common practice to use the terminology of weights w for the coefficients associated with topics in adaptive filtering and neural networks.
A performance measure is needed to determine how good the filter is. This measure is based on the error signal

    e(n) = d(n) – y(n)                                            (7.2)

which is the difference between the desired signal d(n) and the adaptive filter's output y(n). The weights or coefficients w_k(n) are adjusted such that a mean squared error function is minimized. This mean squared error function is E[e²(n)], where E represents the expected value. Since there are N weights or coefficients, a gradient of the mean squared error function is required. An estimate can be found instead using the gradient of e²(n), yielding
    w_k(n + 1) = w_k(n) + 2β e(n) x(n – k)    k = 0, 1, . . . , N – 1    (7.3)

which represents the LMS algorithm [1–3]. Equation (7.3) provides a simple but powerful and efficient means of updating the weights, or coefficients, without the need for averaging or differentiating, and will be used for implementing adaptive filters.
196 Adaptive Filters
FIGURE 7.1 Basic adaptive filter structure.
The input to the adaptive filter is x(n), and the rate of convergence and accuracy of the adaptation process (the adaptive step size) is β.

For each specific time n, each coefficient, or weight, w_k(n) is updated or replaced by a new coefficient, based on (7.3), unless the error signal e(n) is zero. After the filter's output y(n), the error signal e(n), and each of the coefficients w_k(n) are updated for a specific time n, a new sample is acquired (from an ADC) and the adaptation process is repeated for a different time. Note that from (7.3), the weights are not updated when e(n) becomes zero.
The linear adaptive combiner is one of the most useful adaptive filter structures and is an adjustable FIR filter. Whereas the coefficients of the frequency-selective FIR filter discussed in Chapter 4 are fixed, the coefficients, or weights, of the adaptive FIR filter can be adjusted based on a changing environment such as an input signal. Adaptive IIR filters (not discussed here) can also be used. A major problem with an adaptive IIR filter is that its poles may be updated during the adaptation process to values outside the unit circle, making the filter unstable.

The programming examples developed later will make use of equations (7.1)–(7.3). In (7.3), we will simply use the variable β in lieu of 2β.
7.2 ADAPTIVE STRUCTURES
A number of adaptive structures have been used for different applications in adaptive filtering.
1. For noise cancellation. Figure 7.2 shows the adaptive structure in Figure 7.1 modified for a noise cancellation application. The desired signal d is corrupted by uncorrelated additive noise n. The input to the adaptive filter is a noise n′ that is correlated with the noise n. The noise n′ could come from the same source as n but modified by the environment. The adaptive filter's output y is adapted to the noise n. When this happens, the error signal approaches the desired signal d. The overall output is this error signal and not the adaptive filter's output y. This structure will be further illustrated with programming examples using both C and TMS320C3x code.
7.2 Adaptive Structures 197
FIGURE 7.2 Adaptive filter structure for noise cancellation.
2. For system identification. Figure 7.3 shows an adaptive filter structure that can be used for system identification or modeling. The same input is applied to an unknown system in parallel with an adaptive filter. The error signal e is the difference between the response of the unknown system d and the response of the adaptive filter y. This error signal is fed back to the adaptive filter and is used to update the adaptive filter's coefficients, until the overall output y = d. When this happens, the adaptation process is finished, and e approaches zero. In this scheme, the adaptive filter models the unknown system.
3. Additional structures have been implemented, such as:

a) Notch with two weights, which can be used to notch or cancel/reduce a sinusoidal noise signal. This structure has only two weights, or coefficients, and is illustrated later with a programming example.

b) Adaptive predictor, which can provide an estimate of an input. This structure is illustrated later with three programming examples.
c) Adaptive channel equalization, used in a modem to reduce channel distortion resulting from the high speed of data transmission over telephone channels.

The LMS is well suited for a number of applications, including adaptive echo and noise cancellation, equalization, and prediction.
Other variants of the LMS algorithm have been employed, such as the sign-error LMS, the sign-data LMS, and the sign-sign LMS.

1. For the sign-error LMS algorithm, (7.3) becomes

    w_k(n + 1) = w_k(n) + β sgn[e(n)] x(n – k)                    (7.5)
2. For the sign-data LMS algorithm, (7.3) becomes
    w_k(n + 1) = w_k(n) + β e(n) sgn[x(n – k)]                    (7.6)
3. For the sign-sign LMS algorithm, (7.3) becomes

    w_k(n + 1) = w_k(n) + β sgn[e(n)] sgn[x(n – k)]               (7.7)

which reduces to

    w_k(n + 1) = w_k(n) + β    if sgn[e(n)] = sgn[x(n – k)]
    w_k(n + 1) = w_k(n) – β    otherwise

which is more concise from a mathematical viewpoint, because no multiplication operation is required for this algorithm.
The implementation of these variants does not exploit the pipeline features of the TMS320C3x processor. The execution speed on the TMS320C3x for these variants can be expected to be slower than for the basic LMS algorithm, due to the additional decision-type instructions required for testing conditions involving the sign of the error signal or the data sample.
The LMS algorithm has been quite useful in adaptive equalizers, telephone cancellers, and so forth. Other methods, such as the recursive least squares (RLS) algorithm [4], can offer faster convergence than the basic LMS but at the expense of more computations. The RLS is based on starting with the optimal solution and then using each input sample to update the impulse response in order to maintain that optimality. The right step size and direction are defined over each time sample.
Adaptive algorithms for restoring signal properties can also be found in [4]. Such algorithms become useful when an appropriate reference signal is not available. The filter is adapted in such a way as to restore some property of the signal lost before reaching the adaptive filter. Instead of the desired waveform as a template, as in the LMS or RLS algorithms, this property is used for the adaptation of the filter. When the desired signal is available, the conventional approach such as the LMS can be used; otherwise, a priori knowledge about the signal is used for the adaptation.

7.3 PROGRAMMING EXAMPLES USING C AND TMS320C3x CODE

The first example should be studied even if you have only a limited knowledge of C, since it illustrates the steps in the adaptive process.
Example 7.1 Adaptive Filter Using C Code Compiled With
Borland C/C++
This example applies the LMS algorithm using a C-coded program compiled with Borland C/C++. It illustrates the following steps for the adaptation process using the adaptive structure in Figure 7.1:
1. Obtain a new sample for each of the desired signal d and the reference input to the adaptive filter x, which represents a noise signal.

2. Calculate the adaptive FIR filter's output y, applying (7.1) as in Chapter 4 with an FIR filter. In the structure of Figure 7.1, the overall output is the same as the adaptive filter's output y.

3. Calculate the error signal, applying (7.2).

4. Update/replace each coefficient, or weight, applying (7.3).

5. Update the input data samples for the next time n, with the data move scheme used in Chapter 4 with the program FIRDMOVE.C. Such a scheme moves the data instead of a pointer.

6. Repeat the entire adaptive process for the next output sample point.

Figure 7.4 shows a listing of the program ADAPTC.C, which implements the LMS algorithm for the adaptive filter structure in Figure 7.1. A desired signal is chosen as 2 cos(2πnf/Fs), and a reference noise input to the adaptive filter is chosen as sin(2πnf/Fs), where f is 1 kHz and Fs = 8 kHz. The adaptation rate β, filter order, and number of samples are 0.01, 22, and 40, respectively.
The overall output is the adaptive filter's output y, which adapts or converges to the desired cosine signal d.

The source file was compiled with Borland's C/C++ compiler. Execute this program. Figure 7.5 shows a plot of the adaptive filter's output (y_out) converging to the desired cosine signal. Change the adaptation or convergence rate β to 0.02 and verify a faster rate of adaptation.
Interactive Adaptation
A version of the program ADAPTC.C in Figure 7.4, with graphics and interactive capabilities to plot the adaptation process for different values of β, is on the accompanying disk as ADAPTIVE.C, to be compiled with Turbo or Borland C/C++. It uses a desired cosine signal with an amplitude of 1 and a filter order of 31. Execute this program, enter a β value of 0.01, and verify the results in Figure 7.6. Note that the output converges to the desired cosine signal. Press F2 to execute this program again with a different β value.
//ADAPTC.C - ADAPTATION USING LMS WITHOUT THE TI COMPILER
#include <stdio.h>
#include <math.h>
#define beta 0.01                        //convergence rate
#define N 21                             //order of filter
#define NS 40                            //number of samples
#define Fs 8000                          //sampling frequency
#define pi 3.1415926
#define DESIRED 2*cos(2*pi*T*1000/Fs)    //desired signal
#define NOISE sin(2*pi*T*1000/Fs)        //noise signal
main()
{
  long I, T;
  double D, Y, E;
  double W[N+1] = {0.0};                 //weights, initially zero
  double X[N+1] = {0.0};                 //input samples, initially zero
  FILE *desired, *Y_out, *error;
  desired = fopen("DESIRED", "w+");      //file for desired samples
  Y_out = fopen("Y_OUT", "w+");          //file for output samples
  error = fopen("ERROR", "w+");          //file for error samples
  for (T = 0; T < NS; T++)               //start adaptive algorithm
  {
    X[0] = NOISE;                        //new noise sample
    D = DESIRED;                         //desired signal
    Y = 0;                               //filter's output set to zero
    for (I = 0; I <= N; I++)
      Y += (W[I] * X[I]);                //calculate filter output
    E = D - Y;                           //calculate error signal
    for (I = N; I >= 0; I--)
    {
      W[I] = W[I] + (beta*E*X[I]);       //update weights
      if (I != 0)
        X[I] = X[I-1];                   //move input data samples
    }
    fprintf (desired, "\n%10g %10f", (float) T/Fs, D);
    fprintf (Y_out, "\n%10g %10f", (float) T/Fs, Y);
    fprintf (error, "\n%10g %10f", (float) T/Fs, E);
  }
}

FIGURE 7.4 Adaptive filter program compiled with Borland C/C++ (ADAPTC.C).
FIGURE 7.5 Plot of adaptive filter's output converging to desired cosine signal.
FIGURE 7.6 Plot of adaptive filter's output converging to desired cosine signal using interactive capability with program ADAPTIVE.C.
Example 7.2 Adaptive Filter for Noise Cancellation Using C Code

This example illustrates the adaptive filter structure shown in Figure 7.2 for the cancellation of an additive noise. Figure 7.7 shows a listing of the program ADAPTDMV.C, based on the previous program in Example 7.1. Consider the following from the program:
1. The desired signal specified by DESIRED is a sine function with a frequency of 1 kHz. The desired signal is corrupted/added with a noise signal specified by ADDNOISE. This additive noise is a sine with a frequency of 312 Hz. The addition of these two signals is achieved in the program with DPLUSN for each sample period.
2. The reference input to the adaptive FIR filter is a cosine function with a frequency of 312 Hz, specified by REFNOISE. The adaptation step or rate of convergence β is set to 1.5 × 10–8, the number of coefficients to 30, and the number of output samples to 128.
3. The output of the adaptive FIR filter y is calculated using the convolution equation (7.1), and converges to the additive noise signal with a frequency of 312 Hz. When this happens, the "error" signal e, calculated from (7.2), approaches the desired signal d with a frequency of 1 kHz. This error signal is the overall output of the adaptive filter structure, and is the difference between the adaptive filter's output y and the primary input consisting of the desired signal with additive noise.
In the previous example, the overall output was the adaptive filter's output. In that case, the filter's output converged to the desired signal. For the structure in this example, the overall output is the error signal and not the adaptive filter's output.
This program was compiled with the TMS320 floating-point assembly language tools, and the executable COFF file is on the accompanying disk. Download and run it on the DSK.

The output can be saved into the file fname with the debugger command

    save fname,0x809d00,128,L

which saves the 128 output samples stored in memory starting at the address 809d00 into the file fname, in ASCII Long format. Note that the desired signal with additive noise samples in DPLUSN are stored in memory starting at the address 809d80, and can also be saved into a different file with the debugger save command.
Figure 7.8 shows a plot of the output converging to the 1-kHz desired sine signal, with a convergence rate of β = 1.5 × 10–8. The upper plot in Figure 7.9 shows the FFT of the 1-kHz desired sine signal and the 312-Hz additive noise signal. The lower plot in Figure 7.9 shows the overall output, which illustrates the reduction of the 312-Hz noise signal.
7.3 Programming Examples Using C and TMS320C3x Code 203
/*ADAPTDMV.C - ADAPTIVE FILTER FOR SINUSOIDAL NOISE CANCELLATION*/
#include <math.h>
#define beta 1.5E-8                      /*rate of adaptation */
#define N 30                             /*# of coefficients */
#define NS 128                           /*# of output sample points*/
#define Fs 8000                          /*sampling frequency */
#define pi 3.1415926
#define DESIRED 1000*sin(2*pi*T*1000/Fs) /*desired signal */
#define ADDNOISE 1000*sin(2*pi*T*312/Fs) /*additive noise */
#define REFNOISE 1000*cos(2*pi*T*312/Fs) /*reference noise*/
main()
{
  int I, T;
  double Y, E, DPLUSN;
  double W[N+1] = {0.0};                 /*weights, initially zero */
  double Delay[N+1] = {0.0};             /*delay samples, initially zero*/
  volatile int *IO_OUTPUT= (volatile int*) 0x809d00;
  volatile int *IO_INPUT = (volatile int*) 0x809d80;
  for (T = 0; T < NS; T++)               /*start adaptive algorithm */
  {
    Delay[0] = REFNOISE;                 /*adaptive filter's input*/
    DPLUSN = DESIRED + ADDNOISE;         /*desired + noise, d+n */
    Y = 0;
    for (I = 0; I < N; I++)
      Y += (W[I] * Delay[I]);            /*adaptive filter output */
    E = DPLUSN - Y;                      /*error signal */
    for (I = N; I > 0; I--)
    {
      W[I] = W[I] + (beta*E*Delay[I]);   /*update weights */
      Delay[I] = Delay[I-1];             /*move data samples */
    }
    *IO_OUTPUT++ = E;                    /*overall output E */
    *IO_INPUT++ = DPLUSN;                /* store d + n */
  }
}
FIGURE 7.7 Adaptive filter program for sinusoidal noise cancellation using data move
(ADAPTDMV.C).
FIGURE 7.8 Plot of overall output of adaptive filter structure converging to 1-kHz desired
signal.
FIGURE 7.9 Output frequency response of adaptive filter structure showing reduction of
312-Hz additive sinusoidal noise.
Examine the effects of different values for the adaptation rate β and for the number of weights or coefficients.
Example 7.3 Adaptive Predictor Using C Code
This example implements the adaptive predictor structure shown in Figure 7.10, with the program ADAPTSH.C shown in Figure 7.11. The input to the adaptive structure is a 1-kHz sine defined in the program. The input to the adaptive filter with 30 coefficients is the delayed input, and the adaptive filter's output is the overall output of the predictor structure.
FIGURE 7.10 Adaptive predictor structure.
/*ADAPTSH.C - ADAPTIVE FILTER WITH SHIFTED INPUT */
#include <math.h>
#define beta 0.01                    /*rate of adaptation (assumed value)*/
#define N 30                         /*# of coefficients */
#define NS 128                       /*# of output sample points*/
#define shift 90                     /*desired amount of shift*/
#define Fs 8000                      /*sampling frequency */
#define pi 3.1415926
#define inp 1000*sin(2*pi*T*1000/Fs) /*input signal*/
main()
{
  int I, T;
  double xin, x, ys, D, E, Y1;
  double W[N+1];
  double Delay[N+1];
  volatile int *IO_OUTPUT = (volatile int*) 0x809d00;
  ys = 0;
  for (T = 0; T < N; T++)
  {
    W[T] = 0.0;
    Delay[T] = 0.0;
  }
  for (T = 0; T < NS; T++)           /*# of output samples */
  {
    xin = inp/1000;                  /*input between 1 and -1 */
    if (ys >= xin)                   /*is signal rising or falling?*/
      x = acos(xin);                 /*signal is falling */
    else                             /*otherwise */
      x = asin(xin) - (pi/2);        /*signal is rising */
    x = x - (shift*pi/180);          /*shift, converted to radians */
    Delay[0] = cos(x);               /*shifted output=filter's input*/
    D = inp;                         /*input data */
    Y1 = 0;                          /*init output */
    ys = xin;                        /*store input value */
    for (I = 0; I < N; I++)          /*for N coefficients */
      Y1 += W[I]*Delay[I];           /*adaptive filter output */
    E = D - Y1;                      /*error signal */
    for (I = N; I > 0; I--)
    {
      W[I] = W[I] + (beta*E*Delay[I]); /*update weights */
      if (I != 0)
        Delay[I] = Delay[I-1];       /*update delays */
    }
    *IO_OUTPUT++ = Y1;               /*overall output */
  }
}

FIGURE 7.11 Adaptive predictor program with shifted input (ADAPTSH.C).
Trang 14A shifting technique is employed within the program to obtain a delay of90° An optimal choice of the delay parameter is discussed in [5] Note that an-other separate input is not needed This shifting technique uses an arccosine orarcsine function depending on whether the signal is rising or falling.
The program SHIFT.C (on disk) illustrates a 90° phase shift A differentamount of delay can be verified with the program SHIFT.C The programADAPTSH.Cincorporates the shifting section of code
Verify Figure 7.12, which shows the output of the adaptive predictor (lower graph) converging to the desired 1-kHz input signal (upper graph). When this happens, the error signal converges to zero. Note that 128 output sample points can be collected starting at memory address 809d00.

The following example illustrates this phase-shift technique using a table-lookup procedure, and Example 7.5 implements the adaptive predictor with TMS320C3x code.
Example 7.4 Adaptive Predictor With Table Lookup for Delay, Using C Code
This example implements the same adaptive predictor of Figure 7.10 with the program ADAPTTB.C, listed in Figure 7.13. This program uses a table-lookup procedure with the arccosine and arcsine values set in the file scdat (on the accompanying disk) included in the program. The arccosine and arcsine values are selected depending on whether the signal is falling or rising. A delay of 270° is set in the program. This alternate implementation is faster (but not as clean)
FIGURE 7.12 Output of adaptive predictor converging to desired 1-kHz input signal.