FIGURE 7.9 Application of the dyadic wavelet transform to nonlinear filtering. After subband decomposition using an analysis filter bank, a threshold process is applied to the two highest resolution highpass subbands before reconstruction using a synthesis filter bank. Periodic convolution was used so that there is no phase shift between the input and output signals.
an = analyze1(x,h0,4);            % Decompose signal, analytic
                                  % filter bank of level 4
% Set the threshold times to equal the variance of the two higher
% resolution highpass subbands.
plot(t,x,'k',t,sy-5,'k');         % Plot signals
axis([-.2 1.2 -8 4]); xlabel('Time(sec)')
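The thresholding and reconstruction steps referred to in the comments above are not reproduced in this excerpt. A minimal sketch of how they might be implemented is given below; the subband indexing, the use of abs in the comparison, and the call to synthesize1 are assumptions based on the surrounding description, not the author's exact code.

N = length(an);                   % Length of decomposed signal array
hp1 = an(N/2+1:N);                % Highest resolution highpass subband
hp2 = an(N/4+1:N/2);              % Next highest resolution highpass subband
hp1(abs(hp1) < var(hp1)) = 0;     % Zero points below threshold (threshold
hp2(abs(hp2) < var(hp2)) = 0;     %   set equal to the subband variance)
an(N/2+1:N) = hp1;                % Put thresholded subbands back
an(N/4+1:N/2) = hp2;
sy = synthesize1(an,h0,4);        % Reconstruct with the synthesis filter bank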
The routines for the analysis and synthesis filter banks differ slightly from those used in Example 7.3 in that they use circular convolution. In the analysis filter bank routine (analyze1), the data are first extended using the periodic or wraparound approach: the initial points are added to the end of the original data sequence (see Figure 2.10B). This extension is the same length as the filter. After convolution, these added points and the extra points generated by the convolution are removed in a symmetrical fashion: a number of points equal to the filter length are removed from the initial portion of the output, and the remaining extra points are taken off the end. Only the code that differs from that shown in Example 7.3 is given below. In this code, symmetric elimination of the additional points and downsampling are done in the same instruction.
lpf = conv(a_ext,h0);             % Lowpass FIR filter
hpf = conv(a_ext,h1);             % Highpass FIR filter
lpfd = lpf(lf:2:lf+lx-1);         % Remove extra points; shift to
hpfd = hpf(lf:2:lf+lx-1);         %   obtain circular segment, then
                                  %   downsample
an(1:lx) = [lpfd hpfd];           % Lowpass output at beginning of
                                  %   array, but now occupies only
                                  %   half the data points as last
                                  %   pass
lx = lx/2;
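The wraparound extension that produces a_ext is not included in the fragment above. A minimal sketch of that step, consistent with the description (an extension equal to the filter length, formed by appending the initial points to the end of the segment), might look as follows; the exact variable handling is an assumption.

lf = length(h0);                  % Filter length
a_seg = an(1:lx);                 % Current segment to be decomposed
a_ext = [a_seg a_seg(1:lf)];      % Periodic (wraparound) extension: append
                                  %   the first lf points to the end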
The synthesis filter bank routine is modified in a similar fashion, except that the initial portion of the data is extended, also in wraparound fashion (by adding the end points to the beginning). The extended segments are then upsampled, convolved with the filters, and added together. The extra points are then removed in the same manner used in the analysis routine. Again, only the modified code is shown below.
function y = synthesize1(an,h0,L)
for i = 1:L
hpx = y(lseg+1:2*lseg);               % Get highpass outputs
lpx = [lpx(lseg-lf/2+1:lseg) lpx];    % Circular extension:
                                      %   lowpass comp.
hpx = [hpx(lseg-lf/2+1:lseg) hpx];    %   and highpass component
l_ext = length(lpx);
up_lpx = zeros(1,2*l_ext);            % Initialize vector for
                                      %   upsampling
up_lpx(1:2:2*l_ext) = lpx;            % Upsample lowpass (every
                                      %   odd point)
up_hpx = zeros(1,2*l_ext);            % Repeat for highpass
up_hpx(1:2:2*l_ext) = hpx;
syn = conv(up_lpx,g0) + conv(up_hpx,g1);   % Filter and combine
y(1:2*lseg) = syn(lf+1:(2*lseg)+lf);       % Remove extra points
                                           %   for next pass
end
The original and reconstructed waveforms are shown in Figure 7.9. The filtering produced by thresholding the highpass subbands is evident. Also, there is no phase shift between the original and reconstructed signal due to the use of periodic convolution, although a small artifact is seen at the beginning and end of the data set. This is because the data set was not really periodic.
Discontinuity Detection
Wavelet analysis based on filter bank decomposition is particularly useful for detecting small discontinuities in a waveform. This feature is also useful in image processing. Example 7.5 shows the sensitivity of this method for detecting small changes, even when they are in the higher derivatives.
Example 7.5 Construct a waveform consisting of 2 sinusoids, then add a small (approximately 1% of the amplitude) offset to this waveform. Create a new waveform by double integrating the waveform so that the offset is in the second derivative of this new signal. Apply a three-level analysis filter bank. Examine the high frequency subband for evidence of the discontinuity.
% Example 7.5 and Figures 7.10 and 7.11 Discontinuity detection
% Construct a waveform of 2 sinusoids with a discontinuity
% in the second derivative
% Decompose the waveform into 3 levels to detect the
FIGURE 7.10 Waveform composed of two sine waves with an offset discontinuity in its second derivative at 0.5 sec. Note that the discontinuity is not apparent in the waveform.
% waveform
freqsin = [.23 .8 1.8];               % Sinusoidal frequencies
% discontinuity
offset = [zeros(1,N/2) ones(1,N/2)];
%
[x1 t] = signal(freqsin,ampl,N);      % Construct signal
x1 = x1 + offset*incr;                % Add discontinuity at 0.5 sec
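The double integration that moves the offset into the second derivative is not reproduced in this excerpt. A minimal sketch, assuming simple cumulative summation is used as the integrator (the author's routine may differ), is:

x = cumsum(cumsum(x1));               % Integrate twice: the step offset now
                                      %   appears only in the second derivative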
FIGURE 7.11 Analysis filter bank output of the signal shown in Figure 7.10. Although the discontinuity is not visible in the original signal, its presence and location are clearly identified as a spike in the highpass subbands.
figure(fig2);
a = analyze(x,h0,3); % Decompose signal, analytic
% filter bank of level 3
Figure 7.10 shows the waveform with a discontinuity in its second derivative at 0.5 sec. The lower trace indicates the position of the discontinuity. Note that the discontinuity is not visible in the waveform.
The output of the three-level analysis filter bank using the Daubechies 4-element filter is shown in Figure 7.11. The position of the discontinuity is clearly visible as a spike in the highpass subbands.
Feature Detection: Wavelet Packets
The DWT can also be used to construct useful descriptors of a waveform. Since the DWT is a bilateral transform, all of the information in the original waveform must be contained in the subband signals. These subband signals, or some aspect of the subband signals such as their energy over a given time period, could provide a succinct description of some important aspect of the original signal.

In the decompositions described above, only the lowpass filter subband signals were sent on for further decomposition, giving rise to the filter bank structure shown in the upper half of Figure 7.12. This decomposition structure is also known as a logarithmic tree. However, other decomposition structures are valid, including the complete or balanced tree structure shown in the lower half of Figure 7.12. In this decomposition scheme, both highpass and lowpass subbands are further decomposed into highpass and lowpass subbands, down to the terminal signals. Other, more general, tree structures are possible in which the decision on further decomposition (whether or not to split a subband signal) depends on the activity of a given subband. The scaling functions and wavelets associated with such general tree structures are known as wavelet packets.
Example 7.6 Apply balanced tree decomposition to the waveform consisting of a mixture of three equal amplitude sinusoids of 1, 10, and 100 Hz. The main routine in this example is similar to that used in Examples 7.3 and 7.4 except that it calls the balanced tree decomposition routine, w_packet, and plots out the terminal waveforms. The w_packet routine is shown below and is used in this example to implement a 3-level decomposition, as illustrated in the lower half of Figure 7.12. This will lead to 8 output segments that are stored sequentially in the output vector, a.
% Example 7.6 and Figure 7.13
% Example of "Balanced Tree Decomposition"
% Construct a waveform of 3 sinusoids plus noise
% Decompose the waveform in 3 levels, plot outputs at the terminal
% level
FIGURE 7.12 Structure of the analysis filter bank (wavelet tree) used in the DWT, in which only the lowpass subbands are further decomposed, and a more general structure in which all nonterminal signals are decomposed into highpass and lowpass subbands.

% segments
freqsin = [1 10 100];                 % Sinusoid frequencies
FIGURE 7.13 Balanced tree decomposition of the waveform shown in Figure 7.8. The signal from the upper left plot has been lowpass filtered 3 times and represents the lowest terminal signal in Figure 7.12. The upper right has been lowpass filtered twice then highpass filtered, and represents the second from the lowest terminal signal in Figure 7.12. The rest of the plots follow sequentially.
% Daubechies 10
%
[x t] = signal(freqsin,ampl,N); % Construct signal
a = w_packet(x,h0,levels);            % Decompose signal, Balanced
                                      %   Tree
for i = 1:nu_seg
i_s = 1 + (N/nu_seg) * (i-1);         % Location for this segment
The balanced tree decomposition routine, w_packet, operates similarly to the DWT analysis filter banks, except for the filter structure. At each level, signals from the previous level are isolated, filtered (using standard convolution), and downsampled, and both the high- and lowpass signals overwrite the single signal from the previous level. At the first level, the input waveform is replaced by the filtered, downsampled high- and lowpass signals. At the second level, the two high- and lowpass signals are each replaced by filtered, downsampled high- and lowpass signals. After the second level there are four sequential signals in the original data array, and after the third level there will be eight.
% Function to generate a "balanced tree" filter bank
% All arguments are the same as in routine 'analyze'
% an = w_packet(x,h,L)
% where
% x = input waveform (must be longer than 2^L + L and power of 2)
% h0 = filter coefficients (low pass)
% L = decomposition level (number of highpass filters in bank)
%
function an = w_packet(x,h0,L)
nu_low = 2^(i-1);                     % Number of lowpass filters
                                      %   at this level
l_seg = lx/2^(i-1);                   % Length of each data segment at
                                      %   this level
for j = 1:nu_low;
i_start = 1 + l_seg * (j-1);          % Location for current
                                      %   segment
a_seg = an(i_start:i_start+l_seg-1);
lpf = conv(a_seg,h0);                 % Lowpass filter
hpf = conv(a_seg,h1);                 % Highpass filter
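The downsampling and in-place overwrite described in the text are not included in the fragment above. A minimal sketch of how that step might look, using the same variable names (the intermediate names lpfd and hpfd and the exact indexing are assumptions, not the author's code), is:

lpfd = lpf(1:2:end);                       % Downsample lowpass output
hpfd = hpf(1:2:end);                       % Downsample highpass output
an(i_start:i_start+l_seg-1) = ...          % Overwrite the previous-level segment:
    [lpfd(1:l_seg/2) hpfd(1:l_seg/2)];     %   lowpass half, then highpass half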
The output produced by this decomposition is shown in Figure 7.13. The filter bank outputs emphasize various components of the three-sine mixture. Another example is given in Problem 7 using a chirp signal.
One of the most popular applications of the dyadic wavelet transform is in data compression, particularly of images. However, since this application is not so often used in biomedical engineering (although there are some applications regarding the transmission of radiographic images), it will not be covered here.
PROBLEMS
1 (A) Plot the frequency characteristics (magnitude and phase) of the Mexican hat and Morlet wavelets.
(B) The plot of the phase characteristics will be incorrect due to phase wrapping. Phase wrapping is due to the fact that the arctan function can never be greater than ±2π; hence, once the phase shift exceeds ±2π (usually minus), it wraps around and appears as positive. Replot the phase after correcting for this wraparound effect. (Hint: Check for discontinuities above a certain amount, and when that amount is exceeded, subtract 2π from the rest of the data array. This is a simple algorithm that is generally satisfactory in linear systems analysis.)
2 Apply the continuous wavelet analysis used in Example 7.1 to analyze a chirp signal running between 2 and 30 Hz over a 2 sec period. Assume a sample rate of 500 Hz as in Example 7.1. Use the Mexican hat wavelet. Show both contour and 3-D plots.
3 Plot the frequency characteristics (magnitude and phase) of the Haar and Daubechies 4- and 10-element filters. Assume a sample frequency of 100 Hz.
4 Generate a Daubechies 10-element filter and plot the magnitude spectrum as in Problem 3. Construct the highpass filter using the alternating flip algorithm (Eq. (20)) and plot its magnitude spectrum. Generate the lowpass and highpass synthesis filter coefficients using the order flip algorithm (Eqs. (23) and (24)) and plot their respective frequency characteristics. Assume a sampling frequency of 100 Hz.
5 Construct a waveform of a chirp signal as in Problem 2 plus noise. Make the variance of the noise equal to the variance of the chirp. Decompose the waveform in 5 levels, operate on the lowest level (i.e., the high resolution highpass signal), then reconstruct. The operation should zero all elements below a given threshold. Find the best threshold. Plot the signal before and after reconstruction. Use a Daubechies 6-element filter.
6 Discontinuity detection. Load the waveform x in file Prob7_6_data which consists of a waveform of 2 sinusoids the same as in Figure 7.9, but with a series of diminishing discontinuities in the second derivative. The discontinuities in the second derivative begin at approximately 0.5% of the sinusoidal amplitude and decrease by a factor of 2 for each pair of discontinuities. (The offset array can be obtained in the variable offset.) Decompose the waveform into three levels and examine and plot only the highest resolution highpass filter output to detect the discontinuity. Hint: The highest resolution output will be located in N/2 to N of the analysis output array. Use a Haar and a Daubechies 10-element filter and compare the difference in detectability. (Note that the Haar is a very weak filter, so some of the low frequency components will still be found in its output.)
7 Apply the balanced tree decomposition to a chirp signal similar to that used in Problem 5 except that the chirp frequency should range between 2 and 100 Hz. Decompose the waveform into 3 levels and plot the outputs at the terminal level as in Example 7.6. Use a Daubechies 4-element filter. Note that each output filter responds to different portions of the chirp signal.
Advanced Signal Processing Techniques: Optimal and Adaptive Filters
OPTIMAL SIGNAL PROCESSING: WIENER FILTERS
The FIR and IIR filters described in Chapter 4 provide considerable flexibility in altering the frequency content of a signal. Coupled with MATLAB filter design tools, these filters can provide almost any desired frequency characteristic to nearly any degree of accuracy. The actual frequency characteristics attained by the various design routines can be verified through Fourier transform analysis. However, these design routines do not tell the user what frequency characteristics are best; i.e., what type of filtering will most effectively separate out signal from noise. That decision is often made based on the user's knowledge of signal or source properties, or by trial and error. Optimal filter theory was developed to provide structure to the process of selecting the most appropriate frequency characteristics.
A wide range of different approaches can be used to develop an optimal filter, depending on the nature of the problem: specifically, what, and how much, is known about signal and noise features. If a representation of the desired signal is available, then a well-developed and popular class of filters known as Wiener filters can be applied. The basic concept behind Wiener filter theory is to minimize the difference between the filtered output and some desired output. This minimization is based on the least mean square approach, which adjusts the filter coefficients to reduce the square of the difference between the desired and actual waveform after filtering. This approach requires an estimate of the desired signal, which must somehow be constructed, and this estimation is usually the most challenging aspect of the problem.*

FIGURE 8.1 Basic arrangement of signals and processes in a Wiener filter.
The Wiener filter approach is outlined in Figure 8.1. The input waveform containing both signal and noise is operated on by a linear process, H(z). In practice, the process could be either an FIR or IIR filter; however, FIR filters are more popular as they are inherently stable,† and our discussion will be limited to the use of FIR filters. FIR filters have only numerator terms in the transfer function (i.e., only zeros) and can be implemented using convolution, first presented in Chapter 2 (Eq. (15)) and later used with FIR filters in Chapter 4 (Eq. (8)). Again, the convolution equation is:
y(n) = Σ_{k=1}^{L} h(k) x(n - k)
where h(k) is the impulse response of the linear filter. The output of the filter, y(n), can be thought of as an estimate of the desired signal, d(n). The difference between the estimate and desired signal, e(n), can be determined by simple subtraction: e(n) = d(n) - y(n).

As mentioned above, the least mean square algorithm is used to minimize the error signal: e(n) = d(n) - y(n). Note that y(n) is the output of the linear filter, H(z). Since we are limiting our analysis to FIR filters, h(k) ≡ b(k), and e(n) can be written as:
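(The equation that followed here is not reproduced in this excerpt; substituting the FIR coefficients b(k) for h(k) in the convolution sum, it presumably takes the standard form:)

e(n) = d(n) - y(n) = d(n) - Σ_{k=1}^{L} b(k) x(n - k)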
*As shown below, only the crosscorrelation between the unfiltered and the desired output is necessary for the application of these filters.
†IIR filters contain internal feedback paths and can oscillate with certain parameter combinations.
After squaring the term in brackets, the sum of error squared becomes a
quadratic function of the FIR filter coefficients, b(k), in which two of the terms
can be identified as the autocorrelation and cross correlation:
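(The expanded error expression is not reproduced in this excerpt. In standard Wiener filter derivations it takes a form similar to the following, where r_dx is the crosscorrelation between the desired signal and the input and r_xx is the autocorrelation of the input; the notation in the original may differ:)

ε = Σ_{n=1}^{N} e²(n) = Σ_{n=1}^{N} d²(n) - 2 Σ_{k=1}^{L} b(k) r_dx(k) + Σ_{k=1}^{L} Σ_{m=1}^{L} b(k) b(m) r_xx(k - m)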
Since we desire to minimize the error function with respect to the FIR
filter coefficients, we take derivatives with respect to b(k) and set them to zero:
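(Equation (5) is not reproduced in this excerpt; setting the derivative of the error with respect to each b(k) to zero yields, in the standard formulation, the set of normal equations shown below; the original notation may differ:)

Σ_{k=1}^{L} b(k) r_xx(m - k) = r_dx(m),   for m = 1, 2, ..., L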
Equation (5) shows that the optimal filter can be derived knowing only the autocorrelation function of the input and the crosscorrelation function between the input and desired waveform. In principle, the actual functions are not necessary, only the auto- and crosscorrelations; however, in most practical situations the auto- and crosscorrelations are derived from the actual signals, in which case some representation of the desired signal is required.

To solve for the FIR coefficients in Eq. (5), we note that this equation actually represents a series of L equations that must be solved simultaneously. The matrix expression for these simultaneous equations is:
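(The matrix expression itself, Eq. (6), is not reproduced in this excerpt; in the standard formulation, with the autocorrelation values arranged in a symmetric Toeplitz matrix, it has the form shown below; the original notation may differ:)

| r_xx(0)     r_xx(1)    ...  r_xx(L-1) | | b(1) |   | r_dx(0)   |
| r_xx(1)     r_xx(0)    ...  r_xx(L-2) | | b(2) | = | r_dx(1)   |
|   ...         ...      ...    ...     | | ...  |   |   ...     |
| r_xx(L-1)   r_xx(L-2)  ...  r_xx(0)   | | b(L) |   | r_dx(L-1) |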
FIGURE 8.2 Configuration for using optimal filter theory for systems identification.

Equation (6) is commonly known as the Wiener-Hopf equation and is a basic component of Wiener filter theory. Note that the matrix in the equation is the correlation matrix mentioned in Chapter 2 (Eq. (21)) and has a symmetrical structure termed a Toeplitz structure.* The equation can be written more succinctly using standard matrix notation, and the FIR coefficients can be obtained by solving the equation through matrix inversion:
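(The equation referred to here, Eq. (7), is not reproduced in this excerpt; presumably it is the matrix solution

b = R_xx⁻¹ r_dx

which is implemented in MATLAB with the left division operator, as in the wiener_hopf routine given later: b = rxx_matrix\rxy.)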
The application and solution of this equation are given for two different examples in the following section on MATLAB implementation.
The Wiener-Hopf approach has a number of other applications in addition to standard filtering, including systems identification, interference canceling, and inverse modeling or deconvolution. For system identification, the filter is placed in parallel with the unknown system as shown in Figure 8.2. In this application, the desired output is the output of the unknown system, and the filter coefficients are adjusted so that the filter's output best matches that of the unknown system. An example of this application is given in a subsequent section on adaptive signal processing where the least mean squared (LMS) algorithm is used to implement the optimal filter. Problem 2 also demonstrates this approach. In interference canceling, the desired signal contains both signal and noise while the filter input is a reference signal that contains only noise or a signal correlated with the noise. This application is also explored under the section on adaptive signal processing since it is more commonly implemented in this context.
MATLAB Implementation
The Wiener-Hopf equation (Eqs. (5) and (6)) can be solved using MATLAB's matrix inversion operator ('\') as shown in the examples below. Alternatively, since the matrix has the Toeplitz structure, matrix inversion can also be done using a faster algorithm known as the Levinson-Durbin recursion.

*Due to this matrix's symmetry, it can be uniquely defined by only a single row or column.
The MATLAB toeplitz function is useful in setting up the correlation matrix. The function call is:

Rxx = toeplitz(rxx);

where rxx is the input row vector. This constructs a symmetrical matrix from a single row vector and can be used to generate the correlation matrix in Eq. (6) from the autocorrelation function rxx. (The function can also create an asymmetrical Toeplitz matrix if two input arguments are given.)
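As an illustration (this specific call is not from the text), a three-lag autocorrelation vector produces a 3-by-3 symmetric correlation matrix:

Rxx = toeplitz([1.0 0.5 0.2])
% Rxx =
%    1.0000    0.5000    0.2000
%    0.5000    1.0000    0.5000
%    0.2000    0.5000    1.0000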
In order for the matrix to be inverted, it must be nonsingular; that is, the rows and columns must be independent. Because of the structure of the correlation matrix in Eq. (6) (termed positive definite), it cannot be singular. However, it can be near singular: some rows or columns may be only slightly independent. Such an ill-conditioned matrix will lead to large errors when it is inverted. The MATLAB '\' matrix inversion operator provides an error message if the matrix is not well-conditioned, but this can be more effectively evaluated using the MATLAB cond function:

c = cond(X)

where X is the matrix under test and c is the ratio of the largest to smallest singular values. A very well-conditioned matrix would have singular values in the same general range, so the output variable, c, would be close to one. Very large values of c indicate an ill-conditioned matrix. Values greater than 10^4 have been suggested by Sterns and David (1996) as too large to produce reliable results in the Wiener-Hopf equation. When this occurs, the condition of the matrix can usually be improved by reducing its dimension, that is, reducing the range, L, of the autocorrelation function in Eq. (6). This will also reduce the number of filter coefficients in the solution.
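A minimal sketch of this check, with hypothetical variable names (not from the text), might precede the solution of the Wiener-Hopf equation:

rxx = xcorr(x,L,'coeff');             % Autocorrelation, lags -L to L
rxx = rxx(L+1:end);                   % Keep zero and positive lags
Rxx = toeplitz(rxx);                  % (L+1) by (L+1) correlation matrix
while cond(Rxx) > 1e4 && length(Rxx) > 2
    Rxx = Rxx(1:end-1,1:end-1);       % Reduce dimension (and the number of
end                                   %   coefficients) until well-conditioned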
Example 8.1 Given a sinusoidal signal in noise (SNR = -8 db), design an optimal filter using the Wiener-Hopf equation. Assume that you have a copy of the actual signal available; in other words, a version of the signal without the added noise. In general, this would not be the case: if you had the desired signal, you would not need the filter! In practical situations you would have to estimate the desired signal or the crosscorrelation between the estimated and desired signals.
Solution The program below uses the routine wiener_hopf (also shown below) to determine the optimal filter coefficients. These are then applied to the noisy waveform using the filter routine introduced in Chapter 4, although correlation could also have been used.
% Example 8.1 and Figure 8.3 Wiener Filter Theory
% Use an adaptive filter to eliminate broadband noise from a
% narrowband signal
FIGURE 8.3 Application of the Wiener-Hopf equation to produce an optimal FIR filter to filter broadband noise (SNR = -8 db) from a single sinusoid (10 Hz). The frequency characteristics (bottom plot) show that the filter coefficients were adjusted to approximate a bandpass filter with a small bandwidth and a peak at 10 Hz.
[xn, t, x] = sig_noise(10,-8,N);      % xn is signal + noise and
                                      %   x is noise free (i.e.,
                                      %   desired) signal
subplot(3,1,1); plot(t, xn,'k');      % Plot unfiltered data
labels, table, axis
%
% Determine the optimal FIR filter coefficients and apply
b = wiener_hopf(xn,x,L);              % Apply Wiener-Hopf
                                      %   equations
y = filter(b,1,xn);                   % Filter data using optimum
                                      %   filter weights
%
% Plot filtered data and filter spectrum
subplot(3,1,2); plot(t,y,'k');        % Plot filtered data
labels, table, axis
%
subplot(3,1,3);
f = (1:N) * fs/N;                     % Construct freq vector for plotting
h = abs(fft(b,256)).^2;               % Calculate filter power
plot(f,h,'k');                        %   spectrum and plot
labels, table, axis
The function wiener_hopf solves the Wiener-Hopf equations:
function b = wiener_hopf(x,y,maxlags)
% Function to compute LMS algorithm using Wiener-Hopf equations
% Inputs: x = input
% Outputs: b = FIR filter coefficients
%
rxx = xcorr(x,maxlags,'coeff');       % Compute the autocorrelation
                                      %   vector
rxx = rxx(maxlags+1:end)';            % Use only positive half of
                                      %   symm. vector
rxy = xcorr(x,y,maxlags);             % Compute the crosscorrelation
                                      %   vector
rxy = rxy(maxlags+1:end)';            % Use only positive half
%
rxx_matrix = toeplitz(rxx);           % Construct correlation
                                      %   matrix
b = rxx_matrix\rxy;                   % Calculate FIR coefficients
                                      %   using matrix inversion;
                                      %   Levinson could be used
                                      %   here
Example 8.1 generates Figure 8.3 above. Note that the optimal filter approach, when applied to a single sinusoid buried in noise, produces a bandpass filter with a peak at the sinusoidal frequency. An equivalent, or even more effective, filter could have been designed using the tools presented in Chapter 4. Indeed, such a statement could also be made about any of the adaptive filters described below. However, this requires precise a priori knowledge of the signal and noise frequency characteristics, which may not be available. Moreover, a fixed filter will not be able to optimally filter signal and noise that changes over time.
Example 8.2 Apply the LMS algorithm to a systems identification task. The "unknown" system will be an all-zero linear process with a digital transfer function of:

H(z) = 0.5 + 0.75z^{-1} + 1.2z^{-2}

Confirm the match by plotting the magnitude of the transfer function for both the unknown and matching systems. Since this approach uses an FIR filter as the matching system, which is also an all-zero process, the match should be quite good. In Problem 2, this approach is repeated, but for an unknown system that has both poles and zeros. In this case, the FIR (all-zero) filter will need many more coefficients than the unknown pole-zero process to produce a reasonable match.
Solution The program below inputs random noise into the unknown process using convolution and into the matching filter. Since the FIR matching filter cannot easily accommodate a pure time delay, care must be taken to compensate for possible time shift due to the convolution operation. The matching filter coefficients are adjusted using the Wiener-Hopf equation described previously. Frequency characteristics of both unknown and matching systems are determined by applying the FFT to the coefficients of both processes, and the resultant spectra are plotted.
% Example 8.2 and Figure 8.4 Adaptive Filters System
% Identification
%
% Uses optimal filtering implemented with the Wiener-Hopf
% algorithm to identify an unknown system
%
% Initialize parameters