where uk is the vector that contains the elements of uk(n), "·" indicates pointwise matrix multiplication and, throughout this chapter, pointwise matrix multiplication takes a lower precedence than conventional matrix multiplication. Combining all of the circular convolutions into one matrix equation, we have
According to the overlap-save method, only the second half of ck corresponds to the kth section of the linear convolution. Denote the kth section of the linear convolution by yk and the vector that contains the elements of y(n) by y. Then yk can be written as
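The overlap-save rule just described (keep only the second half of each circular convolution) can be illustrated with an ordinary DFT-based overlap-save convolution; this is a generic sketch, not the HOT-based method, and the function and variable names are illustrative:

```python
import numpy as np

def overlap_save(u, h, L):
    """DFT-based overlap-save linear convolution of input u with filter h.

    Each length-M section carries N = len(h) - 1 samples of overlap from the
    previous section; the first N samples of each circular convolution are
    time-aliased and discarded, and only the last L samples are kept.
    """
    N = len(h) - 1                       # overlap length
    M = L + N                            # section (DFT) length
    H = np.fft.fft(h, M)
    Ly = len(u) + N                      # full linear-convolution length
    x = np.concatenate([np.zeros(N), u, np.zeros(L)])  # prehistory + tail pad
    y = np.zeros(Ly)
    pos = 0
    while pos < Ly:
        section = x[pos:pos + M]
        if len(section) < M:
            section = np.pad(section, (0, M - len(section)))
        c = np.fft.ifft(np.fft.fft(section) * H).real   # circular convolution
        nkeep = min(L, Ly - pos)
        y[pos:pos + nkeep] = c[N:N + nkeep]             # discard wrapped part
        pos += L
    return y
```

Discarding the first N samples of each section mirrors the "only the second half of ck" rule above.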
1. Divide u(n) into K overlapping sections and combine them into one vector to form u.
2. Perform K-band polyphase decomposition of u to form ˜u.
3. Take the HOT of ˜u.
4. Post-append h(n) with N zeros and then stack the appended h(n) K times into one vector to form hr.
5. Perform K-band polyphase decomposition of hr to form ˜hr.
6. Take the HOT of ˜hr.
7. Point-wise multiply the vectors from steps 3 and 6.
8. Take the inverse HOT of the vector from step 7.
9. Perform K-band polyphase decomposition of the result from step 8.
10. Multiply the result of step 9 with G.
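The chapter's HOT definition is not reproduced in this excerpt, so the following sketch is only structural: it uses the polyphase-DFT construction reported in the HOT literature (a DFT applied to each polyphase component) as a stand-in transform, and it illustrates why the point-wise multiplication of step 7 amounts to a set of per-band circular convolutions. The matrix G, the zero padding, and the exact section sizes of steps 1-10 are omitted.

```python
import numpy as np

def hot(x, K):
    """Stand-in HOT of a length-(M*K) vector: an M-point DFT applied to each
    of the K polyphase components x[l::K].  This polyphase-DFT structure is
    what gives the HOT basis its many zero-valued samples."""
    X = np.asarray(x).reshape(-1, K)     # column l holds the component x[l::K]
    return np.fft.fft(X, axis=0)

def ihot(X, K):
    """Inverse of the stand-in transform above."""
    return np.fft.ifft(X, axis=0).reshape(-1)
```

Point-wise multiplying two transformed vectors and inverting, as in steps 7-8, then circularly convolves each pair of polyphase components band by band.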
4 Development of the HOT DFT block LMS algorithm
Recall that in the block LMS algorithm two convolutions are needed. The first convolution is between the filter impulse response and the filter input and is needed to calculate the output of the filter in each block. The second convolution is between the filter input and the error and is needed to estimate the gradient in the filter weight update equation. If the block length is much larger than the filter length, then the fast HOT convolution developed in the previous section can be used to calculate the first convolution. However, the second convolution is a convolution between two signals of the same length, and the fast HOT convolution cannot be used directly without modification.
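For reference, the two convolutions can be made explicit in a plain time-domain block LMS sketch; this is the generic algorithm, not the HOT-based fast version, and the variable names are illustrative:

```python
import numpy as np

def block_lms(u, d, N, L, mu):
    """Time-domain block LMS.  Each block computes (1) the filter output,
    a convolution of the weights with the input, and (2) the gradient
    estimate, a correlation of the input with the block error."""
    w = np.zeros(N)
    e_hist = np.zeros(len(u))
    up = np.concatenate([np.zeros(N - 1), u])           # zero prehistory
    for k in range(0, len(u) - L + 1, L):
        # L x N matrix of tap vectors [u(n), u(n-1), ..., u(n-N+1)]
        U = np.array([up[n:n + N][::-1] for n in range(k, k + L)])
        y = U @ w                     # first convolution: block filter output
        e = d[k:k + L] - y
        w = w + mu * (U.T @ e)        # second convolution: gradient estimate
        e_hist[k:k + L] = e
    return w, e_hist
```

Each block multiplies an L×N tap matrix by the weights (the filtering convolution) and its transpose by the error (the gradient correlation); the fast algorithms in this chapter compute both products in a transform domain.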
Let N be the filter length and L = NK be the block length, where N, L, and K are all integers. Let

ˆu(k) = [u(kL) u(kL+1) · · ·]T

be the vector of input samples needed in the kth block. To use the fast HOT convolution
described in the previous section, ˆu(k) is divided into K overlapping sections. Such sections can be formed by multiplying ˆu(k) with an appropriate sectioning matrix.
The filter update equation is given by
The sum in equation (31) can be efficiently calculated using the (L+N)-point DFTs of the error vector e(n) and input vector u(n). However, the (L+N)-point DFT of u(n) is not available; only the 2N-point DFTs of the K sections of ˆu(k) are available. Therefore, the sum in equation (31) should be divided into K sections as follows:
Then the sum over j is just the first N elements of the circular convolution of el(k) and the circularly shifted ul(k), and it can be computed using the DFT as shown below:
ulF(k) = F2N ul(k),    (37)

and

Figure 2 shows the flow block diagram of the HOT DFT block LMS adaptive filter.
5 Computational cost of the HOT DFT block LMS algorithm
Before looking at the convergence analysis of the new adaptive filter, we examine its computational cost. To calculate the output of the kth block, 2K+1 2N-point DFTs are needed. Therefore, (2K+1)2N log2 2N + 2NK multiplications are needed to calculate the output. To calculate the gradient estimate in the filter update equation, 2K 2N-point DFTs are required. Therefore, 6KN log2 2N + 2NK multiplications are needed. The total multiplication count of the new algorithm is then (4K+1)2N log2 2N + 4NK. The multiplication count for the DFT block LMS algorithm is 10KN log2 2NK + 4NK. Therefore, as K gets larger, the HOT DFT block LMS algorithm becomes more efficient than the DFT block LMS algorithm. For example, for N = 100 and K = 10 the HOT DFT LMS algorithm is about 30% more efficient, and for N = 50 and K = 20 it is about 40% more efficient.
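The stated multiplication counts can be evaluated directly. The exact percentage saving depends on the counting conventions used, so the sketch below simply compares the two formulas as written:

```python
import numpy as np

def hot_dft_mults(N, K):
    """Multiplication count of the HOT DFT block LMS algorithm, as stated."""
    return (4 * K + 1) * 2 * N * np.log2(2 * N) + 4 * N * K

def dft_mults(N, K):
    """Multiplication count of the DFT block LMS algorithm, as stated."""
    return 10 * K * N * np.log2(2 * N * K) + 4 * N * K

for N, K in [(100, 10), (50, 20)]:
    ratio = hot_dft_mults(N, K) / dft_mults(N, K)
    print(f"N={N}, K={K}: HOT/DFT multiplication ratio = {ratio:.2f}")
```

The ratio is below one in both example cases and decreases as K grows, consistent with the trend described above.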
Fig. 2. The flow block diagram of the HOT DFT block LMS adaptive filter.
The ratio between the number of multiplications required for the HOT DFT block LMS algorithm and the number of multiplications required for the DFT block LMS algorithm is plotted in Figure 3 for different filter lengths. The HOT DFT block LMS filter is always more efficient than the DFT block LMS filter, and the efficiency increases as the block length increases.
6 Convergence analysis of the HOT DFT LMS algorithm
Now the convergence of the new algorithm is analyzed. The analysis is performed in the DFT domain. The adaptive filter update equation in the DFT domain is given by
The convergence speed of the HOT DFT LMS algorithm can be increased if the convergence modes are normalized using the estimated power of the tap-input vector in the DFT domain. The complete HOT DFT block LMS weight update equation is given by
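The mode normalization can be sketched with the standard running power estimate used in frequency-domain adaptive filters; the forgetting factor γ and the regularizer ε below are illustrative choices, not values from the chapter:

```python
import numpy as np

def normalized_step(U, P, mu=0.5, gamma=0.9, eps=1e-8):
    """One normalization step: update the running power estimate P of the
    DFT-domain tap-input vector U and return the per-bin step sizes.
    gamma is a forgetting factor; eps guards against division by zero."""
    P = gamma * P + (1 - gamma) * np.abs(U) ** 2
    return mu / (P + eps), P
```

Dividing the step size by the per-bin power equalizes the convergence modes, so all DFT bins adapt at comparable rates.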
7 Misadjustment of the HOT DFT LMS algorithm
In this section, the misadjustment of the HOT DFT block LMS algorithm is derived. The mean square error of the conventional LMS algorithm is given by
8 Simulation of the HOT DFT block LMS algorithm
The learning curves of the HOT DFT block LMS algorithm were simulated. The desired input was generated using the linear model d(n) = wo(n) ∗ u(n) + eo(n), where eo(n) is the measurement white Gaussian noise with variance 10−8. The input was a first-order Markov signal with autocorrelation function given by r(k) = ρ^|k|. The filter was lowpass with a cutoff frequency of π/2 rad.
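A first-order Markov input with autocorrelation r(k) = ρ^|k| can be generated as an AR(1) process; the √(1−ρ²) scaling, an implementation choice here, makes the signal unit-variance:

```python
import numpy as np

def markov_signal(n, rho, rng):
    """First-order Markov (AR(1)) signal with unit variance and
    autocorrelation r(k) = rho**|k|."""
    w = rng.standard_normal(n)
    x = np.zeros(n)
    x[0] = w[0]                               # stationary unit-variance start
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho ** 2) * w[i]
    return x
```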
Figure 4 shows the learning curves for the HOT DFT block LMS filter together with those for the LMS and DFT block LMS filters for N = 4, K = 3, and ρ = 0.9. Figure 5 shows similar curves for N = 50, K = 10, and ρ = 0.9. Both figures show that the HOT DFT block LMS algorithm converges at the same rate as the DFT block LMS algorithm and yet is computationally more efficient. Figure 6 shows similar curves for N = 50, K = 10, and ρ = 0.8. As the correlation coefficient decreases, the algorithms converge faster, and the HOT DFT block LMS algorithm again converges at the same rate as the DFT block LMS algorithm.
Fig. 4. Learning curves for the LMS, HOT DFT block LMS, and DFT block LMS algorithms; N = 4, K = 3, ρ = 0.9.
Fig. 5. Learning curves for the LMS, HOT DFT block LMS, and DFT block LMS algorithms; N = 50, K = 10, ρ = 0.9.
Another coloring filter was also used to simulate the learning curves of the algorithms. The coloring filter was a bandpass filter with H(z) = 0.1 − 0.2z−1 − 0.3z−2 + 0.4z−3 + 0.4z−4 − 0.2z−5 − 0.1z−6. The frequency response of the coloring filter is shown in Figure 7, and the learning curves are shown in Figure 8. The simulations are again consistent with the theoretical predictions presented in this chapter.
Fig. 6. Learning curves for the LMS, HOT DFT block LMS, and DFT block LMS algorithms; N = 50, K = 10, ρ = 0.8.
Fig. 7. Frequency response of the coloring filter.
Fig. 8. Learning curves for the LMS, HOT DFT block LMS, and DFT block LMS algorithms; N = 50, K = 10.
9 Conclusions
In this chapter a new computationally efficient block LMS algorithm, the HOT DFT block LMS algorithm, was presented. It is based on a newly developed transform called the HOT. The basis of the HOT has many zero-valued samples and resembles the DFT basis, which makes the HOT computationally attractive. The HOT DFT block LMS algorithm is very similar to the DFT block LMS algorithm and reduces its computational complexity by about 30% when the filter length is much smaller than the block length. The analytical predictions and simulations showed that the convergence characteristics of the HOT DFT block LMS algorithm are similar to those of the DFT block LMS algorithm.
10 References
Alkhouli, O.; DeBrunner, V.; Zhai, Y. & Yeary, M. (2005). "FIR adaptive filters based on Hirschman optimal transform," IEEE/SP 13th Workshop on Statistical Signal Processing, 2005.
Alkhouli, O. & DeBrunner, V. (2007). "Convergence analysis of Hirschman optimal transform (HOT) LMS adaptive filter," IEEE/SP 14th Workshop on Statistical Signal Processing, 2007.
Alkhouli, O. & DeBrunner, V. (2007). "Hirschman optimal transform block adaptive filter," International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2007.
Clark, G.; Mitra, S. & Parker, S. (1981). "Block implementation of adaptive digital filters," IEEE Trans. ASSP, pp. 744-752, Jun. 1981.
DeBrunner, V.; Özaydin, M. & Przebinda, T. (1999). "Resolution in time-frequency," IEEE Trans. ASSP, pp. 783-788, Mar. 1999.
DeBrunner, V. & Matusiak, E. (2003). "An algorithm to reduce the complexity required to convolve finite length sequences using the Hirschman optimal transform (HOT)," ICASSP 2003, Hong Kong, China, pp. II-577-580, Apr. 2003.
DeBrunner, V.; Havlicek, J.; Przebinda, T. & Özaydin, M. (2005). "Entropy-based uncertainty measures for L2(Rn), l2(Z), and l2(Z/NZ) with a Hirschman optimal transform for l2(Z/NZ)," IEEE Trans. ASSP, pp. 2690-2696, Aug. 2005.
Farhang-Boroujeny, B. (2000). Adaptive Filters: Theory and Applications. Wiley, 1999.
Farhang-Boroujeny, B. & Chan, K. (2000). "Analysis of the frequency-domain block LMS algorithm," IEEE Trans. ASSP, pp. 2332, Aug. 2000.
Ferrara, E. (1980). "Fast implementation of LMS adaptive filters," IEEE Trans. ASSP, vol. ASSP-28, no. 4, Aug. 1980.
Mitra, S. (2000). Digital Signal Processing. McGraw-Hill, second edition, 2000.
Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall Information and System Sciences Series, fourth edition, 2002.
Przebinda, H.; DeBrunner, V. & Özaydin, M. (2001). "The optimal transform for the discrete Hirschman uncertainty principle," IEEE Trans. Inform. Theory, pp. 2086-2090, Jul. 2001.
Widrow, B. & Hoff, Jr., M. (1960). "Adaptive switching circuits," IRE WESCON Conv. Rec., pt. 4, pp. 96-104, 1960.
Real-Time Noise Cancelling Approach
on Innovations-Based Whitening Application
to Adaptive FIR RLS in Beamforming Structure
Jinsoo Jeong
Faculty of Biomedical Engineering and Health Science,
Universiti Teknologi Malaysia, Johor,
Malaysia
1 Introduction
The techniques for noise cancellation have been developed with applications in signal processing, such as homomorphic signal processing, sensor array signal processing, and statistical signal processing. Exemplar applications may be found in the kepstrum (also known as complex cepstrum) method, beamforming, and ANC (adaptive noise cancelling), respectively, as shown in Fig. 1.
Fig. 1. Signal processing techniques and the application of methods for noise cancellation.

Based on the two-microphone approach, the applications are characterized as three methods, which are based on identification of an unknown system in acoustic channels, adaptive speech beamforming, and adaptive noise cancellation. They can be described by the generalized three-sub-block diagram shown in Fig. 2, which shows the three processing stages of (1) kepstrum (complex cepstrum), (2) beamforming, and (3) ANC, as well as the two structures of beamforming and ANC.
Fig. 2. Generalized diagram for the typical two-microphone approach.
1 Kepstrum - estimation of acoustic path transfer functions (H1 and H2)
From the output of the sensor array, the acoustic transfer functions (H1 and H2) are estimated from the acoustic channels as noise statistics during the noise period, and the estimate is applied during the speech-and-noise period for noise cancellation. It can be applied as a preprocessor to the second processing stage (beamforming) or directly to the third processing stage (ANC). The application can be found in (Moir & Barrett, 2003; Jeong & Moir, 2008), where the unknown system has been estimated as the ratio (H1/H2) of the two acoustic transfer functions between each microphone and the noise source. The kepstrum filter is used as an estimate of the unknown system and is applied in front of the SS (sum and subtract) functions in the beamforming structure (Jeong & Moir, 2008).
2 Beamforming - adaptive filter (W1), delay filter (D1) and SS functions
The beamforming structure contains the SS functions, which are used as a signal separator and enhancer by summing and subtracting the signals of each microphone input (Griffiths & Jim, 1982). Adaptive filter 1 is placed in front of the SS functions and used as a speech beamforming filter (Compernolle, 1990). It is used as a beam steering input, and hence as a DS (delay and sum) beamformer in the primary input during the speech period using a VAD (voice activity detector); its output is then applied to the third stage, ANC, as an enhanced primary input. Both output signals from the SS functions are divided by the number of microphones used (in the case of two microphones, the factor is 0.5). Alternatively, adaptive filter 1 can be used as a first adaptive noise canceller. In this application, its output is a noise reference input to the next cascaded adaptive filter 2 during the noise period in the VAD (Wallace & Goubran, 1992). Based on the same structure, a two-stage adaptive filtering scheme was introduced (Berghe & Wouters, 1998). As a speech directivity function, a GCC (generalized cross-correlation) based TDOA (time difference of arrival) function may alternatively be used instead of adaptive filter 1 in the beamforming structure (Knapp & Carter, 1976).
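A minimal sketch of the SS functions for the two-microphone case, including the division by the number of microphones noted above (names are illustrative):

```python
import numpy as np

def sum_subtract(x1, x2):
    """Griffiths-Jim style sum-and-subtract block: the sum output reinforces
    speech that arrives identically at both microphones, while the
    difference output cancels it, leaving a noise reference.  Both outputs
    are divided by the number of microphones (two here, hence 0.5)."""
    return 0.5 * (x1 + x2), 0.5 * (x1 - x2)
```

With identical speech on both microphones plus uncorrelated noise, the difference output contains no speech at all, which is exactly what makes it usable as a noise reference.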
3 ANC - adaptive filter (W2) and delay filter (D2)
The last part of the block diagram shows the ANC method (Widrow et al., 1975), which consists of adaptive filter 2 and delay filter 2. The adaptive filter generally uses the FIR (finite impulse response) LMS (least mean square) algorithm in signal processing, or the IIR (infinite impulse response) RLS (recursive least squares) algorithm in adaptive control, for noise cancellation in the MMSE (minimum mean square error) sense. Depending on the application, the two algorithms trade off performance against computational complexity: RLS gives, on average, two-tenths of a decibel SNR (signal-to-noise ratio) improvement over the LMS algorithm (Harrison et al., 1986), but it requires much higher computational complexity. Delay filter 2 is used as a noncausality filter to maintain causality.
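A minimal FIR RLS noise canceller of the kind discussed here can be sketched as follows; the forgetting factor and the initialization of the inverse correlation matrix are illustrative choices, not values from the chapter:

```python
import numpy as np

def rls_anc(ref, primary, N, lam=0.99, delta=100.0):
    """FIR RLS adaptive noise canceller: `ref` drives an N-tap filter that
    is adapted to predict the noise component of `primary`; the error
    output is the enhanced signal (MMSE sense).  lam is the forgetting
    factor and delta initialises the inverse correlation matrix P."""
    w = np.zeros(N)
    P = delta * np.eye(N)
    out = np.zeros(len(primary))
    x = np.zeros(N)                          # tap-delay line, newest first
    for n in range(len(primary)):
        x = np.roll(x, 1)
        x[0] = ref[n]
        e = primary[n] - w @ x               # a priori error
        k = P @ x / (lam + x @ P @ x)        # RLS gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update
        out[n] = e
    return out, w
```

When the primary input is purely the reference filtered through an unknown short FIR path, the weights converge to that path and the error output decays toward zero.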
As described above, the techniques have been developed on the basis of these methods and structures. From the above analysis, the kepstrum noise cancelling technique has been studied, where the kepstrum has been used for the identification of the acoustic transfer functions between two microphones, and the kepstrum coefficients from the ratio of the two acoustic transfer functions have been applied in front of the adaptive beamforming structure for noise cancellation and speech enhancement (Jeong & Moir, 2008).
Furthermore, using the fact that a random signal plus noise may be represented as the output of a normalized minimum-phase spectral factor driven by an innovations white-noise input (Kalman & Bucy, 1961), the application of an innovations-based whitened form (here called the inverse kepstrum) has been investigated in a simulation test, where the front-end inverse kepstrum has been analyzed with the application of a cascaded FIR LMS algorithm (Jeong, 2009) and also an FIR RLS algorithm (Jeong, 2010a; 2010b), both in the ANC structure for noise cancellation.
In this paper, for practical real-time processing using the RLS algorithm, the analysis of the innovations-based whitening filter (inverse kepstrum) is extended to the beamforming structure and tested in a realistic environment. From the simulation test, it will be shown that the overall estimate from front-end inverse kepstrum processing cascaded with an FIR RLS filter approximates the estimate of the IIR RLS algorithm in the ANC structure. This provides an alternative solution to the computational complexity of the ANC application using a pole-zero IIR RLS filter, which is mostly not acceptable for practical applications. For application in a realistic environment, the approach has been applied to the beamforming structure for effective noise cancellation, and it will be shown that the front-end kepstrum application with a zero-model FIR RLS filter provides even better performance than the pole-zero model IIR RLS algorithm in the ANC structure.
2 Analysis of optimum IIR Wiener filtering and the application to
two-microphone noise cancelling approach
For the IIR Wiener filtering approach, the z-transform of the optimum LS (least squares) filter is constrained to be causal but is potentially of infinite duration; hence it has been defined by (Kailath, 1968) as

W(z) = A(z)B(z) = (1/H+(z)) [Φxd(z)/H−(z)]+    (1)
From equation (1), the filter may be regarded as a cascade of transfer functions A(z) and B(z), where Φxd(z) is the double-sided z-transform of the cross-correlation function between the desired signal and the reference signal. H+(z) and H−(z) are the spectral factors of the double-sided z-transform Φxx(z) of the auto-correlation of the reference signal. These spectral factors have the property that the inverse z-transform of H+(z) is entirely causal and minimum phase, while the inverse z-transform of H−(z) is noncausal. The + notation outside the bracket indicates that the z-transform of the causal part of the inverse z-transform of B(z) is being taken.
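Equation (1) gives the causal IIR solution via spectral factorization. As an illustrative finite-length counterpart, the FIR Wiener filter can be obtained by solving the normal equations with correlations estimated from data; this sketch uses a simple identification setup and is not the chapter's method:

```python
import numpy as np

def fir_wiener(x, d, N):
    """Finite-length (FIR) counterpart of the Wiener filter: solve the
    normal equations R w = p, where R is the autocorrelation matrix of the
    reference x and p is the cross-correlation between the desired signal
    d and the reference."""
    n = len(x)
    r = np.array([np.mean(x[k:] * x[:n - k]) for k in range(N)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    p = np.array([np.mean(d[k:] * x[:n - k]) for k in range(N)])
    return np.linalg.solve(R, p)
```

For a white reference passed through a short FIR path, the solution recovers the path coefficients, mirroring what the causal IIR solution of equation (1) would give in this special case.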