Arrowood, J., T. Randolph, and M.J.T. Smith. "Filter Bank Design."
Digital Signal Processing Handbook.
Ed. Vijay K. Madisetti and Douglas B. Williams.
Boca Raton: CRC Press LLC, 1999.
Filter Bank Design
Joseph Arrowood
Georgia Institute of Technology
Tami Randolph
Georgia Institute of Technology
Mark J.T. Smith
Georgia Institute of Technology
36.1 Filter Bank Equations
The AC Matrix • Spectral Factorization • Lattice Implementations • Time-Domain Design
36.2 Finite Field Filter Banks
36.3 Nonlinear Filter Banks
References
The interest in digital filter banks has grown dramatically over the last few years. Owing to the trend toward lower cost, higher speed microprocessors, digital solutions are becoming attractive for a wide variety of applications. Filter banks allow signals to be decomposed into subbands, often facilitating more efficient and effective processing. They are particularly visible in the areas of image compression, speech coding, and image analysis.

The desired characteristics of a subband decomposition will naturally vary from application to application. Moreover, within any given application, there are a myriad of issues to consider. First, one might consider whether to use FIR or IIR filters. IIR designs can offer computational advantages, while FIR designs can offer greater flexibility in filter characteristics. In this chapter we focus exclusively on FIR design. Second, one might identify the time-frequency or space-frequency representation that is most appropriate. Uniform decompositions and octave-band decompositions are particularly popular at present. At the next level, characteristics of the analysis filters should be defined. This involves imposing specifications on the analysis filter passband deviations, transition bands, and stopband deviations. Alternately or in addition, time domain characteristics may be imposed, such as limits on the step response ripples, and degree of regularity.

One can consider similar constraints for the synthesis filters. For coding applications, the characteristics of the synthesis filters often have a dominant effect on the subjective quality of the output. Finally, one should consider analysis-synthesis characteristics. That is, one has flexibility to specify the overall behavior of the system. In most cases, one views having exact reconstruction as being ideal. Occasionally, however, it may be possible to trade some small loss in reconstruction quality for significant gains in computation, speed, or cost. In addition to specifying the quality of reconstruction, it is generally possible to control the overall delay of the system from end to end. In some applications, such as two-way speech and video coding, latency represents a source of quality degradation. Thus, having explicit control over the analysis-synthesis delay can lead to improvement in quality.

The intelligent design of application-specific filter banks involves first identifying the relevant parameters and then optimizing the system with respect to them. As is typical, the filter bank analysis and reconstruction equations lead to complex tradeoffs among complexity, system delay, filter quality, filter length, and quality of performance. This chapter is devoted to presenting an introduction to filter bank design. Filter bank design has reached a state of maturity in many regards.
Trang 3FIGURE 36.1: Block diagram of anM-band analysis-synthesis filter bank.
FIGURE 36.2: Two-band analysis-synthesis filter bank
To cover all of the important contributions in any level of detail would be impossible in a single chapter. However, it is possible to gain some insight and appreciation for general design strategies germane to this topic. In addition to discussing design methodologies for linear analysis-synthesis systems, we also consider the design of a couple of new nonlinear classes of filter banks that are currently receiving attention in the literature. This discussion, along with the referenced articles, should provide a convenient introduction to the design of many useful filter banks.
36.1 Filter Bank Equations
A broad class of linear filter banks can be represented by the block diagram shown in Fig. 36.1. This is a linear time-varying system that decomposes the input into M subbands, each one of which is decimated by a factor of R. When R = M, the system is said to be critically sampled or maximally decimated. Maximally decimated systems are generally the ones of choice because they can be information preserving, and are not data expansive.

The simplest filter bank of this class is the two-band system, an example of which is shown in Fig. 36.2. Here, there are only two analysis filters: H_0(z), a lowpass filter; and H_1(z), a highpass filter. Similarly, there are two synthesis filters: a lowpass G_0(z), and a highpass G_1(z). Let us consider this two-band filter bank first. In the process, we will develop a design methodology that can be extended to the more complex problem of M-band systems.
Examining the two-band filter bank in Fig. 36.2, we see that the input x[n] is lowpass and highpass filtered, resulting in v_0[n] and v_1[n]. These signals are then downsampled by a factor of two, leading to the analysis section outputs, y_0[n] and y_1[n]. The downsampling operation is time varying, which implies a non-trivial relationship between v_k[n] and y_k[n] (where k = 0, 1). In general, downsampling a signal v_k[n] by an integer factor R is described in the time domain by the equation
$$y_k[n] = v_k[Rn].$$
In the frequency domain, this relationship is given by
$$Y_k\left(e^{j\omega}\right) = \frac{1}{R}\sum_{r=0}^{R-1} V_k\left(e^{j\frac{\omega + 2\pi r}{R}}\right).$$
The equivalent equation in the z domain is
$$Y_k(z) = \frac{1}{R}\sum_{r=0}^{R-1} V_k\left(W_R^r z^{\frac{1}{R}}\right),$$
where W_R^r = e^{-j2πr/R}.
In the synthesis section, the subband signals y_0[n] and y_1[n] are upsampled to give s_0[n] and s_1[n]. They are then filtered by the lowpass and highpass filters, G_0(z) and G_1(z), respectively, before being summed together. The upsampling operation (for an arbitrary positive integer R) can be defined by
$$s_k[n] = \begin{cases} y_k[n/R] & \text{for } n = 0, \pm R, \pm 2R, \pm 3R, \ldots \\ 0 & \text{otherwise} \end{cases}$$
in the time domain, and
$$S_k\left(e^{j\omega}\right) = Y_k\left(e^{jR\omega}\right) \qquad \text{and} \qquad S_k(z) = Y_k\left(z^R\right)$$
in the frequency and z domains, respectively.
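As a concrete illustration (not part of the original chapter), the following Python sketch implements the downsampling and upsampling operators defined above for an arbitrary positive integer R; the test signal is arbitrary.

```python
import numpy as np

def downsample(v, R):
    # y[n] = v[Rn]
    return v[::R]

def upsample(y, R):
    # s[n] = y[n/R] for n = 0, +-R, +-2R, ..., and zero otherwise
    s = np.zeros(len(y) * R, dtype=y.dtype)
    s[::R] = y
    return s

x = np.arange(8, dtype=float)
print(downsample(x, 2))                 # [0. 2. 4. 6.]
print(upsample(downsample(x, 2), 2))    # [0. 0. 2. 0. 4. 0. 6. 0.]
```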
Using the expressions for the downsampling and upsampling operations, we can describe the two-band filter bank in terms of z-domain equations. The outputs after analysis filtering are
$$V_k(z) = H_k(z)X(z), \qquad k = 0, 1.$$
After decimation, and recognizing that W_2^1 = −1, we obtain
$$Y_k(z) = \frac{1}{2}\left[H_k\left(z^{\frac{1}{2}}\right)X\left(z^{\frac{1}{2}}\right) + H_k\left(-z^{\frac{1}{2}}\right)X\left(-z^{\frac{1}{2}}\right)\right], \qquad k = 0, 1. \tag{36.1}$$
Thus, Eq. (36.1) completely defines the input-output relationship for the analysis section in the z domain.
In the synthesis section, the subbands are upsampled, giving
$$S_k(z) = Y_k\left(z^2\right), \qquad k = 0, 1.$$
This implies that
$$S_k(z) = \frac{1}{2}\left(H_k(z)X(z) + H_k(-z)X(-z)\right), \qquad k = 0, 1.$$
Passing S_k(z) through the synthesis filters and then summing yields the reconstructed output
$$\hat{X}(z) = \frac{1}{2}G_0(z)\left[H_0(z)X(z) + H_0(-z)X(-z)\right] + \frac{1}{2}G_1(z)\left[H_1(z)X(z) + H_1(-z)X(-z)\right]. \tag{36.2}$$
For virtually any application one can conceive, the synthesis filters should allow the input to be reconstructed exactly or with a minimal amount of distortion. In other words, ideally we want
$$\hat{X}(z) = z^{-n_0} X(z),$$
where n_0 is the integer system delay. An intuitive approach to handling this problem is to use the AC-matrix formulation, which we introduce next.
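Before turning to the AC matrix, here is a short Python sketch (not from the chapter) of the full analysis-synthesis chain of Fig. 36.2 and Eq. (36.2). The Haar-like filters below are an illustrative choice for which the chain happens to reconstruct the input exactly with a one-sample delay; they are not one of the designs discussed later.

```python
import numpy as np

def analysis_synthesis(x, h0, h1, g0, g1):
    # analysis: filter, then decimate by 2
    y0 = np.convolve(x, h0)[::2]
    y1 = np.convolve(x, h1)[::2]
    # synthesis: upsample by 2, filter, and sum
    s0 = np.zeros(2 * len(y0)); s0[::2] = y0
    s1 = np.zeros(2 * len(y1)); s1[::2] = y1
    return np.convolve(s0, g0) + np.convolve(s1, g1)

h0 = np.array([0.5, 0.5])        # lowpass analysis filter
h1 = np.array([0.5, -0.5])       # highpass analysis filter
g0 = np.array([1.0, 1.0])        # lowpass synthesis filter
g1 = np.array([-1.0, 1.0])       # highpass synthesis filter

x = np.random.randn(32)
x_hat = analysis_synthesis(x, h0, h1, g0, g1)
print(np.max(np.abs(x_hat[1:1 + len(x)] - x)))   # ~0: x_hat[n] = x[n - 1]
```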
36.1.1 The AC Matrix
The aliasing component matrix (or AC matrix) represents a simple and intuitive idea, originally introduced in [6], for handling analysis and reconstruction. The analysis-synthesis equation (36.2) for the two-band case can be expressed as
$$\hat{X}(z) = \frac{1}{2}\left[H_0(z)G_0(z) + H_1(z)G_1(z)\right]X(z) + \frac{1}{2}\left[H_0(-z)G_0(z) + H_1(-z)G_1(z)\right]X(-z).$$
The idea of the AC matrix is to represent the equations in matrix form. For the two-band system, this results in
$$\hat{X}(z) = \frac{1}{2}\left[X(z),\; X(-z)\right] \underbrace{\begin{bmatrix} H_0(z) & H_1(z) \\ H_0(-z) & H_1(-z) \end{bmatrix}}_{\text{AC matrix}} \begin{bmatrix} G_0(z) \\ G_1(z) \end{bmatrix},$$
where the AC matrix is as shown above. The AC matrix is so designated because it contains the analysis filters and all the associated aliasing components. Exact reconstruction is then obtained when
$$\begin{bmatrix} H_0(z) & H_1(z) \\ H_0(-z) & H_1(-z) \end{bmatrix} \begin{bmatrix} G_0(z) \\ G_1(z) \end{bmatrix} = \begin{bmatrix} T(z) \\ 0 \end{bmatrix},$$
where T(z) is required to be the scaled integer delay 2z^{-n_0}. The term T(z) is the transfer function of the overall system. The zero term below T(z) determines the amount of aliasing present in the reconstructed signal. Because this term is zero, all aliasing is explicitly removed.
With the equations expressed in matrix form, we can solve for the synthesis filters, which yields
$$\begin{bmatrix} G_0(z) \\ G_1(z) \end{bmatrix} = \frac{1}{H_0(z)H_1(-z) - H_0(-z)H_1(z)} \begin{bmatrix} H_1(-z) & -H_1(z) \\ -H_0(-z) & H_0(z) \end{bmatrix} \begin{bmatrix} T(z) \\ 0 \end{bmatrix}. \tag{36.3}$$
Often, for a variety of reasons, we would like both the analysis and synthesis filters to be FIR. This means the determinant of the AC matrix should be a constant delay. The earliest solution to the FIR filter bank problem was presented by Croisier et al. in 1976 [18]. Their solution was to let
$$H_1(z) = H_0(-z)$$
and
$$G_0(z) = H_0(z), \qquad G_1(z) = -H_0(-z).$$
This is the quadrature mirror filter (QMF) solution. From the equations in (36.3), it can be seen that this solution cancels all the aliasing and results in a system transfer function
$$T(z) = H_0(z)H_1(-z) - H_0(-z)H_1(z).$$
As it turns out, with careful design T(z) can be made close to a constant delay. However, some amount of distortion will always be present. In 1980, Johnston designed a set of optimized QMFs which are now widely used. The coefficient values may be found in several sources [16, 17, 19]. Interestingly, the equations in (36.3) imply that exact reconstruction is possible by forcing the AC-matrix determinant to be a constant delay. The design of such exact reconstruction filters is discussed in the next section.
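Before moving on, the QMF relationships above are easy to verify numerically. The Python sketch below is not from the chapter, and the prototype is an arbitrary lowpass filter rather than Johnston's optimized design; it checks that the alias term H_0(−z)G_0(z) + H_1(−z)G_1(z) vanishes while T(z) is not a pure delay.

```python
import numpy as np

h0 = np.array([0.1, 0.3, 0.4, 0.3, 0.1])   # arbitrary lowpass prototype H0(z)
alt = (-1.0) ** np.arange(len(h0))         # multiplying by (-1)^n gives H(-z)

h1 = h0 * alt          # H1(z) = H0(-z)
g0 = h0                # G0(z) = H0(z)
g1 = -h1               # G1(z) = -H0(-z)

# alias term: H0(-z)G0(z) + H1(-z)G1(z)
alias = np.convolve(h0 * alt, g0) + np.convolve(h1 * alt, g1)
# system transfer function: T(z) = H0(z)H1(-z) - H0(-z)H1(z)
T = np.convolve(h0, h1 * alt) - np.convolve(h0 * alt, h1)

print(np.max(np.abs(alias)))   # ~0: aliasing is cancelled
print(T)                       # only odd powers survive; not a pure delay, so some distortion remains
```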
FIGURE 36.3: Example of a zero-phase half-band lowpass filter.
36.1.2 Spectral Factorization
The question at hand is how to determine H_0(z) and H_1(z) such that T(z) is an integer delay z^{-n_0}. A solution to this problem was introduced in 1984 [7], based on the observation that H_0(z)H_1(−z) is a lowpass filter [which we denote F_0(z)] and H_0(−z)H_1(z) is its corresponding frequency-shifted highpass filter. A unity transfer function can be constructed by forcing F_0(z) and F_0(−z) to be complementary half-band lowpass and highpass filters. Many fine techniques are available for the design of half-band lowpass filters, such as the Parks-McClellan algorithm, Kaiser window design, Hamming window design, the eigenfilter method, and others. Zero-phase half-band filters have the property that zeros occur in the impulse response at n = ±2, ±4, ±6, etc. An illustration is shown in Fig. 36.3. Once designed, F_0(z) can be factored into two lowpass filters, H_0(z) and H_1(−z). The design procedure can be summarized as follows; a code sketch of these steps appears after the list.
1. First design a (2N − 1)-tap half-band lowpass filter, using the Parks-McClellan algorithm, for example. This can be done by constraining the passband and stopband cutoff frequencies to be ω_p = π − ω_s, and using equal passband and stopband error weightings. The resulting filter will have equal passband and stopband ripples, i.e., δ_p = δ_s = δ.
2. Add the value δ to the f[0] (center) tap value. This forces F(e^{jω}) ≥ 0 for all ω.
3. Spectrally factor F(z) into two lowpass filters, H_0(z) and H_1(−z). Generally the best way to factor F(z) is such that H_1(−z) = H_0(z^{-1}). Note that the factorization will not be unique, and the roots should be split so that if a particular root is assigned to H_0(z), its reciprocal is given to H_0(z^{-1}).
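Here is the referenced code sketch of the three steps, using SciPy's Parks-McClellan routine. The filter length, band edge, and the naive root-splitting step are illustrative assumptions for this sketch, not the published Smith-Barnwell design.

```python
import numpy as np
from scipy.signal import remez, freqz

N = 8                      # desired length of H0(z)
L = 2 * N - 1              # 15-tap zero-phase half-band prototype F(z)
fp = 0.20                  # passband edge (normalized, fs = 1); stopband edge = 0.5 - fp

# Step 1: equiripple half-band lowpass with equal passband/stopband weighting.
f = remez(L, [0, fp, 0.5 - fp, 0.5], [1, 0], fs=1)

# Step 2: measure the ripple delta and add it to the center tap so F(e^jw) >= 0.
w, F = freqz(f, worN=8192, fs=1)
delta = np.abs(F[w >= 0.5 - fp]).max()       # stopband ripple (= passband ripple)
f[L // 2] += delta

# Step 3: spectral factorization.  The symmetric F(z) has roots in reciprocal
# pairs; keep the smaller-magnitude root of each pair for H0(z), leaving its
# reciprocal for H0(z^-1).  (A production design would pair the roots
# explicitly; sorting by magnitude is a simplification.)
r = np.roots(f)
r = r[np.argsort(np.abs(r))]
h0 = np.real(np.poly(r[:N - 1]))             # N-tap spectral factor
h0 *= np.sqrt(f[L // 2] / np.sum(h0 ** 2))   # center tap of F equals the energy of h0

# Sanity check: the autocorrelation of h0 should reproduce the lifted F(z).
print(np.max(np.abs(np.convolve(h0, h0[::-1]) - f)))
```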
The result of the above procedure is that H_0(z) will be a power complementary, even length, FIR filter that will form the basis for a perfect reconstruction filter bank. Note that since H_1(z) is just a time-reversed, spectrally shifted version of H_0(z),
$$\left|H_0\left(e^{j\omega}\right)\right| = \left|H_1\left(-e^{j\omega}\right)\right|.$$
Smith and Barnwell designed and published a set of optimal exact reconstruction filters [1]. The filter coefficients for H_0(z) are given in Table 36.1. The analysis and synthesis filters are obtained from H_0(z) by
$$G_0(z) = H_0\left(z^{-1}\right), \qquad G_1(z) = H_0(-z), \qquad H_1(z) = H_0\left(-z^{-1}\right).$$
A complete discussion of this approach can be found in many references [1, 6, 7, 25, 27, 28].
Trang 7TABLE 36.1 CQF (Smith-Barnwell) Filter Bank Coefficients with 40dB Attenuation
32-Tap filter 16-Tap filter
8.494372478233170D−03 2.193598203004352D−02
−9.617816873474045D−05 1.578616497663704D−03
−8.795047132402801D−03 −6.025449102875281D−02 7.087795490845020D−04 −1.189065962053910D−02 1.220420156035413D−02 0.137537915636625D+00
−1.762639314795336D−03 5.745450056390939D−02
−1.558455903573829D−02 −0.321670296165893D+00 4.082855675060479D−03 −0.528720271545339D+00 1.765222024089335D−02 −0.295779674500919D+00
−8.385219782884901D−03 2.043110845170894D−04
−1.674761388473688D−02 2.906699709446796D−02 1.823906210869841D−02 −3.533486088708146D−02 5.781735813341397D−03 −6.821045322743358D−03
−4.692674090907675D−02 2.606678468264118D−02 5.725005445073179D−02 1.033363491944126D−03 0.354522945953839D+00 −1.435930957477529D−02 0.504811839124518D+00
0.264955363281817D+00
−8.329095161140063D−02
−0.139108747584926D+00 3.314036080659188D−02 9.035938422033127D−02
−1.468791729134721D−02 8-Tap filter
−6.103335886707139D−02 6.606122638753900D−03 3.489755821785150D−02 4.051555088035685D−02 −1.098301946252854D−02
−2.631418173168537D−03 −6.286453934951963D−02
−2.592580476149722D−02 0.223907720892568D+00 9.319532350192227D−04 0.556856993531445D+00 1.535638959916169D−02 0.357976304997285D+00
−1.196832693326184D−04 −2.390027056113145D−02
−1.057032258472372D−02 −7.594096379188282D−02
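As a quick consistency check (a sketch, not from the chapter), the 8-tap coefficients in Table 36.1 should exhibit the power complementary property |H_0(e^{jω})|² + |H_0(−e^{jω})|² = constant that underlies exact reconstruction.

```python
import numpy as np
from scipy.signal import freqz

# 8-tap CQF coefficients from Table 36.1
h0 = np.array([ 3.489755821785150e-02, -1.098301946252854e-02,
               -6.286453934951963e-02,  2.23907720892568e-01,
                5.56856993531445e-01,   3.57976304997285e-01,
               -2.390027056113145e-02, -7.594096379188282e-02])

w, H = freqz(h0, worN=2048)
_, Hs = freqz(h0 * (-1.0) ** np.arange(len(h0)), worN=2048)   # H0(-z)
p = np.abs(H) ** 2 + np.abs(Hs) ** 2
print(p.min(), p.max())   # both ~1: essentially constant across frequency
```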
For the M-channel case shown in Fig. 36.1, where the bands are assumed to be maximally decimated, the same AC-matrix approach can be employed, leading to the equations
$$\hat{X}(z) = \frac{1}{M}\underbrace{\left[X(z),\, X\!\left(zW_M\right),\, \ldots,\, X\!\left(zW_M^{M-1}\right)\right]}_{\mathbf{x}^T} \underbrace{\begin{bmatrix} H_0(z) & \cdots & H_{M-1}(z) \\ H_0\!\left(zW_M^1\right) & \cdots & H_{M-1}\!\left(zW_M^1\right) \\ \vdots & & \vdots \\ H_0\!\left(zW_M^{M-1}\right) & \cdots & H_{M-1}\!\left(zW_M^{M-1}\right) \end{bmatrix}}_{\mathbf{H}} \underbrace{\begin{bmatrix} G_0(z) \\ G_1(z) \\ \vdots \\ G_{M-1}(z) \end{bmatrix}}_{\mathbf{g}},$$
where W_M = e^{-j2π/M}. This can be rewritten compactly as
$$\hat{X}(z) = \frac{1}{M}\mathbf{x}^T(z)\mathbf{H}(z)\mathbf{g}(z),$$
where x is the input vector, g is the synthesis filter vector, and H is the AC matrix. However, the AC-matrix determinant for systems with M > 2 is typically too intricate for the spectral factorization approach outlined above. An effective approach for handling the design of M-band systems was introduced by Vaidyanathan in [30]. It is based on a lattice implementation structure and is discussed next.
FIGURE 36.4: Flow graph of a two-band lattice structure with three stages.
36.1.3 Lattice Implementations
In addition to the direct form structures shown in Figs. 36.1 and 36.2, filter banks can be implemented using lattice structures. For simplicity, consider the two-band case first. An example of a lattice structure for a two-band analysis system is shown in Fig. 36.4. It is composed of a cascade of criss-cross elements, each of which has a set of coefficients associated with it. Conveniently, each section, which we denote R_m, can be described by a matrix. For the two-band lattice, these matrices have the form
$$R_m = \begin{bmatrix} 1 & r_m \\ -r_m & 1 \end{bmatrix}.$$
Interspersed between the coefficient matrices are delay matrices, Λ(z), having the form
$$\Lambda(z) = \begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}.$$
It can be shown [27] that lattice filters can represent a wide class of exact reconstruction filter banks. Two points regarding lattice filter banks are particularly noteworthy. First, the lattice structure provides an efficient form of implementation. Moreover, the synthesis filter bank is directly related to the analysis bank, since each matrix in the analysis cascade is invertible. Consequently, the synthesis bank consists of the cascade of inverse section matrices. Second, the structure also provides a convenient way to design the filter bank. Each lattice coefficient can be optimized using standard minimization routines to minimize a passband-stopband error cost function for the filters. This approach to design can be used for two-band as well as M-band filter banks [5, 27, 28].
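A Python sketch of how such a cascade can be run on data is shown below. It follows the general form of Fig. 36.4, but the exact placement of delays and any scaling used in the figure are assumptions of this sketch. Each stage applies the criss-cross matrix R_m; a one-sample delay on the lower branch implements Λ(z) between stages, and the synthesis bank would apply the inverse matrices in reverse order.

```python
import numpy as np

def lattice_analysis(x, r):
    # split the input into even/odd polyphase components
    y0 = x[0::2].astype(float)
    y1 = x[1::2].astype(float)
    for m, rm in enumerate(r):
        # criss-cross section R_m = [[1, r_m], [-r_m, 1]]
        y0, y1 = y0 + rm * y1, -rm * y0 + y1
        if m < len(r) - 1:
            # delay matrix Lambda(z): delay the lower branch by one sample
            y1 = np.concatenate(([0.0], y1[:-1]))
    return y0, y1

x = np.random.randn(16)
sub0, sub1 = lattice_analysis(x, r=[0.5, -0.25, 0.1])   # three stages, arbitrary r_m
print(sub0.shape, sub1.shape)
```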
36.1.4 Time-Domain Design
One of the most flexible design approaches is the time domain formulation proposed by Nayebi et al. [3, 8]. This formulation has enabled the discovery of previously unknown classes of filter banks, such as low and variable delay systems [12], time-varying filter banks [4], and block decimation systems [9]. It is attractive because it enables the design of virtually all linear filter banks. The idea underlying this approach is that the conditions for exact reconstruction can be expressed in the time domain in a convenient matrix form. Let us explore this approach in the context of an M-band filter bank. Because of the decimation operations, the overall M-band analysis-synthesis system is periodically time-varying. Thus, we can view an arbitrary maximally decimated M-band system as having M linear time invariant transfer functions associated with it. One can think of the problem as trying to devise M subsampled systems, each one of which exactly reconstructs. This is equivalent to saying that for each impulse input, δ[n − i], to the analysis-synthesis system, that impulse should appear at the system output at time n = i + n_0, where i = 0, 1, 2, ..., M − 1 and n_0 is the system delay.

This amounts to setting up an overconstrained linear system AS = B, where the matrix A is created using the analysis filter coefficients, the matrix B is the desired response of zeros except at the appropriate delay points (i.e., δ[n − n_0]), and S is a matrix containing synthesis filter coefficients.
Particular linear combinations of analysis and synthesis filter coefficients occur at different points in time for different input impulses. The idea is to make A, S, and B such that they describe completely all M transfer functions that comprise the periodically time-varying system.

The matrix A is a matrix of filter coefficients and zeros that effectively describes the decimated convolution operations inherent in the filter bank. For convenience, we express the analysis coefficients as a matrix h, where
$$h = \begin{bmatrix} h_0[0] & h_1[0] & \cdots & h_{M-1}[0] \\ h_0[1] & h_1[1] & \cdots & h_{M-1}[1] \\ \vdots & \vdots & & \vdots \\ h_0[N-1] & h_1[N-1] & \cdots & h_{M-1}[N-1] \end{bmatrix}.$$
The zeros are represented by an M × M matrix of zeros, denoted O_M. With these terms, we can write the (2N − M) × N matrix A,
$$A = \begin{bmatrix} h & O_M & \cdots & O_M \\ O_M & h & \cdots & O_M \\ \vdots & \vdots & \ddots & \vdots \\ O_M & O_M & \cdots & h \end{bmatrix}.$$
The synthesis filters S can be expressed most conveniently in terms of the M × M matrix
$$Q_i = \begin{bmatrix} g_0[i] & g_0[i+1] & \cdots & g_0[i+M-1] \\ g_1[i] & g_1[i+1] & \cdots & g_1[i+M-1] \\ \vdots & \vdots & & \vdots \\ g_{M-1}[i] & g_{M-1}[i+1] & \cdots & g_{M-1}[i+M-1] \end{bmatrix},$$
where i = 0, 1, ..., L − 1 and N is assumed to be equal to LM. The synthesis matrix S is then given by
$$S = \begin{bmatrix} Q_0 \\ Q_M \\ \vdots \\ Q_{iM} \\ \vdots \\ Q_{(L-1)M} \end{bmatrix}.$$
Finally, to achieve exact reconstruction we want the impulse responses associated with each of the M constituent transfer functions in the periodically time-varying system to be an impulse. Therefore, B is a matrix of zero-element column vectors, each with a single "one" at the location of the particular transfer function group delay. More specifically, the matrix has the form
$$B = \begin{bmatrix} O_M \\ \vdots \\ O_M \\ J_M \\ O_M \\ \vdots \\ O_M \end{bmatrix},$$
where J_M is the M × M antidiagonal identity matrix
$$J_M = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & \cdots & 0 & 0 \end{bmatrix}.$$
It is important to mention here that the location of J_M within the matrix B is a system design issue. The case shown here, where it is centered within B, corresponds to an overall system delay of N − 1. This is the natural case for systems with N-tap filters. There are many fine points associated with these time domain conditions. For a complete discussion, the reader is referred to [3].
With the reconstruction equations in place, we now turn our attention to the design of the filters. The problem here is that this is an over-constrained system. The matrix A is of size (2N − M) × N. If we think of the synthesis filter coefficients as the parameters to be solved for, we find M(2N − M) equations and MN unknowns. Clearly, the best we can hope for is to determine B in an approximate sense. Using least-squares approximation, we let
$$S = \left(A^T A\right)^{-1} A^T B.$$
Here, it is assumed that (A^T A)^{-1} exists. This is not automatically the case. However, if reasonable lowpass and highpass filters are used as an initial starting point, there is rarely a problem.
This solution gives the best synthesis filter set for a particular analysis set and system delay N − 1. The resulting matrix B̂ = AS will be close to B but not, in general, equal to it. The next step in the design is to allow the analysis filter coefficients to vary in an optimization routine to reduce the Frobenius matrix norm ||B̂ − B||²_F. The locally optimal solution will be
$$S = \left(A^T A\right)^{-1} A^T B, \quad \text{such that } \left\|\hat{B} - B\right\|_F^2 \text{ is minimized.}$$
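The inner computation of this design loop is plain linear algebra, as the generic Python sketch below shows. Here A is a random stand-in for the block matrix built from the analysis coefficients (constructing the true A requires the block structure described above); the dimensions and the centered J_M block follow the text.

```python
import numpy as np

M, N = 2, 8                                  # two bands, 8-tap filters
rng = np.random.default_rng(0)

A = rng.standard_normal((2 * N - M, N))      # (2N - M) x N analysis matrix (stand-in)
B = np.zeros((2 * N - M, M))                 # desired response: J_M centered in B
c = (2 * N - M) // 2
B[c - 1:c + 1, :] = np.eye(M)[::-1]          # J_M, the M x M antidiagonal identity

S = np.linalg.solve(A.T @ A, A.T @ B)        # S = (A^T A)^{-1} A^T B
B_hat = A @ S
err = np.linalg.norm(B_hat - B, ord='fro') ** 2
print(err)                                   # the quantity the outer search minimizes
```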
Any number of routines may be used to find this minimum. A simple gradient search that updates the analysis filter coefficients will suffice in most cases. Note that, as written, there are no constraints on the analysis filters other than that they provide an invertible A^T A matrix. One can easily start imposing constraints relevant to system quality. Most often we find it appropriate to include constraints on the frequency domain characteristics of the individual analysis filters. This can be done conveniently by creating a cost function comprised of the passband and stopband filter errors. For example, in the two-band case, inclusion of such filter frequency constraints gives rise to the overall error function
$$\epsilon = \left\|\hat{B} - B\right\|_F^2 + \int_0^{\omega_p}\left(1 - \left|H_1\left(e^{j\omega}\right)\right|\right)^2 d\omega + \int_{\omega_s}^{\pi}\left|H_0\left(e^{j\omega}\right)\right|^2 d\omega.$$
This reduces the overall system error of the filter bank while at the same time reducing the stopband errors in the analysis filters. Other options in constructing the error function can address control over the step response of the filters, the width of the transition bands, and whether an l_2 norm or an l_∞ norm is used as an optimality criterion.
By properly weighting the reconstruction and frequency response terms in the error function, exact reconstruction can be obtained, if such a solution exists. If an exact reconstruction solution does not exist, the design algorithm will find the locally optimal solution subject to the specified constraints.