Digital Image Processing: Image Restoration, Matrix Formulation (Duong Anh Duc). The slides cover the matrix formulation of the image restoration problem; constrained least squares filtering (restoration); a brief review of matrix differentiation; pseudo-inverse filtering; the minimum mean square error (Wiener) filter; and the parametric Wiener filter.
Slide 1: Digital Image Processing
Image Restoration: Matrix Formulation
Slide 2
We will assume that the arrays f and h have been zero-padded to be of size M, where M >= length(f) + length(h) - 1. Henceforth, we will not explicitly mention the zero-padding.
The degradation equation

g(x) = \sum_{m=0}^{M-1} f(m)\, h(x - m) + n(x), \qquad x = 0, 1, \ldots, M-1,

can be written in matrix-vector form as follows:
g = Hf + n, where
Slide 3: Matrix Formulation of the Image Restoration Problem
H is the M x M matrix whose (x, m) entry is h(x - m); because of the zero-padding, the entries whose argument falls outside 0, ..., M-1 vanish:

H = \begin{bmatrix}
h(0) & 0 & 0 & \cdots & 0 \\
h(1) & h(0) & 0 & \cdots & 0 \\
h(2) & h(1) & h(0) & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
h(M-1) & h(M-2) & h(M-3) & \cdots & h(0)
\end{bmatrix}
Slide 4: Matrix Formulation of the Image Restoration Problem
However, since the arrays f and h are zero-padded, we can equivalently set h(x - m) = h(x - m + M) whenever x - m < 0, i.e., treat h as periodic with period M. Then

H = \begin{bmatrix}
h(0) & h(M-1) & h(M-2) & \cdots & h(1) \\
h(1) & h(0) & h(M-1) & \cdots & h(2) \\
h(2) & h(1) & h(0) & \cdots & h(3) \\
\vdots & & & \ddots & \vdots \\
h(M-1) & h(M-2) & h(M-3) & \cdots & h(0)
\end{bmatrix}

Notice that this (second) matrix H is circulant, i.e., each row of H is a circular shift of the previous row.
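As a quick sanity check of this construction, here is a minimal numpy sketch (not part of the slides; the signal and kernel values are illustrative) that builds the circulant H from a zero-padded h and verifies that Hf reproduces the ordinary linear convolution h * f:

```python
import numpy as np

f_orig = np.array([1.0, 2.0, 3.0])        # original signal, length 3
h_orig = np.array([0.5, 0.25, 0.25])      # blur kernel, length 3
M = len(f_orig) + len(h_orig) - 1         # M >= length(f) + length(h) - 1

f = np.pad(f_orig, (0, M - len(f_orig)))  # zero-padded signal
h = np.pad(h_orig, (0, M - len(h_orig)))  # zero-padded kernel

# Circulant H: H[i, j] = h[(i - j) % M], i.e. column j is h circularly shifted by j
H = np.column_stack([np.roll(h, j) for j in range(M)])

g_matrix = H @ f                          # matrix-vector form of the (noise-free) degradation
g_linear = np.convolve(f_orig, h_orig)    # ordinary linear convolution, length M

# With zero-padding, circular convolution equals linear convolution
assert np.allclose(g_matrix, g_linear)
```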
Slide 5: Matrix Formulation of the Image Restoration Problem
(Numerical example of the product Hf for short zero-padded vectors h and f.)
Slide 6: Matrix Formulation of the Image Restoration Problem
Let H1 denote the convolution matrix built directly from the zero-padded h, with zeros wherever the argument of h falls outside 0, ..., M-1:

H_1 = \begin{bmatrix}
h(0) & 0 & 0 & \cdots & 0 \\
h(1) & h(0) & 0 & \cdots & 0 \\
h(2) & h(1) & h(0) & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
h(M-1) & h(M-2) & h(M-3) & \cdots & h(0)
\end{bmatrix}
Slide 7: Matrix Formulation of the Image Restoration Problem
H2 is the corresponding circulant matrix, obtained by filling the zeros of H1 with the wrapped samples of h:

H_2 = \begin{bmatrix}
h(0) & h(M-1) & h(M-2) & \cdots & h(1) \\
h(1) & h(0) & h(M-1) & \cdots & h(2) \\
\vdots & & & \ddots & \vdots \\
h(M-1) & h(M-2) & h(M-3) & \cdots & h(0)
\end{bmatrix}

Notice that H1 f = H2 f. Indeed:
Slide 8: Matrix Formulation of the Image Restoration Problem
Writing out H1 f and H2 f for the zero-padded f (whose trailing entries are zero) shows that the entries in which H1 and H2 differ multiply only those zero entries, so the two products are identical: H1 f = H2 f.
Henceforth, we will use H = H2, so that we can apply the properties of circulant matrices.
Slide 9
In two dimensions the same construction applies: with the zero-padded arrays of size M x N, the degradation equation can again be written as g = Hf + n, where
Slide 10: Matrix Formulation of the Image Restoration Problem
the 2-D arrays are stacked into MN x 1 column vectors:

f = \begin{bmatrix} f(0,0) \\ f(0,1) \\ \vdots \\ f(0,N-1) \\ f(1,0) \\ \vdots \\ f(M-1,N-1) \end{bmatrix}, \qquad
n = \begin{bmatrix} n(0,0) \\ n(0,1) \\ \vdots \\ n(0,N-1) \\ n(1,0) \\ \vdots \\ n(M-1,N-1) \end{bmatrix}, \qquad
g = \begin{bmatrix} g(0,0) \\ g(0,1) \\ \vdots \\ g(0,N-1) \\ g(1,0) \\ \vdots \\ g(M-1,N-1) \end{bmatrix}
Slide 11: Matrix Formulation of the Image Restoration Problem
Note that H is an MN x MN block-circulant matrix, arranged as an M x M array of blocks:

H = \begin{bmatrix}
H_0 & H_{M-1} & H_{M-2} & \cdots & H_1 \\
H_1 & H_0 & H_{M-1} & \cdots & H_2 \\
H_2 & H_1 & H_0 & \cdots & H_3 \\
\vdots & & & \ddots & \vdots \\
H_{M-1} & H_{M-2} & H_{M-3} & \cdots & H_0
\end{bmatrix}
Slide 12: Matrix Formulation of the Image Restoration Problem
Each block H_j is itself an N x N circulant matrix. Indeed, H_j is the circulant matrix formed from the j-th row of the array h(m,n):

H_j = \begin{bmatrix}
h(j,0) & h(j,N-1) & h(j,N-2) & \cdots & h(j,1) \\
h(j,1) & h(j,0) & h(j,N-1) & \cdots & h(j,2) \\
h(j,2) & h(j,1) & h(j,0) & \cdots & h(j,3) \\
\vdots & & & \ddots & \vdots \\
h(j,N-1) & h(j,N-2) & h(j,N-3) & \cdots & h(j,0)
\end{bmatrix}
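The same check can be run in 2-D. The sketch below (illustrative sizes and PSF, not from the slides) assembles the MN x MN block-circulant H from the rows of a zero-padded h(m,n) and confirms that Hf equals the 2-D circular convolution of h and f:

```python
import numpy as np

M, N = 4, 4
rng = np.random.default_rng(0)
f = rng.random((M, N))                      # (already zero-padded) image
h = np.zeros((M, N)); h[:2, :2] = 0.25      # (already zero-padded) 2x2 box blur

def circulant(row):
    """N x N circulant matrix with C[a, b] = row[(a - b) % N]."""
    return np.column_stack([np.roll(row, k) for k in range(len(row))])

blocks = [circulant(h[j]) for j in range(M)]                 # H_0 ... H_{M-1}
H = np.block([[blocks[(i - j) % M] for j in range(M)]        # block H_{(i-j) mod M} at block (i, j)
              for i in range(M)])

g_matrix = (H @ f.reshape(-1)).reshape(M, N)                 # g = Hf, with f stacked row by row
g_circ = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))  # 2-D circular convolution h * f

assert np.allclose(g_matrix, g_circ)
```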
Slide 13: Matrix Formulation of the Image Restoration Problem
Given the degradation equation g = Hf + n, our objective is to recover f from the observation g.
We will assume that the array h(m,n) (usually referred to as the blurring function) and the statistics of the noise n(m,n) are known. The problem becomes much more complicated if the array h(m,n) is unknown; this case is usually referred to as blind restoration or blind deconvolution.
Slide 14: Matrix Formulation of the Image Restoration Problem
Notice that, even if there were no noise (i.e., n(m,n) = 0), or the values of n(m,n) were exactly known, and the matrix H were invertible, computing

f̂ = H^{-1}(g - n)

directly would not be practical.
Example: Suppose M = N = 256. Then MN = 65536, and H would be a 65536 x 65536 matrix to be inverted!
Slide 15: Matrix Formulation of the Image Restoration Problem
Naturally, direct inversion of H would not be feasible. But H has several useful properties; in particular:
H is block circulant.
H is usually sparse (has very few non-zero entries).
We will exploit these properties to obtain f̂ more efficiently.
In particular, we will derive the theoretical solutions to the restoration problem using matrix algebra. However, when it comes to implementing the solution, we can resort to the Fourier domain, thanks to the properties of circulant matrices.
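The key property being exploited is that a circulant matrix is diagonalized by the DFT, so its eigenvalues are simply the DFT of its defining kernel. A small illustrative sketch (assumed kernel values, not from the slides):

```python
import numpy as np

h = np.array([0.5, 0.25, 0.0, 0.25])                    # assumed length-4 (symmetric) kernel
M = len(h)
H = np.column_stack([np.roll(h, j) for j in range(M)])  # circulant: H[i, j] = h[(i - j) % M]

# Eigenvalues of the circulant H are the DFT coefficients of h
eig_H = np.sort_complex(np.linalg.eigvals(H))
dft_h = np.sort_complex(np.fft.fft(h))
assert np.allclose(eig_H, dft_h)

# Consequently Hf can be computed with FFTs instead of forming the huge matrix
f = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(H @ f, np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f))))
```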
Slide 16: Constrained Least Squares Filtering (Restoration)
Recall that knowledge of the blur function h(m,n) is essential to obtain a meaningful solution to the restoration problem.
Often, knowledge of h(m,n) is not perfect and is subject to errors.
One way to alleviate the sensitivity of the result to errors in h(m,n) is to base the optimality of the restoration on a measure of smoothness, such as the second derivative of the image.
Slide 17: Constrained Least Squares Filtering (Restoration)
We will approximate the second derivative (Laplacian) by a matrix Q. Indeed, we will first formulate the constrained restoration problem and obtain its solution in terms of a general matrix Q.
Slide 18: Constrained Least Squares Filtering (Restoration)
Later, different choices of the matrix Q will be considered, each giving rise to a different restoration filter.
Suppose Q is any matrix (of appropriate dimension). In constrained image restoration, we choose f̂ to minimize ||Qf̂||^2, subject to the constraint
||g - Hf̂||^2 = ||n||^2.
(Recall the degradation equation g = Hf + n, so that g - Hf = n.)
Slide 19: Constrained Least Squares Filtering (Restoration)
Introduction of the matrix Q allows considerable flexibility in the design of appropriate restoration filters (we will discuss specific choices of Q later). So our problem is formulated as follows:
minimize ||Qf̂||^2
subject to ||g - Hf̂||^2 = ||n||^2, or ||g - Hf̂||^2 - ||n||^2 = 0.
Slide 20: A Brief Review of Matrix Differentiation
If f(x_1, x_2) = x_1^2 + x_2^2 = x^T x, with x = (x_1, x_2)^T, then

\frac{\partial f}{\partial x} = \begin{bmatrix} \partial f / \partial x_1 \\ \partial f / \partial x_2 \end{bmatrix} = \begin{bmatrix} 2 x_1 \\ 2 x_2 \end{bmatrix} = 2x.
Slide 21: A Brief Review of Matrix Differentiation
If
f(x_1, x_2) = ||Ax - b||^2 = (Ax - b)^T (Ax - b)
for some matrix A and some vector b, then

\frac{\partial f}{\partial x} = 2 A^T (Ax - b),

where the superscript T denotes matrix transpose.
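A quick numerical check of this gradient formula, using illustrative random A, b, and x (not from the slides), compares it against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 3)); b = rng.random(5); x = rng.random(3)

analytic = 2 * A.T @ (A @ x - b)               # the formula above

eps = 1e-6
numeric = np.array([
    (np.sum((A @ (x + eps * e) - b) ** 2) - np.sum((A @ (x - eps * e) - b) ** 2)) / (2 * eps)
    for e in np.eye(3)                          # perturb one coordinate at a time
])
assert np.allclose(analytic, numeric, atol=1e-5)
```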
Slide 22: A Brief Review of Matrix Differentiation
Recall from calculus that such a constrained minimization problem can be solved by means of Lagrange multipliers. We need to minimize the augmented objective function J(f̂):
J(f̂) = ||Qf̂||^2 + λ (||g - Hf̂||^2 - ||n||^2), where λ is a Lagrange multiplier.
We set the derivative of J(f̂) with respect to f̂ to zero:

\frac{\partial J}{\partial \hat{f}} = 2 Q^T Q \hat{f} - 2 \lambda H^T (g - H \hat{f}) = 0,

which gives (Q^T Q + λ H^T H) f̂ = λ H^T g.
Slide 23: A Brief Review of Matrix Differentiation
Solving for f̂ gives

f̂ = (H^T H + γ Q^T Q)^{-1} H^T g, \qquad γ = 1/λ,

where γ must be chosen so that the solution satisfies the constraint ||g - Hf̂||^2 = ||n||^2.
We will now use the above formulation to derive a number of restoration filters.
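As a sketch of what this closed form looks like in practice, the snippet below solves the normal equations on a tiny synthetic problem; the sizes, the choice Q = I, and the value of γ are illustrative assumptions (γ would normally be tuned so that ||g - Hf̂||^2 = ||n||^2):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8
H = rng.random((M, M))                  # stand-in degradation matrix
Q = np.eye(M)                           # simplest choice of Q; the Laplacian C is used later
f_true = rng.random(M)
g = H @ f_true + 0.01 * rng.standard_normal(M)

gamma = 0.05                            # illustrative regularization weight
f_hat = np.linalg.solve(H.T @ H + gamma * Q.T @ Q, H.T @ g)
print(np.linalg.norm(f_hat - f_true))   # restoration error on this toy problem
```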
Slide 25: Pseudo-inverse Filtering
The pseudo-inverse filter can be implemented in the Fourier domain by the following equation:
F̂(u,v) = R(u,v) G(u,v), where

R(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \varepsilon}.

The parameter ε is a constant to be chosen. Note that ε = 0 gives us back the inverse filter,

R(u,v) = \frac{1}{H(u,v)} = \frac{H^*(u,v)}{|H(u,v)|^2}.

For ε > 0, the denominator of R(u,v) is strictly positive and the pseudo-inverse filter is well defined.
Slide 26: Pseudo-inverse Filtering Example
The blurring function is a 5 x 5 uniform (averaging) mask:

h(m,n) = \frac{1}{25} \begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}
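A minimal frequency-domain sketch of pseudo-inverse filtering with this 5 x 5 averaging blur; the image, noise level, and value of ε are illustrative assumptions, and the degradation is modeled as circular convolution:

```python
import numpy as np

rng = np.random.default_rng(3)
M = N = 64
f = rng.random((M, N))                         # stand-in image

h = np.zeros((M, N)); h[:5, :5] = 1.0 / 25.0   # zero-padded 5x5 uniform blur
H = np.fft.fft2(h)

# Degradation in the Fourier domain: G = H F + (noise spectrum)
G = H * np.fft.fft2(f) + np.fft.fft2(0.01 * rng.standard_normal((M, N)))

eps = 1e-3                                     # pseudo-inverse parameter (eps = 0 -> inverse filter)
R = np.conj(H) / (np.abs(H) ** 2 + eps)
f_hat = np.real(np.fft.ifft2(R * G))           # restored image
```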
Slide 27: Pseudo-inverse Filtering
(Restored image f̂(x,y).)
Slide 28: Pseudo-inverse Filtering
(Restored image f̂(x,y).)
Slide 29: Minimum Mean Square Error (Wiener) Filter
This is a restoration technique based on the statistics (mean and correlation) of the image and of the noise.
We consider each element of f and n as a random variable, and define the correlation matrices

R_f = E\{f f^T\} = \begin{bmatrix}
E\{f_0 f_0\} & E\{f_0 f_1\} & \cdots & E\{f_0 f_{MN-1}\} \\
E\{f_1 f_0\} & E\{f_1 f_1\} & \cdots & E\{f_1 f_{MN-1}\} \\
\vdots & & \ddots & \vdots \\
E\{f_{MN-1} f_0\} & E\{f_{MN-1} f_1\} & \cdots & E\{f_{MN-1} f_{MN-1}\}
\end{bmatrix}
Slide 30: Minimum Mean Square Error (Wiener) Filter
and, similarly for the noise,

R_n = E\{n n^T\} = \begin{bmatrix}
E\{n_0 n_0\} & E\{n_0 n_1\} & \cdots & E\{n_0 n_{MN-1}\} \\
E\{n_1 n_0\} & E\{n_1 n_1\} & \cdots & E\{n_1 n_{MN-1}\} \\
\vdots & & \ddots & \vdots \\
E\{n_{MN-1} n_0\} & E\{n_{MN-1} n_1\} & \cdots & E\{n_{MN-1} n_{MN-1}\}
\end{bmatrix}

The matrices R_f and R_n are real and symmetric, with all eigenvalues being non-negative.
The 2D-DFTs of the correlations R_f and R_n are called the power spectra and are denoted by S_f(u,v) and S_n(u,v), respectively.
Slide 31: Minimum Mean Square Error (Wiener) Filter
Choosing Q^T Q = R_f^{-1} R_n in the constrained least squares solution yields the (parametric) Wiener filter
f̂ = (H^T H + γ R_f^{-1} R_n)^{-1} H^T g.
Slide 32: Minimum Mean Square Error (Wiener) Filter
This can be implemented using the DFT as

F̂(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, S_n(u,v)/S_f(u,v)}\, G(u,v) = R(u,v)\, G(u,v),

where

R(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, S_n(u,v)/S_f(u,v)}.
Slide 33: Minimum Mean Square Error (Wiener) Filter
Here S_f(u,v) = E(|F(u,v)|^2) is the power spectral density of the image f(m,n), and S_n(u,v) = E(|N(u,v)|^2) is the power spectral density of the noise n(m,n).
The restoration filter R(u,v) is called the parametric Wiener filter, with parameter γ.
Slide 34: Minimum Mean Square Error (Wiener) Filter
According to the constrained restoration filter derived earlier, the parameter γ should be chosen to satisfy ||g - Hf̂||^2 = ||n||^2.
However, the choice γ = 1 yields an optimal filter in the sense of minimizing the error function e^2 = E{[f(m,n) - f̂(m,n)]^2}. In other words, setting γ = 1 yields a statistically optimal restoration.
Implementation of the parametric Wiener filter requires knowledge of the image and noise power spectra S_f(u,v) and S_n(u,v). In particular, we need the ratio S_n(u,v)/S_f(u,v) that appears in R(u,v), i.e., the reciprocal of the frequency-domain signal-to-noise ratio (SNR).
Slide 35: Minimum Mean Square Error (Wiener) Filter
This is not always available, and a simple approximation is to replace the ratio S_n(u,v)/S_f(u,v) by a constant K.
In this case, the Wiener filter is given by

R(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + K}.

Note that as K → 0 (no noise), the Wiener filter tends to the inverse filter.
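A sketch of this constant-K Wiener filter as it might be coded with numpy FFTs; the helper name wiener_restore and the default K are illustrative, and h is assumed to be the PSF zero-padded to the image size:

```python
import numpy as np

def wiener_restore(g, h, K=0.01):
    """g: blurred noisy image; h: zero-padded PSF (same shape as g); K: constant noise-to-signal ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = np.conj(H) / (np.abs(H) ** 2 + K)   # R(u,v) = H*(u,v) / (|H(u,v)|^2 + K)
    return np.real(np.fft.ifft2(R * G))
```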
Slide 36: Wiener Filter Example
(Example setup: original image f(m,n), blur transfer function H(u,v), and additive noise at SNR = 10 log10(σ_f^2 / σ_n^2) dB.)
Slide 37: Wiener Filter Example
(Result at SNR = 25.9 dB.)
Slide 38: Wiener Filter Example
(Result at SNR = 15.9 dB.)
Slide 39: Wiener Filter Example
(Result at SNR = 5.9 dB.)
Slide 40: Parametric Wiener Filter Example (Effect of Parameter γ)
Slide 41: Parametric Wiener Filter Example (Effect of Parameter γ)
(Restored images: f̂(m,n) with γ = 1, γ = 5, and γ = 50.)
Slide 42: Parametric Wiener Filter Example (Effect of Parameter γ)
Small values of γ result in better "blur removal" and poor noise filtering.
Large values of γ result in poor "blur removal" and better noise filtering.
Slide 43: Constrained Least Squares Restoration
Slide 44: Constrained Least Squares Restoration
One possibility is to formulate a criterion of optimality (a choice of Q) that is based on a measure of smoothness (minimize the "roughness," or oscillatory behavior, of the solution).
This is normally done by choosing Q to represent a second derivative of the image.
Slide 45: Constrained Least Squares Restoration
Consider the 1-D case. A discrete approximation of the second derivative at a point x = m Δx can be obtained as follows:

\frac{\partial f}{\partial x}\Big|_{x = m\Delta x} \approx \frac{f((m+1)\Delta x) - f(m\Delta x)}{\Delta x},

\frac{\partial^2 f}{\partial x^2}\Big|_{x = m\Delta x} \approx \frac{f((m+1)\Delta x) - 2 f(m\Delta x) + f((m-1)\Delta x)}{(\Delta x)^2}.
Slide 46: Constrained Least Squares Restoration
In a discrete formulation with Δx = 1, this can be written as

\frac{\partial^2 f}{\partial x^2}\Big|_{x = m} \approx f(m+1) - 2 f(m) + f(m-1) = f(m) * p(m),

where p(m) is the mask (1, -2, 1), i.e., p(-1) = 1, p(0) = -2, p(1) = 1. The roughness measure is

\sum_m \big[ f(m+1) - 2 f(m) + f(m-1) \big]^2 = \sum_m \big[ f(m) * p(m) \big]^2.

Therefore, we seek an estimate f̂ of f which is smooth in the sense that it minimizes the above "roughness measure."
Slide 47: Constrained Least Squares Restoration
This can be formulated in our standard matrix notation: collecting the second differences into a vector gives Cf̂, where C is the banded matrix whose rows carry the mask (1, -2, 1):

C = \begin{bmatrix}
-2 & 1 & 0 & \cdots & 0 & 0 \\
1 & -2 & 1 & \cdots & 0 & 0 \\
0 & 1 & -2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & -2
\end{bmatrix}
Slide 48: Constrained Least Squares Restoration
or, equivalently (using the wrap-around implied by the zero-padding),

C = \begin{bmatrix}
-2 & 1 & 0 & \cdots & 0 & 1 \\
1 & -2 & 1 & \cdots & 0 & 0 \\
0 & 1 & -2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
1 & 0 & 0 & \cdots & 1 & -2
\end{bmatrix},

which is circulant. C is a "smoothing matrix" and f̂ is a vector representing the estimated image.
Slide 49: Constrained Least Squares Restoration
In the 2-D case (with Δx = Δy = 1), we have

\nabla^2 f(m,n) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}
\approx \big[ f(m+1,n) - 2 f(m,n) + f(m-1,n) \big] + \big[ f(m,n+1) - 2 f(m,n) + f(m,n-1) \big]

= f(m+1,n) + f(m-1,n) + f(m,n+1) + f(m,n-1) - 4 f(m,n).
Slide 50: Constrained Least Squares Restoration
The roughness measure can then be written as

\sum_m \sum_n \big[ f(m+1,n) + f(m-1,n) + f(m,n+1) + f(m,n-1) - 4 f(m,n) \big]^2
= \sum_m \sum_n \big[ f(m,n) * p(m,n) \big]^2,

where

p(m,n) = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}.
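A small sketch (illustrative image, not from the slides) that evaluates this roughness measure by circularly convolving f with the Laplacian mask p shown above; this is exactly what ||Cf||^2 computes once C is built as a block-circulant matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
f = rng.random((32, 32))                    # stand-in image

p = np.zeros_like(f)
p[:3, :3] = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]                     # zero-padded Laplacian mask p(m,n)

lap = np.real(np.fft.ifft2(np.fft.fft2(p) * np.fft.fft2(f)))  # circular convolution p * f
roughness = np.sum(lap ** 2)                                  # equals ||C f||^2
print(roughness)
```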
Slide 51: Constrained Least Squares Restoration
This can be formulated in our standard matrix notation as follows:
minimize ||Cf̂||^2 = f̂^T C^T C f̂
subject to ||g - Hf̂||^2 - ||n||^2 = 0, or ||g - Hf̂||^2 = ||n||^2,
where (recall the zero-padding):
Slide 52: Constrained Least Squares Restoration
C is a "smoothing matrix" and f̂ is a vector representing the estimated image. C is the MN x MN block-circulant matrix formed from the zero-padded array p(m,n), exactly as H was formed from h(m,n):

C = \begin{bmatrix}
C_0 & C_{M-1} & C_{M-2} & \cdots & C_1 \\
C_1 & C_0 & C_{M-1} & \cdots & C_2 \\
\vdots & & & \ddots & \vdots \\
C_{M-1} & C_{M-2} & C_{M-3} & \cdots & C_0
\end{bmatrix},
\qquad
C_i = \begin{bmatrix}
p(i,0) & p(i,N-1) & \cdots & p(i,1) \\
p(i,1) & p(i,0) & \cdots & p(i,2) \\
\vdots & & \ddots & \vdots \\
p(i,N-1) & p(i,N-2) & \cdots & p(i,0)
\end{bmatrix}
Slide 53: Constrained Least Squares Restoration
Notice that C is a block-circulant matrix.
As before, the solution to the above optimization problem is given by
f̂ = (H^T H + γ C^T C)^{-1} H^T g.
Slide 54: Constrained Least Squares Restoration
Using the properties of the block-circulant matrix C, we get the following implementation of this filter:

F̂(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, |P(u,v)|^2}\, G(u,v) = R(u,v)\, G(u,v),

where

R(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, |P(u,v)|^2}.
Slide 55: Constrained Least Squares Restoration
Here P(u,v) is the 2D-DFT of the array p(m,n), after appropriate zero-padding.
Compare this with the parametric Wiener filter

R(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, S_n(u,v)/S_f(u,v)}:

no power-spectrum information is required in constrained least-squares restoration!
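A compact sketch of this constrained least-squares filter using numpy FFTs; the helper name cls_restore and the default γ are illustrative, and h is assumed to be the PSF zero-padded to the image size:

```python
import numpy as np

def cls_restore(g, h, gamma=0.01):
    """g: degraded image; h: zero-padded PSF (same shape as g); gamma: regularization parameter."""
    M, N = g.shape
    p = np.zeros((M, N))
    p[:3, :3] = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]     # zero-padded Laplacian p(m,n)
    H = np.fft.fft2(h)
    P = np.fft.fft2(p)
    R = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(R * np.fft.fft2(g)))
```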
Slide 56: Constrained Least Squares Restoration
However, for the new filter to be optimal, the parameter γ must be chosen to satisfy the constraint ||g - Hf̂||^2 = ||n||^2.
Define the residual vector
r = g - Hf̂ = g - H (H^T H + γ C^T C)^{-1} H^T g.
Therefore, we need to choose γ such that ||r||^2 = ||n||^2.
It can be shown that the function
φ(γ) = r^T r = ||r||^2
is a monotonically increasing function of γ.
We want to adjust γ so that
φ(γ) = ||r||^2 = ||n||^2 ± a,
where a is an accuracy factor.
Slide 57: Constrained Least Squares Restoration
This suggests the following iterative procedure:
1. Specify an initial value of γ (call it γ_k, with k = 0).
2. Compute f̂_k (or F̂_k(u,v) via the filter R(u,v) above) and the residual energy φ(γ_k) = ||r||^2.
Slide 58: Constrained Least Squares Restoration
3. If φ(γ_k) < ||n||^2 - a, set γ_{k+1} = γ_k + b, set k = k + 1, and return to step 2.
If φ(γ_k) > ||n||^2 + a, set γ_{k+1} = γ_k - b, set k = k + 1, and return to step 2.
Otherwise, STOP (the current f̂_k, or F̂_k(u,v), is the restored image and γ_k is the optimal choice of the parameter γ).
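A sketch of this adjustment loop with numpy FFTs; the function name and the default values of γ, a, and b are illustrative (a bisection search on γ is a common practical alternative to the fixed step ±b used here):

```python
import numpy as np

def adjust_gamma(g, h, noise_energy, a, gamma=1e-3, b=1e-3, max_iter=100):
    """noise_energy = ||n||^2; a = accuracy tolerance; b = step size for gamma."""
    M, N = g.shape
    p = np.zeros((M, N)); p[:3, :3] = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
    H, P, G = np.fft.fft2(h), np.fft.fft2(p), np.fft.fft2(g)
    for _ in range(max_iter):
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
        r = g - np.real(np.fft.ifft2(H * F_hat))      # residual r = g - H f_hat
        phi = np.sum(r ** 2)                          # phi(gamma) = ||r||^2
        if phi < noise_energy - a:
            gamma += b                                # residual too small: increase gamma
        elif phi > noise_energy + a:
            gamma -= b                                # residual too large: decrease gamma
        else:
            break                                     # ||r||^2 within ||n||^2 +/- a
    return np.real(np.fft.ifft2(F_hat)), gamma
```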
Slide 59: Constrained Least Squares Restoration
Implementation of this procedure requires knowledge of ||n||^2, which denotes the strength (energy) of the noise. It can be obtained from the mean and variance of the noise:

m_n = \frac{1}{MN} \sum_m \sum_n n(m,n),
\qquad
\sigma_n^2 = \frac{1}{MN} \sum_m \sum_n \big[ n(m,n) - m_n \big]^2.
Slide 60: Constrained Least Squares Restoration
Then

||n||^2 = \sum_m \sum_n n(m,n)^2 = MN \big( \sigma_n^2 + m_n^2 \big),

so only the mean and variance of the noise are needed to apply the constrained least-squares procedure.
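A tiny numerical check of this identity on a synthetic noise field (the noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
M = N = 64
n = 0.1 + 0.05 * rng.standard_normal((M, N))   # noise with nonzero mean

m_n = n.mean()                                 # sample mean
sigma2 = np.mean((n - m_n) ** 2)               # sample variance
assert np.allclose(np.sum(n ** 2), M * N * (sigma2 + m_n ** 2))
```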
Slide 61: Constrained LS Example
Slide 62: Constrained LS Example
Slide 63: Geometric Distortion
Slide 64: Gray-level Interpolation
Slide 65: Example
Slide 66: Example