FINAL PROJECT IN
Iterative Solution of Nonlinear Equations in Several Variables
2012
The Minh Tran
1) Let $f(x) = x^3 e^x$.
a) Write down Newton's method for this function. What is the order of convergence?
- Solution:
+ We will use Newton's method with the formula
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
for the function $f(x) = x^3 e^x$:
$$f(x_n) = x_n^3 e^{x_n} \;\Rightarrow\; f'(x_n) = 3x_n^2 e^{x_n} + x_n^3 e^{x_n} = x_n^2 e^{x_n}(x_n + 3),$$
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^3 e^{x_n}}{x_n^2 e^{x_n}(x_n + 3)} = \frac{x_n^2 + 2x_n}{x_n + 3}.$$
+ To continue, we apply the formula
$$\lim_{n\to\infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p}$$
to find the order of convergence, where $f(\alpha) = 0$, $\alpha = 0$. We compute that
$$\lim_{n\to\infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p}
= \lim_{n\to\infty} \frac{1}{|x_n|^p}\left|\frac{x_n^2 + 2x_n}{x_n + 3}\right|
= \lim_{n\to\infty} |x_n|^{1-p}\left|\frac{x_n + 2}{x_n + 3}\right|.$$
We assume that $x_n \to \alpha = 0$, so for $p = 1$:
$$\lim_{n\to\infty} \frac{|x_{n+1}|}{|x_n|} = \lim_{n\to\infty} \left|\frac{x_n + 2}{x_n + 3}\right| = \frac{2}{3},$$
which is nonzero and positive. Thus, the order of convergence is 1.
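This behaviour is easy to observe numerically. The following is a minimal Python sketch (not part of the original solution, with an assumed starting point $x_0 = 0.5$) of the plain Newton iteration $x_{n+1} = x_n(x_n+2)/(x_n+3)$; the printed error ratio $|x_{n+1}|/|x_n|$ tends to $2/3$, confirming linear convergence.

```python
# Minimal sketch: plain Newton's method for f(x) = x^3 * e^x reduces to
# x_{n+1} = x_n*(x_n + 2)/(x_n + 3); the root is alpha = 0. Starting point is assumed.
x = 0.5
for n in range(10):
    x_new = x * (x + 2.0) / (x + 3.0)      # Newton step x - f(x)/f'(x)
    ratio = abs(x_new) / abs(x)            # |e_{n+1}| / |e_n|, should tend to 2/3
    print(f"n={n:2d}  x={x_new: .6e}  ratio={ratio:.4f}")
    x = x_new
```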
b) Give a modification of Newton’s method so that the order of convergence is 2
- Solution:
The multiplicity $k$ of a root $\alpha$ is such that
$$\lim_{x\to\alpha} \frac{f(x)}{(x-\alpha)^k} = c \neq 0.$$
We have
$$\lim_{x\to 0} \frac{x^3 e^x}{x^3} = \lim_{x\to 0} e^x = 1 \neq 0.$$
Clearly, we see that the root $\alpha = 0$ has multiplicity 3 for $f(x) = x^3 e^x$.
Hence, a modification of Newton's method that achieves quadratic convergence is
$$x_{n+1} = x_n - N\frac{f(x_n)}{f'(x_n)} = x_n - \frac{3x_n^3 e^{x_n}}{x_n^2 e^{x_n}(x_n + 3)} = x_n - \frac{3x_n}{x_n + 3} = \frac{x_n^2}{x_n + 3},$$
where $N = 3$ is the multiplicity of the root $\alpha$ of $f(x) = x^3 e^x$.
Finally, we can observe that
$$\lim_{n\to\infty} \frac{|x_{n+1}|}{|x_n|^p} = \lim_{n\to\infty} \frac{|x_n|^{2-p}}{x_n + 3}$$
converges to a nonzero constant (namely $\tfrac{1}{3}$) whenever $p = 2$.
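As a sanity check, here is a minimal sketch (again with an assumed starting point $x_0 = 0.5$) of the modified iteration $x_{n+1} = x_n^2/(x_n+3)$; the printed ratio $|x_{n+1}|/|x_n|^2$ settles near $1/3$, consistent with quadratic convergence.

```python
# Minimal sketch: modified Newton x_{n+1} = x_n - 3 f(x_n)/f'(x_n) = x_n^2/(x_n + 3)
# for f(x) = x^3 * e^x (root alpha = 0 of multiplicity N = 3). Starting point assumed.
x = 0.5
for n in range(6):
    x_new = x * x / (x + 3.0)
    ratio = abs(x_new) / abs(x) ** 2       # should approach 1/3 (quadratic convergence)
    print(f"n={n}  x={x_new:.3e}  ratio={ratio:.4f}")
    x = x_new
```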
2) Given a linear system $Ax = b$ where $A$ is SDD (strictly diagonally dominant).
a) Describe the Jacobi method applied to this system and prove a convergence theorem.
+ Describe the Jacobi method applied to this system.
- Solution:
The system $AX = b$, or
$$\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\\
\quad\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n,
\end{cases}$$
can be rewritten (under the assumption that $a_{ii} \neq 0$, $i = 1,\dots,n$) as
$$x_1 = \frac{1}{a_{11}}\big(b_1 - a_{12}x_2 - \cdots - a_{1n}x_n\big), \quad \dots, \quad
x_n = \frac{1}{a_{nn}}\big(b_n - a_{n1}x_1 - \cdots - a_{n,n-1}x_{n-1}\big).$$
So, we have:
$$\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix} =
\begin{pmatrix}
0 & -\frac{a_{12}}{a_{11}} & \cdots & -\frac{a_{1n}}{a_{11}}\\
-\frac{a_{21}}{a_{22}} & 0 & \cdots & -\frac{a_{2n}}{a_{22}}\\
\vdots & & \ddots & \vdots\\
-\frac{a_{n1}}{a_{nn}} & \cdots & -\frac{a_{n,n-1}}{a_{nn}} & 0
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix} +
\begin{pmatrix} \frac{b_1}{a_{11}}\\ \frac{b_2}{a_{22}}\\ \vdots\\ \frac{b_n}{a_{nn}} \end{pmatrix},$$
or $X = BX + d$. If we write the matrix $A$ in the form $A = L + D + U$, where
$$L = \begin{pmatrix}
0 & & & \\
a_{21} & 0 & & \\
\vdots & \ddots & \ddots & \\
a_{n1} & \cdots & a_{n,n-1} & 0
\end{pmatrix}, \quad
U = \begin{pmatrix}
0 & a_{12} & \cdots & a_{1n}\\
 & 0 & \ddots & \vdots\\
 & & \ddots & a_{n-1,n}\\
 & & & 0
\end{pmatrix}, \quad
D = \begin{pmatrix}
a_{11} & & & \\
 & a_{22} & & \\
 & & \ddots & \\
 & & & a_{nn}
\end{pmatrix},$$
then from the above it is easy to see that
$$B = -D^{-1}(L + U), \qquad d = D^{-1}b.$$
With the Jacobi matrix $B = -D^{-1}(L+U)$, the Jacobi vector $d = D^{-1}b$, and $X = BX + d$, we have the iteration
$$X^{(k+1)} = -D^{-1}(L+U)\,X^{(k)} + D^{-1}b,$$
or, componentwise,
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}x_j^{(k)}\Big), \qquad i = 1,\dots,n.$$
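A minimal Python/NumPy sketch of this componentwise Jacobi update follows; the matrix $A$ and vector $b$ are a small SDD example chosen here purely for illustration, and the sketch also prints $\|D^{-1}(L+U)\|_\infty$, the quantity that the theorem below requires to be less than 1.

```python
import numpy as np

# Minimal sketch of the Jacobi iteration x^(k+1) = D^{-1} (b - (L+U) x^(k))
# on a small strictly diagonally dominant example (A, b chosen for illustration).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))
LU = A - D                                   # L + U (off-diagonal part of A)
B = -np.linalg.solve(D, LU)                  # Jacobi iteration matrix -D^{-1}(L+U)
print("||D^-1(L+U)||_inf =", np.linalg.norm(B, np.inf))   # < 1 because A is SDD

x = np.zeros_like(b)                         # initial guess x^(0)
for k in range(50):
    x = (b - LU @ x) / np.diag(A)            # componentwise Jacobi update
print("Jacobi solution:", x, " residual:", np.linalg.norm(A @ x - b))
```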
+ To prove a convergence theorem.
- Theorem: If $A$ is strictly diagonally dominant, then the Jacobi method converges for any initial guess $x^{(0)}$.
Proof:
Because $A$ is strictly diagonally dominant (SDD), we have
$$|a_{ii}| > \sum_{j\neq i} |a_{ij}| \quad\Longleftrightarrow\quad \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1.$$
We will prove that $\|G\|_\infty < 1$, where $G = D^{-1}(L+U)$.
We choose the $\infty$-norm. Then
$$\|G\|_\infty = \big\|D^{-1}(L+U)\big\|_\infty = \max_i \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1.$$
Thus, the Jacobi method converges for any guess $x^{(0)}$.
b) Describe the Gauss-Seidel method applied to this system and prove a convergence theorem.
+ Describe the Gauss-Seidel method applied to this system.
- Solution:
The system $AX = b$, or
$$\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\\
\quad\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n.
\end{cases}$$
We have
$$AX = b \;\Rightarrow\; (D - L - U)X = b \;\Rightarrow\; (D - L)X = UX + b
\;\Rightarrow\; X^{(k+1)} = (D - L)^{-1}\big(UX^{(k)} + b\big),$$
where $A = D - L - U$, with $D$ the diagonal part of $A$ and $-L$, $-U$ its strictly lower and strictly upper triangular parts. Componentwise,
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j<i} a_{ij}x_j^{(k+1)} - \sum_{j>i} a_{ij}x_j^{(k)}\Big), \qquad i = 1,\dots,n. \qquad (1)$$
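Likewise, a minimal sketch of update (1): the sweep runs through the components in order and reuses the already-updated entries (same illustrative $A$, $b$ as in the Jacobi sketch above).

```python
import numpy as np

# Minimal sketch of the Gauss-Seidel sweep (1): new components x_j^(k+1), j < i,
# are used as soon as they are available. A, b are the same illustrative SDD example.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)

x = np.zeros(n)                              # initial guess x^(0)
for k in range(25):
    for i in range(n):
        s1 = A[i, :i] @ x[:i]                # sum_{j<i} a_ij x_j^(k+1)
        s2 = A[i, i + 1:] @ x[i + 1:]        # sum_{j>i} a_ij x_j^(k)
        x[i] = (b[i] - s1 - s2) / A[i, i]
print("Gauss-Seidel solution:", x, " residual:", np.linalg.norm(A @ x - b))
```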
+ To prove a convergence theorem.
- Theorem: The Gauss-Seidel method for $Ax = b$ is convergent if $A$ is strictly diagonally dominant.
Proof:
From $AX = b \Rightarrow (D - L - U)X = b \Rightarrow (D - L)X = UX + b$. Therefore
$$(D - L)\big(X^{(m)} - X\big) = U\big(X^{(m-1)} - X\big). \qquad (2)$$
We need to prove that $x^{(m)} \to x$ as $m \to \infty$, i.e. $e^{(m)} = x^{(m)} - x \to 0$ as $m \to \infty$.
Applying (2), we have
$$(D - L)e^{(m)} = Ue^{(m-1)} \;\Rightarrow\; De^{(m)} = Le^{(m)} + Ue^{(m-1)}
\;\Rightarrow\; e^{(m)} = D^{-1}Le^{(m)} + D^{-1}Ue^{(m-1)}.$$
With
$$D = \operatorname{diag}(a_{11},\dots,a_{nn}), \qquad
L = -\big(a_{ij}\big)_{j\le i-1}, \qquad
U = -\big(a_{ij}\big)_{j\ge i+1},$$
we have, componentwise,
$$e_i^{(m)} = -\frac{1}{a_{ii}}\Big(\sum_{j=1}^{i-1} a_{ij}e_j^{(m)} + \sum_{j=i+1}^{n} a_{ij}e_j^{(m-1)}\Big).$$
For $i = 1$:
$$e_1^{(m)} = -\frac{1}{a_{11}}\sum_{j=2}^{n} a_{1j}e_j^{(m-1)}
\;\Rightarrow\; \big|e_1^{(m)}\big| \le \sum_{j=2}^{n} \frac{|a_{1j}|}{|a_{11}|}\,\big\|e^{(m-1)}\big\|_\infty
= r_1\,\big\|e^{(m-1)}\big\|_\infty,$$
where
$$r_1 = \sum_{j=2}^{n} \frac{|a_{1j}|}{|a_{11}|} < 1, \qquad
r = \max_{1\le i\le n} r_i < 1, \qquad r_i = \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|}.$$
Now take $i \ge 2$ and assume that $\big|e_j^{(m)}\big| \le r\,\big\|e^{(m-1)}\big\|_\infty$ for $j < i$. Then
$$\big|e_i^{(m)}\big|
\le \frac{1}{|a_{ii}|}\Big(\sum_{j=1}^{i-1} |a_{ij}|\,\big|e_j^{(m)}\big| + \sum_{j=i+1}^{n} |a_{ij}|\,\big|e_j^{(m-1)}\big|\Big)
\le \frac{1}{|a_{ii}|}\Big(r\sum_{j=1}^{i-1} |a_{ij}| + \sum_{j=i+1}^{n} |a_{ij}|\Big)\big\|e^{(m-1)}\big\|_\infty
\le \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|}\,\big\|e^{(m-1)}\big\|_\infty
\le r\,\big\|e^{(m-1)}\big\|_\infty.$$
By induction, the result is true for all $i$, so
$$\big\|e^{(m)}\big\|_\infty \le r\,\big\|e^{(m-1)}\big\|_\infty.$$
Thus
$$\big\|e^{(m)}\big\|_\infty \le r^m\,\big\|e^{(0)}\big\|_\infty \to 0 \quad\text{as } m\to\infty \quad (\text{since } r < 1),$$
i.e. $e^{(m)} = x^{(m)} - x \to 0$ as $m \to \infty$, as required. The theorem is proved.
(We can also apply this argument to prove the theorem of part a).)
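The contraction $\|e^{(m)}\|_\infty \le r\,\|e^{(m-1)}\|_\infty$ can also be observed numerically; this minimal sketch (reusing the illustrative SDD system from above) prints the successive error ratios, which indeed stay below $r$.

```python
import numpy as np

# Minimal sketch: observe ||e^(m)||_inf <= r ||e^(m-1)||_inf for Gauss-Seidel
# on the same illustrative SDD system used above.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x_exact = np.linalg.solve(A, b)

r = max(sum(abs(A[i, j]) for j in range(3) if j != i) / abs(A[i, i]) for i in range(3))
x = np.zeros(3)
err_prev = np.linalg.norm(x - x_exact, np.inf)
for m in range(1, 8):
    for i in range(3):                       # one Gauss-Seidel sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    err = np.linalg.norm(x - x_exact, np.inf)
    print(f"m={m}  ||e||_inf={err:.3e}  ratio={err / err_prev:.3f}  (r={r:.3f})")
    err_prev = err
```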
3)
a) State and prove a QR-decomposition theorem.
- Theorem: Suppose that $A$ is an $n\times m$ matrix with linearly independent columns. Then $A$ can be factored as
$$A = QR,$$
where $Q$ is an $n\times m$ matrix with orthonormal columns and $R$ is an invertible $m\times m$ upper triangular matrix.
Proof:
Suppose that the columns of $A$ are given by $c_1, c_2, \dots, c_m$. We use the Gram-Schmidt process on these vectors and obtain a set of orthonormal vectors $u_1, u_2, \dots, u_m$. We can write
$$A = [\,c_1 \mid c_2 \mid \cdots \mid c_m\,], \qquad Q = [\,u_1 \mid u_2 \mid \cdots \mid u_m\,],$$
where $Q$ is a matrix with orthonormal columns. We can write each $c_i$ as a linear combination of $u_1, u_2, \dots, u_m$ in the following linear system:
$$\begin{aligned}
c_1 &= \langle c_1,u_1\rangle u_1 + \langle c_1,u_2\rangle u_2 + \cdots + \langle c_1,u_m\rangle u_m,\\
c_2 &= \langle c_2,u_1\rangle u_1 + \langle c_2,u_2\rangle u_2 + \cdots + \langle c_2,u_m\rangle u_m,\\
&\;\;\vdots\\
c_m &= \langle c_m,u_1\rangle u_1 + \langle c_m,u_2\rangle u_2 + \cdots + \langle c_m,u_m\rangle u_m.
\end{aligned}$$
Next, define $R$ to be the $m\times m$ matrix
$$R = \begin{pmatrix}
\langle c_1,u_1\rangle & \langle c_2,u_1\rangle & \cdots & \langle c_m,u_1\rangle\\
\langle c_1,u_2\rangle & \langle c_2,u_2\rangle & \cdots & \langle c_m,u_2\rangle\\
\vdots & \vdots & \ddots & \vdots\\
\langle c_1,u_m\rangle & \langle c_2,u_m\rangle & \cdots & \langle c_m,u_m\rangle
\end{pmatrix}.$$
Now, we can observe that the product $QR = A$:
$$QR = [\,u_1 \mid u_2 \mid \cdots \mid u_m\,]
\begin{pmatrix}
\langle c_1,u_1\rangle & \cdots & \langle c_m,u_1\rangle\\
\vdots & \ddots & \vdots\\
\langle c_1,u_m\rangle & \cdots & \langle c_m,u_m\rangle
\end{pmatrix}
= [\,c_1 \mid c_2 \mid \cdots \mid c_m\,] = A.$$
It remains to show that $R$ is an invertible upper triangular matrix. First, recall the matrix $R$ defined above. From the Gram-Schmidt process we know that $u_k$ is orthogonal to $c_1, c_2, \dots, c_{k-1}$. This means that all the inner products below the main diagonal must be zero; they are all of the form $\langle c_i,u_j\rangle = 0$ with $i < j$. We also know that a triangular matrix is invertible if its main diagonal entries $\langle c_i,u_i\rangle$ are nonzero. From the Gram-Schmidt process we have the general formula for $u_i$:
$$u_i = \frac{u_i'}{\|u_i'\|}, \qquad
u_i' = c_i - \langle c_i,u_1\rangle u_1 - \langle c_i,u_2\rangle u_2 - \cdots - \langle c_i,u_{i-1}\rangle u_{i-1} \neq 0,$$
$$\Rightarrow\; c_i = u_i' + \langle c_i,u_1\rangle u_1 + \cdots + \langle c_i,u_{i-1}\rangle u_{i-1}
= \|u_i'\|\,u_i + \langle c_i,u_1\rangle u_1 + \cdots + \langle c_i,u_{i-1}\rangle u_{i-1}.$$
Now, we can rewrite the formula using the properties of the inner product
$$\langle c_i,u_i\rangle
= \big\langle \|u_i'\|\,u_i + \langle c_i,u_1\rangle u_1 + \cdots + \langle c_i,u_{i-1}\rangle u_{i-1},\; u_i\big\rangle
= \|u_i'\|\,\langle u_i,u_i\rangle + \langle c_i,u_1\rangle\langle u_1,u_i\rangle + \cdots + \langle c_i,u_{i-1}\rangle\langle u_{i-1},u_i\rangle.$$
Because the $u_i$ are orthonormal basis vectors, $\langle u_i,u_i\rangle = 1$ and $\langle u_1,u_i\rangle = \cdots = \langle u_{i-1},u_i\rangle = 0$, so $\langle c_i,u_i\rangle = \|u_i'\| \neq 0$. And from the above, we also have $\langle c_i,u_j\rangle = 0$ with $i < j$.
Hence, $R$ is an invertible upper triangular matrix and it has the following form:
$$R = \begin{pmatrix}
\langle c_1,u_1\rangle & \langle c_2,u_1\rangle & \cdots & \langle c_m,u_1\rangle\\
0 & \langle c_2,u_2\rangle & \cdots & \langle c_m,u_2\rangle\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & \langle c_m,u_m\rangle
\end{pmatrix}.$$
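The constructive proof translates directly into code. Below is a minimal Python/NumPy sketch (the $3\times 2$ matrix is an arbitrary illustrative example) that builds $Q$ and $R$ exactly as above, with $R_{ji} = \langle c_i,u_j\rangle$ and a positive diagonal $\langle c_i,u_i\rangle = \|u_i'\|$.

```python
import numpy as np

def gram_schmidt_qr(A):
    """Minimal sketch of the QR factorization built as in the proof:
    R[j, i] = <c_i, u_j> and u_i = u_i' / ||u_i'||."""
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for i in range(m):
        u = A[:, i].copy()
        for j in range(i):
            R[j, i] = Q[:, j] @ A[:, i]      # <c_i, u_j>
            u -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(u)          # <c_i, u_i> = ||u_i'|| > 0
        Q[:, i] = u / R[i, i]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])   # illustrative 3x2 example
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))   # True True
print("R =\n", R)                            # upper triangular with positive diagonal
```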
b) Prove a uniqueness theorem of the decomposition for a proper $A$, e.g. nonsingular, and so on.
- Theorem: Let $A$ be an $n\times m$ matrix with linearly independent columns. Then $A$ admits a QR decomposition; furthermore, such a decomposition is unique (under the convention that the diagonal entries of $R$ are positive).
Proof:
We have proved the existence of the QR decomposition of the matrix $A$ in part a). Now we prove the uniqueness of this decomposition. Indeed, suppose the matrices $A$, $Q$, $R$ have the properties stated in part a), and let
$$A = Q_1R_1 = Q_2R_2,$$
where $Q_1^TQ_1 = Q_2^TQ_2 = I$ and both $R_1$, $R_2$ are upper triangular invertible matrices with positive diagonal entries. Then we can see that
$$R_1^TR_1 = R_1^TQ_1^TQ_1R_1 = A^TA = R_2^TQ_2^TQ_2R_2 = R_2^TR_2.$$
Hence
$$\big(R_2^T\big)^{-1}R_1^T = R_2R_1^{-1}.$$
The left-hand side of this equation is a lower triangular matrix and the right-hand side is an upper triangular matrix; hence both of them must be diagonal.
Let $\alpha_i$ and $\beta_i$, $1\le i\le m$, be the diagonal entries of $R_1$ and $R_2$, respectively. Then $\alpha_i > 0$, $\beta_i > 0$ for every $i$, and
$$\frac{\beta_i}{\alpha_i} = \frac{\alpha_i}{\beta_i}, \quad 1\le i\le m
\;\Rightarrow\; \alpha_i = \beta_i, \quad 1\le i\le m.$$
Hence
$$R_2R_1^{-1} = \big(R_2^T\big)^{-1}R_1^T = I \;\Rightarrow\; R_1 = R_2.$$
Since $Q_1R_1 = Q_2R_2$, it follows that $Q_1 = Q_2$. Thus, the decomposition is unique.
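Uniqueness can be illustrated numerically as well: the minimal sketch below manufactures a factorization $A = Q_0R_0$ with positive-diagonal $R_0$ and checks that numpy.linalg.qr, after sign normalization, recovers exactly the same factors (the $4\times 4$ random example is only for illustration).

```python
import numpy as np

# Minimal sketch: build A = Q0 R0 with Q0 orthogonal and R0 upper triangular with
# positive diagonal; by uniqueness, the (sign-normalized) computed QR must match.
rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))     # a random orthogonal matrix
R0 = np.triu(rng.standard_normal((4, 4)))
np.fill_diagonal(R0, np.abs(np.diag(R0)) + 1.0)       # force a positive diagonal
A = Q0 @ R0

Q, R = np.linalg.qr(A)
S = np.diag(np.sign(np.diag(R)))                      # normalize so that diag(R) > 0
Q, R = Q @ S, S @ R
print(np.allclose(Q, Q0), np.allclose(R, R0))         # True True
```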
4) Let $A = \{1,\, t,\, t^2\}$ be three vectors (polynomials) and let
$$(f,g) = \int_{-1}^{1} f(x)g(x)\,dx$$
be the inner product under consideration. Use the Gram-Schmidt process to orthogonalize the set $A$. What is the resulting orthonormal set?
- Solution:
We have the basis $A = \{1,\, t,\, t^2\}$. Let $A_1 = 1$, $A_2 = t$, $A_3 = t^2$ and $(f,g) = \int_{-1}^{1} f(x)g(x)\,dx$. Compute:
$$q_1 = A_1 = 1,$$
$$q_2 = A_2 - \frac{(A_2,q_1)}{(q_1,q_1)}\,q_1
= t - \frac{\int_{-1}^{1} t\,dt}{\int_{-1}^{1} dx}\cdot 1 = t - \frac{0}{2} = t,$$
$$\frac{(A_3,q_1)}{(q_1,q_1)} = \frac{\int_{-1}^{1} t^2\,dt}{\int_{-1}^{1} dx} = \frac{2/3}{2} = \frac{1}{3}, \qquad
\frac{(A_3,q_2)}{(q_2,q_2)} = \frac{\int_{-1}^{1} t^3\,dt}{\int_{-1}^{1} t^2\,dt} = 0,$$
$$\Rightarrow\; q_3 = A_3 - \frac{(A_3,q_1)}{(q_1,q_1)}\,q_1 - \frac{(A_3,q_2)}{(q_2,q_2)}\,q_2 = t^2 - \frac{1}{3}.$$
The resulting orthogonal set is $\big\{1,\; t,\; t^2 - \tfrac{1}{3}\big\}$; dividing each $q_i$ by $\sqrt{(q_i,q_i)}$ gives the orthonormal set
$$\Big\{\tfrac{1}{\sqrt{2}},\; \sqrt{\tfrac{3}{2}}\,t,\; \sqrt{\tfrac{45}{8}}\big(t^2 - \tfrac{1}{3}\big)\Big\}.$$
We can check the inner products again:
$$(1,\,t) = \int_{-1}^{1} t\,dt = 0, \qquad
\big(t,\; t^2 - \tfrac{1}{3}\big) = \int_{-1}^{1} \big(t^3 - \tfrac{t}{3}\big)\,dt = 0, \qquad
\big(1,\; t^2 - \tfrac{1}{3}\big) = \int_{-1}^{1} \big(t^2 - \tfrac{1}{3}\big)\,dt = \tfrac{2}{3} - \tfrac{2}{3} = 0,$$
so $1 \perp t$, $t \perp \big(t^2 - \tfrac{1}{3}\big)$, and $1 \perp \big(t^2 - \tfrac{1}{3}\big)$.
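These integrals are easy to verify symbolically; the minimal SymPy sketch below confirms that the pairwise inner products vanish and computes the norms used for the normalization above.

```python
import sympy as sp

# Minimal sketch: verify orthogonality of {1, t, t^2 - 1/3} under (f,g) = int_{-1}^{1} f g dt
# and compute the norms sqrt((q_i, q_i)) used for the orthonormal set.
t = sp.symbols('t')
q = [sp.Integer(1), t, t**2 - sp.Rational(1, 3)]
inner = lambda f, g: sp.integrate(f * g, (t, -1, 1))

for i in range(3):
    for j in range(i + 1, 3):
        print(f"(q{i+1}, q{j+1}) =", inner(q[i], q[j]))      # all zero
print("norms:", [sp.sqrt(inner(f, f)) for f in q])           # sqrt(2), sqrt(2/3), sqrt(8/45)
```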
5) Give three examples of isometry on $\mathbb{R}^2$. They should be a reflector, a rotation, and a composition of the two. You should specifically write down the matrix for each case.
- Solution:
Example 1: Rotation
Let $P$ be the point $(x,y)$, where $x = r\cos\varphi$ and $y = r\sin\varphi$. Rotation through the angle $\theta$ about the origin takes $P(x,y)$ to $P'(X,Y)$:
$$\begin{aligned}
X &= r\cos(\varphi+\theta) = r\cos\varphi\cos\theta - r\sin\varphi\sin\theta = x\cos\theta - y\sin\theta,\\
Y &= r\sin(\varphi+\theta) = r\sin\varphi\cos\theta + r\cos\varphi\sin\theta = x\sin\theta + y\cos\theta.
\end{aligned}$$
We can write down the matrix form
$$\begin{pmatrix} X\\ Y \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x\\ y \end{pmatrix}
\;\Rightarrow\;
R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}.$$
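A quick numerical check (angle and point chosen arbitrarily) that $R_\theta$ is indeed an isometry: it satisfies $R_\theta^TR_\theta = I$ and preserves lengths.

```python
import numpy as np

# Minimal sketch: the rotation matrix R_theta is an isometry (theta, P arbitrary).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
p = np.array([3.0, -2.0])                          # an arbitrary point P(x, y)
print(np.allclose(R.T @ R, np.eye(2)))             # orthogonality: R^T R = I
print(np.linalg.norm(R @ p), np.linalg.norm(p))    # equal lengths
```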
Example 2: Reflector
Let $P$ be the point $(x,y)$, where $x = r\cos\varphi$ and $y = r\sin\varphi$, and consider the reflection in the line $y = x\tan\frac{\theta}{2}$. From the above figure, the two reflection angles are equal to $\frac{\theta}{2} - \varphi$, so we can find the angle
$$\angle P'OX = \frac{\theta}{2} + \Big(\frac{\theta}{2} - \varphi\Big) = \theta - \varphi.$$
$$\begin{aligned}
X &= r\cos(\theta-\varphi) = r\cos\theta\cos\varphi + r\sin\theta\sin\varphi = x\cos\theta + y\sin\theta,\\
Y &= r\sin(\theta-\varphi) = r\sin\theta\cos\varphi - r\cos\theta\sin\varphi = x\sin\theta - y\cos\theta.
\end{aligned}$$
We can write down the matrix form
$$\begin{pmatrix} X\\ Y \end{pmatrix} =
\begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}
\begin{pmatrix} x\\ y \end{pmatrix}
\;\Rightarrow\;
M_\theta = \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}.$$
Example 3: Composition of the two
Let $A \in \mathbb{R}^{2\times 2}$ and write the matrix as
$$A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}.$$
Because of orthogonality we have
$$a^2 + c^2 = 1 \quad (1), \qquad b^2 + d^2 = 1 \quad (2), \qquad ab + cd = 0 \quad (3).$$
From equation (1), we can write $a = \cos\theta$, $c = \sin\theta$ for some $\theta$.
From equation (2), we have $b = \cos\varphi$, $d = \sin\varphi$ for some $\varphi$.
From equation (3), we see that $\cos\theta\cos\varphi + \sin\theta\sin\varphi = 0$,
i.e. $\cos(\theta - \varphi) = 0$. Thus
$$\varphi = \theta + \frac{\pi}{2}, \;\text{in which case}\;
b = \cos\Big(\theta + \frac{\pi}{2}\Big) = -\sin\theta,\quad d = \sin\Big(\theta + \frac{\pi}{2}\Big) = \cos\theta,$$
or
$$\varphi = \theta + \frac{3\pi}{2}, \;\text{in which case}\;
b = \cos\Big(\theta + \frac{3\pi}{2}\Big) = \sin\theta,\quad d = \sin\Big(\theta + \frac{3\pi}{2}\Big) = -\cos\theta.$$
Finally, with the values of $a$, $b$, $c$, $d$ found above, we have
$$A = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}
\quad\text{or}\quad
A = \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix},$$
so every such isometry is either a rotation or a reflection; in particular, the composition $R_\theta M_\varphi$ of the two examples above is of the second form, since $R_\theta M_\varphi = M_{\theta+\varphi}$.
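A minimal check of the composition (angles chosen arbitrarily): multiplying the rotation $R_\theta$ by the reflection $M_\varphi$ gives the reflection $M_{\theta+\varphi}$, i.e. a matrix of the second form above, and the product is again an isometry.

```python
import numpy as np

def rot(t):    # rotation R_theta
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):   # reflection M_theta in the line y = x*tan(theta/2)
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

theta, phi = 0.7, 1.1                              # arbitrary angles for illustration
C = rot(theta) @ refl(phi)                         # composition of the two isometries
print(np.allclose(C, refl(theta + phi)))           # True: R_theta M_phi = M_{theta+phi}
print(np.allclose(C.T @ C, np.eye(2)))             # True: the composition is an isometry
```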
6) Find the general solution of the linear difference equation
$$4U_{n+2} - 4U_{n+1} + U_n = 0.$$
- Solution:
The characteristic polynomial is
$$\rho(\xi) = 4\xi^2 - 4\xi + 1 = (2\xi - 1)^2 = 0.$$
The equation has the double root $\xi_1 = \xi_2 = \frac{1}{2}$, so the general solution has the form
$$U_n = c_1\Big(\frac{1}{2}\Big)^{n} + c_2\,n\Big(\frac{1}{2}\Big)^{n}.$$
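A short numerical confirmation (with arbitrarily chosen constants $c_1$, $c_2$) that this general solution satisfies the recurrence:

```python
# Minimal sketch: verify that U_n = (c1 + c2*n) * (1/2)**n satisfies
# 4*U_{n+2} - 4*U_{n+1} + U_n = 0 (c1, c2 arbitrary).
c1, c2 = 1.5, -0.75
U = lambda n: (c1 + c2 * n) * 0.5 ** n
for n in range(5):
    print(abs(4 * U(n + 2) - 4 * U(n + 1) + U(n)))   # all ~0 up to rounding
```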
b) Consider the iteration
$$\begin{pmatrix} U_{n+2}\\ U_{n+1} \end{pmatrix} = A \begin{pmatrix} U_{n+1}\\ U_{n} \end{pmatrix},$$
where $A \in \mathbb{R}^{2\times 2}$, using the associated Jordan decomposition. Find its limit as $n \to \infty$. Must the spectral radius of $A$ be less than one?
- Solution:
From the equation
$$4U_{n+2} - 4U_{n+1} + U_n = 0 \;\Rightarrow\; U_{n+2} = U_{n+1} - \frac{1}{4}U_n,$$
we obtain
$$\begin{pmatrix} U_{n+2}\\ U_{n+1} \end{pmatrix}
= \begin{pmatrix} 1 & -\frac{1}{4}\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} U_{n+1}\\ U_{n} \end{pmatrix}
= A \begin{pmatrix} U_{n+1}\\ U_{n} \end{pmatrix},
\qquad A = \begin{pmatrix} 1 & -\frac{1}{4}\\ 1 & 0 \end{pmatrix}.$$
We set
$$\hat{U}_n = \begin{pmatrix} U_{n+1}\\ U_{n} \end{pmatrix}
\;\Rightarrow\; \hat{U}_{n+1} = A\hat{U}_n
\;\Rightarrow\; \hat{U}_n = A^n\hat{U}_0.$$
As in part a), we use the Jordan decomposition to compute $A^n$.
The characteristic equation has the root
$$\rho(\lambda) = \det(A - \lambda I) = \begin{vmatrix} 1-\lambda & -\frac{1}{4}\\ 1 & -\lambda \end{vmatrix}
= \lambda^2 - \lambda + \frac{1}{4} = \Big(\lambda - \frac{1}{2}\Big)^2 = 0
\;\Rightarrow\; \lambda_1 = \lambda_2 = \frac{1}{2}.$$
We have the Jordan matrix
$$J = \begin{pmatrix} \frac{1}{2} & 1\\ 0 & \frac{1}{2} \end{pmatrix},$$
and $AR = RJ$ with $R = [\,r_1 \; r_2\,]$, where $\lambda_1 = \lambda_2 = \frac{1}{2}$ is the (double) eigenvalue.
To apply the Jordan decomposition:
$$(A - \lambda_1 I)r_1 = 0 \;\Rightarrow\; r_1 = \begin{pmatrix} \frac{1}{2}\\ 1 \end{pmatrix}, \qquad
(A - \lambda_2 I)r_2 = r_1 \;\Rightarrow\; r_2 = \begin{pmatrix} 1\\ 0 \end{pmatrix}, \qquad
R = \begin{pmatrix} \frac{1}{2} & 1\\ 1 & 0 \end{pmatrix}.$$
We observe that the matrix $A$ can be represented by the matrices $R$, $R^{-1}$, $J$:
$$A = RJR^{-1}
= \begin{pmatrix} \frac{1}{2} & 1\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} \frac{1}{2} & 1\\ 0 & \frac{1}{2} \end{pmatrix}
\begin{pmatrix} 0 & 1\\ 1 & -\frac{1}{2} \end{pmatrix}
= \begin{pmatrix} 1 & -\frac{1}{4}\\ 1 & 0 \end{pmatrix}.$$
We have
$$A^n = \big(RJR^{-1}\big)^n = RJR^{-1}\cdot RJR^{-1}\cdots RJR^{-1} = RJ^nR^{-1},$$
and, with $\lambda = \frac{1}{2}$,
$$J^2 = \begin{pmatrix} \lambda^2 & 2\lambda\\ 0 & \lambda^2 \end{pmatrix}, \quad
J^3 = \begin{pmatrix} \lambda^3 & 3\lambda^2\\ 0 & \lambda^3 \end{pmatrix}, \quad\dots\quad\Rightarrow\quad
J^n = \begin{pmatrix} \lambda^n & n\lambda^{n-1}\\ 0 & \lambda^n \end{pmatrix}.$$
Thus
$$A^n = RJ^nR^{-1}
= \begin{pmatrix} \frac{1}{2} & 1\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} \lambda^n & n\lambda^{n-1}\\ 0 & \lambda^n \end{pmatrix}
\begin{pmatrix} 0 & 1\\ 1 & -\frac{1}{2} \end{pmatrix}.$$
When $n \to \infty$, we need to compute
$$\lim_{n\to\infty} A^n = R\Big(\lim_{n\to\infty} J^n\Big)R^{-1}
= R\begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}R^{-1}
= \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix},$$
since $\lambda^n = (1/2)^n \to 0$ and $n\lambda^{n-1} \to 0$; hence $\hat{U}_n = A^n\hat{U}_0 \to 0$.
The eigenvalues of the matrix $A$ satisfy
$$\det(A - \lambda I) = \lambda^2 - \lambda + \frac{1}{4} = 0 \;\Rightarrow\; \lambda_1 = \lambda_2 = \frac{1}{2},$$
so the spectral radius of $A$ is less than one:
$$\rho(A) = \max_i |\lambda_i| = \frac{1}{2} < 1.$$
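A quick numerical confirmation of both conclusions: powers of $A = \begin{pmatrix} 1 & -1/4\\ 1 & 0 \end{pmatrix}$ decay to the zero matrix, and the spectral radius equals $1/2$.

```python
import numpy as np

# Minimal sketch: A^n -> 0 because the (double) eigenvalue is 1/2, so rho(A) = 1/2 < 1.
A = np.array([[1.0, -0.25],
              [1.0,  0.00]])
print("spectral radius :", max(abs(np.linalg.eigvals(A))))                         # 0.5
print("||A^50||_inf    :", np.linalg.norm(np.linalg.matrix_power(A, 50), np.inf))  # ~0
```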
7) Recall that the rank of a matrix is equal to the number of linearly independent columns.
Prove that $A \in \mathbb{R}^{n\times n}$ has rank one if and only if there exist nonzero vectors $u, v \in \mathbb{R}^n$ such that $A = uv^T$. To what extent is there flexibility in the choice of $u$ and $v$?
- Proof:
+ Suppose $A$ is a matrix of rank 1; this means that every row of $A$ can be expressed as a scalar multiple of a single nonzero row of $A$. Let $A = [a_1, a_2, \dots, a_n]$, where $a_i$, $i = 1,\dots,n$, represent the rows of the matrix $A$. Then rank $A$ is 1