
Vol. 7, No. 4 (2010) 525–537
© World Scientific Publishing Company

PARALLEL ITERATIVE REGULARIZATION ALGORITHMS FOR LARGE OVERDETERMINED LINEAR SYSTEMS

PHAM KY ANH∗ and VU TIEN DUNG†

Department of Mathematics, Vietnam National University,
334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

∗anhpk@vnu.edu.vn
†tiendunga2@yahoo.com

Received 22 March 2010 Accepted 4 August 2010

In this paper, we study the performance of some parallel iterative regularization methods for solving large overdetermined systems of linear equations.

Keywords: Iterative regularization method; ill-posed problem; parallel computation.

1 Introduction

Many scientific problems lead to the requirement of finding a vector x ∈ R^n such that

Bx = g, (1)

where B ∈ R^{m×n} and g ∈ R^m are a given matrix and vector, respectively.

In recent years, many direct and iterative methods for solving linear systems (1) on vector and parallel computers have been studied intensively (see, e.g., [Calvetti and Reichel (2002); Gallivan et al. (1990); Saad and Vorst (2000)] and references therein).

The aim of this paper is to implement some parallel iterative regularization methods proposed in Ref. [Anh and Chung (2009)] for large overdetermined linear systems (1), when the number of equations is large compared to the number of unknowns, i.e., m ≫ n. Large-scale linear discrete ill-posed problems arise in many practical applications, such as image restoration, computer tomography, and inverse problems in electromagnetics.

In what follows, we are interested in finding the minimal-norm solution of the consistent system (1).

∗Corresponding author.


For parallel computation purposes, we partition the given data B and g into N blocks:

B = [B_1; B_2; …; B_N], g = [g_1; g_2; …; g_N], B_i ∈ R^{m_i×n}, g_i ∈ R^{m_i},

where N ≥ 2, 1 ≤ m_i ≤ m − 1, and Σ_{i=1}^N m_i = m.
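In a matrix-computation setting, the row-block partition above can be sketched as follows (a minimal NumPy illustration; the block sizes produced by `array_split` are just one admissible choice of the m_i):

```python
import numpy as np

def partition_rows(B, g, N):
    """Split B (m x n) and g (m,) into N row blocks B_i, g_i with sum(m_i) = m."""
    idx_blocks = np.array_split(np.arange(B.shape[0]), N)
    Bs = [B[idx, :] for idx in idx_blocks]
    gs = [g[idx] for idx in idx_blocks]
    return Bs, gs
```

Stacking the blocks back recovers the original data, so a solution of (1) is exactly a common solution of the subsystems below.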

Clearly, x is a solution of Eq. (1) if and only if it is a common solution of the N subsystems

B_i x = g_i, i = 1, 2, …, N. (2)

For solving a consistent system of ill-posed equations

A_i(x) = 0, i = 1, 2, …, N, (3)

where A_i : H → H are given, possibly nonlinear, operators in a real Hilbert space H, two parallel methods, namely the parallel implicit iterative regularization method (PIIRM) and the parallel explicit iterative regularization method (PEIRM), were proposed in Ref. [Anh and Chung (2009)]. All the operators A_i in Eq. (3) are supposed to be inverse-strongly monotone (see [Liu and Nashed (1998)]), i.e.,

∃ c_i > 0: ⟨A_i(x) − A_i(y), x − y⟩ ≥ (1/c_i) ‖A_i(x) − A_i(y)‖² ∀ x, y ∈ H, i = 1, 2, …, N. (4)

Clearly, each operator A_i is then Lipschitz continuous and monotone, but not necessarily strongly monotone; hence each Eq. (3) may be ill-posed.

For the sake of simplicity, we may assume that all the constants c_i in Eq. (4) are the same, i.e., c_i = c, i = 1, 2, …, N.

Theorem 1.1. [Anh and Chung (2009), Theorem 2.1] Let α_n and γ_n be two sequences of positive numbers such that α_n → 0, γ_n → +∞, γ_n|α_{n+1} − α_n|/α_n² → 0 as n → +∞, and Σ_{n=1}^∞ α_n/γ_n = +∞. Suppose the nth approximation x_n is found (x_0 is given). Then the following parallel regularization algorithm:

A_i(x_n^i) + (α_n/N + γ_n) x_n^i = γ_n x_n, i = 1, 2, …, N, (5)

x_{n+1} = (1/N) Σ_{i=1}^N x_n^i, n = 0, 1, 2, …, (6)

will converge to the minimal-norm solution x^+ of the system (3).

Since all the problems (5) are well-posed and independent from each other, they can be solved stably by parallel processors.
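For affine operators A_i(x) = M_i x − b_i (the situation met later for linear systems), one sweep of (5)-(6) reduces to N independent linear solves followed by an average. A hedged sketch; representing each operator by a pair (M_i, b_i) is our illustrative convention, not notation from the paper:

```python
import numpy as np

def piirm_sweep(Ms, bs, x_n, alpha_n, gamma_n):
    """One step of (5)-(6) for A_i(x) = M_i x - b_i:
    solve (M_i + (alpha_n/N + gamma_n) I) x_n^i = gamma_n x_n + b_i, then average."""
    N = len(Ms)
    c = alpha_n / N + gamma_n
    I = np.eye(len(x_n))
    xs = [np.linalg.solve(M_i + c * I, gamma_n * x_n + b_i)
          for M_i, b_i in zip(Ms, bs)]
    return sum(xs) / N
```

The N solves inside the list comprehension are independent and would be distributed across processors in a parallel implementation.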

Making a few iterates for approximating x_n^i in Eq. (5) and inserting the approximate values of x_n^i into Eq. (6), we come to the PEIRM, whose convergence is guaranteed by the following theorem.


Theorem 1.2. [Anh and Chung (2009), Theorem 2.2] Suppose the sequences α_n and γ_n satisfy all the conditions in Theorem 1.1. Moreover, assume that

∀ n: (Nc + α_n)/γ_n ≤ q < 1 and α_n γ_n → ∞ as n → ∞.

Then, for a fixed number l ≥ 1, the PEIRM

z_{n,i}^{k+1} = z_n − (1/γ_n) [A_i(z_{n,i}^k) + (α_n/N) z_{n,i}^k], k = 0, 1, …, l − 1; z_{n,i}^0 = z_n, (7)

z_{n+1} = (1/N) Σ_{i=1}^N z_{n,i}^l, n = 0, 1, 2, …, (8)

converges to the minimal-norm solution of the system (3).
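The inner loop (7) and the averaging (8) translate directly into code. A sketch with callable operators A_i (again an illustrative interface, not the paper's implementation):

```python
import numpy as np

def peirm_step(A_list, z_n, alpha_n, gamma_n, l=3):
    """One outer step of (7)-(8): l explicit inner iterations per operator, then average."""
    N = len(A_list)
    finals = []
    for A in A_list:
        z = z_n                                      # z_{n,i}^0 = z_n
        for _ in range(l):                           # inner iteration (7)
            z = z_n - (A(z) + (alpha_n / N) * z) / gamma_n
        finals.append(z)
    return sum(finals) / N                           # averaging step (8)
```

With l = 1, a short calculation shows this reduces to the one-step explicit process (9) discussed next.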

Obviously, the computation of Eq (7) can be performed in parallel In particular,

when l = 1, we come to the explicit iterative process studied in Ref [Buong and

Son (2008)]

x n+1 = x n − 1

N γ n

N

i=1

A i (x n ) + α n x n



It can be shown that Eq. (9) coincides with the Bakushinski iterative regularization method [Bakushinski and Goncharsky (1989)] applied to the equation Σ_{i=1}^N A_i(x) = 0, which is equivalent to Eq. (3) if the A_i are inverse-strongly monotone. We refer to [Bakushinski and Goncharsky (1989; 1994); Kaltenbacher et al. (2008)] for iterative regularization methods.

The remainder of the paper is organized as follows. In the next section, we implement the methods (5), (6) and (7), (8) described above to find a common solution of the N subsystems (2), when both B and g may be contaminated by errors. Finally, in the last section, we present some numerical experiments.

2 Parallel Algorithms for Large Overdetermined Linear Systems

Together with Eq. (2), we consider the following system:

A_i(x) := B_i^T B_i x − B_i^T g_i = 0, i = 1, 2, …, N, (10)

where the symbol T denotes the transposition of a matrix or a vector. In what follows, we assume that the system (1) is consistent. Then the subsystems (2) have a common solution, which is also a common solution of Eq. (10). Conversely, a common solution of Eq. (10) is a common quasi-solution of Eq. (2), and due to the consistency of Eq. (1), it is also a common solution of Eq. (2).

In the sequel, we use the Euclidean vector norm and the induced matrix norm ‖D‖ = sup_{x≠0} ‖Dx‖/‖x‖ for any D ∈ R^{k×s}. The identity matrix in R^{n×n} and the scalar product in R^n are denoted by I and ⟨·, ·⟩, respectively.


Since for any real symmetric nonnegative matrix D = D^T ∈ R^{n×n} and any vector ξ ∈ R^n one has ⟨Dξ, ξ⟩ ≥ (1/‖D‖) ‖Dξ‖², the operators A_i defined by the left-hand side of Eq. (10) are inverse-strongly monotone. Indeed, ∀ x, y ∈ R^n,

⟨A_i(x) − A_i(y), x − y⟩ = ⟨B_i^T B_i (x − y), x − y⟩ ≥ (1/‖B_i^T B_i‖) ‖A_i(x) − A_i(y)‖².
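The matrix inequality used here, ⟨Dξ, ξ⟩ ≥ ‖Dξ‖²/‖D‖ for symmetric nonnegative D, is easy to verify numerically (a quick sanity check with synthetic data, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((6, 4))
D = C.T @ C                                  # symmetric nonnegative definite
xi = rng.standard_normal(4)
lhs = xi @ (D @ xi)                          # <D xi, xi>
rhs = np.linalg.norm(D @ xi) ** 2 / np.linalg.norm(D, 2)
assert lhs >= rhs - 1e-10 * abs(lhs)
```

The inequality follows from the spectral decomposition of D: in the eigenbasis, ⟨Dξ, ξ⟩ = Σ λ_i y_i² and ‖Dξ‖² = Σ λ_i² y_i² ≤ ‖D‖ Σ λ_i y_i².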

The PIIRM applied to Eq. (10) in the noise-free case gives

(B_i^T B_i + (α_n/N + γ_n) I) x_n^i = γ_n x_n + B_i^T g_i, i = 1, 2, …, N, (11)

x_{n+1} = (1/N) Σ_{i=1}^N x_n^i, n = 0, 1, 2, …. (12)

Each subsystem (11) should be solved in parallel, for example, by the block Cholesky algorithm or by a parallel Jacobi method.

If the parameters α_n and γ_n satisfy all the conditions in Theorem 1.1, then the iterative process (11), (12) converges to the minimal-norm solution of Eq. (1).
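A serial stand-in for the process (11)-(12) can be sketched as follows. The parameter sequences α_n, γ_n are illustrative choices satisfying the conditions of Theorem 1.1, not the ones from the paper's experiments; a Cholesky factorization is used since each M_{i,n} is symmetric positive definite:

```python
import numpy as np

def piirm_linear(Bs, gs, n_iter=300):
    """Noise-free PIIRM (11)-(12) for row blocks B_i, g_i."""
    N = len(Bs)
    n = Bs[0].shape[1]
    x = np.zeros(n)
    for it in range(n_iter):
        alpha = 1e-3 / (it + 1) ** 0.4      # alpha_n -> 0
        gamma = (it + 1) ** 0.5             # gamma_n -> +infinity
        acc = np.zeros(n)
        for B_i, g_i in zip(Bs, gs):        # the N solves are independent (parallelizable)
            M = B_i.T @ B_i + (alpha / N + gamma) * np.eye(n)
            L = np.linalg.cholesky(M)       # M is SPD
            y = np.linalg.solve(L, gamma * x + B_i.T @ g_i)
            acc += np.linalg.solve(L.T, y)
        x = acc / N
    return x
```

Note that γ_n|α_{n+1} − α_n|/α_n² ~ n^{−0.1} → 0 and Σ α_n/γ_n ~ Σ n^{−0.9} = +∞ for these choices, so the hypotheses of Theorem 1.1 hold.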

Now, suppose we are given only approximate data B_δ and g_δ, such that ‖B − B_δ‖ ≤ δ and ‖g − g_δ‖ ≤ δ. Partitioning B_δ and g_δ in the same way as B and g, we come to the following process:

(B_δi^T B_δi + (γ_n + α_n/N) I) z_n^i = γ_n z_n + B_δi^T g_δi, i = 1, 2, …, N, (13)

z_{n+1} = (1/N) Σ_{i=1}^N z_n^i, n ≥ 0. (14)

Observe that ‖(B − B_δ)x‖² = Σ_{i=1}^N ‖(B_i − B_δi)x‖², hence ‖B_i − B_δi‖ ≤ ‖B − B_δ‖ ≤ δ, and similarly ‖g_i − g_δi‖ ≤ δ for every i.

Setting M_{i,n} := B_i^T B_i + (γ_n + α_n/N) I, M̃_{i,n} := B_δi^T B_δi + (γ_n + α_n/N) I, F_{i,n} := γ_n x_n + B_i^T g_i, and F̃_{i,n} := γ_n z_n + B_δi^T g_δi, we can rewrite the processes (11), (12) and (13), (14) shortly as

x_n^i = M_{i,n}^{-1} F_{i,n}; z_n^i = M̃_{i,n}^{-1} F̃_{i,n}.

Here and below, ω_1 denotes a positive constant dominating ‖B_i‖, ‖B_δi‖, ‖g_i‖ + ‖B_δi‖, and ‖g_δi‖ + ‖B_i‖ for all i. Observing that

‖B_i^T g_i − B_δi^T g_δi‖ ≤ ‖(B_i − B_δi)^T g_i‖ + ‖B_δi^T (g_i − g_δi)‖ ≤ (‖g_i‖ + ‖B_δi‖) δ ≤ ω_1 δ,

we find

‖F_{i,n} − F̃_{i,n}‖ ≤ γ_n ‖x_n − z_n‖ + ‖B_i^T g_i − B_δi^T g_δi‖ ≤ γ_n ‖x_n − z_n‖ + ω_1 δ.


Further, as ‖M_{i,n}^{-1}‖ ≤ 1/(γ_n + α_n/N) ≤ 1/γ_n and, similarly, ‖M̃_{i,n}^{-1}‖ ≤ 1/γ_n, we have

‖M_{i,n}^{-1} − M̃_{i,n}^{-1}‖ = ‖M̃_{i,n}^{-1} [M_{i,n} − M̃_{i,n}] M_{i,n}^{-1}‖ ≤ (1/γ_n²) ‖B_i^T (B_i − B_δi) + (B_i − B_δi)^T B_δi‖ ≤ (1/γ_n²) (‖B_i‖ + ‖B_δi‖) δ ≤ 2ω_1 δ/γ_n².
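The resolvent perturbation bound ‖M_{i,n}^{-1} − M̃_{i,n}^{-1}‖ ≤ ‖M_{i,n} − M̃_{i,n}‖/γ_n² can be checked numerically (illustrative sizes and a synthetic perturbation of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma, alpha, N = 4, 5.0, 0.1, 2
B = rng.standard_normal((8, n))
B_d = B + 1e-3 * rng.standard_normal((8, n))          # perturbed data B_delta
c = gamma + alpha / N
M = B.T @ B + c * np.eye(n)
M_d = B_d.T @ B_d + c * np.eye(n)
lhs = np.linalg.norm(np.linalg.inv(M) - np.linalg.inv(M_d), 2)
rhs = np.linalg.norm(M - M_d, 2) / gamma ** 2
assert lhs <= rhs
```

The bound holds with room to spare because ‖M^{-1}‖ ≤ 1/c < 1/γ_n; the factor 1/γ_n² is simply the product of the two resolvent norms.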

Thus, taking into account the above inequalities, we find

‖x_n^i − z_n^i‖ ≤ ‖M̃_{i,n}^{-1} [F_{i,n} − F̃_{i,n}]‖ + ‖[M_{i,n}^{-1} − M̃_{i,n}^{-1}] F_{i,n}‖ ≤ (1/γ_n)(γ_n ‖x_n − z_n‖ + ω_1 δ) + (2ω_1 δ/γ_n²)(γ_n ‖x_n‖ + ‖B_i^T g_i‖).

By Theorem 1.1, x_n converges to the minimal-norm solution x^+ of (1); hence it is bounded. Further, since γ_n > 0 and γ_n → +∞ as n → +∞, there exists a number γ > 0 such that γ_n ≥ γ for all n. Since ‖x_n‖ and ‖B_i^T g_i‖/γ_n are bounded, there is a positive constant ω_2 such that ω_1 (1 + 2‖x_n‖ + 2‖B_i^T g_i‖/γ_n) ≤ ω_2, i = 1, 2, …, N; n ≥ 0.

Thus,

‖x_n^i − z_n^i‖ ≤ ‖x_n − z_n‖ + (ω_1 δ/γ_n)(1 + 2‖x_n‖ + 2‖B_i^T g_i‖/γ_n) ≤ ‖x_n − z_n‖ + ω_2 δ/γ_n

for some positive constant ω_2. From the last relation, we get

‖x_{n+1} − z_{n+1}‖ ≤ (1/N) Σ_{i=1}^N ‖x_n^i − z_n^i‖ ≤ ‖x_n − z_n‖ + ω_2 δ/γ_n;

therefore, if z_0 = x_0, then

‖x_n − z_n‖ ≤ ω_2 δ Σ_{k=0}^{n−1} 1/γ_k ≤ (ω_2/γ) n δ.

Choosing n = n(δ) = [δ^{−µ}], where µ ∈ (0, 1), we obtain

‖z_{n(δ)} − x^+‖ ≤ ‖z_{n(δ)} − x_{n(δ)}‖ + ‖x_{n(δ)} − x^+‖ ≤ (ω_2/γ) δ^{1−µ} + ‖x_{n(δ)} − x^+‖ → 0 as δ → 0,

and we come to the following theorem.

Theorem 2.1. Suppose the given data are contaminated by errors, namely ‖B − B_δ‖ ≤ δ and ‖g − g_δ‖ ≤ δ. If the parameters α_n and γ_n satisfy all the conditions in Theorem 1.1, then the iteration process (13), (14) with the termination index n(δ) = [δ^{−µ}] converges to the minimal-norm solution of (1) as the error level δ tends to zero. Moreover, there holds the estimate

‖z_{n(δ)} − x^+‖ ≤ (ω_2/γ) δ^{1−µ} + ‖x_{n(δ)} − x^+‖,

where ω_2 and γ are some positive constants, n(δ) = [δ^{−µ}], and µ ∈ (0, 1).

Now, we turn to the PEIRM (7), (8). For the sake of simplicity, we consider the method (9), in which at each step only one inner iteration (7) is performed. Letting

D = Σ_{i=1}^N B_i^T B_i, D_δ = Σ_{i=1}^N B_δi^T B_δi, T_n = (1 − α_n β_n) I − β_n D, T_δn = (1 − α_n β_n) I − β_n D_δ,

where β_n = 1/(N γ_n), and putting f = Σ_{i=1}^N B_i^T g_i, f_δ = Σ_{i=1}^N B_δi^T g_δi, we can rewrite the iterative process (9) for solving Eq. (2) in both the noise-free and the noisy-data cases as

x_{n+1} = T_n x_n + β_n f, (15)

z_{n+1} = T_δn z_n + β_n f_δ. (16)

Moreover, if the sequences α_n and γ_n satisfy all the conditions in Theorem 1.2, then the sequence x_n defined by Eq. (15) converges to the minimal-norm solution of Eq. (1). To evaluate the discrepancy ‖x_n − z_n‖, we need several auxiliary estimates.

First, observe that ‖B_i^T B_i − B_δi^T B_δi‖ ≤ ‖(B_i − B_δi)^T B_i‖ + ‖B_δi^T (B_i − B_δi)‖ ≤ 2ω_1 δ, hence ‖D − D_δ‖ ≤ 2ω_1 N δ; therefore,

‖T_n − T_δn‖ = β_n ‖D − D_δ‖ ≤ 2ω_1 N β_n δ. (17)

Further,

‖f − f_δ‖ ≤ Σ_{i=1}^N (‖(B_δi − B_i)^T g_δi‖ + ‖B_i^T (g_δi − g_i)‖) ≤ ω_1 N δ, hence β_n ‖f − f_δ‖ ≤ ω_1 N β_n δ. (18)

Finally, setting ξ = z_n − x_n, we have

‖T_δn (z_n − x_n)‖² = ‖(1 − α_n β_n) ξ − β_n D_δ ξ‖² = (1 − α_n β_n)² ‖ξ‖² − 2β_n (1 − α_n β_n) ⟨D_δ ξ, ξ⟩ + β_n² ‖D_δ ξ‖².

Using the fact ⟨D_δ ξ, ξ⟩ ≥ ‖D_δ ξ‖²/‖D_δ‖ and the condition α_n/(N γ_n) < (Nc + α_n)/γ_n ≤ q < 1, we find 1 − α_n β_n = 1 − α_n/(N γ_n) > 0. Thus,

‖T_δn ξ‖² ≤ (1 − α_n β_n)² ‖ξ‖² − β_n [2(1 − α_n β_n)/‖D_δ‖ − β_n] ‖D_δ ξ‖². (19)

Since α_n → 0 and γ_n → +∞, without loss of generality, we can assume that

γ_n > ω_1²/2 + α_n/N. (20)

Using Eq. (20) and the estimate ‖D_δ‖ ≤ Σ_{i=1}^N ‖B_δi‖² ≤ N ω_1², one can show that 2(1 − α_n β_n) − β_n ‖D_δ‖ ≥ 0. From Eq. (19), it then follows that

‖T_δn ξ‖ ≤ (1 − α_n β_n) ‖ξ‖, i.e., ‖T_δn‖ ≤ 1 − α_n β_n ≤ 1. (21)

Now, from Eqs. (15) and (16), we have

‖x_{n+1} − z_{n+1}‖ ≤ ‖(T_δn − T_n) x_n‖ + ‖T_δn‖ ‖x_n − z_n‖ + β_n ‖f − f_δ‖.

Using Eqs. (17), (18), and (21), we find

‖x_{n+1} − z_{n+1}‖ ≤ ω_1 N β_n δ (1 + 2‖x_n‖) + ‖x_n − z_n‖. (22)

Since the sequence x_n converges to x^+, it is bounded, and there exists a positive constant ω_2 > 2ω_1 (1 + ‖x_n‖) for all n. Noting that N β_n = 1/γ_n, Eq. (22) implies ‖x_{n+1} − z_{n+1}‖ ≤ ‖x_n − z_n‖ + ω_2 δ/γ_n; hence, starting with z_0 = x_0, we have

‖x_n − z_n‖ ≤ ω_2 δ Σ_{k=0}^{n−1} 1/γ_k ≤ (ω_2/γ) n δ.


Thus, choosing

n(δ) = [δ^{−µ}], 0 < µ < 1, (23)

as in Theorem 2.1, we come to the following result.

Theorem 2.2. In the noisy-data case, the process (16) with the choice (23) of the number of iterations converges to the minimal-norm solution of (1), provided the parameters α_n and γ_n satisfy all the conditions in Theorem 1.2 and the condition (20).
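The a priori stopping rule n(δ) = [δ^{−µ}] is straightforward to implement; the point is that n(δ) → ∞ while n(δ)·δ ≤ δ^{1−µ} → 0 as δ → 0 (a small illustration):

```python
def stopping_index(delta, mu=0.5):
    """A priori termination index n(delta) = floor(delta**(-mu)), 0 < mu < 1."""
    return int(delta ** (-mu))

# As delta decreases, n(delta) grows while n(delta)*delta shrinks.
assert stopping_index(1.0 / 16) == 4          # (1/16)^(-1/2) = 4
assert stopping_index(1.0 / 256) == 16
assert stopping_index(1.0 / 256) * (1.0 / 256) < stopping_index(1.0 / 16) * (1.0 / 16)
```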

In the following cases, the considered iterative regularization methods become much simpler.

Case 1: The matrix B is split into N symmetric, nonnegative definite matrices B_i ∈ R^{n×n}, where m = nN. Taking A_i(x) = B_i x − g_i, the PIIRM (11), (12) becomes

(B_i + (γ_n + α_n/N) I) x_n^i = γ_n x_n + g_i, i = 1, 2, …, N, (24)

x_{n+1} = (1/N) Σ_{i=1}^N x_n^i, n = 0, 1, …. (25)

Further, the PEIRM (15) is of the form

x_{n+1} = (1 − α_n β_n) x_n − β_n Σ_{i=1}^N B_i x_n + β_n Σ_{i=1}^N g_i. (26)
Case 2: The matrix B is split into m row vectors, B_i = b_i^T ∈ R^{1×n}, m_i = 1, m = N, i = 1, 2, …, N. The PIIRM (11), (12) becomes an explicit method and can be rewritten as (see [Anh and Chung (2009)])

x_n^i = x_n − (b_i^T x_n − ((α_n + Nγ_n)/(Nγ_n)) g_i) b_i / ‖b_i‖², i = 1, 2, …, N, (27)

x_{n+1} = (Nγ_n/(Nγ_n + α_n)) [(1 − λ_n) x_n + (λ_n/N) Σ_{i=1}^N x_n^i], n ≥ 0, (28)

where λ_n = N/(α_n + N + Nγ_n).

3 Numerical Experiment

Before considering some examples, we remark that discretization of a system of linear integral equations always leads to an overdetermined linear system Indeed, let us consider a system of linear first kind Fredholm integral equations,

 b a

K i (t, s)x(s)ds = f i (t) i = 1, 2, , N, (29)

where K i (t, s) and f i (t) are given continuous kernels and continuous functions,

respectively Choose a quadrature formula b

a h(t)dt = M

l=0 γ l h(t l ) + r, where


a ≤ t_0 < t_1 < ⋯ < t_M ≤ b are the grid points, γ_0, …, γ_M are the weights, and r is the remainder of the quadrature formula. Now, applying this formula to the system (29), dropping the remainder, and setting x_l := x(t_l), f_j^i := f_i(t_j), we get the following system of N(M + 1) linear equations with M + 1 unknowns:

Σ_{l=0}^M γ_l K_i(t_j, t_l) x_l = f_j^i, j = 0, 1, …, M; i = 1, 2, …, N. (30)

Let B_i be the matrix and g_i the vector whose entries are γ_l K_i(t_j, t_l) and f_j^i, j, l = 0, 1, …, M; i = 1, 2, …, N, respectively. Thus, a natural partition for Eq. (30) is defined. If all the kernels in Eq. (29) are symmetric and the rectangular rule for numerical integration is used, then the coefficient matrix of Eq. (30) can be split into N symmetric submatrices. Some numerical examples in this special case are given in Ref. [Anh and Chung (2009)].
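The assembly of the system (30) can be sketched as follows (a NumPy illustration using the trapezoid rule on a uniform grid; the vectorized-kernel interface is an assumption of this sketch):

```python
import numpy as np

def discretize_fredholm(kernels, rhs_funcs, a=0.0, b=1.0, M=20):
    """Assemble (30): one (M+1) x (M+1) block per kernel, stacked into B and g."""
    t = np.linspace(a, b, M + 1)
    w = np.full(M + 1, (b - a) / M)              # trapezoid weights gamma_l
    w[0] *= 0.5
    w[-1] *= 0.5
    Bs = [K(t[:, None], t[None, :]) * w[None, :] for K in kernels]
    gs = [f(t) for f in rhs_funcs]
    return np.vstack(Bs), np.concatenate(gs), t
```

Each block row j of B_i holds the values γ_l K_i(t_j, t_l), so stacking the N blocks gives the N(M+1) × (M+1) overdetermined system described above.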

In this section, we present two numerical examples. All computations were carried out on an IBM Cluster 1350 with eight computing nodes; each node contains two dual-core Intel Xeon 3.2 GHz processors and 2 GB of RAM. We consider the system (1) with the matrix B split into two ill-conditioned matrices B_i ∈ R^{m×m}, i = 1, 2. Let x∗ ∈ R^m be the vector whose entries are all equal to 1, and let f = Bx∗. Further, let f_δ = Bx∗ + η, where the entries of η are normally distributed random numbers with zero mean, scaled to yield the noise level δ. For solving the system (1), we use both the PIIRM (11), (12) and the PEIRM (15), starting from the initial approximation x_0 = 0. We evaluate the accuracy of the methods using the relative error norm (REN) err := ‖x_n − x∗‖/‖x∗‖.
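Generating data at a prescribed noise level δ can be done by scaling a Gaussian perturbation (one common convention; the paper does not spell out its exact scaling):

```python
import numpy as np

def add_noise(f, delta, rng):
    """Return f_delta = f + eta, with zero-mean Gaussian eta scaled so ||f_delta - f|| = delta."""
    eta = rng.standard_normal(f.shape)
    return f + delta * eta / np.linalg.norm(eta)
```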

The notations used in this section are as follows:

n_max: Total number of iterations
T_p: Time of the parallel execution, in seconds
T_s: Time of the sequential execution, in seconds
S_p = T_s/T_p: Speed-up
E_p = S_p/p: Efficiency of the parallel computation using p processors
REN: Relative error norm
PIIRM: Parallel implicit iterative regularization method
PEIRM: Parallel explicit iterative regularization method

Figures 1–4 and Tables 1–4 show the relative error histograms of the PEIRM and PIIRM, their execution times, as well as the speed-up and efficiency.

Example 3.1. We consider a system of two linear Fredholm integral equations of the first kind (29) with the kernels

K_1(t, s) = 1/(3 + t + s² + ts);

K_2(t, s) = s(1 − t) if s ≤ t, and t(1 − s) if t ≤ s,


Fig. 1. REN of iterates generated by PIIRM and PEIRM (REN versus number of iterations).

Fig. 2. REN of iterates generated by PIIRM and PEIRM (REN versus number of iterations).

and the trapezoid rule with [a, b] ≡ [0, 1] and M = 100. Note that the coefficient matrix of Eq. (30) in this case is split into two nonsymmetric submatrices. We present the computation for some chosen parameters α_n, γ_n, and δ.

Case 1: α_n = (5 · 10^{−4})/(n + 1)^{0.4}; γ_n = (n + 19 · 10^7)^{0.5}; δ = 10^{−2}.

In the noisy case, the PIIRM is more accurate but more time-consuming than the PEIRM. When the number of iterations becomes large enough, the relative error norm (REN) of the approximate solutions increases.


Fig. 3. REN of iterates generated by PIIRM, PEIRM, and the Landweber method.

Fig. 4. REN of iterates generated by PIIRM, PEIRM, and the Landweber method (REN versus number of iterations).

Case 2: α_n = (5 · 10^{−4})/(n + 1)^{0.4}; γ_n = (n + 19 · 10^7)^{0.5}; δ = 0.

In the noise-free case, the PIIRM is as accurate as the PEIRM; however, the first method is more time-consuming than the second one.

Example 3.2. Let

B = [B_1; B_2];

Ngày đăng: 16/12/2017, 00:56