

On: 11 November 2014, At: 06:31

Publisher: Taylor & Francis

Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

International Journal of Computer Mathematics

Publication details, including instructions for authors and subscription information:

http://www.tandfonline.com/loi/gcom20

Parallel iteratively regularized Gauss–Newton method for systems of nonlinear ill-posed equations

Pham Ky Anh & Vu Tien Dzung

Department of Mathematics, Vietnam National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

Accepted author version posted online: 19 Mar 2013. Published online: 15 Apr 2013.

To cite this article: Pham Ky Anh & Vu Tien Dzung (2013) Parallel iteratively regularized

Gauss–Newton method for systems of nonlinear ill-posed equations, International Journal of

Computer Mathematics, 90:11, 2452-2461, DOI: 10.1080/00207160.2013.782399

To link to this article: http://dx.doi.org/10.1080/00207160.2013.782399



International Journal of Computer Mathematics, 2013

Vol. 90, No. 11, 2452–2461, http://dx.doi.org/10.1080/00207160.2013.782399

Parallel iteratively regularized Gauss–Newton method for

systems of nonlinear ill-posed equations

Pham Ky Anh and Vu Tien Dzung*

Department of Mathematics, Vietnam National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

(Received 21 May 2012; revised version received 16 January 2013; accepted 26 February 2013)

We propose a parallel version of the iteratively regularized Gauss–Newton method for solving a system of ill-posed equations. Under certain widely used assumptions, the convergence rate of the parallel method is established. Numerical experiments show that the parallel iteratively regularized Gauss–Newton method is computationally convenient for dealing with underdetermined systems of nonlinear equations on parallel computers, especially when the number of unknowns is much larger than the number of equations.

Keywords: ill-posed problem; IRGNM; parallel computation; componentwise source condition; underdetermined system

2010 AMS Subject Classifications: 47J06; 47J25; 65J15; 65Y05

1 Introduction

Many parameter identification problems lead to a system of operator equations

$$F_i(x) = y_i, \quad i = 1, \dots, N, \quad (1)$$

where the $F_i$, $1 \le i \le N$, are possibly nonlinear operators mapping a Hilbert space $X$ of the unknown parameter $x$ into Hilbert spaces $Y_i$ of observations $y_i$. In the case of noisy data $y_i^\delta$ with $\|y_i^\delta - y_i\|_{Y_i} \le \delta$, we have the perturbed system

$$F_i(x) = y_i^\delta, \quad i = 1, \dots, N. \quad (2)$$

Clearly, systems (1) and (2) can be rewritten as operator equations in the product space:

$$F(x) = y \quad (3)$$

and

$$F(x) = y^\delta, \quad (4)$$

where $F : X \to Y = Y_1 \times Y_2 \times \cdots \times Y_N$, $F(x) = (F_1(x), \dots, F_N(x))$, $y = (y_1, \dots, y_N)$ and $y^\delta = (y_1^\delta, \dots, y_N^\delta)$. For $u = (u_1, \dots, u_N) \in Y$ and $v = (v_1, \dots, v_N) \in Y$, the inner product and the norm in $Y$ are defined as $\langle u, v \rangle = \sum_{i=1}^N \langle u_i, v_i \rangle_{Y_i}$ and $\|u\|_Y = (\sum_{i=1}^N \|u_i\|_{Y_i}^2)^{1/2}$, respectively.

*Corresponding author. Email: anhpk@vnu.edu.vn


One of the most efficient regularization methods for nonlinear ill-posed problems is the iteratively regularized Gauss–Newton method (IRGNM), proposed by Bakushinskii [4] in 1992. Convergence results for the IRGNM were obtained by Blaschke et al. [5], Hohage [10], Deuflhard et al. [7], Jin et al. [11] and others; see [9,12,14].

Recently, an IRGN–Kaczmarz method has been introduced by Burger and Kaltenbacher [6]. The main idea of that method is to perform a cyclic IRGN iteration over the equations. However, when the number of equations N is large, Kaczmarz-like methods are costly on a single processor.

In this note, we propose a parallel version of the IRGNM for the system of ill-posed operator equations (1). Other parallel methods for solving systems of ill-posed equations can be found in [1–3].

Suppose that the exact system (1) has a solution $x^\dagger$, which may not depend continuously on the right-hand side $y$. Suppose that the operators $F_i$, $1 \le i \le N$, are continuously differentiable in some set containing $x^\dagger$ and $x_0$, an initial approximation of $x^\dagger$. Let $x_n^\delta$ be the $n$th approximation of $x^\dagger$. According to the IRGNM, for a fixed number $n$, we linearize the Tikhonov functional

$$J_n^\delta(x) := \|F(x) - y^\delta\|_Y^2 + \alpha_n \|x - x_0\|^2 = \sum_{i=1}^N \|F_i(x) - y_i^\delta\|_{Y_i}^2 + \alpha_n \|x - x_0\|^2$$

about $x_n^\delta$, and consider the unconstrained optimization problem

$$\Phi_n^\delta(\Delta x) := \sum_{i=1}^N \|F_i(x_n^\delta) - y_i^\delta + F_i'(x_n^\delta)\Delta x\|_{Y_i}^2 + \alpha_n \|x_n^\delta - x_0 + \Delta x\|^2 \to \min_{\Delta x \in X},$$

where $F_i'(x_n^\delta)$ stands for the Fréchet derivative of $F_i(x)$ computed at $x_n^\delta$.

Finding $\Delta x$ from the equation $\partial \Phi_n^\delta/\partial(\Delta x) = 0$, we determine the next approximation as $x_{n+1}^\delta = x_n^\delta + \Delta x$, or

$$x_{n+1}^\delta = x_n^\delta - \Big(\sum_{i=1}^N F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \alpha_n I\Big)^{-1} \Big[\sum_{i=1}^N F_i'(x_n^\delta)^* (F_i(x_n^\delta) - y_i^\delta) + \alpha_n (x_n^\delta - x_0)\Big]. \quad (5)$$
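In finite dimensions, one step of the classical IRGNM (5) is a single regularized linear solve. The following sketch (names are illustrative, and the example is a small well-posed toy system rather than an ill-posed one, used only to exercise the update formula):

```python
import numpy as np

def irgnm_step(F, J, x_n, x0, y_delta, alpha_n):
    """One iterate of the classical IRGNM (5) for F : R^m -> R^N.

    F : callable returning the stacked vector (F_1(x), ..., F_N(x))
    J : callable returning the N x m Jacobian F'(x)
    Solves the m x m regularized normal equations directly.
    """
    Jn = J(x_n)
    M = Jn.T @ Jn + alpha_n * np.eye(x_n.size)
    rhs = Jn.T @ (F(x_n) - y_delta) + alpha_n * (x_n - x0)
    return x_n - np.linalg.solve(M, rhs)

# Toy system: x1^2 + x2^2 = 1, x1 = 0.6, with solution (0.6, 0.8).
F = lambda x: np.array([x[0]**2 + x[1]**2, x[0]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, 0.0]])
x0 = np.array([0.5, 0.5])
y_delta = np.array([1.0, 0.6])
x = x0.copy()
for n in range(30):                      # alpha_n -> 0 geometrically
    x = irgnm_step(F, J, x, x0, y_delta, alpha_n=0.5 * 0.7**n)
```

As $\alpha_n \to 0$ the anchoring term vanishes and the iteration behaves like a damped Gauss–Newton method.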

However, in some cases it is much more computationally convenient to apply the IRGNM to each subproblem (2) synchronously, i.e. to find in parallel

$$x_{n+1,i}^\delta = x_n^\delta - \big(F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \beta_n I\big)^{-1} \big(F_i'(x_n^\delta)^* (F_i(x_n^\delta) - y_i^\delta) + \beta_n (x_n^\delta - x_0^i)\big), \quad (6)$$

$i = 1, \dots, N$, where $\beta_n := \alpha_n/N$, and then define the next approximation as an average of the intermediate approximations $x_{n+1,i}^\delta$, i.e.

$$x_{n+1}^\delta = \frac{1}{N} \sum_{i=1}^N x_{n+1,i}^\delta. \quad (7)$$

Although each step (6) consists of exactly one iterate of the IRGNM applied to subproblem (2), the convergence of the ordinary IRGNM (5) does not necessarily imply the convergence of the parallel iteratively regularized Gauss–Newton method (PIRGNM) (6), (7).
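Steps (6) and (7) can be exercised on a miniature version of the first numerical experiment from Section 3 (here $m = 6$ and $N = 2$ instead of $m = 10^5$, $N = 64$, with noise-free data). This is only a sketch: the inner loop is written sequentially, but its $N$ iterations are independent, and they are what would be distributed across processors:

```python
import numpy as np

# Miniature instance of the paper's first test problem (sizes illustrative).
m, N, beta0 = 6, 2, 0.2
x_true = np.zeros(m); x_true[0] = 1.0

def A(k):                       # (2k-1)-diagonal matrices of the experiment
    i, j = np.indices((m, m))
    return (np.abs(i - j) <= k - 1).astype(float)

def b(k):
    v = np.zeros(m); v[:k] = 8.0
    return v

Fs = [lambda x, k=k: x @ A(k) @ x + b(k) @ x for k in (1, 2)]
grads = [lambda x, k=k: 2 * A(k) @ x + b(k) for k in (1, 2)]
y = [9.0, 9.0]                  # F_k(x_true) = 1 + 8 = 9, noise level 0
x0s = [np.array([0.95, 0, 0, 0, 0, 0.0]),
       np.array([0.95, -0.05, 0, 0, 0, 0.0])]

x = np.zeros(m); x[0] = 0.5     # x_0^delta, as in the experiments
for n in range(15):
    beta_n = beta0 * 0.5 ** n   # mimics beta_n = alpha_n / N of Section 3
    steps = []
    for i in range(N):          # the N solves are independent -> parallel
        g = grads[i](x)
        M = np.outer(g, g) + beta_n * np.eye(m)
        rhs = g * (Fs[i](x) - y[i]) + beta_n * (x - x0s[i])
        steps.append(x - np.linalg.solve(M, rhs))
    x = sum(steps) / N          # averaging step (7)
```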

In the next section, we will study the stopping rule and the convergence of the PIRGNM (6), (7)


2454 P.K Anh and V.T Dzung

2 Convergence analysis

For the convenience of the reader, we collect some facts necessary for deriving an error estimate for the approximate solutions. We begin with a particular case of Lemma 2.4 of [5], whose proof is straightforward.

Lemma 2.1 Let $\{\gamma_n\}$ be a sequence of nonnegative numbers satisfying the relations

$$\gamma_{n+1} \le a + b\gamma_n + c\gamma_n^2, \quad n \ge 0,$$

for some $a, b, c > 0$. Let $M_+ := (1 - b + \sqrt{(1-b)^2 - 4ac})/(2c)$ and $M_- := (1 - b - \sqrt{(1-b)^2 - 4ac})/(2c)$. If $b + 2\sqrt{ac} < 1$ and $\gamma_0 \le M_+$, then $\gamma_n \le l := \max\{\gamma_0, M_-\}$ for all $n \ge 0$.

Proof Clearly, $\gamma_0 \le l$. Suppose $\gamma_k \le l$; then $\gamma_{k+1} - l \le a + b\gamma_k + c\gamma_k^2 - l \le a + (b-1)l + cl^2 \le 0$, because $l \in [M_-, M_+]$; hence $\gamma_{k+1} \le l$. It follows by induction that $\gamma_n \le l$ for all $n \ge 0$. $\square$
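A quick numerical check of Lemma 2.1 (the constants a, b, c below are arbitrary choices satisfying the hypothesis of the lemma):

```python
import math

# Saturate the recursion gamma_{n+1} = a + b*gamma_n + c*gamma_n^2 (worst
# case: equality) and verify gamma_n <= l := max(gamma_0, M_minus).
a, b, c = 0.05, 0.3, 0.5
assert b + 2 * math.sqrt(a * c) < 1           # hypothesis of the lemma
disc = math.sqrt((1 - b) ** 2 - 4 * a * c)
M_plus = (1 - b + disc) / (2 * c)
M_minus = (1 - b - disc) / (2 * c)

gamma = 0.9 * M_plus                          # gamma_0 <= M_plus
l = max(gamma, M_minus)
for n in range(100):
    gamma = a + b * gamma + c * gamma ** 2
    assert gamma <= l + 1e-12                 # the bound of the lemma
```

The iterates in fact decrease monotonically toward the fixed point $M_-$ of the quadratic map.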

Lemma 2.2 Let $A$ be a bounded linear operator on a Hilbert space $H$. Then, for every $\beta > 0$, the following estimates hold:

(i) $\beta^{1-\mu} \|(A^*A + \beta I)^{-1} (A^*A)^\mu v\| \le \mu^\mu (1-\mu)^{1-\mu} \|v\| \le \|v\|$, for any fixed $\mu \in (0, 1]$ and $v \in H$;
(ii) $\|(A^*A + \beta I)^{-1}\| \le 1/\beta$;
(iii) $\|(A^*A + \beta I)^{-1} A^*\| \le \frac{1}{2} \beta^{-1/2}$;
(iv) $\|A (A^*A + \beta I)^{-1} (A^*A)^{1/2}\| \le 1$.

The proofs of the estimates in Lemma 2.2 can be found in [12, pp. 72, 81, 82].
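These spectral estimates are easy to spot-check numerically for a random matrix (here with $\mu = 1/2$ in item (i); the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
beta = 0.3
G = A.T @ A                                   # A*A, a 7 x 7 PSD matrix
Rinv = np.linalg.inv(G + beta * np.eye(7))    # (A*A + beta I)^{-1}
spec = lambda M: np.linalg.norm(M, 2)         # spectral (operator) norm

# (A*A)^{1/2} via eigendecomposition of the PSD matrix G
w, V = np.linalg.eigh(G)
G_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

assert spec(np.sqrt(beta) * Rinv @ G_half) <= 0.5 + 1e-12   # (i), mu = 1/2
assert spec(Rinv) <= 1 / beta + 1e-12                       # (ii)
assert spec(Rinv @ A.T) <= 0.5 / np.sqrt(beta) + 1e-12      # (iii)
assert spec(A @ Rinv @ G_half) <= 1 + 1e-12                 # (iv)
```

Each bound follows from the scalar inequality it encodes, e.g. (iii) is $\sigma/(\sigma^2 + \beta) \le 1/(2\sqrt{\beta})$ by the AM–GM inequality.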

In what follows, the parameters $\alpha_n$ are chosen such that

$$\alpha_n > 0, \quad \alpha_n \to 0 \quad \text{and} \quad 1 \le \frac{\alpha_n}{\alpha_{n+1}} \le \rho \quad \text{for some } \rho > 1.$$

Let $B_r(x_0)$ denote the closed ball centred at $x_0$ with radius $r > 0$ in $X$. Before stating a convergence theorem, we make some widely used assumptions.

Assumption 2.1 System (1) has an exact solution $x^\dagger \in B_r(x_0)$, and the $F_i$, $i = 1, 2, \dots, N$, are continuously differentiable in $B_{2r}(x_0)$.

Assumption 2.2 The following componentwise source condition (cf. [6, p. 8]) holds:

$$x^\dagger - x_0^i = \big(F_i'(x^\dagger)^* F_i'(x^\dagger)\big)^\mu v_i, \quad (8)$$

where $0 < \mu \le 1$ and $x_0^i \in B_{2r}(x_0)$, $v_i \in X$, $1 \le i \le N$. Moreover, suppose that

(i) if $0 < \mu \le \frac{1}{2}$, then the $F_i$, $i = 1, 2, \dots, N$, satisfy the following condition (see [11,14]): for all $x, z \in B_{2r}(x_0)$ and all $v \in X$, there exists $h_i(x, z, v) \in X$ such that

$$(F_i'(x) - F_i'(z)) v = F_i'(z) h_i(x, z, v); \quad \|h_i(x, z, v)\| \le K_0 \|x - z\| \, \|v\|; \quad (9)$$

(ii) if $\frac{1}{2} < \mu \le 1$, then the $F_i'$ are Lipschitz continuous, i.e.

$$\|F_i'(x) - F_i'(\tilde{x})\| \le L \|x - \tilde{x}\| \quad (10)$$

for all $x, \tilde{x} \in B_{2r}(x_0)$.


The assumption (8) is rather restrictive, and it requires the choice of appropriate initial guesses $x_0^i$, $i = 1, \dots, N$. Further, since the vectors $v_i$ in Equation (8) do not occur in the iteration process (6) and (7), they need not be known explicitly.

Define the stopping index $N_\delta$ in the PIRGNM (6) and (7) as the first number $n$ satisfying the condition $\eta \beta_n^{\mu + 1/2} \le \delta$, i.e.

$$\eta \beta_{N_\delta}^{\mu + 1/2} \le \delta < \eta \beta_n^{\mu + 1/2}, \quad 0 \le n < N_\delta, \quad (11)$$

where $\beta_n = \alpha_n/N$ and $\eta > 0$ is a fixed parameter. This stopping rule is an a priori one and has only a theoretical meaning, since it depends on $\mu$, which is often not available in practice. However, at present an a posteriori stopping rule for the PIRGNM has not been established.

The parallel algorithm proposed here consists of the following steps:

(1) Give an initial approximation $x_0^\delta$ and set $n := 0$.
(2) Compute in parallel the vectors $x_{n+1,i}^\delta$, $1 \le i \le N$, by Equation (6), where the given initial guesses $x_0^i$, $i = 1, \dots, N$, are associated with the componentwise source condition (8).
(3) Define $x_{n+1}^\delta$ by Equation (7).
(4) If $n > N_\delta$, where $N_\delta$ is the stopping index defined by Equation (11), then stop. Else put $n := n + 1$ and return to Step (2).
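The steps above, including the a priori stopping rule (11), can be sketched as follows. This is an illustrative sketch for scalar equations $F_i : \mathbb{R}^m \to \mathbb{R}$ with a geometric parameter choice $\alpha_n = \alpha_0 \rho^{-n}$; the toy problem, its small deterministic "noise", and all names are assumptions, chosen so that the component guesses are compatible with the source condition (8):

```python
import numpy as np

def pirgnm(Fs, grads, x_start, x0s, y_delta, alpha0, rho, mu, eta, delta,
           n_max=50):
    """PIRGNM driver: steps (6), average (7), stopping rule (11).

    alpha_n = alpha0 * rho**(-n); beta_n = alpha_n / N.  The cap n_max is a
    safety net for very small delta, where (11) fires only late (or never
    for delta = 0).
    """
    N, x, n = len(Fs), x_start.copy(), 0
    while n < n_max:
        beta_n = alpha0 * rho ** (-n) / N
        if eta * beta_n ** (mu + 0.5) <= delta:   # stopping index (11)
            break
        steps = []
        for i in range(N):                        # parallelizable over i
            g = grads[i](x)
            M = np.outer(g, g) + beta_n * np.eye(x.size)
            rhs = g * (Fs[i](x) - y_delta[i]) + beta_n * (x - x0s[i])
            steps.append(x - np.linalg.solve(M, rhs))
        x = sum(steps) / N                        # averaging step (7)
        n += 1
    return x, n

# Linear toy system a_i^T x = y_i in R^5 with x_true = e_1; the guesses
# satisfy x_true - x0_i in span(a_i), a linear analogue of condition (8).
a1, a2 = np.array([1.0, 0, 0, 0, 0]), np.array([1.0, 1, 0, 0, 0])
Fs = [lambda x: a1 @ x, lambda x: a2 @ x]
grads = [lambda x: a1, lambda x: a2]
x_true = np.array([1.0, 0, 0, 0, 0])
x0s = [x_true - 0.1 * a1, x_true - 0.1 * a2]
y_delta = [1.0 + 0.03, 1.0 - 0.02]                # delta ~ 0.05
x, n_stop = pirgnm(Fs, grads, x0s[0].copy(), x0s, y_delta,
                   alpha0=1.0, rho=2.0, mu=0.5, eta=1.0, delta=0.05)
```

With these parameters $\beta_n = 2^{-(n+1)}$, so rule (11) first fires at $n = 4$.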

Theorem 2.3 Let Assumptions 2.1 and 2.2 hold, and let the stopping index $n_* = N_\delta$ be chosen according to Equation (11). If $\sum_{i=1}^N \|v_i\|$ and $\eta$ are sufficiently small and $x_0^\delta = x_0$ is close enough to $x^\dagger$, then there holds the estimate

$$\|x_{N_\delta}^\delta - x^\dagger\| = O\big(\delta^{2\mu/(2\mu+1)}\big). \quad (12)$$

Proof We follow the techniques used in [5,6,11,12] to estimate the distance between $x_n^\delta$ and $x^\dagger$. Let $x_n^\delta \in B_r(x^\dagger)$, and denote $A_i := F_i'(x^\dagger)$, $A_{in} := F_i'(x_n^\delta)$, $e_n := x_n^\delta - x^\dagger$ and $e_{n+1}^i := x_{n+1,i}^\delta - x^\dagger$.

From Equation (6), we get $e_{n+1}^i = e_n - (A_{in}^* A_{in} + \beta_n I)^{-1} \big(A_{in}^* (F_i(x_n^\delta) - y_i^\delta) + \beta_n (x_n^\delta - x_0^i)\big)$, or

$$e_{n+1}^i = (A_{in}^* A_{in} + \beta_n I)^{-1} \big[\beta_n (x_0^i - x^\dagger) + A_{in}^* (y_i^\delta - y_i) - A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n)\big]. \quad (13)$$

Depending on the value of $\mu$, we consider two cases.

Case 1 Let $\mu \in (\frac{1}{2}, 1]$. Using the source condition (8) and taking into account the identity

$$(A_i^* A_i + \beta_n I)^{-1} - (A_{in}^* A_{in} + \beta_n I)^{-1} = -(A_{in}^* A_{in} + \beta_n I)^{-1} \big[(A_i^* - A_{in}^*) A_i + A_{in}^* (A_i - A_{in})\big] (A_i^* A_i + \beta_n I)^{-1},$$

we can rewrite $e_{n+1}^i$ as

$$\begin{aligned} e_{n+1}^i ={}& -\beta_n (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i \\ &+ \beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} \big[A_{in}^* (A_i - A_{in}) + (A_i^* - A_{in}^*) A_i\big] (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i \\ &- (A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n) + (A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i). \quad (14) \end{aligned}$$

According to Lemma 2.2, we have $\omega_{ni}(\mu) := \beta_n^{1-\mu} \|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| \le \mu^\mu (1-\mu)^{1-\mu} \|v_i\| \le \|v_i\|$ for $\mu \in (0, 1]$; $\|(A_{in}^* A_{in} + \beta_n I)^{-1}\| \le 1/\beta_n$; $\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^*\| \le \frac{1}{2} \beta_n^{-1/2}$; $\|A_i (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^{1/2}\| \le 1$; and $\|A_{in} - A_i\| = \|F_i'(x_n^\delta) - F_i'(x^\dagger)\| \le L \|e_n\|$, hence


$$\|F_i(x_n^\delta) - y_i - A_{in} e_n\|_{Y_i} = \|F_i(x_n^\delta) - F_i(x^\dagger) - F_i'(x_n^\delta) e_n\|_{Y_i} \le \tfrac{1}{2} L \|e_n\|^2,$$

therefore

$$\big\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n)\big\|_{Y_i} \le \tfrac{1}{2} \beta_n^{-1/2} \big(\tfrac{1}{2} L \|e_n\|^2\big). \quad (15)$$

Further,

$$\begin{aligned} T_1 :={}& \big\|\beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} \big[A_{in}^* (A_i - A_{in}) + (A_i^* - A_{in}^*) A_i\big] (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\big\| \\ \le{}& \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^*\| \, \|A_i - A_{in}\| \, \|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| \\ &+ \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1}\| \, \|A_i^* - A_{in}^*\| \, \|A_i (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^{1/2}\| \, \|(A_i^* A_i)^{\mu - 1/2} v_i\|. \end{aligned}$$

Thus,

$$T_1 \le L \|e_n\| \big(\tfrac{1}{2} \beta_n^{\mu - 1/2} \omega_{ni}(\mu) + \|(A_i^* A_i)^{\mu - 1/2} v_i\|\big). \quad (16)$$

Besides,

$$\big\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i)\big\| \le \tfrac{1}{2} \beta_n^{-1/2} \delta. \quad (17)$$

Finally,

$$\beta_n \big\|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\big\| = \beta_n^\mu \omega_{ni}(\mu). \quad (18)$$

Combining relations (13)–(18), we find

$$\|e_{n+1}^i\| \le \beta_n^\mu \omega_{ni}(\mu) + L \|e_n\| \big(\tfrac{1}{2} \beta_n^{\mu - 1/2} \omega_{ni}(\mu) + \|(A_i^* A_i)^{\mu - 1/2} v_i\|\big) + \tfrac{1}{2} \beta_n^{-1/2} \big(\tfrac{1}{2} L \|e_n\|^2 + \delta\big).$$

This, together with Equation (7), yields the estimate

$$\|e_{n+1}\| = \Big\|\frac{1}{N} \sum_{i=1}^N e_{n+1}^i\Big\| \le \frac{1}{N} \sum_{i=1}^N \Big[\beta_n^\mu \omega_{ni}(\mu) + L \|e_n\| \Big(\frac{1}{2} \beta_n^{\mu - 1/2} \omega_{ni}(\mu) + \|(A_i^* A_i)^{\mu - 1/2} v_i\|\Big) + \frac{1}{2} \beta_n^{-1/2} \Big(\frac{1}{2} L \|e_n\|^2 + \delta\Big)\Big].$$

Now, introducing the sequence $\gamma_n := \|e_n\|/\beta_n^\mu$ and observing that the stopping rule (11) implies $\delta < \eta \beta_n^{\mu + 1/2}$ for $0 \le n < N_\delta$, from the last inequality we have

$$\begin{aligned} \gamma_{n+1} \le{}& \frac{1}{N} \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \sum_{i=1}^N \omega_{ni}(\mu) + \frac{L}{2N} \frac{\|e_n\|}{\beta_n^\mu} \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \beta_n^{\mu - 1/2} \sum_{i=1}^N \omega_{ni}(\mu) + \frac{L}{N} \frac{\|e_n\|}{\beta_n^\mu} \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \sum_{i=1}^N \|(A_i^* A_i)^{\mu - 1/2} v_i\| \\ &+ \frac{L}{4} \beta_n^{\mu - 1/2} \Big(\frac{\|e_n\|}{\beta_n^\mu}\Big)^2 \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu + \frac{\eta}{2} \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \\ \le{}& \frac{\rho^\mu}{N} \sum_{i=1}^N \omega_{ni}(\mu) + \frac{\rho^\mu \eta}{2} + \frac{L \gamma_n}{2N} \rho^\mu \beta_0^{\mu - 1/2} \sum_{i=1}^N \omega_{ni}(\mu) + \frac{L \gamma_n}{N} \rho^\mu \sum_{i=1}^N \|(A_i^* A_i)^{\mu - 1/2} v_i\| + \frac{L}{4} \beta_0^{\mu - 1/2} \rho^\mu \gamma_n^2 \\ \le{}& \rho^\mu \Big(\frac{1}{N} \sum_{i=1}^N \|v_i\| + \frac{\eta}{2}\Big) + L \rho^\mu \Big(\frac{1}{2N} \beta_0^{\mu - 1/2} \sum_{i=1}^N \|v_i\| + \frac{1}{N} \sum_{i=1}^N \|(A_i^* A_i)^{\mu - 1/2} v_i\|\Big) \gamma_n + \frac{L}{4} \beta_0^{\mu - 1/2} \rho^\mu \gamma_n^2. \end{aligned}$$

Here we use the inequality $\omega_{ni}(\mu) \le \|v_i\|$, together with $\beta_n \le \beta_0$. Besides, the above-defined constant $\rho$ satisfies the inequality $\rho \ge \alpha_n/\alpha_{n+1} \ge 1$.


Hence

$$\gamma_{n+1} \le a + b \gamma_n + c \gamma_n^2, \quad (19)$$

where

$$a = \Big(\frac{1}{N} \sum_{i=1}^N \|v_i\| + \frac{\eta}{2}\Big) \rho^\mu; \quad b = \Big(\frac{1}{2N} \beta_0^{\mu - 1/2} \sum_{i=1}^N \|v_i\| + \frac{1}{N} \sum_{i=1}^N \|(A_i^* A_i)^{\mu - 1/2} v_i\|\Big) L \rho^\mu; \quad c = \frac{L}{4} \beta_0^{\mu - 1/2} \rho^\mu.$$

If $\sum_{i=1}^N \|v_i\|$ and $\eta$ are small enough, then $a$ and $b$ will be small, hence

$$b + 2\sqrt{ac} < 1. \quad (20)$$

Now if $x_0$ is sufficiently close to $x^\dagger$, then $\gamma_0 = \beta_0^{-\mu} \|x_0^\delta - x^\dagger\| = \beta_0^{-\mu} \|x_0 - x^\dagger\| \le M_+ := (1 - b + \sqrt{(1-b)^2 - 4ac})/(2c)$. Lemma 2.1 applied to the inequality (19) ensures that $\gamma_n := \|e_n\|/\beta_n^\mu \le l := \max\{\gamma_0, M_-\}$ for $0 \le n \le N_\delta$, where $M_- = (1 - b - \sqrt{(1-b)^2 - 4ac})/(2c) = 2a/(1 - b + \sqrt{(1-b)^2 - 4ac})$. In particular, $\|x_{n+1}^\delta - x^\dagger\| = \|e_{n+1}\| = \gamma_{n+1} \beta_{n+1}^\mu \le l \beta_0^\mu$.

Observe that $\gamma_0 \beta_0^\mu = \|x_0 - x^\dagger\| \le r$. Since $a$ is small, from Equation (20) we find $M_- \beta_0^\mu = 2a \beta_0^\mu/(1 - b + \sqrt{(1-b)^2 - 4ac}) \le r$; therefore $l \beta_0^\mu \le r$, hence $x_{n+1}^\delta \in B_r(x^\dagger)$. Thus, for the case $\frac{1}{2} < \mu \le 1$, the estimate $\gamma_n \le l$ yields $\|e_n\| \le l \beta_n^\mu = l \alpha_n^\mu/N^\mu = O(\alpha_n^\mu)$ for $0 \le n \le n_* := N_\delta$.

Case 2 Let $\mu \in (0, \frac{1}{2}]$ and condition (9) hold. First observe that

$$F_i(x_n^\delta) - y_i - F_i'(x_n^\delta)(x_n^\delta - x^\dagger) = \int_0^1 \big(F_i'(x^\dagger + t(x_n^\delta - x^\dagger)) - F_i'(x_n^\delta)\big)(x_n^\delta - x^\dagger)\, dt = \int_0^1 F_i'(x_n^\delta) h_i^t \, dt = F_i'(x_n^\delta) \int_0^1 h_i^t \, dt,$$

where $h_i^t := h_i(x^\dagger + t(x_n^\delta - x^\dagger), x_n^\delta, x_n^\delta - x^\dagger)$ and $\|\int_0^1 h_i^t \, dt\|_{Y_i} \le (K_0/2) \|x_n^\delta - x^\dagger\|^2$. From Equation (13), we find

$$\|e_{n+1}^i\| \le \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1} (x_0^i - x^\dagger)\| + \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i)\| + \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* A_{in}\| \, \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2.$$

Thus,

$$\|e_{n+1}^i\| \le \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1} (x_0^i - x^\dagger)\| + \frac{\delta}{2} \beta_n^{-1/2} + \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2.$$

This, together with the source condition (8) and the estimate

$$\beta_n \big\|(A_{in}^* A_{in} + \beta_n I)^{-1} - (A_i^* A_i + \beta_n I)^{-1}\big\| \le 2 K_0 \|x_n^\delta - x^\dagger\|$$

(see [11, Lemma 4.2, p. 1613]), gives

$$\|e_{n+1}^i\| \le \beta_n \|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| + 2 K_0 \|x_n^\delta - x^\dagger\| \, \|x_0^i - x^\dagger\| + \frac{\delta}{2} \beta_n^{-1/2} + \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2 \le \beta_n^\mu \omega_{ni}(\mu) + 2 K_0 \|e_n\| \, \|(A_i^* A_i)^\mu v_i\| + \frac{\delta}{2} \beta_n^{-1/2} + \frac{K_0}{2} \|e_n\|^2.$$


Setting $\gamma_n := \|e_n\|/\beta_n^\mu$, from the last relations we find

$$\gamma_{n+1} = \frac{\|e_{n+1}\|}{\beta_{n+1}^\mu} \le \frac{1}{N} \sum_{i=1}^N \frac{\|e_{n+1}^i\|}{\beta_{n+1}^\mu} \le \frac{1}{N} \sum_{i=1}^N \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \omega_{ni}(\mu) + \frac{2 K_0}{N} \frac{\|e_n\|}{\beta_n^\mu} \Big(\frac{\beta_n}{\beta_{n+1}}\Big)^\mu \sum_{i=1}^N \|(A_i^* A_i)^\mu v_i\| + \frac{\delta}{2} \beta_n^{-1/2} \frac{1}{\beta_{n+1}^\mu} + \frac{K_0}{2} \Big(\frac{\|e_n\|}{\beta_n^\mu}\Big)^2 \frac{\beta_n^{2\mu}}{\beta_{n+1}^\mu}.$$

The stopping rule (11) ensures that, for $0 \le n < N_\delta$,

$$\gamma_{n+1} \le \frac{\rho^\mu}{N} \sum_{i=1}^N \omega_{ni}(\mu) + \frac{2 K_0 \rho^\mu}{N} \gamma_n \sum_{i=1}^N \|(A_i^* A_i)^\mu v_i\| + \frac{\eta}{2} \rho^\mu + \frac{K_0}{2} \rho^\mu \beta_0^\mu \gamma_n^2.$$

Thus, $\gamma_{n+1} \le a + b \gamma_n + c \gamma_n^2$, where $a = \rho^\mu \big((1/N) \sum_{i=1}^N \|v_i\| + \eta/2\big)$, $b = (2 K_0 \rho^\mu/N) \sum_{i=1}^N \|(A_i^* A_i)^\mu v_i\|$ and $c = (K_0/2) \rho^\mu \beta_0^\mu$.

Again, if $\sum_{i=1}^N \|v_i\|$ and $\eta$ are sufficiently small and $x_0^\delta = x_0$ is close enough to $x^\dagger$, then, arguing similarly as in Case 1, we can show that $x_{n+1}^\delta \in B_r(x^\dagger)$ and $\|x_n^\delta - x^\dagger\| = O(\alpha_n^\mu)$ for $0 \le n \le N_\delta$. Thus, in both cases, for $0 \le n \le N_\delta$ we have

$$\|x_n^\delta - x^\dagger\| = O(\alpha_n^\mu).$$

Let $n = n_* := N_\delta$; then $\eta \beta_n^{\mu + 1/2} = \eta (\alpha_n/N)^{\mu + 1/2} \le \delta$, hence $\alpha_n^\mu \le N^\mu (\delta/\eta)^{\mu/(\mu + 1/2)}$, and therefore

$$\|x_{N_\delta}^\delta - x^\dagger\| = O\big(\delta^{2\mu/(2\mu + 1)}\big). \quad \square$$

3 Numerical experiments

Underdetermined systems of equations arise in a variety of problems, such as nonlinear complementarity problems, problems of finding interior points of polytopes, image processing, etc.

We consider a simultaneous underdetermined system of nonlinear equations

$$F_i(x_1, \dots, x_m) = y_i, \quad i = 1, \dots, N, \quad (21)$$

where $F_i : \mathbb{R}^m \to \mathbb{R}$ and $m \gg N$.

First, we rewrite Equation (6) as

$$x_{n+1,i}^\delta = x_0^i + \big(F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \beta_n I\big)^{-1} F_i'(x_n^\delta)^* \big(y_i^\delta - F_i(x_n^\delta) - F_i'(x_n^\delta)(x_0^i - x_n^\delta)\big). \quad (22)$$

Here $F_i'(x) = (\partial F_i/\partial x_1, \dots, \partial F_i/\partial x_m)$, $i = 1, \dots, N$, are row vectors.

Further, noting that $(F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \beta_n I_X)^{-1} F_i'(x_n^\delta)^* = F_i'(x_n^\delta)^* (F_i'(x_n^\delta) F_i'(x_n^\delta)^* + \beta_n I_{Y_i})^{-1}$, where $I_X$ and $I_{Y_i}$ are the identity operators on the spaces $X$ and $Y_i$, respectively, we have

$$x_{n+1,i}^\delta = x_0^i + F_i'(x_n^\delta)^* \big(F_i'(x_n^\delta) F_i'(x_n^\delta)^* + \beta_n I_{Y_i}\big)^{-1} \big(y_i^\delta - F_i(x_n^\delta) - F_i'(x_n^\delta)(x_0^i - x_n^\delta)\big). \quad (23)$$

Taking into account that $(F_i'(x_n^\delta) F_i'(x_n^\delta)^* + \beta_n I_{Y_i})^{-1} = \big(\|F_i'(x_n^\delta)\|^2 + \beta_n\big)^{-1}$ (a scalar, since $Y_i = \mathbb{R}$), we can rewrite formula (6) as

$$x_{n+1,i}^\delta = x_0^i + \frac{F_i'(x_n^\delta)^T \big(y_i^\delta - F_i(x_n^\delta) - F_i'(x_n^\delta)(x_0^i - x_n^\delta)\big)}{\|F_i'(x_n^\delta)\|^2 + \beta_n}, \quad i = 1, \dots, N, \quad (24)$$

where the symbol $T$ denotes transposition of a matrix or a vector and the Euclidean norm is used. The next approximation $x_{n+1}^\delta$ is defined by Equation (7) as before.
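The equivalence between the m × m solve in (22) and the scalar formula (24) can be verified directly. The data below are arbitrary toy values with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
m, beta = 50, 0.1
g = rng.standard_normal(m)       # gradient row F_i'(x_n), as a vector
x0_i = rng.standard_normal(m)    # component initial guess x_0^i
resid = 0.7                      # y_i^d - F_i(x_n) - F_i'(x_n)(x_0^i - x_n)

# (22): solve the m x m regularized system
lhs = np.outer(g, g) + beta * np.eye(m)
x22 = x0_i + np.linalg.solve(lhs, g * resid)

# (24): closed form, one dot product instead of an O(m^3) solve
x24 = x0_i + g * resid / (g @ g + beta)

assert np.allclose(x22, x24)
```

The identity behind it is $(gg^T + \beta I)^{-1} g = g/(g^T g + \beta)$, a rank-one special case of the duality used in (23).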


Denoting $F = (F_1, \dots, F_N)^T$ and $y^\delta = (y_1^\delta, \dots, y_N^\delta)^T$, and observing that $F'(x)^T F'(x) = \sum_{i=1}^N F_i'(x)^T F_i'(x)$, by a similar argument as in Equation (23) we can reduce Equation (5) to

$$x_{n+1}^\delta = x_0 + F'(x_n^\delta)^T \big(F'(x_n^\delta) F'(x_n^\delta)^T + \alpha_n I\big)^{-1} \big(y^\delta - F(x_n^\delta) - F'(x_n^\delta)(x_0 - x_n^\delta)\big). \quad (25)$$

At each iteration step, the IRGNM (5) requires solving an m × m system of linear equations, which is time consuming when m is very large. On the other hand, using formula (25), we need only solve an N × N system of linear equations, while in the PIRGNM the components $x_{n+1,i}^\delta$ are computed by the explicit formula (24) in parallel; hence the algorithm (24), (7) can give a satisfactory result within a reasonable computing time.
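The reduction behind (25) is the "push-through" identity $J^T (J J^T + \alpha I_N)^{-1} = (J^T J + \alpha I_m)^{-1} J^T$, which trades the m × m solve for an N × N one. A quick check with random data (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, alpha = 4, 200, 0.05
J = rng.standard_normal((N, m))  # Jacobian F'(x_n), with N << m
r = rng.standard_normal(N)       # y^d - F(x_n) - F'(x_n)(x_0 - x_n)
x0 = rng.standard_normal(m)

# m x m solve (the expensive route of formula (5))
x_big = x0 + np.linalg.solve(J.T @ J + alpha * np.eye(m), J.T @ r)
# N x N solve (the reduced formula (25))
x_small = x0 + J.T @ np.linalg.solve(J @ J.T + alpha * np.eye(N), r)

assert np.allclose(x_big, x_small)
```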

For the sake of simplicity, we choose for our experiments $m = 10^5$, $N = 64$, $x^\dagger = (1, 0, \dots, 0)^T$ and $x_0^\delta = (0.5, 0, \dots, 0)^T$, so that $\|x_0^\delta - x^\dagger\| = 0.5$, and $\alpha_n = 0.2 \cdot 64 \cdot (0.5)^n$.

In all the experiments, the matrix $[F'(x)]^T [F'(x)]$ will be singular, hence the Newton method and its parallel modification (see [8,15]) may not converge, and therefore the IRGNM should be used. However, due to formula (25), at each step the IRGNM requires solving a 64 × 64 system of linear equations. On the other hand, the application of the PIRGNM to Equation (21) leads to the simple explicit formulae (24).

All the numerical experiments were performed on a LINUX cluster 1350 with eight computing nodes. Each node contains two dual-core Intel Xeon 3.2 GHz processors and 2 GB RAM. All the programs are written in C.

We evaluate the accuracy of the IRGNM and the PIRGNM using the relative error norm (REN), i.e. REN := $\|x_n^\delta - x^\dagger\|/\|x^\dagger\|$. In our examples, $\|x^\dagger\| = 1$, hence REN = $\|x_n^\delta - x^\dagger\|$.

The notations used in the tables are as follows:

$T_p$: time of the parallel execution on $p$ processors, in seconds.
$T_s$: time of the sequential execution, in seconds.
$S_p = T_s/T_p$: speedup.
$E_p = S_p/p$: efficiency of the parallel computation using $p$ processors.
$n_{\min}$: the first number $n$ at which the REN of the corresponding method is less than a given tolerance.
$N_\delta$: the stopping index defined by Equation (11).
$\eta$: a fixed small positive parameter in the stopping rule (11).

For the first experiment, we consider the following system of equations:

$$F_k(x) := x^T A_k x + b_k^T x = y_k, \quad k = 1, \dots, 64,$$

where the matrices $A_k$ are $(2k-1)$-diagonal with the entries

$$a_{ij}^{(k)} = \begin{cases} 1, & |i - j| \le k - 1, \\ 0, & \text{otherwise.} \end{cases}$$

Further, let $b_k = (8, \dots, 8, 0, \dots, 0)^T$, $k = 1, \dots, 64$, where the component 8 in the vector $b_k$ repeats exactly $k$ times. Finally, the right-hand sides are $y_k = 9 + \chi_k$, where the entries of $\chi_k$ are normally distributed random numbers with zero mean, scaled to yield the noise level δ.
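At a small scale, the construction of this test problem and the value of the noise-free data can be checked directly (here with $m = 12$ instead of $10^5$; a sketch, not the authors' C implementation):

```python
import numpy as np

m = 12
x_true = np.zeros(m); x_true[0] = 1.0

def A(k):
    # (2k-1)-diagonal matrix: a_ij = 1 iff |i - j| <= k - 1
    i, j = np.indices((m, m))
    return (np.abs(i - j) <= k - 1).astype(float)

def b(k):
    v = np.zeros(m); v[:k] = 8.0   # the entry 8 repeated k times
    return v

def F(k, x):
    return x @ A(k) @ x + b(k) @ x

# At x_true = e_1: x^T A_k x = a_11 = 1 and b_k^T x = 8, so F_k = 9.
for k in (1, 2, 5):
    assert np.isclose(F(k, x_true), 9.0)
assert np.count_nonzero(A(2)) == 3 * m - 2   # A_2 is tridiagonal
```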

In this case, the source condition (8) holds with $\mu = 1$, and the initial guesses are $x_0^k = (0.95, -0.05, \dots, -0.05, 0, \dots, 0)^T$, $k = 1, \dots, 64$, where the entry $-0.05$ in $x_0^k$ occurs exactly $k - 1$ times. Moreover, all the derivatives $F_i'(x)$ are Lipschitz continuous.

Table 1 gives the RENs of the PIRGNM and the IRGNM, as well as their execution times in sequential mode. For solving the systems of linear equations in the IRGNM, we used the Cholesky method. The table shows that, within a given tolerance, the PIRGNM is less time consuming than the IRGNM. Table 2 reports the stopping indices of the PIRGNM and verifies the conclusion of Theorem 2.3 that $\|x_n^\delta - x^\dagger\| = O(\delta^{2/3})$, where $n = N_\delta$.


Table 1. RENs and execution times in sequential mode with η= 2.

Table 2. Stopping indices of the PIRGNM with η= 0.02.

Finally, Table 3 gives the efficiency and the speedup of the PIRGNM in parallel mode.

For our second experiment, we take $F_0(x) = x_1^2 + x_2^2 + \cdots + x_m^2 + 8 x_1$ and

$$F_i(x) = \sum_{j=1}^{m-i} x_j x_{j+i} + 10 \sum_{j=1}^{i} x_j + 9 x_{i+1}, \quad i = 1, \dots, 63.$$

The right-hand sides are $y_0 = 9 + \chi_0$ and $y_i = 10 + \chi_i$, $i = 1, \dots, 63$, where the entries of $\chi_i$, $i \ge 0$, are again normally distributed random numbers with zero mean, scaled to yield the noise level δ.
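Again, the noise-free data can be checked at a small scale (here $m = 10$; 0-based indexing, so `x[i]` is $x_{i+1}$ in the paper's notation):

```python
import numpy as np

m = 10
x = np.zeros(m); x[0] = 1.0      # the exact solution (1, 0, ..., 0)

F0 = x @ x + 8 * x[0]            # F_0(x) = sum x_j^2 + 8 x_1

def F(i, x):
    # F_i(x) = sum_{j=1}^{m-i} x_j x_{j+i} + 10 sum_{j=1}^{i} x_j + 9 x_{i+1}
    return x[: m - i] @ x[i:] + 10 * x[:i].sum() + 9 * x[i]

# At the exact solution: F_0 = 1 + 8 = 9 and F_i = 10 (only the 10*x_1 term
# survives), matching y_0 = 9 and y_i = 10 in the noise-free case.
assert np.isclose(F0, 9.0)
for i in range(1, 6):
    assert np.isclose(F(i, x), 10.0)
```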

Clearly, in this case the source condition (8) is satisfied with an exponent $\mu = 1$, and the initial guesses are $x_0^0 = (0.5, 0, \dots, 0)^T$ and $x_0^i = (0.5, -0.5, \dots, -0.5, 0, \dots, 0)^T$, $i = 1, \dots, 63$, where the number $-0.5$ in $x_0^i$ repeats exactly $i$ times. Observe that in this example all the derivatives $F_i'(x)$ are Lipschitz continuous and the initial guesses $x_0^i$ need not be close to the exact solution $x^\dagger$. Tables 4 and 5 for the second experiment are similar to Tables 1 and 2, respectively.

Table 3. Efficiency and speedup of the PIRGNM.

Table 4. RENs and execution times in sequential mode with η= 0.4.
