
Arzuka et al. Journal of Inequalities and Applications (2016) 2016:325
DOI 10.1186/s13660-016-1239-1

RESEARCH  Open Access

A scaled three-term conjugate gradient method for unconstrained optimization

Ibrahim Arzuka*, Mohd R Abu Bakar and Wah June Leong

*Correspondence: arzukaibrahim@yahoo.com
Institute for Mathematical Research, Universiti Putra Malaysia, Serdang, Selangor 43400, Malaysia

Abstract

Conjugate gradient methods play an important role in many fields of application due to their simplicity, low memory requirements, and global convergence properties. In this paper, we propose an efficient three-term conjugate gradient method obtained by utilizing the DFP update for the inverse Hessian approximation, which satisfies both the sufficient descent and the conjugacy conditions. The basic philosophy is that the DFP update is restarted with a multiple of the identity matrix in every iteration. An acceleration scheme is incorporated in the proposed method to enhance the reduction in function value. Numerical results from an implementation of the proposed method on some standard unconstrained optimization problems show that the proposed method is promising and exhibits superior numerical performance in comparison with other well-known conjugate gradient methods.

Keywords: unconstrained optimization; nonlinear conjugate gradient method; quasi-Newton methods

1 Introduction

In this paper, we are interested in solving nonlinear large scale unconstrained optimization problems of the form

$$ \min_{x\in\mathbb{R}^n} f(x), $$

where $f:\mathbb{R}^n\to\mathbb{R}$ is an at least twice continuously differentiable function. A nonlinear conjugate gradient method is an iterative scheme that generates a sequence $\{x_k\}$ of approximations to the solution of this problem, using the recurrence

$$ x_{k+1} = x_k + \alpha_k d_k, $$

where $\alpha_k>0$ is the steplength determined by a line search strategy which either minimizes the function or reduces it sufficiently along the search direction, and $d_k$ is the search direction defined by

$$ d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} $$


where $g_k$ is the gradient of $f$ at the point $x_k$ and $\beta_k$ is a scalar known as the conjugate gradient parameter. For example, Fletcher and Reeves (FR) [], Polak-Ribiere-Polyak (PRP) [], Liu and Storey (LS) [], Hestenes and Stiefel (HS) [], Dai and Yuan (DY) [], and Fletcher (CD) [] used update parameters, respectively, given by

$$ \beta_k^{FR} = \frac{g_k^Tg_k}{g_{k-1}^Tg_{k-1}}, \qquad \beta_k^{PRP} = \frac{g_k^Ty_{k-1}}{g_{k-1}^Tg_{k-1}}, \qquad \beta_k^{LS} = -\frac{g_k^Ty_{k-1}}{d_{k-1}^Tg_{k-1}}, $$

$$ \beta_k^{HS} = \frac{g_k^Ty_{k-1}}{d_{k-1}^Ty_{k-1}}, \qquad \beta_k^{DY} = \frac{g_k^Tg_k}{d_{k-1}^Ty_{k-1}}, \qquad \beta_k^{CD} = -\frac{g_k^Tg_k}{d_{k-1}^Tg_{k-1}}, $$

where $y_{k-1} = g_k - g_{k-1}$. If the objective function is quadratic, then with an exact line search the performances of these methods are equivalent; for a nonlinear objective function different $\beta_k$ lead to different performance in practice.
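For concreteness, a small NumPy sketch of these update parameters might look as follows (the variable names and the guard against division by zero are ours, not part of the original formulas):

```python
import numpy as np

def cg_betas(g, g_prev, d_prev, eps=1e-12):
    """Classical conjugate gradient parameters at iteration k.

    g      : gradient g_k
    g_prev : gradient g_{k-1}
    d_prev : search direction d_{k-1}
    """
    y_prev = g - g_prev                     # y_{k-1} = g_k - g_{k-1}
    gg_prev = max(g_prev @ g_prev, eps)     # g_{k-1}^T g_{k-1}
    dg_prev = d_prev @ g_prev               # d_{k-1}^T g_{k-1}
    dy_prev = d_prev @ y_prev               # d_{k-1}^T y_{k-1}
    return {
        "FR": (g @ g) / gg_prev,
        "PRP": (g @ y_prev) / gg_prev,
        "LS": -(g @ y_prev) / dg_prev,
        "HS": (g @ y_prev) / dy_prev,
        "DY": (g @ g) / dy_prev,
        "CD": -(g @ g) / dg_prev,
    }
```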

Over the years, following the practical convergence results of Al-Baali [] and later of Gilbert and Nocedal [], the attention of researchers has been on developing conjugate gradient methods that possess the sufficient descent condition

$$ g_k^Td_k \le -c\|g_k\|^2 $$

for some constant $c>0$. For instance, the CG-DESCENT method of Hager and Zhang [] uses

$$ \beta_k^{HZ} = \max\bigl\{\beta_k^{N},\, \eta_k\bigr\}, $$

where

$$ \beta_k^{N} = \frac{1}{d_{k-1}^Ty_{k-1}}\left(y_{k-1} - 2d_{k-1}\frac{\|y_{k-1}\|^2}{d_{k-1}^Ty_{k-1}}\right)^{T}g_k $$

and

$$ \eta_k = -\frac{1}{\|d_{k-1}\|\min\{\|g_{k-1}\|,\eta\}}, $$

with $\eta>0$ a constant, which is based on a modification of the HS method.
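A minimal sketch of this update, under our reading of the formula, might be as follows (the default value of eta below is an illustrative choice for the CG-DESCENT constant, not a value taken from this paper):

```python
import numpy as np

def beta_hager_zhang(g, g_prev, d_prev, eta=0.01):
    """CG-DESCENT parameter beta^HZ_k = max{beta^N_k, eta_k}."""
    y_prev = g - g_prev
    dy = d_prev @ y_prev                              # d_{k-1}^T y_{k-1}
    beta_n = (y_prev - 2.0 * d_prev * (y_prev @ y_prev) / dy) @ g / dy
    eta_k = -1.0 / (np.linalg.norm(d_prev) * min(np.linalg.norm(g_prev), eta))
    return max(beta_n, eta_k)
```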

Another important class of conjugate gradient methods is the so-called three-term conjugate gradient method, in which the search direction is determined as a linear combination of $g_{k+1}$, $s_k$, and $y_k$ of the form

$$ d_{k+1} = -g_{k+1} + \tau_1 s_k + \tau_2 y_k, $$

where $\tau_1$ and $\tau_2$ are scalars. Among the three-term conjugate gradient methods in the literature we have those proposed by Zhang et al. [, ], obtained by considering a descent modified PRP and a descent modified HS conjugate gradient method:

d k+= –g k++



g T

k+y k

g T g k



d k



g T

k+d k

g T g k



y k,

Trang 3

d k+= –g k++



g k T+y k

s T

k y k



s k



g k T+s k

s T

k y k



y k,

where s k = x k+– x k An attractive property of these methods is that at each iteration, the

search direction satisfies the descent condition, namely g T

k d k = –c g kfor some constant

c>  In the same manner, Andrei [] considers the development of a three-term

conju-gate gradient method from the BFGS updating scheme of the inverse Hessian

approxima-tion restarted as an identity matrix at every iteraapproxima-tion where the search direcapproxima-tion is given

by

d k+= –g k++y

T

k g k+

y T

k s k



y– y k

y T

k s k

T s T

k g k+

y T

k s k

s k



s T

k g k+

y T

k s k



y k

An interesting feature of this method is that both the sufficient descent and the conjugacy conditions are satisfied, and global convergence holds for a uniformly convex function. Motivated by the good performance of three-term conjugate gradient methods, we are interested in developing a three-term conjugate gradient method which satisfies both the sufficient descent condition and the conjugacy condition and which is globally convergent. The remaining part of this paper is structured as follows: Section 2 deals with the derivation of the proposed method. In Section 3, we present the global convergence properties. The numerical results and discussion are reported in Section 4. Finally, a concluding remark is given in the last section.

2 Conjugate gradient method via memoryless quasi-Newton method

In this section, we describe the proposed method, which satisfies both the sufficient descent and the conjugacy conditions. Let us consider the DFP method, which is a quasi-Newton method belonging to the Broyden class []. The search direction in quasi-Newton methods is given by

$$ d_k = -H_kg_k, $$

where $H_k$ is the inverse Hessian approximation updated within the Broyden class. This class consists of several updating schemes, the most famous being the BFGS and the DFP; if $H_k$ is updated by the DFP scheme, then

$$ H_{k+1} = H_k + \frac{s_ks_k^T}{s_k^Ty_k} - \frac{H_ky_ky_k^TH_k}{y_k^TH_ky_k}, $$

such that the secant equation

$$ H_{k+1}y_k = s_k $$

is satisfied. This method is also known as a variable metric method, developed by Davidon [] and Fletcher and Powell []. A remarkable property of this method is that it is a conjugate direction method and one of the best quasi-Newton methods, encompassing the advantages of both the Newton method and the steepest descent method while avoiding their shortcomings [].


solving (), where at every step the inverse Hessian approximation is updated as an

iden-tity matrix Thus, the search direction can be determined without requiring the storage of

any matrix It was proposed by Shanno [] and Perry [] The classical conjugate

gradi-ent methods PRP [] and FR [] can be seen as memoryless BFGS (see Shanno []) We

proposed our three-term conjugate gradient method by incorporating the DFP updating

scheme of the inverse Hessian approximation (), within the frame of a memoryless

quasi-Newton method where at each iteration the inverse Hessian approximation is restarted as

a multiple of the identity matrix with a positive scaling parameter as

Q k+= μ k I+s k s

T k

s T k y k – μ k

y k y T k

y T k y k

and thus, the search direction is given by

d k+= –Q k+g k+= –μ k g k+–s

T

k g k+

s T k y k

s k + μ k

y T

k g k+

y T k y k

Various strategies can be considered for deriving the scaling parameter $\mu_k$; we prefer the following one, which is due to Wolkowicz []:

$$ \mu_k = \frac{s_k^Ts_k}{y_k^Ts_k} - \sqrt{\left(\frac{s_k^Ts_k}{y_k^Ts_k}\right)^2 - \frac{s_k^Ts_k}{y_k^Ty_k}}. $$

The new search direction is then given by

$$ d_{k+1} = -\mu_kg_{k+1} - \varphi_1s_k + \varphi_2y_k, $$

where

$$ \varphi_1 = \frac{s_k^Tg_{k+1}}{s_k^Ty_k} \quad\text{and}\quad \varphi_2 = \mu_k\frac{y_k^Tg_{k+1}}{y_k^Ty_k}. $$
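A small NumPy sketch of this computation is given below (the names are ours, and we assume $s_k^Ty_k>0$, which holds under the convexity assumption used later):

```python
import numpy as np

def stcg_direction(g_next, s, y):
    """Scaled three-term direction d_{k+1} = -mu*g_{k+1} - phi1*s_k + phi2*y_k."""
    ss, sy, yy = s @ s, s @ y, y @ y
    mu = ss / sy - np.sqrt(max((ss / sy) ** 2 - ss / yy, 0.0))  # Wolkowicz scaling
    phi1 = (s @ g_next) / sy
    phi2 = mu * (y @ g_next) / yy
    return -mu * g_next - phi1 * s + phi2 * y
```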

We present the algorithm of the proposed method as follows.

2.1 Algorithm (STCG)

In this section, we present the algorithm of the proposed method. It has been reported that the line search in a conjugate gradient method performs more function evaluations in order to obtain a desirable steplength $\alpha_k$, due to poor scaling of the search direction (see Nocedal []). As a consequence, we incorporate the acceleration scheme proposed by Andrei [], so as to obtain some reduction in the number of function evaluations. The new approximation to the minimum, instead of $x_{k+1} = x_k + \alpha_kd_k$, is determined by

$$ x_{k+1} = x_k + \vartheta_k\alpha_kd_k, $$

where $\vartheta_k = -\frac{r_k}{q_k}$, $r_k = \alpha_kg_k^Td_k$, $q_k = -\alpha_k(g_k - g_z)^Td_k = -\alpha_ky_k^Td_k$, $g_z = \nabla f(z)$, and $z = x_k + \alpha_kd_k$.


Algorithm 1 (STCG)

Step 1. Select an initial point $x_0$ and compute $f(x_0)$ and $g(x_0)$. Set $d_0 = -g_0$ and $k = 0$.

Step 2. Test the stopping criterion $\|g_k\| \le \epsilon$. If it is satisfied, stop; otherwise go to Step 3.

Step 3. Determine the steplength $\alpha_k$ as follows. Given $\delta\in(0,1)$ and $p_1$, $p_2$ with $0 < p_1 < p_2 < 1$:
(i) Set $\alpha = 1$.
(ii) Test the relation $f(x_k + \alpha d_k) - f(x_k) \le \alpha\delta g_k^Td_k$.
(iii) If it is satisfied, set $\alpha_k = \alpha$ and go to Step 4; otherwise choose a new $\alpha\in[p_1\alpha, p_2\alpha]$ and go to (ii).

Step 4. Compute $z = x_k + \alpha_kd_k$, $g_z = \nabla f(z)$, and $y_k = g_k - g_z$.

Step 5. Compute $r_k = \alpha_kg_k^Td_k$ and $q_k = -\alpha_ky_k^Td_k$.

Step 6. If $q_k \ne 0$, set $\vartheta_k = -\frac{r_k}{q_k}$ and $x_{k+1} = x_k + \vartheta_k\alpha_kd_k$; otherwise set $x_{k+1} = x_k + \alpha_kd_k$.

Step 7. Determine the search direction $d_{k+1}$ from the three-term formula of Section 2, with $\mu_k$, $\varphi_1$, and $\varphi_2$ computed as defined there.

Step 8. Set $k := k + 1$ and go to Step 2.

3 Convergence analysis

In this section, we analyze the global convergence of the proposed method, where we assume that $g_k \ne 0$ for all $k \ge 0$, since otherwise a stationary point has been obtained. First of all, we show that the search direction satisfies the sufficient descent and the conjugacy conditions. In order to present the results, the following assumption is needed.

Assumption 1 The objective function $f$ is convex and the gradient $g$ is Lipschitz continuous on the level set

$$ K = \bigl\{x\in\mathbb{R}^n : f(x) \le f(x_0)\bigr\}. $$

Then there exist some positive constants $\psi_1$, $\psi_2$, and $L$ such that

$$ \bigl\|g(x) - g(y)\bigr\| \le L\|x - y\| $$

and

$$ \psi_1\|z\|^2 \le z^TG(x)z \le \psi_2\|z\|^2 $$

for all $z\in\mathbb{R}^n$ and $x, y\in K$, where $G(x)$ is the Hessian matrix of $f$.

Under Assumption 1, we can easily deduce that

$$ \psi_1\|s_k\|^2 \le s_k^Ty_k \le \psi_2\|s_k\|^2, $$

where $s_k^Ty_k = s_k^T\bar{G}s_k$ with $\bar{G} = \int_0^1G(x_k + \lambda s_k)\,d\lambda$. We begin by showing that the updating matrix $Q_{k+1}$ is positive definite.

Lemma . Suppose that Assumption  holds; then the matrix () is positive definite.

Proof In order to show that the matrix () is positive definite we need to show that μ kis

well defined and bounded First, by the Cauchy-Schwarz inequality we have



s T k s k

y T

k s k

 – s

T

k s k

y T

k y k =

(s T k s k )((s T k s k )(y T k y k ) – (y T k s k))

(y T

k s k)(y T

k y k)

≥ ,

and this implies that the scaling parameter μ kis well defined It follows that

 < μ k=s

T

k s k

y T

k s k



s T k s k

y T

k s k

 – s

T

k s k

y T

k y k



s T k s k

y T

k s k

s k

ψs k = 

ψ When the scaling parameter is positive and bounded above, then for any non-zero vector

p∈ nwe obtain

p T Q k+p = μ k p T pI+p

T s k s T

k p

s T

k y k – μ k

p T y k y T

k p

y T

k y k

= μ k

(p T p )(y T

k y k ) – p T y k y T

k p

y T k y k

+(p

T s k)

s T k y k

By the Cauchy-Schwarz inequality and (), we have (p T p )(y T

k y k ) – (p T y k )(y T

k p)≥  and

y T

k s k> , which implies that the matrix () is positive definite∀k ≥ .

Observe also that

$$ \operatorname{tr}(Q_{k+1}) = \operatorname{tr}(\mu_kI) + \frac{s_k^Ts_k}{s_k^Ty_k} - \mu_k\frac{y_k^Ty_k}{y_k^Ty_k} = (n-1)\mu_k + \frac{s_k^Ts_k}{s_k^Ty_k} \le \frac{n-1}{\psi_1} + \frac{\|s_k\|^2}{\psi_1\|s_k\|^2} = \frac{n}{\psi_1}. $$

Now,

$$ 0 < \frac{1}{\psi_2} \le \frac{s_k^Ts_k}{y_k^Ts_k} \le \operatorname{tr}(Q_{k+1}) \le \frac{n}{\psi_1}. $$

Thus, tr(Q k+) is bounded On the other hand, by the Sherman-Morrison House-Holder

formula (Q–k+ is actually the memoryless updating matrix updated from μ

k I using the direct DFP formula), we can obtain

Q–k+= 

μ k I– 

μ k

y k s T

k + s k y T k

s T y k

+



 + 

μ k

s T

k s k

s T y k



y k y T k

s T y k

We can also establish the boundedness of $\operatorname{tr}(Q_{k+1}^{-1})$. From the formula above,

$$ \operatorname{tr}\bigl(Q_{k+1}^{-1}\bigr) = \frac{n}{\mu_k} - \frac{2}{\mu_k} + \frac{\|y_k\|^2}{s_k^Ty_k} + \frac{1}{\mu_k}\,\frac{\|s_k\|^2\|y_k\|^2}{(s_k^Ty_k)^2}. $$

Under Assumption 1 we have $\|y_k\| \le L\|s_k\|$ and $s_k^Ty_k \ge \psi_1\|s_k\|^2$, and moreover $\mu_k \ge \frac{s_k^Ty_k}{2y_k^Ty_k} \ge \frac{\psi_1}{2L^2}$ (using $a - \sqrt{a^2-b} = b/(a+\sqrt{a^2-b}) \ge b/(2a)$), so that every term on the right-hand side is bounded. Hence there exists a positive constant $\omega$, depending only on $n$, $L$, $\psi_1$, and $\psi_2$, such that

$$ \operatorname{tr}\bigl(Q_{k+1}^{-1}\bigr) \le \omega. $$

Now, we state the sufficient descent property of the proposed search direction in the following lemma.

Lemma 3.2 Suppose that Assumption 1 holds on the objective function $f$; then the search direction $d_{k+1}$ satisfies the sufficient descent condition $g_{k+1}^Td_{k+1} \le -c\|g_{k+1}\|^2$.

Proof Since

$$ -g_{k+1}^Td_{k+1} \ge \frac{\|g_{k+1}\|^2}{\operatorname{tr}(Q_{k+1}^{-1})} $$

(see, for example, Leong [] and Babaie-Kafaki []), the bound $\operatorname{tr}(Q_{k+1}^{-1}) \le \omega$ gives

$$ g_{k+1}^Td_{k+1} \le -\frac{1}{\omega}\|g_{k+1}\|^2, $$

where $c = \min\{1, \frac{1}{\omega}\}$. Thus, the sufficient descent condition holds.

Dai-Liao [] extended the classical conjugacy condition from y T

k d k+=  to

y T k d k+= –t

s T k g k+



where t≥  Thus, we can also show that our proposed method satisfies the above

Lemma . Suppose that Assumption  holds, then the search direction () satisfies the

conjugacy condition()

Proof By (), we obtain

y T k d k+= –μy T k g k+–s

T

k g k+

s T k y k

y T k s k + μ y

T

k g k+

y T k y k

y T k y k

= –μy T k g k+–s

T

k g k+

s T k y k

s T k y k + μ y

T

k g k+

y T k y k

y T k y k

= –μy T k g k+– s T k g k++ μy T k g k+

= –s T g k+,

Trang 8

where the result holds for t =  The following lemma gives the boundedness of the search
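A quick numerical verification of this identity (our own illustration, with random data adjusted so that $s_k^Ty_k>0$) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
s = rng.normal(size=n)
y = rng.normal(size=n)
if s @ y < 0:
    y = -y                     # ensure s^T y > 0, as under the convexity assumption
g_next = rng.normal(size=n)

ss, sy, yy = s @ s, s @ y, y @ y
mu = ss / sy - np.sqrt(max((ss / sy) ** 2 - ss / yy, 0.0))
d_next = -mu * g_next - ((s @ g_next) / sy) * s + mu * ((y @ g_next) / yy) * y

# Dai-Liao conjugacy with t = 1:  y^T d_{k+1} = -s^T g_{k+1}
print(np.isclose(y @ d_next, -(s @ g_next)))   # True
```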

Lemma . Suppose that Assumption  holds then there exists a constant p >  such that

d k+ ≤ Pg k+, where d k+is defined by()

Proof A direct result of () and the boundedness of tr(Q k+) gives

d k+ = Q k+g k+

≤ tr(Q k+)g k+

where P = ( ψ+n–

In order to establish the convergence result, we give the following lemma

Lemma . Suppose that Assumption  holds Then there exist some positive constants γ

and γsuch that for any steplength α k generated by Step  of Algorithm  will satisfy either

of the following:

f (x k + α k d k ) – f (x k)≤–γ(g T

k d k)

or

f (x k + α k d k ) – f (x k)≤ γg T

Proof Suppose that () is satisfied with α k= , then

f (x k + α k d k ) – f (x k)≤ δg T

implies that () is satisfied with γ= δ.

Suppose α k < , and that () is not satisfied Then for a steplength αα k

p we have

f (x k + αd k ) – f (x k ) > δαg T

Now, by the mean-value theorem there exists a scalar τ k∈ (, ) such that

f (x k + αd k ) – f (x k ) = αg(x k + τ αd k)T d k () From () we have

(δ – )αg T k d k < α

g (x k + τ k αd k ) – g kT

d k

= αy T k d k

< L

α d k ,

Trang 9

which implies

α≥ –( – δ)(g k T d k)

Now,

α k ≥ pα≥ –( – δ)(g k T d k)

Substituting () in () we have the following:

f (x k + α k d k ) – f (x k)≤ –δ ( – δ)(g k T d k)

L d k



g k T d k



=–γ(g

T

k d k)

d k , where

γ=δ ( – δ)

L .

Therefore

f (x k + α k d k ) – f (x k)≤–γ(g T

k d k)



Theorem . Suppose that Assumption  holds Then Algorithm  generates a sequence of

approximation {x k } such that

lim

Proof As a direct consequence of Lemma ., the sufficient descent property (), and the

boundedness of the search direction () we have

f (x k + α k d k ) – f (x k)≤–γ(g T

k d k)

d k

–γcg k

Pg k

=–γc

or

f (x k + α k d k ) – f (x k)≤ γg k T d k

Trang 10

Hence, in either case, there exists a positive constant γsuch that

Since the steplength α kgenerated by Algorithm  is bounded away from zero, () and

() imply that f (x k ) is a non-increasing sequence Thus, by the boundedness of f (x k) we

have

 = lim

k→∞



f (x k+) – f (x k)

≤ –γ lim

k→∞g k, and as a result

lim



4 Numerical results

In this section, we present the results obtained from a numerical experiment comparing our proposed method with the CG-DESCENT (CG-DESC) [], three-term Hestenes-Stiefel (TTHS) [], three-term Polak-Ribiere-Polyak (TTPRP) [], and TTCG [] methods. We evaluate the performance of these methods based on the number of iterations and function evaluations. Considering some standard unconstrained optimization test problems obtained from Andrei [], we conducted ten numerical experiments for each test function over a range of problem dimensions n. The algorithms were implemented as Matlab subroutines on an Intel(R) Core(TM) Duo PC. The program terminates whenever $\|g_k\| < \epsilon$ or when a method fails to converge within the prescribed maximum number of iterations; the latter case is represented by the symbol '-'. An Armijo-type line search suggested by Byrd and Nocedal [] was used for all the methods under consideration.

The table in the appendices gives the performance of the algorithms in terms of iterations and function evaluations. TTPRP solves % of the test problems, TTHS solves % of the test problems, CG-DESCENT solves % of the test problems, and STCG solves % of the test problems, whereas TTCG solves % of the test problems. The improvement of STCG over TTPRP is that TTPRP needs % and % more, on average, in terms of the number of iterations and function evaluations, respectively, than STCG. The improvement of STCG over TTHS is that STCG needs % and % less, on average, in terms of the number of iterations and function evaluations, respectively, than TTHS. The improvement of STCG over CG-DESCENT is that CG-DESCENT needs % and % more, on average, in terms of the number of iterations and function evaluations, respectively, than STCG. Similarly, the improvement of STCG over TTCG is that STCG needs % and % less, on average, in terms of the number of iterations and function evaluations, respectively, than TTCG.

In order to further examine the performance of these methods, we employ the performance profile of Dolan and Moré []. The figures give the performance profile plots of these methods in terms of iterations and function evaluations; the top curve corresponds to the method with the highest number of wins, which indicates that the performance of the proposed method is highly encouraging and that it substantially outperforms the other methods considered.
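For reference, a Dolan-Moré performance profile can be computed from a cost matrix (problems by solvers) as in the following sketch; the data below are placeholders, not the results reported in this paper:

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More profile rho_s(tau) = fraction of problems where
    cost[p, s] <= tau * min_s cost[p, s].  Failures are np.inf."""
    costs = np.asarray(costs, dtype=float)
    ratios = costs / np.min(costs, axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= t) for s in range(costs.shape[1])]
                     for t in taus])

# Placeholder data: 4 problems x 3 solvers (e.g., iteration counts)
costs = np.array([[10, 12, np.inf],
                  [25, 20, 30],
                  [7, 9, 8],
                  [40, 35, 50]])
print(performance_profile(costs, taus=[1.0, 1.5, 2.0]))
```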


References

1. Fletcher, R, Reeves, CM: Function minimization by conjugate gradients. Comput. J. 7(2), 149-154 (1964)
2. Polak, E, Ribiere, G: Note sur la convergence de méthodes de directions conjuguées. ESAIM: Math. Model. Numer. Anal. 3(R1), 35-43 (1969)
3. Liu, Y, Storey, C: Efficient generalized conjugate gradient algorithms, part 1: theory. J. Optim. Theory Appl. 69(1), 129-137 (1991)
4. Hestenes, MR: The conjugate gradient method for solving linear systems. In: Proc. Symp. Appl. Math. VI, pp. 83-102. American Mathematical Society (1956)
5. Dai, Y-H, Yuan, Y: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177-182 (1999)
6. Fletcher, R: Practical Methods of Optimization. John Wiley & Sons, New York (2013)
7. Al-Baali, M: Descent property and global convergence of the Fletcher-Reeves method with inexact line search. IMA J. Numer. Anal. 5(1), 121-124 (1985)
8. Gilbert, JC, Nocedal, J: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2(1), 21-42 (1992)
9. Hager, WW, Zhang, H: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170-192 (2005)
10. Zhang, L, Zhou, W, Li, D-H: A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26(4), 629-640 (2006)
11. Zhang, L, Zhou, W, Li, D: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22(4), 697-711 (2007)
12. Andrei, N: On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 219(11), 6316-6327 (2013)
13. Broyden, C: Quasi-Newton methods and their application to function minimisation. Math. Comput. 21, 368-381 (1967)
14. Davidon, WC: Variable metric method for minimization. SIAM J. Optim. 1(1), 1-17 (1991)
17. Shanno, DF: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3(3), 244-256 (1978)
18. Perry, JM: A class of conjugate gradient algorithms with a two step variable metric memory. Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston (1977)
22. Leong, WJ, San Goh, B: Convergence and stability of line search methods for unconstrained optimization. Acta Appl. Math. 127(1), 155-167 (2013)
23. Babaie-Kafaki, S: A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization. 4OR 11(4), 361-374 (2013)
24. Dai, Y-H, Liao, L-Z: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43(1), 87-101 (2001)
27. Dolan, ED, Moré, JJ: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201-213 (2002)