
A new adaptive conjugate gradient algorithm for large-scale unconstrained optimization




DOCUMENT INFORMATION

Basic information

Title: A new adaptive conjugate gradient algorithm for large-scale unconstrained optimization
Author: Neculai Andrei
Institution: Research Institute for Informatics, Center for Advanced Modeling and Optimization
Field: Unconstrained Optimization
Document type: Research Paper
Year: 2023
City: Bucharest
Pages: 13
File size: 808.5 KB


Contents



A new adaptive conjugate gradient algorithm for large-scale unconstrained optimization

Neculai Andrei

Research Institute for Informatics, Center for Advanced Modeling and Optimization, 8-10, Averescu Avenue, Bucharest 1, Romania,

Academy of Romanian Scientists. E-mail: nandrei@ici.ro

Abstract. An adaptive conjugate gradient algorithm is presented. The search direction is computed as the sum of the negative gradient and a vector determined by minimizing the quadratic approximation of the objective function at the current point. Using a special approximation of the inverse Hessian of the objective function, which depends on a positive parameter, we get a search direction which satisfies both the sufficient descent condition and the Dai-Liao conjugacy condition. The parameter in the search direction is determined in an adaptive manner by clustering the eigenvalues of the matrix defining it. The global convergence of the algorithm is proved for uniformly convex functions. Using a set of 800 unconstrained optimization test problems we show that our algorithm is significantly more efficient and more robust than the CG-DESCENT algorithm. By solving five applications from the MINPACK-2 test problem collection, with $10^6$ variables, we show that the suggested adaptive conjugate gradient algorithm is a top performer versus CG-DESCENT.

Keywords: Unconstrained optimization; adaptive conjugate gradient method; sufficient descent condition; conjugacy condition; eigenvalues clustering; numerical comparisons.

Dedication. This paper is dedicated to Prof. Boris T. Polyak on the occasion of his 80th birthday. Prof. Polyak's contributions to linear and nonlinear optimization methods, linear algebra, numerical mathematics, and linear and nonlinear control systems are well known. His articles and books give careful attention to both mathematical rigor and practical relevance. In all his publications he proves to be a refined expert in understanding the nature, purpose and limitations of nonlinear optimization algorithms and of applied mathematics in general. It is my great pleasure and honour to dedicate this paper to Prof. Polyak, a pioneer and a great contributor in his areas of interest.

1 Introduction

For solving the large-scale unconstrained optimization problem

$\min\{ f(x) : x \in \mathbb{R}^n \},$  (1)

where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, we consider the following algorithm:

$x_{k+1} = x_k + \alpha_k d_k,$  (2)

where the step size $\alpha_k$ is positive and the directions $d_k$ are computed using the updating formula:

$d_{k+1} = -g_{k+1} + u_{k+1}.$  (3)

Here, $g_k = \nabla f(x_k)$ and $u_{k+1} \in \mathbb{R}^n$ is a vector to be determined. Usually, in (2), the steplength $\alpha_k$ is computed using the Wolfe line search conditions [32, 33]:

$f(x_k + \alpha_k d_k) - f(x_k) \le \rho\, \alpha_k\, g_k^T d_k,$  (4)

$g_{k+1}^T d_k \ge \sigma\, g_k^T d_k,$  (5)

where $0 < \rho < \sigma < 1$. Also, the strong Wolfe line search conditions, consisting of (4) and the following strengthened version of (5):

$|g_{k+1}^T d_k| \le -\sigma\, g_k^T d_k,$  (6)

can be used.

Observe that (3) is a general updating formula for the search direction computation. The following particularizations of (3) can be presented. If $u_{k+1} = 0$, then we get the steepest descent algorithm. If $u_{k+1} = g_{k+1} - \nabla^2 f(x_{k+1})^{-1} g_{k+1}$, then the Newton method is obtained. Besides, if $u_{k+1} = g_{k+1} - B_{k+1}^{-1} g_{k+1}$, where $B_{k+1}$ is an approximation of the Hessian $\nabla^2 f(x_{k+1})$, then we find the quasi-Newton methods. On the other hand, if $u_{k+1} = \beta_k d_k$, where $\beta_k$ is a scalar and $d_0 = -g_0$, the family of conjugate gradient algorithms is generated.

In this paper we focus on the conjugate gradient method. This method was introduced by Hestenes and Stiefel [20] and Stiefel [29] ($\beta_k^{HS} = g_{k+1}^T y_k / (y_k^T d_k)$) to minimize positive definite quadratic objective functions. (Here $y_k = g_{k+1} - g_k$.) This algorithm for solving positive definite linear algebraic systems of equations is known as the linear conjugate gradient method. Later, the algorithm was generalized to the nonlinear conjugate gradient method, in order to minimize arbitrary differentiable nonlinear functions, by Fletcher and Reeves [13] ($\beta_k^{FR} = \|g_{k+1}\|^2 / \|g_k\|^2$), Polak and Ribière [25] and Polyak [26] ($\beta_k^{PRP} = g_{k+1}^T y_k / \|g_k\|^2$), Dai and Yuan [11] ($\beta_k^{DY} = \|g_{k+1}\|^2 / (y_k^T d_k)$), and others. An impressive number of nonlinear conjugate gradient algorithms have been established, and a lot of papers have been published on this subject, insisting both on theoretical and computational aspects. An excellent survey of the development of different versions of nonlinear conjugate gradient methods, with special attention to global convergence properties, is presented by Hager and Zhang [19].
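For reference, the classical conjugate gradient parameters quoted above can be collected in a few lines of Python (an illustrative sketch, not from the paper; the input vectors stand for $g_k$, $g_{k+1}$ and $d_k$):

```python
import numpy as np

def cg_betas(g_old, g_new, d_old):
    """Classical conjugate gradient parameters; y = g_{k+1} - g_k."""
    y = g_new - g_old
    return {
        "HS":  g_new.dot(y) / d_old.dot(y),          # Hestenes-Stiefel
        "FR":  g_new.dot(g_new) / g_old.dot(g_old),  # Fletcher-Reeves
        "PRP": g_new.dot(y) / g_old.dot(g_old),      # Polak-Ribiere-Polyak
        "DY":  g_new.dot(g_new) / d_old.dot(y),      # Dai-Yuan
    }

g_old = np.array([1.0, -2.0, 0.5])
g_new = np.array([0.3, -0.1, 0.2])
print(cg_betas(g_old, g_new, d_old=-g_old))
```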

In this paper we consider another approach to generate an efficient and robust conjugate gradient algorithm. We suggest a procedure for computing $u_{k+1}$ by minimizing the quadratic approximation of the function $f$ at $x_{k+1}$ and by using a special representation of the inverse Hessian which depends on a positive parameter. The parameter in the matrix representing the search direction is determined in an adaptive manner by minimizing the largest eigenvalue of this matrix. The idea, taken from the linear conjugate gradient method, is to cluster the eigenvalues of the matrix representing the search direction.

The algorithm and its properties are presented in Section 2. We prove that the search direction used by this algorithm satisfies both the sufficient descent condition and the Dai and Liao conjugacy condition [9]. Using standard assumptions, Section 3 presents the global convergence of the algorithm for uniformly convex functions. In Section 4 the numerical comparisons of our algorithm versus the CG-DESCENT conjugate gradient algorithm [17] are presented. The computational results, for a set of 800 unconstrained optimization test problems, show that this new algorithm substantially outperforms CG-DESCENT, being more efficient and more robust. Considering five applications from the MINPACK-2 test problem collection [4], with $10^6$ variables, we show that our algorithm is more efficient and more robust than CG-DESCENT.


2 The algorithm

In this section we describe the algorithm and its properties. Let us consider that at the $k$-th iteration of the algorithm an inexact Wolfe line search is executed, that is, the step-length $\alpha_k$ satisfying (4) and (5) is computed. With these, the elements $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$ are computed. Now, let us take the quadratic approximation of the function $f$ at $x_{k+1}$ as

$\Phi_{k+1}(d) = f_{k+1} + g_{k+1}^T d + \frac{1}{2} d^T B_{k+1} d,$  (7)

where $B_{k+1}$ is an approximation of the Hessian $\nabla^2 f(x_{k+1})$ of the function $f$ and $d$ is the direction to be determined. The search direction $d_{k+1}$ is computed as in (3), where $u_{k+1}$ is obtained as the solution of the following minimization problem:

$\min_{u_{k+1} \in \mathbb{R}^n} \Phi_{k+1}(d_{k+1}).$  (8)

Introducing $d_{k+1}$ from (3) in the minimization problem (8), $u_{k+1}$ is obtained as

$u_{k+1} = g_{k+1} - B_{k+1}^{-1} g_{k+1}.$  (9)
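To make the step from (8) to (9) explicit (a short derivation sketch, assuming $B_{k+1}$ is nonsingular), substitute (3) into (7) and set the gradient with respect to $u_{k+1}$ to zero:

$\nabla_{u_{k+1}} \Phi_{k+1}(-g_{k+1} + u_{k+1}) = g_{k+1} + B_{k+1}(-g_{k+1} + u_{k+1}) = 0 \;\Longrightarrow\; u_{k+1} = g_{k+1} - B_{k+1}^{-1} g_{k+1},$

which is exactly (9).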

Clearly, using different approximations $B_{k+1}$ of the Hessian $\nabla^2 f(x_{k+1})$, different search directions $d_{k+1}$ can be obtained. In this paper we consider the following expression of $B_{k+1}^{-1}$:

$B_{k+1}^{-1} = I - \frac{s_k y_k^T - y_k s_k^T}{y_k^T s_k} + \omega_k \frac{s_k s_k^T}{y_k^T s_k},$  (10)

where $\omega_k$ is a positive parameter which is to be determined. Observe that $B_{k+1}^{-1}$ is the sum of a skew-symmetric matrix with zero diagonal elements, $(y_k s_k^T - s_k y_k^T)/(y_k^T s_k)$, and a symmetric and positive definite one, $I + \omega_k (s_k s_k^T)/(y_k^T s_k)$.

Now, from (9) we get:

$u_{k+1} = \frac{(g_{k+1}^T y_k)\, s_k - (g_{k+1}^T s_k)\, y_k}{y_k^T s_k} - \omega_k \frac{g_{k+1}^T s_k}{y_k^T s_k}\, s_k.$  (11)

Therefore, using (11) in (3), the search direction can be expressed as

$d_{k+1} = -H_{k+1} g_{k+1},$  (12)

where

$H_{k+1} = I - \frac{s_k y_k^T - y_k s_k^T}{y_k^T s_k} + \omega_k \frac{s_k s_k^T}{y_k^T s_k}.$  (13)

Observe that the search direction (12), where $H_{k+1}$ is given by (13) and is obtained by using the expression (10) of the inverse Hessian $B_{k+1}^{-1}$, can be written explicitly as:

$d_{k+1} = -g_{k+1} + \frac{(g_{k+1}^T y_k)\, s_k - (g_{k+1}^T s_k)\, y_k}{y_k^T s_k} - \omega_k \frac{g_{k+1}^T s_k}{y_k^T s_k}\, s_k.$  (14)

Proposition 2.1. Suppose that $\omega_k > 0$ and that the step length $\alpha_k$ in (2) is determined by the Wolfe line search conditions (4) and (5). Then the search direction (14) satisfies the descent condition $g_{k+1}^T d_{k+1} < 0$.

Proof. By direct computation, since $\omega_k > 0$, we get:

$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 - \omega_k \frac{(g_{k+1}^T s_k)^2}{y_k^T s_k} < 0.$ ∎

Proposition 2.2. Suppose that $\omega_k > 0$ and that the step length $\alpha_k$ in (2) is determined by the Wolfe line search conditions (4) and (5). Then the search direction (14) satisfies the Dai and Liao conjugacy condition $y_k^T d_{k+1} = -v_k (s_k^T g_{k+1})$, where $v_k > 0$.

Proof. By direct computation we have

$y_k^T d_{k+1} = -v_k (s_k^T g_{k+1}),$

where

$v_k = \omega_k + \frac{\|y_k\|^2}{y_k^T s_k}.$

By the Wolfe line search conditions (4) and (5) it follows that $y_k^T s_k > 0$, therefore $v_k > 0$. ∎

Observe that, although we have considered the expression (10) of the inverse Hessian, which is a non-symmetric matrix, the search direction (14) obtained in this manner satisfies both the descent condition and the Dai and Liao conjugacy condition. Therefore, the search direction (14) leads us to a genuine conjugate gradient algorithm. The expression (10) of the inverse Hessian is only a technical argument used to get the search direction (14). It is remarkable that, from (12), our method can also be considered as a quasi-Newton method in which the inverse Hessian, at each iteration, is expressed by the non-symmetric matrix $H_{k+1}$. More than this, the algorithm based on the search direction given by (14) can be considered as a three-term conjugate gradient algorithm.
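A small numerical check of Propositions 2.1 and 2.2 (a Python/NumPy sketch with random illustrative data, not part of the paper): for $y_k^T s_k > 0$ and $\omega_k > 0$, the direction (14) gives $g_{k+1}^T d_{k+1} < 0$ and $y_k^T d_{k+1} = -v_k (s_k^T g_{k+1})$ with $v_k = \omega_k + \|y_k\|^2/(y_k^T s_k)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if y.dot(s) < 0:
    y = -y                     # enforce y^T s > 0, as guaranteed by the Wolfe conditions
g = rng.standard_normal(n)     # plays the role of g_{k+1}
omega = 1.5                    # any positive parameter

ys = y.dot(s)
d = -g + (g.dot(y) * s - g.dot(s) * y) / ys - omega * g.dot(s) / ys * s   # direction (14)

v = omega + y.dot(y) / ys
print(g.dot(d) < 0)                            # True: descent (Proposition 2.1)
print(np.isclose(y.dot(d), -v * s.dot(g)))     # True: Dai-Liao conjugacy (Proposition 2.2)
```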

At this point, in order to define the algorithm, the only problem we face is to specify a suitable value for the positive parameter $\omega_k$. As we know, the convergence rate of nonlinear conjugate gradient algorithms depends on the structure of the eigenvalues of the Hessian and on the condition number of this matrix. The standard approach is based on a singular value study of the matrix $H_{k+1}$ (see for example [6]), i.e. the numerical performance and the efficiency of quasi-Newton methods are based on the condition number of the successive approximations of the inverse Hessian. A matrix with a large condition number is called an ill-conditioned matrix. Ill-conditioned matrices may produce instability in numerical computations. Unfortunately, many difficulties occur when applying this approach to general nonlinear optimization problems. Mainly, these difficulties are associated with the computation of the condition number of a matrix, which is based on its singular values, a difficult and laborious task. However, if the matrix $H_{k+1}$ is a normal matrix, then the analysis is simplified, because the condition number of a normal matrix is determined by its eigenvalues, which are easier to compute.

As we know, in a small neighborhood of the current point the nonlinear objective function in the unconstrained optimization problem (1) generally behaves like a quadratic one, for which the results from the linear conjugate gradient method apply. For faster convergence of linear conjugate gradient algorithms several situations can be exploited: the presence of isolated smallest and/or largest eigenvalues of the matrix $H_{k+1}$, as well as gaps inside the eigenvalue spectrum [5], clustering of the eigenvalues about one point [31] or about several points [22], or preconditioning [21]. If the eigenvalues of the matrix are contained in $m$ disjoint intervals of very small length, then the linear conjugate gradient method will produce a very small residual after $m$ iterations. This is an important property of the linear conjugate gradient method, and we try to use it in the nonlinear case in order to get efficient and robust conjugate gradient algorithms. Therefore, we consider the extension of the eigenvalue clustering technique for the matrix defining the search direction from linear conjugate gradient algorithms to the nonlinear case.
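The linear conjugate gradient property invoked here can be illustrated with a small experiment (a Python/NumPy sketch, not from the paper; the matrix size, cluster values and perturbation are arbitrary): for a symmetric positive definite matrix whose eigenvalues fall into three tight clusters, CG reduces the residual drastically after about three iterations:

```python
import numpy as np

def conjugate_gradient(A, b, iters):
    """Plain linear CG; returns the history of residual norms."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    history = [np.linalg.norm(r)]
    for _ in range(iters):
        Ap = A @ p
        alpha = r.dot(r) / p.dot(Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = r_new.dot(r_new) / r.dot(r)
        p = r_new + beta * p
        r = r_new
        history.append(np.linalg.norm(r))
    return history

rng = np.random.default_rng(1)
n, m = 201, 3                                   # eigenvalues clustered around m = 3 points
clusters = np.array([1.0, 10.0, 100.0])
eigs = np.repeat(clusters, n // m) * (1 + 1e-6 * rng.standard_normal(n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                     # SPD matrix with 3 tight eigenvalue clusters
b = rng.standard_normal(n)

print(conjugate_gradient(A, b, iters=5))        # the residual collapses after about 3 iterations
```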

The idea is to determine $\omega_k$ by clustering the eigenvalues of $H_{k+1}$ given by (13), namely by minimizing the largest eigenvalue in the spectrum of this matrix. The structure of the eigenvalues of the matrix $H_{k+1}$ is given by the following theorem.

Theorem 2.1. Let $H_{k+1}$ be defined by (13). Then $H_{k+1}$ is a nonsingular matrix and its eigenvalues consist of $1$ (with multiplicity $n-2$), $\lambda_{k+1}^{+}$ and $\lambda_{k+1}^{-}$, where

$\lambda_{k+1}^{+} = 1 + \frac{1}{2}\left[\omega_k b_k + \sqrt{\omega_k^2 b_k^2 - 4a_k + 4}\right],$  (15)

$\lambda_{k+1}^{-} = 1 + \frac{1}{2}\left[\omega_k b_k - \sqrt{\omega_k^2 b_k^2 - 4a_k + 4}\right],$  (16)

and

$a_k = \frac{\|s_k\|^2 \|y_k\|^2}{(y_k^T s_k)^2} \ge 1, \qquad b_k = \frac{\|s_k\|^2}{y_k^T s_k} > 0.$  (17)

Proof. By the Wolfe line search conditions (4) and (5) we have $y_k^T s_k > 0$. Therefore, the vectors $y_k$ and $s_k$ are nonzero. Let $V$ be the vector space spanned by $\{s_k, y_k\}$. Clearly, $\dim(V) \le 2$ and $\dim(V^{\perp}) \ge n-2$. Thus, there exists a set of mutually orthogonal unit vectors $\{u_k^i\}_{i=1}^{n-2} \subset V^{\perp}$ such that

$s_k^T u_k^i = y_k^T u_k^i = 0, \quad i = 1, \ldots, n-2,$

which from (13) leads to

$H_{k+1} u_k^i = u_k^i, \quad i = 1, \ldots, n-2.$

Therefore, the matrix $H_{k+1}$ has $n-2$ eigenvalues equal to $1$, corresponding to the eigenvectors $\{u_k^i\}_{i=1}^{n-2}$.

Now we are interested in finding the two remaining eigenvalues, denoted $\lambda_{k+1}^{+}$ and $\lambda_{k+1}^{-}$, respectively. From the formula of algebra (see for example [30])

$\det(I + pq^T + uv^T) = (1 + q^T p)(1 + v^T u) - (p^T v)(q^T u),$

where

$p = \frac{y_k + \omega_k s_k}{y_k^T s_k}, \quad q = s_k, \quad u = -\frac{s_k}{y_k^T s_k}, \quad v = y_k,$

it follows that

$\det(H_{k+1}) = a_k + \omega_k b_k.$  (18)

But $a_k \ge 1$ and $b_k > 0$; therefore $H_{k+1}$ is a nonsingular matrix.

On the other hand, by direct computation,

$\operatorname{tr}(H_{k+1}) = n + \omega_k \frac{\|s_k\|^2}{y_k^T s_k} = n + \omega_k b_k.$  (19)

By the relationships between the determinant and the trace of a matrix and its eigenvalues, it follows that the other two eigenvalues of $H_{k+1}$ are the roots of the following quadratic polynomial:

$\lambda^2 - (2 + \omega_k b_k)\lambda + (a_k + \omega_k b_k) = 0.$  (20)

Clearly, the other two eigenvalues of the matrix $H_{k+1}$ are determined from (20) as (15) and (16), respectively. Observe that $a_k \ge 1$ follows from the Wolfe conditions and from the inequality

$\frac{\|y_k\|^2}{y_k^T s_k} \ge \frac{y_k^T s_k}{\|s_k\|^2}.$

In order to have both $\lambda_{k+1}^{+}$ and $\lambda_{k+1}^{-}$ as real eigenvalues, from (15) and (16) the following condition must be fulfilled: $\omega_k^2 b_k^2 - 4a_k + 4 \ge 0$, out of which the following estimation of the parameter $\omega_k$ can be determined:

$\omega_k \ge \frac{2\sqrt{a_k - 1}}{b_k}.$  (21)

Since $a_k \ge 1$, if $s_k \ne 0$ it follows that the estimation of $\omega_k$ given in (21) is well defined.

From (20) we have

$\lambda_{k+1}^{+} + \lambda_{k+1}^{-} = 2 + \omega_k b_k > 0,$  (22)

$\lambda_{k+1}^{+} \lambda_{k+1}^{-} = a_k + \omega_k b_k > 0.$  (23)

Therefore, from (22) and (23) we have that both $\lambda_{k+1}^{+}$ and $\lambda_{k+1}^{-}$ are positive eigenvalues. Since $\omega_k^2 b_k^2 - 4a_k + 4 \ge 0$, from (15) and (16) we have that $\lambda_{k+1}^{+} \ge \lambda_{k+1}^{-}$. By direct computation, from (15), using (21) we get

$\lambda_{k+1}^{+} \ge 1 + \sqrt{a_k - 1} \ge 1.$  (24)

A simple analysis of equation (20) shows that $1 \le \lambda_{k+1}^{-} \le \lambda_{k+1}^{+}$. Therefore $H_{k+1}$ is a positive definite matrix. The maximum eigenvalue of $H_{k+1}$ is $\lambda_{k+1}^{+}$ and its minimum eigenvalue is $1$. ∎
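A quick numerical illustration of Theorem 2.1 (a Python/NumPy sketch; the vectors and the margin added to $\omega_k$ are illustrative assumptions): the spectrum of $H_{k+1}$ built as in (13) consists of $1$ with multiplicity $n-2$ together with the two roots (15)-(16) of (20):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if y.dot(s) < 0:
    y = -y                                      # y^T s > 0, as under the Wolfe conditions

ys = y.dot(s)
a = s.dot(s) * y.dot(y) / ys**2                 # a_k of (17)
b = s.dot(s) / ys                               # b_k of (17)
omega = 2 * np.sqrt(a - 1) / b + 0.5            # above the threshold (21), so the roots are real

H = np.eye(n) - (np.outer(s, y) - np.outer(y, s)) / ys + omega * np.outer(s, s) / ys  # (13)

disc = np.sqrt(omega**2 * b**2 - 4 * a + 4)
lam_plus, lam_minus = 1 + 0.5 * (omega * b + disc), 1 + 0.5 * (omega * b - disc)      # (15), (16)

print(np.sort(np.linalg.eigvals(H).real))       # n-2 eigenvalues equal to 1, plus the two roots
print(lam_minus, lam_plus)
```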

Proposition 2.3. The largest eigenvalue

$\lambda_{k+1}^{+} = 1 + \frac{1}{2}\left[\omega_k b_k + \sqrt{\omega_k^2 b_k^2 - 4a_k + 4}\right]$  (25)

attains its minimum $1 + \sqrt{a_k - 1}$ when $\omega_k = 2\sqrt{a_k - 1}/b_k$.

Proof. Observe that $a_k \ge 1$. By direct computation, the minimum of (25) is obtained for $\omega_k = 2\sqrt{a_k - 1}/b_k$, for which its minimum value is $1 + \sqrt{a_k - 1}$. ∎

We see that, according to Proposition 2.3, when $\omega_k = 2\sqrt{a_k - 1}/b_k$ the largest eigenvalue of $H_{k+1}$ attains its minimum value, i.e. the spectrum of $H_{k+1}$ is clustered. In fact, for $\omega_k = 2\sqrt{a_k - 1}/b_k$,

$\lambda_{k+1}^{+} = \lambda_{k+1}^{-} = 1 + \sqrt{a_k - 1}.$

Therefore, from (17) the following estimation of $\omega_k$ can be obtained:

$\omega_k = \frac{2\sqrt{a_k - 1}}{b_k} = 2\sqrt{a_k - 1}\,\frac{y_k^T s_k}{\|s_k\|^2}.$  (26)

From (17), $a_k \ge 1$; hence if $s_k \ne 0$ it follows that the estimation of $\omega_k$ given by (26) is well defined. However, we see that the minimum of $\lambda_{k+1}^{+}$, obtained for $\omega_k = 2\sqrt{a_k - 1}/b_k$, is given by $1 + \sqrt{a_k - 1}$. Therefore, if $a_k$ is large, then the largest eigenvalue of the matrix $H_{k+1}$ will be large.

This motivates the parameter $\omega_k$ to be computed as:

$\omega_k = \begin{cases} 2\sqrt{a_k - 1}\,\dfrac{y_k^T s_k}{\|s_k\|^2}, & \text{if } a_k \le \gamma, \\ 2\sqrt{\gamma - 1}\,\dfrac{y_k^T s_k}{\|s_k\|^2}, & \text{otherwise}, \end{cases}$  (27)

where $\gamma > 1$ is a positive constant. Therefore, our algorithm is an adaptive conjugate gradient algorithm in which the value of the parameter $\omega_k$ in the search direction (14) is computed as in (27), trying to cluster all the eigenvalues of $H_{k+1}$, the matrix defining the search direction of the algorithm.
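A compact sketch of the adaptive choice (26)-(27) in Python/NumPy (illustrative data; $\gamma = 2$ is only an example): when $a_k \le \gamma$ the two nontrivial eigenvalues of $H_{k+1}$ coincide at $1 + \sqrt{a_k - 1}$, i.e. the spectrum is clustered:

```python
import numpy as np

def omega_adaptive(s, y, gamma=2.0):
    """Parameter of (27): minimize the largest eigenvalue of H_{k+1}, capping a_k by gamma."""
    ys = y.dot(s)
    a = s.dot(s) * y.dot(y) / ys**2              # a_k, see (17)
    b = s.dot(s) / ys                            # b_k, see (17)
    return 2.0 * np.sqrt(max(min(a, gamma) - 1.0, 0.0)) / b, a

rng = np.random.default_rng(3)
n = 10
s = rng.standard_normal(n)
y = s + 0.3 * rng.standard_normal(n)             # keeps a_k close to 1, so a_k <= gamma here
omega, a = omega_adaptive(s, y)

ys = y.dot(s)
H = np.eye(n) - (np.outer(s, y) - np.outer(y, s)) / ys + omega * np.outer(s, s) / ys   # (13)
print(np.round(np.sort(np.abs(np.linalg.eigvals(H))), 6))   # ones plus a double eigenvalue
print(1 + np.sqrt(a - 1))                                   # the cluster point 1 + sqrt(a_k - 1)
```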

Now, as we know, Powell [28] constructed a three-dimensional nonlinear unconstrained optimization problem showing that the PRP and HS methods can cycle infinitely without converging to a solution. Based on the insight gained from his example, Powell [28] proposed a simple modification of the PRP method in which the conjugate gradient parameter $\beta_k^{PRP}$ is replaced by $\max\{\beta_k^{PRP}, 0\}$. Later on, for general nonlinear objective functions, Gilbert and Nocedal [14] studied the theoretical convergence and the efficiency of the PRP+ method. In the following, to attain a good computational performance of the algorithm, we apply the idea of Powell and consider the following modification of the search direction given by (14):

$d_{k+1} = -g_{k+1} + \frac{\max\{g_{k+1}^T y_k, 0\}\, s_k - (g_{k+1}^T s_k)\, y_k}{y_k^T s_k} - \omega_k \frac{g_{k+1}^T s_k}{y_k^T s_k}\, s_k,$  (28)

where $\omega_k$ is computed as in (27).

Using the procedure for the acceleration of conjugate gradient algorithms presented in [1], and taking into consideration the above developments, the following algorithm can be presented.

NADCG Algorithm (New Adaptive Conjugate Gradient Algorithm)

Step 1. Select a starting point $x_0 \in \mathbb{R}^n$ and compute $f(x_0)$ and $g_0 = \nabla f(x_0)$. Select some positive values for $\rho$ and $\sigma$ used in the Wolfe line search conditions. Consider a positive value for the parameter $\gamma$ ($\gamma > 1$). Set $d_0 = -g_0$ and $k = 0$.
Step 2. Test a criterion for stopping the iterations. If this test is satisfied, then stop; otherwise continue with Step 3.
Step 3. Determine the steplength $\alpha_k$ by using the Wolfe line search conditions (4) and (5).
Step 4. Compute $z = x_k + \alpha_k d_k$, $g_z = \nabla f(z)$ and $y_k = g_k - g_z$.
Step 5. Compute $\bar{a}_k = \alpha_k g_k^T d_k$ and $\bar{b}_k = -\alpha_k y_k^T d_k$.
Step 6. Acceleration scheme. If $\bar{b}_k \ne 0$, then compute $\xi_k = -\bar{a}_k / \bar{b}_k$ and update the variables as $x_{k+1} = x_k + \xi_k \alpha_k d_k$; otherwise update the variables as $x_{k+1} = x_k + \alpha_k d_k$.
Step 7. Compute $\omega_k$ as in (27).
Step 8. Compute the search direction as in (28).
Step 9. Powell restart criterion. If $|g_{k+1}^T g_k| > 0.2\,\|g_{k+1}\|^2$, then set $d_{k+1} = -g_{k+1}$.
Step 10. Set $k = k + 1$ and go to Step 2. ∎
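A condensed sketch of the NADCG iteration in Python (illustrative only; it relies on scipy.optimize.line_search for the Wolfe step, omits the acceleration scheme of Steps 5-6, follows (27)-(28) as stated above, and the extended Rosenbrock test function is just an example; it is not the author's Fortran implementation):

```python
import numpy as np
from scipy.optimize import line_search

def nadcg(f, grad, x0, gamma=2.0, tol=1e-6, max_iter=500):
    x = x0.copy()
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.max(np.abs(g)) <= tol:                       # stopping test: sup-norm of the gradient
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.8)[0]
        if alpha is None:                                  # line search failure: crude fallback
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        ys = y.dot(s)
        if ys <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            d_new = -g_new                                 # safeguard: restart on tiny curvature
        else:
            a = s.dot(s) * y.dot(y) / ys**2                                        # a_k of (17)
            omega = 2.0 * np.sqrt(max(min(a, gamma) - 1.0, 0.0)) * ys / s.dot(s)   # (27)
            d_new = (-g_new
                     + (max(g_new.dot(y), 0.0) * s - g_new.dot(s) * y) / ys
                     - omega * g_new.dot(s) / ys * s)                              # direction (28)
        if abs(g_new.dot(g)) > 0.2 * g_new.dot(g_new):     # Step 9: Powell restart criterion
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x, k

# Example: extended Rosenbrock function
def f(x):
    return np.sum(100.0 * (x[1::2] - x[0::2]**2)**2 + (1.0 - x[0::2])**2)

def grad(x):
    g = np.zeros_like(x)
    g[0::2] = -400.0 * x[0::2] * (x[1::2] - x[0::2]**2) - 2.0 * (1.0 - x[0::2])
    g[1::2] = 200.0 * (x[1::2] - x[0::2]**2)
    return g

x_best, iters = nadcg(f, grad, np.full(10, -1.2))
print(iters, np.round(x_best, 4))
```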

If the function $f$ is bounded along the direction $d_k$, then there exists a stepsize $\alpha_k$ satisfying the Wolfe line search conditions (see for example [12] or [27]). In our algorithm, when the Beale-Powell restart condition is satisfied, we restart the algorithm with the negative gradient $-g_{k+1}$. More sophisticated reasons for restarting the algorithms have been proposed in the literature [10], but we are interested in the performance of a conjugate gradient algorithm that uses this restart criterion, associated with a direction satisfying both the descent and the conjugacy conditions. Under reasonable assumptions, the Wolfe conditions and the Powell restart criterion are sufficient to prove the global convergence of the algorithm. The first trial of the step length crucially affects the practical behavior of the algorithm. At every iteration $k \ge 1$ the starting guess for the step $\alpha_k$ in the line search is computed as $\alpha_{k-1} \|d_{k-1}\| / \|d_k\|$. For uniformly convex functions, we can prove the linear convergence of the acceleration scheme used in the algorithm [1].

3 Global convergence analysis

Assume that:

(i) The level set $S = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.

(ii) In a neighborhood $N$ of $S$ the function $f$ is continuously differentiable and its gradient is Lipschitz continuous, i.e. there exists a constant $L > 0$ such that $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|$ for all $x, y \in N$.

Under these assumptions on $f$ there exists a constant $\Gamma > 0$ such that $\|\nabla f(x)\| \le \Gamma$ for all $x \in S$. For any conjugate gradient method with strong Wolfe line search the following general result holds [24].

Proposition 3.1. Suppose that the above assumptions hold. Consider a conjugate gradient algorithm in which, for all $k \ge 0$, the search direction $d_k$ is a descent direction and the steplength $\alpha_k$ is determined by the Wolfe line search conditions. If

$\sum_{k \ge 0} \frac{1}{\|d_k\|^2} = \infty,$  (29)

then the algorithm converges in the sense that

$\liminf_{k \to \infty} \|g_k\| = 0.$  (30)

For uniformly convex functions we can prove that the norm of the direction $d_{k+1}$ computed as in (28) with (27) is bounded above. Therefore, by Proposition 3.1 we can prove the following result.

Theorem 3.1. Suppose that the assumptions (i) and (ii) hold. Consider the algorithm NADCG, where the search direction $d_k$ is given by (28) and $\omega_k$ is computed as in (27). Suppose that $d_k$ is a descent direction and $\alpha_k$ is computed by the strong Wolfe line search. Suppose that $f$ is a uniformly convex function on $S$, i.e. there exists a constant $\mu > 0$ such that

$(\nabla f(x) - \nabla f(y))^T (x - y) \ge \mu \|x - y\|^2$  (31)

for all $x, y \in N$. Then

$\lim_{k \to \infty} \|g_k\| = 0.$  (32)

Proof. From the Lipschitz continuity we have $\|y_k\| \le L \|s_k\|$. On the other hand, from the uniform convexity it follows that $y_k^T s_k \ge \mu \|s_k\|^2$. Now, from (27),

$\omega_k \le 2\sqrt{a_k - 1}\,\frac{y_k^T s_k}{\|s_k\|^2} \le 2\sqrt{a_k}\,\frac{y_k^T s_k}{\|s_k\|^2} = 2\,\frac{\|y_k\|}{\|s_k\|} \le 2L.$

On the other hand, from (28) we have

$\|d_{k+1}\| \le \|g_{k+1}\| + 2\,\frac{\|s_k\|\,\|y_k\|}{y_k^T s_k}\,\|g_{k+1}\| + \omega_k \frac{\|s_k\|^2}{y_k^T s_k}\,\|g_{k+1}\| \le \left(1 + \frac{4L}{\mu}\right)\|g_{k+1}\| \le \left(1 + \frac{4L}{\mu}\right)\Gamma,$

showing that (29) is true. By Proposition 3.1 it follows that (30) is true, which for uniformly convex functions is equivalent to (32). ∎

4 Numerical results and comparisons

The NADCG algorithm was implemented in double precision Fortran using loop unrolling of depth 5, compiled with f77 (default compiler settings), and run on an Intel Pentium 4 workstation at 1.8 GHz. We selected 80 large-scale unconstrained optimization test functions in generalized or extended form presented in [2]. For each test function we considered 10 numerical experiments with the number of variables increasing as $n = 1000, 2000, \ldots, 10000$. The algorithm uses the Wolfe line search conditions with cubic interpolation, $\rho = 0.0001$, $\sigma = 0.8$, and the stopping criterion $\|g_k\|_\infty \le 10^{-6}$, where $\|\cdot\|_\infty$ denotes the maximum absolute component of a vector.

Since CG-DESCENT [18] is among the best nonlinear conjugate gradient algorithms proposed in the literature, but not necessarily the best, in the following we compare our algorithm NADCG versus CG-DESCENT. The algorithms compared in these numerical experiments find local solutions. Therefore, the comparisons of the algorithms are given in the following context. Let $f_i^{ALG1}$ and $f_i^{ALG2}$ be the optimal values found by ALG1 and ALG2 for problem $i = 1, \ldots, 800$, respectively. We say that, in the particular problem $i$, the performance of ALG1 was better than the performance of ALG2 if

$|f_i^{ALG1} - f_i^{ALG2}| < 10^{-3}$  (33)

and the number of iterations (#iter), or the number of function-gradient evaluations (#fg), or the CPU time of ALG1 was less than the number of iterations, or the number of function-gradient evaluations, or the CPU time corresponding to ALG2, respectively.
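The comparison rule (33) is easy to apply to tabulated results; a small Python helper (an illustrative sketch; the data below are made up and do not reproduce the reported figures):

```python
import numpy as np

def compare(f1, f2, metric1, metric2, tol=1e-3):
    """Count problems where ALG1 or ALG2 wins a metric, subject to criterion (33)."""
    comparable = np.abs(f1 - f2) < tol              # criterion (33)
    alg1_better = comparable & (metric1 < metric2)
    alg2_better = comparable & (metric2 < metric1)
    ties = comparable & (metric1 == metric2)
    return alg1_better.sum(), alg2_better.sum(), ties.sum(), comparable.sum()

rng = np.random.default_rng(4)
f_nadcg = rng.standard_normal(800)
f_cgd = f_nadcg + 1e-5 * rng.standard_normal(800)   # nearly equal optimal values
iters_nadcg = rng.integers(10, 200, 800)
iters_cgd = iters_nadcg + rng.integers(-20, 60, 800)
print(compare(f_nadcg, f_cgd, iters_nadcg, iters_cgd))
```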

Figure 1 shows the Dolan-Moré performance profiles subject to the CPU time metric for different values of the parameter $\gamma$. From Figure 1, for example for $\gamma = 2$, comparing NADCG versus CG-DESCENT with Wolfe line search (version 1.4) subject to the number of iterations, we see that NADCG was better in 631 problems (i.e. it achieved the minimum number of iterations for solving 631 problems), CG-DESCENT was better in 88 problems, and they achieved the same number of iterations in 52 problems, etc. Out of the 800 problems considered in this numerical study, the criterion (33) holds for only 771 problems. From Figure 1 we see that for different values of the parameter $\gamma$ the NADCG algorithm has similar performance versus CG-DESCENT. Therefore, in comparison with CG-DESCENT, on average, NADCG appears to generate the best search direction and the best step-length. We see that this very simple adaptive scheme leads to a conjugate gradient algorithm which substantially outperforms CG-DESCENT, being more efficient and more robust.

From Figure 1 we also see that the NADCG algorithm is only slightly sensitive to the values of the parameter $\gamma$. In fact, for $a_k \ge \gamma$, from (28) we get:

$d_{k+1} = -g_{k+1} + \frac{\max\{g_{k+1}^T y_k, 0\}\, s_k - (g_{k+1}^T s_k)\, y_k}{y_k^T s_k} - 2\sqrt{\gamma - 1}\,\frac{g_{k+1}^T s_k}{\|s_k\|^2}\, s_k,$  (34)

where $\gamma > 1$. Therefore, since the gradient of the function $f$ is Lipschitz continuous and the quantity $s_k^T g_{k+1}$ goes to zero, it follows that along the iterations $\partial d_{k+1}/\partial \gamma$ tends to zero, showing that along the iterations the search direction becomes less and less sensitive to the value of the parameter $\gamma$. For uniformly convex functions, using the assumptions from Section 3, we get:

$\left\| \frac{\partial d_{k+1}}{\partial \gamma} \right\| \le \frac{1}{\sqrt{\gamma - 1}}\,\|g_{k+1}\| \le \frac{\Gamma}{\sqrt{\gamma - 1}}.$  (35)

Therefore, for larger values of $\gamma$ the variation of $d_{k+1}$ with respect to $\gamma$ decreases, showing that the NADCG algorithm is only slightly sensitive to the values of the parameter $\gamma$. This is illustrated in Figure 1, where the performance profiles have the same shape for different values of $\gamma$.


References
[1] Andrei, N., Acceleration of conjugate gradient algorithms for unconstrained optimization. Applied Mathematics and Computation, 213 (2009) 361-369.
[2] Andrei, N., Another collection of large-scale unconstrained optimization test functions. ICI Technical Report, January 30, 2013.
[3] Aris, R., The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts. Oxford, 1975.
[4] Averick, B.M., Carter, R.G., Moré, J.J., Xue, G.L., The MINPACK-2 test problem collection. Mathematics and Computer Science Division, Argonne National Laboratory, Preprint MCS-P153-0692, June 1992.
[5] Axelsson, O., Lindskog, G., On the rate of convergence of the preconditioned conjugate gradient methods. Numer. Math., 48 (1986) 499-523.
[6] Babaie-Kafaki, S., Ghanbari, R., A modified scaled conjugate gradient method with global convergence for nonconvex functions. Bulletin of the Belgian Mathematical Society - Simon Stevin, 21(3) (2014) 465-47.
[7] Bebernes, J., Eberly, D., Mathematical Problems from Combustion Theory. Applied Mathematical Sciences, vol. 83, Springer-Verlag, 1989.
[8] Cimatti, G., On a problem of the theory of lubrication governed by a variational inequality. Applied Mathematics and Optimization, 3 (1977) 227-242.
[9] Dai, Y.H., Liao, L.Z., New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim., 43 (2001) 87-101.