Research Article  Open Access
Modified Three-Term Conjugate Gradient Method and Its Applications
Abstract
We propose a modified three-term conjugate gradient method with the Armijo line search for solving unconstrained optimization problems. The proposed method possesses the sufficient descent property. Under mild assumptions, the global convergence of the proposed method with the Armijo line search is proved. Due to its simplicity, low storage, and nice convergence properties, the proposed method is used to solve tensor systems and a kind of nonsmooth optimization problems with the l1-norm. Finally, the given numerical experiments show the efficiency of the proposed method.
1. Introduction
We consider the following unconstrained optimization problem:

min f(x), x ∈ R^n, (1)

where f: R^n → R is a continuous function. It is well-known that the nonlinear conjugate gradient method is one of the most effective methods for solving large-scale unconstrained optimization problems due to its simplicity and low storage [1–8]. Let x_0 be the initial approximation of the solution to (1); the general format of the nonlinear conjugate gradient method is as follows:

x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, …, (2)

where the step size α_k > 0 can be obtained by some line searches, e.g., [6–8], and the search direction d_k is computed by

d_0 = −g_0; d_k = −g_k + β_k d_{k−1}, k ≥ 1, (3)

where g_k = ∇f(x_k) is the gradient of f at the point x_k and β_k is a parameter. Different choices of the parameter β_k correspond to different nonlinear conjugate gradient methods. The Fletcher-Reeves (FR) method, the Polak-Ribière-Polyak (PRP) method, the Hestenes-Stiefel (HS) method, the Dai-Yuan (DY) method, and the Conjugate Descent (CD) method are some famous nonlinear conjugate gradient methods [1, 2, 9–12], and their parameters are, respectively, defined by

β_k^FR = ||g_k||² / ||g_{k−1}||², β_k^PRP = g_k^T y_{k−1} / ||g_{k−1}||², β_k^HS = g_k^T y_{k−1} / (d_{k−1}^T y_{k−1}), β_k^DY = ||g_k||² / (d_{k−1}^T y_{k−1}), β_k^CD = −||g_k||² / (g_{k−1}^T d_{k−1}), (4)

where y_{k−1} = g_k − g_{k−1} and ||·|| is the Euclidean norm. Because of the good numerical performance of the conjugate gradient method, in recent years the nonlinear three-term conjugate gradient method has received much attention from researchers; examples include the three-term conjugate gradient methods in [5], the three-term form of the L-BFGS method [13], the three-term PRP conjugate gradient method [14], and the new-type conjugate gradient update parameter of [15]. On the other hand, the Armijo line search is widely used in solving optimization problems; see, e.g., [8]. So, in this paper, we propose a new modified three-term conjugate gradient method with the Armijo line search. The proposed method is used to solve tensor systems [16, 17] and a kind of nonsmooth optimization problems with the l1-norm [18–22].
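As an illustration (not part of the original paper), the classical update parameters listed above (FR, PRP, HS, DY, CD) can be computed directly from g_k, g_{k-1}, and d_{k-1}; the following Python sketch uses the notation of the formulas above:

```python
import numpy as np

def classical_betas(g_new, g_old, d_old):
    """Classical conjugate gradient parameters (FR, PRP, HS, DY, CD).

    g_new = g_k, g_old = g_{k-1}, d_old = d_{k-1};
    y = g_k - g_{k-1} is the gradient difference.
    """
    y = g_new - g_old
    return {
        "FR": (g_new @ g_new) / (g_old @ g_old),
        "PRP": (g_new @ y) / (g_old @ g_old),
        "HS": (g_new @ y) / (d_old @ y),
        "DY": (g_new @ g_new) / (d_old @ y),
        "CD": -(g_new @ g_new) / (g_old @ d_old),
    }
```

For example, with g_{k-1} = (1, 0), d_{k-1} = (-1, 0), and g_k = (0, 1), all five parameters evaluate to 1, which makes the formulas easy to cross-check by hand.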
The remainder of this paper is organized as follows: In the next section, we give the new modified three-term conjugate gradient method. Firstly, we give the smooth case of the proposed method and prove its sufficient descent property and global convergence. Then, we give the nonsmooth case of the proposed method. In Section 3, we present tensor systems and a kind of nonsmooth minimization problems with the l1-norm, which can be solved by the proposed method. We also give some numerical results to show the efficiency of the proposed method. In Section 4, we give the conclusion of this paper.
2. Modified Three-Term Conjugate Gradient Method
In this section, we consider the nonlinear conjugate gradient method for solving (1). We discuss the problem in two cases: (1) f is a smooth function; (2) f is a nonsmooth function.
2.1. Smooth Case
Based on the nonlinear conjugate gradient methods in [5, 8], we propose a modified three-term conjugate gradient method with the Armijo line search. We consider the search direction d_k defined in (5), where g_k is the gradient of f at x_k and the two parameters of the three-term direction are given in (6) and (7). From (5), (6), and (7), we can obtain the sufficient descent property (8).
Now, we present the modified threeterm conjugate gradient method.
Algorithm 1 (modified three-term conjugate gradient method). Step 0. Choose the parameters and an initial point x_0 ∈ R^n, compute g_0 = ∇f(x_0), set d_0 = −g_0, and let k := 0.
Step 1. If ||g_k|| = 0, stop; otherwise, go to Step 2.
Step 2. Compute the search direction d_k by (5), where the parameters are defined by (6) and (7).
Step 3. Compute the step size α_k by the Armijo line search; that is, α_k is the largest element of {ρ^i : i = 0, 1, 2, …} with ρ ∈ (0, 1) that satisfies (9).
Step 4. Compute x_{k+1} = x_k + α_k d_k, where d_k is given in Step 2 and α_k is given in Step 3.
Step 5. Set k := k + 1 and go to Step 1.
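Since the concrete formulas (5)–(7) of the proposed direction are not reproduced here, the following Python sketch only illustrates the overall structure of Algorithm 1; as a stand-in for (5)–(7) it uses the well-known three-term PRP direction of [14], d_k = −g_k + β_k d_{k−1} − θ_k y_{k−1} with β_k = g_k^T y_{k−1}/||g_{k−1}||² and θ_k = g_k^T d_{k−1}/||g_{k−1}||², which satisfies g_k^T d_k = −||g_k||², together with standard Armijo backtracking. All function and parameter names are illustrative:

```python
import numpy as np

def three_term_cg_armijo(f, grad, x0, sigma=1e-4, rho=0.5, eps=1e-6, max_iter=2000):
    """Sketch of a three-term CG method with Armijo backtracking.

    Direction: three-term PRP form of [14] (a stand-in for (5)-(7)),
    which guarantees the sufficient descent identity g_k^T d_k = -||g_k||^2.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        # Armijo backtracking: largest alpha in {1, rho, rho^2, ...}
        # with f(x + alpha d) <= f(x) + sigma * alpha * g^T d.
        alpha, fx, gtd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + sigma * alpha * gtd and alpha > 1e-16:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        gg = g @ g
        beta = (g_new @ y) / gg        # PRP parameter
        theta = (g_new @ d) / gg       # third-term parameter of [14]
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```

On a simple strictly convex quadratic the iteration reaches the minimizer quickly, which makes the structure easy to test before plugging in the paper's own (5)–(7).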
Next, we will give the global convergence analysis of Algorithm 1. Firstly, we give the following assumptions.
Assumption 2. The level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded; i.e., there exists a positive constant B such that ||x|| ≤ B for all x ∈ Ω.
Assumption 3. In some neighborhood N of Ω, f is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a positive constant L such that ||∇f(x) − ∇f(y)|| ≤ L||x − y|| for all x, y ∈ N.
Remark 4. Because {f(x_k)} is a decreasing sequence, the sequence {x_k} generated by Algorithm 1 is contained in Ω. By Assumptions 2 and 3, we can easily obtain that there exists a positive constant γ such that ||g_k|| ≤ γ for all k.
Lemma 5. Suppose {x_k} and {d_k} are generated by Algorithm 1; then (12) holds.
Proof. Firstly, we prove that there exists a constant c > 0 such that, for all sufficiently large k, (13) holds. The proof of (13) can be divided into the following two cases.
Case 1. By (8) and the Cauchy inequality, the step size admits a positive lower bound; letting c be the corresponding constant, we obtain (13).
Case 2. By the line search in Step 3 of Algorithm 1, the last rejected trial step does not satisfy (9); i.e., (14) holds. By Assumption 3 and the mean value theorem, (15) holds. Combining (15) with (8) and (14), we again obtain a positive lower bound on the step size; letting c be the corresponding constant, we obtain (13).
By (9) and Assumption 2, we have (17). From (8), (13), and (17), the stated result follows.
Now we can get the global convergence of Algorithm 1.
Theorem 6. Suppose {x_k} and {d_k} are generated by Algorithm 1; then lim inf_{k→∞} ||g_k|| = 0.
Proof. Using a technique similar to that of Theorem 3.1 in [5], we can prove this theorem.
Remark 7. The Armijo-type line search [7] is given as follows: α_k = max{s ρ^i : i = 0, 1, 2, …} such that f(x_k + α_k d_k) ≤ f(x_k) − δ α_k² ||d_k||², where s > 0, ρ ∈ (0, 1), and δ > 0. The Wolfe-type line search [6] requires α_k to satisfy f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k and g(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k, where 0 < δ < σ < 1. Obviously, Algorithm 1 also remains valid with the Armijo-type line search and the Wolfe-type line search.
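A derivative-free Armijo-type backtracking of the kind used in [7] can be sketched as follows; the acceptance condition f(x_k + α d_k) ≤ f(x_k) − δ α² ||d_k||² and all parameter names are assumptions made for illustration, not the paper's own settings:

```python
import numpy as np

def armijo_type_step(f, x, d, delta=1e-4, rho=0.5, s=1.0, max_back=60):
    """Derivative-free Armijo-type backtracking (cf. [7]).

    Returns the largest alpha in {s, s*rho, s*rho**2, ...} with
        f(x + alpha * d) <= f(x) - delta * alpha**2 * ||d||**2,
    or None if no trial step is accepted within max_back halvings.
    Note: no gradient information is needed, only function values.
    """
    fx = f(x)
    dd = d @ d
    alpha = s
    for _ in range(max_back):
        if f(x + alpha * d) <= fx - delta * alpha ** 2 * dd:
            return alpha
        alpha *= rho
    return None
```

Because the right-hand side only needs function values, this rule is convenient when the directional derivative g_k^T d_k is expensive or unavailable.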
2.2. Nonsmooth Case
In this subsection, by using a smoothing function, we extend Algorithm 1 to the nonsmooth case. Firstly, we give the definition of a smoothing function.
Definition 8. Let f: R^n → R be a locally Lipschitz continuous function. If, for any fixed μ > 0, f̃(·, μ): R^n → R is continuously differentiable and, for any x ∈ R^n, lim_{z→x, μ↓0} f̃(z, μ) = f(x), then we call f̃ a smoothing function of f.
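As a concrete illustration of Definition 8 (a common textbook choice, not necessarily the smoothing function used later in this paper), f̃(t, μ) = sqrt(t² + μ²) is a smoothing function of the absolute value f(t) = |t|:

```python
import math

def smooth_abs(t, mu):
    """Smoothing function for f(t) = |t|: for every fixed mu > 0 it is
    continuously differentiable, and smooth_abs(t, mu) -> |t| as mu -> 0,
    since 0 <= sqrt(t^2 + mu^2) - |t| <= mu."""
    return math.sqrt(t * t + mu * mu)

def smooth_abs_grad(t, mu):
    """Derivative in t; well defined even at t = 0 when mu > 0,
    where |t| itself is nondifferentiable."""
    return t / math.sqrt(t * t + mu * mu)
```

The uniform bound 0 ≤ f̃(t, μ) − |t| ≤ μ makes the limit required by Definition 8 immediate.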
Denote g̃(x, μ) = ∇_x f̃(x, μ). Now, we present the following smoothing modified three-term conjugate gradient method.
Algorithm 9 (smoothing modified three-term conjugate gradient method). Step 0. Choose the parameters γ > 0 and σ ∈ (0, 1), an initial point x_0 ∈ R^n, and an initial smoothing parameter μ_0 > 0; compute ∇_x f̃(x_0, μ_0), set d_0 = −∇_x f̃(x_0, μ_0), and let k := 0.
Step 1. If ||∇_x f̃(x_k, μ_k)|| = 0, stop; otherwise, go to Step 2.
Step 2. Compute the search direction d_k as in (5), with the gradient g_k replaced by ∇_x f̃(x_k, μ_k) and the parameters computed accordingly from (6) and (7).
Step 3. Compute the step size α_k by the Armijo line search applied to f̃(·, μ_k), where α_k satisfies the corresponding condition (9).
Step 4. Compute x_{k+1} = x_k + α_k d_k. If ||∇_x f̃(x_{k+1}, μ_k)|| ≥ γ μ_k, set μ_{k+1} = μ_k; otherwise, let μ_{k+1} = σ μ_k.
Step 5. Set k := k + 1 and go to Step 1.
Next, we give the global convergence analysis of Algorithm 9.
Theorem 10. Suppose that f̃ is a smoothing function of f. If, for every fixed μ > 0, f̃(·, μ) satisfies Assumptions 2 and 3, then the sequence {x_k} generated by Algorithm 9 satisfies lim_{k→∞} μ_k = 0 and lim inf_{k→∞} ||∇_x f̃(x_k, μ_k)|| = 0.
Proof. Denote K = {k : μ_{k+1} = σ μ_k}. If K is finite, then there exists an integer k̄ such that, for all k > k̄, μ_{k+1} = μ_k = μ_{k̄}, and ||∇_x f̃(x_{k+1}, μ_{k̄})|| ≥ γ μ_{k̄} > 0. (27) In this case, Algorithm 9 reduces to Algorithm 1 applied to the smooth function f̃(·, μ_{k̄}). Hence, from Theorem 6, we get lim inf_{k→∞} ||∇_x f̃(x_k, μ_{k̄})|| = 0, which contradicts (27). This shows that K must be infinite and lim_{k→∞} μ_k = 0. Since K is infinite, we can assume K = {k_0, k_1, k_2, …} with k_0 < k_1 < k_2 < ⋯. Then we have ||∇_x f̃(x_{k_j+1}, μ_{k_j})|| < γ μ_{k_j} → 0 as j → ∞.
3. Applications
In this section, the applications of the proposed modified three-term conjugate gradient method are given. The conjugate gradient method is suitable for solving unconstrained optimization problems. In the first subsection, we consider tensor systems, which can be transformed into unconstrained minimization problems and solved by Algorithm 1. In the second subsection, we consider a kind of nonsmooth optimization problems with the l1-norm, which can be solved by Algorithm 9. In each subsection, numerical results are given to show the feasibility of the proposed method.
3.1. Applications in Solving Tensor Systems
In this subsection, we consider tensor systems, which can be transformed into general unconstrained minimization problems, and we use Algorithm 1 to solve them. The problem of tensor systems [16, 17] is an important problem in tensor optimization [23–26]. We consider the tensor system

A x^{m−1} = b, (31)

where A is an mth-order n-dimensional tensor and b ∈ R^n. The ith element of the left-hand side of (31) is defined as

(A x^{m−1})_i = Σ_{i_2, …, i_m = 1}^{n} a_{i i_2 ⋯ i_m} x_{i_2} ⋯ x_{i_m}, i = 1, 2, …, n. (32)

And if λ ∈ C and x ∈ C^n \ {0} satisfy

A x^{m−1} = λ x^{[m−1]}, (33)

where

x^{[m−1]} = (x_1^{m−1}, x_2^{m−1}, …, x_n^{m−1})^T, (34)

then we call λ an eigenvalue of A and x a corresponding eigenvector of A [25]. The spectral radius [26] of a tensor A is defined as ρ(A) = max{|λ| : λ is an eigenvalue of A}. Let I be the identity tensor [17], i.e., the tensor whose entries satisfy i_{i_1 i_2 ⋯ i_m} = 1 if i_1 = i_2 = ⋯ = i_m and i_{i_1 i_2 ⋯ i_m} = 0 otherwise, for all 1 ≤ i_1, …, i_m ≤ n. If there exist a nonnegative tensor B and a positive real number s ≥ ρ(B) such that A = sI − B, then the tensor A is called an M-tensor [16]. And if s > ρ(B), it is called a nonsingular M-tensor. Suppose A is a nonsingular M-tensor; then, for every positive vector b, (31) has a unique positive solution [16]. Then (31) can be transformed into the following unconstrained minimization problem:

min_x f(x) = (1/2) ||A x^{m−1} − b||².
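The residual-minimization reformulation above can be sketched numerically for a 3rd-order tensor (m = 3). Plain steepest descent with Armijo backtracking stands in for Algorithm 1, whose parameter settings are not reproduced here, and the function names are illustrative:

```python
import numpy as np

def tensor_apply(A, x):
    """(A x^2)_i = sum_{j,k} a_{ijk} x_j x_k for a 3rd-order tensor A."""
    return np.einsum('ijk,j,k->i', A, x, x)

def residual_grad(A, b, x):
    """Gradient of f(x) = 0.5 * ||A x^2 - b||^2, computed as J(x)^T r(x)
    with Jacobian J_{il} = sum_k a_{ilk} x_k + sum_j a_{ijl} x_j."""
    r = tensor_apply(A, x) - b
    J = np.einsum('ijk,k->ij', A, x) + np.einsum('ijk,j->ik', A, x)
    return J.T @ r

def solve_tensor_system(A, b, x0, tol=1e-10, max_iter=5000):
    """Minimize f(x) = 0.5*||A x^2 - b||^2 by steepest descent with
    Armijo backtracking (a simple stand-in for Algorithm 1)."""
    x = np.asarray(x0, dtype=float)
    f = lambda z: 0.5 * float(np.sum((tensor_apply(A, z) - b) ** 2))
    for _ in range(max_iter):
        g = residual_grad(A, b, x)
        if np.linalg.norm(g) < tol:
            break
        alpha, fx = 1.0, f(x)
        while f(x - alpha * g) > fx - 1e-4 * alpha * (g @ g) and alpha > 1e-14:
            alpha *= 0.5
        x = x - alpha * g
    return x
```

For a diagonal tensor (a_{iii} the only nonzeros) the system decouples into scalar equations a_{iii} x_i² = b_i, which gives an easy correctness check.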
Now, we present numerical experiments for solving tensor systems. Some examples are taken from [16]. We implement Algorithm 1 with codes in Matlab Version R2014a and Tensor Toolbox Version 2.6 on a laptop with an Intel(R) Core(TM) i5-2520M CPU (2.50 GHz) and 4.00 GB of RAM. The parameters involved in the algorithm are taken as .
Example 11. Consider (31) with a 3rd-order 2-dimensional tensor, where A contains the given nonzero entries and the other entries are zeros. Hence, A is an upper triangular nonsingular M-tensor. The starting point x_0 and the right-hand side b are set as given.
The numerical results are given in Table 1 and Figure 1.

Example 12. Consider (31) with a 3rd-order tensor whose entries are given by the stated formula. Hence, A is a symmetric nonsingular M-tensor. The starting point x_0 and the right-hand side b are set as given.
When , the corresponding numerical results are given in Table 2 and Figure 2.

When , the starting points are set to be , and the corresponding numerical results are shown as follows:
Figure 3 shows the numerical results of this example.
3.2. Applications in Solving l1-Norm Problems
In this subsection, we consider a kind of nonsmooth optimization problems with the l1-norm, which can be solved by Algorithm 9. We consider

min_x (1/2) ||Ax − b||² + τ ||x||_1, (40)

where A ∈ R^{m×n}, b ∈ R^m, and τ > 0 is a parameter that trades off both terms in the minimization. This problem is widely used in compressed sensing, signal reconstruction, and related problems [18–22, 27–29]. In this subsection, we translate (40) into an absolute value equation problem based on the equivalence between the linear complementarity problem and the absolute value equation problem [30] and then use Algorithm 9 to solve it.
We first give the transformation form of (40). As in [19, 21], let x = u − v with u ≥ 0 and v ≥ 0, where u_i = (x_i)_+ and v_i = (−x_i)_+ for all i = 1, 2, …, n, and (t)_+ = max{t, 0}.

Due to this definition, we get ||x||_1 = 1_n^T u + 1_n^T v, where 1_n = (1, 1, …, 1)^T is an n-dimensional vector. Therefore, as in [19, 21], problem (40) can be rewritten as follows:

min_{u ≥ 0, v ≥ 0} (1/2) ||b − A(u − v)||² + τ 1_n^T u + τ 1_n^T v.

Then, the above problem can be transformed into

min_{z ≥ 0} (1/2) z^T B z + c^T z, (45)

where z = (u; v) is a 2n-dimensional vector, B = [A^T A, −A^T A; −A^T A, A^T A], and c = τ 1_{2n} + (−A^T b; A^T b). Solving (45) is equivalent to solving the following linear complementarity problem:

find z ∈ R^{2n} such that z ≥ 0, Bz + c ≥ 0, z^T (Bz + c) = 0. (46)

By the equivalence between the linear complementarity problem and the absolute value equation problem [30], (46) can be transformed into an absolute value equation problem. By the smoothing approximation function of the absolute value, we then obtain a smooth approximation of this problem, to which Algorithm 9 can be applied.
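The smoothing idea can be illustrated end to end on a tiny instance of (40) by smoothing the absolute value directly with sqrt(t² + μ²) (an assumed choice) and applying gradient descent with Armijo backtracking as a simple stand-in for Algorithm 9. For A = I the exact minimizer of (40) is the soft-thresholding of b, which the smoothed solution approaches as μ decreases:

```python
import numpy as np

def smoothed_l1_objective(A, b, tau, mu):
    """f_mu(x) = 0.5*||Ax - b||^2 + tau * sum_i sqrt(x_i^2 + mu^2),
    a smooth approximation of problem (40)."""
    def f(x):
        r = A @ x - b
        return 0.5 * float(r @ r) + tau * float(np.sum(np.sqrt(x ** 2 + mu ** 2)))
    def g(x):
        return A.T @ (A @ x - b) + tau * x / np.sqrt(x ** 2 + mu ** 2)
    return f, g

def solve_smoothed(A, b, tau, mu=1e-2, iters=3000):
    """Gradient descent with Armijo backtracking on the smoothed problem
    (a stand-in for Algorithm 9; mu is kept fixed for simplicity)."""
    f, g = smoothed_l1_objective(A, b, tau, mu)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        gr = g(x)
        if np.linalg.norm(gr) < 1e-8:
            break
        alpha, fx = 1.0, f(x)
        while f(x - alpha * gr) > fx - 1e-4 * alpha * (gr @ gr) and alpha > 1e-14:
            alpha *= 0.5
        x = x - alpha * gr
    return x
```

With A = I, b = (2, −0.5, 0.3), and τ = 1, soft-thresholding gives the exact minimizer (1, 0, 0), and the smoothed solution matches it up to an O(μ)-level error.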
Now, we give some numerical experiments for Algorithm 9, which are also considered in [19, 21, 22, 27, 28]. The numerical results of all examples indicate that the modified three-term conjugate gradient method is also effective for solving the l1-norm minimization problem (40). In our numerical experiments, all codes run in Matlab R2014a. For Examples 13 and 14, the parameters used in Algorithm 9 are chosen as , , , and .
Example 13. Consider (40) with the given data. In this example, we choose the stated starting values. The numerical results are given in Figure 4.
Example 14. Consider (40) with the given data. In this example, we take the stated starting values. The numerical results are given in Figure 5.
Example 15. Consider a typical compressed sensing problem of the form (40), which is also considered in [21, 22, 27, 28]. We choose , , , , , , , and . The original signal contains 520 randomly generated spikes. Further, the matrix A is obtained by first filling it with independent samples of a standard Gaussian distribution and then orthogonalizing its rows. We choose and . The numerical results are shown in Figure 6.
4. Conclusion
In this paper, we propose a modified three-term conjugate gradient method and give its applications in solving tensor systems and a kind of nonsmooth optimization problems with the l1-norm. The global convergence of the proposed method is also given. Finally, we present some numerical experiments to demonstrate the efficiency of the proposed method.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2016AM29), and National Natural Science Foundation of China (no. 11671220).
References
1. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
2. B. T. Polyak, "The conjugate gradient method in extreme problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, pp. 94–112, 1969.
3. W. W. Hager and H. Zhang, "A survey of nonlinear conjugate gradient methods," Pacific Journal of Optimization, vol. 2, no. 1, pp. 35–58, 2006.
4. X. Chen and W. Zhou, "Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth minimization," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 765–790, 2010.
5. J. K. Liu, Y. M. Feng, and L. M. Zou, "Some three-term conjugate gradient methods with the inexact line search condition," Calcolo, vol. 55, no. 2, article 16, 2018.
6. S. Du and Y. Chen, "Global convergence of a modified spectral FR conjugate gradient method," Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
7. L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
8. L. Zhang and W. Zhou, "On the global convergence of the Hager-Zhang conjugate gradient method with Armijo line search," Acta Mathematica Scientia, vol. 28, no. 5, pp. 840–845, 2008.
9. R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
10. M. R. Hestenes and E. L. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–432, 1952.
11. Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
12. R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.
13. J. Nocedal, "Updating quasi-Newton matrices with limited storage," Mathematics of Computation, vol. 35, no. 151, pp. 773–782, 1980.
14. L. Zhang, W. Zhou, and D. H. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
15. M. Rivaie, M. Mamat, L. W. June, and I. Mohd, "A new class of nonlinear conjugate gradient coefficients with global convergence properties," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11323–11332, 2012.
16. W. Ding and Y. Wei, "Solving multilinear systems with M-tensors," Journal of Scientific Computing, vol. 68, no. 2, pp. 689–715, 2016.
17. Z. Xie, X. Jin, and Y. Wei, "Tensor methods for solving symmetric M-tensor systems," Journal of Scientific Computing, vol. 74, no. 1, pp. 412–425, 2018.
18. S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "An interior-point method for large-scale l_{1}-regularized least squares," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–617, 2007.
19. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, pp. 586–597, 2007.
20. E. T. Hale, W. Yin, and Y. Zhang, "Fixed-point continuation for l_{1}-minimization: methodology and convergence," SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
21. Y. Xiao, Q. Wang, and Q. Hu, "Non-smooth equations based method for l_{1}-norm problems with applications to compressed sensing," Nonlinear Analysis, vol. 74, no. 11, pp. 3570–3577, 2011.
22. Y. Chen, Y. Gao, Z. Liu, and S. Du, "The smoothing gradient method for a kind of special optimization problem," Operations Research Transactions, vol. 21, pp. 119–125, 2017.
23. X. Li and M. K. Ng, "Solving sparse nonnegative tensor equations: algorithms and applications," Frontiers of Mathematics in China, vol. 10, no. 3, pp. 649–680, 2015.
24. S. Du, L. Zhang, C. Chen, and L. Qi, "Tensor absolute value equations," Science China Mathematics, vol. 61, no. 9, pp. 1695–1710, 2018.
25. L. Qi, "Eigenvalues of a real supersymmetric tensor," Journal of Symbolic Computation, vol. 40, no. 6, pp. 1302–1324, 2005.
26. Y. Yang and Q. Yang, "Further results for Perron-Frobenius theorem for nonnegative tensors," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 5, pp. 2517–2530, 2010.
27. S. Du and M. Chen, "A new smoothing modified three-term conjugate gradient method for l_{1}-norm minimization problem," Journal of Inequalities and Applications, vol. 2018, article 105, 2018.
28. J. Yang and Y. Zhang, "Alternating direction algorithms for l_{1}-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
29. W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l_{1}-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, 2008.
30. O. L. Mangasarian, "Absolute value equation solution via concave minimization," Optimization Letters, vol. 1, no. 1, pp. 3–8, 2007.
Copyright
Copyright © 2019 Jiankun Liu and Shouqiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.