Mathematical Problems in Engineering
Volume 2019, Article ID 5976595, 9 pages
https://doi.org/10.1155/2019/5976595
Research Article

Modified Three-Term Conjugate Gradient Method and Its Applications

Jiankun Liu and Shouqiang Du

School of Mathematics and Statistics, Qingdao University, Qingdao 266071, China

Correspondence should be addressed to Shouqiang Du; sqdu@qdu.edu.cn

Received 16 January 2019; Revised 15 March 2019; Accepted 25 March 2019; Published 17 April 2019

Academic Editor: Yann Favennec

Copyright © 2019 Jiankun Liu and Shouqiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a modified three-term conjugate gradient method with the Armijo line search for solving unconstrained optimization problems. The proposed method possesses the sufficient descent property. Under mild assumptions, the global convergence of the proposed method with the Armijo line search is proved. Due to its simplicity, low storage, and nice convergence properties, the proposed method is used to solve $M$-tensor systems and a kind of nonsmooth optimization problems with the $l_1$-norm. Finally, the given numerical experiments show the efficiency of the proposed method.

1. Introduction

We consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is a continuous function. It is well known that the nonlinear conjugate gradient method is one of the most effective methods for solving large-scale unconstrained optimization problems due to its simplicity and low storage [1–8]. Let $x_0 \in \mathbb{R}^n$ be the initial approximation of the solution to (1); the general format of the nonlinear conjugate gradient method is as follows:
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \qquad (2)$$
where the step size $\alpha_k > 0$ can be obtained by some line searches, e.g., [6–8], and the search direction $d_k$ is computed by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \qquad (3)$$
where $g_k = \nabla f(x_k)$ is the gradient of $f$ at the point $x_k$ and $\beta_k$ is a parameter. Different choices of the parameter $\beta_k$ correspond to different nonlinear conjugate gradient methods. The Fletcher-Reeves (FR) method, the Polak-Ribiere-Polyak (PRP) method, the Hestenes-Stiefel (HS) method, the Dai-Yuan (DY) method, and the Conjugate Descent (CD) method are some famous nonlinear conjugate gradient methods [1, 2, 9–12], and their parameters are, respectively, defined by
$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{CD} = -\frac{\|g_k\|^2}{g_{k-1}^T d_{k-1}}, \qquad (4)$$
where $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ is the Euclidean norm. Because of the good numerical performance of the conjugate gradient method, the nonlinear three-term conjugate gradient method has received much attention from researchers in recent years; examples include the three-term conjugate gradient methods of [5], the three-term form of the L-BFGS method [13], the three-term PRP conjugate gradient method [14], and a new type of conjugate gradient update parameter similar to that of [15]. On the other hand, the Armijo line search is widely used in solving optimization problems; see, e.g., [8]. So, in this paper, we propose a new modified three-term conjugate gradient method with the Armijo line search. The proposed method is used to solve $M$-tensor systems [16, 17] and a kind of nonsmooth optimization problems with the $l_1$-norm [18–22].
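To make the framework concrete, the following minimal Matlab sketch implements iteration (2)-(3) with the PRP parameter from (4) and a simple backtracking step size; the quadratic test function and all constants are illustrative assumptions, not taken from this paper.

% Generic nonlinear CG iteration (2)-(3) with the PRP parameter from (4).
f    = @(x) 0.5*(x(1) - 1)^2 + 2*(x(2) + 0.5)^2;   % illustrative objective
grad = @(x) [x(1) - 1; 4*(x(2) + 0.5)];            % its gradient
x = [5; 5]; g = grad(x); d = -g;
for k = 1:200
    if norm(g) <= 1e-8, break; end
    alpha = 1;                                     % simple backtracking
    while f(x + alpha*d) > f(x) + 1e-4*alpha*(g'*d)
        alpha = 0.5*alpha;
    end
    x = x + alpha*d;
    gnew = grad(x);
    beta = (gnew'*(gnew - g))/(g'*g);              % PRP parameter in (4)
    d = -gnew + beta*d;                            % two-term direction (3)
    g = gnew;
end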

The remainder of this paper is organized as follows. In the next section, we give the new modified three-term conjugate gradient method. Firstly, we give the smooth case of the proposed method and prove its sufficient descent property and global convergence. Then, we give the nonsmooth case of the proposed method. In Section 3, we present $M$-tensor systems and a kind of nonsmooth minimization problems with the $l_1$-norm, which can be solved by the proposed method. We also give some numerical results to show the efficiency of the proposed method. In Section 4, we give the conclusion of this paper.

2. Modified Three-Term Conjugate Gradient Method

In this section, we consider the nonlinear conjugate gradient method for solving (1); we discuss the problem in two cases: (1) $f$ is a smooth function; (2) $f$ is a nonsmooth function.

2.1. Smooth Case

Based on the nonlinear conjugate gradient methods in [5, 8], we propose a modified three-term conjugate gradient method with the Armijo line search. We consider the search direction
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1} + \theta_k y_{k-1}, & k \ge 1, \end{cases} \qquad (5)$$
where $g_k$ is the gradient of $f$ at $x_k$, $y_{k-1} = g_k - g_{k-1}$, and the parameters $\beta_k$ and $\theta_k$ are given by (6) and (7). From (5), (6), and (7), we can obtain the sufficient descent property
$$g_k^T d_k = -\|g_k\|^2. \qquad (8)$$

Now, we present the modified three-term conjugate gradient method.

Algorithm 1 (modified three-term conjugate gradient method). Step 0. Choose constants $s > 0$, $\rho \in (0, 1)$, and $\delta \in (0, 1)$ and give an initial point $x_0 \in \mathbb{R}^n$; let $k = 0$, compute $g_0 = \nabla f(x_0)$, and let $d_0 = -g_0$.
Step 1. If $\|g_k\| = 0$, stop; otherwise, go to Step 2.
Step 2. Compute the search direction $d_k$ by (5), where $\beta_k$ and $\theta_k$ are defined by (6) and (7).
Step 3. Compute the step size $\alpha_k$ by the Armijo line search, where $\alpha_k = \max\{s\rho^j : j = 0, 1, 2, \ldots\}$ satisfies
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k. \qquad (9)$$
Step 4. Compute $x_{k+1} = x_k + \alpha_k d_k$, where $d_k$ is given in Step 2 and $\alpha_k$ is given in Step 3.
Step 5. Set $k = k + 1$ and go to Step 1.
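The following Matlab sketch mirrors the steps of Algorithm 1. Since equations (6) and (7) are not reproduced above, the sketch substitutes the three-term PRP parameters of [14] for them; that choice also satisfies the sufficient descent property (8), but it is a stand-in, not the authors' own update. The constants s, rho, and delta are assumed values.

function x = ttcg(f, grad, x, tol)
% Three-term CG with the Armijo line search (9); see Algorithm 1.
s = 1; rho = 0.5; delta = 1e-4;             % assumed line-search constants
g = grad(x); d = -g; gold = g; dold = d;
for k = 0:10000
    if norm(g) <= tol, break; end           % Step 1
    if k > 0                                % Step 2: direction of type (5)
        y    = g - gold;
        beta = (g'*y)/(gold'*gold);         % stand-in for (6) (PRP type, [14])
        thet = (g'*dold)/(gold'*gold);      % stand-in for (7)
        d    = -g + beta*dold - thet*y;     % yields g'*d = -norm(g)^2, cf. (8)
    end
    alpha = s;                              % Step 3: Armijo condition (9)
    while f(x + alpha*d) > f(x) + delta*alpha*(g'*d)
        alpha = rho*alpha;
    end
    x = x + alpha*d;                        % Step 4
    gold = g; dold = d; g = grad(x);        % Step 5
end
end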
Next, we will give the global convergence analysis of Algorithm 1. Firstly, we give the following assumptions.

Assumption 2. The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded; i.e., there exists a positive constant $B$ such that $\|x\| \le B$ for all $x \in \Omega$.

Assumption 3. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a positive constant $L$ such that
$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in N. \qquad (10)$$

Remark 4. Because $\{f(x_k)\}$ is a decreasing sequence, the sequence $\{x_k\}$ generated by Algorithm 1 is contained in $\Omega$. By Assumptions 2 and 3, we can easily obtain that there exists a positive constant $\gamma$ such that
$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \qquad (11)$$

Lemma 5. Suppose $\{x_k\}$ and $\{d_k\}$ are generated by Algorithm 1; then
$$\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty. \qquad (12)$$

Proof. Firstly, we prove that there exists a constant $c > 0$ such that, for all sufficiently large $k$,
$$\alpha_k \ge c \frac{\|g_k\|^2}{\|d_k\|^2}. \qquad (13)$$
The proof of (13) can be divided into the two following cases.
Case 1 ($\alpha_k = s$). By (8) and the Cauchy inequality, $\|g_k\|^2 = -g_k^T d_k \le \|g_k\| \|d_k\|$; then we have $\|g_k\| \le \|d_k\|$. Let $c = s$; then we obtain (13).
Case 2 ($\alpha_k < s$). Due to the line search step, that is, Step 3 of Algorithm 1, $\rho^{-1}\alpha_k$ does not satisfy (9); i.e.,
$$f(x_k + \rho^{-1}\alpha_k d_k) > f(x_k) + \delta \rho^{-1}\alpha_k g_k^T d_k. \qquad (14)$$
By Assumption 3 and the mean value theorem, there exists $t_k \in (0, 1)$ such that
$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) = \rho^{-1}\alpha_k g(x_k + t_k \rho^{-1}\alpha_k d_k)^T d_k \le \rho^{-1}\alpha_k g_k^T d_k + L \rho^{-2}\alpha_k^2 \|d_k\|^2.$$
By the above formula, (8), and (14), we have
$$\alpha_k \ge \frac{\rho(1 - \delta)}{L} \cdot \frac{\|g_k\|^2}{\|d_k\|^2}.$$
Let $c = \rho(1 - \delta)/L$; then we obtain (13).
By (9) and Assumption 2, we have
$$\sum_{k=0}^{\infty} \left(-\delta \alpha_k g_k^T d_k\right) \le \sum_{k=0}^{\infty} \left(f(x_k) - f(x_{k+1})\right) < +\infty. \qquad (17)$$
From (8), (13), and (17), we have
$$\sum_{k=0}^{\infty} \delta c \frac{\|g_k\|^4}{\|d_k\|^2} \le \sum_{k=0}^{\infty} \delta \alpha_k \|g_k\|^2 < +\infty;$$
then we get
$$\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty.$$
Hence, the result follows.

Now we can get the global convergence of Algorithm 1.

Theorem 6. Suppose $\{x_k\}$ and $\{d_k\}$ are generated by Algorithm 1; then
$$\lim_{k \to \infty} \|g_k\| = 0.$$

Proof. Using a technique similar to that of Theorem 3.1 in [5], we can obtain this theorem.

Remark 7. The Armijo-type line search [7] is given as follows: $\alpha_k = \max\{s\rho^j : j = 0, 1, 2, \ldots\}$ satisfies
$$f(x_k + \alpha_k d_k) \le f(x_k) - \delta \alpha_k^2 \|d_k\|^2,$$
where $s > 0$, $\rho \in (0, 1)$, and $\delta > 0$. The Wolfe-type line search [6] is given as follows: $\alpha_k > 0$ satisfies
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k,$$
where $0 < \delta < \sigma < 1$. Obviously, Algorithm 1 also works with the Armijo-type line search and the Wolfe-type line search.
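For reference, here is a sketch of Armijo-type backtracking in the sense of [7]: it accepts the largest alpha = s*rho^j whose sufficient decrease is measured by alpha^2*||d||^2, so no gradient evaluations are needed inside the loop.

function alpha = armijo_type(f, x, d, s, rho, delta)
% Armijo-type line search in the sense of [7] (sketch).
alpha = s;
while f(x + alpha*d) > f(x) - delta*alpha^2*(d'*d)
    alpha = rho*alpha;
end
end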

2.2. Nonsmooth Case

In this subsection, by using a smoothing function, we extend Algorithm 1 to the nonsmooth case. Firstly, we give the definition of a smoothing function.

Definition 8. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. If, for any fixed $\mu > 0$, $\tilde{f}(\cdot, \mu): \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and satisfies
$$\lim_{z \to x, \, \mu \downarrow 0} \tilde{f}(z, \mu) = f(x), \quad \forall x \in \mathbb{R}^n,$$
then we call $\tilde{f}$ a smoothing function of $f$.
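A classical example consistent with Definition 8: phi(t, mu) = sqrt(t^2 + 4*mu^2) is continuously differentiable in t for every fixed mu > 0 and converges to |t| as mu tends to 0. The short Matlab check below illustrates the approximation; the test point 0.3 is arbitrary.

phi  = @(t, mu) sqrt(t.^2 + 4*mu.^2);       % smoothing function of |t|
dphi = @(t, mu) t ./ sqrt(t.^2 + 4*mu.^2);  % its derivative in t
for mu = [1e-1, 1e-2, 1e-3]
    fprintf('mu = %g: |phi - abs| = %g\n', mu, abs(phi(0.3, mu) - abs(0.3)));
end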

Denote $\tilde{g}_k = \nabla_x \tilde{f}(x_k, \mu_k)$. Now, we present the following smoothing modified three-term conjugate gradient method.

Algorithm 9 (smoothing modified three-term conjugate gradient method). Step 0. Choose constants $s > 0$, $\rho \in (0, 1)$, $\delta \in (0, 1)$, $\gamma > 0$, $\sigma \in (0, 1)$, and $\mu_0 > 0$ and give an initial point $x_0 \in \mathbb{R}^n$; let $k = 0$, compute $\tilde{g}_0 = \nabla_x \tilde{f}(x_0, \mu_0)$, and let $d_0 = -\tilde{g}_0$.
Step 1. If $\|\tilde{g}_k\| = 0$, stop; otherwise, go to Step 2.
Step 2. Compute the search direction $d_k$ by (5) with $g_k$ and $y_{k-1}$ replaced by $\tilde{g}_k$ and $\tilde{y}_{k-1}$, where $\tilde{y}_{k-1} = \tilde{g}_k - \tilde{g}_{k-1}$.
Step 3. Compute the step size $\alpha_k$ by the Armijo line search, where $\alpha_k = \max\{s\rho^j : j = 0, 1, 2, \ldots\}$ satisfies
$$\tilde{f}(x_k + \alpha_k d_k, \mu_k) \le \tilde{f}(x_k, \mu_k) + \delta \alpha_k \tilde{g}_k^T d_k.$$
Step 4. Compute $x_{k+1} = x_k + \alpha_k d_k$; if $\|\nabla_x \tilde{f}(x_{k+1}, \mu_k)\| \ge \gamma \mu_k$, set $\mu_{k+1} = \mu_k$; otherwise, let $\mu_{k+1} = \sigma \mu_k$.
Step 5. Set $k = k + 1$ and go to Step 1.
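The outer smoothing loop of Algorithm 9 can be organized as below, reusing the ttcg sketch given after Algorithm 1: run the inner method on the smoothing function for the current mu until the smoothed gradient falls below gamma*mu, then shrink mu. The constants gamma, sigma, and the initial mu are assumed values.

function x = smoothing_ttcg(ft, gradft, x, tol)
% ft(x, mu) is a smoothing function of f; gradft(x, mu) its gradient in x.
gamma = 1; sigma = 0.5; mu = 1e-2;          % assumed smoothing constants
while mu > tol || norm(gradft(x, mu)) > tol
    % inner run of Algorithm 1 on ft(., mu), stopped at tolerance gamma*mu:
    x  = ttcg(@(z) ft(z, mu), @(z) gradft(z, mu), x, gamma*mu);
    mu = sigma*mu;                          % Step 4: shrink the parameter
end
end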
Next, we give the global convergence analysis of Algorithm 9.

Theorem 10. Suppose that $\tilde{f}$ is a smoothing function of $f$. If, for every fixed $\mu > 0$, $\tilde{f}(\cdot, \mu)$ satisfies Assumptions 2 and 3, then $\{x_k\}$ generated by Algorithm 9 satisfies
$$\lim_{k \to \infty} \mu_k = 0 \quad \text{and} \quad \liminf_{k \to \infty} \|\nabla_x \tilde{f}(x_k, \mu_k)\| = 0.$$

Proof. Denote $K = \{k : \mu_{k+1} = \sigma \mu_k\}$. If $K$ is finite, then there exists an integer $\bar{k}$ such that, for all $k > \bar{k}$,
$$\|\nabla_x \tilde{f}(x_{k+1}, \mu_{\bar{k}})\| \ge \gamma \mu_{\bar{k}} \qquad (27)$$
and $\mu_k = \mu_{\bar{k}}$. That is, Algorithm 9 reduces to Algorithm 1 applied to
$$\min_{x \in \mathbb{R}^n} \tilde{f}(x, \mu_{\bar{k}}).$$
Hence, from Theorem 6, we get
$$\lim_{k \to \infty} \|\nabla_x \tilde{f}(x_k, \mu_{\bar{k}})\| = 0,$$
which contradicts (27). This shows that $K$ must be infinite and $\lim_{k \to \infty} \mu_k = 0$. Since $K$ is infinite, we can assume $K = \{k_0, k_1, k_2, \ldots\}$ with $k_0 < k_1 < k_2 < \cdots$. Then we have
$$\liminf_{j \to \infty} \|\nabla_x \tilde{f}(x_{k_j + 1}, \mu_{k_j})\| \le \lim_{j \to \infty} \gamma \mu_{k_j} = 0.$$

3. Applications

In this section, the applications of the proposed modified three-term conjugate gradient method are given. The conjugate gradient method is suitable for solving unconstrained optimization problems. In the first subsection, we consider $M$-tensor systems, which can be transformed into unconstrained minimization problems and solved by Algorithm 1. In the second subsection, we consider a kind of nonsmooth optimization problems with the $l_1$-norm, which can be solved by Algorithm 9. In each subsection, numerical results are given to show the feasibility of the proposed method.

3.1. Applications in Solving $M$-Tensor Systems

In this subsection, we consider $M$-tensor systems, which can be transformed into general unconstrained minimization problems, and we use Algorithm 1 to solve them. The problem of tensor systems [16, 17] is an important problem in tensor optimization [23–26]. We consider the tensor system
$$\mathcal{A}x^{m-1} = b, \qquad (31)$$
where $\mathcal{A}$ is an $m$th-order $n$-dimensional tensor and $b \in \mathbb{R}^n$. The $i$th element of $\mathcal{A}x^{m-1}$ in (31) is defined as
$$(\mathcal{A}x^{m-1})_i = \sum_{i_2, \ldots, i_m = 1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}, \quad i = 1, \ldots, n.$$
If $\lambda \in \mathbb{C}$ and $x \in \mathbb{C}^n \setminus \{0\}$ satisfy
$$\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$$
where
$$x^{[m-1]} = \left(x_1^{m-1}, x_2^{m-1}, \ldots, x_n^{m-1}\right)^T,$$
then we call $\lambda$ an eigenvalue of $\mathcal{A}$ and $x$ a corresponding eigenvector of $\mathcal{A}$ [25]. The spectral radius [26] of a tensor $\mathcal{A}$ is defined as
$$\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } \mathcal{A}\}.$$
Let $\mathcal{I}$ be the identity tensor [17]; i.e.,
$$\mathcal{I}_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise}, \end{cases}$$
for all $1 \le i_1, i_2, \ldots, i_m \le n$. If there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $s$ such that $\mathcal{A} = s\mathcal{I} - \mathcal{B}$ with $s \ge \rho(\mathcal{B})$, then the tensor $\mathcal{A}$ is called an $M$-tensor [16]; if $s > \rho(\mathcal{B})$, it is called a nonsingular $M$-tensor. Suppose $\mathcal{A}$ is a nonsingular $M$-tensor; then, for every positive vector $b$, (31) has a unique positive solution [16]. Then (31) can be transformed into the following unconstrained minimization problem:
$$\min_{x \in \mathbb{R}^n} f(x) = \frac{1}{2}\left\|\mathcal{A}x^{m-1} - b\right\|^2.$$
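For m = 3, the merit function above and its gradient can be evaluated in plain Matlab as follows (a sketch; A is stored as an n-by-n-by-n array, no toolbox required), after which Algorithm 1 applies directly.

function fval = mtensor_f(A, b, x)
% f(x) = 0.5*||A x^2 - b||^2 for a 3rd-order tensor A.
r = apply3(A, x) - b;
fval = 0.5*(r'*r);
end

function g = mtensor_g(A, b, x)
% Gradient of f: g = J(x)'*(A x^2 - b), with J the Jacobian of x -> A x^2.
n = numel(x); J = zeros(n, n);
for i = 1:n
    Ai = squeeze(A(i,:,:));
    J(i,:) = ((Ai + Ai')*x)';            % row i of the Jacobian
end
g = J'*(apply3(A, x) - b);
end

function y = apply3(A, x)
% (A x^2)_i = sum_{j,k} a_ijk * x_j * x_k, cf. (31).
n = numel(x); y = zeros(n, 1);
for i = 1:n
    y(i) = x'*squeeze(A(i,:,:))*x;
end
end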

Now, we present numerical experiments for solving $M$-tensor systems. Some examples are taken from [16]. We implement Algorithm 1 with codes in Matlab R2014a and Tensor Toolbox 2.6 on a laptop with an Intel(R) Core(TM) i5-2520M CPU (2.50 GHz) and 4.00 GB of RAM. The parameters involved in the algorithm are taken as .

Example 11. Consider (31) with a 3rd-order 2-dimensional $M$-tensor, where $\mathcal{A} = s\mathcal{I} - \mathcal{B}$. $\mathcal{B}$ contains the entries with and , and the other entries are zeros. Let , . Hence $\mathcal{A}$ is an upper triangular nonsingular $M$-tensor. The starting point is set to be and $b$ is set to be .

The numerical results are given in Table 1 and Figure 1.

Table 1: The numerical results of Example 11.
Figure 1: Numerical results for Example 11 ($n = 2$).
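A usage sketch with illustrative data (the exact entries of Example 11 are given in [16] and are not reproduced here): build A = s*I - B with B nonnegative and s large, so that A is a nonsingular M-tensor, and solve (31) through the ttcg and mtensor_* sketches above.

n = 2; rng(1);
B = rand(n, n, n);                          % nonnegative tensor
s = 2*n^2;                                  % exceeds rho(B) for entries in [0,1]
A = -B;
for i = 1:n, A(i,i,i) = A(i,i,i) + s; end   % A = s*I - B
b = ones(n, 1);                             % positive right-hand side
x = ttcg(@(z) mtensor_f(A, b, z), @(z) mtensor_g(A, b, z), ones(n, 1), 1e-8);
fprintf('residual norm = %g\n', norm(apply3(A, x) - b));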

Example 12. Consider (31) with a 3rd-order $M$-tensor, where $\mathcal{A} = s\mathcal{I} - \mathcal{B}$ with $\mathcal{B}$ given by . By , let , . Hence $\mathcal{A}$ is a symmetric nonsingular $M$-tensor. The starting point is set to be and $b$ is set to be .

When $n = 2$, the corresponding numerical results are given in Table 2 and Figure 2.

Table 2: The numerical results of Example 12.
Figure 2: Numerical results for Example 12 ($n = 2$).

When $n = 5$, the starting points are set to be , and the corresponding numerical results are shown as follows.

Figure 3 shows the numerical results of this example.

Figure 3: Numerical results for Example 12 ($n = 5$).
3.2. Applications in Solving $l_1$-Norm Problems

In this subsection, we consider a kind of nonsmooth optimization problems with the $l_1$-norm, which can be solved by Algorithm 9. We consider
$$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \tau\|x\|_1, \qquad (40)$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $\tau > 0$ is a parameter to trade off both terms for minimization. This problem is widely used in compressed sensing, signal reconstruction, and some related problems [18–22, 27–29]. In this subsection, we translate (40) into an absolute value equation problem based on the equivalence between the linear complementarity problem and the absolute value equation problem [30] and then use Algorithm 9 to solve it.

We first give the transformed form of (40). As in [19, 21], let $x = u - v$ with $u \ge 0$ and $v \ge 0$, where $u_i = \max\{x_i, 0\}$ and $v_i = \max\{-x_i, 0\}$ for $i = 1, \ldots, n$.

Due to this definition, we get $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T$ is an $n$-dimensional vector of ones. Therefore, as in [19, 21], problem (40) can be rewritten as follows:
$$\min_{u \ge 0, \, v \ge 0} \frac{1}{2}\|A(u - v) - b\|_2^2 + \tau e_n^T u + \tau e_n^T v.$$
Then, the above problem can be transformed into
$$\min_{z \ge 0} \frac{1}{2} z^T M z + q^T z, \qquad (45)$$
where $z = (u^T, v^T)^T$ is a $2n$-dimensional vector,
$$M = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}, \quad q = \tau e_{2n} + \begin{pmatrix} -A^T b \\ A^T b \end{pmatrix}.$$
Solving (45) is equivalent to solving the following linear complementarity problem.

To find $z \in \mathbb{R}^{2n}$ such that
$$z \ge 0, \quad Mz + q \ge 0, \quad z^T(Mz + q) = 0; \qquad (46)$$

then (46) can be transformed into the following absolute value equation problem; that is,
$$(M + I)z + q = |(M - I)z + q|,$$
where $|\cdot|$ denotes the componentwise absolute value.

By using a smoothing approximation function of $|t|$, e.g., $\phi(t, \mu) = \sqrt{t^2 + 4\mu^2}$, we get the smooth equations
$$(M + I)z + q = \Phi((M - I)z + q, \mu),$$
where $\Phi(w, \mu) = (\phi(w_1, \mu), \ldots, \phi(w_{2n}, \mu))^T$, so that Algorithm 9 can be applied.
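The whole transformation chain can be sketched in a few lines of Matlab: build M and q from (A, b, tau), form the smoothed absolute value equation residual with phi(t, mu) = sqrt(t^2 + 4*mu^2) (one common choice; the paper's own smoothing function follows [27]), and recover x = u - v. Minimizing 0.5*||h||^2 with Algorithm 9 is then straightforward.

function [h, x] = smoothed_ave(A, b, tau, z, mu)
% Smoothed residual of the absolute value equation derived from (45)-(46).
n = size(A, 2);
G = A'*A;
M = [G, -G; -G, G];                         % matrix of the LCP (46)
q = tau*ones(2*n, 1) + [-A'*b; A'*b];
w = (M - eye(2*n))*z + q;
h = (M + eye(2*n))*z + q - sqrt(w.^2 + 4*mu^2);  % smoothed |w|
x = z(1:n) - z(n+1:end);                    % recover x = u - v
end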

Now, we give some numerical experiments of Algorithm 9, which are also considered in [19, 21, 22, 27, 28]. The numerical results of all examples indicate that the modified three-term conjugate gradient method is also effective for solving the $l_1$-norm minimization problem (40). In our numerical experiments, all codes run in Matlab R2014a. For Examples 13 and 14, the parameters used in Algorithm 9 are chosen as , , , and .

Example 13. Consider (40) with . In this example, we choose . The numerical results are given in Figure 4.

Figure 4: Numerical results for solving Example 13 with Algorithm 9.

Example 14. Consider (40) with . In this example, we take . The numerical results are given in Figure 5.

Figure 5: Numerical results for solving Example 14 with Algorithm 9.

Example 15. Consider a typical compressed sensing problem of the form (40), which is also considered in [21, 22, 27, 28]. We choose , , , , , , , and . The original signal contains 520 randomly generated spikes. Further, the matrix $A$ is obtained by first filling it with independent samples of a standard Gaussian distribution and then orthonormalizing its rows. We choose and . The numerical results are shown in Figure 6.

Figure 6: Numerical results for solving Example 15 with Algorithm 9.
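For readers who want to reproduce a problem in the spirit of Example 15, the following data-generation sketch can be used; the dimensions, sparsity level, noise level, and the choice of tau below are assumptions, not the paper's exact settings.

rng(0);
n = 4096; m = 1024; k = 128;                % assumed sizes and sparsity
xtrue = zeros(n, 1);
idx = randperm(n, k);
xtrue(idx) = sign(randn(k, 1));             % randomly generated spikes
A = randn(m, n);                            % standard Gaussian samples
A = orth(A')';                              % orthonormalize the rows of A
b = A*xtrue + 0.005*randn(m, 1);            % noisy measurements
tau = 0.1*norm(A'*b, inf);                  % a standard choice for tau, cf. [19]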

4. Conclusion

In this paper, we propose a modified three-term conjugate gradient method and give its applications in solving $M$-tensor systems and a kind of nonsmooth optimization problems with the $l_1$-norm. The global convergence of the proposed method is also given. Finally, we present some numerical experiments to demonstrate the efficiency of the proposed method.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2016AM29), and National Natural Science Foundation of China (no. 11671220).

References

1. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
2. B. T. Polyak, "The conjugate gradient method in extreme problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, pp. 94–112, 1969.
3. W. W. Hager and H. Zhang, "A survey of nonlinear conjugate gradient methods," Pacific Journal of Optimization, vol. 2, no. 1, pp. 35–58, 2006.
4. X. Chen and W. Zhou, "Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 765–790, 2010.
5. J. K. Liu, Y. M. Feng, and L. M. Zou, "Some three-term conjugate gradient methods with the inexact line search condition," Calcolo, vol. 55, no. 2, article 16, 2018.
6. S. Du and Y. Chen, "Global convergence of a modified spectral FR conjugate gradient method," Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
7. L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
8. L. Zhang and W. Zhou, "On the global convergence of the Hager-Zhang conjugate gradient method with Armijo line search," Acta Mathematica Scientia, vol. 28, no. 5, pp. 840–845, 2008.
9. R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
10. M. R. Hestenes and E. L. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–432, 1952.
11. Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
12. R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.
13. J. Nocedal, "Updating quasi-Newton matrices with limited storage," Mathematics of Computation, vol. 35, no. 151, pp. 773–782, 1980.
14. L. Zhang, W. Zhou, and D. H. Li, "A descent modified Polak-Ribiere-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
15. M. Rivaie, M. Mamat, L. W. June, and I. Mohd, "A new class of nonlinear conjugate gradient coefficients with global convergence properties," Applied Mathematics and Computation, vol. 218, no. 22, pp. 11323–11332, 2012.
16. W. Ding and Y. Wei, "Solving multi-linear systems with M-tensors," Journal of Scientific Computing, vol. 68, no. 2, pp. 689–715, 2016.
17. Z. Xie, X. Jin, and Y. Wei, "Tensor methods for solving symmetric M-tensor systems," Journal of Scientific Computing, vol. 74, no. 1, pp. 412–425, 2018.
18. S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "An interior-point method for large-scale l1-regularized least squares," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–617, 2007.
19. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, pp. 586–597, 2007.
20. E. T. Hale, W. Yin, and Y. Zhang, "Fixed-point continuation for l1-minimization: methodology and convergence," SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
21. Y. Xiao, Q. Wang, and Q. Hu, "Non-smooth equations based method for l1-norm problems with applications to compressed sensing," Nonlinear Analysis, vol. 74, no. 11, pp. 3570–3577, 2011.
22. Y. Chen, Y. Gao, Z. Liu, and S. Du, "The smoothing gradient method for a kind of special optimization problem," Operations Research Transactions, vol. 21, pp. 119–125, 2017.
23. X. Li and M. K. Ng, "Solving sparse non-negative tensor equations: algorithms and applications," Frontiers of Mathematics in China, vol. 10, no. 3, pp. 649–680, 2015.
24. S. Du, L. Zhang, C. Chen, and L. Qi, "Tensor absolute value equations," Science China Mathematics, vol. 61, no. 9, pp. 1695–1710, 2018.
25. L. Qi, "Eigenvalues of a real supersymmetric tensor," Journal of Symbolic Computation, vol. 40, no. 6, pp. 1302–1324, 2005.
26. Y. Yang and Q. Yang, "Further results for Perron-Frobenius theorem for nonnegative tensors," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 5, pp. 2517–2530, 2010.
27. S. Du and M. Chen, "A new smoothing modified three-term conjugate gradient method for l1-norm minimization problem," Journal of Inequalities and Applications, vol. 2018, article 105, 2018.
28. J. Yang and Y. Zhang, "Alternating direction algorithms for l1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
29. W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, 2008.
30. O. L. Mangasarian, "Absolute value equation solution via concave minimization," Optimization Letters, vol. 1, no. 1, pp. 3–8, 2007.