Special Issue: Machine Learning and its Applications in Image Restoration

Research Article | Open Access

Junyue Cao, Jinzhao Wu, "A Descent Conjugate Gradient Algorithm for Optimization Problems and Its Applications in Image Restoration and Compression Sensing", Mathematical Problems in Engineering, vol. 2020, Article ID 6157294, 9 pages, 2020. https://doi.org/10.1155/2020/6157294

# A Descent Conjugate Gradient Algorithm for Optimization Problems and Its Applications in Image Restoration and Compression Sensing

Guest Editor: Wenjie Liu
Accepted: 25 Aug 2020
Published: 29 Sep 2020

#### Abstract

It is well known that the nonlinear conjugate gradient algorithm is one of the most effective algorithms for optimization problems, owing to its low storage requirements and simple structure. This motivates us to design a modified conjugate gradient formula for the optimization model. The proposed conjugate gradient algorithm possesses several properties: (1) the search direction uses not only gradient values but also function values; (2) the presented direction has both the sufficient descent property and the trust region feature; (3) the proposed algorithm is globally convergent for nonconvex functions; (4) experiments on image restoration and compressive sensing problems are done to demonstrate the performance of the new algorithm.

#### 1. Introduction

Consider the following model:

$$\min \{ f(x) \mid x \in \mathbb{R}^n \}, \qquad (1)$$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuous function. Problem (1) has many practical applications in fields such as economics, biology, and engineering. It is well known that the nonlinear conjugate gradient (CG) method is one of the most effective methods for (1). A CG algorithm generates iterates by

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

where $\alpha_k$ denotes the steplength, $x_k$ is the $k$th iterate, and $d_k$ is the search direction designed by

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \quad d_0 = -g_0, \qquad (3)$$

where $g_k = \nabla f(x_k)$ is the gradient and $\beta_k$ is a scalar that determines the particular CG algorithm ([1–5], etc.). The Polak–Ribière–Polyak (PRP) formula [6, 7] is one of the best-known nonlinear CG formulas:

$$\beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \qquad (4)$$

where $y_k = g_{k+1} - g_k$ and $\|\cdot\|$ is the Euclidean norm. The PRP method has been studied by many scholars and many results have been obtained, since the PRP algorithm has superior numerical performance but weak convergence theory. At present, under the weak Wolfe–Powell (WWP) inexact line search and for nonconvex functions, the global convergence of the PRP algorithm is still open; it is one of the well-known open problems in the optimization field. Based on the PRP formula, many modified nonlinear CG formulas have been proposed, because many scholars want to exploit its excellent numerical behavior. Recently, Yuan et al. [17] opened up a new way by modifying the WWP line search technique and partly proved the global convergence of the PRP algorithm; further results have been obtained with this technique (see [18–20], etc.). It has also been shown that nonlinear CG algorithms can be applied to nonlinear equations, nonsmooth optimization, and image restoration problems (see [21–24], etc.).
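As a concrete reference point, the classical PRP iteration can be sketched in a few lines of Python. The backtracking line search and its parameters below are illustrative assumptions, not the paper's own settings, and a steepest-descent restart guards against the non-descent directions that plain PRP may produce:

```python
import numpy as np

def prp_cg(f, grad, x0, max_iter=500, tol=1e-6):
    """Classical PRP conjugate gradient with an Armijo backtracking search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0:                        # restart if d is not a descent direction
            d = -g
        alpha, sigma, rho = 1.0, 1e-4, 0.5    # illustrative line-search constants
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                         # gradient difference y_k
        beta = (g_new @ y) / (g @ g)          # PRP scalar
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

For example, minimizing the convex quadratic $f(x) = \|x - e\|^2$ from the origin converges to the vector of ones in a few iterations.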

We all know that the sufficient descent property

$$g_k^T d_k \le -c \|g_k\|^2, \qquad (5)$$

where $c > 0$ is a constant, plays an important role in the convergence analysis of CG methods (see [13, 14, 24], etc.). Another crucial condition on the scalar, $\beta_k \ge 0$, was pointed out by Powell [10] and further emphasized in the global convergence analyses [11, 12]. Under the assumption of the sufficient descent condition and the WWP technique, a modified PRP formula $\beta_k^{PRP+} = \max\{\beta_k^{PRP}, 0\}$ was presented by Gilbert and Nocedal [13], and its global convergence for nonconvex functions was established. All of these observations tell us that both property (5) and $\beta_k \ge 0$ are very important in CG algorithms. To obtain one of these conditions, or both, many scholars have made further studies and obtained many interesting results. Yu [25] presented a modified PRP nonlinear CG formula designed by

$$\beta_k^{MPRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2} - \mu \frac{\|y_k\|^2 g_{k+1}^T d_k}{\|g_k\|^4}, \qquad (6)$$

where $\mu > \frac{1}{4}$ is a positive constant, which has property (5) with $c = 1 - \frac{1}{4\mu}$. Yuan [12] proposed a further formula defined by

$$\beta_k^{YPRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2} - \min\left\{ \frac{g_{k+1}^T y_k}{\|g_k\|^2},\; \mu \frac{\|y_k\|^2 g_{k+1}^T d_k}{\|g_k\|^4} \right\}, \qquad (7)$$

which possesses not only property (5) with $c = 1 - \frac{1}{4\mu}$ but also the scalar property $\beta_k^{YPRP} \ge 0$. To get a greater descent, a three-term FR CG formula was given by Zhang et al. [26]:

$$d_{k+1} = -\theta_k g_{k+1} + \beta_k^{FR} d_k, \quad \theta_k = \frac{d_k^T y_k}{\|g_k\|^2}, \quad \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad (8)$$

which satisfies (5) with $c = 1$. Dai and Tian [27] gave another CG direction designed by

$$d_{k+1} = -g_{k+1} + \beta_k^{PRP} \left( d_k - \frac{g_{k+1}^T d_k}{\|g_{k+1}\|^2} g_{k+1} \right), \qquad (9)$$

which also possesses (5) with $c = 1$; its global convergence was proved by Dai and Tian [27] under Wolfe-type line searches. For nonconvex functions under the effective Armijo line search, however, they gave no analysis; one of the main reasons lies in the lack of the trust region feature. To overcome this, we [28] proposed a CG direction designed by

$$d_{k+1} = -g_{k+1} + \frac{(g_{k+1}^T y_k) d_k - (g_{k+1}^T d_k) y_k}{\max\{\mu \|d_k\| \|y_k\|,\; \|g_k\|^2\}}, \qquad (10)$$

where $\mu > 0$, which possesses not only (5) with $c = 1$ but also the trust region property. It has been proved that a CG formula has better numerical performance if it exploits not only gradient values but also function values. This motivates us to present a CG direction based on (10), designed by

$$d_{k+1} = -g_{k+1} + \frac{(g_{k+1}^T y_k^*) d_k - (g_{k+1}^T d_k) y_k^*}{\max\{\mu \|d_k\| \|y_k^*\|,\; \|g_k\|^2\}}, \quad d_0 = -g_0, \qquad (11)$$

where $\mu > 0$ and $y_k^* = y_k + \frac{\max\{\varrho_k, 0\}}{\|s_k\|^2} s_k$, with $s_k = x_{k+1} - x_k$ and $\varrho_k = 2\left[f(x_k) - f(x_{k+1})\right] + (g_{k+1} + g_k)^T s_k$. The vector $y_k^*$ has been proved to have good properties in theory and in experiments; Yuan et al. [29] used it in a CG formula and obtained good results.
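To make the two key properties of the new direction (11) tangible, the following sketch builds the direction with a modified vector $y^*$ that carries function-value information; the exact constants, names, and the form of $y^*$ here are our illustrative assumptions. One can then check numerically that $g^T d = -\|g\|^2$ (sufficient descent with $c = 1$) and $\|d\| \le (1 + 2/\mu)\|g\|$ (the trust region bound) hold for arbitrary inputs:

```python
import numpy as np

def direction11(g_new, g_old, d_old, s, f_new, f_old, mu=0.3):
    """Three-term CG direction using gradient and function-value information.
    Constants and the modified vector y* are illustrative reconstructions."""
    y = g_new - g_old
    # rho injects function-value information (a modified-secant-style term)
    rho = 2.0 * (f_old - f_new) + (g_new + g_old) @ s
    y_star = y + (max(rho, 0.0) / (s @ s)) * s
    denom = max(mu * np.linalg.norm(d_old) * np.linalg.norm(y_star),
                np.linalg.norm(g_old) ** 2)
    # the two correction terms cancel in g_new^T d, forcing exact descent
    return -g_new + ((g_new @ y_star) * d_old - (g_new @ d_old) * y_star) / denom
```

Because the two correction terms cancel in the inner product with `g_new`, the descent identity holds exactly, and the `max` in the denominator caps the direction's norm, which is precisely the trust region feature.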
These achievements inspire us to propose the new CG direction (11). This paper has the following features:
(i) The sufficient descent property and the trust region feature are obtained
(ii) The new direction uses not only gradient values but also function values
(iii) The given algorithm is globally convergent under the Armijo line search for nonconvex functions
(iv) Experiments on image restoration problems and compressive sensing are done to test the performance of the new algorithm

The next section states the given algorithm. The convergence analysis is given in Section 3, and numerical experiments are reported in Section 4. The last section presents the conclusion.

#### 2. Algorithm

Based on the discussion in the above section, the CG algorithm is listed in Algorithm 1.

Initial step: given any initial point $x_0 \in \mathbb{R}^n$ and positive constants $\epsilon \in (0, 1)$, $\delta \in (0, 1)$, $\sigma \in (0, \frac{1}{2})$, set $d_0 = -g_0$ and $k := 0$.
Step 1: stop if $\|g_k\| \le \epsilon$ is true.
Step 2: find $\alpha_k = \max\{\delta^i \mid i = 0, 1, 2, \ldots\}$ such that $f(x_k + \alpha_k d_k) \le f(x_k) + \sigma \alpha_k g_k^T d_k$ (Armijo line search).
Step 3: set $x_{k+1} = x_k + \alpha_k d_k$.
Step 4: stop if $\|g_{k+1}\| \le \epsilon$ holds.
Step 5: compute $d_{k+1}$ by (11).
Step 6: set $k := k + 1$ and go to Step 2.
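Under our reconstruction of the elided symbols, Algorithm 1 can be written out end to end; the parameter values and the exact form of the modified vector $y^*$ below are illustrative assumptions rather than the authors' settings:

```python
import numpy as np

def algorithm1(f, grad, x0, eps=1e-6, delta=0.5, sigma=1e-4, mu=0.3, max_iter=1000):
    """Sketch of Algorithm 1: Armijo backtracking plus direction (11)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                           # initial direction d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                 # Steps 1 and 4: stopping rule
            return x
        alpha = 1.0                                  # Step 2: Armijo line search
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= delta
        s = alpha * d                                # Step 3: next iterate
        x_new = x + s
        g_new = grad(x_new)
        # Step 5: direction (11) with the modified vector y* (reconstructed form)
        y = g_new - g
        rho = 2.0 * (f(x) - f(x_new)) + (g_new + g) @ s
        y_star = y + (max(rho, 0.0) / (s @ s)) * s
        denom = max(mu * np.linalg.norm(d) * np.linalg.norm(y_star),
                    np.linalg.norm(g) ** 2)
        d = -g_new + ((g_new @ y_star) * d - (g_new @ d) * y_star) / denom
        x, g = x_new, g_new                          # Step 6
    return x
```

Since the direction always satisfies $g^T d = -\|g\|^2 < 0$, the Armijo backtracking loop is guaranteed to terminate at every iteration.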

Theorem 1. Let the direction $d_k$ be defined by (11). Then there exists a positive constant $\tau$ satisfying

$$g_k^T d_k = -\|g_k\|^2 \qquad (12)$$

and

$$\|d_k\| \le \tau \|g_k\|. \qquad (13)$$

Proof. By (11), we directly get (12) and (13) for $k = 0$ with $d_0 = -g_0$ and any $\tau \ge 1$. If $k \ge 1$, using (11) again, we have

$$g_k^T d_k = -\|g_k\|^2 + \frac{(g_k^T y_{k-1}^*)(g_k^T d_{k-1}) - (g_k^T d_{k-1})(g_k^T y_{k-1}^*)}{\max\{\mu \|d_{k-1}\| \|y_{k-1}^*\|,\; \|g_{k-1}\|^2\}} = -\|g_k\|^2; \qquad (14)$$

then (12) is true. By (11) again, we can get

$$\|d_k\| \le \|g_k\| + \frac{\|g_k\| \|y_{k-1}^*\| \|d_{k-1}\| + \|g_k\| \|d_{k-1}\| \|y_{k-1}^*\|}{\mu \|d_{k-1}\| \|y_{k-1}^*\|} = \left(1 + \frac{2}{\mu}\right) \|g_k\|, \qquad (15)$$

which implies that (13) holds by choosing $\tau = 1 + \frac{2}{\mu}$. We complete the proof.

Remark 1. The relation (13) is the so-called trust region feature, and the above theorem tells us that direction (11) has not only the sufficient descent property but also the trust region feature. Relations (12) and (13) together make the proof of the global convergence of Algorithm 1 easy to establish.

#### 3. Global Convergence

For nonconvex functions, the global convergence of Algorithm 1 is established under the following assumptions.

Assumption 1. Assume that the function $f(x)$ has at least one stationary point $x^*$, that is, $\|\nabla f(x^*)\| = 0$ is true. Suppose that the level set $L_0 = \{x \mid f(x) \le f(x_0)\}$ is bounded.

Assumption 2. The function $f(x)$ is twice continuously differentiable and bounded below, and its gradient $g$ is Lipschitz continuous; that is, there exists a positive constant $L$ such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \qquad (16)$$

Now, we prove the global convergence of Algorithm 1.

Theorem 2. Let Assumptions 1 and 2 be true. Then, we get

$$\lim_{k \to \infty} \|g_k\| = 0. \qquad (17)$$

Proof. Using (12) and Step 2 of Algorithm 1, we obtain

$$f(x_{k+1}) \le f(x_k) + \sigma \alpha_k g_k^T d_k = f(x_k) - \sigma \alpha_k \|g_k\|^2, \qquad (18)$$

which means that the sequence $\{f(x_k)\}$ is decreasing and the following relation

$$\sigma \alpha_k \|g_k\|^2 \le f(x_k) - f(x_{k+1}) \qquad (19)$$

is true. For $k = 0$ to $\infty$, by summing the above inequalities and noting that $f$ is bounded below, we deduce that

$$\sigma \sum_{k=0}^{\infty} \alpha_k \|g_k\|^2 \le f(x_0) - \lim_{k \to \infty} f(x_{k+1}) < +\infty \qquad (20)$$

holds. Thus, we have

$$\lim_{k \to \infty} \alpha_k \|g_k\|^2 = 0. \qquad (21)$$

This implies that

$$\lim_{k \to \infty} \|g_k\| = 0 \qquad (22)$$

or

$$\lim_{k \to \infty} \alpha_k = 0. \qquad (23)$$

If (22) holds, the proof of this theorem is complete. Assuming that (23) is true, we aim to get (17). Let the stepsize $\alpha_k$ satisfy the Armijo condition in Step 2 of Algorithm 1; then, for $\alpha_k' = \alpha_k / \delta$, we have

$$f(x_k + \alpha_k' d_k) > f(x_k) + \sigma \alpha_k' g_k^T d_k. \qquad (24)$$

By (12) and (13) and the well-known mean value theorem, we obtain

$$f(x_k + \alpha_k' d_k) - f(x_k) \le \alpha_k' g_k^T d_k + \frac{L}{2} (\alpha_k')^2 \|d_k\|^2 \le -\alpha_k' \|g_k\|^2 + \frac{L}{2} (\alpha_k')^2 \tau^2 \|g_k\|^2, \qquad (25)$$

which, combined with (24), implies that

$$\alpha_k' > \frac{2(1 - \sigma)}{L \tau^2} > 0 \qquad (26)$$

is true. This is a contradiction to (23). Then, only relation (22) holds. We complete the proof.

Remark 2. We can see that the proof of the global convergence is very simple, since the defined direction (11) has not only the good sufficient descent property (12) but also the trust region feature (13).

#### 4. Numerical Results

The numerical experiments on image restoration problems and compressive sensing are performed with Algorithm 1 and the normal PRP algorithm, respectively. All codes are run on a PC with an Intel(R) Core(TM) i7-7700T CPU @ 2.9 GHz, 16.00 GB of RAM, and the Windows 10 operating system, and are written in MATLAB R2014a. The parameters $\epsilon$, $\delta$, $\sigma$, and $\mu$ are set to fixed values satisfying the requirements of Algorithm 1.

##### 4.1. Image Restoration Problems

Let $x$ be the true image with $M \times N$ pixels, and let $A = \{1, \ldots, M\} \times \{1, \ldots, N\}$ be the index set of all pixels. At a pixel location $(i, j) \in A$, $x_{i,j}$ denotes the gray level of $x$. Suppose that $y$ is the observed noisy image of $x$ corrupted by salt-and-pepper noise, and let $V_{i,j}$ be the neighborhood of $(i, j)$. By applying an adaptive median filter to the noisy image $y$, an image $\bar{y}$ is obtained. Let $s_{\max}$ denote the maximum of a noisy pixel and $s_{\min}$ denote the minimum of a noisy pixel. Then, the index set of the noise candidates is defined by

$$N = \{(i, j) \in A \mid \bar{y}_{i,j} \ne y_{i,j},\; y_{i,j} = s_{\min} \text{ or } s_{\max}\}.$$

The following conclusions can be obtained: (i) if $(i, j) \in N$, then $y_{i,j}$ must be restored; a pixel $(i, j) \notin N$ is identified as uncorrupted and its original value is kept, which means that $u_{i,j} = y_{i,j}$, where $u$ denotes the denoised image produced by the two-phase method; (ii) if $(i, j) \in N$ holds, $u_{i,j}$ is set to the restored value. Chan et al. [31] presented a new function without a nonsmooth term and minimized it for the restored images; it has the following form:

$$f_\alpha(u) = \sum_{(i,j) \in N} \left[ \sum_{(m,n) \in V_{i,j} \setminus N} \varphi_\alpha(u_{i,j} - y_{m,n}) + \frac{1}{2} \sum_{(m,n) \in V_{i,j} \cap N} \varphi_\alpha(u_{i,j} - u_{m,n}) \right],$$

where $\alpha$ is a constant and $\varphi_\alpha$ is an even edge-preserving potential function. The numerical performance of this functional is noteworthy [32, 33].
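A minimal sketch of the noise-candidate detection for the set $N$: a pixel is flagged when it takes an extreme value and disagrees with a median-filtered image. A plain 3×3 median filter is used here as a simple stand-in for the adaptive median filter, and the potential $\varphi_\alpha(t) = \sqrt{t^2 + \alpha}$ is one common example of an even edge-preserving function, not necessarily the paper's choice:

```python
import numpy as np

def median3x3(y):
    """Plain 3x3 median filter with edge replication; a stand-in for the
    adaptive median filter used in the two-phase method."""
    p = np.pad(y, 1, mode="edge")
    h, w = y.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def noise_candidates(y, s_min=0, s_max=255):
    """Boolean mask of the salt-and-pepper noise candidate set N."""
    y_bar = median3x3(y)
    extreme = (y == s_min) | (y == s_max)     # pixel takes an extreme value
    return (y_bar != y) & extreme             # and the filter would change it

def phi(t, a=1.0):
    """An even edge-preserving potential, phi_a(t) = sqrt(t^2 + a)."""
    return np.sqrt(t * t + a)
```

For a constant gray image with a single pepper/salt pixel, exactly that pixel is flagged, while uncorrupted pixels keep their values in the two-phase scheme.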

We choose Barbara, Man, Baboon, and Lena as the test images. The well-known PRP CG algorithm (PRP algorithm) is also run for comparison with Algorithm 1. The detailed performances are shown in Figures 1 and 2.

Figures 1 and 2 tell us that the two algorithms (Algorithm 1 and the PRP algorithm) successfully solve these image restoration problems, and the results are good. To compare their performances directly, the restoration quality is assessed by the peak signal-to-noise ratio (PSNR) defined in [34], which is computed and listed in Table 1. From the values in Table 1, we can see that Algorithm 1 is competitive with the PRP algorithm, since its PSNR values are close to those of the PRP algorithm.
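The PSNR comparison in Table 1 uses the standard definition, $10 \log_{10}(\text{peak}^2 / \text{MSE})$; a minimal version, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(restored, truth, peak=255.0):
    """Peak signal-to-noise ratio in dB between a restored and a true image."""
    diff = np.asarray(restored, dtype=float) - np.asarray(truth, dtype=float)
    mse = np.mean(diff ** 2)                  # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```

A maximal per-pixel error of 255 everywhere gives 0 dB, and smaller errors give larger PSNR values, which is why higher entries in Table 1 indicate better restorations.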

Table 1: PSNR values of the restored images under 20% and 40% salt-and-pepper noise.

| Noise level | Algorithm | Barbara | Man | Baboon | Lena | Average |
|---|---|---|---|---|---|---|
| 20% | Algorithm 1 | 31.115 | 38.0355 | 29.4393 | 41.0674 | 34.9143 |
| 20% | PRP algorithm | 31.1118 | 37.9583 | 29.4534 | 41.356 | 34.969 |
| 40% | Algorithm 1 | 27.5415 | 34.0063 | 25.8947 | 36.6496 | 31.0230 |
| 40% | PRP algorithm | 27.6153 | 34.5375 | 25.8571 | 36.701 | 31.1777 |
##### 4.2. Compressive Sensing

In this section, the following images are tested in the compressive sensing experiments: Phantom, Fruits, and Boat. Each image is reshaped into a vector, the observation matrix is built from the Fourier transform, and the measurements are taken in the Fourier domain.
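The Fourier-domain measurement described above can be sketched as sampling a random subset of 2-D DFT coefficients of the vectorized image; the sampling pattern and sizes below are illustrative assumptions:

```python
import numpy as np

def partial_fourier_measurements(img, m, seed=0):
    """Sample m random 2-D DFT coefficients of an image: a simple model of
    taking compressive measurements in the Fourier domain."""
    rng = np.random.default_rng(seed)
    coeffs = np.fft.fft2(img).ravel()          # full Fourier transform, vectorized
    idx = rng.choice(coeffs.size, size=m, replace=False)
    return idx, coeffs[idx]                    # measured rows and measurements b
```

A sparse-recovery solver (such as the CG-based minimization studied here) would then seek the image whose selected Fourier coefficients match these measurements.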

Figures 3–5 show that the two algorithms work well on these images and can solve them successfully.

#### 5. Conclusion

This paper studies unconstrained optimization problems by designing a CG algorithm. The given method possesses not only the sufficient descent property but also the trust region feature, and its global convergence is proved in a simple way. Image restoration and compressive sensing problems are tested to show that the proposed algorithm is competitive with the normal PRP algorithm. In the future, we will focus on the following aspects: (i) we believe that many other CG algorithms can be successfully applied to image restoration and compressive sensing; (ii) more experiments will be done to test the performance of the new algorithm.

#### Data Availability

All data are included in the paper.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61772006, the Science and Technology Program of Guangxi under Grant no. AB17129012, the Science and Technology Major Project of Guangxi under Grant no. AA17204096, the Special Fund for Scientific and Technological Bases and Talents of Guangxi under Grant no. 2016AD05050, and the Special Fund for Bagui Scholars of Guangxi.

1. Y. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, pp. 177–182, 2000.
2. R. Fletcher, Practical Methods of Optimization, John Wiley and Sons, New York, NY, USA, 2nd edition, 1987.
3. R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
4. M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
5. Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, part 1: theory," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
6. B. T. Polyak, "The conjugate gradient method in extreme problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
7. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue française d'informatique et de recherche opérationnelle, Série rouge, vol. 3, no. 16, pp. 35–43, 1969.
8. Y. Dai, "Convergence properties of the BFGS algorithm," SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2002.
9. Y. Dai, "Analysis of conjugate gradient methods," Ph.D. thesis, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, Beijing, China, 1997.
10. M. J. D. Powell, "Nonconvex minimization calculations and the conjugate gradient method," Lecture Notes in Mathematics, vol. 1066, Springer-Verlag, Berlin, Germany, 1984.
11. M. J. D. Powell, "Convergence properties of algorithms for nonlinear optimization," SIAM Review, vol. 28, no. 4, pp. 487–500, 1986.
12. G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
13. J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
14. W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
15. W. W. Hager and H. Zhang, "Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent," ACM Transactions on Mathematical Software, vol. 32, no. 1, pp. 113–137, 2006.
16. Z. Wei, S. Yao, and L. Liu, "The convergence properties of some new conjugate gradient methods," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1341–1350, 2006.
17. G. Yuan, Z. Wei, and X. Lu, "Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search," Applied Mathematical Modelling, vol. 47, pp. 811–825, 2017.
18. X. Li, S. Wang, Z. Jin, and H. Pham, "A conjugate gradient algorithm under Yuan-Wei-Lu line search technique for large-scale minimization optimization models," Mathematical Problems in Engineering, vol. 2018, Article ID 4729318, 11 pages, 2018.
19. G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, "The global convergence of a modified BFGS method for nonconvex functions," Journal of Computational and Applied Mathematics, vol. 327, pp. 274–294, 2018.
20. G. Yuan, Z. Wei, and Y. Yang, "The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions," Journal of Computational and Applied Mathematics, vol. 362, pp. 262–275, 2019.
21. J. Cao and J. Wu, "A conjugate gradient algorithm and its applications in image restoration," Applied Numerical Mathematics, vol. 152, pp. 243–252, 2020.
22. G. Yuan, T. Li, and W. Hu, "A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems," Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
23. G. Yuan, J. Lu, and Z. Wang, "The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems," Applied Numerical Mathematics, vol. 152, pp. 1–11, 2020.
24. G. Yuan, Z. Meng, and Y. Li, "A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations," Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.
25. G. Yu, "Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems," Ph.D. thesis, Sun Yat-Sen University, Guangzhou, China, 2007.
26. L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
27. Z.-F. Dai and B.-S. Tian, "Global convergence of some modified PRP nonlinear conjugate gradient methods," Optimization Letters, vol. 5, no. 4, pp. 615–630, 2011.
28. J. Cao and J. Wu, "A conjugate gradient algorithm and its applications in image restoration," Applied Numerical Mathematics, vol. 152, pp. 243–252, 2020.
29. G. Yuan, Z. Wei, and Q. Zhao, "A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems," IIE Transactions, vol. 46, no. 4, pp. 397–413, 2014.
30. G. Yuan and Z. Wei, "Convergence analysis of a modified BFGS method on convex minimizations," Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010.
31. R. H. Chan, C. W. Ho, C. Y. Leung, and M. Nikolova, "Minimization of detail-preserving regularization functional by Newton's method with continuation," in Proceedings of the IEEE International Conference on Image Processing, pp. 125–128, Genova, Italy, September 2005.
32. J. F. Cai, R. H. Chan, and B. Morini, "Minimization of an edge-preserving regularization functional by conjugate gradient type methods," in Image Processing Based on Partial Differential Equations, pp. 109–122, Springer, Berlin, Germany, 2007.
33. Y. Dong, R. H. Chan, and S. Xu, "A detection statistic for random-valued impulse noise," IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1112–1120, 2007.
34. A. Bovik, Handbook of Image and Video Processing, Academic Press, New York, NY, USA, 2000.
35. F. Rahpeymaii, K. Amini, T. Allahviranloo, and M. R. Malkhalifeh, "A new class of conjugate gradient methods for unconstrained smooth optimization and absolute value equations," Calcolo, vol. 56, 2019.
36. G. Yu, J. Huang, and Y. Zhou, "A descent spectral conjugate gradient method for impulse noise removal," Applied Mathematics Letters, vol. 23, no. 5, pp. 555–560, 2010.
