Mathematical Problems in Engineering
Volume 2018, Article ID 5817931, 9 pages
https://doi.org/10.1155/2018/5817931
Research Article

The Smoothing FR Conjugate Gradient Method for Solving a Kind of Nonsmooth Optimization Problem with $\ell_1$-Norm

School of Mathematics and Statistics, Qingdao University, Qingdao 266071, China

Correspondence should be addressed to Shou-qiang Du; sqdu@qdu.edu.cn

Received 9 October 2017; Accepted 27 December 2017; Published 23 January 2018

Academic Editor: Elisa Francomano

Copyright © 2018 Miao Chen and Shou-qiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study a method for solving a kind of nonsmooth optimization problem with $\ell_1$-norm, which arises widely in compressed sensing, image processing, and related optimization problems in engineering technology. Using a transformation based on absolute value equations, this kind of nonsmooth optimization problem is rewritten as a general unconstrained optimization problem, and the transformed problem is solved by a smoothing FR conjugate gradient method. Finally, numerical experiments show the effectiveness of the given smoothing FR conjugate gradient method.

1. Introduction

In the last few years, the problem of finding the sparsest solution of an underdetermined system of equations has been studied extensively. Finding the sparsest solution of an underdetermined system is equivalent to solving the following $\ell_0$-norm regularized minimization problem:
$$\min_{x}\ \|x\|_0 \quad \text{s.t.}\ Ax = b, \qquad (1)$$
where $A \in \mathbb{R}^{m\times n}$ with $m \ll n$, $b \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, and $\|x\|_0$ denotes the $\ell_0$-norm of $x$, that is, the number of its nonzero components. From [1–4], we know that the above problem is difficult to solve directly. In order to solve the $\ell_0$-norm problem effectively, a common approximation replaces the $\ell_0$-norm by the $\ell_1$-norm, which leads to the Basis Pursuit problem [5, 6]:
$$\min_{x}\ \|x\|_1 \quad \text{s.t.}\ Ax = b. \qquad (2)$$
Indeed, the convex envelope of $\|x\|_0$ (on the unit $\ell_\infty$ ball) is $\|x\|_1$, where $\|x\|_1$ is the $\ell_1$-norm of $x$; problem (1) itself is NP-hard. When $b$ contains noise in practical applications, the above problem is rewritten as the following nonsmooth optimization problem with $\ell_1$-norm:
$$\min_{x}\ \frac{1}{2}\|Ax - b\|_2^2 + \mu\|x\|_1, \qquad (3)$$
where $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$, $\mu > 0$, $\|\cdot\|_2$ denotes the 2-norm, and $\|\cdot\|_1$ denotes the 1-norm. Because $\ell_1$-norm minimization has good recovery properties, (3) is widely used in compressed sensing, image processing, and other related fields in engineering technology; see [7, 8] and the references therein. The objective function of (3) is convex but not differentiable. Recently, many scholars have studied methods for solving (3). For instance, gradient projection for sparse reconstruction was proposed by Figueiredo et al. in [9]; a two-step iterative shrinkage/thresholding (IST) method was proposed by Bioucas-Dias and Figueiredo in [10]; a fast IST algorithm was presented by Beck and Teboulle in [11]; SPGL1, a solver for large-scale sparse reconstruction, was proposed by van den Berg and Friedlander in [12], who consider a least-squares problem with an $\ell_1$-norm constraint and use a spectral gradient projection method; and an alternating direction method (ADM) was proposed by Yang and Zhang in [13]. Problem (3) was reformulated as a convex quadratic program in [14]. None of the references mentioned above exploits the relationship between linear complementarity problems and absolute value equations to solve (3): they do not use the structure of the absolute value equation to design a new method, nor do they translate the original problem into an absolute value equation and apply the effective methods available for absolute value equations. Only recently, in [15], a smoothing gradient method was given for solving (3) based on the absolute value equations. Therefore, in this paper, we study how to use this transformation to solve (3) by the FR conjugate gradient method.
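For concreteness, the following MATLAB sketch evaluates the nonsmooth objective in (3); the data A, b and the regularization parameter mu below are illustrative placeholders, not values taken from this paper.

% Objective of problem (3): f(x) = 0.5*||A*x - b||_2^2 + mu*||x||_1.
f = @(x, A, b, mu) 0.5*norm(A*x - b, 2)^2 + mu*norm(x, 1);

% Illustrative use with random data (placeholder sizes, not from the paper):
m = 8; n = 32;
A  = randn(m, n);
xs = zeros(n, 1); xs(randperm(n, 3)) = randn(3, 1);   % a sparse signal
b  = A*xs + 0.01*randn(m, 1);                         % noisy observations
mu = 0.1;
f(zeros(n, 1), A, b, mu)    % objective value at the zero vector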

As is well known, a linear complementarity problem can be rewritten as an absolute value equation problem, based on the equivalence between the two; see, for example, [16–18]. The absolute value equation can now be solved efficiently. On the other hand, the conjugate gradient method is suitable for solving large-scale optimization problems and has a simple structure and global convergence [19–25]. In addition, smoothing methods have been used to solve related nonsmooth optimization problems; see, for example, [26–28] and the references therein. Based on the above analysis, we present a new smoothing FR conjugate gradient method to solve (3); this is the motivation of this paper. The global convergence analysis of the given method is also presented. Finally, some computational results show that the smoothing FR conjugate gradient method is efficient in practice.

The remainder of this paper is organized as follows. In Section 2, we give the preliminaries, including a description of how the linear complementarity problem is transformed into the absolute value equation problem. In Section 3, we present the smoothing FR conjugate gradient method and give its convergence analysis. Finally, in Section 4, we report some numerical results, which show the effectiveness of the given method.

2. Preliminaries

Firstly, we give the transformation form of (3). Any vector $x \in \mathbb{R}^n$ can be written as
$$x = u - v, \quad u \ge 0, \ v \ge 0, \qquad (4)$$
where $u_i = (x_i)_+$ and $v_i = (-x_i)_+$ for all $i = 1, \dots, n$, and $(\cdot)_+$ denotes the nonnegative part. By the definition of $u$ and $v$, we get $\|x\|_1 = e^T u + e^T v$, where $e = (1, 1, \dots, 1)^T \in \mathbb{R}^n$. Therefore, as in [13–15], problem (3) can be rewritten as
$$\min_{u,v}\ \frac{1}{2}\|b - A(u - v)\|_2^2 + \mu e^T u + \mu e^T v \quad \text{s.t.}\ u \ge 0, \ v \ge 0. \qquad (5)$$
The above problem can be transformed (up to a constant term) to
$$\min_{z}\ \frac{1}{2} z^T H z + c^T z \quad \text{s.t.}\ z \ge 0, \qquad (6)$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad c = \mu e_{2n} + \begin{pmatrix} -A^T b \\ A^T b \end{pmatrix}, \qquad H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}, \qquad (7)$$
and $e_{2n}$ denotes the all-ones vector in $\mathbb{R}^{2n}$. Since $H$ is a positive semidefinite matrix, problem (3) can be transformed into a convex optimization problem. Then problem (6) can be transformed into a linear variational inequality problem, which is to find $z \ge 0$ such that
$$(z' - z)^T (Hz + c) \ge 0 \quad \text{for all } z' \ge 0. \qquad (8)$$
Given that the feasible region of (8) has a special structure (the nonnegative orthant), (8) can be rewritten as the linear complementarity problem, which is to find $z \in \mathbb{R}^{2n}$ such that
$$z \ge 0, \qquad Hz + c \ge 0, \qquad z^T (Hz + c) = 0. \qquad (9)$$
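As an illustration, a minimal MATLAB sketch that forms H and c of (7) from given A, b, and mu is shown below; the variable names are ours, and the construction follows the standard splitting used in [9, 14].

% Form H and c of the quadratic program (6)-(7) from A (m x n), b, and mu > 0,
% following the splitting x = u - v, z = [u; v].
n   = size(A, 2);
AtA = A' * A;                        % n x n Gram matrix
Atb = A' * b;
H   = [AtA, -AtA; -AtA, AtA];        % 2n x 2n, positive semidefinite
c   = mu * ones(2*n, 1) + [-Atb; Atb];

% Recover x from a solution z = [u; v] of (6):
% u = z(1:n); v = z(n+1:2*n); x = u - v;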

Now, we give some results about the absolute value equations and the linear complementarity problems; see, for example, [16, 29, 30]. The absolute value equations have the form $Ax - |x| = b$, where $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$, and $|x|$ denotes the componentwise absolute value. The linear complementarity problems have the form $z \ge 0$, $Mz + q \ge 0$, $z^T(Mz + q) = 0$, where $M \in \mathbb{R}^{n\times n}$ and $q \in \mathbb{R}^n$.

Proposition 1. (i) If $1$ is not an eigenvalue of $M$, then the linear complementarity problem can be reduced to an absolute value equation. (ii) The absolute value equation is equivalent to a bilinear program. (iii) The absolute value equation is equivalent to a generalized linear complementarity problem.

Proposition 2. Equation (9) can be transformed into an equivalent absolute value equation problem, which we refer to as (13).

Proof. Based on Proposition 1 and (9), we obtain (14). Then, by (14), we have (15). To satisfy the last equation of (15) for all admissible values, we denote the corresponding quantity as in (16). Substituting (16) into (15), we get (17). By (i) of Proposition 1, (9) can be reduced to the absolute value equation (18), and substituting (17) into (18), we get the absolute value equation problem (19). Thus, we get (13). Then, problem (3) can be transformed into the unconstrained nonsmooth optimization problem (20).
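The displayed equations (13)–(19) are not reproduced here. As an illustration of the reduction behind Proposition 1(i), the following MATLAB sketch converts LCP data (M, q) into an equivalent absolute value equation under the assumption that 1 is not an eigenvalue of M; this is one standard arrangement of the reduction and may differ in form from the one used in the paper.

% Reduce the LCP  z >= 0, M*z + q >= 0, z'*(M*z + q) = 0
% to an AVE  Atil*x - |x| = btil (assuming 1 is not an eigenvalue of M).
% The substitution z = |x| + x, w = |x| - x makes complementarity automatic.
n    = size(M, 1);
Atil = (eye(n) - M) \ (eye(n) + M);
btil = -((eye(n) - M) \ q);

% Given a solution x of the AVE, recover the LCP solution:
% z = abs(x) + x;          % then z >= 0
% w = abs(x) - x;          % and w = M*z + q >= 0 with z'*w = 0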

3. The Smoothing FR Conjugate Gradient Method

In this section, we give the smoothing FR conjugate gradient method for solving (20). Firstly, we give the definition of a smoothing function and the smoothing approximation of the absolute value function; see [15, 26, 27].

Definition 3. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. We call $\tilde f: \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}$ a smoothing function of $f$ if $\tilde f(\cdot, \mu)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\mu > 0$ and, for any $x \in \mathbb{R}^n$,
$$\lim_{z \to x,\ \mu \downarrow 0} \tilde f(z, \mu) = f(x).$$
There are many smoothing functions; for example, Chen and Mangasarian introduced a class of smooth approximations of the plus function $(t)_+ = \max\{t, 0\}$. Let $\rho: \mathbb{R} \to \mathbb{R}_+$ be a piecewise continuous density function satisfying
$$\rho(s) = \rho(-s), \qquad \int_{-\infty}^{+\infty} |s|\,\rho(s)\,ds < \infty.$$
Then
$$\phi(t, \mu) = \int_{-\infty}^{+\infty} (t - \mu s)_+\,\rho(s)\,ds$$
is a smoothing function of $(t)_+$.

In this paper, we use a smoothing approximation function (24) of the absolute value function, which yields a smoothing approximation of (20). Then, problem (20) is solved by a smoothing conjugate gradient method. Formula (24) also satisfies condition (25). Based on (24), we can get the smoothing function (26) of problem (20), which we denote by $\tilde f(x, \mu)$ with smoothing parameter $\mu > 0$.
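Since formula (24) is not reproduced above, the following MATLAB sketch shows one concrete smoothing function of $|t|$ obtained from the Chen-Mangasarian class with the uniform density; it is an illustrative choice and not necessarily the function used in the paper.

% A smoothing function of |t| (uniform density in the Chen-Mangasarian class):
% phi(t,mu) = t^2/(2*mu) + mu/2 for |t| <= mu, and |t| otherwise.
phi  = @(t, mu) (abs(t) <= mu).*(t.^2/(2*mu) + mu/2) + (abs(t) > mu).*abs(t);

% Its derivative with respect to t, continuous for every mu > 0:
dphi = @(t, mu) (abs(t) <= mu).*(t/mu) + (abs(t) > mu).*sign(t);

% phi(t, mu) -> |t| as mu -> 0; for example, phi(0.3, 1e-6) - abs(0.3)
% is of order mu.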

Now, we give the smoothing FR conjugate gradient method for solving (20).

Algorithm 4 (the smoothing FR conjugate gradient method).
Step 1. Choose the Armijo line-search parameters, the parameters of the smoothing-parameter update, an initial smoothing parameter $\mu_0 > 0$, and an initial point $x_0$. Set $k = 0$, compute $\nabla \tilde f(x_0, \mu_0)$, and let $d_0 = -\nabla \tilde f(x_0, \mu_0)$.
Step 2. If the stopping criterion on $\|\nabla \tilde f(x_k, \mu_k)\|$ is met, then terminate the method; otherwise, let $d_k = -\nabla \tilde f(x_k, \mu_k) + \beta_k d_{k-1}$, where $\beta_k$ is the FR parameter, $\beta_k = \|\nabla \tilde f(x_k, \mu_k)\|^2 / \|\nabla \tilde f(x_{k-1}, \mu_{k-1})\|^2$.
Step 3. Compute the step size $\alpha_k$ by the Armijo line search and set $x_{k+1} = x_k + \alpha_k d_k$.
Step 4. If $\|\nabla \tilde f(x_{k+1}, \mu_k)\|$ is not smaller than a fixed multiple of $\mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, reduce the smoothing parameter by a fixed factor.
Step 5. Set $k := k + 1$; go to Step 2.
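Since the displayed formulas of Algorithm 4 are not reproduced above, the following MATLAB sketch gives one concrete realization of a smoothing FR conjugate gradient iteration; the function handles smoothF and smoothG (for the smoothing function (26) and its gradient) and all parameter values are assumptions made for illustration, not the settings used in the paper.

function x = smoothing_fr_cg(smoothF, smoothG, x, mu)
% A sketch of a smoothing FR conjugate gradient iteration.
% smoothF(x, mu) and smoothG(x, mu) are assumed to evaluate the smoothing
% function (26) and its gradient; parameter values below are illustrative.
rho = 0.5; sigma = 1e-4;        % Armijo backtracking parameters (assumed)
gamma = 1; theta = 0.5;         % smoothing-parameter update (assumed)
tol = 1e-8; maxit = 5000;
g = smoothG(x, mu); d = -g;
for k = 1:maxit
    if norm(g) <= tol, break; end
    if g'*d >= 0, d = -g; end                 % safeguard: descent direction
    alpha = 1;                                % Armijo line search along d
    while alpha > 1e-12 && ...
          smoothF(x + alpha*d, mu) > smoothF(x, mu) + sigma*alpha*(g'*d)
        alpha = rho*alpha;
    end
    x = x + alpha*d;
    gnew = smoothG(x, mu);
    if norm(gnew) < gamma*mu                  % reduce the smoothing parameter
        mu = theta*mu;
        gnew = smoothG(x, mu);
    end
    beta = (norm(gnew)/norm(g))^2;            % Fletcher-Reeves formula
    d = -gnew + beta*d;
    g = gnew;
end
end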

Now, we give the convergence analysis of Algorithm 4.

Theorem 5. Suppose , then

Proof. According to the definition of $d_k$ and the choice of $\beta_k$ in Algorithm 4, we obtain the stated relation. The conclusion then follows immediately.

Theorem 6. Suppose that $\tilde f$ is a smoothing function of the objective of (20) and that, for any constant $\mu > 0$, $\tilde f(\cdot, \mu)$ is bounded on the level set determined by the initial point. Then the sequence $\{x_k\}$ generated by Algorithm 4 satisfies $\lim_{k \to \infty} \mu_k = 0$ and $\liminf_{k \to \infty} \|\nabla \tilde f(x_k, \mu_k)\| = 0$.

Proof. Define $K$ as the set of iterations at which the smoothing parameter is reduced in Step 4 of Algorithm 4. If $K$ is a finite set, then there exists an integer $\bar k$ such that, for all $k > \bar k$, the smoothing parameter is not reduced in Step 4 of Algorithm 4, so $\mu_k$ remains fixed. Then, applying the corresponding convergence theorems in [26] to the resulting smooth function, the conjugate gradient method for minimizing it drives the gradient norm to zero along a subsequence, which contradicts (31). This shows that $K$ must be infinite and $\lim_{k \to \infty} \mu_k = 0$. Because $K$ is infinite, we can suppose $K = \{k_0, k_1, k_2, \dots\}$ with $k_0 < k_1 < k_2 < \cdots$, and then we obtain $\liminf_{k \to \infty} \|\nabla \tilde f(x_k, \mu_k)\| = 0$.

4. Numerical Tests

In this section, we give numerical results for Algorithm 4. The numerical experiments are also considered in [9, 14, 15]. In Examples 1, 2, 3, and 4, we compare Algorithm 4 with the smoothing gradient method in [15]. In Example 5, we compare Algorithm 4 with the GPSR, debiased, and minimum norm methods proposed in [4, 9, 13]. The numerical results of all the examples illustrate that Algorithm 4 is effective. All codes for the test problems are written in MATLAB 8.0. For Examples 1–4, the same parameter values of Algorithm 4 are used.

Example 1. Consider the following optimization problem with given data.

The problem has a known optimal solution; see [14, 15]. We report the optimal solution computed by our method and that computed by the smoothing gradient method. In Figures 1 and 2, we plot the evolution of the objective function versus the iteration number when solving Example 1 with Algorithm 4 and with the smoothing gradient method in [15], respectively.

Figure 1: Numerical results for solving Example 1 with Algorithm 4.
Figure 2: Numerical results for solving Example 1 with smoothing gradient method.

Example 2. Consider the following optimization problem with given data.

Figures 3 and 4 plot the evolution of the objective function versus the iteration number, for the first problem setting, when solving Example 2 with Algorithm 4 and with the smoothing gradient method, respectively. Figures 5 and 6 plot the corresponding results for the second problem setting. By comparison, we see that the number of iterations of Algorithm 4 is smaller than that of the smoothing gradient method in [15].

Figure 3: Numerical results for solving Example 2 with Algorithm 4 (first problem setting).
Figure 4: Numerical results for solving Example 2 with smoothing gradient method (first problem setting).
Figure 5: Numerical results for solving Example 2 with Algorithm 4 (second problem setting).
Figure 6: Numerical results for solving Example 2 with smoothing gradient method (second problem setting).

Example 3. Consider the following optimization problem with given data.

Figures 7 and 8 show, for the first problem setting, the objective function value against the iteration number when solving Example 3 with Algorithm 4 and with the smoothing gradient method, respectively. Figures 9 and 10 show the corresponding results for the second problem setting. By comparison, the objective function decreases faster with Algorithm 4 than with the smoothing gradient method in [15].

Figure 7: Numerical results for solving Example 3 with Algorithm 4 (first problem setting).
Figure 8: Numerical results for solving Example 3 with smoothing gradient method (first problem setting).
Figure 9: Numerical results for solving Example 3 with Algorithm 4 (second problem setting).
Figure 10: Numerical results for solving Example 3 with smoothing gradient method (second problem setting).

Example 4. Consider the following optimization problem with given data.

Figures 11 and 12 show the objective function plotted against the iteration number, for the first problem setting, when solving Example 4 with Algorithm 4 and with the smoothing gradient method, respectively. Figures 13 and 14 show the corresponding results for the second problem setting, with the smoothing gradient method taken from [15]. By comparison, Algorithm 4 is more effective than the smoothing gradient method.

Figure 11: Numerical results for solving Example 4 with Algorithm 4 (first problem setting).
Figure 12: Numerical results for solving Example 4 with smoothing gradient method (first problem setting).
Figure 13: Numerical results for solving Example 4 with Algorithm 4 (second problem setting).
Figure 14: Numerical results for solving Example 4 with smoothing gradient method (second problem setting).

Example 5. We consider a typical compressed sensing scenario, which is also considered in [1–4, 13–15]. The goal is to reconstruct a length-$n$ sparse signal from $k$ observations, where $k < n$. In this example, the signal length, the number of observations, and the parameters of Algorithm 4 are fixed, and the original signal contains 15 randomly generated spikes. The observation $b$ is generated by applying $A$ to the original signal and adding noise. Furthermore, the matrix $A$ is obtained by first filling it with independent samples of a standard Gaussian distribution and then orthonormalizing its rows. The remaining parameters are chosen as suggested in [14]. Figure 15 shows the results of the signal reconstruction.

Figure 15: Numerical results for solving Example 5 with Algorithm 4.
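For readers who wish to reproduce a test of this kind, the following MATLAB sketch generates a compressed sensing instance in the spirit of Example 5; the sizes, sparsity, and noise level are illustrative placeholders, not the values used in the paper.

% Generate a compressed-sensing test instance in the spirit of Example 5.
% All numerical settings below are placeholders for illustration only.
n = 1024;  k = 256;  nspikes = 15;
xtrue = zeros(n, 1);
idx = randperm(n, nspikes);
xtrue(idx) = sign(randn(nspikes, 1));      % randomly generated spikes

A = randn(k, n);                           % i.i.d. standard Gaussian entries
A = orth(A')';                             % orthonormalize the rows of A

sigma_noise = 1e-3;
b = A*xtrue + sigma_noise*randn(k, 1);     % noisy observations

mu = 0.1*norm(A'*b, inf);                  % a common heuristic choice for mu
% The triple (A, b, mu) defines an instance of problem (3).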

5. Conclusion

Compared with the GPSR method and other methods in [4, 9, 13–15], the smoothing FR conjugate gradient method is simple and requires little storage. The establishment and continuous improvement of smoothing methods for (3) provide a very useful tool for meeting the challenges of many practical problems. For example, Figure 15 shows that the smoothing FR conjugate gradient method works well and provides an efficient approach to denoising sparse signals. Compared with the smoothing gradient method in [15], the smoothing FR conjugate gradient method is significantly faster, especially when many iterations are required. We have also shown that, under weak conditions, the smoothing FR conjugate gradient method converges globally.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National Natural Science Foundation of China (no. 11671220) and Natural Science Foundation of Shandong Province (no. ZR2016AM29).

References

  1. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed $\ell_0$ norm,” IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
  2. J. Jin, Y. Gu, and S. Mei, “A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 409–420, 2010.
  3. K. Shi and P. Shi, “Adaptive sparse Volterra system identification with $\ell_0$-norm penalty,” Signal Processing, vol. 91, no. 10, pp. 2432–2436, 2011.
  4. Y. Liu and J. Hu, “A neural network for minimization based on scaled gradient projection: application to compressed sensing,” Neurocomputing, vol. 173, pp. 988–993, 2016.
  5. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
  6. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
  7. A. Y. Yang, S. S. Sastry, A. Ganesh, and Y. Ma, “Fast $\ell_1$-minimization algorithms and an application in robust face recognition: a review,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 1849–1852, Hong Kong, September 2010.
  8. A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
  9. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
  10. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.
  11. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
  12. E. van den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
  13. J. Yang and Y. Zhang, “Alternating direction algorithms for $\ell_1$-problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
  14. Y. Xiao, Q. Wang, and Q. Hu, “Non-smooth equations based method for $\ell_1$-norm problems with applications to compressed sensing,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 11, pp. 3570–3577, 2011.
  15. Y. Chen, Y. Gao, Z. Liu, and S. Du, “The smoothing gradient method for a kind of special optimization problem,” Operations Research Transactions, pp. 119–125, 2017.
  16. L. Caccetta, B. Qu, and G. Zhou, “A globally and quadratically convergent method for absolute value equations,” Computational Optimization and Applications, vol. 48, no. 1, pp. 45–58, 2011.
  17. O. L. Mangasarian, “Absolute value programming,” Computational Optimization and Applications, vol. 36, no. 1, pp. 43–53, 2007.
  18. O. L. Mangasarian, “A generalized Newton method for absolute value equations,” Optimization Letters, vol. 3, no. 1, pp. 101–108, 2009.
  19. G. Yu, L. Qi, and Y. Dai, “On nonmonotone Chambolle gradient projection algorithms for total variation image restoration,” Journal of Mathematical Imaging and Vision, vol. 35, no. 2, pp. 143–154, 2009.
  20. Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, and Z. Li, “Spectral gradient projection method for monotone nonlinear equations with convex constraints,” Applied Numerical Mathematics, vol. 59, no. 10, pp. 2416–2423, 2009.
  21. R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, pp. 149–154, 1964.
  22. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
  23. S. Du and Y. Chen, “Global convergence of a modified spectral FR conjugate gradient method,” Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
  24. Y. Dai and Y. Yuan, “A class of globally convergent conjugate gradient methods,” Science China Mathematics, vol. 46, no. 2, pp. 251–261, 2003.
  25. Y. Dai, J. Han, G. Liu, D. Sun, H. Yin, and Y.-X. Yuan, “Convergence properties of nonlinear conjugate gradient methods,” SIAM Journal on Optimization, vol. 10, no. 2, pp. 345–358, 1999.
  26. D. Pang, S. Du, and J. Ju, “The smoothing Fletcher-Reeves conjugate gradient method for solving finite minimax problems,” ScienceAsia, vol. 42, no. 1, pp. 40–45, 2016.
  27. X. Chen, “Smoothing methods for nonsmooth, nonconvex minimization,” Mathematical Programming, vol. 134, no. 1, Ser. B, pp. 71–99, 2012.
  28. L. Zhang, S.-Y. Wu, and T. Gao, “Improved smoothing Newton methods for nonlinear complementarity problems,” Applied Mathematics and Computation, vol. 215, no. 1, pp. 324–332, 2009.
  29. O. L. Mangasarian, “Absolute value equation solution via concave minimization,” Optimization Letters, vol. 1, no. 1, pp. 3–8, 2007.
  30. O. L. Mangasarian and R. R. Meyer, “Absolute value equations,” Linear Algebra and its Applications, vol. 419, no. 2-3, pp. 359–367, 2006.