Abstract

We study a method for solving a class of nonsmooth optimization problems with the $\ell_1$-norm, which arise widely in compressed sensing, image processing, and related optimization problems with broad engineering applications. By means of the absolute value equations, this class of nonsmooth optimization problems is rewritten as a general unconstrained optimization problem, and the transformed problem is solved by a smoothing FR conjugate gradient method. Finally, numerical experiments show the effectiveness of the given smoothing FR conjugate gradient method.

1. Introduction

In recent years, the problem of finding the sparsest solution of an underdetermined system of linear equations has been studied extensively. Finding the sparsest solution of an underdetermined system of equations is equivalent to solving the following $\ell_0$-norm minimization problem:
$$\min_{x \in \mathbb{R}^n} \ \|x\|_0 \quad \text{s.t.} \quad Ax = b, \tag{1}$$
where $A \in \mathbb{R}^{m \times n}$ ($m < n$), $b \in \mathbb{R}^m$, and $\|x\|_0$ denotes the $\ell_0$-norm of $x$, that is, the number of nonzero components of $x$. From [1–4], we know that the above problem is difficult to solve in a straightforward way. In order to solve the $\ell_0$-norm problem effectively, an approximation model replaces the $\ell_0$-norm by the $\ell_1$-norm, which gives the basis pursuit problem, such as in [5, 6]:
$$\min_{x \in \mathbb{R}^n} \ \|x\|_1 \quad \text{s.t.} \quad Ax = b. \tag{2}$$
Here the convex envelope of $\|x\|_0$ is $\|x\|_1$, where $\|x\|_1$ is the $\ell_1$-norm of $x$, while problem (1) itself is an NP-hard problem. When $b$ contains some noise in practical applications, the above problem is rewritten as the following nonsmooth optimization problem with the $\ell_1$-norm:
$$\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\|Ax - b\|_2^2 + \mu \|x\|_1, \tag{3}$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $\mu > 0$, $\|\cdot\|_2$ denotes the 2-norm, and $\|\cdot\|_1$ denotes the 1-norm. Because minimization with the $\ell_1$-norm yields good recovery properties, (3) is widely used in compressed sensing, image processing, and other related fields in engineering technology; one can see [7, 8] and the references therein. For any $\mu > 0$, the objective function of (3) is convex but not differentiable. Recently, many scholars have studied methods for solving (3). For instance, gradient projection for sparse reconstruction (GPSR) was proposed by Figueiredo et al. in [9]; a two-step iterative shrinkage/thresholding (IST) method was proposed by Bioucas-Dias and Figueiredo in [10]; a fast IST algorithm was presented by Beck and Teboulle in [11]; SPGL1, a solver for large-scale sparse reconstruction, was proposed by van den Berg and Friedlander in [12], who consider a least-squares problem with an $\ell_1$-norm constraint and use a spectral gradient projection method; and the ADM method was proposed by Yang and Zhang in [13]. Problem (3) was formulated as a convex quadratic program in [14]. Among all the references mentioned above, none uses the relationship between linear complementarity problems and absolute value equations to solve (3): they do not exploit the structure of the absolute value equation to propose a new method for (3), and they do not translate the original problem into an absolute value equation problem so that the effective methods for absolute value equations can be applied. Only recently, in [15], a smoothing gradient method was given for solving (3) based on the absolute value equations. Therefore, in this paper, we study how to use this new transformation to solve (3) by means of the well-known FR conjugate gradient method.
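To fix notation, the following short Python sketch evaluates the objective function of (3) on a small randomly generated instance; the data $A$, $b$, $x$ and the value of $\mu$ are purely illustrative and are not taken from the experiments of this paper.

```python
import numpy as np

def l1_ls_objective(A, b, x, mu):
    """Objective of problem (3): 0.5*||A x - b||_2^2 + mu*||x||_1."""
    residual = A @ x - b
    return 0.5 * residual @ residual + mu * np.sum(np.abs(x))

# Small illustrative instance (data chosen arbitrarily for this sketch).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x = np.zeros(20); x[[2, 7]] = [1.0, -2.0]      # a sparse vector
b = A @ x + 0.01 * rng.standard_normal(8)      # noisy observation
print(l1_ls_objective(A, b, x, mu=0.1))
```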

As is well known, the linear complementarity problem can be rewritten as an absolute value equation problem, mainly based on the equivalence between the linear complementarity problem and the absolute value equation problem; see, for example, [16–18]. The absolute value equation can now be solved efficiently. On the other hand, the conjugate gradient method is suitable for solving large-scale optimization problems and has a simple structure and global convergence [19–25]. In addition, smoothing methods are used to solve related nonsmooth optimization problems; see [26–28] and the references therein. Based on the above analysis, we present a new smoothing FR conjugate gradient method to solve (3); this is also our motivation for writing this paper. The global convergence analysis of the given method is also presented. Finally, some computational results show that the smoothing FR conjugate gradient method is efficient in practice.

The remainder of this paper is organized as follows. In Section 2, we give the preliminaries, including a description of how the linear complementarity problem is transformed into the absolute value equation problem. In Section 3, we present the smoothing FR conjugate gradient method and give its convergence analysis. Finally, in Section 4, we give some numerical results, which show the effectiveness of the given method.

2. Preliminaries

Firstly, we give the transformation form of (3). Any vector $x \in \mathbb{R}^n$ can be formulated as
$$x = u - v, \quad u \ge 0, \quad v \ge 0, \tag{4}$$
where $u_i = (x_i)_+$, $v_i = (-x_i)_+$, and $(x_i)_+ = \max\{x_i, 0\}$ for all $i = 1, \ldots, n$. By the definition of $u$ and $v$, we get $\|x\|_1 = \mathbf{1}_n^T u + \mathbf{1}_n^T v$, where $\mathbf{1}_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Therefore, as in [13–15], problem (3) can be rewritten as
$$\min_{u, v} \ \frac{1}{2}\|A(u - v) - b\|_2^2 + \mu \mathbf{1}_n^T u + \mu \mathbf{1}_n^T v \quad \text{s.t.} \quad u \ge 0, \ v \ge 0. \tag{5}$$
The above problem can be transformed to
$$\min_{z \in \mathbb{R}^{2n}} \ \frac{1}{2} z^T B z + c^T z \quad \text{s.t.} \quad z \ge 0, \tag{6}$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad c = \mu \mathbf{1}_{2n} + \begin{pmatrix} -A^T b \\ A^T b \end{pmatrix}, \qquad B = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}. \tag{7}$$
Since $B$ is a positive semidefinite matrix, problem (3) can be transformed into a convex optimization problem. Then problem (6) can be transformed into a linear variational inequality problem, which is to find $z^* \ge 0$ such that
$$(z - z^*)^T (B z^* + c) \ge 0, \quad \forall z \ge 0. \tag{8}$$
Given that the feasible region of (6) has a special structure (the nonnegative orthant), (8) can be rewritten as the linear complementarity problem, which is to find $z \in \mathbb{R}^{2n}$ such that
$$z \ge 0, \quad Bz + c \ge 0, \quad z^T (Bz + c) = 0. \tag{9}$$
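As an illustration of the transformation (4)–(9), the following Python sketch builds $B$ and $c$ from given data $A$, $b$, $\mu$ and checks numerically that, for $x = u - v$ with $u = (x)_+$ and $v = (-x)_+$, the quadratic objective of (6) agrees with the objective of (3) up to the constant $\frac{1}{2}\|b\|_2^2$; the helper name split_qp_data is ours and the data are randomly generated for illustration only.

```python
import numpy as np

def split_qp_data(A, b, mu):
    """Build B and c of (6)-(7) from the data of problem (3)."""
    G = A.T @ A
    B = np.block([[G, -G], [-G, G]])
    c = mu * np.ones(2 * A.shape[1]) + np.concatenate([-A.T @ b, A.T @ b])
    return B, c

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 12))
b = rng.standard_normal(5)
mu = 0.2
B, c = split_qp_data(A, b, mu)

# For x = u - v with u = max(x,0), v = max(-x,0), the QP objective of (6)
# equals the objective of (3) up to the constant 0.5*||b||^2.
x = rng.standard_normal(12)
u, v = np.maximum(x, 0), np.maximum(-x, 0)
z = np.concatenate([u, v])
qp_val = 0.5 * z @ B @ z + c @ z
f3_val = 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.sum(np.abs(x))
print(np.isclose(qp_val + 0.5 * b @ b, f3_val))   # expected: True
```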

Now, we give some results about the absolute value equations and the linear complementarity problems; one can see sources such as [16, 29, 30]. The absolute value equations have the form $Cy - |y| = d$, where $C \in \mathbb{R}^{n \times n}$ and $d \in \mathbb{R}^n$. The linear complementarity problems have the form $z \ge 0$, $Mz + q \ge 0$, $z^T (Mz + q) = 0$, where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$.

Proposition 1. (i) If 1 is not an eigenvalue of $M$, then the linear complementarity problem can be reduced to the following absolute value equation:
$$(M + I)(M - I)^{-1} y - |y| = \big( (M + I)(M - I)^{-1} - I \big) q, \tag{10}$$
whose solution yields a solution of the linear complementarity problem via $z = (M - I)^{-1}(y - q)$. (ii) The absolute value equation $Cy - |y| = d$ is equivalent to the bilinear program:
$$0 = \min_{y^+, y^-} \Big\{ (y^+)^T y^- : (C - I) y^+ - (C + I) y^- = d, \ y^+ \ge 0, \ y^- \ge 0 \Big\}. \tag{11}$$
(iii) And the absolute value equation $Cy - |y| = d$ is equivalent to the generalized linear complementarity problem:
$$0 \le Cy - d + y \ \perp \ Cy - d - y \ge 0. \tag{12}$$
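The reduction in part (i), written here in the standard form found in the literature on absolute value equations, can be checked numerically. The following Python sketch constructs a linear complementarity problem with a known solution and verifies that the corresponding vector $y = (M - I)z + q$ satisfies (10); the matrix $M$ and the solution are randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n)) + n * np.eye(n)   # shift keeps 1 away from the spectrum
# Build an LCP with a known solution: pick complementary z*, w* >= 0, set q = w* - M z*.
z_star = np.abs(rng.standard_normal(n)); z_star[:3] = 0.0
w_star = np.abs(rng.standard_normal(n)); w_star[3:] = 0.0
q = w_star - M @ z_star

assert not np.any(np.isclose(np.linalg.eigvals(M), 1.0))  # 1 is not an eigenvalue of M

# Reduction of Proposition 1(i), as written in (10): y = (M - I) z + q solves
# (M+I)(M-I)^{-1} y - |y| = ((M+I)(M-I)^{-1} - I) q.
I = np.eye(n)
H = (M + I) @ np.linalg.inv(M - I)
y = (M - I) @ z_star + q
print(np.allclose(H @ y - np.abs(y), (H - I) @ q))   # expected: True
```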

Proposition 2. Suppose that 1 is not an eigenvalue of $B$. Then the linear complementarity problem (9) can be transformed into the following absolute value equation problem:
$$(B + I)(B - I)^{-1} y - |y| = \big( (B + I)(B - I)^{-1} - I \big) c, \tag{13}$$
where $y = (B - I)z + c$.

Proof. Based on Proposition 1 and (9), we know that
$$\min\{ z, \ Bz + c \} = 0. \tag{14}$$
Then by (14) and the identity $\min\{a, b\} = \frac{1}{2}(a + b - |a - b|)$, we have
$$(B + I)z + c = |(B - I)z + c|. \tag{15}$$
To handle the absolute value on the right-hand side of (15) for all $z$, denote
$$y = (B - I)z + c. \tag{16}$$
Substituting (16) in (15) and using the assumption that 1 is not an eigenvalue of $B$, we get
$$z = (B - I)^{-1}(y - c). \tag{17}$$
Due to the above (i) in Proposition 1, (9) can be reduced to the absolute value equation
$$(B + I)z + c = |y|, \tag{18}$$
and substituting (17) in (18), we get the following absolute value equation problem, which has the form
$$(B + I)(B - I)^{-1}(y - c) + c = |y|. \tag{19}$$
Thus, we get (13). Then, problem (3) can be transformed into the following unconstrained optimization problem:
$$\min_{y \in \mathbb{R}^{2n}} \ f(y) := \frac{1}{2} \big\| (B + I)(B - I)^{-1}(y - c) + c - |y| \big\|_2^2. \tag{20}$$
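As a small computational companion to (13) and (20), the following Python sketch assembles the matrix $(B + I)(B - I)^{-1}$ and the right-hand side of the absolute value equation and evaluates the residual-type objective of (20); the names ave_data and f20 and the random test data are ours and purely illustrative.

```python
import numpy as np

def ave_data(B, c):
    """Matrices of the absolute value equation (13): H y - |y| = (H - I) c,
    with H = (B + I)(B - I)^{-1}; requires that 1 is not an eigenvalue of B."""
    I = np.eye(B.shape[0])
    H = (B + I) @ np.linalg.inv(B - I)
    return H, (H - I) @ c

def f20(y, H, g):
    """Unconstrained objective (20): half the squared 2-norm residual of the AVE."""
    r = H @ y - np.abs(y) - g
    return 0.5 * r @ r

rng = np.random.default_rng(3)
Q = rng.standard_normal((8, 8)); B = Q @ Q.T      # positive semidefinite, like (7)
c = rng.standard_normal(8)
H, g = ave_data(B, c)
print(f20(rng.standard_normal(8), H, g))
```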

3. The Smoothing FR Conjugate Gradient Method

In this section, we give the smoothing FR conjugate gradient method for solving (20). Firstly, we give the definition of a smoothing function and the smoothing approximation of the absolute value function; one can see [15, 26, 27].

Definition 3. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. We call $\tilde f : \mathbb{R}^n \times \mathbb{R}_{+} \to \mathbb{R}$ a smoothing function of $f$ if $\tilde f(\cdot, \mu)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\mu > 0$, and
$$\lim_{z \to x, \ \mu \downarrow 0} \tilde f(z, \mu) = f(x), \quad \forall x \in \mathbb{R}^n. \tag{21}$$
There are many smoothing functions; for example, Chen and Mangasarian introduced a class of smooth approximations of the plus function $(t)_+ = \max\{t, 0\}$. Let $\rho : \mathbb{R} \to \mathbb{R}_{+}$ be a piecewise continuous density function satisfying
$$\rho(s) = \rho(-s), \qquad \int_{-\infty}^{+\infty} |s| \rho(s) \, ds < \infty. \tag{22}$$
Then
$$\varphi(\mu, t) = \int_{-\infty}^{+\infty} (t - \mu s)_+ \, \rho(s) \, ds \tag{23}$$
is a smoothing function of $(t)_+$.
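To illustrate the construction (22)-(23), the following Python sketch evaluates the Chen–Mangasarian smoothing of the plus function by numerical integration for the uniform density on $[-\frac{1}{2}, \frac{1}{2}]$ (our illustrative choice of density) and shows that the smoothed value approaches $(t)_+$ as $\mu \downarrow 0$.

```python
import numpy as np
from scipy.integrate import quad

def plus(t):
    return max(t, 0.0)

def cm_smoothing(t, mu, density=lambda s: 1.0 * (abs(s) <= 0.5)):
    """Chen-Mangasarian smoothing (23) of the plus function, here with the
    uniform density on [-1/2, 1/2] (an illustrative choice)."""
    val, _ = quad(lambda s: plus(t - mu * s) * density(s), -0.5, 0.5)
    return val

# The smoothed value approaches (t)_+ as mu tends to zero.
for mu in [1.0, 0.1, 0.01]:
    print(mu, cm_smoothing(0.2, mu), plus(0.2))
```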

In this paper, we use the following smoothing approximation of the absolute value function $|t|$:
$$\phi(\mu, t) = \begin{cases} |t|, & |t| > \dfrac{\mu}{2}, \\[2mm] \dfrac{t^2}{\mu} + \dfrac{\mu}{4}, & |t| \le \dfrac{\mu}{2}, \end{cases} \tag{24}$$
which is the function obtained from the class (23) with the uniform density on $[-\frac{1}{2}, \frac{1}{2}]$ applied to $|t| = (t)_+ + (-t)_+$. Formula (24) also satisfies
$$0 \le \phi(\mu, t) - |t| \le \frac{\mu}{4}, \quad \forall t \in \mathbb{R}, \ \mu > 0. \tag{25}$$
Based on (24), we can get the following smoothing function of (20):
$$\tilde f(y, \mu) = \frac{1}{2} \big\| (B + I)(B - I)^{-1}(y - c) + c - \Phi(\mu, y) \big\|_2^2, \tag{26}$$
where $\Phi(\mu, y) = \big( \phi(\mu, y_1), \ldots, \phi(\mu, y_{2n}) \big)^T$ and $y \in \mathbb{R}^{2n}$. Then, problem (20) is solved by applying the smoothing FR conjugate gradient method to (26).
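Assuming the uniform-density smoothing written in (24), the following Python sketch implements $\phi$, its derivative, and the smoothed objective (26) together with its gradient, and verifies the gradient by finite differences on random data; all sizes and values here are illustrative only.

```python
import numpy as np

def phi(t, mu):
    """Smoothing (24) of |t| (the uniform-density choice assumed here)."""
    return np.where(np.abs(t) > mu / 2, np.abs(t), t**2 / mu + mu / 4)

def dphi(t, mu):
    """Derivative of phi with respect to t (phi is C^1, so the pieces match)."""
    return np.where(np.abs(t) > mu / 2, np.sign(t), 2 * t / mu)

def f_tilde_and_grad(y, mu, H, c):
    """Smoothed objective (26) and its gradient, with H = (B+I)(B-I)^{-1}."""
    r = H @ (y - c) + c - phi(y, mu)
    grad = (H - np.diag(dphi(y, mu))).T @ r
    return 0.5 * r @ r, grad

# Finite-difference check of the gradient on random data (illustrative sizes).
rng = np.random.default_rng(4)
m2 = 6
H = rng.standard_normal((m2, m2)); c = rng.standard_normal(m2)
y = rng.standard_normal(m2); mu = 0.3
f0, g = f_tilde_and_grad(y, mu, H, c)
eps = 1e-6
fd = np.array([(f_tilde_and_grad(y + eps * e, mu, H, c)[0] - f0) / eps
               for e in np.eye(m2)])
print(np.allclose(g, fd, atol=1e-4))   # expected: True
```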

Now, we give the smoothing FR conjugate gradient method for solving (20).

Algorithm 4 (the smoothing FR conjugate gradient method).
Step 1. Choose constants $\delta \in (0, 1)$, $r \in (0, 1)$, $\gamma > 0$, $\sigma \in (0, 1)$, an initial smoothing parameter $\mu_0 > 0$, and an initial point $y_0$ in $\mathbb{R}^{2n}$. Let $k := 0$; compute $g_0 = \nabla_y \tilde f(y_0, \mu_0)$. Let
$$d_0 = -g_0. \tag{27}$$
Step 2. If $\|g_k\| = 0$, then terminate the method; otherwise, for $k \ge 1$ let $d_k = -g_k + \beta_k d_{k-1}$, where
$$\beta_k = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}. \tag{28}$$
Step 3. Compute the step size $\alpha_k$ by the Armijo line search, where $\alpha_k = \max\{ r^j : j = 0, 1, 2, \ldots \}$ and $\alpha_k$ satisfies
$$\tilde f(y_k + \alpha_k d_k, \mu_k) \le \tilde f(y_k, \mu_k) + \delta \alpha_k g_k^T d_k. \tag{29}$$
Set $y_{k+1} = y_k + \alpha_k d_k$.
Step 4. If $\|\nabla_y \tilde f(y_{k+1}, \mu_k)\| \ge \gamma \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, let $\mu_{k+1} = \sigma \mu_k$.
Step 5. Set $g_{k+1} = \nabla_y \tilde f(y_{k+1}, \mu_{k+1})$ and $k := k + 1$; go to Step 2.
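A minimal Python sketch of Algorithm 4, as reconstructed above, is given below. The routine takes the smoothed objective and its gradient as callables (for instance, the ones sketched after (26)); the default parameter values, the stopping rule, and the safeguard restart of the search direction are our own illustrative choices, not prescriptions of the paper.

```python
import numpy as np

def smoothing_fr_cg(f_tilde, grad, y0, mu0, delta=1e-4, r=0.5, gamma=1.0,
                    sigma=0.5, tol=1e-8, max_iter=500):
    """Sketch of Algorithm 4: FR conjugate gradient steps on the smoothed
    objective f_tilde(y, mu), reducing mu in Step 4 when the gradient is
    small relative to mu.  Parameter defaults are illustrative only."""
    y, mu = np.asarray(y0, dtype=float), float(mu0)
    g = grad(y, mu)
    d = -g                                        # Step 1: d_0 = -g_0, see (27)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol and mu <= tol:
            break                                 # stopping rule (our choice)
        # Step 3: Armijo backtracking line search along d, see (29).
        alpha, fy = 1.0, f_tilde(y, mu)
        while f_tilde(y + alpha * d, mu) > fy + delta * alpha * (g @ d):
            alpha *= r
            if alpha < 1e-16:
                break
        y_new = y + alpha * d
        # Step 4: update the smoothing parameter.
        if np.linalg.norm(grad(y_new, mu)) < gamma * mu:
            mu = sigma * mu
        # Step 2 of the next iteration: FR direction with beta_k of (28).
        g_new = grad(y_new, mu)
        beta = (g_new @ g_new) / max(g @ g, 1e-300)
        d = -g_new + beta * d
        if g_new @ d >= 0:                        # safeguard restart (ours)
            d = -g_new
        y, g = y_new, g_new
    return y, mu
```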

Now, we give the convergence analysis of Algorithm 4.

Theorem 5. Suppose that $\tilde f$ is defined by (26). Then $\tilde f$ is a smoothing function of $f$ defined in (20); that is,
$$\lim_{z \to y, \ \mu \downarrow 0} \tilde f(z, \mu) = f(y), \quad \forall y \in \mathbb{R}^{2n}. \tag{30}$$

Proof. According to the definition of $\tilde f$ in (26) and property (25), for any $z \in \mathbb{R}^{2n}$ and any $\mu > 0$ we can obtain
$$\big\| \Phi(\mu, z) - |z| \big\|_2 \le \frac{\sqrt{2n}}{4}\,\mu,$$
so that $\Phi(\mu, z) \to |y|$ whenever $z \to y$ and $\mu \downarrow 0$. Since the mapping $(z, w) \mapsto \frac{1}{2}\| (B + I)(B - I)^{-1}(z - c) + c - w \|_2^2$ is continuous, it follows that $\lim_{z \to y, \ \mu \downarrow 0} \tilde f(z, \mu) = f(y)$. Moreover, $\phi(\mu, \cdot)$ is continuously differentiable for any fixed $\mu > 0$, and hence so is $\tilde f(\cdot, \mu)$. It is obvious that $\tilde f$ is a smoothing function of $f$.

Theorem 6. Suppose that $\tilde f$ is a smoothing function of $f$. And if, for any constant $\bar\mu > 0$, $\nabla_y \tilde f(\cdot, \bar\mu)$ is bounded on the level set $\{ y \in \mathbb{R}^{2n} : \tilde f(y, \bar\mu) \le \tilde f(y_0, \bar\mu) \}$, then the sequence $\{y_k\}$ generated by Algorithm 4 satisfies $\lim_{k \to \infty} \mu_k = 0$ and $\liminf_{k \to \infty} \| \nabla_y \tilde f(y_{k+1}, \mu_k) \| = 0$.

Proof. Define $K = \{ k : \mu_{k+1} = \sigma \mu_k \}$. If $K$ is a finite set, then there exists an integer $\bar k$ such that, for all $k > \bar k$,
$$\| \nabla_y \tilde f(y_{k+1}, \mu_k) \| \ge \gamma \mu_k \tag{31}$$
and $\mu_{k+1} = \mu_k =: \bar\mu$ in Step 4 of Algorithm 4. Since $\tilde f(\cdot, \bar\mu)$ is a smooth function, by the corresponding convergence theorems in [26], the conjugate gradient method for solving $\min_y \tilde f(y, \bar\mu)$ satisfies
$$\liminf_{k \to \infty} \| \nabla_y \tilde f(y_k, \bar\mu) \| = 0,$$
which contradicts (31). This shows that $K$ must be infinite and
$$\lim_{k \to \infty} \mu_k = 0.$$
Because $K$ is infinite, we can suppose $K = \{ k_0, k_1, k_2, \ldots \}$, where $k_0 < k_1 < k_2 < \cdots$, and then, by Step 4, we can get
$$\lim_{i \to \infty} \| \nabla_y \tilde f(y_{k_i + 1}, \mu_{k_i}) \| \le \lim_{i \to \infty} \gamma \mu_{k_i} = 0.$$

4. Numerical Tests

In this section, we give numerical experiment results for Algorithm 4. Similar numerical experiments are also considered in [9, 14, 15]. In computing Examples 1, 2, 3, and 4, we compare Algorithm 4 with the smoothing gradient method in [15]. In computing Example 5, we compare Algorithm 4 with the GPSR, debiased, and minimum-norm methods proposed in [4, 9, 13]. The numerical results of all the examples illustrate that Algorithm 4 is effective. All codes for the test problems are written in MATLAB 8.0. For Examples 1–4, the same parameter settings are used in Algorithm 4.

Example 1. Consider a test problem of the form (3) that is also considered in [14, 15].

The problem has a known optimal solution given in [14, 15]; we report the optimal solutions computed by our method and by the smoothing gradient method. In Figures 1 and 2, we plot, respectively, the evolution of the objective function versus the iteration number when solving Example 1 with Algorithm 4 and with the smoothing gradient method in [15].

Example 2. Consider a test problem of the form (3) with two different settings of the problem data.

Figures 3 and 4 plot the evolution of the objective function versus the iteration number for the first setting of the problem data when solving Example 2 with Algorithm 4 and with the smoothing gradient method, and Figures 5 and 6 plot the corresponding results for the second setting. By comparison, we see that the number of iterations of Algorithm 4 is smaller than that of the smoothing gradient method in [15].

Example 3. Consider a test problem of the form (3) with two different settings of the problem data.

Figures 7 and 8, respectively, show the evolution of the objective function versus the iteration number for the first setting when solving Example 3 with Algorithm 4 and with the smoothing gradient method, and Figures 9 and 10, respectively, show the corresponding results for the second setting. By comparison, the objective function decreases faster with Algorithm 4 than with the smoothing gradient method in [15].

Example 4. Consider a test problem of the form (3) with two different settings of the problem data.

Figures 11 and 12, respectively, show the objective function plotted against the iteration number for the first setting when solving Example 4 with Algorithm 4 and with the smoothing gradient method, and Figures 13 and 14, respectively, show the corresponding results for the second setting when solving Example 4 with Algorithm 4 and with the smoothing gradient method in [15]. By comparison, Algorithm 4 is more effective than the smoothing gradient method.

Example 5. We consider a typical compressed sensing (CS) scenario, which is also considered in [1–4, 13–15]. The goal is to reconstruct a length-$n$ sparse signal from $m$ observations, where $m < n$. In this example, the original signal contains 15 randomly generated spikes. The observation is generated according to $b = A\bar x + e$, where $\bar x$ is the original signal and the observation is contaminated by the noise $e$. Furthermore, the matrix $A$ is obtained by first filling it with independent samples of a standard Gaussian distribution and then orthonormalizing its rows. The remaining parameters of this example are chosen as suggested in [14]. Figure 15 shows the results of the signal reconstruction.
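The following Python sketch generates data for a compressed sensing experiment of this type: a Gaussian matrix with orthonormalized rows, a spike signal, and a noisy observation. The dimensions, the spike amplitudes, the noise level, and the heuristic choice of $\mu$ are all illustrative assumptions and not the values used in this paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, n_spikes = 1024, 256, 15          # illustrative sizes, not the paper's values
G = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(G.T)                # orthonormalize the rows of the Gaussian matrix
A = Q.T                                 # A is m x n with orthonormal rows

x_true = np.zeros(n)                    # sparse original signal with 15 spikes
idx = rng.choice(n, n_spikes, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], n_spikes)

b = A @ x_true + 1e-3 * rng.standard_normal(m)   # noisy observation (assumed noise level)
mu = 0.1 * np.max(np.abs(A.T @ b))               # a common heuristic for mu (assumption)
```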

5. Conclusion

Compared with the GPSR method and the other methods in [4, 9, 13–15], the smoothing FR conjugate gradient method is simple and needs little storage. The establishment and continuous improvement of smoothing methods for (3) provide a very useful tool to meet the challenges of many practical problems. For example, Figure 15 shows that the smoothing FR conjugate gradient method works well and provides an efficient approach to denoising sparse signals. Compared with the smoothing gradient method in [15], the smoothing FR conjugate gradient method is significantly faster, especially when the number of iterations is large. We have also shown that, under weak conditions, the smoothing FR conjugate gradient method converges globally.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National Natural Science Foundation of China (no. 11671220) and Natural Science Foundation of Shandong Province (no. ZR2016AM29).