Mathematical Problems in Engineering

Volume 2018, Article ID 5817931, 9 pages

https://doi.org/10.1155/2018/5817931

## The Smoothing FR Conjugate Gradient Method for Solving a Kind of Nonsmooth Optimization Problem with $\ell_1$-Norm

School of Mathematics and Statistics, Qingdao University, Qingdao 266071, China

Correspondence should be addressed to Shou-qiang Du; sqdu@qdu.edu.cn

Received 9 October 2017; Accepted 27 December 2017; Published 23 January 2018

Academic Editor: Elisa Francomano

Copyright © 2018 Miao Chen and Shou-qiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We study a method for solving a kind of nonsmooth optimization problem with $\ell_1$-norm, which is widely used in compressed sensing, image processing, and related optimization problems with wide application background in engineering technology. By a transformation based on the absolute value equations, this kind of nonsmooth optimization problem is rewritten as a general unconstrained optimization problem, and the transformed problem is solved by a smoothing FR conjugate gradient method. Finally, numerical experiments show the effectiveness of the given smoothing FR conjugate gradient method.

#### 1. Introduction

In the last few years, the problem of finding sparsest solutions to underdetermined systems of equations has been studied extensively. Finding the sparsest solution of an underdetermined system of equations is equivalent to solving the following $\ell_0$-norm minimization problem:
$$\min_{x} \ \|x\|_0 \quad \text{s.t.} \ Ax = b, \tag{1}$$
where $A \in \mathbb{R}^{m \times n}$ ($m < n$), $b \in \mathbb{R}^{m}$, and $\|x\|_0$ denotes the $\ell_0$-norm of $x$, that is, the number of nonzero components of $x$. From [1–4], we know that the above problem is difficult to solve directly. In order to solve the $\ell_0$-norm problem effectively, an approximation model replaces the $\ell_0$-norm by the $\ell_1$-norm, which leads to the Basis Pursuit problem, such as in [5, 6]:
$$\min_{x} \ \|x\|_1 \quad \text{s.t.} \ Ax = b. \tag{2}$$
This relaxation is natural because the convex envelope of $\|x\|_0$ is $\|x\|_1$, where $\|x\|_1$ is the $\ell_1$-norm of $x$; the $\ell_0$-norm problem (1) itself is an NP-hard problem. When $b$ contains some noise in practical applications, the above problem is rewritten as the following nonsmooth optimization problem with $\ell_1$-norm:
$$\min_{x} \ \frac{1}{2}\|Ax - b\|_2^2 + \mu\|x\|_1, \tag{3}$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $\mu > 0$, $\|\cdot\|_2$ denotes the 2-norm, and $\|\cdot\|_1$ denotes the 1-norm. Since minimization of the $\ell_1$-norm gives good recovery, (3) is widely used in compressed sensing, image processing, and other related fields in engineering technology; one can see [7, 8] and the references therein. For any $\mu > 0$, the objective of (3) is a convex function but not a differentiable one. Recently, many scholars have studied methods for solving (3). For instance, gradient projection for sparse reconstruction was proposed by Figueiredo et al. in [9]; a two-step iterative shrinkage/thresholding (IST) method was proposed by Bioucas-Dias and Figueiredo in [10]; a fast IST algorithm was presented by Beck and Teboulle in [11]; SPGL1, a solver for large-scale sparse reconstruction, was proposed by van den Berg and Friedlander in [12], who consider a least-squares problem with an $\ell_1$-norm constraint and use a spectral gradient projection method; and an alternating direction method (ADM) was proposed by Yang and Zhang in [13]. Problem (3) was reformulated as a convex quadratic program in [14]. However, none of the references mentioned above exploits the relationship between linear complementarity problems and absolute value equations to solve (3): they neither use the structure of the absolute value equation to design a new method nor translate the original problem into an absolute value equation problem so that the effective methods for absolute value equations can be applied. Just recently, in [15], a smoothing gradient method was given for solving (3) based on the absolute value equations. Therefore, in this paper, we study how to use this transformation to solve (3) by the FR conjugate gradient method.
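
As a concrete illustration of problem (3), the following minimal Python sketch evaluates the objective on hypothetical data; the sizes, `A`, `b`, and `mu` are our own illustrative placeholders, not test data from the cited references.

```python
import numpy as np

# Minimal sketch: evaluate the objective of (3) on hypothetical data.
rng = np.random.default_rng(1)
m, n = 8, 20                                     # m < n: underdetermined system
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.7]            # a sparse signal
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements
mu = 0.1

def f3(x):
    """Objective of (3): least-squares data fit plus l1 regularization."""
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.linalg.norm(x, 1)

print(f3(x_true), f3(np.zeros(n)))
```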

As we all know, the linear complementarity problem can be rewritten as an absolute value equation problem, mainly based on the equivalence between the linear complementarity problem and the absolute value equation problem; see, for example, [16–18]. The absolute value equation can be solved efficiently now. On the other hand, the conjugate gradient method is suitable for solving large-scale optimization problems, as it has a simple structure and global convergence [19–25]. In addition, smoothing methods are used to solve related nonsmooth optimization problems; see [26–28] and the references therein. Therefore, based on the above analysis, we present a new smoothing FR conjugate gradient method to solve (3); this is also our motivation for writing this paper. The global convergence analysis of the given method is also presented. Finally, some computational results show that the smoothing FR conjugate gradient method is efficient in practice.

The remainder of this paper is organized as follows. In Section 2, we give the preliminaries, including a description of how the linear complementarity problem is transformed into the absolute value equation problem. In Section 3, we present the smoothing FR conjugate gradient method and give its convergence analysis. Finally, in Section 4, we give some numerical results, which show the effectiveness of the given method.

#### 2. Preliminaries

Firstly, we give the transformation form of (3). Any vector $x \in \mathbb{R}^{n}$ can be formulated as
$$x = u - v, \quad u \ge 0, \ v \ge 0, \tag{4}$$
where $u_i = \max\{x_i, 0\}$ and $v_i = \max\{-x_i, 0\}$ for all $i = 1, \dots, n$. By the definition of $u$ and $v$, we get $|x| = u + v$, where $|x| = (|x_1|, \dots, |x_n|)^T$. Therefore, as in [13–15], problem (3) can be rewritten as
$$\min_{u, v} \ \frac{1}{2}\|A(u - v) - b\|_2^2 + \mu \mathbf{1}^T(u + v) \quad \text{s.t.} \ u \ge 0, \ v \ge 0, \tag{5}$$
where $\mathbf{1} = (1, \dots, 1)^T$. The above problem can be transformed to
$$\min_{z \ge 0} \ \frac{1}{2}z^T H z + c^T z, \tag{6}$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \mu \mathbf{1}_{2n} + \begin{pmatrix} -A^T b \\ A^T b \end{pmatrix}, \quad H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}. \tag{7}$$
Since $H$ is a positive semidefinite matrix ($z^T H z = \|A(u - v)\|_2^2 \ge 0$), problem (3) is transformed into a convex optimization problem. Then problem (6) can be transformed into a linear variational inequality problem, which is to find $z^* \ge 0$ such that
$$(z - z^*)^T (H z^* + c) \ge 0 \quad \text{for all } z \ge 0. \tag{8}$$
Given that the feasible region of (6) has a special structure (the nonnegative orthant), (8) can be rewritten as the linear complementarity problem, which is to find $z \in \mathbb{R}^{2n}$ such that
$$z \ge 0, \quad Hz + c \ge 0, \quad z^T(Hz + c) = 0. \tag{9}$$
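
The assembly of (6)-(7) is mechanical. The sketch below (the helper name `qp_data` is ours) builds $H$ and $c$ from the data of (3):

```python
import numpy as np

def qp_data(A, b, mu):
    """Assemble H and c of (6)-(7) from the data of problem (3).

    With the splitting x = u - v (u, v >= 0) and z = (u; v), problem (3)
    becomes min_{z >= 0} (1/2) z^T H z + c^T z.
    """
    AtA, Atb = A.T @ A, A.T @ b
    H = np.block([[AtA, -AtA], [-AtA, AtA]])
    c = mu * np.ones(2 * A.shape[1]) + np.concatenate([-Atb, Atb])
    return H, c

# H is positive semidefinite: z^T H z = ||A(u - v)||_2^2 >= 0, so (6) is convex.
```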

Now, we give some results about the absolute value equations and the linear complementarity problems as follows; one can see sources such as [16, 29, 30]. The absolute value equations have the form
$$Ax - |x| = b,$$
where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. The linear complementarity problems have the form: given $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^{n}$, find $z \in \mathbb{R}^{n}$ such that
$$z \ge 0, \quad Mz + q \ge 0, \quad z^T(Mz + q) = 0.$$

Proposition 1. *(i) If $1$ is not an eigenvalue of $M$, then the linear complementarity problem can be reduced to the following absolute value equation:*
$$(M - I)^{-1}(M + I)x - |x| = (M - I)^{-1}q, \quad \text{where } x = \tfrac{1}{2}\big((M - I)z + q\big). \tag{10}$$
*(ii) The absolute value equation $Ax - |x| = b$ is equivalent to the bilinear program:*
$$0 = \min_{x} \ \big\{\big((A + I)x - b\big)^T \big((A - I)x - b\big) : (A + I)x - b \ge 0, \ (A - I)x - b \ge 0\big\}. \tag{11}$$
*(iii) The absolute value equation $Ax - |x| = b$ is equivalent to the generalized linear complementarity problem:*
$$0 \le (A + I)x - b \perp (A - I)x - b \ge 0. \tag{12}$$
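
Part (i) is easy to check numerically. The sketch below builds a small random LCP with a known solution $z$ and verifies that $x = \frac{1}{2}((M - I)z + q)$ solves the reduced absolute value equation (10); all data are our own illustration.

```python
import numpy as np

# Numerical check of Proposition 1(i): build an LCP with a known solution z,
# then verify that x = (1/2)((M - I)z + q) solves the reduced AVE.
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))            # generically, 1 is not an eigenvalue
mask = rng.random(n) < 0.5
z = np.where(mask, rng.random(n), 0.0)     # z >= 0
w = np.where(mask, 0.0, rng.random(n))     # w >= 0 with z_i * w_i = 0
q = w - M @ z                              # then z solves: z>=0, Mz+q>=0, z^T(Mz+q)=0

I = np.eye(n)
x = 0.5 * ((M - I) @ z + q)
A_bar = np.linalg.solve(M - I, M + I)      # (M - I)^{-1} (M + I)
b_bar = np.linalg.solve(M - I, q)          # (M - I)^{-1} q
print(np.allclose(A_bar @ x - np.abs(x), b_bar))   # True
```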

Proposition 2. *Suppose $1$ is not an eigenvalue of $H$. Then (9) can be transformed into the following absolute value equation problem:*
$$\bar{A}x - |x| = \bar{b}, \tag{13}$$
*where $\bar{A} = (H - I)^{-1}(H + I)$ and $\bar{b} = (H - I)^{-1}c$.*

*Proof.* Based on Proposition 1 and (9), set
$$x = \tfrac{1}{2}\big((H - I)z + c\big). \tag{14}$$
Then by (14) and the complementarity in (9), we have
$$z = |x| - x, \qquad Hz + c = |x| + x. \tag{15}$$
Substituting the first equation of (15) into the second, we get
$$(H - I)|x| = (H + I)x - c. \tag{16}$$
Due to the above (i) in Proposition 1, $1$ is not an eigenvalue of $H$, so $H - I$ is nonsingular and (16) can be reduced to the absolute value equation
$$|x| = (H - I)^{-1}(H + I)x - (H - I)^{-1}c. \tag{17}$$
Denoting
$$\bar{A} = (H - I)^{-1}(H + I), \qquad \bar{b} = (H - I)^{-1}c \tag{18}$$
and substituting (18) in (17), we get the following absolute value equation problem, which has the form
$$\bar{A}x - |x| = \bar{b}. \tag{19}$$
Thus, we get (13). Then, problem (3) can be transformed into the following problem:
$$\min_{x \in \mathbb{R}^{2n}} \ f(x) = \frac{1}{2}\big\|\bar{A}x - |x| - \bar{b}\big\|_2^2. \tag{20}$$
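
In code, the reduction of Proposition 2 and the residual objective (20) take the following form; this is a sketch under the assumption that $H - I$ is nonsingular, and the helper names `ave_data` and `f20` are ours.

```python
import numpy as np

def ave_data(H, c):
    """Data of the AVE (13) from the LCP (9), assuming 1 is not an eigenvalue of H."""
    I = np.eye(H.shape[0])
    A_bar = np.linalg.solve(H - I, H + I)    # (H - I)^{-1} (H + I)
    b_bar = np.linalg.solve(H - I, c)        # (H - I)^{-1} c
    return A_bar, b_bar

def f20(x, A_bar, b_bar):
    """Nonsmooth residual objective of the reformulation (20)."""
    r = A_bar @ x - np.abs(x) - b_bar
    return 0.5 * r @ r
```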

#### 3. The Smoothing FR Conjugate Gradient Method

In this section, we give the smoothing FR conjugate gradient method for solving (20). Firstly, we give the definition of a smoothing function and the smoothing approximation function of the absolute value function; one can see [15, 26, 27].

*Definition 3.* Let $f : \mathbb{R}^{n} \to \mathbb{R}$ be a locally Lipschitz continuous function. We call $\tilde{f} : \mathbb{R}^{n} \times \mathbb{R}_{+} \to \mathbb{R}$ a smoothing function of $f$ if $\tilde{f}(\cdot, \mu)$ is continuously differentiable in $\mathbb{R}^{n}$ for any fixed $\mu > 0$ and
$$\lim_{\mu \downarrow 0} \tilde{f}(x, \mu) = f(x) \quad \text{for any } x \in \mathbb{R}^{n}. \tag{21}$$
There are many smoothing functions; for example, Chen and Mangasarian introduced a class of smooth approximations of the plus function $(t)_{+} = \max\{t, 0\}$. Let $\rho : \mathbb{R} \to \mathbb{R}_{+}$ be a density function satisfying
$$\int_{-\infty}^{+\infty} \rho(s)\,ds = 1, \qquad \int_{-\infty}^{+\infty} |s|\,\rho(s)\,ds < +\infty. \tag{22}$$
Then
$$p(t, \mu) = \int_{-\infty}^{+\infty} (t - \mu s)_{+}\,\rho(s)\,ds \tag{23}$$
is a smoothing function of $(t)_{+}$.
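
For instance, taking the uniform density $\rho(s) = 1$ on $[-\frac{1}{2}, \frac{1}{2}]$ in (23) and writing $|t| = (t)_{+} + (-t)_{+}$ gives the closed-form smoothing (24) below; the following sanity check of this computation is our own illustration.

```python
import numpy as np

# Our own sanity check: with the uniform density rho(s) = 1 on [-1/2, 1/2],
# the Chen-Mangasarian construction applied to |t| = (t)_+ + (-t)_+ yields
# t^2/mu + mu/4 for |t| <= mu/2 and |t| otherwise.
mu = 0.4
N = 1_000_000
s = (np.arange(N) + 0.5) / N - 0.5           # midpoint grid on [-1/2, 1/2]
for t in [-1.0, -0.1, 0.0, 0.05, 0.3, 2.0]:
    integrand = np.maximum(t - mu * s, 0.0) + np.maximum(-t - mu * s, 0.0)
    smoothed = integrand.mean()              # midpoint rule, interval length 1
    closed = abs(t) if abs(t) > mu / 2 else t * t / mu + mu / 4
    assert abs(smoothed - closed) < 1e-8, (t, smoothed, closed)
print("uniform-density smoothing of |t| matches the closed form (24)")
```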

In this paper, we use the smoothing approximation function of the absolute value function given by
$$\phi(t, \mu) = \begin{cases} |t|, & |t| > \dfrac{\mu}{2}, \\[4pt] \dfrac{t^2}{\mu} + \dfrac{\mu}{4}, & |t| \le \dfrac{\mu}{2}, \end{cases} \tag{24}$$
which is obtained from (23) with the uniform density $\rho(s) = 1$ on $[-\frac{1}{2}, \frac{1}{2}]$ and the identity $|t| = (t)_{+} + (-t)_{+}$. Formula (24) also satisfies
$$0 \le \phi(t, \mu) - |t| \le \frac{\mu}{4} \quad \text{for all } t \in \mathbb{R}, \ \mu > 0. \tag{25}$$
Based on (24), we can get the following smoothing function of the objective of (20):
$$\tilde{f}(x, \mu) = \frac{1}{2}\big\|\bar{A}x - \Phi(x, \mu) - \bar{b}\big\|_2^2, \tag{26}$$
where $\Phi(x, \mu) = \big(\phi(x_1, \mu), \dots, \phi(x_{2n}, \mu)\big)^T$ and $\bar{A}$, $\bar{b}$ are given by (18).
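
A sketch of (24)-(26) in code (helper names are ours): the gradient of (26) uses the Jacobian $\bar{A} - \operatorname{diag}(\phi'(x_i, \mu))$ of the smoothed residual, where $\phi'$ denotes the derivative of (24) with respect to $t$.

```python
import numpy as np

def phi(t, mu):
    """Smoothing function (24) of |t| (uniform-density Chen-Mangasarian)."""
    return np.where(np.abs(t) > mu / 2, np.abs(t), t * t / mu + mu / 4)

def dphi(t, mu):
    """Derivative of (24) with respect to t (continuous for mu > 0)."""
    return np.where(np.abs(t) > mu / 2, np.sign(t), 2 * t / mu)

def f_tilde(x, mu, A_bar, b_bar):
    """Smoothed objective (26): 0.5 * ||A_bar x - Phi(x, mu) - b_bar||^2."""
    r = A_bar @ x - phi(x, mu) - b_bar
    return 0.5 * r @ r

def grad_f_tilde(x, mu, A_bar, b_bar):
    """Gradient of (26): (A_bar - diag(phi'(x_i, mu)))^T r."""
    r = A_bar @ x - phi(x, mu) - b_bar
    return (A_bar - np.diag(dphi(x, mu))).T @ r
```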

Now, we give the smoothing FR conjugate gradient method for solving (20).

*Algorithm 4 (the smoothing FR conjugate gradient method).*

*Step 1*. Choose $\delta, \sigma, \rho \in (0, 1)$, $\gamma > 0$, $\mu_0 > 0$, and an initial point $x_0 \in \mathbb{R}^{2n}$. Let $k = 0$; compute $g_0 = \nabla_x \tilde{f}(x_0, \mu_0)$. Let
$$d_0 = -g_0. \tag{27}$$

*Step 2*. Compute $g_k = \nabla_x \tilde{f}(x_k, \mu_k)$. If $\|g_k\| = 0$, then terminate the method; otherwise, for $k \ge 1$, let
$$d_k = -g_k + \beta_{k-1} d_{k-1}, \quad \text{where } \beta_{k-1} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}. \tag{28}$$

*Step 3*. Compute the step size $\alpha_k = \max\{\rho^j : j = 0, 1, 2, \dots\}$ by the Armijo line search, where $\alpha_k$ satisfies
$$\tilde{f}(x_k + \alpha_k d_k, \mu_k) \le \tilde{f}(x_k, \mu_k) + \sigma \alpha_k g_k^T d_k, \tag{29}$$
and set $x_{k+1} = x_k + \alpha_k d_k$.

*Step 4*. If $\|\nabla_x \tilde{f}(x_{k+1}, \mu_k)\| \ge \gamma \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, let $\mu_{k+1} = \delta \mu_k$.

*Step 5*. Set $k = k + 1$; go to Step 2.
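
The following is a compact sketch of Algorithm 4, reusing `phi`, `f_tilde`, and `grad_f_tilde` from the previous sketch. All parameter values are our own illustrative choices, and the steepest-descent restart is a practical safeguard we add for runnability, not part of the stated method.

```python
import numpy as np

def smoothing_fr_cg(x0, A_bar, b_bar, mu0=1.0, delta=0.5, sigma=1e-4,
                    rho=0.5, gamma=1.0, eps=1e-6, max_iter=5000):
    """Sketch of Algorithm 4; returns the final iterate and smoothing parameter."""
    x, mu = np.asarray(x0, dtype=float), mu0
    g = grad_f_tilde(x, mu, A_bar, b_bar)
    d = -g                                         # Step 1: initial direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:               # Step 2: stopping test
            break
        if g @ d >= 0:                             # safeguard: restart with
            d = -g                                 #   steepest descent
        # Step 3: Armijo line search along d
        fx = f_tilde(x, mu, A_bar, b_bar)
        alpha = 1.0
        while (f_tilde(x + alpha * d, mu, A_bar, b_bar)
               > fx + sigma * alpha * (g @ d) and alpha > 1e-16):
            alpha *= rho
        x = x + alpha * d
        # Step 4: shrink the smoothing parameter when the gradient is small
        g_new = grad_f_tilde(x, mu, A_bar, b_bar)
        if np.linalg.norm(g_new) < gamma * mu:
            mu *= delta
            g_new = grad_f_tilde(x, mu, A_bar, b_bar)
        # Step 2 (next pass): FR direction update
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return x, mu
```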

Now, we give the convergence analysis of Algorithm 4.

Theorem 5. *Suppose $\tilde{f}(x, \mu)$ is defined by (26); then*
$$\lim_{\mu \downarrow 0} \tilde{f}(x, \mu) = f(x) \quad \text{for any } x \in \mathbb{R}^{2n},$$
*that is, $\tilde{f}$ is a smoothing function of the objective $f$ of (20).*

*Proof.* According to the definitions of $f(x)$ and $\tilde{f}(x, \mu)$, we can obtain
$$\big|\tilde{f}(x, \mu) - f(x)\big| \le \frac{1}{2}\,\big\|\Phi(x, \mu) - |x|\big\|_2 \Big(\big\|\bar{A}x - \Phi(x, \mu) - \bar{b}\big\|_2 + \big\|\bar{A}x - |x| - \bar{b}\big\|_2\Big),$$
and by (25), $\big\|\Phi(x, \mu) - |x|\big\|_2 \le \frac{\sqrt{2n}}{4}\mu$. It is obvious that $\lim_{\mu \downarrow 0} \tilde{f}(x, \mu) = f(x)$.

Theorem 6. *Suppose $\tilde{f}(x, \mu)$ is a smoothing function of $f(x)$, and suppose that, for any constant $\bar{\mu} > 0$, $\nabla_x \tilde{f}(x, \bar{\mu})$ is bounded on the level set $\{x \in \mathbb{R}^{2n} : \tilde{f}(x, \bar{\mu}) \le \tilde{f}(x_0, \bar{\mu})\}$. Then the sequence $\{x_k\}$ generated by Algorithm 4 satisfies*
$$\lim_{k \to \infty} \mu_k = 0 \quad \text{and} \quad \liminf_{k \to \infty} \big\|\nabla_x \tilde{f}(x_k, \mu_k)\big\| = 0. \tag{30}$$

*Proof.* Define $K = \{k : \mu_{k+1} = \delta \mu_k\}$. If $K$ is a finite set, then there exists an integer $\bar{k}$ such that, for all $k > \bar{k}$, $\mu_k \equiv \bar{\mu} > 0$ and, by Step 4 of Algorithm 4,
$$\big\|\nabla_x \tilde{f}(x_{k+1}, \bar{\mu})\big\| \ge \gamma \bar{\mu}. \tag{31}$$
Since $\tilde{f}(\cdot, \bar{\mu})$ is a smooth function, by the convergence theorems in [26], the conjugate gradient method for solving $\min_x \tilde{f}(x, \bar{\mu})$ satisfies
$$\liminf_{k \to \infty} \big\|\nabla_x \tilde{f}(x_k, \bar{\mu})\big\| = 0, \tag{32}$$
which contradicts (31). This shows that $K$ must be infinite and
$$\lim_{k \to \infty} \mu_k = 0. \tag{33}$$
Because $K$ is infinite, we can suppose $K = \{k_0, k_1, k_2, \dots\}$ with $k_0 < k_1 < k_2 < \cdots$; then, by Step 4, we can get
$$\lim_{i \to \infty} \big\|\nabla_x \tilde{f}(x_{k_i + 1}, \mu_{k_i})\big\| \le \lim_{i \to \infty} \gamma \mu_{k_i} = 0.$$

#### 4. Numerical Tests

In this section, we give numerical experiment results for Algorithm 4. The numerical experiments are also considered in [9, 14, 15]. In computing Examples 1, 2, 3, and 4, we compare Algorithm 4 with the smoothing gradient method in [15]. In computing Example 5, we compare Algorithm 4 with the GPSR, debiased, and minimum-norm methods proposed in [4, 9, 13]. The numerical results of all the examples illustrate that Algorithm 4 is effective. All codes for the test problems are written in MATLAB 8.0. For Examples 1–4, the parameters used in Algorithm 4 are chosen as in Step 1.
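
For orientation, the following hypothetical end-to-end run chains the sketches above on a random sparse-recovery instance; the data are our own and are not one of the paper's test problems.

```python
import numpy as np

# Hypothetical end-to-end run: build (3) for random data, transform it to the
# AVE reformulation (20), and solve with the smoothing FR CG sketch above.
# Reuses qp_data, ave_data, and smoothing_fr_cg from the previous sketches.
rng = np.random.default_rng(7)
m, n = 30, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=4, replace=False)] = rng.standard_normal(4)
b = A @ x_true + 0.001 * rng.standard_normal(m)

H, c = qp_data(A, b, mu=0.05)            # QP data (6)-(7)
A_bar, b_bar = ave_data(H, c)            # AVE data (13), assuming H - I nonsingular
y, _ = smoothing_fr_cg(np.zeros(2 * n), A_bar, b_bar)
z = np.abs(y) - y                        # recover z = (u; v) from the AVE variable
x_rec = z[:n] - z[n:]                    # x = u - v
print("residual:", np.linalg.norm(A @ x_rec - b),
      "recovery error:", np.linalg.norm(x_rec - x_true))
```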

*Example 1*. Consider the following optimization problem of form (3), with the problem data $A$, $b$, and $\mu$ taken from [14, 15].

The problem has a known optimal solution given in [14, 15]. In Figures 1 and 2, we plot, respectively, the evolution of the objective function value versus the iteration number when solving Example 1 with Algorithm 4 and with the smoothing gradient method in [15].