Abstract

We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and are an important tool for solving optimization problems, especially large scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of selecting an element of the set-valued map in differential inclusions. In theory, the solution of the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.

1. Introduction

Sparse reconstruction is the term used to describe the process of extracting some underlying original source signals from a number of observed mixture signals, where the mixing model is either unknown or the knowledge about the mixing process is limited. The problem of recovering a sparse signal from noisy linear observations arises in many real world sensing applications [15]. Mathematically, a signal recovery problem can be formulated as estimating the original signal based on noisy linear observations, which can be expressed as
$b = As + n$, (1)
where $A$ is the mixing matrix, $s$ is the original signal, $b$ is the observed signal, and $n$ is the noise. In many cases, $A$ is a matrix of block Toeplitz with Toeplitz blocks (BTTB) type when zero boundary conditions are applied and of block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel blocks (BTHTHB) type when Neumann boundary conditions are used [6]. Then, this problem can be viewed as a linear inverse problem. A standard approach to solving linear inverse problems is to define a suitable objective function and to minimize it. Solving this problem is often divided into two steps: estimation of the mixing matrix $A$ and recovery of the original signal $s$. In this paper, we focus on the second step and assume that the mixing matrix $A$ is known.

Generally, finding a solution with few nonzero entries for an underdetermined linear system with noise is often modeled as the $\ell_0$ regularization problem
$\min_{s} \ \tfrac{1}{2}\|As - b\|_2^2 + \lambda \|s\|_0$, (2)
where $\lambda > 0$ and $\|s\|_0$ is defined as the number of nonzero entries in $s$. However, the regularized problem (2) is difficult to deal with because of the discrete structure of the $\ell_0$ norm, which drives researchers to pay attention to the continuous minimization problem
$\min_{s} \ \tfrac{1}{2}\|As - b\|_2^2 + \lambda \|s\|_1$. (3)
The first term in (3) is often called the data fitting term, which forces the solutions of (3) to stay close to the data, and the second term is often called the regularization or potential term, which pushes the solutions to exhibit some prior expected features. Under certain conditions, problems (2) and (3) have the same solution sets [7]. Problem (3) is a continuous convex optimization problem and can be efficiently solved; it is known as the Lasso [8].
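
As a toy illustration (with arbitrary data, not taken from the paper), the two objectives in (2) and (3) can be evaluated in MATLAB as follows, where nnz plays the role of the $\ell_0$ penalty:

A = randn(10, 30);                                  % random mixing matrix
s = zeros(30, 1); s([3 17 25]) = randn(3, 1);       % sparse original signal
b = A*s + 0.01*randn(10, 1);                        % noisy observation
lambda = 0.1;
obj_l0 = 0.5*norm(A*s - b)^2 + lambda*nnz(s);       % discrete objective (2)
obj_l1 = 0.5*norm(A*s - b)^2 + lambda*norm(s, 1);   % convex Lasso objective (3)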

A class of signal recovery problems can be formulated as
$\min_{x \in \Omega} \ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|Dx\|_1$, (4)
where $D$ is a linear operator, $\lambda > 0$ is the regularization parameter that controls the trade-off between the regularization term and the data fitting term, and the constraint set $\Omega$ is a closed convex set.

Optimization problems arise in a variety of scientific and engineering applications, and they often require real time solutions. Since the computing time greatly depends on the dimension and the structure of the optimization problem, numerical algorithms are usually less effective for large scale or real time optimization problems. In many applications, real time optimal solutions are usually imperative, such as on-board signal processing and robot motion planning and control. One promising approach to handle these problems is to employ artificial neural networks. During recent decades, the neural dynamical method for solving optimization problems has been a major area of neural network research based on circuit implementation [9-12]. First, the structure of a neural network can be implemented physically by designated hardware such as application-specific integrated circuits, where the computational procedure is distributed and parallel. This allows the neural network approach to solve optimization problems with running times orders of magnitude faster than those of conventional optimization algorithms executed on general-purpose digital computers. Second, neural networks can solve many optimization problems with time-varying parameters. Third, dynamical systems and ODE techniques can be applied to continuous-time neural networks. Moreover, recent reports have shown that global convergence can be obtained by the neural network approach under some weaker conditions.

Since neural networks were first proposed for solving linear [13, 14] and nonlinear [15] programming problems, many researchers have been inspired to develop neural networks for optimization. Many types of neural networks have been proposed to solve various optimization problems, for example, the recurrent neural network, the Lagrangian network, the deterministic annealing network, the projection-type neural network, the generalized neural network, and so forth. In [16], Chong et al. proposed a neural network for linear programming problems with finite time convergence. In [17], a generalized neural network was presented for solving a class of nonsmooth convex optimization problems. In [18], a neural network was defined by using the penalty function method and differential inclusions for solving a class of nonsmooth convex optimization problems. In fact, in many important applications, a neural network built by a differential inclusion is an important method for solving a class of nonsmooth optimization problems. One has to mention that the optimization problems arising in many important applications are not differentiable. Moreover, neural networks for smooth optimization problems require the gradients of the objective and constraint functions, so these networks cannot solve nonsmooth optimization problems. Using smoothing techniques in neural networks is an effective method for solving nonsmooth optimization problems [19, 20]. The main feature of smoothing methods is to approximate the nonsmooth functions by parameterized smooth functions [21, 22]. By smoothing approximations, we can give a class of smooth functions, which converge to the original nonsmooth function and whose gradients converge to the subgradient of the nonsmooth function. For many constrained optimization problems, projection is a simple and effective method for handling the constraints. In [23, 24], projection has been used in neural networks for solving some kinds of constrained optimization problems.

Based on the advantages of neural networks, in this paper we propose a neural network and use some mathematical techniques to solve optimization problem (4). Problem (4) is nonsmooth. Many neural networks are modeled by differential inclusions, which face the difficulty of choosing the right element of the set-valued map. In this paper, we introduce a smoothing function to overcome this problem. Using smoothing techniques in a neural network is an interesting and promising method for solving (4).

Notation. Throughout this paper, $\|\cdot\|$ denotes the $\ell_2$ norm and $\|\cdot\|_1$ denotes the $\ell_1$ norm.

2. Preliminary Results

In this section, we will introduce several basic definitions and lemmas, which are used in the development.

Definition 1. Suppose that $f$ is Lipschitz near $x$; the generalized directional derivative of $f$ at $x$ in the direction $v$ is given by
$f^{0}(x; v) = \limsup_{y \to x,\ t \downarrow 0} \dfrac{f(y + tv) - f(y)}{t}.$
Furthermore, the Clarke generalized gradient of $f$ at $x$ is defined as
$\partial f(x) = \{\xi : f^{0}(x; v) \ge \langle \xi, v \rangle \ \text{for all } v\}.$

Moreover, if $f$ is a convex function, then it has the following property as well.

Proposition 2. If $f$ is a convex function, the following property holds:
$f(y) \ge f(x) + \langle \xi, y - x \rangle \quad \text{for all } x, y \ \text{and all } \xi \in \partial f(x).$

Since the constraint set $\Omega$ of (4) is a closed convex set, we use the projection operator to handle the constraint. The projection of $x$ onto the closed convex set $\Omega$ is defined by
$P_{\Omega}(x) = \arg\min_{u \in \Omega} \|u - x\|.$

The projection operator has the following properties.

Proposition 3. Consider the following: (i) $\langle u - P_{\Omega}(u),\ y - P_{\Omega}(u) \rangle \le 0$ for all $u$ and all $y \in \Omega$; (ii) $\|P_{\Omega}(u) - P_{\Omega}(v)\| \le \|u - v\|$ for all $u, v$.
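
As a concrete illustration, when $\Omega$ is a box or a Euclidean ball (illustrative choices; the paper does not fix a particular $\Omega$), the projection has a simple closed form. A minimal MATLAB sketch:

proj_box  = @(x, l, u) min(max(x, l), u);                  % projection onto the box {x : l <= x <= u}
proj_ball = @(x, c, r) c + (x - c)*min(1, r/norm(x - c));  % projection onto the ball {x : ||x - c|| <= r}

Both maps satisfy the two properties stated in Proposition 3.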

Definition 4. Let $f$ be a locally Lipschitz function. We call $\tilde f$ a smoothing function of $f$ if $\tilde f$ satisfies the following conditions. (i) For any fixed $\mu > 0$, $\tilde f(\cdot, \mu)$ is continuously differentiable, and for any fixed $x$, $\tilde f(x, \cdot)$ is differentiable in $(0, +\infty)$. (ii) For any fixed $x$, $\lim_{\mu \downarrow 0} \tilde f(x, \mu) = f(x)$. (iii) There is a positive constant $\kappa$ such that $|\nabla_\mu \tilde f(x, \mu)| \le \kappa$ for all $x$ and all $\mu > 0$. (iv) $\{\lim_{z \to x,\ \mu \downarrow 0} \nabla_z \tilde f(z, \mu)\} \subseteq \partial f(x)$ for all $x$.

From the above definition, we can get that for any fixed $x$, $|\tilde f(x, \mu) - f(x)| \le \kappa \mu$ for all $\mu > 0$.

Next, we present a smoothing function of the absolute value function, which is defined by
$\phi(s, \mu) = |s|$ if $|s| > \mu/2$ and $\phi(s, \mu) = s^{2}/\mu + \mu/4$ if $|s| \le \mu/2$.

Proposition 5 (see [21]). Consider the following: (i) $\phi(\cdot, \mu)$ is continuously differentiable for any fixed $\mu > 0$, and $\phi(s, \cdot)$ is differentiable for any fixed $s$; (ii) $0 \le \phi(s, \mu) - |s| \le \mu/4$, for all $\mu > 0$, for all $s$; (iii) $0 \le \nabla_\mu \phi(s, \mu) \le 1/4$, for all $\mu > 0$, for all $s$; (iv) $\phi(\cdot, \mu)$ is convex for any fixed $\mu > 0$, and $\{\lim_{z \to s,\ \mu \downarrow 0} \nabla_z \phi(z, \mu)\} \subseteq \partial |s|$.
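
For reference, this Huber-type smoothing and its gradient with respect to $s$ can be written in two lines of MATLAB; the sketch assumes the particular form displayed before Proposition 5:

phi  = @(s, mu) (abs(s) >  mu/2).*abs(s) + (abs(s) <= mu/2).*(s.^2/mu + mu/4);  % smooth approximation of |s|
dphi = @(s, mu) (abs(s) >  mu/2).*sign(s) + (abs(s) <= mu/2).*(2*s/mu);         % its gradient in s

As mu decreases to zero, phi(s, mu) decreases to |s| and dphi(s, mu) tends to a subgradient of |s|.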

3. Theoretical Results

In (4), $\|Dx\|_1$ can be rewritten as $\sum_{i=1}^{m} |d_i^{\top} x|$, where $d_i^{\top}$ denotes the $i$th row of $D$ and $d_i$ is a column vector of the same dimension as $x$. Then (4) can be rewritten as
$\min_{x \in \Omega} \ f(x) = \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=1}^{m} |d_i^{\top} x|$. (13)

In the following, we use $\phi(d_i^{\top} x, \mu)$ to approximate $|d_i^{\top} x|$ and define the smoothed objective $\tilde f(x, \mu) = \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=1}^{m} \phi(d_i^{\top} x, \mu)$. From the idea of the projected gradient method, we construct our neural network as follows:
$\dot x(t) = -x(t) + P_{\Omega}\big(x(t) - \nabla_x \tilde f(x(t), \mu(t))\big), \quad x(0) = x_0,$ (14)
where $\mu(t)$ is a positive and strictly decreasing function with $\lim_{t \to +\infty} \mu(t) = 0$, $x_0 \in \Omega$, and $P_{\Omega}$ is the projection operator on $\Omega$.
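
To make the dynamics concrete, the following MATLAB sketch integrates a network of the form (14) with a simple forward-Euler scheme. The problem data, the box constraint set, the smoothing schedule mu(t) = mu0*exp(-t), and the step size are illustrative assumptions only and are not the settings used in the experiments of Section 4.

rng(1);
m = 20; n = 50;                         % illustrative problem size
A = randn(m, n); b = randn(m, 1);
D = eye(n); lambda = 0.1;               % D = I gives plain l1 regularization
lo = -ones(n, 1); up = ones(n, 1);      % box constraint Omega = {x : lo <= x <= up}
proj = @(x) min(max(x, lo), up);
dphi = @(s, mu) (abs(s) > mu/2).*sign(s) + (abs(s) <= mu/2).*(2*s/mu);
mu0 = 1; h = 1e-3; T = 10;              % initial smoothing parameter, Euler step, time horizon
x = proj(zeros(n, 1));                  % initial point x0 in Omega
for t = 0:h:T
    mu = mu0*exp(-t);                                % decreasing smoothing parameter
    g  = A'*(A*x - b) + lambda*(D'*dphi(D*x, mu));   % gradient of the smoothed objective
    x  = x + h*(-x + proj(x - g));                   % Euler step of (14)
end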

Next, we will give some analysis of the proposed neural network (14).

Theorem 6. For any initial point $x_0 \in \Omega$, there is a global and uniformly bounded solution of (14).

Proof. The right-hand side of (14) is continuous in $x$ and $t$; hence there is a local solution of (14) with $x(0) = x_0$. We assume that $[0, T)$ is the maximal existence interval of $x(t)$. First, we prove that $x(t) \in \Omega$ for all $t \in [0, T)$. Obviously, (14) can be rewritten as
$\dot x(t) + x(t) = P_{\Omega}\big(x(t) - \nabla_x \tilde f(x(t), \mu(t))\big).$
Integrating the above differential equation, we have
$x(t) = e^{-t} x_0 + \int_0^t e^{-(t - \tau)} P_{\Omega}\big(x(\tau) - \nabla_x \tilde f(x(\tau), \mu(\tau))\big)\, d\tau,$
where $e^{-t} + \int_0^t e^{-(t-\tau)}\, d\tau = 1$.
Since $x_0 \in \Omega$, the values of $P_{\Omega}$ lie in $\Omega$, and $\Omega$ is a closed convex set, the right-hand side above is a convex combination of elements of $\Omega$; thus we confirm that $x(t) \in \Omega$ for all $t \in [0, T)$.
Differentiating $\tilde f(x(t), \mu(t))$ along this solution of (14), we obtain
$\dfrac{d}{dt} \tilde f(x(t), \mu(t)) = \langle \nabla_x \tilde f(x(t), \mu(t)), \dot x(t) \rangle + \nabla_\mu \tilde f(x(t), \mu(t))\, \dot\mu(t).$
Using the inequality of the projection operator in Proposition 3(i) on (14), with $u = x(t) - \nabla_x \tilde f(x(t), \mu(t))$ and $y = x(t) \in \Omega$, we obtain
$\langle \nabla_x \tilde f(x(t), \mu(t)), \dot x(t) \rangle \le -\|\dot x(t)\|^2.$
Moreover, $\nabla_\mu \tilde f(x(t), \mu(t)) \ge 0$ by Proposition 5 and $\dot\mu(t) \le 0$, so the last term above is nonpositive. Thus,
$\dfrac{d}{dt} \tilde f(x(t), \mu(t)) \le -\|\dot x(t)\|^2 \le 0,$ (20)
which implies that $\tilde f(x(t), \mu(t))$ is nonincreasing along the solution of (14). On the other hand, by Proposition 5, we know that
$\tilde f(x(t), \mu(t)) \ge f(x(t)) \ge 0.$
Thus, $x(t)$ is bounded on $[0, T)$. Using the extension theorem, the solution of (14) exists globally and is uniformly bounded.

Theorem 7. For any initial point $x_0 \in \Omega$, the solution of (14) is unique and satisfies the following: (i) $\tilde f(x(t), \mu(t))$ is nonincreasing on $[0, +\infty)$ and $\lim_{t \to +\infty} \|\dot x(t)\| = 0$; (ii) the solution of (14) converges to the optimal solution set of (4).

Proof. Suppose that there exist two solutions $x(t)$ and $y(t)$ of (14) with the same initial point $x_0$, which means that
$\dot x(t) = -x(t) + P_{\Omega}(u(t)), \qquad \dot y(t) = -y(t) + P_{\Omega}(v(t)).$
Thus,
$\dfrac{1}{2}\dfrac{d}{dt}\|x(t) - y(t)\|^2 = -\|x(t) - y(t)\|^2 + \langle x(t) - y(t),\ P_{\Omega}(u(t)) - P_{\Omega}(v(t)) \rangle,$
where $u(t) = x(t) - \nabla_x \tilde f(x(t), \mu(t))$ and $v(t) = y(t) - \nabla_x \tilde f(y(t), \mu(t))$.
From the expressions of $u(t)$ and $v(t)$ and the nonexpansiveness of $P_{\Omega}$ in Proposition 3(ii), for all $t \ge 0$ we get
$\langle x(t) - y(t),\ P_{\Omega}(u(t)) - P_{\Omega}(v(t)) \rangle \le \|x(t) - y(t)\|\, \|u(t) - v(t)\|.$
Thus, we have
$\dfrac{1}{2}\dfrac{d}{dt}\|x(t) - y(t)\|^2 \le -\|x(t) - y(t)\|^2 + \|x(t) - y(t)\|\, \|u(t) - v(t)\|.$
Since $\tilde f(\cdot, \mu)$ is convex and continuously differentiable for any fixed $\mu > 0$, and since $\nabla_x \tilde f(\cdot, \mu(t))$ is Lipschitz continuous on any bounded interval $[0, \bar t]$ (because $\mu(t) \ge \mu(\bar t) > 0$ there), there exists $L_{\bar t} > 0$ such that, for all $t \in [0, \bar t]$,
$\|u(t) - v(t)\| \le (1 + L_{\bar t})\, \|x(t) - y(t)\|.$
Using the two inequalities above in the estimate for $\frac{d}{dt}\|x(t) - y(t)\|^2$, we have
$\dfrac{d}{dt}\|x(t) - y(t)\|^2 \le 2 L_{\bar t}\, \|x(t) - y(t)\|^2.$
By integrating this inequality from $0$ to $t$ and applying Gronwall's inequality, we derive that
$\|x(t) - y(t)\|^2 \le \|x(0) - y(0)\|^2 e^{2 L_{\bar t} t} = 0, \quad t \in [0, \bar t].$
Therefore, $x(t) = y(t)$ for all $t \ge 0$, since $\bar t$ is arbitrary, which gives the uniqueness of the solution of (14).
Let $E(t) = \tilde f(x(t), \mu(t))$, where $x(t)$ is the unique solution of (14) with initial point $x_0$; inequality (20) implies that $\dot E(t) \le -\|\dot x(t)\|^2 \le 0$ for all $t \ge 0$. Therefore $E(t)$ is nonincreasing.
Moreover, from (20) and Proposition 5, we obtain that $E(t)$ is nonincreasing and bounded from below on $[0, +\infty)$; therefore we have that $\lim_{t \to +\infty} E(t)$ exists. Applying Proposition 5 to the above result (using $\mu(t) \to 0$), we obtain that $\lim_{t \to +\infty} f(x(t))$ exists as well.
Moreover, integrating (20) from $0$ to $+\infty$, we have that
$\int_0^{+\infty} \|\dot x(t)\|^2\, dt \le \tilde f(x_0, \mu(0)) - \lim_{t \to +\infty} E(t) < +\infty.$ (33)
Combining (20) and (33), we confirm that $\lim_{t \to +\infty} \|\dot x(t)\| = 0$.
Since $x(t)$ is uniformly bounded on the global interval, it has a cluster point, denoted by $x^{*}$, which means that there exists an increasing sequence $\{t_k\}$ with $t_k \to +\infty$ such that $\lim_{k \to \infty} x(t_k) = x^{*}$.
Using the expression of (14) and $\lim_{t \to +\infty} \|\dot x(t)\| = 0$, we have
$\lim_{k \to \infty} \big\| x(t_k) - P_{\Omega}\big(x(t_k) - \nabla_x \tilde f(x(t_k), \mu(t_k))\big) \big\| = 0.$
Using Proposition 3 (the nonexpansiveness, and hence continuity, of $P_{\Omega}$) in the above equation, we have
$x^{*} = \lim_{k \to \infty} P_{\Omega}\big(x(t_k) - \nabla_x \tilde f(x(t_k), \mu(t_k))\big).$
From Proposition 5, there exists $\xi \in \partial f(x^{*})$ such that, passing to a subsequence if necessary, $\lim_{k \to \infty} \nabla_x \tilde f(x(t_k), \mu(t_k)) = \xi$ and hence $x^{*} = P_{\Omega}(x^{*} - \xi)$.
Therefore, $x^{*}$ is a Clarke stationary point of (4). Since (4) is a convex program, $x^{*}$ is an optimal solution of (4). Since the cluster point was chosen arbitrarily, every cluster point of $x(t)$ is an optimal solution of (4), which means that the solution of (14) converges to the optimal solution set of (4).

4. Numerical Experiments

In this section, we give two numerical experiments to validate the theoretical results obtained in this paper and to illustrate the good performance of the proposed neural network in solving sparse reconstruction problems.

Example 1. In this experiment, we test the recovery of a signal from noisy observations. An original signal with low sparsity means that only a few sources are active at each time point. We use the following MATLAB code to generate the original source signal $s$ (5 sources over 100 time points), the mixing matrix $A$, the noise $n$, and the observed signal $b$:
s = zeros(5, 100);
for l = 1:100
    q = randperm(5);
    s(q(1:2), l) = 2*randn(2, 1);
end
A = randn(3, 5);
n = 0.05*randn(3, 100);
b = A*s - n;
We denote by $\hat s$ the signal recovered by our method. Figures 1(a)-2(a) show the original, observed, and recovered signals using (14). Figure 2(b) presents the convergence of the signal-to-noise ratio (SNR) along the solution of the proposed neural network. From this figure, we see that our method recovers this random original signal effectively. We should also state that the SNR of the recovered signal is 22.15 dB, where
$\mathrm{SNR} = 20 \log_{10} \big( \|s\| / \|s - \hat s\| \big).$
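
For completeness, the SNR used above can be computed in one line of MATLAB (assuming the usual norm-ratio definition):

snr_db = @(s, s_hat) 20*log10(norm(s(:))/norm(s(:) - s_hat(:)));   % SNR in dB of the recovered signal s_hat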

Example 2. In this experiment, we apply the proposed network (14) to the restoration of a circle image. The observed image is distorted from the unknown true image mainly by two factors: the blurring and the random noise. The blurring is a Gaussian function
$h(i, j) = e^{-(i^2 + j^2)/(2\sigma^2)},$
which is truncated such that the function has finite support. A Gaussian noise with zero mean is added to the blurred image. Figures 3(a) and 3(b) present the original and the observed images, respectively. Denote by $x$ and $x_r$ the original image and the corresponding observed (or restored) image, and use the peak signal-to-noise ratio (PSNR) to evaluate the quality of the restored image; that is,
$\mathrm{PSNR}(x_r) = 10 \log_{10} \dfrac{V^2}{\mathrm{MSE}(x, x_r)}$ (in dB),
where $V$ is the maximal pixel value and $\mathrm{MSE}$ is the mean squared error between the two images.
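
A short MATLAB sketch of the blur and the quality measure described above; the width sigma and the support size are placeholders (the specific values of the experiment are not reproduced here), and pixel values are assumed to lie in [0, 255]:

sigma = 2; half = 3;                          % placeholder width and half-support of the truncated Gaussian
[i, j] = meshgrid(-half:half, -half:half);
h = exp(-(i.^2 + j.^2)/(2*sigma^2));
h = h/sum(h(:));                              % normalized blur kernel
psnr_db = @(x, xr) 10*log10(255^2*numel(x)/norm(x(:) - xr(:))^2);   % PSNR in dB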

We use problem (13) to recover this image, with an appropriately chosen regularization parameter $\lambda$, constraint set $\Omega$, and linear operator $D$.
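
The concrete operator $D$ of this experiment is not reproduced here; a common choice in image restoration, shown below as an assumption, is the first-order difference operator with Neumann boundary conditions:

N = 64;                                            % image is N x N, vectorized column by column
e = ones(N, 1);
D1 = spdiags([-e e], [0 1], N, N); D1(N, :) = 0;   % 1-D forward difference, Neumann boundary
Dh = kron(D1, speye(N));                           % differences between neighboring columns
Dv = kron(speye(N), D1);                           % differences between neighboring rows
D  = [Dh; Dv];                                     % stacked difference operator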

The image recovered by (14) is shown in Figure 3(c). The convergence of the objective value and of the PSNR along the solution of (14) with the chosen initial point is presented in Figures 4(a) and 4(b). From Figures 4(a) and 4(b), we find that the objective value is monotonically decreasing and the PSNR is monotonically increasing along the solution of (14).

5. Conclusion

Based on the smoothing approximation technique and the projected gradient method, we construct a neural network modeled by a differential equation to solve a class of constrained nonsmooth convex optimization problems, which have wide applications in sparse reconstruction. The proposed network has a unique and bounded solution for any initial point in the feasible region. Moreover, the solution of the proposed network converges to the optimal solution set of the optimization problem. Simulation results on numerical examples substantiate the effectiveness and the good performance of the proposed neural network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Editor-in-Chief Professor Huaiqin Wu and the three anonymous reviewers for their insightful and constructive comments, which help to enrich the content and improve the presentation of the results in this paper. This work is supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and National Natural Science Foundation of China (31370565).