Abstract

This paper proposes a novel sparsity adaptive simulated annealing algorithm for the sparse recovery problem. The algorithm combines the advantages of the sparsity adaptive matching pursuit (SAMP) algorithm and the global search ability of the simulated annealing method for recovering sparse signals. First, we estimate the sparsity and the initial support collection, which serve as the initial search points of the proposed optimization algorithm, by using the idea of SAMP. Then, we design a two-cycle reconstruction method that finds the support sets efficiently and accurately by updating the optimization direction. Finally, we exploit the global optimization ability of the sparsity adaptive simulated annealing algorithm to guide the sparse reconstruction. The proposed sparsity adaptive greedy pursuit model has a simple geometric structure, can reach the global optimal solution, and surpasses plain greedy algorithms in recovery quality. Our experimental results validate that the proposed algorithm outperforms existing state-of-the-art sparse reconstruction algorithms.

1. Introduction

In recent years, research on compressed sensing (CS) [1–3] has received increasing attention in many fields. CS is a novel signal processing theory that can efficiently reconstruct a signal by finding solutions to underdetermined linear equations [4]. Remarkably, CS theory can exactly restore sparse or compressible signals from samples taken at a rate that does not satisfy the Nyquist–Shannon sampling theorem. Many studies have demonstrated that CS can effectively extract the key information of a signal from only a few incoherent measurements.

The core idea of CS is that sampling and compression are done simultaneously. CS technology can reduce hardware requirements, further reduce the sampling rate, improve signal restoration quality, and save the cost of signal processing and transmission. Currently, CS has been widely used in wireless sensor networks [5, 6], information theory [7], image processing [8–10], earth science, optical/microwave imaging, pattern recognition [11], wireless communications [12, 13], atmosphere, geology, and other fields.

CS theory mainly covers three aspects: (1) sparse representation; (2) uncorrelated sampling; (3) sparse reconstruction. Among these, reconstructing the sparse signal is the crucial step, and proposing a reconstruction algorithm with reliable accuracy is a major challenge for researchers. Since reconstruction methods need to recover the original signal from low-dimensional measurements, signal reconstruction requires solving an underdetermined system of equations, which may admit infinitely many solutions. Mathematically, the issue can be formulated as l0-norm minimization. However, this is an NP-hard problem [14]: it requires exhaustively enumerating all possible supports of the original signal, which is unrealistic for large-scale data processing problems.

At present, scholars have proposed many reconstruction algorithms. Typically, the nonconvex l0-norm minimization is replaced with the convex l1-norm minimization when designing the reconstruction model. There are two important families of reconstruction algorithms: greedy pursuit algorithms and convex optimization algorithms. Convex optimization algorithms solve the much easier l1-norm minimization problem (e.g., the basis pursuit algorithm) but require high computational complexity to achieve exact restoration. In contrast, a series of iterative greedy algorithms has received great attention due to their low computational complexity and simple geometric interpretation. The greedy algorithms include orthogonal matching pursuit (OMP) [15], stagewise OMP (StOMP) [4], regularized OMP (ROMP) [16], subspace pursuit (SP) [17], compressive sampling matching pursuit (CoSaMP) [18], and so on. The key step of these methods is to estimate the support set with high probability. At each iteration, one or several indexes of the sparse signal are added to the candidate support set for subsequent processing; if those indexes are reliable, they are added to the final support set used to recover the original signal.
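To make this support-estimation step concrete, the following is a minimal NumPy sketch of OMP; the function name, the tolerance parameter, and the stopping check are our additions, not taken from [15].

```python
import numpy as np

def omp(Phi, y, K, tol=1e-6):
    """Minimal orthogonal matching pursuit sketch.

    Phi: (M, N) sensing matrix, y: (M,) measurements, K: sparsity level.
    Greedily grows the support set one index per iteration.
    """
    M, N = Phi.shape
    support = []
    residual = y.copy()
    for _ in range(K):
        # Add the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support; update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```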

In this paper, we propose a novel sparsity adaptive simulated annealing method (SASA) to reconstruct sparse signals efficiently. This new method combines the advantages of the SAMP [19] algorithm and the simulated annealing (SA) algorithm to solve the CS reconstruction problem. The simulated annealing algorithm is known for its efficient global optimization strategy and superior results on nonconvex problems. The proposed algorithm can estimate the sparsity level and achieves superior performance in finding the optimal solution. The experimental results show that the proposed SASA method outperforms existing state-of-the-art sparse reconstruction algorithms.

The main contributions of this paper are listed as follows:
(1) We take advantage of the global search ability of the sparsity adaptive simulated annealing algorithm to guide the sparse reconstruction.
(2) The proposed algorithm can reconstruct a sparse signal accurately, and it does not require the sparsity as a priori information.
(3) The proposed algorithm has both the simple geometric structure of the greedy algorithm and the global optimization ability of the SA algorithm.

The remainder of this paper is organized as follows. Section 2 introduces the background of CS and some approximation methods that are applied in later sections. In Section 3, we propose the SASA algorithm and give its framework. We discuss the experimental results in Section 4. Finally, Section 5 draws the conclusion.

2. Background

In this section, we introduce the compressive sensing model and the sparse signal approximation method.

2.1. Compressive Sensing Model

Sparse reconstruction aims to recover the high-dimensional original signal $x \in \mathbb{R}^N$ from the low-dimensional measurement vector $y \in \mathbb{R}^M$ ($M \ll N$). The compressive sensing model can be defined as
$$y = \Phi x, \quad (1)$$
where $\Phi \in \mathbb{R}^{M \times N}$ is the sensing matrix. Mathematically, this problem can be solved by the l0-norm minimization, written as follows:
$$\hat{x} = \arg\min_x \|x\|_0 \quad \text{s.t.} \quad y = \Phi x, \quad (2)$$
where $\|x\|_0$ is the zero norm, i.e., the number of nonzero entries of $x$. However, the l0-norm minimization is an NP-hard problem. It needs to exhaustively check all $\binom{N}{K}$ (K is the sparsity) possible solutions to determine the locations of the nonzero entries in $x$. Therefore, it is usually converted to the convex l1-norm minimization problem as follows:
$$\hat{x} = \arg\min_x \|x\|_1 \quad \text{s.t.} \quad y = \Phi x. \quad (3)$$

The above model is a convex optimization problem; it can be solved by linear programming methods or the split Bregman algorithm, and linear programming methods have been shown to solve such problems with high accuracy. Theoretically, CS establishes that the sparse signal x can be exactly reconstructed from only a few observations y when the sensing matrix $\Phi$ satisfies the restricted isometry property (RIP). This is an important property for analyzing and designing reconstruction algorithms. The RIP is given as follows:
$$(1 - \delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\|x\|_2^2, \quad (4)$$
for all K-sparse signals x, where $\delta_K \in (0, 1)$ is a constant and $\|x\|_2$ denotes the l2 norm of a vector x.

Generally speaking, if $\delta_K$ is very close to one, then it is possible that y preserves no information about x (i.e., $\Phi x \approx 0$, so x lies close to the null space of the sensing matrix $\Phi$). In this case, it is nearly impossible to reconstruct the sparse signal using any greedy algorithm.
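As noted above, the l1 model in equation (3) can be recast as a linear program by splitting $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \sum u + \sum v$. Below is a minimal sketch, assuming SciPy is available; the helper name basis_pursuit and this particular reformulation are ours.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 subject to Phi @ x = y as a linear program."""
    M, N = Phi.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])      # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    u, v = res.x[:N], res.x[N:]
    return u - v
```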

There are many kinds of sensing matrices $\Phi$, mainly divided into deterministic matrices and learned matrices. Based on signal characteristics, a learned matrix can be continuously updated by using machine learning methods. The most commonly used sensing matrices, however, include the Gaussian random matrix, the random Fourier matrix, and the Toeplitz matrix.
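For illustration, the snippet below sets up the common Gaussian measurement model of equation (1); all dimensions and the random seed are example values of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 64, 256, 10

# Gaussian random sensing matrix with column-norm scaling.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# K-sparse test signal: K random locations with Gaussian amplitudes.
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

y = Phi @ x  # compressive measurements, M << N
```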

2.2. Sparse Signal Approximation Method

In this section, we give some notations and matrix operators to facilitate the description of the proposed algorithm.

Definition 1 (projection and residue). Let $I \subset \{1, 2, \ldots, N\}$ and let $\Phi_I$ denote the submatrix of $\Phi$ composed of the columns indexed by $I$. If $\Phi_I$ has full column rank, then $\Phi_I^{\dagger} = (\Phi_I^{T}\Phi_I)^{-1}\Phi_I^{T}$ is the Moore–Penrose pseudoinverse of $\Phi_I$, where $(\cdot)^{T}$ represents the matrix transpose. The projection of the measurements $y$ onto $\mathrm{span}(\Phi_I)$ is written as
$$y_p = \mathrm{proj}(y, \Phi_I) = \Phi_I \Phi_I^{\dagger} y, \quad (5)$$
where $\Phi_I^{\dagger}$ is the pseudoinverse of the matrix $\Phi_I$ and is defined as
$$\Phi_I^{\dagger} = (\Phi_I^{T}\Phi_I)^{-1}\Phi_I^{T}. \quad (6)$$
It can be seen from equation (5) that we can get the restoration of the sparse signal x on the support I by using the following formulation:
$$\hat{x}_I = \Phi_I^{\dagger} y. \quad (7)$$
The residue of the projection vector y is expressed as
$$y_r = \mathrm{resid}(y, \Phi_I) = y - y_p. \quad (8)$$
Suppose that $\Phi_I^{T}\Phi_I$ is invertible. The projection of $y$ onto the orthogonal complement of $\mathrm{span}(\Phi_I)$ is defined as
$$y_r = (E - \Phi_I \Phi_I^{\dagger})\, y, \quad (9)$$
where E is the identity matrix. To illustrate the relationship between projection and residue in an iteration, we draw Figure 1 [17]; refer to it to better understand the definition.
As can be seen from Figure 1, the residue $y_r$ is perpendicular to the hyperplane $\mathrm{span}(\Phi_I)$, which is ensured by the least-squares restoration in equation (7). This means that the residue is minimized in each iteration, which speeds up the convergence of the algorithm.
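A direct NumPy rendering of Definition 1 might look as follows; the function name is ours, and np.linalg.pinv plays the role of the pseudoinverse in equation (6).

```python
import numpy as np

def project_and_residue(Phi, y, I):
    """Projection of y onto span(Phi_I) and the orthogonal residue."""
    Phi_I = Phi[:, list(I)]
    x_I = np.linalg.pinv(Phi_I) @ y   # least-squares coefficients, eq. (7)
    y_p = Phi_I @ x_I                 # projection onto span(Phi_I), eq. (5)
    y_r = y - y_p                     # residue, eq. (8)
    return y_p, y_r

# The residue is orthogonal to every selected column:
# np.allclose(Phi[:, list(I)].T @ y_r, 0) holds up to numerical precision.
```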

3. SASA: Sparsity Adaptive Simulated Annealing Matching Pursuit Algorithm

Greedy pursuit algorithms for sparse recovery have low computational complexity and good reconstruction quality, so designing a stable and efficient CS reconstruction algorithm along these lines is a meaningful research issue. However, existing greedy algorithms often return only a locally optimal solution to the reconstruction problem. To better solve this issue, we introduce an intelligent optimization algorithm to obtain better reconstruction results.

Simulated annealing (SA) is a famous heuristic optimization algorithm, and it has been empirically shown to have superior performance in finding globally optimal solutions. Since sparse reconstruction is a special optimization problem, we could in principle solve problem (2) directly with the SA algorithm. However, due to the high computational complexity, directly applying SA to the sparse recovery problem is impractical.

As is well known, greedy algorithms have low complexity and a simple geometric interpretation, but they lack global optimization capability and easily settle on suboptimal solutions. This means that the greedy strategy and SA can complement each other. We therefore combine the advantages of the SAMP algorithm and the SA algorithm to recover a sparse signal from only a few measurements y.

Inspired by this, we propose a new method to solve the sparse reconstruction issue. The proposed algorithm does not require the sparsity as a prior and also has superior optimization performance. We call this sparse signal recovery algorithm the sparsity adaptive simulated annealing matching pursuit algorithm (SASA).

3.1. Brief of the Simulated Annealing Algorithm

Kirkpatrick proposed the SA algorithm in 1983, originally to solve thermodynamic problems. The SA algorithm can escape from locally optimal solutions: since it accepts worse intermediate solutions with a certain probability, SA is more likely to find global optima or near-optimal solutions. To this day, the SA algorithm still receives great attention due to this special strategy.

The detailed procedure of the standard SA is summarized in Algorithm 1 [20]. In Algorithm 1, $f(\cdot)$ is the objective function, and $H(S)$ returns a random neighbor of the solution S under the neighborhood structure H.

(1)Input:
(2) Initial solution S;
(3) Initial temperature t0;
(4) Cooling rate α;
(5) The outer-loop iteration number Kmax;
(6) The inner-loop iteration number SAmax.
(7)S* ← S, t ← t0, k ← 0.
(8)Iteration:
(9) While k < Kmax do
(10)  for i = 1 to SAmax do
(11)   S′ ← random neighbor of a random neighborhood H(S),
(12)   Δf ← f(S′) − f(S)
(13)   if Δf ≤ 0 then
(14)    S ← S′
(15)    if f(S) < f(S*) then
(16)     S* ← S
(17)   else
(18)    Take a random p ∈ [0, 1)
(19)    if p < exp(−Δf/t) then
(20)     S ← S′
(21)  t ← α·t: adjusting temperature
(22)  k ← k + 1; if the stopping criterion is met then break
(23)Output: S*
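For reference, a compact Python rendering of Algorithm 1 is given below; the parameter names and default values are our own choices, not prescribed by [20].

```python
import math
import random

def simulated_annealing(f, S, neighbor, t0=1.0, alpha=0.9,
                        n_outer=100, n_inner=50):
    """Standard SA for minimizing f; neighbor(S) returns a random neighbor."""
    best, t = S, t0
    for _ in range(n_outer):
        for _ in range(n_inner):
            S_new = neighbor(S)
            delta = f(S_new) - f(S)
            if delta <= 0:                            # better: always accept
                S = S_new
                if f(S) < f(best):
                    best = S
            elif random.random() < math.exp(-delta / t):
                S = S_new                             # worse: accept with prob e^{-delta/t}
        t *= alpha                                    # cooling schedule
    return best
```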
3.2. Proposed SASA

In this part, we describe the procedure of the SASA algorithm in detail. In real signal processing, the sparsity is usually unavailable for the restoration task; we cannot know the signal sparsity in advance. If the sparsity level K could be accurately obtained, we could solve problem (2) easily.

Theorem 1. Suppose the support set I is the unique solution of problem (2) for the ground truth x with sparsity K. Let $\mathrm{spark}(\Phi)$ be the minimum number of columns of $\Phi$ that are linearly dependent. Then, we can get the estimated solution $\hat{x} = x$ if and only if $K < \mathrm{spark}(\Phi)/2$, where $\hat{x}$ is the estimated solution. And $\hat{x}$ can be obtained by least squares on the support I.

From Theorem 1, we can define the cost function as
$$f(I) = \|y - \Phi_I \Phi_I^{\dagger} y\|_2, \quad (10)$$
where I denotes the support set in each iteration. The pseudocode of the SASA algorithm is given below, after a short illustration of evaluating f(I); the main steps of signal reconstruction are summarized in Algorithm 2.
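A minimal NumPy evaluation of the cost in equation (10), using a least-squares fit on the candidate support (the function name is ours):

```python
import numpy as np

def cost(Phi, y, I):
    """f(I) = ||y - Phi_I Phi_I^+ y||_2: the residue after least squares on I."""
    Phi_I = Phi[:, list(I)]
    coef, *_ = np.linalg.lstsq(Phi_I, y, rcond=None)
    return np.linalg.norm(y - Phi_I @ coef)
```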

(1)Input:
(2) Measurement signal y;
(3) Sensing matrix Φ;
(4) Initialize x̂ = 0, r = y, t = t0.
(5)Iteration:
(6) Estimate the sparsity level K by the SAMP strategy.
(7) Initialize the support set I with the indexes of the K largest entries of |Φ^T y|.
(8) r = y − Φ_I Φ_I^† y;
(9) Initialize k = 0;
(10)While k < Kmax do
(11)  for i = 1 to SAmax do
(12)   u = |Φ^T r|;
(13)   Choose the K largest elements from u and combine them with I as the candidate set C;
(14)   Calculate x̂_C = Φ_C^† y;
(15)   Choose the K largest elements from |x̂_C| as the new support I′;
(16)   Calculate f(I′) = ||y − Φ_{I′} Φ_{I′}^† y||_2;
(17)   if f(I′) ≤ f(I) then
(18)    I ← I′
(19)   else
(20)    Calculate Δf = f(I′) − f(I)
(21)    Take a random p ∈ [0, 1)
(22)    if p < exp(−Δf/t) then I ← I′;
(23)    else keep I
(24)    Calculate r = y − Φ_I Φ_I^† y
(25)   if f(I) < ε then break
(26)  end for
(27) Update t ← α·t, k ← k + 1
(28)End while
(29)Output: x̂ with x̂_I = Φ_I^† y and zeros elsewhere;
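To tie the pieces together, the following is a simplified Python sketch of the two-cycle SASA idea: SAMP/SP-style candidate selection and pruning in the inner loop, with SA acceptance of occasionally worse supports in place of purely greedy updates. The candidate-selection details and all parameter defaults reflect our reading of Algorithm 2, not a definitive implementation, and the sparsity-estimation stage is omitted (K is assumed given or pre-estimated).

```python
import numpy as np

def sasa(Phi, y, K, t0=1.0, alpha=0.9, n_outer=50, n_inner=20, eps=1e-6):
    """Simplified SASA sketch for a known (or pre-estimated) sparsity K."""
    rng = np.random.default_rng()
    M, N = Phi.shape

    def cost(I):
        coef, *_ = np.linalg.lstsq(Phi[:, I], y, rcond=None)
        return np.linalg.norm(y - Phi[:, I] @ coef)

    # Initial support: indexes of the K largest correlations with y.
    I = list(np.argsort(-np.abs(Phi.T @ y))[:K])
    t = t0
    for _ in range(n_outer):
        for _ in range(n_inner):
            # Candidate set: current support plus K best matches to the residue.
            coef, *_ = np.linalg.lstsq(Phi[:, I], y, rcond=None)
            r = y - Phi[:, I] @ coef
            C = list(set(I) | set(np.argsort(-np.abs(Phi.T @ r))[:K]))
            # Prune back to the K largest least-squares coefficients.
            coef_C, *_ = np.linalg.lstsq(Phi[:, C], y, rcond=None)
            I_new = [C[j] for j in np.argsort(-np.abs(coef_C))[:K]]
            # SA acceptance: always take improvements, sometimes take worse sets.
            delta = cost(I_new) - cost(I)
            if delta <= 0 or rng.random() < np.exp(-delta / t):
                I = I_new
            if cost(I) < eps:
                break
        t *= alpha  # cooling
    coef, *_ = np.linalg.lstsq(Phi[:, I], y, rcond=None)
    x_hat = np.zeros(N)
    x_hat[I] = coef
    return x_hat
```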

4. Simulation Results and Analysis

In this section, a series of simulations is presented to evaluate the performance of the proposed SASA algorithm. To demonstrate its efficiency, SASA is compared with existing state-of-the-art reconstruction methods, including SP, StOMP, CoSaMP, SWOMP [21], and ROMP. We performed simulation experiments on one-dimensional and two-dimensional data, respectively, and adopted the Gaussian random matrix as the sampling operator for all methods.

All simulations were performed in MATLAB (R2017b) on a Windows 10 system with a 3.20 GHz Intel Core i7 processor and 8 GB of memory.

4.1. One-Dimensional Signal

In this experiment, the length of the Gaussian random sparse signal is set to N = 256. The sensing matrix $\Phi$ is a Gaussian random matrix used to obtain the observation y. To evaluate the reconstruction performance of all algorithms, we conducted a variety of simulation experiments and, to avoid randomness, report the average of 500 independent runs as the final result.

In Figure 2, we plot the exact reconstruction rate under different measurement numbers M. The exact reconstruction rate is the fraction of correctly reconstructed signals over the 500 independent runs; a signal is considered correctly reconstructed when the error between x and the reconstructed signal $\hat{x}$ is negligible. The sparsity K is set to 20. We can observe that the proposed SASA algorithm has a higher exact reconstruction rate for all values of M. In particular, even when the measurement number M is less than 55, the proposed SASA is significantly superior to the other algorithms.
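Because this metric drives all the figures in this section, a sketch of how such a rate can be computed is given below; `recover` stands for any reconstruction routine (e.g., a wrapper around the sasa sketch above), and the success threshold `tol` is our assumption, as the exact value used in the experiments is not stated.

```python
import numpy as np

def exact_recovery_rate(recover, Phi, signals, tol=1e-6):
    """Fraction of trials whose reconstruction matches the ground truth."""
    hits = 0
    for x in signals:
        x_hat = recover(Phi, Phi @ x)   # measure, then reconstruct
        hits += np.linalg.norm(x_hat - x) < tol
    return hits / len(signals)
```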

In Figure 3, we plot the exact reconstruction rate under different sparsity levels K. The measurement number is set to 60, and K varies between 5 and 35. From Figure 3, we find that SASA is far superior to the other algorithms at every sparsity level; even when K > 25, the proposed SASA remains better than the other classical algorithms.

In Figure 4, we plot the exact reconstruction rate under different signal lengths N. K = 10 and M = 25 are fixed, and the signal length varies between 30 and 120. We observe that the proposed SASA performs better than the other five methods in exact recovery rate. In general, the proposed SASA achieves a higher exact reconstruction rate under different signal lengths.

In Figure 5, we test the exact reconstruction performance of the SASA algorithm under different sparsity levels, plotting the exact reconstruction rate as a function of the measurement number M. K varies between 5 and 25, and M varies between 10 and 200. We find that SASA can exactly reconstruct signals across these sparsity levels.

Through the above simulation experiments, we demonstrate that the proposed SASA algorithm has superior reconstruction performance under different reconstruction conditions.

4.2. Two-Dimensional Image

To validate the performance of the proposed SASA algorithm on real data, we use standard test images of size 256 × 256. In addition, note that the input image is normalized, which facilitates sparse processing using the discrete wavelet transform (DWT). We adopted a Gaussian random matrix as the compression sampling operator $\Phi$, with M = 190 and N = 256 fixed. Figure 6 shows the images reconstructed by CoSaMP, SP, SWOMP, OMP, and our SASA algorithm. To assess the performance of all algorithms, the peak signal-to-noise ratio (PSNR) is employed as the reconstruction quality index. The PSNR is defined as follows:
$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right), \quad (11)$$
where $\mathrm{MAX}_I$ is the maximum possible pixel value and MSE is the mean squared error between the original and reconstructed images.
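A direct implementation of this PSNR definition might be (with max_val = 1.0 for normalized images, as used here):

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```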

All methods are evaluated on three different images: Lena, Barche, and Camer. As shown in Figure 6, the proposed SASA method achieves better image reconstruction quality than the other algorithms and obtains the best PSNR. The PSNR results also demonstrate that the SASA algorithm makes the reconstructed image smoother and preserves more details.

In summary, various experiments show that the proposed SASA algorithm achieves superior performance on different types of data. Compared with the other algorithms, the proposed SASA obtains a higher exact reconstruction rate and PSNR.

5. Conclusions

In this paper, we propose a new sparsity adaptive simulated annealing algorithm that solves the sparse reconstruction issue efficiently. The proposed algorithm considers both adaptive sparsity and global optimization for sparse signal reconstruction. A series of experimental results shows that the proposed SASA algorithm achieves the best reconstruction performance and is superior to existing state-of-the-art methods in terms of reconstruction quality. Given these advantages, the proposed SASA algorithm has broad application prospects and significant guiding value for sparse signal reconstruction.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National High-Tech Research and Development Program (National 863 Project, No. 2017YFB0403604) and by the National Natural Science Foundation of China (No. 61771262). Yangyang would also like to thank the Chinese Scholarship Council.