Journal of Electrical and Computer Engineering

Volume 2019, Article ID 6950819, 8 pages

https://doi.org/10.1155/2019/6950819

## The Sparsity Adaptive Reconstruction Algorithm Based on Simulated Annealing for Compressed Sensing

^{1}College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China

^{2}Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA

Correspondence should be addressed to Jianping Zhang; jianpingzhang2018@u.northwestern.edu

Received 27 January 2019; Revised 17 May 2019; Accepted 2 July 2019; Published 14 July 2019

Academic Editor: Panajotis Agathoklis

Copyright © 2019 Yangyang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper proposes a novel sparsity adaptive simulated annealing algorithm for sparse signal recovery. The algorithm combines the sparsity adaptive matching pursuit (SAMP) algorithm with the global search capability of simulated annealing to recover the sparse signal. First, using the idea of SAMP, we estimate the sparsity and the initial support set, which serve as the starting point of the proposed optimization algorithm. Then, we design a two-cycle reconstruction method that finds the support sets efficiently and accurately by updating the optimization direction. Finally, we exploit the global optimization ability of the sparsity adaptive simulated annealing algorithm to guide the sparse reconstruction. The proposed sparsity adaptive greedy pursuit model has a simple geometric structure, can attain the globally optimal solution, and surpasses greedy algorithms in recovery quality. Our experimental results validate that the proposed algorithm outperforms existing state-of-the-art sparse reconstruction algorithms.

#### 1. Introduction

In recent years, research on compressed sensing (CS) [1–3] has received growing attention in many fields. CS is a novel signal processing theory that can efficiently reconstruct a signal by finding solutions to underdetermined linear equations [4]. Remarkably, CS theory can exactly recover sparse or compressible signals from samples taken at a rate below that required by the Nyquist–Shannon sampling theorem. Many studies have demonstrated that CS can effectively extract the key information of a signal from a small number of incoherent measurements.

The core idea of CS is that sampling and compression are done simultaneously. CS technology can reduce hardware requirements, further reduce the sampling rate, improve signal restoration quality, and save the cost of signal processing and transmission. Currently, CS is widely used in wireless sensor networks [5, 6], information theory [7], image processing [8–10], earth science, optical/microwave imaging, pattern recognition [11], wireless communications [12, 13], atmospheric science, geology, and other fields.

CS theory mainly involves three aspects: (1) sparse representation; (2) uncorrelated sampling; (3) sparse reconstruction. Of these, reconstructing the sparse signal is the crucial step, and proposing a reconstruction algorithm with reliable accuracy remains a major challenge. Since reconstruction methods need to recover the original signal from low-dimensional measurements, signal reconstruction requires solving an underdetermined system of equations, which may have infinitely many solutions. Mathematically, the problem can be cast as *l*_{0}-norm minimization. However, this is an NP-hard problem [14]: it requires exhaustively listing all possible solutions for the original signal, which is unrealistic for large-scale data processing problems.

At present, scholars have proposed many reconstruction algorithms. In most of them, the nonconvex *l*_{0}-norm minimization is replaced with the convex *l*_{1}-norm minimization to design the reconstruction model. There are two important families of reconstruction algorithms: greedy pursuit algorithms and convex optimization algorithms. A convex optimization algorithm solves the much easier *l*_{1}-norm minimization problem (e.g., the basis pursuit algorithm) but requires high computational complexity to achieve exact restoration. In contrast, iterative greedy algorithms have received great attention due to their low computational complexity and simple geometric interpretation. The greedy algorithms include orthogonal matching pursuit (OMP) [15], stagewise OMP (StOMP) [4], regularized OMP (ROMP) [16], subspace pursuit (SP) [17], compressive sampling matching pursuit (CoSaMP) [18], and so on. The key step of these methods is to estimate the support set with high probability. At each iteration, one or several indexes of the sparse signal are added to the support set for subsequent processing; if those indexes are reliable, they are added to the final support set used to recover the original signal.
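To make the greedy template these methods share concrete, here is a minimal NumPy sketch of OMP (one of the cited algorithms, not the method proposed in this paper); the dimensions, seed, and support locations are illustrative choices:

```python
import numpy as np

def omp(Phi, y, K, tol=1e-10):
    """Orthogonal matching pursuit: greedily estimate a K-sparse x with y ~ Phi @ x."""
    M, N = Phi.shape
    support = []                                  # estimated support set
    x_hat = np.zeros(N)
    r = y.copy()                                  # current residual
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ r)))   # column most correlated with residual
        if idx not in support:
            support.append(idx)
        # Refit the signal by least squares on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        r = y - Phi @ x_hat                       # residual is orthogonal to chosen columns
        if np.linalg.norm(r) < tol:
            break
    return x_hat

# Demo: a 3-sparse signal measured with a 20 x 50 Gaussian sensing matrix.
rng = np.random.default_rng(0)
N, M, K = 50, 20, 3
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[[5, 17, 42]] = [1.5, -2.0, 0.7]
y = Phi @ x
x_rec = omp(Phi, y, K)   # in this regime OMP typically recovers x exactly
```

Each iteration enlarges the support by at most one index and re-solves a small least-squares problem on that support, which is exactly the "add reliable indexes to the support set" step described above.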

In this paper, we propose a novel sparsity adaptive simulated annealing method (SASA) to reconstruct the sparse signal efficiently. This new method combines the advantage of the SAMP [19] algorithm and the simulated annealing (SA) algorithm to solve the CS reconstruction problem. The simulated annealing algorithm is known for its efficient global optimization strategies and superior results in solving nonconvex problems. The proposed algorithm can estimate the sparsity level, and it achieves superior performance in finding the optimization solution. The experimental results have shown that the proposed SASA method outperforms existing state-of-the-art sparse reconstruction algorithms.

The main contributions of this paper are listed as follows:

1. We take advantage of the sparsity adaptive simulated annealing algorithm in global searching to guide the sparse reconstruction.
2. The proposed algorithm can reconstruct a sparse signal accurately, and it does not require the sparsity as a priori information.
3. The proposed algorithm has both the simple geometric structure of the greedy algorithm and the global optimization ability of the SA algorithm.

The remainder of this paper is organized as follows. Section 2 introduces the background of CS and some approximation methods applied in later sections. In Section 3, we propose the SASA algorithm and give its framework. We discuss the experimental results in Section 4. Finally, Section 5 draws the conclusion.

#### 2. Background and Related Work

In this section, we introduce the compressive sensing model and the sparse signal approximation method.

##### 2.1. Compressive Sensing Model

Sparse reconstruction aims to recover the high-dimensional original signal $\mathbf{x} \in \mathbb{R}^{N}$ from the measurement vector $\mathbf{y} \in \mathbb{R}^{M}$ ($M \ll N$). The compressive sensing model can be defined as

$$\mathbf{y} = \mathbf{\Phi}\mathbf{x}, \tag{1}$$

where $\mathbf{\Phi} \in \mathbb{R}^{M \times N}$ is the sensing matrix. Mathematically, this problem can be solved by the *l*_{0}-norm minimization. It can be written as follows:

$$\min_{\mathbf{x}} \|\mathbf{x}\|_{0} \quad \text{s.t.} \quad \mathbf{y} = \mathbf{\Phi}\mathbf{x}, \tag{2}$$

where $\|\mathbf{x}\|_{0}$ is the zero norm, i.e., the number of nonzero entries of $\mathbf{x}$. However, the *l*_{0}-norm minimization is an NP-hard problem. It needs to exhaustively examine all $\binom{N}{K}$ (*K* is the sparsity) possible supports to determine the locations of the nonzero entries in $\mathbf{x}$. Therefore, it is usually converted to the convex *l*_{1}-norm minimization problem as follows:

$$\min_{\mathbf{x}} \|\mathbf{x}\|_{1} \quad \text{s.t.} \quad \mathbf{y} = \mathbf{\Phi}\mathbf{x}. \tag{3}$$
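A toy instance of the measurement model above can be set up in a few lines of NumPy; the dimensions N = 64, M = 24, K = 4 are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy instance of the CS measurement model y = Phi @ x: the system is
# underdetermined (M < N), yet x has only K nonzero entries.
rng = np.random.default_rng(1)
N, M, K = 64, 24, 4                              # ambient dim, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = Phi @ x                                      # M linear measurements of x
```

With only 24 equations for 64 unknowns the system has infinitely many solutions; the sparsity of x is what makes recovery well posed.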

The above model is a convex optimization problem. It can be solved by linear programming methods or the split Bregman algorithm, and linear programming methods have been shown to solve such problems with high accuracy. Theoretically, CS guarantees that the sparse signal **x** can be exactly reconstructed from only a few observations **y** when the sensing matrix $\mathbf{\Phi}$ satisfies the restricted isometry property (RIP). This is an important property for analyzing and designing reconstruction algorithms. The RIP is given as follows:

$$\left(1 - \delta_{K}\right)\|\mathbf{x}\|_{2}^{2} \leq \|\mathbf{\Phi}\mathbf{x}\|_{2}^{2} \leq \left(1 + \delta_{K}\right)\|\mathbf{x}\|_{2}^{2} \quad \text{for all } K\text{-sparse } \mathbf{x}, \tag{4}$$

where $\delta_{K} \in (0, 1)$ is a constant and $\|\mathbf{x}\|_{2}$ denotes the *l*_{2} norm of the vector **x**.
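The *l*_{1} relaxation can indeed be handed to a generic linear-programming solver. The sketch below uses SciPy's `linprog` with the standard reformulation x = u − v, u, v ≥ 0 (so that ‖x‖₁ = Σ(u + v) at the optimum); the problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi @ x = y as a linear program.

    Split x = u - v with u, v >= 0; the equality constraint becomes
    [Phi, -Phi] @ [u; v] = y and the objective is sum(u) + sum(v).
    """
    M, N = Phi.shape
    c = np.ones(2 * N)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Recover a 2-sparse signal from 12 Gaussian measurements (N = 30).
rng = np.random.default_rng(2)
N, M = 30, 12
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[[3, 20]] = [2.0, -1.0]
x_hat = basis_pursuit(Phi, Phi @ x)
```

The solution is feasible (Φx̂ = y) and has *l*_{1} norm no larger than that of the true x, since x itself is feasible for the program.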

Generally speaking, if $\delta_{K}$ is very close to one, then it is possible that **y** preserves no information about **x** (i.e., $\mathbf{\Phi}\mathbf{x} = \mathbf{0}$, so that **x** lies in the null space of the sensing matrix $\mathbf{\Phi}$). In this case, it is nearly impossible to reconstruct the sparse signal with any greedy algorithm.

There are many kinds of sensing matrices $\mathbf{\Phi}$, mainly divided into deterministic matrices and learning matrices. Based on signal characteristics, a learning matrix can be continuously updated by a machine learning method. In practice, however, the commonly used sensing matrices include the Gaussian random matrix, the Fourier random matrix, and the Toeplitz matrix.
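For reference, two of the common random constructions mentioned above can be generated as follows (dimensions are illustrative; the 1/√M scaling keeps column norms near one):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
M, N = 16, 64

# Gaussian random matrix: i.i.d. entries, scaled so columns have near-unit norm.
Phi_gauss = rng.standard_normal((M, N)) / np.sqrt(M)

# Toeplitz matrix: constant along each diagonal, generated from a single
# random sequence (natural when measurements arise from a convolution).
seq = rng.standard_normal(M + N - 1)
Phi_toep = toeplitz(seq[N - 1:], seq[N - 1::-1]) / np.sqrt(M)
```

The Toeplitz construction needs only M + N − 1 random numbers instead of M·N, which matters in hardware implementations.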

##### 2.2. Sparse Signal Approximation Method

In this section, we give some notations and matrix operators to facilitate the summary of the proposed algorithm.

*Definition 1 (projection and residue). * Let $I \subset \{1, 2, \ldots, N\}$ be an index set and let $\mathbf{\Phi}_{I}$ denote the submatrix of $\mathbf{\Phi}$ consisting of the columns indexed by $I$. If $\mathbf{\Phi}_{I}$ has full column rank, then $\mathbf{\Phi}_{I}^{\dagger}$ is the Moore–Penrose pseudoinverse of $\mathbf{\Phi}_{I}$, and $(\cdot)^{T}$ represents the matrix transpose. The projection of the measurements $\mathbf{y}$ onto span($\mathbf{\Phi}_{I}$) is written as

$$\mathbf{y}_{p} = \operatorname{proj}\left(\mathbf{y}, \mathbf{\Phi}_{I}\right) = \mathbf{\Phi}_{I} \mathbf{\Phi}_{I}^{\dagger} \mathbf{y}, \tag{5}$$

where $\mathbf{\Phi}_{I}^{\dagger}$ is the pseudoinverse of the matrix $\mathbf{\Phi}_{I}$ and is defined as

$$\mathbf{\Phi}_{I}^{\dagger} = \left(\mathbf{\Phi}_{I}^{T} \mathbf{\Phi}_{I}\right)^{-1} \mathbf{\Phi}_{I}^{T}. \tag{6}$$

It can be seen from equation (5) that we can get the restoration of the sparse signal **x** on the support $I$ by using the following formulation:

$$\mathbf{x}_{I} = \mathbf{\Phi}_{I}^{\dagger} \mathbf{y}.$$

The residue of the projection of vector **y** is expressed as

$$\mathbf{y}_{r} = \operatorname{resid}\left(\mathbf{y}, \mathbf{\Phi}_{I}\right) = \mathbf{y} - \mathbf{y}_{p}. \tag{7}$$

Suppose that $\mathbf{\Phi}_{I}^{T} \mathbf{\Phi}_{I}$ is invertible. The projection of $\mathbf{y}$ onto the orthogonal complement of span($\mathbf{\Phi}_{I}$) is defined as

$$\mathbf{y}_{r} = \left(\mathbf{E} - \mathbf{\Phi}_{I} \mathbf{\Phi}_{I}^{\dagger}\right) \mathbf{y},$$

where **E** is the unit matrix. To illustrate the relationship between projection and residue in an iteration, we draw Figure 1 [17]; refer to it to better understand the definition.
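Definition 1 translates directly into NumPy; the sketch below uses `np.linalg.pinv` for the Moore–Penrose pseudoinverse (sizes are illustrative):

```python
import numpy as np

def projection(y, Phi_I):
    """Projection of y onto span(Phi_I): y_p = Phi_I @ pinv(Phi_I) @ y."""
    return Phi_I @ (np.linalg.pinv(Phi_I) @ y)

def residue(y, Phi_I):
    """Residue y_r = y - y_p, equal to (E - Phi_I @ pinv(Phi_I)) @ y."""
    return y - projection(y, Phi_I)

rng = np.random.default_rng(4)
Phi_I = rng.standard_normal((10, 5))   # full column rank with probability 1
y = rng.standard_normal(10)
y_p = projection(y, Phi_I)
y_r = residue(y, Phi_I)
x_I = np.linalg.pinv(Phi_I) @ y        # restoration of x on the support I
```

By construction y = y_p + y_r, and the residue is orthogonal to every column of Φ_I, which is the geometric fact the next paragraph relies on.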

As can be seen from Figure 1, the residue is perpendicular to the hyperplane span($\mathbf{\Phi}_{I}$), which is ensured by equation (7). This means that the residue is minimized in each iteration, which speeds up the convergence of the algorithm.