Abstract
Sparse signal reconstruction, as the main link of compressive sensing (CS) theory, has attracted extensive attention in recent years. The essence of sparse signal reconstruction is how to recover the original signal accurately and effectively from an underdetermined linear system equation (ULSE). For this problem, we propose a new algorithm called regularization reweighted smoothed norm minimization algorithm, which is simply called RRSL0 algorithm. Three innovations are made under the framework of this method: (1) a new smoothed function called compound inverse proportional function (CIPF) is proposed; (2) a new reweighted function is proposed; and (3) a mixed conjugate gradient (MCG) method is proposed. In this algorithm, the reweighted function and the new smoothed function are combined as the sparsity promoting objective, and the constraint condition is taken as a deviation term. Both of them constitute an unconstrained optimization problem under the regularization criterion and the MCG method constructed is used to optimize the problem and realize high-precision reconstruction of sparse signals under noise conditions. Sparse signal recovery experiments on both the simulated and real data show the proposed RRSL0 algorithm performs better than other popular approaches and achieves state-of-the-art performances in signal and image processing.
1. Introduction
CS [1, 2] has been successfully applied in a multitude of scientific fields, ranging from image processing tasks to radar to coding theory, making the potential impact of advancements in theory and practice rather large. In fact, various CS tasks eventually boil down to the sparse signal recovery problem in the following underdetermined linear system: where is a sensing matrix (also known as measurement matrix). is the sparse vector (or signal) to be solved. is the vector (or signal) of measurements. Moreover, the denotes the additive noise.
For solving the ULSE in (1), we try to recover the sparse signal from given . In this case, the sensing matrix contains more columns than rows, which means there would be more than one solution that satisfies the constraint ( is a very small constant). This makes the recovery of sparse signal an ill-posed problem. Luckily, since the target signal itself is sparse, the most straightforward method is to use its sparsity to improve the problem. That is, the problem of recovering the target signal can be converted into solving the -norm minimization problem as follows:
This rather wonderful transformation is actually supported by outstanding theory [3]. The theory points out that, under the condition of no noise, the sparsest solution is indeed a real signal if is sufficiently sparse and satisfies the restricted isometry property (RIP) [4]:where is the sparsity of signal . In (2), -norm is nonconvex, which leads to a NP-hard problem. In fact, there are two alternative approaches to solve this problem [5, 6]: (i)Greedy search(ii)Relaxation method for the
Greedy search requires known sparsity as a constraint and the main methods are the approximate algorithms based on greedy matching pursuit (GMP) algorithms, such as OMP [7], StOMP [8], ROMP [9], CoSaMP [10], GOMP [11], and SP [12] algorithms. The objective function of these algorithms is given by [6]where is the sparsity of signal as shown in (3).
Based on this, the characteristics of GMP algorithms can be summarized as follows [6]:(i)Sparsity is used as a prior information(ii)Least square error is employed as an iterative criterion
The main advantage of GMP algorithms is that it is simple to calculate, but its application range is limited due to its low reconstruction accuracy in the case of noise.
At present, relaxation method for the is a main method. Relaxation method is mainly divided into two categories: -norm minimization methods and smoothed -norm minimization methods. The representative algorithm of the former is BP algorithm [13], and the latter is SL0 algorithm. Both are optimized using an approximate or equivalent -norm objective function. Therefore, the effect of sparse signal recovery is similar to that of GMP algorithms.
The above sparse signal recovery methods focus more on signal recovery in the absence of noise; they work not well in noise case. However, sparse signal recovery under noise is a very realistic and inevitable problem. Fortunately, the regularization mechanism makes it possible to solve this problem. The regularization mechanism relaxes into the following unconstrained recovery problem [6]: where is the parameter that balances the trade-off between the deviation term and the sparsity regularizer . The sparse prior information is enforced by the regularizer . Proper is crucial to the successful recovery of sparse signal: it should support sparse solutions and ensure that problem can be effectively solved simultaneously.
For regularization, there are many sparsity regularizers for relaxing the -norm, among which the convex -norm [14, 15] and the nonconvex -norm to the -th power [16–18] are the main methods:(i)-norm:(1).(ii)-norm to the -th power:(1);(2);(3).
In the case of no noise, -norm is equivalent to -norm, and since -norm is the only norm with sparsity and convexity, the sparse solution can be approximated by convex optimization method. However, the equivalence between -norm and -norm is no longer valid in the case of noise, so the effect of using -norm to promote sparsity is not obvious in this case. Compared with -norm, the nonconvex - norm to the p-th power is closer to the -norm, so -norm minimization performs better than -norm minimization in recovering sparse signal. Based on the idea of approximation, a Gauss function is employed to approximate the -norm in [19]. Moreover, [20] uses the Gauss function as a sparsity regularizer. The approximation of -norm by Gauss function can be expressed as follows [6]:According to this equation, it is obvious thatWhen is a small enough positive value, the sparse regularizer almost equivalent to -norm, so it can promote sparsity. Furthermore, the Gauss function can be optimized by optimization methods because it is differentiable and smoothed. Based on this idea, a hyperbolic tangent (tanh) function is proposed in [21]:
The tanh function proposed in [21] is a smoothed function, and it performs better in approximating -norm than the Gauss function in [20]. Based on this, the tanh function has better performance in sparse signal recovery. Indeed, we confirmed this view by a large number of simulation experiments.
In this paper, we propose a CIPF function as new sparsity regularizer and show their effectiveness and advantages over other popular regularizers in promoting sparse solutions with both theoretical analyses and experimental evaluations.
This paper is organized as follows. Section 2 introduces the main contributions of the proposed RRSL0 algorithm. In Section 3, the steps of the RRSL0 algorithm and the selection of related parameters are described in detail. Section 4 provides the experimental results to evaluate the performance of the proposed method. The paper is finally concluded in Section 5.
2. Main Contributions of the Proposed RRSL0 Algorithm
In this paper, based on the in (5), we propose a new objective function, which is given byFrom the equation, we can see that there is an obvious difference compared with other normalized objective functions. The obvious difference is objective regularizer. In this paper, as an objective regularizer, we not only propose a smoothed function approximating norm but also propose a reweighted function to promote sparsity. This section focuses on the relevant contents of reweighted function and .
2.1. New Smoothed Function: CIPF
According to [22], some properties of the smoothed functions are summarized in the following.
Property. Let and define for any . The function has the Property, if(a) is real analytic on for some ;(b), , where is some constant;(c) is convex on ;(d);(e).
It follows immediately from that converges to -norm as ; i.e.,
Based on the , this paper proposes a new smoothed function model called CIPF, which satisfies the and better approximates -norm. The smoothed function model is given as
In (11), regulatory factor is a large constant (because it satisfies for any ). By experiments, the factor is determined to be 10, which is the good result of the simulation. represents a smoothed factor, and when it is smaller, it will make the proposed model closer to the -norm. Obviously,or approximately is satisfied. Letwhere for small values of , and the approximation tends to equality when .
Figure 1 shows the situation of CIPF model approximates the -norm. Obviously, the CIPF model makes a better approximation.

In conclusion, the merits of CIPF model can be summarized as follows:(i)It closely approximates -norm;(ii)It is simpler than Gauss and tanh function model.
These merits make it possible to reduce the computational complexity on the premise of ensuring the sparse signal reconstruction accuracy, which is of practical significance for sparse signal reconstruction.
2.2. New Reweighted Function
Candès et al. [23] proposed the reweighted -norm minimization method, which employs the reweighted norm to enhance the sparsity of the solution. Moreover, they provided an analytical result of the improvement in the sparsity recovery by incorporating reweighted function to the objective function. Pant et al. [24] applied another reweighted smoothed -norm minimization method, which uses a similar reweighted function to improve sparsity. The reweighted function can be summarized as follows:(i)Candès et al.: (ii)Pant et al.: , is a small enough positive constant.
From the two reweighted functions, we can find a phenomenon: a large signal entry is reweighted with a small value ; on the contrary, a small signal entry is reweighted with a large value . By analysis, the large force the solution to concentrate on the indices where is small, and by construction these correspond precisely to the indices where is nonzero.
Combined with the above idea, we propose a new reweighted function, which is given by
As for Candès et al., when signal entry is zero or close to zero, the value of will be very large, which is not suitable for computation by computer. Although Pant et al. noticed the problem and improved the reweighted function to avoid this problem, the constant depends on experience. Actually, the proposed reweighted function can avoid the two problems; therefore, the proposed reweighted function can play a better effect.
2.3. Representation of Solution Set in Nullspace Measurement Matrix
As for , it is well known that all of its solutions can be parameterized aswhere denotes particular solution of system equation, which can be described as . is a matrix, and its column vectors , , are made by orthonormal basis of nullspace measurement matrix . can be evaluated by singular-value decomposition or, more efficiently, decomposition of and . Adopting the solution can diminish the constraint of the optimization and the dimensions of space of solution are reduced from to , thereby reducing the complexity of computation.
3. New Algorithm for CS: RRSL0
3.1. RRSL0 Algorithm and Its Steps
As explained above, the proposed objective function can be eventually described aswhere is a differentiable smoothed accumulated function. Reweighted function . Let , ; therefore, we have According to (19), the gradient of the proposed objective function is written asCombining (20) and (21), the second derivative is expressed as
Solving problem of ULSE is to solve the optimization problem in (18). As for this problem, there are many methods, such split Bregman methods [25–27], FISTA [28], alternating direction methods [29], and gradient descent [30]. In order to reduce the computational complexity, this paper adopts conjugate gradient (CG) method to optimize the proposed objective function. There are two classical methods in CG: FR conjugate gradient (FR-CG) method [31] and PRP conjugate gradient (PRP-CG) method [32]. This paper combines FR-CG with PRP-CG, forming a mixed CG algorithm; here we called it MCG method. Based on this, the MCG method is employed to solve the optimization problem in this paper.
The problem firstly can be solved by using a sequential strategy as detailed in the next paragraph.
Given a small target value , and a sufficiently large initial value , after referring to the annealing mechanism in simulated annealing [33], this paper proposes a monotonically decreasing sequence , which is generated aswhere , can be a constant that is larger than 1, and is the maximum of iterations. Using such a monotonically decreasing sequence can avoid the case of too small leading to local optimum.
According to CG algorithm, the solution is updated aswhere the parameter can be given bythe parameter is given aswhere and , and the parameter is updated as
From above equations, we can see that is positive if is positive definite (PD). As shown in (22), is PD, and is PD, so is PD if is PD. To get the PD of , this paper makes the following corrections to the diagonal elements of matrix :where is a small positive constant with a value of . By this processing in (28), the matrix is PD, is PD as well, and hence the optimization direction of each component of keeps consistency.
The above is the optimization process analysis of MCG method. For MCG method, the direction of each iteration of this method is the combination of the last iteration direction and the negative gradient direction, which can overcome the sawtooth phenomenon caused by the steepest descent method.
According to the explanation above, we can conclude the steps of proposed RRSL0 algorithm, which is given in Algorithm 1. As for , it can be shown that function remains convex in the region where the largest magnitude of the component of is less than . As the algorithm starts at the original value , the above choice of ensures that the optimization starts in a convex region. This greatly facilitates the convergence of the RRSL0 algorithm.
Initialization: and | |
Step 1: Set ; | |
Step 2: decomposition , the equals to the columns of , so the initial value | |
; | |
Step 3: For | |
(1) Set , compute by equation (23); | |
(2) Set , and iterative termination threshold ; | |
(3) While | |
(a) Compute by equations (24), (25), (26), (27), (28); | |
(b) Compute ; | |
(4) Let , and compute ; | |
Step 4: Output . |
3.2. Selection of Parameters
The selection of parameters and will affect the performance of the RRSL0 algorithm; thus this paper discusses the selection of the two above parameters in this section.
3.2.1. Selection of Parameter
The choice of parameter is closely related to the noise level. If the noise level is known, the (DP) can be used to determine the parameter . However, in practice, the noise level cannot be measured accurately. In this case, it is necessary to use to select parameter , such as . Ref [34], etc. prove the rationality of under appropriate conditions, so this paper uses this to determine . The is given by Then we determine by .
3.2.2. Selection of Parameter
According to (23), descending sequence of is generated by (it is obtained through simplification of (23)). Parameter and parameter should be appropriately selected. The selection of and is discussed below.
For initial value , here let ; in order to make the algorithm converge quickly, let parameter satisfies
From the equation, we can see constant satisfies ; thus ; here we define constant as . Hence .
As for final value , when , . That is, the smaller , the more can reflect the sparsity of signal , but at the same time it is also more sensitive to noise; therefore, the value should not be too small. Combining [19], we choose .
4. Numerical Simulation and Analysis
The numerical simulation platform is MATLAB 2017b, which is installed on the WINDOWS 10, 64-bit operating system. The CPU of simulation computer is Intel (R) Core (TM) i5-3230M, and the frequency is 2.6 GHz. In this section, the performance of RRSL0 algorithm is verified by signal and image recovery in the noiseless case and noise case.
Here some state-of-the-art algorithms are selected for comparison. The parameters are selected to obtain best performance for each algorithm: for BPDN algorithm [35], ; for SL0 algorithm, , , and scale factors are set as , ; for L2-SL0 algorithm, , , , and ; for -RLS algorithm, , , , , , and ; and for RRSL0 algorithm, smoothed factor , , and iterations . All experiments are based on the 100 trials.
4.1. Comparison of Reconstruction Performance
For signal recovery under no noise conditions, we evaluate performance of algorithms by (RSR), (NMSE), and (CRT). RSR is defined as the ratio of the number of successful experiments to the total number of experiments, and if , the recovery is considered to be a success. NMSE is defined as . CRT is measured with and . The case of signal recovery under noise is considered by NMSE and (SNR). SNR is defined as .
4.1.1. Signal Recovery in Noiseless Case
In this section, the SL0, BPDN, L2-SL0, and Lp-RLS are used for comparison. Here we fix and and the sparsity , for Figure 2 and let , , for Table 1. For every experiment, we generate a pair of : is a random Gauss matrix with normalized rows; the entries of the sparse signal are generated according to the Gaussian distribution .

(a) RSR of recovery signal

(b) NMSE of recovery signal
Figure 2(a) shows the RSR of all algorithms. When is large enough and continues to increase, the RSR of all algorithms tends to decrease until they drop to . In fact, the speed of changes in the RSR reflects the of the algorithm. Specifically, the slower the RSR changes, the better the of the algorithm. Obviously, the RRSL0 algorithm has a more slowly changing trend than other comparison algorithms. By comparison, SL0 is superior to BPDN and L2-SL0, but inferior to -RLS and RRSL0. BPDN, L2-SL0, -RLS, and RRSL0 are the unconstrained optimization algorithms. When recovering signal in noiseless case, the deviation term is equivalent to the error between original signal and recovered signal, which interferes with the of the algorithm. Luckily, the -RLS adopts FR-CG algorithm to optimize, which improves the stability of the optimized solution. The proposed RRSL0 algorithm applies MCG algorithm to enhance the reconstruction performance.
The NMSE of all algorithms is shown in Figure 2(b). NMSE reflects the signal reconstruction accuracy, and the lower the NMSE, the higher the accuracy of the reconstructed signal. As shown in this figure, all algorithms perform well when the sparsity is lower than 29. NMSE of BDPN increases fastest, followed by L2-SL0, SL0, -RLS, and proposed RRSL0. When sparsity is increasing, L2-SL0 does not perform well, while -RLS and RRSL0 perform well. This result indirectly verified that MCG and FR-CG method are superior to steepest descent method in SL0 and L2-SL0; furthermore, the term has a relatively large influence on the recovery results. In addition, the experiment also verified that RRSL0 performs better than -RLS and other algorithms.
Table 1 shows the CRT of all algorithms. The changes according to a given sequence . From the table, for any , SL0 has the shortest computation time, followed by RRSL0, -RLS, and L2-SL0, and BPDN takes the longest computation time. BPDN algorithm is generally implemented by quadratic programming method, and the computational complexity of this method is very high, thus resulting in a large increase in the overall computation time of the algorithm. Furthermore, In L2-SL0, the step factor needs to be obtained by a one-dimensional linear search method, while -RLS and RRSL0 do not. Compared with -RLS, RRSL0 is more prominent in the decrease of computation time. In order to verify the difference in computation time between the two algorithms, the convergence comparison of the two algorithms is shown in Figure 3. From this figure, we can draw that the convergence rate of RRSL0 is much faster than -RLS.

4.1.2. Signal Recovery in Noise Case
In this section, we discuss signal recovery performance in noise case. We add noise to the measurement vector ; moreover, is randomly formed and follows the Gauss distribution of . In order to analyze the antinoise performance of the RRSL0 algorithm more closely to the real situation, we constructed a certain signal as an experimental object in the experiments in this section. The certain signal is given bywhere , , , and . Hz, Hz, Hz, and Hz. Here is a sequence with , is sampling interval valued . is sampling frequency valued 800Hz. The object that needs to be reconstructed can be expressed aswhere , is the basis (here called sparse basis matrix) given by , is a unit matrix, and is a sparse signal in frequency domain obtained by the of . Moreover, is a random matrix generated by Gauss distribution, sensing matrix . . Here, let and . Therefore, the sparse signal can be recovered from given by CS recovery methods, and then the original signal can be obtained by .
Figure 4 shows the signal recovery effect by different algorithms under noise intensity = 0.2. Meanwhile, time-frequency characteristics are shown in Figure 5. Obviously, BPDN and SL0 perform not well while L2-SL0, -RLS, and proposed RRSL0 perform quite well. This verifies that the regularization mechanism has good antinoise effect. The performance of each algorithm under different noise intensities is shown in Table 2. When , the SL0 outperforms other algorithms, but with the increase of , the effect of SL0 is getting worse and worse. This result further illustrates that the traditional constrained sparse recovery algorithm does not have the performance of antinoise. For BPDN, L2-SL0, -RLS, and RRSL0, they all applied regularization mechanism, and they are indeed superior to SL0 in noise case. Therefore, the proposed RRSL0 in this paper has the best antinoise performance.

(a)

(b)

(c)

(d)

(e)

(a)

(b)

(c)

(d)

(e)

(f)
4.2. Comparison of Image Recovery Performance
Real images are considered to be approximately sparse under some proper basis, such as the Discrete Cosine Transform (DCT) basis and Discrete Wavelet Transform (DWT) basis. Here we choose DWT basis to recover these images. We compare the recovery performances based on the 2 real images in Figure 6: Boat and Barbara, and the size of these images is . The compression ratio (CR, it is defined as ) is 0.5, and noise equals 0.01. We still choose SL0, BPDN, L2-SL0, and -RLS to make comparisons. For image recovery, the object of image processing is given byHere , , and are matrices; among these, , . In order to meet the basic requirements of CS, we make the following processing:where , , is the column vector of . Therefore we can process images by CS.

(a) Original Boat

(b) Original Barbara
For performance of image recovery, we valuate it by (PSNR) and (SSIM). PSNR is defined aswhere , and SSIM is defined as
Among this, is the mean of image , is the mean of image , is the variance of image , is the variance of image , and is covariance between image and image . Parameters and , where , and is dynamic range of pixel values. Range of SSIM is ; when these two images are the same, the SSIM equals to 1.
Figure 6 shows the original images. Figure 7 shows the recovery effect of and with noise intensity and . In the , the recovered images by SL0 and BPDN have the obvious water ripples while recovered images by other algorithms have no such water ripples. Similarly, for , recovered images by SL0 and BPDN are blurred compared with recovered images by other algorithms. L2-SL0, -RLS, and RRSL0 algorithms are also effective for noisy image recovery. For L2-SL0, -RLS, and RRSL0 algorithms, their recovery effects are very similar. In order to further analyze the advantages and disadvantages of these algorithms, we analyze PSNR and SSIM of the images recovered by these algorithms and results are shown in Tables 3–6. By observation and analysis, -RLS performs better than L2-SL0 on the whole (except SSIM when , because the robustness of -RLS is not strong); at the same time, RRSL0 outperforms -RLS. Hence, the RRSL0 proposed by this paper is superior to other selected algorithms in image processing.

(a) Recovered Boat with noise intensity

(b) Recovered Barbara with noise intensity

(c) Recovered Boat with noise intensity

(d) Recovered Barbara with noise intensity
5. Conclusions
In this paper, we propose the RRSL0 algorithm to recover sparse signal from given in noise case. For the RRSL0 algorithm, it is constructed under the framework of regularization, in which the constraint condition is the deviation term, and the regularization term is replaced by a reweighted smoothed function . As a key part of the RRSL0 algorithm, the reweighted smoothed function is to promote sparsity and provide guarantee for robust and accurate signal recovery. Furthermore, We consider the value of and the initial value and final value . Combined with previous literatures, we used -method to determine the value of and deduced the method of initial value and final value of . Sparse signal recovery experiments on both the simulated signal and real images show the proposed RRSL0 algorithm performs better than the or regularization methods and classical regularization methods. In addition, we would also like to apply the proposed algorithm to other CS applications such as the RPCA [36, 37] and SAR imaging [38].
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors’ Contributions
All authors have made great contributions to the work. Jianhong Xiang, Huihui Yue, and Xiangjun Yin conceived and designed the experiments; Jianhong Xiang and Huihui Yue performed the experiments and analyzed the data; Xiangjun Yin and Linyu Wang gave insightful suggestions for the work; Jianhong Xiang, Huihui Yue, and Xiangjun Yin wrote the paper.
Acknowledgments
This paper is supported by the National Key Laboratory of Communication Anti-jamming Technology.