Abstract

The preliminary atom set contains redundant atoms in the stochastic gradient matching pursuit algorithm, which affects the accuracy of the signal reconstruction and increases the computational complexity. To overcome this problem, an improved method is proposed. Firstly, a limited soft-threshold selection strategy is used to select the new atoms from the preliminary atom set, reducing the redundancy of the preliminary atom set. Secondly, before finding the least squares solution, it is determined whether the number of columns of the candidate atomic matrix is smaller than its number of rows. If the condition is satisfied, the least squares solution is calculated; otherwise, the loop is exited. Finally, if the length of the candidate atomic index set is less than the sparsity level, the current candidate atomic index set is taken as the support atomic index set; otherwise, the support atomic index set is determined by the least squares solution. Simulation results indicate that the proposed method achieves a higher reconstruction probability than other methods and a shorter running time than the stochastic gradient matching pursuit algorithm.

1. Introduction

Compressed sensing (CS) [1, 2] theory has three core problems: the sparse representation of signals, the design of the measurement matrix, and the design of reconstruction algorithms. The reconstruction algorithm directly determines the accuracy of the recovered signal and the convergence rate, and hence whether the theory is feasible in practice. The recovery problem is to reconstruct the high-dimensional original signal from low-dimensional measurement vectors. At present, many reconstruction algorithms have been proposed to recover the signal; they fall into convex optimization methods, combinational methods, and greedy pursuit methods. Convex optimization methods approximate the signal by transforming the nonconvex problem into a convex one; they include basis pursuit (BP) [3], the gradient projection for sparse reconstruction (GPSR) algorithm [4], iterative thresholding (IT) [5], the interior-point method [6], Bregman iteration (BI) [7], and total variation (TV) [8]. Although convex optimization algorithms require fewer observations, their high complexity makes them unsuitable for practical applications.

Combinational methods include Fourier sampling [9, 10], chaining pursuit (CP) [11], and heavy hitters on steroids pursuit (HHSP) [12]. Combinational methods are low in complexity; however, the accuracy of the recovered signal is not as good as that of convex optimization algorithms. Greedy algorithms have many advantages, such as simple structure, fast convergence, and low complexity, making them the first choice for recovery algorithms.

To date, many greedy pursuit algorithms have been proposed, beginning with the matching pursuit (MP) algorithm [13]. Based on MP, the orthogonal matching pursuit (OMP) algorithm [14] was proposed, optimizing MP via orthogonalization of the atoms of the support set. However, the OMP algorithm selects only one support-set atom in each iteration, so its efficiency is low. Successors include the regularized OMP (ROMP) [15], subspace pursuit (SP) [16], compressive sampling matching pursuit (CoSaMP) [17, 18], and stagewise OMP (StOMP) [19] algorithms. The ROMP algorithm realizes effective atom selection via a regularization rule, improving on the speed of OMP. The SP and CoSaMP algorithms are similar: both employ a backtracking strategy. The difference between SP and CoSaMP is the number of atoms selected from the measurement matrix to compose the preliminary atomic set in each iteration: SP selects s atoms, while CoSaMP selects 2s atoms. The StOMP algorithm selects multiple atoms, or columns of the measurement matrix, in each iteration via a threshold parameter. The greedy algorithms mentioned above all require the sparsity information of the signal, and their atom selection is inflexible, which affects the convergence speed, robustness, and reconstruction performance of the algorithms.

Because traditional greedy pursuit algorithms need to compute the inverse matrix of the sensing matrix, they require a significant amount of computation time and storage space, resulting in low reconstruction efficiency. In recent years, researchers have proposed fast versions of the greedy algorithms that avoid computing the inverse matrix [20]. The gradient pursuit (GP) algorithm uses the gradient idea of unconstrained optimization methods to replace the computation of the inverse matrix, reducing the computational complexity and storage space of the traditional greedy algorithms [21]. However, the GP algorithm converges slowly and has low efficiency, and it cannot solve large-scale data recovery problems. To address these issues, the conjugate gradient pursuit (CGP) [22] and approximate conjugate gradient pursuit (ACGP) [23] algorithms were proposed, respectively. Although the CGP and ACGP algorithms effectively reduce the computational complexity and storage space of traditional greedy algorithms, their reconstruction performance still needs improvement. Building on the GP algorithm, the stagewise weak gradient pursuit (SWGP) algorithm [24] was proposed to improve the reconstruction performance and convergence speed of GP by introducing a conjugate direction and a weak selection strategy. Motivated by stochastic gradient descent methods, the stochastic gradient matching pursuit (StoGradMP) algorithm [25] was recently proposed for optimization problems with sparsity constraints. The fixed-number atomic selection of the StoGradMP algorithm (that is, selecting 2s atoms at the preliminary stage of each iteration) leads to redundant atoms in the preliminary atomic set. When this set is joined with the candidate atomic set, it reduces the accuracy of the least squares solution and makes the support atomic set estimation inaccurate, which degrades the precision of the signal reconstruction and increases the computational complexity of the algorithm. In this study, we use a limited soft-threshold selection strategy to perform a second selection on the preliminary atom set after the preliminary stage, which improves the reconstruction accuracy. The combination with reliability verification conditions ensures the correctness and effectiveness of the proposed method.

2. Compressed Sensing Theory

The recent work in the area of compressed sensing [26] demonstrated that it is possible to algorithmically recover sparse (and, more generally, compressible) signals from incomplete observations. The simplest model is an $N$-dimensional signal $x$ with a small number of nonzero entries under noiseless conditions:

$$\|x\|_0 = s \ll N, \tag{1}$$

where $\|x\|_0$ and $s$ are the number of nonzero entries and the sparsity level of the signal, respectively. Such signals are called s-sparse. Any signal in $\mathbb{R}^N$ can be represented in terms of $N$ basis vectors $\{\psi_i\}_{i=1}^{N}$. For simplicity, we assume that the basis is orthonormal. Forming the $N \times N$ basis matrix $\Psi = [\psi_1, \psi_2, \ldots, \psi_N]$ by stacking the vectors $\psi_i$ as columns, we can express any signal $x$ as

$$x = \sum_{i=1}^{N} \alpha_i \psi_i = \Psi \alpha, \tag{2}$$

where $\alpha$ is the $N \times 1$ column vector of projection coefficients and $\alpha_i = \langle x, \psi_i \rangle$ is the $i$-th projection coefficient. Obviously, $x$ and $\alpha$ are equivalent representations of the same signal in different domains. That is, $x$ and $\alpha$ are the signal in the time domain and the $\Psi$ domain, respectively.

When $x$ is s-sparse in the time domain, let $\Psi = I$, where $I$ is the $N \times N$ identity matrix, so that $x = \alpha$. We use an $M \times N$ ($M \ll N$) measurement matrix $\Phi$ to make a linear measurement of the projection coefficient vector $\alpha = x$, obtaining an $M \times 1$ observation vector $y$, expressed as

$$y = \Phi x, \tag{3}$$

where Equation (3) is a linear projection of the original signal on $\Phi$. Note that the measurement process is nonadaptive; that is, $\Phi$ does not depend in any way on the signal $x$. Clearly, the dimension $M$ of $y$ is much lower than the dimension $N$ of $x$. That is, this problem is underdetermined: Equation (3) has infinitely many solutions, and it is difficult to recover the signal directly from the observation vector $y$. However, since the original signal is $s$-sparse, if Equation (3) satisfies certain conditions, the signal can be recovered by solving the $\ell_0$-minimization problem:

$$\hat{x} = \arg\min_{x} \|x\|_0 \quad \text{s.t.} \quad y = \Phi x, \tag{4}$$

where $\|\cdot\|_0$ is the $\ell_0$-norm of the vector, representing the number of nonzero entries. Candes and Tao demonstrated that if the s-sparse signal is to be accurately recovered, the number of measurements $M$ (the dimension of the observation vector $y$) must satisfy $M \geq C\, s \log(N/s)$ for some constant $C$, and the measurement matrix $\Phi$ must satisfy the restricted isometry property (RIP) [27, 28].
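To make the measurement model concrete, the following minimal Python sketch generates an s-sparse signal and its Gaussian measurements as in Equation (3); the dimensions N = 256, M = 64, and s = 8 are illustrative choices, not values used in this paper:

import numpy as np

rng = np.random.default_rng(0)
N, M, s = 256, 64, 8                      # illustrative: ambient dim, measurements, sparsity

# s-sparse signal: s nonzero entries at random positions, Equation (1)
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian random measurement matrix, which satisfies the RIP with high probability
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                               # observation vector, Equation (3)
print(y.shape)                            # (64,): far fewer observations than unknowns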

When $x$ is not s-sparse in the time domain, Equation (4) cannot be used directly in the signal recovery process. The signal can be sparsely represented on the sparse basis matrix $\Psi$. Combining Equations (2) and (3), we obtain

$$y = \Phi x = \Phi \Psi \alpha = \Theta \alpha, \tag{5}$$

where $\Theta = \Phi \Psi$ is the sensing matrix [29]. According to [30], the equivalent condition of the RIP is that the measurement matrix $\Phi$ is not correlated with the sparse basis matrix $\Psi$. Note that if the sensing matrix $\Theta$ also satisfies the RIP, the recovered signal (or projection coefficient vector) can be obtained by solving an optimal $\ell_0$-norm problem similar to Equation (4):

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad y = \Theta \alpha. \tag{6}$$

From Equation (5), we see that $\Psi$ is fixed, so for $\Theta$ to also satisfy the RIP, the measurement matrix $\Phi$ must meet certain conditions. It is shown in [31] that when the measurement matrix is a Gaussian random matrix, the sensing matrix $\Theta$ satisfies the RIP condition with high probability.

However, in most scenarios, the signal contains noise. In this case, the measurement process can be expressed as

$$y = \Theta \alpha + e, \tag{7}$$

where the matrix $\Theta = \Phi\Psi$ is the sensing matrix and $e$ is the $M$-dimensional noise vector. $\alpha$ is the s-sparse representation of the signal in the $\Psi$ domain. In this study, for simplicity, we assume that the signal $x$ is s-sparse in the time domain, that is, $\Psi = I$ and $\Theta = \Phi$. Therefore, Equation (7) can be written as

$$y = \Phi x + e, \tag{8}$$

and we minimize the following formula to recover $x$:

$$\hat{x} = \arg\min_{x} \|y - \Phi x\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \leq s, \tag{9}$$

where $s$ controls the sparsity of the solution to Equation (9).

To analyze Equation (8), we combine it with Equation (2), where $\alpha_i$ are the projection coefficients of the signal $x$. $x$ is considered sparse with respect to the atom set $\mathcal{D} = \{\psi_i\}_{i=1}^{N}$ if $\|x\|_{0,\mathcal{D}}$ is relatively small compared to the ambient dimension $N$. Therefore, we can express the optimization (9) in the following form:

$$\min_{x} F(x) \quad \text{s.t.} \quad \|x\|_{0,\mathcal{D}} \leq s, \tag{10}$$

where $F(x) = \frac{1}{2M}\|y - \Phi x\|_2^2$ is a smooth function that is in general nonconvex; $\|x\|_{0,\mathcal{D}}$ is defined as the norm that captures the sparsity level of $x$ with respect to $\mathcal{D}$. In particular, $\|x\|_{0,\mathcal{D}}$ is the smallest number of atoms in $\mathcal{D}$ such that $x$ can be represented as

$$x = \sum_{i \in T} \alpha_i \psi_i, \qquad |T| = \|x\|_{0,\mathcal{D}}.$$

For sparse signal recovery, the set $\mathcal{D}$ consists of $N$ basis vectors, each of size $N$, in the Euclidean space. This problem can be seen as a special case of Equation (10), with $\mathcal{D} = \{\psi_i\}_{i=1}^{N}$ and $F(x) = \frac{1}{2M}\|y - \Phi x\|_2^2$. We decompose the observation vector $y$ into nonoverlapping vectors of size $b$ and denote them as $y_{b_i}$. $\Phi_{b_i}$ is the corresponding $b \times N$ submatrix of the measurement matrix $\Phi$. According to Equation (9), the objective function is $F(x) = \frac{1}{2M}\|y - \Phi x\|_2^2$. We then denote $F(x)$ as

$$F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x), \qquad f_i(x) = \frac{1}{2b} \|y_{b_i} - \Phi_{b_i} x\|_2^2, \tag{11}$$

where $n = M/b$, which is a positive integer. Each function $f_i(x)$ represents a collection (or block) of observations of size $b$. Here, we split the function $F(x)$ into multiple subfunctions, or equivalently block the measurement matrix $\Phi$ into the submatrices $\Phi_{b_i}$, which is beneficial for the calculation of the gradient.
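As a sketch of this decomposition (the uniform contiguous row blocking and the function names are our assumptions for illustration), the objective of Equation (9) can be split into the n = M/b block subfunctions of Equation (11) as follows:

import numpy as np

def block_subfunction(Phi, y, x, i, b):
    # f_i(x) = (1/(2b)) * ||y_bi - Phi_bi x||_2^2 for the i-th nonoverlapping row block
    rows = slice(i * b, (i + 1) * b)
    r = y[rows] - Phi[rows] @ x
    return 0.5 / b * (r @ r)

def full_objective(Phi, y, x, b):
    # F(x) as the average of the n = M/b subfunctions, Equation (11)
    n = Phi.shape[0] // b
    return np.mean([block_subfunction(Phi, y, x, i, b) for i in range(n)])

Here the block index i runs from 0 to n − 1 rather than from 1 to n, following Python indexing.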

3. StoGradMP Algorithm

CoSaMP [17] is a popular algorithm for recovering a sparse signal from its linear measurements. The idea of the CoSaMP algorithm was generalized to produce the GradMP algorithm, which solves a broader class of sparsity-constrained problems. In this section, we describe the stochastic version of GradMP, the StoGradMP algorithm [25], in which each iteration requires only the evaluation of the gradient of one subfunction.

The StoGradMP algorithm is described in Algorithm 1 and consists of the following steps in each iteration:

Randomize. The measurement matrix $\Phi$ is subject to random block processing, to determine the region of the submatrix $\Phi_{b_i}$. Then, according to Equation (11), the subfunction $f_i(x)$ is calculated.

Proxy. Compute the gradient $g^k$ of $f_i(x)$. Here, the gradient is an $N \times 1$ column vector.

Identify. Select the column indexes of the submatrix corresponding to the $2s$ maximum components of the gradient vector, forming a preliminary index set $\Gamma$.

Merge. Merge the preliminary index set $\Gamma$ and the support index set $\Lambda^{k-1}$ of the previous iteration to form a candidate index set $\tilde{\Gamma}$.

Estimation. Estimate the signal over the candidate index set by a suboptimization method, namely the least squares method. The estimate $z^k$ is a transition signal.

Prune. Select the column indexes of the measurement matrix corresponding to the $s$ maximum components of the signal estimation vector, forming a support index set $\Lambda^k$.

Update. Update the signal estimation $x^k$ and the residual $r^k$.

Check. When the residual is less than the tolerance error $\varepsilon$ of the iteration, stop iterating. If the loop index $k$ is greater than the maximum number of iterations $k_{\max}$, the algorithm also ends and the approximation of the signal is output. Otherwise, continue iterating until a halting condition is met.

Input:
 Sparsity level s
 Sensing matrix Φ
 Observation vector y
 Block size b
 Tolerance ε used to exit loop
 Maximum number of iterations k_max
Initialization parameter:
x⁰ = 0 {initialize signal approximation}
k = 0 {loop index}
done = 0 {while loop flag}
Γ = ∅ {empty preliminary index set}
Γ̃ = ∅ {empty candidate index set}
Λ = ∅ {empty support index set}
n = M/b {number of blocks}
While (∼done)
  k = k + 1
(1) Randomize
  select block index i uniformly at random from {1, …, n}
  Φ_b = Φ(B_i, :), y_b = y(B_i)
  f_i(x) = (1/2b)‖y_b − Φ_b x‖₂²
(2) Computation of gradient
  gᵏ = ∇f_i(xᵏ⁻¹) = (1/b)Φ_bᵀ(Φ_b xᵏ⁻¹ − y_b)
(3) Identify the large components
  Γ = indexes of the 2s largest components of |gᵏ|
(4) Merge to update candidate index set
  Γ̃ = Γ ∪ Λ
(5) Signal estimation by the least squares method
  zᵏ = arg min_z ‖y − Φ_Γ̃ z‖₂
(6) Prune to obtain current support index set
  Λ = indexes of the s largest components of |zᵏ|
(7) Update
  xᵏ = zᵏ restricted to Λ (zero elsewhere)
  rᵏ = y − Φxᵏ
(8) Check the iteration condition
  If ‖rᵏ‖₂ ≤ ε or k ≥ k_max
   done = 1 {quit iteration}
  end
end
Output: xᵏ (s-sparse approximation of signal x)
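For reference, a compact NumPy sketch of Algorithm 1 follows. It is our reading of the steps above under a uniform-block convention (b assumed to divide M), not the authors' code:

import numpy as np

def stograd_mp(Phi, y, s, b, tol=1e-6, k_max=500, seed=0):
    # Sketch of StoGradMP: recover an s-sparse x from y = Phi @ x.
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    n = M // b                                   # number of blocks (assumes b divides M)
    x = np.zeros(N)
    Lam = np.array([], dtype=int)                # support index set
    for k in range(k_max):
        i = rng.integers(n)                      # randomize: draw one block
        rows = slice(i * b, (i + 1) * b)
        Phi_b, y_b = Phi[rows], y[rows]
        g = Phi_b.T @ (Phi_b @ x - y_b) / b      # proxy: gradient of f_i
        Gamma = np.argsort(np.abs(g))[-2 * s:]   # identify the 2s largest components
        cand = np.union1d(Gamma, Lam)            # merge with previous support
        z = np.zeros(N)                          # estimation by least squares
        z[cand], *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        Lam = np.argsort(np.abs(z))[-s:]         # prune to the s largest components
        x = np.zeros(N)
        x[Lam] = z[Lam]                          # update signal estimation
        if np.linalg.norm(y - Phi @ x) <= tol:   # check halting condition
            break
    return x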

4. Improved StoGradMP Algorithm

The StoGradMP algorithm selects $2s$ atoms in each iteration, where $2s$ is a fixed number. This choice of atoms is inflexible and increases the redundancy of the preliminary atom set, which affects the reconstruction accuracy and computational speed. To solve this problem, we use the limited soft-threshold selection strategy to select atoms from the preliminary atom set.

Firstly, according to Equation (11), we randomly block the measurement matrix $\Phi$ to obtain a stochastic block matrix (or submatrix) $\Phi_{b_i}$, which can be expressed as

$$\Phi_{b_i} = \Phi(B_i, :), \quad B_i = \{(i-1)b + 1, \ldots, \min(ib, M)\}, \quad i = \operatorname{floor}(n \cdot \operatorname{rand}(1)) + 1, \tag{12}$$

where $n = \operatorname{ceil}(M/b)$ is the number of blocks of the measurement matrix, $i \in \{1, 2, \ldots, n\}$, and $b$ is the number of rows of the block matrix. $\operatorname{floor}(\cdot)$ and $\operatorname{rand}(\cdot)$ generate the maximum integer smaller than the argument and uniformly distributed pseudorandom numbers, respectively. $\operatorname{ceil}(\cdot)$ obtains the minimum integer greater than the argument. $B_i$ is the row index set of the measurement matrix $\Phi$. From Equation (12), we know that the area of the block matrix is randomly determined. For simplicity, we express $\Phi_{b_i}$ and $y_{b_i}$ as $\Phi_b$ and $y_b$, respectively. Next, we compute the subfunction as follows:

$$f_i(x) = \frac{1}{2b} \|y_b - \Phi_b x\|_2^2, \tag{13}$$

where $i \in \{1, \ldots, n\}$ is drawn anew at each iteration $k$. $\Phi_b$ is the submatrix of the measurement matrix $\Phi$, which is stochastically determined and of size $b \times N$. The subfunctions $f_i(x)$ are also stochastically determined, and $F(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$. Here, $f_i(x)$ is a subfunction of $F(x)$.
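A small Python sketch of Equations (12) and (13) follows; the contiguous-block convention is our assumption:

import numpy as np

def random_block(Phi, y, b, rng):
    # Randomly determine the row index set B_i of Equation (12) and return the
    # stochastic submatrix and the corresponding observation block.
    M = Phi.shape[0]
    n = int(np.ceil(M / b))                    # number of blocks, ceil(M/b)
    i = int(np.floor(n * rng.random())) + 1    # uniform block index in 1..n
    rows = slice((i - 1) * b, min(i * b, M))   # row index set B_i
    return Phi[rows], y[rows]

def subfunction_value(Phi_b, y_b, x):
    # f_i(x) = (1/(2b)) * ||y_b - Phi_b x||_2^2, Equation (13),
    # with b taken as the actual block length
    r = y_b - Phi_b @ x
    return 0.5 / Phi_b.shape[0] * (r @ r)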

After determining the area of the submatrix $\Phi_b$ and the subfunction $f_i(x)$, we calculate the gradient of the subfunction, which is expressed as

$$g^{k} = \nabla f_i(x^{k-1}) = \frac{1}{b} \Phi_b^{T} \left(\Phi_b x^{k-1} - y_b\right), \tag{14}$$

where $g^{k}$ is the gradient of the subfunction at the $k$-th iteration, and the gradient is an $N \times 1$ column vector. $\nabla f_i(\cdot)$ and $\Phi_b^{T}$ represent the derivative of the subfunction and the transpose of the submeasurement matrix $\Phi_b$, respectively. $x^{k-1}$ is the approximation of the sparse signal at the $(k-1)$-th iteration.

In the atomic preliminary selection stage of the stochastic algorithm, we obtain the gradient vector of the function through the derivation of the randomly determined subfunction, and then select the $2s$ largest gradient values from the gradient vector $g^k$, thereby determining the column indexes of the measurement matrix corresponding to those values of the gradient vector, and form a preliminary atom index set as

$$\Gamma = \operatorname{Max\_index}\left(|g^{k}|, 2s\right), \tag{15}$$

where $|g^{k}|$ is the absolute value of the gradient vector $g^{k}$, $k$ is the number of iterations, and $\operatorname{Max\_index}(|g^{k}|, 2s)$ returns the atomic (or column) indexes of the measurement matrix corresponding to the $2s$ maximal values of $|g^{k}|$, which form the preliminary atom index set $\Gamma$.
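A minimal sketch of Equations (14) and (15) (the function and variable names are ours):

import numpy as np

def preliminary_index_set(Phi_b, y_b, x, s):
    # Equation (14): gradient of the block subfunction at the current estimate
    b = Phi_b.shape[0]
    g = Phi_b.T @ (Phi_b @ x - y_b) / b
    # Equation (15): column indexes of the 2s largest-magnitude gradient entries
    Gamma = np.argsort(np.abs(g))[-2 * s:]
    return g, Gamma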

After the preliminary atomic selection stage, the preliminary atomic set contains redundant atoms, which reduce the accuracy of the least squares solution of the signal and make the support atomic set estimation inaccurate. This eventually affects the precision of the signal reconstruction and increases the computational complexity of the algorithm. To improve the reconstruction accuracy, we use the limited soft-threshold selection strategy to realize a second selection of the preliminary atom set.

In the atomic index set $\Gamma$, the atoms corresponding to the indexes are expressed as $\varphi_j$, $j \in \Gamma$. We select the atoms of the atomic set whose gradient magnitudes are larger than the threshold, to form a new atomic set; the indexes of the new atomic set form the new atomic index set $\Gamma_t$. This process is described as

$$\Gamma_t = \operatorname{find}\left(|g^{k}_{\Gamma}| > t \cdot g^{k}_{\max}\right), \qquad g^{k}_{\max} = \max_{j \in \Gamma} |g^{k}_{j}|, \tag{16}$$

where $g^{k}_{\max}$ is the maximum gradient value at the $k$-th iteration, $|g^{k}_{\Gamma}|$ represents the largest $2s$ gradient values from the absolute value of the gradient vector, corresponding to the atomic index set $\Gamma$, and $t$ is the threshold, whose value range is generally $(0, 1]$. It should be noted that if the threshold is greater than 1, the soft-threshold selection strategy fails; that is, the selection from the preliminary atomic index set cannot be completed. Meanwhile, if the value of the threshold is too small, the redundant atoms cannot be effectively eliminated from the preliminary atomic set. Equation (16) is the soft-threshold selection strategy: $\operatorname{find}(\cdot)$ searches for the indexes that satisfy the soft-threshold condition and forms the new atomic index set $\Gamma_t$.
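The limited soft-threshold selection of Equation (16) can be sketched as follows (names are illustrative; the non-strict comparison is our choice so that at least the largest atom always survives):

import numpy as np

def soft_threshold_select(g, Gamma, t):
    # Keep only the atoms of Gamma whose gradient magnitude is at least t times
    # the maximum gradient magnitude over Gamma, Equation (16).
    mags = np.abs(g[Gamma])
    keep = mags >= t * mags.max()        # find(|g_Gamma| >= t * g_max)
    return np.asarray(Gamma)[keep]

For example, with t = 0.6, only atoms whose gradient magnitude is at least 60% of the largest one survive the second selection, which is how the redundancy of the preliminary atom set is reduced.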

After the soft-threshold selection strategy is completed, we merge the new atomic set and the support set to update the candidate atom set, which can be expressed as

$$\tilde{\Gamma}^{k} = \Gamma_t \cup \Lambda^{k-1}, \tag{17}$$

where $\tilde{\Gamma}^{k}$, $\Lambda^{k-1}$, and $\Gamma_t$ represent the candidate atomic index set, the support atomic index set, and the new preliminary atomic index set, respectively. $\Phi_{\tilde{\Gamma}^{k}}$ is the candidate atomic set at the $k$-th iteration, $\Phi_{\Lambda^{k-1}}$ is the support atomic set at the previous iteration, and $\Phi_{\Gamma_t}$ is the new atomic set corresponding to the atomic index set $\Gamma_t$ at the $k$-th iteration.

Before solving the suboptimization problem, we must ensure that the number of rows is greater than the number of columns in the candidate atomic matrix $\Phi_{\tilde{\Gamma}^{k}}$; that is, $\Phi_{\tilde{\Gamma}^{k}}$ must be a full column-rank matrix. Therefore, we provide the first reliability verification condition; that is, if

$$|\tilde{\Gamma}^{k}| \leq M, \tag{18}$$

then the least squares solution is computed, where $\Phi_{\tilde{\Gamma}^{k}}$ is the candidate atomic set (or matrix) at the $k$-th iteration. If the condition is not satisfied, namely, the number of rows is smaller than the number of columns, the matrix cannot be inverted; if this occurs, we exit the loop and let done = 1. Next, solving the least squares problem for the sparse approximation signal, we obtain

$$z^{k} = \Phi_{\tilde{\Gamma}^{k}}^{\dagger} y = \left(\Phi_{\tilde{\Gamma}^{k}}^{T} \Phi_{\tilde{\Gamma}^{k}}\right)^{-1} \Phi_{\tilde{\Gamma}^{k}}^{T} y, \tag{19}$$

where $z^{k}$ is the estimation vector of the sparse signal at the $k$-th iteration, known as the transition signal. $\Phi_{\tilde{\Gamma}^{k}}^{\dagger}$ is the pseudo-inverse of the candidate atomic set (or matrix), and $y$ is the observation vector.
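A sketch of the first reliability verification condition and the least squares step of Equations (18) and (19) follows; the early-exit behavior is our reading of the text:

import numpy as np

def estimate_transition_signal(Phi, y, cand):
    # Reliability verification condition 1: the least squares problem is solved
    # only when the candidate atomic matrix has no more columns than rows.
    M, N = Phi.shape
    if len(cand) > M:
        return None                      # condition violated: caller exits the loop
    z = np.zeros(N)
    z[cand], *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
    return z                             # transition signal z^k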

Since the soft-threshold selection strategy is used to complete the second selection of the preliminary atomic set, the size of the candidate atomic index set may be less than $s$. Therefore, to ensure the correctness and effectiveness of the proposed method, we provide the second reliability verification condition; that is, if

$$|\tilde{\Gamma}^{k}| \leq s, \tag{20}$$

then

$$\Lambda^{k} = \tilde{\Gamma}^{k}, \tag{21}$$

where $s$ is the sparsity level of the signal and $|\tilde{\Gamma}^{k}|$ is the size of the candidate atomic index set $\tilde{\Gamma}^{k}$. If

$$|\tilde{\Gamma}^{k}| > s, \tag{22}$$

then

$$\Lambda^{k} = \operatorname{Max\_index}\left(|z^{k}|, s\right), \tag{23}$$

where $|z^{k}|$ is the absolute value of the transition signal at the $k$-th iteration. $\operatorname{Max\_index}(|z^{k}|, s)$ determines the atomic (or column) indexes of the measurement matrix corresponding to the $s$ maximum values of $|z^{k}|$, which form the support atomic index set $\Lambda^{k}$.
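The second reliability verification condition of Equations (20)–(23) can be sketched as follows (names are ours):

import numpy as np

def prune_support(z, cand, s):
    # If the candidate atomic index set is already no larger than s, it is the
    # support set, Equations (20)-(21); otherwise keep the indexes of the s
    # largest-magnitude components of the transition signal, Equations (22)-(23).
    cand = np.asarray(cand)
    if len(cand) <= s:
        return cand
    return np.argsort(np.abs(z))[-s:]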

Next, we update the approximation of the sparse signal $x$, which is expressed as

$$x^{k} = z^{k}_{\Lambda^{k}}, \tag{24}$$

where $x^{k}$ is the approximation of the signal at the $k$-th iteration, and $z^{k}_{\Lambda^{k}}$ is the recovered signal corresponding to the support atomic index set $\Lambda^{k}$ (the entries of $z^{k}$ on $\Lambda^{k}$, with zeros elsewhere).

Finally, we check the iteration stopping condition. If

$$\|r^{k}\|_2 \leq \varepsilon \quad \text{or} \quad k \geq k_{\max}, \tag{25}$$

then done = 1, where $r^{k} = y - \Phi x^{k}$ represents the residual at the $k$-th iteration and $\varepsilon$ is the tolerance error of the algorithm iteration. $k_{\max}$ is the maximum number of iterations used in this study. done = 1 indicates that the algorithm stops and outputs the signal approximation $\hat{x} = x^{k}$. If the iteration stopping criterion is not satisfied, the iteration continues until the condition is satisfied. The entire procedure is shown in Algorithm 2.

Input:
 Sparsity level s
 Sensing matrix Φ
 Observation vector y
 Block size b
 Tolerance ε used to exit loop
 Maximum number of iterations k_max
Initialize parameter:
x⁰ = 0 {initialize signal approximation}
k = 0 {loop index}
done = 0 {while loop flag}
Γ = ∅ {empty preliminary index set}
Γ̃ = ∅ {empty candidate index set}
Λ = ∅ {empty support index set}
n = ceil(M/b) {number of blocks}
While (∼done)
  k = k + 1
(1) Randomize
  i = floor(n·rand(1)) + 1
  B_i = {(i − 1)b + 1, …, min(ib, M)}
  Φ_b = Φ(B_i, :), y_b = y(B_i)
(2) Computation of gradient
  gᵏ = (1/b)Φ_bᵀ(Φ_b xᵏ⁻¹ − y_b)
(3) Identify the large 2s components
  Γ = Max_index(|gᵏ|, 2s)
(4) Soft-threshold selection strategy
  gᵏ_max = max_{j∈Γ} |gᵏ_j|
  Γ_t = find(|gᵏ_Γ| > t·gᵏ_max)
(5) Merge to update the candidate index set
  Γ̃ = Γ_t ∪ Λ
  Reliability verification condition 1
  If |Γ̃| ≤ M
   proceed to step (6)
  else
   done = 1
   break
  end
(6) Estimation of signal by least squares method
  zᵏ = (Φ_Γ̃ᵀ Φ_Γ̃)⁻¹ Φ_Γ̃ᵀ y
(7) Prune to obtain current support index set
  Reliability verification condition 2
  If (|Γ̃| ≤ s)
   Λ = Γ̃
  else
   Λ = Max_index(|zᵏ|, s)
  end
(8) Update
  xᵏ = zᵏ restricted to Λ (zero elsewhere)
  rᵏ = y − Φxᵏ
(9) Check the iteration stopping condition
  If ‖rᵏ‖₂ ≤ ε or k ≥ k_max
   done = 1 {quit iteration}
  end
end
Output: xᵏ (s-sparse approximation of signal x)
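Putting the pieces together, a compact NumPy sketch of Algorithm 2 follows; it is a reading of the steps above under the stated conventions (t is the soft threshold, and b is assumed to divide M), not the authors' reference code:

import numpy as np

def improved_stograd_mp(Phi, y, s, b, t=0.6, tol=1e-6, k_max=500, seed=0):
    # Improved StoGradMP with limited soft-threshold selection and the two
    # reliability verification conditions.
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    n = M // b
    x = np.zeros(N)
    Lam = np.array([], dtype=int)
    for k in range(k_max):
        i = rng.integers(n)                          # (1) randomize one block
        rows = slice(i * b, (i + 1) * b)
        Phi_b, y_b = Phi[rows], y[rows]
        g = Phi_b.T @ (Phi_b @ x - y_b) / b          # (2) gradient, Equation (14)
        Gamma = np.argsort(np.abs(g))[-2 * s:]       # (3) 2s largest components
        mags = np.abs(g[Gamma])                      # (4) soft-threshold selection
        Gamma_t = Gamma[mags >= t * mags.max()]
        cand = np.union1d(Gamma_t, Lam)              # (5) merge with previous support
        if len(cand) > M:                            # reliability condition 1
            break
        z = np.zeros(N)                              # (6) least squares estimation
        z[cand], *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        if len(cand) <= s:                           # (7) reliability condition 2
            Lam = cand
        else:
            Lam = np.argsort(np.abs(z))[-s:]
        x = np.zeros(N)                              # (8) update signal and residual
        x[Lam] = z[Lam]
        if np.linalg.norm(y - Phi @ x) <= tol:       # (9) stopping condition
            break
    return x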

5. Results and Discussion

In this section, we use an s-sparse signal as the original signal. The measurement matrix is randomly generated from a Gaussian distribution. All performance figures are averages over 100 runs of the simulation, using a computer with a quad-core, 64-bit processor and 4 GB of memory.
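The experimental protocol can be sketched as follows; the dimensions and the success criterion are illustrative assumptions, while the 100-trial averaging matches the setup described above:

import numpy as np

def reconstruction_probability(recover, N=256, M=128, s=20, b=32,
                               trials=100, err_tol=1e-4, seed=0):
    # Fraction of trials in which `recover` reconstructs a random s-sparse
    # signal from Gaussian measurements to within the relative error err_tol.
    rng = np.random.default_rng(seed)
    success = 0
    for _ in range(trials):
        x = np.zeros(N)
        support = rng.choice(N, size=s, replace=False)
        x[support] = rng.standard_normal(s)
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)
        x_hat = recover(Phi, Phi @ x, s, b)
        if np.linalg.norm(x - x_hat) / np.linalg.norm(x) < err_tol:
            success += 1
    return success / trials

For example, reconstruction_probability(improved_stograd_mp) evaluates the sketch of Algorithm 2 given earlier.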

In Figure 1, we compare the reconstruction probability of the proposed algorithm for different sparsity levels and different numbers of measurements under different threshold conditions. We set the sparsity level set as $s \in \{8, 12, 16, 20\}$ and the threshold parameter set as $t \in \{0.1, 0.2, \ldots, 1.0\}$. From Figures 1(a)–1(e), we can see that when the threshold parameter is 0.1, 0.2, 0.3, 0.4, or 0.5, the reconstruction probabilities of the proposed method are very close, with almost no difference. In Figure 1(f), when the threshold parameter is 0.6, the proposed method can complete the signal reconstruction with fewer measurements than the other thresholds when the sparsity level is 16 or 20. In Figures 1(g)–1(j), we see that when the threshold is 0.7, 0.8, 0.9, or 1.0, the proposed method requires more measurements to complete the signal reconstruction under the same sparsity level.

Figure 2 compares the reconstruction probability of the proposed method for different numbers of measurements and different thresholds under the same sparsity level. We set the sparsity set as $s \in \{8, 12, 16, 20\}$; the threshold parameter set is consistent with that in Figure 1. From Figure 2, we see that different threshold parameters have a certain influence on the reconstruction probability of the signal for different sparsity levels. In Figures 2(a) and 2(b), we can see that when the sparsity levels are s = 8 and s = 12, the reconstruction probabilities under all threshold conditions are very close, except for the thresholds 0.8, 0.9, and 1.0. Meanwhile, from Figures 2(c) and 2(d), we can see that when the sparsity level is s = 16 or s = 20 and the threshold is 0.5, 0.6, or 0.7, the reconstruction performance of the proposed method is better than that of the other thresholds for different measurements. Therefore, Figure 2 demonstrates that the soft-threshold strategy is more advantageous for larger sparsity. In particular, when the sparsity level is s = 20 and the threshold is t = 0.6, the reconstruction probability of the proposed method is better than that under the other threshold conditions. Based on the analysis of Figures 1 and 2, we can see that when the threshold is t = 0.6, the proposed method has better performance. Therefore, in the following simulations, we set the threshold as 0.6.

In Figure 3, we compare the average runtime of the proposed algorithm for different thresholds and different numbers of measurements. From Figures 1 and 2, we see that the reconstruction probability of the proposed method is 100% when the number of measurements is greater than or equal to 145. In particular, when the sparsity level is s = 20 and the threshold is 0.6, the reconstruction probability is better than that of the other thresholds. Therefore, we set the range of the number of measurements as [145, 200] in the simulation of Figure 3. From this, we see that the average runtime of the proposed algorithm differs markedly across the threshold settings, with the shortest and longest runtimes belonging to different thresholds. This means that the selection of the threshold parameter is important to the runtime of the proposed method.

Considering both the reconstruction probability and the runtime of the proposed method, we conclude that when the threshold parameter is t = 0.6 and the sparsity level is s = 20, the reconstruction performance of the proposed method is better than that under the other threshold conditions for different measurements. Therefore, in the following, unless otherwise stated, the default threshold is 0.6 and the sparsity level is s = 20.

In Figure 4, we examine the reconstruction performance of the proposed algorithm in a single reconstruction. From Figure 4, we see that the recovery error is much less than the tolerance error of the iteration of the proposed method. This shows that the reconstruction performance of the proposed method is ideal.

In Figure 5, we compare the reconstruction probability of the proposed method and StoGradMP for different sparsity levels and different measurements. We set the sparsity set as $s \in \{4, 8, 12, 16, 20\}$. The threshold is 0.6. From Figure 5, we see that when the sparsity level is s = 4 or s = 8, the reconstruction probabilities of the proposed algorithm and the StoGradMP algorithm are nearly identical for all measurements. When s = 12, 16, and 20, the reconstruction probability of the proposed method is higher than that of the StoGradMP algorithm under the same sparsity level. Among them, when the sparsity level is s = 20, the difference between the reconstruction probabilities of the two algorithms is the largest.

In Figure 6, we compare the reconstruction probability of the proposed algorithm with the StoGradMP, GradMP, and StoIHT algorithms for different measurements. From Figure 5, we see that the reconstruction probability is 0% when the sparsity is equal to 20 and the number of measurements is smaller than or equal to 40, so the range of measurements in the simulation of Figure 6 starts there. From Figure 6, we see that for the smallest numbers of measurements, the reconstruction probability of almost all the algorithms is 0%. As the number of measurements grows, the reconstruction probability of the proposed algorithm rises first, from 0 to 97%, and is higher than that of the other methods. The reconstruction probability of the GradMP algorithm then begins to increase, from approximately 0% to 97%, while that of StoGradMP ranges from 0 to 51% and that of StoIHT remains 0%. Subsequently, the reconstruction probabilities of the proposed method and StoGradMP rise from 97% to 100% and from 51% to 100%, respectively, while the reconstruction probability of the GradMP algorithm declines from 98% to 82% before increasing again from 82% to 100%; over this range, the reconstruction probabilities of the proposed method and the StoGradMP algorithm are 100%, with almost no change, whereas the reconstruction probability of the StoIHT algorithm is still 0%. Only for the largest numbers of measurements does the reconstruction probability of StoIHT increase from 0% to 100%, at which point all the algorithms can complete the reconstruction.

In Figure 7, we compare the average runtime of the different algorithms for different measurements. From Figure 6, we see that the reconstruction probability of all the recovery algorithms is 100% when the sparsity is equal to 20 and the number of measurements is greater than or equal to 120. Therefore, we set the range of the number of measurements as [120, 250] in the simulation of Figure 7, and the sparsity level is equal to 20. We see that the GradMP algorithm has the shortest runtime, followed by the StoIHT and proposed algorithms; the StoGradMP algorithm has the longest running time. From Figures 6 and 7, we see that although the runtime of the proposed method is longer than those of the GradMP and StoIHT algorithms, its reconstruction probability is much better than that of those two algorithms. This means that, for different measurements, the proposed algorithm has lower complexity than the StoGradMP algorithm, with only the GradMP and StoIHT algorithms running faster.

Based on the above analysis, the proposed method with threshold t = 0.6 has better reconstruction performance for different measurements than the other methods and lower computational complexity than StoGradMP.

6. Conclusion

In this study, an improved reconstruction method was proposed. The proposed method utilizes the limited soft-threshold selection strategy to select the most relevant atoms from the preliminary atom set, which reduces the redundancy of the preliminary atom set and improves the accuracy of the support atom set estimation, thereby improving the reconstruction precision of the signal and reducing the computational complexity of the algorithm. In addition, the combination with the reliability verification conditions ensures the correctness and effectiveness of the proposed method. The simulation results show that the proposed method has better performance than the other recovery methods.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61271115) and the Foundation of Jilin Educational Committee (JJKH20180336KJ). The authors would also like to thank Yanfei Jia for proofreading this paper.