Abstract

To improve the reconstruction performance of the generalized orthogonal matching pursuit (GOMP), an improved method is proposed. GOMP selects columns from the sensing matrix and adds their indices to the estimated support set to reconstruct a sparse signal. The selected columns may contain error columns that reduce reconstruction performance. Therefore, the proposed algorithm adds a backtracking process that removes low-reliability columns from the selected column set. For any K-sparse signal, the proposed method first computes the correlation between the columns of the sensing matrix and the residual vector, selects the S columns with the largest correlations in magnitude, and adds their indices to the estimated support set in each iteration. Second, the proposed algorithm projects the measurements onto the space spanned by the selected columns and calculates the projection coefficient vector. When the size of the support set exceeds K, the proposed method selects K high-reliability indices from the support set using a search strategy. Finally, the proposed method updates the estimated support set using the selected high-reliability indices. Simulation results demonstrate that the proposed algorithm has better recovery performance.

1. Introduction

In recent years, a new theory named compressive sensing (CS) [1] has broken through the limit of the Nyquist sampling rate. Because CS can recover signals from samples taken at a rate far below the Nyquist rate, it has attracted tremendous interest over the past few years [2, 3]. CS differs from traditional Nyquist sampling and consists of three procedures: sparse representation, incoherent linear measurement, and signal reconstruction. The reconstruction algorithm aims to recover the signal accurately from the measurements, and this step is one of the most important parts of CS.

Recently, many reconstruction algorithms have been proposed to recover the original sparse signal from the measurements. The two major classes of reconstruction algorithms are ℓ1-minimization and greedy pursuit algorithms. Common ℓ1-minimization approaches include basis pursuit (BP) [4], gradient projection for sparse reconstruction (GPSR) [5], iterative thresholding (IT) [6], and others. These algorithms perform well at solving the convex minimization problem, but they have high computational complexity.

Greedy algorithms have received increasing attention for their good performance and low cost in recovering sparse signals from compressed measurements. An early greedy algorithm was the matching pursuit (MP) algorithm [7]. Building on MP, the orthogonal matching pursuit (OMP) algorithm [8] was proposed, which improves MP by orthogonalizing the residual against the columns of the estimated support set. OMP has become a well-known greedy algorithm with wide application. The regularized orthogonal matching pursuit (ROMP) algorithm [9] refines the selected columns of the measurement matrix with a regularization rule to improve the speed of OMP. The stagewise orthogonal matching pursuit (StOMP) [10] selects multiple columns in each iteration via a preset threshold. The subspace pursuit (SP) [11] and compressive sampling matching pursuit (CoSaMP) [12] algorithms propose similar improvements. Both are built on the idea of backtracking; the difference is that SP selects K columns from the sensing matrix in each iteration, while CoSaMP selects 2K. The generalized orthogonal matching pursuit (GOMP) was proposed by Wang et al. [13, 14]. The algorithm selects S (S ≥ 1) columns in each iteration; when S = 1, GOMP is identical to OMP. Compared to OMP, which selects only one column per iteration, GOMP changes the number of columns selected in each iteration to improve computational efficiency and recovery performance. GOMP has received increasing attention in recent years because it enhances the recovery performance of OMP, and several papers have analyzed its theoretical performance [13-17].

2. Compressive Sensing Model

Compressive sensing requires that the target signal is K-sparse; that is, if we regard the signal as an N-dimensional vector x, there are at most K nonzero elements in x. In practical applications, however, signals are not always sparse in their original form. The target signal then has to be transformed into a sparse representation over a sparse basis Ψ. In this case, x can be written as

$$x = \Psi s, \qquad \|s\|_{0} \le K, \tag{1}$$

where ‖·‖_0 denotes the number of nonzero elements in a vector. Thus, the signal x is equivalently represented by the K-sparse vector s under a linear transformation in some domain. The process of compressive sensing can be regarded as a technique that automatically selects the relevant information of a signal through a measurement. In the theory, x is translated into the M-dimensional measurement vector y via a matrix multiplication with Φ. We express this as

$$y = \Phi x, \tag{2}$$

where Φ is defined as the measurement matrix with dimensions M × N. Combining (1) with (2), we obtain

$$y = \Phi \Psi s = \Theta s, \tag{3}$$

where Θ = ΦΨ is the sensing matrix.
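As a concrete illustration of (1)-(3), the following Python/NumPy fragment constructs a K-sparse coefficient vector, forms the signal and its measurements, and checks that y = Θs. The dimensions and the randomly generated orthonormal basis are assumptions made only for this example.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                                  # assumed example dimensions

Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))    # an arbitrary orthonormal sparse basis
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse coefficients, eq. (1)
x = Psi @ s                                           # signal that is sparse in the basis Psi

Phi = rng.standard_normal((M, N)) / np.sqrt(M)        # M x N Gaussian measurement matrix
y = Phi @ x                                           # measurements, eq. (2)

Theta = Phi @ Psi                                     # sensing matrix, eq. (3)
print(np.allclose(y, Theta @ s))                      # True: y = Theta @ s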

In most scenarios, M ≪ N. Thus, x is compressed from dimension N down to dimension M. Clearly, (3) is an underdetermined system, and it is difficult to obtain an accurate solution from it directly; that is to say, traditional methods based on inverting the matrix cannot reconstruct the original signal accurately. In this case, we can obtain s by solving the ℓp-minimization problem

$$\min_{s} \|s\|_{p} \quad \text{subject to} \quad y = \Theta s. \tag{4}$$

Several methods exist for solving this problem. When p = 1, it is an ℓ1-minimization problem, which can be solved with a convex optimization algorithm. When p = 0, it is an ℓ0-minimization problem, which can be solved with a greedy algorithm. An appropriate condition for exact recovery is that the sensing matrix satisfies the restricted isometry property (RIP) condition [1].
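For instance, when p = 1, problem (4) can be rewritten as a linear program and handed to a generic solver. The sketch below does this with SciPy's linprog; it is a textbook reformulation shown for illustration, not the BP solver of [4], and the function name is ours.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(y, Theta):
    """Solve min ||s||_1 subject to Theta @ s = y by rewriting it as a linear program."""
    M, N = Theta.shape
    # Variables z = [s, t], with the constraint |s_i| <= t_i; minimize sum(t).
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])           #  s - t <= 0  and  -s - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Theta, np.zeros((M, N))])    #  Theta @ s = y
    bounds = [(None, None)] * N + [(0, None)] * N  #  s free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

For the small examples used here this generic LP route is adequate; dedicated solvers such as those behind BP [4] or GPSR [5] scale much better in N.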

Definition 1. A sensing matrix Θ is said to satisfy the RIP condition with restricted isometry constant δ_K, defined as the smallest number such that

$$(1 - \delta_{K}) \|s\|_{2}^{2} \le \|\Theta s\|_{2}^{2} \le (1 + \delta_{K}) \|s\|_{2}^{2} \tag{5}$$

holds for every K-sparse vector s with ‖s‖_0 ≤ K.
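Computing δ_K exactly is combinatorial, but the inequality in (5) can be probed empirically. The following sketch samples random K-sparse vectors and records the largest observed deviation of ‖Θs‖²/‖s‖² from 1, which gives a lower bound on δ_K for an assumed Gaussian Θ; the dimensions are chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
M, N, K = 128, 256, 10                                # assumed example dimensions
Theta = rng.standard_normal((M, N)) / np.sqrt(M)      # column-normalized Gaussian sensing matrix

worst = 0.0
for _ in range(2000):                                 # sample random K-sparse vectors
    s = np.zeros(N)
    s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratio = np.linalg.norm(Theta @ s) ** 2 / np.linalg.norm(s) ** 2
    worst = max(worst, abs(ratio - 1.0))

print("largest observed |ratio - 1| (a lower bound on delta_K):", worst)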

3. GOMP Algorithm

Greedy algorithms are widely used to recover signals in CS because of their simple structure and low computational complexity. Two main approaches exist for improving the OMP algorithm. The first is based on the idea of backtracking, as in the subspace pursuit (SP) algorithm [11]. The second selects more than one atom in each iteration, as in GOMP. The backtracking approach has a higher computational complexity, but it yields higher accuracy in most cases. In this section, we introduce the GOMP algorithm.

In each iteration, GOMP first computes the correlation between the columns of the sensing matrix Φ and the residual vector r_{k-1} by forming Φ^T r_{k-1} and then selects the S columns corresponding to the largest correlations in magnitude, adding their indices to the estimated support set Λ_k (throughout this and the following sections the signal is taken to be sparse in the canonical basis, so the sensing matrix is simply Φ). Next, the projection coefficient vector x̂_{Λ_k} of the measurements onto the span of Φ_{Λ_k} is obtained by the least squares (LS) method. The residual is then revised by subtracting Φ_{Λ_k} x̂_{Λ_k} from y. These operations are repeated until either the iteration count reaches the maximum min(K, M/S) or the ℓ2-norm of the residual falls below a threshold ε. For ease of understanding, we describe the GOMP algorithm in Algorithm 1 according to [13, 14].

Input: measurements y,
       sensing matrix Φ,
       sparsity K.
Initialize: number of indices of columns selected per iteration S,
       iteration count k = 0,
       residual vector r_0 = y,
       estimated support set Λ_0 = ∅.
While k < min(K, M/S) and ‖r_k‖_2 > ε do
       k = k + 1;
  (Identification)
       Select the S largest entries (in magnitude) of Φ^T r_{k-1}.
       Then record the set Λ_S of indices corresponding
       to those entries.
  (Augmentation)
       Λ_k = Λ_{k-1} ∪ Λ_S.
  (Estimation of x̂_k)
       x̂_{Λ_k} = Φ†_{Λ_k} y = (Φ^T_{Λ_k} Φ_{Λ_k})^{-1} Φ^T_{Λ_k} y.
  (Residual Update)
       r_k = y − Φ_{Λ_k} x̂_{Λ_k}.
End
Output: the estimated support set Λ = Λ_k
       and the signal x̂, where x̂_Λ = Φ†_Λ y and x̂ is zero on the complement of Λ.

In Algorithm 1, we observe that the only difference between GOMP and OMP is that GOMP selects more than one atom in each iteration: GOMP selects S atoms per iteration, and when S = 1 it is identical to OMP.
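For concreteness, a minimal Python/NumPy sketch of Algorithm 1 is given below. It follows the steps listed above; the function name, variable names, and the stopping rule (at most min(K, M/S) iterations or a residual norm below eps) are our own choices for illustration, not a reference implementation from [13, 14].

import numpy as np

def gomp(y, Phi, K, S, eps=1e-6):
    """Sketch of generalized OMP (Algorithm 1): select S columns per iteration."""
    M, N = Phi.shape
    support = np.array([], dtype=int)        # estimated support set Lambda
    coef = np.zeros(0)                       # LS coefficients on the support
    residual = y.copy()
    for _ in range(min(K, M // S)):          # at most min(K, M/S) iterations
        if np.linalg.norm(residual) <= eps:
            break
        # Identification: correlate the columns of Phi with the residual
        corr = np.abs(Phi.T @ residual)
        corr[support] = -np.inf              # already-selected columns are not reselected
        new_idx = np.argpartition(corr, -S)[-S:]
        # Augmentation: add the S new indices to the support set
        support = np.union1d(support, new_idx).astype(int)
        # Estimation: LS projection of y onto the selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Residual update
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat, support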

4. Improved GOMP Algorithm

For greedy algorithms, it is important to generate a good estimate of the correct support set. We denote the correct support by T and its estimate by Λ. The goal is to choose the indices in Λ so that it matches T as closely as possible. GOMP selects S columns from the sensing matrix in each iteration and adds the indices of these columns to the estimated support set to reconstruct a sparse signal. Those columns may include columns selected in error. Although GOMP tolerates a number of error indices in Λ, these error indices cause the size of Λ to grow beyond K. This greatly increases the algorithm complexity in the estimation and residual-update steps of GOMP.

In the estimation step, the projection coefficient vector of the measurements onto the span of Φ_{Λ_k} can be expressed as

$$\hat{x}_{\Lambda_{k}} = \Phi_{\Lambda_{k}}^{\dagger} y = \left(\Phi_{\Lambda_{k}}^{T} \Phi_{\Lambda_{k}}\right)^{-1} \Phi_{\Lambda_{k}}^{T} y, \tag{6}$$

where Φ_{Λ_k} denotes the submatrix of Φ whose columns are indexed by Λ_k, and y denotes the measurements of the signal vector x. The cost of (6) grows with S, the number of indices of columns selected in each iteration, k, the number of iterations, and M, the dimensionality of the measurements [13].

In the residual-update step, the residual in GOMP can be expressed as

$$r_{k} = y - \Phi_{\Lambda_{k}} \hat{x}_{\Lambda_{k}}, \tag{7}$$

where y represents the measurements, Λ_k is the estimated support set, and x̂_{Λ_k} is the projection coefficient vector of the measurements onto the span of the columns indexed by the estimated support set. The cost of (7) likewise grows with k, the number of iterations, and S, the number of indices of columns selected in each iteration.

GOMP selects S indices in each iteration and adds them to Λ. After the k-th iteration, Sk indices have been selected, so the size of Λ_k is Sk. If all the indices in the estimated support set were correct, Λ_k would coincide with the correct support T and its size would not exceed K. Although the selection rule in GOMP favors indices that belong to the correct support, it is unavoidable that error indices are selected, and once selected, the error indices remain in the support set throughout the remainder of the reconstruction process; this means that in general |Λ_k| > K. Generally speaking, a large S should make GOMP more efficient than a small S. However, as S increases, the probability of selecting error indices increases as well. When a large number of error indices are selected, |Λ_k| grows considerably, further increasing the cost of the algorithm.

To overcome this problem, we propose a method to improve the performance of GOMP. Whenever the number of indices exceeds K, the proposed method prunes the estimated support set back to size K and updates its indices using a backtracking step, which reduces the number of error indices. Even if an index is deemed reliable in some iteration, it will be removed from the estimated support set if it is considered unreliable in a subsequent iteration. This reduces the number of error indices in the estimated support set.

The main difference between the proposed algorithm and the conventional GOMP algorithm is that the new algorithm can add or remove an index from the estimated support set at any stage of the recovery process. The proposed algorithm increases the reliability of the indices in the estimated support set and, furthermore, reduces the cost of the reconstruction process. The proposed algorithm is summarized in Algorithm 2.

Input: measurements y,
       sensing matrix Φ,
       sparsity K.
Initialize: number of indices of columns selected per iteration S,
       iteration count k = 0,
       residual vector r_0 = y,
       estimated support set Λ_0 = ∅.
While k < min(K, M/S) and ‖r_k‖_2 > ε do
       k = k + 1;
  (Identification)
       Select the S largest entries (in magnitude) of Φ^T r_{k-1}. Then
       record the set Λ_S of indices corresponding to those entries.
  (Augmentation)
       Λ_k = Λ_{k-1} ∪ Λ_S.
  (Estimation of x̂_k)
       x̂_{Λ_k} = Φ†_{Λ_k} y = (Φ^T_{Λ_k} Φ_{Λ_k})^{-1} Φ^T_{Λ_k} y.
  (Backtracking)
       While |Λ_k| > K: select the K largest elements (in magnitude) of x̂_{Λ_k},
       record the indices corresponding to those elements,
       and renew Λ_k with those indices.
  (Residual Update)
       r_k = y − Φ_{Λ_k} Φ†_{Λ_k} y.
End
Output: the estimated support set Λ = Λ_k
       and the signal x̂, where x̂_Λ = Φ†_Λ y and x̂ is zero on the complement of Λ.

In Algorithm 2, we observe that the proposed method adds a backtracking step to the GOMP algorithm. When S is small, the identification step has high accuracy; when S = 1, GOMP is identical to OMP, and the identification step is at its most accurate. As S increases, the probability of selecting error indices increases as well, and the proposed method uses the backtracking process to remove those error indices in subsequent iterations. When S < K, the proposed method requires more than one iteration to build a support set of size K. As long as the support set contains fewer than K indices, it cannot contain all the correct indices, and executing the backtracking step is unnecessary and would waste computation. Therefore, we designed the algorithm to execute the backtracking step only when |Λ_k| > K. The size of Λ_k is known before the backtracking step, so the improved algorithm decides whether to execute the backtracking step based on the size of Λ_k. Experimental evidence demonstrates that these changes improve performance. We describe our simulations in the next section.
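The corresponding sketch of Algorithm 2 is given below. It reuses the structure of the gomp sketch above and differs only in the backtracking step, which prunes the support back to the K coefficients of largest magnitude and re-estimates them; as before, names and tolerances are illustrative assumptions rather than a reference implementation.

import numpy as np

def gomp_backtracking(y, Phi, K, S, eps=1e-6):
    """Sketch of the improved GOMP (Algorithm 2) with a K-pruning backtracking step."""
    M, N = Phi.shape
    support = np.array([], dtype=int)
    coef = np.zeros(0)
    residual = y.copy()
    for _ in range(min(K, M // S)):
        if np.linalg.norm(residual) <= eps:
            break
        # Identification and augmentation (same as in GOMP)
        corr = np.abs(Phi.T @ residual)
        corr[support] = -np.inf
        support = np.union1d(support, np.argpartition(corr, -S)[-S:]).astype(int)
        # Estimation: LS coefficients on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Backtracking: keep only the K most reliable (largest-magnitude) coefficients
        if support.size > K:
            keep = np.argpartition(np.abs(coef), -K)[-K:]
            support = np.sort(support[keep])
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Residual update on the (possibly pruned) support
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat, support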

5. Simulation and Discussion

In this section, we demonstrate the performance of the proposed algorithm on sparse signals. We used the same K-sparse signal sources as in the GOMP experiments [13] to compare the performances of the different algorithms. The components of the sensing matrix were generated randomly from a Gaussian distribution, and the same matrix size was used for all algorithms. We used MATLAB 7.0 on a quad-core 64-bit processor in a Windows 10 environment. We executed each algorithm 1000 times and recorded the probability of exact reconstruction. We set the residual threshold ε to the value used in [13] for both GOMP and the proposed method.
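The experimental procedure can be outlined in code as follows. This sketch reuses the gomp and gomp_backtracking functions above; the dimensions, the Gaussian nonzero values, the example S value, and the exact-recovery criterion (relative error below 10^-4) are assumptions made for illustration, since the paper follows the settings of [13].

import numpy as np

def recovery_probability(algo, M=128, N=256, K=20, S=3, trials=1000):
    """Fraction of trials in which `algo` exactly recovers a random K-sparse signal."""
    rng = np.random.default_rng(0)
    successes = 0
    for _ in range(trials):
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)                 # Gaussian sensing matrix
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)    # K-sparse test signal
        y = Phi @ x
        x_hat, _ = algo(y, Phi, K, S)
        # Declare exact reconstruction when the relative error is negligible (assumed criterion)
        if np.linalg.norm(x_hat - x) <= 1e-4 * np.linalg.norm(x):
            successes += 1
    return successes / trials

# Example comparison at one (K, S) setting (values chosen arbitrarily for the example)
print(recovery_probability(gomp, K=20, S=3))
print(recovery_probability(gomp_backtracking, K=20, S=3))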

In Figure 1, we compare the probability of successful reconstruction of the proposed method with those of the ROMP [9], OMP [8], SP [11], CoSaMP [12], and StOMP [10] algorithms. It is evident from Figure 1 that the reconstruction performance of ROMP decreased rapidly as the sparsity K rose, with its recovery probability falling to near zero at a relatively small sparsity level. The probability of successful recovery of OMP decreased at a smaller rate than that of ROMP, so it reached its critical sparsity at a larger value of K. CoSaMP and StOMP had similar performances, and both retained a high recovery performance at low sparsity levels. The probability of successful recovery of SP was higher than those of ROMP, OMP, CoSaMP, and StOMP at the same sparsity. For the proposed method, we tested the performance with different values of S, the number of columns selected in each iteration, comparing a smaller and a larger setting. Figure 1 indicates that the smaller the value of S, the higher the probability of successful recovery of the proposed algorithm at the same sparsity. Compared with the other algorithms, under the same conditions the proposed method with the smaller S had the highest probability of successful recovery, followed by the proposed method with the larger S and then by the SP algorithm. The proposed algorithm had a higher probability than the other algorithms for both values of S.

In Figures 2 and 3, we compare the performances of the proposed method and GOMP for different values of the parameter S. In Figure 2, we selected smaller values of S to compare the algorithms. All algorithms achieved a high probability of recovery at low sparsity levels, and the probability of exact recovery decreased as K increased, eventually reaching zero. Both the proposed algorithm and GOMP had a higher probability of exact recovery with a smaller S. With the smaller S, the two algorithms had nearly the same probability values; with the larger S, the proposed method had a higher probability of recovery. For both settings of S, the probability of exact reconstruction of the two algorithms was 100% at low sparsity and zero at high sparsity, and the difference between the two algorithms was concentrated mainly at sparsity levels from 40 to 70. Therefore, we compared the two algorithms over that sparsity range in Figure 2.

In Figure 3, we selected larger values of S to compare the proposed algorithm and GOMP. The probability of exact reconstruction of both algorithms was 100% at low sparsity levels and zero at high sparsity levels, so Figure 3 compares the two algorithms over the intermediate sparsity range for the larger values of S.

At one of the tested settings of S and K, the probability of exact recovery of GOMP was 48.7%, while that of the proposed method was 99.4%; at another setting, the resulting probabilities were 4.5% for GOMP and 99.4% for the proposed method. These results indicate that, across the tested values of S, the proposed method had a higher probability of exact recovery than GOMP.

Figures 2 and 3 also demonstrate that, across the different values of S, the probability-of-exact-recovery curves of the proposed method lie closer to one another than those of GOMP. This indicates that the proposed method has a more stable performance than GOMP with respect to the choice of S.

Based on the analysis and comparison, we determined that even though the proposed algorithm had an additional backtracking process compared to GOMP, the proposed method demonstrated an excellent performance with regard to running time. In order to compare running times for all the algorithms, we ran each algorithm one thousand times to calculate the average running time. The computing environment was the same as for the determination of the probability of exact reconstruction.

In Figure 2, the probability of exact reconstruction of both algorithms falls below 100% beyond a certain sparsity level for the smaller values of S, and the running time is only meaningful when reconstruction succeeds. Therefore, we compared the running times of the two methods only over the sparsity range where both succeed. The difference in running time between the two methods was not obvious at the smallest sparsity levels, so Figure 4 compares the running times over the remaining range for both of the smaller values of S.

In Figure 3, the probability of exact reconstruction of GOMP with the larger values of S falls below 100% beyond certain sparsity levels, as does that of the proposed algorithm. Since the running time is only meaningful when reconstruction succeeds, we compared the running times of the two algorithms only over the sparsity ranges where each succeeds; the difference in running time was not obvious at smaller sparsity levels. To unify the sparsity range, we set K from 5 to 20 in Figure 5.

In Figure 4, we compare the running times of GOMP and the proposed method for the smaller values of S. The running times of GOMP with the smaller S and of the proposed method with both settings of S were nearly identical, and all of them were lower than the running time of GOMP with the larger S. At higher sparsity levels, the running time of the proposed method with either setting of S was also lower than that of GOMP with the smaller S.

In Figure 5, we show the corresponding comparison for the larger values of S. The running time of the proposed algorithm was lower than that of GOMP over the sparsity range up to K = 20 for one setting of S and up to K = 10 for the other. This shows that the proposed algorithm reconstructs faster.

These results show that the proposed method achieves not only better running times but also a higher probability of exact reconstruction. The advantage is more pronounced when a larger S is used in both algorithms.

6. Conclusion

In this paper, a novel method for sparse signal reconstruction has been proposed. The proposed method adds a backtracking step to maintain a K-sized estimated support set, avoiding the extra computational cost of an oversized estimated set. At the same time, the proposed method increases the reliability of the estimated support set by removing low-reliability columns from it. The proposed reconstruction algorithm performs well not only on very sparse signals (i.e., when K is small) but also on less sparse signals (i.e., when K is large). The simulation results showed that the proposed method performs better than GOMP with regard to both the probability of exact reconstruction and running time, especially for larger values of S. In future research, we intend to refine the proposed algorithm to optimize greedy recovery further.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61271115) and the Foundation of Jilin Educational Committee (2015235).