Abstract

The modified adaptive orthogonal matching pursuit (MAOMP) algorithm suffers from a low convergence speed. To overcome this problem, an improved method with faster convergence is proposed. For atom selection, the proposed method computes the correlation between the measurement matrix and the residual and then selects the atoms most related to the residual to construct the candidate atomic set; the number of selected atoms is an integral multiple of the initial step size. For sparsity estimation, the proposed method introduces an exponential function into the step-size update: a larger step size at the beginning of the iteration accelerates convergence, and a smaller step size later improves the reconstruction accuracy. Simulations show that the proposed method achieves better convergence speed and reconstruction accuracy for both one-dimensional and two-dimensional signals.

1. Introduction

Compressed sensing (CS) [1] has become a popular research topic in recent years. Compared with traditional compression methods, CS can sample at a rate far below the Nyquist rate and still reconstruct the signal with high probability, which reduces the amount of data to be transferred. Compressed sensing has been applied in medical imaging, radar imaging, and video transmission [2–5].

Signal reconstruction is one of the most important parts of compressed sensing. A good reconstruction algorithm improves both the accuracy and the speed of signal recovery. In early designs, signal reconstruction based on ℓ2-norm optimization was adopted first. However, the solution obtained by ℓ2-norm optimization is not sparse, and the reconstruction error is large. Therefore, many researchers turned to optimization algorithms based on the ℓ0-norm or ℓ1-norm to reconstruct sparse signals. The sparse signal reconstruction methods based on the ℓ1-norm include the basis pursuit (BP) method [6], the iterative thresholding (IT) method [7], and the homotopy method [8]. Some researchers also proposed sparse signal recovery by minimizing the ℓ1-norm minus the ℓ2-norm [9].

The matching pursuit algorithms are reconstruction algorithms based on the ℓ0-norm. Compared with convex optimization algorithms, they have lower computational complexity, so they are the most commonly used. The orthogonal matching pursuit (OMP) algorithm [10] is the earliest matching pursuit algorithm. On the basis of OMP, regularized orthogonal matching pursuit (ROMP) [11] uses a regularization rule to refine the selected columns of the measurement matrix. Researchers also proposed the generalized orthogonal matching pursuit (gOMP) algorithm [12]. The difference between OMP and gOMP is that OMP selects the single atom with the highest correlation in each iteration, while gOMP selects several of the most relevant atoms per iteration; therefore, gOMP reduces the running time. However, if gOMP selects atoms that do not contain signal information, it cannot delete these atoms in the following iterations, which affects the reconstruction performance. Therefore, researchers proposed compressive sampling matching pursuit (CoSaMP) [13] and subspace pursuit (SP) [14]. Both use a backtracking strategy: they first select the most relevant atoms and then check the atomic correlation again to remove unrelated atoms, thereby improving the reconstruction accuracy. Besides, the block orthogonal matching pursuit (BOMP) algorithm has been proposed for block-sparse signals [15, 16], and sharp sufficient conditions for stable recovery have been given [17]. In this paper, we mainly consider ordinary sparse signals.

However, these algorithms share a common limitation: the sparsity must be known, while it is often unknown in practical applications. To solve this problem, Do et al. proposed the sparsity adaptive matching pursuit (SAMP) algorithm [18], which recovers signals without knowing the sparsity. It first sets a small estimated sparsity value and then uses a fixed step size to increase the estimate after each iteration so that it finally approaches the true sparsity. However, the fixed step size may make the estimated sparsity inaccurate and degrade the reconstruction. To overcome this problem, the modified adaptive orthogonal matching pursuit (MAOMP) algorithm [19] was proposed. It uses a factor smaller than 1 to modify the step size, making the step size decrease as the iterations proceed. However, the factor used in MAOMP is fixed: if the initial step size is too small, a large number of iterations is required, which slows the convergence of the MAOMP algorithm.

To improve the convergence speed of the MAOMP algorithm, we use a nonlinear function to modify the factor used in MAOMP. In our proposed method the factor is variable: it is larger at the beginning of the iteration and gradually decreases as the iteration number increases, which accelerates convergence. Besides, a generalized atom selection strategy is used to select the atoms most related to the residual for constructing the candidate atomic set, thus improving the reconstruction accuracy.

2. Compressed Sensing Theory

The basic assumption of compressed sensing is that the signal is sparse. If a signal x of length N has K nonzero values (K ≪ N), it is called a K-sparse signal. In many real applications the signal itself is not sparse and hence not directly compressible. To make the original signal sparse, a sparse basis matrix Ψ is used. This can be expressed as

x = Ψs, (1)

where s is the sparse coefficient vector with K nonzero values (K ≪ N). Common sparse bases include the fast Fourier transform basis, the discrete Fourier transform basis, the wavelet transform basis, and redundant dictionaries. To compress the observed signal x, a measurement matrix Φ is designed to act on x. The compressed signal is

y = Φx, (2)

where Φ ∈ R^(M×N), x ∈ R^N, and y ∈ R^M. The length of the observed signal is N and the length of the compressed signal is M (M ≪ N). Combining (1), equation (2) can be expressed as

y = ΦΨs = As, (3)

where A = ΦΨ is an M × N matrix called the sensing matrix.
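As a toy illustration of the measurement model above, the following sketch compresses an 8-sample, 2-sparse signal into 4 measurements. Bernoulli ±1 matrices are a common choice of measurement matrix in CS, but these particular entries are made up for illustration and are not from the paper.

```python
# Minimal sketch of the measurement model y = Phi * x:
# the signal x (N = 8) is already sparse (Psi = I), and Phi is a
# 4 x 8 Bernoulli +/-1 matrix chosen purely for illustration.

N, M, K = 8, 4, 2

# K-sparse signal: only 2 of the 8 entries are nonzero.
x = [0, 3, 0, 0, -2, 0, 0, 0]

# Measurement matrix Phi (M x N), arbitrary +/-1 entries.
Phi = [
    [ 1,  1, -1,  1, -1,  1,  1, -1],
    [-1,  1,  1, -1,  1,  1, -1,  1],
    [ 1, -1,  1,  1,  1, -1,  1,  1],
    [ 1,  1,  1, -1, -1, -1, -1,  1],
]

# Compressed measurement y = Phi x, of length M << N.
y = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(M)]
print(y)  # -> [5, 1, -5, 5]
```

The reconstruction algorithms discussed below try to recover x from the short vector y and the matrix Phi.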

The compressed signal, the measurement matrix, and the sparse basis are known. The aim of reconstruction is to recover the sparse vector s and the original signal x from this information. Since the number of rows of the sensing matrix is smaller than the number of columns, (3) is underdetermined and cannot be solved by traditional methods. In compressed sensing, the minimum ℓ0-norm method is used to solve (3). That is,

min ‖s‖₀ subject to y = As. (4)

Because ℓ0-norm minimization is combinatorial and NP-hard, some researchers also proposed to use the minimum ℓ1-norm method to solve (3). That is,

min ‖s‖₁ subject to y = As. (5)

3. MAOMP Algorithm

The SAMP algorithm uses a fixed step size to estimate the sparsity, which can be expressed as follows:

K₁ = s₀, (6)
K_(k+1) = K_k + s₀, (7)

where k is the iteration number, K_k is the estimated sparsity, and s₀ is the fixed step size. From (6) and (7), the estimated sparsity grows with the iteration number, but the step size is fixed. Therefore, the estimated sparsity is often insufficient or overestimated, which degrades the reconstruction accuracy. To solve this problem, the MAOMP algorithm was proposed. MAOMP uses (8), (9), and (10) to estimate the sparsity:

‖A_(Γ₀)ᵀ y‖₂ ≤ μ‖y‖₂, (8)
s_k = ⌈αᵏ s₀⌉, (9)
K_(k+1) = K_k + s_k, (10)

where μ is a constant between 0 and 1, A is the sensing matrix, y is the compressed vector, Γ₀ is the index set of the K₀ columns of A most correlated with y, k is the number of iterations, K_k is the estimated sparsity, α is also a constant between 0 and 1, and ⌈·⌉ denotes the smallest integer that is not smaller than its argument. If (8) is satisfied, the initial sparsity K₀ is increased by 1 and (8) is tested again. If (8) is not satisfied, MAOMP uses (9) and (10) to continue estimating the sparsity. The variable step method is expressed by (9) and (10): the step size is gradually reduced to 1 as the iteration number increases. This makes the sparsity estimate more accurate, so the algorithm improves the reconstruction accuracy. The detailed steps of MAOMP are shown in Algorithm 1.
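Taking the SAMP update as adding a fixed step s0 each stage, and reading the MAOMP shrinking-step rule described above as step_k = ⌈αᵏ·s0⌉, a small sketch contrasts the two schedules. The values s0 = 5 and α = 0.5 are illustrative only, not from the paper's experiments.

```python
import math

# Sketch of the two sparsity-step schedules: SAMP adds a fixed step s0,
# while MAOMP (as read from the shrinking-step rule in the text) uses
# step_k = ceil(alpha**k * s0), which decays toward 1.
s0, alpha = 5, 0.5  # illustrative values only

samp_steps = [s0 for k in range(1, 5)]                        # fixed step
maomp_steps = [math.ceil(alpha**k * s0) for k in range(1, 5)]  # shrinking step

print(samp_steps)   # -> [5, 5, 5, 5]
print(maomp_steps)  # -> [3, 2, 1, 1]
```

The shrinking step lets MAOMP approach the true sparsity more gently than SAMP's constant increments, at the cost of more iterations when the initial step is small.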

Input:
 Sensing matrix A
 Observation vector y
 Initial step size s₀
 Constant parameters μ, α
 Stop threshold ε
Initialization parameters:
 x̂ = 0 {initialize signal approximation}
 k = 1 {loop index}
 K₀ = 1 {initial sparsity estimate}
 Λ = ∅ {empty preliminary index set}
 C = ∅ {empty candidate index set}
 F = ∅ {empty support index set}
 r = y {residual vector}
 done = false {while loop flag}
While (∼done)
(1) Compute the projective set Γ = Max(|Aᵀy|, K₀)
(2) Compute the estimated sparsity:
 If ‖A_Γᵀ y‖₂ ≤ μ‖y‖₂
 Then K₀ = K₀ + 1, return to step (1)
 Else K = K₀, s = s₀, r = y, go to step (3)
(3) Compute a new projective set Λ = Max(|Aᵀr|, K)
(4) Merge to update the candidate index set: C = F ∪ Λ
(5) Get the estimated signal value by the least squares algorithm: x̂_C = (A_Cᵀ A_C)⁻¹ A_Cᵀ y
(6) Prune to obtain the current support index set: F′ = Max(|x̂_C|, K)
(7) Update the signal estimate x̂_(F′) = (A_(F′)ᵀ A_(F′))⁻¹ A_(F′)ᵀ y and compute the
 residual error: r′ = y − A_(F′) x̂_(F′)
(8) Check the iteration condition:
If ‖r′‖₂ ≤ ε
   quit iteration
else if ‖r′‖₂ ≥ ‖r‖₂ then K = K + s, s = ⌈αᵏ s₀⌉, k = k + 1
else F = F′, r = r′, k = k + 1
end
end
Output: x̂ (K-sparse approximation of signal x)

4. Proposed Method

Although MAOMP uses (8) to modify the initial sparsity and thus reduce the number of iterations for sparsity estimation, this also increases the computational complexity of the initial estimate. Besides, if the initial step size is small, the step size based on (9) is rapidly reduced to 1; when the initial sparsity is far from the real sparsity, convergence then takes a long time. Moreover, in [19] it is shown that when the real sparsity is relatively large, whatever the value of μ is, the best estimated initial sparsity is only about half of the real sparsity, so a large distance remains between the estimated initial sparsity and the real sparsity. Therefore, the algorithm may require many iterations to make the estimated sparsity approach the real value.

To overcome these problems in MAOMP, we improve the method in terms of sparsity estimation and atom selection. Firstly, we directly set the initial sparsity to 1 and use a nonlinear function to adjust the step size so that it is larger in the early phase of the iteration and smaller as the iteration proceeds. It is expressed as follows:

s_k = ⌈s₀ · a⁻ᵏ⌉, K_(k+1) = K_k + s_k, (11)

where k is the number of iterations, s_k is the step size, a is a constant larger than 1, ⌈·⌉ denotes the smallest integer that is not smaller than its argument, and K_k is the estimated sparsity. According to (11), the step size is large at the beginning of the iteration; as the number of iterations increases, it becomes smaller and gradually decreases to 1. Hence, if the distance between the real sparsity and the initial sparsity is large, the proposed method quickly moves the estimated sparsity toward the real sparsity, reducing the time spent on estimation. As the number of iterations increases and the estimate approaches the real sparsity, the smaller step size adjusts the estimate and prevents it from being insufficient or overestimated. This makes the sparsity estimation both more accurate and faster.
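The step schedule described by (11) can be sketched as follows. Here a = 2.5 is the value selected later in the simulations and s0 = 10 matches one of the initial step sizes used there; the number of iterations shown is arbitrary.

```python
import math

# Sketch of the proposed exponential step schedule: step_k = ceil(s0 / a**k),
# with the estimated sparsity K growing by step_k each iteration.
s0, a = 10, 2.5

K, history = 1, []
for k in range(1, 6):
    step = math.ceil(s0 / a**k)   # large at first, decays to 1
    K += step
    history.append((k, step, K))

print(history)  # -> [(1, 4, 5), (2, 2, 7), (3, 1, 8), (4, 1, 9), (5, 1, 10)]
```

The first iterations take large jumps toward the true sparsity, and later iterations refine the estimate in unit steps, which is exactly the fast-then-careful behavior the text describes.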

The value of the nonlinear function is shown in Figure 1, where k is the number of iterations. From Figure 1, we can see that the function descends quickly for small k and slowly for large k. This makes the estimated sparsity move toward the real sparsity quickly at the beginning of the iteration process and slowly at the end. Therefore, the proposed method has faster convergence and lower reconstruction error. However, if a is too large, the step size will also be very large at the beginning of the iteration process, which can lead to overestimation of the sparsity and thus degrade the accuracy of the algorithm.

Secondly, we use the generalized orthogonal matching pursuit strategy to select atoms according to the correlation between the sensing matrix and the residual vector. It is expressed as

Λ_k = Max(|Aᵀ r_(k−1)|, S), (12)

where Λ_k is the projective set, S is the fixed number of atoms to select, A is the sensing matrix, and r_(k−1) is the residual. Compared with MAOMP, our proposed method fixes the number of selected atoms at each iteration, which reduces the algorithm complexity. Selecting more correlated atoms to extend the candidate atomic set improves the accuracy; however, if the number of selected atoms is too large, some atoms with low correlation are included and the accuracy decreases. How to choose a suitable S is discussed in the simulations.
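The selection rule can be sketched as follows; the 2 × 4 matrix and residual below are toy values chosen for illustration, not from the paper.

```python
# Sketch of the generalized selection rule: pick the S columns of the
# sensing matrix A most correlated with the current residual r.
A_cols = [(0.1, 0.0), (0.3, 0.3), (0.5, 0.0), (-0.1, 0.4)]  # columns of A (toy)
r = (1.0, 2.0)                                              # current residual
S = 2                                                       # atoms per iteration

# Correlation |A^T r| for every column.
corr = [abs(c[0] * r[0] + c[1] * r[1]) for c in A_cols]      # approx [0.1, 0.9, 0.5, 0.7]

# Indices of the S largest correlations form the projective set.
Lambda = sorted(range(len(corr)), key=lambda i: corr[i], reverse=True)[:S]
print(sorted(Lambda))  # -> [1, 3]
```

Picking several atoms per iteration (rather than one, as in OMP) is what lets the candidate set grow quickly; the backtracking prune step later removes any weakly correlated atoms that slipped in.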

The proposed algorithm begins to select atoms using the generalized orthogonal matching pursuit method, then updates the step size, and estimates sparsity using the new variable step method. The detailed steps of the proposed method are shown in Algorithm 2.

Input:
 Sensing matrix A
 Observation vector y
 Constant parameter a
 Initial step size s₀
 The number of atoms selected each time S
 Tolerance ε used to exit the loop
Initialize parameters:
 x̂ = 0 {initialize signal approximation}
 k = 1 {loop index}
 K = 1 {initial sparsity estimate}
 done = false {while loop flag}
 Λ = ∅ {empty preliminary index set}
 C = ∅ {empty candidate index set}
 F = ∅ {empty support index set}
 r = y {residual vector}
While (∼done)
(1) Compute the projective set Λ = Max(|Aᵀr|, S)
(2) Merge to update the candidate index set: C = F ∪ Λ
(3) Get the estimated signal value by the least squares algorithm: x̂_C = (A_Cᵀ A_C)⁻¹ A_Cᵀ y
(4) Prune to obtain the current support index set: F′ = Max(|x̂_C|, K)
(5) Update the signal estimate by least squares, x̂_(F′) = (A_(F′)ᵀ A_(F′))⁻¹ A_(F′)ᵀ y, and compute the
 residual error: r′ = y − A_(F′) x̂_(F′)
(6) Check the iteration condition:
If ‖r′‖₂ ≤ ε
   quit iteration
else if ‖r′‖₂ ≥ ‖r‖₂ then s_k = ⌈s₀ · a⁻ᵏ⌉, K = K + s_k, k = k + 1
else F = F′, r = r′, k = k + 1
end
End
Output: x̂ (K-sparse approximation of signal x)
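A compact runnable sketch of the proposed method, following our reading of Algorithm 2, is given below. The helper functions and the tiny 3 × 4 sensing matrix are ours, constructed for illustration; the least-squares steps are solved via the normal equations with a small Gaussian-elimination helper, which is adequate only for well-conditioned toy problems like this one.

```python
import math

def solve(G, b):
    """Solve G c = b by Gaussian elimination with partial pivoting."""
    n = len(G)
    G = [row[:] + [b[i]] for i, row in enumerate(G)]
    for col in range(n):
        piv = max(range(col, n), key=lambda rr: abs(G[rr][col]))
        G[col], G[piv] = G[piv], G[col]
        for rr in range(col + 1, n):
            f = G[rr][col] / G[col][col]
            for cc in range(col, n + 1):
                G[rr][cc] -= f * G[col][cc]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (G[i][n] - sum(G[i][j] * c[j] for j in range(i + 1, n))) / G[i][i]
    return c

def least_squares(cols, y):
    """Least-squares coefficients of y against the given columns."""
    G = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(u * v for u, v in zip(ci, y)) for ci in cols]
    return solve(G, rhs)

def proposed(A_cols, y, s0=1, a=2.5, S=2, eps=1e-8, max_iter=50):
    """One reading of Algorithm 2 on a small dense problem (sketch only)."""
    N = len(A_cols)
    K, k, F = 1, 1, []
    r = list(y)
    for _ in range(max_iter):
        # (1) select the S atoms most correlated with the residual
        corr = [abs(sum(c * v for c, v in zip(col, r))) for col in A_cols]
        Lam = sorted(range(N), key=lambda i: corr[i], reverse=True)[:S]
        C = sorted(set(F) | set(Lam))                          # (2) merge
        coef = least_squares([A_cols[i] for i in C], y)        # (3) LS on candidates
        F_new = sorted(sorted(C, key=lambda i: -abs(coef[C.index(i)]))[:K])  # (4) prune
        c_new = least_squares([A_cols[i] for i in F_new], y)   # (5) LS on support
        r_new = [yi - sum(c_new[j] * A_cols[F_new[j]][d] for j in range(len(F_new)))
                 for d, yi in enumerate(y)]
        if math.sqrt(sum(v * v for v in r_new)) <= eps:        # (6) converged
            F, r = F_new, r_new
            break
        if sum(v * v for v in r_new) >= sum(v * v for v in r):
            K += math.ceil(s0 / a**k)                          # enlarge sparsity, eq. (11)
        else:
            F, r = F_new, r_new                                # accept new support
        k += 1
    coef_final = least_squares([A_cols[i] for i in F], y) if F else []
    x = [0.0] * N
    for j, i in enumerate(F):
        x[i] = coef_final[j]
    return x

# Toy problem: y is generated by column index 1 alone, with weight 2.
A_cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0)]
y = (0.0, 2.0, 0.0)
print(proposed(A_cols, y))  # -> [0.0, 2.0, 0.0, 0.0]
```

On this hand-made example the candidate set {1, 3} is pruned back to the true support {1} in the first iteration and the residual drops to zero, illustrating how the backtracking step discards the weakly correlated atom.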

5. Results and Discussion

5.1. Parameters Selection

The selection of the parameter a and the number of selected atoms S directly affects the algorithm performance. In this section, the source signal is a Gaussian signal; the signal length, measurement value, and stop threshold ε are fixed across runs. We first set S to 25 and vary a to search for the optimal value. The relationship between a and the reconstruction probability is shown in Figure 2. From Figure 2(a), the reconstruction probability decreases as the sparsity level increases, and which value of a performs best depends on the sparsity range: the ranking differs between sparsity levels of 40–45 and levels above 50. Because a large step size leads to overestimation, the reconstruction probability drops rapidly for the largest tested a, while a value that is too small is also suboptimal. Based on this analysis, values of a between 2 and 3 give a higher reconstruction probability than the other values at larger sparsity levels, so we examine this range in Figure 2(b). From Figure 2(b), the proposed method with a = 2.5 gives the highest reconstruction probability. Thus, we select a = 2.5 for the experiments.

In the next experiment we fix a and vary the number of selected atoms S; the results are shown in Figures 3 and 4. In Figure 3 the initial step size is 5. As the sparsity level increases, the smallest tested S gives the lowest reconstruction probability, and which S is best depends on the sparsity range: the ranking differs between sparsity levels of 45–50 and levels above 55. In Figure 4 the initial step size is 10; again the smallest tested S gives the lowest reconstruction probability, and a single value of S is the highest across all sparsity levels. Comparing Figures 3 and 4, we conclude that when S is four times the initial step size, the reconstruction quality is best.

Based on the above analysis, we set the number of selected atoms S to four times the initial step size and the parameter a to 2.5 in the following experiments. The experimental conditions are as follows: the CPU is an Intel® Core™ i5-8300H at 2.30 GHz and the RAM size is 8 GB.

5.2. One-Dimensional Signal Reconstruction

In this section, we use a one-dimensional Gaussian source signal to test the reconstruction performance of the different methods (the SAMP algorithm, the MAOMP algorithm, and our proposed method). The stop threshold ε is the same as before, and the initial step size of all methods is set to 5 or 10. As shown in [19], MAOMP gives its best reconstruction when its two parameters are set to 0.2 and 0.6, so we use these values in the experiments. In our proposed algorithm, the parameter a is 2.5 and S is four times the initial step size. Figure 5 shows the reconstruction probability under different measurement values. When the measurement value increases, the reconstruction probability becomes higher, and our proposed method is significantly better than the other two algorithms. When the measurement value is 70, the proposed algorithm can already reconstruct the signal, while the reconstruction probability of the other two algorithms is still 0. Moreover, for every measurement value, our proposed algorithm has a higher reconstruction probability than the other two algorithms.

Next, we compare the reconstruction probability of the algorithms under different sparsity levels with the measurement value fixed; the other experimental conditions are the same as in the previous experiment. Figure 6 shows the reconstruction probability for different sparsity levels.

It can be seen from Figure 6 that the reconstruction probability decreases as the sparsity level increases. SAMP begins to decline first, while the other algorithms still maintain almost 100% reconstruction probability; as the sparsity grows further, the reconstruction probability of all algorithms begins to decline. In particular, at high sparsity levels the reconstruction probability of SAMP and MAOMP with initial step size 10 drops to 0. The reconstruction probability of our proposed method remains higher than that of the other two algorithms, and when the sparsity level is 70 the proposed method can still reconstruct the signal with probability 37.54%, while the other two algorithms have dropped to 0. This shows that the proposed method has a higher reconstruction probability under different sparsity levels. Comparing the one-dimensional results over different measurement values and sparsity levels, our proposed method has an obvious advantage in one-dimensional signal reconstruction.

5.3. Two-Dimensional Image Reconstruction

In this section, we use the 256 × 256 grayscale image Lena as the source signal to test the reconstruction performance of the different methods (the SAMP algorithm, the MAOMP algorithm, and our proposed method). The wavelet basis is used as the sparse basis to sparsify the image. The initial step sizes of the three algorithms are 1, 5, and 10, respectively. The stop threshold ε and the two MAOMP parameters (0.6 and 0.2) are the same as before, and in our proposed method the parameter a is 2.5 and S is four times the initial step size. Figure 7 shows the two-dimensional signal Lena reconstructed by our proposed method, the SAMP method, and the MAOMP method with sampling rate 0.6. We use the peak signal-to-noise ratio (PSNR) to measure the quality of image reconstruction:

PSNR = 10 · log₁₀(MAX² / MSE), (13)

where MSE = (1/(W · H)) Σ_(i,j) (f′(i, j) − f(i, j))² is the mean squared error, f′(i, j) is the reconstructed value at position (i, j) of the test image, f(i, j) is the original value at the same position, W and H are the image width and height, and MAX = 2ⁿ − 1 is the maximum pixel value. Since the grayscale range is 256 = 2⁸, each pixel is represented by 8 bits, n = 8, and MAX = 255. The larger the PSNR value, the better the image quality.
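The PSNR computation for 8-bit images can be sketched as follows; the two 2 × 2 "images" are toy values chosen so that the MSE is easy to verify by hand.

```python
import math

# Sketch of the PSNR formula for n-bit images (MAX = 2**n - 1; 255 for n = 8).
def psnr(original, reconstructed, n_bits=8):
    pixels = [(o, r) for row_o, row_r in zip(original, reconstructed)
              for o, r in zip(row_o, row_r)]
    mse = sum((o - r) ** 2 for o, r in pixels) / len(pixels)   # mean squared error
    max_val = 2 ** n_bits - 1
    return 10 * math.log10(max_val ** 2 / mse)

orig = [[100, 100], [100, 100]]
rec = [[100, 110], [100, 90]]
# MSE = (0 + 100 + 0 + 100) / 4 = 50, so PSNR = 10*log10(255**2 / 50) ~= 31.14 dB
print(round(psnr(orig, rec), 2))  # -> 31.14
```

Note that PSNR is undefined for a perfect reconstruction (MSE = 0), so implementations typically guard that case separately.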

Figures 7 to 9 show the original signal and the signals reconstructed by SAMP, MAOMP, and the proposed method with initial step sizes of 1, 5, and 10, respectively. It can be seen from Figures 7 and 8 that the PSNR of SAMP and MAOMP decreases as the initial step size grows: SAMP cannot reconstruct the signal when the initial step size is 5 or 10, and the PSNR of MAOMP is also low when the initial step size is 10. This is because a large initial step size causes the sparsity to be overestimated, which harms accuracy. In contrast, as shown in Figure 9, our proposed method uses (11) to make the step size gradually decrease, which prevents overestimation for large initial step sizes and reduces the error. Therefore, based on the analysis of Figures 7 to 9, the proposed method outperforms the SAMP and MAOMP algorithms in terms of error and stability for two-dimensional images.

As can be seen from Figures 7 to 9, SAMP and MAOMP give their best reconstructions when the initial step size is 1, so we select an initial step size of 1 in the following experiments. Figure 10 shows the PSNR for sampling rates of 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8; the test image is again Lena, and the other experimental conditions are the same as in the previous experiment. As can be seen from Figure 10, the PSNR grows as the sampling rate increases, and the PSNR of our proposed method is the highest of the three algorithms. This shows that the proposed method has a smaller error in reconstructing images.

Figure 11 shows the recovery times of the three algorithms for different sampling rates. When the sampling rate is 0.3, the three algorithms take almost the same time; as the sampling rate increases, the recovery time grows, but our proposed method still consumes less time than the other methods at every sampling rate. When the sampling rate is 0.8, the proposed method is 10.28 seconds faster than SAMP and 4.46 seconds faster than MAOMP. This shows that the proposed method converges faster than the other two algorithms. Based on the above analysis, the proposed method has a smaller error, better stability, and faster convergence than the SAMP and MAOMP algorithms.

6. Conclusions

In this paper, we proposed a generalized sparsity adaptive matching pursuit algorithm with a variable step size. The algorithm uses generalized atom selection to choose more atoms at the beginning of the iteration, and backtracking makes the signal estimate more accurate. For the step size, a nonlinear variable-step scheme is proposed: the step size is large at the beginning, which speeds up convergence, and is then reduced to 1, which makes the sparsity estimate more accurate, thereby improving the reconstruction accuracy and reducing the running time of the algorithm.

Simulation results demonstrate that our proposed method reconstructs better than the SAMP and MAOMP algorithms. For one-dimensional Gaussian signals, across different measurement values and sparsity levels, the reconstruction probability of our proposed method is the best, and the signal can be reconstructed even with few measurements or high sparsity. For two-dimensional images, our proposed method achieves better reconstruction quality as measured by PSNR and converges faster than MAOMP and SAMP. Moreover, as the initial step size increases, our proposed method can still reconstruct images with high quality. In a word, our proposed method outperforms similar algorithms in terms of convergence speed and reconstruction accuracy.

Data Availability

All the data used for the simulations can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61271115) and the Science and Technology Innovation and Entrepreneurship Talent Cultivation Program of Jilin (20190104124).