Research Article | Open Access

Xinhe Zhang, Yufeng Liu, Xin Wang, "A Sparsity Preestimated Adaptive Matching Pursuit Algorithm", Journal of Electrical and Computer Engineering, vol. 2021, Article ID 5598180, 8 pages, 2021. https://doi.org/10.1155/2021/5598180

A Sparsity Preestimated Adaptive Matching Pursuit Algorithm

Academic Editor: Yang Li
Received: 01 Feb 2021
Revised: 16 Apr 2021
Accepted: 21 Apr 2021
Published: 30 Apr 2021

Abstract

In the matching pursuit algorithms of compressed sensing, traditional reconstruction algorithms need to know the signal sparsity. The sparsity adaptive matching pursuit (SAMP) algorithm can adaptively approach the signal sparsity when it is unknown. However, the SAMP algorithm starts from zero and iterates many times with a fixed step size to approximate the true sparsity, which increases the runtime. To improve the run speed, a sparsity preestimated adaptive matching pursuit (SPAMP) algorithm is proposed in this paper. First, a sparsity preestimation strategy is used to estimate the sparsity; then the signal is reconstructed by the SAMP algorithm with the preestimated sparsity as the initial value of the iteration. Because reconstruction starts from the preestimated sparsity rather than from zero, the number of iterations is reduced and the run efficiency is greatly improved.

1. Introduction

The Nyquist sampling theorem specifies that, to avoid losing information when capturing a signal, one must sample at a rate at least twice the signal bandwidth. The traditional collect-then-compress process wastes a large amount of data and increases equipment cost. With the advent of the big-data era, it is difficult for hardware devices to meet the sampling requirements of some high-frequency signals. There is therefore an urgent need for methods that capture and represent signals at rates significantly below the Nyquist rate. Compressed sensing (CS) [1, 2] is a signal processing theory that has risen in recent years and departs from the Nyquist sampling theorem. It has attracted extensive attention from researchers in the industrial field, and it has been applied in different fields with gratifying results [3–7].

Most research on compressed sensing focuses on three directions: signal sparse representation, the measurement matrix, and the reconstruction algorithm. Signal sparse representation transforms a nonsparse signal into a sparse one by a transform such as the discrete cosine transform, the discrete Fourier transform, or the wavelet transform. The compressed signal is obtained by multiplying the sparse signal by the measurement matrix; the dimension of the compressed signal must be smaller than that of the sparse signal. A large number of studies have shown that the Gaussian matrix, the Bernoulli matrix, and the partial Hadamard matrix can be used as measurement matrices. The reconstruction algorithm recovers the sparse signal from the compressed signal and directly affects the quality of the reconstructed signal. In recent years, great progress has been made on reconstruction algorithms, but many of them have their own limitations. Because of their expensive computational cost, convex optimization algorithms [8, 9] are difficult to realize in practice. Greedy reconstruction algorithms mainly include orthogonal matching pursuit (OMP) [10, 11], regularized OMP (ROMP) [12], compressive sampling matching pursuit (CoSaMP) [13], subspace pursuit (SP) [14], generalized OMP (gOMP) [15], stagewise OMP (StOMP) [16], and sparsity adaptive matching pursuit (SAMP) [17]. Compared with convex optimization, greedy algorithms reconstruct faster. Among the greedy algorithms above, OMP, ROMP, CoSaMP, SP, and gOMP need to know the signal sparsity; if it is unknown, the signal can only be reconstructed by guessing the sparsity, and the guessed sparsity directly affects the quality of the reconstruction. StOMP and SAMP can reconstruct the original signal without knowing the sparsity.
The SAMP algorithm reconstructs the signal by setting a fixed step size and approximating the true sparsity through multiple iterations. Because it approaches the sparsity gradually from zero, it requires many iterations, extensive computation, and a long reconstruction time. Several improved SAMP algorithms were proposed in [18–20]. To accelerate reconstruction, we develop a sparsity preestimated adaptive matching pursuit (SPAMP) algorithm: first, a preestimated sparsity close to the true sparsity is obtained; then the SAMP algorithm is used to reconstruct the signal.

The primary contributions of this paper are twofold:
(1) We present a new sparsity preestimation method that estimates the sparsity quickly and accurately, using criteria for sparsity underestimation and overestimation.
(2) We develop an improved SAMP algorithm, termed SPAMP. In the first stage, the sparsity is estimated by the preestimation criteria; in the second stage, SAMP reconstructs the signal. Simulations show that the reconstruction performance of SPAMP is almost the same as that of SAMP, while its run speed is noticeably faster.

The rest of this paper is organized as follows. In Section 2, compressed sensing theory is introduced. Section 3 gives the criteria of sparsity underestimation and overestimation. Section 4 explains the proposed algorithm. Section 5 illustrates the simulation results. Finally, we conclude the paper with a summary in Section 6.

Notations. Boldface uppercase symbols denote matrices, while boldface lowercase letters represent vectors. ⟨·, ·⟩ represents the inner product of two vectors; ‖·‖ is the norm of a vector; |·| is the amplitude of a complex quantity or the cardinality of a set; (·)^T is the transpose of a vector or a matrix; ℝ denotes the field of real numbers; N(μ, σ²) denotes a Gaussian random variable with mean μ and variance σ².

2. Compressed Sensing

Let x be a real signal of length N, that is, x ∈ ℝ^N. If x has only K nonzero elements, we say that x is K-sparse. A measurement vector y is computed using an M × N matrix Φ; that is,

y = Φx, (1)

where the measurement vector y ∈ ℝ^M and the measurement matrix Φ ∈ ℝ^(M×N). The measurement matrix must allow the reconstruction of the length-N signal x from the M measurements (the measurement vector y). Since M < N, the problem appears ill-conditioned. However, if it is known that the signal x has only K nonzero entries, the problem can be solved provided that M ≥ K. The stable recovery procedure proposed in [21] shows that the measurement matrix Φ satisfies the restricted isometry property (RIP) of order K if there exists δ_K ∈ (0, 1) such that

(1 − δ_K)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_K)‖x‖₂² (2)

holds for all K-sparse x. The RIP ensures that each pair of columns of Φ is nearly orthogonal with high probability. Research shows that a measurement matrix whose entries are sampled from N(0, 1/M) is highly likely to satisfy the RIP [22, 23].
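As a quick illustration of the measurement model and the near-isometry that the RIP formalizes, the following NumPy sketch (our own illustration, not code from the paper) builds a K-sparse signal, measures it with a Gaussian matrix whose entries are drawn from N(0, 1/M), and checks that measurement roughly preserves the signal's energy:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 128, 10          # signal length, measurements, sparsity

# K-sparse signal: K random positions hold Gaussian values.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix with entries from N(0, 1/M),
# which satisfies the RIP with high probability.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

y = Phi @ x                      # compressed measurement, y in R^M

# Empirical near-isometry check: ||Phi x||^2 should be close to ||x||^2.
ratio = np.linalg.norm(y) ** 2 / np.linalg.norm(x) ** 2
print(M, N, K, round(ratio, 2))
```

For these dimensions the energy ratio concentrates near 1, which is exactly what inequality (2) demands for small δ_K.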

The reconstruction of the original signal x from the observation signal y can be formulated as the smallest-ℓ₀-norm problem

min ‖x‖₀ s.t. y = Φx. (3)

Unfortunately, solving (3) is an NP-hard problem, requiring an exhaustive enumeration of all possible locations of the nonzero entries in x. Considering the influence of noise, the optimization problem (3) further becomes

min ‖x‖₀ s.t. ‖y − Φx‖₂ ≤ ε, (4)

where ε is a small tolerance. This relaxation obtains faster speed at the expense of reconstruction precision.

ℓ₀-norm optimization is equivalent to ℓ₁-norm optimization under the RIP condition; that is,

min ‖x‖₁ s.t. y = Φx. (5)

ℓ₁-norm optimization can be solved with standard convex optimization methods. This approach has good reconstruction precision but high time complexity.
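To make the ℓ₁ relaxation (5) concrete, the sketch below (ours, assuming SciPy is available) recasts basis pursuit as a linear program by splitting x = u − v with u, v ≥ 0, so that ‖x‖₁ = Σ(u + v):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M, K = 80, 40, 4

# K-sparse ground truth and Gaussian measurements.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))
y = Phi @ x

# Basis pursuit: min ||x||_1 s.t. Phi x = y, as an LP over [u; v] >= 0.
c = np.ones(2 * N)                 # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])      # Phi (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_hat - x))
```

In this regime (M/N = 0.5, K much smaller than M) the LP typically recovers x exactly, illustrating the ℓ₀/ℓ₁ equivalence; the price is a much larger optimization problem than the greedy methods discussed above.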

3. Criteria of Sparsity Preestimation

The method of sparsity underestimation for a specific atom set given in [24] provides a starting point. To estimate the sparsity quickly and accurately, criteria for sparsity underestimation and overestimation are given in this section.

Notations. K̂ and K denote the preestimated sparsity and the true sparsity, respectively. u = |Φ^T y| denotes the (entrywise) amplitude of the inner product of the measurement matrix Φ and the measurement vector y. u(i) denotes the i-th entry in vector u. Set Γ consists of the indexes of the K̂ largest elements in u, and |Γ| = K̂. The real support set of the signal is F, and |F| = K.
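The quantities u and Γ defined above take only a couple of lines to compute; this fragment (our illustration, using the notation of this section) is the basic primitive the preestimation criteria operate on:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K_hat = 64, 32, 6          # K_hat: a candidate preestimated sparsity

Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
y = rng.standard_normal(M)

u = np.abs(Phi.T @ y)            # u(i) = |<i-th column of Phi, y>|
Gamma = np.argsort(u)[-K_hat:]   # indexes of the K_hat largest entries of u
print(sorted(Gamma.tolist()))
```

The underestimation and overestimation tests below are then evaluated on the restriction of Φ to the columns indexed by Γ.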

3.1. Criterion of Sparsity Underestimation

Proposition 1. Suppose that Φ satisfies the RIP with δ_K ∈ (0, 1). If K̂ ≥ K, then inequality (9) holds.

Proof. Suppose that the estimated sparsity K̂ is greater than or equal to the true sparsity K; that is, K̂ ≥ K. This implies |Γ| ≥ |F|. Since Γ collects the K̂ largest entries of u, we obtain (6). Based on the RIP, the singular values of Φ_Γ lie in [√(1 − δ_K), √(1 + δ_K)]; letting λ denote an eigenvalue of Φ_Γ^T Φ_Γ, we have 1 − δ_K ≤ λ ≤ 1 + δ_K, and thus we can obtain (7). Since y = Φx, this yields (8). From (6)–(8), we can obtain (9). The proof is completed.
Taking the contrapositive of Proposition 1, we obtain the criterion of sparsity underestimation.

Suppose that Φ satisfies the RIP with δ_K ∈ (0, 1). If inequality (9) does not hold, then the estimated sparsity is less than the true sparsity; that is, K̂ < K.

3.2. Criterion of Sparsity Overestimation

Proposition 2. Suppose that Φ satisfies the RIP with δ_K ∈ (0, 1). If K̂ ≤ K, then inequality (14) holds.

Proof. Suppose that the estimated sparsity K̂ is less than or equal to the true sparsity K; that is, K̂ ≤ K. This implies |Γ| ≤ |F|. Since Γ collects the K̂ largest entries of u, we can conclude that (10) holds. Based on the definition of the RIP, the eigenvalues of Φ_Γ^T Φ_Γ satisfy 1 − δ_K ≤ λ ≤ 1 + δ_K, and we obtain (11). Since y = Φx, we have (12). Combining (10)–(12), we get (13). Therefore, (14) holds. The proof is completed.
Taking the contrapositive of Proposition 2, we obtain the criterion of sparsity overestimation.

Suppose that Φ satisfies the RIP with δ_K ∈ (0, 1). If inequality (14) does not hold, then K̂ > K.

In the following, the sparsity is estimated using the criteria of sparsity underestimation and overestimation. This prevents the preestimated sparsity from being too small, or from being so large that it exceeds the true sparsity.

4. The Proposed Algorithm

In this section, the implementation of the sparsity preestimated adaptive matching pursuit (SPAMP) algorithm is given. Figure 1 shows the flowchart of the SPAMP algorithm. The implementation steps of Algorithm 1 are as follows.

Input: M × N measurement matrix Φ, M-dimensional measurement vector y, weak matching parameter, estimation factor, step size S.
Output: the reconstructed signal x̂.
Initialization: residual r = y, initial value of the preestimated sparsity K̂, iteration counter t = 1, the lower bound and the upper bound of the preestimated sparsity.
 Stage 1: preestimate the sparsity.
 Step 1: Compute the amplitude of the inner product of the residual r and the measurement matrix Φ, that is, u = |Φ^T r|. The i-th entry of vector u is denoted by u(i), i = 1, 2, ..., N.
 Step 2: Initialize the preestimated sparsity K̂.
 Step 3: The criterion of sparsity underestimation is used to estimate the sparsity. Judge whether the underestimation condition is satisfied. If it is satisfied, update the sparsity bounds and go to Step 4. Otherwise, increase K̂ and repeat Step 2.
 Step 4: The criterion of sparsity overestimation is used to estimate the sparsity. Update K̂. Judge whether the overestimation condition is satisfied. If it is satisfied, take K̂ as the preestimated sparsity and go to Stage 2. Otherwise, update the upper bound of the sparsity and repeat Step 4 until the condition is satisfied.
 Stage 2: The signal is reconstructed by the SAMP algorithm.
Initialization: the residual r_0 = y, the index set and the atom set are empty, the finalist set F_0 = ∅, where ∅ denotes the empty set, the iteration counter t = 1, sparsity initial value L = K̂.
 Step 5: Compute u = |Φ^T r_{t−1}| and select the L indices corresponding to the largest absolute values in u to form the set S_t. Augment the index set C_t = F_{t−1} ∪ S_t. Φ_{C_t} is the submatrix of Φ with the indices in set C_t.
 Step 6: If the norm of the residual r_{t−1} is already below the halting threshold, return x̂; otherwise go to Step 7.
 Step 7: Solve a least squares problem to obtain a new estimate: x_{C_t} = (Φ_{C_t}^T Φ_{C_t})^{−1} Φ_{C_t}^T y.
 Step 8: Select the L entries with the largest absolute values in x_{C_t}; the corresponding submatrix of Φ is Φ_L, the corresponding index set is Λ_L, and update the finalist set F_t = Λ_L.
 Step 9: Increment t and calculate the new residual r_t = y − Φ_L (Φ_L^T Φ_L)^{−1} Φ_L^T y.
 Step 10: If ‖r_t‖ is below the halting threshold, go to Step 13; otherwise go to Step 11.
 Step 11: If ‖r_t‖ ≥ ‖r_{t−1}‖, then L = L + S and go to Step 5; otherwise go to Step 12.
 Step 12: Update the finalist set and the residual, and go to Step 5.
 Step 13: Return x̂ based on the nonzero values at the positions in the finalist set.
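The Stage 2 loop can be sketched compactly in NumPy. The following is a simplified rendering of SAMP (variable names, the tolerance, and the iteration cap are ours, not the paper's); passing the preestimated sparsity as the initial finalist size `L0` is what distinguishes SPAMP's second stage from plain SAMP, which starts from `L0 = step`:

```python
import numpy as np

def samp(Phi, y, step, L0=None, tol=1e-6, max_iter=200):
    """Simplified SAMP loop. L0 is the initial finalist size
    (the preestimated sparsity in SPAMP, or `step` in plain SAMP)."""
    M, N = Phi.shape
    L = step if L0 is None else L0
    F = np.array([], dtype=int)                   # finalist (support) set
    r = y.copy()
    for _ in range(max_iter):
        u = np.abs(Phi.T @ r)
        Sk = np.argsort(u)[-L:]                   # preliminary atoms
        C = np.union1d(F, Sk).astype(int)         # candidate set
        xC, *_ = np.linalg.lstsq(Phi[:, C], y, rcond=None)
        Fnew = C[np.argsort(np.abs(xC))[-L:]]     # keep the L largest entries
        xF, *_ = np.linalg.lstsq(Phi[:, Fnew], y, rcond=None)
        r_new = y - Phi[:, Fnew] @ xF
        if np.linalg.norm(r_new) < tol:           # halting condition met
            F, xF_final = Fnew, xF
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += step                             # stage switch: grow support size
        else:
            F, r = Fnew, r_new                    # refine within the current stage
    else:
        F, xF_final = Fnew, xF
    x_hat = np.zeros(N)
    x_hat[F] = xF_final
    return x_hat

# Usage: recover a 5-sparse signal from 64 Gaussian measurements.
rng = np.random.default_rng(3)
N, M, K = 128, 64, 5
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))
x_hat = samp(Phi, Phi @ x, step=2)
print("recovery error:", np.linalg.norm(x_hat - x))
```

Calling `samp(Phi, y, step, L0=K_pre)` with a preestimate `K_pre` close to the true sparsity skips the early stages entirely, which is the source of SPAMP's speedup.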

The computational complexity of SAMP and SPAMP is O(t1·MN) and O(t2·MN), respectively, where M is the observation dimension, N is the length of the sparse signal, and t1 and t2 are the numbers of iterations of the SAMP and SPAMP algorithms. Owing to the sparsity preestimation in SPAMP, t2 is less than t1, so the computational complexity of SPAMP is lower than that of SAMP.

5. Simulation Experiments

To verify the effectiveness of the proposed algorithm, a series of simulation experiments is carried out. In the following experiments, a K-sparse random signal is adopted. The experimental environment is an i7-6700U CPU with a main frequency of 3.40 GHz and 16 GB of memory.

5.1. Parameter Selection Experiments

In the first stage of the SPAMP algorithm, the sparsity is preestimated. The performance of the sparsity preestimation depends on two parameters: the weak matching parameter and the estimation factor. The estimation factor determines the accuracy of the sparsity estimation, while the weak matching parameter, which appears in the matching function, determines the speed of the sparsity estimation. Both parameters are chosen through simulation experiments.

A one-dimensional 35-sparse random signal with a length of 256 is generated. The measurement matrix is a Gaussian matrix. The measurement vector is computed by y = Φx. Firstly, the estimation factor is fixed and the weak matching parameter is varied. The simulation curve of the influence of the weak matching parameter on the sparsity estimation is shown in Figure 2.

It can be seen from Figure 2 that the simulation results are the same for all tested values of the weak matching parameter. This shows that changing the weak matching parameter does not affect the result of the sparsity preestimation.

To verify the influence of the estimation factor on the sparsity preestimation, the weak matching parameter is fixed at 0.5 and the estimation factor is varied. The simulation curves of the influence of different estimation factors on the sparsity estimation are shown in Figure 3.

As can be seen from Figure 3, the estimation factor has a great influence on the sparsity preestimation. As the estimation factor increases, the preestimated sparsity increases. When the estimation factor equals 0.2, the preestimated sparsity exceeds the true sparsity of the signal. Overestimated sparsity lowers the recovery quality of the second stage and increases the recovery time. To ensure that the preestimated sparsity does not exceed the true sparsity, the estimation factor is set to 0.15. In the following simulation experiments, the weak matching parameter is 0.5 and the estimation factor is 0.15.

5.2. One-Dimensional Signal Experiments

In this subsection, we first compare the effects of sparsity and observation dimension on the reconstruction probability and then compare the reconstruction time of each algorithm.

First, the observation dimension M is fixed and the sparsity K is varied, and the reconstruction probability is compared. The step sizes S of SAMP and SPAMP are 3, 5, and 7, respectively. The length of the random signal is 256, and the observation dimension is at most 160. The measurement matrix is an M × N Gaussian matrix. The observation dimension is set to 128 and 160, respectively. When the error between the reconstructed signal and the original signal is below a small threshold, the signal is considered to be reconstructed successfully. The reconstruction probability is the ratio of the number of successful reconstructions to the number of experiments. The simulation results are shown in Figures 4 and 5.
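The success criterion and reconstruction-probability estimate described above amount to a short Monte Carlo experiment. The sketch below (our illustration; thresholds and trial counts are arbitrary choices) uses OMP, one of the known-sparsity baselines, as the recovery routine:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit with known sparsity K."""
    M, N = Phi.shape
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))   # best-matching atom
        xs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ xs                        # orthogonalized residual
    x_hat = np.zeros(N)
    x_hat[support] = xs
    return x_hat

def reconstruction_probability(N, M, K, trials=50, tol=1e-6, seed=0):
    """Fraction of random instances in which the signal is recovered."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))
        x_hat = omp(Phi, Phi @ x, K)
        hits += np.linalg.norm(x_hat - x) < tol             # success test
    return hits / trials

print(reconstruction_probability(N=256, M=128, K=10))
```

Sweeping K (with M fixed) or M (with K fixed) in this loop reproduces the kind of curves plotted in Figures 4–7.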

It can be seen from Figures 4 and 5 that, as the signal sparsity increases, the reconstruction probability of the traditional algorithms with known sparsity decreases rapidly. When the sparsity exceeds 40, the traditional algorithms cannot reconstruct the signal accurately, whereas the SAMP and SPAMP algorithms can still reconstruct the signal with high probability. The reconstruction probability of the SPAMP algorithm is almost the same as that of the SAMP algorithm. Both have obvious advantages over the traditional algorithms, which need to know the sparsity.

Next, the sparsity K is fixed and the observation dimension M is varied, and the reconstruction probability is compared. The step sizes S of SAMP and SPAMP are 3, 5, and 7, respectively. The measurement matrix is an M × N Gaussian matrix. The sparsity is set to 30 and 40, respectively. The simulation curves are shown in Figures 6 and 7.

It can be seen from Figures 6 and 7 that, when the sparsity is fixed, the curves of SAMP and SPAMP rise more quickly with increasing observation dimension, while the curves of the traditional algorithms that need to know the sparsity rise more slowly. This shows that the traditional algorithms impose stricter requirements on the sparsity of the signal; when the sparsity does not meet the requirements, the compressed signal cannot be reconstructed. The SAMP and SPAMP algorithms, by contrast, have relatively loose sparsity requirements.

When the signal length is 256 and the observation dimension is 128, the simulation curve of average runtime versus sparsity is shown in Figure 8.

We can conclude from Figure 8 that the runtime of the SPAMP algorithm is shorter than that of the SAMP algorithm, but longer than those of the traditional algorithms, which need to know the sparsity in advance. In the SPAMP algorithm, the sparsity preestimation takes a certain amount of time, after which the SAMP algorithm is used to reconstruct the signal. The fact that SPAMP still runs faster than SAMP indicates that the preestimated sparsity effectively reduces the number of SAMP iterations.

For the reconstruction of one-dimensional signals, we can draw the following conclusions from the above experiments:
(1) With the same step size, the reconstruction probability of the proposed SPAMP algorithm is almost the same as that of the SAMP algorithm, while its run speed is obviously faster. In the SPAMP algorithm, the sparsity is preestimated and the signal is then reconstructed by SAMP with a fixed step size, so the reconstruction quality is not significantly improved; only the run speed improves.
(2) We notice that the run speeds of the proposed SPAMP and SAMP algorithms are lower than those of the OMP, ROMP, SP, CoSaMP, and gOMP algorithms. The main reason is that SPAMP and SAMP reconstruct the signal gradually in a tentative way: if the step size is too large, the reconstruction probability decreases; if it is too small, the reconstruction time increases. Meanwhile, the other algorithms know the signal sparsity in advance, so SPAMP and SAMP have no obvious advantage in reconstruction time.

5.3. Two-Dimensional Image Experiments

To verify the effectiveness of the proposed SPAMP algorithm in two-dimensional image reconstruction, simulation experiments are carried out at different compression ratios. The Lena image is used in this experiment. Because the image is a nonsparse signal, a wavelet transform matrix is used to obtain the sparse signal. Peak signal-to-noise ratio (PSNR) and reconstruction time are used to evaluate performance. The PSNR (dB) and reconstruction time (s) of the different algorithms are shown in Table 1.
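PSNR compares the reconstructed image with the original through the mean squared error; a minimal implementation for 8-bit images (our illustration, matching the dB values reported in Table 1) is:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(reconstructed, dtype=float)
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Usage: a constant error of 10 gray levels gives MSE = 100.
img = np.full((8, 8), 128.0)
noisy = img + 10.0
print(round(psnr(img, noisy), 2))   # 10*log10(255^2 / 100) ≈ 28.13
```

Higher PSNR means a reconstruction closer to the original; differences of a few tenths of a dB, as between SAMP and SPAMP in Table 1, are small but consistent.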


Algorithm      | M/N = 0.5            | M/N = 0.6
               | PSNR (dB) | Time (s) | PSNR (dB) | Time (s)
OMP            | 26.33     | 0.57     | 27.85     | 0.62
SP             | 27.18     | 2.65     | 28.77     | 2.78
CoSaMP         | 26.65     | 3.73     | 28.80     | 3.94
ROMP           | 26.38     | 0.23     | 28.35     | 0.25
gOMP           | 26.70     | 2.59     | 29.26     | 3.86
StOMP          | 26.90     | 0.38     | 29.04     | 0.46
SAMP (S = 1)   | 27.00     | 8.77     | 28.66     | 10.61
SPAMP (S = 1)  | 27.14     | 4.76     | 28.93     | 4.77
SAMP (S = 3)   | 27.04     | 3.24     | 28.73     | 3.88
SPAMP (S = 3)  | 27.22     | 2.26     | 28.92     | 2.34
SAMP (S = 5)   | 27.09     | 2.14     | 28.78     | 2.51
SPAMP (S = 5)  | 27.25     | 1.69     | 28.94     | 1.85

As can be seen from Table 1, as the compression ratio increases, the PSNR of every algorithm also increases. When SPAMP and SAMP adopt the same step size and compression ratio, the PSNR of the proposed SPAMP algorithm is slightly higher than that of SAMP, and SPAMP effectively reduces the reconstruction time. The experimental results show that the proposed SPAMP algorithm reduces the reconstruction time while slightly improving the reconstruction quality.

For the reconstruction of two-dimensional image signals, we can draw the following conclusions:
(1) In PSNR, the proposed SPAMP algorithm is better than the OMP, SP, and ROMP algorithms and slightly better than the SAMP algorithm. In reconstruction time, SPAMP is obviously faster than SAMP. In the first stage of SPAMP, the sparsity is preestimated; in the second stage, the signal is reconstructed by SAMP. Since the step size of SPAMP is the same as that of SAMP, its reconstruction quality is limited by that of SAMP.
(2) As the compression ratio increases, the PSNR and reconstruction time of all algorithms increase. A larger compression ratio, that is, a larger observation dimension, inevitably improves the reconstruction quality and lengthens the reconstruction time.

6. Conclusion

An improved sparsity adaptive matching pursuit algorithm was proposed in this paper. In the first stage, the SPAMP algorithm preestimates the sparsity; in the second stage, the SAMP algorithm is adopted to reconstruct the signal. Comparisons on one-dimensional signals and two-dimensional images show that the SPAMP algorithm has almost the same reconstruction quality as the SAMP algorithm while greatly reducing the reconstruction time, and its reconstruction quality is better than those of the OMP, ROMP, and SP algorithms. Since the SPAMP algorithm can reconstruct the signal without knowing the sparsity, it is more practical than OMP, ROMP, and other sparsity-dependent algorithms.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Doctoral Scientific Research Foundation of Liaoning Province (Grant no. 2020-BS-225) and the Excellent Talents Training Program of the University of Science and Technology Liaoning (Grant no. 2017RC10).

References

  1. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge University Press, Cambridge, UK, 2012.
  2. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  3. L. Li and D. Li, “Sparse array SAR 3D image for continuous scene based on compressed sensing,” Journal of Electronics & Information Technology, vol. 36, no. 9, pp. 2166–2172, 2014.
  4. S. Li, K. Liu, F. Zhang, L. Xiao, and D. Han, “Infrared remote video staring imagery based on compressed sensing online sparse,” Acta Electronica Sinica, vol. 43, no. 3, pp. 518–522, 2015.
  5. S. Li, J. Yang, W. Chen, and X. Ma, “Overview of radar imaging technique and application based on compressive sensing theory,” Journal of Electronics & Information Technology, vol. 38, no. 2, pp. 495–508, 2016.
  6. C. A. Rogers and D. C. Popescu, “Compressed sensing MIMO radar system for extended target detection,” IEEE Systems Journal, vol. 15, no. 1, pp. 1381–1389, 2021.
  7. H. Wang, W. Zhang, X. An, and Y. Liu, “Sparsity adaptive compressed sensing and reconstruction architecture based on Reed-Solomon codes,” IEEE Communications Letters, vol. 25, no. 3, pp. 716–720, 2021.
  8. I. W. Selesnick and I. Bayram, “Sparse signal estimation by maximally sparse convex optimization,” IEEE Transactions on Signal Processing, vol. 62, no. 5, pp. 1078–1092, 2014.
  9. R. G. Baraniuk, E. Candes, M. Elad, and Y. Ma, “Applications of sparse representation and compressive sensing [scanning the issue],” Proceedings of the IEEE, vol. 98, no. 6, pp. 906–909, 2010.
  10. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
  11. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, pp. 40–44, Pacific Grove, CA, USA, November 1993.
  12. D. Needell and R. Vershynin, “Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit,” Foundations of Computational Mathematics, vol. 9, no. 3, pp. 317–334, 2009.
  13. D. Needell and J. A. Tropp, “CoSaMP: iterative signal recovery from incomplete and inaccurate samples,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
  14. W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
  15. J. Wang, S. Kwon, and B. Shim, “Generalized orthogonal matching pursuit,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6202–6216, 2012.
  16. D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, “Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1094–1121, 2012.
  17. T. T. Do, L. Gan, N. Nguyen, and T. D. Tran, “Sparsity adaptive matching pursuit algorithm for practical compressed sensing,” in Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, pp. 581–587, Pacific Grove, CA, USA, October 2008.
  18. R. Shoitan, Z. Nossair, I. I. Ibrahim, and A. Tobal, “Improving the reconstruction efficiency of sparsity adaptive matching pursuit based on the Wilkinson matrix,” Frontiers of Information Technology & Electronic Engineering, vol. 19, no. 4, pp. 503–512, 2018.
  19. X. Bi, Z. Shang, Z. Qiang, and H. Liu, “Improvement of sparsity adaptive matching pursuit based on variable iteration steps,” Journal of System Simulation, vol. 26, no. 9, pp. 2116–2121, 2014.
  20. R. Yang and X. Zhang, “An improved algorithm based on sparsity adaptive matching pursuit,” Journal of Nanjing University (Natural Science), vol. 54, no. 3, pp. 538–542, 2018.
  21. S. Foucart, “Hard thresholding pursuit: an algorithm for compressive sensing,” SIAM Journal on Numerical Analysis, vol. 49, no. 6, pp. 2543–2563, 2011.
  22. C.-M. Yu, S.-H. Hsieh, H.-W. Liang et al., “Compressed sensing detector design for space shift keying in MIMO systems,” IEEE Communications Letters, vol. 16, no. 10, pp. 1556–1559, 2012.
  23. E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
  24. C. Yang, W. Feng, H. Feng, T. Yang, and H. Bo, “A sparsity adaptive subspace pursuit algorithm for compressive sampling,” Acta Electronica Sinica, vol. 38, no. 8, pp. 1914–1917, 2010.

Copyright © 2021 Xinhe Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
