Research Article | Open Access
Lele Qu, Shimiao An, Yanpeng Sun, "Cross Validation Based Distributed Greedy Sparse Recovery for Multiview Through-the-Wall Radar Imaging", International Journal of Antennas and Propagation, vol. 2019, Article ID 5651602, 9 pages, 2019. https://doi.org/10.1155/2019/5651602
Cross Validation Based Distributed Greedy Sparse Recovery for Multiview Through-the-Wall Radar Imaging
Multiview through-the-wall radar imaging (TWRI) can improve imaging quality and target detection by exploiting measurement data acquired from various views. Based on the established joint sparsity signal model for multiview TWRI, a cross validation (CV) based distributed greedy sparse recovery algorithm, which combines the strengths of the CV technique and the censored simultaneous orthogonal matching pursuit (CSOMP) algorithm, is proposed in this paper. The developed imaging algorithm, named CV-CSOMP, separates the total measurements into reconstruction measurements and CV measurements, and achieves accurate image reconstruction and estimation of the recovery error tolerance through iterative CSOMP calculation. The proposed CV-CSOMP imaging algorithm not only reduces the communication costs among radar units but also provides desirable imaging performance without prior information such as the sparsity or noise level. The experimental results verify the validity and effectiveness of the proposed imaging algorithm.
1. Introduction

Through-the-wall radar imaging (TWRI) is a promising technique to detect, localize, and identify objects behind walls [1–3]. TWRI technology has a wide range of military and civilian applications, such as urban combat, law enforcement, and earthquake and avalanche rescue missions. Due to the shadowing effect and the limited scattering range of targets behind walls, the target echoes measured from a single view have relatively weak energy, which may lead to inferior imaging quality. In contrast to the single-view TWRI measurement configuration, multiview TWRI can provide more informative and higher-quality imaging results by enhancing the target intensity and reducing false alarms [4–8].
In order to achieve high imaging resolution in TWRI, both a long observation aperture and an ultrawideband illumination signal are required, which leads to a large amount of data to be acquired, stored, and processed. Recently, compressive sensing (CS) and sparse reconstruction techniques have been applied to TWRI to increase the speed of data acquisition and provide enhanced imaging results [9–15]. From the viewpoint of CS, the multiview images can be described by the joint sparsity model. It is demonstrated that CS-based multiview image formation can enhance the quality of the reconstructed image compared to single-view TWRI using CS [16]. A multiview TWRI formation algorithm based on a joint Bayesian sparse recovery framework is proposed to enhance imaging quality in strong wall clutter environments [17]. A hybrid matching pursuit algorithm is developed to deal with multiview TWRI in a distributed manner [18]. The CS technique is applied to multiview TWRI reconstruction with consideration of the multiple reflections off the targets in conjunction with the surrounding walls [19].
Nevertheless, the existing CS-based methods for multiview TWRI require the data measurements from the radar unit at each view to be collected at a centralized location for processing. Due to the limitation of communication links and noncooperative environments, it may not be feasible to accumulate the measurement data at one centralized location. A distributed greedy signal recovery algorithm called censored simultaneous orthogonal matching pursuit (CSOMP) [20], which avoids centralized data accumulation for multiview TWRI, has been proposed and has the advantage of reducing the communication costs among the radar units. However, this imaging approach requires prior information such as the sparsity or noise level for accurate image reconstruction. Such information is generally not available in practical multiview TWRI scenarios. To solve this problem, this paper proposes a distributed signal recovery algorithm for multiview TWRI reconstruction which combines the advantages of the CSOMP algorithm and the cross validation (CV) technique [21, 22]. The proposed imaging approach, called the CV-CSOMP algorithm, divides the total measurements into reconstruction measurements and CV measurements. The former are used to reconstruct the signal by the CSOMP algorithm, and the latter are used to compute the CV residual. Accurate image reconstruction and estimation of the recovery error tolerance are simultaneously achieved by the iterative CSOMP calculation in the proposed imaging algorithm. The proposed imaging approach can provide accurate image reconstruction results without prior knowledge such as the sparsity or noise level while reducing the communication costs among the radar units at the various views.
The remainder of this paper is organized as follows. In Section 2, the signal model of multiview TWRI is introduced. In Section 3, the CV-CSOMP algorithm is described in detail. In Section 4, the effectiveness of the proposed algorithm is evaluated using simulated and measured multiview TWRI data. Finally, Section 5 gives concluding remarks.
2. Signal Model
Consider $Q$ radar units, placed at known positions either along the front wall or surrounding the building being imaged. For the radar unit at each view, the transceiver moves in a fixed step along the direction parallel to the wall, resulting in $M$ antenna positions. At each transceiver position, a stepped-frequency signal of $N$ frequencies equispaced over the desired bandwidth is used to illuminate the scene. Assuming the observed scene contains $P$ point targets, the signal received at the $m$-th transceiver position and the $n$-th frequency from the $q$-th view can be expressed as

$$y_q(m, n) = \sum_{p=1}^{P} \sigma_{p,q} \exp\left(-j 2\pi f_n \tau_{m,p,q}\right) + v_q(m, n), \qquad (1)$$

where $\sigma_{p,q}$ is the complex reflectivity of the $p$-th target corresponding to the $q$-th view, $f_n = f_0 + n \Delta f$ is the $n$-th working frequency, $f_0$ is the starting frequency, $\Delta f$ is the uniform frequency step, $\tau_{m,p,q}$ represents the two-way traveling time between the $m$-th transceiver position and the $p$-th target corresponding to the $q$-th view, and $v_q(m, n)$ is the measurement noise. Given perfect knowledge of the surrounding wall parameters such as thickness and permittivity, $\tau_{m,p,q}$ can be calculated using Snell's law [23, 24]. The target reflectivities are assumed constant and independent of frequency. It is worth mentioning that the strong wall reflection signal and the multipath propagation signal are omitted in the signal model of (1). The wall reflection signal can be suppressed by a suitable wall clutter mitigation technique such as background subtraction [25], spatial filtering [26], or subspace projection [27] when the full data volume is available. The ghosts introduced by multipath propagation can be eliminated by the methods developed in [28, 29].
Assume that the observed scene is divided into a finite number $G$ of pixels in crossrange and downrange; an equivalent matrix-vector representation of the signal model in (1) can be obtained as

$$\mathbf{y}_q = \boldsymbol{\Psi}_q \mathbf{s}_q + \mathbf{v}_q, \qquad (2)$$

where $\mathbf{y}_q \in \mathbb{C}^{MN}$ denotes the measurement vector obtained by stacking the signals from all the transceiver locations, $\mathbf{s}_q \in \mathbb{C}^{G}$ is the scene reflectivity vector associated with the $q$-th view, and $\mathbf{v}_q$ is the additive noise vector. $\boldsymbol{\Psi}_q$ is the sensing matrix whose $(i, g)$-th element is given by

$$\left[\boldsymbol{\Psi}_q\right]_{i,g} = \exp\left(-j 2\pi f_n \tau_{m,g,q}\right), \qquad (3)$$

where $i = (m-1)N + n$, $m = 1, \ldots, M$, $n = 1, \ldots, N$, $g = 1, \ldots, G$, and $\tau_{m,g,q}$ is the round-trip propagation time between the $m$-th transceiver position and the $g$-th pixel associated with the $q$-th view.
In many TWRI scenarios, the number of point targets is typically much smaller than the number of scene pixels, and thus the scene reflectivity vector $\mathbf{s}_q$ is sparse for each view. To reduce the amount of data in multiview TWRI, consider an undersampled measurement vector $\tilde{\mathbf{y}}_q$, which is a vector of length $K \ll MN$ composed of elements selected from $\mathbf{y}_q$ as follows:

$$\tilde{\mathbf{y}}_q = \boldsymbol{\Phi}_q \mathbf{y}_q = \mathbf{A}_q \mathbf{s}_q + \tilde{\mathbf{v}}_q, \qquad (4)$$

where $\mathbf{A}_q = \boldsymbol{\Phi}_q \boldsymbol{\Psi}_q$ is the dictionary matrix and $\boldsymbol{\Phi}_q$ is the measurement matrix constructed by randomly selecting $K$ rows of the $MN \times MN$ identity matrix. It is noted that the wall reflection signal can also be removed by the techniques proposed in [12, 14] when only a reduced set of measurements is available.
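As a concrete illustration of this measurement model, the following Python sketch builds a per-view dictionary matrix and a random row selector. It assumes free-space propagation delays, so the Snell's-law wall refraction correction described above is omitted for brevity; all function names and dimensions are illustrative rather than taken from the paper.

```python
import numpy as np

def dictionary_matrix(ant_pos, freqs, pixels, c=3e8):
    """Build the (M*N) x G dictionary of exp(-j*2*pi*f_n*tau) entries,
    stacking frequencies within each antenna position (free-space delays)."""
    rows = []
    for xm in ant_pos:                              # each antenna location (x, y)
        d = np.linalg.norm(pixels - xm, axis=1)     # distance to each pixel
        tau = 2.0 * d / c                           # two-way free-space delay
        for f in freqs:
            rows.append(np.exp(-2j * np.pi * f * tau))
    return np.asarray(rows)                         # shape (M*N, G)

def random_row_selector(full_rows, kept_rows, rng):
    """Undersampling: randomly keep `kept_rows` row indices of the identity."""
    idx = rng.choice(full_rows, size=kept_rows, replace=False)
    return np.sort(idx)
```

Applying the row selector to the full dictionary and stacked measurement vector yields the reduced system of (4).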
Because the scene reflectivity vectors represent the same scene, the indices of their nonzero elements remain the same across views. On the other hand, because of the aspect-angle dependence of the target scattering characteristics, the values of these nonzero elements differ from view to view. As a consequence, all the scene reflectivity vectors share the same sparsity pattern, and the measurement data obtained from the various views can be processed by distributed sparse reconstruction methods.
3. Proposed Algorithm Description
In order to reduce the communication costs among the radar units at the various views, the CSOMP algorithm can be utilized to reconstruct the scene reflectivity vectors associated with the different views. In contrast to the traditional SOMP algorithm [30], which needs to share the complete observation vectors among the radar units, the CSOMP algorithm introduces a censoring procedure in the support-finding step, so that only the $L$ largest elements and the corresponding indices of the observation vectors need to be transmitted among the radar units at the various views. The detailed procedure of the CSOMP algorithm is listed in Algorithm 1.
Algorithm 1: The CSOMP algorithm.

Input: the measurement vectors $\tilde{\mathbf{y}}_q$ and the dictionary matrices $\mathbf{A}_q$ for $q = 1, \ldots, Q$, the measurement noise level $\varepsilon$, and the censoring level $L$.

Initialization: let the residual vectors $\mathbf{r}_q^0 = \tilde{\mathbf{y}}_q$ and the support set $\Lambda^0 = \emptyset$ for $q = 1, \ldots, Q$, and the iteration index $t = 1$;

Iteration:
(1) Compute the observation vectors $\mathbf{b}_q^t = \mathbf{A}_q^H \mathbf{r}_q^{t-1}$ for $q = 1, \ldots, Q$;
(2) (a) Censoring: $\Omega_q^t = \mathcal{L}(\mathbf{b}_q^t)$, where $\mathcal{L}(\cdot)$ denotes the set of indices corresponding to the $L$ largest entries of $|\mathbf{b}_q^t|$;
(b) Communication: share $\{[\mathbf{b}_q^t]_i : i \in \Omega_q^t\}$ and $\Omega_q^t$ with all units;
(c) Construct the new sparse vector $\tilde{\mathbf{b}}_q^t$, whose nonzero entries are located at the indices indicated by $\Omega_q^t$ with the corresponding coefficients of $\mathbf{b}_q^t$;
(3) Sum up all observation vectors: $\mathbf{b}^t = \sum_{q=1}^{Q} |\tilde{\mathbf{b}}_q^t|$;
(4) Update the support set: find the index $\lambda^t$ of the largest entry in $\mathbf{b}^t$ and set $\Lambda^t = \Lambda^{t-1} \cup \{\lambda^t\}$;
(5) Update the residuals: $\mathbf{r}_q^t = \tilde{\mathbf{y}}_q - \mathbf{A}_{q,\Lambda^t} \mathbf{A}_{q,\Lambda^t}^{\dagger} \tilde{\mathbf{y}}_q$, where $\mathbf{A}_{q,\Lambda^t}$ consists of the columns of $\mathbf{A}_q$ corresponding to the indices in $\Lambda^t$;
(6) If $\|\mathbf{r}_q^t\|_2 > \varepsilon$, set $t = t + 1$ and return to step (1); otherwise, stop the iteration and compute the sparse solutions $\hat{\mathbf{s}}_q$, whose nonzero entries are located at the indices indicated by $\Lambda^t$ with the coefficients $\mathbf{A}_{q,\Lambda^t}^{\dagger} \tilde{\mathbf{y}}_q$;

Output: the additive fusion result of the views, $\hat{\mathbf{s}} = \sum_{q=1}^{Q} \hat{\mathbf{s}}_q$.
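A compact single-process Python sketch of the distributed procedure above is given below. It simulates the per-view censoring and fusion inside one loop; in an actual deployment each view would run locally and exchange only the censored (value, index) pairs. The function name, interface, and the least-squares residual update are illustrative choices.

```python
import numpy as np

def csomp(Y, A, eps, L, max_iter=50):
    """Simplified sketch of CSOMP: Y is a list of measurement vectors (one per
    view), A the matching list of dictionary matrices, eps the recovery error
    tolerance, and L the censoring level."""
    Q = len(Y)
    G = A[0].shape[1]
    residuals = [np.array(y) for y in Y]
    support = []
    coeffs = [None] * Q
    for _ in range(max_iter):
        # Steps (1)-(3): each view censors its observation vector to its L
        # largest magnitudes; only those entries are fused across views.
        fused = np.zeros(G)
        for q in range(Q):
            b = A[q].conj().T @ residuals[q]
            keep = np.argsort(np.abs(b))[-L:]
            censored = np.zeros(G)
            censored[keep] = np.abs(b[keep])
            fused += censored
        # Step (4): grow the common support with the largest fused entry.
        k = int(np.argmax(fused))
        if k not in support:
            support.append(k)
        # Step (5): per-view least-squares update on the common support.
        for q in range(Q):
            As = A[q][:, support]
            coeffs[q], *_ = np.linalg.lstsq(As, Y[q], rcond=None)
            residuals[q] = Y[q] - As @ coeffs[q]
        # Step (6): stop once every view's residual is within tolerance.
        if all(np.linalg.norm(r) <= eps for r in residuals):
            break
    # Assemble the sparse solutions and their additive fusion across views.
    S = np.zeros((G, Q), dtype=complex)
    for q in range(Q):
        S[support, q] = coeffs[q]
    return support, S.sum(axis=1)
```

The returned support is shared by all views, consistent with the joint sparsity model of Section 2.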
The CSOMP algorithm can significantly reduce the communication costs through the censoring operation. Assume that the number of iterations is $T$, that transmitting one floating-point coefficient costs $F$, and that transmitting one integer index costs $I$. Under the traditional SOMP algorithm, each of the $Q$ units shares its full observation vector, whose length equals the number of scene pixels $G$, in every iteration, so the communication costs are $TQGF$; the communication costs of the CSOMP algorithm are reduced to $TQL(F + I)$.
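As a back-of-envelope illustration of this saving, the short script below plugs in hypothetical sizes consistent with the simulation in Section 4 (3 views, a 33 × 33 pixel scene, censoring level 50); the iteration count of 9 is an illustrative assumption, not a value from the paper.

```python
# Illustrative communication-cost comparison (unit cost per scalar assumed 1).
Q, G, T, L = 3, 1089, 9, 50   # views, pixels (33 x 33), iterations, censoring

# SOMP: every view broadcasts its full G-length observation vector per iteration.
somp_floats = T * Q * G

# CSOMP: every view shares only L (coefficient, index) pairs per iteration.
csomp_floats = T * Q * L       # floating-point coefficients
csomp_ints = T * Q * L         # integer indices

print(somp_floats, csomp_floats + csomp_ints)  # prints: 29403 2700
```

Even counting the extra integer indices, the censored exchange is roughly an order of magnitude cheaper for these sizes.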
Here, it should be emphasized that the preset measurement noise level $\varepsilon$, also known as the recovery error tolerance, has a significant impact on the reconstruction results of the CSOMP algorithm. An inaccurate selection of $\varepsilon$ leads to the loss of genuine targets or the introduction of ghost targets. However, the noise level of multiview TWRI measurement data is usually unknown in practice. In order to estimate the measurement noise level, the CV-CSOMP algorithm, which combines the CV technique and the CSOMP algorithm, is developed. It has been shown that the CV strategy is a computationally efficient method to determine the recovery error tolerance [21, 22]. In the CV-CSOMP algorithm, the measurement vectors $\tilde{\mathbf{y}}_q$ are separated into reconstruction measurement vectors $\tilde{\mathbf{y}}_q^{\mathrm{r}}$ and CV measurement vectors $\tilde{\mathbf{y}}_q^{\mathrm{cv}}$. Accordingly, the dictionary matrices $\mathbf{A}_q$ are separated into reconstruction matrices $\mathbf{A}_q^{\mathrm{r}}$ and CV matrices $\mathbf{A}_q^{\mathrm{cv}}$. The detailed steps of the CV-CSOMP algorithm are depicted in Algorithm 2. In each iteration of the CV-CSOMP algorithm, the scene reflectivity is reconstructed by the CSOMP algorithm and the outcome is evaluated by the CV technique, which indicates when the iteration process starts to overfit the noise. The reconstructed imaging result whose CV residual is smallest is selected as the output.
Algorithm 2: The CV-CSOMP algorithm.

Input: the reconstruction measurement vectors $\tilde{\mathbf{y}}_q^{\mathrm{r}}$, the CV measurement vectors $\tilde{\mathbf{y}}_q^{\mathrm{cv}}$, the reconstruction matrices $\mathbf{A}_q^{\mathrm{r}}$, the CV matrices $\mathbf{A}_q^{\mathrm{cv}}$, the maximum number of iterations $T$, and the censoring level $L$.

Initialization: set the CV residual $\epsilon_{\mathrm{cv}}^0 = +\infty$ and the iteration index $t = 1$;

Iteration:
(1) Reconstruct $\hat{\mathbf{s}}_q^t$ using the CSOMP algorithm with $\tilde{\mathbf{y}}_q^{\mathrm{r}}$, $\mathbf{A}_q^{\mathrm{r}}$, and $t$ iterations;
(2) Update the CV residual $\epsilon_{\mathrm{cv}}^t = \sum_{q=1}^{Q} \|\tilde{\mathbf{y}}_q^{\mathrm{cv}} - \mathbf{A}_q^{\mathrm{cv}} \hat{\mathbf{s}}_q^t\|_2$ and set $t = t + 1$;
(3) Repeat until $\epsilon_{\mathrm{cv}}^t \ge \epsilon_{\mathrm{cv}}^{t-1}$ or $t > T$; then compute the measurement noise level $\varepsilon = \|\tilde{\mathbf{y}}_q^{\mathrm{r}} - \mathbf{A}_q^{\mathrm{r}} \hat{\mathbf{s}}_q^{t-1}\|_2$;

Output: the sparse solution $\hat{\mathbf{s}} = \sum_{q=1}^{Q} \hat{\mathbf{s}}_q^{t-1}$.
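The CV wrapper above can be sketched in a few lines. For brevity, a plain orthogonal matching pursuit run for a fixed number of iterations stands in for the distributed CSOMP call, and the split fraction and iteration cap are illustrative choices; the stopping rule mirrors the one above, halting once the held-out CV residual stops decreasing.

```python
import numpy as np

def omp_k(A, y, k):
    """Plain OMP run for exactly k iterations (stand-in for one CSOMP call)."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x[support] = coef
    return x

def cv_omp(A, y, cv_frac=0.4, max_iter=30, rng=None):
    """CV-based sparse recovery sketch: split the measurements, reconstruct on
    one part, and stop once the residual on the held-out CV part no longer
    decreases (i.e., further iterations start to overfit the noise)."""
    rng = np.random.default_rng(rng)
    perm = rng.permutation(len(y))
    n_cv = int(cv_frac * len(y))
    cv_idx, rec_idx = perm[:n_cv], perm[n_cv:]
    A_rec, y_rec = A[rec_idx], y[rec_idx]
    A_cv, y_cv = A[cv_idx], y[cv_idx]
    best_x, best_res = None, np.inf
    for k in range(1, max_iter + 1):
        x = omp_k(A_rec, y_rec, k)              # reconstruct with k iterations
        res = np.linalg.norm(y_cv - A_cv @ x)   # held-out CV residual
        if res >= best_res:                     # CV residual stopped improving
            break
        best_x, best_res = x, res
    return best_x, best_res
```

The returned CV residual plays the role of the estimated recovery error tolerance in the algorithm above.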
4. Results and Discussion
4.1. Simulation Results
In this section, simulation experiments are conducted using synthetic data. In the simulation, the surrounding homogeneous walls, composed of solid concrete blocks, have a thickness of 0.2 m and a relative permittivity of 7.66. The scene containing nine targets is illuminated from the down, left, and right views, respectively. At each view, a total of 41 transceiver positions parallel to the wall with a spacing of 0.08 m are located at a standoff distance of 1 m from the corresponding wall. A stepped-frequency signal ranging from 2 GHz to 3 GHz with a frequency step of 5 MHz is used for imaging. Thus, the TWRI system transmits and receives 8241 (41 × 201) monochromatic signals. The geometry of the multiview TWRI configuration under consideration is depicted in Figure 1. The 4 m × 4 m square region to be imaged is partitioned into 33 × 33 pixels along the crossrange and downrange directions. The reflectivities of the nine targets, drawn from a complex Gaussian distribution, are assumed to remain unchanged within each view but to vary across views due to the aspect-dependent scattering characteristics of the targets. Complex white Gaussian noise is added to the measurements to account for the measurement noise.
The value of $K$ is set to 4100, where one-half of the frequencies and all antenna locations are used for scene reconstruction. The censoring levels of the CSOMP and CV-CSOMP algorithms are both set to 50. For the CV-CSOMP algorithm, 60% of the total reduced measurements are used to reconstruct the targets via the CSOMP algorithm and the remaining 40% are utilized to compute the CV residual, while for the CSOMP algorithm, the measurement noise level required as an input parameter is unknown and has to be set empirically. The imaging reconstruction results of the CSOMP and CV-CSOMP algorithms are depicted in Figure 2. All the images are normalized to their own maxima and shown on the same 20 dB scale. Figures 2(a) and 2(b) show that both the CSOMP and CV-CSOMP algorithms are able to achieve accurate reconstruction of the targets when the signal-to-noise ratio (SNR) is equal to 10 dB. It is observed from Figure 2(c) that, because of the improper selection of the measurement noise level, the CSOMP algorithm misses two targets when the SNR is set to 20 dB. However, the CV-CSOMP algorithm can accurately localize all nine targets when the SNR is equal to 20 dB. As seen in Figures 2(e) and 2(f), the CV residual of the CV-CSOMP algorithm decreases to a constant value after about 7 iterations, which means that the CV-CSOMP algorithm is able to determine the measurement noise level and obtain promising imaging reconstruction results with a small number of iterations.
In order to provide a quantitative comparison of imaging performance, the normalized mean square error (NMSE) is applied to evaluate the imaging quality. The NMSE is defined as

$$\mathrm{NMSE} = \frac{1}{N_{mc}} \sum_{i=1}^{N_{mc}} \frac{\|\mathbf{s} - \hat{\mathbf{s}}_i\|_2^2}{\|\mathbf{s}\|_2^2},$$

where $\mathbf{s}$ is the original signal vector, $\hat{\mathbf{s}}_i$ is the reconstructed signal vector of the $i$-th trial, and $N_{mc}$ is the number of Monte Carlo experiments.
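The metric above is straightforward to compute; a minimal helper, with illustrative naming, is:

```python
import numpy as np

def nmse(originals, reconstructions):
    """NMSE averaged over Monte Carlo trials:
    (1/N) * sum_i ||s_i - s_hat_i||^2 / ||s_i||^2."""
    errs = [np.linalg.norm(s - sh) ** 2 / np.linalg.norm(s) ** 2
            for s, sh in zip(originals, reconstructions)]
    return float(np.mean(errs))
```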
For the analysis of imaging performance, several parameters are varied. First, we vary the SNR from 0 dB to 20 dB in 5 dB increments. For each SNR value, the experiment is repeated 100 times and the NMSE of the formed images is computed. Figure 3(a) shows the NMSE of the reconstruction results obtained by the CSOMP and CV-CSOMP algorithms as a function of SNR. It is observed that the NMSE of the CV-CSOMP algorithm is slightly higher than that of the CSOMP algorithm when the SNR is less than 10 dB. When the SNR is greater than 10 dB, the imaging performance of the CSOMP algorithm degrades severely due to the improper selection of the measurement noise level. However, the CV-CSOMP algorithm still maintains high imaging accuracy because it adaptively determines the proper measurement noise level through the CV procedure.
To further assess the effectiveness of the CV-CSOMP imaging algorithm, the relationship between the NMSE and the percentage of the full measurement set when the SNR is equal to 20 dB is demonstrated in Figure 3(b). As can be seen, the CV-CSOMP algorithm achieves consistently lower NMSE than the CSOMP algorithm. The imaging quality of the CSOMP algorithm remains nearly unchanged, while the imaging quality of the CV-CSOMP algorithm improves as the percentage of measurements increases. It is also observed from Figure 3(b) that the CV-CSOMP algorithm can exploit only 20% of the full measurement data to successfully detect and localize the targets. In practical scenarios, the number of measurements critically impacts the data collection time of the synthetic aperture system. It can be concluded that the proposed CV-CSOMP algorithm allows a significant reduction of data acquisition time while providing desirable image quality.
The selection of the number of reconstruction measurements and CV measurements is very important for the CV-CSOMP algorithm. For the one-half reduced data set, Figure 4 presents the relationship between the NMSE and the percentage of reconstruction measurements chosen from the total reduced measurement set. It can be observed that the NMSE decreases as the number of reconstruction measurements increases. When the percentage of reconstruction measurements exceeds 60%, the NMSE remains almost unchanged. To obtain a reliable imaging reconstruction, 60% of the total measurements are used as reconstruction measurements and the remaining 40% are used as CV measurements in the simulated and experimental scenarios.
To analyze the influence of different censoring levels on the reconstruction performance of the CV-CSOMP algorithm, Figure 5 depicts the NMSE versus the censoring level. It is observed that the CV-CSOMP algorithm has nearly the same NMSE as the censoring level varies from 10 to 100. A censoring level of 50 is selected for the considered scenario.
4.2. Experimental Results
The proposed CV-CSOMP algorithm is evaluated using real multiview TWRI data. As depicted in Figure 6(a), the slider controller and vector network analyzer (VNA) are under the control of a desktop computer, which performs the mechanical scanning and data collection. The scanning frequency band of the VNA is from 1 GHz to 3 GHz with a step size of 10 MHz. A dual-polarized horn antenna is used as the transceiver and mounted on the slider to synthesize a 41-element linear array with an interelement spacing of 4 cm. Ports 1 and 2 of the network analyzer are, respectively, connected to the V and H feeds of the antenna, and full polarization measurements are conducted under the monostatic measurement configuration. As shown in Figure 6(b), two trihedral targets suspended 52 cm above the ground are placed behind a homogeneous wooden wall of known thickness and relative permittivity. The antenna array is sequentially placed at standoff distances of 0.8 m and 1.2 m from the front face of the wall, which means that two different views are employed to interrogate the scene. While the full polarimetric data are collected, only the monostatic VV copolarized measurements are utilized for imaging. For each view, a data set of 201 frequencies and 41 monostatic antennas, i.e., 8241 space-frequency measurements, is acquired during the experiment. Here, only 100 frequency points at each antenna location, representing about 50% of the full data volume, are randomly selected at each view for image formation. The randomly selected frequencies are the same for all antenna locations at each view. The spatial filtering method [26] can then be directly applied to the reduced data set to remove the strong wall reflection signal as well as the background clutter from the room.
Figure 7 demonstrates the imaging results obtained by the CSOMP and CV-CSOMP algorithms, respectively. The censoring level is set to 50 for both algorithms. The CSOMP algorithm fails to recover the trihedral target at the downrange of 162 cm. In comparison, the proposed CV-CSOMP algorithm accurately detects and locates both trihedral targets.
5. Conclusion

In this paper, we have proposed a distributed greedy sparse recovery algorithm for multiview TWRI. The proposed CV-CSOMP algorithm exploits the CV technique to determine the proper measurement noise level through the iterative CSOMP calculation. The simulation and experimental results demonstrate that the proposed CV-CSOMP algorithm can provide desirable multiview TWRI results without prior information such as the sparsity or noise level, while significantly reducing the communication load among the radar units. It is worth mentioning that although the proposed algorithm can localize targets correctly, it cannot retain the edges and shapes of the targets. The development of advanced imaging approaches that preserve the edges and shapes of extended targets will be pursued in future research.
Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61671310, in part by the Aeronautical Science Foundation of China under Grant 2016ZC54013, in part by the Innovative Talents Program of Universities of Liaoning Province under Grant LR2016062, in part by the Scientific Research Project of the Department of Education of Liaoning Province under Grant L201752, and in part by the Young and Middle-aged Science and Technology Innovation Talents Support Project of Shenyang under Grant RC180038.
References

- M. G. Amin, Through-the-Wall Radar Imaging, CRC Press, Boca Raton, FL, USA, 2010.
- S. Guo, X. Yang, G. Cui, Y. Song, and L. Kong, “Multipath ghost suppression for through-the-wall imaging radar via array rotating,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 6, pp. 868–872, 2018.
- L. Qu, S. An, T. Yang, and Y. Sun, “Group sparse basis pursuit denoising reconstruction algorithm for polarimetric through-the-wall radar imaging,” International Journal of Antennas and Propagation, vol. 2018, 8 pages, 2018.
- F. Ahmad and M. G. Amin, “Multi-location wideband synthetic aperture imaging for urban sensing applications,” Journal of The Franklin Institute, vol. 345, no. 6, pp. 618–639, 2008.
- F. Soldovieri, R. Solimene, and G. Prisco, “A multiarray tomographic approach for through-wall imaging,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 4, pp. 1192–1199, 2008.
- Y. Jia, G. Cui, L. Kong, and X. Yang, “Multichannel and multiview imaging approach to building layout determination of through-wall radar,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 5, pp. 970–974, 2014.
- S. Xin, L. Biying, L. Pengfei, and Z. Zhimin, “A multiarray refocusing approach for through-the-wall imaging,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 4, pp. 880–884, 2015.
- X. Chen and W. Chen, “Double-layer fuzzy fusion for multiview through-wall radar images,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 10, pp. 2075–2079, 2015.
- Y. S. Yoon and M. G. Amin, “Compressed sensing technique for high-resolution radar imaging,” in Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition XVII, vol. 6968, pp. 69681A-1–69681A-10, Orlando, Fla, USA, March 2008.
- Q. Huang, L. Qu, B. Wu, and G. Fang, “UWB through-wall imaging based on compressive sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1408–1415, 2010.
- M. G. Amin and F. Ahmad, “Compressive sensing for through-the-wall radar imaging,” Journal of Electronic Imaging, vol. 22, no. 3, pp. 030901.1–030901.21, 2013.
- F. Ahmad, J. Qian, and M. G. Amin, “Wall Clutter mitigation using discrete prolate spheroidal sequences for sparse reconstruction of indoor stationary scenes,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1549–1557, 2015.
- Q. Wu, Y. D. Zhang, F. Ahmad, and M. G. Amin, “Compressive-sensing-based high-resolution polarimetric through-the-wall radar imaging exploiting target characteristics,” IEEE Antennas and Wireless Propagation Letters, vol. 14, pp. 1043–1047, 2015.
- V. H. Tang, S. L. Phung, F. H. C. Tivive, and A. Bouzerdoum, “A sparse bayesian learning approach for through-wall radar imaging of stationary targets,” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 5, pp. 2485–2501, 2017.
- X. Wang, G. Li, Y. Liu, and M. G. Amin, “Two-level block matching pursuit for polarimetric through-wall radar imaging,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 3, pp. 1533–1545, 2018.
- J. Yang, A. Bouzerdoum, and M. G. Amin, “Multi-view through-the-wall radar imaging using compressed sensing,” in Proceedings of the 18th European Signal Processing Conference, EUSIPCO 2010, pp. 1429–1433, Denmark, August 2010.
- V. H. Tang, A. Bouzerdoum, S. L. Phung, and F. H. C. Tivive, “Multi-view indoor scene reconstruction from compressed through-wall radar measurements using a joint Bayesian sparse representation,” in Proceedings of the 40th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2015, pp. 2419–2423, Australia, April 2015.
- G. Li and R. J. Burkholder, “Hybrid matching pursuit for distributed through-wall radar imaging,” IEEE Transactions on Antennas and Propagation, vol. 63, no. 4, pp. 1701–1711, 2015.
- M. Leigsnering, F. Ahmad, M. G. Amin, and A. M. Zoubir, “Parametric dictionary learning for sparsity-based TWRI in multipath environments,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 2, pp. 532–547, 2016.
- M. Stiefel, M. Leigsnering, A. M. Zoubir, F. Ahmad, and M. G. Amin, “Distributed greedy signal recovery for through-the-wall radar imaging,” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 10, pp. 1477–1481, 2016.
- R. Ward, “Compressed sensing with cross validation,” IEEE Transactions on Information Theory, vol. 55, no. 12, pp. 5773–5782, 2009.
- J. Zhang, L. Chen, P. T. Boufounos, and Y. Gu, “On the theoretical analysis of cross validation in compressive sensing,” in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014, pp. 3370–3374, Italy, 2014.
- M. G. Amin and F. Ahmad, “Wideband synthetic aperture beamforming for through-the-wall imaging,” IEEE Signal Processing Magazine, vol. 25, no. 4, pp. 110–113, 2008.
- W. Zhang, M. G. Amin, F. Ahmad, A. Hoorfar, and G. E. Smith, “Ultrawideband impulse radar through-the-wall imaging with compressive sensing,” International Journal of Antennas and Propagation, vol. 2012, Article ID 251497, 11 pages, 2012.
- K. E. Browne, R. J. Burkholder, and J. L. Volakis, “Fast optimization of through-wall radar images via the method of Lagrange multipliers,” IEEE Transactions on Antennas and Propagation, vol. 61, no. 1, pp. 320–328, 2013.
- Y.-S. Yoon and M. G. Amin, “Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 9, pp. 3192–3208, 2009.
- F. H. C. Tivive, A. Bouzerdoum, and M. G. Amin, “A subspace projection approach for wall clutter mitigation in through-the-wall radar imaging,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 4, pp. 2108–2122, 2015.
- P. Setlur, M. Amin, and F. Ahmad, “Multipath model and exploitation in through-the-wall and urban radar sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10, pp. 4021–4034, 2011.
- M. Leigsnering, F. Ahmad, M. G. Amin, and A. M. Zoubir, “Compressive Sensing-based multipath exploitation for stationary and moving indoor target localization,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 8, pp. 1469–1483, 2015.
- J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
- F. Soldovieri, R. Solimene, and F. Ahmad, “Sparse tomographic inverse scattering approach for through-the-wall radar imaging,” IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 12, pp. 3340–3350, 2012.
Copyright © 2019 Lele Qu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.