Optimization Theory, Methods, and Applications in Engineering 2014
Research Article | Open Access
Multiple Sparse Measurement Gradient Reconstruction Algorithm for DOA Estimation in Compressed Sensing
Abstract
A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is proposed, in which the DOA estimation problem is cast as a joint sparse reconstruction from multiple measurement vectors (MMV). The proposed method is derived by transforming a quadratically constrained linear programming (QCLP) problem into an unconstrained convex optimization problem, which overcomes the drawback that the ℓ1-norm is nondifferentiable when sparse sources are reconstructed by ℓ1-norm minimization. The convergence rate and estimation performance of the proposed method are significantly improved, since the steepest descent step and the Barzilai-Borwein step are used alternately as the search step in the unconstrained convex optimization. The proposed method obtains satisfactory performance especially in scenarios with low signal-to-noise ratio (SNR), small numbers of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method compared with existing methods.
1. Introduction
Direction of arrival (DOA) estimation of multiple narrowband sources is an important research topic in array signal processing. It has been extensively studied in acoustic source localization, radar, and medical imaging [1–3]. Many effective DOA estimation algorithms have been proposed and developed, mainly including beamforming algorithms such as MVDR [4] and subspace-based algorithms such as MUSIC [5]. To obtain acceptable estimation performance, these conventional methods require data acquisition at or above the Nyquist sampling rate. However, a high sampling rate places such heavy pressure on capturing and storing data that the requirements on both hardware and software increase considerably. Moreover, these methods suffer serious performance degradation in scenarios with low signal-to-noise ratio (SNR), small numbers of snapshots, or coherent sources.
Recently, many applications of compressed sensing (CS) [6–8], especially DOA estimation, have attracted tremendous research interest in signal processing. CS is an emerging area: it can capture and store compressible or sparse sources at a rate much lower than the Nyquist sampling rate, and it can reconstruct the original sources from nonadaptive linear projection measurements onto a suitable measurement matrix that satisfies the restricted isometry property (RIP) [9–11]. Sparse reconstruction aims to find the support shared by the unknown sparse vectors from multiple measurement vectors (MMV), where the support denotes the indices of the nonzero elements of the unknown sparse vectors.
CS has been widely applied to DOA estimation, since sources are sparse in the spatial domain: there are far fewer true source directions than potential directions. Stoica et al. [12] proposed a sparse iterative covariance-based estimation method (SPICE) for array processing, which is a semiparametric estimation method and avoids parameter selection. Hyder and Mahata [13] proposed an alternative strategy, the joint ℓ2,0 approximation (JLZA-DOA) algorithm, based on spatial sparsity; it can resolve closely spaced and highly correlated sources even when the number of sources is unknown. Figueiredo et al. [14] proposed a gradient projection algorithm to solve the bound-constrained quadratic programming formulation. Although it is simple to implement, its regularization parameter is difficult to select and it is only suitable for the single measurement vector (SMV) model, which limits its practical engineering application.
In this paper, we propose a novel multiple sparse measurement gradient reconstruction method, called MSMGR, for DOA estimation in CS. The method is derived by transforming quadratically constrained linear programming (QCLP) into unconstrained convex optimization to overcome the drawback that the ℓ1-norm is nondifferentiable when it is minimized for sparse reconstruction. The steepest descent step [15] and the Barzilai-Borwein step [16] are used alternately as the search step to significantly improve the convergence rate and estimation performance. Furthermore, the singular value decomposition (SVD) is incorporated into the proposed method to reduce the computational complexity and the sensitivity to noise. The proposed method is suitable for both SMV and MMV, and it achieves higher estimation accuracy and resolution than existing methods, especially in scenarios with low SNR, small numbers of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method compared with existing methods.
2. Problem Formulation
Consider K narrowband far-field sources s_k(t), k = 1, 2, …, K, impinging on a sensor array of M omnidirectional sensors from distinct directions θ_k, k = 1, 2, …, K. The array observation model at the t-th snapshot can be formulated as x(t) = A(θ)s(t) + n(t), where n(t) is a complex Gaussian white noise vector with zero mean and covariance matrix σ²I, and the k-th column a(θ_k) of A(θ) is the steering vector of the source from direction θ_k. Although DOA estimation based on a single snapshot, which is a typical SMV model, has its value, the number of snapshots is larger than one in most practical applications. Correspondingly, the multiple-snapshot model is a typical MMV model.
In order to cast DOA estimation as a sparse reconstruction, let {θ̄_1, θ̄_2, …, θ̄_N} denote a sufficiently fine grid covering the entire spatial domain of potential source directions, so that each true direction is aligned with, or close to, a grid point; that is, for each true direction θ_k there exists a grid point θ̄_{n_k} equal to it. Thus, we have x(t) = A(θ̄)s̄(t) + n(t), where s̄(t) is the N × 1 source vector whose n_k-th entries equal s_k(t) and whose remaining entries are zero.
The multiple-snapshot model can be written in the following sparse form: X = A(θ̄)S̄ + N̄, where T is the number of snapshots, X = [x(1), …, x(T)], and A(θ̄) is the M × N array manifold matrix corresponding to all N potential directions, also defined as an overcomplete dictionary in CS. Each column s̄(t) of S̄ is a sparse vector with nonzero elements at the positions corresponding to the true directions and zero elements at the remaining positions, where (·)^T denotes the transpose operation. Hence, the matrix S̄ has K nonzero rows, that is, it is row sparse, since the columns s̄(t) share a common support. Obviously, the DOA estimation problem for multiple snapshots is that of identifying the row support of the unknown matrix S̄ from the measurement matrix Y, which is given by Y = ΦX = ΦA(θ̄)S̄ + ΦN̄ = ΘS̄ + Ñ, with sensing matrix Θ = ΦA(θ̄), noise matrix Ñ = ΦN̄, and a common measurement matrix Φ of size D × M with D < M, where D is the number of nonadaptive linear projection measurements.
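The on-grid MMV model can be illustrated numerically. The sketch below builds an overcomplete ULA dictionary over a direction grid and generates row-sparse multi-snapshot data; the array size, grid step, source angles, noise level, and half-wavelength element spacing are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M, T = 8, 50                                  # sensors, snapshots (illustrative)
grid = np.deg2rad(np.arange(-90, 90, 1.0))    # potential directions, 1-degree grid
N = grid.size

# ULA steering matrix over the whole grid: the overcomplete dictionary A (M x N),
# assuming half-wavelength element spacing, so d/lambda = 0.5
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(grid)))

# Two true on-grid sources -> a row-sparse source matrix S (N x T)
true_idx = [np.argmin(np.abs(np.rad2deg(grid) - a)) for a in (-10.0, 15.0)]
S = np.zeros((N, T), dtype=complex)
S[true_idx, :] = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))

noise = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + noise                             # multiple measurement vectors, M x T

print(X.shape)                                # (8, 50)
```

Identifying which rows of S are nonzero is exactly the row-support recovery problem described above.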
It is well known that sparse sources can be reconstructed by solving an ℓ0-norm minimization problem. However, that optimization problem is nonconvex, and solving it is both numerically unstable and computationally unacceptable [17]. The problem is therefore relaxed into an ℓ1-norm minimization problem [18], so that we can accurately reconstruct the matrix S̄ by solving the following QCLP problem: minimize ||u||_1 subject to ||Y − ΘS̄||_F ≤ β, where u is the unknown sparsity-profile vector, β bounds the noise, and ||·||_F represents the Frobenius norm of matrices (the Euclidean norm in the vector case). The n-th entry of u is equal to the Euclidean norm of the n-th row of S̄; that is, u_n = ||s̄^(n)||_2.
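Concretely, the vector u collects the Euclidean norms of the rows of the source matrix, and its ℓ1-norm is the mixed ℓ2,1-norm that promotes row sparsity. A minimal numerical illustration with a made-up 3 × 2 matrix:

```python
import numpy as np

# Toy row-sparse matrix: the middle row is identically zero
S = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])

u = np.linalg.norm(S, axis=1)   # per-row Euclidean norms: u_n = ||row n||_2
l21 = np.sum(u)                 # l1 norm of u, i.e., the mixed l2,1 norm of S

print(u, l21)                   # [5. 0. 1.] 6.0
```

Minimizing this quantity drives entire rows to zero at once, which is what couples the snapshots in the MMV model.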
3. DOA Estimation
The SVD is employed on the matrix Y to reduce the computational complexity and the sensitivity to noise. Hence, we have Y = UΛV^H, where U and V are orthonormal matrices and (·)^H denotes the conjugate transpose operation. U_s and U_n span the signal subspace and noise subspace, respectively. The singular values of Y are arranged from the largest to the smallest, λ_1 ≥ λ_2 ≥ ⋯, where the K large singular values are dominant. U_s and U_n consist of the left singular vectors corresponding to the large singular values and the small singular values, respectively. Denote Y_SV = Y V D_K, S_SV = S̄ V D_K, and N_SV = Ñ V D_K, where D_K = [I_K 0]^T, I_K is an identity matrix of size K × K, and 0 is a zero matrix of size K × (T − K), so that we have Y_SV = ΘS_SV + N_SV.
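The SVD-based reduction above can be sketched as follows: keeping only the K dominant right singular vectors compresses the T-column data matrix to K columns while retaining the signal subspace. The sizes and the synthetic low-rank-plus-noise data standing in for the array output are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T, K = 8, 200, 2                       # sensors, snapshots, assumed source count

# Synthetic rank-K data plus small noise, standing in for the array output Y
Y = rng.standard_normal((M, K)) @ rng.standard_normal((K, T)) \
    + 0.01 * rng.standard_normal((M, T))

U, s, Vh = np.linalg.svd(Y, full_matrices=False)   # s is sorted descending
Y_sv = Y @ Vh.conj().T[:, :K]             # reduced data: M x K instead of M x T

print(Y_sv.shape)
```

The reconstruction problem is then solved on the K-column matrix Y_sv instead of the full T-column data, which is where the computational saving comes from.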
As one may note, the dimension of the data matrix is reduced from M × T to M × K, which can significantly reduce the computational complexity and the sensitivity to noise, especially in scenarios with large numbers of snapshots. The essence of the dimension reduction is to keep the signal subspace and discard the noise subspace. By using the SVD, (5) can be rewritten in the following form: minimize ||u||_1 subject to ||Y_SV − ΘS_SV||_F ≤ β, where each column of S_SV is also a sparse vector and shares the same support as S̄. To overcome the drawback that the ℓ1-norm is nondifferentiable when it is minimized for solving the sparse sources, (8) is transformed into an unconstrained convex optimization problem by using Lagrange multiplication [19, 20], with objective (1/2)||Y_SV − ΘS_SV||_F² + λ||u||_1, where λ is nonnegative and is called the regularization factor; it serves as a trade-off between the ability to suppress noise and the source sparsity. Note that the search path obtained by projecting the negative gradient of the objective function in (9) onto the feasible set does not support an effective backtracking line search. Therefore, we adopt the ℓ1-norm of matrices to change the search direction, which results in the objective (1/2)||Y_SV − ΘS_SV||_F² + λ||S_SV||_1, where ||·||_1 denotes the ℓ1-norm of matrices. A detailed derivation of using MSMGR for DOA estimation is as follows. Assume that R_k denotes the residual of the k-th iteration, Γ_k denotes the support of the k-th iteration, Θ_{Γ_k} denotes the submatrix of Θ with columns indexed by Γ_k, and S_k denotes the reconstructed source after the k-th iteration. Therefore, the objective function of the k-th iteration can be written as F(S_k) = (1/2)||Y_SV − Θ_{Γ_k}S_k||_F² + λ||S_k||_1.
The purpose of the current iteration is to find the sparse reconstructed source S_k that minimizes the objective function F(S_k); that is, the residual is minimized after the current iteration. The expansion of (11) can be expressed as follows, where s_{ij} denotes the element in the i-th row and j-th column of the matrix S_k. With further derivation, minimizing the objective function (12) is equivalent to maximizing (13), which can be given in the following form:
Based on the properties of the matrix trace, (13) can be further simplified to (14).
Then, the negative gradient G_k is obtained as the partial derivative of (14) with respect to S_k, which is given by (15), where W_k is referred to as the polarity matrix that judges the polarity (sign) of the nonzero elements: its entry in the i-th row and j-th column is the sign of the corresponding element of S_k. Since the conventional search step based on orthogonality is too small, the steepest descent step and the Barzilai-Borwein step are exploited alternately as the search step in order to improve the convergence rate and estimation performance. Then, we obtain (17), where α_SD and α_BB are the steepest descent step and the Barzilai-Borwein step, respectively. The specific steps of MSMGR are given as follows.
Initialization. Set the iteration counter k = 1 and initialize the residual, the support, the reconstructed source, and the polarity matrix.
Step 1. Calculate the inner products of the residual R_{k−1} of the (k−1)-th iteration with the columns of the sensing matrix Θ. Then, update the index set accordingly.
Step 2. Update the support Γ_k and the corresponding submatrix Θ_{Γ_k}.
Step 3. Calculate the negative gradient G_k and the search step in terms of (15) and (17), respectively.
Step 4. Update the reconstructed source S_k in terms of the negative gradient and the search step.
Step 5. Update the polarity matrix W_k by zero-padding it with a zero matrix whose size matches the newly added support entries.
Step 6. Update the residual R_k. If the residual satisfies the stopping criterion, stop the iteration; otherwise, set k = k + 1 and return to Step 1.
The core of the new method is updating the polarity matrix by the zero-padding process in Step 5, since the dimensions of the support and the corresponding submatrix are both expanded in Step 2. Moreover, the zero-padding process guarantees the precision of the DOA estimation. The spectrum of the proposed method is obtained by estimating the reconstructed source power over all potential directions. As with other spectral-based methods, the true directions are estimated as the locations of the highest peaks of the spectrum.
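The ingredients above can be illustrated with a simplified sketch. This is not the authors' exact MSMGR (the greedy support expansion and polarity-matrix bookkeeping are replaced by a row-soft-threshold proximal step for compactness), but it shows the same core idea: an ℓ2,1-regularized least-squares objective solved by gradient steps that alternate an exact steepest-descent step with a Barzilai-Borwein two-point step, plus a backtracking safeguard. All names and parameter values are ours.

```python
import numpy as np

def row_soft_threshold(S, t):
    """Proximal operator of t * (mixed l2,1 norm): shrink each row's l2 norm by t."""
    norms = np.linalg.norm(S, axis=1, keepdims=True)
    return S * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def sparse_gradient_recover(A, Y, lam=0.5, iters=300):
    """Minimize 0.5*||A S - Y||_F^2 + lam*||S||_{2,1} by proximal gradient,
    alternating an exact steepest-descent step with a Barzilai-Borwein step."""
    def objective(S):
        return (0.5 * np.linalg.norm(A @ S - Y) ** 2
                + lam * np.sum(np.linalg.norm(S, axis=1)))

    S = np.zeros((A.shape[1], Y.shape[1]), dtype=complex)
    S_prev, g_prev, alpha = S, None, 1.0
    for k in range(iters):
        g = A.conj().T @ (A @ S - Y)                 # gradient of the smooth term
        if k % 2 == 0 or g_prev is None:
            # exact line-search (steepest descent) step for the quadratic term
            Ag = A @ g
            alpha = np.linalg.norm(g) ** 2 / max(np.linalg.norm(Ag) ** 2, 1e-12)
        else:
            # Barzilai-Borwein two-point step from successive iterates/gradients
            dS, dg = S - S_prev, g - g_prev
            denom = np.real(np.vdot(dS, dg))
            if abs(denom) > 1e-12:
                alpha = np.real(np.vdot(dS, dS)) / denom
        S_prev, g_prev = S, g
        f_old = objective(S)
        S_new = row_soft_threshold(S - alpha * g, alpha * lam)
        while objective(S_new) > f_old and alpha > 1e-10:  # backtracking safeguard
            alpha *= 0.5
            S_new = row_soft_threshold(S - alpha * g, alpha * lam)
        S = S_new
    return S
```

The steepest-descent step is exact for the quadratic data term, while the Barzilai-Borwein step reuses curvature information from the previous iteration; alternating them is the step-size strategy the section describes, and the backtracking loop simply guards monotone descent.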
4. Simulation Results
In this section, the superior performance of the proposed method is demonstrated by comparison with the existing JLZA-DOA and MUSIC methods in several numerical simulations. Consider spatial sources impinging on a uniform linear array (ULA) with inter-element spacing d, where λ denotes the wavelength of the source. In the ULA case, the steering vector corresponding to a source from direction θ is given by a(θ) = [1, e^{−j2π(d/λ)sin θ}, …, e^{−j2π(M−1)(d/λ)sin θ}]^T, where M is the number of array elements. In the simulations, the regularization factor is chosen as suggested in [21]. Following [22], it is easy to see that the unique minimum of (10) is the zero matrix once the regularization factor exceeds a certain threshold. In the simulations, the average root mean square error (RMSE) of the DOA estimation is used as the key performance index, defined over Q independent Monte Carlo runs with θ̂_k^(q) denoting the estimate of θ_k in the q-th run. The resolution of the grid is closely related to the precision of the DOA estimation: a coarse grid leads to poor precision, while an overly fine grid increases the computational complexity. Therefore, an adaptive grid refinement method is used to trade off precision against computational complexity. In the simulations, we set a coarse grid with a fixed step over the whole angular range and apply a local fine grid in the vicinity of each estimated angle.
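The ULA steering vector and the average RMSE metric used in the simulations can be written down directly; the array size and the test values below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def ula_steering(theta_deg, M, d_over_lambda=0.5):
    """ULA steering vector [1, e^{-j2pi(d/l)sin(t)}, ..., e^{-j2pi(M-1)(d/l)sin(t)}]^T."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

def avg_rmse(estimates, truths):
    """sqrt( (1/(Q*K)) * sum over runs q and sources k of (theta_hat - theta)^2 )."""
    err = np.asarray(estimates) - np.asarray(truths)   # shape (Q runs, K sources)
    return float(np.sqrt(np.mean(err ** 2)))

print(np.allclose(ula_steering(0.0, 8), 1.0))   # True: broadside gives all-ones vector
print(avg_rmse([[1.0, -1.0]], [[0.0, 0.0]]))    # 1.0
```

Every entry of the steering vector has unit magnitude; only the phase encodes the direction, which is why grid resolution directly limits estimation precision.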
In the first simulation, we show the spatial spectra of the three methods in a scenario with low SNR, a small number of snapshots, and two closely spaced uncorrelated sources. The spatial spectra are shown in Figure 1 with SNR = 3 dB and 50 snapshots. The following conclusion can be drawn from Figure 1: since the spatial spectra obtained by MUSIC and JLZA-DOA have only one peak, MUSIC and JLZA-DOA cannot accurately identify the closely spaced sources. In contrast, the proposed MSMGR method exhibits a nearly ideal spectrum and a precise estimate for the closely spaced sources. Therefore, MSMGR outperforms JLZA-DOA and MUSIC in terms of the spatial spectrum.
The RMSEs of the three methods are analyzed under different conditions in the second simulation. Consider three sources, of which the latter two are closely spaced and correlated while the first is uncorrelated with them. The forward spatial smoothing method is applied to MUSIC (denoted FSS-MUSIC) to resolve the correlated sources. Figure 2 shows the RMSE of the three methods as a function of SNR over 100 Monte Carlo runs for a fixed number of 50 snapshots, whereas Figure 3 shows the RMSE versus the number of snapshots for a fixed SNR of −6 dB over 100 Monte Carlo runs. Figures 2 and 3 show that the RMSE of MSMGR is smaller than those of the other two methods; that is, MSMGR has better estimation performance, especially in scenarios with low SNR or a small number of snapshots. The reason is that MSMGR overcomes the nondifferentiability drawback and exploits the alternating search step to improve the convergence rate and estimation performance. Moreover, the RMSE of MSMGR approaches those of the other two methods as the SNR and the number of snapshots increase.
Figure 4 shows the relation between the RMSE and the angle separation of the correlated sources, which illustrates the resolving capability. Two correlated sources impinge on the ULA, with the angle separation between them varied in fixed steps. It can be seen from Figure 4 that MSMGR suffers serious performance degradation when the angle separation is very small. However, MSMGR still provides the most precise estimates as long as the angle separation does not fall below this threshold. The simulation results show that MSMGR has higher resolution than the other two methods.
Finally, we compare the computation times of the different methods versus the number of snapshots in Table 1. Two correlated sources impinge on the ULA. MSMGR-nosvd denotes MSMGR without the SVD step in the DOA estimation process.

Table 1 shows that the computation time of MSMGR-nosvd is the longest and that the computation time of MSMGR is longer than those of JLZA-DOA and FSS-MUSIC; however, it is important to note that the performance of MSMGR is much better than that of these two methods. Moreover, the results confirm that the SVD significantly reduces the computation time.
5. Conclusion
In this paper, a novel MSMGR method for DOA estimation in CS is proposed. The proposed method is obtained by transforming QCLP into unconstrained convex optimization to overcome the drawback that the ℓ1-norm is nondifferentiable when sparse sources are reconstructed by minimizing it. An alternating search step is used to improve the convergence rate and estimation performance, and the SVD is used to reduce the computational complexity and the sensitivity to noise. Simulation results show that MSMGR outperforms JLZA-DOA and MUSIC in terms of the spatial spectrum and provides more precise estimation as well as higher resolution, in particular when the SNR is low, the number of snapshots is small, or the sources are coherent.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Fundamental Research Funds for the Central Universities of China under Grant no. HEUCF130804.
References
[1] V. V. Reddy, M. Mubeen, and B. P. Ng, “Reduced-complexity super-resolution DOA estimation with unknown number of sources,” IEEE Signal Processing Letters, vol. 22, no. 6, pp. 772–776, 2015.
[2] Z. Tan and A. Nehorai, “Sparse direction of arrival estimation using co-prime arrays with off-grid targets,” IEEE Signal Processing Letters, vol. 21, no. 1, pp. 26–29, 2014.
[3] P. Falcone, F. Colone, A. Macera, and P. Lombardo, “Two-dimensional location of moving targets within local areas using WiFi-based multistatic passive radar,” IET Radar, Sonar & Navigation, vol. 8, no. 2, pp. 123–131, 2014.
[4] J. P. Burg, “Maximum entropy spectral analysis,” in Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, 1967.
[5] R. O. Schmidt, A Signal Subspace Approach to Multiple Emitter Location and Spectrum Estimation, Stanford University, Stanford, Calif, USA, 1981.
[6] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[7] J. D. Blanchard, J. Tanner, and K. Wei, “Conjugate gradient iterative hard thresholding: observed noise stability for compressed sensing,” IEEE Transactions on Signal Processing, vol. 63, no. 2, pp. 528–537, 2015.
[8] K. Sano, R. Matsushita, and T. Tanaka, “To average or not to average: trade-off in compressed sensing with noisy measurements,” in Proceedings of the IEEE International Symposium on Information Theory (ISIT '14), pp. 1316–1320, Honolulu, Hawaii, USA, June-July 2014.
[9] P. Koiran and A. Zouzias, “Hidden cliques and the certification of the restricted isometry property,” IEEE Transactions on Information Theory, vol. 60, no. 8, pp. 4999–5006, 2014.
[10] A. M. Tillmann and M. E. Pfetsch, “The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing,” IEEE Transactions on Information Theory, vol. 60, no. 2, pp. 1248–1259, 2014.
[11] C. B. Song and S. T. Xia, “Sparse signal recovery by ℓp minimization under restricted isometry property,” IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1154–1158, 2014.
[12] P. Stoica, P. Babu, and J. Li, “SPICE: a sparse covariance-based estimation method for array processing,” IEEE Transactions on Signal Processing, vol. 59, no. 2, pp. 629–638, 2011.
[13] M. M. Hyder and K. Mahata, “Direction-of-arrival estimation using a mixed ℓ2,0 norm approximation,” IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4646–4655, 2010.
[14] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
[15] J. Fliege and B. F. Svaiter, “Steepest descent methods for multicriteria optimization,” Mathematical Methods of Operations Research, vol. 51, no. 3, pp. 479–494, 2000.
[16] J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
[17] C. L. Bao, H. Ji, Y. H. Quan, and Z. W. Shen, “ℓ0 norm based dictionary learning by proximal methods with global convergence,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 3858–3865, Columbus, Ohio, USA, June 2014.
[18] P. P. Markopoulos, G. N. Karystinos, and D. A. Pados, “Optimal algorithms for ℓ1-subspace signal processing,” IEEE Transactions on Signal Processing, vol. 62, no. 19, pp. 5046–5058, 2014.
[19] J. Sandoval-Moreno, G. Besancon, and J. J. Martinez, “Lagrange multipliers based price driven coordination with constraints consideration for multisource power generation systems,” in Proceedings of the European Control Conference (ECC '14), pp. 1987–1992, Strasbourg, France, June 2014.
[20] Z.-Q. Lü and X. An, “Non-conforming finite element tearing and interconnecting method with one Lagrange multiplier for solving large-scale electromagnetic problems,” IET Microwaves, Antennas & Propagation, vol. 8, no. 10, pp. 730–735, 2014.
[21] S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale ℓ1-regularized least squares,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–616, 2007.
[22] J. J. Fuchs, “On sparse representations in arbitrary redundant bases,” IEEE Transactions on Information Theory, vol. 50, no. 6, pp. 1341–1344, 2004.
Copyright
Copyright © 2015 Weijian Si et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.