Mathematical Problems in Engineering

Volume 2015, Article ID 152570, 6 pages

http://dx.doi.org/10.1155/2015/152570

## Multiple Sparse Measurement Gradient Reconstruction Algorithm for DOA Estimation in Compressed Sensing

Department of Information and Communication Engineering, Harbin Engineering University, 150001 Harbin, China

Received 8 July 2014; Revised 9 November 2014; Accepted 16 March 2015

Academic Editor: Dane Quinn

Copyright © 2015 Weijian Si et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is proposed, in which the DOA estimation problem is cast as joint sparse reconstruction from multiple measurement vectors (MMV). The proposed method is derived by transforming a quadratically constrained linear programming (QCLP) problem into an unconstrained convex optimization, which overcomes the drawback that the ℓ1-norm is nondifferentiable when sparse sources are reconstructed by minimizing the ℓ1-norm. The convergence rate and estimation performance of the proposed method are significantly improved, since the steepest descent step and the Barzilai-Borwein step are used alternately as the search step in the unconstrained convex optimization. The proposed method achieves satisfactory performance especially in scenarios with low signal to noise ratio (SNR), a small number of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method as compared with existing methods.

#### 1. Introduction

Direction of arrival (DOA) estimation of multiple narrowband sources is an important research topic in array signal processing. It has been extensively studied in acoustic source localization, radar, and medical imaging [1–3]. Many effective DOA estimation algorithms have been proposed and developed, mainly including beamforming algorithms such as MVDR [4] and subspace-based algorithms such as MUSIC [5]. To obtain preferable estimation performance, these conventional methods must acquire data at or above the Nyquist sampling rate. However, a high sampling rate imposes a heavy burden on capturing and storing data, raising requirements on both hardware and software. Moreover, these methods suffer from serious performance degradation in scenarios with low signal to noise ratio (SNR), a small number of snapshots, or coherent sources.

Recently, many applications involving compressed sensing (CS) [6–8], especially DOA estimation, have been attracting tremendous research interest in signal processing. CS is an emerging area: it can capture and store compressed or sparse sources at a rate much lower than the Nyquist sampling rate, and it can reconstruct the original sources from nonadaptive linear projection measurements onto a suitable measurement matrix that satisfies the restricted isometry property (RIP) [9–11]. Sparse reconstruction aims to find the support shared by the unknown sparse vectors from multiple measurement vectors (MMV). The support denotes the indices of the nonzero elements in the unknown sparse vectors.
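The notion of a row support shared across all measurement vectors can be made concrete with a small NumPy sketch (dimensions and names here are our own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, K = 20, 5, 3        # grid size, snapshots, row sparsity
X = np.zeros((N, L), dtype=complex)
support = rng.choice(N, size=K, replace=False)       # shared row support
X[support] = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))

# the support is recoverable as the indices of the nonzero rows
recovered = np.flatnonzero(np.linalg.norm(X, axis=1) > 0)
```

Every column of X is sparse with the same set of nonzero positions, which is exactly what the MMV model exploits.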

CS has been widely applied to DOA estimation since sources are sparse in the spatial domain, which results from the fact that there are far fewer true source directions than potential directions. Stoica et al. [12] proposed a sparse iterative covariance-based estimation method (SPICE) for array processing, which is a semiparametric estimation method and can avoid parameter selection. Hyder and Mahata [13] proposed an alternative strategy, the joint ℓ2,0 approximation (JLZA-DOA) algorithm based on spatial sparsity, which can resolve closely spaced and highly correlated sources even if the number of sources is unknown. Figueiredo et al. [14] proposed a gradient projection algorithm to solve the bound-constrained quadratic programming formulation. Although it is simple to implement, its regularization parameter is difficult to select and it is only suitable for the single measurement vector (SMV) model, which limits its practical engineering application.

In this paper, we propose a novel multiple sparse measurement gradient reconstruction method, called MSMGR, for DOA estimation in CS. The method is derived by transforming quadratically constrained linear programming (QCLP) into an unconstrained convex optimization to overcome the drawback that the ℓ1-norm is nondifferentiable when minimizing the ℓ1-norm for sparse reconstruction. The steepest descent step [15] and the Barzilai-Borwein step [16] are alternately used as the search step to significantly improve the convergence rate and estimation performance. Furthermore, the singular value decomposition (SVD) is incorporated into the proposed method to reduce the computational complexity and the sensitivity to noise. The proposed method is suitable for both SMV and MMV, and it has higher estimation accuracy and resolution than existing methods, especially in scenarios with low SNR, a small number of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method as compared with existing methods.
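As background for the step-size alternation mentioned above, the following small sketch (our own illustrative example on a generic least-squares problem, not the paper's algorithm) alternates the exact steepest-descent step with the BB1 Barzilai-Borwein step:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
y = A @ x_true                       # consistent system, unique LS solution

def grad(x):
    """Gradient of f(x) = 0.5 * ||y - A x||^2."""
    return A.T @ (A @ x - y)

x = np.zeros(10)
g = grad(x)
x_prev = g_prev = None
for k in range(500):
    if np.linalg.norm(g) < 1e-12:
        break
    if k % 2 == 0 or x_prev is None:
        Ag = A @ g                       # exact steepest-descent step
        alpha = (g @ g) / (Ag @ Ag)
    else:
        dx, dg = x - x_prev, g - g_prev  # Barzilai-Borwein (BB1) step
        alpha = (dx @ dx) / (dx @ dg)
    x_prev, g_prev = x, g
    x = x - alpha * g
    g = grad(x)
```

The BB step reuses curvature information from the previous iterate, which typically gives far faster convergence on quadratics than steepest descent alone.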

#### 2. Problem Formulation

Consider K narrowband far-field sources s_k(t), k = 1, ..., K, impinging on a sensor array consisting of M omnidirectional sensors from distinct directions θ_k, k = 1, ..., K. The array observation model at the tth snapshot can be formulated as

y(t) = A(θ)s(t) + n(t), (1)

where n(t) is a complex Gaussian white noise vector with zero mean and covariance matrix σ²I and a(θ_k), the kth column of A(θ), is the steering vector of the source from the direction θ_k. Although DOA estimation based on a single snapshot, which is a typical SMV model, has its value, the number of snapshots is larger than one in most practical applications. Correspondingly, the multiple snapshots model is a typical MMV model.

In order to cast the DOA estimation as a sparse reconstruction, let {θ̃_1, ..., θ̃_N} denote a fine enough grid which covers the entire spatial domain of N potential source directions, with N ≫ K, so that the true directions are aligned with or close to the grid points. This means that there exist grid points θ̃_{n_1}, ..., θ̃_{n_K} equal to θ_1, ..., θ_K, respectively. Thus, we have

a(θ_k) = a(θ̃_{n_k}), k = 1, ..., K. (2)

The multiple snapshots model can be written in the following sparse form:

Y = A(θ̃)X + N, (3)

where L is the number of snapshots, Y = [y(1), ..., y(L)], and A(θ̃) = [a(θ̃_1), ..., a(θ̃_N)] is the array manifold matrix corresponding to all N potential directions, which is also defined as an overcomplete dictionary in CS. x(t) = [x_1(t), ..., x_N(t)]^T is the sparse vector with nonzero elements at the positions corresponding to the true directions and zero elements at the remaining positions, where (·)^T denotes the transpose operation. Hence, the matrix X = [x(1), ..., x(L)] has K nonzero rows, that is, it is row K-sparse, since x(1), ..., x(L) share the common support. Obviously, the DOA estimation problem of multiple snapshots is that of identifying the row support of the unknown matrix X from the compressed measurement matrix, which is given by

Y = ΦX + N, (4)

with sensing matrix Φ = ΨA(θ̃), noise matrix N, and a common measurement matrix Ψ of the size M′ × M with M′ < M, where M′ is the number of nonadaptive linear projection measurements.
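The overcomplete dictionary and the row-sparse MMV model above can be sketched as follows (a half-wavelength ULA and a 1° grid are our own assumptions here, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(theta_deg, M, d_over_lambda=0.5):
    """ULA steering vector for a source at direction theta (degrees)."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg)))

M, L = 8, 50                        # sensors, snapshots
grid = np.arange(-90.0, 91.0, 1.0)  # all N potential directions
Phi = np.stack([steering(th, M) for th in grid], axis=1)   # M x N dictionary

true_dirs = [-10.0, 20.0]           # true sources lie on the grid
X = np.zeros((len(grid), L), dtype=complex)
for th in true_dirs:
    k = int(np.argmin(np.abs(grid - th)))
    X[k] = rng.standard_normal(L) + 1j * rng.standard_normal(L)

Y = Phi @ X                         # noiseless MMV measurements; X is row 2-sparse
```

Identifying the two nonzero rows of X from Y is precisely the sparse DOA estimation problem described above.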

It is well known that sparse sources can be reconstructed by solving the ℓ0-norm minimization problem. However, this optimization problem is nonconvex, and solving it is both numerically unstable and computationally unacceptable [17]. The problem is therefore relaxed into the ℓ1-norm minimization problem [18], so that we can accurately reconstruct the matrix X by solving the following QCLP problem:

min ‖x̄‖_1 subject to ‖Y − ΦX‖_F ≤ ε, (5)

where x̄ is the unknown sparse vector, ε bounds the noise level, and ‖·‖_F represents the Frobenius norm of matrices or the Euclidean norm of vectors. The nth entry of x̄ is equal to the Euclidean norm of the nth row of X; that is, x̄_n = ‖X_{n,:}‖_2.
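Note that the objective in the QCLP above is the ℓ1-norm of the vector of row norms, i.e., the mixed ℓ2,1-norm of X, which is the standard convex surrogate for row sparsity; a minimal sketch:

```python
import numpy as np

def row_norm_vector(X):
    """The vector whose n-th entry is the Euclidean norm of the n-th row of X."""
    return np.linalg.norm(X, axis=1)

def l21_norm(X):
    """||x_bar||_1: the convex surrogate that promotes row sparsity."""
    return float(np.sum(row_norm_vector(X)))

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 5.0]])
# row norms are [5, 0, 5], so the objective value is 10
```

Minimizing this quantity drives entire rows of X to zero simultaneously, rather than individual entries.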

#### 3. DOA Estimation

The SVD is employed on the matrix Y to reduce the computational complexity and the sensitivity to the noise. Hence, we have

Y = UΛV^H = [U_s U_n]ΛV^H, (6)

where U is the orthogonal matrix of left singular vectors and (·)^H denotes the conjugate transpose operation. U_s and U_n denote the signal subspace and noise subspace, respectively. The singular values of the matrix Y are arranged from the largest to the smallest, that is, λ_1 ≥ λ_2 ≥ ⋯, where the K large singular values are dominant. U_s and U_n, respectively, consist of the left singular vectors corresponding to the K large singular values and the remaining small singular values. Denote Y_SV = YVD_K, X_SV = XVD_K, and N_SV = NVD_K, where D_K = [I_K 0]^T, I_K is an identity matrix of the size K × K, and 0 is a zero matrix of the size K × (L − K), so that we have

Y_SV = ΦX_SV + N_SV. (7)

As one may note, the dimension of the measurement matrix is reduced from M′ × L to M′ × K, which can significantly reduce the computational complexity and the sensitivity to the noise, especially in the scenario with a large number of snapshots. The essence of the dimension reduction is to keep the signal subspace and discard the noise subspace. By using the SVD, (5) can be rewritten in the following form:

min ‖x̄_SV‖_1 subject to ‖Y_SV − ΦX_SV‖_F ≤ ε, (8)

where x̄_SV is also a sparse vector and shares the same support with x̄. To overcome the drawback that the ℓ1-norm is nondifferentiable when minimizing the ℓ1-norm to solve for the sparse sources, (8) is transformed into an unconstrained convex optimization by using Lagrange multiplication [19, 20]:

min ‖Y_SV − ΦX_SV‖_F² + λ‖x̄_SV‖_1, (9)

where λ is nonnegative and is called the regularized factor, which serves as a tradeoff between the ability of suppressing noise and source sparsity. Note that the search path, which is obtained by projecting the negative gradient of the objective function in (9) onto the feasible set, cannot support a backtracking line search well. Therefore, we adopt the ℓ1-norm of matrices to change the search direction, which results in

min ‖Y_SV − ΦX_SV‖_F² + λ‖X_SV‖_1, (10)

where ‖·‖_1 denotes the ℓ1-norm of matrices. A detailed derivation of using MSMGR for DOA estimation is as follows. Assume that R_k denotes the residual of the kth iteration, Λ_k denotes the support of the kth iteration, Φ_{Λ_k} denotes the submatrix of Φ with columns indexed by Λ_k, and X̂_k denotes the reconstructed source after the kth iteration. Therefore, the objective function of the kth iteration can be written as

f(X̂_k) = ‖Y_SV − Φ_{Λ_k}X̂_k‖_F² + λ‖X̂_k‖_1. (11)

The purpose of the current iteration is to find the sparse reconstructed source X̂_k which minimizes the objective function f(X̂_k); that is, the residual is minimized after the current iteration. The expansion of (11) can be expressed as

f(X̂_k) = tr[(Y_SV − Φ_{Λ_k}X̂_k)^H(Y_SV − Φ_{Λ_k}X̂_k)] + λ Σ_{i,j} |x̂_{ij}|, (12)

where x̂_{ij} denotes the element of the ith row and jth column of the matrix X̂_k. With further derivation, the minimum of the objective function (12) is equal to the maximum of (13), which can be given in the following form:

g(X̂_k) = tr[X̂_k^H Φ_{Λ_k}^H Y_SV + Y_SV^H Φ_{Λ_k} X̂_k − X̂_k^H Φ_{Λ_k}^H Φ_{Λ_k} X̂_k] − λ Σ_{i,j} |x̂_{ij}|. (13)

Based on the properties of the matrix trace, (13) can be further simplified to

g(X̂_k) = 2Re{tr[X̂_k^H Φ_{Λ_k}^H Y_SV]} − tr[X̂_k^H Φ_{Λ_k}^H Φ_{Λ_k} X̂_k] − λ Σ_{i,j} |x̂_{ij}|. (14)

Then, the negative gradient is obtained by taking the partial derivative of (14) with respect to X̂_k, which is given by

G_k = 2Φ_{Λ_k}^H(Y_SV − Φ_{Λ_k}X̂_k) − λP_k, (15)

where P_k is referred to as the polarity matrix that judges the polarity of the nonzero elements:

p_{ij} = x̂_{ij}/|x̂_{ij}| if x̂_{ij} ≠ 0, and p_{ij} = 0 otherwise, (16)

where p_{ij} denotes the element of the ith row and jth column of the matrix P_k. Since the conventional search step is too small because of the orthogonality, the steepest descent step and the Barzilai-Borwein step are alternately exploited as the search step in order to improve the convergence rate and estimation performance. Then, we have

μ_k = μ_SD for odd k, μ_k = μ_BB for even k, (17)

where μ_SD and μ_BB are the steepest descent step and the Barzilai-Borwein step, respectively. The specific steps of MSMGR are given as follows.

*Initialization*. Set the number of iterations k = 1, the residual R_0 = Y_SV, the support Λ_0 = ∅, the submatrix Φ_{Λ_0} = ∅, and the reconstructed source X̂_0 = 0.

*Step 1. *Calculate the inner product of the residual of the (k − 1)th iteration and the sensing matrix. Then, update the index λ_k = arg max_n ‖Φ_n^H R_{k−1}‖_2, where Φ_n denotes the nth column of Φ.

*Step 2. *Update the support Λ_k = Λ_{k−1} ∪ {λ_k} and the corresponding submatrix Φ_{Λ_k}.

*Step 3. *Calculate the negative gradient G_k and the search step μ_k in terms of (15) and (17), respectively.

*Step 4. *Update the reconstructed source X̂_k = X̂_{k−1} + μ_k G_k in terms of the negative gradient and the search step.

*Step 5. *Update the polarity matrix P_k by zero-padding, appending a zero block of the size matching the newly added support index.

*Step 6. *Update the residual R_k = Y_SV − Φ_{Λ_k}X̂_k. If the residual satisfies the stopping criterion, stop the iteration; otherwise, set k = k + 1 and return to Step 1.

The core of the new method is updating the polarity matrix by the zero-padding process in Step 5, since the dimensions of the support and the corresponding submatrix are both expanded in Step 2. Moreover, the zero-padding process guarantees the precision of the DOA estimation. The spectrum of the proposed method is obtained by estimating the reconstructed source power over all potential directions. Like other spectral-based methods, the true directions are estimated from the locations of the highest peaks of the spectrum.
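Steps 1–6 above can be sketched in a few dozen lines. The following is a deliberately simplified, real-valued rendition under our own naming, using only the steepest-descent step and SVD dimension reduction; it is an illustration of the greedy gradient-pursuit structure, not the authors' exact implementation:

```python
import numpy as np

def msmgr_sketch(Y, Phi, K, n_iter=60):
    """Greedy gradient reconstruction in the spirit of Steps 1-6 (simplified).

    Y   : M x L real measurement matrix
    Phi : M x N dictionary (sensing matrix)
    K   : target row sparsity / number of sources
    Returns (X, support) with X an N x K row-sparse estimate in the SVD-reduced domain.
    """
    # SVD dimension reduction: keep the K dominant right singular directions
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    Ysv = Y @ Vh[:K].T            # M x K reduced measurements

    N = Phi.shape[1]
    X = np.zeros((N, K))
    R = Ysv.copy()                # residual (Initialization)
    support = []
    for _ in range(n_iter):
        if len(support) < K:      # Steps 1-2: correlate residual, grow support
            corr = np.linalg.norm(Phi.T @ R, axis=1)
            corr[support] = 0
            support.append(int(np.argmax(corr)))
        S = np.array(support)
        G = Phi[:, S].T @ R       # Step 3: negative gradient on the support
        Ag = Phi[:, S] @ G
        denom = np.sum(Ag * Ag)
        if denom < 1e-12:
            break
        mu = np.sum(G * G) / denom            # steepest-descent step length
        X[S] += mu * G                        # Step 4: update the source
        R = Ysv - Phi[:, S] @ X[S]            # Step 6: update the residual
    return X, support

# usage on synthetic noiseless data with two active grid rows
rng = np.random.default_rng(3)
M, N, L, K = 20, 40, 30, 2
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary columns
X_true = np.zeros((N, L))
rows = [5, 17]
X_true[rows] = rng.standard_normal((K, L))
Y = Phi @ X_true

X_hat, support = msmgr_sketch(Y, Phi, K)
```

The estimated "spectrum" is then the row power of X_hat, whose highest peaks give the direction estimates.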

#### 4. Simulation Results

In this section, the superior performance of the proposed method is shown as compared with the existing JLZA-DOA and MUSIC methods by several numerical simulations. Consider spatial sources impinging on a uniform linear array (ULA) with interelement spacing d, where λ denotes the wavelength of the sources. In the ULA case, the steering vector corresponding to the source from the direction θ_k is given by

a(θ_k) = [1, e^{−j2πd sin(θ_k)/λ}, ..., e^{−j2π(M−1)d sin(θ_k)/λ}]^T,

where M denotes the number of array elements. In the simulation, the regularized factor can be chosen as suggested in [21]. Following [22], it is easy to see that the unique minimum of (10) is the zero matrix when the regularized factor is sufficiently large. In the simulation, the average root mean square error (RMSE) of the DOA estimation is defined as the significant performance index:

RMSE = sqrt( (1/(QK)) Σ_{q=1}^{Q} Σ_{k=1}^{K} (θ̂_k(q) − θ_k)² ),

where Q is the number of independent Monte Carlo runs and θ̂_k(q) is the estimate of θ_k in the qth run. The resolution of the grid is closely related to the precision of the DOA estimation. A coarse grid leads to poor precision, but a too fine grid increases the computational complexity. Therefore, an adaptive grid refinement method is used as a tradeoff between precision and computational complexity. In the simulation, we set a coarse grid over the whole angular range and make a local fine grid in the vicinity of the estimated angles.
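The average RMSE defined above is straightforward to compute; a minimal sketch (the array shapes are our own convention):

```python
import numpy as np

def average_rmse(estimates, true_doas):
    """Average RMSE over Monte Carlo runs.

    estimates: (Q, K) array; row q holds the K estimated DOAs of run q (degrees).
    true_doas: length-K sequence of true DOAs (degrees).
    """
    err = np.asarray(estimates) - np.asarray(true_doas)
    return float(np.sqrt(np.mean(err ** 2)))

# two runs, two sources at 10 and 20 degrees
est = np.array([[9.8, 20.3],
                [10.1, 19.7]])
rmse = average_rmse(est, [10.0, 20.0])
```

Averaging the squared errors over both runs and sources before taking the square root matches the 1/(QK) normalization in the definition.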

In the first simulation, we show the spatial spectra of the three methods in the scenario with low SNR, a small number of snapshots, and two closely spaced uncorrelated sources. The spatial spectra are shown in Figure 1 with SNR = 3 dB and 50 snapshots. As can be seen from Figure 1, the spatial spectra obtained by MUSIC and JLZA-DOA have only one peak, so MUSIC and JLZA-DOA cannot identify the closely spaced sources accurately. In contrast, the proposed MSMGR has a nearly ideal spectrum and gives a precise estimate of the closely spaced sources. Therefore, MSMGR outperforms JLZA-DOA and MUSIC in terms of the spatial spectrum.