Journal of Electrical and Computer Engineering
Volume 2015 (2015), Article ID 287915, 8 pages
Research Article

DOA Estimation Based on Sparse Signal Recovery Utilizing Double-Threshold Sigmoid Penalty

Department of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710129, China

Received 14 September 2015; Revised 3 November 2015; Accepted 10 November 2015

Academic Editor: John N. Sahalos

Copyright © 2015 Hanbing Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper proposes a new algorithm based on sparse signal recovery for estimating the direction of arrival (DOA) of multiple sources. We formulate the problem as fitting the sample covariance matrix with the unknown source powers and enhance sparsity with a double-threshold sigmoid penalty function that accurately approximates the ℓ0 norm. Our method can distinguish closely spaced sources, does not require knowledge of the number of sources, performs well at low SNR, and can resolve more sources than competing methods. Simulations confirm the performance of the proposed method.

1. Introduction

The estimation of the direction of arrival (DOA) of multiple sources plays a key role in many applications, including radar, sonar, and wireless communication. Numerous superresolution algorithms for DOA estimation have been developed. The nonparametric methods include the Capon method [1] and subspace-based methods. The classical subspace-based algorithms, such as MUSIC, which first exploited the orthogonality between the signal subspace and the noise subspace [2], and ESPRIT, which utilizes the rotational invariance of the signal subspace [3], achieve excellent performance at high SNR, especially when many snapshots are available. The maximum likelihood (ML) methods, including deterministic maximum likelihood (DML) and stochastic maximum likelihood (SML), possess good statistical properties [4–7] but require a large number of samples. All of the above methods require knowledge of the number of sources.

Sparse representation of signals and compressed sensing have become a hot topic in many fields [8, 9], and DOA estimation methods based on sparse reconstruction have attracted increasing attention. The well-known ℓ1-SVD is a strong algorithm [10]; it combines ℓ1-norm-based sparse signal recovery with the singular value decomposition (SVD). The ℓ1-SVD algorithm can handle closely spaced correlated sources when the number of sources is known, but its performance degrades without that knowledge. In [11], the joint ℓ2,0-approximation DOA (JLZA-DOA) algorithm is proposed. This algorithm uses a mixed ℓ2,0-norm approximation; it can achieve high resolution without knowing the number of sources and with only a few snapshots. However, JLZA-DOA may fail to produce a good estimate at low SNR.

The algorithms mentioned in the previous paragraph are all based on the multiple measurement vectors (MMV) problem [12]. Recently, the coprime array technique was proposed [13, 14]. This technique enhances the degrees of freedom (DOFs) of the array. By vectorizing the sample covariance matrix, it ultimately deals with a sparse covariance fitting problem once the signals are sparsely represented. That is, the MMV problem is transformed into a single measurement vector (SMV) problem. In the SMV case, DOA estimation can be implemented with many techniques, such as orthogonal matching pursuit (OMP) and the improved smoothed ℓ0 approximation algorithm (ISL0) [15, 16]. The OMP algorithm enjoys a short computational time; however, it needs to know the sparsity in advance, that is, the number of sources. ISL0 performs better than OMP but takes longer to compute, and its performance degrades at low SNR.

In this paper, we adopt the coprime array technique and propose a new Newton-like algorithm based on a double-threshold sigmoid penalty for the sparse covariance fitting problem. Direct ℓ0-norm optimization is NP-hard. Many algorithms approximate the ℓ0 norm by the ℓ1 norm, but the estimation errors increase when the magnitudes of the nonzero elements to be estimated exceed one. In addition, the ℓ1-norm method is not robust to noise and takes many iterations to converge. Instead of replacing the ℓ0 norm with the ℓ1 norm, we use the double-threshold sigmoid penalty function to approximate the ℓ0 norm [17]. By adjusting the upper threshold at each iteration, our algorithm preserves most of the advantages of the ℓ0 norm, which improves the estimation performance, and it also accelerates convergence. Numerical simulations demonstrate that the proposed method achieves high resolution without knowledge of the number of sources and performs well at low SNR.

Throughout the paper, lower-case and upper-case bold letters denote vectors and matrices, respectively. (·)^T, (·)^H, (·)^*, and (·)^{-1} denote transpose, conjugate transpose, complex conjugate, and inverse, respectively. ⊗ denotes the Kronecker product, and E[·] is the statistical expectation operator.

2. Problem Model

Consider a coprime array with compressed interelement spacing, as illustrated in Figure 1. The two subarrays consist of M and N sensors, respectively, where M and N are coprime. The unit spacing d is usually set to λ/2, where λ denotes the wavelength. Consequently, the array sensors are positioned at {0, Nd, 2Nd, …, (M−1)Nd} ∪ {0, Md, 2Md, …, (N−1)Md}; the two subarrays share the first sensor at the zeroth position. Denote the positions of the array sensors as d_i, i = 1, …, M + N − 1, with d_1 = 0.
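As a concrete illustration of this geometry, the sketch below (parameter names M, N, d follow the text; the function name is ours) computes the physical sensor positions of such a coprime array:

```python
import numpy as np

def coprime_positions(M, N, d=0.5):
    """Positions (in wavelengths, d = lambda/2 by default) of a coprime
    array: M sensors at spacing N*d and N sensors at spacing M*d,
    sharing the sensor at the origin."""
    sub1 = N * d * np.arange(M)        # 0, Nd, 2Nd, ..., (M-1)Nd
    sub2 = M * d * np.arange(N)        # 0, Md, 2Md, ..., (N-1)Md
    return np.unique(np.concatenate([sub1, sub2]))

# Coprime M = 5, N = 7: the shared origin leaves M + N - 1 = 11 sensors.
print(coprime_positions(5, 7))
```

Because M and N are coprime, the two subarrays overlap only at the origin, so the physical array has exactly M + N − 1 sensors.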

Figure 1: Coprime array configuration.

Suppose that K uncorrelated narrow-band signals impinge on the array from angles θ_1, …, θ_K. The t-th observation can be represented as

x(t) = A s(t) + n(t),

where s(t) is the vector of unknown signals and n(t) denotes the additive white Gaussian noise. A = [a(θ_1), …, a(θ_K)] is the manifold matrix, whose k-th column corresponds to the direction of the k-th source, a(θ_k) = [e^{−j(2π/λ) d_1 sin θ_k}, …, e^{−j(2π/λ) d_{M+N−1} sin θ_k}]^T (k = 1, …, K).

The covariance matrix of the received data vector can be obtained as

R = E[x(t) x(t)^H] = A R_s A^H + σ² I,

where R_s = E[s(t) s(t)^H], σ² is the noise power, and I represents an identity matrix.

The classical DOA estimation problem can be reformulated as a sparse representation problem. We consider a grid of Q equally spaced angles θ̄_1, …, θ̄_Q with Q ≫ K. Assuming {θ_k} is a subset of the grid, we construct an overcomplete matrix Ā = [a(θ̄_1), …, a(θ̄_Q)] by collecting the steering vectors corresponding to all potential source locations. Accordingly, received signal model (2) can be represented as x(t) = Ā s̄(t) + n(t), where s̄(t) is a sparse vector whose nonzero entries correspond to the source locations. That is, the q-th component of s̄(t) is nonzero only if θ̄_q ∈ {θ_1, …, θ_K}.

Consequently, we can obtain a spatial covariance matrix in terms of Ā; it takes the form

R = Ā P Ā^H + σ² I,

where P = E[s̄(t) s̄(t)^H]. In practice, the spatial covariance matrix is replaced by the sample covariance matrix R̂ = (1/T) Σ_{t=1}^{T} x(t) x(t)^H, where T denotes the number of snapshots.
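In code, the sample covariance estimate is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def sample_covariance(X):
    """Finite-sample estimate of the covariance matrix from T snapshots.
    X has shape (num_sensors, T); returns (1/T) * X @ X^H."""
    T = X.shape[1]
    return X @ X.conj().T / T

# Two orthogonal snapshots of a 2-sensor array give the identity matrix.
X = np.array([[1, 1], [1, -1]], dtype=complex)
print(sample_covariance(X))
```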

Under the assumption that the sources are uncorrelated, P is a sparse diagonal matrix with only K nonzero entries. Denote p = diag(P); it is obvious that p is a sparse vector whose K nonzero entries lie at the positions corresponding to the source locations. Moreover, the elements of p are real valued and nonnegative; that is, p_q ≥ 0 for all q.

Applying vectorization to (6) yields

y = vec(R) = B p + σ² vec(I),

where B = Ā^* ⊙ Ā and ⊙ denotes the Khatri-Rao (column-wise Kronecker) product. The distinct rows of B behave like the manifold of an array whose sensor locations are given by the values in the set of cross differences and self-differences of the physical sensor positions. This difference set contains a contiguous run of lags, so y resembles the observation data of a long array comprising the actual elements together with additional virtual elements.
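The vectorization step rests on the identity vec(Ā diag(p) Ā^H) = (Ā^* ⊙ Ā) p. The sketch below checks it numerically; the half-wavelength ULA geometry and all numeric values are illustrative choices of ours, not the paper's:

```python
import numpy as np

n = 6                                   # number of sensors
pos = 0.5 * np.arange(n)                # half-wavelength ULA (illustrative)
angles = np.deg2rad([10.0, 50.0])       # two source directions
A = np.exp(-2j * np.pi * np.outer(pos, np.sin(angles)))  # manifold matrix
p = np.array([2.0, 1.0])                # source powers
sigma2 = 0.1                            # noise power

# Covariance R = A diag(p) A^H + sigma^2 I
R = (A * p) @ A.conj().T + sigma2 * np.eye(n)

# Khatri-Rao product B = A^* (Khatri-Rao) A, built column by column:
# vec(a_k a_k^H) = conj(a_k) kron a_k
B = np.stack([np.kron(A[:, k].conj(), A[:, k]) for k in range(A.shape[1])],
             axis=1)
y = B @ p + sigma2 * np.eye(n).reshape(-1, order='F')

# vec(R) (column-major) matches B p + sigma^2 vec(I)
assert np.allclose(y, R.reshape(-1, order='F'))
```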

To avoid complex arithmetic and reduce the computational cost, we can separate the real and imaginary parts. Then (7) can be reformulated as

y_r = B_r p + n_r,

where y_r = [Re(y)^T, Im(y)^T]^T, B_r = [Re(B)^T, Im(B)^T]^T, and n_r collects the corresponding noise terms.
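This real-valued reformulation amounts to stacking real parts over imaginary parts; a small self-contained sketch with synthetic data (all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
p = np.abs(rng.standard_normal(5))      # powers are real and nonnegative
y = B @ p

# Stack real parts over imaginary parts: same linear system, real arithmetic.
B_r = np.vstack([B.real, B.imag])
y_r = np.concatenate([y.real, y.imag])
assert np.allclose(B_r @ p, y_r)
```

Because p is real, B p has real part Re(B) p and imaginary part Im(B) p, so the stacked real system is exactly equivalent to the complex one.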

Our proposed method works in both the real-valued and the complex-valued case; in the following discussion we consider only the complex-valued case.

3. Proposed Method

3.1. Proposed L0A Algorithm

The problem expressed in (7) can be solved by the regularization method, whose model can be written as

min_p ||y − B p||_2^2 + λ J(p),

where J(p) is the penalty term that approximates the ℓ0 norm and λ denotes the regularization parameter, which controls the tradeoff between the sparsity of the signal and the residual energy. In the following, we derive a gradient-based method with a double-threshold sigmoid (DTHS) penalty.

Before deriving our method, we first deal with the fact that the derivative of |x| is not defined at zero, which always matters when handling sparse problems. Some approximation techniques for this problem are adopted in practice [18]. We approximate |x| as sqrt(|x|² + ε), where ε is a small positive constant; the approximation approaches |x| as ε → 0. ε is set to 0.0001 in this paper. Then d|x|/dx ≈ x / sqrt(x² + ε), and this relation also holds in the complex-valued case.
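The smooth surrogate for |x| and its now-everywhere-defined derivative can be written directly (function names are ours; ε = 1e-4 as in the text):

```python
import numpy as np

EPS = 1e-4  # the small constant epsilon from the text

def smooth_abs(x, eps=EPS):
    """Differentiable surrogate for |x|: sqrt(|x|^2 + eps).
    Also valid for complex x, where |x| is the modulus."""
    return np.sqrt(np.abs(x) ** 2 + eps)

def smooth_abs_grad(x, eps=EPS):
    """Derivative of sqrt(x^2 + eps) for real x; well defined at x = 0."""
    return x / np.sqrt(x ** 2 + eps)
```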

In order to approximate the ℓ0 norm more accurately, thresholding penalties have been utilized. Then (10) can be rewritten as

min_p ||y − B p||_2^2 + λ Σ_q f(|p_q|),

where f(·) denotes the thresholding function and |p_q| is computed through the smooth approximation above.

Now, we adopt a DTHS function in our method. The ideal one is divided into three parts by the upper and lower threshold points, as shown in Figure 2. Denote the upper threshold as β and the lower one as β_l. Values smaller than β_l are set to zero, values between β_l and β are multiplied by a particular constant, and values larger than β are set to a particular positive value. However, the derivative at the two threshold points does not exist, so we replace the ideal DTHS function with a smooth sigmoid-based function [17], which approximates the ideal DTHS function closely when the steepness parameter a is large; we name this function the DTHS function and set a to 50 in this paper. Substituting the thresholding function in (12) with the DTHS function, we obtain a new problem (14).
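Since the extracted text does not reproduce the closed-form DTHS expression from [17], the sketch below is only one plausible sigmoid-based construction with the three-regime behavior described above (near zero below the lower threshold, roughly linear between the thresholds, saturating above the upper threshold); the function name and the exact blend are our assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dths(x, t_low, t_up, c=1.0, a=50.0):
    """Smooth double-threshold penalty (illustrative, not the exact form
    of [17]): ~0 for x < t_low, ~c*x between the thresholds, ~1 above
    t_up. a = 50 controls how sharply the sigmoids switch regimes."""
    gate = sigmoid(a * (x - t_low)) - sigmoid(a * (x - t_up))
    return c * x * gate + sigmoid(a * (x - t_up))
```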

Figure 2: The ideal DTHS function and DTHS function.

Figure 3 shows the graph of the DTHS function for four different upper thresholds β. For any given x, the DTHS penalty approaches the ideal thresholding behavior, and hence the ℓ0 norm, as β decreases. However, with a smaller β the objective function may have many local minima; as β increases, the objective becomes smoother. We therefore solve a sequence of optimization problems: start by solving (14) with a larger β; then reduce β by a small factor and solve (14) again, repeating until β reaches its final value.

Figure 3: DTHS function with four upper thresholds.

We can obtain the gradient of (11) as

∇J(p) = 2 B^H (B p − y) + λ Λ(p) p,

where Λ(p) is a diagonal matrix whose diagonal entries are determined by the derivative of the DTHS penalty evaluated at the corresponding entries of p.

Instead of calculating the Hessian matrix of (12), we approximate it as H ≈ 2 B^H B; this accelerates the iteration process. The k-th quasi-Newton iteration for problem (8) is then written as

p^(k+1) = p^(k) − μ H^{-1} ∇J(p^(k)),

where μ denotes the step size. The whole algorithm is summarized in Algorithm 1; we call our method the ℓ0-approximation (L0A) algorithm.

Algorithm 1 (L0A algorithm).
Initialization. Set p^(0) = B^+ y, the step size μ, the regularization parameter λ, the upper threshold β, the shrink factor, and k = 0.
Main Iteration.
(1) Calculate the gradient ∇J(p^(k)).
(2) Update p^(k+1) by the quasi-Newton step (17).
(3) Shrink the upper threshold β by the chosen factor.
(4) If β falls below its floor value, keep it fixed there.
(5) If the stopping criterion is satisfied, stop the iteration.
Output. The solution is p^(k) after k iterations.
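For orientation only, the loop above might look as follows in code. This is a minimal sketch under stated assumptions: the exact DTHS gradient of [17] is replaced by a crude sigmoid-gate surrogate, nonnegativity of the powers is enforced by clipping, and all parameter values (threshold scale 0.4, shrink factor 0.7, step size 0.01) are illustrative choices from the ranges suggested in the text:

```python
import numpy as np

def l0a(B, y, lam=5.0, mu=0.01, c1=0.4, c2=0.7, a=50.0,
        tol=1e-6, max_iter=500):
    """Illustrative L0A-style iteration (real-valued case).
    Not the authors' exact algorithm: the DTHS derivative is replaced
    by a simple sigmoid-gate surrogate."""
    p = np.linalg.pinv(B) @ y                 # initialization p0 = B^+ y
    t_up = c1 * np.max(np.abs(p))             # initial upper threshold
    H_inv = np.linalg.pinv(B.T @ B)           # quasi-Newton Hessian surrogate
    for _ in range(max_iter):
        # data-fit gradient plus a rough stand-in for the DTHS penalty term
        gate = 1 / (1 + np.exp(-a * p)) - 1 / (1 + np.exp(-a * (p - t_up)))
        grad = B.T @ (B @ p - y) + lam * gate
        p_new = np.maximum(p - mu * (H_inv @ grad), 0.0)  # powers >= 0
        t_up *= c2                            # shrink the upper threshold
        if np.linalg.norm(p_new - p) < tol:
            p = p_new
            break
        p = p_new
    return p
```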

We initialize p as p^(0) = B^+ y to accelerate the iteration, where B^+ denotes the Moore-Penrose pseudoinverse of B. As discussed above, if β is too small, function (15) may have many local minima, which degrades the estimation performance of our method, so β should be initialized with a larger value. To preserve most of the advantages of the ℓ0 norm, β should also not be chosen too large. Consequently, the initial value of β should be selected moderately according to the signal magnitude; usually, we set β_0 = c · max|p^(0)|, where |·| denotes the absolute value. As for the lower threshold, an improper choice would let some entries close to zero be picked into the nonzero components of p; consequently, we fix it at 0. According to numerous experiments, we suggest that the scale constant c be set from 0.3 to 0.5 and the shrink factor from 0.6 to 0.8. In this paper, we set the step size μ to 0.01.

Now, we discuss the selection of the regularization parameter λ. First, we estimated an approximate range by calculating the root mean square error (RMSE) of our proposed method as a function of λ, as shown in Figure 4. It demonstrates that a low RMSE is achieved when λ lies roughly between 5 and 10. If λ is below this range, false peaks appear, worsening as λ decreases, and the speed of convergence also slows; if λ is above this range, some true peaks may disappear, and the performance deteriorates further as λ increases. Consequently, we set λ from 5 to 10. Further simulations on how λ affects the estimation performance are presented in Section 4.

Figure 4: RMSE as a function of λ.
3.2. Applying MUSIC Algorithm

Now we briefly describe how to apply the MUSIC algorithm for DOA estimation under the problem model obtained by vectorizing in (4); more details can be found in [19]. The virtual source signal becomes a single snapshot of the virtual array, and the rank of its covariance matrix is one in the noise-free case, so the problem resembles dealing with fully coherent sources. Consequently, MUSIC fails when multiple sources impinge on the array. As described in [19], the spatial smoothing technique can be used to overcome this problem. Since spatial smoothing requires a contiguous set of differences, we construct a new matrix by extracting precisely those rows of B that correspond to the consecutive differences and sorting them. This is equivalent to removing the corresponding rows from the observation vector and sorting them to get a new observation vector.

We divide this virtual array into overlapping subarrays and construct a full-rank covariance matrix so that the MUSIC algorithm can be applied for DOA estimation.
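The subarray averaging can be sketched as follows (function name ours); it averages the rank-one covariances of overlapping subarrays of the single virtual snapshot so that the smoothed matrix regains the rank that MUSIC needs:

```python
import numpy as np

def spatial_smoothing(z, L):
    """z: single snapshot of a virtual ULA (1-D complex array of length Nv).
    Averages the rank-one covariances of L overlapping subarrays of
    length Nv - L + 1."""
    Nv = len(z)
    m = Nv - L + 1                      # subarray length
    R = np.zeros((m, m), dtype=complex)
    for i in range(L):
        zi = z[i:i + m]                 # i-th overlapping subarray
        R += np.outer(zi, zi.conj())
    return R / L
```

For a single complex-exponential snapshot the shifted subarrays differ only by a phase, so the smoothed matrix stays rank one; with multiple sources the rank grows accordingly, which is what makes MUSIC applicable again.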

4. Simulation Results

In this section, we illustrate the simulation results of the proposed method. We consider the coprime array described in Section 2, consisting of 12 array sensors in total.

We assume four uncorrelated sources impinging on the array at 10°, 14°, 50°, and 61°. The SNR is set to 5 dB and the number of snapshots to 200. The estimated spectrum under these conditions is shown in Figure 5, where the grid resolution is 0.5°. It demonstrates that the L0A method performs well at low SNR and can distinguish closely spaced sources. We also consider two correlated sources at 21° and 30° under the same conditions. As shown in Figure 6, the estimated results are close to the actual angles, with small errors.

Figure 5: Spectrum of four uncorrelated sources.
Figure 6: Spectrum of two correlated sources.

By adopting coprime arrays, higher degrees of freedom can be achieved. Now we consider 25 uncorrelated sources uniformly spaced between −60° and 60°. We set the number of snapshots to 500, the SNR to 5 dB, and the grid resolution to 0.2°, and compare the performance of the L0A and MUSIC methods under the same conditions. As shown in Figures 7 and 8, the L0A method clearly outperforms MUSIC at low SNR: it resolves all the closely spaced sources, while the MUSIC method fails to resolve some of them.

Figure 7: Spectrum estimated by L0A-DOA.
Figure 8: Spectrum estimated by MUSIC.

Figure 9 shows the RMSEs of L0A, OMP, ISL0, and MUSIC as a function of SNR. Twenty uncorrelated sources with 500 snapshots are considered, and the grid resolution is set to 0.5°. The estimation performance of all these algorithms degrades as the SNR decreases, but the proposed L0A outperforms the others, especially at low SNR. In Figure 10, we compare their DOA estimation performance for different numbers of snapshots at 5 dB SNR. Performance improves as the number of snapshots increases; however, to achieve the same RMSE, the proposed method needs fewer snapshots.

Figure 9: RMSE as a function of SNR.
Figure 10: RMSE as a function of the snapshot number.

Our simulations were run on a computer with a 3.40 GHz Intel(R) Core(TM) i7-2600 CPU running 32-bit Microsoft Windows XP. Table 1 shows the computation time of the different algorithms for the same case as in Figure 9. SNR has little effect on the computation time of OMP and MUSIC; as the SNR decreases, the computation time of L0A and ISL0 increases. However, the proposed L0A requires less computation time than ISL0.

Table 1: Computation time comparison.

Finally, Figures 11 and 12 show the effect of λ on the DOA estimation performance for the same case as in Figure 7. Figure 11 shows a case where λ is smaller than the suggested range; there are clearly false peaks in the spectrum. When λ is larger than the suggested range, true peaks disappear at −50° and 55°, as shown in Figure 12. Consequently, an appropriate value of λ is important for DOA estimation.

Figure 11: Spectrum of L0A-DOA with a small λ.
Figure 12: Spectrum of L0A-DOA with a large λ.

5. Conclusion

In this paper, we propose a new method based on sparse reconstruction for finding the directions of sources impinging on a coprime array. By approximating the ℓ0 norm with the DTHS function and adjusting the upper threshold dynamically, our method achieves excellent performance. The proposed method not only performs well without knowledge of the number of sources but also works on correlated sources with small bias. Furthermore, it can distinguish closely spaced sources. With the high DOFs of the coprime array, the method can resolve more sources and performs better at low SNR and with fewer snapshots.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the National Natural Science Foundation of China (61171155 and 61571364).


  1. R. J. Weber and Y. Huang, “Analysis for Capon and MUSIC DOA estimation algorithms,” in Proceedings of the IEEE Antennas and Propagation Society International Symposium (APSURSI '09), pp. 1–4, IEEE, Charleston, SC, USA, June 2009.
  2. X. Li, G. Yang, and Y. Gu, “Simulation analysis of MUSIC algorithm of array signal processing DOA,” in Proceedings of the International Conference on Automatic Control and Artificial Intelligence (ACAI '12), pp. 1838–1841, March 2012.
  3. A. T. Y. Lok, P. Davoodian, R. C. Chin, J. Bermudez, Z. Aliyazicioglu, and H. K. Hwang, “Sensitivity analysis of DOA estimation using the ESPRIT algorithm,” in Proceedings of the IEEE Aerospace Conference, pp. 1–7, Big Sky, Mont, USA, March 2010.
  4. P. Stoica and K. C. Sharman, “Maximum likelihood methods for direction-of-arrival estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 7, pp. 1132–1143, 1990.
  5. X. Mestre, P. Vallet, and P. Loubaton, “On the resolution probability of conditional and unconditional maximum likelihood DOA estimation,” in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.
  6. J.-W. Shin, Y.-J. Lee, and H.-N. Kim, “Reduced-complexity maximum likelihood direction-of-arrival estimation based on spatial aliasing,” IEEE Transactions on Signal Processing, vol. 62, no. 24, pp. 6568–6581, 2014.
  7. C. E. Chen, F. Lorenzelli, R. E. Hudson, and K. Yao, “Stochastic maximum-likelihood DOA estimation in the presence of unknown nonuniform noise,” IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3038–3044, 2008.
  8. D. L. Donoho, M. Elad, and V. N. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 6–18, 2006.
  9. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  10. D. Malioutov, M. Cetin, and A. S. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 3010–3022, 2005.
  11. M. M. Hyder and K. Mahata, “Direction-of-arrival estimation using a mixed ℓ2,0-norm approximation,” IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4646–4655, 2010.
  12. S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2477–2488, 2005.
  13. S. Qin, Y. D. Zhang, and M. G. Amin, “Generalized coprime array configurations for direction-of-arrival estimation,” IEEE Transactions on Signal Processing, vol. 63, no. 6, pp. 1377–1390, 2015.
  14. K. Han and A. Nehorai, “Improved source number detection and direction estimation with nested arrays and ULAs using jackknifing,” IEEE Transactions on Signal Processing, vol. 61, no. 23, pp. 6118–6128, 2013.
  15. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
  16. M. M. Hyder and K. Mahata, “An improved smoothed l0 approximation algorithm for sparse representation,” IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 2194–2205, 2010.
  17. J. Shi, R. H. Ding, G. Xiang, and X. L. Zhang, “Complex-valued sparse recovery via double-threshold sigmoid penalty,” Signal Processing, vol. 114, pp. 231–244, 2015.
  18. M. Schmidt, G. Fung, and R. Rosales, “Optimization methods for l1-regularization,” Tech. Rep. TR-2009-19, University of British Columbia, 2009.
  19. P. Pal and P. P. Vaidyanathan, “Coprime sampling and the MUSIC algorithm,” in Proceedings of the IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE '11), pp. 289–294, IEEE, Sedona, Ariz, USA, January 2011.