Research Article  Open Access
Super-Resolution 2D DOA Estimation for a Rectangular Array via Reweighted Decoupled Atomic Norm Minimization
Abstract
This paper proposes a super-resolution two-dimensional (2D) direction-of-arrival (DOA) estimation algorithm for a rectangular array based on optimization of the atomic norm and a series of relaxation formulations. The atomic norm of the array response characterizes the minimum number of sources and gives rise to the atomic norm minimization (ANM) problem. However, using ANM for 2D angle estimation both restricts the resolution and incurs high computational complexity. Although an improved algorithm named decoupled atomic norm minimization (DAM) reduces the computational burden, its resolution for angle estimation remains relatively low. To overcome these limitations, we propose the direct minimization of the atomic norm, which is shown to be equivalent to a decoupled rank optimization problem in positive semidefinite (PSD) form. Our goal is to solve this rank minimization problem and recover two decoupled Toeplitz matrices in which the azimuth and elevation angles of interest are encoded. Since rank minimization is NP-hard, a novel sparse surrogate function is proposed to effectively approximate the two decoupled rank functions. The resulting relaxed optimization problem can then be solved via the majorization-minimization (MM) method. The proposed algorithm offers greatly improved resolution while maintaining the same computational complexity as the DAM algorithm. Moreover, it can estimate angles from a single snapshot without prior information on the number of sources, and it is robust to noise due to its iterative nature. In addition, the proposed surrogate function achieves local convergence faster than existing functions.
1. Introduction
The problem of two-dimensional (2D) direction-of-arrival (DOA) estimation arises in various signal processing applications, such as 2D angle estimation with planar arrays in multiple-input multiple-output (MIMO) communication systems. Although numerous algorithms for this purpose have emerged over the past three decades [1–5], there is still an urgent need for fast, super-resolution algorithms that can work with very few snapshots, especially for massive systems.
Conventional superresolution algorithms such as MUSIC and ESPRIT [6–9] are based on subspace techniques. Considering that these methods either require a heavy computational load for the 2D spectrum search or are suitable only for arrays that exhibit geometric invariance based on a number of samples that is larger than the number of sensors, we instead focus on algorithms that rely on the popular theory of compressed sensing (CS), which makes it possible to use a smaller number of snapshots, or even a single snapshot, to estimate the DOA, even when the sources are correlated. However, another challenge arises due to grid mismatch [10, 11].
Recently, the mathematical concept of continuous compressed sensing (CCS) was introduced by Candès and Fernandez-Granda [12]. CCS no longer follows traditional grid-based models, which require linear independence among the grid points, but rather relies on a super-resolution concept based on the total variation norm; thus, it fundamentally avoids the problem of grid mismatch and further facilitates the application of CS in parameter estimation. Subsequently, Chandrasekaran et al. [13] and Tang et al. [14] developed an atomic norm metric and presented an atomic norm minimization (ANM) problem in positive semidefinite (PSD) form utilizing the Vandermonde structure of the atoms. The theory was further developed for the estimation of one-dimensional (1D) parameters by Yang et al. [15], who also proposed the reweighted atomic norm minimization (RAM) algorithm to enhance the resolution [16]. In addition, the metric was extended to higher-dimensional parameter estimation by Xu et al. [17] and the authors [18, 19]. However, the computational complexity is enormous even for 2D applications. To combat this issue, Tian et al. [20] presented a decoupled atomic norm minimization (DAM) algorithm in which the two-level Toeplitz matrix is divided into two one-level matrices, thereby improving the calculation efficiency. However, the resolution is limited because the minimum angular interval is inversely proportional to the number of rows or columns in the array [12, 19, 20].
To address the above issues, we propose a low-complexity super-resolution optimization algorithm for 2D DOA estimation with a rectangular array. In essence, the proposed algorithm adopts an extended atomic norm that avoids the constraint on the angular intervals, and we show that the resulting problem can be translated into a decoupled rank minimization problem in PSD form. Since the rank minimization problem is difficult to solve directly, a novel sparse surrogate for it is presented, inspired by the works [16, 21]. The new concave objective function can then be iteratively optimized by the well-known majorization-minimization (MM) method. Our algorithm eventually takes the form of a weighted DAM problem in each iteration; thus, we name it reweighted decoupled atomic norm minimization (RDAM). A series of proofs are presented in this paper to illustrate the rationality of the proposed method, and corresponding simulations are provided to show its validity. Our proposed algorithm enhances the resolution to a great extent without increasing the computational burden, and it requires neither prior information on the number of sources nor the noise level, which can instead be estimated during the iterative process. In addition, our method is more robust to noise than the DAM algorithm due to its iterative nature.
2. Signal Model
Suppose that far-field narrowband sources impinge on a uniform rectangular array (URA) composed of sensors, where and are the numbers of elements in the and directions, respectively. For the sake of the subsequent discussion, we adopt a decoupled observation model for the array output [22]. As shown in Figure 1, we redefine the azimuth of the received signal as the angle between the signal and the plane instead of using the traditional definition; the definition of the elevation remains unchanged. Correspondingly, the single-snapshot array observation model without noise is as follows, where is the array response, with dimensions of , and is the th received source, with denoting the amplitude of the source and denoting the phase, which takes values in the range (the frequency part of each source is ignored for simplicity). Moreover, and are the 1D steering vectors in the and directions, respectively. Suppose that the array spacing is set to half the wavelength; then, we have and . Note that denotes the conjugate of the regular vector.
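As a concrete illustration of this decoupled observation model, the following sketch builds a noiseless single-snapshot observation for a hypothetical URA. The element counts (8 × 8), the half-wavelength spacing, and the steering-vector form exp(jπn sin(·)) are illustrative assumptions rather than the paper's exact notation:

```python
import numpy as np

def steering(n_elem, angle_rad):
    # Half-wavelength-spaced 1D steering vector: exp(j*pi*n*sin(angle))
    n = np.arange(n_elem)
    return np.exp(1j * np.pi * n * np.sin(angle_rad))

def single_snapshot(Mx, My, azimuths, elevations, amps, phases):
    # Decoupled noiseless model: Y is a sum of rank-1 terms
    # s_k * a_x(theta_k) a_y(phi_k)^H, one per source
    Y = np.zeros((Mx, My), dtype=complex)
    for th, ph, a, p in zip(azimuths, elevations, amps, phases):
        s = a * np.exp(1j * p)  # complex source amplitude
        Y += s * np.outer(steering(Mx, th), steering(My, ph).conj())
    return Y

# Two sources with arbitrary illustrative directions and phases
Y = single_snapshot(8, 8, np.deg2rad([10, 35]), np.deg2rad([20, 50]),
                    [1.0, 1.0], [0.3, 1.1])
```

Because each source contributes a rank-1 outer product, the noiseless observation matrix has rank equal to the number of sources, which is exactly the sparsity the atomic norm exploits below.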
Our goal in this paper is to estimate and from the observed data , and we assume that the following conditions hold:
(1) and have the Vandermonde structure.
(2) and .
(3) The phases are either i.i.d. and uniformly drawn from the complex unit circle or independent random variables with mean .
The first two conditions, which obviously hold for array models, are the basic requirements for the algorithm subsequently proposed in this paper, and the last one guarantees that the solution of our proposed algorithm is exact with a probability of at least , where is a very small value [14, 19, 23].
3. Proposed Reweighted Decoupled Atomic Norm Minimization (RDAM) Algorithm
3.1. Decoupled Rank Minimization Problem
The atomic norm was proposed in [14] as a measure of the sparsest number of spikes with unknown positions that could compose the observed signal in the frequency domain. In this paper, we employ this norm to express the number of sources with unknown directions composing the array output in the spatial domain and implement 2D DOA estimation by optimizing the atomic representation.
More precisely, we merge the signal phase information into the steering vectors and redefine them as follows. Regarding as an atom, we define the atom set accordingly. Then, the atomic norm induced by is given by the following optimization problem, whose objective is to find the fewest possible number of sources, given the observed data , that can represent as a positive combination of the atoms . Inspired by the idea proposed in [14] and previous research on rank minimization [24–26], we will demonstrate that the above norm can be represented as the sum of the ranks of two 1D Toeplitz matrices satisfying a PSD constraint and propose a decoupled rank minimization problem, as stated in the following proposition.
Proposition 1. Let be array observed data that has the following atomic form and satisfies the conditions given in the previous section. The corresponding atomic norm minimization problem can be expressed as in (7). Then, (7) is equivalent to the optimal value of the decoupled rank minimization problem (8), where and denote two 1D Toeplitz matrices and and are their first rows, respectively.
Proof. The trivial case of need not be discussed. For , let and be the optimal values of and , respectively; then, the corresponding optimal value of (8) can be expressed as . Furthermore, let the optimal be denoted by .
On the one hand, we will show that given problem (7). Suppose that can be realized in an arbitrary atomic decomposition , where is a set of positive real numbers. Let and ; then, the homologous Toeplitz matrices can be represented as shown. Thus, we obtain the following PSD matrix; i.e., the relational expressions and hold. Hence, .
On the other hand, given problem (8), we will verify that in the case of and (the full-rank case is trivial). Given the PSD condition, and letting and compose the optimal solution to (8), the following can be derived in accordance with the Schur complement lemma [27, 28]. In addition, note that and have Vandermonde decompositions of the forms and [14], where , , and both (with dimensions of ) and (with dimensions of ) are diagonal matrices with positive real elements. Therefore, (12) implies that lies in the range of , which means that can be composed of no more than atoms; i.e., . Similarly, (13) leads to the conclusion that . Thus, .
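The structural facts this proof relies on can be checked numerically. The sketch below, with arbitrary illustrative frequencies and powers, verifies that a matrix of the form V D V^H built from a Vandermonde matrix V and a positive diagonal D is Hermitian, Toeplitz, PSD, and of rank equal to the number of atoms:

```python
import numpy as np

def vandermonde(n, freqs):
    # Columns v(f_k) = [1, e^{j2*pi*f_k}, ..., e^{j2*pi*f_k(n-1)}]^T
    return np.exp(2j * np.pi * np.outer(np.arange(n), freqs))

n = 8
freqs = np.array([0.11, 0.27, 0.40])   # three illustrative "atoms"
powers = np.array([1.0, 2.0, 0.5])     # positive source powers
V = vandermonde(n, freqs)
T = V @ np.diag(powers) @ V.conj().T   # T = V D V^H

# Hermitian and Toeplitz: entry (m, k) depends only on m - k
assert np.allclose(T, T.conj().T)
assert np.allclose(T[1:, 1:], T[:-1, :-1])

# PSD, with rank equal to the number of atoms (3 < n)
eigs = np.linalg.eigvalsh(T)
assert eigs.min() > -1e-9
assert np.linalg.matrix_rank(T, tol=1e-8) == 3
```

This is precisely why minimizing the ranks of the two Toeplitz blocks counts atoms in each dimension.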
Based on the above proposition, we can straightforwardly optimize the decoupled rank minimization problem (8) under the PSD constraint to implement the optimization of (7). It is evident from the above proof that and are encoded in the Toeplitz matrices and , respectively. Once the optimal solution is obtained, and can be determined from the Vandermonde decompositions of these two matrices; many off-the-shelf computational methods are available for this purpose, such as the dual root-finding method [12, 14, 15] and the matrix pencil method [29, 30]. In addition, once and are acquired, the simple and straightforward angle-pairing method provided in [20] can be applied. Therefore, our task is simply to optimize problem (8) for .
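As an illustration of such off-the-shelf decompositions, the following sketch recovers the frequencies encoded in a noiseless low-rank Toeplitz matrix via a shift-invariance step (an ESPRIT/matrix-pencil-style approach); the helper names and this particular subspace method are illustrative choices, not the exact procedures of [29, 30]:

```python
import numpy as np

def vandermonde(n, freqs):
    return np.exp(2j * np.pi * np.outer(np.arange(n), freqs))

def recover_freqs(T, r):
    # Keep the r principal eigenvectors of the PSD Toeplitz matrix;
    # their span equals the span of the Vandermonde columns.
    w, U = np.linalg.eigh(T)
    Us = U[:, np.argsort(w)[::-1][:r]]
    # Shift invariance: Us[1:] = Us[:-1] @ Phi, with eig(Phi) = exp(j2*pi*f_k)
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.sort(np.mod(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi), 1))

true_f = np.array([0.12, 0.30, 0.41])  # illustrative frequencies
V = vandermonde(8, true_f)
T = V @ np.diag([1.0, 0.7, 1.3]) @ V.conj().T
est_f = recover_freqs(T, 3)
```

In the noiseless rank-deficient case the recovered frequencies match the true ones to machine precision; the same routine would be applied independently to each of the two decoupled Toeplitz matrices, followed by the pairing step of [20].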
3.2. A Novel Sparse Surrogate for Rank Minimization
The essential objective of the decoupled rank minimization problem (8) discussed above is to recover two low-rank matrices within the PSD feasible set. Unfortunately, this problem is as difficult to solve as the original optimization problem, so a new metric is needed to express the objective function. The main idea is to find a simpler surrogate function that not only effectively approximates the ranks of the two Toeplitz matrices but also simultaneously encourages a low-rank solution. Most papers that discuss the rank minimization problem [16, 27, 31, 32] use the log-det function , expressed as follows. In this paper, however, we consider a novel concave surrogate function for , given below, where represents the identity matrix. The regularization parameter enables the function to approximate as it varies and prevents the function from taking a value of when is rank-deficient. Moreover, note that the parameter plays the same role with respect to .
From the properties of matrix eigenvalues and the trace, the identities below follow, where is the set of eigenvalues of . Equation (17) implies that the essence of function is to operate on ; thus, to better understand the rank-approximation property of on a PSD domain, we denote a new subfunction by for any eigenvalue , which is an inverse function obtained through a certain translation and scaling process and has the features of monotonicity and concavity. Note that the constant term is added merely for consistency with the norm; it is equivalent to adding an identity matrix to and does not influence the outcome. Obviously, we have the identity ; at and near the zero point, has an increasingly steep slope of as , which equivalently imposes a large penalty on near-zero and ensures that the values of at these points become much smaller. In addition, tends towards a constant value of for sufficiently large . This trend illustrates that as , favours the separation of at different levels, thus becoming a relaxation of the norm, which has an infinite slope near the zero point and a constant value of everywhere except at the zero point. To provide an intuitive illustration, we plot for different values of at in Figure 2. Meanwhile, the curves of the norm, the norm, and the log-det subfunction represented by are provided for comparison. Suitable proportionality constants are set to ensure that and are zero at and have a constant value of at without modifying their properties. As shown in Figure 2, both and approach the norm as the regularization parameters decrease; the smaller is, the steeper the slope, which offers a basis for the selection of during both initialization and iteration, as discussed in Section 3.3. Moreover, the function is likely to perform better because it approximates much faster; i.e., is more sparsity-enhancing.
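To make the comparison in Figure 2 concrete, the sketch below assumes the trace-based subfunction takes the form g(λ) = λ/(λ + ε), which matches the properties described above (zero at the origin, slope 1/ε there, saturating at a constant value of 1); the paper's exact scaling may differ, so treat this as a hedged numerical illustration only:

```python
import numpy as np

def g_trace(lam, eps):
    # Assumed trace-surrogate subfunction: g(0)=0, slope 1/eps at 0, -> 1
    return lam / (lam + eps)

def g_logdet(lam, eps):
    # Log-det subfunction, shifted and scaled so that g(0)=0 and g(1)=1
    return np.log(lam / eps + 1) / np.log(1 / eps + 1)

eps, lam = 1e-3, 0.01
# For a fixed nonzero eigenvalue, the trace subfunction sits much closer to
# the l0 step value of 1 than the log-det subfunction at the same epsilon.
err_trace = abs(1 - g_trace(lam, eps))    # about 0.09
err_logdet = abs(1 - g_logdet(lam, eps))  # about 0.65
```

Under this assumed form, the trace-based subfunction approaches the l0 indicator considerably faster for moderate eigenvalues, which is consistent with the sparsity-enhancing claim above.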
The approximate relationship between and provides an explanation for why can be relaxed by . In particular, since each larger corresponds to a value of that tends towards a constant value of as and the addition operation does not affect the monotonicity and concavity of a function, the sum of all becomes a relaxation of the number of larger eigenvalues, which is considered to be the rank of the Hermitian matrix or the number of sources; i.e., the sparse surrogate can be regarded as a relaxation of . Furthermore, the proposed function leads to better performance than in encouraging lowrank solutions, as seen by comparing the curves for with the curves for ; stronger interpretations will be presented in later subsections.
3.3. RDAM via the Majorization-Minimization (MM) Algorithm
Using the surrogate function , the decoupled rank minimization problem (8) can be relaxed as follows, where the parameters and are deemed mutually independent. In this subsection, we aim to minimize the objective function in (18), denoted by , on the PSD cone . Considering that is a sum of two concave functions, the MM algorithm [33, 34] can be adopted to obtain a local optimum. This algorithm follows an iterative procedure consisting of two steps, majorization and minimization, in each iteration.
Given an initial point , a series of feasible points is generated. At each feasible point, the majorization step requires the construction of a tractable function that majorizes while equaling it at the current point. Here, we use the first-order Taylor expansion for this purpose. Since both terms of possess the same structure, we consider only the first term as an example, for which we have the following first-order approximate expression, where is a constant. The derivation is presented in Appendix B. Thus, has the following upper bound, where is constant. It is evident that . In addition, many other construction methods [33] (e.g., second-order Taylor expansion, inequalities, and the Schur complement) could be used; these possibilities are left for future work.
Then, in the minimization step, the following minimization problem is solved, ignoring the constant , to update and . This optimization problem is a semidefinite program (SDP) that can be solved in polynomial time. Notice that it is equivalent to a weighted DAM problem in PSD form [20], with weighting factors of and , and that the proposed algorithm is iterative in nature; therefore, we call our algorithm RDAM. In addition, we have the following inequality; i.e., is a nonincreasing sequence, implying that the iterative process repeatedly decreases the objective function, and it can be expected to converge to a local minimum , as proven in detail in [21, 33]. Furthermore, since the algorithm is locally convergent, suitable initial values are indispensable for guaranteeing convergence to the desired local optimum. One immediate possibility is to use the DAM solution for initialization. In particular, let and be identity matrices; then, we can obtain the initial values and by solving a DAM problem. Next, let us emphasize the selection of and , both of which influence the outcome. As argued in Section 3.2, is the sum of two subfunctions, denoted by and , for and , respectively. For consistency with the curves in Figure 2, we set and . Then, both the factors and can be viewed as equivalent to the parameter in this figure, and we have and . Now, let us consider the choice of as an example. Let the larger eigenvalues, corresponding to the signals, be denoted by , and the smaller eigenvalues, corresponding to the noise, by ; thus, we should ensure a relatively large difference in the values of between the points and by tuning the initial factor . For example, for points and that are close to each other, we can select a small value, such as or smaller; then, there will be a sharp slope such that the subfunction values will be distinctly separated.
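The monotone-decrease property of the MM iteration can be illustrated on a toy scalar problem. Here we minimize a sum of concave subfunctions of the assumed form λ/(λ + ε) over a simplex, majorizing by the tangent plane at the current point and minimizing the resulting weighted linear function exactly; this is a schematic analogue of the weighted SDP above, not the SDP itself:

```python
import numpy as np

eps = 1e-2

def f(lam):
    # Concave sparsity surrogate summed over "eigenvalues" (assumed form)
    return np.sum(lam / (lam + eps))

# MM on the simplex {lam >= 0, sum(lam) = 1}
lam = np.array([0.5, 0.3, 0.2])
history = [f(lam)]
for _ in range(5):
    # Majorize: the tangent plane of a concave f lies above f everywhere;
    # its slope at the current point gives the per-coordinate weights.
    w = eps / (lam + eps) ** 2
    # Minimize: a weighted linear function over the simplex puts all mass
    # on the coordinate with the smallest weight (largest previous lam).
    lam = np.zeros_like(lam)
    lam[np.argmin(w)] = 1.0
    history.append(f(lam))
```

The objective sequence is nonincreasing and collapses the mass onto a single coordinate, mirroring how the reweighted SDP drives the Toeplitz matrices toward low rank.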
In contrast, for relatively distant points, a large value, such as , will suffice to achieve good estimation performance. The same analysis applies to the initialization of the factor .
Finally, it is instructive to explore the implications of the RDAM algorithm in connection with the atomic norm [13, 14]. Notice that the concept of the 1D weighted atom set proposed by Yang et al. in [16] can be extended to a new 2D weighted atom set as follows, where denotes a pair of nonnegative weights on the 2D space angles. Accordingly, the weighted atomic norm is given below. Then, a 2D weighted ANM problem in decoupled PSD form can be obtained based on the following proposition.
Proposition 2. Let denote the 2D weighted atomic norm. The following equivalence relation holds given the data recovered from the th iteration, with the weights defined below. This implies that iteratively minimizing the weighted atomic norm induced by the 2D weighted atom set serves the same purpose as the proposed RDAM algorithm; i.e., RDAM can also be called decoupled reweighted atomic norm minimization (DRAM). More significantly, our proposed trace function gives rise to quadratic weighting factors, which greatly accelerate local convergence and further enhance the preference for low-rank solutions in (18) compared with the first-order factors generated by the log-det function [16].
3.4. Another Interpretation from the Perspective of Conventional Direction of Arrival (DOA) Estimation Methods
The proposed RDAM algorithm can also be interpreted from the perspective of traditional DOA estimation methods. Suppose that the signals are independent of each other and that all steering vectors have been normalized. Then, we can define two decoupled covariance matrices for the noiseless observation data as follows. From the proof of Proposition 1, we know that the Toeplitz matrices and are exactly equivalent to the two decoupled covariance matrices above, except that the coefficients differ by a square. The whole RDAM process can therefore be viewed as recovering and with the regularizers and . Furthermore, the two atomic weight values in (24) are related to the Capon algorithm; in particular, and happen to be the 1D modified cost function of the Capon algorithm. In other words, we utilize the inverse of the revised power spectrum to weight the atoms, with the aim of penalizing directions without sources. Moreover, note that the atomic weight values produced by the log-det function correspond to the conventional cost function; this comparison again suggests that our surrogate performs better.
Remarkably, our algorithm also demonstrates superiority over conventional subspace methods such as MUSIC and ESPRIT beyond 2D DOA estimation. The 2D MUSIC method not only requires prior information on the number of sources but also incurs a heavy computational load for the 2D spectrum search; by contrast, the proposed algorithm can directly estimate continuous-valued angles through two decompositions, and it merely needs prior knowledge of the noise level (moreover, even this is unnecessary, as shown by a later analysis). Meanwhile, the 2D ESPRIT method is suitable only for an array with the property of so-called geometric invariance, whereas the proposed algorithm can be applied to a sparse rectangular array (SRA) with missing array outputs or only a few elements of a URA and still achieves better estimation performance under certain conditions [12, 15, 19, 35]. In addition, far fewer snapshots than the array size are required for exact estimation, which is not true of the subspace methods.
4. Further Analysis
4.1. Super-Resolution
The DAM algorithm requires the following angular interval conditions [20] to guarantee that directions can be estimated from the observed data with probability , where is the estimation precision, and and are the wraparound distances on the complex unit circle. These conditions mean that the resolutions in and are inversely proportional to the numbers of sensors per row and per column, respectively. By contrast, the proposed RDAM algorithm imposes no such restriction on the angular intervals. As the analysis in Section 3 shows, the quadratic inverse of the solution from the previous iteration, denoted by and , is used as the weighting factors for the objective function (21) in the current iteration; thus, large weights are applied to small eigenvalues, while small weights are applied to large eigenvalues. Minimizing (21) is therefore equivalent to penalizing small eigenvalues, making them much smaller while making large ones relatively larger, so that it becomes far easier to pick out the eigenvalues corresponding to signals. In other words, the iterative nature of the algorithm allows it to overcome the minimal-angular-interval limit, and hence the resolution is greatly improved. In addition, the simulations presented in the next section indicate that, given a proper initialization, our algorithm yields very high resolution.
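As a numeric illustration of this reweighting effect, with made-up eigenvalues and an assumed quadratic-inverse weight of the form 1/(λ + ε)²:

```python
import numpy as np

eps = 1e-2
# Two "signal" eigenvalues and two near-zero "noise" eigenvalues (made up)
lam = np.array([2.0, 1.5, 1e-3, 5e-4])
# Quadratic-inverse reweighting from the previous iterate
w = 1.0 / (lam + eps) ** 2

# Small eigenvalues receive weights several orders of magnitude larger than
# the large ones, so minimizing sum(w * lam_new) pushes them toward zero
# while leaving the signal eigenvalues essentially unpenalized.
ratio = w[2] / w[0]
```

The weight on the near-zero eigenvalue exceeds the weight on the dominant one by more than three orders of magnitude here, which is the mechanism behind the improved eigenvalue separation described above.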
4.2. Computational Complexity
Another merit of RDAM is its low complexity. As mentioned in [18], the vector-based atomic norm minimization (vector-ANM) method has a computational complexity as high as due to its PSD constraint of size . In the DAM method [20], however, the two-level Toeplitz matrix is divided into two one-level Toeplitz matrices, reducing the computational complexity to .
Fortunately, our proposed RDAM algorithm has the same per-iteration complexity as DAM; thus, the computational efficiency depends only on the number of iterations. Since such a reweighted iterative algorithm converges within a few steps [21], the computational cost of RDAM is no greater than , where denotes the number of iterations; i.e., the complexity is comparable to that of DAM but far below the heavy load of vector-ANM. An empirical runtime comparison is also provided with the simulations reported below.
4.3. The Noisy Case
In fact, DOA estimation is typically performed in a noisy environment; therefore, we consider the following observation model with additive noise , whose entries are drawn independently from . An iterative denoising optimization problem is then obtained as follows, where is a regularization factor that balances data fitting against a low-rank solution. When , where represents the spectral norm, the soft-threshold denoising algorithm has the following asymptotic mean square error (MSE) [36]:
In this paper, we directly optimize the dual problem [37] of (32), as follows, where . In practice, we can set an initial value for and then gradually decrease it by a suitable small factor in each iteration; as shown by the simulations reported below, this approach yields better performance.
5. Numerical Simulations
In this section, we present numerical simulations conducted to illustrate the performance of the proposed super-resolution algorithm (RDAM) in 2D DOA estimation for a rectangular array. In all experiments, the value of was not given a priori, and a random phase (uniformly distributed between and ), associated with a constant magnitude of , was set for each received signal. All SDPs involved were implemented using CVX [38], and the matrix pencil method was adopted for the Vandermonde decomposition of the Toeplitz matrices. In addition, the RDAM algorithm was initialized as discussed in Section 3; we not only selected suitable initial factors and , as required, but also iteratively tuned them by decreasing their values by a factor of in each subsequent iteration.
First, let us present an intuitive illustration of the higher resolution of RDAM compared with DAM. Suppose that narrowband signals with directions of , , , , and impinge on a rectangular array of size . Note that both and are no larger than . Figure 3(a) shows the DAM estimation result, while Figure 3(b) shows the performance of RDAM with initial factors ; all results were obtained through 100 Monte Carlo experiments. It is apparent that, in the noiseless case, all the angles are estimated correctly and stably by our algorithm, whereas DAM yields unsatisfactory outcomes. Moreover, RDAM is even robust to changes in the signal phases, which may make it useful in certain specialized applications.
(a)
(b)
Second, let us compare the proposed algorithm with three other methods, namely, DAM, vector-ANM, and vector-based reweighted atomic norm minimization (vector-RAM) [18], to provide a more reliable verification of the super-resolution of our algorithm. In particular, let and . We fix the direction of one of the two signals as , while the other signal is radially separated from the fixed signal by azimuth and elevation spacings that both range from to . Each interval value is allowed to fluctuate randomly within a range of . Let , and let the number of iterations of RDAM be four; the same parameter settings are used for vector-RAM. Figure 4 compares the results of 500 Monte Carlo experiments each for the azimuth and the elevation. It is clear that the proposed algorithm can distinguish angles separated by in both dimensions, while the other algorithms cannot. Moreover, it achieves better performance than vector-RAM, a reweighted algorithm based on stretched vectors, under the same initial conditions. In addition, we list the runtimes of these four methods in Table 1, which shows that, for both vector-based methods, the computational complexity increases dramatically with the size of the array. By contrast, the proposed algorithm and the DAM algorithm both have very low complexity regardless of the array size.

(a)
(b)
Furthermore, we examine the iterative process of RDAM to investigate the hypothesis that the proposed surrogate function reduces the number of iterations required, i.e., that it more effectively encourages low-rank solutions and enables faster convergence. Accordingly, let and , and consider randomly selected signal directions of , , and . Initial factors of and are selected for the functions and , respectively; under these conditions, the subfunctions have similar approximation properties, as seen from Figure 2. The iterative processes using the two surrogate functions are shown in Figure 5, where the first two subplots in both (a) and (b) present the changes in the eigenvalues for and , respectively, and the third subplot shows the estimation performance in each iteration. It is evident that the previous function requires four iterations while the proposed function needs only three, which indicates that the proposed function converges faster.
(a)
(b)
Finally, we carry out 200 Monte Carlo experiments considering the case of very closely spaced sources to illustrate the anti-noise performance of the proposed algorithm relative to the Cramér-Rao bound (CRB). The array response with i.i.d. additive Gaussian noise satisfying was obtained; the signal-to-noise ratio (SNR) was , since the powers of all signals were set to a constant value of 1. As before, we set , and the directions of the two sources were set to and . Figure 6 shows the average root mean square error (RMSE) with respect to the SNR, with the parameters set to and the initial noise level set to . It is worth noting that the noise level was reduced by a factor of 0.8 in the first weighted iteration for both our RDAM algorithm and the vector-RAM algorithm. The performance of the proposed algorithm is close to the CRB at higher SNRs, although its anti-noise performance degrades below SNR = 15 dB; the main reason for this degradation is that, at lower SNRs, the probability of successful estimation decreases, so a small number of degraded estimates is obtained, which affects the curve. Nevertheless, our algorithm remains more robust than the others in the presence of noise.
6. Conclusion
In this paper, we proposed a super-resolution reweighted decoupled atomic norm minimization (RDAM) algorithm for 2D DOA estimation based on the theory of continuous compressed sensing (CCS). By directly employing the atomic norm and converting the optimization problem into a decoupled rank minimization problem, we greatly reduced the computational complexity. In addition, we presented a novel surrogate function and formulated a new concave optimization problem, overcoming the resolution shortcomings of existing methods and also accelerating convergence. Our algorithm is even applicable to sparse rectangular arrays (SRAs), although better denoising processing is needed, which remains to be studied in future work.
Appendix
A. Proof of Proposition 2
First, we rewrite the constraint condition as follows. This implies that lies in the ranges of both and , and because is composed of atoms of the form , we assume the following Vandermonde decompositions for and . Let the right-hand side of (25) be denoted by ; then, we have the relation below. Next, the following conclusion is obtained from the given conditions. In addition, according to the analysis in Section 3, the following two decompositions hold for the weighted atoms. Thus, we have and . Therefore, .
B. Derivation of the First-Order Taylor Polynomial (11)
Inspired by the derivations in [39], the partial derivative of the first term of (18) is computed as follows. Thus, the first-order approximation of this term at the current point can be expressed as follows:
Data Availability
The data used to support the findings of this study are not available for the following reasons: (1) the data in this paper were generated randomly through computer simulation; (2) this generation method is common in the field of DOA estimation, as in authoritative works such as G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013, and Z. Tian, Z. Zhang, and Y. Wang, "Low-complexity optimization for two-dimensional direction-of-arrival estimation via decoupled atomic norm minimization," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2017.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the Natural Science Basic Research Project of Shaanxi Province under Grant 2018JQ6046 and in part by the Fundamental Research Funds for the Central Universities of China under Grant XJS18033.
References
[1] A. J. van der Veen, P. B. Ober, and E. F. Deprettere, "Azimuth and elevation computation in high resolution DOA estimation," IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1828–1832, 1992.
[2] F. Sakarya and M. Hayes, "Estimating 2-D DOA angles using nonlinear array configurations," IEEE Transactions on Signal Processing, vol. 43, no. 9, pp. 2212–2216, 1995.
[3] J. Ramos, C. P. Mathews, and M. D. Zoltowski, "FCA-ESPRIT: a closed-form 2-D angle estimation algorithm for filled circular arrays with arbitrary sampling lattices," IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 213–217, 1999.
[4] G. Wang, J. Xin, N. Zheng, and A. Sano, "Computationally efficient subspace-based method for two-dimensional direction estimation with L-shaped array," IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3197–3212, 2011.
[5] M. D. Zoltowski and K. T. Wong, "ESPRIT-based 2-D direction finding with a sparse uniform array of electromagnetic vector sensors," IEEE Transactions on Signal Processing, vol. 48, no. 8, pp. 2195–2204, 2000.
[6] R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, 1986.
[7] R. Roy and T. Kailath, "ESPRIT—estimation of signal parameters via rotational invariance techniques," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 7, pp. 984–995, 1989.
[8] P. Vallet, X. Mestre, and P. Loubaton, "Performance analysis of an improved MUSIC DoA estimator," IEEE Transactions on Signal Processing, vol. 63, no. 23, pp. 6407–6422, 2015.
[9] J. Steinwandt, F. Roemer, and M. Haardt, "Generalized least squares for ESPRIT-type direction of arrival estimation," IEEE Signal Processing Letters, vol. 24, no. 11, pp. 1681–1685, 2017.
[10] Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2182–2195, 2011.
[11] R. Jagannath, G. Leus, and R. Pribic, "Grid matching for sparse signal recovery in compressive sensing," in Proceedings of the IEEE European Radar Conference, 2013.
[12] E. Candès and C. Fernandez-Granda, "Towards a mathematical theory of super-resolution," Communications on Pure and Applied Mathematics, vol. 67, no. 6, pp. 906–956, 2014.
[13] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, "The convex geometry of linear inverse problems," Foundations of Computational Mathematics, vol. 12, no. 6, pp. 805–849, 2012.
[14] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, "Compressed sensing off the grid," IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7465–7490, 2013.
[15] Z. Yang and L. Xie, "On gridless sparse methods for line spectral estimation from complete and incomplete data," IEEE Transactions on Signal Processing, vol. 63, no. 12, pp. 3139–3153, 2015.
[16] Z. Yang and L. Xie, "Enhancing sparsity and resolution via reweighted atomic norm minimization," IEEE Transactions on Signal Processing, vol. 64, no. 4, pp. 995–1006, 2016.
[17] W. Xu, J. Cai, K. V. Mishra, M. Cho, and A. Kruger, "Precise semidefinite programming formulation of atomic norm minimization for recovering d-dimensional (d ≥ 2) off-the-grid frequencies," in Proceedings of the Information Theory and Applications Workshop (ITA '14), pp. 1–4, IEEE, San Diego, CA, USA, 2014.
[18] Z. Yang, L. Xie, and P. Stoica, "Vandermonde decomposition of multilevel Toeplitz matrices with application to multidimensional super-resolution," IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3685–3701, 2016.
[19] Y. Chi and Y. Chen, "Compressive two-dimensional harmonic retrieval via atomic norm minimization," IEEE Transactions on Signal Processing, vol. 63, no. 4, pp. 1030–1042, 2015.
[20] Z. Tian, Z. Zhang, and Y. Wang, "Low-complexity optimization for two-dimensional direction-of-arrival estimation via decoupled atomic norm minimization," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '17), 2017.
[21] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
[22] G. Zhao, G. Shi, F. Shen, X. Luo, and Y. Niu, "A sparse representation-based DOA estimation algorithm with separable observation model," IEEE Antennas and Wireless Propagation Letters, vol. 14, pp. 1586–1589, 2015.
[23] Z. Yang and L. Xie, "Exact joint sparse frequency recovery via optimization methods," IEEE Transactions on Signal Processing, vol. 64, no. 19, pp. 5145–5157, 2016.
[24] D. Gross, "Recovering low-rank matrices from few coefficients in any basis," IEEE Transactions on Information Theory, vol. 57, no. 3, pp. 1548–1566, 2011.
[25] B. Recht, "A simpler approach to matrix completion," Journal of Machine Learning Research, vol. 12, pp. 3413–3430, 2011.
[26] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[27] M. Fazel, H. Hindi, and S. P. Boyd, "Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices," in Proceedings of the American Control Conference, vol. 3, pp. 2156–2162, IEEE, 2003.
[28] F. Z. Zhang, The Schur Complement and Its Applications, vol. 4 of Numerical Methods and Algorithms, Springer, New York, NY, USA, 2005.
[29] Y. Hua and T. K. Sarkar, "Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 5, pp. 814–824, 1990.
[30] N. Yilmazer, J. Koh, and T. K. Sarkar, "Utilization of a unitary transform for efficient computation in the matrix pencil method to find the direction of arrival," IEEE Transactions on Antennas and Propagation, vol. 54, no. 1, pp. 175–181, 2006.
[31] K. Mohan and M. Fazel, "Reweighted nuclear norm minimization with application to system identification," in Proceedings of the American Control Conference (ACC '10), 2010.
[32] X. Wu, W. Zhu, and J. Yan, "A high-resolution DOA estimation method with a family of nonconvex penalties," IEEE Transactions on Vehicular Technology, vol. 67, no. 6, pp. 4925–4938, 2018.
[33] Y. Sun, P. Babu, and D. P. Palomar, "Majorization-minimization algorithms in signal processing, communications, and machine learning," IEEE Transactions on Signal Processing, vol. 65, no. 3, pp. 794–816, 2017.
[34] Z. Hua et al., "MM algorithm for variance components models," Journal of Computational and Graphical Statistics, 2015.
[35] X. Wu, W. Zhu, and J. Yan, "A fast gridless covariance matrix reconstruction method for one- and two-dimensional direction-of-arrival estimation," IEEE Sensors Journal, vol. 17, no. 15, pp. 4916–4927, 2017.
[36] B. N. Bhaskar, G. Tang, and B. Recht, "Atomic norm denoising with applications to line spectral estimation," IEEE Transactions on Signal Processing, vol. 61, no. 23, pp. 5987–5999, 2013.
[37] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
[38] M. Grant, S. Boyd, and Y. Ye, "CVX: Matlab software for disciplined convex programming," CVX Research, 2008.
[39] K. B. Petersen, "The Matrix Cookbook," 2012, http://matrixcookbook.com.
Copyright
Copyright © 2019 Ming-Ming Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.