International Journal of Antennas and Propagation

Volume 2015 (2015), Article ID 959856, 10 pages

http://dx.doi.org/10.1155/2015/959856

## Adaptive Beamforming Based on Compressed Sensing with Smoothed $\ell_0$ Norm

School of Electronic and Optical Engineering, Nanjing University of Science & Technology, Nanjing 210094, China

Received 22 September 2014; Revised 12 February 2015; Accepted 18 February 2015

Academic Editor: Giuseppe Castaldi

Copyright © 2015 Yubing Han and Jian Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

An adaptive beamforming based on compressed sensing with smoothed $\ell_0$ norm for a large-scale sparse receiving array is proposed in this paper. Because of the spatial sparsity of the arriving signal, compressed sensing is applied to sample received signals with a sparse array and a reduced number of channels. The signal of the full array is reconstructed by using a compressed sensing reconstruction method based on the smoothed $\ell_0$ norm. Then an iterative linearly constrained minimum variance beamforming algorithm is adopted to form the antenna beam, whose main lobe is steered to the desired direction while nulls are steered to the directions of interferences. Simulation results and Monte Carlo analysis for linear and planar arrays show that the beam performances of our proposed adaptive beamforming are similar to those of the full array antenna.

#### 1. Introduction

Adaptive beamforming is widely used in array signal processing for enhancing a desired signal while suppressing interference and noise at the output of an array of sensors [1, 2]. In bistatic radar systems, digital beamforming receiving array antennas are urgently needed so that the receiving antenna beams can cover the transmitting antenna beam flexibly. On receiving radar stations, to obtain high antenna gain and high angular measurement accuracy, an antenna array with a large number of elements should be used, so high cost is a major drawback of a large-scale array. Meanwhile, the computational burden and high-rate data transmission are two bottlenecks in the implementation of an adaptive beamforming algorithm. To reduce the number of radio frequency (RF) front-ends without reducing the antenna aperture, we can use sparse arrays. Compared to the fully populated array, sparse arrays are attractive because they can reduce the number of elements and receiver channels, weight, power consumption, and cost. Sparse equally spaced arrays, however, inevitably produce grating lobes and reduce the scanning range of the beam. To avoid grating lobes, sparse arrays are usually designed to be unequally spaced. Many techniques have been proposed over the last fifty years to design and synthesize such unequally spaced sparse arrays. The most common are stochastic methods, such as the genetic algorithm [3], particle swarm [4], ant colony [5], and simulated annealing [6]. Matrix pencil methods have lately been efficiently applied to reconstruct focused and shaped beam patterns while reducing the number of array elements [7]. Differential evolution [8] and sparse-periodic hybrid arrays [9] are also good methods for reducing the side-lobe levels. In [10], a procedure to synthesize sparse arrays with antennas selected via sequential convex optimization is presented.
In [11], a method of pattern synthesis with sparse arrays based on Bayesian compressive sampling is proposed. However, these methods optimize only static patterns, and it is difficult to guarantee the performance of the beam patterns when interference must be suppressed adaptively.

Recently, Candes and Donoho reported a novel sampling theory called compressed sensing (CS), also known as compressive sampling [12, 13]. CS theory asserts that one can recover certain signals from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on one principle: sparsity, which pertains to the signals of interest. The theory states that as long as the signal is sparse or compressible, one can reconstruct the original signal from a small number of samples, with high probability, by solving an optimization problem. The sparsity property of signals has been utilized in a variety of applications including astronomy [14], optical imaging [15], radar [16, 17], and spectrum sensing [18]. In [19–23], several CS-based DOA estimation methods have been investigated and compared with each other. In [24], a method of DOA estimation through Bayesian compressive sensing is proposed. To obtain the sparsest solution, we may search for a solution with minimum $\ell_0$ norm, that is, a minimum number of nonzero components [13]. It is well known that searching for the minimum $\ell_0$ norm is an intractable combinatorial problem as the dimension increases, and it is too sensitive to noise [25]. To overcome this drawback, we can relax the problem by substituting the $\ell_1$ norm for the $\ell_0$ norm. One of the most successful approaches is basis pursuit (BP) [26], which finds the solution by linear programming (LP) methods. However, for large systems of equations, this algorithm is very slow. Another family of algorithms is iterative reweighted least squares, with FOCUSS as an important member [27]. These methods are faster than BP, but their estimation quality is worse, especially if the number of nonzero elements of the sparsest solution is not very small. Another approach is the greedy algorithms such as matching pursuit (MP) [28] and orthogonal matching pursuit (OMP) [29, 30]; these methods are very fast but do not provide good estimation of the sources.
Different from the above approaches, [31, 32] proposed an algorithm to minimize the $\ell_0$ norm directly. By choosing a continuous function to approximate the $\ell_0$ norm, the algorithm searches for the optimal solution with a gradient ascent method. It is shown that this algorithm is about two to three orders of magnitude faster than BP (based on the interior-point LP solvers), while resulting in the same or better accuracy [32].

An adaptive digital beamforming based on CS with smoothed $\ell_0$ norm for a large-scale sparse receiving array is proposed in this paper. Due to the spatial sparseness of the arriving signal, CS theory is adopted to sample received signals with a sparse array, and the signal of the full array is reconstructed by using the CS reconstruction method based on the smoothed $\ell_0$ norm. Then an iterative linearly constrained minimum variance (LCMV) algorithm is used to form the antenna beam, whose main lobe is steered to the desired direction while nulls are steered to the directions of interferences. The proposed adaptive beamforming greatly reduces the number of elements yet performs similarly to the full array: the beams have low side lobes, deep nulls in the directions of interferences, and no grating lobes. Different from most CS-based DOA estimation in array processing applications, our proposed method applies the CS algorithm to reconstruct the signal of the full array and then uses adaptive beamforming to form the antenna beam. It cares more about the accuracy of the reconstructed signals than about the accuracy of the DOA estimates.

The paper is organized as follows. In Section 2, the signal model and spatial sparsity are discussed. The proposed adaptive beamforming based on CS with smoothed $\ell_0$ norm is developed in Section 3. In Section 4, when targets are not on the grid, a coarse-to-fine method is proposed to improve the signal reconstruction results. Numerical simulations under different situations are provided to illustrate the effectiveness of the algorithm in Section 5, and conclusions are given in Section 6.

#### 2. Signal Model

Consider an array antenna with $N$ elements and assume that $K$ narrowband signals are incident on the antenna array. One of these signals is the desired signal, and the others are interferences. The received signal can be expressed as

$$\mathbf{x}(t)=\sum_{i=1}^{K}s_i(t)\,\mathbf{a}(\theta_i)+\mathbf{n}(t),\tag{1}$$

where $\mathbf{n}(t)$ is the white Gaussian noise, and $s_i(t)$ and $\mathbf{a}(\theta_i)$ are the complex amplitude and steering vector of the $i$th incident signal. For an ideal uniform linear array with spacing $d$, we have

$$\mathbf{a}(\theta_i)=\left[1,\;e^{j2\pi d\sin\theta_i/\lambda},\;\ldots,\;e^{j2\pi(N-1)d\sin\theta_i/\lambda}\right]^{T},\tag{2}$$

where $\lambda$ is the carrier wavelength and $\theta_i$ represents the DOA of signal $i$. For a planar array, the steering vector can be expressed as

$$\mathbf{a}(\theta_i,\varphi_i)=\left[e^{j2\pi(x_1u_i+y_1v_i)/\lambda},\;\ldots,\;e^{j2\pi(x_Nu_i+y_Nv_i)/\lambda}\right]^{T},\tag{3}$$

where $u_i=\sin\theta_i\cos\varphi_i$, $v_i=\sin\theta_i\sin\varphi_i$, $\theta_i$ and $\varphi_i$ are the elevation and azimuth angles, and $(x_n,y_n)$ are the coordinates of the $n$th element. For the 3D array with element positions $(x_n,y_n,z_n)$, the steering vector is given by

$$\mathbf{a}(\theta_i,\varphi_i)=\left[e^{j2\pi(x_1u_i+y_1v_i+z_1w_i)/\lambda},\;\ldots,\;e^{j2\pi(x_Nu_i+y_Nv_i+z_Nw_i)/\lambda}\right]^{T},\tag{4}$$

where $u_i=\sin\theta_i\cos\varphi_i$, $v_i=\sin\theta_i\sin\varphi_i$, and $w_i=\cos\theta_i$.

Without loss of generality, we only consider the linear array in this paper unless noted otherwise. For the DOAs in a linear array, we assume that there is a partition grid of the entire angle space $[-\pi/2,\pi/2]$, such that

$$\Theta=\left\{\hat{\theta}_1,\hat{\theta}_2,\ldots,\hat{\theta}_G\right\},\tag{5}$$

where $G$ is the number of grid points. We define a transformation matrix as

$$\mathbf{A}=\left[\mathbf{a}(\hat{\theta}_1),\mathbf{a}(\hat{\theta}_2),\ldots,\mathbf{a}(\hat{\theta}_G)\right].\tag{6}$$

So the received signal can be rewritten as

$$\mathbf{x}(t)=\mathbf{A}\mathbf{s}(t)+\mathbf{n}(t),\tag{7}$$

where $\mathbf{s}(t)$ is a complex $G\times 1$ vector with $K$ nonzero elements. The indices of the nonzero elements in $\mathbf{s}(t)$ represent the DOAs of the incident signals. In general, the number of incident signals is small; that means $K\ll G$, so it is reasonable to assume that $\mathbf{s}(t)$ is spatially sparse.
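The sparse representation $\mathbf{x}(t)=\mathbf{A}\mathbf{s}(t)$ can be illustrated with a short numerical sketch. This is not from the paper; it is a minimal Python/NumPy example assuming a half-wavelength uniform linear array and a one-degree partition grid, and the function name `steering_matrix` is our own:

```python
import numpy as np

def steering_matrix(n_elements, grid_deg, spacing_wl=0.5):
    """Steering matrix A: column g is the steering vector a(theta_g) of a
    uniform linear array; spacing_wl is the element spacing d / lambda."""
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    n = np.arange(n_elements)[:, None]      # element indices 0..N-1
    return np.exp(2j * np.pi * spacing_wl * n * np.sin(theta)[None, :])

# Sparse representation x = A s: two far-field sources on a 1-degree grid
grid = np.arange(-90, 91)            # G = 181 grid points over [-90, 90] deg
A = steering_matrix(16, grid)        # N = 16 element half-wavelength ULA
s = np.zeros(len(grid), dtype=complex)
s[grid == -20] = 1.0                 # desired signal at -20 deg
s[grid == 40] = 0.5                  # interference at +40 deg
x = A @ s                            # noise-free full-array snapshot
```

Here `s` has only $K=2$ nonzero entries out of $G=181$, which is the spatial sparsity the reconstruction relies on.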

Next we design an $M\times N$ measurement matrix $\mathbf{\Phi}$ with $M<N$, and the measurement is obtained by

$$\mathbf{y}(t)=\mathbf{\Phi}\mathbf{x}(t),\tag{8}$$

where $M$ represents the number of receiver channels, which is smaller than the number of array elements. The measurement matrix $\mathbf{\Phi}$ defines a method of compressed sampling of the space signals. Substituting (7) into (8) results in

$$\mathbf{y}(t)=\mathbf{\Phi}\mathbf{A}\mathbf{s}(t)+\mathbf{\Phi}\mathbf{n}(t)=\mathbf{\Psi}\mathbf{s}(t)+\mathbf{\Phi}\mathbf{n}(t),\tag{9}$$

where $\mathbf{\Psi}=\mathbf{\Phi}\mathbf{A}$ is called the observation matrix. Theoretical studies in [12, 13, 33] have shown that we can accurately reconstruct the "sparse" signal $\mathbf{s}(t)$ from the "compressed" vector $\mathbf{y}(t)$ when the observation matrix satisfies the restricted isometry property (RIP). For the selection of $\mathbf{\Phi}$, Candes et al. point out that, to reconstruct the signal completely, the observation matrix must guarantee that two different $K$-sparse signals will not be mapped to the same collection [12]. Here we define a correlation coefficients matrix with elements

$$\mu_{mn}=\frac{\left|\boldsymbol{\psi}_m\boldsymbol{\psi}_n^{H}\right|}{\left\|\boldsymbol{\psi}_m\right\|_2\left\|\boldsymbol{\psi}_n\right\|_2},\tag{10}$$

where $\boldsymbol{\psi}_m$ is the $m$th row vector of $\mathbf{\Psi}$. To reconstruct the signal completely, we should make the correlation coefficients as small as possible. With $M$ given, the optimal correlation coefficients and measurement matrix can be found by using a genetic algorithm [3], particle swarm optimization [4], or other methods [5, 6]. Optimizing the element positions in this way helps reduce the recovery errors. In this paper, we do not discuss the selection of $\mathbf{\Phi}$ and just randomly select $M$ rows from an identity matrix of size $N$ for convenience. That is to say, we randomly select $M$ elements from the full array to sample the space signals.
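Because $\mathbf{\Phi}$ is built from randomly selected rows of an identity matrix, the compressed measurement is simply a sub-sampling of the full-array snapshot. A minimal sketch (the function name `random_selection_matrix` and the sizes are our own choices):

```python
import numpy as np

def random_selection_matrix(m, n, rng):
    """Measurement matrix Phi built by randomly selecting m rows of the
    n x n identity matrix, i.e. keeping m of the n array elements."""
    rows = np.sort(rng.choice(n, size=m, replace=False))
    return np.eye(n)[rows]

rng = np.random.default_rng(0)
N, M = 16, 8                              # full-array elements, receiver channels
Phi = random_selection_matrix(M, N, rng)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # one full-array snapshot
y = Phi @ x                               # compressed samples from the M kept elements
```

With this choice, forming $\mathbf{y}(t)=\mathbf{\Phi}\mathbf{x}(t)$ requires no multiplications at all in practice: only the $M$ selected elements are actually connected to receiver channels.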

Owing to the spatial sparsity of incident signals, we can obtain the estimate $\hat{\mathbf{s}}(t)$ from the measurement $\mathbf{y}(t)$ by solving the CS problem

$$\min_{\mathbf{s}(t)}\left\|\mathbf{s}(t)\right\|_0\quad\text{subject to}\quad\left\|\mathbf{y}(t)-\mathbf{\Psi}\mathbf{s}(t)\right\|_2\le\varepsilon\tag{11}$$

and reconstruct the received signal using $\hat{\mathbf{x}}(t)=\mathbf{A}\hat{\mathbf{s}}(t)$, where $\|\cdot\|_0$ is the $\ell_0$ norm of a vector, which stands for the number of nonzeros in the vector, and $\varepsilon$ stands for the noise level. From [25], we know that solving (11) is both numerically unstable and NP-complete. To solve this problem, we do not relax (11) by substituting the $\ell_1$ norm for the $\ell_0$ norm, as in BP [26], FOCUSS [27], and OMP [29]. The method we present in this paper is based on direct minimization of the $\ell_0$ norm. The basic idea is to choose a smoothed continuous function to approximate the $\ell_0$ norm and search for the optimal solution with a gradient ascent and projection method.

#### 3. Proposed Adaptive Beamforming

##### 3.1. Smoothed $\ell_0$ Norm [31, 32]

It is known that the $\ell_0$ norm of a vector is a discontinuous function; searching for the minimum $\ell_0$ norm solution is a combinatorial problem, which is nonconvex and highly nonsmooth. Furthermore, the $\ell_0$ norm of a vector is extremely sensitive to noise, and its value can change completely even when the noise is very weak. In this section, we introduce a family of smooth complex approximators of the $\ell_0$ norm, whose minimization can be implemented using gradient-based methods. Consider a complex zero-mean Gaussian family of functions with the parameter $\sigma$:

$$f_\sigma(s)=\exp\left(-\frac{|s|^2}{2\sigma^2}\right)=\exp\left(-\frac{s_R^2+s_I^2}{2\sigma^2}\right),\tag{12}$$

where $|s|$, $s_R$, and $s_I$ represent the module and the real and imaginary parts of $s$. We have

$$\lim_{\sigma\to 0}f_\sigma(s)=\begin{cases}1, & s=0\\ 0, & s\neq 0\end{cases}\tag{13}$$

or approximately

$$f_\sigma(s)\approx\begin{cases}1, & |s|\ll\sigma\\ 0, & |s|\gg\sigma.\end{cases}\tag{14}$$

Then, for a complex vector $\mathbf{s}=[s_1,s_2,\ldots,s_G]^T$, we can define

$$F_\sigma(\mathbf{s})=\sum_{n=1}^{G}f_\sigma(s_n).\tag{15}$$

Since the number of entries in $\mathbf{s}$ is $G$ and the function $F_\sigma(\mathbf{s})$ is an indicator of the number of zero entries in $\mathbf{s}$, the $\ell_0$ norm of $\mathbf{s}$ can be approximated by

$$\left\|\mathbf{s}\right\|_0\approx G-F_\sigma(\mathbf{s})\tag{16}$$

for small values of $\sigma$, and the approximation tends to equality when $\sigma\to 0$.
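To make the approximation $\|\mathbf{s}\|_0\approx G-F_\sigma(\mathbf{s})$ concrete, here is a small numerical sketch (not from the paper; the function names are ours):

```python
import numpy as np

def f_sigma(s, sigma):
    """Gaussian kernel exp(-|s|^2 / (2 sigma^2)): ~1 at s = 0, ~0 for |s| >> sigma."""
    return np.exp(-np.abs(s) ** 2 / (2 * sigma ** 2))

def smoothed_l0(s, sigma):
    """Approximate ||s||_0 by G - F_sigma(s), with F_sigma(s) = sum_n f_sigma(s_n)."""
    return len(s) - f_sigma(s, sigma).sum()

s = np.array([0, 0, 3 + 4j, 0, -2.0, 0])   # true l0 norm = 2
print(smoothed_l0(s, sigma=0.01))          # close to 2 for small sigma
print(smoothed_l0(s, sigma=100.0))         # close to 0 for large sigma
```

The two printed values illustrate the trade-off exploited in the next subsection: small $\sigma$ gives an accurate but highly nonsmooth approximation, while large $\sigma$ gives a smooth but crude one.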

Substituting this approximation into (11) yields the CS problem with smoothed $\ell_0$ norm

$$\max_{\mathbf{s}(t)}F_\sigma\bigl(\mathbf{s}(t)\bigr)\quad\text{subject to}\quad\left\|\mathbf{y}(t)-\mathbf{\Psi}\mathbf{s}(t)\right\|_2\le\varepsilon.\tag{17}$$

##### 3.2. Compressed Sensing Reconstruction Algorithm

For small values of $\sigma$ in (17), $F_\sigma$ contains a lot of local maxima. Consequently, it is very difficult to directly maximize this function for very small values of $\sigma$. However, as the value of $\sigma$ grows, the function becomes smoother and smoother, and, for sufficiently large values of $\sigma$, there are no local maxima. Similar to [31, 32], our approach to solving problem (17) is then to decrease the value of $\sigma$ gradually. The underlying idea is to select a $\sigma$ that ensures the initial optimization problem is convex and to gradually increase the accuracy of the approximation. By careful selection of the sequence of $\sigma$, nonconvexity and thereby local maxima can be effectively avoided. For each $\sigma$, we use a few iterations of the steepest ascent algorithm (followed by projection onto the feasible set) to maximize $F_\sigma$, and the initial value of this steepest ascent algorithm is the maximizer of $F_\sigma$ obtained for the previous (larger) value of $\sigma$. The CS reconstruction algorithm is described in Algorithm 1, where $\mu$ is a small positive step-size constant, $\mathbf{\Psi}^{\dagger}$ denotes the Moore-Penrose pseudoinverse of the matrix $\mathbf{\Psi}$, and $\odot$ denotes the Hadamard product (entrywise multiplication) of two vectors.

*Algorithm 1 (compressed sensing reconstruction algorithm).*

*Step 1.* Initialization.

*Step 1.1.* Let $\hat{\mathbf{s}}^{(0)}$ be equal to the minimum $\ell_2$-norm solution of $\mathbf{y}=\mathbf{\Psi}\mathbf{s}$, obtained by $\hat{\mathbf{s}}^{(0)}=\mathbf{\Psi}^{\dagger}\mathbf{y}$.

*Step 1.2.* Choose a suitable decreasing sequence $\sigma_1>\sigma_2>\cdots>\sigma_J$.

*Step 2.* For each $\sigma_j$, $j=1,2,\ldots,J$, maximize (approximately) the function $F_{\sigma_j}$ on the feasible set $\mathcal{S}=\{\mathbf{s}:\|\mathbf{y}-\mathbf{\Psi}\mathbf{s}\|_2\le\varepsilon\}$.

*Step 2.1.* Initialize $\mathbf{s}=\hat{\mathbf{s}}^{(j-1)}$.

*Step 2.2.* Calculate the gradient direction $\boldsymbol{\delta}=\mathbf{s}\odot\left[e^{-|s_1|^2/2\sigma_j^2},\ldots,e^{-|s_G|^2/2\sigma_j^2}\right]^T$.

*Step 2.3.* Carry out the steepest ascent step $\mathbf{s}\leftarrow\mathbf{s}-\mu\boldsymbol{\delta}$.

*Step 2.4.* Project $\mathbf{s}$ back onto the feasible set: $\mathbf{s}\leftarrow\mathbf{s}-\mathbf{\Psi}^{\dagger}\left(\mathbf{\Psi}\mathbf{s}-\mathbf{y}\right)$.

*Step 2.5.* Assign $\hat{\mathbf{s}}^{(j)}=\mathbf{s}$.

*Step 2.6.* If the change in $\mathbf{s}$ between two successive iterations exceeds a given tolerance and the number of iterations is less than the maximum iteration count, then return to Step 2.2.

*Step 3.* Obtain the final solution $\hat{\mathbf{s}}=\hat{\mathbf{s}}^{(J)}$.

As stated in [31, 32], some remarks are presented here. (1) The internal loop for a fixed $\sigma_j$ is repeated a fixed and small number of times. In other words, to increase the speed, we do not wait for the steepest ascent algorithm to converge. We just need to enter the region near the (global) maximizer of $F_{\sigma_j}$ to escape from its local maximizers. (2) In Step 1.1 of Algorithm 1, we use the minimum $\ell_2$-norm solution (which corresponds to $\sigma\to\infty$) as the initial estimate of the sparse solution. The value of $\sigma_1$ may be chosen as about two to four times the maximum absolute value of the elements in $\hat{\mathbf{s}}^{(0)}$. Then we use $\sigma_j=c\sigma_{j-1}$, $j\ge 2$, to determine the next values of $\sigma$, where $c$ is usually chosen between 0.5 and 1. The smallest $\sigma$ (i.e., $\sigma_J$) can be set to about one to two times (a rough estimate of) the standard deviation of the noise. (3) For the selection of $\mu$, a line search can be applied to find the locally optimal step. Performing the line search can be time-consuming, so we just use a fixed small $\mu$ in this paper. It is worth mentioning that the effective iteration step-size in the steepest ascent algorithm is proportional to $\sigma_j^2$. The reason is that for smaller values of $\sigma$, the function $F_\sigma$ is more "fluctuating" and hence smaller step-sizes should be used for its maximization.
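Algorithm 1 can be sketched in a few lines of NumPy. This is a simplified, noise-free version (so the projection in Step 2.4 maps exactly onto $\{\mathbf{s}:\mathbf{\Psi}\mathbf{s}=\mathbf{y}\}$); the parameter values follow the ranges suggested in the remarks above but are otherwise our own choices, and the function name is hypothetical:

```python
import numpy as np

def sl0_reconstruct(Psi, y, sigma_min=1e-4, c=0.6, mu=2.0, inner_iters=3):
    """Sketch of Algorithm 1 (smoothed-l0 reconstruction), noise-free case.

    Steepest ascent on F_sigma followed by projection onto {s : Psi s = y},
    for a geometrically decreasing sequence of sigma values."""
    Psi_pinv = np.linalg.pinv(Psi)
    s = Psi_pinv @ y                      # Step 1.1: minimum l2-norm solution
    sigma = 2.0 * np.max(np.abs(s))       # Step 1.2: initial (largest) sigma
    while sigma > sigma_min:
        for _ in range(inner_iters):      # a few ascent steps per sigma (Remark 1)
            delta = s * np.exp(-np.abs(s) ** 2 / (2 * sigma ** 2))  # Step 2.2
            s = s - mu * delta            # Step 2.3 (step effectively ~ sigma^2)
            s = s - Psi_pinv @ (Psi @ s - y)    # Step 2.4: projection
        sigma *= c                        # shrink sigma (Remark 2)
    return s

# Recover a 3-sparse vector from 20 random measurements of a length-40 signal
rng = np.random.default_rng(1)
Psi = rng.standard_normal((20, 40))
s_true = np.zeros(40)
s_true[[5, 17, 33]] = [1.5, -2.0, 0.8]
s_hat = sl0_reconstruct(Psi, Psi @ s_true)
```

In the beamforming context, `Psi` would be the observation matrix $\mathbf{\Phi}\mathbf{A}$ and the recovered `s_hat` would be mapped back to the full-array signal via $\hat{\mathbf{x}}(t)=\mathbf{A}\hat{\mathbf{s}}(t)$.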

At the end of this section, we give a brief complexity analysis of Algorithm 1. The cost of the gradient calculation (Step 2.2) and the steepest ascent step (Step 2.3) is about $O(G)$ operations per iteration, dominated by the $G$ exponential evaluations. The projection (Step 2.4) has complexity of $O(MG)$, which arises from the matrix-vector multiplications with $\mathbf{\Psi}$ and the (precomputed) pseudoinverse $\mathbf{\Psi}^{\dagger}$. So the complexity of Algorithm 1 is about $O(JLMG)$, where $J$ and $L$ are the numbers of outer and inner iterations.

##### 3.3. Iterative LCMV Beamforming Algorithm

After obtaining $\hat{\mathbf{s}}(t)$ and reconstructing the echo signals using $\hat{\mathbf{x}}(t)=\mathbf{A}\hat{\mathbf{s}}(t)$, we should apply adaptive beamforming algorithms to enhance the desired signal and suppress interferences and noise in the recovered data. In this paper, we use the iterative LCMV beamforming algorithm to form the antenna beam. Of course, other beamforming methods can also be applied, such as the subspace projection algorithm and the generalized side-lobe canceller [1].

The LCMV algorithm minimizes the total variance of the beamformer output and obtains the weight vector by solving the linearly constrained problem [34]

$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}\mathbf{w}\quad\text{subject to}\quad\mathbf{w}^{H}\mathbf{a}(\theta_0)=1,\tag{18}$$

where $\mathbf{R}$ is the covariance matrix of the reconstructed signal and $\mathbf{a}(\theta_0)$ is the steering vector in the desired direction. Using the method of Lagrange multipliers, the optimized weight vector can be achieved by

$$\mathbf{w}_{\mathrm{opt}}=\frac{\mathbf{R}^{-1}\mathbf{a}(\theta_0)}{\mathbf{a}^{H}(\theta_0)\,\mathbf{R}^{-1}\mathbf{a}(\theta_0)}.\tag{19}$$
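The closed-form weight can be computed directly. In the sketch below, a small diagonal loading term is added before inversion as a numerical safeguard; this is our own implementation choice, not part of the derivation in the text:

```python
import numpy as np

def lcmv_weights(R, a0, loading=1e-6):
    """Closed-form LCMV weight w = R^{-1} a0 / (a0^H R^{-1} a0).

    The small diagonal loading term (scaled by tr(R)/n) is a numerical
    safeguard against an ill-conditioned sample covariance matrix."""
    n = len(a0)
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    w = np.linalg.solve(Rl, a0)
    return w / (a0.conj() @ w)            # enforce the unit-gain constraint

# Unit-gain (distortionless) response toward the desired direction
n = 8
a0 = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(10)))  # 10-deg steering
R = np.eye(n)                                                     # white-noise case
w = lcmv_weights(R, a0)
```

With $\mathbf{R}=\mathbf{I}$ the weight reduces to the matched filter $\mathbf{a}(\theta_0)/N$, so the example doubles as a sanity check that the constraint $\mathbf{w}^{H}\mathbf{a}(\theta_0)=1$ holds.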

For online implementation, we can also use an iterative formula for the beamforming weight calculation [35]:

$$\mathbf{w}(k+1)=\mathbf{P}\left[\mathbf{w}(k)-\mu\mathbf{R}\mathbf{w}(k)\right]+\mathbf{f},\tag{20}$$

where $\mu$ is the iteration step size, $\mathbf{P}=\mathbf{I}-\mathbf{a}(\theta_0)\mathbf{a}^{H}(\theta_0)/\left(\mathbf{a}^{H}(\theta_0)\mathbf{a}(\theta_0)\right)$, and $\mathbf{f}=\mathbf{a}(\theta_0)/\left(\mathbf{a}^{H}(\theta_0)\mathbf{a}(\theta_0)\right)$.

For the stability of the algorithm, $\mu$ should satisfy

$$0<\mu<\frac{2}{\lambda_{\max}},\tag{21}$$

where $\lambda_{\max}$ is the maximum eigenvalue of the covariance matrix $\mathbf{R}$. Since $\mathbf{R}$ is positive semidefinite, we have

$$\lambda_{\max}\le\operatorname{tr}(\mathbf{R}),\tag{22}$$

where $\operatorname{tr}(\cdot)$ is the trace of the matrix argument. Then we have

$$0<\mu<\frac{2}{\operatorname{tr}(\mathbf{R})}\tag{23}$$

as an upper limit for $\mu$. Exceeding this limit is likely to make the iterative LCMV algorithm unstable.
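The iterative update with a step size chosen safely below the $2/\operatorname{tr}(\mathbf{R})$ bound can be sketched as follows (variable names and the toy covariance are our own; the projection enforces the unit-gain constraint at every iteration):

```python
import numpy as np

def iterative_lcmv(R, a0, mu=None, n_iters=2000):
    """Iterative LCMV update w <- P (w - mu R w) + f, with the constraint
    w^H a0 = 1 maintained at every step by the projection matrix P."""
    n = len(a0)
    norm2 = (a0.conj() @ a0).real
    P = np.eye(n) - np.outer(a0, a0.conj()) / norm2   # projector orthogonal to a0
    f = a0 / norm2                                    # constraint (quiescent) vector
    if mu is None:
        mu = 1.0 / np.trace(R).real                   # safely below the 2/tr(R) bound
    w = f.copy()
    for _ in range(n_iters):
        w = P @ (w - mu * (R @ w)) + f
    return w

# Toy example: desired signal at broadside, one strong interference at 40 deg
n = 8
a0 = np.ones(n, dtype=complex)
ai = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(40)))
R = np.eye(n) + 10 * np.outer(ai, ai.conj())          # noise + interference covariance
w = iterative_lcmv(R, a0)
```

For a stationary covariance the iterate converges to the closed-form LCMV weight of (19); the default $\mu=1/\operatorname{tr}(\mathbf{R})$ is simply half the trace-based stability limit.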

After obtaining the beamforming weight, we can get the output of the adaptive beamformer using

$$z(t)=\mathbf{w}^{H}\hat{\mathbf{x}}(t).\tag{24}$$

#### 4. DOAs Not on the Partition Grid

Thus far, in our framework, the estimates of the signal locations are confined to lie on the partition grid. When the signals are not on the grid points, the error of the reconstructed signal grows considerably. Increasing the number of grid points $G$ to reduce the grid spacing can solve this problem, but a finer uniform grid will increase the computational complexity significantly. Instead of using a uniform fine grid, we make the grid fine only around the regions where signals are present [36]. This requires an approximate knowledge of the locations of the sources, which can be obtained by using a coarse grid first. The steps to form the nonuniform refined grid are as follows.

(1) Create a coarse partition grid $\Theta_c=\{\hat{\theta}_1,\ldots,\hat{\theta}_{G_c}\}$ of potential signal locations such that $G_c<G$. The grid should not be too coarse, in order to obtain approximate location estimates of the sources. Here, for example, we initialize $\Theta_c$ as a uniform coarse grid over the whole angle space.

(2) Form the transformation matrix $\mathbf{A}_c$ related to $\Theta_c$ and use the CS reconstruction algorithm to reconstruct the sparse signal $\mathbf{s}_c$. Find the indices of the $K$ largest-amplitude nonzero elements in $\mathbf{s}_c$ and obtain the coarse location estimates $\hat{\theta}_{g_k}$, $k=1,2,\ldots,K$.

(3) As illustrated in Figure 1, for each $\hat{\theta}_{g_k}$, $k=1,2,\ldots,K$, divide the interval $[\hat{\theta}_{g_k-1},\hat{\theta}_{g_k+1}]$ into $r$ parts in terms of the angular sine representation, where $r$ is the refinement factor. Then we can get the refined partition grid around the locations $\hat{\theta}_{g_k}$, $k=1,2,\ldots,K$, and denote the new nonuniform partition grid as $\Theta_r$.
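The coarse-to-fine refinement step can be sketched as follows. This is our own illustrative implementation (function name and parameter choices are assumptions): each peak interval is subdivided uniformly in $\sin\theta$, and the rest of the coarse grid is kept unchanged:

```python
import numpy as np

def refine_grid(coarse_deg, peak_indices, r):
    """Nonuniform refinement: for each detected peak index g, split the
    interval [theta_{g-1}, theta_{g+1}] into r parts uniformly in sin(theta)
    (the angular-sine representation); keep the rest of the coarse grid."""
    coarse = np.asarray(coarse_deg, dtype=float)
    pts = set(coarse.tolist())
    for g in peak_indices:
        lo = coarse[max(g - 1, 0)]
        hi = coarse[min(g + 1, len(coarse) - 1)]
        fine_sin = np.linspace(np.sin(np.deg2rad(lo)), np.sin(np.deg2rad(hi)), r + 1)
        pts.update(np.rad2deg(np.arcsin(fine_sin)).tolist())
    return np.array(sorted(pts))

coarse = np.arange(-90, 91, 10)                       # 10-degree coarse grid
refined = refine_grid(coarse, peak_indices=[7], r=8)  # refine around -20 degrees
```

The refined grid stays small: only $r+1$ extra points are added per detected source, instead of refining all $G_c$ intervals uniformly.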