The performance of the traditional constrained-LMS (CLMS) algorithm is known to degrade seriously when the training data size is small and when there are mismatches between the presumed and actual array responses. In this paper, we develop a robust constrained-LMS (RCLMS) algorithm based on worst-case SINR maximization. Our algorithm belongs to the class of diagonal loading techniques, in which the diagonal loading factor is obtained in a simple closed form, which decreases the computation cost. The updated weight vector is derived by the gradient descent method and the Lagrange multiplier method. We demonstrate that the proposed recursive algorithm provides excellent robustness against signal steering vector mismatches and small training data sizes, has a fast convergence rate, and keeps the mean output array signal-to-interference-plus-noise ratio (SINR) consistently close to the optimal one. Simulation results are presented to compare the performance of our robust algorithm with that of the traditional CLMS algorithm.

1. Introduction

Adaptive beamforming is used to enhance a desired signal while suppressing interference and noise at the output of an array of sensors. It has a long and rich history of practical application in numerous areas such as sonar, radar, radio astronomy, medical imaging, and, more recently, wireless communications [1–5].

In practical applications, adaptive beamforming methods become very sensitive to any violation of the underlying assumptions on the environment, sources, or sensor array. The performance of existing adaptive array algorithms is known to degrade substantially in the presence of even slight mismatches between the actual and presumed array responses to the desired signal. A similar type of degradation can take place when the signal array response is known precisely but the training sample size is small. Therefore, robustness is one of the most important issues in adaptive beamforming. There are several efficient approaches to designing robust adaptive beamformers, such as the linearly constrained minimum variance beamformer [6], the eigenspace-based beamformer [7], and projection beamforming techniques [8]. For instance, additional linear constraints on the array beam pattern have been proposed to better attenuate the interference and broaden the response around the nominal look direction [9]. Another popular class of robust beamforming techniques is diagonal loading (DL) [10]. In these methods, the array correlation matrix is loaded with an appropriate multiple of the identity matrix, called the loading level, in order to satisfy an imposed quadratic constraint. However, it is somewhat difficult to calculate the loading level when the uncertainty bounds of the array steering vector are not available, as is often the case in practice. Based on a spherical or ellipsoidal uncertainty set of array steering vectors, robust Capon beamforming maximizes the output power; it belongs to the extended class of diagonal loading methods, and the corresponding diagonal loading level can be calculated precisely [11, 12]. Nevertheless, these methods cannot be expected to provide sufficient robustness improvement.

In more recent years, several new robust adaptive beamforming approaches have been proposed [13–17]. One line of work seeks a weight vector that maximizes the worst-case SINR over an uncertainty model; with a general convex uncertainty model, the worst-case SINR maximization problem can be solved using convex optimization [13]. In [14], a robust downlink beamforming optimization algorithm is proposed for secondary multicast transmission in a multiple-input multiple-output (MIMO) spectrum-sharing cognitive radio (CR) network. Recognizing that all channel covariance matrices form a Riemannian manifold, Ciochina et al. propose worst-case robust downlink beamforming on the Riemannian manifold in order to model the set of mismatched channel covariance matrices for which robustness shall be guaranteed [15]. In [16], a robust beamforming scheme is proposed for the multiantenna nonregenerative cognitive relay network, where a multiantenna relay with imperfect channel state information (CSI) assists the communication of single-antenna secondary users (SUs). Exploiting imperfect CSI, with its error modeled by additive Gaussian noise, robust beamforming in cognitive radio is developed to optimize the beamforming weights at the secondary transmitter [17].

Apart from LMS-type algorithms, another well-known iterative adaptive algorithm is the recursive least squares (RLS) algorithm, or the complex-valued widely linear RLS algorithm, which updates the weight vector with small steady-state misadjustment and fast convergence. However, RLS-type algorithms have much higher computation cost than LMS-type algorithms [18, 19]. In order to reduce the complexity, an RLS algorithm based on orthonormal polynomial basis functions has been proposed, which is as simple as the LMS algorithm [20]. To achieve low complexity while keeping fast convergence, LMS algorithms based on variable step sizes have been presented in [21–23]. These LMS algorithms can converge faster and require less computational cost per iteration than RLS-type algorithms. However, they cannot enjoy both fast tracking and small misadjustment with a simple implementation. It is known that the performance of the traditional CLMS algorithm degrades seriously due to small training sample sizes and signal steering vector mismatches. In this paper, in order to overcome these drawbacks, we propose a robust CLMS algorithm based on worst-case SINR maximization, which provides sufficient robustness against several types of mismatch. The parameters of our algorithm are derived in simple closed form, which decreases the computation cost. The improved performance of the proposed algorithm is demonstrated by comparison with the traditional linearly constrained-LMS algorithm via several examples.

2. Background

2.1. Mathematical Formulation

We consider a uniform linear array (ULA) with $M$ omnidirectional sensors spaced by the distance $d$. We assume narrowband incoherent plane waves impinging from the directions of arrival $\{\theta_{0},\theta_{1},\dots\}$. The output of a designed beamformer is expressed as follows:
$$y(k)=\mathbf{w}^{H}\mathbf{x}(k),$$
where $\mathbf{x}(k)\in\mathbb{C}^{M}$ is the complex vector of array observations, $M$ is the number of array sensors, and $\mathbf{w}\in\mathbb{C}^{M}$ is the complex vector of weights; here $(\cdot)^{H}$ and $(\cdot)^{T}$ are the Hermitian transpose and transpose, respectively. The array observation at time $k$ can be written as
$$\mathbf{x}(k)=\mathbf{s}(k)+\mathbf{n}(k)+\mathbf{i}(k)=s(k)\mathbf{a}+\mathbf{n}(k)+\mathbf{i}(k),$$
where $\mathbf{s}(k)$, $\mathbf{n}(k)$, and $\mathbf{i}(k)$ are the desired signal, noise, and interference components, respectively. Here $s(k)$ is the desired signal waveform, and $\mathbf{a}$ is the signal steering vector.
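Although the paper contains no code, the array model above is easy to sketch in NumPy; the function names and the half-wavelength spacing default below are ours, not the paper's:

```python
import numpy as np

def steering_vector(theta_deg, M, d_over_lambda=0.5):
    """ULA steering vector for a narrowband plane wave from theta (degrees)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

def snapshot(a_s, s, i, n):
    """x(k) = s(k)*a + i(k) + n(k): desired signal, interference, and noise."""
    return s * a_s + i + n

M = 8
a = steering_vector(0.0, M)            # broadside: the all-ones vector
w = a / M                              # conventional delay-and-sum weights
x = snapshot(a, 1.0, np.zeros(M), np.zeros(M))
y = np.vdot(w, x)                      # beamformer output y = w^H x
```

For a unit-amplitude signal from broadside with no interference or noise, the distortionless weights pass the signal through with unit gain.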

The weights can be optimized by maximizing the output signal-to-interference-plus-noise ratio (SINR):
$$\mathrm{SINR}=\frac{\sigma_{s}^{2}\left|\mathbf{w}^{H}\mathbf{a}\right|^{2}}{\mathbf{w}^{H}\mathbf{R}_{i+n}\mathbf{w}},$$
where $\sigma_{s}^{2}$ is the signal power and $\mathbf{R}_{i+n}$ is the interference-plus-noise correlation matrix:
$$\mathbf{R}_{i+n}=E\left\{\left(\mathbf{i}(k)+\mathbf{n}(k)\right)\left(\mathbf{i}(k)+\mathbf{n}(k)\right)^{H}\right\}.$$
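The SINR criterion and its well-known maximizer $\mathbf{w}\propto\mathbf{R}_{i+n}^{-1}\mathbf{a}$ can be checked numerically; this sketch assumes white interference-plus-noise, for which the optimal SINR reduces to $\sigma_s^2 M$:

```python
import numpy as np

def output_sinr(w, a_s, sigma_s2, R_in):
    """SINR = sigma_s^2 |w^H a|^2 / (w^H R_{i+n} w)."""
    return sigma_s2 * np.abs(np.vdot(w, a_s)) ** 2 / np.real(np.vdot(w, R_in @ w))

M = 8
a_s = np.ones(M, dtype=complex)          # broadside steering vector
R_in = np.eye(M)                         # white interference-plus-noise
w_opt = np.linalg.solve(R_in, a_s)       # w proportional to R^{-1} a maximizes SINR
sinr_opt = output_sinr(w_opt, a_s, 1.0, R_in)
```

Note that the SINR is invariant to any nonzero scaling of the weight vector, which is why the distortionless constraint below can be imposed without loss of optimality.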

2.2. Linearly Constrained-LMS (CLMS) Algorithm

The linearly constrained-LMS algorithm is a real-time constrained algorithm for determining the optimal weight vector. The problem of finding the optimum beamformer weights is as follows:
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}_{i+n}\mathbf{w}\quad\text{subject to}\quad\mathbf{w}^{H}\mathbf{a}=1.$$

Using the Lagrange multiplier method to solve problem (5), the optimal weight vector can be derived:
$$\mathbf{w}_{\mathrm{opt}}=\frac{\mathbf{R}_{i+n}^{-1}\mathbf{a}}{\mathbf{a}^{H}\mathbf{R}_{i+n}^{-1}\mathbf{a}}.$$

In practical situations, the signal characteristics cannot be known completely, and the environment is time-varying, so the weights need to be updated in an iterative manner. The Lagrange function of (5) is written as
$$L(\mathbf{w},\lambda)=\mathbf{w}^{H}\mathbf{R}_{i+n}\mathbf{w}+\lambda\left(1-\mathbf{w}^{H}\mathbf{a}\right).$$

Computing the gradient of (7), we can update the weight vector of the CLMS algorithm:
$$\mathbf{w}(k+1)=\mathbf{w}(k)-\mu\,\nabla_{\mathbf{w}}L=\mathbf{w}(k)-\mu\left(\mathbf{R}_{i+n}\mathbf{w}(k)-\lambda\mathbf{a}\right),$$
where $\mu$ is the step size and $\nabla_{\mathbf{w}}L$ is the gradient vector of $L(\mathbf{w},\lambda)$. Inserting (8) into the linear constraint $\mathbf{a}^{H}\mathbf{w}(k+1)=1$, we can obtain the Lagrange multiplier:
$$\lambda=\frac{1-\mathbf{a}^{H}\mathbf{w}(k)+\mu\,\mathbf{a}^{H}\mathbf{R}_{i+n}\mathbf{w}(k)}{\mu\,\mathbf{a}^{H}\mathbf{a}}.$$

According to (8) and (9), the weight vector of the traditional CLMS algorithm can be rewritten as [24]
$$\mathbf{w}(k+1)=\mathbf{P}\left[\mathbf{w}(k)-\mu\,\mathbf{R}_{i+n}\mathbf{w}(k)\right]+\mathbf{f},$$
where $\mathbf{P}=\mathbf{I}-\mathbf{a}\mathbf{a}^{H}/(\mathbf{a}^{H}\mathbf{a})$ and $\mathbf{f}=\mathbf{a}/(\mathbf{a}^{H}\mathbf{a})$.
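This is the classical Frost-type projected update. A minimal NumPy sketch (our notation, with the instantaneous gradient estimate $\mathbf{x}(k)y^{*}(k)$ standing in for $\mathbf{R}_{i+n}\mathbf{w}(k)$, as is usual for LMS) is:

```python
import numpy as np

def clms_update(w, x, a, mu):
    """One constrained-LMS step keeping w^H a = 1 at every iteration.
    P projects onto the constraint's null space; f restores the constraint."""
    aa = np.real(np.vdot(a, a))
    P = np.eye(len(a)) - np.outer(a, a.conj()) / aa
    f = a / aa
    y = np.vdot(w, x)                       # array output y = w^H x
    return P @ (w - mu * x * np.conj(y)) + f

rng = np.random.default_rng(0)
M = 8
a = np.ones(M, dtype=complex)
w = a / M
for _ in range(200):                        # noise-only training snapshots
    x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    w = clms_update(w, x, a, mu=0.01)
```

Because $\mathbf{P}^{H}\mathbf{a}=\mathbf{0}$ and $\mathbf{f}^{H}\mathbf{a}=1$, the linear constraint holds exactly after every iteration regardless of the data.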

From (10), we note that the performance of the traditional CLMS algorithm depends on the exact signal steering vector, so it is sensitive to steering vector mismatches. In addition, the interference-plus-noise correlation matrix $\mathbf{R}_{i+n}$ is unknown in practice. Hence, the sample covariance matrix
$$\hat{\mathbf{R}}(k)=\frac{1}{K}\sum_{i=1}^{K}\mathbf{x}(i)\mathbf{x}^{H}(i)$$
is used in place of $\mathbf{R}_{i+n}$ in (10), where $K$ is the training sample size. Therefore, performance degradation of the CLMS algorithm can occur due to signal steering vector mismatches and a small training sample size.

3. Robust CLMS Algorithm Based on Worst-Case SINR Maximization

In order to solve the above-mentioned problems of the linearly constrained-LMS algorithm, we propose a robust recursive algorithm based on worst-case SINR maximization, which provides robustness against mismatches.

We assume that, in practical situations, the mismatch vector $\mathbf{e}$ is norm-bounded by some known constant $\varepsilon>0$; that is,
$$\|\mathbf{e}\|\le\varepsilon.$$

Then, the actual signal steering vector belongs to the ball set
$$\mathcal{A}(\varepsilon)=\left\{\tilde{\mathbf{a}}=\mathbf{a}+\mathbf{e}:\ \|\mathbf{e}\|\le\varepsilon\right\}.$$

The weight vector is selected by minimizing the mean output power while maintaining a distortionless response for every mismatched steering vector in the set. So, the cost function of the robust constrained-LMS (RCLMS) algorithm is formulated as
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\hat{\mathbf{R}}\mathbf{w}\quad\text{subject to}\quad\left|\mathbf{w}^{H}\tilde{\mathbf{a}}\right|\ge 1\ \ \text{for all }\tilde{\mathbf{a}}\in\mathcal{A}(\varepsilon).$$

According to [25], the constraint in (14) is equivalent to the following form:
$$\mathbf{w}^{H}\mathbf{a}\ge\varepsilon\|\mathbf{w}\|+1,$$
where $\mathbf{w}^{H}\mathbf{a}$ can be taken real-valued without loss of generality.

Using (15), problem (14) can be rewritten in the following way:
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\hat{\mathbf{R}}\mathbf{w}\quad\text{subject to}\quad\mathbf{w}^{H}\mathbf{a}\ge\varepsilon\|\mathbf{w}\|+1.$$
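The equivalence used in (15) rests on the fact that, over the ball $\|\mathbf{e}\|\le\varepsilon$, the smallest attainable response magnitude is $|\mathbf{w}^{H}\mathbf{a}|-\varepsilon\|\mathbf{w}\|$, attained by a mismatch anti-aligned with $\mathbf{w}$. A small numerical check (our construction):

```python
import numpy as np

rng = np.random.default_rng(1)
M, eps = 8, 0.3
a = np.ones(M, dtype=complex)
w = a + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Closed-form worst case over the mismatch ball ||e|| <= eps:
bound = np.abs(np.vdot(w, a)) - eps * np.linalg.norm(w)

# The minimizing mismatch points against w, carrying the phase of w^H a:
phi = np.angle(np.vdot(w, a))
e_worst = -eps * np.exp(1j * phi) * w / np.linalg.norm(w)
attained = np.abs(np.vdot(w, a + e_worst))
```

No mismatch inside the ball can drive the response magnitude below `bound`, so enforcing the single inequality in (16) protects against every steering vector in the uncertainty set.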

In order to improve the robustness against the mismatch that may be caused by a small training sample size, we can further extend the optimization problem (16). The actual covariance matrix is
$$\mathbf{R}=\hat{\mathbf{R}}+\boldsymbol{\Delta},$$
where $\boldsymbol{\Delta}$ is the error matrix, whose norm is bounded by a certain constant $\eta$: $\|\boldsymbol{\Delta}\|\le\eta$.

Applying worst-case performance optimization, we can rewrite (16) as
$$\min_{\mathbf{w}}\ \max_{\|\boldsymbol{\Delta}\|\le\eta}\ \mathbf{w}^{H}\left(\hat{\mathbf{R}}+\boldsymbol{\Delta}\right)\mathbf{w}\quad\text{subject to}\quad\mathbf{w}^{H}\mathbf{a}\ge\varepsilon\|\mathbf{w}\|+1.$$

To obtain the optimal weight vector, we can first solve the following simpler problem [26]:
$$\max_{\|\boldsymbol{\Delta}\|\le\eta}\ \mathbf{w}^{H}\left(\hat{\mathbf{R}}+\boldsymbol{\Delta}\right)\mathbf{w}.$$

Using the Lagrange multiplier method, the worst-case error matrix is
$$\boldsymbol{\Delta}=\eta\,\frac{\mathbf{w}\mathbf{w}^{H}}{\|\mathbf{w}\|^{2}},$$
so that $\mathbf{w}^{H}\boldsymbol{\Delta}\mathbf{w}=\eta\|\mathbf{w}\|^{2}$.

Consequently, the minimization problem (18) is converted to the following form:
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\left(\hat{\mathbf{R}}+\eta\mathbf{I}\right)\mathbf{w}\quad\text{subject to}\quad\mathbf{w}^{H}\mathbf{a}\ge\varepsilon\|\mathbf{w}\|+1.$$

The solution to (21) can be derived by minimizing the Lagrange function
$$L(\mathbf{w},\lambda)=\mathbf{w}^{H}\tilde{\mathbf{R}}\mathbf{w}-\lambda\left(\mathbf{w}^{H}\mathbf{a}-\varepsilon\|\mathbf{w}\|-1\right),$$
where $\tilde{\mathbf{R}}=\hat{\mathbf{R}}+\eta\mathbf{I}$ and $\lambda\ge 0$ is the Lagrange multiplier. Computing the gradient vector of $L(\mathbf{w},\lambda)$ with respect to $\mathbf{w}$, we get
$$\nabla_{\mathbf{w}}L=\tilde{\mathbf{R}}\mathbf{w}-\lambda\left(\mathbf{a}-\frac{\varepsilon\,\mathbf{w}}{2\|\mathbf{w}\|}\right).$$

Setting the gradient of $L(\mathbf{w},\lambda)$ equal to zero, we can obtain the optimum weight vector:
$$\mathbf{w}_{\mathrm{opt}}=\lambda\left(\tilde{\mathbf{R}}+\gamma\mathbf{I}\right)^{-1}\mathbf{a},$$
where $\gamma=\lambda\varepsilon/(2\|\mathbf{w}\|)$ plays the role of the diagonal loading factor.

From (24), we note that the proposed algorithm belongs to the class of diagonal loading techniques; however, computing the loading factor directly from (24) would be complicated.

Using (23), the updated weight vector is obtained by
$$\mathbf{w}(k+1)=\mathbf{w}(k)-\mu\,\nabla_{\mathbf{w}}L\bigl(\mathbf{w}(k),\lambda\bigr),$$
where $\nabla_{\mathbf{w}}L$ is given in (23) and $\mu$ is the step size.
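In the same projected-gradient form as the CLMS sketch earlier, the diagonally loaded recursion can be illustrated as follows. Here `gamma` stands in for the loading term that the paper derives from $\lambda$ and $\varepsilon$, so its value is our assumption:

```python
import numpy as np

def rclms_update(w, x, a, mu, gamma):
    """One robust constrained-LMS step (sketch): the leakage term gamma*w
    implements diagonal loading (instantaneous gradient of w^H (R + gamma I) w),
    while the projection keeps the distortionless constraint w^H a = 1."""
    aa = np.real(np.vdot(a, a))
    P = np.eye(len(a)) - np.outer(a, a.conj()) / aa
    f = a / aa
    y = np.vdot(w, x)
    grad = x * np.conj(y) + gamma * w      # loaded instantaneous gradient
    return P @ (w - mu * grad) + f

rng = np.random.default_rng(2)
M = 8
a = np.ones(M, dtype=complex)
w = a / M
for _ in range(300):
    x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    w = rclms_update(w, x, a, mu=0.01, gamma=0.5)
```

The leakage term shrinks the weights toward the constraint solution, which is exactly the regularizing effect of diagonal loading on a poorly estimated covariance.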

Next, we need to compute the Lagrange multiplier $\lambda$. At the optimum, the quadratic constraint of the optimization problem (21) is active:
$$\mathbf{w}^{H}(k+1)\,\mathbf{a}=\varepsilon\|\mathbf{w}(k+1)\|+1.$$

Inserting (25) into (26), we can obtain the Lagrange multiplier $\lambda$ in closed form.

3.1. The Choice of Step Size

The weight vector update (25) can be rewritten as
$$\mathbf{w}(k+1)=\left(\mathbf{I}-\mu\tilde{\mathbf{R}}\right)\mathbf{w}(k)+\mu\lambda\left(\mathbf{a}-\frac{\varepsilon\,\mathbf{w}(k)}{2\|\mathbf{w}(k)\|}\right).$$

Let $\tilde{\mathbf{R}}=\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{H}$, where $\boldsymbol{\Lambda}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $\tilde{\mathbf{R}}$, and the columns of $\mathbf{U}$ contain the corresponding eigenvectors.

Multiplying (30) by $\mathbf{U}^{H}$ and letting $\mathbf{v}(k)=\mathbf{U}^{H}\mathbf{w}(k)$, we get the following equation:
$$\mathbf{v}(k+1)=\left(\mathbf{I}-\mu\boldsymbol{\Lambda}\right)\mathbf{v}(k)+\mu\lambda\,\mathbf{U}^{H}\left(\mathbf{a}-\frac{\varepsilon\,\mathbf{w}(k)}{2\|\mathbf{w}(k)\|}\right).$$

As demonstrated in (32), for the proposed algorithm to converge, the step size is required to satisfy the condition
$$\left|1-\mu\lambda_{i}\right|<1,\quad i=1,\dots,M,$$
where $\lambda_{i}$ denotes the $i$th eigenvalue of $\tilde{\mathbf{R}}$.

It follows from (33) that
$$0<\mu<\frac{2}{\lambda_{\max}},$$
where $\lambda_{\max}$ is the maximum eigenvalue of $\tilde{\mathbf{R}}$.

In a recursive algorithm, the choice of the step size is very important, and it varies with each new training snapshot [27, 28].

Therefore, we can obtain the optimal step size $\mu(k)$ at each snapshot.
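One conservative way to pick a convergent step size without an eigendecomposition is to upper-bound $\lambda_{\max}$ by the trace of the loaded covariance; this bound is our choice for illustration, not the paper's exact rule, but it always satisfies the condition $|1-\mu\lambda_i|<1$ for every mode:

```python
import numpy as np

def stable_step_size(R_loaded, safety=0.5):
    """mu < 2/lambda_max guarantees convergence; for a positive semidefinite
    matrix trace(R) >= lambda_max, so 2/trace(R) is always a safe bound."""
    return safety * 2.0 / np.real(np.trace(R_loaded))

rng = np.random.default_rng(3)
M = 8
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T / M + 0.5 * np.eye(M)   # a diagonally loaded covariance
mu = stable_step_size(R)
lam_max = np.max(np.linalg.eigvalsh(R))
```

The trace is available at O(M) cost, consistent with the paper's goal of avoiding the eigendecomposition in the recursion.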

3.2. The Approximation of Lagrange Multiplier

From (28) and (29), we note that the computation cost of the weight vector is very high. Next, we obtain the Lagrange multiplier by a linear combination in order to decrease the computation cost.

From (24), the proposed beamformer belongs to the class of diagonal loading techniques. According to [29], we consider a linear combination of $\mathbf{I}$ and $\hat{\mathbf{R}}$:
$$\tilde{\mathbf{R}}(k)=\alpha\mathbf{I}+\beta\hat{\mathbf{R}}(k),$$
where the parameters $\alpha\ge 0$ and $\beta\ge 0$. The initial value of $\hat{\mathbf{R}}$ is assumed to be the identity matrix [30]. We can rewrite (37) as
$$\tilde{\mathbf{R}}(k)=\beta\left(\hat{\mathbf{R}}(k)+\frac{\alpha}{\beta}\mathbf{I}\right).$$

Comparing the diagonally loaded covariance matrices in (24) and in (38), we note that the loading factor is replaced by $\alpha/\beta$. In this way, the Lagrange multiplier $\lambda$ can be computed directly from $\alpha$ and $\beta$ as follows.

From (39), the Lagrange multiplier is calculated simply, which decreases the complexity of the proposed algorithm. We first need to obtain the parameters $\alpha$ and $\beta$ by minimizing the mean-squared error
$$E\left\{\left\|\tilde{\mathbf{R}}-\mathbf{R}\right\|^{2}\right\},$$
where $\mathbf{R}$ is the theoretical covariance matrix. By inserting (37) into (40), the objective can be expanded in terms of $\alpha$ and $\beta$ [31].

Computing the gradient of (41) with respect to $\alpha$, for fixed $\beta$, we can obtain the optimal value of $\alpha$.

Inserting this optimal $\alpha$ into (41) and replacing the expectation by its sample estimate, the minimization problem reduces to one in $\beta$ alone.

By minimizing (43), the optimal solution for $\beta$ is derived in closed form.

In practical situations, $\mathbf{R}$ is replaced by the sample covariance matrix $\hat{\mathbf{R}}(k)$ to obtain the estimate.

We can then estimate the remaining parameter from the entries of $\hat{\mathbf{R}}(k)$.

Substituting (45) and (46) into (44), we obtain the estimate $\hat{\beta}$.

Inserting (45) and (46) into (42), we obtain the estimate $\hat{\alpha}$.

Consequently, we can obtain the closed-form expression of the Lagrange multiplier $\lambda$.
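The fully automatic loading in this subsection follows the shrinkage idea of [29]. A Ledoit-Wolf-style sketch, standing in for the paper's exact $\hat{\alpha}$ and $\hat{\beta}$ formulas (the estimator below is our assumption), is:

```python
import numpy as np

def shrinkage_covariance(X):
    """Estimate alpha, beta in R_tilde = alpha*I + beta*R_hat from the data
    alone, with no user-specified parameter. X is M x K (sensors x snapshots)."""
    M, K = X.shape
    R_hat = X @ X.conj().T / K
    nu = np.real(np.trace(R_hat)) / M               # average eigenvalue
    delta2 = np.linalg.norm(R_hat - nu * np.eye(M), 'fro') ** 2
    rho = sum(np.linalg.norm(np.outer(X[:, k], X[:, k].conj()) - R_hat,
                             'fro') ** 2 for k in range(K)) / K ** 2
    rho = min(rho, delta2)                          # keeps beta in [0, 1]
    beta = 1.0 - rho / delta2
    alpha = (1.0 - beta) * nu
    return alpha, beta, alpha * np.eye(M) + beta * R_hat

rng = np.random.default_rng(4)
M, K = 8, 20
X = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
alpha, beta, R_tilde = shrinkage_covariance(X)
```

The smaller the sample size, the larger the estimated variability term and hence the heavier the loading toward the identity, which is the intended robustness behavior for short training records.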

Our proposed RCLMS algorithm belongs to the class of diagonal loading techniques, but the diagonal loading factor is derived fully automatically from the observation vectors, without requiring any user-specified knowledge. The parameter is determined easily, and the proposed RCLMS algorithm is not sensitive to the choice of the parameter, in contrast to [28]. From the literature [11], it is clear that the major computational demand of that algorithm comes from the eigendecomposition, which requires $O(M^{3})$ flops. This leads to a high computational cost. The proposed algorithm, which does not need an eigendecomposition, reduces the complexity to $O(M^{2})$ flops per iteration. In addition, both the robust Capon algorithm and our proposed algorithm belong to the class of diagonal loading techniques, in which the loading factors can be calculated precisely.

3.3. The Analysis of Complexity Cost

The complexity costs of the two algorithms are shown in Tables 1 and 2.

4. Simulation Results

In this section, we present some simulations to justify the performance of the proposed robust recursive algorithm based on worst-case SINR maximization. We assume a uniform linear array with omnidirectional sensors spaced half a wavelength apart. For each scenario, 200 simulation runs are used to obtain each simulation point. The presumed and actual directions of arrival (DOAs) of the desired signal differ, which corresponds to a mismatch in the signal look direction. Two interfering sources are plane waves impinging from two distinct DOAs. The uncertainty parameter is fixed in all examples.
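The simulation scenario can be reproduced along the following lines. The specific DOAs (3° actual versus 0° presumed signal direction, interferers at 30° and 50°) and the interference power are our assumptions where the text leaves them unspecified:

```python
import numpy as np

def steering(theta_deg, M):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

def snapshots(M, K, snr_db, inr_db, rng):
    """K array snapshots: signal from 3 deg (presumed 0 deg), two interferers
    from 30 deg and 50 deg, and unit-power sensor noise."""
    sig = 10 ** (snr_db / 20) * steering(3.0, M)
    i1, i2 = steering(30.0, M), steering(50.0, M)
    inr = 10 ** (inr_db / 20)
    cw = lambda shape: (rng.standard_normal(shape)
                        + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    s, a1, a2 = cw(K), cw(K), cw(K)        # unit-power complex waveforms
    n = cw((M, K))                         # unit-power sensor noise
    return np.outer(sig, s) + inr * (np.outer(i1, a1) + np.outer(i2, a2)) + n

rng = np.random.default_rng(5)
X = snapshots(M=8, K=100, snr_db=10, inr_db=20, rng=rng)
R_hat = X @ X.conj().T / X.shape[1]        # sample covariance, K = 100
```

Each of the 200 Monte Carlo runs would regenerate `X` with a fresh seed and average the resulting output SINR at every snapshot index.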

Example 1 (output SINR versus the number of snapshots). We assume the signal-to-noise ratio SNR = 10 dB. Figure 1 shows the performance of the methods tested in the no-mismatch case. From Figure 1, we see that the output SINR of the traditional CLMS algorithm is about 14 dB; note that it is sensitive to the small training sample size. However, our proposed algorithm provides improved robustness. Figure 2 shows the array output SINR of the methods tested in the mismatch case.

In Figure 2, we note that the output SINR of the CLMS algorithm is about −10 dB; it is very sensitive to signal steering vector mismatches. However, that of the proposed robust CLMS (RCLMS) algorithm is about 18 dB, which is close to the optimal one. In this scenario, the proposed recursive algorithm outperforms the traditional linearly constrained-LMS algorithm. Moreover, the robust constrained-LMS algorithm has a faster convergence rate.

Example 2 (output SINR versus SNR). In this example, there is a 3° mismatch in the signal look direction. We assume that the training data size is fixed at 100. Figure 3 displays the performance of these algorithms versus the SNR in the no-mismatch case. The performance of these algorithms versus the SNR in the mismatch case is shown in Figure 4.

In this example, the traditional algorithm is very sensitive even to slight mismatches, which can easily occur in practical applications. It is observed from Figure 4 that the CLMS algorithm has poor performance at all values of the SNR. However, our proposed robust recursive algorithm provides improved robustness against signal steering vector mismatches and small training sample sizes, has a faster convergence rate, and yields better output performance than the CLMS algorithm.

5. Conclusions

In this paper, we propose a robust constrained-LMS algorithm based on worst-case SINR maximization. The RCLMS algorithm provides robustness against several types of mismatch and offers a faster convergence rate. The updated weight vector is derived by the gradient descent method and the Lagrange multiplier method, in which the diagonal loading factor is obtained in a simple closed form; this decreases the computation cost. Simulation results demonstrate that the proposed robust recursive algorithm enjoys better performance than the traditional CLMS algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful comments that helped improve the quality of this paper. This work is supported by the Program for New Century Excellent Talents in University (no. NCET-12-0103), by the National Natural Science Foundation of China under Grant no. 61473066, by the Fundamental Research Funds for the Central Universities under Grant no. N130423005, and by the Natural Science Foundation of Hebei Province under Grant no. F2012501044.