Journal of Electrical and Computer Engineering

Volume 2016 (2016), Article ID 2467198, 7 pages

http://dx.doi.org/10.1155/2016/2467198

## Adaptive Complex-Valued Independent Component Analysis Based on Second-Order Statistics

^{1}College of Information and Communication Engineering, Harbin Engineering University, Heilongjiang 150001, China

^{2}College of Electrical and Information Engineering, Beihua University, Jilin 132012, China

^{3}Collaborative Research Center, Meisei University, Tokyo 191-8506, Japan

Received 1 April 2016; Revised 28 July 2016; Accepted 16 August 2016

Academic Editor: Panajotis Agathoklis

Copyright © 2016 Yanfei Jia and Xiaodong Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper proposes a two-stage, fast-convergence adaptive complex-valued independent component analysis algorithm based on the second-order statistics of complex-valued source signals. The first stage constructs a cost function by extending the real-valued whitening cost function to the complex-valued domain and optimizes it with a complex-valued gradient. The second stage constructs its cost function from the restriction that the pseudocovariance matrix of the separated signal be diagonal, and optimizes it with a geodesic method. Compared with other adaptive complex-valued independent component analysis algorithms, the proposed method shows a faster convergence rate and smaller error. Computer simulations were performed on synthesized signals and communication signals. The simulation results demonstrate the validity of the proposed algorithm.

#### 1. Introduction

Blind source separation (BSS) is the separation of a set of source signals from a set of mixed signals without the aid of information (or with very little information) about either the source signals or the mixing process. Independent component analysis (ICA) is an attractive approach for solving blind source separation problems. ICA can be divided into real-valued ICA and complex-valued ICA according to the nature of the mixed signals. Complex-valued ICA is widely used to estimate the mixing matrix or to separate complex-valued mixed signals, such as frequency-domain signals [1, 2], digital communication signals [3, 4], functional magnetic resonance imaging signals [5], and power system signals [6].

Studies of complex-valued ICA can be divided into three categories. The first category includes methods based on a nonlinear function, such as complex-valued fastICA (C-fastICA) [7], noncircular complex fastICA (NC-fastICA) [8], complex maximization of non-Gaussianity (CMN) [9], complex-valued ICA by entropy bound minimization (CEBM) [10], complex-valued ICA by entropy rate bound minimization (CERBM) [11], and others [3, 12]. The second category includes methods based on kurtosis or higher-order cumulants, such as joint approximate diagonalization of eigenmatrices (JADE) [4], kurtosis maximization (KM) [13], pseudo-Euclidean gradient iteration ICA (PEGI-ICA) [14], and others [15–17]. The third category includes methods based on second-order statistics, such as the strong-uncorrelating transform (SUT) [18, 19] and its adaptive algorithms [2, 20–22], and the pseudo-uncorrelating transform (PUT) [23]. Recently, the performance and separability of the SUT method for complex-valued Gaussian mixtures have also been studied [24, 25]. Each complex-valued ICA category has its own merits and appropriate application conditions. The methods based on second-order statistics have a simple structure and low computational complexity and are suitable for complex Gaussian and non-Gaussian noncircular signals. In contrast, ICA methods in the first and second categories are not suitable for complex Gaussian noncircular signals.

The major advantage of SUT is that, “*whenever applicable, [it] remains perhaps the simplest and most accessible approach*” [24]. However, SUT is a batch algorithm and cannot process signals in real time, so several adaptive complex-valued ICA algorithms based on second-order statistics have been proposed [2, 20–22]. Compared with other complex-valued ICA strategies, adaptive complex-valued ICA algorithms based on second-order statistics are simpler in structure and do not require the probability density of the real and imaginary parts of a complex-valued source signal to be non-Gaussian. The Scott method [20] proposes an updating formula for the separating matrix in adaptive complex-valued ICA without a mathematical derivation. The Cong method [2] derives an adaptive complex-valued ICA from a cost function that requires the covariance and pseudocovariance matrices of the noncircular signals to be simultaneously diagonal. The convergence condition of the Scott and Cong methods therefore requires that the covariance and pseudocovariance of the separated signal be diagonal at the same time. For example, if only the covariance of the separated signal is diagonal, the method cannot reach convergence until the pseudocovariance is also diagonal. This requirement can reduce convergence speed. The Yang method [22] uses a two-step serial updating scheme to make the separated signals satisfy the above convergence condition. In its second step, Yang uses an orthogonalization operation to force the separating matrix to be unitary. This changes the updating direction of the separating matrix and leads to slow convergence.

To increase the rate of convergence, a fast complex-valued ICA method is proposed in this work. The proposed method first extends the real-valued whitening process to the complex-valued domain to give the processed signal unit variance. Second, it constructs a cost function from the restriction that the pseudocovariance matrix of the separated signals be diagonal and optimizes this cost function with a geodesic method. This avoids computing the square root and inverse of the separating matrix and also keeps the separating matrix unitary without any forcing operation. As a result, the proposed method converges faster than the other adaptive methods.

#### 2. Complex-Valued ICA and Second-Order Statistics

##### 2.1. Complex-Valued Linear ICA Model

Generally, a noise-free linear complex-valued ICA model can be expressed as follows:

$$\mathbf{x}(t)=\mathbf{A}\,\mathbf{s}(t),\tag{1}$$

where $\mathbf{s}(t)=[s_1(t),\dots,s_N(t)]^{T}$ is the unknown column vector of source signals, $N$ is the number of source signals, $\mathbf{A}$ is the unknown $M\times N$ complex-valued mixing matrix, $\mathbf{x}(t)=[x_1(t),\dots,x_M(t)]^{T}$ is the column vector of observed complex-valued mixed signals, and $M$ is the number of observed signals. The components of the source signals are mutually independent. Most complex-valued ICA algorithms assume that the number of observed signals is not less than the number of source signals and that at most one Gaussian source signal is present. The aim of complex-valued ICA is to find the separating matrix and estimate the source signals and the mixing matrix. Given that complex-valued ICA does not use any information about the source signals or the mixing matrix, it has some indeterminacy in amplitude, ordering, and phase. This indeterminacy does not affect the shape of the estimated source signal waveform, which carries most of the information about the source signals.
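As a concrete illustration of the model above, the following NumPy sketch generates hypothetical independent noncircular sources and mixes them with a random complex matrix. All names, dimensions, and distributions here are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 1000  # number of sources and samples (illustrative values)

# Independent noncircular complex sources: unequal real/imaginary variances
# make the pseudocovariance E[s s^T] nonzero (the noncircular case).
s = rng.normal(size=(N, T)) + 0.3j * rng.normal(size=(N, T))

# Unknown complex-valued mixing matrix A (square and invertible in this sketch)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# Observed mixtures: x(t) = A s(t)
x = A @ s
print(x.shape)  # (3, 1000)
```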

##### 2.2. Second-Order Statistics of Complex-Valued Signals

Assume a complex-valued random column vector $\mathbf{z}=\mathbf{z}_r+j\mathbf{z}_i$, where $\mathbf{z}_r$ and $\mathbf{z}_i$ are the real and imaginary parts of $\mathbf{z}$, respectively, and $j=\sqrt{-1}$. The expectation of the random vector is defined as follows:

$$E[\mathbf{z}]=E[\mathbf{z}_r]+jE[\mathbf{z}_i].\tag{2}$$

Its covariance matrix is defined as follows:

$$\mathbf{C}_z=E\big[(\mathbf{z}-E[\mathbf{z}])(\mathbf{z}-E[\mathbf{z}])^{H}\big],\tag{3}$$

where $(\cdot)^{H}$ denotes the Hermitian transpose. Its corresponding pseudocovariance matrix is defined as follows:

$$\mathbf{P}_z=E\big[(\mathbf{z}-E[\mathbf{z}])(\mathbf{z}-E[\mathbf{z}])^{T}\big],\tag{4}$$

where $(\cdot)^{T}$ denotes the matrix transpose. The covariance matrix together with the pseudocovariance matrix is the full expression of the second-order statistics [19]. If the pseudocovariance matrix equals zero, the random vector is called circular or proper. If both the covariance matrix and the pseudocovariance matrix of the random vector are diagonal with nonzero diagonal elements, the random vector is noncircular or improper, and the components of the random vector are called strongly uncorrelated components.
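These definitions translate directly into sample estimates. The sketch below (an illustration of ours, not code from the paper) estimates both matrices and checks that a circular signal, whose independent real and imaginary parts have equal variance, has a vanishing pseudocovariance:

```python
import numpy as np

def second_order_stats(z):
    """Sample covariance C = E[z z^H] and pseudocovariance P = E[z z^T]
    of complex data z with shape (n_signals, n_samples)."""
    z = z - z.mean(axis=1, keepdims=True)   # remove the mean, as in (3)-(4)
    T = z.shape[1]
    C = (z @ z.conj().T) / T
    P = (z @ z.T) / T
    return C, P

rng = np.random.default_rng(1)
# Circular signal: equal-variance independent real/imag parts -> P ~ 0, C ~ I
zc = (rng.normal(size=(2, 200000)) + 1j * rng.normal(size=(2, 200000))) / np.sqrt(2)
C, P = second_order_stats(zc)
print(np.abs(P).max() < 0.05)  # True: pseudocovariance is near zero
```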

##### 2.3. Complex-Valued ICA Based on SUT

For any complex random vector $\mathbf{x}$, if $\mathbf{x}$ can be transformed into a random vector $\mathbf{z}=\mathbf{W}\mathbf{x}$ by a nonsingular square matrix $\mathbf{W}$ such that $\mathbf{z}$ has a covariance matrix equal to the identity and a diagonal pseudocovariance matrix with diagonal elements between zero and one, then the matrix $\mathbf{W}$ is called a strong-uncorrelating transform (SUT). If the observed signal is the complex random vector and the source signal has strongly uncorrelated components, then the SUT is the separating matrix of the complex-valued ICA problem. The procedure for complex-valued ICA based on SUT is as follows [18].

(1) Whiten the complex-valued observed signals $\mathbf{x}(t)$:

$$\mathbf{z}(t)=\mathbf{V}\mathbf{x}(t),\qquad \mathbf{V}=\mathbf{C}_x^{-1/2},\tag{5}$$

where the whitening matrix $\mathbf{V}$ is the inverse of the matrix square root of the covariance matrix $\mathbf{C}_x$ and $\mathbf{z}(t)$ is the whitened signal with a unit covariance matrix.

(2) Determine the separating matrix of the whitened signal by Takagi’s factorization of its pseudocovariance matrix:

$$\hat{\mathbf{P}}_z=\mathbf{F}\boldsymbol{\Sigma}\mathbf{F}^{T},\tag{6}$$

where $\mathbf{F}$ is unitary and $\boldsymbol{\Sigma}$ is diagonal with nonnegative entries. From (5) and (6) we obtain the separating matrix $\mathbf{W}=\mathbf{F}^{H}\mathbf{V}$.
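The two SUT steps can be prototyped as follows. Takagi's factorization is not available in NumPy, so this sketch computes it via the real symmetric embedding of the complex symmetric pseudocovariance matrix; the function names and the embedding trick are our own choices, not the paper's:

```python
import numpy as np

def takagi(P):
    """Takagi factorization of a complex symmetric matrix: P = F diag(s) F^T,
    with F unitary and s >= 0. Uses the real symmetric embedding of P:
    if M [x; y] = s [x; y] with M = [[Re P, Im P], [Im P, -Re P]],
    then f = x + j y satisfies P conj(f) = s f."""
    n = P.shape[0]
    R, I = P.real, P.imag
    M = np.block([[R, I], [I, -R]])      # real symmetric; spectrum is +/- singular values
    w, v = np.linalg.eigh(M)
    idx = np.argsort(w)[::-1][:n]        # keep the n nonnegative eigenvalues
    F = v[:n, idx] + 1j * v[n:, idx]     # columns are the Takagi vectors
    return F, w[idx]

def sut(x):
    """Strong-uncorrelating transform of mixtures x (n_signals, n_samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    T = x.shape[1]
    Cx = (x @ x.conj().T) / T            # sample covariance
    d, E = np.linalg.eigh(Cx)
    V = E @ np.diag(d ** -0.5) @ E.conj().T   # whitening matrix V = Cx^{-1/2}
    z = V @ x
    Pz = (z @ z.T) / T                   # pseudocovariance of the whitened signal
    F, _ = takagi(Pz)
    return F.conj().T @ V                # separating matrix W = F^H V
```

Applying the returned `W` to the (centered) mixtures yields an output whose sample covariance is the identity and whose sample pseudocovariance is diagonal, as the SUT definition requires.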

#### 3. Proposed Adaptive Complex-Valued ICA

In this section, we describe an adaptive, fast-convergence complex-valued ICA algorithm based on the second-order statistics used in the SUT method. Unlike other adaptive complex-valued ICA methods, which force the separated signals to satisfy both second-order conditions simultaneously, this method uses an adaptive serial updating scheme to realize the SUT.

First, we use an adaptive method to whiten the observed signals. The cost function used in real-valued whitening is directly extended to the complex-valued case:

$$J_1(\mathbf{V})=\frac{1}{2}\left(\sum_{i=1}^{N}E\big[|z_i(k)|^2\big]-\log\det\big(\mathbf{V}\mathbf{V}^{H}\big)\right),\tag{7}$$

where $\mathbf{V}$ is the whitening matrix and $z_i(k)$ is the $i$th whitened signal. In complex-valued signal processing, the steepest-descent direction of cost function (7) is

$$\nabla_{\mathbf{V}^{*}}J_1=E\big[\mathbf{z}(k)\mathbf{x}(k)^{H}\big]-\big(\mathbf{V}^{H}\big)^{-1},\tag{8}$$

where $\mathbf{x}(k)$ is the observed signal and $\mathbf{z}(k)=\mathbf{V}\mathbf{x}(k)$. To avoid computing the matrix inverse, a complex-valued natural gradient is used to simplify (8):

$$\tilde{\nabla}J_1=\nabla_{\mathbf{V}^{*}}J_1\,\mathbf{V}^{H}\mathbf{V}=\big(E\big[\mathbf{z}(k)\mathbf{z}(k)^{H}\big]-\mathbf{I}\big)\mathbf{V}.\tag{9}$$

So, adaptive whitening can be expressed as follows:

$$\mathbf{V}(k+1)=\mathbf{V}(k)+\mu_1\big(\mathbf{I}-E\big[\mathbf{z}(k)\mathbf{z}(k)^{H}\big]\big)\mathbf{V}(k).\tag{10}$$

If we use the instantaneous value instead of the expected value in (10), we obtain the adaptive real-time whitening rule:

$$\mathbf{V}(k+1)=\mathbf{V}(k)+\mu_1\big(\mathbf{I}-\mathbf{z}(k)\mathbf{z}(k)^{H}\big)\mathbf{V}(k).\tag{11}$$

Second, we must modify the separated signals so that their pseudocovariance matrix is diagonal while their covariance matrix remains the identity. We use the cost function in [22], which can be expressed as follows:

$$J_2(\mathbf{W})=\big\|\mathbf{W}\hat{\mathbf{P}}_z\mathbf{W}^{T}-\boldsymbol{\Lambda}\big\|_F^{2},\tag{12}$$

where $\mathbf{y}(k)=\mathbf{W}\mathbf{z}(k)$, $\mathbf{W}$ is the separating matrix of the whitened signals, and $\boldsymbol{\Lambda}$ is the diagonal matrix of $\hat{\mathbf{P}}_y=\mathbf{W}\hat{\mathbf{P}}_z\mathbf{W}^{T}$. The ordinary gradient of (12) with respect to $\mathbf{W}^{*}$ is

$$\nabla_{\mathbf{W}^{*}}J_2=2\big(\hat{\mathbf{P}}_y-\boldsymbol{\Lambda}\big)\mathbf{W}^{*}\hat{\mathbf{P}}_z^{*}.\tag{13}$$

The update of $\mathbf{W}$ can be written as follows:

$$\mathbf{W}(k+1)=\mathbf{W}(k)-\mu_2\,\nabla_{\mathbf{W}^{*}}J_2,\tag{14}$$

where

$$\hat{\mathbf{P}}_z=E\big[\mathbf{z}(k)\mathbf{z}(k)^{T}\big]\tag{15}$$

is the pseudocovariance matrix of the whitened signal. At the convergence point, the pseudocovariance matrix of the separated signal is diagonal. To keep the covariance matrix of the separated signal equal to the identity, the separating matrix must be unitary. In [22], the fixed-point fastICA normalization is used to force the separating matrix to be unitary:

$$\mathbf{W}\leftarrow\big(\mathbf{W}\mathbf{W}^{H}\big)^{-1/2}\mathbf{W}.\tag{16}$$

This approach has two major drawbacks. One is that (16) changes the steepest gradient direction in every iteration, which slows the convergence speed.
The second is that (16) must compute the square root and the inverse of the separating matrix in every iteration, which increases the computational complexity of the algorithm and lengthens the time to convergence.
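The first-stage whitening rule can be sketched as a small batch routine; the update matches the natural-gradient form V ← V + μ(I − E[zzᴴ])V described above, while the function name, step size, and iteration count are illustrative choices of ours:

```python
import numpy as np

def adaptive_whiten(x, mu=0.02, n_iter=1000):
    """Adaptive natural-gradient whitening sketch:
    V(k+1) = V(k) + mu * (I - E[z z^H]) V(k), with z = V x.
    Here the expectation is replaced by a batch average over all samples."""
    n, T = x.shape
    V = np.eye(n, dtype=complex)
    for _ in range(n_iter):
        z = V @ x
        Cz = (z @ z.conj().T) / T            # sample covariance of the current output
        V = V + mu * (np.eye(n) - Cz) @ V    # natural-gradient whitening step
    return V
```

A sample-by-sample (real-time) variant simply replaces `Cz` with the instantaneous outer product of the current sample, mirroring the instantaneous form of the rule.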

To overcome these problems, we use a geodesic method to search for the optimal separating matrix $\mathbf{W}$. The geodesic method moves the separating matrix along the surface of the unitary group so that it converges to a local minimum without any forcing operation. The geodesic update is given by

$$\mathbf{W}(k+1)=\mathbf{R}(k)\,\mathbf{W}(k),\tag{17}$$

where

$$\mathbf{R}(k)=\exp\big(-\mu_2\,\mathbf{G}(k)\big)\tag{18}$$

and $\mathbf{G}(k)$ is skew-Hermitian, so that $\mathbf{R}(k)$ is unitary. If $\mathbf{W}(k)$ is a unitary matrix, then $\mathbf{W}(k+1)$ is also a unitary matrix. By using the geodesic method, we need no additional operation to keep the separating matrix unitary, and the search direction is not changed.
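A single geodesic step can be sketched as follows. Since G is skew-Hermitian, iG is Hermitian, so the matrix exponential can be computed with an eigendecomposition in plain NumPy; the helper name and gradient argument are our own conventions:

```python
import numpy as np

def geodesic_step(W, grad, mu):
    """Move W one step along a geodesic of the unitary group.
    G = grad W^H - W grad^H is skew-Hermitian, so R = expm(-mu G) is
    unitary and the unitarity of W is preserved without re-orthogonalization."""
    G = grad @ W.conj().T - W @ grad.conj().T
    lam, U = np.linalg.eigh(1j * G)      # iG is Hermitian: iG = U diag(lam) U^H
    R = U @ np.diag(np.exp(1j * mu * lam)) @ U.conj().T   # equals expm(-mu G)
    return R @ W
```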

Using the geodesic method with self-tuning [26] to optimize cost function (12), we obtain a fast-convergence complex-valued ICA method. The implementation of the proposed adaptive ICA method is as follows:

(1) Initialize the whitening matrix $\mathbf{V}$ and separating matrix $\mathbf{W}$ with the identity matrix, and set the learning rates $\mu_1$ and $\mu_2$ and the numbers of iterations for optimizing (7) and (12), respectively.

(2) Use (10) to whiten the observed signal and obtain the whitened signal $\mathbf{z}(k)$ and the whitening matrix $\mathbf{V}$.

(3) Compute the gradient of the cost function in Riemannian space:

$$\mathbf{G}(k)=\nabla_{\mathbf{W}^{*}}J_2\,\mathbf{W}(k)^{H}-\mathbf{W}(k)\,\nabla_{\mathbf{W}^{*}}J_2^{H},\tag{19}$$

where $\nabla_{\mathbf{W}^{*}}J_2=2(\hat{\mathbf{P}}_y-\boldsymbol{\Lambda})\mathbf{W}^{*}\hat{\mathbf{P}}_z^{*}$, $\boldsymbol{\Lambda}$ is the diagonal matrix with diagonal elements $[\hat{\mathbf{P}}_y]_{ii}$, $\hat{\mathbf{P}}_y=\mathbf{W}(k)\hat{\mathbf{P}}_z\mathbf{W}(k)^{T}$, and $\hat{\mathbf{P}}_z=E[\mathbf{z}(k)\mathbf{z}(k)^{T}]$.

(4) Compute the rotation matrix $\mathbf{R}(k)=\exp(-\mu_2\mathbf{G}(k))$ and the cost $J_2(\mathbf{R}(k)\mathbf{W}(k))$.

(5) If $J_2(\mathbf{R}(k)\mathbf{W}(k))<J_2(\mathbf{W}(k))$, set $\mu_2\leftarrow 2\mu_2$, where the two costs are evaluated with the diagonal matrices $\boldsymbol{\Lambda}$ corresponding to $\mathbf{R}(k)\mathbf{W}(k)$ and $\mathbf{W}(k)$, respectively.

(6) If $J_2(\mathbf{R}(k)\mathbf{W}(k))\ge J_2(\mathbf{W}(k))$, set $\mu_2\leftarrow \mu_2/2$.

(7) Update the separating matrix: $\mathbf{W}(k+1)=\mathbf{R}(k)\mathbf{W}(k)$.

(8) If $\|\mathbf{G}(k)\|$ is sufficiently small, STOP; else return to step (3).
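The second stage can be prototyped end to end. This sketch minimizes the off-diagonal energy of the output pseudocovariance over the unitary group with geodesic steps, using a simple doubling/halving rule as a stand-in for the self-tuning of [26]; all names, step sizes, and iteration counts are illustrative:

```python
import numpy as np

def second_stage(Pz, mu=0.1, n_iter=500):
    """Find a unitary W making Py = W Pz W^T (approximately) diagonal,
    where Pz is the (complex symmetric) pseudocovariance of the whitened
    signal. Cost: J(W) = || Py - diag(Py) ||_F^2, optimized geodesically."""
    n = Pz.shape[0]
    W = np.eye(n, dtype=complex)

    def cost(W):
        Py = W @ Pz @ W.T
        return np.linalg.norm(Py - np.diag(np.diag(Py))) ** 2

    J = cost(W)
    for _ in range(n_iter):
        Py = W @ Pz @ W.T
        M = Py - np.diag(np.diag(Py))
        grad = 2 * M @ W.conj() @ Pz.conj()           # gradient wrt conj(W); Pz symmetric
        G = grad @ W.conj().T - W @ grad.conj().T     # skew-Hermitian Riemannian gradient
        lam, U = np.linalg.eigh(1j * G)
        R = U @ np.diag(np.exp(1j * mu * lam)) @ U.conj().T   # rotation expm(-mu G)
        J_new = cost(R @ W)
        if J_new < J:                 # cost decreased: accept and grow the step
            W, J, mu = R @ W, J_new, 2.0 * mu
        else:                         # cost did not decrease: shrink the step
            mu = 0.5 * mu
    return W
```

Because every accepted rotation is exactly unitary, the iterate stays on the unitary group throughout, with no square-root or inverse computations.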

#### 4. Experimental Results and Analysis

In order to test the algorithm, we used five synthesized signals with different spectral coefficients, three digital communication signals with different spectral coefficients, and three synthesized signals of which two have the same spectral coefficients as the source signals. For simplicity, we directly used the expectation of the signal instead of the instantaneous value. The quality of separation was assessed using the performance index (PI), a widely used metric in ICA, which can be expressed as [27]

$$\mathrm{PI}=\frac{1}{2N(N-1)}\left[\sum_{i=1}^{N}\left(\sum_{j=1}^{N}\frac{|g_{ij}|}{\max_{k}|g_{ik}|}-1\right)+\sum_{j=1}^{N}\left(\sum_{i=1}^{N}\frac{|g_{ij}|}{\max_{k}|g_{kj}|}-1\right)\right],\tag{20}$$

where $g_{ij}$ is the $(i,j)$th element of the global system matrix $\mathbf{G}=\mathbf{W}\mathbf{A}$, $\mathbf{W}$ is the separating matrix of the mixed signal, $\mathbf{A}$ is the mixing matrix, and $\max_{k}|g_{ik}|$ and $\max_{k}|g_{kj}|$ are the maximum absolute values of the elements in the $i$th row and $j$th column of $\mathbf{G}$, respectively. When perfect separation is achieved, the performance index is zero. “In practice, the value of performance index 10^{−2} gives quite a good performance” [27]. The smaller the value of PI, the better the performance.
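A direct implementation of the performance index follows, using the 1/(2N(N−1)) normalization given above (normalization conventions for PI vary in the literature; this one maps perfect separation to 0 and an all-equal global matrix to 1):

```python
import numpy as np

def performance_index(G):
    """Performance index of the global system matrix G = W A.
    Zero iff each row and each column of |G| has exactly one nonzero
    element, i.e., separation is perfect up to permutation and scaling."""
    g = np.abs(np.asarray(G))
    n = g.shape[0]
    rows = (g / g.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (g / g.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (rows.sum() + cols.sum()) / (2 * n * (n - 1))

print(performance_index(np.eye(3)))  # 0.0: perfect separation
```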

In the first experiment, five complex-valued synthesized source signals with 10000 samples each were used. Each source was constructed from samples drawn from a normal random distribution, with the sources chosen to have different spectral coefficients. The mixing matrix is a complex-valued random matrix whose real and imaginary parts are generated from a uniform distribution between 0 and 1. All algorithms used the same learning rate of 0.01 and were run 100 times; each time, the source signals and the mixing matrix were independently generated.

Convergence curves for the four methods are shown in Figure 1: the Yang method [22], the Scott method [20], the SUT method [18], and the proposed method. Each method has 100 convergence curves, and each convergence curve corresponds to the results of one run. The SUT method is a batch method without iterative computation, so its convergence curves are straight lines. From Figure 1, we can see that the convergence curves of the proposed method are clustered more closely than those of the other adaptive methods, though not as closely as those of the SUT method. This suggests that the proposed method delivers stable performance across different mixed sources that is better than that of the other adaptive methods. The SUT method shows the smallest fluctuation range, followed by the proposed method, the Scott method, and then the Yang method. This indicates that the proposed method is better suited to processing different mixed signals than the other adaptive methods, again excepting the SUT method. Although the performance of SUT is more stable than that of the other methods for separating different mixed signals, its realization involves Takagi’s factorization, which is difficult to implement and unsuitable for real-time separation of mixed signals. The proposed adaptive complex-valued BSS method is easy to implement and more appropriate for real-time separation of mixed signals.