Research Article  Open Access
Pengcheng Xu, Zhigang Yuan, Wei Jian, Wei Zhao, "Variable Step-Size Method Based on a Reference Separation System for Source Separation", Journal of Sensors, vol. 2015, Article ID 964098, 7 pages, 2015. https://doi.org/10.1155/2015/964098
Variable Step-Size Method Based on a Reference Separation System for Source Separation
Abstract
Traditional variable step-size methods are effective in solving the problem of choosing the step-size in adaptive blind source separation. However, the initial setting of the learning rate is critical, and the convergence speed remains low. This paper proposes a novel variable step-size method based on a reference separation system for online blind source separation. The correlation between the estimated source signals and the original source signals increases with iteration. We therefore introduce a reference separation system to approximately estimate this correlation in terms of the mean square error (MSE), which is used to update the step-size. Computing the MSE over "mini-batches" reduces the complexity of the algorithm to some extent. Moreover, simulations demonstrate that the proposed method exhibits faster convergence and better steady-state performance than the fixed step-size method in the noise-free case, while converging faster than classical variable step-size methods in both stationary and nonstationary environments.
1. Introduction
Blind source separation (BSS) aims at extracting latent unknown source signals from their observed mixtures at an array of sensors, without a priori knowledge of the original source signals or the mixing coefficients. In the separating process, nothing can be used except the observation sequences and statistical assumptions about the sources. This makes BSS a versatile tool in many multisensor systems, such as antenna arrays in acoustics or electromagnetism, chemical sensor arrays, and electrode arrays in electroencephalography [1].
Several optimization algorithms have been proposed for BSS [2]; they can be broadly categorized into batch-based algorithms and adaptive (sequential) algorithms. Batch-based algorithms are blockwise and will not work until a block of data samples has been received, such as the fast fixed-point algorithm [3]. In this paper, we consider the latter class, which has particular practical advantages owing to its computational simplicity and latent ability to track a nonstationary environment [4].
However, traditional adaptive BSS algorithms, such as the equivariant adaptive separation via independence (EASI) algorithm [5] and the natural gradient algorithm (NGA) [6], usually assume that the step-size is a small positive constant, leading to an inevitable conflict between learning rate and stability performance, that is, slow convergence speed or large steady-state error. A simple way to resolve the conflict is to reduce the learning rate as the iteration goes on [7, 8], but this brings about a new problem: if the learning rate decreases too far before the source components are extracted, the separation system will fail to separate the sources properly. To improve the learning rate and stability performance, variable step-size algorithms have been proposed. Variable step-size algorithms can exploit online measurements of the state of the separation system obtained from its outputs and parameter updates. In [4, 9, 10], variable step-size algorithms have been derived from the gradients of different contrasts, that is, for the NGA, EASI, and SNGA algorithms. Zhang et al. put forward a grading learning algorithm based on measuring the correlation of the separated signals, whose learning rate is updated according to the state of separation [11]. Hsieh et al. proposed an effective learning rate adjustment method based on an improved particle swarm optimizer [12]. But the separating performance of these variable step-size algorithms is usually sensitive to the initial parameter settings. As a result, convergence is still slow, and an improper initial learning rate results in large steady-state error or even divergence. Ou et al. proposed a variable step-size algorithm based on an auxiliary separation system [13], in which the step-size is updated by estimating a pseudo-performance index that descends in an exponential form.
Compared to classical variable step-size methods, the separation performance of Ou's method is less sensitive to the initial settings.
In order to improve the initial convergence and stability performance, we consider using a reference separation system, based on the MSE of the instantaneous outputs, to update the step-size. This technique is shown to improve both the convergence speed and the steady-state performance. Moreover, the use of "mini-batches" reduces the overall computational load of the algorithm. The remainder of this paper is organized as follows. In Section 2, the principle of adaptive source separation methods is briefly summarized. Our algorithm is proposed in Section 3. Numerical simulation results and discussion are provided in Section 4. A concise conclusion is given at the end of the paper. Moreover, this paper can be regarded as an important complement to Ou's method in [13].
2. Adaptive Algorithms for BSS
In the noise-free instantaneous case, we assume that $n$ unknown, statistically independent, zero-mean source signals, with at most one having a Gaussian distribution, contained within $\mathbf{s}(t) = [s_1(t), \ldots, s_n(t)]^T$ pass through an unknown mixing system $\mathbf{A} \in \mathbb{R}^{m \times n}$; therefore the mixed signals $\mathbf{x}(t)$ can be modeled as
$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t),$$
where $t$ is the time index and $(\cdot)^T$ is the vector transpose operator. To simplify the problem, we further assume that the number of sources matches the number of mixtures, that is, $m = n$, an exactly determined problem.
The blind separation problem is then to recover the original source signals from the observations $\mathbf{x}(t)$, which is equivalent to estimating an $n \times n$ separating matrix $\mathbf{W}(t)$ that performs the inverse operation of the mixing process, as subsequently used in the separation model. Figure 1 shows a block diagram of the adaptive BSS model. The output signal vector is then obtained as
$$\mathbf{y}(t) = \mathbf{W}(t)\,\mathbf{x}(t),$$
where $\mathbf{y}(t)$ is an estimate of $\mathbf{s}(t)$ to within the well-known permutation and scaling ambiguities.
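The mixing/separation model and its inherent ambiguities can be illustrated with a short numerical sketch (not part of the original experiments; all values here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# n independent zero-mean sources, an unknown n-by-n mixing matrix A,
# and a separating matrix W that inverts the mixing process.
n, T = 3, 1000
S = rng.uniform(-1.0, 1.0, size=(n, T))   # sub-Gaussian (uniform) sources
A = rng.standard_normal((n, n))           # unknown mixing system
X = A @ S                                 # observed mixtures x(t) = A s(t)

# With the exact inverse, y(t) = W x(t) recovers the sources; a blind
# algorithm can only reach W = P D A^-1 (P a permutation, D diagonal),
# hence the permutation and scaling ambiguities mentioned above.
W = np.linalg.inv(A)
Y = W @ X
print(np.allclose(Y, S))                  # True: exactly determined case
```

In practice $\mathbf{A}$ is unknown, so `W` must be estimated adaptively from `X` alone, as in the algorithms below.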
Based on classical contrasts such as the mutual information contrast, the maximum likelihood contrast, and the Infomax principle, many adaptive algorithms have been proposed to estimate $\mathbf{W}$. Amari proved that the NGA algorithm is the fastest least-mean-square (LMS) type BSS algorithm [6]. The natural gradient BSS algorithms based on the mutual information contrast, the maximum likelihood contrast, and the Infomax principle all have the same form:
$$\mathbf{W}(k+1) = \mathbf{W}(k) + \mu\left[\mathbf{I} - \boldsymbol{\varphi}(\mathbf{y}(k))\,\mathbf{y}^T(k)\right]\mathbf{W}(k),$$
where $\mathbf{I}$ is the identity matrix, $\mu$ is the step-size, and $\boldsymbol{\varphi}(\mathbf{y}) = [\varphi_1(y_1), \ldots, \varphi_n(y_n)]^T$, whose components $\varphi_i(\cdot)$ are increasing odd functions, usually called activation functions.
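As an illustrative sketch (not the authors' implementation), one serial NGA update with the cubic activation commonly used for sub-Gaussian sources can be written as:

```python
import numpy as np

def nga_step(W, x, mu=0.01, phi=lambda y: y ** 3):
    """One natural-gradient update: W <- W + mu * (I - phi(y) y^T) W.

    phi is the activation function: y**3 is the usual choice for
    sub-Gaussian sources, np.tanh for super-Gaussian sources.
    """
    y = W @ x                             # current output y(t) = W x(t)
    n = W.shape[0]
    G = np.eye(n) - np.outer(phi(y), y)   # natural-gradient direction
    return W + mu * G @ W

# Adapt over a stream of mixed samples (toy 2-source example).
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
W = np.eye(2)
for t in range(5000):
    s = rng.uniform(-1.0, 1.0, 2)         # sub-Gaussian sources
    W = nga_step(W, A @ s)
C = W @ A   # approaches a generalized permutation matrix P D
```

With a zero input the update reduces to `W <- (1 + mu) W`, which is a convenient sanity check for the implementation.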
Based on the fact that the separating matrix can be factorized into the product of an orthogonal matrix and the prewhitening matrix, and by combining the LMS-type updating formulas of these two matrices with some reasonable approximations, the EASI algorithm is derived [5]:
$$\mathbf{W}(k+1) = \mathbf{W}(k) - \lambda\left[\mathbf{y}(k)\mathbf{y}^T(k) - \mathbf{I} + \boldsymbol{\varphi}(\mathbf{y}(k))\,\mathbf{y}^T(k) - \mathbf{y}(k)\,\boldsymbol{\varphi}^T(\mathbf{y}(k))\right]\mathbf{W}(k).$$
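A corresponding sketch of one serial EASI update, following the form given in [5] (again an illustration, not the authors' code), is:

```python
import numpy as np

def easi_step(W, x, lam=0.01, phi=np.tanh):
    """One EASI update:
    W <- W - lam * (y y^T - I + phi(y) y^T - y phi(y)^T) W.

    The bracketed term combines a decorrelation part (y y^T - I)
    with a skew-symmetric higher-order part built from phi.
    """
    y = W @ x
    n = W.shape[0]
    H = (np.outer(y, y) - np.eye(n)
         + np.outer(phi(y), y) - np.outer(y, phi(y)))
    return W - lam * H @ W
```

For a zero input the bracket collapses to $-\mathbf{I}$, so the update reduces to `W <- (1 + lam) W`, which again serves as a quick correctness check.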
It has been shown that, compared with a fixed step-size, an algorithm with a variable step-size has an improved convergence rate. Yuan et al. derived a gradient variable step-size algorithm for the NGA algorithm [10], which adapts the step-size in the form
$$\mu(k+1) = \mu(k) - \rho\,\frac{\partial \hat{J}(k)}{\partial \mu},$$
where $\rho$ is a small constant and $\hat{J}(k)$ is an instantaneous estimate of the cost function from which the NGA algorithm is derived.
Note that the activation functions (and thus the step-size update functions) can be identical when the sources are all sub-Gaussian or all super-Gaussian signals. The distinct distributions of the signals determine the different activation functions; that is, the separation of all sub-Gaussian sources usually utilizes the cubic function, while the proper choice for super-Gaussian source separation is the hyperbolic tangent function.
3. The Proposed Algorithm
As is known, in the process of adaptive BSS the estimated signals approximate the source signals as the iteration goes on, provided the permutation and scaling ambiguities of the estimated signals can be eliminated [14]. The correlation between the estimated signals and the source signals can be evaluated by the mean-square-error, defined as
$$\xi_{ij} = \frac{1}{N}\sum_{t=1}^{N}\bigl[s_i(t) - y_j(t)\bigr]^2,$$
where $N$ is the sample size, and $s_i(t)$ as well as $y_j(t)$ is normalized before the evaluation of $\xi_{ij}$. When the separation system is steady, the mean-square-error matrix MSE, whose $(i,j)$th element is $\xi_{ij}$, has one, and only one, zero entry in each row and column.
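A minimal sketch of this mean-square-error matrix, assuming each signal is normalized to unit RMS and rectified so that the scaling and sign ambiguity does not affect the comparison (the normalization described later in this section), is:

```python
import numpy as np

def mse_matrix(S, Y):
    """MSE matrix between sources S and outputs Y (rows = signals).

    Each signal is normalized to unit RMS and rectified (absolute
    value) before comparison -- an assumption made here to mirror
    the normalization used in the text.
    """
    norm = lambda Z: np.abs(Z / np.sqrt(np.mean(Z ** 2, axis=1, keepdims=True)))
    Sn, Yn = norm(S), norm(Y)
    # xi[i, j] = (1/N) * sum_t (Sn[i, t] - Yn[j, t])^2
    return np.mean((Sn[:, None, :] - Yn[None, :, :]) ** 2, axis=2)

rng = np.random.default_rng(2)
S = rng.uniform(-1, 1, size=(3, 2000))
Y = S[[2, 0, 1]] * np.array([[2.0], [-0.5], [1.5]])  # permuted, rescaled copy
xi = mse_matrix(S, Y)
# Near-perfect separation leaves exactly one (near-)zero entry
# in each row and each column of the MSE matrix.
print((xi < 1e-9).sum(axis=0), (xi < 1e-9).sum(axis=1))
```

The permuted-and-rescaled copy stands in for a converged separator; the one-zero-per-row-and-column pattern is the steady-state condition stated above.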
If we could calculate the matrix MSE at each update of the separating matrix $\mathbf{W}$, a natural rule for a variable step-size algorithm would be to adjust the step-size adaptively in terms of MSE. However, since the source signals are unknown, the matrix MSE at each update is not accessible in practice.
In this section, we propose to estimate the matrix MSE approximately by introducing a reference separation system $\mathbf{W}_r(t)$, which follows the same optimization criterion and updating principle as $\mathbf{W}(t)$, based on the natural gradient algorithm (NGA), except for its initialization. Hence, we obtain
$$\mathbf{y}_r(t) = \mathbf{W}_r(t)\,\mathbf{x}(t),$$
where $\mathbf{y}_r(t)$ represents the reference signal. The correlation between $\mathbf{y}(t)$ from the primary separation system and $\mathbf{y}_r(t)$ from the reference system should increase as the iteration goes on, regardless of the ambiguities. Therefore, at every iteration, we replace the mean-square-error in (6) by
$$\hat{\xi}_{ij} = \frac{1}{N}\sum_{t=1}^{N}\bigl[\bar{y}_{r,i}(t) - \bar{y}_j(t)\bigr]^2,$$
where $\bar{y}_i(t) = \left|y_i(t)/\lVert y_i\rVert\right|$, $\lVert\cdot\rVert$ denotes the root-mean-square value of the output vectors, and the operator $|\cdot|$ takes the absolute value of the normalized vector. In this way, the scaling ambiguity can be removed.
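The idea can be illustrated with a toy sketch (hypothetical, not the authors' implementation): two NGA separation chains with different initializations run on the same mixtures, and the MSE between their rectified, RMS-normalized outputs serves as a proxy for how far the separation has progressed.

```python
import numpy as np

def nga(W, x, mu, phi=lambda y: y ** 3):
    """One NGA update; returns the new matrix and the current output."""
    y = W @ x
    return W + mu * (np.eye(len(y)) - np.outer(phi(y), y)) @ W, y

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))
W, Wr = np.eye(2), 0.5 * np.eye(2)      # primary and reference systems
batch = []
for t in range(3000):
    x = A @ rng.uniform(-1, 1, 2)       # shared observation stream
    W, y = nga(W, x, 0.01)
    Wr, yr = nga(Wr, x, 0.01)
    batch.append((y, yr))

# Proxy MSE between normalized outputs of the two systems over the
# last block of samples: it shrinks as both systems converge toward
# the same sources (up to permutation and scaling).
Y = np.array([b[0] for b in batch[-500:]]).T
Yr = np.array([b[1] for b in batch[-500:]]).T
norm = lambda Z: np.abs(Z / np.sqrt(np.mean(Z ** 2, axis=1, keepdims=True)))
xi = np.mean((norm(Yr)[:, None, :] - norm(Y)[None, :, :]) ** 2, axis=2)
print(xi.min(axis=1))                   # small entries indicate agreement
```

Taking the row-wise minimum makes the proxy insensitive to the two systems converging to different output permutations.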
Online procedures use a given sample at every update [6], whereas an appropriate evaluation of the mean-square-error requires a number of samples, as (9) indicates. Therefore, we update the separating matrix once over a "mini-batch," that is, a small block of signal samples, while the observation window slides [15, 16]. Hence, the online updating equation of the separating matrix becomes
$$\mathbf{W}(k+1) = \mathbf{W}(k) + \mu(k)\left[\mathbf{I} - \bigl\langle\boldsymbol{\varphi}(\mathbf{y}(t))\,\mathbf{y}^T(t)\bigr\rangle_k\right]\mathbf{W}(k),$$
where $k$ is the iteration number index (or the mini-batch index), $\langle\cdot\rangle_k$ denotes averaging over the $k$th mini-batch, and the step-size parameter $\mu(k)$ is updated by a nonlinear function of the correlation measure, a widely used rule in adaptive filtering algorithms [17]. The primary separation system follows the same updating rule. The parameters $\alpha$ and $\beta$ are two positive constants that control the shape of the function curve and the initial step-size, respectively. The effects of these two parameters on the performance of the algorithm are investigated in the next section. We define the correlation function as
$$G(k) = \sum_{l=0}^{L-1}\gamma^{\,l}\,\hat{\xi}(k-l),$$
where $\hat{\xi}(k)$ denotes the mean-square-error of the $k$th mini-batch, $\gamma$ ($0 < \gamma < 1$) represents a weighting factor, and $L$ is the number of weighting terms. We thus introduce an exponential weighting of past data, which is appropriate especially when the channel characteristics are time-variant [18, 19].
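The exact nonlinear update is specified by the paper's equations; purely as a hypothetical illustration, assuming the saturating-exponential shape common in adaptive filtering, with $\beta$ bounding the initial step-size and $\alpha$ controlling the curve (matching the roles described above), the schedule could be sketched as:

```python
import math

def step_size(mse_history, alpha=1e4, beta=0.06, gamma=0.9, L=5):
    """Hypothetical step-size rule: mu = beta * (1 - exp(-alpha * G)),
    where G exponentially weights the last L mini-batch MSE values.

    With a large alpha, mu stays near beta while the MSE is large and
    only decays once the reference and primary outputs agree closely.
    """
    recent = mse_history[-L:][::-1]                  # newest first
    G = sum(gamma ** l * xi for l, xi in enumerate(recent))
    return beta * (1.0 - math.exp(-alpha * G))

# Early on (large MSE) the step-size sits at its initial value beta;
# near convergence (MSE -> 0) it decays toward zero.
print(round(step_size([0.5]), 4))    # ~0.06 at the start
print(round(step_size([1e-6]), 4))   # small near convergence
```

Both the functional form and the scalarization of the MSE matrix into a single $\hat{\xi}(k)$ are assumptions of this sketch, chosen only to reproduce the qualitative behavior reported in Section 4 (an initially constant step-size that decays roughly exponentially).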
The separation procedure using (9)–(13) constitutes the proposed variable step-size algorithm. Figure 2 shows the scheme of the proposed algorithm.
Regarding the computational load, the proposed algorithm requires more products per iteration than a sample-by-sample update, but the total number of iterations is reduced in proportion to the mini-batch size. The number of products calculated over a whole separation procedure is therefore of the same order as that of the other algorithms in [5, 6, 10, 13]. If the sample size of the mini-batches is large enough, the operation count becomes much smaller than that of the others. Considering the tracking performance, especially in a nonstationary environment, the sample size of the mini-batches should nevertheless be selected moderately.
4. Simulation Results and Discussion
Here, several sets of simulation results are provided to demonstrate the performance of the proposed algorithm. In summary, comparisons among fixed step-size algorithms, classical variable step-size algorithms, and the proposed algorithm have been carried out in both stationary and nonstationary environments.
Experiment 1.
Comparison between the proposed algorithm and fixed step-size algorithms.
In this experiment, we consider the separation of three zero-mean sub-Gaussian sources in a stationary environment, one of which is a random signal distributed uniformly. The mixing matrix $\mathbf{A}$ is randomly generated from the normal distribution with mean 0 and standard deviation 1, and three receivers are used ($m = n = 3$). The sampling period is set to 0.0001 s.
To evaluate the performance of the BSS algorithms, we use the cross-talking error as the performance index [5, 20–22]:
$$\mathrm{PI} = \sum_{i=1}^{n}\left(\sum_{j=1}^{n}\frac{|c_{ij}|}{\max_{k}|c_{ik}|} - 1\right) + \sum_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{|c_{ij}|}{\max_{k}|c_{kj}|} - 1\right),$$
where the matrix $\mathbf{C} = \mathbf{W}\mathbf{A}$, with elements $c_{ij}$, is the combined mixing-separating matrix. As $\mathbf{W}$ converges to $\mathbf{P}\mathbf{D}\mathbf{A}^{-1}$, the combined mixing-separating matrix will converge to $\mathbf{P}\mathbf{D}$, a generalized permutation matrix, and PI will converge to zero.
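This cross-talking error transcribes directly into code; the following sketch implements the standard definition:

```python
import numpy as np

def crosstalk_error(C):
    """Cross-talking performance index of the combined matrix C = W @ A.

    Zero iff C is a generalized permutation matrix (exactly one
    nonzero entry per row and column), i.e. perfect separation up
    to permutation and scaling.
    """
    C = np.abs(C)
    row = (C / C.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col = (C / C.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return row.sum() + col.sum()

# A generalized permutation matrix gives PI = 0 despite arbitrary
# scaling and sign of its nonzero entries.
P = np.array([[0.0, 2.0,  0.0],
              [0.5, 0.0,  0.0],
              [0.0, 0.0, -3.0]])
print(crosstalk_error(P))  # 0.0
```

Any residual cross-talk (off-pattern entries in `C`) makes the row and column sums exceed one, so the index grows with the leakage between recovered sources.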
In the algorithms, the cubic activation function is applied, since the sources are all sub-Gaussian. Fixed step-sizes are taken in the natural gradient algorithm [6] and the optimized EASI algorithm [20], the latter set to 0.01. Considering the balance between tracking performance and evaluation accuracy of the mean-square-error matrix MSE, the sample size of the mini-batches is kept fixed in all the experiments. The effects of the crucial parameters $\alpha$ and $\beta$ on the performance of the proposed algorithm are investigated in Figure 3. Note that larger values of $\beta$ and $\alpha$, respectively, lead to a faster initial learning rate and better convergence performance, so the results provide a reference for choosing appropriate parameters. Hence, we set the parameters $\beta$ and $\alpha$ to 0.06 and 10^{4}, respectively.
Besides, if the sources include both sub-Gaussian and super-Gaussian signals, the activation functions should not all be the same increasing odd function. The activation functions might be initialized by polynomials or kernel functions with adjustable parameters, so that the optimal activation function vector can be estimated adaptively along with the iteration [23]. Further investigation of the activation functions is, however, beyond the scope of this paper.
Figure 4 plots the average PI value obtained from the simulations of the three adaptive algorithms over 500 Monte Carlo trials. From the plots, we can see that the proposed algorithm provides the fastest convergence speed, while achieving a lower steady-state error than both the NGA and optimized EASI approaches. The step-size $\mu(k)$, whose evolution is demonstrated in Figure 5, decreases roughly exponentially with the iterations. We observe that the step-size stays at a constant 0.06 during roughly the first 100 iterations. This is attributed to the choice of the parameter $\alpha$, which allows a high initial learning rate yet sensitive detection of the separation state. As a result, the separating performance is more robust to the setting of the initial learning rate.
Experiment 2.
Comparison between the proposed algorithm and variable step-size algorithms.
In this experiment, we first define the function in (11). To allow a fair comparison, the same function as in [10] is used for the proposed algorithm, in which $\mathrm{diag}(\cdot)$ and $\mathrm{off}(\cdot)$ denote the operations of taking the diagonal elements and the off-diagonal elements of a matrix, respectively. Two zero-mean sub-Gaussian sources are mixed by a $2 \times 2$ mixing matrix $\mathbf{A}$.
Zero-mean independent white Gaussian noise is added to the mixtures with the signal-to-noise ratio equal to 20 dB. The parameters, including the initial step-size in the LMS-type algorithms, are manually tuned so that each algorithm has nearly the same steady-state performance. The initial step-size for the classical variable step-size algorithms, that is, VS-NGA and VS-SNGA in [10], is set to 0.004, and 500 Monte Carlo trials are run for averaged performance. The parameters of Ou's method in [13] and of the proposed algorithm are tuned in the same manner. This parametric setting implies that the proposed algorithm can achieve a higher learning rate while maintaining an appropriate steady-state performance. The average PI values resulting from the approaches are compared in Figure 6. The proposed algorithm requires only approximately 500 samples for convergence, whereas the other three algorithms need at least 600 samples. Clearly, the performance of the proposed algorithm is considerably improved over the classical variable step-size algorithms in the noisy case.
Figure 7 plots the average PI value of the three approaches in a nonstationary environment. The mixing matrix used to simulate the time-varying environment is chosen as
$$\mathbf{A}(k+1) = \mathbf{A}(k) + \sigma\,\mathbf{N}(k),$$
where $\mathbf{N}(k)$ is a matrix of independent standard normal entries generated by the MATLAB built-in function randn [24], $\sigma$ is a small constant, and the initial $\mathbf{A}(0)$ is set to a null matrix. The initial parameters of the classical variable step-size algorithms are the same as in the noisy-case experiment, and the parameters of the proposed algorithm are reset accordingly. Likewise, results are obtained over 500 Monte Carlo runs. From this figure, it is observed that the proposed algorithm converges faster than the VS-NGA and VS-SNGA algorithms in the nonstationary environment.
Finally, we checked the computational time and separation performance of the different separation methods in the noisy case. The FastICA algorithm, a classical batch-based method, is also included for comparison. The sources, mixing matrix, and initial parameters are set as in Experiment 2. The data length is set to 10000 samples, which is enough to achieve convergence. The iteration number of FastICA was set to 100. The results are provided in Table 1. It can be seen that FastICA generally has better separation performance under high SNR (signal-to-noise ratio). However, it incurs a large computational cost, since it is a batch-based algorithm and will not work until a large number of data samples are received. In contrast, though the proposed algorithm performs slightly worse than FastICA, it behaves better when the noise power is increased (SNR = 0 dB, 5 dB, 10 dB). As a novel adaptive online algorithm, the proposed algorithm has a particular advantage due to its computational simplicity and latent ability in tracking noisy and nonstationary environments. Table 1 also shows that the proposed algorithm performs even better than the optimized EASI, NGA, and VS-NGA algorithms in terms of average PI. Similar separation performance with lower computational load is also obtained compared to VS-SNGA and Ou's method. This supports the complexity analysis in Section 3; that is, the utilization of mini-batches reduces the computational cost.

5. Conclusion
In this paper, we propose a new variable step-size algorithm for blind source separation. A reference separation system is utilized to acquire the mean-square-error matrix, which is treated as the metric for updating the step-size. For performance comparison, fixed step-size algorithms, classical variable step-size algorithms, and the proposed algorithm have been evaluated in both stationary and nonstationary environments. The performance of the above-mentioned approaches is analyzed and compared in terms of the cross-talking error. It is revealed that the proposed scheme has an improved learning rate and stability performance over the fixed step-size algorithms and converges faster than the classical variable step-size algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank the anonymous referees for constructive comments, which were valuable in improving the paper. This work was supported by the National Natural Science Foundation of China under Grant 61172061 and the Natural Science Foundation of Jiangsu Province of China under Grant BK2011117. This work was also supported by the National Natural Science Foundation of China under Grants 61201242 and 60772083.
References
[1] P. Comon and C. Jutten, Handbook of Blind Source Separation: Independent Component Analysis and Applications, chapter 1, Elsevier Press, New York, NY, USA, 2010.
[2] G. S. Fu, R. Phlypo, M. Anderson, X. L. Li, and T. Adali, "Blind source separation by entropy rate minimization," IEEE Transactions on Signal Processing, vol. 62, no. 16, pp. 4245–4255, 2014.
[3] B. Loesch and B. Yang, "Cramér-Rao bound for circular and noncircular complex independent component analysis," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 365–379, 2013.
[4] J.-T. Chien and H.-L. Hsieh, "Nonstationary source separation using sequential and variational Bayesian learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 5, pp. 681–694, 2013.
[5] J.-F. Cardoso and B. H. Laheld, "Equivariant adaptive source separation," IEEE Transactions on Signal Processing, vol. 44, no. 12, pp. 3017–3030, 1996.
[6] S.-I. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, no. 2, pp. 251–276, 1998.
[7] H. H. Yang, "Serial updating rule for blind separation derived from the method of scoring," IEEE Transactions on Signal Processing, vol. 47, no. 8, pp. 2279–2285, 1999.
[8] S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," in Proceedings of the Advances in Neural Information Processing Systems (NIPS '96), vol. 8, pp. 757–763, 1996.
[9] S. C. Douglas and A. Cichocki, "Adaptive step size techniques for decorrelation and blind source separation," in Proceedings of the 32nd Asilomar Conference on Signals, Systems & Computers, vol. 2, pp. 1191–1195, November 1998.
[10] L. Yuan, W. Wang, and J. A. Chambers, "Variable step-size sign natural gradient algorithm for sequential blind source separation," IEEE Signal Processing Letters, vol. 12, no. 8, pp. 589–592, 2005.
[11] X. D. Zhang, X. L. Zhu, and Z. Bao, "Grading learning for blind source separation," Science in China E, vol. 32, no. 5, pp. 693–703, 2002.
[12] S.-T. Hsieh, T.-Y. Sun, C.-L. Lin, and C.-C. Liu, "Effective learning rate adjustment of blind source separation based on an improved particle swarm optimizer," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 242–251, 2008.
[13] S. F. Ou, X. H. Zhao, and Y. Gao, "Variable step-size blind source separation algorithm with an auxiliary separation system," Acta Electronica Sinica, vol. 37, no. 7, pp. 1588–1593, 2009 (Chinese).
[14] H. Nakajima, K. Nakadai, Y. Hasegawa, and H. Tsujino, "Blind source separation with parameter-free adaptive step-size method for robot audition," IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 6, pp. 1476–1485, 2010.
[15] F. Nesta, T. S. Wada, and B.-H. Juang, "Batch-online semiblind source separation applied to multichannel acoustic echo cancellation," IEEE Transactions on Audio, Speech and Language Processing, vol. 19, no. 3, pp. 583–599, 2010.
[16] F. Nesta, T. S. Wada, S. Miyabe, and B.-H. Juang, "On the nonuniqueness problem and the semiblind source separation," in Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA '09), pp. 101–104, New Paltz, NY, USA, October 2009.
[17] M. Z. Bhotto and A. Antoniou, "Robust set-membership affine-projection adaptive-filtering algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 73–81, 2012.
[18] X.-L. Zhu and X.-D. Zhang, "Adaptive RLS algorithm for blind source separation using a natural gradient," IEEE Signal Processing Letters, vol. 9, no. 12, pp. 432–435, 2002.
[19] Y. X. Chen, L. N. Tho, B. Champagne, and C. J. Xu, "Recursive least squares constant modulus algorithm for blind adaptive array," IEEE Transactions on Signal Processing, vol. 52, no. 5, pp. 1452–1456, 2004.
[20] J. M. Ye, H. H. Jin, S. T. Lou, and K. J. You, "An optimized EASI algorithm," Signal Processing, vol. 89, no. 3, pp. 333–338, 2009.
[21] T. Adali, M. Anderson, and G. S. Fu, "Diversity in independent component and vector analyses: identifiability, algorithms, and applications in medical imaging," IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 18–33, 2014.
[22] P. W. Chen, H. Hung, O. Komori, S.-Y. Huang, and S. Eguchi, "Robust independent component analysis via minimum γ-divergence estimation," IEEE Journal on Selected Topics in Signal Processing, vol. 7, no. 4, pp. 614–624, 2013.
[23] Y. Xue, F. Ju, Y. Wang, and J. Yang, "A source adaptive independent component analysis algorithm through solving the estimating equation," Expert Systems with Applications, vol. 36, no. 10, pp. 12306–12313, 2009.
[24] G. Marsaglia and W. W. Tsang, "The Ziggurat method for generating random variables," Journal of Statistical Software, vol. 5, no. 8, 2000, http://www.jstatsoft.org/v05/i08/.
Copyright
Copyright © 2015 Pengcheng Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.