The learning rate plays an important role in blind source separation (BSS), where a set of mixed signals is separated through the training of an unmixing matrix to recover an approximation of the source signals. To improve the speed and accuracy of the algorithm, a sampling adaptive learning algorithm is proposed, which calculates the adaptive learning rate only at sampled points. The connection between the sampled optimal points is described through a smoothing equation. Simulation results show that the proposed algorithm achieves a Mean Square Error (MSE) similar to that of the adaptive learning algorithm while being less time consuming.

1. Introduction

With the fast development of information and computation technologies, big data analysis and cognitive computing have been widely used in many research areas such as medical treatment [1], transportation [2], and wireless communication [3, 4]. Blind Source Separation (BSS) is a popular research topic in wireless communication, and with the rapid growth of mobile computing it has been widely applied to mobile signal analysis. BSS integrates artificial neural networks, statistical signal processing, and information theory. Its core is the ability to extract independent components from an observed mixture signal without requiring prior knowledge. Such flexibility has made BSS popular in many applications [5–7], especially in mobile intelligence [8, 9].

Artificial-neural-network-based Independent Component Analysis (ICA) is a widely used method in BSS because it provides powerful tools to capture the structure in data by learning. Based on this theory, the Natural Gradient Algorithm (NGA) is employed to find the appropriate coefficient vector of the neural network [10], and the Nonholonomic Natural Gradient Algorithm (NNGA) has been applied to BSS [11, 12]. In these applications, the learning rate used for training the coefficient vector plays an important role in the performance of the algorithm, affecting not only the number of updates but also the speed of convergence. This has attracted many researchers' attention to learning algorithms [13, 14].

Most well-known traditional learning algorithms assume that the learning rate is a small positive constant. An inappropriate constant leads to relatively slow convergence or a large steady-state error. Many studies on the learning rate therefore aim at better performance and higher convergence speed. von Hoff and Lindgren [15] developed an adaptive step-size control algorithm for gradient-based BSS, using the coefficients of the estimating function to provide an appropriate "measure of error" that serves as the basis for a self-adjusting, time-varying step size. Hai proposed a conjugate gradient procedure for second-order gradient-based BSS [16], formulating it as a constrained optimization problem and deriving the step size optimally at each iteration. In these algorithms, the step size is updated in each iteration, its value adjusted according to the time-varying dynamics of the signals. These approaches lead to better performance. The real-time search for the step size, however, requires more online calculation as well as recursion, which increases the computational complexity. Moreover, the recursion for the optimal step size still leaves a constant to be estimated, which leads to an endless loop.

The objective of this paper is to find an appropriate learning algorithm that provides better performance with less computation time. The proposed sampling learning rate is based on the adaptive learning algorithm but calculates and samples only a few appropriate points, which are then connected by the proposed normalized smoothing equation.

In the following, we first review the principle of blind source separation. We then discuss the adaptive learning algorithm and propose the sampling adaptive learning algorithm. Finally, we present two typical mobile voice-signal examples. Different constant learning rates are first compared to analyze the relationship between convergence speed and steady-state error; then the adaptive learning algorithm and the sampling adaptive learning algorithm are compared, illustrating that the proposed algorithm has an MSE similar to that of the adaptive learning algorithm but consumes less computation time.

2. Blind Signal Separation

The model considered in this paper is described by Figure 1. A set of individual source signals s(t) is mixed with a matrix A to produce a set of mixed signals x(t):

x(t) = A s(t), (1)

where A is an unknown mixing matrix independent of time, s(t) is the vector of source signals, and x(t) is the vector containing the observed signals.

BSS separates the mixed signals through the determination of an unmixing matrix W, which recovers an approximation of the original signals. The recovered output signal is described by

y(t) = W x(t), (2)

where W is the matrix to be adjusted such that y(t) ≈ s(t). With the NGA, the matrix W is updated from W(k) at time k to W(k+1) by using the following adaptation rule:

W(k+1) = W(k) + η(k)[I − φ(y(t)) yᵀ(t)] W(k), (3)

where η(k) is the learning rate and φ(·) is the score function defined by

φᵢ(yᵢ) = −pᵢ′(yᵢ)/pᵢ(yᵢ), (4)

where pᵢ is the probability density function of the i-th source signal. It is assumed that the source signals are zero mean. Hence, we have

E[s(t)] = 0, (5)

where E[·] denotes the expectation.
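As a concrete illustration, one NGA update can be sketched in NumPy as below; the function name and the tanh score function are illustrative assumptions, not the paper's code:

```python
import numpy as np

def nga_step(W, x, eta, phi=np.tanh):
    """One natural-gradient update W <- W + eta * (I - phi(y) y^T) W.

    W   : current (n x n) unmixing matrix
    x   : one observed mixture sample, shape (n,)
    eta : learning rate
    phi : score function (tanh is a common super-Gaussian choice)
    """
    y = W @ x                                        # current source estimate
    n = W.shape[0]
    grad = (np.eye(n) - np.outer(phi(y), y)) @ W     # natural-gradient direction
    return W + eta * grad
```

In use, `nga_step` would be called once per observed sample, with `eta` supplied by whichever learning-rate schedule (constant, adaptive, or sampled) is being studied.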

According to [17], φ(y) can be set to y³ when y is a sub-Gaussian signal and to tanh(y) when y is a super-Gaussian signal. Most mobile voice signals are super-Gaussian and most mobile image signals are sub-Gaussian. The function φ(·) is accordingly selected based on the mobile signal type.
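This selection rule can be sketched as follows; the forms y**3 and tanh(y) are the common ICA choices assumed above:

```python
import numpy as np

def score_function(y, kind):
    """Select phi(.) by signal type: 'sub' for sub-Gaussian sources
    (e.g. many image signals), 'super' for super-Gaussian sources
    (e.g. voice signals)."""
    if kind == "sub":
        return y ** 3        # cubic nonlinearity for sub-Gaussian signals
    return np.tanh(y)        # tanh for super-Gaussian signals
```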

The learning rate is a very important factor in the performance of BSS, as it controls the magnitudes of the updates of the estimated parameters. It can be constant or variable. A constant learning rate means that the adaptation in (3) uses a fixed step-size parameter. Since both the convergence speed and the steady-state error grow with the step size, a large step size yields fast convergence but a large steady-state error, whereas what we want is fast convergence and a small steady-state error. The main problem of the constant learning rate is thus the incompatibility between convergence speed and steady-state error [18, 19]. Therefore, many researchers have aimed at adaptiveness using variable step-size approaches. We discuss the adaptive learning algorithm and propose a sampling adaptive learning algorithm in the following section.

3. Sampling Adaptive Learning Algorithm

3.1. Adaptive Learning Algorithm for BSS

The idea of adaptively changing the step size of the learning rate is called the adaptive learning algorithm. It can balance the convergence speed and the steady-state error. Here we discuss the adaptive learning algorithm which updates the learning-rate step size through the estimate function. Although the distance between the estimated parameter and its optimal value is not directly available to control the step size, an evaluation function can be developed to estimate this distance so that the step size is determined by the following recursion:

η(k+1) = η(k) + α η(k)[E(k) − η(k)], (6)

where α is a constant, F(k) is an estimate function, and E(k) is the evaluation function. In this adaptive learning algorithm, the estimate function is set to

F(k) = [I − φ(y(k)) yᵀ(k)] W(k). (7)

A smoothed version of the estimate function, denoted by F̄(k), can be set to

F̄(k) = (1 − δ) F̄(k−1) + δ F(k), (8)

with smoothing factor δ, and the evaluation function is set to

E(k) = ‖F̄(k)‖. (9)

This idea is similar to the Least Mean Square (LMS) adaptive filter and Reinforcement Learning (RL). It can be concluded from (6)–(9) that the step size depends on the evaluation of the estimate function. The principle of this adaptive algorithm implies that the step size is small when the errors are small and large when the errors are large. For time-invariant systems, the step size systematically decreases as the learning rate approaches its optimum.
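A minimal sketch of this adaptive recursion is given below; the constants `alpha` and `delta` are placeholder values assumed for illustration, since the paper's exact constants are not stated here:

```python
import numpy as np

def adaptive_eta(eta, F_bar, F, alpha=0.02, delta=0.1):
    """One step of the adaptive learning-rate recursion (a sketch).

    eta    : current learning rate
    F      : current estimate function value, e.g. (I - phi(y) y^T) W
    F_bar  : running (smoothed) average of F
    alpha  : adaptation constant (assumed value)
    delta  : smoothing factor (assumed value)
    """
    F_bar = (1.0 - delta) * F_bar + delta * F    # smoothed estimate function
    E = np.linalg.norm(F_bar)                    # evaluation: norm of the average
    eta = eta + alpha * eta * (E - eta)          # step-size recursion
    return eta, F_bar
```

The step size grows when the evaluation exceeds the current rate and shrinks as the errors die out, matching the qualitative behavior described above.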

However, we must notice that, in the process of obtaining a better learning rate, the adaptive step-size control algorithm actually introduces another recursion. In the recursion shown in (6), there is still an unknown constant to be determined. If we want to find the optimal value of this constant, we need yet another recursion like (6), which leads to endless iteration.

In addition, the recursion described in (6) introduces another computation cycle, which adds extra computation. In some cases, calculation with an adaptive learning rate even consumes more time than calculation with a constant learning rate. Figure 2 shows the ideal adaptive step-size curve for a noise-free signal; every point on the curve is calculated with (6)–(9). We simulated BSS based on the adaptive step size and compared it with a well-chosen constant step size. Table 1 shows the performance indices of the two methods and illustrates that the convergence speed is considerably higher when the adaptive step-size algorithm (ALA) is employed. The steady-state errors of the two algorithms are similar when randomness is taken into account. As for the computation time, ALA obviously consumes more. We can conclude that although the adaptive algorithm balances convergence speed and steady-state error, it consumes more computation time, as shown in Table 1.

3.2. Adaptive Learning Algorithm Based on Sampling

An alternative way to use the adaptive strategy is to apply the adaptive step size only at sampled instants. In this method, only a few points need the adaptive calculation; for training the matrix, it is not necessary to recompute the step size at every iteration. As shown in Figure 3, several sampling points are enough to keep the step size updated. Let T be the sampling interval; the time variable then becomes

kₛ = round(k/T) · T, (10)

where round(·) means rounding to the nearest integer.

Then the learning rate can be represented as

η(k) = η(kₛ), (11)

and ((7)–(9)) are therefore evaluated only at the sampling instants. The new equations reduce the number of iterations by the factor of the sampling interval T, which can be chosen according to the required accuracy and speed. We also provide the connection for the sampled optimal points, which smooths the curve between two optimal points. Based on this analysis, the learning rate between two optimal points can be expressed as a smooth transition, given in (13), between the sampled values η(k₁) and η(k₂), where k₁ and k₂ are two sampling points obtained at successive times with sampling interval T. The condition that η(k) reaches η(k₂) can be added to determine where to switch the step size. In (13), the time index k is the only variable that keeps changing. This algorithm does not introduce another recursion into the system but still retains the optimal value choice.
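The sampling-and-smoothing idea can be sketched as follows. Since the paper's normalized smoothing equation is not reproduced above, linear interpolation between the two sampled values is used here as a stand-in assumption:

```python
def nearest_sample(k, T):
    """Map iteration index k to the nearest sampling instant k_s = round(k/T)*T."""
    return int(round(k / T)) * T

def smooth_eta(k, k1, k2, eta1, eta2):
    """Learning rate between two sampled optimal points k1 < k2.

    Linear interpolation is an illustrative stand-in for the paper's
    smoothing equation; only k varies between the two sampled points.
    """
    t = (k - k1) / float(k2 - k1)
    return (1.0 - t) * eta1 + t * eta2
```

Between sampling instants the rate is read off this smooth connection instead of re-running the adaptive recursion, which is where the computation saving comes from.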

For the convenience of application, the normalized form of (13) is given in (14), in which the constant is set to 0.053, and k₁ and k₂ are the first and second sampling points, respectively.

4. Experiment and Result

4.1. Case 1

To test the algorithm, five sub-Gaussian source signals commonly studied in mobile systems are employed; these source signals are shown in Figure 5. The mixing matrix was assigned entries drawn from independent zero-mean white Gaussian noise. The sources were mixed to produce the mixtures shown in Figure 6. The mixtures were then separated using a fixed learning rate, a variable learning rate, and a sampled learning rate, named the Constant Learning Algorithm (CLA), the Adaptive Learning Algorithm (ALA), and the Sampling Adaptive Learning Algorithm (SALA), respectively.
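The mixing setup of Case 1 can be sketched as below; the waveforms are placeholders, since the paper's exact five source signals are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 1000                      # five sources, as in Case 1; N samples
t = np.arange(N) / N

# Placeholder deterministic square-wave-like signals standing in for the
# paper's five sub-Gaussian sources.
S = np.vstack([np.sign(np.sin(2 * np.pi * 3 * (i + 1) * t) + 1e-9)
               for i in range(n)])

# Mixing matrix with independent zero-mean white Gaussian entries.
A = rng.normal(0.0, 1.0, size=(n, n))
X = A @ S                           # observed mixtures, shape (5, 1000)
```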

First, we employ four fixed learning rates to check the performance. Figure 4 depicts the Mean Square Error (MSE) for these four fixed learning rates, where the MSE [20] is defined as the mean of the squared error between the recovered signals and the source signals. As can be seen from Figure 4, the MSE varies with the fixed learning rate: a larger learning rate gives faster convergence at the beginning, while a smaller learning rate leads to better stability at the end.
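The MSE used above can be computed as in this sketch (the scaling and permutation ambiguities inherent to BSS are ignored for simplicity):

```python
import numpy as np

def mse(y, s):
    """Mean square error between recovered signals y and source signals s."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    return float(np.mean((y - s) ** 2))
```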

The Adaptive Learning Algorithm (ALA) can balance the requirements of convergence speed and steady-state error, but it requires more computation time per iteration. To address this, we use the Sampling Adaptive Learning Algorithm (SALA), applying the sampling interval within the adaptive learning algorithm. Figure 7 shows the output signals of SALA. To compare SALA with ALA, (6) and (11) are adopted, respectively, both with the same initial learning rate and negative constant. Figure 8 depicts the MSE of ALA and SALA.

Figures 7 and 8 show that the MSEs of ALA and SALA are similar. The high MSE before the 500th iteration in Figure 8 corresponds to the inaccurate estimated curves in Figure 7; as the steady-state error becomes small, the estimated output curves become accurate. The results were evaluated using the performance indices listed in Table 2. The convergence time and mean steady-state error of ALA and SALA are at the same level when random factors are considered, but the computation time of SALA is clearly less than that of ALA. We conclude that the proposed SALA has an advantage over ALA in computation time.

4.2. Case 2

To further verify the effectiveness of the proposed SALA, two music sources recorded in a real environment are tested in simulation. These sources were mixed by a full-rank random Gaussian mixing matrix, and the mixtures were separated using SALA with the sampled variable step size. Figure 9 confirms estimation as accurate and fast as that observed in Case 1.

5. Conclusions

Based on the discussion of fixed and adaptive step-size algorithms for the blind separation of sources, a sampling adaptive step-size algorithm has been proposed. The algorithm achieves an MSE similar to that of the adaptive step-size algorithm but with less computation time. By smoothly connecting two optimal points, the sampling method also yields a smooth step-size curve and does not introduce further recursion.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work is supported by the Natural Science Foundation of China under Grant 11702016.