Abstract

A kurtosis optimization method based on the extend-infomax algorithm is proposed to improve the quality of blind-separated signals. The kurtosis of the hypothetical source signal was optimized based on the probability density function of sub-Gaussian signals. The parameters obtained after kurtosis optimization were then used to validate the effectiveness of the algorithm: the running time of the algorithm was significantly reduced, and the quality of the separated signals was enhanced. Methods. Using kurtosis as a control variable, a one-way analysis of variance (ANOVA) was carried out on the algorithm's performance metrics: the number of iterations and the signal-to-noise ratio of the separated signals. Results. There were significant differences in these metrics at different kurtosis levels. The curves of the average metric values indicate that the performance of the algorithm improves as the kurtosis of the hypothetical source signal increases.

1. Introduction

Blind signal separation (BSS) refers to the estimation of source signals from observed signals when both the source signals and the mixing system are unknown. BSS was first proposed as a digital signal processing technique in the 1990s and has since become a popular research topic in the field of signal processing [1–4]. BSS has been applied in various areas, such as mechanical fault detection [5, 6], audio signal processing [7], image processing [8], biomedical engineering [9], and radar signal detection [10].

In a linear instantaneous mixture BSS model, $n$ statistically independent source signals $\mathbf{s}(t) = [s_1(t), \ldots, s_n(t)]^T$ are processed by an unknown aliasing matrix, and $n$ observation signals are obtained:

$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t),$$

where $\mathbf{A}$ is an $n \times n$ nonsingular constant matrix. The task of a BSS algorithm is to recover the source signals from the mixtures $\mathbf{x}(t)$. The separation models of BSS algorithms can be divided into two categories: batch approaches and extraction approaches.
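To make the mixing model concrete, the following sketch generates two illustrative sub-Gaussian sources and mixes them with a hypothetical 2 × 2 nonsingular matrix; the specific sources and matrix are assumptions for illustration, not the ones used in this paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative sub-Gaussian sources; the paper's actual sources differ.
t = np.linspace(0, 1, 10000)
s1 = np.sign(np.sin(2 * np.pi * 7 * t))   # square wave (sub-Gaussian)
s2 = rng.uniform(-1, 1, t.size)           # uniform noise (sub-Gaussian)
S = np.vstack([s1, s2])                   # source matrix s(t), shape (n, T)

A = np.array([[0.8, 0.6],                 # hypothetical nonsingular
              [0.3, 0.9]])                # 2 x 2 aliasing matrix
X = A @ S                                 # observed mixtures x(t) = A s(t)
```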

The goal of BSS is to recover the source signals from the observed signals alone, without knowledge of either the source signals or the aliasing matrix $\mathbf{A}$. It has been applied in various scenarios. For example, in the "cocktail party" problem, BSS technology was used to separate speech and music signals from a mixed background signal [11]. In electroencephalogram (EEG) processing, BSS technology was used to automatically remove eye-movement and blink artifacts in order to extract the characteristics of neural signals [12, 13]. In addition, BSS was used to extract the components of mechanical vibration signals in mechanical fault detection, thereby increasing the accuracy of fault detection [14].

Because of the popularity of BSS and its wide range of applications, considerable research has been devoted to the problem. Scarpiniti developed an effective blind source separation method based on the Adam algorithm [15]. The method builds on a stochastic optimization approach known as Adaptive Moment Estimation (Adam) [16], whose favorable properties carry over to the BSS solution. One of the most effective algorithms is the InfoMax algorithm proposed by Bell and Sejnowski [11]. The algorithm is based on maximizing the joint entropy of the output of a single neural network. The efficiency of this method is guaranteed by the fact that the gradient of the joint entropy can be evaluated in a simple closed form. The extend-infomax algorithm was proposed by Girolami et al. [17] and Lee et al. [18]; the extended version has been shown to easily separate 20 sources with a variety of source distributions.

However, little research has been done on further improving the performance of the extend-infomax algorithm. This paper adopts a kurtosis optimization approach to improve BSS performance. Research on BSS is of significant practical and application value. An overview of the main methods applied in the field of BSS is given in Section 2.1.

The major contributions of this paper are as follows:
(1) For blind signal separation, independent component analysis (ICA) is used.
(2) Different algorithms have been discussed in the literature. The infomax principle is an optimization principle for artificial neural networks and other information processing systems; infomax algorithms are learning algorithms that perform this optimization.
(3) The extend-infomax algorithm is used. One of its objectives is to provide a simple learning rule that can separate sources with a variety of distributions.
(4) The influence of kurtosis optimization on the performance of the algorithm is analyzed.
(5) One-way ANOVA is carried out on the performance metrics of the algorithm at different kurtosis levels to derive source signals with improved quality.

The outline of this paper is given as follows.

Section 2, the Methods section, discusses the independent component analysis (ICA) method used to separate the blind signals, describes the infomax algorithm, a neural network method, and then discusses the extend-infomax algorithm, which provides a simple learning rule that can separate sources with a variety of distributions.

Section 3 addresses the separation of mixtures of sub-Gaussian source signals in the extend-infomax algorithm: the parameter setting of the algorithm is explained from the perspective of kurtosis optimization of the hypothetical source signals, and a way to improve the algorithm is given.

Section 4, the Analysis section, presents one-way ANOVA analyses of the running time, the number of iterations, SN1, and SN2.

2. Methods

In this section, we describe independent component analysis (ICA), a technique used to separate blind signals. We also describe the infomax algorithm, a neural network method, and then discuss the extend-infomax algorithm, which provides a simple learning rule that can separate sources with a variety of distributions.

2.1. Independent Component Analysis (ICA)

ICA is currently the primary technology used for blind signal separation. It is generally assumed that the source signals are statistically independent, with the degree of independence measured by a cost function. Using optimization algorithms, the separation matrix that drives the cost function to its extreme value is obtained, and the resulting signal is taken as the estimate of the source signals. The cost function can be selected using three different criteria: minimum mutual information (MMI), maximum entropy (ME) [11], and maximum likelihood estimation (MLE) [15, 19]. Optimization of the cost function is performed using the stochastic gradient (SG) [11] or natural gradient (NG) methods [20]. Different algorithms have been proposed in the literature to solve the BSS problem. In general, these algorithms can be classified into two groups: (i) methods based on statistical analysis and (ii) methods based on neural networks. The neural network-based methods are considered more computationally efficient, although they may be slow to converge; the statistical analysis methods are slower by comparison.
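As a minimal illustration of the ICA workflow (independence-based unmixing of the observations), the sketch below applies scikit-learn's FastICA, a statistical ICA method, to the mixtures X from the earlier mixing sketch; it is shown only to illustrate the workflow and is not the extend-infomax algorithm studied in this paper.

```python
from sklearn.decomposition import FastICA

# Recover independent components from the mixtures X (shape (n, T)).
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X.T)   # scikit-learn expects (samples, features)
W_unmix = ica.components_        # estimated unmixing matrix
```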

2.2. Infomax Algorithm

Infomax is an optimization principle for artificial neural networks and other information processing systems, first described by Linsker in 1988. It prescribes that a function mapping a set of input values I to a set of output values O should be chosen or learned so as to maximize the average Shannon mutual information between I and O, subject to a set of specified constraints and/or noise processes. Infomax algorithms are learning algorithms that perform this optimization. One limitation of the infomax algorithm is that it cannot adapt to inputs with a variety of distributions: because the learning rule defined by Bell and Sejnowski uses only one nonlinear function for the mapping network, it can only separate signals with the same type of distribution.

2.3. Extend-Infomax Algorithm

The extend-infomax algorithm was proposed by Girolami et al. Signal distributions fall into three types: super-Gaussian, Gaussian, and sub-Gaussian. The difference between the probability density functions (PDFs) of sub-Gaussian and super-Gaussian distributions is shown in Figure 1, which presents (a) a sub-Gaussian signal, (b) a Gaussian signal, and (c) a super-Gaussian signal (reproduced from Ashouri et al., 2009, under the Creative Commons Attribution License/public domain [21]). One limitation of existing BSS algorithms is that they cannot adapt to a variety of input distributions.

The algorithm measures the independence of the separated signals based on the mutual information (MI, i.e., the Kullback–Leibler divergence):

$$I(\mathbf{y}) = \mathrm{KL}\!\left(p(\mathbf{y}) \,\Big\|\, \prod_{i=1}^{n} p_i(y_i)\right) = \int p(\mathbf{y}) \log \frac{p(\mathbf{y})}{\prod_{i=1}^{n} p_i(y_i)}\, d\mathbf{y}.$$

If the separated signals are independent, then $I(\mathbf{y}) = 0$. Minimizing the MI of the separated signals is therefore equivalent to maximizing the likelihood function in Equation (2):

$$L(\mathbf{W}) = \sum_{i=1}^{n} E\!\left[\log \hat{p}_i(y_i)\right] + \log\left|\det \mathbf{W}\right|. \qquad (2)$$

Based on the conventional gradient method, we obtain the following update:

$$\Delta \mathbf{W} = \eta\left[\left(\mathbf{W}^{T}\right)^{-1} - \varphi(\mathbf{y})\,\mathbf{x}^{T}\right], \qquad \varphi_i(y_i) = -\frac{\partial \log \hat{p}_i(y_i)}{\partial y_i},$$

where $\hat{p}_i$ is the hypothetical probability density of the source signal.

To avoid inverting the separation matrix and to speed up the convergence of the algorithm, the NG method is used as follows:

$$\Delta \mathbf{W} = \eta\left[\mathbf{I} - \varphi(\mathbf{y})\,\mathbf{y}^{T}\right]\mathbf{W}.$$
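A minimal sketch of one natural-gradient update follows, assuming the reconstructed rule above with a tanh score function (a common choice for super-Gaussian sources); the score function corresponding to the sub-Gaussian density is given below.

```python
import numpy as np

def natural_gradient_step(W, X, eta=0.001, phi=np.tanh):
    """One natural-gradient infomax update: dW = eta * (I - phi(y) y^T) W.

    W   : current (n, n) separation matrix
    X   : observed mixtures, shape (n, T)
    phi : score function derived from the hypothetical source density
          (tanh is a common choice for super-Gaussian sources)
    """
    n, T = X.shape
    Y = W @ X                                        # current source estimates
    dW = eta * (np.eye(n) - (phi(Y) @ Y.T) / T) @ W  # expectation over T samples
    return W + dW
```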

The probability densities of the hypothetical source signals in the extend-infomax algorithm are

$$\hat{p}(s) \propto N(0, 1)\,\mathrm{sech}^2(s) \;\;\text{(super-Gaussian)}, \qquad \hat{p}(s) = \frac{1}{2}\left[N(\mu, \sigma^2) + N(-\mu, \sigma^2)\right] \;\;\text{(sub-Gaussian)},$$

where $N(\mu, \sigma^2)$ denotes the normal density with mean $\mu$ and variance $\sigma^2$.

Letting $\mathbf{u} = \mathbf{W}\mathbf{x}$, the following learning rule is obtained:

$$\Delta \mathbf{W} = \eta\left[\mathbf{I} - \mathbf{K}\tanh(\mathbf{u})\mathbf{u}^{T} - \mathbf{u}\mathbf{u}^{T}\right]\mathbf{W},$$

where $\mathbf{K}$ is a diagonal matrix with $k_{ii} = 1$ for super-Gaussian sources and $k_{ii} = -1$ for sub-Gaussian sources.

The infomax algorithm is not effective in separating sub-Gaussian sources, because only one nonlinear function is used in the learning rule of the neural network. Hence, one of the objectives of the extend-infomax algorithm is to provide a simple learning rule that can separate sources with a variety of distributions. In this paper, based on the hypothetical probability density of a sub-Gaussian source signal in the extend-infomax algorithm, the influence of kurtosis optimization on the performance of the algorithm is analyzed. One-way ANOVA is then carried out on the performance metrics of the algorithm at different kurtosis levels to derive source signals with improved quality.

3. Proposed Algorithm

This section discusses the kurtosis optimization, that is, the optimization of the assumed kurtosis of the sub-Gaussian source signal in the extend-infomax algorithm, with kurtosis used as a control variable.

3.1. Kurtosis

Kurtosis is the fourth-order cumulant of a signal. For a normal distribution, the kurtosis can be calculated from its characteristic function [22]:

$$\varphi(t) = \exp\!\left(j\mu t - \tfrac{1}{2}\sigma^2 t^2\right),$$

where $\mu$ and $\sigma^2$ are the mean and variance of the distribution.

The kth-order central moment of the normal distribution is as follows:

$$E\!\left[(x-\mu)^k\right] = \begin{cases} 0, & k \text{ odd}, \\ (k-1)!!\,\sigma^{k}, & k \text{ even}. \end{cases}$$

Thus, for normally distributed random variables,

$$E\!\left[(x-\mu)^4\right] = 3\sigma^4.$$

The kurtosis of normally distributed random variables is therefore

$$k_4(x) = E\!\left[(x-\mu)^4\right] - 3\left(E\!\left[(x-\mu)^2\right]\right)^2 = 3\sigma^4 - 3\sigma^4 = 0.$$

Assuming the source signals satisfy $E[\mathbf{s}] = \mathbf{0}$ and $E[\mathbf{s}\mathbf{s}^{T}] = \mathbf{I}$, the central limit theorem gives

$$x_p(t) = \mathbf{a}_p\,\mathbf{s}(t) = \sum_{i=1}^{n} a_{pi}\, s_i(t) \;\approx\; \text{Gaussian},$$

where $\mathbf{a}_p$ is the pth row of the matrix $\mathbf{A}$: a sum of many independent components tends toward a Gaussian distribution.

Therefore, the kurtosis of the mixture signal should be approximately zero. If the kurtosis is positive, the signal is called super-Gaussian; if the kurtosis is negative, it is sub-Gaussian. The magnitude $|k_4(x)|$ can thus be used as a measure of how far the signal departs from a Gaussian signal.
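As a quick numerical illustration of this classification, the sketch below estimates the excess kurtosis of Gaussian, uniform (sub-Gaussian), and Laplace (super-Gaussian) samples; the example distributions are chosen for illustration only.

```python
import numpy as np
from scipy.stats import kurtosis  # Fisher definition: 0 for a Gaussian

rng = np.random.default_rng(0)
n = 100_000

print(kurtosis(rng.normal(size=n)))      # ~ 0.0  -> Gaussian
print(kurtosis(rng.uniform(-1, 1, n)))   # ~ -1.2 -> sub-Gaussian
print(kurtosis(rng.laplace(size=n)))     # ~ 3.0  -> super-Gaussian
```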

The hypothetical probability density of the sub-Gaussian source signal in the extend-infomax algorithm comes from the Gaussian mixture model of the study by Pearson [18]:

$$\hat{p}(s) = \frac{1}{2}\left[N(\mu_1, \sigma_1^2) + N(\mu_2, \sigma_2^2)\right],$$

where $N(\mu, \sigma^2)$ denotes the normal density with mean $\mu$ and variance $\sigma^2$.

For simplicity, it is assumed that $\mu_1 = -\mu_2 = \mu$ and $\sigma_1 = \sigma_2 = \sigma$; thus,

$$\hat{p}(s) = \frac{1}{2\sqrt{2\pi}\,\sigma}\left[\exp\!\left(-\frac{(s-\mu)^2}{2\sigma^2}\right) + \exp\!\left(-\frac{(s+\mu)^2}{2\sigma^2}\right)\right],$$

where $\mu \geq 0$ and $\sigma > 0$.

The kurtosis of the hypothetical source signal is as follows:

$$k_4(s) = \frac{E[s^4]}{\left(E[s^2]\right)^2} - 3 = \frac{-2\mu^4}{\left(\mu^2 + \sigma^2\right)^2}.$$
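The closed-form kurtosis above (as reconstructed here) can be checked by Monte Carlo sampling from the bimodal mixture; the parameter values below are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 0.5, 1_000_000

# Sample s ~ 0.5*N(mu, sigma^2) + 0.5*N(-mu, sigma^2).
signs = rng.choice([-1.0, 1.0], size=n)
s = signs * mu + sigma * rng.normal(size=n)

empirical = kurtosis(s)                           # sample excess kurtosis
analytic = -2 * mu**4 / (mu**2 + sigma**2) ** 2   # -2mu^4/(mu^2+sigma^2)^2
print(empirical, analytic)                        # both close to -1.28
```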

The following constrained optimization problem is to be solved:

$$\min_{\mu,\,\sigma}\; k_4(\mu, \sigma) = \frac{-2\mu^4}{\left(\mu^2 + \sigma^2\right)^2} \quad \text{subject to} \quad \mu \geq 0,\; \sigma > 0.$$

Using random settings of the parameters as the initial iteration conditions, the minimum value of −2 was reached after 100 iterations, while the optimal parameters themselves showed irregular variation (the minimizer is not unique, since any $\mu > 0$ with $\sigma \to 0$ attains it). Following the Gaussian mixture model of Pearson [18], the parameters were then fixed, and the resulting probability density function is shown in Figure 2.

For BSS of sub-Gaussian signals, the algorithm equation is as follows:

$$\mathbf{W}_{k+1} = \mathbf{W}_k + \eta\left[\mathbf{I} + \tanh(\mathbf{u})\mathbf{u}^{T} - \mathbf{u}\mathbf{u}^{T}\right]\mathbf{W}_k, \qquad \mathbf{u} = \mathbf{W}_k\,\mathbf{x}.$$
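A minimal sketch of this sub-Gaussian learning rule follows, assuming the reconstructed update above; the initialization, step size, and stopping rule are illustrative placeholders, since the paper's exact settings are not reproduced here.

```python
import numpy as np

def extended_infomax_sub(X, eta=0.001, max_iter=2000, tol=1e-6):
    """Iterate W <- W + eta * (I + tanh(u) u^T - u u^T) W, u = W x,
    on the mixtures X (shape (n, T)); returns the separation matrix W."""
    n, T = X.shape
    W = np.eye(n)                              # illustrative initialization
    for _ in range(max_iter):
        U = W @ X
        G = np.eye(n) + (np.tanh(U) @ U.T) / T - (U @ U.T) / T
        W_new = W + eta * G @ W
        if np.linalg.norm(W_new - W) < tol:    # placeholder stopping rule
            break
        W = W_new
    return W_new
```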

3.2. Influences of Kurtosis on the Source Signal

In the probability density function of the hypothetical source signal in the extend-infomax algorithm, the kurtosis satisfies $-2 \leq k_4 < 0$. A larger $|k_4|$ means a larger distance between the source signal and a Gaussian signal, and a smaller $|k_4|$ indicates a smaller distance between them. Both $\mu$ and $\sigma$ change with $k_4$. The parameters can be obtained by solving the following equation:

$$\frac{-2\mu^4}{\left(\mu^2 + \sigma^2\right)^2} = k_4.$$

In this paper, two source signals were used. A total of 10,000 equally spaced sampling points were selected within the sampling interval.

The experiments used a fixed aliasing matrix $\mathbf{A}$, step size $\eta$, and convergence criterion.

Taking the kurtosis level $k_4 = -0.50$ as the reference (Table 1), derived from the corresponding $\mu$ and $\sigma$, the parameters obtained at the other kurtosis levels are used to validate the performance of the algorithm based on the following metrics:
(1) Performance index (PI), computed from the global matrix $\mathbf{G} = \mathbf{W}\mathbf{A}$:

$$\mathrm{PI} = \sum_{i=1}^{n}\left(\sum_{j=1}^{n}\frac{|g_{ij}|}{\max_k |g_{ik}|} - 1\right) + \sum_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{|g_{ij}|}{\max_k |g_{kj}|} - 1\right).$$

(2) Signal-to-noise ratio (SNR):

$$\mathrm{SNR} = 10\log_{10}\frac{\sum_t s^2(t)}{\sum_t \left[s(t) - \hat{s}(t)\right]^2},$$

where $\hat{s}(t)$ is the separated signal. The other experimental settings of $k_4$, ranging from −0.51 to −0.55 with a step of −0.01, and their corresponding performance indices are included in Tables 2–6, where SN1 refers to the SNR of the recovered signal corresponding to the first source signal, and SN2 refers to the SNR of the recovered signal corresponding to the second source signal.
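The two metrics can be computed as follows; this sketch implements the PI and SNR formulas given above (as reconstructed here).

```python
import numpy as np

def performance_index(G):
    """PI of the global matrix G = W A; approaches 0 when G is a
    scaled permutation matrix (perfect separation)."""
    G = np.abs(G)
    rows = (G / G.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (G / G.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return rows.sum() + cols.sum()

def snr_db(s, s_hat):
    """SNR (in dB) of a recovered signal s_hat against its source s."""
    return 10 * np.log10(np.sum(s**2) / np.sum((s - s_hat)**2))
```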

From the results, it can be seen that at the same kurtosis level, the running times and iteration counts of different combinations of $\mu$ and $\sigma$ are similar. As shown in Figure 4, when the kurtosis of the source signal increases to a certain level, the PI value no longer tends to zero at the maximum number of iterations, or the global matrix is empty. If the step size is not reduced, the algorithm becomes unstable after the obtained parameters are imported; in this case, the iteration step size can be reduced to ensure convergence.

To evaluate the influence of the kurtosis of the hypothetical source signal on the performance of the extend-infomax algorithm, kurtosis was used as a control variable. For the six kurtosis levels, one-way ANOVA analyses of the running time, the number of iterations, SN1, and SN2 were carried out using SPSS 25.0.

4. Analysis

In this section, one-way ANOVA [23] analyses of the running time, the number of iterations, SN1, and SN2 are discussed.
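The analyses below were run in SPSS 25.0; an equivalent check can be sketched with SciPy, using Levene's test for homogeneity of variance and the classic one-way F test (SciPy does not provide Welch's ANOVA directly). The synthetic data stand in for the paper's per-level measurements.

```python
import numpy as np
from scipy.stats import levene, f_oneway

rng = np.random.default_rng(0)

# Synthetic stand-in for a metric (e.g., iteration count) observed at six
# kurtosis levels; the paper's real measurements are in Tables 1-6.
groups = [rng.normal(loc=100 - 5 * i, scale=3.0, size=10) for i in range(6)]

print(levene(*groups))     # homogeneity-of-variance check
print(f_oneway(*groups))   # classic one-way ANOVA (equal-variance case)
```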

4.1. One-Way ANOVA Analysis of the Running Time

At the chosen significance level, the running times at different kurtosis levels do not meet the homogeneity of variance assumption. The results of the Welch and Brown–Forsythe tests are shown in Table 7.

The results showed significant differences in the algorithm running time among the six kurtosis levels. The average running time decreases as the kurtosis increases, as shown in Figure 5.

4.2. One-Way ANOVA Analysis of the Number of Iterations

At the chosen significance level, the numbers of iterations at different kurtosis levels do not meet the homogeneity of variance assumption. The results of the Welch and Brown–Forsythe tests are shown in Table 8.

There are significant differences in the number of iterations among the six kurtosis levels. The average number of iterations decreases with increasing kurtosis, as shown in Figure 6.

4.3. One-Way ANOVA Analysis of SN1

At the chosen significance level, the SN1 results at different kurtosis levels meet the homogeneity of variance assumption, so a standard F test was applied; its result is shown in Table 9. The test indicates significant differences in SN1 among the kurtosis levels, and the average SN1 increases with increasing kurtosis (Figure 7).

4.4. One-Way ANOVA Analysis of SN2

At the chosen significance level, the SN2 results at different kurtosis levels do not meet the homogeneity of variance assumption. The results of the Welch and Brown–Forsythe tests are shown in Table 10.

There are significant differences in SN2 among the different kurtosis levels, and the average SN2 increased with increasing kurtosis (Figure 8).

5. Conclusions

Using the hypothetical probability density of sub-Gaussian signals in the extend-infomax algorithm, the constrained optimization problem was solved according to the principle of optimal kurtosis, and the optimal parameters $\mu$ and $\sigma$ of the hypothetical source density were obtained. Different combinations of parameters at different kurtosis levels were examined to validate the performance of the algorithm using the performance index, running time, number of iterations, SN1, and SN2. With kurtosis as a control variable, a one-way ANOVA analysis of these indices was carried out. The results showed significant differences in the indices among the kurtosis levels. With increasing kurtosis, the average running time and the average number of iterations decreased, whereas the average signal-to-noise ratio increased. Once the kurtosis reaches a certain level, the iteration step size needs to be reduced for the algorithm to converge. The experiments validate the effectiveness of the proposed method in improving the quality of blind recovered signals.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Guangdong Provincial Key Field Special Project (Grant no. 2021ZDZX3019) and the Science Project of Dongguan (Grant no. 20201800500012).