Research Article | Open Access

Liyun Su, Xiu Ling, "Estimating Weak Pulse Signal in Chaotic Background with Jordan Neural Network", Complexity, vol. 2020, Article ID 3284587, 14 pages, 2020. https://doi.org/10.1155/2020/3284587

Estimating Weak Pulse Signal in Chaotic Background with Jordan Neural Network

Academic Editor: Mohamed Boutayeb
Received: 06 Feb 2020
Revised: 19 May 2020
Accepted: 17 Jun 2020
Published: 20 Jul 2020

Abstract

In estimating targets against sea clutter or in diagnosing actual mechanical faults, the useful signal is often submerged in strong chaotic noise, and the target signal data are difficult to recover. Traditional schemes, such as the Elman neural network (ENN), backpropagation neural network (BPNN), support vector machine (SVM), and multilayer perceptron- (MLP-) based models, are insufficient to extract a weak signal embedded in a chaotic background. To improve estimation accuracy, a novel method is presented for the problem of extracting a weak pulse signal buried in a strong chaotic background. Firstly, the proposed method obtains a vector sequence by reconstructing the higher-dimensional phase-space data matrix according to the Takens theorem. Then, a Jordan neural network- (JNN-) based model is designed, which minimizes the sum of squared errors by combining the single-point jump model of the target signal. Finally, based on the short-term predictability of the chaotic background, estimation of the weak pulse signal from the chaotic background is achieved by a profile least squares method that optimizes the proposed model parameters. Data generated by the Lorenz system are used as the chaotic background noise in the simulation experiments. The simulation results show that the Jordan neural network and the profile least squares algorithm are effective in estimating a weak pulse signal from a chaotic background. Compared with traditional methods, (1) the presented method estimates the weak pulse signal in strong chaotic noise with lower error than the ENN-based, BPNN-based, SVM-based, and MLP-based models, and (2) the proposed method extracts the weak pulse signal with a higher output SNR than the BPNN-based model.

1. Introduction

As early as the end of the 19th century, the French mathematician Poincaré found, while studying the three-body problem, that its solution could be random within a certain range, but this discovery did not arouse widespread concern among scholars. It was not until 1963 that Lorenz, an American meteorologist, published a paper [1] giving the initial conditions and parameters for generating chaos in a system of differential equations, which came to be called the Lorenz equations. With the continuing in-depth study of chaos, its characteristics and scientific concepts gradually came into view. Chaotic phenomena can be described as random-looking phenomena in a deterministic system, a complex behavior generated by a nonlinear dynamic system. Chaos has three characteristics: sensitivity to initial values, short-term predictability, and intrinsic randomness.

A pulse signal is a kind of discrete signal, usually submerged by noise and not easy to detect. It exists widely in communication engineering, biomedicine, and industrial automatic control. Therefore, predicting the pulse signal from chaotic noise has practical significance. Experts and scholars at home and abroad have studied the prediction of pulse signals in chaotic noise and obtained many research results, such as the Volterra filter, correlation detection in the time domain, spectral analysis in the frequency domain, the chaotic Duffing oscillator [2], local linear models [3–5], the multiple kernel extreme learning machine [6], the Kalman filter and extended Kalman filter (EKF) [7], the wavelet transform, and the nonlinear autoregressive model.

At present, experts and scholars have proposed many methods to predict chaotic time series and extract pulse signals from chaotic noise [8, 9]. Jiang Xiangcheng proposed the Volterra adaptive method to predict chaotic time series [10]. The improved local projection algorithm proposed by Han Min is a nonlinear chaotic noise-reduction method, which combines wavelet analysis with local projection to estimate noisy chaotic signals [11]. Wang Shiyuan et al. proposed the fractional-order maximum correntropy criterion algorithm [12].

A neural network is a model that imitates the information processing of the human nervous system; it is an abstraction and simplification of biological neural networks. Neural networks can approximate and model the processes of nonlinear dynamic systems [13]. Neural networks are divided into feedforward networks, feedback networks, random neural networks, and competitive networks. Because of this rich diversity, the use of different neural networks to model nonlinear systems continues to be studied, for example, the radial basis function (RBF) network [14, 15], support vector machine (SVM) [16], least squares support vector machine (LS-SVM) [17], backpropagation neural network (BPNN) [13], fuzzy neural network (FNN) [18], recurrent neural network (RNN) [19, 20], multilayer perceptron (MLP), nonlinear autoregressive (NAR) model, recurrent predictor neural network (RPNN), and echo state network (ESN) [21]. Chandra used memetic cooperative coevolution to train the Elman recurrent neural network on the problem of grammatical inference [22]. Two years later, Chandra used the Elman neural network to characterize chaotic time series and added cooperation and competition to the training, so as to achieve prediction [20]. This paper uses the Jordan network, a neural network established by Jordan in 1986 [23]. Both the Jordan neural network and the Elman neural network (ENN) are recurrent neural networks. The difference is that the Elman network feeds the output of the hidden layer at the previous moment back to the context layer, while the Jordan network feeds the output of the output layer at the previous moment back to the context layer, which can be used to represent a dynamic system.

In this paper, the Jordan neural network is used to fit the chaotic background, and the pulse signal is estimated from its residuals. Because the pulse signal is very weak, the characteristics of the chaotic time series are mainly reflected in the chaotic background. Firstly, the observed signal is reconstructed in phase space; then the Jordan neural network and the single-point pulse signal model are established; the weights and biases of the Jordan neural network are optimized by the gradient descent algorithm; and finally the amplitude of the pulse signal is estimated by the profile least squares method.

The remainder of the paper is organized as follows. Section 2 gives a brief background and related work. Section 3 presents and analyzes the problem. Section 4 estimates the weak pulse signal from a chaotic background by the Jordan neural network. Section 5 presents three experimental simulation results and discussion. Finally, conclusions and a discussion of future work are provided in Section 6.

2. Related Work

Our work mainly involves two aspects: the application of feedback neural networks in general and their application in time series prediction.

2.1. Application of Feedback Neural Network

A chaotic time series is a special time series produced by irregular motion in a deterministic system. The traditional time series prediction models, such as MA, ARMA, and ARIMA, are not ideal for chaotic time series prediction.

The applications for chaotic time series prediction are wide and range from financial prediction to weather prediction [20].

Recurrent neural networks are used in a wide range of applications, such as weather prediction [20], imbalanced fault diagnosis [24], and detection of network malware [25]. Classification is also an important application of recurrent neural networks, and recently they have been widely integrated into convolutional neural networks. Epstein integrated a recurrent neural network into a convolutional neural network for image classification; experiments show that the model is more robust to category-independent attributes [26]. The broad applicability of neural networks is reflected not only in their frequent use across fields but also in how flexibly they can be combined with other networks, models, and methods. Slawek Smyl observed that certain preprocessing parameters are in fact the update formulas of models from the exponential smoothing family, and therefore proposed a hybrid prediction method in [27] that mixes an exponential smoothing model with an advanced long short-term memory neural network in a common framework. The advantages of this hybrid method are that the exponential smoothing equations let it effectively capture the major components of an individual series, while the neural network allows for nonlinear trends and cross-learning.

2.2. Application of Feedback Neural Network in Time-Series Prediction

Liu et al. combined a dual-stage two-phase model with a feedback neural network to make long-term predictions of multiple time series in [28], and the approach has been successfully used to develop widely deployed expert and intelligent systems. In [29], Wei Wu used an ARIMA model, an Elman neural network, and a Jordan neural network, respectively, to predict the nonlinear time series of human brucellosis in China, and the results showed that the feedback neural networks were more accurate than the traditional ARIMA model. In [30], Zhihao Wang proposed a heart sound signal recovery method based on a long short-term memory (LSTM) prediction model built on a recurrent network structure. The method can not only restore incomplete or disturbed signals but also has a filtering effect; it recovers damaged or incomplete heart sound signals well, which benefits medical research. Zhang proposed a new nonstationary time series modeling optimization algorithm in [31]. The algorithm uses a moving window and exponentially decaying weights to eliminate the influence of historical gradients, and its regret bound is analyzed to guarantee convergence. Simulation results on a short-term power load data set are better than those of existing optimization algorithms.

3. Problem Analysis

This paper analyzes a weak pulse signal buried in chaotic noise. Let $t$ be a time parameter. We assume that the observation sequence, expressed as $y(t)$, contains chaotic noise $c(t)$, a white noise signal $\varepsilon(t)$, and a weak pulse signal $s(t)$. So we have the following formula:

$$y(t) = c(t) + s(t) + \varepsilon(t). \tag{1}$$

The pulse signal can be boiled down to a periodic signal:

$$s(t) = a\,p(t), \qquad p(t) = \begin{cases} 1, & t \bmod T = 0, \\ 0, & \text{otherwise}, \end{cases} \tag{2}$$

where $a$ is the amplitude of the weak pulse signal and $T$ is the period of $s(t)$. To estimate the weak pulse signal from $y(t)$, we can consider that estimating the weak pulse signal in a chaotic background is to estimate the value of $a$. Then, our ultimate goal turns to estimating $a$.
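The observation model can be sketched in a few lines. The sketch below is in Python (the paper's experiments use R); the helper names are ours, and the sinusoidal stand-in for the chaotic background is only a placeholder, since the paper generates the background from the Lorenz system:

```python
import numpy as np

def pulse(t, T, a):
    """Single-point pulse train: amplitude a at every T-th sample, else 0."""
    t = np.asarray(t)
    return a * (t % T == 0).astype(float)

# Observation y(t) = c(t) + s(t) + e(t): chaotic background plus weak
# pulse plus white noise.  A sinusoidal toy stands in for c(t) here.
rng = np.random.default_rng(0)
t = np.arange(1000)
c = np.sin(0.37 * t) + np.sin(0.11 * t)   # placeholder "background"
s = pulse(t, T=130, a=0.1)                # weak pulse, period 130
e = rng.normal(0.0, 0.01, size=t.size)    # zero-mean white noise
y = c + s + e
```

With period $T = 130$ over 1000 samples, the pulse is nonzero at only 8 time points, which is what makes it hard to see against the background.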

4. Estimating the Pulse Signal from Strong Chaotic Background

4.1. Phase Space Reconstruction

A chaotic time series is a univariate or multivariate time series produced by a dynamical system. Phase space reconstruction maps a chaotic time series into a high-dimensional space. According to the Takens embedding theorem [32], there is a correlation between one component of the dynamical system and the other components.

For an observed chaotic signal $\{x(t)\}$, the embedding dimension $m$ and the time delay $\tau$ can be obtained by using Cao's method [33], and a phase point in the reconstructed phase space can be expressed as $X(t) = (x(t), x(t-\tau), \ldots, x(t-(m-1)\tau))$, where $t = (m-1)\tau + 1, \ldots, N$ and $N$ is the length of the series. Then there is a smooth mapping $f: \mathbb{R}^m \to \mathbb{R}$, which provides the relationship between $X(t)$ and $x(t+1)$, denoted as $x(t+1) = f(X(t))$. To predict a chaotic time series is to find the function $f$ or an approximate function $\hat f$ of $f$.

Remark 1. In mathematics, Takens embedding theorem gives the conditions under which a chaotic dynamical system can be reconstructed from a sequence of observations of the state of a dynamical system. The reconstruction preserves the properties of the dynamical system that do not change under smooth coordinate changes, but it does not preserve the geometric shape of structures in phase space. It provides the conditions under which a smooth attractor can be reconstructed from the observations made with a generic function.
Cao's method is a practical way to determine the minimum embedding dimension from a scalar time series. For each phase point, compute the ratio of its Euclidean distance to its nearest neighbor in embedding dimension $m+1$ to the corresponding distance in dimension $m$, and denote the average of these ratios by $E(m)$. If the time series comes from an attractor, the quantity $E1(m) = E(m+1)/E(m)$ stops changing when $m$ is greater than some value $m_0$; then $m_0 + 1$ is the minimum embedding dimension we look for.
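The delay reconstruction can be written compactly. A minimal sketch in Python (function name ours), using the backward-delay convention $X(t) = (x(t), x(t-\tau), \ldots, x(t-(m-1)\tau))$:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Phase points X(t) = (x(t), x(t-tau), ..., x(t-(m-1)tau)),
    one row per t = (m-1)*tau, ..., len(x)-1.
    Returns an array of shape (len(x) - (m-1)*tau, m)."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    cols = [x[(m - 1 - i) * tau : (m - 1 - i) * tau + n] for i in range(m)]
    return np.column_stack(cols)

x = np.sin(0.05 * np.arange(200))
X = delay_embed(x, m=3, tau=4)   # shape (192, 3)
```

Each row of `X` is one reconstructed phase point; pairing row $t$ with the next sample $x(t+1)$ gives the training data for the predictor $f$.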

4.2. Jordan Neural Network

Jordan combined the Hopfield network storage concept with distributed parallel processing theory to create a new recurrent neural network in 1986 [23]. The Jordan neural network is made of four parts: an input layer, a hidden layer, an output layer, and a context layer. There is a first-order delay operator in the connection from the output layer to the context layer, so that the context layer stores the information of the output layer. There are two types of activation functions in the network: linear and nonlinear. The sigmoid nonlinear activation function, expressed as $\sigma(x) = 1/(1 + e^{-x})$, is used at the hidden layer, and a linear function is used at the output layer in this paper. The Jordan neural network topology is shown in Figure 1.

According to Figure 1, $X(t) = (x_1(t), x_2(t), \ldots, x_m(t))^{\mathrm{T}}$ is used to represent the input vector of the Jordan neural network, where $x_j(t)$ is the $j$th input and $m$ is the input dimension. $H(t) = (h_1(t), \ldots, h_q(t))^{\mathrm{T}}$ is the output vector of the hidden layer. $w_{ij}$ are the weights from input layer neuron $j$ to the $i$th hidden layer neuron. $v_i$ are the weights from the context layer neuron to the $i$th hidden layer neuron. $u_i$ are the weights from hidden layer neuron $i$ to the output layer neuron. $b_i$ are the biases of hidden layer neuron $i$. $b_o$ and $b_c$ are the biases of the output layer and context layer, respectively. $g_o$ and $g_c$ are the activation functions of the output layer and context layer, respectively, and they are usually linear functions, while $\sigma_i$ are the activation functions of the hidden layer neurons. $h_i(t)$ is the value of hidden layer neuron $i$. $z(t)$ and $y(t)$ are the values of the context layer and the output layer.

In order to express this clearly and write it easily, the following notation is introduced: $W = (w_{ij})_{q \times m}$ for the input weights, $v = (v_1, \ldots, v_q)^{\mathrm{T}}$ for the context weights, $u = (u_1, \ldots, u_q)^{\mathrm{T}}$ for the output weights, and $b = (b_1, \ldots, b_q)^{\mathrm{T}}$ for the hidden biases.

So the Jordan neural network is obtained:

$$h_i(t) = \sigma\Big( \sum_{j=1}^{m} w_{ij}\,x_j(t) + v_i\,z(t) + b_i \Big), \quad y(t) = \sum_{i=1}^{q} u_i\,h_i(t) + b_o, \quad z(t) = y(t-1). \tag{5}$$

To better understand equation (5), we provide a detailed diagram of the first three steps of the Jordan neural network in Figure 2.

The following equation is obtained from the Takens embedding theorem:

$$\hat x(t+1) = \hat f\big(X(t)\big),$$

where $\hat x(t+1)$ is the fitted value of the Jordan neural network at time $t+1$, and the smooth mapping $f$ is expressed approximately by the trained network, $\hat f \approx f$.
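The network's forward pass is a direct transcription of equation (5). The Python sketch below uses names and shapes that are our assumptions (the paper gives no code for the network); the comment marks the one design point that distinguishes a Jordan network from an Elman network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jordan_forward(X, W, v, b, u, b_o):
    """Run the Jordan network over a sequence of phase points X (shape (N, m)).
    W: (q, m) input->hidden, v: (q,) context->hidden, b: (q,) hidden biases,
    u: (q,) hidden->output, b_o: output bias."""
    z = 0.0
    out = np.empty(len(X))
    for t, x in enumerate(X):
        h = sigmoid(W @ x + v * z + b)   # hidden layer, sigmoid activation
        y = u @ h + b_o                  # linear output layer
        out[t] = y
        z = y   # first-order delay: context stores the previous *output*;
                # an Elman network would store the hidden vector h instead
    return out

rng = np.random.default_rng(1)
m, q, N = 3, 5, 20
X = rng.normal(size=(N, m))
out = jordan_forward(X, rng.normal(size=(q, m)), rng.normal(size=q),
                     rng.normal(size=q), rng.normal(size=q), 0.0)
```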

4.3. Model for Estimating the Pulse Signal

According to the Takens embedding theorem, the model of the pulse signal, and the Jordan neural network, we can obtain the following equation for estimating the pulse signal:

$$y(t) = f\big(C(t-1)\big) + a\,p(t) + \varepsilon(t), \tag{8}$$

where $C(t-1)$ is the phase point reconstructed from the chaotic background $c(t)$. Since $c(t) = y(t) - a\,p(t)$, we can deduce the following equation, whose unknown parameters involve only the amplitude $a$ and the network parameters $\theta$:

$$y(t) - a\,p(t) = \hat f\big(C(t-1); \theta\big) + \varepsilon(t). \tag{9}$$

The profile least squares method [34] is a basic idea for fitting a semiparametric model. It is semiparametrically efficient for the parametric components, and the nonparametric components are fitted as if the parametric components were known. Details of the profile least squares estimation are given in the next section.

4.3.1. Profile Least Square Algorithm

The profile least squares algorithm is usually divided into two steps: first, estimating the nonparametric function with a given $a$, resulting in $\hat f$, and second, estimating the unknown parameter $a$ with the estimated function $\hat f$.

(A) Estimation of $f$ with Given $a$. Given the initial value of $a$ (zero), the series $c(t) = y(t) - a\,p(t)$ is formed, and the nonparametric function is obtained by minimizing the residual sum of squares of the fit:

$$\ell(\theta) = \sum_{t} \Big( c(t) - \hat f\big(C(t-1); \theta\big) \Big)^2.$$

The gradient descent algorithm is used to update the weights of the Jordan neural network. Its basic idea is linear approximation by a first-order Taylor expansion: the partial derivatives of $\ell$ with respect to the weights $W$, $v$, $u$ and the biases $b$, $b_o$ are computed in turn by the chain rule, and each updated parameter equals the old value minus the product of the corresponding partial derivative and the learning rate $\eta$:

$$\theta^{\mathrm{new}} = \theta^{\mathrm{old}} - \eta\,\frac{\partial \ell}{\partial \theta}.$$

(B) Estimating $a$ for Given $\hat f$. For the given $\hat f$, the key to estimating $a$ is to minimize the following objective:

$$Q(a) = \sum_{t} \Big( y(t) - a\,p(t) - \hat f\big(C(t-1)\big) \Big)^2.$$
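One weight update of step (A) can be sketched as a single gradient-descent step on the squared one-step error. The Python sketch below truncates the recurrent gradient by treating the context value `z` as a fixed input, an assumption on our part (the paper does not spell out how the recurrence is differentiated); all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gd_step(x, z, target, W, v, b, u, b_o, lr=0.05):
    """One gradient-descent update on (target - y)^2 for a single phase
    point x with context value z.  z is treated as a fixed input
    (truncated gradient), so this is plain backpropagation."""
    h = sigmoid(W @ x + v * z + b)
    y = u @ h + b_o
    e = target - y                  # prediction error
    dh = u * h * (1.0 - h) * e      # backprop through the sigmoid layer
    u = u + lr * e * h              # hidden -> output weights
    b_o = b_o + lr * e
    W = W + lr * np.outer(dh, x)    # input -> hidden weights
    v = v + lr * dh * z             # context -> hidden weights
    b = b + lr * dh
    return W, v, b, u, b_o, y

# Repeated updates on one fixed sample drive the error toward zero.
rng = np.random.default_rng(2)
m, q = 2, 4
W, v, b, u, b_o = (rng.normal(size=(q, m)), rng.normal(size=q),
                   rng.normal(size=q), rng.normal(size=q), 0.0)
x, z, target = np.array([0.3, -0.1]), 0.0, 0.5
errs = []
for _ in range(200):
    W, v, b, u, b_o, y = gd_step(x, z, target, W, v, b, u, b_o)
    errs.append(abs(target - y))
```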

The method of estimating $a$ is the Newton–Raphson iteration. Assume that $\hat a$ is the minimum point of $Q(a)$, so we have $Q'(\hat a) = 0$. Then the Taylor expansion of $Q'$ at a point $a_k$ near $\hat a$ gives

$$Q'(a) \approx Q'(a_k) + Q''(a_k)\,(a - a_k),$$

and setting $Q'(a) = 0$ yields the iteration

$$a_{k+1} = a_k - \frac{Q'(a_k)}{Q''(a_k)}.$$

The first derivative of $Q$ with respect to the estimated parameter $a$ is

$$Q'(a) = -2 \sum_{t} \Big( y(t) - a\,p(t) - \hat f\big(C(t-1)\big) \Big) \Big( p(t) + \frac{\partial \hat f\big(C(t-1)\big)}{\partial a} \Big),$$

where the inner derivative arises by the chain rule, because every component of the reconstructed phase point $C(t-1)$ depends on $a$ through $c(t) = y(t) - a\,p(t)$. Differentiating once more gives the second derivative $Q''(a)$, which additionally contains the second derivative of $\hat f$ with respect to $a$. Because the Jordan network makes these closed-form derivatives cumbersome, in practice both derivatives are approximated by differences, as described in Remark 2.

4.3.2. Procedure of Algorithm

We now outline Algorithm 1.

Input: Observation sequence $y(t)$, the initial value of $a$ ($a_0 = 0$), precision (0.00001).
Output: $\hat a$
Begin
While ($|a_{k+1} - a_k|$ is greater than the given precision):
Step 1: For the given $a_k$, update the series $c(t) = y(t) - a_k\,p(t)$ by equation (9).
Step 2: The phase point data set $\{X(t)\}$ is obtained by normalizing and reconstructing $c(t)$ with the embedding dimension $m$ and the time delay $\tau$.
Step 3: Based on (A) in Section 4.3.1 above, the Jordan neural network is trained on the phase point data set $\{X(t)\}$.
Step 4: $a_k$ is updated to $a_{k+1}$ by the Newton–Raphson step, with the derivatives approximated by differences.
End while
Obtain the optimal $\hat a$ that achieves the accuracy.
End

Remark 2. Because the mathematical expression of the Jordan neural network is very complex, differences are used to compute approximate first and second derivatives, simplifying the calculation when the profile least squares method is used for estimation. The difference scheme is as follows: take three points $a - \Delta$, $a$, and $a + \Delta$ near $\hat a$ and compute the first and second derivatives of the function $Q$ by the central differences

$$Q'(a) \approx \frac{Q(a + \Delta) - Q(a - \Delta)}{2\Delta}, \qquad Q''(a) \approx \frac{Q(a + \Delta) - 2Q(a) + Q(a - \Delta)}{\Delta^2}.$$

For a vector-valued parameter of dimension $k$, the same scheme gives a gradient vector of the same form as the parameter and a $k \times k$ Hessian matrix.
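Algorithm 1 together with Remark 2 can be sketched end to end. In the Python sketch below (all names illustrative), the Jordan network of step (A) is replaced by a plain linear least-squares predictor so the example stays short and self-contained; the profile structure, the pulse-removal update $c(t) = y(t) - a\,p(t)$, and the finite-difference Newton step on $Q(a)$ follow the algorithm. The AR(2) background in the demo is a stand-in for the Lorenz series:

```python
import numpy as np

def pulse_train(n, T):
    t = np.arange(n)
    return (t % T == 0).astype(float)

def fit_predict(c, m=3, tau=1):
    # Stand-in for step (A): fit c(t+1) ~ f(phase point) with linear least
    # squares (the paper trains a Jordan network here).  Returns fitted values.
    n = len(c) - (m - 1) * tau - 1
    X = np.column_stack([c[i * tau:i * tau + n] for i in range(m)] + [np.ones(n)])
    target = c[(m - 1) * tau + 1:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    fitted = np.full(len(c), np.nan)
    fitted[(m - 1) * tau + 1:] = X @ coef
    return fitted

def Q(a, y, p):
    c = y - a * p                  # step 1: remove the assumed pulse
    r = c - fit_predict(c)         # one-step prediction residuals
    return np.nansum(r ** 2)

def profile_estimate(y, T, a0=0.0, tol=1e-5, h=1e-3, max_iter=50):
    # Step (B): Newton-Raphson on Q(a) with the central-difference
    # derivatives of Remark 2.
    p = pulse_train(len(y), T)
    a = a0
    for _ in range(max_iter):
        qm, q0, qp = Q(a - h, y, p), Q(a, y, p), Q(a + h, y, p)
        d1 = (qp - qm) / (2 * h)             # ~ Q'(a)
        d2 = (qp - 2 * q0 + qm) / h ** 2     # ~ Q''(a)
        if d2 == 0:
            break
        step = d1 / d2
        a -= step
        if abs(step) < tol:
            break
    return a

# Demo: AR(2) background (stand-in for Lorenz) + weak pulse + white noise.
rng = np.random.default_rng(3)
n, T, a_true = 600, 50, 0.5
c = np.zeros(n)
for t in range(2, n):
    c[t] = 1.6 * c[t - 1] - 0.8 * c[t - 2] + 0.05 * rng.normal()
y = c + a_true * pulse_train(n, T) + 0.01 * rng.normal(size=n)
a_hat = profile_estimate(y, T)
```

The pulse spikes are unpredictable from the background's past, so any mismatch between the trial amplitude and the true one inflates the prediction residuals at pulse times; that is what makes $Q(a)$ a usable objective.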

5. Experimental Simulations

In order to verify the validity and feasibility of the proposed estimation model, three simulation experiments were carried out. The experimental data in this paper were generated by the Lorenz dynamic system and implemented in the R language. The code for the experiments has been uploaded to GitHub: https://github.com/ling-xiu/jordan/blob/master/An application of Jordan neural network. The mean square error (MSE) is used to measure the recovery accuracy of the model:

$$\mathrm{MSE} = \frac{1}{N} \sum_{t=1}^{N} \big( \hat s(t) - s(t) \big)^2,$$

where $\hat s(t)$ is the recovered pulse signal, $s(t)$ is the true pulse signal, and $N$ is the series length.

The Lorenz dynamic system takes the following form:

$$\dot x = \sigma(y - x), \qquad \dot y = x(\rho - z) - y, \qquad \dot z = xy - \beta z, \tag{26}$$

where $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$ are the standard parameter values for which the system is chaotic. Let $x$, $y$, and $z$ be the state variables, measured by discretization with a step size of 0.01. The length of the data used to solve numerical equation (26) by the fourth-order Runge–Kutta method is 10000. We select the first component $x$ of equation (26) as the chaotic background noise $c(t)$. With the autocorrelation method and the method of Cao, the embedding dimension $m$ and the delay time $\tau$ are determined.
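The data generation can be reproduced with a classical fourth-order Runge–Kutta integrator. The Python sketch below (the paper used R) assumes the standard chaotic parameter values and an illustrative initial condition, and keeps indices 3000 to 7000 of the first component as the background series, as in Example 1:

```python
import numpy as np

def lorenz_rk4(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
               u0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system with classical 4th-order Runge-Kutta."""
    def f(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    u = np.array(u0, dtype=float)
    out = np.empty((n, 3))
    for i in range(n):
        out[i] = u
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return out

traj = lorenz_rk4(10000)
background = traj[3000:7000, 0]   # first component, transient discarded
```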

We normalize the original data sets, and there are two kinds of normalization equations; different neural network structures correspond to different normalization equations in this paper. One of them is the z-score form

$$\tilde y = \frac{y - \bar y}{s_y},$$

where $y$ is the original data, $\tilde y$ is the normalized data, $\bar y$ is the mean of $y$, and $s_y$ is the standard deviation of $y$.

Absolute error (AE) and absolute percentage error (APE) are used to measure the estimation error, $\mathrm{AE} = |\hat a - a|$ and $\mathrm{APE} = \mathrm{AE}/a$. The input signal-to-noise ratio $\mathrm{SNR}_{\mathrm{in}}$ is defined as the ratio of the variance of $s(t)$ to the variance of the sum of $c(t)$ and $\varepsilon(t)$:

$$\mathrm{SNR}_{\mathrm{in}} = 10 \log_{10} \frac{\mathrm{Var}(s)}{\mathrm{Var}(c + \varepsilon)} \ \mathrm{dB}.$$
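These error measures are one-liners; a minimal Python sketch (function names ours), with the SNR computed as ten times the base-10 logarithm of a variance ratio:

```python
import numpy as np

def ae(a_hat, a):
    return abs(a_hat - a)

def ape(a_hat, a):
    return abs(a_hat - a) / abs(a)

def mse(s_hat, s):
    return float(np.mean((np.asarray(s_hat) - np.asarray(s)) ** 2))

def snr_db(signal, noise):
    # Used both for the input SNR (pulse vs. chaos + white noise)
    # and for the output SNR (recovered pulse vs. residual).
    return 10.0 * np.log10(np.var(signal) / np.var(noise))
```

For instance, the first row of Table 1 ($a = 0.1$, $\hat a = 0.0969737$) gives AE $= 0.0030263$ and APE $= 0.030263$.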

5.1. Example 1: Prediction of Pulse Signals with Different SNRs

The chaotic time series is generated by the Lorenz dynamic system, and the index ranges from 3000 to 7000 to ensure the chaotic nature of the data. The pulse signal is generated by equation (2), and the unknown parameters $T$ and $a$ are assumed as shown in Table 1. The experimental data consist of $c(t)$, $s(t)$, and a white noise sequence with a mean of zero.


Table 1: Estimation results of pulse signals with different SNRs.

| No. | $T$ | $a$ | $\mathrm{SNR}_{\mathrm{in}}$ (dB) | $\hat a$ | AE | APE | MSE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | 130 | 0.1 | −59.18164 | 0.0969737 | 0.0030263 | 0.030263 | 6.9 × 10⁻⁸ |
| (2) | 130 | 0.12 | −57.59802 | 0.1161813 | 0.0038187 | 0.031823 | 1.1 × 10⁻⁷ |
| (3) | 140 | 0.15 | −55.95726 | 0.1527467 | 0.0027467 | 0.018311 | 5.3 × 10⁻⁸ |
| (4) | 140 | 0.22 | −52.63063 | 0.2215733 | 0.0015733 | 0.007151 | 1.7 × 10⁻⁸ |
| (5) | 150 | 0.3 | −50.25632 | 0.3005346 | 0.0005346 | 0.001782 | 1.9 × 10⁻⁹ |
| (6) | 150 | 0.4 | −47.75755 | 0.3982338 | 0.0017662 | 0.004415 | 2.0 × 10⁻⁸ |
| (7) | 150 | 0.5 | −45.81935 | 0.4988764 | 0.0011236 | 0.002247 | 8.2 × 10⁻⁹ |
| (8) | 180 | 0.8 | −42.65903 | 0.7941931 | 0.0058069 | 0.007259 | 1.9 × 10⁻⁷ |
| (9) | 200 | 1.1 | −40.32545 | 1.1067487 | 0.0067487 | 0.006135 | 2.2 × 10⁻⁷ |
| (10) | 200 | 1.5 | −37.63148 | 1.5022624 | 0.0022624 | 0.001508 | 2.4 × 10⁻⁸ |
| (11) | 200 | 1.9 | −35.57823 | 1.9066362 | 0.0066362 | 0.003493 | 2.1 × 10⁻⁷ |
| (12) | 210 | 3.6 | −30.26097 | 3.5552166 | 0.0447834 | 0.012440 | 9.0 × 10⁻⁶ |

To demonstrate the stability of the model presented in this paper, we performed Monte Carlo simulations consisting of 50 runs on the same data and the experimental results were averaged.

From Table 1 and Figure 3, we can see that the convergence rate is very fast: the iterations for all 12 data sets in the experiment reach the given accuracy within 20 steps. Second, both the absolute error (AE) and the absolute percentage error (APE) of the pulse signal amplitude are very small, and the mean square error (MSE) of the recovered signal is less than 10⁻⁵. As can be seen from Table 2, the model proposed in this paper is very stable: for these 12 data sets, the amplitude of the recovered pulse signal shows no appreciable difference between a single experiment and 50 Monte Carlo simulation runs. The standard deviation of the results of the 50 Monte Carlo runs is less than 0.02, and the standard deviation (std) increases with the pulse signal amplitude.


Table 2: Means and standard deviations of $\hat a$ over 50 Monte Carlo runs.

| No. | $T$ | $a$ | Mean of $\hat a$ | Standard deviation (std) |
| --- | --- | --- | --- | --- |
| (1) | 130 | 0.1 | 0.09665155 | 0.001731646 |
| (2) | 130 | 0.12 | 0.1161727 | 0.002553445 |
| (3) | 140 | 0.15 | 0.1497219 | 0.001991941 |
| (4) | 140 | 0.22 | 0.2191495 | 0.001827229 |
| (5) | 150 | 0.3 | 0.3030018 | 0.001105854 |
| (6) | 150 | 0.4 | 0.4028839 | 0.001321245 |
| (7) | 150 | 0.5 | 0.5030558 | 0.001121901 |
| (8) | 180 | 0.8 | 0.7990241 | 0.001515569 |
| (9) | 200 | 1.1 | 1.102412 | 0.001636155 |
| (10) | 200 | 1.5 | 1.502059 | 0.002687494 |
| (11) | 200 | 1.9 | 1.901625 | 0.004017169 |
| (12) | 210 | 3.6 | 3.563963 | 0.011684620 |

5.2. Example 2: Prediction Comparison of Different Models

In this experiment, different models were used to fit the same set of observed data, with pulse amplitude 2.5 and period 400. The SNR of the observed data is −36.42872 dB. The chaotic noise and original observation signal are shown in Figure 4. The models fitted to the reconstructed original data are the Jordan neural network (JNN), Elman neural network (ENN), backpropagation neural network (BPNN), support vector machine (SVM), and multilayer perceptron (MLP).

From Table 3 and Figure 5, the Jordan neural network has the smallest absolute error between the estimated amplitude and the real value, while the MLP has the largest. It can be seen from Figures 6(a), 6(c), and 6(e) that the fitted values of the original signal by the Jordan neural network are significantly closer to the real values than those of the support vector machine (SVM) and the multilayer perceptron (MLP). The prediction error graphs of Figures 6(b), 6(d), and 6(f) show that the prediction error of the Jordan neural network is relatively large only at the time points carrying the pulse signal, where the error is close to the pulse amplitude, while the error at the remaining time points is relatively small. The support vector machine and multilayer perceptron have large prediction errors throughout.


Table 3: Estimation results of different models ($a = 2.5$).

| Model | $a$ | Mean of $\hat a$ | AE | APE | std |
| --- | --- | --- | --- | --- | --- |
| JNN | 2.5 | 2.501952 | 0.001952 | 0.0007808 | 0.004211562 |
| BPNN | 2.5 | 2.503297 | 0.003297 | 0.0013188 | 0.006069684 |
| ENN | 2.5 | 2.503410 | 0.003410 | 0.0013640 | 0.005308634 |
| SVM | 2.5 | 2.387866 | 0.112134 | 0.0448536 | 0.010575260 |
| MLP | 2.5 | 2.969629 | 0.469629 | 0.1878516 | 0.091993390 |

5.3. Example 3: Jordan Neural Network Compared with Backpropagation Neural Network

$\mathrm{SNR}_{\mathrm{out}}$ is the signal-to-noise ratio after extracting the pulse signal. It is defined as the ratio of the variance of the recovered pulse signal $\hat s(t)$ to the variance of the residual $r(t)$:

$$\mathrm{SNR}_{\mathrm{out}} = 10 \log_{10} \frac{\mathrm{Var}(\hat s)}{\mathrm{Var}(r)} \ \mathrm{dB}.$$

Figure 7 takes the input SNR as the horizontal axis and the output SNR as the vertical axis. The experimental data are the 12 data sets from experiment 1, and the backpropagation neural network estimation is carried out on the same basis. It can be seen from Figure 7 that the output SNR increases monotonically with the input SNR, although the growth slows down as the input SNR increases. It can also be seen that the output SNR of the Jordan neural network is 10–20 dB higher than that of the backpropagation neural network.

5.4. Discussion

This section reports the performance of the profile method combined with the Jordan neural network in estimating pulse signals against a chaotic background. The purpose of the experiments was to evaluate whether the JNN can maintain prediction quality compared with conventional methods.

The results show that the method proposed in this paper not only recovers the weak pulse signal in the chaotic signal well but also has good stability. We evaluate the breadth of applicability of the proposed model by comparing experimental data with different SNRs. Table 1 shows that the estimable SNR range of the proposed model is −60 dB to −30 dB. Within this range, the model can effectively estimate the pulse signal amplitude; its absolute error (AE) ranges from about 0.0005 to 0.045, and its MSE remains below 10⁻⁵.

The results report the estimation of pulse signals by different neural networks under a chaotic noise background, as well as the mean amplitude and standard deviation (std) of 50 test runs in Table 3. We can also see from Table 3 that the two smallest standard deviations, 0.00421 and 0.00531, come from the Jordan neural network and the Elman neural network, respectively. Both the Jordan neural network and the Elman neural network are feedback neural networks; the difference between them is the layer to which the context layer connects. Compared with [35], we found that the Jordan neural network fits the time series better than the Elman neural network.

A major innovation of this method is to use the Jordan neural network to fit the chaotic noise background and then estimate the pulse signal from the residual by combining the profile method. Compared with other neural networks, this method is more accurate, and the overall training time is shorter.

6. Conclusions

In this paper, we are interested in the weak pulse signal in a chaotic background. Based on short-term predictability and the Takens theorem, we provide an algorithm for estimating it directly: the Jordan neural network is used to fit the chaotic time series, and the one-step prediction error is obtained; then, starting from this error, the single-point jump signal model is attached, and the amplitude of the pulse signal is estimated by the profile least squares method, achieving prediction of the pulse signal under the chaotic background. The following conclusions can be drawn from the experimental results. The model proposed in this paper can predict the weak pulse signal in the chaotic background, and the results of experiment 1 show that the prediction accuracy is high. The results of experiment 2 show that the Jordan neural network is significantly better than the other compared neural networks in fitting the nonlinear dynamic system, and its absolute percentage error is the smallest. In future work, we can try to improve the method in two aspects so that it generalizes well in most cases and can be applied to relevant practical problems. Firstly, we can try to estimate the amplitude of the pulse signal without knowing its period. Secondly, the Jordan neural network can be trained by other optimization methods and thereby improved.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Fundamental and Advanced Research Project of CQ CSTC of China (Grant no. cstc2018jcyjAX0464).

References

  1. E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963. View at: Publisher Site | Google Scholar
  2. T. Jin and H. Zhang, “Statistical approach to weak signal detection and estimation using Duffing chaotic oscillators,” Science China Information Sciences, vol. 54, no. 11, pp. 2324–2337, 2011. View at: Publisher Site | Google Scholar
  3. L. Su, H. Sun, J. Wang, and L. Yang, “Detection and estimation of weak pulse signal in chaotisc background noise,” Acta Physica Sinica, vol. 66, no. 9, Article ID 090503, 2017. View at: Google Scholar
  4. L. Su and C. Li, “Extracting narrow-band signal from a chaotic background with LLVCR,” Wireless Personal Communications, vol. 96, no. 2, pp. 1907–1927, 2017. View at: Publisher Site | Google Scholar
  5. C. Li and L. Su, “Extracting harmonic signal from a chaotic background with local linear model,” Mechanical Systems and Signal Processing, vol. 84, pp. 499–515, 2017. View at: Publisher Site | Google Scholar
  6. X. Wang, “Multivariate chaotic time series prediction using multiple kernel extreme learning machine,” Acta Physica Sinica, vol. 64, no. 7, Article ID 070504, 2015. View at: Google Scholar
  7. X. Wu and Y. Wang, “Extended and Unscented Kalman filtering based feedforward neural networks for time series prediction,” Applied Mathematical Modelling, vol. 36, no. 3, pp. 1123–1131, 2012. View at: Publisher Site | Google Scholar
  8. L. Su, H. Sun, and C. Li, “LL-P-KF hybrid algorithm for detecting and recovering sinusoidal signal in strong chaotic noise,” Acta Electronica Sinica, vol. 45, no. 4, pp. 837–843, 2017.
  9. L. Su, L. Deng, W. Zhu, and S. Zhao, “Detection and extraction of weak pulse signals in chaotic noise with PTAR and DLTAR models,” Mathematical Problems in Engineering, vol. 2019, Article ID 4842102, 12 pages, 2019.
  10. X. Jiang, “Prediction of hydrologic chaotic time series using Volterra-NLMS adaptive filter,” Journal of Applied Statistics and Management, vol. 34, no. 3, pp. 434–441, 2015.
  11. M. Han, Y. Liu, Z. Shi, and M. Xiang, “The study of chaotic noise reduction method with improved local projection,” Journal of System Simulation, vol. 19, no. 2, pp. 364–368, 2007.
  12. Y. Wang, C. Shi, G. Qian, and W. Wang, “Prediction of chaotic time series based on the fractional-order maximum correntropy criterion algorithm,” Acta Physica Sinica, vol. 67, no. 1, Article ID 018401, 2018.
  13. H. Lu, D. Li, and H. Sun, “Prediction for chaotic time series of optimized BP neural network based on PSO,” Computer Engineering and Applications, vol. 51, no. 2, pp. 224–229, 2015.
  14. L. Yin, Y. He, X. Dong, and Z. Lu, “Adaptive chaotic prediction algorithm of RBF neural network filtering model based on phase space reconstruction,” Journal of Computers, vol. 8, no. 6, pp. 1449–1455, 2013.
  15. S. Li, Y. Zhu, C. Xu, and Z. Zhou, “Study of personal credit evaluation method based on PSO-RBF neural network model,” American Journal of Industrial and Business Management, vol. 3, no. 4, pp. 429–434, 2013.
  16. Y.-Y. Fu, C.-J. Wu, J.-T. Jeng, and C.-N. Ko, “ARFNNs with SVR for prediction of chaotic time series with outliers,” Expert Systems with Applications, vol. 37, no. 6, pp. 4441–4451, 2010.
  17. L. Han, L. Ding, and H.-P. Ren, “Chaos control based on least square support vector machines,” Acta Physica Sinica, vol. 54, no. 9, pp. 4019–4024, 2005.
  18. X. Zhou, L. Liao, M. Zhang, and Q. Sheng, “Speech recognition based on fuzzy neural network and chaotic differential evolution algorithm,” Journal of Information and Computational Science, vol. 12, no. 14, pp. 5451–5458, 2015.
  19. R. Chandra and M. Zhang, “Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction,” Neurocomputing, vol. 86, pp. 116–123, 2012.
  20. R. Chandra, “Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 12, pp. 3123–3136, 2015.
  21. Z. Shi and M. Han, “Support vector echo-state machine for chaotic time-series prediction,” IEEE Transactions on Neural Networks, vol. 18, no. 2, pp. 359–372, 2007.
  22. R. Chandra, “Memetic cooperative coevolution of Elman recurrent neural networks,” Soft Computing, vol. 18, no. 8, pp. 1549–1559, 2013.
  23. M. I. Jordan, “Serial order: a parallel distributed processing approach,” Tech. Rep. No. 8604, Institute for Cognitive Science, University of California, San Diego, CA, USA, 1986.
  24. P. Peng, W. Zhang, Y. Zhang, Y. Xu, H. Wang, and H. Zhang, “Cost sensitive active learning using bidirectional gated recurrent neural networks for imbalanced fault diagnosis,” Neurocomputing, vol. 407, pp. 232–245, 2020.
  25. S. Jeon and J. Moon, “Malware-detection method with a convolutional recurrent neural network using opcode sequences,” Information Sciences, vol. 535, pp. 1–15, 2020.
  26. R. Gao, Y. Huo, S. Bao et al., “Multi-path X-D recurrent neural networks for collaborative image classification,” Neurocomputing, vol. 397, pp. 48–59, 2020.
  27. S. Smyl, “A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting,” International Journal of Forecasting, vol. 36, no. 1, pp. 75–85, 2020.
  28. Y. Liu, C. Gong, L. Yang, and Y. Chen, “DSTP-RNN: a dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction,” Expert Systems with Applications, vol. 143, Article ID 113082, 2020.
  29. W. Wu, S. An, P. Guan, D. Huang, and B. Zhou, “Time series analysis of human brucellosis in mainland China by using Elman and Jordan recurrent neural networks,” BMC Infectious Diseases, vol. 19, no. 1, 2019.
  30. Z. Wang, G. Horng, and T. Hsu, “Heart sound signal recovery based on time series signal prediction using a recurrent neural network in the long short-term memory model,” The Journal of Supercomputing, 2019.
  31. Y. Zhang, Y. Wang, and G. Luo, “A new optimization algorithm for non-stationary time series prediction based on recurrent neural networks,” Future Generation Computer Systems, vol. 102, pp. 738–745, 2020.
  32. F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980, vol. 898 of Lecture Notes in Mathematics, pp. 366–381, Springer, Berlin, Germany, 1981.
  33. L. Cao, “Practical method for determining the minimum embedding dimension of a scalar time series,” Physica D: Nonlinear Phenomena, vol. 110, no. 1-2, pp. 43–50, 1997.
  34. J. Fan and Q. Yao, Nonlinear Time Series: Nonparametric and Parametric Methods, Springer-Verlag, Heidelberg, Germany, 2003.
  35. N. Mohana Sundaram and N. Sivanandam, “A hybrid Elman neural network predictor for time series prediction,” International Journal of Engineering & Technology, vol. 7, no. 2, pp. 159–163, 2018.

Copyright © 2020 Liyun Su and Xiu Ling. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
