Computational Intelligence and Neuroscience

Volume 2015, Article ID 145874, 10 pages

http://dx.doi.org/10.1155/2015/145874

## Impact of Noise on a Dynamical System: Prediction and Uncertainties from a Swarm-Optimized Neural Network

Departamento de Física y Astronomía, Universidad de La Serena, Casilla 554, La Serena, Chile

Received 27 April 2015; Revised 15 July 2015; Accepted 27 July 2015

Academic Editor: Saeid Sanei

Copyright © 2015 C. H. López-Caraballo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to short-term prediction of the Mackey-Glass chaotic time series. The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via the study of the chaotic behaviour recovered from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called the *stochastic* hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute the uncertainties of predictions for noisy Mackey-Glass chaotic time series. We thus studied the impact of noise for several cases with a white noise level from 0.01 to 0.1.

#### 1. Introduction

The prediction of time series currently plays an important role in many fields of practical application, such as engineering, biology, physics, and meteorology. In particular, owing to their dynamical properties, the analysis and prediction of chaotic time series have been of interest to the scientific community. Chaotic time series are usually modeled by delay-differential equations; standard examples are the Mackey-Glass system [1] and the Ikeda equation [2] (for more examples, see [3]). Many methods have been used in chaotic time series analysis [4]. However, in the last decades, different types of artificial neural networks (ANNs) have been widely used for forecasting chaotic time series, for example, the backpropagation algorithm [5], radial basis functions [6], and recurrent networks [7].
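As a point of reference for the series studied throughout the paper, the Mackey-Glass system is the delay-differential equation $dx/dt = a\,x(t-\tau)/[1 + x(t-\tau)^{10}] - b\,x(t)$. The following is a minimal sketch of its numerical integration with a simple Euler scheme; the parameter values $a=0.2$, $b=0.1$, $\tau=17$ are the standard chaotic regime, while the step size and constant initial history are illustrative choices, not necessarily those used by the authors.

```python
# Minimal Euler integration of the Mackey-Glass delay-differential equation:
#   dx/dt = a*x(t - tau) / (1 + x(t - tau)**10) - b*x(t)
# Parameters a=0.2, b=0.1, tau=17 give the standard chaotic regime.

def mackey_glass(n_points, a=0.2, b=0.1, tau=17.0, x0=1.2, dt=1.0):
    """Return a list with n_points samples of the Mackey-Glass series."""
    delay = int(round(tau / dt))       # delay expressed in integration steps
    history = [x0] * (delay + 1)       # constant initial history x(t<=0) = x0
    series = []
    for _ in range(n_points):
        x_t = history[-1]              # current value x(t)
        x_lag = history[0]             # delayed value x(t - tau)
        dx = a * x_lag / (1.0 + x_lag ** 10) - b * x_t
        x_next = x_t + dt * dx         # forward Euler step
        history.append(x_next)
        history.pop(0)                 # keep the delay buffer a fixed length
        series.append(x_next)
    return series
```

A finer step size (e.g., `dt=0.1` with subsampling) would give a more accurate trajectory; the sketch above only illustrates the structure of the delayed feedback term.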

On the other hand, the analysis of real-life time series requires taking into account the error propagation of input uncertainties. The observed data could be contaminated by different types of instrumental noise, such as white noise or noise proportional to the signal (the latter arising mainly from instrumental calibration). In the modeling of chaotic time series, the impact of noise can be treated as an errors-in-variables problem, where the noise is propagated into the prediction model. In the literature, the impact of noise on chaotic time series prediction has been barely considered. We can find studies where the algorithms were tested from a theoretical point of view (e.g., see [8–12]) and works where the implementation was applied to real-life time series (e.g., see [9, 13, 14]). In addition, some authors have proposed modifications to the standard methods in order to improve the prediction performance in the presence of noise [9, 14].
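A contaminated series of the kind discussed above can be produced by adding zero-mean Gaussian ("white") noise to a clean signal. In the sketch below, the noise level is interpreted as the standard deviation of the added noise relative to the standard deviation of the series itself; this interpretation, and the function name, are illustrative assumptions rather than the paper's exact definition.

```python
# Sketch: contaminate a clean series with zero-mean Gaussian white noise.
# Assumption: "level" scales the noise sigma by the series' own standard
# deviation, which may differ from the definition used in the paper.
import random
import statistics

def add_white_noise(series, level, seed=0):
    """Return a copy of `series` with N(0, level * std(series)) noise added."""
    rng = random.Random(seed)                      # seeded for reproducibility
    sigma = level * statistics.pstdev(series)      # noise standard deviation
    return [x + rng.gauss(0.0, sigma) for x in series]
```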

In this work, we used the Mackey-Glass chaotic time series to study short-term prediction with an artificial neural network optimized with a particle swarm algorithm (ANN+PSO). The method was applied to noiseless and noisy chaotic time series. To carry out the error propagation of the input noise, this hybrid algorithm was complemented with a Gaussian stochastic procedure to compute a new estimator of the predictions and their uncertainties. Note that ANNs have been used in combination with PSO in several applications, principally feed-forward neural network training [15–18], design of recurrent neural networks [19], design of radial basis function networks [20], and neural network control for nonlinear processes [21]. In addition, several versions of PSO are available in the literature (e.g., see the reviews [22–24]), but our application uses a standard PSO with inertia weight [25]. The use of a PSO with inertia weight is based on the following reasons: (1) this version of PSO is easy to understand and implement owing to its simple concept and learning strategy; (2) as pointed out in [26], the PSO with inertia weight [25] and the PSO with constriction factor [27] are mathematically equivalent, and the PSO with constriction factor can be considered a special case of the PSO with inertia weight [22, 26] (note that this equivalence extends to other improved PSO algorithms that include a varying inertia weight schedule); (3) the inertia weight PSO algorithm is quite stable under population changes [23]; (4) the advantages and disadvantages of PSO variants depend on the problem to be solved [22–24]; (5) as a first approach to the study of noise effects on dynamical systems using an ANN combined with an inertia weight PSO algorithm, the present study may motivate and help researchers working in the field of evolutionary algorithms to develop new hybrid models or to apply other existing PSO models to this problem. To the best of the authors' knowledge, there is no application to forecasting noisy chaotic time series such as the one presented here, using a hybrid method that combines an ANN with a PSO algorithm.

Organization of this paper is as follows. In Section 2, we present a detailed description of the hybrid ANN+PSO method. Sections 3 and 4 present the simulation, algorithm implementation, and the principal results obtained for the forecasting of noiseless chaotic time series and noisy time series, respectively. Finally, conclusions are given in Section 5.

#### 2. Hybrid ANN+PSO Algorithm

Artificial neural networks (ANNs) are similar to biological neural networks in performing functions collectively and in parallel using connected nodes. Thus, ANNs are a family of biologically inspired statistical learning algorithms.

In this study, we consider one of the most successful and frequently used types of neural networks: a multilayer feed-forward neural network, normally trained with a backpropagation learning algorithm (gradient descent on the error). Here, the ANN was implemented with the standard backpropagation training replaced by particle swarm optimization (PSO).

PSO is a population-based optimization tool, where the system is initialized with a population of random particles and the algorithm searches for optima by updating generations [28]. In each iteration, the velocity of each particle is calculated according to the following formula [29]:

$$v_i(k+1) = w\,v_i(k) + c_1 r_1 \left[p_i - x_i(k)\right] + c_2 r_2 \left[p_g - x_i(k)\right], \tag{1}$$

where $x$ and $v$ denote a particle position and its corresponding velocity in a search space, respectively. $k$ is the current step number, $w$ is the inertia weight, $c_1$ and $c_2$ are the acceleration constants, and $r_1$, $r_2$ are elements from two random sequences in the range $(0,1)$. $x_i(k)$ is the current position of the particle, $p_i$ is the best one of the solutions that this particle has reached, and $p_g$ is the best one of the solutions that all the particles have reached. In general, the value of each component in $v$ can be clamped to the range $[-v_{\max}, v_{\max}]$ to control excessive roaming of particles outside the search space [28, 29]. After calculating the velocity, the new position of each particle is

$$x_i(k+1) = x_i(k) + v_i(k+1). \tag{2}$$
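The inertia-weight velocity and position updates just described can be sketched for a single particle as follows; the values of $w$, $c_1$, $c_2$, and $v_{\max}$ below are illustrative defaults, not the tuned settings selected later in the paper.

```python
# Sketch of one iteration of standard inertia-weight PSO for a single
# particle.  w, c1, c2 and v_max are illustrative values only.
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.49, c2=1.49,
             v_max=0.5, rng=random):
    """Update one particle's velocity and position (lists of floats)."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, p_best, g_best):
        r1, r2 = rng.random(), rng.random()
        # Velocity update: inertia + cognitive + social terms
        vel = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        vel = max(-v_max, min(v_max, vel))   # clamp to [-v_max, v_max]
        new_v.append(vel)
        # Position update
        new_x.append(xi + vel)
    return new_x, new_v
```

Note that when a particle already sits at both its personal best and the global best, the cognitive and social terms vanish and only the inertia term remains.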

The procedure to calculate the output values, using the input values, is described in detail in [30].

The net inputs ($h$) are calculated for the hidden neurons coming from the input neurons. In the case of a neuron $j$ in the hidden layer, one has

$$h_j = \sum_{i} w_{ji}\,x_i + b_j,$$

where $x$ is the vector of the training inputs, $w_{ji}$ is the weight of the connection between input neuron $i$ and hidden neuron $j$, and the term $b_j$ corresponds to the bias of neuron $j$ of the hidden layer, reached in its activation. The PSO algorithm is very different from the traditional training methods [28]. Each neuron contains a position and a velocity. The position corresponds to the weight of a neuron, $w_{ji}$. The velocity is used to update the weight, $\Delta w_{ji}$. Starting from these inputs, the outputs ($y_j$) of the hidden neurons are calculated using a transfer function $f_j$ associated with the neurons of this layer:

$$y_j = f_j(h_j).$$

The transfer functions can be linear or nonlinear. We used one hidden layer with $f_j$ as a hyperbolic tangent function (*tansig*),

$$f_j(h_j) = \tanh(h_j) = \frac{e^{h_j} - e^{-h_j}}{e^{h_j} + e^{-h_j}},$$

and a linear function in the output layer. All the neurons of the ANN have an associated activation value for a given input pattern, and the algorithm continues by finding the error presented by each neuron, except those of the input layer. After finding the output values, the weights of all layers of the network are updated by PSO, using (1) and (2) [29]. The velocity is used to control how much the position is updated. On each step, PSO compares each weight using the data set. The network with the highest fitness is considered the global best. The other weights are updated based on the global best network rather than on their personal error or fitness [28]. In this paper, we used the mean square error (MSE) to determine the network fitness for the entire training set,

$$\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\left(y_n^{\mathrm{real}} - y_n^{\mathrm{calc}}\right)^2,$$

where $y_n^{\mathrm{real}}$ is the real data and $y_n^{\mathrm{calc}}$ is the calculated output value obtained from the normalized output of the network. This process was repeated for the total number of patterns in the training set. For a successful process, the objective of the algorithm is to update all the weights so as to minimize the total root mean squared error,

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(y_n^{\mathrm{real}} - y_n^{\mathrm{calc}}\right)^2}.$$
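The forward pass and the RMSE fitness just described can be sketched as follows: one hidden layer with a hyperbolic-tangent (tansig) activation feeding a single linear output neuron. The weight layout and function names are illustrative assumptions.

```python
# Sketch of the forward pass (tanh hidden layer, linear output neuron)
# and the RMSE fitness used to score a candidate weight set.
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """x: input vector; w_hidden: one weight row per hidden neuron.
    Returns the scalar network output."""
    # Net input h_j = sum_i w_ji * x_i + b_j, then tansig output y_j = tanh(h_j)
    hidden = [math.tanh(sum(w * xi for w, xi in zip(w_row, x)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    # Linear output layer
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

def rmse(targets, outputs):
    """Root mean squared error over the whole training set."""
    n = len(targets)
    return math.sqrt(sum((t - o) ** 2 for t, o in zip(targets, outputs)) / n)
```

In the hybrid algorithm, each PSO particle encodes one full weight set, and `rmse` over the training patterns plays the role of the fitness that the swarm minimizes.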

In PSO, the inertia weight $w$, the constants $c_1$ and $c_2$, the number of particles $N_p$, and the maximum particle speed $v_{\max}$ are the parameters to tune for a given application. An exhaustive trial-and-error procedure was therefore applied to tune the PSO+ANN parameters. First, the effect of the population size was analyzed for 25 to 100 individuals in the swarm. For other applications, some authors have shown that a larger swarm increases the number of function evaluations needed to converge to an error limit [31], whereas Shi and Eberhart [32] showed that the population size has hardly any effect on the performance of a swarm algorithm. Figure 1(a) shows that the best population size for this problem is 50 individuals. Next, the effect of $w$ was analyzed for values from 0.1 to 0.9. Figure 1(b) shows the values of $w$ that favoured the search of the particles and accelerated convergence: for a linearly decreasing inertia weight starting at 0.7 and ending at 0.5, the PSO+ANN presents good convergence. A usual choice also exists in the literature for the acceleration coefficients $c_1$ and $c_2$ [31]. The effect of varying these constants was evaluated for the commonly used values of $c_1$ and $c_2$, such as 1.49 and 2.00 [31, 32], and the best-converging combination was selected. Table 1 shows the selected parameters for this hybrid algorithm.
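The linearly decreasing inertia-weight schedule mentioned above (starting at 0.7 and ending at 0.5 over the run) amounts to a simple linear interpolation per iteration; the iteration-count parameter below is illustrative.

```python
# Sketch of a linearly decreasing inertia-weight schedule, from w_start at
# iteration 0 down to w_end at the final iteration.  max_it is illustrative.
def inertia_weight(it, max_it, w_start=0.7, w_end=0.5):
    """Inertia weight at iteration `it` of a run of length `max_it`."""
    return w_start - (w_start - w_end) * it / max_it
```

Each PSO iteration would call this schedule and pass the resulting $w$ into the velocity update of (1).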