Mathematical Problems in Engineering

Volume 2015, Article ID 687313, 11 pages

http://dx.doi.org/10.1155/2015/687313

## Parallel kd-Tree Based Approach for Computing the Prediction Horizon Using Wolf’s Method

^{1}Universidad de Magallanes, Avenida Bulnes 01855, Casilla Postal 113-D, Punta Arenas, Chile
^{2}Universidad de Castilla-La Mancha, Campus Universitario, s/n, 02071 Albacete, Spain

Received 20 July 2015; Revised 2 October 2015; Accepted 11 October 2015

Academic Editor: Meng Du

Copyright © 2015 J. J. Águila et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In different fields of science and engineering, a model of a given underlying dynamical system can be obtained by means of measurement data records called time series. This model becomes very important to understand the original system behaviour and to predict the future values of that system. From the model, parameters such as the prediction horizon can be computed to obtain the point where the prediction becomes useless. In this work, a new parallel kd-tree based approach for computing the prediction horizon is presented. The parallel approach uses the maximal Lyapunov exponent, which is computed by Wolf’s method, as an estimator of the prediction horizon.

#### 1. Introduction

In a large number of applications in science and engineering, the main purpose of studying a given dynamical system is to be able to obtain a prediction in order to carry out preventive actions. For example, in the analysis of electrocardiogram signals, the information obtained through that study could be used to prevent heart-related diseases; in air temperature regulation, the prediction information could be used to manage energy efficiently, either in a building or in a crop; moreover, breakdowns in wind turbine generators caused by a broken blade could be prevented by applying the same principles to the study of wind speed. However, in all these cases, and in most real-world applications based on dynamical systems, there is no mathematical model or description of the underlying dynamical system. It is precisely in this context that nonlinear time series analysis plays an important role. Through this kind of analysis, from the time series of one single physical variable, for example, voltage, air temperature, or wind speed, it is possible to obtain parameters which describe the behaviour of the underlying dynamical system represented by that time series.

This work deals with the problem of computing an estimate of the prediction horizon, that is, the point where the prediction becomes useless. To calculate this value, we use the parameter known as the maximal Lyapunov exponent. Within nonlinear time series analysis, there are several methods in the literature to obtain this parameter: Wolf et al. [1], Rosenstein et al. [2], and Kantz [3]. The advantage of Wolf's method is its conceptually simple and direct approach, which converges to the maximal Lyapunov exponent value without subsequent computation. However, the implementation presented in [1] uses a brute force approach, which is sufficient for short time series but becomes very expensive in terms of execution time for larger data records. As far as the authors know, there is no implementation of the method based on other approaches.

Therefore, the main goal of this work is to reduce the runtime of Wolf's method, bearing in mind the time requirements of real-world applications for the subsequent prediction stage. Thus, to improve the runtime of the method, we developed a parallel approach by introducing a kd-tree data structure into the implemented code. In order to validate the results, four time series have been considered as case studies. These data records of one single physical variable were obtained from the Lorenz system, the Hénon map, the Rössler system, and an electrocardiogram signal.

The parallel approach was developed as part of a multidisciplinary framework, within which a group of computer scientists and physicists is applying high performance programming to nonlinear time series analysis algorithms. The paper is structured as follows. Wolf's method is introduced in Section 2. The parallel approach is introduced in Section 3. The experimental results are shown in Section 4, and the conclusions and future work are presented in Section 5.

#### 2. Wolf’s Method

In the context of nonlinear time series analysis, we focus mainly on time series from deterministic dynamical systems that exhibit sensitive dependence on initial conditions. To quantify this sensitivity, the maximal Lyapunov exponent $\lambda_{\max}$ can be used. This exponent describes the average rate at which predictability is lost [4]. Thus, a crucial fact is that the prediction horizon $T_p$ can be defined by

$$T_p \approx \frac{1}{\lambda_{\max}}.$$

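Taking $T_p$ as the inverse of the maximal Lyapunov exponent, the estimator can be sketched as follows (a minimal illustration; the function name is ours, and the returned horizon is expressed in the same time units as the exponent):

```python
import math

def prediction_horizon(lmax):
    """Prediction horizon estimated as the inverse of the maximal
    Lyapunov exponent lmax (in inverse time units)."""
    if lmax <= 0:
        # no exponential divergence: predictability is not lost
        return math.inf
    return 1.0 / lmax
```

For instance, an exponent of 0.5 nats per time unit yields a horizon of 2 time units.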
Before computing $\lambda_{\max}$, let us introduce the basic concepts related to nonlinear time series analysis. Let $\{x_1, x_2, \ldots, x_N\}$ be a time series, that is, a sequence of scalar values of one single physical variable, typically measured at successive times spaced at uniform intervals. From these measured values, points in delay coordinates, also known as delay vectors, can be constructed according to

$$\mathbf{y}_i = \left(x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}\right),$$

where $\tau$ is the embedding delay and $m$ is the embedding dimension (Fraser and Swinney [5]). Note that the number of delay vectors constructed in this manner is $N - (m-1)\tau$.
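
The construction of delay vectors above can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build delay vectors y_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}).
    Returns an array of shape (N - (m-1)*tau, m)."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau  # number of constructible delay vectors
    if n <= 0:
        raise ValueError("time series too short for this (m, tau)")
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
```

For example, embedding a series of length 10 with $m = 3$ and $\tau = 2$ yields $10 - 2 \cdot 2 = 6$ delay vectors.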

Takens’ embedding theorem [6] states that, for a large enough embedding dimension $m \geq 2d + 1$, where $d$ is the dimension of the underlying attractor, the delay vectors yield a phase space that has exactly the same properties as the one formed by the original variables of the system. The mutual information method [5] can be used to obtain a proper value of $\tau$, and the false nearest neighbors method [7] can be used to determine the minimal embedding dimension $m$.
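
As a rough sketch of the false nearest neighbors idea (our simplified version of the ratio criterion of [7]; the function name and the tolerance `rtol` are illustrative), a neighbor found in dimension $m$ is declared false when the extra coordinate available in dimension $m+1$ stretches the distance disproportionately:

```python
import numpy as np

def false_nearest_fraction(x, m, tau, rtol=15.0):
    """Fraction of false nearest neighbors at embedding dimension m.
    A neighbor is 'false' when the coordinate added in dimension m+1
    increases the distance by more than rtol times."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m * tau  # points embeddable in both dimensions m and m+1
    Y = np.array([x[i:i + m * tau:tau] for i in range(n)])
    false = 0
    for i in range(n):
        d = np.linalg.norm(Y - Y[i], axis=1)
        d[max(0, i - tau):i + tau + 1] = np.inf  # skip temporal neighbors
        j = int(np.argmin(d))                    # nearest neighbor in dim m
        extra = abs(x[i + m * tau] - x[j + m * tau])
        if d[j] > 0 and extra / d[j] > rtol:
            false += 1
    return false / n
```

For a clean periodic signal, the fraction should drop sharply once the embedding dimension is large enough to unfold the trajectory.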

For the reconstructed $m$-dimensional phase space obtained from a single variable, we can compute the Lyapunov exponents $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$. These exponents determine the rate of convergence or divergence of two initially nearby trajectories in that phase space. It is well established [8, 9] that if one or more of these exponents is larger than zero, the system exhibits sensitive dependence on initial conditions. In this case, two arbitrarily close trajectories of the system will diverge exponentially, eventually ending up in completely different areas of the phase space as time progresses.
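
This divergence can be observed directly on the Hénon map, one of the case studies used later: iterating two trajectories whose initial conditions differ by $10^{-10}$ quickly drives them apart (a small illustration, not part of the implementation under study):

```python
import numpy as np

def henon_step(x, y, a=1.4, b=0.3):
    """One iteration of the Hénon map with the classic parameters."""
    return 1.0 - a * x * x + y, b * x

# two trajectories whose initial conditions differ by 1e-10
x1, y1 = 0.1, 0.3
x2, y2 = 0.1 + 1e-10, 0.3
separations = []
for _ in range(40):
    x1, y1 = henon_step(x1, y1)
    x2, y2 = henon_step(x2, y2)
    separations.append(np.hypot(x1 - x2, y1 - y2))
# the separation grows roughly exponentially until it saturates
# at the size of the attractor
```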

For our purpose, as mentioned in the Introduction, computing only the maximal Lyapunov exponent is sufficient. To calculate this parameter we use the method introduced by Wolf et al. [1], referred to as Wolf's method. The method works as follows. In the embedding dimension $m$, a point $\mathbf{y}_{t_0}$ is selected as the starting point, also known as the fiducial point, which by default is $\mathbf{y}_1$. A nearest neighbor search is performed to obtain a point $\mathbf{y}_{t_0'}$ which satisfies the relation $L = \|\mathbf{y}_{t_0} - \mathbf{y}_{t_0'}\| < \varepsilon$, where $\|\cdot\|$ is the Euclidean norm of a vector and $\varepsilon$ is a small constant. Then, both points are iterated forward in time for a fixed evolution time $t_{\mathrm{evol}}$, which should be a couple of times larger than the embedding delay $\tau$, but not much larger than that. If the dynamical system has sensitive dependence on initial conditions, the distance $L'$ after the evolved time will be larger than the initial $L$; that is, $L' > L$. But if the dynamical system does not have sensitive dependence on initial conditions, then $L' \leq L$. After the evolution, a nearest neighbor search is performed to find a new point, the replacement point, whose distance to the evolved fiducial point is less than $\varepsilon$. The angular separation between the vector from the evolved fiducial point to the evolved neighbor and the vector from the evolved fiducial point to the candidate replacement should be small. If a point satisfying these constraints is found, the old neighbor is replaced by it. This procedure is repeated until the fiducial point reaches the last point of the trajectory; the path followed by the fiducial point is called the fiducial trajectory. Finally, the maximal Lyapunov exponent can be calculated according to

$$\lambda_{\max} = \frac{1}{t_M - t_0} \sum_{k=1}^{M} \ln \frac{L'(t_k)}{L(t_{k-1})},$$

where $M$ is the total number of replacement steps and $\ln$ denotes the natural logarithm. Figure 1 shows a partial snapshot of the trajectories followed by the computation, and Algorithm 1 depicts the algorithmic notation of Wolf's method. The algorithm is called fet1 (fixed evolution time) as in the open source code presented by Wolf et al. [1].
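
The fixed-evolution-time procedure can be sketched as follows. This is a hedged simplification, not the authors' implementation: it uses a brute-force nearest neighbor search, the Euclidean norm, and omits the angular replacement criterion; the function name `wolf_lmax` and the parameter `eps_min` are ours.

```python
import numpy as np

def wolf_lmax(x, m, tau, t_evol, eps_min=1e-3):
    """Simplified sketch of the fixed-evolution-time scheme (fet1):
    brute-force neighbor search, no angular replacement criterion.
    Returns an exponent in nats per sample."""
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau
    Y = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n_vec)])
    limit = n_vec - t_evol            # neighbors must survive the evolution

    def nearest(idx):
        """Closest point to Y[idx] at least eps_min away, excluding a
        temporal window around the fiducial point."""
        d = np.linalg.norm(Y[:limit] - Y[idx], axis=1)
        mask = d > eps_min
        mask[max(0, idx - tau):min(limit, idx + tau + 1)] = False
        cand = np.flatnonzero(mask)
        return int(cand[np.argmin(d[cand])]) if cand.size else None

    total, steps = 0.0, 0
    i = 0                             # fiducial point index
    while i + t_evol < n_vec:
        j = nearest(i)                # (replacement) neighbor of fiducial
        if j is None:
            break
        L0 = np.linalg.norm(Y[i] - Y[j])           # distance before
        L1 = np.linalg.norm(Y[i + t_evol] - Y[j + t_evol])  # after
        if L0 > 0 and L1 > 0:
            total += np.log(L1 / L0)
            steps += 1
        i += t_evol                   # advance along the fiducial trajectory
    return total / (steps * t_evol) if steps else 0.0

# small demo on the Hénon map, one of the case studies
xs = np.empty(2000)
hx, hy = 0.1, 0.3
for k in range(2000):
    hx, hy = 1.0 - 1.4 * hx * hx + hy, 0.3 * hx
    xs[k] = hx
lam = wolf_lmax(xs, m=2, tau=1, t_evol=2)
# lam should be positive for this chaotic map
```

The brute-force `nearest` loop above is precisely the part whose cost motivates the kd-tree based parallel approach of the following sections.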