Computational Intelligence and Neuroscience

Volume 2019, Article ID 9323482, 12 pages

https://doi.org/10.1155/2019/9323482

## Concurrent, Performance-Based Methodology for Increasing the Accuracy and Certainty of Short-Term Neural Prediction Systems

^{1}Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia
^{2}Innovation Centre of Advanced Technologies, Bulevar Nikole Tesle 61, Loc. 5, 18000 Niš, Serbia
^{3}Faculty of Economics, University of Niš, Trg Kralja Aleksandra Ujedinitelja 11, 18000 Niš, Serbia
^{4}Tigar Tyres, Nikole Pašića 213, 18300 Pirot, Serbia

Correspondence should be addressed to Miljana Milić; miljana.milic@elfak.ni.ac.rs

Received 28 December 2018; Revised 25 February 2019; Accepted 7 March 2019; Published 1 April 2019

Guest Editor: Vlado Delic

Copyright © 2019 Miljana Milić et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Accurate prediction of short time series with highly irregular behavior is a challenging task found in many areas of modern science. Such data fluctuations are not systematic and are hardly predictable. In recent years, artificial neural networks (ANNs) have been widely exploited for this purpose. Although it is possible to model the nonlinear behavior of short time series using ANNs, they are often unable to handle all events equally well. Therefore, alternative approaches have to be applied. In this study, a new, concurrent, performance-based methodology that combines the best-performing ANN topologies in order to decrease forecasting errors and increase forecasting certainty is proposed. The proposed approach is verified on three different data sets: the Serbian Gross National Income time series, the municipal traffic flow at a particular observation point, and the daily electric load consumption time series. It is shown that the method can significantly increase the forecasting accuracy of the individual networks, regardless of their topologies, which makes the methodology widely applicable. For a quantitative comparison of the accuracy of the proposed methodology with that of similar methodologies, a series of additional forecasting experiments, including state-of-the-art ARIMA modelling and a combination of ANN and linear regression forecasting, has been conducted.

#### 1. Introduction

Prediction is a process that uses data from the present and the past to estimate the future. The result of this process is information about probable future events and their effects and outcomes. Making good forecasts is essential for making good decisions and for planning in all areas of life. Although prediction cannot remove the uncertainties and difficulties of the future, it can increase the certainty and the level of preparedness for the challenges and environmental changes that future events bring.

The need for the development of prediction methods occurs in almost every area of life—technology, engineering, industry, science, politics, economy, business, sport, medicine, etc. Good forecasts can ensure lower costs of services and products, increased customer/client satisfaction, and a significant competitive advantage [1].

Every daily activity begins with planning, and planning begins with a prediction [2]. Prediction errors may have crucial implications for decision-making, profits and investment justification, risk assessment, alerting events, hard real-time systems’ actions, the timely handling of emergency health and medical conditions, etc. [3]. Because of that, decreasing the prediction error is an essential task for every forecasting expert, regardless of the prediction methods applied.

Prediction methods described in the literature can be roughly categorized into two large groups: traditional and modern. Each of them has advantages and disadvantages, and none is superior to all others across all possible evaluation criteria [4]. Traditional prediction methods try to extrapolate time series data using different models: exponential smoothing [5], linear or nonlinear regression [6–8], and simple (AR) or more complex autoregressive models (ARMA, ARIMA, and double seasonal ARIMA) [6, 9]. On the other hand, modern prediction methods that exploit artificial intelligence (AI) can model both nonlinear and linear structures of time series [10] and can achieve good forecasting accuracy. Such techniques use different topologies of artificial neural networks, fuzzy modelling, support vector machines, and genetic-simulated annealing algorithms to predict time series data [4, 11–14]. Different authors have shown that AI-based models frequently exhibit better predictive characteristics than models using standard multilinear regression [14]. Finally, both theoretical and empirical findings in the literature show that combining two or more different methods can be an effective and efficient way to improve forecasts and decrease the error [10]. Such hybrid methods are studied in [2, 3, 10, 15–17].
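As a minimal illustration of the traditional autoregressive approach mentioned above, the sketch below fits an AR(*p*) model by ordinary least squares in plain NumPy. This is only an illustrative sketch (the function names are our own, and the experiments in this paper use full ARIMA modelling rather than plain AR):

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a1*y_{t-1} + ... + ap*y_{t-p}
    by ordinary least squares; returns [c, a1, ..., ap]."""
    y = np.asarray(series, dtype=float)
    # Column k of the design matrix holds lag k+1 of the target.
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept column
    coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coeffs

def forecast_one_step(series, coeffs):
    """One-step-ahead forecast from the last p observations."""
    p = len(coeffs) - 1
    lags = np.asarray(series, dtype=float)[-1:-p - 1:-1]  # y_{t-1},...,y_{t-p}
    return coeffs[0] + np.dot(coeffs[1:], lags)
```

On data generated exactly by a linear recurrence, the least-squares fit recovers the generating coefficients, which makes the mechanics easy to check.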

Despite numerous ways to predict the future mathematically, there are many variables that cannot be reliably predicted. The causes of this limitation lie in the randomness of events and the lack of significant relationships in the data. When the factors considered while forecasting a certain variable are not well known or understood, the prediction becomes imprecise or mistaken. Sometimes there is simply not enough data about everything that affects the forecasted variable. The prediction process also relies on specific hypotheses; if these are set wrongly, due to bad judgment, i.e., human error, the prediction will be mistaken. Although forecasting is based on past events, no one can guarantee that history will repeat itself in the same way every time. Therefore, forecasts are subject to human error.

A time series can be defined as a sequence of numerical data occurring at regular intervals over a period of time, collected in successive order. Short time series are characterized by a lack of trend information, randomness, and periodicity, and forecasting them is a challenging problem [18]. Usually, time series whose sample length is very small are not suitable for generating statistically reliable forecasts. In this paper, we focus on such time series and their forecasting. We propose a new methodology that can be applied to irregular series. The methodology is applicable to all types and topologies of neural networks, and to similar AI-based forecasting methods, in order to improve their accuracy.

The usual step in the development of a forecasting ANN is to train many networks while changing the number of neurons in a particular layer; the ANN with the most accurate forecasting wins. Nevertheless, if we observe the forecasts of all obtained networks, we can see that different networks sometimes predict different directions of the trend change in the next forecasting step. At this point, one cannot determine which of them is correct. This is particularly noticeable when dealing with volatile data series. Therefore, incorporating more than one network in the forecasting decision could yield better predictions of future events. The methodology suggested in this paper improves the forecasting accuracy of the ANN in the sense that it concurrently exploits the several most accurate networks instead of only the winning one. In this way, both the forecasting accuracy and the confidence of the prediction can be significantly improved. The performance of the proposed method is verified on the example of the Serbian Gross National Income (GNI) data series, using the Feed Forward Accommodated for Prediction (FFAP) neural network topology. The results demonstrate higher forecasting accuracy compared to individual FFAP networks.
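The selection-and-combination idea described above can be sketched as follows. Note the hedge: averaging the forecasts of the *k* most accurate candidate networks is our simplifying assumption for illustration, not necessarily the paper's exact combination rule:

```python
import numpy as np

def combine_top_k(forecasts, val_errors, k=3):
    """Concurrent, performance-based combination (illustrative sketch):
    rank candidate networks by validation error and average the
    forecasts of the k most accurate ones.

    forecasts  -- one-step-ahead forecast of each candidate network
    val_errors -- validation error of each candidate (lower is better)
    Returns (combined forecast, indices of the selected candidates)."""
    order = np.argsort(val_errors)  # best (lowest error) first
    best = order[:k]
    return float(np.mean(np.asarray(forecasts, dtype=float)[best])), best
```

With four candidate forecasts and their validation errors, the three best are selected and averaged, which damps the disagreement between networks that predict opposite trend directions.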

The rest of the paper is organized as follows. In Section 2, the structure of the FFAP neural network topology is presented. The section that follows describes in detail the concurrent, best-performance-based methodology for increasing the accuracy of short time series FFAP forecasting. Three case studies are performed and analyzed: the Serbian Gross National Income time series, the municipal traffic flow at a particular observation point, and the daily electric load consumption time series; the forecasting results of the proposed methodology and of other state-of-the-art forecasting techniques and their combinations are given in Section 4. The obtained results are discussed in Section 5, while conclusions are summarized in the last section.

#### 2. FFAP Neural Networks

In general, neural network-based computational and forecasting methods developed from the desire to reveal, realize, and emulate the capability of the brain to process information [14]. The entire brain is composed of many neural networks that receive information from the surroundings, extract and recombine their relevant parts, and make the decisions about the needs of the organism. Artificial neural networks (ANN) emulate such abilities of the brain in order to realize complex nonlinear input-output transformations.

Consider a time series denoted by *y*_{i}, *i* = 1, 2, …, *N*. It is a set of observables of an undefined function *y*(*t*) that are taken at regular time intervals Δ*t*, so that *y*_{i} = *y*(*i*·Δ*t*). In the forecasting process, the historical data are used to determine the direction of future trends, while one-step-ahead forecasting implies the mathematical search for a function *f* that can accurately perform the following mapping:

*f*(*y*_{i−m}, *y*_{i−m+1}, …, *y*_{i−1}) = *ŷ*_{i}, |*y*_{i} − *ŷ*_{i}| ≤ *ε*,

where *y*_{i} represents the desired response, *ŷ*_{i} its forecast, and *ε* the acceptable error.
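In practice, such a one-step-ahead mapping is learned from lagged input/output pairs constructed from the series itself. A minimal sketch (the function name is our own):

```python
import numpy as np

def make_lagged_pairs(series, p):
    """Turn a time series into (input, target) pairs for one-step-ahead
    learning: each input row holds the p previous observations
    y_{i-p}, ..., y_{i-1}; the corresponding target is y_i."""
    y = np.asarray(series, dtype=float)
    X = np.array([y[i - p:i] for i in range(p, len(y))])
    t = y[p:]
    return X, t
```

For example, the series 1, 2, 3, 4, 5 with *p* = 2 yields the inputs (1, 2), (2, 3), (3, 4) and the targets 3, 4, 5.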

In the past decades, ANNs have been developed as a tool with great capabilities for recognizing and modelling data patterns that are not easily identifiable by traditional methods. However, one may notice a common feature in all existing ANN applications in forecasting: the necessity for a relatively long time series in order to achieve high accuracy. Usually, there should be at least 50 data points to consider [19]. Because of this, and based on previous research in short-term forecasting [20–22], we have chosen the FFAP neural network topology as the basis used throughout this study. This structure is briefly explained next.

The general structure of a feed-forward neural network is illustrated in Figure 1. It has just one hidden layer, since one hidden layer has been confirmed to be sufficient for solving univariate forecasting problems [23]. In this figure, the indices “in,” “h,” and “o” denote the input, hidden, and output layers of the ANN, respectively. Weights are labeled *w*_{k,l}, where connections between the input and the hidden layer are designated with *k* = 1, 2, …, *m*_{in}, *l* = 1, 2, …, *m*_{h}, while connections between the hidden and the output layer are designated with *k* = 1, 2, …, *m*_{h}, *l* = 1, 2, …, *m*_{o}. The thresholds are denoted *θ*_{x,r}, *r* = 1, 2, …, *m*_{h} or *m*_{o}, depending on the layer. The neurons in the input layer distribute the input signals, the neurons in the hidden layer are activated by a sigmoid function, and a linear function activates the neurons in the output layer. A modified version of the steepest-descent minimization algorithm is applied as the learning method [24]. The problem of initialization was solved using the method described in [25].
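A minimal NumPy sketch of this structure follows: one sigmoid hidden layer, a linear output layer, and steepest-descent training. This is only a skeleton for orientation; the paper's modified learning algorithm [24] and initialization method [25] are replaced here by vanilla gradient descent on the mean squared error and small random initial weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OneHiddenLayerNet:
    """Feed-forward net with the structure of Figure 1: m_in inputs,
    m_h sigmoid hidden neurons, m_o linear output neurons."""

    def __init__(self, m_in, m_h, m_o=1):
        self.W_h = rng.normal(0.0, 0.5, (m_in, m_h))  # input->hidden weights
        self.b_h = np.zeros(m_h)                      # hidden thresholds
        self.W_o = rng.normal(0.0, 0.5, (m_h, m_o))   # hidden->output weights
        self.b_o = np.zeros(m_o)                      # output thresholds

    def forward(self, X):
        self.h = sigmoid(X @ self.W_h + self.b_h)     # sigmoid hidden layer
        return self.h @ self.W_o + self.b_o           # linear output layer

    def train_step(self, X, t, lr=0.5):
        """One steepest-descent step on the mean squared error."""
        y = self.forward(X)
        diff = y - t.reshape(y.shape)
        g = 2.0 * diff / diff.size                    # dLoss/dOutput
        g_h = (g @ self.W_o.T) * self.h * (1.0 - self.h)  # backprop to hidden
        self.W_o -= lr * self.h.T @ g
        self.b_o -= lr * g.sum(axis=0)
        self.W_h -= lr * X.T @ g_h
        self.b_h -= lr * g_h.sum(axis=0)
        return float((diff ** 2).mean())
```

Repeated calls to `train_step` on lagged input/output pairs drive the loss down; in the paper's methodology, many such networks with different hidden-layer sizes would be trained and the most accurate ones combined.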