Research Article  Open Access
Youzhu Li, Chongguang Li, Mingyang Zheng, "A Hybrid Neural Network and HP Filter Model for Short-Term Vegetable Price Forecasting", Mathematical Problems in Engineering, vol. 2014, Article ID 135862, 10 pages, 2014. https://doi.org/10.1155/2014/135862
A Hybrid Neural Network and HP Filter Model for Short-Term Vegetable Price Forecasting
Abstract
This paper is concerned with time series data for vegetable prices, which have a great impact on people's lives. An accurate price forecasting method and an early-warning system for the vegetable market are urgently needed in daily life. Time series price data contain both linear and nonlinear patterns, so neither a purely linear forecasting model nor a neural network alone is adequate for modeling and predicting them: the linear model cannot capture nonlinear relationships, while a single neural network cannot handle the linear and nonlinear patterns equally well at the same time. The linear Hodrick-Prescott (HP) filter can extract the trend and cyclical components from time series data. We predict the linear and nonlinear patterns separately and then combine the two parts linearly to produce a forecast of the original data. This study proposes a hybrid neural network structure based on an HP filter that learns the trend and seasonal patterns separately. The experiments use vegetable price data to evaluate the model. Comparisons with the autoregressive integrated moving average method and the back propagation artificial neural network method show that our method achieves higher accuracy.
1. Introduction
Among all the price fluctuations in the market, the prices of agricultural products have the most obvious and basic impact on the cost of living. Many countries have established price early-warning systems to monitor and evaluate grain prices so that prices can be adjusted and controlled in a timely manner when they reach an abnormal state, in order to guarantee that the grain economy develops in a sustainable, healthy, and stable way [1, 2].
To predict a time series price is a challenging problem according to current studies [3, 4]. In recent decades, several linear and nonlinear prediction models have been developed for time series data forecasting. The autoregressive integrated moving average (ARIMA) model [5] is one of the most popular methods based on time series and has been widely used for data prediction [6]. Bianco et al. proposed ARIMA and transfer function models for the prediction of electricity consumption [7]. Linear regression models have also been proposed for the prediction of energy consumption [8].
However, time series data forecasting is usually a nonlinear problem, so linear approaches may fail to capture the nonlinear dynamics of the process. There are several nonlinear methods for forecasting future prices using machine learning models [9–12]. As vegetable prices have seasonal cyclical factors, some authors use signal processing methods to analyze the historical data and predict future prices [6, 13]. The limitation of these methods is that they assume the period of price changes is fixed and will follow the same cycle in the future. Because the price is affected by many factors, it cannot have a fixed change cycle. Moreover, these methods cannot handle both linear and nonlinear patterns at the same time.
In this paper, a hybrid approach combining HP filtering [14] and a neural network model [15, 16] is developed to predict short-term time series price data. Due to the high autocorrelation of time series data, they often contain a trend component. Hence, using an HP filter to decompose the price time series into its trend and cyclical components is proposed as an effective technique for time series data forecasting. A combined model can assist in capturing different patterns in the data and can improve prediction accuracy. The motivation behind this hybrid approach is that price time series are often complex in nature, so any individual model may not be able to capture all the different patterns equally well.
This paper is organized as follows. Section 2 briefly describes the situation of the current vegetable market in China and the factors affecting prices. In Section 3, we introduce some technical background knowledge. Section 4 describes the proposed HP filter based hybrid neural network model. We discuss the experimental results in Section 5. Our conclusion is presented in Section 6.
2. A Brief Description of the Vegetable Market in China
The Chinese vegetable market plays an important role in people's daily lives and in the agricultural industry. In 2011, the area planted with vegetables was about 19.7 million hectares, and vegetable production reached 679 million tons. According to a report from the Food and Agriculture Organization (FAO), China accounts for 43 percent of the world's vegetable planting area and 50 percent of the world's production, ranking first in world vegetable production.
However, vegetable prices have been volatile in recent years. There are many factors that have an influence on vegetable prices, such as population, national policies, area of arable land, international financial markets, price of alternatives, economic growth, and international trade. Here, we study the hidden patterns behind the history of vegetable prices to see if we can build a more accurate model for price forecasting.
3. Background
There are several methods for extracting the trend and cyclical components from an original data series. As the prediction results for the two components need to be merged afterwards, we must use a linear filter to separate the original series. Here, we choose the linear Hodrick-Prescott filter to obtain the trend and cyclical components [17]. Then, we use a traditional neural network to learn the patterns.
3.1. Hodrick-Prescott Filter Model
The HP filter [17] decomposes time series data $y_t$ into trend and cyclical components, $y_t = \tau_t + c_t$, $t = 1, \ldots, T$, where $\tau_t$ and $c_t$ are the trend and cyclical components, respectively. This decomposition assumes that the trend component does not contain any seasonality and, because the cycle is derived residually, it does not separate the cycle from any irregular movements. Hodrick and Prescott [17] minimize the variance of $c_t$, subject to a penalty for variations in the second difference of the growth term. Their filter is given by
$$\min_{\{\tau_t\}} \sum_{t=1}^{T} \left(y_t - \tau_t\right)^2 + \lambda \sum_{t=2}^{T-1} \left[\left(\tau_{t+1} - \tau_t\right) - \left(\tau_t - \tau_{t-1}\right)\right]^2. \tag{1}$$
The parameter $\lambda$ controls the smoothness of $\tau_t$. The minimization of (1) provides a mapping from $y_t$ to $\tau_t$, with $c_t$ determined residually. The estimate of potential output using the HP filter depends on the choice of the smoothing parameter $\lambda$. A $\lambda$ of zero corresponds to an extreme real business cycle model where all of the fluctuations in real output are caused by technology shocks, because the HP trend is then the same as the series being detrended. Conversely, as $\lambda$ tends to infinity, the HP trend approaches a deterministic time trend. Following Hodrick and Prescott, researchers typically set $\lambda = 1600$ for quarterly data [17] but test the robustness of their results with different values.
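The minimization in (1) has a closed-form solution: stacking the first-order conditions gives the linear system $(I + \lambda D^{\top}D)\,\tau = y$, where $D$ is the second-difference operator. A minimal NumPy sketch (not the authors' code; it uses a dense solve for clarity, so it is only suitable for short series):

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott decomposition y_t = tau_t + c_t, obtained by
    solving the normal equations of (1): (I + lamb * D'D) tau = y,
    where D is the (T-2) x T second-difference operator."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        # row encodes (tau_{t+1} - tau_t) - (tau_t - tau_{t-1})
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lamb * D.T @ D, y)
    return trend, y - trend  # trend tau_t, cycle c_t
```

With `lamb = 0` the trend reproduces the series itself, and as `lamb` grows the trend approaches a straight line, matching the two limiting cases discussed above.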
3.2. Principles of Neural Networks
Artificial neural networks (ANN) are popular models for studying nonlinear relationship functions. One of the most significant advantages of an ANN model is that it can approximate a large class of functions with a high degree of accuracy [18]. This capability comes from the parallel processing of the information. No prior assumption about the model form is required in the model building process. Instead, the network structure is mainly determined by the features of the data. The key element of an artificial neural network is the artificial neuron. For a given neuron, there are multiple inputs and one output. The output of the $i$th neuron is $y_i = f\left(\sum_j w_{ij} x_j + b_i\right)$, where $f$ is called the activation function, $w_{ij}$ are the input weights, and $b_i$ is a bias term.
A single hidden layer feedforward network is the most widely used model form for time series modeling and forecasting [19]. The model is characterized by a network of three layers of simple processing units connected by links, as shown in Figure 1. The output $y_t$ and the inputs $y_{t-1}, \ldots, y_{t-p}$ have the following relationship:
$$y_t = \alpha_0 + \sum_{j=1}^{q} \alpha_j \, g\!\left(\beta_{0j} + \sum_{i=1}^{p} \beta_{ij} y_{t-i}\right) + \varepsilon_t, \tag{2}$$
where $\alpha_j$ and $\beta_{ij}$ are parameters often called connection weights, $p$ is the number of input nodes, and $q$ is the number of nodes in the hidden layer. There are several types of activation function. The most widely used activation function for the output layer is the linear function, as a nonlinear activation function may introduce distortion to the predicted output. The logistic sigmoid and hyperbolic tangent functions are often used as the hidden layer transfer function; they are shown in (3) and (4), respectively:
$$g(x) = \frac{1}{1 + e^{-x}}, \tag{3}$$
$$g(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}. \tag{4}$$
Hence, the neural network model of (2) acts as a nonlinear function mapping from past observations $y_{t-1}, \ldots, y_{t-p}$ to the future value $y_t$. The function can be represented as
$$y_t = f\left(y_{t-1}, y_{t-2}, \ldots, y_{t-p}, \mathbf{w}\right) + \varepsilon_t, \tag{5}$$
where $\mathbf{w}$ is a vector of all the network parameters and $f$ is the mapping function that we want to find.
The function learning process uses a back propagation training algorithm [20] to minimize the errors between the output results and the desired results. This minimization is done by adjusting each parameter $w$ of the neural network by an amount $\Delta w$ according to the following formula:
$$\Delta w = -\eta \frac{\partial E}{\partial w}, \tag{6}$$
where $E$ is the error function and $\eta$ is the learning rate.
Finally, the estimated model is evaluated using a separate holdout sample that has not been exposed to the training process.
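As a concrete illustration of the feedforward model (2) and the back propagation update, the following sketch trains a single-hidden-layer network (tanh hidden layer, linear output) by plain gradient descent on a toy series. It is illustrative only; the paper itself uses Matlab's neural network toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-2, 2]
X = np.linspace(-2, 2, 40).reshape(-1, 1)
Y = np.sin(X)

p, q = 1, 8                                  # input nodes, hidden nodes
W1 = rng.normal(0, 0.5, (p, q)); b1 = np.zeros(q)
W2 = rng.normal(0, 0.5, (q, 1)); b2 = np.zeros(1)
eta = 0.05                                   # learning rate

def forward(X):
    H = np.tanh(X @ W1 + b1)                 # hidden layer, tanh activation
    return H, H @ W2 + b2                    # linear output layer

_, out = forward(X)
loss_start = np.mean((out - Y) ** 2)

for _ in range(2000):
    H, out = forward(X)
    err = out - Y                            # gradient of the squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)         # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= eta * gW2; b2 -= eta * gb2         # delta-rule updates
    W1 -= eta * gW1; b1 -= eta * gb1

_, out = forward(X)
loss_end = np.mean((out - Y) ** 2)
```

The training loss should fall substantially from `loss_start` to `loss_end`, after which a held-out sample would be used for evaluation as described above.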
4. Model of HP Filter Based Hybrid Neural Network
In this section, we will introduce the proposed forecasting model. The whole system framework will be presented first. Then, the model formulation will be given. Finally, we will discuss the workflow of our forecasting scheme.
4.1. Framework of the Proposed Approach
The module description of our proposed time series data forecast framework is presented in Figure 2. As shown in Figure 2, the proposed approach includes three main stages: data preprocessing, data forecasting, and data merge.
In the first stage, the time series price data are passed through the HP filter, which generates trend and cyclical components. This decomposition allows us to model the trend and the cyclical fluctuations of the time series separately and more accurately. As the HP filter is a linear filter, we can merge the two components again after forecasting. It must be noted that the trend and the cyclical components are learned separately by ANN_{T} and ANN_{C}. In the second stage, we select suitable features to feed into the ANN models and forecast each component individually. In the third stage, we use a linear function to merge the two components into the forecast of the original data series.
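The three stages can be sketched end to end. The component forecasters below are deliberately simple stand-ins (a moving-average decomposition instead of the HP filter, linear extrapolation for the trend and a seasonal-naive rule for the cycle instead of the two ANNs), just to show how the linear merge in the third stage works:

```python
import numpy as np

def decompose_ma(y, window=12):
    """Stage 1 stand-in: moving-average trend, residual cycle
    (the paper uses the HP filter here)."""
    y = np.asarray(y, float)
    trend = np.convolve(y, np.ones(window) / window, mode="same")
    return trend, y - trend

def forecast_trend(trend, horizon):
    # Stand-in for ANN_T: linear extrapolation of the last 12 trend points
    t = np.arange(12)
    a, b = np.polyfit(t, trend[-12:], 1)
    return a * (t[-1] + 1 + np.arange(horizon)) + b

def forecast_cycle(cycle, horizon, period=12):
    # Stand-in for ANN_C: repeat the last full cycle (seasonal naive)
    return np.array([cycle[-period + (h % period)] for h in range(horizon)])

def hybrid_forecast(y, horizon=3):
    trend, cycle = decompose_ma(y)           # stage 1: linear decomposition
    t_hat = forecast_trend(trend, horizon)   # stage 2: per-component forecast
    c_hat = forecast_cycle(cycle, horizon)
    return t_hat + c_hat                     # stage 3: linear merge
```

Because the decomposition is linear, adding the two component forecasts back together is a valid reconstruction of the original series' forecast.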
4.2. Hybrid Model Formulation
The behavior of vegetable prices may not be easily captured by stand-alone models because the price time series can include a variety of characteristics such as seasonality, heteroskedasticity, or non-Gaussian errors. Our hybrid model aims to reduce the complexity of the learning problem. Fitting the raw data directly requires a more complex function, so the artificial neural network needs more nodes and more layers to learn it. This also requires more time to train the ANN, which increases the risk of failure and makes the results less accurate. After the raw data have been decomposed into simpler components, the artificial neural networks are easier to train and the predictive performance of the combined model is improved.
Based on Box’s [21] work in linear modeling, time series data are considered as a nonlinear function of several past observations and random errors as follows:
$$y_t = f\left(y_{t-1}, \ldots, y_{t-m}, \varepsilon_t, \varepsilon_{t-1}, \ldots, \varepsilon_{t-n}\right), \tag{7}$$
where $f$ is a nonlinear function learned by the neural network, the noise $\varepsilon_t$ is the residual at time $t$, and $m$ and $n$ are integers. Suppose the time series is composed of a trend and a cyclical component:
$$y_t = T_t + C_t, \tag{8}$$
where $T_t$ and $C_t$ are the trend and the cyclical components. The HP filter separates the series $y_t$ into $T_t$ and $C_t$. This is the first stage shown in Figure 2. In the second stage, two neural networks are used in order to model the nonlinear relationships:
$$T_t = f_T\left(y_{t-1}, \ldots, y_{t-m}\right), \qquad C_t = f_C\left(y_{t-1}, \ldots, y_{t-m}\right). \tag{9}$$
As we can see from (9), we still use the original data as the inputs for the two neural networks, and the outputs are the trend value and the cyclical value. Then, in the forecasting process, we use the following function to compute the future data:
$$\hat{y}_t = \hat{T}_t + \hat{C}_t. \tag{10}$$
4.3. Forecasting Scheme
The overall flowchart of the proposed HP filter based neural network forecasting is shown in Figure 3. The proposed training scheme is briefly described in the following steps.
Step 1. Prepare the raw data of the vegetable price time series and compute the cycle of the time series data using the Fourier transform.
Step 2. Preprocess the raw price data using the HP filter and generate two series of data: and .
Step 3. According to the time series data cycle , build the training set for the neural network.
Step 4. Use the heuristic algorithm to select the optimal number of neurons in the hidden layers of ANN_{T} and ANN_{C} and initialize the parameters of each neural network.
Step 5. Use the back propagation algorithm to train the neural network.
Step 6. Repeat Steps 4 and 5 until the best fitness value satisfies the minimum requirement or the given number of generations is reached.
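Step 1's cycle detection via the Fourier transform can be sketched as follows: the dominant period is read off the peak of the power spectrum. The exact procedure is not spelled out in the paper, so this is an assumed implementation:

```python
import numpy as np

def dominant_period(y):
    """Estimate the dominant cycle length d of a series from its FFT:
    pick the nonzero frequency with the largest spectral power and
    return its period in samples."""
    y = np.asarray(y, float)
    y = y - y.mean()                     # remove the mean before the FFT
    power = np.abs(np.fft.rfft(y)) ** 2
    freqs = np.fft.rfftfreq(len(y))
    k = 1 + np.argmax(power[1:])         # skip the zero-frequency (DC) term
    return int(round(1.0 / freqs[k]))
```

For monthly price data with an annual cycle, this would return a period of 12, which then fixes the size of the training windows in Step 3.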
The proposed forecasting scheme is briefly described in the following steps.
Step 1. Select the latest data within a cycle as the input data.
Step 2. Preprocess these data using the HP filter.
Step 3. Use the trained ANN_{T} and ANN_{C} to forecast the price at the next date.
Step 4. Combine the results of the two networks as the final forecast price value.
5. Experimental Results
In this section, we compare two popular forecasting models with our proposed method. The time series come from the monthly price data for five types of vegetable from 2012 to 2013. We randomly choose the price data for one cycle as the testing dataset and use the remaining data as the training dataset. All the experiments are done using Matlab R2013b on the Windows 7 platform, with the neural network functions from the Matlab toolbox [22]. We first study the characteristics of the prices of these five kinds of vegetable.
5.1. Study of Vegetable Price Data
We choose the prices of five types of vegetable: cabbages, peppers, cucumbers, green beans, and tomatoes. The price trends of the five types of vegetable are shown in Figure 4. As we can see from Figure 4, the price series data have a significant cyclical character. This cycle is mainly influenced by the seasonality of agricultural production. We can also note that there is a growth trend in this series data. This is mainly due to the effects of inflation.
In Table 1, we illustrate the price characteristics for the five types of vegetable. They have very similar cycles as they have annual changes. The prices of peppers and green beans have a higher variance than the other prices. It can be seen from Figure 4 that the prices of those two vegetables have relatively larger fluctuations than the prices of the others.

5.2. Experiment Modeling
We implement two traditional forecasting models, ARIMA and ANN, to compare with the proposed model.
5.2.1. ARIMA Modeling
In the present study, several trials were carried out to choose the optimal ARIMA model parameters. The parameters that satisfied the statistical residual diagnostic checks were chosen for the ARIMA forecasting model. In an ARIMA model, the future value of a variable is assumed to be a linear function of past observations and random errors; that is, the underlying process that generates the time series has the form
$$y_t = \theta_0 + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \cdots - \theta_q \varepsilon_{t-q}, \tag{11}$$
where $y_t$ and $\varepsilon_t$ are the actual value and the random error at time period $t$, $\phi_i$ and $\theta_j$ are model coefficients, and $p$ and $q$ are the autoregressive and moving average orders.
The Box [21] methodology includes three iterative steps: model identification, parameter estimation, and diagnostic checking. The basic idea of model identification is that if a time series is generated from an ARIMA process, it should have theoretical autocorrelation properties. Box [21] proposed the use of the autocorrelation function (ACF) and the partial autocorrelation function (PACF) of the sample data as the basic tools for identifying the order of the ARIMA model. Based on the ARIMA model, we build five ARIMA models, to forecast the price of each of the five types of vegetable. The five ARIMA models are Cabbage ARIMA (2, 1, 5)(1, 1, 1)^{12}, Pepper ARIMA (2, 1, 2)(1, 1, 1)^{12}, Cucumber ARIMA (1, 1, 1)(1, 1, 1)^{12}, Green bean ARIMA (2, 1, 1)(1, 1, 1)^{12}, and Tomatoes ARIMA (1, 1, 1)(1, 1, 1)^{12}.
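The autoregressive core of the ARIMA model can be illustrated with an ordinary least-squares fit of an AR(p) process. This is only a sketch of the linear-in-past-observations idea: differencing, the moving-average terms, and the full Box-Jenkins identification procedure used in the paper are omitted:

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares fit of y_t = c + phi_1*y_{t-1} + ... + phi_p*y_{t-p} + e_t.
    Returns [c, phi_1, ..., phi_p]."""
    y = np.asarray(y, float)
    T = len(y)
    # Design matrix: intercept column plus p lagged columns
    X = np.column_stack([np.ones(T - p)] +
                        [y[p - k:T - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_one(y, coef):
    """One-step-ahead forecast from the fitted AR coefficients."""
    p = len(coef) - 1
    return coef[0] + sum(coef[k] * y[-k] for k in range(1, p + 1))
```

On data simulated from a known AR(1) process, the estimated coefficient should recover the true value closely, which is the sense in which the ACF/PACF-identified model "fits" the series.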
From Tables 2, 3, 4, 5, and 6, we can see that the ARIMA models can basically predict the prices. However, the parameters must be recalculated for each model, and it is difficult to find an easy way to determine them. The artificial neural network is easier to use because we only need to determine the number of layers and nodes.





5.2.2. Neural Network Modeling
A three-layer feedforward neural network model was developed for the prediction of the price series data using an optimized back propagation training algorithm. In the present study, the scaled conjugate gradient algorithm was selected as the optimized training method. The network structure is shown in Figure 1. We use 12 neural nodes for the input data and 8 nodes in the hidden layer. The output layer has one node with the "purelin" (linear) function. The artificial neural network model's performance was then validated for price prediction under a monthly time-step condition. The forecast results are shown in Figures 5(a)–5(e) for the prices of the five different types of vegetable for one season.
(a) Cabbage price prediction using ANN
(b) Pepper price prediction using ANN
(c) Cucumber price prediction using ANN
(d) Green bean price prediction using ANN
(e) Tomato price prediction using ANN
From Figure 5, we can see that the dashed lines are very close to the solid lines. This means that our ANN model can successfully learn the patterns of the time series price data and predict future results. The neural network is more robust than the ARIMA model because it does not require us to analyze the characteristics of the original data or find suitable parameters. The ANN is easier to use and has similar accuracy.
5.2.3. Hybrid Modeling
Our proposed model uses the same data from the previous 12 months as the testing dataset. First, we need to extract the trend and cyclical components. Figure 6 shows an example of the pepper price data after applying the HP filter. We use the decomposed data as the training datasets. The forecast results are shown in Figures 7(a)–7(e).
(a) Cabbage price prediction using the proposed model
(b) Pepper price prediction using the proposed model
(c) Cucumber price prediction using the proposed model
(d) Green bean price prediction using the proposed model
(e) Tomato price prediction using the proposed model
The neural network uses the same structure as in Section 5.2.2. As the cycle of the price data is 12 months, we use the latest 12 months' data as the input datasets. The input training dataset is illustrated in (12). The left column contains the label data, that is, the price to be predicted; the 12 columns on the right are the historical prices of the preceding 12 months, which are used as the 12 input values for the neural networks:
$$\left[\begin{array}{c|cccc} y_{t} & y_{t-1} & y_{t-2} & \cdots & y_{t-12} \\ y_{t+1} & y_{t} & y_{t-1} & \cdots & y_{t-11} \\ \vdots & \vdots & \vdots & & \vdots \end{array}\right]. \tag{12}$$
Figure 7 shows a cycle’s real data and the predicted results. We can see that the proposed method has good performance in learning the trend and cyclical patterns of the original data. The forecasting results are more accurate than the traditional results. The performance comparison is presented in the next section.
5.3. Model Verification and Comparison
To illustrate the accuracy of the method, two forecast consistency measures are used for the different types of vegetable. The root mean squared error (RMSE, (13)) is used as the error criterion, and the mean absolute error (MAE, (14)) is also employed as a performance indicator. For actual values $y_t$, predicted values $\hat{y}_t$, and $N$ test points, the RMSE and the MAE are defined as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} \left(y_t - \hat{y}_t\right)^2}, \tag{13}$$
$$\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \left| y_t - \hat{y}_t \right|. \tag{14}$$
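The two criteria in (13) and (14) are straightforward to compute; a small NumPy sketch:

```python
import numpy as np

def rmse(actual, pred):
    """Root mean squared error, eq. (13)."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return np.sqrt(np.mean((a - p) ** 2))

def mae(actual, pred):
    """Mean absolute error, eq. (14)."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return np.mean(np.abs(a - p))
```

Both measures are in the same units as the prices themselves, so lower values directly mean smaller average forecast errors.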
Table 7 contains a statistical analysis of the performance of the three forecasting models. Our proposed model has the best performance in predicting future vegetable prices. The ARIMA model is unstable across different kinds of time series data: it did well on the cabbage price data but worse on the pepper price data, because we did not find the best parameters for that model. Our proposed model is more stable when handling different kinds of time series data. The ANN model shows a middling performance.

6. Conclusion
Time series forecasting is one of the most important quantitative modeling problems and has received a considerable amount of attention in the literature. This study presents a hybrid approach that extends the artificial neural network model: the HP filter decomposes the series, and two networks learn the trend and cyclical components separately before their outputs are merged linearly. Due to the individual modeling of the trend and cyclical components, the forecasting accuracy is improved. The experimental results, evaluated with a consistent set of performance measures (RMSE, MAE), show that this new method can improve the accuracy of time series prediction. The performance of the proposed method is validated on time series price data for five types of vegetable.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by the Major Program of National Social Science Foundation of China (Project no. 12&ZD048) and by the National Social Science Foundation of China (Project no. 13CJY104).
References
[1] J. Wang, H. Pan, and F. Liu, "Forecasting crude oil price and stock price by jump stochastic time effective neural network model," Journal of Applied Mathematics, vol. 2012, Article ID 646475, 15 pages, 2012.
[2] S. Sriboonchitta, H. T. Nguyen, A. Wiboonpongse, and J. Liu, "Modeling volatility and dependency of agricultural price and production indices of Thailand: static versus time-varying copulas," International Journal of Approximate Reasoning, vol. 54, no. 6, pp. 793–808, 2013.
[3] B. Oancea and S. C. Ciucu, "Time series forecasting using neural networks," in Proceedings of the 7th International Scientific Conference Challenges of the Knowledge Society, pp. 1401–1408, Bucharest, Romania, 2013.
[4] G. Dudek, "Forecasting time series with multiple seasonal cycles using neural networks with local learning," in Artificial Intelligence and Soft Computing, Springer, 2013.
[5] W. W. S. Wei, Time Series Analysis, Addison-Wesley, Redwood City, Calif, USA, 1994.
[6] A. J. Conejo, M. A. Plazas, R. Espínola, and A. B. Molina, "Day-ahead electricity price forecasting using the wavelet transform and ARIMA models," IEEE Transactions on Power Systems, vol. 20, no. 2, pp. 1035–1042, 2005.
[7] J. L. Harris and L. Lon-Mu, "Dynamic structural analysis and forecasting of residential electricity consumption," International Journal of Forecasting, vol. 9, no. 4, pp. 437–455, 1993.
[8] V. Bianco, O. Manca, and S. Nardini, "Electricity consumption forecasting in Italy using linear regression models," Energy, vol. 34, no. 9, pp. 1413–1421, 2009.
[9] E. Hadavandi, H. Shavandi, and A. Ghanbari, "Integration of genetic fuzzy systems and artificial neural networks for stock price forecasting," Knowledge-Based Systems, vol. 23, no. 8, pp. 800–808, 2010.
[10] C.-Y. Yeh, C.-W. Huang, and S.-J. Lee, "A multiple-kernel support vector regression approach for stock market price forecasting," Expert Systems with Applications, vol. 38, no. 3, pp. 2177–2186, 2011.
[11] W.-Y. Chang, "An RBF neural network combined with OLS algorithm and genetic algorithm for short-term wind power forecasting," Journal of Applied Mathematics, vol. 2013, Article ID 971389, 9 pages, 2013.
[12] C. Pierdzioch, J.-C. Rülke, and G. Stadtmann, "Oil price forecasting under asymmetric loss," Applied Economics, vol. 45, no. 17, pp. 2371–2379, 2013.
[13] S. Fang and H. Shang, "A wavelet kernel-based primal twin support vector machine for economic development prediction," Mathematical Problems in Engineering, vol. 2013, Article ID 875392, 6 pages, 2013.
[14] M. O. Ravn and H. Uhlig, "On adjusting the Hodrick-Prescott filter for the frequency of observations," Review of Economics and Statistics, vol. 84, no. 2, pp. 371–376, 2002.
[15] J. Hu, Z. Li, Z. Hu, D. Yao, and J. Yu, "Spam detection with complex-valued neural network using behavior-based characteristics," in Proceedings of the 2nd International Conference on Genetic and Evolutionary Computing (WGEC '08), pp. 166–169, September 2008.
[16] D. Ömer Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586–594, 2010.
[17] R. J. Hodrick and E. C. Prescott, "Postwar U.S. business cycles: an empirical investigation," Journal of Money, Credit and Banking, vol. 29, no. 1, pp. 1–16, 1997.
[18] G. P. Zhang and M. Qi, "Neural network forecasting for seasonal and trend time series," European Journal of Operational Research, vol. 160, no. 2, pp. 501–514, 2005.
[19] G. Zhang, B. Eddy Patuwo, and M. Y. Hu, "Forecasting with artificial neural networks: the state of the art," International Journal of Forecasting, vol. 14, no. 1, pp. 35–62, 1998.
[20] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations, MIT Press, Cambridge, Mass, USA, 1986.
[21] G. E. P. Box, "Science and statistics," Journal of the American Statistical Association, vol. 71, no. 356, pp. 791–799, 1976.
[22] D. Howard, M. Beale, and M. Hagan, "Neural network toolbox for use with MATLAB," User's Guide Version 3, 1998.
Copyright
Copyright © 2014 Youzhu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.