Research Article  Open Access
Wuwei Liu, Jingdong Yan, "Financial Time Series Image Algorithm Based on Wavelet Analysis and Data Fusion", Journal of Sensors, vol. 2021, Article ID 5577852, 11 pages, 2021. https://doi.org/10.1155/2021/5577852
Financial Time Series Image Algorithm Based on Wavelet Analysis and Data Fusion
Abstract
In recent years, interest in time series modeling and its application to prediction has grown steadily. This paper discusses a financial time series image algorithm based on wavelet analysis and data fusion. We conducted an in-depth study of the scale decomposition sequences and wavelet transform sequences in the different scale domains of the wavelet transform, following the transform's scale-dependent behavior. Wavelet neural networks with different numbers of input and hidden neurons are used to predict each sequence separately, and the individual predictions are then fused by wavelet reconstruction into a final prediction of the original time series. Using the RBF algorithm and SPSS Clementine, the wavelet transform sequences on five scales are modeled. Each network model has three layers: an input layer, a hidden layer, and an output layer, and each output layer has only one output element. To compare the prediction performance of the proposed model, an ordinary RBF network is also used to model and predict the log return itself. With 5 input samples, the minimum mean square error is obtained with 6 hidden-layer nodes: the mean square error is 1.6349, the training-phase mean square error is 0.0209, and the validation error is 1.6141. The results show that the combined wavelet and RBF network prediction method outperforms either wavelet prediction or RBF network prediction alone.
1. Introduction
As a recent research achievement, wavelet analysis has been widely applied because of its local amplification and multiresolution analysis capabilities. However, because the method is often understood only superficially, its full power has not been exploited, and in practice it is frequently reduced to running the MATLAB wavelet toolbox on existing results.
For stock price trend prediction, the commonly used techniques are the moving average method, the exponential smoothing method, and other conventional technical indicator analyses. However, stock price data form a typical nonstationary time series with sharp peaks and heavy tails, for which these traditional forecasting methods are clearly unsuitable. Moreover, owing to various accidental factors, stock market data are often random, and the long-term trend of China's stock market is difficult to analyze because the market was established relatively recently and remains imperfect in several respects. With the introduction of wavelet analysis theory into the finance field, the wavelet transform has been found effective at extracting useful information from noisy data.
For lossless coding, a novel wavelet-based method has been proposed to improve the coefficient independence of hyperspectral images. The regression wavelet analysis (RWA) proposed by Amrani et al. uses multiple regressions to exploit the relationships between wavelet transform components. Building on an earlier nonlinear scheme, it estimates each coefficient from neighboring coefficients. Specifically, RWA performs pyramid estimation in the wavelet domain, reducing the residuals and the statistical relationships representing energy compared with existing wavelet-based schemes. They proposed three regression models addressing estimation accuracy, component scalability, and computational complexity, and other regression models can be designed for other goals; however, their research process lacks data [1]. Arneodo et al. used the continuous wavelet transform to formalize multifractals as fractal functions. They reported the latest results of applying the so-called Wavelet Transform Modulus Maxima (WTMM) method to fully developed turbulence data and DNA sequences, and briefly introduced ongoing work that may guide future research; their research has little practical significance [2]. Bousefsaf et al. used a low-cost webcam to record and analyze pulse wave signals, extracting amplitude information to evaluate participants' vasomotor activity. They applied the continuous wavelet transform to the photoplethysmographic signal obtained from the webcam. After brief but intense physical exercise, they evaluated a group of 12 healthy subjects against an approved contact probe to assess the performance of the proposed filtering technique; during rest, skin vasodilation could be observed. Their research lacks comparative experiments [3]. Alcala et al. note that the Fourier method assumes the energy is distributed across the entire given window. They therefore used wavelet analysis to partition a rocket data set collected during a polar mesospheric summer echo event characterized by the Norwegian sounding system radar. They found that sharp edges can be isolated in space or juxtaposed with turbulence, and likewise that turbulent regions without steep edges can be found; their research method lacks innovation [4]. Chen used continuous wavelet analysis to study the dynamic relationship between American health development and economic growth. His findings reconcile the long-run countercyclicality of longevity with its procyclicality over the business cycle. He also identified four causal relationships between health progress and economic growth: the income view, the health view, the feedback view, and the neutrality hypothesis. His research sample is too small [5, 6].
In this study, the scale-dependent behavior of the wavelet transform is exploited: the different scale domains of the wavelet transform correspond to the scale decomposition sequence and the wavelet transform sequences. Wavelet neural networks with different numbers of input and hidden neurons are used to predict each sequence separately, and the individual predictions are then combined by wavelet reconstruction into a final prediction of the original time series. Using the RBF algorithm and SPSS Clementine, the wavelet transform sequences on 5 scales are modeled; each network model has three layers, namely, an input layer, a hidden layer, and an output layer, and each output layer has only one output element. To compare the forecasting performance of the proposed model, an ordinary RBF network is also used to model and predict the logarithmic return itself.
2. Financial Time Series Image Algorithm
2.1. RBF Neural Network Algorithm
The net input of a neuron in the hidden layer of the RBF neural network algorithm is

n_i = b1 ||x - c_i||

Among them, b1 is the threshold of the neural network [7, 8]. The output is

a_i = exp(-n_i^2)

The relationship between b1 and the width parameter spread is b1 = 0.8326/spread, and the output of the hidden layer neuron becomes

a_i = exp(-(0.8326 ||x - c_i|| / spread)^2)

w2_i is the corresponding weight; the output of the network is

y = Σ_i w2_i a_i + b2
The complete process structure is shown in Figure 1.
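As a concrete illustration, the hidden-layer computation above can be sketched in plain Python. The b1 = 0.8326/spread width convention follows MATLAB-style radbas networks, and the centers and weights below are made-up illustrative values, not the paper's fitted parameters.

```python
import math

def rbf_forward(x, centers, w2, b2, spread=1.0):
    """Forward pass of a minimal Gaussian RBF network (sketch).
    b1 = 0.8326/spread follows the MATLAB radbas convention;
    centers/weights here are illustrative, not fitted values."""
    b1 = 0.8326 / spread
    out = b2
    for c, w in zip(centers, w2):
        # Euclidean distance ||x - c_i|| between input and hidden-neuron center
        dist = math.sqrt(sum((xj - cj) ** 2 for xj, cj in zip(x, c)))
        out += w * math.exp(-(b1 * dist) ** 2)  # Gaussian hidden activation
    return out

# toy usage: 3 hidden neurons on a 2-D input
centers = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
w2 = [0.5, -0.2, 0.1]
y = rbf_forward([1.0, 1.0], centers, w2, b2=0.05)
```

An input that coincides with a center activates that neuron fully (activation 1), while distant inputs contribute almost nothing, which is the local-response property the RBF network relies on.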
2.2. Wavelet Analysis
To prevent excessive network data from slowing network convergence and to improve the generalization ability of wavelet neural networks, the original data are normalized so that the processed data are mapped into the range [0, 1] [9]. Normalization not only eliminates dimensional effects but also simplifies computation, while the processed data still retain the characteristics of the original data [10, 11]. There are two main normalization methods:
(1) The max-min normalization method [12]:

x'_i = (x_i - x_min) / (x_max - x_min)

where x'_i is the ith value after normalization [1]. (In this respect the wavelet network resembles the RBF network, except that the scale and displacement parameters of the wavelet network can be set according to the time-frequency localization characteristics of the wavelet [13, 14].)
(2) The 0-1 mean normalization method:

x'_i = (x_i - x̄) / s

where x_i is the ith value in the original data, x̄ is the sample mean, and s is the sample standard deviation [2, 5].
The input vectors of the wavelet neural network are standardized with the maximum-minimum normalization method before the model is fitted [15, 16]. To analyze the characteristics of the original data sequence accurately, the predicted values must be denormalized after network training and prediction are complete [17, 18]. The calculation formula is

x_i = x'_i (x_max - x_min) + x_min

The denormalized data have the same dimension as the original data, and analyzing them extracts the features and laws contained in the data more accurately [19]. Data normalization and denormalization are implemented with the MATLAB command mapminmax [20, 21].
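A minimal sketch of max-min normalization and its inverse; the price values below are made up for illustration.

```python
def minmax_normalize(xs):
    """Map data into [0, 1]; also return (min, max) so the mapping can be inverted."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs], (lo, hi)

def minmax_denormalize(scaled, lo, hi):
    """Invert the normalization to recover the original scale."""
    return [s * (hi - lo) + lo for s in scaled]

# made-up closing prices for illustration
prices = [10.2, 10.8, 9.9, 11.5, 10.4]
scaled, (lo, hi) = minmax_normalize(prices)
restored = minmax_denormalize(scaled, lo, hi)
```

Keeping the (min, max) pair alongside the scaled data mirrors what MATLAB's mapminmax does internally: the same settings used for normalization are reused for the inverse mapping of predicted values.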
2.3. WaveletRBF Network Prediction
There are many decomposition schemes for wavelet decomposition: any family of subspaces whose direct sum covers the whole space without overlap is admissible. Ordinary wavelet decomposition is in fact just a special case of wavelet packet decomposition [22]. The diverse and flexible decomposition options of wavelets make it possible to select an optimal decomposition for a given purpose [23, 24]. For convenience of discussion, we no longer use separate symbols for the approximation and detail subspaces but unify them as U_j^n, where j is the scale and n is the index of the subspace within that scale; each subspace then splits into two subspaces at the next scale:

U_j^n = U_{j+1}^{2n} ⊕ U_{j+1}^{2n+1}
The corresponding sequence satisfies:
Traditional methods are based on additive cost functions that measure concentration, such as entropy. While such cost functions are of great significance in data compression and related fields, they are of little use for the prediction-oriented decomposition pursued here. What this research needs is a wavelet packet decomposition that achieves the highest prediction accuracy, so the criterion for judging prediction accuracy must be considered first [25]. Among the many accuracy indicators, this paper selects the mean square error (MSE), which reflects the variance of the deviation of the predicted values from the actual values and is well suited to this application. Its definition is shown in the following formula:

MSE = (1/N) Σ_{i=1}^{N} (ŷ_i - y_i)^2
Among them, ŷ_i denotes the predicted value. This research develops a relatively fast search algorithm for the optimal decomposition, which uses neural networks to predict a parent sequence and its two decomposed subsequences, respectively.
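As a minimal sketch, the MSE criterion can be computed as:

```python
def mse(actual, predicted):
    """Mean square error: average squared deviation of predictions from actual values."""
    return sum((p - a) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

For example, mse([1, 2, 3], [1, 2, 5]) averages the squared errors 0, 0, and 4, giving 4/3.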
If the MSE of the prediction reconstructed from the subsequences, measured against the actual values, is smaller than the MSE of predicting the parent sequence directly, the decomposition improves prediction accuracy and is effective, and the decompositions of the two subsequences are then considered in the same way. Otherwise, the neural network is already able to capture the fluctuation law of the sequence directly, its prediction accuracy is no longer sensitive to decomposition, and further decomposition only makes the accuracy fluctuate within a small range. Therefore, once decomposition fails to improve the prediction accuracy further, it can be stopped, and the sequence is retained directly as one of the optimal wavelet packet decomposition sequences. By comparing parent and child MSEs with this top-down search, the optimal wavelet decomposition with the highest prediction accuracy, that is, the smallest MSE, is obtained step by step [26, 27].
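The stop-when-no-improvement search can be sketched as follows. The split and predict_mse arguments are stand-in callables (the paper's actual splitter is a wavelet packet filter bank and its predictor an RBF network), and the acceptance criterion shown, comparing the parent MSE with the sum of the child MSEs, is one simple reading of the rule.

```python
def best_decomposition(seq, split, predict_mse, max_depth=3):
    """Top-down search: keep splitting a sequence into two subband
    subsequences while doing so lowers the combined prediction MSE.
    `split` and `predict_mse` are stand-in callables."""
    if max_depth == 0:
        return [seq]
    low, high = split(seq)
    # keep the decomposition only if it improves prediction accuracy
    if predict_mse(low) + predict_mse(high) < predict_mse(seq):
        return (best_decomposition(low, split, predict_mse, max_depth - 1)
                + best_decomposition(high, split, predict_mse, max_depth - 1))
    return [seq]  # stop: further decomposition no longer helps

# toy usage: halve the list; pretend prediction MSE grows with length squared
halve = lambda s: (s[:len(s) // 2], s[len(s) // 2:])
leaves = best_decomposition(list(range(8)), halve, lambda s: float(len(s) ** 2))
```

With the toy cost above, every split helps, so the search decomposes fully; with a constant cost, no split helps and the original sequence is returned unchanged.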
3. Financial Time Series Image Algorithm Experiment
3.1. Neural Network Learning
Step 1. First, use the function prestd to normalize the input data.
Step 2. Establish a network: net = newff(minmax(input), [1, 7], {'tansig', 'purelin'}, 'traingdx'); the learning algorithm used for weight updates is traingdx.
Step 3. Set the number of training epochs of the network to 1000 and the training goal to 0.001.
Step 4. Train the network: train(net, input, output).
Step 5. Network training is complete when the number of training iterations or the training goal reaches the preset value.
Use the trained neural network to simulate and predict the data for a total of 18 days: yuce = sim(net, input_test), where input_test is the preprocessed test-set network input and yuce is the final output prediction, denormalized using the statistics (num, meant_test, stdt_test) returned by the preprocessing step.
When neural networks are used for time series modeling, time must be embedded in the network structure. The common method is time delay, implemented in the input layer of the network. A nonlinear system model is then used for prediction; when the prediction horizon is 1, it is single-step prediction. The normalized RBF network can serve as this nonlinear system model, and its parameters can be learned from input samples so that it acquires predictive ability.
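The time-delay construction of input samples can be sketched as:

```python
def time_delay_embed(series, n_lags):
    """Build (input, target) pairs for one-step-ahead prediction: each input
    is n_lags consecutive past values and the target is the next value."""
    inputs = [series[i:i + n_lags] for i in range(len(series) - n_lags)]
    targets = series[n_lags:]
    return inputs, targets

# e.g. 5 lagged closing prices predict the 6th value
X, y = time_delay_embed([1, 2, 3, 4, 5, 6, 7, 8], n_lags=5)
```

This also explains the sample counts reported later: a series of length L with n_lags input nodes yields L - n_lags training groups.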
3.2. Establishment of RBF Model
The parameters that affect the prediction performance of the RBF network in practice are the size of the training set, the numbers of input and output nodes, the number of hidden nodes, the hidden-layer centers, the weights, the width parameter, and so on; among these, the number of hidden nodes and the centers have the greatest impact on network performance. Parameter selection is described below. The data are divided into a training sample set and a validation sample set: the first 138 days of SSE 50 Index daily closing prices and the first 198 hours of time-sharing closing prices are used as training sample sets, and the subsequent 35 days and 50 hours, respectively, are used as validation sample sets. The error on the training samples is called the fitting error, and the error on the validation samples is called the prediction error. For the daily chart with 5 and 3 input nodes, the actual closing-price training samples are 133 and 135 groups, respectively; for the time-sharing chart with 8 and 5 input nodes, the actual training samples are 190 and 193, respectively. Meanwhile, the classical wave theory of technical stock analysis holds that a complete up-cycle consists of 5 waves and a down-cycle of 3 waves, and this 5-3 pattern occurs widely. Therefore, RBF models with 5 or 3 input nodes and one output node are established for the daily chart, and RBF models with 8 or 5 input nodes and one output node are established for the hourly chart, and their performance is examined.
Too many or too few hidden nodes weaken the generalization ability of the RBF network, and a poor choice of hidden-layer centers causes the same problem. This study therefore uses an RBF optimization algorithm to select the number of hidden nodes and the hidden-layer centers.
3.3. Modeling of Wavelet Transform Sequence
The basic method of this research is to exploit the scale-dependent behavior of the wavelet transform: the different scale domains of the wavelet transform correspond to the scale decomposition sequence and the wavelet transform sequences, and wavelet neural networks with different numbers of input and hidden neurons are used to predict each sequence separately. The individual predictions are finally combined by wavelet reconstruction into the final prediction of the original time series. Using the RBF algorithm and SPSS Clementine to model the wavelet transform sequences on 5 scales, each network model has three layers, namely, an input layer, a hidden layer, and an output layer, and each output layer has only one output element. To compare the forecasting performance of the proposed models, an ordinary RBF network is also used to model and predict the logarithmic rate of return itself. The specific modeling steps are as follows:
(1) Set the input and output fields: each model has 10 input fields, that is, lags 1 to 10 of each wavelet transform sequence are the model's input fields, and the current value of the wavelet transform is the output field.
(2) Normalize the input variables with the maximum-minimum normalization method.
(3) Set the training set.
(4) Set the modeling parameters. (a) Modeling method: rapid modeling; in SPSS Clementine 12, rapid modeling uses the BP algorithm. (b) Number of training runs: since there are only 20-30 training samples, the data volume is limited, but overtraining on this limited data must still be prevented, so the overtraining-prevention parameter is set relatively high at 80%; that is, in every modeling run, 20% of the data is withheld from the neural network learning process.
(5) Validate the model with the CSI 300 Index closing-price return data for a total of 18 trading days: combine the predicted value of each wavelet transform sequence with the predicted value of the scale transform sequence obtained from the ARMA model, and construct the prediction of the original sequence through wavelet reconstruction.
(6) Input the validation-set data into the neural network model of the return series to obtain the predicted values from the ordinary RBF neural network model; evaluate the goodness of fit and compare it with the goodness of fit of the prediction obtained in (5).
Since the logarithmic return of the daily closing price of the CSI 300 Index is highly volatile, we choose a wavelet with orthogonal and near-symmetric characteristics to eliminate redundancy and bias, and, to separate periodicity from trend, a relatively high number of decomposition levels is used. Based on these considerations, this research selects the sym4 wavelet with a 5-level decomposition.
3.4. Wavelet Analysis Combined with Neural Network Prediction
The learning process consists of forward propagation of information and back propagation of errors. In forward propagation, the input information is processed from the input layer through the hidden layer and transmitted to the output layer; the state of the neurons in each layer affects only the state of the neurons in the next layer. If the desired output is not obtained at the output layer, the process switches to back propagation and returns the error signal along the original connection paths. By modifying the weights of the neurons in each layer, the mean square error is minimized. A three-layer network can realize any nonlinear mapping between input and output.
The wavelet decomposition divides the frequency space occupied by the original sequence. Assuming that a wavelet is used to decompose the original index sequence into three scales, the principle of the waveletRBF neural network prediction method is shown in Figure 2.
Among them, d1, d2, and d3 are the high-frequency (detail) parts of the sequence at each scale, and a3 is the low-frequency (approximation) part. However, since the Mallat decomposition algorithm involves downsampling by two, the length of each subsequence obtained at each decomposition level is halved relative to its parent sequence. The subsequences must therefore be reconstructed to the same length as the original sequence for forecasting. The wavelet-RBF neural network prediction method thus consists of 4 steps:
(i) Perform wavelet decomposition on the original index sequence.
(ii) Use the wavelet coefficients to reconstruct the decomposed subsequences to the same length as the original sequence.
(iii) Model and predict each wavelet subsequence with the RBF neural network.
(iv) Reconstruct the predicted values of the subsequences to generate the final predicted value of the Shanghai Stock Exchange Composite Index sequence.
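Steps (i) and (ii) can be illustrated with a one-level Haar transform; the paper uses sym4, but Haar is chosen here only because its single-branch reconstruction is easy to verify by hand, and the signal values are made up.

```python
def haar_split(xs):
    """One level of Haar analysis with downsampling by two (the paper uses sym4;
    Haar is used here only so the arithmetic is easy to verify by hand)."""
    a = [(xs[i] + xs[i + 1]) / 2.0 for i in range(0, len(xs), 2)]  # approximation
    d = [(xs[i] - xs[i + 1]) / 2.0 for i in range(0, len(xs), 2)]  # detail
    return a, d

def single_branch(a, d):
    """Reconstruct each halved subsequence back to the original length so that
    the two branches sum elementwise to the original signal."""
    a_full, d_full = [], []
    for ai, di in zip(a, d):
        a_full += [ai, ai]   # upsampled low-frequency branch
        d_full += [di, -di]  # upsampled high-frequency branch
    return a_full, d_full

x = [4.0, 2.0, 5.0, 7.0]
a, d = haar_split(x)               # halved subsequences (downsampling by two)
a_full, d_full = single_branch(a, d)  # full-length branches summing to x
```

The key property mirrored from the paper is that the full-length branches add up to the original sequence, so each branch can be predicted separately and the predictions recombined by summation.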
4. Financial Time Series Image Algorithm Analysis
4.1. Prediction of Each Subsequence Wavelet Decomposition
The selected 18-day data and time-sharing data are wavelet filtered to remove noise and then normalized. Since the stock market is more concerned with trend changes in the index, the data are then input into the RBF network for training and testing. After denormalization, 40 predicted values are obtained for each subsequence; the results are shown in Table 1. Note that the optimal spread value for each subsequence is obtained through repeated MATLAB experiments based on the mean square error (MSE) and the smoothness of the curve. Reconstructing the predicted values of the subsequences by wavelet reconstruction yields a total of 40 Shanghai Composite Index values predicted by the wavelet-RBF network model; the actual and predicted values are shown in Figure 3.

4.2. RBF Network Model Analysis
The sym4 wavelet is used to decompose the original data signal (256 daily closing prices of the Shanghai Composite Index) into 3 scales, and single-branch reconstruction with the wavelet coefficients yields a low-frequency subsequence and high-frequency subsequences d1, d2, and d3 of the same length as the original sequence; the wavelet decomposition results are shown in Figure 4. The numbers of input and hidden nodes are shown in Table 2. Generally speaking, predicting the stock price with the RBF wavelet network gives quite satisfactory results. The results show that increasing the embedding dimension does not reduce the network prediction error: no matter how many iterations the network runs, the error indicators show that the prediction is best when the embedding dimension is 2. When using history to predict the future, more historical data does not mean a more accurate prediction; that is, the current stock price is closely related only to the closing prices of the last two days. This may also reflect the algorithm itself: as the embedding dimension increases, wavelet analysis tends to converge poorly on high-dimensional training problems, and convergence to the optimum cannot be guaranteed. With 3 input samples, the mean square error is lowest with 6 hidden-layer nodes; generalization is best with hidden-layer nodes 131, 92, 29, 13, 117, and 68, and the mean square error is 0.2028, with a training-phase mean square error of 0.0198 and a validation error of 0.2028.

From the predicted values of the three models, the respective prediction accuracies (MSE) are calculated; the results are shown in Table 3. Wavelet decomposition and wavelet packet decomposition both significantly improve the prediction accuracy for the Shanghai Composite Index series, so the RBF network models based on wavelet decomposition and on wavelet packet decomposition are both suitable for short-term stock price time series prediction. Wavelet packet decomposition in particular, because it selects the optimal decomposition subsequence set that minimizes the MSE, also significantly reduces the MSE of the original sequence and predicts very well. In addition, since the selected data contain many points with large fluctuations, comparison with Table 2 shows that the plain RBF network model is not very effective at predicting these sharply rising points, whereas the wavelet-RBF and wavelet packet-RBF network models still fit them well. Moreover, when the RBF algorithm and wavelet analysis are used to optimize the wavelet parameters for securities forecasting, wavelet analysis has a time advantage in reaching an almost equivalent error target. Of course, these results are based on running every algorithm for 1000 iterations; for wavelet analysis, premature convergence may appear during the iterations, and the probability of falling into local extrema is relatively high.

4.3. WaveletRBF Network Analysis
Combining wavelet analysis with the RBF network for financial data forecasting, we take the CSI 300 Index return rate as an example for detailed calculation and discuss the results. Using the db7 wavelet with a 6-level decomposition, the CSI 300 Index return data from July 10 to November 23 are decomposed and reconstructed into the approximation part a6 and the detail parts d6, d5, d4, d3, d2, and d1. A quadratic polynomial is used to fit and predict the approximation part a6, and the cosine wave approximation method is used to fit and predict the detail parts d6, d4, and d3. For d5, the symmetric method is used according to its graphical characteristics, and d2 and d1 are each fitted and predicted with an AR model. The normalization formula is applied to the approximation part a6, the detail parts d6, d5, d4, d3, d2, and d1, and their corresponding predicted values. Taking d6, d5, d4, d3, d2, and d1 as the input vectors and S1 as the output vector, the RBF network is trained. After the RBF network is trained, the predicted values are entered to obtain the predicted value of S1, and the predicted value of the original time series is then obtained through the inverse of the normalization formula. The results of wavelet analysis combined with the RBF neural network in predicting the return of the CSI 300 Index are shown in Table 4, and the daily closing price of the CSI 300 Index is shown in Figure 5.

The average relative errors of wavelet analysis and RBF neural network prediction for the CSI 300 and Shanghai indices, respectively, are shown in Table 5, and the daily closing price of the CSI 300 Index is shown in Figure 6. Figure 6 shows that the logarithmic return of the daily closing price of the CSI 300 Index exhibits obvious volatility clustering; the sample kurtosis indicates a peaked distribution, and the sample skewness indicates a left-skewed distribution, so the logarithmic return has sharp peaks and heavy tails. The Jarque-Bera test of the sample series gives an associated probability value of 0, so the assumption of normality is rejected; the comparison of the sample with standard normal scores likewise shows that the logarithmic return of the CSI 300 Index is not normally distributed. Similarly, the RBF network model of the SSE 50 Index time-sharing chart also uses normalized data, divided into training and validation sample sets as described above. Because the time-sharing chart covers short data intervals, with 8 and 5 input nodes the actual input samples are 190 and 193, respectively. The network parameters are also evolved over 500 generations of the genetic algorithm. With 5 input samples, the mean square error is smallest with 6 hidden-layer nodes; generalization is best with hidden-layer nodes 116, 12, 4, 33, 179, and 165, and the mean square error is 1.6349, with a training-phase mean square error of 0.0209 and a validation error of 1.6141.
With 8 input samples, the mean square error is smallest with 4 hidden-layer nodes; generalization is best with hidden-layer nodes 146, 159, 11, and 150, and the mean square error is 1.4161, with a training-phase mean square error of 0.0234 and a validation error of 1.3926, so the network with 8 input samples performs better than the one with 5. The hourly forecasting and simulation also reveal an interesting phenomenon: network performance tends to be better when the number of hidden-layer nodes is small.
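The distributional diagnostics discussed above (logarithmic returns and their sample skewness and kurtosis) can be sketched in plain Python; the helper names are illustrative.

```python
import math

def log_returns(prices):
    """Logarithmic returns of a closing-price series."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def _moment(xs, k):
    """kth central moment of a sample."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

def skewness(xs):
    """Sample skewness; negative values indicate a left-skewed distribution."""
    return _moment(xs, 3) / _moment(xs, 2) ** 1.5

def kurtosis(xs):
    """Sample kurtosis; values above 3 indicate a sharp peak and heavy tails."""
    return _moment(xs, 4) / _moment(xs, 2) ** 2
```

Applied to a return series, a kurtosis above 3 together with negative skewness is exactly the peaked, left-skewed, heavy-tailed shape the text describes for the CSI 300 log returns.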

For convenience, the average relative errors of prediction using wavelet analysis alone, using the RBF neural network directly, and using wavelet analysis combined with the RBF neural network are listed in Table 6; the average relative error of the combined wavelet analysis and RBF neural network prediction is shown in Figure 7. Table 6 shows that the average relative error of wavelet analysis prediction is much smaller than that of the neural network used directly. This is because wavelet analysis extracts the main trend and detail parts of the data, so the average relative error of the fitted prediction stays small. Neural network prediction has many advantages, as noted earlier, but the method is not yet fully mature. For example, in RBF neural network prediction the spread parameter plays a key role: different spread values yield different predictions, so a large number of experiments and past experience are needed to select an appropriate spread value. Training the network on different groupings of the data also affects the predicted values. Many factors thus directly influence the neural network's predictions, which is why the average relative error of direct neural network prediction is slightly larger than that of wavelet analysis prediction.
When the average relative errors of neural network prediction and wavelet analysis prediction are both within a reasonable range, neural network prediction can be computed directly in MATLAB with a simple, easily understood program, whereas wavelet analysis prediction requires layer-by-layer decomposition and layer-by-layer calculation and is more complicated; in that case the neural network can be given priority. At the same time, the prediction results of the wavelet prediction method combined with the RBF network prediction method are better than those of pure wavelet prediction or pure RBF network prediction.

Table 7 compares the prediction evaluation indicators of the RBF, WNN, and WWNN networks; the corresponding predictive analysis results are shown in Figure 8. Based on Figure 8 and Table 7, the prediction accuracy of the wavelet neural network exceeds that of the traditional RBF network, and the wavelet neural network based on wavelet decomposition and reconstruction (WWNN) in turn outperforms the pure wavelet neural network. On every evaluation indicator (sum of squared errors, mean absolute error, mean square error, mean absolute percentage error, and squared percentage error), the WWNN network is better than both the RBF neural network and the pure wavelet neural network. It can be seen that the WWNN network has good approximation and generalization ability, even though the two wavelet networks are inferior to the BP neural network in predictive power in some cases. From the time dimension, however, the training time of the wavelet network models, especially the WNN model, is about half that of the BP neural network model, indicating that wavelet networks formed by combining the wavelet transform with neural networks still have good prospects in time series forecasting. The WNN model is built on multiscale wavelet basis functions; it has more adjustable scale parameters and translation parameters and hence stronger learning ability. Nevertheless, in the comparison of the WNN and WWNN models on this time series, the WWNN model performs better, because the homogeneous (single-variable) model predicts the future exchange rate mainly from historical exchange rates, without accounting for other key factors, such as the national interest rate, inflation rate, original price, and gold price, that affect the exchange rate. The performance of the WNN model in time series forecasting therefore cannot be judged as powerful as that of the WWNN model, and multifactor consideration and analysis are needed in later work.
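The evaluation indicators used to compare the networks can be computed as follows. This is a minimal sketch assuming standard definitions of the metrics named in the text; the dictionary keys and the added RMSE entry are illustrative, not taken from Table 7.

```python
import math

def forecast_metrics(actual, predicted):
    """Standard forecast-error indicators: sum of squared errors (SSE),
    mean absolute error (MAE), mean square error (MSE), and mean absolute
    percentage error (MAPE, in percent)."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    sse = sum(e ** 2 for e in errors)
    return {
        "SSE": sse,
        "MAE": sum(abs(e) for e in errors) / n,
        "MSE": sse / n,
        "RMSE": math.sqrt(sse / n),
        "MAPE": sum(abs(e / a) for e, a in zip(errors, actual)) / n * 100,
    }
```

A model whose indicators are uniformly smaller, as reported for the WWNN network, dominates on every one of these criteria.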

5. Conclusion
This research mainly discusses a financial time series image algorithm based on wavelet analysis and neural networks. The parameters that govern the prediction performance of an RBF network in practice are the size of the training set, the numbers of input, output, and hidden nodes, the hidden-layer centers, the weights, and the width parameters; among these, the number and centers of the hidden-layer nodes have the greatest impact on network performance. When neural networks are used for time series modeling, time must be embedded in the network structure; the commonly used method is time delay, which can be implemented in the input layer of the network.
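The time-delay embedding mentioned above can be sketched as follows: each network input is a sliding window of the previous observations, and the target is the next value. The function name and window size are illustrative assumptions.

```python
def time_delay_embed(series, n_lags):
    """Build (input, target) pairs for a time-delay input layer: each input
    vector holds the n_lags most recent observations, and the target is the
    value that immediately follows them."""
    inputs, targets = [], []
    for t in range(n_lags, len(series)):
        inputs.append(series[t - n_lags:t])  # window of past values
        targets.append(series[t])            # next value to predict
    return inputs, targets
```

With 5 input samples, as in the experiments above, `n_lags=5` would turn a log-return series into five-dimensional input vectors for the RBF network.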
The basic method of this research exploits the way the wavelet transform varies with scale: the different scale domains of the wavelet transform correspond to the scale decomposition sequence and the wavelet transform sequence, and wavelet neural networks with different numbers of input and hidden neurons are used to predict each of them separately. The individual predictions are finally combined, using wavelet reconstruction technology, into the final prediction for the original time series. Using the RBF algorithm in the neural network and SPSS Clementine, we model the wavelet transform sequences on 5 scales; each network model has three layers, namely, an input layer, a hidden layer, and an output layer, and each output layer has only one output element. To compare the forecasting performance of the models proposed in this article, we also use an ordinary RBF network to model and predict the logarithmic rate of return itself.
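A multiscale decomposition of the kind described can be sketched with the Haar wavelet, chosen here only for brevity; the paper does not specify its wavelet basis, so this stand-in and the function names are assumptions. One averaging/differencing pass per level yields the scale decomposition (approximation) and wavelet transform (detail) sequences.

```python
def haar_step(signal):
    """One Haar level: pairwise averages (approximation) and pairwise
    half-differences (detail). Assumes the signal has even length."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

def haar_decompose(signal, levels):
    """Repeatedly split the approximation, returning
    [a_L, d_L, ..., d_1] -- one coarse sequence plus `levels` detail
    sequences, mirroring the 5-scale split used in the paper."""
    details, approx = [], list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return [approx] + details[::-1]

def haar_step_inverse(approx, detail):
    """Invert one Haar level: a + d and a - d recover the signal pair."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

Each resulting subsequence would then be fed to its own RBF network with its own input and hidden-layer sizes.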
The wavelet-RBF neural network prediction method can be divided into four steps: perform a wavelet decomposition of the original index sequence; use the wavelet coefficients to reconstruct the decomposed subsequences so that they have the same length as the original sequence; use RBF neural networks to model and predict each wavelet subsequence separately; and reconstruct the predicted values of the subsequences into the final predicted value of the Shanghai Composite Index sequence. The learning process consists of forward propagation of information and back propagation of errors. During forward propagation, the input information is processed from the input layer through the hidden layer and then transmitted to the output layer; the state of the neurons in each layer affects only the state of the neurons in the next layer. If the desired output is not obtained at the output layer, the network switches to back propagation and returns the error signal along the original connection channels, modifying the weights of the neurons in each layer so that the mean squared error is minimized. Such a three-layer network can realize any nonlinear mapping between input and output.
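The four-step pipeline can be sketched end to end. This is an illustrative sketch, not the paper's implementation: it uses a one-level Haar split in place of the unspecified wavelet, and a naive last-value ("persistence") forecaster as a stand-in for the per-subsequence RBF networks.

```python
def haar_split(signal):
    """Step 1: one-level Haar decomposition (approximation, detail)."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

def full_length(approx, detail):
    """Rebuild a full-length sequence from one Haar level (a + d, a - d)."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def persistence_forecast(series):
    """Stand-in for a trained RBF network: predict the last observed value."""
    return series[-1]

def wavelet_rbf_forecast(signal):
    """Steps 1-4: decompose, reconstruct each subband to the original
    length (zeroing the other subband), forecast each subsequence, and
    sum the subsequence forecasts into the final prediction."""
    approx, detail = haar_split(signal)
    zeros = [0.0] * len(approx)
    low = full_length(approx, zeros)    # smooth component, full length
    high = full_length(zeros, detail)   # detail component, full length
    # Sanity check for step 2: the components sum back to the original.
    assert all(abs(l + h - s) < 1e-9 for l, h, s in zip(low, high, signal))
    return persistence_forecast(low) + persistence_forecast(high)
```

Because reconstruction is linear, summing the per-subsequence forecasts in step 4 is the counterpart of the wavelet reconstruction applied to the predicted coefficients.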
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Copyright
Copyright © 2021 Wuwei Liu and Jingdong Yan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.