Discrete Dynamics in Nature and Society


Research Article | Open Access


Lin Lin, Fang Wang, Shisheng Zhong, "A Study on the Generalized Approximation Modeling Method Based on Fitting Sensitivity for Prediction of Engine Performance", Discrete Dynamics in Nature and Society, vol. 2017, Article ID 5729786, 12 pages, 2017. https://doi.org/10.1155/2017/5729786

A Study on the Generalized Approximation Modeling Method Based on Fitting Sensitivity for Prediction of Engine Performance

Academic Editor: Zhan Zhou
Received: 09 Nov 2016
Revised: 13 Jan 2017
Accepted: 05 Feb 2017
Published: 06 Mar 2017

Abstract

Prediction technology for aeroengine performance is critically important in operational maintenance and safety engineering. To address the overfitting and underfitting problems that arise when approximation modeling techniques are used to predict engine performance, we derived a generalized approximation model whose fitting precision can be adjusted. Approximation precision is combined with fitting sensitivity so that the model obtains both good fitting accuracy and good generalization performance. Taking the Grey model (GM) as an example, we discuss the modeling approach of the novel GM based on fitting sensitivity, analyze the setting methods and optimization ranges of the model parameters, and solve the model with a genetic algorithm. By investigating the effect of each model parameter on the prediction precision in experiments, we summarize how the root-mean-square errors (RMSEs) of the novel GM vary with the model parameters. A comparison between the novel ANN and an ANN with Bayesian regularization further shows that the generalized approximation model based on fitting sensitivity achieves a reasonable fitting degree and good generalization ability.

1. Introduction

Prediction technology for aeroengine performance is significantly important in operational maintenance and safety engineering. In January 2008, a Boeing 747-400 of Australia's Qantas suffered a power-system malfunction in mid-flight that left all four engines unusable [1]. An investigation of American transport-airplane accidents caused by mechanical malfunctions, covering 7,571 registered airplanes from 1980 to 2001, concluded that the landing gear and the turbine engine are the components most likely to fail. In addition, of the roughly $31B spent worldwide on commercial aircraft maintenance in 2007, 31% went to engine maintenance [2]. Prediction technology for aeroengine performance can therefore decrease the probability of a crash and reduce maintenance costs.

An aeroengine is a complicated, nonlinear piece of equipment: it contains a large number of parts with a high degree of coupling among them. Moreover, its performance parameters consist of highly discrete, nonlinear data points. This complexity greatly hinders the development of prediction technology.

There are many methods for predicting aeroengine performance. Approximate mathematical models based on data-driven approaches are acceptable substitutes for accurate physical models, which are difficult to obtain in fields such as multidisciplinary design optimization and prediction. Many approximation models with random and nonlinear features have been constructed, including the response surface model (RS) [3, 4], polynomial regression model [5, 6], autoregressive moving average (ARMA) [7, 8], artificial neural network model (ANN) [9, 10], support vector machines (SVM) [11, 12], hidden Markov model (HMM) [13, 14], and Grey model (GM) [15, 16]. These approximation models are widely used in nonlinear simulation, classification, regression, and other domains. In the training phase, overfitting reduces the fitting errors but degrades the generalization performance of the model. By contrast, underfitting causes the model to deviate from the true model, with relatively low fitting accuracy.

The overfitting problem has been studied by many scholars from different perspectives. To measure the fitting degree, Akaike proposed an entropy-based information criterion (the Akaike information criterion, or AIC) that considers both the complexity and the fitting precision of a model. A small AIC is indicative of good model performance, and the criterion has thus been widely studied [17, 18]. To decrease the high complexity of models, the Bayesian information criterion was developed on the basis of the AIC [19, 20]. Regularization [21, 22] is another way to avoid overfitting: optimal parameters that simplify the model are obtained by penalizing either the L1 norm or the L2 norm of the weight coefficients [23]. Adjusting algorithm parameters, dimensionality reduction, and cross validation experiments [24] are also effective means of avoiding the overfitting problem.
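As a concrete illustration of the L2-norm penalty mentioned above (a generic sketch, not the method of any cited paper), ridge regression shrinks the weight coefficients toward zero, trading a little fitting accuracy for better generalization:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Least squares with an L2 penalty on the weights (ridge regression).

    Minimizes ||X w - y||^2 + lam * ||w||^2; a larger lam shrinks the
    weights and damps overfitting at the cost of some fitting accuracy.
    """
    n_features = X.shape[1]
    # Closed-form solution: w = (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Noisy samples of a line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)

w_plain = ridge_fit(X, y, lam=0.0)    # ordinary least squares
w_ridge = ridge_fit(X, y, lam=10.0)   # heavy penalty -> shrunken weights
```

With `lam = 0` this reduces to ordinary least squares; increasing `lam` visibly reduces the norm of the fitted weights.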

Some examples in the prediction domain, all using the same exhaust gas temperature (EGT) data, are shown as follows: prediction results of an ANN with the Bayesian regularization algorithm in Figure 1, of an SVM with cross validation in Figure 2, and of a Grey model (GM) in Figure 3.

Although regularization is used in the ANN and the SVM, their error precision ought to be better. The GM is usually applied to predict noisy parameters, but its error precision should be improved as well.

To obtain high prediction precision, the overfitting and underfitting problems must be addressed more effectively. To this end, we set up a fitting sensitivity model that makes the approximation model insensitive to samples far from the core of the sample clusters and sensitive to samples close to it. The fitting sensitivity is controlled to adjust the fitting accuracy of the approximation model, so that the model approaches the center of the sample clusters rather than the samples far from that center. Hence, the robustness and adaptability of the model to new samples are improved.

In Section 2, an approximation modeling method based on fitting sensitivity is introduced. With this method, a generalized approximation model is built by analyzing the fitting sensitivity and its correlation with the approximation precision. In Section 3, a GM is taken as an example, and the approximation modeling method based on fitting sensitivity is employed to obtain a novel GM that better avoids the overfitting and underfitting problems. The novel GM exhibits this capability because the main tendency of its training samples can be obtained through fitting, which reduces the fluctuations in the prediction results. In Section 4, values of the parameters for the specific model proposed in Section 3 are set, and the optimization model, solved with a genetic algorithm, is established with a reasonable fitting degree. In Section 5, single-variable experimental verification is performed to analyze the impact of the model parameters on the accuracy of the novel GM. A contrast experiment between the ANN with Bayesian regularization and the novel ANN shows that the approximation model based on fitting sensitivity yields better prediction results than traditional models.

2. Modeling Methods of the Generalized Approximation Model Based on Fitting Sensitivity

Firstly, fitting sensitivity is introduced, and the relationship between the fitting degree and the fitting sensitivity is illustrated. Secondly, the generalized approximation model is constructed on the basis of fitting sensitivity, and a fitting error analysis for the model is derived. Lastly, a novel GM based on fitting sensitivity is studied.

2.1. Fitting Sensitivity Analyses

Training samples are represented as and fitting values are represented as . The fitting degree can be expressed by the fitting sensitivity model , when there is a same initial value in both and ; that is, , and .

The corresponding analysis is presented as follows:
(1) When , there is overfitting of to ; that is, . More specifically, the changing tendency of the fitting value is in accordance with that of , as shown in Figure 4(a).
(2) When , there is underfitting of to with , as shown in Figure 4(b). Moreover, the fitting value enlarges the trend of the training sample . In this case, is unstable and fluctuates along with . Consequently, inaccurate prediction results are obtained. This condition is called "excessive underfitting."
(3) When , there is underfitting of to with , as shown in Figure 4(c). Moreover, the fitting value compresses the trend of the training sample . In this case, is close to the main trend of . Consequently, accurate prediction results are obtained. This condition is called "reasonable underfitting."

In conclusion, when the initial value of is the same as that of , the different levels of overfitting and underfitting correspond to different values of the fitting sensitivity . In particular, a reasonable fitting degree is obtained by setting in the interval , which allows the model to avoid overfitting and excessive underfitting to some degree.
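The three cases above can be sketched numerically. The thresholds below reflect one plausible reading of Section 2.1 (sensitivity near 1 for overfitting, above 1 for excessive underfitting, between 0 and 1 for reasonable underfitting); the paper's exact symbols were lost in extraction, so the function names and notation here are illustrative:

```python
import numpy as np

def fitting_sensitivity(samples, fitted):
    """Estimate the fitting sensitivity as the mean ratio of successive
    increments of the fitted series to those of the training samples
    (a numerical stand-in for the derivative-based definition)."""
    ds = np.diff(np.asarray(samples, float))
    df = np.diff(np.asarray(fitted, float))
    return float(np.mean(df / ds))  # assumes strictly varying samples

def classify(sensitivity, tol=1e-9):
    # Assumed reading of the three cases in Section 2.1:
    #   S ~ 1   -> overfitting (fitted values track every fluctuation)
    #   S > 1   -> "excessive underfitting" (fitted values enlarge the trend)
    #   0 < S<1 -> "reasonable underfitting" (fitted values compress the trend)
    if abs(sensitivity - 1.0) <= tol:
        return "overfitting"
    if sensitivity > 1.0:
        return "excessive underfitting"
    return "reasonable underfitting"

x = np.arange(5.0)  # toy training samples sharing the fitted series' origin
```

For instance, a fitted series `0.5 * x` compresses the trend of `x` (reasonable underfitting), while `2.0 * x` enlarges it (excessive underfitting).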

2.2. Generalized Approximation Model Based on Fitting Sensitivity

Setting in the interval (), a fitting sensitivity model is built as follows when :
where is the coefficient of whole compressibility to : . is the coefficient of compressibility of to .

The description of (1) is as follows.

(1) becomes large, which means the fitting value is far from . Because the fitting value is required to represent the average level of the training samples, strong noise and violent fluctuations are included in . To obtain a gentle main trend in , the sensitivity of to , , should be decreased.

(2) becomes small, which means the fitting value is close to . Because the fitting value is required to represent the average level of the training samples, the changing trends of are gentle. To maintain this gentle main trend, the changing trend of should follow that of . In this way, the sensitivity of to , , should be increased.

Equation (1) is transformed into the following integral equation:

The integral variable is transformed intowhere is integral offset.

Equation (4) is the implicit expression of the approximation model based on fitting sensitivity. The generalized approximation model is obtained by transforming (4) as follows:

The first item in (5) is the fitting value of the approximation model to the training sample . The second item is , which denotes the adjustment of the traditional prediction model.

The approximation model mentioned above can avoid overfitting. However, the constraint equation needs to be added to avoid “excessive underfitting,” as shown in Figure 5.

If , then , which inevitably leads to a large underfitting degree of to , that is, "excessive underfitting." Consequently, a lower bound for should be set up to avoid excessive underfitting as follows:
where is the adjusting coefficient and .

When , ; when , . Thus, the coefficient should be changed to keep within a range. The definition domain of (6) is the whole time domain; that is, . However, when the number of training samples is large, it is difficult to keep smaller than a small number at every point. As the samples close to the prediction moment play an important role in improving the forecast precision, the definition domain of (6) is restricted to the last points of the training samples. That is, (6) is effective when (see the following equation):
where is the adjusting coefficient and .

Equation (5) is substituted into (7) to finally obtain the generalized approximation model based on fitting sensitivity as follows:
where is the fitting value at the point .

2.3. Fitting Error Analysis for the Generalized Approximation Model Based on Fitting Sensitivity

The fitting error is to be analyzed between the fitting values and training samples.

The following is obtained from (8):

As shown in (11), when , . That is, when the fitting value is equal to a constant , the fitting error is zero. However, actually varies with . Thus, must be a numeric value greater than a small number ; that is, .

The following is obtained from (9):

The lower and upper bounds of the fitting error are obtained as follows:

The model controls the fitting error in a certain range, as shown in (13). Then, the model can avoid the overfitting and underfitting problems to some extent.

2.4. Modeling Methods of a Novel GM Based on Fitting Sensitivity

The GM is suitable for predicting time series of performance parameters with great randomness. It can reduce accumulated errors and the fluctuations of prediction results through accumulated summation. Thus, we introduce the modeling technique for the approximation model based on fitting sensitivity by taking the GM as an example.

Overfitting and underfitting problems also exist in the traditional GM during the training phase, as in other approximation models. As shown in (14), the developed coefficient and Grey-controlled variable in the GM are derived from the least squares method,
where is the developed coefficient, is the Grey-controlled variable, and is the initial value of the training samples.
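For reference, the textbook GM(1,1) construction with least squares estimation, which the novel model replaces with the fitting-sensitivity estimator, can be sketched as follows (the notation `a`, `b` for the developed coefficient and Grey-controlled variable is assumed, since the paper's symbols were lost in extraction):

```python
import numpy as np

def gm11_fit(x0):
    """Estimate the developed coefficient a and Grey-controlled variable b
    of the textbook GM(1,1) by least squares (the estimation step that the
    novel model replaces with the fitting-sensitivity approach)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_predict(x0, steps):
    """Forecast `steps` future points of the original series."""
    x0 = np.asarray(x0, dtype=float)
    a, b = gm11_fit(x0)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse accumulation restores the original-series forecast
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

x0 = 2.0 * 0.9 ** np.arange(8)            # toy exponential series
forecast = gm11_predict(x0, steps=2)
```

On a pure exponential series such as this toy example, the least squares fit is nearly exact, which is why the GM handles monotone trends well while fluctuating series expose its fitting problems.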

To address the fitting problem, model parameters and are evaluated with the approximation model based on fitting sensitivity instead of the least squares method to effectively avoid overfitting and excessive underfitting in the training phase and ultimately gain the precise estimation parameters and . We can then construct the novel GM based on fitting sensitivity by using and , as mentioned in Section 2.2.

The novel GM based on fitting sensitivity is written as follows:

3. Solution of Novel GM Based on Fitting Sensitivity

The model parameters in (15) and (16) include the adjusting coefficients that can avoid the overfitting problem, the adjusting coefficients that can avoid the excessive underfitting problem, and the shape parameters and . Then, are set according to their physical meaning, and are set with the constraint condition of avoiding overfitting and excessive underfitting. Using these parameters, the optimization model based on fitting degree is solved with the genetic algorithm.

3.1. Parameter Setting of the Novel GM

are set to a reasonable range according to their physical meaning in the novel GM. More specifically, the last training samples are used to satisfy the constraint condition to improve the prediction accuracy, is employed to avoid excessive underfitting in the training phase, and are adjusting coefficients to avoid overfitting in the model.

3.1.1. Setting of Parameter

Excessive underfitting is difficult to avoid during the whole training phase because of the large number of training samples. Thus, only the last training samples are used to satisfy the constraint condition to improve the prediction accuracy, as mentioned in Section 2.2. The value of ranges from 1 to , that is, the length of the training samples; that is, , and .

When , a few training samples are constrained to avoid excessive underfitting, but these training samples do not contain the trend of . Inaccurate prediction results are thus obtained.

When , it is difficult to make all training samples be constrained to avoid excessive underfitting. Even with the avoidance of excessive underfitting, inaccurate prediction results are still obtained because of the unrelated historical information on the training samples.

As a result, is set to a reasonable range from 1 to . The last training samples are then selected to be constrained, covering the development tendency of the training samples rather than unrelated historical information.

3.1.2. Setting of Parameter

The parameter is employed to avoid excessive underfitting in the training phase when , making the fitting sensitivity at the last training samples.

When , as shown in (16), which results in high fitting precision and the high sensitivity of the fitting value to when ; when , which results in the low sensitivity of the fitting value to when and the inaccurate fitting values at the last training samples.

As a result, is set to a reasonable range from 0 to on the basis of the sensitivity of the fitting value to at the last training samples; that is, .

3.1.3. Setting of Parameters

The adjusting coefficients to avoid overfitting are , which makes the fitting sensitivity , and , which is the compressibility of affected by to make approach the main trend of and reduce the fluctuation of prediction results, as shown in (16).

The adjustment of can avoid the overfitting of to . Overfitting means . At the same time, , as mentioned in Section 2.1. That is, a fitting sensitivity that is less than 1 can avoid the overfitting problem. So, ; that is, . However, when , and is unaffected by , thus leading to a meaningless fitting process. Hence, the study controls into () with consideration of .

The adjustment of can make approach the main trend of . As mentioned in Section 2.2, the fitting value represents the average level of the training samples. Thus, when ( is a threshold), exhibits high sensitivity to ; when , exhibits low sensitivity to . In other words, can obtain the main trend of with consideration of the two points above.

The lowest fitting sensitivity is , which is obtained from the constraint in (15). When the distance between and reaches the maximum, that is, , the minimum fitting sensitivity can be obtained as follows:

We obtain from (17).

The parameter can be well defined after setting the threshold by taking the maximum difference between the neighboring points, that is, , in the training samples as the unit. In engineering practice, when ( is a constant), the fitting value is far from , with the fitting being ineffective and with exhibiting low sensitivity. The study sets , which means that when the distance between and is over five times , it is a meaningless fitting, that is, when .
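The threshold rule described above can be written directly; the constant 5 and the use of the maximum difference between neighboring training samples come from the text, while the function name is illustrative:

```python
import numpy as np

def meaningless_fit_threshold(samples, c=5.0):
    """Threshold from Section 3.1.3: five times the largest difference
    between neighboring training samples. Beyond this distance the fit is
    treated as meaningless (low sensitivity)."""
    d_max = np.max(np.abs(np.diff(np.asarray(samples, dtype=float))))
    return c * d_max
```

For example, for samples whose largest neighboring jump is 2, the threshold is 10: any fitted value farther than that from the sample is considered a meaningless fit.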

3.2. Adjustment of Parameters Based on the Genetic Algorithm

After the setting of based on their physical meanings, other parameters, , are solved with the genetic algorithm with consideration of the constraint for overfitting and underfitting.

3.2.1. Establishment of the Optimization Model

The first step in the design of the genetic algorithm is to establish the optimization model, and the key to construct the model is to build the adaptive function and nonlinear constraint.

Step 1 (build an adaptive function). The adaptive function is the objective function of the optimization model and is used to decide the search direction of the population. As the novel GM based on fitting sensitivity can avoid the overfitting problem, the adaptive function is derived from (15) as follows:

Step 2 (construct the constraint condition). The adaptive function accounts for overfitting; it must be supplemented with the constraint for the excessive underfitting problem.

Finally, the optimization model solved with the genetic algorithm is obtained as follows.

Find :
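A minimal real-coded genetic algorithm of the kind used to solve this optimization model might look as follows. The operators (truncation selection, blend crossover, Gaussian mutation, elitism) and all settings are generic choices, not those of the paper, and a toy objective stands in for the fitting-degree criterion:

```python
import numpy as np

def ga_minimize(objective, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm: a generic sketch standing in
    for the optimization of the model parameters in Section 3.2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.apply_along_axis(objective, 1, pop)
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]           # truncation selection
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        w = rng.uniform(size=(pop_size, 1))
        children = w * a + (1 - w) * b                  # blend crossover
        children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)  # mutation
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                             # elitism: keep the best
    fit = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fit)]

# Toy objective standing in for the fitting-degree criterion of (15)-(16)
best = ga_minimize(lambda v: np.sum((v - 0.5) ** 2), bounds=[(0, 1), (0, 1)])
```

In the paper's setting, the objective would be the adaptive function of Step 1, with the excessive-underfitting constraint of Step 2 enforced, for example, by a penalty term.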

3.2.2. Design of the Initial Value of the Genetic Variable

The design of the genetic operators is very mature and is not repeated here. On the basis of the requirements of this study, we need to set up the initial population of as the optimization variables, as shown in (20).

Because the methodology proposed in this study modifies the traditional GM, the initial values of the GM parameters are set to the values obtained through the least squares method as shown in the following equation:where .

The design of offset employed in the integration process is to be discussed. The fitting accuracy at the last training samples should be improved during the training phase to achieve expected prediction precision. Thus, and , make satisfy (4); that is,

The fitting value of the GM is substituted into (23). Then, is obtained as follows:where .

The initial value of is set to the mean of , as follows:

4. Experimental Validation

Firstly, the experimental data are described. Then, the prediction results obtained under different values of the parameters in the novel GM are analyzed. Lastly, the prediction precision of the ANN with the regularization method is compared with that of the approximation ANN based on fitting sensitivity.

4.1. Experimental Data

The time series of the performance parameter DEGT (the difference between the monitored exhaust gas temperature and its benchmark) of an aeroengine over 200 cycles is used in the experiment. DEGT is an important performance parameter of the aeroengine, but significant randomness and serious fluctuation are observed in it, as shown in Figure 6.

Randomness and fluctuation in the observed parameters make DEGT difficult to predict, for four main reasons. Firstly, random factors such as the actual working condition of the equipment and human operation cause the time series data to fluctuate, so the chosen approximation model cannot easily approach the original nonlinear behavior. Secondly, because of the accumulation of errors in the iterative prediction method, a long prediction phase equates to a large forecast deviation. Thirdly, when nonlinear models aim at high precision, overfitting occurs in the training phase and produces forecast deviation. Lastly, when many training samples are involved in the training phase, the model parameters for approximating them are difficult to determine, and model accuracy suffers from the underfitting problem.

Real signals are difficult to recover because of the presence of noise. Therefore, the smoothed data approved by the aeroengine manufacturer are regarded as the true values when evaluating prediction accuracy in this work, as shown in Figure 7.
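The manufacturer-approved smoothing procedure is not specified in the paper; as an illustration only, a centered moving average is one simple way to derive such a reference series from a noisy one:

```python
import numpy as np

def moving_average(x, window=5):
    """Centered moving average: an illustrative smoother, not the
    manufacturer's (unspecified) procedure. Note that mode="same"
    zero-pads, so the first and last window//2 points are biased."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="same")
```

In the interior of the series, each output point is the mean of its `window` neighbors, so constant and linear trends pass through unchanged while high-frequency noise is damped.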

4.2. Experiment Analysis of Parameters in Novel GM

To analyze the effect of the parameters on the precision of the novel GM, a single variable is varied in turn, with the others held fixed, to observe the effect of each variable on the prediction precision of the novel Grey model. The values of are set within certain ranges, and the prediction errors are then compared and analyzed as varies. In all experiments, 20 points are predicted.
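The error criterion used throughout these experiments is the standard root-mean-square error:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, the precision criterion of Section 4."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```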

4.2.1. Effect of Parameter on Model Precision

To study the effect of the single variable on the prediction precision, we set , , and . The value range of is , and is set to 1.25, 3, 5, 7, and 9 in five groups in the novel GM. 60 experiments are performed for every value. The box plot of the RMSEs at different values of with the novel GM is shown in Figure 8; for comparison, the RMSEs of the traditional GM are also plotted in Figure 8. There are seven indicators of the RMSEs in the box plot: max, the maximum value; Q3, the 75th percentile; median, the median of the RMSEs; Q1, the 25th percentile; min, the minimum value; outlier, the number of outliers; and DQQ, the distance between Q1 and Q3.
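The seven indicators can be computed as follows; the outlier rule is assumed to be Tukey's 1.5 x IQR fences, since the paper does not state which rule it uses:

```python
import numpy as np

def box_indicators(rmses):
    """Compute the seven box-plot indicators used in Tables 1-4.
    The outlier rule (Tukey's 1.5*IQR fences) is an assumption."""
    r = np.asarray(rmses, dtype=float)
    q1, med, q3 = np.percentile(r, [25, 50, 75])
    iqr = q3 - q1                                   # "DQQ" in the tables
    inlier = r[(r >= q1 - 1.5 * iqr) & (r <= q3 + 1.5 * iqr)]
    return {
        "max": inlier.max(), "Q3": q3, "median": med, "Q1": q1,
        "min": inlier.min(), "outlier": int(len(r) - len(inlier)),
        "DQQ": iqr,
    }
```

Note that max and min are taken over the inliers (whisker ends), which is why, for example, a sample containing one extreme error reports that error as an outlier rather than as the max.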

Values of indicators in box plot of RMSEs at different values of with the novel GM and GM are shown in Table 1.


Indicator   GM      Novel GM (groups: 1.25, 3, 5, 7, 9)
Max         13.93   11.20   9.42    10.8    10.37   10.24
Q3          7.96    5.94    5.18    5.91    5.95    6.17
Median      4.81    3.38    3.22    3.37    3.60    3.36
Q1          3.10    2.27    2.25    2.24    2.37    2.23
Min         1.37    1.09    1.10    1.19    1.05    1.18
Outlier     2       0       3       0       4       1
DQQ         4.86    3.67    2.93    3.67    3.58    3.94

As shown in Table 1, the least value of the RMSE median is 3.22, and the least DQQ is 2.93 with in the novel GM. That is, the prediction precision of the novel GM clusters around 3.22, and the dispersion of the errors is small when .

In the prediction model proposed in this study, the fitting degree in the training phase is partly decided by . When , overfitting is the likely problem, while when , underfitting is the likely problem. So, there must be an optimal value of at which the prediction precision is highest. Here, .

Prediction results of the novel GM are better than the results of the traditional GM, because there are smaller median and DQQ, corresponding to higher precision and lower dispersion.

4.2.2. Effect of Parameter on Model Precision

To study the effect of the single variable on the prediction precision, we set , , and . And is set to 30, 40, 50, 60, 70, and 80 in six groups in novel GM. 60 experiments are done in every . The box plot of RMSEs at different values of is shown in Figure 9.

Values of indicators in box plot of RMSEs at different values of with the novel GM and GM are shown in Table 2.


Indicator   GM      Novel GM (groups: 30, 40, 50, 60, 70, 80)
Max         13.93   10.83   11.11   10.95   11.91   9.42    11.61
Q3          7.96    6.73    6.22    6.19    6.21    5.18    6.26
Median      4.81    4.26    3.53    3.57    3.27    3.22    3.38
Q1          3.10    2.91    2.48    2.39    2.40    2.25    2.32
Min         1.37    1.17    1.08    1.10    1.12    1.10    1.40
Outlier     2       2       1       0       0       3       1
DQQ         4.86    3.82    3.74    3.8     3.81    2.93    3.94

As shown in Table 2, the least value of the RMSE median is 3.22, and the least DQQ is 2.93 with in the novel GM. That is, the prediction precision of the novel GM clusters around 3.22, and the dispersion of the errors is small when .

An excessively large introduces too much unrelated information about the trend of the training samples, while a small one does not include sufficient information. So, there must be an optimal value of at which the prediction precision is highest. Here, .

4.2.3. Effect of Parameter on Model Precision

To study the effect of the single variable on the prediction precision, we set , , and . is set to 1, 10, 20, 30, 40, 50, and 60 in seven groups with novel GM. 60 experiments are done in every . The box plot of RMSEs at different values of is shown in Figure 10.

Values of indicators in box plot of RMSEs at different values of with the novel GM and GM are shown in Table 3.


Indicator   GM      Novel GM (groups: 1, 10, 20, 30, 40, 50, 60)
Max         13.93   11.00   9.42    9.99    9.83    9.76    9.81    11.56
Q3          7.96    6.33    5.18    5.40    5.62    5.40    5.36    5.92
Median      4.81    4.32    3.22    3.26    3.28    3.05    3.39    3.71
Q1          3.10    2.86    2.25    2.34    2.53    2.20    2.30    2.14
Min         1.37    0.94    1.10    1.38    1.09    1.08    1.01    1.03
Outlier     2       1       3       1       0       0       0       0
DQQ         4.86    3.47    2.93    3.06    3.09    3.2     3.06    3.78

As shown in Table 3, the least RMSE median is 3.05 with and the least DQQ is 2.93 with in novel GM. Prediction results of the novel GM are better than results of the traditional GM, because there are smaller median and DQQ, corresponding to higher precision and lower dispersion.

When , the few constrained training samples do not contain the trend of , while when , all training samples, including unrelated historical information, are constrained to avoid excessive underfitting. So, there must be an optimal value of at which the prediction precision is highest.

4.2.4. Effect of Parameter on Model Precision

To study the effect of the single variable on the prediction precision, we set , , and . And is set to 0.1, 0.2, and 0.3 in three groups in novel GM. 60 experiments are done in every . The box plot of RMSEs at different values of is shown in Figure 11.

Values of indicators in box plot of RMSEs at different values of with the novel GM and GM are shown in Table 4.


Indicator   GM      Novel GM (groups: 0.1, 0.2, 0.3)
Max         13.93   9.42    10.54   10.46
Q3          7.96    5.18    5.71    6.35
Median      4.81    3.22    3.14    3.35
Q1          3.10    2.25    2.20    2.15
Min         1.37    1.10    1.12    1.14
Outlier     2       3       1       0
DQQ         4.86    2.93    3.51    4.2

As shown in Table 4, the least value of the RMSE median is 3.14, and the least DQQ is 2.93 in the novel GM. The prediction results of the novel GM are better than those of the traditional GM because of the smaller median and DQQ, corresponding to higher precision and lower dispersion.

When , there is a high fitting degree in the novel GM, and when , the low sensitivity of the fitting value to at the last training samples also affects the prediction precision. So, there must be an optimal value of at which the prediction precision is highest. Here, .

4.2.5. Approximation Model of ANN

The experimental data in Section 4.1 are also predicted with the approximation ANN based on fitting sensitivity (the novel ANN). For comparison, an ANN with the Bayesian regularization algorithm is employed in the same experiment.

Eight experiments are designed as shown in Table 5.


Number  Model                      Parameter settings
1       ANN with regularization
2       Novel ANN                  3      70    10    0.1
3       Novel ANN                  1.25   70    10    0.1
4       Novel ANN                  2      70    10    0.1
5       Novel ANN                  2      70    20    0.2
6       Novel ANN                  2      70    30    0.3
7       Novel ANN                  2      70    15    0.3
8       Novel ANN                  2      70    25    0.3

Prediction errors in novel ANN and ANN with regularization are shown in Figure 12 and Table 6.


Indicator   1      2      3      4      5      6      7      8
Max         9.39   9.86   10.87  10.38  8.69   9.21   10.43  9.37
Q3          5.63   6.35   5.93   5.92   5.37   5.98   6.30   5.46
Median      3.83   3.59   3.43   3.77   3.29   3.55   3.80   3.26
Q1          3.10   2.41   2.34   2.43   2.31   2.60   2.59   2.47
Min         1.06   1.16   1.15   1.11   1.30   1.25   1.22   1.26
Outlier     4      1      0      0      3      0      0      1
DQQ         2.53   3.94   3.59   3.49   3.06   3.38   3.71   2.99

By analyzing the data, we conclude that the median of the novel ANN is smaller than that of the ANN with regularization, although the DQQ of the novel ANN is not the smallest. So, higher prediction precision is obtained, but the dispersion of the novel ANN is slightly larger than that of the ANN with Bayesian regularization.

5. Conclusion

This study established a generalized approximation model based on fitting sensitivity to solve the overfitting and underfitting problems. Taking the GM as an example, a novel GM based on fitting sensitivity was proposed. The novel GM was then solved with a genetic algorithm by establishing an optimization model restricted by a reasonable fitting degree. Using the RMSE as the criterion, we compared the effects of the different model parameters of the novel model on the prediction precision in the experiments. The experiments showed that the novel GM and the novel ANN achieve higher prediction precision than the traditional GM and the ANN with regularization. Therefore, the novel model based on fitting sensitivity proposed in this work can avoid overfitting and underfitting and yield accurate prediction results, in accordance with the theoretical analysis.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are thankful for the support from the Key National Natural Science Foundation of China (no. U1533202), the Shandong Independent Innovation and Achievements Transformation Fund, China (no. 2014CGZH1101), and the National Science-Technology Support Plan Project "the Application Paradigm of Full Lifecycle Information Closed-Loop Management for Construction Machinery" (no. 2015BAF32B01-4).

References

  1. M. Pecht and K. Rui, Diagnostics, Prognostics and System's Health Management, PHM Centre, City University of Hong Kong, 2010.
  2. C. R. Mercer, D. L. Simon, G. W. Hunter et al., "Fundamental technology development for gas-turbine engine health management," in Proceedings of the AIAA Infotech @ Aerospace Conference, Rohnert Park, Calif, USA, May 2007.
  3. D. Baş and İ. H. Boyacı, "Modeling and optimization I: usability of response surface methodology," Journal of Food Engineering, vol. 78, no. 3, pp. 836–845, 2007.
  4. V. Gunaraj and N. Murugan, "Application of response surface methodology for predicting weld bead quality in submerged arc welding of pipes," Journal of Materials Processing Technology, vol. 88, no. 1, pp. 266–275, 1999.
  5. T. Zhang, Q. Zhang, and Q. Wang, "Model detection for functional polynomial regression," Computational Statistics and Data Analysis, vol. 70, pp. 183–197, 2014.
  6. Y. Han, W. Liu, F. Bretz, F. Wan, and P. Yang, "Statistical calibration and exact one-sided simultaneous tolerance intervals for polynomial regression," Journal of Statistical Planning and Inference, vol. 168, pp. 90–96, 2016.
  7. R. H. Jones, "Maximum likelihood fitting of ARMA models to time series with missing observations," Technometrics, vol. 22, no. 3, pp. 389–395, 1980.
  8. Y. Grenier, "Time-dependent ARMA modeling of nonstationary signals," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 31, no. 4, pp. 899–911, 1983.
  9. Q. Zhang, X. Wei, and J. Xu, "On global exponential stability of discrete-time Hopfield neural networks with variable delays," Discrete Dynamics in Nature and Society, vol. 2007, Article ID 67675, 9 pages, 2007.
  10. S. Agatonovic-Kustrin and R. Beresford, "Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research," Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717–727, 2000.
  11. Y. Bao, T. Xiong, and Z. Hu, "Forecasting air passenger traffic by support vector machines with ensemble empirical mode decomposition and slope-based method," Discrete Dynamics in Nature and Society, vol. 2012, Article ID 431512, 12 pages, 2012.
  12. Z. Zhang and Q. Zhao, "The application of SVMs method on exchange rates fluctuation," Discrete Dynamics in Nature and Society, vol. 2009, Article ID 250206, 8 pages, 2009.
  13. F. Ye and Y. Wang, "A novel method for decoding any high-order hidden Markov model," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 231704, 6 pages, 2014.
  14. S.-g. Yue, P. Jiao, Y.-b. Zha, and Q.-j. Yin, "A logical hierarchical hidden semi-Markov model for team intention recognition," Discrete Dynamics in Nature and Society, vol. 2015, Article ID 975951, 19 pages, 2015.
  15. Y. Wang, Y. Dang, Y. Li, and S. Liu, "An approach to increase prediction precision of GM(1,1) model based on optimization of the initial condition," Expert Systems with Applications, vol. 37, no. 8, pp. 5640–5644, 2010.
  16. N. Xie and S. Liu, "Discrete GM (1, 1) and mechanism of grey forecasting model," Systems Engineering-Theory & Practice, vol. 1, article 014, 2005.
  17. H. Bozdogan, "Model selection and Akaike's information criterion (AIC): the general theory and its analytical extensions," Psychometrika, vol. 52, no. 3, pp. 345–370, 1987.
  18. D. R. Anderson, K. P. Burnham, and G. C. White, "AIC model selection in overdispersed capture-recapture data," Ecology, vol. 75, no. 6, pp. 1780–1793, 1994.
  19. C. T. Volinsky and A. E. Raftery, “Bayesian information criterion for censored survival models,” Biometrics, vol. 56, no. 1, pp. 256–262, 2000. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  20. M. Bogdan, J. K. Ghosh, and R. W. Doerge, “Modifying the Schwarz Bayesian information criterion to locate multiple interacting quantitative trait loci,” Genetics, vol. 167, no. 2, pp. 989–999, 2004. View at: Publisher Site | Google Scholar
  21. C. Flynn, C. M. Hurvich, and J. S. Simonoff, “Efficiency and consistency for regularization parameter selection in penalized regression: asymptotics and finite-sample corrections,” NYU Working Paper no. 2451/31317, 2011. View at: Google Scholar
  22. J. Diebolt, M. Garrido, and C. Trottier, “Improving extremal fit: a Bayesian regularization procedure,” Reliability Engineering & System Safety, vol. 82, no. 1, pp. 21–31, 2003. View at: Publisher Site | Google Scholar
  23. J. Zhang, L. Peng, X. Zhao, and E. E. Kuruoglu, “Robust data clustering by learning multi-metric Lq-norm distances,” Expert Systems with Applications, vol. 39, no. 1, pp. 335–349, 2012. View at: Publisher Site | Google Scholar
  24. Z. Shao and M. J. Er, “Efficient leave-one-out cross-validation-based regularized extreme learning machine,” Neurocomputing, vol. 194, pp. 260–270, 2016. View at: Publisher Site | Google Scholar

Copyright © 2017 Lin Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
