| Category | Commonly used models | Advantages | Disadvantages | References |
| --- | --- | --- | --- | --- |
| History of the same period | Smoothing method | Easy to understand; good results under normal conditions and at large time granularity | Excessive reliance on patterns in historical data | Omkar and Kumar [32] |
| Time series | Kalman filtering | Applicable to time series data; interpretable | Unsuitable for capturing nonlinear data patterns | Zhou et al. [33] |
| | AR (autoregressive) | | | Li et al. [34] |
| | ARIMA | | | Gummadi and Edara [35] |
| Machine learning | SVM/SVR | Suitable for learning nonlinear features in data | Low computational efficiency on large data volumes | Li and Xu [36] |
| | K-nearest neighbors | | | Sun et al. [37] |
| | Linear regression | | | Khiari and Olaverri-Monreal [38] |
| | Decision tree | | | Alajali et al. [39] |
| | Random forest | | | Zhou et al. [40] |
| Deep learning | RNN | Learns both linear and nonlinear patterns with good data-fitting capability | Low interpretability and low efficiency | Pang et al. [41] |
| | LSTM | | | Agafonov and Yumaganov [23, 42] |
| | GRU | | | Shu et al. [43] |
| Ensemble model | AdaBoost | Can select an appropriate base model for the ensemble according to the characteristics of different datasets | Prone to overfitting; low interpretability; poor results on imbalanced data | Zhou et al. [44] |
| | Bootstrap aggregation (bagging) | | | Vaish et al. [45] |
| | Stacked generalization | | | Sharma et al. [46] |
| | Gradient boosting machines (GBM) | | | Monego et al. [47, 48] |
| | Gradient boosted regression trees (GBRT) | | | Chen et al. [49] |
| Combined model | Direct averaging, weighted averaging, and other combinations | Highly applicable across various sub-models and combination schemes | Choice of combination method and sub-models is subjective | Yan et al. [50] |
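To make the "Combined model" row concrete, the following is a minimal sketch of weighted averaging over sub-model forecasts, with weights set inversely proportional to each sub-model's validation error. The sub-model names (ARIMA, LSTM) and all numbers are illustrative assumptions, not values from the cited works.

```python
import numpy as np

def combine_weighted(preds, val_errors):
    """Weighted-average combination: weight each sub-model's forecast
    inversely to its validation error, then normalize weights to sum to 1."""
    w = 1.0 / np.asarray(val_errors, dtype=float)
    w /= w.sum()
    return np.average(np.asarray(preds, dtype=float), axis=0, weights=w)

# Hypothetical 3-step-ahead forecasts from two sub-models
arima_pred = np.array([100.0, 110.0, 120.0])  # assumed ARIMA output
lstm_pred = np.array([90.0, 105.0, 125.0])    # assumed LSTM output

# Assumed validation MAEs: ARIMA = 4.0, LSTM = 2.0, so the LSTM
# forecast receives twice the weight (2/3 vs. 1/3)
combined = combine_weighted([arima_pred, lstm_pred], val_errors=[4.0, 2.0])
print(combined.round(2))  # → [ 93.33 106.67 123.33]
```

This illustrates the subjectivity noted in the table: the result depends directly on the chosen weighting rule and sub-models, for which there is no universally optimal selection.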