Research Article

An Approach for Demand Forecasting in Steel Industries Using Ensemble Learning

Table 1

Summary of the most recent works on demand forecasting in various fields, with their input factors and performances.

Publication: Ribeiro and dos Santos Coelho [8]
Objectives: Ensembling bagging (RFR), boosting (GBR and XGBR), and stacking (STACK), alongside the reference models SVR, MLP, and KNN
Domain: Agribusiness prediction
Performance: MAPE = 0.0093–1.6354, RMSE = 0.0013–0.0680
Finding: The ensemble approaches outperform the single models, especially the STACK model

Publication: Yu et al. [9]
Objectives: Developing an ensembling and decomposition algorithm with ensemble empirical mode decomposition (EEMD) and extended extreme learning machine (EELM)
Domain: Crude oil price forecasting
Performance: MAPE = 0.0003, RMSE = 0.1431
Finding: This ensembling method outperforms several existing popular models as well as single models in terms of accuracy, speed, and resilience

Publication: Adhikari et al. [12]
Objectives: Ensembling time series methods and regression techniques to reduce the forecast error from the actual value
Domain: Supply chain demand forecasting
Performance: TS FACC = , Reg FACC = , En FACC = 
Finding: Showed superior outcomes by counteracting both over- and under-forecasting, bringing the forecast values closer to the actual values in most cases

Publication: Cankurt [26]
Objectives: Developing M5P and M5-Rule model trees with randomization, boosting, bagging, voting, and stacking to forecast tourism demand in Turkey
Domain: Tourism demand forecasting
Performance: R = 0.9866, R² = 0.973, RAE = 14.96, RRSE = 16.77
Finding: The bagging and boosting methods contribute significantly to the performance of regression tree models

Publication: Yang et al. [27]
Objectives: Developing bagging and combining approaches with the heterogeneous autoregression (HAR) model to predict agricultural commodity futures
Domain: Agricultural commodity forecasting
Performance: R² = 
Finding: The HAR model with bagging shows outstanding performance compared with the AR benchmark

Publication: Wang et al. [28]
Objectives: Ensembling empirical mode decomposition to analyze global food price volatility
Domain: Food price volatility forecasting
Performance: MSE = 74.29, MAE = 6.969, MAPE = 3.799
Finding: The model successfully analyzes the fluctuations of three types of agricultural commodities

Publication: Tao et al. [29]
Objectives: Developing a combination of EEMD, ELM, and ARIMA
Domain: Hog price forecasting
Performance: R = 
Finding: The model outperforms on the selected data and is proposed as an alternative for short-term hog price forecasting

Publication: Ribeiro et al. [30]
Objectives: Designing nonlinear prediction models for the ensemble aggregation of a WaveNet ensemble
Domain: Electricity load time series
Performance: 
Finding: All preprocessing stages and aggregation techniques contribute to the overall performance, although perhaps not all to the same extent, as a ceiling analysis would indicate

Publication: da Silva et al. [31]
Objectives: Using decomposition-ensemble learning approaches for multi-step-ahead very short-term forecasting, including k-nearest neighbors (KNN), partial least squares regression (PLSR), ridge regression (RR), support vector regression (SVR), and Cubist regression (CR)
Domain: Wind energy forecasting
Performance: MAE = 101.32, MAPE = 8.63, RMSE = 138.97
Finding: CEEMD–BC–STACK, a stacking-ensemble learning technique, significantly improved the accuracy of the weak CEEMD models by merging their forecasts with a strong model

Publication: Yu et al. [32]
Objectives: Proposing a decomposition-ensemble learning model (ARIMA, SVR, ANN, RVFL, KRR, and ELM)
Domain: Gasoline forecasting
Performance: MAPE = 
Finding: The decomposition-ensemble approach is better for prediction; an ensemble model or instantaneous frequency analysis is applicable to complex and irregular characteristics

Publication: Liu et al. [33]
Objectives: Developing a novel wind speed ensemble forecasting system (WSEFS) to enhance point forecasting (PF) and interval forecasting (IF)
Domain: Wind speed forecasting
Performance: MAPE = 1.9322%, 2.1579%, and 2.2808% for the 1st, 2nd, and 3rd steps, respectively
Finding: The VMD technique outperforms the mayfly algorithm (MA) and ICEEMDAN; the MOMA ensemble forecasting system outperforms MOGWO and MODA

Publication: Cook and Weisberg [34]
Objectives: Comparing imperialist competitive algorithm (ICA) and particle swarm optimization (PSO) algorithms with an MLP neural network trained with the back-propagation algorithm
Domain: Nondeposition sediment transport prediction
Performance: MAPE =  and RMSE = 
Finding: Compared with the PSO and MLP algorithms, the ICA method is more accurate for computing the densimetric Froude number in pipe channels

Publication: Ebtehaj and Bonakdari [35]
Objectives: Developing an extreme learning machine (ELM) and comparing it with back propagation (BP), genetic programming (GP), and existing sediment transport equations
Domain: Sediment transport estimation
Performance: RMSE = 0.309 and MARE = 0.059
Finding: FFNN-ELM performs well and is an alternative method for predicting the Fr
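Several entries above combine bagging, boosting, and stacking in the way Ribeiro and dos Santos Coelho [8] describe: base learners of different families are fitted and a meta-learner combines their predictions. The following is a minimal sketch of that stacking setup using scikit-learn; the synthetic demand data, feature count, and hyperparameters are illustrative assumptions, not values from any of the cited studies.

```python
import numpy as np
from sklearn.ensemble import (
    RandomForestRegressor,       # bagging-style base learner (RFR)
    GradientBoostingRegressor,   # boosting-style base learner (GBR)
    StackingRegressor,
)
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic stand-in for demand data: 3 drivers, positive target.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(300, 3))
y = 5.0 + 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Base learners: one bagging model and one boosting model.
base_learners = [
    ("rfr", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gbr", GradientBoostingRegressor(random_state=0)),
]

# STACK: an SVR meta-learner combines the base learners'
# out-of-fold predictions.
stack = StackingRegressor(estimators=base_learners, final_estimator=SVR())
stack.fit(X_train, y_train)

mape = mean_absolute_percentage_error(y_test, stack.predict(X_test))
print(f"STACK MAPE: {mape:.4f}")
```

Swapping the base or meta-learners (e.g., XGBR, MLP, or KNN, as in [8]) only changes the `estimators` list; the stacking mechanics stay the same.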

MAPE = mean absolute percentage error, RMSE = root mean square error, MSE = mean square error, MAE = mean absolute error, R = Pearson correlation, R² = coefficient of determination, FACC = forecast accuracy check, TS FACC = time series FACC, Reg FACC = regression FACC, En FACC = ensemble FACC, RAE = relative absolute error, RRSE = root relative square error, CEEMD = complete ensemble empirical mode decomposition, VMD = variational mode decomposition, MOMA = multiobjective mayfly algorithm, ICEEMDAN = improved complete ensemble empirical mode decomposition with adaptive noise, MOGWO = multiobjective grey wolf optimizer, MODA = multiobjective dragonfly algorithm, FFNN = feed-forward neural network, Fr = densimetric Froude number, MARE = mean absolute relative error.
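The error metrics used in the performance column follow their standard definitions, which can be written out as plain functions; the function and variable names below are ours, not from the cited works.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error: average of squared residuals."""
    return float(np.mean((y_true - y_pred) ** 2))

def rmse(y_true, y_pred):
    """Root mean square error: square root of the MSE."""
    return float(np.sqrt(mse(y_true, y_pred)))

def mae(y_true, y_pred):
    """Mean absolute error: average of absolute residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    """Mean absolute percentage error: mean of |residual / actual|
    (multiply by 100 to express as a percentage)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Note that the cited papers report MAPE in different conventions (a fraction in [8, 9], a percentage in [33]), so the scale of the reported values is not directly comparable across rows.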