Research Article
Fake News Detection Using Machine Learning Ensemble Methods
Table 2
Overall accuracy score for each dataset.
| Algorithm | DS1 | DS2 | DS3 | DS4 |
| --- | --- | --- | --- | --- |
| **Individual learners** | | | | |
| Logistic regression (LR) | 0.97 | 0.91 | 0.91 | 0.87 |
| Linear SVM (LSVM) | 0.98 | 0.37 | 0.53 | 0.86 |
| Multilayer perceptron | 0.98 | 0.35 | 0.94 | 0.90 |
| K-nearest neighbors (KNN) | 0.88 | 0.28 | 0.82 | 0.77 |
| **Ensemble learners** | | | | |
| Random forest (RF) | 0.99 | 0.35 | 0.95 | 0.91 |
| Voting classifier (RF, LR, KNN) | 0.97 | 0.88 | 0.94 | 0.88 |
| Voting classifier (LR, LSVM, CART) | 0.96 | 0.86 | 0.92 | 0.85 |
| Bagging classifier (decision trees) | 0.98 | 0.94 | 0.94 | 0.90 |
| Boosting classifier (AdaBoost) | 0.98 | 0.92 | 0.92 | 0.86 |
| Boosting classifier (XGBoost) | 0.98 | 0.94 | 0.94 | 0.89 |
| **Benchmark algorithms** | | | | |
| Perez-LSVM | 0.99 | 0.79 | 0.96 | 0.90 |
| Wang-CNN | 0.87 | 0.66 | 0.58 | 0.73 |
| Wang-Bi-LSTM | 0.86 | 0.52 | 0.57 | 0.62 |
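The ensemble learners compared in Table 2 are all available in scikit-learn. The sketch below shows how such a comparison could be set up; it is a minimal illustration, not the paper's pipeline — the synthetic data, train/test split, and hyperparameters are assumptions, and the paper's text-feature extraction step is omitted.

```python
# Minimal sketch of the ensemble-learner comparison in Table 2.
# Assumption: synthetic features stand in for the paper's text features.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, VotingClassifier,
                              BaggingClassifier, AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder dataset (the paper uses labelled news articles, DS1-DS4).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Voting classifier (RF, LR, KNN)": VotingClassifier([
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ]),
    # BaggingClassifier uses decision trees as its default base estimator.
    "Bagging classifier (decision trees)": BaggingClassifier(random_state=0),
    "Boosting classifier (AdaBoost)": AdaBoostClassifier(random_state=0),
}

results = {}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {results[name]:.2f}")
```

The voting classifier aggregates the predictions of its heterogeneous base learners, while bagging and boosting build many instances of a single weak learner — the same three ensemble strategies whose accuracies are reported in the table.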