Research Article
Deep Learning-Based Methods for Sentiment Analysis on Nepali COVID-19-Related Tweets
Table 3
Comparison of our method with existing machine learning algorithms in terms of classification performance (%).
Note that P, R, F, and A denote overall Precision, Recall, F1-score, and Accuracy, respectively, reported for three types of embeddings (ft: fastText, da: domain-agnostic, and ds: domain-specific). The hyperparameters of the traditional machine learning algorithms are as follows: SVM + Linear (C: 1, gamma: 0.1), SVM + RBF (C: 100, gamma: 0.1), XGBoost (learning-rate: 0.1, max-depth: 7, n-estimators: 150), ANN (hidden-layer-size: 20, learning-rate-init: 0.01, max-iter: 1000), RF (min-samples-leaf: 3, min-samples-split: 6, n-estimators: 200), LR (C: 10, solver: lbfgs, max-iter: 1000), and K-NN (leaf-size: 35, n-neighbors: 120, p: 1). Boldface denotes the highest performance.
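The hyperparameter names in the note follow scikit-learn conventions, so the baseline configurations can be sketched as below. This is an illustrative reconstruction only: the authors' actual training code is not given in the source, and the mapping of each setting to a scikit-learn parameter is an assumption. XGBoost is noted in a comment rather than instantiated, since it lives in a separate package (`xgboost`).

```python
# Hedged sketch: instantiating the baseline classifiers with the
# hyperparameters listed in the table note. Parameter names assume
# scikit-learn's API; this is not the authors' original code.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

baselines = {
    # SVM with a linear kernel (C: 1, gamma: 0.1)
    "SVM + Linear": SVC(kernel="linear", C=1, gamma=0.1),
    # SVM with an RBF kernel (C: 100, gamma: 0.1)
    "SVM + RBF": SVC(kernel="rbf", C=100, gamma=0.1),
    # A single hidden layer of 20 units, per "hidden-layer-size: 20"
    "ANN": MLPClassifier(hidden_layer_sizes=(20,),
                         learning_rate_init=0.01, max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200,
                                 min_samples_leaf=3, min_samples_split=6),
    "LR": LogisticRegression(C=10, solver="lbfgs", max_iter=1000),
    # p=1 selects the Manhattan distance metric
    "K-NN": KNeighborsClassifier(n_neighbors=120, leaf_size=35, p=1),
}
# XGBoost (learning_rate=0.1, max_depth=7, n_estimators=150) would use
# xgboost.XGBClassifier from the xgboost package; omitted to keep this
# sketch scikit-learn-only.

for name, model in baselines.items():
    print(name, type(model).__name__)
```

Each estimator exposes a standard `fit`/`predict` interface, so all baselines can be trained on the same embedding features (fastText, domain-agnostic, or domain-specific) and scored with the same precision/recall/F1/accuracy metrics.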