Research Article

Deep Learning-Based Methods for Sentiment Analysis on Nepali COVID-19-Related Tweets

Table 3

Comparison of our method with existing machine learning algorithms in terms of classification performance (%).

| Algorithm | P (ft) | R (ft) | F (ft) | A (ft) | P (da) | R (da) | F (da) | A (da) | P (ds) | R (ds) | F (ds) | A (ds) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM + Linear | 67.1 | 62.2 | 62.2 | 63.9 | 57.3 | 44.0 | 44.0 | 47.2 | **63.5** | 54.1 | 54.1 | 56.3 |
| SVM + RBF | **70.2** | 51.0 | 51.0 | 40.2 | 58.5 | 53.9 | 53.9 | 55.5 | 63.2 | 51.5 | 51.5 | 54.6 |
| XGBoost | 69.0 | **69.5** | **69.5** | **66.7** | 58.3 | 59.8 | 59.8 | 56.3 | 61.5 | **62.3** | **62.3** | 58.9 |
| ANN | 63.1 | 63.7 | 63.7 | 63.4 | 56.8 | 58.8 | 58.8 | 54.7 | 62.0 | 61.9 | 61.9 | 58.0 |
| RF | 69.6 | 67.5 | 67.5 | 63.5 | **60.8** | **60.7** | **60.7** | **57.0** | 59.9 | 61.9 | 61.9 | 59.5 |
| NB | 59.4 | 56.1 | 56.1 | 57.5 | 47.0 | 48.7 | 48.7 | 44.9 | 48.5 | 50.0 | 50.0 | 45.8 |
| LR | 65.1 | 67.4 | 67.4 | 64.7 | 54.2 | 56.6 | 56.6 | 52.0 | 63.2 | 61.8 | 61.8 | **61.8** |
| K-NN | 65.2 | 65.2 | 65.2 | 60.3 | 51.8 | 57.5 | 57.5 | 52.8 | 61.3 | 61.6 | 61.6 | 57.4 |

Note that P, R, F, and A denote overall Precision, Recall, F1-score, and Accuracy for the three types of embeddings (ft: fastText, da: domain-agnostic, and ds: domain-specific), respectively. The hyperparameters of the traditional machine learning algorithms are as follows: SVM + Linear (C: 1, gamma: 0.1), SVM + RBF (C: 100, gamma: 0.1), XGBoost (learning-rate: 0.1, max-depth: 7, n-estimators: 150), ANN (hidden-layer-size: 20, learning-rate-init: 0.01, max-iter: 1000), RF (min-samples-leaf: 3, min-samples-split: 6, n-estimators: 200), LR (C: 10, solver: lbfgs, max-iter: 1000), and K-NN (leaf-size: 35, n-neighbors: 120, p: 1). Boldface denotes the highest performance.
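As an illustration, the baselines above could be instantiated in scikit-learn with the hyperparameters listed in the note. This is a minimal sketch, not the authors' code: the mapping of the paper's hyperparameter names onto scikit-learn parameter names, the use of `GaussianNB` for NB (the note lists no NB hyperparameters), and the XGBoost constructor shown in the comment are all assumptions.

```python
# Sketch (assumption): the table's classical baselines rebuilt with
# scikit-learn, using the hyperparameters given in the table note.
# Each model would be fit on one of the three tweet-embedding matrices
# (fastText, domain-agnostic, or domain-specific).
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

models = {
    "SVM + Linear": SVC(kernel="linear", C=1, gamma=0.1),
    "SVM + RBF": SVC(kernel="rbf", C=100, gamma=0.1),
    # XGBoost is a separate package (assumption on the constructor):
    # xgboost.XGBClassifier(learning_rate=0.1, max_depth=7, n_estimators=150)
    "ANN": MLPClassifier(hidden_layer_sizes=(20,),
                         learning_rate_init=0.01, max_iter=1000),
    "RF": RandomForestClassifier(min_samples_leaf=3,
                                 min_samples_split=6, n_estimators=200),
    "NB": GaussianNB(),  # assumption: Gaussian NB for continuous embeddings
    "LR": LogisticRegression(C=10, solver="lbfgs", max_iter=1000),
    "K-NN": KNeighborsClassifier(leaf_size=35, n_neighbors=120, p=1),
}
```

The overall P, R, F, and A columns could then be computed per fitted model with `sklearn.metrics.accuracy_score` and `precision_recall_fscore_support` (with an averaging scheme over the sentiment classes).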