Research Article

[Retracted] Software Systems Security Vulnerabilities Management by Exploring the Capabilities of Language Models Using NLP

Table 5. Comparison of all the experiments.

| Experiment | Process | Output |
| --- | --- | --- |
| CNN and FastText embedding | CNN-based processing | Accuracy: 71.89%; precision: 0.88; recall: 0.72; F1-score: 0.77 |
| Bidirectional LSTM with FastText embedding | Bidirectional GRU or LSTM with global attention | Accuracy: 84.33%; precision: 0.91; recall: 0.84; F1-score: 0.87 |
| USE model | USE pretrained model with TF 1.0 | Accuracy: 92.61%; precision: 0.95; recall: 0.93; F1-score: 0.93 |
| NNLM | NNLM-based sentence encoder, with pretrained model | Accuracy: 90.16%; precision: 0.81; recall: 0.90; F1-score: 0.86 |
| BERT | BERT tokenization and TF Keras modeling | Accuracy: 91.39%; precision: 0.92; recall: 0.91; F1-score: 0.88 |
| DistilBERT | DistilBERT-based preprocessing of data | Accuracy: 94.77%; precision: 0.95; recall: 0.95; F1-score: 0.94 |
| BERT | Data preprocessing and tokenization with BERT | Accuracy: 97.44% |
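The metrics reported in Table 5 follow the standard binary-classification definitions. As a minimal sketch (not the paper's code, and using toy labels rather than the paper's data), they can be computed as:

```python
# Hypothetical illustration: how accuracy, precision, recall, and F1
# (as reported in Table 5) are conventionally computed for a binary classifier.

def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy labels and predictions (illustrative only, not the paper's data).
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(f"Accuracy: {acc:.2%}; precision: {prec:.2f}; recall: {rec:.2f}; F1-score: {f1:.2f}")
```

Precision here penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean, which is why the F1 column in Table 5 sits between each model's precision and recall.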