Research Article

[Retracted] Software Systems Security Vulnerabilities Management by Exploring the Capabilities of Language Models Using NLP

Algorithm 2

Classification using bidirectional LSTM and attention layer.
Input: labeled security- and nonsecurity-related text
Process:
(1) Steps 1 to 4 as in Algorithm 1
(2) Construct the global attention layer architecture
(3) Send the entire hidden-state sequence to the global attention layer instead of only the last output of the GRU cell (Equation (5))
(4) Feed the learning function with the hidden sequence vectors (Equation (6))
(5) Produce a probability vector
(6) Take the weighted average of the outcomes of the two steps above to obtain a context vector (Equation (7))
(7) Define the attention layer
(8) Construct the FastText-based embedding matrix using “wiki-news-300d-1M-subword.vec”
(9) Build the LSTM-based sequential model architecture: bigru = tf.keras.layers.Bidirectional(); model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
(10) Train and validate the model
(11) Evaluate model performance on the test set
Output:
Accuracy: 84.33%
Precision: 0.91
Recall: 0.84
F1-score: 0.87
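Steps (3)–(6) of the process above can be sketched numerically. The following is a minimal NumPy illustration of a global attention layer over a sequence of recurrent hidden states, not the authors' exact implementation: the learning function is assumed to be a single tanh projection scored against a trainable context vector, and the random arrays stand in for BiGRU outputs and learned parameters.

```python
import numpy as np

def global_attention(hidden_seq, W, b, u):
    # hidden_seq: (T, d) -- the entire hidden-state sequence, not just
    # the last GRU output (step 3)
    # Learning function applied to the hidden sequence vectors (step 4);
    # tanh(HW + b) scored against context vector u is an assumed form
    scores = np.tanh(hidden_seq @ W + b) @ u        # shape (T,)
    # Softmax normalization yields a probability vector (step 5)
    exp_scores = np.exp(scores - scores.max())
    alpha = exp_scores / exp_scores.sum()
    # Weighted average of hidden states gives the context vector (step 6)
    context = alpha @ hidden_seq                    # shape (d,)
    return context, alpha

rng = np.random.default_rng(0)
T, d = 10, 8                        # sequence length and hidden size (arbitrary)
H = rng.standard_normal((T, d))     # stand-in for BiGRU hidden states
W = rng.standard_normal((d, d))     # hypothetical learned projection
b = np.zeros(d)
u = rng.standard_normal(d)          # hypothetical learned context vector

context, alpha = global_attention(H, W, b, u)
```

In a trained model, `W`, `b`, and `u` would be learned jointly with the BiGRU, and the resulting context vector would be passed to the final classification layer.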