| Technique | Functionality | Pros | Cons |
| --- | --- | --- | --- |
| CNN | Automates feature extraction and can be adapted for classification | Pretrained CNN models save time and resources | Vulnerable to adversarial attacks; CNNs can overfit the training data |
| RNN | Captures temporal dependencies in data and distinguishes normal behavior from suspicious patterns | Adaptable to different types of network traffic patterns and diverse datasets | Prone to the vanishing gradient problem during back-propagation through time |
| GRU | Uses gating units to selectively manage and update information | Capable of understanding network activities by capturing long-term dependencies | Prone to overfitting on imbalanced datasets |
| Autoencoders | Do not require labeled intrusion data for training | Well suited to unsupervised learning | Training autoencoders, especially deep ones, is complex; they underperform in supervised tasks where labeled data are abundant |
| DBN | Uses unsupervised learning to automatically learn hierarchical representations of data | Can adapt to evolving attack patterns, making it effective for detecting new attacks | Proper hyperparameter tuning is essential; DBNs require large and diverse datasets |
| LSTM | Analyzes sequential network data to detect patterns and anomalies and to identify cyber threats | Captures long-term dependencies in sequential data to understand the context of network activities | Vulnerable to adversarial attacks; response time is high for large-scale networks |
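The autoencoder row above notes that no labeled intrusion data is needed: the model is trained only on normal traffic, and inputs it reconstructs poorly are flagged as anomalous. The sketch below illustrates that idea with a deliberately tiny single-hidden-layer autoencoder trained by plain gradient descent on synthetic feature vectors; the data, network sizes, and the 99th-percentile threshold are all illustrative assumptions, not a recipe from any specific system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "normal" traffic: points clustered near the
# origin in a 4-D feature space (an assumption for illustration).
normal = rng.normal(0.0, 0.3, size=(200, 4))

# Tiny autoencoder: 4 inputs -> 2 hidden units (bottleneck) -> 4 outputs.
W1 = rng.normal(0.0, 0.1, size=(4, 2))
b1 = np.zeros(2)
W2 = rng.normal(0.0, 0.1, size=(2, 4))
b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # encoder
    return h, h @ W2 + b2         # linear decoder

lr = 0.05
for _ in range(500):              # full-batch gradient descent on MSE
    h, out = forward(normal)
    err = out - normal            # dL/dout for mean-squared-error loss
    gW2 = h.T @ err / len(normal)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = normal.T @ dh / len(normal)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def reconstruction_error(x):
    _, out = forward(x)
    return np.mean((out - x) ** 2, axis=-1)

# Flag anomalies whose error exceeds, say, the 99th percentile of the
# errors observed on the normal training set (an assumed threshold rule).
threshold = np.percentile(reconstruction_error(normal), 99)

anomaly = np.array([[3.0, -3.0, 3.0, -3.0]])   # far from the normal cluster
print(reconstruction_error(anomaly)[0] > threshold)
```

Because the bottleneck and the saturating `tanh` encoder are fitted only to normal points, the out-of-distribution input cannot be reconstructed well, so its error clears the threshold; in practice the same thresholding idea is applied with deeper networks and real traffic features.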