A hybrid HIDS strategy in which PKNN is used to detect input attacks and OSVM is applied for outlier rejection; Naïve Bayes is applied as the feature selection approach
Kyoto 2006+ dataset, KDD-Cup ‘99 dataset, and NSL-KDD dataset
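The outlier-rejection step can be sketched with scikit-learn's one-class SVM: fit on normal records only, then reject unseen records the model scores as outliers. The data, kernel, and `nu` value below are illustrative assumptions, not the surveyed configuration.

```python
# Hypothetical sketch of OSVM outlier rejection: train on "normal" records,
# then predict() returns +1 for inliers and -1 for rejected outliers.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 4))   # normal connection records
attacks = rng.normal(loc=6.0, scale=1.0, size=(10, 4))   # far-away attack records

osvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)
pred = osvm.predict(attacks)        # -1 marks rejected outliers
print((pred == -1).mean())
```

With attack points this far from the training distribution, nearly all of them fall outside the learned boundary.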
Flexible mutual information-based feature selection (FMIFS) is employed as the feature selector; MH-ML (multilevel hybrid machine learning), MH-DE (multilevel hybrid data engineering), and MEM (micro expert module) are trained on the KDD-Cup ‘99 dataset
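The core relevance score behind mutual information-based selection can be illustrated from scratch; this is only the basic I(feature; class) computation for discrete features, not the flexible variant's redundancy-aware criterion.

```python
# Rough sketch of mutual-information feature scoring: a feature that copies
# the class label scores I = H(y) = ln 2; an uninformative one scores near 0.
import numpy as np

def mutual_information(x, y):
    """I(x; y) in nats for two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

y = np.array([0, 0, 0, 1, 1, 1])
informative = np.array([0, 0, 0, 1, 1, 1])   # copies the label
noise = np.array([0, 1, 0, 1, 0, 1])
print(mutual_information(informative, y), mutual_information(noise, y))
```

Features would then be ranked by this score, with the flexible variant additionally penalizing redundancy among already-selected features.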
Three feature selection methods are employed: linear correlation-based feature selection (LCFS), modified mutual information-based feature selection (MMIFS), and the forward feature selection algorithm (FFSA); chromosomes are represented as vectors and training data as matrices
KDD-Cup ‘99 dataset and CTU-13 dataset
FPR, accuracy rate
LCFS, FFSA, MMIFS
An FPR of 0.17% is achieved, and the accuracy rate for DoS is 99.8%
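A minimal sketch of the linear-correlation idea (not the authors' exact LCFS): rank features by the absolute Pearson correlation between each feature column and the class label, then keep the top k. The data here is synthetic for illustration.

```python
# LCFS-style filter: score each feature by |corr(feature, label)|, keep top k.
import numpy as np

def lcfs(X, y, k):
    """Return indices of the k features most linearly correlated with y."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300).astype(float)
noise = rng.normal(size=(300, 3))
X = np.column_stack([y + 0.1 * rng.normal(size=300), noise])  # feature 0 tracks the label

print(lcfs(X, y, 2))
```

MMIFS and FFSA replace the scoring rule (mutual information, or a greedy wrapper search) but follow the same rank-and-keep pattern.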
Random forest, Naïve Bayes, J48, k-nearest neighbor algorithm
WrapperSubsetEval and CfsSubsetEval are applied as the two feature selection techniques, while random forest, the k-NN algorithm, Naïve Bayes, and J48 are applied as the classifiers
NSL-KDD dataset
Detection rate, accuracy rate, F-measure, TP rate, FP rate, MCC, and time
Wrapper and filter
Overall accuracy rate of 99.86%, overall FPR of 0.00035, overall detection rate of 0.9828, F-measure of 0.706, overall TPR of 0.929, overall MCC of 0.955, and total execution time of 10.625 seconds (executed on the NSL-KDD dataset with 25 attributes on all attack types)
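The wrapper approach (as opposed to a filter) can be sketched as a greedy search that adds whichever feature most improves a classifier's cross-validated accuracy. The classifier choice and stopping rule below are assumptions, not Weka's WrapperSubsetEval configuration.

```python
# Wrapper-style forward selection: evaluate candidate subsets with an actual
# classifier (here Gaussian Naive Bayes) and keep the best-scoring feature.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=300)
X = np.hstack([y[:, None] + 0.2 * rng.normal(size=(300, 1)),  # informative feature 0
               rng.normal(size=(300, 5))])                    # five noise features

selected = []
for _ in range(2):  # greedily pick 2 features
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    scores = {j: cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=3).mean()
              for j in remaining}
    selected.append(max(scores, key=scores.get))
print(selected)
```

Filter methods such as CfsSubsetEval instead score features without training the final classifier, which is cheaper but blind to classifier-specific interactions.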
EWOD is used to detect outliers in the data, IFLFSA is used for feature selection, and an intelligent layered classification algorithm is applied to classify the data
k-nearest neighbor is used to assign a class to unknown data points, ID3 is used as the feature selector, and isolation forest is employed to segregate normal data from anomalies
NSL-KDD and KDD-Cup ‘99 datasets
Detection rate, accuracy rate, false alarm rate
k-NN
Performance on the KDD-Cup ‘99 dataset shows a detection rate of 97.20%, accuracy of 96.92%, and an FPR of 7.49%. Performance on the NSL-KDD dataset shows a detection rate of 95.5%, accuracy of 93.95%, and an FPR of 10.34%
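The layered idea above can be sketched with scikit-learn: an isolation forest first separates anomalous records from normal ones, then k-NN assigns a class to the points kept as normal. The data, contamination handling, and k are assumptions for illustration.

```python
# Layered pipeline sketch: isolation forest gates out anomalies,
# then k-NN labels only the records that pass the gate.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X_norm = rng.normal(size=(200, 2))
y_norm = (X_norm[:, 0] > 0).astype(int)          # two classes of normal traffic
X_anom = rng.normal(loc=8.0, size=(10, 2))       # obvious anomalies

iso = IsolationForest(random_state=0).fit(X_norm)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_norm, y_norm)

X_new = np.vstack([X_norm[:20], X_anom])
is_normal = iso.predict(X_new) == 1              # isolation forest gate: +1 = inlier
labels = knn.predict(X_new[is_normal])           # classify only the inliers
print(is_normal.sum(), len(labels))
```

Segregating anomalies before classification keeps the k-NN step from forcing a normal-class label onto clearly abnormal traffic.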
Hierarchical extreme learning machine (H-ELM), extreme learning machine (ELM), and k-nearest neighbor (k-NN) are applied for classification, and an SDN controller is employed as the feature selection approach
NSL-KDD dataset
k-NN, ELM, H-ELM
SDN controller
An accuracy of 84.29%, FPR of 6.3%, precision of 94.18%, recall of 77.18%, and F-measure of 84.83%
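A minimal extreme learning machine (ELM) sketch for reference: the input-to-hidden weights are random and fixed, and only the output weights are solved by least squares, which is what makes ELM training fast. Sizes and data below are illustrative, not the surveyed configuration.

```python
# ELM in a few lines: random hidden layer + least-squares output weights.
import numpy as np

rng = np.random.default_rng(4)

def elm_fit(X, y, hidden=50):
    W = rng.normal(size=(X.shape[1], hidden))      # random, fixed input weights
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only beta is learned
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy binary labels
W, b, beta = elm_fit(X, y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == y).mean()
print(round(acc, 3))
```

H-ELM stacks such layers hierarchically, using earlier random layers as feature extractors for later ones.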
An integration technique () is employed to improve the classification accuracy
NSL-KDD dataset and UNSW-NB15 dataset
()
Correlation-based feature selection technique
An accuracy of 98.45%, specificity of 94.38%, sensitivity of 92.94%, and execution time of 500 seconds are obtained on the NSL-KDD dataset. For the UNSW-NB15 dataset, an accuracy of 96.44%, specificity of 98.4%, a sensitivity of 50.4%, and an execution time of 1023 seconds are achieved
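For reference, the reported accuracy, specificity (true-negative rate), and sensitivity (true-positive rate) all derive from confusion-matrix counts; the counts below are made up for illustration.

```python
# How accuracy, sensitivity, and specificity relate to TP/TN/FP/FN counts.
def rates(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the attack class
    specificity = tn / (tn + fp)   # recall on the normal class
    return accuracy, sensitivity, specificity

print(rates(90, 95, 5, 10))  # → (0.925, 0.9, 0.95)
```

The UNSW-NB15 row above (high specificity, 50.4% sensitivity) illustrates why both rates matter: accuracy alone hides that half the attacks were missed.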
Multiclassifier, deep neural network, kernel density
Random forest differential evaluation with kernel density is used for predicting unusual activities. A multiclassifier is applied for input classification, while a deep neural network is employed for learning and training on the data. Kernel density is used for clustering and prediction of the data.
HHAR dataset
Random forest differential evaluation with kernel density, multiclassifier, deep neural network, kernel density
Basic sort-merge tree
An accuracy rate of 98.4%, a sensitivity of 96.02%, and a specificity of 99.8%
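The kernel-density side of this approach can be sketched with scikit-learn: fit a KDE on observed activity records and treat low-density points as unusual. The bandwidth and threshold quantile below are assumptions.

```python
# KDE-based unusual-activity flagging: records whose log-density falls below
# a low quantile of the training densities are marked as unusual.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(5)
readings = rng.normal(size=(500, 3))              # ordinary activity records
odd = np.array([[7.0, 7.0, 7.0]])                 # a far-off, unusual record

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(readings)
threshold = np.quantile(kde.score_samples(readings), 0.01)  # 1% density cutoff
is_unusual = kde.score_samples(odd) < threshold
print(bool(is_unusual[0]))
```

In the surveyed pipeline this density score would feed the prediction stage alongside the random-forest and deep-network components.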