Abstract

The goal of a network intrusion detection system (NIDS) is to spot malicious activity in a network by examining the behavior of network traffic. To find anomalies, NIDSs rely heavily on machine learning (ML) and data mining techniques. The performance of NIDSs is significantly affected by feature selection, because the numerous characteristics used in anomaly identification take a long time to analyze. The feature selection strategy therefore influences both the time required to analyze traffic behavior and the achievable accuracy. In the current work, the goal was to provide a feature selection model for NIDSs. An improved grey wolf optimization (IGWO) for feature selection (FS) was proposed to address these difficulties. The three primary processes in this proposed study are preprocessing and feature extraction, feature selection and classification, and evaluation of results. IGWO is used to choose a subset of input variables by minimizing features, measuring accuracy in the search space, and discovering the best solution. A particular structure of hierarchical progressive networks (HPNs), built from multimodal deep autoencoders (MDAEs) and attention-based long short-term memories (AB-LSTMs), forms the enhanced multimodal-sequential IDS. Using the EMS-DHPN technique, it is possible to automatically understand relationships between neighboring network connections and to efficiently integrate information from many levels of characteristics inside a network connection simultaneously. The suggested hybrid IDS, called the IGWO-EMS-DHPN technique, was evaluated using two intrusion datasets, UNSW-NB15 and CICIDS-2017, and compared with other existing classifiers in terms of relative accuracy, precision, recall, and F-score in categorization. While several classifiers have been developed, the suggested IGWO-EMS-DHPN classifier obtains the maximum accuracy.

1. Introduction

Due to the growing interconnection of worldwide internet infrastructures, security specialists discover additional security flaws every month. As a result, cybercriminals can infiltrate these systems and engage in harmful activities [1]. Identifying different network assaults, particularly unanticipated threats, is an inescapable major issue. Since hackers can access databases containing financial, medical, and other sensitive data, cyber security professionals and designers must devise new IDSs to guard against such attacks. IDSs assist in detecting threats by analyzing packets captured from the network. DoS (denial of service) is a common leading threat, as it prevents legitimate users from accessing network resources by introducing unwanted traffic. Malware, on the other hand, is malicious software that takes advantage of flaws in computer networks to achieve its objectives [2].

Internet users have significantly benefited from developments in ICTs (information and communication technologies). However, network intrusion threats, such as DDoS, cross-site scripting, and probe attacks, are becoming more and more prevalent, making information security a growing concern. NIDSs are critical security countermeasures for detecting and preventing malicious intrusions in the cybersecurity industry. Network intrusion detection is a standard classification problem; its goal is to monitor network behavior at each moment and decide whether to issue an alarm message to the network [3].

IDSs are critical scientific breakthroughs in information security as they can detect invasions that are ongoing or have already occurred. IDSs must determine whether traffic is normal or belongs to attack classes such as DoS (denial of service), U2R (user to root), probe, and R2L (root to local) [4]. In short, IDSs enhance classifier performance in identifying invasive behavior. Traditional IDSs monitor traffic using detailed descriptions such as rules or signatures, where false positive and false negative detections were common, resulting in false alarms. IDSs (intrusion detection systems) can be host based (HIDSs), network based (NIDSs), signature/anomaly based (AIDSs), or hybrid [5]. Hybrid IDSs combine features of NIDSs and HIDSs and are highly reliable security frameworks [6].

MLTs (machine learning technologies) have been extensively employed to determine various sorts of threats and aid network administrators in preventing intrusions. However, most typical MLTs are shallow and focus on feature engineering and selection, making them ineffective for classifying huge volumes of intrusion data. Such shallow learning is also unsuitable for high-dimensional learning with huge data. These challenges led the researchers to seek a better approach [7].

On the other hand, DLTs (deep learning techniques) learn better representations from data and thus generate significantly improved approaches. DLTs have made significant strides in AI (artificial intelligence) during the last decade and have outperformed shallow MLTs in areas like finance, automatic machine translation, speech recognition, and computer vision [8]. Singular DLTs usually perform admirably when large amounts of data are available, as in computer vision. NIDSs can benefit from DLTs due to their ability to generalize to new environments while handling voluminous data, and they can therefore be applied to identify new threats [9]. Existing intrusion detection models, however, inevitably lose some information from traffic data and can only classify using incomplete feature information, because these methods either completely ignore or only superficially consider the complex feature information within a network connection and the temporal information between network connections.

This research work proposes the EMS-DHPN framework to utilize multiple kinds of information in traffic data and thus enhance the effectiveness of AIDSs. MDAEs are designed to learn the distributions of subfeature vectors. Also, AB-LSTM technology is used to support the methodology's intelligence. Experimental results validate the developed model. This work's significant contributions are as detailed below:
(1) A novel multiview strategy for minimizing feature complications and investigating advanced multimodal DLTs to develop practical feature fusion units for traffic data; in the information security industry, multimodal DLTs are being used to address the challenges of IDSs
(2) Constructing smart AIDSs with HPN structures that fully utilize structured traffic information to enhance the accuracy of intrusion recognition
(3) The suggested approach employs IGWO for FS, which minimizes irrelevant and repetitive data; important characteristics are selected from the search space with accuracy and the optimal solution as the basis, and the channels of the MDAEs and the interpreters of each access channel can be altered for detection in new environments
(4) The recommended EMS-DHPN approach's performance in detecting attacks in current networks is benchmarked using two datasets from 2015 to 2017, where accuracy and robustness in binary and multiclass categorizations were tested

The relevant work in the field of intrusion detection is covered in Section 2. Section 3 introduces the recommended multimodal-sequential intrusion detection system. Section 4 assesses the efficiency of the classification algorithm on two datasets and reviews the experimental investigation. Finally, Section 5 presents the paper's conclusion and future work.

2. Literature Review

Many researchers concentrated on classic MLTs in the early stages of study, where predominantly shallow ANNs (artificial neural networks) were used. FFNNs (feedforward neural networks) were used to build classifiers, while BP (backpropagation) trained the network classifiers. IDSs have also been proposed based on MLTs including SVMs (support vector machines), RFs (random forests), and NB (naive Bayes).

Mohammad [10] suggested grey wolf optimization (GWO) and particle swarm optimization (PSO) for IDSs, developing two novel approaches (PSO-GWO-NB and PSO-GWO-ANN) for FS and intrusion detection. In addition, this study evaluated the most frequently repeated features of PSO and GWO. Intrusion datasets were used for the assessments. Furthermore, two classifiers, namely, NB and ANN, were used in the evaluations, where the trials showed that the most repeated features (MRF) produce good precision and recall. The findings revealed that PSO-GWO-NB classifiers outperformed PSO-GWO-ANN classifiers in FS and intrusion detection.

Al-Safi et al. [11] used IG (information gain), SVMs, ABC (artificial bee colony), and CS (cuckoo search) for identifying anomalies in networks. The main suggested steps were FS and categorization, where FS was based on IG and the best features from the NSL-KDD dataset were chosen. Their proposed technique performed well on the modern NSL-KDD intrusion dataset. The study also used the UCI dataset extensively as a baseline for the evaluation of IDSs. The results of the classification methods were measured by rates of correct classification accuracy (ACC), precision, and recall. The proposed method outperformed other modern techniques on the NSL-KDD dataset in terms of speed and accuracy.

Aghdam and Kabiri [12] suggested ACO (ant colony optimization) and nearest neighbors for intrusion identification. The study followed FS and categorization steps. Initially, FS converted TCP dump data into feature sets or vectors. Subsequently, ACO searched and identified all features. The resulting feature subsets were evaluated by using smaller feature spaces and assessing classification results. ACO and neighborhood classifiers were then applied to identify fresh attacks. Finally, precision, recall (or false positive rate), F-measure, and accuracy were used to evaluate the classification methods. The suggested scheme surpassed prior approaches, improving accuracy in identifying intrusion attempts and lowering false alarms with fewer features.

Ali et al. [13] presented a fast learning network (FLN) based on PSO, called PSO-FLN, for their proposed IDS. Their proposed scheme consisted of three major steps. First, the exploration-exploitation trade-off described the ability to evaluate different regions of the problem space and find optima. Second, particle-based FLNs created by PSO were used to train the IDS classifier. Creating particles that represent weight solutions was the first step in the PSO-based optimization of FLNs; to improve accuracy, both the weights and the neuron counts in the hidden layers had to be chosen. Their PSO-FLN model was compared to many metaheuristic techniques used to train extreme learning machines and FLNs. In terms of learning accuracy, PSO-FLN outperformed the other learning algorithms.

Almomani [14] suggested PSO, GWO, the firefly algorithm (FFA), the genetic algorithm (GA), and SVM for NIDSs. The recommended FS reduced investigation time, increased reliability, and relied on PSO, GWO, FFA, and GA for improving the effectiveness of NIDSs. The first preprocessing stage involved removing labels and features, label encoding, and data binarization. Using Anaconda Python open source, GA, PSO, GWO, and FFA were used to generate 13 sets of MI (mutual information) rules. Finally, the model's features were classified using MLTs, namely, SVM and J48, and tested on the UNSW-NB15 dataset. Their proposed IDS with limited parameters was found to be more reliable.

Kuang et al. [15] developed an approach based on SVMs for IDSs, where their scheme integrated KPCA (kernel principal component analysis) and GAs. The proposed model determined whether an activity was an attack by employing multilayered SVMs for classification. The dimensionality of the feature vectors was reduced using KPCA for quicker training of the SVMs. Furthermore, noise caused by feature differences was reduced while performance was enhanced. Finally, GAs were used to optimize the tube diameter, kernel parameters, and punishment factor C. The research findings demonstrated that the suggested technique beat existing detection algorithms on the KDD CUP99 dataset in terms of predictive accuracy, convergence speed, and generalization.

Zhong et al. [16] designed a big data-based hierarchical DL system (BDHDLS) for IDSs. The proposed BDHDLS analyzed network traffic and payloads using behavioral and content features. It worked in three steps: (1) utilizing Apache Spark big data techniques for FS and clustering, (2) combining both behavioral and content-based features in parallel for improved recognition rates, and (3) utilizing multiple DLTs in a hierarchical tree framework to understand independent traffic patterns for all intrusive attacks. Multimodel approaches, instead of single-model approaches, can improve intrusion detection rates. The effectiveness of IDSs was measured using three metrics: true positive rates, false positive rates, and accuracies. Their tests on the CICIDS2017 dataset showed that the time taken to build the BDHDLS was significantly reduced because big data techniques use many machines in a parallel training strategy.

Haggag et al. [17] devised min-max normalization, SMOTE (synthetic minority oversampling technique), and FFNs (fast fusion networks) for intrusion identification. Their suggested DLS-IDS approach has four major blocks: dataset selection, dataset preprocessing, class imbalance solutions, and Apache Spark model training. Preprocessing consists of two steps: feature preparation and feature scaling. All features were normalized using min-max normalization. SMOTE took care of class imbalances. Spark had three primary components: driver, cluster management, and worker. FFNs were trained on multilayer perceptrons (MLP), RNNs (recurrent neural networks), and LSTMs. The NSL-KDD dataset was used to compare the calculation delays of the Apache Spark implementation with a conventional implementation. The performance of the IDS was measured in terms of accuracy and precision. Their modified model also improved attack recognition rates.

Yin et al. [18] proposed DLTs using RNNs, called RNN-IDS, for their IDSs. In their study, neuron counts and learning rates affected the model's binary and multiclass categorization performance. Preprocessing and categorization were the main steps of the proposed study. The first stage used numericalization and normalization techniques. The proposed RNN-IDS model had two parts: forward propagation, which computed outputs, and BP, which transferred residuals to upgrade weights, similar to the standard training of NNs. The suggested work compared the proposed scheme with J48, ANNs, RFs, SVMs, and other MLTs, and the results showed that the RNN-IDS model enhanced the recognition rates of IDSs.

Mighan and Kahani [19] proposed the use of MLP, SAE (stacked autoencoder), DT, and SVM for their IDS. Their hybrid scheme combined DLTs with MLTs and had four processes: (1) data preprocessing, (2) latent feature extraction, (3) threat categorization, and (4) decision. The first step, data normalization, used min-max normalization, which removed dimension effects for each attribute. Secondly, the SAE extracted latent features, i.e., inferences from other variables instead of direct observations. Thirdly, attack classification used the latent features extracted by DLTs in a big data framework. Fourthly, SVMs were trained on the datasets for classification, before which DTs were trained to reduce false positive attack detections. Following SAE feature extraction, classification-based intrusion detection algorithms like SVMs, RFs, DTs, and NBs were utilized to detect intrusions in huge network traffic data quickly. The proposed SAE-SVM's performance was satisfactory.

Lopez-Martin et al. [20] conducted intrusion categorization in an IoT context using a conditional VAE (CVAE). This was said to be the first IDS work to conduct feature reconstruction using a CVAE. The NSL-KDD dataset was used for the studies, and the authors reported that the model outperformed well-known methods such as linear SVM, random forest, and multilayer perceptrons in terms of classification accuracy.

Wu et al. [21] proposed an edge computing and multitask-driven architecture with which fast object recognition and picture enhancement tasks may be completed. To encrypt medical pictures and protect patient privacy and the healthcare environment, Wu et al. [22] presented a unique content-aware deoxyribonucleic acid (DNA) computing system. A two-stage DL model for effective NIDS was proposed by Khan et al. [23] using a stacked autoencoder with softmax for classification; it was demonstrated that the model could extract useful feature representations from huge amounts of data.

Keserwani et al. [24] introduced an anomaly-based cloud intrusion detection system (IDS) to discover intrusions in a cloud network. The suggested method employs a deep learning strategy for classification and a hybrid metaheuristic algorithm for feature selection. A combination of the crow search algorithm (CSA) and grey wolf optimization (GWO) pulls pertinent characteristics from the cloud network connection so that the deep learning classifier section can process them more efficiently. For classification, a deep sparse autoencoder (DSAE) is used. Accuracy, precision, recall or detection rate (DR), and F-score are the parameters taken into consideration for the performance comparison. Sharing sensitive information online has expanded due to the development of network-based services, putting network security at risk. The number of assaults and intruders is always increasing, making detection increasingly difficult, and the manual labeling of audit data takes more time, is more expensive, and is laborious. It is crucial to designate the significant characteristics of network traffic for intrusion detection using a classifier that obtains greater performance, since the capacity to identify important inputs may minimize size and training time and increase result accuracy. To determine the best feature subset that increases classification accuracy, the feature space in this work is therefore searched using the grey wolf optimizer, a swarm-based optimization approach.

3. Proposed Methodology

The proposed work includes preprocessing and feature extraction, feature selection, classification, and result evaluation. EMS-DHPN is developed with a unique hierarchical progressive network structure for current attack detection and has three layers. The first layer integrates the complicated features in each traffic flow using a multimodal fusion method based on the MDAE. The second layer uses AB-LSTM to acquire temporal information between traffic flows. To measure accuracy in the search space and locate the ideal solution, the preprocessing and feature extraction module described below splits the complicated characteristics of the traffic data. Figure 1 displays the proposed EMS-DHPN detection approach.

3.1. Preprocessing and Feature Extraction

The preprocessing and feature extraction module extracts features from traffic data. First, the traffic database containing previous network behavior is acquired. A connection record describes a succession of TCP packets from a source to a destination and is defined as a feature vector whose dimensionality implies the feature and element counts in each connection record.

Figure 2 depicts the categorization of each record's features as packet based, traffic based, or generic. Segmentation ensures that sequential relationships between records are preserved and that record counts do not change. Then, for each record, numerous feature groups are obtained, each one a vector, and their number represents the count of feature groups.

The data process divides the large feature vector into smaller feature groups to reduce feature complexity, instead of concatenating them all together as in the simple method. The dividing rule is also adaptable to other monitoring aids and feature views, because data features vary between network data monitoring technologies. The UNSW-NB15 and CICIDS 2017 evaluation datasets had 2 and 3 categories of characteristics, respectively.
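To make this splitting step concrete, a minimal sketch is given below (Python/NumPy); the column names and the packet/traffic/generic group boundaries are purely illustrative assumptions and would have to follow the actual schema of the dataset being monitored.

```python
import numpy as np

# Hypothetical grouping of flow-record columns into feature views; the real
# packet-based/traffic-based/generic split depends on the dataset's schema.
FEATURE_GROUPS = {
    "packet_based": ["pkt_len_mean", "pkt_len_std", "ttl_mean"],
    "traffic_based": ["flow_duration", "bytes_per_s", "pkts_per_s"],
    "generic": ["protocol", "service", "flag"],
}

def split_record(record):
    """Split one connection record (a dict of named features) into
    per-group subvectors instead of one concatenated vector."""
    return [np.array([record[name] for name in names], dtype=float)
            for names in FEATURE_GROUPS.values()]

record = {"pkt_len_mean": 120.0, "pkt_len_std": 35.5, "ttl_mean": 62.0,
          "flow_duration": 1.2, "bytes_per_s": 5400.0, "pkts_per_s": 48.0,
          "protocol": 6, "service": 0, "flag": 1}
subvectors = split_record(record)   # one small vector per feature group
```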

3.2. Feature Selection

FS is the procedure of minimizing the inputs or discovering the most significant inputs for processing and analysis. FS approaches are used to choose salient characteristics that determine accuracy in the search space and locate the ideal solution. Optimization using GWO and the improved IGWO is detailed below.

3.2.1. GWOs

GWO replicates the hunting style of the grey wolf pack. Grey wolves have a tight four-level social order and live in packs of 5 to 12 members. The algorithm simulates this leadership structure using four levels: alpha, beta, delta, and omega. The primary responsibility of the alpha is to make decisions (e.g., hunting, sleeping place, and wake-up time). The beta helps the alpha make decisions and provides input. The delta serves as scout, sentinel, or hunter. Omega wolves are controlled by the alpha and beta wolves and must obey all other wolves.

The dominant wolves in the pack are termed alpha (α). The alpha wolves' subordinates are the beta wolves (β). Omega (ω) wolves are the lowest ranking; they must submit to all other wolves and eat last in the pack. The remaining wolves in the pack are termed delta (δ); they consent to the alpha and beta wolves but dominate the omega wolves [25]. In GWO, α, β, and δ direct the hunting procedure and the ω wolves obey.

GWO's encircling behavior is estimated as follows:

$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|$, $\qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$.

Here, $\vec{A}$ and $\vec{C}$ denote coefficient vectors, $\vec{X}_p$ denotes the prey's location vector, $\vec{X}$ replicates the location of a wolf in a $d$-dimensional space in which $d$ represents the number of variables, $t$ represents the iteration number, and $\vec{D}$ is the encircling distance. $\vec{A}$ and $\vec{C}$ are defined as below:

$\vec{A} = 2a \cdot \vec{r}_1 - a$, $\qquad \vec{C} = 2 \cdot \vec{r}_2$,

in which $\vec{r}_1$ and $\vec{r}_2$ represent random vectors in $[0, 1]$.

The encircling coefficient $a$ is minimized from 2 to 0 as the iterations increase, as per the below formula:

$a = 2 - \frac{2t}{T_{\max}}$,

where $t$ represents the iteration count and $T_{\max}$ stands for the maximum iteration count. Grey wolves consider alpha the best candidate for the prey's position, with beta and delta also supposed to know the prey's whereabouts. So, at each iteration, the best three solutions are preserved, forcing the other wolves (e.g., omega) to adjust their locations in the decision space [26].

The location-updating concept is computed as below:

$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$,

in which $\vec{X}_1$, $\vec{X}_2$, and $\vec{X}_3$ are described and computed as below:

$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha$, $\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta$, $\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$.

Here, $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ denote the three best wolves in the population at a given iteration $t$, and $\vec{A}_1$, $\vec{A}_2$, and $\vec{A}_3$ are evaluated as in the above formulae. $\vec{D}_\alpha$, $\vec{D}_\beta$, and $\vec{D}_\delta$ are computed as follows:

$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|$, $\quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|$, $\quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|$,

in which $\vec{C}_1$, $\vec{C}_2$, and $\vec{C}_3$ are evaluated as per the formula mentioned above.
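A minimal sketch of these update rules is given below (Python/NumPy), with a simple sum-of-squares objective standing in for the real wrapper fitness used in this work; it is intended only to make the encircling and position-update equations concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=12, t_max=100, lb=-1.0, ub=1.0):
    """Basic grey wolf optimizer: alpha, beta, and delta guide the pack."""
    rng = np.random.default_rng(0)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))           # wolf positions
    for t in range(t_max):
        scores = np.array([fitness(x) for x in X])
        order = np.argsort(scores)                           # minimization
        leaders = X[order[:3]].copy()                        # alpha, beta, delta
        a = 2.0 - 2.0 * t / t_max                            # a decreases from 2 to 0
        for i in range(n_wolves):
            candidates = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                         # coefficient vector A
                C = 2.0 * r2                                 # coefficient vector C
                D = np.abs(C * leader - X[i])                # encircling distance
                candidates.append(leader - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
    best = min(X, key=lambda x: fitness(x))
    return best, fitness(best)

# Toy usage with a sum-of-squares objective standing in for the IDS fitness.
best_pos, best_val = gwo(lambda x: float(np.sum(x ** 2)), dim=5)
```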

3.2.2. IGWOs

The GWO uses the best three solutions to update each wolf's location. Omega wolves make up a large share of the population and are less fit than the alpha, beta, and delta wolves. Realigning these weaker wolves can increase GWO's diversification capacity and find better results.

In the introduced IGWO, each generation's wolves are ranked by fitness and divided into two groups: enhanced grey wolves and ordinary grey wolves. Each enhanced grey wolf has a master wolf from which it learns using the equations given below:

In which DL denotes the fraction of distance between a master and slave wolf, indicates the learning coefficient, is evaluated by Equation (4), denotes a master wolf, is enhanced wolf, and and are computed using Equation (10). For wolves,

Following these equations, the new continuous locations of the enhanced wolves are calculated using

Here, is the continuous solution and is identified in Equation (3).

In which is the new binary solution for an improved wolf in dimension , , and implies sigmoid functions depicted below as

Equation (12) is used for updating all wolves' locations; however, the threshold rand is now set to 0.5, and exploration uses nonlinear control parameters.
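The paper's exact IGWO update equations are not reproduced here, but the sketch below shows one common way, assumed in the style of binary GWO variants, to map a continuous wolf position to a binary feature mask with a sigmoid transfer function and the fixed 0.5 threshold mentioned above, and to score the mask with a wrapper objective; the error value and the weighting alpha are placeholders, not details taken from this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def to_binary_mask(position, threshold=0.5):
    """Map a continuous wolf position to a 0/1 feature-selection mask:
    a feature is kept when the sigmoid of its coordinate exceeds the
    fixed 0.5 threshold mentioned in the text."""
    return (sigmoid(position) >= threshold).astype(int)

def wrapper_fitness(mask, err_of_subset, alpha=0.99):
    """Assumed wrapper objective: weighted sum of the classification error
    obtained with the selected subset and the fraction of kept features."""
    if mask.sum() == 0:
        return 1.0                                   # penalize empty subsets
    return alpha * err_of_subset + (1.0 - alpha) * mask.sum() / mask.size

mask = to_binary_mask(np.array([-1.2, 0.3, 2.5, -0.1]))    # e.g., [0, 1, 1, 0]
score = wrapper_fitness(mask, err_of_subset=0.08)           # placeholder error
```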

The flow diagram of IGWO is depicted in Figure 3.

3.3. Classification

Classification is the intrusion detection procedure applied to a particular feature subset. Class labels are sometimes known as targets. Classification is supervised learning with labels and input data (the selected features). EMS-DHPN refers to the enhanced multimodal-sequential intrusion detection model. This section has three subsections: (1) the multimodal fusion model, (2) the sequential learning model, and (3) the multimodal real-time model.

3.3.1. Multimodal Fusion Model

MDAEs can be used in IDSs through multimodal learning technology [27]. MDAEs assume that the associations between traffic flow features are varied and complementary. The architecture of MDAEs is depicted in Figure 4. The number of input channels in the input layer is determined by the specified feature groups. Gaussian restricted Boltzmann machines are used as intermediate layers for assessing the distributions of the input units. The goal is to comprehend the final consensus representation given the feature groups.

As demonstrated in Figure 4, MDAE model training consists of forward encoding and backward decoding. Forward encoding computes the initial joint representation values and fuses the multiple features. Backward decoding adjusts the weight matrices based on reconstruction errors. RBMs (restricted Boltzmann machines), with their undirected graphical structures, use two layers, visible and hidden, with the number of hidden neurons varying between ten and one hundred and twenty depending on the inputs. The joint distribution is easily computed via an energy function:

$p(v, h) = \frac{1}{Z} e^{-E(v, h)}$, with $E(v, h) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_j c_j h_j - \sum_{i,j} \frac{v_i}{\sigma_i} W_{ij} h_j$,

where $Z$ is a constant value for normalization, $E$ represents the energy function, $\sigma$ stands for the hyperparameter of the Gaussian visible units, $W$ implies the weight matrix between the visible and hidden layers, and $b$ and $c$ are the biases for the visible and hidden layers, respectively. The conditional probability distributions of the Gaussian RBM are calculated as below:

$p(h_j = 1 \mid v) = \mathrm{sigmoid}\left(c_j + \sum_i W_{ij} \frac{v_i}{\sigma_i}\right)$, $\qquad p(v_i \mid h) = \mathcal{N}\left(b_i + \sigma_i \sum_j W_{ij} h_j,\; \sigma_i^2\right)$.

The contrastive divergence algorithm trains the Gaussian RBMs, and the RBM parameters can be obtained using a learning rule depicted as

$\Delta W_{ij} = \varepsilon\left(\langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}}\right)$,

where $\langle v_i h_j \rangle_{\text{data}}$ is the expectation perceived in the training data and $\langle v_i h_j \rangle_{\text{model}}$ is the expectation perceived in the data generated by the RBM model. Backward decoding involves unfolding the forward stacked RBMs into a deeper autoencoder having multiple inputs/outputs. Algorithm 1 describes the forward encoding and its reverse decoding.

Input: multifeature sets
Output: combined representations
Step 1: forward encodes
1: for each selected feature group do
2: Train Gaussian RBMs for currently chosen feature sets
3: Preserve Gaussian RBMs as independent interpreters
4: Combine hidden layers of Gaussian RBMs with intermediate layers
5: Use intermediate layers as inputs to train subsequent RBMs as joint interpreters
6: Acquire preliminary joint representations
Step 2: backward decodes
7: Distribute RBMs uniformly to create MDAEs
8: Upgrade MDAEs as per rebuilding faults of picked feature sets
9: end
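As a rough, self-contained illustration of the per-channel training in Step 1 above, the snippet below performs contrastive-divergence (CD-1) updates for a Gaussian-Bernoulli RBM on standardized inputs (unit variance is assumed, so the sigma hyperparameter drops out); stacking the hidden activations of each feature group and training a further RBM on the concatenation would play the role of the joint interpreter. Layer sizes and iteration counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v, W, b, c, lr=0.01):
    """One CD-1 update for a Gaussian-Bernoulli RBM (unit-variance visibles).
    v: batch of standardized visible vectors, shape (batch, n_vis)."""
    h_prob = sigmoid(v @ W + c)                        # p(h = 1 | v)
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = h_samp @ W.T + b                         # mean of p(v | h)
    h_recon = sigmoid(v_recon @ W + c)                 # p(h = 1 | v_recon)
    W += lr * (v.T @ h_prob - v_recon.T @ h_recon) / len(v)
    b += lr * (v - v_recon).mean(axis=0)
    c += lr * (h_prob - h_recon).mean(axis=0)
    return W, b, c

# Toy usage: train a tiny RBM on one (hypothetical) feature group.
n_vis, n_hid = 6, 4
W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
batch = rng.normal(size=(32, n_vis))                   # standardized subfeatures
for _ in range(100):
    W, b, c = cd1_step(batch, W, b, c)
hidden_codes = sigmoid(batch @ W + c)                  # inputs for the joint RBM
```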
3.3.2. Sequential Learning Model

There are two layers in the AB-LSTM model, LSTM and attention, which learn to find solutions and manage more complex sequences and relationships. There are three nonlinear gating units in LSTM RNNs, namely, the forget, input, and output gates [28]. The LSTM storage units decide on obtaining new data while discarding old data. This work added an attention layer to the LSTMs to focus on significant data. Long-term dependences are simulated in AB-LSTM by adding the attention layer to the outputs of the LSTMs, which also serves in detecting intruders. Equation (18) is the network's input gate, which controls the level of new memory. The memory quantity of the forget gate is controlled by Equation (19). Last but not least, Equation (22) changes the memory storage of the output gate while the LSTM calculates the control states and cell states; here, $\sigma$ and $\tanh$ represent the sigmoid and activation functions, respectively, $\odot$ denotes element-wise multiplication, $[h_{t-1}, x_t]$ is the concatenation of the previous hidden state and the current input, $W_i$, $W_f$, and $W_o$ denote learning weight parameters, and $b_i$, $b_f$, and $b_o$ represent learning bias parameters. Also, an attention mechanism improves the LSTM model's accuracy [29, 30]. The attention layer helps to choose the critical outputs from prior layers and helps the network focus on important information. The attention model's functions are as follows:

$M = \tanh(H)$, $\quad \alpha = \mathrm{softmax}(w^{T} M)$, $\quad r = H \alpha^{T}$,

where $H$ is the matrix of picked features, $w$ is a trainable vector, $M$ is the embedded attention representation, $\alpha$ is the vector of attention weights over the picked features, and $r$ is the resultant attention model's weighted sum of the picked features.
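A minimal PyTorch-style sketch of this attention-over-LSTM idea follows; the layer sizes are arbitrary, and the tanh/softmax scoring mirrors the standard attention weighting described above rather than any code released with this paper.

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """An LSTM layer followed by an attention layer that weights its outputs."""
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attn_vector = nn.Parameter(torch.randn(hidden_size))
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                         # x: (batch, seq_len, input_size)
        H, _ = self.lstm(x)                       # H: (batch, seq_len, hidden_size)
        M = torch.tanh(H)                         # embedded representation
        alpha = torch.softmax(M @ self.attn_vector, dim=1)   # attention weights
        r = (H * alpha.unsqueeze(-1)).sum(dim=1)  # weighted sum of LSTM outputs
        return self.classifier(torch.tanh(r))     # class scores per sequence

model = AttentionLSTM(input_size=32, hidden_size=64, num_classes=2)
logits = model(torch.randn(8, 5, 32))             # 8 flow sequences, 5 steps each
```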

3.3.3. Multimodal Real-Time Model

Multimodal fusion and sequential learning models can gather and combine network flow data to improve threat identification performance, but each can only express part of the traffic data. To address this issue, EMS-DHPN was developed, and flexible MDAEs were proposed to make it more usable. To classify network traffic, two-layered MDAEs are built ahead of time, and softmax functions are used at the end of the AB-LSTMs. The loss function computes the difference between the actual and predicted labels. The cross-entropy loss function is used for binary classification:

$L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \right].$

The entropy computation for assessing losses in the case of multiclass classification is depicted below:

$L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}.$

Algorithm 2 is described below.

Input: flow information from network’s connections
Output: the probability of categories in flows
Step 1: model for multimodal fusions
1: Generation of the MDAE based on Algorithm 1
2: Integration of MDAE with multiple features internally
Step 2: model for sequential learning
3: for each flow in the sequence do
4: Use the AB-LSTM on the externally selected temporal features
5: end for
Step 3: generate progressive hierarchical networks by joining MDAE with AB-LSTM
6: Create EMS-DHPN by adding softmax layers at the end of the AB-LSTMs
7: while training has not ended do
8: Obtain the probable outputs from the softmax layer
9: Calculate cross-entropies and update EMS-DHPN correspondingly
Step 4: test
10: Test the EMS-DHPN model with the network’s traffic flows
11: Return a probable list of categories
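The sketch below illustrates, under stated assumptions rather than as the authors' implementation, how the hierarchy of Algorithm 2 could be wired together: small per-group encoders stand in for the pretrained MDAE channels, the fused per-flow codes form a sequence fed to an LSTM with an attention layer, and a linear head trained with cross-entropy closes the network (the softmax is folded into the loss).

```python
import torch
import torch.nn as nn

class EMSDHPNSketch(nn.Module):
    """Hierarchy sketch: per-group fusion encoders (standing in for the
    pretrained MDAE channels) -> LSTM with an attention layer over the
    flow sequence -> linear head trained with cross-entropy."""
    def __init__(self, group_sizes, num_classes, code_size=16, hidden=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(g, code_size), nn.ReLU()) for g in group_sizes]
        )
        self.lstm = nn.LSTM(code_size * len(group_sizes), hidden, batch_first=True)
        self.attn = nn.Parameter(torch.randn(hidden))
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, groups):                    # list of (batch, seq, group_size)
        fused = torch.cat([enc(g) for enc, g in zip(self.encoders, groups)], dim=-1)
        H, _ = self.lstm(fused)                   # temporal features across flows
        alpha = torch.softmax(torch.tanh(H) @ self.attn, dim=1)
        r = (H * alpha.unsqueeze(-1)).sum(dim=1)  # attention-weighted summary
        return self.head(r)                       # logits; softmax lives in the loss

# Toy training step with hypothetical group sizes and random flow sequences.
group_sizes = [3, 3, 3]
model = EMSDHPNSketch(group_sizes, num_classes=2)
groups = [torch.randn(8, 5, g) for g in group_sizes]   # 8 sequences of 5 flows
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(groups), labels)    # cross-entropy objective
loss.backward()
```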

4. Results and Discussion

This section includes performance assessments of the EMS-DHPN model, with stage-by-stage experimental findings for each model used in the study to test IDSs, on the CICIDS2017 dataset produced by the Canadian Institute for Cybersecurity (CIC) [31] and the UNSW-NB15 dataset of the Australian Centre for Cyber Security (ACCS) [32, 33]. Initially, MDAEs were used to learn multimodal feature representations from the traffic data.

The final experiment used the EMS-DHPN approach to detect attacks, and the proposed IGWO-EMS-DHPN technique for detecting attacks in modern networks was evaluated. Table 1 summarizes the two datasets. The algorithmic categorizations were compared in terms of accuracy, precision, recall, and F-scores (or F-measures).

4.1. Evaluation Metrics

The suggested approach's performance was measured using relative accuracy, precision, recall, and F-scores, with accuracy being the most significant performance indicator. The experiments also use the classification model's confusion matrix.

The confusion matrix in Table 2 determines the metrics used for two-class categorization. The two rows and two columns of the confusion matrix (Table 2) represent the numbers of TP, TN, FP, and FN in the predictive analysis.

TP represents the count of attack records correctly classified as attacks, while TN implies the normal records classified correctly. FP stands for normal records inaccurately classified as attacks, while FN denotes the count of attack records incorrectly classified as normal records. Using these four values, the following metrics were computed for examining the efficiency of the classifiers.

4.1.1. Accuracy

Accuracy is the ratio of all accurately categorized records to the total records: Accuracy = (TP + TN)/(TP + TN + FP + FN).

4.1.2. Precision

Precision is the rate of accurately recognized threat records among all detected threat records: Precision = TP/(TP + FP).

4.1.3. Recall

Recall is the rate of accurately recognized threat records among all threat records, also known as the true positive rate (TPR): Recall = TP/(TP + FN).

4.1.4. F-Score or F-Measure

The F-score or F-measure is the harmonic mean of precision and recall: F-score = 2 × (Precision × Recall)/(Precision + Recall).

4.1.5. False Alarm Rate

The false alarm rate (FAR), also known as the fall-out or false positive ratio, measures the likelihood of incorrectly rejecting the null hypothesis for a given test, i.e., of flagging normal traffic as an attack: FAR = FP/(FP + TN).
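For concreteness, all five confusion-matrix-based metrics above can be computed as in the short sketch below; the counts in the usage line are illustrative only.

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, F-score, and false alarm rate computed
    from the four confusion-matrix counts defined above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                        # true positive rate
    f_score = 2 * precision * recall / (precision + recall)
    far = fp / (fp + tn)                           # fall-out / false alarm rate
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score, "far": far}

print(detection_metrics(tp=950, tn=900, fp=100, fn=50))   # illustrative counts
```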

4.2. Result Comparison

Table 3 lists comparative experimental results obtained by classifiers on the intrusion detection datasets.

4.2.1. Precision Result Comparison

As shown in Figure 5, the suggested IGWO-EMS-DHPN and existing methods like PSO-FLN and MS-DHPN are compared in terms of precision. Compared to existing approaches, the recommended approach has high precision rates of 86.5% and 98.9% on the two separate datasets. In contrast, traditional techniques like PSO-FLN and MS-DHPN have lower precisions of 79.9% and 96.8% and of 84.4% and 98.2%, respectively, on the two datasets. Thus, the suggested approach is valuable and practical for recognizing short-term threats.

4.2.2. Recall Result Comparison

Figure 6 depicts the recall analysis of the proposed IGWO-EMS-DHPN and other categorization methods, such as PSO-FLN and MS-DHPN. The IGWO-EMS-DHPN approach attains high recall rates of 87.5% and 98.9% on the two datasets; a high recall rate indicates a high detection rate. In contrast, traditional techniques like PSO-FLN and MS-DHPN provide lower recall rates of 80.1% and 97.1% and of 86.2% and 98.1%, respectively.

4.2.3. F-Measure Result Comparison

F-measure comparisons between the suggested IGWO-EMS-DHPN and traditional approaches, such as PSO-FLN and MS-DHPN, are shown in Figure 7. IGWO-EMS-DHPN stands out for its high F-measure, exhibiting excellent attack detection based on the results. Compared to other methods like PSO-FLN and MS-DHPN, which provide F-measure rates of 80.8% and 97.3% and of 85.3% and 98.1%, respectively, the proposed work provides better attack detection results on the two different datasets than the previous techniques.

4.2.4. Accuracy Result Comparison

Figure 8 depicts a comparison of the accuracy of various classification techniques, including the proposed IGWO-EMS-DHPN and current methods such as PSO-FLN and MS-DHPN. According to the graph, the proposed method has high accuracy when compared to previous techniques. The suggested IGWO-EMS-DHPN accurately detects attacks, with high accuracy rates of 92.6% and 99.7% for the two separate datasets, whereas the existing PSO-FLN and MS-DHPN approaches provide lower rates of 81.8%, 97.5%, and 86.2%. The experiments demonstrated that the suggested system is far superior to the conventional techniques.

4.2.5. FAR Result Comparison

Figure 9 depicts a comparison of the FAR of various classification techniques, including the proposed IGWO-EMS-DHPN and current methods such as PSO-FLN and MS-DHPN. According to the graph, the proposed method has a lower FAR than the previous techniques, achieving lower FAR rates on both separate datasets and thus detecting attacks more reliably.

5. Conclusion and Future Work

The proposed work uses MDAE and AB-LSTM learning to classify attacks (or intrusions) using the UNSW-NB15 and CICIDS 2017 datasets. The proposed IGWO-EMS-DHPN framework consists of four steps. Preprocessing and feature extraction stages can be processed separately. A second feature selection step allows IGWO to narrow the search field and locate the best solution. Finally, MDAE and AB-LSTM enable EMS-DHPN. Using the EMS-DHPN technique, a network connection may efficiently integrate multiple levels of selected features while learning temporal information across neighboring network connections. The IGWO-EMS-DHPN detection technique was developed for attack detection and tested on two intrusion datasets, and the findings of several experiments were compared with other methodologies in terms of accuracy, precision, recall, and F-measures. Compared to the existing PSO-FLN and MS-DHPN, the suggested IGWO-EMS-DHPN achieves high accuracy rates of 92.6% and 99.7% on the two separate datasets. Future work will involve creating a dedicated traffic-gathering system, finding new assaults to validate the suggested model, and researching data multimodality in information security from a more fundamental perspective to increase the intrusion recognition rate.

Data Availability

The CICIDS2017 dataset used to support the study's conclusions has been deposited in the Kaggle repository (https://www.kaggle.com/datasets/cicdataset/cicids2017; doi: 10.1109/ACCESS.2020.3009843). The UNSW-NB15 dataset used to support the findings of this study has been deposited in the UNSW-NB15 repository (doi: 10.1109/MilCIS.2015.7348942).

Conflicts of Interest

The authors declare that they have no conflicts of interest.