Abstract

With the application of 5G millimeter wave (mmWave) technology, ultradense cellular networks are gradually becoming one of the core characteristics of 5G cellular networks. In the edge computing environment, balancing the load among edge nodes helps slow down the process of a distributed denial of service (DDoS) attack. However, most existing studies give little consideration to congestion in multiuser, multi-edge-server models, and those that use the M/M/1 model tend to ignore the effect of scheduling algorithms on the Markov property of the task arrival process. In this manuscript, on the premise of ensuring the quality of experience (QoE) for users, the G/M/1 model is introduced into the task scheduling of edge servers for the first time to improve load balancing between edge servers. Within the multi-armed bandit (MAB) algorithm framework, specific metrics are established to quantify the degree of balance. Both the number of users assigned to each edge node and each node's processing of specific tasks are taken into account. We experimentally evaluated the method against two baseline approaches and three state-of-the-art approaches on a real-world dataset, and the experimental results validate its effectiveness.

1. Introduction

As is well known, user equipment (UE) has low computing capacity and may not solve task requests initiated by users efficiently, while cloud services suffer from problems such as long transmission delays. Mobile edge computing (MEC) brings mobile computing, network storage, and control down from the cloud to the network edge, enabling the execution of compute-intensive, latency-critical applications on mobile devices and effectively reducing latency and energy consumption [1–3].

There are still deficiencies in interoperability, heterogeneous architecture, data privacy, and load balancing in heterogeneous edge computing systems, which can be compensated for by requirements such as federated deployment and resource management [4]. Edge servers have limited memory, central processing unit (CPU), storage, and other resources. They are generally deployed at base stations close to user terminals, and users are guaranteed low latency and stable connectivity by using edge servers [5]. The emergence of the user plane function (UPF) separates the control plane from the user plane, making MEC even more critical in 5G technology. The emergence of the 5G millimeter wave has significantly expanded the transmission bandwidth and reduced the transmission delay of mobile communications, but it also faces challenges such as high susceptibility to signal loss. Increasing the density of base stations (BSs) helps to minimize such losses, and thus, ultradense cellular networks are gradually becoming one of the core features of 5G cellular networks [6]. The deployment of large-area, high-density BSs brings new network security issues. Due to the limited signal transmission range and edge server resources, a typical IoT-based distributed denial of service (DDoS) attack can disable most nodes in a particular area by continuously trying to occupy the resources of edge nodes [7], thus paralyzing the Internet of Things (IoT) devices (smart monitors, infrared sensors, etc.) in its service area within a specific period [8], causing severe social impacts and security problems.

A DDoS attack is a resource competition between attackers and defenders [9], and this competition is more prominent in resource-limited edge service environments [8]. Based on careful consideration of system characteristics such as the proximity, capacity, and delay constraints among edge servers [10], balancing the workload among edge servers can slow down the DDoS attack process [8], leaving enough reaction time for the system and reducing the possibility of the system being breached. We consider the load balancing problem of edge user allocation (EUA) based on users' quality of experience (QoE) and establish specific metrics to quantify the degree of balance. To quantify the QoE more precisely, we introduce the multi-armed bandit (MAB) algorithm framework and add nonstationary factors to the learning mechanism to better adapt to the complex and variable task assignment process in practice.

For the time being, research on the load balancing problem is relatively scarce and mainly focuses on balancing the number of times each edge server is chosen [11–13]. In reality, besides the task volumes of mobile devices differing, there may also be performance differences among MEC servers. Simply pursuing relative uniformity of task allocation among servers may waste the capacity of high-performance servers and aggravate the waiting time and wear of lower-performing servers. Studies involving user task waiting time (stay time) mainly take two forms: accumulating the computation time of queued tasks [13–15] and using the M/M/1 queuing model [16, 17]. For the latter, researchers have ignored the effect of the scheduling algorithm on the Markov property of the M/M/1 queuing model; i.e., subjective task scheduling undermines the memorylessness of the task arrival process.

Queuing systems generally consist of customers, service desks, and queuing rules [18]. When, independently of other factors, customer arrivals satisfy a Poisson distribution, service times satisfy a negative exponential distribution, and a single service desk processes each customer's task, the situation can be represented by the M/M/1 model [18–20]. A customer's arrival is usually independent of others', while the service desk is responsible for solving the task requests of arriving customers. The processing time of a specific task is influenced by stochastic factors such as the nature of the customer's task and of the service desk. When an algorithm schedules the user assignment process in the edge environment, customer arrivals no longer follow a Poisson distribution, and continuing to use the M/M/1 model at this point deviates from reality. In the G/M/1 model, the customer arrival process is unrestricted ("general arrival"), while the service process still follows a negative exponential distribution, which accounts for the randomness of customer tasks and service desks [18–20].
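As a concrete illustration of the difference, the mean sojourn time of an M/M/1 queue is 1/(μ − λ), while for G/M/1 it is 1/(μ(1 − σ)), where σ is the root of σ = A*(μ(1 − σ)) and A* is the Laplace–Stieltjes transform of the interarrival distribution [18–20]. The sketch below is an illustrative comparison (not part of the paper's model) using deterministic arrivals, i.e., a D/M/1 queue:

```python
import math

def mm1_sojourn(lam, mu):
    # M/M/1 mean time in system: 1 / (mu - lam)
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def dm1_sojourn(lam, mu, iters=200):
    # D/M/1: interarrival times are exactly 1/lam, so the
    # Laplace-Stieltjes transform is A*(s) = exp(-s/lam) and
    # sigma solves the fixed point sigma = exp(-mu*(1 - sigma)/lam)
    assert lam < mu, "queue must be stable"
    sigma = 0.5
    for _ in range(iters):
        sigma = math.exp(-mu * (1.0 - sigma) / lam)
    return 1.0 / (mu * (1.0 - sigma))

# At the same load, the less variable arrival stream waits less:
# mm1_sojourn(0.5, 1.0) = 2.0, dm1_sojourn(0.5, 1.0) is about 1.25.
```

This makes concrete why modeling a scheduled (non-Poisson) arrival stream with M/M/1 misestimates the stay time.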

In the actual edge user assignment process, we reduce the impact of the scheduling algorithm on the Markov property of the edge server's task arrival process by applying the G/M/1 queuing theory model, and a nonstationary factor is added to the MAB algorithm framework to account for the impact of fluctuations in the edge server's task processing capacity on the computation delay. To improve load balancing in the edge user assignment scenario and thereby mitigate the DDoS attack process, we further consider factors such as the number of tasks offloaded by edge users and possible performance differences among edge servers, and we examine the specific task processing of each edge node in more depth. To enhance the experiment's credibility, we used a real dataset from the Central Business District (CBD) of Melbourne [21] and compared extensively with existing studies; the experimental results verify the effectiveness of the algorithm. The main contributions are as follows: (i) we attempt to improve load balancing for edge user allocation in edge computing to slow down the DDoS attack process; (ii) this is the first attempt to study the EUA problem through the MAB algorithm framework in 5G ultradense cellular networks, considering the processing of specific tasks in each edge node rather than solely the number of users per node; (iii) this is the first attempt to introduce the G/M/1 queuing theory model into the MEC system, considering the impact of scheduling algorithms on the Markov property of the actual task arrival process, with performance evaluated experimentally on a widely used real-world dataset.

The rest of the paper is organized as follows. Section 2 presents related work. Section 3 offers the system model. Section 4 proposes the Thompson sampling nonstationary (TSNS) algorithm. Section 5 designs experiments and evaluates the algorithm’s performance, and Section 6 concludes the paper.

2. Related Work

Currently, relatively little research has been done on DDoS attacks in edge computing, mainly concerning edge collaboration, attack identification, and defense [8, 22–27]. The literature [8] studied the DDoS attack mitigation problem in edge computing, proved its NP-hardness, and proposed a game-theoretic approach to solving it. The literature [22] considered mitigating the DDoS attack process by balancing the total traffic entering the control plane and inducing the attack initiator to stop the attack. The literature [23] designed an adaptive traffic scheduling algorithm to enhance collaboration among edge nodes and thus mitigate DDoS attacks. The literature [24] developed an intrusion detection and defense method for edge environments by learning the original data distribution through a Deep Convolutional Neural Network (DCNN) and building a defense through a Q-network algorithm. In a software-defined network (SDN), [25] established initial detection of intrusions based on entropy and further accurate detection via ensemble learning, reducing communication overhead and attack detection latency. From the perspective of smart cities, [26] effectively improves recognition accuracy through score-level fusion of multimodal biometrics, and [27] proposed a data encryption technique applicable to the IoT.

The low latency of edge computing is fundamental for users to execute resource-intensive and latency-sensitive applications on edge devices and is a crucial factor affecting users' QoE [4]. In the study of resource allocation for computational offloading with the goal of latency optimization, [28–31] schedule computational tasks through a Markov decision process, [14, 32, 33] apply game theory to obtain the best strategy for task offloading, and [11, 12, 34–39] employ algorithms such as reinforcement learning to solve problems related to resource allocation. The literatures [13, 15–17] introduced the MAB algorithm framework to learn online and adjust task allocation in real time. Among them, only [11–14, 34] consider the load balancing problem of edge servers from the perspective of resource allocation.

The literature [14] proposed a decentralized learning algorithm from a game-theoretic perspective, considering a relatively uniform number of users allocated across edge servers; however, its experimental design, in which all users share the same upper bound of acceptable cost, may deviate from reality. From the perspective of on-edge computing, [11] transformed the offloading and load balancing problem into a mixed-integer nonlinear programming problem and split it into two subproblems for optimization. That work does not seem to consider the effect of the waiting factor, and the possible case of multiple tasks appearing at the same node is described only from a collision perspective rather than explored further. The literature [34] introduced fiber-wireless (Fi-Wi) technology to enhance the signals of vehicular edge computing networks (VECNs), which in turn use software-defined networking (SDN) to achieve load balancing. It considers the possibility of assigning tasks locally, at the edge nodes, or in the cloud; still, the impact of the coverage of the signaling edge nodes may be neglected when selecting offload servers, and task processing at the edge nodes seems to lack consideration of congestion factors. The literature [39] utilized multipath TCP to increase application throughput and used a reinforcement-learning-empowered multipath manager to further address the buffer congestion problem. The literature [12], on the premise of determining the set of optional edge service nodes for each of the mobile device users (MDUs), comprehensively considered the devices already assigned to each edge node and the nodes' computational capabilities, and accordingly assigned new devices to the edge server under less computational pressure.
The algorithm design process seems to ignore the influence of congestion factors within the edge nodes, and the edge server deemed optimal only in terms of transmission and computation delay may deviate from the actual scenario; i.e., there may be a large waiting delay after a task arrives at the assigned edge node, as well as a significant task assignment delay. Uncertainty decision-making is an essential challenge in machine learning, and the MAB algorithm is a common framework for solving this problem, where each MEC server is considered an arm [40]. The literature [13] proposed a utility-table-based MAB algorithm that learns online to adjust the workload allocation in real time and updates the feedback signal after task allocation through the utility table to determine the optimal solution. That work mainly considers load balancing from the angle of cloud-edge collaboration and gives less consideration to task allocation among edge servers.

As we know, the DDoS attack problem is currently a hot topic in network security research, and relatively little work has approached it from the perspective of edge computing. Considering the load balancing of edge user allocation in edge computing, we make the first attempt to study the EUA problem in 5G ultradense cellular networks through the MAB algorithm framework, focusing on the processing of specific tasks at each edge node, and the first attempt to introduce the G/M/1 queuing theory model into the MEC system, considering the impact of the scheduling algorithm on the Markov property of the actual task arrival process.

3. System Model

In the ultradense cellular network scenario, during the edge user allocation process shown in Figure 1, we use S = {s_1, s_2, …, s_m} to denote the set of MEC servers and U = {u_1, u_2, …, u_n} to represent the set of user devices. In edge computing, in addition to service requests from regular users, a DDoS attack can launch frequent task requests to the edge server by controlling multiple IoT devices in the service range. Considering the influence of the scheduling algorithm on the Markov property of the resource allocation process, we assume that the task arrival process follows a general distribution and the service time follows a negative exponential distribution. Since each MEC server has a different task arrival pattern and service capacity and there is no restriction on queue length or task origin, the task offloading process can be represented by the G/M/1 queuing theory model. We denote by σ_j the task arrival impact factor per unit time of server s_j, which can be obtained by solving the scheduling process, and by μ_j the average service rate of server s_j. Every time a task assignment is made, the average service rate of the selected server is updated by a nonstationary method.

We use C_ij to denote the cost of processing task i at server s_j, and T_ij and E_ij to denote the corresponding service latency and energy consumption. Therefore, C_ij is computed as follows:

C_ij = ω_t T_ij + ω_e E_ij.

In the formula, ω_t and ω_e, respectively, represent the weights of delay and energy consumption in the cost, with ω_t + ω_e = 1.

We understand that in the 5G ultradense cellular network architecture, the physical distance between microcell BSs is typically between 100 and 200 m [6]. The delay can be further subdivided into transmission delay, propagation delay, waiting delay, and computation delay; the sum of the waiting and computation delays is the time the task stays in the system. The signal strength decreases from the central node in all directions [5]. We may assume that the effective signal coverage of each edge node is 200 m (edge servers beyond the signal range can be selected, but the selection cost is relatively high [5]), and on this basis, we consider the limited choices available to each user when selecting an edge server. Since the physical distance a computing task travels from the user end to the edge node generally does not exceed 200 m, its actual propagation delay is at the microsecond level. The transmission delay is usually expressed as the ratio of task volume to channel bandwidth. In the ultradense cellular network structure, forwarding of tasks by transition nodes is essentially nonexistent; i.e., the transmission delay of tasks will also be close to the microsecond level. According to queuing theory [18–20], the stay time obeys the distribution W(t) = 1 − e^(−μ_j(1 − σ_j)t), whose mean is 1/(μ_j(1 − σ_j)), and the actual stay time of each task can be randomized through this distribution function. We use e_j to represent the energy consumption influencing factor that comprehensively considers power, signal-to-noise ratio, and other factors. Let d_ij denote the task size of task i at server s_j; then the energy consumption is E_ij = e_j d_ij [41, 42]. Further, we can get the following formula:

T_ij = k_i d_ij/B + l_ij/v + W_ij,

where k_i denotes the number of tasks to be processed by user device u_i, which in general is equal to 1; B is the channel bandwidth, and k_i d_ij/B is the transmission delay; l_ij denotes the physical distance between user device u_i and edge node s_j; v indicates the propagation speed of the task in the channel, which is generally equal to or slightly less than the speed of light; and l_ij/v is the propagation delay.
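Under the G/M/1 model, the stay-time distribution is exponential with rate μ(1 − σ) [18–20], so a task's stay time can be randomized by inverse-transform sampling. A minimal sketch (names are illustrative, not from the paper):

```python
import math
import random

def sample_sojourn(mu, sigma, rng=random):
    # Invert W(t) = 1 - exp(-mu*(1 - sigma)*t):
    # draw q ~ U[0, 1) and return t = -ln(1 - q) / (mu*(1 - sigma))
    q = rng.random()
    return -math.log(1.0 - q) / (mu * (1.0 - sigma))

# Averaging many draws approaches the mean stay time 1/(mu*(1 - sigma)).
```

Drawing the stay time from the fitted distribution, rather than using the mean, preserves the randomness of individual tasks.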

To measure the QoE more specifically, we introduce the MAB algorithm framework. An upper bound C_max on the cost is chosen, and we denote the cost as a fraction of the given threshold by ρ_ij = C_ij/C_max. The reward after each selection can be calculated as follows:

r_ij = (1 − ρ_ij) 𝟙(ρ_ij ≤ 1),

where 𝟙(·) is the indicator function.
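One plausible reading of the reward rule, assumed here for illustration rather than taken verbatim from the original, is that a within-budget selection earns the unused fraction of the threshold and an over-budget selection earns nothing:

```python
def reward(cost, c_max):
    # rho is the cost as a fraction of the threshold; the indicator
    # zeroes the reward whenever the cost exceeds the upper bound
    rho = cost / c_max
    return 1.0 - rho if rho <= 1.0 else 0.0
```

This keeps the reward in [0, 1], which fits the Bernoulli/beta machinery used later.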

4. Algorithm Design

The MAB model is a simple but compelling algorithmic framework that can make decisions over time in uncertain situations [43]. It simulates an agent, learning new knowledge to optimize selection decisions.

We know that considering load balancing in the edge environment is beneficial to slow down the DDoS attack process [8, 22]. We use the MAB algorithm framework to balance the limited task processing latency and cost and offload the tasks to each MEC server as evenly as possible. Each MEC server can be considered an arm of varying nature, and each selection of the arm can be rewarded and cost accordingly. This property is unknown to the task assignor, so we may call it an implicit property. As the number of selections increases, the resource allocation of edge servers will become more rational, and the number of tasks processed per unit time will improve. In addition, considering the complexity and variability of the actual task arrival and processing, the server’s performance may also change with the increasing number of selections, and we introduce a nonstationary factor.

To reduce useless exploration and increase the exploration of arms with larger pairwise differences, we consider applying improved Thompson sampling in the MAB algorithm. In the Thompson sampling algorithm, the payoff value of each action follows a beta distribution with α and β as prior probability parameters. Whenever a selection is made, every arm generates a random number as its payoff value from the beta distribution according to its prior parameters, and the system selects the arm with the largest payoff value. The probability law of the Bernoulli distribution and the probability density of the beta distribution are as follows:

P(X = k) = p^k (1 − p)^(1−k), k ∈ {0, 1},

f(p; α, β) = p^(α−1) (1 − p)^(β−1)/B(α, β),

where the two refer, respectively, to the distribution of returns and the distribution of the parameter p of the return distribution. B(α, β) satisfies the formula

B(α, β) = Γ(α)Γ(β)/Γ(α + β).

We assume that there are m edge servers and that n tasks are processed in a certain period of time. Each selection updates the distribution. When action a is selected, the return follows a Bernoulli distribution with parameter θ_a: the probability of returning 1 is θ_a, and of returning 0 is 1 − θ_a. In round t, selecting action a receives a return r_t. Assuming that the θ_a are independent of each other, the prior distribution obeys Beta(α_a, β_a), and the posterior distribution obeys Beta(α_a + r_t, β_a + 1 − r_t).

For each selection made, the parameters of the posterior distribution of the selected arm are calculated from its return value. The posterior distribution of the last round is used as the prior distribution of the next round, and the update rule of the posterior beta parameters is [44]

(α_a, β_a) ← (α_a + r_t, β_a + 1 − r_t).
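The sample-select-update cycle of beta-Bernoulli Thompson sampling can be sketched as follows; this is a toy illustration with made-up server success probabilities, not the full TSNS algorithm:

```python
import random

def thompson_round(alpha, beta_, true_p, rng):
    # Sample a payoff value for every arm from its Beta posterior
    samples = [rng.betavariate(a, b) for a, b in zip(alpha, beta_)]
    arm = max(range(len(samples)), key=samples.__getitem__)
    # Observe a Bernoulli return and update that arm's posterior,
    # which becomes the prior of the next round
    r = 1 if rng.random() < true_p[arm] else 0
    alpha[arm] += r
    beta_[arm] += 1 - r
    return arm

rng = random.Random(0)
alpha, beta_ = [1.0, 1.0], [1.0, 1.0]
for _ in range(2000):
    thompson_round(alpha, beta_, true_p=[0.2, 0.8], rng=rng)
# Over time the posterior mass concentrates on the better arm.
```

After enough rounds, the better arm is selected far more often, which is exactly the exploration/exploitation balance the text describes.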

We use the reward of the edge nodes after task processing as the QoE measure for the corresponding users. As analyzed above, the propagation delay and transmission delay under this delay decomposition are at the microsecond level, which is negligible compared with the task's computation and queuing delays, and task processing energy consumption has only a weak effect on user experience. To simplify the model, we set ω_t = 1 and ω_e = 0, treat the channel bandwidth as infinite with respect to the task volume, and mainly consider the average stay time of the task in the system. After each selection, we apply a nonstationary utility learning mechanism [45]:

u_a ← (1 − γ) u_a + γ r_t,

where γ represents the learning rate in the selection process; i.e., the greater γ is, the greater the importance of the actual reward and the greater the degree of learning in calculating the utility reward. The updated utility reward is used as the reward value. In particular, after each selection, the average service rate of the selected service desk is updated to simulate the effect of random factors in the user assignment process.
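The nonstationary utility update is plain exponential recency weighting; a one-line sketch with illustrative names:

```python
def utility_update(u_prev, r, gamma):
    # Larger gamma discounts history faster and tracks the newest
    # reward more closely, suiting nonstationary server performance
    return (1.0 - gamma) * u_prev + gamma * r

# Example: utility_update(0.5, 1.0, 0.1) -> 0.55
```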

The arm with the largest sampled parameter is chosen in each selection, and the calculated reward of the actual choice is used to update the posterior beta parameters. The corresponding regret value is

regret_T = Σ_{t=1}^{T} (C_{a_t} − C*_t),

where a_t represents the edge server selected at time t, C_{a_t} is the corresponding cost of the currently selected server, and C*_t denotes the minimum of the corresponding costs of all edge servers at time t.

The specific idea of the TSNS algorithm is represented in an ordered manner in Algorithm 1. For each user assignment, the corresponding service time and stay time are calculated according to the G/M/1 queuing theory model. The service time is an essential statistic for measuring load balancing; the stay time is used to calculate the reward and, in turn, the utility reward.

Require: S, U, γ, C_max, T
1: for each edge server s_j ∈ S do
2: Generate uniformly distributed random variables μ_j, σ_j, u_j from [0,1)
3: if μ_j ≤ σ_j then
4: Swap μ_j and σ_j so that the G/M/1 queue of s_j is stable
5: end if
6: W_j ← 1/(μ_j(1 − σ_j))
7: end for
8: Generate normally distributed random variables α_j, β_j with (1, 0.5)
9: while t ≤ T do
10: for each edge server s_j ∈ S do
11: Sample θ_j ~ Beta(α_j, β_j)
12: end for
13: a ← arg max_j θ_j
14: Weight the current task volume by the ratio of its random size to the mean
15: Generate uniformly distributed random variable q from [0,1)
16: t_a ← −ln(1 − q)/(μ_a(1 − σ_a)), scaled by the task-volume weight
17: Compute the cost C_a from t_a and the reward r ← (1 − C_a/C_max) 𝟙(C_a ≤ C_max)
18: u_a ← (1 − γ)u_a + γr
19: α_a ← α_a + u_a
20: β_a ← β_a + 1 − u_a
21: Update μ_a nonstationarily, record the service time, and set t ← t + 1
22: end while
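The numbered steps of Algorithm 1 can be sketched end to end as follows. The server parameterization, stability check, reward rule, and service-rate perturbation are illustrative reconstructions under the assumptions stated in the text, not necessarily the authors' exact implementation:

```python
import math
import random

def tsns(n_servers, n_tasks, gamma=0.5, c_max=10.0, seed=0):
    rng = random.Random(seed)
    # Lines 1-7: draw service rate mu and arrival impact factor sigma,
    # keeping mu > sigma so every G/M/1 queue stays stable
    mu = [rng.random() for _ in range(n_servers)]
    sigma = [rng.random() for _ in range(n_servers)]
    for j in range(n_servers):
        if mu[j] <= sigma[j]:
            mu[j], sigma[j] = sigma[j], mu[j]
        mu[j] += 1e-6  # guard against mu == sigma
    # Line 8: beta priors randomized from a normal distribution (1, 0.5)
    alpha = [max(0.1, rng.gauss(1.0, 0.5)) for _ in range(n_servers)]
    beta_ = [max(0.1, rng.gauss(1.0, 0.5)) for _ in range(n_servers)]
    utility = [0.0] * n_servers
    load = [0.0] * n_servers  # accumulated computation time per server

    for _ in range(n_tasks):
        # Lines 10-13: Thompson sampling selection
        theta = [rng.betavariate(a, b) for a, b in zip(alpha, beta_)]
        a = max(range(n_servers), key=theta.__getitem__)
        # Line 14: task-volume weight from a normal(10, variance 2) draw
        w = max(0.1, rng.gauss(10.0, math.sqrt(2.0))) / 10.0
        # Lines 15-16: randomize the G/M/1 stay time by inverse transform
        q = rng.random()
        stay = -math.log(1.0 - q) / (mu[a] * (1.0 - sigma[a])) * w
        # Line 17: cost is the stay time; reward is the unused budget share
        r = max(0.0, 1.0 - stay / c_max)
        # Line 18: nonstationary utility learning
        utility[a] = (1.0 - gamma) * utility[a] + gamma * r
        # Lines 19-20: beta posterior update with the utility reward
        alpha[a] += utility[a]
        beta_[a] += 1.0 - utility[a]
        # Line 21: perturb the service rate to model nonstationarity
        mu[a] = max(sigma[a] + 1e-6, mu[a] * (0.95 + 0.1 * rng.random()))
        load[a] += stay
    return load
```

The returned per-server accumulated computation times correspond to the load-balance statistic evaluated in Section 5.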

We learn and record the specific situation after each task assignment through the MAB algorithm framework, including the actual cost and reward after each user assignment and the actual task processing latency of edge nodes, which can measure QoE and load balancing more precisely and effectively. The specific experiments are described in detail in Section 5.

5. Performance Evaluation

In this section, we conduct extensive experiments to evaluate the TSNS algorithm on a real-world dataset from the CBD of Melbourne (Figure 2). The excerpted dataset contains 125 service base stations and 816 random users. We model the difference in task volume between users with a normal distribution with mean 10 and variance 2, in which the ratio of the random draw to the mean is used as the weight of the effect of task volume on processing latency, and we use this as the basis for a series of experiments.

5.1. Preferences

Combining the ideas of the Monte Carlo method, we designed the experiments as follows. First, we record and calculate the average stay time as the initial property of the corresponding service desk, where the average service rate μ and the task arrival impact factor σ per unit time are drawn from a uniform distribution on [0,1), as in Table 1 (the experiment contains but is not limited to the parameters given in Table 1). Based on the properties of the G/M/1 model in queuing theory, we obtain the distribution function of the task sojourn time and randomize the stay time of the current task at the selected server through that distribution function. For each user's specific task, we assume that it satisfies a normal distribution with a mean of 10 and a variance of 2, and the ratio of the random value to the mean is used as the influence of the specific task on the stay time. That is, for each task assignment, the corresponding edge server randomizes the corresponding sojourn time for calculating the reward and, in turn, the utility reward. The calculated utility rewards are used to update the parameters of the posterior beta distribution, which in turn affects the next round of task assignment.

Regarding the initial prior distribution corresponding to each server in the TSNS algorithm, it is randomized through a normal distribution with a mean of 1 and a variance of 0.5. In each task assignment, the posterior distribution is used as the prior distribution for the following selection, the parameters are randomized through the prior distribution, and the server with the largest parameter is selected for task processing.

First, we assume initial values for the learning parameter γ and the cost upper bound C_max. During the simulated selection process of the edge servers, we obtained the upper quartile (Q3), median, and lower quartile (Q1) of the corresponding cost distribution for each method and calculated the maximum observed value at the upper edge as Q3 + 1.5(Q3 − Q1) [46]. Subsequently, we averaged the upper-edge observations over all methods and obtained an approximate cost upper bound of 10. Further, we compared the average reward profiles under different learning parameters γ and obtained the average profile after removing anomalous profiles. In multiple experiments, the larger the parameter γ, the better the distribution of the average reward tended to be, fluctuating around the average curve, as shown in Figure 3. We simulated the user assignment process under different parameters and obtained the variance comparison among the edge nodes, as shown in Figure 4. To balance the load of the servers, we adopt this setting as the experimental parameter. The comparison of the cost distributions among methods under this upper cost bound and learning parameter is shown in Figure 5.
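The upper-edge observation here presumably follows the standard boxplot rule Q3 + 1.5(Q3 − Q1) [46]; a small sketch using Python's statistics module:

```python
import statistics

def upper_whisker(values):
    # Quartiles by the 'inclusive' method, then Q3 + 1.5 * IQR,
    # i.e., the boxplot bound beyond which points count as outliers
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return q3 + 1.5 * (q3 - q1)

# For costs 1..8: Q1 = 2.75, Q3 = 6.25, upper whisker = 11.5
```

Averaging this bound over the per-method cost distributions yields the approximate cost upper bound used above.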

5.2. Algorithm Performance

We determine the cost upper bound and learning parameter through the above experiments. Subsequently, we examine the performance of the TSNS algorithm in terms of user QoE and load balancing by comparing it with classical methods and related work. (i) Improved ε-greedy: the edge node with the highest utility value is exploited, or a random node is explored, with a certain probability. In the improved algorithm, ε keeps getting smaller, and the exploration probability keeps decreasing as the number of selections increases. (ii) UCB: all optional but not-yet-selected edge servers are first explored. Subsequently, the edge node with the largest utility value is selected, and the utility value is updated after each selection. (iii) UBL [40]: an improvement of the general greedy algorithm in which the utility value of the selected edge node is updated after each selection. If the same edge server is selected twice in a row, the utility value of the corresponding server is updated to a temporary value. (iv) LBCO [35]: first, determine the number of mobile devices offloaded to each edge node, consider the different available uplink data rates of the user-side devices and the computing power of the edge nodes, calculate the upload and service times for each task, obtain the set of edge nodes available to each user, calculate the corresponding times, and force each user to select the optimal edge node for the task. (v) MTOTC [27]: each user has a partial set of selectable edge nodes, and the selection probabilities of all selectable nodes sum to 1. A stochastic congestion game with incomplete information is played based on careful consideration of each user's task type and task volume. When the probability of every user selecting one edge node reaches 1, or a probability within an acceptable error range exceeds the set value, the game stops, and the corresponding edge node is the user's final choice.

We evaluated the methodology from two main perspectives. (i) QoE: this metric is expressed in terms of the average reward earned by users after offloading a task and is necessary for measuring service quality. (ii) Load balancing: this metric compares the total number of tasks ultimately served by each edge server but specifically considers the cumulative computation time for task processing in each server. The degree of load balancing is this manuscript's primary metric for measuring DDoS attack mitigation.

In Figure 6, by computing the actual reward after each selection, we obtain the evolutionary trend of the average reward for each algorithm, which we take to represent the evolutionary trend of the QoE. First, the algorithm was compared with the TS algorithm, which is based on the M/M/1 queuing model, and with the classical algorithms (improved ε-greedy and UCB) within the MAB framework. We find that the algorithm with the G/M/1 model significantly outperforms the M/M/1-based case in terms of QoE, showing greater advantages and potential. The algorithm using the M/M/1 model performs similarly to the UCB algorithm but significantly below the improved ε-greedy algorithm and the TSNS algorithm. Subsequently, in the comparison with related work, we found that the UBL, LBCO, and MTOTC algorithms reach their QoE peaks relatively quickly and remain largely stable, while the TSNS algorithm rises more slowly during learning. However, as the user assignment process continues, the TSNS algorithm outperforms the other algorithms in terms of overall QoE.

We can find that all algorithms in the MAB framework fluctuate to some extent at the beginning of operation, especially during the first 100 edge user assignments, because properties such as the service rates of the servers are initially unknown to the algorithms; the quality of user assignment is gradually improved through continuous selection. Considering the influence of stochastic factors on the actual task arrival and processing, server performance may also change over time, so we introduce a nonstationary factor in the algorithm improvement; i.e., after each task assignment, a reward is calculated based on the task processing process, and the service rate of the edge server is updated based on the reward.

In the experiments, we count the specifics of users' selection of edge nodes under the different methods and represent them in Figure 7. Large numbers of clusters form the representation graph for each method: the centers of the clusters represent edge servers, the ends represent users, and the connecting lines between them represent their selection relationships. The size and density of the clusters reflect the uniformity of users' selection of edge nodes. Among them, the UBL, LBCO, and improved ε-greedy algorithms mainly concentrate on a few fixed edge nodes, with fewer edge nodes connecting more users. In contrast, the TSNS, MTOTC, and UCB algorithms distribute edge users more evenly, and the number of users served by each edge node is similar.

However, since the task volume of tasks to be processed by different users and the computational capacity of edge nodes vary, we also need to discuss the task processing of each edge node more specifically.

As shown in Figure 8, we count the work done by each edge server under each method. The vertical coordinate represents the accumulated computation time of each edge server, expressed as its degree of load; ideally, the degree of load should be essentially similar across edge servers, despite some fluctuation. The figure shows intuitively that the loads within the UCB, TSNS, and MTOTC algorithms are relatively homogeneous compared with the other algorithms, with slight fluctuations around a certain level. To quantify this balance more concretely, the changes in stay time are calculated and expressed as a variance. As shown in Figure 9, the TSNS algorithm has an advantage in load balancing compared with the other algorithms. This advantage is beneficial in resource-limited edge environments, facilitating mitigation of the DDoS attack process and, in turn, reducing the probability of system breaches.

6. Conclusion

In this paper, to slow down the DDoS attack process in edge computing, we have focused on the EUA problem in a 5G ultradense cellular network scenario and considered improving the load balancing of edge servers while guaranteeing the QoE. To quantify the QoE, we have introduced the MAB algorithm framework and added nonstationary factors to the learning mechanism. Considering the effect of scheduling algorithms on the Markov property of the task arrival process, we have introduced the G/M/1 queueing theory model to EUA for the first time. We have focused on processing specific tasks in each edge server and conducted a series of experiments on real-world datasets, which verified the strength and potential of the algorithm in the target scenario.

In future research, in the context of nonorthogonal multiple access (NOMA) for 5G networks, we will consider more general cases of load balancing of edge demand response under the impact of latency and energy consumption, and we will slow down the DDoS attack process in edge computing by pursuing load balancing of edge servers. First, we will specifically consider the number and performance of the physical machines installed in each edge server and further model the processing after tasks reach the edge servers; subsequently, we will combine cloud-edge collaboration and collaboration among edge nodes, setting a threshold to determine whether users need cloud services; more importantly, we will consider the performance of the algorithm from three perspectives, mobile users, edge infrastructure providers, and edge service providers, covering QoE, system energy consumption, and DDoS attack mitigation, to make the model more general.

Data Availability

The data used to support the findings of this study is cited in the article and can be viewed via the link https://github.com/swinedge/eua-dataset.

Conflicts of Interest

The authors declare that they have no conflicts of interest.