Abstract

As data rates and area capacity increase enormously with the advent of 5G wireless communication, network latency becomes a severe issue in a 5G network. Since a 5G network contains various types of terminals, such as vehicles, medical devices, robots, drones, and assorted sensors, which perform complex tasks while interacting dynamically with other devices, there is a need to handle computation-intensive operations. Placing a multiaccess edge computing (MEC) server at the base station, located at the network edge, can be one solution. Applications running on the MEC platform need a dedicated simulation technique to analyze complex systems inside the MEC network. We propose and implement simulation as a service (SIMaaS) for the MEC platform, which offloads simulations onto a Cloud infrastructure based on the concept of computation offloading. In the case study, Monte-Carlo simulations are conducted using the proposed SIMaaS to select the optimal highway tollgate for vehicles to enter. The case study shows how clients of the MEC platform use SIMaaS to achieve specific goals.

1. Introduction

With the advent of the 5G wireless communication service, the aggregate data rate and area capacity have increased up to 1000 times compared with the existing 4G LTE network, and the latency has decreased to 1 ms [1, 2]. To effectively provide ultrareliable and low latency communication (URLLC) in the 5G network, a multiaccess edge computing (MEC) server installed at the base station becomes a crucial technology for handling users’ requests in real time. Until now, users have had to reach a central server far from their starting point to receive a particular service, which increases latency. However, URLLC can become a reality once MEC provides the service directly next to the base station [3–5].

As wireless network technology develops, more research is being done on how to effectively utilize the Cloud platform. In addition, computation offloading technology has been developed, in which user equipment (UE) connects to other platforms to use their resources because complex tasks cannot be performed on the device itself due to resource constraints such as low computing power and limited battery capacity [6].

While mobile phones were practically the only devices connected to the LTE network, in the 5G era various types of terminals such as vehicles, medical devices, robots, drones, and various sensors will be connected at all times. The information collected from these connected heterogeneous terminals can be used to analyze complex systems, and the social internet of things (SIoT), which was not possible in the LTE network, becomes a reality. However, verification is the first priority, or at least a required process, before these systems come into real life. Since the dynamic interactions of complex systems cannot be formalized, the only method to analyze them is simulation. Simulation can be effectively used to analyze complex systems, optimization problems, etc. [7–9].

Considering simulation within the MEC platform, installing a separate simulation tool for each MEC application results in inefficiency, such as duplicated storage and additional overhead when developing each application. Since no previous research has provided simulation as a common feature of the MEC platform, this paper proposes such an environment.

2. Background

2.1. Multiaccess Edge Computing

Multiaccess edge computing (MEC) is a crucial method to reduce latency in 5G networks [3–5]. In a 4G network, terminals have to go to the central server to get information, which lengthens the delay. In addition, when traffic is concentrated at the central server, congestion, network bottlenecks, and other problems may occur, causing network delay. As shown in Figure 1, in the MEC environment the server is placed right next to the base station, called the edge, so the delay can be minimized because the terminal receives a quick response as soon as the request is made.

The European Telecommunications Standards Institute (ETSI) established a standard related to MEC and specified the essential elements [10]. The MEC framework is divided into three levels: the MEC system level, the MEC host level, and the network level. The standard defines the types of messages exchanged between the components and the overall system structure.

2.2. Docker and Kubernetes
2.2.1. Docker

Docker is a container-based tool that virtualizes the environment in which programs run. A container is a kind of virtualization that allows a process to run in an isolated space [11, 12]. The conventional virtualization used in virtual machines (VMs) requires a guest operating system (OS) per VM, and the guest OS introduces substantial overhead, so its performance is poor. Docker instead runs each container as a process on the Docker engine, without a guest OS, so it is much faster because there is no such overhead.

Because a container operates as an isolated process, CPU and system resources are allocated efficiently even when multiple containers run at the same time. Taking advantage of these characteristics, a study applied Docker to an edge computing platform [13].

2.2.2. Kubernetes

The Docker container has emerged as a breakthrough technology, but since each container operates as an individual process, there was initially no tool to manage containers, which made it difficult to utilize them efficiently. Consequently, out of the need for an orchestrator to manage containers effectively, Google developed Kubernetes (k8s) as an open-source platform [14, 15]. Kubernetes connects multiple physical machines (nodes) to form a Cloud and places containers on the optimal node, considering the resources and current state of each node. The conceptual structure of Kubernetes is shown in Figure 2. The master node consists of etcd, the scheduler, the controller, and the API server. etcd stores information for the Kubernetes system. The scheduler selects a node for a newly created Pod, considering the state of the CPU, the memory, the number of Pods running on each node, etc. The controller keeps containers in their intended state. The API server is the central communication server through which all Kubernetes components must communicate. The worker nodes are connected to the master node and run the containers; they send and receive commands through the API server in the master node.

In Kubernetes, the Pod is the minimum deployment unit and carries one or more Docker containers. Pods are executed on the worker nodes, as mentioned earlier. A Kubernetes component called a “Job” ensures that a task is successfully completed by creating one or more Pods to perform it. The Job is used in the proposed environment described in Section 4.2.1.

The Kubernetes API (accessed, for example, through kubectl) can be used to generate Kubernetes components such as Pods and Jobs. By default, the API server is accessible only from the master node. In order to access the API server from a worker node, a ServiceAccount must be created and the access rules must be defined in a Role component. Afterward, the Role and the ServiceAccount must be connected with a RoleBinding. Finally, when a Pod is created, the authorization to access the API server is obtained by attaching the defined ServiceAccount to the Pod. In this research, specific tasks are performed by using the Kubernetes API in Pods and Jobs belonging to the worker nodes. Details of this process are described in Section 4.2.1.

3.1. Simulation as a Service

Generally, Cloud services are divided into infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), which provide physical resources, a software development platform, and software to users as services, respectively. The advantage of Cloud services is that users can obtain them only when needed, without having to deploy extra systems. Besides conventional Cloud services, the concept of providing modeling and simulation (M&S) in the form of a service has emerged. Since modeling and simulation are separate concepts, the services are divided into modeling as a service (MaaS) and simulation as a service (SIMaaS).

There is a study that constructs MaaS and SIMaaS using a traditional Cloud platform [16]. In that study, the basic components of the Cloud system (IaaS, PaaS, and SaaS) were constructed using physical infrastructure, and modeling and simulation as a service (MSaaS) was defined on top of PaaS and SaaS. MSaaS includes modeling as a service, model as a service, and simulation as a service, and these services are provided to the user; Figure 3 shows this in more detail. The model as a service is “a concept of being able to invoke reusable, fine-grained software components across a network” [17]. The modeling as a service is “delivering modeling functionality as a service” [18]. Thus, a model as a service is essentially an output of modeling as a service.

While that study focused on the architecture for delivering MSaaS from a Cloud perspective, the proposed research focuses on how to effectively deliver SIMaaS in a MEC environment.

3.2. Simulation in Cloud Platform

There is research on performing simulations using the Amazon Cloud platform [19]. The constructed system consists of a master node and worker nodes: the master node manages the simulation, and the worker nodes perform it using the resources of the Cloud. The experiments are performed iteratively because the research considers probabilistic variables. Results can be obtained much faster than by simulating only on a local machine because the Cloud resources allow experiments to run on many machines.

The concept of performing rapid simulations using Cloud resources in the referenced paper is similar to this research. However, while the referenced paper focuses only on the simulation itself, this research also considers the modeling step and simulates the relationships of heterogeneous models rather than just mathematical calculations. In addition, SIMaaS is a kind of API that enables components of the MEC platform to use the service as a function call. The details of the proposed environment are described in Section 4.

4. MEC-Based Simulation as a Service

4.1. MEC Platform Structure

As a mobile communication infrastructure for simulation as a service, we are developing a MEC platform that conforms to the reference architecture [20] and specifications presented by ETSI. The service APIs that the MEC platform provides to MEC apps, as specified in the MEC reference architecture, are summarized in Table 1.

In addition to the ETSI-based standard MEC services and APIs, we developed a digital twin service that manages virtual-world software replicas reflecting the real-time status of 5G terminals in the real world [21]. The proposed MEC platform includes the digital twin service, and the APIs are implemented using the Kubernetes Pod component. Based on the information and location of terminals within the 5G network, the digital twin service creates digital twin models corresponding to the managed terminals and synchronizes them in real time with the data collected from them. The information and location of a terminal are obtained through the location service API and the UE identity service API. The digital twin service not only provides MEC applications with real-time status information on terminals but also provides simulation capability to predict the future behavior of terminals according to state changes in the 5G network. The state of the 5G network is obtained through the radio network information service API and the bandwidth management service API. Figure 4 shows the structure of the MEC platform in which the digital twin service operates.

The digital twin service includes the modeling service and the simulation service. This paper deals with the simulation service, which is described in more detail in Section 4.2.

4.2. Development of Simulation as a Service
4.2.1. Structure of Simulation as a Service

In this paper, we defined simulation as a service (SIMaaS) which performs a simulation and returns the simulation result depending on the request of a MEC application in a Kubernetes-based MEC platform. The structure of the environment to provide SIMaaS is shown in Figure 5.

A digital twin service includes a modeling service and a simulation service. When the MEC application requests a simulation from the digital twin service, the modeling service identifies the model information suitable for the requested situation, creates the schema information of the simulation model in XML format, and passes it to the simulation service. Currently, the modeling service is not implemented; we designed the whole structure for the integrity of the entire environment and left the implementation of the modeling service as future work. In this study, we assume that the modeling service’s result is provided to the simulation service as a predefined model.

The XML structure of the model schema for the simulation service is shown in Figure 6. The ModelStructure node defines the name of the message type and the port values for the simulation model. The child node of the ModelStructure node is an NModel node, which means network model. It represents a coupled model of atomic models; an atomic model cannot be split into smaller elements and plays an independent role. The component node, a child of the NModel node, defines an atomic model. The attribute node, a child of the component node, contains the variable information used to initialize the atomic model, and one component node can have multiple attribute nodes. The coupling node specifies the connection between the input and output ports, which are the paths through which messages are exchanged between the models. It connects the models by linking “FromPort” of “FromModel” to “ToPort” of “ToModel.” The ports of the network model can also be defined in port nodes. A port node specifies the name of the in or out port and the port number.

A simulation service Pod is a kind of server that processes simulation requests. The simulation service receives the schema of the simulation model as an XML file, together with the simulation information, from the modeling service. After receiving the files, the simulation service generates simulation model source code compatible with the Adevs environment by combining the atomic models as specified in the model structure XML; the simulation service holds the source code of the predefined atomic models. Then, the simulation service generates a Makefile and executes a script that runs the generated Makefile to compile the code and obtain the executable file.

When the executable file of the simulation model has been generated, the simulation service creates simulation engine Jobs corresponding to the number of simulations to be performed, using the Kubernetes API. Since the simulation service is a Pod belonging to a worker node, it cannot use the Kubernetes API by default. Therefore, it acquires the authority to use the Kubernetes API through the RoleBinding process described in Section 2.2.2. A detailed example in yaml format is shown in Figure 7. The yaml file creates a ServiceAccount named simulation-service, creates a Role named pod-reader, and then connects simulation-service and pod-reader with a RoleBinding. Then, when a Pod is created, it finally obtains permission to use the Kubernetes API by specifying “serviceAccountName: simulation-service.”

When a simulation engine Job is created, a program that connects the Job to the simulation service is executed automatically. The simulation service and the simulation engines exchange data through TCP socket communication. However, because a Kubernetes Pod is dynamically assigned an IP address whenever it is executed, the IP address is not known when opening a socket. To solve this problem, the simulation engine Job also obtains the authority to use the Kubernetes API and then uses it (kubectl describe pod) to find the IP address of the simulation service Pod. When the simulation engine finds the IP address of the simulation service and establishes the connection, the simulation service sends the executable simulation model file. The simulation engine Job performs a simulation by executing the received file and, when the simulation ends, sends the result to the simulation service. Since the simulation engine does not need to do further tasks after the simulation, it is created as a Kubernetes Job for efficient memory utilization of the entire server. The simulation service collects the simulation results received from the simulation engines and sends them to the MEC application that requested the simulation.
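As an illustration of the engine-to-service connection, the sketch below shows a minimal TCP client in C++. It is not the platform code: the IP address, port, and framing are assumptions, and in the actual system the address is discovered through the Kubernetes API as described above.

```cpp
// Minimal sketch: a simulation engine process connecting back to the
// simulation service over TCP and receiving a payload (e.g., the executable
// model file). IP, port, and "read until close" framing are assumptions.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    const std::string service_ip = "10.244.1.23"; // assumed: obtained via the Kubernetes API
    const int service_port = 9000;                // assumed port of the simulation service

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(service_port);
    inet_pton(AF_INET, service_ip.c_str(), &addr.sin_addr);

    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    // Receive the payload; here we simply read until the peer closes the connection.
    std::string payload;
    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
        payload.append(buf, static_cast<size_t>(n));
    }
    std::cout << "received " << payload.size() << " bytes\n";

    close(sock);
    return 0;
}
```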

The Kubernetes system does not delete a Job, or its data, even after the Job has finished. Therefore, when all simulations are finished, the simulation service automatically deletes the created Jobs using the kubectl command.

4.2.2. Simulation Engine

The expected situations for simulating via the MEC platform are mostly constructed with a discrete event system. A discrete event system is an “event-driven system in which discrete states are updated depending entirely on the occurrence of asynchronous discrete events over time” [22].

Generally, objects that make individual decisions are connected in the MEC network. Each decision becomes an event that affects other objects’ behavior; this is the “event-driven system” mentioned above. The discrete event system specification (DEVS) formalism [7] describes such discrete event systems.

The model representation is based on the DEVS formalism, so a DEVS-based simulation engine is required. Several DEVS-based simulation engines are listed in Table 2. Among them, we chose Adevs [23] as the simulation engine for this environment because SIMaaS is used in a real-time system, where execution speed is an important factor.
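To make the DEVS atomic-model idea concrete, the following simplified, self-contained C++ sketch shows a toy “processor” (tollgate) model with a state, a time-advance function, and internal/external transitions. It is an illustration only; it does not use the actual Adevs API, and the service time and driver code are assumptions.

```cpp
// Toy DEVS-style atomic model (illustrative; not the Adevs API).
#include <iostream>
#include <limits>
#include <queue>

struct Vehicle { int id; };

class Processor {
public:
    explicit Processor(double service_time) : service_time_(service_time) {}

    // Time advance: time remaining until the next internal event.
    double ta() const {
        return busy_ ? remaining_ : std::numeric_limits<double>::infinity();
    }

    // External transition: a vehicle arrives after elapsed time e.
    void delta_ext(double e, const Vehicle& v) {
        if (busy_) { remaining_ -= e; queue_.push(v); return; }
        current_ = v; busy_ = true; remaining_ = service_time_;
    }

    // Internal transition + output: service of the current vehicle finishes.
    void delta_int(std::queue<Vehicle>& out) {
        out.push(current_);
        if (!queue_.empty()) {
            current_ = queue_.front(); queue_.pop(); remaining_ = service_time_;
        } else {
            busy_ = false;
        }
    }

private:
    double service_time_;
    double remaining_ = 0.0;
    bool busy_ = false;
    Vehicle current_{};
    std::queue<Vehicle> queue_;
};

int main() {
    Processor etcs(3.0);              // assumed: 3 s per vehicle
    std::queue<Vehicle> done;
    etcs.delta_ext(0.0, Vehicle{1});  // vehicle 1 arrives
    etcs.delta_ext(1.0, Vehicle{2});  // vehicle 2 arrives 1 s later, waits
    etcs.delta_int(done);             // vehicle 1 finishes service
    std::cout << "finished vehicles: " << done.size() << "\n";
    return 0;
}
```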

According to the benchmark results of Van Tendeloo and Vangheluwe [24], Adevs uses the least CPU time, which means that Adevs is the fastest among DEVS-based simulation engines. The benchmark ranking is noted in Table 2.

5. Case Study

5.1. Scenario Description

The problem to be solved in the case study is to find the optimal gate in a situation where autonomous vehicles (AVs) and manually driven vehicles (MVs) are mixed and pass through a highway toll collection system (TCS; also known as a tollgate). The abstract situation is described in Figure 8 below.

When a vehicle approaches the tollgate, it is connected to the regional MEC server network. Then, a decision-making app (MEC client app) in the vehicle requests the optimal gate from the tollgate selection MEC app running on the MEC server. The MEC app calls the simulation service to acquire the average processing time and the average queue waiting time of each gate based on the current situation; these data are required to decide which gate is optimal. In the case study, the modeling service is not implemented, so we assume a certain situation, which is described in Section 5.2. Consequently, when the MEC app requests a simulation from the simulation service, it sends the XML schema of the simulation model. The simulation service performs simulations based on the MEC app’s request using multiple computing nodes. After finishing the simulations, the simulation service sends the results to the MEC app. The MEC app then analyzes the results and determines the optimal gate. Finally, the MEC app sends the result to the MEC client app. The process is described in Figure 9.

The arrival time of the vehicles and the processing time of a tollgate are stochastic; thus, a Monte-Carlo simulation [25], a method that solves probabilistic problems by simulating repeatedly, is required. Because the proposed SIMaaS utilizes Cloud infrastructure to perform multiple simulations, it is a suitable option for this case study.
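To make the idea of Monte-Carlo simulation concrete, the following self-contained C++ sketch estimates the mean time a vehicle spends at a single gate by averaging many independent replications. It is illustrative only: the single-gate structure, the rates, and the replication count are assumptions, not the case-study model.

```cpp
// Monte-Carlo sketch: repeat a stochastic single-gate simulation many times
// and average the results (illustrative values).
#include <algorithm>
#include <iostream>
#include <random>

// One replication: n vehicles through one FIFO gate; returns mean time in system.
double one_run(std::mt19937& rng, int n, double arrival_rate, double service_rate) {
    std::exponential_distribution<double> interarrival(arrival_rate);
    std::exponential_distribution<double> service(service_rate);
    double clock = 0.0, gate_free_at = 0.0, total_time = 0.0;
    for (int i = 0; i < n; ++i) {
        clock += interarrival(rng);                   // vehicle arrives
        double start = std::max(clock, gate_free_at); // waits if the gate is busy
        double finish = start + service(rng);
        gate_free_at = finish;
        total_time += finish - clock;                 // waiting + service
    }
    return total_time / n;
}

int main() {
    std::mt19937 rng(42);
    const int replications = 100;  // assumed number of Monte-Carlo runs
    double sum = 0.0;
    for (int r = 0; r < replications; ++r)
        sum += one_run(rng, 10000, /*arrival_rate=*/10.0, /*service_rate=*/12.0);
    std::cout << "estimated mean time in system: " << sum / replications << " s\n";
    return 0;
}
```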

5.2. Simulation Model Description

The structure of the model for this case study is shown in Figure 10. The generator model creates vehicle objects periodically following a certain distribution and sends them to the buffer model. When a vehicle object is created, the heading direction, car type, and pay type are randomly assigned. The heading direction is the way the vehicle exits the tollgate and can take the value right, straight, or left. The car type is the vehicle’s driver type: autonomous or manual. The pay type is the vehicle’s payment method: electronic (automatic) or manual.
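A possible in-code representation of the vehicle object and its randomly assigned attributes is sketched below. The type and function names and the uniform random choices are illustrative assumptions, not the actual model code; the rule that an autonomous vehicle pays only electronically is described later in this section.

```cpp
// Illustrative vehicle object with the three randomly assigned attributes.
#include <random>

enum class Direction { RIGHT, STRAIGHT, LEFT };  // heading way out of the tollgate
enum class CarType   { AUTONOMOUS, MANUAL };     // driver type
enum class PayType   { ELECTRONIC, MANUAL };     // payment method

struct Vehicle {
    int id;
    Direction direction;
    CarType car_type;
    PayType pay_type;
};

// The generator assigns the attributes at random when a vehicle is created
// (50/50 choices here are assumptions for illustration).
Vehicle make_vehicle(int id, std::mt19937& rng) {
    std::uniform_int_distribution<int> dir(0, 2);
    std::uniform_int_distribution<int> coin(0, 1);
    CarType ct = coin(rng) ? CarType::AUTONOMOUS : CarType::MANUAL;
    // Assumption from the scenario: an autonomous vehicle pays electronically only.
    PayType pt = (ct == CarType::AUTONOMOUS)
                     ? PayType::ELECTRONIC
                     : (coin(rng) ? PayType::ELECTRONIC : PayType::MANUAL);
    return Vehicle{id, static_cast<Direction>(dir(rng)), ct, pt};
}
```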

The buffer model transmits the stored vehicle objects in first-in, first-out (FIFO) order to the connected processor model. The processor model is the tollgate and handles the vehicle objects. Its type is either an electronic toll collection system (ETCS) or a toll collection system (TCS). An ETCS charges the fare automatically without stopping the vehicle; in a TCS, the fare is charged manually by a human. Each processor type has a different processing time because the characteristics of the types differ; generally, the ETCS model processes vehicles faster than the TCS model. For this case study, two ETCS processors and three TCS processors are used to construct the tollgate models, as shown in Figure 10: the ETCS processors are located at the top and bottom, and the TCS processors in the middle.

The transducer model collects the finished objects passed by the processors to analyze the simulation results. It calculates the average waiting time, average service time, average queue waiting time, average queue length, average electric queue length, and average manual queue length. When the object count reaches the simulation end count, the transducer produces the final simulation result and sends a stop command to the generator model.

We predefined the XML schema of the simulation model in Figure 10 on behalf of the modeling service. When the simulation service receives the XML model, it automatically generates an executable file as described in Section 4.2.1 and then creates simulation engines to run the simulations. After the simulations end, the simulation service collects the results from the simulation engines.

When a vehicle model is created, whether it is an autonomous vehicle or a manually driven vehicle is determined probabilistically. In the case of an autonomous vehicle, the payment method is assumed to be electronic only. A manually driven vehicle can have two payment methods, electronic and manual, which are also determined probabilistically. ETCS is available only if the payment method is electronic, whereas TCS can be used regardless of the payment method if the vehicle is manually driven. The situation in which an AV uses a TCS is not considered because this case study assumes a fully autonomous driving situation.

When the generator model creates vehicle objects to send to the buffer model, the optimal buffer is determined by Algorithm 1. Since one buffer is connected to each processor, sending an object to a particular buffer means sending it to the connected processor. Algorithm 1 takes six parameters: the heading direction after passing through the TCS, the car type, the pay type, the queue type, the buffer size B, and the self-determination probability P.

Algorithm 1: Find the optimal buffer.
Parameters: direction, car type, pay type, queue type, buffer size, self-determination probability.
INITIALIZATION:
1: If … then
2:  set start ⟵ 0
3:  set end ⟵ …
4: End if
5: Else if … then
6:  set start ⟵ …
7:  set end ⟵ …
8: End if
9: Else
10:  set …
11:  set …
12:  If … then
13:   set …
14:  End if
15: End else
16: If … then
17:  set …
18: End if
BUFFER DECISION:
19: set result ⟵ Algorithm 2(…)
20: If … then
21:  If … then
22:   set result ⟵ Algorithm 2(…)
23:  End if
24: End if
FINALIZATION:
25: return result

The car type can have AUTONOMOUS or MANUAL as a value, and the pay type and queue type can have ELECTRONIC or MANUAL as a value. The self-determination probability (P) is the probability that the driver of a manually driven vehicle arbitrarily makes a decision different from the destination calculated by the MEC application. P has a range of 0 to 9; when P is 0, the probability of making another decision is 0%, and when P is 9, the probability reaches 100%. Algorithm 1 calls Algorithm 2 and finally finds the optimal buffer. In the INITIALIZATION phase, it calculates the searching range (start, end) for Algorithm 2. In the BUFFER DECISION step, the final result is obtained by calling Algorithm 2 with start, end, and C as parameters. C is the self-determination condition used in the probability (P) calculation, and it has a value of 0 or 1: when the self-determining situation is considered, C is 1; otherwise, C is 0. When C equals 1, if the car type is MANUAL and the probability P is satisfied, an extra task is added to select a buffer other than the previously selected one; the final result is then obtained by calling Algorithm 2 again with C set to 1.

Algorithm 2: Find the optimal buffer.
Parameters: start, end, …, C (self-determination condition).
INITIALIZATION:
1: set …
2: set …
BUFFER DECISION:
3: do
  LOOP:
4:  for … to … do
5:   If … && … || … then
6:     If … then
7:      set …
8:      set …
9:     End if
10:    End if
11:   End for
12:  set …
13:  set …
14:  If … then
15:   set …
16:  End if
17:  If … then
18:   set …
19:  End if
20: while …
FINALIZATION:
21: return result

Algorithm 2 finds the buffers matching the payment type of the vehicle within the range (start, end) and selects the buffer with the smallest size as the result. If C equals 1, only the MANUAL car type is considered. When a manually driven vehicle’s payment method is AUTOMATIC, it is basically allocated to an automatic buffer; however, since the driver of a manually driven vehicle can also pay directly, Algorithm 2 then considers both types of buffer and recalculates the final result by choosing the buffer with the smallest size.
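The selection idea can be sketched in C++ as follows. The data structures and the exact admissibility rule are assumptions based on the description above, not the published pseudocode.

```cpp
// Sketch of the Algorithm 2 idea: within a searching range, pick the shortest
// buffer the vehicle is allowed to use; a manually driven vehicle may also
// use a manual buffer even when it could pay electronically.
#include <limits>
#include <vector>

enum class QueueType { ELECTRONIC, MANUAL };

struct Buffer {
    QueueType type;
    std::size_t size;  // number of vehicles currently waiting
};

// Returns the index of the smallest admissible buffer in [start, end], or -1.
int select_buffer(const std::vector<Buffer>& buffers, int start, int end,
                  bool manual_driver, QueueType pay_type) {
    int best = -1;
    std::size_t best_size = std::numeric_limits<std::size_t>::max();
    for (int i = start; i <= end && i < static_cast<int>(buffers.size()); ++i) {
        bool allowed = (buffers[i].type == pay_type) ||
                       (manual_driver && buffers[i].type == QueueType::MANUAL);
        if (allowed && buffers[i].size < best_size) {
            best = i;
            best_size = buffers[i].size;
        }
    }
    return best;
}
```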

5.3. Model Validation and Verification

The toll collection system model is constructed from basic generator, buffer, and processor (GBP) models. To check whether the model follows the real system and is completely implemented, we calculated the ideal result using queueing theory and compared it with the simulation result. The assumed system consists of one generator, one buffer, and one processor. The testing model for validation is described in Figure 11; it is a typical M/M/1 queue model with an arrival rate (\(\lambda\)) that follows a Poisson process and a service rate (\(\mu\)) that follows an exponential distribution, with one processor. For this experiment, \(\lambda\) and \(\mu\) are set to 10 and 12, respectively, and the unit of time is seconds.

The equations of the M/M/1 queue are as follows: \(T\) is the average amount of time that a customer spends in the system (1), \(W\) is the average amount of time spent in the queue (2), and \(L_q\) is the average number of customers in the buffer (3):
\[
T = \frac{1}{\mu - \lambda}, \quad (1) \qquad
W = \frac{\lambda}{\mu(\mu - \lambda)}, \quad (2) \qquad
L_q = \frac{\lambda^2}{\mu(\mu - \lambda)}. \quad (3)
\]
According to queueing theory, when \(\lambda\) equals 10 and \(\mu\) equals 12, \(T\) is 0.5, \(W\) is 0.42, and \(L_q\) is 4.17.
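As a quick sanity check of equations (1)–(3), the following snippet (illustrative arithmetic only) evaluates them for \(\lambda = 10\) and \(\mu = 12\) and reproduces the values above.

```cpp
// Numeric check of the M/M/1 formulas for lambda = 10, mu = 12.
#include <cstdio>

int main() {
    double lambda = 10.0, mu = 12.0;
    double T  = 1.0 / (mu - lambda);                     // time in system: 0.5
    double W  = lambda / (mu * (mu - lambda));           // time in queue: ~0.42
    double Lq = lambda * lambda / (mu * (mu - lambda));  // customers in buffer: ~4.17
    std::printf("T = %.2f, W = %.2f, Lq = %.2f\n", T, W, Lq);
    return 0;
}
```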

To verify the implementation, an experiment was conducted by simulating 1,000,000 processed objects 10 times and calculating the average values. The results are as follows: \(T\) is 0.53, \(W\) is 0.44, and \(L_q\) is 4.3. These values are close to the queueing theory results; therefore, the implementation is verified.

When it comes to validating the scenario described in Section 5.1, it is hard to obtain real-world data at this point. Because fully autonomous vehicles are not yet commercialized, there are no real-world data comparable to the situation assumed in this case study. Nevertheless, the model follows queueing theory; thus, given real data, the model would behave like the real situation.

5.4. Experiment Design and Results

The arrival rate (\(\lambda\)) of the generator is assumed to represent a crowded period and follows a Poisson distribution with 3600 vehicles per hour. We assume that the service rate (\(\mu\)) of the processors follows an exponential distribution with 1200 vehicles per hour for ETCS and 257 vehicles per hour for TCS. To sum up, \(\lambda = 3600\) per hour, \(\mu_{ETCS} = 1200\) per hour, and \(\mu_{TCS} = 257\) per hour.
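Under these assumptions, interarrival and service times are drawn from exponential distributions. The sketch below (illustrative only, not the experiment code) shows one way to sample them, converting the rates from per hour to per second.

```cpp
// Sampling interarrival and service times from the experiment's rates.
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(7);
    const double lambda  = 3600.0 / 3600.0;  // arrivals: 3600 per hour = 1.0 per second
    const double mu_etcs = 1200.0 / 3600.0;  // ETCS service rate per second
    const double mu_tcs  =  257.0 / 3600.0;  // TCS service rate per second

    std::exponential_distribution<double> interarrival(lambda);
    std::exponential_distribution<double> etcs_service(mu_etcs);
    std::exponential_distribution<double> tcs_service(mu_tcs);

    std::cout << "sample interarrival time: " << interarrival(rng) << " s\n"
              << "sample ETCS service time: " << etcs_service(rng) << " s\n"
              << "sample TCS service time:  " << tcs_service(rng)  << " s\n";
    return 0;
}
```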

An AV always enters the tollgate specified by the MEC server, while an MV can enter another tollgate arbitrarily without following the decision made by the MEC server. The arbitrary probability is assumed to be 20% (P equals 2) for this case study, because people often ignore GPS navigation information and make different decisions based on personal judgment.

Experiments were conducted by changing the ratio of AVs from 0% to 100%. There is a study analyzing the driving stress of MV drivers in circumstances where AVs and MVs operate together [26]. In a similar context, this experiment aims to analyze the delay in passing tollgates depending on the ratio of AVs.

Because all situations (vehicle generation, assignment to a specific buffer, and processing time in a processor) are probabilistic, a Monte-Carlo simulation is applied to produce results through repeated executions. In each case, we ran the simulation 100 times with 10,000 vehicle passes and took the average. The experiment results are shown in Figure 12.

The information collected in the experiment comprises the average waiting time (AWT), average service time (AST), average queue waiting time (AQWT), average queue length (AQL), average electric queue length (AEQL), and average manual queue length (AMQL). The AWT is the total time from when an object is created until it has passed through a processor, divided by the number of objects. The AST is the total time taken by the processors divided by the number of objects. The AQL is the sum of the queue lengths observed when each object enters a processor, divided by the number of objects. Similarly, AEQL is computed over the ETCS buffers, and AMQL over the TCS buffers.
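The following short sketch (assumed record layout and toy data, not the transducer code) shows how such averages can be computed from per-vehicle records.

```cpp
// Computing AWT, AST, AQWT, and AQL from per-vehicle records (toy data).
#include <cstddef>
#include <iostream>
#include <vector>

struct Record {
    double created_at;       // time the vehicle object was generated
    double service_started;  // time it entered a processor
    double finished_at;      // time it left the processor
    std::size_t queue_len;   // queue length observed when entering the processor
};

int main() {
    std::vector<Record> records = {
        {0.0, 1.0, 4.0, 2}, {1.5, 4.0, 7.0, 3}, {2.0, 7.0, 9.5, 1}};  // toy data

    double awt = 0, ast = 0, aqwt = 0, aql = 0;
    for (const auto& r : records) {
        awt  += r.finished_at - r.created_at;       // total time from creation to exit
        ast  += r.finished_at - r.service_started;  // time spent in the processor
        aqwt += r.service_started - r.created_at;   // time spent waiting in the queue
        aql  += static_cast<double>(r.queue_len);
    }
    const double n = static_cast<double>(records.size());
    std::cout << "AWT=" << awt / n << " AST=" << ast / n
              << " AQWT=" << aqwt / n << " AQL=" << aql / n << "\n";
    return 0;
}
```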

From the experiment results, we can identify the tollgate delay tendency under various circumstances. When the self-deterministic mode is available, AST and AQL increase: comparing Figures 12(a) and 12(b), Figure 12(b) represents the circumstance where the self-deterministic mode is on, and its AST is larger than that of Figure 12(a). Also, when the ratio of vehicles using ETCS is high, AST tends to be smaller: Figure 12(a)’s ETCS-using ratio is 70%, while Figure 12(e)’s is 50%, and the former’s AST is significantly smaller than the latter’s. In every case, AST and AQL increase when the AV_RATIO is very small or very large.

This experiment shows an example of the results produced when vehicles in various circumstances request optimal gates. Accordingly, when vehicles approach MEC-installed tollgates and request the optimal gate information, the MEC app can immediately transmit the gate information under any circumstance by using SIMaaS.

6. Conclusion

In this paper, we presented a Kubernetes-based multiaccess edge computing platform to provide ultrareliable and low latency communication service to user equipment connected to the 5G network. We also designed and implemented SIMaaS, which provides simulation to the MEC apps running on the MEC platform. SIMaaS performs multiple simulations using numerous simulation engines spread over many computing nodes. In the case study, we constructed a highway toll collection system through which autonomous and manually driven vehicles pass. When a vehicle approaches the tollgate, it requests the optimal gate information from the MEC server. A Monte-Carlo simulation is carried out to handle probabilistic factors such as the vehicles’ arrival rate and the tollgates’ service rates. Since the MEC app alone cannot find the optimal path to the tollgates for the vehicles, SIMaaS is called and transmits the simulation results back to the MEC app. After analyzing the results, the MEC app notifies the MEC client, that is, the vehicle, of the optimal tollgate.

The experimental results show that the processing time is reduced as the ratio of using an electronic toll collection system increases and the ratio of drivers’ self-deterministic behavior decreases.

Moreover, beyond this case study, in the near future thousands of objects will be connected over the 5G network. To analyze the relations of these complex, intertwined heterogeneous objects, the proposed SIMaaS can be a good option.

For future work, we plan to implement a modeling service that analyzes the objects connected to the MEC network and builds digital twin models of them. With the modeling service, a dynamically changing system can be simulated without manual model construction.

Data Availability

The data availability statements are as follows: (1) the experiment result data used to support the findings of this study are available from the corresponding author upon request; (2) the software code used to support the findings of this study has not been made available because it belongs to our institute.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by “The Cross-Ministry Giga KOREA Project” grant funded by the Korea government (MSIT) (No. GK19P0400, Development of Mobile Edge Computing Platform Technology for URLLC Services).