Abstract

At present, fast-paced work and life put people under great pressure, and people's enthusiasm for fitness is rising, which conflicts with the shortage of existing stadiums. It is therefore of considerable significance to open shared stadiums near where citizens live for booking, and enabling citizens to book a suitable stadium according to their own needs through mobile phones or computers is an urgent problem to be solved. The booking of shared stadiums can be regarded as a mobile edge computing (MEC) scenario, and the problem can be transformed into task scheduling research under MEC through an intelligent scheduling method. When edge computing (EC) technology is used for service computation, the mobile terminal offloads the service to the edge computing server; after the server completes the computation, the results are sent back to the mobile terminal. The computation time and system energy consumption can therefore be further reduced through task scheduling to improve user satisfaction. In this study, a joint scheduling of service caching and task algorithm is proposed to reduce the latency of shared stadium booking requests and improve user experience. Simulation results show that the proposed algorithm, with its edge cooperation idea, achieves lower average system latency at lower load levels and significantly reduces the cloud offloading ratio under low and middle pressure. In addition, the proposed algorithm uses the secondary transfer of more tasks to reduce the pressure of local task execution. Finally, the quality of experience (QoE) satisfaction rate under low pressure is guaranteed.

1. Introduction

Fitness has been gaining popularity in China over the past decade [1, 2]. One of the main problems encountered in implementation is the lack of stadiums [3]. As an important guarantee for the development of sports, sharing stadium resources between colleges or high/middle schools and society has become the direction under conditions of insufficient stadium resources, which can effectively alleviate the contradiction between citizens' growing demand for fitness and limited social stadium resources. Therefore, it is necessary to construct a resource-sharing model in which college or school stadiums open to society and to ensure the sustainable opening of college or school stadium resources to the public.

Taking Beijing as an example, there are nearly 1,000 stadiums of different sizes available across the city, some free and some charged, and enabling citizens to book a suitable stadium for their own needs using phones or computers is an urgent problem to address. The booking of a shared stadium can be regarded as task scheduling in mobile edge computing (MEC). Phones and computers are located at the edge of the network [4–6]. Task offloading refers to the transfer of computing tasks from resource-constrained phones or computers to external platforms.

MEC essentially brings the cloud to the network edge for real-time and personalized services, offering an IT service environment and cloud computing capabilities. This environment is characterized by proximity, low latency, and high bandwidth; it also exposes real-time context information that applications can leverage to create value [7, 8].

To address the limited resources and insufficient flexibility of the edge network, a resource management and task scheduling mechanism should be introduced to manage the edge network as a whole. A reasonable resource allocation and scheduling scheme makes full use of edge resources to reduce latency and improve the user experience of booking stadiums. Designing such a scheme must consider many factors, such as the number of users and the distribution and amount of computing resources, so as to maximize resource utilization at the edge. The optimized network latency makes it easier for citizens to book the most suitable stadium, bringing a better experience to users.

Accordingly, the contributions of this study are summarized as follows. (i) The offloading latency is modeled after analyzing the communication latency, the edge computing latency, and the cloud computing latency. (ii) A joint scheduling of service caching and task algorithm is proposed, which divides the problem into two relatively easy subproblems: service caching and joint task scheduling.

The rest of this study is organized as follows. Related work is reviewed in Section 2. The modeling of resource allocation and task scheduling is presented in Section 3. In Section 4, the joint scheduling of service caching and task algorithm is proposed. Experimental results are presented in Section 5, and Section 6 concludes this study.

2. Related Work

For problems such as the limited resources and insufficient elasticity of the edge network, resource allocation and task scheduling mechanisms should be introduced to manage the edge network as a whole. Currently, there are a number of studies on resource allocation and task scheduling.

To cope with the insufficient processing capacity and limited resources of terminal devices, computation offloading is introduced in MEC. EC offloading refers to the user end offloading computing tasks to the MEC network to overcome the device's shortcomings in storage, computing performance, and energy efficiency. Many resource allocation strategies for MEC have been proposed. In [9], two resource allocation mechanisms were proposed: an auction-based mechanism, which generated envy-free allocations, and a linear program-based mechanism, which provided no guarantee of envy-freeness. To mitigate the impact of limited service caching on task offloading, an efficient convex programming approach was designed in [10]. In [11], a game-based distribution scheme was presented to jointly and dynamically allocate the resources required for offloading in the mobile system. In [12], a power transmission-scheduling scheme for nodes in the industrial internet of things was proposed to lower the charging cost. In [13], joint partial task offloading and resource allocation in MEC with energy harvesting was proposed, taking the network architecture of MEC into account. Computing tasks can be offloaded to edge servers, but user mobility is hard to predict; in [7], an effective solution based on a relaxation-and-rounding strategy was proposed to ensure seamless migration under user mobility. To perform offloaded tasks safely, a Markov model was used in [14] to reduce risk, and a secure mechanism was additionally proposed to protect economic returns. Two of the most important aspects of MEC are the security of task offloading and the connectivity of users; in [15], constraints on offloading rate and latency were studied to derive the optimal solution for MEC. Edge servers consume considerable power yet have limited power budgets; in [16], tasks were offloaded by searching for the highest reward to cope with this limitation.

Task scheduling has long been studied in cloud computing, where scholars usually optimize resource allocation and user task scheduling to reduce latency. With the development of MEC and its combination with cellular networks, how to schedule the EC tasks submitted by users has become an urgent problem. There are also other methods for task scheduling in MEC. Given the combination and strong coupling of resource allocation, a joint resource allocation and task scheduling method was proposed in [17] to divide the problem into two subproblems, optimizing the coupling in MEC. Considering the cooperation of convolutional neural networks (CNNs) and MEC, the task scheduling problem in [18] was divided into two problems: resource allocation and CNN task scheduling; to highlight the latency of task scheduling, the problem was transformed into a minimum-latency problem. Data must be transmitted in some special environments; in [19], a task scheduling strategy was proposed for transmitting encrypted data. In [20], a dynamic programming-based, energy-saving task scheduling algorithm was proposed to minimize energy consumption under a latency constraint. In [21], an MEC server was deployed in each area of the internet of vehicles, and the authors proposed scheduling computing tasks to MEC servers through wireless networks. In [22], an energy-based task scheduling method was proposed to optimize power consumption. In [23], a novel secure framework was proposed to offload computing tasks to the MEC server with low latency and low power consumption. In [24], a stochastic integer nonlinear programming method was proposed to schedule computation-intensive tasks.

3. Modeling of Resource Allocation and Task Scheduling

Users book stadiums on phones or computers and offload the computing tasks to edge servers. The latency of task offloading can be divided into two parts: communication latency and computing latency. In the mobile edge network scenario, wireless transmission latency is not the focus of this study and can be regarded as a constant. When a booking task arrives at the base station, if the directly connected edge server is able to process it, there is no transmission latency; if the task needs to be sent to another edge server in the edge domain for processing, the corresponding transmission latency must be added. The problem of booking a shared stadium is thus transformed into service caching and booking-task scheduling.

3.1. Edge Caching

In this study, a task of service m can be performed on an edge server only if the resources allocated to the service reach a certain threshold Thm. A decision must be made about which types of services to cache so that the average latency is lowest. The decision variable amn is introduced to indicate whether service m runs on edge server n. The symbols used in this study are listed in Table 1.
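In symbols (a notational sketch consistent with the definitions above; the inequality simply restates the threshold requirement):

\[ a_{mn} = \begin{cases} 1, & \text{if service } m \text{ runs on edge server } n, \\ 0, & \text{otherwise,} \end{cases} \qquad ESf_{mn} \ge a_{mn} \cdot Th_m. \]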

3.2. EC Latency

Computing latency refers to the time required for a booking task to be processed at the edge server. Although the computing latency of a task is affected by multiple dimensions of computing resources simultaneously, only the CPU frequency is considered in this study. The service time is assumed to follow a negative exponential distribution with mean 1/λ, where λ is the service rate. It is assumed that when the CPU frequency allocated to a service exceeds the resource threshold Thm, the service rate increases linearly with the CPU frequency: if the coefficient of service m is μm and the processor frequency allocated on edge server n is ESfmn, then the service rate is λmn = μm × ESfmn. A simple M/M/1 queuing model can then be used to model the computing latency of tasks [25].
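For illustration (the numbers are hypothetical): if \( \mu_m = 2 \) requests/s per GHz and \( ESf_{mn} = 3 \) GHz, then \( \lambda_{mn} = \mu_m \times ESf_{mn} = 6 \) requests/s. For an M/M/1 queue with service rate \( \lambda_{mn} \) and arrival rate \( \delta \), the standard mean response time is \( 1/(\lambda_{mn} - \delta) \); at \( \delta = 4 \) requests/s, this gives \( 1/(6 - 4) = 0.5 \) s.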

3.3. Cloud Computing Latency

Due to the limitation of edge resources and the existence of bursty traffic, the edge cannot always process users' offloaded booking tasks in time. Such booking tasks may be sent to a remote data center for execution, and the resulting latency is called cloud computing latency in this study. The communication latency experienced in reaching the cloud is much larger than the communication latency Lic within the edge network. Although cloud latency changes over time, it is stable over short periods, so the edge-to-cloud communication latency can be assumed constant. The cloud data center has nearly unlimited computing resources, so its computing latency is lower than at the edge and relatively fixed. Therefore, for each service m, a constant Lm can be used to represent its cloud computing latency within a period.

3.4. Optimization Model

The offloading latency is modeled after analyzing the three kinds of latency above. The purpose of this study is to optimize the latency performance of the edge network by adjusting the allocation of stadium-service resources and the scheduling of booking tasks. The latency experienced by each user of each service is a random variable, and a weighted average ω can be used to characterize the performance of the entire edge.

When directly connected users of an edge server frequently use a service, the resources of that service can be increased, i.e., ESfmn can be raised. When there are many user requests for multiple services on an edge server, it must be decided which services and what percentage of requests will be offloaded, and where the offloaded tasks will be performed. emn represents the load adjustment for service m on edge server n. If emn is positive, load of the same service from other base stations in the domain is offloaded to the local server; otherwise, a certain amount of user requests is transferred to other nodes for computing, which may be other edge servers or cloud data centers. Once emn is determined, each node forwards user task requests to the controller according to the ratio ηmn = emn/δmn, and the controller then decides the destination node.

On the basis of the M/M/1 model in queuing theory, the computing latency of service m on edge server n is defined as follows:
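A reconstruction of this latency, assuming the M/M/1 mean response time with the adjusted arrival rate \( \delta_{mn} + e_{mn} \) (subscripts written out explicitly; \( l^{c}_{mn} \) denotes the computing latency of service m on server n):

\[ l^{c}_{mn} = \frac{1}{\lambda_{mn} - (\delta_{mn} + e_{mn})} = \frac{1}{\mu_m \cdot ESf_{mn} - (\delta_{mn} + e_{mn})}. \]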

Even for the same service, each edge server has its own queue, and the computing latency l^c_mn differs considerably across servers. Weighted balancing is therefore required according to the number of users to be processed. The load proportion of each edge server is defined as follows:
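A natural form of this proportion (the symbol \( \varphi_{mn} \) is introduced here for illustration) is

\[ \varphi_{mn} = \frac{\delta_{mn} + e_{mn}}{\sum_{n'} \left( \delta_{mn'} + e_{mn'} \right)}, \]

that is, server n's share of the total edge load of service m.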

So the average computing latency of service m at the edge is defined as follows:
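Under the same assumptions, the load-weighted form would be

\[ l^{c}_{m} = \sum_{n} \varphi_{mn} \, l^{c}_{mn}. \]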

In addition to the booking tasks performed at the edge, some booking tasks are offloaded to the cloud for execution, and the proportion of tasks performed on the cloud is determined by the adjustment of the arrival rate δmn at each edge server. The calculation of the latency introduced during transmission is similar. Therefore, the transmission ratio can be defined as follows:
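Since, by constraint (iii) below, the load that the edge domain sheds is \( -\sum_{n} e_{mn} \), one plausible form of the transmission ratio is

\[ \eta_{m} = \frac{-\sum_{n} e_{mn}}{\sum_{n} \delta_{mn}}, \]

the fraction of service m's total arrivals sent to the cloud.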

The average latency can be defined as follows:
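Reading this as the latency of the cloud path (an interpretation based on Section 3.3; the symbol \( L^{c2c} \) for the constant edge-to-cloud communication latency is introduced here), one plausible form is

\[ l^{cloud}_{m} = L^{c2c} + L_{m}. \]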

To sum up, for a certain service m, the average latency can be defined as follows:
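Combining the edge and cloud paths in proportion to the transmission ratio (again a reconstruction under the forms above):

\[ \bar{L}_{m} = (1 - \eta_{m}) \, l^{c}_{m} + \eta_{m} \, l^{cloud}_{m}. \]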

By using the service weights to take a weighted average, the complete expression of the average latency in the edge domain can be defined as follows:
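With \( w_{m} \) denoting the weight of service m (the normalization is an assumption), this weighted average would read

\[ \omega = \sum_{m} w_{m} \bar{L}_{m}, \qquad \sum_{m} w_{m} = 1. \]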

There are certain constraints on decision-making, which are summarized as follows (a consolidated formal sketch is given after this list).
(i) In [26], convergence is promoted by dividing the population into multiple clusters with a clustering strategy. To ensure the convergence of the computing queue here, the service rate should be higher than the request arrival rate, which ensures that booking tasks are performed quickly and improves user experience.
(ii) An edge server can forward at most its directly connected tasks, and the service arrival rate cannot be negative after adjustment.
(iii) Only internal demand is processed in the edge domain, and the remaining demand is handled by the cloud; therefore, the sum of the service arrival rate adjustments should be less than or equal to zero.
(iv) The total resources allocated to different services on the same edge server cannot exceed the resource limitation.
(v) The CPU frequency allocated to a service should be greater than or equal to the minimum threshold.
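In the notation of this section, these five constraints can be sketched as follows (a hedged formalization consistent with the definitions above, not a reproduction of the original inequalities):

\[ \mu_m \cdot ESf_{mn} > \delta_{mn} + e_{mn} \quad \forall m, n \text{ with } a_{mn} = 1, \quad \text{(i)} \]
\[ e_{mn} \ge -\delta_{mn} \quad \forall m, n, \quad \text{(ii)} \]
\[ \sum_{n} e_{mn} \le 0 \quad \forall m, \quad \text{(iii)} \]
\[ \sum_{m} a_{mn} \cdot ESf_{mn} \le C \quad \forall n, \quad \text{(iv)} \]
\[ ESf_{mn} \ge a_{mn} \cdot Th_{m} \quad \forall m, n. \quad \text{(v)} \]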

4. Joint Decision Algorithm

The intelligent scheduling problem supporting stadium sharing is a mixed-integer nonlinear program, and the global optimal solution cannot be found in polynomial time. In this study, a joint scheduling of service caching and task algorithm is proposed to divide the problem into two relatively easy subproblems: service caching and joint task scheduling. The service caching subproblem decides the decision variable amn; a greedy strategy is proposed to determine which services should be cached on each edge server. Joint task scheduling then further optimizes resource allocation and task scheduling on the basis of the determined service caching.

4.1. Service Caching Subproblem

The service caching subproblem is a zero-one programming problem that is tightly coupled with the subsequent subproblem. When the number of edge servers controlled by the controller is large, the time required by conventional solvers increases rapidly and the problem becomes difficult to solve. Therefore, this study presents a greedy strategy to determine the service caching on each edge server, which greatly reduces the solution time while avoiding excessive performance loss.

Caching selected services at the edge is essential so that more of the booking tasks that users offload can be performed at the edge rather than in the cloud. Therefore, the task arrival rate of a service is an important reference index. In addition, different tasks have different tolerances to latency, and tasks with lower tolerance and higher weight should be satisfied first. To balance these two points, the cache priority index is introduced, as shown in equation (13). On each edge server, services are ranked according to this priority index and allocated computing resources according to their minimum operating requirements until the server's resources are exhausted. For each service to which computing resources are allocated, the service caching decision variable amn is set to 1.
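One plausible form of such an index, combining the two factors named above (the product form is an assumption, standing in for equation (13)), is

\[ cpi_{mn} = w_{m} \cdot \delta_{mn}, \]

so that services that are both heavily loaded and latency-sensitive are cached first.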

4.2. Joint Scheduling of Service Caching and Task

Once the caching types are determined, there may be a resource surplus to be allocated among the services cached by the edge server. The allocation of computing resources affects the queue waiting time of each service. To optimize latency, the proportion of computing tasks executed at the edge versus the cloud must also be determined. Therefore, there are two decisions to be made. (i) The computing resources allocated to each service, determined by ESfmn. (ii) The proportion of tasks to be adjusted, determined by emn.

These two decisions cannot be made separately; the results must be obtained in a single optimization process. After the service caching decision variable amn is determined, the original objective function is transformed from a mixed-integer nonlinear programming problem into a general constrained nonlinear programming problem. The decision procedure, Algorithm 1, can be summarized as follows.

Input: the number of iterations iN
Output: amn, ESfmn, emn
Initialization: logarithmic barrier lb = 0.01
Initialization: amn = 0 for all m, n
for edge server n ∈ Q do
  for service m ∈ P do
   compute the service cache priority index cpimn according to equation (13)
  end for
  rank the services in descending order of cpimn
  resource surplus rs = C
  for service m in the ranked cache order do
   if rs ≥ Thm then
    amn = 1
    rs = rs − Thm
   end if
  end for
end for
initialize ESfmn and emn to a feasible point
for i = 1 → iN do
  update ESfmn and emn by solving the log-barrier relaxation of the joint scheduling problem
  decrease the barrier parameter lb
end for
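For concreteness, the greedy caching phase can be sketched in Python as follows. This is a minimal illustration rather than the study's implementation: the function name and data layout are hypothetical, the product form of the cache priority index (weight × arrival rate) stands in for equation (13), and the barrier-based joint scheduling phase, whose steps in the listing above are a standard interior-point reading of the initialized logarithmic barrier lb, is omitted.

```python
from typing import Dict, List, Tuple

def greedy_service_caching(
    delta: Dict[Tuple[int, int], float],   # delta[(m, n)]: arrival rate of service m at server n
    weight: Dict[int, float],              # weight[m]: latency-sensitivity weight of service m
    threshold: Dict[int, float],           # threshold[m]: minimum CPU frequency Th_m
    capacity: float,                       # C: total CPU frequency of each edge server
    services: List[int],
    servers: List[int],
) -> Dict[Tuple[int, int], int]:
    """Greedy caching phase of Algorithm 1 (a sketch under assumed data layout)."""
    a = {(m, n): 0 for m in services for n in servers}  # caching decision a_mn
    for n in servers:
        # Rank services by the assumed cache priority index, highest first.
        order = sorted(services, key=lambda m: weight[m] * delta[(m, n)], reverse=True)
        surplus = capacity  # rs = C
        for m in order:
            if surplus >= threshold[m]:  # cache only if the minimum allocation fits
                a[(m, n)] = 1
                surplus -= threshold[m]
    return a
```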

5. Experiment and Results Analysis

5.1. Performance Metrics

The algorithm in this study is proposed to reduce the latency of booking requests for shared stadiums and to improve user experience. The following performance metrics are used. (i) The average latency of users in the system, which is the direct optimization target of the algorithm. (ii) The quality of experience (QoE) satisfaction rate, which reflects whether taking the average latency as the optimization goal is reasonable. The simulation parameter settings are shown in Table 2.

A QoE latency limit is set in each edge decision sample. When a user's latency is smaller than the QoE latency limit, the QoE requirement is considered to be met. The average latency of each queue can be obtained through the queuing model, so this study directly takes whether the average latency is less than the QoE latency limit as the criterion for whether users' QoE is met. In related work on applying machine learning to networking, [27] explored the performance of reinforcement learning-based congestion control algorithms.
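This QoE criterion can be sketched in a few lines of Python (a hypothetical helper, assuming the per-queue average latencies from the queuing model are available as a list):

```python
def qoe_satisfaction_rate(avg_latencies, qoe_limit=0.5):
    """Fraction of queues whose average latency is below the QoE limit.

    avg_latencies: per-queue average latencies in seconds (e.g., from the
    M/M/1 model); qoe_limit: QoE latency bound in seconds.
    """
    met = sum(1 for latency in avg_latencies if latency < qoe_limit)
    return met / len(avg_latencies)
```

For example, qoe_satisfaction_rate([0.2, 0.4, 0.7], 0.5) returns 2/3, since two of the three queues meet the 0.5 s limit.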

Edge load refers to the computation offloading requests generated by users in the edge domain per unit time, that is, the arrival rate of requests. In this study, a benchmark value is designed for the load size, which expresses the edge load relative to the edge's capacity, so as to remove the influence of the number of service types p and the number of edge servers q. It is assumed that each edge server has a computing resource of Cf (such as the base frequency of the CPU).
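One plausible benchmark along these lines (the exact normalization used in the study is not shown; \( \bar{\mu} \), an average service coefficient, is introduced here for illustration) is

\[ \rho = \frac{\sum_{m,n} \delta_{mn}}{q \cdot \bar{\mu} \cdot C_f}, \]

that is, the total request arrival rate divided by the aggregate service rate that the q edge servers could sustain at full allocation.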

5.2. Result Analysis

Figure 1 compares the effect of performing or not performing edge cooperation in the edge domain. Three classical resource allocation and task scheduling algorithms from recent years are selected for comparison: MEC-based resource management and task scheduling (MEC-RMTS) [28], coalitional game-based cooperative offloading (CGCO) [29], and the multiservice task computing offload algorithm (MTCOA) [30]. The MEC-RMTS framework is used for efficient task offloading in the internet of things, CGCO is a cooperative offloading algorithm based on the coalitional game, and MTCOA solves the multiservice task offloading problem. The data in the figure are measured with p = 10 and q = 20. In the noncooperative case, each edge server makes its service caching and scheduling decisions independently.

As Figure 1 shows, the proposed algorithm with the edge cooperation idea achieves lower average system latency at lower load levels. This is because using idle edge servers to provide computing services for neighboring nodes improves the overall performance of the system. As the edge load increases, the average latency of the three baselines gradually increases, indicating that edge resources can no longer handle the load and more tasks must be transferred to the cloud. Edge cooperation improves the stress resistance of the edge system. Analysis of the results shows that the proposed algorithm with edge cooperation reduces latency by 70% on average compared with the algorithms without edge cooperation.

Figure 2 shows the proportion of computing tasks transferred to the cloud for execution, called the cloud offloading ratio. As Figure 2 shows, the proposed algorithm significantly reduces the cloud offloading ratio under low and middle pressure, thereby achieving lower average latency. Under heavy load, edge cooperation can no longer reduce latency and degenerates into the uncooperative mode.

Figure 3 shows the proportion of tasks that are transmitted but not executed locally in the above process. More network transmission may introduce more transmission overhead, but it is also more conducive to resource concentration. As Figure 3 shows, the proposed algorithm transfers more tasks a second time (secondary transfer) to reduce the pressure of local task execution.

Figure 4 shows the QoE satisfaction rate of users under different scheduling algorithms (with the latency limit set to 0.5 s). As Figure 4 shows, the proposed algorithm guarantees a better QoE satisfaction rate under low pressure. Regardless of the scheduling algorithm, users' QoE satisfaction rate decreases as the edge load pressure increases. However, in every load range, the proposed algorithm achieves better performance, which also verifies that it meets the design goal of improving the edge's resistance to peak stress.

6. Conclusions

National fitness is imperative, but a major problem encountered in its implementation is the lack of stadiums. Sharing stadium resources between colleges or high schools and society has become the direction of change under the current shortage of stadium resources, which can effectively alleviate the contradiction between growing fitness needs and limited social stadium resources. In this study, the characteristics of edge resource allocation and task scheduling are analyzed, and a joint scheduling of service caching and task algorithm is proposed. Based on the minimum resource allocation requirements of service caching, the decision-making problem of the edge network is modeled as a joint resource allocation and task scheduling model, and the resource allocation and task scheduling scheme is obtained in a single optimization process. Experimental results demonstrate that the proposed algorithm can effectively integrate and utilize edge resources and improve both latency and QoE. Currently, there are few studies on collaborative task scheduling between mobile devices and MEC servers.

The algorithm proposed in this study targets tasks that are not decomposable, that is, each computing task can only be performed entirely locally or entirely on the MEC server. However, many mobile applications can in practice be decomposed at a fine granularity. For such computing tasks, some subtasks can be offloaded to the MEC server for execution while others are executed locally. More fine-grained offloading decisions need to be formulated, which can further reduce task latency.

Data Availability

The data used to support the findings of the study are included within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Acknowledgments

This paper was supported by the Social Science Research Planning Project of Jilin Province Education Department (grant no. JJKH20210426SK).