Abstract

Edge computing is an emerging computing paradigm that enhances the computing capacity of edge devices close to the data source. As the key technology of edge computing, task offloading, which can improve the response speed and the stability of the network system, has attracted much attention and has been applied in many network scenarios. However, few studies have considered the application of task offloading in time-sensitive networking (TSN), a promising technology with the potential to guarantee data delivery with bounded latency and low jitter. To this end, we establish a task offloading stream transmission model for TSN based on queueing theory. With this model, the average response time can be obtained by quantitative calculation. Then, we introduce the backward method to construct a utility function and formulate an exact potential game to model the task offloading competition among edge devices, considering the minimization of the average response time of all tasks. Furthermore, a distributed and sequential decision-making algorithm for multitask offloading (DSDA-MO) is proposed to find the Nash equilibrium. Through numerical studies, we evaluate the algorithm performance as well as the benefit of the multitask offloading mechanism. The results reveal that, through the proposed game theoretic approach, we can obtain the optimal multitask offloading strategy, which significantly reduces the task computation delay in TSN, within a finite number of rounds of calculation.

1. Introduction

With the advent of the era of Industry 4.0, related technologies, such as artificial intelligence (AI), cyber-physical systems (CPSs), and the industrial internet of things (IIoT), have been deeply studied and developed [1, 2]. In many cases, such as when industrial equipment and systems share the communication channel in an industrial field network, data packet delivery requires bounded latency and low jitter, which is known as deterministic real-time communication. Traditional network technology can no longer meet the requirements of deterministic real-time transmission of industrial data, let alone support the mixed transmission of periodic control data and emergency data in many complex industrial scenarios. To overcome this challenge, the IEEE promotes a new real-time communication protocol standard, IEEE 802.1 Time-Sensitive Networking (TSN) [3, 4], which evolved from the IEEE 802.1 audio video bridging (AVB) series of standards. Utilizing network-wide clock synchronization, the time-aware shaper (TAS), traffic scheduling, and other mechanisms, TSN realizes the shared network transmission of time-sensitive streams and non-real-time streams within the local area network.

During the development of industrial network technology, industry and academia have found that the traditional centralized cloud computing model cannot satisfy the requirement of real-time computing in the industrial field. Therefore, a new computing paradigm, named edge computing (EC), has emerged to address the challenge of processing data from the network edge in real time. EC provides an architecture for migrating computation power from the remote cloud to a local position close to the data source by deploying edge computing nodes (ECNs) or edge computing servers (ECSs) near the edge devices [5–8].

Since TSN can reduce the delay of data transmission and EC can reduce the time cost of data computation, the combination of TSN and EC technologies can decrease the overall delay of the industrial system. The core principles of EC are computing resource allocation and computing service provision. A common application of EC is task offloading from edge devices to an ECN/ECS, i.e., tasks generated in edge devices are delivered to the ECN/ECS for remote computing [9]. Recently, several studies have investigated computing task offloading strategies for EC in many network environments [10–12]. However, to the best of our knowledge, few studies have considered their application in TSN. We are the first to study the computing task offloading strategy in TSN.

The TSN standard comprises a series of protocol clusters, of which IEEE 802.1Qbv [13] is now widely applied in industrial scenarios. The core of IEEE 802.1Qbv is the principle of time-triggered communication controlled by the gate control list (GCL) according to the traffic priority. The GCL is usually generated in advance of data delivery in TSN according to the transmission requirements of the given application. Generally, edge devices are ignorant of the details of the GCL. Hence, it is challenging to evaluate the task offloading cost and design the multitask offloading strategy, i.e., to decide whether each task should be computed locally or offloaded to an ECN/ECS. For scenarios with similarly uncertain transmission conditions, Li [14] applied queueing theory to model and evaluate the average delay encountered in task offloading and proposed a strategy for adjusting the arrival rates of tasks offloaded from user equipment (UE) to different mobile edge computing (MEC) [15] servers based on the average delay. Motivated by these ideas, this paper treats each edge device as an M/G/1 queueing system continuously generating multiple tasks. Moreover, it is assumed that the hybrid stream, composed of the task offloading substreams and the original TSN substreams with the same priority, injects into an M/G/1 queue with server breakdown in the TSN switch. On this basis, this paper proposes a system model and designs a strategy decision-making algorithm based on potential game theory. The technical contributions of this article can be summarized as follows:

(1) This paper establishes a task offloading stream transmission model by treating each edge device as an M/G/1 queueing system continuously generating multiple tasks and treating the TSN switch as a set of M/G/1 queueing systems with server breakdown. Through the proposed model, the average response time of tasks generated on each edge device can be obtained through quantitative calculation, and the optimal multitask offloading strategy of the edge device can be studied mathematically and rigorously.

(2) This paper constructs a utility function with the backward method and formulates an exact potential game to model the task offloading competition among edge devices, considering the minimization of the average response time of all tasks.

(3) To effectively find the Nash equilibrium, a distributed and sequential decision-making algorithm for multitask offloading (DSDA-MO) is proposed, and its performance is studied in simulation experiments.

The remainder of this paper is structured as follows: in Section 2, the related work is presented. In Section 3, we describe the system model. In Section 4, we introduce the game formulating the competition among edge devices for edge computing resources and propose the DSDA-MO algorithm to search for the Nash equilibrium. In Section 5, numerical results are presented, and the algorithm is evaluated. Finally, the paper is concluded in Section 6.

2. Related Work

As the key technology of EC, computation offloading refers to offloading tasks generated on terminal devices to an ECN/ECS based on rational offloading decisions and resource allocation strategies [16]. Utilizing the computation offloading mechanism, terminal devices (also named edge devices) with limited computing resources can achieve lower response time and energy consumption by executing tasks on an ECN/ECS. There are three main goals for task offloading decision-making: shortening the task computing time, reducing the device energy consumption, and jointly optimizing the weighted sum of task computing time and energy consumption.

Over the past few years, task offloading in MEC has received increasing attention. Several studies jointly model the computing mode decision problem and the resource allocation problem. Liu et al. [17] propose an efficient one-dimensional search algorithm to find the optimal task scheduling strategy. Their approach uses a Markov chain to analyze the average delay and energy consumption of each task on the mobile device according to the queueing state of the task buffer, the execution state of the local processing unit, and the state of the transmission unit, and it establishes a mathematical model of the delay minimization problem under power constraints. Chen and Hao [18] study the task offloading problem in ultradense networks from the perspective of software-defined networking (SDN) and model it as an NP-hard mixed integer nonlinear programming (MINLP) problem. In reference [19], mobile devices offload computing tasks to multiple ECSs and download results from the MEC servers in a preset time slot; by jointly optimizing task scheduling and resource allocation, the computational delay of tasks is minimized.

Recently, many studies have proposed approaches to deal with multitask offloading situations [20–23]. Among these, the author of reference [20] constructs a multitask scheduling scheme for multicore mobile devices to balance the execution cost and energy consumption; the author of reference [21] addresses the tradeoff between the energy consumption and time cost of a single task with different offloadable components; and the author of reference [22] proposes a decision-making strategy for the multitask offloading game and constructs the Nash equilibrium. In addition, many researchers apply game theoretic approaches to computation offloading and resource allocation with time-cost minimization as the optimization objective, by treating multiuser computation offloading strategy-making as a noncooperative game [24–29].

Overall, much of the literature focuses on modeling the task offloading decision-making mode and has proposed many effective algorithms to solve the optimal offloading strategy problem for EC in mobile networks. However, few studies have considered TSN scenarios. To the best of our knowledge, this is the first study of the task offloading problem for EC in the TSN environment. In TSN, tasks are generated in a queue in each edge device, and the transmission packages are cached in the queues of TSN switches for delivery controlled by the GCL. Therefore, utilizing queueing theory to evaluate the task offloading time cost is an appropriate choice. The author of reference [30] uses M/M/1 queueing theory to model the task offloading competition as a noncooperative game in a three-tier architecture consisting of mobile nodes, cloudlets, and cloud servers. Li [14] establishes an M/G/1 queueing model for the UEs and an M/G/m queueing model for the MEC servers and formulates a noncooperative game to study the stabilization of a competitive mobile edge computing environment. Inspired by these studies, we model the multitask offloading competition in TSN as a potential game by using queueing theory.

3. System Model

This section studies a TSN system comprising multiple TSN end nodes, one TSN switch, M edge devices denoted by the set , and N ECSs denoted by the set . The multitask offloading in the TSN system is shown in Figure 1, and the notations and definitions in the proposed system model are listed in Table 1. We consider that each edge device has a queueing system to process the continuously generated tasks. A task in can be further described as , where is the task body size (including the code and data) and is the processing density in cycles per bit, i.e., the number of cycles required to process a unit bit of the task body. The task processing time in the edge device can be calculated as , where is the processing frequency of the given edge device in cycles per second (e.g., MIPS). Usually, is treated as a constant according to the task application type and follows an arbitrary probability distribution; thus, the task computing time is also an arbitrary random variable. We assume that in each edge device, the task interarrival time follows an exponential distribution. Therefore, each edge device can be treated as an M/G/1 queueing system.

Benefiting from edge computing technology, edge devices can choose tasks to be offloaded to nearby ECSs to achieve faster computation or lower power consumption than local computing. The tasks on are divided into two types of M/G/1 substreams: the unoffloadable computing task substream and the offloadable computing task substream. The latter includes local computing tasks and remote computing tasks offloaded to ECSs. Correspondingly, the whole task arrival rate of can be formulated as , where is the arrival rate of the unoffloadable tasks, is the arrival rate of the offloadable tasks executed locally, and is the arrival rate of the tasks to be offloaded to the ECS . The arrival rate distribution vector (for ) represents the computation offloading strategy of . Similarly, there are two types of task body sizes, denoted as and , corresponding to the aforementioned unoffloadable and offloadable task substreams, respectively; these variables are independent and identically distributed (i.i.d.) random variables with an arbitrary probability distribution. Since most devices generally work on periodic duty in TSN, the statistical distribution of the attributes of the tasks in the edge devices can be obtained via long-term statistics. Herein, we assume that the expected values of and , i.e., and , and their second moments, and , are available.
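As a concrete illustration, the per-task service-time moments that the M/G/1 model requires can be estimated from the task-body-size distribution by long-term sampling. The following Python sketch assumes a constant processing density and a uniform body-size distribution purely for illustration; all names and parameter values are hypothetical and not taken from this paper's experiments.

```python
import random

# Hypothetical parameters (illustrative only, not the paper's settings):
F_LOCAL_HZ = 1.0e9   # edge-device processing frequency in cycles/s
RHO_DENSITY = 1000.0 # processing density in cycles/bit, treated as constant


def processing_time(body_size_bits: float) -> float:
    """Local processing time of one task: t = density * body / frequency."""
    return RHO_DENSITY * body_size_bits / F_LOCAL_HZ


def moments_of_service_time(sampler, n=200_000):
    """Estimate E[T] and E[T^2] of the service time for an arbitrary
    body-size distribution, as the M/G/1 model assumes."""
    total, total_sq = 0.0, 0.0
    for _ in range(n):
        t = processing_time(sampler())
        total += t
        total_sq += t * t
    return total / n, total_sq / n


# Example: task bodies uniformly distributed between 100 kb and 900 kb,
# so the service time is uniform on [0.1 s, 0.9 s].
random.seed(0)
mean_t, second_t = moments_of_service_time(lambda: random.uniform(1e5, 9e5))
```

For a uniform service time on [0.1, 0.9] the exact moments are E[T] = 0.5 and E[T^2] ≈ 0.303, so the sampled estimates should land close to those values.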

Each task generated in an edge device has two choices: local computing or remote computing in one of the ECSs. We discuss the two cases of the local computing and the remote computing approaches here.

3.1. Local Computing

The local computing task stream is composed of the unoffloadable task substream and the offloadable local computing task substream; the arrival rates of these substreams are given by and , respectively. Let denote the tasks’ local computing time in ; then, the average time for local computing is as follows:

The second moment of can be obtained as follows:

Furthermore, according to the Pollaczek–Khintchine formula [31], the average waiting time in the queue is given by the following expression:

The average response time for all local computing tasks on is then as follows:

If all tasks are executed locally, i.e., , the average response time for all tasks in is as follows:
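The local-computing response time above follows directly from the Pollaczek–Khintchine formula. Below is a minimal Python sketch of that calculation; the arrival rate and service-time moments are assumed values used only for illustration.

```python
def pk_waiting_time(lam, es, es2):
    """Pollaczek-Khintchine mean waiting time for an M/G/1 queue:
    W = lam * E[S^2] / (2 * (1 - lam * E[S])), valid when lam * E[S] < 1."""
    rho = lam * es
    assert rho < 1.0, "queue is unstable"
    return lam * es2 / (2.0 * (1.0 - rho))


def local_response_time(lam_local, es, es2):
    """Average response time = mean waiting time + mean service time."""
    return pk_waiting_time(lam_local, es, es2) + es


# Illustrative (assumed) numbers: local arrival rate 2 tasks/s,
# service time with E[S] = 0.2 s and E[S^2] = 0.08 s^2.
t = local_response_time(2.0, 0.2, 0.08)
```

With these numbers the utilization is 0.4, the waiting time is 0.16/1.2 ≈ 0.133 s, and the response time is ≈ 0.333 s.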

3.2. Remote Computing in ECS

In the remote computing scenario, a task generated on an edge device is delivered to an ECS via TSN. To simplify the model, we assume that tasks generated on edge devices pass through only one TSN switch to reach the ECSs and that the whole TSN network conforms to the IEEE 802.1Qbv standard. As shown in Figure 1, each TSN device injects its TSN data stream into the relevant ingress port of the TSN switch. Then, the switching fabric of the switch redirects the data stream to the proper output port according to the TSN data frame destination. To provide latency and jitter guarantees, before being transported to a specific output port that is connected to , the TSN stream must pass through a priority filter, be reshaped in eight priority queues, , and finally be ejected from the egress port according to the frame priority and GCL state without conflict; this process is known as the time-aware shaper (TAS) mechanism. In the computing task offloading scenario, the hybrid stream composed of task offloading substreams and original TSN substreams is transmitted over the network. In the IEEE 802.1Q standard, the stream types and priorities are defined in Table 2. To reduce the impact on original TSN data transmission, we set the priority of all the task offloading substreams to p (e.g., , excellent effort traffic), i.e., all the task offloading streams enter the queue , for all , .

First, we ignore the priority queue delivery interruption caused by the TAS. Herein, we make three assumptions. First, we assume that before the task offloading substreams were generated, the queue was in agreement with the M/G/1 queue model. Second, we assume that the original TSN substream of priority , whose egress port is the same as that of the task offloading substreams to , arrives according to a Poisson process with a rate of , and that the length of each package (composed of many TSN frames) follows a general distribution. Third, we assume that and are available, and we set the TSN switch egress port sending speed as . Thus, the service time, , of the original TSN substream in is an i.i.d. random variable with mean and second moment . When the task offloading stream from edge device to arrives at queue , it is considered a single Poisson substream with arrival rate . Its service time is also an i.i.d. random variable with mean and second moment . The compound stream, composed of the original TSN data Poisson substream and the task offloading Poisson substreams from all devices to , is still a Poisson stream, whose arrival rate is . Let the service time of each substream in queue be denoted by ; then, the whole service time of the compound stream is an i.i.d. random variable whose mean and second moment are given by the following two expressions:

According to the IEEE 802.1Qbv standard, the transmission of queue may be interrupted, i.e., queue service breakdown may occur, due to the GCL state or higher priority queue transmission requirements. We assume that the queue system of remains in a stable state before the task offloading substreams reach the switch and that the transmission server breaks down at an exponential rate , i.e., the probability that the queue will be able to serve for an additional time t without breaking down is . After the transmission server breaks down, the queue system stops sending data for a random time, denoted as , with a general distribution; subsequently, the server resumes from the point at which it broke down. We assume that, based on long-term statistics of the stable original TSN stream queue system, the mean time and the second moment are known. When the task offloading streams arrive at , as the GCL and the higher priority queues are unchanged and only the customer arrival rate increases in , we suppose that the server of queue still breaks down at the exponential rate , and the pause time still follows the previous general distribution.

Here, we discuss the average time a customer spends in queue with server breakdown. Let denote the random variable “cost time,” which includes the service time and the pause time caused by server breakdowns, let k denote the number of times the server breaks down, and let denote the amount of time the server waits in each breakdown period. Then, it can be found that

According to the Pollaczek–Khintchine formula [31], the average waiting time before a customer begins to be served is calculated as follows:

To obtain and , we assume that a customer in requires service time ; thus, we can obtain the following expressions:

As the number of times the server is interrupted while a customer is in service is a Poisson random variable with mean , the sum is also a compound Poisson variable with mean , when . Consequently, it can be seen that

Using equations (9)–(13), it can be found that

Furthermore, we see that

The second moment can then be obtained as follows:

Finally, substituting equations (16) and (18) into equation (9) and assuming that the inequality is satisfied, the average waiting time can be written as follows:

Thus, the average time that the members of the hybrid stream spend in queue is calculated as follows:

Suppose that when the task offloading streams reach , immediately provides parallel computing services, offering CPU resources with frequency for each offloaded task. To guarantee that the offloading computing service requirement does not exceed the maximum capacity of , the parameter is used to denote the maximum available offloading task arrival rate from all edge devices to , i.e., . In , the average computation time for tasks offloaded from is . Considering that the data transmission speed on the TSN wire is high (e.g., 1000 Mbps) and the volume of the offloaded task’s computation result is typically small (perhaps a few bytes), the time for delivering the computation result back can be neglected. Moreover, since the time spent by the task offloading stream in the switch is mostly spent in , the average remote computing time for all tasks offloaded from edge device to can be written as follows:

Finally, the average response time for all the tasks generated in edge device can be obtained as follows:
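The derivation above rests on the standard M/G/1-with-server-breakdown result: a customer's completion time is its service time plus a compound-Poisson sum of pause times, and the Pollaczek–Khintchine formula is then applied to the completion-time moments. The following Python sketch implements equations of this form; all parameter values are assumed for illustration and do not come from this paper's experiments.

```python
def completion_moments(es, es2, alpha, ex, ex2):
    """Moments of the completion time C = S + X_1 + ... + X_K, where the
    server breaks down at exponential rate alpha during service (so K is
    Poisson with mean alpha*S) and each pause lasts X:
      E[C]   = E[S] * (1 + alpha * E[X])
      E[C^2] = E[S^2] * (1 + alpha * E[X])**2 + alpha * E[S] * E[X^2]"""
    ec = es * (1.0 + alpha * ex)
    ec2 = es2 * (1.0 + alpha * ex) ** 2 + alpha * es * ex2
    return ec, ec2


def time_in_queue_with_breakdown(lam, es, es2, alpha, ex, ex2):
    """P-K waiting time computed on completion-time moments, plus one
    completion time, giving the average time spent in the queue."""
    ec, ec2 = completion_moments(es, es2, alpha, ex, ex2)
    rho = lam * ec
    assert rho < 1.0, "queue with breakdowns is unstable"
    w = lam * ec2 / (2.0 * (1.0 - rho))
    return w + ec


# Illustrative (assumed) values: arrival rate 1/s, E[S] = 0.1, E[S^2] = 0.02,
# breakdown rate alpha = 0.5/s, pause moments E[X] = 0.2, E[X^2] = 0.08.
t = time_in_queue_with_breakdown(1.0, 0.1, 0.02, 0.5, 0.2, 0.08)
```

With these numbers E[C] = 0.11 and E[C^2] = 0.0282, so the average time in the queue evaluates to roughly 0.126 s.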

4. Problem Formulation and Algorithm

4.1. Game Formulation

In the TSN computation offloading scenario, each edge device is independent and selfish, aiming to utilize as many ECS computing resources as possible to reduce its own response time; this is a suitable situation for game theory to model the competition process among all edge devices. Considering the edge device set as the game players, the arrival rate allocation policy set as the strategies of player (where ), and the Cartesian product of all individual strategy sets, i.e., , as the strategy space, we define the multitask offloading game as follows:

Definition 1. Multitask offloading game: the multitask offloading game, , which is composed of the player set , the offloading strategy space , and the utility function set , is represented as follows:

We expect that, within a limited number of rounds of decision-making, the players can reach a certain consensus, i.e., a state in which each player benefits most. Thus, we now introduce the Nash equilibrium as follows:

Definition 2. Nash equilibrium [33]: the strategy profile is a Nash equilibrium of the multitask offloading game if and only if the following condition holds, where denotes the offloading strategies of all computing devices except . This means that there is no strategy more beneficial than under the utility function. The challenge in fully defining the game formulation (equation (23)) is to find an appropriate utility function that allows all players to maximize the benefits from their own perspectives and achieve the Nash equilibrium. Here, we adopt the exact potential game theory, as defined below, to design the utility function.

Definition 3. Exact potential game [34]: the multitask offloading game, , is an exact potential game if and only if there exists a potential function that satisfies the following condition.

As the existence of a Nash equilibrium is a fundamental property of exact potential games, we construct a potential function, F, and a utility function, , to ensure that the multitask offloading game, , is an exact potential game, using the backward method [34]. We let denote the whole-system benefit achieved via task offloading computing compared with the situation in which all tasks are executed locally. can be represented in terms of computation time, as a greater benefit is equivalent to a lower time cost. Thus, we see that

Subsequently, considering equations (1)–(22), a decomposition can be obtained as follows:

Then, we can obtain the utility function of the potential game with , where is a noncontributing term from the perspective of player . Thus, the utility function of the exact potential game is given as follows:
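The exact-potential property of Definition 3 can be checked numerically on a toy game. The sketch below uses a classic two-player congestion game with the Rosenthal potential, not this paper's queueing-based utility, purely to illustrate that a unilateral deviation changes a player's cost by exactly the change in the potential.

```python
from itertools import product

# Toy 2-player congestion game on two servers (a classic exact potential
# game; illustrative only). A server's delay grows with its load: d(n) = n.


def delay(n):
    return float(n)


def cost(i, profile):
    """Player i's cost = delay of the server it chose, given everyone's
    choices in `profile` (a tuple of server indices)."""
    load = sum(1 for s in profile if s == profile[i])
    return delay(load)


def potential(profile):
    """Rosenthal potential: for each server, sum d(1) + ... + d(load)."""
    total = 0.0
    for srv in set(profile):
        load = sum(1 for s in profile if s == srv)
        total += sum(delay(k) for k in range(1, load + 1))
    return total


# Exact-potential property: for every profile and every unilateral
# deviation, the change in the deviator's cost equals the change in F.
ok = True
for profile in product([0, 1], repeat=2):
    for i in (0, 1):
        dev = list(profile)
        dev[i] = 1 - dev[i]
        dev = tuple(dev)
        d_cost = cost(i, dev) - cost(i, profile)
        d_pot = potential(dev) - potential(profile)
        if abs(d_cost - d_pot) > 1e-12:
            ok = False
```

The check exhaustively enumerates all profiles and deviations, which is exactly the defining condition of an exact potential game.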

4.2. The Algorithm

Based on the utility function , the best response strategy of each edge device is described as follows.

Definition 4. Best response strategy: let represent the strategies of all the edge devices except device ; the best response strategy of is given by equations (29a)–(29e). Equation (29a) represents the choice of to pursue the minimal time cost. Constraint (29b) guarantees that the decision on the task arrival rate allocation meets the total arrival rate requirement of each edge device. Constraint (29c) ensures that the workload of the ECSs providing remote computing services does not exceed the upper limit of their capacity. Constraints (29d) and (29e) guarantee that the queueing systems remain in a serviceable state.
According to the aforementioned system model and game formulation, the distributed and sequential decision-making algorithm for multitask offloading (DSDA-MO) is designed. A summary of this algorithm is given in Algorithm 1. After the strategies of each edge device are initialized, a token is passed among the edge devices, and the device in possession of the token calculates its best response strategy based on the decisions of all the other devices. The algorithm runs (lines 3–8) until , the mean square error between the strategy of the current round and that of the previous round, is less than the accuracy parameter (e.g., ). is defined as follows:

In each round of Algorithm 1, the problem defined by equations (29a)–(29e) can be solved via the Lagrange multiplier method with an iterative search in the feasible region, whose complexity is , where is the search interval. Thus, the complexity of Algorithm 1 is , where R is the number of rounds, mainly determined by the accuracy parameter in line 9.

Input: , , , , , , , , , , , , , , , and , for all
(1)Initialization: and for all and give the token to first edge device .
(2)Repeat:
(3)  Player waits for the token
(4)  Player collects and updates all the other edge devices’ decisions,
(5)  
(6)  Player calculates the best response strategy
    
     Subject to (29b)–(29e)
(7)   Player broadcasts to all the other edge devices
(8)  Player sends the token to the next node
(9)Until
(10)Output:
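The token-passing control flow of Algorithm 1 can be sketched as follows. This simplified Python version replaces the paper's M/G/1-with-breakdown cost and Lagrange-multiplier best response with a toy M/M/1-style delay and a grid search, and it uses a single shared ECS; all parameter values are assumed for illustration only.

```python
M = 4                       # number of edge devices (players)
LAM = [0.8, 1.0, 1.2, 1.4]  # offloadable arrival rate of each device
MU_LOCAL = 2.0              # local service rate of every device (assumed)
MU_ECS = 6.0                # service rate of the single shared ECS (assumed)
EPS = 1e-10                 # accuracy parameter for the MSE stopping rule


def device_cost(i, off):
    """Average per-task delay seen by device i when it offloads off[i]
    of its load: weighted local M/M/1 delay + shared-ECS M/M/1 delay."""
    local = LAM[i] - off[i]
    ecs_load = sum(off)
    if local >= MU_LOCAL or ecs_load >= MU_ECS:
        return float("inf")
    t_local = (local / LAM[i]) * (1.0 / (MU_LOCAL - local))
    t_ecs = (off[i] / LAM[i]) * (1.0 / (MU_ECS - ecs_load))
    return t_local + t_ecs


def best_response(i, off, grid=400):
    """Grid search over how much of LAM[i] device i offloads, holding
    the other devices' decisions fixed (stand-in for equations (29a)-(29e))."""
    best_x, best_c = off[i], device_cost(i, off)
    for k in range(grid + 1):
        x = LAM[i] * k / grid
        trial = off[:i] + [x] + off[i + 1:]
        c = device_cost(i, trial)
        if c < best_c:
            best_x, best_c = x, c
    return best_x


off = [0.0] * M             # initialization: every task computed locally
for _ in range(300):
    prev = off[:]
    for i in range(M):      # the "token" visits each player in turn
        off[i] = best_response(i, off)
    mse = sum((a - b) ** 2 for a, b in zip(off, prev)) / M
    if mse < EPS:           # stopping rule on the strategy change
        break
```

Sequential best responses on a fixed grid settle quickly here; the loop exits once a full round of token passing leaves every strategy (essentially) unchanged.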

5. Numerical Results

In this section, the results of numerical experiments are reported to evaluate the proposed system model and the DSDA-MO algorithm. In the simulation setup, we consider M edge devices (, i = 1, 2, …, M) and N ECSs (, j = 1, 2, …, N) with the following parameters: , , , , , , , , , , , , , , 0.8 + 0.1(i − 1) GHz,  GHz, and , for all , . The simulation is constructed on the MATLAB platform.

5.1. Nash Equilibrium of the Game

First, we set M = 6 and N = 5 to evaluate the results of the Nash equilibrium, i.e., the final optimal strategy for . Figure 2 shows the curves describing the utility of the edge devices at each round of the DSDA-MO algorithm. The utilities fluctuate considerably in the initial rounds of calculation and then enter a stable zone with low variation. Figure 3 shows the average time cost at each round and illustrates that the average time cost of each device remains near a stable value after about 200 rounds. The results in Figures 2 and 3 verify that the game reaches the approximate Nash equilibrium after a finite number of rounds of calculation in the proposed algorithm.

To further evaluate the speed of convergence to the Nash equilibrium, we list the strategies, i.e., and , from rounds 30 to 90 in Tables 3 and 4 and give the curves of the strategies at each round (as shown in Figure 4), using the edge devices and as examples. The results in Tables 3 and 4 and Figure 4 show that changes significantly in each round at the early stage until about round 200; from this point on, the optimal strategy remains in a stable state, which is consistent with the findings presented in Figures 2 and 3. As discussed earlier for Algorithm 1, the number of rounds needed to reach the approximate Nash equilibrium mainly depends on the accuracy parameter . Table 5 presents the relationship between the round number and the value of . It shows that the round number increases with the accuracy requirement; that is to say, a smaller value of leads to a greater round number. We can select the appropriate parameters according to the application demand, striking a balance between the Nash equilibrium accuracy and the algorithm running time.

5.2. Benefit of the Game

In this subsection, we focus on analyzing the benefit that edge devices obtain from the proposed game. Let the accuracy parameter ; after 220 rounds of calculation in Algorithm 1, we obtain the final optimal strategy profile . Figure 5 presents the arrival rate of the unoffloadable tasks (a known parameter), the sum of the optimal arrival rates of the tasks offloaded from to all ECSs (i.e., , ), and the optimal arrival rate of the offloadable local computing tasks (i.e., ) in the edge devices. It is found that increases as the arrival rate of all tasks generated on each edge device increases with in our experiment settings. In other words, the number of offloaded tasks increases as the computation workload of the edge devices rises. Figure 6 compares the average time under the decision of computing all tasks locally with the average time under the optimal strategy profile obtained by Algorithm 1. Compared with computing all tasks locally, our proposed algorithm brings significant time savings for each device, with obtaining a 13.03% reduction in time cost.

To study the profitability of the proposed approach, we define the task offloading proportion , as follows, where is the sum of the arrival rates of the tasks offloaded to from , and is the sum of the arrival rates of all offloadable tasks on . A larger indicates that obtains greater benefit from task offloading. Figure 7 illustrates that the task offloading proportions of increase in order as their computation workloads increase accordingly, and the task offloading proportion increment of reaches more than 25%.

We also study the workload of the edge computing servers in the game. Figure 8 illustrates the final workload of the target ECS set , i.e., , and the final decision of how many tasks on each edge device are assigned to each ECS, indicated by the arrival rate from all edge devices to . From Figure 8, we can see that although the processing frequencies of the members of set increase successively in our experiment settings, the sum of the task arrival rates from the edge devices to the ECSs, i.e., , descends gradually. In other words, the ECSs gradually lose their attraction for the tasks offloaded from the edge devices even as the processing frequency increases successively. This is because each offloaded task needs to consider both the computation time and the transmission time and makes a decision based on their sum. Although a high processing frequency of the target ECS leads to a low computation time for the offloaded task, the sum of the task arrival rates may still be small due to a possibly high transmission time cost. In our experimental settings, the busy degree of the TSN links to is assumed to increase successively. That is to say, the average interrupted time of the TSN output queues corresponding to each target ECS, i.e., , increases in order. This experimental setting results in an increase, in order, in the transmission time of the same task to . Hence, the number of tasks selected to be offloaded to the ECSs, namely, the task arrival rates to the ECSs, gradually decreases in Figure 8. In particular, almost none of the edge devices here decided to offload their tasks to .

To analyze the influence of the number of edge devices, M, and the number of ECSs, N, on the task offloading decision-making result, we construct an experiment with , , and . Figure 9 shows the average response time of the tasks in the edge devices for different M and N, which reflects the impact on the performance of the entire edge computing system. The average response time of the edge computing system increases as M increases when N is fixed. Conversely, the average response time decreases as N increases when M is fixed.

6. Conclusion

In this paper, we adopt queueing theory to establish a time-cost model for task offloading stream transmission to resolve the problem of task offloading decision-making for edge computing in TSN. Meanwhile, we construct a utility function with the backward method to formulate the task offloading competition among edge devices as a game model. To find the Nash equilibrium of the proposed game, we design a distributed and sequential decision-making algorithm for multitask offloading. Finally, the existence of the Nash equilibrium, the speed of convergence to the Nash equilibrium, and the relationship between the number of calculation rounds and the accuracy parameter are investigated through numerical experiments. Furthermore, to study the benefit of the proposed game theoretic approach, we analyze and compare the optimal task arrival rate strategy, the average time cost, and the task offloading proportion among the edge devices. The experimental results demonstrate that, by utilizing the proposed model and approach, the TSN edge devices can obtain the optimal multitask offloading strategies within a finite number of rounds of calculation, which significantly reduces the average time cost of task computation.

7. Future Work

In future work, we will study the scenario in which tasks are generated in the queues of edge devices and processed in the queues of ECSs, involving a multilevel queueing system. The goal is to develop efficient algorithms for each edge device to find the best response in games, considering the influence on TSN transmission. In addition, we will jointly allocate the task arrival rates and computing resources (such as computing memory and storage space) to further promote the application of edge computing technology in TSN.

Data Availability

The data used to support the findings of the study are available within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the R&D Project of China under Grant No. 2018YFB1700200 and the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant No. KJQN202000611.