Abstract

Various emerging vehicular applications such as autonomous driving and safety early warning improve traffic safety and ensure passenger comfort. These applications generate enormous numbers of latency-sensitive/non-latency-sensitive and computation-intensive tasks, which require significant computational resources. It is hard for vehicles to satisfy the computation requirements of these applications due to the limited computational capability of the on-board computer. To solve this problem, many works have proposed efficient task offloading schemes in computing paradigms such as mobile fog computing (MFC) for the vehicular network. In the MFC, vehicles adopt the IEEE 802.11p protocol to transmit tasks. According to IEEE 802.11p, tasks can be divided into high priority and low priority based on their delay requirements. However, no existing task offloading work takes into account the different priorities of tasks transmitted by different access categories (ACs) of IEEE 802.11p. In this paper, we propose an efficient task offloading strategy to maximize the long-term expected system reward in terms of reducing the executing time of tasks. Specifically, we jointly consider the impact of the priorities of tasks transmitted by different ACs, the mobility of vehicles, and the arrival/departure of computing tasks, and then transform the offloading problem into a semi-Markov decision process (SMDP) model. Afterwards, we adopt the relative value iteration algorithm to solve the SMDP model and find the optimal task offloading strategy. Finally, we evaluate the performance of the proposed scheme by extensive experiments. Numerical results indicate that the proposed offloading strategy performs well compared to the greedy algorithm.

1. Introduction

Intelligent connected vehicles can improve traffic safety and ensure passenger comfort by supporting various applications such as autonomous driving, safety early warning, natural language processing, advertisements, and entertainment in the vehicular environment [1–4]. These applications consist of enormous numbers of latency-sensitive/non-latency-sensitive and computation-intensive tasks [5]. However, the computational capability of vehicles is limited, which makes it difficult to support latency-sensitive applications. Vehicular fog computing (VFC) has emerged as an efficient approach to tackle this issue in the vehicular network, where computational resources are pushed to the network edge to satisfy the requirements of latency-sensitive tasks [6, 7]. Nevertheless, the computational resources of the VFC system are not sufficient because the number of tasks generated by vehicular applications is huge. Therefore, it is critical to propose a new computing paradigm for the vehicular network.

Mobile fog computing (MFC) has been proposed as an efficient computing paradigm for the vehicular network, which extends the computational capability of the VFC system by jointly working with the remote cloud. In the MFC system, we consider each vehicle as a computational resource unit (RU) with the same computational capability [8], and the vehicles adopt the IEEE 802.11p protocol to communicate with each other [9]. 802.11p employs the enhanced distributed channel access (EDCA) mechanism to provide different levels of quality-of-service (QoS) support. Specifically, the EDCA mechanism defines different access categories (ACs) with different priorities to transmit different data traffic [10]. Since the vehicles in the MFC system generate enormous numbers of latency-sensitive/non-latency-sensitive and computation-intensive tasks with different delay requirements, we consider the latency-sensitive and computation-intensive tasks to be of high priority and transmitted by a higher priority AC to obtain a higher level of QoS, while the non-latency-sensitive tasks are of low priority and assigned to a lower priority AC. When a vehicle, i.e., a service requester, generates a high priority task, it can offload the task to other vehicles or to the remote cloud, i.e., the task is accepted by the vehicular fog or transmitted to the remote cloud. Since the QoS requirement of a low priority task is not stringent, a service requester can only offload low priority tasks to other vehicles in the vehicular fog; otherwise, such tasks are rejected by the system. Once a task is accepted by the vehicular fog, the system needs to determine how many RUs to assign in order to obtain the maximal long-term expected reward. Note that the main goal of the MFC system is to reduce the executing time of tasks [11].
The MFC system in this paper includes the features proposed in the previous literature: (1) vehicles arrive at and depart from the system; and (2) computing tasks arrive at and depart from the system, so that the number of available resources in the system is variable. In addition, the MFC system has a unique feature, i.e., it considers the different priorities of tasks transmitted by different ACs of the 802.11p EDCA mechanism, which makes it challenging to find an optimal task offloading strategy that maximizes the long-term expected system reward.

To the best of our knowledge, although there are extensive studies on task offloading schemes in the MFC system for the vehicular network, no existing literature considers that tasks with different delay requirements are transmitted by different ACs of the 802.11p EDCA mechanism, which poses a significant challenge for constructing a model to find the optimal task offloading policy. It is therefore necessary to propose an optimal offloading policy that considers the high/low priority tasks transmitted by different ACs, which motivates this work.

In this paper, we consider the different priorities of tasks transmitted by different ACs and propose an optimal task offloading policy for the MFC system. The main contributions of this paper are summarized as follows. (1) We propose an offloading strategy to obtain the maximal long-term expected reward in the MFC system for the vehicular network while jointly considering the impact of the computation requirements of tasks, vehicle mobility, and the arrival/departure of high/low priority tasks. Specifically, we transform the task offloading process into a semi-Markov decision process (SMDP) model in which the system state set, action set, state transition probabilities, and system reward function are all defined. To solve the problem efficiently, we adopt the relative value iteration algorithm to find the optimal offloading strategy for the MFC system. (2) To demonstrate the performance of the proposed optimal strategy, we perform extensive experiments for our proposed strategy and the greedy algorithm (GA) under the same conditions and obtain numerical results. The results indicate that the performance of our proposed strategy is significantly improved compared to the GA method.

The rest of this paper is organized as follows. Section 2 discusses the related work on the task offloading strategy in the vehicular network. The MFC system is described in Section 3. We construct the SMDP model to formulate the task offloading problem in Section 4. The relative value iteration algorithm is introduced in Section 5. Section 6 provides the numerical results and the corresponding performance analysis. The conclusion of this paper is given in Section 7.

2. Related Work

The computing paradigms VFC and MFC are widely applied to the vehicular network. In this section, we first review related works on task offloading in the VFC system, and then works on offloading in the MFC system are discussed.

2.1. Task Offloading in the VFC System

Zhu et al. [6] considered fog node capacity, constraints on service latency, and quality loss to propose a solution for task allocation in the VFC system. They first formulated the task allocation process as a joint biobjective optimization problem. To solve it, they proposed an event-triggered dynamic task allocation framework based on linear programming and binary particle swarm optimization. Wu et al. [11] considered the transmission delay caused by 802.11p and proposed a task offloading strategy for the VFC system. They first transformed the task offloading problem into an SMDP model, and then adopted an iterative algorithm to solve the SMDP and attain the optimal strategy. Zhou et al. [12] first proposed an efficient incentive mechanism based on contract theoretical modelling, tailored to the unique characteristics of each vehicle type, to motivate vehicles to share their resources. Then, they formulated the task assignment problem as a two-sided matching problem, which is solved by a pricing-based stable matching algorithm to minimize the network delay. Zhao et al. [13] proposed a contract-based incentive mechanism which combines resource contribution and utilization. Then, they adopted distributed deep reinforcement learning (DRL) to reduce implementation complexity in the VFC system. Finally, they proposed a task offloading scheme based on a queuing model to avoid task offloading conflicts. Lin et al. [14] proposed a resource allocation management scheme to reduce the servicing time. They first introduced a serving model, and then built a VFC system utility model based on it, which is solved by a two-step method: they first obtained the suboptimal solutions based on a Lagrangian algorithm, and then provided an optimal solution selection process. Xie et al. [15] jointly considered the effect of vehicle mobility and time-varying computation capability and proposed an effective resource-aware parallel offloading policy for the VFC system.

2.2. Task Offloading in the MFC System

Ning et al. [4] developed an energy-efficient task offloading scheme. Specifically, they first formulated an optimization problem to minimize energy consumption while jointly considering load balance and delay constraints. Then, they divided the optimization problem into two stages, i.e., flow redirection and offloading decision. Finally, they adopted the Edmonds-Karp and deep reinforcement learning-based Minimizing Energy Consumption algorithms to solve the optimization problem. Zheng et al. [8] considered the variability of resources and proposed an optimal computation resource allocation strategy to maximize the long-term expected reward of the MFC system in terms of power and processing time. Specifically, they first transformed the optimization problem into an SMDP, and then adopted an iteration algorithm to find the optimal scheme. Lin et al. [16] took heterogeneous vehicles and roadside units into account and formulated the resource allocation problem as an SMDP model, which they then solved with a proposed method. Zhao et al. [17] first converted a collaborative computation offloading problem into a constrained optimization by jointly optimizing the computation offloading decision and computation resource allocation. They then adopted a collaborative computation offloading and resource allocation optimization scheme to solve the optimization problem. Wu et al. [18] considered that vehicles which are processing tasks may leave the system and proposed a task offloading scheme to maximize the long-term system reward. Specifically, they first formulated the offloading problem as an infinite-horizon SMDP. Then, they employed the value iteration algorithm to tackle this problem. Wang et al. [19] jointly considered the heterogeneous delay requirements of vehicular applications and the variable computation resources to propose an efficient offloading policy. They first introduced a priority queuing system to model the MFC system, and then found an application-aware offloading policy through an SMDP. Liu et al. [20] developed an offloading strategy for the MFC system to minimize the task offloading delay, which consists of transmission delay, computational delay, waiting delay, and handover delay. Specifically, they first established a task offloading delay model, and then developed pricing-based one-to-one and one-to-many matching algorithms to obtain the offloading strategy.

From the above discussion, we can find that no existing work considers the different priorities of tasks transmitted by different ACs of 802.11p, which is the motivation of our work.

3. System Model

In this section, the MFC system is first described in detail, and then we introduce how vehicles employ the IEEE 802.11p EDCA mechanism to transmit high/low priority computing tasks.

3.1. System Description

The MFC system jointly considers the impact of task priorities, vehicle mobility, and the arrival/departure of computing tasks. The scenario is shown in Figure 1; it consists of both the vehicular fog and the remote cloud. Once a high priority task is generated, the system makes a decision to assign it to the desirable computing entity, i.e., offloading the task to the vehicular fog or transmitting it to the remote cloud. Since tasks with low priority are not latency-sensitive, the system is more likely to execute them in the vehicular fog or drop them. In addition, if a task is received by the vehicular fog, the system further determines how many available vehicles, i.e., available RUs, are assigned to execute it. A simple example is shown in Figure 1: a task from a vehicle arrives at the system, and two RUs are assigned to handle it. Afterwards, the service requester receives the computing results from the RUs. We assume that the MFC system has a fixed maximal number of computational resources and that each vehicle provides the same computational service rate for high/low priority tasks. Vehicles move into and depart from the MFC system according to Poisson processes, and the arrivals of high and low priority computing tasks also follow Poisson processes.

3.2. Data Transmission of 802.11p EDCA Mechanism

The 802.11p EDCA mechanism defines four ACs with different priorities to transmit different types of tasks. Each AC queue employs its own parameters, i.e., the arbitration interframe space number (AIFSN), the minimum contention window, and the maximum contention window [21]. Given the minimum and maximum contention windows of an AC, the maximal number of times its contention window can be doubled is expressed by Equation (1). In this paper, high priority tasks are transmitted by a higher priority AC queue and low priority tasks by a lower priority AC queue in broadcast mode. Similar to most related works [22–25], we assume that the channel is ideal. The transmitting procedure of tasks in the MFC system is described as follows. When an AC queue in a vehicle has a task to transmit and the channel is idle for the duration of the arbitration interframe space (AIFS), a backoff process is initiated: a random value between zero and the minimum contention window is selected as the value of the backoff counter. Then, if the channel stays idle for one slot, the backoff counter is decreased by one; otherwise, the backoff counter is frozen until the channel is again idle for the duration of AIFS. When the value of the backoff counter reaches zero, the packet is transmitted. If the two ACs in a vehicle attempt to transmit simultaneously, an internal collision happens. In this case, the high priority task is transmitted, while the low priority task is retransmitted with a new backoff procedure with a doubled contention window. If the number of retransmissions exceeds the retransmission limit, the task is dropped.
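The backoff and retransmission behaviour described above can be illustrated with a small simulation. This is a simplified sketch, not the full 802.11p state machine: the collision probability `p_collision` is treated as a fixed external input rather than being derived from channel contention, and the function name and contention-window-doubling rule are our own illustrative choices.

```python
import random

def edca_backoff(cw_min, cw_max, retry_limit, p_collision, rng):
    """Simplified EDCA backoff/retransmission sketch.

    Returns (attempts, total_backoff_slots) for a successful transmission,
    or None if the task is dropped after exceeding the retry limit.
    p_collision is a fixed, externally supplied collision probability.
    """
    cw = cw_min
    total_slots = 0
    for attempt in range(retry_limit + 1):
        # draw the backoff counter uniformly from [0, CW]
        total_slots += rng.randint(0, cw)
        if rng.random() >= p_collision:
            return attempt + 1, total_slots      # transmission succeeded
        # collision: double the contention window, capped at cw_max
        cw = min(2 * (cw + 1) - 1, cw_max)
    return None                                  # retry limit exceeded: drop
```

With `p_collision = 0` the task always succeeds on the first attempt after a single backoff; with `p_collision = 1` it is always dropped after the retry limit, mirroring the drop rule above.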

4. SMDP Model

In this section, the task offloading process is transformed into an SMDP model. To clarify the problem, we define the system state set, action set, state transition probabilities, and system reward function of the system.

4.1. System State Set

The system state includes the number of vehicles, the numbers of high/low priority computing tasks being executed by different numbers of RUs, and the event. Following the notation used below, A_1 means that a service requester generates a high priority computing task, i.e., the arrival of a high priority computing task; A_2 means that a service requester generates a low priority computing task, i.e., the arrival of a low priority computing task; D_{i,j} means that the task with priority i processed by j RUs departs from the MFC, i.e., the completion of a task with priority i processed by j RUs; and the two remaining events denote a vehicle moving into the MFC system, i.e., the arrival of a vehicle, and an available vehicle leaving the system, i.e., the departure of a vehicle. Thus, the system state set can be expressed in terms of the number of tasks with priority i processed by j RUs, where j ranges up to the maximal number of RUs for processing a task. Clearly, the number of busy RUs must not exceed the total number of vehicles in the system.
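The state description above can be made concrete by enumerating feasible states. The sketch below is illustrative only: the parameter names (`K_max` for the maximal RUs per task, `N_res` for the total number of RUs) and the tuple layout are our own, and the event labels are supplied by the caller.

```python
from itertools import product

def feasible_states(K_max, N_res, events):
    """Enumerate feasible system states.

    A state is ((hi, lo), e): hi[k-1] / lo[k-1] count high/low priority tasks
    currently served by k RUs (k = 1..K_max), and e is the pending event.
    Feasibility requires the number of busy RUs not to exceed N_res.
    """
    states = []
    for s in product(range(N_res + 1), repeat=2 * K_max):
        hi, lo = s[:K_max], s[K_max:]
        busy = sum((k + 1) * (hi[k] + lo[k]) for k in range(K_max))
        if busy <= N_res:                 # busy RUs cannot exceed the total
            for e in events:
                states.append(((hi, lo), e))
    return states
```

Even for small parameters the constraint prunes the state space: with one RU and `K_max = 1`, only three task configurations are feasible per event.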

4.2. Action Set

The action is related to the current system state and reflects the decision of the MFC system under the current event. The action indicates one of the following: the system takes no action when the event is the completion of a task or a vehicle moving into or departing from the system; the system transmits a high priority computing task to the remote cloud, or rejects a low priority task due to lack of computational resources; or a certain number of RUs are assigned to execute a task. Thus, the action set can be expressed as

4.3. State Transition Probabilities

In the SMDP model, the next state depends on the current state and action, and the state transition probabilities represent the relationship between the current state and the next state. Thus, a state transition probability is defined as the ratio between the arrival rate of the next event and the sum of the arrival rates of all events. Given the current state and action, we consider the transition probability from the current state to the next state after taking the action, together with the sum of the arrival rates of all events. To model the state transition probabilities, the arrival rate of the next event should be discussed first; it differs according to the current event and action. The detailed procedure is as follows.
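The defining ratio, each candidate next-event rate over the sum of all event rates, can be written down directly. The dictionary keys below are arbitrary event labels; the rates themselves would come from the state- and action-dependent analysis that follows.

```python
def transition_probs(event_rates):
    """State transition probabilities as the ratio of each candidate
    next-event rate to the sum of the rates of all events."""
    total = sum(event_rates.values())
    return {event: rate / total for event, rate in event_rates.items()}
```

By construction the returned probabilities are nonnegative and sum to one, as any valid transition distribution must.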

When a service requester generates a high priority computing task and the MFC system transmits it to the remote cloud, the arrival rate of the high priority arrival event is its task arrival rate. The number of busy RUs does not change when the task is executed in the remote cloud, so the arrival rate of each completion event, i.e., the completion of a task with a given priority processed by a given number of RUs, is the corresponding service rate, and the arrival rates of the vehicle arrival and departure events are the corresponding mobility rates. If a high/low priority task is generated and received by the vehicular fog, a number of RUs are assigned to execute it. In this case, when the next event is the arrival of a computing task, a vehicle moving into the system, or a vehicle departing from the system, the arrival rate of the next event is the corresponding arrival or departure rate. Let E denote the next event. When the next event is the completion of a task with priority i processed by j RUs, its arrival rate is analyzed in the following cases:

(1) e = A_i, a(x) = j, E = D_{i,j}: the service requester generates a computing task with priority i, and it is executed by j RUs. Thus, the number of tasks with priority i handled by j RUs increases, and the arrival rate of event D_{i,j} increases accordingly.

(2) e = A_i, …: a task with the given priority processed by the given number of RUs is accomplished. The arrival rate of the completion event is the corresponding service rate.

(3) e = A_i, …: the task with the given priority processed by the given number of RUs is accomplished. The arrival rate of the completion event is the corresponding service rate.

In conclusion, given the current state and action, the transition probabilities can be expressed as Equations (4) and (5).

Similarly, when the current event is the completion of a task with a given priority processed by a given number of RUs, a vehicle moving into the system, or a vehicle departing from the system, the state transition probabilities can be calculated by Equations (6)–(8), respectively.

The denominator, i.e., the sum of the arrival rates of all events, is given by Equation (9), i.e.,

4.4. System Reward Function

Given an action, the system is rewarded when the MFC system changes from the current state to the next state. The system reward function is expressed by Equation (10) in terms of the revenue of the system obtained by taking the action under the current state and the system cost incurred during the period between the two states. Next, we first discuss the revenue, and then the system cost is explained.

4.4.1. System Revenue

The main goal of the MFC system is to reduce the executing time of tasks. We consider the transmission delay from the vehicular fog to the remote cloud, the processing time of a task when the requester executes it locally, the transmission time from the requester to the vehicular fog, and the executing time of a task processed by a given number of RUs. Similar to the previous literature [26], we do not take into account the feedback time of the analytical results. Note that the remote cloud is equipped with powerful computational capability, and thus the processing time at the remote cloud is ignored. If a high priority computing task arrives at the system and the computation resources are not sufficient, the system transmits the task to the vehicular fog and then transfers it to the remote cloud; in this case, the revenue of the system depends on the price per unit of time. When a low priority task reaches the system and the available RUs are insufficient, the task is rejected and the system incurs a punishment parameter. If a high/low priority task is executed by the vehicular fog, the revenue is expressed accordingly. If the event is the completion of a task or a vehicle moving into or leaving the system, the system takes no action and the revenue is zero. If a busy vehicle moves out of the MFC system, the execution of its tasks fails, and the system is punished with a penalty parameter. Thus, the revenue of the system under different events and actions is formulated by Equation (11), i.e.,

Since each vehicle provides the same computational service rate for high/low priority tasks, the executing time of a task processed by a given number of RUs can be expressed as
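Under the stated assumption of identical service rates, the executing time of a task served in parallel by k RUs is inversely proportional to k; a one-line sketch (the names `k` and `mu` are our own):

```python
def exec_time(k, mu):
    """Executing time of a task processed in parallel by k RUs, each with
    computational service rate mu (tasks per unit time)."""
    if k < 1 or mu <= 0:
        raise ValueError("need k >= 1 and mu > 0")
    return 1.0 / (k * mu)
```

Doubling the number of assigned RUs halves the executing time, which is why the system prefers to assign more RUs when resources are abundant.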

Tasks with different priorities are transmitted by different ACs with different transmission delays. Similar to the work in [27], we model the whole task transmission process as a z-domain linear model and adopt the probability generating function (PGF) method to obtain the transmission delay of the ACs. The PGF of the transmission delay can be expressed by Equation (13), i.e., in terms of the PGF of the average transmission time, the PGF of the backoff time of an AC at a given number of retransmissions, and the transmission probability of the AC.

The maximal contention window (CW) size of an AC at a given number of retransmissions can be calculated by Equation (14). Given the PGF of the average time for the backoff counter to decrease by one, the PGF of the backoff time can be calculated as in Equation (15).

Given the duration of a slot, the PGF of the backoff decrement time is expressed by Equation (16), which involves the time duration of AIFS and the backoff freezing probability, i.e., the probability that the requester vehicle senses other vehicles in the MFC system occupying the channel or other access categories attempting to transmit tasks. Since the lower priority AC needs to sense the channel for more slots than the higher priority AC, its freezing probability can be calculated accordingly, where the AIFS duration is obtained as follows.

Let SIFS be the time duration of short interframe spacing. The AIFS duration can be expressed by Equation (19) according to 802.11p.

Assuming that all high/low priority tasks have the same size, the average transmission time is given by Equation (20), which involves the header lengths of the physical and MAC layers, the propagation time, and the basic rate and data rate. Thus, the PGF of the average transmission time can be expressed by Equation (21), i.e.,

According to the Markov chain in [27], the transmission probability of an AC can be expressed by Equation (22) in terms of the utilization of the AC and the task arrival probability of the AC. Since the arrival rates of all computing tasks follow the Poisson distribution, the arrival probability can be calculated by Equation (23), i.e.,
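Because the arrivals are Poisson, the probability that at least one task arrives during an interval of length sigma is 1 − e^(−λσ), which is presumably the form behind Equation (23); a minimal sketch with our own variable names:

```python
import math

def arrival_prob(lam, sigma):
    """Probability that a Poisson stream with rate lam produces at least
    one arrival in an interval of length sigma: 1 - exp(-lam * sigma)."""
    return 1.0 - math.exp(-lam * sigma)
```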

After initializing the utilization, the backoff freezing probability and the transmission probability can be calculated according to Equations (17) and (22) through an iterative method. Substituting them into Equation (13), the PGF of the transmission time is obtained. Thus, the transmission time can be obtained by
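The coupled quantities (freezing probability and transmission probability) are obtained by iterating until convergence. The generic fixed-point routine below stands in for that procedure; the `update` callable is a placeholder for the composition of Equations (17) and (22), which we do not reproduce here.

```python
def fixed_point(update, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = update(x_n) until the change falls below tol.

    `update` stands in for the coupled freezing-probability and
    transmission-probability equations; any contractive update converges
    the same way.
    """
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

As a sanity check, iterating the Babylonian update x → (x + 2/x)/2 from x = 1 converges to the square root of 2.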

4.4.2. System Cost

Given the current state and action, the system cost is the cost caused by executing tasks during the period between the two states and is expressed by Equation (25), where the number of busy vehicles under the current event after taking the action can be calculated by Equation (26) and the corresponding expected time appears.

In this paper, we adopt the continuous-time discounted reward model in [28], with a continuous-time discount factor, i.e.,

Thus, the system revenue function can be rewritten as Equation (28).

5. Relative Value Iteration Algorithm

In this section, we adopt the relative value iteration algorithm to find the optimal task offloading strategy that maximizes the long-term expected reward in terms of reducing the executing time of tasks. Specifically, the relative value iteration algorithm is used to solve the Bellman optimality equation [29]. The Bellman equation is expressed by Equation (29), where the discount factor determines the impact of future rewards on the current state.

Since the continuous-time SMDP is hard to solve directly, while a discrete MDP can be solved by iterating the Bellman optimality equation, we transform the continuous-time SMDP into a discrete MDP by uniformizing the system revenue, the discount factor, and the state transition probabilities according to Equations (30)–(32). Substituting Equations (30)–(32) into Equation (29), the Bellman optimality equation can be written as
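Uniformization in the spirit of Equations (30)–(32) can be sketched for one (state, action) pair. This follows the standard SMDP-to-MDP transformation with a uniformization constant y ≥ β(s, a) and continuous-time discount factor α, with our own variable names; it is not a reproduction of the paper's exact algebra.

```python
def uniformize(r, p, beta, alpha, y):
    """Uniformize one (state, action) pair of a continuous-time SMDP.

    r     : expected reward of the pair
    p     : dict mapping next states to transition probabilities
    beta  : total event (sojourn) rate of the pair
    alpha : continuous-time discount factor
    y     : uniformization constant, y >= beta for every pair

    Returns the uniformized reward, the discrete discount factor, and the
    adjusted transition probabilities including a fictitious self-loop.
    """
    r_u = r * (beta + alpha) / (y + alpha)       # uniformized reward
    lam = y / (y + alpha)                        # discrete discount factor
    p_u = {s: (beta / y) * prob for s, prob in p.items()}
    p_u['self'] = p_u.get('self', 0.0) + (1.0 - beta / y)  # self-loop
    return r_u, lam, p_u
```

The fictitious self-loop absorbs the rate difference y − β, so that every state appears to evolve at the same uniform rate while the transition probabilities still sum to one.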

The detailed description of the relative value iterative algorithm to solve Equation (33) is presented in Algorithm 1.

1: For each state, initialize the state value. Set a small constant ε, and initialize the iteration counter.
2: For each state, calculate the updated state value by the Bellman optimality equation:
   
3: If the change in state value is below ε for all states, go to step 4; otherwise, increase the iteration counter and go to step 2.
4: For all states, calculate the optimal strategy by
   
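A compact, runnable version of Algorithm 1 on a small discrete MDP is sketched below. It is a generic relative value iteration (subtracting a reference state's value each sweep to keep the iterates bounded), not a reproduction of the paper's exact state space; `P[a][s][t]` and `R[a][s]` are our own encoding of the uniformized transition probabilities and one-step rewards.

```python
def relative_value_iteration(P, R, lam, eps=1e-8, max_iter=10000):
    """Relative value iteration for a discounted discrete MDP.

    P[a][s][t] : probability of moving from state s to t under action a
    R[a][s]    : one-step reward for action a in state s
    lam        : discrete discount factor in (0, 1)
    Returns (relative state values, greedy optimal policy).
    """
    n_s, n_a = len(R[0]), len(R)
    v = [0.0] * n_s
    for _ in range(max_iter):
        # Bellman optimality backup for every state
        q = [[R[a][s] + lam * sum(P[a][s][t] * v[t] for t in range(n_s))
              for a in range(n_a)] for s in range(n_s)]
        v_new = [max(row) for row in q]
        ref = v_new[0]                      # subtract reference state's value
        v_new = [x - ref for x in v_new]
        done = max(abs(v_new[s] - v[s]) for s in range(n_s)) < eps
        v = v_new
        if done:
            break
    policy = [max(range(n_a),
                  key=lambda a: R[a][s] + lam * sum(P[a][s][t] * v[t]
                                                    for t in range(n_s)))
              for s in range(n_s)]
    return v, policy
```

Subtracting a common reference value shifts every action's Q-value by the same amount (since the transition probabilities sum to one), so the greedy policy is unchanged while the iterates stay numerically bounded.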

6. Numerical Results and Analysis

In this section, we conduct extensive experiments in MATLAB 2010a and obtain numerical results to demonstrate the performance of the proposed offloading strategy through comparing it with the greedy algorithm. The GA method means that the system is always inclined to allocate as many RUs as possible to execute tasks. The considered scenario is shown in Figure 1. In the experiment, we first initialize the system state set, action set, state transition probabilities, and system reward according to Equations (2)–(10). Then, the relative value iteration algorithm is used to solve the Bellman optimality equation to obtain the optimal offloading strategy. Finally, we compare the proposed strategy with the GA method in terms of the long-term expected reward. We assume that the maximal number of RUs for executing a task is 2: the system may assign one RU or two RUs to execute a high priority task, and likewise one RU or two RUs to process a low priority task; alternatively, a high priority task is transmitted to the remote cloud, or a low priority task is rejected. For simplicity, the retransmission limit is set to 2. The parameters of 802.11p are adopted according to the IEEE 802.11p standard [30], and the main parameters of the experiment are shown in Table 1.

Figure 2 shows the transmission delay of high/low priority computing tasks as the maximal number of vehicles changes. We can see that, as the maximal number of vehicles increases, the transmission delay of tasks keeps increasing: with more vehicles in the system, the packet collision probability increases, thus increasing the transmission delay. In addition, we can find that the transmission delay of high priority tasks is lower than that of low priority tasks. This is because the contention window of the queue used to transmit high priority tasks is smaller than that of the queue for low priority tasks.

Figure 3 shows the action probabilities of the MFC system as the maximal number of vehicles in the MFC system changes. It can be seen that the probabilities of transmitting a high priority task to the remote cloud and of rejecting a low priority task become smaller as the maximal number of vehicles increases, which can be explained as follows. When the maximal number of vehicles increases, the computational resources become sufficient, so the system tends to execute tasks in the vehicular fog. Since the system is inclined to process tasks in the vehicular fog, the probabilities of assigning one or two RUs to high/low priority tasks become larger. As the number of vehicles increases further, the available resources are abundant and the system assigns as many RUs as possible to process tasks to maximize the system reward in terms of reducing the executing time of tasks. Therefore, the probabilities of assigning one RU decrease, while those of assigning two RUs continue to increase. Since the difference in transmission delay between high and low priority tasks is small and the computational rate for tasks of different priorities is the same, the difference in system revenue between the corresponding high and low priority actions is not obvious; thus, their probabilities are the same.

Figure 4 shows the long-term expected reward of the MFC system as the maximal number of vehicles in the MFC system changes. We can see that, as the number of vehicles increases, the proposed strategy achieves a significantly larger system reward than the greedy algorithm. This is because, when the available computing resources increase, more tasks can be processed by the vehicular fog. Moreover, the proposed offloading strategy outperforms the GA method in system reward because it considers the long-term reward when assigning RUs to execute tasks, whereas the GA method simply allocates as many RUs as possible to process tasks and does not take the long-term system reward into account.

Next, we compare the proposed task offloading strategies for high and low priority tasks, shown in Tables 2 and 3, respectively. Note that a blank entry in the tables indicates that the corresponding state does not exist. It can be seen that when the number of available RUs is larger than the maximal number of RUs for processing a task, the system allocates as many RUs as possible to execute high/low priority tasks to maximize the system reward. Moreover, when the number of available RUs is smaller than this maximum but larger than one, the system assigns one RU to process computing tasks. When the number of available resources is very small, the system adopts different actions for high and low priority tasks: it transmits the high priority task to the remote cloud to obtain the maximal long-term expected reward, and it either allocates one RU to process the low priority task or rejects it.

7. Conclusions

In this paper, we developed a task offloading strategy for the MFC system to maximize the system reward in terms of reducing the processing time of tasks while considering the impact of the computation requirements of tasks transmitted by different ACs of the 802.11p EDCA mechanism, the mobility of vehicles, and the arrival/departure of high/low priority tasks. We first transformed the offloading problem into an SMDP model. Afterwards, the relative value iteration algorithm was used to solve the model and obtain the optimal strategy. Finally, we demonstrated the performance of the proposed scheme by comparing it with the GA method. In the future, we will consider the task queuing time and study the task offloading problem in vehicle platoons.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61701197, in part by the 111 Project under Grant No. B12018, in part by the Foundation for Jiangsu key Laboratory of Traffic and Transportation Security under Grant No. TTS2020-02, and in part by the Jiangsu Laboratory of Lake Environment Remote Sensing Technologies Open Fund under Grant No. JSLERS-2020-001.