Abstract

The introduction of mobile edge computing (MEC) into vehicular networks is a promising paradigm for improving vehicular services by offloading computation-intensive tasks to the MEC server. To avoid overload at the MEC server, the vast idle resources of parked vehicles (PVs) can be exploited to effectively relieve the server's computational burden. Furthermore, unbalanced load allocation can increase both latency and energy consumption. To address this problem, existing works typically allocate the workload between the MEC server and a single parked vehicle. In this paper, a multiple parked vehicle-assisted edge computing (MPVEC) paradigm is first introduced. A joint load balancing and offloading optimization problem is formulated to minimize the system cost under a delay constraint. To accomplish the offloading tasks, a multiple offloading node selection algorithm is proposed to select several appropriate PVs to collaborate with the MEC server in computing tasks. Furthermore, a workload allocation strategy based on a dynamic game is presented to optimize system performance while jointly considering the workload balance among computing nodes. Numerical results indicate that the offloading strategy in the MPVEC scheme can significantly reduce the system cost while achieving load balancing.

1. Introduction

As road traffic density continues to increase and traffic data explodes, the limited computing capacity of onboard terminals cannot meet the communication and computing demands of computation-intensive onboard applications [1, 2]. The introduction of mobile edge computing (MEC) has become an effective solution to the problem of resource scarcity in vehicular networks [3, 4]. MEC can enhance network resources and enable localized data processing by deploying computation and storage resources at the network edge, close to the users [5, 6]. Compared with remote cloud computing, it can use resource-rich servers at the roadside unit (RSU) to provide users with low-latency, high-bandwidth application services [7].

However, onboard applications such as augmented reality and autonomous driving place higher demands on data processing and storage capabilities and still require more available resources [8, 9]. When the number of offloading tasks is large, the MEC server, with its limited resources, becomes overloaded, resulting in less efficient task execution. Moreover, deploying massive numbers of MEC servers to augment vehicular network resources would entail huge economic and time costs, which is clearly not feasible. To address this problem, end devices with limited resources, such as vehicles and gateways, can serve as infrastructure to effectively extend the computation, communication, and storage capabilities of the edge server. For example, the idea of using vehicles as communication and computing infrastructure that collaborates with other edge devices to perform tasks was proposed in [10], which can meet the demand for considerable communication and computation capabilities. The computing power of mobile vehicles on the road has been used to assist in offloading tasks and speed up task execution [11-13]. However, as mobile vehicles are highly dynamic, it is difficult to guarantee the stability and reliability of task offloading.

Note that PVs with vast idle resources on the roadside and in parking lots can act as static network infrastructure to enhance the vehicular network [14, 15]. According to a survey, about 70% of vehicles are parked for more than 20 hours per day [16]. Moreover, most vehicles in the vehicular network are already equipped with sensors, wireless devices, and onboard units, which allows them to establish stable and reliable wireless communication and consequently form a vehicular ad hoc network (VANET) [17, 18].

Existing studies of vehicular edge networks mainly consider the data interaction between driving vehicles and roadside units, which generally leads to low utilization of PV resources. Motivated by these problems, a multiple parked vehicle-assisted edge computing framework is proposed in this paper, in which multiple parked vehicles assist the edge server in performing offloading tasks. The total system cost is minimized under the constraint of maximum allowable delay while taking load balancing and offloading optimization into account.

The main contributions of this work are summarized as follows. (i) The system model of multiple parked vehicle-assisted edge computing is analyzed, and the joint load balancing and offloading optimization problem is formulated to minimize the total system cost under the delay constraint. An offloading strategy comprising offloading node selection and workload allocation is proposed to solve the optimization problem. (ii) Considering the parking probability and resource availability of PVs, a multiple offloading node selection algorithm is adopted to select several candidate offloading nodes among the vehicles and the MEC server. (iii) Considering the sequential nature of offloading decisions and the resource consumption during task execution, an efficient workload allocation strategy based on a dynamic game is proposed to optimize system utility while considering load balancing.

The rest of this paper is organized as follows. In Section II, related works on task offloading in vehicular networks are introduced. The system model for multiple parked vehicle-assisted edge computing is described in detail in Section III. In Section IV, the workload allocation problem among multiple tasks and multiple computing nodes is modeled as a dynamic game process. In Section V, an efficient offloading strategy is proposed to solve the node selection and workload allocation problems. Simulation results for the proposed scheme and the related analysis for different cases are provided in Section VI. Finally, the paper is concluded in Section VII.

2. Related Works

In recent years, existing work in vehicular networks has mainly utilized MEC to provide offloading services. These research works can be divided into two main categories: one uses only MEC servers to handle task offloading requests, while the other uses remote clouds or vehicles to assist the MEC servers.

2.1. Vehicular Edge Computing

Research in the first category aims to solve the offloading problem in the vehicular network using only edge servers. The MEC server has substantial computing power and can provide a large amount of resources for offloading services. Existing work mainly focused on improving the efficiency of task execution and avoiding server overload by optimizing resource allocation. For example, a collaborative computation offloading and resource allocation optimization scheme, based on the scalable nature of tasks in driver assistance applications, was presented in [19]. In order to balance resource consumption and user experience with limited computing and spectrum resources, edge computing and social networking were combined in [20] to propose a new network system, vehicular social edge computing (VSEC); the quality of service and quality of experience of drivers were thus improved by optimizing the available network resources. Moreover, a multipath transmission workload balancing optimization scheme was investigated in [21], which uses multipath transport to support communication between vehicles and edge nodes. In [22], fiber-wireless (FiWi) technology was introduced to enhance the vehicular network, and an SDN-based load-balancing task offloading scheme was proposed to minimize the processing delay.

2.2. Collaborative Vehicular Edge Computing

The second category handles task offloading requests by using other infrastructure, such as remote clouds, UAVs, and vehicles, to collaborate with MEC servers.

2.2.1. Remote Cloud-Assisted Vehicular Edge Computing

Remote clouds are often introduced into edge computing to provide more offloading services. For instance, a two-tier offloading architecture for cloud-assisted MEC, which improves system utility and computational latency through collaborative computation offloading and resource allocation optimization, was discussed in [23]. A multilayer data flow processing system, EdgeFlow, was presented in [24] to integrally utilize the computing capacity of the whole network and to optimize the transmission resource allocation to minimize system latency. Furthermore, a cloud-based tiered vehicular edge computing offloading framework, which introduces nearby backup servers to make up for the limited computing resources of MEC servers, was presented in [25]; a game-theoretic algorithm was used to design the optimal multilevel offloading scheme to improve the utility of vehicles and servers.

2.2.2. Mobile Vehicle- or UAV-Assisted Vehicular Edge Computing

Moreover, many works have proposed task offloading solutions that use mobile vehicles or UAVs to assist MEC servers. In [26], a UAV-MEC system that uses the UAV as a computing node was investigated to improve the average user latency. In [27], a cooperative UAV-enabled MEC network structure was presented for collaborative UAV task offloading, in which the long-term utility was maximized by deep reinforcement learning-based algorithms. A distributed collaborative task offloading architecture that treats mobile vehicles as edge computing resources was discussed in [28] to guarantee low latency and application performance. In [29], a joint energy and latency cost minimization problem was formulated for vehicle-assisted task offloading, and a three-phase ECOS scheme was proposed to effectively solve the optimization problem.

2.2.3. Parked Vehicle-Assisted Vehicular Edge Computing

Task offloading via UAVs and mobile vehicles is highly dynamic and leads to communication discontinuity, making it highly unstable [30]. In contrast, vehicles parked on the roadside or in parking lots are relatively static and can provide a more stable and reliable task offloading service. Thus, other recent works introduce parked vehicles to extend edge computing capabilities. For instance, the idea of using PVs as static nodes to extend vehicular network resources, together with the concept of parked vehicle assistance (PVA), was proposed in [31, 32]. In addition, using PVs to assist edge servers in handling offloading tasks was presented in [33], by organizing PVs into parking clusters and abstracting them as virtual edge servers; the task offloading performance was effectively improved by a task scheduling strategy and an associated trajectory prediction model. In [34], a three-stage contract-Stackelberg offloading incentive mechanism was developed to optimize the system utility by making full use of the large amount of free resources in the parking lot; the computing resources were classified to provide different contracts, and the problem was solved using backward induction. In [35], the system task allocation was optimized under the collaborative vehicle edge computing (CVEC) framework by designing a contract-based incentive mechanism to schedule PVs for offloading tasks, and an optimal contract that maximizes utility under information asymmetry was formulated to optimize user utility.

The related works on parked vehicle-assisted vehicular networks discussed above rarely consider the load balance among computing nodes, or they allocate the load only between the MEC server and a single parked vehicle. In contrast, this paper presents an MPVEC framework in which multiple parked vehicles collaborate with the MEC server in executing offloading tasks. This distributed computing framework increases the computing capacity for task offloading and provides users with more efficient and flexible offloading options. To ensure reliable and stable task execution, a multiple offloading node selection algorithm based on parking behavior and resource availability is proposed to select multiple appropriate PVs to accomplish the offloading tasks. Considering that the resource states of the MEC server and PVs are time-varying during task execution, an efficient workload allocation strategy is developed to optimize system performance and maintain load balance.

3. System Model

3.1. Network Entities

In this section, the MPVEC system is described; its network entities are mainly composed of requesting vehicles, a service provider, a MEC server, and several PVs, as shown in Figure 1. The functions of the network entities in the system are described as follows.

Requesting vehicle: the requesting vehicle makes task offloading decisions based on the information provided by the service provider. Part of the task is processed locally by the requesting vehicle, and the other part is uploaded to the nearby RSU through vehicle-to-infrastructure (V2I) communication, which then distributes the workload to the corresponding edge nodes.

Service provider: based on the computational and storage capacity of the MEC server, the service provider can collect global information, including task information as well as the computational capacity and unit energy consumption of the requesting vehicles, MEC server, and PVs. Simultaneously, according to the offloading decision, it can dispatch the MEC server and PVs to execute the corresponding workload on demand.

MEC server: the MEC server is rich in computing resources and can provide offloading services for requesting vehicles. Task requests are transmitted to the MEC server for processing over a wired connection via the RSU.

Parked vehicle: RSUs are wired to each other, and wireless connections are established between the PVs and the RSU via V2I communication. PVs in the parking lot can use their idle computing resources to perform offloading tasks.

3.2. System Model

As shown in Figure 1, it is assumed that a one-way road is within the coverage of the RSU, and requesting vehicles move along the road. Each vehicle generates a computation task, which can be described as Di = {di, ci, ti^max}. Here, di denotes the data size of the task, ci the number of CPU cycles needed to execute the task, and ti^max the maximum allowable time delay of the task, respectively.

To ensure uninterrupted communication during task execution, the task offloading process needs to be completed before the vehicle leaves the RSU coverage area. Assuming that the length of the road section covered by the RSU is L, that the requesting vehicle moves along the one-way road at a constant speed vi, and that its position is a distance xi from the starting point of the road section, the maximum allowable time delay of the task can be represented as ti^max = (L - xi)/vi.
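As a minimal sketch, the delay bound above follows directly from the remaining distance to the edge of RSU coverage; the parameter names mirror the symbols L, xi, and vi assumed in the text.

```python
def max_allowable_delay(road_length, position, speed):
    """Maximum allowable task delay: time left before the requesting
    vehicle exits RSU coverage, i.e., (L - x_i) / v_i."""
    return (road_length - position) / speed

# Example: 500 m covered section, vehicle at 100 m moving at 20 m/s
t_max = max_allowable_delay(500.0, 100.0, 20.0)  # 20.0 s
```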

The task generated by a requesting vehicle can be executed locally or offloaded to edge computing nodes. The set J represents the edge computing nodes, where j = 1 represents the MEC server and j > 1 represents the PVs; the offloading part of a task can be offloaded to multiple edge computing nodes for parallel processing. The parameter xij represents the node selection variable: if the j-th edge computing node is selected to execute the i-th offloading task, xij = 1 is set; otherwise, xij = 0. Let kij denote the workload ratio of the i-th offloading task allocated to the j-th edge computing node, and let ki0 represent the unoffloaded ratio of the i-th task, which is executed locally. As the computing resources of the system are time-varying and the resource consumption cost of each node is different, it is a key challenge to balance the workload of each node while minimizing the whole system cost. The communication and computation models in the MPVEC framework are described in the following sections, and the remaining main parameters used in this paper are listed in Table 1.

3.3. Communication Model

For the convenience of analysis, it is assumed that the network topology and wireless channels remain unchanged during task execution. When a vehicle moves into the RSU coverage area, it can establish a V2I communication connection with the RSU based on IEEE 802.11p. If the task is partially or completely offloaded, the offloading part of the task is first transmitted to the RSU, and the MEC server, which has a wired connection to the RSU, computes the corresponding offloading tasks. Simultaneously, the remaining offloading part is forwarded to the RSU at the parking lot, which issues the task to the PVs for processing. Finally, the task execution result is returned. Since the size of the task execution result is small, its transmission delay and energy consumption can be ignored, and only the task distribution process is considered in this paper. (1) Requesting vehicle to RSU: when the i-th requesting vehicle that generates a computation task appears within the coverage of the RSU, the uplink transmission rate ri between the i-th requesting vehicle and the RSU can be expressed as

ri = B log2(1 + pi hi / σ²)

where pi is the transmission power of the i-th requesting vehicle, hi is the power gain between the i-th requesting vehicle and the RSU, and B and σ² are the channel bandwidth and the background noise power, respectively.

The transmission time and energy consumption of the uplink are related to the size of the task, which can be calculated by

where the uplink cost coefficient represents the cost of transmitting a unit volume of data on the uplink. (2) RSU to MEC server: since the connection between the RSU and the MEC server is wired, the transmission rate is relatively high, and the data transmission time and energy consumption are negligible. (3) RSU to PV: there is an RSU near the parking lot, which is connected to the roadside RSU by wire, so the corresponding transmission time and energy consumption can also be neglected. This RSU sends the received offloading task requests to the PVs for processing via V2I communication, and the downlink transmission rate rj between the RSU and the j-th PV is

rj = B log2(1 + pr hj / σ²)

where pr is the transmission power of the RSU, and hj is the power gain between the RSU and the j-th PV.

Furthermore, the offloading parts can be transmitted in parallel; thus, the data transmission time of the downlink is the maximum transmission time over the offloading parts, while the transmission energy consumption is the sum of the transmission energy consumption of each offloading part. They are defined as

where the downlink cost coefficient represents the cost of transmitting a unit volume of data on the downlink.
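The parallel transmission rule above (downlink time as a maximum over offloading parts, energy as a sum) can be sketched as follows; the argument names are illustrative assumptions, not the paper's notation.

```python
def downlink_time_energy(part_sizes, rates, rsu_tx_power):
    """Parallel downlink to PVs: transmission time is the maximum
    per-part time, energy is the sum of per-part energies (power * time)."""
    times = [size / rate for size, rate in zip(part_sizes, rates)]
    total_time = max(times)
    total_energy = sum(rsu_tx_power * t for t in times)
    return total_time, total_energy

# Two offloading parts of 1 Mb and 2 Mb at 1 Mb/s each, 0.5 W transmit power
t, e = downlink_time_energy([1e6, 2e6], [1e6, 1e6], 0.5)  # (2.0, 1.5)
```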

As a result, the total transmission time and energy consumption for delivering the offloading part of task Di to the edge computing nodes are expressed, respectively, as

3.4. Computation Model

(1) Compute the task locally: the unoffloaded part of the task is computed locally by the requesting vehicle. The corresponding delay and energy consumption are related to the number of CPU cycles ci required by the task and can be calculated by

where fi^loc is the computing capability of the i-th requesting vehicle, and the associated coefficient is the energy consumption required to compute a unit CPU cycle. (2) Compute the task at edge computing nodes: when the task is partially offloaded to the j-th (j ∈ J) edge computing node, the task processing delay is related to the computing capability of the edge computing node:

The computing resources fij of the j-th edge node are limited and change with time during task execution, which can be described as fij ≤ fij^max, where fij^max is the maximum computing power of the j-th edge node.

When the computing resource occupancy rate of an edge node in a certain period exceeds its own threshold rmax, the tasks at that node will not be processed in parallel; they need to be stored in a waiting queue and executed in sequence according to the delay constraint. Therefore, the task processing delay at the edge node mainly includes two parts, the task computation time and the task waiting time, which can be written as

Let ej represent the energy consumption required to compute a unit CPU cycle at the j-th edge node, where the ej of a PV is smaller than that of the MEC server.
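The two-part delay described above can be sketched as a simple function; the threshold test mirrors the occupancy check in Algorithm 1, and all parameter names are illustrative.

```python
def edge_processing_delay(cpu_cycles, node_capacity, occupancy, r_max, wait_time):
    """Edge-node processing delay: computation time (cycles / capacity),
    plus queueing time when the node's occupancy exceeds its threshold r_max."""
    delay = cpu_cycles / node_capacity
    if occupancy >= r_max:
        delay += wait_time
    return delay

# Overloaded node pays the queue wait; lightly loaded node does not
busy = edge_processing_delay(100.0, 50.0, 0.9, 0.8, 2.0)  # 4.0
idle = edge_processing_delay(100.0, 50.0, 0.5, 0.8, 2.0)  # 2.0
```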

The offloading part of the task request generated by the i-th requesting vehicle can be processed in parallel by multiple edge computing nodes. Therefore, the task offloading delay is mainly composed of the task transmission delay and the task processing delay, and the largest processing latency among the edge computing nodes is taken as the task offloading latency:

Task offloading energy consumption is the sum of the transmission energy consumption and the processing energy consumption of each offloading part:

3.5. Problem Formulation

For the task generated by the i-th requesting vehicle, the delay and energy consumption during processing mainly contain two parts: the local processing part and the offloading processing part. Due to the parallel processing of tasks, the total task processing latency is the maximum of the local processing latency and the task offloading latency, which can be expressed as

The total energy consumption of the task processing can be written as

A cost function for task Di is defined as a weighted combination of execution time and energy consumption:

where α and β are the weight coefficients. The goal of this paper is to minimize the total cost of all distributed tasks under the latency constraint, while jointly considering load balancing and the offloading decision. The optimization problem for all tasks can be formulated as

In the above optimization model, constraint (14) states that the workload ratio is a continuous variable that cannot exceed 1, and each requesting vehicle can offload its task to multiple computing nodes according to the workload ratios kij. Constraint (15) ensures that the task execution delay cannot exceed the maximum allowable task delay ti^max. Constraint (16) defines the offloading node selection variable: if the i-th requesting vehicle selects the j-th edge computing node to offload part of its task, then xij = 1; otherwise, xij = 0. Constraint (17) requires that the total offloading ratio of the i-th requesting vehicle cannot exceed 1, which makes the problem a mixed-integer nonlinear optimization problem. In constraint (18), α and β are the weights of time delay and energy consumption in the total cost, respectively, which can be dynamically adjusted according to the task type to meet the computing requirements of different tasks.
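A hedged sketch of the per-task cost just formulated: latency is the maximum of the parallel local and offloading branches, energy is their sum, and α and β weight the two terms. The function below assumes the caller supplies normalized weights.

```python
def task_cost(t_local, t_offload, e_local, e_offload, alpha, beta):
    """System cost of one task: weighted sum of total latency
    (max of the parallel branches) and total energy (their sum)."""
    latency = max(t_local, t_offload)
    energy = e_local + e_offload
    return alpha * latency + beta * energy

# Equal weights: latency branch max(2, 3) = 3, energy 1 + 1 = 2
u = task_cost(2.0, 3.0, 1.0, 1.0, 0.5, 0.5)  # 2.5
```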

4. Multitask Multinode Dynamic Game

In this section, the workload allocation for multiple tasks across multiple computing nodes is modeled as a dynamic game process. Since offloading decisions are sequential, a requesting vehicle that makes an earlier decision influences the requesting vehicles that decide later. Thus, to ensure the sequential rationality of the game process, each requesting vehicle is required to make its optimal decision so that the overall strategy of the system is optimal.

The service provider in the proposed framework can provide requesting vehicles with global information (including the available computing capability of edge nodes and task queuing sequence). To optimize the total system cost of task execution, sequential decisions for different requesting vehicles are made to offload part or all tasks to edge computing nodes. At the same time, both sides of the game complete distributed autonomous decision making in the game process, which can obtain the optimal utility and effectively relieve the computational pressure of the MEC server.

The dynamic game process with multiple tasks and multiple computing nodes is defined by its three elements, which are described as follows. (1) The players are the requesting vehicles that generate the tasks and make task offloading decisions. (2) The strategies are the task offloading decisions ki of the requesting vehicle players, where kij represents the workload proportion of task i performed by the j-th edge node. (3) The cost function ui represents the cost required for a requesting vehicle player to perform its task, including the task execution time and energy consumption.

Therefore, the offloading decisions of all requesting vehicle players other than the i-th player can be described as k−i, and the i-th player needs to choose a strategy ki to minimize its task execution time and energy consumption, which can be expressed as

Next, the existence of the Nash equilibrium point in the dynamic game process is discussed.

Definition 1. There exists a strategy set k* in the dynamic game such that no player can reduce its cost by unilaterally changing its own strategy; then the strategy set k* is the Nash equilibrium of the game. At the Nash equilibrium point, it is impossible for any player to change its strategy to obtain greater utility; that is, each requesting vehicle has made the optimal offloading decision to minimize its task execution cost.

Meanwhile, for the i-th requesting vehicle, in order to minimize its own cost, the optimal strategy needs to be obtained by solving the following problem.

The optimal strategy ki* can be obtained by solving the following formula:

It can easily be concluded that the problem for solving the optimal strategy is convex, so an optimal solution exists. Hence, there is an optimal strategy in the dynamic game process between multiple tasks and multiple computing nodes.

5. Efficient Workload Allocation Strategy

In this section, an efficient offloading strategy is proposed to minimize the total cost of task execution, jointly considering load balancing and offloading optimization. In the task scheduling problem formulated in this paper, the value of the offloading selection variable is 0 or 1, while the task offloading ratio can take any value between 0 and 1. Therefore, the optimization problem is a mixed-integer nonlinear optimization problem. We divide it into two subproblems, namely, offloading node selection and workload allocation.

5.1. Offloading Node Selection

Different from the reported works that mainly use a single computing node to collaborate with the MEC server in task processing, in this paper the task generated by the i-th requesting vehicle is decomposed into multiple subtasks, which are then offloaded to multiple edge computing nodes for joint execution. A multiple offloading node selection algorithm is designed to select several appropriate computing nodes for parallel task processing, as described in Algorithm 1.

It is assumed that the maximum computing powers of the i-th requesting vehicle and the j-th edge computing node are fixed, while the available computing resources of each node change dynamically during task execution: part of the computing resources is occupied during task execution and released after the task is completed. When the i-th requesting vehicle selects offloading nodes, the MEC server must be used as one of the offloading nodes to prevent overloading the PVs, considering that it can provide strong computing power. As a result, the remaining offloading nodes are selected among the PVs within the coverage area of the RSU.

The offloading nodes among the PVs are selected to minimize the system cost by evaluating the execution cost of subtasks. As shown in Algorithm 1, the computation task is first divided into several subtasks of equal size, and the workload ratio ω of a single offloading subtask is set to 0.1. Then, the cost increments required to process this subtask locally and at each edge node are calculated and compared according to equation (12), and the additional waiting time caused by resource consumption is also considered. A PV that satisfies the parking probability constraint is selected when its cost increment is lower than the local one. Simultaneously, if the j-th edge node is selected to execute part of the task, xij is set to 1, and the workload ratio and computing power are updated during task execution.

Input: Task Di = {di, ci, ti^max}; the offloading task workload ω; the computing power fi^loc, fij; the parking probability Pij.
Output: The node selection variable xij (j = 1 denotes the MEC server, j > 1 denotes a PV).
1: Initialization: xij = 0, kij = 0.
2: calculate the execution time Ti^loc and Tij, and the energy consumption Ei^loc and Eij
3: set Δui^loc = Ti^loc + Ei^loc
4: if kij ≥ rmax · fij^max then
5:  calculate Tij^wait
6:  set Δtij = Tij + Tij^wait
7: else
8:  set Δtij = Tij
9: end if
10: set Δeij = Eij
11: then calculate Δuij = Δtij + Δeij
12: if Pij ≥ Pth and Δuij < Δui^loc then
13:  set xij = 1
14:  update kij = kij + ω, fij = fij − ω
15: else
16:  set xij = 0
17: end if
18: return xij
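A minimal Python sketch of the selection test at the heart of Algorithm 1 (lines 12-17): a node is selected when its parking probability meets the threshold and its subtask cost increment beats the local one. The MEC server can be modeled with a parking probability of 1 so it is always eligible; all names are illustrative.

```python
def select_offload_nodes(local_cost_inc, node_cost_incs, park_probs, p_threshold):
    """Return x_j for each candidate node: 1 if P_j >= P_th and the
    node's cost increment is below the local cost increment, else 0."""
    return [1 if prob >= p_threshold and cost < local_cost_inc else 0
            for cost, prob in zip(node_cost_incs, park_probs)]

# MEC server (prob 1.0) is cheap enough; PV 1 too costly; PV 2 too unreliable
x = select_offload_nodes(5.0, [3.0, 6.0, 4.0], [1.0, 0.9, 0.5], 0.7)  # [1, 0, 0]
```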

During the task scheduling process, the i-th requesting vehicle evaluates the current resource availability of the j-th PV, which can be described by the probability Pij that the j-th PV remains in the parked state over the execution period of the i-th task [16]. Pij can be calculated as follows, where t indicates the parking duration, f(t) denotes the probability density function of the j-th PV's parking duration, and F(t) is the cumulative distribution function of t; Δt is the time span of the period, τ denotes the time interval at which the parking behavior of the j-th PV is detected, and the parameter a is a constant. The accumulated parking duration until now is also recorded. Thus, the probability that the PV will continue to stay parked for at least the required number of time slots can be predicted.
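As an illustration only, the stay probability can be evaluated under an assumed exponential parking-duration model (the actual distribution from [16] may differ): the probability of remaining parked for another stretch of time, given the time already parked, is the ratio of the survival functions.

```python
import math

def stay_probability(parked_so_far, extra_time, mean_duration):
    """P(stay >= parked_so_far + extra_time | stay >= parked_so_far)
    under an exponential parking-duration model; by memorylessness this
    reduces to exp(-extra_time / mean_duration)."""
    lam = 1.0 / mean_duration
    survival_now = math.exp(-lam * parked_so_far)
    survival_later = math.exp(-lam * (parked_so_far + extra_time))
    return survival_later / survival_now
```

With a 10-hour mean duration, a PV parked for 10 hours stays another 5 hours with probability exp(-0.5), about 0.61; the scheduler would compare this against the threshold Pth.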

When a PV stays parked for the required period with high probability, it can provide more stable and reliable resources for task execution, and the extra task retransmission overhead caused by PV departure can be effectively avoided. Thus, the probability that a PV remains parked during the task execution period is used as an important indicator of the availability of PV resources.

In Algorithm 1, the PVs satisfying Pij ≥ Pth can be selected as offloading nodes to execute the workload, where Pth is a predefined threshold for the PV selection process. Furthermore, when selecting a suitable PV as an offloading node, it is necessary to consider the computing power of the PV itself and its unit energy consumption. The PVs that satisfy both the parking probability constraint Pij ≥ Pth and the lower task execution cost Δuij < Δui^loc are selected as candidate offloading nodes. According to Algorithm 1, stable and reliable offloading nodes can be obtained to meet the task workload allocation requirements.

5.2. Workload Allocation

The workload allocation between multiple tasks and multiple nodes is modeled as a dynamic game process. Since the requesting vehicles can obtain global information from the service provider, the game can be regarded as a complete-information game. The requesting vehicles make sequential decisions based on priorities defined by the task delay constraints. Considering the resource consumption in the system, a workload allocation algorithm based on the dynamic game is proposed, and backward induction is used to assist the requesting vehicles in formulating their strategies. When all requesting vehicles have made their optimal decisions, the Nash equilibrium is reached and the game ends. The specific steps are described in Algorithm 2.

It is assumed that the task set generated by the requesting vehicles has been prioritized according to the delay constraints. Following the proposed algorithm, each requesting vehicle allocates its workload among the local node and the selected offloading nodes and makes its offloading decision in turn.

Specifically, the (i+1)-th requesting vehicle, which decides later, develops an offloading strategy based on the earlier decision of the i-th requesting vehicle and then feeds its strategy back to the i-th requesting vehicle. If the i-th requesting vehicle has a lower-cost strategy in this situation, its strategy is updated, and the later strategy changes accordingly. The iteration continues until neither the strategies nor the costs change, at which point the optimal strategies of vehicles i and i+1 are obtained. This step is repeated until all requesting vehicles obtain their optimal strategies, which minimizes the total system cost while jointly considering load balancing.

Input: The task set; the node selection variable xij; the offloading subtask size ω.
Output: The final workload strategy ki for each task, where j = 0 denotes local computing and j > 0 denotes an edge node.
Initialization: kij = 0, ui = 0.
Step 1. Workload allocation between nodes.
1: for each task Di do
2:  while ωi < ci do
3:   calculate the incremental cost of processing a unit-sized subtask locally
4:   if xij = 1 then
5:    calculate the incremental cost Δuij of node j
6:   end if
7:   compare the cost increments of all nodes
8:   allocate subtask ω to the node j with the smallest cost increment, then kij = kij + ω, ui = ui + Δuij
9:   set ωi = ωi + ω
10:  end while
11: set ki = {ki0, ki1, ki2, …, kij}
Step 2. Iterate to NE based on backward induction.
12: let i = i + 1 and repeat Step 1
13: calculate ki+1 = {ki+1,0, ki+1,1, ki+1,2, …, ki+1,j} and ui+1
14: send ki+1 to user i and repeat Step 1
15: calculate ki′, ui′
16: if ui′ < ui then
17:  update ki = ki′
18:  send ki to user i + 1 and repeat Step 1
19:  n = n + 1
20:  calculate ki+1, ui+1
21:  repeat Step 2 until the strategies and costs no longer change
22:  set the final strategies and costs
23: end if
24: return ki, ki+1, ui, ui+1
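The greedy per-unit allocation of Step 1 can be sketched as follows. The linear congestion cost model and parameter names here are illustrative assumptions, not the paper's exact cost functions: `unit_cost(j, load)` stands in for the incremental cost Δuij of placing one more subtask on node j given its current load.

```python
def allocate_workload(c_i, omega, nodes, unit_cost):
    """Greedy Step 1 (illustrative): repeatedly place a subtask of size
    omega on the node whose incremental cost is currently smallest,
    until the whole task (total workload c_i) is allocated."""
    load = {j: 0.0 for j in nodes}   # k_ij: workload placed on node j
    total_cost = 0.0                 # u_i: accumulated cost of task i
    allocated = 0.0                  # omega_i: workload allocated so far
    while allocated < c_i:
        # pick the node with the smallest incremental cost at its current load
        best = min(nodes, key=lambda j: unit_cost(j, load[j]))
        total_cost += unit_cost(best, load[best])
        load[best] += omega
        allocated += omega
    return load, total_cost
```

When the assumed per-unit cost grows with a node's current load, the greedy rule naturally spreads work across the cheap nodes instead of saturating one of them, which mirrors the load-balancing effect discussed in Section 6.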

6. Numerical Results

In this section, the performance of the proposed MPVEC scheme is evaluated through numerical results. By formulating an efficient offloading strategy, the requesting vehicles allocate their task requests to the local node and multiple edge computing nodes for joint execution.

6.1. Parameter Setting

We consider a unidirectional road section within the coverage of an RSU, and the RSU is equipped with a MEC server that provides offloading services. On the road section, there are [10–50] requesting vehicles driving at a constant speed, and each vehicle generates a delay-sensitive task request. In addition, there are [5–15] vehicles parked in a nearby parking lot that can provide idle resources. The data size of each task is specified in KB, the number of CPU cycles required for computing is [500, 1500] megacycles, and the maximum allowable task delay is related to the location of the moving vehicle. The location of the requesting vehicle is set to ensure that the vehicle can complete the task offloading process before leaving the RSU coverage area.

In addition, the probability density function δ(t) of the parking durations of the PVs follows the model in [16], and the parking probability can be calculated according to equation (23). In the simulation, a requesting vehicle chooses a PV as a candidate offloading node only if its parking probability is larger than 0.85. More simulation parameters are shown in Table 2.
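The 0.85 threshold filter amounts to a one-line predicate over the parking probabilities obtained from equation (23); δ(t) itself is not modeled here, and the PV identifiers and probability values below are hypothetical.

```python
PARKING_PROB_THRESHOLD = 0.85  # threshold value used in the simulation

def candidate_pvs(parking_probs):
    """parking_probs: mapping of PV id -> parking probability
    (as computed from equation (23) of the paper)."""
    return [pv for pv, p in parking_probs.items()
            if p > PARKING_PROB_THRESHOLD]
```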

6.2. Performance Comparison

In this section, the proposed offloading strategy in MPVEC is simulated, and the effectiveness and feasibility of the proposed scheme are evaluated under the same or different parameters and compared with the following schemes:
(1) All task requests generated by the requesting vehicles are computed locally (local computing, LC)
(2) Only one MEC server in the system provides offloading services, with no PV to assist in the computation; the workload is allocated between the local node and the MEC server (no parked vehicles, NP)
(3) The system has one MEC server and multiple PVs to provide offloading services, but each task can only be offloaded to a single node for processing (single node computing, SNC)

In Figure 2, the system cost of the different offloading schemes is compared under the same parameters. The number of tasks generated by the requesting vehicles is fixed, the data size of each requested task is 500 KB, and the number of required CPU cycles is 1000 megacycles. One MEC server and 10 PVs are available in the scenario to provide offloading services. As shown in Figure 2, the LC scheme generates the largest system cost, because the requesting vehicle itself has weak computing power and therefore incurs large computing latency. In the other three schemes, tasks can be offloaded to the MEC server and PVs, which have far more resources than the requesting vehicles; hence, their system costs are lower.

Meanwhile, the MPVEC scheme shows excellent performance under the same parameters and significantly reduces the system cost. Since no PVs provide offloading services in NP, comparing NP with MPVEC clearly shows that the introduction of PVs benefits system performance. Moreover, compared with the SNC scheme, MPVEC decomposes tasks onto multiple nodes for joint execution; this multinode distributed processing offers users more node choices and efficiently reduces the system cost. Numerical results show that the system cost of the proposed MPVEC is 41.5%, 34.6%, and 7.7% lower than that of LC, NP, and SNC, respectively.

In Figure 3, the impact of the number of tasks generated by the requesting vehicles on the system cost is illustrated. As the number of tasks increases, the system cost of all schemes presents an upward trend. Owing to the limited resources of the computing nodes, the number of tasks in the waiting queue grows once resource consumption reaches a node's threshold, which incurs additional execution time and raises the system cost. In addition, Figure 3 shows that MPVEC outperforms the other schemes under the same conditions. This is because PVs provide more computing resources at lower cost for offloading task execution, and the offloading strategy in MPVEC effectively improves task execution efficiency by reasonably allocating the load among the computing nodes. The results indicate that the system costs of LC, NP, and SNC are 17.1%, 5.1%, and 2.6% higher than that of MPVEC, respectively, when the number of tasks reaches 50.

In Figure 4, the impact of the number of CPU cycles required by the generated tasks on the system cost is illustrated. As the number of CPU cycles increases, the computational latency and energy consumption of each node increase accordingly; hence, the system cost of all schemes maintains an upward trend. Moreover, Figure 4 shows that the optimization gain of the MPVEC scheme is more pronounced than that of the other schemes: even when a task requires more computing power, its computing requirements can still be met at a lower cost in MPVEC. The numerical results indicate that, as the CPU cycles increase, the system cost rises 60.1%, 32.6%, and 6.1% more for LC, NP, and SNC than for MPVEC, respectively. This effectively demonstrates that utilizing the large amount of idle resources of multiple PVs and allocating the workload reasonably can greatly expand edge computing capacity while providing offloading services to users at low cost.

In Figure 5, the impact of the number of PVs on the system cost and on the workload borne by the MEC server is illustrated. The workload ratio of the MEC server and the system cost both decrease significantly as the number of PVs begins to increase, because more resources become available to the system for accomplishing offloading tasks, reducing the workload ratio of the MEC server. This proves that using PVs to assist edge computing in the MPVEC scheme is feasible and considerably relieves the computational pressure on the server. The decrease in system cost can be explained from two aspects: on the one hand, the PVs can execute part of the offloading tasks at a lower energy consumption per unit than the MEC server, so the computing energy consumption of the system falls as the number of PVs grows; on the other hand, as more PVs assist in offloading, tasks can be offloaded to more computing nodes for parallel processing, reducing both the waiting time at the MEC server and the task computing time, which lowers the system cost. Therefore, multiple PV-assisted MEC improves system performance and effectively relieves the workload of the MEC server.

In Figure 6, the impact of the number of PVs on the load balancing of the system is illustrated. We set the number of PVs in the parking lot to 5 and 10, respectively. Comparing the load ratios of the selected computing nodes, it can be seen that when the number of PVs is 5, apart from node 1, which takes more workload, the workload ratios of the remaining computing nodes differ little, so load balancing is achieved among those nodes. When the number of PVs increases to 10, all the selected computing nodes achieve load balancing. It can be concluded that with the proposed offloading strategy in MPVEC, the workload of each node is reasonably distributed, which effectively reduces the overload phenomenon and realizes system load balancing.

6.3. Complexity Analysis

The computational complexity of the proposed offloading strategy in MPVEC scales with the number of requesting vehicles and the total number of computing nodes (including the requesting vehicle itself, the MEC server, and the PVs). In [28], a collaborative task offloading strategy based on a computation task and resource sharing mechanism between vehicles and edge infrastructures was reported; its computational complexity depends on the total number of edge infrastructures and the number of vehicles in the task offloading subcloudlet. In [36], a cloud-based mobile edge computing (MEC) offloading framework for vehicular networks was proposed, with an efficient computation offloading strategy based on contract theory to maximize the benefit of the MEC service provider and improve the utilities of the vehicles; its computational complexity depends on the number of computation task types and the number of MEC servers. The computational complexity of the proposed offloading strategy is similar to that of the algorithms mentioned above.

7. Conclusion

In this paper, an offloading strategy in MPVEC is investigated to optimize system performance while jointly considering the workload balance among computing nodes. First, a multiple offloading node selection algorithm is proposed to select appropriate PVs to take part in computing tasks. Furthermore, a workload allocation strategy based on a dynamic game is presented to optimize system performance while accounting for load balancing. Numerical results demonstrate that the proposed offloading strategy in MPVEC can effectively reduce the system cost under delay constraints while achieving system load balancing. In this work, only the computing resources of PVs are considered; in future work, communication resource allocation in multiple PV-assisted MEC will be studied. This study can serve as a reference for task offloading in vehicular networks.

Data Availability

All the data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the 2020 State Grid Corporation of China Science and Technology Program under Grant 5700-202041398A-0-0-00.