Abstract

With the development of various vehicle applications, such as vehicle social networking, pattern recognition, and augmented reality, diverse and complex tasks have to be executed by the vehicle terminals. To extend the computing capability, the nearest roadside-unit (RSU) is used to offload the tasks. Nevertheless, for intensive tasks, excessive load not only leads to poor communication links but also results to ultrahigh latency and computational delay. To overcome these problems, this paper proposes a joint optimization approach on offloading and resource allocation for Internet of Vehicles (IoV). Specifically, assuming particle tasks assigned for vehicles in the system model are offloaded to RSUs and executed in parallel. Moreover, the software-defined networking (SDN) assisted routing and controlling protocol is introduced to divide the IoV system into two independent layers: data transmission layer and control layer. A joint approach optimized offloading decision, offloading ratio, and resource allocation (ODRR) is proposed to minimize system average delay, on the premise of satisfying the demand of the quality of service (QoS). By comparing with conventional offloading strategies, the proposed approach is proved to be optimal and effective for SDN-enabled IoV.

1. Introduction

The Internet of Vehicles (IoV) is one of the most promising application for Internet of Things (IoT) technology. Currently, it is fuelled by advances in related technology and industries, such as the decline in sensor costs, the widespread wireless connections, the improvement of computing capabilities, the development of cloud computing, and wireless transmission and positioning. In IoV, large amounts of heterogeneous data (such as voice, program code, and video) need to be transmitted in various links, e.g., vehicles to vehicles, vehicles to roadside equipment, and vehicles to the cloud [14]. These requirements are challenging for cloud infrastructure and wireless access networks. Upgraded service such as ultralow latency, continuity of user experience, and high reliability have been proposed to highly promote the local services at the edge of the network close to the terminals. The basic idea of Mobile Edge Computing (MEC) is to migrate the cloud-computing platform to the edge of the mobile access network. The traditional cellular network is deeply integrated with Internet services, to reduce the end-to-end delay of mobile service delivery and improve the user experience. With the development of MEC, mobile devices with limited resources can implement various new applications by offloading computing tasks to the MEC server, such as autonomous driving, augmented reality, and image processing [57]. The MEC server is owned by the network operator and is directly implemented in cellular base stations (BSs) or local wireless access points (APs) as a general-purpose computing platform [8, 9]. On the one hand, higher requirements on computing resources and storage capacity and capabilities consumption are put forward to carry the various vehicular applications. On the other hand, the computing power of in-vehicle devices is limited by the size and portability. Therefore, MEC with distributed computing capabilities, rich computing resources, and flexible wireless accessibility is promising for IoV. For instance, the selected part of the in-vehicle task can be offloaded to the appropriate MEC for parallel processing and feedback the execution result to the vehicle-mounted terminal through the neighboring BSs or APs.

Although the MEC server has richer resources than local equipment, excessive load has a great impact on the transmission link. Considering the transmission delay on the link [10], this is detrimental to delay-sensitive tasks. As the number of vehicles increases, and the communication environment becomes worse, resulting in high transmission delays. In addition, the heterogeneous nature of vehicle-mounted tasks places higher requirements on the entire system. There are two types of separable task offloading: (1) One is a bit-type task, which can be arbitrarily divided into several independent parts [11, 12], and these parts can be processed in parallel on different platforms. (2) The other is a code-oriented task, which is composed of various components [13, 14]. There are dependencies between task components and need to be executed in an orderly or continuous manner. Therefore, a reliable computing offloading solution is needed to support low-latency, highly reliable IoV services.

The edge computing capabilities of nearby RSUs have been leveraged to meet the task-intensive requests and strict latency requirements, i.e., part or all of the divisible bit-type tasks with high delay requirements are selected to be offloaded to nearby RSUs; then, the computing delay can be greatly reduced by parallel computing. Nevertheless, the unified task intensity is not often, some RSUs may have to handle extra requests beyond their capabilities, while other RSUs are relatively idle. Benefiting from the software-defined network (SDN) architecture, SDN controllers with global information are able to coordinate edge computing resources [15, 16]. Combined with the current global situation, the requested tasks are controlled and forwarded to the corresponding target nodes through the SDN controller, which can effectively integrate global resources.

A lot of research work has focused on task offloading in IoV [1720], majorly considering one part of offloading strategy, offloading ratio or resource allocation, and lack of the utilization of SDN to efficiently solve the load balancing problem. In this paper, we jointly optimize the offloading strategy, offloading ratio, and resource allocation, in order to minimize the system delay of SDN enabled IoV. Moreover, the effects of different task complexity on transmission and execution are also considered. The main contributions of this paper are as follows: (i)A tasks-divisible system model with a software-defined network is proposed based on two-layer transmission(ii)A Particle Swarm Optimization- (PSO-) based heuristic approach for the overall optimization is proposed, which can effectively solve the offloading strategy problem of multiuser and multiobjective nodes. This approach works by decomposing the problem into three subproblems: (1) offloading decision of vehicles; (2) resource allocation by RSUs; and (3) offload ratio of vehicles. It greatly reduces the complexity of the problem and is very effective for solving the multivehicle and multimec offloading scenario(iii)The offloading decisions, local offloading ratios, and resource allocation (ODRR) are jointly optimized in a complete way, to maximize the system performance

By comparing with the conventional offloading strategies, simulation results show the proposed ODRR approach achieves the best performance.

The offloading strategy has been a hot research topic for IoV, and various offloading models have been proposed in different application scenarios. Considering the standby time of mobile devices and the latency sensitivity of tasks, most work in edge computing or fog computing focuses on the optimization of energy consumption or latency. Therefore, we survey the related work according to the optimization goals, as shown in Table 1.

2.1. Optimizing the Energy Consumption

For mobile users, processing various application tasks consumes a lot of energy, so improving the standby time of the device has always been a concern. Some work is devoted to reducing local computing power consumption to improve standby time. The energy cost of task calculation and file transmission has been studied in [21]. Combining the multiaccess characteristics of 5G heterogeneous networks, jointly optimizing offloading and wireless resource allocation to minimum energy consumption within delay constraints. A multiuser MEC system with wireless power supply has been modeled in [22], in order to solve a practical scenario that requires delay limit and reduces the total energy consumption of the AP, jointly optimizing the energy transmission beamforming of the access point, the frequency of the central processing unit, the number of bits offloaded by users, and the time allocation among users. [23] has proposed an energy-optimal dynamic computation offloading scheme algorithm to minimize system energy consumption under energy overhead and other resource constraints.

2.2. Optimizing the Delay of the System

For latency-sensitive tasks, researchers are devoted to reducing system latency and improving user experience by optimizing local computing resources and edge node resources. [10] aim to minimize the maximum delay among users by joint optimization of offloading decision, computing resource allocation and resource block and power. [17] investigate the calculation rate maximization problem in the MEC wireless power supply system enabled by UAV, which is subject to the energy harvesting causal constraints and the UAV speed constraints. A new device-to-device multiassistant MEC system that requests local users to nearby auxiliary devices for collaborative computing has been designed in [18]. By optimizing task allocation and resource allocation to minimize the execution delay, a collaborative method has been proposed in [20] for parallel computing and transmission of virtual reality. The task is divided into two subtasks and offloaded to the MEC server and the vehicle side in order to shorten the completion time of the virtual reality application. Moreover, an offloading scheme has been proposed with efficient privacy protection based on fog-assisted computing and solved by a joint optimization algorithm to minimize the completion delay in [24].

2.3. Optimizing Both System Delay and Energy Consumption

The user experience and the standby time of the device have been optimized together by the weight parameter, i.e., when the power is low, the weight of the energy consumption is set larger; when the power is sufficient, a larger delay weight is set. [19] consider a multicell wireless network that supports MEC, which assists mobile users perform computationally intensive tasks through task offloading. A heuristic algorithm is proposed to combine task offloading and resource allocation to maximize system utility, which is measured by the weighted sum of task completion time and energy consumption reduction. The system latency has been minimized with energy consumption constraints by jointly optimizing offloading decisions, local computing power, and fog node computing resources in [25]. The cloudlet overload risk has been alleviated by offloading user tasks to vehicle nodes, and a heuristic algorithm has been proposed to balance energy and delay to minimize system overhead in [26]. In order to solve the problem of high power consumption and delay sensitivity of portable devices, a reinforcement learning scheme has been proposed in [27] to search for optimal available resource nodes to minimize delay and energy consumption. In addition, the fog computing and cloud computing have been discussed in [2427]. Since cloud servers are located in areas far away from cities, there is a large transmission delay, and tasks that are not sensitive to delay are offloaded to the cloud for computing, while intensive and delay-sensitive tasks are computed locally or offloaded to the fog to improve system performance.

In this paper, we mainly focus on delay-sensitive and task-intensive scenarios, considering that the computing resources of edge nodes and the number of tasks received in each period are limited. In order to prevent some edge computing nodes from being overloaded and some edge nodes relatively idle, we use the SDN-enabled IoV to control the task offloading decision-making by monitoring the global situation, which effectively improves the utilization of resources and reduce system latency.

3. System Model

In this section, the system transmission model, execution model, and optimization problem formulation are presented. As shown in Figure 1, we assume a vehicular network system is composed of vehicles and RSUs. Each RSU is equipped with a MEC server, which has the computing ability to process offloading tasks. Generally, the MEC can be a physical device or a virtual machine provided by the operator. Taking into account the complexity of the vehicular network, this system is a software-defined IoV that supports edge computing and the configuration of edge computing nodes. The edge nodes are coordinated and controlled by the software-defined network (SDN), which aims to reduce system delay and improve overall performance. As a coordinator, SDN divides the IoV system into two independent layers through software definition and virtualization technology: the data layer and the control layer. The edge nodes uniformly obey SDN scheduling and follow the OpenFlow protocol [28]. These edge nodes transmit and process information according to SDN control instructions. The control and processing are separated by the network entities, to effectively integrate resources and improve utilization. The edge computing node equipped on RSU connects the edge node cluster and the SDN controller through broadband connection. The physical communication on the control layer is independent of the physical communication channel on the data layer. The data layer is composed of an OpenFlow-based SDN controller and network nodes, refers to [29, 30]. The SDN controller broadcast global status, including Channel Status Information (CSI), available resources, and task priority. When SDN receives the vehicle’s offloading request, it looks for the best solution (including offloading decision and resource allocation) at the control layer and then sends control instructions. The data layer performs data transmission according to the received control layer. Each vehicle generates a task in a period of time, taking the heterogeneity of vehicle tasks into account (data volume, delay sensitivity, and difference in computational complexity). In the case of real-time tasks require minimal delay and/or have a large amount of data, local offloading cannot meet the requirements. Otherwise, tasks are not sensitive to delay response and can be processed locally. If all of them are executed locally or offloaded to the RSUs, it can cause timeout failure, waste of local resources, and a very poor communication environment due to interference. Therefore, different offloading strategies need to be set for different task types. Let and represent the set of vehicles and the set of RSUs, respectively. For ease of reference, the key symbols used in this article are summarized in Table 2.

3.1. Communication Model

We assume that each vehicle terminal has an executed task at a time and denoted as . Each task has three parameters, , in which defines the size of the input data of the task of the vehicle terminal (usually in bits), and defines the computing resources required by the task of the terminal , which refer to CPU cycles. Parameters and can be obtained from task analysis. is the maximum allowable delay for task transmission execution, i.e., if is exceeded by the time of result received, and the task is failed by timeout. Since the vehicles receive the offloading decision, the tasks are not allowed to be interrupted before the execution is completed. Typically, the speed of cars on conventional road is 5 to 16 meters per second; thus, we assume the radio channel is not radical varying to interrupt the execution of the tasks, due to the severe fading. When the vehicle generates a task, it sends an offloading request to the nearby RSU first; then, the RSU routes the request command to the SDN controller. SDN synthesizes the current global information and provides the optimal offloading decision. Finally, the decision plan is sent to the targeted node through the control layer.

We introduce a binary variable of task offloading selection . If =1, it defines that the RSU is selected by vehicle to perform task offloading. On the contrary, =0 means the vehicle do not select the RSU . Therefore, the constraint of is defined as

Equation (1) means that each vehicle can offload tasks to a unique RSU for execution. We define the coordinates of the RSUs set in two-dimensional plane as , the coordinates of the vehicle are defined as , and the coordinates of nearest RSU are denoted by . Then the distance between the vehicle and the nearest RSU is given by

The transmission rates of vehicle to routing RSU channel and routing RSU to target RSU channel are described as [31] where and represent the bandwidth allocations of vehicle to routing RSU channel and routing RSU to target RSU , respectively. represents additive white Gaussian noise of the channel. represents the transmission power from routing RSU to vehicle . is the complex Gaussian channel coefficient [31] following the complex normal distribution CN (0,1). is the path loss index.

We introduce to represent the local execution ratio of the vehicle; thus, denotes the offload ratio. Since the execution result is usually relatively small, the delay from targeted node to the routing node and the delay from the targeted node can be ignored. We define the binary parameter , where is 1 means the vehicle needs to be routed to the targeted node, otherwise, vice versa. Assuming the routing node is RSU , based on the above formulas, the transmission time for the uploading data from vehicle to the targeted RSU is divided into two parts: (1) the delay of uploading from the vehicle to the routing node and (2) the delay of uploading from the routing node to the targeted node, which are expressed as

Considering that the routing node can be the candidate of targeted node, the uploading time is defined as

3.2. Computing Model

In this section, we introduce the computing model of the vehicle. Considering the parallel offloading mode, the computing model is mainly divided into two parts: (1) the local computing model and (2) RSU computing model.

3.2.1. Local Computing Model

Let denotes the computing capabilities of vehicle ; then, the local task execution time on data can be defined as

3.2.2. RSU Computing Model

Besides the local computing, the rest part of the tasks are offloaded to the RSU for further computing. It requires (1 - ) computing resources from RSU. Note that a RSU can be selected by multiple vehicles, and a vehicle can only select one RSU for execution. Since RSU has limited computing resources, it is necessary to allocate RSU resources to different vehicles in a reasonable manner to improve resource utilization and reduce system delay. Define as the computing resource of RSU , and denotes the computing resources allocated by the RSU server to vehicle . The sum of the resources allocated by the RSU to each vehicle cannot exceed its own computing resources, thus constrained by Equations (7) and (8), and the execution time of vehicle on RSU is defined by Equation (9).

Assuming there exist RSUs in the network, then the execution time for vehicle is defined as

For vehicle , the task is divided into two subtasks, which are processed in parallel by local unit and by edge RSU node. The task completion time is accordingly divided into two parts: local processing time and edge RSU node processing time. The edge RSU node processing also results transmission delay and execution delay . Therefore, the time for completing task generated by vehicle is determined by the maximum delay, which is defined by

3.3. Problem Formulation

For delay-sensitive applications, delay is a problem that must be considered. Therefore, it is vital to formulate the most suitable offloading strategy based on the global status. For intensive tasks, meeting the performance requirements of the vehicle and making full use of effective resources are of importance. In addition, considering the transmission and execution delay caused by offloading, the system delay is defined by Equation (12). Therefore, offloading decision-making and resource allocation must be jointly optimized to improve system performance. The goal is to provide all vehicle optimized offloading strategies , computing resource allocation , and offloading ratio , aiming to reduce average delay. Finally, the optimization problem is described as follows:

The above constraints are explained as follows: constraint (13a) is to ensure that the total delay of the task does not exceed the maximum allowable delay; constraints (13b) and (13d) mean that each vehicle can only transmit the task to only one RSU, and the offloading decision is a binary variable; constraint (13c) is the offloading ratio constraint, which is a decimal between 0 and 1; and to ensure that the resources allocated by the RSU to the vehicle does not exceed its own capacity, the constraint (13e) must be met. Finally, there is a binary constraint, which represents whether the vehicle to the targeted RSU needs to be routed.

4. Problem Solving

The fact that the offloading decision is a binary variable brings problem as a mixed integer programming problem, which is nonconvex NP-hard. In order to reduce the complexity of the problem, we divide the problem into three subproblems, i.e., offloading decisions of various vehicles, proportional distribution of tasks performed locally, and different resource allocation for vehicles by RSUs.

4.1. Offloading Strategy Making and Load Balance

Given the local computing ratio and resource allocation , problem is transformed into as follows:

Obviously, is still an NP-hard problem. For such problems, heuristic algorithm is a rational solution. The Particle Swarm Optimization algorithm (PSO) originated from the study of bird predation behavior is a evolutionary heuristic algorithm, which has efficient global search capabilities. It has achieved great success in image processing and neural network training. Hence, we propose a offloading decision-making algorithm based on PSO. The basic idea of PSO is to find the optimal solution through collaboration and information sharing between individuals in the group. In the paper, particles have only two attributes: speed and position. Speed represents the speed of movement, and position represents the direction of particles. Each particle searches for the optimal solution separately in the search space and records the current individual extreme value, then sharing the individual extreme value with other particles. All particles in the particle swarm adjust their speed and position according to the current individual extreme value they find and the current global optimal solution shared by others. Therefore, through iteration, the particles of the entire population eventually converge to the optimal solution.

According to the speed update formula proposed by Clerc and Kennedy [32], we introduced a compression factor . The update formula for speed and position is presented as where , is the number of particles. is the coefficient of inertia. The larger , the stronger the global optimization ability, and the weaker the local optimization ability. are the learning factors, is the velocity of particle at the iteration, and is the current position of particle at the iteration. denotes the maximum velocity.

The compression factor is given by where . The compression factor guarantees the convergence of the particles and prevents the explosion of velocity. The specific process of the improved PSO algorithm is described in Algorithm 1. The function refers to Equation (12).

Input:
 The input parameters of particle swarm:
Output:
1: Fordo
2: For each do
3:  Initialize the velocity position of particles
4: End for
5: In Algorithm 2, we obtain then record the current position and fitness as particle’s individual extreme option and value .
6: Record the smallest fitness and the corresponding position as .
7: While the number of iteration steps is not 0 do
8:  Fordo
9:   Update the velocity of particles using Equation (16)
    Update the position of using Equation (17)
    Evaluate fitness of particle using Equation (14)
10:   Ifthen
11:    
12:   End if
13:   Ifthen
14:     =
15:   End if
16:  End for
17: End while
18: End for
19: Return
Input:
 Offloading decison:
Output:
 Routing information:
1: Initialize the location of the vehicles.
2: For each do
3: Obtain the access RSU of vehicle , set .
4: Fordo
5:  If and then
6:   .
7:  End if
8: End for
9: End for
10: Return
4.2. Resource Allocation Optimization

Assuming the local computing ratio and offloading decision are given, the problem is transformed into the lowest vehicle delay to each RSU. Defining the vehicle set of the task on RSU , and the number of vehicles is The variable only has an impact on the task execution of the objective function; thus, the converted problem can be presented as

Substituting formulas (4), (5), and (6) into , and performing equivalent transformation, the overall optimization problem is changed to the following form:

As shows, is not differentiable subject to . Substituting formulas (5), (6), and (9) into , then the is approximated as where and is the upper bound of . Substitute the upper bound into , then can be bounded with the worst case delay as

is a convex optimization problem, with linear constraints (13e) and convex inequality constraints (23a). Therefore, we use Lagrangian duality theory to solve this problem. The Lagrangian function of is given by where , where and are the Lagrangian multipliers. According to KKT conditions, we have

The Lagrange dual function is then given by

Then, the dual problem of is

As the Lagrange function is differentiable, the gradients of the Lagrange multipliers can be obtained by

The Lagrange multiplier iterative formula is as follows by using gradient descent. where are the gradient steps, represents gradient number, and [.]+ represents . We summarize the procedure for solving problem in Algorithm 3.

Input:
Output:
1: Repeat
2: Calculate resource allocation based on Equation (26)
3: Update the using Equation (30)
4: Until convergence
5: Return
4.3. Offloading Radio Allocation

Given and , the problem can be transformed into a linear programming problem on . Therefore, the function value takes the extreme value at the boundary or stagnation point, as shown in

Based on the above formula, as a constant and the minimum value can be obtained at the boundary. According to the constraints (13c) and (23a), can be obtained.

Taken together, we propose an approach combining the whole process of offloading strategy, offloading ratio, and resource allocation control (ODRR), as described in Algorithm 4.

1: Set offloading decision , resource allocation , and local executing ratio
2:
3: While tT do
4: Obtain the offloading decision by Algorithm 1 based on
5: Calculate resource allocation by Algorithm 3 based on
6: Update the by using Equation (13c), (23a) based on
7: 
8: End while
9: Return

5. Numerical Results

In this section, simulation configurations and results are presented and analyzed to verify the effectiveness of the proposed algorithm.

5.1. Simulation Configurations

The scenario in this paper is as follows: the system consists of 3 RSUs and N vehicles , which carry tasks with random parameters, different data volumes, computing resources, and allowable delays. Assuming that the computing resources of 3 RSUs are , the local resources of the vehicle are . Vehicles’ task data volume, computing complexity, and maximum allowable delay are random , random , random . Referring to [31, 33, 34], communication parameter settings are shown in Table 3. The settings of the parameters in Algorithm 1 are as follows: the number of particles is 100, the maximum number of iterations is 50, the learning factors and are both equal to 2, because the learning factors are parameters for adjusting the step size. If the setting is too large, the particle moves fast and fly over the optimal point. If set small, the optimization speed will be slow. Larger inertia weight has stronger global search ability and slower convergence speed. In order to avoid falling into the local optimal solution and have a faster convergence speed, it is most appropriate to set to 0.9. In order to verify the effectiveness of the proposed offloading strategy on the proposed Internet of Vehicles architecture, we compare it with other offloading strategies. (i)Offload-Whole-to-RSUs: offload the whole task to edge nodes including access node and remote nodes(ii)Execute-Locally: the vehicle terminal directly executes the task locally(iii)Offload-proportion-by-ODRR: joint optimization of offloading decision, local calculation ratio, and resource allocation (ODRR) is based on the proposed algorithm(iv)Offload-proportion-by-SA: offload proportion of task to RSUs using simulated annealing algorithm (SA).

5.2. Simulation Results

In this section, we present the performance of the proposed ODRR algorithm and compare with other conventional offloading strategies.

Figure 2 plots the average execution time of vehicles as the number of connected vehicles increases. It can be seen that the performance of the proposed ODDR partial offloading algorithm is the best and can keep the average calculation delay to a minimum, and the partial offloading SA algorithm is behind it. When the number of vehicles is less than 35, the effect of offloading whole to RSUs is better than executing locally. Because RSUs have more computing resources than the vehicles, the computing capability of RSU is dozens of times that of executing locally. Therefore, computing performance of executing locally is worse than offloading whole to RSUs. But when the number exceeds the limit, the situation becomes different. On the one hand, this is because the resources of RSUs are limited. When the number of offloading vehicles exceeds a certain number, the load capacity of RSUs is exceeded, resulting in great performance decrease. On the other hand, more vehicles means a worse communication environment, which leads to more communication delays. Compared with the other three conventional strategies, the latency of the proposed ODRR algorithm is always the smallest, as the number of vehicles increases. Compared with the Offload-Whole-to-RSUs scheme, the ODRR can be reduced by 42.7% in the best case, and 17.5% in the worst case; compared with the Execute-Locally scheme, the ODRR can be reduced by 52.6% in the best case, and 24% in the worst case; compared with Offload -proportion-by-SA scheme, the highest can be reduced by 16.7%, and the lowest by 7.8%. It can be concluded that compared with other two conventional strategies, ODRR can reduce the delay by up to nearly half. Compared to the Offload -proportion-by-SA scheme, the reduction is also up to 10%.

Figure 3 plots the impact of different executing complexities () on system delay. The number of connected vehicles is set to 30, and it is found that the average delay of the four offloading strategies increases linearly with the increase of task complexity. The performance of the ODRR algorithm given in this paper is obviously the best compared with other strategies. The partial offloading by SA algorithm is the second, the whole offloading to RSUs is the third, and executing locally has the worst performance. The higher the task complexity, the more CPU resources are needed to process each byte of data. The local computing resources are much smaller than RSUs, so executing locally gets the highest latency. However, offloading whole to RSUs causes uneven distribution and local resource waste. The partial offloading ODRR algorithm takes into account both resource allocation and offloading ratio, which greatly improves system performance. Compared with the Offload-Whole-to-RSUs scheme, the ODRR can be reduced ranging from 14% to 29%; compared with the Execute-Locally scheme, the ODRR can be reduced ranging from 14% to 49.7%, and when the computational complexity is greater, the reduction effect is better. Compared with the Offload-proportion-by-SA scheme, the reduction is between 7% and 9%.

When , Figure 4 shows the change of average delay with the amount of input data increasing. The average execution delay of the vehicles increases linearly with the size of the input data increasing. The proposed ODRR algorithm obtains the minimum delay, after the SA algorithm, whole offloading to RSUs is the third, and only executing locally is final. The larger the input data, the more transmission delay is caused. The algorithm proposed in this paper jointly optimizes the offloading ratio, offloading decision-making, and resource allocation, thus greatly improving the system performance. With the larger amount of input data, the ODRR proposed in this paper has a better effect on reducing the delay. Compared with Execute-Locally scheme, the reduction is between 36.6% and 47.5%. Compared with Offload-Whole-to-RSUs scheme, it can reduce the delay by 30.4% to 42%; compared with the Offload-proportion-by-SA scheme, the decline rate is nearly 10%.

6. Conclusion

In this paper, we propose a multiuser and multi-RSU system architecture based on SDN-enabled IoV. The loads of RSUs are effectively balanced by using the characteristics of SDN. In order to reduce the delay of task offloading in IoV, a joint approach is proposed to optimize the offloading ratio, offloading decision-making, and resource allocation. Compared with the conventional work that executing locally, the system performance increases . Compared with the decision by fully offloading to RSUs, the performance increases ; compared with the offloading strategy by SA, the performance increases about . The simulation results have greatly proved that the joint optimization approach proposed in this paper is more effective than conventional strategies in dealing with the delay problem of multiuser and multi-RSU system and can effectively solve the multidimensional problem. Although greatly improving system performance, there is still room for improvement. For instance, the possibility of task failure caused by transmission link or edge node failure has not considered in this work. Therefore, the reinforcement learning-based approach is of interest to solve the optimization goal considering the failure retransmission mechanism of the task.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) under grants No. 61901104 and the Science and Technology Research Project of Shanghai Songjiang District No. 20SJKJGG4 (Corresponding author: Lei Zhang).