Research Article
Time-Driven Scheduling Based on Reinforcement Learning for Reasoning Tasks in Vehicle Edge Computing
Algorithm 3: Algorithm implementation.
Input: the subtask set, the constraint (precedence) relationship between subtasks, the execution latencies, and the set of edge nodes
Output: h
(1) Initialization: set the in-degree array I, the subtask queue Q, and the predecessor-node sets R to the empty set
(2) Use the constraint relationship to set the in-degree array I
(3) Enqueue the subtasks with in-degree 0 to Q; set the number of traversed subtasks u = 0 and the number of subtasks in the current layer, k, to the current queue size
(4) while Q is not empty do
(5)  if u = k then
(6)   Start the next layer: increase the layer index and set k to u plus the current queue size
(7)  end if
(8)  Dequeue a subtask from Q and increase the number of traversed subtasks u by 1
(9)  for each subtask i do
(10)   if there exists a directed edge from the dequeued subtask to i then
(11)    Add the dequeued subtask and its predecessor-node set to R(i); I(i) = I(i) - 1
(12)    if I(i) = 0 then
(13)     enqueue the subtask i to Q
(14)    end if
(15)   end if
(16)  end for
(17) end while
(18) According to the order obtained above, the subtasks are assigned to edge nodes.
(19) Initialization: set the subtask completion list to the empty set, set the remaining execution latency of each subtask to its execution latency, and set the current running time to 0
(20) while the completion list does not contain all subtasks do
(21)  For each idle edge node, determine the subtask to be assigned, whose direct-predecessor set is a subset of the completion list
(22)  Find the minimum remaining execution latency among the subtasks currently executing in parallel
(23)  Advance the running time by this minimum and decrease the remaining latency of each running subtask accordingly; when a subtask's remaining latency reaches 0, add it to the completion list and mark its edge node as idle
(24) end while
(25) return h
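The listing above combines two standard building blocks: a layered topological sort of the subtask DAG (Kahn's algorithm, steps 1-17) and a time-driven simulation of parallel execution on edge nodes (steps 19-25). A minimal Python sketch under those assumptions follows; all names (`schedule`, `latency`, `num_nodes`) and the greedy idle-node assignment policy are illustrative, not taken from the paper:

```python
from collections import deque

def schedule(n, edges, latency, num_nodes):
    """Sketch of Algorithm 3: layered topological sort of the subtask DAG,
    then a time-driven simulation of parallel execution on edge nodes.
    Returns (completion time, layer of each subtask, predecessor sets R)."""
    # ---- Phase 1: layered topological sort (steps 1-17) ----
    indeg = [0] * n                       # in-degree array I
    succ = [[] for _ in range(n)]         # direct successors
    pred = [[] for _ in range(n)]         # direct predecessors
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)
        indeg[b] += 1
    R = [set() for _ in range(n)]         # predecessor-node sets
    Q = deque(i for i in range(n) if indeg[i] == 0)
    layer = [0] * n
    u, k, depth = 0, len(Q), 0            # traversed count, layer boundary, layer index
    order = []
    while Q:
        if u == k:                        # current layer fully traversed
            depth += 1
            k = u + len(Q)
        v = Q.popleft()
        layer[v] = depth
        order.append(v)
        u += 1
        for i in succ[v]:
            R[i] |= R[v] | {v}            # add v and its predecessors to R(i)
            indeg[i] -= 1
            if indeg[i] == 0:
                Q.append(i)

    # ---- Phase 2: time-driven execution simulation (steps 19-25) ----
    done = set()                          # subtask completion list
    running = {}                          # edge node -> subtask
    remaining = {}                        # subtask -> remaining latency
    pending = list(order)
    t = 0.0
    while len(done) < n:
        # assign ready subtasks (all direct predecessors completed) to idle nodes
        for node in range(num_nodes):
            if node in running:
                continue
            for s in pending:
                if all(p in done for p in pred[s]):
                    running[node] = s
                    remaining[s] = latency[s]
                    pending.remove(s)
                    break
        # advance time by the minimum remaining latency of the running subtasks
        dt = min(remaining[s] for s in running.values())
        t += dt
        for node, s in list(running.items()):
            remaining[s] -= dt
            if remaining[s] <= 1e-9:      # subtask finished: record it, free node
                done.add(s)
                del running[node]
    return t, layer, R
```

For a diamond-shaped DAG 0 → {1, 2} → 3 with latencies [1, 2, 3, 1] on two edge nodes, the sketch places the subtasks in layers 0, 1, 1, 2 and reports the makespan of the simulated parallel run. The layer-boundary trick in Phase 1 (resetting k to u plus the queue size whenever u = k) is exactly what steps (5)-(7) of the listing encode.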