Abstract

With the rapid growth of 6G communication and smart sensor technology, the Internet of Things (IoT) has attracted considerable attention. In 6G-based IoT applications on multiprocessor platforms, partitioned scheduling has been widely applied. However, partitioned scheduling approaches can waste system resources and produce uneven workloads among processors. In this paper, a smart semipartitioned scheduling strategy (SSPS) is proposed for mixed-criticality systems (MCS) in 6G-based edge computing. Besides the tasks’ acceptance rate and weighted schedulability, QoS is considered in SSPS to improve the service quality of the system. SSPS allocates tasks to each processor and allows some tasks to migrate to other processors when needed. Experimental comparisons with several existing algorithms show that SSPS achieves the best schedulability and QoS.

1. Introduction

Nowadays, with 6G wireless communication networks and various smart sensors widely deployed, IoT applications are growing rapidly in a wide range of areas, including industrial robots, driverless cars, and edge computing [1–5]. In particular, edge computing, a new application paradigm, is growing popular with the development of 6G technology. In such 6G-based edge computing systems, various sensors and mobile devices with different importance or criticality levels are integrated into a single computation platform to save space and energy. Criticality designates the level of assurance needed against system failure [6]. Generally, criticality can be divided into several levels, such as low and high. For example, in automotive systems, the tasks driven by steering and braking sensors are safety-related high-criticality (HI-criticality) tasks, while the tasks of multimedia players, used for infotainment, are low-criticality (LO-criticality) tasks. Systems whose components have more than one distinct criticality level are mixed-criticality systems (MCS) [7], a special kind of 6G-based application with multiple criticalities. Task scheduling is a fundamental issue in MCS, reconciling conflicting requirements for resource usage. Scheduling tasks so that the execution of all HI-criticality tasks is guaranteed is a major challenge of MCS. Because reliability verification and mixed criticality coexist in MCS, traditional real-time scheduling algorithms cannot be adopted directly [8, 9].

With growing computing requirements, the platforms of MCS are migrating from single processors to multiprocessor hardware. MCS scheduling on multiprocessors can be mainly divided into global scheduling [10–12] and partitioned scheduling [13–17]. Fully global scheduling, because tasks migrate globally, incurs overhead from context switching and the associated caches, while purely partitioned scheduling, in which some processors are overloaded and others idle because migration is forbidden, wastes system resources [18]. Researchers therefore mixed the two methods and proposed the semipartitioned scheduling strategy [19–21]. Semipartitioned algorithms apply two-phase allocation for the different system criticality modes. During a criticality mode update, the executing low-criticality (LO-criticality) jobs are aborted and new ones can be executed on a different processor, so that these jobs’ deadlines can still be met and better system schedulability is achieved.

However, in most existing MCS semipartitioned scheduling algorithms, when the system criticality mode switches from LO-criticality to HI-criticality, the LO-criticality tasks are discarded outright to ensure the completion of HI-criticality tasks [22–24], which is overly conservative. LO-criticality does not mean noncritical, and dropping executing LO-criticality tasks can harm the system’s acceptance rate. During scheduling, processors may be idle enough to run LO-criticality tasks, thereby improving system utilization and task acceptance [25, 26].

Furthermore, acceptance rate and utilization rate are the main schedulability-related parameters in MCS for ensuring the completion of HI-criticality tasks. However, tasks with identical criticality can still influence the MCS differently in actual applications, where some tasks are more significant or demand a higher quality of service (QoS). To describe the QoS property of a task, a notion such as value [27, 28] is usually used: the higher the value, the better the quality the task brings. The accumulated value of finished tasks is recorded to reflect the overall QoS achieved by a scheduling algorithm [29, 30].

1.1. Organization

The paper is organized as follows: the related work is described in Section 2. Section 3 describes the paper’s overall framework. Section 4 defines the proposed MCS model and notation in detail. We analyze the schedulability of tasks in MCS in Section 5. Section 6 designs the task priority assignment. The details of the proposed scheduling algorithm SSPS are introduced in Section 7. The simulation experiment setup and results are presented in Section 8. Finally, Section 9 summarizes the conclusions and future work.

2. Related Work

In recent years, with the development of 6G networks, related IoT applications have been widely deployed and researched [1–3]. In particular, 6G-based edge computing is growing popular as 6G wireless communication develops rapidly [4, 5]. The scheduling of these applications on multiprocessors, such as systems with multiple criticalities, has become a prominent issue [6–12]. We review the related work as follows.

The literature [13–17] shows that the partitioned scheduling approach can achieve better schedulability than the global approach. In partitioned scheduling, the task set is first allocated to processors, and the tasks are then executed according to a single-processor scheduling algorithm. Optimal partitioning of task sets on multiprocessors is NP-hard, so researchers mainly use heuristic partitioning algorithms to obtain suboptimal solutions. For MCS on an identical multiprocessor platform, a fixed partitioned scheduling algorithm was first proposed in [14], and the impact of different task set orderings as well as a heuristic division on system performance was investigated; it showed that decreasing criticality (DC) yields better schedulability than decreasing utilization (DU). For implicit-deadline sporadic MCS, a partitioned scheduling algorithm MC-PARTITION based on DC was proposed, which achieves a better speedup bound. Because the task criticality level may change, tasks were assigned via Best-Fit Decreasing (BFD) over combined criticality and utilization, improving resource utilization [15]. However, purely partitioned scheduling may reduce the utilization of the entire system because migration between processors is forbidden [16, 17].

These limitations led to the emergence of the semipartitioned scheduling strategy, in which most tasks are assigned to a fixed processor while some tasks can be scheduled globally across processors [18–20]. For MCS, a series of semipartitioned scheduling algorithms have been proposed. Santy et al. designed a heuristic scheduling strategy combining reservation, semipartitioning, and periodic conversion, which reduces migration overhead and obtains better performance [21].

In the original Vestal model, LO-criticality jobs are sometimes treated the same as noncritical jobs and are not guaranteed in the HI-criticality system mode, which ensures the completion requirements of HI-criticality tasks. Nevertheless, from an engineering perspective, a LO-criticality task is not a no-criticality task and cannot be dropped lightly [22–26]. Su and Zhu first focused on the LO-criticality tasks dropped in mixed-criticality scheduling and discussed the feasibility of restarting LO-criticality tasks from a multimodal perspective [22]. Burns and Baruah constructed an elastic mixed-criticality task model, which enables more frequent execution of the LO-criticality task set through elastic processing [23]. Baruah et al. [24] introduced an additional, less pessimistic WCET for LO-criticality jobs to guarantee service regardless of the execution of HI-criticality jobs. The work in [25] follows the MC-Fluid framework and designs a corresponding scheduler to handle LO-criticality service, with a good speedup factor.

Some researchers hold that a real-time task has an importance or quality that should be treated as a factor in improving the quality of service (QoS) of the system or application [27, 28]. In these papers, a notion called value is used to reflect the quality of a task, as a basis of the scheduling algorithm. Moreover, the value density (value per time unit) and urgency of a task have been considered jointly in dynamic scheduling algorithms, improving real-time application performance [29, 30].

3. Overall Framework

We consider scheduling on mixed-criticality systems (MCS) on a multiprocessor platform in a 6G-based edge computing environment. First, a response-time-based schedulability analysis is used to identify schedulable tasks. These tasks are then sorted by a priority assigned according to criticality, value, and deadline, and are allocated to processors by first fit (FF) in descending order of priority. In the proposed smart semipartitioned scheduling strategy (SSPS), some tasks can be migrated to other processors as needed. During scheduling, slack time is collected to execute additional jobs. The overall framework of SSPS is shown in Figure 1.

3.1. Our Contributions

For mixed-criticality systems (MCS) in 6G-based edge computing on homogeneous multiprocessors, both the timing and the service quality of system tasks are taken into consideration, and a smart semipartitioned scheduling strategy (SSPS) is proposed in this paper. In addition, when the system mode switches from LO-criticality to HI-criticality, SSPS includes a mechanism that continues serving LO-criticality tasks (jobs), improving both schedulability and QoS.

4. System Model and Notation

4.1. System Model and Notation

Here, a mixed-criticality system (MCS) is defined as a task set T = {τ1, ..., τn} of independent periodic tasks and a processor set P = {p1, ..., pm} of identical processors.

Meanwhile, the dual MCS model is adopted in this paper, which runs in either a HI-criticality mode or a LO-criticality mode.

Definition 1. MCS tasks. A task τi of the MC model can be characterized by a 5-tuple of parameters τi = (Li, Ci(l), Di, Ti, Vi(l)), where
(1) Li denotes the criticality of task τi, where Li ∈ {LO, HI}. A task with HI-criticality is subject to certification, whereas a LO-criticality task does not need to be certified.
(2) Ci(l) denotes task τi’s worst-case execution time (WCET) in criticality mode l, where l ∈ {LO, HI}. Ci(HI) and Ci(LO) denote the WCET of task τi in HI-criticality mode and LO-criticality mode, respectively, and they satisfy the constraint Ci(LO) ≤ Ci(HI).
(3) Di is the relative deadline of task τi.
(4) Ti is the period of task τi.
(5) Vi(l) specifies the value of task τi in criticality mode l, where l ∈ {LO, HI}. Vi(LO) and Vi(HI) denote the value of task τi in LO-criticality mode and HI-criticality mode, respectively, and they satisfy Vi(LO) ≤ Vi(HI).
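To make Definition 1 concrete, the 5-tuple can be sketched as a small Python structure; the field names and dictionary encoding are illustrative assumptions, not notation from the paper:

```python
from dataclasses import dataclass

@dataclass
class MCTask:
    crit: str          # criticality level Li: "LO" or "HI"
    wcet: dict         # WCET per mode, e.g. {"LO": 2, "HI": 4}
    deadline: int      # relative deadline Di
    period: int        # period Ti
    value: dict        # value per mode, e.g. {"LO": 10, "HI": 15}

    def is_valid(self) -> bool:
        # Definition 1 requires Ci(LO) <= Ci(HI) and Vi(LO) <= Vi(HI)
        return (self.wcet["LO"] <= self.wcet["HI"]
                and self.value["LO"] <= self.value["HI"])
```

The `is_valid` check encodes the two ordering constraints of Definition 1.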

Each task in MCS can give rise to a potentially infinite sequence of jobs.

Definition 2. MCS jobs. Each job Ji,k released by task τi can be described by a 4-tuple of parameters Ji,k = (ri,k, ci,k, di,k, fi,k), where
(1) ri,k is the release time of job Ji,k.
(2) ci,k is the estimated execution time of Ji,k, with the constraint ci,k ≤ Ci(Li).
(3) di,k denotes the absolute deadline of Ji,k, satisfying di,k = ri,k + Di.
(4) fi,k denotes the finish time of job Ji,k. If Ji,k succeeds, it satisfies the constraint fi,k ≤ di,k.

The system starts in the LO-criticality mode and remains in this mode as long as all jobs finish their execution within their LO-criticality WCETs.

If any job does not complete its execution within its LO-criticality execution time Ci(LO), the system criticality mode rises, and HI-criticality tasks are executed with their HI-criticality WCETs Ci(HI).
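The mode-switch rule above can be sketched in a few lines of Python; the job representation is a hypothetical one for illustration:

```python
def system_mode(jobs, mode="LO"):
    """Raise the system to HI-criticality as soon as any job overruns
    its LO-criticality budget; otherwise keep the current mode.
    Jobs are dicts with 'exec' (time executed so far) and 'c_lo'
    (LO-mode budget); these field names are illustrative."""
    for job in jobs:
        if job["exec"] > job["c_lo"]:
            return "HI"
    return mode
```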

4.2. Assumptions of the Model

In the MCS model, the LO-criticality does not mean noncriticality, and these tasks should be executed to the extent possible.

Assumption 3. If the MCS switches from LO-criticality mode to HI-criticality mode, some of the LO-criticality tasks (and jobs) will not be dropped directly but are allowed to be scheduled later.

Assumption 4. Tasks are independent of each other; they only share a processor, but not any other resource, such as bandwidth or memory.

5. Schedulability Analysis

In this section, we will investigate the schedulability by analyzing the response time of the job.

For a job Ji,k released by task τi, its response time is denoted by Ri,k. Task τi’s response time is denoted as Ri, which equals the maximum response time over all jobs released by τi.

We test the schedulability of a job Ji,k by comparing its response time Ri,k with task τi’s relative deadline Di. If Ri,k ≤ Di, the job can be scheduled; otherwise, it cannot.

The job Ji,k’s response time includes two parts: its estimated execution time ci,k and the interference time caused by higher priority tasks.

When a task is allocated to a processor, suppose a job Ji,k released by τi at ri,k waits for the completion of higher priority jobs until its start time ts; its finish time is fi,k and its deadline is di,k. The execution constraint is illustrated in Figure 2.

Suppose t* is the time when the system mode is raised from LO-criticality to HI-criticality. When t* ≤ di,k, the discussion of Ji,k’s interference is as follows:
(1) If t* ≤ ts, the system has been raised to HI-criticality mode before the job starts.
(a) The system is initially in LO-criticality mode during the interval [ri,k, t*), and the interference time accrued in LO-criticality mode can be calculated as
I(LO) = Σ_{τj ∈ hp(i)} ⌈(t* − ri,k)/Tj⌉ · Cj(LO), (1)
where hp(i) indicates the tasks with higher priority than task τi.
(b) In the interval [t*, fi,k], the system has been raised to HI-criticality mode, and the job executes in HI-criticality mode; the interference time is
I(HI) = Σ_{τj ∈ hc(i)} ⌈(fi,k − t*)/Tj⌉ · Cj(HI), (2)
where hc(i) indicates the tasks with higher criticality than task τi.
(2) If ts < t* ≤ fi,k, the system criticality mode rises during the execution of the job.
(a) In the interval [ri,k, t*), the job executes in LO-criticality mode; the interference time equals I(LO) of Equation (1).
(b) In the interval [t*, fi,k], the job executes in HI-criticality mode; the interference time equals I(HI) of Equation (2).

In summary, the response time of Ji,k can be expressed as
Ri,k = ci,k + I(LO) + I(HI). (3)

Based on Equation (3), task τi’s response time Ri, determined over the jobs released by τi, is the maximum response time of its jobs and must satisfy Ri ≤ Di.
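As a point of reference, the classic fixed-point response-time iteration (a textbook sketch, not the paper’s exact Equations (1)–(3), since it ignores the mode-switch instant) can be written as:

```python
import math

def response_time(task, hp_tasks, mode="LO"):
    # R = C + sum over higher priority tasks of ceil(R / Tj) * Cj(mode),
    # iterated to a fixed point; tasks are dicts with 'wcet' per mode,
    # 'period', and 'deadline' (illustrative field names).
    c = task["wcet"][mode]
    r = c
    while True:
        interference = sum(math.ceil(r / t["period"]) * t["wcet"][mode]
                           for t in hp_tasks)
        r_next = c + interference
        if r_next == r:
            return r        # converged: response time found
        if r_next > task["deadline"]:
            return None     # exceeds the deadline: unschedulable
        r = r_next
```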

According to the discussion above, the pseudocode of the schedulability analysis algorithm is given as Algorithm 1. Its inputs are the unpartitioned task set T and the processor set P; its output is the partitioned task queue PT of the processors. At first, each processor has no task allocated (lines 1–3). The tasks are then sorted by priority in descending order (line 4). After this, the algorithm allocates the tasks to processors (lines 5–13) and finally returns the partitioned result (line 14). For each task τi in the queue (line 5, the outer for), starting from the first, the algorithm computes Ri (line 6) and tries to allocate τi to a processor pj to execute (line 7, the inner for). If τi can finish (Ri ≤ Di), it is inserted into processor pj’s ready queue and the algorithm moves on to the next task (lines 8–10).

Algorithm 1 contains a two-layer loop: the outer layer (line 5) iterates over the n tasks, and the inner layer (line 7) iterates over the m processors. The calculation of Ri in line 6, between the outer-layer and inner-layer loops, takes pseudopolynomial time due to the fixed-point iteration. Consequently, Algorithm 1 performs O(n·m) schedulability tests overall.

6. Priority Assignment of Task

In general, the priority of a task is the basis of schedule. This section mainly considers the criticality level and the value of task and proposes the priority assignment strategy.

6.1. Criticality and Value

According to Definition 1, a HI-criticality task τi’s value is related to its criticality through the criticality factor μ, satisfying Vi(HI) = μ·Vi(LO) with μ > 1.

Inputs:
 Task set to be partitioned T = {τ1, ..., τn};
 Processor set to be allocated P = {p1, ..., pm}.
Outputs:
 PT = {Que_Ready_p1, ..., Que_Ready_pm}.
 /* where Que_Ready_pj is the ready queue of tasks on pj */
1: for each pj in P do
2:  Set Que_Ready_pj = {};
3: end for
4: Sort T according to priority in descending order;
5: for each τi in T do
6:  Calculate Ri by Eq. (3);
7:  for each pj in P by descending order do
8:   if Ri ≤ Di then
9:    Add τi into Que_Ready_pj;
10:    break;
11:   end if
12:  end for
13: end for
14: return PT;
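Algorithm 1’s allocation loop amounts to first fit with a pluggable schedulability test; a minimal Python sketch, with `fits` standing in for the response-time check, is:

```python
def partition(tasks, num_procs, fits):
    # First-fit partitioning in the spirit of Algorithm 1. `tasks` are
    # assumed pre-sorted by descending priority; `fits(task, queue)` is a
    # caller-supplied schedulability test such as the Ri <= Di check.
    ready = [[] for _ in range(num_procs)]   # one ready queue per processor
    unplaced = []
    for task in tasks:
        for queue in ready:
            if fits(task, queue):
                queue.append(task)           # first processor that fits wins
                break
        else:
            unplaced.append(task)            # no processor could take it
    return ready, unplaced
```

For example, with utilization-only tasks and the test `sum(queue) + task <= 1.0`, the tasks 0.6, 0.5, 0.4 land on two processors as [[0.6, 0.4], [0.5]].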

In the existing strategies, the LO-criticality tasks are dropped when the system criticality mode upgrades. In LO-criticality system mode, the system total value is TV(LO) = Σ_{τi ∈ T} Vi(LO); in HI-criticality mode, it is TV(HI) = Σ_{τi ∈ TH} Vi(HI), where TH is the HI-criticality task set.

To compare the total values in HI-criticality mode and LO-criticality mode, TV(HI) and TV(LO), let
ΔTV = TV(HI) − TV(LO) = Σ_{τi ∈ TH} (μ − 1)·Vi(LO) − Σ_{τi ∈ TL} Vi(LO), (4)
where μ is the criticality factor of a task, which satisfies μ > 1, and TH and TL are the HI-criticality and LO-criticality task sets. ΔTV indicates the total value difference between the HI-criticality mode and the LO-criticality mode.

In Equation (4), the first term indicates the value difference of the HI-criticality tasks between HI-criticality mode and LO-criticality mode, and the second term represents the values obtained by the LO-criticality tasks.

If the difference is positive, the TV increases as the system criticality mode upgrades. In other words, the value difference of the HI-criticality tasks across criticality modes is larger than the LO-criticality tasks’ values at this time.

If the difference is not positive, the TV does not rise when the system criticality mode switches from LO-criticality to HI-criticality.
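The sign analysis above can be checked numerically; assuming Vi(HI) = μ·Vi(LO) for HI-criticality tasks, a small helper computes the difference:

```python
def tv_delta(hi_lo_values, lo_values, mu=1.5):
    # Total-value difference when all LO tasks are dropped at the switch:
    # the HI tasks' value gain minus the dropped LO tasks' values.
    # hi_lo_values are the LO-mode values of the HI-criticality tasks.
    gain = sum((mu - 1.0) * v for v in hi_lo_values)
    loss = sum(lo_values)
    return gain - loss
```

With hi_lo_values = [10, 20], lo_values = [5], and μ = 1.5, the difference is 15 − 5 = 10, so TV rises at the switch.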

6.2. Assignment of Task’s Priority

In MCS, a task’s priority should reflect its attributes, including criticality level, value, and deadline. When constructing the priority assignment function, we weight these factors as follows:
(1) All HI-criticality tasks should be executed first.
(2) Among tasks of the same criticality level, tasks with higher value are prioritized.

In different system criticality modes, the priority of a task τi is recorded as Pi(l) and given by Equation (5), in which the mode-dependent value and execution-time parameters vary as the system criticality mode changes.

Theorem 5. For any task τi, its priority satisfies Pi(HI) ≥ Pi(LO).

Proof. In a dual MCS with HI-criticality and LO-criticality modes, Ci(LO) ≤ Ci(HI) and Vi(LO) ≤ Vi(HI). Therefore, Pi(HI) ≥ Pi(LO).

Lemma 6. When the system is in HI-criticality mode, every HI-criticality task has a higher priority than every LO-criticality task.

7. Scheduling Algorithm

For the MCS under a homogeneous multiprocessor platform, we propose a smart semipartitioned scheduling strategy (SSPS).

7.1. Smart Semipartitioned Scheduling Strategy (SSPS)

SSPS includes both partitioned and global scheduling processes; the details are as follows:
(1) Task ordering. All tasks of T are sorted in descending order of the priorities calculated by Equation (5).
(2) Processor allocation. Each sorted task is allocated to a processor by the First-Fit Decreasing (FFD) method.
(3) Schedulability test. Each processor’s task subset is tested for schedulability by Algorithm 1.
(4) Task execution. This process includes executing the jobs released by the tasks and collecting the processors’ idle time for executing LO-criticality tasks.
(a) The tasks allocated to each processor execute by priority, and these tasks do not migrate. During this process, the slack times of each processor are collected and stored in the queue Que_Slack.
(b) When the system criticality mode upgrades, all unfinished LO-criticality jobs are sorted and managed globally, and their execution times are assigned from Que_Slack. In the high mode, we allow the execution of LO-criticality tasks but do not allow them to preempt HI-criticality tasks; i.e., the HI-criticality tasks incur no interference from tasks with a LO-criticality level.
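Step (4b) can be sketched as follows; ordering the LO jobs by value density and the (length, end time) fragment format are assumptions for illustration, and all field names are hypothetical:

```python
def migrate_lo_jobs(lo_jobs, slack_queue):
    # Pack unfinished LO-criticality jobs into collected slack fragments.
    # Jobs are dicts with 'exec', 'value', 'deadline'; slack_queue holds
    # (length, end_time) pairs and is updated in place.
    ordered = sorted(lo_jobs, key=lambda j: j["value"] / j["exec"],
                     reverse=True)            # highest value density first
    placed = []
    for job in ordered:
        for i, (length, end) in enumerate(slack_queue):
            # the fragment must be long enough and end before the deadline
            if job["exec"] <= length and end <= job["deadline"]:
                slack_queue[i] = (length - job["exec"], end)
                placed.append(job)
                break
    return placed
```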

Here, the queue Que_Slack is used to store feasible slack fragments sfi. Each sfi is represented in the form (qi, di), where qi is the length of the idle time and di is the end time of sfi. The slack time collection procedure is shown as Algorithm 2.

In Algorithm 2, the inputs are the executing job Ji,k and a slack fragment sfi of the processor; the output of the algorithm is the idle time queue Que_Slack. First, processor slack is collected only when the job finishes before its deadline (line 1). If the input fragment is null, a new fragment is built and inserted into Que_Slack (lines 2–6). Otherwise, the job’s execution time ci,k is compared with the fragment’s length qi; the former must not be larger than the latter to ensure the job’s execution. If the two are equal, sfi is removed from Que_Slack (lines 9–10); if ci,k is less than qi, sfi’s remaining length after completing Ji,k is updated in Que_Slack (lines 11–14). Finally, Que_Slack is returned (line 17).

Inputs:
 Executing job Ji,k;
 Slack fragment sfi.
Output:
 The queue for collected idle time Que_Slack.
1: if Ji,k finishes at t0 (t0 < di,k) then
2:  if sfi = null then
3:   q = (di,k − t0);
4:   d = di,k;
5:   sfi = (q, d);
6:   Insert sfi into Que_Slack;
7: else
8:  Get q, d from sfi;
9:  if ci,k = q then
10:   Remove sfi from Que_Slack;
11:  else if ci,k < q then
12:   Set q = q − ci,k;
13:   Update sfi to Que_Slack;
14:  end if
15: end if
16: end if
17: return Que_Slack;
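The consumption branch of Algorithm 2 (lines 8–14) can be sketched in Python; fragments are (length, end time) pairs as described in the text:

```python
def update_slack(job_exec, slack_queue):
    # Execute a LO job of length `job_exec` inside the first fragment
    # large enough to hold it: remove the fragment if it is fully used,
    # otherwise shrink it; return whether the job could be placed.
    for i, (length, end) in enumerate(slack_queue):
        if job_exec == length:
            del slack_queue[i]                        # lines 9-10
            return True
        if job_exec < length:
            slack_queue[i] = (length - job_exec, end)  # lines 11-13
            return True
    return False
```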

Example 1. A task set of 5 tasks, shown in Table 1, is partitioned onto two homogeneous processors.

The system is in LO-criticality mode at the initial time, and the task set is presorted using the DU method. In accordance with the FFD strategy, two of the tasks are assigned to processor p1, while the other three are assigned to processor p2; both processors meet the schedulability conditions. In LO-criticality system mode, the resource utilization of p1 is 83.3%, and the corresponding resource utilization of p2 is 78.3%. When the system upgrades to HI-criticality mode, all LO-criticality tasks are discarded; in this case, processor p1 obtains a resource utilization of 62.5% and the resource utilization of processor p2 is 50%.

Existing scheduling algorithms such as MC-PARTITION drop all LO-criticality tasks directly in HI-criticality system mode, even if there is idle time on the processors at some moments. To improve the task acceptance rate and processor utilization, the deadline and value attributes of a task can be considered together, and LO-criticality tasks can be dispatched globally into processor idle time when the system mode switches from LO-criticality into HI-criticality. A hyperperiod execution (here, 24 time units) for the task set of Example 1 is shown in Figure 3, in which the task set is presorted according to the priority function of Equation (5), with the same result as the DU method.

After the system criticality mode upgrades, we perform global scheduling for the LO-criticality tasks on the premise of guaranteeing the HI-criticality tasks and allocate idle time on processors p1 and p2 dynamically. This method achieves a high acceptance ratio and improves the utilization of both processors to 91.7% (11/12).

7.2. Analysis of SSPS

In the smart semipartitioned scheduling strategy (SSPS), a task’s schedulability is analyzed by Algorithm 1; tasks are assigned priorities by Equation (5) and allocated by first fit (FF). The queue Que_Slack collects slack time via Algorithm 2, and the unfinished LO-criticality jobs are collected into the queue Que_LO. The system selects slack fragments in Que_Slack to execute the jobs in Que_LO. Meanwhile, the ready queues store the prepared tasks. The pseudocode of SSPS is shown in Algorithm 3.

Inputs:
 Task set T and processor set P;
 Queue Que_LO for LO-criticality jobs;
 Queue Que_Slack for collected idle time;
 Initial system criticality mode SC = LO;
 Job counters Nsucc and Ntotal.
Output:
 Acceptance Ratio.
1: PT = schedulability analysis(T, P)
 /* PT = {Que_Ready_p1, ..., Que_Ready_pm}, where
 Que_Ready_pj is the tasks allocated to pj */
2: if PT ≠ null then
3: for each Que_Ready_pj in PT do
4:  Get the top element Ji,k out of Que_Ready_pj;
5:  Ntotal = Ntotal + 1;
6:  if Ri,k > Di then
7:   if Ji,k is HI-criticality and SC = LO
     then
8:    Abort all jobs of LO-criticality in PT;
9:    Insert the LO-criticality jobs into Que_LO
      in descending order;
10:   SC = HI;
11:   Execute Ji,k in HI-criticality;
12:   Nsucc = Nsucc + 1;
13:  else
14:   Abort Ji,k;
15:   continue;
16:  end if
17: else
18:  Execute Ji,k;
19:  Nsucc = Nsucc + 1;
20:  Que_Slack = slack time collection(Ji,k, sfi);
21:  continue;
22: end if
23: if SC = HI then
24:  for each sfi from Que_Slack do
25:   Get the top element Jl,k out of Que_LO;
26:   if cl,k ≤ qi then
27:    Execute Jl,k;
28:    Nsucc = Nsucc + 1;
29:    Que_Slack = slack time collection(Jl,k, sfi);
30:   else
31:    continue;
32:   end if
33:  end for
34: end if
35: end for
36: return Nsucc/Ntotal;
37: end if

In Algorithm 3, the inputs include the task set T, the processor set P, the queue Que_LO for LO-criticality jobs, the queue Que_Slack storing the processors’ idle time, the ready queues, the initial system mode (LO-criticality), the number of successful jobs Nsucc, and the total job number Ntotal. The output of Algorithm 3 is the acceptance ratio, which equals Nsucc/Ntotal. First, all tasks of T are analyzed and allocated to the ready queue of each processor (line 1). Then, the top job is fetched from each ready queue (lines 2–4) and its response time is compared with its deadline (line 6). If the job cannot finish in time, two cases are discussed: if it is a HI-criticality job and the system is in LO-criticality mode, the system mode switches from LO-criticality to HI-criticality according to the MCS definition, the LO-criticality jobs in each processor’s ready queue are aborted and inserted into the queue Que_LO, and the job is executed with its HI-criticality budget (lines 7–12); if it is a LO-criticality job, it is aborted and the loop continues (lines 13–15). Otherwise, the job can finish: it is executed, and the slack time after its completion is collected (lines 17–21).

During scheduling in HI-criticality system mode, the LO-criticality jobs are executed in the idle time of each processor (lines 23–34). For each slack fragment sfi in the queue Que_Slack, the top job of Que_LO is chosen for execution, and its execution time is compared with qi, the length of sfi (line 26). If the former is not larger than the latter, the job is executed and the slack remaining after its completion is collected (lines 27–29).

Algorithm 3 calls Algorithm 1 (line 1), which itself contains a two-layer loop. Algorithm 3 also contains another two-layer loop, where the outer layer (lines 3–35) iterates over the ready queues and the inner layer (lines 24–33) iterates over the slack fragments. Since the response-time calculation is pseudopolynomial, the time complexity of Algorithm 3 is of pseudopolynomial order.

For the SSPS algorithm, it is necessary to discuss the system value TV.
(1) In LO-criticality system mode, the system total value is TV(LO) = Σ_{τi ∈ T} Vi(LO).
(2) In HI-criticality system mode, the total value contains two parts: the value obtained by the HI-criticality task set TH, where Vi(HI) = μ·Vi(LO), and the value obtained by the finished LO-criticality task set, denoted TLF.

The total value can be described as
TV(HI) = Σ_{τi ∈ TH} Vi(HI) + Σ_{τi ∈ TLF} Vi(LO). (6)

Compared with the TV of the classic strategies, which gain no LO-criticality value in HI-criticality mode, the TV of SSPS in Equation (6) is obviously no smaller.

To discuss how TV changes with the system criticality mode, let ΔTV indicate the total value difference between the HI-criticality mode and the LO-criticality mode. If ΔTV > 0, the TV in HI-criticality mode is bigger; conversely, the TV in HI-criticality mode is smaller:
ΔTV = Σ_{τi ∈ TH} (μ − 1)·Vi(LO) − Σ_{τi ∈ TLD} Vi(LO), (7)
where μ is the criticality factor of a task (μ > 1) and TLD denotes the dropped LO-criticality tasks.

In Equation (7), the first term is the value difference of the HI-criticality tasks between HI-criticality mode and LO-criticality mode, and the second term is the value lost with the dropped LO-criticality tasks.

8. Simulations and Analysis

8.1. Simulation Experiments

Simulation experiments were performed to test the scheduling algorithm SSPS. All experiments were run on a PC with four identical 3.40 GHz processors and 8 GB of memory. In the simulation, we compared SSPS with the existing partitioned scheduling algorithms DC-RM [13] and MC-PARTITION [14], which are classic and representative algorithms in the MCS partitioned scheduling research community; many derived algorithms for other real-time application scenarios build on them [16–19]. The task set parameters of the experiments were randomly generated as follows:
(1) The utilization of each task was in the range [0.025, 0.975] and was generated by the UUniFast-Discard algorithm.
(2) The proportion of HI-criticality tasks in the task set, HTP, which directly affects the execution of LO-criticality tasks and hence the performance of our SSPS strategy, was 0.5 by default. The criticality factor μ was set to 1.5, according to Definition 1.
(3) Each task τi’s LO-criticality execution time Ci(LO) was randomly generated in the range from 1 to 10 under a uniform distribution, and its HI-criticality execution time satisfies Ci(LO) ≤ Ci(HI).
(4) The task τi’s relative deadline satisfies Di ≤ Ti, and its period Ti is derived from its generated utilization ui as Ti = Ci(LO)/ui.
(5) The task τi’s LO-criticality value Vi(LO) was generated randomly between 10 and 50, and its HI-criticality value was set to μ·Vi(LO).
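The UUniFast-Discard generation in item (1) can be sketched as follows; the discard loop re-draws until every utilization falls in the stated [0.025, 0.975] range:

```python
import random

def uunifast(n, total_u, u_min=0.025, u_max=0.975):
    # UUniFast draws n utilizations that sum exactly to total_u;
    # the Discard variant rejects vectors with out-of-range entries.
    while True:
        utils, remaining = [], total_u
        for i in range(1, n):
            next_u = remaining * random.random() ** (1.0 / (n - i))
            utils.append(remaining - next_u)
            remaining = next_u
        utils.append(remaining)
        if all(u_min <= u <= u_max for u in utils):
            return utils
```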

The performance indicators include the acceptance ratio (AR), the weighted schedulability (WS), and the total value (TV):
(1) AR = Nsucc/Ntotal, where Nsucc is the number of successful tasks and Ntotal is the size of the system task set. AR shows the proportion of successful tasks in the total task set.
(2) WS weights each schedulability result by utilization, so that WS reflects the total utilization of the tasks that are schedulable.
(3) TV = Σ_{τi ∈ TS} Vi, where TS is the set of successful tasks. TV represents the QoS of all successful tasks.
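The three indicators can be sketched together, assuming per-task dictionaries with 'u' (utilization) and 'value' fields; the WS convention here (acceptance weighted by utilization) is one common reading, not necessarily the paper’s exact formula:

```python
def metrics(finished, all_tasks):
    # AR: fraction of tasks that succeed; WS: utilization-weighted
    # acceptance; TV: summed value of the successful tasks.
    ar = len(finished) / len(all_tasks)
    total_u = sum(t["u"] for t in all_tasks)
    ws = sum(t["u"] for t in finished) / total_u if total_u else 0.0
    tv = sum(t["value"] for t in finished)
    return ar, ws, tv
```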

In order to measure the average number of job migrations, 100 trials of simulations with different tasks are conducted in the experiment.

8.2. Acceptance Ratio Analysis

Figure 4 demonstrates how AR changes with the tasks’ LO-criticality utilization under MC-PARTITION, DC-RM, and SSPS. All algorithms’ AR decreases significantly as the utilization grows from 0.3 to 1.0; DC-RM has the lowest AR, and SSPS obtains the best. When the utilization is below 0.5, SSPS is close to MC-PARTITION and DC-RM in AR. As the utilization grows beyond 0.6, SSPS begins to outperform the other two algorithms, because in HI-criticality mode SSPS executes the LO-criticality tasks selectively, improving the AR of the whole system, while the other two algorithms discard LO-criticality tasks directly, which leads to a sharp decline in AR.

Figure 5 illustrates how AR varies with the HI-criticality task proportion HTP, where the LO-criticality utilization is set to 0.6 and HTP grows from 0.3 to 1.0. As the results show, as HTP increases, the additional HI-criticality tasks require more execution time, so every algorithm’s AR continues to decline. When HTP is less than 0.5, the ARs of all algorithms are close because competition for execution is not intense in LO-criticality system mode. As HTP grows beyond 0.6, intense competition among tasks reduces AR, since some tasks cannot finish their execution and the system mode rises to HI-criticality.

Once the system mode upgrades to HI-criticality, tasks’ execution budgets become larger. Compared with MC-PARTITION and DC-RM, the SSPS algorithm achieves a more stable and higher AR, because the former two algorithms drop the LO-criticality tasks directly. When the system mode switches from LO-criticality to HI-criticality, SSPS executes some LO-criticality jobs in the idle time of the processors, although this advantage gradually decreases as HTP increases.

8.3. Schedulability Analysis

Figure 6 shows that the weighted schedulability WS declines as the LO-criticality utilization increases, with HTP set to 0.5. Compared with the MC-PARTITION and DC-RM algorithms, SSPS obtains a higher and more stable WS by scheduling LO-criticality tasks in HI-criticality system mode. In the beginning, the WS obtained by all algorithms falls steadily. When the utilization grows beyond 0.6, the WS of MC-PARTITION and DC-RM degrades faster due to the greater execution time required by the increased HI-criticality workload and the direct discarding of LO-criticality tasks in HI-criticality system mode.

Figure 7 plots how the weighted schedulability WS changes as HTP grows, with the LO-criticality utilization fixed. HTP is represented on the horizontal axis, and the vertical axis is WS. WS gradually declines as HTP increases, because as the number of HI-criticality tasks grows, so do the system resources they need. When HTP is below 0.3, SSPS is almost identical to the MC-PARTITION and DC-RM algorithms, all declining steadily in WS. As HTP grows larger and the system criticality level rises, the execution time of the HI-criticality tasks becomes longer, which intensifies competition among tasks and reduces system schedulability. SSPS obtains better WS than the other two methods because, in HI-criticality mode, it can exploit the slack time produced by HI-criticality tasks to execute selected LO-criticality tasks globally.

8.4. Total Value Analysis

Figure 8 shows the simulation results for the total value TV as the LO-criticality system utilization grows, where the horizontal axis is the utilization and the vertical axis is TV. As shown in Figure 8, all algorithms' TV decreases as the utilization increases from 0.3 to 0.9. SSPS has a significant advantage in TV over the other two algorithms, although its TV also gradually decreases as the utilization grows. This is because only SSPS chooses the tasks with high urgency and high value, thereby obtaining a better TV and improving the performance of the system.

Figure 9 plots the TV as HTP changes. The total value TV first rises and then falls as HTP grows from 0.1 to 0.9. In the beginning, the growing number of HI-criticality tasks brings larger values. But as HTP keeps increasing, the HI-criticality tasks' longer execution times reduce the system's WS, as shown in Figure 9, which in turn decreases the TV. SSPS obtains the best TV thanks to its choice of high-urgency and high-value tasks.

9. Conclusions and Future Works

In recent years, with the increasing popularity of 6G wireless communication technology, mixed-criticality systems (MCS) in 6G-based edge computing have grown quickly in application scenarios. Meanwhile, as multiprocessor platforms, including homogeneous ones, become widely applied, the corresponding MCS scheduling techniques need to be researched. In this paper, a smart semipartitioned scheduling algorithm (SSPS) was designed for MCS on homogeneous multiprocessors. Firstly, we analyzed each task's schedulability based on its response time and allocated the tasks to processors. Then, a task priority assignment function with multiple attributes, including criticality, urgency, and total value, was constructed. On this basis, the SSPS scheduling algorithm was proposed, combining the schedulability analysis and the priority assignment above. In SSPS, the tasks are allocated in LO-criticality mode, while in HI-criticality mode, SSPS not only finishes the HI-criticality tasks but also chooses LO-criticality tasks to execute globally in the processors' slack time. The experimental results illustrate that SSPS achieves the best performance among the compared algorithms.
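The multi-attribute priority assignment summarized above can be sketched as a scoring function over criticality, urgency, and value. The linear combination, the weights, and the task fields below are illustrative assumptions only, not the paper's actual formula:

```python
def priority(task, now, w_crit=10.0, w_urg=1.0, w_val=1.0):
    """Hypothetical multi-attribute priority: higher criticality,
    a tighter deadline (urgency), and a larger value all raise the
    score. Weights w_crit, w_urg, w_val are illustrative."""
    urgency = 1.0 / max(task["deadline"] - now, 1e-9)
    return w_crit * task["crit"] + w_urg * urgency + w_val * task["value"]

def pick_next(tasks, now):
    # Dispatch the ready task with the highest priority score.
    return max(tasks, key=lambda t: priority(t, now))

tasks = [
    {"name": "brake", "crit": 1, "deadline": 10.0, "value": 5},  # HI-criticality
    {"name": "media", "crit": 0, "deadline": 2.0,  "value": 3},  # LO, but urgent
]
print(pick_next(tasks, now=0.0)["name"])  # 'brake': criticality dominates
```

With a large criticality weight, HI-criticality tasks always outrank LO-criticality ones, matching the requirement that HI-criticality execution is guaranteed, while urgency and value break ties among tasks of the same criticality.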

However, the SSPS algorithm still has some limitations. In practical 6G-based edge computing applications, real-time task scheduling is often coupled with the sharing of limited resources. With the development of heterogeneous multiprocessors, heterogeneous platforms will become more common in 6G-based real-time applications. We will explore the scheduling and resource-sharing issues of edge computing on heterogeneous multiprocessors based on the SSPS algorithm. Besides, other complex real-time applications, such as parallel industry systems and smart industrial networks [31–33], need to consider several factors in data transmission and task scheduling; we are also planning to investigate these issues. Moreover, we notice that modern IoT devices are increasingly equipped with multiple network interfaces; our future work will consider applying the proposed SSPS algorithm to optimize the promising multipath parallel data transmission methods [34, 35] for the multihomed IoT environment.

Data Availability

The data used to support the findings of this study, including the tasks' properties and the performance indicators in the experiments, are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research is funded by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61762040, 61962026, and 62041702; the Natural Science Foundation of Jiangxi Province under Grant No. 20192ACBL21031; the Provincial Key Research and Development Program of Jiangxi Province (No. 20181ACE50029); the Science and Technology Research Project of Jiangxi Provincial Department of Education (Nos. GJJ170234 and GJJ160781); and the doctoral research project of Jiangxi Normal University (No. 12020361).