Abstract

In cyber-physical systems, sensor transactions should be effectively scheduled to maintain the temporal validity of real-time data objects. Previous studies on sensor transaction scheduling mainly focus on uniprocessor systems. In this paper, we study the problem of data quality-based scheduling of sensor transactions on multiprocessor platforms. The data quality is defined to describe the validity degree of real-time data objects. Two methods, named the Partitioned Scheduling for Quality Maximization (P-QM) and the improved P-QM scheduling (IP-QM), are proposed. P-QM maximizes the data quality by judiciously determining the preallocated computation time of each sensor transaction and assigns the transactions to different processors. IP-QM improves the data quality obtained from P-QM by adaptively executing transaction instances on each processor based on the current status of the system. It is demonstrated through experiments that IP-QM can provide higher data quality than P-QM under different system workloads.

1. Introduction

Cyber-physical systems (CPS) feature a tight combination of the computational and physical elements of the systems [1–3]. They are widely used in applications that need to process real-time data in a timely manner. Example applications include road traffic control and industrial process control [4–6]. Real-time data objects model the current status of entities in the system environment. Unlike traditional data objects, their values may become invalid with the passage of time. Associated with each real-time data object is a validity interval that specifies the lifetime of its current value. If this lifetime has not expired, the data object is temporally valid. Otherwise, it becomes invalid and a new data value needs to be installed.

In CPS, sensor transactions are generated to sample the status of external entities and update the corresponding data values. They should be effectively scheduled to maintain the temporal validity of real-time data objects. This scheduling problem involves two issues: determining the release time and the deadline of each transaction instance, and scheduling these instances. Various methods have been proposed to solve this problem. Examples include the More-Less scheme (ML), the deferrable scheduling algorithm with fixed priority (DS-FP), and related schemes [7–9]. These methods aim to reduce the update workload while providing a complete guarantee on temporal validity, i.e., guaranteeing that the data objects are valid all the time. Reducing the update workload allows a system to consume less energy. In addition, it leaves more processor resources to other types of transactions, such as user transactions and triggered transactions.

For many systems, providing completely guaranteed temporal validity for real-time data objects can be difficult. First, the user transactions in the system may have strict timeliness requirements. They compete with the sensor transactions for the same set of resources. To guarantee temporal validity, more resources must be given to sensor transactions, which can lead to a high deadline miss ratio for user transactions. Second, the system workload can be highly dynamic. The computation time of a transaction in the worst case can be much larger than that in the normal case, and the arrivals of transaction instances may be aperiodic and unpredictable. As a result, data validity violations may occur during system overloads. To tackle these problems, several quality of service- (QoS-) based methods have been proposed to schedule sensor transactions and user transactions so as to maintain the quality of real-time data objects and the quality of transactions [10–22].

Current methods for sensor transaction scheduling are mainly restricted to uniprocessor systems. In this paper, we study the problem of data quality-based scheduling of sensor transactions on multiprocessor platforms. We consider the partitioned scheduling approach. On each processor, the earliest deadline first (EDF) scheme is adopted. The major contributions of the paper are as follows:
(i) A definition of data quality is presented to describe the validity degree of real-time data objects. The definition considers the validity of individual data objects and the validity of correlated data object sets.
(ii) Two scheduling methods, named the Partitioned Scheduling for Quality Maximization (P-QM) and the improved P-QM scheduling (IP-QM), are proposed to maximize the data quality.
(iii) Experiments are conducted to evaluate the performance of the proposed methods. The results show that IP-QM outperforms P-QM in terms of the average quality of individual data objects and the average quality of correlated sets.

The rest of the paper is organized as follows. Section 2 reviews previous studies on temporal validity maintenance. Section 3 describes the system model. The definition of data quality is also presented in this section. The details of the P-QM method and the IP-QM method are presented in Section 4 and Section 5, respectively. Performance studies are given in Section 6. Finally, Section 7 concludes the paper.

2. Related Work

In recent years, there have been a number of studies on maintaining the temporal validity of real-time data. ML adopts the periodic task model and the deadline monotonic (DM) scheme [7]. In ML, a sensor transaction's period is set to be no shorter than its relative deadline, and the sum of the period and the deadline is equal to the validity interval. The Half-Half scheme (HH) can be viewed as a special case of ML [23]. DS-FP adopts a sporadic task model [8]. It reduces the processor workload by judiciously deferring the release times of transaction instances. Two extensions of DS-FP were presented in [24] to reduce the online scheduling overhead. The basic idea is to produce a hyperperiod of the DS-FP schedule and repeat the hyperperiod infinitely. A necessary and sufficient schedulability condition for DS-FP in discrete time systems was proposed in [25]. The problem of temporal validity maintenance under dynamic priority scheduling was first studied in [9]. Three algorithms were proposed to derive the periods and deadlines of sensor transactions under the earliest deadline first (EDF) scheme. A two-phase algorithm was proposed in [26] to reduce the search cost of period and deadline assignment under EDF. Li et al. [27] presented two methods to maintain temporal validity under EDF when transmission delays are considered. DS-FP was extended in [25] to a dynamic priority scheduling algorithm by applying EDF to schedule update instances. The problem of scheduling tasks with both maximum distance constraints (i.e., temporal validity constraints) and minimum distance constraints was investigated in [28]. Jha et al. [29] investigated how to maintain the mutual temporal consistency of real-time data objects. Han et al. studied the problem of maintaining temporal validity in the presence of mode changes [30]. Two algorithms were presented to search for proper mode switch points. These studies are all limited to uniprocessor systems.
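To make the HH and ML period/deadline rules concrete, the following is a minimal sketch based only on the relations stated above (period plus deadline equals the validity interval, with the period no shorter than the deadline). The deadline value passed to the More-Less rule is a hypothetical input; in practice it would come from a response-time analysis under DM, which is not reproduced here.

```python
# A minimal sketch of the Half-Half and More-Less assignment rules.
# Assumptions (not from the paper): deadlines are supplied by the caller.

def half_half(validity_interval):
    """Half-Half: period = deadline = half of the validity interval."""
    half = validity_interval / 2.0
    return half, half  # (period, deadline)

def more_less(validity_interval, deadline):
    """More-Less: period + deadline = validity interval, with period >= deadline.

    `deadline` would normally come from a response-time analysis under DM;
    here it is simply a caller-supplied value.
    """
    if deadline > validity_interval / 2.0:
        raise ValueError("deadline too large: period would be shorter than deadline")
    period = validity_interval - deadline
    return period, deadline

# Example: a data object with validity interval 100.
print(half_half(100))      # (50.0, 50.0)
print(more_less(100, 20))  # (80, 20): 'more' period, 'less' deadline
```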

In [31], Li et al. proposed several algorithms to partition a set of sensor transactions on multiprocessors under EDF and DM. The resource augmentation bounds of these algorithms were also derived. The global EDF algorithm and the half-half principle were applied in [32] to satisfy the data validity constraints. The energy-aware real-time data processing problem on multicore platforms was studied in [33]. Efficient techniques were proposed to maintain temporal validity while reducing energy cost.

The studies described above focus on providing a complete guarantee on temporal validity. The co-scheduling problem of periodic application transactions and update transactions was investigated in [10–12], where the aim is to meet the deadlines of all the application transactions while maximizing the quality of data objects. Algorithms based on fixed-priority scheduling, EDF, and DS-LALF were presented, respectively. In [13], a set of extensions of ML was proposed to achieve a trade-off between the QoS of temporal validity and the number of supported transactions. Labrinidis and Roussopoulos studied the problem of maintaining the freshness of views on a web server [14]. A quality-aware update scheduling algorithm was proposed based on the popularity of views. In [15], a QoS management architecture was proposed to support the desired QoS by applying feedback control, admission control, and flexible freshness management. On-demand schemes were proposed in [16] to skip updates with similar data object values. Amirijoo et al. employed the notion of imprecise computation for QoS specification and management [17]. Differentiated data service approaches for transaction classes with diverse importance and QoS requirements were proposed in [18]. An effective approach was presented in [19] to decrease both the deadline miss ratio and power consumption by real-time query aggregation and data freshness adaptation. All the above QoS-based methods are designed for uniprocessor systems.

In [20], a scheduling framework was presented to assign update jobs to multiple tracks and schedule them on each track, with the objective of minimizing the total staleness of data objects in a streaming warehouse. A track here represents a fraction of the computing resources. Bateni et al. proposed an update scheduling algorithm upon multiprocessors that has bounded stretch and weighted staleness under the quasiperiodic model [21]. In [22], Kang and Chung proposed a multiple inputs/multiple outputs (MIMO) feedback control method to support the timeliness of data-intensive tasks running on multicore-based embedded platforms. Although these studies also address data quality maintenance on multiprocessors, their definitions of data quality differ from ours. In [20, 21], the data quality is defined in terms of data staleness: a data value becomes stale after its generation, and the staleness increases linearly with the passage of time until a new value is installed. In [22], the data quality is defined as the ratio of the number of fresh data objects to the total number of data objects. In our work, however, a data value remains valid during its validity interval. The data quality is then defined based on the validity of individual data objects and the validity of correlated data object sets.

3. System Model

Let denote a set of real-time data objects and a set of sensor transactions. The data object is associated with a validity interval . Its value is updated by transaction . consists of a sequence of instances (update jobs), each of which samples a value of and installs it. The jth instance of is denoted as . We assume that the jitter between the sampling time and the release time of an instance is zero. is the probability density function that describes the distribution of the actual computation times of ’s instances. Transactions in are indexed in nondecreasing order of their validity intervals, i.e., for all , .

The transaction set is scheduled upon a multiprocessor platform . In general, there are three approaches to scheduling : the partitioned approach, the global approach, and the hybrid approach [34]. In the partitioned approach, each transaction is assigned to a processor and is always executed on it. In the global approach, an instance that has been preempted on one processor can resume its execution on a different processor. The hybrid approach is a combination of the partitioned and global approaches and can be further classified into the semipartitioned approach and the cluster-based approach. In this paper, the partitioned approach is adopted. On each processor, the EDF scheme is used to schedule the transaction instances.

Definition 1. is temporally valid at time t if, for its update job finished latest before t, the sampling time of this job plus is not less than t, i.e., [7].
According to Definition 1, if ’s current value is sampled at , it should be updated before . Otherwise, it will become invalid. However, when the actual computation time of the corresponding update job is large, the update may not be finished before due to the lack of processor resources. Consider a time period . Let denote the valid state of at time . is 1 if is valid at and is 0 otherwise. The quality of , , is defined as follows:

A correlated data object set is a set of real-time data objects whose values are used together to compute the corresponding derived data or to make decisions. It is often required that a certain percentage of the data objects in the correlated set are valid to produce the result with sufficient accuracy. Let denote the correlated sets in the system. For each , let denote the valid threshold of . If the number of data objects in that are valid at time is no less than , then is considered to be valid at . is set by users based on application requirements. The quality of , , is defined as follows:

The overall data quality of the system, , is defined as follows:

Sensor transactions should be effectively scheduled on to maximize . This scheduling problem consists of two subproblems. The first is to assign the sensor transactions to processors in a data quality-aware manner. Notice that, different from traditional real-time tasks, the periods and deadlines of sensor transactions are unknown; they must also be derived during the assignment. The second is to determine which instances can be executed in the system and how to execute them. Table 1 summarizes the major symbols that are used in the paper.
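To illustrate how these quality notions can be evaluated in practice, the following is a minimal sketch. It assumes (this is an illustrative choice, not the paper's exact formula) that quality is measured as the fraction of sampled time points at which the object, or the correlated set, is valid, with time sampled at discrete instants.

```python
# A minimal sketch of evaluating data quality from sampled valid-state flags.
# Assumption (not from the paper): quality = fraction of sampled time points
# at which the object (or the correlated set) is valid.

from typing import Dict, List, Sequence

def object_quality(valid_states: Sequence[int]) -> float:
    """valid_states[k] is 1 if the object is valid at sample k, else 0."""
    return sum(valid_states) / len(valid_states)

def correlated_set_quality(states: Dict[str, Sequence[int]],
                           members: List[str],
                           threshold: int) -> float:
    """The set is valid at a sample if at least `threshold` members are valid."""
    num_samples = len(next(iter(states.values())))
    valid_count = sum(
        1 for k in range(num_samples)
        if sum(states[m][k] for m in members) >= threshold
    )
    return valid_count / num_samples

# Example: two objects sampled at four time points.
states = {"x1": [1, 1, 0, 1], "x2": [1, 0, 1, 1]}
print(object_quality(states["x1"]))                     # 0.75
print(correlated_set_quality(states, ["x1", "x2"], 1))  # 1.0
```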

4. The P-QM Method

In this section, we present the P-QM method. Instead of maximizing directly, P-QM tries to maximize an approximated overall data quality. Let denote the preallocated computation times of transactions in . The actual computation time of instance is denoted as . The quality of with respect to , , is defined as the probability that is no larger than , i.e., Pr. For a correlated set , let denote the preallocated computation times of the corresponding transactions. The quality of with respect to , , is defined as follows:

In equation (4), . can be viewed as the probability that the number of data objects in that are valid at a time instant is no less than under . The overall data quality of the system with respect to , , is defined as follows:
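The sketch below shows one way these approximated quality values could be computed. It assumes (illustrative choices, not the paper's exact formulas) that actual computation times follow a normal distribution as in the experiment setup, that object validities are treated as independent, and that the set quality is the probability that at least a threshold number of members are valid, evaluated with a Poisson-binomial recursion.

```python
# A sketch of the approximated quality based on preallocated computation times.
# Assumptions (not from the paper): normally distributed computation times,
# independence across objects, Poisson-binomial recursion for the set quality.

from math import erf, sqrt
from typing import List

def object_quality(prealloc: float, mean: float, std: float) -> float:
    """Pr(actual computation time <= preallocated time) under a normal model."""
    return 0.5 * (1.0 + erf((prealloc - mean) / (std * sqrt(2.0))))

def set_quality(member_qualities: List[float], threshold: int) -> float:
    """Probability that at least `threshold` of the member objects are valid."""
    # dist[j] = probability that exactly j members are valid so far.
    dist = [1.0] + [0.0] * len(member_qualities)
    for q in member_qualities:
        for j in range(len(dist) - 1, 0, -1):
            dist[j] = dist[j] * (1.0 - q) + dist[j - 1] * q
        dist[0] *= (1.0 - q)
    return sum(dist[threshold:])

# Example: three objects whose preallocated times cover most of the demand.
qs = [object_quality(18, mean=15, std=3) for _ in range(3)]
print(qs[0], set_quality(qs, threshold=2))
```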

P-QM maximizes by judiciously determining and assigning the transactions to processors. A transaction assignment method under the given is presented at first. Then, based on this method, a greedy heuristic is presented to determine in order to maximize .

4.1. Assigning Transactions to Processors

In P-QM, sensor transactions are assigned to processors in a way that the temporal validity of data objects in is completely guaranteed under the given . Let and denote the period and the deadline of transaction , respectively. The assignment problem is described as follows.

Given the transaction set with and the multiprocessor platform , assign each to a processor in and derive and , such that the following constraints are satisfied:
(1) Validity constraints: 
(2) Deadline constraints: 
(3) Feasibility constraints: , the transactions assigned to with given preallocated computation times and derived deadlines and periods are feasible by using EDF scheduling

Notice that if the above constraints are all satisfied, then the temporal validity of data objects is guaranteed. This is because for each , its jth value sampled at is updated by instance which will finish before . This means a value is certain to be refreshed before its validity interval expires.

Next, we present the algorithm used in P-QM to solve the assignment problem. Let and denote the cumulative distribution and the density of with respect to , respectively. , . Let , , and . The following conditions are checked at first:

If equation (6) or equation (7) holds, the assignment mode is set to be the restricted mode. In this mode, transaction ’s deadline is restricted to be no larger than . Otherwise, it is set to be the unrestricted mode. In this mode, ’s deadline can be larger than but should be no larger than . Suppose the first i transactions have been successfully assigned to processors. Let denote the set of transactions that are assigned to processor . The utilization of is . If , the deadline of on is set to be . Otherwise, it is computed as follows:

The period of is set to be . is assigned to the first processor such that the following conditions are satisfied:(1) if the assignment mode is restricted mode, or if the assignment mode is unrestricted mode.(2).

The assignment fails if no such processor exists. This process is described in Algorithm 1. Notice that both the calculation of ’s deadline and the check of condition 2 can be carried out in an incremental way. Thus, the time required to assign to a processor is . The check of equations (6) and (7) takes time. Therefore, the time complexity of Algorithm 1 is . The following theorem shows the correctness of the algorithm.

Input: , ,
Output: The assigned processor of each transaction
1. assignment mode restricted if equation (6) or (7) holds. Otherwise assignment mode unrestricted;
2. fordo
3.  fordo
4.    if . Otherwise compute using equation (8);
5.   ;
6.   if condition 1 and condition 2 are satisfied then
7.    ;
8.    break;
9.   end if
10.  end for
11.  ifthen
12.    is infeasible on under , return failure;
13.  end if
14. end for
15. return success;
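To give a feel for the structure of Algorithm 1, the following is a simplified first-fit partitioning sketch. It does not reproduce equation (8) or conditions (6) and (7); instead, as stated assumptions, it derives a Half-Half style deadline and period (d = p = V/2) for every transaction and uses the sufficient EDF density test (the sum of e/d on a processor not exceeding 1) as the per-processor feasibility check.

```python
# A simplified first-fit partitioning sketch in the spirit of Algorithm 1.
# Assumptions (not from the paper): Half-Half style deadlines/periods and the
# sufficient EDF density test replace equation (8) and conditions (6)-(7).

from typing import List

def assign_transactions(validity: List[float],
                        prealloc: List[float],
                        num_procs: int) -> List[int]:
    """Return a processor index for each transaction, or raise on failure.

    Transactions are assumed to be indexed in nondecreasing order of their
    validity intervals, as in the paper.
    """
    density = [0.0] * num_procs            # accumulated e_i / d_i per processor
    assignment = []
    for v, e in zip(validity, prealloc):
        d = v / 2.0                        # Half-Half deadline (and period)
        for k in range(num_procs):         # first-fit over processors
            if density[k] + e / d <= 1.0:
                density[k] += e / d
                assignment.append(k)
                break
        else:
            raise RuntimeError("transaction set is infeasible under this sketch")
    return assignment

# Example: four transactions on two processors.
print(assign_transactions([100, 120, 150, 200], [20, 25, 30, 40], 2))  # [0, 0, 1, 1]
```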

Theorem 1. If Algorithm 1 succeeds, then the temporal validity of data objects in is completely guaranteed.

Proof. For each , when it is assigned to a processor, its deadline is set to be no less than and no larger than due to equation (8) and condition 1 of the algorithm. Its period is set to be . Thus, the validity constraints and the deadline constraints are satisfied. We only need to show the feasibility constraints are also satisfied.
The first transaction is assigned to processor with deadline and period . Obviously, it is EDF-schedulable. Suppose that the first transactions have been assigned to processors and that the transactions on each processor are EDF-schedulable. Consider that transaction is assigned to processor . The transaction sets on processors other than are not affected by ; thus, they are still EDF-schedulable. If is the first transaction assigned to , then obviously it is EDF-schedulable. Let denote the approximated demand bound of in . is if and is 0 otherwise. Notice that is larger than the deadlines of the transactions that are assigned to prior to . According to [35], is EDF-schedulable if and

Let denote the transactions in that are assigned to before . Based on equation (8), one has

Equation (10) means that the transaction set on is EDF-schedulable after is added. By induction, the feasibility constraints are satisfied when all transactions have been assigned to processors.

Theorem 2. Algorithm 1 succeeds if equation (6) or equation (7) holds.

Proof. First, we consider the case in which equation (6) holds and equation (7) does not hold. Suppose that in this case transaction fails to be assigned to any processor. Then, on each processor, one or both of the conditions of the algorithm are not satisfied. Let denote the set of processors on which condition 1 is not satisfied and denote the set of processors on which condition 1 is satisfied and condition 2 is not satisfied. . The transactions assigned to processors in and are denoted as and , respectively. For each , it must be

According to Theorem 1, equation (11) implies

Notice that, for each , . Summing over all processors in and making some transformations, we obtain

For each , it must be

Summing over all processors in and making some transformations, we obtain

If both and are larger than zero, then based on equations (13) and (15) and the definitions of and , we obtain

Since equation (7) does not hold, it must be

If , we also obtain equation (17). If , then , otherwise equation (15) does not hold. We then obtain

Both equations (17) and (18) contradict the assumption that equation (6) holds.
Then, we consider the case in which equation (7) holds. Suppose that the first transactions have been successfully assigned to processors. For each , it must be

Equation (19) means that the deadline of is not larger than ; thus, condition 1 of the algorithm holds. Condition 2 of the algorithm also holds since equation (7) implies . Therefore, each transaction can be assigned to a processor.
Notice that there may exist some transactions in with . Let denote the set of such transactions. If , or and , then is infeasible on since a processor can accommodate only one transaction from . If , then Theorem 2 can be applied to and .

4.2. Determining the Preallocated Computation Times

The problem of determining the preallocated computation times is formulated as an optimization problem:

subject to
(1) Feasibility constraint: the transactions in can be successfully assigned to processors under by Algorithm 1
(2) Computation time constraint: 

Theorem 3. Let denote the lth data object in and the preallocated computation time of . Given two sets of preallocated computation times and of , if for each l, , , then .

Proof. Let and denote the quality of with a valid threshold of j under and , respectively. For each i, , it must be
(1) For all , , and 
(2) 
(3) and if 

For , and ; therefore, . Suppose that for , holds for all , . Now, consider . For each , let , and . can be calculated as follows:

can also be calculated in the same way. By applying equation (21), we obtain

In equation (22), and due to (1) and (2), respectively. and are no smaller than 0 due to the assumption on and (3). Therefore, . By induction, we know that .
According to Theorem 3, will increase by increasing the preallocated computation times of some transactions. Based on this observation, P-QM adopts a greedy heuristic to determine . For each , the heuristic sets the initial value of to be its minimum computation time at the start. It then selects one transaction at a time to increase its preallocated computation time.
Let denote the increase in the overall data quality due to an increase of current by the step size of . is computed as follows:

In every step, transaction is selected to increase if .
To determine whether increasing by is feasible, the following condition is checked at first:

If equation (24) holds, then increasing by is infeasible. To see why, let us suppose increasing by is feasible; then, for each , there must exist at least one instance of finished in any interval with length , otherwise the data object will be invalid at some time points. This is also true for with . The total time required to execute the instances should not exceed the platform capacity. Thus, it must be

Dividing both sides of the negation of equation (25) by , we obtain equation (24).
Notice that if equation (24) holds, then cannot be selected to increase in the future. Equation (24) can be evaluated in constant time since the value of the sum term in it can be obtained in the previous step.
If equation (24) does not hold, the following condition is checked:

If equation (26) holds, then increasing by is feasible according to equation (6) and Theorem 2. Equation (26) can be checked in an incremental way; thus, the time required is . If equation (26) does not hold, Algorithm 1 is applied. If no transaction can be selected in a step or every transaction’s preallocated computation time has reached its maximum value, the algorithm terminates and returns the current . The selection process is described in Algorithm 2.
More sophisticated optimization algorithms, such as evolutionary algorithms and particle swarm optimization [36, 37], could be applied to our optimization problem to obtain better solutions. However, as the number and the worst-case computation times of sensor transactions can be large, these optimization algorithms may converge slowly. The use of these algorithms for our optimization problem is left as future work. In addition, as will be illustrated in the experiment section, by applying the adaptive instance execution method to the transactions with preallocated computation times obtained from the greedy heuristic, the data quality can be greatly improved.
After determining the preallocated computation times and assigning transactions to processors, the system will execute the transaction instances to update the corresponding real-time data objects. In P-QM, when instance arrives, if is no larger than , then is admitted, otherwise it is dropped. The admitted instances are scheduled using the EDF scheme.

Input: ,
Output: The preallocated computation time for each transaction
1.;
2. if Algorithm 1 fails for under then
3.   is infeasible on , return failure;
4. end if
5. while true do
6.  ;
7.  for each do
8.   if equation (24) holds for under and then
9.    ;
10.   else if equation (26) holds for under and then
11.    ;
12.   else if Algorithm 1 succeeds for under then
13.    ;
14.   end if
15.  end for
16.  ifthen
17.   The assignment is finished, return success;
18.  else
19.    ;
20.  end if
21.  ifthen
22.    ;
23.  end if
24. end while
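The following sketch captures the greedy selection structure of Algorithm 2. As stated assumptions, the hypothetical callbacks `quality_gain(i, e)` (the quality increase from growing transaction i's preallocated time by one step) and `is_feasible(e)` (standing in for the checks of equations (24) and (26) and Algorithm 1) are placeholders supplied by the caller; a transaction whose increase fails the feasibility check is treated here as permanently ineligible, which simplifies the paper's handling of equation (24).

```python
# A sketch of the greedy preallocation loop in Algorithm 2.
# Assumptions (not from the paper): quality_gain and is_feasible are
# caller-supplied placeholders for the real quality and feasibility tests.

from typing import Callable, List

def greedy_prealloc(e_min: List[float],
                    e_max: List[float],
                    step: float,
                    quality_gain: Callable[[int, List[float]], float],
                    is_feasible: Callable[[List[float]], bool]) -> List[float]:
    e = list(e_min)                             # start from the minimum times
    frozen = [False] * len(e)                   # transactions no longer eligible
    while True:
        best_i, best_gain = -1, 0.0
        for i in range(len(e)):
            if frozen[i] or e[i] + step > e_max[i]:
                continue
            trial = list(e)
            trial[i] += step
            if not is_feasible(trial):          # stand-in for eq. (24)/(26)/Alg. 1
                frozen[i] = True
                continue
            gain = quality_gain(i, e)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i < 0:                          # no eligible transaction left
            return e
        e[best_i] += step

# Toy usage with placeholder tests: a flat quality gain and a capacity bound.
e = greedy_prealloc(
    e_min=[5.0, 5.0], e_max=[10.0, 8.0], step=1.0,
    quality_gain=lambda i, e: 1.0,
    is_feasible=lambda e: sum(e) <= 16.0,
)
print(e)  # [10.0, 6.0]
```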

5. The IP-QM Method

IP-QM also uses Algorithms 1 and 2 to determine and assign transactions to processors. It further improves the data quality by executing transaction instances adaptively on each processor. Figure 1 shows the adaptive execution of instances on a processor.

As shown in Figure 1, when instance arrives at processor , the instance dropper determines whether is admitted or not. If ’s actual computation time is no larger than , then is admitted. Otherwise, the following condition is checked:

In equation (27), is the total remaining computation time of ’s instances with release times no larger than and deadlines no larger than . is the (absolute) deadline of , . is the average computation time of an instance of :

is the number of ’s instances with release times larger than and deadlines no larger than :

is admitted if equation (27) is satisfied and is dropped otherwise. The remaining computation times of instances are obtained from the scheduler. can be precomputed in stage one. can be computed in constant time. Thus, the time required to check equation (27) is . The check is only required when .

Two queues and are maintained for . The instances in each queue are arranged in nondecreasing order of their absolute deadlines. An admitted instance is put into if . Otherwise, is split into two instances: with computation time and with computation time . and are put into and , respectively. Both of them have the deadline . can only be executed after finishes. Each time, the scheduler selects the instance at the head of for execution. If is empty, the instance at the head of is selected. An instance is aborted if it cannot be finished before its deadline.
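The sketch below illustrates this two-queue bookkeeping. As stated assumptions, instances are plain records, admission (equation (27)) and deadline-based aborts are handled elsewhere, and the constraint that the overflow part may only run after its mandatory part finishes is not modelled.

```python
# A sketch of the two-queue split-and-select mechanism described above.
# Assumptions (not from the paper): admission and aborts handled elsewhere;
# the "overflow part runs only after the mandatory part" rule is not enforced.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Instance:
    deadline: float
    remaining: float = field(compare=False)
    transaction: int = field(compare=False)

class TwoQueueScheduler:
    def __init__(self):
        self.q1 = []   # mandatory parts, ordered by absolute deadline
        self.q2 = []   # overflow parts, ordered by absolute deadline

    def admit(self, transaction, deadline, actual, prealloc):
        if actual <= prealloc:
            heapq.heappush(self.q1, Instance(deadline, actual, transaction))
        else:
            # Split: the preallocated share goes to q1, the remainder to q2.
            heapq.heappush(self.q1, Instance(deadline, prealloc, transaction))
            heapq.heappush(self.q2, Instance(deadline, actual - prealloc, transaction))

    def pick_next(self):
        """EDF within q1; q2 is only served when q1 is empty."""
        if self.q1:
            return heapq.heappop(self.q1)
        if self.q2:
            return heapq.heappop(self.q2)
        return None

sched = TwoQueueScheduler()
sched.admit(transaction=1, deadline=50.0, actual=12.0, prealloc=8.0)
sched.admit(transaction=2, deadline=30.0, actual=5.0, prealloc=6.0)
print(sched.pick_next())   # transaction 2's instance: earliest deadline in q1
```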

In addition to admitting arriving instances, the instance dropper is also used to drop some instances that are already in the system. Let denote the computation time of the finished part of at time instant t. The following rules are used for dropping:
(1) When arrives, if is not finished and , then is dropped. The deadline of is set to be .
(2) When arrives, if ’s latest two instances and are not finished, has been finished, and , then is dropped. The deadline of is set to be . If , then is removed from and put into with deadline .

For rule 2, if more than one transaction satisfies the dropping condition, the transaction with the largest skippable computation time, i.e., , is selected for dropping. If , will not be considered as a dropping candidate in the future. The times required to evaluate the conditions of rule 1 and rule 2 are and , respectively.

The instance dropping does not affect the validity of data objects. Consider rule 1. If is dropped, then is certain to be finished before . The corresponding data object is valid in the time interval . is the finish time of . If is not dropped and finished before , then is valid in . is the minimum of the finish time of and . Otherwise, is valid in . is the finish time of . It can be seen that and are no smaller than since the remaining part of may be executed and is no larger than . Consider rule 2. is valid from the finish time of to . If is dropped, then the unused computation time of , i.e., , will be used by . Thus, is certain to be finished before and is valid in . If is not dropped, then and may not be finished before their deadlines if their actual computation times are larger than ; thus, may be invalid in some intervals in .

6. Experimental Evaluation

This section presents the results obtained from performance studies on the proposed methods.

The performance metrics used in the experiments are the average quality of individual data objects (ADQ_IND), the average quality of correlated sets (ADQ_COR), and the average update workload (AUW). The definitions of ADQ_IND and ADQ_COR are given by equations (1) and (2). Let denote the simulation time and the total time that processor is busy executing transaction instances:

The parameters and default settings used in the experiments are presented in Table 2. The validity interval of a data object is uniformly distributed in [2000, 4000]. The computation time of a sensor transaction is generated following the normal distribution with mean computation time and standard deviation given in Table 2. itself is uniformly selected in [10, 20]. The number of data objects in a correlated set is uniformly distributed in [2, 8]. These objects are randomly selected from the data object set. The threshold of a correlated set is set to be , where indicates the percentage of data objects in that are permitted to be invalid at a time point.
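A workload generator following these settings could look like the sketch below. As stated assumptions, the standard deviation of the computation times and the rounding used for the correlated-set threshold are illustrative choices; Table 2's exact values are not reproduced here.

```python
# A sketch of the experimental workload generation described above.
# Assumptions (not from the paper): illustrative std value and ceil-rounded
# threshold; Table 2's exact settings are not reproduced.

import math
import random

def generate_workload(num_transactions, num_sets, invalid_ratio=0.4, std=3.0):
    transactions = []
    for _ in range(num_transactions):
        validity = random.uniform(2000, 4000)          # validity interval
        mean_c = random.uniform(10, 20)                # mean computation time
        transactions.append({"validity": validity, "mean_c": mean_c, "std": std})

    correlated_sets = []
    for _ in range(num_sets):
        size = random.randint(2, 8)                    # objects per correlated set
        members = random.sample(range(num_transactions), size)
        threshold = math.ceil(size * (1.0 - invalid_ratio))
        correlated_sets.append({"members": members, "threshold": threshold})
    return transactions, correlated_sets

def sample_computation_time(tx):
    """Draw one actual computation time from the transaction's normal model."""
    return max(0.0, random.gauss(tx["mean_c"], tx["std"]))

txs, corr_sets = generate_workload(num_transactions=100, num_sets=20)
print(txs[0], corr_sets[0], sample_computation_time(txs[0]))
```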

Figure 2 shows the average quality of individual data objects when the number of sensor transactions varies from 100 to 200. The number of processors is 2. is set to be 0.4. When the number of transactions is no larger than 120, the ADQ_IND of P-QM and IP-QM are both equal to 1. Both methods’ ADQ_IND decrease as the number of transactions increases. However, the ADQ_IND of IP-QM remains higher than that of P-QM. This is because, for an incoming instance with an actual computation time larger than the preallocated computation time, IP-QM will accept it for execution if equation (27) is satisfied, while P-QM always drops it. In addition, IP-QM will drop some instances that are already in the system. As discussed in Section 5, this kind of dropping does not affect the validity of data objects, yet it leaves more processor resources to instances with long execution times. Notice that the difference in data quality between the two methods grows as the number of transactions increases. When the number reaches 200, the difference reaches about 0.37.

Figure 3 shows the average quality of correlated sets. It can be seen that the ADQ_COR of IP-QM is consistently higher than that of P-QM. For example, when the number of transactions is 180, the ADQ_COR of IP-QM is about 0.98, while that of P-QM is about 0.87. Both methods’ ADQ_COR decrease with the increase in the number of transactions; however, IP-QM’s ADQ_COR drops much more slowly than P-QM’s. One observation from Figures 2 and 3 is that the ADQ_COR is higher than the ADQ_IND under both methods. The reason is that the threshold of a correlated set is not large, which leads to a high probability that the correlated set is valid at a time point even when some of the individual data objects in it are not. In addition, since the data objects in a correlated set are selected randomly from the whole data object set, their average valid ratio is very close to the average valid ratio of all data objects.

Figure 4 shows the average update workload. It can be seen that when the number of sensor transactions is larger than 120, the of IP-QM is higher than that of P-QM. In addition, the of IP-QM always goes up as the number of transactions increases, while the of P-QM goes up before the number reaches 140 and drops slowly after that. This is because in P-QM, when there are a large number of transactions in the system, their preallocated computation times are decreased in order to accommodate them. This leads to more rejected instances, which cancels out the increase in the system workload. In IP-QM, however, many of the instances rejected by P-QM can be accepted and finished due to the admission and dropping rules. These extra instances contribute to the higher average quality of individual data objects and correlated sets.

Figures 5 and 6 show the average quality of individual data objects and correlated sets under different settings of , respectively. The number of sensor transactions is fixed at 200. It can be seen that the ADQ_IND and the ADQ_COR of IP-QM are higher than those of P-QM. For example, when is 0.3, the ADQ_IND and the ADQ_COR of IP-QM are about 0.87 and 0.92, while those of P-QM are about 0.53 and 0.58. There is little change in the ADQ_IND of either method since the average system workload under different values of is very close when the number of transactions is fixed. Meanwhile, both methods’ ADQ_COR increase as the value of increases. This is because a larger means that fewer data objects in a correlated set are required to be valid; thus, a correlated set is more likely to be valid at an access time under both methods.

Experiments were repeated for systems with 4 processors. Figures 7 and 8 show partial results. The number of transactions varies from 220 to 320. is set to be 0.4. It can be seen that IP-QM still outperforms P-QM in terms of ADQ_IND and ADQ_COR. In addition, the performance degradation of both methods becomes much slower when more processor resources are available.

7. Conclusions

This paper studies the problem of data quality-based scheduling of sensor transactions on multiprocessor platforms. A definition of data quality is given to describe the validity degree of real-time data objects. Two methods, P-QM and IP-QM, are proposed. P-QM maximizes the data quality by judiciously determining the preallocated computation times of sensor transactions and assigning the transactions to processors. IP-QM improves the data quality obtained from P-QM by adaptively executing transaction instances on each processor. Experiment results show that IP-QM outperforms P-QM in terms of the average quality of individual data objects and the average quality of correlated sets. In this work, the partitioned approach is adopted for transaction scheduling. In the future, we will study how to use the global approach and the hybrid approach to schedule the transactions so that the real-time data quality can be effectively maintained.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the Hunan Provincial Natural Science Foundation of China under Grant no. 2020JJ4032.