Abstract
With the rapid development of network technology, the traditional defense of “mending the fold after the sheep have been stolen” cannot accurately prevent the various potential threats and attacks in cyberspace. In contrast, cyberspace mimic defense (CMD) makes the system uncertain and dynamic in time and space so as to defend effectively against potential attacks. As a key technology of CMD, the scheduling algorithm still needs improvement in reliability and active defense. Aiming at these problems, this paper first proposes a new heterogeneity measure algorithm, HVTG, based on a vulnerability topology graph, which measures the heterogeneity of an executor set in a fine-grained manner. Then, based on the historical confidence, heterogeneity, and minimum sleep time of each executor, we propose an adaptive multi-executor scheduling algorithm (HHAC) to better defend against various attacks. Finally, combining the Analytic Hierarchy Process and Fuzzy Comprehensive Evaluation, this research proposes a comprehensive evaluation model, filling the gap in the evaluation of scheduling algorithms. Theoretical analysis and simulation results show that HHAC performs well on system dynamics, probability of system failure, and reliability, which is conducive to the development of CMD.
1. Introduction
With the rapid development of Internet technology, today’s world has entered the era of the “Internet of everything”, and the network has spread to every corner of people’s lives. However, there are always uncertain threats in cyberspace, such as unknown vulnerabilities and backdoors. According to research statistics, programmers unconsciously introduce a vulnerability roughly every 1000–1500 lines of code [1]. By paying a small price, malicious attackers can infringe on the privacy of individuals and even society and cause large-scale network downtime or failure. In view of the current asymmetric situation in cyberspace, which is easy to attack and difficult to defend, new ideas for active network defense are urgently needed. Wu [2] proposes the theory of cyberspace mimic defense (CMD), which enhances general robustness and endogenous security by introducing dynamism, heterogeneity, redundancy, and closed-loop feedback characteristics into the system [3–6]. The details will be introduced in Section 3.
Practice has proven that MD greatly increases the difficulty of attacks; moreover, MD has been successfully applied to routers [7], switches [8], and so on. The realization mechanism of MD mainly includes structural effect and functional fusion, strategy scheduling, mimic ruling, and closed-loop feedback control, which makes the mimic system naturally uncertain in both time and space. MD is expected to fundamentally resolve the current cyberspace dilemma of “easy to attack, hard to defend.”
In mimic defense, the scheduling algorithm (SA) [9] is the key to realizing the high security of the mimic system. It is responsible for building and scheduling online executors according to historical performance and feedback information, realizing unpredictability inside the mimic bracket. At the same time, heterogeneity is a vital consideration in the scheduling algorithm; it represents the difference in structure and function between different executors. The greater the heterogeneity, the more difficult it is to attack two executors at the same time. Accordingly, this article proposes an adaptive multi-executor scheduling algorithm in CMD, which performs multi-executor selection and replacement dynamically according to the changing cyberspace environment. However, there is little research on scheduling algorithms: some proposed scheduling algorithms are too random, regardless of cost, or lack dynamism [10]; most existing scheduling algorithms lack fine-grained research on executor similarity and high-order heterogeneity [11]; and some depend mainly on the feedback mechanism and cannot meet reliability requirements [12]. In addition, there is no research on a unified evaluation model for SAs. Aiming at the shortcomings of existing scheduling algorithms, the main research work of this paper is as follows:
(i) We first summarize the related research on MD and SA. Then, based on an automaton model [13], we illustrate the significance of the SA by simulating the state transition process in the mimic system.
(ii) Combining the high-order common vulnerabilities among executors with graph theory [14], we propose a new heterogeneity measure algorithm based on a vulnerability topology graph (HVTG), which can effectively measure the similarity between executors and executor sets.
(iii) Considering the historical confidence of each executor, the high-order heterogeneity of the executor set, and the minimum sleep time of each executor, this paper proposes a multi-executor adaptive scheduling algorithm (HHAC). The algorithm adaptively performs online executor transformation according to historical performance and the current network environment. Simulation experiments show that HHAC achieves good scheduling overhead and system endogenous security.
(iv) As unified evaluation criteria for SAs are still lacking, we introduce the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE) for the comprehensive evaluation of scheduling algorithms. By proposing a general comprehensive evaluation index and model, we achieve a good comprehensive evaluation effect.
The structure of this paper is organized as follows: Section 2 introduces existing research on mimic scheduling algorithms; Section 3 presents a brief overview of MD and SA; Section 4 proposes a heterogeneity quantification algorithm that considers high-order common vulnerabilities; Section 5 proposes a multi-executor scheduling algorithm based on high-order heterogeneity and historical confidence; Section 6 proposes a comprehensive evaluation method based on AHP and FCE; results are provided and discussed in Section 7; in the last section, conclusions are drawn and the scope of future work is discussed.
2. Related Work
As a key part of MD, the SA schedules online executors according to their historical performance and feedback information. It makes the characteristics inside the mimic brackets diverse and unpredictable by rolling executors online/offline or transferring their services. From the perspective of executor scheduling strategies, this section divides them roughly into three categories: object-based scheduling, time-based scheduling, and quantity-based scheduling. Each category is briefly analyzed and evaluated below.
2.1. Object-Based Scheduling
The mimic architecture requires online heterogeneous executors to have equivalent functions and different structures. The greater the dissimilarity of the online executors, the lower the possibility of common vulnerabilities, and the smaller the probability of a successful attack via a shared vulnerability. Reference [15] proposes the maximum dissimilarity distance component selection (MD) and the optimal mean dissimilarity distance component selection (OMD) algorithms, which select the executor set based on the longest dissimilarity distance and the best average dissimilarity distance, respectively; however, the system distance threshold is set relatively high, and the executors lack dynamism. Lu et al. [16] propose an inverse feedback scheduling algorithm based on historical information that performs optimal scheduling according to different attack types, but the heterogeneity between executors is not studied. Liu et al. [17] use the similarity between executor components to measure heterogeneity and propose a random seed minimum similarity algorithm (RSMS) that selects the executor set with minimum overall similarity; however, executor historical confidence is not considered, and the dynamism needs further study when the number of executors is small. Zhang et al. [18] take executor complexity and dissimilarity into consideration, use quadratic entropy to quantify the dissimilarity of executors, and propose a random seed scheduling algorithm (RSMHQ) based on maximum heterogeneity and Web service quality, which achieves a better balance between system security and service quality; still, the security and service quality weights need to be continuously tuned for different environments. Pu et al. [19] measure the similarity of executors in time and space.
Based on the common vulnerabilities between executors, Pu proposes a pool scheduling algorithm based on priority and time slice (PSPT), which achieves good dynamism and time complexity. Reference [20] proposes a random seed scheduling algorithm (RSMHQH) based on executor heterogeneity, performance, and historical confidence, which achieves better performance and comprehensive indexes. However, the selection of the seed executor is too random, which gives the attacker a greater chance of attacking successfully.
2.2. Time-Based Scheduling
Time-based scheduling selects an optimal time for replacing the online executor set. Lu et al. [21] describe a closed-loop controller scheduling security process and model the timing problem of dynamic scheduling as a renewal process in stochastic theory. According to the scheduling cost, attack loss, and attack distribution function, [21] then proposes an optimal scheduling algorithm (OSA) to calculate the optimal scheduling time. The OSA solves the timing selection of executors to a certain degree but lacks a safe way to select executors for replacement. Considering the abnormal threshold and the scheduling period, Guo [22] proposes a scheduling sequence control method based on the sliding window model, which can improve the overall security, efficiency, and robustness of the system by adjusting the scheduling parameters in different scenarios. However, the algorithm only considers single-executor replacement and random selection.
2.3. Quantity-Based Scheduling
On the basis of security, security gain, and cost, Wei et al. [23] comprehensively analyze three-mode redundancy to obtain the best trade-off between safety and cost. Combining the online redundancy of executors with system dynamics, Qi et al. [24] propose a feedback dynamic-aware scheduling algorithm that determines the number of online executors at the next moment according to the number of controller failures the system must tolerate. The algorithm reduces the probability of system failure at a small cost but needs reduced complexity. Li [25] proposes a utility-based dynamic elastic scheduling strategy to determine the next online executor redundancy according to the current network environment. Similarly, based on the feedback results and system dynamics, Gao et al. [26] propose a scheduling algorithm that balances cost and security by increasing or decreasing the online executor redundancy according to the network environment; however, it lacks research on the specific transformation time and on updating the subsequent ruling algorithm of the executors.
Moreover, in other application fields, Kavin et al. [27] propose a new task scheduling algorithm combining incremental PSO and a search algorithm in a heterogeneous cloud environment; experiments in the Aneka framework show that the algorithm outperforms existing task scheduling approaches. Gunasekaran [28] proposes a new start-up model based on temporal constraints, scaling back the planning policy to reduce the waiting and begin times of scaled-back tasks within the Hadoop platform. To optimize power and performance in public cloud networks, Muthurajkumar et al. [29] propose the Energy Efficient and Optimal Scheduling Algorithm (EEOSA), which applies temporal constraints and rules and achieves effective storage and retrieval of cloud data.
To sum up, the SA has specific applications in various fields; this paper mainly focuses on the application of SA in CMD. In terms of scheduling object, scheduling time, and scheduling quantity, current SA research has certain feasibility and application scope and significantly improves the security of the mimic system. However, as the key technology of CMD, there is still a lack of fine-grained, standard metrics for the heterogeneity between executors and of in-depth research on flexible, adaptive scheduling algorithms. Simultaneously, there is still no standard and effective evaluation for SAs. Therefore, taking both system reliability and availability into account, an algorithm is needed that achieves high-quality security gains without compromising, and even improving, the original service quality. In view of these problems, this paper proposes a new algorithm, HVTG, to measure the heterogeneity between executors in a fine-grained manner; in particular, we consider high-order common vulnerabilities and the vulnerability topology graph, which have not been studied before. Then, by introducing the survival time and historical confidence of executors into adaptive multi-executor scheduling, HHAC achieves high reliability of the online executor set. To the best of our knowledge, we are the first to consider a sleep threshold when selecting an online executor. Moreover, a comprehensive AHP-FCE model is proposed to evaluate different SAs, providing a new evaluation criterion for researchers. These contributions are described in detail below.
3. Related Technology of Mimic Defense
3.1. Mimic Defense Architecture
Based on cyberspace active defense, Wu proposes the theory of MD, as shown in Figure 1 [9]. By introducing dynamism, heterogeneity, redundancy, arbitration, and closed-loop feedback mechanisms into the system, the MD system can adjust its internal structure according to the arbitration information while retaining the inherent uncertainty of the attack surface [30]. Thus, the mimic system presents dynamic, generalized uncertainty to the outside and greatly increases the difficulty of attacks.
As shown in Figure 1, the strategy distribution module of MD distributes copies of the external input to a heterogeneous executor set; the executors then execute the task and output their results to the multimode ruling module. The ruling module chooses among the output results according to the ruling algorithm and finally outputs the result. At the same time, the ruling module feeds back the state information of the executors to the scheduling module. Ultimately, the scheduling module decides whether to perform executor scheduling according to the feedback information and the current cyberspace environment.
The purpose of the SA is to make attackers unable to track the internal changes of the mimic system through the nonlinear transformation of its redundant parts, realizing uncertainty and high security in time and space. This paper summarizes the SA as the following three steps:
(1) According to the feedback information, the SA determines the online executor redundancy and scheduling time and simultaneously selects executors to go online;
(2) It replaces abnormal online executors from time to time and determines the online executor transformation threshold;
(3) It outputs the ruling result and determines the next transformation time and redundancy according to the feedback information.
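The replacement step above can be sketched as a simplified scheduling loop. This is a minimal illustration, not the paper's implementation: the executor representation, the `confidence` field, and the `select_best` strategy are all assumed names.

```python
def scheduling_cycle(online, pool, threshold, select_best):
    """One simplified scheduling cycle (step 2 of the summary above).

    `online` and `pool` are lists of executor dicts; `select_best` picks a
    replacement from the offline pool. All names here are illustrative
    assumptions, not the paper's implementation.
    """
    for i, executor in enumerate(list(online)):
        if executor["confidence"] < threshold:
            pool.append(executor)                  # send offline for cleaning
            replacement = select_best(pool, online)
            pool.remove(replacement)
            online[i] = replacement                # roll the replacement online
    # Steps 1 and 3 (redundancy and timing) are driven by feedback and
    # are omitted here for brevity.
    return online
```

In a real mimic system, `select_best` would embody the heterogeneity and confidence criteria developed in Sections 4 and 5.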
3.2. Importance of SA
In this section, we use a nondeterministic finite automata model to illustrate the importance of the SA in MD. Automata theory is widely used in natural language processing, pattern recognition, automatic control, and other fields. Drawing on its powerful yet simple state transition function, this section describes the state transitions in MD through automata theory and further explains the importance of the SA in MD. Since the online executors can be considered independent and heterogeneous, we assume that the attacker can successfully probe and attack only a single executor within one scheduling cycle. In addition, only one executor can be replaced within a scheduling cycle to ensure high service quality of the system.
To briefly describe the role of the SA in MD, this section assumes that the online executor redundancy is 3. As shown in Figure 2, the state transition process forms a nondeterministic finite automata model (NFA).
At the same time, the NFA can be expressed as the quintuple M = (Q, Σ, δ, q0, F) (see Table 1), where: (1) Q is the finite set of system states; (2) Σ is the set of input symbols; (3) q0 is the initial state; (4) F indicates the attacked-success states; (5) δ is the transfer function:
For state , there are:
It can be seen from (1) that as this probability grows, the probability of the system transitioning to the attacked-success state shrinks, and the system is more secure and difficult to attack.
For state , there are:
When the system is in an attacked-success state, the greater this probability, the greater the probability of the system transitioning from the attacked-success state back to a normal state. From the NFA above, we know that the greater the probability that the SA schedules out the compromised executor, the more the system tends to return to a normal state, and the more secure the system is. Through the nonlinear transformation of the mimic system, the attacker can neither grasp the internal changes of the system nor successfully attack the bulk of the executors. In summary, the SA is crucial in MD.
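To make the NFA argument concrete, the following Monte-Carlo sketch simulates an assumed, simplified version of the model: three online executors, one attack attempt and one scheduling action per cycle, and majority ruling. The probabilities `p_attack` and `p_correct_schedule` are illustrative parameters, not values from the paper.

```python
import random

def simulate(p_attack, p_correct_schedule, cycles=10000, seed=0):
    """Monte-Carlo sketch of the NFA state transitions (assumed model).

    The state counts how many of the 3 online executors are compromised.
    Each cycle the attacker breaches one more executor with probability
    p_attack; the scheduler then cleans a compromised executor with
    probability p_correct_schedule. An escape occurs whenever 2 or more
    of the 3 executors are compromised (majority ruling fails).
    Returns the fraction of cycles with an escape.
    """
    rng = random.Random(seed)
    compromised, escapes = 0, 0
    for _ in range(cycles):
        if compromised < 3 and rng.random() < p_attack:
            compromised += 1                 # one executor is breached
        if compromised >= 2:
            escapes += 1                     # wrong majority output this cycle
        if compromised > 0 and rng.random() < p_correct_schedule:
            compromised -= 1                 # compromised executor cleaned
    return escapes / cycles
```

Running the sketch with a high versus low scheduling probability shows the trend argued above: the more reliably the SA replaces compromised executors, the rarer attack escapes become.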
4. Heterogeneous Measure
4.1. Threat of High-Order Common Vulnerabilities
Existing scheduling algorithms mostly calculate the heterogeneity between two executors from their common vulnerabilities and calculate the heterogeneity of an executor set by accumulating the heterogeneity between each pair of executors, which neglects high-order common vulnerabilities. Zhang [31] proposes the concept of high-order common vulnerabilities as follows:
Definition 1. (High-Order Common Vulnerabilities) When vulnerabilities in different executors can achieve the same attack effect and the number of executors meeting this condition is m, we define them as m-order common vulnerabilities. In addition, when m ≥ 2, we call them high-order common vulnerabilities.
As shown in Figure 3, assume that the online executor redundancy is 3. Each executor can be expressed as a set of components. When a vulnerability exists in a component of A, as well as in components of B and C, and it can cause the same attack effect, the corresponding components in A, B, and C are all marked. We use a vulnerability set to indicate all the vulnerabilities existing in the executor set; if this set is empty, there is no vulnerability in A, B, and C. In Figure 3, for example, one vulnerability is a 3-order common vulnerability of the executor set; similarly, another is a 2-order common vulnerability, and the remaining ones are 1-order common vulnerabilities.
We assume that an attacker can discover and exploit only one vulnerability within one scheduling cycle. As shown in Figure 4(a), the attacker discovers and exploits a 1-order vulnerability that can successfully attack executor C, and C only. As executors A and B are not attacked successfully, the system can still output correct results after the majority ruling. If the attacker discovers and exploits a 2-order common vulnerability, as shown in Figure 4(b), executors A and B are breached at the same time; the final ruling result is then wrong, and the system is breached momentarily (we call this phenomenon an instant attack escape). Moreover, if there is a 3-order common vulnerability in the executor set, as shown in Figure 4(c), and the attacker exploits it to attack all online executors successfully, the system will be breached, and the inverse feedback control mechanism will not be triggered until the next scheduling cycle. To sum up, the existence of high-order common vulnerabilities in the executor set seriously threatens the security of the mimic system.
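The order of a common vulnerability, and the escape condition under majority ruling described above, can be expressed in a few lines. The function names and the executor labels are ours, chosen only for illustration.

```python
def vulnerability_order(affected_executors):
    """Order m of a common vulnerability = number of distinct executors
    it can breach with the same attack effect (Definition 1)."""
    return len(set(affected_executors))

def causes_instant_escape(affected_executors, redundancy=3):
    """With majority ruling, an attack escapes as soon as more than half
    of the online executors output the same wrong result."""
    return vulnerability_order(affected_executors) > redundancy // 2
```

With redundancy 3, this reproduces the three Figure 4 cases: a 1-order vulnerability is masked by the ruling, while 2-order and 3-order common vulnerabilities cause an escape.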
4.2. Heterogeneous Measure
Most existing algorithms for measuring the heterogeneity between two executors are based on their common vulnerabilities (i.e., the 2-order common vulnerabilities of the two executors). After the analysis above, this paper concludes that the existing methods for calculating the heterogeneity of an executor set still have the following deficiencies:
(1) Only the similarity of the same component between different executors (based on common vulnerabilities between two executors) is considered, while common vulnerabilities may exist between different components of the same or different executors;
(2) The complexity of the matrix operations based on executor components becomes too high when there are too many executors in the executor pool;
(3) The existence of high-order common vulnerabilities, and how to compute them, is not considered.
Aiming at the deficiencies above, and taking both high-order common vulnerabilities and computational complexity into consideration, this section proposes the heterogeneity measure algorithm based on a vulnerability topology graph (HVTG). Using graph theory, we construct the vulnerability topology graph from the vulnerabilities that have been discovered in the executors. Assume that each executor in the online executor set is composed of functionally equivalent components. Then the online executor set of the system can be expressed as follows:
Among them, each row of the matrix represents an executor, and each column represents a class of functionally equivalent components.
Definition 2. (Vulnerability Topology Graph (VTG)) The VTG is constructed from the discovered vulnerabilities in all executors. For each discovered vulnerability, its component set contains the components in which it appears. As shown in Figure 5, the VTG is formed by connecting each vulnerability with its component set.
As we can see from Figure 5, the VTG covers multiple situations. First, common vulnerabilities may appear in the same component of different executors: in Figure 5(b), the same vulnerability exists in the corresponding component of several executors. Second, common vulnerabilities may appear in different components of different executors: in Figure 5(a), a component of one executor shares a vulnerability with a different component of another executor. Third, common vulnerabilities may appear in different components of the same executor, where two components of one executor share a common vulnerability. In summary, the VTG can locate vulnerabilities and select the online executor set in a more fine-grained manner.
Definition 3. (Pseudo n-order common vulnerability) As shown in Figure 5(a), a vulnerability may have n branches, but within its component set there exists a component subset belonging to the same executor. In this case, we define the vulnerability as a pseudo n-order common vulnerability.
Definition 4. (n-order common vulnerability) For each vulnerability, consider its component set and calculate its maximum subset, counting components in the same executor only once. When the size of this maximum subset is n, we define the vulnerability as an n-order common vulnerability.
A pseudo n-order common vulnerability increases the attack surface of the executor with respect to that vulnerability. We therefore have reason to believe that the threat of a pseudo n-order common vulnerability is greater than that of a lower-order vulnerability but less than that of a true n-order common vulnerability. To elaborate on n-order common vulnerabilities, we assume an executor set of five executors. Assuming that a set of vulnerabilities has been found in it, the corresponding VTG is shown in Figure 6.
From Figure 6, we can see that one vulnerability is a 5-order common vulnerability, while another is a pseudo 5-order common vulnerability but actually a 4-order common vulnerability. Similarly, there are a pseudo 3-order vulnerability, a 3-order common vulnerability, and a 2-order common vulnerability. Considering the VTG above, this section calculates the threat level of an n-order common vulnerability from its structure. The higher the vulnerability order, the higher the system similarity, and the greater the potential threat to the system. Taking both pseudo n-order and n-order common vulnerabilities into consideration, we define the weight function of a vulnerability as follows: where the first argument represents the order of the vulnerability, the second represents the bias order accounting for pseudo-order branches, and the third represents the online executor redundancy.
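Definitions 3 and 4 amount to a simple computation on the VTG: the number of branches of a vulnerability versus the number of distinct executors it reaches. The following sketch assumes a vulnerability is represented by its component set as a list of (executor, component) pairs; the representation is ours, not the paper's.

```python
def classify_vulnerability(component_set):
    """Classify a vulnerability from its VTG component set (Definitions 3/4).

    `component_set` is a list of (executor, component) pairs the
    vulnerability connects to. Components of the same executor are
    counted only once when computing the true order; a vulnerability is
    pseudo n-order when it has more branches than distinct executors.
    """
    branches = len(component_set)
    order = len({executor for executor, _ in component_set})
    is_pseudo = branches > order      # several components of one executor
    return order, is_pseudo
```

For example, a vulnerability touching five components spread over five executors is a true 5-order common vulnerability, while one touching five components of only four executors is a pseudo 5-order (actually 4-order) common vulnerability, mirroring the Figure 6 discussion.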
Proof. The calculation of the n-order common vulnerability weight should conform to the change rule of the vulnerability threat degree. First, the weight function must be monotonically increasing; moreover, the vulnerability threat degree should increase fastest when the order approaches half of the online executor redundancy. We first calculate the first partial derivative of the weight function with respect to the order; setting it to zero, (5) is equivalent to the following. We then calculate the second partial derivative, and, in the same way, the second partial derivative with respect to the bias order. From (6)–(8), we can conclude that the first derivative is positive, that is, the weight function increases monotonically. Furthermore, the derivative reaches its maximum when the order equals half of the redundancy; that is, the vulnerability weight increases fastest there, which is in line with the fact that when the vulnerability order exceeds half of the number of online executors, the threat degree of the vulnerability increases sharply and causes instant escape. Beyond that point, the vulnerability weight continues to increase, but its growth rate gradually decreases, which satisfies the change law of the vulnerability threat degree.
Definition 5. (Vulnerability Binary Subset (VBS)) For each vulnerability, consider its component set. Based on it, we define a VBS as a subset containing two of its elements; a vulnerability with k components therefore has k(k−1)/2 VBSs. Traversing the VBSs of each vulnerability, the heterogeneity between two executors can be expressed as follows:
From (4)–(10), we can conclude that the larger the accumulated similarity value is, the more common vulnerabilities there are between the two executors, the greater the possibility of high-order common vulnerabilities, and the higher the similarity between them. At one extreme, there is complete heterogeneity between the executors; at the other, the executors are exactly the same. The similarity of the online executor set in HVTG can then be calculated as follows:
Finally, the pseudocode of HVTG is shown in Algorithm 1. Lines 1–9 construct the VTG and VBSs from all discovered vulnerabilities in the executors. Lines 10–17 determine the order and weight of each vulnerability using the function Getordervulnerability. Lines 18–22 compute the heterogeneity of the executors and the executor set. In summary, this paper measures the similarity between executors in a more fine-grained way and avoids the occurrence of high-order vulnerabilities to the greatest extent.
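The VBS traversal can be sketched as follows. Since the paper's equations (4)–(10) are not reproduced here, the order weight `weight(order)` and the normalisation `1/(1+similarity)` are illustrative assumptions; only the structure (accumulate a weighted contribution for every VBS that pairs the two executors, then map to a heterogeneity score in (0, 1]) follows the text.

```python
from itertools import combinations

def heterogeneity(vtg, weight, ex_i, ex_j):
    """Sketch of the HVTG pairwise measure (weighting and normalisation
    are assumed, not the paper's exact formulas).

    `vtg` maps each vulnerability to its component set, given as a list
    of (executor, component) pairs. Every VBS that pairs a component of
    ex_i with a component of ex_j adds the vulnerability's order weight
    to the similarity; heterogeneity shrinks as similarity grows.
    """
    similarity = 0.0
    for vuln, comp_set in vtg.items():
        order = len({executor for executor, _ in comp_set})
        for (e1, _), (e2, _) in combinations(comp_set, 2):   # all VBSs
            if {e1, e2} == {ex_i, ex_j}:
                similarity += weight(order)
    # 1.0 = complete heterogeneity (no shared vulnerabilities);
    # the score tends to 0 as weighted shared vulnerabilities accumulate.
    return 1.0 / (1.0 + similarity)
```

Executor pairs that share no vulnerability in the VTG score 1.0 (complete heterogeneity), while pairs linked through high-order vulnerabilities are penalised more heavily through the order weight.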

5. Scheduling Algorithm Based on High-Order Heterogeneity and Adaptive Historical Confidence (HHAC)
5.1. The Measure of Historical Confidence
We measure the past performance of an executor to obtain its historical confidence of executor (HCE), which reflects its historical performance and current ability to resist attacks. Most current studies calculate the global confidence [22], namely, the whole historical performance of the executor. Furthermore, Zhang [28] proposes the sliding window confidence, calculated over the current local time period. However, the global confidence of an executor cannot completely reflect its real attacked state in the current time period, and the sliding window confidence considers only the attacked state in the current time period. Based on this discussion, we believe that both the global and local historical confidence of an executor should be considered, and this paper redefines the concepts and calculation methods of both.
Definition 6. (Global Confidence (GC)) As shown in Figure 7, we denote the time points when an executor rolls online and the time points when it rolls offline. This paper defines the historical performance of the executor over all of its past online periods as the global confidence.
Definition 7. (Local Confidence (LC)) As shown in Figure 7, consider the time period after the executor most recently rolled online; the executor may since have been replaced offline or may still be running online at the current moment. We define the recent performance of the executor during this period as the local confidence.
This section proposes an adaptive measure algorithm for the HCE based on the executor's online working time and number of performed tasks, and then introduces the calculation methods of the GC and LC in detail. The relevant parameters are shown in Table 2.
According to Definition 6, we calculate the GC using the formula as follows. As for the LC, since the executor still works online during the current period, we perform an adaptive update after each task; the update rule should be consistent with how historical confidence changes under task success or failure. The formula is given below. When the executor performs a task successfully during the current period, its LC should increase slowly; on the contrary, on a wrong output the LC should decrease rapidly. In addition, if the number of wrong outputs reaches the threshold, the LC must be reduced to the threshold value for offline cleaning. If the executor performs a task successfully, its LC is updated by the success rule; if it outputs a wrong result during the current period, the LC is updated by the failure rule.
Proof. After the executor performs a task successfully, the growth rate of the LC follows from the success rule. Similarly, when the executor performs a task unsuccessfully, the change rate of the LC can be calculated from the failure rule. According to (16)–(18), the confidence increases slowly after a successful task and decreases rapidly after a wrong output; that is, as the number of wrong output results increases, the decline amplitude of the LC gradually increases. To sum up, we calculate the GC to measure the historical performance of the executor and use it as input to calculate the LC. Furthermore, the LC is adjusted adaptively according to the outcome of the current task, so it can better quantify the executor's current ability to resist attack risk. To avoid frequent executor scheduling due to occasional output errors or inconsistencies, the proposed adaptive measure algorithm for the HCE provides prior information and the trust degree of each executor to the SA.
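The asymmetric update behaviour described above (slow rise on success, accelerating fall on repeated failures, floor at the cleaning threshold) can be sketched as follows. The paper's exact equations are not reproduced; `alpha`, `beta`, and `floor` are assumed parameters chosen only to exhibit the qualitative rule.

```python
def update_local_confidence(lc, success, fail_count,
                            alpha=0.05, beta=0.2, floor=0.1):
    """Illustrative adaptive LC update (parameters are assumptions,
    not the paper's equations).

    Success raises the LC slowly toward 1; a wrong output lowers it
    rapidly, and the drop grows with the number of accumulated wrong
    outputs. Once the LC falls to the cleaning threshold `floor`, the
    executor should be taken offline for cleaning.
    """
    if success:
        lc = lc + alpha * (1.0 - lc)           # slow, bounded increase
    else:
        fail_count += 1
        lc = lc - beta * lc * fail_count       # increasingly rapid decrease
        lc = max(lc, floor)                    # clamp at cleaning threshold
    return lc, fail_count
```

A single success nudges the confidence up only slightly, while each additional failure removes a larger share of it, matching the intent of avoiding frequent scheduling on occasional errors while reacting strongly to repeated ones.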
5.2. The Selection of Adaptive Survival Time
Assume the attacker is smart enough that it may discover a vulnerability but not attack immediately in the current scheduling cycle, instead waiting until it obtains a vulnerability of another online executor, or vulnerabilities of more than half of the online executors. The attacker then exploits all the vulnerabilities at the same time, which can cause an instant attack escape, or even a permanent attack escape, because more than half of the executors output wrong results. Considering these potential threats, this paper introduces the survival time disturbance factor. Each online executor carries a random survival time and will go offline, regardless of its historical performance or current potential threats, when its online time exceeds that survival time. The executor scheduling process with the survival time disturbance factor is shown in Figure 8. According to the rotation scheduling mentioned earlier, the optimal backup executor is calculated for each online executor to form the backup executor set. When the historical confidence of an online executor is lower than the threshold or its survival time is reached, its backup will be scheduled online and given a survival time.
Definition 8. (Random Survival Time Disturbance Factor) To avoid attacks accumulating from the moment an executor initially comes online, we introduce a random survival time disturbance factor that determines its online survival time regardless of its historical confidence or current performance.
Definition 9. (Adaptive Survival Time Disturbance Factor) An overly short survival time causes frequent replacement of executors and thus high system cost, while an overly long survival time brings relatively high potential threats. To avoid the random survival time being too short or too long, we adaptively increase or decrease the online time according to the historical confidence of the executor, introducing the adaptive survival time disturbance factor to do so.
In this section, we propose an adaptive iterative generation algorithm for the survival time disturbance factor based on . Considering the LC of , the calculation methods of and are introduced in detail below. (1) Before any adaptive change of the survival time, when executor first comes online at in a scheduling cycle, the random survival time disturbance factor is generated; that is, will go offline into cleaning at time . (2) When , the adaptive transformation rule is
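A minimal sketch of such an adaptive survival-time scheme follows. The uniform range for the random factor, the step size, and the confidence reference point are all assumed parameters for illustration; the paper's exact transformation rule may differ:

```python
import random

def initial_survival_time(t_min: int = 50, t_max: int = 200) -> int:
    """Random survival time disturbance factor (assumed uniform range)."""
    return random.randint(t_min, t_max)

def adapt_survival_time(t_live: int, lc: float, lc_ref: float = 0.5,
                        step: int = 10, t_min: int = 50, t_max: int = 200) -> int:
    """Adaptive rule sketch: extend the survival time of a well-trusted
    executor, shrink it when confidence falls below the reference,
    and clamp to bounds so it never degenerates to too short or too long."""
    if lc > lc_ref:
        t_live += step
    else:
        t_live -= step
    return max(t_min, min(t_max, t_live))
```

The clamping realizes Definition 9's requirement that the survival time neither collapses (causing frequent replacement) nor grows without bound (accumulating threat exposure).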
5.3. Scheduling Algorithm Based on High-Order Heterogeneity and Confidence (HHAC)
When selecting the initial online executor set, one tends to think that the ideal scheme is a set that satisfies the redundancy condition and has the minimum overall similarity. In fact, the set with the minimum overall similarity may not be the optimal choice. To illustrate, consider the 3-redundant mimic defense system shown in Figure 9(a): the executor pool is , and denotes the heterogeneity between executors and . There are two scheduling schemes and , and because , the overall heterogeneity of is greater than that of . However, in , , meaning the probability of common vulnerabilities between and is relatively high, and so is the probability that an attacker successfully exploits them. Therefore, when using the similarity index, we should not only consider the overall value but also prevent local extreme values.
Based on the above assumptions and the problems of existing SAs, the selection of the initial executor set is equivalent to the following optimization problem:
From (20), the similarity between any two executors must stay below the threshold; equivalently, their pairwise heterogeneity must not fall below a minimum. The ideal scheduling scheme would have an infinite scheduling period, that is, an infinite executor pool. But due to cost and environmental constraints in practice, the executor pool usually contains fewer than 20 executors, so the global optimal solution can be found in a short time and the computational complexity is low.
Then, we consider executor replacement. According to the feedback from the ruling module, when the local confidence or global confidence drops below the threshold, or the adaptive survival time disturbance factor exceeds its time to live, the executor must be scheduled offline and a new executor brought online in time to maintain the dynamic security of the system. To guarantee system service quality, we assume that only a single executor can be scheduled online or offline in one scheduling cycle. If two or more executors (but no more than half of the online executors) reach the offline threshold simultaneously, the executor with the lowest HCE goes offline first, and the operation is repeated in the following scheduling cycle or cycles. If more than half of the executors' HCEs reach the scheduling threshold, we assume the system has suffered serious loss from the attack, and all online executors are immediately taken offline for cleaning and recovery.
As shown in Figure 9(b), consider a 3-redundancy mimic system whose executor pool is , where the length of represents the time elapsed since executor was last scheduled offline, also known as its scheduling period. The process of selecting a new online executor is shown in Figure 9(b). Assuming the overall heterogeneity of is greater than that of , the original system would prefer to bring online, ignoring the potential threat posed by its overly short offline time. The system should instead prefer executor , because . If the scheduling period is too short, an executor returns online too quickly and gives the attacker more a priori knowledge. We avoid this situation by introducing a sleep threshold.
Definition 10. (Sleep Threshold ). Assuming the last offline time of executor is , represents the offline sleep time of executor , and the sleep threshold is the optimal sleep time at which the system achieves the highest security at the least cost. Supposing the executor pool contains executors and the online executor set has redundancy , we define the sleep threshold in this paper as follows ([·] denotes rounding to an integer):
Definition 11. (Rotational Scheduling). Assume the online executor set is and the executor pool is . A rotational scheduling process rolls one executor offline and another online: when the historical confidence of drops below the threshold, it is rolled offline into the cleaning process and a new executor is selected to go online, giving the new executor set . The selection of a replacement executor is itself an optimization problem, formally formulated as follows: Equation (26) requires the similarity between and to be smaller than the threshold; (27) requires the scheduling period of to exceed the sleep threshold; the remaining constraint gives the weights of the objective function. From the above derivation, the pseudocode of the proposed HHAC is shown in Algorithm 2. Lines 1–9 select the initial online executor set according to the heterogeneity computed by the HVTG. Lines 10–12 compute the LC and adaptive survival time disturbance factor of each executor. Lines 13–15 take offline if its LC falls below the threshold or it exceeds the sleeping threshold. Lines 16–20 select the replacement online executor according to the rotational scheduling rules and the executor pool. To the best of our knowledge, we are the first to consider a sleep threshold when selecting a new online executor. Compared with existing scheduling algorithms, HHAC achieves better system reliability in more complex attack scenarios.
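The rotational replacement step can be sketched as below. The function assumes `H` (pairwise heterogeneity), `sleep_time` (per-executor offline sleep counters), a sleep threshold `t_sleep`, and a minimum pairwise heterogeneity `h_min`; these names and the exact objective (maximize total heterogeneity to the executors staying online) are illustrative stand-ins for the constraints (26)–(27):

```python
def select_replacement(online, pool, H, sleep_time, t_sleep, h_min):
    """Rotational-scheduling sketch: among offline executors that have
    slept past the sleep threshold, pick the one maximising its total
    heterogeneity to the executors remaining online, rejecting any
    candidate that would form a low-heterogeneity pair.

    `online` is the set of executors staying online after the
    low-confidence executor was removed. Returns None if no candidate
    satisfies both the sleep and heterogeneity constraints.
    """
    best, best_score = None, -1.0
    for c in pool:
        if c in online or sleep_time[c] < t_sleep:
            continue  # still online, or has not slept long enough
        if any(H[c][j] <= h_min for j in online):
            continue  # would create a locally weak pair
        score = sum(H[c][j] for j in online)
        if score > best_score:
            best, best_score = c, score
    return best
```

Note the order of checks: an executor that has not reached the sleep threshold is excluded even if it would maximize heterogeneity, which is precisely the Figure 9(b) situation the sleep threshold is designed to prevent.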

6. Evaluation Model Based on AHP-FCE
Existing research mainly relies on a single index, such as scheduling period, anti-attack capability, system cost, or service quality, to evaluate the pros and cons of an SA, which leads to large deviations. In practical applications, we cannot consider security while ignoring deployment feasibility and cost, nor consider deployment cost while ignoring system security. Based on these considerations, this paper introduces the AHP-FCE model to evaluate SAs comprehensively across multiple factors such as dynamics, heterogeneity metrics, system cost, and service quality.
6.1. Determine Indicator Weights Using AHP
Analytic Hierarchy Process (AHP) [32] is a systematic, hierarchical analysis method that combines qualitative and quantitative approaches. As shown in Table 3, this paper adopts Saaty's 1–9 scale method to determine the degrees of importance.
Theorem 1. Eigenvalue method: if the pairwise comparison matrix is not a consistent matrix, the normalized eigenvector corresponding to its largest eigenvalue $\lambda_{\max}$ is used as the weight vector $W$. The consistency index CI measures the divergence of the judgment matrix from consistency, namely $\mathrm{CI} = (\lambda_{\max} - n)/(n - 1)$. Considering the random consistency index RI, the consistency ratio CR can be calculated as $\mathrm{CR} = \mathrm{CI}/\mathrm{RI}$. When $\mathrm{CR} < 0.1$, we consider the judgment matrix to have satisfactory consistency, and the consistency test is passed. Otherwise, it is necessary to reconstruct the judgment matrix and retest.
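The eigenvalue method and Saaty consistency test can be implemented directly with NumPy. The RI values below are Saaty's standard random consistency indices; the function returns the normalized weight vector and the consistency ratio:

```python
import numpy as np

# Saaty's random consistency indices for matrix orders 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Eigenvalue-method weights plus consistency check.

    Returns (w, CR): the normalized principal eigenvector of the
    pairwise comparison matrix A, and the consistency ratio
    CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalize to a weight vector
    lam = eigvals[k].real
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, cr
```

A judgment matrix passes the test when the returned CR is below 0.1; otherwise it should be reconstructed, as stated in Theorem 1.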
6.2. FCE Model
Fuzzy Comprehensive Evaluation (FCE) [33, 34] is a comprehensive evaluation method based on fuzzy mathematics, suitable for fuzzy or difficult-to-quantify problems; we therefore apply it to the evaluation of SAs. The trapezoidal membership function used in this section is as follows:
The weight of can be obtained from the above AHP, and the evaluation matrix can then be calculated as follows: where is the fuzzy matrix of the evaluation factor set over the evaluation set , and represents the membership score of with respect to . The score set for the evaluations is . Finally, the comprehensive score is used to judge the pros and cons of an SA comprehensively.
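The two FCE building blocks can be sketched compactly. `trapezoid` is the standard trapezoidal membership function (the paper's breakpoints `a, b, c, d` are parameters here), and `fce_score` composes the AHP weight vector with the fuzzy evaluation matrix and the grade score set:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising on [a, b],
    1 on [b, c], falling on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fce_score(W, R, S):
    """Comprehensive score: B = W . R (weights times fuzzy evaluation
    matrix), then final score = B . S over the grade score set S."""
    B = [sum(w * R[i][j] for i, w in enumerate(W)) for j in range(len(S))]
    return sum(b * s for b, s in zip(B, S))
```

For example, with two equally weighted factors that fully belong to grades scored 100 and 60, the comprehensive score is their average, 80.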
6.3. Evaluation Criteria
By researching and summarizing the existing scheduling algorithms and evaluation indexes, this paper formulates the index evaluation criteria table on the basis of the calculation methods of related evaluation indexes, as shown in Table 4.
7. Experimental Results
In this section, we demonstrate the effectiveness of the proposed scheduling algorithm HHAC through simulation experiments. We first introduce the experimental setup and then analyze the simulation results.
7.1. Experimental Environment
Specifically, the server is configured with an AMD 4800U (1.8 GHz) and 16 GB RAM, and the platform is PyCharm 2020. In this experiment, we set the redundancy of online executors to 3 and 4, respectively, and the executor pool contains 15 executors. To reduce uncertainty, each experiment is repeated 120 times to obtain the final evaluations of the four scheduling algorithms. In addition, to focus on the actual performance of the scheduling algorithms, we simplify some experimental conditions and parameter settings: (i) throughout the experiment, all four algorithms generate the heterogeneity between executors from the same normal distribution with parameters (0, 0.5) and the same random seed, and the failure probability of each executor from a normal distribution with parameters (0, 0.1); (ii) this experiment mainly studies the selection of online executors, so the selection of offline executors is simplified to selection by failure probability; (iii) we assume that only one executor can be rolled online or offline in one scheduling cycle, to preserve service quality; (iv) to better study the dynamics of the scheduling algorithms and the average failure probability of the system, the online executor redundancy is not changed within a single scheduling experiment.
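A sketch of this setup follows. Taking absolute values to keep heterogeneities and failure probabilities non-negative is an assumption on our part, since the paper only states the distribution parameters; the fixed seed guarantees all four algorithms see identical inputs:

```python
import random

def build_environment(n_pool: int = 15, seed: int = 42,
                      h_sigma: float = 0.5, f_sigma: float = 0.1):
    """Generate the shared experimental inputs: a symmetric pairwise
    heterogeneity matrix drawn from |N(0, h_sigma)| and per-executor
    failure probabilities from |N(0, f_sigma)|, clipped to [0, 1]."""
    rng = random.Random(seed)
    H = [[0.0] * n_pool for _ in range(n_pool)]
    for i in range(n_pool):
        for j in range(i + 1, n_pool):
            h = abs(rng.gauss(0.0, h_sigma))
            H[i][j] = H[j][i] = h     # heterogeneity is symmetric
    fail_p = [min(1.0, abs(rng.gauss(0.0, f_sigma))) for _ in range(n_pool)]
    return H, fail_p
```

Rebuilding the environment with the same seed reproduces the same matrix, which is what makes the four-algorithm comparison fair.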
7.2. Comparison Indexes
Existing research evaluates scheduling algorithms mainly through system dynamics [19, 31] and the probability of system failure [17, 35]. In addition to these two indicators, this paper proposes the minimum sleep threshold of an executor, the cumulative distribution function of system failure, and the minimum sleep times to further illustrate the effectiveness of the proposed HHAC: (i) dynamics of the scheduling algorithm : the number of scheduling steps between occurrences of the same scheduling scheme, regardless of executor position; (ii) minimum sleep threshold of an executor : the minimum number of scheduling steps of the executor set from when an executor is rolled offline until it is scheduled online again; (iii) system average failure probability : the system failure probability proposed in [35] depends on the failure probability of a single executor and the similarity between executors; building on [35], this paper further incorporates the number of failed executors and the average minimum sleep threshold of a single executor, as follows:
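As an illustration of index (i), the dynamics measurement can be sketched as follows. Treating each online executor set as an unordered collection (hence `frozenset`) matches the "regardless of executor position" clause; the function name and return shape are ours:

```python
def dynamics_metric(schedule_history):
    """Dynamics: for each recurrence of an online executor set
    (order-insensitive), record the number of scheduling steps since
    its previous occurrence. Larger gaps mean a more dynamic,
    harder-to-predict system."""
    last_seen, gaps = {}, []
    for t, online_set in enumerate(schedule_history):
        key = frozenset(online_set)
        if key in last_seen:
            gaps.append(t - last_seen[key])
        last_seen[key] = t
    return gaps
```

For example, a history in which the set {1, 2, 3} reappears two steps later and {2, 3, 4} three steps later yields the gaps [2, 3].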
7.3. Compared Schemes
In this experiment, we compare HHAC comprehensively with the typical completely random scheduling algorithm (CRS), the time-based random threshold scheduling algorithm (TIRTS), and the random seed and minimum similarity algorithm (RSMS), to help readers better understand our algorithm and its advantages.
7.3.1. CRS
This scheme randomly selects the online and offline executors.
7.3.2. TIRTS
This scheme assigns a random online time to each executor; when an executor's online time reaches this limit, the scheme rolls it offline for cleaning and then randomly selects another executor to go online.
7.3.3. RSMS [17]
This scheme first randomly selects a seed executor and then selects the scheduling scheme with the smallest overall similarity according to the minimum-similarity principle.
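For reference, RSMS can be sketched as a greedy procedure: a random seed executor followed by repeatedly adding the executor with the largest total heterogeneity (smallest similarity) to the set chosen so far. The greedy form and parameter names are our reading of [17], not its exact algorithm:

```python
import random

def rsms_select(pool, H, m, seed=None):
    """RSMS sketch: random seed executor, then greedily add the executor
    with the largest total heterogeneity (i.e., smallest similarity) to
    the already-chosen set, until m executors are selected."""
    rng = random.Random(seed)
    chosen = [rng.choice(pool)]
    while len(chosen) < m:
        rest = [e for e in pool if e not in chosen]
        nxt = max(rest, key=lambda e: sum(H[e][c] for c in chosen))
        chosen.append(nxt)
    return chosen
```

Because only the seed is random, two runs with the same seed produce the same scheme, which is the behavior the simulations below rely on when fixing random seeds.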
7.4. Simulation Results
7.4.1. System Dynamics
We first compare the dynamics of our algorithm with CRS, TIRTS, and RSMS. To avoid accidental errors, all algorithms are implemented as Python scripts and every experiment is run 120 times. To compare the scheduling cycles under different redundancy, we run experiments with redundancy 3 and 4. Figure 10 shows the scheduling cycles of the four scheduling algorithms with online executor redundancy , and Figure 11 shows the results with redundancy . The average scheduling periods of the four algorithms under the different redundancies are listed in Table 5.
It can be seen that, as the online executor redundancy increases, the scheduling cycle of HHAC gradually increases but remains shorter than those of CRS and TIRTS. For example, when , the scheduling cycles of CRS and TIRTS are about 2 times that of HHAC, while HHAC's is about 20 times that of RSMS. When , the scheduling cycle of HHAC is about 28 times that of RSMS and about 25% of those of CRS and TIRTS. This result is easy to understand: in CRS and TIRTS, each executor is selected randomly, while HHAC adds constraints that shorten the scheduling cycle. In addition, the scheduling cycles of CRS and TIRTS are relatively scattered and random, whereas HHAC's are relatively concentrated. Hence, in a resource-limited system that values security and stability, HHAC is a good choice for achieving high security and reliability.
7.4.2. Minimum Sleep Times of Executor
To further verify the feasibility of HHAC, we evaluate the minimum sleep times by calculating the average minimum sleep times of each executor, again running every experiment 120 times. Figure 12(a) shows the average minimum sleep times of each algorithm when , and Figure 12(b) shows the results of the four algorithms when . The figures show that the minimum sleep times of HHAC are significantly larger than those of CRS and TIRTS. More specifically, when , the average minimum sleep times of HHAC are nearly 12.25% more than those of RSMS and CRS, and 5 times those of TIRTS. When , HHAC's average minimum sleep times are nearly 26.39% more than those of RSMS and CRS. Although HHAC's average minimum sleep times are sometimes lower than RSMS's, RSMS is relatively random, which greatly affects the security and reliability of the system. In summary, in terms of the minimum sleep times of an executor, HHAC outperforms the other three algorithms; with larger sleep times, HHAC better resists an attacker's long-term system scanning and increases system security.
7.4.3. System Failure Probability
We evaluate the average system failure probability by calculating the average failure probability of each online executor set between occurrences of the same scheduling scheme. Figure 13(a) shows the system failure probabilities of the four algorithms when , and Figure 13(b) shows them under redundancy . Table 6 summarizes the average failure probabilities of the four algorithms. Clearly, compared with the other three algorithms, HHAC achieves better system security under the same redundancy. Specifically, when , the average failure probability of HHAC is 28.8% less than RSMS's, 4.5% less than CRS's, and much less than TIRTS's; under redundancy , HHAC's average failure probability is 56% less than RSMS's. Intuitively, as the executor redundancy increases, the average system failure probability decreases by roughly an order of magnitude, in line with the security behavior of mimic systems. In summary, compared with the other three algorithms, HHAC may be the better choice for guaranteeing system security under the given constraints.
7.4.4. The CDF of Minimum Sleep Times of Executor
Figure 14 shows that when the online executor redundancy is , the minimum sleep times of HHAC are stable at about 6, occasionally lower than those of RSMS and CRS; those two algorithms, however, are very unstable and lower than HHAC most of the time. Moreover, compared with TIRTS, HHAC has a clear advantage in ensuring system security. Similarly, when the redundancy is , HHAC retains the same advantage over TIRTS and CRS, though it falls somewhat below RSMS.
7.4.5. The CDF of System Failure Probability
Figure 15(a) shows the CDF of the system failure probability when . The figure shows that the system failure probability of HHAC is much smaller than TIRTS's, and smaller than RSMS's and CRS's in about 50% of the experiments. Moreover, compared with RSMS and CRS, HHAC's system failure probability is very stable, mostly around 0.42%. Likewise, when , HHAC has an obvious advantage over TIRTS and CRS but is sometimes larger than RSMS.
7.4.6. Comprehensive Evaluation Based on AHP-FCE
(1) Constructing the judgment matrix. To ensure the objectivity of the evaluation of scheduling algorithms, we determine the relationships among dynamics, heterogeneity, system cost, and quality of service using Saaty's 1–9 scale method. Since the main goal of a scheduling algorithm is to construct a system with an unmeasurable internal structure and high security, system security is the primary objective when constructing the judgment matrix, while balancing deployment cost. Accordingly, the judgment matrix is determined in this paper as follows:
(2) Calculating the weight vector. According to the above matrix, the weight vector of dynamics, heterogeneity, system cost, and QoS calculated by AHP is
(3) Comprehensive evaluation. Based on our study of CRS, TIRTS, RSMS, and HHAC and the comprehensive evaluation table in Table 4, we obtain the comprehensive scores of the four algorithms, shown in Table 7. From the membership function, the fuzzy matrices of CRS, TIRTS, RSMS, and HHAC are obtained as follows:
Combining these fuzzy matrices with the weight vector , we obtain
In this paper, we define the comment set as , and the final score is calculated as follows:
From the final scores of the AHP-FCE evaluation model, RSMS and HHAC achieve better system security than CRS and TIRTS. This result is easy to understand: CRS and TIRTS do not consider executor heterogeneity, which may leave many potential security threats among the online executors. In addition, when system cost and service quality are taken into account, the proposed HHAC scores higher than RSMS and achieves better results. In summary, the proposed AHP-FCE comprehensive evaluation model can judge the merits of each algorithm in a more comprehensive and fine-grained way, filling a gap in the comprehensive evaluation of scheduling algorithms. Constrained by our limited understanding of scheduling algorithms, we only formulate the scoring rule table for the four algorithms; in the future, we intend to build more complete scoring rules and refine the AHP-FCE model.
8. Conclusions
Mimic defense has gradually demonstrated powerful defense capabilities in cyberspace security. In this paper, we focus on the research and implementation of SAs and achieve several results. First, considering the high-order common vulnerabilities among executors, we propose a new executor heterogeneity quantification algorithm, HVTG, based on graph theory, which realizes fine-grained heterogeneity measurement. Second, we propose an adaptive scheduling algorithm, HHAC, based on high-order heterogeneity and the historical confidence of executors, which improves the security and reliability of the mimic system: for example, the minimum sleep times of HHAC are nearly 12.25% more than those of RSMS and CRS when , and the failure probability of HHAC is about 28.8% lower than RSMS's under redundancy . Finally, we propose comprehensive evaluation indexes and an AHP-FCE-based evaluation model for scheduling algorithms; the model fills a gap in the comprehensive evaluation of scheduling algorithms, and the results show that HHAC outperforms the other algorithms under this evaluation.
Looking to the future, we plan to conduct more in-depth and comprehensive research on adaptive historical confidence, adaptive online redundancy, and the evaluation criteria of the AHP-FCE model; for example, we will study how to change the number of online executors continuously, eventually achieving high security while balancing system cost and quality of service.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Nuclear Foundation Major Special Fund (Grant no. 2017ZX01030301).