Abstract

A Safety-Critical Cyber-Physical System (SCCPS) is a system whose failure, or the failure of whose key functions, can cause casualties, property loss, environmental damage, or other catastrophic consequences. Verifying the safety of such systems is therefore vital. In the community, SCCPS safety verification relies mainly on statistical model checking (SMC). However, for SCCPS with extremely high safety requirements, the probability that the system violates its safety property is very low (a rare property), and standard statistical model checking is practically unable to sample such extremely small probability events. To address this problem, we propose a new statistical model checking method for high-safety SCCPS. First, the SCCPS path probability space is approximated by a CTMC whose parameters are learned by maximum likelihood estimation. Then, the embedded DTMC is derived from the CTMC, and a cross-entropy optimization model is constructed on the DTMC. Finally, we propose an algorithm that iteratively learns the optimal importance sampling distribution over the discrete path space and a statistical model checking algorithm for verifying rare properties. Experimental results show that the proposed method can effectively verify rare properties of SCCPS. Under the same sample size, compared with heuristic importance sampling methods, the estimates produced by our method are more tightly distributed around the mean, and the standard deviation and relative error are reduced by more than an order of magnitude.

1. Introduction

Safety-Critical Cyber-Physical Systems (SCCPS) are characterized by high safety and reliability requirements and are widely used in fields closely tied to the national economy and people's livelihoods, such as aerospace, the nuclear industry, public transportation, finance, and medical care. Once such a system fails during execution, it can seriously threaten human life and property [1-3]. Therefore, analyzing and verifying the safety and reliability of safety-critical systems is vital and of great significance to their design and development. Indeed, the topic has attracted wide attention from researchers and has grown into a prominent research direction in the community [4-7].

Essentially, an SCCPS is a complex cyber-physical fusion system [8-10]. In such systems, the continuously changing behavior of the physical layer is intertwined with the discretely changing behavior of the decision and control layer, and the state space is infinite. This greatly increases the difficulty of, and poses severe challenges to, the safety analysis and verification of SCCPS. Traditional model checking suffers from the state space explosion problem and cannot verify such systems effectively [11].

Statistical Model Checking (SMC) samples execution paths of the system and uses statistical analysis techniques to approximate the probability that the target system satisfies a temporal logic property, with arbitrarily small error bounds [12-14]. Because SMC does not need to analyze the complex internal logic of the target system to verify its temporal logic properties, it effectively avoids both the complexity of the system and the explosion of the state space [15, 16]. SMC is therefore the most effective solution for verifying the temporal properties of complex SCCPS [12, 17-19]. However, for SCCPS requiring extremely high safety, the probability that a safety property is violated or that the system fails is extremely low, and it is infeasible for SMC to sample such extremely low probability events. How to use SMC to verify extremely safe SCCPS is thus an urgent problem to be solved [20, 21].

To date, verification of rare properties with SMC relies mainly on importance sampling. For CTMC and DTMC stochastic models, Reijsbergen et al. [22] and Barbot et al. [23] used heuristic methods to obtain an importance sampling distribution and complete property verification for the two model classes, respectively. Clarke and Zuliani [24] proposed an SMC method based on cross-entropy minimization importance sampling to verify safety properties of Stateflow/Simulink models. Zuliani et al. [17] applied the SMC method of [24] to verify safety properties of discrete-time stochastic hybrid systems (SHS). The methods proposed by Clarke and Zuliani assume that the distribution of the system path space is an exponential distribution: by simply increasing the failure rate of the system parameters, a number of paths satisfying the rare property are extracted at once and used to calculate the optimal parameters of the exponential distribution, yielding an importance sampling distribution [25]. Jégourel et al. [26] applied cross-entropy minimization to a stochastic guarded-command model, which can approximate the path distribution of the system by increasing the number of commands (the number of parameters), to obtain an importance sampling distribution for the stochastic model. However, the optimal importance sampling distributions obtained by these methods do not come from the distribution family of the system path space; they are essentially heuristic importance sampling methods, and the verification results are only rough approximations.

In this paper, we propose a method that constructs a cross-entropy optimization model over the SCCPS path space and uses an iterative learning procedure to obtain an optimal importance sampling distribution from the parameterized distribution family of the path space. This guarantees that the optimal importance sampling distribution comes from the distribution family of the SCCPS path space, and the iterative learning procedure ensures that the distribution evenly covers the region of unsafe path distributions. As our experiments show, the accuracy and efficiency of rare property verification are significantly improved.

2. Background

2.1. Statistical Model Checking

Statistical Model Checking (SMC) can be described simply as follows: given a system model $\mathcal{M}$ and a system property $\varphi$ described in bounded linear temporal logic (BLTL) [18], it uses Monte Carlo sampling, model checking, and statistical analysis techniques to qualitatively/quantitatively answer the following two questions: (i) What is the probability that $\mathcal{M}$ satisfies the property $\varphi$, i.e., $p = P(\mathcal{M} \models \varphi)$? (ii) Is the probability of satisfying the property $\varphi$ at least a given threshold $\theta$, i.e., does $P(\mathcal{M} \models \varphi) \ge \theta$ hold?

In SMC, the system model $\mathcal{M}$ is first simulated to extract a random execution path $\sigma$. The BLTL model checker is then used to determine whether $\sigma$ satisfies the property $\varphi$, and a certain number of samples are generated after multiple simulations. Statistical methods are then applied to these samples to estimate the probability that the system model satisfies the property $\varphi$ and to give a confidence interval or an estimated error margin. Let $z(\sigma)$ denote the output of the BLTL model checker: if $\sigma \models \varphi$, then $z(\sigma) = 1$; otherwise, $z(\sigma) = 0$. $z(\sigma)$ is a Bernoulli random variable, so the behavior of $z$ can be modeled by a Bernoulli distribution with parameter $p$:
$$P(z(\sigma) = 1) = p, \qquad P(z(\sigma) = 0) = 1 - p.$$

The parameter $p$ represents the probability that the model satisfies the BLTL property $\varphi$. By the Bernoulli distribution, $p = E[z(\sigma)]$ and $\mathrm{Var}[z(\sigma)] = p(1 - p)$. Since the value of $p$ is unknown, the goal of SMC is to estimate it.

SMC methods fall into two categories: hypothesis testing and parameter estimation. Hypothesis testing determines whether the probability that the system satisfies a temporal logic property is at least a given threshold, a qualitative result, whereas parameter estimation produces a quantitative result, namely, an approximation of that probability. Qualitative SMC algorithms include the single sampling plan (SSP) algorithm [27], the sequential probability ratio test (SPRT) algorithm [27], and the Bayesian hypothesis test (BHT) algorithm [18]. Quantitative SMC algorithms mainly include the approximate probabilistic model checking (APMC) algorithm [28] and the Bayesian interval estimation testing (BIET) algorithm [18]. Kim et al. [29] conducted an empirical evaluation of the performance and applicability of these four algorithms (i.e., SSP, SPRT, BHT, and BIET).
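For concreteness, quantitative SMC by crude Monte Carlo can be sketched in a few lines. The Python sketch below is illustrative only: `simulate` and `check_bltl` are hypothetical stand-ins for the system simulator and the BLTL checker, and the sample size follows the Chernoff-Hoeffding bound commonly used in APMC-style estimation.

```python
import math

def apmc_estimate(simulate, check_bltl, eps=0.01, delta=0.05):
    """Crude Monte Carlo estimate of p = P(M |= phi).

    Uses the Chernoff-Hoeffding bound N >= ln(2/delta) / (2*eps^2), so that
    P(|p_hat - p| > eps) <= delta. `simulate` draws one execution path and
    `check_bltl` returns True iff that path satisfies the BLTL property.
    """
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(1 for _ in range(n) if check_bltl(simulate()))
    return hits / n
```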

2.2. Safety Requirement Specification

In this paper, we use Bounded Linear Temporal Logic (BLTL) as our specification language. BLTL restricts Linear Temporal Logic (LTL) by placing time bounds on the temporal operators. Formally, the syntax of BLTL is given as
$$\varphi ::= x \sim c \mid \neg\varphi \mid \varphi_1 \vee \varphi_2 \mid \varphi_1 \wedge \varphi_2 \mid \varphi_1\,\mathbf{U}^{t}\,\varphi_2,$$
where $x \in SV$ (the finite set of state variables), $\sim\,\in \{\le, \ge, =\}$, $c \in \mathbb{Q}$, and $\neg$, $\vee$, and $\wedge$ are the usual Boolean connectives. The formulas $x \sim c$ are called atomic propositions (AP). The formula $\varphi_1\,\mathbf{U}^{t}\,\varphi_2$ returns true if and only if $\varphi_2$ becomes true within time $t$ and $\varphi_1$ holds until then. The operators $\mathbf{F}^{t}$ and $\mathbf{G}^{t}$ can be defined from the $\mathbf{U}^{t}$ operator: $\mathbf{F}^{t}\varphi = \text{true}\,\mathbf{U}^{t}\,\varphi$, which requires $\varphi$ to hold true within time $t$, and $\mathbf{G}^{t}\varphi = \neg\mathbf{F}^{t}\neg\varphi$, which requires $\varphi$ to hold true up to time $t$.

The semantics of BLTL formulas [28, 30, 31] is defined with respect to system traces (or executions). A trace is a sequence $\sigma = (s_0, t_0), (s_1, t_1), \ldots$, where $s_k$ is the state of the system at step $k$ and $t_k$ is the time spent in that state. The pair $(s_k, t_k)$ expresses the fact that the system moved to state $s_{k+1}$ after having spent $t_k$ time units in state $s_k$. If the trace $\sigma$ satisfies the property $\varphi$, we write $\sigma \models \varphi$. The trace suffix of $\sigma$ starting at step $k$ is denoted by $\sigma^{k}$, and $\sigma^{0}$ denotes the full trace $\sigma$.

The semantics of BLTL for a trace suffix $\sigma^{k}$ is defined as follows: (i) $\sigma^{k} \models x \sim c$ iff $x \sim c$ holds true in state $s_k$; (ii) $\sigma^{k} \models \varphi_1 \wedge \varphi_2$ iff $\sigma^{k} \models \varphi_1$ and $\sigma^{k} \models \varphi_2$; (iii) $\sigma^{k} \models \varphi_1 \vee \varphi_2$ iff $\sigma^{k} \models \varphi_1$ or $\sigma^{k} \models \varphi_2$; (iv) $\sigma^{k} \models \neg\varphi_1$ iff $\sigma^{k} \models \varphi_1$ does not hold; (v) $\sigma^{k} \models \varphi_1\,\mathbf{U}^{t}\,\varphi_2$ iff there exists $i \ge 0$ such that (a) $\sum_{0 \le l < i} t_{k+l} \le t$, (b) $\sigma^{k+i} \models \varphi_2$, and (c) for each $0 \le j < i$, $\sigma^{k+j} \models \varphi_1$. The sampling bound $\#(\varphi)$ of a BLTL formula $\varphi$ is the maximum nested sum of its time bounds, defined recursively by $\#(x \sim c) = 0$, $\#(\neg\varphi_1) = \#(\varphi_1)$, $\#(\varphi_1 \vee \varphi_2) = \#(\varphi_1 \wedge \varphi_2) = \max(\#(\varphi_1), \#(\varphi_2))$, and $\#(\varphi_1\,\mathbf{U}^{t}\,\varphi_2) = t + \max(\#(\varphi_1), \#(\varphi_2))$.
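As an illustration of the bounded-until semantics above, the following Python sketch evaluates $\varphi_1\,\mathbf{U}^{t}\,\varphi_2$ on a finite timed trace; the representation of the trace as (state, duration) pairs and the predicate callables `phi1` and `phi2` are assumptions of this example, not part of the original formalization.

```python
def holds_until(trace, phi1, phi2, bound):
    """Check sigma |= phi1 U^t phi2 on a finite timed trace.

    `trace` is a list of (state, duration) pairs; `phi1`/`phi2` map a state
    to True/False. Following the semantics above, phi2 must hold at some step
    i whose cumulative elapsed time is <= bound, with phi1 holding at every
    earlier step.
    """
    elapsed = 0.0
    for state, duration in trace:
        if elapsed > bound:
            return False
        if phi2(state):
            return True
        if not phi1(state):
            return False
        elapsed += duration
    return False
```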

Lemma 1 (bounded sampling). The problem "$\sigma \models \varphi$" is well-defined and can be checked for BLTL formulas $\varphi$ and traces $\sigma$ based only on a finite prefix of $\sigma$ of bounded duration.

Proof. By Lemma 2, the decision "$\sigma \models \varphi$" is uniquely determined (and well-defined) by considering only a prefix of $\sigma$ of duration $\#(\varphi) \ge 0$. By divergence of time, $\sigma$ reaches or exceeds this duration $\#(\varphi)$ in some finite number of steps $n$. Let $\sigma'$ denote a finite prefix of $\sigma$ of length $n$ whose duration is at least $\#(\varphi)$. Again by Lemma 2, the semantics of $\sigma' \models \varphi$ is well-defined, because any infinite extension $\sigma''$ of $\sigma'$ satisfies $\varphi$ if and only if $\sigma$ does. Consequently, the semantics of $\sigma' \models \varphi$ coincides with the semantics of $\sigma \models \varphi$. On the finite trace $\sigma'$, it is easy to see that BLTL is decidable by evaluating the atomic formulas at each state of the system simulation.

Lemma 2 (BLTL on bounded simulation traces). Let $\varphi$ be a BLTL formula and $k \in \mathbb{N}$. Then, for any two infinite traces $\sigma = (s_0, t_0), (s_1, t_1), \ldots$ and $\tilde{\sigma} = (\tilde{s}_0, \tilde{t}_0), (\tilde{s}_1, \tilde{t}_1), \ldots$ with $s_{k+l} = \tilde{s}_{k+l}$ and $t_{k+l} = \tilde{t}_{k+l}$ for all $l$ such that $\sum_{0 \le m < l} t_{k+m} \le \#(\varphi)$ [17], we have that $\sigma^{k} \models \varphi$ if and only if $\tilde{\sigma}^{k} \models \varphi$.

Proof. IH is short for induction hypothesis; the proof is by induction on the structure of $\varphi$. (1) If $\varphi$ is of the form $x \sim c$, then $\sigma^{k} \models \varphi$ iff $\tilde{\sigma}^{k} \models \varphi$, since $s_k = \tilde{s}_k$ by using the assumption of the lemma [17] for $l = 0$. (2) If $\varphi$ is of the form $\varphi_1 \vee \varphi_2$, the claim follows by the induction hypothesis, as $\#(\varphi_1) \le \#(\varphi)$ and $\#(\varphi_2) \le \#(\varphi)$. The proof is similar for $\varphi_1 \wedge \varphi_2$ and $\neg\varphi_1$. (3) If $\varphi$ is of the form $\varphi_1\,\mathbf{U}^{t}\,\varphi_2$, then $\sigma^{k} \models \varphi$ iff there is an $i \ge 0$ for which the following three conditions are satisfied: (a) $\sum_{0 \le l < i} t_{k+l} \le t$; (b) $\sigma^{k+i} \models \varphi_2$; and (c) $\sigma^{k+j} \models \varphi_1$ for each $0 \le j < i$. (a) is equivalent to (a') $\sum_{0 \le l < i} \tilde{t}_{k+l} \le t$, because the durations of the traces $\sigma$ and $\tilde{\sigma}$ coincide for each index $l$ with $\sum_{0 \le m < l} t_{k+m} \le \#(\varphi)$ by the assumption [17]. (b) is equivalent to (b') $\tilde{\sigma}^{k+i} \models \varphi_2$ by the induction hypothesis, as follows: we know that the traces $\sigma$ and $\tilde{\sigma}$ match at $k$ for duration $\#(\varphi)$ and need to show that the semantics of $\varphi_2$ matches at $k + i$. By IH, we know that $\varphi_2$ has the same semantics at $k + i$ (that is, $\sigma^{k+i} \models \varphi_2$ iff $\tilde{\sigma}^{k+i} \models \varphi_2$) provided that we can show that the traces $\sigma$ and $\tilde{\sigma}$ match at $k + i$ for duration $\#(\varphi_2)$. For this case, consider any $l$ with $\sum_{0 \le m < l} t_{k+i+m} \le \#(\varphi_2)$. Then, by (a), $\sum_{0 \le m < i+l} t_{k+m} \le t + \#(\varphi_2) \le \#(\varphi)$. As $l$ was arbitrary, we conclude from this with assumption [17] that, indeed, $s_{k+i+l} = \tilde{s}_{k+i+l}$ and $t_{k+i+l} = \tilde{t}_{k+i+l}$ for all such $l$. Thus, the IH for $\varphi_2$ yields the equivalence of (b) and (b'). (c) For each $0 \le j < i$, $\sigma^{k+j} \models \varphi_1$ is equivalent to (c') $\tilde{\sigma}^{k+j} \models \varphi_1$; the proof is similar to that for (b'), using $\#(\varphi_1)$. The existence of an $i$ for which conditions (a'), (b'), and (c') hold is equivalent to $\tilde{\sigma}^{k} \models \varphi$.

2.3. Safety Critical System Model

A Safety-Critical Cyber-Physical System (SCCPS) [32] is defined as a tuple $(Q, X, E, \mathrm{Inv}, F, G, R)$, where
(i) $Q$ is a finite set of discrete states (control modes);
(ii) $X$ is a finite set of continuous variables;
(iii) $E \subseteq Q \times Q$ is a finite set of discrete transitions;
(iv) $\mathrm{Inv}$ represents the mapping from the discrete state set to the continuous state space; for $q \in Q$, $\mathrm{Inv}(q)$ is the invariant set of $q$;
(v) $F$ is a vector-field mapping that assigns a set of Stochastic Differential Equations (SDE) to each control mode $q$ to describe its continuous random dynamic behavior, $dx = b(q, x)\,dt + \sigma(q, x)\,dW_t$, where $W_t$ is a standard Wiener process defined over the reals; the drift and diffusion coefficients are assumed to be bounded and Lipschitz continuous;
(vi) $G$ assigns a guard condition to each discrete transition $e = (q, q') \in E$, satisfying the following conditions: $G(e)$ is a measurable subset of $\partial\,\mathrm{Inv}(q)$, and the guards of distinct transitions leaving $q$ are disjoint subsets of $\partial\,\mathrm{Inv}(q)$;
(vii) $R$ is a reset mapping: $R(e, x)$ is a probability measure defined on $\mathrm{Inv}(q')$, according to which the continuous variables are reset when transition $e$ is taken from continuous state $x$.

According to this definition, the SCCPS hybrid state space is $S = \bigcup_{q \in Q} \{q\} \times \mathrm{Inv}(q)$, and $(q, x) \in S$ represents a hybrid state. The continuous dynamics of the SCCPS evolves according to the SDE of the current control mode, while the discrete dynamics migrates from one control mode to another, according to the guard condition on the discrete transition, when the continuous variable reaches the boundary of the invariant and can no longer remain within it.

Let $x(t)$ be the solution of the SDE in mode $q$ with initial state $x_0$; $\tau_q = \inf\{t \ge 0 : x(t) \notin \mathrm{Inv}(q)\}$ is the first time at which the evolution of the continuous variable violates the invariant, that is, the first exit time from the control mode $q$.

SCCPS execution semantics: a random execution of an SCCPS is defined as a stochastic process on the SCCPS state space together with a sequence of stopping times $\tau_0 < \tau_1 < \cdots$ such that (i) $(q(\tau_0), x(\tau_0)) = (q_0, x_0)$ is the initial state of the SCCPS; (ii) on each interval $[\tau_i, \tau_{i+1})$, $q(t)$ is constant and $x(t)$ is a continuous solution of the SDE of mode $q(\tau_i)$; at $\tau_{i+1}$, the probability distribution of $x(\tau_{i+1})$ is determined by the reset map $R(e, x(\tau_{i+1}^{-}))$, where $e = (q(\tau_i), q(\tau_{i+1}))$.

SCCPS path: an SCCPS execution path is defined as an infinite sequence $\omega = (s_0, t_0), (s_1, t_1), \ldots$ starting from the initial state $s_0$, where $s_i$ represents an SCCPS state and $t_i$ is the time spent in $s_i$ before the transition to the next state $s_{i+1}$.

3. Our Approach

In this section, we present our proposed method, which constructs a cross-entropy optimization model over the SCCPS path space and uses an iterative learning procedure to obtain an optimal importance sampling distribution from the parameterized distribution family of the path space.

3.1. SCCPS Path Space Model
3.1.1. Model Representation

To avoid the complexity of the dynamic evolution of SCCPS, SMC does not analyze the structure of the SCCPS but instead samples its execution paths; the behavior of the SCCPS evolving over time is characterized by these paths. According to the execution semantics of SCCPS, the generation of an execution path can be described as follows: in the current control mode $q$, the continuous variable $x$ evolves according to the SDE of $q$. When the evolution of $x$ satisfies the guard condition $G(q, q')$, the system migrates to the next control mode $q'$, and the initial value of $x$ in $q'$ is determined by the random reset kernel $R$. The residence time in $q$ is a random variable $\tau_q$ whose value depends on the SDE of $q$, the initial value of $x$, and $\mathrm{Inv}(q)$. Because the next state of the SCCPS depends only on the current state and the residence time in the current state, the execution path of the SCCPS can be regarded as being generated by a continuous-time Markov process on the hybrid state space. The longer the residence time in $q$, the higher the probability that a migration from $q$ has occurred. We can further presume that the residence time in $q$ obeys an exponential distribution, under which the continuous-time Markov process becomes a CTMC.

Let $G_q = \{g_1, g_2, \ldots, g_m\}$ denote the set of guard conditions of all edges starting from $q$, where each $g_i \subseteq \partial\,\mathrm{Inv}(q)$. In mode $q$, let $\tau_i$ be the time for the continuous variable to evolve until it satisfies guard $g_i$; then, the residence time in $q$ is $\tau_q = \min\{\tau_1, \ldots, \tau_m\}$. Supposing that $\tau_1, \ldots, \tau_m$, respectively, obey exponential distributions with parameters $\lambda_1, \ldots, \lambda_m$, the residence time in $q$ obeys an exponential distribution with parameter $\lambda_q = \lambda_1 + \cdots + \lambda_m$. With this assumption, the execution paths of the SCCPS can be generated by a CTMC random process.
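Under this assumption, sampling an SCCPS path reduces to simulating a CTMC. The Python sketch below assumes the rate matrix is given as a nested dictionary (a hypothetical representation of $\Lambda$): the residence time in each state is drawn from an exponential distribution with the total outgoing rate, and the successor state is chosen proportionally to its rate.

```python
import random

def sample_ctmc_path(rates, initial_state, time_bound):
    """Sample one CTMC execution path up to `time_bound`.

    `rates[s]` is a dict {successor: rate}. The residence time in s is the
    minimum of independent exponentials, i.e. Exp(sum of outgoing rates),
    and the successor is chosen with probability proportional to its rate.
    """
    path, state, elapsed = [], initial_state, 0.0
    while elapsed < time_bound and rates.get(state):
        out = rates[state]
        total = sum(out.values())
        dwell = random.expovariate(total)                      # Exp(lambda_q)
        succ = random.choices(list(out), weights=list(out.values()))[0]
        path.append((state, dwell))
        elapsed += dwell
        state = succ
    return path
```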

Definition 1 (SCCPS path generation model). The path generation model on the SCCPS state space is defined as a CTMC $(S, s_0, \Lambda)$, where (i) $S$ represents the discrete state set of the SCCPS; (ii) $s_0 \in S$ denotes the initial state of the SCCPS; (iii) $\Lambda: S \times S \to \mathbb{R}_{\ge 0}$ is the migration rate function, and all migration rate function values form the migration rate matrix $\Lambda$. It can be seen from this definition that when the CTMC structure is known, its behavior is controlled by the migration rate matrix $\Lambda$, whose values come from the SCCPS. The value of $\Lambda$ is estimated with the maximum likelihood method from the state-transition time samples obtained by simulating the execution of the SCCPS.

3.1.2. Algorithm of Learning Model Parameters

The rarity of a path does not necessarily imply that the transition rate between two adjacent discrete states is low, and the rarity of the safety property in the path space does not necessarily imply that the optimal parameters are rare in the parameter space. Based on this observation, this section introduces our approach of leveraging maximum likelihood estimation to estimate the migration rate between two adjacent discrete states of the SCCPS and thereby obtain the migration rate matrix $\Lambda$. By simulating the execution of the SCCPS in each discrete state, we sample the times at which the state migrates to its successor states and then use maximum likelihood estimation to obtain an estimate of $\Lambda$.

For a state $s$, we simulate executing the SDE in $s$ to obtain the migration time samples $t_1, t_2, \ldots, t_n$ to an adjacent state $s'$. Assuming that the migration time between $s$ and $s'$ obeys an exponential distribution with parameter $\lambda_{ss'}$, the likelihood function of the samples can be obtained as
$$L(\lambda_{ss'}) = \prod_{i=1}^{n} \lambda_{ss'}\, e^{-\lambda_{ss'} t_i},$$
and its log-likelihood function is as follows:
$$\ln L(\lambda_{ss'}) = n \ln \lambda_{ss'} - \lambda_{ss'} \sum_{i=1}^{n} t_i.$$

We further take the derivative of the log-likelihood function with respect to $\lambda_{ss'}$ and set it equal to 0, which yields the estimate $\hat{\lambda}_{ss'} = n / \sum_{i=1}^{n} t_i$. This estimate is consistent (asymptotically unbiased). Its estimated variance is
$$\mathrm{Var}(\hat{\lambda}_{ss'}) \approx \frac{\hat{\lambda}_{ss'}^{2}}{n},$$
which is a biased estimate of the variance, but the variance decreases as the number of samples increases.
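A minimal sketch of this estimator, assuming the migration-time samples for one state pair are already collected in a list:

```python
def exp_rate_mle(sojourn_times):
    """Maximum likelihood estimate of the exponential rate lambda from
    observed migration times: lambda_hat = n / sum(t_i), with the asymptotic
    variance estimate lambda_hat^2 / n."""
    n = len(sojourn_times)
    lam_hat = n / sum(sojourn_times)
    var_hat = lam_hat ** 2 / n
    return lam_hat, var_hat
```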

In most cases, it is difficult to obtain a closed-form expression for the random execution of an SCCPS. However, what safety verification is concerned with is the reachability analysis of the discrete states, and the discrete state set together with its transitions captures all the necessary information. Therefore, we derive a DTMC from the SCCPS path generation model to represent the path space of the SCCPS. The transition probability matrix $P$ of the DTMC can be obtained from the migration rate matrix $\Lambda$ of the SCCPS path generation model: for two states $s, s' \in S$,
$$P(s, s') = \frac{\Lambda(s, s')}{\Lambda(s)}, \quad \text{where } \Lambda(s) = \sum_{s'' \in S} \Lambda(s, s'').$$
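A small sketch of this derivation, reusing the nested-dictionary representation of the rate matrix assumed above:

```python
def embedded_dtmc(rates):
    """Derive the embedded DTMC from a CTMC rate matrix:
    P(s, s') = rate(s, s') / sum of all rates leaving s."""
    probs = {}
    for s, out in rates.items():
        total = sum(out.values())
        probs[s] = {succ: r / total for succ, r in out.items()}
    return probs
```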

3.2. Method of Sampling Rare Attributes

In the path space of a high-safety SCCPS, it is difficult to obtain samples satisfying the rare property, which makes SMC infeasible. To address this challenge, we propose a method for sampling rare properties. It uses the cross-entropy method to learn an optimal importance sampling distribution from the path space of the SCCPS; under this distribution, samples satisfying the rare property are easy to obtain, so the convergence of SMC is accelerated. The importance sampling estimate is corrected by likelihood-ratio weighting to ensure that the SMC verification result is unbiased.

3.2.1. Zero-Variance Importance Sampling Distribution

The basic idea of the importance sampling method [33, 34] is to change the probability density of the random variables so as to obtain samples of extremely small probability events with a higher probability. We now present the SMC method based on importance sampling. Let $f(\omega)$ be the true distribution of paths $\omega$ and $g(\omega)$ be the importance sampling distribution; $g$ can produce samples of the extremely small probability event with a higher probability and satisfies $g(\omega) > 0$ whenever $z(\omega) f(\omega) > 0$. When verifying extremely small probability events, it is difficult to obtain satisfying samples from $f$, so the importance sampling method samples from $g$ instead. The probability $p$ that the system satisfies the property can be described as
$$p = E_f[z(\omega)] = \int z(\omega)\,\frac{f(\omega)}{g(\omega)}\,g(\omega)\,d\omega = E_g[z(\omega) W(\omega)],$$
where $W(\omega) = f(\omega)/g(\omega)$ is the likelihood ratio and $E_g$ is the expectation under the importance sampling distribution. We leverage the likelihood ratio to correct the weighting and ensure that the estimate of $p$ is unbiased. We then randomly sample $N$ independent execution paths $\omega_1, \ldots, \omega_N$ from the importance distribution $g$ and obtain the unbiased estimate
$$\hat{p} = \frac{1}{N} \sum_{i=1}^{N} z(\omega_i) W(\omega_i) \qquad (9)$$
and the estimated variance
$$\hat{\sigma}^2 = \frac{1}{N-1} \sum_{i=1}^{N} \big( z(\omega_i) W(\omega_i) - \hat{p} \big)^2 \qquad (10)$$
of $\hat{p}$, respectively.
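A minimal sketch of this likelihood-ratio-weighted estimator, assuming the paths have already been sampled under the importance distribution and their likelihood ratios precomputed; the `satisfies` callable stands in for the BLTL checker:

```python
def importance_sampling_estimate(paths, weights, satisfies):
    """Estimate p_hat = (1/N) * sum z(w_i) * W(w_i) with its sample variance.

    `paths` are sampled under the importance distribution g, `weights[i]` is
    the likelihood ratio f(path_i)/g(path_i), and `satisfies` checks the
    BLTL property on a path.
    """
    vals = [w if satisfies(p) else 0.0 for p, w in zip(paths, weights)]
    n = len(vals)
    p_hat = sum(vals) / n
    var_hat = sum((v - p_hat) ** 2 for v in vals) / (n - 1)
    return p_hat, var_hat
```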

The efficiency and accuracy of importance sampling rely on the selection of the distribution $g$. If the selection is poor, the importance sampling method cannot effectively achieve an acceleration effect and may even decelerate the estimation. The key problem of importance sampling is therefore to find a sampling density that minimizes the estimated variance. Setting the variance in formula (10) to 0 yields the following formula:
$$g^{*}(\omega) = \frac{z(\omega) f(\omega)}{p}, \qquad (11)$$
where $g^{*}$ is the zero-variance importance sampling distribution: extracting only one sample from the zero-variance importance sampling distribution suffices to calculate the estimate, that is, any sample is an unbiased estimate equal to the mean. However, the zero-variance importance sampling distribution depends on the true value $p$, which is unknown, so it is impossible to sample from $g^{*}$. This paper proposes to use the cross-entropy method to find, in the parameterized distribution family of the sample path space, an approximately optimal importance sampling distribution closest to $g^{*}$, so as to reduce the SMC variance and accelerate the convergence of the SMC algorithm.

3.2.2. Cross-Entropy Optimization Model

In this section, we obtain the optimal importance sampling distribution by minimizing the cross entropy between two probability distributions. Following the general definition of cross entropy [35], we first define the cross entropy on the SCCPS path space.

Definition 2 (cross entropy on the SCCPS path space). The cross entropy between two probability measures $g_1$ and $g_2$ on the SCCPS path space is as follows:
$$D(g_1, g_2) = \int g_1(\omega) \ln g_1(\omega)\,d\omega - \int g_1(\omega) \ln g_2(\omega)\,d\omega. \qquad (12)$$
The cross entropy is used to assess the similarity of two probability distributions: the smaller its value, the more similar $g_1$ and $g_2$ are, and $D(g_1, g_2) = 0$ if and only if $g_1 = g_2$.
According to Definition 2, the construction of the cross-entropy optimization model on the SCCPS path space is given below. Assume that the true distribution of the SCCPS paths comes from the parameterized distribution family $\{f(\cdot; v)\}$ with true parameter $u$, so the original path distribution is $f(\cdot; u)$. The cross-entropy optimization method selects from this family the distribution $f(\cdot; v^{*})$ that has the smallest cross entropy with the zero-variance distribution $g^{*}$. This optimization problem can be described as
$$v^{*} = \arg\min_{v} D(g^{*}, f(\cdot; v)) = \arg\min_{v} \left[ \int g^{*}(\omega) \ln g^{*}(\omega)\,d\omega - \int g^{*}(\omega) \ln f(\omega; v)\,d\omega \right]. \qquad (13)$$
The first term of formula (13) has nothing to do with $v$, so minimizing the cross entropy is equivalent to maximizing the second term. Substituting $g^{*}(\omega) = z(\omega) f(\omega; u)/p$, the minimization problem of formula (13) is equivalent to the maximization problem of formula (14):
$$v^{*} = \arg\max_{v} E_{u}\big[z(\omega) \ln f(\omega; v)\big]. \qquad (14)$$
Solving the optimization problem of formula (14) requires sampling from the true distribution $f(\cdot; u)$. However, in the case of rare property verification, it is difficult to sample from $f(\cdot; u)$ a path that satisfies the rare property. Using importance sampling once more, we sample from a distribution $f(\cdot; w)$ whose parameter $w$ is chosen so as to increase the probability of the paths that satisfy the rare property. Therefore, the optimization problem of formula (14) can be reformulated as
$$v^{*} = \arg\max_{v} E_{w}\big[z(\omega) W(\omega; u, w) \ln f(\omega; v)\big], \qquad (15)$$
where the likelihood ratio function is $W(\omega; u, w) = f(\omega; u)/f(\omega; w)$. The optimal solution of the optimization problem in formula (15) can be estimated from path samples by replacing the expectation with the sample mean, giving the estimate
$$\hat{v} = \arg\max_{v} \frac{1}{N} \sum_{i=1}^{N} z(\omega_i) W(\omega_i; u, w) \ln f(\omega_i; v), \qquad (16)$$
where $\omega_i$ is a sample from the distribution $f(\cdot; w)$.

3.3. Algorithm of Verifying the Cross-Entropy Safety

In Section 3.1, we provided a DTMC-based method to approximate the SCCPS path space. SMC mainly considers system execution paths within a bounded time; let $\omega = s_0 s_1 \cdots s_M$ be such a path, where $M$ is a random variable representing the number of state transitions whose value varies with $\omega$. Let $(s, s')$ denote two adjacent and ordered states in $\omega$, $\Pi(\omega)$ represent the set of ordered state pairs occurring in $\omega$, $N_{\omega}(s, s')$ represent the number of transitions from state $s$ to state $s'$ in $\omega$, and $N_{\omega}(s)$ represent the number of occurrences of state $s$ in $\omega$. Then, the probability measure of path $\omega$ under the system parameter $v$ (the DTMC transition probabilities) can be formulated as
$$f(\omega; v) = \prod_{(s, s') \in \Pi(\omega)} v(s, s')^{N_{\omega}(s, s')}. \qquad (17)$$

Substituting formula (17) into formula (16), we obtain
$$\hat{v} = \arg\max_{v} \frac{1}{N} \sum_{i=1}^{N} z(\omega_i) W(\omega_i; u, w) \sum_{(s, s') \in \Pi(\omega_i)} N_{\omega_i}(s, s') \ln v(s, s'), \qquad (18)$$
and formula (18), together with the normalization constraints $\sum_{s'} v(s, s') = 1$, can be transformed by the Lagrange multiplier method into the following optimization problem:
$$\max_{v} \frac{1}{N} \sum_{i=1}^{N} z(\omega_i) W(\omega_i; u, w) \sum_{(s, s') \in \Pi(\omega_i)} N_{\omega_i}(s, s') \ln v(s, s') + \sum_{s} \mu_{s} \Big(1 - \sum_{s'} v(s, s')\Big), \qquad (19)$$
where $\mu_{s}$ are the Lagrange multipliers. Taking the derivative of formula (19) with respect to $v(s, s')$ and setting it equal to 0, the solution is
$$\hat{v}(s, s') = \frac{\sum_{i=1}^{N} z(\omega_i) W(\omega_i; u, w)\, N_{\omega_i}(s, s')}{\sum_{i=1}^{N} z(\omega_i) W(\omega_i; u, w)\, N_{\omega_i}(s)}, \qquad (20)$$
where $\omega_i$ is the $i$-th sample path drawn from the distribution $f(\cdot; w)$ and $u$ represents the true probability distribution parameter of the SCCPS paths.
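A sketch of the closed-form update in formula (20), assuming paths are given as lists of visited discrete states and the likelihood ratios are precomputed; the helper name and the data layout are hypothetical:

```python
from collections import defaultdict

def ce_update(paths, weights, satisfies):
    """Cross-entropy update (formula (20)) for DTMC transition parameters.

    v_hat(s, s') = sum_i z_i * W_i * N_i(s, s') / sum_i z_i * W_i * N_i(s),
    where z_i indicates that path i satisfies the rare property and W_i is
    its likelihood ratio. Each path is a list of visited states.
    """
    num = defaultdict(float)   # weighted transition counts N_i(s, s')
    den = defaultdict(float)   # weighted source-state visit counts N_i(s)
    for path, w in zip(paths, weights):
        if not satisfies(path):
            continue
        for s, s_next in zip(path, path[1:]):
            num[(s, s_next)] += w
            den[s] += w
    return {(s, s2): c / den[s] for (s, s2), c in num.items()}
```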

Formula (20) indicates that the estimated optimal solution depends on the initial distribution $f(\cdot; w)$. However, the initial distribution is generally far from the optimal distribution. Therefore, in order to reduce the influence of the initial distribution on the optimal importance sampling distribution, this paper proposes an iterative solution over the path space. Through iteration, the algorithm can explore a wider path space and thereby obtain a better approximation of the optimal solution.

Let the initial distribution parameter be $v_0 = w$; an iterative formula can then be obtained from formula (20):
$$\hat{v}_{j+1}(s, s') = \frac{\sum_{i=1}^{N} z(\omega_i^{(j)})\, W(\omega_i^{(j)}; u, v_j)\, N_{\omega_i^{(j)}}(s, s')}{\sum_{i=1}^{N} z(\omega_i^{(j)})\, W(\omega_i^{(j)}; u, v_j)\, N_{\omega_i^{(j)}}(s)}, \qquad (21)$$
where $N$ is the number of samples per iteration, $W(\cdot; u, v_j)$ is the likelihood ratio of the $j$-th iteration, and $\omega_i^{(j)}$ is the $i$-th sample path sampled from the distribution $f(\cdot; v_j)$.

Usually, only a few of the state transitions appear in each simulated execution. During an iteration, some parameters play no role in the paths that satisfy the extremely small probability event; formula (21) would set these parameter values to zero, so they would remain zero in all subsequent iterations. As a result, the iterative algorithm would converge prematurely and fail to explore a wider parameter space. To avoid this situation, this paper adopts a smoothing strategy that temporarily reduces the importance of inactive parameters in an iteration instead of simply setting them to zero. The smoothing strategy weights the current iteration value with the parameters of the previous iteration:
$$v_{j+1}(s, s') = \alpha\, \hat{v}_{j+1}(s, s') + (1 - \alpha)\, v_{j}(s, s'), \quad 0 < \alpha \le 1. \qquad (22)$$

The smoothing strategy can retain parameters that are important but not yet effective. Iterative formula (21) and smoothing formula (22) jointly ensure that approximately uniform sampling is obtained from the set of paths satisfying the extremely small probability event.
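A small sketch of the smoothing step in formula (22); the dictionary representation of the parameter vectors and the default value of the smoothing factor are assumptions of this example:

```python
def smooth_update(v_new, v_prev, alpha=0.7):
    """Smoothing step (formula (22)): v_{j+1} = alpha * v_hat + (1 - alpha) * v_j.

    Transitions that were not observed in the current iteration (missing or
    zero in v_new) keep a reduced, but nonzero, weight instead of being zeroed.
    """
    keys = set(v_new) | set(v_prev)
    return {k: alpha * v_new.get(k, 0.0) + (1 - alpha) * v_prev.get(k, 0.0)
            for k in keys}
```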

The selected initial distribution should be able to produce some paths that satisfy the extremely small probability event already in the first iteration; that is, the selected initial parameter should increase the probability of occurrence of the extremely small probability event. Therefore, in this paper, we set the initial parameter to a uniform distribution, from which sample paths satisfying the extremely small probability event can be obtained quickly. The condition for stopping the iteration can be that the coefficient of variation, or the distance between two consecutive iteration parameter vectors, is not higher than a certain constant, or that the maximum number of iterations is reached. For example, given any small positive number $\varepsilon$, the iteration stops if $\lVert v_{j+1} - v_{j} \rVert \le \varepsilon$ is satisfied. To facilitate the comparison, we limit the maximum number of iterations in the experiments. To sum up, Algorithm 1 presents the importance sampling distribution learning algorithm, which iteratively computes the approximately optimal importance sampling distribution in the SCCPS path space for the property being verified.

Input: N, the number of samples per iteration.
Input: u, the true path distribution parameter of SCCPS.
Input: v0, the initialization parameter.
Input: maxIter, the maximum number of iterations.
Output: v*, the optimal parameters.
(1) Function learningAlg (N, u, v0, maxIter)
(2)  j = 0;
(3)  while j < maxIter do
(4)   A = 0, B = 0, i = 1
(5)   while i ≤ N do
(6)    generate a path ωi according to the pdf f(·; vj)
(7)    if ωi ⊨ φ then
(8)     Wi = f(ωi; u)/f(ωi; vj);
(9)     A(s, s′) = A(s, s′) + Wi · Nωi(s, s′) for every pair (s, s′) in ωi;
(10)    B(s) = B(s) + Wi · Nωi(s) for every state s in ωi;
(11)   i = i + 1;
(12)   vj+1(s, s′) = α · A(s, s′)/B(s) + (1 − α) · vj(s, s′);
(13)  j = j + 1
(14) return vj
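Putting the pieces together, Algorithm 1 can be sketched in Python as follows. The sketch composes the `ce_update` and `smooth_update` helpers shown earlier; `simulate`, `check_bltl`, and `likelihood_ratio` are hypothetical stand-ins for the SCCPS simulator, the BLTL checker, and the path likelihood ratio $f(\omega; u)/f(\omega; v)$.

```python
def learning_alg(simulate, check_bltl, likelihood_ratio, v0,
                 n_samples=1000, max_iter=20, alpha=0.7):
    """Iterative cross-entropy learning of the importance sampling parameters
    (Algorithm 1), reusing ce_update and smooth_update from the sketches above.

    `simulate(v)` draws one path under parameters v, `check_bltl(path)` tests
    the rare property, and `likelihood_ratio(path, v)` returns f(path; u)/f(path; v).
    """
    v = v0
    for _ in range(max_iter):
        paths = [simulate(v) for _ in range(n_samples)]
        weights = [likelihood_ratio(p, v) for p in paths]
        v_hat = ce_update(paths, weights, check_bltl)      # formula (21)
        v = smooth_update(v_hat, v, alpha)                 # formula (22)
    return v
```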

Ignoring the sample acquisition time and the BLTL model checking time, the time complexity of Algorithm 1 is O(maxIter · N). Since the objective function being optimized is convex, there is a unique optimal solution; if Algorithm 1 converges, it must converge to the vicinity of this unique optimal solution [36]. Since the number of samples in each iteration is finite, the convergence is probabilistic but not necessarily monotonic. By simply limiting the maximum number of iterations maxIter, the algorithm is guaranteed to terminate with probability 1. For a proof of convergence of cross-entropy optimization, please refer to [37]; a formal proof of convergence is therefore not provided in this paper. In our experiments, we observe that the parameters do converge. Once the parameters converge, the last set of simulated samples, drawn under the optimal importance sampling distribution, is used to estimate the probability that the SCCPS satisfies the safety property. Algorithm 2 describes the safety verification procedure.

Input: N, the number of samples.
Input: u, the true path distribution parameter of SCCPS.
Input: v*, the optimal parameters calculated by Algorithm 1.
Output: p̂, the probability of the SCCPS satisfying the safety property.
(1) Function verifyingAlg (N, u, v*)
(2)  A = 0, i = 1
(3)  while i ≤ N do
(4)   generate a path ωi according to the pdf f(·; v*)
(5)   if ωi ⊨ φ then
(6)    Wi = f(ωi; u)/f(ωi; v*);
(7)    A = A + Wi;
(8)   i = i + 1
(9)  return p̂ = A/N

4. Experiment and Analysis

To evaluate the effectiveness and performance of the Cross-Entropy Safety Verification Algorithm (CESVA) proposed in this paper, we apply CESVA to a fault-tolerant controller for an aircraft elevator system (FTC4AE), a Stateflow/Simulink hybrid system modeling case from MATLAB. Randomness is introduced through fault injection, and the system execution paths are obtained by simulation in MATLAB. Path checking is performed by the BLTL model checker of Plasma-Lab [38]. In the experiments, the rare property of FTC4AE is verified with the CESVA method, which is further compared with the Heuristic Importance Sampling (HIS) method [17].

4.1. Validity Measurement of Experimental Results

In the case of non-rare property verification, the confidence interval is used to assess the accuracy of the various methods, while in the case of rare property verification, the relative error of the estimate is used to assess its accuracy:
$$\mathrm{RE} = \frac{\hat{\sigma}}{\hat{p}\sqrt{N}},$$
where the unknown true probability $p$ is replaced by the current estimate $\hat{p}$ and $\hat{\sigma}$ is the sample standard deviation.

Skewness measures the direction and degree of asymmetry of a data distribution; it is the characteristic number describing the asymmetry of the probability density curve with respect to the mean. Skewness is defined as the third standardized moment of the sample. The skewness of a normal distribution is 0, and an estimator whose skewness is close to 0 is evenly distributed around its mean:
$$\mathrm{Skew} = \frac{\frac{1}{N}\sum_{i=1}^{N}\big(x_i - \bar{x}\big)^{3}}{\Big(\frac{1}{N}\sum_{i=1}^{N}\big(x_i - \bar{x}\big)^{2}\Big)^{3/2}}.$$
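A small sketch computing these two indicators, assuming the per-path weighted indicator values $z(\omega_i) W(\omega_i)$ are collected in a list:

```python
import math

def relative_error_and_skewness(samples):
    """Sample relative error sigma_hat / (p_hat * sqrt(N)) and skewness
    (third standardized sample moment) of the weighted indicator values."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = math.sqrt(var)
    rel_err = std / (mean * math.sqrt(n))
    skew = (sum((x - mean) ** 3 for x in samples) / n) / std ** 3
    return rel_err, skew
```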

A negative skewness means that the distribution is left-tailed: there are fewer data to the left of the mean than to the right and, intuitively, the tail on the left is longer than the tail on the right. In contrast, a positive skewness means that the distribution is right-tailed: there are fewer data to the right of the mean than to the left and the tail on the right is longer than the tail on the left.

4.2. Experiment and Analysis on a Fault-Tolerant Controller for the Aircraft Elevator System

The fault-tolerant controller for the aircraft elevator system is part of a large Simulink model of the HL-20 rescue vehicle developed by the National Aeronautics and Space Administration [39]. The two horizontal tails on the two sides of the aircraft's fuselage are controlled by two elevators, respectively, and each elevator has two independent hydraulic actuators. In normal operation, each elevator is positioned by its corresponding external actuator, and its internal actuator is used when the external actuator does not work. The two external actuators are driven by two independent hydraulic circuits, and the two internal actuators are both connected to a third hydraulic circuit. The system must ensure that only one set of actuators (i.e., external or internal) positions the elevator at any given time. If an external actuator or its corresponding hydraulic circuit fails, the system activates the internal actuator; if the fault persists, the external actuator is shut down and eventually isolated. A fault in a hydraulic circuit may be temporary, and once the fault is cleared, the hydraulic circuit can always be restored to the online state. The control logic of the system is implemented as a Stateflow chart, while the hydraulic actuators and elevators are modeled in Simulink.

By modifying the Stateflow/Simulink model, we add random faults to the three hydraulic circuits. The fault is modeled as an out-of-bounds reading of the circuit pressure, and the fault injection is modeled as three independent Poisson processes. When a hydraulic circuit fails, the circuit stays in the fault state for one second; then the pressure reading is restored to its normal value and the fault state ends. In our experiments, the safety property to be estimated is the probability that, within 25 seconds, the horizontal tails do not respond to the control inputs for a duration of 1 second.

We estimated the probability of the BLTL formula $\varphi$ encoding this property, in which two of the hydraulic circuits drive the external actuators, while the third hydraulic circuit drives the internal actuators.

In the experiment, the failure rate of the three hydraulic circuits is set to 0.001, and the fault repair rate is 1. From these two parameters, the true parameter $u$ of Algorithm 1 can be calculated. With these parameters, it is still difficult to obtain samples that satisfy the property $\varphi$; therefore, to ensure that such samples can be obtained, the initial failure rate is set to 0.1 and the fault repair rate to 1, from which the initial iteration parameter $v_0$ of Algorithm 1 is calculated. To assess the performance of the CESVA method in verifying the rare property, 20 iterations of Algorithm 1 are performed; each iteration uses $N$ samples with smoothing factor $\alpha$, so the total number of required samples is $20N$.

Figure 1 shows the trend of the failure rate parameters over the 20 iterations of the CESVA method. At the beginning of the iterations, the parameters converge rapidly; when the parameters are close to their optimal values, their convergence slows down and their values fluctuate randomly. From the 16th iteration, the failure rate parameters converge to stable values. The convergence trend shows that the failure rate parameters tend to increase with the number of iterations, indicating that the proportion of sampled paths satisfying the rare property gradually increases.

Figure 2 illustrates the distribution of the estimated values of the CESVA method over the iterations; the estimate gradually converges from the 17th iteration. Figure 3 presents the distribution of the relative error of the CESVA method over the iterations; the relative error gradually converges from the 16th iteration. Finally, an estimated probability of the safety property is obtained, with a relative error of 0.01.

To verify the statistical performance of the CESVA method, 100 experiments were carried out with the above parameters, each using the same number of samples. Table 1 compares the CESVA method with the HIS method under the same sample size, reporting statistical indicators such as the mean, skewness, standard deviation (of the likelihood ratio), relative error, and sample size of each experiment. As shown in Table 1, with the same sample size, the estimates of the CESVA method are more closely distributed around the mean, and the likelihood-ratio standard deviation and the relative error are more than 10 times smaller than those of the HIS method. Although the true probability is unknown, statistical indicators such as the likelihood-ratio standard deviation, skewness, and relative error indicate that the mean is very close to the true probability.

5. Related Work

Rare property verification for SMC mainly relies on the importance sampling method, the importance splitting method, and statistical learning methods.

The importance sampling method is an effective way to address the verification of rare properties. For the CTMC and DTMC stochastic models, Reijsbergen et al. [40] and Barbot et al. [23] leveraged heuristic methods to obtain an importance sampling distribution and complete property verification for the two types of models. For the Stateflow/Simulink model, Clarke and Zuliani [24] proposed an SMC method based on cross-entropy minimization importance sampling to verify its safety properties. Zuliani et al. [17] further used the SMC method of [24] to verify the safety properties of a class of discrete-time SHS. The method proposed by Clarke and Zuliani [24] assumes that the distribution of the system path space is an exponential distribution: by simply increasing the failure rate of the system parameters and calculating the optimal parameters of the exponential distribution from the paths satisfying the rare property extracted at one time, an importance sampling distribution is obtained. Jégourel et al. [26] applied the cross-entropy minimization method to a stochastic guarded-command model, which can approximate the path distribution of the system by increasing the number of commands (the number of parameters), to obtain an importance sampling distribution for the stochastic model. However, the optimal importance sampling distributions obtained by the above methods do not come from the distribution family of the system path space, and these methods actually belong to the class of heuristic importance sampling methods.

The importance splitting method [34] is a method of reducing the estimation variance. Based on importance splitting, Jégourel et al. [33] proposed an SMC algorithm for the verification of small probability events. The key idea is to decompose the system's logical property into nested properties whose probabilities are easier to estimate, which reduces the number of sample paths required for verification. To improve the performance, the property needs to be decomposed into multiple levels with different probabilities. During the decomposition, paths are replicated or eliminated depending on their intermediate behavior, and when the decomposition ends, an estimate of the probability that the property is satisfied is obtained. The importance splitting method is essentially heuristic and model-dependent, and it lacks the support of theoretical results.

Applying statistical learning methods to SMC is also an important research direction. Du et al. [19] proposed a learning-based SMC framework built on a support vector machine binary classifier. It uses cost-sensitive learning and resampling to address the imbalanced-data learning problem of support vector machines and predicts and assesses the probability of occurrence of small-probability events with a relatively small number of samples; however, this method cannot obtain rare property samples. For the low-probability properties of hardware circuits with multiple failure regions, Kumar et al. [41] assumed that the failure distribution of the system is a Gaussian mixture model and proposed to use the variational Bayes method to learn an optimal importance sampling distribution from the Gaussian mixture model; however, this optimal importance sampling distribution does not come from the distribution family of the system path space. Kalajdzic et al. [42] proposed an SMC method based on the principle of feedback control: it learns a model of the cyber-physical system, uses importance sampling to estimate the system state and importance splitting to control the system, and thereby infers the probability that the system satisfies the given property.

The method proposed in this paper starts from the SCCPS path probability space, constructs a cross-entropy optimization model, and uses an iterative learning procedure to obtain an optimal importance sampling distribution from the parameterized distribution family of the path space. This ensures that the optimal importance sampling distribution comes from the distribution family of the SCCPS path probability space, and the iterative learning procedure ensures that the distribution evenly covers the region of unsafe path distributions. Therefore, the accuracy and efficiency of rare property verification are significantly improved.

6. Conclusion

SMC has been successfully applied to SCCPS safety property verification and has become the most effective solution, but rare property verification is still a challenge for SMC. To be able to extract samples satisfying rare properties from SCCPS, a CTMC is used to construct the probability space model of the SCCPS execution paths, which provides the probability measure of a random execution path as well as a parameterized family of probability distribution functions, from which the cross-entropy iterative model is constructed. By iteratively learning the approximately optimal importance sampling distribution in the SCCPS path probability space, efficient sampling of rare property samples in SCCPS is achieved. The experimental results show that, for the verification of rare properties with the same number of samples, the estimates of our method are better distributed around the mean than those of the heuristic importance sampling method, and the standard deviation and relative error are reduced by more than an order of magnitude. Based on the method proposed in this paper, combining it with the current mainstream SMC methods to develop an adaptive SMC tool is set as future work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request. The authors apply CESVA to a fault-tolerant controller for an aircraft elevator system (FTC4AE), a Stateflow/Simulink hybrid system modeling case from MATLAB.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (no.2018YFB1003900), National Natural Science Foundation of China (no. 61772270), Key Laboratory of Safety-Critical Software (Nanjing University of Aeronautics and Astronautics), and Ministry of Industry and Information Technology Research Project (NJ2019006).