Abstract

This paper addresses the use of formal verification techniques based on model checking in situations where models are large and verification faces the combinatorial explosion issue. The goal of the approach is to express and verify requirements relative to certain context situations. The idea is to unroll the context into several scenarios and to successively compose each scenario with the system and verify the resulting composition. We propose to specify the context in which the behavior occurs using a language called CDL (Context Description Language), based on activity and message sequence diagrams. The properties to be verified are specified with textual patterns and attached to specific regions in the context. The central idea is to automatically split each identified context into a set of smaller subcontexts and to compose them with the model to be validated. For that, we have implemented a recursive splitting algorithm in our toolset OBP (Observer-Based Prover). This paper shows how the combinatorial explosion can be reduced by specifying the environment of the system to be validated.

1. Introduction

Software verification is an integral part of the software development lifecycle; its goal is to ensure that software fully satisfies all the expected requirements. Reactive systems are becoming extremely complex with the rapid growth of embedded technologies. Despite technical improvements, the increasing size of systems makes the introduction of a wide range of potential errors easier. Among reactive systems, asynchronous systems communicating by exchanging messages via buffer queues are often characterized by a vast number of possible behaviors. To cope with this difficulty, manufacturers of industrial systems invest significant effort in testing and simulation to successfully pass certification. Nevertheless, revealing errors and bugs among this huge number of behaviors remains a very difficult activity. An alternative method is to adopt formal methods and to use exhaustive and automatic verification tools such as model checkers.

Model checking algorithms can be used to verify requirements of a model formally and automatically. Several model checkers, such as [15], have been developed to help verify concurrent asynchronous systems. It is well known that an important issue limiting the application of model checking techniques in industrial software projects is the combinatorial explosion problem [6–8]. Because of the internal complexity of developed software, model checking requirements over system behavioral models can lead to an unmanageable state space.

The approach described in this paper presents exploratory work to address the problems mentioned above. It consists in reducing the set of possible behaviors (and thus, indirectly, the state space) by closing the system under verification with a well-defined environment. For this, we propose to specify the behavior of the entities that compose the system environment and interact with the system. These behaviors are described by use cases (scenarios), called here contexts, which describe how the environment interacts with the system. Indeed, for embedded reactive systems, the environment of each system is finite and well known. We claim that it is more efficient to ask engineers to explicitly and formally express this context than to try to reduce the state space of a system explored against an unspecified environment. In other words, the objective is to circumvent the combinatorial explosion problem by restricting the system behavior with a specific surrounding environment describing the different configurations in which one wants to verify the system. Moreover, properties are often related to specific use cases (such as initialization, reconfiguration, and degraded modes), so that it is not necessary for a given property to take into account all possible behaviors of the environment, but only the subpart concerned by the verification.
The context description thus allows a first limitation of the explored search space, and hence a first reduction of the combinatorial explosion. The second idea exploited is that, if the context is finite (i.e., there is no infinite loop in the context) and in the case of safety (invariant) properties, then the two following verification processes are equivalent: (a) compose the context and the system, and then verify the resulting global system; (b) unroll the context into scenarios (i.e., sequences of events), and successively compose each scenario with the system and verify the resulting composition. In other words, the global verification problem can be transformed into smaller verification subproblems.
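To make this equivalence concrete, here is a minimal, self-contained sketch. It uses a toy system and hand-listed scenarios, not the industrial model or the OBP implementation: for an invariant, checking each unrolled scenario separately yields the same verdict as checking the union of all scenario runs.

```python
# A toy system: a device may log in, issue 'operate', and log out; the
# safety invariant is "operate accepted implies device logged" (cf. R1).
def step(state, event):
    logged, accepted = state
    if event == "login":
        return (True, accepted)
    if event == "operate":
        return (logged, logged)      # the command is accepted only when logged
    if event == "logout":
        return (False, False)
    return state

def reachable(scenario, init=(False, False)):
    """States reached while feeding one finite scenario (event sequence)."""
    states, s = {init}, init
    for e in scenario:
        s = step(s, e)
        states.add(s)
    return states

invariant = lambda s: (not s[1]) or s[0]     # accepted => logged

# A finite context unrolled into its scenarios (listed by hand here).
scenarios = [("login", "operate", "logout"),
             ("login", "logout", "operate"),
             ("operate",)]

# (a) global verification over the union of all scenario runs
global_states = set().union(*(reachable(sc) for sc in scenarios))
global_ok = all(invariant(s) for s in global_states)
# (b) scenario-by-scenario verification
per_scenario_ok = all(all(invariant(s) for s in reachable(sc))
                      for sc in scenarios)

assert global_ok == per_scenario_ok          # the two processes agree
```

For invariants, a state violates the property regardless of which scenario reached it, which is why the per-scenario verdicts can simply be conjoined.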

Our approach is based on these two ideas. This paper presents a DSL (domain-specific language) called CDL (Context Description Language) for formally describing the environment of the system to be verified. This language supports our approach to reducing the state space. We illustrate our reduction technique with our OBP (Observer-Based Prover) tool (OBP is available at http://www.obpcdl.org/.) connected to two back ends: the academic model checker TINA-SELT (http://projects.laas.fr/tina/) [3] and an explorer called OBP Explorer, integrated in OBP. We illustrate our approach with a partial case study provided by an industrial partner in the aeronautics domain.

This paper is organized as follows: Section 2 presents related techniques for improving model checking by state-space reduction. Section 3 presents the principles of our approach for context-aware formal verification. Section 4 describes the CDL language for context and property specification. The toolset used for the experiments is presented in Section 5. In Section 6, we give results on the industrial case study. In Section 7, we discuss our approach and conclude.

2. Related Work

Model checking is a technique that relies on building a finite model of a system of interest and checking that a desired property, specified as a temporal logic formula, holds in that model. Since the introduction of this technology in the early 1980s, several model checkers have been developed to help verify concurrent asynchronous systems. For example, the SPIN model checker [1], based on the formal language PROMELA, allows the verification of LTL properties encoded in the “never claim” formalism and further converted into Büchi automata. Since its introduction, model checking has advanced significantly. For instance, the state compression method and partial-order reduction contributed to further alleviating combinatorial explosion [9]. In [10], the partial-order algorithm based on depth-first search (DFS) was adapted to the breadth-first search (BFS) algorithm in the SPIN model checker to exploit interesting properties inherent to BFS. Partial-order methods [9, 11, 12] aim at eliminating equivalent sequences of transitions in the global state space without modifying the truth value of the property under verification. Methods exploiting the symmetries of systems also proved to be of interest and were integrated into many verification tools (for instance, SPIN).

In the same way, the development of more efficient data structures, such as binary decision diagrams (BDDs) [13], allows automatic and exhaustive analysis of finite-state models with several thousands of components or state variables.

Another approach deals with compositional verification, for example, assume/guarantee reasoning or design-by-contract techniques. A lot of work exists on applying these techniques to model checking, including, for example, [14–17]. These works model check or analyze individual components (rather than whole systems) by specifying, considering, or even automatically determining the interactions that a component has or could have with its environment, so that the analysis can be restricted to these interactions. Design-by-contract proposes to verify a system by verifying all its components one by one. Using a specific composition operator that preserves properties, it allows one to conclude that the whole system is verified.

Many other techniques have been proposed for combating state explosion. On-the-fly verification constructs the state space in a demand-driven way, thus allowing the detection of errors without building the entire state space a priori. Distributed verification [18] uses the computing resources of several machines connected by a network, thus allowing verification tools to scale up. With the same objective, methods exploiting heuristic search [19] have been proposed for improving constraint satisfaction solving and, more generally, for optimizing the exploration of the behavior of a model under verification.

Combined together, the successful application of these methods to several case studies (see for instance [20] for a noncritical application, or [21, 22] for aerospace examples) demonstrates their maturity in the case of synchronous embedded systems. However, while these techniques are useful for finding modelling errors, they still suffer from combinatorial explosion in the case of large and complex asynchronous systems (see [23] for an experiment with SPIN on a real asynchronous function showing that the verification does not complete despite all the optimizations mentioned above).

The approach presented in this paper explores another way of reducing the combinatorial explosion. In contrast to “traditional” techniques, in which contexts are often included in the system model, we choose to make contexts explicit, separately from the model. The idea is to use the knowledge of the environment of a whole system (or model) to conduct a verification to the end. We propose to formally specify the context behavior in a way that allows a fully automatic divide-and-conquer algorithm.

Another difficulty concerns requirement specification. Embedded software systems integrate more and more advanced features, such as complex data structures, recursion, and multithreading. Despite the increased level of automation, users of finite-state verification tools must still translate requirements, which are often informal, into the tool's specification language. While temporal logic-based languages (e.g., LTL or CTL [6]) offer great expressivity for properties, they are not suited to practically describing most of the requirements expressed in industrial analysis documents. Modal and temporal logics are rather rudimentary formalisms for expressing requirements; that is, they are designed with the straightforwardness of their processing by a tool such as a model checker in mind, rather than user-friendliness. Their efficient use in practice is hampered by the difficulty of writing logic formulas correctly without extensive expertise in the idioms of the specification languages.

In the literature, many approaches have been proposed to enable software and hardware engineers to use temporal logic with ease and rigor. For instance, [24, 25] proposed a graphical interval logic (RTGIL) allowing visual and intuitive reasoning on real-time systems. From a textual point of view, [26–28] proposed to formulate requirements using textual patterns, that is, textual templates that capture common logical and temporal properties and that can be instantiated in a specific context. They represent commonly occurring types of real-time properties found in several requirement documents for embedded systems. These two approaches were recently combined by De Francesco et al. in [29]. The authors propose a user-friendly interface with the aim of simplifying the writing of concurrent system properties. This interface supplies a set of natural-language patterns which are then automatically translated into the mu-calculus temporal logic.

In this paper, we follow this approach. To keep things as simple as possible, we only consider safety properties, expressed using an extension of Dwyer's textual patterns and translated into observer automata and invariants. The work could be extended to other types of properties, as proposed in [29]; such an extension is out of the scope of this article.

3. Context Aware Verification

To illustrate the explosion problem, let us consider the example in Figure 1. We try to verify some requirements by model checking using the TINA-SELT model checker [3] and OBP Explorer, and we present the results for a part of the model. Then, we introduce our approach based on context specifications.

3.1. An Illustration

We present one part of an industrial case study: the software part of an antiaircraft system (S_CP). This controller manages the internal modes, the system's physical devices (sensors, actuators), and their actions in response to incoming signals from the environment. The system interacts with devices (Dev) that are considered actors included in the environment, called here the context.

The sequence diagrams of Figure 2 illustrate interactions between the context actors and the system during an initialization phase. This context describes the environment we want to consider for the validation of the controller. It is composed of several actors running in parallel or in sequence, and all these actors interleave their behaviors. After the initialization phase, all actors wait for orders from the system. Each device sends a login request and receives either an acknowledgment ack_log (Figures 2(a) and 2(c)) or a rejection nack_log (Figure 2(b)) as a response from the system. The logged devices can send operate(op) (Figures 2(a) and 2(c)) and receive either ack_oper (Figure 2(a)) or nack_oper (Figure 2(c)). The messages can be received in parallel in any order. However, the delay between the login and ack_log messages (Figure 1) is constrained by maxD_log; the delay between the operate and ack_oper messages (Figure 1) is similarly constrained. Finally, all devices send logout to end the interaction with the controller.

As an example, consider two requirements on the system. These requirements were found in a document of our partner and are shown in Listings 1 and 2.

R1: A device (Dev) can be authorized to execute a command “operate” if it
has previously connected to the system.

R2: During initialization procedure, S_CP shall associate an identifier to
each device (Dev), after login request and before maxD_log time units.

The first requirement R1 is expressed by Listing 1.

We specify this requirement in the SELT language for the first device. It is expressed by the following formula:

Inv1: ((SM_1_voperateAccepted1) => (SM_1_vdevLogged1));

SM_1 denotes a process of S_CP; voperateAccepted1 and vdevLogged1 are variables of this process. To verify this requirement, we used the TINA-SELT model checker (Figure 3).

Listing 2 shows the second requirement R2.

We choose to specify this requirement with an observer automaton (Figure 4). An observer is an automaton which observes the set of events exchanged by the system and its context (and thus events occurring in the execution runs) and which produces a reject event whenever the property becomes false. With observers, the properties we can handle are of the safety and bounded-liveness type. The accessibility analysis consists of checking whether a reject state of a property observer is reached. In our example, this reject node is reached after detecting the events S_CP_hasReachState_Init and login1, in that order, if one or more occurrences of ackLog are not produced before maxD_log time units. Conversely, the reject node is not reached if S_CP_hasReachState_Init or login1 never occurs, or if ackLog is produced within the right delay. Consequently, such a property can be verified using the reachability analysis implemented in our OBP Explorer.
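The observer's behavior can be sketched as a small state machine over timestamped traces. This is a simplified illustration under the event names of Listing 5, not the observer automaton generated by OBP; the trace representation and the end_time parameter are assumptions of this sketch.

```python
def run_observer(trace, maxD_log, end_time):
    """trace: chronologically ordered (time, event) pairs observed up to
    end_time. Returns 'reject' iff the run violates R2."""
    state, deadline = "idle", None
    for t, ev in trace:
        if state == "idle" and ev == "S_CP_hasReachState_Init":
            state = "init_seen"
        elif state == "init_seen" and ev == "login1":
            state, deadline = "waiting", t + maxD_log
        elif state == "waiting":
            if t > deadline:
                return "reject"            # ackLog came too late
            if ev == "ackLog":
                state = "done"             # acknowledged in time
    if state == "waiting" and end_time > deadline:
        return "reject"                    # run ended without a timely ackLog
    return "ok"

# The triggering sequence followed by a timely acknowledgment is accepted.
assert run_observer([(0, "S_CP_hasReachState_Init"), (1, "login1"),
                     (3, "ackLog")], maxD_log=5, end_time=10) == "ok"
```

Note that a run in which the triggering sequence never occurs is accepted, matching the "may never occur" clauses of the pattern.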

This observer is checked with OBP Explorer (Figure 5).

In both cases, the system model (here, by system model, we refer to the model to be validated) is translated into the Fiacre format [30] to explore all the model behaviors by simulation, interacting with its environment (the devices). Model exploration generates a labeled transition system (LTS) which represents all the behaviors of the controller in its environment.

3.2. Model Checking Results

Table 1 shows the TINA-SELT exploration time and the number of configurations and transitions in the LTS for different complexities, that is, different numbers of considered actors (tests were executed on a 32-bit Linux computer with 3 GB RAM, TINA version 2.9.8, and Frac parser version 1.6.2). Beyond four devices, we face state explosion because of the limited memory of our computer.

Table 2 shows the OBP Explorer exploration and analysis time and the number of configurations and transitions in the LTS (tests were executed on a 32-bit Linux computer with 3 GB RAM and OBP Explorer version 1.0). Beyond three devices, we also face state explosion because of the limited memory of our computer.

Note that the size of the LTS explored by OBP Explorer for verifying R2 is greater than the size of the related LTS explored by TINA-SELT for verifying R1. This is due to the way these two requirements are modeled: R1 is formalized as a SELT formula, whereas R2 is modeled as an observer automaton. In the second experiment (R2 with OBP Explorer), the explorer begins by building the synchronized product of the system model, each context, and the observer automaton. If this automaton contains several locations and several clocks, taking the observer into account as an input of the synchronized product can significantly increase the number of states and transitions explored.

3.3. Combinatorial Explosion Reduction

When checking properties, a model checker explores all the model behaviors and checks whether the properties hold. Most of the time, as shown by the previous results, the number of reachable configurations is too large to be contained in memory (Figures 3 and 5). We propose to restrict the model behavior by composing it with an environment that interacts with the model, enabling only a subset of the model's behavior. This technique can reduce the complexity of the exploration by limiting the scope of the verification to precise system behaviors related to specific environmental conditions.

This reduction is computed in two stages: contexts are first identified by the user (Figure 6). They correspond to patterns of use of the component being modeled. The aim is to circumvent the combinatorial explosion by restricting the system behavior with an environment describing the different configurations in which one wishes to check the requirements. Then, each context is automatically partitioned into a set of subcontexts. Below, we define precisely these two steps as implemented in our approach.

3.3.1. Context Identification

Context identification focuses on a subset of behaviors and a subset of properties. For reactive embedded systems, the environment of each component of a system is often well known. It is therefore more effective to identify this environment than to try to reduce the configuration space of the system model to explore. The relevance of the proof rests on a strong hypothesis: it is possible to specify the sets of bounded behaviors in a complete way. This hypothesis is not formally justified in our work. However, the essential idea of this approach is that a designer can correctly develop software only if the constraints of its use are known. So, we suppose that the designer is able to identify all possible interactions between the system and its environment.

This is particularly true in the field of embedded systems, where the designer of a software component needs to know precisely and completely the perimeter (constraints, conditions) of the system in order to develop it properly.

We also make a second hypothesis: the contexts we describe are finite, that is, there are no infinite loops in the interactions between the system and its environment. This holds, for instance, for command systems or communication protocols.

The validity of these working hypotheses for the targeted applications should be studied formally. In this paper, we do not address this aspect, which calls for methodological work yet to be undertaken.

Moreover, properties are often related to specific use cases (such as initialization, reconfiguration, and degraded modes). Therefore, it is not necessary for a given property to take into account all possible behaviors of the environment, but only the subpart concerned by the verification. The context description thus allows a first limitation of the explored search space, and hence a first reduction in the combinatorial explosion.

3.3.2. Context Automatic Splitting

The second idea is to automatically split each identified context into a set of smaller subcontexts (Figure 7). The principle of the splitting is as follows: each context is represented by an acyclic graph, as mentioned earlier. This graph is composed with the model for exploration. In case of explosion, the context is automatically split into several parts, taking into account a parameter that gives the depth in the graph at which to split, until the exploration succeeds.

To reach that goal, we implemented a recursive splitting algorithm in our OBP tool. Figure 7 illustrates the function for the exploration of a context composed with a model, together with the model checking of a set of properties pty.

We illustrate one execution of this algorithm in Figures 8 and 9. A context, represented by an acyclic graph, is composed with the model for exploration. In case of explosion, the context is automatically split into several parts (taking into account a parameter which specifies the depth in the graph at which to split) until the exploration succeeds. For example, in Figure 8, the graph of the context is split into four subgraphs. After the splitting, the subcontexts are composed with the model for exploration. If an exploration fails, the corresponding subcontext is split in turn, taking the depth parameter into account.

Figure 9 illustrates a context split into subcontexts that are composed with the model. Some of the resulting explorations succeed; each failing one triggers a further split of its subcontext, and so on. This algorithm is executed until all the explorations succeed. Since the property set pty is associated with the context, pty is checked during the explorations with all subcontexts. We demonstrated in [31] that the verification of a property set (such as pty) over the exploration of the full context is equivalent to the union of the verifications over the explorations of each subcontext (as illustrated in Figure 9).
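The recursive split-and-explore loop can be sketched as follows. This is a simplified outline, not the OBP implementation: the explore and split callables, and the way state explosion is signaled, are assumptions of this sketch.

```python
class StateExplosion(Exception):
    """Raised when composing a (sub)context with the model exhausts memory."""

def check(context, model, properties, depth, explore, split):
    """Verify 'properties' on the composition of 'model' and 'context'.
    explore(context, model, properties) -> bool, or raises StateExplosion;
    split(context, depth) -> subcontexts whose union covers 'context'.
    For safety properties, the conjunction of the subcontext verdicts
    equals the verdict on the whole context (cf. [31])."""
    try:
        return explore(context, model, properties)
    except StateExplosion:
        return all(check(sub, model, properties, depth, explore, split)
                   for sub in split(context, depth))

# Toy demonstration: contexts are event lists; exploration "explodes" on
# contexts longer than two events and trivially succeeds otherwise.
def toy_explore(ctx, model, props):
    if len(ctx) > 2:
        raise StateExplosion()
    return True

def toy_split(ctx, depth):
    return [ctx[:len(ctx) // 2], ctx[len(ctx) // 2:]]

assert check(list(range(8)), None, None, 1, toy_explore, toy_split)
```

The recursion terminates because each split produces strictly smaller acyclic subcontexts, and small enough subcontexts are explorable without explosion.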

The following verification processes are then equivalent: (i) compose the context and the system, and then verify the resulting global system; (ii) partition the context into subcontexts (scenarios), and successively compose each scenario with the model and check the properties on each resulting composition. In effect, we transform the global verification problem into smaller verification subproblems: the complete context model can be split into pieces that are composed separately with the system model.

In summary, the context-aware method provides three reduction axes: the context behavior is constrained, the properties are focused, and the state space is split into pieces. Finally, the verification over the set of contexts is transformed into a set of small verifications.

The reduction in the model behavior is particularly interesting when dealing with complex embedded systems, such as avionic systems, since it is relevant to check properties over specific system modes (or use cases), which is less complex because we are dealing with a subset of the system automata. Unfortunately, few existing approaches propose operational ways to precisely capture these contexts in order to reduce formal verification complexity and thus improve the scalability of existing model checking approaches. The need for a clear methodology must also be recognized, since the context partitioning is not trivial; it requires formalizing the context of the subset of functions under study. An associated methodology must be defined to help users model contexts (out of the scope of this paper).

4. CDL Language for Context and Property Specification

We propose a formal tool-supported framework that combines context description and model transformations to assist in the definition of requirements and of the environmental conditions in which they should be satisfied. Thus, we proposed [32] a context-aware verification process that makes use of the CDL language. CDL was proposed to fill the gap between user models and the formal models required to perform formal verification. CDL is a DSL (domain-specific language) presented either in the form of UML-like graphical diagrams (a subset of activity and sequence diagrams) or in a textual form to capture environment interactions.

4.1. Context Hierarchical Description

CDL is based on the Use Case Charts of [33], using activity and sequence diagrams. We extended this language to allow several entities (actors) to be described in a context (Figure 10). These entities run in parallel. A CDL model (for the detailed syntax, see [34], available at http://www.obpcdl.org/) describes, on the one hand, the context using activity and sequence diagrams and, on the other hand, the properties to be checked using property patterns. Figure 10 illustrates a CDL model for the partial use cases of Figures 1 and 2. The initial use cases and sequence diagrams are transformed and completed to create the context model. All context scenarios are represented, combined with parallel and alternative operators, in terms of CDL.

A diagrammatic and textual concrete syntax was created for the context description, and a textual syntax for the property expression. CDL is hierarchically constructed on three levels: level 1 is a set of use case diagrams which describe hierarchical activity diagrams; either an alternative between several executions (alternative/merge) or a parallelization of several executions (fork/join) is available. Level 2 is a set of scenario diagrams organized as alternatives. Each scenario is fully described at level 3 by sequence diagrams. These diagrams are composed of lifelines, some for the context actors and others for the processes composing the system model. Counters limit the iterations of diagram executions; this ensures the generation of finite context automata.

From a semantic point of view, the model is structured as a set of sequence diagrams (MSCs) connected together with three operators: sequence, parallel, and alternative. The interleaving of the context actors, described by a set of MSCs, generates a graph representing all executions of the actors of the environment. This graph is then partitioned in such a way as to generate a set of subgraphs corresponding to the subcontexts, as mentioned in Section 3.3.
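The effect of the three operators on sets of runs can be sketched as follows. This is a toy semantics over event sequences (not the CDL implementation); it illustrates why the parallel operator, which enumerates all interleavings, is the source of the context graph's growth.

```python
from itertools import combinations

def seq(runs1, runs2):
    """Sequence: every run of runs1 followed by every run of runs2."""
    return [a + b for a in runs1 for b in runs2]

def alt(runs1, runs2):
    """Alternative: a run of runs1 or a run of runs2."""
    return runs1 + runs2

def interleave(a, b):
    """All interleavings of two runs, preserving the order within each run."""
    n, out = len(a) + len(b), []
    for positions in combinations(range(n), len(a)):
        positions, ia, ib = set(positions), iter(a), iter(b)
        out.append([next(ia) if i in positions else next(ib)
                    for i in range(n)])
    return out

def par(runs1, runs2):
    """Parallel: all interleavings of the runs of the two operands."""
    return [r for a in runs1 for b in runs2 for r in interleave(a, b)]

# Two devices, each logging in then out: C(4, 2) = 6 interleavings.
dev1 = [["login1", "logout1"]]
dev2 = [["login2", "logout2"]]
runs = par(dev1, dev2)
assert len(runs) == 6
```

With k actors of length n each, the number of interleavings grows as a multinomial coefficient, which is precisely why the generated context graph must be split before composition with the model.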

The originality of CDL is its ability to link each expressed property to a context diagram, that is, to a limited scope of the system behavior. The properties can be specified with the property pattern definitions described in [32, 34]. For checking, properties are linked to one or several context descriptions. In Listing 3, we illustrate an example (textual version) of a scenario (scenario_ex) with linked properties: three observer-based properties P1, P2, and P3 (P1 specifying requirement R2) and three invariants Inv1, Inv2, and Inv3 (Inv1 specifying requirement R1). As examples, P1 and Inv1 are specified in Section 4.2.

cdl scenario_ex is
{
     properties P1, P2, P3// references to observers
     assert Inv1, Inv2, Inv3// references to invariants
     init is { initDevs } // initialization sequence
     main is { DEV1  ∣∣ DEV2 ∣∣  DEV3 } // body of scenario
}

The init clause specifies an initialization with the initDevs activity. Actors DEV1, DEV2, and DEV3 are specified with activities, as illustrated for DEV1 in Listing 4.

activity DEV1 is
{
    { event send_login1; { event recv_ack_log1 | event recv_nack_log1 } };
    { event send_operate1; { event recv_ack_oper1 | event recv_nack_oper1 } };
    { send logout1 to {SM}1 }
}

In Listing 4, the operator “;” denotes sequencing, and the alternative operator separates the two possible responses to each request. CDL is designed so that the formal artifacts required by existing model checkers can be automatically generated from it. This generation is currently implemented in OBP, described briefly in Section 5. The CDL formal syntax and semantics are presented in [35].

4.2. Property Specification Patterns

Property specification requires powerful yet easy-to-use mechanisms for expressing the temporal requirements of software. For example, requirements such as R1 or R2 of the system described in Section 3.1 can refer to many events related to the execution of the model or its environment. A requirement can also depend on an execution history that has to be taken into account as a constraint or precondition.

If we want to express these kinds of requirements with a temporal logic-based language such as LTL or CTL, the logical formulas become very complex and difficult for engineers to read and handle. So, for the property specification, we propose to reuse the categories of Dwyer's patterns [26] and extend them to deal with the more specific temporal properties which appear when high-level specifications are refined. Additionally, a textual syntax is proposed to formalize the properties to be checked using property description patterns [28]. To improve the expressiveness of these patterns, we enriched them with options (Prearity, Postarity, Immediacy, Precedence, Nullity, and Repeatability) using annotations as in [27]. Choosing among these options should help the user consider the relevant alternatives and subtleties associated with the intended behavior, and allows these details to be captured explicitly. In future work, we will adapt these patterns to take into account the taxonomy of relevant properties, if this appears necessary.

We integrated property pattern descriptions into the CDL language. Patterns are classified in families which take into account the timed aspects of the properties to be specified. The identified patterns allow properties of response (Response), precedence (Precedence), absence (Absence), and existence (Existence) to be expressed. The properties refer to detectable events such as signal transmissions or receptions, actions, and model state changes. A property can be taken into account either during the entire model execution, or before, after, or between occurrences of events. Another extension of the patterns is the possibility of handling sets of events, ordered or not, similar to the proposal of [36]. The operators AN and ALL respectively specify whether one event or all the events of an event set, ordered (Ordered) or not (Combined), are concerned by the property.
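The intent of these event-set options can be sketched over a simple event trace. The function names below mirror the CDL keywords but are illustrative only; the sketch ignores scopes, timing, and the other pattern options.

```python
def all_ordered(trace, events):
    """ALL Ordered: every event of the set occurs, in the given order."""
    it = iter(trace)
    return all(e in it for e in events)   # each search resumes after the last hit

def all_combined(trace, events):
    """ALL Combined: every event of the set occurs, in any order."""
    return all(e in trace for e in events)

def an(trace, events):
    """AN: at least one event of the set occurs."""
    return any(e in trace for e in events)

trace = ["boot", "login1", "S_CP_hasReachState_Init", "ackLog"]
assert all_combined(trace, ["S_CP_hasReachState_Init", "login1"])
assert not all_ordered(trace, ["S_CP_hasReachState_Init", "login1"])
assert an(trace, ["ackLog", "nackLog"])
```

In the trace above, both events of the set occur, so ALL Combined holds, but they occur in the wrong order for ALL Ordered with that event ordering.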

We illustrate these patterns with our case study. The requirement R2 (Listing 2) must be interpreted and can be written in CDL as the property P1 (cf. Listing 5). P1 is linked to the communication sequence between S_CP and the first device. According to the sequence diagram of Figure 10, the association with other devices has no effect on P1.

Property P1;
   ALL Ordered
         exactly one occurrence of S_CP_hasReachState_Init
         exactly one occurrence of login1
   end
   eventually leads-to [0..maxD_log]
   AN
         one or more occurrence of ackLog (id)
   end
   S_CP_hasReachState_Init may never occurs
   login1 may never occurs
   one of ackLog (id) cannot occur before login1
   repeatability: true

P1 specifies an observation of event occurrences in accordance with Figure 10. login1 refers to a login reception event in the model; ackLog (id) refers to the reception of an acknowledgment by the device. S_CP_hasReachState_Init refers to a state change in the model under study.

In CDL, we specify properties with events and predicates. For example, the event S_CP_hasReachState_Init is defined with the predicate S_CP_State_Init as follows:

event S_CP_hasReachState_Init is {S_CP_State_Init becomes true}

The predicate S_CP_State_Init is defined as follows:

predicate S_CP_State_Init is {{SM}1@State_Init}

with State_Init as a state of process {SM}1.

Invariants are specified with CDL predicates. As an example, invariant Inv1 is specified as in Listing 6.

predicate Inv1 is
{({SM}1:operateAccepted1 = false) or ({SM}1:devLogged1 = true)}
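For intuition, checking such an invariant by accessibility analysis amounts to evaluating the predicate on every reachable configuration. The following sketch uses a hypothetical representation of configurations as dicts of boolean variables; in OBP the analysis is actually carried out on the composed LTS:

```python
# Sketch: invariant checking by accessibility analysis. Configurations are
# modeled as dicts of boolean variables of process {SM}1. Illustrative only.

def inv1(config):
    """Inv1: not operateAccepted1 or devLogged1 (a device may be operated
    only once it is logged in)."""
    return (not config["operateAccepted1"]) or config["devLogged1"]

def check_invariant(reachable_configs, invariant):
    """Return the first reachable configuration violating the invariant,
    or None if the invariant holds on the whole explored state space."""
    for config in reachable_configs:
        if not invariant(config):
            return config
    return None
```

A configuration with operateAccepted1 true but devLogged1 false would be returned as a counterexample; if no such configuration is reachable, the invariant holds.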

5. OBP Toolset

To carry out our experiments, we used the OBP tool (Figure 11). OBP is an implementation of a translation of the CDL language into formal languages, currently Fiacre [30]. As depicted in Figure 11, OBP leverages existing academic model checkers such as TINA-SELT [3] or simulators such as OBP Explorer. From CDL context diagrams, the OBP tool generates a set of context graphs which represent the sets of environment runs. Currently, each generated graph is transformed into a Fiacre automaton. Each graph represents a set of possible interactions between the model and the context. To validate the model under study, it is necessary to compose each graph with the model and to verify each property on each graph. In the case of TINA-SELT, the properties are expressed with SELT logic formulas [3]. For OBP Explorer, OBP generates an observer automaton [37] from each property. With OBP Explorer, the accessibility analysis is carried out on the result of the composition between a graph, a set of observers, and the system model, as described in [32]. If, for a given context, we face state explosion, the accessibility analysis or model checking is not possible. In this case, the context is split into a set of subcontexts and the composition is executed again, as mentioned in Section 3.3.
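The verify-then-split loop described above can be sketched as follows. This is only an illustration of the control flow, with `compose_and_verify` and `split` as hypothetical stand-ins for the OBP machinery (Fiacre generation, TINA-SELT or OBP Explorer runs, and the recursive splitting algorithm of Section 3.3):

```python
# Sketch of the recursive context-splitting loop: try to verify the model
# composed with a context; on state explosion, split the context into
# subcontexts and recurse on each of them.

class StateExplosion(Exception):
    """Raised when exploration exhausts the memory budget."""

def verify_with_splitting(model, context, compose_and_verify, split,
                          max_depth=10):
    """Verify `model` against every scenario of `context`, splitting the
    context recursively whenever exploration blows up."""
    try:
        return compose_and_verify(model, context)
    except StateExplosion:
        if max_depth == 0:
            raise                      # give up: splitting no longer helps
        subcontexts = split(context)   # e.g., partition the scenario set
        return all(
            verify_with_splitting(model, sub, compose_and_verify, split,
                                  max_depth - 1)
            for sub in subcontexts
        )
```

The verdict for the full context is the conjunction of the verdicts obtained on its subcontexts, which is what makes the splitting sound for the properties considered here.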

To import models in standard formats such as UML, SysML, AADL, and SDL, it is necessary to implement adequate translators, such as those studied in the TopCased (http://www.topcased.org/) or Omega (http://www-omega.imag.fr/) projects, to generate Fiacre programs.

6. Experiments and Results

Our approach was applied to several embedded-system applications in the avionics and electronics industrial domains. These experiments were carried out with our French industrial partners. In [32], we reported the results of these experiments. For the case study, we constructed several CDL models of different complexities depending on the number of devices. The tests are performed on each CDL model composed with the system model.

Table 3 shows the cost of TINA-SELT exploration and model checking (tests performed on the same computer as for Table 1) for checking the requirement with context splitting. The first column gives the number of devices asking for login to the system. The third one indicates the number of subcontexts after splitting by OBP. The other columns give the exploration time and the cumulative number of configurations and transitions of all LTSs generated during exploration by TINA with context splitting. For example, with 7 devices, we needed to split the CDL context into 56 parts for a successful exploration. Without splitting, the exploration is limited to 4 devices by state explosion, as shown in Table 1. Clearly, the device number limit depends on the memory size of the computer used.

Table 4 shows the cost of OBP Explorer exploration and analysis (tests performed on the same computer as for Table 2) for checking the requirement with context splitting. With 7 devices, we needed to split the CDL context into 344 parts for a successful exploration. Without splitting, the exploration is limited to three devices by state explosion, as shown in Table 2.

As mentioned previously in Section 3.2, the size of the LTS explored by OBP Explorer for verification is greater than that of the related LTS explored by TINA-SELT. In that case, being able to split the contexts in order to overcome this additional source of combinatorial explosion, as proposed by OBP, is all the more important.

The example given (Figure 1) illustrates a case with a great deal of asynchrony in the behavior of the environment actors, causing an explosion in the number of states and thus an increase in the number of generated contexts. This method performs well when, on the one hand, the contexts significantly restrict the behavior of the model to be validated (space-complexity reduction) and, on the other hand, the number of contexts is not too large (time-complexity reduction).

Exploration can easily be parallelized: the splitting method allows contexts to be distributed over a network of machines. We have not yet implemented this parallelization technique, but it could be very effective. Suppose we have a network of similar machines; the global exploration time can then be divided by roughly the number of machines. This is an approximation that considers the context transfer and result return delays negligible compared to the exploration time on a single machine. We should also take into account that the exploration time is not identical for all contexts, but parallelization can nevertheless significantly improve the proof execution time. For example, in the case shown in Table 4, with 20 machines (resp., 100), we can hope to obtain an execution time of approximately 5 minutes (resp., 1 minute) instead of two hours on a single machine. We believe our context-splitting method is complementary to other reduction methods: on a given machine, for one subcontext, another technique can be used in a complementary way.
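The speedup estimate above can be made concrete with a small scheduling sketch. Under the stated assumption that transfer delays are negligible, distributing subcontext explorations over m machines gives a wall-clock time close to the makespan of a greedy longest-processing-time assignment; the figures below (344 subcontexts of about 21 seconds each, roughly two hours sequentially) are hypothetical round numbers chosen to mirror Table 4, not measured data:

```python
# Sketch: estimating the wall-clock gain of distributing subcontext
# explorations over several machines, ignoring context-transfer delays.
# Greedy longest-processing-time scheduling approximates the makespan.

import heapq

def estimated_makespan(exploration_times, machines):
    """Assign each subcontext exploration to the least-loaded machine
    (longest tasks first) and return the resulting wall-clock time."""
    loads = [0.0] * machines
    heapq.heapify(loads)
    for t in sorted(exploration_times, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)

# Hypothetical example mirroring Table 4: 344 subcontexts of ~21 s each.
times = [21.0] * 344
sequential = sum(times)                      # ~7224 s, about two hours
parallel = estimated_makespan(times, 20)     # some machines run 18 contexts
```

With 20 machines this yields a makespan of a few hundred seconds, consistent with the rough "5 minutes instead of two hours" estimate; unequal exploration times would make the balance less perfect.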

7. Discussion and Conclusion

CDL is a prototype language to formalize contexts and properties. However, CDL concepts can be implemented in another language. For example, context diagrams are easily described using full UML2. CDL permits us to study our methodology. In future work, CDL can be viewed as an intermediate language. Today, the results obtained using the currently implemented CDL language and OBP are very encouraging. For each case study, it was possible to build CDL models and to generate sets of context graphs with OBP.

During experiments, we noted that some contexts and requirements were often described in the available documentation in an incomplete way. During the collaboration with us, the engineers responsible for developing this documentation were motivated to consider a more formal approach for expressing their requirements, which is certainly a positive improvement.

In the case studies, context diagrams were built, on the one hand, from scenarios described in the design documents and, on the other hand, from the sentences of the requirement documents. Two major difficulties arose. The first is the lack of a complete and coherent description of the environment behavior. Use cases describing interactions between the system and its environment are often incomplete; for instance, data concerning interaction modes may be implicit. CDL diagram development thus requires discussions with the experts who designed the models under study in order to make all context assumptions explicit. The second difficulty comes from formalizing system requirements into formal properties. These requirements are expressed in several documents of different (possibly low) levels. Furthermore, they are written in a textual form, and many of them admit several interpretations. Others implicitly refer to an applicable configuration, operational phase, or history without defining it. Such information, necessary for verification, can only be deduced by manually analyzing design and requirement documents and by interviewing expert engineers.

The use of CDL as a framework for formal and explicit context and requirement definition can overcome these two difficulties: it uses a specification style very close to UML and is thus readable by engineers. In all case studies, the feedback from industrial collaborators indicates that CDL models enhance communication between developers with different levels of experience and backgrounds. Additionally, CDL models enable developers, guided by CDL behavior diagrams, to structure and formalize the environment description of their systems and their requirements.

One element highlighted when working on embedded-software case studies with industrial partners is the need to capitalize on formal verification expertise. Given our experience in formal checking for validation activities, it seems important to structure the approach and the data handled during the verifications. This can lead to a better methodological framework and, subsequently, a better integration of validation techniques in model development processes. Consequently, the development process must include an environment specification step, making it possible to identify sets of bounded behaviors in a complete way.

Although the CDL approach has been shown to be scalable in several industrial case studies, it suffers from a lack of methodology. The handling of contexts, and then the formalization of CDL diagrams, must be done carefully in order to avoid combinatorial explosion when generating the context graphs to be composed with the model to be validated. The definition of such a methodology will be addressed in the next step of this work.