Abstract

This work proposes a new methodology for the management of event tree information used in the quantitative risk assessment of complex systems. The size of event trees increases exponentially with the number of system components and the number of states that each component can occupy. Their reduction to a manageable set of events can facilitate risk quantification and safety optimization tasks. The proposed method launches a deductive exploitation of the event space to generate reduced event trees for large multistate systems. The approach consists of the simultaneous treatment of large subsets of the tree, rather than focusing on single components of the system and resorting to guesses about their structural arrangement.

1. Introduction

For a given system, the scope of quantitative risk assessment is to investigate the circumstances giving rise to different modes of system operation and to quantify the risk for each operation mode. A system can be comprised of hardware, software, humans, or organizational components [1]. Each component can be found in various states of operation, leading to multiple modes of failure and normal operation for the overall system. Once this mapping of component states to system outcomes is known, it is theoretically possible to quantify the risks for different operation modes to occur, given the occurrence probabilities for all the component states [2, 3].

The computational effort and the memory requirements for risk evaluations increase exponentially as the numbers of system components and component states increase. Exact calculations for binary systems are achieved faster by employing binary decision diagrams to organize the evaluation procedure effectively [4]. Since the logic behind multistate systems is not Boolean, multistate behavior cannot be represented by binary models without introducing additional variables and constraints [5]. Rocco and Muselli [6] developed a methodology based on machine learning and Hamming clustering to address multistate systems and any success criterion. The required computational resources can be reduced using approximate risk estimations [7] or criticality analysis [8]. Event trees represent the combinations of component states leading to each mode of system operation. Quantification of event trees enables faster exact risk evaluation [2] and is not limited to binary or two-terminal systems; it is, however, very computationally intensive. Clearly, the application of event tree quantification to systems with many components in multiple states needs to be preceded by a substantial reduction of the number of tree branches.

This work develops an algorithm that can efficiently exploit a large event space and generate a reduced event tree. It is assumed that every real system has an intrinsic logic behind the assignment of system outcomes to the system events. A simple and general methodology is suggested to robustly extract this knowledge, by exploiting the system outcome space information as stored in a table listing all the possible event combinations and their associated final system outcomes. The proposed algorithm is not biased by any prior information on the functionalities of the system in structural or algebraic form; it seeks to acquire knowledge of the system logic and encapsulate it in the reduced tree. The algorithm can be applied to any given system and is not affected by the way the supplied data are organized or sorted. The procedure is deductive, starting from the entire event table and systematically organizing the system events into a set of clusters. The final set of clusters can be translated back to an event tree that is significantly reduced compared to the original outcome space information supplied to the algorithm.

The paper is organized as follows: Section 2 presents the basic system definitions used throughout. Section 3 defines a suitable system representation, using sets of events and the concept of Cartesian products. Section 4 describes a set of clustering and declustering operations to be applied on the event sets. Section 5 discusses the implementation of the proposed developments into an algorithmic procedure. Section 6 presents an illustration example and a large case study to demonstrate the proposed methodology. Section 7 concludes the work.

2. Basic System Definitions

The system considered here is comprised of blocks. Each block can be found in various states, and the system response (or output) at every given time instant $t$ depends on the states that the blocks occupy at $t$. Each block relates to a component, a set of components, or a part of a component of the system, regardless of physical conventions and according to the choices made in the system modeling. The basic definitions are taken from Papazoglou [9, 10], where the blocks were interconnected to form functional block diagrams. In the present work, the blocks are stripped of their networking functionalities, and the definitions are simplified accordingly.

2.1. System Blocks and Their States

Consider a system of $n$ independent blocks. Each block $B_i$, $i = 1, \dots, n$, can be found in various internal states during the system operation period.

Let the state set of block $B_i$, denoted as $S_i = \{s_{i1}, s_{i2}, \dots, s_{i|S_i|}\}$, be a partition over the possible instances of $B_i$, where $s_{ij}$ denotes the $j$th state of block $B_i$, and $|S_i|$ denotes the number of elements in $S_i$, that is, the cardinality of $S_i$.

It is assumed that, at every given time instant within the period of system operation, $B_i$ is found in exactly one state (e.g., at 94% of maximum production level), and this state relates to exactly one member of $S_i$ (e.g., $s_{ij}$ = "at least 90% of maximum production level").

2.2. Event Definitions

A basic event is an instance of a single system block according to the state set partition of this block. A basic event for block $B_i$ is denoted as $s_{ij}$.

A joint event is the combination of more than one basic event taking place in different blocks of the system, for example, $(s_{ij}, s_{i'j'})$ with $i \neq i'$. Note that the set of all joint events over two blocks $B_i$ and $B_{i'}$ is the result of the Cartesian product $S_i \times S_{i'}$.

A complete joint event, $e = (s_{1j_1}, s_{2j_2}, \dots, s_{nj_n})$, is defined here as a joint event over all the $n$ system blocks. For simplicity, the term event is used here instead of complete joint event.

Note that the above event definitions are simplified compared to Papazoglou [10], where blocks had functionalities not considered here.

2.3. Event Space, Subspaces, and Event Partitions

Let the system event space $\Omega$ be the set of all the possible complete joint events $e$. Note that $\Omega$ is the result of the Cartesian product $S_1 \times S_2 \times \cdots \times S_n$; therefore, $|\Omega| = \prod_{i=1}^{n} |S_i|$, where $|\Omega|$ denotes the number of elements of $\Omega$, that is, its cardinality.

Consider a nonempty subspace $E$ of the system event space $\Omega$. Let $P(E)$ denote a partition applied over $E$, composed of disjoint subspaces of $E$ denoted by $E_m$, $m = 1, \dots, |P(E)|$.

2.4. Event Table

Let $R$ denote the set of all the possible system outcomes and $|R|$ their number; that is, $|R|$ is the cardinality of $R$.

Each event yields a unique system outcome, while different events may yield the same outcome. For instance, there might be more than one event that leads to system failure. In event trees, a complete joint event and its associated outcome are equivalent to a path [9, 10].

Let $\varphi: \Omega \to R$ denote the many-to-one mapping from the event space $\Omega$ to the set of system outcomes $R$. The mapping is recorded in the system event table, as a complete list of all the system events and their outcomes.

The mapping $\varphi$ defines a partition on $\Omega$, which is herein called the outcome-based partition and denoted by $P_R(\Omega)$. The members of $P_R(\Omega)$ are denoted by $E_k$, $k = 1, \dots, |R|$.

Table 1 gives the event table of an example taken from Papazoglou [10]. In this case, there are 32 events and 4 possible outcomes, and $P_R(\Omega)$ is comprised of the 4 sets $E_1$, $E_2$, $E_3$, and $E_4$. Note that the mapping $\varphi$ can be derived from the structural dependencies among the system blocks, as dictated by the rational and physical interconnections of components within the system and depicted in the form of a fault tree or a functional block diagram. However, such information may be unavailable or too difficult to attain or process. This work assumes that the only information available is the system event table.
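To make these definitions concrete, the following sketch builds a small event space and groups it into an outcome-based partition. The state counts (2, 4, 4) match the MOV example of Table 1, but the outcome function `phi` is a hypothetical stand-in, since the actual mapping is available only through Table 1 itself.

```python
# Minimal sketch of an event table and its outcome-based partition P_R(Omega).
from itertools import product
from collections import defaultdict

state_sets = [range(2), range(4), range(4)]   # S_1, S_2, S_3 of the MOV example

def phi(event):
    """Hypothetical many-to-one mapping from events to outcomes (stand-in)."""
    return min(3, sum(event) // 2)            # four outcomes, labeled 0..3

# Event space Omega = Cartesian product of the block state sets: 2*4*4 = 32 events.
omega = list(product(*state_sets))

# Outcome-based partition: group the events of Omega by their outcome.
partition = defaultdict(list)
for e in omega:
    partition[phi(e)].append(e)

for outcome, events in sorted(partition.items()):
    print(f"outcome {outcome}: {len(events)} events")
```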

3. System Representation

The system representation presented here aims to organize the information contained in the event table. The table data are partitioned and organized into vectors summarizing the contribution of block states in each subspace of the partition. These vectors will provide the framework to apply the manipulations described in Section 4.

3.1. Cartesian Subspaces and Partitions

Let $A_i(E)$ denote the set of block $B_i$ states, $s_{ij}$, present in the events comprising $E$. Note that, since $E \subseteq \Omega$, then $A_i(E) \subseteq S_i$ for all $i$.

Let $\hat{E}$ denote the Cartesian product $A_1(E) \times A_2(E) \times \cdots \times A_n(E)$. In general, $\hat{E}$ is a superset of $E$, since it always contains all the elements of $E$ and may contain events not included in $E$.

Consider the system of Table 1 and a subspace $E$ of its event space. Collecting the block state sets $A_1(E)$, $A_2(E)$, and $A_3(E)$ from the events of $E$ yields the Cartesian product $\hat{E} = A_1(E) \times A_2(E) \times A_3(E)$. As expected, $\hat{E}$ is a superset of $E$.

A Cartesian subspace, denoted as $E^C$, is herein defined as a nonempty subspace of $\Omega$ such that $E^C = \hat{E}^C$. Note that all the singleton subspaces are Cartesian.

A Cartesian partition over $E$, denoted as $P^C(E)$, is herein defined as a partition comprised only of Cartesian subspaces of $E$. The elements of $P^C(E)$ are denoted as $E^C_m$. Note that every subspace of $\Omega$ has at least one Cartesian partition, the one comprised of singleton subspaces.

For instance, the subspace $E$ defined above can be partitioned into four subspaces $E^C_1$, $E^C_2$, $E^C_3$, and $E^C_4$. These are all Cartesian, since $E^C_1$, $E^C_2$, and $E^C_3$ are singletons, and $E^C_4 = \hat{E}^C_4$.
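A minimal sketch of the Cartesian-subspace test follows, reusing the tuple-of-state-indices event representation from the previous sketch; the helper names are illustrative only.

```python
# Sketch of the test E == E_hat: a subspace is Cartesian iff it equals the
# Cartesian product of its per-block state sets A_i(E).
from itertools import product

def block_state_sets(E, n_blocks):
    """A_i(E): the set of block-i states present in the events of E."""
    return [sorted({e[i] for e in E}) for i in range(n_blocks)]

def cartesian_closure(E, n_blocks):
    """E_hat = A_1(E) x ... x A_n(E); always a superset of E."""
    return set(product(*block_state_sets(E, n_blocks)))

def is_cartesian(E, n_blocks):
    return set(E) == cartesian_closure(E, n_blocks)

# Two-block illustration: the first subspace is Cartesian, the second is not.
print(is_cartesian({(0, 0), (0, 1), (1, 0), (1, 1)}, 2))   # True
print(is_cartesian({(0, 0), (1, 1)}, 2))                   # False: closure has 4 events
```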

3.2. Implicit Subspaces and Partitions

The state set $A_i(E)$ is called a complete state set when it contains all the possible states of the block, that is, $A_i(E) = S_i$.

An implicit subspace, denoted as $E^I$, is herein defined as a Cartesian subspace all of whose constituent block state subsets $A_i(E^I)$ are either complete or singleton sets. Note that an implicit subspace corresponds to the implicant [11] containing only the block states in the singleton sets. The Cartesian subspace $E^C_4$ defined above is implicit and corresponds to the implicant formed by its singleton block states.

For example, Cartesian products whose factor sets are all either complete or singleton define implicit subspaces, while a Cartesian subspace with a factor set that is neither complete nor singleton is Cartesian but not implicit.

An implicit partition over $E$, denoted as $P^I(E)$, is herein defined as a partition comprised only of implicit subspaces. The elements of $P^I(E)$ are denoted as $E^I_m$. Every subspace $E$ has at least one implicit partition, the partition into singletons, and the cardinalities of all possible implicit partitions over $E$ lie between one (when $E$ is itself an implicit subspace) and $|E|$.

For example, the partition of the subspace $E$ defined above into $E^C_1$, $E^C_2$, $E^C_3$, and $E^C_4$ is implicit, since the singleton subspaces are implicit and so is $E^C_4$. Coarser implicit partitions of $E$ may also exist.

An implicit partition conveys the same information as the set of its corresponding implicants. Therefore, an implicit partition can equivalently be derived from its corresponding prime implicants.

3.3. Contribution Vectors

The contribution vector of subspace $E$, denoted by $\mathbf{v}(E)$, is defined here as a vector of nonnegative integer entries $v_{ij}$, reporting the sum of the contributions of each state of each block in the events comprising $E$, namely, the number of times that each block state $s_{ij}$ contributes in the development of $E$.

For the example subspace $E$, the contribution vector $\mathbf{v}(E)$ is written with semicolons used as separators between the three block compartments.

Properties of contribution vectors include the following.
(i) The vector length is $L = \sum_{i=1}^{n} |S_i|$; therefore, $0 \leq v_{ij} \leq |E|$ for all $i$, $j$.
(ii) For each block $B_i$, the sum of all the block entries in $\mathbf{v}(E)$ equals $|E|$, that is, $\sum_{j} v_{ij} = |E|$.
In general, vector $\mathbf{v}(E)$ is an abstract (inductive) representation of the information stored in $E$. As the size of $E$ increases, retrieving $E$ from $\mathbf{v}(E)$ is not trivial; it might even be impossible.
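The sketch below computes a contribution vector under the same event representation as before; the toy subspace is illustrative and not taken from Table 1.

```python
# Sketch of the contribution vector v(E): for each block state s_ij, count
# the events of E containing it. The vector length is L = sum_i |S_i|.
def contribution_vector(E, state_counts):
    v = [[0] * k for k in state_counts]       # one compartment per block
    for e in E:
        for i, state in enumerate(e):
            v[i][state] += 1
    return v

E = [(0, 0, 1), (0, 1, 1), (1, 0, 1)]         # toy subspace, three binary blocks
v = contribution_vector(E, [2, 2, 2])
print(v)                                      # [[2, 1], [2, 1], [0, 3]]
# Property (ii): every block compartment sums to |E| = 3.
assert all(sum(block) == len(E) for block in v)
```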

3.4. Bicontribution Vectors

Let the bicontribution vector of subspace $E$, denoted by $\mathbf{b}(E)$, be defined as a vector of Boolean entries $b_{ij}$, reporting the contribution or not of each state of each block to the events of $E$, namely, whether a certain block state contributes ("true" or 1) or not ("false" or 0) in the development of set $E$.

Properties of bicontribution vectors include the following.
(i) The vector length is $L$, so $b_{ij} \in \{0, 1\}$ for all $i$, $j$.
(ii) The sum of all the entries corresponding to block $B_i$ equals $|A_i(E)|$ for all $i$.
For instance, the sum of the entries of the bicontribution vector of the example subspace $E$ is 8, which equals the number of block state instances present in set $E$.

Let $\beta$ denote the operation applied on the contribution vector $\mathbf{v}(E)$ to derive its associated bicontribution vector $\mathbf{b}(E)$. Based on the definitions of contribution and bicontribution vectors,
$$b_{ij} = \beta(v_{ij}) = \begin{cases} 1, & v_{ij} > 0, \\ 0, & v_{ij} = 0. \end{cases}$$
The reverse direction cannot restore $\mathbf{v}(E)$ in general; it yields the Cartesian contribution vector defined in Section 3.5. Clearly, $\mathbf{b}(E)$ always carries less information than its associated vector $\mathbf{v}(E)$, unless $E$ is Cartesian.

3.5. Cartesian Contribution Vectors

The Cartesian contribution vector of subspace $E$, denoted as $\hat{\mathbf{v}}(E)$, is defined here as the contribution vector of $\hat{E}$, thus $\hat{\mathbf{v}}(E) = \mathbf{v}(\hat{E})$. Let $\hat{v}_{ij}$ denote the entries of $\hat{\mathbf{v}}(E)$.

For instance, starting from the example subspace $E$, the Cartesian contribution vector $\hat{\mathbf{v}}(E)$ is obtained by counting the block state occurrences over $\hat{E}$ instead of $E$.

Let $\chi$ denote the operation applied on the bicontribution vector $\mathbf{b}(E)$ to derive the Cartesian contribution vector $\hat{\mathbf{v}}(E)$. Based on the above vector definitions,
$$\hat{v}_{ij} = \chi(b_{ij}) = b_{ij} \prod_{i' \neq i} |A_{i'}(E)| = b_{ij} \prod_{i' \neq i} \sum_{j'} b_{i'j'}.$$
For every subspace $E$, each entry $v_{ij}$ of $\mathbf{v}(E)$ lies between zero and $\hat{v}_{ij}$. Nonzero entries for which $v_{ij} = \hat{v}_{ij}$ are herein called Cartesian entries.

The vector $\mathbf{v}(E)$ satisfies the Cartesian property if and only if $E$ is a Cartesian subspace. In this case, $\mathbf{v}(E) = \hat{\mathbf{v}}(E)$.

Going back to set $E$ and comparing $\mathbf{v}(E)$ to $\hat{\mathbf{v}}(E)$:
(i) the entries corresponding to four of the block states are Cartesian;
(ii) the vector does not satisfy the Cartesian property, since $v_{ij} < \hat{v}_{ij}$ at the remaining nonzero entries.
The contribution vectors of implicit subspaces are herein called implicit contribution vectors. Since implicit subspaces are always Cartesian, implicit contribution vectors always satisfy the Cartesian property.
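The two operations can be sketched as follows; the names `beta` and `chi` follow the symbols adopted in this reconstruction, and the toy vector is the one from the previous sketch.

```python
# Sketch of beta (contribution -> bicontribution) and chi (bicontribution ->
# Cartesian contribution), with the Cartesian-entry test of this section.
from math import prod

def beta(v):
    """b_ij = 1 if state s_ij contributes to E, else 0."""
    return [[int(x > 0) for x in block] for block in v]

def chi(b):
    """v_hat_ij = b_ij * product over the other blocks of |A_i'(E)|."""
    sizes = [sum(block) for block in b]       # |A_i(E)| for each block
    total = prod(sizes)                       # |E_hat|
    return [[x * total // sizes[i] for x in block] for i, block in enumerate(b)]

v = [[2, 1], [2, 1], [0, 3]]                  # v(E) from the previous sketch
v_hat = chi(beta(v))
print(v_hat)                                  # [[2, 2], [2, 2], [0, 4]]
# Cartesian entries: nonzero entries of v equal to the matching v_hat entries.
cartesian = [[x > 0 and x == y for x, y in zip(bv, bh)]
             for bv, bh in zip(v, v_hat)]
print(cartesian)                              # [[True, False], [True, False], [False, False]]
```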

3.6. Vector Partitions

Considering a partition $P(E)$, the contribution vector partition of $P(E)$, denoted as $V(P(E))$, is defined as the set of the contribution vectors $\mathbf{v}(E_m)$, $E_m \in P(E)$.

Since $\mathbf{v}(E) = \sum_{m} \mathbf{v}(E_m)$ for all partitions $P(E)$, the vector $\mathbf{v}(E)$ is specified as the composite contribution vector of $V(P(E))$. In general, composite contribution vectors carry less information than their vector partitions.

As with subspace partitions, vector partitions can be classified as Cartesian (when all their associated vectors are Cartesian) or implicit (when all their associated vectors are implicit). Cartesian and implicit vector partitions are denoted as $V^C$ and $V^I$, respectively.

Considering subspace $E$ and its partition into $E^C_1$, $E^C_2$, $E^C_3$, and $E^C_4$, the respective contribution vector partition is the set $\{\mathbf{v}(E^C_1), \mathbf{v}(E^C_2), \mathbf{v}(E^C_3), \mathbf{v}(E^C_4)\}$. The subsets are implicit, so this is an implicit contribution vector partition.

It should be highlighted that a Cartesian contribution vector partition is a disjoint-set data structure, as it contains all the information necessary to (a) find which subspace includes a certain event and (b) reconstruct the event union set using the operations defined above.

4. Decomposition and Recomposition Operations

Starting from a given outcome-based partition $P_R(\Omega)$, the scope here is to derive an implicit partition featuring minimal cardinality for each of the subspaces $E_k$, $k = 1, \dots, |R|$. This would be equivalent to a set of prime implicants describing the event table. The scope is accomplished through the application of specific operations on the system vectors defined above. These operations extract knowledge from the information carried in the contribution vectors and store this knowledge in the minimal possible schemes. The naming of these operations is after Shannon's decomposition [11].

4.1. Decomposition of Contribution Vectors

This section presents an operation for the systematic manipulation of a contribution vector to yield a contribution vector partition. The operation is called decomposition, since it is a case of multistate Shannon’s decomposition applied on the contribution vectors. This operation decreases the abstraction of the information stored in the vector, since contribution vector partitions carry more information than their composite vectors. Starting from a contribution vector, subsequent decomposition actions generate a low cardinality implicit partition of this vector.

Consider a subspace $E$ and a nonempty set $D \subset S_i$ of states of some block $B_i$. The decomposition operation of $E$ according to $D$ is a partition rule applied on $E$ to generate the sets $E_1$ and $E_2$ such that
$$E_1 = \{e \in E : e_i \in D\}, \qquad E_2 = E \setminus E_1.$$
Since $E_1 \cup E_2 = E$ and $E_1 \cap E_2 = \emptyset$, then $\mathbf{v}(E) = \mathbf{v}(E_1) + \mathbf{v}(E_2)$.

The simplest way of applying the decomposition operation is to go through each one of the events in $E$, as described in the operation definition. This procedure requires computational effort to decide whether a certain event belongs in $E_1$ or $E_2$. If, on the other hand, $\mathbf{v}(E)$ contains Cartesian entries, the operation can be applied directly on $\mathbf{v}(E)$. Section 6 shows that decomposing $\mathbf{v}(E)$ rather than $E$ reduces the computational effort by several orders of magnitude.

Consider a contribution vector $\mathbf{v}(E)$ and a Cartesian entry $v_{ij}$ of it. The vector decomposition operation according to $v_{ij}$ is a partition rule applied on $\mathbf{v}(E)$ to yield the contribution vector partition $\{\mathbf{v}_1, \mathbf{v}_2\}$ such that
$$\mathbf{v}_1 = \mathbf{v}(E_1) \text{ with } E_1 = A_1(E) \times \cdots \times \{s_{ij}\} \times \cdots \times A_n(E), \qquad \mathbf{v}_2 = \mathbf{v}(E) - \mathbf{v}_1.$$
The development of $\mathbf{v}_1$ ensures that this vector has the Cartesian property. Instead of the Cartesian product, we can use the bicontribution vector: the entries of $\mathbf{v}_1$ for every block $B_{i'}$, $i' \neq i$, equal $b_{i'j'} \, v_{ij} / |A_{i'}(E)|$, while for block $B_i$ the only nonzero entry is $v_{ij}$. The decomposition operation can be applied iteratively. To simplify the notation, let $\mathbf{v}^{(0)}$ be an initial contribution vector, decomposed into $\{\mathbf{v}^C_1, \mathbf{v}^{(1)}\}$, where $\mathbf{v}^C_1$ has the Cartesian property. Then, $\mathbf{v}^{(1)}$ is decomposed into $\{\mathbf{v}^C_2, \mathbf{v}^{(2)}\}$, where $\mathbf{v}^C_2$ has the Cartesian property, and so forth. Application of the decomposition operation over $T$ iterations replaces $\mathbf{v}^{(0)}$ with the contribution vector partition $\{\mathbf{v}^C_1, \dots, \mathbf{v}^C_T, \mathbf{v}^{(T)}\}$. The iterations terminate when $\mathbf{v}^{(T)}$ has the Cartesian property, so the result is a Cartesian partition. Note that the remainder vectors can be derived as $\mathbf{v}^{(t)} = \mathbf{v}^{(t-1)} - \mathbf{v}^C_t$.
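The following sketch applies one vector decomposition step at a Cartesian entry, per the reconstruction above: the first output vector has the Cartesian property, and the second is the remainder. The entry indices and the toy vector are illustrative.

```python
# Sketch of one decomposition step of v(E) at a Cartesian entry (i, j).
def decompose_at(v, i, j):
    b = [[int(x > 0) for x in block] for block in v]
    sizes = [sum(block) for block in b]       # |A_i'(E)| for each block
    v1 = []
    for i2, block in enumerate(v):
        if i2 == i:                           # block i: only state j survives
            v1.append([v[i][j] if j2 == j else 0 for j2 in range(len(block))])
        else:                                 # other blocks: v_ij spread over A_i'(E)
            v1.append([b[i2][j2] * v[i][j] // sizes[i2] for j2 in range(len(block))])
    v2 = [[x - y for x, y in zip(bv, b1)] for bv, b1 in zip(v, v1)]
    return v1, v2                             # v1 is Cartesian; v2 is the remainder

v = [[2, 1], [2, 1], [0, 3]]                  # entry (0, 0) is Cartesian here
v1, v2 = decompose_at(v, 0, 0)
print(v1)                                     # [[2, 0], [1, 1], [0, 2]]
print(v2)                                     # [[0, 1], [1, 0], [0, 1]]
```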

4.1.1. Decomposition Example

To illustrate the decomposition operation, consider the set $E_1$ of Table 1 (the 13 events yielding the first outcome) and its contribution vector $\mathbf{v}(E_1)$.

Starting from $\mathbf{v}(E_1)$, the bicontribution vector $\mathbf{b}(E_1)$ and the Cartesian contribution vector $\hat{\mathbf{v}}(E_1)$ are derived. Clearly, $\mathbf{v}(E_1)$ has three Cartesian entries. Decomposing at one of them yields (i) a vector with the Cartesian property and (ii) a remainder vector. Alternatively, the decomposition could be applied simultaneously at all three Cartesian entries. In both cases, the remainder vectors do not satisfy the Cartesian property, but they contain Cartesian entries and can be decomposed further at these entries. The resulting vectors then satisfy the Cartesian property, and no further decomposition is necessary. So, the decomposition of $\mathbf{v}(E_1)$ yields a Cartesian contribution vector partition.

The resulting vectors may contain complete blocks, like the first block in one of the vectors above. Such vectors can be further decomposed until they give implicit contribution vector partitions.

In the previous example, the cardinality of $E_1$ was 13, while the cardinality of the final implicit partition is considerably smaller. The latter cardinality could be even smaller if a more intelligent decomposition strategy were applied. For instance, one of the vectors has two complete blocks (the first and the third) and three Cartesian entries (one in the first block and two in the third). The choice of decomposition order, that is, the sequence of entries at which the decompositions are applied, is crucial, as discussed in the algorithm implementation section.

4.2. Recomposition of Contribution Vectors

The outcome of the decomposition operation is an implicit contribution vector partition. Given this, we can seek merging opportunities to create unions that have complete blocks and thus reduce the partition cardinality. Since the contribution vectors are all implicit, the recomposition operation is herein discussed in terms of bicontribution vectors.

Consider a block $B_i$ and its set of states $S_i$. Let $G = \{\mathbf{b}_1, \dots, \mathbf{b}_{|S_i|}\}$ be a set of bicontribution vectors that have identical entries in all blocks other than $B_i$ and whose block $B_i$ state sets jointly cover every state of $S_i$ exactly once. Then, the set $G$ can be replaced by the single bicontribution vector $\mathbf{b}^{\ast}$, where $b^{\ast}_{ij} = 1$ for all $j$, and the remaining entries equal those shared by the vectors of $G$. Note that $\mathbf{b}^{\ast}$ features $B_i$ as a complete block.
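A sketch of the recomposition test on bicontribution vectors follows; the merging condition mirrors the definition above, and the vectors are illustrative.

```python
# Sketch of the recomposition operation: vectors that agree everywhere except
# block i, and whose block-i states jointly cover S_i exactly once, merge
# into a single vector in which block i is complete.
def try_recompose(vectors, i):
    rest = {tuple(tuple(blk) for i2, blk in enumerate(bv) if i2 != i)
            for bv in vectors}
    if len(rest) != 1:                        # entries outside block i must match
        return None
    cover = [sum(bv[i][j] for bv in vectors) for j in range(len(vectors[0][i]))]
    if any(c != 1 for c in cover):            # each state of S_i covered exactly once
        return None
    merged = [list(blk) for blk in vectors[0]]
    merged[i] = [1] * len(merged[i])          # block i becomes a complete block
    return merged

b1 = [[1, 0], [1, 1], [0, 1]]
b2 = [[0, 1], [1, 1], [0, 1]]
print(try_recompose([b1, b2], 0))             # [[1, 1], [1, 1], [0, 1]]
```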

4.2.1. Recomposition Example

Consider a pair of bicontribution vectors taken from the decomposition example discussed earlier. The two bicontribution vectors have exactly the same entries in the second and third blocks but different entries in the first block. In addition, the first block has exactly two state instances, which jointly cover its complete state set. The two vectors can therefore be recomposed into a single equivalent vector in which the first block is complete.

5. Algorithm Implementation

Given the table associated with an event tree, the proposed algorithm launches an iterative process of decomposition and recomposition operations until we obtain an implicit partition of minimal cardinality for each one of the tree outcomes. Working with the vectors defined above rather than sets of events decreases significantly the amount of information being stored and the computational effort for manipulating this information.

The algorithm applies the vector decomposition and recomposition operations using a set of heuristic rules. These rules help identify the most promising entries at which to apply the operations. Sections 5.1 and 5.2 describe the heuristic rules, and Section 5.3 discusses how these procedures work together within the proposed algorithm.

5.1. Heuristic Rules for Decomposition Order

The decomposition operations proceed iteratively, replacing the original vectors of the outcome partition with contribution vector sets. Decompositions are applied locally, based on the features (e.g., the Cartesian entries) of each contribution vector.

Given a contribution vector, the choice of decomposition order is crucial in preserving as many of the initial complete blocks as possible. The set $E_1$ of Table 1 has a complete block of 2 states and a complete block of 4 states. If the decomposition order is different, the final implicit partition can be different. In effect, starting the decomposition on entries of one complete block can result in a smaller reduction of the cardinality of the final implicit partition of $E_1$ than starting on the other, which preserves the third block and achieves the larger final reduction. The following decomposition rules support the generation of the smallest possible implicit partitions at the minimum possible execution time, and they are applied on each contribution vector that is not Cartesian (see also the sketch after this list).
(a) If there are Cartesian entries in the current contribution vector:
(i) prefer to decompose at Cartesian entries in incomplete blocks, to avoid breaking the complete blocks; note that, in this case, the decomposition order makes no difference to the final partitions;
(ii) if the only Cartesian entries are within complete blocks, start decomposing the complete blocks with the largest span between their contribution values relative to their number of states, that is, sort the blocks according to $(\max_j v_{ij} - \min_j v_{ij}) / |S_i|$.
(b) If there are no Cartesian entries in the current contribution vector:
(i) if no complete blocks are present, prefer incomplete blocks with more states being present, and decompose them into as many vectors as the number of states present;
(ii) if only complete blocks are present, decompose the block featuring the entry with the maximum departure from its Cartesian value, that is, $\max_{i,j}(\hat{v}_{ij} - v_{ij})$.
Before the application of these rules, it is essential to recognize which of the complete blocks can actually remain complete, according to the values of the contribution vector entries. For instance, a vector could lead to a partition that includes a bicontribution vector of two complete blocks, but this is not possible when the relevant entry in the compartment of the first block is lower than the number of states in the third block. Similarly, a block cannot remain complete under any decomposition order if no single entry in another block's compartment is greater than or equal to the number of its states.
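The rule set for picking a decomposition entry can be sketched as below; only rule (a) is shown, and the span-based ranking formalizes the wording of rule (a)(ii) as an assumption of this reconstruction.

```python
# Sketch of decomposition rule (a): prefer a Cartesian entry in an incomplete
# block; otherwise rank the complete blocks by the span of their contribution
# values relative to the number of block states.
def pick_cartesian_entry(v, v_hat, state_counts):
    entries = [(i, j) for i, blk in enumerate(v) for j, x in enumerate(blk)
               if x > 0 and x == v_hat[i][j]]
    if not entries:
        return None                           # no Cartesian entries: rule (b) applies

    def is_complete(i):                       # all |S_i| states present in E
        return sum(x > 0 for x in v[i]) == state_counts[i]

    incomplete = [e for e in entries if not is_complete(e[0])]
    if incomplete:
        return incomplete[0]                  # rule (a)(i): order is immaterial
    # Rule (a)(ii): largest span of contribution values per number of states.
    return max(entries,
               key=lambda e: (max(v[e[0]]) - min(v[e[0]])) / state_counts[e[0]])

v = [[2, 1], [2, 1], [0, 3]]
v_hat = [[2, 2], [2, 2], [0, 4]]
print(pick_cartesian_entry(v, v_hat, [2, 2, 2]))   # (0, 0)
```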

The final Cartesian partition may include vectors that need to be further decomposed to derive implicit partitions. In this case, the vectors are decomposed at all their incomplete block entries (which are all Cartesian), and the decomposition order makes no difference.

5.2. Heuristic Rules for Recomposition Order

Once the decomposition stage is completed, the final implicit partitions may have vectors that can be merged. This reduces the partition cardinalities. The choice of recomposition order is crucial, since the application of decomposition operations on large event tables can produce numerous vectors as candidates for recomposition.

Let $V^I_k$, $k = 1, \dots, |R|$, denote the implicit partitions output from the decomposition stage. The recomposition algorithm proceeds according to the following steps, applied on each vector partition (or the associated bicontribution vectors).
(i) Partition $V^I_k$ into sets having the same complete blocks. Let $G$ be such a subset of $V^I_k$.
(ii) The set $G$ is a candidate for creating an additional complete block $B_i$ if there is at least one subset $G_q \subseteq G$ of cardinality $|S_i|$, where $q$ indexes the candidate subsets, such that (a) its overall contribution vector features $B_i$ as a complete block and (b) the remaining blocks have the same state sets across the members of $G_q$.
(iii) If $G_q$ is replaced by its composite contribution vector, the resulting vector partition remains implicit.
(iv) The process is repeated on the new set to reduce it further and terminates when there are no more recomposition candidates left.
The set $G$ may have several subsets that relate to the creation of different complete blocks and/or to different ways of creating a particular complete block. The recomposition procedure is supported by intelligent selection biases to ensure that the merging opportunities are properly exploited. During the iterative recomposition process, the following recomposition rules are applied on every set of Cartesian contribution vectors featuring the same complete blocks.
(i) Find all the candidate sets $G_q$ for creating each new complete block $B_i$.
(ii) Associate each candidate with a score proportional to $|S_i|$ and to the number of candidate merges per block, incorporating the potential to create two complete blocks in one go.
(iii) Select among conflicting sets according to their scores.
(iv) Apply the recomposition operation on the selected sets.
Note that, in the new partition, the composite vectors should be removed from the subset $G$ and possibly included in other subsets featuring the complete blocks of $G$ plus the complete blocks created during the recomposition. The reason why the number of candidate merges per block is taken into account is that a larger number of merges in the same block increases the probability of finding "recomposable" sets in the next iteration of the recomposition process.

Consider, for example, the vectors of Table 2, representing the implicit partition for the fifth outcome of the BWR example solved in Section 6.1. As explained above, recomposition operations are applied on bicontribution vectors. The vectors are sorted and divided into different sets according to their complete blocks. The procedure starts from the subset with the fewest complete blocks. This subset includes two vectors that are merged into a single vector. Table 3 shows the Cartesian set updated with the merged vector, which can now be merged further, and so forth. The recomposition choices are not always so few, and they can be conflicting. The subset of Table 4 appears after a few iterations. A crude analysis indicates the potential of creating new complete blocks at three different blocks, each through alternative merging actions. The candidate merging actions involve binary state blocks, so their initial scores are equal to 2. This score is multiplied by the number of candidate merges per block, giving a score of 4 for the blocks with two candidate merges each and a score of 6 for the block with three. The possible merging actions are sorted according to their scores, and an action can take place if it does not conflict with any of the higher-score actions; the highest-score nonconflicting actions are the ones finally proposed.

The involvement of the number of candidate merges in the score calculation stems from the observation that the resulting vectors have many common entries, so the possibility of these vectors being treated as candidates for subsequent merges is very high. In effect, amongst the merged vectors of Table 5, two vectors are candidates for a further merge. However, this potential should be examined along with other vectors featuring the same complete blocks.

5.3. Algorithm Implementation

The decomposition and recomposition operations discussed above are each implemented into an iterative procedure. Following is a step-by-step description of the algorithm, using the motor-operated valve (MOV) example of Table 1 to illustrate the different procedures.

Step 1. Acquire event outcome data. This step returns a table of $n$ columns for the component blocks and 1 column for the outcomes. For the MOV example, the data array has 32 rows, each listing the three block states of an event and its associated outcome.

Step 2. Get the contribution vectors referring to the outcome partition. The 32 events in columns 1 to 3 of the above array can be divided into 13, 11, 5, and 3 events according to the outcome they yield, giving one subarray per outcome.

Using contribution vectors, each of these subarrays is condensed into a row of a working array. This array now stores the contribution vector partition according to the system outcome; an extra last column stores the number of the outcome associated with each contribution vector.

Step 3. Check data consistency. This step exposes any irregularities present in the original data, by checking that all the possible complete events are present in the event table, that only these events are present, and that the table has no duplicate entries.
A first check involves the columns of the contribution vector array. Based on the number of block states, we know the exact number of occurrences for each state and verify that the sums of entries in the array satisfy this constraint. The MOV example has 3 blocks of 2, 4, and 4 states each. Therefore, each one of the 2 states of block 1 should occur 16 times, and each one of the 4 states of blocks 2 and 3 should occur 8 times.
The second check involves the rows of the contribution vector array. Each entry refers to a specific state of a specific block. In each row, the total number of occurrences of the block 1 states should equal the total occurrences of the block 2 states and of the block 3 states. In the first row of the contribution vector array, for instance, each block compartment sums to 13, the number of events yielding the first outcome.
The row check is repeated for the other rows of the array.
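The two consistency checks of Step 3 can be sketched as follows; the array layout (a list of per-block compartments per row) follows the earlier sketches, and the function name is illustrative.

```python
# Sketch of the Step 3 checks on the contribution vector array: column sums
# must match the theoretical state occurrences in Omega, and every block
# compartment of a row must sum to that row's event count.
def check_consistency(rows, state_counts, events_per_row):
    total = 1
    for k in state_counts:
        total *= k                            # |Omega| = 2*4*4 = 32 for the MOV case
    # Column check: state s_ij must occur |Omega| / |S_i| times in total.
    for i, k in enumerate(state_counts):
        for j in range(k):
            col_sum = sum(row[i][j] for row in rows)
            assert col_sum == total // k, f"column ({i}, {j}) inconsistent"
    # Row check: each block compartment sums to the row's event count.
    for row, m in zip(rows, events_per_row):
        assert all(sum(block) == m for block in row), "row inconsistent"
```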

Step 4. Contribution vector decomposition. This step applies the decomposition operation (Section 4.1) and the decomposition rules (Section 5.1) to the first row of the working array. In the MOV example, the first-row vector exhibits Cartesian entries in block 1 (2nd state) and block 3 (3rd and 4th states). The rules indicate that the decomposition is first applied to block 1 and later to block 3. According to Section 5.1, the vector is decomposed into a Cartesian contribution vector and a remaining contribution vector.

Step 5. Update system arrays. There are working arrays where contribution vectors (and their outcomes) are stored. The first one is the main working array built in Step 2. The others store the Cartesian vectors generated during Step 4, one array per outcome. Each row of the main array treated during Step 4 is replaced by the remaining contribution vector, while the Cartesian vector produced is appended to the array of the corresponding outcome. Note that there is no need to store outcome information in the outcome-specific arrays.
Steps 4 and 5 are repeated until a decomposition operation leads to two Cartesian vectors. Then, both vectors are added to the outcome-specific array, the first row of the main array is removed, and the procedure moves to the next row.

Step 6. Decompose the Cartesian arrays. Each outcome-specific array is further decomposed to get an array of implicit contribution vectors. Since the vectors have only Cartesian entries, the operations can be easily applied on the associated bicontribution vectors. After this further decomposition, Step 6 supports the following recomposition actions.

Step 7. Apply recomposition actions. This step applies the recomposition operation (Section 4.2) and the recomposition rules (Section 5.2) to the parts of the implicit arrays sharing the same outcome. Note that the example considered here is too small and simple to offer potential for recomposition.
Steps 4–7 are repeated until the main working array is empty.

Step 8. Algorithm termination. The final output of the procedures described here is the set of outcome-specific arrays, which represent implicit partitions of significantly reduced cardinality compared to the size of the system event table. Note that the decomposition and the recomposition operations developed here ensure that the consistency of the data is preserved throughout the vector processing.

The Matlab environment is chosen as suitable for the fast manipulation of matrices using the built-in matrix operations. For instance, the decomposition when there are no Cartesian entries requires knowledge of the exact event subspace that corresponds to the processed contribution vector. The algorithm can either keep track of the events contributing to each vector or go through the original event table to isolate the subspace relating to each vector. The former, though more sophisticated, takes up a lot of memory even for relatively small problems. The latter takes advantage of Matlab's built-in sort and find operations to reduce the execution time significantly.

6. Case Studies

6.1. Case Study 1

The first case study is taken from Papazoglou [10] and concerns the development of an event tree for a boiling water nuclear reactor. The system involves 10 state blocks with 2 to 4 block states each. The event space consists of 3072 complete events, and the system has 5 outcomes. Papazoglou [10] provided a set of Boolean equations and developed functional block diagrams that embedded information on the dependencies between the blocks, finally presenting a reduced event tree of 41 branches. Note that, if the reactor system is treated in BowTieBuilder [12, 13] without providing dependency information, the resulting event tree has 110 branches. This confirms that the efficiency of functional block diagram applications in reducing the size of event trees depends on the structure of the Boolean model that dictates the dependencies between the blocks.

The methodology proposed here takes as input the original event table and produces the results reported in Table 6. Note that (i) the states of block I correspond to the block states of the original reference and (ii) the outcomes correspond to {CI, CII, CIII, CIV, Success} of Papazoglou [10]. Each row of Table 6 gives a Cartesian vector (or an implicant) corresponding to a branch of the event tree. In this sense, the reduced tree derived here has only 38 branches. The proposed algorithm identifies an inconsistency in the partition of block C of the original data. Resolving it leads to different results for outcome CIV, and this explains the difference of three branches between 41 and 38. The rest of the branches/implicants are notably the same, with a single exception involving the choice to expand block U rather than block Q (see the bold cells of Table 6, lines 12–15). While both choices yield four branches/implicants, this differentiation shows that the procedure proposed here is not biased by the order of the blocks in the event table data.

6.2. Case Study 2

The proposed methodology is tested against a large problem involving 16 blocks: one block with four states, four blocks with three states, and eleven binary blocks. The event space therefore consists of $4 \times 3^4 \times 2^{11} = 663{,}552$ complete events, so the original event table has $663{,}552 \times 17 = 11{,}280{,}384$ cells. The system has 5 possible outcomes. The initial event table is constructed via recursive partitions of the event subspace.

The decomposition stage yields an implicit partition with a total of 273 vectors, in particular, 86, 115, 34, 28, and 71 for the five outcomes. The recomposition stage requires 1.08 CPU seconds. The final implicit partitions then have a total of 178 vectors: 31, 54, 16, 20, and 57 for the respective partitions of the five outcomes.

CPU times can give an idea of the relative effort invested in the different activities taking place during a run. In this relatively large problem, the preparatory Steps 1–3 of Section 5.3 require 1.27 CPU seconds. The decomposition steps require only 0.0469 CPU seconds for a total of 86 decompositions using rule (a) and 124 CPU seconds for a total of 52 decompositions using rule (b) of Section 5.1. Therefore, the application of decompositions on the basis of Cartesian entries reduces the computational effort by almost 4 orders of magnitude. Clearly, an intelligent reduction of the frequency of visiting the event table would bring significant benefits in the computational times. Note that the CPU times refer to an Intel Core Quad 2.50 GHz processor with 1.95 GB RAM.

Finally, the proposed procedure manages to reduce the expanded event tree to just 0.0268% of its original size. The final partitions are easily translated into a set of implicants. There is no proof that this is a prime set, since a theoretical background of necessary and sufficient minimality conditions is still lacking. In any case, the proposed methodology is a fast, effective, and intelligent way to substantially reduce a large event tree and facilitate the quantification of risk.

7. Conclusions

This work presents a new methodology for the reduction of event trees without the use of structural or functional information on the system. The work applies a holistic approach based on the concept of contribution vectors to generate a minimal set of implicants representative of the system behavior. The method inherits the advantages and limitations of the event tree representation. In this sense, the method is not hindered by component interdependences or noncoherent behavior in the considered systems. The proposed representation framework builds on Cartesian products to define partitions described by contribution vectors. The representation provides the basis for the application of decomposition and recomposition operations on single contribution vectors and contribution vector partitions. Implementation issues for the efficient use of these operations within an iterative algorithmic framework are discussed thoroughly.

The proposed method is tested against two case studies, one found in the literature and a fictitious large-scale problem. In the former, the method provides a set of prime implicants very similar to the one reported in the literature. The latter illustrates the efficiency of the method in handling large-scale problems and demonstrates the computational advantages of the proposed representation and operations.

Future work considers the use of the theoretical background presented here to develop necessary and sufficient conditions for the minimality of the final set of implicants. These conditions could then be incorporated in the recomposition stage to guide an optimal search algorithm towards the set of prime implicants.

Nomenclature

$n$: Number of system blocks
$B_i$: System block, $i = 1, \dots, n$
$S_i$: Set of internal states of block $B_i$
$s_{ij}$: $j$th internal state of block $B_i$, $j = 1, \dots, |S_i|$
$\Omega$: System event space
$e$: Complete joint event, $e \in \Omega$
$R$: Set of all the possible system outcomes
$r$: System outcome, $r \in R$
$\varphi$: Event table mapping $\varphi: \Omega \to R$, where $\varphi(e) = r$
$E$: Nonempty subspace of $\Omega$
$|E|$: Number of elements (cardinality) of set $E$
$P(E)$: Partition applied over $E$
$E_m$: $m$th subset of $E$ according to $P(E)$, $m = 1, \dots, |P(E)|$
$P_R(\Omega)$: Outcome-based partition of $\Omega$ (according to mapping $\varphi$)
$A_i(E)$: Set of block $B_i$ states, $s_{ij}$, in the events comprising $E$
$\hat{E}$: Cartesian product $A_1(E) \times \cdots \times A_n(E)$
$E^C$: Cartesian subspace
$E^I$: Implicit subspace
$P^C(E)$: Cartesian partition over subspace $E$
$E^C_m$: $m$th subset of $P^C(E)$
$P^I(E)$: Implicit partition over $E$
$E^I_m$: $m$th subset of $P^I(E)$
$\mathbf{v}(E)$: Contribution vector of $E$
$v_{ij}$: Entry of $\mathbf{v}(E)$, $i = 1, \dots, n$ and $j = 1, \dots, |S_i|$
$\mathbf{b}(E)$: Bicontribution vector of subspace $E$
$b_{ij}$: Entry of $\mathbf{b}(E)$, $i = 1, \dots, n$ and $j = 1, \dots, |S_i|$
$\hat{\mathbf{v}}(E)$: Cartesian contribution vector of $E$
$\hat{v}_{ij}$: Entry of $\hat{\mathbf{v}}(E)$, $i = 1, \dots, n$ and $j = 1, \dots, |S_i|$
$V(P(E))$: Contribution vector partition of $P(E)$
$\mathbf{v}(E_m)$: $m$th member of $V(P(E))$, $m = 1, \dots, |P(E)|$
$L$: Vector length, $L = \sum_{i=1}^{n} |S_i|$
$\mathbf{v}$: Vector
$V$: Vector partition (i.e., set of vectors)
$(\cdot)^C$: Entity obeying the Cartesian property
$(\cdot)^I$: Entity obeying the property of implicitness
$\beta$: Operation applied on $\mathbf{v}(E)$ to obtain $\mathbf{b}(E)$
$\chi$: Operation applied on $\mathbf{b}(E)$ to obtain $\hat{\mathbf{v}}(E)$
$D$: Subset of $S_i$
$G$: Subset of a vector partition featuring the same complete blocks
$G_q$: Subset of $G$ such that its recomposition creates a complete block
$E_1$: Event subspace such that $\mathbf{v}(E_1)$ has the Cartesian property.
Glossary
Complete set of block states: Set of all the possible states of a certain system block
Complete joint event: Joint event containing an instance of each one of the system blocks
Cartesian property: Property of event subspaces and contribution vectors that can be generated by a Cartesian product; also of subspace partitions and contribution vector partitions that can be generated by a set of Cartesian products
Property of implicitness: Property of Cartesian entities (i.e., subspaces, vectors, partitions) whose associated Cartesian products contain only complete or singleton sets of block states
Cartesian entries: Nonzero entries of a contribution vector that are equal to the corresponding entries of its Cartesian contribution vector.