
Mathematical Problems in Engineering

Volume 2013 (2013), Article ID 401616, 13 pages

http://dx.doi.org/10.1155/2013/401616

## A Generalized If-Then-Else Operator for the Representation of Multi-Output Functions

Ilya Levin^{1} and Osnat Keren^{2}

^{1}School of Education, Tel-Aviv University, Ramat Aviv, Tel Aviv 69978, Israel

^{2}School of Engineering, Bar-Ilan University, Ramat-Gan 52900, Israel

Received 6 September 2012; Revised 30 December 2012; Accepted 22 January 2013

Academic Editor: Bozidar Sarler

Copyright © 2013 Ilya Levin and Osnat Keren. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The paper deals with fundamentals of systems of Boolean functions called multi-output functions (MOFs). A new approach to representing MOFs is introduced based on a Generalized If-Then-Else (GITE) function. It is shown that known operations on MOFs may be expressed by a GITE function. The GITE forms the algebra of MOFs. We use the properties of this algebra to solve an MOF-decomposition problem. The solution provides a compact representation of MOFs.

#### 1. Introduction

Logic design, as a scientific discipline, has a fascinating history. This field was intensively studied from the 1950s to the 1970s. This produced many remarkable results including the theorem of function completeness, optimization techniques, decomposition techniques, a number of spectral methods, and a theory of multivalued functions [1–7]. However, there are topics in logic design which are still interesting and have untapped potential from a theoretical point of view [8, 9]. One such topic is “systems of logic functions” (logic systems) and their corresponding representations. The most popular representations of logic systems are (a) the matrix (which is actually a Karnaugh-like form) and (b) the Multiterminal Binary Decision Diagram (MTBDD) [10–14]. There are three known ways to treat and optimize systems of logic functions: (a) as a set of single decision diagrams with two terminal nodes that may share conditional nodes (Shared Binary Decision Diagrams (SBDDs)) [15]; (b) as a single, so-called characteristic function, where the output of each function of the system is considered to be an additional input variable of the characteristic function [16]; (c) as a single decision diagram whose terminals are words [14].

In this paper, we present a different approach. We represent a system of logic functions as a network of interconnected subfunctions. In other words, we decompose a given system into a number of subsystems of lower complexity, thus achieving a compact representation of the given system.

We introduce a new Generalized If-Then-Else operator, denoted as GITE, and an algebra based on this operator. A system of logic functions can be described as a GITE formula. In this sense, the conventional algebra of Boolean formulas is a particular case of the GITE algebra. In turn, the new operator can be considered a generalization of the known If-Then-Else (ITE) operator that is widely used in computer science.

GITE-algebra makes it possible to formulate and solve various optimization problems. In our work, we formulate a general optimization problem, which is the decomposition of a system into a *compact* network of GITE components with predefined characteristics. We present a solution of this task in the form of an optimization algorithm. The decomposition algorithm is based on a theorem of GITE formula transformation presented in Section 4.

The paper is organized as follows. Basics of the theory of logic functions and Boolean formulas are briefly reviewed in Section 2. The GITE operator is introduced in Section 3. A polynomial representation of GITE formulas is presented in Section 4. The optimization problem is formulated in Section 5. A solution to the optimization problem is given in Section 6. Section 7 includes experimental results. Conclusions are given in Section 8.

#### 2. Preliminaries

In this section we review some fundamentals of logic design underlying our work.

##### 2.1. Representation of a Single Logic Function

A logic function is a function that takes the values 0 and 1. An expression obtained by substituting functions into each other, followed by renaming of variables, is called a *formula* describing this substitution. The formula-based representation of a logic function is an *analytical* expression.

Let a set of elementary functions and a set of formulas over these functions be given. We define the depth of a formula as follows.

*Definition 1. *Symbols representing the input variables are considered to be formulas of *depth 0*. A formula F has depth k + 1 if F can be expressed as F = f(F_1, …, F_n), where F_1, …, F_n are formulas of maximal depth k, and n is the number of arguments of f.

The formulas F_1, …, F_n are called subformulas of F. The function f is called the *outside* or the *main* operation of formula F. All subformulas of the F_i are also considered to be subformulas of F.

When talking about a formula corresponding to a specific function, it is acceptable to say that a formula *represents* the function. Unlike the truth table representation of a function, the formula representation is not unique. Formulas representing one and the same function are *equivalent*. Actually, in most cases, we do not deal with functions but rather with formulas representing these functions. In other words, we deal with the algebra of formulas [17, 18].

One of the popular forms for the representation of a logic function is a Binary Decision Diagram (BDD) [17]. A fundamental operator enabling operations between BDDs is the *If-Then-Else (ITE)* operator [19]. In our paper, we use the ITE operator as an operator on the system of logic functions and not just on the set of BDDs.

The ITE operator serves to represent any function of two variables [17, 19] and consequently may be considered a *universal* basis for the set of logic functions. In other words, any function of n variables can be represented as a substitution of ITEs. We define the depth of an ITE formula as follows.

*Definition 2. *Symbols representing the input variables are considered ITE formulas of *depth 0*. An ITE formula F has depth k + 1 if F can be expressed as F = ITE(F_1, F_2, F_3), where F_1, F_2, F_3 are ITE formulas of maximal depth k.

##### 2.2. Representation of the System of Functions

The present study deals primarily with a system of logic functions rather than a single Boolean function. A system of logic functions can be considered as a single function of n Boolean input variables and m Boolean output variables. Such functions are called *multi-output functions* (MOFs) [10, 11, 20–23]. The domain and range of an MOF are the n-dimensional and m-dimensional Boolean cubes, B^n and B^m, respectively. We refer to a vertex of a Boolean cube by its corresponding integer value or as a minterm (a literal is a variable or its complement; a minterm is a product of all the literals). An MOF can be defined by a truth vector (also called a truth table), that is, a list of all possible input combinations with the corresponding output values (vectors). In this paper we refer to the output vectors by their corresponding integer values. An *analytical* representation of MOFs differs from the conventional Boolean representation; an analytical MOF representation is the focus of the present study.

The Multiterminal Binary Decision Diagram (MTBDD) is a data structure for the representation and efficient manipulation of MOFs [10, 24]. Unlike the conventional BDD, which has two terminal nodes, the MTBDD has an arbitrary number of terminal nodes.

*Definition 3. *An MTBDD is a directed acyclic graph, representing an MOF or a set of logic functions .

Properties of MTBDDs and other MOF representations and their applications have been studied intensively [25–33]. To the best of our knowledge, some aspects of the representation of MOFs, and especially an algebra for MOFs, have never been developed, although two main operations on MOFs, the Apply and the ITE, were studied in [10]. The Apply may be used to accomplish a large number of matrix operations. It is defined as a pointwise binary operation on two MOFs, where the operation may be any binary operation on two operands, for example, addition, min, max, and so forth.

The ITE operator has three arguments. The first argument is a Boolean function taking the values 0 and 1. The second and third arguments are MOFs. ITE is defined as follows: the result takes the value of the second argument at the points where the first argument evaluates to 1, and the value of the third argument elsewhere.

The following example illustrates the above operations. To simplify the explanation we use the truth vector representation of MOFs. That is, a function of n variables is represented as a vector of length 2^n whose ith element is the value of the function at the input combination i.

*Example 4. *Let F, G, and H be three MOFs specified by the following truth vectors:
and let the operation stand for addition of integers. Then,
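The Apply and ITE operations can be sketched in Python on truth vectors. The function names and the sample vectors below are our own illustration; output values are integers encoding the output vectors, as in the paper.

```python
# Sketch of Apply and ITE on MOFs given as truth vectors.
# Output values are integers encoding the output vectors.

def apply_op(F, G, op):
    """Apply(F, G, op): pointwise binary operation on two truth vectors."""
    assert len(F) == len(G)
    return [op(f, g) for f, g in zip(F, G)]

def ite(f, F, G):
    """ITE(f, F, G): take F(x) where the Boolean selector f(x) = 1, else G(x)."""
    return [F[i] if f[i] else G[i] for i in range(len(f))]

F = [1, 3, 0, 2]   # an MOF of two input variables (illustrative)
G = [2, 2, 1, 0]
f = [1, 0, 0, 1]   # a Boolean function used as the selector

print(apply_op(F, G, lambda a, b: a + b))  # -> [3, 5, 1, 2]
print(ite(f, F, G))                        # -> [1, 2, 1, 2]
```

Any binary operation (min, max, XOR) can be passed in place of integer addition.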

In the following section we generalize these two operations to define the algebra of MOFs.

##### 2.3. Partition on a Boolean Cube

In this section we describe the concept of partition on Boolean cubes.

*Definition 5. *A partition π on a set S is a collection of disjoint subsets of S whose set union is S; that is, π = {B_1, …, B_k} such that B_i ∩ B_j = ∅ for i ≠ j and B_1 ∪ ⋯ ∪ B_k = S.

We refer to the subsets B_i of π as the blocks of π.

The fact that two distinct elements s and t are in the same block of π is denoted by s ≡ t (π). In other words, s ≡ t (π) if and only if there exists a block B ∈ π such that s ∈ B and t ∈ B.

*Definition 6 (intersection of partitions). *Let π_1 and π_2 be two partitions. The product π_1 · π_2 is a partition comprising the nonempty intersections of blocks of π_1 and π_2, such that s ≡ t (π_1 · π_2) if and only if s ≡ t (π_1) and s ≡ t (π_2).

*Definition 7 (summation of partitions). *Let π_1 and π_2 be two partitions. The sum π_1 + π_2 consists of blocks for which s ≡ t (π_1 + π_2) if and only if there exists a chain s = s_0, s_1, …, s_r = t for which either s_i ≡ s_{i+1} (π_1) or s_i ≡ s_{i+1} (π_2), 0 ≤ i ≤ r − 1.

We denote the product and the sum of partitions by π_1 · π_2 and π_1 + π_2, respectively.

For two partitions π_1 and π_2, we say that π_2 is larger than or equal to π_1, and write π_2 ≥ π_1, if and only if every block of π_1 is contained in a block of π_2. Thus, π_1 · π_2 ≤ π_1 ≤ π_1 + π_2, and π_1 = π_2 if and only if both π_1 ≥ π_2 and π_2 ≥ π_1. The algebraic structure of partitions under this order is a lattice. This lattice has both a *Zero* (the smallest partition, whose blocks are the singletons of S) and a *One* (the largest partition, whose single block is S itself). Obviously, π(0) ≤ π ≤ π(1) for every partition π.

In this paper, partitions are defined on the n-dimensional Boolean cube B^n. A block of a partition is a subset of the vertices of the cube. Vertices that belong to the same block must have the same output value. The *Zero* partition on a Boolean cube corresponds to the Sum-Of-Minterms (SOM) representation of a function.

*Example 8. *Consider a function specified by the truth vector
and a partition
Note that all the elements in a block are associated with the same value of the function. Consider a different partition
The product of partitions is
The sum of partitions is
Since we are dealing with partitions of the Boolean cube, the blocks of a partition can be expressed as Boolean functions. For example, the Boolean functions that correspond to the blocks of the first partition
are
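As a sketch of Definitions 6 and 7, the product and sum of partitions can be computed directly on blocks represented as sets. The vertex numbering and the sample partitions below are our own illustration.

```python
# Sketch: product and sum of partitions (Definitions 6 and 7).
# A partition is a set of frozenset blocks over the cube's vertex indices.

def partition_product(p1, p2):
    """Blocks of p1 * p2 are the nonempty pairwise block intersections."""
    return {b1 & b2 for b1 in p1 for b2 in p2 if b1 & b2}

def partition_sum(p1, p2):
    """Blocks of p1 + p2: repeatedly merge blocks that share an element
    (the chain condition of Definition 7)."""
    blocks = [set(b) for b in p1 | p2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks}

p1 = {frozenset({0, 1}), frozenset({2, 3})}
p2 = {frozenset({0, 2}), frozenset({1, 3})}
print(partition_product(p1, p2))  # the Zero partition: four singleton blocks
print(partition_sum(p1, p2))      # the One partition: one block {0, 1, 2, 3}
```

For these two partitions the product is the smallest element of the lattice and the sum is the largest, illustrating π_1 · π_2 ≤ π_i ≤ π_1 + π_2.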

#### 3. The Generalized ITE Operator

The first argument in the definition of the ITE operator can be interpreted as a two-block partition of the Boolean cube or as a Boolean function. The present paper focuses on a *Generalized ITE (GITE) operator*, which uses an r-block partition instead of the two-block partition.

##### 3.1. Definition of the GITE Operator

First, we define the GITE operator on MOFs that are specified by truth vectors.

*Definition 9. *Let π = {B_1, …, B_r} be a partition of the n-dimensional Boolean cube comprising r blocks, and let F_1, …, F_r be a set of MOFs defined by their truth vectors. Then, the GITE operator is GITE(π; F_1, …, F_r), where the ith element of the resulting truth vector is the ith element of the truth vector of F_j, where B_j is the block containing vertex i.

*Example 10. *Let π be a 4-block partition of the Boolean cube, and let the vertices of the cube be addressed by their corresponding integer values,

Let , where
Then is
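Definition 9 can be sketched as follows. Blocks are sets of vertex indices and terminals are truth vectors; both are chosen here purely for illustration.

```python
# Sketch of the GITE operator (Definition 9): the ith entry of the result
# is the ith entry of the terminal MOF F_j whose block B_j contains vertex i.

def gite(blocks, terminals):
    """blocks: list of sets partitioning {0, ..., 2**n - 1};
    terminals: one truth vector per block."""
    size = sum(len(b) for b in blocks)
    out = [None] * size
    for block, F in zip(blocks, terminals):
        for i in block:
            out[i] = F[i]
    return out

blocks = [{0, 3}, {1, 2}]        # a 2-block partition of the 2-cube
F1 = [5, 5, 5, 5]                # terminal MOFs (constants, for illustration)
F2 = [7, 7, 7, 7]
print(gite(blocks, [F1, F2]))    # -> [5, 7, 7, 5]
```

With a two-block partition and constant terminals this reduces to the ordinary ITE.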

Now we are ready to formulate the concept of MOF in terms of GITE.

*Definition 11. *An MOF is a mapping from B^n to B^m, given as an operation defined on two sets: a set of partitions and a set of terminals (In [10], the values of an MOF are called *terminal nodes* or terminals (as opposed to the internal nodes that correspond to variables). In Algorithmic State Machine theory, the values of the MOF represent operations to be performed and hence are called *operators* [34]. In this paper we use the term “terminals.”).

In other words, the GITE comprises a *partition* portion and a *terminals* portion. The GITE operation maps a partition of the Boolean cube onto the predefined set of terminals.

##### 3.2. GITE Formulas

In this section we introduce the algebra of GITE formulas. The elements of the algebra are MOFs, and the basic operator is the GITE. We define the depth of the GITE formula as follows.

*Definition 12. *Symbols of given terminals from the finite set of terminals are considered as GITE formulas of *depth 0*. A GITE formula F has depth k + 1 if F can be expressed as F = GITE(π; F_1, …, F_r), where F_1, …, F_r are formulas of maximal depth k and r is the number of blocks in the partition π.

Unlike the algebra of Boolean functions, the algebra of GITE formulas contains an additional operation, *composition*. The composition operation (*Compose*) corresponds to the known Apply operation in the algebra of ITE. Composition means constructing a GITE from two other GITEs. The values of the composed GITE are calculated by performing a bitwise operation, denoted by “∘”, on the values of the given GITEs.

*Definition 13 (composition). *Let
then,

In other words, the composition of two GITEs is performed by multiplying the corresponding partitions and applying the “∘” operation pairwise to the terminals.

*Example 14. *Let
and let
where
Let “∘” stand for a bitwise XOR between two integers represented in radix two. Then,
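A minimal sketch of the Compose operation of Definition 13, assuming an MOF is given as a dict that maps each block of its partition to a single terminal value; the block structure and terminal values below are our own example.

```python
# Sketch of Compose (Definition 13): multiply the partition portions and
# combine the terminals pairwise with the "compose" operation.

def compose(g1, g2, op):
    """g1, g2: dicts mapping frozenset blocks to integer terminals."""
    out = {}
    for b1, t1 in g1.items():
        for b2, t2 in g2.items():
            b = b1 & b2                 # block of the product partition
            if b:
                out[b] = op(t1, t2)     # pairwise operation on terminals
    return out

g1 = {frozenset({0, 1}): 6, frozenset({2, 3}): 3}
g2 = {frozenset({0, 2}): 5, frozenset({1, 3}): 1}
result = compose(g1, g2, lambda a, b: a ^ b)  # bitwise XOR, as in Example 14
print(result[frozenset({0})])  # -> 3, since 6 XOR 5 = 3
```

The resulting partition is the product of the two given partitions, so the composed MOF can have up to r1 · r2 blocks.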

#### 4. *D*-Polynomials: An Analytical Representation of GITE Formulas

In this section we describe a special type of GITE formula: one in which the partitions are expressed in an analytic form, namely, as Boolean expressions in Sum-of-Products (SOP) form. We call this analytical representation the *D*-polynomial representation [35–37].

*Definition 15. *Let f_1, …, f_k be Boolean functions expressed in SOP form, and let π = {B_1, …, B_k} be a partition in which block B_i corresponds to f_i. A *D*-polynomial is a formal sum of the products f_i paired with their associated terminals from the terminal set.

A *D*-polynomial can be interpreted as follows. If a function f_i evaluates to 1, then the polynomial takes the value of the corresponding terminal. If all of the explicit functions are equal to 0, then the polynomial takes the value of the implicit term.

A *D-polynomial of depth 1* is a formula defined over a set of terminals. *D*-polynomials of greater depth are defined in the same way as in the general case of GITE formulas.

Since the *D*-polynomials are defined in the SOP-based analytical form, the *Compose* operation on the set of *D*-polynomials may be easier to perform than in the general case of GITE, namely, by using Boolean operations. In turn, the *Compose* operation is interpreted as a composition of *D*-polynomials.

*Definition 16 (composition of D-polynomials). *Let
The composition of and denoted by is defined as
over each pair of products from the two polynomials, including the implicit terms.

Here the product of two terms is the logic product (AND) of the corresponding functions, and “∘” is a predefined bitwise operation between the terminals. In other words, when the product of the functions evaluates to 1, the operation between the corresponding terminals is performed.

Note that partition algebra and Boolean algebra use different terminologies. Consequently, the terminologies of the GITE and *D*-polynomials are also different. In particular, if two partition blocks are disjoint, their corresponding Boolean functions are *orthogonal*.

According to this definition, the composition operation corresponds to the *product of the partition portions* of the *D*-polynomials. Next we show that the *D*-polynomial operation corresponding to the *sum of the partition portions* is not a sum of *D*-polynomials (as might be expected), but a *factorization*.

Without loss of generality, let a *D*-polynomial be represented as .

*Definition 17 (factorization of D-polynomials). *The factorization of a *D*-polynomial with respect to two of its partitions is its representation in the following form:
where k is the number of blocks in the sum of the partitions, and the inner terms stand for the *D*-polynomials corresponding to the remaining functions.

Note that the sum of partitions is equivalent to the join (max) operation in the partition lattice [22]. In this sense the sum π_1 + π_2 is the minimal partition that is larger than or equal to both π_1 and π_2 (see Definition 7). Hence, the sum can be interpreted as a common factor.

*Example 18. *Consider two partitions:
Let
Then, , and
where

*Laws and Properties of D-Polynomials. *The *D*-polynomials are a special class of GITE formulas. In what follows we use the *D*-polynomial notation to represent and manipulate MOFs. There are two main reasons for using the *D*-polynomial notation: (a) *D*-polynomials are formulas; (b) *D*-polynomials support SOP function representations.

We now introduce the substitution of certain *D*-polynomials into another *D*-polynomial as follows. Let the following be MOFs:

After substitution we obtain a hierarchical structure comprising a number of *D*-polynomials.

In general, let F be a formula representing a multi-output function. A substitution of formulas in place of the terminals gives a new formula.

Obviously, the two formulas represent different functions, but we consider them as the same function with different arguments. As a result, we can deal with an algebra of functions that are determined by the partition portion of the corresponding GITE formula. The same situation is found in the conventional Boolean algebra of logic functions, where we usually talk about logic operations (AND, OR, NOT, etc.) on the set of logic functions.

A special class of *D*-polynomials is the class of atomic *D*-polynomials, that is, *D*-binomials.

*Definition 19. *A *D*-binomial is a special case of a *D*-polynomial that includes exactly one explicit product.

The following theorem states that any *D*-polynomial can be represented as a composition of all of its *D*-binomials.

Theorem 20. *An arbitrary D-polynomial can be represented as a composition of its D-binomials.*

The proof of this theorem is given in the Appendix.

#### 5. The Decomposition Problem

Each class of hardware technology requires its own specific optimization criterion. Both the technology and the corresponding optimization criteria change continuously with progress in hardware and updates in design requirements. However, universal optimization criteria do exist; they relate to the complexity of the formulas representing a discrete function. Complexity may be considered *quantitatively* and *qualitatively*. Here we take the *compactness* of the formula representation as the quantitative complexity criterion and *modularity* as the qualitative complexity criterion. Compactness is an important parameter for storage and remote communication of information, as well as for software representations of discrete functions. In our experiments, we assessed compactness as the number of nodes in the corresponding decision diagram.

The universal qualitative criterion for complexity is modularity. We represent a given discrete function as a hierarchical network of modular components, each performing part of a common functionality. There are a number of reasons to prefer a modular (structured) representation. First of all, a structured representation simplifies debugging and testing; further, modules are potentially reusable. Moreover, structured representations usually correspond to their specifications. This correspondence is highly desirable, since it helps in understanding and interpreting the realization of the function and can serve as a powerful measure of the complexity of the representation.

Below we discuss methods for transforming a system of logic functions into a structured hierarchical network of interconnected components. We decompose the system while minimizing the total number of nodes in the resulting structure.

Let us consider two extreme cases of the decomposition of *D*-polynomials: the *D*-binomials (i.e., the *binomial representation*) and the *monolith*. The monolith corresponds to a single *D*-polynomial whose partition corresponds to the Reduced Ordered MTBDD representation of the MOF [17, 19, 20, 32]. The following example illustrates these two extreme cases.

*Example 21. *Consider the MOF specified in Table 1. The function has five inputs and four outputs. Each row of the table corresponds to one *D*-binomial. Hence,
where the ith *D*-binomial corresponds to the ith row in the table. The binomial representation of the function is shown in Figure 1, and the corresponding monolith is presented in Figure 2. The terminal nodes of the decision diagrams are marked by the decimal values of the corresponding outputs. Note that, as a result of the composition operation, the sets of terminal nodes in the two diagrams are not the same. In this example, let the “∘” operation be a bitwise OR between the corresponding output vectors. For example, terminal node “7” in Figure 2 corresponds to the two terminal nodes “6” and “3” in Figure 1. Such cases, where new values are created, stem from the nonorthogonality of the products (in our case, two of the products are nonorthogonal).
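The composition of *D*-binomials under a bitwise OR can be sketched on truth vectors. The cube-string encoding of products and the sample binomials are our own illustration; with two nonorthogonal products carrying terminals 6 and 3, the composition creates the new terminal 7, as in the example above.

```python
from itertools import product as cartesian

def cube_to_minterms(cube):
    """Expand a product string like '1-0' ('-' = don't care) to minterms."""
    choices = [(0, 1) if c == '-' else (int(c),) for c in cube]
    return {int(''.join(map(str, bits)), 2) for bits in cartesian(*choices)}

def compose_binomials(binomials, n, op=lambda a, b: a | b, default=0):
    """Compose D-binomials (cube, terminal) over the n-cube; each binomial
    contributes its terminal on the minterms covered by its product."""
    out = [default] * (2 ** n)
    for cube, t in binomials:
        for m in cube_to_minterms(cube):
            out[m] = op(out[m], t)
    return out

# Two nonorthogonal products with terminals 6 and 3: their common minterm
# receives the new terminal 6 | 3 = 7.
binomials = [('1-', 6), ('-1', 3)]
print(compose_binomials(binomials, 2))  # -> [0, 3, 6, 7]
```

The implicit terminal 0 fills the minterms covered by no product.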

Note that the monolith can be derived by composing all the *D*-binomials. However, this procedure has high complexity. For example, to obtain the monolith representation in Example 21, one has to compose the five *D*-binomials. The first Compose is quite simple, since the first two binomials are orthogonal; that is,
The second Compose is

It is possible to avoid this complex calculation and, at the same time, reduce the resulting number of MTBDD nodes by factorization over subsets of *D*-binomials. The decomposition proposed below is based on this principle.

There are many ways to group *D*-binomials to form a network of *D*-polynomials. Different groupings of the binomials yield different representations of MOF. Each *D*-polynomial in the network has its own optimal structure, that is, its own optimal header. This fact forms the basis of our decomposition approach. The decomposition goal is to represent the given *D*-polynomial in a compact form so as to optimize a certain cost function, for example, the number of nodes in the corresponding decision diagram. The number of nodes in the decomposed *D*-polynomial is upper bounded by .

In our example, Figure 3 shows a compact representation of the MOF as a network of MTBDDs. For comparison, in this example, the monolith MTBDD has 17 nonterminal nodes (NTNs), whereas the decomposition reduces the number of nonterminal nodes to 8.

#### 6. Decomposition Algorithm

The main decomposition algorithm recursively groups the set of products representing the given *D*-polynomial into a number of portions. The main decomposition algorithm is presented in Figure 4. Each recursion step comprises a fragmentation of the current portion into a *block* and a *remainder*. In turn, a block is divided into a *block header* and a number of *block fragments* (*tails*). The fragmentation algorithm is described below and presented in Figure 5.

The main decomposition algorithm uses a stack for saving the current portion of the given *D*-polynomial. After the fragmentation of the current portion, each of the resulting tails and the remainder are saved on the stack sequentially. If the remainder portion is empty, which means that all products of the current portion of the *D*-polynomial are included in the block, then nothing is saved onto the stack. Similarly, if a certain resulting tail comprises just one product (one terminal), then nothing is saved onto the stack. The main decomposition algorithm stops when the stack is empty. Clearly, this happens when all products of the given *D*-polynomial are distributed between blocks.

##### 6.1. Fragmentation Algorithm

In each step, the fragmentation algorithm divides the set of *D*-binomials (which is the set of products) into two subsets. One subset consists of the products that determine the *block* *D*-polynomial, and the other contains the products that form the *remainder* *D*-polynomial. Formally,
where the partition determines a common factor (the block header) and the block fragments. The block header is selected in such a way as to provide a minimization of the resulting structure.

*Definition 22 (prefix). *Let p be a product in a block. Each product covering p is called a *prefix*.

The set of all prefixes associated with the products of the block defines the block *header*.

The block header can be represented as a monolith whose internal nodes are associated with the prefix variables. The terminal nodes of the monolith correspond to the block fragments representing a *D*-polynomial with the remaining input variables. We call such fragments *tails*.

In what follows we describe the fragmentation algorithm.

In each iteration the fragmentation algorithm chooses the prefixes for the current block. The prefixes that form the block header are chosen one by one. Each newly added prefix must be orthogonal to all prefixes accumulated so far. The algorithm is depicted in Figure 5.

Each iteration starts by preparing a list of candidate prefixes (see Section 6.3). Then, the first (basic) prefix is chosen. The basic prefix defines the set of input variables that will determine the partition . Therefore it has special significance. The left hand side of Figure 5 shows the steps related to choosing the basic prefix. The set of criteria for ranking the candidate prefixes is described in Section 6.4. After choosing the prefix that is ranked the highest as the basic prefix, the algorithm constructs the block header by adding secondary prefixes. The right hand side of Figure 5 shows the algorithm that gathers all the prefixes that form the block header. The set of criteria for ranking the candidate secondary prefixes is described in Section 6.5. Let us start by presenting the notations.

##### 6.2. Notations

(i) the set of products for a given MOF.
(ii) the prefix under consideration, that is, the basic prefix in the block header.
(iii) a secondary prefix to be added to the block header.
(iv) the number of variables in the prefix.
(v) the set of products having the prefix, called the family of the prefix.
(vi) the set of products that do not depend on any of the prefix variables.
(vii) the set of products depending on some of the prefix variables; this is the “undecided” set, since these products are neither in the prefix family nor in its remainder.
(viii) the set of products orthogonal to the prefix.
(ix) the set of products that are not orthogonal to the prefix.
(x) the set of variables in all the products in a set.
(xi) the number of literals in all the products in a set.
(xii) the set of outputs corresponding to all the products in a set.

*Example 23. *Consider the function specified in Table 1. The function is specified by five products.

Let be a prefix under consideration. The number of variables in is . The family of , that is, the set of products having prefix , is . There are two products in . Note that the set of variables in all the products of is , the number of literals in all the products of is , and the set of outputs corresponding to all the products of is .

The set of the products that do not depend on any of the prefix variables is , and the set of products depending on some of the prefix variables is . Note that is orthogonal to the prefix; hence .

##### 6.3. Preparing the List of Candidates

The prefix can be either a product or a product that covers it. A straightforward procedure is proposed below for constructing the list of the candidates from the products in .

Let u and v be variables with values from {0, 1}. Define an operator that compares the two Boolean variables u and v and returns their value if they are equal; otherwise it returns a “−”. The function *Common* accepts two products and applies this operator in a bitwise manner to each of the variables. The suggested procedure for constructing the list of candidates is presented in Algorithm 1.
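The Common function can be sketched as follows, under the assumption that products are encoded as strings over {'0', '1', '-'} (an encoding of our own choosing):

```python
# Sketch of the Common function: compare two products variable by variable;
# equal values are kept, differing ones become a don't care '-'.

def common(p1, p2):
    """The returned product covers both p1 and p2."""
    return ''.join(a if a == b else '-' for a, b in zip(p1, p2))

print(common('1101', '1001'))  # -> '1-01'
print(common('10-1', '1011'))  # -> '10-1'
```

Applying `common` to every pair of products yields candidate prefixes that cover at least two products each.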

##### 6.4. Choosing the Basic Prefix

The basic prefix is the foundation of a block. It is chosen to simplify the representation of the block header. For this, the basic prefix has to attract the secondary prefixes “close” to it and repel those “far” from it.

There are three main concerns to consider here: the input variables, the output functions, and the length of the prefix. In addition, since the secondary prefixes will be chosen from the undecided set, it is imperative to measure the orthogonality of the candidates. The four criteria are as follows.

The first criterion fulfills the input requirement: it counts the variables common to the tail and the remainder corresponding to the prefix. This ratio must be reduced as much as possible to separate the block (with its tails) from the remainder. The criterion takes values in the interval [0, 1], where 1 corresponds to the case in which all the remainder variables are present in the tail, and 0 to the opposite case.

The second criterion responds to the output requirement: it counts the outputs common to the tail and the remainder corresponding to the prefix. The rationale here is the same as for the input requirement.

The third criterion, called *Prefix Significance*, measures the percentage of literals in the products of the prefix family:

The reason for this is simple: the longer the basic prefix, the longer the list of candidates for the secondary prefixes.

The last criterion, called *Orthogonality*, responds to an additional requirement. It counts the number of literals in the products orthogonal to the prefix relative to the number of literals in all the candidates:

The weighted grade of a candidate prefix is defined as a weighted sum of the four criteria. When choosing the basic prefix, the candidate with the highest grade is taken. Note that the coefficients of the criteria should be chosen so as to reflect the relative significance/contribution of each criterion to the quality of the overall solution. In this conceptual paper we assumed that all the criteria are equally significant; that is, the experimental results described in Section 7 were produced with equal weights. Therefore, the results are suboptimal and can be further improved. This, however, is left for future study.

##### 6.5. Construction of the Block Header by Choosing the Secondary Prefixes

In the following equations, the two superscript indices stand for “the current situation” and “after adding the target prefix,” respectively.

The first criterion, called *Additional Inputs*, counts the number of variables common to the tail and to the remainder of the target prefix, but only those not yet present in the block,

The second criterion, called *Additional Outputs*, counts the number of output functions common to the tail and to the remainder of the prefix:
Here, as in the previous criterion, only the newly added outputs are considered.

The third criterion, called *Overhead*, measures the literal overhead introduced to the block and removed from the remainder by selecting the target prefix,
This equation can be rewritten as follows:
Each of the two fractions is limited to the interval [0, 1], but the total value of the criterion lies in the interval [−1, 1].

The weighted grade of a candidate secondary prefix is defined analogously as a weighted sum of these criteria. The candidate with the highest grade is taken and added to the set of prefixes that form the block header.

The complexity of the algorithm can be estimated as follows. Denote by the number of products in the given *D*-polynomial. Unlike the fragmentation algorithm that deals with the partitions, the main decomposition algorithm considers only the values of the MOF. Since the main decomposition algorithm separates the products by using a binary tree, its complexity is of order . The complexity of the fragmentation algorithm is of order , since its main task is the generation of the set of secondary prefixes.

#### 7. Experimental Results

The efficiency of the proposed approach was tested experimentally by applying the above decomposition algorithm to a number of benchmark functions. The effectiveness of the method was evaluated by comparing the compactness of the monolithic MTBDD corresponding to the given MOF with the compactness of the proposed decomposed network. In the experiments, PLA-like representations of the standard combinational-circuit benchmarks (LGSYNTH93) were used.

The experiments demonstrate that the proposed decomposition, when successful, greatly reduces the size of the decision diagram as compared to the monolithic solution.

To analyze the experimental results, we defined a specific parameter of a block called its *block density*: the number of literals in the block’s products normalized by the maximal possible number of literals in the block. The success of the decomposition strongly depends on this density. Consequently, the effectiveness of the decomposition can be predicted quite reliably by a preliminary study of the given MOF.
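The density measure described above can be sketched as follows, assuming a cube-list representation in which each product stores only the variables it actually depends on; this representation, and the function name, are our assumptions for illustration.

```python
# Illustrative sketch of block density: literals actually used in the
# block's products, normalized by the maximum possible literal count
# (every product containing a literal of every input variable).

def block_density(products, num_inputs):
    """products: list of cubes, each a dict mapping variable -> 0/1
    (don't-care variables are simply absent from the cube)."""
    literals = sum(len(cube) for cube in products)
    return literals / (len(products) * num_inputs)

# Two products over 4 inputs using 2 and 1 literals: density 3/8.
cubes = [{'x1': 1, 'x2': 0}, {'x3': 1}]
print(block_density(cubes, 4))  # 0.375
```

A low value signals a sparse MOF, which (per the results above) is the favorable case for the decomposition.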

The experimental results are shown in Tables 2 and 3. Table 2 lists the benchmarks for which the decomposition network was simpler than the monolithic MTBDD of the MOF; Table 3 shows the opposite cases. The columns of the tables list the number of inputs, the benchmark’s density, the number of nodes in the monolithic MTBDD, the number of nodes in the decomposition network, and the ratio between the two node counts. Both tables are arranged by ascending density.

The results show that density is a consistent indicator of the success of the decomposition. The successful cases are mostly in the low-density area (density up to 45%), and the unsuccessful ones are mostly in the high-density area (density of at least 60%). The middle-density functions (40–60%) are divided more or less evenly between successes and failures. Moreover, there are several examples where high-density functions were successfully decomposed, and no examples where the method failed on low-density functions.

The proposed decomposition relies upon extracting dense blocks from the given MOF and treating the sparse remainders and tails separately. Therefore, a sparse MOF can easily be dealt with by splitting it into a network of component MTBDDs. With dense MOFs, choosing suitable blocks is difficult, and arbitrary choices lead to an ineffective resulting network.

#### 8. Conclusions

Despite extensive research on the fundamentals of logic design, some of its topics have yet to be examined. One of these topics is the representation of systems of Boolean functions (multi-output functions) by decision diagrams. Specifically, the conceptual transition from the Boolean function domain to multi-output functions is considered hard. Although the If-Then-Else (ITE) operator on the Boolean domain makes it possible to construct the decision diagram of a logic function in a very clear way, an analogous procedure for multi-output functions was unknown. This work fills the gap. The main results can be summarized as follows.

(i) A GITE operator was introduced; it generalizes the ITE operator of the Boolean domain.

(ii) Based on the GITE operator, an algebra of GITE formulas was developed and studied.

(iii) The concept of *D*-polynomials as a compact analytical representation of GITE formulas was presented. The problem of the compact representation of multi-output functions was then formulated as a problem of decomposition of *D*-polynomials.

(iv) Finally, a solution to this problem, based on the GITE algebra and its properties, was introduced.

Experimental results obtained on a number of benchmarks are promising. We believe that the present work will initiate future research on the GITE algebra and its possible applications in logic design.
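For intuition, the classical Boolean ITE operator, and the word-level selection that GITE generalizes it to, can be sketched as follows. The `gite` function here is only an illustration of the idea of terminals being words (tuples of outputs), not the paper's formal definition of the operator.

```python
# The classical Boolean if-then-else operator: ite(f, g, h) = f·g + f'·h.
def ite(f, g, h):
    """Boolean if-then-else: returns g when f holds, h otherwise."""
    return (f and g) or ((not f) and h)

# An illustrative word-level analogue: the branches carry whole
# multi-output words (tuples) instead of single bits.
def gite(f, g_word, h_word):
    """Select an entire output word depending on the condition bit."""
    return g_word if f else h_word

print(ite(True, True, False))             # True
print(gite(False, (1, 0, 1), (0, 1, 0)))  # (0, 1, 0)
```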

#### Appendix

#### Proof of Theorem 20

Theorem 20 states that any *D*-polynomial can be represented as the composition of all its *D*-binomials. To prove the theorem, we show that the composition of the *D*-binomials is equal to the *D*-polynomial with the same coefficients.

When composing two *D*-binomials, one of the following cases can occur.

(1) The products are pairwise orthogonal. In this case, the orthogonality of the products and the completeness condition imply that each terminal is reproduced unchanged, and the composition is equal to the *D*-polynomial with the same coefficients.

(2) The products are not orthogonal. Note that nonorthogonal products are associated with one and the same terminal. After the composition, the overlapping products merge into a single product associated with that terminal, and the result again equals the *D*-polynomial with the same coefficients.

#### Acknowledgment

This paper was partially supported by the Israel Science Foundation (Grant no. 1200/12).

#### References

- R. L. Ashenhurst, “The decomposition of switching functions,” in *Proceedings of an International Symposium on the Theory of Switching*, pp. 74–116, April 1957.
- G. De Micheli, *Synthesis and Optimization of Digital Circuits*, McGraw-Hill Higher Education, 1994.
- J. P. Hayes, *Introduction to Digital Logic Design*, Addison-Wesley Longman Publishing, Boston, Mass, USA, 1993.
- M. G. Karpovsky, *Finite Orthogonal Series in the Design of Digital Devices*, John Wiley & Sons, New York, NY, USA, 1976.
- Z. Kohavi, *Switching and Finite Automata Theory*, McGraw-Hill, 1970.
- E. J. McCluskey, *Logic Design Principles*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1986.
- R. E. Miller, *Switching Theory*, John Wiley & Sons, New York, NY, USA, 1965.
- R. Brayton, “The future of logic synthesis and verification,” in *Logic Synthesis and Verification*, pp. 403–434, Kluwer Academic Publishers, Norwell, Mass, USA, 2002.
- M. A. Perkowski, “A survey of literature on function decomposition,” Technical Report, GSRP Wright Laboratories, 1995.
- R. I. Bahar, E. A. Frohm, C. M. Gaona et al., “Algebraic decision diagrams and their applications,” in *Proceedings of the IEEE/ACM International Conference on Computer-Aided Design*, pp. 188–191, November 1993.
- C. M. Files and M. A. Perkowski, “New multivalued functional decomposition algorithms based on MDDs,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 19, no. 9, pp. 1081–1086, 2000.
- Y. Iguchi, T. Sasao, and M. Matsuura, “Evaluation of multiple-output logic functions using decision diagrams,” in *Proceedings of the Asia and South Pacific Design Automation Conference*, 2003.
- T. Sasao, Y. Iguchi, and M. Matsuura, “Comparison of decision diagrams for multiple-output functions,” in *Proceedings of the International Workshop on Logic and Synthesis*, 2002.
- S. N. Yanushkevich, D. M. Miller, V. P. Shmerko, and R. S. Stankovic, *Decision Diagram Techniques for Micro- and Nanoelectronic Design Handbook*, CRC Press, 2005.
- T. Sasao, *Memory-Based Logic Synthesis*, Springer, 2011.
- T. Sasao and M. Matsuura, “A method to decompose multiple-output logic functions,” in *Proceedings of the 41st Design Automation Conference*, pp. 428–433, San Diego, Calif, USA, June 2004.
- R. E. Bryant, “Symbolic Boolean manipulation with ordered binary decision diagrams,” *ACM Computing Surveys*, vol. 24, no. 3, pp. 293–318, 1992.
- P. G. Hinman, *Fundamentals of Mathematical Logic*, A K Peters, 2005.
- G. D. Hachtel and F. Somenzi, *Logic Synthesis and Verification Algorithms*, Kluwer Academic Publishers, 2005.
- S. Hassoun and T. Sasao, Eds., *Logic Synthesis and Verification*, vol. 654 of *The Springer International Series in Engineering and Computer Science*, 2002.
- S. Nagayama and T. Sasao, “Compact representations of logic functions using heterogeneous MDDs,” in *Proceedings of the 33rd International Symposium on Multiple-Valued Logic*, pp. 247–252, May 2003.
- T. Sasao, *Switching Theory for Logic Synthesis*, Kluwer Academic Publishers, 1999.
- T. Sasao and M. Fujita, *Representations of Discrete Functions*, Kluwer Academic Publishers, 1996.
- C. Baier and E. Clarke, “The algebraic Mu-calculus and MTBDDs,” in *Proceedings of the 5th Workshop on Logic, Language, Information and Computation (WoLLIC '98)*, pp. 27–38, 1998.
- B. Chen and C. L. Lee, “Complement-based fast algorithm to generate universal test sets for multi-output functions,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 13, no. 3, pp. 370–377, 1994.
- R. Drechsler, J. Shi, and G. Fey, “Synthesis of fully testable circuits from BDDs,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 23, no. 3, pp. 440–443, 2004.
- G. Fey and R. Drechsler, “Minimizing the number of paths in BDDs: theory and algorithm,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 25, no. 1, pp. 4–11, 2006.
- W. N. N. Hung, X. Song, G. Yang, J. Yang, and M. Perkowski, “Optimal synthesis of multiple output Boolean functions using a set of quantum gates by symbolic reachability analysis,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 25, no. 9, pp. 1652–1663, 2006.
- M. G. Karpovsky, R. S. Stankovic, and J. T. Astola, “Reduction of sizes of decision diagrams by autocorrelation functions,” *IEEE Transactions on Computers*, vol. 52, no. 5, pp. 592–606, 2003.
- O. Keren, “Reduction of the average path length in binary decision diagrams by spectral methods,” *IEEE Transactions on Computers*, vol. 57, no. 4, pp. 520–531, 2008.
- O. Keren, I. Levin, and R. S. Stankovic, “Minimization of the number of paths in binary decision diagrams by using autocorrelation coefficients,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 30, no. 1, pp. 31–44, 2011.
- C. Meinel, F. Somenzi, and T. Theobald, “Linear sifting of decision diagrams and its application in synthesis,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 19, no. 5, pp. 521–533, 2000.
- C. Yang and M. Ciesielski, “BDS: a BDD-based logic optimization system,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 21, no. 7, pp. 866–876, 2002.
- S. Baranov, *Logic and System Design of Digital Systems*, TUT Press, 2008.
- I. Levin, O. Keren, V. Ostrovsky, and G. Kolotov, “Concurrent decomposition of multi-terminal BDDs,” in *Proceedings of the 7th International Workshop on Boolean Problems*, pp. 165–169, Freiberg, Germany, September 2006.
- I. Levin and O. Keren, “Split multi-terminal binary decision diagrams,” in *Proceedings of the 8th International Workshop on Boolean Problems*, pp. 161–167, 2008.
- I. Levin and O. Keren, “Generalized if-then-else operator for compact polynomial representation of multi output functions,” in *Proceedings of the 14th Euromicro Conference on Digital System Design (DSD '11)*, pp. 15–20, 2011.