VLSI Design, Volume 2011, Article ID 356137, 9 pages. http://dx.doi.org/10.1155/2011/356137
Research Article

New Considerations for Spectral Classification of Boolean Switching Functions

1Department of Mathematics & Computer Science, University of Lethbridge, 4401 University Drive West, Lethbridge, AB, Canada T1K 3M4
2Department of Computer Science, University of Victoria, P.O. Box 3055 STN CSC, Victoria, BC, Canada V8W 3P6

Received 5 May 2010; Revised 21 October 2010; Accepted 11 January 2011

Copyright © 2011 J. E. Rice et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents some new considerations for spectral techniques for classification of Boolean functions. These considerations incorporate discussions of the feasibility of extending this classification technique beyond n = 5. A new implementation is presented along with a basic analysis of the complexity of the problem. We also note a correction to results in this area that were reported in previous work.

1. Introduction

In building or designing a circuit or Boolean switching function, one approach is to begin with a known implementation and add logic to achieve the desired functionality. However, as n, the number of inputs to the switching function, increases, it becomes impossible to individually examine each switching function to determine whether such a technique will lead to a reasonable implementation, both for a given starting function and for the desired function. Thus it is desirable to determine some method of grouping or classifying these functions, such that functions with similar characteristics fall into a common class. One such technique involves applying a transform to the outputs of the switching function, thus generating a series of coefficients which are then used to group the functions. This was first suggested by Edwards [1]. Since then, other researchers have expanded upon this in works such as [2, 3]. The limiting factor of such an approach has been the computational complexity of applying the transform.

For many spectral classification approaches, the transforms generally used are the Walsh or Rademacher-Walsh transforms. These transforms are desirable for a number of reasons, but techniques using these transforms are quite computationally intensive. They are so computationally intensive that it has not been possible to compute the complete spectral classes for functions with more than 5 variables, as published in [3, 4].

In this paper, we revisit the computation of the spectral classes. Since 30 years have passed since this problem was last examined, we theorized that advances in technology could allow further classes to be computed. We found this to be untrue and present some basic analysis of the problem to support this. In our attempt to press beyond the n = 5 limitation, however, we developed a new approach to spectral classification, and during testing we found that our results differed from those presented in [3]. An additional contribution of our work is a correction to the results from [3], and an argument as to why our new results are correct.

The main idea of our approach involves applying the operations used in classification directly to the functions themselves, using bit operations whenever possible. Rules reflecting the changes for each operation are also predetermined where possible, and stored as templates in order to speed up overall processing. Unlike other approaches, where the spectral domain is used for processing, we chose to remain in the functional domain and make use of a truth table representation for each function, thus allowing us to use a very simple bit-vector as the underlying data structure. A variety of optimization techniques are used to speed up the processing; however, we avoid assumptions made by previous researchers that resulted in faulty classifications when extended to larger values of n.

2. Background

We first provide some background in the areas of classification and, in particular, spectral classification.

2.1. Classification

Given that 2^(2^n) unique Boolean switching functions can be realized for n input variables, it is clear that having some technique for grouping functions together according to some similarities would be useful. There are many techniques for classification, but in general a classification of a set of functions into classes based on a set of transformations T is such that

Two functions f and g are in the same class if and only if g can be obtained from f by the application of some appropriate set of transformations from T. No set of transformations from T applied to a function in one class can lead to a function in a different class.

2.2. Spectral Classification

One of the more common classification techniques divides functions into the NPN classes [4]. In this technique the transformations consist of negation of the input variable(s) (N), permutation of the input variable(s) (P), and negation of the output (N).

The negation of an input variable effectively replaces each instance of a value with its inverse for that particular variable. An example is shown in Table 1. As this example shows, the effect of negating an input variable is that the rows of the truth table for the switching function are permuted.

Table 1: The effect of negation of an input variable on the truth table of a Boolean switching function. Note that the output vector of the function is generally referred to as Y, with y_i being individual elements of Y.

Similarly, the permutation operation also affects the rows of a function's truth table. Negation of the output has the effect of inverting (negating) all the output values, and so the rows of the table are not affected.
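As an illustration of these row operations, the following sketch (our own, not the authors' implementation) applies the three NPN operations directly to a truth-table output vector. The convention that input x_j corresponds to bit j of the row index, with x_0 as the least significant bit, is an assumption made here for concreteness.

```python
# A sketch of the three NPN operations applied directly to a truth-table
# output vector. We assume input x_j corresponds to bit j of the row
# index, x_0 being the least significant bit.

def negate_input(tt, j):
    """Negating input x_j permutes the rows: row m swaps with the row
    whose j-th index bit is flipped."""
    return [tt[m ^ (1 << j)] for m in range(len(tt))]

def permute_inputs(tt, j, k):
    """Exchanging inputs x_j and x_k swaps the corresponding index bits."""
    def swap_bits(m):
        bj, bk = (m >> j) & 1, (m >> k) & 1
        m &= ~((1 << j) | (1 << k))
        return m | (bj << k) | (bk << j)
    return [tt[swap_bits(m)] for m in range(len(tt))]

def negate_output(tt):
    """Negating the output inverts every entry; no rows move."""
    return [1 - b for b in tt]

# f = x_0 AND x_1 for n = 3 (true on minterms 3 and 7)
f = [1 if (m & 1) and ((m >> 1) & 1) else 0 for m in range(8)]
print(negate_input(f, 0))   # -> [0, 0, 1, 0, 0, 0, 1, 0]
```

Note that the first two operations only reorder the output vector, which is why a bit-vector representation supports them so cheaply.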

The spectral classification technique is based on examination of a function's spectral coefficients. Computation of these coefficients is discussed in Section 2.3. There are five operations which may be applied to a function that do not change the values of a function's spectral coefficients, only the order in which they appear. These five operations are [3] (1) permutation of the input variables, (2) negation of the input variables, (3) negation of the output, (4) replacing some variable x_i with x_i ⊕ x_j, where j ≠ i, and (5) replacing the output f with f ⊕ x_i for some variable x_i.

As one can see, the first three operations are the same as given for the NPN classification. These five operations are sometimes referred to as invariance operations. Throughout this work we refer to each of these operations according to their numeric ordering; for example, a Type 1 invariance operation would refer to some permutation of the function's input variables.

2.3. Computing the Spectral Classes

The spectral classes are formed by determining all functions that have the same set of absolute values of spectral coefficients. Functions with the same spectral coefficients, disregarding ordering and sign, fall into the same spectral class. It is generally necessary to compute the spectral coefficients for a function to determine its spectral classification.

Computation of the spectral coefficients can be carried out by applying the appropriate spectral transform to the output vector of the Boolean switching function in question. In this explanation we describe the use of the Hadamard transformation matrix to compute the spectral coefficients. It should be noted that this results in the same overall set of coefficients as do the Walsh and Rademacher-Walsh transforms; the primary difference is in the resulting ordering of the values. The Hadamard transformation matrix was chosen simply because of its recursive nature, making it easy to describe. Since the remainder of this paper focuses on computing the spectral classes in the functional domain and not in the spectral domain this explanation is included merely for the sake of completeness and understanding of the work.

The Hadamard transformation matrix T^n is recursively defined as

T^n = [ T^(n-1)   T^(n-1) ; T^(n-1)   -T^(n-1) ],   with T^1 = [ 1   1 ; 1   -1 ].

The spectral coefficient vector is then computed by multiplying T^n by the output vector of the switching function in question. Generally, {+1, -1} encoding is used; that is, 0 values are replaced with +1 and 1 values are replaced with -1. The output vector is referred to as Y if {0, 1} encoding is used, and as Ŷ if {+1, -1} encoding is used. Discussions of this replacement can be found in [3, 5].

An example of applying the Hadamard transformation matrix to the encoded output vector of an example function is shown in Figure 1. A straightforward application of the matrix multiplication would result in the summation of individual product terms; however, there are fast transform procedures that allow interim values to be reused, thus saving some computational effort. Details of such procedures are given in [3].
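The fast-transform idea can be sketched as follows. This is a standard in-place Walsh–Hadamard butterfly written as an illustration, not the authors' code; it produces the coefficients in the Hadamard ordering described above.

```python
# A minimal sketch of computing the spectral coefficients with the
# in-place fast transform rather than a full matrix product. The
# encoding step replaces 0 with +1 and 1 with -1, as in the text.

def spectrum(tt):
    """Return the Hadamard spectrum of a {0, 1} output vector."""
    y = [1 if b == 0 else -1 for b in tt]        # {0,1} -> {+1,-1}
    h = 1
    while h < len(y):                            # butterfly stages
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y

# For n = 2, f = x_0 AND x_1 has output vector [0, 0, 0, 1].
print(spectrum([0, 0, 0, 1]))   # -> [2, 2, 2, -2]
```

The fast transform performs n · 2^n additions and subtractions instead of the 2^(2n) multiply-accumulate operations of the direct matrix product.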

Figure 1: An example of computing the spectral coefficients for an example function.
2.4. Relationship between Spectral Classes and the Spectral Coefficients

Before discussing the impact of the five invariance operations, it is useful to define some labeling to clarify our discussion. If we describe the spectral coefficients as a vector of values consisting of individual coefficients s_i, then it is common to label each individual coefficient according to its meaning. For instance, a coefficient with a single subscript, for example, s_1, indicates the correlation between the function f and the input variable x_1. A coefficient with multiple subscripts, for instance s_12, indicates the correlation between f and the function x_1 ⊕ x_2. The remaining coefficient, s_0, indicates the similarity of f to the constant function. An example illustrating this labeling is shown in Figure 1. Given this labeling we can then describe the effect of the five invariance operations on the spectral coefficients of a function as follows.

(1) Permutation of any input variables x_i and x_j results in the exchange of s_i with s_j, s_ik with s_jk, s_ikl with s_jkl, and so on. Coefficients s_0, s_k, s_ij, and others in this pattern remain unchanged.

(2) Negation of any input variable x_i results in the negation of the related coefficients: s_i, s_ij, s_ijk, and so forth. Other coefficients s_0, s_j, s_jk, and so on remain unchanged.

(3) Negation of the output results in the negation of all spectral coefficients.

(4) Replacement of any variable x_i with x_i ⊕ x_j results in the exchange of s_i with s_ij, s_ik with s_ijk, and so on. Coefficients whose subscripts do not include i remain unchanged.

(5) Replacement of the output f with f ⊕ x_i results in the exchange of s_0 with s_i, s_j with s_ij, s_jk with s_ijk, and so on. All coefficients are affected by this change [3].

For example, given a function f, the function g_1 generated by negating one of the input variables of f and the function g_2 generated by permuting two of the input variables of f both belong to the same spectral class as f. The coefficients of g_1 differ from those of f only in the signs of the coefficients related to the negated variable, while the coefficients of g_2 differ from those of f only in the interchange of the coefficient pairs related to the permuted variables.

Thus the spectral classes can be computed by determining all functions with the same sets of coefficient values, or alternatively by applying the 5 invariance operations to a function in order to generate all other functions in the same class.
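This invariance can be confirmed numerically on small examples. The sketch below (helper names and example functions are ours) checks that negating an input or negating the output leaves the multiset of absolute coefficient values of two arbitrary three-variable functions unchanged.

```python
# Sketch verifying the invariance property on two arbitrary n = 3
# functions: negating an input or negating the output changes only the
# order and signs of the coefficients, never their absolute values.

def spectrum(tt):
    y = [1 - 2 * b for b in tt]                 # {0,1} -> {+1,-1}
    h = 1
    while h < len(y):                           # fast Walsh-Hadamard
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                y[j], y[j + h] = y[j] + y[j + h], y[j] - y[j + h]
        h *= 2
    return y

def abs_multiset(tt):
    return sorted(abs(c) for c in spectrum(tt))

f = [0, 1, 1, 0, 1, 0, 0, 1]                    # parity of x_0, x_1, x_2
g = [0, 1, 1, 1, 0, 0, 0, 1]                    # an arbitrary function

for tt in (f, g):
    neg = [tt[m ^ 1] for m in range(8)]         # negate input x_0
    out = [1 - b for b in tt]                   # negate the output
    assert abs_multiset(tt) == abs_multiset(neg) == abs_multiset(out)
```

Such checks of course only demonstrate the property for particular functions; the general statement follows from the coefficient exchanges and negations listed above.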

3. Related Work

There has been a variety of approaches suggested for computation of the spectral coefficients. For instance, Thornton and Drechsler propose the computation of spectral information from logic netlists in [6], and also suggest techniques based on AND/OR graphs and output probabilities [7–9]. Clarke et al. [10] were probably among the earliest researchers to suggest computation based on decision diagrams, and other researchers such as Jankovic et al. have followed up on this [11–15]. Techniques based on programmable hardware have also been introduced [16], as have parallel techniques [17].

One of the reasons for this renewed interest in spectral coefficients is that researchers such as Hansen and Sekine [18] have suggested new synthesis techniques based on the spectral coefficients of a function. Other work in this area includes techniques for Boolean matching [19], for reversible logic synthesis [20, 21], and for multiple-valued applications [22]. The reader is directed to [23] for further details on advancements in spectral techniques in this field as well as various works by Falkowski and Yan including [24, 25].

While there are many questions of interest in the area of classification, research here has not been as common. Classification techniques based on Reed-Muller forms have been suggested by Tsai and Marek-Sadowska in [26], and a technique based on the matrix representation of a function was introduced by Lapshin in [27]. Related work on classification has also been discussed by Rice and Muzio in [28]. Strazdins [29] discusses the matter of Boolean function classification from a more mathematical approach, although it is of interest to note that this work also is following up on similar work after an approximately 30-year gap.

Our research has focused primarily on spectral classification as introduced by Edwards. In [1] Edwards applied five spectral invariance operations to classify all possible Boolean switching functions for n ≤ 5. He did so using a compressed version of the spectral coefficients referred to as a function's signature. This signature consists of s_0, all coefficients with a single subscript, and a list of the absolute values of all coefficients with the number of occurrences of each value. For example, the spectral coefficients for an example function are shown in Figure 1; the corresponding signature lists the values of s_0 and the single-subscript coefficients, and groups the remaining coefficients into a summary showing that there are four coefficients with the value 0 and four coefficients with the (absolute) value 4; note that the values for s_0 and the single-subscript coefficients are also included in this summary. As one can see, the signature compresses the entire coefficient information by removing the information relating to where in the coefficient pattern a particular value appears.

It was thought that this signature was sufficient to uniquely define each class, and Edwards stated that 47 spectral classes were required to completely classify all functions. However, further investigation in [4] discovered that one of the 47 classes published in [1] did not meet the definition of the classification. There existed at least one function within a particular class for which there was no way to transform it to any other function in that class. This offending class was therefore split into two separate classes, each with the same signature. This discovery proved that the spectral signature as defined by Edwards is not sufficient to uniquely classify all functions, and the number of classes for n = 5 thus increased from 47 to 48.

A complementary approach was taken in [3] using only the first four operation types. This approach produced 191 classes, which were mapped to the equivalent classes in [4] using the signature of each class. Clearly the classes determined using only four of the invariance operations have fewer criteria than the classes determined using all five operations, and so there are multiple classes defined in [3] for each class defined in [4].

The general process used in [1, 3, 4] was to transform a starting function into the spectral domain and attempt to construct all other functions in the same spectral class using the five spectral transformations. In order for this to be computed in a reasonable amount of time for n = 5, the problem was extensively pruned and optimized. Unfortunately, the details of these optimizations are unavailable.

4. Our Approach

In contrast with the previous work in [1, 3], the processing in our approach takes place entirely in the functional domain, with the exception of the direct comparison to the previous work. An advantage of performing classification in the spectral domain, or in other words, manipulating the spectral coefficients of a function, is that the spectral coefficients provide more global information about a function, making it possible to make observations about groups of functions and optimize accordingly. However, a big disadvantage of this approach is in the overhead of computing the spectral coefficients, and even fast techniques still have significant computational complexity.

To avoid the overhead associated with converting to the spectral domain, we chose to focus on the functional domain and apply the invariance operations directly to the function's truth table representation. Since most of the operations simply require reordering of a function's truth table, this can be easily reflected by a change in a bit vector storing the function's outputs. Furthermore, the effect of each operation is predictable, and so rules reflecting the changes for each operation can be predetermined and stored as templates. A further advantage of working in the functional domain is in the fact that all of the above operations can be performed very quickly as bit operations such as AND, OR, XOR, and SHIFT. Unfortunately, we do lose out on some opportunity to optimize over groups of functions, since we cannot get information about other functions from an individual function's truth table.

4.1. Optimizations

One optimization that can be made is to prefilter the functions. It is possible to group the functions based on the number of true bits in the output vector of the function's truth table. For example, the functions shown in Table 2 all have exactly four 1's, or four true minterms, in their output vectors. Because the number of true bits is never changed by any of operations 1, 2, or 4 (permutation, negation of an input, or replacement of an input with an exclusive-or operation), we know that functions with differing numbers of true minterms can never be in the same class. This prefilter process categorizes the functions according to the number of true minterms in the output vector, resulting in 2^n + 1 categories, of which two are trivial cases (the functions with zero true minterms and those with 2^n true minterms). The effect of the Type 3 invariance operation is to negate the output, and so a function with k true minterms will be transformed to a function with 2^n - k true minterms. It can be shown that these will both fall into the same category, and so the prefilter can further reduce the number of categories to 2^(n-1) + 1. Details of this proof are given in [30].
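A minimal sketch of such a prefilter is given below, assuming each function is encoded as a 2^n-bit integer; this is our own illustration, not the paper's implementation. Bucketing by min(k, 2^n - k) merges the k and 2^n - k categories as described.

```python
# Sketch of the prefilter: bucket all n-variable functions by the number
# of true minterms, merging the k and 2^n - k buckets since output
# negation (a Type 3 operation) maps one onto the other.

from collections import defaultdict

def prefilter(n):
    size = 1 << n                     # truth-table length 2^n
    buckets = defaultdict(list)
    for tt in range(1 << size):       # each function as a 2^n-bit integer
        k = bin(tt).count("1")        # number of true minterms
        buckets[min(k, size - k)].append(tt)
    return buckets

b = prefilter(2)                      # the 16 functions of n = 2 variables
print({k: len(v) for k, v in b.items()})   # -> {0: 2, 1: 8, 2: 6}
```

For n = 2 this yields 2^(n-1) + 1 = 3 buckets, and the class-generation step then only needs to compare functions within a bucket.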

Table 2: An example of three functions each having the same number of true minterms in their output vectors.
4.2. Rules

Our process of determining the classes then requires three steps: generate rules, prefilter, and then apply the rules. The rules describe how the output bits from the current function are remapped to create a new function within the same spectral class. For instance, if we consider the first invariance operation, permutation, then it is possible to permute the n input variables in n! possible ways, resulting in n! rules, including the identity rule that returns the original function. Table 3 illustrates these rules.

Table 3: The six rules for the Type 1 operation, permutation, when applied to functions with n = 3 variables.

The second invariance operation, negation of input variables, results in 2^n possible rules, as there are 2^n ways in which combinations of the n input variables can be negated.

The third invariance operation, negation of the output, is something of a special case, as no swapping of output bits takes place. Thus there is no real rule generated for this operation.
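The rule templates for the first two operation types can be sketched as row-remapping tables, where each rule is a tuple giving the source row for every output row; the representation here is our own assumption, not the authors' data structure.

```python
# Sketch of precomputing Type 1 and Type 2 rules as row-remapping
# templates. A rule is a tuple r where r[m] gives the source row for
# output row m, so applying a rule to a truth table tt is just
# [tt[r[m]] for m in range(len(tt))].

from itertools import permutations

def type1_rules(n):
    """One rule per permutation of the n input variables: n! rules."""
    rules = []
    for perm in permutations(range(n)):
        rule = []
        for m in range(1 << n):
            src = 0
            for j in range(n):              # move index bit perm[j] to j
                src |= ((m >> perm[j]) & 1) << j
            rule.append(src)
        rules.append(tuple(rule))
    return rules

def type2_rules(n):
    """One rule per subset of inputs to negate: 2^n rules."""
    return [tuple(m ^ mask for m in range(1 << n)) for mask in range(1 << n)]

print(len(type1_rules(3)), len(type2_rules(3)))   # -> 6 8
```

Because every rule is a fixed permutation of row indices, rules can be generated once per n and reused for every function processed.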

The fourth invariance operation, replacement of a variable with an exclusive-or operation involving that variable, is a more difficult proposition. Figure 2 illustrates how an input whose original value was a single variable can be replaced with an exclusive-or combination of that variable with another. Rules for this operation must list all possible legal replacements of the variables, keeping in mind that the replacement of a particular variable must incorporate the original variable. To generate a complete list, first all possible replacements are listed and placed in a lookup table such as is shown in Table 4. To generate a possible Type 4 operation rule, one must then choose one item from each row in Table 4, and this is repeated until all possible combinations have been realized. This will generate (2^(n-1))^n combinations. However, the problem is that some of these combinations are invalid. For example, let us take a function and replace each of its input variables with the exclusive-or of all of the input variables.

Table 4: Type 4 input variable lookup table for n = 3. This table shows all possible exclusive-or combinations of the three input variables.
Figure 2: An example illustrating the Type 4 invariance operation, with (a) showing the original function, and (b) showing input 1 replaced with an exclusive-or combination.

Then the resulting function depends only on the exclusive-or of all of the inputs.

It is clear from examining the truth tables of these two functions, as shown in Table 5, that they have different numbers of true minterms, and so they cannot be in the same class.

Table 5: The truth tables for the original function and for the function resulting from the invalid replacement.

To solve this problem, the replacements can be represented as a matrix in which each row represents a variable and each column indicates whether the variable is included in the replacement. Using this representation for the two functions from the above example results in the matrices shown in Figure 3. In this form, each valid combination must have true bits on the main diagonal, since each replacement must incorporate the original variable. To then separate the valid operations from the invalid combinations, a linear independence check is performed on the vectors of each input combination (the rows of the matrix for each combination of replacements). If a combination is found to be linearly independent, it is added to the list of valid Type 4 operations.
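The independence check can be sketched as follows for small n; `gf2_rank` and `type4_rules` are our own illustrative names, not the authors' code. Each candidate combination assigns every variable a replacement mask that must include the variable itself, and candidates whose mask rows are linearly dependent over GF(2) are discarded.

```python
# Sketch of generating the valid Type 4 rules for small n. Each
# candidate is a tuple of bitmasks, one per variable; mask i must have
# bit i set (the replacement includes the original variable), and the
# masks must be linearly independent over GF(2).

from itertools import product

def gf2_rank(rows):
    """Rank over GF(2) of a list of rows stored as bitmasks."""
    basis = {}                              # leading bit -> reduced row
    rank = 0
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in basis:           # new leading bit: extend basis
                basis[lead] = r
                rank += 1
                break
            r ^= basis[lead]                # reduce against the basis
    return rank

def type4_rules(n):
    """All replacement matrices with true bits on the diagonal whose
    rows are linearly independent over GF(2)."""
    options = [[m for m in range(1 << n) if (m >> i) & 1] for i in range(n)]
    return [combo for combo in product(*options) if gf2_rank(combo) == n]

print(len(type4_rules(2)))   # -> 3
```

For n = 2 there are (2^(n-1))^n = 4 candidate combinations, of which only the all-ones matrix fails the check, leaving 3 valid rules.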

Figure 3: Matrices representing (a) the starting function and (b) the function resulting from replacing each variable with the exclusive-or of all of the input variables.

The final invariance operation is the replacement of the output f with f ⊕ x_i. This operation was not included in this work, so as to generate a list more easily comparable to the work in [3].

4.3. Applying the Rules

Simply applying each of the rules for each rule type to some chosen starting function will not achieve the desired result of producing all possible functions in a spectral class. To create all possible functions all of the rules must be considered in combination, and the eventual result must combine all rules in all possible combinations. Figure 4 shows a tree illustrating the concept of combining the transformation rules. Each result of the Type 1 rules must be operated on by each of the Type 2 rules. For each result of the Type 2 rules, the Type 3 operations must be applied. Finally, for each Type 3 result, each Type 4 operation must be applied. The leaf nodes of the tree represent all possible functions that can be realized from the original starting function. The first leaf node, when considering a postorder traversal, represents the starting function: in other words the original Boolean switching function, unmodified. The code for implementation of the prefilter, rule-generation, and application of the rules is available in [30].
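The combination of the first three rule types can be sketched as a pair of nested loops over the precomputed rules, with output negation applied at each leaf; Type 4 rules would add one more level of nesting, omitted here for brevity. The code is our own illustration, not the authors' implementation.

```python
# Sketch of the rule-combination tree for Types 1-3: every permutation
# rule is composed with every input-negation rule, and each result is
# recorded both as-is and output-negated. The set of leaves is the
# portion of the class reachable without Type 4 rules.

from itertools import permutations

def np_class(tt, n):
    size = 1 << n
    found = set()
    for perm in permutations(range(n)):              # Type 1 rules
        for mask in range(size):                     # Type 2 rules
            rows = []
            for m in range(size):
                src = 0
                for j in range(n):
                    src |= ((m >> perm[j]) & 1) << j
                rows.append(src ^ mask)
            g = tuple(tt[r] for r in rows)
            found.add(g)                             # Type 3: unchanged
            found.add(tuple(1 - b for b in g))       # Type 3: negated
    return found

# The 2-variable AND function: its class under these operations contains
# all 8 functions with a single 0 or a single 1 in the output vector.
print(len(np_class((0, 0, 0, 1), 2)))   # -> 8
```

Note that every leaf is produced by one permutation rule, one negation mask, and one output choice, so a single pass over the tree enumerates all combinations of the three operation types.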

Figure 4: Tree illustrating the concept of combining all the transformation rules.

5. Results & Analysis

5.1. A Correction to Previous Results

In [3] it was stated that there are 191 spectral classes to represent all functions for n = 5. Our research, however, indicates that several classes may inadvertently have been combined and that there are in fact 206 spectral classes needed to represent all functions. Table 6 shows the number of classes for n = 1 through 5 as determined by our approach. All previously published results agree with these values up to n = 4. As classifying all functions is a problem that grows not just exponentially but double-exponentially as the value of n increases, this problem must be optimized in order to compute a solution, and a computational solution is not easily checked. Careful analysis of the optimizations in this implementation shows that the number of transformations required to classify all functions, while still very large, is significantly less than the number that must be performed when no optimization is added to the problem. It is thus necessary to present arguments demonstrating the accuracy of our results, since computational verification is nearly impossible.

Table 6: Table showing the number of spectral classes for n = 1 to 5.

Given this, we feel that the evidence of our correctness is substantial. When we examine previous computations of classes for n = 3 and n = 4, our results agree with previous work, providing one form of verification, at least for smaller values of n. Additionally, if we list the 191 classes found in previous work for n = 5, our research has resulted in the same 191 classes, although with an additional 15 new classes. These 15 new classes have spectral signatures matching classes contained within the other 191, so it is likely that in previous research the optimization techniques used in the implementation led to the inadvertent combination of multiple classes. Since our optimizations rely strictly upon patterns that can be proven for a general n, this is unlikely to be a factor in our results. In addition, a number of internal checks on our data and computations were performed, details of which are given in [30].

5.2. The Difficulties of n ≥ 5

This is a difficult problem, although solutions for small values of n such as 3 or 4 are relatively easy to determine. This is due to the nature of the numbers, as we must examine on the order of 2^(2^n) functions. For n = 3, we have 2^8 = 256 functions, and for n = 4 we have 2^16 = 65,536. However, for n = 5 we have 2^32, or over four billion functions, and we have used the term double-exponential for this growth. Because of this growth, we suspect that as n increases there may be different “behaviours” of the coefficients. For instance, the signature of the function was sufficient to classify functions for n ≤ 4, but not for n = 5, so it is likely that optimizations of this nature used in computing the classes for smaller values of n may not have been suitable for the n = 5 classes. There are a few factors that may indicate why this might be. It seems as though the coefficients may “behave” differently as n grows. As the value of n increases, there are more ways to combine the truth values of each function. It is possible that not all combinations of the four operation types are needed to compute all of the classes for smaller values of n. Indeed, it was previously thought that the signature of a function's coefficients was unique to a class, but as computation for higher values of n became feasible this was shown to be untrue. It is also possible that the spectral classes computed so far are in fact special cases and that it is not possible to observe any patterns without knowing the classes for larger n. Once spectral classes for larger values of n are calculated, it may appear that there are different patterns for even and odd values of n. Unfortunately, until the spectral class structure for higher values of n is computed, the answers to these questions will remain unknown.

5.3. Analysis of the Number of Rules

It is possible to calculate the number of rules required to transform a starting function into all other possible functions within that class. The number of rules for varying values of n is given in Table 7.

Table 7: The numbers of rules for varying values of n.

For Type 1 transformations, n! rules are needed to produce all possible permutations of the input variables, while Type 2 transformations require 2^n rules. Type 3 transformations do not have rules as defined in Section 4.2; rather, the outputs of the function are negated. Therefore, for all values of n, the number of transformations for Type 3 is simply the constant 2. Type 4 transformations are somewhat more complex, as there is no known general case to calculate the required number of rules; a generalized equation for the number of Type 4 rules is currently unknown and, as such, is an area of future research. In general, only an upper bound of (2^(n-1))^n can be specified, which includes invalid transformations; thus we know only that the removal of invalid transformations will reduce the total possible number of rules. In Table 7 we show this as the equation (2^(n-1))^n - k, where k, the number of invalid combinations, is yet to be determined.
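The per-type counts above can be tabulated in a few lines; this is an illustration of the formulas, not the authors' table-generation code, and the Type 4 column is reported only as its upper bound.

```python
# Sketch of the rule counts for each operation type, with Type 4 given
# only as the upper bound (2^(n-1))^n, from which invalid (linearly
# dependent) combinations still have to be subtracted.

from math import factorial

def rule_counts(n):
    return {
        "type1": factorial(n),              # permutations of the inputs
        "type2": 2 ** n,                    # subsets of inputs to negate
        "type3": 2,                         # output negated or not
        "type4_upper": (2 ** (n - 1)) ** n, # one of 2^(n-1) masks per input
    }

for n in (3, 4, 5):
    print(n, rule_counts(n))
```

For n = 3 this reproduces the counts discussed above: 6 permutation rules, 8 negation rules, the constant 2 for output negation, and a Type 4 upper bound of 64.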

Although the numbers of rules generated by this implementation for rules of Types 1 and 2 are confirmed, the Type 4 count cannot be compared if no general case can be provided. To confirm the number of Type 4 rules, an alternate method of generating the rules was created using Maple [31]. In this method all possible combinations of input variables for each value of n were checked for linear independence. Each set of input variables was checked using the determinant function that is built into Maple. If the determinant modulo 2 is not equal to 0, then the set is considered to be linearly independent, and therefore a valid combination of input variables. For each value of n considered, the number of linearly independent sets returned from Maple equaled the number of Type 4 rules generated by our implementation.

5.4. General Complexity

Spectral classification of Boolean functions is a very large problem, and the number of transformations that must take place can be described as F × R1 × R2 × R3 × R4, where F represents the total number of functions to be considered, R1 represents the number of Type 1 transformations, R2 the number of Type 2 transformations, R3 the number of Type 3 transformations, and R4 the upper bound on the number of Type 4 transformations. Because R4 includes possibly invalid transformations, this expression represents an upper bound. The implementation of this approach is highly dependent on the order in which R1 through R4 are combined. In this implementation, for each item saved in F, the work associated with parts R1, R2, R3, and R4 can be completely avoided. This is true for every term going from left to right in the expression. In other words, for each item saved in R1, work in R2, R3, and R4 is avoided, and so on.

Alternatively, we can consider this expression as a tree, where Figure 4 represents the terms R1, R2, R3, and R4. In this tree, the structure of Figure 4 is the child of each item in F. If there are |F| starting functions, it means that the tree of Figure 4 must be traversed |F| times, once for each item of F. The optimization approaches in this section are simply methods to prune this tree. The further up the tree these optimizations occur, the larger the overall benefit to the problem.

The analysis of the problem is first considered in its worst case. In the worst case, the total number of function transformations that must be performed is the number of functions multiplied by the number of rules of each type (using the Type 4 upper bound):

2^(2^n) × n! × 2^n × 2 × (2^(n-1))^n.

There are thus this many transformations that must be performed to spectrally classify all functions of n input variables.

The first reduction of this is fairly straightforward; simply, the number of starting functions can be reduced by half by observing that the second half of all functions can be achieved by a Type 3 operation applied to the first 2^(2^n - 1) functions. There are then 2^(2^n - 1) × n! × 2^n × 2 × (2^(n-1))^n transformations in the optimized general case for functions of n input variables.
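Under the assumption that the bound is the product of the function count and the per-type rule counts as reconstructed above, it can be evaluated directly; the function names here are ours.

```python
# Sketch evaluating the transformation-count bounds from this section:
# the brute-force bound, and the bound after halving the number of
# starting functions via the Type 3 observation.

from math import factorial

def worst_case(n):
    """2^(2^n) starting functions times an upper bound on the rules."""
    rules = factorial(n) * 2 ** n * 2 * (2 ** (n - 1)) ** n
    return 2 ** (2 ** n) * rules

def halved(n):
    """Bound after dropping the second half of the starting functions."""
    return worst_case(n) // 2

print(worst_case(3), halved(3))   # -> 1572864 786432
```

Even for n = 3 the unoptimized bound exceeds a million transformations, which makes clear why further pruning is essential for n = 5.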

Since all possible combinations of all four spectral transformations are applied to a starting function, all functions that exist within the same class as the starting function are also discovered. This observation further reduces the number of starting functions from 2^(2^n - 1) to the number of spectral classes, C. As a result of this second optimization, the number of transformations in the optimized general case is now C × n! × 2^n × 2 × (2^(n-1))^n. For n = 5 the total number of transformations can then be computed directly.

We can see that the optimizations and pruning as described above and in Section 4.1 have reduced the number of transformations considerably; the final number of transformations required to classify all functions for n = 5 is many orders of magnitude smaller than the number required in the brute-force case.

5.5. Future Considerations

It has already been shown that classification of all functions is an incredibly difficult problem, especially as n increases to values above 4. For n ≥ 6, it is impractical to use current methods of classification; therefore some other method is needed to calculate these classes. One approach is to use existing data from smaller values of n and extrapolate to higher values. Using prediction of this nature, it may be possible to derive all, or a large portion, of the classes for n = 6 variables. This could greatly decrease the amount of processing needed and could make classification for n = 6 feasible.

When considering the first 128 functions for , 4, and 5, many of the classes remain the same as the value of increases. This is an interesting observation that deserves further study.

We note that the program for this work was initially written in Java and then ported to C++ when we encountered speed and performance issues. The computation was carried out on a Macintosh G5 with dual 2.5 GHz processors and 6 GB of RAM. On this platform, processing of the classes required between 24 and 48 hours to complete. As platforms and language support improve, it may be possible to reduce this time further; unfortunately, as the numbers grow, even significant improvements will not provide enough speed-up to complete in a reasonable amount of time, so some type of derivation or prediction process is likely to be required.

6. Conclusion

This work began with the goal of continuing the work of [3] and other researchers by extending the computational results to higher values of . Instead, we found that even with faster technology and larger amounts of memory, the problem of determining the spectral classes for Boolean functions grows too quickly. Moreover, we found significant evidence that previous implementations for determining the classes for were, in fact, faulty and produced incorrect results.

This research took an entirely new approach to the computation of the spectral coefficients and found that an additional 15 classes should be added to the previously computed lists. In addition, our optimization techniques have been thoroughly documented and closely checked, giving us a great deal of confidence in them. We hope this will provide a basis for future researchers to build upon without having to reproduce our work. These optimizations include the design of a prefilter to reduce the number of functions on which our technique must operate. We have also taken the step of mathematically proving, for a general , any optimizations that were utilized.

It is our goal to eventually determine a prediction method for higher values of , so that rather than computing all spectral classes we could instead predict, based on existing data, which class a function might fall into. In addition, we conjecture that, once enough data is available, we may be able to extrapolate the structure and composition of classes of functions for higher values of based on data for lower values.

Acknowledgment

The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their support of this work.

References

1. C. R. Edwards, “The application of the Rademacher-Walsh transform to Boolean function classification and threshold logic synthesis,” IEEE Transactions on Computers, vol. 24, no. 1, pp. 48–71, 1975.
2. M. Karpovsky, Finite Orthogonal Series in the Design of Digital Devices, John Wiley & Sons, New York, NY, USA, 1976.
3. S. L. Hurst, D. M. Miller, and J. C. Muzio, Spectral Techniques in Digital Logic, Academic Press, Orlando, Fla, USA, 1985.
4. S. Hurst, The Logical Processing of Digital Signals, Crane Russak, 1978.
5. J. E. Rice, Autocorrelation coefficients in the representation and classification of switching functions, Ph.D. thesis, University of Victoria, 2003.
6. M. A. Thornton and R. Drechsler, “Computation of spectral information from logic netlists,” in Proceedings of the 30th IEEE International Symposium on Multiple-Valued Logic (ISMVL '00), pp. 53–58, May 2000.
7. M. A. Thornton and V. S. S. Nair, “Efficient spectral coefficient calculation using circuit output probabilities,” Digital Signal Processing, vol. 4, no. 4, pp. 245–254, 1994.
8. M. A. Thornton and V. S. S. Nair, “Efficient calculation of spectral coefficients and their applications,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 14, no. 11, pp. 1328–1341, 1995.
9. A. Žužek, R. Drechsler, and M. A. Thornton, “Boolean function representation and spectral characterization using AND/OR graphs,” Integration, the VLSI Journal, vol. 29, no. 2, pp. 101–116, 2000.
10. E. M. Clarke, K. L. McMillan, X. Zhao, M. Fujita, and J. Yang, “Spectral transforms for large Boolean functions with applications to technology mapping,” in Proceedings of the 30th ACM/IEEE Design Automation Conference, pp. 54–60, June 1993.
11. D. Jankovic, R. S. Stankovic, and R. Drechsler, “Decision diagram method for calculation of pruned Walsh transform,” IEEE Transactions on Computers, vol. 50, no. 2, pp. 147–157, 2001.
12. R. S. Stanković, M. Stanković, C. Moraga, and T. Sasao, “Calculation of Reed-Muller-Fourier coefficients of multiple-valued functions through multiple-place decision diagrams,” in Proceedings of the International Symposium on Multiple-Valued Logic (ISMVL '94), pp. 82–88, 1994.
13. S. Purwar, “An efficient method of computing generalized Reed-Muller expansions from binary decision diagram,” IEEE Transactions on Computers, vol. 40, no. 11, pp. 1298–1301, 1991.
14. M. A. Thornton and R. Drechsler, “Spectral decision diagrams using graph transformations,” in Proceedings of the Conference on Design, Automation, and Test in Europe (DATE '01), pp. 713–719, 2001.
15. M. A. Thornton, “Mixed-radix MVL function spectral and decision diagram representation,” Automation and Remote Control, vol. 65, no. 6, pp. 1007–1017, 2004.
16. B. J. Falkowski and T. Sasao, “Unified algorithm to generate Walsh functions in four different orderings and its programmable hardware implementations,” IEE Proceedings: Vision, Image and Signal Processing, vol. 152, no. 6, pp. 819–826, 2005.
17. B. J. Falkowski, “Parallelization of methods to calculate Walsh spectra for logic functions,” Journal of Multiple-Valued Logic and Soft Computing, vol. 10, no. 2, pp. 91–127, 2004.
18. J. P. Hansen and M. Sekine, “Synthesis by spectral translation using Boolean decision diagrams,” in Proceedings of the 33rd Annual Design Automation Conference, pp. 248–253, June 1996.
19. J. Moore, K. Fazel, M. A. Thornton, and D. M. Miller, “Boolean function matching using Walsh spectral decision diagrams,” in IEEE Dallas ICAS Workshop on Design, Applications, Integration and Software (DCAS '06), pp. 127–130, Richardson, Tex, USA, October 2006.
20. D. M. Miller, “Spectral and two-place decomposition techniques in reversible logic,” in Proceedings of the 45th Midwest Symposium on Circuits and Systems (MWSCAS '02), vol. 2, pp. 493–496, August 2002.
21. D. M. Miller and G. W. Dueck, “Spectral techniques for reversible logic synthesis,” in Proceedings of the 6th International Symposium on Representations and Methodology of Future Computing Technologies, 2002.
22. M. G. Karpovsky, R. S. Stanković, and C. Moraga, “Spectral techniques in binary and multiple-valued switching theory: a review of results in the decade 1991–2000,” Journal of Multiple-Valued Logic and Soft Computing, vol. 10, no. 3, pp. 261–286, 2004.
23. M. A. Thornton, R. Drechsler, and D. M. Miller, Spectral Techniques in VLSI CAD, Kluwer Academic Publishers, 2001.
24. B. J. Falkowski and S. Yan, “Properties of logic functions in spectral domain of sign Hadamard-Haar transform,” Journal of Multiple-Valued Logic and Soft Computing, vol. 11, no. 1-2, pp. 185–211, 2005.
25. B. J. Falkowski and S. Yan, “Ternary Walsh transform and its operations for completely and incompletely specified Boolean functions,” IEEE Transactions on Circuits and Systems I, vol. 54, no. 8, pp. 1750–1764, 2007.
26. C. C. Tsai and M. Marek-Sadowska, “Boolean functions classification via fixed polarity Reed-Muller forms,” IEEE Transactions on Computers, vol. 46, no. 2, pp. 173–186, 1997.
27. A. B. Lapshin, “Classification of Boolean functions by the invariants of their matrix representation,” Automation and Remote Control, vol. 67, no. 7, pp. 1100–1107, 2006.
28. J. E. Rice and J. C. Muzio, “Use of the autocorrelation function in the classification of switching functions,” in Proceedings of the Euromicro Symposium on Digital System Design: Architectures, Methods and Tools (DSD '02), pp. 244–251, 2002.
29. I. Strazdins, “Universal affine classification of Boolean functions,” Acta Applicandae Mathematicae, vol. 46, no. 2, pp. 147–167, 1997.
30. N. Anderson, The classification of Boolean functions using the Rademacher-Walsh transform, M.S. thesis, University of Victoria, 2007.
31. Maplesoft, “Maple 10,” 2007, http://www.maplesoft.com/products/maple/history/documentation.aspx.