Abstract

As the demand for Computer-Aided Testing Systems (CATS), encompassing Automatic Test Pattern Generation (ATPG), logic and fault simulation, and testability analysis, increases and the range of available systems grows, a need emerges to compare the merits of the different systems. Benchmark circuits are used to carry out these comparisons. In this paper, criteria for selecting benchmark circuits are discussed. These criteria are based in part on the results of experiments carried out to characterize CATS, with particular focus on Automatic Test Pattern Generators. The preliminary results show that there is no general agreement on how 1) fault collapsing is performed and 2) fault coverage is calculated. In addition, the performance of an ATPG depends on the circuit representation, topology, and size, as well as on the algorithm. To compare the performance of ATPGs as the circuit under test increases in complexity, it is important to use regular structures built by replicating medium-size circuits. Practical considerations involved in benchmarking are also examined, with emphasis on the transfer of circuits between different CATS and the use of EDIF as a neutral exchange language.
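As a minimal illustration of the second point, the sketch below (Python, with hypothetical fault counts that are assumptions for illustration only, not data from the paper) shows how the same test set can yield different fault-coverage figures depending on whether the denominator is the uncollapsed or the collapsed fault list, which is one reason reported coverage numbers are hard to compare across systems.

```python
# Illustrative sketch only: the fault counts and the collapsing ratio below are
# assumed for illustration; they are not results or definitions from this paper.

def fault_coverage(detected: int, total: int) -> float:
    """Fault coverage as a percentage of the fault list used as the denominator."""
    return 100.0 * detected / total

# Hypothetical circuit: 100 single stuck-at faults before collapsing,
# of which 60 remain after equivalence collapsing.
uncollapsed_faults = 100
collapsed_faults = 60
detected = 55  # faults detected by the generated test set

# The same test set produces different "fault coverage" figures depending on
# which fault list the tool reports against.
print(f"coverage vs. full fault list:      {fault_coverage(detected, uncollapsed_faults):.1f}%")
print(f"coverage vs. collapsed fault list: {fault_coverage(detected, collapsed_faults):.1f}%")
```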