Abstract

Built-in self-test (BIST) response data can be compacted with a linear-feedback shift register (LFSR). Prior work has indicated that, for a polynomial of degree k, the probability of aliasing converges to 2^{-k} as test length grows, and that primitive polynomials perform better than non-primitive polynomials. Nearly all analytical models and simulations have been based on the assumption that error occurrences are statistically independent. This paper presents the first statistical results, based on fault simulation, showing that this convergence property holds for actual digital logic circuits and randomly generated test vector sequences. It is also shown, however, that the average probability of aliasing is unsuitable as a design metric and that a 95% upper confidence limit (UCL) is more useful. The paper introduces a UCL on the loss of fault coverage due to test response compaction; this theoretical, or “ideal,” UCL is shown to closely match the empirically derived UCL obtained by fault simulation. As a result, a tight lower bound on fault coverage for LFSR-based BIST configurations is easily obtained: fault coverage can be measured without the LFSR, eliminating costly fault simulation of the full BIST structure. These results have been incorporated into the standard procedure for fault coverage measurement.
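
As a minimal illustration of the mechanism the abstract describes — LFSR signature compaction and the 2^{-k} asymptotic aliasing probability under the independent-error model — the following Python sketch estimates the aliasing probability by Monte Carlo simulation. All parameters (K, POLY, TEST_LEN, ERR_RATE, TRIALS) are illustrative assumptions, not values from the paper, and the closing confidence bound is a generic one-sided binomial UCL, not the paper's fault-coverage UCL construction.

```python
import math
import random

# Illustrative parameters (assumptions, not taken from the paper):
K = 8            # LFSR degree k; expected asymptotic aliasing probability 2**-K
POLY = 0x1D      # low bits of x^8 + x^4 + x^3 + x^2 + 1, a primitive polynomial
TEST_LEN = 256   # response bits per test sequence
ERR_RATE = 0.1   # per-bit error probability (independent-error model)
TRIALS = 20_000  # Monte Carlo trials


def lfsr_signature(bits, poly=POLY, k=K):
    """Serial signature analysis: divide the bit stream, viewed as a
    polynomial over GF(2), by the LFSR's characteristic polynomial."""
    mask = (1 << k) - 1
    sig = 0
    for b in bits:
        msb = (sig >> (k - 1)) & 1     # bit about to shift out
        sig = ((sig << 1) & mask) ^ b  # shift and absorb the next bit
        if msb:
            sig ^= poly                # modular reduction step
    return sig


def random_error_pattern(n, p, rng):
    """Nonzero error pattern under the independent-error model:
    each response bit is flipped with probability p."""
    while True:
        e = [1 if rng.random() < p else 0 for _ in range(n)]
        if any(e):
            return e


rng = random.Random(1)
aliased = sum(
    # By linearity of the LFSR, a faulty response aliases exactly when the
    # error pattern alone compacts to the all-zero signature.
    lfsr_signature(random_error_pattern(TEST_LEN, ERR_RATE, rng)) == 0
    for _ in range(TRIALS)
)

p_hat = aliased / TRIALS
# Generic one-sided 95% upper confidence limit for a binomial proportion
# (normal approximation); NOT the paper's fault-coverage UCL.
ucl = p_hat + 1.645 * math.sqrt(p_hat * (1 - p_hat) / TRIALS)
print(f"estimated aliasing probability: {p_hat:.5f}  (2^-k = {2**-K:.5f})")
print(f"95% UCL on aliasing probability: {ucl:.5f}")
```

With a primitive degree-8 polynomial and error patterns long and dense enough to reach the asymptotic regime, the estimate should land near 2^{-8} ≈ 0.0039; shrinking ERR_RATE or TEST_LEN illustrates how the convergence depends on large test length.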