Review Article

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Table 8

Summary of Piecewise Linear Approximations.

[44] Piecewise linear ("VHDL-C")
  Convergence: N/A
  Precision: MSE = 0.000492
  Computational costs: 13 clock cycles
  Notes: Resources used: flip-flop slices: 1277; 4-input LUTs: 3767; BRAMs: 4

[45] "Bajger-Omondi" method
  Convergence: N/A
  Precision: absolute error up to 10^-6 for 128 pieces with 18-bit precision
  Computational costs: N/A

[46] PWL approximation
  Convergence: N/A
  Precision: N/A
  Computational costs: propagation delay: 1.834 ns (100 ns more than LUT approach)
  Notes: Resources used: 4-input LUTs: 108 (79 fewer than LUT approach); slices: 58 (44 fewer than LUT approach); total gates: 1029 (329 fewer than LUT approach)

[47, 48] A-Law
  Convergence: N/A
  Precision: % error = 0.63; 84% accuracy (classification problem)
  Computational costs: propagation delay: 3.729 ns
  Notes: Resources used: slices: 185; LUTs: 101; total gates: 1653

[47, 49] Alippi
  Convergence: N/A
  Precision: % error = 1.11
  Computational costs: propagation delay: 3.441 ns
  Notes: Resources used: slices: 127; LUTs: 218; total gates: 1812

[47, 50, 51] PLAN
  Convergence: N/A
  Precision: % error = 0.63; 85% accuracy (classification problem)
  Computational costs: propagation delay: 4.265 ns
  Notes: Resources used: slices: 127; LUTs: 218; total gates: 1812
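To make the table concrete, the PLAN scheme [50] approximates the sigmoid with four linear segments whose slopes are powers of two, so each multiplication reduces to a bit shift in hardware. The sketch below is a software model of that approximation (segment breakpoints and coefficients follow the published PLAN scheme; the grid and error check are illustrative, not from the reviewed works):

```python
import math

def plan_sigmoid(x: float) -> float:
    """PLAN piecewise-linear sigmoid approximation.

    Slopes (0.25, 0.125, 0.03125) are powers of two, so in an FPGA
    the multiplies become right shifts; only adds and compares remain.
    """
    z = abs(x)
    if z >= 5.0:
        y = 1.0
    elif z >= 2.375:
        y = 0.03125 * z + 0.84375
    elif z >= 1.0:
        y = 0.125 * z + 0.625
    else:
        y = 0.25 * z + 0.5
    # Exploit sigmoid symmetry: sigma(-x) = 1 - sigma(x),
    # so only the positive half needs to be stored.
    return y if x >= 0.0 else 1.0 - y

# Worst-case deviation from the exact sigmoid on a sample grid
worst = max(
    abs(plan_sigmoid(x / 100) - 1.0 / (1.0 + math.exp(-x / 100)))
    for x in range(-800, 801)
)
print(f"max abs error: {worst:.4f}")
```

On this grid the maximum absolute deviation stays below 0.02, in line with the sub-1% average error the table reports for PLAN.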