Review Article

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Table 7. Summary of lookup-table (LUT) approximations.

| Ref. | Method | Convergence | Precision | Computational costs | Notes |
|------|--------|-------------|-----------|---------------------|-------|
| [39] | RA-LUT (tanh) | N/A | MSE = 0.0053 | 17.5 µs on 50 MHz FPGA | Resources used: 1815 LC comb., 4 LC reg. |
| [39] | RA-LUT (logsig) | N/A | MSE = 0.1598 | 17.5 µs on 50 MHz FPGA | Resources used: 1617 LC comb., 4 LC reg. |
| [40] | RA-LUT (tansig) + FPU | N/A | MSE = 0.0150 | 47 µs on 50 MHz FPGA | Resources used: 6538 LE |
| [41] | Error-optimized LUT | N/A | Max. error = 0.0378 | Propagation delay: 0.95 ns (2x faster than classic LUT approach) | Gate count: 70; area: 695.22 µm² (10x smaller than classic LUT approach) |
| [42] | Compact RA-LUT | N/A | Max. error = 0.0182 | Propagation delay: 2.46 ns | Gate count: 181; area: 780 µm² (4.5x smaller than classic LUT approach) |
| [43] | Hybrid | 11 epochs | % error = 1.88 (normalized to full-precision floating point) | Propagation delay: 0.8 ns; area: 309 µm² | Trained on-chip with the Levenberg-Marquardt algorithm |
| [43] | LUT | 16 epochs | % error = 1.34 (normalized to full-precision floating point) | Propagation delay: 2.2 ns; area: 19,592 µm² | Trained on-chip with the Levenberg-Marquardt algorithm |
| [43] | RA-LUT | 12 epochs | % error = 0.89 (normalized to full-precision floating point) | Propagation delay: 1.0 ns; area: 901 µm² | Trained on-chip with the Levenberg-Marquardt algorithm |
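To make the precision columns concrete, the following is a minimal software sketch of the range-addressable LUT idea behind the RA-LUT entries above: the input range of the activation function is split into uniform segments, one sample is stored per segment, and the input is used directly as a table address. The input range ([-4, 4)), segment count (256), and midpoint sampling are illustrative assumptions, not the parameters used in [39-43], which report hardware implementations.

```python
import numpy as np

# Minimal software model of a range-addressable LUT (RA-LUT) for tanh.
# ASSUMPTIONS (not taken from [39]-[43]): input clamped to [-4, 4),
# 256 uniform segments, one stored sample per segment (its midpoint value).
LO, HI = -4.0, 4.0
SEGMENTS = 256
STEP = (HI - LO) / SEGMENTS

# Precomputed table: tanh evaluated at each segment midpoint.
midpoints = LO + (np.arange(SEGMENTS) + 0.5) * STEP
table = np.tanh(midpoints)

def ra_lut_tanh(x):
    """Approximate tanh(x) by mapping x to a segment index (table address)."""
    x = np.clip(x, LO, HI - 1e-12)        # saturate out-of-range inputs
    idx = ((x - LO) / STEP).astype(int)   # input range -> table address
    return table[idx]

# Compare against the exact function, as the MSE / max-error columns do.
x = np.linspace(LO, HI, 10_001, endpoint=False)
err = ra_lut_tanh(x) - np.tanh(x)
print(f"MSE       = {np.mean(err**2):.2e}")
print(f"max error = {np.max(np.abs(err)):.2e}")
```

In hardware, the table would sit in ROM or registers and the address computation reduces to slicing bits of the fixed-point input, so the precision/area trade-off visible in the table is governed mainly by the segment count and word width; the error-optimized and compact variants in [41,42] shrink area by tuning exactly these parameters.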