Review Article
On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review
Table 7
Summary of Lookup-Table approximations.
| Ref. | Method | Convergence | Precision | Computational costs | Notes |
| --- | --- | --- | --- | --- | --- |
| [39] | RA-LUT (tanh) | N/A | MSE = 0.0053 | 17.5 µs on 50 MHz FPGA | Resources used: 1815 LC comb., 4 LC reg. |
| [39] | RA-LUT (logsig) | N/A | MSE = 0.1598 | 17.5 µs on 50 MHz FPGA | Resources used: 1617 LC comb., 4 LC reg. |
| [40] | RA-LUT (tansig) + FPU | N/A | MSE = 0.0150 | 47 µs on 50 MHz FPGA | Resources used: 6538 LE |
| [41] | Error-optimized LUT | N/A | Max. error = 0.0378 | Propagation delay: 0.95 ns (2× faster than classic LUT approach) | Gate count: 70; Area (µm²): 695.22 (10× smaller than classic LUT approach) |
| [42] | Compact RA-LUT | N/A | Max. error = 0.0182 | Propagation delay: 2.46 ns | Gate count: 181; Area (µm²): 780 (4.5× smaller than classic LUT approach) |
| [43] | Hybrid | 11 epochs | % error = 1.88 (normalized to full-precision floating point) | Propagation delay: 0.8 ns | Trained on-chip with Levenberg-Marquardt algorithm; Area (µm²): 309 |
| [43] | LUT | 16 epochs | % error = 1.34 (normalized to full-precision floating point) | Propagation delay: 2.2 ns | Trained on-chip with Levenberg-Marquardt algorithm; Area (µm²): 19,592 |
| [43] | RA-LUT | 12 epochs | % error = 0.89 (normalized to full-precision floating point) | Propagation delay: 1.0 ns | Trained on-chip with Levenberg-Marquardt algorithm; Area (µm²): 901 |
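To make the precision figures in Table 7 concrete, the following is a minimal sketch of a uniform lookup-table approximation of tanh and of how the MSE and maximum-error columns are typically measured. The entry count, input range, and midpoint-sampling scheme are illustrative assumptions, not parameters taken from the cited works (which use range-addressable and error-optimized addressing rather than a plain uniform table).

```python
import math

def build_tanh_lut(n_entries=256, x_min=-4.0, x_max=4.0):
    """Tabulate tanh at the midpoint of each uniform sub-range.

    Sizes and range are illustrative assumptions, not values from [39]-[43].
    """
    step = (x_max - x_min) / n_entries
    table = [math.tanh(x_min + (i + 0.5) * step) for i in range(n_entries)]
    return table, x_min, step

def lut_tanh(x, table, x_min, step):
    """Approximate tanh(x) by indexing the sub-range containing x."""
    idx = int((x - x_min) / step)
    idx = max(0, min(len(table) - 1, idx))  # saturate outside the range
    return table[idx]

# Measure precision over a dense grid, as the MSE / max-error columns report.
table, x_min, step = build_tanh_lut()
xs = [i / 1000.0 for i in range(-4000, 4001)]
errs = [lut_tanh(x, table, x_min, step) - math.tanh(x) for x in xs]
mse = sum(e * e for e in errs) / len(errs)
max_err = max(abs(e) for e in errs)
```

With 256 midpoint-sampled entries over [-4, 4], the worst-case error is bounded by half the step width (about 0.016 here, since |tanh'| ≤ 1), which illustrates the trade-off the table summarizes: larger or better-placed entries (error-optimized LUTs) buy precision at the cost of area, while range-addressable schemes compress the table where the function is nearly flat.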