Review Article
On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review
Table 8
Summary of Piecewise Linear Approximations.
| Ref. | Method | Convergence | Precision | Computational costs | Notes |
| --- | --- | --- | --- | --- | --- |
| [44] | Piecewise linear “VHDL-C” | N/A | MSE = 0.00049 | 213 clock cycles | Resources used: flip-flop slices: 1277; 4-input LUTs: 3767; BRAMs: 4 |
| [45] | “Bajger-Omondi” method | N/A | Absolute error: up to 10⁻⁶ for 128 pieces with 18-bit precision | N/A | |
| [46] | PWL approximation | N/A | N/A | Propagation delay: 1.834 ns (100 ns more than the LUT approach) | Resources used: 4-input LUTs: 108 (79 fewer than the LUT approach); slices: 58 (44 fewer); total gates: 1029 (329 fewer) |
| [47, 48] | A-Law | N/A | % error = 0.63; 84% accuracy (classification problem) | Propagation delay: 3.729 ns | Resources used: slices: 185; LUTs: 101; total gates: 1653 |
| [47, 49] | Alippi | N/A | % error = 1.11 | Propagation delay: 3.441 ns | Resources used: slices: 127; LUTs: 218; total gates: 1812 |
| [47, 50, 51] | PLAN | N/A | % error = 0.63; 85% accuracy (classification problem) | Propagation delay: 4.265 ns | Resources used: slices: 127; LUTs: 218; total gates: 1812 |
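Among the schemes summarized in Table 8, PLAN [50] is attractive for hardware because every segment slope is a power of two, so each multiplication reduces to a shift-and-add. The sketch below shows the standard published PLAN segments in software form; the function name `plan_sigmoid` is ours for illustration, and a hardware implementation would operate on fixed-point values rather than floats.

```python
import math

def plan_sigmoid(x: float) -> float:
    """Piecewise linear approximation of the sigmoid (PLAN scheme).

    Segment breakpoints and coefficients follow the published PLAN
    approximation; all slopes (0.25, 0.125, 0.03125) are powers of
    two, so hardware needs only shifts and adds.
    """
    y = abs(x)
    if y >= 5.0:
        out = 1.0
    elif y >= 2.375:
        out = 0.03125 * y + 0.84375
    elif y >= 1.0:
        out = 0.125 * y + 0.625
    else:
        out = 0.25 * y + 0.5
    # Only |x| is approximated; the negative half-axis is recovered
    # from the sigmoid's symmetry: sigmoid(-x) = 1 - sigmoid(x).
    return out if x >= 0.0 else 1.0 - out
```

Comparing `plan_sigmoid` against `1 / (1 + math.exp(-x))` over a grid of inputs gives a worst-case absolute error of roughly 0.019, which is consistent with the sub-percent error figures reported for PLAN in the table.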