Review Article

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Table 9

Summary of hybrid and higher-order techniques.

| Ref. | Method | Convergence | Precision | Computational costs | Notes |
|---|---|---|---|---|---|
| [52] | 4th-order Taylor | N/A | From 99.68% to 45% accuracy (classification problem) | Full NN computation time: 1.7 ms | Resources used: Slices: 4438; Flip-Flops: 2054; LUTs: 8225 |
| [53] | 5th-order Taylor | N/A | % error = 0.51 | N/A | Resources used: Slices: 4895; Flip-Flops: 4777; LUTs: 8820 |
| [54, 55] | Hybrid with PWL and RA-LUT | N/A | Up to for 404 elements | Elaboration time: 40 μs on a 50 MHz FPGA | Resources used: Slices: 12; 4-input LUTs: 17; BRAM: 1 |
| [54, 55] | Hybrid with PWL and combinatorial | N/A | Up to for 404 elements | Elaboration time: 40 μs on a 50 MHz FPGA | Resources used: Slices: 12; 4-input LUTs: 17; BRAM: 0 |
| [56] | High-precision sigmoid/exponential | N/A | RMSE = (sigmoid); RMSE = (exponential) | Maximum operating frequency: 868.056 MHz | Resources used (as low as): 43 LUTs; 26 registers |
| [57] | PWL and optimized LUT | N/A | N/A | Propagation delay: 0.06 ns | Resources used: number of gates: 35; area (μm²): 148 |
| [39, 58] | Four-polynomial tanh ("4PY-T") | N/A | MSE = 0.0039 | Full NN computation (50 MHz FPGA): 142 μs | |
| [39, 58] | Five-polynomial tanh ("5PY-T") | N/A | MSE = 0.0018 | Full NN computation (50 MHz FPGA): 174 μs | |
| [39, 58] | Five-polynomial logsig ("5PY-L") | N/A | MSE = 0.0075 | Full NN computation (50 MHz FPGA): 185 μs | |
| [61] | Piecewise quadratic tanh ("scheme 2") | N/A | MEA = | Throughput rate: 0.773 MHz | Resources used: area (μm²): 83559.17 |
| [60] | Piecewise quadratic tanh ("Gs") | 33 epochs | SE = 0.1; 99.6% generalization capability | N/A | |
| [47, 61] | Zhang quadratic approximation | N/A | MEA = ; % error = 1.10 | Propagation delay: 3.9 ns | Resources used: Slices: 93; LUTs: 86; total gates: 1169 |
| [62] | Adjusted LUT (0.02 max. error) | N/A | MEA = 0.0121 | Propagation delay: 2.80 ns | Area (μm²): 5130.78 |
| [62] | Adjusted LUT (0.04 max. error) | N/A | MEA = 0.0246 | Propagation delay: 2.31 ns | Area (μm²): 3646.83 |