Review Article

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Table 4

Summary of analytic activation functions (AFs).

| Ref. | Method | Convergence | Precision | Computational cost | Notes |
| --- | --- | --- | --- | --- | --- |
| [26] | Hyperbolic tangent | N/A | MSE = 0.0165 | 0.3435 µs (Pentium II machine) | |
| [16] | Arctangent | 24 epochs | N/A | 0.3435 µs (Pentium II machine) | Backpropagation with η = 0.5 (learning rate) and α = 0.8 (momentum) |
| [18] | Quadratic sigmoid | 4000 epochs | MSE = 0.1–0.5 (2× more accurate than sigmoid on the same problem) | N/A | Backpropagation with η = 0.1 (learning rate) and α = 0.1 (momentum), reduced during training |
| [19, 22] | Logarithmic-exponential | 250 epochs | MSE = 0.048 (fitting problem); 97.2% accuracy (classification problem) | 6.1090 µs (Pentium II machine) | |
| [20] | Spline interpolant | 2000 epochs | MSE = −17.94 dB (3.26 dB less than sigmoid on the same problem) | N/A | Backpropagation with η = 0.8 (learning rate) |
| [21] | Hermite polynomial | N/A | 98.5% accuracy (classification problem) | N/A | |
| [22, 23] | Neural | 50 epochs | 97.6% accuracy (classification problem); MSE = 0.082 (prediction problem) | N/A | |
| [24] | Composite AF | 900 epochs | 92.8% accuracy (classification problem) | N/A | Even distribution of Gaussian, sinusoidal, and sigmoid |
| [24, 26] | Wave | 250 epochs | MSE = 0.2465 (15× less accurate than sigmoid on the same problem) | 0.3830 µs (Pentium II machine) | |
| [25] | CosGauss | 20 epochs | MSE = 1.0 (10× more accurate than sigmoid on the same problem) | N/A | Implemented on a cascade-correlation network |
| [26] | Sinc | 250 epochs | MSE = 0.0132 (0.25× more accurate than tanh on the same problem) | 104.3360 µs (Pentium II machine) | |
| [26] | PolyExp | 250 epochs | MSE = 0.1007 (6× less accurate than tanh on the same problem) | 0.3840 µs (Pentium II machine) | |
| [26] | SinCos | 250 epochs | MSE = 0.0114 (0.7× more accurate than tanh on the same problem) | 1.1020 µs (Pentium II machine) | |
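As a rough companion to the per-evaluation costs reported in the table, the sketch below implements common closed forms of three of the listed AFs (hyperbolic tangent, arctangent, and the unnormalized sinc) and micro-benchmarks their per-call cost. Note that the exact parametrizations used in the cited works may differ, and absolute timings depend entirely on hardware (the table's figures were measured on a Pentium II); only the relative ordering is meaningful.

```python
import math
import timeit

# Illustrative closed forms of three analytic AFs from Table 4.
# These are the textbook definitions, not necessarily the exact
# parametrizations used in the cited references.

def tanh_af(x: float) -> float:
    """Hyperbolic tangent activation."""
    return math.tanh(x)

def arctan_af(x: float) -> float:
    """Arctangent activation."""
    return math.atan(x)

def sinc_af(x: float) -> float:
    """Unnormalized sinc activation, sin(x)/x, with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def cost_per_call_us(f, n: int = 100_000) -> float:
    """Rough per-evaluation cost in microseconds (hardware-dependent)."""
    total = timeit.timeit(lambda: f(0.5), number=n)
    return 1e6 * total / n

if __name__ == "__main__":
    for name, f in [("tanh", tanh_af), ("arctan", arctan_af), ("sinc", sinc_af)]:
        print(f"{name:7s} {cost_per_call_us(f):.4f} us per call")
```

Timing a Python-level call mostly measures interpreter overhead; for a fairer analogue of the table's measurements, the same comparison can be run on vectorized or compiled implementations.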