| Ref. | Method | Convergence | Precision | Computational cost | Notes |
|---|---|---|---|---|---|
| [26] | Hyperbolic tangent | N/A | MSE = 0.0165 | 0.3435 µs (Pentium II machine) | |
| [16] | Arctangent | 24 epochs | MSE = | 0.3435 µs (Pentium II machine) | Backpropagation with learning rate 0.5 and momentum 0.8 |
| [18] | Quadratic sigmoid | 4000 epochs | MSE = 0.1–0.5 (2× more accurate than sigmoid on the same problem) | N/A | Backpropagation with learning rate 0.1 and momentum 0.1, reduced during training |
| [19, 22] | Logarithmic-exponential | 250 epochs | MSE = 0.048 (fitting problem); 97.2% accuracy (classification problem) | 6.1090 µs (Pentium II machine) | |
| [20] | Spline interpolant | 2000 epochs | MSE = −17.94 dB (3.26 dB less than sigmoid on the same problem) | N/A | Backpropagation with learning rate 0.8 |
| [21] | Hermite polynomial | N/A | 98.5% accuracy (classification problem) | N/A | |
| [22, 23] | Neural | 50 epochs | 97.6% accuracy (classification problem); MSE = 0.082 (prediction problem) | N/A | |
| [24] | Composite AF | 900 epochs | 92.8% accuracy (classification problem) | N/A | Even distribution of Gaussian, sinusoidal, and sigmoid units |
| [24, 26] | Wave | 250 epochs | MSE = 0.2465 (15× less accurate than sigmoid on the same problem) | 0.3830 µs (Pentium II machine) | |
| [25] | CosGauss | 20 epochs | MSE = 1.0 (10× more accurate than sigmoid on the same problem) | N/A | Implemented on a cascade-correlation network |
| [26] | Sinc | 250 epochs | MSE = 0.0132 (0.25× more accurate than tanh on the same problem) | 104.3360 µs (Pentium II machine) | |
| [26] | PolyExp | 250 epochs | MSE = 0.1007 (6× less accurate than tanh on the same problem) | 0.3840 µs (Pentium II machine) | |
| [26] | SinCos | 250 epochs | MSE = 0.0114 (0.7× more accurate than tanh on the same problem) | 1.1020 µs (Pentium II machine) | |
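To make the "Computational cost" column concrete, per-call activation cost can be microbenchmarked in a few lines. The sketch below (not the cited papers' harness, and timed on modern hardware rather than the Pentium II used above) compares two of the listed activations whose closed forms are standard: tanh and sinc, where sinc(x) = sin(x)/x with the removable singularity sinc(0) = 1.

```python
import math
import timeit

def tanh_act(x):
    # Hyperbolic tangent activation
    return math.tanh(x)

def sinc_act(x):
    # Sinc activation: sin(x)/x, defined as 1 at x = 0
    return math.sin(x) / x if x != 0.0 else 1.0

if __name__ == "__main__":
    N = 100_000
    for name, fn in [("tanh", tanh_act), ("sinc", sinc_act)]:
        total = timeit.timeit(lambda: fn(0.5), number=N)
        # Report average cost per call in microseconds
        print(f"{name}: {total / N * 1e6:.4f} us per call")
```

Absolute timings will differ from the table's figures by orders of magnitude on current CPUs; only the relative ranking between activations is comparable.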