Adaptive Online Sequential ELM for Concept Drift Tackling
Table 5
Average testing accuracy and Cohen's kappa in % for the MNIST RD, HD, and MNIST + USPS transfer learning experiments (other ELM parameters are the same: ROS, , , ), averaged over 10 trials.
(a) Benchmark result, nonadaptive OS-ELM and offline ELM

| Source       | Class  | Testing accuracy (OS-ELM) | Testing accuracy (offline ELM) | Cohen's kappa (OS-ELM) | Cohen's kappa (offline ELM) |
|--------------|--------|---------------------------|--------------------------------|------------------------|-----------------------------|
| MNIST        | (1–6)  | 95.99 ± 0.15              | 96.00 ± 0.14                   | 95.21 (0.30)           | 95.22 (0.30)                |
|              | (7–10) | 94.30 ± 0.22              | 94.32 ± 0.19                   | 92.50 (0.48)           | 92.53 (0.48)                |
| MNIST        | (1–6)  | 97.59 ± 0.11              | 97.49 ± 0.09                   | 97.10 (0.23)           | 97.00 (0.24)                |
|              | (7–10) | 95.76 ± 0.26              | 95.87 ± 0.12                   | 94.40 (0.42)           | 94.55 (0.42)                |
| MNIST + USPS | (1–10) | 96.01 ± 0.10              | 96.08 ± 0.08                   | 95.56 (0.02)           | 95.65 (0.02)                |
|              | (A–Z)  | 99.94 ± 0.02              | 99.94 ± 0.02                   | 99.94 (0.02)           | 99.93 (0.02)                |
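For reference, the Cohen's kappa reported alongside accuracy in this table corrects raw classifier–label agreement for the agreement expected by chance. A minimal NumPy sketch of the standard formula follows; the helper name `cohen_kappa` is ours, not from the paper, and `sklearn.metrics.cohen_kappa_score` computes the same quantity.

```python
import numpy as np

def cohen_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement (accuracy) and p_e is chance agreement from the marginals."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1.0                      # confusion matrix: rows = true, cols = predicted
    n = cm.sum()
    p_o = np.trace(cm) / n                   # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Perfect agreement gives kappa = 1; partial agreement is discounted by chance.
print(cohen_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3))  # 1.0
print(cohen_kappa([0, 0, 1, 1], [0, 0, 1, 0], 2))              # 0.5
```

Multiplying the result by 100 yields the percentage scale used in the table.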
(b) RD experiment, ELM ensemble (3 classifiers, full memory) versus AOS-ELM

| Source | Concept | Testing accuracy (ELM ensemble) | Testing accuracy (AOS-ELM) | Cohen's kappa (ELM ensemble) | Cohen's kappa (AOS-ELM) |
|--------|---------|---------------------------------|----------------------------|------------------------------|-------------------------|
| MNIST  | (1–6)   | 94.58 ± 0.17                    | 96.09 ± 0.12               | 93.54 (0.35)                 | 95.10 (0.31)            |
|        | (7–10)  | 91.60 ± 0.29                    | 94.34 ± 0.16               | 89.04 (0.57)                 | 92.56 (0.48)            |
(c) HD experiment, ELM ensemble (3 classifiers, full memory, outdated classifier pruning) versus AOS-ELM

| Source | Concept | Testing accuracy (ELM ensemble) | Testing accuracy (AOS-ELM) | Cohen's kappa (ELM ensemble) | Cohen's kappa (AOS-ELM) |
|--------|---------|---------------------------------|----------------------------|------------------------------|-------------------------|
| MNIST  | (1–6)   | 94.48 ± 0.33                    | 97.01 ± 0.18               | 93.42 (0.35)                 | 96.42 (0.26)            |
| MNIST  | (7–10)  | 92.29 ± 0.36                    | 96.05 ± 0.19               | 89.95 (0.55)                 | 94.78 (0.40)            |
(d) MNIST + USPS experiment, ELM ensemble (5 classifiers, full memory, outdated classifier pruning) versus AOS-ELM