Arif Budiman, Mohamad Ivan Fanany, Chan Basaruddin, "Adaptive Online Sequential ELM for Concept Drift Tackling", Computational Intelligence and Neuroscience, vol. 2016, Article ID 8091267, 17 pages, 2016. https://doi.org/10.1155/2016/8091267
Adaptive Online Sequential ELM for Concept Drift Tackling
Abstract
A machine learning method needs to adapt to changes in the environment that occur over time. Such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of Online Sequential Extreme Learning Machine (OSELM) and Constructive Enhancement OSELM (CEOSELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OSELM (AOSELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOSELM also works well for the sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOSELM on regression and classification problems by using public concept drift data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value than the multiclassifier ELM ensemble. Even though AOSELM in practice does not need a hidden nodes increase, we address some issues related to increasing the hidden nodes, such as error conditions and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect the "underfitting" condition.
1. Introduction
Data stream mining is a data mining technique in which the trained model is updated whenever new data arrive. However, the trained model must work in dynamic environments, where a vast amount of data not only is continuously generated but also keeps changing. This challenging issue is known as concept drift [1], in which the statistical properties of the input attributes and target classes shift over time. Such shifts can make the trained model less accurate.
Many methods for concept drift handling can be found in the literature [1], where the aim is to boost the generalization accuracy. These methods pursue an accurate, simple, fast, and flexible way to retain classification performance when a drift occurs. An ensemble classifier is a well-known way to retain classification performance. The combined decision of many single classifiers (mainly using ensemble member diversification) is more accurate than a single classifier [2]. However, it has higher complexity when handling multiple (consecutive) concept drifts.
One of the popular machine learning methods is the Extreme Learning Machine (ELM) introduced by Huang et al. [3–7]. The ELM is a Single-Layer Feedforward Neural Network (SLFN) with fast learning speed and good generalization capability.
In this paper, we focus on the learning adaptation method as an enhancement to Online Sequential Extreme Learning Machine (OSELM) [8] and Constructive Enhancement OSELM (CEOSELM) [9]. We name it adaptive OSELM (AOSELM). The AOSELM has the capability to handle multiple concept drift problems: changes in the number of attributes (virtual drift/VD), changes in the number of target classes (real drift/RD), or both at the same time (hybrid drift/HD), as well as recurrent context (all concepts occur alternately) and sudden drift (a new concept substitutes the previous concepts) [10]. The scope of attribute changes discussed in this paper is the feature space concatenation widely used in data fusion, kernel fusion, and ensemble learning [11], not feature selection (irrelevant feature removal) methods [12]. We compared the performance with nonadaptive sequential ELM: OSELM and CEOSELM. We also compared the performance with ELM classifier ensembles as the common adaptive approach to the concept drift solution. In the present study, although we focus on the adaptation aspect, we address some possible change detection mechanisms that are suitable for our method.
A preliminary version of RD and its early results appeared in conference proceedings [14]. In this paper, we introduce the new scenarios in VD, HD, and consecutive drifts, in either recurrent or sudden drift patterns, as well as the theoretical background explanation. Our main contributions in this research area can be summarized as follows:
(1) We proposed a simple adaptive method as an enhancement to OSELM and CEOSELM for addressing the concept drift issue. Unlike ensemble systems [6, 13] that need to manage the complex combination of a vast number of classifiers, we pursue a single classifier for simple implementation while retaining comparable performance for handling multiple (consecutive) drifts.
(2) We introduced a simple unified platform to handle a hybrid drift (HD), when changes in the number of attributes and the number of target classes occur at the same time.
(3) We elaborated how AOSELM uses the hybrid drift strategy for transfer learning. Transfer learning focuses on extracting the knowledge from one or more source task domains and applying the knowledge to a different target task domain [15]. Concept drift focuses on the time-varying domain with a small number of current data available. In contrast, transfer learning is not associated with time and requires the entire training and testing data set [16]. An example of transfer learning using the HD strategy is the transition between different but related data set sources with the same purpose. In this paper, we discussed transfer learning from numeric handwritten MNIST [17] to alphanumeric handwritten USPS [18] recognition.
(4) Naturally, the AOSELM handling strategy is based on recurrent context. We devised an AOSELM strategy to handle the sudden drift scenario by introducing the output marginalization method. This method is also applicable to concept drift in a regression problem.
(5) We studied the effect of increasing the number of hidden nodes, which is treated as one of the learning parameters, to improve the accuracy (other learning parameters are input weight, bias, activation function, and regularization factor). We proposed an evaluation parameter to predict the accuracy before the training is completed. We applied this assessment parameter to prevent the "underfitting" or nonconvergence condition (the model does not fit the data well enough, which makes the accuracy performance drop) when any learning parameter changes, such as a hidden nodes increase.
This paper is organized as follows. Section 2 explains some issues and challenges in concept drift, the background of ELM, and ELM in sequential learning. Section 3 presents the background theory and the algorithm derivation of the proposed method. In Section 4, we focus on the empirical experiments to prove the methods and research questions in regression and classification problems. We use artificial and real data sets. The artificial data sets are the streaming ensemble algorithm (SEA) [19] and STAGGER [20], which are commonly used as benchmarks in sequential learning. The real data sets are handwritten recognition data: MNIST for numeric [17] and USPS for alphanumeric classes [18]. We study the effect of a hidden nodes increase, one of the important learning parameters, in Section 4.5. Section 7 discusses research challenges and future directions. Section 8 concludes with some highlights.
2. Related Works
2.1. Notations
We specify the notations used throughout this article for easier understanding as follows:
(i) A matrix is written in uppercase bold (e.g., $\mathbf{X}$).
(ii) A vector is written in lowercase bold (e.g., $\mathbf{x}$).
(iii) The transpose of a matrix $\mathbf{X}$ is written as $\mathbf{X}^{T}$. The pseudoinverse of a matrix $\mathbf{H}$ is written as $\mathbf{H}^{\dagger}$.
(iv) $g(\cdot)$ will be used as a nonlinear differentiable function (activation function), for example, a sigmoid or tanh function.
(v) The amount of training data is $N$. Each input datum contains $d$ attributes. The target has $m$ number of classes. An input matrix can be denoted as $\mathbf{X} \in \mathbb{R}^{N \times d}$ and the target matrix as $\mathbf{T} \in \mathbb{R}^{N \times m}$.
(vi) The hidden layer matrix is $\mathbf{H} \in \mathbb{R}^{N \times L}$. The input weight matrix is $\mathbf{A} \in \mathbb{R}^{d \times L}$. The output weight matrix is $\boldsymbol{\beta} \in \mathbb{R}^{L \times m}$. The matrix $\Delta\mathbf{H}$ is the additional block portion of the matrix $\mathbf{H}$. The matrix $\mathbf{K} = \mathbf{H}^{T}\mathbf{H}$ is the autocorrelation matrix of $\mathbf{H}$. The inverse of matrix $\mathbf{K}$ is $\mathbf{P} = \mathbf{K}^{-1}$.
(vii) The number of hidden nodes is $L$. The initial number of hidden nodes can be denoted as $L_0$, and the number after an increase can be denoted as $L_0 + \Delta L$. $\Delta L$ denotes the number of additional nodes of $\Delta\mathbf{H}$.
(viii) When the number of training data $N \to \infty$, we employ the online sequential learning method by updating the model every time new training pairs are seen. $\mathbf{X}_{(0)}$ is the subset of input data at time $k = 0$ as the initialization stage. $\mathbf{X}_{(1)}, \ldots, \mathbf{X}_{(k)}$ are the subsets of input data at the next sequential times. Each subset may have a different number of samples. The corresponding label data is presented as $\mathbf{T}_{(0)}, \mathbf{T}_{(1)}, \ldots, \mathbf{T}_{(k)}$. We use a subscript with parentheses to show the sequence number.
(ix) We denote the training data from different concepts (sources or contexts) using the symbol $\mathbf{X}_{s}$ for training data and $\mathbf{T}_{s}$ for target data. We use a subscript without parentheses to show the source number.
(x) We denote the drift event using an arrow with the drift type written over it. For example, Concept 1 has a virtual drift event and is replaced by Concept 2 (sudden drift): $C_1 \xrightarrow{\text{VD}} C_2$. Concept 1 has a real drift event and is replaced by Concept 1 and Concept 2 recurrently (recurrent context) in a shuffled composition: $C_1 \xrightarrow{\text{RD}} \text{shuffled}(C_1, C_2)$.
2.2. Concept Drift Strategies
In this section, we briefly explain the various concept drift solution strategies.
Gama et al. [1] explained that many concept drift methods have been developed, but the terminologies are not well established. According to Gama et al., the basic concept drift based on Bayesian decision theory in the classification problem for class output $y$ and incoming data $\mathbf{x}$ is
$$p(y \mid \mathbf{x}) = \frac{p(y)\, p(\mathbf{x} \mid y)}{p(\mathbf{x})}.$$
Concept drift occurs when $p(\mathbf{x}, y)$ has changed; for example, $p_{t_0}(\mathbf{x}, y) \neq p_{t_1}(\mathbf{x}, y)$, where $p_{t_0}$ and $p_{t_1}$ are, respectively, the joint distributions at times $t_0$ and $t_1$. Gama et al. categorized the concept drift types as follows:
(1) Real drift (RD) refers to changes in $p(y \mid \mathbf{x})$. The change in $p(y \mid \mathbf{x})$ may be caused by a change in the class boundary (the number of classes) or in the class conditional probabilities (likelihood) $p(\mathbf{x} \mid y)$. When the number of classes expands and data from different classes come alternately, this is known as recurrent context. A drift where new conditional probabilities replace the previous conditional probabilities while the number of classes remains the same is known as sudden drift. Other terms are concept shift or conditional change [21].
(2) Virtual drift (VD) refers to changes in the distribution of the incoming data (e.g., $p(\mathbf{x})$ changes). These changes may be due to an incomplete or partial feature representation of the current data distribution. The trained model is built with additional data from the same environment without overlapping the true class boundaries. Other terms are feature change [21], temporary drift, or sampling shift.
Kuncheva [10, 22] explained the various configuration patterns of data sources over time as random noise, random trends (gradual changes), random substitutions (abrupt or sudden changes), and systematic trends (recurring context). The random noise will simply be filtered out. A gradual drift occurs when many concepts reoccur alternately in a gradual stage for a certain period. A consecutive drift takes place when many previously active concepts keep on changing alternately (recurring context) after some time. The sudden drift (abrupt change or concept substitution) is the type in which, at one time, one concept is suddenly replaced by another.
Žliobaitė [13] proposed a taxonomy of concept drift tackling methods, as shown in Figure 1. It describes the methods based on when the model is switched (the "when" axis) and how the learners adapt to training set formation or to the design and parametrization of the base learner (the "how" axis). The "when" axis spans drift handling from trigger-based to evolving-based methods. The "how" axis spans drift handling from training set formation to model manipulation (or parametrization) methods.
Žliobaitė [13] explained that most attention in concept drift tackling methods has been drawn to multiclassifier model selection and fusion rules, while little attention is paid to the model construction of the base classifier.
Gama et al. [1] proposed a complete online adaptive learning scheme that organizes four modules: memory, change detection, learning, and loss estimation (see Figure 2). These modular components can be integrated, permuted, and combined with each other. The key modules are the learning and the change detection modules. Most methods focus on some subset, or often a mixture, of many types within certain concept drifts.
The learning module refers to the methods for the adaptation strategies of the predictive model. It is categorized based on (i) how the model is updated when new data points are available (learning mode): retraining or incremental (online) modes; (ii) the behavior of predictive models on time-evolving data (model adaptation): a blind (evolving or implicit) module or an informed (trigger or explicit) module; (iii) the techniques for maintaining active predictive models (model management): a single model or an ensemble model. The change detection module refers to drift detection; it identifies change points or small time intervals during which changes occur.
Each drift type requires a different solution strategy. The solution for RD is entirely different from that for VD. If the systematic changes are likely to reappear, we may want to keep past successful classifiers and simply reuse them. If the changes are gradual, we may use a moving window strategy on the training data. If the changes are abrupt, we can pause the existing static classifiers and then retrain the classifier using the new training data. Thus, it is hard to combine many strategies at one time to solve many types of concept drift in a single simple platform.
2.3. ELM in Sequential Learning
In this section, we briefly explain the previous related works on ELM in sequential learning and adaptive environments.
ELM has gained popularity thanks to its learning speed, generalization capability, and simplicity. Huang [5] explained that the term "Extreme" means to move beyond conventional artificial neural network learning that requires iterative tuning. The ELM moves toward brain-like learning, in which hidden neurons need not be tuned.
The output function of an SLFN with single hidden layer matrix $\mathbf{H}$ can be presented as the function
$$f(\mathbf{x}) = \sum_{i=1}^{L}\beta_i\, g(\mathbf{a}_i \cdot \mathbf{x} + b_i) = \mathbf{h}(\mathbf{x})\boldsymbol{\beta},$$
where $\mathbf{h}(\mathbf{x}) = [g(\mathbf{a}_1 \cdot \mathbf{x} + b_1), \ldots, g(\mathbf{a}_L \cdot \mathbf{x} + b_L)]$. All of the parameters $(\mathbf{a}_i, b_i)$ defining the values of the elements of $\mathbf{H}$ are named hidden node parameters [6].
The solution of ELM training with the smallest error can be obtained when the output weight $\boldsymbol{\beta}$ is approximated by
$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T},$$
where $\mathbf{H}^{\dagger}$ is the pseudoinverse of $\mathbf{H}$.
$\mathbf{H}^{\dagger}$ can be approximated by the left pseudoinverse of $\mathbf{H}$ as
$$\mathbf{H}^{\dagger} = \left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}.$$
We can use ridge regression or regularized least squares to obtain
$$\mathbf{H}^{\dagger} = \left(\frac{\mathbf{I}}{c} + \mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}.$$
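For concreteness, this ridge-regularized solve is a one-liner in NumPy; the following is a minimal sketch (the function name and the regularization constant c are our illustrative choices, not from the paper):

```python
import numpy as np

def elm_train(H, T, c=1e3):
    """Batch ELM sketch: beta = (I/c + H^T H)^{-1} H^T T (ridge-regularized left pseudoinverse)."""
    L = H.shape[1]
    return np.linalg.solve(np.eye(L) / c + H.T @ H, H.T @ T)
```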
Based on [4], Liang et al. [8] proposed online learning for ELM named OSELM. If we have $\mathbf{H}_{(0)}$ filled by the initial number of training data $N_0$ and an incremental batch of data fills $\mathbf{H}_{(1)}$, the output weights $\boldsymbol{\beta}_{(1)}$ are approximated by
$$\boldsymbol{\beta}_{(1)} = \mathbf{K}_{(1)}^{-1}\begin{bmatrix}\mathbf{H}_{(0)}\\ \mathbf{H}_{(1)}\end{bmatrix}^{T}\begin{bmatrix}\mathbf{T}_{(0)}\\ \mathbf{T}_{(1)}\end{bmatrix}, \qquad \mathbf{K}_{(1)} = \begin{bmatrix}\mathbf{H}_{(0)}\\ \mathbf{H}_{(1)}\end{bmatrix}^{T}\begin{bmatrix}\mathbf{H}_{(0)}\\ \mathbf{H}_{(1)}\end{bmatrix}.$$
Both $\mathbf{H}_{(0)}$ and $\mathbf{H}_{(1)}$ have a different number of training data but the same number of hidden nodes.
If $\mathbf{K}_{(1)} = \mathbf{K}_{(0)} + \mathbf{H}_{(1)}^{T}\mathbf{H}_{(1)}$, then we can rewrite
$$\boldsymbol{\beta}_{(1)} = \boldsymbol{\beta}_{(0)} + \mathbf{K}_{(1)}^{-1}\mathbf{H}_{(1)}^{T}\left(\mathbf{T}_{(1)} - \mathbf{H}_{(1)}\boldsymbol{\beta}_{(0)}\right).$$
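Continuing in the same NumPy setting, the recursive update can be packaged as a small class. This is a hedged sketch of OSELM, not the authors' Matlab implementation; the class and method names are ours, and the sigmoid activation and ridge-seeded K follow common OSELM practice rather than the paper's exact settings:

```python
class OSELM:
    """Minimal OSELM sketch: fixed random hidden layer, recursive least-squares output."""
    def __init__(self, n_inputs, n_hidden, n_outputs, c=1e3, rng=None):
        rng = rng or np.random.default_rng(0)
        self.A = rng.normal(size=(n_inputs, n_hidden))   # input weights (never retrained)
        self.b = rng.normal(size=n_hidden)               # biases (never retrained)
        self.K = np.eye(n_hidden) / c                    # autocorrelation K, ridge-seeded
        self.beta = np.zeros((n_hidden, n_outputs))      # output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.A + self.b)))   # sigmoid hidden layer H

    def partial_fit(self, X, T):
        H = self._hidden(X)
        self.K += H.T @ H                                # K_(k+1) = K_(k) + H^T H
        self.beta += np.linalg.solve(self.K, H.T @ (T - H @ self.beta))

    def predict(self, X):
        return self._hidden(X) @ self.beta
```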
The OSELM assumes no changes in the number of hidden nodes. However, increasing the number of hidden nodes may be required to improve the performance. CEOSELM [9] has addressed this problem by adding hidden nodes in the sequential learning stage, so $L_{(k+1)} = L_{(k)} + \Delta L$. The submatrix that maps the previous data onto the new hidden nodes is set to a zero block matrix to simplify the computation, in accordance with the fact that the previous data is not related to the new hidden nodes. The additional hidden nodes block matrix $\Delta\mathbf{H}_{(k+1)}$ for the data $\mathbf{X}_{(k+1)}$ corresponds to the $\Delta L$ additional hidden nodes.
Then, we can rewrite with $\overline{\mathbf{H}}_{(k+1)} = [\mathbf{H}_{(k+1)} \; \Delta\mathbf{H}_{(k+1)}]$ as
$$\mathbf{K}_{(k+1)} = \begin{bmatrix}\mathbf{K}_{(k)} & \mathbf{0}\\ \mathbf{0} & \mathbf{0}\end{bmatrix} + \overline{\mathbf{H}}_{(k+1)}^{T}\overline{\mathbf{H}}_{(k+1)}.$$
$\mathbf{K}_{(k+1)}^{-1}$ can be solved using block matrix inversion and the Schur complement; then
$$\boldsymbol{\beta}_{(k+1)} = \begin{bmatrix}\boldsymbol{\beta}_{(k)}\\ \mathbf{0}\end{bmatrix} + \mathbf{K}_{(k+1)}^{-1}\overline{\mathbf{H}}_{(k+1)}^{T}\left(\mathbf{T}_{(k+1)} - \overline{\mathbf{H}}_{(k+1)}\begin{bmatrix}\boldsymbol{\beta}_{(k)}\\ \mathbf{0}\end{bmatrix}\right).$$
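On the OSELM sketch above, the CEOSELM hidden-node increase amounts to zero-block padding of K and beta before the next update. The helper below is our hypothetical reading; in particular, the small ridge seed on the new diagonal, which keeps K invertible, is our assumption:

```python
def add_hidden_nodes(model, delta_L, c=1e3, rng=None):
    """CEOSELM-style growth sketch: pad K and beta with zero blocks for delta_L new nodes."""
    rng = rng or np.random.default_rng(1)
    d, L = model.A.shape
    model.A = np.hstack([model.A, rng.normal(size=(d, delta_L))])  # new random input weights
    model.b = np.concatenate([model.b, rng.normal(size=delta_L)])
    K = np.zeros((L + delta_L, L + delta_L))
    K[:L, :L] = model.K              # previous data vs. new nodes: zero off-diagonal blocks
    K[L:, L:] = np.eye(delta_L) / c  # ridge seed keeps K invertible
    model.K = K
    model.beta = np.vstack([model.beta, np.zeros((delta_L, model.beta.shape[1]))])
```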
It is important to note that both OSELM and CEOSELM did not address the concept drift issue, for example, when the number of attributes in $\mathbf{X}$ or the number of classes in $\mathbf{T}$ in the data set has increased. In this paper, we categorize OSELM and CEOSELM as nonadaptive sequential ELM.
To the best of our knowledge, no previous single-base ELM approach specifically addresses learning under many types of concept drift [6]. However, some papers [23, 24] have already discussed how ELM can be implemented in an adaptive environment.
van Schaik and Tapson [23] proposed the Online Pseudoinverse Update Method (OPIUM). OPIUM is based on Greville's method as the incremental solution to compute the pseudoinverse of a matrix. The pseudoinverse computation can be solved incrementally as a linear regression problem and can be adaptive, which allows for nonstationary data. The derivation of OPIUM is equivalent to that of OSELM if, at each iteration, the new hidden layer output is a linear combination of the previous hidden layer. Under this condition, the simpler derivation of (3) with the right pseudoinverse becomes
$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{T}\left(\mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}.$$
van Schaik and Tapson defined $\boldsymbol{\Theta}$ as the cross-correlation matrix between $\mathbf{T}$ and $\mathbf{H}$ and $\boldsymbol{\Phi}$ as the inverse of the autocorrelation matrix $\mathbf{H}\mathbf{H}^{T}$, so $\hat{\boldsymbol{\beta}} = \boldsymbol{\Theta}\boldsymbol{\Phi}$. According to Greville's method, the solution for $\boldsymbol{\Theta}_{k}$ is $\boldsymbol{\Theta}_{k-1} + \mathbf{t}_{k}\mathbf{h}_{k}^{T}$. And the solution for $\boldsymbol{\Phi}_{k}$ is $\boldsymbol{\Phi}_{k-1} - \boldsymbol{\Phi}_{k-1}\mathbf{h}_{k}\mathbf{h}_{k}^{T}\boldsymbol{\Phi}_{k-1}/(1 + \mathbf{h}_{k}^{T}\boldsymbol{\Phi}_{k-1}\mathbf{h}_{k})$, or in short writing $\hat{\boldsymbol{\beta}}_{k} = \boldsymbol{\Theta}_{k}\boldsymbol{\Phi}_{k}$.
van Schaik and Tapson proposed a simplified version named OPIUM light, which computes only the on-diagonal elements of $\boldsymbol{\Phi}$. They applied OPIUM light to nonstationary data by using different weights in determining $\hat{\boldsymbol{\beta}}$ that favor the most recent pair, appropriate for a nonstationary mapping.
In our opinion, OPIUM only tackled the real drift case with a discriminant function boundary shift in the streaming data (e.g., the frequency shift of a sine wave). They implemented the weighting as a nonstationary mapping parameter between the input and output vectors.
Cao et al. [24] proposed a two-phase classification algorithm: first, a weighted ensemble classifier based on ELM (WECELM), which dynamically adjusts the classifiers and the weights of uncertain training data to solve the concept drift problem; second, an uncertainty classifier based on ELM (UCELM), designed for the classification of unknown data streams, which considers each attribute (tuple) value and its uncertainty, thus improving efficiency and accuracy. When concept drift occurs, WECELM dynamically adjusts the classifiers and the weights of the training data: a new classifier is added to the ensemble until a preset maximum is reached, and then the worst-performing classifier is removed. UCELM evaluates the uncertainty value of every newly arrived attribute and decides based on the probability of the new attributes belonging to each class. In our opinion, WECELM is categorized as an evolving-based method by selecting the best-performing classifiers, and UCELM addresses the virtual drift problem by using uncertainty attribute selection.
Most ELM works in adaptive environments address a particular drift case, which may be impractical for other cases. We pursue a simple unified platform that can handle many (consecutive) drift cases.
3. Proposed Method
3.1. Theoretical Background of AOSELM
In sequential learning, partial training data arrives in a time sequential fashion: $(\mathbf{X}_{(0)}, \mathbf{T}_{(0)}), (\mathbf{X}_{(1)}, \mathbf{T}_{(1)}), \ldots, (\mathbf{X}_{(k)}, \mathbf{T}_{(k)})$. Learning is the process of constructing a function $f$ to map between an observation $\mathbf{x}$ and its nature called $y$ (class) [25]. When the number of training data $N \to \infty$, we need to address the expected value of $f$.
Learning from the data is the process of selecting a function $f_{N}^{*}$ from a class of functions $\mathcal{F}$ by minimizing the empirical squared error, with $L(f_{N}^{*})$ the error probability of the resulting classifier. According to [25], empirical squared error minimization is consistent under general conditions.
Theorem 1. Assume that $\mathcal{F}$ is a totally bounded class of functions. If $N \to \infty$, then the classification rule obtained by minimizing the empirical squared error over $\mathcal{F}$ is strongly consistent; that is,
$$\lim_{N \to \infty} L\left(f_{N}^{*}\right) = L^{*} \quad \text{with probability one}.$$
Based on the Law of Large Numbers (LLN) theorem [26] and Theorem 1, in sequential learning with the number of training data $N \to \infty$, we can make sure of the consistency of the expected value of the learning model.
The concept drift refers to an online supervised learning model in which the relation between the input data and the target variable changes over time [1]. If the learning model $f_1$ from Concept 1 is bounded by hypothesis space $\mathcal{H}_1$ and feature space $\mathcal{X}_1$ and the learning model $f_2$ from Concept 2 is bounded by hypothesis space $\mathcal{H}_2$ and feature space $\mathcal{X}_2$, we define real drift as when the hypothesis space $\mathcal{H}_1$ has changed to $\mathcal{H}_2$; we scope the definition to dimension changes. Virtual drift is when the feature space $\mathcal{X}_1$ has changed to $\mathcal{X}_2$; again, we scope the definition to dimension changes.
To achieve the consistency of the minimized square error in the new hypothesis space or new feature space, the learning model needs a transition map from the former space to the new space. The learning model needs a transition space before it converges to the new learning model $f_2$. Our transition space idea was inspired by the geometric approach for solving many problems in the fields of pattern recognition and machine learning [27, 28].
For the transition space, we propose two approaches: (i) assign random coordinates in the new concept space and (ii) assign the equivalent projection coordinates in the new design space. The first approach is suitable for the VD scenario, in which we assign the new random coordinates as the new input weight parameters. The second approach is suitable for the RD situation, by setting the equivalent projection coordinates in the new space (e.g., a point $x$ in a 1D coordinate has the corresponding 2D projection coordinates $(x, 0)$).
Here, we relate the ELM theory to the context of AOSELM concept drift scenarios (see Table 1) as follows.
Table 1: (a) the experiment design scenarios; (b) concept drift sequential patterns.
Scenario 1 (virtual drift (VD)). Huang et al. [6] explained the interpolation theory from the ELM point of view as stated by the following theorem.
Theorem 2. Given any small positive value $\varepsilon > 0$, any activation function $g$ which is infinitely differentiable in any interval, and $N$ arbitrary distinct samples $(\mathbf{x}_i, \mathbf{t}_i) \in \mathbb{R}^{d} \times \mathbb{R}^{m}$, there exists $\tilde{L} \leq N$ such that, for any input weight and bias pair $\{(\mathbf{a}_i, b_i)\}_{i=1}^{\tilde{L}}$ randomly generated from any interval of $\mathbb{R}^{d} \times \mathbb{R}$, according to any continuous probability distribution, with probability one, $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\| < \varepsilon$. Furthermore, if $\tilde{L} = N$, then with probability one, $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\| = 0$.
According to Theorem 2 and Learning Principle I of ELM Theory [5], the input weight and bias as hidden node parameters are independent of the training samples and their learning environment through randomization. Their independence holds not only in the initial training but also in any sequential training stage. Thus, we can adjust the input weight and bias pair at any sequential stage and still ensure with probability one that $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\| < \varepsilon$.
Scenario 2 (real drift (RD)). Huang et al. [6] explained the universal approximation capability of ELM as described by the following theorem.
Theorem 3. Given any nonconstant piecewise continuous function $G: \mathbb{R}^{d} \to \mathbb{R}$, if $\operatorname{span}\{G(\mathbf{a}, b, \mathbf{x})\}$ is dense in $L^{2}$, then, for any continuous target function $f$ and any function sequence $\{G(\mathbf{a}_i, b_i, \mathbf{x})\}_{i=1}^{L}$ randomly generated according to any continuous sampling distribution, $\lim_{L \to \infty}\|f - f_{L}\| = 0$ holds with probability one if the output weights $\beta_i$ are determined by ordinary least square to minimize $\|f(\mathbf{x}) - \sum_{i=1}^{L}\beta_i G(\mathbf{a}_i, b_i, \mathbf{x})\|$.
Based on Theorem 3 and inspired by the related works [9, 14], we devised the AOSELM real drift capability by modifying the output matrix $\boldsymbol{\beta}$ with zero block matrix concatenation to change the size dimension of the matrix without changing its values. The zero block matrix means the previous $\boldsymbol{\beta}$ has no knowledge about the new concept. ELM can approximate any complex decision boundary, as long as the output weights are determined by ordinary least square to keep the error minimum.
3.2. AOSELM Algorithms
In this section, we present the AOSELM pseudocodes (the Matlab source code, data set, and demo file implementation are available at https://github.com/abudiman250172/adaptiveOSELM) in the $k$th sequential stage with training input $\mathbf{X}_{(k)}$ and target $\mathbf{T}_{(k)}$ to update $\boldsymbol{\beta}_{(k)}$.
Basically, we have three pseudocodes: OSELMSeq (Algorithm 1) as the OSELM and CEOSELM pseudocode; AOSELMVDSeq (Algorithm 2) as the AOSELM pseudocode tackling virtual drift; and AOSELMRDSeq (Algorithm 3) as the AOSELM pseudocode addressing real drift. We can combine the pseudocodes to form the hybrid drift pseudocode (Algorithm 4); a sketch of these adaptation steps in code follows the algorithm listings below. We can increase the hidden nodes using CEOSELM in Algorithm 1 after AOSELMVDSeq or AOSELMRDSeq. For initialization, basically we can use any ordinary ELM initialization in offline learning mode.
Algorithm 1: OSELMSeq (OSELM and CEOSELM sequential training).
Algorithm 2: AOSELMVDSeq (AOSELM virtual drift tackling).
Algorithm 3: AOSELMRDSeq (AOSELM real drift tackling).
Algorithm 4: Hybrid drift (combination of Algorithms 2 and 3).
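To make Algorithms 2–4 concrete, the following sketch shows the adaptation steps on the OSELM sketch from Section 2.3 (the function names are ours, not the published Matlab code): the VD step assigns new random input weights for the added attributes, and the RD step concatenates a zero block onto the output weights for the added classes.

```python
def adapt_virtual_drift(model, n_new_attributes, rng=None):
    """AOSELM VD sketch: new random input-weight rows for the added attributes."""
    rng = rng or np.random.default_rng(2)
    L = model.A.shape[1]
    model.A = np.vstack([model.A, rng.normal(size=(n_new_attributes, L))])
    # K and beta are unchanged: the hidden layer dimension stays the same.

def adapt_real_drift(model, n_new_classes):
    """AOSELM RD sketch: zero-block concatenation on beta for the added classes."""
    L = model.beta.shape[0]
    model.beta = np.hstack([model.beta, np.zeros((L, n_new_classes))])

def adapt_hybrid_drift(model, n_new_attributes, n_new_classes, rng=None):
    """AOSELM HD sketch: both adaptations at the same drift event."""
    adapt_virtual_drift(model, n_new_attributes, rng)
    adapt_real_drift(model, n_new_classes)
```

After any of these calls, training simply resumes with partial_fit on data from the new concept.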
For the sudden drift scenario, we proposed the output marginalization method by adding new output nodes when the new concept is presented (see Figure 3) and marginalizing the output result by defining the class of concept $s$ as the $\operatorname{arg\,max}$ over the output nodes that belong to concept $s$. We scoped that the new concept has the same output nodes quantity as the previous concept. Output marginalization works by shifting the ELM output to the output nodes belonging to the new concept and ignoring the previous concept's output nodes. This strategy is similar to classifier pruning in an ELM ensemble. However, in output marginalization, we can reactivate the previous concepts by shifting back to the previous output nodes. If we want to forget the last concept totally, we can quickly delete the previous output nodes without impacting the generalization performance, or we can increase the hidden nodes at the same time as the drift event.

In regression, because we have only one output node, we can employ the sudden drift scenario by amplifying the related output node of the concept with a constant value that makes the maximum output approximate 1.
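A minimal sketch of output marginalization under the assumptions above (tracking each concept's output nodes as a column slice of beta is our illustrative bookkeeping):

```python
def predict_concept(model, X, concept_slices, active_concept):
    """Output marginalization sketch: argmax over the active concept's output nodes only."""
    Y = model.predict(X)
    lo, hi = concept_slices[active_concept]   # column range owned by the active concept
    return np.argmax(Y[:, lo:hi], axis=1)     # previous concepts' nodes are ignored

# Usage after a sudden drift C1 -> C2 with 10 output nodes per concept:
#   concept_slices = {1: (0, 10), 2: (10, 20)}
#   labels = predict_concept(model, X_test, concept_slices, active_concept=2)
```

Reactivating a previous concept then amounts to switching active_concept back to its slice.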
The systematic rules make AOSELM more flexible in handling complex consecutive drift scenarios. The AOSELM stores only the previous output weight $\boldsymbol{\beta}$ and autocorrelation $\mathbf{K}$. The autocorrelation does not keep the training data. This makes AOSELM scalable for big streaming data without degrading the computation performance.
To improve the accuracy, we define the target values in $\{-1, 1\}$, so that the class of $\mathbf{x}$ is $\operatorname{arg\,max}_{j} f_{j}(\mathbf{x})$. According to [29], the target values $\{-1, 1\}$ are equivalent with $\{0, 1\}$.
4. Experiments
4.1. Experiments Design in Classification
To verify our method, we designed some experiments with the following purposes:
(i) To investigate the effectiveness of AOSELM in tackling three concept drift scenarios (VD, RD, and HD) in two sequential patterns (sudden changes and recurring context). We used various data sets, starting with synthetic data sets (SEA, STAGGER) and then real data sets in handwritten recognition (MNIST, USPS). Each data set has different drift characteristics. This experiment is presented in Sections 4.2 and 4.4. We also demonstrated the AOSELM capability in the drift detection role in Section 4.3 using the SEA data set.
(ii) To investigate the effectiveness of AOSELM in transfer learning to combine different data set sources. This experiment is presented in Section 4.4 using two data set sources (MNIST and USPS) in the handwritten recognition problem.
(iii) To investigate the effect of a hidden nodes increase at the drift events and how it impacts performance. This experiment is presented in Section 4.5.
We used Matlab™ running on a Microsoft Windows™ computer with a 4-core 2.5 GHz processor and 8 GB memory.
Our experiments are organized as follows:
(i) Simulation benchmark tests on the data sets commonly used in concept drift handling of stream data, for example, SEA [19] and STAGGER [20] (see Table 2(a)). Both data sets are binary classification problems. SEA has 3 inputs with random integer values from 0 to 9. STAGGER has three inputs with multiple category values from 1 to 3 (total inputs are 9). SEA and STAGGER are examples of concept drift caused by discriminant function changes while the number of attributes and classes of all concepts stays the same. The change type is sudden drift. The expected result is that the classifier has good performance on the newest concept [22].
(ii) We tested our algorithm with real-world public data sets: the MNIST numeric (0 to 9) [17] and the USPS alphanumeric (A to Z, 0 to 9) handwritten data sets [18]. We used the original grey-level image attributes [x1] of the MNIST data set and the combination [x1 x2] with additional attributes [x2] from the 9 × 9 bins histogram of oriented gradients of the grey-level image features [30]. For USPS, we added more data with Gaussian random and salt-pepper noises. Refer to Table 2(a) for detailed data set information.
(iii) We designed the initial input weights and bias based on robust OSELM with regularization scalar (ROS) [31] and then based on initial random values from the normal distribution (NORM). The activation function is sigmoid. The pseudoinverse function is the orthogonal projection using ridge regularization.
(iv) Let us define the following concepts: $C_1$ is MNIST [x1] class (1–6), $C_2$ is MNIST [x1] class (7–10), $C_3$ is MNIST [x1 x2] class (1–6), $C_4$ is MNIST [x1 x2] class (7–10), $C_5$ is MNIST [x1 x2] class (1–10), and $C_6$ is USPS class (1–10, A–Z). We followed the simulated concept drift methods in Dries and Rückert [32]. We simulated sudden drift by splitting the composition into two groups, for example, $C_3$ and $C_4$, and recurring context by shuffling the composition of $C_3$ and $C_4$. We set the sequential training flow to be the following drift equations:
Experiment 1: virtual drift: $C_1 \xrightarrow{\text{VD}} C_3$.
Experiment 2: real drift: for recurring context, $C_3 \xrightarrow{\text{RD}} \text{shuffled}(C_3, C_4)$; for sudden drift, $C_3 \xrightarrow{\text{RD}} C_4$.
Experiment 3: hybrid drift: $C_1 \xrightarrow{\text{HD}} \text{shuffled}(C_3, C_4)$.
Experiment 4: MNIST + USPS transfer learning: $C_5 \xrightarrow{\text{HD}} C_6$.
(v) We measured the performance based on Table 2(c). The testing accuracy and Cohen's kappa show the quantitative measurement. The predictive accuracy demonstrates the trend in a line chart. The sudden drift performance is based on the forgetting capability, which compares the testing accuracy of the latest concept against all the previous concepts.
(vi) We compared the AOSELM performance with the nonadaptive online sequential and offline versions of the ELM classifier. The performance expectation of a sequential version classifier is to approximate the offline version of the classifier (a desideratum for online classifiers [22]). We also compared with an adaptive ELM ensemble method (see Figure 4). We designed the hierarchical ensemble using two models of ELM classifier with different roles (see Figure 4). The first role is a binary classifier that acts as a director based on one-against-all (OAA) classification. The binary classifier needs all sequential training data to be recalled (full memory). The other role is the data classifier. This ensemble requires a growing number of classifiers as the number of concepts grows and is thus not effective for a consecutive concept drift case, for example, the SEA concepts. The ensemble also applied outdated classifier pruning when the ensemble detected that the previous attributes needed to be replaced.
Table 2: (a) data set dimension and quantity; (b) evaluation method; (c) performance measurements.
4.2. SEA and STAGGER Concepts Result
We addressed the question of whether nonadaptive OSELM and CEOSELM with a hidden nodes increase could handle the concept drift situation. We compared AOSELM with no hidden nodes increase (AOSELM1) and with a hidden nodes increase (AOSELM2). We used 5-fold cross-validation and compared the NORM and ROS parameters. The hidden nodes parameters and the hidden nodes increase per drift were set separately for SEA and for STAGGER.
The AOSELM has better accuracy and recovery time (see Tables 3(a) and 3(b)) than CEOSELM, whereas nonadaptive OSELM fails (see Figure 5). AOSELM2 improved the forgetting capability better than AOSELM1. In comparison with Kolter and Maloof's result using dynamically weighted majority of naive Bayes (DWMNB) for SEA, the AOSELM result is close to the DWM result. In comparison with DWM of inducing decision trees (DWMITI) for STAGGER [20], AOSELM outperformed DWM (see Tables 3(a) and 3(b)).
Table 3: (a) testing accuracy in % for SEA; (b) testing accuracy in % for STAGGER.
Figure 5: (a) the SEA concept; (b) the STAGGER concept.
4.3. Concept Drift Detection
The drift detection works based on loss estimation (see Figure 2), comparing the current prediction accuracy with the previous feedback. Using a method similar to [33, 34], we can evaluate the intersection point between the accuracy decrease and increase in Figure 6. If the consecutive loss exceeds a certain threshold, a drift warning status is triggered. We measured the output performance from the new concept output and compared it with the previous output. If it met certain criteria, the new AOSELM was committed; otherwise, the previous AOSELM was rolled back.
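A minimal sketch of such a loss-based trigger, reusing NumPy from the sketches above (the window length and threshold are illustrative assumptions, not the settings of [33, 34]):

```python
def drift_warning(loss_history, window=30, threshold=0.15):
    """Trigger a drift warning when the recent mean loss rises above the baseline loss."""
    if len(loss_history) < 2 * window:
        return False
    recent = np.mean(loss_history[-window:])      # consecutive loss performance
    baseline = np.mean(loss_history[:-window])    # previous feedback
    return (recent - baseline) > threshold
```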
4.4. MNIST and MNIST + USPS Result
We measured the testing accuracy based on holdout test data over 10 experiment trials. The results are as follows.
Experiment 1 (virtual drift). The AOSELM with [x1 x2] attributes has a Cohen's kappa of testing accuracy of 95.72 (0.21)%, approximating its nonadaptive OSELM and offline ELM [x1 x2] versions with the same number of hidden nodes. It has better accuracy than the single attribute [x1] or [x2] alone (see Table 4(b)). This proves our explanation in the theoretical background in Section 3.1.
Table 4: (a) benchmark result, nonadaptive OSELM and offline ELM; (b) VD experiment, AOSELM.
Note. We set the number of hidden nodes for the [x2] ELM based on the same ratio between the number of input nodes and the number of hidden nodes as in the [x1] ELM.
Experiment 2 (real drift). The final result is shown in Table 5(b): the AOSELM has better Cohen's kappa performance for all concepts than the ELM ensemble and slightly exceeds its nonadaptive and offline ELM.
Table 5: (a) benchmark result, nonadaptive OSELM and offline ELM; (b) RD experiment, ELM ensemble (3 classifiers, full memory) versus AOSELM; (c) HD experiment, ELM ensemble (3 classifiers, full memory, outdated classifier pruning) versus AOSELM; (d) MNIST + USPS experiment, ELM ensemble (5 classifiers, full memory, outdated classifier pruning) versus AOSELM.
As in the split composition, the AOSELM with a hidden nodes increase has better forgetting capability than the AOSELM with no increase (see Table 8(b)).
Experiment 3 (hybrid drift). The final result is shown in Table 5(c): the AOSELM has better Cohen's kappa performance for HD than the ELM ensemble and approximates its nonadaptive and offline ELM.
Experiment 4 (MNIST + USPS transfer learning). The AOSELM has better Cohen's kappa performance for both the numeric and alphabet concepts than the ELM ensemble (see Table 5(d)) and approximates its nonadaptive and offline ELM. The AOSELM shows better recovery time than the ELM ensemble in Figure 7.
4.5. The Effect of Hidden Nodes Increase
The selection of the initial hidden nodes size is important for good generalization performance. Research [3, 6] suggested that the hidden nodes size be at minimum equal to the rank value of the training data. However, in a data stream, it is hard to determine a fixed number of hidden nodes following that suggestion. A larger $L$ requires more computation resources and processing time and probably does not give a significant result at the end. Thus, we have a requirement to increase the hidden nodes in the sequential stage [9].
The experiment result in Table 6 shows that the performance improved with certain hidden nodes size increases. We used different initial hidden nodes size ($L_0$) conditions: 2000, 666 (the rank value of the initial training data), and 713 (the rank value of the total training data). We also used different conditions of hidden nodes increase ($\Delta L$) by using ROS parameters on the drift event: 0 (no increase), 500, 1000, and 2000. However, a larger $L_0$ has better influence than a $\Delta L$ increase.
Table 6: the effect of initial hidden nodes size and hidden nodes increase on performance.
We studied the effect of a hidden nodes increase in the sequential phase as follows.
(1) "Underfitting" Condition. "Underfitting" is the condition when the model does not fit the data well enough, which causes nonconvergence. Based on an empirical experiment with a $\Delta L$ increase in the sequential phase in Table 7, we investigated the particular condition in which the AOSELM classifier has a bad result. We realized that the ELM performance depends upon finding the general matrix inverse of $\mathbf{H}$. Based on the orthogonal projection method in CEOSELM, we can employ the rank value of $\mathbf{H}^{T}\mathbf{H}$ as an evaluation parameter to detect "underfitting".
Table 7: empirical results with a hidden nodes increase in the sequential phase.
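A minimal sketch of this rank check, continuing the NumPy sketches above (reading the indicator as rank deficiency of the autocorrelation matrix relative to the number of hidden nodes is our interpretation):

```python
def underfitting_warning(H):
    """'Underfitting' indicator sketch: K = H^T H losing full rank means the
    pseudoinverse solution is ill-conditioned and accuracy may drop."""
    L = H.shape[1]
    return np.linalg.matrix_rank(H.T @ H) < L
```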