Research Article  Open Access
Imen Jammoussi, Mounir Ben Nasr, "A Hybrid Method Based on Extreme Learning Machine and Self Organizing Map for Pattern Classification", Computational Intelligence and Neuroscience, vol. 2020, Article ID 2918276, 9 pages, 2020. https://doi.org/10.1155/2020/2918276
A Hybrid Method Based on Extreme Learning Machine and Self Organizing Map for Pattern Classification
Abstract
Extreme learning machine is a fast learning algorithm for single hidden layer feedforward neural networks. However, an improper number of hidden neurons and random parameters have a great effect on the performance of the extreme learning machine. In order to select a suitable number of hidden neurons, this paper proposes a novel hybrid learning method based on a two-step process. First, the parameters of the hidden layer are adjusted by a self-organized learning algorithm. Next, the weight matrix of the output layer is determined using the Moore–Penrose inverse method. Nine classification datasets are considered to demonstrate the efficiency of the proposed approach compared with the original extreme learning machine, Tikhonov-regularized optimally pruned extreme learning machine, and backpropagation algorithms. The results show that the proposed method is fast and produces better accuracy and generalization performance.
1. Introduction
The extreme learning machine (ELM) is an important supervised machine learning algorithm proposed for training single hidden layer feedforward neural networks (SLFNs), which have been successfully used in many engineering disciplines [1–8]. The main drawbacks of ELM are the selection of the optimal number of hidden nodes, the random choice of the input parameters, and the type of the activation functions. These disadvantages directly affect the performance of the neural network [9, 10]. Therefore, in order to enhance the performance of SLFNs, several algorithms have been developed for optimizing ELM hidden nodes [11–23]. In [11], the authors proposed a new kind of ELM, named the self-adaptive extreme learning machine (SaELM), in which the optimal number of hidden neurons is selected to construct the neural network. In [12], Huang et al. proposed an incremental extreme learning machine (I-ELM), which randomly adds hidden neurons incrementally and analytically determines the output weights. In [13], Huang and Chen proposed an improved version of I-ELM, called the enhanced random search-based incremental algorithm (EI-ELM), which chooses the hidden neurons that lead to the smallest residual error at each learning step. A further improvement of I-ELM is made in the convex incremental extreme learning machine (CI-ELM) [14], whose output weights are updated after a new hidden neuron is added. In [15], an effective learning algorithm, known as the self-adaptive evolutionary extreme learning machine, is presented to adjust the ELM input parameters adaptively, which improves the generalization performance of ELM. An improved evolutionary extreme learning machine based on particle swarm optimization was proposed to find the optimal input weights and hidden biases [16]. The error minimized extreme learning machine (EM-ELM) [17] randomly adds neurons to the hidden layer one by one or group by group and updates the output weights recursively.
The pruned extreme learning machine (P-ELM) [18] was presented to determine the number of hidden neurons using statistical methods. In [19], Miche et al. considered the optimally pruned extreme learning machine (OP-ELM), in which the hidden neurons are ranked using a multiresponse sparse regression algorithm, and the best number of neurons is then selected by a leave-one-out validation method. In [20], a constructive hidden neuron selection ELM (CS-ELM) was proposed, where the hidden neurons are selected according to some criteria. The work in [21] used ELM with adaptive growth of hidden neurons (AG-ELM) to automate the design of networks. In [22], by combining Bayesian models and ELM, the Bayesian ELM (BELM) was proposed to optimize the weights of the output layer using probability distributions. In [23], Miche et al. proposed a double-regularized ELM using least-angle regression (LARS) and Tikhonov regularization (TROP-ELM). The bidirectional extreme learning machine (B-ELM) was presented in [24], in which some hidden neurons are not randomly selected. In [25], Cao et al. proposed an enhanced bidirectional extreme learning machine (EB-ELM), in which several hidden neurons are randomly generated and only the neurons with the largest residual error are added to the existing network. An online sequential learning mode based on ELM (OS-ELM) was presented in [26], and a fuzziness-based OS-ELM was presented in [27]. In [28], a dynamic forgetting factor is utilized to adjust the OS-ELM parameters, and the corresponding DOS-ELM algorithm is proposed. Up to now, many other algorithms have been considered to extend the basic ELM and make it more efficient [29–35].
Motivated by the goal of developing a fast and efficient training algorithm for SLFNs, this paper presents a new hybrid approach for training SLFNs, where the weights between the input layer and the hidden layer are optimized by a self-organizing map algorithm [36], and the output weights are calculated using the Moore–Penrose generalized inverse as in ELM [1]. The efficiency of the proposed method, in terms of classification accuracy and computation time, is shown by simulation results on different classification problems. The main contributions of our work can be summarized as follows:
(1) We propose a hybrid algorithm combining the self-organizing map algorithm with the extreme learning machine algorithm for optimizing SLFN weights. In this algorithm, the self-organizing map is first used to optimize the weights connecting the input and hidden layers. Then, ELM is applied to determine the weights connecting the hidden and output layers. The main objective of the proposed approach is to achieve higher solution accuracy and faster convergence with a compact network size.
(2) Comparing with various methods, we evaluate the performance of our algorithm in terms of classification accuracy and convergence speed over different types of datasets.
The remainder of this paper is organized as follows. In Section 2, we recall the preliminaries of ELM. Section 3 provides a detailed description of the hybrid learning algorithm. In Section 4, simulation results and comparisons with the BP algorithm, basic ELM, and TROP-ELM are given. Finally, the conclusion is drawn in Section 5.
2. Basic ELM Algorithm
An efficient learning algorithm for single hidden layer feedforward neural networks (SLFNs), called the extreme learning machine (ELM), was proposed by Huang et al. [1]. In ELM, the input weights of the hidden nodes are randomly chosen, and the output weights of the SLFN are then computed by using the pseudoinverse of the hidden layer output matrix. An illustration of a single hidden layer feedforward neural network is given in Figure 1. The numbers of neurons in the input, hidden, and output layers are $n$, $L$, and $m$, respectively.
Given $N$ training samples $(\mathbf{x}_j, \mathbf{t}_j)$, where $\mathbf{x}_j \in \mathbb{R}^n$ and $\mathbf{t}_j \in \mathbb{R}^m$, the output of an SLFN can be represented by

$$\mathbf{o}_j = \sum_{i=1}^{L} \boldsymbol{\beta}_i \, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i), \quad j = 1, \ldots, N, \tag{1}$$

where $\mathbf{w}_i = [w_{i1}, w_{i2}, \ldots, w_{in}]^T$ is the weight vector connecting the $i$th hidden node and the input nodes.
In general, the total input weight matrix is $W = [\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_L]$, where $\boldsymbol{\beta}_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^T$ is the weight vector connecting the $i$th hidden node and the output nodes, $b_i$ is the threshold of the $i$th hidden node, $\mathbf{o}_j$ is the output vector of the neural network, and $g(\cdot)$ denotes an activation function, in general the sigmoid function $g(x) = 1/(1 + e^{-x})$.
Equation (1) can be written compactly as

$$H\beta = O,$$

where $H$ is the output matrix of the hidden layer, defined as follows:

$$H = \begin{bmatrix} g(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\mathbf{w}_1 \cdot \mathbf{x}_N + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_N + b_L) \end{bmatrix}_{N \times L}. \tag{4}$$
The criterion function to be minimized is the sum of the squared errors over all the training samples, given by

$$E = \sum_{j=1}^{N} \|\mathbf{o}_j - \mathbf{t}_j\|^2 = \|H\beta - T\|^2.$$
The output weight matrix $\hat{\beta}$ can be determined analytically by minimizing the least square error:

$$\|H\hat{\beta} - T\| = \min_{\beta} \|H\beta - T\|. \tag{7}$$
A solution of the linear system (7), $\hat{\beta}$, can be computed as follows:

$$\hat{\beta} = H^{\dagger} T, \tag{8}$$

where $H^{\dagger}$ is called the Moore–Penrose generalized inverse of the matrix $H$ and $T = [\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_N]^T$ is the desired output matrix.
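As an illustration, this closed-form training procedure can be sketched in a few lines of Python with NumPy. This is a minimal sketch for exposition, not the authors' MATLAB implementation; the function names and the choice of standard normal initialization are assumptions.

```python
import numpy as np

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    """Basic ELM: random hidden layer, pseudoinverse output weights."""
    n = X.shape[1]
    W = rng.standard_normal((n, L))           # random input weights w_i
    b = rng.standard_normal(L)                # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden output matrix, eq. (4)
    beta = np.linalg.pinv(H) @ T              # beta = H^+ T, eq. (8)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained SLFN."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With enough hidden neurons, the least-squares solution drives the training error close to zero, which is exactly the minimization described by equation (7).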
The ELM algorithm can be summarized as follows:
Step 1. Randomly assign the input weights $\mathbf{w}_i$ and biases $b_i$, $i \in [1, L]$.
Step 2. Calculate the hidden layer output matrix $H$ using equation (4).
Step 3. Calculate the output weight matrix $\hat{\beta}$ by equation (8).
3. Proposed Learning Algorithm
In this study, the architecture of the proposed single hidden layer feedforward neural network (SLFN) is shown in Figure 2.
It is composed of an input layer, a one-dimensional Kohonen layer, and an output layer. To take advantage of this network structure, an appropriate hybrid learning algorithm for training an SLFN is presented. This algorithm is the fusion of a self-organizing map [36] and the extreme learning machine [1]. During training with this algorithm, the network operates in a two-stage sequence. The weights of the hidden layer are clustered by the SOM in the first stage. In the second stage, the ELM is initialized with the weights obtained in the previous stage. A sketch map of the proposed method is shown in Figure 3.
The learning algorithm can be described as follows.
3.1. Stage 1: SOM-Based Initialization
The self-organizing map (SOM) is an unsupervised learning method that represents high-dimensional data vectors on a regular low-dimensional map by grouping similar input vectors into a number of clusters. In our work, the basic SOM network consists of two layers: an input layer and a one-dimensional Kohonen layer, in which the neurons are arranged on a one-dimensional map. Each neuron $i$ on the map is represented by an $n$-dimensional weight vector $\mathbf{w}_i = [w_{i1}, \ldots, w_{in}]^T$, where $n$ is the dimension of the input vector $\mathbf{x}$. The steps of the SOM learning algorithm are as follows:
Step 1. Initialize the weights to small random values, and initialize the neighborhood size.
Step 2. Select an input vector $\mathbf{x}$ and determine the index $c$ of the winner neuron, that is,

$$c = \arg\min_{1 \le i \le M} \|\mathbf{x} - \mathbf{w}_i\|,$$

where $M$ is the total number of neurons in the Kohonen layer.
Step 3. Update the weights of the winning neuron and its neighbors using the following Kohonen rule:

$$\mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \alpha(t)\,[\mathbf{x}(t) - \mathbf{w}_i(t)], \quad i \in N_c(d),$$

where the neighborhood $N_c(d)$ contains the indices of all the neurons that lie within a radius $d$ of the winning neuron and $\alpha(t)$ is the learning rate.
Step 4. If all input data have been presented to the network, go to Step 5; otherwise, go to Step 2.
3.2. Stage 2: ELM with Subset of Neurons
In the first stage, the SOM is used to reduce the size of the ELM input weight matrix $W$ from $n \times L$ to $n \times L'$, where $L' < L$ is the number of retained neurons (the winners and their neighbors).
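To make the two stages concrete, the sketch below trains a one-dimensional SOM on the inputs and then reuses the resulting Kohonen weight vectors as the ELM input weights. It is illustrative only: the layer size, epoch count, and neighborhood schedule are assumptions, and for brevity the sketch keeps all Kohonen neurons rather than extracting only the winners and their neighbors as in Step 5.

```python
import numpy as np

def som_train(X, M, epochs=20, alpha0=0.5, d0=2):
    """Stage 1: one-dimensional SOM clustering of the input data."""
    rng = np.random.default_rng(0)
    Wk = rng.uniform(-0.1, 0.1, size=(M, X.shape[1]))      # small random init
    for e in range(epochs):
        alpha = alpha0 * (1 - e / epochs)                   # decaying learning rate
        d = max(0, int(round(d0 * (1 - e / epochs))))       # shrinking radius
        for x in X:
            c = np.argmin(np.linalg.norm(x - Wk, axis=1))   # winner neuron index
            lo, hi = max(0, c - d), min(M, c + d + 1)       # 1-D neighborhood N_c(d)
            Wk[lo:hi] += alpha * (x - Wk[lo:hi])            # Kohonen update rule
    return Wk

def hybrid_train(X, T, M):
    """Stage 2: ELM whose input weights come from the trained SOM."""
    Wk = som_train(X, M)
    rng = np.random.default_rng(0)
    b = rng.standard_normal(M)                  # biases stay random, as in basic ELM
    H = 1.0 / (1.0 + np.exp(-(X @ Wk.T + b)))   # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                # Moore-Penrose output weights
    return Wk, b, beta
```

The design point the sketch captures is that only the output weights are solved in closed form; the hidden-layer weights are data-driven cluster prototypes instead of random draws.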
Step 5. Create a weight matrix $W'$ from the input layer to the Kohonen layer and insert the values of each weight in the matrix as follows:

$$W' = [\mathbf{w}_{c_1}, \mathbf{w}_{c_2}, \ldots, \mathbf{w}_{c_{L'}}],$$

where the $\mathbf{w}_{c_k}$ are the weights of the winner neurons and their neighbors in the Kohonen layer, $k$ represents the order of the corresponding weight vector, and $L'$ is the number of all neurons in the set.
Step 6. Set the final $W'$ as the initial weight matrix of the ELM.
Step 7. Calculate the hidden layer output matrix $H'$ for input $\mathbf{x}$:

$$H' = \begin{bmatrix} g(\mathbf{w}_{c_1} \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{w}_{c_{L'}} \cdot \mathbf{x}_1 + b_{L'}) \\ \vdots & \ddots & \vdots \\ g(\mathbf{w}_{c_1} \cdot \mathbf{x}_N + b_1) & \cdots & g(\mathbf{w}_{c_{L'}} \cdot \mathbf{x}_N + b_{L'}) \end{bmatrix}.$$

Step 8. Calculate the weights between the hidden layer and the output layer:

$$\hat{\beta}' = (H')^{\dagger} T,$$

where $\hat{\boldsymbol{\beta}}'_i$ is the new weight vector connecting the $i$th hidden node and the output layer.
4. Simulation Results
In this section, simulation results are presented and discussed in order to evaluate the performance of the proposed algorithm and to compare it with the conventional BP algorithm, basic ELM, and TROP-ELM on classification problems. Our method has been evaluated on nine datasets; the first eight are from the UCI Machine Learning Repository. The ninth dataset, "Jaffe", is composed of images and provided by the Psychology Department at Kyushu University. The algorithms were tested on a computer with an Intel Core i5 processor (2.4 GHz), 8 GB RAM, and MATLAB R2018a.
4.1. Datasets Description
There are many benchmarks for classification, and we have selected nine classification datasets, summarized in Table 1. The description of the datasets is as follows:
Dataset 1: Ionosphere is a binary classification dataset. The main objective is to determine the type of a given signal (good or bad) by referring to free electrons in the ionosphere. It has 351 instances divided into two classes, with 34 integer and real attributes.
Dataset 2: Iris is the most popular and best-known dataset for classification and pattern recognition, based on the examination of the size of the petals and sepals of the plant. It contains 150 instances in total, equally divided among three classes. Each instance is characterized by four real attributes.
Dataset 3: the Wine dataset is the result of a chemical analysis of wines grown in the same region of Italy but derived from three different cultivars. It consists of 178 instances and 13 continuous attributes.
Dataset 4: the Balance dataset was generated to model psychological experimental results. Four categorical attributes indicate the balance scale of the 625 instances, which are divided into three classes.
Dataset 5: the Zoo dataset is a simple dataset consisting of 101 animals from a zoo. The task is to predict the seven classes of animals based on 16 Boolean attributes.
Dataset 6: this dataset includes 2310 instances divided into 7 classes; the images were hand-segmented to create a classification for every pixel. The image data are described by 19 attributes.
Dataset 7: the objective of the Ecoli dataset is to predict the localization of proteins using measurements on the cell. It has 336 instances, identified by seven attributes and divided into eight classes in an unbalanced way.
Dataset 8: the Multiple Features dataset aims to classify handwritten numerals. It has 2000 instances in total, equally divided among 10 classes, with 649 attributes.
Dataset 9: the Jaffe dataset is composed of 213 grayscale images sized 256 × 256 and posed by 10 Japanese female models. Each model has two to four examples of each expression. The objective is to predict, for each image, one of seven facial expressions: angry, disgust, fear, happy, neutral, sad, and surprised. The seven facial expressions from the Jaffe dataset are shown in Figure 4.

For all datasets, 70% of the data are chosen for the training phase, while the remaining 30% are reserved for testing. Three performance metrics are listed in Table 2, in which the accuracy value is calculated as follows:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$

where TP is the number of elements correctly classified as positive, FP is the number of elements incorrectly classified as positive, FN is the number of elements incorrectly classified as negative, and TN is the number of elements correctly classified as negative.
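For a binary problem, the accuracy formula above reduces to a simple count. The small worked example below uses made-up labels (not data from the paper) to show the four confusion-matrix terms:

```python
import numpy as np

# Hypothetical predicted vs. true labels (1 = positive, 0 = negative).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # correctly classified positive
tn = int(np.sum((y_pred == 0) & (y_true == 0)))  # correctly classified negative
fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # incorrectly classified as positive
fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # incorrectly classified as negative

accuracy = (tp + tn) / (tp + tn + fp + fn)       # 6 of 8 correct -> 0.75
```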

4.2. Results and Discussion
The performance of the standard ELM method depends on the initial input weights and biases, which are randomly initialized. To overcome this problem, the heuristic approach explained above is used to automatically determine the optimal number of hidden neurons based on the clustering method. Different from the basic ELM with $L$ hidden neurons, our method generally needs fewer hidden neurons, $L' < L$. The comparison results given in Table 2 clearly indicate that our approach reduces the number of hidden neurons compared with the standard ELM and TROP-ELM in all cases. In addition, it should also be noted that the proposed approach outperforms the standard ELM, TROP-ELM, and backpropagation algorithms in terms of training time. Box-and-whisker plots of the compared methods are shown in Figure 5. It can be clearly seen from Table 2 and Figure 5 that the accuracy of the proposed algorithm is indeed higher than that of the backpropagation, ELM, and TROP-ELM algorithms. All these results indicate that the hybrid algorithm can optimize the network structure to a suitable size with fewer hidden nodes and yet classify the datasets with better accuracy.
5. Conclusion
This paper proposed a novel hybrid algorithm for single hidden layer feedforward neural networks, consisting of a self-organizing map algorithm coupled with the extreme learning machine. The learning process of this method includes two steps. The first step is to train the weights connecting the input and hidden layers by a self-organizing map algorithm, and the second step is to use the Moore–Penrose inverse method to calculate the weights connecting the hidden and output layers. In order to demonstrate the performance of the hybrid approach, it was used to solve several popular classification problems. A comparison with other basic methods such as BP, ELM, and TROP-ELM confirms the superiority of this method in terms of generalization performance and learning speed. The main disadvantage of the proposed method is that it uses a fixed structure of the self-organizing map, where the number of neurons and the size of the neighborhood function must be determined before clustering. This often leads to significant limitations in many applications. In future work, we will consider extending the study of the proposed method to the image classification domain. Another direction of future research includes the study of the proposed approach with different types of self-organizing maps and a wide range of activation functions.
Data Availability
The data used to support the findings of this study have been deposited in the UCI Machine Learning Repository and with the Psychology Department at Kyushu University.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
[1] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[2] J. Tang, C. Deng, and G.-B. Huang, "Extreme learning machine for multilayer perceptron," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 4, pp. 809–821, 2015.
[3] A. A. Mohammed, R. Minhas, Q. M. J. Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10–11, pp. 2588–2597, 2011.
[4] Z. Huang, Y. Yu, J. Gu, and H. Liu, "An efficient method for traffic sign recognition based on extreme learning machine," IEEE Transactions on Cybernetics, vol. 47, no. 4, pp. 920–933, 2016.
[5] H.-J. Rong, S. Suresh, and G.-S. Zhao, "Stable indirect adaptive neural controller for a class of nonlinear system," Neurocomputing, vol. 74, no. 16, pp. 2582–2590, 2011.
[6] N. A. Shrivastava, B. K. Panigrahi, and M.-H. Lim, "Electricity price classification using extreme learning machines," Neural Computing and Applications, vol. 27, no. 1, pp. 9–18, 2016.
[7] W.-J. Niu, Z.-K. Feng, Y.-B. Chen, H.-R. Zhang, and C.-T. Cheng, "Annual streamflow time series prediction using extreme learning machine based on gravitational search algorithm and variational mode decomposition," Journal of Hydrologic Engineering, vol. 25, no. 5, Article ID 04020008, 2020.
[8] Z.-K. Feng, W.-J. Niu, R. Zhang, S. Wang, and C.-T. Cheng, "Operation rule derivation of hydropower reservoir by k-means clustering method and extreme learning machine based on particle swarm optimization," Journal of Hydrology, vol. 576, pp. 229–238, 2019.
[9] W. Cao, J. Gao, Z. Ming, and S. Cai, "Some tricks in parameter selection for extreme learning machine," IOP Conference Series: Materials Science and Engineering, vol. 261, no. 1, Article ID 012002, 2017.
[10] F. F. Navarro, C. H. Martinez, J. Sanchez-Monedero, and P. A. Gutierrez, "MELM-GRBF: a modified version of the extreme learning machine for generalized radial basis function neural networks," Neurocomputing, vol. 74, no. 16, pp. 2502–2510, 2011.
[11] G.-G. Wang, M. Lu, Y.-Q. Dong, and X.-J. Zhao, "Self-adaptive extreme learning machine," Neural Computing and Applications, vol. 27, no. 2, pp. 291–303, 2016.
[12] G.-B. Huang, L. Chen, and C.-K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[13] G.-B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[14] G.-B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[15] J. Cao, Z. Lin, and G.-B. Huang, "Self-adaptive evolutionary extreme learning machine," Neural Processing Letters, vol. 36, no. 3, pp. 285–305, 2012.
[16] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87–93, 2013.
[17] G. Feng, G.-B. Huang, Q. Lin, and R. Gay, "Error minimized extreme learning machine with growth of hidden nodes and incremental learning," IEEE Transactions on Neural Networks, vol. 20, no. 8, pp. 1352–1357, 2009.
[18] H.-J. Rong, Y.-S. Ong, A.-H. Tan, and Z. Zhu, "A fast pruned-extreme learning machine for classification problem," Neurocomputing, vol. 72, no. 1–3, pp. 359–366, 2008.
[19] Y. Miche, A. Sorjamaa, P. Bas, C. Jutten, and A. Lendasse, "OP-ELM: optimally pruned extreme learning machine," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 158–162, 2009.
[20] Y. Lan, Y. C. Soh, and G.-B. Huang, "Constructive hidden nodes selection of extreme learning machine for regression," Neurocomputing, vol. 73, no. 16–18, pp. 3191–3199, 2010.
[21] R. Zhang, Y. Lan, G.-B. Huang, and Z.-B. Xu, "Universal approximation of extreme learning machine with adaptive growth of hidden nodes," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 365–371, 2012.
[22] E. Soria-Olivas, J. Gomez-Sanchis, J. D. Martin et al., "BELM: Bayesian extreme learning machine," IEEE Transactions on Neural Networks, vol. 22, no. 3, pp. 505–509, 2011.
[23] Y. Miche, M. van Heeswijk, P. Bas, O. Simula, and A. Lendasse, "TROP-ELM: a double-regularized ELM using LARS and Tikhonov regularization," Neurocomputing, vol. 74, no. 16, pp. 2413–2421, 2011.
[24] Y. Yang, Y. Wang, and X. Yuan, "Bidirectional extreme learning machine for regression problem and its learning effectiveness," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 9, pp. 1498–1505, 2012.
[25] W. Cao, Z. Ming, X. Wang, and S. Cai, "Improved bidirectional extreme learning machine based on enhanced random search," Memetic Computing, vol. 11, no. 1, pp. 19–26, 2019.
[26] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[27] W. Cao, J. Gao, Z. Ming, S. Cai, and Z. Shan, "Fuzziness-based online sequential extreme learning machine for classification problems," Soft Computing, vol. 22, no. 11, pp. 3487–3494, 2018.
[28] W. Cao, Z. Ming, Z. Xu, J. Zhang, and Q. Wang, "Online sequential extreme learning machine with dynamic forgetting factor," IEEE Access, vol. 7, pp. 179746–179757, 2019.
[29] Q. He, X. Jin, C. Du, F. Zhuang, and Z. Shi, "Clustering in extreme learning machine feature space," Neurocomputing, vol. 128, pp. 88–95, 2014.
[30] W.-Y. Deng, Z. Bai, G.-B. Huang, and Q.-H. Zheng, "A fast SVD-hidden-nodes based extreme learning machine for large-scale data analytics," Neural Networks, vol. 77, pp. 14–28, 2016.
[31] M. L. D. Dias, L. S. de Sousa, A. R. Rocha Neto, and A. L. Freire, "Fixed-size extreme learning machines through simulated annealing," Neural Processing Letters, vol. 48, no. 1, pp. 135–151, 2018.
[32] J. Zhai, Q. Shao, and X. Wang, "Architecture selection of ELM networks based on sensitivity of hidden nodes," Neural Processing Letters, vol. 44, no. 2, pp. 471–489, 2016.
[33] L. Zhang and D. Zhang, "Evolutionary cost-sensitive extreme learning machine," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 12, pp. 3045–3060, 2016.
[34] Y. Yang and Q. M. J. Wu, "Extreme learning machine with subnetwork hidden nodes for regression and classification," IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 2885–2898, 2015.
[35] G. Feng, Y. Lan, X. Zhang, and Z. Qian, "Dynamic adjustment of hidden node parameters for extreme learning machine," IEEE Transactions on Cybernetics, vol. 45, no. 2, pp. 279–288, 2014.
[36] T. Kohonen, "The self-organizing map," Proceedings of the IEEE, vol. 78, no. 9, pp. 1464–1480, 1990.
Copyright
Copyright © 2020 Imen Jammoussi and Mounir Ben Nasr. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.