Research Article  Open Access
Jie Wang, Yi-Fan Song, Tian-Lei Ma, "Mexican Hat Wavelet Kernel ELM for Multiclass Classification", Computational Intelligence and Neuroscience, vol. 2017, Article ID 7479140, 8 pages, 2017. https://doi.org/10.1155/2017/7479140
Mexican Hat Wavelet Kernel ELM for Multiclass Classification
Abstract
Kernel extreme learning machine (KELM) is a feedforward neural network model that is widely used in classification problems. To some extent, it solves the existing problems of invalid nodes and large computational complexity in ELM. However, the traditional KELM classifier usually has low test accuracy on multiclass classification problems. To solve this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier improves the training accuracy and reduces the training time on multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers.
1. Introduction
Extreme learning machine (ELM), which was proposed by Huang et al. [1] in 2004, is a model of single-hidden-layer feedforward neural network. In this model, the input weights and hidden layer biases are initialized randomly, and the output weights are obtained by using the Moore-Penrose generalized inverse of the hidden layer output matrix. Compared with conventional BP neural networks, ELM has a faster learning speed, higher testing accuracy, and lower computational complexity. Therefore, ELM is widely used in sales forecasting [2], image quality assessment [3], power loss analysis [4], and so on. In 2006, Huang et al. [5] proposed the incremental extreme learning machine (I-ELM), which continuously increases the number of hidden layer nodes to improve the training accuracy. Subsequently, Li [6] combined I-ELM with the convex optimization learning method and proposed ECI-ELM in 2014, which reduced the training time of I-ELM. This improvement overcame the weakness of randomly selected weights in I-ELM and eventually improved the training accuracy. At the same time, Wang and Zhang [7] introduced the Gram-Schmidt orthogonalization method into I-ELM and saved its training time to a large degree. In general, however, I-ELM and its variants only improve the training accuracy; their numbers of hidden layer nodes are very likely to exceed the number of samples, so I-ELM greatly increases the training time. From another perspective, in order to achieve a higher training accuracy, Rong et al. [8] used statistical methods to measure the relevance of the hidden nodes of ELM and proposed P-ELM in 2008. Then, in 2010, Miche et al. [9] proposed OP-ELM, an improvement of P-ELM. In addition, Akusok et al. [10] proposed a high-performance ELM model in 2015, which provides solid ground for tackling numerous Big Data challenges. However, none of these methods changes the random selection of the input weights.
In addition, none of them replaces the linear weighted mapping method of the original ELM.
Therefore, both ELM and its variants have some inevitable problems. Because the input weights are selected randomly, some hidden nodes may be given an input weight that is very close to 0; such nodes are commonly called dead nodes. These nodes have a minimal effect on the network, which eventually degrades the output accuracy. As the number of samples increases, the number of hidden nodes also becomes large. Thus, some high-dimensional dot product operations appear in the training process, which increases the computational complexity and the training time. This problem is commonly called dimension explosion. For nonlinear samples, the linear weighted mapping method often causes inevitable error, which reduces the training accuracy.
In order to solve the above problems, Huang et al. [11] proposed the kernel extreme learning machine (KELM) in 2012, which utilizes a kernel function in place of the linear weighted mapping method. The kernel function they initially selected is a Gauss function. Although [11] alleviates the problems of dead nodes and dimension explosion to some extent, the performance of the traditional kernel function on multiclass classification problems is still not very good. From [12, 13], we know that wavelet functions, which have a strong fitting capability, can be used as kernels in SVM and ELM. Therefore, in this paper, we propose a Mexican Hat wavelet kernel ELM (MHW-KELM) classifier, which effectively solves the problems of the conventional classifier. Compared with the traditional KELM, the MHW-KELM classifier achieves better results on multiclass classification problems because the new kernel function improves the training accuracy.
The basic principle of ELM and some related theorems are presented in Section 2 of this paper. In Section 3, the Mexican Hat wavelet kernel ELM is proposed, and its validity is also proved. Performance evaluation is presented in Section 4, and the conclusion is given in Section 5.
2. Preliminary Work
2.1. ELM Model
Let us suppose that there are \(N\) arbitrary distinct samples \((x_i, t_i)\), \(x_i \in \mathbb{R}^n\), \(t_i \in \mathbb{R}^m\). If the number of hidden nodes is \(L\) and the activation function is \(g(x)\), then we can randomly select the initial values of the input weights \(w_j\) and the hidden biases \(b_j\). The hidden layer output function of ELM can then be obtained. It is shown as

\[ h(x) = \left[g(w_1 \cdot x + b_1), \ldots, g(w_L \cdot x + b_L)\right], \tag{1} \]

where \(w_j \in \mathbb{R}^n\), \(b_j \in \mathbb{R}\), \(j = 1, \ldots, L\).
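To make the mapping concrete, the hidden layer output matrix \(H\) (whose ith row is \(h(x_i)\)) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function name `hidden_output`, the sigmoid choice for \(g\), and the toy dimensions are our own assumptions, not the paper's code.

```python
import numpy as np

def hidden_output(X, W, b):
    """ELM hidden layer output matrix H with H[i, j] = g(w_j . x_i + b_j),
    using the sigmoid as the activation function g."""
    Z = X @ W.T + b                   # (N, L) pre-activations
    return 1.0 / (1.0 + np.exp(-Z))   # elementwise sigmoid

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # N = 5 samples with n = 3 features
W = rng.standard_normal((4, 3))   # L = 4 random input weight vectors
b = rng.standard_normal(4)        # L = 4 random hidden biases
H = hidden_output(X, W, b)        # shape (5, 4)
```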
If the output weights are \(\beta\), according to the proof given by Huang et al. [1], the smaller the norm of \(\beta\) is, the better the generalization performance of ELM is. Therefore, the output weights can be obtained by finding the least squares solution of the problem

\[ \min_{\beta,\,\xi}\; \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2 \quad \text{s.t.} \quad h(x_i)\beta = t_i^{T} - \xi_i^{T}, \; i = 1, \ldots, N, \tag{2} \]

where \(h(x_i)\) is the ith output vector of the hidden layer, \(t_i\) is the ith label vector, and \(\xi_i\) is the error between the ith network output vector and the label vector.
According to KKT theory, the above problem can be transformed into a Lagrange function

\[ L(\beta, \xi, \alpha) = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2 - \sum_{i=1}^{N}\alpha_i\left(h(x_i)\beta - t_i^{T} + \xi_i^{T}\right), \tag{3} \]

where each of the Lagrange multipliers \(\alpha_i\) corresponds to a sample \(x_i\). By calculating the partial derivatives of (3), we can get the following set of equations:

\[ \frac{\partial L}{\partial \beta} = 0 \;\Rightarrow\; \beta = H^{T}\alpha, \tag{4a} \]
\[ \frac{\partial L}{\partial \xi_i} = 0 \;\Rightarrow\; \alpha_i = C\xi_i, \tag{4b} \]
\[ \frac{\partial L}{\partial \alpha_i} = 0 \;\Rightarrow\; h(x_i)\beta - t_i^{T} + \xi_i^{T} = 0, \tag{4c} \]

where \(\alpha = [\alpha_1, \ldots, \alpha_N]^{T}\). The least squares solution of \(\beta\) can be obtained by combining the three equations (4a), (4b), and (4c). The solution is

\[ \beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1} T, \tag{5} \]

and the output function of ELM is

\[ f(x) = h(x)H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1} T. \tag{6} \]
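The closed-form output weights \(\beta = H^{T}(I/C + HH^{T})^{-1}T\) can be implemented directly. The following is a minimal NumPy sketch under our own naming (`elm_fit`), with a toy regression check: with a very large penalty factor \(C\), the fit essentially interpolates the targets.

```python
import numpy as np

def elm_fit(H, T, C=1.0):
    """Regularized least-squares ELM output weights:
    beta = H^T (I/C + H H^T)^{-1} T."""
    N = H.shape[0]
    A = np.eye(N) / C + H @ H.T
    return H.T @ np.linalg.solve(A, T)   # solve() instead of an explicit inverse

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6))          # toy hidden-layer output matrix
T = rng.standard_normal((6, 2))          # toy targets
beta = elm_fit(H, T, C=1e10)             # huge C -> near interpolation
```

Using `np.linalg.solve` instead of forming the inverse explicitly is the standard, numerically safer choice.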
2.2. Translation-Invariant Kernel Theorem
The kernel function method is often used in SVM as a way of replacing the dot product. According to the Mercer theorem (see [14]), by introducing a kernel function \(K(x, x')\), we can replace the calculation of the dot product in ELM. In order to reduce the computational complexity of the high-dimensional dot product, it is necessary to ensure that \(K\) is only a mapping of the relative position of the two input samples (see (7)):

\[ K(x, x') = K(x - x'). \tag{7} \]
The kernel functions which satisfy (7) are called translation-invariant kernel functions. In fact, it is difficult to prove directly that a translation-invariant kernel function satisfies the Mercer theorem. Fortunately, for translation-invariant kernel functions, the following theorem provides a necessary and sufficient condition for being an admissible support vector kernel.
Theorem 1 (translation-invariant kernel theorem; see [15, 16]). A translation-invariant kernel \(K(x, x') = K(x - x')\) is an admissible support vector kernel if and only if the Fourier transform

\[ F[K](\omega) = (2\pi)^{-n/2}\int_{\mathbb{R}^n} e^{-i\langle \omega, x\rangle} K(x)\, dx \tag{8} \]

is nonnegative.
The kernel function selection method of ELM is the same as that of SVM. Therefore, the above theorem can also be used to determine whether a function is an admissible ELM kernel. Two commonly used kernel functions are the Gauss kernel function and the polynomial kernel function; of the two, only the Gauss kernel function is translation-invariant. The expressions of the two kernel functions are

\[ K(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right), \qquad K(x, x') = (x \cdot x' + 1)^{d}. \tag{9} \]

In (9), \(\sigma\) is the Gauss core width and \(d\) is an adjustable polynomial power exponent.
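In NumPy, the two kernels of (9) read as follows. This is a hedged sketch; the function names are ours, and the comments note which of the two is translation-invariant.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Gauss kernel exp(-||x - y||^2 / (2 sigma^2)); translation-invariant,
    since it depends only on the difference x - y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-np.dot(d, d) / (2.0 * sigma ** 2)))

def poly_kernel(x, y, d=2):
    """Polynomial kernel (x . y + 1)^d; it depends on the dot product,
    so it is NOT translation-invariant."""
    return float((np.dot(x, y) + 1.0) ** d)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
```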
3. Mexican Hat Wavelet Kernel ELM
3.1. Kernel ELM
In the original ELM model, the linear weighted hidden output function is often not well suited to the mapping of nonlinear samples. In order to solve this problem, we can replace \(h(x)\) and \(H\) in (6) with a kernel function \(K(x, x')\). The result is

\[ f(x) = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^{T}\left(\frac{I}{C} + \Omega\right)^{-1} T, \tag{10} \]

where \(\Omega\) is the kernel function matrix of \(HH^{T}\) (see (11)):

\[ \Omega_{ij} = h(x_i)\cdot h(x_j) = K(x_i, x_j). \tag{11} \]
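A minimal KELM training and prediction sketch: the kernel matrix Ω replaces \(HH^{T}\), and prediction evaluates the kernel between the query point and each training sample. Function names and the toy data are our own; the Gauss kernel stands in for a generic \(K\).

```python
import numpy as np

def gauss(x, y, sigma=1.0):
    """Gauss kernel, used here as a stand-in for a generic K."""
    d = x - y
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def kelm_fit(X, T, kernel, C=1.0):
    """Solve (I/C + Omega) alpha = T, where Omega[i, j] = K(x_i, x_j)."""
    N = X.shape[0]
    Omega = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    return np.linalg.solve(np.eye(N) / C + Omega, T)

def kelm_predict(Xtrain, alpha, kernel, x):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] alpha."""
    k = np.array([kernel(x, xi) for xi in Xtrain])
    return float(k @ alpha)

# toy binary problem with +-1 targets: two well-separated clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
T = np.array([-1.0, -1.0, 1.0, 1.0])
alpha = kelm_fit(X, T, gauss, C=100.0)
```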
3.2. Mexican Hat Wavelet Kernel Function
In this part, the Mexican Hat wavelet kernel function is proposed. It is also proved that the Mexican Hat wavelet function is an admissible ELM kernel.
Theorem 2 (see [12]). Let \(\psi(x)\) be a mother wavelet, and let \(a\) and \(c\) denote the dilation and translation factors, respectively, \(a, c_i, c_i' \in \mathbb{R}\), \(a > 0\). If \(x, x' \in \mathbb{R}^n\), then the dot product wavelet kernel is

\[ K(x, x') = \prod_{i=1}^{n} \psi\left(\frac{x_i - c_i}{a}\right)\psi\left(\frac{x_i' - c_i'}{a}\right). \tag{12} \]

If it satisfies the translation-invariant kernel theorem, the following translation-invariant kernel function can be obtained:

\[ K(x, x') = \prod_{i=1}^{n} \psi\left(\frac{x_i - x_i'}{a}\right). \tag{13} \]
The proof of Theorem 2 is given in [12]; we will not repeat it in this paper. We use the Mexican Hat wavelet as the mother wavelet (see (14)):

\[ \psi(x) = (1 - x^2)\exp\left(-\frac{x^2}{2}\right). \tag{14} \]

Then, the Mexican Hat wavelet kernel function is derived (see (15)):

\[ K(x, x') = \prod_{i=1}^{n}\left(1 - \frac{(x_i - x_i')^2}{a^2}\right)\exp\left(-\frac{(x_i - x_i')^2}{2a^2}\right). \tag{15} \]

In this paper, it is also proved that the Mexican Hat wavelet satisfies the translation-invariant kernel theorem; in other words, it is also an admissible ELM kernel.
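The product-form kernel of (15) is straightforward to evaluate, since it depends only on the componentwise differences. Below is a short sketch with our own naming; the comment restates (15) with \(d_i = (x_i - x_i')/a\).

```python
import numpy as np

def mexican_hat_kernel(x, y, a=1.0):
    """Mexican Hat wavelet kernel:
    K(x, y) = prod_i (1 - d_i^2) * exp(-d_i^2 / 2), with d_i = (x_i - y_i) / a."""
    d = (np.asarray(x, float) - np.asarray(y, float)) / a
    return float(np.prod((1.0 - d ** 2) * np.exp(-d ** 2 / 2.0)))

x = np.array([0.5, -0.2])
y = np.array([0.1, 0.3])
```

Note that \(K(x, x) = 1\) and the kernel is symmetric, since each factor depends only on \(d_i^2\).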
Lemma 3. As a kind of translation-invariant kernel function, the Mexican Hat wavelet kernel is an admissible ELM kernel.
Proof. Firstly, it should be proved that the Fourier transform of the Mexican Hat wavelet kernel is nonnegative. Substituting (15) into (8) gives

\[ F[K](\omega) = (2\pi)^{-n/2}\int_{\mathbb{R}^n} e^{-i\langle \omega, x\rangle}\prod_{i=1}^{n}\left(1 - \frac{x_i^2}{a^2}\right)\exp\left(-\frac{x_i^2}{2a^2}\right) dx. \tag{16} \]

Because the integrand factorizes over the coordinates, (16) can be decomposed into a product of one-dimensional integrals:

\[ F[K](\omega) = (2\pi)^{-n/2}\prod_{i=1}^{n} I(\omega_i). \tag{17} \]

The integral term in (17) can be written as

\[ I(\omega_i) = \int_{-\infty}^{\infty} e^{-i\omega_i x}\left(1 - \frac{x^2}{a^2}\right)\exp\left(-\frac{x^2}{2a^2}\right) dx = G(\omega_i) - \frac{1}{a^2}J(\omega_i), \tag{18} \]

where

\[ G(\omega) = \int_{-\infty}^{\infty} e^{-i\omega x} e^{-x^2/(2a^2)}\, dx = \sqrt{2\pi}\, a\, e^{-a^2\omega^2/2}, \qquad J(\omega) = \int_{-\infty}^{\infty} e^{-i\omega x} x^2 e^{-x^2/(2a^2)}\, dx. \tag{19} \]

Since \(x^2 e^{-i\omega x} = -\partial^2 e^{-i\omega x}/\partial\omega^2\), we have

\[ J(\omega) = -\frac{d^2 G(\omega)}{d\omega^2}. \tag{20} \]

By using the partial integration method (equivalently, carrying out the differentiation in (20)), it is easy to get (21). The answer is

\[ J(\omega) = \sqrt{2\pi}\, a\left(a^2 - a^4\omega^2\right)e^{-a^2\omega^2/2}. \tag{21} \]

Substituting (19) and (21) into (18), we have

\[ I(\omega_i) = \sqrt{2\pi}\, a^3\omega_i^2\, e^{-a^2\omega_i^2/2}. \tag{22} \]

Then, substituting (22) into (17), we can obtain the Fourier transform

\[ F[K](\omega) = \prod_{i=1}^{n} a^3\omega_i^2\, e^{-a^2\omega_i^2/2} \ge 0. \tag{23} \]

From (23), it is known that \(F[K](\omega) > 0\) whenever all \(\omega_i \neq 0\), and \(F[K](\omega) \ge 0\) everywhere. Therefore, according to the translation-invariant kernel theorem, the Mexican Hat wavelet is an admissible ELM kernel.
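The one-dimensional factor of the transform can also be checked numerically. The sketch below (our own code, not from the paper) compares a quadrature estimate of \(I(\omega)\) against the closed form \(\sqrt{2\pi}\,a^3\omega^2 e^{-a^2\omega^2/2}\), which is nonnegative and vanishes only at \(\omega = 0\).

```python
import numpy as np

def ft_mexican_hat(omega, a=1.0, half_width=20.0, n=400001):
    """Numerical Fourier transform of (1 - x^2/a^2) exp(-x^2/(2 a^2)).
    The integrand is even, so only the cosine part survives; a simple
    Riemann sum is accurate because the tails decay like a Gaussian."""
    x = np.linspace(-half_width, half_width, n)
    f = (1.0 - (x / a) ** 2) * np.exp(-(x / a) ** 2 / 2.0)
    return float(np.sum(np.cos(omega * x) * f) * (x[1] - x[0]))

def ft_closed_form(omega, a=1.0):
    """Closed form: sqrt(2 pi) a^3 omega^2 exp(-a^2 omega^2 / 2) >= 0."""
    return float(np.sqrt(2.0 * np.pi) * a ** 3 * omega ** 2
                 * np.exp(-(a * omega) ** 2 / 2.0))
```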
3.3. MHW-KELM Classifier
We have already proved that the Mexican Hat wavelet is an admissible ELM kernel. So, we can substitute (15) into (10) and construct the MHW-KELM classifier. For a binary classification problem, the output function of the new classifier is

\[ f(x) = \operatorname{sign}\left(\begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^{T}\left(\frac{I}{C} + \Omega\right)^{-1} T\right), \tag{24} \]

where \(K\) is the Mexican Hat wavelet kernel of (15).
Besides, this classifier can also be used for multiclass classification problems. In that case, the output function is

\[ \operatorname{label}(x) = \arg\max_{j \in \{1, \ldots, m\}} f_j(x), \qquad f(x) = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^{T}\left(\frac{I}{C} + \Omega\right)^{-1} T. \tag{25} \]
Equation (25) means that the classification result is given by the index of the maximum element of the output vector. In addition, we can combine the nonnegative constant parameter \(a\) of the Mexican Hat wavelet and the penalty factor \(C\) into an individual and use evolutionary algorithms such as PSO [17, 18] to find the best values of these parameters. Next, we will analyze the performance of the proposed classifier.
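Putting the pieces together, a toy multiclass sketch in the spirit of (25) looks as follows. All names and data are illustrative (ours, not the paper's); targets use ±1 one-hot coding, and \(a\) and \(C\) are fixed by hand rather than tuned by PSO.

```python
import numpy as np

def mh_kernel(x, y, a=1.0):
    """Mexican Hat wavelet kernel, prod_i (1 - d_i^2) exp(-d_i^2 / 2)."""
    d = (x - y) / a
    return float(np.prod((1.0 - d ** 2) * np.exp(-d ** 2 / 2.0)))

def mhwkelm_train(X, labels, n_classes, a=1.0, C=1000.0):
    """Solve (I/C + Omega) alpha = T with +-1 one-hot targets T."""
    N = X.shape[0]
    T = -np.ones((N, n_classes))
    T[np.arange(N), labels] = 1.0
    Omega = np.array([[mh_kernel(xi, xj, a) for xj in X] for xi in X])
    return np.linalg.solve(np.eye(N) / C + Omega, T)

def mhwkelm_predict(Xtrain, alpha, x, a=1.0):
    """Predicted class = index of the maximum output component."""
    k = np.array([mh_kernel(x, xi, a) for xi in Xtrain])
    return int(np.argmax(k @ alpha))

# three tiny clusters, one per class
X = np.array([[0.0, 0.0], [0.2, 0.1],
              [2.0, 2.0], [2.1, 1.9],
              [-2.0, 2.0], [-2.1, 2.1]])
y = np.array([0, 0, 1, 1, 2, 2])
alpha = mhwkelm_train(X, y, n_classes=3, a=3.0, C=1000.0)
```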
4. Performance Evaluation
This section analyzes the performance of MHW-KELM and compares it with the traditional Gauss-KELM, Poly-KELM, the original ELM, and a BP classifier. All these algorithms run in MATLAB R2014a. The operating environment is a Core i7, 2.6 GHz CPU with 8 GB RAM. We choose the scaled conjugate gradient (SCG) algorithm to optimize the BP neural network, which converges faster than a standard BP neural network. In order to get excellent performance, the number of hidden nodes of the original ELM and BP is set to 100% and 30% of the number of training samples, respectively. The data sets used in the experiment are from the UCI database [19]: Abalone, Auto MPG, Bank, Car Evaluation, Wine, Wine Quality, Iris, Glass, Image, Yeast, Zoo, and Letter. The basic features of these 12 data sets are shown in Table 1.

Then, we use the 12 data sets given in Table 1 to test the running time and training accuracy of the 5 algorithms. Each data set is tested by each algorithm 100 times; each time, the training samples are randomly selected from the total sample. In order to conduct a rigorous comparison, a paired Student's t-test is performed, which gives the probability that two result sets come from distributions with an equal mean. Tables 2–13 record the results of these experiments, and each table corresponds to one data set. All tables have four elements, which represent the mean accuracy, the standard deviation, the p value obtained by the paired Student's t-test, and the running time, respectively. For each data set, a value in bold type is the best accuracy or the best running time (p value = 1.00), while a value in italic type means there is no statistical difference between it and the best accuracy (p value > 0.05) or it is very close to the best time.
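For reference, the statistic of the paired test can be computed in a few lines. This helper and the accuracy values are our own illustration, not the authors' experiment script; the magnitude of \(t\) is compared against the critical value of the t distribution with \(n - 1\) degrees of freedom to obtain the p value.

```python
import numpy as np

def paired_t_statistic(acc_a, acc_b):
    """Paired Student's t statistic for two accuracy sequences measured on
    the same random splits: t = mean(d) / (std(d) / sqrt(n)), d = a - b."""
    d = np.asarray(acc_a, float) - np.asarray(acc_b, float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

# hypothetical accuracies of two classifiers over four shared splits
acc_a = np.array([0.91, 0.92, 0.93, 0.94])
acc_b = np.array([0.90, 0.90, 0.90, 0.90])
t = paired_t_statistic(acc_a, acc_b)
```

In practice, `scipy.stats.ttest_rel` computes both the statistic and the p value directly.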
By plotting the running times in all tables as a line graph, we obtain Figure 1. In Figure 1, the horizontal coordinate corresponds to the number of training samples: 50, 100, 200, 1000, and 2000, respectively. Without loss of generality, we select five data sets, Zoo, Image, Auto MPG, Car Evaluation, and Abalone, as representatives of the different sample sizes. The vertical coordinate shows the mean running time on each data set. Moreover, because the running times of MHW-KELM and Gauss-KELM are very close, we only draw the running time of MHW-KELM. The four lines are drawn with different styles.
From all tables and Figure 1, it is clear that when the number of training samples is larger than 1000, MHW-KELM shows an obvious advantage in running time over the other algorithms. For the data sets whose training size is more than 1000, such as Abalone, Bank, Car Evaluation, Wine Quality, Yeast, and Letter, the running time of MHW-KELM and Gauss-KELM is less than that of the other algorithms, which indicates that the translation-invariant kernel is superior to the other kernels in this respect. Therefore, it can be concluded that choosing a translation-invariant kernel function can effectively shorten the running time when the training size is large enough.
From Tables 2–13, it can be clearly seen that the classification performance of MHW-KELM is better than that of the other algorithms when the number of categories is more than 4. The results of the paired Student's t-test show that the performance of MHW-KELM is significantly different (p value < 0.05) from that of the original ELM and SCG-BP on all data sets, and it is also significantly different from Gauss-KELM and Poly-KELM on Auto MPG, Car Evaluation, Wine Quality, and Image. These four data sets have one thing in common: their category numbers are all more than 4. Besides, when the category number is less than 4, such as on Abalone, Bank, Wine, Iris, Yeast, and Letter, MHW-KELM still has a performance similar to that of Gauss-KELM or Poly-KELM. Therefore, MHW-KELM is an excellent classifier for multiclass classification problems, better than the traditional kernel ELM. That means the Mexican Hat wavelet function is a better ELM kernel than the Gaussian function.
5. Conclusion
In this paper, we propose a classifier, the Mexican Hat wavelet kernel ELM classifier, which can be applied to multiclass classification problems. The validity of the Mexican Hat wavelet as an admissible ELM kernel is also proved. This classifier solves the inevitable problems of the original ELM by replacing the linear weighted mapping method with the Mexican Hat wavelet. The experimental results show that the training time of the MHW-KELM classifier is much less than that of the original ELM, which solves the dimension explosion problem of the original ELM. Meanwhile, the training accuracy of this classifier is superior to that of the traditional Gauss-KELM and the original ELM on multiclass classification problems.
In future work, in order to reduce the impact of imbalanced training data on the performance, we plan to utilize the boosting weighted ELM proposed by Li et al. [20] to optimize the proposed classifier. In addition, from the experimental results of this paper, it can be seen that a single kernel function cannot meet the requirements of all data sets. So, we intend to combine multiple kernel functions to construct a mixed-kernel ELM that suits different situations.
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
The authors gratefully acknowledge the support of the following foundations: 973 Project of China (2013CB733605), the National Natural Science Foundation of China (21176073 and 61603343), and the Fundamental Research Funds for the Central Universities.
References
[1] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, “Extreme learning machine: a new learning scheme of feedforward neural networks,” in Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 985–990, Budapest, Hungary, 2004.
[2] Z.-L. Sun, T.-M. Choi, K.-F. Au, and Y. Yu, “Sales forecasting using extreme learning machine with applications in fashion retailing,” Decision Support Systems, vol. 46, no. 1, pp. 411–419, 2008.
[3] S. Suresh, R. Venkatesh Babu, and H. J. Kim, “No-reference image quality assessment using modified extreme learning machine classifier,” Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[4] A. H. Nizar, Z. Y. Dong, and Y. Wang, “Power utility nontechnical loss analysis with extreme learning machine method,” IEEE Transactions on Power Systems, vol. 23, no. 3, pp. 946–955, 2008.
[5] G.-B. Huang, L. Chen, and C.-K. Siew, “Universal approximation using incremental constructive feedforward networks with random hidden nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[6] Y. Li, “Orthogonal incremental extreme learning machine for regression and multiclass classification,” Neural Computing & Applications, vol. 27, no. 1, pp. 111–120, 2016.
[7] W. Wang and R. Zhang, “Improved convex incremental extreme learning machine based on enhanced random search,” in Unifying Electrical Engineering and Electronics Engineering, vol. 238 of Lecture Notes in Electrical Engineering, pp. 2033–2040, Springer, New York, NY, USA, 2014.
[8] H.-J. Rong, Y.-S. Ong, A.-H. Tan, and Z. Zhu, “A fast pruned-extreme learning machine for classification problem,” Neurocomputing, vol. 72, no. 1–3, pp. 359–366, 2008.
[9] Y. Miche, A. Sorjamaa, P. Bas, O. Simula, C. Jutten, and A. Lendasse, “OP-ELM: optimally pruned extreme learning machine,” IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 158–162, 2010.
[10] A. Akusok, K.-M. Björk, Y. Miche, and A. Lendasse, “High-performance extreme learning machines: a complete toolbox for big data applications,” IEEE Access, vol. 3, pp. 1011–1025, 2015.
[11] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[12] L. Zhang, W. Zhou, and L. Jiao, “Wavelet support vector machine,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 1, pp. 34–39, 2004.
[13] W. Qin, S. Yuantong, K. Yu, W. Qiang, and S. Lin, “Parsimonious wavelet kernel extreme learning machine,” Journal of Engineering Science and Technology Review, vol. 8, no. 5, pp. 219–226, 2015.
[14] J. Mercer, “Functions of positive and negative type, and their connection with the theory of integral equations,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 209, pp. 415–446, 1909.
[15] A. J. Smola, B. Schölkopf, and K.-R. Müller, “The connection between regularization operators and support vector kernels,” Neural Networks, vol. 11, no. 4, pp. 637–649, 1998.
[16] C. J. C. Burges, “Geometry and invariance in kernel based methods,” in Advances in Kernel Methods, MIT Press, 1999.
[17] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
[18] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science (MHS '95), pp. 39–43, Nagoya, Japan, 1995.
[19] M. Lichman, UCI Machine Learning Repository, University of California, School of Information and Computer Science, Irvine, Calif, USA, 2013, http://archive.ics.uci.edu/ml.
[20] K. Li, X. Kong, Z. Lu, L. Wenyin, and J. Yin, “Boosting weighted ELM for imbalanced learning,” Neurocomputing, vol. 128, pp. 15–21, 2014.
Copyright
Copyright © 2017 Jie Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.