Computational Intelligence and Neuroscience

Volume 2018, Article ID 3635845, 10 pages

https://doi.org/10.1155/2018/3635845

## An Extreme Learning Machine Based on Artificial Immune System

School of Computer Science and Technology, Zhejiang University, Hangzhou, China

Correspondence should be addressed to Shi-jian Li; shijianli@zju.edu.cn

Received 5 March 2018; Accepted 27 May 2018; Published 25 June 2018

Academic Editor: Rodolfo Zunino

Copyright © 2018 Hui-yuan Tian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The extreme learning machine (ELM) algorithm, proposed in recent years, has been widely used in many fields owing to its fast training speed and good generalization performance. Unlike traditional neural networks, ELM greatly improves training speed by randomly generating the parameters of the input layer and the hidden layer. However, this random generation may introduce “bad” parameters that harm the final generalization ability. To overcome this drawback, this paper combines the artificial immune system (AIS) with ELM, yielding AIS-ELM. With the help of AIS’s global search ability and good convergence, the randomly generated parameters of ELM are optimized effectively and efficiently to achieve better generalization performance. To evaluate AIS-ELM, this paper compares it with related algorithms on several benchmark datasets. The experimental results show that the proposed algorithm consistently achieves superior performance.

#### 1. Introduction

In recent years, many computational intelligence techniques, such as neural networks and support vector machines (SVMs) [1], have been widely used in real-world applications. However, these algorithms suffer from drawbacks such as slow learning speed, tedious human intervention, and poor computational scalability.

Recently, to address the drawbacks mentioned above, Huang et al. [2–5] proposed a new method named the extreme learning machine (ELM), which has attracted ever-growing research attention. In contrast to traditional neural networks such as BP [6], ELM is a tuning-free algorithm with fast learning speed, achieved by randomly generating input weights and hidden biases. With the help of the least squares method and the Moore-Penrose generalized inverse, ELM is transformed into a linear learning system. In addition, ELM is theoretically proved to have good generalization performance with minimal human intervention. Therefore, ELM has been widely used in many fields [5]. For example, Chaturvedi et al. [7] extended the ELM paradigm to a novel framework that exploits the features of both Bayesian networks and fuzzy recurrent neural networks to perform subjectivity detection. Gastaldo et al. [8] addressed the specific role played by feature mapping in ELM. Cambria et al. [9] explored how the high generalization performance, low computational complexity, and fast learning speed of extreme learning machines can be exploited to perform analogical reasoning in a vector space model of affective common-sense knowledge. Recently, Ragusa et al. [10] tackled the implementation of single hidden layer feedforward neural networks (SLFNs), based on hard-limit activation functions, on reconfigurable devices.

It is known that an appropriate selection of initial weight sets is vital for training a neural network model [11]: there is a strong correlation between the final solution and the initial weights. However, due to the random determination of some learning parameters, nonoptimal parameters may be introduced into the model [5], which can have a negative impact on the final performance. To address this drawback, many related works have been proposed over the past ten years. A straightforward way is to combine evolutionary methods with ELM [12]. For instance, Zhu et al. [13] utilized the differential evolution algorithm (DE) to optimize ELM’s generated parameters to achieve better performance. In [14], Xue et al. combined the genetic algorithm (GA), ELM, and ensemble learning to obtain a better and more stable result. Rather than using GA or DE, Saraswathi et al. presented a PSO-driven ELM [15], combined with an Integer-Coded Genetic Algorithm (ICGA), to solve gene selection and cancer classification. In [16], Cao et al. proposed an improved learning algorithm named the self-adaptive evolutionary extreme learning machine (SaE-ELM). Similarly, Wu et al. presented a novel algorithm named the dolphin swarm algorithm extreme learning machine (DSA-ELM) [17] to solve optimization problems.

However, the above evolutionary algorithms differ in their search efficiency, and there is still much room for improvement. In particular, one of the biggest challenges in ELM is that nonoptimal parameters may be introduced into the algorithm due to the random generation of parameters. To overcome that challenge, in this paper we propose a new method named the artificial immune system extreme learning machine (AIS-ELM). Because the artificial immune system (AIS) [18–20] has global search ability [21] and good convergence [22], it can alleviate difficulties such as slow convergence and getting stuck in local minima. Therefore, we use AIS to optimize ELM to obtain better initial weight sets, preventing the training process from falling into a local optimum. The original version and preliminary results of this paper’s method were proposed by us at ELM2017 [23]. In this paper we have revised the original formulas, compared AIS-ELM with more algorithms, and added new derivations, regression validation, and more datasets.

The rest of the paper is organized as follows. Sections 2 and 3 briefly describe the traditional ELM and AIS methods. Section 4 presents a detailed description of AIS-ELM. Section 5 carries out the corresponding experiments: the AIS-ELM algorithm is compared with traditional ELM, PSO-ELM, SaE-ELM, and DSA-ELM on five regression problems and eight classification benchmark problems obtained from the UCI Machine Learning Repository [24], and the training times of AIS-ELM, BP, SVM, and traditional ELM are compared on three benchmark classification problems. The last section concludes the paper.

#### 2. Extreme Learning Machine

This section introduces the extreme learning machine [2–5] proposed by Professor Huang. ELM was developed from the single hidden layer feedforward network and was later extended to the generalized single hidden layer feedforward network. Compared to other conventional learning algorithms, ELM’s advantage is that the hidden nodes of the single hidden layer feedforward network need not be tuned.

Compared with traditional learning algorithms, the extreme learning machine not only achieves smaller training error but also reaches the smallest norm of output weights [5]. Because the hidden layer need not be adjusted in the ELM algorithm, the output weight matrix can be solved directly by the least squares method.

For $N$ arbitrary distinct training samples $(\mathbf{x}_i, \mathbf{t}_i)$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{t}_i \in \mathbb{R}^m$, and given activation function $g(x)$, the standard mathematical model of SLFNs with $L$ hidden nodes is

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) = \mathbf{o}_j, \qquad j = 1, \ldots, N,$$

where $\mathbf{w}_i$ is the weight vector connecting the input neurons and the $i$th hidden neuron, $\boldsymbol{\beta}_i$ is the weight vector connecting the $i$th hidden neuron and the output neurons, and $b_i$ is the threshold of the $i$th hidden neuron.

That standard SLFNs with $L$ hidden neurons and activation function $g(x)$ can approximate these $N$ samples with zero error means that

$$\sum_{j=1}^{N} \left\| \mathbf{o}_j - \mathbf{t}_j \right\| = 0,$$

i.e., there exist $\boldsymbol{\beta}_i$, $\mathbf{w}_i$, and $b_i$ such that

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) = \mathbf{t}_j, \qquad j = 1, \ldots, N.$$

The above $N$ equations can be written compactly as

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T},$$

where

$$\mathbf{H} = \begin{bmatrix} g(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\mathbf{w}_1 \cdot \mathbf{x}_N + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_N + b_L) \end{bmatrix}_{N \times L}, \quad
\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_L^T \end{bmatrix}_{L \times m}, \quad
\mathbf{T} = \begin{bmatrix} \mathbf{t}_1^T \\ \vdots \\ \mathbf{t}_N^T \end{bmatrix}_{N \times m}.$$

Here, $\mathbf{H}$ is called the hidden layer output matrix [3]. The $i$th column of $\mathbf{H}$ is the $i$th hidden node’s output vector with respect to the inputs $\mathbf{x}_1, \ldots, \mathbf{x}_N$, and the $j$th row of $\mathbf{H}$ is the output vector of the hidden layer with respect to $\mathbf{x}_j$. Then the output weight matrix $\boldsymbol{\beta}$ (connecting the hidden layer with the output layer) is estimated using the Moore-Penrose generalized inverse $\mathbf{H}^{\dagger}$ of the matrix $\mathbf{H}$:

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}.$$
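The least-squares property of this closed-form solution can be checked numerically. The sketch below (plain NumPy, with illustrative matrix sizes of our own choosing, not from the paper) computes $\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}$ via `numpy.linalg.pinv` and verifies that perturbing $\hat{\boldsymbol{\beta}}$ can only increase the residual $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N = 100 samples, L = 20 hidden nodes, m = 3 outputs.
H = rng.standard_normal((100, 20))   # hidden layer output matrix
T = rng.standard_normal((100, 3))    # target matrix

# Output weights via the Moore-Penrose generalized inverse.
beta = np.linalg.pinv(H) @ T

# pinv yields the least-squares solution: any perturbed beta has a
# larger residual, because H @ beta - T is orthogonal to range(H).
residual = np.linalg.norm(H @ beta - T)
perturbed = np.linalg.norm(
    H @ (beta + 0.01 * rng.standard_normal(beta.shape)) - T
)
assert residual < perturbed
```

Among all solutions with minimal residual, $\mathbf{H}^{\dagger}\mathbf{T}$ is also the one with the smallest norm, which is the property ELM relies on for good generalization [5].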

The ELM algorithm can be summarized as shown in Algorithm 1.