Mathematical Problems in Engineering

Volume 2015 (2015), Article ID 946292, 9 pages

http://dx.doi.org/10.1155/2015/946292

## The Optimisation for Local Coupled Extreme Learning Machine Using Differential Evolution

Information Science and Technology College, Dalian Maritime University, Dalian 116026, China

Received 13 August 2014; Revised 12 November 2014; Accepted 24 November 2014

Academic Editor: Yi Jin

Copyright © 2015 Yanpeng Qu and Ansheng Deng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Many strategies have been exploited to reinforce the effectiveness and efficiency of extreme learning machine (ELM), from both methodology and structure perspectives. By activating all the hidden nodes with different degrees, local coupled extreme learning machine (LC-ELM) is capable of decoupling the link architecture between the input layer and the hidden layer in ELM. Such activation degrees are jointly determined by the addresses and fuzzy membership functions assigned to the hidden nodes. In order to further refine the weight searching space of LC-ELM, this paper implements an optimisation, termed evolutionary local coupled extreme learning machine (ELC-ELM). This method makes use of the differential evolution (DE) algorithm to optimise the hidden node addresses and the radiuses of the fuzzy membership functions, until a qualified fitness value or the maximum iteration step is reached. The efficacy of the presented work is verified through systematic experiments in both regression and classification applications. The experimental results demonstrate that the proposed technique outperforms three ELM alternatives, namely, the classical ELM, LC-ELM, and OSFuzzyELM, on a series of reliable performance measures.

#### 1. Introduction

Due to its significant efficiency and simple implementation, extreme learning machine (ELM) [1, 2] has recently enjoyed much attention as a powerful tool in regression and classification applications (e.g., [3, 4]). A variety of extensions of ELM have therefore been developed in an attempt to improve its performance. In general, there are two approaches: one is to optimise the methodology of ELM (e.g., online sequential ELM [5] and evolutionary ELM [6]); the other is to refine the hidden layer of ELM in order to optimise the learning model (e.g., incremental ELM [7], pruned-ELM [8], and two-stage ELM [9]). Promising performances have been observed under both schemes, at both theoretical and empirical levels.

Local coupled extreme learning machine (LC-ELM) further develops the classical ELM algorithm by assigning an address in the input space to each hidden node. Given a learning sample, the hidden nodes are activated at different levels according to the distances from their addresses to the input sample. In so doing, the fully coupled architecture between the input layer and the hidden layer in ELM is simplified, and the complexity of the weight searching space is reduced correspondingly. In fact, when the input information is modified, only the highly relevant hidden nodes are influenced. This process resembles the learning process of a brain: when a new learning sample arrives, only the related knowledge needs to be revised, to different degrees of memory activation.

In LC-ELM, the addresses and the window radiuses are currently preset empirically or randomly. However, nonoptimal addresses and radiuses may accidentally yield an inappropriate underlying model. As a type of metaheuristic, the differential evolution (DE) approach [10] entails few or no assumptions regarding the problem being optimised and is able to search for candidate solutions in very large spaces. This paper therefore presents an approach termed evolutionary local coupled extreme learning machine (ELC-ELM). The proposed method makes use of DE in an attempt to address the challenges raised by the stochastically predetermined addresses and radiuses. Specifically, in ELC-ELM, DE is utilised to optimise the addresses and radiuses according to the resulting root mean squared error (RMSE), thereby improving the associated activation degrees. This optimisation procedure searches for a superior configuration of ELC-ELM (consisting of the addresses and radiuses), until a qualified fitness value or the maximum iteration step is reached. To evaluate the performance of this approach, comparative studies between ELC-ELM and alternative ELM-based techniques (including the classical ELM, LC-ELM, and OSFuzzyELM [11]) are presented through systematic experimental investigations. The results demonstrate that the proposed work delivers improved performance in both regression and classification applications.
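The DE-driven outer loop described above can be sketched as a minimal DE/rand/1/bin routine. In ELC-ELM the candidate vector would encode the hidden node addresses and window radiuses, with the resulting RMSE as the fitness; here a simple sphere function stands in for that fitness, and all control settings (population size, `F`, `CR`) are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           max_iter=200, tol=1e-8, rng=np.random.default_rng(0)):
    """Minimal DE/rand/1/bin loop. Stops when a qualified fitness (tol)
    or the maximum iteration step is reached, mirroring ELC-ELM's outer loop."""
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))      # initial population
    fit = np.array([fitness(p) for p in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)    # DE/rand/1 mutation
            cross = rng.random(dim) < CR                 # binomial crossover mask
            cross[rng.integers(dim)] = True              # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f = fitness(trial)
            if f < fit[i]:                               # greedy one-to-one selection
                pop[i], fit[i] = trial, f
        if fit.min() < tol:                              # qualified fitness reached
            break
    return pop[fit.argmin()], fit.min()
```

In the full method, `fitness` would train an LC-ELM with the decoded addresses and radiuses and return its validation RMSE.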

The remainder of this paper is structured as follows. An outline of the relevant background material is presented in Section 2, including LC-ELM and the differential evolution algorithm. The optimisation of LC-ELM, termed evolutionary local coupled extreme learning machine (ELC-ELM), is then described in Section 3. In Section 4, systematic comparisons between ELC-ELM and several relevant ELM-based algorithms (ELM, LC-ELM, and OSFuzzyELM) are carried out in an experimental evaluation. Section 5 concludes the paper with a short discussion of potential further work.

#### 2. Theoretical Background

For completeness, the basic ideas of local coupled extreme learning machine and differential evolution (DE) [10] are briefly recalled first.

##### 2.1. Local Coupled Extreme Learning Machine

Conventionally, extreme learning machine (ELM) algorithms [1, 2] are implemented with a fully coupled framework since, in general, a single input activates all the hidden nodes. Such a structure makes the computational cost proportional to the scale of the given network. LC-ELM proposes a strategy to decouple the framework linking the input layer to the hidden layer in ELM. Different from the classical ELM, LC-ELM introduces a parameter, termed the "address," for each hidden node in the input space. Given a learning sample, the distances from the hidden node addresses to the input sample are gauged by fuzzy membership functions, yielding the activation degrees of the relevant hidden nodes. Owing to these two improvements, this strategy structurally simplifies the weight searching space of LC-ELM.

For a dataset which contains $N$ distinct objects $(x_j, t_j)$, where $x_j \in \mathbb{R}^n$ and $t_j \in \mathbb{R}^m$, the output of an $L$-hidden-node nonlinear LC-ELM is
$$f_L(x_j) = \sum_{i=1}^{L} \beta_i\, g(w_i \cdot x_j + b_i)\, \mu\big(s(x_j, a_i)\big), \quad j = 1, \ldots, N, \tag{1}$$
where $g$ denotes the activation function; $w_i$, $b_i$, and $\beta_i$ are the network weights; $\mu$ is a fuzzy membership function; $s(x_j, a_i)$ is the similarity between the $j$th input and the $i$th hidden node; and $a_i$ is the address of the $i$th hidden node.
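To illustrate how each hidden node's response is jointly shaped by its weights and its address, the output computation above can be sketched as follows. This is a minimal sketch: the sigmoid activation and Gaussian-style membership window are illustrative assumptions, not choices mandated by the paper.

```python
import numpy as np

def lc_elm_output(x, W, b, beta, A, r):
    """Output of an L-hidden-node LC-ELM for a single input x.

    W : (L, n) input weights, b : (L,) biases, beta : (L, m) output weights,
    A : (L, n) hidden node addresses, r : window radius.
    Sigmoid activation and a Gaussian window are illustrative choices.
    """
    g = 1.0 / (1.0 + np.exp(-(W @ x + b)))                   # activation of each hidden node
    mu = np.exp(-(np.linalg.norm(A - x, axis=1) / r) ** 2)   # activation degree from the address
    return (g * mu) @ beta                                   # weighted sum over hidden nodes
```

With a very large radius the window is effectively constant at 1, and the output coincides with that of a classical ELM, matching the degenerate case discussed below.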

In LC-ELM, the fuzzy membership function $\mu$ is defined with the following properties:
(1) $\mu$ is a nonnegative piecewise continuous function;
(2) $\mu$ is monotonically decreasing in $[0, +\infty)$;
(3) $\mu(0) = 1$;
(4) $\lim_{x \to +\infty} \mu(x) = 0$.

Here, $\mu$ is said to be piecewise continuous if it has only a finite number of discontinuities in any interval, and its left and right limits are defined (not necessarily equal) at each discontinuity [2]. In order to adjust the width of the activated area, an underlying radius parameter $r$ is employed in $\mu$.
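A Gaussian-style window is one common choice that meets these requirements. The following snippet (an illustrative assumption, not the paper's prescribed window) checks the defining properties, nonnegativity, monotone decrease, unit value at zero, and a vanishing tail, numerically:

```python
import numpy as np

def mu(x, r=1.0):
    """Gaussian fuzzy membership window; r controls the width of the activated area."""
    return np.exp(-(x / r) ** 2)

xs = np.linspace(0.0, 10.0, 1001)
assert np.all(mu(xs) >= 0.0)              # nonnegative piecewise continuous
assert np.all(np.diff(mu(xs)) <= 0.0)     # monotonically decreasing on [0, +inf)
assert mu(0.0) == 1.0                     # unit membership at zero distance
assert mu(1e6) < 1e-12                    # vanishes as the distance grows
```

Enlarging `r` widens the activated area around each hidden node address; shrinking it makes the node respond only to nearby inputs.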

Note that, in (1), when $\mu$ is a constant function equal to 1, LC-ELM reduces to the classical ELM. Moreover, when the input weight $w_i$ in (1) is equal to zero, the fuzzy membership function $\mu$ is nonconstant, and the similarity function is determined by the norm distance $\|x_j - a_i\|$; the framework of LC-ELM then reduces to the ELM with RBF hidden nodes [2]. In [12], both of these two cases of ELM are proven to possess universal approximation capabilities. Therefore, it is reasonable to expect that, for an arbitrary multivariate continuous function, LC-ELM is able to approximate the function to a given accuracy.

For the linear system $H\beta = T$ generated by LC-ELM, the hidden-layer output matrix in LC-ELM is
$$H = \Big[\, g(w_i \cdot x_j + b_i)\, \mu\big(s(x_j, a_i)\big) \,\Big]_{N \times L}. \tag{2}$$
$\beta = [\beta_1, \ldots, \beta_L]^{T}$ is the matrix of output weights, where $\beta_i$ denotes the weight vector connecting the $i$th hidden node and the output layer, and $T = [t_1, \ldots, t_N]^{T}$ is the matrix of target outputs. Given this presentation, in the initialisation phase of LC-ELM, the hidden node addresses, like the hidden-layer parameters, are assigned randomly as well.

Following the above discussion, a three-step LC-ELM algorithm can be summarised in Algorithm 1.
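Assuming the usual ELM-style analytic solution for the output weights, the three steps (randomly assigning the hidden parameters and addresses, computing the hidden-layer output matrix, and a Moore-Penrose solve) might be sketched as follows; the sigmoid activation and Gaussian window are again illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

def train_lc_elm(X, T, L, r, rng=np.random.default_rng(0)):
    """Three-step LC-ELM training sketch.

    Step 1: randomly assign input weights, biases, and hidden node addresses.
    Step 2: compute the hidden-layer output matrix H.
    Step 3: solve the output weights analytically, beta = pinv(H) @ T.
    """
    N, n = X.shape
    W = rng.normal(size=(L, n))                               # random input weights
    b = rng.normal(size=L)                                    # random biases
    A = rng.uniform(X.min(), X.max(), size=(L, n))            # addresses in the input space
    G = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))                  # (N, L) activations
    D = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=2) # distances to the addresses
    H = G * np.exp(-(D / r) ** 2)                             # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                              # Moore-Penrose solution
    return W, b, A, beta
```

The same matrix construction, with the returned parameters, yields predictions on new data; in ELC-ELM, `A` and `r` would additionally be tuned by DE against the RMSE of such predictions.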