Computational Intelligence and Neuroscience

Volume 2015 (2015), Article ID 947098, 14 pages

http://dx.doi.org/10.1155/2015/947098

## Training Spiking Neural Models Using Artificial Bee Colony

^{1}Intelligent Systems Group, Faculty of Engineering, La Salle University, Benjamín Franklin 47, Colonia Condesa, 06140 Mexico City, DF, Mexico

^{2}Instituto en Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, 04510 Mexico City, DF, Mexico

Received 18 October 2014; Accepted 6 January 2015

Academic Editor: Jianwei Shuai

Copyright © 2015 Roberto A. Vazquez and Beatriz A. Garro. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models limits their use in many pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees in the task of exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron in order to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to remark that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves when this kind of learning strategy is applied.

#### 1. Introduction

Artificial neural networks (ANNs) are applied in a broad range of problems. Among the most popular tasks using ANNs, we could mention pattern recognition, forecasting, and regression problems. However, the accuracy of these models can drastically diminish if the topology is not well designed and if the training algorithm is not selected carefully. One interesting alternative for designing the topology, training, and exploiting the capabilities of an ANN is to adopt a learning strategy based on evolutionary and swarm intelligence algorithms. It is well known that the design and training tasks can be stated as optimization problems; for that reason, it is possible to apply different types of evolutionary and swarm intelligence algorithms. For example, particle swarm optimization [1] and differential evolution [2] have been used to design and train ANNs automatically.

Several swarm intelligence algorithms based on the collective behavior of self-organizing systems have been proposed in recent years. Among the most popular, we could mention the ant colony system (ACO) [3], particle swarm optimization (PSO) [1], and artificial bee colony (ABC) [4]. Most of the studies related to the honey bee swarm focus on the dance and communication, task allocation, collective decision, nest site selection, mating, marriage, reproduction, foraging, floral and pheromone laying, and navigation behaviors of the swarm [5]. ABC is a novel algorithm that mimics the behavior of bees in nature, whose task consists of exploring their environment to find a food source.

The ABC algorithm has been used in a broad range of optimization problems. This algorithm is relatively simple, and its implementation is straightforward for solving optimization problems, being able to produce acceptable results at a low computational cost. The different studies performed in the literature compare its efficiency against other traditional strategies such as genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), ant colony optimization (ACO), and their variants. The efficiency obtained on numerical problems using numerical test functions, multivariable functions, constrained and unconstrained optimization problems, and even multiobjective problems, suggests that the ABC algorithm is a serious candidate for training ANN [5].

Some of the first works that use the ABC algorithm to adjust the synaptic weights of an ANN are described in [6, 7]. In [8], the authors train a feed-forward ANN using ABC to solve the XOR, 3-bit parity, and 4-bit encoder-decoder problems, as well as some signal processing applications. In [9], an ANN is trained using the ABC algorithm to solve a medical pattern classification problem. In [10], the authors train an ANN to classify different datasets used in the machine learning community. In [11], an ANN is trained to classify acoustic emission signals according to their respective sources. Another interesting paper on designing and training an ANN is presented in [12], where the authors describe a methodology for maximizing its accuracy and minimizing its connections by evolving the weights, the architecture, and the transfer function of each neuron. In the context of forecasting, [13] used ABC to train an ANN for bottom hole pressure prediction in underbalanced drilling. Whereas in [14] the authors train a recurrent ANN for stock price forecasting, in [15] the author uses ABC to train an ANN on earthquake time series data. Reference [16] presents an ANN trained with ABC for the approximation of S-models of biomedical networks. From all these papers, the authors conclude that the ABC algorithm is capable of training ANNs with an acceptable accuracy and that, in some cases, the results are better than those obtained with other traditional techniques.

Moreover, there exist many algorithms based on the behavior of bees, such as the bee algorithm, the honey-bee mating algorithm, and bee colony optimization (BCO), among others [17]. Hence, there are investigations on training ANNs that use different kinds of algorithms related to ABC. For example, in [18] the authors apply the bee colony algorithm to train an ANN, which is later applied to the wood defect problem. In [19], the authors estimate the state variables in distribution networks, including distributed generators, using a honey-bee mating algorithm. Furthermore, in [20], the authors use this algorithm in combination with a self-organizing map (SOM) for the market segmentation problem.

Swarm intelligence algorithms have contributed to and gained popularity in the field of ANNs as learning strategies. However, the intrinsic limitations of ANNs prevent their application to complex pattern recognition problems, even when different learning strategies are used. These limitations motivate the exploration of other alternatives for modeling and generating neural models, in order to make their application in several pattern recognition problems possible.

Although ANNs were inspired by the behavior of the human brain, the fact is that they do not mimic the behavior of a biological neuron. In that sense, the development and application of more realistic neural models could improve the accuracy of an ANN during several pattern recognition tasks.

Spiking neuron models are called the third generation of artificial neural networks [21]. These neurons increase the level of realism in a neural simulation and integrate the concept of time. These types of models have been used in a broad range of areas, mainly in the field of computational neuroscience [22], brain region modeling [23], auditory processing [24, 25], visual processing [26–28], robotics [29, 30], and so on.

Several spiking models have been proposed in recent years. One of the most realistic and complex models was proposed in [31]. Nonetheless, there are simplified versions of this model that reduce its computational complexity. Among these models, we could mention the well-known integrate-and-fire model [32], the Izhikevich model [33], the FitzHugh-Nagumo model [34], and the Hindmarsh-Rose model [35].

Theoretically, these types of models could simulate the behavior of any perceptron type neural network. However, their application in computer vision and pattern recognition has not been widely explored. Although there are some works related to image segmentation [36–38] and pattern recognition [39–43], there still are several issues to research related to the learning process, design, and implementation.

The learning process of these models is conducted with different techniques. In [44], the authors present SpikeProp, an adaptation of the well-known backpropagation algorithm, to train a spiking neural model. Furthermore, several variants to improve the efficiency of SpikeProp have been proposed [45–47]. However, these algorithms require a careful tuning of the network to obtain acceptable results.

Other approaches to train these models are based on probabilistic models [47–49], information bottleneck learning [50–52], and reinforcement learning [53–55].

On the other hand, nongradient based methods like evolutionary strategies (such as GA, PSO, and DE) have emerged as an alternative to traditional methods for training spiking neural models. Although this approach is computationally more expensive compared with traditional methods, it has several advantages that make possible its application in real pattern recognition problems [42, 56, 57].

Recently, it has been shown that a single spiking neuron model can solve nonlinear pattern recognition problems, showing a clear advantage over the traditional perceptron [58–60]. One alternative for simulating the learning process of this type of model is to use swarm intelligence algorithms. For example, in [58] the authors describe an approach that applies a leaky integrate-and-fire spiking neuron to various linear and nonlinear pattern recognition problems. In that work, the authors use the differential evolution algorithm as a learning strategy. In other research, the authors apply the Izhikevich spiking model to the same set of problems using a differential evolution strategy [60]. In [61, 62], the authors apply the Izhikevich spiking model to the same set of problems using the cuckoo search and particle swarm optimization algorithms, respectively. In general, the methodology described in those papers can be stated as follows: given a set of input patterns belonging to different classes, each input pattern is first transformed into an input signal. Then the spiking neuron is stimulated with this signal during a certain period of time, and its firing rate is computed. After adjusting the synaptic weights of the spiking model by means of a swarm intelligence algorithm, we expect that input patterns from the same class produce similar firing rates, while input patterns from different classes generate firing rates different enough to discriminate among the categories.
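This firing-rate methodology can be illustrated with a minimal sketch. The leaky integrate-and-fire dynamics, the parameter values, and the function name below are illustrative assumptions, not the exact model or constants used in the cited works:

```python
import numpy as np

def firing_rate(pattern, weights, T=100, dt=1.0, v_rest=0.0, v_th=1.0, tau=10.0):
    """Stimulate a leaky integrate-and-fire neuron for T ms with the constant
    input current I = w . x and return its firing rate (spikes per ms).
    All parameter values here are hypothetical."""
    I = float(np.dot(weights, pattern))  # input pattern transformed into an input signal
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v += dt * (-(v - v_rest) / tau + I)  # leaky integration of the membrane potential
        if v >= v_th:                        # threshold crossing produces a spike
            spikes += 1
            v = v_rest                       # reset after the spike
    return spikes / T
```

A learning strategy such as ABC would then adjust `weights` so that patterns of the same class yield similar firing rates while patterns of different classes yield clearly distinct ones.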

Despite the results presented in those papers, it is still necessary to explore and develop strategies that allow these models to learn from their environment and improve their accuracy in solving complex pattern recognition problems. Due to its capability of producing acceptable results at a low computational cost in a broad range of optimization problems, including the ANN field, the ABC algorithm could be an excellent tool for simulating the learning process of a spiking neural model.

In this paper, we propose to use the ABC algorithm as a learning strategy to train a spiking neuron model aiming to perform various linear and nonlinear pattern recognition tasks. Based on the methodology described in [60, 61], we compare our results with those presented in [58, 60, 61] to determine which learning strategy provides the best accuracy and how it affects the accuracy of the spiking neuron. In order to test the accuracy of the learning strategy combined with the spiking neuron model, we perform several experiments using different pattern recognition problems, including an odor recognition problem and cancer classification based on DNA microarrays.

The outline of this paper is as follows. A brief introduction to the ABC algorithm is presented in Section 2. The concepts related to the third generation of neural networks, known as spiking neural networks, are presented in Section 3. Section 4 presents the proposed methodology for training spiking neural networks using the ABC algorithm. The experimental results, as well as the discussion of the results, are presented in Section 5. Finally, the conclusions of this work are presented in Section 6.

#### 2. Basics on Artificial Bee Colony

The artificial bee colony (ABC) algorithm is a novel approach in the area of swarm optimization proposed by Karaboga and Akay [6]. The ABC algorithm is based on the behavior of bees in nature, whose task consists of exploring their environment to find a food source, picking up the flower's nectar, returning to the hive to evaluate the quality and amount of the food, and then calling the other bees of the community to fly towards the food source. Communication among bees is done by means of a particular dance.

This algorithm can find the optimum values in the search space of a given optimization problem. A global optimization problem can be defined as finding the parameter vector $\vec{x} = (x_1, x_2, \ldots, x_D)$ that minimizes the objective function $f(\vec{x})$:

$$\min_{\vec{x} \in S} f(\vec{x}), \tag{1}$$

which is constrained by the following inequalities:

$$l_j \le x_j \le u_j, \quad j = 1, 2, \ldots, D. \tag{2}$$

Here $f(\vec{x})$ is defined on a search space $S$, which is a $D$-dimensional rectangle in $\mathbb{R}^D$. The variable domains are limited by their lower and upper bounds (2).

This algorithm represents the solutions of a given problem by means of the positions of different food sources visited by the bees. Furthermore, it works with three kinds of bees in order to explore and exploit the search space: employed, onlooker, and scout bees.

Following the work described in [7], the employed bee modifies the position (solution) in its memory based on local information (visual information) and tests the nectar amount (fitness value) of the new source (new solution). If the quantity of nectar in the new position is better than that of the old one, the bee memorizes the new position and forgets the old one. Otherwise, it keeps the previous position in its memory. After all employed bees complete the search process, they share the nectar information about the food sources and their positions with the onlooker bees in the dance area.

An onlooker bee checks the nectar information obtained from all employed bees and selects a food source with a probability related to its nectar amount. As in the case of the employed bee, the onlooker then produces a modification of the selected position in its memory and checks the quantity of nectar of the candidate source. If its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one.

An artificial onlooker bee chooses a food source depending on the probability associated with that food source. This probability is calculated with the following expression:

$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n},$$

where $fit_i$ is the fitness value of solution $i$, which is proportional to the nectar amount of the food source in position $i$, and $SN$ is the number of food sources, which is equal to the number of employed bees.
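This fitness-proportional selection can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def onlooker_probabilities(fitness):
    """p_i = fit_i / sum_{n=1}^{SN} fit_n: each of the SN food sources is
    chosen by an onlooker with probability proportional to its fitness."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()
```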

In order to produce a candidate food position from the old one in memory, the ABC algorithm uses the following expression:

$$v_{ij} = x_{ij} + \phi_{ij} \left( x_{ij} - x_{kj} \right),$$

where $k \in \{1, 2, \ldots, SN\}$ and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes. Although $k$ is determined randomly, it has to be different from $i$. $\phi_{ij}$ is a random number in $[-1, 1]$ that controls the production of neighbor food sources around $x_{ij}$.
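The candidate-generation step can be sketched as follows, assuming the food sources are stored as rows of a matrix (function name and layout are ours):

```python
import numpy as np

def candidate_source(foods, i, rng):
    """v_ij = x_ij + phi_ij * (x_ij - x_kj): perturb one randomly chosen
    dimension j of source i towards or away from a random partner k != i."""
    SN, D = foods.shape
    k = rng.choice([s for s in range(SN) if s != i])  # partner index, k != i
    j = rng.integers(D)                               # dimension to perturb
    phi = rng.uniform(-1.0, 1.0)                      # phi_ij in [-1, 1]
    v = foods[i].copy()
    v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return v
```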

The food source whose nectar is abandoned by the bees is replaced with a new food source by the scouts. In the ABC algorithm, this is simulated by producing a random position and replacing the abandoned one with it. If a position cannot be improved after a predetermined number of trials, then the corresponding solution is abandoned. This parameter is called the "limit" for abandonment. Assume that the abandoned source is $x_i$, with $j \in \{1, 2, \ldots, D\}$; then the scout bee discovers a new food source to replace $x_i$. This operation can be defined as

$$x_i^j = x_{\min}^j + \operatorname{rand}(0, 1) \left( x_{\max}^j - x_{\min}^j \right),$$

where $x_{\min}^j$ and $x_{\max}^j$ are the lower and upper bounds of parameter $j$, respectively.
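The scout step simply draws a uniformly random position within the variable bounds; a minimal sketch (names are ours):

```python
import numpy as np

def scout_source(lower, upper, rng):
    """x_i^j = x_min^j + rand(0,1) * (x_max^j - x_min^j) for every dimension j:
    replace an abandoned source with a random position inside the bounds."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random(lower.shape) * (upper - lower)
```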

After each candidate source position is produced and evaluated by the artificial bee, its performance is compared with that of the previous position. If the new food source has equal or better nectar than the old one, it replaces the old one in memory. Otherwise, the old one is retained in memory.

The pseudocode of the ABC algorithm is shown in Pseudocode 1.
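The phases described in this section can be combined into a compact minimization loop. The following Python sketch is offered as an illustration only: the fitness transform $fit = 1/(1 + f)$ (valid for nonnegative costs), the parameter defaults, and all names are common conventions we assume, not the authors' exact implementation:

```python
import numpy as np

def abc_minimize(f, lower, upper, SN=20, limit=20, cycles=200, seed=0):
    """Minimal ABC sketch: employed, onlooker, and scout phases with greedy
    selection. fit_i = 1 / (1 + f_i) is an assumed fitness transform."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    D = lower.size
    foods = lower + rng.random((SN, D)) * (upper - lower)  # random initial sources
    costs = np.array([f(x) for x in foods])
    trials = np.zeros(SN, dtype=int)                       # counters for the "limit" rule

    def try_neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) on one random dimension j, partner k != i
        k = rng.choice([s for s in range(SN) if s != i])
        j = rng.integers(D)
        v = foods[i].copy()
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
        v = np.clip(v, lower, upper)
        cv = f(v)
        if cv < costs[i]:                 # greedy selection: keep the better source
            foods[i], costs[i], trials[i] = v, cv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(SN):               # employed bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + costs)         # nectar amounts
        p = fit / fit.sum()
        for _ in range(SN):               # onlooker phase: fitness-proportional choice
            try_neighbor(int(rng.choice(SN, p=p)))
        worst = int(np.argmax(trials))    # scout phase: abandon an exhausted source
        if trials[worst] > limit:
            foods[worst] = lower + rng.random(D) * (upper - lower)
            costs[worst] = f(foods[worst])
            trials[worst] = 0
    best = int(np.argmin(costs))
    return foods[best], costs[best]
```

For a spiking neuron, `f` would measure how poorly the firing rates produced by a candidate weight vector discriminate among the pattern classes.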