Abstract

The particle swarm optimization algorithm was originally introduced to solve continuous parameter optimization problems. It was soon modified to solve other types of optimization tasks and also to be applied to data analysis. In the latter case, however, there are few works in the literature that deal with the problem of dynamically building the architecture of the system. This paper introduces new particle swarm algorithms specifically designed to solve classification problems. The first proposal, named Particle Swarm Classifier (PSClass), is a derivation of a particle swarm clustering algorithm and its architecture, as in most classifiers, is pre-defined. The second proposal, named Constructive Particle Swarm Classifier (cPSClass), uses ideas from the immune system to automatically build the swarm. A sensitivity analysis of the growing procedure of cPSClass and an investigation into a proposed pruning procedure for this algorithm are performed. The proposals were applied to a wide range of databases from the literature and the results show that they are competitive in relation to other approaches, with the advantage of having a dynamically constructed architecture.

1. Introduction

Data classification is one of the most important tasks in data mining, which is applied to databases in order to label and describe characteristics of new input objects whose labels are unknown. Generally, this task requires a classifier model, obtained from samples of objects from the database, whose labels are known a priori. The process of building such a model is called learning or training and refers to the adjustment of parameters of the algorithm so as to maximize its generalization ability.

The prediction for unlabeled data is usually done using models generated in the training step or some lazy learning mechanism able to compare objects classified a priori with new objects yet to be classified. Thus, not all algorithms generate an explicit classification model (classifier), as is the case of k-NN (k-nearest neighbors) [1, 2] and naïve Bayes [3], which use historical data in a comparative process of distance or probability estimation, respectively, to classify new objects. Examples of algorithms that generate an explicit classification model are decision trees [4, 5], artificial neural networks [6, 7], and learning vector quantization (LVQ) algorithms [8–11].

This paper proposes two algorithms for data classification—Particle Swarm Classifier (PSClass) and its successor Constructive Particle Swarm Classifier (cPSClass), both based on the particle swarm optimization algorithm (PSO) [12]. These algorithms were developed from adaptations of the PSO and other bioinspired techniques, and were evaluated in seven databases from the literature. Their performance was compared with that of other swarm algorithms and also with some well-known methods from the literature, such as k-NN, naïve Bayes, and an MLP neural network.

In the PSClass algorithm, two steps are necessary to construct the classifier. In the first step, a number of prototypes are positioned, in an unsupervised way, on regions of the input space with some density of data; for this, the Particle Swarm Clustering (PSC) [13] algorithm is used. In the next step, the prototypes are adjusted by an LVQ1 method [14] in order to minimize the percentage of misclassification. Thus, the PSClass automatically positions the prototypes in the respective classes of objects, defining the decision boundaries between classes and increasing the efficiency of the algorithm during the construction of the classifier.

The cPSClass, in turn, improved its unsupervised step by inserting mechanisms inspired by the immune system to automatically build the classifier model, more specifically, to automatically set the number of cells (prototypes) in the swarm. At this step, the PSC has been replaced by the Constructive Particle Swarm Clustering (cPSC) [15], which uses the clonal selection principle [16] to iteratively control the number of particles in the swarm. Furthermore, a pruning phase was introduced so as to avoid the explosion in the number of particles. Thus, cPSClass eliminates the need for the user to define the swarm size a priori, a critical parameter required for many data classification algorithms.

The paper is organized as follows. As the cPSClass algorithm borrows ideas from swarm intelligence and the immune system, Section 2 is dedicated to a brief review of the biological concepts necessary for a proper understanding of the algorithm. Section 3 presents the PSC and cPSC algorithms, which form the basis for designing PSClass and cPSClass. In Section 4, two classifiers are proposed based on the PSO—PSClass and cPSClass. Section 5 provides a literature review emphasizing algorithms based on the PSO to solve classification problems and PSO versions with dynamic population. The PSClass and cPSClass algorithms are evaluated in Section 6 and a parametric sensitivity analysis for cPSClass was performed to evaluate the growth of the swarm. The paper is concluded in Section 7, with discussions and proposals for future works.

2. Biological Background

Biological and behavioral mechanisms are some of the natural inspirations that motivate scientists and engineers to construct intelligent computational tools. One of the pioneering computational bioinspired tools was proposed by McCulloch and Pitts [17], with their logic model of the neuron. After that, several other lines of research have emerged with new bioinspirations, such as evolutionary computing, based on the Darwinian laws of evolution [18–20], swarm intelligence [12, 21–23], inspired by the emergent behavior of social agents (most often insects), and artificial immune systems, inspired by the vertebrate immune system [24, 25]. These four approaches constitute one of the three major areas of natural computing [26]. The following sections briefly review swarm intelligence concepts and their main lines of research, as well as an overview of the vertebrate immune system and its main defense mechanisms.

2.1. Swarm Intelligence

Collective systems (swarms) are composed of agents that interact with each other and with the environment in order to achieve a common objective. These agents can be a flock of birds, a school of fish, or a colony of insects, which are able to learn from their own experience and from social influence, and to adapt to changes in the environment where they live [27]. Individually, these agents have limited cognitive abilities that allow them to perform only simple tasks; however, when put together and interacting with one another, they can perform substantially complex tasks [28]. The emergent behavior of this social interaction is called swarm intelligence.

This terminology was first used in the work of Beni and Wang [21], which described the behavior of a group of robots that interacted with each other in a particular environment, respecting a set of local rules inspired by the behavior of ants. According to Kennedy et al. [29], any collective behavior, like a flock of birds or an immune system, can be named a swarm.

There are basically two approaches in swarm intelligence: works based on the behavior of social insects [30] and works based on the human ability to process knowledge [31]. In both lines of research, there is a group of agents that interact with one another and with the environment.

Among the swarm intelligence algorithms, a great deal of attention has been given to the PSO algorithm, introduced by Kennedy and Eberhart [12]. PSO was inspired by the social behavior of flocks of birds and schools of fish and is used to solve complex optimization problems. It uses a population of solutions, called particles, which interact with one another, exchanging experience in order to optimize their ability within a certain environment. The main bioinspiration in the PSO is that an individual behaves according to its own past experience and that of the other interacting agents. Since its introduction, PSO has been improved and adapted to be applied to various tasks [32].
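To make this mechanism concrete, the sketch below implements a textbook global-best PSO update in Python. The cost function, bounds, and parameter values are illustrative assumptions for exposition, not part of the proposals in this paper.

```python
# Minimal sketch of the canonical (global-best) PSO update; parameter
# values and the sphere cost function are illustrative assumptions.
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    p = x.copy()                                     # personal best positions
    p_cost = np.array([cost(xi) for xi in x])
    g = p[p_cost.argmin()].copy()                    # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity = inertia + cognitive (own experience) + social terms
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(xi) for xi in x])
        better = c < p_cost                          # update personal bests
        p[better], p_cost[better] = x[better], c[better]
        g = p[p_cost.argmin()].copy()                # update global best
    return g

best = pso(lambda z: np.sum(z**2), dim=5)            # e.g., minimize a sphere
```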

2.2. Immune System

Living organisms have cells and molecules that protect their body against the onslaught of disease-causing agents (pathogens). Specific cells and their mechanisms, such as identification, signaling, reproduction, and attack, are parts of a complex system called the immune system [24], responsible for keeping diseases at bay. The vertebrate immune system is divided mainly into innate immune system and adaptive immune system.

The innate immune system does not evolve, remaining the same throughout the lifetime of the individual. It recognizes many infectious agents and is responsible for combating them while the adaptive immune system is preparing to act. The innate immune system, by itself, cannot remove most pathogens [24]. Its main role is to signal other immune system cells, since most pathogens do not directly stimulate the cells of the adaptive immune system.

The adaptive immune system is also called the specific immune system, because some pathogens are recognized only by cells and molecules of the adaptive immune system. One of its main features is the ability to learn from infections and develop an immune memory, in other words, to recognize an antigen when it is repeatedly presented to the body. Thus, the adaptability of the adaptive immune system renders it more capable of recognizing and combating a specific antigen each time it tries to reinfect the body.

The main components of the innate immune system are the macrophages and granulocytes, while in the adaptive immune system these are the lymphocytes. Both systems depend on the activity of white blood cells (leukocytes).

The organs that make up the immune system are called lymphoid organs and consist of the following two subsystems:
(i) primary lymphoid organs, responsible for the production, growth, development, and maturation of lymphocytes;
(ii) secondary lymphoid organs, sites in which lymphocytes recognize and fight pathogens.

Some lymphocytes are generated, developed, and matured in the bone marrow. These lymphocytes are called B cells. However, some lymphocytes generated in the bone marrow migrate to an organ called thymus, where they are matured and come to be called the T cells. The B cells and T cells are the lymphocytes of the immune system.

The main mechanisms of recognition and activation of the immune system are described by the clonal selection principle or the clonal selection theory [16], which explains how the adaptive immune system responds to pathogens. Only cells that recognize antigens are selected for reproduction, whilst the remainder die after some time due to a lack of stimulation.

The immune cells subjected to high concentrations of antigens (pathogens) are selected as candidates for reproduction. If the affinity between this cell receptor (called antibody) and an antigen of greater affinity to it exceeds a certain threshold, the cell is cloned. The immune recognition, thus, triggers the proliferation of antibodies as one of the main mechanisms of immune response to an antigenic attack, a process called clonal expansion. During clonal expansion B cells are subjected to an affinity proportional mutation process, resulting in variations in the repertoire of immune cells and, thus, antibodies, which are B-cell receptors free in solution.

The clonal selection theory is considered the core of the immune response system, since it describes the dynamics of the adaptive immune system when stimulated by disease-causing agents. Therefore, it is used in the design of adaptive problem solving systems [24] and was used in the design of the cPSClass algorithm to be proposed in this paper.

3. Clustering Using Particle Swarm

The idea of using the PSO algorithm to solve clustering problems was initially proposed in [33], so that each particle corresponds to a vector containing the centroid of each group of the database. Since then, several clustering algorithms based on the PSO have been proposed [13, 15, 31, 33–56]. Among them, the PSC and cPSC algorithms form the basis for the PSClass and cPSClass classification algorithms, respectively, proposed in this paper. In this section, the precursors of PSClass and cPSClass are reviewed.

3.1. Particle Swarm Clustering (PSC)

The PSC, proposed in [13], is an adaptation of the PSO to solve data clustering problems. In the PSC, particles interact with one another and with the environment (database) so that they become representatives of a natural group from the database. The convergence criterion of the algorithm is determined by the stabilization of the path of the particles, and the number of particles in the swarm is initialized empirically. The dimension of a particle is given by the dimension of the input objects, where each vector position is an attribute of the object.

The main structural differences between the PSC and PSO algorithms are as follows.
(i) In PSC, the particles altogether compose a solution to the data clustering problem.
(ii) The PSC does not use an explicit cost function to evaluate the quality of the particles. Instead, the Euclidean distance is used as a measure to assess the dissimilarity between a particle and an object, and particles move around the space in order to represent statistical regularities of the input data.
(iii) A self-organizing term, which moves the particle towards the input object, was added to the velocity equation.
(iv) In the PSO algorithm, all the particles in the swarm are updated at each iteration. In the PSC, the particles to be updated are defined by the input objects; that is, only the winner (the one closest to the input datum) is updated according to (1) and (2).

For each input object, there is a particle of greatest similarity to it, obtained from the Euclidean distance between the particle and the object. This particle is called the winner and is updated following the proposed velocity equation (1) and position equation (2):

v_I(t + 1) = ω v_I(t) + φ1 ⊗ (p_I(t) − x_I(t)) + φ2 ⊗ (g_j(t) − x_I(t)) + φ3 ⊗ (y_j − x_I(t)),  (1)

x_I(t + 1) = x_I(t) + v_I(t + 1),  (2)

where x_I and v_I are the position and velocity of the winner particle, y_j is the input object, ⊗ denotes the element-wise vector product, and φ1, φ2, and φ3 are random vectors in the interval (0, 1).

In (1), the parameter ω, called inertia moment, is responsible for controlling the convergence of the algorithm. The cognitive term, φ1 ⊗ (p_I(t) − x_I(t)), associated with the experience of the particle, represents the best position p_I of the particle in relation to the input object so far, that is, the smallest distance between the input object and the winner particle. The social term, φ2 ⊗ (g_j(t) − x_I(t)), is associated with the particle closest to the input object, that is, the particle that had the smallest distance in relation to the input object so far. The self-organizing term, φ3 ⊗ (y_j − x_I(t)), moves the particle towards the input object.

Thus, the particles converge to the centroids of the groups, or regions of higher density of objects, becoming prototypes representative of groups from the database.

The pseudocode of the PSC algorithm is described in Pseudocode 1.

1. procedure [X, V, P, g, PLABELS] = PSC (Y, vmax, ω, nP, CLABELS)
2. Y //dataset
3. initialize X //initialize at random each particle x_i ∈ [0, 1]
4. initialize V //initialize at random each v_i ∈ [−vmax, vmax]
5. initialize dist
6. t = 1
7. while stopping criterion is not met
8.  for j = 1 to N //for each input datum
9.   for i = 1 to nP //for each particle
10.    dist(i) = distance(x_i, y_j)
11.   end for
12.   I = index(min(dist))
13.   PLABELS(j) = label(x_I, CLABELS(j)) //predicted label
14.   if distance(x_I, y_j) < distance(p_I, y_j)
15.    p_I = x_I
16.   end if
17.   if distance(x_I, y_j) < distance(g_j, y_j)
18.    g_j = x_I
19.   end if
20.   v_I = ω v_I + φ1 ⊗ (p_I − x_I) + φ2 ⊗ (g_j − x_I) + φ3 ⊗ (y_j − x_I)
21.   v_I ∈ [−vmax, vmax]
22.   x_I(t + 1) = x_I(t) + v_I(t + 1)
23.   x_I ∈ [0, 1]
24.  end for
25.  for i = 1 to nP
26.   if (i ≠ win) //particles that did not win at iteration t
27.    v_i = ω v_i + φ4 ⊗ (x_most_win − x_i)
28.    v_i ∈ [−vmax, vmax]
29.    x_i(t + 1) = x_i(t) + v_i(t + 1)
30.   end if
31.  end for
32.  t = t + 1
33.  ω = 0.95 ω
34.  Test the stopping criterion
35. end while
36. end procedure

Step 13 of Pseudocode 1 assigns a label (PLABELS) to each input object, given by the label (CLABELS) that represents the dominant class of objects for which each particle was the winner. Generally, in real-world problems the correct labels are not known a priori, so each label (CLABELS) must be given by each particle in the swarm.

Step 26 of Pseudocode 1 updates all those particles that did not move at iteration t. Thus, after all objects have been presented to the swarm, the algorithm verifies whether some particle did not win in that iteration. If so, these particles are updated in relation to the particle that was elected winner most often at iteration t. Such a particle is called x_most_win (step 27 in Pseudocode 1), where φ4 is a random vector within the interval (0, 1). A compact sketch of the whole iteration is given below.
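The following Python/NumPy sketch illustrates one PSC iteration as in Pseudocode 1. Variable names, the unit-hypercube clipping, and the zero-win test are assumptions made for exposition, not the authors' reference implementation.

```python
# Illustrative NumPy sketch of one PSC iteration (steps 8-31 of Pseudocode 1).
import numpy as np

def psc_iteration(Y, X, V, P, G, omega, vmax, rng):
    """Y: objects (N x d); X, V: particle positions/velocities (nP x d);
    P: per-particle cognitive memories; G: per-object social memories."""
    wins = np.zeros(X.shape[0], dtype=int)
    for j, y in enumerate(Y):                       # for each input object
        d = np.linalg.norm(X - y, axis=1)           # dissimilarity (step 10)
        i = d.argmin()                              # winner particle (step 12)
        wins[i] += 1
        if np.linalg.norm(X[i] - y) < np.linalg.norm(P[i] - y):
            P[i] = X[i]                             # cognitive memory (step 15)
        if np.linalg.norm(X[i] - y) < np.linalg.norm(G[j] - y):
            G[j] = X[i]                             # social memory (step 18)
        phi1, phi2, phi3 = rng.random((3, X.shape[1]))
        V[i] = (omega * V[i] + phi1 * (P[i] - X[i])
                + phi2 * (G[j] - X[i]) + phi3 * (y - X[i]))   # eq. (1)
        V[i] = np.clip(V[i], -vmax, vmax)
        X[i] = np.clip(X[i] + V[i], 0.0, 1.0)       # eq. (2), data in [0, 1]
    most_win = X[wins.argmax()].copy()
    for i in np.flatnonzero(wins == 0):             # particles that never won
        phi4 = rng.random(X.shape[1])
        V[i] = np.clip(omega * V[i] + phi4 * (most_win - X[i]), -vmax, vmax)
        X[i] = X[i] + V[i]                          # move toward x_most_win
    return X, V, P, G
```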

3.2. Constructive Particle Swarm Clustering (cPSC)

To dynamically determine the number of particles in PSC, Prior and de Castro [15] proposed a successor, called cPSC, which eliminated the need for the user to input the number of particles (prototypes) in the swarm. cPSC automatically finds a suitable number of prototypes in databases by employing strategies borrowed from the PSC algorithm with the addition of three new features inspired by the antibody network named ABNET [57]: growth of the particle swarm, pruning of particles, and an automatic stopping criterion. Furthermore, the cPSC algorithm uses an affinity threshold, ε, as a criterion to control the growth of the swarm. The growth, pruning, and stopping functions are described below; each is evaluated every two iterations.

3.2.1. Swarm Growth

The growth stage is based on the immune cell reproduction mechanism during clonal selection and expansion [16, 24], as described previously. The cells that are subjected to high concentrations of antigens are chosen as candidates to reproduce. If the affinity between the antibodies of these cells and the antigens of highest affinity to them is greater than a threshold ε, then these cells are cloned.

These principles of selection and reproduction of antibodies inspired the design of the constructive particle swarm clustering algorithm. In the cPSC, particles are analogous to immune cells, and objects from the database to antigens. The algorithm starts with only one particle (immune cell), with randomly initialized position and velocity. At every two iterations, the algorithm evaluates the necessity of growing the swarm as follows: the particle that was elected winner most often (the cell submitted to the highest concentration of antigen) is selected, and the degree of affinity between this particle and the object (antigen) of highest affinity to it is evaluated against the threshold ε. If the affinity between them is greater than ε, then a new particle is created in the swarm, positioned midway between the winner and the object of highest affinity to it. A sketch of this test is given below.
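The sketch below mimics this growth test. The affinity measure (higher meaning more similar) and the comparison direction follow the description above and are assumptions for exposition; variable names are illustrative.

```python
# Hedged sketch of the cPSC growth test, run every two iterations.
# X/V: particle positions/velocities; Y: objects; wins: per-particle win
# counts; eps: affinity threshold; affinity(a, b): higher = more similar.
import numpy as np

def grow_swarm(X, V, Y, wins, eps, affinity):
    b = wins.argmax()                            # most stimulated particle
    aff = np.array([affinity(X[b], y) for y in Y])
    j = aff.argmax()                             # object of highest affinity
    if aff[j] > eps:                             # affinity exceeds threshold
        x_new = (X[b] + Y[j]) / 2.0              # clone placed midway between
        X = np.vstack([X, x_new])                # the winner and the object
        V = np.vstack([V, np.zeros_like(x_new)])
    return X, V
```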

3.2.2. Pruning of Particles

At every two iterations, the algorithm evaluates the need for pruning particles: if a particle has not moved at all in two iterations, it is deleted from the swarm. A new step, called suppression, was added right after the pruning step. If two particles are close to one another (Euclidean distance < 0.3), one of them is eliminated. This is a metaphor based on the immune system, in which cells and molecules recognize each other even in the absence of antigens: if a cell recognizes an antigen (positive response), a clonal immune response is started; otherwise (negative response), suppression occurs, which refers to the death of cells recognized as self.
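A minimal sketch of pruning plus suppression follows, under stated assumptions: `displacement` records how far each particle moved over the last two iterations, and the 0.3 suppression radius comes from the text.

```python
# Sketch of the pruning and suppression steps; names are illustrative.
import numpy as np

def prune_and_suppress(X, V, displacement, radius=0.3):
    keep = displacement > 0.0               # prune particles that did not move
    X, V = X[keep], V[keep]
    alive = np.ones(len(X), dtype=bool)
    for i in range(len(X)):                 # suppress near-duplicate particles,
        for k in range(i + 1, len(X)):      # keeping one of each close pair
            if alive[i] and alive[k] and np.linalg.norm(X[i] - X[k]) < radius:
                alive[k] = False
    return X[alive], V[alive]
```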

3.2.3. Stopping Criterion

At every two iterations, the algorithm evaluates the stopping criterion through the average Euclidean distance between the current positions of the particles and the positions of the memory particles. The algorithm stops when this distance is less than or equal to a given threshold or after 200 iterations.
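A minimal sketch of this stopping test, assuming an illustrative tolerance value:

```python
# Stop when the mean displacement of the particles relative to their
# memorized positions falls below a tolerance (value assumed here),
# or after 200 iterations.
import numpy as np

def should_stop(X, X_memory, t, tol=1e-3, max_iters=200):
    mean_shift = np.linalg.norm(X - X_memory, axis=1).mean()
    return mean_shift <= tol or t >= max_iters
```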

The pseudocode of the cPSC algorithm is described as in Pseudocode 2.

1. procedure [X, V, P, g, PLABELS] = cPSC (Y, ε, ω, vmax, CLABELS)
2. Y //dataset
3. initialize X //initialize at random only one particle
4. initialize V //initialize at random, v ∈ [−vmax, vmax]
5. initialize dist
6. t = 1
7. while stopping criterion is not met
8.  for j = 1 to N //for each input datum
9.   for i = 1 to nP //for each particle
10.    dist(i) = distance(x_i, y_j)
11.   end for
12.   I = index(min(dist))
13.   PLABELS(j) = label(x_I, CLABELS(j)) //predicted label
14.   if distance(x_I, y_j) < distance(p_I, y_j)
15.    p_I = x_I
16.   end if
17.   if distance(x_I, y_j) < distance(g_j, y_j)
18.    g_j = x_I
19.   end if
20.   v_I = ω v_I + φ1 ⊗ (p_I − x_I) + φ2 ⊗ (g_j − x_I) + φ3 ⊗ (y_j − x_I)
21.   v_I ∈ [−vmax, vmax]
22.   x_I(t + 1) = x_I(t) + v_I(t + 1)
23.   x_I ∈ [0, 1]
24.  end for
25.  if mod(t, 2) == 0
26.   Eliminate particles from the swarm if necessary
27.   Test the stopping criterion
28.   Clone particles if necessary
29.  end if
30.  t = t + 1
31.  ω = 0.95 ω
32. end while
33. end procedure

4. Classification Using Particle Swarm

This paper proposes two data classification algorithms based on particle swarms: (1) PSClass, that uses the LVQ1 heuristics to adjust the position of prototypes generated by a clustering swarm-based algorithm; and (2) cPSClass, an improved version of PSClass that uses ideas from the immune system to dynamically determine the number of particles in the swarm.

The training process of PSClass consists of the iterative adjustment of the position of particles (prototypes). After training, there is a predictor model formed by a set of prototypes able to describe and predict the class to which a new input object from the database must belong. The testing step assesses the generalization capability of the classifier. At this stage, a number of test objects are presented to the classifier and their classes are predicted.

Two methods are combined to generate the predictor: the PSC algorithm and an LVQ1 model. The LVQ1 heuristics was adopted for its simplicity: through simple position-adjustment procedures, it allows the correction of misplaced prototypes in the data space [10].

Within PSClass, the PSC algorithm is run to find groups of objects in the database by placing the particles (prototypes) on the natural groups of the database. The number of particles must be informed by the user, and this number should be at least equal to the number of existing classes in the database. The algorithm places the prototypes in the input space so as to map each object to a representative prototype of its class. Thus, the algorithm generates a decision boundary between classes based on the prototypes that represent the classes. Then, the LVQ1 heuristics is used to adjust the position of the prototypes so as to minimize the classification error.
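Once trained, such a model is just a set of labeled prototypes, and prediction assigns each new object the label of its nearest prototype. A minimal sketch, with illustrative names:

```python
# Nearest-prototype prediction over a trained prototype set.
import numpy as np

def predict(prototypes, proto_labels, Y_new):
    labels = []
    for y in Y_new:
        d = np.linalg.norm(prototypes - y, axis=1)   # Euclidean dissimilarity
        labels.append(proto_labels[d.argmin()])      # nearest prototype wins
    return np.array(labels)
```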

In its classification version, the PSC algorithm was modified such that the number of iterations for convergence is not determined by the user. Instead, its stopping criterion is defined by the stabilization of the prototypes around the input objects.

Two steps are required to generate the PSClass classifier.
(i) Unsupervised step: the PSC algorithm is run in order to position the particles in regions of the input space that are representative of the natural clusters of data.
(ii) Supervised step: some steps are added to the PSC algorithm such that the prototypes are adjusted by the LVQ1 method so as to minimize the classification error, as shown in (3) and (4).

For each object in the database, there is a prototype with greater similarity to it, determined by a nearest neighbor method. This prototype is updated by the PSC equations combined with the LVQ1 method:

x_I(t + 1) = x_I(t) + v_I(t + 1), if the prototype and the object belong to the same class,  (3)

x_I(t + 1) = x_I(t) − v_I(t + 1), if the prototype and the object do not belong to the same class.  (4)

Thus, when a particle correctly labels an object from the database, the particle is moved toward this object (3); otherwise, it is moved in the opposite direction to the object (4).
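A sketch of this signed LVQ1 correction, assuming NumPy arrays for positions and velocities:

```python
# The winner prototype is attracted by a correctly labeled object and
# repelled by a mislabeled one.
def lvq1_update(x_winner, v_winner, same_class):
    if same_class:
        return x_winner + v_winner    # eq. (3): move toward the object
    return x_winner - v_winner        # eq. (4): move away from the object
```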

The following pseudocode presents the supervised step of the PSClass classifier (see Pseudocode 3).

1. procedure [X, PLABELS] = PSClass (Y, vmax, ω, nP, CLABELS)
2. Y //dataset
3. CLABELS //correct labels
4. [X, V, P, g, PLABELS] = PSC (Y, vmax, ω, nP, CLABELS)
5. while stopping criterion is not met
6.  for j = 1 to N //for each object
7.   for i = 1 to nP //for each particle
8.    dist(i) = distance(x_i, y_j)
9.   end for
10.   I = index(min(dist))
11.   if distance(x_I, y_j) < distance(p_I, y_j)
12.    p_I = x_I
13.   end if
14.   if distance(x_I, y_j) < distance(g_j, y_j)
15.    g_j = x_I
16.   end if
17.   v_I = ω v_I + φ1 ⊗ (p_I − x_I) + φ2 ⊗ (g_j − x_I) + φ3 ⊗ (y_j − x_I)
18.   v_I ∈ [−vmax, vmax]
19.   if (PLABELS(j) == CLABELS(j))
20.    x_I(t + 1) = x_I(t) + v_I(t + 1)
21.   else
22.    x_I(t + 1) = x_I(t) − v_I(t + 1)
23.   end if
24.  end for
25.  t = t + 1
26.  Test the stopping criterion
27. end while
28. end procedure

As in most data classification algorithms, the user must define the architecture of the system, for example, the number of particles in the swarm or of neurons in a neural network. Some changes in the PSClass classifier were therefore proposed so that it could dynamically determine the number of prototypes in the swarm, allowing the automatic construction of a classifier model and giving rise to cPSClass. The cPSClass algorithm was inspired by the work of Prior and de Castro [15], who proposed the cPSC, discussed in Section 3.2.

Like its predecessor, PSClass, two steps are necessary to generate the cPSClass classifier.
(i) Unsupervised step: the cPSC algorithm is run in order to position the particles in regions of the input space that are representative of the natural clusters of data and also to determine a suitable number of particles in the swarm (Pseudocode 2).
(ii) Supervised step: some steps are added to the PSC algorithm such that the prototypes are adjusted by the LVQ1 method so as to minimize the classification error, as shown in (3) and (4).

5. Related Works

As the contributions of this paper emphasize a constructive particle swarm classification algorithm, the related works reviewed here include the use of the PSO algorithm for data classification and PSO techniques with dynamic population.

5.1. PSO for Data Classification

There are several works in the literature involving data classification with PSO, such as [34–41]. These will be briefly reviewed in this section.

In [34], two approaches to the binary PSO are applied to classification problems: one called Pittsburgh PSO (PPSO) and the other called Michigan PSO (MPSO). In the Pittsburgh approach, each particle represents one or more prediction rules. Thus, a single particle is used to solve a classification problem. The classification is done based on the nearest neighbors rule (NN). In the Michigan PSO, by contrast, each particle represents a single prototype. Thus, all particles are used to solve a classification problem. A refinement of the MPSO is presented in [35] with the adaptive Michigan PSO (AMPSO), where the population is dynamic and the whole swarm is used to solve the problem. The fitness of each particle is used to calculate its growth probability.

In [36], the authors proposed an extension of the binary PSO, called DPSO, to discover classification rules. Each attribute may or may not be included in the resulting rule. An improvement of DPSO was proposed in [37], culminating in the hybrid algorithm PSO/ACO. The proposal starts with an empty set of rules, and for each class from the database it returns the best rule for the class evaluated.

Hybrid algorithms are common, as is the case of PSORS [38], which is used to cluster and classify objects. It combines the PSO, rough sets (RS), and a modified form of the Huang index function to optimize the number of groups found in the database. The number of groups for each attribute of the particle is limited by a range defined by the Huang index, which is applied to the database. Attributes are fuzzified by the fuzzy c-means algorithm [42], and the index function is applied to each object to determine the group to which it belongs.

In [39], a hybrid algorithm named hybrid particle swarm optimization tabu search (HPSOTS) was proposed for selecting genes for tumor classification. HPSOTS combines PSO with tabu search (TS) to maintain population diversity and prevent convergence to deceptive local optima. The algorithm initializes a population of individuals represented by binary strings. Ninety percent of the neighbors of an individual are assessed according to mechanisms from the modified discrete PSO [43, 44]. The algorithm selects a new individual from the neighborhood according to the tabu conditions and updates the population.

According to Wang et al. [40], many classification techniques, such as decision trees [4, 5] and artificial neural networks [6, 7], do not produce acceptable predictive accuracy when applied to real problems. In this sense, the authors proposed the use of the PSO algorithm [12] to discover classification rules. Their method initializes a population of individuals (rules) whose dimension is given by the number of attributes of the objects evaluated. A fitness function is defined to evaluate the candidate classification rules for the problem in question; the smaller the rule set, the easier it is to understand.

The works presented in [40, 45] also discuss the quality of the results produced when classical techniques are used to solve real world problems. For Wang et al. [40], decision trees, artificial neural networks, and naïve Bayes are some of the classic techniques applied to classification problems, and they work well in linearly separable problems. The authors proposed a classification method based on a multiple linear regression model (MRLM) and the PSO algorithm, called PSO-MRLM, which is able to learn the relationships between objects from a database and also express them mathematically. The MRLM technique builds a mathematical model able to represent the relationship between variables, associating the value of each independent variable with the value of the dependent variable. The set of equations (rules) captures this relationship using coefficients that act on each of the attributes of each rule. The proposed method uses the PSO algorithm to estimate the values of these coefficients.

A classifier based on PSO is proposed in [41] and applied to power system operations. Pattern recognition based on PSO (PSO-PR) evaluates an operating condition and predicts whether it is safe or unsafe. The first step in obtaining the classifier is to generate the patterns (data) necessary for the training process. As the number of variables describing the power system state is very large, the next step involves a feature selection procedure, responsible for eliminating redundant and irrelevant variables. Finally, PSO is used to minimize the percentage of misclassification.

Compared with the proposed PSClass and cPSClass, none of the works available in the literature uses a clustering algorithm followed by a vector quantization approach. The proposals here initially operate in a completely self-organized manner, and only after the particles are positioned in regions of the space that represent the input data are their positions corrected so as to minimize the classification error. Despite these differences, in the present paper the performances of the PPSO, MPSO, and AMPSO algorithms are compared with those of PSClass and cPSClass.

5.2. PSO with Dynamic Population

The original PSO has also been modified to dynamically determine the number of particles in the swarm. According to [46], there are few publications dealing with the issue of dynamic population size in PSO, and the main ones are briefly reviewed below.

In [47] two dynamic population approaches are proposed to improve the PSO speed: expanding population PSO (EP-PSO) and diminishing population PSO (DP-PSO). According to the experiments shown, both approaches reduce the run time of PSO by 60% on average.

In [48], the dynamic population PSO (DPPSO) was proposed, in which the population size is controlled by a function that contains an attenuation item (responsible for reducing the number of particles) and a waving item (particles are generated to avoid local optima, and the ones considered inefficient die and are removed from the swarm).

In [46], the dynamic multiobjective particle swarm optimization (DMOPSO) algorithm was proposed, which uses a particle growth strategy inspired by the incrementing multiobjective evolutionary algorithm (IMOEA) [49], in which particles of best fitness are selected to generate new particles.

In [50], the authors proposed the ladder particle swarm optimization (LPSO) algorithm, where the population size is determined based on its diversity.

In [51], two approaches to multiobjective optimization with dynamic population are proposed: dynamic multiobjective particle swarm optimization (DPSMO) and dynamic particle swarm evolutionary algorithms (DPSEA), which combine PSO and genetic algorithm mechanisms to regulate the number of particles in the swarm.

All PSO versions with dynamic population present growth strategies for the number of particles to ensure the diversity of solutions and pruning strategies to reduce the processing cost. Their applications, however, are focused on optimization problems, not on data classification problems, as proposed in the present paper. Therefore, no direct performance comparisons will be made with these methods.

6. Performance Assessment

The PSClass and cPSClass algorithms were implemented in MATLAB 7.0, and their performances were compared with those of three algorithms based on the PSO (PPSO, MPSO, and AMPSO) and of three well-known classification algorithms from the literature: naïve Bayes, k-NN, and a multilayer perceptron (MLP) neural network trained with the backpropagation algorithm [11]. The parametric configuration for the MLP was as follows: learning rate equal to 0.3, maximum number of epochs equal to 500, and number of neurons in the hidden layer given by (number of attributes + number of classes)/2. The classic algorithms were run using the Weka 3.6 tool [58], and the results of the PSO-based algorithms were taken from the literature. A k-fold cross-validation procedure was used to train the algorithms and estimate the prediction error; the algorithms were run 30 times, each with k = 10 folds, and the objects of each class were distributed evenly and with stratification among the 10 folds (a sketch of this protocol is given below). For benchmarking, we used seven databases available in the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets.html). The main features of these databases are listed in Table 1.
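The sketch below reproduces the evaluation protocol: stratified 10-fold cross-validation repeated 30 times. scikit-learn supplies only the fold splitting; `train_classifier` and `accuracy` are hypothetical stand-ins for training and scoring a classifier such as PSClass or cPSClass.

```python
# Hedged sketch of the evaluation protocol used in this section.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate(Y, labels, train_classifier, accuracy, runs=30, k=10):
    scores = []
    for run in range(runs):
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=run)
        for train_idx, test_idx in skf.split(Y, labels):
            model = train_classifier(Y[train_idx], labels[train_idx])
            scores.append(accuracy(model, Y[test_idx], labels[test_idx]))
    return np.mean(scores), np.std(scores)
```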

The parametric configurations used in the two algorithms proposed here were inherited from their predecessor, the PSC. The vectors φ1, φ2, and φ3 are random in the interval (0, 1). The inertia moment (ω) has an initial value of 0.90 and decays by a factor of 0.95 per iteration, down to 0.01. The number of particles used in PSClass was twice the number of classes present in the respective database in Table 1. The domain of the vector space was limited to [0, 1], and the velocity of the particles was controlled in the range [−vmax, vmax] to avoid particle explosion.

In its original version, the PSC stops after a fixed number of iterations. For the construction of the PSClass, the PSC was modified such that the number of iterations required for convergence is not defined by the user. To do that, a stopping criterion had to be proposed: the stabilization of the swarm; that is, if the average Euclidean distance between the current position and the position of the memory prototypes is less than a given threshold, then the algorithm is assumed to have converged. This stopping criterion is assessed every two iterations.

In cPSClass, the pruning of particles occurs when they do not move during two consecutive iterations. When the similarity between a prototype and the object of greatest similarity to it is greater than the affinity threshold ε, the prototype is cloned, and the new prototype is positioned midway between it and the object evaluated. A suppression step was added right after the pruning step so as to minimize the number of prototypes generated.

The threshold ε depends on the dataset and must be defined empirically. A number of particles added to the swarm much different from the number of classes in the database compromises the effectiveness of the algorithm. To understand the influence of ε on the cPSClass algorithm, a sensitivity analysis was performed using the databases from Table 1. The following values of ε were tested: 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, and 0.50. The results in terms of accuracy (percentage of correct classification, Pcc), number of particles (NP), and number of iterations (NI) for each value of ε tested are shown in Table 2. The results presented are the average over 30 runs, and the best average results of Pcc, NP, and NI are highlighted in bold.

Increasing the value of ε is the same as increasing the affinity degree between the winner particle and the input object. Thus, when the threshold increases, the number of clones tends to increase, which can compromise the effectiveness of the algorithm. As the accuracy of the algorithm is, in principle, its most important measure of interest, the value of ε highlighted in bold in Table 2 as the most suitable one for each dataset was that with the highest Pcc. It can be observed, though, that the algorithm presents some robustness in relation to ε, in the sense that its performance in terms of Pcc, NP, and NI changes little even with a large variation in ε.

To evaluate the suppression step in cPSClass, the performance of the algorithm with this step was compared against its performance without it. The results are shown in Table 3, where cPSClass with the suppression step is identified as ScPSClass to differentiate it from the standard cPSClass. It can be observed that the accuracy of cPSClass with the suppression step was worse for the Iris, Glass, and E. coli datasets, but the number of prototypes generated was substantially smaller. For the other datasets, the performance of ScPSClass was equivalent or even better, despite a substantial reduction in the number of particles in the swarm.

The parametric configurations of the PPSO, MPSO, and AMPSO algorithms are available in [34, 35]. The number of prototypes of these algorithms, as well as of PSClass and cPSClass, is shown in Table 4. For the MLP network, the number of output neurons was set equal to the number of classes in the database (Table 1), as was the value of k for the k-NN.

Table 5 shows the performance of the PPSO, MPSO, AMPSO, PSClass, and cPSClass algorithms and of the classic algorithms from the literature when applied to the databases in Table 1. The best absolute results, on average, are shown in bold in the table. The PSClass and cPSClass algorithms showed similar performances, on average, for the databases of Table 1. For the Yeast and Ruspini databases, the cPSClass algorithm presented maximal accuracy, whilst no other algorithm was capable of such performance. It is worth noting that the number of particles generated by cPSClass is greater than that used in PSClass for the Wine and Haberman databases, as can be observed in Table 4. Naïve Bayes, k-NN, and MLP presented a worse performance than PSClass and cPSClass for the Haberman database but were competitive for the other databases. PPSO, MPSO, and AMPSO performed quite well for the Glass database, being competitive with PSClass and cPSClass.

A Shapiro-Wilk test [59] was used to determine whether the results presented by the algorithms followed a normal distribution. Assuming a confidence level of 0.95, the test of normality revealed that the null hypothesis should be rejected and, thus, a nonparametric test should be used to assess the statistical significance of the performances. In order to determine whether the difference in performance among the evaluated algorithms is significant, we used the Friedman test [60, 61], a nonparametric method analogous to the parametric ANOVA (analysis of variance) [62]. The Friedman test is based on the ranking of the results obtained for each sample (database) for all algorithms. The number of degrees of freedom is given by k − 1, where k = 8 is the number of algorithms compared, so there are 7 degrees of freedom. According to the chi-square table [63], the critical values for α = 0.05 and α = 0.01 are 14.07 and 18.48, respectively. As the calculated statistic is less than the critical values, the null hypothesis is not rejected. In other words, the difference in performance between the algorithms is not statistically significant for the databases tested.
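This comparison can be reproduced with SciPy's Friedman test; each argument is one algorithm's accuracy over the seven databases. The numbers below are placeholders for illustration only, not the paper's results.

```python
# Friedman test over per-database accuracies (placeholder values).
from scipy.stats import friedmanchisquare

acc_psclass  = [95.0, 90.1, 88.2, 97.3, 80.4, 85.5, 92.6]   # placeholder
acc_cpsclass = [95.8, 89.7, 87.9, 97.9, 81.2, 84.8, 91.9]   # placeholder
acc_knn      = [94.2, 90.8, 86.5, 96.4, 79.6, 83.9, 90.7]   # placeholder

stat, p = friedmanchisquare(acc_psclass, acc_cpsclass, acc_knn)
print(f"chi-square = {stat:.2f}, p-value = {p:.3f}")
# With 8 algorithms, the statistic would be compared against the chi-square
# distribution with 7 degrees of freedom (critical value 14.07 at 0.05).
```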

7. Conclusion

This paper presented two algorithms based on the original particle swarm optimization algorithm—PSClass and cPSClass—to solve data classification problems. The PSClass initially finds natural groups within the database, in an unsupervised way, and then adjusts the prototypes’ positions using an LVQ1 method, in a supervised way, in order to minimize the misclassification error. The cPSClass algorithm is similar to PSClass, except in its unsupervised phase, where it dynamically determines the number of particles in the swarm using the immune clonal selection metaphor. A parametric sensitivity analysis for cPSClass was also performed to evaluate the relation between the growth of the swarm and the affinity threshold ε. For cPSClass, a suppression step was added right after the pruning step to reduce the number of prototypes generated by the algorithm.

The algorithms were applied to seven data classification problems, and their performance was compared with that of algorithms well known in the literature (k-NN, MLP, and naïve Bayes), in addition to three algorithms based on the original PSO (PPSO, MPSO, and AMPSO). A k-fold cross-validation procedure was used to train the algorithms and estimate the prediction error, and the algorithms were run 30 times with k = 10 folds. The results showed that cPSClass was the best algorithm, on average, for the Haberman database, whilst AMPSO was the best for Glass, naïve Bayes was the best for E. coli, k-NN was the best for Wine, and MLP was the best for Iris. The PSClass and cPSClass algorithms showed similar results to each other for all the databases evaluated. However, cPSClass has the advantage of automatically determining the number of prototypes (particles) in the swarm.

Acknowledgments

The authors thank CNPq, Fapesp, Capes, and MackPesquisa for the financial support.