Abstract

Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining; it seeks to group homogeneous objects on the basis of similarity and is used in many medical disciplines and various other applications. In general, no single algorithm is suitable for all types of data, conditions, and applications; each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification remains active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural distribution of species in biogeography, is adapted to data clustering problems by modifying its main operators. Similar to other population-based algorithms, BBO starts with an initial population of candidate solutions to an optimization problem and an objective function that is evaluated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the results were compared with those of eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

1. Introduction

Unsupervised data classification (or data clustering) is one of the most important and popular data analysis techniques; it refers to the process of grouping a set of data objects into clusters, in which the data within a cluster have a high degree of similarity while the data of different clusters have a high degree of dissimilarity [1]. The aim is to minimize the intracluster distance and maximize the intercluster distance. Clustering techniques have been applied in many areas such as document clustering [2, 3], medicine [4, 5], biology [6], agriculture [7], marketing and consumer analysis [8, 9], geophysics [10], prediction [11], image processing [12–14], security and crime detection [15], and anomaly detection [16].

In the clustering problem, a dataset is divided into a number of subgroups such that elements in one group are more similar to one another than to elements of another group [17]. Clustering can be defined as finding previously undiscovered patterns, knowledge, and information in a given dataset using some criterion function [18]. It is an NP-complete problem when the number of clusters is greater than three [17]. Over the last two decades, many heuristic algorithms have been suggested, and it has been demonstrated that such algorithms are suitable for solving clustering problems in large datasets. For instance, a Tabu Search algorithm for clustering is presented in [19], a Simulated Annealing algorithm in [20], a Genetic Algorithm in [21], and a particle swarm optimization (PSO) algorithm, one of the most powerful optimization methods, in [22]. Fernández Martínez and Garcia-Gonzalo [23–26] clearly explained how the PSO family parameters should be chosen close to the second-order stability region. Hatamlou et al. [27] introduced the Big Bang Big Crunch algorithm for the clustering problem; this algorithm has its origin in one of the theories of the evolution of the universe, namely, the Big Bang and Big Crunch theory. An Ant Colony Optimization algorithm was developed to solve the clustering problem in [28]. Such algorithms are able to find global solutions to the clustering problem. The application of the Gravitational Search Algorithm (GSA) [29] to the clustering problem was introduced in [30]. A comprehensive review of clustering algorithms can be found in [31–33].

In this paper, a new heuristic clustering algorithm is developed. It is based on the evolutionary method called Biogeography-Based Optimization (BBO) proposed in [34]. The BBO method is inspired by the science of biogeography and is a population-based evolutionary algorithm. Convergence results for this method and its practical applications can be found in [35]. The algorithm has demonstrated good performance on various optimization benchmark problems [36]. The proposed clustering algorithm is tested on six datasets from the UCI Machine Learning Repository [37], and the obtained results are compared with those of other similar algorithms.

The rest of this paper is organized as follows. Section 2 describes the clustering problem. A brief overview of the BBO algorithm is given in Section 3. Section 4 presents the clustering algorithm. Experimental results are reported in Section 5. Finally, Section 6 presents conclusions and future research directions.

2. Cluster Analysis

In cluster analysis we suppose that we are given a finite set $A$ of points in the $d$-dimensional space $\mathbb{R}^d$, that is, $A = \{a^1, a^2, \ldots, a^n\}$, where $a^i \in \mathbb{R}^d$ for $i = 1, \ldots, n$.

In general, clustering algorithms can be classified into two categories, namely, hierarchical clustering and partitional clustering. Partitional clustering methods are the most popular class of center-based clustering methods, and partitional algorithms are often preferable to hierarchical ones: they remain usable in applications involving large datasets, where the construction of a nested grouping of patterns is computationally prohibitive [38, 39]. The clustering problem is called hard clustering if every data point belongs to exactly one cluster; unlike hard clustering, in fuzzy clustering the clusters are allowed to overlap and instances have degrees of membership in each cluster [40]. In this paper we exclusively consider the hard unconstrained clustering problem. The subject of cluster analysis is therefore the partition of the set $A$ into a given number $k$ of disjoint subsets $A_j$, $j = 1, \ldots, k$, with respect to predefined criteria such that
$$A_j \neq \emptyset,\quad j = 1, \ldots, k; \qquad A_i \cap A_j = \emptyset,\quad i \neq j; \qquad \bigcup_{j=1}^{k} A_j = A. \tag{1}$$

Each cluster can be identified by its center (or centroid). To determine the dissimilarity between objects, many distance metrics have been defined. The most popular is the Euclidean distance, and it is also the metric used in this research to measure the dissimilarity between data objects. For two objects $a^i$ and $a^j$ with $d$ dimensions, the distance is defined in [38] as
$$d\left(a^i, a^j\right) = \sqrt{\sum_{p=1}^{d} \left(a^i_p - a^j_p\right)^2}. \tag{2}$$
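For concreteness, the metric of (2) can be computed directly, as in the following minimal Python sketch (the function name is ours):

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two d-dimensional objects, eq. (2)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))
```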

Since there are different ways to cluster a given set of objects, a fitness function (cost function) for measuring the goodness of a clustering must be defined. A famous and widely used function for this purpose is the total mean-square quantization error (MSE) [41], defined as
$$\mathrm{MSE} = \sum_{i=1}^{n} \min\left\{ d\left(a^i, c_j\right) \mid j = 1, \ldots, k \right\}, \tag{3}$$
where $d(a^i, c_j)$ is the distance between object $a^i$ and the center $c_j$ of the $j$-th cluster, found by calculating the mean value of the objects within the respective cluster.
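A direct implementation of this cost function might look as follows (an illustrative Python sketch; the array layout and function name are our assumptions, not the paper's):

```python
import numpy as np

def clustering_fitness(data, centroids):
    """Cost function of eq. (3): each object contributes the Euclidean
    distance to its nearest cluster centre; the total is minimized."""
    # data: (n, d) array of objects; centroids: (k, d) array of centres
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return dists.min(axis=1).sum()
```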

3. Biogeography-Based Optimization Algorithm

In this section, we give a brief description of the Biogeography-Based Optimization (BBO) algorithm. BBO is an evolutionary optimization method based on the study of the geographic distribution of biological organisms (biogeography) [34]. Organisms in BBO are called species, and their distribution is considered over time and space. Species can migrate between islands, which are called habitats. A habitat is characterized by a Habitat Suitability Index (HSI), which is analogous to the fitness in other population-based optimization algorithms and measures the goodness of a solution; the HSI is related to many features of the habitat [34]. Considering a global optimization problem and a population of candidate solutions (individuals), each individual can be regarded as a habitat characterized by its HSI, and a habitat with a high HSI is a good solution (for a maximization problem). Similar to other evolutionary algorithms, good solutions share their features with others to produce a better population in the next generations; conversely, an individual with low fitness is unlikely to share features and likely to accept them. A suitability index variable (SIV) indicates the habitability of a habitat. Just as many factors in the real world make one habitat more suitable to reside in than another, a solution has several SIVs that affect its goodness; a SIV is a feature of the solution and can be imagined like a gene in a GA.

BBO consists of two main steps: migration and mutation. Migration is a probabilistic operator that is intended to improve a candidate solution [42, 43]. In BBO, the migration operator includes two different types, immigration and emigration, whose rates are adaptively determined for each solution in each generation based on the fitness of the solution. Each candidate solution has its own immigration rate $\lambda_i$ and emigration rate $\mu_i$, as follows:
$$\lambda_i = I\left(1 - \frac{k_i}{N}\right), \qquad \mu_i = E\,\frac{k_i}{N}, \tag{4}$$
where $N$ is the population size and $k_i$ is the rank of the $i$-th individual in a list sorted by fitness from the worst to the best ($k_i = 1$ is the worst and $k_i = N$ is the best). Also, $E$ and $I$ are the maximum possible emigration and immigration rates, which are typically set to one. A good candidate solution has a relatively high emigration rate and a low immigration rate, while the converse is true for a poor candidate solution. Therefore, if a given solution is selected to be modified (in the migration step), its immigration rate $\lambda_i$ is used to probabilistically modify each SIV in that solution, and the emigrating candidate solution is probabilistically chosen based on $\mu$. Different methods have been suggested for sharing information between habitats (candidate solutions); in [44], migration is defined by
$$H_i(\mathrm{SIV}) \leftarrow \alpha\,H_i(\mathrm{SIV}) + (1 - \alpha)\,H_j(\mathrm{SIV}), \tag{5}$$
where $\alpha$ is a number between 0 and 1. It could be random or deterministic, or it could be proportional to the relative fitness of the solutions $H_i$ and $H_j$. Equation (5) means that the SIV (solution feature) of $H_i$ comes from a combination of its own SIV and the emigrating solution's SIV.

Mutation is a probabilistic operator that randomly modifies a decision variable of a candidate solution. The purpose of mutation is to increase diversity in the population. The mutation rate is calculated in [34] as
$$m_i = m_{\max}\left(\frac{1 - P_i}{P_{\max}}\right), \tag{6}$$
where $P_i$ is the solution probability, $P_{\max} = \max_{i=1,\ldots,N} P_i$, $N$ is the population size, and $m_{\max}$ is a user-defined parameter.

If a habitat $H_i$ is selected for mutation, then one of its decision variables (SIVs) is chosen probabilistically based on $m_i$ and replaced with a randomly generated SIV. Several options can be used for the mutation; one option for implementing it can be defined as
$$H_i(\mathrm{SIV}) \leftarrow H_i(\mathrm{SIV}) + \rho\,\left(\mathrm{UB} - \mathrm{LB}\right)\varepsilon, \tag{7}$$
where $\rho$ is a user-defined parameter near 0, $\mathrm{UB}$ and $\mathrm{LB}$ are the upper and lower bounds for each decision variable, and $\varepsilon$ is a random number, normally distributed in the range (0, 1).
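As an illustration, the three operators above can be transcribed as follows (a hedged Python sketch for a minimization problem; the function names are ours, and the solution probabilities P of eq. (6), which in [34] come from a species-count model, are simply taken as an input here):

```python
import numpy as np

def migration_rates(costs, E=1.0, I=1.0):
    """Immigration/emigration rates of eq. (4); habitats are ranked from
    worst (k = 1, highest cost) to best (k = N, lowest cost)."""
    costs = np.asarray(costs)
    N = len(costs)
    ranks = np.empty(N)
    ranks[np.argsort(-costs)] = np.arange(1, N + 1)  # highest cost gets rank 1
    mu = E * ranks / N             # emigration: high for good habitats
    lam = I * (1.0 - ranks / N)    # immigration: high for poor habitats
    return lam, mu

def migrate(receiver_siv, emigrant_siv, alpha):
    """Blended migration of eq. (5)."""
    return alpha * receiver_siv + (1.0 - alpha) * emigrant_siv

def mutation_rates(P, m_max=0.02):
    """Mutation rates of eq. (6), given the solution probabilities P."""
    P = np.asarray(P)
    return m_max * (1.0 - P) / P.max()

def mutate(siv, lb, ub, rho=0.02):
    """One option for the mutation of eq. (7): perturb a SIV in proportion
    to the width of its feasible range, then clip back into bounds."""
    return np.clip(siv + rho * (ub - lb) * np.random.randn(), lb, ub)
```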

Based on the above description, the main steps of the BBO algorithm can be described as follows.

Step 1 (initialization). First, set the initial parameters: the number of generations (needed for the termination criterion), the population size (the number of habitats/islands/solutions), the number of design variables, the maximum immigration and emigration rates, and the mutation coefficient; then create a random set of habitats (the population).

Step 2 (evaluation). Compute corresponding HSI values and rank them on the basis of fitness.

Step 3 (update parameters). Update the immigration rate and emigration rate for each island/solution. Bad solutions have low emigration rates and high immigration rates whereas good solutions have high emigration rates and low immigration rates.

Step 4 (select islands). Probabilistically select the immigration islands based on the immigration rates and select the emigrating islands based on the emigration rates via roulette wheel selection.

Step 5 (migration phase). Probabilistically change the selected features (SIVs) according to (4)-(5), using the islands selected in the previous step.

Step 6 (mutation phase). Probabilistically carry out mutation based on the mutation probability for each solution, that is, based on (6).

Step 7 (check the termination criteria). If the termination criterion is not met, go to Step 2; otherwise, stop.

4. BBO Algorithm for Data Clustering

In order to use the BBO algorithm for data clustering, one-dimensional arrays are used to encode the centres of the desired clusters and to represent candidate solutions in the proposed algorithm. The length of each array is equal to $k \times d$, where $k$ is the number of clusters and $d$ is the dimensionality of the considered dataset. Figure 1 presents an example of a candidate solution for a problem with a given number of cluster centroids and attributes.

Assume that $X_i$ is the $i$-th candidate solution and $x_{ij}$ is the $j$-th cluster centre of $X_i$, with $i = 1, \ldots, N$ and $j = 1, \ldots, k$, where $N$ is the number of islands (candidate solutions), whose value in this work is set to 100. Each candidate solution thus encodes the centers of all clusters.
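The encoding can be illustrated with a short sketch (the helper name decode is ours):

```python
import numpy as np

# Each habitat stores all k cluster centres in one flat array of length k*d;
# decode() recovers the (k, d) matrix of centroids used by the cost function.
def decode(habitat, k, d):
    return np.asarray(habitat).reshape(k, d)

# Example: k = 3 clusters in d = 4 dimensions -> a habitat of length 12.
habitat = np.random.rand(3 * 4)
centroids = decode(habitat, k=3, d=4)   # shape (3, 4)
```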

A good initial population is important to the performance of BBO, as most population-based methods are affected by the quality of the initial population. Hence, in the proposed algorithm, a high-quality initial population is created by taking the nature of the input dataset into consideration, as detailed in the pseudocode below. One candidate solution is produced by dividing the whole dataset into equal parts, three are produced from the minimum, maximum, and average values of the data objects in the dataset, and all other solutions are created randomly. This procedure creates a high-quality initial population and ensures that the candidate solutions are spread over a wide area of the search space, which in turn increases the chance of finding (near-)global optima.
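A sketch of this seeding scheme under the flat k×d encoding above; reading "dividing the whole dataset into equal parts" as equally spaced values between the dataset minimum and maximum is our interpretation:

```python
import numpy as np

def initial_population(data, k, pop_size=100):
    """Seeded initial population: one habitat from equally spaced values
    between the dataset minimum and maximum, three from the per-feature
    minimum, maximum, and mean, and the rest uniformly at random."""
    lo, hi, mean = data.min(axis=0), data.max(axis=0), data.mean(axis=0)
    d = data.shape[1]
    pop = [np.linspace(lo, hi, k).ravel()]         # divide the range into k parts
    pop += [np.tile(stat, k) for stat in (lo, hi, mean)]
    while len(pop) < pop_size:                     # remaining habitats: random
        pop.append((lo + (hi - lo) * np.random.rand(k, d)).ravel())
    return np.array(pop)
```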

To ensure that the best habitats/solutions are preserved, an elitist strategy is used to carry the best individuals found so far into the new population; that is, elitism retains the best solutions in the population from one generation to the next. In the proposed algorithm, the population of the next iteration is therefore created by merging the initial population (old population) with the population resulting from the migration and mutation process (new population). Suppose $\mathrm{POP}_{\mathrm{old}}$ is the entire population of candidate solutions before an iteration of BBO, $\mathrm{POP}_{\mathrm{new}}$ is that population changed by the iteration, and $\gamma$ is the percentage of the old population that is kept for the next iteration (set to 30% in this work). The number of kept habitats from the old population is
$$N_{\mathrm{old}} = \left\lceil \gamma \cdot N \right\rceil, \tag{8}$$
and the number of kept habitats from the new population is
$$N_{\mathrm{new}} = N - N_{\mathrm{old}}. \tag{9}$$

Hence the population of the next iteration is
$$\mathrm{POP} = \mathrm{Best}_{N_{\mathrm{old}}}\left(\mathrm{POP}_{\mathrm{old}}\right) \cup \mathrm{Best}_{N_{\mathrm{new}}}\left(\mathrm{POP}_{\mathrm{new}}\right), \tag{10}$$
where $\mathrm{Best}_m(\cdot)$ denotes the $m$ best solutions of a population.
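The elitist replacement of (8)-(10) can be sketched as follows (illustrative Python; the function name is ours):

```python
import numpy as np

def elitist_merge(old_pop, old_fit, new_pop, new_fit, gamma=0.3):
    """Keep the best ceil(gamma*N) habitats of the old population, eq. (8),
    and fill the remainder with the best of the new one, eqs. (9)-(10)."""
    N = len(old_pop)
    n_old = int(np.ceil(gamma * N))              # eq. (8)
    n_new = N - n_old                            # eq. (9)
    keep_old = np.argsort(old_fit)[:n_old]       # minimization: lowest cost first
    keep_new = np.argsort(new_fit)[:n_new]
    pop = np.vstack([old_pop[keep_old], new_pop[keep_new]])   # eq. (10)
    fit = np.concatenate([np.asarray(old_fit)[keep_old],
                          np.asarray(new_fit)[keep_new]])
    return pop, fit
```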

Suppose $X_i$ is the $i$-th candidate solution and $x_{ij}$ is the $j$-th decision variable of $X_i$. Based on the above description, the pseudocode of the proposed method is shown in Algorithm 1.

Create an initial population POP, as follows:
  (i) X_1 = a candidate solution built by dividing the range [Min(dataset), Max(dataset)]
      into equal parts, where Min(dataset) and Max(dataset) correspond to the data items
      whose features are the minimum and maximum values in the whole dataset, respectively.
  (ii) X_2 = a candidate solution built from the minimum of the dataset
  (iii) X_3 = a candidate solution built from the maximum of the dataset
  (iv) X_4 = a candidate solution built from the mean of the dataset
  (v) X_5, ..., X_N: create all other candidate solutions randomly as follows:
    for i = 5 to N
      for j = 1 to k
        for p = 1 to d
          x_i(j, p) = random number in the range (LC(p), UC(p)),
          where LC(p) and UC(p) are the lower and upper bounds of each decision
          variable (i.e. LC(p) < x_i(j, p) < UC(p)).
        end for
      end for
    end for
Calculate the fitness of POP (cost function) according to (3) and sort the population from the
best (minimum) fitness to the worst one (maximum).
for i = 1 to N (for each solution X_i)
  Compute the emigration rate mu_i proportional to the fitness of X_i, with 0 <= mu_i <= 1,
  and the immigration rate lambda_i = 1 - mu_i, with 0 <= lambda_i <= 1.
end for
Set N_old and N_new based on (8)-(10).
while termination conditions are not met
  NewPOP <- POP
  for i = 1 to N (for each solution NewX_i)
    for j = 1 to k*d (for each candidate solution decision variable index j)
      Probabilistically decide whether to immigrate based on lambda_i.
      if NewX_i(j) is selected for immigrating
        Use the {mu} values to probabilistically select the emigrating solution X_e.
        if X_e is selected for emigrating
          NewX_i(j) <- alpha * NewX_i(j) + (1 - alpha) * X_e(j), based on (5), with
          alpha = random number in the range (0.9, 1).
        end if
      end if
    end for (repeat for next candidate solution decision variable index)
    Probabilistically decide whether to mutate NewX_i based on its mutation rate m_i in (6).
    if NewX_i is selected for mutation, then, based on (7):
      for j = 1 to k*d
        if rand(0, 1) < m_i
          NewX_i(j) <- NewX_i(j) + rho * (UC(j) - LC(j)) * epsilon, where rho is a
          user-defined parameter whose value in this work is 0.02, and LC(j) and UC(j)
          are the lower and upper bounds of each decision variable
          Break
        end if
      end for
    end if
    Recalculate the fitness of NewX_i.
  end for (repeat for next solution)
  Sort the population based on the fitness function.
  Make the new population POP by combining the old POP and NewPOP based on (8)-(10).
  Sort the new population based on the fitness function.
  Update and store the best solution found so far.
end while
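Putting the pieces together, Algorithm 1 can be condensed into the following Python sketch, reusing the helper functions from the previous sections (initial_population, clustering_fitness, decode, migration_rates, elitist_merge). Treating the emigration rate as a stand-in for the solution probability P in eq. (6) is our simplification, not the paper's prescription:

```python
import numpy as np

def bbo_cluster(data, k, pop_size=100, max_iter=200,
                m_max=0.02, rho=0.02, gamma=0.3):
    """Hedged sketch of the proposed BBO clustering algorithm."""
    d = data.shape[1]
    LC = np.tile(data.min(axis=0), k)    # lower bound per decision variable
    UC = np.tile(data.max(axis=0), k)    # upper bound per decision variable

    pop = initial_population(data, k, pop_size)
    fit = np.array([clustering_fitness(data, decode(h, k, d)) for h in pop])
    best, best_fit = pop[fit.argmin()].copy(), fit.min()

    for _ in range(max_iter):
        lam, mu = migration_rates(fit)
        m = m_max * (1.0 - mu) / mu.max()        # eq. (6) with P ~ mu (assumption)
        new_pop, new_fit = pop.copy(), fit.copy()
        for i in range(pop_size):
            # migration phase: immigrate into habitat i with probability lam[i]
            for j in range(k * d):
                if np.random.rand() < lam[i]:
                    e = np.random.choice(pop_size, p=mu / mu.sum())  # roulette wheel
                    a = np.random.uniform(0.9, 1.0)                  # eq. (5)
                    new_pop[i, j] = a * new_pop[i, j] + (1 - a) * pop[e, j]
            # mutation phase: perturb one decision variable, eq. (7)
            if np.random.rand() < m[i]:
                j = np.random.randint(k * d)
                new_pop[i, j] = np.clip(
                    new_pop[i, j] + rho * (UC[j] - LC[j]) * np.random.randn(),
                    LC[j], UC[j])
            new_fit[i] = clustering_fitness(data, decode(new_pop[i], k, d))
        # elitist replacement, eqs. (8)-(10)
        pop, fit = elitist_merge(pop, fit, new_pop, new_fit, gamma)
        if fit.min() < best_fit:
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
    return decode(best, k, d), best_fit
```

Under these assumptions, a call such as `centroids, cost = bbo_cluster(iris_data, k=3)` would, for example, partition the Iris data into three clusters.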

5. Experimental Results

The proposed method is implemented using MATLAB 7.6 on a T6400, 2 GHz, 2 GB RAM computer. To evaluate the performance of the proposed algorithm, the results obtained have been compared with those of other algorithms on several well-known datasets taken from the UCI Machine Learning Repository [37]. Six datasets are employed to validate the proposed method. These datasets, named Cancer, CMC, Iris, Glass, Wine, and Vowel, cover examples of data of low, medium, and high dimensions. A brief summary of the characteristics of these datasets is presented in Table 1. They have been used by many authors to study and evaluate the performance of their algorithms, and they can be described as follows.

Wisconsin Breast Cancer Dataset. This dataset has 683 points with nine features: cell size uniformity, clump thickness, bare nuclei, cell shape uniformity, marginal adhesion, single epithelial cell size, bland chromatin, normal nucleoli, and mitoses. There are two clusters in this dataset: malignant and benign.

Contraceptive Method Choice Dataset. This dataset is a subset of the 1987 National Indonesia Contraceptive Prevalence Survey. The samples are married women who either were not pregnant or did not know if they were at the time of interview. The problem is to predict the choice of current contraceptive method (no use has 629 objects, long-term methods have 334 objects, and short-term methods have 510 objects) of a woman based on her demographic and socioeconomic characteristics.

Ripley’s Glass Dataset. This dataset has 214 points with nine features. The dataset has six different clusters: building windows float processed, building windows nonfloat processed, vehicle windows float processed, containers, tableware, and headlamps [41].

Iris Dataset. This dataset consists of three different species of iris flower: Iris setosa, Iris virginica, and Iris versicolour. For each species, 50 samples with four features each (sepal length, sepal width, petal length, and petal width) were collected [45].

Vowel Dataset. It consists of 871 Indian Telugu vowel sounds. The dataset has three features, corresponding to the first, second, and third vowel frequencies, and six overlapping classes [45].

Wine Dataset. This dataset gives the results of a chemical analysis of wines grown in Italy, derived from three different cultivars. There are 178 instances with 13 continuous attributes grouped into 3 classes. There are no missing attribute values.

In this paper the performance of the proposed algorithm is compared with recent algorithms reported in the literature, including k-means [38], TS [19], SA [20], PSO [22, 39], BB-BC [27], GA [21], GSA [30], and ACO [46].

In this paper two criteria are used to measure the quality of the solutions found by the clustering algorithms:

(i) Sum of intracluster distances: the distance between each data vector in a cluster and the centroid of that cluster is calculated and summed up, as defined in (3). This is also the evaluation fitness in this paper. Clearly, the smaller the value, the higher the quality of the clustering.

(ii) Error rate (ER): the percentage of misplaced points over the total number of points in the dataset,
$$\mathrm{ER} = \frac{100}{n}\sum_{i=1}^{n}\begin{cases}1, & A_i \neq B_i,\\ 0, & \text{otherwise},\end{cases} \tag{11}$$
where $n$ is the total number of data points and $A_i$ and $B_i$ denote the clusters to which the $i$-th point belongs before and after clustering, respectively.
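The error rate can be computed as in the following Python sketch; since cluster indices are arbitrary, each cluster is mapped to the majority class among its members, a common convention that the text does not spell out:

```python
import numpy as np

def error_rate(true_labels, assigned):
    """Error rate (ER) of eq. (11), in percent. Labels and cluster indices
    are assumed to be non-negative integers."""
    true_labels, assigned = np.asarray(true_labels), np.asarray(assigned)
    mapped = np.empty_like(true_labels)
    for c in np.unique(assigned):
        members = true_labels[assigned == c]
        mapped[assigned == c] = np.bincount(members).argmax()  # majority class
    return 100.0 * np.mean(mapped != true_labels)
```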

Since all the algorithms are stochastic, each experiment was repeated for 10 independent runs in order to assess the stability and robustness of the algorithms against their randomized nature. The average, best (minimum), and worst (maximum) solutions and the standard deviation over the 10 runs of each algorithm on each dataset are used for the comparison.

Table 2 presents the intracluster distances obtained by the clustering algorithms on the datasets above. For the Cancer dataset, the average, best, and worst solutions of the BBO algorithm are 2964.3879, 2964.3875, and 2964.3887, respectively, which are much better than those of the other algorithms except BB-BC, which attains the same values. The proposed method thus provides the optimum value with a small standard deviation compared to the other methods. For the CMC dataset, the proposed method reaches an average of 5532.2550, which none of the other algorithms attains. The results on the Glass dataset show that the BBO method converges to the optimum of 215.2097 in all runs, while the average solutions of k-means, TS, SA, GA, PSO, BB-BC, GSA, and ACO are 227.9779, 283.79, 282.19, 230.49328, 231.2306, 255.38, 233.5433, and 273.46, respectively. For the Iris dataset, the average solution found by BBO is 96.5653, while the corresponding values for k-means, TS, SA, GA, PSO, BB-BC, GSA, and ACO are 105.7290, 97.8680, 99.95, 98.1423, 96.7654, 125.1970, 96.7311, and 97.1715, respectively. As seen from the results for the Vowel dataset, the BBO algorithm outperformed the k-means, TS, SA, GA, PSO, BB-BC, GSA, and ACO algorithms with an average solution of 149072.9042. For the Wine dataset, the BBO algorithm achieved the optimum value of 16292.6782, which is significantly better than the other tested algorithms.

From Table 2, we can see that the BBO algorithm achieves good performance in terms of the average, best, and worst intracluster distances on all six datasets. This means that BBO can find good quality solutions.

The best centroid coordinates obtained by the BBO algorithm on the test datasets are shown in Tables 3–8. Finally, Table 9 shows the error rate values obtained by the algorithms on the real datasets. As seen from the results in Table 9, the BBO algorithm attains the minimum average error rate on all the real datasets. However, the topography of the clustering cost function (3) has a valley shape, so the solutions found by these methods are not guaranteed to be global. Overall, the experimental results in the tables demonstrate that the proposed method is a practicable and effective technique for data clustering.

6. Conclusions

In summary, this paper presents a new clustering algorithm based on the recently developed BBO heuristic algorithm, which is inspired by mathematical models from the science of biogeography (the study of the distribution of animals and plants over time and space).

To evaluate its performance, the BBO algorithm was tested on six real-life datasets and compared with eight other clustering algorithms. The experimental results indicate that the BBO optimization algorithm is a suitable and useful heuristic technique for data clustering. In order to improve the obtained results, as future work we plan to hybridize the proposed approach with other algorithms, and we intend to apply this method to other data mining problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Ministry of Education Malaysia for funding this research project through a Research University Grant of Universiti Teknologi Malaysia (UTM), project titled “Based Syariah Compliance Chicken Slaughtering Process for Online Monitoring and Enforcement” (01G55). Also, thanks are due to the Research Management Center (RMC) of UTM for providing an excellent research environment in which to complete this work.