Research Article  Open Access
Samir Brahim Belhaouari, Shahnawaz Ahmed, Samer Mansour, "Optimized K-Means Algorithm", Mathematical Problems in Engineering, vol. 2014, Article ID 506480, 14 pages, 2014. https://doi.org/10.1155/2014/506480
Optimized K-Means Algorithm
Abstract
The localization of the region of interest (ROI) that contains the face is the first step in any automatic recognition system and is a special case of face detection. However, face localization in an input image is a challenging task due to possible variations in location, scale, pose, occlusion, illumination, facial expression, and background clutter. In this paper we introduce a new optimized K-means algorithm that finds the optimal centers for each cluster, corresponding to the global minimum of the K-means objective. The method was tested by locating the faces in input images through image segmentation, separating each input image into two classes: faces and non-faces. To evaluate the proposed algorithm, the MIT-CBCL, BioID, and Caltech datasets are used. The results show significant localization accuracy.
1. Introduction
The face detection problem is to determine whether a face exists in an input image and, if so, to return its location. In the face localization problem, by contrast, the input image is known to contain a face and the goal is to determine its location. An automatic recognition system, which is a special case of face detection, locates, in its earliest phase, the region of interest (ROI) that contains the face. Face localization in an input image is a challenging task due to possible variations in location, scale, pose, occlusion, illumination, facial expression, and background clutter. Various methods have been proposed to detect and/or localize faces in an input image; however, there is still a need to improve the performance of localization and detection methods. A survey of some face recognition and detection techniques can be found in [1]. A more recent survey, mainly on face detection, was written by Yang et al. [2].
One of the ROI detection trends is the idea of segmenting the image pixels into a number of groups or regions based on similarity properties. It has gained attention in recognition technologies, which rely on grouping the feature vectors to distinguish between the image regions and then concentrate on a particular region that contains the face. One of the earliest surveys on image segmentation techniques was done by Fu and Mui [3]. They classified the techniques into three classes: feature thresholding (clustering), edge detection, and region extraction. A later survey by N. R. Pal and S. K. Pal [4] concentrated only on fuzzy, non-fuzzy, and colour image techniques. Much effort has been devoted to ROI detection, which may be divided into two types: region growing and region clustering. The difference between the two is that region clustering searches for the clusters without prior information, while region growing needs initial points, called seeds, to detect the clusters. The main problem in the region growing approach is finding suitable seeds, since the clusters grow from the neighbouring pixels of these seeds based on a specified deviation. For seed selection, Wan and Higgins [5] defined a number of criteria to select insensitive initial points for the subclass of region growing. To reduce the region growing time, Chang and Li [6] proposed a fast region growing method using parallel segmentation.
As mentioned before, the region clustering approach searches for clusters without prior information. Pappas and Jayant [7] generalized the K-means algorithm to group the image pixels based on spatial constraints and their intensity variations. These constraints require, first, that the region intensity be close to the data and, second, that there be spatial continuity. Their generalized algorithm is adaptive and includes these spatial constraints. Like the K-means clustering algorithm, their algorithm is iterative. The spatial constraints are included using a Gibbs random field model, and they concluded that each region is characterized by a slowly varying intensity function. Because of unimportant features, the algorithm is not very accurate. Later, to improve their method, they proposed using a caricature of the original image to keep only the most significant features and remove the unimportant ones [8]. Besides the problem of time-costly segmentation, there are three basic problems that occur during the clustering process: center redundancy, dead centers, and centers trapped at local minima. Moving K-means (MKM), proposed in [9], can overcome these three problems by minimizing center redundancy and dead centers as well as indirectly reducing the effect of centers trapped at local minima. In spite of that, MKM is sensitive to noise, its centers are not located in the middle (centroid) of a group of data, and some members of the center with the largest fitness are assigned as members of a center with the smallest fitness. To reduce the effects of these problems, Isa et al. [10] proposed three modified versions of the MKM clustering algorithm for image segmentation: fuzzy moving k-means, adaptive moving k-means, and adaptive fuzzy moving k-means. After that, Sulaiman and Isa [11] introduced a new segmentation method based on a new clustering algorithm, the Adaptive Fuzzy K-means Clustering Algorithm (AFKM).
On the other hand, the Fuzzy C-means (FCM) algorithm has been used in image segmentation, but it still has some drawbacks: a lack of robustness to noise and outliers, a crucial parameter that is generally selected based on experience, and a segmentation time that depends on image size. To overcome these drawbacks of the FCM algorithm, Cai et al. [12] integrated both local spatial and gray information to propose a fast and robust Fuzzy C-means algorithm called the Fast Generalized Fuzzy C-Means Algorithm (FGFCM) for image segmentation. Moreover, many researchers have identified data characteristics that have a significant impact on K-means clustering analysis, including the scattering of the data, noise, high dimensionality, the size of the data, outliers in the data, types of attributes, and scales of attributes [13]. However, more investigation is needed to understand how data distributions affect K-means clustering performance. In [14], Xiong et al. provided a formal and organized study of the effect of skewed data distributions on K-means clustering. Previous research has examined the impact of high dimensionality on K-means clustering performance: the traditional Euclidean notion of proximity is not very useful on real-world high-dimensional datasets, such as document datasets and gene-expression datasets. To overcome this challenge, researchers turn to dimensionality reduction techniques, such as multidimensional scaling [15]. Second, K-means has difficulty detecting "natural" clusters with nonspherical shapes [13]. Modified K-means clustering algorithms are one direction for solving this problem. Guha et al. [16] proposed the CURE method, which makes use of multiple representative points to capture the shape of "natural" clusters. The problem of outliers and noise in the data can also reduce the performance of clustering algorithms [17], especially for prototype-based algorithms such as K-means.
One direction for solving this kind of problem is to apply outlier removal techniques before conducting K-means clustering. For example, a simple method [9] of detecting outliers is based on a distance measure. On the other hand, many modified K-means clustering algorithms that work well for small and medium-sized datasets are unable to deal with large datasets. Bradley et al. [18] introduced a discussion of scaling K-means clustering to large datasets.
Arthur and Vassilvitskii implemented a preliminary version, k-means++, in C++ to evaluate the performance of k-means. k-means++ [19] uses a careful seeding method to optimize the speed and the accuracy of k-means. Experiments were performed on four datasets, and the results showed that this algorithm is competitive with the optimal clustering. Experiments also demonstrated better speed (70% faster) and accuracy (the potential value obtained is better by factors of 10 to 1000) than k-means. Kanungo et al. [20] also proposed an algorithm for the k-means problem; it is competitive but quite slow in practice. Xiong et al. investigated the measures that best reflect the performance of K-means clustering [14]. An organized study was performed to understand the impact of data distributions on the performance of K-means clustering. Their work also improved the traditional K-means clustering so that it can handle datasets with large variation in cluster sizes. This formal study illustrates that entropy sometimes provides misleading information on clustering performance. Based on this, the coefficient of variation (CV) was proposed to validate the clustering outcome. Experimental results showed that, for datasets with great variation in "true" cluster sizes, K-means reduces (to less than 1.0) the variation in resultant cluster sizes, while for datasets with small variation in "true" cluster sizes K-means increases (to greater than 0.3) the variation in resultant cluster sizes.
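The D²-weighted seeding that k-means++ uses can be sketched as follows. This is a minimal one-dimensional illustration, not the authors' implementation; the function name, the 1-D restriction, and the roulette-wheel sampling details are our choices.

```python
import random

def kmeans_pp_seed(points, k, seed=0):
    """k-means++ seeding on 1-D data: the first center is chosen uniformly
    at random; each later center is drawn with probability proportional to
    the squared distance to its nearest already-chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest chosen center
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        total = sum(d2)
        if total == 0:              # every point coincides with a center
            centers.append(rng.choice(points))
            continue
        r = rng.random() * total    # roulette-wheel draw over the D^2 weights
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc > r:
                centers.append(p)
                break
    return centers

# two well-separated groups: seeding should pick one center from each
pts = [0.0, 0.0, 0.0, 10.0, 10.0, 10.0]
seeds = kmeans_pp_seed(pts, 2)
```

Because points already chosen as centers get weight zero, the second draw here is forced into the opposite group, which is exactly the behavior that makes the seeding "careful."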
Scalability of k-means to large datasets is one of its major limitations. Chiang et al. [21] proposed a simple and easy-to-implement algorithm for reducing the computation time of k-means. This pattern reduction algorithm compresses and/or removes iteration patterns that are not likely to change their membership. A pattern is compressed by selecting one of the patterns to be removed and setting its value to the average of all removed patterns, which can be written as follows:
This technique can also be applied to other clustering algorithms, such as population-based clustering algorithms, and it can reduce the computation time of many clustering algorithms, being especially effective for large and high-dimensional datasets. In [22], Bradley et al. proposed scalable k-means, which uses buffering and a two-stage compression scheme to compress or remove patterns in order to improve the performance of k-means. The algorithm is slightly faster than k-means but does not always produce the same result. The factors that degrade the performance of scalable k-means are the compression processes and the compression ratio. Farnstrom et al. also proposed a simple single-pass k-means algorithm [23], which reduces the computation time of k-means. Ordonez and Omiecinski proposed relational k-means [24], which uses a block and incremental concept to improve the stability of scalable k-means. The computation time of k-means can also be reduced using parallel bisecting k-means [25] and triangle inequality [26] methods.
Although K-means is the most popular and simple clustering algorithm, its major difficulty is that it cannot guarantee globally optimal results, because the initial cluster centers are selected randomly. Reference [27] presents a novel technique that selects the initial cluster centers using a Voronoi diagram, automating the selection of the initial cluster centers. To validate the performance, experiments were performed on a range of artificial and real-world datasets. The quality of the algorithm is assessed using the following error rate (ER) equation:
The lower the error rate, the better the clustering. The proposed algorithm produces better clustering results than traditional K-means, and experimental results showed a reduction in the number of iterations needed to determine the centroids.
All previous solutions and efforts to increase the performance of the K-means algorithm still need more investigation, since they all search for a local minimum. Searching for the global minimum would certainly improve the performance of the algorithm. The rest of the paper is organized as follows. In Section 2, the proposed method is introduced, which finds the optimal centers for each cluster, corresponding to the global minimum of the K-means objective. In Section 3, the results are discussed and analyzed using sets from the MIT-CBCL, BioID, and Caltech datasets. Finally, in Section 4, conclusions are drawn.
2. K-Means Clustering
K-means, an unsupervised learning algorithm first proposed by MacQueen in 1967 [28], is a popular method of cluster analysis. It organizes the given objects into uniform classes, called clusters, based on similarities among the objects with respect to certain criteria. It solves the well-known clustering problem by considering certain attributes and performing an iterative alternating fitting process. The algorithm partitions a dataset into disjoint clusters S₁, …, S_k such that each observation belongs to the cluster with the nearest mean. Let c_j be the centroid of cluster j and let d(x_i, c_j) be the dissimilarity between c_j and an object x_i, typically the squared Euclidean distance ‖x_i − c_j‖². Then the function minimized by k-means is given by the following equation:
J = ∑_{j=1}^{k} ∑_{x_i ∈ S_j} d(x_i, c_j).
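As a concrete reference point, the iterative alternating fitting just described can be sketched on one-dimensional data as follows. This is a minimal illustration in our own notation, not the authors' code; the deterministic initialization is our choice (standard k-means initializes randomly).

```python
def kmeans_1d(data, k, iters=100):
    """Plain Lloyd-type k-means on 1-D data; returns the final centers."""
    # spread the initial centers across the sorted data; k-means is
    # sensitive to this choice, which is what the paper's optimized
    # variant aims to avoid
    step = max(1, len(data) // k)
    centers = sorted(data)[::step][:k]
    for _ in range(iters):
        # assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        # update step: each center moves to its cluster's mean
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:          # converged to a (possibly local) minimum
            break
        centers = new
    return centers

# two obvious groups: the centers settle on the group means
centers = kmeans_1d([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], 2)
```

Each iteration can only decrease the objective J, so the procedure converges, but only to a local minimum determined by the initialization.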
K-means clustering is utilized in a vast number of applications including machine learning, fault detection, pattern recognition, image processing, statistics, and artificial intelligence [11, 29, 30]. K-means is considered one of the fastest clustering algorithms, and it has a number of variants that address its sensitivity to the selection of initial points and other issues, such as estimating the number of clusters [31], initializing the cluster centroids [32], and speeding up the algorithm [33].
2.1. Modifications in K-Means Clustering
The global k-means algorithm [34] is a proposal to improve the global search properties of the k-means algorithm and its performance on very large datasets by computing clusters successively. The main weakness of k-means clustering, namely its sensitivity to the initial positions of the cluster centers, is overcome by the global k-means clustering algorithm, which is a deterministic global optimization technique. It does not select initial values randomly; instead, an incremental approach is applied to optimally add one new cluster center at each stage. Global k-means also proposes a method to reduce the computational load while maintaining solution quality. Experiments were performed to compare the performance of k-means, global k-means, min k-means, and fast global k-means, as shown in Figure 1. Numerical results show that the global k-means algorithm considerably outperforms the other k-means algorithms.
Bagirov [35] proposed a new version of the global k-means algorithm for minimum sum-of-squares clustering problems. He also compared three different versions of the k-means algorithm in order to propose the modified version of the global k-means algorithm. The proposed algorithm computes clusters incrementally, and the cluster centers from the previous iteration are used to compute a k-partition of the dataset. The starting point for the next cluster center is computed by minimizing an auxiliary cluster function. Given a finite set of points in n-dimensional space, global k-means computes the centroid of the set as in the following equation:
Numerical experiments performed on 14 datasets demonstrate the efficiency of the modified global k-means algorithm in comparison with the multistart k-means (MS k-means) and GKM algorithms as the number of clusters grows. Modified global k-means, however, requires more computational time than the global k-means algorithm. Xie and Jiang [36] also proposed a novel version of the global k-means algorithm: the method of creating the next cluster center in the global k-means algorithm is modified, which yields a better execution time without affecting the performance of the global k-means algorithm. Another extension of standard k-means clustering is Global Kernel k-means [37], which optimizes the clustering error in feature space by employing kernel k-means as a local search procedure. A kernel function is used in order to solve the M-clustering problem, and near-optimal solutions are used to optimize the clustering error in the feature space. This incremental algorithm has a high ability to identify nonlinearly separable clusters in input space and no dependence on cluster initialization. It can handle weighted data points, making it suitable for graph partitioning applications. Two major modifications were performed in order to reduce the computational cost with no major effect on solution quality.
Video imaging and image segmentation are important applications of k-means clustering. Hung et al. [38] modified the k-means algorithm for color image segmentation, proposing a weight selection procedure in the W-k-means algorithm. An evaluation function is used for comparison, where I is the segmented image, R is the number of regions in the segmented image, A_i is the area (the number of pixels) of the i-th region, and e_i is the color error of region i.
e_i is the sum of the Euclidean distances between the color vectors of the original image and the segmented image. Smaller values of the evaluation function indicate better segmentation results. Results on color image segmentation illustrate that the proposed procedure produces better segmentation than random initialization. Maliatski and Yadid-Pecht [39] also proposed an adaptive, hardware-driven k-means clustering approach. Their algorithm uses both pixel intensity and pixel distance in the clustering process. In [40], an FPGA implementation of real-time k-means clustering for colored images is presented, using a filtering algorithm for the hardware implementation. Sulaiman and Isa [11] presented a clustering algorithm called adaptive fuzzy-K-means clustering for image segmentation. It can be used for different types of images, including medical images, microscopic images, and CCD camera images. The concepts of fuzziness and belongingness are applied to produce clustering that is more adaptive than other conventional clustering algorithms. The proposed adaptive fuzzy-K-means clustering algorithm provides better segmentation and visual quality for a range of images.
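The variable definitions above match the well-known Liu and Yang segmentation evaluation function; assuming that is the function intended here, it reads:

```latex
F(I) = \sqrt{R}\,\sum_{i=1}^{R} \frac{e_i^{2}}{\sqrt{A_i}},
```

where R is the number of regions in the segmented image I, A_i is the area of region i, and e_i is its color error. The √A_i in the denominator penalizes many small regions less severely than the raw error would, while the √R factor penalizes over-segmentation overall.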
2.2. The Proposed Methodology
In this method, the face location is determined automatically using an optimized K-means algorithm. First, the input image is reshaped into a vector of pixel values, and the pixels are then clustered into two classes by applying a certain threshold determined by the algorithm. Pixels with values below the threshold are put in the non-face class; the rest of the pixels are put in the face class. However, some unwanted parts may get clustered into this face class, so the algorithm is applied again to the face class to obtain only the face part.
2.2.1. Optimized K-Means Algorithm
The proposed modified K-means algorithm is intended to overcome the limitations of the standard version by using differential equations to determine the optimal separation point. The algorithm finds the optimal centers for each cluster, corresponding to the global minimum of the K-means objective. As an example, say we would like to cluster an image into two subclasses, face and non-face; we then look for a separator that splits the image into two different clusters, after which the means of the clusters can be found easily. Figure 2 shows the separation points and the mean of each class. To find these points, we propose the following modifications for the continuous and discrete cases.
Let X = {x₁, …, x_N} be a dataset and let x_i be an observation. Let f(x) = (1/N) ∑_{i=1}^{N} δ(x − x_i) be the probability mass function (PMF) of X, where δ is the Dirac function.
When the size of the data is very large, the question of the cost of minimizing J = ∑_{j=1}^{k} ∑_{x_i ∈ S_j} ‖x_i − μ_j‖² arises, where k is the number of clusters and μ_j is the mean of cluster j.
Let J^(d) denote the objective function restricted to the d-th coordinate.
Then it is enough to minimize each J^(d) without losing generality, since J = ∑_{d=1}^{D} J^(d), where D is the dimension of x_i; for a fixed assignment of points to clusters, the squared-error objective decomposes coordinate-wise.
2.2.2. Continuous Case
For the continuous case, f(x) plays the role of the probability density function (PDF). Then, for 2 classes, we need to minimize the following equation:
J(t, m₁, m₂) = ∫_{−∞}^{t} (x − m₁)² f(x) dx + ∫_{t}^{+∞} (x − m₂)² f(x) dx.
If we can use (8)
Let X be any random variable with probability density function f; we want to find m₁ and m₂ such that they minimize (7) as follows:
We know that
So we can find m₁ and m₂ in terms of the separator t. Let us define J₁ and J₂, the two parts of the objective, as follows:
After simplification we get
To find the minimum, J₁ and J₂ are partially differentiated with respect to m₁ and m₂, respectively. After simplification, we conclude that the minimization occurs when m₁ = E[X | X < t] and m₂ = E[X | X ≥ t].
To find the minimum of , we need to find
After simplification we get
In order to find the minimum of , we will find and separately and add them as follows:
After simplification we get
Similarly
After simplification we get
Adding these two differentials and simplifying, it follows that
Equating this with 0 gives
But , as in that case , which contradicts the initial assumption. Therefore
Therefore, we have to find all values of the separator t that satisfy (26), which is easier than minimizing the objective directly.
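The stationarity condition can be stated compactly (reconstructed in our notation from the derivation above): an optimal separator is a fixed point of the midpoint map,

```latex
t = \frac{m_1(t) + m_2(t)}{2},
\qquad
m_1(t) = \mathbb{E}[X \mid X < t], \quad
m_2(t) = \mathbb{E}[X \mid X \ge t],
```

that is, the optimal threshold lies exactly halfway between the two conditional means, consistent with the fact that in one-dimensional 2-means the cluster boundary is the midpoint of the two centers.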
To find the minimum in terms of t, it is enough to derive
Then we find
2.2.3. Finding the Centers of Some Common Probability Density Functions
Uniform Distribution. The probability density function of the continuous uniform distribution is f(x) = 1/(b − a) for a ≤ x ≤ b (and 0 otherwise), where a and b are the two parameters of the distribution. The expected values of the left and right parts are E[X | X < t] = (a + t)/2 and E[X | X ≥ t] = (t + b)/2.
Putting all these in (28) and solving, we get t = (a + b)/2, so the two centers are m₁ = (3a + b)/4 and m₂ = (a + 3b)/4.
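The uniform-case result can be checked numerically with a brute-force search over separators on a dense grid standing in for U(0, 1). This is throwaway illustration code in our own notation, not part of the paper's method.

```python
def best_split(xs):
    """Return the two cluster means that minimize the two-cluster
    within-class sum of squares over all separators of the sorted data."""
    xs = sorted(xs)
    best = None
    for i in range(1, len(xs)):            # split: xs[:i] | xs[i:]
        left, right = xs[:i], xs[i:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        cost = (sum((x - m1) ** 2 for x in left)
                + sum((x - m2) ** 2 for x in right))
        if best is None or cost < best[0]:
            best = (cost, m1, m2)
    return best[1], best[2]

# dense grid on [0, 1] approximating the uniform distribution U(0, 1)
grid = [i / 1000 for i in range(1001)]
c1, c2 = best_split(grid)
# theory with a = 0, b = 1: centers (3a + b)/4 = 0.25 and (a + 3b)/4 = 0.75
```

The brute-force scan agrees with the closed-form centers to within the grid resolution.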
Log-Normal Distribution. The probability density function of a log-normal distribution is f(x) = (1/(xσ√(2π))) exp(−(ln x − μ)²/(2σ²)) for x > 0.
The probabilities to the left and right of t are, respectively, Φ((ln t − μ)/σ) and 1 − Φ((ln t − μ)/σ), where Φ denotes the standard normal cumulative distribution function.
The expected values of the left and right parts are E[X | X < t] = e^{μ+σ²/2} Φ((ln t − μ)/σ − σ) / Φ((ln t − μ)/σ) and E[X | X ≥ t] = e^{μ+σ²/2} (1 − Φ((ln t − μ)/σ − σ)) / (1 − Φ((ln t − μ)/σ)).
Putting all these in (28), simplifying under suitable substitutions, and solving, we get
Normal Distribution. The probability density function of a normal distribution is f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)).
The probabilities to the left and right of t are, respectively, Φ((t − μ)/σ) and 1 − Φ((t − μ)/σ).
The expected values of the left and right parts are E[X | X < t] = μ − σφ(z)/Φ(z) and E[X | X ≥ t] = μ + σφ(z)/(1 − Φ(z)), where z = (t − μ)/σ and φ is the standard normal density.
Putting all these in (28) and solving, we get t = μ, so the two centers are m₁ = μ − σ√(2/π) and m₂ = μ + σ√(2/π).
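Since the optimal separator of a normal density is its mean, the centers μ ∓ σ√(2/π) (the means of the two half-normals) can be verified by Monte Carlo. The code below is our own illustrative check, with arbitrary example parameters.

```python
import math
import random

rng = random.Random(1)
mu, sigma = 5.0, 2.0                      # example parameters (ours)
xs = [rng.gauss(mu, sigma) for _ in range(200_000)]

t = mu  # by symmetry, the optimal separator of a normal is its mean
left = [x for x in xs if x < t]
right = [x for x in xs if x >= t]
m1 = sum(left) / len(left)
m2 = sum(right) / len(right)

# the mean of a half-normal lies sigma * sqrt(2/pi) away from mu
offset = sigma * math.sqrt(2 / math.pi)
# expected centers: mu - offset and mu + offset; their midpoint is t
```

The sample conditional means land on μ ± σ√(2/π) up to Monte Carlo noise, and their midpoint recovers the separator t = μ, matching the fixed-point condition of (28).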
2.2.4. Discrete Case
Let X be a discrete random variable and assume we have a set of N observations {x₁, …, x_N} from X. Let p(x) be the mass density function for an observation x.
Let t be a candidate separator. We define m₁(t) as the mean of all observations x_i < t, and m₂(t) as the mean of all observations x_i ≥ t. Define V(t) as the variance of the two centers forced by t as a separator:
The first part of simplifies as follows:
Similarly, define
Now, in order to simplify the calculation, we rearrange as follows:
The first part is simplified as follows:
Similarly, simplifying the second part yields
In general, these types of optimization problems arise when the dataset is very large.
If we consider to be very large then , , and : and, as , we can further simplify as follows:
To find the minimum of , let us locate when :
But , and so is , since is an increasing sequence, which implies that where
Therefore,
Let us define where and are the means of the first and second parts, respectively.
To find the minimum for the total variation, it is enough to find the good separator between and such that and (see Figure 3).
After finding a few local minima, we can obtain the global minimum by choosing the smallest among them.
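For two clusters in one dimension, this search can be realized directly by scanning every separator of the sorted observations with prefix sums and keeping the split with the smallest total within-class variance, which is by construction the global minimum. The sketch below is in our notation, not the authors' code.

```python
def optimal_two_means_1d(data):
    """Exhaustively scan every separator of the sorted data and return
    (center1, center2, separator) for the split giving the global minimum
    of the two-cluster within-class variance. Prefix sums make the scan
    O(n) after the O(n log n) sort."""
    xs = sorted(data)
    n = len(xs)
    psum, psq = [0.0], [0.0]
    for x in xs:
        psum.append(psum[-1] + x)
        psq.append(psq[-1] + x * x)
    best = None
    for i in range(1, n):                  # left = xs[:i], right = xs[i:]
        s1, s2 = psum[i], psum[n] - psum[i]
        q1, q2 = psq[i], psq[n] - psq[i]
        # within-class sum of squares per side: sum(x^2) - n_side * mean^2
        cost = (q1 - s1 * s1 / i) + (q2 - s2 * s2 / (n - i))
        if best is None or cost < best[0]:
            best = (cost, s1 / i, s2 / (n - i), (xs[i - 1] + xs[i]) / 2)
    return best[1], best[2], best[3]

# two clear groups: the scan recovers their means and a midpoint separator
c1, c2, sep = optimal_two_means_1d([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
```

Because every one of the n − 1 possible separators is evaluated, no local minimum can be mistaken for the global one, unlike the randomly initialized standard k-means.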
3. Face Localization Using Proposed Method
The superiority of the modified K-means algorithm over the standard algorithm in finding a global minimum is now clear, and we can explain our solution based on the modified algorithm. Suppose we have an image with pixel values from 0 to 255. First, we reshape the input image into a vector of pixels; Figure 4 shows an input image before the operation of the modified K-means algorithm. The image contains a face part, a normal background, and a background with illumination. We then split the image pixels into two classes, below and above a certain threshold determined by the algorithm.
Let class 1 represent the non-face part and class 2 the face together with the shaded region.
All pixels with values less than the threshold belong to the first class and are assigned the value 0. This keeps the pixels with values above the threshold; these pixels belong to both the face part and the illuminated part of the background, as shown in Figure 5.
In order to remove the illuminated background part, another threshold is applied, using the algorithm, to separate the face part from the illuminated background part. Two new classes are obtained, below and above this second threshold. The pixels with values less than the threshold are assigned the value 0, and the nonzero pixels left represent the face part. Let class 1 represent the illuminated background part and class 2 the face part.
The result of the removal of the illuminated background part is shown in Figure 6.
One can see that some face pixels were assigned to 0 due to the effects of the illumination. Therefore, there is a need to return to the original image in Figure 4 to fully extract the image window corresponding to the face. A filtering process is then performed to reduce the noise and the result is shown in Figure 7.
Figure 8 shows the final result which is a window containing the face extracted from the original image.
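The two-pass procedure of Figures 4-8 can be sketched end to end as follows. Here `two_class_threshold` is a simple iterative stand-in that converges to the midpoint-of-means separator condition, not the authors' exact search, and the tiny synthetic "image" and its gray values are hypothetical.

```python
def two_class_threshold(values):
    """Iterate t <- (mean below t + mean above t) / 2 until it settles;
    the limit is a fixed point of the separator condition used above."""
    t = sum(values) / len(values)
    while True:
        lo = [v for v in values if v < t]
        hi = [v for v in values if v >= t]
        if not lo or not hi:
            return t
        nt = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(nt - t) < 1e-6:
            return nt
        t = nt

def localize_face(image):
    """Pass 1: dark background vs. bright pixels (face + illumination).
    Pass 2: within the bright pixels, illuminated background vs. face.
    Pixels below the second threshold are zeroed out."""
    flat = [p for row in image for p in row]
    t1 = two_class_threshold(flat)
    bright = [p for p in flat if p >= t1]
    t2 = two_class_threshold(bright)
    return [[p if p >= t2 else 0 for p in row] for row in image]

# toy 3x4 "image": one row each of dark background, illuminated
# background, and face pixels (values are made up for illustration)
image = [[20] * 4, [120] * 4, [220] * 4]
face_only = localize_face(image)
```

On this toy input, the first pass discards the dark background and the second pass discards the illuminated background, leaving only the face-class pixels nonzero, mirroring the sequence of Figures 5 and 6.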
4. Results
In this section, the experimental results of the proposed algorithm are analyzed and discussed, starting with a description of the datasets used in this work, followed by the results. Three popular databases were used to test the proposed face localization method. From the MIT-CBCL database, images were selected based on different conditions such as light variation, pose, and background variation. The second dataset is the BioID dataset, from which images were selected based on different conditions such as indoor/outdoor settings. The last dataset is the Caltech dataset, from which images were selected on the same basis.
4.2. MIT-CBCL Dataset
This dataset was established by the Center for Biological and Computational Learning (CBCL) at the Massachusetts Institute of Technology (MIT) [41]. It contains several images per person, taken under different light conditions, with cluttered backgrounds, and at different scales and poses. A few examples of these images are shown in Figure 9.
4.2. BioID Dataset
This dataset [42] consists of gray-level images, each showing the frontal view of the face of one of the test persons. The images were taken under different lighting conditions, with complex backgrounds, and contain tilted and rotated faces. This dataset is considered the most difficult one for eye detection (see Figure 10).
4.3. Caltech Dataset
This database [43] was collected by Markus Weber at the California Institute of Technology. It contains face images of several subjects under different lighting conditions, with different facial expressions, and against complex backgrounds (see Figure 11).
4.4. Proposed Method Result
The proposed method was evaluated on these datasets. Table 1 shows the proposed method's results on the MIT-CBCL, Caltech, and BioID databases. In this test, we focused on images of faces at different angles, from 30 degrees left to 30 degrees right, on variation in the background, and on indoor and outdoor images. Two sets of images were selected from the MIT-CBCL dataset and two other sets from the other databases to evaluate the proposed method. The first set from MIT-CBCL contains images with different poses, and the second set contains images with different backgrounds; each set contains 150 images. The BioID and Caltech sets contain images with indoor and outdoor settings. The results show the efficiency of the proposed modified K-means algorithm in locating faces in images with strong background and pose changes. In addition, another parameter considered in this test is the localization time. The results show that the proposed method can locate the face in a very short time because of its lower complexity due to the use of differential equations.

Figure 12 shows some examples of face localization on MITCBCL.
In Figure 13(a), an original image from the BioID dataset is shown. The localization position is shown in Figure 13(b). A filtering operation is then needed to remove the unwanted parts, as shown in Figure 13(c), after which the area of the ROI is determined.
Figure 14 shows the face position on an image from the Caltech dataset.
5. Conclusion
In this paper, we focused on improving ROI detection by suggesting a new method for face localization. In this method, the input image is reshaped into a vector and the optimized k-means algorithm is then applied twice to cluster the image pixels into classes. At the end, the block window corresponding to the class of pixels containing only the face is extracted from the input image. Three sets of images taken from the MIT-CBCL, BioID, and Caltech datasets, differing in illumination and background as well as indoor/outdoor settings, were used to evaluate the proposed method. The results show that the proposed method achieves significant localization accuracy. Our future research directions include finding the optimal centers for higher-dimensional data, which we believe is a generalization of our approach to higher dimensions (8) and to more subclusters.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This study is a part of the Internal Grant Project no. 419011811122. The authors gratefully acknowledge the continued support from Alfaisal University and its Office of Research.
References
 R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and machine recognition of faces: a survey,” Proceedings of the IEEE, vol. 83, no. 5, pp. 705–740, 1995. View at: Publisher Site  Google Scholar
 M.H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34–58, 2002. View at: Publisher Site  Google Scholar
 K. S. Fu and J. K. Mui, “A survey on image segmentation,” Pattern Recognition, vol. 13, no. 1, pp. 3–16, 1981. View at: Publisher Site  Google Scholar
 N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277–1294, 1993. View at: Publisher Site  Google Scholar
 S.Y. Wan and W. E. Higgins, “Symmetric region growing,” IEEE Transactions on Image Processing, vol. 12, no. 9, pp. 1007–1015, 2003. View at: Publisher Site  Google Scholar
 Y.L. Chang and X. Li, “Fast image region growing,” Image and Vision Computing, vol. 13, no. 7, pp. 559–571, 1995. View at: Publisher Site  Google Scholar
 T. N. Pappas and N. S. Jayant, “An adaptive clustering algorithm for image segmentation,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '89), vol. 3, pp. 1667–1670, May 1989. View at: Google Scholar
 T. N. Pappas, “An adaptive clustering algorithm for image segmentation,” IEEE Transactions on Signal Processing, vol. 40, no. 4, pp. 901–914, 1992. View at: Publisher Site  Google Scholar
 M. Y. Mashor, “Hybrid training algorithm for RBF network,” International Journal of the Computer, the Internet and Management, vol. 8, no. 2, pp. 50–65, 2000. View at: Google Scholar
 N. A. M. Isa, S. A. Salamah, and U. K. Ngah, “Adaptive fuzzy moving Kmeans clustering algorithm for image segmentation,” IEEE Transactions on Consumer Electronics, vol. 55, no. 4, pp. 2145–2153, 2009. View at: Publisher Site  Google Scholar
 S. N. Sulaiman and N. A. M. Isa, “Adaptive fuzzyKmeans clustering algorithm for image segmentation,” IEEE Transactions on Consumer Electronics, vol. 56, no. 4, pp. 2661–2668, 2010. View at: Publisher Site  Google Scholar
 W. Cai, S. Chen, and D. Zhang, “Fast and robust fuzzy cmeans clustering algorithms incorporating local information for image segmentation,” Pattern Recognition, vol. 40, no. 3, pp. 825–838, 2007. View at: Publisher Site  Google Scholar
 P. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining, Pearson Addison-Wesley, 2006.
 H. Xiong, J. Wu, and J. Chen, “K-means clustering versus validation measures: a data-distribution perspective,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 39, no. 2, pp. 318–331, 2009.
 I. Borg and P. J. F. Groenen, Modern Multidimensional Scaling: Theory and Applications, Springer Science+Business Media, 2005.
 S. Guha, R. Rastogi, and K. Shim, “CURE: an efficient clustering algorithm for large databases,” ACM SIGMOD Record, vol. 27, no. 2, pp. 73–84, 1998.
 E. M. Knorr, R. T. Ng, and V. Tucakov, “Distance-based outliers: algorithms and applications,” VLDB Journal, vol. 8, no. 3-4, pp. 237–253, 2000.
 P. S. Bradley, U. Fayyad, and C. Reina, “Scaling clustering algorithms to large databases,” in Proceedings of the 4th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '98), pp. 9–15, 1998.
 D. Arthur and S. Vassilvitskii, “k-means++: the advantages of careful seeding,” in Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035, 2007.
 T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “A local search approximation algorithm for k-means clustering,” Computational Geometry: Theory and Applications, vol. 28, no. 2-3, pp. 89–112, 2004.
 M. C. Chiang, C. W. Tsai, and C. S. Yang, “A time-efficient pattern reduction algorithm for k-means clustering,” Information Sciences, vol. 181, no. 4, pp. 716–731, 2011.
 P. S. Bradley, U. M. Fayyad, and C. Reina, “Scaling clustering algorithms to large databases,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '98), pp. 9–15, 1998.
 F. Farnstrom, J. Lewis, and C. Elkan, “Scalability for clustering algorithms revisited,” SIGKDD Explorations, vol. 2, no. 1, pp. 51–57, 2000.
 C. Ordonez and E. Omiecinski, “Efficient disk-based K-means clustering for relational databases,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 8, pp. 909–921, 2004.
 Y. Li and S. M. Chung, “Parallel bisecting k-means with prediction clustering algorithm,” Journal of Supercomputing, vol. 39, no. 1, pp. 19–37, 2007.
 C. Elkan, “Using the triangle inequality to accelerate k-means,” in Proceedings of the 20th International Conference on Machine Learning, pp. 147–153, August 2003.
 D. Reddy and P. K. Jana, “Initialization for K-means clustering using Voronoi diagram,” Procedia Technology, vol. 4, pp. 395–400, 2012.
 J. B. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297, University of California Press, 1967.
 R. Ebrahimpour, R. Rasoolinezhad, Z. Hajiabolhasani, and M. Ebrahimi, “Vanishing point detection in corridors: using Hough transform and K-means clustering,” IET Computer Vision, vol. 6, no. 1, pp. 40–51, 2012.
 P. S. Bishnu and V. Bhattacherjee, “Software fault prediction using quad tree-based K-means clustering algorithm,” IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 6, pp. 1146–1150, 2012.
 T. Elomaa and H. Koivistoinen, “On autonomous K-means clustering,” in Proceedings of the International Symposium on Methodologies for Intelligent Systems, pp. 228–236, May 2005.
 J. M. Peña, J. A. Lozano, and P. Larrañaga, “An empirical comparison of four initialization methods for the K-means algorithm,” Pattern Recognition Letters, vol. 20, no. 10, pp. 1027–1040, 1999.
 T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: analysis and implementation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881–892, 2002.
 A. Likas, N. Vlassis, and J. J. Verbeek, “The global k-means clustering algorithm,” Pattern Recognition, vol. 36, no. 2, pp. 451–461, 2003.
 A. M. Bagirov, “Modified global k-means algorithm for minimum sum-of-squares clustering problems,” Pattern Recognition, vol. 41, no. 10, pp. 3192–3199, 2008.
 J. Xie and S. Jiang, “A simple and fast algorithm for global K-means clustering,” in Proceedings of the 2nd International Workshop on Education Technology and Computer Science (ETCS '10), vol. 2, pp. 36–40, Wuhan, China, March 2010.
 G. F. Tzortzis and A. C. Likas, “The global kernel k-means algorithm for clustering in feature space,” IEEE Transactions on Neural Networks, vol. 20, no. 7, pp. 1181–1194, 2009.
 W.-L. Hung, Y.-C. Chang, and E. Stanley Lee, “Weight selection in W-K-means algorithm with an application in color image segmentation,” Computers and Mathematics with Applications, vol. 62, no. 2, pp. 668–676, 2011.
 B. Maliatski and O. Yadid-Pecht, “Hardware-driven adaptive k-means clustering for real-time video imaging,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 1, pp. 164–166, 2005.
 T. Saegusa and T. Maruyama, “An FPGA implementation of real-time K-means clustering for color images,” Journal of Real-Time Image Processing, vol. 2, no. 4, pp. 309–318, 2007.
 “CBCL Face Recognition Database,” 2010, http://cbcl.mit.edu/softwaredatasets/heisele/facerecognitiondatabase.html.
 “BioID Face Database,” BioID Support Downloads, 2011.
 “Caltech Computational Vision Archive,” http://www.vision.caltech.edu/htmlfiles/archive.html.
Copyright
Copyright © 2014 Samir Brahim Belhaouari et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.