Research Article  Open Access
Honghong Liao, Jinhai Xiang, Weiping Sun, Jianghua Dai, Shengsheng Yu, "Adaptive Initialization Method Based on Spatial Local Information for k-Means Algorithm", Mathematical Problems in Engineering, vol. 2014, Article ID 761468, 11 pages, 2014. https://doi.org/10.1155/2014/761468
Adaptive Initialization Method Based on Spatial Local Information for k-Means Algorithm
Abstract
The k-means algorithm is a widely used clustering algorithm in the data mining and machine learning communities. However, the initial guess of the cluster centers affects the clustering result seriously, which means that improper initialization may not lead to a desirable clustering result. How to choose suitable initial centers is therefore an important research issue for the k-means algorithm. In this paper, we propose an adaptive initialization framework based on spatial local information (AIFSLI), which takes advantage of the local density of the data distribution. As it is difficult to estimate density exactly, we develop two approximate estimations: density by K nearest neighborhoods (KNN) and density by ε-neighborhoods (ε-Ball), leading to two implementations of the proposed framework. Our empirical study on more than 20 datasets shows promising performance of the proposed framework and demonstrates that it has several advantages: (1) it can find reasonable candidates for the initial centers effectively; (2) it can reduce the number of iterations of k-means methods significantly; (3) it is robust to outliers; and (4) it is easy to implement.
1. Introduction
Clustering is a process of grouping a set of data objects into clusters based on information found in that data [1], which has a long history in a variety of scientific disciplines from statistics and computer science to biology, medicine, and even psychology. The main goals of clustering involve compressing, classifying, and gaining some useful information from data.
Clustering algorithms can be roughly divided into two categories: hierarchical and partitional [2]. Hierarchical clustering algorithms recursively find nested clusters, either in agglomerative (bottom-up) mode or in divisive (top-down) mode, whereas partitional algorithms find all clusters simultaneously as a partition of the dataset. Most hierarchical algorithms have quadratic or higher time complexity in the number of data points [3] and are therefore not suitable for large-scale big-data applications. Partitional algorithms, however, often have lower time complexity and are used in many large-scale tasks, including the bag-of-features (BoF) method in computer vision [4], color quantization in graphics and image processing [5], the bag-of-words model for text classification [6], and, nowadays, the pretraining of deep learning models [7].
The k-means algorithm is undoubtedly one of the most popular and widely used clustering algorithms [8]. It is a hard partitional algorithm, which divides a dataset into a set of exhaustive and mutually exclusive clusters. That is, for a given dataset X = {x_1, x_2, …, x_N} in R^d, the k-means algorithm iteratively divides X into K clusters C_1, C_2, …, C_K, subject to ∪_{i=1}^{K} C_i = X and C_i ∩ C_j = ∅, for all i ≠ j. The k-means algorithm usually generates clusters by optimizing a certain criterion function, and the most intuitive and frequently used one is the Sum of Squared Error (SSE), which is given by

SSE = Σ_{i=1}^{K} Σ_{x∈C_i} ‖x − c_i‖², (1)

where ‖·‖ denotes the Euclidean or L2 norm and c_i = (1/|C_i|) Σ_{x∈C_i} x is the center of cluster C_i, whose cardinality is |C_i|. As a result, finding the optimal clusters for the k-means algorithm amounts to minimizing this criterion function. Other criterion functions can also be used, such as the city block (L1) norm, Hamming distance, and cosine dissimilarity.
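As a concrete illustration, the SSE criterion (1) can be computed as follows (a NumPy sketch; the function name `sse` and the array arguments are our own, not part of the original experiments):

```python
import numpy as np

def sse(X, labels, centers):
    """Sum of Squared Errors: for each cluster, sum the squared
    Euclidean distances from its points to its center (eq. (1))."""
    return sum(
        np.sum((X[labels == i] - c) ** 2)
        for i, c in enumerate(centers)
    )
```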
After the initial centers are given, the k-means algorithm repeats two alternating procedures to group the data points into the desired clusters [9]: it first assigns each data point to the nearest of the K separate centers and then updates each cluster's center based on the assignments. The assignment and update procedures are repeated until either there is no further change in the criterion function or the maximum number of iterations is reached.
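The alternating assignment and update steps can be sketched in a few lines of NumPy (an illustrative implementation, not the MATLAB code used in the experiments; the function name and `tol` parameter are assumptions):

```python
import numpy as np

def kmeans(X, centers, max_iter=100, tol=1e-6):
    """Plain k-means (Lloyd) iterations: alternate the assignment and
    update steps until the relative SSE improvement drops below tol
    or max_iter is reached."""
    centers = np.asarray(centers, dtype=float).copy()
    prev_sse = np.inf
    for _ in range(max_iter):
        # assignment step: each point goes to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        cur_sse = d2[np.arange(len(X)), labels].sum()
        # update step: each center moves to its cluster's centroid
        for i in range(len(centers)):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
        if prev_sse - cur_sse <= tol * cur_sse:
            break
        prev_sse = cur_sse
    return centers, labels
```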
In spite of its popularity, the k-means algorithm has some drawbacks [10]: (1) it needs the user to specify the number of clusters K in advance, or to run independently for different values of K and then select the partition that appears most meaningful to domain experts; (2) it can only detect compact, hyperspherical clusters that are well separated, and it is not suitable for high-dimensional tasks because it uses the Euclidean distance as its default similarity metric; (3) it is sensitive to noise and outliers in datasets, for even a few such data points can significantly influence the center (mean) of their respective cluster; (4) it often converges to a local minimum of the criterion function due to its gradient-descent nature and the nonconvexity of the criterion function; (5) it is highly sensitive to the initial guess of the centers. Adverse effects of improper initialization include empty clusters, slower convergence, and a higher chance of getting stuck in bad local minima [5]. As discussed by Celebi et al. [10], all of these drawbacks except for choosing the number of clusters can be remedied by using a suitable adaptive initialization method.
As mentioned by Celebi and Kingravi in [11], initial centers should not be outliers and should not be too close to each other. There are many studies focusing on the initialization of k-means, such as random selection, probability-based initialization methods, and factor analysis. A more intuitive idea is to determine the initial centers according to the spatial distribution of the data points. That is, if we choose initial centers from regions with high local density of the data distribution, which is a kind of spatial local information of the data points, outliers will be prevented from being chosen. And, by keeping the centers a certain distance apart from each other, we obtain suitable initial centers for the k-means algorithm (as shown in Figure 2).
Inspired by the discussion above, we propose an adaptive initialization framework based on spatial local information (AIFSLI), which takes advantage of the local density of the data distribution, and we develop two approximate estimations of the density, namely, density by K nearest neighborhoods (KNN) and density by ε-neighborhoods (ε-Ball), leading to two implementations of the proposed framework. Both implementations involve three steps. First, we find the high-density regions of the data based on an estimation of the local data density. Second, we mark the data points that belong to high-density regions as candidates for the initial centers. Third, we determine which candidates should be selected as initial centers, making sure that they are kept a certain distance apart.
The contributions of this paper are as follows: (1) we propose an adaptive framework for the initial guess of the cluster centers based on spatial local information; (2) we derive two implementations of the proposed framework; (3) we present a comparative empirical study of the proposed framework against state-of-the-art techniques and analyse the rationality of the proposed framework.
Our empirical study on more than 20 datasets shows promising performance of the proposed framework and demonstrates that it has several advantages: (1) it can find suitable candidates for the initial centers effectively; (2) it can reduce the number of iterations of k-means methods significantly; (3) it is robust to outliers; and (4) it is easy to implement.
The rest of this paper is organized as follows. Section 2 reviews the related work on initialization for the k-means algorithm. Section 3 presents the proposed adaptive initialization framework based on spatial local information. The two implementations of the proposed framework, by K nearest neighborhoods (KNN) and by ε-neighborhoods (ε-Ball), are derived in Section 4. Section 5 describes the datasets, feature normalization, the performance criteria, and the parameter settings in our experiments. Section 6 presents our extensive empirical study evaluating the performance of the proposed method. Section 7 analyses the complexity of our method and the suitability of initializing the k-means algorithm based on spatial local information. After that, we conclude this work.
2. Related Work
There is a considerable amount of literature on initialization methods for the k-means algorithm; we briefly review some of the commonly used ones. A comprehensive review can be found in the recent work by Celebi et al. [10].
A number of initialization methods select the initial centers in a probabilistic manner, implicitly or explicitly. The most widely used initialization method for the k-means algorithm is MacQueen's second method, that is, selecting the initial centers randomly [12]. Due to its random nature, it may select outliers as initial centers, leading to more iterations or bad local minima. k-means based on this initialization usually needs to run several times with multiple different initial partitions, choosing the partition with the smallest squared error. Forgy's method [13] is a random assignment method, which first assigns each point to one of the K clusters uniformly at random. The initial centers are then obtained as the centroids of the data points according to their assignments. This method is often confused with MacQueen's second method discussed above, whose first step is selecting initial centers rather than making assignments. Both methods can be viewed as selecting (assigning) under a uniform distribution over the data points. The k-means++ method [14] chooses the first center arbitrarily and the i-th center (i ≥ 2) with a probability of D(x)² / Σ_{x′∈X} D(x′)², where D(x) denotes the minimum distance from a point x to the previously selected centers. This implies that data points with a larger distance to the selected centers have a higher probability of being chosen as new initial centers. A parallel version of the k-means++ algorithm was developed in [15].
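A minimal sketch of the k-means++ seeding rule described above (the function name and `seed` parameter are illustrative assumptions):

```python
import numpy as np

def kmeanspp_seeds(X, k, seed=0):
    """k-means++ seeding: the first center is uniform at random; each
    further center is drawn with probability D(x)^2 / sum(D(x')^2),
    where D(x) is the distance to the nearest chosen center."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(centers)
```

Points coinciding with a chosen center have D(x)² = 0 and therefore zero probability, so (for duplicate-free data) the k seeds are always distinct.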
Several works consider choosing initial centers a desired distance apart from each other. Ball and Hall's method [16] takes the centroid of the dataset X as the first initial center, that is, c_1 = (1/N) Σ_{x∈X} x. It then takes as the next initial center any data point that is at least T units apart from the previously selected centers, until K centers are obtained. This method chooses data points a desired distance apart from each other and at the same time prevents the initial centers from being too close to each other. However, it is difficult to determine a reasonable threshold T. The Maximin method [17, 18] takes the first center arbitrarily from the dataset X. The second initial center is chosen as the point with the largest distance from the first center. Each further initial center is chosen as the point with the greatest minimum distance to the previously selected centers. It should be noted that, in the work of [18], the data point with the maximum norm in the dataset is chosen as the first center. Erisoglu et al. [19] choose the two of the d features that best describe the changes in the dataset as the main axes, where d is the dimension of the space in which the dataset lies. All initial centers are chosen based on the main axes by projecting the dataset onto them. The data point with the maximum distance from the mean of the dataset is chosen as the first initial center, and each subsequent center is chosen as the point with the largest sum of distances from the previously selected centers, until K centers are obtained. These methods cannot guarantee that outliers are not selected as initial centers.
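Ball and Hall's rule can be sketched as follows (a hedged illustration; the threshold is written here as `T`, and the scan order is simply the dataset order):

```python
import numpy as np

def ball_hall_seeds(X, k, T):
    """Ball and Hall seeding: the dataset centroid is the first center;
    then any scanned point at least T units from every chosen center
    becomes the next center, until k centers are found."""
    centers = [X.mean(axis=0)]
    for x in X:
        if len(centers) == k:
            break
        if min(np.linalg.norm(x - c) for c in centers) >= T:
            centers.append(x)
    return np.asarray(centers)
```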
Recently, researchers have utilized factor analysis methods to initialize the k-means algorithm. The PCA-Part method [20] uses a divisive hierarchical approach based on Principal Component Analysis (PCA). This method iteratively selects the cluster with the greatest SSE and divides it into two subclusters by a hyperplane that passes through the cluster center orthogonal to the direction of the principal eigenvector of the corresponding covariance matrix. In the first step, it takes the entire dataset as the cluster with the greatest SSE. The Var-Part method [20] is an approximation to PCA-Part which assumes that the covariance matrix is diagonal; in this case, the dividing hyperplane is orthogonal to the coordinate axis with the greatest variance. Celebi and Kingravi [11] use Otsu's algorithm to obtain an adaptive threshold for the PCA-Part and Var-Part algorithms [20], which originally take the cluster center as the threshold for dividing a cluster into two subclusters. This yields a deterministic initialization method for k-means, and experiments show that it achieves promising performance compared to the original algorithms. Onoda et al. [21] acquire initial centers by independent component analysis (ICA). They first calculate the K independent components of the dataset and then choose the point with the least cosine distance from the i-th independent component as the i-th (1 ≤ i ≤ K) center.
Al-Daoud's method [22] first uniformly partitions the data space into M grid cells. It takes K_m points as initial centers from grid cell m, m = 1, …, M, where K_m is proportional to N_m/N, N_m is the number of data points in grid cell m, and N is the total number of data points in the dataset. This method can be viewed as a density-based initialization method. However, it tends to ignore clusters with fewer data points, and it is not easy to decide how many grid cells are suitable.
The proposed initialization method bears some similarity to that of Al-Daoud and Roberts [22], in that the local density of the data points is taken into consideration when determining the initial centers. However, the proposed method differs markedly from Al-Daoud's method. In their work, the local density is estimated based on an artificial, hard grid segmentation; different grid segmentations lead to different local density estimates. Our method estimates the local density of the data points according to their local spatial distribution, which can be regarded as a natural, soft group segmentation, and the initial centers are determined by the distribution of the data points themselves.
3. Proposed Framework of AIFSLI
Since reasonable initial centers should avoid outliers and should not be too close to each other, we propose an adaptive initialization framework based on spatial local information (AIFSLI) for the k-means algorithm.
AIFSLI takes advantage of the local density of the data points; that is, for a given dataset X in R^d, we first estimate its local density by defining a function ρ(x_i) that describes the local density at each data point x_i. For a point x_i, ρ(x_i) describes the compactness of the data points within a small region containing x_i as an inner point. We then find the regions with density higher than a threshold t. The initial centers of the k-means algorithm are chosen from these regions of higher local density.
Generally, the proposed initialization framework involves three steps: (1) estimating the local density ρ(x_i) for each data point x_i; (2) finding the regions with density higher than a threshold t and marking the data points in these regions as candidates for the initial centers; (3) determining the initial centers from the candidates, making sure that they are separated by a certain distance. The workflow of the proposed framework is demonstrated in Figure 1.
(a) Candidates obtained by AIFSLI-KNN. (b) Candidates obtained by AIFSLI-Ball.
It should be noted that, although it is used here to initialize the k-means algorithm, the proposed framework can easily be extended to other clustering algorithms, such as the Gaussian Mixture Model.
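The three-step framework can be sketched as follows (an illustrative skeleton, assuming a pluggable density estimator `density_fn` and a minimum-separation parameter `min_sep`; neither name appears in the original):

```python
import numpy as np

def aifsli_seeds(X, k, density_fn, min_sep):
    """AIFSLI skeleton following the three steps in the text:
    (1) estimate a local density rho for every point, (2) keep points
    at or above the median density as candidates, (3) greedily pick
    candidates that stay at least min_sep apart."""
    rho = density_fn(X)                          # step 1
    candidates = X[rho >= np.median(rho)]        # step 2 (median threshold)
    centers = [candidates[0]]                    # step 3
    for x in candidates[1:]:
        if len(centers) == k:
            break
        if min(np.linalg.norm(x - c) for c in centers) >= min_sep:
            centers.append(x)
    return np.asarray(centers)
```

With a density estimator that counts nearby points, outliers fall below the median threshold and can never be chosen as seeds.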
4. Two Implementations of AIFSLI
In this section, we present the two implementations of the proposed framework, one based on K nearest neighborhoods (KNN) and one based on ε-neighborhoods (ε-Ball), leading to two adaptive initialization methods. We also give an algorithm to determine the initial centers from the candidates.
4.1. AIFSLI Based on K Nearest Neighborhoods (KNN)
In this subsection, we present an approximate estimation of the local density of the dataset by K nearest neighborhoods (KNN). It is well known that KNN is a natural choice for the approximate estimation of local density. Inspired by the superiority of Laplacian eigenmaps [23], we first construct an adjacency graph by KNN and obtain a Gram matrix W from the constructed graph. The Gram matrix can be precomputed and loaded into memory before clustering. The entries of the Gram matrix are computed by

W_ij = 1, if x_i ∈ N_K(x_j) or x_j ∈ N_K(x_i); W_ij = 0, otherwise, (2)

where N_K(x) denotes the set of K nearest neighbors of x.
Then, we introduce a vector ρ with N components whose entries are given by ρ_i = Σ_{j=1}^{N} W_ij. The vector ρ provides a natural measure of the local density of the data points: for a data point x_i, the larger the value of ρ_i, the higher the local density.
Algorithm 1 shows the detailed steps of the approximate version of the AIFSLI method based on K nearest neighborhoods (KNN). In step (4), the threshold t is the median value of the vector ρ (in our experiments, we use the median function in MATLAB); other criteria can also be used to determine the threshold t, such as the mean or a certain quantile-based confidence interval.
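A sketch of the KNN density estimate underlying Algorithm 1, assuming the symmetric adjacency rule of (2) (the function name and the brute-force neighbor search are illustrative choices):

```python
import numpy as np

def knn_density(X, K=5):
    """Approximate local density as the degree in a symmetric KNN graph:
    W[i, j] = 1 if x_i is among the K nearest neighbors of x_j or vice
    versa (eq. (2)), and rho_i = sum_j W[i, j]."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d2, axis=1)[:, :K]    # indices of the K nearest neighbors
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), K)
    W[rows, nn.ravel()] = 1.0
    W = np.maximum(W, W.T)                # symmetrize: "or" in eq. (2)
    return W.sum(axis=1)
```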

4.2. AIFSLI Based on ε-Neighborhoods (ε-Ball)
In this subsection, we present an approximate estimation of the local density of the dataset by ε-neighborhoods (ε-Ball). We again construct an adjacency graph and a Gram matrix. The entries of the Gram matrix are computed by

W_ij = 1, if ‖x_i − x_j‖ ≤ ε; W_ij = 0, otherwise. (3)
The definition in (3) seems more intuitive than the definition in (2). Equation (3) counts the number of data points in a small region containing x_i as its center: the more points in a region, the more compact the data points in that region. Here, the value of ε is also an issue that needs to be considered, and we set it to a weighted half of the average distance of the points in X; that is, ε = w · d̄ / 2, where w is the weight. The d̄ denotes the average distance of the points in X, which is computed as follows:

d̄ = (2/(N(N − 1))) Σ_{i<j} d(x_i, x_j), (4)

where d(x_i, x_j) is the distance between data points x_i and x_j.
The algorithm for density by ε-neighborhoods (ε-Ball), which we name AIFSLI-Ball, follows the same steps as Algorithm 1, and its details are shown in Algorithm 2.
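A sketch of the ε-Ball density estimate of (3) and (4), with the weight written as `w` (an assumed free parameter):

```python
import numpy as np

def ball_density(X, w=1.0):
    """Approximate local density as the number of points inside an
    epsilon-ball around each point; epsilon is a weighted half of the
    average pairwise distance, as in eqs. (3)-(4)."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    avg = d.sum() / (n * (n - 1))   # mean distance over distinct pairs
    eps = w * avg / 2.0
    W = (d <= eps).astype(float)
    np.fill_diagonal(W, 0.0)        # do not count the point itself
    return W.sum(axis=1)
```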

4.3. Determining Initial Centers from Candidates
In this subsection, we describe the details of how to determine the initial centers from the candidates. We propose an algorithm very similar to the Maximin method [17], applied to the candidate set obtained by Algorithm 1 or Algorithm 2. Algorithm 3 shows the detailed steps of the proposed algorithm. Without loss of generality, we choose the first data point in the candidate set as the first initial center in our experiments, which makes our algorithm deterministic. It should be noted that any other method, such as k-means++, can also be used to determine the initial centers from the candidates and would lead to promising performance.
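A sketch of the deterministic Maximin-style selection over the candidate set (the function name is ours; `C` stands for the candidate array produced by Algorithm 1 or 2):

```python
import numpy as np

def centers_from_candidates(C, k):
    """Maximin-style selection over candidates: the first candidate is
    the first center (deterministic), then each next center is the
    candidate whose minimum distance to the chosen centers is largest."""
    idx = [0]
    d = np.linalg.norm(C - C[0], axis=1)
    while len(idx) < k:
        nxt = int(np.argmax(d))
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(C - C[nxt], axis=1))
    return C[idx]
```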
5. Experiment Setup
5.1. Dataset Description
We use 25 common datasets from the UCI machine learning repository [24] in our experiments. Table 1 gives descriptions of these UCI datasets. For each dataset, the number of clusters K is set equal to the true number of classes, as is common in the related literature [9–11, 20].

5.2. Performance Criteria
As in [10, 11], the performance of the initialization methods is quantified using two effectiveness (quality) criteria and one efficiency (speed) criterion.
(i) Initial SSE. This is the SSE value calculated after the initialization phase, before the clustering phase. It gives a measure of the effectiveness of an initialization method by itself.
(ii) Final SSE. This is the SSE value calculated after the clustering phase. It gives a measure of the effectiveness of an initialization method when its output is refined by k-means; a lower value implies more compact clusters. Note that the SSE value is the objective function of the k-means algorithm, that is, (1).
(iii) Number of Iterations. This is the number of iterations that k-means requires to reach convergence when initialized by a particular method. It is an efficiency measure independent of programming language, implementation style, compiler, and CPU architecture.
All methods compared in this paper are implemented using MATLAB and executed on a PC with 2.5 GHz CPU and 4 GB RAM.
5.3. Attributes Normalization
Feature normalization is a common preprocessing step in the computer vision and machine learning communities. Normalization is necessary to prevent features with large ranges from dominating the distance calculations and also to avoid numerical instabilities in the computations. Two commonly used normalization schemes are linear scaling to unit range (min-max normalization) and linear scaling to unit variance (z-score normalization). Min-max normalization is given by

x′ = (x − x_min) / (x_max − x_min),

where x_min and x_max denote the minimum and maximum values of the feature, respectively. Min-max normalization scales the attribute to the [0, 1] interval.
z-score normalization approximately maps an attribute to the [−3, 3] interval using

x′ = (x − μ) / σ,

where μ and σ denote the mean and standard deviation of the feature, respectively.
Several studies reveal that the former is preferable to the latter in clustering research, since the latter is likely to eliminate valuable between-cluster variation [20, 25]. In this paper, we apply min-max normalization to map the attributes of each dataset to the [0, 1] interval.
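Both normalization schemes can be written compactly (a NumPy sketch; features with zero range or zero variance would need guarding in practice):

```python
import numpy as np

def min_max(X):
    """Scale each feature linearly to the [0, 1] interval."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def z_score(X):
    """Scale each feature to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```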
5.4. Parameter Settings
In our AIFSLI-KNN algorithm, the choice of the number of nearest neighbors, K, is an important issue. We analyse the final SSE for different choices of K on some of the datasets described in Table 1, and the results are shown in Table 2.

Based on this table, we fix K in our experiments at a value that is neither less discriminatory nor sensitive to outliers. The convergence of k-means was controlled by two criteria: the number of iterations reaching a maximum allowable value (set to 100 in our experiments), or the relative improvement in SSE between two consecutive iterations dropping below a small threshold ε; that is, (SSE_{i−1} − SSE_i)/SSE_i ≤ ε.
6. Experimental Results
In this section, we compare the two implementations of our AIFSLI method, AIFSLI-KNN and AIFSLI-Ball, with five state-of-the-art methods: MacQueen's second method (random) [12], the Maximin method (Maximin) [18], k-means++ (Kpp) [14], Var-Part [20], and PCA-Part [20]. All seven methods are executed 100 times, and the mean is collected for each performance criterion. Tables 3, 4, 5, and 6 report the performance measurements for each method with respect to initial SSE, final SSE, percentage comparison of final SSE, and number of iterations, respectively. Note that the Maximin method [18], Var-Part [20], PCA-Part [20], and our two algorithms AIFSLI-KNN and AIFSLI-Ball are deterministic.




Table 3 shows that PCA-Part and Var-Part obtain the lowest initial SSE on most datasets, with our two algorithms achieving the second-best values. This may be because PCA-Part and Var-Part take the mean of each feature as the threshold. Table 4 shows that PCA-Part and Var-Part obtain the largest final SSE, which implies that they do not produce compact clusters after clustering. Table 4 also shows that AIFSLI-KNN and AIFSLI-Ball achieve the lowest final SSE on most datasets, which implies that they obtain more compact clusters after clustering than the other methods. Table 5 gives the percentage comparison of final SSE. We take Kpp as the baseline algorithm, denoted by the number "1"; the values in this table are the percentages of the other algorithms relative to Kpp, where the symbol "+" ("−") denotes a value larger (smaller) than that of Kpp. Taking dataset 4 as an example, the value −0.8% under the random algorithm means that the final SSE of random is smaller than that of Kpp by 0.8%, and +9.0% under Maximin means that the final SSE of Maximin is larger than that of Kpp by 9.0%. The last line of the table gives the average percentage of each algorithm relative to Kpp and shows that both AIFSLI-KNN and AIFSLI-Ball achieve better final SSE than Kpp. Table 6 reports the number of iterations from initialization until the k-means algorithm terminates; it implies that AIFSLI-KNN and AIFSLI-Ball converge quickly on most datasets.
From the experimental results, the two proposed algorithms reduce the number of iterations of k-means methods significantly, and the final SSE results in Tables 4 and 5 show that both proposed algorithms produce compact clusters. This implies that our initial centers are suitable for the k-means algorithm, and this suitability comes from the advantage of spatial local information. The two proposed algorithms are approximate estimations of spatial local information; if more elegant approaches were used to estimate the spatial local information, better results might be obtained.
7. Discussion
7.1. Complexity Analysis
Table 7 gives the average CPU time over 100 executions of the seven algorithms mentioned above on a subset of the datasets in Table 1 (as CPU time depends on implementation style, compiler, and CPU architecture, we do not include it as a performance criterion in our experiments). The CPU time covers preprocessing (including the calculation of the Gram matrix), initialization, and the k-means running time.

In Table 7, for small-scale, low-dimensional datasets, such as datasets 4 and 5, all algorithms run quickly. However, when the number of data points is large but the dimension is low, as for datasets 1, 13, and 25, both AIFSLI-KNN and AIFSLI-Ball need more time than the others. When the dimension is larger, taking datasets 8, 18, and 22 as examples, the PCA-Part algorithm takes the longest running time, followed by AIFSLI-KNN, Var-Part, and AIFSLI-Ball. The AIFSLI-Ball algorithm always runs faster than AIFSLI-KNN, because finding the ε-Balls has lower computational complexity than finding the K nearest neighbors.
Table 8 shows the CPU time of a single run of each component of the AIFSLI-KNN algorithm; these components include calculating the vector ρ (including constructing the Gram matrix), initialization, and running k-means. Calculating the vector ρ takes more than 90% of the running time on large datasets, because finding the K nearest neighbors of each data point, whose computational complexity is quadratic in the number of points, is time consuming.

It should be noted that the Gram matrix has dimension N × N, which can be very large for many applications when N is of the order of a million, and infeasible when N is of the order of a billion. In our experiments, we use the single command in MATLAB to convert the data points from 8-byte doubles to 4-byte singles to save memory, especially for dataset 13. Moreover, if the parameter K in the nearest neighborhood algorithm is small enough, the Gram matrix is very sparse, and a sparse matrix can be constructed to replace the full Gram matrix. The computational complexity and the memory optimization of the proposed method when the Gram matrix is sparse are worthy of further study.
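A sparse variant of the KNN Gram matrix can be sketched with SciPy (illustrative only: the pairwise-distance step below is still dense and quadratic, and a KD-tree or approximate nearest-neighbor search would be needed at the scales discussed):

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparse_knn_graph(X, K=5):
    """Store the KNN adjacency as a CSR sparse matrix instead of a
    dense N x N Gram matrix; for small K the graph holds only
    O(N*K) nonzeros after symmetrization."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :K]
    rows = np.repeat(np.arange(len(X)), K)
    W = csr_matrix((np.ones(rows.size, dtype=np.float32),
                    (rows, nn.ravel())), shape=(len(X), len(X)))
    return W.maximum(W.T)   # symmetrize, as in eq. (2)
```

The density vector ρ is then `W.sum(axis=1)`, computed without ever materializing the dense matrix.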
The AIFSLI-Ball algorithm has similar characteristics to AIFSLI-KNN, and we omit its analysis due to limited space.
7.2. Rationality Analysis
In this section, we present a rationality analysis of the proposed framework. Figure 2 shows a dataset generated by three Gaussian functions with different means and variances. Figure 2(a) shows the candidates obtained by AIFSLI-KNN (Algorithm 1), and Figure 2(b) shows the candidates obtained by AIFSLI-Ball (Algorithm 2).
From Figure 2, we confirm that, by utilizing spatial local information, which takes advantage of the local density of the dataset, the initial centers can be chosen from regions with high local density while avoiding outliers, which is reasonable as discussed above. At the same time, the initial centers are very close to the final centers, which explains why our method can reduce the number of iterations significantly.
8. Conclusion
In this paper, we propose an adaptive initialization framework based on spatial local information (AIFSLI) for the k-means algorithm, designed to take advantage of the local density of the data points. We first describe the AIFSLI framework, based on defining a function that describes the local density at each data point. Since it is difficult to estimate the local density exactly, we derive two approximate estimations: density by K nearest neighborhoods (KNN) and density by ε-neighborhoods (ε-Ball), leading to two implementations of the proposed framework. Experiments on more than 20 datasets show promising performance of the proposed methods and demonstrate that they have several advantages: (1) they can find reasonable candidates for the initial centers effectively; (2) they are robust to outliers; (3) they can reduce the number of iterations of k-means methods significantly; and (4) they are easy to implement.
In the future, we plan to extend our framework to other clustering algorithms, taking the Gaussian Mixture Model (GMM) as an example. Given the rapid growth of Big Data applications, we also plan to parallelize our framework on GPU or MapReduce platforms to accelerate the k-means algorithm for large-scale applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank the referees for the very helpful remarks which helped improve the paper. This work is supported by the National Natural Science Foundation of China (no. 61300140).
References
 J. A. Hartigan, Cluster Analysis, John Wiley & Sons, 1975.
 A. K. Jain, “Data clustering: 50 years beyond K-means,” Pattern Recognition Letters, vol. 31, no. 8, pp. 651–666, 2010.
 A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: a review,” ACM Computing Surveys, vol. 31, no. 3, pp. 316–323, 1999.
 K. Grauman and T. Darrell, “The pyramid match kernel: discriminative classification with sets of image features,” in Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV '05), vol. 2, pp. 1458–1465, IEEE, October 2005.
 M. E. Celebi, “Improving the performance of k-means for color quantization,” Image and Vision Computing, vol. 29, no. 4, pp. 260–271, 2011.
 B. Liu, X. Li, W. S. Lee, and P. S. Yu, “Text classification by labeling words,” in Proceedings of the Innovative Applications of Artificial Intelligence Conference (IAAI '04), pp. 425–430.
 A. Coates and A. Y. Ng, “Learning feature representations with k-means,” in Neural Networks: Tricks of the Trade, pp. 561–580, Springer, 2012.
 X. Wu, V. Kumar, J. R. Quinlan et al., “Top 10 algorithms in data mining,” Knowledge and Information Systems, vol. 14, no. 1, pp. 1–37, 2008.
 C.-Y. Tsai and C.-C. Chiu, “Developing a feature weight self-adjustment mechanism for a K-means clustering algorithm,” Computational Statistics and Data Analysis, vol. 52, no. 10, pp. 4658–4672, 2008.
 M. E. Celebi, H. A. Kingravi, and P. A. Vela, “A comparative study of efficient initialization methods for the k-means clustering algorithm,” Expert Systems with Applications, vol. 40, no. 1, pp. 200–210, 2013.
 M. E. Celebi and H. A. Kingravi, “Deterministic initialization of the k-means algorithm using hierarchical clustering,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 26, Article ID 1250018, 25 pages, 2012.
 J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297, Berkeley, Calif, USA, 1967.
 E. W. Forgy, “Cluster analysis of multivariate data: efficiency vs interpretability of classifications,” Biometrics, vol. 21, pp. 768–769, 1965.
 D. Arthur and S. Vassilvitskii, “k-means++: the advantages of careful seeding,” in Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035, Society for Industrial and Applied Mathematics, 2007.
 B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii, “Scalable k-means++,” Proceedings of the VLDB Endowment, vol. 5, no. 7, pp. 622–633, 2012.
 G. H. Ball and D. J. Hall, “A clustering technique for summarizing multivariate data,” Behavioral Science, vol. 12, no. 2, pp. 153–155, 1967.
 T. F. Gonzalez, “Clustering to minimize the maximum intercluster distance,” Theoretical Computer Science, vol. 38, pp. 293–306, 1985.
 I. Katsavounidis, C.-C. J. Kuo, and Z. Zhang, “New initialization technique for generalized Lloyd iteration,” IEEE Signal Processing Letters, vol. 1, no. 10, pp. 144–146, 1994.
 M. Erisoglu, N. Calis, and S. Sakallioglu, “A new algorithm for initial cluster centers in k-means algorithm,” Pattern Recognition Letters, vol. 32, no. 14, pp. 1701–1705, 2011.
 T. Su and J. G. Dy, “In search of deterministic methods for initializing K-means and Gaussian mixture clustering,” Intelligent Data Analysis, vol. 11, no. 4, pp. 319–338, 2007.
 T. Onoda, M. Sakai, and S. Yamada, “Careful seeding method based on independent components analysis for k-means clustering,” Journal of Emerging Technologies in Web Intelligence, vol. 4, no. 1, pp. 51–59, 2012.
 M. B. Al-Daoud and S. A. Roberts, “New methods for the initialisation of clusters,” Pattern Recognition Letters, vol. 17, no. 5, pp. 451–455, 1996.
 M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
 K. Bache and M. Lichman, UCI machine learning repository, 2013, http://archive.ics.uci.edu/ml.
 G. W. Milligan and M. C. Cooper, “A study of standardization of variables in cluster analysis,” Journal of Classification, vol. 5, no. 2, pp. 181–204, 1988.
Copyright
Copyright © 2014 Honghong Liao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.