Abstract

A new approach to fuzzy clustering is proposed in this paper. It aims to relax some constraints imposed by known algorithms by using a generalized geometrical model for clusters based on the convex hull computation. A method is also proposed to determine suitable membership functions and hence to represent fuzzy clusters based on the adopted geometrical model. The convex hull is used not only at the end of the clustering analysis for the geometric interpretation of the data but also during the fuzzy data partitioning, within an online sequential procedure, in order to calculate the membership functions. Consequently, a pure fuzzy clustering algorithm is obtained, where clusters are fitted to the data distribution by means of the fuzzy membership of patterns to each cluster. The numerical results reported in the paper show the validity and the efficacy of the proposed approach with respect to other well-known clustering algorithms.

1. Introduction

Clustering algorithms have always represented an efficient and important method for analysing both small and large amounts of data, namely, for dividing groups of objects into clusters by using some measure of similarity or dissimilarity on the basis of a suited number of features representing the data [1–3]. The applications of clustering span every field of science and technology, especially machine learning, computer science, statistics, engineering, physics, mathematics, and medicine. Since the early twentieth century, a huge number of algorithms and related variants have been proposed in the literature, each adapted to a specific field of application [4–16].

Clustering techniques deal with unsupervised learning as they are used when it is not possible to define data labels a priori. They utilize several metrics for the determination of similar objects (patterns) belonging to the same group (a cluster) that, in turn, are different from patterns of other clusters [17]. Clearly, the shape of a cluster is influenced by the chosen metric, such as Euclidean, Manhattan, Chebyshev, or Mahalanobis distance; in fact, two patterns can be “close” (or “similar”) using one metric and “far” (or “dissimilar”) by using another one.

Similar considerations are also valid when clusters are considered as fuzzy sets [18]. In this case, the patterns are assigned to several clusters in a nonexclusive way by determining the degree of fuzzy membership of every pattern to the existing clusters. However, the geometrical constraints imposed by the membership function (MF) may represent a remarkable obstacle for the clustering analysis. In this regard, most algorithms tend to create spherical, ellipsoidal, or polygonal fuzzy clusters having a simple geometry that is computationally affordable but possibly unfit to the actual distribution of data.

Different taxonomies hold for both fuzzy and crisp algorithms; the most considered aspects are as follows:

(i) $k$-clustering or free-clustering techniques, according to whether the number ($k$) of clusters is determined a priori;

(ii) partitional or hierarchical (agglomerative/divisive) procedures for cluster generation, where the dataset is partitioned directly into a set of disjoint clusters or else each solution depends on the previous or successive ones in a hierarchical sequence;

(iii) sequential (online) or batch (iterative) algorithms, through which clusters are either updated sequentially at each presentation of a new pattern or updated iteratively considering a given set of data. As discussed later, there may exist hybrid cases when a dataset is used to determine clusters sequentially but several times, as, for example, in a learning process by epochs or in parameter tuning procedures;

(iv) model-based, distribution-based, or density-based clusters, when clusters are associated with geometric models defined in the data space or with suitable statistical distributions or density functions;

(v) point-to-centroid or point-to-boundary metrics, where the distances of patterns from clusters are computed considering a single prototype (i.e., a point or centroid) representing each cluster or are scaled according to the actual extension of clusters in the data space, independently of the use of model-based, distribution-based, or density-based clusters.

Nowadays, there is no clustering algorithm whose performance is universally recognized to be satisfactory for all problems. A trade-off is often necessary among computational complexity, model fitting, and explanatory tools of the clusters' structure, depending on the nature of the data under analysis and the specific field of application. Iterative algorithms perform clustering until a stopping rule is verified; they tend to be more accurate than sequential algorithms that, in turn, are faster but depend on the pattern presentation order. In this regard, well-known online clustering methods recently proposed in the literature are the recursive fuzzy c-means [19], recursive Gustafson-Kessel clustering [20], the recursive subtractive clustering (eTS) method [21], the evolving clustering method (ECM) [22], the dynamic evolving neural-fuzzy inference system (DENFIS) method [23], and so on.

Furthermore, $k$-clustering techniques have a great limitation, since they are useful only for those problems where the number of clusters is known in advance [24–26]. Actually, there is a huge amount of literature that focuses on the problem of "cluster validity," that is, how to determine the optimal value of $k$ for a given dataset [27–29]. These methods are able to evaluate whether one clustering result is better than another by means of suited criteria, for instance, the compactness and separability of clusters. Therefore, they usually work by defining an index and then finding the minimum (or maximum) of the values associated with each clustering solution.

The underlying idea of this paper is to propose a new approach to fuzzy clustering, with the aim of relaxing some constraints imposed by known algorithms and using a new method for the computation of MFs. The starting point is Simpson's idea of the well-known "Fuzzy Min-Max" clustering algorithm [30]: we propose a free-clustering, partitional, online algorithm using model-based clusters whose shape is determined in a new way by the convex hull computation. Our contribution comes from the awareness that Simpson's method is very efficient but has an important constraint on the shape of clusters, given that it creates only hyperboxes parallel to the coordinate axes of the data reference frame. This constraint will be removed by using the convex hull computation of clusters and, necessarily, an original methodology to define a metric associated with the MFs.

The use of unconstrained clusters in the analysis of large datasets allows us to assort patterns in extremely compact clusters [31, 32]. Moreover, we will show that the use of fuzzy logic combined with a more flexible cluster geometry yields results that are robust with respect to the uncertainty of data, by means of computationally efficient procedures [33–35]. In any case, the approach proposed herein, essentially applied to the class of online algorithms and to model-based clusters fitted by convex geometrical polytopes, can also be generalized to a larger choice of algorithms, including hierarchical procedures, iterative algorithms, and nonconvex cluster models.

The paper is organized as follows. In Section 2, we introduce and discuss well-known techniques for convex hull computation with regard to their application in the field of pattern recognition, in particular for data clustering; an overview of the most relevant works presented in the literature is reported in Section 3. The new fuzzy clustering algorithm proposed in this paper is illustrated in detail in Section 4, where the use of the convex hull is demonstrated by means of simple toy tests. Subsequently, the way in which MFs are determined in order to represent fuzzy clusters based on the adopted geometrical models is explained in Section 5, while the performance of the proposed algorithm and its comparison with other popular clustering algorithms, considering different datasets, are reported in Section 6. Finally, our conclusions are drawn in Section 7.

2. Convex Hull Computation and Data Clustering

In this paper, we propose a novel and generalized fuzzy clustering algorithm, which is useful for analysing data in online and real-time applications [36]. The shape of clusters is generalized by using less regular structures that are seemingly more complex but still computationally affordable; such structures fit the local distribution of data better, yield less sparse geometrical models, and allow more flexible and dynamic clustering rules. In this regard, we propose the use of the convex hull for the determination of irregular convex polytopes.

The convex hull of a set of points is the smallest convex set that contains these points, as illustrated in Figure 1 for 2D and 3D datasets. We can represent an $n$-dimensional convex hull by a set of points in $\mathbb{R}^n$ called "vertices" or, equivalently, by $(n-1)$-dimensional faces called "facets." Each facet is characterized by the following:

(i) a set of vertices;

(ii) a set of neighboring facets;

(iii) a hyperplane equation.

The $(n-2)$-dimensional faces are the "ridges" of the convex hull; each ridge is the intersection of the vertices of two neighboring facets. The relationship between the number of vertices and facets of convex polytopes for $n > 3$ is not trivial; for this reason, the convex hull determination is also referred to as the "vertex enumeration" or "facet enumeration" problem.

There are many methods for the convex hull evaluation [37–42]. In this paper, we adopt the "Quickhull" algorithm [43], which is able to compute the convex hull in 2D, 3D, and higher dimensions. Quickhull realizes an efficient implementation of the convex hull computation by combining a 2D procedure with the $n$-D Beneath-Beyond algorithm [44]. Precisely, the Quickhull algorithm uses a simplification of the Beneath-Beyond theorem to efficiently determine the visible facets for a given set of points.
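As a minimal illustration of the facet representation described above, the following sketch inspects the hull of a random 2D point set using scipy.spatial.ConvexHull, a Python wrapper around the Qhull implementation of Quickhull; the data and the library choice are illustrative assumptions, not part of the original experiments.

```python
# Illustrative sketch (not from the paper): inspecting the convex hull
# representation computed by Qhull's Quickhull implementation.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((30, 2))   # 30 patterns with N = 2 features in [0, 1]

hull = ConvexHull(points)
print(hull.vertices)    # indices of the points that are hull vertices
print(hull.simplices)   # each facet as the indices of its vertices
print(hull.neighbors)   # indices of the neighboring facets of each facet
print(hull.equations)   # hyperplane coefficients [normal | offset] per facet
```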

3. Related Works

Several works in the literature focus on the use of the convex hull within clustering. For example, an interesting method is the one proposed in [45]: a two-level adaptive fuzzy clustering method that models clusters by means of "flexible" convex polytopes, which can be expanded or merged using the convex hull. Several experimental results are given to show the validity of this method, but there are several drawbacks, in particular the dependence of the MF on four parameters, the use of a training set to initialize the clustering algorithm, and the high computational cost of testing the inclusion of a pattern within a convex set, which is calculated by means of a purely geometrical method.

An intelligent fuzzy convex hull based clustering approach to the problems of stating the number of clusters and of parameter adjustment is proposed in [46], where the authors fused the concept of convex hull with a fuzziness parameter. The proposed algorithm tries to capture the basic idea of clustering and to provide an optimal set of clusters, such that overlapping cluster points can be easily identified by defining a boundary around the points. The numerical results seem interesting, but the authors considered just one dataset, so the proposed algorithm lacks comparisons on further data.

A three-phase k-means convex hull triangulation approach is shown in [47], which is able to detect clusters with complex and nonconvex shapes. The authors show optimal results only by comparing their numerical simulations with spectral clustering using the Nyström method. The convex hull is applied only at the end of the first clustering phase, in which a standard k-means algorithm determines the correct initialization.

A novel pattern recognition method called "NFPC" has been introduced for training a neurofuzzy classifier by the identification of convex subsets of patterns in the data space [48]. The performed tests confirmed the accuracy of the proposed method with respect to different classifiers. Although, strictly speaking, this is not a clustering algorithm, it uses a convex-set initialization and incorporates fuzziness into the decision surfaces to further improve the classification performance.

A fast and reliable distance measure between two convex clusters has also been proposed by using Support Vector Machines (SVM), together with a merging algorithm that groups the convex clusters obtained from clustering [49]. In addition, a new semisupervised clustering method based on the convex hull has been proposed [50]; in its learning stage, by using the patterns whose class is known (a class in this case also represents a cluster), the method builds the initial convex hull as the boundary for each class. Subsequently, in the classification stage, the class of any pattern is determined by considering the convex hull having the vertex at minimum distance from the pattern.

In [51], the authors propose a dynamic convex hull based clustering algorithm dealing with data appearing sequentially, where clusters are modified by using a combination of vertices of the convex hull containing the dataset. The developed algorithm is assessed first on some empirical data and then applied to the monitoring of a complex system, illustrating its efficiency in real-time applications.

With respect to the previous clustering approaches that make use of convex hull computations, we propose an algorithm where the convex hull is used not only at the end of clustering for the geometric interpretation of the data but also during the fuzzy data partitioning. Furthermore, we will show that this procedure involves determining a fuzzy set through the convex hull, and hence a suited method is proposed for associating an MF with each convex hull-shaped cluster. We propose a brand new approach that models clusters by kernel-based membership functions, where the convex hull is adopted for the point-to-boundary metric evaluation only. In other words, a pure fuzzy clustering algorithm is obtained, where clusters are fitted to the data distribution also by considering the fuzzy membership of patterns to each cluster.

4. The Proposed Algorithm for Fuzzy Clustering

We present the proposed method in the following by considering two fundamental aspects: the description of the new clustering algorithm and a discussion of some computational remarks.

Generally, all the algorithms manipulating heterogeneous data originating from different sources need a preprocessing step for data normalization. It is used in order to accommodate every feature of the data space in the range between 0 and 1, so that any metric defined in the data space can be managed with absolute reference values.

Let $P$ be the number of patterns of the dataset and let $N$ be the number of data features; that is, each pattern $\mathbf{x}_j$ of the dataset is represented by an $N$-tuple of real numbers:

$$\mathbf{x}_j = \begin{bmatrix} x_{j1} & x_{j2} & \cdots & x_{jN} \end{bmatrix}, \quad j = 1 \ldots P. \tag{1}$$

When the data features have a different nature, either physical or semantic, patterns can be normalized column by column:

$$\bar{x}_{jn} = \frac{x_{jn} - m_n}{M_n - m_n}, \tag{2}$$

where $m_n = \min_{j} \{x_{jn}\}$ and $M_n = \max_{j} \{x_{jn}\}$ for $n = 1 \ldots N$. Alternatively, when some data homogeneity exists, an affine normalization is usually preferred:

$$\bar{x}_{jn} = \frac{x_{jn} - m}{M - m}, \tag{3}$$

where $m = \min_{j,n} \{x_{jn}\}$ and $M = \max_{j,n} \{x_{jn}\}$ for $j = 1 \ldots P$ and $n = 1 \ldots N$. In the present case, we adopted the column-by-column normalization since the used datasets present patterns with heterogeneous features.
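A minimal sketch of the two normalizations, assuming the dataset is stored as a NumPy array with one pattern per row; the guard against constant features is our own addition and is not discussed in the paper.

```python
import numpy as np

def normalize_columns(X):
    """Column-by-column normalization as in (2): each feature is mapped
    to [0, 1] using its own minimum and maximum over the P patterns."""
    m, M = X.min(axis=0), X.max(axis=0)
    span = np.where(M > m, M - m, 1.0)   # avoid division by zero (our guard)
    return (X - m) / span

def normalize_affine(X):
    """Affine normalization as in (3): a single global minimum and maximum
    are used for all features, preserving their relative scales."""
    m, M = X.min(), X.max()
    return (X - m) / (M - m)
```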

The proposed method will be denoted in the following as convex hull (CH) clustering algorithm. Its basic operations are summarized by the flowchart shown in Figure 2. As previously stated, it is a free-clustering sequential algorithm and so the number of clusters is not fixed in advance and it may change during the pattern presentation process.

Let $M$ be the number of clusters currently identified during the operation of the algorithm; the following steps summarize the detailed operations of the CH algorithm.

(i) The algorithm is initialized considering the first pattern $\mathbf{x}_1$, which is identified as the first cluster. In other words, the first cluster coincides with the first pattern of the dataset and $M$ is set to 1.

(ii) Successively, the algorithm iterates for each pattern $\mathbf{x}_j$, $j = 2 \ldots P$, of the dataset. Let $\mathbf{u}_j$ be the array of MF values of the $j$th pattern with respect to the $M$ clusters currently determined; that is,

$$\mathbf{u}_j = \begin{bmatrix} \mu_1(\mathbf{x}_j) & \mu_2(\mathbf{x}_j) & \cdots & \mu_M(\mathbf{x}_j) \end{bmatrix}, \tag{4}$$

where each MF is obtained using the procedures illustrated in Section 5 and considering the convex hull representing each cluster. Let $\mu_q$ be the maximum value in $\mathbf{u}_j$, scored in correspondence with the $q$th cluster:

$$\mu_q = \max_{i = 1 \ldots M} \mu_i(\mathbf{x}_j). \tag{5}$$

Let $\lambda_1$ and $\lambda_2$, with $\lambda_1 < \lambda_2$, be two parameters that will be discussed later; three different conditions are then evaluated in a conditional branch as follows.

(1) $\mu_q < \lambda_1$: the algorithm recognizes that no clusters meet the membership criteria for that pattern; hence a new cluster is created that coincides with the current pattern and no convex hull computation is performed; thereafter, $M$ is incremented by 1.

(2) $\lambda_1 \le \mu_q \le \lambda_2$: the algorithm assigns $\mathbf{x}_j$ to the $q$th cluster scoring the maximum MF; based on the former condition, we are sure that some MFs in $\mathbf{u}_j$ are greater than $\lambda_1$. Therefore, the algorithm reestimates the convex hull relevant to the $q$th cluster and stores, in a suited array, the set of points constituting the new convex hull that represents that cluster. The value of $M$ does not change.

(3) $\mu_q > \lambda_2$: the algorithm assigns $\mathbf{x}_j$ to the $q$th cluster scoring the maximum MF. Unlike the previous case, the algorithm does not compute the convex hull, since $\mathbf{x}_j$ is supposed to lie within the boundaries of the $q$th cluster because of its high MF value. This choice implies a great saving in terms of computational cost and avoids the computation of the convex hull in situations where it might not be necessary. In addition, this choice aims at handling the case of $\mu_q$ very close to 1, in which the current pattern likely belongs to the $q$th cluster with a high degree of membership, whether within the cluster vertices or not. In this situation, the algorithm assumes that the pattern has a sufficiently high membership value, so it can be assigned to the cluster without requiring an update of the cluster boundary. For this reason, it also prevents clusters from becoming too large, which would cause inaccuracies or errors in the overall clustering results. Also in this case, the value of $M$ does not change.

We used a method to set the values of both $\lambda_1$ and $\lambda_2$ in advance on the basis of the dataset under analysis. Therefore, this procedure can be considered as being integrated within the proposed algorithm, with no further optimizations necessary in this regard (alternative methods can be investigated and adopted in future works). Let $\mathbf{s}$ be the vector of the standard deviations of the patterns of the dataset evaluated along each column (i.e., each feature):

$$\mathbf{s} = \begin{bmatrix} s_1 & s_2 & \cdots & s_N \end{bmatrix}, \quad s_n = \operatorname{std}_{j = 1 \ldots P} \{x_{jn}\}. \tag{6}$$

The values of $\lambda_1$ and $\lambda_2$ adopted in the following are set as suited functions of the components of $\mathbf{s}$.
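The sequential procedure and the threshold-based branch above can be summarized by the following sketch, under our own assumptions: mu(x, cluster) is one of the kernel-based MFs of Section 5, lam1 and lam2 play the roles of λ1 and λ2, and all cluster points are kept in memory for simplicity (the paper stores only the points constituting each convex hull).

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def ch_clustering(X, mu, lam1, lam2):
    """Sequential CH clustering sketch; X has one normalized pattern per row."""
    clusters = [{"points": [X[0]], "hull": None}]   # first pattern = first cluster
    for x in X[1:]:
        u = np.array([mu(x, c) for c in clusters])  # array u_j of MF values (4)
        q = int(np.argmax(u))                       # best cluster, mu_q = u[q] (5)
        if u[q] < lam1:
            # (1) no cluster meets the membership criteria: create a new one
            clusters.append({"points": [x], "hull": None})
        else:
            clusters[q]["points"].append(x)
            if u[q] <= lam2:
                # (2) re-estimate the convex hull of the q-th cluster
                P = np.asarray(clusters[q]["points"])
                if len(P) > X.shape[1]:             # Qhull needs at least N+1 points
                    try:
                        clusters[q]["hull"] = ConvexHull(P)
                    except QhullError:
                        pass                        # degenerate (flat) set: keep old hull
            # (3) u[q] > lam2: assign without recomputing the hull
    return clusters
```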

An example of the algorithm output is shown in Figure 3, considering a simple 2D dataset consisting of patterns that can be equally partitioned into clusters. The actual output of the algorithm is represented by the solid lines, which show the convex hulls used to build the MF of each fuzzy cluster, as explained later. The subdivision of patterns among the various clusters can instead be represented by the dashed lines, which show the plain convex hull of each cluster, though this is not really computed by the CH algorithm. In fact, we point out that not all the points of a cluster may be used to calculate the convex hull and the related MF of a cluster, thanks to the last condition explained previously, which allows assigning a pattern to a cluster without updating the convex hull.

5. MF Evaluation of Convex Hull-Shaped Clusters

When clusters in a dataset have particular geometries or, more generally, when one wants to disengage from specific geometrical structures (such as hypercubes, hyperspheres, and regular polytopes), it is useful and appropriate to rely on more flexible and, at the same time, computationally affordable MFs [52]. Since the actual structure of clusters is possibly irregular, the MF should be based on a point-to-boundary distance of patterns from clusters rather than on a point-to-centroid metric. The shape of the resulting MF will thus follow the particular form on which a cluster is structured.

In this paper, the convex hull is adopted as a geometrical model for fuzzy clusters. As previously illustrated, each convex hull is represented by a number of vertices corresponding to some patterns belonging to that cluster. Each convex hull will be associated with an MF whose value decreases with the distance of a pattern from its boundaries; looking at Figure 3 for the 2D case, the distances are considered from the polygon bounded by straight sides between the vertices of the convex hull. To achieve this goal for any dimension, that is, for any number of features of the dataset, the basic idea illustrated in the following is to use a mixture of kernel functions centered at each vertex of the convex hull and at the cluster's centroid as well.

Considering the computational cost, the convex hull evaluation, in particular via the Quickhull algorithm, has $O(n\,f_r/r)$ complexity, where $f_r$ is the maximum number of facets of a convex hull with $r$ vertices and $n$ is the number of processed points [53]. In the case of datasets with a high number of dimensions, the convex hull approach suffers from a relatively high computational cost; in such cases, this cost can be reduced by using parallel algorithms implemented on multicore processors [54] and GPUs [55], which also minimize the impact of irregular data. This choice can improve the performance in the case of simple 2D [56, 57] and 3D [58, 59] datasets, but also in the case of generic $n$-dimensional data [60, 61]. Finally, the use of the convex hull exhibits a good flexibility that combines computational performance with good spatial representation, since the convex hull is generally more compact, in terms of spatial occupation, than the volume of hyperboxes.

In the following, we illustrate two possible procedures for the MF computation of convex hull-shaped clusters, using either Gaussian or cone-shaped kernel functions.

5.1. Gaussian-Based Kernels

This method exploits the superposition of an appropriate number of univariate (isotropic) Gaussian kernels to associate the MF with the convex hull. Let $\mathbf{V}$ be an $N \times H$ matrix, where $N$ is the number of features of the data space and $H$ is the number of vertices of the convex hull representing the cluster:

$$\mathbf{V} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_H \end{bmatrix}. \tag{8}$$

Let $\mathbf{x}_j$ be the pattern whose MF with respect to the cluster must be computed. By using the Gaussian method, the MF takes the form

$$\mu(\mathbf{x}_j) = \max_{h = 0, \ldots, H} \exp\!\left(-\gamma\,\frac{d_h}{d_{\max}^2}\right), \tag{9}$$

where $\gamma > 0$ is a fuzziness parameter discussed below, $d_0 = \|\mathbf{x}_j - \mathbf{c}\|^2$ is the squared point-to-centroid Euclidean distance ($\mathbf{c}$ being the cluster centroid), $d_h = \|\mathbf{x}_j - \mathbf{v}_h\|^2$, $h = 1 \ldots H$, is the squared point-to-$h$th-vertex Euclidean distance, and $d_{\max}$ is the maximum distance that can occur between two patterns. This value is not an external parameter requiring an initial set-up; rather, since the data are normalized in $[0,1]^N$, it depends on the number of features according to the following expression:

$$d_{\max} = \sqrt{N}. \tag{10}$$

The variance of each Gaussian kernel is set to a fixed value, $\sigma^2 = d_{\max}^2/(2\gamma)$, consistent with (9). As shown in Figure 4, the value of $\gamma$ determines how quickly the function decreases and hence is related to the fuzziness of the resulting MF: the higher the value of $\gamma$, the faster the function goes to zero.
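A sketch of the Gaussian-based MF under the reconstructed form of (9) given above (the exact expression in the original paper may differ); vertices holds the H hull vertices row-wise and centroid is the cluster centroid.

```python
import numpy as np

def mf_gaussian(x, vertices, centroid, gamma, d_max):
    """Gaussian-based MF: max over isotropic kernels placed at the centroid
    and at the H hull vertices, using squared Euclidean distances."""
    centers = np.vstack([centroid, vertices])   # kernel centers: c, v_1..v_H
    d = np.sum((centers - x) ** 2, axis=1)      # squared distances d_0..d_H
    return float(np.max(np.exp(-gamma * d / d_max ** 2)))
```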

The proposed CH algorithm where the MFs in (4) are computed by using (9) will be denoted in the following as convex hull with Gaussian-based kernels (CH-GBK).

5.2. Cone-Based Kernels

This method uses cone-shaped kernel functions to build the MF of the convex hull according to the following expression:

$$\mu(\mathbf{x}_j) = \max_{h = 0, \ldots, H} \max\!\left\{0,\; 1 - \gamma\,\frac{e_h}{d_{\max}}\right\}, \tag{11}$$

where $e_0 = \|\mathbf{x}_j - \mathbf{c}\|$ is the point-to-centroid Euclidean distance and $e_h = \|\mathbf{x}_j - \mathbf{v}_h\|$, $h = 1 \ldots H$, is the point-to-$h$th-vertex Euclidean distance.
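An analogous sketch for the cone-based MF, again assuming the reconstructed form of (11); unlike the Gaussian kernels, these kernels reach zero exactly at distance d_max/γ from their center.

```python
import numpy as np

def mf_cone(x, vertices, centroid, gamma, d_max):
    """Cone-based MF: max over linear kernels placed at the centroid and at
    the H hull vertices, clipped at zero."""
    centers = np.vstack([centroid, vertices])   # kernel centers: c, v_1..v_H
    e = np.linalg.norm(centers - x, axis=1)     # distances e_0..e_H
    return float(np.max(np.maximum(0.0, 1.0 - gamma * e / d_max)))
```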

The proposed CH algorithm where the MFs in (4) are computed by using (11) will be denoted in the following as convex hull with cone-based kernels (CH-CBK).

In either Gaussian-based or cone-based MFs, the parameter $\gamma$ defines how quickly the function tends to zero: the higher the value of $\gamma$, the faster the function goes to zero, as shown, for example, in Figure 5 by the graphical representation of the cone-based MF using different values of $\gamma$. For this reason, the parameter $\gamma$ determines the fuzziness of the related MF.

Cone-shaped MFs have some similarities to and some differences from the Gaussian ones. Differently from Gaussian kernels, this method uses functions that actually reach the zero value of the MF. Like the Gaussian method, the cone-shaped method uses a superposition of functions: $H$ functions are placed at the vertices of the convex hull and one is placed at its centroid. The latter is useful in both cases to assign the right relevance to the cluster centroid and to fill the gap that may exist in the convex hull around the centroid, since the other functions are placed at the vertices of the convex hull.

We will adopt isotropic Gaussian kernels having the same variance for every vertex and cone-shaped kernels having a hyperspherical (isotropic) section with the same radius for every vertex. Regarding this aspect, some preliminary tests were carried out for both Gaussian-based and cone-based kernels to ascertain whether computing more parameters for each MF is necessary [62]. These tests proved that isotropic functions, with a predetermined width used for each vertex, are able to obtain good results in terms of performance and efficiency.

The two proposed options for MF evaluation, that is, Gaussian-based and cone-based, are both considered in this paper since they can perform differently in terms of efficiency or accuracy. In fact, in [62] we performed specific tests to compare the performance of these methods. The results proved that the cone-based method is slightly faster than the Gaussian one, which in turn is more accurate in many practical cases. This is also confirmed by the following experimental results.
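Putting the previous sketches together, the following hypothetical end-to-end example runs the CH-GBK variant on a toy 2D dataset similar in spirit to the one of Figure 3; the three Gaussian blobs and all parameter values (gamma, lam1, lam2) are our own illustrative choices, not the paper's tuned settings.

```python
import numpy as np

# Assumes normalize_columns, ch_clustering, and mf_gaussian defined above.
def mu(x, cluster, gamma=40.0, d_max=np.sqrt(2)):
    """MF of pattern x to a cluster: use the hull vertices if available,
    otherwise all stored points (e.g., for newly created clusters)."""
    if cluster["hull"] is not None:
        pts = cluster["hull"].points[cluster["hull"].vertices]
    else:
        pts = np.asarray(cluster["points"])
    return mf_gaussian(x, pts, pts.mean(axis=0), gamma, d_max)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.03, (60, 2))
               for m in ([0.2, 0.2], [0.8, 0.3], [0.5, 0.8])])
rng.shuffle(X)
clusters = ch_clustering(normalize_columns(X), mu, lam1=0.15, lam2=0.9)
print(len(clusters))   # typically recovers the 3 groups
```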

6. Experimental Results

The performance of the proposed CH algorithm is validated through the analysis of several clustering benchmarks. We present some experimental results that are representative of its general behaviour. Several datasets having a different number of features and clusters are considered: Hepta [63], Iris, User Knowledge Modeling (UKM), and Seed [64].

Although some datasets, like Iris and UKM, should properly be used as classification benchmarks, they are often adopted for clustering as well. Generally, patterns of the same class may be grouped into several clusters of a dataset (i.e., different regions of the data space), or else the same cluster may contain patterns of different classes. In the case of Iris, for example, many clustering algorithms are able to identify only two clusters, given that the patterns of two classes overlap in the input space of the considered dataset.

We compared our approach with several clustering algorithms characterized by different taxonomies: k-means (partitional-batch-crisp) [65], FCM (partitional-batch-fuzzy) [66], Min-Max (partitional-sequential-fuzzy), and Clusterdata (hierarchical-batch-crisp) in the MATLAB (ver. R2013a) environment. Every clustering algorithm depends on one or more critical parameters that affect its overall performance. For this reason, a sound comparison between different algorithms should also take into account the bootstrap procedures used to set up the related parameters. For instance, the number of clusters in k-means and FCM (i.e., $k$ and $c$, resp.) should be determined by using a cluster validity index such as the ones mentioned in Section 1. In the same way, the clusters and their number must be selected within the hierarchy generated by Clusterdata. FCM, Min-Max, and the proposed CH algorithm need a suited choice of the fuzzification parameter of their MFs. Min-Max and CH adopt a sequential procedure, so their performance should be averaged over different permutations of the pattern presentation order (unless differently justified by the specific nature of the dataset). Similarly, the performance of k-means and FCM should be averaged over different centroid initializations.

In addition, the determination of optimal parameters is critical for online algorithms, given that different operative frameworks may be considered:

(i) A training/tuning set is used to find the optimal values of the parameters, possibly using cluster validity procedures, and then the algorithm is used for online clustering of different test sets (hopefully generated by the same random process as the training/tuning set).

(ii) The same dataset is first used to find the optimal parameters and is successively clustered by using such parameters. In this case, the online algorithm is inserted into a totally batch, iterative procedure; therefore its use is justified only by more accurate performance or faster computational times with respect to iterative clustering algorithms.

(iii) The parameters are fixed in advance by relying on some a priori hypotheses; successively, the algorithm is used for pure online clustering. In this case, the values of the parameters may be adjusted adaptively, for instance, if the specific application requires a data analysis by epochs or in big data problems where errors due to the initial guess marginally affect the overall clustering performance.

In the following, we consider the measure of the error rate in terms of pattern assignment for known benchmarks. Precisely, these numerical tests focus on the number of errors obtained by the CH algorithm compared with the aforementioned clustering algorithms. A study on cluster validity procedures is out of the scope of this paper; consequently, in order to have a broad context for the analysis and to consider algorithms of a different nature, we will assume in the following that all the parameters of these algorithms are ideally obtained through a suitable procedure, so as to generate the right number of clusters. In fact, although clustering is an unsupervised learning problem, we are using some reference datasets for benchmarking and hence we know the right number of clusters and the true label of each pattern.

Consequently, the value of $k$ is suitably fixed in advance for k-means, as is the value of $c$ for FCM, using in this case a default fuzzification parameter for its well-known MF. Moreover, since the result of these algorithms depends on the centroid initialization, we will take as the overall result the average over 100 different initializations. For the Clusterdata algorithm, we choose in the generated hierarchy the solution containing the right number of clusters.

For instance, the parameter $\gamma$ controls the fuzziness of the MFs in the original Simpson's algorithm, and a threshold is also necessary to compare the MF values and to control the hyperbox expansion process. For the Min-Max and CH algorithms, we randomly change the presentation order of the patterns 100 times; for each sorting we consider the value of $\gamma$ that yields the right number of clusters and then take as the overall result the average over the different sortings. For Min-Max, the maximum size $\theta$ of a hyperbox is also considered [67], which is another critical parameter: it is varied (for each sorting of patterns) over a suited range of values, in order to obtain the best choice of both $\gamma$ and $\theta$.

The results in terms of mean error rate of pattern assignment, that is, the percentage of patterns (with respect to the cardinality of the dataset) not correctly assigned to the right cluster, averaged where applicable over the different runs of each algorithm, are shown in Table 1. For each dataset, both the CH-GBK and the CH-CBK methods achieve a performance comparable with the FCM algorithm and a better performance than the k-means, Min-Max, and Clusterdata algorithms. We remark that the same performance as FCM is obtained although CH is an online free-clustering algorithm that is faster than the iterative FCM, since it analyses the data only once and does not suffer from the initial guess of centroids, while being robust against the pattern presentation order.
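The paper does not detail how predicted clusters are matched to the reference classes when computing this error rate; a common choice, sketched below under that assumption, is an optimal one-to-one matching via the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def error_rate(y_true, y_pred):
    """Percentage of patterns not assigned to the right cluster, with
    cluster labels matched to classes by an optimal one-to-one assignment."""
    classes, labels = np.unique(y_true), np.unique(y_pred)
    overlap = np.array([[np.sum((y_true == c) & (y_pred == l))
                         for l in labels] for c in classes])
    rows, cols = linear_sum_assignment(-overlap)   # maximize matched patterns
    return 100.0 * (1.0 - overlap[rows, cols].sum() / len(y_true))
```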

Another result is shown in Table 2, which reports the minimum (i.e., the best) error rate (%) obtained over the different runs of the above algorithms for each of the four datasets. We point out that both the CH-GBK and CH-CBK algorithms are able to obtain the best performance in terms of pattern assignment.

As discussed in the paper, the performance of every algorithm depends on at least one initialization parameter. A more interesting test has been carried out to compare the performance of the proposed approach with the popular Min-Max algorithm that, as discussed above, is an online/sequential algorithm as well and also depends on the order of pattern presentation. In Table 3 we report the results obtained after running the CH-GBK, CH-CBK, and Min-Max algorithms for 100 runs and counting the number of times each algorithm obtains the minimum error rate for a given data sorting. Both CH-GBK and CH-CBK achieve the best performance in terms of the number of runs in which they obtain the minimum error rate (we remark that the sum in each column may be greater than 100% because CH and Min-Max can obtain the same error rate, that is, the same best result, for a given run).

7. Conclusions

In this paper, we propose a new fuzzy clustering algorithm with two different variations, both based on a new metric to calculate the MF of the fuzzy sets representing the clusters. The algorithm is intended to eliminate as much as possible the dependence of clustering results on simple and predetermined geometrical models for clusters. We solve this problem through the computation of a suited convex hull representing each cluster. We also reduce the dependence on critical thresholds and parameters, which often leads to wrong estimates of the number of clusters and wrong data partitions.

The experimental results show that the proposed CH approach achieves a performance comparable to other well-known clustering algorithms, while introducing some desirable features thanks to the use of a sequential and free-clustering approach whose computational complexity is controlled. The only critical parameter to be optimized is the fuzziness $\gamma$ of the adopted MFs. The experimental tests confirm that our algorithm is quite stable over a wide range of $\gamma$, whatever heuristic procedure is used to find the optimal value of this parameter.

In the future, the CH algorithm could also be considered for iterative and hierarchical clustering procedures, and it could be enhanced by using suited techniques to generalize the shape of clusters to nonconvex structures as well. In fact, the proposed approach for the MF computation can be applied independently of the use of the convex hull, for example, by considering the set of vertices of a concave polytope family, which could be better suited to fit semicircular, curvilinear, and other irregular structures, although these may appear in peculiar datasets only.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.