Research Article  Open Access
A Density Peak Clustering Algorithm Based on the KNearest Shannon Entropy and TissueLike P System
Abstract
This study proposes a novel method to calculate the density of the data points based on Knearest neighbors and Shannon entropy. A variant of tissuelike P systems with active membranes is introduced to realize the clustering process. The new variant of tissuelike P systems can improve the efficiency of the algorithm and reduce the computation complexity. Finally, experimental results on synthetic and realworld datasets show that the new method is more effective than the other stateoftheart clustering methods.
1. Introduction
Clustering is an unsupervised learning method, which aims to divide a given population into several groups or classes, called clusters, in such a way that similar objects are put into the same group and dissimilar objects are put into different groups. Clustering methods generally include five categories: partitioning methods, hierarchical methods, densitybased methods, gridbased methods, and modelbased methods [1]. Partitioning and hierarchical methods can find sphericalshaped clusters but do not perform well on arbitrary clusters. Densitybased clustering [2] methods can be used to overcome this problem, which can model clusters as dense regions and boundaries as sparse regions. Three representative approaches of the densitybased clustering method are DBSCAN (DensityBased Spatial Clustering of Application with Noise), OPTICS (Ordering Points to Identify the Clustering Structure), and DENCLUE (DENsitybased CLUstEring).
Usually, an objective function measuring the clustering quality is optimized by an iterative process in some clustering algorithms. However, this approach may cause low efficiency. Thus, the density peaks clustering (DPC) algorithm was proposed by Rodriguez and Laio [3] in 2014. This method can obtain the clusters in a single step regardless of the shape and dimensionality of the space. DPC is based on the idea that cluster centers are characterized by a higher density than the surrounding regions by a relatively large distance from points with higher densities. For the DPC algorithm, scholars have done a lot of research. However, DPC still has several challenges that need to be addressed. First, the local density of data points can be affected by the cut off distance, which can influence the clustering results. Second, the number of clusters needs to be decided by users, but the manual selection of the cluster centers can influence the clustering result. Cong et al. [4] proposed a clustering model for high dimensional data based on DPC that accomplishes clustering simply and directly for data with more than sixdimensions with arbitrary shapes. The problem in this model is that clustering effect is not ideal for different classes and big differences in order of magnitude. Xu et al. [5] introduced a novel approach, called density peaks clustering algorithm based on grid (DPCG). But this method also needs to rely on the user experiment in the choice of clustering centers. Bie et al. [6] proposed a fuzzyCFSFDP method for adaptively but effectively selecting the cluster centers. Du et al. [7] proposed a new DPC algorithm using geodesic distances. And Du et al. [8] also proposed a FNDP (fuzzy neighborhood density peaks) clustering algorithm. But they cannot select clustering center automatic and the FNDP algorithm can cost much time in the calculation of the similarity matrix. Hou and Cui [9] introduced a density normalization step to make largedensity clusters partitioned into multiple parts and smalldensity clusters merged with other clusters. Xu et al. proposed a FDPC algorithm based on a novel merging strategy motivated by support vector machines [10]. But it also has the problem of higher complexity and needs to select the clustering center by users. Liu et al. [11] proposed a sharednearestneighborbased clustering by fast search and find of density peaks (SNNDPC) algorithm. Based on prior assumptions of consistency for semisupervised learning algorithms, some scholars also made assumptions of consistency for densitybased clustering. The first assumption is of local consistency, which means nearby points are likely to have similar local density, and the second assumption is of global consistency, which means points in the same highdensity area (or the same structure, i.e., the same cluster) are likely to have the same label [12]. This method also cannot find the clustering centers automatically. Although many studies about DPC have been reported, it still has many problems that need to be studied.
Membrane computing proposed by Pǎun [13], as a new branch of natural computing, abstracts out computational models from the structures and functions of biological cells and from the collaboration between organs and tissues. Membrane computing mainly includes three basic computational models, i.e., the celllike P system, the tissuelike P system, and the neurallike P system. In the computation process, each cell is treated as an independent unit, each unit operates independently and does not interfere with each other, and the entire membrane system operates in maximally parallel. Over the past years, many variants of membrane systems have been proposed [14–18], including membrane algorithms of solving global optimization problems. In recent years, applications of membrane computing have attracted a lot of attention from researchers [19–22]. There are also some other applications; for example, membrane systems are used to solve multiobjective fuzzy clustering problems [23], solve unsupervised learning algorithms [24], solve automatic fuzzy clustering problems [25], and solve the problems of fault diagnosis of power systems [26]. Liu et al. [27] proposed an improved Apriori algorithm based on an EvolutionCommunication tissueLike P System. Liu and Xue [28] introduced a P system on simplices. Zhao et al. [29] proposed a spiking neural P system with neuron division and dissolution.
Based on previous works, the main motivation of this work is using membrane systems to develop a framework for a density peak clustering algorithm. A new method of calculating the density of the data points is proposed based on the Knearest neighbors and Shannon entropy. A variant of the tissuelike P system with active membranes is used to realize the clustering process. The new model of the P system can improve efficiency and reduce computation complexity. Experimental results show that this method is more effective and accurate than the stateoftheart methods.
The rest of this paper is organized as follows. Section 2 describes the basic DPC algorithm and the tissuelike P system. Section 3 introduces the tissuelike P system with active membranes for DPC based on the Knearest neighbors and Shannon entropy and describes the clustering procedure. Section 4 reports experimental results on synthetic datasets and UCI datasets. Conclusions are drawn and future research directions are outlined in Section 5.
2. Preliminaries
2.1. The Original Density Peak Clustering Algorithm
Rodriguez and Laio [3] proposed the DPC algorithm in 2014. This algorithm is based on the idea that cluster centers have higher densities than the surrounding regions and the distances among cluster centers are relatively large. It has three important parameters. The first one is the local density of data point , the second one is the minimum distance between data point and other data points with higher density, and the third one is the product of the other two. The first two parameters correspond to two assumptions of the DPC algorithm. One assumption is that cluster centers have higher density than the surrounding regions. The other assumption is that this point has larger distance from the points in other clusters than from points in the same cluster. In the following, the computations of and are discussed in detail.
Let be a dataset with data points. Each has attributes. Therefore, is the jth attribute of data point . The Euclidean distance between the data points and can be expressed as follows:
The local density of the data point is defined aswithwhere is the cutoff distance. In fact, is the number of data points adjacent to data point . The minimal distance between data point and any other data points with a higher density is given by
After and are calculated for each data point , a decision graph with on the vertical axis and on the horizontal axis can be plotted. This graph can be used to find the cluster centers and then to assign each remaining data point to the cluster with the shorted distance.
The computation of the local densities of the data points is a key factor for the effectiveness and efficiency of the DPC. There are many other ways to calculate the local densities. For example, the local density of can be computed using (5) in the following [3]:
The way in (5) is suitable for “small” datasets. In fact, it is difficult to judge if the dataset is small or large. When (5) is used to calculate the local density, the results can be greatly affected by the cutoff distance .
Each component on the right side of (5) is a Gaussian function. Figure 1 visualizes the function and two Gaussian functions with different values of . The blue and red curves are the curves of with and , respectively. The curve with a smaller value of declines more quickly than the curve with a larger value of . Comparing the curve of , the yellow dash dotted curve, with the curves of , it can be found that values of are greater than those of when but decay faster when . This means that if the value of the parameter needs to be decided manually in the density calculation, the result of the calculated densities will be influenced by the selected value. This analysis shows that the parameter has a big effect on the calculated results. Furthermore, the density in (5) can be influenced by the cutoff distance . To eliminate the influence from the cutoff distance and give a uniform metric for datasets with any size, Du et al. [30] proposed the Knearest neighbor method. The local density in Du et al. [30] is given bywhere is an input parameter and is the set of the nearest neighbors of data point . However, this method did not consider the influence of the position of the data point on its own density. Therefore, this current study proposes a novel method to calculate the densities.
2.2. The TissueLike P System with Active Membrane
A tissuelike P system has a graphical structure. The nodes of the graph correspond to the cells and the environment in the tissuelike P system, whereas the edges of the graph represent the channels for communication between the cells. The tissuelike P system is slightly more complicated than the celllike P system. Each cell has a different state. Only the state that meets the requirement specified by the rules can be changed. The basic framework of the tissuelike P system used in this study is shown in Figure 2.
A P system with active membranes is a construct:where(1) is the set of alphabets of all objects which appear in the system;(2) represents the states of the alphabets;(3) is the set of labels of the membranes;(4) are the initial multiple objects in cells 1 to m;(5) is the set of objects present in an arbitrary number of copies in the environment;(6) is the set of channels between cells and between cells and the environment;(7) is the initial state of the channel (i, j);(8) is a finite set of symport/antiport rules of the form with and :(i), where , , and .(Object evolution rules: an object is evolved into another in a membrane).(ii), where and . (Sendin communication rules: an object is introduced into a membrane and may be modified during the process).(iii), where . (Sendout communication rules: an object is sent out of the membrane and may be modified during the process).(iv), where and . (Division rules for elementary membranes: the membrane is divided into two membranes with possibly different labels; the object specified in the rule is replaced by possibly new objects in the two new membranes; and the remaining objects are duplicated in the process).(9) is the output cell.
The biggest difference between a celllike P system and a tissuelike P system is that each cell can communicate with the environment in the tissuelike P system, but only the skin membrane can communicate with the environment in the celllike P system. This does not mean that any two cells in the tissuelike P system can communicate with each other. If there is no direct communication channel between the two cells, they can communicate through the environment indirectly.
3. The Proposed Method
3.1. Density Metric Based on the KNearest Neighbors and Shannon Entropy
DPC still has some defects. The current DPC algorithm has the obvious shortcoming that it needs to set the value of the cutoff distance manually in advance. This value will largely affect the final clustering results. In order to overcome this shortcoming, a new method is proposed to calculate the density metric based on the Knearest neighbors and Shannon entropy.
Knearest neighbors (KNN) is usually used to measure a local neighborhood of an instance in the fields of classification, clustering, local outlier detection, etc. The aim of this approach is to find the Knearest neighbors of a sample among N samples. In general, the distances between points are achieved by calculating the Euclidean distance. Let KNN() be a set of nearest neighbors of a point and it can be expressed aswhere is the Euclidean distance between and and is the kth nearest neighbor of . Local regions measured by KNN are often termed Knearest neighborhood, which, in fact, is a circular or spherical area or radius . Therefore, KNNbased method cannot apply to handle datasets with clusters nonspherical distributions. Therefore, these methods usually have poor clustering results when handling datasets with clusters of different shapes.
Shannon entropy measures the degree of molecular activity. The more unstable the system is, the larger the value of the Shannon entropy is, and vice versa. The Shannon entropy, represented by , is given bywhere is the set of objects and is the probability of object appearing in . When is used to measure the distance between the clusters, the smaller the value of is, the better the clustering result is. Therefore, the Shannon entropy is introduced to calculate the data point density in the Knearest neighbor method, so that the final density calculation not only considers the distance metric, but also adds the influence of the data point position to the density of the data point.
However, the decision graph is calculated by the product of and . A larger value of makes it easier to choose the best clustering centers. Therefore, the reciprocal form of the Shannon entropy is adopted. The metrics for and may be inconsistent, which directly leads to and playing different roles in the calculation of the decision graph. Hence, it is necessary to normalize and .
The specific calculation method is as follows. First, the local density of data point is calculated:where and are data points and is the density of data point . Next, the density of data point is normalized and the normalized density is denoted as :Finally, the density metric which uses the idea of the Knearest neighbor method is defined as
To guarantee the consistence of the metrics of and , also needs to be normalized.
3.2. TissueLike P System with Active Membranes for Improved Density Peak Clustering
In the following, a tissuelike P system with active membranes for density peak clustering, called KSTDPC, is proposed. As mentioned before, assume the dataset with data points is represented by . Before performing any specific calculation of the DPC algorithm, the Euclidean distance between each pair of data points in the dataset is calculated and the result is stored in the form of a matrix. The initial configuration of this P system is shown in Figure 3.
When the system is initialized, the objects are in membrane for and object is in membrane , where means there is no object. First, the Euclidean distance between the data points and (represented by for ) is calculated with the rule . Note that for are expressed as . The results are stored as the distance matrix, also called the dissimilarity matrix, ,
At the beginning, there are membranes in the P system. After the distances are calculated, objects are placed in membrane for . In the next step, the densities of the data points are calculated by the rule . Then the sendin and sendout communication rules are used to calculate the values of , , and and to put them in membrane for . Next, according to the sorted results of for , the number of clusters can be determined. The rule of the active membranes is used to split membrane into membranes as shown in Figure 4. The cluster centers are put in membranes to , respectively. Finally, the remaining data points are divided and each is put into a membrane with a cluster center that is closest to the data point. Up to this point, the clusters are obtained.
The main steps of KSTDPC is summarized as in Algorithm 1.
Inputs: dataset X, parameter K  
Output: Clusters  
Step 1: The objects are in membrane for  
and object is in membrane ;  
Step 2: Compute the Euclidean distance matrix by the rule1;  
Step 3: Compute the local densities of the data points by the rule2 and  
normalize them using (10) and (11);  
Step 4: Calculate and for data point using (12) and (4) in every  
membrane , respectively;  
Step 5: Calculate for all in membrane and sort them  
by descend, and select the top K values as the initial cluster center. So as to  
determine the centers of the clusters;  
Step 6: Split the membrane to K membranes by the division rules, which membranes can  
be number from to ;  
Step 7: The clustering centers are put in membranes to , respectively.  
Step 8: Assign each remaining point to the membrane with the nearest cluster center;  
Step 9: Return the clustering result. 
3.3. Time Complexity Analysis of KSTDPC
As usual, computations in the cells in the tissuelike P system can be implemented in parallel. Because of the parallel implementation, the generation of the dissimilarity matrix uses computation steps. The generation of the data points densities needs 1 computation step. The calculation of the final density uses computation steps. The calculation of needs steps. The calculation of uses 1 step. steps are used to sort for . Finally, the final clustering needs 1 more computation step. Therefore, the total time complexity of KSTDPC is . The time complexity of the DPCKNN is . As compared to DPCKNN, KSTDPC reduces the time complexity by transferring time complexity to space complexity. The above analysis demonstrates that the overall time complexity of KSTDPC is superior to that of DPCKNN.
4. Test and Analysis
4.1. Data Sources
Experiments on six synthetic datasets and four realworld datasets are carried out to test the performance of KSTDPC. The synthetic datasets are from http://cs.uef.fi/sipu/datasets/. These datasets are commonly used as benchmarks to test the performance of clustering algorithms. The realworld datasets used in the experiments are from the UCI Machine Learning Repository [31]. These datasets are chosen to test the ability of KSTDPC in identifying clusters having arbitrary shapes without being affected by noise, size, or dimensions of the datasets. The numbers of features (dimensions), data points (instances), and clusters vary in each of the datasets. The details of the synthetic and realworld datasets are listed in Tables 1 and 2, respectively.


The performance of KSTDPC was compared with those of the wellknown clustering algorithms SC [32], DBSCAN [33], and DPCKNN [28, 34]. The codes for SC and DBSCAN are provided by their authors. The code of DPC is optimized by using the matrix operation instead of iteration cycle based on the original code provided by Rodriguez and Laio [3] to reduce running time.
The performances of the above clustering algorithms are measured in clustering quality or Accuracy (Acc) and Normalized Mutual Information (NMI). They are very popular measures for testing the performance of clustering algorithms. The larger the values are, the better the results are. The upper bound of these measures is 1.
4.2. Experimental Results on the Synthetic Datasets
In this subsection, the performances of KSTDPC, DPCKNN, DBSCAN, and SC are reported on the six synthetic datasets. The clustering results by the four clustering algorithms for the six synthetic datasets are color coded and displayed in twodimensional spaces as shown in Figures 5–10. The results of the four clustering algorithms on a dataset are shown as four parts in a single figure. The cluster centers of the KSTDPC and DPCKNN algorithms are marked in the figures with different colors. For DBSCAN, it is not meaningful to mark the cluster centers because they are chosen randomly. Each clustering algorithm ran multiple times on each dataset and the best result of each clustering algorithm is displayed.
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
(a) KSTDPC
(b) DPCKNN
(c) DBSCAN
(d) SC
The performance measures of the four clustering algorithms on the six synthetic datasets are reported in Table 3. In Table 3, the column “Par” for each algorithm is the number of parameters the users need to set. KSTDPC and DPCKNN have only one parameter K, which is the number of nearest neighbors to be prespecified. In this paper, the value of K is determined by the percentage of the data points. It references the method in [34]. For each dataset, we adjust the percentage of data points in the KNN for the multiple times and find the optimal percentage that can make the final clustering reach the best. Because we perform more experiments, we only list the best result in Tables 3 and 4. And in order to be consistent wi other parameters in the table, we directly convert the percentage of data points into specific K values. DBSCAN has two input parameters, the maximum radius Eps and the minimum point MinPts. The SC algorithm needs the true number of clusters. C1 in Table 3 refers to the number of cluster centers found by the algorithms. The performance measures including Acc and NMI are presented in Table 3 for the four clustering algorithms on the six synthetic datasets.


The Spiral dataset has 3 clusters with 312 data points embracing each other. Table 3 and Figure 5 show that KSTDPC, DPCKNN, DBSCAN, and SC can all find the correct number of clusters and get the correct clustering results. All the benchmark values are 1.00 reflecting the four algorithms all performing perfectly well on the Spiral dataset.
The Compound dataset has 6 clusters with 399 data points. From Table 3 and Figure 6, it is obvious that KSTDPC can find the ideal clustering result; DBSCAN cannot find the right clusters whereas DPCKNN and SC cannot find the clustering centers. Because DPC has a special assignment strategy [3], it may assign data points erroneously to clusters once a data point with a higher density is assigned to an incorrect cluster. For this reason, some data points belonging to cluster 1 are incorrectly assigned to cluster 2 or 3 as shown in Figures 6(b)–6(d). DBSCAN has some prespecified parameters that can have heavy effects on the clustering results. As shown in Figure 6(c), two clusters are merged into one cluster in two occasions. KSTDPC obtained Acc and NMI values higher than those obtained by the other algorithms.
The Jain dataset has two clusters with 373 data points in a 2 dimensional space. The clustering results show that KSTDPC, DBSCAN, and SC can get correct results and both of the benchmark values are 1.00. The experimental results of the 4 algorithms are shown in Table 3 and the clustering results are displayed in Figure 7. DPCKNN devides some points that should belong to the bottom cluster into the upper cluster. Although all the four clustering algorithms can find the correct number of clusters, KSTDPC, DBSCAN, and SC are more effective because they can put all the data points into the correct clusters.
The Aggregation dataset has 7 clusters with different sizes and shapes and two pairs of clusters connected to each other. Figure 8 shows that both the KSTDPC and DPCKNN algorithms can effectively find the cluster centers and correct clusters, except that an individual data point is put into an incorrect cluster by DPCKNN. Table 3 shows that the benchmark values of KSTDPC are all 1.00 and those of DPCKNN are close to 1.00. SC also can recognize all clusters, but the values of Acc and NMI are lower than those of DPCKNN. DBSCAN did not find all clusters and could not partition the clusters connected to each other.
The R15 dataset has 15 clusters containing 600 data points. The clusters are slightly overlapping and are distributed randomly in a 2dimensional space. One cluster lays in the center of the 2dimensional space and is closely surrounded by seven other clusters. The experimental results of the 4 algorithms are shown in Table 3 and the clustering results are displayed in Figure 9. KSTDPC and DPCKNN can both find the correct cluster centers and assign almost all data points to their corresponding clusters. SC also obtained good experimental result, but DBSCAN did not find all clusters.
The D31 dataset has 31 clusters and contains 3100 data points. These clusters are slightly overlapping and distribute randomly in a 2dimensional space. The experimental results of the 4 algorithms are shown in Table 3 and the clustering results are displayed in Figure 10. The values of Acc and NMI obtained by KSTDPC are all 1.00. This shows that KSTDPC obtained perfect clustering results on the D31 dataset. DPC and SC obtained similar results to those of KSTDPC on this dataset, but DBSCAN was not able to find all clusters.
4.3. Experimental Results on the RealWorld Datasets
This subsection reports the performances of the clustering algorithms on the four realworld datasets. The varying sizes and dimensions of these datasets are useful in testing the performance of the algorithms under different conditions.
The number of clusters, Acc and NMI are also used to measure the performances of the clustering algorithms on these realworld datasets. The experimental results are reported in Table 4 and the best results of the each dataset are shown in italic. The symbol “ ” indicates there is no value for that entry.
The Vertebral dataset consists of 2 clusters and 310 data points. As Table 4 shows, the value of Acc got by KSTDPC is equal to that got by DPCKNN, but the value of NMI got by KSTDPC is lower than that got by DPCKNN. No values of Acc and NMI were obtained by SC. As Table 4 shows, all algorithms could find the right number of clusters.
The Seeds dataset consists of 210 data points and 3 clusters. Results in Table 4 show that KSTDPC obtained the best, whereas DBSCAN obtained the worst, values of Acc and NMI. It is obvious that all four clustering algorithms could get the right number of clusters.
The Breast Cancer dataset consists of 699 data points and 2 clusters. The results on this dataset in Table 4 show that all four clustering algorithms could find the right number of clusters. KSTDPC obtained the Acc and NMI values of 0.8624 and 0.4106, respectively, which are higher than those obtained by other clustering algorithms. The results also show that DBSCAN has the worst performance on this dataset, except that SC did not get experimental results on these benchmarks.
The Banknotes dataset consists of 1372 data points and 2 clusters. From Table 4, it is obvious that KSTDPC got the best values of Acc and NMI among all four clustering algorithms. The values of Acc and NMI obtained by KSTDPC are 0.8434 and 0.7260, respectively. Larger values of these benchmarks indicate that the experimental results obtained by KSTDPC are closer to the true results than those obtained by the other clustering algorithms.
All these experimental results show that KSTDPC outperform the other clustering algorithms. It obtained larger values of Acc and NMI than the other clustering algorithms.
5. Conclusion
This study proposed a density peak clustering algorithm based on the Knearest neighbors, Shannon entropy and tissuelike P systems. It uses the Knearest neighbors and Shannon entropy to calculate the density metric. This algorithm overcomes the shortcoming that DPC has that is to set the value of the cutoff distance in advance. The tissuelike P system is used to realize the clustering process. The analysis demonstrates that the overall time taken by KSTDPC is shorter than those taken by DPCKNN and the traditional DPC. Synthetic and realworld datasets are used to verify the performance of the KSTDPC algorithm. Experimental results show that the new algorithm can get ideal clustering results on most of the datasets and outperforms the three other clustering algorithms referenced in this study.
However, the parameter in the Knearest neighbors is prespecified. Currently there is no technique available to set this value. Choosing a suitable value for is a future research direction. Moreover, some other methods can be used to calculate the densities of the data points. In order to improve the effectiveness of DPC, some optimization techniques can also be employed.
Data Availability
The synthetic datasets are available at http://cs.uef.fi/sipu/datasets/ and the realworld datasets are available at http://archive.ics.uci.edu/ml/index.php.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (nos. 61876101, 61802234, and 61806114), the Social Science Fund Project of Shandong (16BGLJ06, 11CGLJ22), China Postdoctoral Science Foundation Funded Project (2017M612339, 2018M642695), Natural Science Foundation of the Shandong Provincial (ZR2019QF007), China Postdoctoral Special Funding Project (2019T120607), and Youth Fund for Humanities and Social Sciences, Ministry of Education (19YJCZH244).
References
 J. Han, J. Pei, and M. Kamber, Data Mining: Concepts and Techniques, San Francisco, CA, USA, 3rd edition, 2011.
 R. J. Campello, D. Moulavi, and J. Sander, “Densitybased clustering based on hierarchical density estimates,” in Advances in Knowledge Discovery and Data Mining, vol. 7819 of Lecture Notes in Computer Science, pp. 160–172, Springer, Berlin, Germany, 2013. View at: Publisher Site  Google Scholar
 A. Laio and A. Rodriguez, “Clustering by fast search and find of density peaks,” Science, vol. 344, no. 6191, pp. 1492–1496, 2014. View at: Publisher Site  Google Scholar
 J. Cong, X. Xie, and F. Hu, “A density peak cluster model of highdimensional data,” in Proceedings of the AsiaPacific Services Computing Conference, pp. 220–227, Zhangjiajie, China, 2016. View at: Google Scholar
 X. Xu, S. Ding, M. Du, and Y. Xue, “DPCG: an efficient density peaks clustering algorithm based on grid,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 5, pp. 743–754, 2016. View at: Google Scholar
 R. Bie, R. Mehmood, S. Ruan, Y. Sun, and H. Dawood, “Adaptive fuzzy clustering by fast search and find of density peaks,” Personal and Ubiquitous Computing, vol. 20, no. 5, pp. 785–793, 2016. View at: Publisher Site  Google Scholar
 M. Du, S. Ding, X. Xu, and X. Xue, “Density peaks clustering using geodesic distances,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 8, pp. 1–15, 2018. View at: Google Scholar
 M. Du, S. Ding, and Y. Xue, “A robust density peaks clustering algorithm using fuzzy neighborhood,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 7, pp. 1131–1140, 2018. View at: Publisher Site  Google Scholar
 J. Hou and H. Cui, “Density normalization in density peak based clustering,” in Proceedings of the International Workshop on GraphBased Representations in Pattern Recognition, pp. 187–196, Anacapri, Italy, 2017. View at: Google Scholar
 X. Xu, S. Ding, H. Xu, H. Liao, and Y. Xue, “A feasible density peaks clustering algorithm with a merging strategy,” Soft Computing, vol. 2018, pp. 1–13, 2018. View at: Google Scholar
 R. Liu, H. Wang, and X. Yu, “Sharednearestneighborbased clustering by fast search and find of density peaks,” Information Sciences, vol. 450, pp. 200–226, 2018. View at: Publisher Site  Google Scholar
 M. Du, S. Ding, Y. Xue, and Z. Shi, “A novel density peaks clustering with sensitivity of local density and densityadaptive metric,” Knowledge and Information Systems, vol. 59, no. 2, pp. 285–309, 2019. View at: Publisher Site  Google Scholar
 G. Paun, “A quick introduction to membrane computing,” Journal of Logic Algebraic Programming, vol. 79, no. 6, pp. 291–294, 2010. View at: Google Scholar
 H. Peng, J. Wang, and P. Shi, “A novel image thresholding method based on membrane computing and fuzzy entropy,” Journal of Intelligent & Fuzzy Systems Applications in Engineering & Technology, vol. 24, no. 2, pp. 229–237, 2013. View at: Google Scholar
 M. Tu, J. Wang, H. Peng, and P. Shi, “Application of adaptive fuzzy spiking neural P systems in fault diagnosis of power systems,” Journal of Electronics, vol. 23, no. 1, pp. 87–92, 2014. View at: Google Scholar
 J. Wang, P. Shi, H. Peng, M. J. PerezJimenez, and T. Wang, “Weighted fuzzy spiking neural P systems,” IEEE Transactions on Fuzzy Systems, vol. 21, no. 2, pp. 209–220, 2013. View at: Publisher Site  Google Scholar
 B. Song, C. Zhang, and L. Pan, “Tissuelike P systems with evolutional symport/antiport rules,” Information Sciences, vol. 378, pp. 177–193, 2017. View at: Publisher Site  Google Scholar  MathSciNet
 H. Peng, J. Wang, M. J. PérezJiménez, and A. RiscosNúñez, “Dynamic threshold neural P systems,” KnowledgeBased Systems, vol. 163, pp. 875–884, 2019. View at: Publisher Site  Google Scholar
 L. Huang, I. H. Suh, and A. Abraham, “Dynamic multiobjective optimization based on membrane computing for control of timevarying unstable plants,” Information Sciences, vol. 181, no. 11, pp. 2370–2391, 2011. View at: Publisher Site  Google Scholar
 H. Peng, Y. Jiang, J. Wang, and M. J. PérezJiménez, “Membrane clustering algorithm with hybrid evolutionary mechanisms,” Journal of Software. Ruanjian Xuebao, vol. 26, no. 5, pp. 1001–1012, 2015. View at: Google Scholar  MathSciNet
 H. Peng, J. Wang, M. J. PérezJiménez, and A. RiscosNúñez, “The framework of P systems applied to solve optimal watermarking problem,” Signal Processing, vol. 101, pp. 256–265, 2014. View at: Publisher Site  Google Scholar
 G. Zhang, J. Cheng, M. Gheorghe, and Q. Meng, “A hybrid approach based on different evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems,” Applied Soft Computing, vol. 13, no. 3, pp. 1528–1542, 2013. View at: Publisher Site  Google Scholar
 H. Peng, P. Shi, J. Wang, A. RiscosNúñez, and M. J. PérezJiménez, “Multiobjective fuzzy clustering approach based on tissuelike membrane systems,” KnowledgeBased Systems, vol. 125, pp. 74–82, 2017. View at: Publisher Site  Google Scholar
 H. Peng, J. Wang, M. J. PérezJiménez, and A. RiscosNúñez, “An unsupervised learning algorithm for membrane computing,” Information Sciences, vol. 304, pp. 80–91, 2015. View at: Publisher Site  Google Scholar
 H. Peng, J. Wang, P. Shi, M. J. PérezJiménez, and A. RiscosNúñez, “An extended membrane system with active membranes to solve automatic fuzzy clustering problems,” International Journal of Neural Systems, vol. 26, no. 3, pp. 1–17, 2016. View at: Publisher Site  Google Scholar
 H. Peng, J. Wang, J. Ming et al., “Fault diagnosis of power systems using intuitionistic fuzzy spiking neural P systems,” IEEE Transactions on Smart Grid, vol. 9, no. 5, pp. 4777–4784, 2018. View at: Publisher Site  Google Scholar
 X. Liu, Y. Zhao, and M. Sun, “An improved apriori algorithm based on an evolutioncommunication tissuelike P System with promoters and inhibitors,” Discrete Dynamics in Nature and Society, vol. 2017, pp. 1–11, 2017. View at: Google Scholar
 X. Liu and J. Xue, “A cluster splitting technique by hopfield networks and P systems on simplices,” Neural Processing Letters, vol. 46, no. 1, pp. 171–194, 2017. View at: Google Scholar
 Y. Zhao, X. Liu, and W. Wang, “Spiking neural P systems with neuron division and dissolution,” PLoS ONE, vol. 11, no. 9, Article ID e0162882, 2016. View at: Publisher Site  Google Scholar
 M. Du, S. Ding, and H. Jia, “Study on density peaks clustering based on knearest neighbors and principal component analysis,” KnowledgeBased Systems, vol. 99, no. 1, pp. 135–145, 2016. View at: Publisher Site  Google Scholar
 K. Bache and M. Lichman, UCI machine learning repository, 2013. http:// archive.ics.uci.edu/ml.
 A. Ng, M. Jordan, and Y. Weiss, “On spectral clustering: analysis and an algorithm,” in Advances in Neural Information Processing Systems, pp. 849–856, Vancouver, British Columbia, Canada, 2001. View at: Google Scholar
 M. Ester, H. Kriegel, J. Sander, and X. Xu, “A densitybased algorithm for discovering clusters in large spatial databases with noise,” in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 226–231, Menlo Park, Portland, USA, 1996. View at: Google Scholar
 L. Yaohui, M. Zhengming, and Y. Fang, “Adaptive density peak clustering based on Knearest neighbors with aggregating strategy,” KnowledgeBased Systems, vol. 133, pp. 208–220, 2017. View at: Publisher Site  Google Scholar
Copyright
Copyright © 2019 Zhenni Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.