Research Article  Open Access
Hong Xia, Qingyi Dong, Hui Gao, Yanping Chen, ZhongMin Wang, "Service Partition Method Based on Particle Swarm Fuzzy Clustering", Wireless Communications and Mobile Computing, vol. 2021, Article ID 7225552, 12 pages, 2021. https://doi.org/10.1155/2021/7225552
Service Partition Method Based on Particle Swarm Fuzzy Clustering
Abstract
It is difficult to accurately classify a service into a specific service cluster because of the multiple relationships between services. To solve this problem, this paper proposes a service partition method based on particle swarm fuzzy clustering, which can effectively consider the multiple relationships between services by using a fuzzy clustering algorithm. Firstly, an algorithm for automatically determining the number of clusters identifies the number of service clusters based on the density of the service core points. Secondly, the fuzzy C-means algorithm is combined with a particle swarm optimization algorithm to find the optimal cluster centers of the services. Finally, the fuzzy clustering algorithm uses the improved N-Gram cosine similarity to obtain the final results. Extensive experiments on real web service data show that our method is more accurate than mainstream clustering algorithms.
1. Introduction
With the development of service-oriented architecture technology, web services have become a vital software resource on the Internet. The number, scale, and types of services have grown rapidly, and services with similar functions have also increased. In this situation, managing services and assisting service discovery has become difficult and time-consuming. Therefore, how to manage web services conveniently and quickly, and accurately find the service that meets a user's needs among a large number of services, is a big challenge [1].
The service clustering method can effectively help manage services and assist in service discovery. Web service clustering has become a key method for service discovery, service recommendation, and service management, as it helps web service search engines search services and reduces their search space [2]. Service clustering aims to divide multiple services into different clusters based on similarity: within the same cluster, services are as similar as possible, while across different clusters, they are as different as possible. Service clustering can better classify services, compress the search space, shorten search time, help manage services quickly, and provide users with accurate and efficient service.
Much related research shows that service clustering methods based on topic models can improve the efficiency of service search, and many scholars have studied such methods. Paper [3] first applies BTM to learn the latent topics of a web service description corpus and then uses the K-means algorithm to cluster web services. Considering that a web service's textual description is short and lacks adequate information, paper [4] proposes a web service clustering method based on Word2vec and the Latent Dirichlet Allocation (LDA) topic model: Word2vec expands the content of web service description documents, and the topic model is then used to model the extended description documents.
Most topic models lead to low web service clustering accuracy because they cannot build a good model from short text. Paper [5] proposes web service clustering with multifunctionality based on LDA and the fuzzy C-means algorithm: the LDA topic model is used to model web service description documents, and the fuzzy C-means algorithm clusters web services into different functional classes. Paper [6] proposes semantic web service discovery based on fuzzy clustering optimization. As the preprocessing part, an improved fuzzy C-means clustering algorithm clusters services into different classes; in this preprocessing, the improved fuzzy clustering algorithm considers four functional parameters of a service, namely, input, output, precondition, and effect, as the clustering parameters.
In summary, existing web service clustering methods only focus on clustering individual services and do not consider the connections between services. In fact, services are not independent individuals; there are mutual invocation relationships between them. If the interconnections between services are not considered, the accuracy of service clustering will suffer. In addition, most current clustering algorithms select cluster centers randomly, which, in a real web environment, usually leads to poor clustering accuracy. This makes it even more difficult to dig out the interconnections between services. At present, most service clustering techniques in this field mainly use the LDA model and K-means algorithms. In general, the existing work has the following two shortcomings. Firstly, the semantic relationships between words are not fully considered, leading to unsatisfactory service discovery results. Secondly, the interconnections between services are not fully considered, resulting in low service clustering accuracy.
We propose a service partition method based on particle swarm fuzzy clustering (NFCNSPO). Firstly, this method preprocesses the web service descriptions and fully considers the semantic relationships between words; an automatic clustering algorithm then determines the number of service clusters. Secondly, the fuzzy clustering algorithm is combined with particle swarm optimization to avoid the random selection of cluster centers, which degrades the accuracy of the fuzzy clustering algorithm. Finally, the fuzzy clustering algorithm measures the similarity of services with the N-Gram-improved cosine similarity, which controls a sliding window to compare the service descriptions one by one.
2. Background and Related Work
2.1. Web Service Clustering
In recent years, the number and diversity of web services have increased rapidly, and new services keep emerging [7]. Many researchers pay close attention to service-oriented tasks [8, 9], and service computing has developed quickly. Web service clustering is one of the most classical and important tasks in service computing [10, 11].
Service clustering is an integral approach to managing services and assisting service discovery [12]. It is an essential part of service matching, service recommendation, service composition, and service discovery. Service clustering decomposes a large number of services into a set of smaller clusters to help service engineers manage the system effectively. Service engineers then match or recommend a set of services from different clusters according to customer demand.
Web service clustering work can be classified into two types: semantic and nonsemantic [13]. Semantic web service clustering combines keywords extracted from Web Services Description Language (WSDL) documents; it describes the semantic level through users' queries and searches by keywords. Clustering based on semantic web services is relatively mature.
Bo et al. present service clustering based on functional semantics requirements (SCFSR). It extracts functional information from the service requirement documents by natural language processing, then calculates the similarity between functional information matrices, and finally applies K-means to cluster the services [1]. In paper [14], since web service description documents are short, Word2vec is used to expand the content of the description documents, and the LDA topic model is then used to find web services.
Sheeba et al. propose semantic description and registration of mathematical web services by ontology. They use an ontology tree to catalog the characteristics of mathematical web services, both functional and nonfunctional [15]. Nguyen and Kuo present web service discovery through ontology-based matching of semantic relationships. The ontology is built to represent the relationships between semantics, and the keyword-matching method can find the most suitable service for a user request [16].
Hsu and Chiu propose a semantic Latent Dirichlet Allocation, which obtains a synonym table and then acquires a domain feature word set with the Word2vec model. It clusters services of the same domain and, on this basis, builds a domain semantic-aided web service clustering framework [17]. Paper [18] proposes an improved multirelational topic model for web service clustering. Since web service description documents contain few words, the existing LDA model struggles with such short text documents. The authors focus on the multirelational network of web services and build a model called MR-LDA. This model considers the relationships between services as well as annotation relationships and then applies a clustering algorithm to get the final results.
Nonsemantic web service clustering methods do not consider the semantic relationships among services; they pay more attention to the clustering method itself. Paper [19] proposes a cluster feature-based latent factor model for QoS prediction: users and services are divided into different groups based on historical records, and users and services in the same group share the same latent features. Furthermore, an integrated latent factor model is designed for clustering. In [20], a K-means clustering method based on principal component analysis was proposed to predict web services, solving the low prediction accuracy caused by the sparse web service matrix.
In short, the particle swarm fuzzy clustering proposed in this paper adopts a fuzzy clustering algorithm based on the N-Gram-improved cosine similarity to consider the connections between services in more detail. At the same time, combining it with the particle swarm algorithm can find the optimal service cluster centers. It also avoids the random selection of cluster centers in the fuzzy C-means algorithm, thereby improving the accuracy of service clustering.
2.2. Particle Swarm Optimization Algorithm
The particle swarm optimization algorithm (PSO) simulates the predation behavior of a flock of birds. A flock of birds searches for food at random, and there is only one piece of food in the area. None of the birds know where the food is, but they know how far they are from it. The best strategy is therefore to search the area around the bird currently closest to the food. The particle swarm optimization algorithm is a kind of bionic evolutionary algorithm [21].
In the mathematical model of PSO [21], suppose that in a search space the position of particle i is x_i, its velocity is v_i, pbest_i represents the personal best position of the current particle, and gbest represents the global best position found by the swarm. The particle updates its velocity and position according to the equations:

v_i(t + 1) = w · v_i(t) + c_1 · r_1 · (pbest_i − x_i(t)) + c_2 · r_2 · (gbest − x_i(t))

x_i(t + 1) = x_i(t) + v_i(t + 1)

where v_i represents the velocity of the particle, x_i represents its position, t is the number of iterations, pbest_i represents the personal best position of the current particle, gbest represents the global best position, r_1 and r_2 are random values between 0 and 1, and the acceleration factors c_1 and c_2 represent the influence of the personal best and global best positions on the particle's moving direction.
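The update rules above can be sketched in a few lines of code. The following 1-D illustration is a minimal sketch: the function names, the quadratic objective, and all parameter values (w = 0.7, c1 = c2 = 1.5) are illustrative choices, not the paper's settings.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(0)):
    """One velocity/position update for every particle (1-D for brevity)."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        # v(t+1) = w*v(t) + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)  # x(t+1) = x(t) + v(t+1)
    return new_pos, new_vel

def optimize(f, n_particles=10, iters=50, rng=random.Random(1)):
    """Minimize f by iterating the PSO update with pbest/gbest bookkeeping."""
    pos = [rng.uniform(-5.0, 5.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)
    gbest = min(pbest, key=f)
    for _ in range(iters):
        pos, vel = pso_step(pos, vel, pbest, gbest)
        for i, x in enumerate(pos):
            if f(x) < f(pbest[i]):  # personal best update
                pbest[i] = x
        gbest = min(pbest + [gbest], key=f)  # global best update
    return gbest

best = optimize(lambda x: (x - 3.0) ** 2)  # minimum at x = 3
```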
2.3. Fuzzy C-Means Clustering Algorithm
In many problems, the result has only two possibilities, 0 or 1: for example, a student is either a boy or a girl. But this cannot describe the attributes of many things, such as the degree of hot or cold weather; there is no clear definition of what temperature is hot and what is cold. The reason is that in many cases, the boundaries between categories are not absolutely clear, and vague terms are needed to judge. Fuzzy logic extends the crisp concept of taking only 1 or 0 (belonging/not belonging) to real numbers between 0 and 1 through the degree of membership function. The membership function describes the relationship between elements and sets, and the degree of membership expresses the probability that a sample belongs to a certain class.
The fuzzy C-means clustering algorithm (FCM) is a partition-based clustering algorithm that combines the essence of fuzzy theory. Compared with the hard clustering of K-means, FCM provides more flexible clustering results. In most cases, the objects in a dataset cannot be divided into clearly separated clusters, so assigning an object to one specific cluster is somewhat blunt and error-prone. Therefore, a weight is assigned to each object for each cluster, indicating the degree to which the object belongs to that cluster. Probability-based methods can also give such weights, but it is sometimes difficult to determine a suitable statistical model, so FCM, with its natural and nonprobabilistic characteristics, is a better choice.
FCM clusters services based on the similarity between the services in the dataset and the service clusters; the final service clustering result is obtained through iteration of the objective function. The objective function is as follows:

J = Σ_{i=1}^{n} Σ_{j=1}^{c} u_{ij}^m · d²(x_i, c_j)

where X = {x_1, …, x_n} represents the service dataset, n is the number of services, c_j represents a cluster center of the service clustering, u_{ij} represents the degree of membership of service sample x_i to cluster j, and d(x_i, c_j) represents the distance between service sample x_i and service cluster c_j. In this paper, d applies the improved cosine similarity based on N-Gram.
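The FCM iteration that minimizes this objective alternates between a membership update and a center update. The following is a minimal sketch on 1-D data: the initialization from the data extremes and the point values are illustrative, and absolute distance stands in for the paper's N-Gram-improved cosine similarity.

```python
def fcm(points, centers, m=2.0, iters=100):
    """Fuzzy C-means on 1-D points; returns updated centers and memberships."""
    centers = list(centers)
    c, n = len(centers), len(points)
    u = [[0.0] * n for _ in range(c)]
    for _ in range(iters):
        # Membership update: u_jk = 1 / sum_i (d_jk / d_ik)^(2/(m-1))
        for k, x in enumerate(points):
            d = [abs(x - cj) or 1e-12 for cj in centers]  # guard zero distance
            for j in range(c):
                u[j][k] = 1.0 / sum((d[j] / di) ** (2.0 / (m - 1.0)) for di in d)
        # Center update: c_j = sum_k u_jk^m * x_k / sum_k u_jk^m
        for j in range(c):
            w = [u[j][k] ** m for k in range(n)]
            centers[j] = sum(wk * x for wk, x in zip(w, points)) / sum(w)
    return centers, u

pts = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]         # two obvious 1-D clusters
centers, u = fcm(pts, [min(pts), max(pts)])   # deterministic initialization
```

Each column of u sums to 1 by construction, which is the fuzzy counterpart of the hard assignment in K-means.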
3. Method
We introduce the presented framework in Section 3.1 and details from Section 3.2 to Section 3.6.
3.1. Framework
This chapter proposes the flowchart of the service partitioning method based on particle swarm fuzzy clustering. The method is divided into the preprocessing of web service descriptions and the service partition method NFCNSPO based on particle swarm fuzzy clustering (as shown in Figure 1). The left part is the preprocessing of the web service descriptions: first, crawl the web service descriptions from the ProgrammableWeb website and write them into Excel; then extract keywords and filter out stop words; and finally, reduce each word to its stem and use TF-IDF to calculate the frequency of each word. This preprocessing is an important part of the service partition method based on particle swarm fuzzy clustering. The right part is the main introduction of NFCNSPO, which is divided into the following steps. The first step is to identify the number of service clusters and use it as the number of particles in the particle swarm algorithm, where each particle is designed in two parts: the first part is the control variables used to identify the number of service clusters, and the second part is the service distribution of the clusters. The second step is to initialize the velocity and position of the particles and calculate the fitness value of each particle; the fitness function is a linear combination of the overall compactness evaluation function and the fuzzy separation function. The third step is to update the velocity and position of each particle and repeat this process: if the output condition is met, the result is output; otherwise, return to the third step. Finally, the fuzzy clustering algorithm based on the N-Gram-improved cosine similarity clusters the services to obtain the final clustering result.
3.2. Identify the Number of Service Clusters (K)
In service clustering algorithms, the number of clusters plays a vital role in clustering accuracy. Most existing clustering algorithms use the empirical rule K ≈ √(n/2) to determine the number of clusters, where K represents the number of service clusters and n is the number of service samples. This empirical rule has drawbacks. For large datasets, the number of service clusters becomes very large, which increases the time complexity of service clustering. For small datasets, the estimated number of service clusters may approach or even equal the number of samples.
In studies of how to identify the number of service clusters [22], most approaches are based on the local density of service sample points: the center of a service cluster is surrounded by other services, so its local density is larger than that of noncenter points. For example, [23] proposes a clustering algorithm using relative KNN kernel density, called RECOME. Firstly, this algorithm determines the core objects, also known as cluster centers. Secondly, it sorts them by the local density of the core objects. Finally, the point with the highest density is selected as the first cluster center, so that the adjacent noncore data points form a cluster, and the remaining data points repeat this process to form further clusters.
Since the RECOME algorithm [23] is only suitable for numerical data, while the service description data (WSDL) is text, this paper modifies the formula for calculating the density of numerical core points into a density formula for the core points of services, in order to determine the number of service clusters.
Let S = {s_1, …, s_n} be a set of service data objects with m categorical attributes, so that a certain service can be expressed as s_i = (a_{i1}, …, a_{im}). The core point density of service s_i is expressed as ρ(s_i), and the density of each object can be defined as follows [24]:

ρ(s_i) = (1/n) · Σ_{j=1}^{n} (1/m) · Σ_{k=1}^{m} σ(a_{ik}, a_{jk})

where, for each attribute k, σ(a_{ik}, a_{jk}) = 1 if a_{ik} = a_{jk}; otherwise, σ(a_{ik}, a_{jk}) = 0. Therefore, the density of a categorical object is bounded by ρ(s_i) ≤ 1. However, ρ(s_i) = 1 is very rare, because it means that the services are completely similar.
Services adjacent to a service core point are defined as services in its noncore area. d_c is a cutoff distance, which represents the maximum distance between the service core point and a service noncore point. The similarity between two services is calculated with the N-Gram-improved cosine similarity, and, following Rodriguez and Laio [22], d_c is chosen so that the number of nearest-neighbor services is about 1% to 2% of the total number of services. Specifically, first calculate the density of each service and of its neighboring services, and sort the densities in descending order. Second, determine the cutoff distance d_c. Finally, identify the core objects and their neighbors repeatedly to form atom clusters, and obtain the number of clusters in each service dataset. The algorithm for identifying the number of service clusters in this paper follows the literature [16].
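A density-peak style estimate of the cluster count can be sketched as follows. This is a minimal sketch, not the paper's exact procedure: Jaccard similarity over keyword sets stands in for the N-Gram-improved cosine similarity, and the cutoff d_c = 0.5 and the toy documents are illustrative choices.

```python
def estimate_num_clusters(services, sim, d_c=0.5):
    """Density-peak style estimate of the number of clusters.

    Density of a service = number of services within distance d_c,
    where distance = 1 - similarity. A service counts as a cluster
    core if no denser service lies within d_c of it.
    """
    n = len(services)
    dist = [[1.0 - sim(services[i], services[j]) for j in range(n)]
            for i in range(n)]
    density = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c)
               for i in range(n)]
    cores = 0
    for i in range(n):
        denser_nearby = any(density[j] > density[i] and dist[i][j] < d_c
                            for j in range(n))
        if not denser_nearby:
            cores += 1
    return cores

jaccard = lambda a, b: len(a & b) / len(a | b)
docs = [{"map", "geo"}, {"map", "route"}, {"geo", "map", "route"},
        {"pay", "card"}, {"pay", "invoice"}, {"card", "pay", "invoice"}]
k = estimate_num_clusters(docs, jaccard)  # two keyword groups
```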
3.3. Particle Swarm Representation in Web Service
As shown in Table 1, a particle represents a web service clustering fuzzy matrix of size K_max × n, where K_max represents the maximum number of clusters and n is the number of web services. The cluster control variable T_j in the particle, valued between 0 and 1, is used to identify the number of clusters that should be defined for the web services. If the cluster control variable is greater than or equal to 0.5, the fuzzy membership function assigns web service objects to cluster j based on the control variable; if it is less than 0.5, the cluster does not exist and its fuzzy membership values are 0. The web service cluster assignment is a fuzzy membership matrix U = [u_{jk}] with u_{jk} ∈ [0, 1] and Σ_j u_{jk} = 1 for each service k over the active clusters. The specific way of expressing particles as a fuzzy matrix is shown in Table 2.
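One way to encode such a particle is sketched below. The sizes K_MAX = 5 and N_SERVICES = 8 and the random seed are illustrative; the activation threshold of 0.5 follows the text, and the fallback that forces at least one active cluster is an assumption added so the sketch is always well-defined.

```python
import random

K_MAX, N_SERVICES = 5, 8
rng = random.Random(7)

# Part 1 of the particle: K_MAX control variables in [0, 1).
controls = [rng.random() for _ in range(K_MAX)]
active = [j for j, t in enumerate(controls) if t >= 0.5]  # clusters with T_j >= 0.5
if not active:  # assumption: guarantee at least one active cluster
    controls[0], active = 1.0, [0]

# Part 2 of the particle: K_MAX x N_SERVICES fuzzy membership matrix.
membership = [[0.0] * N_SERVICES for _ in range(K_MAX)]
for k in range(N_SERVICES):
    raw = [rng.random() if j in active else 0.0 for j in range(K_MAX)]
    total = sum(raw)
    for j in range(K_MAX):
        # Normalize so the active memberships of each service sum to 1;
        # inactive clusters keep membership 0.
        membership[j][k] = raw[j] / total if total else 0.0
```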


3.4. Fitness Function
The fitness function is a linear combination of the compactness function and the fuzzy separation function: the clustering compactness J_comp and the fuzzy separation S_sep together evaluate the clusters. If the clustering compactness is smaller, the clusters are tighter; if the fuzzy separation is larger, the distance between clusters is larger and the gap between different clusters is larger. This function can be written as:

f(U, C) = J_comp(U, C) − w · S_sep(C)

where U is the fuzzy membership matrix, C is the set of web service cluster centers, w is the weight, d(x_k, c_j) denotes the distance between web service x_k and cluster c_j, and d(c_i, c_j) is the distance between clusters c_i and c_j.
3.5. Particle Swarm Algorithm Procedure
Kennedy et al. proposed the particle swarm algorithm by observing and studying the trajectories of birds looking for food [25]. In the particle swarm algorithm, individuals are called particles. The algorithm includes the following steps:
Step 1. Initialize the particle swarm

In the particle population, each particle consists of two parts: control variables and cluster assignment. The control variables identify how many clusters are active. Next, set the velocity and position of the initial particles. Particle position initialization: first, randomly generate control variables between 0 and 1 for all particles, denoted T_j, j = 1, …, K_max. After calculating the number of active clusters K in each particle, when T_j ≥ 0.5, a fuzzy membership matrix with K active clusters is generated according to the initialization process in [26]; otherwise, no partitioning is performed. Finally, the control variables are used to obtain the cluster assignment. Particle velocity initialization: first, randomly generate velocity control variables for all particles; then use them to obtain the cluster assignment velocities. Note that during initialization, the number of active clusters must satisfy K ≥ 2.
Step 2. Use formula (8) to calculate the fitness function value of each particle and record the number of iterations
Step 3. For each particle, compare its fitness value with the personal best pbest (individual extremum); if the new fitness value is better, replace pbest with it; otherwise, keep the previous value

Step 4. For each particle, compare its fitness value with the global best gbest; if the new fitness value is better, replace gbest with it; otherwise, keep the previous value
Step 5. Update the position and velocity of all particles; the new position and velocity are updated as follows:

v_i(t + 1) = w · v_i(t) + c_1 · r_1 · (pbest_i − x_i(t)) + c_2 · r_2 · (gbest − x_i(t))

x_i(t + 1) = x_i(t) + v_i(t + 1)

where w is the inertia weight, x_i and v_i respectively represent the particle's position and velocity, c_1 and c_2 are positive acceleration constants representing the local and global learning ability of the particles, and r_1 and r_2 are random numbers in the interval [0, 1]. During the update, the value of a control variable may become greater than 1 or less than 0 as the velocity changes: if T_j > 1, it is adjusted to 1; if T_j < 0, it is adjusted to 0. The updated control variables may change the number of active clusters, so the number of active clusters is also updated (as shown in formula (13)).
In addition, in order to increase the flexibility of the membership function, this paper adds the hesitation degree of Sugeno's intuitionistic fuzzy sets (IFS) [27] to the cluster assignment velocity. As in the traditional particle swarm algorithm, the algorithm for updating the velocity and position of the particles follows the literature [16].
Step 6. If the stopping condition is satisfied, exit; otherwise, return to Step 2
3.6. Improved Cosine Algorithm Based on N-Gram
The N-Gram algorithm [28] considers that the topic of a service is closely related to the words in its description. So, the probability of the words in the service description is used to describe the topic of the service, and the probability of the topic data is its N-Gram value; the higher the N-Gram value, the more similar the services. The N-Gram algorithm proceeds as follows: (1) dataset preprocessing, including deleting special characters and filtering stop words; (2) establishing the corpus and counting the number of occurrences of each word; (3) since each service topic is closely related to its words, using the Markov probability formula to calculate the probability of the topic words, as shown in formula (12):

P(w_1, w_2, …, w_n) = Π_{i=1}^{n} P(w_i | w_1, …, w_{i−1})    (12)

where P(w_1, w_2, …, w_n) is the probability of the entire word sequence and w_1, …, w_n represent the topic words.

Since the probability of a certain word in the service is only related to the previous word, formula (12) can be simplified to formula (13), reducing unnecessary operations:

P(w_1, w_2, …, w_n) ≈ Π_{i=1}^{n} P(w_i | w_{i−1})    (13)
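The bigram simplification of formula (13) can be sketched with maximum-likelihood estimates. The tiny corpus is illustrative, and for brevity the sketch concatenates the documents, so one spurious bigram spans each document boundary.

```python
from collections import Counter

def bigram_prob(sentence, corpus):
    """P(w1..wn) ~= P(w1) * prod P(wi | wi-1), maximum-likelihood bigrams."""
    tokens = [w for doc in corpus for w in doc.split()]
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    words = sentence.split()
    p = unigrams[words[0]] / len(tokens)  # P(w1)
    for prev, cur in zip(words, words[1:]):
        if unigrams[prev] == 0:
            return 0.0  # unseen word: probability collapses to zero
        p *= bigrams[(prev, cur)] / unigrams[prev]  # P(cur | prev)
    return p

corpus = ["map route service", "map route api"]
p = bigram_prob("map route", corpus)  # P(map) * P(route | map) = 2/6 * 2/2
```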
The N-Gram algorithm uses a sliding window to assist the measurement of service similarity. When the N-Gram value in the window is small, the window is expanded to accelerate the similarity measurement; when the N-Gram value in the window is large, the window is narrowed to improve the accuracy of the service similarity calculation and hence the accuracy of the service clustering algorithm. The dynamic sliding window size is calculated as shown in formula (14), where Δw is the dynamic change of the window, W represents the window size over the service data, which is updated by the variance of the service data, and σ² represents the variance of the N-Gram values of the service data in the window.
The cosine similarity value is obtained from the word frequency vectors, as shown in formula (15):

cos(a, b) = (Σ_{i=1}^{n} a_i · b_i) / (√(Σ_{i=1}^{n} a_i²) · √(Σ_{i=1}^{n} b_i²))    (15)

where a and b are the word frequency vectors of the two services and n is the vector dimension.
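Formula (15) in code, with illustrative term-frequency vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

v1 = [2, 1, 0, 1]  # term frequencies of service A (toy values)
v2 = [2, 1, 0, 1]  # identical description -> similarity 1
v3 = [0, 0, 3, 0]  # no shared terms -> similarity 0
```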
4. Experiment
Firstly, the details of the preprocessing and the evaluation metrics are introduced. Secondly, NFCNSPO is compared with other algorithms in terms of entropy, accuracy, recall, and F value.
4.1. Preprocessing
4.1.1. Remove Stop Words
According to our observation of WSDL documents, some words, such as "be," "the," "a," and "an," have no practical meaning and act as stop words. We filter out these stop words in order to reduce the noise in the data.
4.1.2. Stemming
WSDL descriptions are written in English, and the same word appears in different forms and tenses. For example, "change" and "changed" have the same meaning, but the computer will consider them to have different meanings. Therefore, we process such words with the Python NLTK (Natural Language Toolkit) to improve the accuracy of NFCNSPO.
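The paper relies on NLTK's stemmer; a toy suffix-stripping stand-in shows the idea (the suffix list and the minimum stem length of 3 are arbitrary simplifications, not NLTK's Porter rules):

```python
def crude_stem(word, suffixes=("ing", "ed", "es", "e", "s")):
    """Toy suffix stripper (a stand-in for NLTK's PorterStemmer)."""
    for suf in suffixes:
        # Strip the first matching suffix, but keep a stem of >= 3 letters.
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

# "change" and "changed" now collapse to the same stem.
same = crude_stem("change") == crude_stem("changed")
```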
4.1.3. TF-IDF
The TF-IDF algorithm identifies feature words in a document and provides their frequencies. In this paper, we use TF-IDF to calculate the frequency of words in the web service documents and then generate a word frequency matrix:

tf_i = n_i / Σ_k n_k,   idf_i = log(N / N_i),   tfidf_i = tf_i · idf_i

where n_i represents the number of occurrences of word i in the service description, Σ_k n_k denotes the total number of words, tfidf_i measures the importance of word i, N represents the number of web service descriptions, and N_i represents the number of web service descriptions containing word i.
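A minimal sketch of the computation on tokenized toy documents (the paper builds the matrix from the WSDL corpus; the idf form log(N/N_i) without smoothing is assumed here):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf = count/len(doc), idf = log(N/df)."""
    n = len(docs)
    df = Counter()  # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({w: (c / total) * math.log(n / df[w])
                        for w, c in tf.items()})
    return weights

docs = [["map", "route", "map"], ["map", "pay"], ["pay", "card"]]
w = tfidf(docs)
```

Note that the rarer word "route" outweighs the more frequent but widespread "map" in the first document, which is exactly the effect TF-IDF is after.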
4.2. Evaluation Measures
This section introduces evaluation measures and comparison algorithms.
It is important to evaluate the performance of the algorithm. We choose widely employed metrics, entropy, recall, accuracy, and F value, to assess the performance of NFCNSPO on web services, comparing the results with the class labels. The recall rate is shown in formula (17), the accuracy rate in formula (18), the F value in formula (19), and the entropy in formula (20):

Recall_i = TP_i / (TP_i + FN_i)    (17)

Accuracy_i = TP_i / (TP_i + FP_i)    (18)

F_i = 2 · Accuracy_i · Recall_i / (Accuracy_i + Recall_i)    (19)

Entropy_i = −Σ_j p_{ij} · log p_{ij}    (20)

where C_i represents cluster i, TP_i represents the number of web services correctly put in cluster C_i, FP_i represents the number of web services falsely put in cluster C_i, FN_i represents the number of web services that should be in cluster C_i but were put in other clusters, and p_{ij} represents the probability that a service belongs to cluster C_i.
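The four metrics in code; the TP/FP/FN counts are illustrative, and a base-2 logarithm is assumed for the entropy:

```python
import math

def precision(tp, fp):
    """Fraction of services placed in the cluster that belong there."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of the cluster's true services that were placed in it."""
    return tp / (tp + fn)

def f_value(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def entropy(probs):
    """-sum p*log2(p) over cluster membership probabilities."""
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

p = precision(tp=8, fp=2)  # 8 correct, 2 falsely placed -> 0.8
r = recall(tp=8, fn=4)     # 4 true members landed elsewhere -> 2/3
```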
There are several classic clustering methods, such as the K-means [29] and PSO-K-means [30] algorithms. They have been applied to web service recommendation and service composition and have achieved good results. K-means, K-modes, and K-prototype are representative algorithms for different feature types. PSO-K-means is similar to our algorithm. We conduct experiments on the same dataset.
K-means: for the K-means algorithm, the first thing to pay attention to is the choice of the value of K. Generally speaking, an appropriate value of K is chosen based on prior experience with the data. K-means divides the sample set into K clusters according to Euclidean distance [29].
K-modes: K-modes applies the idea of K-means to nonnumerical sets, replacing the Euclidean distance used by K-means with the Hamming distance between characters [31].
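The Hamming distance used by K-modes, sketched on illustrative categorical feature vectors:

```python
def hamming(a, b):
    """Hamming distance between two equal-length categorical vectors:
    the count of positions whose values differ."""
    return sum(x != y for x, y in zip(a, b))

# Two toy service profiles differing only in their data format.
d = hamming(["rest", "json", "free"], ["rest", "xml", "free"])
```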
K-prototype: K-prototype is a combination of K-means and K-modes, suitable for data that mixes numerical and categorical types. But since a web service has only a small amount of numerical data, K-prototype to some extent degenerates into K-modes and K-means [32].
PSO-K-means: the literature [30] proposes a K-means algorithm based on particle swarm optimization (PSO). It makes the K-means algorithm unaffected by the initial cluster centers.
In web service clustering, K-means and K-modes need the value of K in advance, and the clustering effect is affected by the cluster centers. The particle swarm fuzzy clustering method we propose uses an automatic clustering algorithm to determine the number of service clusters, and the fuzzy clustering algorithm combines a particle swarm optimization algorithm to determine the locations of the cluster centers. Although K-means and K-modes have been improved, they still cannot get rid of the limitation of K. K-prototype is a combination of K-means and K-modes, and its cluster centers are updated by combining the two. PSO-K-means is similar to our proposed algorithm in that it makes the K-means algorithm unaffected by the initial cluster centers. To further increase clustering accuracy, our algorithm uses the N-Gram-improved cosine similarity to measure the similarity between services. We conducted experiments on real web service datasets, and the experiments prove that our algorithm outperforms these four algorithms.
4.3. Dataset
In this paper, the dataset of web service text data was crawled with Python from ProgrammableWeb, a public web service repository. The method of obtaining our dataset is the same as in the literature [33, 34], using a Python crawler. The website is https://www.programmableweb.com/. Statistics were calculated over these data, including the number of services in each category, sorted in descending order. As can be seen from Figure 2, the Mapping category contains the most services, nearly 1000. Several categories, such as Search, Social, eCommerce, Photos, and Music, contain between 200 and 300 services, and the remaining categories contain fewer than 200. Search, Social, eCommerce, Photos, Music, and other such categories are selected for clustering in this experiment.
The name and number of each service category in the dataset are shown in Table 3. In the first column, the largest category is Search with 286 services and the smallest is Music with 229. In the second column, the largest category is Government with 95 services and the smallest is Movies with 52. In the third column, the largest category is Financial with 46 services and the smallest is Books with 32.

The web service sample data obtained includes service names, labels, descriptions, and categories (as shown in Table 3).
4.4. Analysis of Results
In this paper, MATLAB R2016a is used to generate the experimental results. To avoid experimental contingency, each clustering algorithm was run 10 times, and the average of the runs is taken as the algorithm's final clustering result. Accuracy, entropy, recall, and F value are used to evaluate each clustering algorithm. The MATLAB code and dataset are available at https://github.com/dqy1122/PSOcmeans.git.
As shown in Table 4, in terms of accuracy, the NFCNSPO algorithm is the highest at 0.896; the second is the PSO-K-means algorithm at 0.845; the worst is the K-prototype clustering algorithm at 0.743. In terms of recall rate, the K-modes algorithm is the highest at 0.756; next is the NFCNSPO algorithm at 0.734; the worst is the K-means algorithm at 0.621. In terms of entropy, the K-prototype clustering algorithm has the maximum value of 0.781; the second is the PSO-K-means algorithm at 0.772; the lowest is the NFCNSPO algorithm at 0.642. In terms of F value, the NFCNSPO algorithm is the highest at 0.806; the second is the K-modes algorithm at 0.789; the worst is the PSO-K-means algorithm at 0.772.

The reasons are analyzed as follows. NFCNSPO is a fuzzy clustering service partition method based on particle swarm optimization. Firstly, the improved cosine similarity calculation based on N-Gram is used to calculate the similarity between services. The N-Gram algorithm uses a sliding window to assist service similarity: when the N-Gram value in the window is small, the window is expanded to accelerate the detection of service similarity; when the N-Gram value in the window is large, the window is narrowed to improve the accuracy of service clustering. Secondly, using the advantages of the particle swarm optimization algorithm, the global optimum can be found through the movement of the particles, which avoids the problem of the fuzzy clustering algorithm randomly selecting cluster centers and falling into a local optimum. Therefore, the clustering accuracy of NFCNSPO is improved. In K-means, Euclidean distance is used to measure the similarity between different services; since Euclidean distance is not suitable for calculating the similarity of text data, the service similarity calculation is not accurate enough. At the same time, K-means is a partition clustering algorithm in which the locations and number of clusters are selected randomly, so it easily falls into a local optimum, thereby affecting the accuracy of the clustering algorithm.
As shown in Figure 3, the accuracy of NFCNSPO peaks at 0.896, while the PSO-k-means and k-prototype algorithms are lower, at 0.845 and 0.743, respectively. Several factors explain the higher accuracy of NFCNSPO. First, NFCNSPO applies the improved N-Gram-based cosine similarity, which better captures the similarity between two services. Second, the fuzzy clustering algorithm is combined with particle swarm optimization, avoiding the random selection of cluster locations. PSO-k-means uses Euclidean distance to calculate the similarity between two services, which leads to lower clustering accuracy because Euclidean distance is not well suited to measuring service similarity.
In Figure 4, the recall of the k-modes algorithm is highest, followed by NFCNSPO and PSO-k-means. k-modes uses Hamming distance to measure the similarity between services, comparing the two vectors position by position: whenever a position differs, the Hamming distance increases by 1; otherwise, it remains unchanged.
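The position-by-position comparison described above is the standard Hamming distance; a minimal sketch (the function name is ours):

```python
def hamming_distance(u, v):
    """Count the positions at which two equal-length vectors differ."""
    if len(u) != len(v):
        raise ValueError("vectors must have the same length")
    return sum(1 for a, b in zip(u, v) if a != b)
```

Each mismatched position adds exactly 1 to the distance, matching the behavior described for the k-modes comparison.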
The entropy value represents the degree of disorder of an object: a larger entropy means the objects are more chaotic, while a smaller entropy means the object is more stable, with a low chaos coefficient. Figure 5 shows that the entropy of k-prototype is largest, followed by PSO-k-means, while the entropy of the NFCNSPO algorithm is lowest. On the one hand, k-prototype is an improved algorithm combining k-means and k-modes that can handle both numerical and categorical data; since web service description text contains only a small amount of numerical data, k-prototype behaves similarly to k-modes to some extent. On the other hand, k-prototype is easily affected by the positions of the cluster centers and easily falls into a local optimum, so it is unstable.
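The entropy reported here is presumably the usual cluster-purity entropy: the entropy of the true-class distribution inside each cluster, weighted by cluster size. A sketch under that assumption (function name is ours):

```python
from collections import Counter
from math import log2

def clustering_entropy(labels_by_cluster):
    """Weighted average entropy of true-class labels within each cluster.

    Lower entropy means purer, more stable clusters."""
    total = sum(len(cluster) for cluster in labels_by_cluster)
    h = 0.0
    for cluster in labels_by_cluster:
        counts = Counter(cluster)
        n = len(cluster)
        # Entropy of the class distribution inside this cluster.
        h_c = -sum((k / n) * log2(k / n) for k in counts.values())
        h += (n / total) * h_c
    return h
```

Perfectly pure clusters give entropy 0, while a cluster split evenly between two classes gives entropy 1 bit, consistent with "smaller entropy = more stable" above.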
The F-value combines accuracy and recall, measuring the performance of the algorithm in a more stable form. Figure 6 shows that the F-value of NFCNSPO is highest, followed by k-modes, while the F-value of k-prototype is lowest.
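The combined measure above is presumably the conventional F-measure, i.e., the harmonic mean of the two rates; a minimal sketch under that assumption (the paper's exact formula is not given in this section):

```python
def f_value(precision, recall):
    """Conventional F-measure: harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```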
Figure 3 shows that the accuracy of NFCNSPO is significantly higher than that of the other algorithms. Figure 4 shows that the recall of NFCNSPO is lower than that of the k-modes algorithm but still higher than that of the other clustering algorithms, because NFCNSPO applies the improved N-Gram-based cosine similarity, which better captures the similarity between two samples. In addition, the fuzzy clustering algorithm is combined with particle swarm optimization to avoid the random selection of cluster locations, which otherwise causes poor accuracy in fuzzy clustering.
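The particle swarm component referenced above follows the canonical PSO update: inertia plus attraction toward each particle's personal best and the swarm's global best. In this sketch, each particle encodes one candidate set of cluster-center coordinates flattened into a vector; the parameter values w, c1, and c2 are common illustrative defaults, not values taken from the paper:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update over all particles.

    Each particle moves with inertia (w) plus random pulls toward its
    personal best (c1) and the global best (c2)."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        nv = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
              for xi, vi, pi, gi in zip(x, v, p, gbest)]
        new_vel.append(nv)
        new_pos.append([xi + vi for xi, vi in zip(x, nv)])
    return new_pos, new_vel
```

In a PSO-fuzzy-clustering hybrid, each particle's fitness would be the fuzzy clustering objective evaluated at its encoded centers, and the best particle found replaces the randomly initialized centers.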
The similarity measure plays a very important role in clustering: even the same clustering algorithm yields different accuracy with different similarity measures. This paper improves the similarity measure by combining the N-Gram algorithm with the cosine measure. To verify the performance of the improved similarity measure, we run the same algorithm with different similarity functions and compare accuracy, recall, entropy, and F-value.
In Table 5, the accuracy of NFCNSPO (N-Gram-cosine similarity) is highest at 0.896, followed by NFCNSPO (cosine similarity) at 0.842; the accuracy of NFCNSPO (Euclidean) is lowest. In terms of recall, NFCNSPO (N-Gram-cosine similarity) is highest at 0.734, followed by NFCNSPO (Manhattan) at 0.612. In terms of entropy, NFCNSPO (Euclidean) is highest at 0.773, followed by NFCNSPO (N-Gram-cosine similarity) at 0.713. In terms of F-value, NFCNSPO (N-Gram-cosine similarity) is highest at 0.806, while NFCNSPO (Euclidean) is lowest at 0.637.

The higher performance of NFCNSPO (N-Gram-cosine similarity) can be briefly explained as follows: the improved N-Gram-based cosine similarity better measures the similarity between two services, and the method can adjust the window size between services and clusters, which improves the accuracy of the service clustering algorithm.
In most existing algorithms for automatically determining the number of clusters, k is obtained by averaging the results of many runs, so the estimate is generally not an integer. To solve this problem, most scholars round the result to an integer, because the optimal number of clusters must be an integer. The NFCNSPO algorithm proposed in this paper determines the number of clusters correctly on all six datasets: the number of clusters it calculates equals the number of predetermined classes. In contrast, the rounding method provides the correct optimal k on only five datasets; in particular, the number of clusters NFCNSPO generates on the eCommerce dataset matches the expected value.
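The rounding step described above can be sketched trivially as follows (function name is ours; note that Python's built-in round uses banker's rounding at exact .5 ties):

```python
def rounded_k(k_estimates):
    """Average the cluster-count estimates from repeated runs and round to the
    nearest integer, since the number of clusters must be an integer."""
    return round(sum(k_estimates) / len(k_estimates))
```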
5. Conclusion
Because of the interrelationships between services, it is challenging to accurately assign a service to a specific cluster. This paper proposes a service partition method based on particle swarm fuzzy clustering, which avoids the random selection of clustering positions that leads to poor accuracy in the fuzzy clustering algorithm. The fuzzy clustering algorithm applies the N-Gram-improved cosine similarity to measure the similarity between services. This function controls a sliding window that compares the service descriptions one by one: when the N-Gram value is small, the window is expanded to accelerate the similarity measurement; when the N-Gram value is large, the window is narrowed to improve the accuracy of the similarity measure and of service clustering. Experimental results show that, compared with existing algorithms, the NFCNSPO algorithm better evaluates the interconnections between services and improves the accuracy of service clustering, reasonably accounting for the relationships between services. Combined with the particle swarm algorithm, it can find the optimal cluster center positions.
Data Availability
The MATLAB code and dataset are available at https://github.com/dqy1122/PSOcmeans.git.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by the Science and Technology Project in Shaanxi Province of China (program no. 2019ZDLGY0708), Natural Science Basic Research Program of Shaanxi Province, China (grant no. 2020JM582), Science and Technology of Xi’an (grant no. 2019218114 GXRC017CG018GXYD17.9), Scientific Research Program Funded by Shaanxi Provincial Education Department (no. 21JP115), Natural Science Basic Research Program of Shaanxi (program no. 2021JQ719), and Special Funds for Construction of Key Disciplines in Universities in Shaanxi.
References
[1] J. Bo, H. U. Song, P. WeiFeng, W. Ye, and S. BeiBei, "Service clustering based on the functional semantics of requirements," Chinese Journal of Computers, vol. 41, no. 6, pp. 1036–1040, 2015.
[2] Z. Haoquan and Z. Qi, "Web service discovery clustering performance analysis based on clustering LDA method," Computer Application, vol. 39, no. 10, pp. 27–30, 2020.
[3] D. Yang and D. He, "Web service clustering method based on word vector and biterm topic model," in 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), pp. 299–304, Chengdu, China, 2021.
[4] Ö. Çoban and G. T. Özyer, "Word2vec and clustering based Twitter sentiment analysis," in 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), pp. 1–5, Adana, Turkey, September 2018.
[5] Z. Xiangping, J. Liu, Q. Xiao, M. Shi, and B. Cao, "Web services clustering with multi-functionality based on LDA and fuzzy c-means algorithm," Journal of Central South University (Science and Technology), vol. 49, no. 12, pp. 2986–2992, 2018.
[6] Y. M. Wang, Y. J. Zhang, B. H. Xie, L. H. Pan, and L. C. Chen, "Semantic web service discovery based on fuzzy clustering optimization," Computer Engineering, vol. 39, no. 7, pp. 219–223, 2013.
[7] Q. Feng, L. Chen, C. L. P. Chen, and L. Guo, "Deep fuzzy clustering—a representation learning approach," IEEE Transactions on Fuzzy Systems, vol. 28, no. 7, pp. 1–1433, 2020.
[8] C. Lv, W. Jiang, S. Hu, J. Wang, G. Lu, and Z. Liu, "Efficient dynamic evolution of service composition," IEEE Transactions on Services Computing, vol. 11, no. 4, pp. 630–643, 2018.
[9] Y. Yin, J. Xia, Y. Li, W. Xu, and L. Yu, "Group-wise itinerary planning in temporary mobile social network," IEEE Access, vol. 7, pp. 83682–83693, 2019.
[10] H. Gao, K. K. Dluzniak, H. Xia et al., "A service clustering method based on wisdom of crowds," in 2019 IEEE International Congress on Big Data (BigData Congress), pp. 98–105, Milan, Italy, July 2019.
[11] C. Cho, K. Lee, M. Lee, and C. Lee, "Software architecture module-view recovery using cluster ensembles," IEEE Access, vol. 7, pp. 72872–72884, 2019.
[12] B. Cao, X. F. Liu, M. M. Rahman, B. Li, J. Liu, and M. Tang, "Integrated content and network-based service clustering and web APIs recommendation for mashup development," IEEE Transactions on Services Computing, vol. 13, no. 1, pp. 99–113, 2020.
[13] T. Liang, Y. Chen, W. Gao, M. Chen, M. Zheng, and J. Wu, "Exploiting user tagging for web service co-clustering," IEEE Access, vol. 7, pp. 168981–168993, 2019.
[14] Q. Xiao, B. Cao, X. Zhang, J. Liu, and L. I. Yanxinwen, "Web services clustering based on word2vec and LDA topic model," Journal of Central South University (Science and Technology), vol. 49, no. 12, pp. 2979–2985, 2018.
[15] A. Sheeba, S. Padmakala, and C. A. Subasini, "Ontology based semantic description and registration of mathematical web services," in 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), pp. 521–525, Coimbatore, India, February 2019.
[16] T. P. Q. Nguyen and R. J. Kuo, "Automatic fuzzy clustering using non-dominated sorting particle swarm optimization algorithm for categorical data," IEEE Access, vol. 7, pp. 99721–99734, 2019.
[17] C.-I. Hsu and C. Chiu, "A hybrid latent Dirichlet allocation approach for topic classification," in 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), pp. 312–315, Gdynia, Poland, July 2017.
[18] M. Shi, J. X. Liu, D. Zhou, B. Q. Cao, and Y. P. Wen, "Multi-relational topic model-based approach for web services clustering," Chinese Journal of Computers, vol. 42, no. 4, pp. 820–836, 2019.
[19] S. Chen, Y. Peng, H. Mi, C. Wang, and Z. Huang, "A cluster feature based approach for QoS prediction in web service recommendation," in 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE), pp. 246–251, Germany, March 2018.
[20] H. Yang, H. Yan, and C. Dong, "A k-means clustering approach for PCA-based web service QoS prediction," in 2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS), pp. 129–132, Shenyang, China, October 2019.
[21] P. C. Fourie and A. A. Groenwold, "The particle swarm optimization algorithm in size and shape optimization," Structural and Multidisciplinary Optimization, vol. 23, no. 4, pp. 259–267, 2002.
[22] A. Rodriguez and A. Laio, "Clustering by fast search and find of density peaks," Science, vol. 344, no. 6191, pp. 1492–1496, 2014.
[23] M. Liang, Q. Li, Y. A. Geng, J. Wang, and Z. Wei, "REMOLD: an efficient model-based clustering algorithm for large datasets with Spark," in 2017 IEEE 23rd International Conference on Parallel and Distributed Systems (ICPADS), pp. 376–383, Shenzhen, China, December 2017.
[24] F. Cao, J. Liang, and L. Bai, "A new initialization method for categorical data clustering," Expert Systems with Applications, vol. 36, no. 7, pp. 10223–10228, 2009.
[25] J. Kennedy, R. C. Eberhart, and Y. Shi, "The particle swarm," Swarm Intelligence, pp. 287–325, 2001.
[26] I. Heloulou, M. S. Radjef, and M. T. Kechadi, "A multi-act sequential game-based multi-objective clustering approach for categorical data," Neurocomputing, vol. 267, pp. 320–332, 2017.
[27] K. T. Atanassov, "Intuitionistic fuzzy sets," Fuzzy Sets & Systems, vol. 20, no. 1, pp. 87–96, 1986.
[28] A. Ahmad, M. Rub Talha, M. Ruhul Amin, and F. Chowdhury, "Pipilika n-gram viewer: an efficient large scale n-gram model for Bengali," in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pp. 1–5, Sylhet, Bangladesh, September 2018.
[29] I. Alhadid, S. Khwaldeh, M. al Rawajbeh, E. Abu-Taieh, R. Masa'deh, and I. Aljarah, "An intelligent web service composition and resource-optimization method using K-means clustering and knapsack algorithms," Mathematics, vol. 9, no. 17, p. 2023, 2021.
[30] M. Handa, H. Xiaoyu, and M. Renqing, "Parallel PSO-k-means algorithm implementing web log mining based on Hadoop," Computer Science, vol. 42, no. S1, pp. 470–473, 2015.
[31] O. S. Soliman, D. A. Saleh, and S. Rashwan, "A hybrid fuzzy particle swarm and fuzzy k-modes clustering algorithm," in 8th International Conference on Informatics and Systems (INFOS), pp. 68–75, Giza, Egypt, July 2012.
[32] X. Chen, "An improved clustering algorithm for mixed attributes data based on K-prototypes algorithm," in 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), pp. 396–399, Krakow, Poland, March 2015.
[33] Z. Neng, W. Jian, and M. Yutao, "An intelligent web service composition and resource-optimization method using K-means clustering and knapsack algorithms," IEEE Transactions on Services Computing, vol. 13, no. 3, pp. 488–502, 2020.
[34] N. Zhang, K. He, J. Wang, and Z. Li, "WSGM-SD: an approach to RESTful service discovery based on weighted service goal model," Clarivate Analytics Web of Science, vol. 25, no. 2, pp. 256–263, 2016.
Copyright
Copyright © 2021 Hong Xia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.