Complexity

Special Issue: Collective Behavior Analysis and Graph Mining in Social Networks


Research Article | Open Access

Volume 2020 | Article ID 5206087 | https://doi.org/10.1155/2020/5206087

Jianrui Chen, Zhihui Wang, Tingting Zhu, Fernando E. Rosas, "Recommendation Algorithm in Double-Layer Network Based on Vector Dynamic Evolution Clustering and Attention Mechanism", Complexity, vol. 2020, Article ID 5206087, 19 pages, 2020. https://doi.org/10.1155/2020/5206087

Recommendation Algorithm in Double-Layer Network Based on Vector Dynamic Evolution Clustering and Attention Mechanism

Guest Editor: Liang Wang
Received: 13 Mar 2020
Accepted: 23 May 2020
Published: 07 Jul 2020

Abstract

The purpose of recommendation systems is to help users find useful information quickly and conveniently and to present the items that users are interested in. While the literature on recommendation algorithms is vast, most collaborative filtering approaches attain low recommendation accuracy and are also unable to track temporal changes in user preferences. Additionally, previous differential clustering evolution processes relied on a single-layer network and used a single scalar quantity to characterise the status values of users and items. To address these limitations, this paper proposes an effective collaborative filtering recommendation algorithm based on a double-layer network. This algorithm fully explores the dynamic changes of user preference over time and integrates the user and item layers via an attention mechanism to build a double-layer network model. Experiments on the Movielens, CiaoDVD, and Filmtrust datasets verify the effectiveness of our proposed algorithm, and the results show that it attains better performance than other state-of-the-art algorithms.

1. Introduction

Information overload is a pervasive problem in our era of big data, being a consequence of the rapid development of the Internet and other information technologies. Recommendation algorithms are one of the most widespread approaches to address this problem [1], and their purpose is to help users find information quickly and conveniently. Additionally, recommendation systems usually suggest new items using information from previous searches, including product or media recommendation [2, 3]. Recommendation systems play a key role in the digital economy, as they allow web services to improve users' experience, increase product sales, and help products realize their commercial value.

While the literature on recommendation systems is vast, most algorithms can be classified into three categories: content-based [4], collaborative filtering [5], and hybrid recommendation systems [6]. Among these methods, collaborative filtering recommendation algorithms are the most popular in both research and industry, as they can better exploit social information. Collaborative filtering algorithms can be further divided into three categories: memory-based [7], model-based [8], and hybrid filtering [9]. Among model-based recommendation algorithms, matrix factorization models stand out for their superior speed and strong scalability. Many matrix factorization models have been proposed following the seminal work of Billsus and Pazzani [10]; recent contributions include nonnegative matrix factorization methods for community detection [11] and dynamic networks [12]. Matrix factorization methods are particularly attractive as they can consider the influence of various factors while offering good performance and scalability. Unfortunately, conventional collaborative filtering approaches are not well suited to various problems related to the explosive development of information technologies, which often involve sparse data, cold start, or multidimensional data. These issues are likely to become widespread in the near future due to the continuous increase of online users and items, and hence new approaches that can address these challenges are required. Since community detection methods can be applied within recommendation systems to find the interest communities of target users, they can effectively support personalized recommendation.

On this basis, the aim of this paper is to establish a double-layer network, apply an attention mechanism to connect its two layers, and use a vector dynamic evolution clustering method to detect communities of users and items. The main challenges are as follows:
(i) Constructing the double-layer network: in this paper, the recommender system is modeled as a double-layer network consisting of a user layer and an item layer. The similarity between nodes defines the weighted edges within each layer.
(ii) Applying the attention mechanism to connect the double-layer network: this paper puts forward a novel approach to carry out real-time recommendations, based on an attention mechanism and a forgetting function that are used to fit scores and build relationships between users and items. The attention mechanism allows the algorithm to focus on particularly relevant factors while ignoring others, hence surpassing previous approaches based on raw scores alone.
(iii) Evolving the state vectors to detect communities of users and items: since community detection can find users' interest communities, it can effectively support personalized recommendation. Hence, this paper proposes a community detection procedure for users and items, which uses an evolutionary clustering method based on vectorial dynamics. This clustering procedure represents node states more accurately than approaches based on scalar quantities. Our approach then leverages the community structure of users and items, finding neighbor sets of target users via Cosine similarity.

The effectiveness of our approach is confirmed by extensive simulation experiments, which show that exploiting both attribute and rating information yields better results than other state-of-the-art approaches.

The rest of the paper is organized as follows. The state of the art and related work are reviewed in Section 2. The proposed algorithm is introduced in Section 3. Section 4 presents experimental results, and Section 5 summarises our main conclusions. The convergence analysis of the dynamic evolution clustering method in the double-layer network is given in the Appendix.

2. Related Work

Contemporary recommendation algorithms usually have to deal with challenges such as sparse data, cold start, and multidimensional data. To deal with cold users, Ling et al. proposed a recommendation algorithm that applies character capture and clustering methods [13]. West et al. [14] illustrate how clustering technology can be combined with collaborative filtering to improve recommendation performance. Importantly, since clustering divides users and items into several categories and collaborative filtering is then carried out within each class, the recommendation time is greatly reduced. Building on the well-known k-means clustering algorithm [15], Zahra et al. proposed several recommendation algorithms that improve the random selection of initial centers in k-means clustering [16]. Community detection methods cluster users with similar interests into the same community and separate users with different interests into different communities, so the nearest-neighbor set can be found within a community of similar interests when carrying out collaborative filtering. Performing recommendations within the community of the target user not only improves recommendation accuracy but also reduces the complexity of the algorithm. Community detection methods have therefore received attention, including the algorithm based on the similarity of paths proposed by Wu et al. [17] and the algorithm based on the simplification of complex networks proposed by Bai et al. [18].

Another focus of study has been scenarios where networks are not static but evolve in time. In these cases, dynamic clustering algorithms are needed in order to obtain an adequate clustering effect. Wu et al. proposed a method for clustering based on dynamic synchronization [19, 20], and we then developed community detection approaches based on evolutionary clustering [21, 22]. Both methods bring similar users together while keeping dissimilar items distant, which improves the performance of the clustering algorithms. Bu et al. proposed a dynamic clustering strategy based on the attribute clustering graph [23]. One of the main findings of these works is that dynamic clustering methods tend to enable more efficient recommendation algorithms than traditional clustering methods.

Additionally, researchers have proposed recommendation algorithms for various scenarios, including algorithms based on graph network models, attention networks, and multilayer networks. These are reviewed as follows:
(i) Graph network models: a graph network has a flexible topology and can express complex relationships. This approach treats users and items as nodes and represents relationships as edges between them; edges are usually weighted and may be directed or undirected. To deal with sparse data and cold start, Moradi et al. applied trust information to the collaborative filtering method via a graph clustering algorithm [24]. For cases with strong time constraints, such as financial news, Ren et al. proposed a graph embedding recommendation algorithm based on a heterogeneous graph model [25]. Recommendation performance can be greatly improved by exploiting the graph network to recommend suitable items to users. Therefore, in this paper the relationship between users and items is represented by a graph network, so as to extract more information for recommendation.
(ii) Attention mechanism: attention mechanisms are crucial in modulating the user experience and have been leveraged in various engineering applications, including image processing and natural language processing. Previous recommendation algorithms were driven by user preferences but did not consider user attention; incorporating the attention mechanism makes a recommendation algorithm more practical. Liang et al. proposed a mobile application recommendation model using an attention mechanism for feature interaction [26], and Feng et al. proposed a recommendation algorithm based on an attention network [27]. These works improved performance to a certain extent by incorporating the attention mechanism into the recommendation system. Therefore, this paper uses the attention mechanism to connect the user layer and the item layer, so as to obtain a large double-layer graph network.
(iii) Multilayer networks: while most recommendation algorithms do not consider interactions between users and items, recent works have proposed to encode them in multilayer networks. Shang et al. proposed a video recommendation algorithm based on a hyperlink multilayer graph model [28], and Yasami proposed a knowledge-based link recommendation approach using a nonparametric multilayer model of dynamic complex networks [29]. These methods all improve, to some extent, on algorithms based on single-layer networks.

3. Proposed Algorithm

3.1. Scenario

Sometimes users do not know what clothes to buy, which movies to watch, which songs to listen to, and so on. In such cases, users can browse the items suggested by a recommendation system. A recommendation system is an information filtering scheme based on the user's historical behavior data. Its main task is to predict users' ratings of items and recommend suitable items to them. The problem is thus: given user history evaluation information (evaluations, ratings, and timestamps), user attribute information (gender, age, occupation, zip code, etc.), and item attribute information (comedy, science, war, etc.), recommend appropriate items that interest each user.

Suppose a recommendation system contains a set of users and a set of items. Most recommendation methods work in two ways. On the one hand, a recommendation might be made based on the attributes of the items that a user likes: for example, if a user likes comedies, the system will recommend comedies from the available films. On the other hand, the system might look for users who have preferences similar to a target user and recommend to the target user what those similar users have chosen before.

3.2. Algorithm Detail Description

Here, we give a detailed description of our proposed method. Section 3.2.1 presents the construction of the double-layer network, and Section 3.2.2 presents the vector dynamic evolution clustering method and the main convergence analysis. The predicted scores are obtained within each cluster, as shown in Section 3.2.3. Finally, the complete pseudocode and flow chart of our proposed algorithm are shown in Section 3.2.4.

The notations and their explanations in this paper are summarized in Table 1.


Notations | Explanations

The number of users
The number of items
The attribute vector of a user
The attribute vector of an item
The original score of a user for an item
The rating time of a user's score for an item
The attribute similarity between two users
The rating similarity between two users
The similarity matrix between users
The attribute similarity between two items
The rating similarity between two items
The similarity matrix between items
The attention matrix of users to items
The attention matrix of items to users
The state vector of a user node at a given time
The state vector of an item node at a given time

3.2.1. Network Model Construction

In this paper, all users and items are treated as nodes in networks, and the state of each node is represented as a vector. The user layer is set up by all users in the system, and the item layer is constructed by all items. The similarity between users is regarded as the edge weight of the user layer. The similarity between items is regarded as the edge weight of the item layer. The user and item layers are connected through attention mechanisms and ratings. In this way, the double-layer network model is formed and a simple example is shown in Figure 1.

(1) Constructing the User Layer Network. Different people observe the same thing from different angles, and likewise their interests vary. In the Movielens dataset, from the perspective of gender, males tend to prefer action movies and females tend to prefer romantic movies. To a large extent, people with similar attribute information have similar interests and preferences, and we integrate the score information into the calculation of similarity. Firstly, the ages are divided into three bands: younger than 18, 18 to 55, and older than 55. The occupations are divided into three classes: culture, leisure, and management. The sexes are categorized as male and female. An entry of the attribute vector is 1 if the user has the corresponding attribute and 0 otherwise. Thus, the attribute vector of each user is an 8-dimensional 0-1 vector.
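As a sketch, the 8-dimensional attribute encoding described above can be implemented as follows; the age bands follow the text, while the function name and the string labels for the occupation classes are illustrative assumptions.

```python
def user_attribute_vector(age, occupation, gender):
    """Build the 8-dimensional 0-1 attribute vector described above:
    three age bands, three occupation classes, and two genders,
    one-hot encoded and concatenated."""
    age_band = [int(age < 18), int(18 <= age <= 55), int(age > 55)]
    occupations = ["culture", "leisure", "management"]  # illustrative labels
    occ = [int(occupation == o) for o in occupations]
    sex = [int(gender == "M"), int(gender == "F")]
    return age_band + occ + sex
```

For example, a 25-year-old female user in the culture class gets exactly three 1-entries, one per one-hot group.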

Then, the attribute similarity between two users is defined in equation (1) in terms of their attribute vectors.

The rating similarity between two users is defined in equation (2) in terms of the score vectors of the two users over the items they have both scored.

In order to match actual user similarity more closely, the final user similarity in equation (3) combines the user attribute similarity and the common-rating similarity through a mixture parameter, forming a convex combination of the two similarities taken from different angles. The adjacency matrix of the user layer is thereby obtained, representing the coupling relations between users.
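The exact similarity formulas did not survive extraction, so the following sketch assumes cosine similarity for both the attribute part and the rating part, mixed by a convex parameter `alpha`; the function names are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity; returns 0 if either vector is all-zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def user_similarity(attr_u, attr_v, r_u, r_v, alpha=0.5):
    """Convex combination of attribute similarity and rating similarity.
    Rating similarity is computed only over items both users scored."""
    co = (r_u > 0) & (r_v > 0)           # commonly rated items
    s_attr = cosine_sim(attr_u, attr_v)
    s_rate = cosine_sim(r_u[co], r_v[co]) if co.any() else 0.0
    return alpha * s_attr + (1 - alpha) * s_rate
```

Filling the pairwise values into a symmetric matrix yields the weighted adjacency matrix of the user layer.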

(2) Constructing the Item Layer Network. Similar to the construction of the user layer, this paper integrates score information and item attributes into the calculation of similarity between items. In the MovieLens dataset, movies have 19 attributes. An entry of the attribute vector is 1 if the movie has the corresponding attribute and 0 otherwise. Thus, the attribute vector of each item is a 19-dimensional 0-1 vector.

The attribute similarity between two items is defined in equation (4) in terms of their attribute vectors.

The rating similarity between two items is defined in equation (5) in terms of the score vectors of the two items over the users who have rated both.

In order to capture the relation between items more accurately, the final item similarity in equation (6) combines the item attribute similarity and the common-rating similarity through a mixture parameter, forming a convex combination of the two similarities taken from different angles. The adjacency matrix of the item layer is thereby obtained, representing the coupling relations between items.

(3) Connections between the User Layer and the Item Layer. To establish the relationship between the user layer and the item layer, the connection between the two network layers plays a critical role. The German psychologist Hermann Ebbinghaus found that human forgetting proceeds quickly at first and then slows down [30]. Inspired by Ebbinghaus, we believe that people's interests also change with time: the closer a score is to the current time, the better it expresses the user's current interests. By fitting the Ebbinghaus forgetting curve, we obtain a forgetting curve in line with people's interests and hobbies; the resulting forgetting function is given in equation (7).

The coefficients are obtained by fitting, and exp denotes the exponential function. The forgetting function gives the forgetting degree of a user for an item in terms of the time of that user's rating of the item, relative to the user's earliest and latest rating times.
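A minimal sketch of such a forgetting function is given below. The paper's fitted coefficient values did not survive extraction, so `a`, `b`, and `c` are placeholder values, and the normalisation of rating times into [0, 1] is an assumption.

```python
import math

def forgetting_degree(t_ui, t_first, t_last, a=0.2, b=0.8, c=3.0):
    """Forgetting degree of a user's rating of an item.

    t_ui is the rating time; t_first and t_last are the user's earliest
    and latest rating times. delta normalises recency into [0, 1], so
    the newest rating keeps full weight and the oldest decays the most.
    a, b, c are placeholders, not the paper's fitted coefficients.
    """
    if t_last == t_first:
        return 1.0
    delta = (t_last - t_ui) / (t_last - t_first)  # 0 = newest, 1 = oldest
    return a + b * math.exp(-c * delta)
```

The weight decays exponentially with age, mirroring the fast-then-slow shape of the Ebbinghaus curve.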

Then, we use the attention mechanism to connect the user layer to the item layer on the basis of the processed scores. In daily life, people allocate their attention differently: for example, young girls pay more attention to romantic movies than to war movies, and no one can pay attention to everything. Based on this, the attention of a user to an item is defined in equation (8).

Similarly, not every item is designed for everyone; some items target particular types of users. The attention of an item to a user is therefore defined in equation (9).

Equations (8) and (9) are built from the initial scores that users give to items. The item neighbor set of a user consists of the items that user has evaluated, and the neighbor set of an item consists of all users who have evaluated that item. The resulting user-to-item and item-to-user attention matrices are, obviously, not mutually symmetric.

To illustrate the previous definitions, we give a simple example with three users and four items, starting from their original scoring matrix.

The user attention matrix is obtained from it by equation (8).

In the original score matrix, two users can give the same raw score and nevertheless receive different values in the user attention matrix, which shows that the attention mechanism is meaningful in this paper.

The item attention matrix is obtained by equation (9).

Likewise, equal entries of the original score matrix can be mapped to different values in the item attention matrix. The attention values of users to items form the directed edges from the user layer to the item layer, and the attention values of items to users form the directed edges from the item layer to the user layer. Experiments show that the attention mechanism can greatly improve the recommendation performance.
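The attention equations themselves were lost in extraction; the sketch below assumes a simple row/column normalisation of the score matrix, which reproduces the property described above: two equal raw scores can map to different attention values when the users' (or items') totals differ.

```python
import numpy as np

def user_attention(R):
    """Attention of each user to the items they rated: each user's row
    of the score matrix divided by that user's total score, so attention
    sums to 1 per user and depends on the user's overall rating habits."""
    totals = R.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0            # users with no ratings stay at 0
    return R / totals

def item_attention(R):
    """Attention of each item to its raters: each column divided by the
    item's total received score."""
    totals = R.sum(axis=0, keepdims=True)
    totals[totals == 0] = 1.0
    return R / totals
```

For instance, two users who both rate an item 4 get different attention values on it when their total scores differ, matching the behaviour the example describes.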

3.2.2. Dynamic Evolution Clustering in Double-Layer Network

In recent years, owing to the explosion of data, clustering methods have proliferated. Clustering can not only greatly reduce the recommendation time but also improve the recommendation performance: cluster analysis finds different communities, gathering similar things into one cluster and pushing dissimilar things into different clusters. Among such methods, dynamic clustering is more in line with real situations, so it has been applied in various scenarios [19, 21, 22, 31–35]. However, in previous dynamic evolution clustering methods the state of each node is only a scalar, which cannot adequately express user interests. In order to grasp how interests change across different periods, we propose a vector dynamic evolution clustering method.

In the user layer, the proposed vector dynamic evolution clustering method is given in equation (13).

Here, the state vector of each user node evolves from one time step to the next according to the average edge weight of that node (the average similarity between the user and the other users) and the average value of the nonzero elements in the user attention matrix. Four clustering coefficients appear: two positive coupling coefficients and two negative coupling coefficients. The evolution also involves a matrix, defined in equation (14), whose entries give the influence degree of each user attribute on each item attribute, computed from the set of users who have evaluated items in each category and the set of users with each attribute. The purpose of adding this matrix to the evolution equation is to emphasize the different influences of different user attributes.

In the same way, the vector dynamic evolution clustering method in the item layer is given in equation (15).

Here, the state vector of each item node evolves analogously, according to the average edge weight of that item node (the average similarity between the item and the other items) and the average value of the nonzero elements in the item attention matrix, again with two positive and two negative coupling coefficients. The transpose of the attribute-influence matrix appears in this evolution equation to emphasize the different influences of different item attributes.
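As a rough single-layer sketch of the evolution idea, each node's state vector can be pulled toward neighbors whose edge weight exceeds the node's average edge weight and pushed away from the rest. The coefficients, the exact update rule, and the omission of the cross-layer attention and attribute-influence terms are all simplifying assumptions here.

```python
import numpy as np

def evolve_states(X, W, attract=0.1, repel=0.05, steps=50, tol=1e-4):
    """Evolve node state vectors X (n x d) on a weighted graph W (n x n).

    Each node moves toward nodes with above-average similarity and away
    from the others, so similar nodes cluster together over time."""
    n = X.shape[0]
    for _ in range(steps):
        X_new = X.copy()
        for i in range(n):
            thr = W[i].mean()                       # average edge weight of node i
            pull = (W[i] > thr)[:, None] * (X - X[i])
            push = (W[i] <= thr)[:, None] * (X - X[i])
            X_new[i] = X[i] + attract * pull.mean(axis=0) - repel * push.mean(axis=0)
        if np.abs(X_new - X).max() < tol:           # termination criterion
            return X_new
        X = X_new
    return X
```

On a graph with two dense blocks, the within-block state distances shrink while the cross-block distances grow, which is the clustering behaviour the method relies on.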

Moreover, this community detection evolution process stabilizes after a number of iterations. The convergence results are obtained from the following theorems according to Lyapunov theory.

Theorem 1. The vector dynamic evolution process equations (13) and (15) can be converted into an equivalent compact form.

Proof. See the Appendix at the end of this paper.

Theorem 2. If the eight clustering coefficients are chosen appropriately so that the stability conditions given in the Appendix hold, then the fixed points of equations (13) and (15) are uniformly stable.

Proof. See the Appendix at the end of this paper.
The convergence analysis shows that our community detection algorithm stabilizes after a number of iterations. Finally, user nodes with similar state vectors, and item nodes with similar state vectors, are assigned to the same community; that is, all users and all items with high mutual similarity are grouped into communities of similar interest.

3.2.3. Score Prediction and Recommendation

In order to rank the user similarities within a community and obtain the nearest-neighbor set of the target user for collaborative filtering recommendation, Cosine similarity (equation (17)) is used in this paper to calculate similarity within the community.

For comparison with other similarity indexes, the Pearson correlation coefficient and the adjusted Cosine similarity [36, 37] are also adopted (equation (18)).

These measures are computed from the scores users give to items, the set of items scored by both users under comparison, and each user's average score over the items they have rated. Through the above methods, the similarity ranking of target users within the community can be obtained, and scores can be predicted.

Previous prediction methods did not take the item community into account and averaged over all item scores, which cannot properly express the score of a target user within the community of the target item. In order to better reflect the role of the community, this paper proposes a new method to predict the score, given in equation (19).

The prediction combines the average score the target user gives to the items in the target item's community with the similarity-weighted deviations of the user's nearest neighbors from their own average scores.
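A sketch of such a community-aware prediction is shown below; the baseline, the fallback to the user's global mean, and the normalisation by absolute similarity are assumptions consistent with the description, not the paper's exact equation (19).

```python
import numpy as np

def predict_score(u, i, R, S, neighbors, item_comm, user_means):
    """Predict user u's score for item i, restricted to i's community.

    The baseline is u's average score over items in i's community that u
    has rated (falling back to u's global mean), plus the similarity-
    weighted, mean-centred deviations of u's nearest neighbours."""
    comm_items = [j for j in item_comm[i] if R[u, j] > 0]
    base = np.mean([R[u, j] for j in comm_items]) if comm_items else user_means[u]
    num = den = 0.0
    for v in neighbors[u]:
        if R[v, i] > 0:
            num += S[u, v] * (R[v, i] - user_means[v])
            den += abs(S[u, v])
    return base + num / den if den else base
```

The Top-N items with the highest predicted scores then form the recommendation list for the target user.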

3.2.4. Algorithm Flowchart

This paper presents a recommendation algorithm in Double-layer Network based on Vector dynamic evolution Clustering and Attention mechanism (denoted as DN-VCA). The pseudocode is shown in Algorithm 1.

Input: Training set, test set. Mixture, clustering, and threshold parameters.
Output: Prediction scores, MAE, RMSE, recommendation list.
 //Constructing the double-layer network
 Compute the user similarity matrix, item similarity matrix, user-item attention matrix, and item-user attention matrix according to equations (3), (6), (8), and (9).
 //Community detection in the double-layer network
 for each node do
  Apply vector dynamic evolution clustering equations (13) and (15) to find the appropriate community for each node.
  if the state changes of all nodes fall below the thresholds then
   stop iterating
  end if
 end for
 //Calculate the dynamic similarity
 for each user community and item community do
  Apply equation (17) to calculate the similarity matrix within the community.
 end for
 //Prediction score
 for each target user and target item do
  Find the neighbor set of the target user by similarity sorting.
  Compute the prediction score according to equation (19).
 end for
 //Select the Top-N items as the recommendation list for the target user

Firstly, we use node attribute and rating information to construct an undirected network within each layer. Secondly, the two layers are connected with the attention mechanism, so that the double-layer network is established with directed cross-layer relationships. Thirdly, community detection is carried out via vector dynamic evolution clustering. Fourthly, scores are predicted according to the new prediction method proposed in this paper. Finally, we produce a list of recommendations.

The flow chart of the algorithm proposed in this paper is shown in Figure 2.

4. Experiments and Results

4.1. Datasets

To verify the effectiveness of the model and our proposed algorithm, the Movielens-100k, CiaoDVD, and Filmtrust datasets are used in this paper.
(i) Movielens-100k: MovieLens is a set of movie ratings. The dataset contains 100,000 ratings provided by 943 users for 1682 movies, with scores from 1 to 5. Each score has a corresponding timestamp, and the dataset also includes user attribute information and movie category information. Each user has at least 20 score records. The data sparsity is 93.7%.
(ii) CiaoDVD: this dataset contains 278,483 ratings provided by 7375 users for 99746 DVDs, with scores from 1 to 5. As the sparsity of this dataset is 99.97%, we retain only users with more than 20 DVD evaluations and DVDs with more than 20 evaluations, so that the sparsity after processing is 97.01%; the processed dataset is denoted CiaoDVD-1.
(iii) Filmtrust: this dataset contains 35,497 ratings provided by 1508 users for 2071 movies, with scores from 0.5 to 4 in intervals of 0.5. The data sparsity is 98.86%. Since most items have been evaluated by few users, we keep only the items that have been evaluated more than 3 times and delete the users with fewer than 3 evaluations. The data sparsity is thereby slightly reduced to 96.61%; the processed dataset is denoted Filmtrust-1.

Each dataset is randomly divided into an 80% training set and a 20% test set to verify the proposed algorithm. In addition, every algorithm in this paper was run in five cross-validation experiments, and the average over the five runs is reported as the result. Table 2 gives the statistics of the three datasets.
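The random 80/20 split described above can be sketched as follows; the function name and seeding are illustrative.

```python
import random

def split_ratings(ratings, test_frac=0.2, seed=0):
    """Randomly split rating records into training and test sets
    (80%/20% by default), as done for each of the five runs."""
    rng = random.Random(seed)
    shuffled = list(ratings)
    rng.shuffle(shuffled)
    cut = int(round(len(shuffled) * (1 - test_frac)))
    return shuffled[:cut], shuffled[cut:]
```

Calling it five times with different seeds and averaging the resulting metrics reproduces the cross-validation protocol.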


                  Movielens-100k  CiaoDVD  CiaoDVD-1  Filmtrust  Filmtrust-1

User              943             7375     471        1508       1227
Item              1682            99746    689        2071       793
Rating            100000          278483   9707       35497      33009
Scale             [1–5]           [1–5]    [1–5]      [0.5–4]    [0.5–4]
Sparse degree (%) 93.71           99.97    97.01      98.86      96.61

4.2. Evaluation Indexes

In order to verify the accuracy of the algorithm proposed in this paper, we use five evaluation indicators, beginning with MAE and RMSE.

Here, MAE is the mean absolute error and RMSE is the root mean squared error, computed over the test set from the true scores, the prediction scores generated by the algorithm, and the number of scores to be predicted. Rounding to integer scores is performed on the Movielens and CiaoDVD-1 datasets; the scores on Filmtrust-1 are floating-point numbers in intervals of 0.5, so no rounding is performed on that dataset.
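MAE and RMSE can be computed as follows, using their standard definitions.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error over the test-set predictions."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean squared error over the test-set predictions."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```

Lower values of both indicate more accurate score predictions.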

Since we present recommendations to target users after the prediction, three further indexes, Precision, Recall, and the F value, are used to verify the efficiency of the recommendations.

Precision indicates the proportion of items the user really likes within the recommendation list, relative to the total number of recommendations; it is computed from the set of items recommended by the algorithm for the target user and the set of items the target user really likes in the test set. Recall represents the ratio of the user's liked items that appear in the recommendation list to all of the user's liked items. The F value is a comprehensive indicator combining Precision and Recall.
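The three list metrics can be sketched as follows, using their standard definitions; taking the F value as the harmonic mean (F1) of Precision and Recall is an assumption about the paper's exact weighting.

```python
def precision_recall_f1(recommended, liked):
    """Top-N list metrics: precision over the recommended set, recall
    over the items the user actually liked, and their harmonic mean."""
    hits = len(set(recommended) & set(liked))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(liked) if liked else 0.0
    f1 = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f1
```

The metrics are computed per user over the Top-N list and then averaged across all target users.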

4.3. Parameter Analysis

There are several parameters in our proposed algorithm, and we now discuss their influence on the recommendation performance.

The influence of the convex combination parameters in equations (3) and (6): on the Movielens dataset, the candidate set for each of the two parameters is [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]. We tested the effect of the two parameter values on MAE and RMSE.

As shown in Figure 3(a), when the user and item similarities contain only score information, only attribute information, or one of each, the MAE is higher than when both kinds of information are combined; the lowest MAE is attained at intermediate values of both mixture parameters. In other words, the experimental results are best when the similarity includes both score information and attribute information. Similarly, as shown in Figure 3(b), the RMSE is lowest at the same intermediate parameter values, which are therefore used on the Movielens dataset in the following experiments. Because there is no attribute information in the CiaoDVD-1 and Filmtrust-1 datasets, their mixture parameters are set so that only score information is used.

The influence of the thresholds in the termination criteria: in order to obtain the stable state vectors of nodes more accurately and quickly, we limit the number of iterations of the dynamic evolution clustering equations. If the state difference between consecutive iterations falls below the threshold, iteration is terminated. We selected several pairs of user-layer and item-layer thresholds for experiments.

As shown in Figure 4, on the Movielens dataset, both MAE and RMSE are lowest for one particular pair of thresholds; that pair achieves the best prediction accuracy and is therefore chosen for the experiments on Movielens.

As shown in Figure 5, on the CiaoDVD-1 dataset, both MAE and RMSE are lowest for one pair of thresholds, except for the prediction when the number of neighbors is 90; that pair is chosen for the following experiments on CiaoDVD-1.

As can be seen from Figure 6, when the number of neighbors is 60 the two best threshold pairs give MAE values of 0.6164 and 0.6155; for the other neighbor numbers, both MAE and RMSE of the second pair are the lowest, that is, its prediction effect is the best. As a result, that pair of threshold values is selected for the experiments on the Filmtrust-1 dataset.

In this paper, only four pairs of parameters are compared on each dataset, and the threshold values with the best effect are selected. Thresholds other than those tested may lead to even better predictions.

4.4. Comparing Similarity

To choose among Cosine similarity, the Pearson correlation coefficient, and adjusted Cosine similarity, we conducted experiments on the Movielens dataset.

As shown in Figure 7, the MAE and RMSE of Cosine similarity are lower than those of the Pearson correlation coefficient and adjusted Cosine similarity regardless of the number of neighbors, so we select Cosine similarity as the measure of dynamic similarity within a community.
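The three candidate measures compared in Figure 7 have standard definitions; a minimal sketch (function names are ours):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pearson_sim(u, v):
    # Pearson correlation = cosine similarity of mean-centred vectors.
    return cosine_sim(u - u.mean(), v - v.mean())

def adjusted_cosine_sim(u, v, item_means):
    # Adjusted cosine subtracts each item's mean rating before comparing,
    # removing per-item rating bias.
    return cosine_sim(u - item_means, v - item_means)
```

All three return values in [-1, 1]; the experiments select whichever yields the lowest prediction error within communities.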

4.5. Comparing Results of Different Recommendation Algorithms

In this paper, our proposed algorithm (DN-VCA) is compared with three existing algorithms: a collaborative filtering recommendation algorithm based on K-means clustering (named K-CF), the algorithm of Ref. [38] (named DTNM), and the algorithm of Ref. [32] (named EHC-CF). The selected neighbor set is [10, 20, 30, 40, 50, 60, 70, 80, 90, 100].

4.5.1. Movielens-100k

To compare the accuracy of our proposed algorithm with the other algorithms, a number of experiments were carried out on the Movielens dataset.

Figure 8 shows the clustering results based on our double-layer network evolutionary clustering method. Six communities are formed for Movielens-100k.

As shown in Figure 9, regardless of the number of neighbors, both the MAE and RMSE values of our DN-VCA are lower than those of the three compared algorithms.

Our DN-VCA performs well because the double-layer network better represents the relations between users and items. Moreover, our prediction method is no longer based on the user community alone but also incorporates the item community, highlighting the role of the item community in prediction.
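The idea of incorporating both communities into prediction can be sketched as a weighted blend of user-community and item-community collaborative filtering. This is a hypothetical illustration, not the paper's exact formula: `predict_rating`, `alpha`, and the neighbour/similarity inputs are assumed names.

```python
import numpy as np

def predict_rating(R, u, i, user_nbrs, item_nbrs, sim_u, sim_i, alpha=0.5):
    """Blend user-based and item-based predictions drawn from the
    user's and the item's communities (hypothetical sketch).

    R         : (n_users, n_items) rating matrix, 0 = unrated.
    user_nbrs : users in u's community who rated item i.
    item_nbrs : items in i's community rated by user u.
    sim_u/sim_i : similarity of u (resp. i) to each neighbour.
    alpha     : weight of the user-layer prediction (assumed parameter).
    """
    def weighted(ratings, sims):
        sims = np.asarray(sims, dtype=float)
        # Similarity-weighted average of the neighbours' ratings.
        return float(np.dot(ratings, sims) / np.sum(np.abs(sims))) if len(sims) else 0.0

    p_user = weighted([R[v, i] for v in user_nbrs], sim_u)
    p_item = weighted([R[u, j] for j in item_nbrs], sim_i)
    return alpha * p_user + (1 - alpha) * p_item
```

Setting `alpha = 1` recovers a purely user-based prediction, so the item-community term is what distinguishes this blend from single-layer collaborative filtering.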

Next, we present the Top-N recommendation list for each user and select the recommended-number set as [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]. The comparison results for Precision, Recall, and F1 are shown in Table 3.


Algorithm | Metric (%) | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
EHC-CF | Precision | 90.3805 | 89.7659 | 89.3578 | 88.7287 | 88.3573 | 88.2147 | 87.8782 | 87.7129 | 87.6330 | 87.4568
EHC-CF | Recall | 7.8480 | 15.1348 | 21.7199 | 27.5310 | 32.6933 | 37.3786 | 41.5252 | 45.3709 | 48.9157 | 52.1453
EHC-CF | F1 | 14.3758 | 25.7226 | 34.6698 | 41.6885 | 47.3606 | 52.1235 | 56.0057 | 59.4090 | 62.3873 | 64.9418
DTNM | Precision | 91.2053 | 89.7964 | 89.1047 | 88.8487 | 88.4917 | 88.2884 | 88.2264 | 87.9556 | 87.7681 | 87.6785
DTNM | Recall | 7.9072 | 15.1289 | 21.6744 | 27.5701 | 32.7656 | 37.4209 | 41.6886 | 45.4873 | 48.9955 | 52.2710
DTNM | F1 | 14.4857 | 25.7146 | 34.5931 | 41.7475 | 47.4577 | 52.1785 | 56.2263 | 59.5654 | 62.4880 | 65.1010
K-CF | Precision | 91.2274 | 90.1739 | 89.5158 | 89.1396 | 88.8328 | 88.4051 | 88.2116 | 88.0297 | 87.9305 | 87.7079
K-CF | Recall | 7.9072 | 15.2000 | 21.7640 | 27.6597 | 32.8672 | 37.4738 | 41.6812 | 45.5312 | 49.0895 | 52.2939
K-CF | F1 | 14.4860 | 25.8335 | 34.7383 | 41.8829 | 47.6137 | 52.2509 | 56.2164 | 59.6208 | 62.6062 | 65.1270
DN-VCA | Precision | 91.7247 | 90.4707 | 89.7541 | 89.4433 | 88.9443 | 88.5499 | 88.3380 | 88.0890 | 87.9783 | 87.8778
DN-VCA | Recall | 7.9570 | 15.2437 | 21.8078 | 27.7400 | 32.9021 | 37.5253 | 41.7313 | 45.5466 | 49.1077 | 52.3803
DN-VCA | F1 | 14.5765 | 25.9100 | 34.8122 | 42.0086 | 47.6662 | 52.3267 | 56.2874 | 59.6466 | 62.6328 | 65.2411

As shown in Table 3, as the number of recommendations increases, the Recall and F1 values increase while Precision decreases. Compared with DTNM, EHC-CF, and K-CF, our DN-VCA holds a slight advantage in Precision, Recall, and F1.
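The Top-N metrics in Table 3 follow the standard definitions; a minimal per-user sketch (the function name is ours, and averaging over users is omitted):

```python
def precision_recall_f1(recommended, relevant):
    """Top-N metrics for one user.

    recommended : the Top-N recommendation list.
    relevant    : the set of items the user actually liked.
    """
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

These definitions also explain the trend in Table 3: a longer list can only add hits (Recall rises) while diluting the list with misses (Precision falls).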

4.5.2. CiaoDVD-1

To further test the performance of the algorithm, we conducted many experiments on the CiaoDVD-1 dataset. As shown in Figure 10, because this dataset is very sparse, the results oscillate strongly. Nevertheless, regardless of the number of neighbors, the MAE and RMSE values of our DN-VCA are lower than those of the three compared algorithms, except that when the number of neighbors is 20, they are slightly higher than those of EHC-CF.

As shown in Table 4, our DN-VCA has a significant advantage over DTNM and K-CF and is very close to EHC-CF. In general, the proposed DN-VCA outperforms the other three algorithms in MAE, RMSE, Precision, Recall, and F1.


Algorithm | Metric (%) | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
EHC-CF | Precision | 24.5900 | 13.6557 | 9.6985 | 7.6019 | 6.3958 | 5.5571 | 4.9320 | 4.4214 | 4.0643 | 3.7721
EHC-CF | Recall | 11.3877 | 11.8937 | 12.3989 | 12.8134 | 13.3843 | 13.8904 | 14.3312 | 14.6425 | 15.1090 | 15.5502
EHC-CF | F1 | 15.5662 | 12.7134 | 10.8831 | 9.5420 | 8.6551 | 7.9379 | 7.3381 | 6.7916 | 6.4052 | 6.0711
DTNM | Precision | 13.9274 | 8.4802 | 5.9297 | 5.1109 | 4.4484 | 3.8422 | 3.6072 | 3.4007 | 3.1722 | 3.0071
DTNM | Recall | 6.4437 | 7.3774 | 7.5714 | 8.6098 | 9.2974 | 9.5947 | 10.4794 | 11.2553 | 11.7857 | 12.3821
DTNM | F1 | 8.8107 | 7.8901 | 6.6504 | 6.4139 | 6.0174 | 5.4869 | 5.3667 | 5.2231 | 4.9987 | 4.8388
K-CF | Precision | 20.6154 | 12.1088 | 8.2584 | 6.7321 | 5.5848 | 4.9203 | 4.3153 | 4.0033 | 3.7157 | 3.3792
K-CF | Recall | 9.5448 | 10.5441 | 10.5573 | 11.3497 | 11.6845 | 12.2939 | 12.5426 | 13.2556 | 13.8116 | 13.9290
K-CF | F1 | 13.0480 | 11.2720 | 9.2669 | 8.4509 | 7.5571 | 7.0276 | 6.4210 | 6.1492 | 5.8557 | 5.4387
DN-VCA | Precision | 24.8136 | 13.7930 | 9.8938 | 7.6697 | 6.5324 | 5.6663 | 4.9717 | 4.4229 | 4.1280 | 3.8314
DN-VCA | Recall | 11.4897 | 12.0078 | 12.6432 | 12.9301 | 13.6684 | 14.1613 | 14.4467 | 14.6415 | 15.3412 | 15.7958
DN-VCA | F1 | 15.7062 | 12.8381 | 11.1002 | 9.6277 | 8.8396 | 8.0936 | 7.3972 | 6.7932 | 6.5052 | 6.1667

4.5.3. FilmTrust-1

Different from the previous datasets, the rating scale of this dataset is 0.5, so in the final prediction results, the experiments in this paper do not round the predicted scores.

It can be seen from Figure 11 that the DN-VCA proposed in this paper is slightly worse than EHC-CF when the number of neighbors is 10: the MAE of EHC-CF is 0.6365, while the MAE of DN-VCA is 0.6371. For all other neighbor numbers, the accuracy of our DN-VCA has a significant advantage over DTNM, and compared with EHC-CF and K-CF, the MAE and RMSE of DN-VCA are also the lowest. These results show that the proposed DN-VCA achieves good score-prediction performance on the FilmTrust-1 dataset.

As shown in Table 5, the four algorithms give similar results in Precision, Recall, and F1. However, our proposed DN-VCA achieves the best results, being optimal in Precision, Recall, and F1 except when the number of recommendations is 18, where the F1 of DN-VCA is 0.0068 lower than that of K-CF, its Precision is 0.006 lower, and its Recall is 0.0077 lower. DN-VCA obtains the best recommendation results when the number of recommendations is 20.


Algorithm | Metric (%) | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
EHC-CF | Precision | 82.2555 | 83.8450 | 82.8444 | 82.2364 | 81.5371 | 81.2952 | 81.1014 | 81.0292 | 80.9482 | 80.9327
EHC-CF | Recall | 34.3482 | 57.5925 | 73.4343 | 84.5687 | 91.3565 | 95.2834 | 97.2184 | 98.1894 | 98.6618 | 99.0253
EHC-CF | F1 | 48.9674 | 68.2819 | 77.8552 | 83.3853 | 86.1669 | 87.7341 | 88.4304 | 88.7868 | 88.9306 | 89.0687
DTNM | Precision | 85.2386 | 84.0522 | 83.0014 | 82.1382 | 81.6513 | 81.2537 | 81.0887 | 80.9642 | 80.9636 | 80.9419
DTNM | Recall | 34.3415 | 57.7353 | 73.5729 | 84.4672 | 91.4840 | 95.2343 | 97.2034 | 98.1105 | 98.6805 | 99.0364
DTNM | F1 | 48.9578 | 68.4510 | 78.0024 | 83.2855 | 86.2874 | 87.6891 | 88.4167 | 88.7155 | 88.9475 | 89.0788
K-CF | Precision | 85.1992 | 84.1723 | 83.0559 | 82.2075 | 81.7250 | 81.3210 | 81.1107 | 81.0446 | 81.0157 | 80.9571
K-CF | Recall | 34.3256 | 57.8176 | 73.6217 | 84.5388 | 91.5668 | 95.3132 | 97.2297 | 98.2083 | 98.7445 | 99.0555
K-CF | F1 | 48.9351 | 68.5486 | 78.0539 | 83.3559 | 86.3654 | 87.7617 | 88.4407 | 88.8037 | 89.0050 | 89.0957
DN-VCA | Precision | 85.8327 | 84.3690 | 83.3180 | 82.4043 | 81.7817 | 81.3914 | 81.1733 | 81.0943 | 81.0097 | 80.9695
DN-VCA | Recall | 34.5804 | 57.9529 | 73.8548 | 84.7414 | 91.6301 | 95.3956 | 97.3045 | 98.2682 | 98.7368 | 99.0703
DN-VCA | F1 | 49.2986 | 68.7090 | 78.3006 | 83.5556 | 86.4252 | 87.8376 | 88.5088 | 88.8580 | 88.9982 | 89.1092

Experimental results on the three datasets show that the MAE and RMSE of our proposed DN-VCA are lower than those of the three comparison algorithms when predicting scores, so the proposed algorithm is effective for score prediction. In terms of recommendation, DN-VCA also achieves better Precision, Recall, and F1 than the other algorithms. Among the three comparison methods, K-CF and EHC-CF are recommendation algorithms based on clustering, and our results show that our vector dynamic evolution clustering outperforms these clustering algorithms, suggesting that our method is also effective for generating recommendations. This means that our double-layer network construction is meaningful and that dynamic clustering in the double-layer network gathers users with similar interests into the same community, so that highly similar neighbor users provide valuable suggestions in the collaborative filtering process.

5. Conclusion

In this paper, a novel vector dynamic evolutionary clustering recommendation algorithm, DN-VCA, based on a double-layer network and an attention mechanism is proposed. The algorithm first constructs a double-layer network model through node similarity and an attention mechanism. Then, an improved vector dynamic evolution clustering equation is applied in the double-layer network to cluster the nodes into the most suitable communities. Finally, the similarity between nodes is calculated within each community to enable collaborative filtering recommendation. We not only verify the validity of DN-VCA empirically but also prove theoretical results about the algorithm. Additionally, we address shortcomings of existing methods: for example, previous algorithms are based only on a single-layer network, and the node state in their dynamic evolution clustering is only a scalar. With the growth of big data, recommendation systems face ever more users and items, and comparing a target user with all other users, as plain user-based collaborative filtering requires, becomes infeasible; the vector dynamic evolution clustering proposed in this paper is an effective community detection method for addressing this problem. Please note that our algorithm has a number of degrees of freedom that are highly nontrivial to optimize. In fact, our results do not use optimal parameter values; if these parameters were further optimized, the performance of our algorithm would improve further. Finding efficient optimization methods for these parameters constitutes an interesting direction for future research.

Appendix

A. Proof of Theorem 1

To evolve the double-layer network model, the following matrices are first defined:

Matrix keeps the elements of , and the rest are all 0. Matrix keeps the elements of , and the rest of the elements in are all 0. Similarly, matrix keeps the elements of , and the rest are all 0. Matrix keeps the elements of , and the rest of the elements in are all 0. and contain the elements that satisfy and , and all other elements are 0. Matrix contains all elements that are greater than or equal to the mean similarity of each item. Matrix contains all elements less than the mean similarity of each item, and all other elements are 0.

Theorem A.1. Vector dynamic evolution process equations (13) and (15) can be converted into the following forms:

Proof. (i) The user layer vector dynamic evolution process equation (13) is

The initial values are all selected from the range [0, ]. and . We can obtain and , and then and , where and satisfy the following conditions:

Then, equation (13) can be converted to

Then,

Suppose the user state vector is an -dimensional column vector and the item state vector is a -dimensional column vector. Denote and . So, is a column vector of dimension and is a column vector of dimension.

Define a special matrix as follows:

Matrix is a large matrix of dimension , composed of small diagonal matrices of dimension . Similarly, the elements of are made up of the elements of .

Define the diagonal elements of matrix as follows: matrix is the diagonal matrix of ; the first diagonal elements are the same, the th to th elements are the same, and so on, and the last elements are the same. Similarly, we can define , whose elements are made up of the elements of the matrix .

Then, define the matrix as follows:

Matrix is a large matrix of dimension . When the elements of are substituted by the elements of , the above matrix becomes .

Define the matrix as follows:

The matrix is made up of multiple matrices and is a diagonal block matrix. Matrix is the -dimensional matrix defined by equation (14). The dimension of matrix is , which means that matrix is made up of matrices .

Finally, we define the matrix as follows:

The composition of the matrix is the same as that of the matrix , and its elements come from the matrix .

Based on the above definitions, can be further transformed:

Denote

Then, we have

(ii) Similarly, the item layer vector dynamic evolution equation (15) can be written in an analogous form. is defined as follows:

is defined as follows:

Similarly, the elements of are made up of the elements of . The elements of are made up of the elements of the matrix . When the elements of are substituted by the elements of , this matrix becomes . The matrix is made up of multiple matrices.
The composition of the matrix is the same as that of the matrix , and its elements come from the matrix .
Then, we have

Here,

B. Proof of Theorem 2

Theorem B.1. If appropriate parameters , , , , , , , and make and , then the fixed points of equations (13) and (15) are uniformly stable.

Proof. Since any isolated equilibrium state can be moved to the origin of the state space by a coordinate transformation, we only discuss the stability of the equilibrium state at the origin of the coordinates.
The Lyapunov function is defined as follows:

Then, the following transformations are conducted:
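The remaining algebra follows the standard quadratic Lyapunov argument for a linear evolution. A generic sketch, using placeholder symbols rather than the paper's exact notation (here denotes the combined evolution matrix from Theorem A.1):

```latex
V(x) \;=\; x^{\top}x, \qquad
\Delta V \;=\; V(x_{k+1}) - V(x_k)
        \;=\; x_k^{\top}\!\left(M^{\top}M - I\right)x_k \;\le\; 0
\quad \text{whenever } \lVert M \rVert_2 \le 1,
```

so $V$ is nonincreasing along trajectories of $x_{k+1} = M x_k$, and the origin is uniformly stable; the parameter conditions in Theorem B.1 play the role of the norm bound.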