Abstract
Recommender systems have become increasingly significant in the age of rapid development of information technology and pervasive computing, as they provide e-commerce users with appropriate items. In recent years, various model-based and neighbor-based approaches have been proposed, which improve the accuracy of recommendation to some extent. However, these approaches are less accurate than expected when users’ ratings on items are very sparse compared with the huge number of users and items in the user-item rating matrix. Data sparsity and high dimensionality in recommender systems have negatively affected recommendation performance. To solve these problems, we propose a hybrid recommendation approach and framework using the Gaussian mixture model and matrix factorization technology. Specifically, an improved cosine similarity formula is first used to obtain users’ neighbors, and initial ratings on unrated items are predicted. Second, users’ ratings on items are converted into users’ preferences on items’ attributes to reduce the problem of data sparsity. Next, the obtained user-item-attribute preference data is trained through the Gaussian mixture model to classify users with the same interests into the same group. Finally, an enhanced social matrix factorization method fusing users’ and items’ social relationships is proposed to predict the remaining unseen ratings. Extensive experiments on two real-world datasets are conducted, and the results are compared with existing major recommendation models. Experimental results demonstrate that the proposed method achieves better accuracy than the other techniques.
1. Introduction
Collaborative filtering (CF) is one of the most mature and widely used recommendation algorithms; it can be divided into model-based methods and memory-based methods [1, 2]. The former uses machine learning methods to model users’ preferences from training data and predicts unknown ratings through the trained model. The latter is a neighbor-based approach, which calculates the similarities between users/items to find neighbor users with similar interests to an active user, or neighbor items with similar characteristics to an active item, so as to predict and recommend items of interest to the active user.
According to recommendation strategies, recommendation algorithms can be divided into CF, content-based filtering, knowledge-based filtering, and association-rule-based and graph-based recommendation algorithms, etc. [3, 4]. Due to the shortcomings of any single algorithm in recommendation performance, such as the cold-start problem of CF algorithms and the lack of diversity of recommended resources in content-based recommender systems (RS) [5, 6], researchers have begun to combine a variety of algorithms to improve recommendation performance. Hernando et al. [7] introduce a reliability measure into CF-based prediction to improve the prediction information provided to users. Chen et al. [8] propose a hybrid model, which combines the Gaussian mixture model with an item-based CF recommendation algorithm and predicts users’ ratings on items to improve recommendation accuracy. Luo et al. [9] propose a hierarchical Bayesian model-based CF and a related inference algorithm, which reduces prediction errors. Hofmann [10] proposes an algorithm of latent semantic models for CF, a user-centric technique that introduces probabilistic latent semantic analysis (pLSA) into the user modeling process and achieves higher prediction accuracy.
In recent years, some researchers have introduced context information into RS to improve the accuracy of recommendation algorithms. Massa et al. [11] propose a trust-aware collaborative filtering algorithm, which increases the coverage of recommender systems while preserving the quality of predictions. Azadjalal et al. [12] employ trust relationships in recommendation algorithms and propose a method to identify implicit trust statements by applying a specific reliability measure; the proposed algorithm outperforms traditional recommenders in accuracy and coverage. Gao et al. [4] propose a method to obtain context-aware preferences based on cognitive behaviors, which elicits user preferences under multidimensional context environments by establishing a model of the cognitive factors’ mutual effects, achieving better prediction accuracy.
The above methods improve recommendation accuracy to some extent. However, the user-item rating matrix, namely, one of the inputs to the recommendation algorithm for large amounts of data, is often highly sparse, which leads to unreliable predictions [13–17]. The main reasons are as follows: (1) excessive sparsity leads to a lack of commonly rated items, so neighbors cannot be selected to make predictions; (2) with too few neighbors and commonly rated items, the computed similarities are not accurate, so the quality of recommendation is difficult to guarantee; (3) many of the proposed recommendation algorithms cannot predict ratings or make recommendations for all items; (4) matrix factorization algorithms can predict ratings for all items, but their accuracy falls short of the desired effect when the data are sparse.
To overcome the above problems, a hybrid recommendation method that combines the Gaussian mixture model with an enhanced social matrix factorization algorithm is proposed in this paper. To reduce sparsity and predict unknown ratings more accurately and completely, we first fill in the rating matrix from the perspectives of users, items, and users’ interests, so that accurate recommendations can be made to users’ satisfaction. Specifically, CF and the Gaussian mixture model are used to fill in the user-item rating matrix, and then a matrix factorization technique is used to predict all remaining unknown ratings. Firstly, we make a preliminary prediction using the proposed improved similarity method to reduce data sparsity. Secondly, in order to classify users with the same interests into the same group, users’ ratings on items are converted into users’ preferences on items’ attributes. Based on the assumption that each user has multiple interests, the users are divided into different groups according to those interests by using the Gaussian mixture model. Then partial unknown ratings are predicted using the probability that each user belongs to each group. Finally, all remaining unrated items are predicted by the proposed enhanced social matrix factorization.
The remainder of this paper is organized as follows: Related studies are reviewed in Section 2. A hybrid recommendation model and framework are proposed in Section 3. In Section 4 several experiments are conducted and evaluated. In Section 5, we draw our conclusions.
2. Related Work
A CF-based RS recommends items to an active user based on the opinions of his/her like-minded neighbors [24]. According to their modeling methods and recommendation strategies, CF methods can be classified into two categories: neighbor-based and model-based methods. For neighbor-based CF, all the ratings provided by users are kept in memory and used for prediction; to calculate the similarity between users/items, all previously rated items are considered. For model-based CF, a model is trained on training data and then used to predict unknown ratings for real data.
2.1. Neighbor-Based Collaborative Filtering
The neighbor-based CF algorithm can be divided into two subcategories: user-based CF and item-based CF [25, 26]. The execution process of a neighbor-based CF recommendation algorithm can be divided into the following three steps [27, 28]: (a) calculate the similarity between an active user (or item) and other users (or items) from users’ ratings on items; (b) select the nearest neighbors for the active user (or item) according to the obtained similarities; (c) predict the active user’s ratings on candidate items according to the historical preference information of the nearest neighbors, so as to produce recommendation results. The rationales of user-based and item-based CF algorithms are shown in Figure 1.
2.1.1. User-Based CF Recommendation Algorithm
In general, the users and items are denoted as the vectors \(U = (u_1, u_2, \dots, u_N)\) and \(I = (i_1, i_2, \dots, i_M)\), respectively. Users’ ratings on items are usually expressed as a rating matrix, and a user’s ratings can be denoted as a rating vector \(r_u = (r_{u,1}, \dots, r_{u,M})\). For instance, a rating of 4 on an item \(i\) from a user \(u\) in Figure 1 is denoted as \(r_{u,i} = 4\). The similarity between two users is obtained by comparing their rating vectors. In RS, one of the most frequently used measures to calculate the similarity between users is cosine similarity.
Cosine similarity: the ratings on items from user u and user v are described as n-dimensional vectors, and the similarity between user u and user v is obtained by comparing the angle between the two rating vectors. The smaller the angle, the higher the similarity value. The cosine similarity measure is calculated as follows [24] (see (1)):

\[ sim(u,v) = \cos(r_u, r_v) = \frac{\sum_{i \in I_{uv}} r_{u,i}\, r_{v,i}}{\|r_u\|_2\, \|r_v\|_2} \tag{1} \]

where \(sim(u,v)\) represents the similarity between user u and user v, \(r_u\) and \(r_v\) represent the rating vectors of u and v, respectively, \(\|r_u\|_2\) and \(\|r_v\|_2\) represent the 2-norms of the two vectors, and \(r_{u,i}\) and \(r_{v,i}\) are the ratings on item i from u and v, respectively. \(I_u\) and \(I_v\) represent the sets of items rated by user u and user v, respectively, and \(I_{uv} = I_u \cap I_v\) indicates the set of common items rated by both u and v.
According to [2, 24, 28], the predicted rating on item i for the active user u using the mean-centered weighted average is described as follows [2] (see (2)):

\[ p_{u,i} = \bar{r}_u + \frac{\sum_{v \in N_u} sim(u,v)\,(r_{v,i} - \bar{r}_v)}{\sum_{v \in N_u} |sim(u,v)|} \tag{2} \]

where \(p_{u,i}\) represents the predicted rating on item i from user u, \(N_u\) is the set of users most similar to the active user u, \(\bar{r}_u\) and \(\bar{r}_v\) represent the average ratings over all rated items of users u and v, respectively, and \(sim(u,v)\) indicates the similarity between the active user u and neighbor user v according to (1).
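As a concrete illustration, the two formulas above can be sketched in a few lines of Python with NumPy. The toy rating matrix, the neighborhood size, and the function names below are our own illustrative choices, not part of the paper.

```python
import numpy as np

def cosine_sim(r_u, r_v):
    """Cosine similarity between two users' rating vectors (Eq. (1)).
    Unrated items are encoded as 0 and thus drop out of the dot product."""
    norm = np.linalg.norm(r_u) * np.linalg.norm(r_v)
    return float(r_u @ r_v / norm) if norm > 0 else 0.0

def predict_user_based(R, u, i, k=2):
    """Mean-centered weighted average over the k most similar users (Eq. (2))."""
    n_users = R.shape[0]
    sims = np.array([cosine_sim(R[u], R[v]) if v != u else -np.inf
                     for v in range(n_users)])
    neighbors = np.argsort(sims)[::-1][:k]
    mean_u = R[u][R[u] > 0].mean()
    num, den = 0.0, 0.0
    for v in neighbors:
        if R[v, i] > 0:                      # neighbor must have rated item i
            mean_v = R[v][R[v] > 0].mean()
            num += sims[v] * (R[v, i] - mean_v)
            den += abs(sims[v])
    return mean_u + num / den if den > 0 else mean_u

# Toy user-item matrix (0 = unrated)
R = np.array([[4., 0., 3., 5.],
              [5., 4., 0., 4.],
              [4., 5., 3., 0.]])
print(round(predict_user_based(R, 0, 1, k=2), 2))
```

Note that the prediction falls back to the active user's mean rating when no neighbor has rated the target item, which is one simple way to handle the sparsity problem discussed above.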
2.1.2. Item-Based CF Recommendation Algorithm
Similar to the user-based CF recommendation algorithm, the item-based CF recommendation algorithm predicts unknown ratings by using the active user’s existing ratings on neighbor items. The Pearson correlation coefficient (PCC) is one of the most frequently used measures to calculate the similarity between items, described as follows [14] (see (3)):

\[ sim(i,j) = \frac{\sum_{u \in U_{ij}} (r_{u,i} - \bar{r}_i)(r_{u,j} - \bar{r}_j)}{\sqrt{\sum_{u \in U_{ij}} (r_{u,i} - \bar{r}_i)^2}\,\sqrt{\sum_{u \in U_{ij}} (r_{u,j} - \bar{r}_j)^2}} \tag{3} \]

where \(sim(i,j)\) denotes the similarity between item i and item j, \(r_{u,i}\) and \(r_{u,j}\) represent user u’s ratings on items i and j, respectively, and \(\bar{r}_i\) and \(\bar{r}_j\) represent the average ratings on items i and j over \(U_{ij}\), respectively. \(U_{ij}\) denotes the set of users who rated both items i and j.
Ratings are predicted as follows [2] (see (4)):

\[ p_{u,i} = \frac{\sum_{j \in N_i} sim(i,j)\, r_{u,j}}{\sum_{j \in N_i} |sim(i,j)|} \tag{4} \]

where \(N_i\) is the set of items most similar to item i.
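The item-based variant, Eqs. (3)-(4), can be sketched analogously; again the toy matrix and parameter choices are illustrative assumptions only.

```python
import numpy as np

def pcc_item_sim(R, i, j):
    """Pearson correlation between items i and j over co-rating users (Eq. (3))."""
    mask = (R[:, i] > 0) & (R[:, j] > 0)      # users who rated both items
    if mask.sum() < 2:
        return 0.0
    ri, rj = R[mask, i], R[mask, j]
    ri_c, rj_c = ri - ri.mean(), rj - rj.mean()
    den = np.linalg.norm(ri_c) * np.linalg.norm(rj_c)
    return float(ri_c @ rj_c / den) if den > 0 else 0.0

def predict_item_based(R, u, i, k=2):
    """Weighted average of the active user's ratings on the k most similar items (Eq. (4))."""
    n_items = R.shape[1]
    sims = np.array([pcc_item_sim(R, i, j) if j != i and R[u, j] > 0 else -np.inf
                     for j in range(n_items)])
    neighbors = np.argsort(sims)[::-1][:k]
    num = sum(sims[j] * R[u, j] for j in neighbors if sims[j] > 0)
    den = sum(abs(sims[j]) for j in neighbors if sims[j] > 0)
    return num / den if den > 0 else 0.0

R = np.array([[4., 0., 3., 5.],
              [5., 4., 2., 4.],
              [4., 5., 3., 3.],
              [2., 3., 1., 2.]])
print(round(predict_item_based(R, 0, 1, k=2), 2))
```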
2.2. Model-Based CF Recommendation Technology
A recommendation algorithm should scale gracefully as data grow. The conventional neighbor-based CF methods consider all previously rated items to compute the similarity between users/items, which is time-consuming, and these methods fail to achieve good scalability [18]. In order to reduce computational complexity and improve recommendation efficiency without reducing accuracy, clustering-based and classification-based techniques were subsequently introduced into RS, producing a variety of model-based recommendation algorithms, such as clustering models [29–33], singular value decomposition (SVD) based models [15, 18], and probabilistic matrix factorization (PMF) based models [34–37]. Model-based methods first train a model on training data to find patterns and then make predictions for real data [38, 39].
2.2.1. Gaussian Mixture Model
Clustering analysis is an unsupervised classification approach for recognizing patterns, based on grouping similar objects together. In RS, similar users/items are classified into the same category by clustering techniques, and then only the ratings of neighbor users/items in the same category are used to predict unrated items, which greatly reduces the computational effort. Generally, clustering is divided into hard and soft clustering. K-means is an important and well-known hard clustering technique: each object belongs to exactly one category, and there is no uncertainty in an object’s category membership [29, 30]. In soft clustering, each object belongs to two or more categories with different degrees of membership, rather than fully belonging to one category [31]. The Gaussian mixture model (GMM) is a very well-known soft clustering technique. GMM is used in RS to more accurately discover a user’s multiple interests and degrees of preference on items, so as to better recommend items of interest to the user [8].
GMM assumes that the data obey a mixture of Gaussian distributions [40]. In other words, the data can be thought of as being generated from several Gaussian distributions. Each GMM consists of k Gaussian distributions; each distribution is called a “component”, and these components are linearly combined to form the probability density function of the GMM.
For a random vector x in an n-dimensional sample space, if x obeys a Gaussian distribution, the probability density function [34] is as follows (see (5)):

\[ p(x) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(x-\mu)^{\mathrm T}\Sigma^{-1}(x-\mu)\right) \tag{5} \]

where \(\mu\) is an n-dimensional mean vector and \(\Sigma\) is an \(n \times n\) covariance matrix. It can be seen that the Gaussian distribution is completely determined by the two parameters \(\mu\) and \(\Sigma\).
A Gaussian mixture distribution consists of k mixture components, each of which corresponds to a Gaussian distribution [37]. The mixture distribution is defined as follows (see (6)):

\[ p_{\mathcal M}(x) = \sum_{i=1}^{k} \alpha_i\, p(x \mid \mu_i, \Sigma_i) \tag{6} \]

where \(\mu_i\) and \(\Sigma_i\) are the parameters of the ith Gaussian component and \(\alpha_i\) is the corresponding mixing coefficient. Here, \(\alpha_i > 0\) and \(\sum_{i=1}^{k} \alpha_i = 1\).
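The densities (5) and (6) can be evaluated directly. The two-component mixture below is a minimal sketch; all parameter values are our own illustrative choices.

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Multivariate Gaussian density (Eq. (5))."""
    n = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    return float(norm * np.exp(-0.5 * diff @ inv @ diff))

def gmm_pdf(x, weights, mus, covs):
    """Mixture density: weighted sum of k Gaussian components (Eq. (6))."""
    return sum(w * gaussian_pdf(x, mu, cov)
               for w, mu, cov in zip(weights, mus, covs))

# Two-component mixture in 2-D; mixing coefficients sum to 1
weights = [0.4, 0.6]
mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
x = np.array([0.0, 0.0])
print(round(gmm_pdf(x, weights, mus, covs), 4))
```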
2.2.2. Matrix Factorization
In RS, the most frequently used matrix factorization methods are singular value decomposition (SVD) [2, 15], the latent factor model (LFM) [2, 39, 41], nonnegative matrix factorization (NMF) [23, 42–45], and trust-aware matrix factorization (TMF) [19, 20, 46–48], which are dimensionality reduction techniques applied to RS. Reductions in dimensionality effectively preserve the information content while drastically decreasing the computational complexity and memory requirements for making recommendations [15, 39].
Latent Factor Model. The essence of matrix factorization is decomposing a very sparse rating matrix into two matrices: one represents the characteristics of the users and the other the features of the items [2]. The inner product of a row of the first matrix and a column of the second yields the corresponding rating.
A user-item rating matrix denoted by \(R \in \mathbb{R}^{N \times M}\) can be decomposed into the product of two matrices \(U\) and \(V\), expressed as follows (see (7)):

\[ R \approx U V \tag{7} \]

where \(U \in \mathbb{R}^{N \times f}\) represents the relationships between N users and f topics and \(V \in \mathbb{R}^{f \times M}\) represents the relationships between f topics and M items. f indicates the dimension of the latent factors.
In RS, the recommendation algorithm based on matrix factorization is implemented in two steps: (1) the original matrix is decomposed into two low-rank matrices by using the known ratings in (7); this decomposition is the model-training process, and the model parameters are usually obtained by applying stochastic gradient descent to an established loss function; (2) the unknown ratings are predicted using the inner product of the obtained low-rank matrices U and V, i.e., \(\hat{R} = UV\).
Therefore, the key task of a matrix-factorization-based recommendation algorithm is to solve for \(U\) and \(V\), which can be transformed into a regression problem. A loss function L, representing the sum of squared errors between the original ratings and the predicted ratings, is defined as follows [1, 2, 21] (see (8)):

\[ L = \sum_{(i,j) \in K} \left(r_{ij} - \sum_{k=1}^{f} u_{ik} v_{kj}\right)^{2} \tag{8} \]

where K is the set of observed ratings, \(u_{ik}\) represents the kth feature of user i, and \(v_{kj}\) represents the kth feature of item j. The loss function after adding regularization terms is defined as follows (see (9)):

\[ L = \sum_{(i,j) \in K} \left(r_{ij} - \sum_{k=1}^{f} u_{ik} v_{kj}\right)^{2} + \lambda_u \|U\|^2 + \lambda_v \|V\|^2 \tag{9} \]
In general, the two regularization coefficients \(\lambda_u\) and \(\lambda_v\) are set to be the same for ease of calculation, i.e., \(\lambda_u = \lambda_v = \lambda\). Then the stochastic gradient descent method is used to solve (9), with gradients (see (10)-(11)):

\[ \frac{\partial L}{\partial u_{ik}} = -2\, e_{ij}\, v_{kj} + 2\lambda\, u_{ik} \tag{10} \]
\[ \frac{\partial L}{\partial v_{kj}} = -2\, e_{ij}\, u_{ik} + 2\lambda\, v_{kj} \tag{11} \]
The variables \(u_{ik}\) and \(v_{kj}\) are updated along the negative gradient as follows (see (12)-(13)):

\[ u_{ik} \leftarrow u_{ik} + \alpha\,(e_{ij}\, v_{kj} - \lambda\, u_{ik}) \tag{12} \]
\[ v_{kj} \leftarrow v_{kj} + \alpha\,(e_{ij}\, u_{ik} - \lambda\, v_{kj}) \tag{13} \]

where α denotes the learning rate, λ is the regularization parameter, and \(e_{ij} = r_{ij} - \sum_{k} u_{ik} v_{kj}\) represents the difference between the real rating and the predicted rating.
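The training loop of Eqs. (8)-(13) can be sketched as follows; the hyperparameter values, the toy matrix, and the function name are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

def factorize(R, f=2, alpha=0.01, lam=0.05, epochs=500, seed=0):
    """Learn U (N x f) and V (f x M) by SGD on the regularized loss (Eq. (9)).
    Only observed entries (R > 0) contribute to the updates (Eqs. (12)-(13))."""
    rng = np.random.default_rng(seed)
    N, M = R.shape
    U = 0.1 * rng.standard_normal((N, f))
    V = 0.1 * rng.standard_normal((f, M))
    obs = [(i, j) for i in range(N) for j in range(M) if R[i, j] > 0]
    for _ in range(epochs):
        for i, j in obs:
            e = R[i, j] - U[i] @ V[:, j]          # prediction error e_ij
            U[i] += alpha * (e * V[:, j] - lam * U[i])
            V[:, j] += alpha * (e * U[i] - lam * V[:, j])
    return U, V

R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 0., 4.]])
U, V = factorize(R)
R_hat = U @ V
# Reconstruction error on the observed entries should be small
mask = R > 0
rmse = np.sqrt(((R - R_hat)[mask] ** 2).mean())
print(round(rmse, 2))
```

After training, `R_hat` also contains predictions for the zero (unobserved) entries, which is exactly how step (2) above fills in the unknown ratings.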
Singular Value Decomposition. SVD is a powerful computational tool in latent semantic analysis, which reduces the number of relevant variables and finds a good approximation of R [15]. The purpose of SVD in RS is to compress very high-dimensional data into a low-dimensional space while preserving the major information content. A matrix R with rank r can be decomposed as follows [2, 15] (see (14)):

\[ R = P\, \Sigma\, Q^{\mathrm T}, \qquad P^{\mathrm T} P = E, \quad Q^{\mathrm T} Q = E \tag{14} \]

where E represents an identity matrix. The rows of P and Q indicate users’ features and items’ features, respectively, and the columns of P and Q are the left-singular and right-singular vectors, which correspond to the eigenvectors of \(R R^{\mathrm T}\) and \(R^{\mathrm T} R\). \(\Sigma\) is a diagonal matrix, which represents the degree of association between P and Q. The diagonal values, namely, the singular values, are arranged from large to small, each of which is the square root of an eigenvalue of \(R R^{\mathrm T}\) (or \(R^{\mathrm T} R\)). Moreover, the eigenvectors of \(R R^{\mathrm T}\) and \(R^{\mathrm T} R\), i.e., the columns of P and Q, are arranged according to the eigenvalues.
To benefit from dimensionality reduction, we keep only the f largest singular values in \(\Sigma\) and replace the others by zero. Therefore, R is approximated by \(R_f\) as follows (see (15)):

\[ R \approx R_f = P_f\, \Sigma_f\, Q_f^{\mathrm T} \tag{15} \]

where \(\Sigma_f\) is the f-rank approximation of Σ. The matrices \(P_f\) and \(Q_f\), respectively, contain the f largest eigenvectors of \(R R^{\mathrm T}\) and \(R^{\mathrm T} R\), whereas the diagonal matrix \(\Sigma_f\) contains the nonnegative square roots of the f largest eigenvalues of either matrix along its diagonal [20, 46]. To find the positions of users and items, we can map the raw data into the f-dimensional space as follows [17] (see (16)-(17)):

\[ X_U = P_f\, \Sigma_f^{1/2} \tag{16} \]
\[ X_I = Q_f\, \Sigma_f^{1/2} \tag{17} \]

where \(X_U\) and \(X_I\) are the new positions of users and items in the f-dimensional space, respectively.
For instance, the matrix in Figure 1(a) is denoted as A, which can be decomposed into P, Q, and Σ. An approximation of A can be obtained by taking the first two singular values and the corresponding singular vectors, i.e., \(P_2\), \(\Sigma_2\), and \(Q_2\). The users and items are projected into the 2-dimensional space and plotted in Figure 2.
When a new user arrives with a rating vector \(r_{new}\), the following fold-in calculation is performed to find the position of the new user in the 2-dimensional space:

\[ x_{new} = r_{new}\, Q_2\, \Sigma_2^{-1/2} \]
As can be seen in Figure 2, one of the existing users lies close to the new user in the reduced space. That user is therefore considered the nearest neighbor of the new user, and similarities can be calculated on the approximate low-dimensional data to make a recommendation.
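The SVD reduction, the rank-2 approximation, and the fold-in of a new user can be sketched as below. The 5x4 matrix and the new user's ratings are made-up values (the actual matrix of Figure 1(a) is not reproduced here), and the fold-in form follows the standard SVD-based CF construction under our stated assumptions.

```python
import numpy as np

# Illustrative rating matrix (rows: users, columns: items)
A = np.array([[5., 3., 4., 4.],
              [3., 1., 2., 3.],
              [4., 3., 4., 3.],
              [3., 3., 1., 5.],
              [1., 5., 5., 2.]])

# Full SVD: A = P @ diag(s) @ Qt (Eq. (14)); singular values come sorted
P, s, Qt = np.linalg.svd(A, full_matrices=False)

# Rank-2 approximation A2 = P2 @ S2 @ Q2t (Eq. (15))
f = 2
P2, S2, Q2t = P[:, :f], np.diag(s[:f]), Qt[:f, :]
A2 = P2 @ S2 @ Q2t

# Existing users' 2-D positions: X_U = P2 @ S2^{1/2} (Eq. (16))
user_pos = P2 @ np.diag(np.sqrt(s[:f]))

# Fold a new user's ratings into the same space: x_new = r @ Q2 @ S2^{-1/2}
r_new = np.array([4., 3., 4., 4.])
x_new = r_new @ Q2t.T @ np.diag(1.0 / np.sqrt(s[:f]))

# Nearest existing user in the reduced space
nearest = int(np.argmin(np.linalg.norm(user_pos - x_new, axis=1)))
print(nearest)
```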
Classic matrix factorization methods [19, 41] can provide very accurate predictions and have the advantages of high computational efficiency and extensibility. However, classic matrix factorization algorithms make unreliable predictions when encountering new users/items and extremely sparse data. To solve these problems, social relationships between users have been introduced into RS, and trust-based CF (TCF) algorithms [23, 37, 45, 49, 50] have arisen in recent years. TCF algorithms integrate trust relationships between users into recommendation algorithms to improve the quality of the recommendation [11, 13, 18, 24, 29, 30]. These methods are further divided into neighbor-based TCF (NTCF) and model-based TCF (MTCF). NTCF is not scalable, so its computational cost is extremely high when faced with large amounts of data; therefore, NTCF is not suitable when dealing with a great number of users and items. MTCF incorporates social network information into the matrix factorization model and optimizes the parameters by using the user-item ratings and the individual trust among users; thus it is scalable and also makes accurate recommendations [15, 18, 37, 41, 49, 50]. Therefore, MTCF has been drawing considerable attention recently [19, 51], and many current recommendation methods are based on its idea.
3. The Proposed Hybrid Recommendation Method and Framework
In this section, an overview of the proposed hybrid recommendation method is given in Section 3.1. Then, an improved CF recommendation method is described in Section 3.2, and predicting the initial ratings using the EM algorithm is discussed in Section 3.3. Next, the calculation of a user’s social status in trust networks is discussed and an enhanced social matrix factorization model is proposed in Section 3.4. Finally, the computational complexity of the hybrid method is analyzed in Section 3.5.
3.1. Overview
In the case of sparse data, in order to improve the effect of user clustering and the accuracy of predicted ratings, we present a hybrid recommendation method based on the Gaussian mixture model and an enhanced social matrix factorization algorithm (GMM-ESMF), as shown in Figure 3.
The prediction process is described as follows: (1) The similarities of users and items are calculated using the improved user-based and item-based measures. Here, the user similarity calculation takes into account factors such as the trust relationships between users, the number of users’ common interactions with the same items, and the users’ rating times for the items, to reduce the influence of data sparsity on the performance of the CF algorithm. Then partial unrated items are predicted using the user-based and item-based CF methods. (2) The users’ preferences on items are converted into preferences on items’ attributes. Then the users are clustered using the Gaussian mixture model, so that users in the same cluster have the same interests. This speeds up the search for nearest neighbors and increases the speed of recommendation generation. (3) We then predict some unrated items using the expectation maximization (EM) algorithm to calculate the probability that each user belongs to each cluster. (4) After filling the user-item rating matrix in the first two steps, we obtain a matrix that is less sparse and more reliable. The trust relationships between users and trust propagation are introduced into the matrix factorization model, and an enhanced social matrix factorization technique is proposed to learn more accurate user and item characteristics in each cluster, so that the prediction of unrated items is completed; meanwhile, the prediction accuracy can be greatly improved. The user-item rating matrix of each cluster is usually denser than the original matrix when the number of users in each cluster is relatively small. (5) According to the prediction results, the ratings are sorted in descending order, and the candidate items are recommended to the active user.
Through the above stepbystep filling method, partial unrated items can be filled according to the userbased and itembased collaborative filtering algorithms using their neighbor relationships. If the unknown ratings are predicted by matrix decomposition techniques directly, the predicted ratings would be inaccurate due to too few rated items for reference. To obtain reliable ratings, we estimate the reliability of filling ratings by using the reliability measure and then predict all unknown ratings using the matrix factorization algorithm.
3.2. Fill the Rating Matrix by Improved Collaborative Filtering Recommendation Algorithm
The CF algorithm collects users’ preferences from the user-item rating matrix and produces recommendations based only on the opinions of users whose interests are similar to the current active user’s. However, the rating matrix is usually very sparse; when there are few neighbor users highly related to the active user, the quality of the predicted ratings is seriously affected, resulting in poor recommendations [28]. Nevertheless, CF is one of the most successful and effective recommendation algorithms. Therefore, we use the CF algorithm to predict some unrated items to reduce the sparsity of the rating matrix.
Cosine similarity and PCC are common methods of calculating the similarity between two users and between two items, respectively, and they directly affect prediction accuracy. When two rating vectors given by two users are proportional, such as (4, 4, 4) and (5, 5, 5), or (1, 2, 1) and (2.5, 5, 2.5), as shown in Table 1, the similarities calculated by cosine and PCC will both be 1.0, although the rating pairs are quite different.
In order to avoid the above problem and better measure the similarity between users or items, inspired by [3, 4, 14, 18, 49, 52–54], the current similarity calculation method is improved as follows.
Definition 1 (Mean User Reference Center (MURC)). The mean user reference center, i.e., the average rating on items by users, denotes the average rating over all items from all users (see (18)):

\[ MURC = \frac{\sum_{u \in C} \sum_{i \in O} r_{u,i}}{Card(C) \cdot Card(O)} \tag{18} \]

where MURC represents the center point of user preference and C and O represent the sets of all users and all items, respectively. Card(C) and Card(O) represent the cardinalities of C and O, respectively.
Definition 2 (Average Similarity Standard Deviation (ASSD)). The average similarity standard deviation refers to the average distance of the users’ mean ratings from the mean user reference center. It reflects the distribution of the average ratings on all items from each user (see (19)):

\[ ASSD = \sqrt{\frac{1}{Card(C)} \sum_{u \in C} \left(\bar{r}_u - MURC\right)^{2}} \tag{19} \]

where \(\bar{r}_u\) denotes the average rating on all items from user u.
Definition 3 (Basic Similarity Region (BSR)). The basic similarity region is the user preference similarity distribution range whose center is the mean user reference center and whose radius is the average similarity standard deviation (see (20)):

\[ BSR = [MURC - ASSD,\; MURC + ASSD] \tag{20} \]

If \(\bar{r}_u \in BSR\), the corresponding similarity is retained; otherwise, it is treated as noise. \(\bar{r}_u\) is calculated in the coordinates of the dimensionality reduction by (16).
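The three definitions above can be sketched on a toy matrix as follows; the matrix values are our own, and the noise filter shown flags whole users whose mean rating falls outside the BSR, which is one reading of Definition 3.

```python
import numpy as np

# Toy rating matrix; 0 marks an unrated item
R = np.array([[4., 0., 3., 5.],
              [5., 4., 0., 4.],
              [2., 1., 3., 0.],
              [4., 5., 3., 4.]])

rated = R > 0
# MURC: average of all observed ratings (Definition 1)
murc = R[rated].mean()
# Per-user average ratings over their rated items
user_means = np.array([R[u][rated[u]].mean() for u in range(R.shape[0])])
# ASSD: average deviation of user means from MURC (Definition 2)
assd = np.sqrt(((user_means - murc) ** 2).mean())
# BSR: [MURC - ASSD, MURC + ASSD]; means outside it are treated as noise (Definition 3)
in_bsr = (user_means >= murc - assd) & (user_means <= murc + assd)
print(round(murc, 2), round(assd, 2), in_bsr.tolist())
```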
Inspired by [4, 14, 26, 27, 53], we incorporate salton factors and trust relationships into the cosine similarity calculation between users. The improved similarity calculation method is as follows (see (21)):

\[ sim_R(u,v) = f_1(u,v) \cdot f_2(u,v) \cdot sim(u,v) \tag{21} \]
This determines whether the users’ preference behaviors need to be revised. Here \(sim_R(u,v)\) represents the similarity between u and v after filtering out noisy data: when the similarity between u and v is not in the BSR, it is considered noisy. The cosine similarity in (1) only considers the angle between two vectors but not their lengths, so it is revised according to (21).
Here, \(f_1\) and \(f_2\) represent the salton and time attenuation factors, respectively. Among them, \(f_1 = |I_{uv}| / (\sqrt{|I_u| \cdot |I_v|} + \varepsilon)\), where \(I_{uv}\) denotes the set of common items rated by both u and v and \(I_u\) and \(I_v\) denote the sets of items rated by the two users. \(\varepsilon\) is a constant parameter, which avoids the case where the denominator of the formula is zero. A difference between u’s and v’s rating times for the same item means that their interest changes are not synchronized. Therefore, it is necessary to introduce the time attenuation factor to weight the similarity between u and v; i.e., the similarity should be reduced when the rating times of the two users are far apart: \(f_2 = e^{-\omega |t_{u,i} - t_{v,i}|}\), where ω denotes the time attenuation parameter and \(t_{u,i}\) and \(t_{v,i}\) denote the times at which item i was rated by users u and v, respectively.
The improved similarity between user u and user v is described as follows (see (22)):

\[ sim^{*}(u,v) = \beta \cdot sim_R(u,v) + (1-\beta) \cdot T(u,v) \tag{22} \]

where β is a weighting parameter and \(T(u,v)\) denotes the trust between users u and v, which can be calculated as follows [11, 30] (see (23)):

\[ T(u,v) = \frac{d_{max} - d(u,v) + 1}{d_{max}} \tag{23} \]

where \(d_{max}\) denotes the maximum allowable propagation distance among users and \(d(u,v)\) denotes the propagation distance of the trust statement between users u and v. In this section, the dimensions of the data are first reduced by (16) and (17) before calculating the similarity. To ensure the accuracy of the predictions, a reliability measure from [7, 24, 55] is employed to evaluate the quality of the predicted ratings by providing feedback on them. The reliability measure is calculated as follows [7, 24] (see (24)):

\[ Re_{u,i} = \left(1 - \frac{1}{1 + N_{u,i}}\right)\left(1 - \frac{V_{u,i}}{V_{max}}\right) \tag{24} \]

where \(Re_{u,i}\) is the reliability of a prediction \(p_{u,i}\), \(N_{u,i}\) is the number of neighbors of u who have rated item i, and \(V_{u,i}\) is the variance of the ratings made by the neighbors of u on item i, with \(V_{max}\) its maximum possible value. The value range of \(Re_{u,i}\) is [0, 1]: the larger the value, the higher the accuracy of the prediction, and vice versa. If the reliability of a prediction is less than a threshold value (as set in this paper), the predicted rating is set to 0.
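A hedged sketch of the salton- and time-weighted similarity of (21) is given below. The exact functional forms of the two factors (square-root overlap normalization, exponential decay over the mean time gap) and all data values are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def improved_sim(R, T_times, u, v, omega=0.001, eps=1e-6):
    """Illustrative version of Eq. (21): cosine similarity weighted by a
    salton factor (co-rated-item overlap) and a time-attenuation factor."""
    Iu, Iv = set(np.flatnonzero(R[u])), set(np.flatnonzero(R[v]))
    common = Iu & Iv
    if not common:
        return 0.0
    # Salton factor: overlap normalized by the sizes of both rating sets
    f_salton = len(common) / (np.sqrt(len(Iu) * len(Iv)) + eps)
    # Time attenuation: decays as the users' rating times drift apart
    gaps = [abs(T_times[u, i] - T_times[v, i]) for i in common]
    f_time = np.exp(-omega * np.mean(gaps))
    # Base cosine similarity over the full rating vectors
    cos = R[u] @ R[v] / (np.linalg.norm(R[u]) * np.linalg.norm(R[v]))
    return float(f_salton * f_time * cos)

R = np.array([[4., 0., 3., 5.],
              [5., 4., 0., 4.],
              [4., 5., 3., 0.]])
# Hypothetical rating timestamps aligned with R
T_times = np.array([[10., 0., 30., 40.],
                    [12., 20., 0., 500.],
                    [11., 25., 32., 0.]])
print(round(improved_sim(R, T_times, 0, 1), 3))
```

Since both factors lie in (0, 1], the improved similarity is always bounded above by the plain cosine similarity, matching the intent of penalizing small overlaps and desynchronized rating times.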
3.3. Predict Ratings Using EM Algorithm
3.3.1. Convert Ratings on Items from Users into Preferences on Items’ Attributes
In the case of a very sparse user-item rating matrix, in order to form groups of users with the same interests, users’ ratings on items are converted into users’ preferences on items’ attributes; the attributes are then clustered with the Gaussian mixture model to obtain the users’ interest groups, and finally unknown ratings are predicted using the neighbor-based method.
Definition 4. A user’s preferences on an item’s attributes reflect the user’s interests on the item. The relation matrix between the users and the attributes of the items can be expressed as the product between the useritem rating matrix and the item’ attribute matrix.where , , and denote the useritem’s attribute matrix, the useritem rating matrix, and the item’s attribute matrix, respectively. · denotes the product of the two matrices, but it is not simply a multiplication of two matrices.
If a user likes an item, it is because the user is interested in the attributes of the item. Under this assumption, we can obtain users’ preferences on items’ attributes from the ratings in the user-item rating matrix and the item-attribute information in the item-attribute matrix. Specifically, the preference of user u on attribute a of item i can be described as follows (see (26)):

\[ p_{u,a} = r_{u,i} \cdot t_{i,a} \tag{26} \]

where \(p_{u,a}\) denotes the preference of user u on attribute a, \(r_{u,i}\) denotes the rating of the user on item i, and \(t_{i,a}\) denotes attribute a of item i. Here, \(t_{i,a}\) is defined as follows (see (27)):

\[ t_{i,a} = \begin{cases} 1, & \text{if item } i \text{ has attribute } a \\ 0, & \text{otherwise} \end{cases} \tag{27} \]
The transition from user-item ratings to user-item-attribute preferences is shown in Figure 4. For instance, if a user’s rating on an item is 3 and the item has three attributes, the user’s preference values on those three attributes are all 3.
In general, the more ratings a user gives on items carrying a certain attribute, the more reliable his/her evaluation of that attribute is; conversely, the lower the preference value is, the less reliable the user’s attribute rating is, and such ratings can be removed. Considering the reliability of the user’s preferences on the items’ attributes, the preference values are averaged over the rated items as follows (see (28)):

\[ tp_{u,a} = \frac{\sum_{k \in I_u} p^{k}_{u,a}}{\left|\{k \in I_u : t_{k,a} = 1\}\right|} \tag{28} \]

where \(tp_{u,a}\) denotes the reliable preference of user u on attribute a, \(I_u\) denotes the set of items rated by user u, and \(p^{k}_{u,a}\) denotes the preference value on attribute a derived from user u’s rating on the kth item. When the preference values on items’ attributes are obtained, the very sparse matrix is mapped into a low-dimensional space; thus the sparsity is also alleviated.
For instance, according to a user’s rating on one item and the attributes of that item, the preference values on the three attributes possessed by that item can be obtained using (26), which are 3, 3, and 3, respectively, while the preference values on the remaining attributes are all zero. Likewise, from the user’s ratings on the other rated items, further attribute preference values are obtained. The weighted attribute preference values for the user are then computed according to (28); unreliable attribute preference values are removed, and the reliable attribute preferences are obtained as shown in Figure 4.
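The conversion of Eqs. (26)-(28) can be sketched as follows; the rating and attribute matrices are made-up toy data, and the per-attribute averaging shown reflects our reading of (28).

```python
import numpy as np

# Ratings (users x items); 0 = unrated
Ratings = np.array([[3., 0., 4.],
                    [0., 5., 2.]])
# Binary item-attribute matrix (items x attributes), Eq. (27)
Attrs = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])

def attribute_preferences(R, A):
    """Average each user's ratings over the rated items carrying each
    attribute (Eqs. (26) and (28)); this is not a plain matrix product,
    since each sum is divided by the count of contributing items."""
    rated = (R > 0).astype(float)
    counts = rated @ A                   # rated items per (user, attribute)
    sums = R @ A                         # summed ratings per (user, attribute)
    with np.errstate(invalid="ignore", divide="ignore"):
        prefs = np.where(counts > 0, sums / np.where(counts > 0, counts, 1), 0.0)
    return prefs

P = attribute_preferences(Ratings, Attrs)
print(P)
```

For the first user, who rated items 0 and 2, attribute 0 appears in both rated items, so the preference is the average (3 + 4) / 2 = 3.5, while attributes 1 and 2 each come from a single rated item.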
3.3.2. Predict the Partial Ratings Using Gaussian Mixture Model
According to [8, 10, 56], whether a user is interested in an item depends on the nature of the item. For example, whether a user likes a movie may depend on three factors: whether it is an entertainment or a literary film, a foreign-language or a Chinese-language film, and whether the actors are famous. This article uses the Gaussian mixture model proposed by Hofmann as the basis of clustering and supposes that the conditional probability of the rating on an item’s attribute a within group z obeys a Gaussian distribution [2, 40] (see (29)):

\[ p(u,a) = \sum_{z=1}^{k} p(z \mid u)\, p(a \mid z) \tag{29} \]

where \(p(u,a)\) denotes the joint probability of user u and item attribute a, and z is a latent group that is not observed. \(p(z \mid u)\) denotes the probability that user u belongs to group z; over the k groups, \(\sum_{z=1}^{k} p(z \mid u) = 1\). \(p(a \mid z) = \mathcal{N}(x_{u,a};\, \mu_{z,a}, \sigma^{2}_{z,a})\) indicates that the ratings of the users in group z obey a Gaussian distribution; i.e., it represents the conditional probability of the ratings on attribute a from group z.
According to the Bayesian formula, the joint probability density can be obtained as follows (see (30)), where p(u) and p(a) are both constants. Therefore, the joint probability density function can be written in the following form (see (31)), where θ represents the general name of all parameters. The log-likelihood function is described as follows (see (32)):
The parameters θ are then estimated by maximizing this function. To obtain the values of each parameter (k, , , ), we use the EM algorithm, alternately executing the E-step and the M-step, as follows [10, 56] (see (33)-(36)).
E-Step:
M-Step:
The E-step and M-step are executed alternately until the change in the log-likelihood is less than a given value ε; the iteration then ends, and the model parameters are obtained.
Finally, the predicted rating on attribute a from user u is computed as follows [10] (see (37)), where denotes the group-attribute average value. In a similar way, we can predict the ratings on unrated items using the mixture model.
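As a rough illustration of this clustering step, the E/M updates can be sketched as a minimal diagonal-covariance Gaussian mixture fitted to hypothetical user-attribute preference data; the number of groups, the data, and the initialization below are all illustrative, not the paper's settings.

```python
import numpy as np

def fit_gmm(X, k, n_iter=50, seed=0):
    """Minimal EM for a diagonal-covariance Gaussian mixture: the E-step
    computes responsibilities p(z|u); the M-step re-estimates the mixing
    weights, group-attribute means, and variances from them."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                        # mixing weights p(z)
    mu = X[rng.choice(n, k, replace=False)].copy()  # group-attribute means
    var = np.full((k, d), X.var(axis=0) + 1e-6)     # per-attribute variances
    for _ in range(n_iter):
        # E-step: log p(x_u, z) for every user/group pair, then normalize.
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form parameter updates from the responsibilities.
        nk = resp.sum(axis=0) + 1e-10
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, resp

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([4.5, 1.0, 2.0], 0.3, (15, 3)),
               rng.normal([1.5, 4.0, 3.5], 0.3, (15, 3))])
pi, mu, resp = fit_gmm(X, k=2)
r_hat = resp @ mu      # eq.-(37)-style prediction: sum_z p(z|u) * mu[z, a]
```

The last line mirrors the prediction rule: each unseen preference is the group-attribute mean weighted by the user's group membership probabilities.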
Next, an example is used to illustrate how to predict user ratings using a Gaussian mixture model as Figure 5. A useritem rating matrix contains 80 ratings on a scale from 1 to 5 of 6 movies by 20 users, and each movie has several different genres, e.g., comedy, crime, action, and romance. The ratings on movies from users are converted to users’ preferences on movies’ attributes according to (28). Then three attributes are selected as features to train the data; suppose that each user has multiple different interests and three groups are divided on the basis of Gaussian mixture model. In Figure 5, different colors represent different groups, respectively. If we will predict the rating , i.e., rating on item from user in the useritem rating matrix. According to (33), we can obtain the probabilities that user belongs to group one, group two, and group three and is denoted as , , and . The probabilities are 1.169e72, 0.133, and 0.867. The first probability is close to zero, and it is ignored for ease of calculation. Therefore, .
3.3.3. Reliability of Ratings Prediction
Until now, the initial unknown ratings have been predicted by the Gaussian mixture model. However, the ratings predicted by the above method are sometimes not very accurate, especially when an item is rarely rated by users. Such predicted rating data is unreliable, and we therefore remove the unreliable data from the user-item rating matrix.
Definition 5 (reliability of rating prediction (RRP)). In each component of the Gaussian mixture model, when the proportion of users in the current component who have rated an item is less than ( in this paper), the predicted ratings on that item are unreliable. The definition is as follows (see (38)):
For , we set these ratings to zero; in other words, they are treated as unknown ratings, so that they can be predicted by the matrix factorization method in Section 3.4.
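The RRP filter of Definition 5 can be sketched as follows. Since the paper's threshold symbol and value are not reproduced here, the `delta = 0.5` below is purely illustrative, and the data is hypothetical.

```python
import numpy as np

def filter_unreliable(R_pred, R_obs, membership, delta=0.5):
    """RRP rule: within each mixture component, if the fraction of members
    that actually rated item i is below `delta`, zero out the component's
    predicted ratings for item i so they become unknown again."""
    R_pred = R_pred.copy()
    for z in np.unique(membership):
        members = membership == z
        support = (R_obs[members] > 0).mean(axis=0)   # rated fraction per item
        bad = np.where(support < delta)[0]            # unreliable items
        R_pred[np.ix_(np.where(members)[0], bad)] = 0.0
    return R_pred

R_obs = np.array([[3, 4, 0], [5, 0, 0], [4, 4, 0],
                  [0, 2, 5], [1, 0, 4], [0, 3, 5]], dtype=float)
membership = np.array([0, 0, 0, 1, 1, 1])             # two components
R_pred = filter_unreliable(np.full((6, 3), 3.0), R_obs, membership)
```

Here the third item is never rated inside the first component, so its predictions are reset to zero and left for the matrix factorization stage.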
The EM algorithm of initial ratings prediction is described in Algorithm 1.

3.4. Complete Matrix with Matrix Factorization Algorithm
In Section 3.3.2, the Gaussian mixture model is used to predict unknown ratings from the user's perspective. However, when some users have few or no ratings on items, it is difficult to predict unknown ratings from nearest neighbors, and the obtained ratings are inaccurate. In this section, inspired by [13, 18, 19, 21, 22, 46, 51], an enhanced social matrix factorization model fusing the user's social status and the item's similarity, called ESMF, is proposed to predict all unrated items. The graphical model is shown in Figure 6. The enhanced social matrix factorization model uses the rated items and the trust relationships between users to train the model parameters and predicts unknown ratings using the trained parameters. It avoids the inaccurate predictions that arise when some users have few ratings and some items are rated by only a few users.
3.4.1. Motivation
To precisely capture the user and item latent features, a novel enhanced social matrix factorization method is proposed. The method uses individual trust with social status and the item's social relationship to optimize the solution in both the user latent feature space and the user-item rating space, which is our main contribution in this section. Firstly, to accurately reflect the impact of users with different social status on user decisions, a regularization term is added in (9) to minimize the differences among the latent features of trusted users. In addition, inspired by the user's social relationship, an item social relationship matrix S is constructed and used to improve the item latent feature matrix V. This is because relationships also exist between items: in real life, when people buy products, they often consider similar or substitute products. Based on this consideration, the item's social relationship is introduced into the matrix factorization model. Therefore, the proposed approach accurately and realistically models real-world recommendation processes.
3.4.2. Calculate the Social Status of Users in Trust Networks
The user's social status is an important concept, which reflects the importance of a user in social networks and the degree of an individual's attachment to other individuals. Social status theory is used to explain how a user's social level influences the establishment of trust relationships between users. Usually, high-level users in a social network tend to be authority users, and low-level users are more likely to establish trust relationships with higher-level users. Note, however, that status is domain specific: in a social network, user v may be an authoritative scholar in historical research but a beginner in computer technology. Therefore, user u will accept the suggestions of user v when selecting historical books but not when choosing computer books.
In social networks, users with higher social status usually provide valuable information to users with lower social status, and therefore have many in-degrees. Users with lower social status, in turn, usually follow the suggestions of users with higher social status, and thus have more out-degrees [56]. In this study, we employ the PageRank algorithm to calculate the social status of each user in the social network as follows [17, 56, 57] (see (39)), where denotes the PageRank value of user u and T(u) denotes the set of users whom user u trusts. N represents the number of users, and η is the probability of jumping out of the current trust network.
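The PageRank computation in (39) can be sketched as a short power iteration over a hypothetical trust matrix; the uniform-jump treatment of users with no outgoing trust is an assumption of this sketch, not something stated in the text.

```python
import numpy as np

def social_status(trust, eta=0.15, n_iter=100):
    """Power-iteration PageRank over a directed trust graph, where
    trust[u][v] == 1 means u trusts v. `eta` is the probability of
    jumping out of the trust network."""
    T = np.asarray(trust, dtype=float)
    n = T.shape[0]
    out_deg = T.sum(axis=1)
    # Row-stochastic transition matrix; dangling users jump uniformly.
    W = np.where(out_deg[:, None] > 0,
                 T / np.maximum(out_deg, 1.0)[:, None],
                 1.0 / n)
    pr = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        pr = eta / n + (1 - eta) * (pr @ W)   # teleport + trust-following
    return pr

# Toy network: user 0 trusts 1 and 2; user 1 trusts 2; user 2 trusts no one.
pr = social_status([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
```

In this toy network, user 2 receives trust from everyone and therefore ends up with the highest social status.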
Definition 6 (trust relationship with social status between users u and v). The higher a person's social status in a field is, the greater his influence is, and the more likely others are to accept his suggestions. In addition, the number of commonly rated items is considered as an interaction factor between two users: the more items two users have rated in common, the more similar their ratings are, and the closer their interests are. A trust network can be constructed based on the combination of trust statements and similarities between users. Therefore, the trust statement with social status and common interests between users is defined as follows (see (40)), where and denote the sets of items rated by users u and v, respectively, denotes the set of items rated in common by users u and v, and denotes the similarity between users u and v calculated using (21).
The higher the status of user v in the social network is, the higher the credibility of user v is. For instance, if the values of , , and are 0.8, 1, and 0.6, respectively, user u is likely to accept the recommendation of v rather than w. Conversely, the possibility that user v accepts a suggestion of u is lower than the possibility that user u accepts a suggestion of v. Note that and are normalized values of social status.
3.4.3. The Proposed Enhanced Recommendation Model Based on Social Networks
Similar to other matrix factorization methods [18, 21], zero-mean spherical Gaussian priors are placed on the user and item feature vectors as follows (see (41)-(42)):
Hence, through Bayesian inference, the posterior probability of the latent feature vectors U and V can be obtained as follows (see (43)), where is the indicator function that is equal to 1 if u has rated i and 0 otherwise. For the user latent features, there are two influence factors: the zero-mean Gaussian prior, which avoids overfitting, and the conditional distribution of a user's latent features given the latent features of his trusted neighbors (see (44)).
Similar to (43), through Bayesian inference, the posterior probability of the latent feature vectors given the rating and social trust matrices can be obtained as follows (see (45)):
Similarly, the item's similarity is introduced into the matrix factorization model. The idea is to obtain a high-quality f-dimensional item feature matrix V by analyzing the item similarity matrix S. Let and be the latent item and auxiliary feature matrices. The conditional distribution over the observed item social relationships is described as follows (see (46)), where denotes the similarity between items i and j. According to Figure 6, a zero-mean spherical Gaussian prior is placed on the auxiliary feature vectors as follows (see (47)):
Hence, through Bayesian inference, the posterior probability of the latent feature vectors given the item similarity matrix can be obtained as follows (see (48)):
Based on Figure 6, using Bayesian inference, the posterior probability of the latent feature vectors of ESMF given the rating, item similarity, and social trust matrices is described as follows (see (49)):
Taking the log of the above posterior probability and keeping the parameters (observation noise variance and prior variance) fixed, maximizing the log-posterior over the latent features of users and items is equivalent to minimizing the following objective function (see (50)):
Among them, , , , and . represents the degree of user u's trust in v, whose formula is shown in (40). N and M are the numbers of users and items, respectively. g(x) is the logistic function g(x) = 1/(1 + e^{-x}), which bounds the range of within . The value of is also normalized to the range of using the function f(x) = (x - 1)/(max - 1), where max is the maximum rating in the user-item rating matrix. and are the latent features of users u and v, respectively, and , , and denote the Frobenius norms of matrices U, V, and Z, respectively.
Then the stochastic gradient descent method is used to optimize the aforementioned objective function as follows (see (51)-(53)), where is the derivative of the logistic function.
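To make the optimization concrete, here is a heavily simplified stochastic-gradient sketch. It keeps only the squared rating error, Frobenius regularization, and a trust regularizer pulling a user's features toward those of trusted users; the sigmoid link, social-status weights, and item-similarity term of the full ESMF model in (50)-(53) are omitted, and all hyperparameters are illustrative.

```python
import numpy as np

def sgd_step(U, V, u, i, r, trusted, lam=0.01, alpha=0.01, lr=0.01):
    """One SGD update for a simplified social MF objective:
    (r - U_u . V_i)^2 + lam * ||U||^2, ||V||^2 + alpha * trust term."""
    err = r - U[u] @ V[i]                               # rating residual
    pull = U[u] - U[trusted].mean(axis=0) if len(trusted) else 0.0
    gU = -err * V[i] + lam * U[u] + alpha * pull        # gradient wrt U_u
    gV = -err * U[u] + lam * V[i]                       # gradient wrt V_i
    U[u] -= lr * gU
    V[i] -= lr * gV
    return err

rng = np.random.default_rng(0)
U = rng.normal(0, 0.1, (5, 4))     # user latent features
V = rng.normal(0, 0.1, (6, 4))     # item latent features
for _ in range(5000):              # repeatedly fit one normalized rating
    sgd_step(U, V, u=0, i=0, r=0.8, trusted=[1, 2])
```

After enough steps the predicted rating U[0] . V[0] approaches the (normalized) target, shrunk slightly by the regularizers, which is the qualitative behavior of the full update rules.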
The proposed hybrid recommendation algorithm is summarized in Algorithm 2.

3.5. Complexity Analysis
The amount of data available in many practical applications of RS can be enormous, and the scalability of recommendation algorithms is a crucial factor for successful system deployment. Considering execution efficiency, one has to distinguish between the offline and online computational complexity of an algorithm. In this paper, filling the rating matrix using the improved CF recommendation algorithm, the calculation of the E-step and M-step, and the model training of matrix factorization are all executed in the offline stage, while the rating predictions of the EM and matrix factorization algorithms are executed in the online stage.
The original user-item rating matrix is very sparse, and the CF algorithm is used offline to calculate the user and item similarities and fill the user-item rating matrix. The computational complexity is , where is the average number of ratings per user, which is a very small value. Analyzing the offline complexity of the EM algorithm requires calculating the E-step and M-step separately. For a single E-step, the computational complexity is . In the M-step, the posterior probabilities for each rating are accumulated to form the new estimates, and the M-step also requires operations. Here, k is the number of clusters and N is the number of users. For the enhanced social recommendation model, the main computation is to evaluate the objective function L and its gradients with respect to the variables. Because of the sparsity of R and T, the computational complexity of evaluating the objective function L is , where , , and are the numbers of nonzero entries in matrices R, T, and S, respectively, and f is the dimension of the latent feature vectors. The computational complexities for the gradients , , and in (51)-(53) are , , and , respectively. Therefore, the total computational complexity of one iteration is , which is linear in the number of observations in the sparse matrices.
More important for many applications is the online complexity of computing predictions in a dynamic environment. For a prediction using (37), and are assumed to be explicitly available as part of the statistical model; this requires 2k arithmetic operations, so the computational complexity is O(k). For the enhanced social matrix factorization algorithm, the computational complexity is for one iteration, where M is the number of items. Thus, the total online computational complexity of the proposed hybrid system is .
Compared with existing recommendation algorithms, the online computational complexity of our method is comparable to or better than that of the algorithms in [18, 19, 41, 46], being on the same order of magnitude. The offline computational complexity of the proposed hybrid system exceeds that of most traditional methods, such as user-based CF, item-based CF, SVD, and k-means CF, because our method additionally fills a small amount of data using CF and EM. Compared with most social matrix factorization algorithms, the offline computational complexity is on the same order of magnitude. The results of the complexity analysis show that the hybrid algorithm proposed in this paper is efficient and can be extended to larger datasets.
4. Experiment
In order to evaluate the proposed method, several experiments are performed to show its effectiveness. In particular, the proposed method is compared with other major existing recommendation approaches in terms of recommendation performance on the Epinions and Tencent datasets.
4.1. Datasets
In this paper, two real-world datasets, namely, the Epinions and Tencent datasets, are used to conduct the experiments. Both datasets contain trust statements among users, so this information can be integrated into the improved similarity calculation and the enhanced social matrix factorization model to improve the accuracy of recommendations.
Epinions.com is a consumer opinion site that was established to facilitate knowledge sharing about products. Users on Epinions can write reviews about items (e.g., foods, books, and electronics) and assign numeric ratings, which range from 1 to 5, to these items [30]. Moreover, these users can also express their trust statements with the other users. The values of the trust statements in this dataset are 0 or 1. The extracted dataset from the Epinions website consists of 1,261,218 ratings rated by 12,630 users on 3,620 different items. The sparsity is 97.24%.
Tencent dataset is from the track 1 task of the 2012 KDD, provided by Tencent microblog. Tencent microblog, an online social networking site similar to Facebook and Twitter, has become an important communication platform for making friends and sharing information [17]. This dataset is sampled from 50 days of behavioral data of about 200 million registered users, including about 2 million active users, 6,000 items, and 300 million records of historical activity. A smaller dataset is extracted, which contains 326,560 ratings data from 9,650 users on 1,650 items. The sparsity is 97.95%.
4.2. Evaluation Measures
In this paper, the Mean Absolute Error (MAE) is used to evaluate the performance of the proposed methods. The MAE measure for user u is calculated as follows [19] (see (54)), where and are the real and predicted ratings of item i for user u, respectively, and N is the total number of ratings predicted by the recommendation method.
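In code, the MAE of (54) reduces to a few lines; the rating pairs below are hypothetical.

```python
def mae(pairs):
    """Mean Absolute Error over (actual, predicted) rating pairs, as in (54)."""
    return sum(abs(r - p) for r, p in pairs) / len(pairs)

error = mae([(4, 3.5), (2, 2.5), (5, 4.0)])   # (0.5 + 0.5 + 1.0) / 3
```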
In addition, precision and recall are also widely used metrics in recommender systems; the evaluation metrics are averaged over all users. The items are sorted by their predicted ratings from largest to smallest, the top N items are recommended to the current active user, and these N items are compared with the most relevant items in the test set. The larger the value of precision@N (P@N) is, the higher the accuracy is. Recall@N (R@N) describes what percentage of relevant items is included in the recommendation list. The precision and recall metrics are computed as follows [54] (see (55)-(56)), where top@N_items and relevant_items are the recommended list and the list of actually liked items, respectively.
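A minimal sketch of the two ranking metrics, on hypothetical lists:

```python
def precision_recall_at_n(ranked, relevant, n):
    """P@N and R@N as in (55)-(56): `ranked` is the recommendation list
    sorted by predicted rating (descending), `relevant` the set of items
    the user actually liked in the test set."""
    hits = sum(1 for item in ranked[:n] if item in relevant)
    return hits / n, hits / len(relevant)

p, r = precision_recall_at_n(["a", "b", "c", "d"], {"b", "d", "e"}, n=3)
```

Here the top-3 list contains one of the three relevant items, so both P@3 and R@3 are 1/3.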
4.3. Results of the Experiments
4.3.1. Parameter Settings
Table 2 shows the details about the parameters used in all methods, including their meanings and the default values.
4.3.2. The Influence of the Parameters
Firstly, we analyze the impact of parameter changes at each data processing step on the proposed algorithm and find their optimal configuration by testing multiple sets of parameter configurations.
The parameter is an important parameter which defines the number of nearest neighbors for the active user. The performed experiments test the effect of different values of (i.e., = 10, 50, 100, 150, 200, 250, 300, 350, and 400) on the mentioned measures for the Epinions and Tencent datasets. Figure 7 shows the results of the MAE measure based on different values of the parameter for both datasets. As the number of nearest neighbors grows, the MAE measure gradually decreases; however, when the number of nearest neighbors exceeds 200 for the Epinions dataset and 100 for the Tencent dataset, respectively, the MAE measures begin to increase, indicating that weakly similar neighbors introduce noisy data and performance begins to degrade seriously.
Figure 8 shows the performance of the CF method on the different datasets using the similarities of (1), (3), and (19), respectively, i.e., cosine, PCC, and the improved cosine similarity. The numbers of nearest neighbors are set to 200 and 100 on the Epinions and Tencent datasets, respectively. It can be seen that the accuracy of the recommendations is significantly improved, indicating that the proposed improved cosine similarity is superior to the traditional cosine similarity and PCC on both datasets.
Moreover, the parameters , , and control the influence of the common item set rated by users, the time attenuation factor, and the trade-off between the similarities and the trust relationships among users, respectively. We set γ from 0.5 to 5, ω from 0.01 to 1, and φ from 0 to 1 to evaluate the performance. Figures 9, 10, and 11 show the MAE measures based on different values of the parameters γ, ω, and φ for the improved CF recommendation algorithm, while is set to 250 and 50 for the Epinions and Tencent datasets, respectively. As shown in Figures 9, 10, and 11, the accuracies reach the highest level when γ = 2.5, ω = 0.05, and φ = 0.4 on the Epinions dataset and γ = 3.5, ω = 0.08, and φ = 0.4 on the Tencent dataset, respectively.
As shown in Figure 11, the recommendation accuracy achieves its best performance when = 0.4, i.e., when the trust relationship and the similarity reach a balance, on both the Epinions and Tencent datasets.
In order to obtain a more optimized combination of parameters, we choose appropriate values for the parameter combination from each single parameter's optimal interval. The experimental results of the combined parameter performance are presented in Table 3.
Secondly, in order to evaluate the influence of the number of clusters on recommendation accuracy, several experiments are conducted on the Epinions and Tencent datasets, with the value of k set from 1 to 35. As shown in Figure 12, the predicted rating has a lower error when k is set to relatively small values, and the value of MAE grows quickly when the value of k exceeds 8; the predicted error reaches its minimum when the value of k approaches 15.
Thirdly, in order to evaluate the influence of the parameters and on recommendation accuracy, experiments are conducted with different values of and . Among them, balances the information from the user-item rating matrix and the user social trust network: if , only the user-item rating matrix is mined for matrix factorization, and if , only the social network information is used to predict the user's preferences. controls the influence of the item's similarity on the item latent feature space V. Figure 13 shows the impact of when = 10 and = 20 on the MAE measure for the Epinions and Tencent datasets, respectively. As shown in Figure 13, our model obtains the lowest MAE of 0.616 when = 10 on the Epinions dataset and of 0.585 when = 7 on the Tencent dataset.
Figure 14 shows the impact of when = 10 and = 7 on the MAE measure for the Epinions and Tencent datasets, respectively. It can be observed from Figures 13 and 14 that the values of and affect the recommendation results significantly, which demonstrates that fusing the item's social relationship and the user's trust relationship with social status into the user-item rating matrix greatly improves the recommendation accuracy. As increases, the prediction accuracy also increases at first, but when surpasses a certain threshold, the prediction accuracy decreases with further increases of its value; the trend of is similar. Our model obtains the lowest MAE of 0.591 when = 15 on the Epinions dataset and of 0.553 when = 20 on the Tencent dataset.
In addition, the number of hidden features, i.e., f, is another important parameter affecting the recommendation performance of the proposed algorithm. f is varied from 5 to 50 with a step of 5, with the other parameters set to = 10, = 15 and = 7, = 20 on the Epinions and Tencent datasets, respectively. Figure 15 shows the MAE results based on different values of f for both datasets. It can be observed that the value of MAE decreases at first, then gradually increases, and finally tends to be stable; beyond the optimum, the recommendation performance decreases as f increases. This observation shows that although increasing f allows the matrix factorization model to express more hidden features, it also introduces noise that reduces the accuracy of the recommendation algorithm. It verifies the basic assumption of the matrix factorization model: only a small number of hidden factors affect the user's preferences and characterize the item.
4.3.3. Performance Comparison and Analysis
In the experiments, the proposed method (GMMESMF) is compared to the probabilistic matrix factorization (PMF) [34], reliabilitybased trustaware collaborative filtering (RTCF) [24], matrix factorizationbased model for recommendation in social rating networks (SocialMF) [18], time and communityaware RS (TCARS) [49], contextaware recommender system via individual trust among users (CSIT) [19], imputationbased matrix factorization (IMF) [23], and implicit social recommendation (ISRec) [22] in terms of the MAE, P@N, and R@N measures on the Epinions and Tencent datasets.
PMF, proposed by Salakhutdinov, uses only the user-item rating matrix for recommendations based on probabilistic matrix factorization. RTCF, proposed by Moradi, improves the accuracy of trust-aware RS using a novel trust-based reliability measure to evaluate the quality of the predicted ratings. SocialMF is a recommendation algorithm based on social networks proposed by Jamali, which adds a trust propagation mechanism to PMF to improve the accuracy of recommendations. TCARS is a novel recommendation method that efficiently uses the times of ratings and an improved overlapping community detection method to build recommendation lists for users. CSIT, proposed by Li, optimizes the prediction solution in both the user latent feature space and the user-item rating space using individual trust among users. IMF improves the performance of matrix factorization-based methods by using imputed ratings of unknown entries to overcome the sparsity problem. ISRec is an implicit recommendation model that constructs users' implicit social relationships and integrates the user's and item's social relationships with matrix factorization to reduce sparsity and improve recommendation quality.
Each dataset is divided into five-fold cross-validation subsets for assessing the proposed method. In each round, 80% of the dataset is used for training and the remaining 20% for testing. As a result, we obtain five different results on the basis of the five different testing subsets, and the average of these results is taken as the final result.
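The five-fold protocol can be sketched as follows; the split strategy (a single shuffle, then interleaved folds) is one common way to implement it, not necessarily the paper's exact procedure.

```python
import random

def five_fold_splits(ratings, seed=0):
    """Shuffle the observed (user, item, rating) triples once, then yield
    five (train, test) splits in which each fold holds ~20% of the data
    for testing, matching the 80/20 protocol described above."""
    data = list(ratings)
    random.Random(seed).shuffle(data)
    for k in range(5):
        test = data[k::5]                                   # every 5th triple
        train = [t for i, t in enumerate(data) if i % 5 != k]
        yield train, test

folds = list(five_fold_splits(range(10)))   # toy "dataset" of 10 records
```

Every record appears in exactly one test fold, so averaging the per-fold metrics covers the whole dataset.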
For fair comparison, we set the parameters of different algorithms by referring to the corresponding literatures and experimental results of the comparison algorithms. Under the settings of these parameters, each comparison algorithm achieves optimal performance.
Table 4 shows the MAE, P@N, and R@N measures on the Epinions and Tencent datasets. It compares the proposed method (i.e., GMMESMF) with PMF, RTCF, SocialMF, TCARS, CSIT, IMF, and ISRec in terms of the MAE, P@N, and R@N measures. On the Epinions dataset, for fair comparison, the common parameter settings are the same: the number of hidden features f is set to 10, is set to 200, and the regularization term parameters , , , and are used for all algorithms. is a trade-off parameter that adjusts the degree of influence of neighbors and trusted friends on the recommendation; since its degree of influence differs in each algorithm, its settings are different. For TCARS, , . For CSIT, . For IMF, , . For GMMESMF, the parameters are , , and the results achieve an MAE value of 0.591, while P@10 and R@10 reach 0.452 and 0.265, respectively. The results show that the GMMESMF method obtains the lowest MAE and the highest precision and recall on the Epinions dataset.
Likewise, the experiments are repeated on the Tencent dataset, and as shown in Table 4, the performance of the GMMESMF outperforms the other algorithms on the Tencent dataset.
Among all the methods, the traditional method, i.e., PMF, behaves the worst, followed by IMF. This is because they only use the known useritem rating information to predict the unknown ratings without using additional social network information. The performance of IMF is better than PMF because the former improves the performance of matrix factorizationbased methods through employing filled ratings of unknown entries to overcome the sparsity problem. IMF increases recommendation accuracy by 4.08% on Epinions dataset and by 1.13% on Tencent dataset in terms of MAE over PMF, respectively.
The other recommendation algorithms, i.e., RTCF, SocialMF, TCARS, CSIT, ISRec, and GMMESMF, outperform PMF and IMF. Among them, RTCF, SocialMF, TCARS, ISRec, CSIT, and GMMESMF increase recommendation accuracy by 8.6%, 7.1%, 11.8%, 13.5%, 9.1%, and 19.6% in terms of MAE over PMF for Epinions dataset and by 3.7%, 3.1%, 5.5%, 7%, 4.6%, and 10.1% in terms of MAE over PMF for Tencent dataset, respectively. This is because these algorithms integrate several social relationships, including user’s trust, interest, and item’s attributes, into model training to obtain more accurate users’ preferences. The experimental results indicate that social relationships are useful to alleviate the problems of data sparsity and cold start.
Our proposed GMMESMF model achieves the best performance compared to other models. GMMESMF increases recommendation accuracy by 12.1%, 13.5%, 8.8%, 11.5%, and 7.1% in terms of MAE over RTCF, SocialMF, TCARS, CSIT, and ISRec for Epinions dataset, respectively. Similarly, there are significant improvements in terms of P@N and R@N over the other recommendation algorithms. The difference with ISRec model is that our proposed GMMESMF model not only combines the individual trust between users, but also considers user’s social status and item’s social relationship to improve the recommendation performance. This is because GMMESMF simultaneously optimizes the solution in user latent feature space, item latent feature space, and useritem rating space using the above three social factors. In addition, our model alleviates data sparsity and ensures the reliability of the predictions through filling unknown ratings based on improved CF method. The experimental results demonstrate that GMMESMF is effective.
Similarly, to verify the effectiveness of our model for cold-start users, an experiment is conducted on cold-start users using the Epinions and Tencent datasets, and comparative results for the different models are shown in Table 5. Here, we define users who have rated no more than 3 items as cold-start users [11, 13].
It can be observed that GMMESMF improves the MAE performance by more than 31% over PMF and by more than 15%, 18%, 10%, 12%, 22%, and 8% over RTCF, SocialMF, TCARS, CSIT, IMF, and ISRec, respectively, on the Epinions dataset. Similarly, on the Tencent dataset, GMMESMF improves the MAE performance by more than 32% over PMF and by more than 12%, 15%, 10%, 11%, 25%, and 8% over RTCF, SocialMF, TCARS, CSIT, IMF, and ISRec, respectively.
This is because the GMMESMF model is able to decrease the recommendation error by combining the improved CF filling method based on trust relationships with matrix factorization.
5. Conclusions
In this paper, a novel hybrid method is proposed to improve the accuracy of RS. A constrained similarity measure is proposed based on cosine similarity, Salton factors, and trust relationships. In addition, a novel multi-step filling method is proposed to improve prediction, based on the assumptions that a user has multiple interests, that similar users have the same preferences, and that similar items are liked by users with the same interests. The proposed method first uses user-based CF and item-based CF to fill in the user-item rating matrix and then uses the Gaussian mixture model to predict ratings to reduce the sparsity of the rating matrix. Finally, an enhanced social matrix factorization method is proposed to predict the ratings of the remaining unrated items, which fuses the user's trust relationship with social status and the item's social relationship into the matrix factorization algorithm, aiming to improve the accuracy of recommendation by mining the intrinsic connections in the user-item rating matrix and users' interactions with items. Extensive experiments are conducted on two real-world datasets, and the results show that the proposed method achieves higher accuracy than the existing major methods considered in this paper. Although it has advantages in recommendation effectiveness, our algorithm still has some limitations, and there is room for further improvement. The limitations of our approach are twofold: first, GMMESMF needs to be prepopulated, i.e., we have to fill in some unrated items in the user-item rating matrix before predicting the ratings for all of the unrated items. Second, our GMMESMF model faces increased computational complexity when the similarities of too many users and items are calculated.
There are several interesting directions to explore in future work. We would like to develop a novel kNN graph construction algorithm that reduces computational complexity and to extend the model to make recommendations based on social networks integrating multiple kinds of context information. Furthermore, our future study will focus on constructing recommendation models from the user's perspective; social relationships between users, social tags, and items' attributes will be considered and further investigated.
Data Availability
The data used to support the findings of this study is available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61272286, in part by Joint Funded Projects of the Special Scientific Research Fund for Doctoral Program of Higher Education under Grant 20126101110006, and in part by the Industrial Science and Technology Research Project of Shaanxi Province under Grant 2016GY123.