Abstract

Bipartite network projection method has been recently employed for personal recommendation. It constructs a bipartite network between users and items. Treating user taste for items as resource in the network, we allocate the resource via links between user nodes and item nodes. However, the taste model employed by existing algorithms cannot differentiate “dislike” and “unrated” cases implied by user ratings. Moreover, the distribution of resource is solely based on node degrees, ignoring the different transfer rates of the links. To enhance the performance, this paper devises a negative-aware and rating-integrated algorithm on top of the baseline algorithm. It enriches the current user taste model to encompass “like,” “dislike,” and “unrated” information from users. Furthermore, in the resource distribution stage, we propose to initialize the resource allocation according to user ratings, which also determines the resource transfer rates on links afterward. Additionally, we also present a scalable implementation in the MapReduce framework by parallelizing the algorithm. Extensive experiments conducted on real data validate the effectiveness and efficiency of the proposed algorithms.

1. Introduction

With the rapid growth of the World Wide Web, people are immersed in an overwhelming amount of information, which makes it difficult to obtain relevant information of interest. Personal recommendation, as a promising method to overcome information overload, is employed to suggest products such as news, books, music, and movies to the consumers who may be interested in them. As a consequence, a large number of diverse algorithms have been proposed to solve the problem. Besides the classic methods, for example, content-based methods [1] and collaborative filtering and its variants [2–4], some new paradigms have been introduced lately, including matrix factorization [5, 6], social filtering [7], and network-based methods [8–10].

Bipartite network projection is among these approaches; it was initially introduced in physics but has found applications in personal recommendation [8]. It relies on a resource distribution process in a bipartite network to provide a top-N recommendation. Particularly, it considers users and items as two types of network nodes, respectively, and treats user taste as resource to be allocated in the bipartite network. A user rating is scaled into a numeral (either 1 or 0), in comparison with the median of the predefined rating range, to indicate user taste. In other words, numeral 1 means the user likes a particular item, while numeral 0 means the user does not like or has not rated the item. Afterwards, two rounds of resource distribution are carried out, regardless of the user ratings. Eventually, the final resource that an unrated item gets indicates its possibility of being recommended to the user. The method was shown to outperform collaborative filtering methods. However, we observe that there are at least two shortcomings that limit its further improvement. Firstly, the current user taste model does not differentiate the “dislike” and “unrated” cases, both expressed by numeral 0, which potentially prevents the algorithm from providing a more precise recommendation. Secondly, the resource distribution process does not leverage user ratings; that is, the transfer rates on different links are not proportional to the corresponding user ratings. Such an allocation, and hence the recommendation, can be inaccurate when a user has a biased preference among the items. We rectify these issues in this work.

This paper first presents a negative-aware user taste model to encompass the “like,” “dislike,” and “unrated” cases implied by user ratings, so that the user preference is fully reflected during recommendation. Instead of using the predefined rating range, we propose to use an adaptive, user-specific threshold for determining user taste. While the rationale is similar to that of the existing model, we compare a rating with the average rating of the user. Hence, a rating above this average is considered a “like,” expressed by numeral 1; a rating below this average is regarded as a “dislike,” expressed by numeral −1; and unrated cases are denoted by numeral 0. Based on this user taste model, we initialize the resource of each item according to the user taste and the ratio of the rating to the average of all ratings from this user, which can appropriately reflect the dissimilarity of user interest. Similarly, when deriving the transfer rates of links in resource distribution, we design a rating-integrated method that allocates resource via a link in proportion to the user ratings on the corresponding item. In this way, the final resource allocation is expected to reflect user preference more expressively. To scale the solution towards massive datasets, we further present a MapReduce-based implementation to parallelize the computation.

To summarize, we make the following contributions.
(i) We devise a negative-aware user taste model to encompass the “like,” “dislike,” and “unrated” cases implied by user ratings.
(ii) We propose a rating-integrated method to allocate the initial resource and determine the transfer rates on network links according to user ratings.
(iii) We extend the proposed algorithm to the MapReduce programming framework so that it is able to deal with massive volumes of rating data.
(iv) We implement all the proposed algorithms and conduct extensive experiments on real public datasets to evaluate their effectiveness and efficiency.

The rest of the paper is organized as follows. We discuss the related work in Section 2; Section 3 introduces the preliminaries and the baseline algorithm. Section 4 presents the new taste model and the methods for initializing and distributing the resource, followed by the implementation on MapReduce. Experimental results are presented and discussed in Section 5, and we conclude the paper in Section 6.

2. Related Work

Personal recommendation has been a topic that draws much attention, and most of the existing work falls into the following five categories.

Content-based methods extract features from items and build a profile for each user from the features of the items she has collected, which is then utilized to find the most similar items. However, feature extraction is not easy, and the recommendation is restricted to items very similar to those that have already been liked [1]. Semantic reasoning was used to overcome the shortcomings of the vector space model [11]. This kind of algorithm needs to describe item content and extract properties of rated items to build user profiles, so the cold-start problem is serious.

Contrarily, collaborative filtering does not rely on item content. It utilizes rating behavior data to select the most similar neighbors and makes recommendations on the assumption that users who behaved similarly in the past will behave similarly in the future. The key steps of collaborative filtering are computing the similarity and generating the recommendation from the nearest neighbors. Representative work includes [2–4]. Due to the vast quantity of items and users but the quite limited number of ratings, the sparsity problem becomes the bottleneck for raising effectiveness.

Matrix factorization [5, 6] characterizes both items and users by vectors of factors inferred from rating patterns, and high correspondence between item and user factors leads to a recommendation. Singular value decomposition [12] is the most commonly used method to reduce the matrix dimension. The complexity of the computation is so high that few practical applications adopt this kind of algorithm.

Social-based methods recommend to a user what his neighbors like, according to the link structure of the social network. An InterestMap is built in [7] based on co-occurring keywords for recommendation. The social relations of users were incorporated into collaborative filtering to adjust the nearest neighbor selection strategy [13]. This approach is suitable for datasets that also describe the friend relations of users.

A line of more closely related work comprises the algorithms based on bipartite networks [8, 10], into which our proposed algorithm falls. High-order correlations were considered in [14]. There are also methods based on heat conduction [9, 15] and random walks [16] on bipartite networks. A hybrid approach combining multistep random walk and k-means clustering was introduced to achieve smaller mean absolute error and root mean square error [17]. This family of methods is more precise than traditional recommendation algorithms, yet its accuracy and scalability can be further improved.

In order to improve the scalability of recommender systems, cloud-computing platforms are usually employed, for example, the Dynamo [18] of amazon.com and the Dryad [19] of Microsoft. Hadoop (Apache Hadoop, http://hadoop.apache.org/) is the most popular and widely used open source platform implementing the MapReduce framework [20]. Hadoop is employed for content characterization and profile matching in content-based recommendation in [21]. A scalable online collaborative filtering algorithm for news recommendation is proposed in [22], which combines memory-based and model-based collaborative filtering algorithms. A recommendation algorithm with matrix factorization is implemented on MapReduce in [23], and its efficiency is shown to be higher. To solve the scalability problem, [24] implements the network-based inference algorithm on Hadoop; however, the accuracy remains to be improved.

3. Preliminaries

The baseline algorithm for personal recommendation based on bipartite network projection relies on a bipartite network consisting of two types of nodes, user nodes and item nodes, denoted by $U$ and $O$, respectively. Let $u_i$ denote the $i$th user in $U$ and $o_j$ denote the $j$th item in $O$; $u_i$ has a rating $r_{ij}$ on $o_j$. All the ratings from user $u_i$ constitute a rating set $R_{u_i}$ with cardinality $|R_{u_i}|$, and all the ratings on item $o_j$ constitute a rating set $R_{o_j}$ with cardinality $|R_{o_j}|$. Links exist only between nodes of different types, never within the same type; that is, a link always connects a user node and an item node, with no exception. Each link models a rating behavior of a user on an item. Assume that every item rated by a given user is assigned a certain quantity of resource, that is, an initial resource allocation. The resource first flows from item nodes to user nodes and then back to item nodes along the links. Hence, every item gets a final resource allocation after the two-round resource distribution. The top-N unrated items with the largest resource are recommended to the current user, and the recommendation is then executed for the next user.

Consider the initial resource as recommendation power. The intuition behind it is that if an item gets more resource after distribution, it is more likely to be liked based on the user’s preference. Thus, the resource distribution process is of importance. Currently, the taste of a user is determined based on a threshold equal to the median of the predefined rating range. Specifically, a user likes an item, expressed by numeral 1, if she rates the item above the threshold; otherwise, the case falls into the unrated category, expressed by numeral 0. For example, the threshold is 3 if the predefined rating range is [1, 5]; a user rating of 4 on an item implies that she likes the item, since the rating is above the threshold, whereas a user rating of 2 on an item is reduced to the unrated case, not affecting the recommendation thereafter.

After determining user tastes, the resource allocation process in the bipartite network is carried out. Two rounds of resource distribution are conducted. In the first round, resources are transferred from item nodes to user nodes, and all user nodes having links to a particular item share its resource equally. In the second round, resources are transferred back to item nodes, and all item nodes having links to a user share that user’s resource equally. The final resource allocated to each item indicates its probability of being recommended to the given user. For a top-N recommendation, a list of the N unrated items with the largest resource allocation is created for the given user. We refer to this as “the baseline algorithm” in the rest of the paper when the context is clear.
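To make the two rounds concrete, the following minimal Python sketch (our own illustration; the function and variable names are not from the paper) implements the degree-based baseline for one target user, with the rated pairs stored in a dictionary:

```python
from collections import defaultdict

def baseline_nbi(ratings, target_user, rating_range=(1, 5), top_n=3):
    """Degree-based bipartite network projection (baseline sketch).
    ratings: dict mapping (user, item) -> rating; only rated pairs are present.
    Returns the top_n unrated items for target_user."""
    median = sum(rating_range) / 2.0
    users = {u for u, _ in ratings}
    items = {o for _, o in ratings}

    # Initial resource: 1 if the target user rated the item above the median, else 0.
    f = {o: 1.0 if ratings.get((target_user, o), 0) > median else 0.0 for o in items}

    # Round 1: each item shares its resource equally among the users linked to it.
    user_res = defaultdict(float)
    for o in items:
        linked_users = [u for u in users if (u, o) in ratings]
        for u in linked_users:
            user_res[u] += f[o] / len(linked_users)

    # Round 2: each user shares its resource equally among the items it rated.
    item_res = defaultdict(float)
    for u in users:
        linked_items = [o for o in items if (u, o) in ratings]
        for o in linked_items:
            item_res[o] += user_res[u] / len(linked_items)

    # Recommend the unrated items with the largest final resource.
    unrated = [o for o in items if (target_user, o) not in ratings]
    return sorted(unrated, key=lambda o: item_res[o], reverse=True)[:top_n]
```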

Example 1. Figure 1 shows an example of recommendation for a target user by the baseline algorithm. Assume that the constructed bipartite network is the one in Figure 1(a), with item nodes at the top and user nodes at the bottom. The ratings of the target user on the five items are 5, 0, 2, 0, and 4, respectively, and the median is 3, so the initial resource configuration is 1, 0, 0, 0, and 1, respectively. The initial resource and the raw rating value are tagged above the item nodes. In the first round, as in Figure 1(b), resources are transferred from item nodes to user nodes; each user node receives an equal share of the resource of every item it links to. Then, in the second round, as in Figure 1(c), resources are transferred back to item nodes, each item receiving an equal share of the resource of every user it links to. Summing up these shares, one of the unrated items accumulates a final resource of 11/36 in total; this item will be recommended to the target user, since its allocated resource is the largest among the user’s unrated items.

4. Algorithms

This section first improves the accuracy of the baseline algorithm by introducing a negative-aware user taste model, along with a rating-integrated method for initializing and distributing the resource. Afterward, the scalability is improved by an implementation on MapReduce.

4.1. User Taste Model

Let us first take a motivating example.

Example 2. Consider a predefined rating range [1, 5] and a user rating of 1 on an item. In the baseline algorithm, we take the median of the rating range, that is, 3, as the threshold. As 1 is below the threshold, the user may not like the item. Subsequently, we use the numeral 0 in the initial resource allocation for this item. Recall that a user’s taste for an item is also expressed by 0 if she has not rated the item.

The current user taste model does not distinguish the aforementioned cases, which hinders the algorithm from reaching a higher recommendation precision. Furthermore, it is intuitive that a rating close to the bottom of the rating range implies a low satisfaction or preference towards an item, that is, a negative attitude towards the item. We argue that a rating reflects the user’s taste only when it is compared with her own rating range, rather than the predefined rating range. As a consequence, we distinguish the three cases of “like,” “dislike,” and “unrated” in the new adaptive user taste model.

Firstly, we adopt the user’s own rating range as the reference, instead of the predefined rating range; that is, we compare a user rating with respect to the user’s own rating range. For example, given a predefined rating range [1, 5], a user may always rate within [3, 5], and then the latter is the user’s own rating range. This is intuitive, as users may have different rating habits. Some users are harsh when rating and hence give ratings across the whole predefined rating range, while others are soft towards the items and hence usually give ratings within a small subdomain of the predefined rating range.

Subsequently, it is further observed that, even if a user’s rating on an item is above some other users’ ratings, it may still indicate that she dislikes the item, since she has a narrow rating range. Therefore, instead of using the median of the predefined rating range, we propose to use the average of a user’s ratings as the threshold to determine the user’s taste towards an item. That is, we take

\[ \bar{r}_i = \frac{1}{|R_{u_i}|}\sum_{r \in R_{u_i}} r \tag{1} \]

as the threshold for determining the taste of user $u_i$, where $R_{u_i}$ is the set of ratings from $u_i$. Consequently, we have the taste model regarding user $u_i$ and item $o_j$:

\[ t_{ij} = \begin{cases} 1, & \text{if } r_{ij} > \bar{r}_i, \\ -1, & \text{if } r_{ij} \le \bar{r}_i. \end{cases} \tag{2} \]

The taste model is an indicator function such that 1 means the user likes the item, while −1 means the user dislikes the item. Note that we follow the convention of denoting the user’s taste for an unrated item as 0, that is, $t_{ij} = 0$ if $u_i$ has not rated $o_j$.

Example 3. Consider a predefined rating range of [1, 5] and a user who rates five items as 2, 3, 3, 4, and 5, respectively. According to the new model, we first take the average, 3.4, as the threshold. Hence, a rating of 3 is considered a dislike, since it is below the threshold derived from the user’s own rating habit, although it is the median of the predefined range. On the other hand, a rating of 4 implies a like, since it is above the threshold.
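As a small illustration of the adaptive taste model (our own sketch; the item labels are hypothetical), the following snippet reproduces Example 3, using the user’s own average as the threshold:

```python
def user_taste(user_ratings):
    """Map each rated item to 1 (like) or -1 (dislike), using the user's own
    average rating as the threshold; unrated items are implicitly 0."""
    avg = sum(user_ratings.values()) / len(user_ratings)
    return {item: (1 if r > avg else -1) for item, r in user_ratings.items()}

# Example 3: ratings 2, 3, 3, 4, 5 give an average of 3.4, so
# user_taste({"o1": 2, "o2": 3, "o3": 3, "o4": 4, "o5": 5})
# returns {"o1": -1, "o2": -1, "o3": -1, "o4": 1, "o5": 1}.
```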

One may argue that there are subtle cases in which a user always rates items highly, where our model would consider those ratings below her average rating as “dislike.” Actually, to the best of our knowledge, there is no existing rating system in use that allows negative ratings, and hence the “dislike” rating behavior is difficult to capture. We argue that the proposed user taste model is able to capture more of the information implied by user rating behaviors and demonstrates better performance (cf. Section 5.2).

Our user taste model is adaptive in the sense that it adjusts the threshold for each user so that the user’s taste is well reflected. Introducing the taste of dislike makes full use of user taste information. Moreover, it distinguishes the dislike and unrated cases that are treated identically in the baseline, and hence the recommendation performance is expected to improve. We will see shortly how this affects the resource distribution process.

4.2. Initial Resource Allocation

A higher rating implies a stronger recommendation from a user towards an item. To reflect this information in the initial resource allocation, we weight the initial resource of every item with a coefficient. In particular, we multiply the taste numeral by the ratio of the user’s rating on the current item to the average of the user’s ratings on all rated items. For a given user $u_i$ and an item $o_j$, the initial resource allocated to $o_j$ is

\[ f_j = t_{ij} \cdot w_{ij}, \tag{3} \]

where $t_{ij}$ is the taste model formulated in Section 4.1 and $w_{ij}$ denotes the weight we put on the user taste to generate the initial resource, which is

\[ w_{ij} = \frac{r_{ij}}{\bar{r}_i}, \tag{4} \]

where $\bar{r}_i$ is the average of $u_i$’s ratings as in (1); the top of the user rating range is used to make sure that a smaller rating gets a smaller initial resource under the negative taste, that is, when the rating is less than the average of the user’s ratings.

This initial resource allocation emphasizes the distinctions among the user’s tastes for different items, even among the items that the user likes. Thus, the initial resources become more distinguishable and more consistent with user taste.

Example 4. In the baseline algorithm, the initial resource equals the user taste for the items. Since the ratings of a user on items with the same taste value can differ, we have good reason to doubt this way of initializing the resource by simply equating it with the user taste. In Figure 2(a), the numbers next to the links are the ratings. To allocate the initial resource with the aforementioned model, we first determine the user taste by formula (2), which yields 1, 0, −1, 0, and 1 for the five items, respectively. Then, we allocate the initial resource by weighting the nonzero user tastes with 5/3.7, 2/3.7, and 4/3.7, respectively, as represented by formulae (3) and (4).
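The weighted initialization used in Example 4 can be sketched as follows (our own illustration; as the example suggests, the weight is taken to be the plain ratio of the rating to the average over the user’s rated items, and the role of the top of the rating range mentioned above is not modeled here):

```python
def initial_resource(user_ratings):
    """user_ratings: dict item -> rating for the items the target user rated.
    Initial resource = taste * (rating / average of the user's ratings);
    unrated items implicitly start with resource 0."""
    avg = sum(user_ratings.values()) / len(user_ratings)
    return {item: (1 if r > avg else -1) * r / avg
            for item, r in user_ratings.items()}

# Example 4: ratings 5, 2, and 4 on the three rated items (average of about 3.7)
# yield initial resources of roughly 5/3.7, -2/3.7, and 4/3.7.
res = initial_resource({"o1": 5, "o3": 2, "o5": 4})
```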

4.3. Resource Distribution

Resource distribution based solely on node degrees ignores the differences between links, so it cannot reflect the distinct extents of interest from different users towards the same item or from the same user towards different items. During the resource transfer from item nodes to user nodes and then back to item nodes, we therefore make the process aware of the transfer rate of every link, which is the ratio of the user’s rating to the sum of ratings from all users who have rated the given item in the first round, or the ratio of the user’s rating on an item to the sum of ratings by this user on all items in the second round. This improves the accuracy of resource distribution by making the distribution process more specific. In particular, in the first round of transfer from item nodes to user nodes, the ratings are normalized by each user’s average rating to avoid the influence of rating bias across users. Thus, for each user node $u_l$, the resource transferred from item $o_j$ to this node is

\[ f(u_l, o_j) = \frac{r_{lj}/\bar{r}_l}{\sum_{s=1}^{n} r_{sj}/\bar{r}_s}\, f_j, \tag{5} \]

where $f_j$ is the initial resource of item $o_j$, $n$ is the number of users, and $r_{sj} = 0$ if $u_s$ has not rated $o_j$. Adding the contributions from all item nodes connecting to $u_l$,

\[ f(u_l) = \sum_{j=1}^{m} f(u_l, o_j), \tag{6} \]

where $m$ is the number of items.

Example 5. Recall Example 4. In Figure 2(b), the numbers around the edges are the transfer rates from item nodes to user nodes. The initial resource is distributed from item nodes to user nodes in the first round by formula (5). In particular, the initial resource of an item is distributed to the users who rated it according to the transfer rate of each link; for one of the items rated by the target user, the three connected users receive 0.48, 0.52, and 0.4 from it, respectively. After the other items finish their resource distribution, the resource allocation of every user is obtained; for example, one user ends up with 0.48 in total, receiving 0.48 from that item and 0 from the other item it links to.

Similarly, in the second round, resource transfers from user nodes back to item nodes according to the transfer rates from user nodes to item nodes. Nonetheless, no normalization needs to be done in this round, as the ratings involved are all given by the same user. Hence, we can derive the final resource allocation to an item by

\[ f'(o_j) = \sum_{l=1}^{n} f'(o_j, u_l), \tag{7} \]

where $f'(o_j, u_l)$ denotes the final resource transferred from user $u_l$ to item $o_j$, which is

\[ f'(o_j, u_l) = \frac{r_{lj}}{\sum_{t=1}^{m} r_{lt}}\, f(u_l). \tag{8} \]

Example 6. Recall Example 4. In Figure 2(c), the numbers around the edges are the transfer rates from user nodes to item nodes. The resource is distributed from user nodes to item nodes in the second round by formula (8). After every user distributes its resource to the items it links to, the final resource of every item can be summed up; for example, two of the items end up with final resources of 0.19 and 0.05, respectively. So, in this bipartite network sample, the recommendation result for the target user under our proposed algorithm is the item with final resource 0.19, and this differs from the result of the baseline algorithm, whose recommended item has a resource value of 0.3 (cf. Example 1).

The correctness of the method remains, since the different weighting of the initial resource allocation and the distribution only affects the volume of transferred resource, not the distribution process itself. Furthermore, this rating-aware resource distribution allocates the user taste resource discriminatively. Therefore, the final resource allocation is expected to provide a more accurate suggestion for the top-N recommendation. We verify the effectiveness of the proposed algorithm in Section 5.
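For clarity, the following sketch (ours, not the authors’ code) chains the rating-weighted initialization with the two rounds described by formulas (5)–(8) for a single target user:

```python
from collections import defaultdict

def rating_integrated_recommend(ratings, target_user, top_n=3):
    """Two-round, rating-weighted resource distribution for one target user.
    ratings: dict mapping (user, item) -> rating; only rated pairs are present."""
    by_user, by_item = defaultdict(dict), defaultdict(dict)
    for (u, o), r in ratings.items():
        by_user[u][o] = r
        by_item[o][u] = r
    avg = {u: sum(d.values()) / len(d) for u, d in by_user.items()}

    # Initial resource of the target user's rated items: taste * (r / avg), cf. (3)-(4).
    f = defaultdict(float)
    for o, r in by_user[target_user].items():
        taste = 1 if r > avg[target_user] else -1
        f[o] = taste * r / avg[target_user]

    # Round 1, cf. (5)-(6): item -> user, proportional to the unbiased ratings r/avg.
    user_res = defaultdict(float)
    for o, res in f.items():
        denom = sum(r / avg[u] for u, r in by_item[o].items())
        for u, r in by_item[o].items():
            user_res[u] += res * (r / avg[u]) / denom

    # Round 2, cf. (7)-(8): user -> item, proportional to the user's raw ratings.
    item_res = defaultdict(float)
    for u, res in user_res.items():
        denom = sum(by_user[u].values())
        for o, r in by_user[u].items():
            item_res[o] += res * r / denom

    unrated = [o for o in by_item if o not in by_user[target_user]]
    return sorted(unrated, key=lambda o: item_res[o], reverse=True)[:top_n]
```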

4.4. Implementation on MapReduce

Along with the surge of data collected during system use, there is a rapid proliferation of data available for recommendation, which is widely regarded as part of the big data trend. Seeing the opportunity, we are also aware of the scalability issue involved in the massive computation. We address this challenge in the sequel by incorporating the MapReduce programming paradigm.

MapReduce distributes the computation by splitting a given task into several small subtasks and processing them simultaneously on different computers. In MapReduce, the whole calculation of a job is divided into a map phase and a reduce phase. Mappers decompose the process into several parts, which are assigned to available nodes. The input of the map phase is a set of key/value pairs. The output also takes the form of key/value pairs, but both the key and the value may differ from the input ones. The intermediate values with the same key are grouped together by the MapReduce library. The reducers sum up the values corresponding to the same key or perform other aggregation operations. The output of the reduce phase is again a set of key/value pairs.

Recall the algorithm introduced in the previous subsections. It measures how the initial resource is distributed from the item nodes to the user nodes, and how much is transferred back to the item nodes from the user nodes. In other words, it tries to find a value that measures the resource allocated from one item rated by the user to another item on which the user has not expressed her attitude. It is easy to combine all the formulae that describe the process of resource transfer into one that calculates the resource allocated from one item to another, while the initial resource configuration stays the same. We use $w_{\alpha\beta}$ to represent the resource transferred from item $o_\beta$ to item $o_\alpha$, where $o_\beta$ is rated by the target user while $o_\alpha$ is not; that is,

\[ w_{\alpha\beta} = \sum_{l=1}^{n} \frac{a_{l\beta}\, r_{l\beta}/\bar{r}_l}{\sum_{s=1}^{n} a_{s\beta}\, r_{s\beta}/\bar{r}_s} \cdot \frac{a_{l\alpha}\, r_{l\alpha}}{\sum_{t=1}^{m} a_{lt}\, r_{lt}}, \]

where the first fraction represents the transfer rate from item $o_\beta$ to user $u_l$, with the user average rating $\bar{r}_l$ eliminating the effect of user rating bias, the second fraction denotes the transfer rate from user $u_l$ to item $o_\alpha$, and $n$ and $m$ are the total numbers of users and items, respectively. $a_{l\alpha}$ and $a_{l\beta}$ denote a binary function for user rating; that is,

\[ a_{lj} = \begin{cases} 1, & \text{if } u_l \text{ has rated } o_j, \\ 0, & \text{otherwise}. \end{cases} \]

In essence, it is a classification indicator judging whether there is a rating behavior between the corresponding user and item. Consider

\[ \mathbf{f}' = W\,\mathbf{f}, \]

where $W = (w_{\alpha\beta})$ denotes the resource transfer matrix, $\mathbf{f}'$ is the final resource of the items, and $\mathbf{f}$ is the initial resource of the items for a specific user as defined in the previous subsections. The items to be recommended to the target user are selected according to the final resource in descending order.
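As a sanity check of the matrix form, a small numpy sketch (ours; names are assumptions) builds the transfer matrix $W$ from the two transfer rates and applies it to an initial resource vector:

```python
import numpy as np

def transfer_matrix(R):
    """R: n x m rating matrix of floats, with 0 meaning unrated; every user and
    every item is assumed to have at least one rating.  Returns the m x m matrix
    W whose entry W[a, b] is the resource transferred from item b to item a."""
    n, m = R.shape
    avg = np.array([row[row > 0].mean() for row in R])   # per-user average rating
    unbiased = R / avg[:, None]                          # r_lj / avg_l (0 stays 0)
    col_sum = unbiased.sum(axis=0)                       # per-item sum of unbiased ratings
    row_sum = R.sum(axis=1)                              # per-user sum of raw ratings
    W = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            # rate of item b -> user l (round 1) times rate of user l -> item a (round 2)
            W[a, b] = np.sum((unbiased[:, b] / col_sum[b]) * (R[:, a] / row_sum))
    return W

# Final resource vector for a target user: f_final = transfer_matrix(R) @ f_initial
```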

In the proposed implementation, we use four jobs for our purpose.

JOB 1. The first job prepares a list for each user which contains all the rated items as well as the corresponding ratings; it also calculates the sum and the average of each user’s ratings, denoted $s_l$ and $\bar{r}_l$ for short. The input is the rating record containing the user id, the item id, and the rating, which is treated as the value of the key/value pair. We use $u_l$, $o_j$, and $r_{lj}$ to represent the user id, the id of item $o_j$, and the rating from user $u_l$ on item $o_j$, respectively. The default key is the offset of the row containing the rating record. These raw records are distributed to all mappers, and every mapper reorganizes a record into the form of a user id and an item-id/rating pair. The key becomes the user id after the map phase, while the value turns into the item id coupled with the rating. Afterward, the intermediate results are gathered according to the key. Reducers obtain all item-rating pairs of the same user and compute the sum along with the average of these ratings. Here the key is still the user id, but the value turns into the item-rating list together with the sum and the average for this user. The Map and Reduce functions are as follows:
Map: $\langle \mathrm{offset}, (u_l, o_j, r_{lj})\rangle \rightarrow \langle u_l, (o_j, r_{lj})\rangle$;
Reduce: $\langle u_l, \{(o_j, r_{lj})\}\rangle \rightarrow \langle u_l, (\{(o_j, r_{lj})\}, s_l, \bar{r}_l)\rangle$.
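A Hadoop-Streaming-style sketch of JOB 1 in Python (our own illustration; the record format and field delimiter are assumptions): the mapper re-keys raw records by user id, and the reducer emits the item-rating list together with the rating sum and average.

```python
from itertools import groupby

def job1_map(lines):
    """Input: raw rating records 'user_id,item_id,rating' (the file-offset key
    supplied by Hadoop is implicit).  Output: (user_id, 'item_id:rating')."""
    for line in lines:
        user, item, rating = line.strip().split(",")
        yield user, f"{item}:{rating}"

def job1_reduce(pairs):
    """Input: (user_id, 'item_id:rating') pairs grouped by user_id.
    Output: (user_id, (item-rating list, rating sum, rating average))."""
    for user, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        entries = [v for _, v in group]
        ratings = [float(e.split(":")[1]) for e in entries]
        total = sum(ratings)
        yield user, (entries, total, total / len(ratings))
```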

JOB 2. The second job reorganizes the records according to the item id. For every item, the ratio of the rating by each user to the average rating of this user is computed, which is named the unbiased user rating and denoted $\hat{r}_{lj} = r_{lj}/\bar{r}_l$ for short; then the sum of the unbiased ratings from all users is calculated. In the map phase, the records are re-keyed by item id and the unbiased user rating is calculated. The key of the output is the item id, and the value is the user id paired with the unbiased user rating. Reducers gather the intermediate results and output the unbiased rating sum for item $o_j$, denoted $S_j$ for short. The item id along with the sum of unbiased ratings for this item forms the key, while the value is a string that concatenates all the users who rated this item together with their unbiased ratings. All the results are stored for the following jobs. The Map and Reduce functions are as follows:
Map: $\langle u_l, (\{(o_j, r_{lj})\}, s_l, \bar{r}_l)\rangle \rightarrow \langle o_j, (u_l, \hat{r}_{lj})\rangle$;
Reduce: $\langle o_j, \{(u_l, \hat{r}_{lj})\}\rangle \rightarrow \langle (o_j, S_j), \{(u_l, \hat{r}_{lj})\}\rangle$.
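Continuing the sketch for JOB 2 (again illustrative, with an assumed field layout): the mapper re-keys the records by item and computes the unbiased rating $\hat{r}_{lj} = r_{lj}/\bar{r}_l$, and the reducer attaches the per-item sum of unbiased ratings.

```python
from itertools import groupby

def job2_map(user_records):
    """Input: (user_id, (item-rating list, rating_sum, rating_avg)) from JOB 1.
    Output: (item_id, (user_id, unbiased_rating))."""
    for user, (entries, _total, avg) in user_records:
        for entry in entries:
            item, rating = entry.split(":")
            yield item, (user, float(rating) / avg)

def job2_reduce(pairs):
    """Group by item and output ((item_id, unbiased_rating_sum), [(user, uur), ...])."""
    for item, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        users = [v for _, v in group]
        yield (item, sum(uur for _, uur in users)), users
```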

JOB 3. The third job uses the outputs of the first two jobs to calculate the value of every cell in the resource allocation matrix. Noticing that the resource allocated from item $o_\beta$ to item $o_\alpha$ is calculated by summing up the quantity contributed by every user through whom the resource passes from $o_\beta$ to $o_\alpha$, we assign the mappers to compute this quantity for every user and every combination of items $o_\alpha$ and $o_\beta$. The key turns into the pair of the two item ids; the value is the resource quantity contributed by each user, denoted $q_l$ for short. The reducers sum these intermediate values to get the final allocated quantity of resource between the two items. The key of the reduce phase is the pair of item ids, and the value is the resource allocation $w_{\alpha\beta}$ from item $o_\beta$ to item $o_\alpha$. The Map and Reduce functions are as follows:
Map: $\langle (o_\beta, S_\beta), \{(u_l, \hat{r}_{l\beta})\}\rangle \rightarrow \langle (o_\alpha, o_\beta), q_l\rangle$;
Reduce: $\langle (o_\alpha, o_\beta), \{q_l\}\rangle \rightarrow \langle (o_\alpha, o_\beta), w_{\alpha\beta}\rangle$.
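A sketch of JOB 3 (illustrative; it assumes the per-user profiles produced by JOB 1 are made available to every mapper, for example via the distributed cache, which the paper does not spell out): the mapper emits one contribution per user for each item pair, and the reducer sums these contributions into the corresponding matrix cell.

```python
from itertools import groupby

def job3_map(item_records, user_profiles):
    """item_records: ((item_b, uur_sum), [(user, uur), ...]) from JOB 2.
    user_profiles: {user: (dict item -> rating, rating_sum)} built from JOB 1.
    Output: ((item_a, item_b), per-user resource quantity)."""
    for (item_b, uur_sum), users in item_records:
        for user, uur in users:
            item_ratings, rating_sum = user_profiles[user]
            for item_a, rating in item_ratings.items():
                if item_a != item_b:
                    yield (item_a, item_b), (uur / uur_sum) * (rating / rating_sum)

def job3_reduce(pairs):
    """Sum the per-user quantities into the matrix cell w[item_a][item_b]."""
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield key, sum(v for _, v in group)
```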

JOB 4. The last job first obtains the initial resource of every user using the output of the first job and then calculates the final resource (denoted $fr$ for short) allocated to the items by multiplying the resource allocation matrix with the initial resource; the computation for the items which have already been rated by the current user is skipped. Besides, the items are ordered by the final resource in descending order, and the top-N items of the ranking are recommended to the user. In the map phase, the key of the output is the pair of the user id and the destination item id, and the value is the temporary resource (denoted $tr$ for short) transferred from a source item. The key of the output in the reduce phase is again the user id and the item id, and the value is the final resource of this item for the user. The Map and Reduce functions are as follows:
Map: $\langle (o_\alpha, o_\beta), w_{\alpha\beta}\rangle \rightarrow \langle (u_l, o_\alpha), tr\rangle$;
Reduce: $\langle (u_l, o_\alpha), \{tr\}\rangle \rightarrow \langle (u_l, o_\alpha), fr\rangle$.
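Finally, a sketch of JOB 4 (illustrative names): the mapper multiplies each matrix cell by a user’s initial resource for the source item and keys the partial product by the user and the destination item; the reducer sums these temporary resources and keeps the top-N unrated items per user.

```python
from itertools import groupby

def job4_map(matrix_cells, initial_resource):
    """matrix_cells: ((item_a, item_b), w_ab) from JOB 3.
    initial_resource: {user: {item_b: f0}} built from the output of JOB 1.
    Output: ((user, item_a), temporary resource transferred from item_b)."""
    for (item_a, item_b), w_ab in matrix_cells:
        for user, resources in initial_resource.items():
            if item_b in resources and item_a not in resources:  # skip already rated items
                yield (user, item_a), w_ab * resources[item_b]

def job4_reduce(pairs, top_n=10):
    """Sum the temporary resources per (user, item) and rank the items per user."""
    totals = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        totals[key] = sum(v for _, v in group)
    ranked = {}
    for (user, item), fr in totals.items():
        ranked.setdefault(user, []).append((fr, item))
    return {u: [i for _, i in sorted(lst, reverse=True)[:top_n]]
            for u, lst in ranked.items()}
```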

It is easy to verify that the four MapReduce jobs correctly compute the results, inheriting the correctness of the implementation on a single machine. We evaluate the effect of the MapReduce-based implementation in Section 5.4.

5. Experiments

This section presents the experimental results and analyses.

5.1. Experimental Setup

We run two series of experiments on distinct data sets for different purposes.

To evaluate the accuracy improvement, we conduct experiments on the real dataset MovieLens (http://www.grouplens.org/), which is one of the most famous datasets for evaluating personal recommendation. It consists of 943 users, 1,682 movies, and about 100,000 rating records. Users rate movies according to their interest with discrete numerals from 1 to 5. The data is cleaned by removing users who rated fewer than 50 movies. The sparsity of the dataset is about 94.6%, meaning that the ratings by users on movies are rather insufficient. Further, we select the records whose ratings exceed the average rating of the corresponding user as the primary data set, which contains about 54,800 ratings. Then, we randomly pick 20% of the ratings in the primary data set to construct the test set, with a scale of 10,960 records; this is done five times for cross-validation. The remaining ratings in the primary data set, together with the records not included in the primary data set, form our training set, with a scale of 83,640 rating records.
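A short sketch (ours) of the described split, assuming the ratings are given as (user, item, rating) triples; the cleaning step that removes users with fewer than 50 ratings is omitted, and the sampling is repeated with different seeds for the five rounds of cross-validation.

```python
import random

def split_movielens(records, seed=0):
    """records: list of (user, item, rating) triples.  Ratings above a user's
    average form the primary set; a random 20% of it becomes the test set, and
    everything else goes into the training set."""
    random.seed(seed)
    by_user = {}
    for u, _o, r in records:
        by_user.setdefault(u, []).append(r)
    avg = {u: sum(rs) / len(rs) for u, rs in by_user.items()}

    primary = [rec for rec in records if rec[2] > avg[rec[0]]]
    rest = [rec for rec in records if rec[2] <= avg[rec[0]]]
    test_idx = set(random.sample(range(len(primary)), int(0.2 * len(primary))))
    test_set = [primary[i] for i in test_idx]
    train_set = [primary[i] for i in range(len(primary)) if i not in test_idx] + rest
    return train_set, test_set
```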

For measuring the accuracy of top-N recommendation, hit ratio and average rank score are two popular evaluation metrics for comparing different algorithms (a small computational sketch follows this list).
(i) Hit ratio counts the number of test-set movies that occur in the recommendation lists and divides it by the total length of the recommendation lists; that is,
\[ \mathrm{HR} = \frac{\sum_{l=1}^{n}\sum_{i=1}^{N}\sum_{j=1}^{T_l}\delta(p_{li}, q_{lj})}{n \times N}, \]
where $n$ denotes the number of users, $N$ denotes the number of items recommended to each user, $T_l$ denotes the number of test-set items of user $u_l$, and $\delta$ is an indicator function that equals 1 when its two arguments coincide and 0 otherwise, where $p_{li}$ is the $i$th item in the recommendation list of user $u_l$ and $q_{lj}$ is the $j$th test-set item of user $u_l$.
(ii) Average rank score measures the position of each test-set movie in the ranked sequence of all the user’s unrated movies, and a smaller score means a better recommendation result. The average rank score is the average of these scores over all records in the test set:
\[ \langle r \rangle = \frac{1}{|T|}\sum_{(u_l, o_j)\in T} \frac{\mathrm{pos}_{lj}}{L_l}, \]
where $T$ is the test set, $\mathrm{pos}_{lj}$ is the location of item $o_j$ in the ranked sequence of the unrated items of user $u_l$, and $L_l$ is the number of items unrated by user $u_l$ in that sequence.
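Both metrics can be computed directly from the recommendation lists and the test set; the following sketch (ours) follows the formulas above.

```python
def hit_ratio(rec_lists, test_sets):
    """rec_lists / test_sets: dicts mapping each user to a list of items.
    The number of hits is divided by the total length of all recommendation lists."""
    hits = sum(len(set(rec_lists[u]) & set(test_sets.get(u, []))) for u in rec_lists)
    total = sum(len(rec_lists[u]) for u in rec_lists)
    return hits / total

def average_rank_score(rank_of_unrated, test_sets):
    """rank_of_unrated: dict user -> {item: 1-based position in the ranked list of
    the user's unrated items}; each test record scores position / number of unrated items."""
    scores = []
    for u, items in test_sets.items():
        n_unrated = len(rank_of_unrated[u])
        scores.extend(rank_of_unrated[u][o] / n_unrated
                      for o in items if o in rank_of_unrated[u])
    return sum(scores) / len(scores)
```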

To validate the efficiency of the implementation on MapReduce, we use the Netflix data set, which contains about 480,189 users, 17,770 items, and 100,480,507 rating records. We compare the efficiency of the proposed algorithm implemented on a single computer and on a cluster within the MapReduce framework on the Hadoop platform. The Hadoop cluster consists of 5 computers, with one name node and four data nodes. The configurations of all computers are the same: Intel dual-core 2.73 GHz CPU, 4 GB memory, 500 GB hard disk, and the Ubuntu operating system. We vary $c$, the number of computers employed for the parallel computing, from 3 to 5 step by step to observe the runtime trends.

The time cost of the single computer is marked as $T_1$. The time consumed by the cluster containing 3, 4, and 5 computers is denoted as $T_3$, $T_4$, and $T_5$, respectively. We could expect the computation time to be about $T_1/c$ if we ignored the communication between these computers. However, it is impossible not to take the communication time into account. In fact, the communication cost is sensitive to the numbers of both items and users, so the practical running time is higher than expected.

To show the comparison results more clearly, we use the ratio of the time consumed by the single-computer version to that consumed by the implementation on MapReduce, denoted as

\[ S_c = \frac{T_1}{T_c}, \]

where $c$ can be 3, 4, or 5, depending on the number of data nodes employed by the MapReduce-based implementation. The bigger the ratio is, the more efficient the implementation on MapReduce is.

5.2. Comparing with the Baseline Algorithm

In this set of experiments, we carry out the comparison using the hit ratio and average rank score metrics.

First, we evaluate the effects of the Average-Rating Criterion and Taste with Dislike (labeled “ARC” and “TD”, resp.) in the user taste model, and of the Rating-Aware Initial Configuration and Rating-Aware Transfer (labeled “RAIC” and “RAT”, resp.) in the resource allocation, under the hit ratio metric.

As Figure 3 shows, the user-specific average rating criterion outperforms the median criterion when specifying the taste of users. The median method ignores the difference in rating habits between users, while the average rating method judges whether a user likes an item according to a more specific and personal criterion.

When the tastes of users are specified as both dislike and like, a higher hit ratio is achieved than when only like information is used as user taste. The dislike information, as negative resource, spreads over the bipartite network and rectifies the single-taste resource allocation. A comprehensive taste considering both likes and dislikes of users yields a more accurate recommendation result.

The initial resource configuration based on the comprehensive user taste model sets the values of items rated by the current user to 1 or −1, representing like or dislike. Considering that the extents to which a user likes or dislikes these items can still differ substantially, the ratings are utilized to weight the initial resource; this is the rating-aware initial configuration. We can tell from Figure 3 that the rating-aware initial resource configuration achieves a better result.

The degree-based distribution considers only the connectivity between user and item nodes, without weighting the individual links. The rating-aware transfer uses the ratio of a user’s rating on an item to the sum of ratings by this user on all items, or to the sum of ratings by all users on the given item, as the importance of the edge between this user and the item. Since the rating-aware transfer weights the degree-based transfer by the importance of the links, its recommendation accuracy is higher.

The proposed algorithm emphasizes the taste of dislike as much as the taste of like, both specified according to the average rating of each user; the resource allocation and distribution are adjusted to take the rating ratios and the transfer rates of the edges into account. Compared with the uniform algorithm, also known as network-based inference (NBI), our algorithm performs notably better, with a higher hit ratio.

Then we compare NBI and our proposed algorithm under the average rank score metric. Avg-rating, Com-taste, RA-initial, and RA-transfer denote the average-rating-based coarse-graining, the comprehensive taste with both like and dislike, the rating-aware initial configuration, and the rating-aware transfer of resource, respectively. Each label indicates the algorithm that adopts the corresponding improvement together with the improvements preceding it. $\langle r \rangle$ denotes the average rank score.

As Figure 4 shows, the horizontal axis is the total number of items in the test set used to validate the recommendation results (the sequence attribute is temporarily ignored), and the vertical axis is the rank score. We can see that the rank scores of the five curves increase exponentially as the length of the rank grows, especially when the number of items is around 6000. Averaged over the records in the test set, the average rank scores of the five implementations are shown in the legend, respectively. It is easy to see that all the improvements are effective, and the introduction of dislike as a taste brings the biggest improvement. Our proposed algorithm outperforms NBI by 16.7% under the average rank score metric, which is in accordance with the conclusion drawn using the hit ratio metric.

5.3. Comparing with Three Other Algorithms

We compare the proposed algorithm with an algorithm based on mass diffusion and two popular variants of bipartite network projection to further validate its effectiveness with both the hit ratio and average rank score metrics.
(i) IMD [25] is an improvement of mass diffusion that takes into account the average degree of user nodes to weight the initial resource distribution.
(ii) E-NBI [26] is a variant of uniform NBI that depresses the impact of high-degree nodes with a negative exponential function to improve accuracy.
(iii) INBI [27] combines a weighted bipartite network with a tunable parameter for depressing high-degree nodes for top-N recommendation.

In Figure 5, the left part shows the comparison under the hit ratio metric: as the recommendation list length varies from 10 to 50, the proposed algorithm achieves the best result throughout. The right part shows the comparison under the average rank score metric: the proposed algorithm outperforms the other three algorithms by 13.0%, 11.7%, and 3.9% on average, respectively.

5.4. Evaluating the Implementation on MapReduce

First, we fix the number of items at 17,770 while changing the number of users from 10,000 to 100,000 with a step length of 10,000. Then, we fix the number of users at 480,000 and change the number of items from 2,000 to 16,000 at an interval of 2,000. All these samples are randomly chosen. The results of the two sets of experiments are shown in Figure 6.

With the increase in the number of users or items, the computation costs more time for both the implementation on a single computer and the one on the Hadoop cluster; however, the former grows far faster than the latter, so the ratio of time consumption increases quickly, as can be seen from Figure 6. Besides, when the number of data nodes increases, the efficiency becomes higher. All the curves nearly stop rising and converge at different sizes of users or items, because the finite number of data nodes restricts further improvement of the time performance. To process even larger datasets, more cluster nodes under the MapReduce framework should be employed. Nevertheless, our implementation on Hadoop is effective, and the parallel computing in the MapReduce framework improves the efficiency of our proposed algorithm.

6. Conclusion

In this paper, we have proposed a negative-aware and rating-integrated personal recommendation algorithm based on bipartite network projection. It takes advantage of the information implied by user ratings to differentiate the dislike cases of user tastes and to weight the resource distribution process. Better empirical results on real data are obtained regarding both hit ratio and average rank score. To cope with large amounts of data, we implement the proposed algorithm on MapReduce, which is confirmed to be more efficient by our experiments. As future work, we plan to conduct a more comprehensive comparison between the proposed algorithm and other state-of-the-art recommendation algorithms. Another possibility is to devise more precise models for depicting user behaviors.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (61302144 and 61303062). The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.