Abstract

As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.

1. Introduction

In everyday life, people rely on recommendations from other people by spoken words, reference letters, news reports from news media, general surveys, travel guides, and so forth. Recommender systems assist and augment this natural social process to help people sift through available books, articles, webpages, movies, music, restaurants, jokes, grocery products, and so forth to find the most interesting and valuable information for them. The developers of one of the first recommender systems, Tapestry [1] (other earlier recommendation systems include rule-based recommenders and user-customization), coined the phrase “collaborative filtering (CF),” which has been widely adopted regardless of the facts that recommenders may not explicitly collaborate with recipients and recommendations may suggest particularly interesting items, in addition to indicating those that should be filtered out [2]. The fundamental assumption of CF is that if users $X$ and $Y$ rate $n$ items similarly, or have similar behaviors (e.g., buying, watching, listening), they will rate or act on other items similarly [3].

CF techniques use a database of preferences for items by users to predict additional topics or products a new user might like. In a typical CF scenario, there is a list of $m$ users $\{u_1, u_2, \ldots, u_m\}$ and a list of $n$ items $\{i_1, i_2, \ldots, i_n\}$, and each user, $u_i$, has a list of items, $I_{u_i}$, which the user has rated, or about which their preferences have been inferred through their behaviors. The ratings can either be explicit indications, such as scores on a 1–5 scale, or implicit indications, such as purchases or click-throughs [4]. For example, we can convert the list of people and the movies they like or dislike (Table 1(a)) to a user-item ratings matrix (Table 1(b)), in which Tony is the active user that we want to make recommendations for. There are missing values in the matrix where users did not give their preferences for certain items.

There are many challenges for collaborative filtering tasks (Section 2). CF algorithms are required to have the ability to deal with highly sparse data, to scale with the increasing numbers of users and items, to make satisfactory recommendations in a short time period, and to deal with other problems like synonymy (the tendency of the same or similar items to have different names), shilling attacks, data noise, and privacy protection problems.

Early generation collaborative filtering systems, such as GroupLens [5], use the user rating data to calculate the similarity or weight between users or items and make predictions or recommendations according to those calculated similarity values. The so-called memory-based CF methods (Section 3) are notably deployed into commercial systems such as http://www.amazon.com/ (see an example in Figure 1) and Barnes and Noble, because they are easy-to-implement and highly effective [6, 7]. Customization of CF systems for each user decreases the search effort for users. It also promises a greater customer loyalty, higher sales, more advertising revenues, and the benefit of targeted promotions [8].

However, there are several limitations of the memory-based CF techniques, such as the fact that the similarity values are based on common items and are therefore unreliable when data are sparse and the common items are few. To achieve better prediction performance and overcome shortcomings of memory-based CF algorithms, model-based CF approaches have been investigated. Model-based CF techniques (Section 4) use the pure rating data to estimate or learn a model to make predictions [9]. The model can be a data mining or machine learning algorithm. Well-known model-based CF techniques include Bayesian belief net (BN) CF models [9–11], clustering CF models [12, 13], and latent semantic CF models [7]. An MDP (Markov decision process)-based CF system [14] produces a much higher profit than a system that has not deployed the recommender.

Besides collaborative filtering, content-based filtering is another important class of recommender systems. Content-based recommender systems make recommendations by analyzing the content of textual information and finding regularities in the content. The major difference between CF and content-based recommender systems is that CF only uses the user-item ratings data to make predictions and recommendations, while content-based recommender systems rely on the features of users and items for predictions [15]. Both content-based recommender systems and CF systems have limitations. While CF systems do not explicitly incorporate feature information, content-based systems do not necessarily incorporate the information in preference similarity across individuals [8].

Hybrid CF techniques, such as the content-boosted CF algorithm [16] and Personality Diagnosis (PD) [17], combine CF and content-based techniques, hoping to avoid the limitations of either approach and thereby improve recommendation performance (Section 5).

A brief overview of CF techniques is depicted in Table 2.

To evaluate CF algorithms (Section 6), we need to use metrics according to the types of CF application. Instead of classification error, the most widely used evaluation metric for prediction performance of CF is Mean Absolute Error (MAE). Precision and recall are widely used metrics for ranked lists of returned items in information retrieval research. ROC sensitivity is often used as a decision support accuracy metric.

As drawing convincing conclusions from artificial data is risky, data from live experiments are more desirable for CF research. The commonly used CF databases are MovieLens [18], Jester [19], and Netflix prize data [20]. In Section 7, we give the conclusion and discussion of this work.

2. Characteristics and Challenges of Collaborative Filtering

E-commerce recommendation algorithms often operate in a challenging environment, especially for large online shopping companies like eBay and Amazon. Usually, a recommender system providing fast and accurate recommendations will attract the interest of customers and bring benefits to companies. For CF systems, producing high-quality predictions or recommendations depends on how well they address the challenges, which are characteristics of CF tasks as well.

2.1. Data Sparsity

In practice, many commercial recommender systems are used to evaluate very large product sets. The user-item matrix used for collaborative filtering will thus be extremely sparse and the performances of the predictions or recommendations of the CF systems are challenged.

The data sparsity challenge appears in several situations. Specifically, the cold start problem occurs when a new user or item has just entered the system; it is difficult to find similar ones because there is not enough information (in some literature, the cold start problem is also called the new user problem or new item problem [21, 22]). New items cannot be recommended until some users rate them, and new users are unlikely to be given good recommendations because of the lack of their rating or purchase history. Coverage can be defined as the percentage of items that the algorithm could provide recommendations for. The reduced coverage problem occurs when the number of users' ratings is very small compared with the large number of items in the system, and the recommender system may be unable to generate recommendations for many of them. Neighbor transitivity refers to a problem with sparse databases, in which users with similar tastes may not be identified as such if they have not both rated any of the same items. This reduces the effectiveness of a recommendation system that relies on comparing users in pairs to generate predictions.

To alleviate the data sparsity problem, many approaches have been proposed. Dimensionality reduction techniques, such as Singular Value Decomposition (SVD) [23], remove unrepresentative or insignificant users or items to reduce the dimensionality of the user-item matrix directly. The patented Latent Semantic Indexing (LSI) used in information retrieval is based on SVD [24, 25], in which similarity between users is determined by the representation of the users in the reduced space. Goldberg et al. [3] developed Eigentaste, which applies Principal Component Analysis (PCA), a closely related factor analysis technique first described by Pearson in 1901 [26], to reduce dimensionality. However, when certain users or items are discarded, useful information for recommendations related to them may get lost and recommendation quality may be degraded [6, 27].
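
As an informal illustration of SVD-based dimensionality reduction, the following sketch (using NumPy) truncates the SVD of a small, made-up user-item matrix to $k$ latent dimensions, compares users in the reduced space, and forms a low-rank reconstruction that can serve as a smoothed prediction matrix. The matrix values and the choice $k = 2$ are illustrative only.

```python
import numpy as np

# Toy user-item matrix (rows: users, columns: items); 0 marks a missing rating.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

# Full SVD, then keep only the k strongest singular values/vectors.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Each user is now represented in the reduced k-dimensional latent space.
user_factors = U_k * s_k            # shape: (num_users, k)

def user_similarity(a, b):
    """Cosine similarity between two users in the reduced space."""
    va, vb = user_factors[a], user_factors[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# The low-rank reconstruction can act as a smoothed, densified rating matrix.
R_hat = U_k @ np.diag(s_k) @ Vt_k
print(user_similarity(0, 1))
print(np.round(R_hat, 2))
```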

Hybrid CF algorithms, such as the content-boosted CF algorithm [16], are found helpful in addressing the sparsity problem, as external content information can be used to produce predictions for new users or new items. Ziegler et al. [28] proposed a hybrid collaborative filtering approach that exploits bulk taxonomic information designed for exact product classification to address the data sparsity problem of CF recommendations, based on the generation of profiles via inference of super-topic scores and topic diversification [28]. Schein et al. proposed the aspect model latent variable method for cold start recommendation, which combines both collaborative and content information in model fitting [29]. Kim and Li proposed a probabilistic model to address the cold start problem, in which items are classified into groups and predictions are made for users considering the Gaussian distribution of user ratings [30].

Model-based CF algorithms, such as TAN-ELR (tree augmented naïve Bayes optimized by extended logistic regression) [11, 31], address the sparsity problem by providing more accurate predictions for sparse data. Some new model-based CF techniques that tackle the sparsity problem include the association retrieval technique, which applies an associative retrieval framework and related spreading activation algorithms to explore transitive associations among users through their rating and purchase history [32]; Maximum margin matrix factorizations (MMMF), a convex, infinite dimensional alternative to low-rank approximations and standard factor models [33, 34]; ensembles of MMMF [35]; multiple imputation-based CF approaches [36]; and imputation-boosted CF algorithms [37].

2.2. Scalability

When the numbers of existing users and items grow tremendously, traditional CF algorithms will suffer serious scalability problems, with computational resources going beyond practical or acceptable levels. For example, with tens of millions of customers ($M$) and millions of distinct catalog items ($N$), a CF algorithm whose cost grows with $M \times N$ is already too expensive. As well, many systems need to react immediately to online requirements and make recommendations for all users regardless of their purchase and rating history, which demands high scalability of a CF system [6].

Dimensionality reduction techniques such as SVD can deal with the scalability problem and quickly produce good quality recommendations, but they have to undergo expensive matrix factorization steps. An incremental SVD CF algorithm [38] precomputes the SVD decomposition using existing users. When a new set of ratings is added to the database, the algorithm uses the folding-in projection technique [25, 39] to build an incremental system without recomputing the low-dimensional model from scratch, which makes the recommender system highly scalable.
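
A minimal sketch of the folding-in idea, assuming the truncated factors `Vt_k` and `s_k` from the previous snippet are already available: a new user's rating vector is projected into the existing $k$-dimensional space as $r \cdot V_k \cdot S_k^{-1}$, so the low-dimensional model does not have to be refactored from scratch.

```python
import numpy as np

def fold_in_user(new_ratings, Vt_k, s_k):
    """Project a new user's rating vector into the existing k-dimensional latent
    space without recomputing the SVD (folding-in); missing ratings are zeros."""
    # new_ratings: shape (num_items,); Vt_k: (k, num_items); s_k: (k,)
    return (new_ratings @ Vt_k.T) / s_k   # shape: (k,)

# Hypothetical usage, assuming Vt_k and s_k come from an earlier truncated SVD:
# new_user = np.array([4, 0, 0, 5], dtype=float)
# new_user_factors = fold_in_user(new_user, Vt_k, s_k)
```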

Memory-based CF algorithms, such as the item-based Pearson correlation CF algorithm, can achieve satisfactory scalability. Instead of calculating similarities between all pairs of items, item-based Pearson CF calculates the similarity only between pairs of items co-rated by a user [6, 40]. A simple Bayesian CF algorithm tackles the scalability problem by making predictions based on observed ratings [41]. Model-based CF algorithms, such as clustering CF algorithms, address the scalability problem by seeking users for recommendation within smaller and highly similar clusters instead of the entire database [13, 42–44], but there are tradeoffs between scalability and prediction performance.

2.3. Synonymy

Synonymy refers to the tendency of a number of the same or very similar items to have different names or entries. Most recommender systems are unable to discover this latent association and thus treat these products differently. For example, the seemingly different items “children movie” and “children film” are actually the same item, but memory-based CF systems would find no match between them to compute similarity. Indeed, the degree of variability in descriptive term usage is greater than commonly suspected. The prevalence of synonyms decreases the recommendation performance of CF systems.

Previous attempts to solve the synonymy problem depended on intellectual or automatic term expansion, or the construction of a thesaurus. The drawback for fully automatic methods is that some added terms may have different meanings from intended, thus leading to rapid degradation of recommendation performance [45].

The SVD techniques, particularly the Latent Semantic Indexing (LSI) method, are capable of dealing with the synonymy problem. SVD takes a large matrix of term-document association data and constructs a semantic space where terms and documents that are closely associated are placed close to each other. SVD allows the arrangement of the space to reflect the major associative patterns in the data and ignore the smaller, less important ones. The performance of LSI in addressing the synonymy problem is impressive at higher recall levels where precision is ordinarily quite low, thus representing large proportional improvements. However, the performance of the LSI method at the lowest levels of recall is poor [25].

The LSI method gives only a partial solution to the polysemy problem, which refers to the fact that most words have more than one distinct meaning [25].

2.4. Gray Sheep

Gray sheep refers to the users whose opinions do not consistently agree or disagree with any group of people and thus do not benefit from collaborative filtering [46]. Black sheep are the opposite group whose idiosyncratic tastes make recommendations nearly impossible. Although this is a failure of the recommender system, non-electronic recommenders also have great problems in these cases, so black sheep is an acceptable failure [47].

Claypool et al. provided a hybrid approach combining content-based and CF recommendations by basing a prediction on a weighted average of the content-based prediction and the CF prediction. In that approach, the weights of the content-based and CF predictions are determined on a per-user basis, allowing the system to determine the optimal mix of content-based and CF recommendation for each user, helping to solve the gray sheep problem [46].

2.5. Shilling Attacks

In cases where anyone can provide recommendations, people may give tons of positive recommendations for their own materials and negative recommendations for their competitors. It is desirable for CF systems to introduce precautions that discourage this kind of phenomenon [2].

Recently, shilling attack models for collaborative filtering systems have been identified and their effectiveness has been studied. Lam and Riedl found that the item-based CF algorithm was much less affected by the attacks than the user-based CF algorithm, and they suggest that new ways must be used to evaluate and detect shilling attacks on recommender systems [48]. Attack models for shilling item-based CF systems have been examined by Mobasher et al., and alternative CF systems such as hybrid CF systems and model-based CF systems were believed to have the ability to provide partial solutions to the bias injection problem [49]. O’Mahony et al. contributed to solving the shilling attacks problem by analyzing robustness, a recommender system’s resilience to potentially malicious perturbations in the customer/product rating matrix [50].

Bell and Koren [51] used a comprehensive approach to the shilling attacks problem by removing global effects in the data normalization stage of neighbor-based CF, and working with residuals of global effects to select neighbors. They achieved improved CF performance on the Netflix [20] data.

2.6. Other Challenges

As people may not want their habits or views widely known, CF systems also raise concerns about personal privacy. Miller et al. [4] and Canny [52] found ways to protect users’ privacy for CF recommendation tasks.

Increased noise (or sabotage) is another challenge, as the user population becomes more diverse. Ensembles of maximum margin matrix factorizations [35] and instance selection techniques [53] are found useful to address the noise problems of CF tasks. As Dempster-Shafer (DS) theory [54, 55] and imputation techniques [56] have been successfully applied to accommodate imperfect and noisy data for knowledge representation and classification tasks, they are also potentially useful to deal with the noise problem of CF tasks.

Explainability is another important aspect of recommender systems. An intuitive reasoning such as “you will like this book because you liked those books” will be appealing and beneficial to readers, regardless of the accuracy of the explanations [57].

2.7. The Netflix Prize Challenge

Launched in October 2006, the Netflix prize challenge [20] attracted thousands of researchers to compete in the million-dollar-prize race for the most improved performance in movie recommendation. The challenge features a large-scale industrial dataset (with 480,000 users and 17,770 movies) and a rigorous performance metric, RMSE (see the detailed description in Section 6).

As of July 2009, the leaderboard of the Netflix prize competition is shown in Table 3, in which the leading team, “BellKor’s Pragmatic Chaos” (with a 10.05% improvement in RMSE over Cinematch, the Netflix movie recommendation system), based their solution on a merged model of latent factor and neighborhood models [58]. Some interesting research papers on the Netflix prize challenge can be found in the 2008 KDD Netflix Workshop (http://netflixkddworkshop2008.info/).

3. Memory-Based Collaborative Filtering Techniques

Memory-based CF algorithms use the entire or a sample of the user-item database to generate a prediction. Every user is part of a group of people with similar interests. By identifying the so-called neighbors of a new user (or active user), a prediction of preferences on new items for him or her can be produced.

The neighborhood-based CF algorithm, a prevalent memory-based CF algorithm, uses the following steps: calculate the similarity or weight, $w_{i,j}$, which reflects distance, correlation, or weight, between two users or two items, $i$ and $j$; produce a prediction for the active user by taking the weighted average of all the ratings of the user or item on a certain item or user, or using a simple weighted average [40]. When the task is to generate a top-$N$ recommendation, we need to find the $k$ most similar users or items (nearest neighbors) after computing the similarities, and then aggregate the neighbors to get the top-$N$ most frequent items as the recommendation.

3.1. Similarity Computation

Similarity computation between items or users is a critical step in memory-based collaborative filtering algorithms. For item-based CF algorithms, the basic idea of the similarity computation between item $i$ and item $j$ is first to work on the users who have rated both of these items and then to apply a similarity computation to determine the similarity, $w_{i,j}$, between the two co-rated items of the users [40]. For a user-based CF algorithm, we first calculate the similarity, $w_{u,v}$, between the users $u$ and $v$ who have both rated the same items.

There are many different methods to compute similarity or weight between users or items.

3.1.1. Correlation-Based Similarity

In this case, similarity between two users $u$ and $v$, or between two items $i$ and $j$, is measured by computing the Pearson correlation or other correlation-based similarities.

Pearson correlation measures the extent to which two variables linearly relate with each other [5]. For the user-based algorithm, the Pearson correlation between users $a$ and $u$ is

$$w_{a,u} = \frac{\sum_{i \in I}(r_{a,i} - \bar{r}_a)(r_{u,i} - \bar{r}_u)}{\sqrt{\sum_{i \in I}(r_{a,i} - \bar{r}_a)^2}\,\sqrt{\sum_{i \in I}(r_{u,i} - \bar{r}_u)^2}},$$

where the summations are over the items $i \in I$ that both users $a$ and $u$ have rated, and $\bar{r}_a$ is the average rating of the co-rated items of the $a$th user. The same formula can be applied to the example in Table 4 to obtain the similarity between the active user and each of the other users.

For the item-based algorithm, let $U$ denote the set of users who rated both items $i$ and $j$; then the Pearson correlation will be

$$w_{i,j} = \frac{\sum_{u \in U}(r_{u,i} - \bar{r}_i)(r_{u,j} - \bar{r}_j)}{\sqrt{\sum_{u \in U}(r_{u,i} - \bar{r}_i)^2}\,\sqrt{\sum_{u \in U}(r_{u,j} - \bar{r}_j)^2}},$$

where $r_{u,i}$ is the rating of user $u$ on item $i$ and $\bar{r}_i$ is the average rating of the $i$th item by those users; see Figure 2 [40].
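
The following is a small illustrative implementation of the Pearson similarity above, assuming ratings are stored in per-user dictionaries (user → {item: rating}); only co-rated items enter the computation, matching the formula.

```python
from math import sqrt

def pearson_similarity(ratings_a, ratings_b):
    """Pearson correlation between two rating dictionaries over co-rated items."""
    common = set(ratings_a) & set(ratings_b)
    if len(common) < 2:
        return 0.0  # too few co-rated items for a meaningful correlation
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    num = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    den_a = sqrt(sum((ratings_a[i] - mean_a) ** 2 for i in common))
    den_b = sqrt(sum((ratings_b[i] - mean_b) ** 2 for i in common))
    if den_a == 0 or den_b == 0:
        return 0.0
    return num / (den_a * den_b)

# Hypothetical example: two users with three co-rated items.
alice = {"item1": 5, "item2": 3, "item3": 4}
bob   = {"item1": 4, "item2": 1, "item3": 5}
print(pearson_similarity(alice, bob))
```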

Some variations of item-based and user-based Pearson correlations can be found in [59]. The Pearson correlation-based CF algorithm is a representative CF algorithm, and is widely used in the CF research community.

Other correlation-based similarities include: constrained Pearson correlation, a variation of Pearson correlation that uses the midpoint of the rating scale instead of the mean rating; Spearman rank correlation, similar to Pearson correlation, except that the ratings are ranks; and Kendall’s $\tau$ correlation, similar to the Spearman rank correlation, but instead of using the ranks themselves, only the relative ranks are used to calculate the correlation [3, 60].

Usually the number of users used in the computation of similarity is regarded as the neighborhood size of the active user, and similarity-based CF is deemed to be neighborhood-based CF.

3.1.2. Vector Cosine-Based Similarity

The similarity between two documents can be measured by treating each document as a vector of word frequencies and computing the cosine of the angle formed by the frequency vectors [61]. This formalism can be adopted in collaborative filtering, which uses users or items instead of documents and ratings instead of word frequencies.

Formally, if $R$ is the $m \times n$ user-item matrix, then the similarity between two items, $i$ and $j$, is defined as the cosine of the $m$-dimensional vectors corresponding to the $i$th and $j$th columns of matrix $R$.

Vector cosine similarity between items $i$ and $j$ is given by

$$w_{i,j} = \cos(\vec{i}, \vec{j}) = \frac{\vec{i} \cdot \vec{j}}{\|\vec{i}\|\,\|\vec{j}\|},$$

where “$\cdot$” denotes the dot-product of the two vectors. To get the desired similarity computation, for $n$ items, an $n \times n$ similarity matrix is computed [27]. For example, if the vector $\vec{A} = (x_1, y_1)$ and the vector $\vec{B} = (x_2, y_2)$, the vector cosine similarity between $\vec{A}$ and $\vec{B}$ is

$$\cos(\vec{A}, \vec{B}) = \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}}.$$

In an actual situation, different users may use different rating scales, which the vector cosine similarity cannot take into account. To address this drawback, the adjusted cosine similarity is used, subtracting the corresponding user average from each co-rated pair. The adjusted cosine similarity has the same form as the Pearson correlation formula above. In fact, Pearson correlation performs cosine similarity with some sort of normalization of the user’s ratings according to his own rating behavior. Hence, we may get negative values with Pearson correlation, but not with cosine similarity, supposing we have an $n$-point rating scale with only positive rating values.
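
A short sketch contrasting plain vector cosine with adjusted cosine similarity between two item columns, assuming the ratings are held in a NumPy user-item matrix with zeros for missing ratings; subtracting each user's mean over the co-rated pairs is what distinguishes the adjusted variant. The matrix values are illustrative.

```python
import numpy as np

def cosine_item_similarity(R, i, j):
    """Plain vector cosine between item columns i and j (zeros mean missing)."""
    vi, vj = R[:, i], R[:, j]
    denom = np.linalg.norm(vi) * np.linalg.norm(vj)
    return float(vi @ vj / denom) if denom else 0.0

def adjusted_cosine_item_similarity(R, i, j):
    """Adjusted cosine: subtract each user's mean rating over co-rated pairs."""
    co_rated = (R[:, i] > 0) & (R[:, j] > 0)
    if not co_rated.any():
        return 0.0
    user_means = R.sum(axis=1) / np.maximum((R > 0).sum(axis=1), 1)
    di = R[co_rated, i] - user_means[co_rated]
    dj = R[co_rated, j] - user_means[co_rated]
    denom = np.linalg.norm(di) * np.linalg.norm(dj)
    return float(di @ dj / denom) if denom else 0.0

R = np.array([[5, 3, 4], [4, 0, 2], [1, 1, 5]], dtype=float)
print(cosine_item_similarity(R, 0, 2), adjusted_cosine_item_similarity(R, 0, 2))
```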

3.1.3. Other Similarities

Another similarity measure is conditional probability-based similarity [62, 63]. As it is not commonly-used, we will not discuss it in detail in this paper.

3.2. Prediction and Recommendation Computation

Obtaining predictions or recommendations is the most important step in a collaborative filtering system. In the neighborhood-based CF algorithm, a subset of nearest neighbors of the active user is chosen based on their similarity with him or her, and a weighted aggregate of their ratings is used to generate predictions for the active user [64].

3.2.1. Weighted Sum of Others’ Ratings

To make a prediction for the active user, $a$, on a certain item, $i$, we can take a weighted average of all the ratings on that item according to the following formula [5]:

$$P_{a,i} = \bar{r}_a + \frac{\sum_{u \in U}(r_{u,i} - \bar{r}_u)\, w_{a,u}}{\sum_{u \in U} |w_{a,u}|},$$

where $\bar{r}_a$ and $\bar{r}_u$ are the average ratings of user $a$ and user $u$ on all other rated items, and $w_{a,u}$ is the weight between user $a$ and user $u$. The summations are over all the users $u \in U$ who have rated the item $i$. For the simple example in Table 4, the user-based CF algorithm uses this formula to predict the active user's rating on the unrated item.

Note that the above prediction is based on the neighborhood of the active user.
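
A minimal sketch of the weighted-sum prediction above, assuming per-user rating dictionaries and any pairwise similarity function (for example, the `pearson_similarity` helper sketched in Section 3.1.1); each user's mean is taken over the items they rated, which is a simplification of the formula's "all other rated items".

```python
def predict_user_based(active, item, all_ratings, similarity):
    """Weighted-sum prediction of the active user's rating on `item`.
    all_ratings: dict user -> {item: rating}; similarity: f(ratings_a, ratings_b)."""
    r_active = all_ratings[active]
    mean_active = sum(r_active.values()) / len(r_active)
    num, den = 0.0, 0.0
    for user, r_user in all_ratings.items():
        if user == active or item not in r_user:
            continue  # only neighbours who rated the target item contribute
        w = similarity(r_active, r_user)
        mean_user = sum(r_user.values()) / len(r_user)
        num += (r_user[item] - mean_user) * w
        den += abs(w)
    return mean_active if den == 0 else mean_active + num / den

# Hypothetical usage:
# ratings = {"alice": {"i1": 5, "i2": 3}, "bob": {"i1": 4, "i3": 2}, ...}
# predict_user_based("alice", "i3", ratings, pearson_similarity)
```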

3.2.2. Simple Weighted Average

For item-based prediction, we can use the simple weighted average to predict the rating, $P_{u,i}$, for user $u$ on item $i$ [40]:

$$P_{u,i} = \frac{\sum_{n \in N} r_{u,n}\, w_{i,n}}{\sum_{n \in N} |w_{i,n}|},$$

where the summations are over all the other items $n \in N$ rated by user $u$, $w_{i,n}$ is the weight between items $i$ and $n$, and $r_{u,n}$ is the rating of user $u$ on item $n$.
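
A corresponding sketch of the item-based simple weighted average, assuming an item-item weight lookup (for example, precomputed adjusted cosine similarities) and the same per-user rating dictionaries; the prediction is formed only from items the user has already rated. The weights in the usage example are made up.

```python
def predict_item_based(user_ratings, target_item, item_weights):
    """Simple weighted average over the items the user has already rated.
    user_ratings: {item: rating}; item_weights: {(item_a, item_b): weight}."""
    num, den = 0.0, 0.0
    for other_item, rating in user_ratings.items():
        if other_item == target_item:
            continue
        w = item_weights.get((target_item, other_item), 0.0)
        num += rating * w
        den += abs(w)
    return num / den if den else 0.0

# Hypothetical usage with invented item-item weights:
weights = {("movieC", "movieA"): 0.8, ("movieC", "movieB"): 0.3}
print(predict_item_based({"movieA": 5, "movieB": 2}, "movieC", weights))
```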

3.3. Top-N Recommendations

Top-$N$ recommendation recommends a set of $N$ top-ranked items that will be of interest to a certain user. For example, if you are a returning customer, when you log into your http://amazon.com/ account, you may be recommended a list of books (or other products) that may interest you (see Figure 1). Top-$N$ recommendation techniques analyze the user-item matrix to discover relations between different users or items and use them to compute the recommendations. Some models, such as association rule mining based models, can be used to make top-$N$ recommendations; we will introduce these in Section 4.

3.3.1. User-Based Top-N Recommendation Algorithms

User-based top-$N$ recommendation algorithms first identify the $k$ most similar users (nearest neighbors) to the active user using the Pearson correlation or the vector-space model [9, 27], in which each user is treated as a vector in the $n$-dimensional item space and the similarities between the active user and other users are computed between the vectors. After the $k$ most similar users have been discovered, their corresponding rows in the user-item matrix are aggregated to identify a set of items, $C$, purchased by the group, together with their frequencies. With the set $C$, user-based CF techniques then recommend the top-$N$ most frequent items in $C$ that the active user has not purchased. User-based top-$N$ recommendation algorithms have limitations related to scalability and real-time performance [62].
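
A compact sketch of the user-based top-$N$ procedure just described, again assuming per-user rating dictionaries and reusing a pairwise similarity function such as the `pearson_similarity` helper from Section 3.1; the parameters `k` and `n` are illustrative.

```python
from collections import Counter

def user_based_top_n(active, all_ratings, similarity, k=3, n=5):
    """Recommend the top-n most frequent items among the k nearest neighbours
    that the active user has not yet rated/purchased."""
    r_active = all_ratings[active]
    neighbours = sorted(
        (u for u in all_ratings if u != active),
        key=lambda u: similarity(r_active, all_ratings[u]),
        reverse=True,
    )[:k]
    counts = Counter()
    for u in neighbours:
        for item in all_ratings[u]:
            if item not in r_active:
                counts[item] += 1   # aggregate the candidate set C with frequencies
    return [item for item, _ in counts.most_common(n)]
```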

3.3.2. Item-Based Top-N Recommendation Algorithms

Item-based top-$N$ recommendation algorithms have been developed to address the scalability problem of user-based top-$N$ recommendation algorithms. These algorithms first compute the $k$ most similar items for each item according to the similarities; then identify the set, $C$, of candidate recommended items by taking the union of the $k$ most similar items and removing each of the items in the set, $U$, that the user has already purchased; then calculate the similarities between each item of the set $C$ and the set $U$. The resulting set of the items in $C$, sorted in decreasing order of the similarity, will be the recommended item-based top-$N$ list [62]. One problem with this method is that, when the joint distribution of a set of items is different from the distributions of the individual items in the set, the above scheme can potentially produce suboptimal recommendations. To solve this problem, Deshpande and Karypis [63] developed higher-order item-based top-$N$ recommendation algorithms that use all combinations of items up to a particular size when determining the itemsets to be recommended to a user.

3.4. Extensions to Memory-Based Algorithms
3.4.1. Default Voting

In many collaborative filters, pair-wise similarity is computed only from the ratings in the intersection of the items both users have rated [5, 27]. It will not be reliable when there are too few votes to generate similarity values. Also, focusing on intersection set similarity neglects the global rating behavior reflected in a user’s entire rating history.

Empirically, assuming some default voting values for the missing ratings can improve the CF prediction performance. Herlocker et al. [64] account for small intersection sets by reducing the weight of users that have fewer than 50 items in common. Chee et al. [13] use the average of the clique (or small group) as the default voting to extend each user's rating history. Breese et al. [9] use a neutral or somewhat negative preference for the unobserved ratings and then compute the similarity between users on the resulting ratings data.

3.4.2. Inverse User Frequency

The idea of inverse user frequency [61] applied in collaborative filtering is that universally liked items are not as useful in capturing similarity as less common items. The inverse user frequency can be defined as $f_j = \log(n/n_j)$, where $n_j$ is the number of users who have rated item $j$ and $n$ is the total number of users. If everyone has rated item $j$, then $f_j$ is zero. To apply inverse user frequency while using the vector similarity-based CF algorithm, we need to use a transformed rating, which is simply the original rating multiplied by the factor $f_j$ [9].
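
A small sketch of the inverse user frequency transform, assuming the ratings sit in per-user dictionaries; each rating is scaled by $f_j = \log(n/n_j)$ before a vector similarity is computed.

```python
from math import log

def inverse_user_frequency(all_ratings):
    """Return {item: log(n / n_j)}, where n_j counts the users who rated item j."""
    n = len(all_ratings)
    item_counts = {}
    for ratings in all_ratings.values():
        for item in ratings:
            item_counts[item] = item_counts.get(item, 0) + 1
    return {item: log(n / n_j) for item, n_j in item_counts.items()}

def transform_ratings(ratings, iuf):
    """Multiply each original rating by its item's inverse user frequency."""
    return {item: r * iuf[item] for item, r in ratings.items()}
```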

3.4.3. Case Amplification

Case amplification refers to a transform applied to the weights used in the basic collaborative filtering prediction. The transform emphasizes high weights and punishes low weights [9]:

$$w'_{i,j} = w_{i,j} \cdot |w_{i,j}|^{\rho - 1},$$

where $\rho$ is the case amplification power, $\rho \geq 1$, and a typical choice of $\rho$ is 2.5 [65]. Case amplification reduces noise in the data. It tends to favor high weights, as small values raised to a power become negligible. If the weight is high, for example, $w_{i,j} = 0.9$, then it remains high ($0.9^{2.5} \approx 0.76$); if it is low, for example, $w_{i,j} = 0.1$, then it will be negligible ($0.1^{2.5} \approx 0.003$).
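
The case-amplification transform is essentially a one-liner; this sketch uses the typical power of 2.5 mentioned above and preserves the sign of the weight.

```python
def case_amplify(weight, rho=2.5):
    """Emphasize high weights and shrink low ones: w' = w * |w|**(rho - 1)."""
    return weight * (abs(weight) ** (rho - 1))

print(case_amplify(0.9))   # stays high (about 0.77)
print(case_amplify(0.1))   # becomes negligible (about 0.003)
```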

3.4.4. Imputation-Boosted CF Algorithms

When the rating data for CF tasks are extremely sparse, it will be problematic to produce accurate predictions using the Pearson correlation-based CF. Su et al. [37, 66] proposed a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique to fill in the missing data, and then uses a traditional Pearson correlation-based CF algorithm on this completed data to predict a specific user rating for a specified item. After comprehensively investigating the use of various standard imputation techniques (including mean imputation, linear regression imputation, predictive mean matching imputation [67], and Bayesian multiple imputation [68]) and machine learning classifiers [66] (including naïve Bayes, SVM, neural network, decision tree, and lazy Bayesian rules) as imputers for IBCF, they found that the proposed IBCF algorithms perform very effectively in general, and that IBCF using Bayesian multiple imputation, IBCF-NBM (a mixture IBCF which uses IBCF with naïve Bayes for denser datasets and IBCF with mean imputation for sparser ones) [37], and IBCF using naïve Bayes perform especially well, outperforming the content-boosted CF algorithm (a representative hybrid CF), and do so without using external content information.

3.4.5. Weighted Majority Prediction

The weighted majority prediction algorithm proposed by Goldman and Warmuth [69] makes its prediction using the rows with observed data in the same column, weighted by the believed similarity between the rows, with binary rating values. The weights (or similarities, initialized to 1) are increased by a multiplicative factor when the compared values are the same and decreased by a multiplicative factor when they differ; accumulating these updates is equivalent to raising the two factors to the powers $s$ and $d$, respectively, where $s$ is the number of rows that have the same value as the target row and $d$ is the number of rows having different values. The prediction for a rating on a certain item by the active user is determined by the rating on the item by the user who has the highest accumulated weight with the active user. This algorithm can be generalized to multiclass data, and be extended from user-to-user similarity to item-to-item similarity and to user-item-combined similarity [70]. One shortcoming of this algorithm is scalability: when the number of users or items grows very large, it becomes impractical to update the user-to-user or item-to-item similarity matrices.

4. Model-Based Collaborative Filtering Techniques

The design and development of models (such as machine learning and data mining algorithms) can allow the system to learn to recognize complex patterns based on the training data, and then make intelligent predictions for the collaborative filtering tasks on test data or real-world data, based on the learned models. Model-based CF algorithms, such as Bayesian models, clustering models, and dependency networks, have been investigated to overcome the shortcomings of memory-based CF algorithms [9, 71]. Usually, classification algorithms can be used as CF models when the user ratings are categorical, while regression models and SVD methods can be used for numerical ratings.

4.1. Bayesian Belief Net CF Algorithms

A Bayesian belief net (BN) is a directed, acyclic graph (DAG) with a triplet $\langle N, A, \Theta \rangle$, where each node $n \in N$ represents a random variable, each directed arc $a \in A$ between nodes represents a probabilistic association between variables, and $\Theta$ is a conditional probability table quantifying how much a node depends on its parents [72]. Bayesian belief nets (BNs) are often used for classification tasks.

4.1.1. Simple Bayesian CF Algorithm

The simple Bayesian CF algorithm uses a naïve Bayes (NB) strategy to make predictions for CF tasks. Assuming the features are independent given the class, the probability of a certain class given all of the features can be computed, and the class with the highest probability is then taken as the predicted class [41]. For incomplete data, the probability calculation and classification are computed over observed data (the subscript $o$ in the following equation indicates observed values):

$$\text{class} = \arg\max_{j}\; P(\text{class}_j) \prod_{o} P(X_o = x_o \mid \text{class}_j).$$

The Laplace Estimator is used to smooth the probability calculation and avoid a conditional probability of 0:

$$P(X_i = x_i \mid Y = y) = \frac{\#(X_i = x_i \wedge Y = y) + 1}{\#(Y = y) + |C|},$$

where $|C|$ is the size of the class set $C$. For example, with a binary class set ($|C| = 2$), a raw conditional probability estimate of $0/2$ becomes $(0 + 1)/(2 + 2) = 1/4$ using the Laplace Estimator.

Using the same example in Table 4, where the class set consists of the possible rating values, the simple Bayesian CF algorithm with the Laplace Estimator produces the active user's rating on the target item by selecting the rating class with the highest probability given the active user's observed ratings.
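
A toy sketch of the simple Bayesian CF idea under the assumptions above: the target item's rating is the class, the active user's observed ratings on other items are the features, and the Laplace Estimator smooths every probability. The 1–5 class set and the dictionary rating format are illustrative.

```python
def naive_bayes_cf_predict(active, target_item, all_ratings, classes=(1, 2, 3, 4, 5)):
    """Predict the active user's rating on target_item with a simple naive Bayes CF model.
    Probabilities are estimated from observed data only and Laplace-smoothed."""
    # Users other than the active one who rated the target item supply the training data.
    others = [r for u, r in all_ratings.items() if u != active and target_item in r]
    best_class, best_score = None, float("-inf")
    for c in classes:
        prior_count = sum(1 for r in others if r[target_item] == c)
        score = (prior_count + 1) / (len(others) + len(classes))      # smoothed prior
        for item, value in all_ratings[active].items():
            if item == target_item:
                continue
            both = [r for r in others if item in r]                   # observed data only
            in_class = sum(1 for r in both if r[target_item] == c)
            match = sum(1 for r in both if r[target_item] == c and r[item] == value)
            score *= (match + 1) / (in_class + len(classes))          # smoothed likelihood
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```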

In Miyahara and Pazzani [10], multiclass data are firstly converted to binary-class data, and then converted to a Boolean feature vector rating matrix. These conversions make the use of the NB algorithm for CF tasks easier, but bring the problems of scalability and the loss of multiclass information for multiclass data. In Miyahara and Pazzani [41], they applied the simple Bayesian CF model only on binary data.

Because most real-world CF data are multiclass, Su and Khoshgoftaar [11] applied the simple Bayesian CF algorithm to multiclass data for CF tasks and found that simple Bayesian CF has worse predictive accuracy but better scalability than the Pearson correlation-based CF, as it makes predictions based on observed ratings and its prediction-making process is less time-consuming.

The simple Bayesian CF algorithm can be regarded as a memory-based CF technique because of its in-memory calculation for CF predictions. We put it in this section because most other Bayesian CF algorithms are model-based CF techniques.

4.1.2. NB-ELR and TAN-ELR CF Algorithms

Because of the limitations of the simple Bayesian algorithm for CF tasks, advanced BNs CF algorithms, with their ability to deal with incomplete data, can be used instead [11]. Extended logistic regression (ELR) is a gradient-ascent algorithm [31, 73], which is a discriminative parameter-learning algorithm that maximizes log conditional likelihood.

TAN-ELR and NB-ELR (tree augmented naïve Bayes [74] and naïve Bayes optimized by ELR, resp.) have been proven to have high classification accuracy for both complete and incomplete data [31, 73].

Applied to CF tasks, working on real-world multiclass CF datasets and using MAE as evaluation criterion, the empirical results show that the TAN-ELR CF and NB-ELR CF algorithms perform significantly better than the simple Bayesian CF algorithm, and consistently better than the Pearson correlation memory-based CF algorithm [11]. However, TAN-ELR and NB-ELR need a longer time to train the models. A solution is to run the time-consuming training stage offline, and the online prediction-producing stage will take a much shorter time.

4.1.3. Other Bayesian CF Algorithms

Bayesian belief nets with decision trees at each node.  This model has a decision tree at each node of the BNs, where a node corresponds to each item in the domain and the states of each node correspond to the possible ratings for each item [9]. The results show that this model has prediction performance similar to that of Pearson correlation-based CF methods, and better performance than the Bayesian-clustering and vector cosine memory-based CF algorithms.

The baseline Bayesian model uses a Bayesian belief net with no arcs (a baseline model) for collaborative filtering and recommends items based on their overall popularity [75]. However, its performance is suboptimal.

4.2. Clustering CF Algorithms

A cluster is a collection of data objects that are similar to one another within the same cluster and are dissimilar to the objects in other clusters [76]. The measurement of the similarity between objects is determined using metrics such as Minkowski distance and Pearson correlation.

For two data objects, $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$, the popular Minkowski distance is defined as

$$d(X, Y) = \sqrt[q]{\sum_{i=1}^{n} |x_i - y_i|^{q}},$$

where $n$ is the dimension number of the objects, $x_i$ and $y_i$ are the values of the $i$th dimension of objects $X$ and $Y$ respectively, and $q$ is a positive integer. When $q = 1$, $d$ is the Manhattan distance; when $q = 2$, $d$ is the Euclidean distance [76].
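
A direct translation of the Minkowski distance, shown for the Manhattan ($q = 1$) and Euclidean ($q = 2$) special cases with made-up vectors.

```python
def minkowski_distance(x, y, q=2):
    """Minkowski distance between two equal-length numeric sequences."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

print(minkowski_distance([1, 2, 3], [4, 6, 3], q=1))  # Manhattan: 7
print(minkowski_distance([1, 2, 3], [4, 6, 3], q=2))  # Euclidean: 5.0
```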

Clustering methods can be classified into three categories: partitioning methods, density-based methods, and hierarchical methods [76, 77]. A commonly-used partitioning method is $k$-means, proposed by MacQueen [78], which has two main advantages: relative efficiency and easy implementation. Density-based clustering methods typically search for dense clusters of objects separated by sparse regions that represent noise. DBSCAN [79] and OPTICS [80] are well-known density-based clustering methods. Hierarchical clustering methods, such as BIRCH [81], create a hierarchical decomposition of the set of data objects using some criterion.

In most situations, clustering is an intermediate step and the resulting clusters are used for further analysis or processing to conduct classification or other tasks. Clustering CF models can be applied in different ways. Sarwar et al. [43] and O'Connor and Herlocker [42] use clustering techniques to partition the data into clusters and use a memory-based CF algorithm such as a Pearson correlation-based algorithm to make predictions for CF tasks within each cluster.

Using the $k$-means method with $k = 2$, the RecTree method, proposed by Chee et al. [13], recursively splits the originally large rating data into two subclusters as it constructs the RecTree from the root to its leaves. The resulting RecTree resembles an unbalanced binary tree, whose leaf nodes hold a similarity matrix and whose internal nodes maintain rating centroids of their subtrees. The prediction is made within the leaf node that the active user belongs to. RecTree scales as $O(n \log n)$ for offline recommendation and $O(b)$ for online recommendation, where $n$ is the dataset size and $b$ is the partition size, a constant, and it has improved accuracy over the Pearson correlation-based CF when an appropriate size of advisors (the cluster of users) is selected.

Ungar and Foster [12] cluster users and items separately using variations of $k$-means and Gibbs sampling [82], clustering users based on the items they rated and clustering items based on the users that rated them. Users can then be reclustered based on the number of items they rated, and items can be similarly reclustered. Each user is assigned to a class with a degree of membership proportional to the similarity between the user and the mean of the class. Their CF performance is good on synthetic data but poor on real data.

A flexible mixture model (FMM) extends existing clustering algorithms for CF by clustering both users and items at the same time, allowing each user and item to be in multiple clusters and modeling the clusters of users and items separately [15]. Experimental results show that the FMM algorithm has better accuracy than the Pearson correlation-based CF algorithm and aspect model [83].

Clustering models have better scalability than typical collaborative filtering methods because they make predictions within much smaller clusters rather than the entire customer base [13, 27, 44, 84]. The complex and expensive clustering computation is run offline. However, its recommendation quality is generally low. It is possible to improve quality by using numerous fine-grained segments, but then online user-segment classification becomes almost as expensive as finding similar customers using memory-based collaborative filtering [6]. As optimal clustering over large data sets is impractical, most applications use various forms of greedy cluster generation techniques. For very large datasets, especially those with high dimensionality, sampling or dimensionality reduction is also necessary.

4.3. Regression-Based CF Algorithms

For memory-based CF algorithms, two rating vectors may be distant in terms of Euclidean distance yet have very high similarity under vector cosine or Pearson correlation measures; in such cases memory-based CF algorithms do not fit well and better solutions are needed. Also, numerical ratings are common in real-life recommender systems. Regression methods, which are good at making predictions for numerical values, are helpful for addressing these problems.

A regression method uses an approximation of the ratings to make predictions based on a regression model. Let $X$ be a random variable representing a user's preferences on different items. The linear regression model can be expressed as

$$Y = \Lambda X + N,$$

where $\Lambda$ is an $n \times k$ matrix, $N$ is a random variable representing noise in user choices, $Y$ is an $n \times m$ matrix with $Y_{ij}$ the rating of user $j$ on item $i$, and $X$ is a $k \times m$ matrix with each column an estimate of the value of the random variable $X$ (the user's ratings in the $k$-dimensional rating space) for one user. Typically, the rating matrix $Y$ is very sparse.

To remedy this, Canny [52] proposed a sparse factor analysis, which replaces missing elements with default voting values (the average of some nonmissing elements, either the average by columns, or by rows, or by all), and uses the regression model as the initialization for Expectation Maximization (EM) [85] iterations. According to Canny [52], the sparse factor analysis has better scalability than Pearson correlation-based CF and Personality Diagnosis (PD), a representative hybrid CF algorithm [86], and better accuracy than singular value decomposition (SVD) [23]. Sparse factor analysis also protects user privacy, as it supports computation on encrypted user data [52].

Vucetic and Obradovic [87] proposed a regression-based approach to CF tasks on numerical ratings data that searches for similarities between items, builds a collection of simple linear models, and combines them efficiently to provide rating predictions for an active user. They used ordinary least squares to estimate the parameters of the linear regression function. Their experimental results show the approach has good performance in addressing the sparsity, prediction latency and numerical prediction problems of CF tasks. Lemire and Maclachlan [88] proposed slope one algorithms to make faster CF prediction than memory-based CF algorithms.

4.4. MDP-Based CF Algorithms

Instead of viewing the recommendation process as a prediction problem, Shani et al. [14] view it as a sequential optimization problem and use a Markov decision process (MDP) model [89] for recommender systems.

An MDP is a model for sequential stochastic decision problems, which is often used in applications where an agent is influencing its surrounding environment through actions. An MDP can be defined as a four-tuple $\langle S, A, R, \mathrm{Pr} \rangle$, where $S$ is a set of states, $A$ is a set of actions, $R$ is a real-valued reward function for each state/action pair, and $\mathrm{Pr}$ is the transition probability between every pair of states given each action.

An optimal solution to the MDP maximizes a function of its reward stream, typically the expected discounted sum of rewards. By starting with an initial policy $\pi_0$, computing the reward value function based on the previous policy, and updating the policy with the new value function at each step, the iterations will converge to an optimal policy [90, 91].
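
The sketch below illustrates generic policy iteration on a tiny invented MDP (the states, actions, transition probabilities, and rewards are all made up for illustration); it alternates approximate policy evaluation and greedy policy improvement until the policy stops changing, which is the convergence behaviour referred to above.

```python
def policy_iteration(states, actions, P, R, gamma=0.9, eval_sweeps=50):
    """Generic policy iteration on a small MDP.
    P[s][a] is a list of (next_state, probability) pairs; R[s][a] is the reward."""
    policy = {s: actions[0] for s in states}   # arbitrary initial policy
    V = {s: 0.0 for s in states}
    while True:
        # Approximate policy evaluation by repeated Bellman backups.
        for _ in range(eval_sweeps):
            V = {s: R[s][policy[s]]
                 + gamma * sum(p * V[s2] for s2, p in P[s][policy[s]])
                 for s in states}
        # Greedy policy improvement with respect to the current value function.
        new_policy = {
            s: max(actions,
                   key=lambda a, s=s: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
            for s in states
        }
        if new_policy == policy:               # no change: an optimal policy was reached
            return policy, V
        policy = new_policy

# Tiny invented two-state, two-action example (all numbers are illustrative).
states, actions = ["start", "engaged"], ["recommend", "wait"]
P = {"start":   {"recommend": [("engaged", 1.0)], "wait": [("start", 1.0)]},
     "engaged": {"recommend": [("engaged", 1.0)], "wait": [("start", 1.0)]}}
R = {"start":   {"recommend": 1.0, "wait": 0.0},
     "engaged": {"recommend": 2.0, "wait": 0.0}}
print(policy_iteration(states, actions, P, R))
```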

In Shani et al. [14], the states of the MDP for the CF system are tuples of items, with some null values corresponding to missing items; the actions of the MDP correspond to a recommendation of an item; and the rewards in the MDP correspond to the utility of selling an item, for example, the net profit. The state following each recommendation is the user’s response to that recommendation, such as taking the recommended item, taking the non-recommended item, or selecting nothing. To handle the large action space, it is assumed that the probability that a user buys an item depends on his current state, item, and whether or not the item is recommended, but does not depend on the identity of the other recommended items.

Working on an Israeli online bookstore, Mitos, the deployed MDP-recommender system produced a much higher profit than the system without using the recommender. Also, the MDP CF model performs much better than the simpler Markov chain (MC) model, which is simply an MDP without actions [14].

The MDP-Based CF model in Shani et al. [14] can be viewed as approximating a partial observable MDP (POMDP) by using a finite rather than unbounded window of past history to define the current state. As the computational and representational complexity of POMDPs is high, appropriate approaches to tackling these problems must be developed, which are generally classified into three broad strategies: value function approximation [92], policy based optimization [84, 93], and stochastic sampling [94]. The application of these strategies to CF tasks may be an interesting direction of future research.

4.5. Latent Semantic CF Models

A latent semantic CF technique relies on a statistical modeling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. Conceptually, it decomposes user preferences using overlapping user communities. The main advantages of this technique over standard memory-based methods are its higher accuracy and scalability [7, 95].

The aspect model, proposed by Hofmann and Puzicha [83], is a probabilistic latent-space model which models individual ratings as a convex combination of rating factors. A latent class variable $z$ is associated with each observed pair of user and item, with the assumption that users and items are independent of each other given the latent class variable. On the EachMovie dataset [96], the performance of the aspect model is much better than that of the clustering model.

A multinomial model is a simple probabilistic model for categorical data [9, 97] that assumes there is only one type of user. A multinomial mixture model assumes that there are multiple types of users underlying all profiles, and that the rating variables are independent of each other and of the user's identity given the user's type [98]. The user rating profile (URP) model [97] combines the intuitive appeal of the multinomial mixture model and the aspect model [83] with the high-level generative semantics of Latent Dirichlet Allocation (LDA, a generative probabilistic model in which each item is modeled as a finite mixture over an underlying set of users) [99]. URP performs better than the aspect model and multinomial mixture models for CF tasks.

4.6. Other Model-Based CF Techniques

For applications in which ordering is more desirable than classifying, Cohen et al. [100] investigated a two-stage order learning CF approach to learning to order. In that approach, one first learns a preference function by conventional means, which returns a confidence value reflecting how likely it is that one instance is preferred to another, and then orders a new set of instances by finding the total ordering that best approximates the preference function. As the problem of finding the total ordering is NP-complete, a greedy-order algorithm is used to obtain an approximately optimal ordering function. Working on EachMovie [96], this order learning CF approach performs better than a nearest neighbor CF algorithm and a linear regression algorithm.

Association rule based CF algorithms are more often used for top-$N$ recommendation tasks than for prediction tasks. Sarwar et al. [27] describe their approach of using a traditional association rule mining algorithm to find rules for developing top-$N$ recommender systems. They find the top-$N$ items by choosing all the rules that meet the thresholds for support and confidence, sorting items according to the confidence of the rules so that items predicted by rules with higher confidence are ranked higher, and finally selecting the first $N$ highest-ranked items as the recommended set [27]. Fu et al. [101] developed a system to recommend web pages by using the Apriori algorithm to mine association rules over users' navigation histories. Leung et al. proposed a collaborative filtering framework using fuzzy association rules and multilevel similarity [102].

Other model-based CF techniques include a maximum entropy approach, which clusters the data first, and then in a given cluster uses maximum entropy as an objective function to form a conditional maximal entropy model to make predictions [17]. A dependency network is a graphical model for probabilistic relationships, whose graph is potentially cyclic. The probability component of a dependency network is a set of conditional distributions, one for each node given its parents. Although less accurate than Bayesian belief nets, dependency networks are faster in generating predictions and require less time and memory to learn [75]. Decision tree CF models treat collaborative filtering as a classification task and use decision tree as the classifier [103]. Horting is a graph-based technique in which nodes are users and edges between nodes are degrees of similarity between users [104]. Multiple multiplicative factor models (MMFs) are a class of causal, discrete latent variable models combining factor distributions multiplicatively and are able to readily accommodate missing data [105]. Probabilistic principal components analysis (pPCA) [52, 106] determines the principal axes of a set of observed data vectors through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. Matrix factorization based CF algorithms have been proven to be effective to address the scalability and sparsity challenges of CF tasks [33, 34, 107]. Wang et al. showed how the development of collaborative filtering can gain benefits from information retrieval theories and models, and proposed probabilistic relevance CF models [108, 109].

5. Hybrid Collaborative Filtering Techniques

Hybrid CF systems combine CF with other recommendation techniques (typically with content-based systems) to make predictions or recommendations.

Content-based recommender systems make recommendations by analyzing the content of textual information, such as documents, URLs, news messages, web logs, item descriptions, and profiles about users’ tastes, preferences, and needs, and finding regularities in the content [110]. Many elements contribute to the importance of the textual content, such as observed browsing features of the words or pages (e.g., term frequency and inverse document frequency), and similarity between items a user liked in the past [111]. A content-based recommender then uses heuristic methods or classification algorithms to make recommendations [112]. Content-based techniques have the start-up problem, in which they must have enough information to build a reliable classifier. Also, they are limited by the features explicitly associated with the objects they recommend (sometimes these features are hard to extract), while collaborative filtering can make recommendations without any descriptive data. Also, content-based techniques have the overspecialization problem, that is, they can only recommend items that score highly against a user’s profile or his/her rating history [21, 113].

Other recommender systems include demographic-based recommender systems, which use user profile information such as gender, postcode, occupation, and so forth [114]; utility-based recommender systems and knowledge-based recommender systems, both of which require knowledge about how a particular object satisfies the user needs [115, 116]. We will not discuss these systems in detail in this work.

Hoping to avoid limitations of either recommender system and improve recommendation performance, hybrid CF recommenders are combined by adding content-based characteristics to CF models, adding CF characteristics to content-based models, combining CF with content-based or other systems, or combining different CF algorithms [21, 117].

5.1. Hybrid Recommenders Incorporating CF and Content-Based Features

The content-boosted CF algorithm uses naïve Bayes as the content classifier and fills in the missing values of the rating matrix with the predictions of the content predictor to form a pseudo rating matrix, in which observed ratings are kept untouched and missing ratings are replaced by the predictions of the content predictor. It then makes predictions over the resulting pseudo rating matrix using a weighted Pearson correlation-based CF algorithm, which gives a higher weight to items that more users have rated and a higher weight to the active user [16] (see an illustration in Table 5). The content-boosted CF recommender has improved prediction performance over some pure content-based recommenders and some pure memory-based CF algorithms. It also overcomes the cold start problem and tackles the sparsity problem of CF tasks. Working on reasonably-sized subsets instead of the original rating data, Greiner et al. used TAN-ELR [31] as the content predictor and directly applied the Pearson correlation-based CF, instead of the weighted one, on the pseudo rating matrix to make predictions, and they achieved improved CF performance in terms of MAE [118].
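
A schematic sketch of the pseudo rating matrix construction at the heart of content-boosted CF, assuming a `content_predict(user, item)` function (for example, a naïve Bayes content classifier trained on item descriptions) is available; observed ratings are kept and only the gaps are filled before a CF algorithm is run on the densified result.

```python
def build_pseudo_rating_matrix(all_ratings, all_items, content_predict):
    """Fill each missing (user, item) cell with the content predictor's output,
    keeping observed ratings untouched, as in content-boosted CF."""
    pseudo = {}
    for user, ratings in all_ratings.items():
        pseudo[user] = {}
        for item in all_items:
            if item in ratings:
                pseudo[user][item] = ratings[item]                   # observed rating kept
            else:
                pseudo[user][item] = content_predict(user, item)     # filled by content model
    return pseudo

# A (possibly weighted) Pearson correlation-based CF is then applied to `pseudo`
# instead of the original sparse matrix.
```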

Ansari et al. [8] propose a Bayesian preference model that statistically integrates several types of information useful for making recommendations, such as user preferences, user and item features, and expert evaluations. They use Markov chain Monte Carlo (MCMC) methods [119] for sampling-based inference, which involve sampling parameter estimation from the full conditional distribution of parameters. They achieved better performance than pure collaborative filtering.

The recommender Fab, proposed by Balabanović and Shoham [117], maintains user profiles of interest in web pages using content-based techniques, and uses CF techniques to identify profiles with similar tastes. It can then recommend documents across user profiles. Sarwar et al. [120] implemented a set of knowledge-based “filterbots” as artificial users using certain criteria. A straightforward example of a filterbot is a genrebot, which bases its opinion solely on the genre of the item; for example, a “jazzbot” would give a full mark to a CD simply because it is in the jazz category, while it would give a low score to any other CD in the database. Mooney and Roy [121] use the prediction from the CF system as the input to a content-based recommender. Condiff et al. [113] propose a Bayesian mixed-effects model that integrates user ratings, user features, and item features in a single unified framework. The CF system Ripper, proposed by Basu et al. [71], uses both user ratings and content features to produce recommendations.

5.2. Hybrid Recommenders Combining CF and Other Recommender Systems

A weighted hybrid recommender combines different recommendation techniques by their weights, which are computed from the results of all of the available recommendation techniques present in the system [115]. The combination can be linear, the weights can be adjustable [46], and weighted majority voting [110, 122] or weighted average voting [118] can be used. For example, the P-Tango system [46] initially gives CF and content-based recommenders equal weight, but gradually adjusts the weighting as predictions about user ratings are confirmed or disconfirmed. The strategy of the P-Tango system is similar to boosting [123].
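
A minimal sketch of a weighted hybrid in the spirit of P-Tango, with an invented update rule: the content-based and CF predictors start with equal weights, and the weights are nudged toward whichever component was closer once the true rating is observed.

```python
class WeightedHybrid:
    """Linear combination of a CF and a content-based predictor with adjustable weights."""

    def __init__(self, cf_predict, content_predict, learning_rate=0.05):
        self.cf_predict = cf_predict
        self.content_predict = content_predict
        self.w_cf, self.w_content = 0.5, 0.5          # start with equal weights
        self.lr = learning_rate

    def predict(self, user, item):
        return (self.w_cf * self.cf_predict(user, item)
                + self.w_content * self.content_predict(user, item))

    def update(self, user, item, true_rating):
        """Shift weight toward the component with the smaller error (illustrative rule)."""
        err_cf = abs(self.cf_predict(user, item) - true_rating)
        err_content = abs(self.content_predict(user, item) - true_rating)
        if err_cf < err_content:
            self.w_cf = min(1.0, self.w_cf + self.lr)
        elif err_content < err_cf:
            self.w_cf = max(0.0, self.w_cf - self.lr)
        self.w_content = 1.0 - self.w_cf
```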

A switching hybrid recommender switches between recommendation techniques using some criteria, such as confidence levels for the recommendation techniques. When the CF system cannot make a recommendation with sufficient confidence, then another recommender system such as a content-based system is attempted. Switching hybrid recommenders also introduce the complexity of parameterization for the switching criteria [115].
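A switching hybrid can be sketched as follows, assuming each component returns a (prediction, confidence) pair; the confidence threshold is the switching criterion that has to be parameterized. The names and threshold value are hypothetical.

```python
def switching_hybrid(cf_result, cb_result, confidence_threshold=0.6):
    """cf_result / cb_result are (prediction, confidence) pairs from the two components.
    Use the CF prediction when it is confident enough, otherwise fall back."""
    cf_pred, cf_conf = cf_result
    cb_pred, _ = cb_result
    return cf_pred if cf_conf >= confidence_threshold else cb_pred

print(switching_hybrid((4.5, 0.8), (3.0, 0.7)))  # CF is confident enough -> 4.5
print(switching_hybrid((4.5, 0.3), (3.0, 0.7)))  # CF is not -> fall back to 3.0
```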

Other hybrid recommenders in this category include mixed hybrid recommenders [124], cascade hybrid recommenders [115], meta-level recommenders [110, 115, 117, 125], and so forth.

Many studies have empirically compared hybrid recommenders with pure CF and content-based methods and found that hybrid recommenders can make more accurate recommendations, especially in new-user and new-item situations where a regular CF algorithm cannot make satisfactory recommendations. However, hybrid recommenders rely on external information that is usually not available, and they generally have increased implementation complexity [110, 115, 126].

5.3. Hybrid Recommenders Combining CF Algorithms

The two major classes of CF approaches, memory-based and model-based, can be combined to form hybrid CF approaches. Such hybrids generally perform better than some pure memory-based and pure model-based CF algorithms [22, 86].

Probabilistic memory-based collaborative filtering (PMCF) combines memory-based and model-based techniques [22]. It uses a mixture model built on a set of stored user profiles and uses the posterior distribution of user ratings to make predictions. To address the new user problem, an active learning extension to the PMCF system can be used to actively query a user for additional information when insufficient information is available. To reduce computation time, PMCF selects a small subset of user profiles, called the profile space, from the entire ratings database and makes predictions from this profile space instead of the whole database. PMCF has better accuracy than Pearson correlation-based CF and model-based CF using naïve Bayes.

Personality diagnosis (PD) is a representative hybrid CF approach that combines memory-based and model-based CF algorithms and retains some advantages of both [86]. In PD, the active user is assumed to be generated by choosing one of the other users uniformly at random and adding Gaussian noise to his or her ratings. Given the active user's known ratings, we can calculate the probability that he or she has the same "personality type" as each other user, and the probability that he or she will like new items. PD can also be regarded as a clustering method with exactly one user per cluster. Working on EachMovie [96] and CiteSeer [127], PD makes better predictions than the Pearson correlation-based and vector similarity-based CF algorithms and the two model-based algorithms, Bayesian clustering and Bayesian network, investigated by Breese et al. [9].
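The following sketch illustrates the PD idea under the stated Gaussian-noise assumption: for each candidate rating value, it sums over stored users the likelihood that the active user shares that user's "personality" times the chance that such a personality would give that rating to the target item, and returns the most probable value. The variable names, rating scale, and noise parameter sigma are illustrative, not a definitive implementation.

```python
import numpy as np

def pd_predict(ratings, active_ratings, item, rating_values=(1, 2, 3, 4, 5), sigma=1.0):
    """Personality diagnosis sketch: each stored user is a possible 'true personality';
    the active user's ratings are that personality plus Gaussian noise."""
    def gauss(x, y):
        return np.exp(-((x - y) ** 2) / (2 * sigma ** 2))

    scores = {}
    for r in rating_values:
        total = 0.0
        for u_ratings in ratings:                      # each row is one stored user
            if np.isnan(u_ratings[item]):
                continue
            # likelihood that the active user is this personality type
            like = 1.0
            for i, x in active_ratings.items():
                if not np.isnan(u_ratings[i]):
                    like *= gauss(x, u_ratings[i])
            total += like * gauss(r, u_ratings[item])  # chance this type rates `item` as r
        scores[r] = total
    return max(scores, key=scores.get)                 # most probable rating value

ratings = np.array([[5, 4, np.nan, 2],
                    [1, 2, 5, 4],
                    [5, 5, 1, np.nan]], dtype=float)
print(pd_predict(ratings, active_ratings={0: 5, 1: 4}, item=3))
```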

Just as an ensemble classifier can give more accurate predictions than any of its member classifiers, a hybrid CF system that combines different CF algorithms through an ensemble scheme can also improve the predictive performance of CF tasks [118].

6. Evaluation Metrics

The quality of a recommender system is determined through evaluation, and the metrics used depend on the type of CF application. According to Herlocker et al. [60], metrics for evaluating recommender systems can be broadly classified into the following categories: predictive accuracy metrics, such as Mean Absolute Error (MAE) and its variations; classification accuracy metrics, such as precision, recall, F1-measure, and ROC sensitivity; and rank accuracy metrics, such as Pearson's product-moment correlation, Kendall's Tau, Mean Average Precision (MAP), half-life utility [9], and the normalized distance-based performance metric (NDPM) [128].

Here we introduce only the commonly used CF metrics MAE, NMAE, RMSE, and ROC sensitivity; for other metrics of recommendation quality, see [60]. There are also other types of evaluation for recommender systems, such as usability evaluation [129].

6.1. Mean Absolute Error (MAE) and Normalized Mean Absolute Error (NMAE)

Instead of classification accuracy or classification error, the most widely used metric in the CF research literature is Mean Absolute Error (MAE) [3, 60], which computes the average of the absolute differences between predictions and true ratings:

$$\mathrm{MAE} = \frac{\sum_{u,i} |p_{u,i} - r_{u,i}|}{N},$$

where $N$ is the total number of ratings over all users, $p_{u,i}$ is the predicted rating for user $u$ on item $i$, and $r_{u,i}$ is the actual rating. The lower the MAE, the better the prediction.

Different recommender systems may use different numerical rating scales. Normalized Mean Absolute Error (NMAE) normalizes MAE to express errors as percentages of the full scale [3]:

$$\mathrm{NMAE} = \frac{\mathrm{MAE}}{r_{\max} - r_{\min}},$$

where $r_{\max}$ and $r_{\min}$ are the upper and lower bounds of the ratings.
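A small sketch of computing MAE and NMAE over parallel lists of predicted and actual ratings (a 1-5 rating scale is assumed for the normalization):

```python
import numpy as np

def mae(predictions, actuals):
    """Mean Absolute Error over all (prediction, actual) rating pairs."""
    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
    return np.mean(np.abs(predictions - actuals))

def nmae(predictions, actuals, r_max=5, r_min=1):
    """MAE normalized by the rating scale, so errors are comparable across systems."""
    return mae(predictions, actuals) / (r_max - r_min)

print(mae([4.2, 3.0, 1.5], [4, 3, 2]))    # ~0.233
print(nmae([4.2, 3.0, 1.5], [4, 3, 2]))   # ~0.058
```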

6.2. Root Mean Squared Error (RMSE)

Root Mean Squared Error (RMSE) is becoming popular partly because it is the Netflix Prize [20] metric for movie recommendation performance:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{u,i} (p_{u,i} - r_{u,i})^2},$$

where $N$ is the total number of ratings over all users, $p_{u,i}$ is the predicted rating for user $u$ on item $i$, and $r_{u,i}$ is again the actual rating. RMSE amplifies the contributions of large absolute errors between predictions and true values.
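RMSE can be computed analogously; on the same toy data as above it exceeds the MAE because squaring emphasizes the single larger error:

```python
import numpy as np

def rmse(predictions, actuals):
    """Root Mean Squared Error; squaring emphasizes large individual errors."""
    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
    return np.sqrt(np.mean((predictions - actuals) ** 2))

print(rmse([4.2, 3.0, 1.5], [4, 3, 2]))   # ~0.311, vs. MAE ~0.233 on the same data
```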

Although accuracy metrics have greatly helped the field of recommender systems, the most accurate recommendations are sometimes not the most useful ones to users; for example, users might prefer to be recommended items that are unfamiliar to them rather than old favorites they are unlikely to want again [130]. We therefore need to explore other evaluation metrics.

6.3. ROC Sensitivity

An ROC (Receiver Operating Characteristic) curve is a two-dimensional depiction of classifier performance, on which the TPR (true positive rate) is plotted on the Y-axis and the FPR (false positive rate) on the X-axis. For the confusion matrix in Table 6, we have TPR = TruePositive/(TotalPositive) and FPR = FalsePositive/(TotalNegative). By tuning a threshold value, all items ranked above it are deemed observed by the user and those below it unobserved; varying the threshold yields different (FPR, TPR) points that trace out the ROC curve [60].

Generally, if one ROC curve is consistently dominant over another, the system represented by the former curve has better prediction performance. But in actual situations, ROC curves may intersect with each other.

Variations of the ROC metric include GROC (global ROC) and CROC (customer ROC) [29].

ROC sensitivity is a measure of the diagnostic power of a CF system. Operationally, it is given by the Area Under the ROC Curve (AUC). For binary class problems, the AUC can be computed as the actual area under the ROC curve. We can also use the strategy from [131] to estimate the AUC:

$$\mathrm{AUC} = \frac{S_1 - n_1(n_1+1)/2}{n_0\, n_1},$$

where $n_0$ and $n_1$ are the numbers of negative and positive examples, respectively, and $S_1 = \sum_{i=1}^{n_1} r_i$, where $r_i$ is the rank of the $i$th positive example in the ranked list (ordered by increasing predicted score). From the above equation, the AUC is essentially a measure of the quality of a ranking.
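The rank-based estimate above can be computed as follows (a sketch that ignores tied scores; handling ties would require average ranks):

```python
import numpy as np

def auc_from_ranks(scores, labels):
    """Rank-based AUC estimate: sort by predicted score (ascending), sum the ranks
    of the positive examples, and apply the Mann-Whitney formula."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)       # rank 1 = lowest score
    n1 = int(labels.sum())                             # number of positives
    n0 = len(labels) - n1                              # number of negatives
    s1 = ranks[labels == 1].sum()                      # sum of positive ranks
    return (s1 - n1 * (n1 + 1) / 2) / (n0 * n1)

print(auc_from_ranks([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))   # perfect ranking -> 1.0
```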

For multiclass problems, we can use estimated AUC, which can be the weighted average of the AUCs obtained by taking each class as the reference class in turn (i.e., making it class 0 and all other classes class 1). The weight of a class’s AUC is the class’s frequency in the data [129, 132].

A recommender system with a larger AUC value performs better.

7. Conclusions

Collaborative filtering (CF) is one of the most successful recommender techniques. Broadly, there are memory-based CF techniques such as the neighborhood-based CF algorithm; model-based CF techniques such as Bayesian belief nets CF algorithms, clustering CF algorithms, and MDP-based CF algorithms; and hybrid CF techniques such as the content-boosted CF algorithm and Personality diagnosis.

As a representative memory-based CF technique, neighborhood-based CF computes similarity between users or items and then uses the weighted sum of ratings or a simple weighted average to make predictions based on the similarity values. Pearson correlation and vector cosine similarity are commonly used similarity measures, usually computed over the items co-rated by two users (for user-based CF) or over the users who have co-rated two items (for item-based CF). To make top-N recommendations, neighborhood-based methods select the nearest neighbors according to the similarity values. Memory-based CF algorithms are easy to implement and perform well on dense datasets. Their shortcomings include dependence on user ratings, decreased performance when data are sparse, the new user and new item problems, and limited scalability for large datasets [11, 42, 133]. Memory-based CF on imputed rating data or on dimensionality-reduced rating data produces more accurate predictions than on the original sparse rating data [24, 25, 37].
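As a summary illustration, here is a minimal user-based neighborhood sketch on a small dense matrix: Pearson correlation over co-rated items, followed by a mean-centered weighted average over the top-k neighbors. The names, toy data, and neighborhood size are illustrative only.

```python
import numpy as np

def pearson_sim(a, b):
    """Pearson correlation between two users over their co-rated items (NaN = unrated)."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    return float((x * y).sum() / denom) if denom else 0.0

def predict(ratings, active, item, k=2):
    """Weighted average of mean-centered neighbor ratings for `item`."""
    means = np.array([np.nanmean(row) for row in ratings])
    sims = [(pearson_sim(ratings[active], ratings[u]), u)
            for u in range(len(ratings))
            if u != active and not np.isnan(ratings[u, item])]
    neighbors = sorted(sims, reverse=True)[:k]          # top-k most similar users
    num = sum(w * (ratings[u, item] - means[u]) for w, u in neighbors)
    den = sum(abs(w) for w, _ in neighbors)
    return means[active] + num / den if den else means[active]

ratings = np.array([[5, 3, np.nan, 1],
                    [4, 3, 4, 1],
                    [1, 2, 4, 5]], dtype=float)
print(predict(ratings, active=0, item=2))
```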

Model-based CF techniques train algorithmic models, such as Bayesian belief nets, clustering techniques, or MDP-based methods, to make predictions for CF tasks. Advanced Bayesian belief net CF algorithms that can handle missing data have been found to perform better than simple Bayesian CF models and Pearson correlation-based algorithms [11]. Clustering CF algorithms make recommendations within small clusters rather than the whole dataset and thus achieve better scalability. An MDP-based CF algorithm incorporates into the model the user's action of accepting a recommendation or not, and the optimal solution to the MDP maximizes the expected reward stream; deploying such an algorithm can bring profit to the system that uses it. Model-based CF techniques also have downsides: they may not be practical when the data are extremely sparse; solutions that use dimensionality reduction or transform multiclass data into binary data may decrease recommendation performance; model building can be expensive; and for many algorithms there is a tradeoff between prediction performance and scalability.

Most hybrid CF techniques combine CF methods with content-based techniques or other recommender systems to alleviate the shortcomings of either system and to improve prediction and recommendation performance. Despite their improved performance, hybrid CF techniques rely on external content information that is usually not available, and they generally have increased complexity.

It is always desirable to design a CF approach that is easy to implement, takes few resources, produces accurate predictions and recommendations, and overcomes all kinds of challenges presented by real-world CF applications, such as data sparsity, scalability, synonymy, privacy protection, and so forth. Although there is no cure-all solution available yet, people are working out solutions for each of the problems. To alleviate the sparsity problem of CF tasks, missing-data algorithms such as TAN-ELR [31], imputation techniques such as Bayesian multiple imputation [68], and dimensionality reduction techniques such as SVD [23] and matrix factorization [107] can be used. Clustering CF algorithms and other approaches such as an incremental-SVD CF algorithm [38] are found promising in dealing with the scalability problem. Latent semantic indexing (LSI) is helpful to handle the synonymy problem. And sparse factor analysis is found helpful to protect user privacy [52].

Besides addressing the above challenges, future CF techniques should also be able to make accurate predictions in the presence of shilling attacks and noisy data, and be effectively applied in fast-growing mobile applications as well.

There are many evaluation metrics for CF techniques. The most commonly used metrics include mean absolute error (MAE), recall and precision, and ROC sensitivity. Because artificial data are usually not reliable due to the characteristics of CF tasks, real-world datasets from live experiments are more desirable for CF research.

Acknowledgment

The authors are grateful to Drs. Miroslav Kubat and Moiez A. Tapia for their help during the early stage of this paper, and to Drs. Xingquan Zhu, Russ Greiner, Andres Folleco, and Amri Napolitano for their comments. Their thanks also go to Dr. Jun Hong and the other editors/reviewers of the AAI Journal for their work, as well as to many anonymous reviewers for their comments. This work was supported by the Data Mining and Machine Learning and Empirical Software Engineering Laboratories at Florida Atlantic University.