Abstract

The choice of trustworthy interaction partners is one of the key factors for successful transactions in online communities. To choose the most trustworthy sellers to interact with, buyers rely on trust and reputation models. Therefore, online systems must be able to accurately assess peers’ trustworthiness. The Beta distribution function provides a sound mathematical basis for combining feedback and deriving users’ trustworthiness. But the Beta reputation system suffers from many forms of cheating behavior such as the proliferation of unfair positive ratings, leading a poor service provider to build a good reputation, and the proliferation of unfair negative feedback, leading a good service provider to end up with a bad reputation. In this paper, we propose a new and coherent method for computing users’ trustworthiness by combining the Beta trustworthiness expectation function with the credibility function. This novel combination mechanism mitigates the impact of unfair ratings. In comparison with Bayesian trust model, we quantitatively show that our approach provides significantly more accurate estimation of peers’ trustworthiness through feedback gathered from multiple sources. Furthermore, we propose an extension of Bayesian trustworthiness expectation function by introducing the initial trust propensity to allow assessing individuals’ initial trust.

1. Introduction

Social networks and e-commerce platforms are developed to enable social actors to share information and develop lucrative activities. These platforms have been successful in terms of security, but their openness and their ability to accommodate a large number of players make them vulnerable due to the proportionate number of malicious users that is associated. If the Public Key Infrastructure (PKI) has been designed to address key management issues, it has also created trust management problems. Indeed, the certificates provide information about the identity of actors without giving any information about their behavior. In addition, an actor may change his name or profile so that others will never recognize him as a malicious user. This has increased uncertainty among partners in peer community networks. As in [1], we argue that certificates alone are not enough and that we must also take into account the behavior of all participants in opportunistic networks such as social networks and e-commerce platforms. Currently, many researchers are focused on developing new techniques that address both device performance and users’ behavior.

The information society technologies should be able to accurately track the behavior of its users and make fair recommendations, warning, and sanction decisions depending on the context and the level of damage caused by any malicious user. To do so, we propose an adaptive evidence gathering mechanism that makes use of all the available information, that is, both positive and negative evidence, both from individual and collective experience. In order to reduce the impact of unfair ratings (both positive and negative) faced by Bayesian reputation systems, a novel trust evaluation model is proposed for quantifying users’ trustworthiness.

This paper is organized as follows. Section 2 presents the related work. In Section 3, we describe the system and explain how feedback (or evidence) is collected and updated. In Section 4, the Bayesian trust model and its weaknesses are presented, and solutions are proposed to overcome these weaknesses. In Section 5, we propose an extension of Bayesian trustworthiness expectation function by introducing the initial trust parameters that allow assessing individuals’ initial disposition to trust, and furthermore, we propose a modification of Bayesian trustworthiness evaluation method for mitigating the impact of unfair feedback, both positive and negative. The experiments comparing our approach to the Bayesian standard trustworthiness method for the scenarios of receiving unfair ratings are presented in Section 6. Finally, a short conclusion is given in Section 7.

In a virtual world such as e-commerce marketplace, users can neither physically verify the quality of the exchange products before buying them, nor ensure the security of personal data, which creates an uncertainty and mistrust between the actors of the same network [2]. The growing uncertainty over the Internet is a critical obstacle to the success of transactions in the network [3], and it has been a concern of many researchers and practitioners who have suggested that trust is one of the critical factors that impact the success of virtual communities, especially, electronic commerce [4].

Trust systems have been proposed by many practitioners and researchers for a variety of applications, among them are the selection of trustworthy peers in a peer-to-peer network, the choice of good transaction partners for online auctioning such as E-bay, and the detection of misbehaving nodes in mobile ad hoc networks [5]. There is a trade-off between efficiency in using all the available information from both direct and indirect experience and robustness against false ratings [6]. If the feedback provided by others is considered, the trust evaluation system can be vulnerable to false praise and false accusations. However, if only one’s own experience is considered, then the relevant experience made by others remains unused. The use of only positive or only negative ratings makes the system susceptible either to false praise or false accusations accordingly.

The Bayesian trust model is one of the most popular trust models in the literature. This model has seduced many researchers [711] and practitioners for its simplicity and mathematical foundation. In addition, the Bayesian trust model takes into account both positive and negative information. It is a well-adapted model for deriving trustworthiness expectation of an entity based on the evidence collected from the past interactions, either individual or collective [12]. The work presented in [13] shows that the Bayesian model lacks the subjectivity and can be extended by introducing context-dependent parameters such as the initial disposition to trust. This subjective approach of trust is congruent with the definitions of trust provided in [14, 15]. Indeed, trust is the subjective expectation of an entity about the future actions of another based on the past observations (direct and indirect experience) of others [16, 17].

As in [13], this article focuses on the assessment of online users’ trustworthiness by combining evidence provided by different sources. Instead of using the Beta distribution function to compute individuals' global trustworthiness, which opens the door to malicious behavior, this article offers a consistent method to fairly compute entities' trustworthiness through evidence gathered from different sources.

3. System Description and Evidence Collecting

3.1. A Brief Description of the System

In this section, we give a brief description of the model. It should be noted that our proposed trust model is a modification of the Bayesian trust model. Let denote the set of users or peers in online community, and let be two interaction partners in the community. We define two categories of participants, namely, the service provider (seller) and the client (buyer) . Each service provider can be considered as a potential evaluator for another target entity, since the former can ask for service from the later and conversely. We assume that every actor in community can assess the trustworthiness of others in the system, based on direct and/or indirect experience. In order to assess the trustworthiness of the service provider, a trustor maintains the outcome of each interaction as he/she perceives it. At the same time, the trustee may assess the credibility behavior of an evaluator by recording his honesty evaluation at each interaction. For this reason, we propose that bilateral interactions in which both partners have mutual assessment obligations can be considered as two separate interactions where each actor is both an observer and a target element.

3.2. Collecting and Updating Evidence

We must keep in mind that the actors are unknown to each other and they are engaged in an interdependent relationship where some provide services that others consume. That said, the interactions are based on mutual trust built up over time by the partners. So it is important that each actor maintains the outcomes of past interactions with other actors in the system.

For easily integrating evidence derived from feedback of past interactions between users, we propose that each feedback score is a value in the range , where the lowest value expresses an interaction resulting in a total disappointment or dissatisfaction and the highest value () expresses a total satisfaction of an actor after an interaction takes place. We think that an observer may have different parameter for every interaction partner and for each th transaction. Thus, this parameter should be indexed by , , and .

Suppose that, after each th transaction, a peer receives the score from a peer . Then, the couple of satisfactory and unsatisfactory ratings is generated by and formulas. In other words, the parameters and are the amounts of satisfaction and dissatisfaction the peer has with the peer for each th transaction, respectively. It is worth noting that each value of the couple belongs to the set . Thus, the total amount of satisfaction and dissatisfaction of the peer towards peer after transactions will bewhere the parameter is nothing other than the weight of the derived evidence as introduced in [8]. The importance and the impact of this parameter are no longer to be demonstrated and to facilitate our experiments, we have set the value of to 0.5. It should be noted that, even if the approach is capable of handling continuous evidence (), the binary evidence only () is considered in this paper. Furthermore, assuming that the total amount of satisfaction and dissatisfaction a peer has with peer after transactions is and respectively, after transactions, the updated total amount of satisfaction and dissatisfaction the peer has with peer is given by

In our approach, we propose that each client continuously maintains the two amounts of evidence ( is omitted for brevity) obtained after a sequence of interactions with each of its service providers in the P2P network. Conversely, each service provider holds a pair of praise and complaint ( is omitted again for brevity) values during interactions with his clients, which are calculated analogous to . Furthermore, we assume that an opinion about a target entity is an aggregation of provided by all of its interacted peers in P2P network, and for more accuracy in assessing the behavior of a target peer , it is necessary to collect a relatively large amount of evidence about that service provider taking into account the credibility of each of its raters computed using pairs. We argue that the more evidence becomes available, the more people become enlightened and more certain about the behavior of their interaction partners.

4. Trust Computation

Trust is a subjective and complex concept that is difficult to assess with multiple factors of different degrees of importance that intervene in its construction. One of the most important factors of trust towards an entity is his reputation that involves subfactors such as competence, reliability, and credibility. Additional subfactors can be associated depending on the context; see [18] for more information about trust attributes. In our approach, we assume that a higher reputation induces more trust and conversely. In other words, an entity with a higher reputation will be considered more trustworthy. Thus, computing the degree of trustworthiness of entities in online community is a simple and fair way of assessing trust of their interaction partners. This will help sort the nodes, from good to bad, and make recommendations accordingly. In the following subsections, we first present the Bayesian representation of trust and its weaknesses, and then we propose solutions and a mathematical method to model users’ trustworthiness based on the evidence provided by their interaction partners.

4.1. Bayesian Representation of Trust

Our trustworthiness representation is based on the Beta probability density function presented in [8], which is suitable for representing binary events. Indeed, Bayesian inference is an important method in mathematical statistics and the Beta probability density function provides a sound mathematical basis for a simple and flexible estimation of trustworthiness ratings from a collection of evidence. As more evidence or information becomes available, Bayes' theorem is used to update the probability for a hypothesis. Indeed, an observer may believe that there is a probability defined on the interval such that a given target entity acts honestly without betraying, and simultaneously, there is a probability such that the target entity misbehaves during an interaction. The parameters are random variables, and each observer models the uncertainty by assuming that each random variable itself is drawn according to a prior distribution that is updated as new observations become available. This is how the Bayesian standard framework works. Furthermore, the Beta distribution is a family of continuous probability distributions defined on the interval parametrized by two positive shape parameters, denoted by and , which appears as exponents of the random variable and control the shape of the distribution.

The original Beta probability density function (Beta-PDF) denoted by is expressed using the gamma function as follows:where , , and , such that the probability variable if , and if .

The associated posterior trustworthiness expectation function denoted by is defined as follows [8, 15]:

By setting the parameters and , where represents the peer ’s total number of evaluators, the traditional (standard) Bayesian approach evaluates the trustworthiness of the peer based on the following formula:

4.2. The Weaknesses of Bayesian Model

In this section, we have enumerated some weaknesses of Bayesian trust model. In addition, we explained the necessity of introducing new parameters and show their impact on users’ trust behavior.

4.2.1. The Lack of Subjectivity

Most of the existing trust models use the uniform distribution as the prior trustworthiness value assigned initially to each target entity when there is no information or evidence available to support a hypothesis. We argue that, by considering the initial trustworthiness value as being uniform for any beginner in the system, the Bayesian standard model lacks realism with respect to the subjectivity view of trust. Moreover, we argue that the associated expectation value assigned initially to each beginner in the Bayesian standard model is unfair because a malicious actor whose reputation rating falls below 50% () will find it more advantageous to abandon his current account and reenter the system in order to get the prior trustworthiness value ().

4.2.2. The Vulnerabilities of the Bayesian Model

In the Bayesian trust and reputation model, trust is based essentially on the reputation that each actor builds as he interacts with other actors in the system. This trust evaluation model merely aggregates the binary feedback scores collected from peers. But sources of information are not always credible. We argue that trust systems that rely on the Bayesian standard trust model are flawed in several ways. Indeed, the Bayesian standard trust model is exposed to the following cheating behaviors:(i)The proliferation of unfair positive ratings: a service provider may corrupt a handful of evaluators through profit-sharing so that they evaluate positively the service offered, even if the service is of poor quality, leading an untrustworthy actor to end up getting a large number of false positive statements.(ii)The proliferation of unfair negative feedback: a peer can be a victim of false judgments; in this case, an unlucky service provider comes to interact with a handful of malicious peers who always accuse others by falsely claiming that they provide poor service, leading a trustworthy actor to end up getting a large number of false negative statements.

Bayesian trust model is susceptible to all of these problems. To overcome these issues, we introduce new parameters and we propose a consistent method to fairly compute user’s trustworthiness.

4.3. Proposed Solutions
4.3.1. Overcoming Initial Trust Problem

As outlined in Section 4.2.1, the Bayesian standard model lacks realism with respect to the subjectivity view of trust and opens the door to several types of malicious behavior in the peer community. Thus, instead of using the traditional uniform distribution, , and its associated expectation value, , to compute the prior trustworthiness of each beginner in the absence of evidence, we introduce a new parameter called trust propensity to allow each observer to express personal disposition to trust before any prior interaction with an unknown target entity. The trust propensity is a subjective and context-dependent parameter. For example, an observer's trust propensity with an unknown entity in a file-sharing scenario might be higher, but this might be not true for online shopping or online medical advice. The benevolence, the expertise, the context, and other factors (see [18]) are trust attributes that affect highly human mind.

4.3.2. Overcoming Unfair Ratings

Even if the problem of unfair feedback, both positive and negative, seems unsolvable due to the subjective nature of peers’ behavior (i.e., the probability of performing a particular action depends on personal characteristics), we can develop strategies to deter malicious users and also minimize the impact of the unfair judgments they provide.

First, we propose a mechanism for gathering evidence, where the two parties (service provider and client) rate each other. On the one hand, customers have the opportunity to evaluate service providers through the quality of their services after each interaction; on the other hand, service providers have the opportunity to report the dishonest behavior of malicious evaluators. This mechanism is similar to a game, where each party will seek to cooperate with the other in order to avoid tensions that could bring down the reputations of both parties.

Secondly, we propose a simple trustworthiness computation model in which individuals make choices by weighting the impact of their decisions on themselves and others. The new trustworthiness rating function is an extension of the Bayesian standard trustworthiness expectation function. This was possible by integrating each evaluator’s credibility rating into the trustworthiness expectation function presented in [711].

4.3.3. Integration of New Parameters

Similar to [13, 15], we introduced the parameters and for expressing player ’s prior knowledge about player . These two parameters are, respectively, known as player ’s prior believe and prior disbelieve, and they allow introducing another parameter known as player ’s trust propensity. For more details on trust propensity, see [1921]. This leads us to define two important concepts: initial belief and initial disbelief.

Definition 1 (initial belief and disbelief). Initially (before any interaction happens between two peers), a service consumer (peer ) may naturally assume that there exists a parameter so that the service provider (peer ) will act honestly, based solely on its personal characteristics. Then, the parameters , respectively, , called peer ’s initial belief, respectively, initial disbelief, are defined as follows:where, , and is the set of peers. Another trust factor introduced in this model is the credibility of each evaluator which is defined as follows.

Definition 2 (credibility). Let represent the total number of complaints a client receives from all of its interaction partners after transactions. The parameter , defined byis peer ’s credibility, where and . Later we will use and instead of and accordingly for the brevity.

5. Extending Bayesian Trust

Let , be the set of evaluators of peer . The couple of positive and negative feedback that have been collected from the peer’s individual experience with the peer are the main parameters used to compute the degree of trustworthiness of the target entity in this model. Therefore, we compute the expected posterior trustworthiness value of the peer using the parameters and defined in the Beta distribution function, and the credibility rating of each source of evidence, where and represent the initial trust parameters of an individual as defined above (see (6)).

If we consider and as the total number of positive and negative feedback, respectively, received by peer from peer after interactions, then ’s modified posterior trustworthiness expectation value recorded by peer can be denoted by and defined as follows:where and denote the initial belief and the initial disbelief, respectively, and . Now that the ingredients are gathered, we can define the trustworthiness rating of each peer in the network.

Definition 3 (trustworthiness rating). Let , be the set of evaluators of peer . Let be the peer ’s credibility given by (7), and let be the peer ’s trustworthiness expectation value provided by peer and defined by (8); the peer ’s trustworthiness rating is defined as follows:Note that the trustworthiness rating is a value between 0 and 1. As a matter of fact, is the aggregation of the trustworthiness expectation values built based on the provided ratings by each evaluator of the peer .
In contrast, the traditional (standard) Bayesian approach evaluates the trustworthiness of the peer based on (5).

6. Experiments

Similar to [22], the experiments are conducted on a variety of P2P networks which are simulated with maximum number of 32, 64, and 128 users. 80% of the population consists of clients and the rest consists of service providers. The number of maximum interactions between a pair of client and service provider is set to . Interactions are created randomly for connecting clients to service providers. The experiments are performed from a time and contain records of transactions rating information. An example of records table is given in Table 1.

ClientId and ServiceProviderId fields are self-descriptive, is the total number of interactions between a pair of client and service provider, and are the number of positive and negative feedback accordingly, given by the client to the service provider, and and are the number of praise and complaints coming from a service provider to the client accordingly. The following scenarios are considered.

In all of the figures discussed below, the formula to compute the trustworthiness of a peer in case of the traditional (standard) Bayesian method is given by (5) (see Section 4.1) and in case of our proposed modified Bayesian approach the formula is presented by (9) (see Section 5).

The first case (Figures 1, 2, and 3) is devoted to observe the impact of a large amount of unfair positive ratings from a client. In order to accomplish the task, a service provider (seller) is chosen from the network that has no very high reputation and among his credible clients (buyer) one is chosen that is going to give only positive feedback for all of the next interactions. So, the client will also receive only praise ratings from the service provider. In order to observe the dependency of the evaluated trustworthiness of the service provider from the client's feedback, other clients are not interacting with him after time . Our method is compared with the traditional (standard) Bayesian method to measure the trustworthiness of the service provider after time .

Figures 1, 2, and 3 are presented for a randomly initiated P2P network where total number of users are 32, 64, and 128 accordingly.

From Figures 1, 2, and 3, it can be seen that our method is more robust against unfair positive feedback from a client compared to the traditional Bayesian method. Indeed, after the growing number of positive feedback, the perceived trustworthiness of the service provider, calculated by our approach, stops growing in practice, and the further corruption of the client is becoming useless. In contrast, using Bayesian traditional method, the trustworthiness of the service provider is enhanced significantly and tends to 1.

The second scenario explores the influence of giving unfair negative feedback to a reputable service provider. As mentioned in Section 4.2.2, this can happen when a competitor registers in the system as a client and tries to bring down the business of his college. It can also happen when a buyer colludes with a seller to badmouth the seller’s competitors, resulting in gains to the seller. A reputable service provider is chosen from a simulated network at a time , and one of his less active clients (the precondition that the client just starting the slandering attack) is chosen to start misbehaving. From that time, interactions are only between this pair of users. Other clients do not interact with the service provider for the same reason as in the first scenario. Figures 4, 5, and 6 are presented for randomly initiated networks where total number of users is 32, 64, and 128 accordingly.

From Figures 4, 5, and 6, it can be seen that our method mitigates the slandering attacks from the client compared to the traditional Bayesian approach and the measured trustworthiness of the service provider does not reduce much after the growing number of negative feedback from the client. Furthermore, the reducing speed is also decreasing and after some point it stops practically.

In all of Figures 16, at the time the curve of Bayesian traditional method and our approach starts from different trustworthiness scores owing to the different prior parameters and . As mentioned before, when there is no interactions, the standard Bayesian approach assigns the value 0.5 to the prior trustworthiness expectation. Moreover, all of the clients are assumed to be equally credible while giving their ratings. As a result, in most cases standard Bayesian trustworthiness assessment method is more sensible to both positive and negative feedback. In contrast, our method enables us to define the initial trust parameters and subjectively. In the experiments the values are given pseudo randomly, so, in some cases at the time our approach can evaluate the trustworthiness of an observed service provider higher (Figures 3 and 5) or lower (Figures 1, 2, 4, and 6) compared to the standard Bayesian method. In fact, in all of the cases our method is more resistive to growing number of both positive and negative feedback.

The experiments are conducted on many datasets and only few figures are presented. The source code in Java and the simulated networks in  .xlsx extension can be found in [23]. Figures 1, 2, and 3 are based on records131.xlsx, records132.xlsx, and records19.xlsx accordingly, and Figures 4, 5, and 6 are from records130.xlsx, records67.xlsx, and records125.xlsx accordingly.

7. Conclusion and Future Work

In sum, due to the problems concerned with the standard Bayesian trustworthiness evaluation approach such as vulnerability to unfair ratings, both positive and negative, in the case of having a low trustworthiness value, a malicious service provider will find it advantageous to reenter the system to return the initial higher trustworthiness expectation value 0.5; therefore, we propose a novel modified Bayesian trustworthiness evaluation method to overcome these problems.

We claim the necessity of giving bidirectional ratings in P2P networks. In this way, based on the ratings elicited from service providers the credibility of each of their clients is computed. Then, the trustworthiness scores of the service providers are calculated taking into account the credibility of each of their evaluators. As a matter of fact, the performed experiments have shown that our proposed approach performs better while dealing with unfair ratings, both positive and negative, compared to the traditional Bayesian trustworthiness method in the discussed scenarios. Thus, our mechanism enables us to diminish the influence of these ratings significantly. In addition, by considering the initial trust to be subjective, we give more flexibility to model the client’s initial trust with its further dynamic changes and make it useless to reenter the system by expecting to have a higher initial trustworthiness. As a result, our method is more resistive against these abovementioned problems than the standard Bayesian trustworthiness approach.

In future works, we are going to discuss how to integrate an aging factor in our method because users may change their behavior over time; therefore, it is desirable to define a way to give greater weights to more recent ratings. Moreover, we will observe techniques to detect malicious users and approaches to treat them. Thus, by tackling the problem by more ways, the behavior of untrustworthy peers can be handled better; as a consequence, we can improve the performance of our approach.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 91118002).