Mathematical Problems in Engineering

Volume 2018, Article ID 5636319, 8 pages

https://doi.org/10.1155/2018/5636319

## A Modified Bayesian Trustworthiness Evaluation Method to Mitigate the Effect of Unfair Ratings

Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, School of Software, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Haidian District, Beijing 100876, China

Correspondence should be addressed to Manawa Anakpa; rf.oohay@apkana

Received 4 July 2017; Revised 31 October 2017; Accepted 10 December 2017; Published 14 May 2018

Academic Editor: Rattikorn Hewett

Copyright © 2018 Manawa Anakpa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The choice of trustworthy interaction partners is one of the key factors for successful transactions in online communities. To choose the most trustworthy sellers to interact with, buyers rely on trust and reputation models. Therefore, online systems must be able to accurately assess peers’ trustworthiness. The Beta distribution function provides a sound mathematical basis for combining feedback and deriving users’ trustworthiness. But the Beta reputation system suffers from many forms of cheating behavior such as the proliferation of unfair positive ratings, leading a poor service provider to build a good reputation, and the proliferation of unfair negative feedback, leading a good service provider to end up with a bad reputation. In this paper, we propose a new and coherent method for computing users’ trustworthiness by combining the Beta trustworthiness expectation function with the credibility function. This novel combination mechanism mitigates the impact of unfair ratings. In comparison with Bayesian trust model, we quantitatively show that our approach provides significantly more accurate estimation of peers’ trustworthiness through feedback gathered from multiple sources. Furthermore, we propose an extension of Bayesian trustworthiness expectation function by introducing the initial trust propensity to allow assessing individuals’ initial trust.

#### 1. Introduction

Social networks and e-commerce platforms are developed to enable social actors to share information and develop lucrative activities. These platforms have been successful in terms of security, but their openness and their ability to accommodate a large number of players make them vulnerable due to the proportionate number of malicious users that is associated. If the Public Key Infrastructure (PKI) has been designed to address key management issues, it has also created trust management problems. Indeed, the certificates provide information about the identity of actors without giving any information about their behavior. In addition, an actor may change his name or profile so that others will never recognize him as a malicious user. This has increased uncertainty among partners in peer community networks. As in [1], we argue that certificates alone are not enough and that we must also take into account the behavior of all participants in opportunistic networks such as social networks and e-commerce platforms. Currently, many researchers are focused on developing new techniques that address both device performance and users’ behavior.

The information society technologies should be able to accurately track the behavior of its users and make fair recommendations, warning, and sanction decisions depending on the context and the level of damage caused by any malicious user. To do so, we propose an adaptive evidence gathering mechanism that makes use of all the available information, that is, both positive and negative evidence, both from individual and collective experience. In order to reduce the impact of unfair ratings (both positive and negative) faced by Bayesian reputation systems, a novel trust evaluation model is proposed for quantifying users’ trustworthiness.

This paper is organized as follows. Section 2 presents the related work. In Section 3, we describe the system and explain how feedback (or evidence) is collected and updated. In Section 4, the Bayesian trust model and its weaknesses are presented, and solutions are proposed to overcome these weaknesses. In Section 5, we propose an extension of Bayesian trustworthiness expectation function by introducing the initial trust parameters that allow assessing individuals’ initial disposition to trust, and furthermore, we propose a modification of Bayesian trustworthiness evaluation method for mitigating the impact of unfair feedback, both positive and negative. The experiments comparing our approach to the Bayesian standard trustworthiness method for the scenarios of receiving unfair ratings are presented in Section 6. Finally, a short conclusion is given in Section 7.

#### 2. Related Work

In a virtual world such as e-commerce marketplace, users can neither physically verify the quality of the exchange products before buying them, nor ensure the security of personal data, which creates an uncertainty and mistrust between the actors of the same network [2]. The growing uncertainty over the Internet is a critical obstacle to the success of transactions in the network [3], and it has been a concern of many researchers and practitioners who have suggested that trust is one of the critical factors that impact the success of virtual communities, especially, electronic commerce [4].

Trust systems have been proposed by many practitioners and researchers for a variety of applications, among them are the selection of trustworthy peers in a peer-to-peer network, the choice of good transaction partners for online auctioning such as E-bay, and the detection of misbehaving nodes in mobile ad hoc networks [5]. There is a trade-off between efficiency in using all the available information from both direct and indirect experience and robustness against false ratings [6]. If the feedback provided by others is considered, the trust evaluation system can be vulnerable to false praise and false accusations. However, if only one’s own experience is considered, then the relevant experience made by others remains unused. The use of only positive or only negative ratings makes the system susceptible either to false praise or false accusations accordingly.

The Bayesian trust model is one of the most popular trust models in the literature. This model has seduced many researchers [7–11] and practitioners for its simplicity and mathematical foundation. In addition, the Bayesian trust model takes into account both positive and negative information. It is a well-adapted model for deriving trustworthiness expectation of an entity based on the evidence collected from the past interactions, either individual or collective [12]. The work presented in [13] shows that the Bayesian model lacks the subjectivity and can be extended by introducing context-dependent parameters such as the initial disposition to trust. This subjective approach of trust is congruent with the definitions of trust provided in [14, 15]. Indeed, trust is the subjective expectation of an entity about the future actions of another based on the past observations (direct and indirect experience) of others [16, 17].

As in [13], this article focuses on the assessment of online users’ trustworthiness by combining evidence provided by different sources. Instead of using the Beta distribution function to compute individuals' global trustworthiness, which opens the door to malicious behavior, this article offers a consistent method to fairly compute entities' trustworthiness through evidence gathered from different sources.

#### 3. System Description and Evidence Collecting

##### 3.1. A Brief Description of the System

In this section, we give a brief description of the model. It should be noted that our proposed trust model is a modification of the Bayesian trust model. Let denote the set of users or peers in online community, and let be two interaction partners in the community. We define two categories of participants, namely, the service provider (seller) and the client (buyer) . Each service provider can be considered as a potential evaluator for another target entity, since the former can ask for service from the later and conversely. We assume that every actor in community can assess the trustworthiness of others in the system, based on direct and/or indirect experience. In order to assess the trustworthiness of the service provider, a trustor maintains the outcome of each interaction as he/she perceives it. At the same time, the trustee may assess the credibility behavior of an evaluator by recording his honesty evaluation at each interaction. For this reason, we propose that bilateral interactions in which both partners have mutual assessment obligations can be considered as two separate interactions where each actor is both an observer and a target element.

##### 3.2. Collecting and Updating Evidence

We must keep in mind that the actors are unknown to each other and they are engaged in an interdependent relationship where some provide services that others consume. That said, the interactions are based on mutual trust built up over time by the partners. So it is important that each actor maintains the outcomes of past interactions with other actors in the system.

For easily integrating evidence derived from feedback of past interactions between users, we propose that each feedback score is a value in the range , where the lowest value expresses an interaction resulting in a total disappointment or dissatisfaction and the highest value () expresses a total satisfaction of an actor after an interaction takes place. We think that an observer may have different parameter for every interaction partner and for each th transaction. Thus, this parameter should be indexed by , , and .

Suppose that, after each th transaction, a peer receives the score from a peer . Then, the couple of satisfactory and unsatisfactory ratings is generated by and formulas. In other words, the parameters and are the amounts of satisfaction and dissatisfaction the peer has with the peer for each th transaction, respectively. It is worth noting that each value of the couple belongs to the set . Thus, the total amount of satisfaction and dissatisfaction of the peer towards peer after transactions will bewhere the parameter is nothing other than the weight of the derived evidence as introduced in [8]. The importance and the impact of this parameter are no longer to be demonstrated and to facilitate our experiments, we have set the value of to 0.5. It should be noted that, even if the approach is capable of handling continuous evidence (), the binary evidence only () is considered in this paper. Furthermore, assuming that the total amount of satisfaction and dissatisfaction a peer has with peer after transactions is and respectively, after transactions, the updated total amount of satisfaction and dissatisfaction the peer has with peer is given by

In our approach, we propose that each client continuously maintains the two amounts of evidence ( is omitted for brevity) obtained after a sequence of interactions with each of its service providers in the P2P network. Conversely, each service provider holds a pair of praise and complaint ( is omitted again for brevity) values during interactions with his clients, which are calculated analogous to . Furthermore, we assume that an opinion about a target entity is an aggregation of provided by all of its interacted peers in P2P network, and for more accuracy in assessing the behavior of a target peer , it is necessary to collect a relatively large amount of evidence about that service provider taking into account the credibility of each of its raters computed using pairs. We argue that the more evidence becomes available, the more people become enlightened and more certain about the behavior of their interaction partners.

#### 4. Trust Computation

Trust is a subjective and complex concept that is difficult to assess with multiple factors of different degrees of importance that intervene in its construction. One of the most important factors of trust towards an entity is his reputation that involves subfactors such as competence, reliability, and credibility. Additional subfactors can be associated depending on the context; see [18] for more information about trust attributes. In our approach, we assume that a higher reputation induces more trust and conversely. In other words, an entity with a higher reputation will be considered more trustworthy. Thus, computing the degree of trustworthiness of entities in online community is a simple and fair way of assessing trust of their interaction partners. This will help sort the nodes, from good to bad, and make recommendations accordingly. In the following subsections, we first present the Bayesian representation of trust and its weaknesses, and then we propose solutions and a mathematical method to model users’ trustworthiness based on the evidence provided by their interaction partners.

##### 4.1. Bayesian Representation of Trust

Our trustworthiness representation is based on the Beta probability density function presented in [8], which is suitable for representing binary events. Indeed, Bayesian inference is an important method in mathematical statistics and the Beta probability density function provides a sound mathematical basis for a simple and flexible estimation of trustworthiness ratings from a collection of evidence. As more evidence or information becomes available, Bayes' theorem is used to update the probability for a hypothesis. Indeed, an observer may believe that there is a probability defined on the interval such that a given target entity acts honestly without betraying, and simultaneously, there is a probability such that the target entity misbehaves during an interaction. The parameters are random variables, and each observer models the uncertainty by assuming that each random variable itself is drawn according to a prior distribution that is updated as new observations become available. This is how the Bayesian standard framework works. Furthermore, the Beta distribution is a family of continuous probability distributions defined on the interval parametrized by two positive shape parameters, denoted by and , which appears as exponents of the random variable and control the shape of the distribution.

The original Beta probability density function (Beta-PDF) denoted by is expressed using the gamma function as follows:where , , and , such that the probability variable if , and if .

The associated posterior trustworthiness expectation function denoted by is defined as follows [8, 15]:

By setting the parameters and , where represents the peer ’s total number of evaluators, the traditional (standard) Bayesian approach evaluates the trustworthiness of the peer based on the following formula:

##### 4.2. The Weaknesses of Bayesian Model

In this section, we have enumerated some weaknesses of Bayesian trust model. In addition, we explained the necessity of introducing new parameters and show their impact on users’ trust behavior.

###### 4.2.1. The Lack of Subjectivity

Most of the existing trust models use the uniform distribution as the prior trustworthiness value assigned initially to each target entity when there is no information or evidence available to support a hypothesis. We argue that, by considering the initial trustworthiness value as being uniform for any beginner in the system, the Bayesian standard model lacks realism with respect to the subjectivity view of trust. Moreover, we argue that the associated expectation value assigned initially to each beginner in the Bayesian standard model is unfair because a malicious actor whose reputation rating falls below 50% () will find it more advantageous to abandon his current account and reenter the system in order to get the prior trustworthiness value ().

###### 4.2.2. The Vulnerabilities of the Bayesian Model

In the Bayesian trust and reputation model, trust is based essentially on the reputation that each actor builds as he interacts with other actors in the system. This trust evaluation model merely aggregates the binary feedback scores collected from peers. But sources of information are not always credible. We argue that trust systems that rely on the Bayesian standard trust model are flawed in several ways. Indeed, the Bayesian standard trust model is exposed to the following cheating behaviors:(i)The proliferation of unfair positive ratings: a service provider may corrupt a handful of evaluators through profit-sharing so that they evaluate positively the service offered, even if the service is of poor quality, leading an untrustworthy actor to end up getting a large number of false positive statements.(ii)The proliferation of unfair negative feedback: a peer can be a victim of false judgments; in this case, an unlucky service provider comes to interact with a handful of malicious peers who always accuse others by falsely claiming that they provide poor service, leading a trustworthy actor to end up getting a large number of false negative statements.

Bayesian trust model is susceptible to all of these problems. To overcome these issues, we introduce new parameters and we propose a consistent method to fairly compute user’s trustworthiness.

##### 4.3. Proposed Solutions

###### 4.3.1. Overcoming Initial Trust Problem

As outlined in Section 4.2.1, the Bayesian standard model lacks realism with respect to the subjectivity view of trust and opens the door to several types of malicious behavior in the peer community. Thus, instead of using the traditional uniform distribution, , and its associated expectation value, , to compute the prior trustworthiness of each beginner in the absence of evidence, we introduce a new parameter called trust propensity to allow each observer to express personal disposition to trust before any prior interaction with an unknown target entity. The trust propensity is a subjective and context-dependent parameter. For example, an observer's trust propensity with an unknown entity in a file-sharing scenario might be higher, but this might be not true for online shopping or online medical advice. The benevolence, the expertise, the context, and other factors (see [18]) are trust attributes that affect highly human mind.

###### 4.3.2. Overcoming Unfair Ratings

Even if the problem of unfair feedback, both positive and negative, seems unsolvable due to the subjective nature of peers’ behavior (i.e., the probability of performing a particular action depends on personal characteristics), we can develop strategies to deter malicious users and also minimize the impact of the unfair judgments they provide.

First, we propose a mechanism for gathering evidence, where the two parties (service provider and client) rate each other. On the one hand, customers have the opportunity to evaluate service providers through the quality of their services after each interaction; on the other hand, service providers have the opportunity to report the dishonest behavior of malicious evaluators. This mechanism is similar to a game, where each party will seek to cooperate with the other in order to avoid tensions that could bring down the reputations of both parties.

Secondly, we propose a simple trustworthiness computation model in which individuals make choices by weighting the impact of their decisions on themselves and others. The new trustworthiness rating function is an extension of the Bayesian standard trustworthiness expectation function. This was possible by integrating each evaluator’s credibility rating into the trustworthiness expectation function presented in [7–11].

###### 4.3.3. Integration of New Parameters

Similar to [13, 15], we introduced the parameters and for expressing player ’s prior knowledge about player . These two parameters are, respectively, known as player ’s prior believe and prior disbelieve, and they allow introducing another parameter known as player ’s trust propensity. For more details on trust propensity, see [19–21]. This leads us to define two important concepts: initial belief and initial disbelief.

*Definition 1 (initial belief and disbelief). *Initially (before any interaction happens between two peers), a service consumer (peer ) may naturally assume that there exists a parameter so that the service provider (peer ) will act honestly, based solely on its personal characteristics. Then, the parameters , respectively, , called peer ’s initial belief, respectively, initial disbelief, are defined as follows:where, , and is the set of peers. Another trust factor introduced in this model is the credibility of each evaluator which is defined as follows.

*Definition 2 (credibility). *Let represent the total number of complaints a client receives from all of its interaction partners after transactions. The parameter , defined byis peer ’s credibility, where and . Later we will use and instead of and accordingly for the brevity.

#### 5. Extending Bayesian Trust

Let , be the set of evaluators of peer . The couple of positive and negative feedback that have been collected from the peer’s individual experience with the peer are the main parameters used to compute the degree of trustworthiness of the target entity in this model. Therefore, we compute the expected posterior trustworthiness value of the peer using the parameters and defined in the Beta distribution function, and the credibility rating of each source of evidence, where and represent the initial trust parameters of an individual as defined above (see (6)).

If we consider and as the total number of positive and negative feedback, respectively, received by peer from peer after interactions, then ’s modified posterior trustworthiness expectation value recorded by peer can be denoted by and defined as follows:where and denote the initial belief and the initial disbelief, respectively, and . Now that the ingredients are gathered, we can define the trustworthiness rating of each peer in the network.

*Definition 3 (trustworthiness rating). *Let , be the set of evaluators of peer . Let be the peer ’s credibility given by (7), and let be the peer ’s trustworthiness expectation value provided by peer and defined by (8); the peer ’s trustworthiness rating is defined as follows:Note that the trustworthiness rating is a value between 0 and 1. As a matter of fact, is the aggregation of the trustworthiness expectation values built based on the provided ratings by each evaluator of the peer .

In contrast, the traditional (standard) Bayesian approach evaluates the trustworthiness of the peer based on (5).

#### 6. Experiments

Similar to [22], the experiments are conducted on a variety of P2P networks which are simulated with maximum number of 32, 64, and 128 users. 80% of the population consists of clients and the rest consists of service providers. The number of maximum interactions between a pair of client and service provider is set to . Interactions are created randomly for connecting clients to service providers. The experiments are performed from a time and contain records of transactions rating information. An example of records table is given in Table 1.