Mathematical Problems in Engineering
Volume 2015 (2015), Article ID 789820, 9 pages
http://dx.doi.org/10.1155/2015/789820
Research Article

Modeling Dynamic Trust and Risk Evaluation Based on High-Order Moments

Yan Gao,1,2 Zhiyong Dai,1,2 and Wenfen Liu1,2

1Zhengzhou Information Science and Technology Institute, Zhengzhou 450001, China
2State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, China

Received 16 April 2014; Revised 5 August 2014; Accepted 14 August 2014

Academic Editor: Alessandro Palmeri

Copyright © 2015 Yan Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper proposes a dynamic trust and risk evaluation model based on high-order moments. The credibility of an entity is measured comprehensively with a trust degree and a risk value. First, considering the dynamic and time-decaying characteristics of trust, a time attenuation function is defined and direct trust is expressed with it. Subsequently, to improve the accuracy of feedback trust, a filter mechanism combining the coefficient of skewness with hypothesis testing is constructed to eliminate false feedback. More importantly, the weights of direct trust and feedback trust are derived adaptively from the moments and frequency of direct interactions. Furthermore, risk is evaluated through direct risk and feedback risk, obtained mainly with the coefficient of variation and the coefficient of kurtosis; the risk value measures the stability with which services are provided. Simulation results show that the proposed model not only has high accuracy but also effectively resists collusive attacks and strategic malicious behaviors.

1. Introduction

With the rapid development of network technology, peer-to-peer networks have been widely applied in a variety of distributed application systems, such as P2P file sharing, electronic commerce [1], and cloud computing, which make Internet applications simpler and more efficient. However, guaranteeing the security and credibility of these network application environments is very difficult because of their openness, anonymity, and high dynamics, so providing safe and reliable services poses serious technical challenges. Trust management has become an effective "soft security" technology for solving trust-related network security issues [2].

In recent years, trust relationships under different application conditions have been analyzed [3], and many scholars have established trust models to study dynamic trust relationships with different mathematical methods and tools [4]; typical examples include EigenTrust [5], PeerTrust [6], PowerTrust [7], and GossipTrust [8].

At present, the weighted average method is used to quantify trust in most of the literature. Li et al. [9] presented trust management for large-scale P2P networks, where direct trust and feedback trust are computed considering time decay and distance attenuation, respectively, and the corresponding weights can be adjusted adaptively. On this basis, [10] analyzed five factors related to the trust relationship (history, availability, feedback, motivation, and risk) and constructed a trust model based on the idea of multisource information fusion, in which the weights are assigned by WMA-OWA algorithms. Xue et al. [11] put forward a robust reputation system for structured P2P networks in which reputation managers are responsible only for local information rather than global reputation, which effectively resists collusion attacks. In addition, by distinguishing the abilities of providing services and of recommending, it makes trust quantification more accurate and reliable.

Besides the weighted average method, trust models have been built with probabilistic and statistical methods. Wang et al. [12] proposed a robust linear Markov (RLM) reputation model with a Kalman aggregation method based on prediction variance, which defends against malicious attacks by using the EM algorithm to calibrate the dynamic parameters autonomously. Ayday and Fekri [13] reduced the reputation management problem to the inference problem of calculating a marginal distribution and presented a centralized iterative method using factor graphs and belief propagation. Considering that trust is a dynamic process related to context, Liu and Datta [14] established a context-aware trust model using a hidden Markov chain, in which the interaction results are the states of the Markov chain and the characteristics of the context information form the observation sequence. Their experiments show that this model has lower false positive and false negative rates.

Although much progress has been made in existing research, some problems remain, mainly in the following respects. First, dynamics is the biggest challenge in the evaluation and forecasting of trust relationships, because a trust relationship changes dynamically over time and exhibits time attenuation: interaction information farther from the current time has a weaker influence on the trust relationship. Depicting the time decay precisely while improving accuracy and dynamic adaptation is therefore very difficult. Secondly, filtering false feedback is a prerequisite for guaranteeing the correctness of feedback trust. Attacks against trust systems emerge endlessly [15, 16]; malicious entities may provide false interaction information about a target entity to the requester. Existing models only identify and defend against simple attacks and fraud but lack protection mechanisms against complex attacks such as collusion, spying, and strategic attacks, which leads to poor security and robustness. Further, in constructing quantitative models with weighted average methods, setting the corresponding weights of direct trust and feedback trust is key. Existing setting methods are strongly subjective, inflexible, and difficult to adjust dynamically, which causes a lack of adaptability and ultimately affects the trust decision. Lastly, risk evaluation should be viewed as an objective reference for trust computation. Current research on trust often ignores the impact of risk, so trust computation does not fully reflect the credibility of entity behavior.

To address the above problems, this paper models behavior trust and risk evaluation using high-order moments. First of all, a time attenuation function is defined to characterize the dynamics and time decay of trust more flexibly. Based on this, direct trust is computed from the moments and frequency of the direct interactions between entities. Subsequently, before feedback trust is calculated, the feedback interaction information is filtered through a statistical model built on the standard coefficient of skewness and hypothesis testing, eliminating false feedback; the feedback trust is then calculated with the time attenuation function. Next, considering the number and moments of direct interactions, we give a weight setting method that is adaptive and dynamically adjusted as interactions continue. On this basis, to fully study the target entity's ability to provide services, the stability of its behavior is measured by analyzing the dispersion and sharpness, via the coefficient of variation and the standard coefficient of kurtosis, of two samples: the direct interactions and the honest feedback information. A risk evaluation model is then given with the weighted average method. To measure the ability of the target entity, the trust and risk values are both taken into account in making a trust decision.

The remainder of this paper is organized as follows. Section 2 presents a quantitative trust model by computing direct trust, feedback trust, and their respective weights. A risk evaluation model is established in Section 3 by measuring direct risk and feedback risk comprehensively. Section 4 discusses the accuracy, dynamics, and robustness of the proposed model under several attacks and compares it with related work in simulation experiments. Finally, Section 5 concludes the paper.

2. A Quantitative Model for Trust

This section gives the computational expressions of direct trust and feedback trust; total trust is then quantified with the weighted average method. Let entity A be any service requester and entity B any service provider. Denote by T(t) the total trust of entity A in entity B at moment t; it is calculated as
T(t) = ω_d T_d(t) + ω_f T_f(t), with ω_d + ω_f = 1, (1)
where T_d(t) is the direct trust computed from the direct interactions between the two entities and T_f(t) is the feedback trust obtained through the interactions between other entities and entity B; ω_d and ω_f are the weights of T_d(t) and T_f(t), respectively. The following subsections give the computation methods of T_d(t), T_f(t), and the weights. For the sake of simplicity, the argument t in (1) is omitted hereinafter.
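As a minimal sketch of the weighted-average aggregation in (1), assuming illustrative function and parameter names rather than the paper's notation:

```python
def total_trust(t_direct, t_feedback, w_direct):
    """Weighted average of direct and feedback trust, weights summing to 1.

    Mirrors the total-trust formula (1); the names are illustrative.
    """
    if not 0.0 <= w_direct <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return w_direct * t_direct + (1.0 - w_direct) * t_feedback

# direct experience dominates when its weight is high
assert abs(total_trust(0.9, 0.5, 0.75) - 0.8) < 1e-12
```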

As we know, trust has the characteristic of time decay; that is, interaction results farther from the current time have a weaker influence on the current trust value. Consequently, only interactions within a certain period close to the current time need to be analyzed to obtain the current trust degree. This period is divided into m successive subintervals, with the m-th subinterval being the closest to the current moment.

2.1. Direct Trust Computing

For the i-th subinterval (i = 1, …, m), assume that there were n_i interactions between entities A and B in that subinterval, and denote by s_ij ∈ [0, 1] the satisfaction degree of entity A at the j-th such interaction (j = 1, …, n_i).

Accordingly, T_d can be represented as a function of these satisfaction degrees, and the key is to determine how each subinterval is weighted. Considering the dynamics and time decay of trust, a time attenuation function is first introduced.

Definition 1 (time attenuation function). The time attenuation function f_ρ(i) assigns a weight to the i-th subinterval, where the parameter ρ is used to adjust the attenuation speed.

Through analysis, it is known that f_ρ(i) ∈ (0, 1] and that f_ρ(i) is a strictly monotonically increasing function of i; that is, more recent subintervals receive larger weights. The function is depicted in Figure 1.

Figure 1: The time attenuation function.

From Figure 1, the rate of decay differs with different values of ρ. As a result, the relative influence of interactions in each interval is distinguished. Additionally, this form is adopted mainly to meet the requirements of the weight setting presented in Section 2.3.
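Definition 1's exact expression was lost in extraction and is not reproduced above, so the following sketch assumes a simple exponential form with decay parameter rho; it only illustrates the stated properties (weights in (0, 1], strictly increasing toward the most recent subinterval):

```python
import math

def attenuation(i, m, rho=1.0):
    """Illustrative time attenuation weight for the i-th of m subintervals.

    Subinterval m is the most recent; older subintervals (smaller i)
    receive exponentially smaller weights, and rho adjusts the
    attenuation speed as in Definition 1. The exponential form is an
    assumption; the paper's exact expression differs.
    """
    return math.exp(-rho * (m - i))

weights = [attenuation(i, 5) for i in range(1, 6)]
# weights are strictly increasing in i, and the newest subinterval gets 1
assert all(weights[k] < weights[k + 1] for k in range(len(weights) - 1))
assert weights[-1] == 1.0
```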

Combining the direct interactions with time attenuation function (6), the computation expression for direct trust is as follows.

Definition 2 (direct trust T_d). The direct trust T_d is the time-attenuation-weighted average of the satisfaction degrees over the m subintervals, with the weight of the i-th subinterval given by f_ρ(i).
Set T_d = 0 if there were no direct interactions. Since every s_ij ∈ [0, 1], we have T_d ∈ [0, 1]. For a particular choice of the parameter, this expression is similar to the direct trust in [9]. It is worth mentioning that an entity is able to choose the parameter ρ according to its own situation, which better reflects the personalization character of trust.
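Building on the same assumed exponential attenuation (not the paper's exact function), the direct trust of Definition 2 can be sketched as a decay-weighted average of per-subinterval satisfaction, with the stated convention that it is 0 when there are no interactions:

```python
import math

def direct_trust(satisfaction, rho=1.0):
    """Decay-weighted average of per-subinterval mean satisfaction.

    satisfaction[i] holds the satisfaction degrees (in [0, 1]) of the
    interactions in subinterval i + 1; the last subinterval is the most
    recent. Returns 0 when there were no interactions, matching the
    convention T_d = 0 in Definition 2. The exponential weights are an
    illustrative assumption, not the paper's attenuation function.
    """
    m = len(satisfaction)
    num = den = 0.0
    for i, group in enumerate(satisfaction, start=1):
        if not group:
            continue
        w = math.exp(-rho * (m - i))        # newer subintervals weigh more
        num += w * sum(group) / len(group)
        den += w
    return num / den if den else 0.0

# recent good behavior outweighs older poor behavior
t = direct_trust([[0.2, 0.3], [0.5], [0.9, 1.0]])
assert 0.5 < t < 1.0
assert direct_trust([]) == 0.0
```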

2.2. Feedback Trust Computing

After entity A receives the recommended interaction records about entity B from several entities, it is necessary to filter this information, eliminating the false records and retaining the true ones, which is a precondition for correctly computing the feedback trust T_f. The storage and distribution of interaction information, the filter mechanism, and the feedback trust are introduced in the following.

2.2.1. Storage and Distribution of Feedback Information

This model adopts DHT-based storage and distribution for the interaction information of any entity. In order to resist malicious attacks more effectively, an overlay network similar to the DHTON presented in [11] is used; that is, all the interactions about entity B are stored by several managers, and each manager is responsible only for the storage and distribution of local interactions.

The Chord network is first described. In a Chord network, every entity is represented by an identifier of fixed bit length, and all entities are arranged on the Chord circle in the clockwise direction. The identifier space is partitioned into several subsets, and for each subset the first entity encountered in the clockwise direction is the finger entity of that subset, responsible for the storage and distribution of the interaction information between entity B and all the entities in the subset. The set of all such managers of entity B follows accordingly, and entity A sends requests to these managers to obtain the interaction information about B.

No distinction is made between direct trust and feedback trust in the trust computing algorithm of [11], where the direct interactions between A and B are not separated from the rest of the feedback information. However, considering that an entity's own experience usually carries more weight in a trust relationship, it is necessary to differentiate the two. Accordingly, the set of interactions used to compute feedback trust is the information returned by the managers with the direct interactions between A and B removed.

2.2.2. False Feedback Filter

According to the time instant of each interaction, the feedback set is divided into m groups, one per subinterval. For the i-th subinterval, denote the number of feedback interactions in the group and the satisfaction degree reported at each such interaction; the union of the m groups is the whole feedback set.

In each time interval, if the ability of entity B to provide services is regarded as a random variable with values in [0, 1], the feedback interactions are samples of it. Assume that the service-providing ability of entity B is stable in each interval; then, without attacks, the samples should approximately obey a normal distribution. For attacks to affect the overall trust computation, malicious entities must return mostly false records to the requester. Such false feedback is either too high (collusive boosting) or too low (collusive defaming), and its interaction moments are close to the present moment. From this analysis, when a collusive defaming (boosting) attack exists, the sample distribution will be clearly skewed to the left (right).

In order to filter the false feedback effectively, standard coefficient of skewness and hypothesis testing are used to analyze the feedback information. In statistics, coefficient of skewness is a measurement to reflect the skewness condition of sample distribution, including the degree and direction of skewness.

Definition 3 (standard coefficient of skewness [17]). Suppose x_1, …, x_n are samples from a population, and let x̄ and s² be the sample mean and variance, respectively; then
g₁ = (1 / (n s³)) Σ_{i=1}^{n} (x_i − x̄)³
is the standard coefficient of skewness. g₁ = 0 states that the sample distribution is symmetrical; g₁ > 0 means that the distribution has positive skewness; g₁ < 0 means that it has negative skewness. The larger |g₁|, the higher the degree of skewness.

The false feedback filter is built from Definition 3 and hypothesis testing as follows. For each group of feedback, the standard coefficient of skewness g₁ is computed from the group's sample mean and variance as in Definition 3.

Set a threshold value. After g₁ is obtained, if |g₁| exceeds the threshold, it can be judged that there is false feedback in the samples, and feedback records are eliminated as follows.

Case 1. When g₁ indicates positive skewness (collusive boosting), a feedback record is regarded as false and eliminated if and only if it lies sufficiently far above the sample mean.

Case 2. When g₁ indicates negative skewness (collusive defaming), a feedback record is eliminated if it lies sufficiently far below the sample mean.
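The filter can be sketched as follows. The skewness computation follows Definition 3, while the threshold value and the one-standard-deviation cut rule are illustrative assumptions standing in for the paper's exact elimination conditions in Cases 1 and 2:

```python
def skewness(xs):
    """Standard coefficient of skewness (Definition 3)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    if sd == 0:
        return 0.0
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

def filter_feedback(xs, threshold=0.5):
    """Eliminate suspected false feedback from one subinterval's group.

    If |g1| exceeds the threshold, the tail on the skewed side is
    dropped: over-high records under positive skew (collusive boosting),
    over-low records under negative skew (collusive defaming). The
    threshold and the one-standard-deviation cut are assumptions.
    """
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    g1 = skewness(xs)
    if abs(g1) <= threshold:
        return list(xs)                              # no attack suspected
    if g1 > 0:
        return [x for x in xs if x <= mean + sd]     # Case 1: drop boosts
    return [x for x in xs if x >= mean - sd]         # Case 2: drop defaming

honest = [0.70, 0.72, 0.74, 0.75, 0.76, 0.78]
boosted = honest + [1.0, 1.0]        # collusive boosting feedback
assert filter_feedback(honest) == honest
assert filter_feedback(boosted) == honest
```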

2.2.3. Feedback Trust

For each subinterval, after the false records in the group are eliminated, the remaining records constitute the honest feedback, and the union of these groups is the honest feedback set. The following is the computing expression for feedback trust.

Definition 4 (feedback trust T_f). Using the honest feedback interactions and the time attenuation function of Definition 1, T_f is computed analogously to Definition 2, as the time-attenuation-weighted average of the honest feedback satisfaction degrees over the m subintervals.

2.3. Weight Setting Method

To calculate the total trust with the weighted average method, the weight setting must follow at least the following two principles: (i) the weight of direct trust should be no less than the weight of feedback trust, that is, ω_d ≥ ω_f; (ii) properly lowering the feedback weight can reduce the negative impact of malicious feedback and improve accuracy, so as to obtain good performance [18].

Based on the above principles, a theorem is first proven before the computing method of the weights is derived.

Theorem 5. The sum of the attenuation weights, S(m) = Σ_{i=1}^{m} f_ρ(i), is a strictly monotonically increasing function of m and is bounded.

Proof. For any m, every term f_ρ(i) is positive by Definition 1, so S(m + 1) − S(m) = f_ρ(m + 1) > 0 and S(m) is strictly monotonically increasing in m. The boundedness follows from the properties of the attenuation function.

Based on the principles of the weight setting, combining the conclusion of Theorem 5 with the moments and frequency of the direct interactions, the weights ω_d and ω_f are derived accordingly.

From Theorem 5, ω_d ≥ 0.5, and ω_d grows significantly beyond 0.5 as the direct interactions increase; that is, ω_f is reduced as much as possible. In particular, the weights are updated as interactions continue, so they are adjusted dynamically and adaptively.
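One simple adaptive rule satisfying both principles of Section 2.3 is sketched below; the specific expression n/(n + 1) is an assumption for illustration, not the paper's Theorem-5-based weights:

```python
def weights(n_direct):
    """Illustrative adaptive weights for direct and feedback trust.

    Satisfies the two principles above: the direct-trust weight never
    falls below the feedback weight, and the feedback weight shrinks as
    direct interactions accumulate. The rule n / (n + 1), floored at
    0.5, is an assumption, not the paper's expression.
    """
    w_direct = max(0.5, n_direct / (n_direct + 1))
    return w_direct, 1.0 - w_direct

assert weights(0) == (0.5, 0.5)
wd, wf = weights(9)
assert wd == 0.9 and abs(wd + wf - 1.0) < 1e-12
# the feedback weight is non-increasing in the number of direct interactions
assert all(weights(n)[1] >= weights(n + 1)[1] for n in range(20))
```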

To sum up, substituting the expressions for direct trust, feedback trust, and the weights into (1), we obtain the total trust.

3. Risk Evaluation Model

In an open and dynamic network environment, trust and risk are both important factors in making security decisions. Trust computing provides guidance for security decision-making, and risk evaluation is an objective reference for trust computation [19]. Existing research on trust often ignores the impact of risk, so trust computation does not fully reflect the credibility of entity behavior.

In this paper, the stability with which entity B provides its service represents the risk value: the higher the stability, the smaller the risk. The stability can be measured by analyzing the dispersion and sharpness of two samples, the direct interactions and the honest feedback. In statistics, the coefficient of variation and the coefficient of kurtosis indicate these two characteristics, so they are used to establish a risk quantification model in the following, measured comprehensively with direct risk and feedback risk.

Definition 6 (coefficient of variation [17]). The coefficient of variation of a sample is c_v = s / x̄, where x̄ and s are the sample mean and standard deviation as in Definition 3.
Although the coefficient of variation can measure the dispersion degree of a sample, two samples with the same coefficient of variation often have different kurtosis. Therefore, the standard coefficient of kurtosis is further used to reflect the sharpness of the sample distributions.

Definition 7 (standard coefficient of kurtosis [17]). The standard coefficient of kurtosis of a sample is
g₂ = (1 / (n s⁴)) Σ_{i=1}^{n} (x_i − x̄)⁴ − 3,
where x̄ and s are as in Definition 3. g₂ = 0 shows that the sharpness of the sample distribution is equivalent to that of the normal distribution; g₂ > 0 and g₂ < 0 mean that the sharpness is higher and lower than the normal distribution, respectively.

Using Definitions 6 and 7, the coefficient of variation and the standard coefficient of kurtosis are computed for both samples, the direct interactions and the honest feedback.

The larger the coefficient of variation, the higher the dispersion of a sample, which shows that the ability to provide services is more unstable. The larger the standard coefficient of kurtosis, the higher the concentration, which shows that the ability to provide services is more stable. Therefore, the larger the coefficient of variation and the lower the coefficient of kurtosis, the higher the risk of selecting entity B to provide the service.

Because the risk value is a monotonically decreasing function of the coefficient of kurtosis, a decreasing mapping from the coefficient of kurtosis to a risk component in [0, 1] is adopted.

For the coefficient of variation, the risk value is a monotonically increasing function, so an increasing mapping from the coefficient of variation to a risk component in [0, 1] is adopted.

Based on the above analysis, the direct risk and feedback risk are computed from the two samples, respectively, and the total risk value is obtained as their weighted average with the weights of Section 2.3.
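A hedged sketch of the risk computation for one sample: the coefficient of variation and the standard coefficient of kurtosis follow Definitions 6 and 7, while the two mappings into [0, 1] and their equal-weight combination are illustrative assumptions standing in for the paper's exact expressions:

```python
import math

def moments_risk(xs):
    """Illustrative risk score from dispersion and sharpness.

    Computes the coefficient of variation (sd / mean) and the standard
    coefficient of kurtosis, then maps them into [0, 1]: risk grows
    with the coefficient of variation and falls as kurtosis grows.
    The specific mappings and the 0.5/0.5 combination are assumptions.
    """
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    cv = sd / mean if mean else 0.0
    kurt = (sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0) if sd else 0.0
    risk_cv = cv / (1.0 + cv)                  # increasing in cv, in [0, 1)
    risk_kurt = 1.0 / (1.0 + math.exp(kurt))   # decreasing in kurtosis
    return 0.5 * risk_cv + 0.5 * risk_kurt

stable = [0.8, 0.81, 0.79, 0.8, 0.8]           # steady service quality
erratic = [0.1, 0.95, 0.4, 0.99, 0.2]          # unstable service quality
assert moments_risk(stable) < moments_risk(erratic)
```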

In conclusion, the credibility of entity B is measured with the two-dimensional vector (total trust, risk value).

4. Simulation and Performance Analysis

We run simulations to analyze the performance of the proposed model on PeerSim, a popular P2P network simulator. The network contains 2000 entities and 10000 kinds of services, of which 8000 are normal, 1000 are fake, and the remainder are vicious. We first use the RMS error to measure the accuracy of the proposed model. Subsequently, the dynamics are verified as the behavior of an entity changes over time. Furthermore, the percentage of successful interactions and the pass ratio of malicious services are adopted to analyze the performance in resisting collusive attacks and strategic malicious attacks.

4.1. Evaluation of Accuracy

We verify the accuracy by computing the root-mean-square (RMS) error of the aggregated total trust of all entities [11]. The RMS error can be used to evaluate the effectiveness of the proposed model against various malicious behaviors: the smaller the RMS error, the higher the accuracy. It is given as
RMS = sqrt((1/N) Σ_{i=1}^{N} (T̂_i − T_i)²),
where N is the total number of entities and T̂_i and T_i are the evaluated and actual total trust of entity i, respectively.
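The RMS error above can be computed directly; the function and parameter names are illustrative:

```python
def rms_error(evaluated, actual):
    """Root-mean-square error between evaluated and actual total trust.

    `evaluated` and `actual` are equal-length sequences over all N
    entities; smaller values mean higher evaluation accuracy.
    """
    n = len(evaluated)
    return (sum((e - a) ** 2 for e, a in zip(evaluated, actual)) / n) ** 0.5

assert rms_error([0.5, 0.5], [0.5, 0.5]) == 0.0
assert abs(rms_error([0.6, 0.4], [0.5, 0.5]) - 0.1) < 1e-12
```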

It can be observed in Figure 2 that the RMS error of our model is lower than those of DHTrust and PowerTrust. This result arises mainly from three aspects. One is that the proposed model distinguishes direct trust from feedback trust, and their weights are flexible and personalized. Another important point is the filtering mechanism of Section 2.2.2, which eliminates fake feedback so as to guarantee the correctness of the available feedback interactions. Moreover, rather than considering only the trust degree, the trust and risk values are used together, which reduces the RMS error.

Figure 2: RMS errors with different percentages of malicious entities.
4.2. Evaluation of Dynamic

This experiment verifies the dynamics of the proposed model. Assume that the behavior of an entity changes as time evolves: it provides good services in the first 5 time intervals; it then becomes unstable and provides honest services with probabilities of 0.9 and 0.8 from the 6th to the 10th and from the 11th to the 15th time interval, respectively; afterwards, it returns to the good state. The trust degree of the entity changes as shown in Figure 3.

Figure 3: The trust degree of a dynamical entity.

When the entity changes from good to bad, the trust degree descends quickly, and after 2 time intervals the evaluated trust degree reflects the real situation. However, the trust degree grows slowly when the entity changes from bad to good, taking almost 5 intervals to reach the real value. This asymmetry shows that trust must be accumulated over a long time, which can be viewed as punishment for the earlier unstable behavior.

4.3. Analysis of Collusive Attacks

Among the existing attack models, the collusive attack is generally considered the biggest threat to trust systems. It mainly includes full collusion and spies. In the "full collusion" model, all entities of a malicious collective provide bogus services and create false positive feedback recommendations for all the other entities of the collective. In the "spies" model, the malicious collective is divided into two groups: spies and malicious entities. The spies provide honest services to obtain a high reputation and simultaneously give false positive feedback to the malicious part of the collective. In order to analyze the performance of the proposed model in resisting collusive attacks, two measurement indexes are first given as follows.

Definition 8. Let R_k be the number of service requests issued by malicious entities and P_k the number of those requests that pass in the k-th simulation cycle; the pass ratio of malicious services is defined as MSR_k = P_k / R_k. It reflects the ability to identify such malicious entities: the lower the ratio, the better.

Definition 9. Let I_k be the total number of interactions and S_k the number of successful interactions in the k-th simulation cycle; the ratio of successful interactions is SR_k = S_k / I_k.
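The two indexes of Definitions 8 and 9 are straightforward per-cycle ratios; the function and parameter names below are illustrative:

```python
def pass_ratio(malicious_requests, malicious_passed):
    """Pass ratio of malicious services in one cycle (Definition 8).

    Lower values mean malicious entities are identified more reliably.
    """
    return malicious_passed / malicious_requests if malicious_requests else 0.0

def success_ratio(total_interactions, successful):
    """Ratio of successful interactions in one cycle (Definition 9)."""
    return successful / total_interactions if total_interactions else 0.0

assert pass_ratio(200, 10) == 0.05
assert success_ratio(1000, 950) == 0.95
```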

To measure the effect of collusive malicious entities, it is assumed that the percentages of malicious ones are 10% and 20%, respectively, half of which are spies. In this simulation experiment, 300 cycles are conducted and there are several services in every cycle. The comparisons of the proposed model with DHTrust and EigenTrust are detailed in Figures 4 and 5.

Figure 4: Pass ratio of malicious services.
Figure 5: The ratio of successful services.

The left parts of Figures 4 and 5 show the pass ratio of malicious services and the ratio of successful interactions for the three trust models when the percentage of malicious entities is 10%, and the right parts show them when the percentage is 20%. From these graphs, the pass ratios of the proposed model are distinctly lower, and its success ratios distinctly higher, than those of DHTrust and EigenTrust. In the proposed model, the pass ratio is almost always less than 0.1 and tends to 0.05 as the interactions continue; accordingly, the ratio of successful interactions floats around 0.95. A similar situation holds when the percentage is 20%. This result is obtained mainly thanks to the filtering mechanism of the proposed model: because the filtering method is based on the statistical characteristics of several feedback entities, it is very effective at identifying malicious entities and thereby resisting collusive attacks, which the experimental results illustrate precisely.

4.4. Analysis of Strategic Malicious Behaviors

This subsection analyzes the performance of the proposed model when strategic malicious entities exist in the network. Such entities can generally be divided into two kinds. One kind promotes its trust degree with small deals and then provides fake services or malicious attacks for big deals. The other provides normal services while its credibility is low and gives fake services or malicious attacks after its trust is improved.

In this experiment, we assume that the percentages of strategic malicious entities are 10%, 20%, and 30% and each half of such entities takes the two strategies, respectively.

The curves in Figure 6 show the changes of the ratio of successful interactions when the percentages are 10%, 20%, and 30%, respectively. It can be seen that the ratio becomes greater than 0.95 after a few cycles and that the asymptotic values differ little as the interactions continue. This result shows that the accuracy of the proposed model remains very high even in the presence of strategic malicious entities.

Figure 6: The ratio of successful services.

5. Conclusions

In this paper, a dynamic trust and risk quantitative model is established. The credibility of an entity is measured by two indicators: the total trust and the risk value. Total trust is obtained with the weighted average method by considering direct interactions and feedback information comprehensively. To ensure accuracy, a feedback filter based on the coefficient of skewness and hypothesis testing is proposed, and the weight allocation mechanism is adaptive. Furthermore, the risk value describes the stability of providing services and is modeled with the coefficient of variation and the coefficient of kurtosis. Simulation experiments verify that the proposed model significantly improves accuracy and dynamics and is robust to collusive malicious attacks and strategic malicious attacks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere appreciation to the anonymous reviewers for their insightful comments, which have greatly aided them in improving the quality of the paper. This work is supported by the National Basic Research Program of China (973 Program) (nos. 2012CB315905 and 2012CB315901).

References

  1. L. Chang, Y. Ouzrout, A. Nongaillard, A. Bouras, and Z. Jiliu, “The reputation evaluation based on optimized Hidden Markov model in E-commerce,” Mathematical Problems in Engineering, vol. 2013, Article ID 391720, 11 pages, 2013.
  2. R. Falcone and J. Kim, “Soft security: isolating unreliable agents from society,” in Proceedings of the International Workshop on Trust, Reputation and Security: Theories and Practice (AAMAS '02), R. Falcone, S. Barber, L. Korba, and M. Singh, Eds., Lecture Notes in Computer Science, pp. 224–233, Springer, 2003.
  3. L. Liu and W. Shi, “Trust and reputation management,” IEEE Internet Computing, vol. 14, no. 5, pp. 10–13, 2010.
  4. F. Mármol and G. Pérez, “Trust and reputation models comparison,” Internet Research, vol. 21, no. 2, pp. 138–153, 2011.
  5. S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, “The EigenTrust algorithm for reputation management in P2P networks,” in Proceedings of the 12th International Conference on World Wide Web (WWW '03), pp. 640–651, May 2003.
  6. L. Xiong and L. Liu, “PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 7, pp. 843–857, 2004.
  7. R. Zhou and K. Hwang, “PowerTrust: a robust and scalable reputation system for trusted peer-to-peer computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 4, pp. 460–473, 2007.
  8. R. Zhou, K. Hwang, and M. Cai, “GossipTrust for fast reputation aggregation in peer-to-peer networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 9, pp. 1282–1295, 2008.
  9. X. Li, F. Zhou, and X. Yang, “Scalable feedback aggregating (SFA) overlay for large-scale P2P trust management,” IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 10, pp. 1944–1957, 2012.
  10. X. Li, F. Zhou, and X. Yang, “A multi-dimensional trust evaluation model for large-scale P2P computing,” Journal of Parallel and Distributed Computing, vol. 71, no. 6, pp. 837–847, 2011.
  11. W. Xue, Y. Liu, K. Li, Z. Chi, G. Min, and W. Qu, “DHTrust: a robust and distributed reputation system for trusted peer-to-peer networks,” Concurrency and Computation: Practice and Experience, vol. 24, no. 10, pp. 1037–1051, 2012.
  12. X. Wang, L. Liu, and J. Su, “RLM: a general model for trust representation and aggregation,” IEEE Transactions on Services Computing, vol. 5, no. 1, pp. 131–143, 2012.
  13. E. Ayday and F. Fekri, “Iterative trust and reputation management using belief propagation,” IEEE Transactions on Dependable and Secure Computing, vol. 9, no. 3, pp. 375–386, 2012.
  14. X. Liu and A. Datta, “Modeling context aware dynamic trust using hidden Markov model,” in Proceedings of the 26th AAAI Conference on Artificial Intelligence, pp. 1938–1944, July 2012.
  15. E. Koutrouli and A. Tsalgatidou, “Taxonomy of attacks and defense mechanisms in P2P reputation systems—lessons for reputation system designers,” Computer Science Review, vol. 6, no. 2-3, pp. 47–70, 2012.
  16. A. Jøsang, “Robustness of trust and reputation systems: does it matter?” in Proceedings of the 6th IFIPTM International Conference on Trust Management (IFIPTM '12), vol. 374, pp. 253–262, Springer, Berlin, Germany, 2012.
  17. H. Gao, Statistics Computation, Peking University Press, Beijing, China, 1995.
  18. Z. Liang and W. Shi, “Analysis of ratings on trust inference in open environments,” Performance Evaluation, vol. 65, no. 2, pp. 99–128, 2008.
  19. F. Lu, K. Zheng, X. Niu, and Y. Yang, “Construct a risk-aware peer-to-peer security trust model,” Journal of Beijing University of Posts and Telecommunications, vol. 32, pp. 22–32, 2011.