Abstract

While reputation aggregation is an effective scheme for indicating an individual's trustworthiness and identifying malicious nodes in mobile social networks, it is vulnerable to collusive attacks in which malicious nodes commit coordinated frauds. To meet the challenge of detecting collusive attacks and identifying the colluders behind them in the reputation system of a mobile social network, a fuzzy collusive attack detection mechanism (FCADM) is proposed based on nodes' social relationships. It comprises three parts: trust schedule, malicious node selection, and detection traversing strategy. In the first part, the trust schedule provides the calculation method of interval-valued fuzzy social relationships and reputation aggregation for nodes in mobile social networks. In the second part, a set of fuzzy-valued factors, namely, the item judgment factor, node malicious factor, and node similar factor, is given for evaluating the probability of collusive fraud and identifying single malicious nodes. In the third part, a detection traversing strategy is given based on the random walk algorithm from the perspectives of nodes' fuzzy-valued trust schedules and the proposed malicious factors. Finally, empirical results and analysis show that the proposed mechanism is feasible and effective.

1. Introduction

The free communication and information acquisition offered by mobile social networks have made them one of the most popular platforms for people's daily interactions [1]. Massive numbers of items are shared among nodes on these mobile platforms. However, the open network environment also makes it unavoidable that dishonest individuals (called nodes in this work) exhibit malicious behaviors, and honest nodes are vulnerable to frauds or attacks in such an environment [2]. Therefore, how to counter frauds and protect nodes from malicious attacks has gained much attention in network security research.

Because it identifies historical trustworthiness and predicts the likelihood of remaining reliable in the future, a reputation system is seen as a feasible and indispensable solution for ensuring the security of mobile social networks [3]. Commonly, a healthy reputation system reflects a node's degree of trustworthiness authentically through an indicator, named reputation, by aggregating trustworthy opinions from other members. A higher reputation level in the reputation system implies more benefits for a node, such as more opportunities to attract potential followers, more forwarding, higher approval rates, and even more chances of selling products. Naturally, frauds often take place in reputation systems in pursuit of such benefits. Therefore, it is essential that reputation systems be able to recognize whether the reputation of an individual is trustworthy or not.

Most reputation systems use summary/average methods based on past experiences [4], so there are unavoidable threats to the reputation aggregation mechanism because all judgments are weighted equally during aggregation [5]. Mass dishonest judgments would therefore damage the reliability of reputation systems; that is, reputation systems might report wrong trustworthiness of individuals once malicious ones attack the reputation aggregation mechanism through inflating or slandering. Worse still, collusive attacks bring more damage than single attacks because the scale of the attack is larger and there are more attackers [6, 7]. Therefore, detecting collusive attacks in reputation systems and further finding the malicious nodes behind them in mobile social networks is a significant challenge for ensuring secure reputation aggregation.

Over the past decades, many efforts have been made to evaluate, recognize, predict, and prevent attacks or frauds in reputation systems [8–10]. There are three main kinds of techniques for detecting fraud in reputation aggregation: majority rule [11], signal modeling, and trust management [12]. However, most of these studies focus only on detecting individual malicious behavior, while an important factor in collusive cooperation, the social relationship, has not received sufficient attention. Another essential problem is that collusive attack detection should be evaluated with a fuzzy interval number rather than an exact crisp numerical value due to the uncertain nature of detection. In addition, even if malicious nodes can be recognized, it is very difficult to recognize and verify their collusive partners, named colluders, in mobile social networks, since pairwise comparison for detecting colluders causes increased complexity and computation burden. Therefore, how to detect colluders with a lower workload is another challenge tackled in this study. To do that, we notice that trust relationships among nodes reflect a relatively high probability of being partners in collaboration, which can also be taken into account in collusive attack and colluder detection. That is, if a single node is recognized and verified as malicious, we can select it as a starting point and then traverse its trust relationships to evaluate and detect colluders. The underlying principle of our work is that if a node maintains a high trust relationship with a verified malicious node, it is more likely to be a colluder of the verified malicious one. Therefore, the motivation of this paper includes two aspects: the need to represent the uncertainty or fuzziness of trust relationships and reputation, and the ultimate goal of evaluating and recognizing collusions with a lower workload.

To achieve that, the rationales of our proposed work are as follows: social relationship is introduced to measure the closeness of nodes in mobile social networks, since we assume that malicious colluders have relatively close ties, denoted as high trust relationships in this work; collusive frauds can be detected by evaluating the malicious probabilities of nodes based on their past mutual collaborations and behaviors; and detection can be performed by traversing the trust relationship network from a verified colluder with a lower workload.

In this study, we propose a fuzzy collusive attack detection mechanism for detecting reputation-system-oriented collusive attacks in mobile social networks. Our main proposals in this paper are as follows: the formal model of FCADM, which comprises three parts, namely, trust schedule, malicious node selection, and detection traversing strategy, together with its related definitions; a fuzzy trust schedule for nodes in FCADM, which comprises trust and reputation calculation methods; malicious factors, including the item judgment factor, node malicious factor, and node similar factor, based on fuzzy interval values for colluder evaluation and malicious node selection; and a traversing strategy based on the random walk algorithm for FCADM to detect colluders according to nodes' trust relationships in mobile social networks.

2.1. Reputation System

In general, reputation denotes a public and authoritative view of trust obtained from an impartial community [3, 13]. Thus, we consider that reputation should be established based on all the impressions. However, to ensure fairness, reputation aggregation must prevent malicious nodes from obtaining high scores by cheating or by reducing the reputation of honest nodes. All attacks on reputation aggregation must be detected and punished.

Many studies have addressed trust and reputation in recent decades [3, 4, 14–17]. Methods have been proposed to optimize one or more aspects of trust computation, such as the summation/average/iteration of past trust ratings [18], Bayesian systems [14], and the weighted average of ratings [19]. Traditionally, reputation systems have two main types of architecture: centralized [3] and distributed [18]. The former is feasible for a small-scale network, such as a single website, where a central authority can collect all the ratings and publish reputation scores for each participant. However, in a large-scale environment, a centralized reputation system is not reasonable due to high costs, including high computational overheads, large storage space requirements, and time-consuming retrieval operations. By contrast, in distributed reputation systems, each member submits reputation assessments after being requested to do so by reliable members. However, the distributed nature of this scheme may lead to inconsistent opinions among nodes, and malicious reputation attacks are more likely to succeed. Therefore, methods for detecting attacks on reputation aggregation are indispensable in distributed schemes.

Previous studies have also considered trust computing in social networks. For example, Ortega and colleagues [20] proposed a method for computing the rankings of nodes in social networks based on the positive and negative opinions of nodes. The opinions of others obtained from each node could then influence their global trust scores. Qureshi and colleagues [21] proposed a decentralized framework and related algorithms for trusted information exchange and social interactions among nodes based on a dynamicity-aware graph relabeling system.

2.2. Collusive Attack Detection

By observing the actions of surrounding nodes, reputation-based methods can be quite effective in combating internal active attacks or selfish behaviors. However, attacks pose great threats to reputation systems in mobile social networks [22], and collusive attacks bring even more damage to the reputation aggregation system [23]. A collusion attack [24] occurs when two or more selfish or malicious nodes collaborate to make an attack without being detected. In such cases, malicious peers perform attacks together with their partners in collusion and cause more damage than any single malicious peer. Many efforts have been made to evaluate, recognize, detect, and prevent collusion in reputation systems, such as majority rule [11], signal modeling [25], and trust management [8, 12]. All the abovementioned methods are kinds of nonanonymous attack detection based on explicit evidence, that is, accessible data in the network. Our proposed work in this paper also belongs to this kind. The main features of the abovementioned methods are listed in Table 1.

As shown in Table 1, in the majority rule, ratings that are far from the opinion of the majority are marked as suspicious. Representative majority-rule-based techniques include statistical filtering [11, 26], an endorsement-based method [12, 27], and an entropy-based method [12, 28]. However, if the colluders comprise the majority of raters, the collusion will be difficult to detect. Signal modeling techniques detect sudden temporal changes in the features of rating values to identify collaborative attacks [25]. Trust management aims to calculate the trustworthiness of raters in order to evaluate how much they are trusted to provide honest ratings [8]. These three techniques are widely used to detect attacks both independently and jointly; however, their main defect is that they focus on examining the rating values for individual products [12]. That is, the above schemes cannot identify whether or not an attack is a collusion.

Other methods have also been proposed for collusion detection. Rossi and Pierre proposed a method for detecting generic collusion attacks, as well as preventing them, by extending the path rater component of reputation-based IDS [29]. Silaghi and colleagues [30] proposed a mechanism for detecting collusion, which successfully detects malicious clients that return incorrect results with a certain probability. In our previous work, we also proposed a relationship-based collusive attack detection method for social network platforms [31]. We presented a set of factors that evaluate inauthentic judgments and attack behavior similarity to detect colluders through their close relationships in social networks. By contrast, the present work contains the following new parts that are not included in our previous work: (1) the fuzzy interval view for reputation aggregation, (2) the model of FCADM with its detailed workflow, (3) a fuzzy evaluation of the malicious factors, and (4) a random walk based traversing strategy.

However, these methods are all based on the details of malicious behaviors, and evaluating all nodes in such a large network environment is also very costly. In our view, we can find colluders by checking a detected attacker's connected nodes, since colluders would have close relationships if they cooperated for their collusion in the past. Different from existing works, our scenario of collusive fraud detection has the following features: (1) the fundamental criterion of our proposal, trust and reputation measurement, is based on a fuzzy interval perspective, while most other methods rely on exact numerical criteria in malicious fraud detection; (2) to ensure the accuracy of malicious colluder detection, we propose a series of factors for calculating and identifying honest and malicious individuals based on nodes' behaviors; (3) the trust relationship is seen as a significant factor for detecting colluders in the traversing strategy, which finds colluders according to nodes' mutual trust levels; that is, if a malicious node is detected, those that have high mutual trust with the detected malicious one are more likely to be colluders; and (4) the detection strategy is designed based on the random walk algorithm rather than a flooding search, which decreases the complexity of our proposed method.

3. Overview of Proposed Model

In this section, we address the model of collusive attack detection and then propose the related definitions.

3.1. Model of FCADM

Most collusive attacks are launched simultaneously by a large number of colluders in mobile social networks, who give mass inauthentic judgments to the target individual in order to attack its reputation degree. In our view, reputation-oriented collusions in mobile social networks have the following features: (1) inauthentic judgments differ dramatically from honest ratings; (2) most colluders in the same collusive team exhibited consistent behaviors in the past; (3) most colluders keep similar reputation degrees because they execute almost identical behaviors; and (4) there must be close social relationships, which can be described as trust relationships, among colluders due to the necessity of communication and cooperation for collusion. Based on these features, our proposed FCADM detects collusive attacks by evaluating the four above aspects among suspicious nodes. The detailed workflow of the FCADM model is shown in Figure 1.

Accordingly, our proposed model of FCADM includes the following parts.

(1) Part 1: Trust Schedule. In FCADM, we define a distributed formal list for each node in mobile social networks to record its local trust information, including trust relationships with the nodes it has interacted with and the node's reputation degree. As shown in Figure 1, a node can send judgments with numerical ratings to another node and then aggregate the trust relationship between the two nodes, while a node's reputation is aggregated according to other nodes' direct trust relationships (blue dotted lines with arrows pointing towards Part 1 in Figure 1). The trust schedule provides the trustworthiness information of nodes in mobile social networks, which enables FCADM to evaluate the likelihood of a node being a colluder and further offers traversing rules in the node trust relationship network.

(2) Part 2: Malicious Node Selection. In FCADM, we give a set of malicious factors and related calculation methods for selecting malicious colluders. As shown in Figure 1, two factors, the item judgment factor (IJF) and the node malicious factor (NMF), are evaluated to identify the malicious degree of a node through fuzzy interval values (blue dotted lines with arrows pointing towards Part 2 in Figure 1). The factors are calculated based on a node's behaviors and records. The results of the factor calculations are then combined to select single malicious nodes. Therefore, we can decide whether a node is malicious or not, and a recognized malicious node is used as the source node for further colluder detection.

(3) Part 3: Detection Traversing Strategy. In this part, we give a traversing strategy in the node trust relationship network based on the random walk algorithm [32]. FCADM selects a recognized malicious node as the source node in the mobile social network (according to the result of Part 2) and then traverses the network along the relationships among nodes (red dotted lines in Figure 1) according to the probabilities calculated from the node similar factor and trust degree (blue dotted lines pointing towards Part 3 in Figure 1). The main rationale of the proposed traversing strategy is that if a node has both a higher degree of trust relationship and a larger NSF value with a malicious node, it is more likely to be a colluder of that malicious node. The traversing strategy then calculates the probability of selecting traversing targets according to the trust relationships among nodes.

Moreover, we give the assumptions of our work here. We aim to detect malicious colluders by retrieving them through users' relationships. The reasons for our assumptions are as follows: (1) a single malicious node is easy to detect because of its explicit malicious behaviors, while it is difficult to verify that several separately detected malicious nodes are colluders without taking their relationships into account; and (2) of course, some malicious nodes want to hide their relations and show less evident relationships in the social network. On the contrary, in our view, collusive attacks must be organized, more or less, by the colluders through the mobile social network. Therefore, our work focuses on nonanonymous attack detection based on explicit evidence (e.g., witnessed behaviors and accessible data in the social network), because if colluders communicate without any explicit evidence (e.g., through an offline channel), it is very hard to find their relationships depending only on the social network. Such attacks can be seen as anonymous collusive attacks and are not covered in this work.

Therefore, our work is based on the following assumptions regarding organized collusion:
(1) Malicious nodes always exhibit consistent behaviors during organized collusion. In particular, for the theoretical analysis, any node (honest node or colluder) will not change its identity or attitude (honest or malicious) while it appears in the mobile social network. In addition, all nodes keep their certifications and do not leave the network after they enter the mobile social network and the investigation begins.
(2) There are major differences between malicious nodes and honest ones; thus, malicious behaviors differ greatly from factual behaviors. That is, malicious evidence about colluders is more or less accessible in mobile social networks.
(3) Due to the necessity of communication and cooperation, there must be at least one social relationship between the organizing colluder and each other colluder. Thus, colluders cannot attack collusively without possessing relationships in mobile social networks.

3.2. Related Definitions

First, we clarify the role of the fuzzy view in our work. Commonly, trust and reputation degrees are calculated based on users' past records or experiences. We can also see that trust or reputation is essentially a dynamic degree that changes as the past records change. Therefore, the uncertain nature of trust or reputation leads to unreasonable results if the degree is expressed as a crisp number. Many efforts have been made to show that crisp numerical concepts are not sufficient for expressing the uncertainty of concepts [33]. For example, suppose three nodes v_a, v_b, and v_c are given such that v_a has trust degrees towards v_b and v_c, respectively. In the traditional method, the trust degrees are expressed as crisp numbers, and then it is hard to judge whether the trust degree of v_a towards v_b exceeds, equals, or is less than the trust degree of v_a towards v_c if the two crisp numbers are close, because the trust degrees keep being updated as the interaction data among users change. Such uncertainty of the trust relationship makes comparison by crisp numbers unreasonable. In fact, the value of the trust degree between nodes lies in an interval, and such a fuzzy interval is a reasonable measurement for a user's uncertain trustworthy opinion of others. The role of the fuzzy view applies to reputation aggregation and malicious factor evaluation as well, so the fuzzy interval expression is a reasonable method for our work.

Then, we address a set of related definitions for FCADM as follows.

Definition 1 (graph model of mobile social networks). A mobile social network can be described as a directed graph given by a two-tuple G = (V, E), where V and E denote the sets of nodes and their relationships, respectively.

Definition 2 (trust schedule). The trust schedule of a node v_i is described by a pair consisting of R(i), the reputation degree of node v_i, and the set of trust relationship values T(i, j) from v_i to each node v_j that has a direct trust relationship with v_i.

From Definition 2, we can see that the trust schedule records two kinds of trustworthiness information for each node, that is, reputation and trust relationships. To represent the fuzziness of trust, each value in the trust schedule is described as an uncertain value. That is, we use interval fuzzy values to describe the trust schedule information: the reputation of v_i is written as R(i) = [Rmin(i), Rmax(i)], where Rmin(i) and Rmax(i) are the minimum and maximum reputation values observed in the past, and the trust relationship from v_i to v_j is written as T(i, j) = [Tmin(i, j), Tmax(i, j)], where v_j is an out-degree neighbor of v_i, that is, a node towards which v_i holds a direct trust relationship. Hence, Tmin(i, j) is the minimum value of the trust from v_i to v_j in the past, while Tmax(i, j) is the maximum value. In other words, the trust degree from v_i to v_j always lies between Tmin(i, j) and Tmax(i, j).
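For concreteness, the following Python sketch shows one possible in-memory layout for such a trust schedule; the names Interval, TrustSchedule, and update_trust are illustrative and not part of the paper's formal notation.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# An interval fuzzy value is kept as a (minimum, maximum) pair observed in the past.
Interval = Tuple[float, float]

@dataclass
class TrustSchedule:
    """Local trust information of one node, per Definition 2 (illustrative layout)."""
    node_id: str
    reputation: Interval = (0.0, 0.0)                          # [Rmin(i), Rmax(i)]
    trust: Dict[str, Interval] = field(default_factory=dict)   # neighbor id -> [Tmin(i,j), Tmax(i,j)]

    def update_trust(self, neighbor: str, value: float) -> None:
        """Widen the stored interval so it always covers the newly observed trust degree."""
        lo, hi = self.trust.get(neighbor, (value, value))
        self.trust[neighbor] = (min(lo, value), max(hi, value))

# Example: node v_i holds interval-valued trust towards an out-degree neighbor v_j.
schedule = TrustSchedule("v_i")
for observed in (0.62, 0.70, 0.66):
    schedule.update_trust("v_j", observed)
print(schedule.trust["v_j"])   # (0.62, 0.7)
```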

Additionally, we propose the factors for collusive attack evaluation in FCADM: malicious factor and item attack probabilities.

Definition 3 (malicious factor). The malicious factor evaluates the likelihood that a single node is malicious through two aspects: the items and the nodes it judges. The malicious factor comprises three subfactors, that is, the item attack probability, the item judgment factor, and the node malicious factor, which together evaluate the likelihood of a single node launching a malicious attack.

(1) Item Attack Probability (IAP). This factor describes the probability of an item being attacked during its reputation aggregation. The IAP is denoted as a fuzzy interval value, and the item attack probability lies in this range.

(2) Item Judgment Factor (IJF). The item judgment factor evaluates the probability of malicious judgments occurring among the judgments a node gives to items. The IJF is denoted as a fuzzy interval value, which signifies that the probability of malicious judgment occurrence lies in this range.

(3) Node Malicious Factor (NMF). The node malicious factor evaluates the likelihood of a single node being a colluder. In this study, we consider that if a node sends more judgments that differ greatly from the reputations of the targets, the likelihood of the node being a colluder, measured through the NMF, is larger.

With respect to the malicious factor, the item judgment factor evaluates the difference between a node's judgment and the obtained judgment of an item, while the item attack probability evaluates the likelihood of an attack happening to an item. Meanwhile, the NMF is computed from a node's past ratings by evaluating whether it often sends inconsistent judgments to others during their reputation aggregation.

Definition 4 (node similar factor (NSF)). The node similar factor describes the behavior similarity of nodes, which can be used to evaluate the likelihood of a node being a colluder in a collusive attack.

The NSF is proposed to improve the performance of traversing in colluder detection, based on the fact that colluders in collusive attacks might have similar behaviors. In the detection traversing strategy, we combine the trust relationship and the NSF to obtain the node selection criterion for traversing.

Details of proposed related definitions will be discussed with calculation methods and examples in later sections.

4. Calculation Method of Trust Schedule

In this section, we give the calculation method of the trust schedule in FCADM, which includes two aspects: reputation and trust relationship.

(1) Trust Relationship Calculation. Firstly, we discuss the trust relationship and its calculation method. A trust relationship expresses one's subjective willingness to trust others, which can be measured from a node's past experiences or interactions. For instance, a node completes interactions and is then able to send judgments concerning its experience of interacting with other nodes. From this perspective, a trust relationship is an aggregation of the historical impressions between nodes. In this work, we use a fuzzy interval value to represent the trust relationship between nodes. Assume that v_j receives a direct trust relationship from v_i; let N(i) be the set of nodes directly linked from v_i and |N(i)| the size of that set. The trust degree from v_i to v_j at a time point can then be calculated as in (1), where J(i, j) is the impartial average judgment value from v_i to v_j and J(i) is the average judgment value from v_i to all of its directly linked nodes. According to (1), if the difference between J(i, j) and J(i) is within a reasonable range, the value of the trust degree is seen as reasonable; if the difference is out of the range, an adjusting mechanism is applied to the trust calculation: if J(i, j) falls below J(i), a greater difference indicates that v_i does not trust v_j more; if J(i, j) exceeds J(i), a greater difference indicates that v_i trusts v_j more. Then, we address the calculation of J(i, j) and J(i) as follows.

Assume that j_k(i, j) is the kth judgment from v_i to v_j, and that Jmax(i, j) and Jmin(i, j) are the maximum and minimum judgment values from v_i to v_j in the past. J(i, j) is then the impartial average of the judgments j_k(i, j) from v_i to v_j, and J(i) is the average of the judgments from v_i to all of its directly linked nodes. Further, we assume that at one time point the trust degree reaches its minimum, denoted Tmin(i, j), while at another time point it reaches its maximum, denoted Tmax(i, j). The trust degree from v_i to v_j is then given by the fuzzy interval [Tmin(i, j), Tmax(i, j)].

Here, we give an example. Assume that the out-degree collection of v_i is N(i). At a given time point, the judgment values from v_i to the nodes in N(i) are listed in Table 2.

Then, the corresponding average judgment values and trust degrees at that time point can be calculated accordingly.

Further, the trust values from node v_i to the other nodes at different time points are listed in Table 3; from the minimum and maximum values over time, we can therefore obtain the trust fuzzy interval values shown in Table 3.
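As an illustration of how a trust interval can be obtained from a judgment history, the sketch below assumes that the trust degree at each time point is the running average of the judgments sent so far and then takes the minimum and maximum over time; this is only one plausible realization of the procedure described above, not the paper's exact equations.

```python
from typing import Dict, List, Tuple

def trust_interval_over_time(
    judgments_over_time: List[Dict[str, float]], target: str
) -> Tuple[float, float]:
    """Return [Tmin(i,j), Tmax(i,j)] for node `target`, assuming the trust degree at each
    time point is the running average of the judgments from v_i to v_j up to that point."""
    history: List[float] = []
    per_time_trust: List[float] = []
    for snapshot in judgments_over_time:          # one dict of judgments per time point
        if target in snapshot:
            history.append(snapshot[target])
        if history:                               # trust degree at this time point
            per_time_trust.append(sum(history) / len(history))
    if not per_time_trust:
        return (0.0, 0.0)
    return (min(per_time_trust), max(per_time_trust))

# Example: judgments from v_i to v_j at three time points (values are made up).
timeline = [{"v_j": 0.6, "v_k": 0.8}, {"v_j": 0.9}, {"v_j": 0.7}]
print(trust_interval_over_time(timeline, "v_j"))  # (0.6, 0.75)
```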

(2) Reputation Calculation. Reputation objectively denotes a shareable and authoritative trustworthiness perspective obtained from the nodes that have direct interactions with a node. In this work, the calculation of reputation is based on the abovementioned trust relationships; that is, a node's reputation is an integrated view based on the trust relationships from the other directly linked nodes towards it. The higher the trust relationship values from other nodes towards a node, the higher its reputation. The fuzzy interval valued reputation is calculated as in (5), where the aggregation is taken over the set of nodes directly linked towards the node; the resulting fuzzy interval valued reputation is then given as in (6).

Here, we give an example. Assume that the set of nodes directly linked towards v_j is known, together with their trust intervals towards v_j. From the lower and upper bounds of these trust intervals, we obtain the bounds Rmin(j) and Rmax(j), and the reputation of v_j is then the interval [Rmin(j), Rmax(j)].
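The reputation interval can thus be aggregated from the trust intervals that in-linked neighbors hold towards the node. The sketch below simply averages the lower and upper bounds separately, which is an assumed aggregation rule standing in for (5) and (6).

```python
from typing import List, Tuple

Interval = Tuple[float, float]

def aggregate_reputation(incoming_trust: List[Interval]) -> Interval:
    """[Rmin(j), Rmax(j)] from the trust intervals of nodes directly linked towards v_j
    (assumed here to be the mean of the lower and upper bounds, respectively)."""
    if not incoming_trust:
        return (0.0, 0.0)
    lo = sum(t[0] for t in incoming_trust) / len(incoming_trust)
    hi = sum(t[1] for t in incoming_trust) / len(incoming_trust)
    return (lo, hi)

# Example: three neighbors hold interval-valued trust towards v_j (values are made up).
print(aggregate_reputation([(0.6, 0.8), (0.5, 0.7), (0.7, 0.9)]))  # ~(0.6, 0.8)
```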

Since the reputation aggregation in (5) and (6) is based on the trust relationship calculation, such an aggregation mechanism suffers from a vital problem: reputation is vulnerable if malicious nodes establish inauthentic trust relationships towards a node by sending it malicious judgments. More seriously, collusive attacks built from collaborative inauthentic judgments can do huge harm to reputation aggregation.

5. The Evaluation for Malicious Node Selection

In this section, we propose the evaluation method for selecting malicious nodes, which are verified to be malicious and could further be colluders in mobile social networks. Our criteria for evaluating colluders are as mentioned in Section 3, that is, malicious judgments that differ dramatically from honest ratings and the occurrence of malicious node behaviors, which are calculated through the two factors IJF and NMF.

(1) Calculation of Item Attack Probability (IAP). In reputation systems, the reputation of a node v_i is aggregated according to the collected judgments on all items that belong to v_i. Consequently, any attack on an item belonging to node v_i would also damage the reputation aggregation of v_i. From this point, we use the IAP to measure the likelihood of an attack happening to an item. That is, if the average judgment value that an item of node v_i receives from other nodes differs greatly from the reputation of node v_i, there is a higher probability that an item attack has happened to that item. Therefore, we denote the average judgment value of item e as J(e). The IAP is calculated from the difference between J(e) and the reputation of v_i. In addition, since an item of uncertain or occasional quality might occasionally earn extreme trust or distrust, we introduce a smoothing coefficient of 0.5. The item attack probability of item e is then calculated from the smoothed difference between J(e) and the reputation of v_i.

For instance, from Table 4 we obtain the judgment values given to an item of v_j; combined with the reputation of v_j from the above example, the IAP of the item can then be calculated.
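A minimal sketch of the IAP computation is shown below. It measures how far the item's average judgment J(e) falls outside the owner's reputation interval and applies the 0.5 smoothing coefficient; since the exact formula is not reproduced here, the distance-to-interval form is an assumption.

```python
from typing import Tuple

Interval = Tuple[float, float]

SMOOTH = 0.5  # smoothing coefficient mentioned in the text

def item_attack_probability(avg_item_judgment: float, owner_reputation: Interval) -> float:
    """IAP of an item: smoothed distance between the item's average judgment J(e)
    and the owner's reputation interval [Rmin, Rmax] (assumed realization)."""
    lo, hi = owner_reputation
    if lo <= avg_item_judgment <= hi:
        distance = 0.0   # judgment consistent with the owner's reputation
    else:
        distance = min(abs(avg_item_judgment - lo), abs(avg_item_judgment - hi))
    return min(1.0, SMOOTH * distance)   # kept within [0, 1]

# Example: an item rated far below its owner's reputation interval looks suspicious.
print(item_attack_probability(0.2, (0.6, 0.8)))  # ~0.2
```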

(2) Calculation of Item Judgment Factor (IJF). The IAP signifies the probability of a node's item being attacked by other nodes. The item judgment factor, in contrast, describes the possibility of a particular node attacking the items to which it sends judgments. Let E(i) be the set of all items judged by node v_i. For an item e in E(i), we denote the overall average judgment value of e as J(e), and the kth judgment value from v_i to e as j_k(i, e). The item judgment factor of v_i with respect to e is then calculated by averaging the differences between j_k(i, e) and J(e) over all the judgments from v_i to e, where the total number of judgments from v_i to e is used for normalization.

Further, we use the IAP to improve the calculation of the IJF. The underlying idea is that if an item has a high probability of being attacked (a high IAP) and meanwhile a node shows a large overall judgment difference on the same item (a high per-item IJF), the node has a high possibility of having attacked the item as a malicious one. For a node v_i and its judged item set E(i), the total item judgment factor is therefore obtained by weighting the per-item judgment differences by the corresponding IAP values and aggregating them over E(i).
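The sketch below mirrors this description: a per-item judgment deviation is computed for every item the node has rated and is then weighted by that item's IAP; the particular averaging and weighting are assumptions standing in for the paper's IJF equations.

```python
from typing import Dict, List

def item_judgment_factor(
    own_judgments: Dict[str, List[float]],   # item id -> judgments sent by v_i
    item_avg_judgment: Dict[str, float],     # item id -> overall average judgment J(e)
    item_attack_prob: Dict[str, float],      # item id -> IAP(e)
) -> float:
    """Total IJF of node v_i: IAP-weighted average deviation of its judgments
    from each item's overall average judgment (assumed realization)."""
    weighted, total_weight = 0.0, 0.0
    for item, ratings in own_judgments.items():
        if not ratings:
            continue
        # average absolute deviation of v_i's judgments on this item
        deviation = sum(abs(r - item_avg_judgment[item]) for r in ratings) / len(ratings)
        weight = item_attack_prob.get(item, 0.0)
        weighted += weight * deviation
        total_weight += weight
    return weighted / total_weight if total_weight > 0 else 0.0

# Example: v_i deviates strongly on an item with a high attack probability.
print(item_judgment_factor(
    {"e1": [0.1, 0.2], "e2": [0.7]},
    {"e1": 0.8, "e2": 0.75},
    {"e1": 0.6, "e2": 0.1},
))  # ~0.56
```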

(3) Calculation of Node Malicious Factor (NMF). The NMF is based on a node's ratings towards other nodes' reputation aggregation. In general, a significant difference between a malicious node and an honest one lies in their behavior. Thus, we can evaluate whether a single node is malicious or not by comparing its judgments with the reputation values of the other nodes. For a node v_i, assume that it has given reputation judgments to other nodes a number of times in the past. Each node v_j that received ratings from v_i has reputation value R(j), and the kth judgment from v_i to v_j is denoted as j_k(i, j). The node malicious factor of v_i is then calculated by averaging the differences between these judgments and the corresponding reputation values over the set of nodes that received judgments from v_i in the past for reputation aggregation. The lower the value of the NMF, the more honest the node.
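Analogously, the hedged sketch below compares a node's reputation judgments with the midpoint of each target's reputation interval and averages the absolute deviations over all judged nodes; the paper's exact NMF formula may differ.

```python
from typing import Dict, List, Tuple

Interval = Tuple[float, float]

def node_malicious_factor(
    reputation_votes: Dict[str, List[float]],  # target node id -> judgments sent by v_i
    reputations: Dict[str, Interval],          # target node id -> reputation interval R(j)
) -> float:
    """NMF of node v_i: mean absolute deviation of its reputation judgments from the
    midpoint of each target's reputation interval. Lower values indicate honesty."""
    deviations: List[float] = []
    for target, votes in reputation_votes.items():
        lo, hi = reputations[target]
        midpoint = (lo + hi) / 2.0
        deviations.extend(abs(v - midpoint) for v in votes)
    return sum(deviations) / len(deviations) if deviations else 0.0

# Example: a node that consistently under-rates well-reputed targets gets a high NMF.
print(node_malicious_factor(
    {"v_a": [0.1, 0.2], "v_b": [0.15]},
    {"v_a": (0.7, 0.9), "v_b": (0.6, 0.8)},
))  # ~0.62
```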

(4) Selection of Malicious Nodes. Here, we set a reference interval for the IJF factor and a reference interval for the NMF factor, each controlled by a parameter. Therefore, for a node v_i, if both of its range overlap rates, between its IJF interval and the IJF reference interval and between its NMF interval and the NMF reference interval, are greater than a given threshold, we define v_i as a malicious node. The range overlap rates are calculated as in (12). By our empirical analysis, the threshold of the range overlap rates can be set as 0.3. We will also discuss the impacts of the two parameter settings in the later examination section.
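Since the selection rule hinges on the overlap between a node's factor interval and a reference interval, the sketch below implements a standard interval overlap rate and the resulting decision with the 0.3 threshold; the reference intervals used in the example are placeholders, as their actual settings are examined in Section 7.2.

```python
from typing import Tuple

Interval = Tuple[float, float]

def overlap_rate(a: Interval, b: Interval) -> float:
    """Length of the overlap between intervals a and b divided by the length of a
    (one common way to define a range overlap rate; assumed here)."""
    overlap = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    length = a[1] - a[0]
    return overlap / length if length > 0 else float(b[0] <= a[0] <= b[1])

def is_malicious(ijf: Interval, nmf: Interval,
                 ijf_ref: Interval, nmf_ref: Interval, threshold: float = 0.3) -> bool:
    """A node is flagged as malicious when both overlap rates exceed the threshold."""
    return (overlap_rate(ijf, ijf_ref) > threshold and
            overlap_rate(nmf, nmf_ref) > threshold)

# Example with placeholder reference intervals for "suspicious" factor ranges.
print(is_malicious(ijf=(0.5, 0.8), nmf=(0.6, 0.9),
                   ijf_ref=(0.4, 1.0), nmf_ref=(0.5, 1.0)))  # True
```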

6. Traversing Strategy of Colluder Detection Based on Random Walk

6.1. Traversing Strategy of FCADM

In the traversing strategy for colluder detection in FCADM, we use the trust relationship as a factor for deciding the detection path in the social graph model. Here, we first address the probability evaluation in the traversing strategy for finding colluders by using the trust relationships among nodes.

Firstly, we give the set of symbols used in our traversing strategy in FCADM as follows:
(i) v_s: the malicious node selected through the method proposed in Section 5 and used as the start point of traversing.
(ii) v_c: the current node at each step of traversing.
(iii) v_n: the node selected for the next traversing step from the current node.
(iv) N(v_c): the set of direct neighbors with which the current node v_c has direct trust relationships.
(v) C: the set of colluders selected from the start point during traversing.
(vi) P: the set of nodes traversed from v_s in a single round of traversing.

We select a malicious node v_s from the set obtained in the previous section. Starting from v_s, we perform our traversing strategy through the nodes' trust relationships, with a probability of selecting the next node for traversing. The proposed traversing strategy is based on the random walk algorithm under certain probability conditions. At each step of traversing, we are at a current node v_c and need to decide whether the traversing should keep going and which node should be selected as the next one. If the current node is evaluated to be another colluder, we mark it as a colluder of v_s; if it is an honest one, we skip it and continue our traversing until termination. For each current node v_c, we have the following options:
(1) With a certain continuation probability, we continue our traversing and select the next node v_n. We then evaluate the probability of the selected node being a colluder.
(2) With the complementary probability, we do not continue the traversing. That means the current node is the end point and we can restart another round of traversing.
(3) If the out-degree of node v_c is 0, we backtrack to the previous node and then restart the trust traversing according to option (1) or (2).

Next, we discuss the first two options in turn.

(1) Firstly, we need to decide the probability of continuing the traversing. We consider that the continuation probability can be calculated from the number of nodes in the neighbor set N(v_c) of the current node and the shortest distance between the current node and the start point node, where the distance is the length of the shortest path from v_c to v_s. Obviously, with more nodes in the neighbor set and a shorter path from the current node to the start point, the continuation probability is larger.

Then, we need to decide the probability of selecting the next node during traversing. That means we need to select a directly trust-related neighbor of the current node under a certain probability. We have two rules for defining the probability calculation here: the trust relationship and the node similar factor. That is, a higher-valued trust relationship to a neighbor and meanwhile a higher node similar factor imply a higher probability of being selected as the next traversing node. We denote by v_n the candidate next node chosen from the current node's directly trust-related nodes N(v_c). The probability of selecting node v_n is then proportional to the average value of the interval fuzzy trust T(c, n) and the average value of the candidate's node similar factor, normalized over the |N(v_c)| directly trust-related neighbors of the current node. In addition, we define a damping factor to denote the probability of a node being randomly selected as the next node. That means each direct neighbor of the current node has at least this probability of being chosen. It allows a newly connected neighbor to be selected as the next node in traversing with a certain probability even if it has relatively low trust with the current node.

In addition, since we aim to find malicious colluders in our traversing, node similarity is another essential factor for our strategy. We therefore propose a weight for a candidate node based on its node similar factor. In our study, this NSF-based weight is calculated from the candidate node's past behaviors, where the relevant sets are the set of nodes it has voted on in the past and the set of items to which it has sent judgments in the past.

Correspondingly, the final probability of selecting the next node combines the trust-based selection probability, the NSF-based weight, and the damping factor.
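To make the selection rule concrete, the sketch below combines the average of the trust interval towards each neighbor, an NSF-based weight, and a damping factor into a normalized distribution and then samples the next node; the weighted-product-plus-uniform-damping combination and the damping value 0.15 are assumptions, since the paper's exact equations are not reproduced here.

```python
import random
from typing import Dict, Tuple

Interval = Tuple[float, float]

def next_node_distribution(
    trust_to_neighbors: Dict[str, Interval],  # neighbor id -> T(c, n)
    nsf_weight: Dict[str, float],             # neighbor id -> NSF-based weight of the neighbor
    damping: float = 0.15,                    # probability mass spread uniformly (assumed value)
) -> Dict[str, float]:
    """Probability of picking each neighbor as the next traversing node v_n."""
    neighbors = list(trust_to_neighbors)
    scores = {n: (sum(trust_to_neighbors[n]) / 2.0) * nsf_weight.get(n, 0.0) for n in neighbors}
    total = sum(scores.values())
    uniform = 1.0 / len(neighbors)
    return {
        n: damping * uniform + (1.0 - damping) * (scores[n] / total if total > 0 else uniform)
        for n in neighbors
    }

def pick_next(dist: Dict[str, float]) -> str:
    """Sample the next node according to the selection distribution."""
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Example: a neighbor with both high trust and high NSF weight dominates the distribution.
dist = next_node_distribution({"v_a": (0.7, 0.9), "v_b": (0.2, 0.4)}, {"v_a": 0.8, "v_b": 0.3})
print(dist, pick_next(dist))
```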

(2) With the complementary probability, we stay at the current node and then evaluate whether it is a colluder. The idea is that we define an evaluation method to recognize colluders; its details are discussed below.

6.2. Evaluation of Colluder in Traversing Strategy

In each step of the traversing strategy, we should evaluate whether the selected current node is a colluder of the malicious node (the start point) or not. Here, we propose an evaluation method for recognizing colluders based on the malicious factors and the node's reputation. The evaluation method comprises the following steps.

Step 1. For each current node v_c, we evaluate its malicious factors IJF and NMF based on (3), (5), and (6) and then determine whether it is malicious as in Section 5.

Step 2. If v_c is evaluated as malicious in Step 1, we calculate the difference between its reputation and the average reputation value of the already selected colluders.

If the reputation difference is beyond the allowed range, we exclude the current node from the colluder set.

6.3. Termination of Traversing Strategy in FCADM

Under the above traversing strategy, we can detect colluders by traversing along node trust relationships. Such traversing can be repeated iteratively to discover all possible colluders reachable from the selected start point (a malicious node). In each step, we have two possible alternatives:
(1) The current node is a colluder; then we should definitely continue our traversing.
(2) The current node is an honest node; we skip the node and continue our traversing. However, we consider that an honest node has a relatively low probability of connecting to a malicious one.
Based on the above consideration, we propose a termination factor for a single round of traversing to decide whether the traversing should be terminated or not. If the current node is a colluder, we consider that it has a high probability of connecting to another colluder, and then the termination factor should be low. By contrast, if the current node is honest, we should increase the termination factor. A single traversing is terminated once the termination factor exceeds a threshold (set as 0.8 in this study). In addition, to prevent overly deep traversing, we set the maximum number of steps in a single traversing to 6, based on the widely mentioned idea of "six degrees of separation" [32].
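The sketch below emulates the termination logic of a single round: the termination factor is lowered when a colluder is found and raised when an honest node is passed, and the round stops once the factor exceeds 0.8 or the walk reaches the six-step depth limit; the update amounts are illustrative, since the termination equation itself is not reproduced here.

```python
from typing import List

STOP_THRESHOLD = 0.8   # termination threshold from the text
MAX_STEPS = 6          # depth limit motivated by "six degrees of separation"

def run_single_traversal(step_outcomes: List[bool],
                         honest_increase: float = 0.3,
                         colluder_decrease: float = 0.2) -> int:
    """Walk over a precomputed sequence of step outcomes (True = colluder found at that
    step, False = honest node) and return how many steps were taken before termination.
    The update amounts `honest_increase` and `colluder_decrease` are illustrative."""
    termination = 0.0
    for step, is_colluder in enumerate(step_outcomes[:MAX_STEPS], start=1):
        if is_colluder:
            termination = max(0.0, termination - colluder_decrease)  # keep walking
        else:
            termination += honest_increase                           # likely a dead end
        if termination > STOP_THRESHOLD:
            return step
    return min(len(step_outcomes), MAX_STEPS)

# Example: consecutive honest nodes raise the factor until it exceeds 0.8 at step 5.
print(run_single_traversal([False, True, False, False, False]))  # 5
```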

Moreover, we address the termination of the overall traversing strategy in FCADM. With an increasing number of traversing rounds, we obtain colluders together with their malicious factors. We terminate the overall traversing if no new colluder is detected over many consecutive rounds. Note that we set a minimum number of traversing rounds, 100, to ensure that the strategy works well in the cold start state. Meanwhile, if no colluder can be detected after a maximum number of traversing rounds (set as 500 in this study) from a starting point, we consider that it has no colluders.

7. Experiment and Analysis

Extensive simulation experiments have been conducted to evaluate the performance of the proposed FCADM. We perform the simulations using our prototype system. The simulation environment is as follows.

(1) Simulation Platform. To test the performance of our proposed method, we developed a prototype that can simulate the nodes and their interaction behaviors based on prototype configurations. In the prototype, nodes are controlled automatically by the system and interact with others according to certain set behaviors, such as propagating information, sending comments of items or nodes, forwarding posts, and approving posts.

(2) Dataset. In our experimental scenarios, the initial dataset was collected manually from mobile social based systems. Our data included about 1,560 IDs and more than 170,000 records (including posts, comments, and other behaviors). The collected nodes were inserted as initial nodes into the prototype with their real personal data. We then added about 900 additional nodes whose roles were set manually. There were two node roles in the prototype: honest node and malicious node. In the initial settings, all nodes with real data were seen as honest ones, while malicious nodes were set from the additional nodes. In our prototype, once a node was set as an honest node or a malicious one, it could not change its role. The reputation values for each ID were initially set according to a normal distribution with mean 0.7 and variance 0.1 in our prototype.

(3) Interaction Behavior of Node. Honest nodes executed no malicious behaviors toward others, while malicious nodes sent malicious comments to honest nodes and sent fake comments to malicious ones to inflate their reputations.

(4) Topology of Prototype. For the nodes collected from real data, there was a one-way direct link from one node towards another if it had a relationship with that node; all direct links were generated from the initial dataset and were fixed and invariable in the prototype. The initial dataset was used to calculate the initial trust according to reliability, and a network topology was then formed based on this real-world source. For the collected real nodes and their relations, the average out-degree is around 6.8 and the average in-degree is around 3.7. Therefore, we connected the additional nodes with an average out-degree of 7 and an average in-degree of 4.

Detailed characteristics of simulation in the prototype are shown in Table 5.

7.1. Performance Evaluation of Trust Schedule Calculation

In this test, we evaluate the performance of the trust schedule calculation based on the fuzzy interval valued perspective. Firstly, we compare the calculation methods of the trust relationship proposed in this work. We set four groups for the performance comparison: the average of trust aggregation (AG), the EigenTrust method (ET), the ultimate trust rating (UT), and the proposed fuzzy interval valued trust in FCADM (FT). In the test, we conduct two runs with 20% and 30% malicious additional nodes and 10,000 interactions among nodes, and record the accuracies of the trust relationship calculation. As shown in Figures 2(a) and 2(b), the accuracies of the proposed method (FT) are higher than those of AG and ET, while UT achieves the best performance of all the methods. In our view, the reasons are as follows: (1) the ultimate trust method dynamically adjusts its factor at large computational cost, which results in the best performance because all nodes maintain trustworthy knowledge about others for detecting malicious interactions; (2) the proposed method reflects a fuzzy scale based on judgments, which results in a reasonable numerical interval including maximum and minimum values rather than a single value, and such interval valued trust gives higher accuracy than the exact crisp numeric values of the other methods; and (3) the effect of the time dimension is taken into account in our proposed method, while it is not considered in the other two methods.

Then, we examine the performance of the reputation proposed in FCADM. Similar to the above test, we conduct two runs with 20% and 30% malicious additional nodes and 10,000 interactions among nodes, and record the accuracies of the nodes' reputations. For comparison, we set three groups for reputation calculation: the EigenRep method (ER), the average aggregation method for reputation (AR), and the proposed fuzzy interval valued reputation in FCADM (FR). As shown in Figures 3(a) and 3(b), our proposed method achieves slightly better performance than the other methods. In our analysis, the reasons are as follows: (1) the proposed method (FR) reflects a fuzzy scale for reputation, which results in a reasonable numerical interval including maximum and minimum values rather than a single value; similar to the trust relationship, interval valued reputation meets the requirement of accurate reputation aggregation better than the exact crisp numeric values of the other methods; and (2) the consideration of the time dimension also contributes to higher accuracy of reputation aggregation.

7.2. Performance Evaluation for Proposed Malicious Node Selection

In this examination, we first examine the impacts of the value settings of the two parameters used to select malicious nodes. These parameters play important roles in the later selection of malicious nodes. For different parameter combinations, the accuracies differ, as shown in Figure 4(a). From the results, we can see that a particular combination of parameter settings yields the best performance in detecting malicious nodes. Moreover, we conduct experiments to reveal the sensitivity of the two parameters. From Figures 4(b)-4(c), we can see that the accuracies of detecting malicious nodes vary, while the results are in agreement with the result in Figure 4(a). Further, we analyze the impact of the threshold of the range overlap rates in (12). As shown in Figure 4(d), the accuracy of malicious node detection is lower when the threshold is set too low or too high. We consider that too high a threshold leads to malicious nodes being excluded from detection, while too low a threshold leads to honest nodes being mistakenly included. This test validates that thresholds around 0.3 are often a reasonable compromise.

Then, we examine the performance of the proposed malicious node selection based on (1), (2), (3), (5), (6), and (9). The results are shown in Figures 4(e)-4(f). In Figure 4(e), we compare the performance under the optimal value combination (OP) and the worst value combination (WP) of the two parameters. By comparing the accuracies in Figure 4(e), we can see that the performance difference between OP and WP becomes larger as the ratio of malicious nodes increases. That means that an appropriate combination of parameter values is a significant factor for malicious node selection.

Further, we compare the performance of four methods: our proposed method (DM), the majority rule (MR), the entropy-based approach (EA), and the signal modeling based method (SM). We record the average accuracy of each method after 10,000 transactions. In this examination, the two parameters are set as described above. As shown in Figure 4(f), we can see that, as the ratio of malicious nodes in the mobile social network changes, the overall accuracies of all methods decrease. However, our proposed method achieves the best performance of all methods.

7.3. Performance Evaluation for Detection Traversing Strategy

In this examination, we verify the performance of the proposed detection traversing strategy for detecting collusive attack nodes. For comparison, we set three groups: the proposed FCADM, the majority rule (MR), and behavior similarity (BS). The results are shown in Figures 5(a)-5(b). As shown in Figure 5(a), collusive attack node detection based on the proposed FCADM achieves the best performance of all methods, with average improvements of approximately 12.3% and 15.5% over MR and BS, respectively. Furthermore, we set 20% or 10% of the nodes in the mobile social network as malicious and then detect collusive attacks. We notice that, under different numbers of nodes in the mobile social network, the accuracy of collusive attack detection based on FCADM remains stable, as shown in Figure 5(b). The results show that FCADM is feasible and effective for detecting collusive attacks in mobile social networks.

8. Conclusion

For users in mobile social networks, receiving judgments that establish personal reputation through interactions with strangers is very common and inevitable. However, due to the lack of knowledge about the other parties they are dealing with and the magnitude of the network, most receivers face potential risks because of the existence of malicious users. Protecting honest users from frauds, especially collusive attacks, on their reputation aggregation therefore has practical significance. First, our work provides a valuable guideline for constructing a collusive attack detection framework with a fuzzy view. Monitoring systems in mobile social networks can easily be built according to the proposed framework and its related calculation methods. Meanwhile, the formal definitions and concepts given in this work are appropriate for machine processing of the calculation methods, so that a machine driven mechanism can achieve higher efficiency than manual methods and reduce the workload of calculating fuzzy intervals and detecting collusive attacks. Likewise, monitoring centers can inquire about the details of attack detection, including node information, reputation voting, and node reputations, for the further purposes of fuzzy trust evaluation and malicious factor calculation. Secondly, applying the malicious factors helps monitoring centers consider as many aspects as possible to obtain a more comprehensive evaluation of nodes. This is because the proposed IAP factor reflects whether an attack has happened, while the IJF factor reflects whether a node has launched attacks; further, the NMF factor reveals in a comprehensive way whether a node is malicious. Thereby, our proposed method provides sound usability for discovering single malicious nodes. Thirdly, the proposed traversing strategy helps to trace potential colluders connected to a recognized single malicious node according to the trust relationships among users. By traversing trustworthy relationships, we select nodes based on the random walk algorithm with different probabilities, corresponding to trust behavior among people in real life; for example, a highly trusted node has a greater probability of being an associate. In fact, our proposed method can be used for reputation system monitoring and security enhancement in mobile social network based e-commerce platforms.

In our view, robustness and reliability are essential for reputation aggregation in mobile social networks. Therefore, our study aims to detect collusive attacks by considering reputation from a fuzzy perspective based on nodes' relationships. In this study, we propose a collusive attack node detection mechanism (FCADM) for mobile social networks comprising the following aspects: (1) formal definitions for FCADM; (2) a fuzzy trust schedule for trust relationship and reputation calculation; (3) a selection method for malicious nodes based on the proposed factors; and (4) a traversing strategy for detecting collusive attack nodes based on random walk.

The results justify the performance of the proposed scheme and demonstrate its accuracy on our dataset. To conclude, the analysis of the experimental results shows that the results based on our proposed scheme are in line with the actual situation. The proposed FCADM can make objective judgments about nodes involved in collusive attacks, which can enhance the security of mobile social networks.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is funded by National Natural Science Foundation of China (61572326, 61103069, and 71171148), Key Lab of Information Network Security, Ministry of Public Security (C14602), and Program of Shanghai Normal University (DCL201302).