Abstract

Trust violation during cooperation of autonomous agents in multiagent systems is usually unavoidable and can arise for a wide range of reasons. From a psychological point of view, a violation of trust occurs when one agent (the transgressor) places very low weight on the welfare of another agent (the victim) by inflicting a high cost on the victim for a very small benefit to itself. For the victim to make an effective decision about whether to cooperate or punish in the next interaction, a psychological variable called the welfare tradeoff ratio (WTR) can be used to upregulate the transgressor's disposition so that exploitive behaviors become less likely in the future. In this paper, we propose computational models of metrics based on the welfare tradeoff ratio, along with a way of integrating the multiple metrics into a final result. In addition, a number of experiments based on social network analysis are conducted to evaluate the performance of the proposed framework; the results show that by implementing WTR the simulated network is able to deal with different levels of trust violation effectively.

1. Introduction

In any social organism, individuals commonly encounter situations in which their actions affect, positively or negatively, their own welfare as well as the welfare of others, for example, suicidal hive defense by honeybees, blood sharing by vampire bats, alarm calls by birds, or sharing a cake with friends. Consequently, mechanisms in one's mind must carry out computations to solve the adaptive problem of how much to weight the welfare of another relative to the self. Evolutionary psychologists refer to the cognitive variable regulating an individual's disposition to trade off his own welfare for the benefit of another individual as a welfare tradeoff ratio, or WTR [13]. The higher an individual's WTR toward a target individual, the more the individual will sacrifice his own welfare to enhance the target's welfare. The lower an individual's WTR toward a target, the more likely the individual is to harm or punish the target. For instance, a WTR of 1:1 implies that the individual values the welfare of the target equally with his own, meaning the individual imposes a 1-unit cost upon the target if and only if it, in turn, leads to at least a 1-unit benefit for the self. A WTR of 1:2 implies that the individual values the target's welfare half as much as his own; that is, the individual is willing to impose a 2-unit cost upon the target to obtain one unit of benefit [24].

Many theories, including kinship [5], reciprocity [6, 7], and aggression [8], describe different sets of cues that can be determined, computed, and integrated to produce a welfare tradeoff ratio. When implementing these processes, three main steps are required: (i) to compute the effect of an act on the welfare of the self, (ii) to compute the effect of an act on the welfare of the target, and (iii) to deploy a welfare tradeoff function that indicates the degree to which the individual weights the welfare of the target compared to the self [2]. In line with this, an individual will therefore trade off his own welfare in favor of the target's welfare when the following inequality holds: $\mathrm{WTR}_{xy} \times B_y > C_x$; that is, when the benefits $B_y$ to the target, after being discounted by the self's WTR toward the target, are greater than the costs $C_x$ incurred by the self to deliver the benefits [9].
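To make this decision rule concrete, the following minimal Python sketch (our illustration, not part of the cited models; function name and numbers are hypothetical) applies the inequality directly:

def should_help(wtr_toward_target, benefit_to_target, cost_to_self):
    # Deliver the benefit only if the target's benefit, discounted by the
    # actor's WTR toward the target, outweighs the actor's own cost.
    return wtr_toward_target * benefit_to_target > cost_to_self

# A WTR of 1:2 (i.e., 0.5): helping is worthwhile only when the target's
# benefit is more than twice the actor's cost.
assert should_help(0.5, benefit_to_target=3.0, cost_to_self=1.0)
assert not should_help(0.5, benefit_to_target=1.0, cost_to_self=1.0)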

Within this concept, trust violation, which is usually unavoidable during cooperation of autonomous agents in multiagent systems, can be understood as the result of an individual agent expressing a very low welfare tradeoff ratio by inflicting a high cost on the target agent for a very small benefit to the self [10]. In this regard, a number of counterviolation strategies have been modeled to upregulate the transgressor's welfare tradeoff ratio in order to decrease the number of exploitive behaviors likely to happen in future interactions [2]. These counterviolation strategies include punitive and reparative strategies [4, 10]. More specifically, factors based on the three theories mentioned above must be taken into consideration when choosing a recalibration strategy that impacts the transgressor's welfare tradeoff ratio, for example, genetic relatedness, relative formidability, and value of the relationship as a reciprocity partner [11]. This implies that trust is likely to be recovered by the victimized individual if the transgressor is a closely related kin, more formidable, or an irreplaceable partner. In other words, the combination of these factors is taken as input to a procedure designed to compute a welfare tradeoff ratio in the mind of the victim toward the transgressor. If the outcome of such a computational program is high or exceeds a predefined cooperation threshold, then reparative strategies are triggered to increase the transgressor's welfare tradeoff ratio, and as a result trust can be restored cooperatively through forgiveness. Otherwise, punitive strategies are activated to induce the transgressor to place greater weight on the welfare of the victim through revenge [12].

The issue of trust restoration based on aspects of human social decision-making and behavior in psychology has received comparatively little attention, especially in the context of distributed multiagent systems. The current study attempts to address this gap by developing a comprehensive framework for identifying how one individual assesses the welfare of another individual after a trust violation. Additionally, the computational result of the framework is continually updated based on interactions. The rest of the paper is organized as follows. In Section 2, we present related work on trust restoration from a sociopsychological point of view and its implementations in the multiagent systems paradigm. Section 3 is dedicated to the details of five WTR metrics and how they can be integrated. Experiments to evaluate the effectiveness of the proposed framework are provided in Section 4. The last section concludes our study along with possible future work.

2. Related Work

Trust restoration is defined as the process of restoring and improving a victimized individual's perception of trustworthiness along three aspects (competence, integrity, and benevolence) after the occurrence of a trust violation [13], while [14] defines trust restoration as a reparative process with willingness to partially or totally expose one's vulnerability to another party. In line with these definitions, trust restoration is the process of rebuilding trust and making a victimized individual hopeful about future cooperation by mitigating negative motivations, such as anger or fear, while developing positive motivations [15].

From a sociopsychological point of view, reparative strategies can take several forms. In [17], a costly apology and self-punishment serve as costly signals to protect a transgressor's reputation after an unintentional transgression. Plain apologies are more effective in restoring trust than denials when a trust violation is perceived as either competence-based or morality-based [18]. In [15, 19], attribution models are proposed for identifying the causes of trust violation and rebuilding damaged trust through reparative actions based on emotion and motivation. In dealing with corporations' negative publicity, a cognitive response (i.e., providing sufficient information) is more effective for violations of ability, whereas an affective response is more effective for violations related to integrity and benevolence [20]. The theoretical concepts employed in these previous studies differ from ours. In our study, we propose a mechanism for governing the maintenance of reciprocal relationships after a trust breakdown. The proposed mechanism is built on the concept of a welfare tradeoff ratio (WTR), an adaptive internal representation for assessing self and other welfare valuations, which also serves as the foundation of our computational model of trust restoration.

In the context of multiagent systems, two main prosocial motivations are essential for trust restoration: forgiveness and regret [21]. In this study, forgiveness and regret are considered implementable properties for formalizing trust in a computational model. Forgiveness, in particular, has been extensively studied as a positive method of coping with trust violations in [12, 20–27], while regret, always together with responsibility, is used to form the definition of an apology [13, 20, 21, 28–31]. For instance, an apology was defined by [20] as “a decisive public expression regarding both responsibility and regret for a violation of trust.” Regret as a computational model takes this emotional reaction into account not only on the offender's side, but also on the victim's side after the offence in an information sharing context [21]. In the context of e-commerce, trust restoration is often necessary since online transactions are noisier than face-to-face transactions [18]. Specifically, when a trust violation occurs and a vendor acknowledges the incident, an apology and other appropriate restorative actions shape the extent to which a victim is willing to reconcile [28, 32]. In centralized systems, the concept of forgiveness has been proposed in our previous work [23] to identify untrustworthy agents who are nevertheless capable of fulfilling future transactions. The so-called forgiveness mechanism is a computational model based on five positive motivations: intent, history, apology, severity, and importance. Our current study is partially inspired by this forgiveness mechanism. However, the major difference is that the current study attempts to identify metrics well suited to establishing a framework for recovering trust in distributed multiagent systems. In addition, it is worth noting that trust restoration is not a simple one-way process; rather, both the victimized individual and the transgressor are involved, with equally critical roles to play [33].

3. Welfare Tradeoff Ratio Metrics

In this section, we present how a welfare tradeoff ratio can be calculated in the context of distributed multiagent systems. For simplicity, we consider a set of agents $A = \{a_1, a_2, \dots, a_n\}$, where $n$ is the number of agents in a community. The interaction among agents is represented as a weighted directed social network. In particular, an agent interacts with another agent, establishing a link between them, and the link is labeled with a weight indicating the quality of the transaction in the form of a rating. Suppose that agent $x$ and agent $y$ are autonomous and self-interested. At the current time $t$, agent $y$ violates agent $x$'s trust by not fulfilling the transactional agreement. In other words, the trust violation committed by agent $y$ occurs when agent $y$ does not place a sufficiently high weight on agent $x$'s welfare. As a result, agent $x$'s mind computes five different metrics [10], that is, kinship between agent $x$ and agent $y$, formidability of agent $y$, irreplaceability of agent $y$, signals of remorse, and past contributions from agent $y$, in order to recalibrate the welfare tradeoff ratio of agent $y$. Figure 1 shows the WTR metrics' computational system, and the details of each metric are given in the following subsections.

3.1. Kinship between Agent $x$ and Agent $y$

Kin selection theory predicts that cues of genetic relatedness trigger positive valuation, willingness to help, and reparative rather than punitive responses to exploitation [5, 10, 11]. Specifically, a welfare tradeoff ratio should be upregulated for an individual when the following Hamilton's rule holds: $r \times B_y > C_x$, where $r$ is an index of genetic relatedness between agent $x$ and agent $y$. As $r$ increases (i.e., agent $x$ and agent $y$ are closely related kin, friends, or those who share the same interests), the discounted benefit to agent $y$ is more likely to exceed agent $x$'s cost. Interestingly, the human mind computes a psychological kinship index using a neurocomputational system to help decide which individuals are close genetic relatives and how much an individual's own welfare is to be traded off against that of another [3, 34].

Unlike human kin detection, however, agents in multiagent systems use different mechanisms to identify who are like-minded and similar in preferences. Here, we propose one approach: to compute the degree of similarity of interacting agents in the system based on their ratings of the same agents that both have rated. Intuitively, the degree of similarity indicates the extent to which two agents, heterogeneous in nature, are similar in their ways of judging. If their mindsets contrast, they will disagree with each other most of the time [35]. In the context of collaborative filtering systems, the similarity of two agents' preferences can be used to measure the reliability of their opinions or ratings [35–39]. In other words, the higher the degree of similarity in preferences, the more the trust in each other's opinions or ratings. We apply a popular approach in collaborative filtering systems, Pearson's correlation coefficient (PCC), to compute the similarity between agent $x$ and agent $y$ as follows:
$$K_{xy} = \frac{\sum_{a \in A_{xy}} (r_{x,a} - \bar{r}_x)(r_{y,a} - \bar{r}_y)}{\sqrt{\sum_{a \in A_{xy}} (r_{x,a} - \bar{r}_x)^2}\,\sqrt{\sum_{a \in A_{xy}} (r_{y,a} - \bar{r}_y)^2}} \quad (1)$$
where $A_{xy}$ is the set of agents corated by both agent $x$ and agent $y$, $r_{x,a}$ is the rating of agent $a$ by agent $x$, $\bar{r}_x$ is the average rating of all agents rated by agent $x$, and $\bar{r}_y$ is the average rating of all agents rated by agent $y$.
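As a concrete sketch, the PCC computation in (1) can be implemented as follows; the dictionary-based representation of ratings is our assumption:

from math import sqrt

def pcc_similarity(ratings_x, ratings_y):
    # ratings_x / ratings_y map a rated agent's id to the given rating
    corated = ratings_x.keys() & ratings_y.keys()  # A_xy: corated agents
    if not corated:
        return 0.0  # assumption: no corated agents yields zero similarity
    mean_x = sum(ratings_x.values()) / len(ratings_x)  # average over all of x's ratings
    mean_y = sum(ratings_y.values()) / len(ratings_y)
    num = sum((ratings_x[a] - mean_x) * (ratings_y[a] - mean_y) for a in corated)
    den = (sqrt(sum((ratings_x[a] - mean_x) ** 2 for a in corated))
           * sqrt(sum((ratings_y[a] - mean_y) ** 2 for a in corated)))
    return num / den if den else 0.0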

3.2. Formidability of Agent $y$

Formidability refers to the ability to inflict costs in order to enforce a welfare tradeoff ratio in an individual's favor through his or her physical strength [3]. Ancestrally, a man's greater formidability was a major component of his ability to inflict costs on others by fighting them when conflicts occurred [2]. According to the theory of the asymmetric war of attrition [8], agent $x$ will cede a resource to agent $y$ as a function of agent $y$'s relative formidability value $F_{y/x}$, where $F_{y/x}$ indicates how formidable agent $y$ is compared with agent $x$ [2, 9]. Aside from physical strength, another potential component of formidability is the position or role in the community [11]. In some species, a high position or influence in the community is the result of personal physical strength [4, 12].

In the context of multiagent systems, identifying key actors that play a particular role or occupy a particular position in the system is useful in a number of ways, for example, searching for suspects of potential terrorist threats or mining influential people to help viral marketing [40–42]. Based on [16], the roles of agents can be classified into four types depending on the number of communities an agent connects to (community score) and the number of links incident to that agent (degree): ambassadors, big fish, bridges, and loners. Ambassadors are the most important role, providing connections to many members within the same community and to many other communities. Big fish, named after the cliché “big fish in a small pond,” are very important only within their own community; they have a high degree but a low community score. Bridges, which connect a number of communities, have a low degree but a high community score. Lastly, loners tend to be companionless, since they have both a low degree and a low community score. Figure 2 illustrates the four types of community-based roles with a threshold of 0.5.

To determine the role of agent $y$, we assume that the information about the community membership of every agent is available. Then, the number of communities linked to agent $y$, that is, the community score $CS_y$, can be calculated as follows:
$$CS_y = \sum_{j \in N_y} \mu_j \quad (2)$$
where $N_y$ is the set of neighbor agents of agent $y$ and $\mu_j$ is the community membership contribution of agent $j$, which can be defined as follows:
$$\mu_j = \frac{1}{k_j + 1} \quad (3)$$
where $k_j$ is the number of agents (excluding agent $j$) which are neighbor agents of agent $y$ and in the same community as agent $j$. In case the community membership information is unavailable, $CS_y$ needs to be estimated from the topology of the network. Examples of theorems providing the calculation of the expected value of $CS_y$ can be found in [16].
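A direct implementation of (2) and (3) is sketched below, assuming the community membership of every agent is known (the data layout is ours):

def community_score(y, neighbors, community_of):
    # neighbors: adjacency lists; community_of: agent id -> community label
    n_y = list(neighbors[y])
    score = 0.0
    for j in n_y:
        # k_j: y's other neighbors in the same community as j
        k_j = sum(1 for u in n_y if u != j and community_of[u] == community_of[j])
        score += 1.0 / (k_j + 1)  # each represented community sums to 1
    return score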

After obtaining the role of agent $y$, the formidability value of agent $y$ can be assigned as a constant according to its classified role and its relative degree, with the condition that an ambassador role is assigned the highest formidability value, while a loner is assigned the lowest. In the context of trust restoration in distributed environments, an agent with a large number of connections within its own community is considered more formidable than a bridge among communities. Therefore, a big fish role is assigned a higher formidability value than a bridge role. In our study, we assign a constant formidability value $F_y$ for agent $y$ as follows:
$$F_y = \begin{cases} c_{amb} & \text{if agent } y \text{ is an ambassador} \\ c_{fish} & \text{if agent } y \text{ is a big fish} \\ c_{brid} & \text{if agent } y \text{ is a bridge} \\ c_{lone} & \text{if agent } y \text{ is a loner} \end{cases} \quad (4)$$
with constants ordered $c_{amb} > c_{fish} > c_{brid} > c_{lone}$.
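The role classification and constant assignment can be sketched as follows; the numeric formidability constants are placeholders that merely respect the ordering ambassador > big fish > bridge > loner, not values from the paper:

def classify_role(degree, community_score, deg_threshold, cs_threshold):
    # community-based roles of [16], split on degree and community score
    high_deg = degree >= deg_threshold
    high_cs = community_score >= cs_threshold
    if high_deg and high_cs:
        return "ambassador"
    if high_deg:
        return "big fish"
    if high_cs:
        return "bridge"
    return "loner"

# illustrative constants only (c_amb > c_fish > c_brid > c_lone)
FORMIDABILITY = {"ambassador": 1.0, "big fish": 0.75, "bridge": 0.5, "loner": 0.25}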

3.3. Irreplaceability of Agent $y$

Agent $x$ is incentivized to place greater weight on the welfare tradeoff ratio of agent $y$ if agent $y$'s ability to confer benefits through the product or service it provides is high [1, 2, 11]. In much the same way, trust is likely to be rebuilt if the product or service provided by agent $y$ is irreplaceable or crucially required by most other agents in the community. The availability of a product or service is, in some situations, also important for fulfilling the requirements of agents' transactions, even knowing that the outcome of future transactions may not be maximized. Based on community structure, three centrality measures with tuning parameters are used to evaluate the importance of agent $y$: degree, degree of neighbor agents, and betweenness [43, 44]. The degree centrality of agent $y$, $C_D(y)$, reflects the number of relations that agent $y$ is connected to and indicates the involvement of agent $y$ in the community. We extend the general definition of degree centrality by incorporating the rating and the formidability value of each connected agent as follows:
$$C_D(y) = \sum_{j=1}^{n} a_{yj}\, r_{jy}\, F_j \quad (5)$$
where $a_{yj}$ is the adjacency matrix entry, equal to 1 if agent $y$ is connected to agent $j$ and 0 otherwise, $r_{jy}$ is the rating of agent $y$ by agent $j$, and $F_j$ is the formidability value of agent $j$. The second measure is the degree centrality of the neighbor agents of agent $y$, $C_N(y)$, which is formalized in the same way as the degree centrality of agent $y$:
$$C_N(y) = \frac{1}{|N_y|} \sum_{j \in N_y} C_D(j) \quad (6)$$
where $N_y$ is the set of neighbor agents of agent $y$ and $|N_y|$ is the number of neighbor agents of agent $y$. For the last one, the betweenness centrality of agent $y$, $C_B(y)$, measures the proportion of shortest paths between pairs of agents that pass through agent $y$ and is defined as follows:
$$C_B(y) = \sum_{s \neq y \neq t} \frac{g_{st}(y)}{g_{st}} \quad (7)$$
where $g_{st}$ is the number of shortest paths between agent $s$ and agent $t$ and $g_{st}(y)$ is the number of shortest paths between agent $s$ and agent $t$ containing agent $y$. All measures are discounted by tuning parameters and summed up to give an overall importance value of agent $y$:
$$I_y = \alpha\, C_D(y) + \beta\, C_N(y) + \delta\, C_B(y) \quad (8)$$
where $\alpha$, $\beta$, and $\delta$ are tuning parameters with the conditions $\alpha + \beta + \delta = 1$ and $\alpha, \beta, \delta \in [0, 1]$. The tuning parameters can be set according to the preference of the system designer.
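The three centrality measures and their combination in (8) can be sketched with networkx; the averaging in C_N, the edge attribute name "rating", and the example tuning values are our assumptions:

import networkx as nx

def importance(G, y, formidability, alpha=0.5, beta=0.3, delta=0.2):
    def weighted_degree(v):
        # (5): sum of received ratings, credited by each neighbor's formidability
        return sum(G[v][j].get("rating", 0.0) * formidability[j]
                   for j in G.neighbors(v))

    c_d = weighted_degree(y)
    nbrs = list(G.neighbors(y))
    c_n = sum(weighted_degree(j) for j in nbrs) / len(nbrs) if nbrs else 0.0  # (6)
    c_b = nx.betweenness_centrality(G)[y]  # (7), normalized by networkx
    return alpha * c_d + beta * c_n + delta * c_b  # (8)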

3.4. Signals of Remorse from Agent $y$

Expressions of remorse and sincerity are considered key components of successful trust restoration mechanisms [14, 45]. Through these expressions, damaged relationships can be repaired without involving third parties (e.g., police or courts). Without them, the transgression only reinforces the victim's belief that it was intentional and that similar behaviors might happen again in the future; thus, retaliation is more likely to be triggered [10, 12]. The most typical form of affective trust recovery is an apology. The transgressor's expression of a truthful apology can enhance the victim's perception of interactional justice and improve postrecovery satisfaction [46]. Moreover, the effectiveness of trust recovery depends on the timing of the apology, as it shows a sense of responsibility-taking on the part of the transgressor. In other words, a transgressor who apologizes immediately after the offence takes place is more likely to be forgiven than one who apologizes later. To compute this metric, we apply the same mathematical formula for the apology factor as in our previous work [22]. That is, we first define a recency factor [47]:
$$\rho_y = e^{-\Delta t / \lambda} \quad (9)$$
where $\rho_y$ is the recency factor of the apology from agent $y$ and $\Delta t = t_{ap} - t_v$ is the time interval between the time $t_v$ when the transgression takes place and the time $t_{ap}$ when agent $y$ apologizes. A larger time difference means a significantly less positive judgement. The parameter $\lambda$ is the decay rate of the apology offer: a small $\lambda$ indicates stronger reliance on an early apology, whereas a larger $\lambda$ indicates more acceptance of a late apology. The apology value is then formalized as follows:
$$AP_y = \rho_y \times h_y \quad (10)$$
where $\rho_y$ is the recency factor of the apology offered to agent $x$ by agent $y$ and $h_y$ is the honesty of the apology offered by agent $y$, which lies in the interval $[0, 1]$, with 0 indicating a completely untruthful apology and 1 indicating that agent $x$ undoubtedly accepts the signal of remorse from agent $y$.
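Equations (9) and (10) translate directly into code; the exponential decay form follows the recency factor above:

from math import exp

def apology_value(t_violation, t_apology, honesty, decay):
    # rho = exp(-dt / lambda): prompt apologies score higher
    dt = t_apology - t_violation
    recency = exp(-dt / decay)
    return recency * honesty  # honesty in [0, 1]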

3.5. Past Contributions of Agent $y$

The outcomes of past contributions are typical information that human agents use to predict the likelihood of a participant's future behavior. In studies of modern criminal justice [10], reparative strategies are preferred over punitive ones for a vandal who has no prior criminal record, compared to one with a history of vandalism. Theoretically, past information serves as a reliable cue, through the concept of reputation, for deciding whether to trust or engage in future transactions with other agents, particularly in multiagent systems and online community settings [47–50]. In our context of trust restoration, reputation reflects a history of past contributions as the combination of direct experience with particular agents (e.g., the reputation of agent $y$ as a result of direct interactions with agent $x$) and information provided by others (e.g., the reputation of agent $y$ as a result of interactions with other agents) [49, 51–53]. In other words, a high reputation score of agent $y$ before the transgression can foster benevolence, a key component of trust rebuilding, while a low reputation score of agent $y$ decreases the likelihood of positive motivations, resulting in a negative inclination to restore trust.

To formalize this metric of past contributions based on reputation, we first define how each agent's rating is collected and updated. Agents rate their counterparts based on the outcome of each transaction within the range $[-1, 1]$, where $-1$ represents a fully defective transaction and 1 represents a totally successful transaction. Ratings are then aggregated and discounted by a time weighting factor [47], so that direct reputation converges to a very small value as time passes:
$$DR_{xy} = \frac{\sum_{i=1}^{m} \rho_i\, r_i}{\sum_{i=1}^{m} \rho_i} \quad (11)$$
where $m$ is the number of ratings based on agent $x$'s direct interactions with agent $y$, $r_i$ is the rating of transaction $i$, and $\rho_i$ is the recency factor of agent $y$'s rating by agent $x$ for transaction $i$, giving recent ratings more importance than older ones.
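A recency-weighted aggregation along the lines of (11) might look as follows; storing transactions as (timestamp, rating) pairs is our choice:

from math import exp

def direct_reputation(transactions, now, decay):
    # transactions: list of (timestamp, rating) pairs, rating in [-1, 1]
    if not transactions:
        return 0.0
    weights = [exp(-(now - t) / decay) for t, _ in transactions]
    ratings = [r for _, r in transactions]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)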

Information provided by other agents who have interacted with agent $y$ is used to model the indirect reputation score of agent $y$. However, the problem with indirect information is that it may not be as accurate or reliable as direct information due to its uncertainty. To overcome this problem, we take into account the formidability value $F_z$, which represents the role of the reporting agent $z$ and serves as the credibility of each rating. This means agents with more important roles provide more credible ratings than those whose roles are less important. Therefore, the indirect reputation of agent $y$ can be defined as follows:
$$IR_y = \frac{\sum_{z \in Z_y} F_z\, DR_{zy}}{\sum_{z \in Z_y} F_z} \quad (12)$$
where $Z_y$ is the set of agents who have provided ratings for agent $y$ and $F_z$ is the formidability value of agent $z$. The outcome of past interactions, integrating both the direct reputation of agent $y$ before the transgression and the indirect reputations from other agents who have interacted with agent $y$, is then computed as follows:
$$PC_y = \gamma\, DR_{xy} + (1 - \gamma)\, IR_y \quad (13)$$
where $DR_{xy}$ is the direct reputation provided by agent $x$ for agent $y$, excluding the rating of the current transgression. $\gamma$ is the reliability factor of agent $x$'s direct reputation and lies in the range $[0, 1]$. More specifically, $\gamma$ is calculated based on the minimum number of interactions that agent $x$ should carry out with agent $y$ in order to be confident about the direct reputation it has for agent $y$. Initially, $\gamma = 0$, since there is no direct interaction between agent $x$ and agent $y$ (i.e., $m = 0$). But as $m$ increases, $\gamma$ also increases according to the expression
$$\gamma = \min\left(\frac{m}{N_{\min}},\, 1\right) \quad (14)$$
where $N_{\min}$ is the minimum number of direct interactions between agent $x$ and agent $y$ required to achieve a predetermined acceptable error rate $\varepsilon$ and confidence level $\theta$, and is calculated using the Chernoff bound theorem [51] as follows:
$$N_{\min} = -\frac{1}{2\varepsilon^2} \ln\frac{1 - \theta}{2} \quad (15)$$
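Equations (13)-(15) combine as in the sketch below; the error rate and confidence level are example values, not the paper's settings:

from math import log

def n_min(error, confidence):
    # Chernoff-bound minimum sample size, as in (15)
    return -1.0 / (2 * error ** 2) * log((1 - confidence) / 2)

def past_contributions(dr_xy, ir_y, m, error=0.2, confidence=0.9):
    # gamma grows with the number m of direct interactions, capped at 1, as in (14)
    gamma = min(m / n_min(error, confidence), 1.0)
    return gamma * dr_xy + (1.0 - gamma) * ir_y  # (13)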

What we can observe from (13) and (14) is that the past contribution value of agent $y$ derives only from direct reputation if the number of direct interactions between agent $x$ and agent $y$ is large enough to reach the required confidence level within the predefined error bound; otherwise, the result is a mixture of both direct and indirect reputations.

3.6. Integration of Multiple Metrics

In the previous subsections, we discussed the different metrics, along with their computational aspects, that agent $x$ can use to evaluate and make decisions affecting the welfare of both agent $x$ and agent $y$, that is, whether agent $x$ should restore trust in agent $y$, who violated the current transaction. The integration process can simply be based on a weighted mean aggregation over all five metrics to provide the final result, that is, $\mathit{WTR}_y = \sum_{i=1}^{5} w_i M_i$, where $M_i$ is the value of metric $i$ and $w_i$ is its weight. However, this raises an important question of how much weight should be placed on each single metric, as one metric may potentially enhance, moderate, or attenuate the effect of other metrics on the final output.

For instance, consider an interaction between agent $x$, who is an ambassador in the community, and agent $y$, who is a loner. Agent $x$ has no cue of being related to agent $y$. Agent $x$ has a productive history of interacting with agent $y$, and when defecting, agent $y$ always signals an expression of apology. Should agent $x$ forgive and restore trust in agent $y$? According to theories of kinship, agent $x$ should not, as the index of relatedness is relatively low. According to theories of reciprocity, agent $x$ should reinstate agent $y$'s trust, as agent $y$ has a tendency to cooperate with agent $x$ in future interactions. According to theories of conflict, agent $x$ should not since, being in a higher position, agent $x$ is more attractive for other agents to interact with.

Based on the above example, we categorize the metrics into three groups according to the three theories, that is, kinship, reciprocity, and conflict. The first category, kinship ($V_K$), consists of one metric: the kinship metric $K_{xy}$. The reciprocity category ($V_R$) comprises three metrics: irreplaceability, signals of remorse, and past contributions. All metrics in this category are aggregated and normalized into the range $[0, 1]$ as follows: $V_R = \frac{1}{3}(I_y / I_{\max} + AP_y / AP_{\max} + PC_y / PC_{\max})$, where $I_{\max}$, $AP_{\max}$, and $PC_{\max}$ are the maximum values of the irreplaceability, signals of remorse, and past contributions metrics, respectively. The conflict category ($V_C$) also consists of one metric, that is, the formidability metric $F_y$. Each category has a predefined threshold for determining its corresponding weight. The welfare tradeoff ratio of agent $y$ is then computed as follows:
$$\mathit{WTR}_y = w_K V_K + w_R V_R + w_C V_C \quad (16)$$
where $w_K$, $w_R$, and $w_C$ are the weights of the kinship, reciprocity, and conflict categories, respectively. These three weights sum up to 1. The rationale behind using weights is the ability to control the final WTR value, either specifically, based on the nature of the relationships among community members, or dynamically, based on the calculated WTR metrics. In this study, the value of each weight is assigned depending on the value of its category: a category whose value exceeds its predefined threshold is given a greater weight than one whose value falls below its threshold. For example, if $V_K$ and $V_R$ exceed their predefined kinship and reciprocity thresholds but $V_C$ does not, then $w_K$ and $w_R$ are assigned greater values than $w_C$. In case $V_K$, $V_R$, and $V_C$ are all less than their predefined thresholds, we assign $\mathit{WTR}_y$ the minimum value, that is, $-1$.
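A minimal sketch of the category integration in (16) follows; the thresholds and the relative weight values are illustrative assumptions, since the paper fixes only that above-threshold categories receive greater weight:

def integrate_wtr(v_kin, v_rec, v_con, thresholds=(0.5, 0.5, 0.5)):
    values = (v_kin, v_rec, v_con)
    above = [v >= t for v, t in zip(values, thresholds)]
    if not any(above):
        return -1.0  # all categories below threshold: minimum WTR
    raw = [2.0 if a else 1.0 for a in above]  # boosted vs. base weight (illustrative)
    weights = [r / sum(raw) for r in raw]     # weights sum to 1
    return sum(w * v for w, v in zip(weights, values))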

4. Experiments and Results

In this study, we investigate the effectiveness of our proposed framework in distributed settings through a number of network analysis experiments. Network analysis is typically used to measure and extract useful information about relationships among people, groups, communities, or other entities [54]. From a research perspective, network analysis provides a new means of understanding the world. Our main purpose in using network analysis is to focus not only on the attributes of individuals, but also on the social relationships between individuals, which can be violated and restored. The basic components of the network analyzed in this study are nodes (or agents) and links. Nodes are abstractions of individuals, organizations, or communities [55], whereas links can represent various types of relationships depending on the context [56].

Sociological information, a type of information source relating to the social relationships among agents and their roles in the community, is first generated and then evaluated using the Matlab Network Analysis toolbox published by MIT's Strategic Engineering Research Group [57]. Our experiments are carried out in the steps shown in Algorithms 1 and 2.

Algorithm 1: Transaction process.

Require: n, the number of nodes (agents)
  modules, the number of modules
  link-density, the probability of attachment
  link-modules, the proportion of links within modules
  R, the number of rounds
Initialize()
GenRandomNetwork(n, modules, link-density, link-modules)
for t = 1 to R do
  for i = 1 to n do
    if neighbor(i) is empty then
      /* randomly choose any agent to interact with except agent i */
      new-nei = random(A \ {i})
      AddLink(i, new-nei)
      PerformTransaction(i, new-nei)
    else
      // neighbor agents exist
      /* randomly choose any of its neighbor agents to interact with */
      nei = random(neighbor(i))
      PerformTransaction(i, nei)
    end if
  end for
end for

Algorithm 2: PerformTransaction(x, y).

Require: θ, the WTR threshold
if defection is true then
  // transaction is unsuccessful
  UpdateRating(x, y)  // decrease rating
  // calculate the WTR value of agent y
  WTR_y = CalWTRMetrics(x, y)
  if WTR_y < θ then
    RemoveLink(x, y)
  end if
else
  UpdateRating(x, y)  // increase rating
end if

Initially, a weighted undirected random network of 40 nodes (or agents) is generated and divided into 4 different modules. Although this number of nodes and modules is far smaller than in real network datasets, it is still sufficient to capture social interactions in the presence of different levels of uncertainty. An agent begins the transaction process by choosing any of its neighbor agents, if available, to interact with. If a particular agent has no neighbor agents, it randomly chooses any agent in the network as its interaction partner. Transactions are performed repeatedly for 200 rounds (time periods), and some transactions in each time period will be unsuccessful due to predefined probabilities of defection. Specifically, we set the probabilities of defection to 0.1, 0.2, and 0.25, respectively, and then compare the results under the following two scenarios:
(i) The scenario in which trust violation can be recovered if a transgressor's WTR exceeds the predefined WTR threshold (which is 0); the computed WTR lies in the range between −1 and 1.
(ii) The scenario in which no trust restoration mechanism is applied when a transgressor violates the trust of others in the network.
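For orientation, the experimental loop of Algorithms 1 and 2 can be sketched in Python with networkx; the network generator, the ±0.1 rating updates, and the wtr_of placeholder (standing in for the full Section 3 metric pipeline) are all our assumptions:

import random
import networkx as nx

def simulate(n=40, rounds=200, p_defect=0.1, wtr_threshold=0.0,
             restore=True, wtr_of=lambda G, x, y: 0.0):
    G = nx.planted_partition_graph(4, n // 4, 0.5, 0.1)  # 4 modules
    nx.set_edge_attributes(G, 0.01, "rating")            # bootstrap ratings
    for _ in range(rounds):
        for x in list(G.nodes):
            nbrs = list(G.neighbors(x))
            if nbrs:
                y = random.choice(nbrs)
            else:  # no neighbors: link to any other agent
                y = random.choice([a for a in G.nodes if a != x])
                G.add_edge(x, y, rating=0.01)
            if random.random() < p_defect:       # unsuccessful transaction
                G[x][y]["rating"] -= 0.1         # decrease rating
                if not restore or wtr_of(G, x, y) < wtr_threshold:
                    G.remove_edge(x, y)          # punitive response
            else:
                G[x][y]["rating"] += 0.1         # increase rating
    return G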

We use a trust rating value as a weight reflecting the result of a transaction carried out between two connected agents. At the outset, all agents are bootstrapped with an initial rating value of 0.01. After a transaction is complete, an agent provides feedback to its interacting partner in the form of a rating in the range $[-1, 1]$. All parameter settings used in the experiments are summarized in Table 1.

Four standard network statistics are analyzed in our experiments: average degree, degree distribution, average weighted clustering coefficient, and a dot matrix plot ordered by degree. The average degree of all agents in an undirected network is defined as $\bar{k} = K / n = \frac{1}{n} \sum_{i=1}^{n} k_i$, where $n$ is the total number of agents in the network, $k_i$ is the degree of agent $i$, and $K$ is the total degree of all agents. As shown in Figure 3, the use of the proposed trust restoration framework results in a higher average degree across all agents, especially when the probability of defection is 0.1 (blue line). Even in noisier networks, where the probabilities of defection are 0.2 (orange line) and 0.25 (green line), agents are able to maintain good relations with their neighbors for a longer period than in the network without any mechanism to restore damaged trust.

Similar to the average degree, in the next experiment we determine the fraction of agents in the network at each degree, that is, the degree distribution. The distribution at each degree can simply be calculated as $P(k) = n_k / n$, where $n_k$ is the number of agents having degree $k$. Figure 4 compares the degree distribution of the network without the trust restoration framework (a) with that of the network implementing our proposed framework (b). Before the violation of trust, degrees of 6 and 8 have the highest frequency. When the network in Figure 4(a) suffers defective transactions, the degree distribution shifts significantly toward lower degrees for every probability of defection, as the network has no trust recovery mechanism. In the network in Figure 4(b), the degree distribution after trust violation remains at higher degrees when the probabilities of defection are 0.1 and 0.2. This can be interpreted as follows: the proposed trust restoration framework is able to recover some transgressors who act as key actors or provide products or services that are indispensable to the community.
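Both statistics can be computed from the simulated network with a straightforward helper, using the definitions above:

from collections import Counter

def degree_stats(G):
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    avg_degree = sum(degrees) / n  # k_bar = K / n
    distribution = {k: c / n for k, c in sorted(Counter(degrees).items())}  # P(k) = n_k / n
    return avg_degree, distribution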

To investigate the robustness of cooperation among agents in the network, the average weighted clustering coefficient can be used [58]. For a given agent in an unweighted network, the clustering coefficient is defined as the number of its neighbor agents that are connected to each other over the total number of potential connections between them [58, 59]. Since we adopt the transaction rating as the connection weight between interacting agents in our weighted network, the value of the average weighted clustering coefficient can be used to indicate the quality of transactions and also the strength of the network. Following [60], the average weighted clustering coefficient is defined as
$$C^w = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{s_i (k_i - 1)} \sum_{j,h} \frac{w_{ij} + w_{ih}}{2}\, a_{ij}\, a_{ih}\, a_{jh}$$
where $n$ is the number of agents, $s_i$ is the strength of agent $i$ obtained by summing up all weights of agent $i$'s connections, $k_i$ is the degree of agent $i$, $a_{ij}$ is 1 if the interacting agents are connected and 0 otherwise, and $w_{ij}$ is the actual weight of a given connection.
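Because nx.clustering implements a different weighted variant, the Barrat et al. coefficient [60] can be computed directly as sketched below; reading edge weights from a "rating" attribute is our assumption:

def avg_weighted_clustering(G, weight="rating"):
    total = 0.0
    for i in G:
        k_i = G.degree(i)
        s_i = sum(G[i][j].get(weight, 0.0) for j in G[i])  # strength of i
        if k_i < 2 or s_i == 0:
            continue  # coefficient taken as 0 for such agents
        c_i = 0.0
        nbrs = list(G[i])
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                j, h = nbrs[a], nbrs[b]
                if G.has_edge(j, h):  # triangle (i, j, h) is closed
                    # each unordered pair counts twice in the ordered sum,
                    # so add 2 * (w_ij + w_ih) / 2
                    c_i += G[i][j].get(weight, 0.0) + G[i][h].get(weight, 0.0)
        total += c_i / (s_i * (k_i - 1))
    return total / G.number_of_nodes()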

The comparison of the average weighted clustering coefficients of the networks under different settings is presented in Figure 5. Except for the network deploying WTR as a trust restoration framework with a defective transaction probability of 0.1 (blue line), the average weighted clustering coefficients of all other networks gradually drop to 0 (negative average weighted clustering coefficients arise because the rating of each transaction is set between −1 and 1; see Table 1), since most agents become isolated as a result of either the increasing number of poor quality transactions or the absence of a mechanism to restore broken trust. In particular, we can observe that forgiving networks with the trust restoration framework (orange and green lines) resist adverse behaviors more effectively than the networks implementing aggressive strategies (no trust restoration framework).

Another useful way to visualize the arrangement of agents' degrees is to draw a column/row sorted square dot matrix pattern. The dot matrix plots in Figures 6, 7, and 8 display the sparsity patterns of the adjacency matrices of the networks under different probabilities of defection, with and without our proposed trust restoration framework. Moreover, we also measure the number of nonzero values ($nz$), which describes the strength of the network in terms of the total weight of each agent's connections, at time periods 0 (before violation of trust), 20, 50, 80, 100, 120, 150, and 200, respectively. Following [60], the number of nonzero values is computed as $nz = |\{(i, j) : w_{ij} \neq 0\}|$.

An explicit distinction between the networks with and without the WTR-based reparative strategy can be seen when the probability of defection is 0.1, as depicted in Figure 6. Without the trust restoration framework (Figure 6(a)), defective transactions make the strength of the network, or the number of agents with nonzero weights, drop significantly from time period 50 onwards, compared to the network where the proposed trust restoration framework is employed (Figure 6(b)). This result is analogous to the cases where the probabilities of defection are 0.2 and 0.25, as shown in Figures 7 and 8. In particular, higher defection probabilities cause the network strength to decline even more quickly, as can be observed from time period 20 onwards in Figures 7(a) and 8(a). Moreover, even though the implementation of the proposed framework can maintain the network strength while reparative strategies are used to respond to adverse transactions, over time punitive strategies tend to take over as the transgressors' WTR decreases, prompting more agents to disconnect from their neighbors and making the network less robust, as illustrated in Figures 7(b) and 8(b).

5. Conclusion and Future Work

In this paper, a computational model of trust restoration inspired by a psychological regulatory variable, welfare tradeoff ratio or WTR, is constructed for agents in distributed multiagent systems. Based on the theories of kinship, reciprocity, and conflict, five different metrics are proposed, that is, kinship, formidability, irreplaceability, signals of remorse, and past contributions. The integration of all metrics provides the victim’s WTR which can be used to help decision-making about what type of counterviolation strategy should be implemented to recalibrate the WTR of the transgressor. Furthermore, in order to validate the applicability of the proposed framework in distributed environment settings, a number of experiments based on social network interactions are analyzed. The experimental results demonstrate that the implementation of the trust restoration framework can effectively respond to different levels of trust violation. However, we recognize that it is worth considering the evaluation of the proposed framework in more dynamic and practical environments. Therefore, we leave this issue as a priority in our future work.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China (NSFC) under Grants nos. 61572095 and 61272173.