Abstract

Generally, system failures, such as crash failures and Byzantine failures, are considered the common causes of inconsistency in distributed consensus and have been extensively studied. In fact, strategic manipulations by rational agents cannot be ignored when reaching consensus in a distributed system either. In this paper, we extend the game-theoretic analysis of consensus and design a rational uniform consensus algorithm with general omission failures under the assumption that processes are controlled by rational agents and prefer consensus. Unlike a crashing agent, an agent with omission failures may crash or omit to send or receive messages when it should, which makes detecting faulty agents difficult. By combining the possible failures of the agents at both ends of a link, we convert the omission failure model into a link state model that makes fault detection possible. Through analyzing the message passing mechanism in a distributed system in which agents may commit omission failures, we provide an upper bound on the message passing time needed to reach consensus on a state among nonfaulty agents, as well as a message chain mechanism for validating messages. Then, we prove that our rational uniform consensus is a Nash equilibrium provided that failure patterns and initial preferences satisfy an assumption of randomness. Thus, agents have no motivation to deviate from the consensus, which provides interpretable stability for the algorithm in multiagent systems such as distributed energy systems. Our research strengthens the reliability of consensus with omission failures from the perspective of game theory.

1. Introduction

How to reach consensus despite failures is a fundamental problem in distributed computing. In consensus, each process proposes an initial value and then independently executes the same consensus algorithm. Eventually, all processes need to agree on the same decision chosen from the set of initial values even if there are system failures, such as crash failures, omission failures, and Byzantine failures [1]. In the crash model, a process fails by halting and never executing the rest of the protocol. In the omission model, a process fails by omitting to send or receive messages. In the Byzantine model, a process fails by exhibiting arbitrary behavior. Extensive studies have been conducted on fault-tolerant consensus.

Moreover, two kinds of consensus problems are usually distinguished. One is the non-uniform version (usually called simply “consensus”), where no two nonfaulty processes decide differently. The other is the uniform version (called “uniform consensus”), where no two processes (whether correct or not) decide on different values. We believe that consensus protocols cannot simply replace uniform consensus protocols because the non-uniform agreement condition is inadequate for many applications [2]. As shown in [3], uniform consensus is harder than consensus because one additional round is needed to decide. Also, uniform consensus is meaningless with Byzantine failures.

Game theory provides interpretable equilibria by analyzing the game among intelligent players. We argue that its incentive and punishment mechanisms can be effectively applied in distributed systems. Recently, there has been increasing interest in distributed game theory, especially in fields such as peer-to-peer networks, biological systems, cryptocurrency, and e-commerce, in which selfish processes are modeled as rational agents (or intelligent agents). Combining distributed computing with algorithmic game theory is an interesting research area enriching the theory of fault-tolerant distributed computing. In this framework, agents may deviate from protocols with arbitrary behaviors in order to increase their own profits according to their utility functions. In [4], this kind of deviation is referred to as strategic manipulation of distributed protocols. This research is necessary in practical scenarios in which each process has selfish incentives. Also, we argue that the fairness of algorithms can be promoted by game theory. Clearly, the goal of distributed computing in the context of game theory is to design algorithms that reach a Nash equilibrium, in which no agent has an incentive to deviate from the algorithm. This framework was perhaps first investigated and formalized in the context of secret sharing and multiparty computation [5–8]. More recently, fundamental tasks in distributed computing such as leader election and consensus have been studied from the perspective of game theory [9–17].

Following this new line of research, we combine fault-tolerant consensus with rational agents and study the rational uniform consensus problem in a synchronous round-based system, where every agent has its own preference over consensus decisions. Thus, an algorithm of rational uniform consensus needs to be constructed such that, for each agent, its utility from following the consensus algorithm is not less than its utility from deviating from it. That achieves a Nash equilibrium. It is easy to see that standard consensus algorithms cannot reach equilibrium, since they can be manipulated by even a single rational agent. Several studies on rational consensus have been conducted [4, 12–16, 18, 19], but none of them consider the uniform property. Also, most studies on rational consensus only allow crash failures or no system failures at all. We argue that omission failures, which are more subtle and complicated than crash failures, cannot be ignored when reaching uniform consensus. In this paper, we consider a distributed system in which agents may experience omission failures, and we extend the game-theoretic analysis of consensus in this setting. Specifically, our contributions in this paper include the following:

(i) We utilize a punishment mechanism to convert the omission failure model into a link state model, which makes fault detection more direct. In the link state model, faulty links never recover whether or not omission failures recover. Therefore, it offers a way to simplify the problem of fault recovery in distributed computing.

(ii) An almost complete mechanism analysis is given for message passing in a distributed system with general omission failures. We provide an upper bound on the message passing time needed to reach consensus on a link state; this upper bound determines the round complexity of our algorithm. A message chain mechanism is then introduced for validating messages.

(iii) An algorithm of rational uniform consensus with agent omission failures is presented. We give a complete formal proof of correctness of our algorithm, showing that our consensus is a Nash equilibrium.

The rest of the paper is organized as follows. Section 2 introduces the related work. Section 3 describes the model that we are working on. Section 4 presents the algorithm of rational uniform consensus for achieving Nash equilibrium and proves it correct. Section 5 concludes the paper.

2. Related Work

From the viewpoint of how agents are modeled, the research on distributed game theory in the literature may be divided into three categories. In the first category, all agents in the distributed system are rational agents preferring consensus, and some of them may randomly fail by system failures. Bei et al. [4] studied distributed consensus tolerating both unexpected crash failures and strategic manipulations by rational agents; they considered agents that may fail by crashing. However, the correctness of their protocols needs a strong requirement: agreement must be achieved even if agents deviate. Afek et al. [18] proposed two basic rational building blocks for distributed systems and presented several fundamental distributed algorithms using them. However, their protocol is not robust against even crash failures. Halpern and Vilaça [12] presented a rational fair consensus with rational agents and crash failures; they used a failure pattern to describe the random crash failures of agents. Clementi et al. [13] studied the problem of rational consensus with crash failures in the synchronous gossip communication model. The protocols of Halpern et al. and Clementi et al. do not tolerate omission failures, but we think considering them is necessary. Harel et al. [15] studied equilibria of consensus resilient to coalitions of agents and gave a separation between binary and multi-valued consensus. However, they assumed that there are no faulty agents.

The second category features a rational adversary. Groce et al. [19] studied the problem of Byzantine agreement with a rational adversary. In contrast to the first category, they assumed two kinds of processes: honest processes, which follow the protocol without question, and a rational adversary, which prefers disagreement. Amoussou-Guenou et al. [14] studied Byzantine fault-tolerant consensus from a game-theoretic viewpoint. They modeled processes as rational players or Byzantine players and consensus as a committee coordination game. In [14], the Byzantine players have utility functions and strategies, so they can be regarded as rational adversaries similar to [19]. In our opinion, this framework limits the scope of the Byzantine problem.

Finally, the BAR framework (Byzantine, Altruistic, and Rational) was proposed in [20]. In [16], Ranchal-Pedrosa and Gramoli studied the gap between rational agreements that are robust against Byzantine failures and those that are robust against crash failures. Their model consists of four types of players: correct, rational, crash, or Byzantine, similar to the BAR model. They consider that rational players prefer causing a disagreement to satisfying agreement, which we view as somewhat limited, because casting rational players solely as rational adversaries reduces the setting to a variant of the Byzantine problem. Moreover, no protocols are proposed in [16].

3. Model

We consider a synchronous system of agents, each of which has a unique and commonly known identity. Execution time is divided into a sequence of rounds, each identified by a consecutive integer starting from 1. There are three successive phases in a round: a send phase, in which each agent sends messages to the other agents in the system; a receive phase, in which each agent receives the messages sent by other agents in the send phase of the same round; and a computation phase, in which each agent verifies and updates the values of local variables and executes local computation based on the messages sent and received in that round. We assume that every pair of agents is connected by a reliable communication link. From the point of view of an agent, all links in the system can be divided into two types: direct links, which are incident to the agent itself, and indirect links, which connect two other agents.
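The three-phase round structure described above can be sketched as a minimal simulation harness; all names (`Agent`, `run_round`, the message payloads) are illustrative and not part of the protocol:

```python
# Minimal sketch of the synchronous round structure: each round has a
# send phase, a receive phase, and a computation phase.

class Agent:
    def __init__(self, agent_id, n):
        self.id = agent_id   # unique, commonly known identity
        self.n = n
        self.inbox = {}      # messages received this round

    def send_phase(self, round_no):
        # Send a message to every other agent in the system.
        return {j: (self.id, round_no) for j in range(1, self.n + 1) if j != self.id}

    def receive_phase(self, messages):
        self.inbox = messages

    def computation_phase(self):
        # Local computation based on the messages of this round
        # (here: simply count how many agents were heard from).
        return len(self.inbox)

def run_round(agents, round_no):
    outboxes = {a.id: a.send_phase(round_no) for a in agents}
    for a in agents:
        a.receive_phase({src: msgs[a.id]
                         for src, msgs in outboxes.items() if a.id in msgs})
    return {a.id: a.computation_phase() for a in agents}

agents = [Agent(i, 3) for i in range(1, 4)]
print(run_round(agents, 1))  # each agent hears from the other two
```

With three agents and no failures, every agent receives two messages per round.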

3.1. Failure Model

Here, general omission failures [21], which occur in agents and not in communication links [22], are considered. That is, an agent crashes or experiences send omissions or receive omissions. A send omission means that the agent omits sending messages that it is supposed to send; a receive omission means that the agent omits receiving messages that it should receive. We define that agent omission failures never recover. We argue that our protocol also works even if failures could recover, but proving this seems more complicated. It is easy to see that a crash failure can be converted to an omission failure: if an agent crashes, it omits to send and receive messages with all other agents after it has crashed. We assume that at most t agents undergo general omission failures.

Based on the failure model, we divide the agents in the system into three types:

(i) Good Agent. Good agents do not have omission failures.

(ii) Risk Agent. Risk agents experience omission failures, but we temporarily consider them as correct agents in our protocol.

(iii) Faulty Agent. Faulty agents have omission failures with more than t agents.

It is easy to see that the number of agents with omission failures is the sum of the numbers of risk agents and faulty agents. We treat good agents and risk agents uniformly. Send omission and receive omission are symmetrical; for example, the case in which i omits to send messages to j and the case in which j omits to receive messages from i produce the same view for i and j. Therefore, we may not be able to directly detect the states of some agents with omission failures. Thus, we call them risk agents and consider them as correct agents. An agent that has omission failures with more than t agents must have omission failures with at least one good agent, so we can clearly identify it as a faulty agent.
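The classification above can be illustrated by counting an agent's correct direct links. The rule "faulty iff omission failures with more than t agents" follows the text; the `links` map and all names are illustrative assumptions:

```python
# Illustrative classification of agents into good / risk / faulty.
# `links[i]` is the set of agents that i still communicates with correctly.

def classify(agent, links, n, t):
    correct = len(links[agent])   # number of correct direct links
    failed = (n - 1) - correct    # direct links with omission failures
    if failed == 0:
        return "good"
    if failed > t:
        return "faulty"           # must fail with at least one good agent
    return "risk"                 # treated as correct by the protocol

n, t = 5, 1
links = {1: {2, 3, 4, 5}, 2: {1, 3, 5}, 3: {1, 2, 5},
         4: {1, 5}, 5: {1, 2, 3, 4}}
print(classify(1, links, n, t))  # 'good'   (no failed links)
print(classify(2, links, n, t))  # 'risk'   (1 failed link <= t)
print(classify(4, links, n, t))  # 'faulty' (2 failed links > t)
```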

Due to the symmetry of agent omission failures, we model agent omission failures as a link state problem via a punishment mechanism. Specifically, in our protocol, if an agent i receives no messages from j in a round, then in the following rounds, i sends no messages to j and does not receive messages from j [23]. Thus, either a send omission or a receive omission causes a link interruption. In a given round, each link is therefore one of three types: correct, where neither endpoint experiences omission failures with the other in this round; faulty, where at least one endpoint has omission failures with the other in the round; and unknown, where the state of the link in this round is unknown to a given agent. It is easy to see that we can determine the type of an agent by the number of its correct direct links, which is the fault detection method in our protocol. Similarly, under this punishment mechanism, faulty links never recover whether or not omission failures recover.
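The punishment mechanism can be sketched as follows: once an agent receives nothing from a peer, the link is marked faulty and never recovers, so a send omission and a receive omission become indistinguishable link interruptions. Names (`LinkTable`, `mark_round`) are illustrative:

```python
# Sketch of the punishment mechanism converting omissions to link states.

CORRECT, FAULTY = "correct", "faulty"

class LinkTable:
    def __init__(self, agent_id, n):
        self.id = agent_id
        self.state = {j: CORRECT for j in range(1, n + 1) if j != agent_id}

    def mark_round(self, received_from):
        # Punish any silent peer: the link becomes faulty from now on,
        # and faulty links never recover.
        for j in self.state:
            if self.state[j] == CORRECT and j not in received_from:
                self.state[j] = FAULTY

    def partners(self):
        # Agents this agent still sends to / receives from.
        return {j for j, s in self.state.items() if s == CORRECT}

t1 = LinkTable(1, 4)
t1.mark_round(received_from={2, 3})     # agent 4 was silent this round
t1.mark_round(received_from={2, 3, 4})  # 4 speaks again, but too late
print(sorted(t1.partners()))  # [2, 3]
```

Note that agent 4's later message is ignored: the interrupted link stays faulty, exactly as the mechanism prescribes.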

3.2. Consensus

In the consensus problem, we assume that every agent has an initial preference in a fixed value set (we follow the concept of initial preference in [12]). We are interested in uniform consensus in this paper. A protocol solving uniform consensus must satisfy the following properties [3]:

(i) Termination. Every correct agent eventually decides.

(ii) Validity. If an agent decides a value, then that value was the initial value of some agent.

(iii) Uniform Agreement. No two agents (whether correct or not) decide on different values.

To solve uniform consensus in presence of agent omission failures, we assume that and .

In uniform consensus, an agent’s final decision must be one of the following formalized types:

(i) A punishment value, meaning that there is no consensus; it is the punishment for inconsistency.

(ii) A null value, meaning no decision. Deciding the null value is not ambiguous with validity, as it cannot be proposed [23], and it does not affect the final consensus outcome.

(iii) A proper value, which satisfies the property of validity and must be the initial preference of some agent.

3.3. Rational Agent

We consider that distributed processes act as rational agents in the sense of game theory. Each agent has a utility function. We assume that agents have solution preference [18], so an agent’s utility depends only on the consensus value achieved. Thus, for each agent, its utility takes one of three values: (i) its utility if its own initial preference is decided; (ii) its utility if a consensus value is reached that differs from its initial preference; (iii) its utility if there is no consensus. We assume the first value is greater than the second and the second greater than the third, and our results can easily be extended to deal with an independent utility function for each agent.
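Under the solution-preference assumption, the utility function can be sketched as follows; the concrete numbers are illustrative, and only the assumed ordering (own preference > other consensus > no consensus) matters:

```python
# Hedged sketch of a solution-preference utility: it depends only on the
# consensus outcome. The three levels are illustrative constants.

BETA_OWN, BETA_OTHER, BETA_NONE = 2, 1, 0  # assumed ordering: 2 > 1 > 0

def utility(initial_preference, outcome):
    if outcome is None:                  # no consensus (punishment outcome)
        return BETA_NONE
    if outcome == initial_preference:    # agent's own preference decided
        return BETA_OWN
    return BETA_OTHER                    # some other agreed-upon value

print(utility("v1", "v1"), utility("v1", "v2"), utility("v1", None))  # 2 1 0
```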

The strategy of an agent is a local protocol satisfying the system constraints; the agent takes actions according to this protocol in each round. That is, a strategy is a function from the set of messages received to actions. Each agent chooses its protocol in order to maximize its expected utility. The local protocols chosen by all agents form what game theory calls a strategy profile. A Nash equilibrium is a strategy profile in which no agent can increase its utility by deviating while the other agents keep their strategies fixed. If, in an equilibrium, the local protocol of every agent is our consensus algorithm, then we say that the consensus is a Nash equilibrium, and a consensus reaching a Nash equilibrium is called a rational consensus. Formally, if a strategy profile (or consensus) is a Nash equilibrium, then for every agent and every alternative strategy of that agent, the agent’s expected utility under the profile is at least its expected utility after a unilateral deviation.

3.4. Notation Description

The main notations used in the following sections are summarized in Table 1.

4. Rational Uniform Consensus with General Omission Failures

4.1. A Rational Uniform Consensus in Synchronous Systems

To reach rational uniform consensus that can tolerate omission failures, our protocol adopts a simple idea from an early consensus protocol [23]: an agent does not send or receive any messages to or from agents that did not send messages to it previously. This converts the omission failure model, which cannot be detected directly, into the link state model, which can be detected by agents in each round. However, the presence of rational agents makes the protocol more complicated: the protocol must prevent manipulation by rational agents. Hence, the security of the algorithm is improved in three ways. The first is exchanging the latest network link states and message sources in each round. The update of the latest link states within each agent depends on complete message chains, and we can obtain a unified decision round and decision set from the message passing mechanism in the omission failure environment. The second is using secret sharing for agents’ initial preferences [24]; it encrypts the initial preferences so as to prevent an agent from learning the values of other agents in advance. The third is signing each message with a random number and marking faulty links with faulty random numbers [4]; this makes it harder for a rational agent to cheat.

The protocol is described in Algorithm 1. In more detail, we proceed as follows.

(1)function CONSENSUS()
(2) a random number
(3)random 1 degree polynomials with and             ⊳(2, n) threshold secret sharing
(4)
(5)for alldo
(6)  for alldo
(7)   a random bit
(8)random value in
(9) puts into X-RANDOM
(10) puts into RANDOM
(11)
(12)forround do
(13)  
(14)  Phase 1: send phase
(15)  for allanddo
(16)   ifthenSend to j
(17)   ifthenSendto j
(18)   ifthenSend to j
(19)   ifthenSend to j
(20)
(21)  Phase 2: receive phase
(22)  for allanddo
(23)   if newmessage has received from jthen
(24)    puts into X-RANDOM            ⊳round 1 to t + 2
(25)    puts into RANDOM            ⊳round 1 to t + 3
(26)                ⊳round 1 to t + 3
(27)    save and             ⊳round t + 3
(28)                ⊳round t + 4
(29)   else
(30)  ifthen                          ⊳Faulty agent
(31)   Decide                           ⊳No decision
(32)
(33)  Phase 3: computation phase
(34)  ifthen
(35)   
(36)    VERIFYANDUPDATE(, , , , , ) ⊳Punishment if an inconsistency is detected
(37)   ifthen
(38)    random value in
(39)    for alldo
(40)     for alldo
(41)      a random bit
(42)    puts into X-RANDOM
(43)    puts into RANDOM
(44)   else ifthen
(45)    LASTUPDATE ()            ⊳Update the of
(46)    fordo
(47)     ifis the first reliable round in then
(48)                  ⊳decision round
(49)    the set of nonfaulty agents in             ⊳decision set
(50)    for alldo
(51)     ifthe number of and then
(52)      restored by and received
(53)     
(54)    ifall values are known in then
(55)     the set of agents with the second max proposal in
(56)     ifthen
(57)     else ifthen
(58)      mod             ⊳the proposals are the same for all agents in
(59)      , where is the st highest id in
(60)     else            ⊳
(61)      the second max proposal in
(62)      mod
(63)      , where is the st highest id in
(64)  else            ⊳round t + 4
(65)   ifthen
(66)    
(67)    Decide
(68)   else           ⊳Inconsistency
(69)    Decide

Initially, each agent generates a random number that is used for the consensus election later (line 2). Then it computes two random degree-1 polynomials whose constant terms are its initial preference and its random number, respectively (line 3). They satisfy the (2, n) threshold property, which means that an agent can restore a secret if it knows at least two pieces of the corresponding polynomial. The agent then initializes its local sets (line 4); we discuss these in more detail below. Next, it generates a faulty random number for each agent and each of its direct links (lines 5–7), and the message random number for round 1 is chosen at random (line 8). For each link, the agent generates the faulty random numbers and then sends them to the other agents in round 1. Then it puts the faulty random numbers and the message random number into X-RANDOM and RANDOM, respectively (lines 9 and 10), where X-RANDOM is a function storing all faulty random numbers known to the agent and RANDOM stores all message random numbers. Agents can invoke these two functions to verify random numbers: specifically, one inputs the id, link, and round to invoke X-RANDOM, and the id and round to invoke RANDOM.

There are t + 4 rounds in total, and each round has three phases. In phase 1 of a round, an agent i sends messages only to agents outside its faulty set, the set of agents with which i has detected omission failures (line 15). Depending on the round, i also sends its latest link states and message random number, its faulty random numbers, the pieces of its two secret polynomials, all the secret shares it has received from other agents, or its consensus set (lines 16–19). It is easy to see that the two pieces of an agent’s polynomials come in pairs: if an agent restores one secret, it can also restore the other. For each agent, the consensus set contains all consensus values calculated and received by it; hence, if the algorithm is executed validly, the size of this set must be equal to 1.

In phase 2 of a round, agent i receives messages only from agents that are not in its faulty set (line 22). If no message is received from such an agent j, i adds j to its faulty set (line 29); otherwise, i stores the received information (lines 24–28). The sets of new link states and of message random numbers received by i in the round correspond element by element (line 26). Specially, if the faulty set becomes too large, i knows that it has become a faulty agent, and it must then decide directly and no longer run in later rounds (line 31). This decision means agent i does not decide in the end, which has no influence on the solution.

In phase 3 of a round, agent i first updates the set that supports the update and verification of link states (line 35). It then invokes the function VERIFYANDUPDATE to verify and update its latest and historical link states (line 36; see Algorithm 2 for details); the latest state records what i currently knows about every link in the system, while the historical state records the link states of all rounds so far. In the early rounds, i generates the message random number and the faulty random numbers for the next round, which will be sent to the other agents, and puts them into X-RANDOM and RANDOM, respectively (lines 37–43). Then, in the appropriate round, i performs the last update of the historical state using the latest state (line 45): if a link is faulty in some round in the latest state, its state is changed to faulty from that round onward in the historical state. This is the last time the historical state is modified. Following that, i uses the historical state to find the decision round, which is the first reliable round (lines 46–48). We follow the concept of a reliable round in [12]: in a reliable round, the number of faulty agents does not increase. Specially, we say that the decision round cannot be round 0, so the first reliable round is taken to be the round preceding the second one if the first reliable round is round 1. If too few links to an agent are correct in the decision round, then that agent is a faulty agent; otherwise, it is a nonfaulty agent. We define that explicitly faulty agents must be removed when computing the state of the decision round from the historical state. Then i computes the decision set, that is, the set of nonfaulty agents in the decision round (line 49). Next, i uses all the secret shares it has received to try to restore the initial preference and proposal of each agent (lines 50–53). If i can reconstruct the values of all agents in the decision set, it knows all their proposals and computes the consensus proposal (lines 54–63). First, i sorts all the proposals in the decision set and finds the set S of agents holding the second largest proposal (line 55). If there is only one agent in S, i puts the initial preference of this agent into its consensus set (line 56). If there is more than one agent in S, that is, more than one agent has the same second largest proposal (we say that the probability of this is extremely low), then i takes that proposal modulo the number of agents in S (lines 60–62) and puts into its consensus set the initial preference of the correspondingly ranked id in S (line 63). Finally, if S is empty, the proposals must be the same for all agents in the decision set and the second largest proposal does not exist; thus, i takes the common proposal modulo the number of agents in the decision set (line 58) and, similarly, elects the initial preference of the correspondingly ranked id (line 59). If i cannot restore the values of all agents in the decision set, it does nothing and keeps its consensus set empty. Finally, in the last round, if the consensus set contains exactly one value, i decides that value (lines 65–67); otherwise, an inconsistency is detected and i decides the punishment value (line 69).
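The election step described above can be sketched as follows. Since parts of the formulas are elided in the text, this is one plausible reading, not the paper's exact rule: agents holding the second largest proposal form S, and ties are broken by taking the proposal modulo |S| (or modulo the decision set size when all proposals are equal) and selecting by id rank:

```python
# Hedged sketch of the proposal-based election (one plausible reading).

def elect(prefs, proposals, decision_set):
    props = {i: proposals[i] for i in decision_set}
    distinct = sorted(set(props.values()), reverse=True)
    if len(distinct) < 2:
        # All proposals equal: no second max exists; break the tie over
        # the whole decision set by proposal mod its size.
        v = props[next(iter(props))]
        ids = sorted(decision_set, reverse=True)
        return prefs[ids[v % len(ids)]]
    s_val = distinct[1]                       # the second largest proposal
    S = sorted((i for i, p in props.items() if p == s_val), reverse=True)
    if len(S) == 1:
        return prefs[S[0]]                    # unique holder: elect its preference
    return prefs[S[s_val % len(S)]]           # tie: rank by id after mod

prefs = {1: "a", 2: "b", 3: "c"}
print(elect(prefs, {1: 9, 2: 7, 3: 7}, {1, 2, 3}))  # tie on 7 -> 'b'
```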

Require:, , , , ,
Ensure:or
(1)function VERIFYANDUPDATE(, , , , , )
(2)                      ⊳ The function IDs returns ids from NS set
(3)  
(4)
(5)  Phase 1: update the state of direct links
(6)  fordo
(7)                       ⊳ Message source verification is not required if
(8)   APPENDHS ()                    ⊳ append the state into HS or decide
(9)  fordo
(10)   ifthen
(11)    
(12)   else
(13)    
(14)    APPENDHS ()
(15)
(16)  Phase 2: verify message chain
(17)  fordo
(18)   VERIFYMSGCHAIN()                    ⊳ Message chain verification or decide
(19)
(20)  Phase 3: verify and update
(21)  fordo
(22)   fordo
(23)    fordo
(24)     
(25)     
(26)     if an inconsistency is detected then
(27)      Decide                    ⊳ Punishment
(28)     ifthen                    ⊳Case 10
(29)      
(30)     else ifthen                    ⊳ Case 11
(31)      
(32)      APPENDHS()
(33)     else
(34)      
(35)      
(36)      
(37)      
(38)      if (or) then                    ⊳ Direct link
(39)       ifandthen                    ⊳ Case 1
(40)        APPENDHS()
(41)       else ifandthen                    ⊳ Case 2
(42)        Decide
(43)       else ifandthen                    ⊳ Case 3
(44)        APPENDHS()
(45)       else ifandthen                    ⊳ Case 4 and 5
(46)        ifthen                    ⊳ Case 4
(47)         iforthen
(48)          APPENDHS()
(49)         else ifthen
(50)          
(51)          APPENDHS()
(52)      else                    ⊳ Indirect link
(53)       ifandthen                    ⊳ Case 6
(54)        ifthen
(55)         APPENDHS()
(56)        else ifthen
(57)         
(58)         APPENDHS ()
(59)       else ifandthen                    ⊳ Case 7
(60)        
(61)        APPENDHS ()
(62)       else ifandthen                    ⊳ Case 8
(63)        APPENDHS ()
(64)       else ifandthen                    ⊳ Case 9
(65)        ifthen
(66)         ifthen
(67)          APPENDHS ()
(68)         else
(69)          
(70)          APPENDHS ()
(71)return

The detailed implementation of the verification and update protocol in phase 3 is given in Algorithm 2.

Basically, for each link, the link-state record is a tuple containing two tuples: a state and a source. The state tuple represents the state of the link and has three types, representing a correct link, a faulty link, and an unknown-state link, respectively. A correct state records the round of the link state, the agent reporting it, and the message random number sent by that agent in the round; it is easy to see that if an agent reports that its direct link is correct in a round, it must know the corresponding message random number. A faulty state records the same round and reporter, together with an identifier and the sorted set of faulty random numbers generated by the reporting agent for the link; specifically, the set is sorted by the ids of agents from small to large. An unknown state carries no report, because the state of the link is unknown to the agent. The source tuple describes where the state came from: the agent that sent the link state and the round in which it was sent. Specially, for a direct link, when an agent first updates the state itself, the source is the agent itself. The historical state of a link ranges over the rounds seen so far and contains at most two different tuples per round, because the state of a link in a round can only be detected and reported by its two endpoints. The form of each tuple is similar to the latest state, but the round recorded must be the round in question, and the reporting agents in the two tuples must be the two endpoints, respectively. Specially, if the types of the two tuples are correct and faulty, then we consider the state of the link to be faulty; and if the state of the link remains unknown, it is an unknown-state link, which is regarded as a correct link when computing the decision round and decision set in that round.
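The record layout described above can be sketched with simple data structures; the field names are assumptions, not the paper's notation:

```python
# Illustrative data structures for link-state records: each entry pairs
# a state tuple (kind, round, reporter, random number / number set)
# with a source tuple (sender, round sent).

from typing import NamedTuple, Optional

class State(NamedTuple):
    kind: str                # "correct", "faulty", or "unknown"
    round_no: Optional[int]  # round of the link state (None if unknown)
    reporter: Optional[int]  # agent reporting the link state
    rnd: Optional[object]    # message random number, or sorted faulty-number set

class Source(NamedTuple):
    sender: int              # agent that sent this link state
    round_sent: int          # round in which it was sent

class LinkRecord(NamedTuple):
    state: State
    source: Source

rec = LinkRecord(State("correct", 3, 2, 0xBEEF), Source(2, 4))
print(rec.state.kind, rec.source.sender)  # correct 2
```

An unknown state would carry `None` fields, mirroring the "no report" case in the text.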

The pseudocode in Algorithm 2 is explained in detail as follows.

The agent initially derives the set of agents whose messages it received in the round (line 2) and computes the complementary set of agents whose messages it did not receive (line 3).

First, in phase 1, the agent updates the states of its direct links for the round. For each agent from which it has received messages in the round, it updates the latest state of the corresponding direct link to the correct type (line 7); it must then be able to obtain the message random number from that agent. It then invokes APPENDHS to append the state to the historical state (line 8). We stipulate that APPENDHS must guarantee that the input state satisfies the properties of the historical state discussed above; for example, each link has at most two different tuples in each round, and they come from different agents, namely the two endpoints. If a state violated these properties, APPENDHS would decide the punishment value and terminate the protocol early. Then, each agent from which no message was received has omission failures detected by the agent. If the type of the link is already faulty in the inherited state, the agent does nothing, because for a link, only the earliest round in which the link failed is recorded (lines 10–11). Otherwise, the agent updates the latest state to the faulty type and appends the new state to the historical state (lines 13–14).

Then, in phase 2, the agent uses the message chain mechanism to verify the correctness of the messages received in the receive phase (lines 17–18).

Message Chain Mechanism. For each agent, its messages have the following properties.

Suppose is the set of agents that disconnected from in or before round and is the set of agents that are still connected to in round . Suppose represents the tuple where the round number is equal to .

Claim 1. For link in , where and , its state in round must be known and the number of correct links in is greater than or equal to .

Claim 2. For link in , where and , its state in round and later must be unknown.

Claim 3. For link in , where and , its state in round and later must be unknown.

Claim 4. For link in , where and , if the state of is and the state of is , suppose , then the state of must be known.

Claim 5. For link in , where and , its state in round must be known and that in round and later must be unknown.

Claim 6. For link in , where and , if the state of is , then the state of link in round , where , must be known and its state in round and later must be unknown.

Claim 7. For link in , where and , if the state of is equal to, where , then the state of , , must be known.
Explicitly, the function VERIFYMSGCHAIN verifies whether a message violates the above claims. If not, the protocol continues to phase 3; otherwise, the agent decides the punishment value and terminates the protocol early.
Finally, in phase 3, the agent updates the latest and historical link states using the states received. For each link, it compares the received state with the stored state and updates according to the following cases.

Claim 8. For each link , the agent of must be or in all and .

Case 1. For the direct link of , if the of is and the of is , then only needs to append the new state into (lines 39–40).

Claim 9. In Case 1, there must be .

Case 2. For the direct link of , if the of is and the of is , then detects an inconsistency and decides (lines 41–42).

Case 3. For the direct link of , if the of is and the of is , then only needs to append the new state into (lines 43–44).

Claim 10. In Case 3, there must be .

Case 4. For the direct link of , if the of is and the of is , then only needs to append the new state into when or , and must update and when (lines 45–51). When updating , the of must be because the new state is obtained from and updated in round .

Claim 11. In Case 4, if , the of must be the same as the of , and if , it must have .

Case 5. For the direct link of , if the of is and the of is , then does nothing.

Claim 12. In Case 5, if , the of must be the same as the of , and if , it must have .

Case 6. For the indirect link of , if the of is and the of is , then only needs to append the new state into when , and must update and when (lines 53–58).

Case 7. For the indirect link of , if the of is and the of is , then needs to update and append the new state into (lines 59–60).

Claim 13. In Case 7, if , it must have , and if , it must have .

Case 8. For the indirect link of , if the of is and the of is , then only needs to append the new state into (lines 62–63).

Claim 14. In Case 8, if , it must have , and if , it must have .

Case 9. For the indirect link of , if the of is and the of is , then does nothing when , and appends the new state into when . In particular, also updates using the new state received if (lines 64–70).

Claim 15. In Case 9, if , the of must be the same as the of , and if , it must have .

Case 10. If the of is , then does nothing (lines 28–29).

Case 11. For the indirect link of , if the of is and the of is or , then needs to update and append the new state into (lines 30–32).

Claim 16. If in round , agent receives a message in which is and or , then of the message must already be in when in round .

Claim 17. If in round , agent receives a message in which is from , then must be .
In phase 3, for a link , first needs to detect whether there is an inconsistency (line 26). An inconsistency detected in phase 3 may arise because:
(1) (message format verification) the format of is incorrect;
(2) (message source verification) violates Claim 16 or Claim 17;
(3) (random number verification) if the type of is , the message random number in differs from that in RANDOM, or, if , the faulty random numbers in X-RANDOM differ from the random numbers at the corresponding indexes of the sorted set in ;
(4) (round number verification) violates one of Claims 8–15.
If detects an inconsistency, then it decides (line 27). Otherwise, updates the states as previously discussed.
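The four verifications can be sketched as an ordered pipeline that returns the first failed category, or nothing when the message is consistent. The field names (`sender`, `round`, `nonce`) and state layout are assumptions made for illustration only; the paper's actual message format was not recoverable.

```python
# Hypothetical sketch of the phase-3 inconsistency checks, run in the order
# listed in the text: format, source, random number, round number.

def detect_inconsistency(msg, state):
    """Return the name of the first failed verification, or None."""
    checks = [
        ("format", lambda: isinstance(msg, dict)
                           and {"sender", "round", "nonce"} <= msg.keys()),
        ("source", lambda: msg.get("sender") in state["known_agents"]),
        ("random", lambda: msg.get("nonce")
                           == state["expected_nonce"].get(msg.get("sender"))),
        ("round",  lambda: 1 <= msg.get("round", 0) <= state["current_round"]),
    ]
    for name, passed in checks:
        if not passed():
            return name   # the agent would decide the default value here
    return None
```

A failed check corresponds to the agent deciding the default value at line 27; `None` corresponds to proceeding with the state updates.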

4.2. Proof of the Protocol

The proof assumes . Some variables are defined as follows.

Definition 1. denotes the detection result of agent on the state of direct link in round . The type of must be or .

Definition 2. denotes the agent chain (also called the message propagation path) from agent to . detects a direct link state in round and sends it to agent in round . Then also sends the state to another agent in round . Finally, receives the state in round .

Definition 3. denotes the set of nonfaulty agents in round . denotes the set of faulty agents in round . denotes the set of faulty agents newly detected in round . denotes the number of risk agents in round .
We first prove the upper bound on message passing time and give the round complexity of the algorithm.

Theorem 1. (message passing mechanism). If , all link states in round can reach consensus between and at the latest in round .

Proof. Consider the state of in round , where . Specially, we can treat the messages sent by and as independent of each other; this does not affect the final consensus outcome. For example, suppose is received by all nonfaulty agents in round , and is received by all nonfaulty agents in round . Even if the detection result of may no longer be forwarded after round , we still have the correct consensus state in round when the two detection results are considered independently. We have the following cases:
(i) Case 1. and are good agents in round . In round , and send their detection results to all good agents. So if , all agents reach a consensus on the state of in round . If , all nonfaulty agents reach a consensus in round . Therefore, all link states of round among good agents can reach a consensus in round .
(ii) Case 2. is a risk agent or faulty agent and is not equal to . Generally, since a receive omission can be converted to a send omission, each risk agent and faulty agent has at most the following 3 choices when sending messages in each round:
(1) it has sending omissions with all other agents;
(2) it does not have sending omissions with at least one good agent;
(3) it has sending omissions with all good agents and no sending omissions with some risk agents or faulty agents.
Hence, has three choices in round .
(1) Case 2.1. chooses 1. Then no agent knows . All nonfaulty agents agree on the "unknown-state."
(2) Case 2.2. chooses 2. Then there must be some good agents knowing in round , and all good agents and know the state in round . If , is the only risk agent, then there is a consensus on the state in round . But if , all nonfaulty agents receive in round because all good agents must send it to all nonfaulty agents in this round. Thus, the lemma holds.
(3) Case 2.3. chooses 3. So no good agents know in round and is detected faulty in round . Suppose that there is only one risk agent receiving the state.
Since agents are independent of each other, it is easy to scale the number of agents from one to many. We divide this case into two subcases.
(a) Case 2.3.1. only sends messages to in round . It means that . If , does not receive messages from in round . Then the result is the same as that in case 2.1. But if , has no influence on the final result, and the consensus result of depends on the choice of . If also chooses case 2.3.1, then the two states of exist only in and . The final result is again the same as that in case 2.1.
(b) Case 2.3.2. has no sending omissions with some risk agents or faulty agents other than . Then in round , the risk agents and faulty agents that have received messages from also have 3 choices. Take one of the risk agents as an example. If chooses 1 or 2 in round , the results are the same as those in case 2.1 and case 2.2, where the lemma holds. And if chooses 3, no good agents know in round . Suppose that when risk and faulty agents choose 3, they must send to risk agents or faulty agents other than the source agent of the state, because if they only send the state back to the source agent, the final result depends only on the source agent, not on themselves. Then until round , if from round to round all risk agents and faulty agents that have received choose 3, the risk (or faulty) agent in round must be the last risk (or faulty) agent in the system. At this time, has only 2 choices: 1 and 2. And it is easy to see that must reach consensus in round . But if from round to round some risk agents or faulty agents that have received choose 1 or 2, then the results are the same as those in case 2.1 and case 2.2.
In summary, the lemma holds.
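The message passing mechanism above can be illustrated by a toy round-based flooding simulation: every agent that knows a detection result forwards it along all links that deliver, and send omissions are modelled as missing links. The model and the matrix representation are assumptions for illustration; the paper's concrete round bounds were lost in extraction.

```python
# Toy flooding model: send_ok[i][j] says whether i's message to j is
# delivered in a round (a False entry models a send omission).

def flood(n, send_ok, source):
    """Simulate per-round forwarding of one detection result, starting from
    `source`. Return (rounds_used, set_of_agents_that_learned_the_state)
    once the state stops spreading."""
    knows, rounds = {source}, 0
    while True:
        new = {j for i in knows for j in range(n) if send_ok[i][j]} - knows
        if not new:
            return rounds, knows
        knows |= new
        rounds += 1
```

With no omissions, one round suffices (case 1 in the proof); a chain of agents, each forwarding only to the next (the repeated "choice 3" in case 2.3.2), stretches the delay to one round per hop, which is the worst case the theorem's bound must absorb.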

Corollary 1. If , all link states in round can reach a consensus between and in round .

Proof. There are faulty agents in round . Since the faulty agents before round do not send any messages in round , sending messages to these faulty agents in round is equivalent to case 2.1. Then the total number of risk and faulty agents in case 2.3 can be reduced to . Therefore, in case 2.3.2, if agents keep choosing 3, there are no risk or faulty agents left by round , and all link states in round can reach consensus between and in round .

Lemma 1. (round complexity). The link states of the second clean round and all previous rounds can reach a consensus among all nonfaulty agents at the latest in round .

Proof. By Theorem 1, the smaller the round , the smaller the supremum of the round in which the link states in round can reach a consensus. Hence, we directly consider the second clean round. Suppose the second clean round is . Then there are already at least faulty agents in round . That is, . By Corollary 1, the link states in round can reach consensus in round . Since , the lemma holds.
Then it is proved that the algorithm satisfies all the properties of uniform consensus with general omission failures.

Lemma 2. If is a nonfaulty agent, then when .

Proof. Link cannot recover after a fault occurs. So if is a faulty link in round , then its state must also be in subsequent rounds. Moreover, also expands all states backwards in LASTUPDATE.
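The irreversibility used in this proof can be captured by a monotone update rule: once an agent records a link as faulty in some round, every later recorded state for that link stays faulty. The history representation (a dict from round number to state string) is an assumption for illustration.

```python
# Sketch of Lemma 2's invariant: a link cannot recover after a fault occurs,
# so recorded states are monotone once "faulty" appears.

def record_state(history, rnd, observed):
    """Record `observed` for round `rnd`, never reviving a faulty link."""
    if any(r <= rnd and s == "faulty" for r, s in history.items()):
        history[rnd] = "faulty"   # earlier fault dominates any later observation
    else:
        history[rnd] = observed
    return history
```

This mirrors why a state recorded as faulty in round r must also be faulty in all subsequent rounds, regardless of what is observed later.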

Lemma 3. If is a nonfaulty agent, then when .

Proof. Since , there must be an agent in or (supposing ) that has reported in round , and finally the state has been transmitted to . We suppose that receives the state in round . Then we have . Since link omission is irreversible, must be nonfaulty from round 1 to round . Hence, must eventually be received by . That means . Combining this with Lemma 2, we get . Thus, the lemma holds.

Lemma 4. If is a nonfaulty agent, then when .

Proof. For a contradiction, let when . Suppose that receives the state of in round , and that it receives it from . Then we must have . Since link omission is irreversible, the message propagation path is also correct for round , so must receive and then . Therefore, we have a contradiction, and the lemma holds.

Lemma 5. A nonfaulty agent must have correct links with at least agents other than itself in a round.

Proof. For a nonfaulty agent , must be less than or equal to . Hence, has correct links with at least agents.

Lemma 6. A nonfaulty agent must have correct links with at least 2 good agents other than itself in a round.

Proof. We analyze the nonfaulty agent by distinguishing whether it is a good agent or a risk agent.
(i) Case 1. is a good agent. Then must have correct links with all other good agents. Since and , there must be .
(ii) Case 2. is a risk agent. Suppose that is nonfaulty in round and there are faulty agents in this round. Because faulty agents must be removed when computing the state of , combining Lemma 5, the risk agent needs to have correct links with at least good agents in round . Since , . Then always holds.
Therefore, the lemma holds.
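The counting behind Lemmas 5 and 6 can be sketched numerically. The exact bound relating the number of agents n and the fault threshold t was lost in extraction, so the figures below are an assumption: a nonfaulty agent has at most t faulty links, and at most t of its correctly linked neighbors can be risk or faulty agents.

```python
# Hypothetical counting sketch for Lemmas 5 and 6, assuming n agents and at
# most t faulty links / risk-or-faulty agents.

def link_counts(n, t):
    at_least_correct = n - 1 - t       # Lemma 5: correct links with >= n-1-t others
    at_least_good = n - 1 - 2 * t      # Lemma 6: of those, at most t are risk/faulty
    return at_least_correct, at_least_good
```

For example, with n = 7 and t = 2 a nonfaulty agent keeps correct links with at least 4 agents, at least 2 of which are good, matching Lemma 6's "at least 2 good agents" whenever n >= 2t + 3 (again, an assumed bound).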

Remark 1. For a faulty agent in round r, since it has faulty links with more than agents in round , it stops sending messages to any agent after at most 2 rounds. Hence, we claim that in round and later, the faulty agent needs to be removed when computing the number of connections of other agents.

Lemma 7. Suppose that the direct link state information of in round can be agreed on by all nonfaulty agents in round . If is a nonfaulty agent in round and agent is considered to be an uncertain agent in , then must be a faulty agent in .

Proof. The proof is by contradiction. Assume that is considered to be a nonfaulty agent or an uncertain agent in .
(i) Case 1. is a nonfaulty agent in when it is considered to be an uncertain agent in . must send to at least 2 good agents in round (Lemma 6). Then these good agents send the direct link states of to all nonfaulty agents. Hence, must be a certain agent in . A contradiction.
(ii) Case 2. is an uncertain agent in when it is considered to be an uncertain agent in . It is easy to see that the link states between and good agents cannot be unknown-state in round and . Since the number of good agents must be greater than , cannot have faulty links with all good agents. Then it must send to some good agents in round . Equally, must be a certain agent in . A contradiction.
Thus, we reach contradictions in all cases, which proves the lemma.

Lemma 8. If round is a clean round, the state of can reach consensus among all nonfaulty agents in round , where and .

Proof. Consider the state of in the following cases:
(i) Case 1. and are good agents. By Lemma 6, a risk agent must have correct links with some good agents. Hence, sends to all good agents and to the risk agents having correct links with in round , and does the same. In round , all good agents have both detection results of . Then, after updating, they send the uniform state to all risk agents that have faulty links with and in round .
(ii) Case 2. and are risk agents. and send their detection results of to some good agents (denoted by ) and risk agents in round . Then the two results are sent to all good agents by the agents in in round . So every good agent knows the uniform state of in round . Therefore, all nonfaulty agents reach a consensus on the state in round .
(iii) Case 3. is a good agent and is a risk agent. Similarly, it is easy to see that all good agents have the uniform state of in round by case 1 and case 2, as desired.
Thus, the lemma holds.

Lemma 9. If round is a clean round, then in the , the state of link ( and ) cannot be .

Proof. Assume, without loss of generality, that the messages of have no effect on the state of . Since is a nonfaulty agent, by Lemma 8, is received by all nonfaulty agents in round . Hence, must know the state of . The lemma holds.

Corollary 2. If round is a reliable round and the total number of rounds is greater than , there are no uncertain agents in round .

Proof. For a contradiction, let be an uncertain agent in round . By Lemma 7, we have two cases, and for both case 1 and case 2, Lemma 8 yields contradictions to the assumption. Hence, must be a faulty agent in . The unknown-state link is regarded as a correct link, so that is regarded as a nonfaulty agent in . This contradicts the assumption that is a reliable round. Thus, the lemma holds.

Lemma 10. There is at least one clean round in rounds.

Proof. Suppose, for a contradiction, that there is no clean round in rounds. Then there must be new faulty agents added in each round, so there are at least faulty agents in rounds. This contradicts the assumption that there are at most faulty agents.
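The pigeonhole argument of Lemma 10 can be checked exhaustively for small parameters. Here we assume the usual reading (t + 1 rounds, at most t faulty agents in total); the paper's exact symbols were lost in extraction.

```python
# Exhaustive pigeonhole check in the spirit of Lemma 10: over t+1 rounds with
# at most t new faulty agents in total, some round detects no new fault.
from itertools import product

def has_clean_round(new_faults_per_round):
    return any(k == 0 for k in new_faults_per_round)

def check_lemma(t):
    """Confirm the claim for every fault schedule over t+1 rounds whose total
    number of new faulty agents is at most t."""
    return all(
        has_clean_round(schedule)
        for schedule in product(range(t + 1), repeat=t + 1)
        if sum(schedule) <= t
    )
```

If every one of the t + 1 rounds added at least one new faulty agent, the total would be at least t + 1 > t, exactly the contradiction in the proof.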

Corollary 3. There are at least two clean rounds in rounds.

Corollary 4. In rounds, there must be one reliable round in which at most one new faulty agent is detected.

Proof. Suppose that there are clean rounds in rounds. We prove the corollary in two cases:
(i) Case 1. All clean rounds are greater than round 1. For a contradiction, suppose two new faulty agents are detected in each reliable round. Then there are faulty agents, and there are still faulty agents remaining in rounds. It is easy to see that . Therefore, there must be clean rounds in the remaining rounds. This contradicts the assumption that there are clean rounds in rounds.
(ii) Case 2. Round 1 is a clean round. For a contradiction, suppose two new faulty agents are detected in each reliable round. Then there are faulty agents, and there are still faulty agents remaining in rounds. Since , there must be clean rounds in the remaining rounds, a contradiction.
Thus, we reach a contradiction in every case, which proves the corollary.
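A companion pigeonhole check, in the spirit of Corollary 4: if every round detected at least two new faulty agents, R rounds would yield at least 2R faults, so any stretch of R rounds with 2R greater than the fault budget t must contain a round detecting at most one new faulty agent. The concrete round count in the corollary was lost in extraction, so the parameters below are assumptions.

```python
# Exhaustive check: with at most t faulty agents in total, any R rounds with
# 2*R > t contain a round detecting at most one new faulty agent.
from itertools import product

def has_light_round(schedule):
    return any(k <= 1 for k in schedule)

def check_corollary(t, rounds):
    assert 2 * rounds > t  # the regime in which the counting argument applies
    return all(
        has_light_round(s)
        for s in product(range(t + 1), repeat=rounds)
        if sum(s) <= t
    )
```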

Lemma 11. In round , if a faulty agent can receive messages from at least one good agent, the link states of the second clean round and all previous rounds can also reach a consensus among and all nonfaulty agents.

Proof. By Corollary 4, from the second clean round to round , there must be a reliable round (suppose the first is ) in which at most one new faulty agent is detected, because the total number of risk and faulty agents is and the total number of rounds is . Suppose , which is a clean round. Since , . Since no new faulty agents are detected in round , risk agents can only choose 2 in round (see details in Theorem 1). Hence, in round , all good agents reach a consensus on the link states of round and the rounds before. We consider two cases:
(i) Case 1. . Then we have . In round , all good agents send the latest and uniform link states of round and the rounds before to all agents. Thus, must reach a consensus.
(ii) Case 2. . We assume that for the reliable rounds in which two or more faulty agents are detected, the faulty agents can be averaged over to the next round, and then the clean round can also be regarded as a normal round. It can then be seen that the number of faulty agents keeps increasing in each round from round to round . Thus, at least faulty agents have been added until round . Since there are risk agents in round , at most one risk agent remains in round , and it must be . Then it is easy to see that must reach a consensus in round .
Thus, the lemma holds.

Theorem 2. Consensus solves uniform consensus if at most agents omit to send or receive messages, , and all agents are honest.

Proof. Since , it is easy to see that no inconsistency is detected.
Termination. From Algorithm 1, nonfaulty agents must decide in round and faulty agents decide before round .
Validity. Since no inconsistency is detected, all agents make decisions different from . For agent , if decides a value , must be the initial preference of an agent in . Since depends on , we must have . Therefore, satisfies the validity property. If decides , makes no decision and does not affect the final consensus outcome; thus, this also conforms to validity.
Uniform Agreement. We prove this in the following cases:
(i) Case 1. Agents and . By Corollary 3, there must be a in rounds. And by Lemma 1, we have . Then . Since the pieces of preferences and proposals of all the agents in must be saved by at least 2 good agents (Lemma 6), all good agents and some risk agents can restore all initial values and proposals of the agents in in round . We denote these agents by and . Thus, all agents in have the same set and give a unified set containing one value. That is, if agent and , then there must be and in round . And if agent , in round .
(ii) Case 2. We analyze the agents in . It is easy to see that .
(1) Case 2.1. . must decide at the latest in the receive phase of round .
(2) Case 2.2. . Suppose that denotes the set of faulty agents that can receive the messages from some good agents in round and send messages in round . Then . It is easy to see that the agents in definitely do not send messages in round , and they must decide in round . If , by Lemma 11, must be the same as the of the good agents in case 1. Thus, if can restore all initial preferences and proposals in , it must have and in round ; otherwise, in round .
(3) Case 2.3. . Since is a nonfaulty agent in round , has the same two possible states in round as in case 1. Suppose . denotes the set of faulty agents that cannot detect their own faults in the receive phase of round , and denotes the set of faulty agents that can detect that they become faulty agents in the receive phase of round . Therefore, the agents in decide by and the agents in decide in round .
In summary, set has only two types in round and round : and . The agents in decide in round , the agents in decide before round , and the agents in decide in round . Thus, uniform agreement holds.
To achieve the Nash equilibrium, we make some appropriate assumptions about initial preferences and failure patterns. A failure pattern represents the set of failures that occur during an execution of the consensus protocol [12]. Specifically, we assume that initial preferences and failure patterns are blind.

Definition 4. The blind initial values mean that each agent cannot guess the preferences of other agents and the probability of its own preference becoming the consensus cannot be improved by trusting others.
By Definition 4, if an agent wants to improve its own utility, it can rely only on itself, for example, by increasing the probability of entering the , reducing the number of agents in the , and so on.

Definition 5. The blind failure patterns mean that before faulty agents appear, an agent cannot guess the link states in the following rounds. Then we have that:
(i) if agent does not know the link states of round in round and is a nonfaulty agent in round , then ;
(ii) for round and , if and does not know the link states of round , then ;
(iii) for rounds, if the link states of each round in rounds are unknown to agent , then for a round in rounds, .

Theorem 3. If , at most agents have omission failures at the same time, agents prefer consensus, and failure patterns and initial preferences are blind, then is a Nash equilibrium.

Proof. To prove , we need to show that it is impossible for each agent to increase its utility by any possible deviation . That is, we prove that for each agent , there must be

We use the same deduction method as in [12]. Consider all the ways that can deviate from the protocol to affect the outcome:
(1) generates a different value (or ) and sends (or ) to some agents .
(2) (or ) sent by cannot restore (or ).
(3) does not choose or or appropriately, for example, not randomly.
(4) sends an incorrectly formatted message to in round .
(5) If in round , does not decide but continues to execute the following protocol.
(6) lies about the state of in round ; that is, in round , sends a state of which is different from .
(7) sends an incorrect or of to in round .
(8) sends an incorrect (or ) to different from the (or ) that has received from in round 1.
(9) sends an incorrect to in round .
(10) pretends to crash in round .
We consider these deviations one by one and prove that does not gain by any of them; that is, equation (1) holds if deviates from the protocol in any of the ways listed above.
(i) Type 1. (i) If sends to some agents, then either an inconsistency is detected because of a secret restoring error, or does not gain. Specifically, if is the agent whose value is chosen, then is worse off if it lies than if it does not, since some agents cannot restore , but they can restore it when following the protocol. If is not the agent whose preference is chosen, then the deviation does not affect the outcome. (ii) sends . Then an inconsistency is detected if restoring the polynomial fails or different consensus values are generated in the system. And if no inconsistency is detected, then either all agents that receive or are faulty, or both and do not affect the final outcome. Since changing the proposal cannot increase ’s utility, does not gain. Therefore, in both (i) and (ii), does at least as well if it uses the strategy as if it deviates from the protocol according to type 1.
So, equation (1) holds.
(ii) Type 2. It is easy to see that either an inconsistency is detected or there is no benefit, because there is no increase in the probability that becomes the consensus. Thus, equation (1) holds.
(iii) Type 3. (i) Since other agents follow the protocol, the deviation does not affect the final outcome because the two kinds of random numbers are only used for verification. (ii) Since does not know the proposals of other agents in round 1, using different proposals cannot improve the probability that becomes the consensus. Thus, equation (1) holds.
(iv) Type 4. If sends an incorrectly formatted message to , then either an inconsistency is detected by , or it does not affect the outcome since omits to receive messages from in round . Thus, does not gain, so equation (1) holds.
(v) Type 5. Since , does not receive messages from at least agents in round ; that is, has receiving omission failures with at least two good agents.
(1) Case 1. does not guess message random numbers in round . Then by Claim 1, an inconsistency is detected.
(2) Case 2. guesses the message random number in round and has correct links with the remaining agents, and these agents are all nonfaulty agents. Then by Claim 1, can successfully send messages in round iff can guess a message random number from a nonfaulty agent and has no sending omission failures with . That is, the random guessing does not change the detection result of the state of by other agents in round . Clearly, the probability that can guess a random number is .
(a) Case 2.1. If guesses some random numbers from agents and has sending omission failures with each agent , then it does not affect the outcome even if the random numbers are guessed correctly.
(b) Case 2.2. Suppose that guesses only one random number from the good agent in a round and does not have sending omission failures with .
If only guesses the message random number in round , then we have that

Thus,

which means that only guesses one random number in round and then must be in . Since , there are at least agents in . It is easy to see that if guesses random numbers in multiple rounds, the utility must be less than (3). If follows the protocol, then must decide in round . Since has receiving omission failures in round , the link states after round must be unknown to . Thus, by Definition 5 and the assumptions about omission failures, we can get that

Then

Since , (1) must hold.
(c) Case 2.3. If guesses more than one random number in round , then the utility of must be less than (3). And does at least as well by following the protocol as by deviating, because the guessing work does not affect the state of in round . Specifically, either is a nonfaulty agent detected by other agents, in which case (5) holds, or is a faulty agent, in which case it does not affect the outcome even if guesses the random number correctly. Thus, (1) holds.
(3) Case 3. If has sending omission failures with the remaining agents, then either there is no benefit, since is faulty in round as detected by other agents, or does not gain if is nonfaulty, because the utility of deviating from the protocol must be less than (3).
In summary, either an inconsistency is detected by Claim 1 if does not guess the message random numbers, or there is no benefit from guessing them. Thus, yet again, (1) holds.
(vi) Type 6. By the proof of Type 5, it is easy to see that must be a nonfaulty agent detected by itself in round . Since there is more than one state of a link, we partition this deviation into eight cases and show that does at least as well by using as by deviating from the protocol in these eight ways.
(1) Case 1. or , such as , and where , and pretends in round .
(a) Case 1.1. . Then must be sent in round in order to enable the message chain mechanism to succeed. Since , an inconsistency must be detected in round .
(b) Case 1.2. and is the .
Since does not know the link states of round in the sending phase of round , by Definition 5, if is a nonfaulty agent in round , then . Suppose that the in round is when following the protocol. We can see that . (i) If all agents become faulty due to the deviation of , then there is no consensus in the system and does not gain. (ii) If the deviation does not make the problem unsolvable and is a nonfaulty agent in round , then we have that

That is,

It is easy to see that

If , then in (7) takes its supremum . Hence, equation (1) must hold. (iii) If is a faulty agent in round , then the deviation does not affect the outcome. Thus, cases (i), (ii), and (iii) all satisfy equation (1).
(c) Case 1.3. and . Since the link states of the next are unknown to , either there is no solution, since all agents become faulty; or the deviation of does not affect the final outcome; or, by Definition 5, does at least as well by using as by deviating from the protocol, because the probability that becomes the consensus does not increase. Thus, equation (1) holds.
(d) Case 1.4. and and . Either there is no solution, since all agents become faulty, or there is no benefit, because the deviation does not affect the .
In summary, no case in case 1 makes gain.
(2) Case 2. or , such as , and where , and pretends in round . If r < m − 1, does not gain, which is the same as that in case 1.1. If r ≥ m − 1, does not gain, which is the same as that in case 1.1. If , since is nonfaulty in round and does not know the link states of round , then by Definition 5, there is no benefit in guessing the message random number with probability . So equation (1) holds.
(3) Case 3. and , and or , and pretends in round . Suppose lies about the detection result of . (i) If is faulty in round , it does not affect the final outcome even if no inconsistency is detected. (ii) If is nonfaulty in round , then must send the faulty random numbers to at least two good agents (Lemma 6). And guesses a faulty random number with probability .
If it does not affect the states of and , then does not gain even if it guesses the random number correctly. Otherwise, if the is changed, then

And since is nonfaulty in round , then

By the definition of , equation (1) holds. So equation (1) holds in both (i) and (ii).
(4) Case 4. and , and or , and pretends in round . We also suppose that lies about the detection result of . (i) If , then it does not affect the final outcome even if guesses the random number correctly, since and have the same meaning when computing the state of an agent. (ii) If , then either an inconsistency is detected by message random number verification or a link state conflict; or it does not affect the outcome if the states of and are unchanged after deviating; or it makes round a . Then if there is already a and , it does not affect the outcome, because we need only the first reliable round in the end. And if , then the utility of decreases, because is advanced compared to following the protocol. If there is no , by Definition 5, does at least as well by using as by deviating from the protocol. Hence, equation (1) holds again.
(5) Case 5. or , and or where , and pretends in round . By Claim 1, an inconsistency is detected. Thus, equation (1) holds.
(6) Case 6. and , and or , and pretends in round . Since and have the same meaning when computing the state of an agent, there is no benefit, which is the same as that in case 4.
(7) Case 7. and , and , and pretends in round . We can turn this case into case 3 because the type of must be . So it has the same result as case 3. Thus, yet again, equation (1) holds.
(8) Case 8. or , such as , and where , and receives , and pretends or in round . (i) If is a nonfaulty agent in round , then it must send to at least two good agents except in round . Thus, all nonfaulty agents must know in round , so does not gain. (ii) If is faulty in round , then since is nonfaulty in round , is also a nonfaulty agent in round even if the link between and is faulty.
Thus, the results are the same as those in case 4 and case 6. Therefore, equation (1) holds in both (i) and (ii). In summary, in any case, ’s utility is at least as high with as with .
(vii) Type 7. Since the random numbers are only used in inconsistency detection, either detects an inconsistency and decides , or the deviation does not affect the final outcome if no inconsistency is detected. Thus, equation (1) holds.
(viii) Type 8. It is easy to see that either an inconsistency is detected due to a consensus difference or a secret restoring fault; or it does not affect the outcome if or is not the agent whose preference is chosen; or does not gain, due to the blind initial preferences (Definition 4) and the random agents’ proposals.
(ix) Type 9. Clearly, the deviation does not affect the outcome if decides in the receiving phase of round or does not receive the messages from . Otherwise, must receive the messages from at least two good agents in the last round. Then detects an inconsistency and decides . So equation (1) holds.
(x) Type 10. We divide this type into two cases.
(1) Case 1. There is no consensus in the system. This happens when either all agents become faulty due to the deviation, or restoring the secret fails in round for all good agents because of the missing pieces of . Thus, ’s utility with pretending to crash is lower than with following the protocol in case 1.
(2) Case 2. There is a consensus finally. (i) . Since does not send messages to any agents before the decision round, cannot exist in the decision set D. Thus, ’s utility also decreases when pretending to crash. (ii) . It does not affect the outcome. Therefore, (1) holds in both cases 1 and 2.
This concludes the proof.

5. Conclusion

In this paper, we introduce game theory as an interpretable method for studying algorithms in multiagent systems and provide an algorithm for uniform consensus that is resilient to both omission failures and strategic manipulations. We prove that our uniform consensus is a Nash equilibrium as long as , and failure patterns and initial preferences are blind. Additionally, we present the theory of message passing in the presence of process omission failures. We argue that our research enriches the theory of fault-tolerant distributed computing and strengthens the interpretable reliability of consensus with omission failures from the perspective of game theory. Our contribution provides a theoretical basis for the combination of distributed computing and strategic manipulations in omission failure environments, which we think is an interesting research area.

In our opinion, there are many interesting open problems and research directions not covered in this paper. We list a few here: (a) whether an algorithm for rational uniform consensus exists if coalitions are allowed; (b) the study of rational consensus with more general types of failures, such as Byzantine failures; (c) with the problem setting of this paper, whether rational consensus exists if we relax the constraint ; (d) studying rational consensus in asynchronous systems, which seems significantly more complicated; and (e) introducing the assumption of agent bounded rationality, which may be useful in practical scenarios.

Data Availability

The data and proof used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the National Key R&D Program of China (2018YFC0832300 and 2018YFC0832303).