Complexity

Volume 2017 (2017), Article ID 4832740, 11 pages

https://doi.org/10.1155/2017/4832740

## Distributed Sequential Consensus in Networks: Analysis of Partially Connected Blockchains with Uncertainty

^{1}Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA

^{2}Harvard T. H. Chan School of Public Health, Harvard University, Boston, MA 02115, USA

^{3}BISITE Research Group, University of Salamanca, Edificio Multiusos I+D+i, 37008 Salamanca, Spain

Correspondence should be addressed to Francisco Prieto-Castrillo; fprieto@mit.edu

Received 9 July 2017; Revised 13 September 2017; Accepted 10 October 2017; Published 1 November 2017

Academic Editor: Dimitri Volchenkov

Copyright © 2017 Francisco Prieto-Castrillo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This work presents a theoretical and numerical analysis of the conditions under which distributed sequential consensus is possible when the state of a portion of nodes in a network is perturbed. Specifically, it examines the consensus level of partially connected blockchains under failure/attack events. To this end, we develop stochastic models both for the verification probability once an error is detected and for network breakdown when consensus is not possible. Through a mean field approximation for the network degree, we derive analytical solutions for the average network consensus in the thermodynamic limit of large graph size. The resulting expressions allow us to derive connectivity thresholds above which networks can tolerate an attack.

#### 1. Introduction

Trust is usually conceived as the additive aggregation of reliable pieces. However, when it comes to cyber-security or privacy requirements, the challenge is how to collaboratively create trust out of uncertain sources in a networked environment [1–6]. A remarkable success story of this approach is Bitcoin [7]. In Bitcoin, trust is built by a set of agents—*miners*—who collaborate in sequencing *blocks* of transactions in a chain. *Blockchain* (BC) is the underpinning technology of Bitcoin, a protocol in which miners compete to solve a computationally expensive problem known as *Proof-of-Work* (POW) [8]. The miners’ results are then assembled into a distributed data chain. The outcomes are only embedded in the final version of the chain after consensus, which is reached only if the order relationships are consistent. POW is a proxy of trust and, hence, reliability increases as the chain grows; it becomes incrementally more difficult to revert—hack—the chain, since this requires increasing computing power. Thus, although each agent generates insecure information locally, the resulting aggregate becomes more and more reliable over time.
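The POW mechanism described above can be illustrated with a minimal sketch: a miner searches for a nonce whose hash meets a difficulty target, and anyone can verify the result with a single hash. The SHA-256 choice, the string block encoding, and the difficulty parameter below are illustrative assumptions, not Bitcoin’s actual block format.

```python
import hashlib

def mine(block_data: str, difficulty: int = 3) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block of transactions", difficulty=3)
# finding the nonce took many hash evaluations; verifying it takes one
assert hashlib.sha256(f"block of transactions:{nonce}".encode()).hexdigest() == digest
```

The asymmetry between the search loop and the one-line verification is precisely why the chain becomes harder to revert as it grows: rewriting a block forces an attacker to redo this search for that block and every block after it.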

Recently, however, these advantages have also raised questions about how the BC paradigm can be exported to domains other than cryptocurrency, such as the Internet-of-Things (IoT) or Wireless Sensor Networks (WSN) [9, 10]. The difficulty arises from the limitations of the BC architecture, which hamper its extension to small devices (e.g., sensors). Sensors, in particular, lack the computing power to perform POW. An even more challenging fact is that BC requires full connectivity to operate, which is unfeasible for WSNs. Therefore, the question at issue is how to design blockchains that operate without POW and under partial connectivity while maintaining robustness to failures and attacks.

Distributed consistency is not a novel concept. In [11] the authors analyse the consistency of distributed databases using algorithms closely related to epidemiological models [12]. Two information diffusion mechanisms, antientropy and rumor mongering, prove particularly useful for modelling distributed consistency. Antientropy regularises entries in the databases, while rumor mongering updates the latest information content from neighbour instances. This trade-off between ordered and random infection allows the authors to find exponential epidemic growth using a mean field approach. The concept of diffusion in partially connected networks is treated rigorously in [13] in the context of glassy relaxation. There, the geometrical aspects of the return probability of a Markovian hypercube walk are also analysed using mean field theory.
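The rumor-mongering mechanism can be sketched as a push-gossip process: in each round, every informed site pushes the rumor to one peer chosen uniformly at random. The synchronous rounds and uniform peer selection below are simplifying assumptions of this sketch, not the exact protocol of [11], but they reproduce the exponential early growth and logarithmic spreading time of the mean field analysis.

```python
import random

def gossip_rounds(n: int, seed: int = 0) -> int:
    """Count synchronous push-gossip rounds until all n sites are informed."""
    rng = random.Random(seed)
    informed = {0}  # site 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        # every informed site pushes the rumor to one uniformly random peer
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

print(gossip_rounds(1000))  # grows roughly logarithmically in n
```

Since the informed set can at most double per round, at least log2(n) rounds are needed; the random peer choice adds only a logarithmic overhead to inform the last few stragglers.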

The effect of graph topology on information spreading has been extensively discussed in the literature (e.g., [14–16]). However, the model in [16] (a random graph superposed on a ring lattice) is particularly relevant to our discussion, since it ensures a minimum connectivity while maintaining the *small-world* property (i.e., high clustering coefficient and small characteristic path length [17]).

In [18] the general distributed consensus problem is described: a set of nonfailing sites has to decide on a common value. The authors of that study found that the key drivers of consensus breakdown are asynchrony and failure, which both inject uncertainty into the system at different scales. Distributed consensus in networks is also analysed in [19], where the authors address the most important applications of the concept, such as clock synchronisation in WSNs. The authors introduce the average consensus as the limit to which the initial states converge, provided this limit equals the average of the initial values. Interestingly, a randomised consensus protocol (where only a fraction of sites needs to agree on a value) is shown to be more robust against crashes than a deterministic algorithm [20].
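The average-consensus notion above can be sketched in a few lines: each node repeatedly nudges its state towards those of its neighbours, and on a connected undirected graph with a sufficiently small step size all states converge to the average of the initial values. The ring topology and step size below are illustrative choices for this sketch.

```python
def consensus_step(x, neighbors, eps=0.2):
    """One synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# four sites on a ring; initial states differ
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 2.0, 3.0, 4.0]
target = sum(x) / len(x)  # the average of the initial values
for _ in range(200):
    x = consensus_step(x, neighbors)
# every state is now within numerical tolerance of `target`
```

Because the update matrix is symmetric and doubly stochastic, the sum of the states is conserved at every step, which is exactly why the common limit equals the averaged initial values.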

When consensus is not reached, systems usually break down. From the point of view of control theory, a number of interesting results have been obtained on this issue, for example, [19], aimed at self-healing the system promptly after failure. However, security and resilience are multidimensional objects which can be tackled more consistently through a complex systems approach [21, 22]. For instance, [23] proposes a phone call model where players broadcast rumors randomly among their partners. The authors study the effect of node failure and reach an interesting result: if failure patterns are random, crashing nodes leave only a small number of players uninformed with high probability. The work also establishes a lower bound on the number of transmissions required by any randomised rumor spreading algorithm running for a given number of rounds. This is consistent with what we know from network science [24]: random failures do not spread so easily. The model considered in [25] consists of sites running processes asynchronously, where failures are modelled as a Bernoulli process. In [26] the problem is set in terms of a voter model and an invasion process; agreed values are exported from a set of sites, but imported errors infect the rest of the nodes.

When it comes to blockchain implementations, [27] analyses information propagation in the Bitcoin network. This work highlights the limitations of the synchronisation mechanisms in BC and the system’s weaknesses under attack. Here, the communication network is modelled as a random graph with a mean degree of ≈32, and it is found that the block verification process can contribute substantially to propagation delay and inconsistency. In their experiments the authors show that the probability distribution of the rate at which nodes learn about a block has a long tail. This means that there is a nonnegligible portion of nodes which does not receive information in a timely manner. The effect is equivalent to considering an incomplete consensus network. A typical example of an organised attack on the BC is the so-called *selfish-mine* strategy, in which a subset of nodes diffuses information to selected targets instead of distributing updates homogeneously [28]. In [29] a Markov chain model is used to analyse the selfish-mine strategy in Bitcoin. This and other block-withholding behaviours can have a devastating effect on performance if the dishonest community is around half the size of the network.

All these works provide key insights into the problems of network resilience, diffusion, and consensus from different perspectives. However, to the authors’ knowledge, a mathematical model of partially connected blockchains is still missing. Therefore, in this paper we carry out a theoretical and numerical analysis of the conditions under which distributed sequential consensus is possible. Specifically, we examine the consensus level of partially connected blockchains under failure/attack events. To this end, we develop stochastic models both for the verification probability once an error is detected and for network breakdown when consensus is not possible. The resulting expressions allow us to derive connectivity thresholds above which networks can tolerate an attack.

The paper is organised as follows. In Section 2 we formulate the problem. The results obtained in the study are presented in Section 3. Finally, in Section 5 we present the conclusions drawn from our research and discuss possibilities for future work.

#### 2. Problem Formulation

Blockchains can be conceived as dynamical distributed databases whose constituents (blocks) are collaboratively and incrementally built by a set of agents. There are three key factors in this process: (a) how information spreads, (b) how consensus can be achieved, and (c) how errors affect the overall performance. We elaborate on these elements below.

##### 2.1. Partial Connectivity in Consensus Networks

From a network perspective we consider a *Peer-to-Peer* (P2P) infrastructure with two types of nodes: communication sites and processing sites (miners) (Figure 1). Users connected to nodes can launch transactions to other users in the network. If a group of users is involved in a transaction arrangement, one or more miners can attempt to verify the intended transactions and, if successful, pack them into a block. This problem can be conceived as the interplay of three graphs: communication, transactions, and miners. As stressed above, the usual BC protocol takes a full graph for granted, which is not always possible; there may be failures or intentional attacks on a portion of the network. However, it is unlikely for a network to become disconnected under normal operation. Hence, graph connectedness is a reasonable lower bound assumption (particularly in the case of sensor networks and the IoT). This leads us to consider the network proposed in [16], consisting of a random graph superposed on a ring lattice. This model still exhibits the *small-world* property found in [14, 15] but is closer to the real requirements of minimum connectivity found in WSNs and other networked systems such as computer networks [30].
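The superposition model can be sketched as follows: start from a ring lattice, which guarantees connectedness by construction, and add each long-range edge independently with probability p. The graph size and edge probability below are illustrative parameters for this sketch (a library such as networkx offers comparable generators).

```python
import random
from collections import deque

def ring_plus_random(n, p, seed=0):
    """Ring lattice on n nodes, with each non-ring pair joined
    independently with probability p (superposition model)."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for j in range(i + 2, n):
            # skip the (0, n-1) pair: it is already a ring edge
            if not (i == 0 and j == n - 1) and rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def is_connected(adj):
    """Breadth-first search from node 0 reaches every node iff connected."""
    seen, queue = {0}, deque([0])
    while queue:
        for v in adj[queue.popleft()] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)

g = ring_plus_random(50, 0.05)
# the ring guarantees connectivity for any p, unlike a pure random graph
assert is_connected(g)
```

This is exactly the minimum-connectivity guarantee exploited in the text: even if the random long-range edges are sparse or fail, the lattice backbone keeps the graph connected, while the shortcuts provide the small characteristic path length of the small-world regime.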