Security and Communication Networks

Volume 2018, Article ID 6254876, 7 pages

https://doi.org/10.1155/2018/6254876

## A Novel Load Capacity Model with a Tunable Proportion of Load Redistribution against Cascading Failures

^{1}School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China
^{2}School of Automation, Nanjing University of Posts and Telecommunications, Nanjing, China
^{3}Department of Computer Information and Cyber Security, Jiangsu Police Institute, Nanjing, China

Correspondence should be addressed to Yurong Song; songyr@njupt.edu.cn

Received 10 April 2018; Accepted 13 May 2018; Published 7 June 2018

Academic Editor: Lu-Xing Yang

Copyright © 2018 Zhen-Hao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Defence against cascading failures is of great theoretical and practical significance. We propose a novel load capacity model with a tunable proportion of load redistribution, taking both degree and clustering coefficient into account when redistributing the loads of broken nodes. The redistribution is local: the loads of broken nodes are allocated to their nearest neighbours. We apply the model to artificial networks as well as two real networks. Simulation results show that networks become more vulnerable and more sensitive to intentional attacks as the average degree decreases. In addition, the critical threshold at which the system passes from the collapse state to the intact state depends on the tunable parameter, which can therefore be adjusted to obtain the optimal critical threshold and make systems more robust against cascading failures.

#### 1. Introduction

Cascading failures are ubiquitous in real life and occur in many networks such as power grids, the Internet, and transportation systems. In 2003, the largest power outage in North American history took place, triggered by the breakdown of a single power plant in Ohio [1]. The traffic paralysis in southern China caused by storms in 2008 and Internet congestion [2] are typical examples of cascading failures as well. Such incidents seriously affect people's lives and threaten the stability of society, and more and more researchers have therefore come to investigate the issue from different perspectives.

There are several traditional models for studying cascading failures, known respectively as the load capacity model [3], the double value impact model [4], the optimal power flow approach model [5], the sand pile model [6], the coupled map lattice model [7], and so on. The load capacity model (ML model) proposed by Motter and Lai [3] shows that, in networks where loads can be redistributed to other nodes, intentional attacks can lead to a cascade of overload failures, which can in turn cause the entire network, or a substantial part of it, to collapse. To be more practical and to reduce the scale of collapse, scholars have put forward many cascading failure models based on the ML model. Zhou et al. [8] argue that node degrees reflect processing ability to some extent and let nodes with both higher loads and larger degrees acquire more extra capacity. Sun et al. [9] propose a new capacity matching model, developing a profit function to defend against cascading failures on artificial scale-free networks and on the real structure of the North American power grid. Fang et al. [10] investigate cascading failures in directed complex networks and adopt a load redistribution rule of equal allocation. Chen et al. [11] propose a nearest-neighbour load redistribution model, in which the load of a broken node is allocated to its nearest neighbours according to their degrees. Wang et al. [12] propose a local load redistribution model in which both the initial load of a node and the proportion of load redistributed to each neighbour of a broken node are functions of degree. Wang et al. [13] consider that not all overloaded nodes will be removed from the network, owing to effective protective measures, and propose a new model with a breakdown probability; they also propose a new method for initializing loads that takes the degrees of a node's neighbours into account, with the redistribution proportion again determined by the initial loads. Peng et al. [14] propose a renewed cascading failures model in which the initial loads are defined as a nonlinear function of the generalized betweenness. Since betweenness centrality scales with degree as a power law [2, 15, 16], this definition of initial loads is essentially a nonlinear function of degree. Generally, we can conclude that initial loads are all defined as functions of degree, and that the load redistribution proportion can be seen as a function of the initial loads; indeed, the redistribution proportions in [8–14] depend on the initial loads, which reflect the load processing ability to some extent. Duan et al. [17, 18] explore the critical thresholds of scale-free networks against cascading failures, and the spatiotemporal tolerance after a fraction of nodes is attacked, with a tunable load redistribution model that can adjust both the redistribution range and the heterogeneity of the broken nodes' load allocation. Their initial load is again defined as a function of degree, but the redistribution strategy is global: the proportion of load allocated to an intact node decreases with its distance from the broken node. Extending the redistribution range can undoubtedly improve system robustness against cascading failures. However, this strategy is sometimes impractical: long-distance load redistribution is costly in real applications and computationally expensive. Recently, some scholars [19] have turned to applications of the load capacity model in information warfare and proposed a cascading failures model for command and control networks with a hierarchical structure.

The studies above aim to improve the robustness of networks from various points of view and consider degree, betweenness, path length, and so on; however, the clustering coefficient [20] had not been incorporated into models of cascading failures. Some researchers have recently investigated the effect of the clustering coefficient [20] on the propagation of cascading failures. Zheng et al. [21] find that scale-free networks with a larger clustering coefficient are more sensitive and more prone to cascading failures. Ding et al. [22] explore cascading failures in interconnected weighted networks and conclude that networks with a smaller mean clustering coefficient have a stronger ability to resist cascading failures. Eisenberg et al. [23] analyze the topology and resilience of the South Korean power grid. They discover that the grid has low efficiency and a high clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Based on error and attack tolerance analysis evaluated with efficiency, they find that the South Korean power grid is vulnerable to random and degree-based attacks. Likewise, Monfared et al. [24] investigate the structural properties of the Iranian power transmission network, whose clustering coefficient is much larger than that of corresponding random networks. After studying the largest connected component of the network, they similarly conclude that the grid is vulnerable to cascading failures.

In this paper, we propose a novel load capacity model that takes clustering into account. The load redistribution strategy in our model is a nearest-neighbour method: broken nodes allocate their loads to their one-hop neighbours. We introduce a tunable parameter to govern the strength of the load redistribution proportion. Quantifying robustness by the critical threshold at which a phase transition takes place between the collapse and intact states, we investigate the relation between the tunable parameter and the critical threshold on ER random graph networks [25], BA scale-free networks [26], WS small-world networks [20], the North American power grid, and an autonomous systems (AS) subnet topology. Simulations of intentional attacks on a single node show a nonmonotonic and nonlinear relation between the two parameters, so the tunable parameter can be controlled to adjust the proportion of load redistribution and reach the optimal robustness of the network. Our simulations also suggest that, in our model, networks with a large average degree may be robust under intentional attacks, and that a highly clustered network with the same degree distribution cannot guarantee robustness. By comparison with another nearest-neighbour load redistribution model [14], we verify the better performance of our model. Our model may further research on controlling and defending against cascading failures in complex networks, which is instructive for designing infrastructure networks such as power grids, logistics systems, and communication networks.

#### 2. Cascading Failures Model

For simplicity, we assume that the network is initially in a static state in which the initial load of each node is less than its capacity and there are no broken nodes. After a single node is removed by an intentional attack, the balance among the nodes is disturbed, and the load of the broken node is redistributed to other nodes; in this paper, these are the one-hop neighbours of the broken node. If some of these nodes do not have enough capacity to handle the extra load, they break down in turn, and these newly broken nodes continue to allocate their loads to their intact neighbours, potentially triggering the collapse of part of the network or even of the whole network. This is the process of cascading failures in the framework of the load capacity model [3].
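The propagation just described can be sketched as a queue-driven simulation over an adjacency structure. In this minimal sketch the equal-split rule is only a placeholder for the redistribution proportion; the model's actual degree- and clustering-based proportion is defined later in this section:

```python
from collections import deque

def cascade(adj, load, cap, seed):
    """Propagate failures after the initial removal of `seed`.

    adj  : dict mapping each node to the set of its neighbours
    load : dict mapping each node to its current load
    cap  : dict mapping each node to its capacity
    Returns the set of broken nodes once the cascade stops.
    """
    load = dict(load)                  # work on a copy of the loads
    broken = {seed}
    queue = deque([seed])
    while queue:                       # stop when no newly broken nodes remain
        i = queue.popleft()
        intact = [j for j in adj[i] if j not in broken]
        if not intact:
            continue                   # the load of node i is simply lost
        share = load[i] / len(intact)  # placeholder: equal split among neighbours
        for j in intact:
            load[j] += share
            if load[j] > cap[j]:       # overload: node j breaks down in turn
                broken.add(j)
                queue.append(j)
    return broken
```

Any redistribution rule can be substituted for the equal split without changing the surrounding loop, which is why the model below only has to specify the proportion itself.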

Here, we let the initial load of node $i$ be a function of degree:

$$L_i = \Bigl(k_i \sum_{m \in \Gamma_i} k_m\Bigr)^{\theta}, \quad i = 1, 2, \ldots, N,$$

where $N$ is the number of nodes in the network, $k_i$ is the degree of node $i$, $\theta$ is a constant parameter that characterizes the strength of the initial loads, and $\Gamma_i$ is the set of node $i$'s neighbours. The capacity of a node is the maximal load that the node can manage under normal operation:

$$C_i = (1 + \beta) L_i,$$

where $\beta$ ($\beta \geq 0$) is the tolerance parameter. Generally, the tolerance parameter reflects a node's ability to defend against cascading failures: the larger it is, the more robust the network. However, improving the tolerance at all costs is not reasonable. Here, we seek the minimal $\beta$, which we define as the critical threshold $\beta_c$, to balance cost against robustness; reducing the critical threshold as much as possible is therefore our goal.
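As an illustration, the load and capacity assignment can be computed directly from an adjacency dictionary. This is a minimal sketch assuming the degree-based initial-load form and tolerance-based capacity rule described above:

```python
def initial_loads(adj, theta):
    """Initial load L_i = (k_i * sum of neighbour degrees) ** theta
    (assumed form, per the degree-based definition in the text)."""
    deg = {i: len(nbrs) for i, nbrs in adj.items()}
    return {i: (deg[i] * sum(deg[m] for m in adj[i])) ** theta for i in adj}

def capacities(load, beta):
    """Capacity C_i = (1 + beta) * L_i with tolerance parameter beta >= 0."""
    return {i: (1.0 + beta) * li for i, li in load.items()}
```

For example, in a triangle every node has degree two and neighbour-degree sum four, so with $\theta = 1$ every initial load is 8 and, with $\beta = 0.5$, every capacity is 12.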

Considering that the clustering coefficient plays a negative role in the propagation of cascading failures [21–24] and that initial loads reflect the load processing ability to some extent [8–14, 17, 18], we make our redistribution strategy as follows:

$$\Delta L_{ij} = L_i \cdot \frac{f(L_j)\, g(c_j)}{\sum_{m \in \Gamma_i'} f(L_m)\, g(c_m)},$$

where $\Gamma_i'$ denotes the set of intact neighbours of node $i$ and node $j$ is an element of this set. The term $c_j$ denotes the clustering coefficient [20] of node $j$, defined as

$$c_j = \frac{2 E_j}{k_j (k_j - 1)},$$

where $E_j$ denotes the number of links among node $j$'s neighbours and $k_j$ is the degree of node $j$. The function $f$ is proportional to the initial loads, and in this paper we adopt $f(L_j) = L_j$ [12–14]. The function $g$ characterizes the negative effect of the clustering coefficient [21–24] and is a decreasing function of it: when a node breaks down, its loads are redistributed to its neighbours, and an adjacent node with a higher clustering coefficient receives a smaller share. We here adopt a simple exponential form, $g(c_j) = e^{-\alpha c_j}$, a decreasing function of the clustering coefficient. A more complicated form of $g$ could be applied, but it would add little value in characterizing the effect of clustering while increasing the computational complexity; the results and conclusions of our research are not tied to a specific functional form. When node $i$ breaks down, it allocates its load to its intact neighbours in the proportions $\Delta L_{ij}/L_i$. After receiving the extra load from node $i$, node $j$ breaks down if its updated load exceeds its capacity ($L_j + \Delta L_{ij} > C_j$); in turn, node $j$ allocates its loads to its intact neighbours according to the same rule. The process stops when the whole network has broken down or no newly broken nodes are generated. The parameter $\alpha$ is tunable: by controlling $\alpha$, we can adjust the proportion of load redistribution and reach the optimal robustness of the network at the lowest cost.
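A sketch of the redistribution rule, assuming the choices stated above, namely $f(L_j) = L_j$ and $g(c_j) = e^{-\alpha c_j}$:

```python
import math

def clustering(adj, j):
    """Clustering coefficient c_j = 2 * E_j / (k_j * (k_j - 1))."""
    nbrs = list(adj[j])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # E_j: number of links among node j's neighbours
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if nbrs[b] in adj[nbrs[a]])
    return 2.0 * links / (k * (k - 1))

def redistribution(adj, load, broken, i, alpha):
    """Load received by each intact neighbour j of broken node i,
    proportional to f(L_j) * g(c_j) = L_j * exp(-alpha * c_j)."""
    intact = [j for j in adj[i] if j not in broken]
    w = {j: load[j] * math.exp(-alpha * clustering(adj, j)) for j in intact}
    total = sum(w.values())
    return {j: load[i] * w[j] / total for j in intact}
```

With $\alpha = 0$ the rule reduces to the purely load-based proportion of [12–14]; for $\alpha > 0$, neighbours with lower clustering coefficients receive larger shares, as intended.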

#### 3. Simulations

In this section, we first investigate the relation between the tunable parameter and the critical threshold on ER random graph networks [25], BA scale-free networks [26], WS small-world networks [20], the North American power grid, and the autonomous systems (AS) subnet topology. The average degrees of the artificial networks are four, six, eight, and ten, respectively. For each average degree, fifty networks are generated, the simulations are run on each network, and the averaged results are reported. Relevant parameters of the networks are listed in Table 1.
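The critical threshold can be estimated numerically by sweeping the tolerance parameter and recording the smallest value for which the attack breaks only the attacked node. The sketch below uses a stdlib ER-style generator and, for brevity, degree-based loads with equal-split redistribution rather than the full clustering-aware rule; both simplifications are illustrative assumptions:

```python
import random

def er_graph(n, p, rng):
    """Erdos-Renyi-style random graph as an adjacency dictionary."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def collapse_fraction(adj, beta):
    """Fraction of nodes broken after attacking the highest-degree node.
    Degree-based loads and equal-split redistribution stand in for the
    full clustering-aware rule."""
    load = {i: float(len(adj[i])) for i in adj}
    cap = {i: (1.0 + beta) * load[i] for i in adj}
    seed = max(adj, key=lambda i: len(adj[i]))
    broken, queue = {seed}, [seed]
    while queue:
        i = queue.pop()
        intact = [j for j in adj[i] if j not in broken]
        for j in intact:
            load[j] += load[i] / len(intact)
            if load[j] > cap[j]:
                broken.add(j)
                queue.append(j)
    return len(broken) / len(adj)

def critical_threshold(adj, betas):
    """Smallest beta in the sweep for which only the attacked node breaks."""
    for beta in sorted(betas):
        if collapse_fraction(adj, beta) <= 1.0 / len(adj):
            return beta
    return None
```

Averaging the estimated threshold over many graph realizations, as the text describes, smooths out the dependence on individual topologies.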