Abstract

In network security, the question of how to use efficient response methods against cascading failures of complex networks is very important. In this paper, concerned with the highest-load attack (HL) and random attack (RA) on one edge, we define five kinds of weighting strategies to assign the external resources for recovering the edges from cascading failures in heterogeneous scale-free (SF) networks. The influence of the external resources, the tolerance parameter, and the different weighting strategies on SF networks against cascading failures is investigated carefully. We find that, under HL attack, the fourth weighting method can improve the integral robustness of SF networks, reduce the spreading velocity, and control the outburst of cascading failures more effectively than the other methods. Moreover, the third method is optimal if only the local structure of the SF network is known, and the uniform assignment is the worst. Simulations of a real-world autonomous system in the Internet also support our findings. The results are useful for designing efficient response strategies against emergent accidents and for controlling cascading failures in real-world networks.

1. Introduction

The robustness properties of complex networks subject to either random breakdown or intentional attacks have attracted considerable interest [1, 2], owing to events such as the blackouts in US power grids [3, 4], the large-scale congestion in the Internet [5], and the electrical blackout in Italy [6]. These accidents have threatened network safety and resulted in enormous economic losses.

As a result, many issues have been investigated carefully, including the robustness of the topological structure of networks [7–11], the description of cascading phenomena and transitions [12], protection strategies against cascades [13–18], the cost of attack and defense [19, 20], and reliability metrics of networks [21, 22]. In addition, the vulnerability of real-world networks has become an important topic in the design of engineering safety [23, 24]. Cascading failures in power systems [25, 26] and attacks in computer networks [27] have attracted further consideration. Some studies focus on the stability analysis of uncertain systems [28–30] and the analysis of cyberphysical networking systems [31]. In particular, robustness and cascading failures in interdependent networks [32–35] have become a hot topic in the past few years. Traffic bounds [36], traffic delay [37], and the control of systems with heavy inputs and delays [38, 39] have also been considered carefully.

Cascading failures [35, 40], which originate very locally but often result in a global collapse, have become one of the hottest topics in network safety. On the one hand, by characterizing the load on nodes, a considerable number of cascading models under node-based attacks have been presented. The conditions for a global cascade have been explored [40, 41], where every node is assumed to have the same capacity [41]. The influence of node removal on reducing the efficiency of networks has been investigated [42]. The cascading failures of the North American power grid under the loss of nodes [43] and the cascading failures induced by flux fluctuations [44] have also been probed. On the other hand, cascading dynamics induced by edge-based attacks have also been studied. These works focus on a cascading model that assigns load to edges and adopts a local load redistribution rule [45, 46] and a model in which overloaded edges break down with some probability [47]. The size of the cascade and the cost of investment under the removal of targeted edges [48] and the cascade obtained by adopting Ohm's and Kirchhoff's conservation laws [49] have also been examined.

However, once cascading failures emerge, the important question concerns the efficient response to disasters. In real-world networks, there always exists some emergency mechanism or "recovery mechanism" that can be regarded as external resources (e.g., manpower, number of vehicles) entering the network, which recover the overloaded components to the normal state. For example, police can deal with emergent accidents or chaos on roads, and technical experts can handle breakdowns and repair the damaged components in technical networks, such as the electric power system and the Internet. These accidents or breakdowns can be caused by natural events (earthquakes, floods, or extreme weather) or by intentional attacks. Such a recovery mechanism can effectively counteract the overload and relieve the stress, which could move the edges or nodes from "overloaded" to "congested" (the intermediate state) and perhaps back to "normal." Yet this recovery mechanism has not been considered in previous works. Therefore, it is important to investigate the influence of the recovery mechanism on increasing the robustness of networks against cascading failures, especially for network safety. We argue that probing this question will give us important implications for using efficient strategies to deal with disasters happening in real-world networks.

In this paper, induced by the highest-load attack (HL) and random attack (RA) on one edge, we study the cascading dynamics of heterogeneous scale-free (SF) networks with a recovery mechanism that is represented by external resources entering the SF network. Our model defines four kinds of weighting strategies to assign the external resources to the edges for recovering the network from cascading failures. The influence of the external resources, the tolerance parameter, and the different weighting strategies on improving the robustness against cascading failures in SF networks is investigated. We find that, firstly, under intentional attack, the fourth weighting method can more effectively decrease the number of avalanched edges, reduce the spreading speed of cascading failures, and control the outburst of cascading failures in SF networks than the other methods. Secondly, as the most efficient strategy under intentional attack, the fourth weighting method needs to compute the betweenness centrality of nodes, which implies that the topological structure of the SF network must be known. Therefore, the third weighting method will be optimal if only the local structure of the network (namely, the node degrees) is known. On the other hand, as an example of real-world networks, the simulation of an autonomous system in the Internet with scale-free characteristics shows the same results as the SF network model. This means that the simulation of real-world networks supports our findings.

The rest of this paper is organized as follows. Section 2 develops the model of cascading dynamics with a recovery mechanism under edge-based attack, in which the external resource is assigned to the links according to the weight of the links in the SF network. In Section 3, we describe four kinds of weighting strategies to measure the weight of the links in SF networks. In Section 4, we compare the influence of the four kinds of weighting strategies on the robustness of SF networks against cascading failures and analyze the results of our simulations. Section 5 summarizes the most important findings and offers directions for future research.

2. The Cascading Dynamics with Recovery Mechanism

In this section, we focus on the development of the cascading model on a weighted scale-free network subject to random and intentional attacks on one edge.

Since many real-world networks have been observed to have a typical power-law degree distribution $P(k)\sim k^{-\gamma}$ (where $\gamma$ is the scale exponent), the vulnerability and robustness of such scale-free (SF) networks under attacks have been an important problem in studying the cascading failures of complex networks [10, 40, 45–47, 50].

Therefore, in this paper, we focus on the cascading dynamics of the Barabási–Albert scale-free network model generated according to the rules of growth and preferential attachment [50]. On the other hand, the large-scale congestion in the Internet has drawn attention to the robustness of the autonomous system (AS) [5]. Therefore, as an example of real-world networks, considering that the autonomous-system graph of routers comprising the Internet, extracted from BGP (Border Gateway Protocol) logs, has been observed to show a power-law degree distribution [51], we also study the autonomous system denoted AS1470, which has 1470 nodes, 3997 edges, and mean degree $\langle k\rangle = 2\times 3997/1470 \approx 5.44$. Here, we define the adjacency matrix of the network as $A=(a_{ij})$, where $a_{ij}=1$ if node $i$ links to node $j$ and $a_{ij}=0$ otherwise. We denote by $w_{ij}$ the weight of the edge $e_{ij}$ in the network.
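As an illustration only, the following sketch (Python with the networkx library) builds a Barabási–Albert scale-free network and its adjacency matrix as described above; the values of N and m are our assumptions, not parameters reported in the paper.

```python
# Minimal sketch of the network setup described above (assumed parameters).
import networkx as nx

N, m = 1470, 2                               # illustrative size and attachment parameter
G = nx.barabasi_albert_graph(N, m, seed=1)   # growth + preferential attachment

A = nx.to_numpy_array(G)                     # adjacency matrix: A[i][j] = 1 if i links to j, else 0
mean_degree = 2 * G.number_of_edges() / G.number_of_nodes()
print(G.number_of_edges(), round(mean_degree, 2))
```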

Generally, the development of a cascading model is based on the following three factors: the definition of the original load, the correlation between the original load and the capacity, and the dynamical redistribution of load after an attack. The cascading dynamics in this paper is modeled accordingly, as follows.

(1) The original load on the edge $e_{ij}$: in many physical networks, the physical flows (data packets or energy) are forwarded along the edges according to the shortest-path routing strategy. For a given pair of nodes, the flows are transmitted along the shortest paths connecting them, and some of these shortest paths may pass through the edge $e_{ij}$. Therefore, it is natural to define the total number of shortest paths between all pairs of nodes that pass through $e_{ij}$ as the load on $e_{ij}$. Accordingly, for our weighted SF network, the load $L_{ij}(t)$ on the edge $e_{ij}$ at time $t$ is defined as the number of shortest paths through it, and the original load on $e_{ij}$ before the attack is $L_{ij}(0)$.
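A minimal sketch of this load definition, counting for every node pair all shortest paths through each edge (Python with networkx; the routine is our illustration, not code from the paper):

```python
import itertools
import networkx as nx

def initial_edge_load(G):
    """L_ij(0): total number of shortest paths, over all node pairs, that pass through edge (i, j)."""
    load = {tuple(sorted(e)): 0 for e in G.edges()}
    for s, t in itertools.combinations(G.nodes(), 2):
        for path in nx.all_shortest_paths(G, s, t):        # every shortest path between s and t
            for u, v in zip(path, path[1:]):                # walk the edges along this path
                load[tuple(sorted((u, v)))] += 1
    return load

# usage on a small network (the full-size model makes this O(N^2) loop slow)
G = nx.barabasi_albert_graph(200, 2, seed=1)
L0 = initial_edge_load(G)
```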

(2) The capacity of the edge $e_{ij}$: we suppose that the capacity $C_{ij}$ is the maximum load that an edge can handle and is proportional to the initial load $L_{ij}(0)$; that is, $C_{ij}=(1+\alpha)L_{ij}(0)$, where $\alpha$ ($\alpha\ge 0$) is the tolerance parameter. A higher $\alpha$ means that the edge has a higher capacity and a higher ability to withstand failures. This is also rational in the design of real-world networks, including power grids and the Internet, because the capacity of the links in these networks is always limited by cost.
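Under the $(1+\alpha)$ form reconstructed above, the capacity assignment is a one-line helper; it assumes the load dictionary produced by the previous sketch.

```python
def edge_capacity(initial_load, alpha):
    """C_ij = (1 + alpha) * L_ij(0), with alpha >= 0 the tolerance parameter."""
    return {e: (1 + alpha) * load for e, load in initial_load.items()}

# e.g., capacity = edge_capacity(L0, alpha=0.05)   # alpha value chosen only for illustration
```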

In most of the previous models, there were only two states for a node or edge, normal or overloaded, and a node or edge would break down (i.e., become overloaded) once the load on it exceeded its capacity. However, in real-world networks, there exists some emergency mechanism that handles the congested state, relieves the pressure, and thus reduces the probability of overload. For example, in transportation networks, external resources (such as manpower or vehicles) arrive to deal with emergent events and recover a road from the "congested or overloaded" state to the "normal" state. Therefore, we assign a recovery rate $R_{ij}$ to every edge $e_{ij}$ and assume that a threshold, the upper-bound load on $e_{ij}$ in the normal state, separates the normal and congested states. In (2) and (3) we define $R_{ij}$ in terms of the normalized weight $W_{ij}$ of the edge $e_{ij}$ and an adjustable parameter $\theta$ ($\theta\ge 0$) that represents the external resources entering the network. When developing (2) and (3), we required the following.

(i) The external resources should enter the network according to the importance of the edge, measured by the normalized weight $W_{ij}$, and the recovery rate $R_{ij}$ should increase monotonically with $W_{ij}$. For a given $\theta$, the larger $W_{ij}$ is, the more external resources are assigned to the edge $e_{ij}$, and the closer the recovery rate gets to its upper bound.

(ii) The parameter $\theta$ controls the recovery rate $R_{ij}$. When $\theta=0$, there are no external resources and $R_{ij}$ equals the initial recovery rate.

(iii) $R_{ij}$ increases with $\theta$: the larger $\theta$ is, the higher $R_{ij}$ is and the closer it comes to its upper bound. This implies that, when more of the external resources entering the network are assigned to the edges, the links recover more easily from the abnormal to the normal state. Namely, the external resources have only a positive effect on the edge $e_{ij}$.

Such a definition is rational in actual situations and highlights the protection of the important edges. Of course, other functions satisfying these conditions could be chosen for (2) and (3).
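Since the exact functional forms of (2) and (3) are not reproduced here, the following sketch only assumes one saturating form that satisfies conditions (i)–(iii); the paper's own definition may differ, and the default values r0 and r_max are illustrative.

```python
import math

def recovery_rate(norm_weight, theta, r0=0.1, r_max=0.9):
    """Assumed saturating form satisfying (i)-(iii):
    - increases monotonically with the normalized edge weight norm_weight,
    - equals the initial rate r0 when theta = 0 (no external resources),
    - approaches the upper bound r_max as theta grows."""
    return r_max - (r_max - r0) * math.exp(-theta * norm_weight)

# e.g., recovery_rate(0.2, theta=0.0) == 0.1, and it rises toward 0.9 as theta increases
```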

(3) The redistribution of load: when a few edges break down, we denote the temporary load on the edge $e_{ij}$ at time $t$, after the redistribution of load, by $L_{ij}'(t)$. The edge $e_{ij}$ then receives a number of external resources according to (2) once the load on it exceeds the threshold; that is, the recovery rate acts according to the degree to which the threshold is exceeded. Finally, the true load on $e_{ij}$ becomes the value given by (4). In fact, (4) indicates the three states of the edge $e_{ij}$: normal (if the final load does not exceed the threshold), congestion (if it exceeds the threshold but not the capacity $C_{ij}$), and overloaded (if it exceeds $C_{ij}$). Congestion means that the edge handles the load busily but still works; overloaded implies that the edge cannot handle such a high load even with the recovery mechanism and, as a result, the edge fails. Thus, a larger recovery rate leads to a stronger ability to handle the load on the edge, and the network ends up with stronger robustness, which is consistent with the actual situations in many real-world networks. Generally, the more important the edges are, the more investment and effort are devoted to them.
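A sketch of the three-state bookkeeping described in this step, under our own assumption (not stated explicitly above) that the recovery rate removes a proportional fraction of the load exceeding the threshold:

```python
def settle_edge(temp_load, threshold, capacity, r):
    """Classify an edge after recovery. Assumption: the recovery rate r absorbs a
    fraction r of the load above the threshold; the paper's Eq. (4) may differ."""
    if temp_load <= threshold:
        final = temp_load                                   # nothing to recover
    else:
        final = threshold + (1.0 - r) * (temp_load - threshold)
    if final <= threshold:
        state = "normal"
    elif final <= capacity:
        state = "congestion"                                # busy but still working
    else:
        state = "overloaded"                                # the edge fails
    return final, state
```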

In (2), the external resource is assigned to the edge $e_{ij}$ according to the weight of $e_{ij}$, so (2) highlights the protection of the important edges in the SF network. However, the external resources are limited, and a higher $\theta$ represents a higher cost of protection. Naturally, we need to measure how important an edge is in order to find an efficient response strategy against disasters. This is discussed in the following sections.

3. The Weighting Strategy

In the characterization of networks, centrality is significant for measuring the importance of an element (node or edge) in studying cascading failures, since it captures the topological position of an element in the network. In this part, we introduce four kinds of weighting methods to measure the centrality of an edge $e_{ij}$, which is regarded as the weight of $e_{ij}$ and reflects the importance of $e_{ij}$ in the network.

(1) The first weighting strategy: in many real-world networks, the flows are forwarded along the edges according to the shortest-path routing strategy. Thus, the edge betweenness centrality is commonly used to measure the centrality of the edge $e_{ij}$ [52, 53]; it is defined in (5) by counting, over all pairs of nodes $u$ and $v$, the shortest paths between $u$ and $v$ that pass through the edge $e_{ij}$. We then define the weight of the edge $e_{ij}$ in (6) from this edge betweenness centrality.
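For illustration, the standard edge betweenness centrality available in networkx can serve as this weight; whether and how the weights are normalized in (6) is not reproduced here.

```python
import networkx as nx

def weight_w1(G):
    """First strategy: weight each edge by its (unnormalized) edge betweenness centrality."""
    return nx.edge_betweenness_centrality(G, normalized=False)
```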

(2) The second weighting strategy: in real networks, however, the edge centrality is often related to some intrinsic quality of the end nodes of the edge. For example, in traffic networks, the design of highways or airlines depends on the population or the economic conditions (such as GDP) of the cities involved. These intrinsic characteristics can be seen as the quality of a node (city). The lines or roads connected to nodes (cities) with high quality tend to have high edge betweenness centrality, which has not been considered in previous models.

Thus, we define in (7) a novel edge betweenness centrality of $e_{ij}$ that weights each pair of nodes $u$ and $v$ by their intrinsic qualities: the contribution of a pair is determined by the fraction of the shortest paths between $u$ and $v$ that pass through the edge $e_{ij}$ together with the intrinsic quality $q$ of the nodes. (Here we choose the degree of a node as its quality $q$; of course, other rational values can be chosen.) Note that the definition in (7) incorporates the intrinsic characteristics of nodes into the network structure, which better reflects the importance of edges in actual situations. In particular, (7) degenerates into the definition in (5) if every node has a uniform intrinsic quality. We then take the weight of the edge $e_{ij}$ in (8) from this quality-weighted edge betweenness centrality.
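A sketch of one possible reading of (7), assuming each node pair's contribution is the fraction of its shortest paths through the edge multiplied by the product of the pair's qualities (so it reduces to plain edge betweenness when every quality equals 1); the exact combination of qualities in the paper may differ.

```python
import itertools
import networkx as nx

def weight_w2(G):
    """Second strategy (assumed form): quality-weighted edge betweenness with q = node degree."""
    w = {tuple(sorted(e)): 0.0 for e in G.edges()}
    for u, v in itertools.combinations(G.nodes(), 2):
        paths = list(nx.all_shortest_paths(G, u, v))
        quality = G.degree(u) * G.degree(v)                  # intrinsic quality q chosen as degree
        for path in paths:
            for a, b in zip(path, path[1:]):
                w[tuple(sorted((a, b)))] += quality / len(paths)
    return w
```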

(3) The third weighting strategy: another centrality measure of the edge $e_{ij}$ is the product of the degrees of its end nodes $i$ and $j$, which has been used to measure the weight of the edge [45, 46]; that is, $w_{ij}=k_i k_j$, where $k_i$ and $k_j$ are the degrees of nodes $i$ and $j$, respectively.
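A direct implementation of this degree-product weight (any tunable exponent used in [45, 46] is omitted here):

```python
def weight_w3(G):
    """Third strategy: weight each edge by the product of its end-node degrees, k_i * k_j."""
    return {(i, j): G.degree(i) * G.degree(j) for i, j in G.edges()}
```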

(4) The fourth weighting strategy: a link is also important when its end nodes are important; this accords with real-world networks [45–47]. Moreover, the importance of an end node of a link can be measured by the node betweenness centrality [51, 53], defined in (10) by counting, over all pairs of nodes $u$ and $v$, the shortest paths between $u$ and $v$ that pass through the node. This motivates another weight measure for an edge: we assume that the weight of the edge $e_{ij}$ depends on the product of the betweenness centralities of its end nodes $i$ and $j$, as defined in (11).
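A sketch of this weight using networkx's node betweenness centrality:

```python
import networkx as nx

def weight_w4(G):
    """Fourth strategy: weight each edge by the product of its end nodes' betweenness centralities."""
    bc = nx.betweenness_centrality(G, normalized=False)
    return {(i, j): bc[i] * bc[j] for i, j in G.edges()}
```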

(5) The uniform strategy: finally, we note that the SF network considered becomes an unweighted network if every edge has the same weight (i.e., all weights are equal). In that case, every edge receives the same amount of external resources according to (2). We call this the uniform assignment strategy.

One can now see that the external resources $\theta$, the different weighting methods, and the tolerance parameter $\alpha$ can have a great influence on the robustness of SF networks subject to attacks on edges. This is discussed in the following section.

4. The Simulation and Analysis

In this paper, we mainly consider two kinds of attacks on one edge: (1) the highest-load attack (HL), in which we remove the edge with the highest initial load, and (2) the random attack (RA), in which we randomly choose one edge and remove it. The attack originates from the removal of one edge and leads to the redistribution of load onto other edges; some of them then fail as their load exceeds their capacity. This process is repeated until no edge fails, at which point the cascade can be considered complete. The cascading process with the recovery mechanism under edge-based attacks is described in Figure 1. In the following, we reveal the effect of the recovery mechanism on the network robustness against cascading failures from three aspects: improving the integral robustness, controlling the spreading velocity of cascading failures, and controlling the burst of cascading failures.
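The following self-contained sketch ties the previous pieces together into the cascade loop of Figure 1. It bakes in several assumptions of ours: the load is recomputed from shortest paths on the surviving subgraph at every step, the normal-state threshold is a fixed fraction gamma of the capacity, and recovery follows the saturating form and excess-absorption rule sketched earlier; none of these details are fixed by the text above.

```python
import itertools, math, random
import networkx as nx

def edge_load(G):
    """Number of shortest paths through each edge on the current (possibly damaged) network."""
    load = {tuple(sorted(e)): 0 for e in G.edges()}
    for s, t in itertools.combinations(G.nodes(), 2):
        if nx.has_path(G, s, t):
            for p in nx.all_shortest_paths(G, s, t):
                for u, v in zip(p, p[1:]):
                    load[tuple(sorted((u, v)))] += 1
    return load

def cascade(G, attack="HL", alpha=0.05, theta=1.0, gamma=0.8, r0=0.1, r_max=0.9, weights=None):
    """Run one edge attack and return the number of edges that fail at each time step."""
    G = G.copy()
    L0 = edge_load(G)
    cap = {e: (1 + alpha) * L0[e] for e in L0}                 # capacity, Eq. (1)-style
    thr = {e: gamma * cap[e] for e in cap}                     # normal-state threshold (assumed form)
    if weights is None:
        weights = {e: 1.0 for e in L0}                         # uniform strategy by default
    weights = {tuple(sorted(e)): v for e, v in weights.items()}
    total = sum(weights.values())
    rate = {e: r_max - (r_max - r0) * math.exp(-theta * weights[e] / total) for e in L0}
    target = max(L0, key=L0.get) if attack == "HL" else random.choice(list(L0))
    G.remove_edge(*target)                                     # HL or RA attack on one edge
    failed_per_step = []
    while True:
        load = edge_load(G)
        broken = []
        for e, l in load.items():
            if l <= thr[e]:
                continue                                       # normal state
            final = thr[e] + (1 - rate[e]) * (l - thr[e])      # recovery absorbs part of the excess
            if final > cap[e]:
                broken.append(e)                               # overloaded: the edge fails
        if not broken:
            break                                              # cascade is complete
        G.remove_edges_from(broken)
        failed_per_step.append(len(broken))
    return failed_per_step

# usage: failures = cascade(nx.barabasi_albert_graph(200, 2, seed=1), attack="HL", theta=2.0)
```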

4.1. Improving the Integral Robustness against Cascading Failures

In the first part of this section, we focus on the effect of the recovery mechanism on improving the robustness of the heterogeneous scale-free (SF) network against cascading failures, quantified by the avalanche size (AS) after the cascade, which is defined in (12) as the total number of avalanched edges over all time steps of the attack, normalized by the total number of edges in the initial network. From (12), we can see that the metric AS can be regarded as a function of the external resources $\theta$ and the tolerance parameter $\alpha$, and it quantifies the integral robustness of the structure against cascading failures.
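Under the normalization assumed above (total avalanched edges over the initial edge count), the metric can be computed as follows:

```python
def avalanche_size(failed_per_step, initial_edge_count):
    """AS: avalanched edges summed over all time steps, normalized by the initial number of edges."""
    return sum(failed_per_step) / initial_edge_count

# e.g., AS = avalanche_size(failures, G.number_of_edges()) using the cascade sketch above
```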

From Figures 2(a) and 2(b), it is clear that, for the SF network model and the autonomous system network AS1470 subject to HL attack, assigning the external resources to the edges according to the fourth weighting method decreases the avalanche size (AS), and thus improves the integrity of the SF network, better than the other strategies. The effect is especially obvious for a smaller tolerance parameter and more external resources. For example, for the parameter values marked in Figure 2(a), the fourth weighting method decreases the avalanche size from about 0.71 to 0.3 (see the arrow in Figure 2(a)). The simulations of the real-world network (AS1470) confirm these findings (see Figures 3(a) and 3(b)). Moreover, as shown in Figures 2(b) and 3(b), the third weighting method is suboptimal and the uniform method is the worst. Although the fourth weighting method is optimal, it depends on the betweenness centrality of the nodes, which, from (11), requires knowing the whole topological structure of the SF network. This implies that the third weighting strategy is suggested if only the local structure of the network, such as the node degrees, is known.

On the other hand, as shown in Figures 2(c), 2(d), 3(c), and 3(d), under RA attack, the difference among the four weighting strategies is not clear when there are fewer external resources. With sufficient external resources, however, it seems that the second weighting strategy and the uniform assignment strategy are optimal.

4.2. Controlling the Spreading Velocity of Cascading Failures

In the second part of this section, to further measure how efficient the different weighting strategies are in responding to cascading failures in the SF network, we explore the spreading velocity of cascading failures, computed as the total number of avalanched edges divided by the number of time steps over which the cascade propagates in the network (see Figure 1).
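Under this reading (avalanched edges per propagation time step), a small helper:

```python
def spreading_velocity(failed_per_step):
    """V: total number of avalanched edges divided by the number of time steps the cascade lasted."""
    steps = max(len(failed_per_step), 1)                      # avoid division by zero if nothing failed
    return sum(failed_per_step) / steps
```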

As shown in Figures 4(a), 4(b), 5(a), and 5(b), under HL attack, the fourth weighting method obviously reduces the spreading velocity of cascading failures in both the SF network model and the AS1470 network, regardless of the quantity of external resources. Moreover, the third weighting method is suboptimal when there are more resources. This reveals that, under HL attack, external resources assigned to edges according to the fourth method can control the spreading speed of cascading failures in heterogeneous scale-free networks most efficiently.

4.3. Controlling the Process of Cascading Failures

The previous two parts of this section have shown the effect of the different weighting methods on improving the robustness of networks against cascading failures. However, another question, whether the weighting methods can control the outbreak of cascading failures, should also be considered. In this part, we focus on controlling the process of cascading failures in networks and plot the avalanche size at each time step under HL attack to explore this question.

As shown in Figures 6 and 7, under HL attack with different tolerance parameters (including 0.04, 0.06, and 0.08), we can see that the fourth weighting method controls the outburst of cascading failures in the SF network model more effectively than the other methods. Especially with more external resources, it reduces the peak of cascading failures more obviously (see Figure 7). Moreover, the simulations of the autonomous system AS1470 show similar findings (see Figures 8 and 9).

5. Conclusion

In this paper, we have studied the cascading dynamics of heterogeneous scale-free (SF) networks with a recovery mechanism subject to edge-based attacks. The recovery mechanism is represented by external resources that are distributed to the edges according to five kinds of weighting strategies: the four edge-weighting strategies of Section 3 and the uniform strategy. We mainly investigated the influence of the external resources and the different weighting strategies on the cascading dynamics of SF networks subject to intentional attack and random breakdown. On the whole, the main contributions of this paper are as follows.

(1) Under intentional attack, the fourth weighting strategy is the most efficient response strategy against cascading failures in SF networks; it obviously improves the integral robustness while simultaneously reducing the spreading speed and controlling the outbreak of cascading failures. In particular, the more external resources there are, the more efficient it is. The uniform assignment strategy is the worst.

(2) Although the fourth method is optimal, it requires computing the node betweenness centrality, which depends on the whole structure of the network. Therefore, the third method will be optimal if only the local structure of the SF network (e.g., the node degrees) is known. The simulations of the autonomous system network confirm these results. However, recent research [54] has shown that the node betweenness centrality can be approximately estimated by using local information of nodes in order to reduce the computational complexity in large networks. This implies that the fourth weighting method defined in this paper is of great significance for the protection of actual scale-free networks.

(3) Under random breakdown, although the difference among the five weighting methods is not clear in terms of the protection against the cascading effect, the uniform assignment strategy decreases the spreading velocity of failures in SF networks better than the other strategies.

These results remind us to take different actions in handling and controlling emergent disasters in heterogeneous SF networks; here we have highlighted the protection of the important links. Our approach contributes to understanding the dynamics of disaster spreading and provides some possible countermeasures to control disasters and, finally, to repair the damaged system.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant nos. 61202362, 61121061, 61262057, 61372191, 11301302, and 91124002), State Key Development Program of Basic Research of China 973 (Grant nos. 2013CB329601 and 2011CB302600), National Key Technology Program (Grant no. 2012BAH38B-04), China Postdoctoral Science Foundation Program (2013M542560, BS2013SF009, 2012m520114, and 2013T60037), and 863 programs (Grant nos. 2012AA01A401, 2010AA-012505, and 2011AA010702).