Abstract

Usually, the existence of a complex network is considered an advantageous feature, and efforts are made to increase its robustness against an attack. However, there also exist harmful and/or malicious networks: social networks spreading hoaxes, corruption, phishing, extremist ideology, or terrorist support; computer networks spreading computer viruses or DDoS attack software; and even biological networks of carriers or transport hubs spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case scenario test for the robustness of a useful network. A common measure of the robustness of a network is its level of disintegration after the removal of a fraction of nodes. This robustness can be calculated as the ratio of the number of nodes in the largest remaining network component to the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. The efficiency of classical attack approaches is compared to that of the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks, as well as on real-world networks. Our attack strategy results in faster disintegration, which is counterbalanced by its slightly increased computational demands.

1. Introduction

Complex networks have kept attracting increasing attention over the past couple of decades. This can be illustrated by the number of documents in Scopus search results for this term in the title, abstract, or keywords: 605 documents in 1986, 2372 in 1996, 8086 in 2006, and 20256 in 2016 [1]. Such networks are being discovered in many areas of nature as well as society, for example, in biology as neural networks, networks of protein reactions, or plant immune signaling networks, in transport, in economics, in sociology as citation or rumor spreading networks, in computer science as the Internet, and in physics as power grids [2–6]. Mostly, these networks are considered a positive thing. A number of studies are devoted to measuring the robustness of such networks against a malicious attack or against random degradation or failure causing the deletion of a node or a connection. Such measures are used to increase the security of complex systems and, where possible (as in computer networks), to increase robustness, for example, by rewiring optimization [7–13].

Only in recent years has more attention been given to malicious networks. Under this term one can understand terrorist networks; fake-news spreading networks; malnets or botnets used in DDoS attacks or for spreading worms and viruses; dark networks involved in various criminal activities like illegal arms sales or child pornography; and so forth [14–20]. Attack strategies on such harmful complex networks (i.e., node deletion and occasionally also edge deletion) are studied in [21–23]. For example, in terrorist networks, a sequence of individuals should be identified whose arrest will result in the maximum breakdown of communication between the remaining individuals in the network. A similar approach should work for disabling Internet access to computers used for illegal activities. Apart from networks where the aim of the involved individuals is malicious, there also exist networks where the harm is unintentional but which should nevertheless be quickly disabled. These involve the spread of disease, where the goal is to design vaccination strategies that restrain the spread of pandemic diseases when mass vaccination is an expensive process and only a specific number of people, modeled as nodes of a graph, can be vaccinated. Another example is cascading failures (blackouts) of an electric power transmission network, where the goal is to prevent a total breakdown of the power network by inhibiting some power transmission points or lines. Related approaches involve, for example, immunization of disease carriers [24, 25] or critical node detection [26–28], where the order of deletion is not taken into account and only the optimum subset of nodes selected for deletion is important. Lately, a more computationally demanding approach to network disintegration and attack, using stochastic and evolutionary optimization, has been applied to smaller-scale networks, providing better results than traditional approaches [29–32].

A topic related to edge-deleting attacks is the community detection problem, because a network can be most easily dismantled by removing the edges between communities [34]. Therefore, a fast community detection algorithm can be used for an edge attack on a network, and vice versa, approaches useful in edge-removing attacks can be used in community detection.

Practically the same approach to the network disintegration measure as in network attacks can be found in Morone et al. [35–37]. They use the collective influence algorithm to find the minimal set of influencers. Their algorithm has complexity $O(N \log N)$ in [35, 37], where $N$ is the number of nodes, followed by an algorithm with improved results but worse complexity in [37]. A number of related algorithms were inspired by this approach; a good survey of the topic is in [38].
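For illustration, the collective influence of node $i$ at radius $\ell$ is defined in [35] as $\mathrm{CI}_\ell(i) = (k_i - 1)\sum_{j \in \partial B(i,\ell)} (k_j - 1)$, where $\partial B(i,\ell)$ is the set of nodes at shortest-path distance exactly $\ell$ from $i$. A minimal Python sketch follows (using networkx; the brute-force ball search shown here does not attain the $O(N \log N)$ complexity of the original algorithm, and the function names are ours):

```python
import networkx as nx

def collective_influence(G, node, radius=2):
    """CI_l(i) = (k_i - 1) * sum of (k_j - 1) over the frontier
    of the ball of radius l around node i."""
    dist = nx.single_source_shortest_path_length(G, node, cutoff=radius)
    frontier = [j for j, d in dist.items() if d == radius]
    return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

def ci_attack(G, radius=2):
    """Greedy CI attack: repeatedly delete the node with the highest CI."""
    H, order = G.copy(), []
    while H.number_of_edges() > 0:
        target = max(H.nodes, key=lambda v: collective_influence(H, v, radius))
        order.append(target)
        H.remove_node(target)
    return order
```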

Lately, the Hirsch index and its generalization [39], leading eventually to the coreness value, were proposed as a good indicator of the influence of nodes. This assumption was further critically analyzed in [40].

Attack strategies are important not only as a countermeasure against harmful networks but also for potential improvements in useful engineering networks. It is popular to study the robustness of networks, but the network disruptions are usually chosen either at random or by very simple targeting methods. In engineering, it is very important to know the worst-case scenario for vulnerability analysis, which our paper addresses.

In attack algorithms, apart from the quality of results, which is measured by the level of network disintegration, the complexity of the algorithms also matters. While the approach of Morone et al. [35, 36] can handle hundreds of millions of nodes within hours of computation, the tabu search approach [29] can handle a few hundred nodes in the same time, though it should provide better results.

In this paper, we describe new heuristics for attack strategies, merge them into a newly designed combination of new and classical attack heuristics, and investigate the effectiveness of this new approach both on model networks and on a couple of real-world collaboration networks. Our approach should find its niche on the Pareto optimal front somewhere between collective influence approaches [35, 36] and stochastic optimization approaches [29], both in terms of the quality of results and in computational time.

2. Materials and Methods

2.1. Networks, Their Types, Models, and Measures of Robustness

In the beginning of the study of networks, a model network was considered simply a large random graph, typically rather sparse (i.e., its number of edges is much smaller than that of a complete graph). Random ER (Erdős-Rényi) graphs [41] start from unconnected nodes, which are then connected with a uniform probability. They have a Gaussian bell-shaped distribution of degree (the number of connections to other nodes), and pairs of nodes are joined by short paths on average; that is, almost any node can be reached from any other node by traversing a relatively small number of edges. Neighbors of any node are not likely to be mutually connected (a low number of triangles corresponds to a low clustering coefficient).

It has been discovered that in most real-world networks the neighbors of a node are likely to be connected to each other, while the property of short average paths is still satisfied. More exactly, a small-world network is characterized by an average distance growing proportionally to the logarithm of the number of nodes in the network. The first such models, by Watts and Strogatz (1998) [42], were created by rewiring (with a certain probability) connections between the nodes of a regular graph. However, such graphs had a very narrow degree distribution, while most of the discovered small-world networks had a so-called "long tail" distribution, with a few nodes of very high degree. A new type of network, a scale-free one, where zooming in on any part of the distribution does not change its shape, has a degree distribution where the fraction of nodes of degree $k$ asymptotically follows $P(k) \sim k^{-\gamma}$, with the parameter $\gamma$ usually in the range $2 < \gamma < 3$.

A typical example of a scale-free network is the Barabási-Albert model, which starts with a few nodes (e.g., a triangle); one node is added at a time and connected to a given number of nodes already existing in the network. The new edges are attached to nodes selected pseudorandomly with a probability of attachment proportional to their current degree, the so-called linear preferential attachment $\Pi(k_i) = k_i / \sum_j k_j$.
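Both model types are available in standard libraries; a minimal sketch in Python using the networkx package (the parameter values are illustrative only, chosen so that both graphs have a comparable average degree):

```python
import networkx as nx

N = 500                                       # number of nodes (illustrative)
er = nx.gnp_random_graph(N, p=6 / (N - 1))    # ER graph, expected degree ~ 6
ba = nx.barabasi_albert_graph(N, m=3)         # BA model, 3 edges per new node

print(er)   # e.g. "Graph with 500 nodes and 1480 edges"
print(ba)   # "Graph with 500 nodes and 1491 edges"
```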

The simplest type of attack on a network is to delete the node or nodes, together with their connections, whose removal will cause the greatest damage. However, the number of possible selections of a set of $q$ nodes to be deleted from all $N$ network nodes is the binomial coefficient $\binom{N}{q}$, leading to a combinatorial explosion for larger values of $N$ and $q$.

Network damage can be established by various measures. One possible measure is the probability of the presence of a giant component, for which the Molloy-Reed criterion [43] defines the threshold $\langle k^2 \rangle / \langle k \rangle > 2$, where $\langle k \rangle$ and $\langle k^2 \rangle$ are the averages of the degrees and of the squared degrees of the nodes, respectively. However, this criterion, while easily calculable, is derived for a random graph with randomly deleted nodes, not for nodes deleted by heuristic methods, which target nodes pseudorandomly or deterministically. Another measure is the average inverse geodesic (the average inverse of the shortest path length between all pairs of nodes) [44]. This measure is suitable for a slightly damaged network which is still fully connected as one component. We are interested in more substantial destruction of the network. Therefore, we use in our paper a measure of network damage, the Unique Robustness Measure ($R$-index) [45, 46], defined as

$$R = \frac{1}{N} \sum_{q=1}^{N} s(q), \quad (1)$$

where $N$ is the number of nodes in the network and $s(q)$ is the fraction of nodes in the largest connected component after removing $q$ nodes using a given strategy. The fraction $q/N$ is the current fraction of deleted nodes relative to the total initial number of network nodes. The $R$-index thus encompasses the whole attack process, not just one moment of damage at a current fraction $q/N$.
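A minimal sketch of the $R$-index computation in Python with networkx follows (`attack_order` is the node-deletion sequence produced by any strategy; a lower $R$ means a more effective attack):

```python
import networkx as nx

def r_index(G, attack_order):
    """Unique Robustness Measure R = (1/N) * sum of s(q) over q = 1..N,
    where s(q) is the fraction of nodes in the largest connected
    component after removing q nodes [45, 46]."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for node in attack_order:
        H.remove_node(node)
        if H.number_of_nodes() > 0:
            total += max(len(c) for c in nx.connected_components(H)) / n
    return total / n
```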

2.2. Classical Attack Strategies

Finding the least number of nodes whose removal would split the network into unconnected components is proved in graph theory to be an NP-complete problem. A simplified problem is to find a subset of nodes such that, after their removal, the remaining nodes are isolated. This problem is a reformulation of the node cover of a graph, which is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. Finding a node cover is one of the famous Karp's 21 NP-complete problems [47]. This leads us to the necessity of using heuristics for the network attack.

In the most typical, so-called ID attack, the nodes are deleted in descending order of their degrees [48]. The degrees are calculated and ordered only once, in the original network, which makes this the least computationally demanding of all the attack strategies. The calculation of degree centrality requires $O(E)$ time in a sparse matrix representation, where $E$ is the number of connections. A more effective but slightly more demanding strategy, known as RD, recalculates the order of degrees after each removal of a node [49]. Similar couples of approaches, that is, calculating the sequence of nodes to be deleted all at once from the original network or recalculating this sequence after each node removal, can be applied to all other centrality measures, that is, betweenness, closeness, Katz, and eigenvector centrality [50]. After the RD approach, the second most effective measure generally proved to be the betweenness centrality recalculated after the deletion of each node, RB. Often used is also the betweenness centrality calculated only once for the original network, named IB [50]. The betweenness centrality of a node $v$ is defined as

$$g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}, \quad (2)$$

where $\sigma_{st}$ is the number of shortest paths between nodes $s$ and $t$ and $\sigma_{st}(v)$ is the number of those paths which contain node $v$. Since the test networks do not have weighted connections, Brandes' algorithm [51] requiring $O(NE)$ can be used, where $N$ is the number of nodes.
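Minimal sketches of the ID and RD strategies (Python with networkx; the IB and RB variants are obtained analogously by ranking with `nx.betweenness_centrality` instead of the degree):

```python
import networkx as nx

def id_attack(G):
    """ID: delete nodes in descending order of degree,
    computed once on the original network."""
    return sorted(G.nodes, key=G.degree, reverse=True)

def rd_attack(G):
    """RD: recompute the degrees after each node removal."""
    H, order = G.copy(), []
    while H.number_of_nodes() > 0:
        target = max(H.nodes, key=H.degree)
        order.append(target)
        H.remove_node(target)
    return order
```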

As in many other areas of optimization of NP problems, there exist first attempts to use metaheuristic optimization for the selection of nodes to be deleted. An example of this approach is the use of tabu search [29]. The approach seems to improve the $R$-index of the RD attack by roughly 15 percent, but the expense is enormous, as in most metaheuristics: it requires tens of thousands of evaluations (it uses local optimization and, as the termination criterion, stops after 1000 successive evaluations bring no improvement; whenever an improvement occurs within those evaluations, the count starts all over).

2.3. Branch Removal Strategy

Both the degree and the betweenness criterion work very well most of the time in identifying the nodes whose removal is most likely to break a component apart, so that the largest remaining component is as small as possible. However, for an already sparse component with a large branch, these centrality measures may not always be ideal. As an example, let us have a simple component of a network containing a tree with 10 nodes, where at the end of a linear "network" the last node has four neighbors; see Figure 1. If the attack strategy were guided by maximum degree, the node with degree 4 would be deleted, leaving the largest connected component with six nodes (the "linear part" of the tree above the deleted node). The same node, with the maximum betweenness equal to 21, would be selected by the betweenness-based attack. However, if we instead evaluated each node by the number of nodes by which the largest connected component would be diminished if we deleted it, then the nodes with values equal to five in the last network in Figure 1 would be selected. This would leave the largest connected component with five nodes, a better result than both the degree and the betweenness attack.

How do we arrive at the branch weight evaluation of nodes? Firstly, all nodes with a degree equal to 1 are assigned weight 1, and all nodes with other degrees have weight 0. Then, recursively, (1) the nodes are weighted by the sum of the weights of their neighbors of degree 1, increased by 1 (for the node itself), unless the resulting weight would be more than half of the number of nodes of the component; (2) the nodes of degree one are cut off for the sake of the calculation (so that the degree of their neighbor is decreased by 1, while its weight remains the same).

If the network is a tree with $N$ nodes, the recursive weighting by the sum of neighbors of degree 1 and their cutting off continues until only two nodes remain. When the number of nodes is even, the weights of the two remaining nodes are both $N/2$; for odd $N$, the weights of the two remaining nodes are $(N-1)/2$ and $(N+1)/2$. The other termination criterion of the weighting evaluation is when the evaluated node is part of a cycle. The iterative node evaluation by branch weighting is shown in Figure 2.

After the third iteration in Figure 2, the evaluation directly above the node with weight 5 is stopped (its weight would be more than half of the component size), while the zero-weighted node below the node weighted 3 is weighted by 4, and then in another iteration the node below it is weighted by 5.
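A Python sketch of the branch weighting by iterative leaf pruning follows (a reconstruction of the rule described above, with the cap at half of the component size rounded up, consistent with the examples in Figures 1 and 2; the helper name is ours):

```python
import math
import networkx as nx

def branch_weights(G):
    """Branch weighting by iterative leaf pruning (sketch of Section 2.3).
    w[v] estimates how many nodes would be cut off the component if v
    were deleted; nodes remaining on cycles keep weight 0."""
    H = G.copy()
    w = {v: (1 if H.degree(v) == 1 else 0) for v in H.nodes}
    # weighting stops once it would exceed half of the component size
    cap = {v: math.ceil(len(c) / 2)
           for c in nx.connected_components(G) for v in c}
    leaves = [v for v in H.nodes if H.degree(v) == 1]
    while leaves and H.number_of_nodes() > 2:
        leaf = leaves.pop()
        if not H.has_node(leaf) or H.degree(leaf) != 1:
            continue
        nb = next(iter(H.neighbors(leaf)))                  # the leaf's only neighbor
        new_w = w[nb] + w[leaf] + (1 if w[nb] == 0 else 0)  # +1 for the node itself
        if new_w <= cap[nb]:
            w[nb] = new_w
            H.remove_node(leaf)                             # (2) cut the leaf off
            if H.degree(nb) == 1:
                leaves.append(nb)
    return w
```

On the 10-node tree of Figure 1, this sketch assigns weight 5 to the two nodes discussed above and stops the evaluation directly above the node with weight 5, matching the behavior described for Figure 2.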

The computational complexity of the branch weighting clearly does not exceed that of betweenness. Unfortunately, the branch weight strategy can be applied only when at least some nodes of the network are not part of a cycle. This condition is usually not satisfied in more complex networks, at least at the beginning of the attack; therefore, this attack strategy must be combined with another one. Moreover, even if a branch exists, it might be more advantageous in later stages to remove a node of high degree and betweenness which is part of a cycle than to remove a node of low degree and betweenness that would cut off, say, only a small branch. This is the reason for combining the strategies.

2.4. Combination of Heuristics

It is clear that each of the heuristics stresses different aspects of the problem, and their combination might produce better results than any of the heuristics applied separately. However, how do we produce the best combination? In multiple-criteria decision-making, the simplest approach is to weight the single attributes and produce the resulting evaluation by adding the weighted scores. After local optimization of the weights, such a combination produced the best results for the degree and branch weight heuristics, where the resulting weight of a node $v$ was calculated by

$$w(v) = \alpha\, d(v) + (1 - \alpha)\, b(v), \quad (3)$$

where $d(v)$ is the degree and $b(v)$ the branch weight of node $v$.

The parameter $\alpha$ is bounded by $0 \le \alpha \le 1$. Another possible approach is a multiplicative score, in which the resulting weight is obtained by multiplying the factors, with their weights used as exponents. Empirical local optimization trials favored the addition of the weighted scores of degree, branch weight, and betweenness. The resulting measure used for the selection of a node to be deleted was thus defined as

$$w(v) = \alpha\, d(v) + \beta\, b(v) + \gamma\, g(v), \quad (4)$$

where $g(v)$ is the betweenness centrality of node $v$.

The parameters $\alpha$, $\beta$, and $\gamma$ are bounded by $0 \le \alpha \le 1$, $0 \le \beta \le 1$, and $0 \le \gamma \le 1$. This criterion with its weight constants was produced by a local optimization over a set of Barabási-Albert model networks generated with a given set of parameters described further on. While the results proved advantageous, it is entirely possible that other types of networks might require a different combination, or at least slightly different weights $\alpha$, $\beta$, $\gamma$.
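A minimal sketch of the additive combination in the spirit of (3) and (4) (Python; the weight values and the normalization of the three measures are illustrative assumptions, since the locally optimized values are network dependent):

```python
import networkx as nx

def combined_score(G, alpha=0.4, beta=0.4, gamma=0.2):
    """Weighted additive score of degree, branch weight, and betweenness.
    The weights here are made-up placeholders, not the optimized values."""
    deg = dict(G.degree())
    bw = branch_weights(G)              # sketch from Section 2.3
    btw = nx.betweenness_centrality(G)
    d_max = max(deg.values()) or 1      # normalize to comparable scales
    b_max = max(bw.values()) or 1
    return {v: alpha * deg[v] / d_max + beta * bw[v] / b_max + gamma * btw[v]
            for v in G.nodes}

# The node deleted next is the argmax of the combined score:
#   score = combined_score(G); target = max(score, key=score.get)
```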

2.5. Reverse Build-Up Strategy

In the classical attacks, we use greedy heuristics to delete the nodes which would most advantageously disrupt the network. However, we can take another approach. We can start from an empty network, or from a disrupted network where only nodes of degree one and zero exist, and build the network up by adding, one at a time, the nodes which would connect the network as little as possible. When we reverse the sequence of added nodes, we get the sequence of nodes for deletion.

In the beginning, we take into our calculation the evaluation of the nodes of the complete network by the combined heuristics. Then we start from a network with nodes of degree at most one, produced by the combination of the previously described heuristics. Recursively, we try to add each of the deleted nodes and select the one whose addition results in the smallest number of nodes in the largest component of the built-up network. If several nodes satisfy the criterion, we add the node with the smallest preference according to the combined heuristics evaluated for the complete network.

While at the beginning of adding nodes we cannot get worse results than we got during the deletion of nodes, at the end of attaching nodes the sizes of the largest component might be bigger than those obtained with the original sequence.

Let us take as an example the simple tree shown as the first network in Figure 3. To keep the example simple, we shall use only the degree-based strategy RD. In this strategy, the first deleted node has index 1 with degree 5; the second, in the resulting network to the right, is the node indexed 2 with degree 4; and the third deleted node is, for example, node 3 with degree 2. The resulting network on the right has only two components of size 2. In the second row of Figure 3, we continue from right to left. Of the 3 available deleted nodes, adding node 2 first gives a largest component of 4 nodes instead of 5. When we then add node 1, the largest component has 5 nodes, instead of the 10 of the full network above. Therefore, when we delete the nodes in the reversed build-up sequence, the sizes of the largest component at the intermediate steps are smaller than with the original RD sequence, which results in a smaller $R$-index value. Since in each iteration we must try to add each of the deleted nodes, the computational complexity of the approach is $O(NE)$, similar to betweenness centrality.
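A Python sketch of the reverse build-up pass (`deleted` is the deletion sequence produced by the combined heuristics; the tie-breaking by the combined-heuristics preference is omitted for brevity):

```python
import networkx as nx

def reverse_buildup(G, deleted):
    """Re-insert deleted nodes one by one, always choosing the node whose
    addition keeps the largest component smallest; the reversed insertion
    order is the improved deletion sequence."""
    H = G.copy()
    H.remove_nodes_from(deleted)
    remaining, added = set(deleted), []
    while remaining:
        def largest_after(v):
            # tentatively re-insert v with its edges to present nodes
            H.add_node(v)
            H.add_edges_from((v, u) for u in G.neighbors(v) if H.has_node(u))
            size = max(len(c) for c in nx.connected_components(H))
            H.remove_node(v)
            return size
        best = min(remaining, key=largest_after)
        remaining.remove(best)
        added.append(best)
        H.add_node(best)
        H.add_edges_from((best, u) for u in G.neighbors(best) if H.has_node(u))
    return list(reversed(added))   # the new deletion order
```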

3. Results and Discussion: Testing the Combined Heuristics Attack

In order to test our proposed heuristic strategies and their combination in the attack, we compared them against the four most popular and successful strategies based on degree and betweenness, namely, RD, RB, ID, and IB, described in Section 2.2, as well as against one of the state-of-the-art strategies based on collective influence [35, 36] (CI strategy in Table 1). Additionally, the currently popular Hirsch index (H1 strategy in Table 1) and its second-order generalization [40] (H2 strategy in Table 1) were used in the tests, but they did not bring improved results compared to the other strategies. Our results for the Hirsch index and its generalization support the results in [38], where strategies using the Hirsch index or coreness achieved worse robustness-measure results than the strategy using degree centrality. To estimate the usefulness of the single strategies in the combined attack, we first used a combination of degree centrality and branch removal weight described by (3) with a locally optimized parameter ((3) strategy in Table 1). To this approach we then added betweenness centrality; the combined strategy is described by (4) with locally optimized parameters ((4) strategy in Table 1). Once the network was destroyed to the point where only solitary edges and single vertices remained, we used the nodes deleted in this process in the reverse build-up strategy (Rev(4) strategy in Table 1).

Similar to [29], we tested the attack strategies on networks produced by the BA preferential attachment model, generated by starting with a triangle and each time adding a node together with 3 edges, and on the ER random network model. Further, we also tested the attack strategies on random scale-free networks with a tunable power-law exponent, generated with 1500 connections [52, 53]. We used 10 randomly generated networks with 500 nodes in all three test sets.
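In outline, such a test run can be reproduced as follows (a sketch; `rd_attack` and `r_index` are the hypothetical helpers sketched in Sections 2.1 and 2.2, and any other strategy can be substituted for `rd_attack`):

```python
import networkx as nx
from statistics import mean, stdev

N, RUNS = 500, 10
scores = []
for seed in range(RUNS):
    ba = nx.barabasi_albert_graph(N, m=3, seed=seed)  # 3 edges per new node
    scores.append(r_index(ba, rd_attack(ba)))
print(f"RD on BA: R = {mean(scores):.4f} +/- {stdev(scores):.4f}")
```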

For the real-world networks, we used the collaboration network Erdos991 from the repository [54], with 492 nodes and 1417 edges, originally coming from the Pajek dataset [55], and the political blogosphere web, polblogs, with 643 nodes and 2280 edges, also from [54], which originated from [33]. The resulting $R$ values for all the attack strategies and all types of networks can be found in Table 1. Since the real-world network tests provide just single values, their standard deviation could not be calculated. The averaged fractions of nodes in the largest component for all the tested strategies, plotted against the fraction of removed nodes, are shown for the BA networks in Figure 4, for the ER networks in Figure 5, and for the scale-free networks with a tunable exponent in Figure 6. For the collaboration and polblogs networks, the resulting values are shown in Figures 7 and 8.

4. Conclusions

Our combined heuristic attack strategy improved the network destruction measure $R$ for the collaboration network Erdos991 by more than 23 percent, for the political blogosphere by 29 percent, for the Barabási-Albert model by more than 9 percent, for another scale-free model by more than 8 percent, and for the Erdős-Rényi networks by 7 percent, compared to the most popular RD attacks. The combined heuristics also achieved better results than the RB attack, though less substantially. Even a comparison to the state-of-the-art algorithm CI [35, 37] is favorable, but this is counterbalanced by the higher complexity of our algorithmic approach. The variability in results for different types of networks suggests that there might exist other types of networks where the combined heuristic attack might not be advantageous. Moreover, even for networks similar to the Barabási-Albert model, none of the attack heuristics or approaches can be named a winner. While our combination of heuristics provided better results than the most popular and most useful single-heuristic strategy RD, it also required more computational resources for the betweenness, branch removal weight, and build-up strategies. Our current results are likely comparable to the tabu search described in [29], while taking substantially less computational time. A suitably parametrized and adjusted stochastic evolutionary optimization is likely to provide even better results, at the expense of orders-of-magnitude growth in computing resources. On the other hand, the classical RD attack requires not only fewer computing resources (degree centrality itself is computable in $O(E)$) but also substantially less information about the structure of the network compared to all other strategies, heuristic or metaheuristic. The collective influence approach, CI [35, 36], with only slightly worse complexity than the RD approach, typically provides better results both for model and for real-world networks. Our combined heuristic attack strategy can currently be claimed as Pareto optimal, giving, for networks with a few thousand nodes, a reasonable tradeoff between execution time and the measure of network destruction.

We do not claim that our current set of methods and/or parameters is optimal. However, our aim is to show that when someone is interested in the best results for certain types of networks, it is worthwhile to try a combination of approaches, such as the one described in our manuscript.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work was supported by Grant APV SK-SRB-2016-0003: Adaptation of Parallel WoBInGO Framework for Protection of Cloud and Grid Computing Systems by Computational Intelligence.