Abstract

Dengue virus infection is an epidemic disease that requires considerable attention in order to protect people from its harmful effects. According to the World Health Organization (WHO), 3.6 billion individuals are at risk from dengue virus disease. Researchers are striving to understand the dengue threat; this study is a small contribution to those efforts. To observe the robustness of the dengue network, we removed the links between nodes both randomly and in a targeted manner, using different centrality measures. The outcomes demonstrate that a 5% targeted attack is equivalent to a 65% random attack, which indicates that the topology of this complex network corresponds to a scale-free rather than a random network. Four centrality measures (Degree, Closeness, Betweenness, and Eigenvector) have been calculated to find the focal hubs. The results of this study show that the robustness of nodes and links depends on the topology of the network. The dengue epidemic network exhibits robust behaviour under random attack but becomes far more vulnerable when hubs of higher degree have a higher probability of failure. Moreover, a projection of this network has been produced, and the impact of hub removal has been shown on the real map of Gombak (Malaysia).

1. Introduction

Node removal has been a vital and fascinating issue in dismantling a network. Through it, a complex network disintegrates into small clusters that can be treated conveniently. A property of critical importance is a network's resilience to destruction. Robustness is the ability of a network system to cope with failures of nodes or focal hubs; it is a critical attribute of many complex networks, and it is also helpful for gauging the resilience of substructure networks. Examples of complex systems include social networks of people, collaboration networks of scientists, the World Wide Web, electric power grids, and protein-protein interaction networks [16]. Networks of diseases in medicine and biology, such as HIV/AIDS, smallpox, and dengue virus, can also be represented as complex networks to analyze their spreading phenomena [7, 8]. Dengue virus is an arbovirus transmitted by the mosquito Aedes aegypti, and it too forms a complex network [9]. To date, there are no licensed vaccines or specific therapeutics, and substantial vector control efforts have not halted its rapid growth and worldwide spread [10]. Researchers are trying to understand this phenomenon as a network; they have found similar structural properties across many real-world systems when these systems are modelled and analyzed as complex networks [11]. Over the last decade, the research trend for these complex systems has been to model and analyze phenomena by converting them into complex networks in terms of nodes and links [12]. This supports the study of both structural and dynamical features of real-world complex networks. Research on these linkages plays an important role in epidemic inoculation [13] and network tolerance to attack [14].

Numerous real networks are made up of two or more networks that interact and are interdependent. Water and food supply networks, communication networks, fuel networks, and financial transaction networks are examples of such infrastructures [15]. Buldyrev et al. [16] contributed a theoretical framework for studying the robustness of two fully interdependent networks subject to random attack. The authors found that, because of the dependency coupling between the networks, they become extremely vulnerable to random failure, and the system breaks down in an abrupt first-order transition. Huang et al. [17] studied the robustness of two interdependent networks when high- or low-degree nodes are under targeted attack, by mapping the targeted-attack problem in interdependent networks to the random-attack problem. They found that interdependent scale-free networks are difficult to defend using strategies such as protecting the high-degree nodes. Gallos et al. [18] proposed a probability function for a targeted attack on an isolated single network, $W(k_i) = k_i^{\alpha} / \sum_{j=1}^{N} k_j^{\alpha}$, which gives the probability that a node with degree $k_i$ is removed; in case $\alpha < 0$, it is accepted that $W(0) = 0$. This function was used to study the robustness of scale-free networks under various attack strategies.

In this research, a dataset of dengue-affected cases has been utilized, obtained from the Ministry of Health (MOH), Malaysia [19]. We modelled the dataset of Gombak (a district of Selangor, Malaysia), where the weekly number of dengue cases was recorded in all affected localities over the period from 20 October 2013 to 18 October 2014 [7]. There were 560 affected localities with 36,878 dengue-affected cases [19]. The dataset has been formalized and modelled as a two-mode network (Figure 1(a)) and then projected into a one-mode network (Figure 1(b)) [7]. Here, the dengue-affected localities are the primary nodes, the weeks are the secondary nodes, and the number of dengue cases forms the link between primary and secondary nodes. This yields a two-mode network through the co-occurrence of weekly dengue cases among different localities. The analysis of this complex system is based on the behaviour of the network under random and targeted removal of links, that is, dengue cases. The overall analysis and results highlight the vulnerability of this system, with the aim of minimizing the dengue epidemic.

Figure 1(a) is a representation of the real dataset. Grey nodes represent locations in Gombak, Selangor, that is, L1: Apartment Casmaria, L2: Apartment Desa Temenggong, L3: Apartment Fiona, and L4: Apartment Palma BCH. White nodes represent the weeks, that is, W1: first week, W2: second week, W3: third week, W4: fourth week, and W5: fifth week.

Two-mode networks are usually converted into one-mode networks by projection. We utilized three methods for this purpose, namely, Binary, Sum, and weighted Newman [6, 20]. In this research, the result of the weighted Newman method has been utilized, so a concise description of it is given below.

Newman introduced the idea of projecting a two-mode network into a one-mode network when he analyzed scientific collaboration networks [21]. According to him, in a network of scientific collaboration, if two authors write a paper by themselves, they have a strong relation (weight 1), but when they write a paper with others, the social bond between any two coauthors weakens as the number of authors increases.

It is formalized as follows:

$$w_{ij} = \sum_{p} \frac{1}{N_p - 1},$$

where $w_{ij}$ is the weight between nodes $i$ and $j$, and $N_p$ is the number of authors of paper $p$ (in the context of the scientific collaboration network).

In Newman's method, the actual weights of the links are not considered properly, even though many real-world networks carry weighted information on their links. Opsahl et al. proposed a generalization of Newman's method, the weighted Newman method [21]. According to them, the weight can be formalized as follows:

$$w_{ij} = \sum_{p} \frac{w_{i,p}}{N_p - 1},$$

where $w_{ij}$ is the weight between nodes $i$ and $j$, and $w_{i,p}$ is the weight of the link from node $i$ to the co-occurrence $p$.
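As an illustrative (made-up) example of this formula: if locality $i$ records $w_{i,p} = 6$ cases in a week $p$ shared by $N_p = 4$ localities, then week $p$ contributes $6/(4-1) = 2$ to the weight $w_{ij}$ between locality $i$ and each co-occurring locality $j$.

The projection itself can be sketched with the tnet package of Opsahl, which implements all three methods; the toy edgelist below is hypothetical and merely stands in for the locality-week data described above.

library(tnet)

# Two-mode edgelist: one row per (locality, week) pair; the third column holds
# the number of dengue cases for that locality in that week (made-up values).
two_mode <- data.frame(
  i = c(1, 1, 2, 2, 3),   # locality nodes (e.g., L1, L2, L3)
  p = c(1, 2, 1, 2, 2),   # week nodes (e.g., W1, W2)
  w = c(5, 2, 3, 1, 4)    # weekly dengue case counts
)

# One-mode locality-by-locality projections; "Newman" applies the weighted
# Newman method used in this study.
one_mode_binary <- projecting_tm(two_mode[, 1:2], method = "binary")
one_mode_sum    <- projecting_tm(two_mode, method = "sum")
one_mode_newman <- projecting_tm(two_mode, method = "Newman")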

1.1. Background

In the tropical and subtropical zones of the world, the dengue epidemic is a serious illness, and over the past few decades its outbreaks have spread exponentially [22]. The World Health Organization (WHO) reported in 2012 that there may be 50–100 million dengue infections worldwide every year. It has been estimated that approximately 3.6 billion people live in the dengue-affected parts of the world [23].

Regarding Malaysia, by 21 November 2015, there were 107,079 dengue cases with 293 deaths reported. This was a clear increase over the previous report, in which MOH Malaysia recorded over 43,000 cases with 92 deaths [24]. In 2014, during the 42nd week from October 12 to October 18, a total of 2,160 cases of dengue fever were reported, an increase of 338 cases (19%) over the previous week (the 41st), which had 1,822 cases. An increase in dengue cases over the previous week (41st) was recorded in ten states, namely, Selangor, Melaka, Kuala Lumpur & Putrajaya, Terengganu, Pulau Pinang, Perak, Kelantan, Negeri Sembilan, Pahang, and Johor. Meanwhile, the cumulative number of dengue cases reported nationwide from January to October 18, 2014, was 82,738, an increase of 212% (56,211 cases) compared to the same period in 2013 [19].

The rest of the paper is structured as follows: in Section 2, the impact of node removal in the network is discussed, along with the concept of targeted versus random attack. Section 3 covers the analysis of the results of targeted and random link removal in the dengue epidemic network. In Section 4, the identification of central nodes is discussed, and the influence of node removal is presented on the real map of Gombak. Finally, the conclusions of the study are drawn in the last section.

2. Node Removal

Node removal has been an important and interesting issue in destroying a complex network [25]. A property of basic significance is a network's resilience to destruction. Breaking a single link that uniquely connects two otherwise disconnected components of a network has a far greater impact, measured by the size of the largest cluster, than the loss of one of the three links that make up a triangle (Figure 2). As clusters become disconnected, this distance is calculated in terms of the original network [26].
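The contrast between a bridge link and a triangle link can be illustrated with a small sketch; the toy graph below (built with the igraph package, not the dengue data) compares the size of the largest cluster after each kind of removal.

library(igraph)

# A toy graph: two triangles joined by a single bridge link.
g <- graph_from_literal(a - b, b - c, c - a,   # first triangle
                        c - d,                 # bridge
                        d - e, e - f, f - d)   # second triangle

giant <- function(graph) max(components(graph)$csize)

giant(g)                        # 6: all nodes in one cluster
giant(delete_edges(g, "c|d"))   # 3: removing the bridge splits the network
giant(delete_edges(g, "a|b"))   # 6: removing a triangle link leaves it intact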

2.1. Random versus Targeted Attack

In this research, two typical attack modes on complex networks have been analyzed: random attack and targeted attack, applied to links and nodes. In the random attack process, links/nodes are selected at random from the network (in different percentages) and the effects of their removal on the network are observed. In a targeted attack, on the other hand, links/nodes are removed in descending order of their weight and degree; that is, the most highly connected links/nodes are removed first. A targeted attack requires a much smaller percentage of removed links/nodes to destroy the network, since the hubs that glue the network together are removed and the network disintegrates into small pieces. The robustness of the network under each mode is then observed.
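A minimal sketch of the two attack modes on a weighted one-mode edgelist is given below; it assumes the tnet-style columns i, j, w and the hypothetical one_mode_newman object from the earlier projection sketch.

# Remove a fraction of links either at random or in descending order of weight.
remove_links <- function(net, fraction, targeted = FALSE) {
  n_remove <- round(nrow(net) * fraction)
  if (targeted) {
    # targeted attack: drop the most heavily weighted links first
    keep <- order(net[, "w"], decreasing = TRUE)[-(1:n_remove)]
  } else {
    # random attack: drop a uniformly random subset of links
    keep <- sample(nrow(net), size = nrow(net) - n_remove)
  }
  return(net[keep, ])
}

# e.g., remove_links(one_mode_newman, 0.10)         # 10% random attack
#       remove_links(one_mode_newman, 0.05, TRUE)   # 5% targeted attack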

2.2. Robustness of Network under Links Removal

The dengue epidemic complex network has been analyzed under random and targeted link removal, across different places in Selangor, to observe its robustness. Since the projected network has many links of varying weight and relatively few nodes, random link removal is the natural way to observe its behaviour under random attack. Removing a link means reducing the number of cases (dengue-affected patients). The number of cases in a particular locality (node) in a particular week indicates the strength of Aedes aegypti (the dengue vector): more dengue cases mean a stronger vector presence, and fewer dengue patients indicate a weaker one. Links also represent the importance of a locality.

We randomly removed 10 and 15 percent of links, respectively, from this network and observed the robustness of the (global) Betweenness and Closeness measures by using (3) and (4), respectively, proposed by Opsahl et al. [21]. When the tuning parameter alpha (α) is 0, only the number of links attached to a node is considered; when α equals 1, only the weight of the links is included; and when α is 0.5, both the number of links and their weights are included. Moreover, the centrality metrics have been used to identify important nodes and links in the overall network from different points of view (local as well as global). Hence, the stability of the centrality scores has been checked through the concept of robustness. A brief description of the Betweenness and Closeness centralities used to observe the impact of link removal is given below.

2.3. Betweenness Centrality

Opsahl et al.'s generalization of Betweenness centrality depends on their generalization of shortest paths [21]. The Betweenness centrality is formalized as follows:

$$C_B^{w\alpha}(i) = \frac{g_{jk}^{w\alpha}(i)}{g_{jk}^{w\alpha}},$$

where $C_B^{w\alpha}(i)$ is the weighted Betweenness centrality of node $i$, $g_{jk}^{w\alpha}$ is the total number of weighted shortest paths between nodes $j$ and $k$, and $g_{jk}^{w\alpha}(i)$ is the number of those paths that pass through node $i$. Here α is the tuning parameter: when its value is 0, the binary shortest distance is calculated, whereas for a value of 1 Dijkstra's algorithm is used [27]. When the value of α is greater than 1, the shortest paths are based on the strongest links rather than the fewest links between the nodes of the network.

2.4. Closeness Centrality

In [21], the generalization of Closeness centrality relies on the generalization of the shortest path. Closeness centrality is formalized as follows:

$$C_C^{w\alpha}(i) = \left[ \sum_{j=1}^{N} d^{w\alpha}(i,j) \right]^{-1},$$

where $C_C^{w\alpha}(i)$ is the weighted Closeness centrality of node $i$ and $d^{w\alpha}(i,j)$ is the weighted shortest distance between nodes $i$ and $j$. As before, α is the tuning parameter: a value of 0 yields the binary shortest distance, a value of 1 uses Dijkstra's algorithm [27], and values greater than 1 base the shortest paths on the strongest links rather than the fewest links between the nodes of the network.
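In the tnet implementation of these generalized measures [21], the tuning parameter appears as the alpha argument; the sketch below (reusing the hypothetical one_mode_newman object from the projection sketch) shows the three settings used in this study.

library(tnet)

bet_links   <- betweenness_w(one_mode_newman, alpha = 0.0)  # links only
bet_mixed   <- betweenness_w(one_mode_newman, alpha = 0.5)  # links and weights
bet_weights <- betweenness_w(one_mode_newman, alpha = 1.0)  # weights only

clo_links   <- closeness_w(one_mode_newman, alpha = 0.0)
clo_mixed   <- closeness_w(one_mode_newman, alpha = 0.5)
clo_weights <- closeness_w(one_mode_newman, alpha = 1.0)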

Further, Spearman's rank correlation has been used to assess the effect of link removal on the ranks of the nodes in the network.

3. Result Analysis

In this section, the results and their interpretations are presented. The results of the weighted Newman method have been utilized to calculate the Betweenness and Closeness centrality measures. For this network, this method is more suitable than the other projection methods (Binary and Sum): after the projection from two-mode to one-mode, it represents the dengue problem more adequately in terms of localities and the number of dengue cases per week, and consequently it gave better results when we analyzed robustness. The link removal experiment has been repeated forty times, and the average values for each tuning parameter have been calculated to measure the robustness under the Betweenness and Closeness centrality measures, using the following:

$$\bar{R}_B = \frac{1}{40} \sum_{k=1}^{40} R_{B_k}, \qquad \bar{R}_C = \frac{1}{40} \sum_{k=1}^{40} R_{C_k},$$

where $R_{B_1}$ ($R_{C_1}$) is the first experiment and, likewise, $R_{B_k}$ ($R_{C_k}$) is the $k$th experiment.

If the robustness value is close to 1, the network is more robust under attack, whereas a value near 0 means it is more vulnerable.
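This procedure can be sketched compactly as follows, assuming the hypothetical remove_links helper and one_mode_newman object from the earlier sketches; it is shown for the Betweenness measure, and the Closeness case is analogous. The full program code is given in the appendix.

library(tnet)

# Remove a fraction of links, recompute a centrality, and correlate the node
# ranks of the observed network with those of the true network; repeat and
# average (forty repetitions in this study).
robustness <- function(net, fraction, alpha, targeted = FALSE, reps = 40) {
  true_scores <- betweenness_w(net, alpha = alpha)
  r_vals <- replicate(reps, {
    observed   <- remove_links(net, fraction, targeted)
    obs_scores <- betweenness_w(observed, alpha = alpha)
    common     <- merge(true_scores, obs_scores, by = "node")
    cor(common[, 2], common[, 3], method = "spearman")
  })
  mean(r_vals)   # near 1: robust under attack; near 0: vulnerable
}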

Moreover, the R-project tool (version 3.2.1, released 2015-06-18), with the special program code given in the appendix, has been utilized for the link removal and rank correlation.

In Table 2, 10% of links have been removed randomly; similarly, in Table 3, 15% of links have been removed randomly. In Table 4, 5% of links have been removed by targeted attack, while Table 5 compares 5% targeted and 65% random link removal. To illustrate the effect of various levels of alpha (α), the two measures are applied to the dataset, where α is a positive tuning parameter that can be set according to the research setting and the nature of the data. If the tuning parameter is between 0 and 1, having a high degree is favourable, whereas if it is above 1, a low degree is favourable.

In Tables 1, 2, 3, 4, and 5, the tuning parameter α = 0 means that only the number of links (degree) attached to a node is considered, α = 1.0 means that only the weight of the links (strength) is considered, and α = 0.5 means that both the number of links and the weight of links are included.

Before discussing the results after link removal, the results for the actual network are given in Table 1.

In Table 2, the Betweenness and Closeness centrality measures have been calculated after randomly removing 10% of the links from the network. When α is 0.0, 0.5, and 1.0, the average robustness values under Betweenness centrality are 0.95, 0.88, and 0.93, respectively. These differ only slightly from the actual network (Table 1), where, under α = 0.0, 0.5, and 1.0, the values are 0.96, 0.91, and 0.95, respectively. Since the values are very close, 10% random removal did not affect the network much.

The average robustness values under the Closeness measure are 0.14, 0.43, and 0.72 for α = 0.0, 0.5, and 1.0, respectively. Comparing these with the actual network results of 0.16, 0.50, and 0.75 (Table 1), the experiment in Table 2 indicates that removing 10% of the links at random has very little effect on the dengue cases network.

The robustness under the Betweenness centrality measure after 10% link removal is high, so removal has no clear effect on this network. The robustness under Closeness centrality has been analyzed in the same way; it also shows that the network is not affected much, even though the correlation values themselves are low when α is 0.0 and 0.5.

In Table 3, the robustness under the Betweenness and Closeness centrality measures has been calculated after randomly removing 15% of links. The experiment was repeated forty times and average values were calculated for each tuning parameter. When α is 0.0, 0.5, and 1.0, the average values under the Betweenness centrality measure are 0.92, 0.86, and 0.90, respectively. Comparing Table 3 with Table 2 for Betweenness: at α = 1, the value after 10% random removal is 0.93 and after 15% random removal it is 0.90, which means the network is more robust under 10% removal of links; at α = 0.0, the value is 0.95 after 10% random removal and 0.92 after 15% random removal. This shows that the network becomes more vulnerable as more links are removed. Under 15% random removal, the average values of the Closeness centrality measure are 0.12, 0.33, and 0.68 for α = 0.0, 0.5, and 1.0, respectively; at α = 1, Closeness under 10% and 15% removal is 0.72 and 0.68, respectively. Comparing 10% and 15% random link removal under the Closeness measure, the network is more vulnerable at 15% and robust at 10%. Even so, 15% link removal has only a small effect on the network in comparison with the actual network (Table 1).

In Table 4, the robustness under the Betweenness and Closeness centrality measures has been calculated after removing 5% of links by targeted attack; that is, the 5% of links with the highest weights have been removed. The centrality scores were measured for different values of α. Under α = 1.0, the average values of Betweenness and Closeness centrality are 0.64 and 0.35, respectively. When α = 0.0, the value for Closeness is very low, at 0.09, and Betweenness also shows a low value. This indicates that the targeted attack has weakened the strength of the network. In addition, 5% targeted removal has a clear impact compared with the actual network (Table 1); this link removal affected the network considerably.

For the Betweenness measure (Figure 3), when 10% of links were removed randomly, the average values under α = 0.0 and α = 1.0 are 0.95 and 0.93. When up to 15% of links were removed, a difference appears: removing more links, including more of the heavily weighted ones, has a modest additional effect on Betweenness, with values of 0.92 and 0.90 under α = 0.0 and α = 1.0. Although random link removal does affect the network, the correlation remains high, reflecting the similarity between the original (True) network and the (Observed) networks after random link removal. The rank correlation coefficient captures the similarity between the node centrality rankings of the real network and each of the two observed networks for different values of α. For 5% targeted link removal, the values under α = 0.0 and α = 1.0 are 0.70 and 0.64, respectively, which shows that the network is more vulnerable to it than to 10% or 15% random link removal.

For the Closeness measure (Figure 4), when 10% of links were removed randomly, the average values under α = 0.5 and α = 1.0 are 0.43 and 0.72. When up to 15% of links were removed, the same pattern appears: removing more links, including more of the heavily weighted ones, has a modest additional effect on Closeness, with values of 0.33 and 0.68 under α = 0.5 and α = 1.0. Although random link removal does affect the network, the correlation remains high, reflecting the similarity between the original (True) network and the (Observed) networks after random link removal. The rank correlation coefficient captures the similarity between the node centrality rankings of the real network and each of the two observed networks for different values of α. For 5% targeted link removal, the values under α = 0.5 and α = 1.0 are 0.09 and 0.35, respectively, which shows that the network is more vulnerable to it than to 10% or 15% random link removal. It is noticeable that 5% targeted removal damaged the network far more than 10% or 15% random removal under both centrality measures (Betweenness and Closeness).

Figures 3 and 4 compare 5% targeted link removal with random link removal (10% and 15%) in terms of network robustness, using the Betweenness and Closeness centrality measures. The findings clearly show that targeted removal is far more effective than random removal in this dengue virus network: if one wishes to treat or destroy the dengue network, a targeted attack is preferable to a random attack.

For a comprehensive comparison between targeted and random link removal, we removed links randomly (10%, 15%, 30%, 40%, 50%, 60%, 65%, and 70%) and compared the results with targeted removal of 5% of links. The analysis showed that 5% targeted removal outperformed random removal of up to 60% of links, and that removing roughly 65% of links at random produced results approximately equal to 5% targeted removal (shown in Table 5). This means the dengue virus network can be controlled in two ways: either by clearing/treating 65% of dengue-affected areas at random or by focusing on 5% of targeted areas. Of course, treating 5% of the network is easier than treating 65%. This study therefore emphasizes finding and treating the 5% of nodes, instead of 65%, to isolate the clusters in the network.
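This sweep can be sketched as follows, again assuming the hypothetical robustness helper and one_mode_newman object from the earlier sketches.

fractions  <- c(0.10, 0.15, 0.30, 0.40, 0.50, 0.60, 0.65, 0.70)
random_r   <- sapply(fractions, function(f) robustness(one_mode_newman, f, alpha = 0.5))
targeted_r <- robustness(one_mode_newman, 0.05, alpha = 0.5, targeted = TRUE)

# compare each random-removal robustness value with the 5% targeted one
rbind(fraction = fractions, random = random_r, targeted_5pct = targeted_r)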

4. Central Node Identification and Its Removal Impact

The identification of the central nodes/links has been a key point in the analysis of complex networks [21, 28]. It is therefore very important to know the features of a central node. In 1978, Freeman [29] claimed that central nodes were those "in the thick of things", or focal points. To elaborate his idea, he used a network of five nodes (Figure 5). He explained that the central node has three advantages over the other nodes: it reaches the other nodes rapidly; being in the middle, it controls the flow between the other nodes; and it has more links than the others.

In Figure 5, node 1 is in the middle with four links, so it can reach the other nodes quickly and control the flow between them. On the basis of these three features, Freeman [29] formalized three node centrality measures: Degree, Closeness, and Betweenness. The degree of a node corresponds to its number of links and is measured locally. This measure has limitations: it does not take the global structure of the network into consideration, and it treats all connections as equally important. To address the global structure, Closeness centrality was introduced, defined as the inverse sum of the shortest distances from a focal node to all other nodes. The main limitation of Closeness is its lack of applicability to networks with disconnected components: two nodes that belong to different components do not have a finite distance between them, so Closeness is generally restricted to nodes within the largest component of a network. Betweenness, in turn, assesses the degree to which a node lies on the shortest paths between other nodes and is thereby able to funnel the flow in the network; in doing so, a node can assert control over the flow. Although this measure takes the global network structure into consideration and can be applied to networks with disconnected components, it is not without limitations: a large proportion of the nodes in a network generally do not lie on the shortest path between any two other nodes and therefore all receive the same score of 0. To overcome the other limitation of the degree measure, namely that it considers all connections equally important, the Eigenvector centrality measure was introduced; it states that a connection to a more important node is itself more important, so not all connections have equal importance [30]. The scores of the four centrality measures on the given dataset of the dengue epidemic network are shown in Table 6.
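The four measures can be computed together with the igraph package (tnet provides the first three but not Eigenvector centrality); the sketch below reuses the hypothetical one_mode_newman edgelist from the projection sketch.

library(igraph)

# Build an igraph object from the tnet-style edgelist; naming the third
# column "weight" lets igraph use it in the weighted measures.
el <- as.data.frame(one_mode_newman)
names(el) <- c("from", "to", "weight")
g <- graph_from_data_frame(el, directed = TRUE)

deg <- degree(g)                      # local: number of links
eig <- eigen_centrality(g)$vector     # uses weights as tie strengths

# igraph's distance-based measures treat weights as costs, so invert the
# tie strengths (case counts) before computing them.
E(g)$weight <- 1 / E(g)$weight
clo <- closeness(g)                   # global: inverse sum of shortest distances
bet <- betweenness(g)                 # global: shortest paths through a node

# top three nodes per measure, as reported in Table 6
head(sort(deg, decreasing = TRUE), 3)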

We calculated the four centrality measures (i.e., Degree, Closeness, Betweenness, and Eigenvector). To keep the focus on the networking characteristics of the given dataset, only the top three results for each of these centrality measures are given in Table 6. These four centrality measures have different characteristics and capture distinct features; in spite of this, they produced the interesting result that the top three nodes are the same under all four measures. This is another important point of this study: it highlights the most central nodes, which act as focal hubs in the network. These results show that nodes 272, 270, and 276 are very important for this dengue epidemic network. Dengue infections can be better controlled through a targeted attack on highly affected hubs rather than random treatment.

Here, in the dengue network, the flow process (the number of cases) runs between localities and has been obtained from the two-mode to one-mode projection. The network projection produces two directed links between localities, where both of these asymmetric links record the number of dengue cases, that is, the flow of the disease (dengue) in this case. The numbers of cases are thus used as a measure of the transfer of dengue infection between two localities.

4.1. Network Projection from Localities Perspective

In Figure 6, the network projection of the Gombak nodes is presented. This representation shows that nodes 1, 5, 15, 18, 39, and 40 are very important and should be the focus of immunization; proper treatment and cure of these nodes can break the clusters. The visualization of this network shows that the localities in Gombak are not all connected with one another by the co-occurrence of weeks; hence, not all localities are affected in all weeks. Furthermore, as with the nodes in Table 6, some localities have been acting as main hubs. Moreover, the actual map of Gombak is shown in Figure 7, which highlights the real geographical distribution of the dengue-affected localities in Gombak, Malaysia. Different clusters can be seen, such as Batu Caves, where people suffer badly from dengue disease.

4.2. Representation of Dengue Affected Nodes on Real Map (Figure 7)

After formalizing the given data into a two-mode network, projecting it into a one-mode network, and applying the different centrality measures, fifty-eight affected nodes have been found in Gombak. These areas are displayed on the real map of Gombak, captured from Google Maps. In Figure 7, the boundary of Gombak is highlighted in red (Accent 2, Lighter 40%), and the dengue-affected localities of Gombak are represented by white circles labelled GL1 (Gombak Locality 1) up to GL58. On this map, the dengue-affected nodes fall into different clusters: one contains GL18, GL19, GL20, GL22, GL29, GL06, GL15, GL16, and GL17, where GL15 and GL18 are the focal hubs in terms of the number of dengue cases. A second cluster contains nodes such as GL23, GL39, GL40, GL45, GL01, GL38, GL53, and GL52, where GL01 and GL39 are the focal hubs. Other clusters can also be seen, such as GL49, GL50, GL07, GL05, GL10, and GL54, where GL05 is the focal hub.

In Figure 7, we have highlighted the five focal hubs with grey dotted circles. These five nodes, GL01, GL05, GL15, GL18, and GL39, make up 8% of the whole dengue network in Gombak. Considering the number of dengue cases, the analysis shows that these are the focal hubs that should be treated properly to suppress the dengue virus. An increased number of dengue cases means more dengue vectors (Aedes aegypti) are present in the cluster. Further, these hubs are very important for breaking down the clusters, which in turn breaks the whole network.

By controlling the dengue virus in these 8% focal nodes, 34% of the dengue network can be destroyed. This is an example of a targeted attack and of how efficiently it works.
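An impact estimate of this kind can be sketched with igraph, assuming the graph object g from the earlier centrality sketch and taking the top-degree nodes as stand-ins for the focal hubs.

library(igraph)

hubs <- names(head(sort(degree(g), decreasing = TRUE), 5))  # five focal hubs
g_after <- delete_vertices(g, hubs)

before <- max(components(g)$csize)
after  <- max(components(g_after)$csize)
1 - after / before   # fraction of the connected network destroyed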

Similarly, other focal hubs in the remaining 66% of the network can be traced for treatment; in the same way, the network can be reduced or the dengue virus contained. Further, the results (Table 5) show that a 5% targeted attack produced results equivalent to a 65% random attack on the network. This network should therefore be treated with targeted attacks rather than random ones.

5. Conclusion

The results of the study showed that the dengue epidemic network is more vulnerable under targeted attack than under random attack. The targeted-attack behaviour matches the BA scale-free network (Barabási–Albert model), whereas the response to random removal follows the Erdős–Rényi model. We observed that the network becomes significantly more vulnerable when nodes of higher degree have a higher likelihood of failing, that is, when the probability that a node fails is proportional to its degree. We summarize the outcomes of the study in three parts. Firstly, this research concluded that the dengue epidemic network is more vulnerable under targeted attack and shows more robust behaviour under random attack; the results demonstrated that a 5% targeted attack and a 65% random attack are equivalent, and treating 5% of the nodes is obviously easier than treating 65%. Secondly, determining and curing the most influential hubs in an affected complex network has been a focal issue in the field of network science. The outcomes of this study highlighted a few central nodes that could be targeted for immunization and other treatments: from the four centrality measures (Degree, Closeness, Betweenness, and Eigenvector), the three most focal hubs (272, 270, and 276) were found in the entire Selangor dataset. Furthermore, the Gombak network projection and real map presentation identified several central hubs, namely, GL01, GL05, GL15, GL18, and GL39. Thirdly, the real map representation (Figure 7) showed that 34% of the dengue network is destroyed when we control the dengue virus in 8% of the focal nodes in the area of Gombak. By all these strategies, it is determined that removing a critical fraction of nodes disintegrates the major part of the network. Thus, the proliferation of the dengue epidemic can be successfully controlled by inoculation. It is believed that this research contributes to a better understanding of treatment techniques on the dengue complex network and to an improved evaluation of the robustness of the given network. Furthermore, we intend to extend this study by modelling the dengue epidemic as a dynamic network and comparing different epidemic spreading models in a future publication.

Appendix

See Algorithm 1.

library(tnet)   # weighted-network centrality functions of Opsahl et al.

# Note: identifiers lost in typesetting (the loop index i, the tnet function
# names degree_w/closeness_w/betweenness_w, and the r/n arguments) have been
# restored below.

# Computes generalized degree scores for a range of alpha values.
degree_function <- function(network, alpha1=0.0, alpha2=1.0, step_size=0.5, header="deg_") {
 num_iter <- (alpha2 - alpha1) / step_size
 a_vals <- c(alpha1)
 for (i in 1:num_iter)
  a_vals[i+1] <- a_vals[i] + step_size
 result <- degree_w(network, measure=c("alpha"), alpha=a_vals[1])
 for (i in 2:length(a_vals)) {
  result <- merge(result, degree_w(network, measure=c("alpha"), alpha=a_vals[i]), by="node")
  names(result)[i+1] <- paste(header, a_vals[i], sep="")
 }
 names(result)[2] <- paste(header, a_vals[1], sep="")
 return(result)
}

# Computes generalized closeness scores (raw and normalized columns)
# for a range of alpha values.
closeness_function <- function(network, alpha1=0.0, alpha2=1.0, step_size=0.5, header="clo_") {
 num_iter <- (alpha2 - alpha1) / step_size
 a_vals <- c(alpha1)
 for (i in 1:num_iter)
  a_vals[i+1] <- a_vals[i] + step_size
 result <- closeness_w(network, alpha=a_vals[1])
 for (i in 2:length(a_vals)) {
  result <- merge(result, closeness_w(network, alpha=a_vals[i]), by="node")
  names(result)[c(2*i, 2*i+1)] <- c(paste(header, a_vals[i], sep=""),
                                    paste("n.", header, a_vals[i], sep=""))
 }
 names(result)[c(2, 3)] <- c(paste(header, a_vals[1], sep=""),
                             paste("n.", header, a_vals[1], sep=""))
 return(result)
}

# As closeness_function, but drops the normalized-closeness column each time.
closeness_function2 <- function(network, alpha1=0.0, alpha2=1.0, step_size=0.5, header="clo_") {
 num_iter <- (alpha2 - alpha1) / step_size
 a_vals <- c(alpha1)
 for (i in 1:num_iter)
  a_vals[i+1] <- a_vals[i] + step_size
 result <- closeness_w(network, alpha=a_vals[1])
 result <- result[, 1:(ncol(result)-1)]
 for (i in 2:length(a_vals)) {
  result <- merge(result, closeness_w(network, alpha=a_vals[i]), by="node")
  result <- result[, 1:(ncol(result)-1)]
  names(result)[i+1] <- paste(header, a_vals[i], sep="")
 }
 names(result)[2] <- paste(header, a_vals[1], sep="")
 return(result)
}

# Computes generalized betweenness scores for a range of alpha values.
betweenness_function <- function(network, alpha1=0.0, alpha2=1.0, step_size=0.5, header="bet_") {
 num_iter <- (alpha2 - alpha1) / step_size
 a_vals <- c(alpha1)
 for (i in 1:num_iter)
  a_vals[i+1] <- a_vals[i] + step_size
 result <- betweenness_w(network, alpha=a_vals[1])
 for (i in 2:length(a_vals)) {
  result <- merge(result, betweenness_w(network, alpha=a_vals[i]), by="node")
  names(result)[i+1] <- paste(header, a_vals[i], sep="")
 }
 names(result)[2] <- paste(header, a_vals[1], sep="")
 return(result)
}

# Sorts the dataset by the nth column and returns the top rows.
order_by_nth_col <- function(dataframe=NULL, n, top_rows=10, ascending=TRUE) {
 if (ascending)
  return(dataframe[order(dataframe[[n]]), ][1:top_rows, ])
 else
  return(dataframe[order(-dataframe[[n]]), ][1:top_rows, ])
}

# Compares the top-r nodes of two networks for each alpha value; the r
# argument (number of top-ranked nodes to compare) was lost in typesetting.
degree_comparison <- function(tnet1, tnet2, alpha1=0.0, alpha2=1.0, step_size=0.5, r=10) {
 res1 <- degree_function(tnet1, alpha1, alpha2, step_size)
 res2 <- degree_function(tnet2, alpha1, alpha2, step_size)
 num_comparisons <- (alpha2 - alpha1) / step_size + 1
 result <- c()
 for (i in 1:num_comparisons) {
  tmp1 <- order_by_nth_col(res1, i+1, top_rows=r, FALSE)
  tmp2 <- order_by_nth_col(res2, i+1, top_rows=r, FALSE)
  result[2*i-1] <- alpha1 + (i-1) * step_size
  result[2*i] <- sum(tmp1[, 1] != tmp2[, 1])   # non-matching records
 }
 return(matrix(result, nrow=2, dimnames=list(c("alpha", "Hamming distance"), NULL)))
}

betweenness_comparison <- function(tnet1, tnet2, alpha1=0.0, alpha2=1.0, step_size=0.5, r=10) {
 res1 <- betweenness_function(tnet1, alpha1, alpha2, step_size)
 res2 <- betweenness_function(tnet2, alpha1, alpha2, step_size)
 num_comparisons <- (alpha2 - alpha1) / step_size + 1
 result <- c()
 for (i in 1:num_comparisons) {
  tmp1 <- order_by_nth_col(res1, i+1, top_rows=r, FALSE)
  tmp2 <- order_by_nth_col(res2, i+1, top_rows=r, FALSE)
  result[2*i-1] <- alpha1 + (i-1) * step_size
  result[2*i] <- sum(tmp1[, 1] != tmp2[, 1])
 }
 return(matrix(result, nrow=2, dimnames=list(c("alpha", "Hamming distance"), NULL)))
}

# Returns, for each centrality column, the node ids sorted by that column.
sort_all <- function(dataframe=NULL) {
 rows <- nrow(dataframe)
 cols <- ncol(dataframe)
 result <- data.frame(idx=1:rows)
 for (i in 1:cols)
  result <- cbind(result[, 1:i], order_by_nth_col(dataframe, i, rows, FALSE)[, 1])
 return(result[, 3:(cols+1)])   # drop the index and node-id columns
}

# Prints Spearman's rank correlation between the node rankings of two
# centrality tables (e.g., true network versus network after link removal).
spearman_corr <- function(df1, df2) {
 library(Hmisc)
 x <- sort_all(df1)
 y <- sort_all(df2)
 cols <- ncol(x)
 for (i in 1:cols) {
  print(names(df1)[i+1])
  print(rcorr(x[, i], y[, i], type="spearman"))
  print("----------")
 }
}

# Returns a random sample of n edges extracted from the specified network in
# the tnet format; the n argument was lost in typesetting.
get_edge_sample <- function(network, n, weighted=TRUE, weight_threshold=0) {
 return(network[sample(nrow(network), size=n), ])
}

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.