Abstract

When dealing with evolving or multidimensional complex systems, network theory provides us with elegant ways of describing their constituent components, through, respectively, time-varying and multilayer complex networks. Nevertheless, the analysis of how these components are related is still an open problem. We here propose a general framework for analysing the evolution of a (complex) system, by describing the structure created by the difference between multiple networks by means of the Information Content metric. Unlike other approaches, which focus on assessing the magnitude of the change, the proposed one allows understanding whether the observed changes are due to random noise or to structural (targeted) modifications; in other words, it allows describing the nature of the force driving the changes, and discriminating between stochastic fluctuations and intentional modifications. We validate the framework by means of sets of synthetic networks, as well as of networks representing real technological, social, and biological evolving systems. We further propose a way of reconstructing network correlograms, which allow converting the system's evolution to the frequency domain.

1. Introduction

Although complex networks theory [1, 2] was initially used to describe the structure underpinning individual complex systems, in recent years there has been an explosion in the number of situations in which (potentially large) sets of networks have to be studied in a comparative way. The availability of multiple related networks may be the natural result of analysing different, yet compatible systems, for instance, functional brain networks obtained from a large set of healthy people, with the aim of identifying common connectivity patterns [3], or from control subjects and patients suffering from a given condition [4], to detect differences between them. This can nevertheless also stem from the analysis of a single system across its parameters’ and temporal dimensions. Following the previous example, neuroscientists may be interested in characterising the temporal evolution of such networks during a long cognitive task [5, 6] or across different frequency bands [7]. Potential examples are not limited to neuroscience and indeed appear in all research fields where complex networks have been applied [8], i.e., across social, biological, and technological systems, a clear example of the latter being air transport networks [9, 10].

The analysis of the differences between two or more networks is a twofold problem. On one hand, it entails the quantification of such differences [11], either by simply counting how many links have changed during the evolution, or by calculating a set of topological metrics and comparing their normalised values [12]. On the other hand, it comprises the understanding of the dynamical processes causing such changes, or, in other words, of why these links or topological properties have changed. These two aspects of the problem are complementary, as both have to be taken into account for the correct understanding of an observed evolution. The fact that two networks are not equal does not imply the presence of a structured evolutionary process, as they may be the result of describing the same system under observational noise. Such a conclusion cannot be drawn even from a statistically significant change in some topological metric: e.g., a reduction in the modularity may be the result of a random link rewiring, but also of a targeted process aimed at disrupting the modular structure; even an increase in modularity may be the result of a random process, albeit with low probability. Lastly, and along the same line, one should not equate the magnitude of the changes with the presence of targeted processes: random noise does not necessarily result in small fluctuations only. These two aspects, i.e., description and structuredness, are also of high relevance for real-world applications. For instance, in the specific case of brain functional networks, the presence of an unstructured difference between control subjects and patients may be ascribed to a global loss of brain connectivity, while structured changes may suggest a focused reorganisation of the information flow.

The latter point, i.e., the understanding of the dynamical processes causing a change, is a specific aspect of the more general problem known as phenotype to genotype [13, 14]. While we can observe only the phenotype of a system, in this case the resulting physical or functional network, what we would really like to understand is the genotype that has created it. If several phenotypes are available, e.g., we can observe the temporal evolution of the system, we can in principle use the phenotype’s dynamics to (partly) reconstruct the genotype: in other words, we can use the “difference of structures” to unveil the underlying “structure creating such difference”.

Inspired by this, we here present a framework designed to answer the following specific question: do the observed changes follow a structure, or are they simply the result of random fluctuations? This framework is based on (a) the calculation of the difference between the two observed networks, (b) the representation of such difference as a new difference network, and (c) the analysis of its structural characteristics. Specifically, we start from the assumption that changes resulting from nonrandom processes are characterised by correlations, which are reflected in the presence of a mesoscale in the difference network. Such mesoscale can then be detected using a broad-band topological metric, i.e., the Information Content [15], and its significance assessed through a statistical test based on ensembles of equivalent random networks. By means of a set of synthetic evolving networks, we show that this approach is complementary to alternatives that only focus on quantifying the magnitude of the change, and not on describing its nature, such as those based on cross-network correlations [16] or on the von Neumann entropy [17, 18]. We further demonstrate the usefulness of the proposed solution by analysing three real systems, respectively technological (the evolution of the world-wide air transport network), social (human contact networks in a hospital), and biological (comparison of functional brain networks corresponding to different frequency bands). We conclude this work by showing how this approach can be used to construct a network correlogram, which, among others, can be used to detect the natural frequency of a time-evolving network.

2. Methods

2.1. Information Content

For the sake of completeness, we here include a short overview of the Information Content metric, which is the basis of the proposed methodology. For a more complete description the reader may refer to [15].

The rationale behind the definition of the Information Content is that a regular network, or more generally any network presenting a mesoscale structure, displays strong correlations between the nodes' connectivity patterns. The information encoded by pairs of such correlated nodes is thus redundant, as the connections of one of them almost completely define the second one's. A clear example is yielded by networks with a strong community structure, in which two nodes belonging to the same community usually share most of their neighbours. If these two nodes are substituted by a single one with a similar connectivity pattern, the network structure does not substantially change. On the other hand, consider two nodes belonging to a random network: as their connectivity patterns will be substantially different, their merging would induce an important loss of information about the original network structure. The measurement of such loss of information can then be used to numerically assess the presence of a mesoscale structure.

Following this idea, the algorithm iteratively identifies the pair of nodes whose merging would entail the smallest information loss, i.e., that share most of their connections. Suppose a network composed of $N$ nodes and fully defined by its adjacency matrix $A$, an $N \times N$ matrix whose element $a_{ij}$ is equal to one when a link between nodes $i$ and $j$ exists, and zero otherwise. The analysis of two nodes $i$ and $j$ thus entails, firstly, the creation of a vector of differences $d$, with elements $d_k = 1 - \delta_{a_{ik}, a_{jk}}$, $\delta$ being the Kronecker delta. The number of elements of $d$ whose value is one thus indicates the number of neighbours that are not shared by nodes $i$ and $j$. Secondly, the information encoded by $d$ is assessed through the classical Shannon entropy, here scaled by the network size:

$$I = N \, H(d) = - N \left( p_0 \log_2 p_0 + p_1 \log_2 p_1 \right) . \tag{1}$$

Following the standard notation, $N$ denotes the number of nodes in the network. Additionally, $p_0$ and $p_1$ respectively indicate the frequency of zeros and ones in $d$; note that, while $d$, $p_0$, and $p_1$ are different for each pair of nodes $(i, j)$, the corresponding subindices have been omitted for the sake of clarity. $I$ is equal to zero only when all neighbours are shared, or in the special case where $i$'s neighbourhood is the complementary of $j$'s. Therefore, $I$ represents the quantity of information required to reconstruct $i$'s connections given $j$'s ones, or the quantity of information lost when both nodes are merged. The pair of nodes minimising $I$ is then merged, and the quantity of information lost in the process is approximated by $I$ itself. The process is iteratively repeated until one single node remains, the final Information Content being the sum of the information lost in all steps.

As shown in a previous work [15], low IC values indicate the presence of some kind of regularity in the link arrangement, including communities, hubs, or core-periphery configurations. To illustrate, let us consider the case of a star-like graph, in which all nodes are connected to a central one; the resulting adjacency matrix, for an example with four nodes, would be

$$A = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} . \tag{2}$$

By construction, all peripheral nodes are equal, as they share the same connectivity pattern; hence, the vector of differences between, e.g., the second and third nodes will be $d = (0, 0, 0, 0)$, and the information lost when merging them will also be zero. All peripheral nodes can then be merged into a single one without any loss of information: the final $IC$ for this regular network is therefore zero. On the other hand, it is easy to see that the metric is maximised by Erdős-Rényi graphs, as no correlation is expected between different nodes. For instance, for a link density of $0.5$, half of the elements of $d$ (between any pair of nodes) would be expected to be one, and hence $I = N$ (from (1), when $p_0 = p_1 = 0.5$). Note that exceptions can be found: an Erdős-Rényi graph may by chance display a regular structure, and therefore a low $IC$; yet, such instances are extremely infrequent, and do not modify the expected behaviour of the metric.

As a final note, it is worth pointing out that the flexibility of the metric in detecting multiple types of regularities comes at the cost of being computationally intensive. Specifically, given a network of $N$ nodes, $N - 1$ merging iterations have to be performed; furthermore, in each one of such iterations, the connectivities of all possible pairs of nodes have to be compared, each comparison requiring $\mathcal{O}(N)$ operations. This results in a complexity scaling as $\mathcal{O}(N^4)$. The Information Content is therefore not suited for networks of more than a few thousand nodes.
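To make the procedure concrete, the following Python sketch implements the merging loop described above. It is a minimal, unoptimised illustration: the function names are ours, the merged node keeps the union of the two neighbourhoods (one possible choice, not specified in the text), and the information lost at each step is approximated by $N \cdot H$ computed on the current (reduced) network.

```python
import numpy as np

def merge_cost(a, b):
    """Information lost when merging two nodes with connectivity
    vectors a and b (eq. (1)): N * H(d), with d the vector of
    differences between the two neighbourhoods."""
    d = (a != b)
    p1 = d.mean()                          # frequency of ones in d
    p0 = 1.0 - p1                          # frequency of zeros in d
    h = sum(-p * np.log2(p) for p in (p0, p1) if p > 0)
    return d.size * h

def information_content(adj):
    """Information Content of a binary network: iteratively merge the
    pair of nodes whose merging entails the smallest information loss,
    accumulating the losses until a single node remains."""
    A = np.array(adj, dtype=int)
    ic = 0.0
    while A.shape[0] > 1:
        m = A.shape[0]
        best_cost, bi, bj = np.inf, 0, 1
        for i in range(m):
            for j in range(i + 1, m):
                c = merge_cost(A[i], A[j])
                if c < best_cost:
                    best_cost, bi, bj = c, i, j
        ic += best_cost
        # merge bj into bi, keeping the union of their neighbourhoods
        A[bi] = np.maximum(A[bi], A[bj])
        A[:, bi] = np.maximum(A[:, bi], A[:, bj])
        np.fill_diagonal(A, 0)             # no self-loops
        A = np.delete(np.delete(A, bj, axis=0), bj, axis=1)
    return ic

# The four-node star graph of eq. (2) can be merged without any
# information loss, so its IC is zero:
star = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]])
print(information_content(star))           # 0.0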

2.2. Comparing Two Networks

Suppose two networks, each one described by a corresponding adjacency matrix, $A_1$ and $A_2$, which have been observed under different conditions. Firstly, the simplest case involves two independent networks, representing two different systems, albeit of the same size, i.e., with the same number of nodes. Secondly, these adjacency matrices can represent different layers of a multiplex network [20]. Finally, the networks may represent different snapshots of the same time-evolving system [21]. In all cases, changes between $A_1$ and $A_2$ can be encoded in a matrix $D$, whose element $d_{ij}$ is equal to one when the corresponding link differs in the two analysed networks, and zero otherwise. Note that $D$ can be interpreted as the adjacency matrix of a network, whose links depict a change between $A_1$ and $A_2$.

With respect to the mesoscale structure of the difference network $D$, only two situations can be encountered. First, changes between $A_1$ and $A_2$ can be random, for instance, due to measurement noise, or more generally due to uncorrelated forces; $D$ would then resemble the adjacency matrix of a random network. Second, if changes between $A_1$ and $A_2$ are somehow correlated, the resulting network should present some kind of mesoscale structure. For instance, if changes only affect the connections of one node, $D$ will be star-shaped. All intermediate situations, e.g., with only a part of the links modified at random, can be interpreted as special (and noisy) cases of the latter situation.

If changes are not random, and are thus correlated and form a mesoscale structure, the latter should be detected by the $IC$ metric. An algorithm for the comparison of different networks can thus be designed, composed of the following steps: (i) calculate the difference network $D$, with $d_{ij} = 1$ whenever $a^{(1)}_{ij} \neq a^{(2)}_{ij}$; (ii) calculate the $IC$ of the network $D$; (iii) compare the result with the value obtained in an ensemble of equivalent random networks. As for the latter point, several ways of normalising the obtained value are available. Firstly, one can simply calculate

$$IC^{*} = \frac{IC(D)}{\langle IC_{\mathrm{rand}} \rangle} , \tag{3}$$

where $\langle IC_{\mathrm{rand}} \rangle$ is the average Information Content obtained in an ensemble of random networks with the same number of nodes and links as $D$. $IC^{*}$ typically takes values in $[0, 1]$, with values close to one indicating a random structure of the network $D$, and thus a random difference between $A_1$ and $A_2$. On the other hand, values of $IC^{*}$ substantially smaller than one, or close to zero, suggest the presence of a structure in the changes. Note that it is also possible to obtain values of $IC^{*} > 1$, indicating that $IC(D)$ is higher than what would be expected in a random network; $D$ is then random, and the obtained value is the result of statistical fluctuations or of the use of a too small random ensemble.

While $IC^{*}$ provides a quantitative assessment of the structure of changes, it yields little information about its statistical significance. In order to tackle this issue, a normalisation based on a Z-Score can be used:

$$Z = \frac{IC(D) - \langle IC_{\mathrm{rand}} \rangle}{\sigma_{IC_{\mathrm{rand}}}} . \tag{4}$$

As in the previous case, $\langle IC_{\mathrm{rand}} \rangle$ denotes the average $IC$ obtained in an ensemble of equivalent (same number of nodes and links) random networks, while $\sigma_{IC_{\mathrm{rand}}}$ denotes the corresponding standard deviation. $Z$ values close to zero indicate random modifications between $A_1$ and $A_2$, while negative values indicate modifications driven by some structure. The advantage of this formulation is that $Z$ can easily be transformed into a $p$-value, provided the distribution of $IC_{\mathrm{rand}}$ is normal, a condition that fails to hold only for very small random networks.

It is finally worth noting how $IC^{*}$ and $Z$ are two complementary sides of the same coin. The former allows quantitatively assessing the structure of changes between two networks, and creating rankings when multiple comparisons are available; the latter allows determining the corresponding statistical significance. Such duality in the metric definition will be exploited in the examples of Section 3.
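Putting the pieces together, the complete test can be sketched as follows. This is an illustrative implementation, assuming binary symmetric adjacency matrices, reusing the `information_content` function sketched in Section 2.1, and generating the random ensemble by uniformly drawing networks with the same number of nodes and links (all names are ours):

```python
import numpy as np
from scipy import stats

def ic_test(A1, A2, n_random=100, rng=None):
    """Compare two binary networks: build the difference network D,
    compute its IC, and normalise it against an ensemble of random
    networks with the same number of nodes and links (eqs. (3)-(4))."""
    rng = rng if rng is not None else np.random.default_rng()
    D = (np.asarray(A1) != np.asarray(A2)).astype(int)
    np.fill_diagonal(D, 0)
    n = D.shape[0]
    pairs = np.transpose(np.triu_indices(n, 1))
    n_links = D[np.triu_indices(n, 1)].sum()

    ic_obs = information_content(D)

    ic_rand = []
    for _ in range(n_random):
        R = np.zeros((n, n), dtype=int)
        idx = rng.choice(len(pairs), size=n_links, replace=False)
        R[pairs[idx, 0], pairs[idx, 1]] = 1
        R = R + R.T                        # same nodes and links as D
        ic_rand.append(information_content(R))
    mu, sigma = np.mean(ic_rand), np.std(ic_rand)

    ic_star = ic_obs / mu                  # eq. (3)
    z = (ic_obs - mu) / sigma              # eq. (4)
    p_value = stats.norm.cdf(z)            # small p: structured changes
    return ic_star, z, p_value
```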

2.3. Validation on Synthetic Networks

A simple way of validating the proposed algorithm involves the use of a set of controlled evolutions, i.e., evolutions governed by rules ensuring that the start and end points are known topologies. Given these two networks, $A_1$ and $A_2$, we construct a third network $A_p$ whose links are drawn from $A_2$ with probability $p$ and from $A_1$ with probability $1 - p$; we finally compare $A_p$ with the initial network $A_1$. Note that $p = 0$ yields $A_p = A_1$, and hence an empty difference network $D$; on the other hand, $p = 1$ implies that $A_p = A_2$, such that $D$ encodes all the differences between $A_1$ and $A_2$. Therefore, $p$ controls the degree of morphing between $A_1$ and $A_2$.
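For reference, this morphing procedure can be sketched in a few lines (assuming binary symmetric adjacency matrices; `morph` is our own illustrative name):

```python
import numpy as np

def morph(A1, A2, p, rng=None):
    """Intermediate network between A1 and A2: each potential link is
    drawn from A2 with probability p and from A1 with probability
    1 - p, so that p = 0 returns A1 and p = 1 returns A2."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = np.triu(rng.random(A1.shape) < p, 1)
    mask = mask | mask.T                   # keep the matrix symmetric
    return np.where(mask, A2, A1)
```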

Several evolutions of interest are analysed in Figure 1. The four columns, from left to right, respectively represent: the initial ($p = 0$) and final ($p = 1$) networks; the difference network $D$ for the maximum rewiring $p = 1$; and the evolution of the $\log_{10}$ of the $p$-value of $Z$, as a function of the rewiring $p$, calculated between the original and the rewired network. While, for the sake of clarity, the depicted adjacency matrices have a small size, all results have been obtained with substantially larger networks and averaged over multiple random realisations.

The first row describes the rewiring of a random network into a second random one. As there is neither correlation nor structure between the links that have changed, the resulting matrix $D$ presents a random connectivity and no mesoscale; consequently, the drop in $IC$ never becomes statistically significant, as depicted in the right panel. The second example, while being similar, presents an important difference: while both the initial and final networks are random, the latter is obtained by reversing the set of neighbours of one single node; see the corresponding matrix $D$. Note that, in this case, while the initial and final points are random, the evolution process is a structured one. This is correctly detected by the proposed metric, with the $p$-value dropping below the significance level already for small values of the rewiring $p$.

Similar behaviours are observed in the third and fourth examples, which describe two different networks converging towards a community structure. As creating or modifying a community requires links to be activated and deactivated in a targeted way, the metric detects the presence of a mesoscale in $D$. Finally, the last example consists of a situation in which both the starting and final networks have the same community structure, both being contaminated by random noise. Accordingly, the difference between them has a random nature, and the $p$-value never becomes statistically significant.

Some general conclusions can be drawn from these results. Firstly, and most importantly, the structure of the two networks $A_1$ and $A_2$ is not relevant; only the changes required to evolve from the former to the latter are. Specifically, two completely random networks may be associated with a structured change between them, and two well-structured networks may differ in a random fashion. Secondly, the presence of a statistically significant structured process is the result of a trade-off between the fraction of modified links and their organisation. For instance, in the second example of Figure 1 a statistically significant result is reached for a much smaller rewiring than in the third example, as all modified links belong to the same node. In other words, the change of a few strongly correlated links can be as significant as the change of many links whose mutual relationship is weaker.

2.4. Comparison with Other Approaches

Among the literature dealing with the problem of complex network comparison [11], two alternative approaches are commonly used: the comparison of network topological properties on one hand and of the raw adjacency matrices on the other.

The former approach is the most common: one or more metrics synthesising the network structure are calculated and compared. Two advantages are worth highlighting. Firstly, this method allows comparing heterogeneous networks, i.e., networks that may have different numbers of nodes and links, provided the metrics are normalised against equivalent random graphs. Nodes of the two networks may also not share identities, such that it is possible to compare, for instance, genetic and protein networks. Secondly, as the researcher fixes the topological metrics to be used, the analysis can be focused on specific aspects of the network structure (e.g., modularity, presence of triangles, etc.).

On the other hand, the second strategy is based on directly comparing two or more adjacency matrices, for instance, through the use of correlations or entropy measures, to quantify the magnitude of the difference between them. In other words, it provides an estimation of the number of links or nodes that must change to map one network into the other [11, 22–24]. Both approaches can then be seen as complementary, corresponding to a genotype/phenotype analysis of the change.

For the sake of completeness, this section compares the proposed methodology with the latter strategy, i.e., with approaches for directly comparing two or more adjacency matrices. The objective is to show that the scope of the two is not the same: while the latter aims at assessing how many links have changed, the $IC$-based one focuses on why these have changed, irrespective of their number. The examples here reported are designed to highlight such difference, to clarify the added value of the $IC$ metric, and to correctly position it as a tool complementary to other metrics.

2.4.1. Correlation

An interesting and yet simple way of comparing two networks, or two layers of a multiplex network, is to calculate the correlation between the links present in both of them. In other words, given two networks $A_1$ and $A_2$, the correlation expresses the probability that if $a^{(1)}_{ij} = 1$, then $a^{(2)}_{ij} = 1$. More generally, one can calculate a global overlap $O$ as the total number of pairs of nodes simultaneously connected by a link in both networks, as proposed in [16], i.e.,

$$O = \sum_{i < j} a^{(1)}_{ij} \, a^{(2)}_{ij} . \tag{5}$$
Equation (5) can further be normalised by considering the number of links present in both networks. It has recently been shown [25, 26] that such global overlap has important implications for the percolation, and thus for the robustness, of multiplex networks, as the presence of correlated (redundant) links slows down the disruption of the giant component of the network under random link removal.
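As a reference, eq. (5), together with one possible normalisation (here, by the number of pairs connected in at least one of the two networks; other normalisations are equally valid), can be computed as follows (names are ours):

```python
import numpy as np

def global_overlap(A1, A2, normalise=True):
    """Global overlap (eq. (5)): number of node pairs simultaneously
    connected in both networks; optionally normalised by the number
    of pairs connected in at least one of them."""
    iu = np.triu_indices(A1.shape[0], 1)
    both = np.logical_and(A1[iu], A2[iu]).sum()
    if not normalise:
        return int(both)
    either = np.logical_or(A1[iu], A2[iu]).sum()
    return both / either if either else 1.0
```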

Two extreme situations can be encountered when considering the correlation between two networks: when all links are equal in both networks, and thus the correlation is maximal; and when links are reciprocal, i.e., $a^{(2)}_{ij} = 1 - a^{(1)}_{ij}$, yielding a maximally negative correlation. The difference network $D$ would, respectively, be a null and a complete matrix, and in both cases $IC(D) \approx 0$; in other words, a strong structure drives the evolution between both networks. More interesting situations arise in the middle range, i.e., when only part of the links is different. To illustrate, let us consider the situation depicted in the first two rows of Figure 1, and suppose the initial and final matrices are random and have the same link density of $0.5$. The global overlap then takes comparable values in both cases (as a substantial fraction of the activated links is expected to coincide); yet, the two resulting $Z$ values are completely different. This proves that the global overlap metric does not provide information on the underlying mechanism driving such difference, as the same correlation value may be the result of random or structured changes. Therefore, the proposed approach is complementary to the global overlap. Moreover, a metric very similar to the global overlap, i.e., the Euclidean distance between adjacency matrices, has recently been found to be superior to other more complicated graph diffusion kernel distances [24]. We can therefore conclude that the proposed approach is not equivalent, but instead an alternative, to this whole family of metrics.

2.4.2. von Neumann Entropy

The von Neumann entropy ($S$) is a metric initially introduced in quantum mechanics to assess the degree of mixing of the quantum states encoded in a probability distribution, and hence in a density matrix $\rho$. While the concept of a state probability distribution is not defined for complex networks, the metric can still be calculated over any density matrix, i.e., any Hermitian, positive semidefinite matrix of unit trace. As previously shown [17, 18], $S$ can be calculated over the density matrix obtained from the Laplacian as

$$S = - \operatorname{Tr} \left( \rho \log_2 \rho \right) = - \sum_{i=1}^{N} \lambda_i \log_2 \lambda_i , \qquad \rho = \frac{L}{\bar{d} \, N} , \tag{6}$$

where $\bar{d}$ is the average degree, $N$ the number of nodes composing the network, $L$ the corresponding Laplacian matrix, and $\lambda_i$ the eigenvalues of $\rho$. The von Neumann entropy has been demonstrated to be a good quantifier of the regularity of a network structure, with higher values obtained in graphs with uniform degree distributions, and smaller values in heterogeneous networks [27]. In a way similar to our approach, $S$ has been used to compare different networks [28], but with the limitations discussed below.
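For illustration, eq. (6) can be evaluated numerically as follows (a sketch for binary undirected networks; the function name is ours, and note that $\operatorname{Tr}(L) = \bar{d} N$, the sum of the degrees):

```python
import numpy as np

def von_neumann_entropy(A):
    """von Neumann entropy (eq. (6)): entropy of the eigenvalues of
    the density matrix rho = L / trace(L), with trace(L) equal to the
    sum of the degrees, i.e., the average degree times N."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian
    lam = np.linalg.eigvalsh(L / np.trace(L))
    lam = lam[lam > 1e-12]                 # 0 log 0 = 0 by convention
    return float(-np.sum(lam * np.log2(lam)))
```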

Let us suppose two networks with the same number of nodes and links, $A_1$ and $A_2$, respectively having a random and a modular structure. More specifically, half of the elements of the former adjacency matrix are randomly set to one, while the elements of the latter are set to one whenever both nodes belong to the same half of the network, i.e., $a^{(2)}_{ij} = 1$ for $i, j \le N/2$ or $i, j > N/2$ (and zero otherwise). In the limit of large values of $N$, both networks will have a similar average degree, i.e., $\bar{d} \approx N/2$, with links equally distributed among nodes. Due to the dependency of $S$ on the degree distribution, both networks are expected to have similar values of the entropy.

It is easy to construct situations in which the difference network $D$ is equal to either $A_1$ or $A_2$. For instance, starting from a random network with a link density of $0.5$, the first case is obtained when this is compared with another random network of the same size and link density; on the other hand, the second case is obtained by inverting the activation of the links in the upper left and bottom right quarters of the adjacency matrix. The behaviour of the von Neumann entropy in these two situations is depicted in Figure 2. Note that the right panels depict the evolution of the Z-Score of $S$, that is, the number of standard deviations $S$ deviates from the values obtained in random networks with the same number of nodes and links. In synthesis, these results suggest that the von Neumann entropy and the $IC$ metric are not equivalent: while both are designed to detect regularities in a network topology, the latter is able to detect situations that are not statistically significant for the former.

3. Results

3.1. World-Wide Air Transport Network

As a first test case, we here consider the networks created by flights between, respectively, the top-50 and the top-200 world airports, as extracted from the Sabre Airport Data Intelligence data set. As previously proposed [29, 30], nodes represent airports, pairwise connected when the total number of passengers per month who used a direct flight between both airports is larger than a fixed threshold. 72 snapshots are available, representing the monthly evolution of the system between January 2010 and December 2015.

The air transport network is known to present a strong seasonality, both on the short (i.e., daily) and long scales (monthly and yearly) [31]. This magnifies the importance of using a correct temporal representation, as projecting the system into a single atemporal network may result in severe topological distortions [32]. This fact is here confirmed by Figure 3, which represents the evolution of three topological metrics (link density, modularity, and assortativity) through time; note the annual sinusoidal behaviour of all curves. For a detailed discussion of the effect of including different sets of airports on the observed topological metrics, the reader can refer to [10].

The evolution of the $\log_{10}$ of the $p$-value of the $Z$ test, for all possible pairs of months, is depicted in the top panels of Figure 4. Light colours represent changes with a random structure; dark colours represent the presence of a mesoscale regularity. In the case of the top-50 airports, it is interesting to see bright squares on the main diagonal, corresponding to the summer and winter seasons; this is to be expected, as flights seldom change within the same season, and differences are thus the consequence of small and random adjustments in the schedules. The yearly seasonality of the air transport is also evident in the case of the top-200 airports, with bright colours concentrating around the main diagonal and around the one- and two-year diagonals. When the time distance between two snapshots is greater than two years, and when consecutive summer/winter pairs are compared, the test suggests that changes are not random: they thus correspond to systematic reconfigurations of the air transport market, driven by business considerations, which cannot be explained by a random rewiring alone.

As a comparison, the bottom left panel of Figure 4 depicts the evolution of the normalised global overlap for the top-50 airports. While prima facie the colour map is similar to the one presented in the top left panel, several differences can be observed, especially far away from the main diagonal, i.e., for distances greater than two years. In order to clarify these differences, the bottom right panel reports a scatter plot comparing the values yielded by the global overlap and by the $Z$ test. While there is a general positive correlation, it is possible to find completely different $p$-values for the same overlap, spanning several orders of magnitude. This suggests that small changes, i.e., high overlaps, can be due to both (almost) random and strongly structured evolutions. The $IC$ thus yields a more complete view of the evolution of the network, providing information (specifically, the nature of the changes) that is disregarded by other metrics.

3.2. Hospital Contact Network

As a second example, we here consider the temporal network of contacts in the geriatric unit of a Lyon university hospital, including patients and health care workers, as described in [33, 34]. Nodes represent 46 health care workers and 29 patients, and links represent close-range interactions between them, as detected by wearable sensors. The full data set spans from Monday, December 6, 2010, at 1:00 pm to Friday, December 10, 2010, at 2:00 pm, with a temporal resolution of 20 seconds. We extracted a set of 97 contact networks by aggregating all contacts made within a one-hour interval, in order to avoid the sparsity characterising higher temporal resolutions. Basic topological properties of the resulting networks, i.e., the median link density, the median and standard deviation of the average shortest path length, and the median clustering coefficient, were also computed to characterise these graphs.

In a way similar to Figure 4, Figure 5 (left) represents the evolution of the structure of changes, for all pairs of available networks. Note that, in this case, $IC^{*}$ is used, such that values close to one (smaller than one) indicate random (respectively, structured) changes. A clear trend, with a 24-hour period, can be identified, as confirmed by the central panel, depicting the evolution of the link density across several days. A comparison between $IC^{*}$ and the global overlap can be made by considering the right panel of Figure 5, representing an equivalent analysis performed with the latter metric. Some interesting situations can be detected. For instance, one can observe that several time windows correspond to a very high $IC^{*}$ and, at the same time, to a very low global overlap; see, for instance, the lower part of both colour maps. A high $IC^{*}$ can nevertheless also be found in the square on the main diagonal that corresponds to a high global overlap. The presence of a random change between two snapshots is thus not correlated with their overlap: it can appear when either few or most of the links are rewired.

More generally, from the analysis of this system it can be concluded that it presents two different regimes. On one hand, most of the time the contact network evolves in a structured manner, reflecting the fact that health care workers perform regular tasks. On the other hand, nights are characterised by fewer contacts, which develop in a random fashion, possibly as the result of emergencies and other unplanned situations.

3.3. Brain Functional Networks

As a third case study, we present an analysis of the brain activity of multiple healthy subjects during a resting state, as made available by the Human Connectome Project (HCP) [35]. Magnetoencephalographic (MEG) recordings [36] were performed on a group of individuals, obtaining for each of them a set of time series, each representing one MEG sensor. Note that only a subset of the original group has been considered here, in order to ensure homogeneity in the number of channels and in the time series length. Functional networks were then reconstructed as described in [7]: firstly, by extracting the time series corresponding to four standard frequency bands (theta, alpha, beta, and gamma); secondly, by calculating the Mutual Information (MI) between each pair of channels; and finally, by binarising the resulting networks, through a threshold defined by surrogate time series obtained via a block-permutation procedure [37]. The final result is thus a set of four functional networks per subject, representing brain activity at rest in four frequency bands. For further details about the recording and data processing, the reader is referred to [7, 35].

As a first objective, we here want to show that the proposed algorithm can be used to quantify and describe the nature of the differences between the networks representing different frequency bands. The average and standard deviation of $Z$ when comparing each person's four networks are reported in Table 1. The resulting values consistently indicate the presence of structural differences between the networks corresponding to different frequency bands. This is to be expected, as these bands are supposed to correspond to different functional tasks, contributing differently to the overall resting state activity, and therefore not to be equivalent [38, 39].

A quite different picture nevertheless arises when one shifts the focus to individual subjects. Figure 6 (left) depicts the average and standard deviation of the $Z$ values for each subject, i.e., corresponding to pairwise comparing the four networks of each subject. A greater intersubject variability emerges, with the average $Z$ changing substantially from subject to subject. An even stronger effect can be observed for the $Z$ between frequency bands alpha and beta, as depicted in Figure 6 (right): the two extreme subjects present markedly different values.

This last result highlights an important fact: alpha and beta bands can contribute to the global resting state activity in very different ways. In some subjects they have completely different topologies, while in others their differences are only due to random fluctuations. More generally, different frequency bands interact in a way that is subject-dependent, thus yielding a high intersubject variability. These results are aligned with previous findings, as MEG studies report a low reproducibility of resting state analyses in test-retest experiments [40, 41].

3.4. Finding a System’s Natural Frequency: Network Self-Correlations and Correlograms

If a set of networks represents the evolution of the connectivity of a system through time, the parallelism with time series analysis can be pushed one step further by defining the equivalent of a network autocorrelation function. This requires calculating the similarity of the sequence of networks with itself, when one of the two instances is time-displaced with respect to the other.

Let us denote by $S$ the similarity matrix, whose element $s_{t_1, t_2}$ encodes the similarity of the two networks respectively representing the system at times $t_1$ and $t_2$; note that such a matrix is completely equivalent to the results presented in Figures 4 and 5. The autocorrelation of the sequence of networks, for a time displacement $\tau$, is then given by

$$R(\tau) = \frac{1}{T - \tau} \sum_{t = 1}^{T - \tau} s_{t, t + \tau} , \tag{7}$$

$T$ being the total number of available snapshots. In the r.h.s. of (7), the $IC$-based measure is used as a proxy of the similarity between two networks; to be more precise, this self-correlation thus assesses how a sequence of networks is intentionally equivalent to itself, excluding the presence of uncorrelated noise (unintentional changes) in the links. $R(\tau)$ is, by construction, equivalent to the average of the $\tau$-th diagonal of $S$, i.e., of the matrices depicted in Figures 4 and 5.
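For illustration, given a precomputed similarity matrix (for instance, the $IC^{*}$ or $p$-value maps of Figures 4 and 5), eq. (7) reduces to averaging the displaced diagonals; a minimal sketch, with illustrative names:

```python
import numpy as np

def network_autocorrelation(S, tau):
    """Eq. (7): average of the tau-th diagonal of the similarity
    matrix S, whose element S[t1, t2] encodes the similarity between
    the snapshots at times t1 and t2."""
    T = S.shape[0]
    return np.mean([S[t, t + tau] for t in range(T - tau)])

def correlogram(S):
    """Full correlogram: R(tau) for all admissible displacements.
    Maxima indicate the natural frequencies of the evolving system."""
    return np.array([network_autocorrelation(S, tau)
                     for tau in range(S.shape[0])])
```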

By calculating $R(\tau)$ over all values of $\tau$, it is possible to construct a full correlogram of the evolution of the studied system, with the maxima representing its natural frequencies. In order to illustrate this idea, Figure 7 depicts the correlograms for the air transport networks (left panel) and the hospital networks (right panel); the brain functional networks have not been considered here, as they do not represent a temporal evolution. The respective $S$ matrices encode different variants of the metric: the $\log_{10}$ of the $p$-value of $Z$ for the former (Figure 4), and $IC^{*}$ for the latter (Figure 5); as a consequence, the axes of the two panels have different scales. This is not a problem as long as the meaning is similar; in this case, both variants indicate highly similar networks, and both lie in the top part of the graph. As should be expected, the maximum in both correlograms is located at $\tau = 0$. Local minima can additionally be found for the hospital data set, corresponding to a daily activity cycle; and in the case of the air transport, indicating a yearly seasonality.

As a final remark, it has to be highlighted that the concept of correlogram is a general one, and not tied to the use of the $IC$ metric: on the contrary, any metric taking as input two adjacency matrices and yielding a scalar value can be used. To illustrate, the global overlap may be introduced within Eq. (7). This would nevertheless result in a change in the meaning of the output: while in the $IC$ case the correlogram indicates the time scale at which results appear due to nonrandom forces, the overlap-based version would indicate the time scale at which changes are minimised, irrespective of how they were generated.

4. Discussion and Conclusions

Beyond the quantification of the magnitude of the difference between two networks, a more complex and challenging problem is to detect if such difference is due to random modifications or to organised forces. The two problems are complementary and not necessarily correlated. The network structure of a system may substantially change between two measurements, but still be the same topology deformed by strong observational noise. On the other hand, small changes may be due to the targeted (intentional) attempt of, e.g., promoting a node. Although the former problem has extensively been tackled in the literature, and specific metrics have been created and compared, less attention has been devoted to the latter.

Given two or more networks, in this contribution we proposed a way of answering the following question: do the observed changes have a merely stochastic nature, or do they, on the contrary, display a form of organisation? We presented the use of the Information Content [15] as a way of assessing the presence of mesoscale structures in the difference between two networks. The effectiveness of the metric has been demonstrated on several synthetic network evolutions, and tested with three real data sets, respectively representing social, technological, and biological systems. We additionally discussed the differences between the proposed approach and two a priori similar metrics, i.e., the network correlation [16] and the von Neumann entropy [17, 18]: while able to detect the magnitude of the evolution, they are insensitive to its nature, and are therefore not suitable to discriminate between random and organised changes.

The availability of a similarity metric further allows adapting some standard techniques in time series analysis to the study of the evolution of networked systems. We here considered the case of self-correlations and correlograms and showed that the natural frequency of the system, in terms of recurrence of intentional network changes, can be estimated by the maxima in the network self-correlation. While not explicitly discussed here, the proposed analysis can be extended to the more general case of the cross-correlation, in which multiple sequences of networks, for instance, representing two or more systems, can be pairwise analysed. Correlograms could also be used to select the best time resolution for sampling temporal networks, a topic still to be explored [42].

As a final thought, a hidden assumption of this work is that the networks to be compared are expected to be topologically compatible, i.e., to have the same number of nodes. While this holds for multiplex networks, general multilayer and temporal graphs can have variable sizes. The proposed methodology can still be used, provided an initial preprocessing is performed: for instance, the cores composed of nodes common to both networks could be isolated, as sketched below; while some information would be lost, the main evolutionary trends could still be characterised. Furthermore, networks whose nodes do not share an identity could in principle be compared; this would allow studying networks coming from different systems, e.g., respectively representing brain activity and air transport. Nevertheless, it would firstly be necessary to match the nodes of both networks, that is, to create a map relating each node of the first network with the topologically equivalent one of the second, by means of, e.g., SimRank [43] or similar algorithms.
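To illustrate this preprocessing, the common core of two graphs can be isolated in a few lines (a hypothetical helper based on networkx; the function name is ours):

```python
import networkx as nx

def common_core(g1, g2):
    """Restrict two graphs to their shared node set, so that the
    IC-based comparison can be applied to same-size networks."""
    shared = set(g1.nodes) & set(g2.nodes)
    return g1.subgraph(shared).copy(), g2.subgraph(shared).copy()
```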

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China (Grants No. 61650110516 and No. 61601013).