Computational Intelligence and Neuroscience

Special Issue: Academic Software Applications for Electromagnetic Brain Mapping Using MEG and EEG


Research Article | Open Access


Eleni G. Christodoulou, Vangelis Sakkalis, Vassilis Tsiaras, Ioannis G. Tollis, "BrainNetVis: An Open-Access Tool to Effectively Quantify and Visualize Brain Networks", Computational Intelligence and Neuroscience, vol. 2011, Article ID 747290, 12 pages, 2011.

BrainNetVis: An Open-Access Tool to Effectively Quantify and Visualize Brain Networks

Academic Editor: Sylvain Baillet
Received: 17 Sep 2010
Revised: 25 Nov 2010
Accepted: 31 Dec 2010
Published: 14 Mar 2011


This paper presents BrainNetVis, a tool for brain network modelling and visualization that provides both quantitative and qualitative network measures of brain interconnectivity. It emphasizes the needs that led to the creation of this tool by presenting similar works in the field and by describing how our tool contributes to the existing landscape. It also describes the methods used for the calculation of the graph metrics (global network metrics and vertex metrics), which carry the brain network information. To make the methods clear and understandable, we use an exemplar dataset throughout the paper, on which the calculations and the visualizations are performed. This dataset consists of an alcoholic and a control group of subjects.

1. Introduction

One of the major issues in neuroscience is to describe how different brain areas communicate with each other during perception, cognition, and action, as well as during spontaneous activity in the default or resting state. Two main approaches for capturing and localizing brain activity motifs have been proposed: univariate spectrum-based analysis and functional connectivity analysis [1]. Friston [2] defined functional connectivity as the statistical dependence between the activations of distinct and often well-separated neuronal populations.

Network models and graph theory provide a common framework for describing brain functional connectivity [3–5]. The interdependence between brain areas is estimated using multivariate neurophysiological signals (EEG, MEG, ECoG) and/or haemodynamic response images (fMRI). A network is then formed by mapping either brain areas or channels to vertices and by considering an edge between two vertices if and only if the estimated interdependence is above a threshold. Threshold selection is a delicate step, and there is currently no established way of favouring a specific threshold value. In practice, a broad range of threshold values is used to characterize the network. However, the authors propose two alternative approaches to selecting a threshold value, based either on group statistics between specific graph-theoretic measures of the populations under analysis [6] or on a signal-based technique that selects the optimal visualization threshold using surrogate datasets (artificially generated ensembles of data aimed at revealing the most significantly coupled brain regions) to correctly identify the most significant correlation patterns [7]. The next step in the analysis, after edge identification, is to compute network statistics and characterize the network. Using this characterization, one can then draw conclusions on the effect of illnesses or of cognitive loads on functional connectivity [6–11].

In this study, we briefly refer to pairwise (bivariate) and multivariate interdependence measures, as well as linear and nonlinear ones, that have been successfully used as indices of cerebral engagement [12]. This information is important for the correct usage of the tool, especially for nonexpert users, as the application of these measures to the raw EEG data produces the input to our tool. The BrainNetVis tool provides a dynamic snapshot of the highly complex underlying neural mechanisms by means of graph visualization [13]. BrainNetVis is an open-access multiplatform tool, provided by ICS-FORTH, for graph representation and brain network visualization. Please note that BrainNetVis calculates the metrics presented below on the synchronization matrices (adjacency matrices), which the user must calculate in advance. The preprocessing section (Section 3.2), however, briefly presents some widely used techniques to assess functional brain connectivity and form the adjacency matrix.

At this point, we refer to some already existing tools in the field. These tools capture different kinds of EEG information than BrainNetVis, and they may be used in a complementary fashion. One of them is EEGLAB [14], which we have used extensively to obtain a better picture of the brain areas involved. EEGLAB is an interactive Matlab toolbox for processing continuous and event-related EEG, MEG, and other electrophysiological data, incorporating independent component analysis (ICA), time/frequency analysis, artifact rejection, event-related statistics, and several useful modes of visualization of the averaged and single-trial data. EEGLAB also offers dipole localization functions. Some of the metrics that we implement have also been implemented in the Brain Connectivity Toolbox (a Matlab toolbox) by Rubinov and Sporns [15]. Other related toolboxes include MEA-Tools [16] and ERPWAVELAB [17]. In these toolboxes, however, the measures for quantifying channel interactions are mainly confined to the temporal cross-correlation [16] and the coherence spectrum [17, 18]. More sophisticated interdependence techniques, addressing not only linear but also nonlinear synchronization and causality, are nevertheless available and have been applied to certain pathologies such as epilepsy [12]. Such measures can complement the graph-theoretic indices that characterize brain networks, as discussed in [19], and can be used as input to BrainNetVis.

The paper is organized as follows. Section 2 presents essential information on the different ways of graph modelling and manipulations, using BrainNetVis. Section 3 refers to the preprocessing needed (Section 3.2), the most commonly used menu calls and the GUI (Section 3.3), and the possible graph visualization options (Section 3.4). Our conclusion is given in Section 4.

2. Network Analysis

Before presenting BrainNetVis, it is important to provide here some basic definitions from graph theory.

A graph 𝐺=(𝑉,𝐸) is defined on a set of vertices 𝑉={𝑣1,…,𝑣𝑛} and a set of edges 𝐸={𝑒1,…,𝑒𝑚}, where each edge 𝑒∈𝐸 is an ordered or unordered pair of vertices. An ordered pair 𝑒=(𝑢,𝑣)∈𝑉×𝑉 is called a directed edge, while an unordered pair 𝑒={𝑢,𝑣}, where 𝑢,𝑣∈𝑉, is called an undirected edge. In case 𝑢=𝑣, 𝑒 is called a self-loop. In our study, we consider simple graphs, that is, graphs without self-loops. The cardinality of 𝑉 is denoted by 𝑛 (i.e., 𝑛=|𝑉|).

A weighted network 𝐺=(𝑉,𝐸,𝜔) consists of a graph with vertex set 𝑉 and edge set 𝐸 augmented with an edge value function 𝜔∶𝐸→ℝ that assigns to each edge 𝑒∈𝐸 a real value 𝜔(𝑒). Every weighted network 𝐺=(𝑉,𝐸,𝜔) corresponds to a real 𝑛×𝑛 matrix 𝑊=(𝑤𝑖𝑗), 𝑖,𝑗∈{1,2,…,𝑛}, where 𝑤𝑖𝑗 is equal to value 𝜔(𝑒) of edge 𝑒=(𝑣𝑖,𝑣𝑗) if 𝑒∈𝐸, or to 0 otherwise. If we reserve value 0 to mean the absence of an edge, then the correspondence between 𝐺 and 𝑊 is one to one. In this work, we consider a subset of weighted networks, which we call synchronization networks, where edge values are restricted to interval (0,1] and interpreted as strength of dependence between vertices.

In synchronization networks, higher edge values indicate stronger dependencies. To define the length of an edge, we should at least reverse the order of edge values by applying, for example, the inverse function 𝑔∶(0,1]→[1,+∞), that is,

𝑔(𝑥) = 1/𝑥. (1)

We also propose another function 𝑔∶(0,1]→[1,+∞), where

𝑔(𝑥) = 1 − log2(𝑥). (2)

These definitions describe how to transform edge values into edge lengths in the case of synchronization networks. Which of the two functions performs better depends on the graph structure and on the metric or the visualization method that uses them. When choosing the appropriate formulation, one should consider that the function 1/𝑥 tends to +∞ faster than the function 1−log2(𝑥) as 𝑥→0+. Therefore, edges with small values are assigned longer lengths by the 1/𝑥 function than by the 1−log2(𝑥) function.
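The growth-rate remark above can be checked numerically; the following is a minimal sketch (the function names are ours, not part of BrainNetVis):

```python
import numpy as np

def length_inverse(x):
    """Edge length g(x) = 1/x, eq. (1): maps (0, 1] onto [1, +inf)."""
    return 1.0 / np.asarray(x, dtype=float)

def length_log(x):
    """Edge length g(x) = 1 - log2(x), eq. (2): also maps (0, 1] onto [1, +inf)."""
    return 1.0 - np.log2(np.asarray(x, dtype=float))

# Weak edges (small synchronization values) are stretched far more by 1/x:
for w in (1.0, 0.5, 0.1, 0.01):
    print(f"w={w:5.2f}  1/x={length_inverse(w):7.2f}  1-log2(x)={length_log(w):5.2f}")
```

For w = 0.01, for instance, 1/x yields a length of 100, while 1 − log2(x) yields only about 7.6.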

The length of a path from vertex 𝑢 to vertex 𝑣 is the sum of the lengths of the edges of the path. The shortest path distance from vertex 𝑢 to vertex 𝑣 is denoted by 𝑑𝐺(𝑢,𝑣). If vertex 𝑣 is unreachable from vertex 𝑢, then 𝑑𝐺(𝑢,𝑣)=+∞.

3. Methods and Results

3.1. Exemplar Case

In what follows, we are using the data of a specific use case, consisting of alcoholic and control subjects, in order to provide concrete examples of use of the application. Briefly, the specific study included 30 control subjects and 30 alcoholic subjects. Each subject was fitted with a 61-lead electrode cap (ECI, Electro-Cap International). All scalp electrodes were referred to 𝐶𝑧. In this experiment, each subject was exposed to pictures of objects chosen from the 1980 Snodgrass and Vanderwart picture set [20]. The stimuli in each trial were randomized (but not repeated) and were presented on a white background for 300 ms at the center of a computer monitor. Their size was approximately 5–10 cm × 5–10 cm, thus subtending a visual angle of 0.05°–0.1°. Ten trials were shown, with the interval between trials fixed at 3.2 s. The participants were instructed to memorize the pictures in order to be able to identify them later. For each subject and for each trial and frequency band (0.5–4 Hz, 4–8 Hz, 8–13 Hz, 13–30 Hz, 30–45 Hz), the interdependence for each channel pair (there are 61(61−1)/2 channel pairs, since the number of active EEG channels is 61) was calculated using the coherence and the RIM methods. The results were stored in 61×61 interdependence matrices 𝑊 with elements ranging from 0 to 1. The main finding of this study, using BrainNetVis, was that the alcoholic subjects have impaired synchronization of brain activity and loss of lateralization during the rehearsal process as compared to control subjects.

3.2. Preprocessing

In order to create a graph, a matrix containing the EEG channel pairwise correlations is required. Thus, one needs to calculate the correlations among all pairs of electrodes and derive the respective adjacency matrix, called the synchronization matrix. A number of measures exist that capture the linear and the nonlinear links between time series in a frequency band in order to calculate the required correlations (in the EEG analysis context, these are called synchronization indices). Three measures were chosen after an extensive study of linear and nonlinear synchronization measures [12]: the typical magnitude squared coherence method (MSC) [21], a nonlinear bivariate measure of generalized synchronization (RIM) [22], and partial directed coherence (PDC) [23]. The advantage of magnitude squared coherence is that it is well known and widely accepted. The advantage of RIM is that it is able to capture nonlinear patterns present in the signals, whereas PDC can measure causality.

(1) Magnitude Squared Coherence (MSC)
MSC (or simply coherence) is a well-established and traditionally used tool for investigating the linear relation between two signals or EEG channels. Suppose that we have two simultaneously measured discrete time series 𝑥𝑖 and 𝑦𝑖, 𝑖=1…𝑁. MSC is based on the cross-spectral density function 𝑆𝑥𝑦(𝑓), which is simply derived via the Fourier transform of the cross-correlation, normalized by the individual autospectral density functions. Hence, MSC is calculated using Welch's method as

𝛾𝑥𝑦(𝑓) = |⟨𝑆𝑥𝑦(𝑓)⟩|² / (|⟨𝑆𝑥𝑥(𝑓)⟩| |⟨𝑆𝑦𝑦(𝑓)⟩|), (3)

where ⟨⋅⟩ indicates window averaging. The estimated MSC for a given frequency 𝑓 ranges between 0 (no coupling) and 1 (maximum linear interdependence).
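A coherence-based synchronization index of this kind can be sketched with SciPy's Welch-based coherence estimate on two synthetic channels that share a common source (all signal parameters here are illustrative, not from the study):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n = 256.0, 4096                      # assumed sampling rate (Hz) and length
common = rng.standard_normal(n)          # shared source coupling the two channels
x = common + 0.5 * rng.standard_normal(n)
y = common + 0.5 * rng.standard_normal(n)

# Welch-averaged magnitude squared coherence, as in eq. (3)
f, gamma = coherence(x, y, fs=fs, nperseg=256)

# One synchronization index per frequency band, e.g. alpha (8-13 Hz)
alpha = (f >= 8) & (f <= 13)
msc_alpha = float(gamma[alpha].mean())
print(round(msc_alpha, 3))               # between 0 (no coupling) and 1
```

Averaging the coherence within each band of interest yields one entry of the 61×61 synchronization matrix per channel pair.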

(2) A Robust Interdependence Measure (RIM)
Given two scalar time series {𝑥(𝑡)}𝑡∈𝕋 and {𝑦(𝑡)}𝑡∈𝕋 with 𝕋={1,…,𝑁}, which have been measured from dynamical systems 𝑋 and 𝑌, the dynamics of the systems are reconstructed using delay coordinates [24]

𝐱(𝑡) = [𝑥(𝑡), 𝑥(𝑡+𝜏), …, 𝑥(𝑡+(𝑚−1)𝜏)]^𝑇 (4)

and similarly we reconstruct 𝐲(𝑡) from {𝑦(𝑡)}𝑡∈𝕋, with an embedding dimension 𝑚 and a delay time 𝜏, for 𝑡∈𝕋′={1,…,𝑁′}, where 𝑁′=𝑁−(𝑚−1)𝜏. The parameters 𝜏 and 𝑚 belong to Arnhold's method [25]. Takens's embedding theorems [24] and their sequels (e.g., [26]) are existence proofs, but they do not directly show how to obtain a suitable time delay 𝜏 or embedding dimension 𝑚 from a finite time series. Empirical and heuristic criteria are employed for selecting 𝜏 and 𝑚. Usually, 𝜏 is chosen as the value at which the autocorrelation function first passes through zero, while 𝑚 is determined using variations of false-nearest-neighbour statistics [27–29]. Parameter 𝜏 can also be calculated using the method of Fraser [30].
Let 𝑟𝑡,𝑗 and 𝑠𝑡,𝑗, 𝑗=1,…,𝑘, denote the time indices of the 𝑘 nearest Euclidean neighbours of 𝐱(𝑡) and 𝐲(𝑡), respectively. Temporally correlated neighbours are excluded by means of a Theiler correction: |𝑟𝑡,𝑗−𝑡|>𝑚⋅𝜏 and |𝑠𝑡,𝑗−𝑡|>𝑚⋅𝜏. For each 𝑡∈𝕋′, the average squared distance of 𝐲(𝑡) to all remaining points in {𝐲(𝑗)}𝑗∈𝕋′ is given by

𝑅𝑡(𝑌) = (1/(𝑁′−1)) Σ_{𝑗=1, 𝑗≠𝑡}^{𝑁′} |𝐲(𝑡)−𝐲(𝑗)|². (5)

For each 𝐲(𝑡), the X-conditioned mean squared Euclidean distance is defined as

𝑅𝑡^(𝑘)(𝑌/𝑋) = (1/𝑘) Σ_{𝑗=1}^{𝑘} |𝐲(𝑡)−𝐲(𝑟𝑡,𝑗)|². (6)

Quiroga et al. [25] defined the dependence measure

𝑁(𝑌/𝑋) = (1/𝑁′) Σ_{𝑡=1}^{𝑁′} (𝑅𝑡(𝑌) − 𝑅𝑡^(𝑘)(𝑌/𝑋)) / 𝑅𝑡(𝑌). (7)

The measure 𝑁(𝑋/𝑌) is defined in complete analogy, and as the interdependence measure between 𝑋 and 𝑌 we use the mean value (𝑁(𝑋/𝑌)+𝑁(𝑌/𝑋))/2.
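The computation in eqs. (4)–(7) can be sketched in plain NumPy as follows (our own illustrative implementation, not the tool's code; the parameter values m, tau, and k are arbitrary defaults):

```python
import numpy as np

def interdependence(x, y, m=3, tau=2, k=5):
    """(N(X/Y) + N(Y/X)) / 2 following eqs. (4)-(7), with a Theiler correction."""
    N = len(x)
    Np = N - (m - 1) * tau
    # Delay embeddings, eq. (4)
    X = np.column_stack([x[i * tau: i * tau + Np] for i in range(m)])
    Y = np.column_stack([y[i * tau: i * tau + Np] for i in range(m)])

    def n_cond(A, B):
        # N(B/A): neighbours are found in A, distances are measured in B
        dA = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
        dB = ((B[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        t = np.arange(Np)
        theiler = np.abs(t[:, None] - t[None, :]) <= m * tau  # exclude close times
        nbrs = np.argsort(np.where(theiler, np.inf, dA), axis=1)[:, :k]
        R_cond = np.take_along_axis(dB, nbrs, axis=1).mean(axis=1)           # eq. (6)
        R_all = np.where(np.eye(Np, dtype=bool), 0.0, dB).sum(1) / (Np - 1)  # eq. (5)
        return float(np.mean((R_all - R_cond) / R_all))                      # eq. (7)

    return 0.5 * (n_cond(X, Y) + n_cond(Y, X))
```

Strongly coupled series yield values close to 1, while independent series yield values near 0.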

(3) Partial Directed Coherence (PDC)
Let {𝐱(𝑡)}𝑡∈ℕ with 𝐱(𝑡)=[𝑥1(𝑡),…,𝑥𝑛(𝑡)]^𝑇 be a stationary 𝑛-dimensional time series with zero mean. Then a vector autoregressive model of order 𝑝 for 𝐱 is given by

𝐱(𝑡) = Σ_{𝑟=1}^{𝑝} 𝐀(𝑟)𝐱(𝑡−𝑟) + 𝜀(𝑡), (8)

where the 𝐀(𝑟) are the 𝑛×𝑛 coefficient matrices of the model and 𝜀(𝑡) is a multivariate Gaussian white noise process with covariance matrix 𝚺. In this model, the coefficients 𝐴𝑖𝑗(𝑟) describe how the present values of 𝑥𝑖 depend linearly on the past values of the components 𝑥𝑗. In order to provide a frequency-domain measure of Granger causality, Baccala and Sameshima [23] introduced the concept of PDC. This measure is based on the Fourier transform of the coefficient series,

𝐀(𝜔) = 𝐼 − Σ_{𝑟=1}^{𝑝} 𝐀(𝑟)𝑒^(−𝑖𝜔𝑟). (9)

More precisely, the PDC from 𝑥𝑗 to 𝑥𝑖 is defined as

𝜋𝑖←𝑗(𝜔) = |𝐴𝑖𝑗(𝜔)| / sqrt(Σ_{𝑙=1}^{𝑛} |𝐴𝑙𝑗(𝜔)|²). (10)

The PDC 𝜋𝑖←𝑗(𝜔) takes values between 0 and 1 and vanishes for all frequencies 𝜔 if and only if the coefficients 𝐴𝑖𝑗(𝑟) are zero for all 𝑟=1,…,𝑝.
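A compact least-squares sketch of eqs. (8)–(10) is given below (our illustrative code; a real analysis would use a dedicated VAR estimator with model-order selection):

```python
import numpy as np

def pdc(data, p=2, n_freq=64):
    """Partial directed coherence pi_{i<-j}(w) from a fitted VAR(p) model.
    data: array of shape (n_channels, n_samples)."""
    n, T = data.shape
    # Least-squares fit of x(t) = sum_r A(r) x(t-r) + e(t), eq. (8)
    Yt = data[:, p:]
    Z = np.vstack([data[:, p - r: T - r] for r in range(1, p + 1)])
    A_hat = Yt @ Z.T @ np.linalg.inv(Z @ Z.T)   # (n, n*p): lag blocks side by side

    freqs = np.linspace(0.0, 0.5, n_freq)       # normalized frequency
    P = np.empty((n_freq, n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)           # A(w) = I - sum_r A(r) e^{-iwr}, eq. (9)
        for r in range(1, p + 1):
            Af -= A_hat[:, (r - 1) * n: r * n] * np.exp(-2j * np.pi * f * r)
        # Column-normalized magnitudes, eq. (10): P[fi, i, j] ~ pi_{i<-j}(w)
        P[fi] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0))
    return freqs, P
```

Because each column is normalized by its own norm, every entry lies in [0, 1], matching the range stated above.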

The synchronization matrix created using one of the above methods serves as input to the BrainNetVis tool; thus, it must be calculated separately and a priori. Please note that the presented tool currently implements only graph characterization measures and visualization schemes. It can be used with a variety of inputs in the form of an adjacency matrix. We provide the preprocessing section mostly for the interested but nonexpert user who wishes to investigate how graph analysis may be applied to the neuroscience field. In this sense, even if signal processing techniques are outside the scope of the tool, we do describe the most widely used methods that provide the input for the subsequent graph analysis. Nevertheless, most of the methods presented, linear (i.e., PDC) but especially nonlinear ones (i.e., RIM), assume some kind of stationarity. The EEG signal is generally modelled as a multivariate Gaussian process, even though its mean and covariance properties change from segment to segment. Strictly speaking, therefore, the EEG is only quasistationary: it can be considered stationary only within short intervals. Hence, the user should test the stationarity assumptions prior to using these methods. Fortunately, a promising technique capable of decomposing a multivariate time series into its stationary and nonstationary parts, known as stationary subspace analysis (SSA) [31], can be utilized to overcome the implicit stationarity constraints.

3.2.1. Binary and Greyscale Networks on BrainNetVis

BrainNetVis provides the option of using either a binary or a greyscale network by adjusting, respectively, the Network Metrics Options under the View drop-down menu. In our use case, we provided as input to the tool a synchronization matrix describing the brain network of a virtual alcoholic patient. This virtual patient was created by taking the means of the node and edge values over all 30 alcoholic subjects; we underline that this subject does not actually exist. We applied a binary network, using threshold = 0.4, and a greyscale network, which we visualized using a colormap scale. The edge length transformation function can also be selected under the same menu. We used

ℓ(𝑒) = 1/𝜔(𝑒). (11)

The results are depicted in Figure 1.
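The binary/greyscale distinction amounts to a simple transformation of the synchronization matrix. A sketch with a random stand-in matrix (the real input would come from the preprocessing in Section 3.2; applying the threshold to the greyscale variant is our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 61                                       # electrodes in the exemplar study
W = np.triu(rng.uniform(0, 1, (n, n)), 1)
W = W + W.T                                  # stand-in symmetric synchronization matrix

threshold = 0.4
binary = (W > threshold).astype(int)         # binary network: edge iff w_ij > threshold
greyscale = np.where(W > threshold, W, 0.0)  # surviving weights, e.g. for a colormap

print(binary.sum() // 2, "edges survive the threshold")
```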

3.2.2. Data Structure

Two types of files are required for the algorithms that BrainNetVis encapsulates to run properly:
(1) A square synchronization matrix with the data from the EEG study (required for the algorithms to function).
(2) A file containing a matrix of the labels and the coordinates of each electrode (required for the visualization options). The rows of the table correspond to the electrodes. The first column contains the electrodes' labels, and the remaining columns contain the coordinates of the electrodes: either two columns (for 2D data, the 𝑥 and 𝑦 coordinates) or three columns (for 3D data, the 𝑥, 𝑦, and 𝑧 coordinates).
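Assuming plain whitespace-separated text files (the exact on-disk format is not specified here, so this layout is our assumption), the two inputs could be read as follows:

```python
import numpy as np

def load_sync_matrix(path):
    """Input (1): the square synchronization matrix from the EEG study."""
    W = np.loadtxt(path)
    if W.ndim != 2 or W.shape[0] != W.shape[1]:
        raise ValueError("synchronization matrix must be square")
    return W

def load_electrodes(path):
    """Input (2): one row per electrode -- a label, then 2 or 3 coordinates."""
    labels, coords = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if parts:
                labels.append(parts[0])
                coords.append([float(v) for v in parts[1:]])
    coords = np.array(coords)
    if coords.shape[1] not in (2, 3):
        raise ValueError("expected 2D or 3D electrode coordinates")
    return labels, coords
```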

3.3. Menu Calls (GUI)

The network metrics available in BrainNetVis will be presented here, in a way that follows the tool's structure.

3.3.1. Global Network Metrics

Networks are often classified into unifying categories in order to obtain a better understanding of their structure and function. Network measures are numbers that summarize a graph and describe its essential properties. To be useful in algorithms and applications, network measures should capture the relevant information, differentiate between certain classes of networks, and be easy to compute.

A very important global network metric is the clustering coefficient, introduced by Watts and Strogatz [32] in 1998. For a vertex 𝑣, the clustering coefficient 𝑐(𝑣) measures the connectivity of its direct neighborhood. The clustering coefficient 𝐶(𝐺) of a graph is the average of 𝑐(𝑣) taken over all vertices.

In the BrainNetVis application, we implement two different kinds of clustering coefficients, one proposed by Zhang and Horvath and one by Onnela. Zhang and Horvath proposed a definition which uses only the network values, in the context of gene coexpression networks. Onnela, on the other hand, proposed a version of the local clustering coefficient based on the concept of subgraph intensity, defined as the geometric average of subgraph edge values. Both metrics are defined in Table 1. Note that the Onnela clustering coefficient definition suffers from the drawback that it requires an underlying binary network; if this is not available as a separate set of data, then presumably it must be obtained by discretizing the weighted edges.
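Both definitions translate directly into NumPy (our sketch, not the tool's code). For the Onnela version, the underlying binary network is obtained here by treating every nonzero weight as an edge:

```python
import numpy as np

def clustering_zhang(W):
    """Zhang-Horvath weighted clustering coefficient per vertex."""
    Wh = np.asarray(W, float) / np.max(W)        # normalize by the max weight
    np.fill_diagonal(Wh, 0.0)
    num = np.diag(Wh @ Wh @ Wh)                  # sum over triangles w_vi w_ij w_jv
    den = Wh.sum(1) ** 2 - (Wh ** 2).sum(1)      # sum over i != j of w_vi w_jv
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def clustering_onnela(W):
    """Onnela clustering coefficient, via cube roots of normalized weights."""
    Wc = (np.asarray(W, float) / np.max(W)) ** (1.0 / 3.0)
    np.fill_diagonal(Wc, 0.0)
    num = np.diag(Wc @ Wc @ Wc)                  # geometric-mean triangle intensities
    B = (np.asarray(W) > 0).astype(int)          # implied binary network
    np.fill_diagonal(B, 0)
    deg = B.sum(1)
    den = deg * (deg - 1)                        # ordered pairs of neighbours
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

On a fully connected triangle with unit weights, both coefficients equal 1 for every vertex.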

Table 1

Clustering coefficient

Zhang and Horvath:
c_Z(v) = Σ_{i≠j∈V∖{v}} ŵ_vi ŵ_ij ŵ_jv / Σ_{i≠j∈V∖{v}} ŵ_vi ŵ_jv
       = (1/max_{i,j}(w_ij)) · (Σ_{i≠j∈V∖{v}} w_vi w_ij w_jv / Σ_{i≠j∈V∖{v}} w_vi w_jv)
The weights have been normalized by max_{i,j}(w_ij), that is, ŵ_ij = w_ij / max_{l,k}(w_lk). This definition uses only the network values and was introduced in the context of gene coexpression networks.

Onnela:
c_O(v) = (1/(deg(v)(deg(v)−1))) Σ_{i≠j∈V∖{v}} (ŵ_vi ŵ_ij ŵ_jv)^{1/3}
       = (1/(max_{i,j}(w_ij) deg(v)(deg(v)−1))) Σ_{i≠j∈V∖{v}} (w_vi w_ij w_jv)^{1/3}
Here, too, the edge values are normalized by the maximum value in the network, ŵ_ij = w_ij / max_{l,k}(w_lk).

Assortative mixing

Symmetric weighted networks:
r = (4m Σ_{{u,v}∈E} ρ(u)ρ(v) − [Σ_{{u,v}∈E} (ρ(u)+ρ(v))]²) / (2m Σ_{{u,v}∈E} (ρ(u)²+ρ(v)²) − [Σ_{{u,v}∈E} (ρ(u)+ρ(v))]²)

Directed weighted networks:
r = (H Σ_{(u,v)∈E} ω(u,v)ρ(u)ρ(v) − AB) / (sqrt(H Σ_{(u,v)∈E} ω(u,v)ρ(u)² − A²) · sqrt(H Σ_{(u,v)∈E} ω(u,v)ρ(v)² − B²)),
where A = Σ_{(u,v)∈E} ω(u,v)ρ(u), B = Σ_{(u,v)∈E} ω(u,v)ρ(v), and H = Σ_{e∈E} ω(e) is the sum of all values of edges in E.

Degree centrality c_D(v) of vertex v

Undirected binary network: the degree deg(v) of vertex v.
Directed binary network: in-degree c_D^i(v) = deg⁻(v) and out-degree c_D^o(v) = deg⁺(v).

Strength centrality c_S(v)

Greyscale symmetric network: the strength s(v) of vertex v.
Greyscale asymmetric network: in-strength c_S^i(v) = s⁻(v) and out-strength c_S^o(v) = s⁺(v).

Shortest-path efficiency

c_Ef(v) = (1/n_Ef) Σ_{u≠v} 1/d_G(v,u), where n_Ef = n − 1.

Shortest-path betweenness centrality c_B(v) of a vertex v∈V

c_B(v) = (1/n_B) Σ_{s∈V∖{v}} Σ_{t∈V∖{v,s}} σ_st(v)/σ_st,
where σ_st is the number of shortest (s,t)-paths, σ_st(v) is the number of shortest (s,t)-paths passing through some vertex v other than s and t, and n_B = (n−1)(n−2) is a normalizing constant.

Bonacich's eigenvector centrality

λ c(v_i) = Σ_{j=1}^{n} w_ji c(v_j)
In matrix notation with c = [c(v_1), c(v_2), …, c(v_n)]^T, this yields λc = W^T c.
This type of equation is well known and is solved by the eigenvalues and eigenvectors of W^T. We call the eigenvector s = [s_1, …, s_n]^T corresponding to the maximal eigenvalue of λc = W^T c the principal eigenvector. The eigenvector centrality of node v_i is then defined as c_EV(v_i) = |s_i| / ‖s‖_p, where the centrality vector s is normalized by its p-norm,
‖s‖_p = (Σ_{i=1}^{n} |s_i|^p)^{1/p} for 1 ≤ p < ∞, and ‖s‖_∞ = max_{i=1,…,n} |s_i|,
to produce centrality scores c(v_i) ≤ 1.

Hubbell's centrality

c = α W^T c + e, where c = [c(v_1), c(v_2), …, c(v_n)]^T and e = [e_1, e_2, …, e_n]^T.
In order to obtain meaningful results, α should be chosen subject to the restriction |α| < 1/λ_1, where λ_1 is the maximum eigenvalue of W. This restriction is not mentioned in the literature.

Subgraph centrality of vertex v_i

c_SG(v_i) = Σ_{k=0}^{∞} μ_k(i)/k!, where the number of closed walks of length k is μ_k(i) = (A^k)_ii, the i-th diagonal entry of the k-th power of the adjacency matrix A. The measure generalizes to greyscale networks by substituting matrix W for A.

Network entropy

H(P) = −Σ_{i,j} π_i p̂_ij log p̂_ij = Σ_i π_i H_i,
where the Markov matrix P = [p̂_ij] is the stochastic process which defines the information source and π is its stationary distribution, πP = π.

The other important global network metric included in the tool is assortative mixing. This feature captures the similarity between properties of adjacent network vertices. Intuitively, it captures the tendency of network vertices to connect either to vertices with similar degrees (high degrees connected with high degrees and low degrees connected with low degrees) or to vertices with dissimilar degrees (high degrees connected with low degrees). Newman [33] proposed an interesting measure to quantify the degree of similarity (dissimilarity) between adjacent vertices in a network using assortative mixing, which is given as the correlation between the properties of every pair of adjacent vertices. Each vertex may be assigned a single scalar, such as a centrality measure of the vertex position in a network, or a set of scalar properties. The assortativity coefficient for an undirected graph is then defined as the (sample) Pearson product-moment correlation coefficient. The formula for this computation is given in Table 1, written in a symmetrical form. This equation can also be used for directed graphs by simply ignoring the direction of edges.

The value of the assortativity coefficient, 𝑟, lies in the range −1≤𝑟≤1, with 𝑟=1 indicating perfect assortativity and 𝑟=−1 indicating perfect disassortativity (perfect negative correlation between the properties of the vertices of the edges under consideration). Brain functional networks tend to be assortative [34, 35]. From computational studies, it has been observed that information gets easily transferred through assortative networks as compared to that in disassortative networks [36].
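The symmetric-network formula from Table 1, taking vertex degree as the property ρ, can be sketched as follows (our code; ρ could equally be any scalar vertex property):

```python
import numpy as np

def assortativity(A, rho=None):
    """Assortativity coefficient r of a symmetric binary network.
    rho: scalar property per vertex; defaults to the degree."""
    A = np.asarray(A)
    if rho is None:
        rho = A.sum(1)                        # vertex degrees
    iu, ju = np.where(np.triu(A, 1) > 0)      # each undirected edge counted once
    m = len(iu)
    pu, pv = rho[iu].astype(float), rho[ju].astype(float)
    s = (pu + pv).sum()
    num = 4 * m * (pu * pv).sum() - s ** 2
    den = 2 * m * (pu ** 2 + pv ** 2).sum() - s ** 2
    return num / den
```

A star network illustrates perfect disassortativity: its hub (high degree) connects only to leaves (degree 1), giving r = −1.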

Global network metrics on BrainNetVis
BrainNetVis allows the calculation of the mentioned global network metrics by following the Tools menu (see Figure 2). Continuing the previous example on an alcoholic patient, we applied the simple Clustering Coefficient and the Assortative Mixing.

3.3.2. Vertex Metrics-Centrality Measures

The above concerned global network metrics. There is significant interest in local network properties as well, which focus on a single node of interest. These properties are very important since, at the local scale, we can detect which vertices are the most relevant for the organization and functioning of a network. Such local measures are commonly named centrality measures (or centrality indices) and have proved of great value in analysing the role played by individuals in social networks and in identifying essential proteins, keystone species, and functionally important brain regions.

Centrality Measures Based on Neighbourhoods
The simplest and most basic centrality measure is the degree centrality 𝑐𝐷(𝑣) of a vertex 𝑣. In practice, this is the number of neighbours of the node of interest. In spite of the simplicity of this concept, degree is the most fundamental network measure, and most other centrality measures are linked to it. The definitions of degree centrality, both for directed and for undirected networks, are provided in Table 1.
In the case of greyscale networks, instead of the term degree centrality, we use the term strength centrality. The formulas for strength centrality are defined correspondingly (Table 1). In BrainNetVis, strength centrality is presented as normalized degree centrality. This is accessed when the user chooses the Normalized Metrics on the Tools ⇒ Network Metrics Options ⇒ General tab and normalizes the edge values to range from 0 to 1 accordingly.
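Degree and strength centrality reduce to row and column sums of the adjacency matrix; a sketch, using the convention that w_ij is the value of the edge from v_i to v_j:

```python
import numpy as np

def degree_centrality(A):
    """In- and out-degree of every vertex of a binary network."""
    A = np.asarray(A)
    return A.sum(axis=0), A.sum(axis=1)      # (in-degree, out-degree)

def strength_centrality(W):
    """In- and out-strength of every vertex of a greyscale network."""
    W = np.asarray(W, float)
    return W.sum(axis=0), W.sum(axis=1)      # (in-strength, out-strength)

# For symmetric (undirected) networks the two components coincide.
```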

Centrality Measures Based on Distances
Another set of informative measures are the centrality measures based on distances, meaning the distances that information has to cover in order to be transferred through the network. The first metric in this category is closeness centrality. Closeness can be regarded as a measure of how long it will take information to spread from a given vertex to the others in the network. Let 𝐺=(𝑉,𝐸) be an undirected graph; the shortest-path closeness centrality of vertex 𝑣∈𝑉 is defined as the inverse of the mean geodesic distance from vertex 𝑣 to every other vertex. A serious drawback of this metric is that it can only be used for connected graphs. A newer measure, called shortest-path efficiency, was proposed by Latora and Marchiori [37] and is implemented in the BrainNetVis application.

For a vertex 𝑣, Latora and Marchiori defined efficiency as

𝑒𝑓(𝑣) = (1/(𝑛−1)) Σ_{𝑢≠𝑣} 1/𝑑𝐺(𝑣,𝑢). (12)

The formula for that is provided in Table 1.

Note that (12) can also be used for disconnected graphs. If vertices 𝑣 and 𝑢 are not connected, then they do not contribute to 𝑒𝑓(𝑣): in this case, 𝑑𝐺(𝑣,𝑢)=+∞, so 1/𝑑𝐺(𝑣,𝑢)=0. The global efficiency 𝑒𝑓(𝐺) of a graph is the average of 𝑒𝑓(𝑣) taken over all vertices [37]:

𝑒𝑓(𝐺) = (1/𝑛) Σ_{𝑣∈𝑉} 𝑒𝑓(𝑣) = (1/(𝑛(𝑛−1))) Σ_{𝑣∈𝑉} Σ_{𝑢≠𝑣} 1/𝑑𝐺(𝑣,𝑢). (13)
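Equations (12) and (13) require all-pairs shortest-path distances; a Floyd-Warshall sketch (our code) using the 1/x edge-length transform of eq. (1):

```python
import numpy as np

def efficiency(W):
    """Per-vertex efficiency, eq. (12), and global efficiency, eq. (13).
    Edge lengths are 1/w; disconnected pairs contribute 0."""
    W = np.asarray(W, float)
    n = W.shape[0]
    with np.errstate(divide="ignore"):
        L = np.where(W > 0, 1.0 / np.where(W > 0, W, 1.0), np.inf)  # edge lengths
    np.fill_diagonal(L, 0.0)
    D = L.copy()
    for k in range(n):                       # Floyd-Warshall all-pairs distances d_G
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    with np.errstate(divide="ignore"):
        inv = 1.0 / D                        # 1/inf = 0 for unreachable pairs
    np.fill_diagonal(inv, 0.0)
    ef_v = inv.sum(axis=1) / (n - 1)         # eq. (12)
    return ef_v, float(ef_v.mean())          # eq. (13)
```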

In addition to shortest-path efficiency, we are interested in shortest-path betweenness centrality. This metric involves two other nodes apart from the central vertex 𝑣, which we call 𝑠 and 𝑡. Intuitively, it refers to the number of shortest paths connecting vertices 𝑠 and 𝑡 that pass through vertex 𝑣. In the formula provided in Table 1, the ratios 𝜎𝑠𝑡(𝑣)/𝜎𝑠𝑡 are interpreted as the extent to which vertex 𝑣 controls the communication between vertices 𝑠 and 𝑡. A vertex is considered central if it lies between many pairs of other vertices. Shortest-path betweenness centrality generalizes to greyscale networks, where the length of a path is equal to the sum of the lengths of its edges.
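For the binary case, betweenness centrality is classically computed with Brandes' algorithm; a compact sketch with the Table 1 normalization (our code; the greyscale generalization would substitute Dijkstra for the BFS):

```python
import numpy as np
from collections import deque

def betweenness(A):
    """Normalized shortest-path betweenness on a binary undirected network."""
    A = np.asarray(A)
    n = len(A)
    nbrs = [np.flatnonzero(A[v]) for v in range(n)]
    c = np.zeros(n)
    for s in range(n):                   # one BFS per source vertex (Brandes)
        sigma = np.zeros(n); sigma[s] = 1.0
        dist = np.full(n, -1); dist[s] = 0
        preds = [[] for _ in range(n)]
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in nbrs[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = np.zeros(n)              # back-propagation of pair dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                c[w] += delta[w]
    return c / ((n - 1) * (n - 2))       # n_B from Table 1 (ordered pairs)
```

On a three-vertex path, all communication between the endpoints passes through the middle vertex, whose normalized betweenness is therefore 1.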

Centrality measures based on Neighborhoods and on Distances in BrainNetVis
We applied the above types of centrality measures on our synchronization matrix of the alcoholic patient's EEG. Figure 3 depicts the visualization of the individual's brain network using the Static Visualization Method. The binary network with threshold = 0.4 has been selected. The centrality measures calculated are the Degree Centrality, Shortest Path Efficiency, and Shortest Path Betweenness Centrality. They are depicted in the respective table, shown in the same figure. Both the figure and the table with the metrics can be created by following the View menu.

Spectral Centrality Measures
Another set of network metrics is based on the calculation of the eigenvectors of the adjacency matrix of the network, produced at the preprocessing step. Most of them are calculated by solving a system of linear equations. These measures are called spectral centrality measures. One of them is Bonacich's eigenvector centrality, according to which the centrality of each vertex is proportional to the sum of the centralities of the vertices to which it is directly connected. The respective formula is presented in Table 1.
Expanding on Bonacich's eigenvector centrality, Hubbell [38] suggested yet another centrality measure based on the solution of a system of linear equations. Hubbell's centrality uses an approach based on directed weighted graphs where the weights of the edges may be real numbers. The general assumption of Hubbell's centrality is similar to the idea of Bonacich, but the centrality of a vertex depends both on its connections to other vertices and on an exogenous input, sometimes called the boundary conditions. In this case, we include one more input to the equation 𝜆𝐜=𝑊^𝑇𝐜 which describes Bonacich's eigenvector centrality. The result is shown in Table 1. This formula encapsulates the relative importance of endogenous versus exogenous factors in the determination of centrality.
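Since the defining equation is linear, Hubbell's centrality amounts to one linear solve; a sketch (the uniform exogenous input e and the choice of α inside the admissible range are our illustrative defaults):

```python
import numpy as np

def hubbell_centrality(W, alpha=None, e=None):
    """Solve (I - alpha W^T) c = e, requiring |alpha| < 1/lambda_1."""
    W = np.asarray(W, float)
    n = W.shape[0]
    lam1 = np.max(np.abs(np.linalg.eigvals(W)))
    if alpha is None:
        alpha = 0.5 / lam1               # safely inside |alpha| < 1/lambda_1
    if e is None:
        e = np.ones(n)                   # uniform exogenous input
    return np.linalg.solve(np.eye(n) - alpha * W.T, e)
```

Equivalently, the solution is the Neumann series Σ_k (αW^T)^k e, which converges precisely under the stated restriction on α.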
The next spectral centrality measure, subgraph centrality, was introduced by Estrada et al. [39]. It is calculated as the weighted sum of the numbers of closed walks in a graph, where longer walks receive lower weight than shorter ones. Closely related to the subgraphs of the network is the number of closed walks of length 𝑘 starting and ending on vertex 𝑣𝑖. This number is denoted by 𝜇𝑘(𝑖) in Table 1.
Last but not least, a very interesting idea was suggested by Demetrius et al. [40], describing network entropy. Evidence has been presented that this quantity is related to the capacity of the network to withstand random changes in the network structure. Network entropy is based on the Kolmogorov-Sinai (KS) entropy, which is a generalization of the Shannon entropy in that it describes the rate at which a stochastic process generates information. In our context, information corresponds to a sequence of vertices visited by an assumed Markov process on the network. Network entropy takes into account the impact of a vertex's removal on the network. This is captured by the product 𝜋𝑖𝐻𝑖 of the respective definition on Table 1. The interested reader could find more detailed information in [41].

Spectral Centrality Measures in BrainNetVis
We applied the above types of centrality measures to the synchronization matrix of the alcoholic patient's EEG. Using links from the Tools menu, we calculated Bonacich's Eigenvector Centrality, Hubbell's Centrality, Subgraph Centrality, and Network Entropy. One can define the type of network to work with (binary or greyscale) and also select the threshold value.

3.4. Graph Drawing Techniques

Regarding the way in which the brain is depicted, the BrainNetVis tool incorporates three different kinds of visualization, as follows.

3.4.1. Static Visualization Method

In this method, in order to visualize the topology of the emerging network, we create a static framework where each electrode is depicted by a node placed at a position corresponding to the actual electrode position on the human scalp. Depending on the number of electrodes in each experiment, an oval shape is outlined (which corresponds to the scalp) and, inside this oval shape, 𝑉 circles correspond to the electrodes placed on the subjects' heads during the experiments.
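As an illustrative stand-in for such a fixed layout (a real montage would use the actual 10-20 electrode coordinates; the radii and even spacing here are assumptions), one can place 𝑉 node positions evenly on an ellipse:

```python
import math

def oval_layout(n, rx=1.0, ry=1.3):
    """Place n electrode nodes evenly on an ellipse with horizontal
    radius rx and vertical radius ry, mimicking a scalp outline.
    Returns a list of (x, y) coordinates.
    """
    return [(rx * math.cos(2 * math.pi * k / n),
             ry * math.sin(2 * math.pi * k / n))
            for k in range(n)]

positions = oval_layout(8)
```

Because the positions are fixed up front, this layout never changes when the network metrics do, matching the static behavior described below.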

3.4.2. Multidimensional Scaling

Multidimensional Scaling (MDS) is a family of techniques for the analysis and visualization of complex data. The "beauty" of MDS is that we can analyze any kind of distance or similarity matrix, in addition to correlation matrices. Objects in a data set are represented as points in a geometric space; distance in this space represents proximity or similarity among objects. In our case, the objects are the electrodes, and the distances among them correspond to their correlations in the synchronization matrix. In general, the goal of the analysis is to detect meaningful underlying connections among the electrodes which reflect the connections among different brain functional regions. In BrainNetVis, we incorporated a 2D visualization of the connections among electrodes. At this point, it has to be noted that the more dimensions we use to reproduce the distance matrix, the better the reproduced matrix fits the observed matrix (i.e., the smaller the stress is). In fact, if we use as many dimensions as there are variables, then we can perfectly reproduce the observed distance matrix. Of course, our goal is to reduce the observed complexity of nature, that is, to explain the distance matrix in terms of fewer underlying dimensions. Some exemplar views of multidimensional scaling are shown in Figure 4.
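A classical (Torgerson) MDS embedding can be sketched in a few lines of NumPy; this is a generic sketch, not the tool's implementation, and the conversion of a synchronization matrix 𝑆 to distances (e.g., 𝐷 = 1 − 𝑆) as well as the unit-square test data are assumptions:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points in `dim` dimensions so
    that pairwise Euclidean distances approximate the entries of the
    symmetric distance matrix D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D**2) @ J                # double-centered Gram matrix
    lam, V = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:dim]        # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

# Four points at the corners of a unit square are recovered exactly in 2D
s2 = np.sqrt(2)
D = np.array([[0, 1, 1, s2],
              [1, 0, s2, 1],
              [1, s2, 0, 1],
              [s2, 1, 1, 0]])
X = classical_mds(D)
```

Keeping only the top two eigen-directions is exactly the "fewer underlying dimensions" trade-off described above: with all four dimensions the distance matrix would be reproduced perfectly, while the 2D embedding minimizes the residual stress.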

3.4.3. Force-Based or Force-Directed Algorithms

These are a class of algorithms for drawing graphs in an aesthetically pleasing way. Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible. Force-directed algorithms achieve this by assigning forces among the set of edges and the set of nodes; the most straightforward method is to assign forces as if the edges were springs (see Hooke's law) and the nodes were electrically charged particles (see Coulomb's law). The entire graph is then simulated as if it were a physical system. The forces between its nodes change the dynamics and the layout of the system, which at some point reaches its equilibrium state: at that moment, the graph is drawn. For force-directed graphs, it is also possible to employ mechanisms that search more directly for energy minima, either instead of or in conjunction with physical simulation. One of these mechanisms is binary stress (bStress), and it is the one we have incorporated in our tool. This model bridges the two most popular force-directed approaches—the stress and the electric-spring models—through the binary stress cost function, which is a carefully defined energy function with low descriptive complexity allowing fast computation via a Barnes-Hut scheme. Both electric-spring and stress approaches enjoy successful implementations and offer pleasing layouts for many graphs. Electric-spring models have the advantage of a lower descriptive complexity compared to the stress model. On the other hand, the stress function has a mild landscape, which allows utilizing powerful optimization techniques such as majorization. This way, good minima are usually achieved regardless of the initial positions. Computationally, the binary stress model is able to merge the advantages of both the electric-spring model and the stress model.
Namely, it offers a low descriptive complexity while at the same time being similar in form to the known stress function, thus enabling the use of the majorization optimization scheme. More than other models, bStress emphasizes a uniform spread of the nodes within a circular drawing area. In addition, bStress is suitable for drawing large graphs, not only because of its improved scalability, but also because it achieves good area utilization. Some exemplar views of the binary stress visualization are shown in Figure 5.
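The physical-simulation idea above can be made concrete with a minimal spring-electrical sketch (this is the basic Hooke/Coulomb scheme, not the bStress optimization itself; the constants, iteration count, and triangle test graph are assumptions):

```python
import numpy as np

def spring_layout(A, iters=200, k=0.3, step=0.02, seed=0):
    """Minimal spring-electrical force-directed layout: edges pull like
    springs (Hooke-like attraction, rest scale k) while every node pair
    repels like charged particles (Coulomb-like repulsion); positions
    are nudged along the net force until the system settles.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    pos = rng.standard_normal((n, 2))
    for _ in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]       # pairwise offsets
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        unit = delta / dist[..., None]
        rep = (k**2 / dist**2)[..., None] * unit        # Coulomb-like repulsion
        att = -((A * dist) / k)[..., None] * unit       # Hooke-like attraction on edges
        pos += step * (rep + att).sum(axis=1)
    return pos

# Triangle graph: equilibrium is an equilateral layout with side length k
A = np.ones((3, 3)) - np.eye(3)
pos = spring_layout(A)
```

At equilibrium the repulsion k²/d² balances the attraction d/k, giving equal edge lengths d = k, which is precisely the "more or less equal length" goal stated above.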

More information on graph drawing techniques can be found in [13].

When we choose to visualize our graphs using the static visualization method, a change in the network metrics is not depicted on the output panel; this is because the electrode positions are fixed and set from the beginning. Nevertheless, the changes in the calculations are saved in a matrix which is accessible to the end user. On the other hand, in multidimensional scaling and binary stress modeling, the effects of a change in a network metric's value are depicted immediately after the change.

One can then set up the display options of his/her preference, for example, the way the graph vertices and edges are displayed. As far as the nodes of the network are concerned, one can adjust their size, their color (uniform or colormap), and the depiction of the node labels. Regarding the edges, there exist three options for the color: uniform for binary networks, greyscale for greyscale networks (the intensity of the shade of grey corresponds to the strength of the respective edge), and colormap. Colormap is also used in the case of greyscale networks, but here colors are used: the closer the tint is to red, the larger the strength of the respective edge; the closer the tint is to blue, the smaller the strength of the edge. Moreover, one can adjust the size of an edge and whether it is directed or not. Figure 6 depicts the brain of the virtual control subject using both binary and colormap networks. In both cases, the threshold was set to 0.5.
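The red-to-blue colormap behavior described above can be sketched as a simple linear interpolation over a normalized edge weight (an illustrative mapping; the exact colormap used by the tool may differ):

```python
def edge_colormap(weight):
    """Map a normalized edge weight in [0, 1] to an (R, G, B) tint:
    strong edges tend toward red, weak edges toward blue.
    """
    w = min(max(weight, 0.0), 1.0)  # clamp out-of-range weights
    return (int(255 * w), 0, int(255 * (1 - w)))

strong = edge_colormap(0.9)  # reddish
weak = edge_colormap(0.1)    # bluish
```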

4. Conclusion

Using BrainNetVis, one can visualize and quantify the connections of the brain, based on EEG- or MEG-acquired signals. The inner brain connectivity is depicted as a graph; different sensor locations (electrodes) are visualized as nodes and their interconnections as edges. Therefore, scientists and clinicians will be able to get better insight into brain connectivity and functionality and deduce more accurate results. We tested the tool using EEG data from alcoholic patients [7]. We were thus able to investigate some structural brain features that EEG and clinical data alone would not reveal. The tool can be easily used by the interested researcher; it is freely accessible online and runs on every operating system that has a JRE installed. Future work includes support for the aforementioned preprocessing methods within the same intuitive environment and support for the binary European Data Format (EDF). Currently, simple ASCII text format is supported for simplicity and flexibility reasons.


We present here a summary of the metrics used in BrainNetVis and their placement under the Tools menu. The main menu when the GUI opens contains the options: File, View, Tools, Window, and Help.

The File drop-down menu includes the following tabs. (i) Import. Following this tab, the user can give as input the greyscale matrix that corresponds to the network of interest, along with the vertex coordinates. He can browse his computer for these required files. (ii) Export. It is used to export the produced visualizations to a file in various formats (.eps, .pdf, .jpg, etc.). (iii) Exit. It is used to quit the GUI. (iv) Output. One can export all the metrics of the examined network to a .txt file, which is saved in the same directory as the tool executable.

Under the View drop-down menu, one can find the following. (i) Network Visualization. One can choose among the three supported visualization techniques: Channel/Source coordinates, Multidimensional Scaling, and Binary Stress, described in detail in Section 3.4. (ii) Network Metrics. Following this tab, the user can ask either for the Vertex level metrics table, which contains the values of the vertex metrics that interest the user (and which he chooses under the Tools drop-down menu), or for the Network level metrics table, which contains the values of the global network metrics.

The Tools menu contains the following. (i) Display Options. Following this tab, the user can set up the display of the graphs. He can set his preferences concerning the nodes (size, color, label, font) and/or the edges (size, color, direction, arrow size). (ii) Network Metrics Options. Three tabs appear in this submenu. The first, named General, contains options such as whether the network is directed or not, binary or not, and a synchronization network or not. In the latter case, the tool provides an option for normalizing the edge lengths. The second tab, named Vertex Metrics, contains options for all the vertex metrics described in Section 3.3.2. Finally, the last tab, named Network Metrics, contains options for the network metrics described in Section 3.3.1.

Under the Window menu, the user can change the size of the GUI window.


The authors wish to thank Dimitris Andreou for the development of the supportive software of the tool's different versions.


  1. V. Sakkalis, “Applied strategies towards EEG/MEG biomarker identification in clinical and cognitive research,” Biomarkers in Medicine, vol. 5, no. 1, pp. 93–105, 2011. View at: Google Scholar
  2. K. J. Friston, “Functional and effective connectivity in neuroimaging: a synthesis,” Human Brain Mapping, vol. 2, no. 1-2, pp. 56–78, 1994. View at: Google Scholar
  3. E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” Nature Reviews Neuroscience, vol. 10, no. 3, pp. 186–198, 2009. View at: Publisher Site | Google Scholar
  4. C. J. Stam and J. C. Reijneveld, “Graph theoretical analysis of complex networks in the brain,” Nonlinear Biomedical Physics, vol. 1, article 3, 2007. View at: Publisher Site | Google Scholar
  5. F. De Vico Fallani, L. Astolfi, F. Cincotti et al., “Brain network analysis from high-resolution EEG recordings by the application of theoretical graph indexes,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 5, pp. 442–452, 2008. View at: Publisher Site | Google Scholar
  6. V. Sakkalis, T. Oikonomou, E. Pachou, I. Tollis, S. Micheloyannis, and M. Zervakis, “Time-significant wavelet coherence for the evaluation of schizophrenic brain activity using a graph theory approach,” in Proceedings of the 28th IEEE-EMBS, Engineering in Medicine and Biology Society (EMBC '06), vol. 1, pp. 4265–4268, New York, NY, USA, 2006. View at: Google Scholar
  7. V. Sakkalis, V. Tsiaras, M. Zervakis, and I. Tollis, “Optimal brain network synchrony visualization: application in an alcoholism paradigm,” in Proceedings of the 29th Annual International Conference of IEEE-EMBS, Engineering in Medicine and Biology Society (EMBC '07), pp. 4285–4288, 2007. View at: Publisher Site | Google Scholar
  8. C. J. Stam, B. F. Jones, G. Nolte, M. Breakspear, and P. Scheltens, “Small-world networks and functional connectivity in Alzheimer's disease,” Cerebral Cortex, vol. 17, no. 1, pp. 92–99, 2007. View at: Publisher Site | Google Scholar
  9. N. Situ, R. Rezaie, A. Papanicolaou, L. Pollonini, U. Patidar, and G. Zouridakis, “Functional connectivity networks in the autistic and healthy brain assessed using granger causality,” in Proceedings of the 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2010. View at: Google Scholar
  10. M. Massimini, F. Ferrarelli, R. Huber, S. K. Esser, H. Singh, and G. Tononi, “Neuroscience: breakdown of cortical effective connectivity during sleep,” Science, vol. 309, no. 5744, pp. 2228–2232, 2005. View at: Publisher Site | Google Scholar
  11. M. Valencia, M. A. Pastor, M. A. Fernández-Seara, J. Artieda, J. Martinerie, and M. Chavez, “Complex modular structure of large-scale brain networks,” Chaos, vol. 19, no. 2, Article ID 023119, 2009. View at: Publisher Site | Google Scholar
  12. V. Sakkalis, C. Doru Giurcǎneanu, P. Xanthopoulos et al., “Assessment of linear and nonlinear synchronization measures for analyzing EEG in a mild epileptic paradigm,” IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 4, pp. 433–441, 2009. View at: Publisher Site | Google Scholar
  13. G. Di Battista, P. Eades, R. Tamassia, and I. G. Tollis, Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall, Upper Saddle River, NJ, USA, 1999.
  15. M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: uses and interpretations,” NeuroImage, vol. 52, no. 3, pp. 1059–1069, 2010. View at: Publisher Site | Google Scholar
  16. U. Egert, TH. Knott, C. Schwarz et al., “MEA-Tools: an open source toolbox for the analysis of multi-electrode data with MATLAB,” Journal of Neuroscience Methods, vol. 117, no. 1, pp. 33–42, 2002. View at: Publisher Site | Google Scholar
  17. M. Mørup, L. K. Hansen, and S. M. Arnfred, “ERPWAVELAB: a toolbox for multi-channel analysis of time-frequency transformed event related potentials,” Journal of Neuroscience Methods, vol. 161, no. 2, pp. 361–368, 2007. View at: Publisher Site | Google Scholar
  18. A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004. View at: Publisher Site | Google Scholar
  19. V. Sakkalis, V. Tsiaras, and I. Tollis, “Graph analysis and visualization for brain function characterization using EEG data,” Journal of Healthcare Engineering, vol. 1, no. 3, pp. 435–460, 2010. View at: Google Scholar
  20. J. G. Snodgrass and M. Vanderwart, “A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity,” Journal of Experimental Psychology: Human Learning and Memory, vol. 6, no. 2, pp. 174–215, 1980. View at: Publisher Site | Google Scholar
  21. S. M. Kay, Modern Spectral Estimation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1988.
  22. J. Arnhold, P. Grassberger, K. Lehnertz, and C. E. Elger, “A robust method for detecting interdependences: application to intracranially recorded EEG,” Physica D, vol. 134, no. 4, pp. 419–430, 1999. View at: Google Scholar
  23. L. A. Baccalá and K. Sameshima, “Partial directed coherence: a new concept in neural structure determination,” Biological Cybernetics, vol. 84, no. 6, pp. 463–474, 2001. View at: Google Scholar
  24. F. Takens, “Detecting strange attractors in turbulence,” in Proceedings of the Dynamical Systems and Turbulence Symposium, vol. 898 of Lecture Notes in Mathematics, pp. 366–381, 1981. View at: Google Scholar
  25. R. Q. Quiroga, A. Kraskov, T. Kreuz, and P. Grassberger, “Performance of different synchronization measures in real data: a case study on electroencephalographic signals,” Physical Review E, vol. 65, no. 4, Article ID 041903, 14 pages, 2002. View at: Publisher Site | Google Scholar
  26. T. Sauer, J. A. Yorke, and M. Casdagli, “Embedology,” Journal of Statistical Physics, vol. 65, no. 3-4, pp. 579–616, 1991. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  27. M. B. Kennel, R. Brown, and H. D. I. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” Physical Review A, vol. 45, no. 6, pp. 3403–3411, 1992. View at: Publisher Site | Google Scholar
  28. L. Cao, “Practical method for determining the minimum embedding dimension of a scalar time series,” Physica D, vol. 110, no. 1-2, pp. 43–50, 1997. View at: Google Scholar
  29. R. Hegger, H. Kantz, and T. Schreiber, “Practical implementation of nonlinear time series methods: the TISEAN package,” Chaos, vol. 9, no. 2, pp. 413–435, 1999. View at: Google Scholar
  30. A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” Physical Review A, vol. 33, no. 2, pp. 1134–1140, 1986. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  31. P. Von Bünau, F. C. Meinecke, F. C. Király, and K. R. Müller, “Finding stationary subspaces in multivariate time series,” Physical Review Letters, vol. 103, no. 21, Article ID 214101, 2009. View at: Publisher Site | Google Scholar
  32. D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, no. 6684, pp. 440–442, 1998. View at: Google Scholar
  33. M. E. J. Newman, “Assortative mixing in networks,” Physical Review Letters, vol. 89, no. 20, Article ID 208701, 4 pages, 2002. View at: Google Scholar
  34. C. H. Park, S. Y. Kim, Y. H. Kim, and K. Kim, “Comparison of the small-world topology between anatomical and functional connectivity in the human brain,” Physica A, vol. 387, no. 23, pp. 5958–5962, 2008. View at: Publisher Site | Google Scholar
  35. V. M. Eguíluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, “Scale-free brain functional networks,” Physical Review Letters, vol. 94, no. 1, Article ID 018102, 2005. View at: Publisher Site | Google Scholar
  36. R. Xulvi-Brunet and I. M. Sokolov, “Reshuffling scale-free networks: from random to assortative,” Physical Review E, vol. 70, no. 6, Article ID 066102, 6 pages, 2004. View at: Publisher Site | Google Scholar
  37. V. Latora and M. Marchiori, “Efficient behavior of small-world networks,” Physical Review Letters, vol. 87, no. 19, Article ID 198701, 4 pages, 2001. View at: Google Scholar
  38. C. H. Hubbell, “An input-output approach to clique identification,” Sociometry, vol. 28, pp. 377–399, 1965. View at: Google Scholar
  39. E. Estrada and J. A. Rodríguez-Velázquez, “Subgraph centrality in complex networks,” Physical Review E, vol. 71, no. 5, Article ID 056103, pp. 1–9, 2005. View at: Publisher Site | Google Scholar | MathSciNet
  40. L. Demetrius, V. M. Gundlach, and G. Ochs, “Complexity and demographic stability in population models,” Theoretical Population Biology, vol. 65, no. 3, pp. 211–225, 2004. View at: Publisher Site | Google Scholar
  41. V. L. Tsiaras, Algorithms for the analysis and visualization of biomedical networks, Ph.D. thesis, Computer Science Department, University of Crete, Heraklion, Greece, 2009.

Copyright © 2011 Eleni G. Christodoulou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
