
Computational Intelligence and Neuroscience

Volume 2011 (2011), Article ID 747290, 12 pages

http://dx.doi.org/10.1155/2011/747290

## BrainNetVis: An Open-Access Tool to Effectively Quantify and Visualize Brain Networks

^{1}Institute of Computer Science (ICS), Foundation for Research and Technology—Hellas (FORTH), N. Plastira 100, GR-70013 Heraklion, Greece

^{2}Department of Computer Science, University of Crete, GR-71409 Heraklion, Greece

Received 17 September 2010; Revised 25 November 2010; Accepted 31 December 2010

Academic Editor: Sylvain Baillet

Copyright © 2011 Eleni G. Christodoulou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents BrainNetVis, a tool for brain network modelling and visualization that provides both quantitative and qualitative measures of brain interconnectivity. It motivates the creation of the tool by reviewing similar works in the field and by describing how our tool contributes to the existing landscape. It also describes the methods used for the calculation of the graph metrics (global network metrics and vertex metrics), which carry the brain network information. To make the methods clear and understandable, we use an exemplar dataset throughout the paper, on which the calculations and the visualizations are performed. This dataset consists of an alcoholic and a control group of subjects.

#### 1. Introduction

One of the major issues in neuroscience is to describe how different brain areas communicate with each other during perception, cognition, and action as well as during spontaneous activity in the default or resting state. Two main approaches for capturing and localizing brain activity motifs have been proposed: univariate spectrum-based analysis and functional connectivity analysis [1]. Friston [2] defined functional connectivity as the statistical dependence between the activations of distinct and often well-separated neuronal populations.

Network models and graph theory provide a common framework for describing brain functional connectivity [3–5]. The interdependence between brain areas is estimated using multivariate neurophysiological signals (EEG, MEG, ECoG) and/or haemodynamic response images (fMRI). Then, a network is formed by mapping either brain areas or channels to vertices and by considering an edge between two vertices if and only if the estimated interdependence is above a threshold. Regarding threshold selection, it is important to note that this is a delicate step and there is currently no established way of favouring a specific threshold value. In practice, a broad range of threshold values is used to characterize the network. However, the authors propose two alternative approaches for selecting a threshold value: one based on group statistics between specific graph-theoretic measures of the populations under analysis [6], and one based on a signal-driven technique that selects the optimal visualization threshold using surrogate datasets (artificially generated ensembles of data aiming at revealing the most significantly coupled brain regions) to correctly identify the most significant correlation patterns [7]. The next step in the analysis, after edge identification, is to compute network statistics and characterize the network. Then, using the network characterization, one can draw conclusions on the effect of illnesses or of cognitive loads on functional connectivity [6–11].

In this study, we briefly refer to pairwise (bivariate) and multivariate interdependence measures, as well as linear and nonlinear ones, that have been successfully used as indices of cerebral engagement [12]. This information is important for the correct usage of the tool, especially for nonexpert users, as the application of these measures on the raw EEG data produces the input to our tool. The BrainNetVis tool provides a dynamic snapshot of the highly complex underlying neural mechanisms by means of graph visualization [13]. BrainNetVis is an open-access multiplatform tool, provided by ICS-FORTH, for graph representation and brain network visualization. Please note that BrainNetVis calculates the metrics presented below on the *synchronization matrices* (adjacency matrices), which the user should calculate in advance. However, the preprocessing section (Section 3.2) briefly presents some widely used techniques to assess functional brain connectivity and form the adjacency matrix.

At this point, we refer to some already existing tools in the field. These tools capture different kinds of EEG information than BrainNetVis and may be used complementarily to it. One of them is EEGLAB [14], which we have been using extensively for better perception of the brain area. EEGLAB is an interactive MATLAB toolbox for processing continuous and event-related EEG, MEG, and other electrophysiological data, incorporating independent component analysis (ICA), time/frequency analysis, artifact rejection, event-related statistics, and several useful modes of visualization of the averaged and single-trial data. EEGLAB also offers dipole localization functions. Some of the metrics that we implement have also been implemented in the Brain Connectivity Toolbox (a MATLAB toolbox) by Rubinov and Sporns [15]. Other related toolboxes include MEA-Tools [16] and ERPWAVELAB [17]. In these toolboxes, however, the measures for quantifying channel interactions are mainly confined to the temporal cross-correlation [16] and the coherence spectrum [17, 18]. More sophisticated interdependence techniques addressing not only linear but also nonlinear synchronization and causality are also available and applied in certain pathologies such as epilepsy [12]. Such measures can act complementarily to the graph-theoretic indices that characterize brain networks, as discussed in [19], and can be used as input to BrainNetVis.

The paper is organized as follows. Section 2 presents essential information on the different ways of graph modelling and manipulations, using BrainNetVis. Section 3 refers to the preprocessing needed (Section 3.2), the most commonly used menu calls and the GUI (Section 3.3), and the possible graph visualization options (Section 3.4). Our conclusion is given in Section 4.

#### 2. Network Analysis

Before presenting BrainNetVis, it is important to provide here some basic definitions from graph theory.

A *graph* $G = (V, E)$ is defined on a set of *vertices* $V$ and a set of *edges* $E$, where each edge is an ordered or unordered pair of vertices. An ordered pair $(u, v)$ is called a *directed edge*, while an unordered pair $\{u, v\}$, where $u \neq v$, is called an *undirected edge*. In case $u = v$, the edge is called a *self-loop*. In our study, we consider *simple* graphs, that is, graphs without self-loops. Also, the cardinality of $V$ is denoted by $n$ (i.e., $n = |V|$).

A *weighted network* $N = (G, w)$ consists of a graph $G$ with vertex set $V$ and edge set $E$ augmented with an edge value function $w$ that assigns to each edge $e \in E$ a real value $w(e)$. Every weighted network corresponds to a real matrix $A = (a_{ij})$, $i, j = 1, \dots, n$, where $a_{ij}$ is equal to the value of edge $(i, j)$ if $(i, j) \in E$, or to 0 otherwise. If we reserve the value 0 to mean the absence of an edge, then the correspondence between $N$ and $A$ is one to one. In this work, we consider a subset of weighted networks, which we call *synchronization networks*, where edge values are restricted to the interval $(0, 1]$ and interpreted as the strength of dependence between vertices.

In synchronization networks, higher edge values indicate stronger dependencies. To define the length of an edge, we should at least reverse the order of edge values by applying, for example, the inverse function $\ell_1$, that is, $\ell_1(e) = 1/w(e)$. We also propose another, logarithmic function $\ell_2$, where $\ell_2(e) = -\log w(e)$.

These functions transform edge values into edge lengths in the case of synchronization networks. Which of the two performs better depends on the graph structure and on the metric or the visualization method that uses these lengths. When choosing the appropriate formulation, one should consider that $\ell_1(e)$ tends to $\infty$ faster than $\ell_2(e)$ when $w(e) \to 0$. Therefore, the edges with small values are assigned longer lengths with the function $\ell_1$ than with the function $\ell_2$.

The length of a path from vertex $u$ to vertex $v$ is the sum of the lengths of the edges of the path. The shortest path distance from vertex $u$ to vertex $v$ is denoted by $d(u, v)$. If vertex $v$ is unreachable from vertex $u$, then $d(u, v) = \infty$.
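For illustration, a minimal Python sketch (not part of BrainNetVis) computes shortest path distances on a synchronization matrix, assuming the inverse transform $1/w$ to turn synchronization strengths into edge lengths; a zero entry denotes an absent edge:

```python
import heapq

def shortest_path_lengths(sync, source, transform=lambda w: 1.0 / w):
    """Dijkstra's algorithm over a synchronization matrix.

    Edge values in (0, 1] are converted to lengths with `transform`
    (here the inverse function, so stronger coupling -> shorter edge).
    Unreachable vertices get distance infinity.
    """
    n = len(sync)
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in range(n):
            w = sync[u][v]
            if v == u or w == 0.0:
                continue
            nd = d + transform(w)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return [dist.get(v, float("inf")) for v in range(n)]

# Toy 3-channel synchronization matrix (symmetric, values in (0, 1]).
S = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
d = shortest_path_lengths(S, 0)  # lengths 1/0.5 = 2 and 2 + 1/1 = 3
```

The same routine works with any length transform passed as `transform`.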

#### 3. Methods and Results

##### 3.1. Exemplar Case

In what follows, we are using the data of a specific use case, consisting of alcoholic and control subjects, in order to provide concrete examples of use of the application. Briefly, the specific study included 30 control subjects and 30 alcoholic subjects. Each subject was fitted with a 61-lead electrode cap (ECI, Electro-Cap International). All scalp electrodes were referred to a single common reference. In this experiment, each subject was exposed to pictures of objects chosen from the 1980 Snodgrass and Vanderwart picture set [20]. The stimuli in each trial were randomized (but not repeated) and were presented on a white background for 300 ms at the center of a computer monitor. Their size was approximately 5–10 cm × 5–10 cm, thus subtending a visual angle of 0.05°–0.1°. Ten trials were shown, with the interval between trials fixed to 3.2 s. The participants were instructed to memorize the pictures in order to be able to identify them later. For each subject and for each trial and frequency band (0.5–4 Hz, 4–8 Hz, 8–13 Hz, 13–30 Hz, 30–45 Hz), the interdependence for each channel pair (there are $61 \times 60 / 2 = 1830$ channel pairs, since the number of active EEG channels is 61) was calculated using the coherence and the RIM methods. The results were stored in interdependence matrices with elements ranging from 0 to 1. The main finding of this study, using BrainNetVis, was that the alcoholic subjects have impaired synchronization of brain activity and loss of lateralization during the rehearsal process as compared to control subjects.

##### 3.2. Preprocessing

In order to create a graph, a matrix containing the EEG channel pairwise correlations is required. Thus, one needs to calculate the correlations among all pairs of electrodes and deduce the respective adjacency matrix, called the *synchronization matrix*. There exist a number of measures that capture the linear and the nonlinear links between time series in a frequency band in order to calculate the required correlations (in the EEG analysis context they are called synchronization indices). Three measures have been chosen after an extensive study of linear and nonlinear synchronization measures [12]: the typical magnitude squared coherence method (MSC) [21], a nonlinear bivariate measure for generalized synchronization (RIM) [22], and Partial Directed Coherence (PDC) [23]. The advantage of magnitude squared coherence is that it is well known and widely accepted. The advantage of RIM is that it is able to capture nonlinear patterns available in the signals, whereas PDC can measure causality.

*(1) Magnitude Squared Coherence (MSC)*

MSC (or simply coherence) has been a well-established and traditionally used tool to investigate the linear relation between two signals or EEG channels. Let us suppose that we have two simultaneously measured discrete time series $x_n$ and $y_n$, $n = 1, \dots, N$. MSC is based on the cross-spectral density function $C_{xy}(f)$, which is simply derived via the Fourier transform of the cross-correlation, normalized by the individual autospectral density functions $C_{xx}(f)$ and $C_{yy}(f)$. Hence, MSC is calculated using Welch's method as
$$\gamma_{xy}^2(f) = \frac{\left| \langle C_{xy}(f) \rangle \right|^2}{\langle C_{xx}(f) \rangle \, \langle C_{yy}(f) \rangle},$$
where $\langle \cdot \rangle$ indicates window averaging. The estimated MSC for a given frequency $f$ ranges between 0 (no coupling) and 1 (maximum linear interdependence).
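A bare-bones sketch of this estimator (rectangular windows, no overlap; a real Welch implementation uses tapered, overlapping windows) can be written in pure Python:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for short segments)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def msc(x, y, seg_len):
    """Magnitude squared coherence with segment averaging.

    Averages cross- and auto-spectra over non-overlapping segments, then
    gamma^2(f) = |<Cxy>|^2 / (<Cxx> <Cyy>), one value per frequency bin.
    """
    nseg = len(x) // seg_len
    K = seg_len
    Cxy = [0j] * K
    Cxx = [0.0] * K
    Cyy = [0.0] * K
    for s in range(nseg):
        xs = dft(x[s * K:(s + 1) * K])
        ys = dft(y[s * K:(s + 1) * K])
        for k in range(K):
            Cxy[k] += xs[k] * ys[k].conjugate()
            Cxx[k] += abs(xs[k]) ** 2
            Cyy[k] += abs(ys[k]) ** 2
    return [abs(Cxy[k]) ** 2 / (Cxx[k] * Cyy[k]) if Cxx[k] * Cyy[k] > 0 else 0.0
            for k in range(K)]

# Identical signals have coherence 1 at every frequency with power.
x = [math.sin(0.3 * n) for n in range(64)]
g = msc(x, x, seg_len=16)
```

Note that with a single segment the estimate is identically 1; averaging over several windows is what makes the measure informative.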

*(2) A Robust Interdependence Measure (RIM)*

Given two scalar time series $x_n$ and $y_n$ with $n = 1, \dots, N$, which have been measured from dynamical systems $X$ and $Y$, the dynamics of the systems are reconstructed using delay coordinates [24]
$$\mathbf{x}_n = \left( x_n, x_{n-\tau}, \dots, x_{n-(m-1)\tau} \right),$$
and similarly we reconstruct $\mathbf{y}_n$ from $y_n$, with an embedding dimension $m$ and a delay time $\tau$, for $n = (m-1)\tau + 1, \dots, N$. The quantities $k$ and $\Theta$ that appear below are parameters of Arnhold's method [25]: the number of nearest neighbours and the Theiler window, respectively. Takens' [24] embedding theorems and their sequels (e.g., [26]) are existence proofs, but they do not directly show how to get a suitable time delay $\tau$ or embedding dimension $m$ from a finite time series. Empirical and heuristic criteria are employed for selecting $\tau$ and $m$. Usually, $\tau$ is chosen as the value for which the autocorrelation function first passes through zero, while $m$ is determined using variations of false nearest neighbour statistics [27–29]. Parameter $\tau$ can also be calculated using the method of Fraser [30].

Let $r_{n,j}$ and $s_{n,j}$, $j = 1, \dots, k$, denote the time indices of the $k$ nearest Euclidean neighbours of $\mathbf{x}_n$ and $\mathbf{y}_n$, respectively. Temporally correlated neighbours are excluded by means of a Theiler correction: $|r_{n,j} - n| > \Theta$ and $|s_{n,j} - n| > \Theta$. For each $\mathbf{x}_n$, the average square distance of $\mathbf{x}_n$ to all remaining points in $X$ is given by
$$R_n(X) = \frac{1}{N - 1} \sum_{j \neq n} \left| \mathbf{x}_n - \mathbf{x}_j \right|^2.$$
For each $\mathbf{x}_n$, the $Y$-conditioned mean squared Euclidean distance is defined as
$$R_n^{(k)}(X \mid Y) = \frac{1}{k} \sum_{j=1}^{k} \left| \mathbf{x}_n - \mathbf{x}_{s_{n,j}} \right|^2,$$
that is, the distances in $X$ are averaged over the neighbours selected in $Y$. Quiroga et al. [25] defined the dependence measure
$$N(X \mid Y) = \frac{1}{N} \sum_{n=1}^{N} \frac{R_n(X) - R_n^{(k)}(X \mid Y)}{R_n(X)}.$$
The measure $N(Y \mid X)$ is defined in complete analogy, and as the interdependence measure between $X$ and $Y$, we use the mean value $N = \big( N(X \mid Y) + N(Y \mid X) \big) / 2$.
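A compact sketch of such a conditioned-neighbour measure follows; it is a Quiroga-style $N(X \mid Y)$ and is meant only to illustrate the mechanics (the exact RIM definition of [22] may differ in details such as normalization):

```python
import math

def embed(x, m, tau):
    """Delay-coordinate embedding: vectors (x[n], x[n-tau], ..., x[n-(m-1)tau])."""
    start = (m - 1) * tau
    return [[x[n - i * tau] for i in range(m)] for n in range(start, len(x))]

def sqdist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def n_measure(x, y, m=2, tau=1, k=3, theiler=2):
    """N(X|Y): compares distances in X with Y-conditioned distances.

    Values near 1 mean the neighbours of y_n predict the neighbours of
    x_n (strong interdependence); values near 0 mean they do not.
    Candidates within the Theiler window are excluded everywhere.
    """
    X = embed(x, m, tau)
    Y = embed(y, m, tau)
    N = len(X)
    total = 0.0
    for n in range(N):
        cand = [j for j in range(N) if abs(j - n) > theiler]
        # k nearest neighbours of y_n, then measure the distances in X
        knn_y = sorted(cand, key=lambda j: sqdist(Y[n], Y[j]))[:k]
        Rn = sum(sqdist(X[n], X[j]) for j in cand) / len(cand)
        Rcond = sum(sqdist(X[n], X[j]) for j in knn_y) / k
        total += (Rn - Rcond) / Rn
    return total / N

x = [math.sin(0.3 * n) for n in range(120)]
y = [math.sin(0.7 * n + 1.0) for n in range(120)]
n_same = n_measure(x, x)   # identical signals: close to 1
n_diff = n_measure(x, y)   # unrelated signals: much smaller
```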

*(3) Partial Directed Coherence (PDC)*

Let $\mathbf{x}(t)$ with $t \in \mathbb{Z}$ be a stationary $n$-dimensional time series with mean zero. Then, a vector autoregressive model of order $p$ for $\mathbf{x}(t)$ is given by
$$\mathbf{x}(t) = \sum_{r=1}^{p} A_r \, \mathbf{x}(t - r) + \boldsymbol{\varepsilon}(t),$$
where $A_1, \dots, A_p$ are the $n \times n$ coefficient matrices of the model and $\boldsymbol{\varepsilon}(t)$ is a multivariate Gaussian white noise process with covariance matrix $\Sigma$. In this model, the coefficients $A_r(i, j)$ describe how the present values of $x_i$ depend linearly on the past values of the components $x_j$. In order to provide a frequency domain measure for Granger-causality, Baccala and Sameshima [23] introduced the concept of PDC. This measure is based on the Fourier transform of the coefficient series,
$$A(f) = I - \sum_{r=1}^{p} A_r \, e^{-2\pi \mathrm{i} f r}.$$
More precisely, the PDC from $x_j$ to $x_i$ is defined as
$$\pi_{i \leftarrow j}(f) = \frac{\left| A_{ij}(f) \right|}{\sqrt{\sum_{k=1}^{n} \left| A_{kj}(f) \right|^2}}.$$
The PDC $\pi_{i \leftarrow j}(f)$ takes values between 0 and 1 and vanishes for all frequencies if and only if the coefficients $A_r(i, j)$ are zero for all $r$.
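Given fitted VAR coefficient matrices, the PDC itself is a short computation; the following sketch (the VAR fitting step is assumed to have been done elsewhere) evaluates it at one normalized frequency:

```python
import cmath

def pdc(coeffs, f):
    """Partial directed coherence at normalized frequency f in [0, 0.5].

    coeffs: list of p coefficient matrices A_r (n x n) of a VAR(p) model.
    Returns the n x n matrix out[i][j] = PDC from series j to series i.
    """
    n = len(coeffs[0])
    # A(f) = I - sum_r A_r exp(-2*pi*i*f*r)
    A = [[(1.0 if i == j else 0.0) + 0j for j in range(n)] for i in range(n)]
    for r, Ar in enumerate(coeffs, start=1):
        ph = cmath.exp(-2j * cmath.pi * f * r)
        for i in range(n):
            for j in range(n):
                A[i][j] -= Ar[i][j] * ph
    out = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # column-wise normalization: sum_i out[i][j]^2 == 1
        norm = sum(abs(A[k][j]) ** 2 for k in range(n)) ** 0.5
        for i in range(n):
            out[i][j] = abs(A[i][j]) / norm
    return out

# Toy VAR(1): series 0 drives series 1, but not vice versa.
A1 = [[0.5, 0.0],
      [0.4, 0.3]]
P = pdc([A1], 0.1)  # expect P[0][1] == 0 (no influence 1 -> 0), P[1][0] > 0
```

The column normalization reproduces the defining property that the squared PDC values from a given source series sum to 1 over all targets.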

The synchronization matrix created using one of the above methods serves as input to the BrainNetVis tool; thus, it should be calculated separately and a priori. Please note that the presented tool currently implements only graph characterization measures and visualization schemes. It can be used with a variety of inputs in the form of an adjacency matrix. However, we provide the preprocessing section mostly for the interested but nonexpert user who wishes to investigate how graph analysis may be applied to the neuroscience field. In this sense, even though signal processing techniques are outside the scope of the tool, we do describe the most widely used methods that provide the input for the subsequent graph analysis. Nevertheless, most of the methods presented, linear (e.g., PDC) but especially nonlinear ones (e.g., RIM), assume some kind of *stationarity*. Generally, the EEG distribution is considered a multivariate Gaussian process, even though the mean and covariance properties change from segment to segment. Therefore, strictly speaking, the EEG is only quasistationary, because it can be considered stationary only within short intervals. Hence, the user should test the stationarity assumptions prior to using these methods. Fortunately, a novel and promising technique capable of decomposing a multivariate time series into its stationary and nonstationary parts, known as stationary subspace analysis (SSA) [31], can be utilized to overcome the implicit stationarity constraints.

###### 3.2.1. Binary and Greyscale Networks on BrainNetVis

BrainNetVis provides the option of using either a *binary* or a *greyscale* network by adjusting, respectively, the *Network Metrics Options* under the *View* drop-down menu. In our use case, we provided as input to the tool a synchronization matrix describing the brain network of a *virtual* alcoholic patient. This virtual patient has been created by taking the means across the node and edge values over *all* 30 alcoholic subjects. We underline that this subject does not actually exist. We applied both a binary network, obtained by thresholding the edge values, and a greyscale network, which we visualized using a colormap scale. The edge length transformation function can also be selected under the same menu. The results are depicted in Figure 1.
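Conceptually, the binary option amounts to thresholding the synchronization matrix, as in this minimal sketch (the threshold value is arbitrary here):

```python
def binarize(sync, threshold):
    """Threshold a synchronization matrix into a binary adjacency matrix.

    Entries at or above the threshold become edges (1); the diagonal is
    kept at 0 since the graphs considered are simple (no self-loops).
    """
    n = len(sync)
    return [[1 if (i != j and sync[i][j] >= threshold) else 0 for j in range(n)]
            for i in range(n)]

S = [[0.0, 0.7, 0.2],
     [0.7, 0.0, 0.5],
     [0.2, 0.5, 0.0]]
B = binarize(S, 0.4)  # keeps the 0.7 and 0.5 edges, drops the 0.2 edge
```

A greyscale network simply keeps the raw values in $(0, 1]$ instead of thresholding them.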

###### 3.2.2. Data Structure

Two types of files are required for the algorithms that BrainNetVis encapsulates to run properly:

1. A square synchronization matrix with the data from the EEG study (*required for the algorithms to function*).
2. A file containing a matrix of the labels and the coordinates of each electrode. The rows of the table correspond to the electrodes. The first column contains the electrodes' labels, and the other columns contain the coordinates of the electrodes: either two columns (for 2D data, the x and y coordinates) or three columns (for 3D data, the x, y, and z coordinates). (*Required for the visualization options.*)
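Assuming a plain whitespace-separated layout (the actual delimiter conventions of the tool may differ), the electrode file can be parsed along these lines:

```python
def parse_electrode_file(text):
    """Parse an electrode table: one row per electrode, first column the
    label, remaining 2 (x, y) or 3 (x, y, z) columns the coordinates."""
    electrodes = {}
    for line in text.strip().splitlines():
        parts = line.split()
        electrodes[parts[0]] = tuple(float(p) for p in parts[1:])
    return electrodes

# Hypothetical 2D coordinates for three midline electrodes.
sample = """Fz 0.0 0.71
Cz 0.0 0.0
Pz 0.0 -0.71"""
coords = parse_electrode_file(sample)
```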

##### 3.3. Menu Calls (GUI)

The network metrics available in BrainNetVis will be presented here, in a way that follows the tool's structure.

###### 3.3.1. Global Network Metrics

Networks are often classified into unifying categories in order to obtain a better understanding of their structure and function. *Network measures* are numbers that capture reduced information about graphs and describe essential properties. Network measures should capture the relevant and needed information; they should differentiate between certain classes of networks and be easily computed in order to be useful in algorithms and applications.

A very important global network metric is the *clustering coefficient*, introduced by Watts and Strogatz [32] in 1998. For a vertex $v$, the clustering coefficient $C_v$ measures the connectivity of its direct neighborhood. The clustering coefficient $C$ of a graph is the average of $C_v$ taken over all vertices.

In the BrainNetVis application, we implement two different kinds of clustering coefficients, proposed by Zhang and Horvath (the first) and Onnela (the second). Zhang and Horvath proposed a definition which uses only the network values, in the context of gene coexpression networks. On the other hand, Onnela proposed a version of local clustering coefficient based on the concept of subgraph intensity, defined as the geometric average of subgraph edge values. Both metrics are defined in Table 1. It has to be noticed that the Onnela clustering coefficient definition suffers from the drawback that it requires an underlying binary network; if this is not available as a separate set of data, then presumably it must be obtained by discretizing the weighted edges.
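As an illustration of the weighted case, the following is an Onnela-style sketch based on subgraph intensity (the geometric mean of triangle edge weights); the exact definitions used by the tool are those in Table 1:

```python
def onnela_clustering(W):
    """Onnela-style weighted clustering coefficient for each vertex.

    Weights are normalized by the largest weight; k_i is the number of
    neighbours (nonzero weights). Each triangle contributes the geometric
    mean of its three edge weights (the subgraph intensity).
    """
    n = len(W)
    wmax = max(W[i][j] for i in range(n) for j in range(n)) or 1.0
    Wn = [[W[i][j] / wmax for j in range(n)] for i in range(n)]
    C = []
    for i in range(n):
        nbrs = [j for j in range(n) if j != i and Wn[i][j] > 0]
        k = len(nbrs)
        if k < 2:
            C.append(0.0)
            continue
        s = sum((Wn[i][j] * Wn[j][h] * Wn[i][h]) ** (1 / 3)
                for j in nbrs for h in nbrs if h != j and Wn[j][h] > 0)
        C.append(s / (k * (k - 1)))
    return C

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # fully clustered
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]       # no triangles at all
C_tri = onnela_clustering(triangle)
C_path = onnela_clustering(path)
```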

The other important global network metric included in the tool is *assortative mixing*. This feature captures the similarity between properties of adjacent network vertices. Intuitively, this measure captures the tendency of network vertices to connect either to vertices with similar degrees (high degrees connected with high degrees and low degrees connected with low degrees) or to vertices that have dissimilar degrees (high degrees connected with low degrees). Newman [33] proposed an interesting measure to quantify the degree of similarity (dissimilarity) between adjacent vertices in a network using assortative mixing, which is given as the correlation between properties of every pair of adjacent vertices. Each vertex may be assigned a single scalar, such as a centrality measure of the vertex position in a network, or a set of scalar properties. Then, the assortativity coefficient for an undirected graph is defined as the (sample) Pearson product-moment correlation coefficient. The formula of this computation is given in Table 1, and it is written in a symmetrical form. This equation can also be used for directed graphs by simply ignoring the direction of edges.

The value of the assortativity coefficient $r$ lies in the range $[-1, 1]$, with $r = 1$ indicating perfect assortativity and $r = -1$ indicating perfect disassortativity (perfect negative correlation between the properties of the vertices of the edges under consideration). Brain functional networks tend to be assortative [34, 35]. From computational studies, it has been observed that information is more easily transferred through assortative networks than through disassortative ones [36].
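For the common degree-based variant, the coefficient is just the Pearson correlation of the degrees at either end of each edge, as in this sketch:

```python
def degree_assortativity(edges):
    """Newman's degree assortativity for an undirected graph.

    Pearson correlation of the degrees at the two ends of each edge;
    each edge is counted in both orientations to keep the form symmetric.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# A star is perfectly disassortative: the hub only touches leaves.
star = [(0, i) for i in range(1, 5)]
r_star = degree_assortativity(star)   # -1.0
```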

*Global network metrics on BrainNetVis*

BrainNetVis allows the calculation of the mentioned global network metrics by following the *Tools* menu (see Figure 2). Continuing the previous example on an alcoholic patient, we applied the simple *Clustering Coefficient* and the *Assortative Mixing*.

###### 3.3.2. Vertex Metrics-Centrality Measures

The metrics above are global network metrics. There is also significant interest in local network properties, which concentrate on a single node of interest. These properties are very important since, at the local scale, we can detect which vertices are the most relevant for the organization and functioning of a network. These local measures are commonly named *centrality measures* (or centrality indices) and have proved of great value in analysing the role played by individuals in social networks and in identifying essential proteins, keystone species, and functionally important brain regions.

*Centrality Measures Based on Neighbourhoods*

The simplest and most basic centrality measure is the *degree centrality* of a vertex $v$. In practice, this is the number of neighbours of the node of interest. In spite of the simplicity of this concept, degree is the most fundamental network measure, and most other centrality measures are linked to it. The definitions of degree centrality, both for directed and for undirected networks, are provided in Table 1.

In the case of greyscale networks, instead of the term *degree centrality*, we use the term *strength centrality*. The formulas for strength centrality are defined correspondingly (Table 1). In BrainNetVis, strength centrality is presented as *normalized degree centrality*. It is accessed when the user chooses *Normalized Metrics* on the Tools → Network Metrics Options → General tab, which normalizes the edge values to range from 0 to 1 accordingly.
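In code, strength centrality is simply the row sums of the synchronization matrix, optionally normalized; a minimal sketch:

```python
def strength_centrality(W, normalized=True):
    """Strength (weighted degree) of each vertex.

    Row sums of the synchronization matrix, optionally divided by n - 1
    so that a vertex maximally coupled (weight 1) to every other vertex
    scores exactly 1.
    """
    n = len(W)
    s = [sum(W[i][j] for j in range(n) if j != i) for i in range(n)]
    if normalized:
        s = [v / (n - 1) for v in s]
    return s

S = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]
c = strength_centrality(S)   # vertex 0 touches everyone: [1.0, 0.5, 0.5]
```

On a binary adjacency matrix, the unnormalized variant reduces to plain degree centrality.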

*Centrality Measures Based on Distances*

Another set of informative measures are the *centrality measures based on distances*, referring to the distances that information has to cover in order to be transferred through the network. The first metric in this category is *closeness centrality*. Closeness can be regarded as a measure of how long it will take information to spread from a given vertex to the others in the network. For an undirected graph $G$, the shortest-path closeness centrality of a vertex $v$ is defined as the inverse of the mean geodesic distance from $v$ to every other vertex. A serious drawback of this metric is that it can only be used for connected graphs. A measure without this drawback, called *shortest path efficiency*, was proposed by Latora and Marchiori [37] and is implemented in the BrainNetVis application.

For a vertex $v$, Latora and Marchiori defined the efficiency as
$$E(v) = \frac{1}{n - 1} \sum_{u \neq v} \frac{1}{d(u, v)}.$$
The formula for $E(v)$ is also provided in Table 1.

Note that this formula can also be used for disconnected graphs. If some vertices $u$ and $v$ are not connected, then $d(u, v) = \infty$ and the pair does not contribute to the sum, since in this case $1/d(u, v) = 0$. The global efficiency $E(G)$ of a graph is the average of $E(v)$ taken over all vertices [37]:
$$E(G) = \frac{1}{n} \sum_{v \in V} E(v).$$
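For a binary network, the per-vertex and global efficiencies can be computed with a breadth-first search per vertex, as in this sketch; unreachable pairs contribute nothing, so disconnected graphs are handled gracefully:

```python
from collections import deque

def efficiency(adj):
    """Shortest-path efficiency on a binary adjacency matrix.

    E(v) = (1/(n-1)) * sum over reachable u != v of 1/d(v, u); unreachable
    pairs contribute 0. Returns (per-vertex list, global average).
    """
    n = len(adj)
    per_vertex = []
    for v in range(n):
        dist = {v: 0}
        q = deque([v])
        while q:                      # BFS from v
            u = q.popleft()
            for w in range(n):
                if adj[u][w] and w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        per_vertex.append(sum(1.0 / d for u, d in dist.items() if u != v) / (n - 1))
    return per_vertex, sum(per_vertex) / n

# Path graph 0-1-2 plus an isolated vertex 3.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 0]]
Ev, Eg = efficiency(A)   # the isolated vertex gets efficiency 0
```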

In addition to *shortest path efficiency*, we are interested in *shortest-path betweenness centrality*. This metric involves two other nodes, $s$ and $t$, apart from the central vertex $v$. Intuitively, it refers to the number of shortest paths connecting vertices $s$ and $t$ that pass through vertex $v$. In the formula provided in Table 1, the relative numbers are interpreted as the extent to which vertex $v$ controls the communication between vertices $s$ and $t$. A vertex is considered central if it lies between many pairs of other vertices. Shortest-path betweenness centrality can be generalized to greyscale networks, where the length of a path is equal to the sum of the lengths of its edges.
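Betweenness over all pairs can be computed efficiently with Brandes' algorithm; the sketch below (not code from BrainNetVis) handles the unweighted, undirected case:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for shortest-path betweenness centrality
    on an unweighted undirected graph (each unordered pair counted once)."""
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        stack, pred = [], [[] for _ in range(n)]
        sigma = [0] * n          # number of shortest paths from s
        sigma[s] = 1
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:                 # BFS, recording predecessors on shortest paths
            v = q.popleft()
            stack.append(v)
            for w in range(n):
                if not adj[v][w]:
                    continue
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = [0.0] * n        # dependency accumulation, farthest first
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return [b / 2 for b in bc]   # undirected: each pair was counted twice

# Star with centre 0: every leaf pair communicates through the centre.
A = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
b = betweenness(A)   # centre carries all 3 leaf pairs
```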

*Centrality Measures Based on Neighbourhoods and on Distances in BrainNetVis*

We applied the above types of centrality measures on our synchronization matrix of the alcoholic patient's EEG. Figure 3 depicts the visualization of the individual's brain network using the *Static Visualization Method*. The *Binary Network* using threshold = 0.4 has been selected. The centrality measures calculated are the *Degree Centrality*, *Shortest Path Efficiency*, and *Shortest Path Betweenness Centrality*. They are depicted on the respective table, shown in the same figure. Both the figure and the table with the metrics can be created by following the *View* menu.

*Spectral Centrality Measures*

Another set of network metrics is based on the calculation of the *eigenvectors* of the adjacency matrix of the network, produced at the preprocessing step. Most of them are calculated by solving a linear equation system. These measures are called *spectral centrality measures*. *Bonacich's eigenvector centrality* is one of them: according to it, the centrality of each vertex is proportional to the sum of the centralities of the vertices to which it is directly connected. The respective formula is presented in Table 1.
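Rather than solving the linear system directly, the dominant eigenvector can be approximated by power iteration, as in this sketch:

```python
def eigenvector_centrality(W, iters=200):
    """Bonacich eigenvector centrality by power iteration.

    Each vertex's score is proportional to the sum of its neighbours'
    scores. Iterating on W + I (a diagonal shift) keeps the same dominant
    eigenvector but avoids oscillation on bipartite graphs.
    """
    n = len(W)
    x = [1.0] * n
    for _ in range(iters):
        x = [x[i] + sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(x)
        x = [v / norm for v in x]
    return x

# Star: the centre accumulates all the leaves' scores and dominates.
W = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
c = eigenvector_centrality(W)   # centre -> 1, leaves -> 1/sqrt(3)
```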

Expanding the simple Bonacich's eigenvector centrality, Hubbell [38] suggested yet another centrality measure based on the solution of a system of linear equations. *Hubbell's centrality* uses an approach based on directed weighted graphs, where the weights of the edges may be real numbers. The general assumption of Hubbell's centrality is similar to the idea of Bonacich, but the centrality of a vertex depends both on its connections to other vertices and on an exogenous input, sometimes called the boundary condition. In this case, we include one more input term in the equation that describes Bonacich's eigenvector centrality. The result is shown in Table 1. This formula encapsulates the relative importance of endogenous versus exogenous factors in the determination of centrality.

The next spectral centrality measure, *subgraph centrality*, has been introduced by Estrada et al. [39]. It is calculated as the weighted sum of the numbers of closed walks in a graph, where longer walks receive lower weight than shorter ones. Closely related to the subgraphs of the network is the number of closed walks of length $k$ starting and ending on vertex $v$; this is the quantity that appears in the respective definition in Table 1.

Last but not least, a very interesting idea was suggested by Demetrius et al. [40], describing *network entropy*. Evidence has been presented that this quantity is related to the capacity of the network to withstand random changes in the network structure. Network entropy is based on the Kolmogorov-Sinai (KS) entropy, which is a generalization of the Shannon entropy in that it describes the rate at which a stochastic process generates information. In our context, information corresponds to a sequence of vertices visited by an assumed Markov process on the network. Network entropy takes into account the impact of a vertex's removal on the network. This is captured by the product in the respective definition in Table 1. The interested reader can find more detailed information in [41].

*Spectral Centrality Measures in BrainNetVis*

We applied the above types of centrality measures on our synchronization matrix of the alcoholic patient's EEG. Using links from the *Tools* menu, we calculated the *Bonacich's Eigenvector Centrality*, *Hubbell's Centrality*, *Subgraph Centrality*, and *Network Entropy*. One can define the type of network to work with (binary or greyscale) and also select the threshold value.

##### 3.4. Graph Drawing Techniques

Regarding the way in which the brain is depicted, the BrainNetVis tool incorporates three different kinds of visualization, as follows.

###### 3.4.1. Static Visualization Method

In this method, in order to visualize the topology of the emerged network, we create a static framework where each electrode is depicted by a node placed in a position similar to the actual electrode's position on the human cortex. Depending on the number of the electrodes of each experiment, an oval shape is outlined (which corresponds to the scalp) and inside this oval shape, a number of circles exist that correspond to the electrodes placed on the subjects' head during the experiments.

###### 3.4.2. Multidimensional Scaling

Multidimensional Scaling (MDS) is a family of techniques for analysis and visualization of complex data. The "beauty" of MDS is that we can analyze any kind of distance or similarity matrix, in addition to correlation matrices. Objects in a data set are represented as points in a geometric space; distance in this space represents proximity or similarity among objects. In our case, the objects are the electrodes, and the distances among them correspond to their correlations in the synchronization matrix. In general, the goal of the analysis is to detect meaningful underlying connections among the electrodes which reflect the connections among different brain functional regions. In *BrainNetVis*, we incorporated a 2D visualization of the connections among electrodes. At this point, it has to be noted that the more dimensions we use in order to reproduce the distance matrix, the better the fit of the reproduced matrix to the observed matrix (i.e., the smaller the stress). In fact, if we use as many dimensions as there are variables, then we can perfectly reproduce the observed distance matrix. Of course, our goal is to reduce the observed complexity of nature, that is, to explain the distance matrix in terms of fewer underlying dimensions. Some exemplar views of multidimensional scaling are shown in Figure 4.

###### 3.4.3. Force-Based or Force-Directed Algorithms

These are a class of algorithms for drawing graphs in an aesthetically pleasing way. Their purpose is to position the nodes of a graph in two- or three-dimensional space so that all edges are of roughly equal length and edge crossings are kept to a minimum. Force-directed algorithms achieve this by assigning forces among the sets of edges and nodes; the most straightforward approach treats the edges as springs (see Hooke's law) and the nodes as electrically charged particles (see Coulomb's law). The entire graph is then simulated as if it were a physical system. The forces between the nodes drive the dynamics and the layout of the system, which at some point reaches an equilibrium state: at that moment, the graph is drawn. For force-directed graphs, it is also possible to employ mechanisms that search more directly for energy minima, either instead of or in conjunction with the physical simulation. One such mechanism is binary stress (bStress), and it is the one we have incorporated in our tool. This model bridges the two most popular force-directed approaches, the stress and the electric-spring models, through the binary stress cost function: a carefully defined energy function with low descriptive complexity that allows fast computation via a Barnes-Hut scheme. 
Both electric-spring and stress approaches enjoy successful implementations and produce pleasing layouts for many graphs. Electric-spring models have the advantage of a lower descriptive complexity compared to the stress model. On the other hand, the stress function has a mild landscape, which allows powerful optimization techniques such as majorization to be employed; this way, good minima are usually reached regardless of the initial positions. Computationally, the binary stress model merges the advantages of both: it offers a low descriptive complexity, while at the same time it is similar in form to the known stress function, thus enabling the use of the majorization optimization scheme. More than other models, bStress emphasizes a uniform spread of the nodes within a circular drawing area. In addition, bStress is suitable for drawing large graphs, not only because of its improved scalability, but also because it achieves good area utilization. Some exemplar views of binary stress visualization scaling are shown in Figure 5.
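To make the spring-and-charge intuition concrete, the following is a minimal sketch of a generic force-directed layout: edges pull connected nodes together like springs, while all node pairs repel each other like charged particles, and positions are iterated toward equilibrium. This is only an illustration of the general idea; it is not the bStress scheme implemented in BrainNetVis, and all parameter values are arbitrary choices for the example.

```python
import math
import random

def force_directed_layout(edges, n_nodes, iters=200,
                          k_spring=0.05, k_repel=0.01, step=0.1):
    """Toy 2-D force-directed layout (not BrainNetVis's bStress):
    springs (Hooke-like) attract along edges, a Coulomb-like force
    repels every pair of nodes; small steps move the system toward
    an equilibrium, at which point the graph is 'drawn'."""
    random.seed(0)  # reproducible initial positions
    pos = [[random.random(), random.random()] for _ in range(n_nodes)]
    for _ in range(iters):
        force = [[0.0, 0.0] for _ in range(n_nodes)]
        # Repulsive force between every pair of nodes (~ 1/d^2)
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy + 1e-9
                d = math.sqrt(d2)
                f = k_repel / d2
                fx, fy = f * dx / d, f * dy / d
                force[i][0] += fx; force[i][1] += fy
                force[j][0] -= fx; force[j][1] -= fy
        # Attractive spring force along each edge (~ distance)
        for a, b in edges:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += k_spring * dx; force[a][1] += k_spring * dy
            force[b][0] -= k_spring * dx; force[b][1] -= k_spring * dy
        # Move each node a small step along its net force
        for i in range(n_nodes):
            pos[i][0] += step * force[i][0]
            pos[i][1] += step * force[i][1]
    return pos

# A 4-node ring: at equilibrium the edges come out at roughly equal length
layout = force_directed_layout([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

In a real tool the quadratic all-pairs repulsion loop is the bottleneck, which is exactly what the Barnes-Hut scheme mentioned above accelerates.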

More information on graph drawing techniques can be found in [13].

When we choose to visualize our graphs using the static visualization method, a change in the network metrics is not depicted on the output panel; this is because the electrode positions are stable and set from the beginning. Nevertheless, the changes in the calculations are saved in a matrix which is accessible by the end user. On the other hand, in multidimensional and binary stress modeling, the effects that take place when a network metric changes its value are depicted immediately after the change.

One can then set up the display options of his/her preference, for example, the way the graph vertices and edges are displayed. As far as the nodes of the network are concerned, one can adjust their size, their color (*uniform* or *colormap*), and whether the node labels are shown. Regarding the edges, there exist three color options: *uniform* for directed networks, *greyscale* for greyscale networks (the intensity of the shade of grey corresponds to the strength of the respective edge), and *colormap*. *Colormap* is also used in the case of greyscale networks, but here colors encode the strength: the closer the tint is to red, the stronger the respective edge; the closer the tint is to blue, the weaker it is. Moreover, one can adjust the size of an edge and whether it is drawn as directed or not. Figure 6 depicts the brain of the virtual control subject using both binary and colormap networks. In both cases, the threshold was set to 0.5.
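The colormap and thresholding behaviour described above can be sketched as follows. This is an illustrative analogue only: the exact palette BrainNetVis uses may differ, and the function names here are hypothetical.

```python
def weight_to_rgb(w):
    """Map an edge weight in [0, 1] to a blue-to-red tint:
    weak edges come out blue, strong edges red (illustrative
    linear colormap, not necessarily the BrainNetVis palette)."""
    w = min(max(w, 0.0), 1.0)  # clamp weight to [0, 1]
    return (int(255 * w), 0, int(255 * (1.0 - w)))

def apply_threshold(weights, threshold=0.5):
    """Keep only edges whose strength exceeds the threshold,
    as in the Figure 6 example where the threshold was 0.5."""
    return {e: w for e, w in weights.items() if w > threshold}

edges = {("Fz", "Cz"): 0.9, ("Cz", "Pz"): 0.3}
strong = apply_threshold(edges)  # only the strong Fz-Cz edge survives
weight_to_rgb(0.9)               # → (229, 0, 25), a near-red tint
```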

#### 4. Conclusion

Using BrainNetVis, one can visualize and quantify the connections of the brain, based on EEG- or MEG-acquired signals. The inner brain connectivity is depicted as a graph; different sensor locations (electrodes) are visualized as nodes and their interconnections as edges. Therefore, scientists and clinicians will be able to get a better insight regarding brain connectivity and functionality and deduce more accurate results. We tested the tool using EEG data from alcoholic patients [7]. We were thus able to investigate some structural brain features that EEG and clinical data alone would not reveal. This tool can be easily used by the interested researcher, and it is accessible via http://www.ics.forth.gr/bmi/tools.html. It runs on every operating system that has the JRE installed. Future work includes support, within the same intuitive environment, for the preprocessing methods mentioned, as well as support for the binary European Data Format (EDF). Currently, plain ASCII text format is supported for simplicity and flexibility reasons.

#### Appendix

We present here a summary of the metrics used in BrainNetVis and their placement under the tools menu. The main menu when the GUI opens contains the options: *File*, *View*, *Tools*, *Window*, and *Help*.

*File*

This drop-down menu includes the following tabs.

(i) *Import*. Following this tab, the user can give as input the greyscale matrix that corresponds to the network of interest, along with the vertex coordinates. The user can browse his/her computer for these required files.
(ii) *Export*. It is used to export the produced visualizations to a file in various formats (.eps, .pdf, .jpg, etc.).
(iii) *Exit*. It is used to quit the GUI.
(iv) *Output*. One can export all the metrics of the examined network to a *.txt* file, which is saved in the same directory as the tool executable.
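For illustration, a connectivity matrix in the plain ASCII format that BrainNetVis accepts could be parsed as below. The exact file layout (one whitespace-separated matrix row per line) is an assumption for this sketch, not a documented specification of the tool.

```python
def parse_ascii_matrix(text):
    """Parse a greyscale connectivity matrix from plain ASCII text.
    Assumed layout (hypothetical, for illustration): one matrix row
    per line, values separated by whitespace; the matrix must be
    square, one row/column per electrode."""
    rows = [[float(v) for v in line.split()]
            for line in text.strip().splitlines() if line.strip()]
    n = len(rows)
    if any(len(r) != n for r in rows):
        raise ValueError("connectivity matrix must be square")
    return rows

# Three electrodes; matrix[i][j] is the strength of the edge i -> j
matrix = parse_ascii_matrix("0 0.7 0.2\n0.7 0 0.5\n0.2 0.5 0")
```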

*View*

Under the *View* drop-down menu, one can find the following.

(i) *Network Visualization*. One can choose among the three supported visualization techniques: Channel/Source coordinates, Multidimensional Scaling, and Binary Stress, described in detail in Section 3.4.
(ii) *Network Metrics*. Following this tab, the user can ask either for the *Vertex level metrics* table, which contains the values of the vertex metrics that interest the user (and which he/she chooses under the *Tools* drop-down menu), or for the *Network level metrics* table, which contains the values of the global network metrics.

*Tools*

This menu contains the following.

(i) *Display Options*. Following this tab, the user can set up the display of the graphs: preferences concerning the nodes (size, color, label, font) and/or the edges (size, color, direction, arrow size).
(ii) *Network Metrics Options*. Three tabs appear in this submenu. The first one, named *General*, contains options such as whether the network is directed, binary, or a synchronization network. In the latter case, the tool provides an option for normalizing the edge length. The second tab, named *Vertex Metrics*, contains options for all the vertex metrics described in Section 3.3.2. Finally, the last tab, named *Network Metrics*, contains options for the network metrics described in Section 3.3.1.

*Window*

Here, the user can change the size of the window of the GUI.

#### Acknowledgment

The authors wish to thank Dimitris Andreou for the development of the supportive software of the tool's different versions.

#### References

- V. Sakkalis, “Applied strategies towards EEG/MEG biomarker identification in clinical and cognitive research,” *Biomarkers in Medicine*, vol. 5, no. 1, pp. 93–105, 2011.
- K. J. Friston, “Functional and effective connectivity in neuroimaging: a synthesis,” *Human Brain Mapping*, vol. 2, no. 1-2, pp. 56–78, 1994.
- E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” *Nature Reviews Neuroscience*, vol. 10, no. 3, pp. 186–198, 2009.
- C. J. Stam and J. C. Reijneveld, “Graph theoretical analysis of complex networks in the brain,” *Nonlinear Biomedical Physics*, vol. 1, article 3, 2007.
- F. De Vico Fallani, L. Astolfi, F. Cincotti et al., “Brain network analysis from high-resolution EEG recordings by the application of theoretical graph indexes,” *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, vol. 16, no. 5, pp. 442–452, 2008.
- V. Sakkalis, T. Oikonomou, E. Pachou, I. Tollis, S. Micheloyannis, and M. Zervakis, “Time-significant wavelet coherence for the evaluation of schizophrenic brain activity using a graph theory approach,” in *Proceedings of the 28th IEEE-EMBS, Engineering in Medicine and Biology Society (EMBC '06)*, vol. 1, pp. 4265–4268, New York, NY, USA, 2006.
- V. Sakkalis, V. Tsiaras, M. Zervakis, and I. Tollis, “Optimal brain network synchrony visualization: application in an alcoholism paradigm,” in *Proceedings of the 29th Annual International Conference of IEEE-EMBS, Engineering in Medicine and Biology Society (EMBC '07)*, pp. 4285–4288, 2007.
- C. J. Stam, B. F. Jones, G. Nolte, M. Breakspear, and P. Scheltens, “Small-world networks and functional connectivity in Alzheimer's disease,” *Cerebral Cortex*, vol. 17, no. 1, pp. 92–99, 2007.
- N. Situ, R. Rezaie, A. Papanicolaou, L. Pollonini, U. Patidar, and G. Zouridakis, “Functional connectivity networks in the autistic and healthy brain assessed using Granger causality,” in *Proceedings of the 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society*, 2010.
- M. Massimini, F. Ferrarelli, R. Huber, S. K. Esser, H. Singh, and G. Tononi, “Neuroscience: breakdown of cortical effective connectivity during sleep,” *Science*, vol. 309, no. 5744, pp. 2228–2232, 2005.
- M. Valencia, M. A. Pastor, M. A. Fernández-Seara, J. Artieda, J. Martinerie, and M. Chavez, “Complex modular structure of large-scale brain networks,” *Chaos*, vol. 19, no. 2, Article ID 023119, 2009.
- V. Sakkalis, C. Doru Giurcǎneanu, P. Xanthopoulos et al., “Assessment of linear and nonlinear synchronization measures for analyzing EEG in a mild epileptic paradigm,” *IEEE Transactions on Information Technology in Biomedicine*, vol. 13, no. 4, pp. 433–441, 2009.
- G. Di Battista, P. Eades, R. Tamassia, and I. G. Tollis, *Graph Drawing: Algorithms for the Visualization of Graphs*, Prentice Hall, Upper Saddle River, NJ, USA, 1999.
- http://sccn.ucsd.edu/eeglab/.
- M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: uses and interpretations,” *NeuroImage*, vol. 52, no. 3, pp. 1059–1069, 2010.
- U. Egert, T. Knott, C. Schwarz et al., “MEA-Tools: an open source toolbox for the analysis of multi-electrode data with MATLAB,” *Journal of Neuroscience Methods*, vol. 117, no. 1, pp. 33–42, 2002.
- M. Mørup, L. K. Hansen, and S. M. Arnfred, “ERPWAVELAB: a toolbox for multi-channel analysis of time-frequency transformed event related potentials,” *Journal of Neuroscience Methods*, vol. 161, no. 2, pp. 361–368, 2007.
- A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” *Journal of Neuroscience Methods*, vol. 134, no. 1, pp. 9–21, 2004.
- V. Sakkalis, V. Tsiaras, and I. Tollis, “Graph analysis and visualization for brain function characterization using EEG data,” *Journal of Healthcare Engineering*, vol. 1, no. 3, pp. 435–460, 2010.
- J. G. Snodgrass and M. Vanderwart, “A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity,” *Journal of Experimental Psychology: Human Learning and Memory*, vol. 6, no. 2, pp. 174–215, 1980.
- S. M. Kay, *Modern Spectral Estimation*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1988.
- J. Arnhold, P. Grassberger, K. Lehnertz, and C. E. Elger, “A robust method for detecting interdependences: application to intracranially recorded EEG,” *Physica D*, vol. 134, no. 4, pp. 419–430, 1999.
- L. A. Baccalá and K. Sameshima, “Partial directed coherence: a new concept in neural structure determination,” *Biological Cybernetics*, vol. 84, no. 6, pp. 463–474, 2001.
- F. Takens, “Detecting strange attractors in turbulence,” in *Proceedings of the Dynamical Systems and Turbulence Symposium*, vol. 898 of *Lecture Notes in Mathematics*, pp. 366–381, 1981.
- R. Q. Quiroga, A. Kraskov, T. Kreuz, and P. Grassberger, “Performance of different synchronization measures in real data: a case study on electroencephalographic signals,” *Physical Review E*, vol. 65, no. 4, Article ID 041903, 14 pages, 2002.
- T. Sauer, J. A. Yorke, and M. Casdagli, “Embedology,” *Journal of Statistical Physics*, vol. 65, no. 3-4, pp. 579–616, 1991.
- M. B. Kennel, R. Brown, and H. D. I. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” *Physical Review A*, vol. 45, no. 6, pp. 3403–3411, 1992.
- L. Cao, “Practical method for determining the minimum embedding dimension of a scalar time series,” *Physica D*, vol. 110, no. 1-2, pp. 43–50, 1997.
- R. Hegger, H. Kantz, and T. Schreiber, “Practical implementation of nonlinear time series methods: the TISEAN package,” *Chaos*, vol. 9, no. 2, pp. 413–435, 1999.
- A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” *Physical Review A*, vol. 33, no. 2, pp. 1134–1140, 1986.
- P. von Bünau, F. C. Meinecke, F. C. Király, and K. R. Müller, “Finding stationary subspaces in multivariate time series,” *Physical Review Letters*, vol. 103, no. 21, Article ID 214101, 2009.
- D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,” *Nature*, vol. 393, no. 6684, pp. 440–442, 1998.
- M. E. J. Newman, “Assortative mixing in networks,” *Physical Review Letters*, vol. 89, no. 20, Article ID 208701, 4 pages, 2002.
- C. H. Park, S. Y. Kim, Y. H. Kim, and K. Kim, “Comparison of the small-world topology between anatomical and functional connectivity in the human brain,” *Physica A*, vol. 387, no. 23, pp. 5958–5962, 2008.
- V. M. Eguíluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, “Scale-free brain functional networks,” *Physical Review Letters*, vol. 94, no. 1, Article ID 018102, 2005.
- R. Xulvi-Brunet and I. M. Sokolov, “Reshuffling scale-free networks: from random to assortative,” *Physical Review E*, vol. 70, no. 6, Article ID 066102, 6 pages, 2004.
- V. Latora and M. Marchiori, “Efficient behavior of small-world networks,” *Physical Review Letters*, vol. 87, no. 19, Article ID 198701, 4 pages, 2001.
- C. H. Hubbell, “An input-output approach to clique identification,” *Sociometry*, vol. 28, pp. 377–399, 1965.
- E. Estrada and J. A. Rodríguez-Velázquez, “Subgraph centrality in complex networks,” *Physical Review E*, vol. 71, no. 5, Article ID 056103, pp. 1–9, 2005.
- L. Demetrius, V. M. Gundlach, and G. Ochs, “Complexity and demographic stability in population models,” *Theoretical Population Biology*, vol. 65, no. 3, pp. 211–225, 2004.
- V. L. Tsiaras, *Algorithms for the Analysis and Visualization of Biomedical Networks*, Ph.D. thesis, Computer Science Department, University of Crete, Heraklion, Greece, 2009.