BioMed Research International
Volume 2015, Article ID 936295, 13 pages
http://dx.doi.org/10.1155/2015/936295
Research Article

Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

1Department of Mathematical Modelling, Kaunas University of Technology, Studentų Street 50, 51368 Kaunas, Lithuania
2Laboratory of Systems Control and Automation, Lithuanian Energy Institute, Breslaujos Street 3, 44403 Kaunas, Lithuania
3Department of Applied Informatics, Vytautas Magnus University, Vileikos Street 8-409, 44404 Kaunas, Lithuania
4Department of Business Informatics Research in Systems, Kaunas University of Technology, Studentų Street 56, 5142 Kaunas, Lithuania
5Department of Anesthesiology, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
6Department of Anesthesiology, New York Hospital Queens, 56-45 Main Street, Flushing, NY 11355, USA
7Institute of Cardiology, Lithuanian University of Health Sciences, Sukileliu Street 17, 50009 Kaunas, Lithuania
8Department of Neuroscience, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA

Received 4 July 2014; Revised 7 December 2014; Accepted 8 December 2014

Academic Editor: Carlo Cattani

Copyright © 2015 Mindaugas Snipas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allows for very efficient building and storing of the infinitesimal generator of the CTMC and produces model matrices with a distinct block structure. This, in turn, allowed us to develop efficient numerical methods for the steady-state solution of the CTMC models and to reduce the CPU time necessary to solve them ∼20-fold.

1. Introduction

Gap-junctional communication plays an important role in many processes, such as impulse propagation in the heart, communication between neurons and glia, organ formation during early development, regulation of cell proliferation, and metabolic exchange between cells of various tissues, including the lens, which lacks blood circulation. Gap junction (GJ) channels are formed of connexin (Cx) proteins, which belong to a family of integral membrane proteins exhibiting a tissue-specific expression pattern. GJs provide a direct pathway for electrical and metabolic signalling between cells [1]. In humans, twenty-one isoforms of Cxs form GJ channels [2]. Each GJ channel is composed of two hemichannels (HCs), each oligomerized from six Cxs. Cxs have four alpha-helical transmembrane domains (M1 to M4), intracellular N- and C-termini (NT and CT), two extracellular loops (E1 and E2), and a cytoplasmic loop (CL) [3]. Docking of two HCs from neighbouring cells leads to the formation of a GJ channel composed of 12 Cxs.

However, despite such complexity, all GJ channels share the same common property—sensitivity to transjunctional voltage (Vj), called voltage gating. Junctional conductance (gj) measured under steady-state conditions decays symmetrically in response to Vj of either polarity, which has been explained by the presence of a Vj-sensitive gate in each opposed HC [4]. Such a property, being inherently quantitative, is amenable to investigation by computational methods.

Earlier, we developed stochastic 4- and 16-state models of voltage gating, containing 2 and 4 gates in series in each GJ channel, respectively. In order to demonstrate that a proposed Vj-gating model is adequate, it is necessary to compare its output to experimental results. For example, the proposed stochastic 4- and 16-state models of Vj-gating contained a sizable number (>10) of parameters, and for their estimation global optimization (GO) algorithms were successfully used [5, 6]. The simulation of Vj-gating was performed with different sets of parameters. However, the estimation of a global minimum typically requires running thousands of iterations, each lasting for up to 10 seconds, and consequently GO takes tens of hours or days. Thus, the reduction of the computation time that is needed for GO of experimental data is a critical task.

In previous work a discrete time Markov chain (DTMC) model was used [7]. This model described the GJ channel containing 12 gates. In such a model, differently from the 4- and 16-state models, it was assumed that each Cx of the GJ channel contains a gate. Since all 12 gates operate at the same time, construction of the transition matrix is not a trivial task. Moreover, the transition matrix is dense, and when direct methods, that is, Gaussian elimination, are applied, the run-time complexity of computing steady-state probabilities is on the order of O(n³), where n is the number of states. Numerical experiments showed that the use of the DTMC model, as opposed to simulation, reduced CPU time ~18- and ~7-fold for the 4- and 16-state models, respectively.

When using Markov chain models, one needs to build the transition probability matrix and to estimate steady-state probabilities at different Vj values at ~1000 different time moments during a single iteration. Typically, GO of experimental data requires 500–5000 iterations to find a global minimum. Altogether, this requires performing ~2 500 000 solutions of the Markov model. Thus, it is evident that modeling requires fast construction of the matrix of transition probabilities (transition rates) and fast solution of the steady-state probabilities, because the amount of central processing unit (CPU) time is high even for a relatively small number of states. In our prior studies [8], we already used a continuous time Markov chain (CTMC) model of GJ gating. A transformation of the transition probabilities into transition rates is necessary to generate a CTMC model with the same steady-state probabilities, but the infinitesimal generator matrix of the CTMC model is sparse. For example, the infinitesimal generator of the GJ model presented in [8] was a tridiagonal matrix. Therefore, generation of the matrix and modeling of GJs under steady-state conditions using the CTMC model require less CPU time than using the DTMC model.

In the present study, we used a CTMC for the modeling of GJs containing two voltage-sensitive gates, each of which is composed of six subgates attributed to each Cx; in the stochastic 4- and 16-state gating models each gate is regarded as one unit. We also used the stochastic automata network (SAN) formalism for the Markov model specification. The SAN formalism allowed accelerating the generation of the transition-rate matrices.

In a previous study [8] we used the piece-linear aggregate (PLA) formalism for CTMC model creation. The PLA formalism allows building and storing the infinitesimal generator of a Markov chain model automatically, but the matrix structure cannot be easily deduced, especially in more complex models. The SAN formalism, on the other hand, is based on tensor algebra matrix operations. Consequently, the infinitesimal generators have a distinct structure allowing for very efficient application of numerical methods. Here, we present the SAN description of CTMC models of three different GJ models and analyze the efficiency of their numerical solution. Since CPU time depends on the software and its implementation, we focused on a more universal evaluation of algorithm complexity, based on the exact number of arithmetic operations necessary to calculate the steady-state probabilities.

Our studies showed that if a proper numerical method is applied, a steady-state solution of the proposed CTMC model of GJs requires 10–20-fold less CPU time than DTMC models. We suggest that the use of iterative methods might be especially beneficial in the estimation of gating parameters, since it requires repetitive solutions with different sets of parameters. We showed that using a previous solution as the starting point for the next one, when Vj changes are small (<1 mV), reduces the number of iterations by at least 30 percent.

2. Methods

2.1. Markov Chain Modeling

We assume the stationary analysis of a homogeneous irreducible Markov chain with a finite number of system states, denoted by n. Markov chain modeling consists of two stages: (1) construction of the transition-rates matrix, called an infinitesimal generator, and (2) calculation of steady-state probabilities.

The first stage is a model specification. For a GJ gating model this could mean defining the states of a single gate and the possible transitions among them, the number of gates in the GJ, and so forth. Basically, it results in the formation of a transition probability matrix P, if one assumes a DTMC, or an infinitesimal generator matrix Q, if one assumes a CTMC. In this paper we consider mainly the CTMC models.

Formation of P and Q can be performed manually if the size of the state space (n) is relatively small. For larger models special software can be used, for example, methods based on events language [9], Petri nets [10], and stochastic automata networks [11].

One of the main problems in Markov chain modeling of real systems is a rapid growth of the number of states. The number of states of the Markov chain grows exponentially, when the number of system components grows linearly. Therefore the use of efficient model creation tools and numerical methods is crucial [12].

2.2. Calculation of Steady-State Probabilities

The most difficult and time-consuming part of Markov chain modeling is the calculation of steady-state probabilities.

Computation of the steady-state probabilities of a DTMC, which are stored as a row-vector π, is the solution of a system of linear equations:

π P = π.  (1)

Similarly, computation of the steady-state probabilities of a CTMC, which are stored as a row-vector π, is the solution of a system of linear equations:

π Q = 0,  (2)

where 0 denotes a zero row-vector of size n.

Equations (1)-(2) can also be interpreted as computations of the left eigenvector corresponding to eigenvalue 1 (in case of DTMC) or eigenvalue 0 (in case of CTMC). Since (P − I) and Q are singular, an additional normalization condition, π e = 1, where e is a column vector of ones, is used in all cases.

There are three major classes of algorithms allowing evaluation of steady-state probabilities: direct methods, iterative methods, and projection methods. More about numerical methods for the solution of general linear systems can be found in [13]; more about the application of numerical methods specifically to Markov chains is in [14].
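As a minimal illustration of a direct solution of (2), the singular system π Q = 0 can be solved by replacing one of its equations with the normalization condition π e = 1; the 3-state generator below uses arbitrary illustrative rates, not values from the GJ model.

```python
import numpy as np

# A small 3-state CTMC generator (rows sum to zero); the rates are
# arbitrary illustrative values, not taken from the gating model.
Q = np.array([[-0.5, 0.3, 0.2],
              [0.4, -0.6, 0.2],
              [0.1, 0.1, -0.2]])

# pi Q = 0 with pi e = 1: transpose the system and replace one of the
# (linearly dependent) equations by the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0            # last equation: sum of probabilities = 1
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
```

Because the rows of Q sum to zero, the dropped equation is automatically satisfied by the computed π.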

2.3. Stochastic Automata Networks

SAN formalism [15] is one of the most efficient methods used to solve the state-space explosion problem, which is a major obstacle in Markov chain modeling. SAN allows very efficient construction and storage of the infinitesimal generator by using tensor (Kronecker) algebra operations.

Though the SAN formalism was originally developed for the modeling of computer networks and communication systems [16, 17], there are multiple examples of SAN use in biology. For example, DeRemigio et al. [18] and Hao et al. [19] used the SAN formalism to model calcium channels; Wolf [20] used SANs to describe the kinetics of biochemical reactions.

The main idea of SAN formalism is based on the division of a system into smaller subsystems, which can interact among themselves. Those subsystems are described by different stochastic automata. A single automaton is represented by a Markov chain, that is, by the set of subsystem states and possible transitions among them. If two (or more) automata somehow interact among themselves, then transition in one automaton may depend on the state of another one.

The state of the whole system, so called global state, is a compositional state of all automata. An infinitesimal generator matrix of the whole system, so called global generator matrix, can be expressed by infinitesimal generators of individual automata and Kronecker algebra operations. We recall basic definitions below.

The Kronecker product of an m × n matrix A and a p × q matrix B is the mp × nq matrix given by

A ⊗ B = [a_ij · B], i = 1, …, m, j = 1, …, n.  (3)

The Kronecker sum of two square matrices A and B, of sizes m and p, respectively, is given by

A ⊕ B = A ⊗ I_p + I_m ⊗ B,  (4)

where I_m, I_p are identity matrices of sizes m and p, respectively. The Kronecker sum of more than two square matrices is also well defined [21].
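Both operations can be illustrated directly with numpy's `np.kron`; the matrices below are arbitrary numerical examples, not model matrices.

```python
import numpy as np

# Arbitrary example matrices (not from the GJ model)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 5.0],
              [6.0, 7.0]])

# Kronecker product: the block matrix [a_ij * B]
kron_product = np.kron(A, B)

# Kronecker sum of square matrices: A ⊕ B = A ⊗ I + I ⊗ B
I2 = np.eye(2)
kron_sum = np.kron(A, I2) + np.kron(I2, B)
```

A useful sanity check is that the eigenvalues of A ⊕ B are exactly the pairwise sums of the eigenvalues of A and B.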

If the network consists of N independent stochastic automata A^(1), …, A^(N), each governed by an infinitesimal generator Q^(i), i = 1, …, N, then the global infinitesimal generator can be expressed as a tensor sum:

Q = Q^(1) ⊕ Q^(2) ⊕ ⋯ ⊕ Q^(N).  (5)

Expression (5) is also called the SAN descriptor of the system. For a SAN of independent automata, the steady-state probability vector of the whole system is given by

π = π^(1) ⊗ π^(2) ⊗ ⋯ ⊗ π^(N),  (6)

where π^(i) is the steady-state probability vector of an individual automaton A^(i).

If we are to consider a network describing two independent gates that operate between open and closed states with transition rates α (open to closed) and β (closed to open), then automata A^(1) and A^(2) model each hemichannel/gate, and each of them can be described by the infinitesimal generator

Q^(1) = Q^(2) =
[ −α   α ]
[  β  −β ].  (7)

Thus, the SAN descriptor of the network of two independent gates is given by

Q = Q^(1) ⊕ Q^(2) = Q^(1) ⊗ I_2 + I_2 ⊗ Q^(2).  (8)
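For two independent two-state gates, the product-form solution (6) can be verified numerically; the rates below are arbitrary placeholders.

```python
import numpy as np

alpha, beta = 0.4, 0.6    # illustrative open->closed / closed->open rates

# Infinitesimal generator of a single two-state gate
Q1 = np.array([[-alpha, alpha],
               [beta, -beta]])
Q2 = Q1.copy()

# SAN descriptor of two independent gates: Q = Q1 ⊗ I + I ⊗ Q2
I2 = np.eye(2)
Q = np.kron(Q1, I2) + np.kron(I2, Q2)

# Steady state of a single gate (two-state chain in closed form)
pi1 = np.array([beta, alpha]) / (alpha + beta)

# Product-form steady state of the network: pi1 ⊗ pi1
pi = np.kron(pi1, pi1)
```

Multiplying π into Q confirms that the Kronecker product of the per-automaton solutions is a steady state of the global generator.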

If the automata in a SAN are not completely independent, the interaction among them can also be expressed by the use of Kronecker algebra operations. Plateau described two different ways [15] of modeling the interaction among automata.

(1) Functional Transition Rates. A transition rate in a single automaton may depend on the state of the other automata, that is, on the global system state. Transition rates, which are independent of the global system state, are called constant transition rates.

If we are to consider a network composed of 2 automata and suppose that the transition rates of each connexin depend on the number of connexins/subgates in the open state (denoted by k), then the transition rates are functions: α(k) and β(k). Consequently, the SAN descriptor of the system is as follows:

Q = Q^(1) ⊗_g I_2 + I_2 ⊗_g Q^(2),  (9)

where ⊗_g denotes the generalized Kronecker product, which deals with functional transition rates [22].

(2) Synchronizing Events. A transition in one automaton can cause transitions in other automata. Transition rates are called local if they are not transition rates of synchronizing events. Synchronizing transitions may also be functional. In this paper, we do not use synchronizing events for the creation of gap junction models.

Plateau and Atif showed that the SAN descriptor of a network consisting of N automata and having E synchronizing events can be expressed as follows:

Q = ⊕_{i=1}^{N} Q_l^(i) + Σ_{j=1}^{E} ( ⊗_{i=1}^{N} Q_{e_j}^(i) + ⊗_{i=1}^{N} Q_{ē_j}^(i) ),  (10)

where Q_l^(i) are the local generators and Q_{e_j}^(i), Q_{ē_j}^(i) describe the synchronizing event e_j and its diagonal correction.

The use of the SAN formalism basically solves the matrix construction problem, since even large matrices can be built and stored (assuming there is enough space in main memory) in a very short time.

The main problem that arises when dealing with SANs of interacting automata is that the steady-state solution cannot be expressed in the simple product form (6). In this case, steady-state probabilities can be found either by solving (2) after Q is built from the SAN descriptor or directly from the descriptor. That is, building and storing Q is not necessary if special numerical methods are applied: the vector–descriptor product can be implemented efficiently, for example, by using the shuffle algorithm. These problems are considered in detail in [23].

3. Results and Discussion

In this section, we present CTMC models of GJs, created by using SAN formalism. The structure of infinitesimal generators and efficient application of numerical methods for steady-state solutions are considered in detail.

3.1. CTMC Model of the GJ Channel Containing 12 Two-State Subgates

Gap junctions form clusters (junctional plaques) of individual channels arranged in parallel in the junctional membrane of two adjacent cells. The GJ channel is composed of 2 hemichannels (left and right) arranged in series. Each hemichannel is composed/oligomerized from six Cxs forming a hexamer with the pore inside. We envision that each hemichannel forms a gate, which is composed of six subgates arranged in parallel; that is, a subgate is attributed to each connexin, and the GJ channel ultimately contains two gates composed of 12 subgates in total (see Figure 1).

Figure 1: Electrical scheme of the GJ channel composed of two hemichannels, each formed of 6 connexins. Transjunctional voltage (Vj) controls both hemichannels and all Cxs can operate between open and closed (and deep-closed) states.

In this section, we will consider a model in which each subgate operates between the open (o) and closed (c) states (see Figure 2), with transition rates α (from o to c) and β (from c to o).

Figure 2: Two-state transition graph of a subgate.

One of the most important steps of SAN model creation is to decide which part of a system to model by an individual automaton. In this case it is possible to describe each subgate as an individual automaton with two states. This would result in a SAN of 12 automata with 2^12 = 4096 states.

However, it is unnecessary to track each subgate individually; since all subgates are identical, Vj-gating depends only on the number of open and closed subgates in each hemichannel. Thus, a much more convenient way is to describe the whole hemichannel as an individual automaton, whose states denote the number of closed (or open) subgates in the hemichannel.

Thus we model the GJ channel by two automata—an automaton A_2^(1), which describes the left hemichannel, and A_2^(2), which describes the right hemichannel (the number 2 in the subscript denotes the fact that each subgate has two possible states).

We assume that both automata have 7 possible states, which denote the number of closed subgates in each hemichannel (i.e., the states can be denoted by i = 0, 1, …, 6).

Thus, automaton A_2^(1) can leave a state i and enter a state i + 1 with transition rate (6 − i)α. Similarly, it can leave state i and go to the state i − 1 with transition rate iβ.

Thus an infinitesimal generator of automaton A_2^(1) is as follows:

Q^(1) =
[ *   6α   0    0    0    0    0 ]
[ β   *    5α   0    0    0    0 ]
[ 0   2β   *    4α   0    0    0 ]
[ 0   0    3β   *    3α   0    0 ]
[ 0   0    0    4β   *    2α   0 ]
[ 0   0    0    0    5β   *    α ]
[ 0   0    0    0    0    6β   * ],  (11)

where the diagonal entries (denoted by *) are equal to the negated sum of the nondiagonal entries in the same row.

It is crucial to emphasize that the transition rates of the matrix in (11) depend on the voltages across the left and right hemichannels, which in turn depend on the number of closed (open) subgates:

V_1 = V_j · g_2/(g_1 + g_2),  V_2 = V_j · g_1/(g_1 + g_2),  (12)

where g_1 and g_2 denote the conductances of the left and right hemichannels, respectively.

Since both hemichannels are identical, the infinitesimal generator of the right hemichannel is the same; that is, Q^(2) = Q^(1). In this case, the global infinitesimal generator of the GJ, which we denote by Q_G, may be written as

Q_G = Q^(1) ⊕ Q^(2) = Q^(1) ⊗ I_7 + I_7 ⊗ Q^(2).  (13)

Since both Q^(1) and Q^(2) are tridiagonal matrices, it follows from (13) that Q_G is a block tridiagonal matrix. It consists of 7 × 7 = 49 square blocks A_ij, each of size 7, and can be written as follows:

Q_G =
[ A_00  A_01  0     0     0     0     0    ]
[ A_10  A_11  A_12  0     0     0     0    ]
[ 0     A_21  A_22  A_23  0     0     0    ]
[ 0     0     A_32  A_33  A_34  0     0    ]
[ 0     0     0     A_43  A_44  A_45  0    ]
[ 0     0     0     0     A_54  A_55  A_56 ]
[ 0     0     0     0     0     A_65  A_66 ].  (14)

Here 0 denotes a square zero matrix block of size 7.

Since Q^(1) and Q^(2) are tridiagonal matrices, it follows from (14) that the diagonal blocks are the tridiagonal matrices

A_ii = Q^(2) + q_ii^(1) · I_7, i = 0, …, 6,  (15)

where q_ii^(1) is the i-th diagonal entry of Q^(1); the row within a block corresponds to the state of the right hemichannel, while the block index i corresponds to the number of closed subgates in the left hemichannel.

Similarly, the upper subdiagonal blocks in Q_G may be written as the diagonal matrices

A_{i,i+1} = q_{i,i+1}^(1) · I_7 = (6 − i)α · I_7, i = 0, …, 5.  (16)

And finally, the lower subdiagonal blocks in (14) are

A_{i,i−1} = q_{i,i−1}^(1) · I_7 = iβ · I_7, i = 1, …, 6.  (17)
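This block tridiagonal structure can be reproduced numerically under the simplifying assumption of constant (voltage-independent) rates α and β; in the actual model the rates are functional, so the sketch below only illustrates the matrix structure, not the gating kinetics.

```python
import numpy as np

def hemichannel_generator(a, b, n=6):
    # State i = number of closed subgates, i = 0..n.
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] = (n - i) * a   # one more subgate closes
        if i > 0:
            Q[i, i - 1] = i * b         # one subgate reopens
        Q[i, i] = -Q[i].sum()
    return Q

alpha, beta = 0.4, 0.6                  # placeholder constant rates
Q1 = hemichannel_generator(alpha, beta)
Q2 = hemichannel_generator(alpha, beta)

# Global generator QG = Q1 ⊗ I + I ⊗ Q2 (49 x 49, block tridiagonal)
I7 = np.eye(7)
QG = np.kron(Q1, I7) + np.kron(I7, Q2)
```

All entries of the resulting 49 × 49 matrix lie within seven positions of the main diagonal, which is the banded structure exploited in Section 3.3.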

3.2. Evaluation of Functional Transition Rates

As mentioned in the previous section, all transition rates in the GJ voltage gating model are functional: each transition changes the number of closed (open) subgates, which changes the conductance and the voltage across the channel and, accordingly, the values of the transition rates. These changes depend on the gating parameters of the subgates; in homotypic GJ channels they are identical for all 12 subgates.

Even though these formulas were published earlier [6], we present them here, since they demonstrate the complexity of functional transition rates using SAN modeling of GJ.

In the DTMC model, the probabilities of the two-state gate transitions can be described as follows:

p_oc = λ · K/(1 + K),  p_co = λ/(1 + K).  (18)

In (18), K is

K = exp(A · P · (V − V_0)),  (19)

where P is the gating polarity (+1 or −1); A (1/mV) is a coefficient characterizing gating sensitivity to voltage; λ is a constant used to change the kinetics of transitions (λ can accelerate or decelerate transitions but does not affect the conditions of the steady state); V_0 (mV) is the voltage across the hemichannel/connexin at which the probabilities for the o and c states are equal; V is the variable voltage across the left or right subgate (mV).

The DTMC probabilities can be transformed into the transition rates of the CTMC by the following equation:

q_ij = p_ij/Δt,  (20)

where Δt is a short period of time, in which the probability of observing multiple transitions is negligible; that is, (20) holds for all i ≠ j.
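A sketch of such a rate computation is given below; the Boltzmann-type expression and all parameter values (A, V0, lam, P, dt) are illustrative placeholders rather than fitted gating parameters.

```python
import math

def transition_rates(V, A=0.1, V0=25.0, lam=0.02, P=1, dt=1.0):
    """Illustrative two-state gating rates: a Boltzmann-type factor K
    sets the open/closed balance, lam scales the kinetics without
    changing the steady state, and the DTMC probabilities are turned
    into CTMC rates by dividing by dt. All values are placeholders."""
    K = math.exp(A * P * (V - V0))
    p_oc = lam * K / (1.0 + K)     # probability open -> closed per dt
    p_co = lam / (1.0 + K)         # probability closed -> open per dt
    return p_oc / dt, p_co / dt    # CTMC transition rates

q_oc, q_co = transition_rates(V=25.0)
```

At V = V0 the factor K equals 1, so the closing and opening rates coincide, matching the definition of V0 above.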

Each subgate, depending on the voltage across it (V), can gate/operate by changing stepwise between the open state with conductance γ_o and the closed state with conductance γ_c. It was assumed that the γ_o and γ_c values exhibit rectification, that is, depend on V exponentially:

γ_o = γ_o(0) · exp(V/R_o),  γ_c = γ_c(0) · exp(V/R_c),  (21)

where V is the voltage across the left or right hemichannel, while R_o and R_c are rectification constants.

The conductance of the left hemichannel, when i Cxs are closed, can be described as follows:

g_1(i) = (6 − i) · γ_o + i · γ_c.  (22)

Similarly, the conductance of the right hemichannel, when k Cxs are closed, is

g_2(k) = (6 − k) · γ_o + k · γ_c.  (23)

During gating, the conductance of the subgates ranges between γ_o and γ_c, and the total conductance of the GJ channel can be found using the steady-state probabilities of the Markov chain model:

g_j = Σ_{i=0}^{6} Σ_{k=0}^{6} π_{i,k} · g_1(i) · g_2(k)/(g_1(i) + g_2(k)),  (24)

where π_{i,k} is the steady-state probability for i Cxs in the left hemichannel and k Cxs in the right hemichannel to be closed.

The conductance of the GJ channel depends on the voltage; that is, the circuit is nonlinear. In order to calculate the voltage across each Cx, we used an iterative procedure [7]. We assumed that the value of the voltage has settled if the difference between the voltage values calculated at two consecutive iterations is less than 0.1 percent. Calculations showed that at least 5 iterations were needed to achieve the aforesaid precision.

As one can see, the evaluation of the functional transition rates of the SAN model is not a trivial task in this case. The SAN formalism allows estimating steady-state probabilities directly from the SAN descriptor, without building and storing the infinitesimal generator, for example, by using the shuffle algorithm. However, the functional transition rates would then have to be reevaluated during each iteration of the steady-state solution, which would require too much CPU time in the case of GJ models. Moreover, the number of system states of the GJ models presented in this paper is relatively small; therefore we use a different approach. We apply the SAN formalism to specify the system behaviour and to create an equivalent Markov chain model. The Kronecker algebra representation of the global infinitesimal generator helps to get insight into the matrix structure and to apply suitable numerical methods for the evaluation of a steady-state solution.

3.3. Numerical Solution of Two-State CTMC Models of the GJ Channel Containing 12 Two-State Subgates

It follows from (16)-(17) that the subdiagonal blocks in (14) are diagonal matrices. These properties allow the use of efficient numerical methods for the calculation of steady-state probabilities, and we consider three algorithms for the steady-state solution in detail: banded Gaussian elimination, a direct solution by recursion, and the block Gauss-Seidel method.

3.3.1. Banded Gaussian Elimination for Steady-State Solution of the CTMC Model of GJ Channel Containing 12 Two-State Subgates

A square matrix A is called banded if its entries are zero outside of a diagonally bordered band, which can be described by the following equation:

a_ij = 0, if j < i − k_1 or j > i + k_2.  (25)

In (25), the numbers k_1 and k_2 are called the left and right half-bandwidths, respectively. The bandwidth of the matrix is equal to (k_1 + k_2 + 1). For example, a matrix with k_1 = k_2 = 1 is a tridiagonal matrix, that is, a matrix with bandwidth 3.

Complexity. If a matrix has bandwidth b, a more efficient implementation of Gaussian elimination exists than the standard one, which has a complexity of O(n³). The solution of a linear system with bandwidth b has an approximate complexity of O(b²n). To be exact, the complexity of banded Gaussian elimination is O(n·k_1·k_2), while it is O(n³) for standard Gaussian elimination [24].

From (14) it follows that Q_G has half-bandwidths k_1 = k_2 = 7, that is, bandwidth 15. Thus, applying banded Gaussian elimination requires approximately 13 times less CPU time than solving an equivalent DTMC model, which has a dense transition probability matrix.

3.3.2. Recursive Method for Steady-State Solution of CTMC Model of GJ Channel Containing 12 Two-State Subgates

An algorithm similar to the Thomas algorithm for tridiagonal matrices can be used to calculate the steady-state probabilities of the GJ model. We use the matrix form of the solution technique as presented in [14]. The infinitesimal generator can be partitioned into four blocks as follows:

Q_G =
[ Q_11  Q_12 ]
[ Q_21  Q_22 ].  (26)

Dividing the vector π into segments π = (π_1, π_2) accordingly, the solution of π Q_G = 0 can be implemented in two steps. First, solving π_1 Q_12 + π_2 Q_22 = 0 for π_2 gives π_2 = −π_1 Q_12 Q_22^{−1}, while substitution into π_1 Q_11 + π_2 Q_21 = 0 gives the coefficient matrix for solving π_1. After that, the remaining part of the steady-state vector can be found from the normalization condition π e = 1.

Complexity. The procedure for obtaining a recursive solution can be implemented very efficiently due to the structure of the infinitesimal generator Q_G. For example, the computation of π_2 is basically a backward substitution, since the corresponding coefficient matrix is lower triangular. Also, finding the inverses of the subdiagonal blocks is a trivial task, since these blocks are diagonal matrices.

Since the size of the state space is not large for this Markov model, an approximate complexity evaluation using big-O notation might be too general in this case. Therefore, we evaluated the number of operations necessary to implement the recursive procedure in a more detailed way. We distinguished 6 types of distinct operations (e.g., matrix summation, matrix multiplication, etc.) adapted to different types of operands, such as dense matrices and tridiagonal matrices. A conservative estimate of the number of arithmetic operations is presented in Table 1. We assume that these operations are implemented in the most basic way (e.g., we use the standard O(n³) algorithm for matrix multiplication).

Table 1: Evaluation of the number of arithmetic operations necessary to implement the recursive procedure for GJ model with 12 two-state subgates.

Detailed evaluation showed that the recursive solution requires about 1.8 times less CPU time than banded Gaussian elimination and at least 23 times less than standard Gaussian elimination.

Stability. The main problem of the block recursive procedure is numerical stability, due to rounding errors [14]. However, these problems are mitigated by the fact that matrix entries are relatively small (in the range of 0.001–0.01).

We compared the solution provided by the recursive procedure with the solution calculated using a stable numerical method. Experimental data showed that the steady-state probabilities obtained by the recursive procedure differ by less than 10^−12 from the exact solution for this particular GJ model.

3.3.3. Block Gauss-Seidel Method for Steady-State Solution of CTMC Model of GJ Channels Containing 12 Two-State Subgates

The block Gauss-Seidel method is an iterative technique and thus eliminates stability concerns completely [14]. It can also be implemented very efficiently for block tridiagonal matrices.

If the solution vector π is divided into segments π_0, …, π_6 according to the block structure of Q_G, then at each outer iteration k + 1 it is required to solve 7 inner systems, which may be written as follows:

π_i^(k+1) A_ii = −(π_{i−1}^(k+1) A_{i−1,i} + π_{i+1}^(k) A_{i+1,i}), i = 0, …, 6,  (27)

π^(k+1) ← π^(k+1)/(π^(k+1) e),  (28)

where terms with out-of-range indices are omitted and (28) renormalizes the iterate after each sweep.
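One outer sweep of this scheme can be sketched as follows, again under the simplifying assumption of constant rates (the functional-rate model would rebuild the blocks between sweeps); the normalization after each sweep fixes the scale of the homogeneous system.

```python
import numpy as np

def hemichannel_generator(a, b, n=6):
    # State i = number of closed subgates; illustrative constant rates.
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] = (n - i) * a
        if i > 0:
            Q[i, i - 1] = i * b
        Q[i, i] = -Q[i].sum()
    return Q

alpha, beta = 0.4, 0.6                  # placeholder rates
Q1 = hemichannel_generator(alpha, beta)
Q2 = hemichannel_generator(alpha, beta)
m = 7

# Blocks of QG = Q1 ⊗ I + I ⊗ Q2: diagonal blocks are tridiagonal,
# off-diagonal blocks are diagonal matrices.
D = [Q2 + Q1[i, i] * np.eye(m) for i in range(m)]      # A_ii
U = [Q1[i, i + 1] * np.eye(m) for i in range(m - 1)]   # A_{i,i+1}
L = [Q1[i + 1, i] * np.eye(m) for i in range(m - 1)]   # A_{i+1,i}

pi = [np.full(m, 1.0 / (m * m)) for _ in range(m)]     # uniform start
for sweep in range(1000):
    prev = np.concatenate(pi)
    for i in range(m):
        rhs = np.zeros(m)
        if i > 0:
            rhs -= pi[i - 1] @ U[i - 1]       # pi_{i-1} A_{i-1,i}
        if i < m - 1:
            rhs -= pi[i + 1] @ L[i]           # pi_{i+1} A_{i+1,i}
        pi[i] = np.linalg.solve(D[i].T, rhs)  # solve pi_i A_ii = rhs
    total = sum(seg.sum() for seg in pi)
    pi = [seg / total for seg in pi]          # renormalize each sweep
    if np.max(np.abs(np.concatenate(pi) - prev)) < 1e-12:
        break

pi_vec = np.concatenate(pi)
```

For constant identical rates the two hemichannels are independent, so the computed vector should match the Kronecker product of two binomial distributions with parameter α/(α + β), which serves as a convenient correctness check.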

Complexity. In this case, the diagonal blocks A_ii are tridiagonal matrices, so each inner linear system can be solved with a complexity of O(m), where m = 7 is the block size. The number of operations (including a check for convergence) necessary to perform a single outer iteration is presented in Table 2.

Table 2: Number of arithmetic operations necessary to implement a single outer iteration of block Gauss-Seidel algorithm for GJ model with 12 two-state subgates.

The efficiency of the whole algorithm depends on the convergence speed. That is, the block Gauss-Seidel method would be more efficient than the recursive procedure if it required 4 or fewer outer iterations. Similarly, it would be more efficient than banded Gaussian elimination if the number of outer iterations were less than 7, and, finally, it would be more efficient than standard Gaussian elimination if block Gauss-Seidel required fewer than 96 outer iterations.

We evaluated the number of outer iterations necessary to find a steady-state solution of the GJ model containing 12 two-state subgates at a fixed transjunctional voltage. Each time the initial iteration vector was chosen as a standard vector with equal entries, that is, each entry equal to 1/49. We assumed that the necessary precision is achieved if the following condition is satisfied:

max_i |π_i^(k+1) − π_i^(k)| < ε.  (29)

Inequality (29) means that all entries of consecutive iterations differ by less than ε in absolute value. The results of the convergence study are presented in Table 3.

Table 3: Number of outer iterations necessary to achieve the required precision by using block Gauss-Seidel algorithm for GJ model with 12 two-state subgates.

Thus, for this particular GJ model the block Gauss-Seidel method is more efficient than standard Gaussian elimination at any precision level. However, it is less efficient than the direct methods, for example, the banded Gaussian algorithm or the recursive procedure presented in Section 3.3.2.

In addition, the block Gauss-Seidel method outperformed other standard iterative algorithms (Jacobi, point Gauss-Seidel) as well as projection methods (Arnoldi, GMRES, and BiCGSTAB). While the convergence of the fastest of the projection methods, BiCGSTAB, was slightly better than that of block Gauss-Seidel, the overall CPU time was about 18 percent longer, since it required more time to perform a single iteration.

3.3.4. Repetitive Model Solution

Since the block Gauss-Seidel method is an iterative technique, it can be implemented more efficiently for repetitive solutions, which are especially important in the search for optimal model parameters. Iterative methods have an advantage over direct methods, since the previous solution vector can be used as a starting point for the solution of the model with a different set of parameters.

A solution of the model at different Vj values can be described using Pseudocode 1.

Pseudocode 1

A similar procedure can be applied not only at different Vj values, but also when varying other gating parameters.

It is known [25] that the difference between the steady-state solutions of two linear systems, π Q = 0 and π̃ Q̃ = 0, is constrained by the following relationship between the norms of matrices and vectors:

‖π − π̃‖ ≤ ‖π‖ · ‖Q − Q̃‖ · ‖Q̃^#‖,  (30)

where Q̃^# is the group inverse of the infinitesimal generator Q̃.

Thus, the steady-state vectors are relatively close to each other if the matrices Q and Q̃ are close and ‖Q̃^#‖ is not large.

Simulation of Vj-gating partially satisfies the aforementioned conditions, since the changes of the entries of Q are in the range of 0.0001–0.001 when the voltage changes from 40 mV to 41 mV, while the norm of the group inverse is equal to 213.73 at the 40 mV level.

In order to evaluate the effect of iterative methods on the repetitive model solution, a more detailed experimental study was performed. We changed the voltage across the GJ channel from 0 to 100 mV in 0.1 mV intervals. Thus, 1000 different infinitesimal generators were built and the steady-state solution was calculated according to the scheme presented in Pseudocode 1. First, we used the standard iteration vector with equal entries each time (we will refer to it as “Method I”), while for the second round of calculations the previous steady-state solution was used each time, as presented in Pseudocode 1 (we will refer to it as “Method II”). We evaluated the number of outer iterations necessary to achieve 10^−6 precision in both cases. The results are presented in Table 4.

Table 4: Number of outer iterations for repetitive model solution by using block Gauss-Seidel algorithm for GJ model with 12 two-state subgates.

A positive effect of “Method II” is obvious. The number of outer iterations decreases by about 30 to 60 percent, depending on the Vj level. Overall, it required about 38 percent less CPU time to perform the calculations using “Method II” instead of “Method I.” The CPU time in this case was comparable to that of banded Gaussian elimination.
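The warm-start idea can be demonstrated with a small self-contained experiment; the voltage dependence of the rates below is a placeholder Boltzmann-type form, and point (rather than block) Gauss-Seidel is used for brevity.

```python
import numpy as np

def generator(V, n=6):
    # Illustrative voltage-dependent rates (placeholder form, not fitted)
    a = 0.02 * np.exp(0.05 * V)      # closing rate grows with V
    b = 0.02 * np.exp(-0.05 * V)     # opening rate shrinks with V
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] = (n - i) * a
        if i > 0:
            Q[i, i - 1] = i * b
        Q[i, i] = -Q[i].sum()
    return Q

def gauss_seidel_steady(Q, x0, tol=1e-6, max_iter=10000):
    n = Q.shape[0]
    x = x0.copy()
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # equation i of x Q = 0, solved for x_i
            s = sum(x[j] * Q[j, i] for j in range(n) if j != i)
            x[i] = -s / Q[i, i]
        x = x / x.sum()
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

uniform = np.full(7, 1.0 / 7)
iters_cold = iters_warm = 0
x_prev = uniform
for V in np.arange(0.0, 50.0, 1.0):
    Q = generator(V)
    _, k1 = gauss_seidel_steady(Q, uniform)      # Method I: fixed start
    x_prev, k2 = gauss_seidel_steady(Q, x_prev)  # Method II: warm start
    iters_cold += k1
    iters_warm += k2
```

Because consecutive generators differ only slightly, the warm-started runs accumulate noticeably fewer iterations over the voltage sweep.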

3.4. CTMC Model of the GJ Channel Containing 6 Three-State Gates

We assume that in this GJ model only subgates in the left hemichannel operate [4], while subgates in the right hemichannel are always open (see Figure 3).

Figure 3: Electrical scheme of the GJ channel composed of two hemichannels, each formed of 6 connexins/subgates. Vj influences both hemichannels, but only the Cxs on the left operate between open, closed, and deep-closed states, while the Cxs on the right are always open.

We also assume that each subgate operates between open, closed, and deep-closed states. The transitions among these states are presented in Figure 4.

Figure 4: Three-state transition graph of a subgate.

In this case our SAN model consists of a single automaton (the subscript value 3 denotes that each subgate has 3 possible states), which describes the left hemichannel. If we denote the numbers of open and closed subgates of the left hemichannel by n_o and n_c, respectively, then the state-space of the system consists of 2-tuples (n_o, n_c) satisfying the following inequality:

n_o + n_c ≤ 6,  n_o ≥ 0,  n_c ≥ 0.  (31)

It follows from (31) that the state-space has the size of 28. From a state (n_o, n_c), the automaton can move to the state (n_o − 1, n_c + 1) when an open subgate closes; to (n_o + 1, n_c − 1) when a closed subgate opens; to (n_o, n_c − 1) when a closed subgate becomes deep closed; and finally to (n_o, n_c + 1) when a deep-closed subgate returns to the closed state. The corresponding transition rates are proportional to the number of subgates in the originating state.
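The size of the state-space can be confirmed by direct enumeration; a short Python sketch (the tuple layout follows the description above):

```python
# Enumerate the states of the left hemichannel: 2-tuples (n_o, n_c) of
# open and closed subgate counts with n_o + n_c <= 6; the remaining
# 6 - n_o - n_c subgates are deep closed.
states = [(n_o, n_c) for n_o in range(7) for n_c in range(7) if n_o + n_c <= 6]
print(len(states))  # 28, the state-space size stated in the text
```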

An infinitesimal generator of the left hemichannel (as well as the global system generator, i.e., ) has the same block tridiagonal structure as in (14):

However, all the blocks in (32) including zero-entry blocks are of varying sizes. In general, block entry of the infinitesimal generator is of size . For example, diagonal blocks are square matrices of size each and have the following tridiagonal form: where and and is the row in a block where or appears.

All upper diagonal blocks have the following structure (zero-entries row vector augmented with diagonal matrix): where and is the row in a block where appears.

Similarly, lower diagonal blocks are diagonal matrices augmented with a zero-entry column and may be written as follows: where and is the row in a block , where appears.

The first problem arising in steady-state calculation is that the blocks are of different sizes. It means that the recursive procedure, which was most efficient for GJ models with 12 two-state Cxs, cannot be applied in this case. This leaves only banded Gaussian elimination and block Gauss-Seidel methods for further detailed consideration.

3.4.1. Banded Gaussian Elimination for Steady-State Solution of CTMC Model of GJ Channel Containing 6 Three-State Subgates

It is easy to see from (32) that the matrix has the same bandwidth as in the previous model, while the size of the state-space is different, 28 in this case. So, according to the complexity evaluation presented in [24], the steady-state solution using banded Gaussian elimination requires only about 4 times less CPU time than standard Gaussian elimination for a dense linear system. Thus, the difference is not as pronounced as in the previous example.

3.4.2. Block Gauss-Seidel Elimination for Steady-State Solution of CTMC Model of GJ Channel Containing 6 Three-State Gates

The application of block Gauss-Seidel is very similar to that for the GJ model with 12 two-state Cxs. Basically, the same iterative procedure (28) can be applied; however, the blocks are of different sizes in this case. Thus, even though the same basic operations with the same complexity as presented in Table 2 are used for this model, one must account for the varying sizes of the blocks in the matrix. For example, instead of solving 7 tridiagonal systems, each of size 7, one needs to solve only one system of size 7, one of size 6, and so forth.
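Each of these inner steps reduces to solving a small tridiagonal system, which can be done in O(n) operations with the classical Thomas algorithm. The following NumPy sketch is a generic implementation for illustration, not the authors' code:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a, diagonal b,
    superdiagonal c, and right-hand side d in O(n) operations."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# check on a small diagonally dominant tridiagonal system of size 7
n = 7
a = np.full(n, -1.0); a[0] = 0.0               # a[0] is unused
c = np.full(n, -1.0); c[-1] = 0.0              # c[-1] is unused
b = np.full(n, 4.0)
d = np.arange(1.0, n + 1)
x = thomas_solve(a, b, c, d)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(T @ x, d))  # True
```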

This gives an estimated complexity of 196 operations per single outer iteration of the block Gauss-Seidel method. Thus, block Gauss-Seidel outperforms standard Gaussian elimination if fewer than 80 outer iterations are needed, and it outperforms banded Gaussian elimination if fewer than 18 outer iterations are needed.

As for the 12 two-state Cxs GJ model, we evaluated the number of outer iterations necessary to achieve required precision. The convergence speed is presented in Table 5.

Table 5: Number of outer iterations necessary to achieve the required precision by using block Gauss-Seidel algorithm for GJ model with 6 three-state subgates.

Thus, the block Gauss-Seidel method is less feasible than banded Gaussian elimination if a precision higher than 10⁻⁵ is required, though it is more efficient than standard Gaussian elimination even at 10⁻¹⁵ precision.

However, the efficiency of block Gauss-Seidel increases when repetitive model solutions are performed. An experiment similar to that with the 12 two-state Cxs model was performed for the 6 three-state subgates GJ model. The results are presented in Table 6.

Table 6: Number of outer iterations for repetitive model solution by using block Gauss-Seidel algorithm for GJ model with 6 three-state subgates.

The results showed that block Gauss-Seidel becomes more efficient than banded Gaussian elimination at a reasonable precision of 10⁻⁶ when repetitive calculations are performed.

3.5. CTMC Model of the GJ Channel Containing 12 Three-State Subgates

Here, we consider a model in which each subgate of both hemichannels operates between open, closed, and deep-closed states. Thus, its electrical scheme is the same as presented in Figure 1, while the state transition graph of the subgate is as presented in Figure 4.

We model this type of GJ by a SAN containing two automata, one describing the left hemichannel and the other describing the right one (again, each subgate has 3 possible states). The states of the two automata are 2-tuples (n_o, n_c) and (m_c, m_o), respectively. Here n_o and n_c denote the numbers of open and closed subgates of the left hemichannel, while m_c and m_o denote the numbers of closed and open subgates of the right one. They must also satisfy the following inequalities:

n_o + n_c ≤ 6,  m_c + m_o ≤ 6,  n_o, n_c, m_c, m_o ≥ 0.

Transitions among states in each hemichannel are analogous to those of the previous model. Similarly, the infinitesimal generators of the left and right hemichannels, Q_L and Q_R, can be written as in (32). Thus, the global infinitesimal generator may be expressed as the Kronecker sum

Q = Q_L ⊕ Q_R = Q_L ⊗ I_28 + I_28 ⊗ Q_R.  (37)

It is easy to see from (37) that Q has the size of 784 (= 28 × 28); thus it is a much larger model than in the previous examples. Due to the size of the matrix, it is not possible to present its full block structure. However, its nonzero-entry structure, obtained by using the spy() function in MATLAB, is presented in Figure 5.
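The Kronecker structure behind Figure 5 can be reproduced with a toy construction. In the sketch below, QL is a hemichannel generator over the 28 states (n_o, n_c) built with placeholder rates r_oc, r_co, r_cd, r_dc, and both hemichannels are taken to be identical, which the real model does not require; the purpose is only to show that the global generator has size 28 × 28 = 784 and valid zero row sums:

```python
import numpy as np

# Toy hemichannel generator over the 28 states (n_o open, n_c closed subgates);
# the four rates are placeholders, not the fitted gating parameters.
states = [(o, c) for o in range(7) for c in range(7) if o + c <= 6]
idx = {s: i for i, s in enumerate(states)}
r_oc, r_co, r_cd, r_dc = 0.4, 0.6, 0.2, 0.3

QL = np.zeros((28, 28))
for (o, c), i in idx.items():
    d = 6 - o - c                                  # deep-closed subgates
    for s2, rate in (((o - 1, c + 1), o * r_oc),   # open -> closed
                     ((o + 1, c - 1), c * r_co),   # closed -> open
                     ((o, c - 1), c * r_cd),       # closed -> deep closed
                     ((o, c + 1), d * r_dc)):      # deep closed -> closed
        if s2 in idx and rate > 0:
            QL[i, idx[s2]] = rate
np.fill_diagonal(QL, -QL.sum(axis=1))

# Global generator as a Kronecker sum (both hemichannels identical here)
I28 = np.eye(28)
Q = np.kron(QL, I28) + np.kron(I28, QL)
print(Q.shape[0])                         # 784
print(np.abs(Q.sum(axis=1)).max() < 1e-9) # rows sum to zero: valid generator
```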

Figure 5: Nonzero entry structure of global system generator of GJ channel composed of two hemichannels formed of 6 connexins with 3 states.

It is also possible to analyze its structure based on the Kronecker representation of system descriptor (37). As can be seen from Figure 5, it consists of 5 layers of blocks, all of which are square matrices of size 28. Diagonal blocks are all block tridiagonal matrices, which may be written as follows: Here denotes the th diagonal entry of the matrix .

It follows from (38) that has the same nonzero entries structure as . Matrix also has two layers of blocks in lower and upper parts. For simplicity, we call them inside-lower (inside-upper) and outside-lower (outside-upper) layers. All of these blocks are diagonal matrices and can be written as where denotes the th entry of matrix .

Since the matrix is not block tridiagonal, only the banded Gaussian elimination and block Gauss-Seidel methods remain for more detailed consideration.

3.5.1. Banded Gaussian Elimination for Steady-State Solution of CTMC Model of GJ Channels Containing 12 Three-State Subgates

Even though matrix (38) is banded, its bandwidth of 393 is much larger in both absolute and relative terms (it exceeds half of the size of the matrix) than that of the 12 two-state Cxs GJ model. Banded Gaussian elimination requires about 5 times less CPU time than standard Gaussian elimination to calculate the steady-state probabilities.

3.5.2. Block Gauss-Seidel Algorithm for Steady-State Solution of CTMC Model of GJ Channels Containing 12 Three-State Subgates

Analyzing the zero-entry structure of (38) allows adapting the block Gauss-Seidel algorithm to this model. Basically, in one outer iteration step one needs to solve 28 inner linear systems, each of size 28, whose right-hand sides consist of up to four vector-matrix products with the off-diagonal blocks. In general, the number of arithmetic operations necessary to perform one outer iteration of the block Gauss-Seidel algorithm for this model is presented in Table 7 (the block size in this case is equal to 28).

Table 7: Number of operations necessary to perform a single outer iteration of block Gauss-Seidel algorithm for GJ model with 12 three-state subgates.

Thus, the block Gauss-Seidel algorithm becomes more efficient than standard Gaussian elimination if it requires less than 5954 outer iterations. Actual numbers of outer iterations necessary to achieve the required precision at 40 mV transjunctional voltage level are presented in Table 8.

Table 8: Number of outer iterations necessary to achieve the required precision by using block Gauss-Seidel algorithm for GJ model with 12 three-state subgates.

Basically, the convergence speed of block Gauss-Seidel for computing the steady-state probabilities of the 12 three-state subgates GJ model is comparable to that of the previous models. Moreover, block Gauss-Seidel is much more efficient than even banded Gaussian elimination at any precision. As in the previous cases, it outperformed the other standard iterative and projection methods.

For example, it required less than 40 milliseconds of CPU time to calculate the steady-state probabilities to 10⁻⁶ precision in MATLAB, while its implementation in C++ required about 18 milliseconds.

The repetitive model solution showed the same effect of iterative algorithms on convergence speed as in previous examples. The results are presented in Table 9.

Table 9: Number of outer iterations for repetitive model solution by using block Gauss-Seidel algorithm for GJ model with 12 three-state subgates.

Overall, about 40 percent less CPU time was required to estimate the steady-state probabilities over the whole range of transjunctional voltage when previous solutions were used as the first iteration vector.

4. Conclusion

The SAN formalism is efficient for the creation of models of GJ voltage gating, because the system description is relatively simple and the building and storing of the infinitesimal generator is very rapid. We used SAN to create an equivalent CTMC model because of the complexity of estimating functional transition rates. Unlike the PLA approach, which we have used earlier for CTMC modelling of GJs [8], SAN and Kronecker algebra operations help to gain more insight into the structure of the infinitesimal generator matrix.

Infinitesimal generators of GJ models have a distinct block structure that allows selecting the most efficient numerical methods. For example, the application of the banded Gaussian method lowers CPU time at least 4-fold compared to standard Gaussian elimination, which is typically applied for the estimation of steady-state probabilities of CTMC models with dense matrices.

Iterative methods are very suitable for global optimization (GO) of gating parameters, since it requires numerous simulations at different Vj values. In particular, the block Gauss-Seidel method can be applied very successfully, since it also benefits from the block structure of the infinitesimal generator. The implementations proposed in this study outperformed even the direct methods used in the calculation of steady-state probabilities.

We expect that the implementation of other numerical methods, for example, the most advanced numerical techniques used in SAN modelling, could lead to even better results. This could help to reduce the computational time spent searching for the most adequate mathematical models describing voltage gating of GJs.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Research Council of Lithuania for collaboration of Lithuanian and USA scientists under the Grant no. MIT-074/2012 to Henrikas Pranevicius and by the NIH Grant no. R01NS 072238 to Feliksas F. Bukauskas.

References

  1. J. F. Ek-Vitorin, T. J. King, N. S. Heyman, P. D. Lampe, and J. M. Burt, "Selectivity of connexin 43 channels is regulated through protein kinase C-dependent phosphorylation," Circulation Research, vol. 98, no. 12, pp. 1498–1505, 2006.
  2. G. Söhl and K. Willecke, "Gap junctions and the connexin protein family," Cardiovascular Research, vol. 62, no. 2, pp. 228–232, 2004.
  3. G. E. Sosinsky and B. J. Nicholson, "Structural organization of gap junction channels," Biochimica et Biophysica Acta (BBA)—Biomembranes, vol. 1711, no. 2, pp. 99–125, 2005.
  4. F. F. Bukauskas and V. K. Verselis, "Gap junction channel gating," Biochimica et Biophysica Acta: Biomembranes, vol. 1662, no. 1-2, pp. 42–60, 2004.
  5. N. Paulauskas, H. Pranevicius, J. Mockus, and F. F. Bukauskas, "Stochastic 16-state model of voltage gating of gap-junction channels enclosing fast and slow gates," Biophysical Journal, vol. 102, no. 11, pp. 2471–2480, 2012.
  6. N. Paulauskas, M. Pranevicius, H. Pranevicius, and F. F. Bukauskas, "A stochastic four-state model of contingent gating of gap junction channels containing two 'fast' gates sensitive to transjunctional voltage," Biophysical Journal, vol. 96, no. 10, pp. 3936–3948, 2009.
  7. A. Sakalauskaite, H. Pranevicius, M. Pranevicius, and F. Bukauskas, "Markovian model of the voltage gating of connexin-based gap junction channels," Elektronika ir Elektrotechnika, no. 5, pp. 103–106, 2011.
  8. H. Pranevicius, M. Pranevicius, O. Pranevicius et al., "Continuous time Markov chain models of voltage gating of gap junction channels," Information Technology and Control, vol. 43, no. 2, pp. 133–142, 2014.
  9. W. J. Stewart, MARCA—Markov-Chain Analyzer: A Software Package for Markov Modeling, Marcel Dekker, New York, NY, USA, 1991.
  10. N. Yang, H. Yu, Z. Qian, and H. Sun, "Modeling and quantitatively predicting software security based on stochastic Petri nets," Mathematical and Computer Modelling, vol. 55, no. 1-2, pp. 102–112, 2012.
  11. L. Brenner, P. Fernandes, B. Plateau et al., PEPS2007—Stochastic Automata Networks Software Tool, IEEE Computer Society, Los Alamitos, Calif, USA, 2007.
  12. P. Buchholz and T. Dayar, "Block SOR preconditioned projection methods for Kronecker structured Markovian representations," SIAM Journal on Scientific Computing, vol. 26, no. 4, pp. 1289–1313, 2005.
  13. Y. Saad, Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied Mathematics, 2003.
  14. W. J. Stewart, Introduction to the Numerical Solution of Markov Chains, Princeton University Press, Princeton, NJ, USA, 1994.
  15. B. Plateau and K. Atif, "Stochastic automata network for modeling parallel systems," IEEE Transactions on Software Engineering, vol. 17, no. 10, pp. 1093–1108, 1991.
  16. L. Mokdad and J. Ben-Othman, "Admission control mechanism and performance analysis based on stochastic automata networks formalism," Journal of Parallel and Distributed Computing, vol. 71, no. 4, pp. 594–602, 2011.
  17. K. R. Braghetto, J. E. Ferreira, and J.-M. Vincent, "Performance evaluation of resource-aware business processes using stochastic automata networks," International Journal of Innovative Computing, Information and Control, vol. 8, no. 7B, pp. 5295–5316, 2012.
  18. H. DeRemigio, M. D. LaMar, P. Kemper, and G. D. Smith, "Markov chain models of coupled calcium channels: Kronecker representations and iterative solution methods," Physical Biology, vol. 5, no. 3, Article ID 036003, 2008.
  19. Y. Hao, P. Kemper, and G. D. Smith, "Reduction of calcium release site models via fast/slow analysis and iterative aggregation/disaggregation," Chaos, vol. 19, no. 3, Article ID 037107, 2009.
  20. V. Wolf, "Modelling of biochemical reactions by stochastic automata networks," Electronic Notes in Theoretical Computer Science, vol. 171, no. 2, pp. 197–208, 2007.
  21. A. N. Langville and W. J. Stewart, "The Kronecker product and stochastic automata networks," Journal of Computational and Applied Mathematics, vol. 167, no. 2, pp. 429–447, 2004.
  22. A. Benoit, P. Fernandes, B. Plateau, and W. J. Stewart, "On the benefits of using functional transitions and Kronecker algebra," Performance Evaluation, vol. 58, no. 4, pp. 367–390, 2004.
  23. A. Benoit, B. Plateau, and W. J. Stewart, "Memory-efficient Kronecker algorithms with applications to the modelling of parallel systems," Future Generation Computer Systems, vol. 22, no. 7, pp. 838–847, 2006.
  24. J. Thorson, Gaussian Elimination on a Banded Matrix, Stanford University, Stanford, Calif, USA, 1982.
  25. G. E. Cho and C. D. Meyer, "Comparison of perturbation bounds for the stationary distribution of a Markov chain," Linear Algebra and Its Applications, vol. 335, no. 1–3, pp. 137–150, 2001.
  25. G. E. Cho and C. D. Meyer, “Comparison of perturbation bounds for the stationary distribution of a Markov chain,” Linear Algebra and Its Applications, vol. 335, no. 1–3, pp. 137–150, 2001. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus