Abstract

We discuss the application of the spectral element method to the monodomain and bidomain equations describing the propagation of the cardiac action potential. Models of cardiac electrophysiology consist of a system of partial differential equations coupled with a system of ordinary differential equations representing cell membrane dynamics. The solution of these equations requires resolving multiple length scales, because the ratio of advection to diffusion varies among the different equations. The high order approximation of spectral elements provides greater flexibility in resolving these scales. Furthermore, spectral elements are extremely efficient at modelling propagation phenomena on complex shapes, using fewer degrees of freedom than their finite element counterparts for the same level of accuracy. We illustrate a fully unstructured all-hexahedra implementation of the method and apply it to the solution of full 3D monodomain and bidomain test cases. We discuss key elements of the proposed approach on selected benchmarks and on an anatomically based whole human heart computational model.

1. Introduction

Mechanistic models of heart physiology are emerging as an important paradigm able to complement the insight and exploit the information provided by genetics and biological experiments. Recent years have seen the first attempts to define models for the numerical simulation of the individual response of real cardiology patients. For instance, in [1] the clinical study of a 65-year-old male with ischemic cardiomyopathy was addressed by means of a workflow consisting of detailed anatomic reconstruction and effective modelling of electrical activity, biomechanics, and hemodynamics. The modelling steps are challenging because they require (i) multiple simulations to estimate parameters and test the outcomes of different treatment options and (ii) many large scale computer analyses within a clinically useful time-frame. These are strong motivations in favour of effective, anatomy-compliant high performance computing methods.

In particular, current software to handle the forward problem of electrocardiology, that is, modelling the electrical activity of the heart, largely fails to deliver the performance needed for clinical applications. Typical discretization techniques applied to this problem have an average nodal spacing of 0.2 mm or less, resulting in approximately 30 million grid points for an average-size human heart. This gives rise to large scale computational problems which are difficult to solve in real time. The most common approach relies on the use of large high performance computer systems featuring a very large number of cores [2, 3], a choice often beyond the possibility of typical end-users. Several strategies have been suggested for facing this hindrance, including simplifying/optimizing the cardiac cell models [4], code optimization for running on next-generation hardware devices [5], and the adoption of more effective solutions for the approximation of the differential equations at hand. This work is based on the latter approach.

The bidomain equations [6] are a commonly used model for the propagation of depolarization wavefronts across cardiac tissue. They are often solved numerically using finite elements (FE), finite volumes, or finite differences, and a variety of algorithms based on these techniques have been proposed [7–14]. However, each of these methods suffers from a well-known limitation affecting the numerical simulation of propagation phenomena: to prevent the onset of nonphysical, spurious effects, commonly referred to as numerical dispersion, the accuracy of the method must be set appropriately. In cardiac simulation this has most commonly been achieved by grid refinement, that is, choosing a node spacing that is sufficiently fine that numerical dispersion is not a problem.

Alternatively, one can improve the quality of numerical simulations on a relatively coarse mesh by increasing the polynomial degree of the basis functions used in the numerical scheme. This is the approach of high order methods, examples of which are the p-version of the finite element method and the spectral element method. High order methods possess many advantages over standard techniques, such as considerably higher convergence rates and, in our experience, the possibility of using elements with large aspect ratios without significant deterioration in accuracy. Designed to combine the good accuracy properties of pseudospectral techniques such as Legendre or Chebyshev methods with the geometrical flexibility of classical low order FE methods, spectral element methods were first introduced in 1984 [15], combining spectral methods with domain decomposition approaches. Over the last decades, spectral elements have become firmly established as an effective tool for treating diverse compute-intensive physical problems on complex geometries in fluid dynamics, continuum mechanics, geophysics, and electromagnetics; see, for example, [16] and references therein.

To the best of our knowledge, few attempts have been made to exploit spectral elements in electrocardiology. An interesting idea is provided by the so-called spectral smoothed boundary method [17], which tries to merge the excellent convergence properties of Fourier-type spectral methods (which, on the other hand, are not able to accommodate irregular geometries) with the phase-field method, an embedding approach capable of dealing with complex domains when no-flux boundary conditions are present. In [18], high order spectral/hp elements on curvilinear triangles are used to model electrical propagation on the heart surface, addressing the monodomain problem and including anisotropic heterogeneous propagation in the presence of fibre orientation. In [19], which in the authors' words "should be seen as a proof-of-concept," high order finite elements based on triangles have been applied to the solution of the monodomain problem in 1D and 2D reference geometries; furthermore, the authors provide a rigorous analysis and numerical experiments on theoretical error convergence rates and suggest the possibility of extending their method to spatial adaptivity.

When solving the electrocardiology problem numerically, extremely high accuracy is not a valuable goal: realistically, none of the numerous parameters involved in the mono-/bidomain model is known with a precision better than 10%. On logical grounds, there is little use in solving with outstanding accuracy a problem based on a model which is only a partial approximation of a real patient's cardiovascular system. In this frame, the rationale behind the use of spectral elements is the possibility of achieving an a priori fixed, reasonable accuracy using fewer degrees of freedom (DOF) than low order methods. Matrices resulting from the space discretization are denser but have a smaller overall number of nonzero entries: this is important for decreasing both memory footprint and execution time. On the other hand, a reduced number of DOFs means smaller ODE systems describing the cell models, possibly shrinking the CPU-time of the in silico analysis.

Based on our experience in several fields of engineering, we propose to use straightforward, fully unstructured all-hexahedra spectral elements (SEM in the sequel). Differently from high order methods based on triangular elements, the SEM rely on a plain, standard, consolidated, widely accepted mathematical formulation based on tensor products, which brings tangible advantages in practical algorithm implementation. Comparison with triangular spectral elements is a difficult task, as the latter have appeared in many different algorithmic implementations and most of the work on theory and numerical experiments concerns the 2D case. For instance, in [20] Koornwinder-Dubiner polynomials and Fekete points were chosen as the orthogonal basis and approximation points, respectively. This choice leads to differentiation matrices that are larger than their SEM counterparts, thus raising the computational effort of matrix-vector products and, consequently, the overall CPU-time. Remarkably, numerical experiments suggest that the condition number of triangular spectral matrices is worse than that of the SEM ones, with a clear advantage for the latter in terms of convergence of iterative methods and error propagation. Furthermore, preconditioners for SEM have long been studied and developed; see, for instance, [21] and references therein. Apart from these good mathematical properties, the simplicity of SEM results in easy-to-implement, readable computer codes enjoying good portability to high performance platforms and new hardware devices such as GPGPUs (for instance, there is no need for complex data structures). Also, since they share a common philosophy with traditional finite elements, they can benefit from several existing and well-consolidated tools (pre- and postprocessing routines, mesh partitioning, iterative solvers, preconditioners, etc.).

The other side of the picture is that all-hexahedral grids are more difficult to generate than simplicial tetrahedral grids. While automatic (or semiautomatic) hex-only mesh generation for complex geometries is an active subject of research, the generation of an all-hexahedral mesh for applications in electrophysiology currently requires more effort than hybrid or all-tetrahedral grids. This is particularly true for those studies where anatomical properties must be resolved in very fine detail. Clearly, the extent to which a study requires representation of structural detail is determined by the particular application, and there may be cases where this is not a serious limitation. Finally, the spectral element implementation we propose is restricted to 8-node hexahedra. Extension to (curved) isoparametric elements is currently being addressed and will allow mapping the curved boundaries of irregularly shaped anatomical details and representing geometries more closely.

3. Materials and Methods

3.1. The Bidomain Model

For a 3D domain $\Omega$ the bidomain equations in parabolic-parabolic form read [22]

$$\chi C_m \frac{\partial v}{\partial t} - \nabla\cdot\left(D_i \nabla u_i\right) + \chi I_{\mathrm{ion}}(v,\mathbf{w},\mathbf{c}) = I^{i}_{\mathrm{app}}, \quad \text{(1a)}$$

$$-\chi C_m \frac{\partial v}{\partial t} - \nabla\cdot\left(D_e \nabla u_e\right) - \chi I_{\mathrm{ion}}(v,\mathbf{w},\mathbf{c}) = I^{e}_{\mathrm{app}}, \quad \text{(1b)}$$

$$\frac{\partial \mathbf{w}}{\partial t} = R(v,\mathbf{w}), \qquad \frac{\partial \mathbf{c}}{\partial t} = S(v,\mathbf{w},\mathbf{c}), \quad \text{(1c)}$$

where $u_i$ and $u_e$ are the intracellular and extracellular potentials, $v = u_i - u_e$ is the transmembrane potential, $C_m$ is the membrane capacitance per unit area, $D_i$ and $D_e$ are the intracellular and extracellular conductivity tensors modelling the anisotropy of the cardiac tissue, and $\chi$ is the cell surface area to volume ratio. Here, $\mathbf{w}$ and $\mathbf{c}$ are arrays of ionic gating variables and ionic concentrations, respectively [22], and $I^{i}_{\mathrm{app}}$ and $I^{e}_{\mathrm{app}}$ are externally applied intracellular and extracellular stimulus currents. Suitable boundary conditions on the potentials or on the current fluxes, together with initial conditions for $v$, $\mathbf{w}$, and $\mathbf{c}$, are set. The functions $R$, $S$, and the transmembrane ionic current $I_{\mathrm{ion}}$ are determined by parameters and data fitting and represent an electrophysiological cell model. Many such cell models of variable complexity exist; see, for example, [23] for a recent review. In this work we model the ionic current using the phase-I Luo-Rudy (LR1) cell model [24]. However, the derivation of the SEM formulation of the bidomain model follows analogously for any ordinary differential equation (ODE) based cell model. The LR1 model relies on six gating variables along with the intracellular calcium concentration. The compatibility condition

$$\int_\Omega \left(I^{i}_{\mathrm{app}} + I^{e}_{\mathrm{app}}\right)\,dx = 0 \quad \text{(2)}$$

should hold on $I^{i}_{\mathrm{app}}$ and $I^{e}_{\mathrm{app}}$ for the system to be solvable. Note that the transmembrane potential $v$ is uniquely determined, while the intra- and extracellular potentials $u_i$ and $u_e$ are determined up to the same additive time-dependent constant, whose value is usually obtained by imposing a normalization condition on $u_e$, for example, zero average on $\Omega$.
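The coupling between the PDE system (1a)-(1b) and the cell model (1c) is purely local: at every grid node one evaluates $I_{\mathrm{ion}}$ and advances the gating and concentration ODEs. The Python sketch below illustrates this interface for a single generic Hodgkin-Huxley-type gate advanced with a Rush-Larsen exponential update; the rate functions and parameter values are placeholders for illustration only, not those of the LR1 model.

```python
import numpy as np

def alpha(v):
    # placeholder opening rate [1/ms]; a real model defines one such pair per gate
    return 0.1 * np.exp(-0.05 * (v + 40.0))

def beta(v):
    # placeholder closing rate [1/ms]
    return 0.2 / (1.0 + np.exp(-0.1 * (v + 20.0)))

def advance_gate(w, v, dt):
    """Advance one gating variable with the Rush-Larsen exponential update,
    which is exact for dw/dt = alpha(v)*(1 - w) - beta(v)*w at frozen v."""
    a, b = alpha(v), beta(v)
    w_inf = a / (a + b)        # steady-state value of the gate at this voltage
    tau = 1.0 / (a + b)        # gate time constant [ms]
    return w_inf + (w - w_inf) * np.exp(-dt / tau)

def i_ion(v, w, c):
    # placeholder transmembrane current density [uA/cm^2]: a real model sums
    # several gated currents depending on v, the gates w, and the concentrations c
    g_max, e_rev = 1.0, -85.0
    return g_max * w * (v - e_rev)
```

Because the update acts node by node, the same routines apply unchanged whether `v` and `w` are scalars or arrays holding the values at all spectral grid points.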

We remark that an alternative to the form (1a)-(1b), the parabolic-elliptic formulation of the bidomain equations, exists, having the transmembrane and extracellular potentials as unknowns. This formulation is quite popular, also because it allows dealing with the parabolic and elliptic blocks at different steps by a Gauss-Seidel method. Nevertheless, (i) recent studies show that decoupling the bidomain equations can be less efficient than solving them as a global system (for the same level of accuracy [25]), and (ii) evidence exists that the parabolic-parabolic formulation provides a significantly more efficient numerical framework [26]. Our SEM implementation addresses the coupled parabolic-parabolic problem, also because, the SEM mass matrix being diagonal, this allows a significant saving in memory allocation and, quite likely, a reduced computational effort. This is because the number of nonzero entries of the global matrix (hence the cost of sparse matrix-vector products) is greatly reduced with respect to the parabolic-elliptic formulation.

3.2. Variational Formulation of the Bidomain Model

The variational form of the bidomain equations is as follows: given applied currents $I^{i}_{\mathrm{app}}$ and $I^{e}_{\mathrm{app}}$ fulfilling the compatibility condition (2), find, for every time $t$, intra- and extracellular potentials satisfying the given initial and boundary conditions such that the weak equations (3a)-(3b) hold for all admissible test functions from a suitable test space. See [27] for a mathematical analysis of the bidomain model. Existence and uniqueness of a solution of the bidomain problem for a wide class of cell models, including LR1, are discussed in [28].
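As a sketch, the weak equations are obtained by multiplying (1a)-(1b) by test functions $\hat u_i$ and $\hat u_e$, integrating by parts, and assuming homogeneous no-flux boundary conditions:

$$\int_\Omega \chi C_m \frac{\partial v}{\partial t}\,\hat u_i\,dx + \int_\Omega D_i \nabla u_i \cdot \nabla \hat u_i\,dx + \int_\Omega \chi I_{\mathrm{ion}}(v,\mathbf{w},\mathbf{c})\,\hat u_i\,dx = \int_\Omega I^{i}_{\mathrm{app}}\,\hat u_i\,dx,$$

$$-\int_\Omega \chi C_m \frac{\partial v}{\partial t}\,\hat u_e\,dx + \int_\Omega D_e \nabla u_e \cdot \nabla \hat u_e\,dx - \int_\Omega \chi I_{\mathrm{ion}}(v,\mathbf{w},\mathbf{c})\,\hat u_e\,dx = \int_\Omega I^{e}_{\mathrm{app}}\,\hat u_e\,dx.$$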

3.3. Spatial Discretization

Let $\mathcal{T}_h$ be a decomposition of the model volume $\Omega$ into a family of nonoverlapping hexahedra $\Omega_k$ with typical size $h$. Each element $\Omega_k$ is obtained by a transformation $F_k$ from a reference (or parent) element $\hat\Omega = [-1,1]^3$. On the reference element, we consider the space $\mathbb{Q}_N(\hat\Omega)$ of polynomial functions with degree less than or equal to $N$ in each variable. Then, for each $\Omega_k$, let $\mathbb{Q}_N(\Omega_k)$ be the space of functions that are images through $F_k$ of polynomial functions in $\mathbb{Q}_N(\hat\Omega)$; hence, $v \circ F_k \in \mathbb{Q}_N(\hat\Omega)$ for all $v$ in $\mathbb{Q}_N(\Omega_k)$. Finally, we define the spectral element space as

$$V_N = \left\{ v \in C^0(\overline{\Omega}) : v|_{\Omega_k} \in \mathbb{Q}_N(\Omega_k) \ \text{for every } \Omega_k \in \mathcal{T}_h \right\}.$$

In our implementation the mapping $F_k$ is trilinear; thus, when $N>1$ the mapping is subparametric, meaning that its degree is lower than the spectral degree. This choice is essentially motivated by practical considerations: mesh generators produce 8-point hexahedra, while $N$-order hexahedra need $(N+1)^3$ total points (and produce different grids for analyses with different spectral degrees). The subparametric mapping is known to enjoy good mathematical properties (see, for instance, [29]). Nevertheless, if high order hexahedra are desirable (for instance, when dealing with domains with curved boundaries which can be described in terms of geometrical primitives), they can be incorporated in the spectral element framework with little additional effort.
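A minimal Python sketch of the 8-node (trilinear) mapping $F_k$ from the reference cube $[-1,1]^3$ to a physical hexahedron is given below; the vertex ordering is one possible convention chosen purely for illustration.

```python
import numpy as np

def trilinear_map(vertices, xi, eta, zeta):
    """Map a point (xi, eta, zeta) of the reference cube [-1,1]^3 to the
    physical hexahedron with the given 8 corner vertices (array of shape (8, 3)).
    Vertex k corresponds to the reference corner with signs (sx[k], sy[k], sz[k])."""
    sx = np.array([-1,  1,  1, -1, -1,  1,  1, -1])
    sy = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
    sz = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
    # the 8 trilinear shape functions evaluated at (xi, eta, zeta)
    N = 0.125 * (1 + sx * xi) * (1 + sy * eta) * (1 + sz * zeta)
    return N @ vertices   # weighted average of the corner coordinates

# usage: the reference-element centre of a unit cube element maps to its centroid
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(trilinear_map(verts, 0.0, 0.0, 0.0))   # -> [0.5 0.5 0.5]
```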

3.4. Construction of SEM Basis Functions

First, the Legendre-Gauss-Lobatto (LGL) points in the reference element $\hat\Omega$ are obtained via tensor product of the one-dimensional LGL nodes in $[-1,1]$. The full spectral grid is then built by mapping the LGL nodes over the hexahedra and eliminating duplicated points: as for finite elements, a global numbering is associated with the grid points. On $[-1,1]$ we consider the Lagrange polynomials of degree $N$ corresponding to the LGL nodes and, on $\hat\Omega$, the elemental basis functions obtained by tensor product; these are then mapped to each $\Omega_k$ according to $F_k$. Finally, a basis for the whole space $V_N$ is obtained as a patchwork of these elemental functions on each element $\Omega_k$. More precisely, we choose as basis functions the images in each $\Omega_k$ of the polynomials of order $N$ on $\hat\Omega$ such that each basis function takes the value one at its own grid point and zero at all the others (the Kronecker delta property). It is easy to see that the restriction of such spectral shape functions to each element either coincides with a mapped Lagrange polynomial or vanishes. It is easily verified that the above choice of nodal basis functions ensures that the corresponding global basis enjoys as much localization as possible.
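A short Python sketch of the one-dimensional ingredients just described: the LGL nodes of a given degree and the associated Lagrange polynomials with the Kronecker delta property (the 3D points and basis functions follow by tensor product). This is an illustrative computation, not an excerpt of the production code.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(n):
    """Return the n+1 Legendre-Gauss-Lobatto nodes in [-1, 1]:
    the endpoints plus the roots of the derivative of the Legendre polynomial P_n."""
    c = np.zeros(n + 1); c[n] = 1.0                    # coefficients of P_n
    interior = legendre.legroots(legendre.legder(c))   # roots of P_n'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange polynomial built on `nodes` at the points x."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    for m, xm in enumerate(nodes):
        if m != j:
            out *= (x - xm) / (nodes[j] - xm)
    return out

nodes = lgl_nodes(4)                    # degree-4 LGL points: -1, -sqrt(3/7), 0, sqrt(3/7), 1
print(np.round(nodes, 4))
print(lagrange_basis(nodes, 2, nodes))  # Kronecker delta property: 1 at node 2, 0 elsewhere
```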

3.5. SEM Formulation

Taking the test space equal to the spectral element space $V_N$, the SEM approximation of (3a)-(3b) consists in finding discrete potentials in $V_N$ satisfying the weak equations for every time $t$ and for all test functions in $V_N$. For the numerical realization of the SEM we resort to the so-called GNI approach (see, e.g., [16]), which consists in exploiting numerical integration for computing the integrals. With this choice, the semidiscrete form of the bidomain equations becomes the following: find discrete potentials in $V_N$ satisfying the given initial and boundary conditions and such that the weak equations hold for all test functions in $V_N$, where the integrals on each $\Omega_k$ are computed using Gauss-Lobatto numerical integration formulas on the corresponding reference element $\hat\Omega$.
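With Gauss-Lobatto quadrature collocated at the LGL nodes, the elemental mass matrix becomes diagonal, since each Lagrange basis function is nonzero at exactly one quadrature node. A minimal Python sketch of the one-dimensional quadrature weights, shown for illustration using the standard formula $w_j = 2/\left(N(N+1)\,P_N(x_j)^2\right)$, follows.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(n):
    """n+1 Legendre-Gauss-Lobatto nodes in [-1, 1]: the endpoints plus the roots of P_n'."""
    c = np.zeros(n + 1); c[n] = 1.0
    return np.concatenate(([-1.0], np.sort(legendre.legroots(legendre.legder(c))), [1.0]))

def lgl_weights(nodes):
    """LGL quadrature weights w_j = 2 / (n*(n+1)*P_n(x_j)**2)."""
    n = len(nodes) - 1
    c = np.zeros(n + 1); c[n] = 1.0
    Pn = legendre.legval(nodes, c)
    return 2.0 / (n * (n + 1) * Pn ** 2)

# With the GNI approach, the 1D elemental mass matrix collapses to a diagonal of
# quadrature weights, because each Lagrange basis function equals 1 at its own node
# and 0 at every other node; in 3D the diagonal entries are products w_i*w_j*w_k
# times the Jacobian of the element mapping evaluated at that node.
nodes = lgl_nodes(4)
w = lgl_weights(nodes)
print(np.round(w, 4), w.sum())   # the weights sum to 2, the length of [-1, 1]
```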

3.5.1. Semidiscrete Form of the Bidomain Equation

In terms of the SEM basis, the spectral element solution is expanded as a linear combination of the basis functions, whose coefficients represent the unknown nodal values. If we denote by $M$ the diagonal spectral element mass matrix and by $K_i$ and $K_e$ the symmetric intracellular and extracellular stiffness matrices, we may write the semidiscrete form of the bidomain equations in compact matrix form, in which the bidomain mass and stiffness matrices are assembled from $M$, $K_i$, and $K_e$; a sketch of the resulting block structure is given below. In a similar, but simpler, way one can derive the SEM formulation of the monodomain problem, along with the given cell model, boundary, and initial conditions.
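As a sketch, and adopting the sign convention of (1a)-(1b), the semidiscrete bidomain system has the following block structure, with $\mathbf{u}_i$, $\mathbf{u}_e$ the nodal values of the potentials, $\mathbf{I}_{\mathrm{ion}}$ the nodal ionic currents, and $\mathbf{b}_i$, $\mathbf{b}_e$ the discretized applied currents:

$$\chi C_m \begin{bmatrix} M & -M \\ -M & M \end{bmatrix} \frac{d}{dt} \begin{bmatrix} \mathbf{u}_i \\ \mathbf{u}_e \end{bmatrix} + \begin{bmatrix} K_i & 0 \\ 0 & K_e \end{bmatrix} \begin{bmatrix} \mathbf{u}_i \\ \mathbf{u}_e \end{bmatrix} + \chi \begin{bmatrix} M\,\mathbf{I}_{\mathrm{ion}} \\ -M\,\mathbf{I}_{\mathrm{ion}} \end{bmatrix} = \begin{bmatrix} \mathbf{b}_i \\ \mathbf{b}_e \end{bmatrix},$$

so that the bidomain mass matrix is the $2\times 2$ block matrix multiplying the time derivative and the bidomain stiffness matrix is the block diagonal of $K_i$ and $K_e$.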

3.6. Semi-Implicit Time Discretization

We use a mixed time-marching scheme which treats the gating variables implicitly while it is explicit for the intracellular concentration variables and for the reaction terms involving the transmembrane potential. This is motivated by the fact that the gating equations in (1c) are fully decoupled, each gate depending only on itself and on the transmembrane potential, whereas the concentration variables have complex, nonlinear dependencies on the potential, on one another, and on the gating variables, making an implicit time-step much more expensive to compute.

Step 1. Given the transmembrane potential at the previous time step, advance the cell model ODEs (1c) over one time step of size $\Delta t$ to obtain the updated gating variables and ionic concentrations.

Step 2. Find the updated intra- and extracellular potentials by solving the linear system (15) arising from the semi-implicit treatment of the diffusion terms. For the solution of (15) we exploit classical Krylov methods, taking into account the consistency of the (singular) linear system; see, for instance, [30, Theorem 2].
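The structure of Steps 1-2 can be summarized by the Python sketch below, written for the simpler monodomain case and with a generic `cell_model` callback; the names and the use of SciPy's conjugate gradient are illustrative assumptions, not the Fortran/PETSc implementation described later.

```python
import numpy as np
from scipy.sparse.linalg import cg

def semi_implicit_step(v, w, c, M_diag, K, dt, chi, C_m, i_app, cell_model):
    """One semi-implicit monodomain step.
    Step 1: advance gates and concentrations node by node at frozen v.
    Step 2: solve (chi*C_m/dt*M + K) v_new = chi*C_m/dt*M*v - chi*M*I_ion + M*i_app.
    M_diag is the (diagonal) SEM mass matrix stored as a vector, K the stiffness matrix."""
    w, c, I_ion = cell_model(v, w, c, dt)           # Step 1: cell model callback

    A = K.copy()                                    # Step 2: assemble and solve
    A.setdiag(A.diagonal() + chi * C_m / dt * M_diag)
    rhs = M_diag * (chi * C_m / dt * v - chi * I_ion + i_app)
    v_new, info = cg(A, rhs, x0=v)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return v_new, w, c
```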

Clearly, the effective solution of (15) requires the choice of an optimal preconditioner, especially for the bidomain case. This is a difficult task which truly deserves a dedicated investigation. In particular, algebraic multigrid preconditioners (AMG) have been applied to the bidomain equations, often outperforming standard methods like the ILU or the block Jacobi preconditioner adopted in this work [31]. Quite often, AMG preconditioners are applied to the parabolic-elliptic formulation of the bidomain equations, while our SEM implementation stems from the coupled parabolic-parabolic problem (see Section 3.1). Analysis of AMG preconditioning for the SEM discretization of the coupled parabolic-parabolic problem (3a)-(3b) will be the object of a future paper, along with a study on the condition numbers of SEM matrices, both unpreconditioned and preconditioned with different methods.

4. Results and Discussion

In order to validate the proposed spectral element scheme we start by analyzing its performance for monodomain and bidomain simulations using a well-known benchmark on a simplified geometry. Then, to further assess the ability of the method to cope with the additional complexities associated with organ scale cardiac electrophysiology models, we present the results of simulating monodomain activity on a realistic human heart geometry. Numerical experiments were performed at the CRS4 High Performance Computing centre on a cluster comprising more than 3000 processor cores, grouped into 382 nodes of 8 Intel Xeon (2.8 GHz) cores each, connected through a low-latency InfiniBand network.

4.1. The Monodomain Niederer Benchmark

We use the benchmark proposed in [32]: it is a consensus cardiac tissue electrophysiology model problem that is used for evaluating and verifying cardiac tissue electrophysiology simulators. The problem geometry is defined as a parallelepipedal portion of cardiac tissue (cuboid or slab) with dimensions of 2.0 × 0.7 × 0.3 cm (see Figure 1), characterized by transversely isotropic conductivity with fibres aligned to the long axis of the computational domain.

Intralongitudinal, intratransversal, extralongitudinal, and extratransversal conductivities are set to 0.17, 0.019, 0.62, and 0.24 S/m, respectively. The stimulus current is applied to a cubic region of 0.15 × 0.15 × 0.15 cm located at one corner of the computational domain. The monodomain equations coupled with the ten Tusscher-Panfilov cell model [33] are solved for all simulations. We use a simple diagonal preconditioner and a fixed tolerance for the Conjugate Gradient iterative algorithm. Activation times at the points shown in Figure 1 are evaluated at spatial resolutions of 0.05, 0.02, and 0.01 cm and for several PDE time steps. In the present case, activation time is defined as the unique time when the membrane potential passes through 0 mV upon first activation. In the following, we focus on the effect of varying the spectral degree for the transmembrane potential. Due to space constraints, we present only data obtained using a single PDE time step. Selected quantitative data for the simulations described above are presented in Table 1. Figure 2(a) illustrates the measured activation times at the centroid for increasing spectral degrees and for the three levels of spatial refinement, displaying the difference that high order approximation brings. While the lowest degree is inaccurate for all spatial discretisations, we can see that moderate spectral degrees are sufficient on the intermediate and finest mesh resolutions to obtain a practically converged solution. Note that using higher spectral degrees we can mitigate the adverse effects of the coarsest mesh on the measured centroid activation times: the computed relative errors with respect to the finest mesh reference solution are 3.398% and 1.286% for two of the higher degrees, respectively (see Table 1). The activation times at point P8 are shown in Figure 2(b) for varying degree and different mesh resolutions. Due to error accumulation as the wave propagates across the computational domain, the effect of using a higher degree at the lowest spatial resolution is slightly less favourable: the corresponding relative errors with respect to the finest mesh reference solution are 4.420% and 1.822%. However, as the mesh is refined, the conduction velocity of the activation wave is again well represented at moderate spectral degrees on the intermediate and finest mesh resolutions.
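The activation time defined above (first upward crossing of 0 mV) can be extracted from a sampled potential trace by linear interpolation between the two bracketing samples; a small illustrative Python sketch, not part of the simulation code, follows.

```python
import numpy as np

def activation_time(t, v, threshold=0.0):
    """Return the first time v crosses `threshold` from below,
    linearly interpolated between the two bracketing samples."""
    above = v >= threshold
    idx = np.argmax(above)              # index of the first sample at or above threshold
    if not above.any() or idx == 0:
        return np.nan                   # never activated (or already above at t[0])
    t0, t1 = t[idx - 1], t[idx]
    v0, v1 = v[idx - 1], v[idx]
    return t0 + (threshold - v0) / (v1 - v0) * (t1 - t0)

# usage on a synthetic upstroke sampled every 0.05 ms
t = np.arange(0.0, 5.0, 0.05)
v = -85.0 + 125.0 / (1.0 + np.exp(-(t - 2.0) / 0.1))
print(activation_time(t, v))            # roughly 2.08 ms for this synthetic trace
```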

To test whether these results were converged at these spatial scales, we performed highly refined simulations at a finer spatial resolution. Results for the cuboid centroid and for point P8 are illustrated in Figure 3. The degrees of freedom and the numbers of nonzero entries (NNZ) of the solution matrices for the various meshes in the simulations give some insight into the computational workload associated with the solution of the monodomain problem using spectral elements for a given degree of accuracy. In this case relative accuracy is measured by the relative error on the activation time at a specific point in the computational domain with respect to a reference solution. In fact, we may consider that, for a given cellular electrophysiology model at a given spatial and temporal resolution, the overall computational burden is essentially that required for the iterative solution of the sparse linear system arising from the PDE discretization. Using iterative methods, the computational effort is driven by matrix-vector multiplication whose cost, in turn, is essentially determined by NNZ, the number of nonzero entries of the solution matrix. In the case of the spectral element method the latter is the most sensitive parameter, as shown in Table 1, where it is apparent that it grows more than linearly with the spectral degree. If, for the moment, we disregard any optimisation issue in the solution of the linear system, we can assume NNZ as a rough estimate of the computational workload required to solve the problem for a given iterative algorithm and preconditioner. It is worth observing that, taking the most refined solution as a reference, these data show that a practically converged solution on the coarser meshes requires, in the worst case, only a fraction of the effort required for the reference solution.
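The DOF and NNZ figures used in this argument can be estimated roughly for a structured hexahedral grid; the Python sketch below counts, for every element, all couplings among its local nodes, which gives an upper bound because entries shared across element interfaces are counted more than once. The grid dimensions in the example are illustrative.

```python
def sem_dof_nnz(nx, ny, nz, degree):
    """Rough size estimate for a structured nx*ny*nz hexahedral grid of spectral
    degree N: the number of scalar DOFs and an upper bound on the nonzero entries
    per scalar field, obtained by counting, for every element, all couplings among
    its (N+1)**3 local nodes (entries shared across element interfaces are counted
    more than once, so the true NNZ is somewhat lower)."""
    n = degree
    dofs = (n * nx + 1) * (n * ny + 1) * (n * nz + 1)
    local = (n + 1) ** 3
    nnz_upper = nx * ny * nz * local ** 2
    return dofs, nnz_upper

# e.g. a 2.0 x 0.7 x 0.3 cm slab meshed with 0.05 cm elements, for increasing degree
for N in (1, 2, 3, 4):
    print(N, sem_dof_nnz(40, 14, 6, N))
```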

To show how errors are distributed in space, we computed the activation times on the diagonal from point P1 to point P8 for varying mesh resolutions. Results are shown in Figure 4. The graph represents the activation times of points distributed along a line roughly 21.4 mm in length. Ten points at equal intervals are selected along this line, and the activation time for each point is interpolated from the hexahedron which contains it. These curves highlight that, for this choice of spectral degree, there is essentially no dependence on spatial resolution except for the lowest one. All curves share a common morphology, with the activation velocity slightly increasing as the mesh resolution improves. Note that in Figure 4 the solutions at the two finer resolutions (green and red lines) are virtually indistinguishable from each other. Furthermore, boundary effects are not seen for intermediate to high spatial resolutions and remain limited at the coarsest resolution.

To determine the impact of the conduction velocity error in directions perpendicular to the fibres, due to the lower conductivity, we show in Figure 5 the isosurfaces of activation times for a selected mesh resolution and spectral degree. For the sake of comparison with the data published in [32] we also provide the plots of the activation along the plane shown in Figure 1. Note that, for this choice of mesh resolution and spectral degree, the curvature of the activation wave propagating along and across the preferential fibre direction is not distorted. These data also show the ability of the proposed method to capture off-fibre conduction velocities at relatively coarse spatial resolution and demonstrate visually that no boundary effects are present.

To complete this test case, Figure 6 shows the action potential time history at the cuboid centroid for a 350 ms simulation. It can be seen how the transmembrane potential depolarization, plateau, and recovery phases are well reproduced.

4.2. The Bidomain Niederer Benchmark

We studied bidomain simulations with the same benchmark used in Section 4.1 for the monodomain case. As before, we consider a simulation domain of size 2.0 × 0.7 × 0.3 cm at mesh discretisations of 0.05, 0.02, and 0.01 cm, with anisotropic conductivity. The domain was presented in Figure 1 and further details were given in Section 4.1.

The measured activation times at the cuboid centroid are shown for a range of spectral degrees and varying spatial resolutions in Figure 7. Note how cubic and quartic degrees display superior accuracy to lower degrees already at the second coarsest resolution. Furthermore, on the finest mesh resolution the solutions for the different spectral degrees are indistinguishable from one another, showing that at that resolution roughly equivalent accuracy can be attained by all degrees considered.

These data suggest that practical combinations of mesh resolution and spectral degree exist for the bidomain model as well, either using a higher degree on the second coarsest mesh or a lower degree on the finest spatial resolution. This is further illustrated by the results of computing activation times along the cuboid diagonal. Figure 8 shows how errors are distributed in space for selected combinations of mesh resolution and spectral degree. Essentially, at the suggested mesh discretisations the selected solutions are practically indistinguishable. Furthermore, in all cases, the boundary effects are minimal. Note that, on the second coarsest mesh, the effect reduces to a slight underestimate of the conduction velocity with no apparent effect on the morphology of the resulting curve. Data for point P8, not shown here due to space constraints, confirm the findings for the monodomain case discussed in Section 4.1.

Further investigations on the shape of the wave produced in the simulations quantify the differences one might expect when using one or the other combination of mesh resolution and spectral degree. This is visually illustrated in Figure 9, which shows the action potential time history at the midpoint of the cuboid diagonal for a 60 ms simulation. Results shown in the right panel confirm that with the coarser combination there is still a minimal overestimate of the conduction velocity, with no practical effect when compared with the converged solution.

These data show that a lower spectral degree on the finer mesh or a higher degree on the second coarsest mesh ought to be preferred over any other combination of mesh resolution and spectral degree, given that these produce equivalent accuracy. However, the degrees of freedom and the number of nonzero entries of the solution matrix for the two combinations indicate that the latter produces a fully converged solution at roughly 69% of the effort required to solve the problem in the former case.

Finally, Figure 10 confirms the ability of the proposed method to capture the lower off-fibre conduction velocities when solving the bidomain problem. Note how the activation wave pattern propagates at all times with no relevant boundary effects.

4.3. High-Resolution Anatomically Realistic Heart

We continue the assessment of the method by presenting the results of simulating monodomain activity on an anatomically realistic, high-resolution cardiac computational mesh. At present, fully automatic all-hex mesh generation of complex solids, such as a whole heart, is not yet possible, and most volumes require some measure of decomposition before they can be meshed with a hexahedral meshing scheme. To tackle this problem, we developed a computational pipeline based upon CUBIT (Sandia Laboratory, http://cubit.sandia.gov), an advanced and robust solid modeler and 3D unstructured hexahedral mesh generator that offers state-of-the-art capabilities to design, assess, and improve the quality of a mesh in terms of both geometrical and numerical accuracy. Starting from a ventricular surface obtained from CT images and represented by means of triangular elements, a volume model is generated from which a decomposition into automatically meshable subvolumes can be constructed. A conforming unstructured all-hexahedra mesh is then semiautomatically generated, and quality checks and mesh validation are carried out on each subvolume. At the end of the process the quality of the resulting mesh with respect to standard finite element metrics is assessed and the mesh is finally validated for use in simulation studies. The surface definition we used was derived from the statistical shape model constructed from a training set comprising 100 asymptomatic and pathologic subjects described in [34]. Data were originally provided by the CISTIB at the Universitat Pompeu Fabra, Spain. Figure 11 shows an anterior and a posterior view of the complete human heart model after subvolume partition.

Using the pipeline outlined above we are currently able to generate whole human heart meshes with a very fine average edge length of 0.018 cm. Due to hardware constraints, the version of the mesh used in the simulation study presented below has a coarser average edge length of 0.036 cm. Figure 12 shows some detailed views of the all-hexahedral mesh we used in our simulations. In the present benchmark the cellular membrane dynamics were defined by the phase-I Luo-Rudy cell model [24], although any other ionic model could have been used. Spatial variation in fibre direction is not accounted for and homogeneous membrane properties are assumed throughout the volume. The baseline intracellular and extracellular conductivity tensors were isotropic and constant, and constant values were also assigned to the membrane capacitance (in μF/cm²) and to the surface area to volume ratio. A stimulus of 2.0 ms duration was applied to surface nodes within the apical region of the mesh, which elicited the propagation of a quasi-planar wavefront in an approximately apicobasal direction.

Figures 13–15 show the results of simulating 250 ms of monodomain activity on the heart geometry described above. Simulations were run with time steps of 0.005 ms and 0.01 ms for the ODE and PDE solves, respectively. The simulations were run on 128 cores. Our FORTRAN90/MPI parallel spectral element code is based on the parallel library PETSc from the Argonne National Laboratory (http://goo.gl/LGvZKZ) and embeds METIS for partitioning spectral element meshes (http://glaros.dtc.umn.edu/gkhome/views/metis). The iterative linear solver used was Conjugate Gradient with an ILU preconditioner. The resulting algebraic problem involves a correspondingly large total number of degrees of freedom and nonzero matrix entries.
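For illustration only, the sketch below reproduces the solver configuration named here (Conjugate Gradient with an ILU preconditioner) using SciPy on a small stand-in matrix; it is not the PETSc-based implementation, and the matrix is a placeholder for the actual monodomain system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# a small symmetric positive definite stand-in for the monodomain system matrix
n = 1000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (K + 5.0 * sp.identity(n)).tocsc()
b = np.ones(n)

ilu = spilu(A)                                    # incomplete LU factorization of A
M = LinearOperator(A.shape, matvec=ilu.solve)     # wrap it as a preconditioner
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))            # info == 0 means CG converged
```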

We do not have a reference solution in this case, but we should expect to achieve a degree of accuracy similar to that of the intermediate or, in the worst case, the coarsest resolution Niederer benchmark runs. Note that the quality of the numerical simulations can be improved at runtime simply by increasing the polynomial degree of the basis functions used in the numerical scheme, without the need to generate a new mesh as would be the case for standard finite elements. Figure 13 shows the action potential time history for three points located in the internal septum of the heart. These plots demonstrate the ability of the proposed method to capture the shape of a steep travelling front at this spatial resolution. The absence of oscillations at the end of the depolarization phase further corroborates this observation.

The panels in Figure 14 show the resulting electrical activation sequence of an action potential cycle at successive times. Note how the wavefront is well represented and propagates without distortion at this mesh resolution. Convergence studies using finite elements as a comparison, discussed below, demonstrate that this is definitely not attainable at the present spatial resolution with standard low order methods. Finally, to better appreciate the quality of the wavefront resolution, some zoomed snapshots of the transmembrane potential at 95 ms and 105 ms after activation are presented in Figure 15.

The analysis of the parallel performance of the SEM algorithm deserves a dedicated study and is not included in this work. As a matter of fact, SEM produce overall matrices which are denser but smaller in size than those of low order methods: this, in principle, may allow good computational efficiency. Without going into details, this depends on (i) the way matrices are split between parallel tasks, (ii) the low number of interface nodes (i.e., nodes shared by different tasks) typical of SEM, as they need fewer grid points to provide the same accuracy, and (iii) the fact that parallel efficiency roughly depends on the ratio between the computational effort of each task and the amount of message passing. Improvements in parallel efficiency for increasing values of the spectral degree have long been observed in other fields of application; see, for example, [35]. Lastly, the choice of the preconditioner may significantly affect the parallel performance.

4.4. Computational Effort: A Comparison with Finite Elements

We exploit the previous example to assess the accuracy and computational cost of our code in comparison with standard finite elements. Because of the time-consuming nature of performing such an investigation across a range of different parameters in three dimensions, we used a very small test domain. To this end, we extracted from the right ventricular wall of the human heart geometry described above a rectangular cuboid spanning from the epicardial to the endocardial surface, with a volume of approximately 0.365 cm³. We then solved an isotropic monodomain problem on this subvolume, using linear finite elements on a series of progressively, uniformly refined computational meshes. The meshes we used have average spatial resolutions approximately equal to 0.02, 0.01, 0.005, and 0.0025 cm.

The parameters used for the simulation were the same as those used for the heart-scale problem. A stimulus of 2.0 ms duration was applied to the bottom surface nodes of the cuboid. The resulting time histories at a receiver located on the external surface of the subvolume, at a distance from the stimulation face approximately equal to 80% of its maximum vertical dimension, are shown in Figure 16, where they are compared with a SEM solution obtained on a mesh with average element size approximately equal to 0.01 cm.

These curves highlight that the finite elements progressively converge to the SEM solution as the mesh resolution improves. In particular, the most refined FEM solution (continuous red line) is virtually indistinguishable from the SEM solution (dotted blue line).

Table 2 shows a comparison between the FEM and SEM analyses in terms of DOFs, nonzero entries, and CPU-time to time-advance the monodomain equation. The SEM are able to provide the same accuracy as the FEM solver using approximately 20% of the DOFs: this is not far from our findings in other fields of application. The DOF ratio tracks quite well the CPU-time for time-advancing the ODE system associated with the cell model (28%). While the SEM NNZ are roughly half of their FEM counterpart, the CPU-time for time-marching the monodomain equation with SEM does not decrease by the same factor. There are several reasons for this deviation, including the number of iterations needed to solve the linear system at each time-step (slightly in favour of SEM), the additional cost of the preconditioner when solving the linear system (15), and the fact that in modern computer architectures memory access and data movement are at least as expensive as floating point operations.

These results are, in our opinion, quite promising; we understand that the potential of the method we are presenting to reduce the time required to perform large scale electrophysiology simulations has to be further confirmed by a thorough comparison of the spectral element and finite element methods on both the monodomain and bidomain equations. This will be the subject of a forthcoming contribution.

5. Conclusions

In this paper we presented an effective high order discretisation technique based on the spectral element method for the study of electrophysiological wave propagation. A fully unstructured all-hexahedra implementation of the method has been illustrated and applied to the solution of full 3D monodomain and bidomain models. A careful validation of the implementation was performed on a consensus monodomain benchmark with anisotropic conductivities, which was also extended to the bidomain case. At the organ level, a full scale simulation model of the heart was used to assess the ability of the proposed method to cope with the additional complexities associated with large scale cardiac electrophysiology models. To the best of our knowledge, few attempts have been made to exploit spectral elements in electrocardiology; our work represents an extension of these studies, which typically dealt with oversimplified computational domains. The approach proposed here enjoys many advantages over standard techniques, such as considerably higher convergence rates, geometrical flexibility, and, in our experience, the possibility of using elements with large aspect ratios without significant deterioration in accuracy. Furthermore, the flexibility and robustness of the method, as demonstrated in solving complex three-dimensional problems at organ scale, are quite appealing and promising and will be further explored in combination with the development of advanced approaches to maximise code performance. We are aware that there are a number of aspects that deserve further investigation. For example, we have not covered the experimental evaluation of the parallel performance of the method at hand; this will be the subject of a forthcoming contribution. Further improvements of this work may include the exploitation of new hardware devices such as GPGPUs as well as the investigation of efficient and scalable preconditioners that would allow increased performance.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work, part of the preDiCT Project, was supported by a grant from the European Commission DG-INFSO, Grant no. 224381. The authors are grateful to the Center for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB) at Universitat Pompeu Fabra for providing the surface mesh used to generate the human ventricular volumetric mesh used in this study. The collaboration was supported by the VPH Network of Excellence and the euHeart Grant (FP7/2007–2013/224495) funded by the European Commission. Gianmauro Cuccuru, Giorgio Fotia, and Fabio Maggio would like to thank the Sardinian Regional Authorities for partially supporting their research work. The authors would like to thank anonymous reviewers for their comments, criticisms, and suggestions which were of valuable help to improve the paper.