Abstract

Diffuse optical tomography problems rely on the solution of an optimization problem for which the dimension of the parameter space is usually large. Thus, gradient-type optimizers are likely to be used, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, along with the adjoint-state method to compute the cost function gradient. Usually, the L²-inner product is chosen within the extraction procedure (i.e., in the definition of the relationship between the cost function gradient and the directional derivative of the cost function), while alternative inner products that act as regularization can be used. This paper presents some results based on space-dependent Sobolev inner products and shows that this method acts as an efficient low-pass filter on the cost function gradient. Numerical results indicate that the use of Sobolev gradients can be particularly attractive in the context of inverse problems, first because of the simplicity of this regularization, since a single additional diffusion equation is to be solved, and also because the quality of the solution varies smoothly with respect to the regularization parameter.

1. Introduction

Diffuse optical tomography (DOT) consists in reconstructing the spatial radiative property distributions of an optically thick participating (absorbing and scattering) medium from optical measurements collected at its border [1–3]. This inverse problem has seen a surge of interest since the mid-nineties, due to a better theoretical understanding of radiative transfer, improvements in light detection and in the control of light sources, and the increase in computing performance. This optical diagnostic technique is mainly driven by potential applications in biological tissue imaging, although it can be used for nondestructive testing in industrial processes [4, 5] or the characterization of complex semitransparent media such as cellular foams with spatially varying porosities or fiber-reinforced thermoplastic composites with inhomogeneities in the fiber distribution. In this domain, the characterization has often been limited to the reconstruction of constant absorption and scattering coefficients and of the anisotropy factor of the phase function, considering the unidimensional radiative transfer equation (RTE) as forward model [6].

Biomedical applications of the DOT include imaging of finger joints [7], breast imaging [8, 9], neuroimaging of humans [10–12], or small-animal imaging [13], to name but a few. In this context, near-infrared light is used in order to take advantage of the so-called therapeutic window (600–900 nm) in which tissues have low absorption. According to the application, the RTE has to be considered as forward model [14–19], the diffuse approximation (DA) may be applied [8, 12, 20–23], or a hybrid model may be employed [24–28]. This paper deals with two very different forward models, namely, the bidimensional steady-state RTE and the three-dimensional frequency DA, in order to gauge the potential of the introduced regularization with both models.

The evaluation of space-distributed physical properties from the knowledge of measurements is an inverse problem as opposed to forward problems where the state is computed from the knowledge of properties, boundary conditions, and so forth. Such inverse problems are usually difficult to solve due to their ill-posed nature [29].

The solution of the inverse problem is obtained through the minimization of a cost function that measures the misfit between the predictions (solution of the forward problem) and some experimental measurements. Though zero-order optimization methods may be used for relatively similar problems [31, 32], gradient-type methods are dealt with in this paper because of the large dimension of the parameter space. Due to the ill-posed nature of the inverse problem, unstable solutions may be obtained if no regularization is performed.

In order to cope with such instabilities, the ordinary Tikhonov approach consists in assuming that the properties are subject to a minimal norm in a given vectorial space, which leads to enhancing the a priori regularity of the properties [33, 34]. This approach relies on the penalization of the cost function, adding prior information on the properties to be recovered. However, the weight associated with the penalization function must be chosen carefully. Indeed, too small a value leads to a small cost function but with high property variations, while too high a value leads to small property variations about the priors but with a high resulting cost function value at the optimum. The compromise minimizing both the cost function and the property variations may be found through the construction of the so-called L-curve, as suggested by Hansen and O’Leary [35]. Even though, for appropriate penalization, the optimal Tikhonov parameter has been found to be quasi-independent of the parameter space dimension [36], the construction of such a curve, searching its convex corner location, is likely to necessitate a large number of inverse problem solutions for many penalization weights. Furthermore, it is now well known that the Tikhonov regularization is to be used only together with matrix-inversion-based optimizers, but not with BFGS-like optimizers, because the latter do not rely on matrix inversion [19, 23].
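In generic notation (the symbols below are illustrative, not the paper's own), the penalized Tikhonov functional balancing data misfit and prior deviation reads:

```latex
J_{\alpha}(\beta)
\;=\;
\underbrace{\tfrac{1}{2}\,\bigl\|\mathcal{M}(\beta) - d\bigr\|^{2}}_{\text{data misfit}}
\;+\;
\underbrace{\tfrac{\alpha}{2}\,\bigl\|\beta - \beta_{\mathrm{prior}}\bigr\|^{2}}_{\text{penalization}},
```

where $\mathcal{M}$ denotes the forward model, $d$ the measurements, and $\alpha > 0$ the penalization weight whose selection the L-curve procedure addresses.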

Appropriate parameterization of the space-dependent properties may also affect the solution regularity. As a matter of fact, the amount of measurement data is likely to be insufficient to retrieve the large number of parameters when considering, in the end, the discrete optimization problem on a fine grid. Thus, regularization by reduction of the dimension of the parameter space could be employed, which consists in searching for the physical properties projected on a coarse finite element grid, as suggested for instance in [37], and applied to optical tomography in [23].

This paper focuses on a different strategy to improve the reconstruction of space-distributed parameters when the data is corrupted by noise. The proposed strategy filters the effect of the noise, not on the data, as is done for instance when using the mollification method [29, 38], but on the cost function gradient itself. This means that the optimization is not modified at all, since the original cost function remains the same. It also means that the data is not modified: the whole information contained within the experimental data is integrated within the cost function to be minimized. As a matter of fact, several inner products are to be defined when considering the inverse optimization problem. According to Protas [39], each of them (cost function definition, adjoint identity, and gradient extraction after defining the space used in the relationship between the cost function derivative and its gradient) may yield a particular regularization opportunity. The noise present in the measurement data propagates through the adjoint problem and then to the cost function gradient from which the directions of descent are computed to supply the optimizer. The propagation of the noise induces high variations of the cost function gradient with respect to what the gradient would be without noise. The idea is thus to apply a low-pass filter to the cost function gradient exactly where the noise is the highest. This is done by choosing the space-dependent weighted Sobolev inner product definition when extracting the cost function gradient, rather than the ordinary L² Hilbert inner product. Note that space-independent Sobolev inner products have been used in other situations: in image processing (e.g., [40]), in partial differential equations (PDEs), in energy minimization (e.g., [41]), and in boundary condition reconstruction with accurate final time [42, 43].

The underlying application of this study is optical tomography, that is, the reconstruction of radiative properties from light intensity measurements on some parts of the boundary. The paper is therefore organized as follows. Section 2 gives the theoretical bases for extracting the cost function gradients when the cost function is expressed in terms of a state which is the solution of some PDEs. This section is written in a concise way, following [44], such that the results can easily be transposed to other practical applications. Also, only the continuous equations are dealt with, for the states, the adjoints, and the gradients. This “differentiate-then-discretize” approach has the advantage of versatility when considering afterwards different discretization choices (one for the states, one for the physical properties, etc.). The cost function gradient is then extracted, first in a general framework using the ordinary L²-inner product, then using space-independent, and finally space-dependent, Sobolev inner products. Section 3 presents the application of space-dependent Sobolev gradients to the inverse problem of optical tomography in which the forward model is the RTE. The continuous cost function gradient is fully derived and numerical tests are performed for several levels of noise. As the Sobolev filter is chosen to follow an exponential function, a single parameter fully describes the regularization. Next, in Section 4, a very different model, namely, the frequency DA in the three-dimensional case, is considered as forward model of light propagation. For the two models, the quality of the reconstructions is measured quantitatively with the aid of both deviation and correlation factors. Unquestionably, reconstructions are greatly improved with the space-dependent Sobolev gradients. Finally, Section 5 summarizes the main conclusions and perspectives of the current work.

2. Space-Dependent Sobolev Cost Function Gradient

Let us define the domain of interest, Ω, and denote the cost function expressed in terms of the state. This cost function may be expressed as half of the squared norm of a quadratic function based on the discrepancy between some state predictions and the related measurements. The norm is expressed with a complex inner product defined later on for specific applications. The cost function is actually to be minimized with respect to some space-distributed parameters, for example, the absorption and scattering coefficients (depending on the forward model considered). The generic cost function is given by (1).

The state is related to the parameters through an operator (that is likely to be nonlinear) combining the partial differential equations with the boundary conditions, initial condition, and so forth. To be concise, one writes down the state problem as (2).

In order to use efficient optimization algorithms for the large-scale inverse problems encountered in distributed parameter estimation, the cost function derivative, or rather the cost function gradient, has to be computed. To do so, the definition of the directional derivative of the cost function in a given direction is used.

Definition 1 (directional derivative). Let a point and a direction be given. The directional derivative of the cost function at this point in this direction is [45]
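In generic notation (the symbols are illustrative), this standard definition reads:

```latex
j'(\beta;\,\delta\beta)
\;=\;
\lim_{\varepsilon \to 0^{+}}
\frac{j(\beta + \varepsilon\,\delta\beta) - j(\beta)}{\varepsilon}.
```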

The application of the directional derivative (3) to the cost function (1) leads to equality (4) and, because of the linearity of the directional derivative with respect to the direction, to (5).

The inner product involved in the right-hand side of (5) will also be defined later on, when necessary. It is in this inner product that regularization is to be introduced.

The differentiation of the state problem in a given direction yields the directional derivative of the state, which can further be injected into the inner product (4). The state equation being given by (2), the derivative of the state in this direction is written as (6), where the tangent operator, that is, the Jacobian of the state operator with respect to the state, and the Jacobian with respect to the parameters are involved. Using this basic approach, one needs as many integrations of (6) as there are discrete parameters to access the full gradient for a fully discretized problem. When the dimension of the parameter space is large, which is likely to be the case when recovering space-distributed parameters, this approach becomes inefficient, even though the computation of the linear tangent operator can be reused for all directions of the canonical basis.

Following Lions [46] and Céa [47], the adjoint-state method computes the whole cost function gradient through the integration of a single additional adjoint-state problem. A new variable (the adjoint-state variable) is then introduced so that the cost function derivative also satisfies relationship (7), where, again, the inner product will be defined later on when necessary. Combining (6) and (7), the cost derivative is rewritten as (8).

The adjoint-state problem is then identified through (4) and (8), such that it satisfies equality (9).

Next, if the adjoint problem (9) is satisfied (which means that the adjoint state is accessible), then the cost function gradient is given by the inner product (7). In practice, the operators involved in (9) are transposed. The inner product property involving the transposed conjugate operator gives the adjoint equation, concisely written as (10).
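In generic notation (illustrative symbols: state $u$, parameters $\beta$, state operator $c$, adjoint variable $\lambda$), the adjoint-state machinery can be summarized as:

```latex
\begin{aligned}
&\text{state:}    && c(u,\beta) = 0,\\
&\text{adjoint:}  && \Bigl(\frac{\partial c}{\partial u}\Bigr)^{*} \lambda
                      = -\,\frac{\partial j}{\partial u},\\
&\text{gradient:} && \bigl\langle \nabla J,\,\delta\beta \bigr\rangle
                      = \Bigl\langle \lambda,\,
                        \frac{\partial c}{\partial \beta}\,\delta\beta
                        \Bigr\rangle \quad \forall\,\delta\beta,
\end{aligned}
```

so that a single additional linear adjoint solve replaces one tangent solve per discrete parameter.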

As reported by Protas [39], the gradient is identified in a given space for which the metric has been selected. To do so, the two following definitions are needed.

Definition 2 (L² Hilbert space). Consider (11). This functional space is associated with the inner product (12).

Definition 3 (H¹ Sobolev space). Consider (13). This functional space is associated with the inner product (14).
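For real-valued functions on the domain $\Omega$, these two inner products take the standard forms (written here for reference; the paper's own definitions may additionally involve complex conjugation):

```latex
(u, v)_{L^{2}(\Omega)} = \int_{\Omega} u\,v \,\mathrm{d}\Omega,
\qquad
(u, v)_{H^{1}(\Omega)} = \int_{\Omega}
\bigl( u\,v + \nabla u \cdot \nabla v \bigr)\,\mathrm{d}\Omega.
```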

Most often, the real L² inner product is used to extract the gradient. Combining (5) and (7) with the inner product of Definition 2 gives, after transposing within the right-hand side inner product, result (15).

This way of writing down the cost function gradient is the ordinary one, as first suggested by Lions [46] and Céa [47]. However, according to Protas [39], other inner product definitions should be considered, due to the poor scaling of the corresponding discrete optimization problem. As reported in [48], the incorporation of derivatives into the inner product, instead of using the L² inner product, has the effect of scale-dependent filtering and allows one to extract smoother gradients, thereby preconditioning the optimization process. In some other studies, solving partial differential equations through optimization (see, e.g., the articles of Mahavier [49], Danaila and Kazemi [41], and Majid and Sial [50]), in image processing (see, e.g., Renka [40]), in boundary condition reconstruction (see, e.g., Nassiopoulos and Bourquin [42] and Bourquin and Nassiopoulos [43]), or in initial state reconstruction [39], the H¹ Sobolev space is used for extracting the cost function gradient. This acts as a preconditioner modifying the direction of descent involved in the gradient-type optimization algorithm. It is shown, for instance, in the study of Danaila and Kazemi [41] that the cost function decrease when solving the PDE problem can be much higher with the H¹ inner product than with the ordinary L² one.

In the present study, the choice of another inner product comes from the fact that we are faced with noisy measurements. As a matter of fact, the measurements are noisy by nature. These measurements are involved in the cost function definition (1) through a difference with the predictions. The noisy measurements are thus also involved in the forcing term of the adjoint problem. The noise then propagates through the adjoint equation (10) and then to the cost function gradient (15).

In order to deal with the smoothing of the measurement noise that propagates into the adjoint system and then to the cost function gradient, the weighted Sobolev space is introduced.

Definition 4 (H¹ weighted Sobolev space inner product). Consider (16).

Remark 5. The weighting coefficient involved in (16) is the regularization parameter.

This strategy has been applied in bidimensional optical tomography applications based on the radiative transfer equation [19], where it was shown how the diffusion of the noise could attenuate fluctuations and thus give better reconstructions with a moderate value of the regularization parameter.

The strategy that is developed here goes much further: it consists in filtering the cost function gradient where, and only where, it is needed, that is, in the vicinity of the sensors, where the high-frequency fluctuations exist.

In applications considered here, that is, in optical tomography, sensors are located on the boundaries, so that the effect of the noise appears on the boundaries before being diffused through the adjoint-state equation. Thus, the idea is to choose a filtering function whose value is high on the boundary and that continuously decreases within the medium. To do so, one uses the distance function definition.

Definition 6 (distance function). Given a subset of the closure of the domain, the Euclidean distance function to this set is defined by (17).

Remark 7. As an example, the filtering function can be written as (18).

This makes the construction of the space-dependent weighted Sobolev space possible.

Definition 8 (H¹ space-dependent weighted Sobolev space inner product). Consider (19).
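A plausible form of such a space-dependent weighted inner product, with a weight function $\ell(\mathbf{x}) \geq 0$ built from the exponential of the distance to the boundary (an illustrative reconstruction, not necessarily the paper's exact definition), is:

```latex
(u, v)_{H^{1}_{\ell}(\Omega)}
= \int_{\Omega}
\bigl( u\,v + \ell(\mathbf{x})\,\nabla u \cdot \nabla v \bigr)\,\mathrm{d}\Omega,
\qquad
\ell(\mathbf{x}) = \ell_{0}
\exp\!\bigl( -\, d(\mathbf{x}, \partial\Omega) / \tau \bigr),
```

where $d(\mathbf{x}, \partial\Omega)$ is the distance function of Definition 6 and $\ell_{0}$, $\tau$ are hypothetical regularization constants.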

Remark 9. With the choice of the filtering function (18), a single scalar remains as the regularization parameter.

Using this inner product, and provided that the cost function derivative is a bounded linear functional of the direction, the Riesz representation theorem ensures that there exists a unique gradient such that (20) holds.

Taking into account (21), one obtains, by identifying terms, the diffusion equation (22) with a null flux boundary condition, that is, a homogeneous Neumann condition on the boundary of the domain. At this stage, the application being defined (measurements, state model, etc.), the norm is given accordingly, and the cost function gradient can be extracted.
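The construction above can be sketched in one dimension: the Sobolev gradient is obtained by solving a diffusion-like problem whose spatially varying coefficient follows an exponential filter in the spirit of Remark 7. All numerical values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def sobolev_gradient_1d(g, h, ell):
    """Smooth a (noisy) L2 gradient g by solving the 1D analogue of the
    space-dependent Sobolev extraction problem,
        (I - d/dx( ell(x) d/dx )) g_S = g,
    with null-flux (homogeneous Neumann) boundary conditions, using a
    conservative finite-volume scheme on a uniform grid of spacing h."""
    n = g.size
    face = 0.5 * (ell[:-1] + ell[1:])    # weight at the n-1 cell faces
    A = np.eye(n)
    for i in range(n):
        if i > 0:                        # flux through the left face
            A[i, i] += face[i - 1] / h**2
            A[i, i - 1] -= face[i - 1] / h**2
        if i < n - 1:                    # flux through the right face
            A[i, i] += face[i] / h**2
            A[i, i + 1] -= face[i] / h**2
    return np.linalg.solve(A, g)

# Illustrative setup (ell0 = 100 and tau = 0.1 are assumptions): the
# filter is large near the boundaries, where sensor noise enters the
# adjoint problem, and decays inside the medium.
x = np.linspace(0.0, 2.0, 201)
h = x[1] - x[0]
dist = np.minimum(x - x[0], x[-1] - x)   # distance to the boundary
ell = 100.0 * np.exp(-dist / 0.1)
rng = np.random.default_rng(0)
# Synthetic "gradient": smooth field plus boundary-localized noise.
g = np.sin(np.pi * x) + 0.5 * rng.standard_normal(x.size) * np.exp(-dist / 0.05)
g_smooth = sobolev_gradient_1d(g, h, ell)
```

Because the scheme is conservative and the flux is null at the boundaries, the mean of the gradient is preserved; only its high-frequency content near the sensors is damped.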

Remark 10. Due to the inclusion of the weighted Sobolev space in the L² space, the space-dependent weighted Sobolev gradient is indeed an acceptable direction of descent for the optimization problem that consists in minimizing (1).

3. Application on 2D RTE-Based Optical Tomography

3.1. Inverse Problem Statement

The inverse problem consists mathematically in minimizing a cost function which quantitatively measures the discrepancy between some optical measurements and related predictions over a set of sources, sensors, and angles of directional integration. In optical tomography, taking into account the reflection at the interface and assuming that the medium is convex, the measurable quantity is the exitance, or a portion of it in a given solid angle centered on a given direction. The related prediction can be written as (23), where the radiative intensity and the Fresnel reflection factor are involved. In what follows in this section, the border of the medium is assumed to be transparent, that is, the Fresnel reflection factor is null. Moreover, the solid angle is assumed to be sufficiently small so that the related approximation holds. Therefore, the radiative intensity measurements can be assessed, and the cost function (24) uses, according to the schematic description of the problem (Figure 1), a specific Euclidean norm, where one index refers to the source (test) number, another refers to the detection node, and a third refers to the direction. In (24), the cost function explicitly expressed in terms of the prediction is distinguished from the cost function implicitly expressed in terms of the parameters. This latter function is often called the reduced cost function. According to (1), one has a straightforward correspondence between the two. Note that the scalar in (24) is introduced in order to avoid an infinite Dirichlet boundary condition in the adjoint-state problem.
For a given source configuration, the prediction at a given sensor and direction is given through the solution of the forward steady-state radiative transfer problem formulated as (25), where the considered direction of propagation, the phase function representing the probability that a photon arriving from one direction is scattered into another, the space-dependent absorption and scattering functions, and the indicator function are involved.

3.2. Cost Function Gradient Derivation

The computation of the cost function gradient follows the generic methodology detailed in the previous section. The cost function derivative in a given direction, obtained by differentiating (24), is given by (26).

In order to derive the cost gradient for this specific application, dedicated inner products are defined.

Definition 11. Consider (27).

This inner product satisfies Properties 1 and 2, both being useful for the cost function gradient derivation.

Property 1. Consider (28).

Property 2. Consider (29).

The generic equation for the cost gradient (15), applied to both space-distributed parameters (absorption and scattering), gives (30) for the first direction and (31) for the second.

In these two equations, the direction is split into two terms in order to be able to compute the cost function gradient with respect to each of the absorption and scattering functions separately.

On one hand, (30) is rewritten as (32) and, on the other hand, Property 2 enables rewriting (31) as (33).

Besides, the differentiation of the forward model enables rewriting the previous equations too, for the parameter being either the absorption or the scattering function; see (34).

Properties 1 and 2 enable rewriting (34) as (35).

The adjoint problem (36) is eventually obtained by identifying (26) with (35); the set of the pointwise sensors is involved in its source term.

3.3. Numerical Results

The optimization relies on the BFGS algorithm described by Liu and Nocedal [51], coupled with a quadratic interpolation line search [52]. Generally speaking, this combination has proved particularly effective in recent years in the area of inversion (see [53–55] to name but a few), owing to its efficiency on nonlinear minimization problems [56].
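This is not the authors' own implementation, but SciPy's L-BFGS-B routine wraps the limited-memory BFGS of Liu and Nocedal and gives a minimal sketch of such a gradient-driven inversion loop; the quadratic cost below is a stand-in for the real misfit, and `target` is a hypothetical parameter vector.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the misfit: J(p) = 0.5 * ||p - target||^2,
# whose gradient is simply p - target (in a real inversion the gradient
# would come from the adjoint-state solve).
target = np.array([0.1, 0.01, 0.25])

def cost(p):
    return 0.5 * np.sum((p - target) ** 2)

def grad(p):
    return p - target

# L-BFGS-B is the limited-memory BFGS of Liu and Nocedal; the bounds
# keep the recovered properties nonnegative (an assumed constraint).
res = minimize(cost, x0=np.zeros(3), jac=grad, method="L-BFGS-B",
               bounds=[(0.0, None)] * 3)
```

In the paper's setting, `grad` would instead return the (Sobolev-filtered) adjoint-based gradient of the radiative misfit.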

The two-dimensional domain is a square of 2 cm side. Four collimated sources are located at the center of each side, while sensors are placed around the sources towards twenty directions. The target properties to be reconstructed, the absorption and scattering coefficients, are defined by a background and two square inclusions, each with its own absorption and scattering values. Both the forward and the adjoint models are solved using discontinuous Galerkin finite elements, for which the mesh convergence has been validated. Three distinct meshes have been created: one in order to build the synthetic data before adding random noise following Klose [57] (3 721 nodes), one in order to project the forward and adjoint-state variables (1 681 nodes), and a coarser one associated with the property functions, with the aim of implicitly regularizing the inverse problem. Seeking to gauge the accuracy of the reconstructions, the deviation factor and the correlation factor are used [58]. The former measures the deviation of the reconstructed image from the target image: a small value indicates a reconstructed image with high accuracy. The latter measures the linear correlation between the target and the reconstructed image: a large value shows a high detectability in the reconstructed image and indicates a reconstructed image with high accuracy. These errors are defined as in [15], for the parameter being either the absorption or the scattering coefficient, by (37), where the dimension of the parameter space (the number of finite element nodes), the nodal values, and the mean values of the target and reconstructed images are involved. The standard deviations of the target and reconstructed images are given by (38).
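A minimal sketch of these two quality metrics, assuming the common normalizations (RMS error scaled by the target's standard deviation, and the Pearson coefficient); the exact formulas in [15, 58] may differ in detail:

```python
import numpy as np

def deviation_factor(target, rec):
    """RMS difference between reconstructed and target images,
    normalized by the standard deviation of the target: small values
    indicate an accurate reconstruction."""
    rmse = np.sqrt(np.mean((rec - target) ** 2))
    return rmse / np.std(target)

def correlation_factor(target, rec):
    """Pearson linear correlation between target and reconstructed
    images: values close to 1 indicate high detectability."""
    t = target - target.mean()
    r = rec - rec.mean()
    return float(np.sum(t * r) / np.sqrt(np.sum(t**2) * np.sum(r**2)))

# Toy example with a 4-node "image" (values are arbitrary).
tgt = np.array([1.0, 2.0, 3.0, 4.0])
rec = np.array([1.1, 1.9, 3.2, 3.8])
```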

Numerical tests were performed for different levels of noise. First, a small noise of 20 dB is added to the synthetic data (this corresponds to 1% of noise). Then, a moderate noise of 15 dB is added to the data (approximately 3% of noise). Finally, the highest level of noise of 10 dB is considered (10% of noise).
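The dB-to-percent correspondences quoted above follow from the usual power-ratio definition of the signal-to-noise ratio; a minimal sketch (the multiplicative Gaussian noise model is an assumption, consistent with the description in Section 4.3):

```python
import numpy as np

def noise_fraction(snr_db):
    """Noise-to-signal power ratio for a given SNR in dB:
    20 dB -> 1%, 15 dB -> ~3%, 10 dB -> 10%."""
    return 10.0 ** (-snr_db / 10.0)

def add_multiplicative_noise(data, snr_db, rng):
    """Perturb each datum by a zero-mean Gaussian relative error whose
    standard deviation matches the requested SNR (amplitude ratio)."""
    sigma = 10.0 ** (-snr_db / 20.0)
    return data * (1.0 + sigma * rng.standard_normal(data.shape))

noisy = add_multiplicative_noise(np.ones(100_000), 20.0,
                                 np.random.default_rng(1))
```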

Figure 2 presents the evolution of dimensionless cost function values with respect to iterations. Note that the main stopping criterion used has been based on the cost function stabilization. As expected, it is seen from this figure that the higher the level of noise, the higher the cost function value at stabilization. Moreover, the convergence to the solution is slightly faster with Sobolev gradients than with ordinary ones, even though, at the very end, the value of the cost function is lowest for the ordinary gradient.

Note that the BFGS optimizer is not based on matrix inversion, as is the case for the Gauss-Newton method and its variants. It has been shown in other works that such a choice implicitly regularizes the inverse problem. Another regularization tool comes from the parameter mesh, which is chosen sufficiently coarse to avoid the presence of too many local minima [37]. These tools make the reconstructions possible without the need to add a penalization term to the cost function.

Note also that the choice of the stopping criterion plays a crucial role in the effectiveness of the algorithm: a too weak stopping criterion may lead to a useless solution, while a too severe stopping criterion may lead to a divergent solution. In fact, the stopping criterion based on cost function stabilization is another implicit regularization that is to be used in practical applications. The determination of an appropriate value for this criterion is a challenging task in theory because it highly depends on the application. For the needs of this paper, the value has been determined empirically.

Figures 3, 4, and 5 present the evolutions, with respect to iterations, of the deviation and correlation factors for both the absorption and scattering parameters, for the three levels of noise. These results show that, when compared to ordinary gradients, the use of space-dependent Sobolev gradients decreases deviation factors and increases correlation factors. The optimum regularization parameter is about 100 in this particular case: when it is too small, there is too much gradient diffusion, while when it is too high, the ordinary gradient is recovered.

Figures 6 and 7 corroborate these conclusions. The Sobolev gradients filter the high-frequency fluctuations and avoid the propagation of the noise to the cost function gradient and then to the reconstructions. It is observed that this optimal regularization parameter gives the best reconstructions as regards smoothness and precision.

4. Application on 3D DA-Based Optical Tomography

4.1. Inverse Problem Statement

The radiative transfer equation (RTE) provides a model of light propagation valid in most participating (absorbing and scattering) media as long as the independent scattering regime is fulfilled [59]. For specific applications, approximations of the RTE may be applied. Solutions of the RTE being computationally expensive, especially if the third dimension has to be considered, approximate models should be employed when valid. Among the different models approaching the RTE, the diffuse approximation (DA) is the privileged one. The latter, which describes the photon density field inside the medium, is given in the frequency domain by (39) [1, 60]. Here, the Fourier transforms at the considered frequency of the internal source of radiation [W cm−3], of the temporal signal associated with each diffuse source [W cm−2], and of the photon density field are involved, together with the diffusion coefficient [cm]; the dimension of the domain; the absorption and reduced scattering coefficients [cm−1], the latter being built from the scattering coefficient as previously defined and from the asymmetry factor, which equals the average cosine of the scattering angle; the speed of light in the medium (constant) [cm s−1]; the unit normal vector to the boundary; a constant parameter depending on the dimension; and a parameter which characterizes the reflection on the border, which can be derived from the Fresnel laws if specular reflection is considered [61, 62].
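A standard frequency-domain form of the DA with a Robin boundary condition (a generic reconstruction consistent with the description above, not necessarily the paper's exact equation (39)) is:

```latex
\begin{aligned}
-\nabla \cdot \bigl( D \,\nabla \varphi_{s} \bigr)
  + \Bigl( \mu_a + \frac{\mathrm{i}\omega}{c} \Bigr)\,\varphi_{s}
  &= q_{s} && \text{in } \Omega,\\
\varphi_{s} + 2\,A\,D\,\frac{\partial \varphi_{s}}{\partial n}
  &= g_{s} && \text{on } \partial\Omega,
\qquad D = \frac{1}{d\,(\mu_a + \mu_s')},
\end{aligned}
```

with $\varphi_{s}$ the photon density for source $s$, $q_{s}$ and $g_{s}$ the internal and boundary source terms, $\omega$ the modulation frequency, $c$ the speed of light, $A$ the reflection parameter, and $d$ the dimension.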

Similarly to Section 3.1, the inverse problem consists in solving the optimization problem (40), where the cost function is given by (41) and where one index refers to the source number and another to the detection node. It should be noted that, under the assumption of the DA, the measurable quantity (23) is related to the photon density through a proportionality relationship [60]. In the remainder of the section, fixed values of the source terms, corresponding to a harmonic temporal signal, are considered for all sources.

4.2. Cost Function Gradients Derivation

The cost function gradients with respect to the absorption and reduced scattering coefficients have been computed in [23] with the help of a continuous Lagrangian formulation. In terms of (5), it can be shown that the directional derivatives of the cost function are equal to (42), where the real part operator is involved and where the adjoint variable involved in (15) is the solution to the adjoint problem (43).

Therefore, if the L² inner product is used to extract the cost function gradient in (5), the expressions for the continuous cost function gradients are given by (44). Note that a nondimensionalization of the parameters by a priori radiative properties is used in the inversion process [23].

4.3. Numerical Results

The same algorithm as employed in Section 3 is used to reconstruct the target properties from initial estimates equal to the background values. The domain considered is a cube of 5 cm side. The target properties to be reconstructed, the absorption and reduced scattering coefficients, are defined by (45), where the supports of the two inclusions are specified.

The stopping criteria are based on the cost function stabilization, with a critical value and a maximum total number of iterations both tuned empirically. As in Section 3.3, these two criteria act as implicit regularizations in order to get a low enough cost function value while avoiding a divergent solution.

Physical properties involved in the forward model (39), namely, the modulation frequency, the speed of light, and the reflection parameter, are fixed. Synthetic data are considered for these numerical tests. These data, which represent pseudoexperimental measurements, are built on a finer mesh (132 651 nodes) than the one that generates the predictions (68 921 nodes), in order to avoid the inverse crime [60, 63]. Then, a 10 dB multiplicative white Gaussian noise is applied to the data at the nodes of the sensors to simulate the noise inherent to experimental devices. Synthetic data, state, and parameter meshes can be seen in Figure 8. It can be seen from Figure 8 that 6 sources and 24 sensors, of 0.5 cm2 each (containing 16 discretization points), are used for this particular test. A reduction of the parameter space dimension is employed to improve the quality of the reconstructions obtained with the BFGS algorithm [23]: the mesh associated with the radiative properties contains only 9 261 nodes. Note that the total number of complex measurements is equal to 2 304.

Figure 9 presents the evolutions, with respect to iterations, of the deviation and correlation factors for both parameters. It is seen from this figure that, when compared to ordinary gradients, the use of space-dependent Sobolev gradients decreases deviation factors and increases correlation factors. The optimum regularization parameter is about 10 in this particular case. Figure 10 corroborates these conclusions: the best reconstructions, in terms of smoothness and accuracy, are obtained for a value of the regularization parameter equal to 10. Finally, remark that Figure 9 allows observing the establishment of a cross talk effect [64, 65]: in two of the tested configurations, it is seen that the deviation factor of the absorption coefficient decreases with the iterations, while the deviation factor of the reduced scattering coefficient increases. In these two cases, values of the cost function decrease with the iterations, while an improvement of the absorption coefficient reconstruction and a deterioration of the reduced scattering coefficient reconstruction are observed. This effect is prevented by the use of space-dependent Sobolev gradients with an appropriate regularization parameter weight.

5. Conclusion

In this paper, inverse models based on the BFGS algorithm have been developed to solve optical tomography problems based on two different forward models: the bidimensional steady-state radiative transfer equation and the three-dimensional frequency diffuse approximation. A Sobolev filter function was defined and space-dependent Sobolev inner products were used when extracting the cost function gradients instead of the L²-inner product. Two test cases consisting in reconstructing discontinuous radiative properties in a square and in a cube have been considered. The numerical results obtained have shown that, for all levels of noise and all values of the regularization parameter tested, the use of Sobolev gradients drastically enhances the quality of the reconstructions. Also, it has been observed that the space-dependent Sobolev gradients can prevent the establishment of a cross talk effect in the case of the three-dimensional frequency diffuse approximation based inversion.

Generally speaking, due to their simplicity of implementation and their effectiveness in locally filtering measurement noise, which provides better reconstructions, space-dependent Sobolev gradients appear to be particularly attractive in the context of inverse problems. In the near future, the anisotropy factor will be integrated into the inverse problem based on the radiative transfer equation, and the use of a logarithmic cost function will be considered, as our more recent results suggest. Later, numerical inversion algorithms for the three-dimensional radiative transfer equation and the diffuse approximation model will be integrated into a global numerical code. The latter will be based on the BFGS algorithm with the use of nondimensionalization of the radiative properties, reduction of the parameter space dimension, space-dependent Sobolev gradients, and a wavelet multiscale approach, as developed in another work.

Additional Points

Note that the proposed methods have been implemented within the FreeFem++ environment [66].

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper or the funding that they have received.

Acknowledgments

The authors thank the Centre Régional de Calcul Intensif des Pays de la Loire (CCIPL), financed by the French Research Ministry, the Région Pays de la Loire and the University of Nantes, for allowing computations on their supercomputers. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding this research.