Research Article | Open Access
Comparative Study of Metaheuristics for the Curve-Fitting Problem: Modeling Neurotransmitter Diffusion and Synaptic Receptor Activation
Synapses are key elements in the transmission of information in the nervous system. Among the different approaches used to study them, computational simulation is identified as the most promising technique. Simulations, however, do not provide generalized models of the underlying biochemical phenomena, but rather a set of observations, or time-series curves, displaying the behavior of the synapse in the scenario represented. Finding a general model of these curves, such as a set of mathematical equations, would be an important achievement in the study of synaptic behavior. In this paper, we present an exploratory analysis in which selected curve models are proposed, and state-of-the-art metaheuristics are used and compared to fit the free coefficients of these curves to data obtained from simulations. Experimental results demonstrate that several models can fit these data, though a deeper analysis from a biological perspective reveals that some are better suited for this purpose, as they represent the biological process more accurately. Based on the results of this analysis, we propose a set of mathematical equations and a methodology adequate for modeling several aspects of biochemical synaptic behavior.
Most information in the mammalian nervous system flows through chemical synapses. These are complex structures comprising a presynaptic element (an axon terminal) and a postsynaptic element (a dendritic spine, a dendritic shaft, an axon, or a soma) separated by a narrow gap known as the synaptic cleft (see Figure 1). The neurotransmitter is stored in synaptic vesicles located in the presynaptic terminal. For release to take place, the membrane of one or more vesicles must fuse with a region of the presynaptic membrane, the active zone, which lines the synaptic cleft. On the opposite side, the postsynaptic membrane is populated by specific receptors.
(a) Electron microscopy image
(b) Schematic representation
Multiple factors influence the diffusion of neurotransmitter molecules and their interaction with specific receptors [1–3]. The initial concentration of the released neurotransmitter in the extracellular space depends on the volume of the synaptic cleft. The subsequent diffusion of neurotransmitter molecules outside the cleft may be influenced by the geometrical characteristics of the membranes that surround the synaptic junction, and by the presence and concentration of transporter molecules. However, direct observation of the various synaptic events at the molecular and ultrastructural levels in vivo or in vitro is rather difficult, if not impossible. Simulation approaches are thus useful to assess the influence of different parameters on the behavior of the synapse (e.g., [4, 5]).
Simulation approaches in neuroscience have considered different models, scales, and techniques, according to the phenomenon being studied. Biochemical processes, such as neurotransmitter diffusion, require Monte Carlo particle-based simulators like MCell [6, 7], ChemCell, or Smoldyn [9, 10]. These simulation techniques allow computational neuroscientists to reproduce biological processes in a manner in which they can be thoroughly observed, systematically analyzed, and easily adapted and/or modified. The results of these simulations can be studied to extract general conclusions and infer basic properties of the natural processes under study.
In the case of chemical synapses, many aspects of their behavior during neurotransmitter release and diffusion can be simulated. Simulations, however, do not provide generalized models of these time series; they show only the specific values observed in each run, which are, in addition, subject to the stochastic nature of the Monte Carlo simulations used to produce them. Having general, empirical models of these time series, such as a set of mathematical equations, could be an important achievement in the study of chemical synaptic behavior. Finding these equations, however, is not a trivial task.
Having the raw simulation data at hand, the first step in finding these equations is to identify the type of mathematical expression we are looking for. A reasonable approach is to study the underlying physicochemical processes that take place during synapse operation (e.g., molecular diffusion and receptor kinetics) and try to infer the synaptic equations from them analytically. This is, however, sometimes not possible (as explained in detail in Section 2.1). When this is the case, an exploratory analysis can be attempted, testing different general mathematical models against the empirical data and trying to find the one with the best adjustment. This becomes, in essence, a complex curve-fitting problem, in which a set of general equations is fitted to experimentally obtained data.
This problem is, in its general form, an optimization task, to which different search strategies from the field of metaheuristics have been successfully applied in the past. These search techniques, with Evolutionary Algorithms (EAs) among their most relevant exponents, have shown over the last decades their ability to deal with optimization problems ranging from purely theoretical mathematical functions (benchmarks) to real-world problems in many different domains: biology [13–17], engineering [18–21], and logistics [22–25], to name a few. Furthermore, different metaheuristics have been previously applied to related curve-fitting problems [26–29], which motivates the use of this type of algorithm for our particular problem.
2. Materials and Methods
2.1. Synaptic Curves
We analyzed simulations based on simplified models of excitatory synapses where AMPA receptors were present and the neurotransmitter involved was glutamate. (The AMPA receptor is an ionotropic receptor commonly found in excitatory synapses that use glutamate as neurotransmitter. It is named after its specific agonist alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid.) We based our study on a corpus of synaptic simulations very similar to the one presented in previous work. Our corpus consisted of 500 different synaptic configurations, each simulated 500 times (to incorporate stochastic variability), resulting in a total of 250,000 simulations. As in that work, geometrical parameters were taken into account to incorporate synaptic variability, and simulations were performed using the MCell software on the Magerit supercomputer. Each one of the 500 synaptic configurations represented a unique synapse, morphologically different from the others. Simulations studied neurotransmitter diffusion and synaptic receptor behavior in each configuration. After the synaptic simulations were performed, two separate (but related) aspects of synaptic behavior were considered:

(i) The average variation of neurotransmitter concentration in the synaptic cleft over time, after a single vesicle release.

(ii) The average variation of the total number of synaptic receptors in the open state over time, again after a single neurotransmitter vesicle release. (Molecular kinetics of synaptic receptors follow Markov-chain-like models, with several possible states and transition probabilities depending on environmental factors and chemical reaction rate constants. The most relevant state in the AMPA receptor is the open state. It is directly related to the electrical synaptic response.)
These two aspects were plotted as time-series curves for each of the 500 synaptic configurations, resulting in a total of 1,000 curves (500 glutamate concentration curves and 500 open AMPA receptor curves), each of them being an average of 500 simulations of the same synaptic configuration. An example of these curves can be seen in Figure 2. The curves obtained were consistent with previous studies [32–35]. Each one of these 1,000 curves represented the behavior of a specific synapse, morphologically different from the others. Each curve became a separate curve-fitting problem, addressed using the metaheuristics described in Section 2.2.
(a) Glutamate concentration curve
(b) Open AMPA receptors curve
2.1.1. Glutamate Concentration Curves
In the case of the glutamate concentration (Figure 2(a)), all curves showed an initial peak (maximum concentration at the moment of vesicle liberation), followed by what very closely resembled a typical exponential decay. To understand this curve, we referred to Fick’s second law of diffusion (derived by the German physiologist Adolf Fick in 1855), which predicts how diffusion causes the concentration of a substance to change with time. For two or more dimensions, this law states that

∂Cₛ/∂t = Dₛ∇²Cₛ, (1)

where Cₛ is the concentration of a given substance s (glutamate in our case) at location x and time t, and Dₛ is the coefficient of diffusion of substance s. This is also called the heat equation and has no analytic solution for our specific problem, due to the three-dimensional synaptic geometry. (The heat equation can only be solved analytically for a few idealized unidimensional problems.) It is known, however, that its solution must follow a typical exponential decay. In exponential decay processes, the quantity N of a substance decreases in the following way:

dN/dt = −N/τ, (2)

where τ is the mean lifetime or exponential time constant of the process. We solve (2) as follows:

dN/N = −dt/τ. (3)
We integrate both sides:

ln N = −t/τ + k. (4)

We now apply e^( · ) to both sides:

N = eᵏ · e^(−t/τ). (5)
Since eᵏ is constant, we rename it to N₀ and we finally obtain a solution to (2) in the following form:

N(t) = N₀ e^(−t/τ), (6)

where N₀ is the initial quantity, since at t = 0, e⁰ = 1 and N = N₀. We know that the solution to Fick’s second law of diffusion (1) must follow an exponential decay (2). Therefore, it must be in the form of (6). The neurotransmitter diffuses through a certain volume: the synaptic cleft. As explained, the neurotransmitter concentration at every point of this volume changes following an exponential decay (6). We propose a new function C̄(t) as the average concentration of neurotransmitter in this volume, and we assume that it also follows an exponential decay. We define C̄(t) with the following equation:

C̄(t) = C̄₀ e^(−t/τ), (7)

where C̄₀ is the initial average concentration at t = 0. Since the number of neurotransmitter molecules released (vesicle size) and the volume of the synaptic cleft are known (they are part of the synaptic configuration simulation parameters), C̄₀ can be easily calculated beforehand. Please remember that C̄(t) is the average concentration of s in a defined volume, as opposed to Cₛ, which is the exact concentration at position x. C̄(t) is the value measured in our simulation experiments (shown, e.g., in Figure 2(a)).
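As a quick numerical sanity check (a standalone sketch with arbitrary example values, N₀ = 100 and τ = 2, not taken from the simulations), the closed-form solution (6) indeed satisfies the decay equation (2):

```python
import math

def N(t, N0=100.0, tau=2.0):
    # Closed-form solution of dN/dt = -N/tau with N(0) = N0
    return N0 * math.exp(-t / tau)

# Central finite difference approximation of dN/dt at t = 1.5
t, h, tau = 1.5, 1e-6, 2.0
dN_dt = (N(t + h) - N(t - h)) / (2 * h)
expected = -N(t) / tau  # right-hand side of the decay equation

assert abs(dN_dt - expected) < 1e-6
```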
The value of τ in (7), however, cannot be analytically calculated. From the previously mentioned link between (1) and (2), we deduce a connection between each side of both equations, concluding that

(i) ∂Cₛ/∂t relates to dN/dt (left side of both equations);

(ii) Dₛ∇²Cₛ relates to −N/τ (right side of both equations).
From the second item, we deduce that the constant τ should somehow be calculated from Dₛ and some unknown aspects of the synaptic geometry, which are factored inside the ∇² term and cannot be calculated analytically for an arbitrary 3D problem. From Dₛ and τ we derive a new coefficient λ, so that

τ = λ/Dₛ. (8)
The coefficient λ represents the effects of geometry, separated from the substance diffusion coefficient (Dₛ). For our problem, λ cannot be determined analytically and needs to be estimated using numerical and/or statistical methods. We incorporate (8) into (7) and obtain

C̄(t) = C̄₀ e^(−(Dₛ/λ)t), (9)

which is the final expression of our neurotransmitter (glutamate) concentration time-series model. The problem of estimating the value of λ still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the value of λ that most accurately models the results observed during these experiments.
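Assuming the model takes this single-exponential form, the geometry coefficient can even be estimated with a simple log-linear least-squares fit, since taking logarithms makes the model linear in t. The sketch below uses synthetic data with arbitrary example values for D, λ, and the initial concentration (these are illustrative assumptions, not values from the simulation corpus):

```python
import math, random

def fit_lambda(times, concentrations, D):
    """Estimate lambda for C(t) = C0 * exp(-(D / lam) * t) by fitting a
    straight line to (t, ln C): the slope equals -D / lam."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -D / slope

random.seed(42)
D, lam, C0 = 0.3, 1.5, 8.0                      # hypothetical example values
ts = [0.1 * i for i in range(1, 50)]
# Noisy synthetic samples of the exponential decay model
cs = [C0 * math.exp(-(D / lam) * t) * (1 + random.gauss(0, 0.01)) for t in ts]
lam_hat = fit_lambda(ts, cs, D)
```

In practice a direct nonlinear fit (as done in the article) is preferable, since log-transforming reweights the noise; the log-linear version is only a cheap starting point.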
2.1.2. AMPA Open Receptors Curves
All open AMPA receptors curves showed a rapid climb to a single peak followed by a long tail, slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, the peak, and the initial descent) contains the most relevant information, namely, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and the synaptic geometry.
As we have already established, neurotransmitter concentration cannot be predicted analytically (Fick’s second law cannot be solved analytically in two or more dimensions); therefore, open AMPA receptors curves cannot be predicted analytically either (since they depend on neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.
Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previously published analysis. Our work differs from that study in that the problem was there tackled only by means of a single approximate technique, which we use here as a baseline against which to compare different global optimization metaheuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From that study, we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):

f(x) = (p₁x⁴ + p₂x³ + p₃x² + p₄x + p₅) / (x⁴ + q₁x³ + q₂x² + q₃x + q₄). (10)

(ii) 8-term Fourier series (18 coefficients):

f(x) = a₀ + ∑ᵢ₌₁⁸ [aᵢ cos(iωx) + bᵢ sin(iωx)]. (11)

(iii) 8-term Gauss series (25 coefficients):

f(x) = ∑ᵢ₌₁⁸ aᵢ e^(−((x−bᵢ)/cᵢ)²). (12)

(iv) 2-term exponential function (4 coefficients):

f(x) = a e^(bx) + c e^(dx). (13)

(v) 9th degree polynomial (10 coefficients):

f(x) = p₁x⁹ + p₂x⁸ + ⋯ + p₉x + p₁₀. (14)
To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data we observed during simulations. We need, therefore, to fit every curve to the data and see how they perform.
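For illustration, the five candidate models can be written down as plain functions of a time value x and a flat coefficient list c. The concrete algebraic forms below follow the usual curve-fitting toolbox conventions and are our assumption; the article itself only fixes the model names and coefficient counts:

```python
import math

# Candidate models for the open-receptor curves. Each takes a scalar x and
# a flat coefficient list c (packing order is an illustrative choice).

def rat44(x, c):        # 5 numerator + 4 denominator coefficients
    num = sum(c[i] * x ** (4 - i) for i in range(5))
    den = x ** 4 + sum(c[5 + i] * x ** (3 - i) for i in range(4))
    return num / den

def fourier8(x, c):     # a0, w, then 8 (a_i, b_i) pairs
    a0, w = c[0], c[1]
    total = a0
    for i in range(1, 9):
        a, b = c[2 * i], c[2 * i + 1]
        total += a * math.cos(i * w * x) + b * math.sin(i * w * x)
    return total

def gauss8(x, c):       # 8 terms of a_i * exp(-((x - b_i) / c_i)^2)
    return sum(c[3 * i] * math.exp(-((x - c[3 * i + 1]) / c[3 * i + 2]) ** 2)
               for i in range(8))

def exp2(x, c):         # a*exp(b*x) + c*exp(d*x)
    return c[0] * math.exp(c[1] * x) + c[2] * math.exp(c[3] * x)

def poly9(x, c):        # 10 coefficients, highest degree first
    return sum(c[i] * x ** (9 - i) for i in range(10))
```

With this representation, every model is evaluated the same way, so the fitness function (Section 2.3) and the optimizers (Section 2.2) can treat the coefficient list as a generic real-valued search point.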
2.2. Algorithms Used in the Comparison
This section reviews the algorithms considered for the previously described curve-fitting problems, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.
2.2.1. Baseline Algorithm: Nonlinear Least Squares
The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB or R. Loosely speaking, this algorithm first approximates the model by a linear one and then refines the parameters by successive iterations. This method was successfully used in a seminal work and has thus been selected as the baseline algorithm against which to test the performance of several metaheuristics, which are reviewed in the following sections.
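A minimal sketch of this linearize-and-iterate idea, applied to the simple model y ≈ a·e^(bt) (this is plain Gauss–Newton, without the damping that production NLLS implementations such as Levenberg–Marquardt add; the data values are arbitrary examples):

```python
import math

def gauss_newton_exp(ts, ys, a, b, iters=20):
    """Fit y ~ a*exp(b*t): linearize around the current (a, b), solve the
    2x2 normal equations for the update, and iterate."""
    for _ in range(iters):
        J1 = [math.exp(b * t) for t in ts]           # d(model)/da
        J2 = [a * t * math.exp(b * t) for t in ts]   # d(model)/db
        r = [y - a * math.exp(b * t) for t, y in zip(ts, ys)]  # residuals
        # Normal equations (J^T J) * delta = J^T r, solved by Cramer's rule
        s11 = sum(v * v for v in J1)
        s12 = sum(u * v for u, v in zip(J1, J2))
        s22 = sum(v * v for v in J2)
        g1 = sum(u * v for u, v in zip(J1, r))
        g2 = sum(u * v for u, v in zip(J2, r))
        det = s11 * s22 - s12 * s12
        a += (g1 * s22 - g2 * s12) / det
        b += (s11 * g2 - s12 * g1) / det
    return a, b

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-1.3 * t) for t in ts]   # noiseless synthetic data
a, b = gauss_newton_exp(ts, ys, a=1.0, b=-1.0)
```

On noiseless data with a reasonable starting point the iteration recovers (a, b) = (2, −1.3) to machine precision; the sensitivity to the starting point is exactly why Section 3 reports runs where NLLS gets trapped in poor local optima.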
2.2.2. Selected Metaheuristics
Genetic Algorithm. Genetic Algorithms (GAs) are one of the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs in both the continuous and the discrete domains. For our experiments, we have used a Real-Coded GA with a real-parameter crossover operator and Gaussian mutation. Different population sizes and operator probabilities were tested, as described in Section 3.
Differential Evolution. Differential Evolution (DE) is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it only has three free parameters to configure) have gained DE a lot of popularity in recent years, and it is being used, both in its canonical form and in more advanced flavors (as we will see below), in many different complex optimization scenarios. For our experiments we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
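The canonical scheme can be sketched in a few lines (a simplified rand/1 DE with exponential crossover and bound clamping; the parameter values here are arbitrary examples, not the tuned settings of Section 3.1):

```python
import random

def de_optimize(f, bounds, NP=20, F=0.7, CR=0.9, gens=200, seed=1):
    """Compact canonical DE (rand/1/exp) minimizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(NP):
            a, b, c = rng.sample([j for j in range(NP) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Exponential crossover: copy a contiguous (wrapping) block of
            # mutant genes, starting at a random position
            trial = pop[i][:]
            d = rng.randrange(dim)
            copied = 0
            while True:
                trial[d] = min(max(mutant[d], bounds[d][0]), bounds[d][1])
                d = (d + 1) % dim
                copied += 1
                if copied >= dim or rng.random() >= CR:
                    break
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(NP), key=lambda j: fit[j])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = de_optimize(sphere, [(-5.0, 5.0)] * 5)
```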
Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm does not use fixed F and CR parameters but, instead, allows the algorithm to adjust their values with a certain probability, controlled by two additional parameters.
Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. The Generalized Opposition-Based Differential Evolution (GODE) algorithm uses the Opposition-Based Learning concept, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population, but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter to control the probability of applying the opposition-based search.
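The core of the opposition idea can be sketched as follows (a simplified reading: each candidate is reflected through the interval currently spanned by the population, scaled by a random factor k, which is the "generalized" part):

```python
import random

def generalized_opposite(pop, rng):
    """Generalized opposition-based learning sketch: for each dimension d,
    the opposite of x_d is k * (min_d + max_d) - x_d, where min_d/max_d are
    taken over the current population and k is random in [0, 1]."""
    dim = len(pop[0])
    mins = [min(x[d] for x in pop) for d in range(dim)]
    maxs = [max(x[d] for x in pop) for d in range(dim)]
    k = rng.random()
    return [[k * (mins[d] + maxs[d]) - x[d] for d in range(dim)] for x in pop]

rng = random.Random(0)
pop = [[1.0, 2.0], [3.0, 6.0]]
opp = generalized_opposite(pop, rng)
```

In GODE this transformation is applied with a given probability each generation, and the better of each candidate/opposite pair survives; the sketch above only shows the transformation itself.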
Solis and Wets Algorithm. The well-known Solis and Wets algorithm has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. It has several parameters that govern how the candidate solution is moved and how the step size is adapted. All these parameters will be subject to parameter tuning, as for the rest of the algorithms.
MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large-scale global optimization problems. Along the same lines as the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.
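A rough sketch of this dimension-by-dimension search with an adaptive step (simplified: the original MTS-LS1 also uses a grading/bonus scheme to choose among the three local searches, which is omitted here):

```python
def mts_ls1(f, x, lower, upper, iters=200):
    """Search each dimension separately: try a full step down; if that fails,
    try a half step up; shrink the step when a whole sweep stalls."""
    dim = len(x)
    sr = [(upper - lower) * 0.2] * dim     # per-dimension search range
    fx = f(x)
    for _ in range(iters):
        improved = False
        for d in range(dim):
            old = x[d]
            x[d] = max(lower, old - sr[d])          # full step down
            fn = f(x)
            if fn < fx:
                fx, improved = fn, True
            else:
                x[d] = min(upper, old + 0.5 * sr[d])  # half step up
                fn = f(x)
                if fn < fx:
                    fx, improved = fn, True
                else:
                    x[d] = old                        # restore
        if not improved:
            sr = [s * 0.5 for s in sr]               # shrink all steps
            if max(sr) < 1e-12:
                break
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x_ls, f_ls = mts_ls1(sphere, [3.2, -1.7, 4.4], -5.0, 5.0)
```

On a separable function like this one the per-dimension strategy is very effective, which is consistent with the good behavior of MTS-LS1 on the small 2-term exponential problem (Section 3.3.4) and its poor behavior on the strongly coupled models.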
Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages, it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and it is robust against ill-conditioned and nonseparable problems. To increase the probability of finding the global optimum, the increasing-population-size IPOP-CMA-ES algorithm was proposed. This algorithm uses a restart method with a successively increasing population size each time the stopping criterion is satisfied. It was ranked first in the “Special Session on Real-Parameter Optimization” held at the CEC 2005 Congress and has since been applied to several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES with several values for their sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate convergence, has also been tested with both algorithms.
2.3. Comparison Metric
To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R²). This is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model. R² usually has a value between 0 and 1 (although it can sometimes be negative), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.
R² is calculated as follows: given a reference data set with values y₁, …, yₙ, each of which has an associated predicted value fᵢ, the total sum of squares (SS_tot) and the residual sum of squares (SS_res) are defined as

SS_tot = ∑ᵢ (yᵢ − ȳ)²,  SS_res = ∑ᵢ (yᵢ − fᵢ)², (15)

where ȳ is the mean of the reference values. R² is then expressed as

R² = 1 − SS_res/SS_tot. (16)
The coefficient of determination, however, does not take into account regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination (R̄²), which takes into account the complexity of the regression model and the number of observations. R̄² is calculated as follows:

R̄² = 1 − (1 − R²)(n − 1)/(n − p − 1), (17)

where n is the size of the sample set and p is the number of coefficients of the regression model. As in the case of R², an R̄² closer to 1 means a better fit. We used R̄² as the fitness metric for all our curve-fitting calculations.
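Both metrics are straightforward to compute. The sketch below (with an arbitrary toy data set) illustrates how the adjusted coefficient of determination penalizes each extra model coefficient:

```python
def r_squared(y, f):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(y) / len(y)
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, f))
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(y, f, p):
    """Adjusted R^2 for a model with p coefficients on a sample of size n."""
    n = len(y)
    return 1.0 - (1.0 - r_squared(y, f)) * (n - 1) / (n - p - 1)

y = [1.0, 2.0, 3.0, 4.0, 5.0]          # toy reference values
pred = [1.1, 1.9, 3.0, 4.2, 4.8]       # toy predictions
r2 = r_squared(y, pred)
r2_adj = adjusted_r_squared(y, pred, p=1)
```

For the same raw fit, calling `adjusted_r_squared` with a larger p yields a lower score, which is exactly what makes the 4-coefficient and 25-coefficient models comparable.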
2.4. Parameter Tuning
All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices according to the Taguchi method. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high SN ratio implies that the signal is much stronger than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. Our results fall in the “bigger is better” category, since the objective is to maximize the adjustment of the model to the observed values, measured as R̄². For this category, the SN ratio estimate is defined as

SN = −10 · log₁₀((1/n) ∑ᵢ₌₁ⁿ 1/yᵢ²), (18)

where n denotes the total number of instances and yᵢ the target values (the adjustment of the model in this case).
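For instance, the "bigger is better" SN ratio of a set of repeated quality measurements can be computed as:

```python
import math

def sn_ratio_bigger_is_better(values):
    """Taguchi 'bigger is better' signal-to-noise ratio:
    SN = -10 * log10((1/n) * sum(1 / y_i^2))."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# Hypothetical adjusted-R^2 values from repeated runs of one configuration
sn = sn_ratio_bigger_is_better([0.99, 0.98, 0.97])
```

Configurations whose repeated runs are both high and consistent obtain the largest SN ratio, which is why the Taguchi method selects them.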
This method allows the execution of a limited number of configurations and still reports significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations will depend on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.
2.5. Statistical Validation
To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests for this purpose. In this paper we conducted a thorough statistical validation that is discussed in this section.
First, the nonparametric Friedman test was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, we continued the comparison with post-hoc tests. In particular, we reported the p values for two nonparametric post-hoc tests: the Friedman [52–54] and Wilcoxon tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should rather be considered in order to control the familywise error rate. In our experimentation, we considered Holm’s procedure, which has been successfully used in the past in several studies [12, 56].
Apart from the p values reported by the statistical tests, we also reported the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance on each function and reported the average ranking computed over all the functions. For instance, given the mean performances of two algorithms on a benchmark of three functions, a relative rank is assigned per function and the three ranks of each algorithm are then averaged.

(ii) nWins: the number of other algorithms against which each algorithm is statistically better, minus the number of algorithms against which it is statistically worse, according to the Wilcoxon Signed Rank Test in a pairwise comparison.
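The average-ranking computation can be sketched as follows (with hypothetical performance values; lower error means a better rank, and ties are ignored for simplicity):

```python
def average_rankings(perf):
    """perf[alg] = list of mean errors per function (lower is better).
    Returns the average Friedman-style rank of each algorithm."""
    algs = list(perf)
    n_funcs = len(next(iter(perf.values())))
    totals = {a: 0.0 for a in algs}
    for i in range(n_funcs):
        # Rank 1 goes to the best (lowest-error) algorithm on function i
        ordered = sorted(algs, key=lambda a: perf[a][i])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_funcs for a in algs}

# Hypothetical mean errors of two algorithms on three functions
perf = {"A": [0.1, 0.3, 0.2], "B": [0.2, 0.1, 0.4]}
avg = average_rankings(perf)
```

Here algorithm A is best on functions 1 and 3 and second on function 2, so its average rank is 4/3, against 5/3 for B; a production implementation would additionally assign mid-ranks to ties.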
This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
3. Results and Discussion
We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully applied to the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curve problems and, in the case of the latter, the five different curve models presented in Section 2.1.2.
In the first place, a parameter tuning of the seven selected metaheuristics has been carried out; its results are presented in Section 3.1. Then, final runs with the best configuration of each algorithm were performed; their results are presented in Sections 3.2-3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.
3.1. Parameter Tuning of the Selected Metaheuristics
As described in Section 2.4, to conduct our experimentation we have followed a fractional design based on orthogonal matrices according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case) while still yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of randomly chosen curves prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all its relevant parameters, highlighting in bold the final selected values among all the tested ones according to the measures reported by the Taguchi method.
Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curve problems, for all the models and curves, conducting multiple repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to a fixed interval to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.
3.2. Glutamate Concentration Curves Problem
As we stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the λ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little issue by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with very high precision. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, in both its constrained and unconstrained versions.
Due to the almost nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point, suggesting that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies.
3.3. AMPA Open Receptors Curves Problem
To provide more insight into the results obtained for the AMPA open receptors curves problem, we have decided to report the results and conduct the statistical validation for each of the curve approximation models individually and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) systematically rank among the best and, in most of the cases, are significantly better than the baseline algorithm NLLS.
3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function
The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE). The p values reported by the statistical validation (Table 6 for unadjusted values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as will be confirmed in the following sections.
3.3.2. Results with the 8-Term Fourier Series
For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function, the attained precision slightly decreased with respect to the previous model, but again several metaheuristics (all the population-based algorithms, which also include the GA) outperformed the reference NLLS algorithm (in both its constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions compared to the 9 of the previous model). Also, in this problem the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).
3.3.3. Results with the 8-Term Gauss Series
The results for this model break the trend observed in the previous two cases (and resumed in the remaining two functions). Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which again reaches a very high precision. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much larger (25 parameters against 18). This suggests that it is the nature of the components of the Gauss series that makes the problem harder for DE-based algorithms (it is a summation of squared exponential terms with correlated coefficients whose interdependencies might be better captured by the covariance matrix adaptation in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model, in both the constrained and unconstrained versions. It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.
3.3.4. Results with the 2-Term Exponential Function
In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with the highest precision, ranking, and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most of the other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improve their performance compared with the other problems. This improvement is probably due to the small problem size, which allows an easier component-by-component optimization. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except the DE algorithm in the Friedman test.
3.3.5. Results with the 9th Degree Polynomial
The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as the studied algorithms can be classified into two groups: those that converge to (what seems to be) a strong attractor and those that are unable to find any good solution. The first group comprises the three algorithms that usually exhibit the best performance, IPOP-CMA-ES, Self-Adaptive DE, and DE, plus both NLLS configurations. The second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although these are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision on this problem (a highest value of compared to values in the range of to ). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 discusses this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation finds significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms under the Wilcoxon test (see Tables 18 and 19 for details). The Friedman test, however, is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.
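A plausible explanation for the strong attractor is that a polynomial model is linear in its coefficients, so its least-squares optimum is unique (up to numerical conditioning) and can even be computed directly, without a metaheuristic. A minimal sketch on synthetic decay-like data (not the paper's dataset):

```python
import numpy as np

# Because polynomial residuals are linear in the coefficients, the
# least-squares optimum (the "strong attractor" above) is obtained
# directly by solving a linear system instead of a stochastic search.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.exp(-4.0 * x) + 0.01 * rng.normal(size=x.size)  # decay-like data

coeffs = np.polyfit(x, y, deg=9)                 # 10 coefficients for degree 9
residual = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
```

The residual cannot drop below the noise floor of the data, which mirrors the observation above that no algorithm reaches high precision with this model: the limitation lies in the polynomial itself, not in the optimizers.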
3.3.6. Overall Comparison
In this section we offer an overall comparison of the algorithms on the five AMPA open receptors curve problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2,500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear that the IPOP-CMA-ES algorithm obtains the best results, with an average precision of , the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take the second and third places in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in . Also noteworthy is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been applied, and the results, reported in Tables 21 and 22, confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with respect to all the other algorithms under both statistical tests.
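The validation procedure used throughout (a Friedman test over all algorithms, plus pairwise Wilcoxon signed-rank tests with Holm's p-value adjustment) can be sketched as follows. The per-run fitness samples are synthetic stand-ins, not the paper's results:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical per-run fitness values for three algorithms, paired by run
rng = np.random.default_rng(42)
runs = 30
cma = rng.normal(1e-9, 1e-10, runs)   # stand-in for IPOP-CMA-ES
sade = rng.normal(1e-6, 1e-7, runs)   # stand-in for Self-Adaptive DE
de = rng.normal(1e-5, 1e-6, runs)     # stand-in for classic DE

# Friedman test over all algorithms at once (ranks within each run)
stat, p_friedman = friedmanchisquare(cma, sade, de)

# Pairwise Wilcoxon signed-rank tests against the reference algorithm,
# with Holm's step-down adjustment of the resulting p-values
pvals = [wilcoxon(cma, other).pvalue for other in (sade, de)]
order = np.argsort(pvals)
adjusted = np.empty(len(pvals))
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (len(pvals) - rank) * pvals[idx])
    adjusted[idx] = min(1.0, running_max)
```

Holm's method multiplies the smallest p-value by the number of comparisons, the next by one fewer, and so on, enforcing monotonicity, which is a standard safeguard against inflated Type I error in repeated pairwise testing.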
3.4. Biological Interpretation of the Results
To understand the biological implications of this study we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).
3.4.1. Glutamate Concentration Curves
In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. First, the high fitness values obtained with many optimization techniques (most of them around ) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation can therefore be used to interpret and predict the glutamate diffusion process, provided that adequate means of calculating the coefficient are employed. Second, selecting the appropriate curve-fitting technique is also important, since general-purpose, commonly used algorithms (such as the NLLS used in ) are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of molecules/μm2, a glutamate transporter concentration of molecules/μm2, a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters please refer to .) In this example the glutamate initial concentration was mol/L and its coefficient of diffusion μm2/ms. The curve-fitting solution produced , resulting in the following equation:
√ means that there are statistical differences with significance level . Reported values are -values adjusted according to Holm's method.
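Since equation (9) is not reproduced in this excerpt, the sketch below illustrates the general curve-fitting workflow with SciPy's NLLS routine on a hypothetical first-order decay model standing in for the concentration curve; the model, data, and initial guess are all assumptions, not the paper's diffusion equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def concentration(t, c0, k):
    # Hypothetical first-order decay stand-in for a concentration time course
    return c0 * np.exp(-k * t)

# Noiseless synthetic observations for illustration
t = np.linspace(0.0, 2.0, 50)
obs = concentration(t, 1.0, 3.5)

# NLLS fit (Levenberg-Marquardt by default), the baseline technique
# against which the metaheuristics above are compared
popt, pcov = curve_fit(concentration, t, obs, p0=[0.5, 1.0])
```

On a well-conditioned, noiseless problem such as this one, NLLS recovers the coefficients exactly; the paper's point is that on the harder, multi-term models its local search can stall in suboptimal basins where the metaheuristics do not.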