Abstract and Applied Analysis
Volume 2015, Article ID 708131, 16 pages
http://dx.doi.org/10.1155/2015/708131
Research Article

Comparative Study of Metaheuristics for the Curve-Fitting Problem: Modeling Neurotransmitter Diffusion and Synaptic Receptor Activation

DATSI, ETS de Ingenieros Informáticos, Universidad Politécnica de Madrid, Campus de Montegancedo, 28660 Boadilla del Monte, Madrid, Spain

Received 22 October 2014; Revised 10 April 2015; Accepted 15 April 2015

Academic Editor: Jinde Cao

Copyright © 2015 Jesús Montes et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Synapses are key elements of information transmission in the nervous system. Among the different approaches used to study them, computational simulation stands out as one of the most promising techniques. Simulations, however, do not provide generalized models of the underlying biochemical phenomena, but rather a set of observations, or time-series curves, displaying the behavior of the synapse in the scenario represented. Finding a general model of these curves, such as a set of mathematical equations, would be an important achievement in the study of synaptic behavior. In this paper, we present an exploratory analysis in which selected curve models are proposed, and state-of-the-art metaheuristics are used and compared to fit the free coefficients of these curves to the data obtained from simulations. Experimental results demonstrate that several models can fit these data, although a deeper analysis from a biological perspective reveals that some are better suited for this purpose, as they represent the biological process more accurately. Based on the results of this analysis, we propose a set of mathematical equations and a methodology adequate for modeling several aspects of biochemical synaptic behavior.

1. Introduction

Most information in the mammalian nervous system flows through chemical synapses. These are complex structures comprising a presynaptic element (an axon terminal) and a postsynaptic element (a dendritic spine, a dendritic shaft, an axon, or a soma) separated by a narrow gap known as the synaptic cleft (see Figure 1). The neurotransmitter is stored in synaptic vesicles located in the presynaptic terminal. For release to take place, the membrane of one or more vesicles must fuse with a region of the presynaptic membrane, the active zone, which lines the synaptic cleft. On the opposite side, the postsynaptic membrane is populated by specific receptors.

Figure 1: Chemical synapses. (a) Electron microphotograph of the cerebral cortex where three axon terminals (asterisks) establish synapses with three dendritic elements showing clearly visible postsynaptic densities (arrows). (b) Simplified model of a chemical synapse, highlighting its most significant parts.

Multiple factors influence the diffusion of neurotransmitter molecules and their interaction with specific receptors [1–3]. The initial concentration of the released neurotransmitter in the extracellular space depends on the volume of the synaptic cleft. The subsequent diffusion of neurotransmitter molecules outside the cleft may be influenced by the geometrical characteristics of the membranes that surround the synaptic junction, and by the presence and concentration of transporter molecules. However, direct observation of the various synaptic events at the molecular and ultrastructural levels in vivo or in vitro is rather difficult, if not impossible. Simulation approaches are thus useful to assess the influence of different parameters on the behavior of the synapse (e.g., [4, 5]).

Simulation approaches in neuroscience have considered different models, scales, and techniques, according to the phenomenon being studied. Biochemical processes, such as neurotransmitter diffusion, require Monte Carlo particle-based simulators like MCell [6, 7], ChemCell [8], or Smoldyn [9, 10]. These simulation techniques allow computational neuroscientists to reproduce biological processes in a manner that lets them be thoroughly observed, systematically analyzed, and easily adapted and/or modified. The results of these simulations can be studied to extract general conclusions and infer basic properties of the natural processes under study.

In the case of chemical synapses, many aspects of their behavior during neurotransmitter release and diffusion can be simulated. Simulations, however, do not provide generalized models of this behavior; they show only the specific values observed each time, and in this case these values are also subject to the stochastic nature of the Monte Carlo simulations used to produce them. Having general, empirical models of these time series, such as a set of mathematical equations, could be an important achievement in the study of chemical synaptic behavior. Finding these equations, however, is not a trivial task.

Having the raw simulation data at hand, the first step in finding these equations is to identify the type of mathematical expression we are looking for. A reasonable approach to achieve this is to study the underlying physicochemical processes that take place during synapse operation (e.g., molecular diffusion and receptor kinetics) and try to analytically infer the synaptic equations from them. This is, however, sometimes not possible (as is explained in detail in Section 2.1). When this is the case, an exploratory analysis can be attempted, testing different general mathematical models against the empirical data, and trying to find the one with the best adjustment. This becomes, in essence, a complex curve-fitting problem, in which a set of general equations is fitted to experimentally obtained data [11].

This problem is, in its general form, an optimization task, to which different search strategies from the field of metaheuristics have been successfully applied in the past. These search techniques, with Evolutionary Algorithms (EAs) as one of their most relevant exponents, have shown over the last decades their ability to deal with optimization problems from many different domains, ranging from purely theoretical mathematical functions (benchmarks) [12] to real-world problems in biology [13–17], engineering [18–21], and logistics [22–25], to name a few. Furthermore, different metaheuristics have previously been used in related curve-fitting problems [26–29], which motivates the use of this type of algorithm for our particular problem.

2. Materials and Methods

2.1. Synaptic Curves

We analyzed simulations based on simplified models of excitatory synapses where AMPA receptors were present and the neurotransmitter involved was glutamate. (The AMPA receptor is an ionotropic receptor commonly found in excitatory synapses that use glutamate as neurotransmitter. It is named after its specific agonist alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid.) We based our study on a corpus of synaptic simulations very similar to the one presented in [11]. Our corpus consisted of 500 different synaptic configurations, each simulated 500 times (to incorporate stochastic variability), resulting in a total of 250,000 simulations. As in [11], geometrical parameters were taken into account to incorporate synaptic variability, and simulations were performed using the MCell software on the Magerit supercomputer [30]. Each of the 500 synaptic configurations represented a unique synapse, morphologically different from the others. Simulations studied neurotransmitter diffusion and synaptic receptor behavior in each configuration. After the synaptic simulations were performed, two separate (but related) aspects of synaptic behavior were considered:

(i) The average variation of neurotransmitter concentration in the synaptic cleft over time, after a single vesicle release.

(ii) The average variation of the total number of synaptic receptors in the open state over time, again after a single neurotransmitter vesicle release. (Molecular kinetics of synaptic receptors follow Markov-chain-like models, with several possible states and transition probabilities depending on environmental factors and chemical reaction rate constants. The most relevant state in the AMPA receptor is the open state, as it is directly related to the electrical synaptic response [31].)

These two aspects were plotted as time-series curves for each of the 500 synaptic configurations, resulting in a total of 1,000 curves (500 glutamate concentration curves and 500 open AMPA receptor curves), each of them being an average of 500 simulations of the same synaptic configuration. An example of these curves can be seen in Figure 2. The curves obtained were consistent with previous studies [3235]. Each one of these 1,000 curves represented the behavior of a specific synapse, morphologically different from the others. Each curve became a separate curve-fitting problem, addressed using the metaheuristics described in Section 2.2.

Figure 2: Examples of synaptic curves after a neurotransmitter vesicle release. (a) Variation of neurotransmitter (in this case glutamate) concentration through time inside the synaptic cleft. (b) Variation in the total number of synaptic receptors (in this case AMPA) in the open state through time.

2.1.1. Glutamate Concentration Curves

In the case of the glutamate concentration (Figure 2(a)), all curves showed an initial peak (maximum concentration at the moment of vesicle release), followed by what very closely resembled a typical exponential decay. To understand this curve, we referred to Fick's second law of diffusion (derived by the German physiologist Adolf Fick in 1855), which predicts how diffusion causes the concentration of a substance to change with time. For two or more dimensions, this law states that

$$\frac{\partial C(\mathbf{x},t)}{\partial t} = D\,\nabla^2 C(\mathbf{x},t), \tag{1}$$

where $C(\mathbf{x},t)$ is the concentration of a given substance (glutamate in our case) at location $\mathbf{x}$ and time $t$, and $D$ is the coefficient of diffusion of the substance. This is also called the heat equation and has no analytic solution for our specific problem, due to the three-dimensional synaptic geometry (the heat equation can only be solved analytically for a few idealized unidimensional problems). It is known, however, that its solution must follow a typical exponential decay. In exponential decay processes, the quantity $N(t)$ of a substance decreases in the following way:

$$\frac{dN(t)}{dt} = -\frac{1}{\tau}\,N(t), \tag{2}$$

where $\tau$ is the mean lifetime or exponential time constant of the process. We solve (2) as follows:

$$\frac{dN(t)}{N(t)} = -\frac{dt}{\tau}. \tag{3}$$

We integrate both sides:

$$\int \frac{dN(t)}{N(t)} = -\frac{1}{\tau}\int dt \;\Longrightarrow\; \ln N(t) = -\frac{t}{\tau} + C. \tag{4}$$

We now apply the exponential function to both sides:

$$N(t) = e^{\ln N(t)} = e^{-t/\tau + C} = e^{C}\,e^{-t/\tau}. \tag{5}$$

Since $e^{C}$ is constant, we rename it to $N_0$ and we finally obtain a solution to (2) in the following form:

$$N(t) = N_0\,e^{-t/\tau}, \tag{6}$$

where $N_0$ is the initial quantity, since at $t = 0$, $e^{-t/\tau} = 1$ and $N(0) = N_0$. We know that the solution of Fick's second law of diffusion (1) must follow an exponential decay (2); therefore, it must be in the form of (6). The neurotransmitter diffuses through a certain volume: the synaptic cleft. As explained, the neurotransmitter concentration at every point of this volume changes following an exponential decay (6). We propose a new function $\bar{C}(t)$, the average concentration of neurotransmitter in this volume, and we assume that it also follows an exponential decay. We define $\bar{C}(t)$ with the following equation:

$$\bar{C}(t) = C_0\,e^{-t/\tau}, \tag{7}$$

where $C_0$ is the initial average concentration at $t = 0$. Since the number of neurotransmitter molecules released (vesicle size) and the volume of the synaptic cleft are known (they are part of the synaptic configuration simulation parameters), $C_0$ can easily be calculated beforehand. Please note that $\bar{C}(t)$ is the average concentration in a defined volume, as opposed to $C(\mathbf{x},t)$, which is the exact concentration at position $\mathbf{x}$. $\bar{C}(t)$ is the value measured in our simulation experiments (shown, e.g., in Figure 2(a)).
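As a quick sanity check, the closed-form exponential-decay solution can be verified numerically against the differential equation it solves; the values of N0 and tau below are arbitrary illustrative choices, not simulation parameters:

```python
import math

# Check numerically that N(t) = N0 * exp(-t / tau) solves dN/dt = -N / tau.
# N0 and tau are illustrative values, not taken from the simulations.
N0, tau = 100.0, 2.5
h = 1e-6  # finite-difference step

for t in [0.0, 0.5, 1.0, 3.0]:
    N = N0 * math.exp(-t / tau)
    # central finite-difference approximation of dN/dt
    dN_dt = (N0 * math.exp(-(t + h) / tau) - N0 * math.exp(-(t - h) / tau)) / (2 * h)
    assert abs(dN_dt - (-N / tau)) < 1e-4
```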

The value of $\tau$ in (7), however, cannot be calculated analytically. From the previously mentioned link between (1) and (2), we deduce a connection between each side of both equations, concluding that

(i) $\partial C(\mathbf{x},t)/\partial t$ relates to $dN(t)/dt$ (left side of both equations);

(ii) $D\,\nabla^2 C(\mathbf{x},t)$ relates to $-(1/\tau)\,N(t)$ (right side of both equations).

From the second item, we deduce that the constant $\tau$ should somehow be calculated from $D$ and some unknown aspects of the synaptic geometry, which are factored inside the $\nabla^2$ term that cannot be calculated analytically for an arbitrary 3D problem. From $D$ and $\tau$ we derive a new coefficient $k$, so that

$$\tau = \frac{1}{kD}. \tag{8}$$

The coefficient $k$ represents the effects of geometry, separated from the substance diffusion coefficient $D$. For our problem, $k$ cannot be determined analytically and needs to be estimated using numerical and/or statistical methods. We incorporate $k$ into (7) and obtain

$$\bar{C}(t) = C_0\,e^{-kDt}, \tag{9}$$

which is the final expression of our neurotransmitter (glutamate) concentration time-series model. The problem of estimating the value of $k$ still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the value of $k$ that most accurately models the results observed during these experiments.
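As an illustration of this estimation step, when $C_0$ is known beforehand the product of the geometry coefficient and the diffusion coefficient can be recovered in closed form by linear least squares on log-transformed data. The sketch below uses synthetic, noiseless data; all numeric values are illustrative assumptions:

```python
import math

# Least-squares estimate of k*D in C(t) = C0 * exp(-k*D*t), exploiting
# the fact that C0 is known from the simulation parameters.
# Taking logs linearizes the model: ln(C(t)/C0) = -(k*D) * t,
# so the slope has a closed-form least-squares solution.

C0 = 4.0        # known initial average concentration (illustrative)
true_kD = 1.7   # ground truth, used only to generate synthetic data
ts = [0.1 * i for i in range(1, 30)]
cs = [C0 * math.exp(-true_kD * t) for t in ts]

num = sum(t * math.log(c / C0) for t, c in zip(ts, cs))
den = sum(t * t for t in ts)
kD_hat = -num / den

assert abs(kD_hat - true_kD) < 1e-9
```

With noisy averaged simulation curves, the same slope estimate would be computed over the observed values instead; the metaheuristics compared in this paper address the general nonlinear case directly.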

2.1.2. AMPA Open Receptors Curves

All open AMPA receptor curves showed a rapid climb to a single peak followed by a long tail, slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, the peak, and the descent) contains the most relevant information, namely, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and the synaptic geometry.

As we have already established, neurotransmitter concentration cannot be predicted analytically (Fick's second law cannot be solved analytically in two or more dimensions); therefore, open AMPA receptor curves cannot be either (since they depend on neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.

Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previous analysis presented in [11]. Our work differs from that study in that in [11] this problem was tackled only by means of a single approximate technique, which we use here as a baseline against which to compare different global optimization heuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From [11] we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):
$$f(t) = \frac{p_1 t^4 + p_2 t^3 + p_3 t^2 + p_4 t + p_5}{t^4 + q_1 t^3 + q_2 t^2 + q_3 t + q_4}. \tag{10}$$

(ii) 8-term Fourier series (18 coefficients):
$$f(t) = a_0 + \sum_{i=1}^{8}\left[a_i \cos(i w t) + b_i \sin(i w t)\right]. \tag{11}$$

(iii) 8-term Gauss series (25 coefficients):
$$f(t) = \sum_{i=1}^{8} a_i\,e^{-\left((t-b_i)/c_i\right)^2}. \tag{12}$$

(iv) 2-term exponential function (4 coefficients):
$$f(t) = a e^{b t} + c e^{d t}. \tag{13}$$

(v) 9th degree polynomial (10 coefficients):
$$f(t) = \sum_{i=0}^{9} p_i\,t^i. \tag{14}$$
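For reference, the five candidate models can be written as plain functions. The coefficient conventions below follow the standard forms used by common curve-fitting libraries and are an assumption; the paper's exact parameterization may differ slightly:

```python
import math

# Standard forms assumed for the five candidate models (coefficient
# conventions are an assumption following common curve-fitting libraries).

def rat44(t, p, q):
    """4-by-4 rational: 5 numerator coeffs p, 4 denominator coeffs q."""
    num = sum(pi * t ** (4 - i) for i, pi in enumerate(p))
    den = t ** 4 + sum(qi * t ** (3 - i) for i, qi in enumerate(q))
    return num / den

def fourier8(t, a0, a, b, w):
    """8-term Fourier series with fundamental frequency w."""
    return a0 + sum(a[i] * math.cos((i + 1) * w * t)
                    + b[i] * math.sin((i + 1) * w * t) for i in range(8))

def gauss8(t, a, b, c):
    """8-term Gauss series: amplitudes a, centers b, widths c."""
    return sum(a[i] * math.exp(-((t - b[i]) / c[i]) ** 2) for i in range(8))

def exp2(t, a, b, c, d):
    """2-term exponential."""
    return a * math.exp(b * t) + c * math.exp(d * t)

def poly9(t, p):
    """9th degree polynomial, 10 coefficients, highest degree first."""
    return sum(pi * t ** (9 - i) for i, pi in enumerate(p))
```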

To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data we observed during simulations. We need, therefore, to fit every curve to the data and see how they perform.

2.2. Algorithms Used in the Comparison

This section reviews the algorithms considered for the previously described curve-fitting problem, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies [11] and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares

The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm first approximates the model by a linear one and then refines the parameters by successive iterations. This method was successfully used in a seminal work [11] and has thus been selected as the baseline algorithm against which to test the performance of the several metaheuristics reviewed in the following sections.

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are among the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs in both the continuous and the discrete domains. For our experiments, we have used a Real-Coded GA with a real-coded crossover operator and Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.
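A minimal sketch of typical real-coded GA variation operators: a BLX-α blend crossover and Gaussian mutation. The operator choice and all parameter values here are illustrative assumptions, not the paper's exact configuration:

```python
import random

# Sketch of real-coded GA variation operators. alpha, sigma, and p_mut
# are illustrative values, not the tuned settings from the paper.

def blx_alpha(x, y, alpha=0.5):
    """Blend crossover: each child gene is drawn uniformly from the
    parents' interval, extended by alpha times its width on both sides."""
    child = []
    for xi, yi in zip(x, y):
        lo, hi = min(xi, yi), max(xi, yi)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def gaussian_mutation(x, sigma=0.1, p_mut=0.1):
    """Add zero-mean Gaussian noise to each gene with probability p_mut."""
    return [xi + random.gauss(0.0, sigma) if random.random() < p_mut else xi
            for xi in x]

child = gaussian_mutation(blx_alpha([0.0, 1.0], [1.0, 3.0]))
```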

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it only has three free parameters to configure) have gained DE a lot of popularity in recent years, and it is being used, both in its canonical form and in more advanced flavors (as we will see below), in many different complex optimization scenarios. For our experiments we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), the scaling factor F, and the crossover rate CR.
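One generation of canonical DE (rand/1 mutation with exponential crossover) can be sketched as follows, here on a toy sphere function; NP, F, and CR values are illustrative, not the tuned settings:

```python
import random

# Sketch of canonical DE: rand/1 mutation, exponential crossover, greedy
# selection. Parameter values (NP=10, F=0.5, CR=0.9) are illustrative.

def sphere(x):
    return sum(v * v for v in x)

def de_generation(pop, f=0.5, cr=0.9):
    dim = len(pop[0])
    nxt = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[k] + f * (b[k] - c[k]) for k in range(dim)]
        trial = target[:]
        k, L = random.randrange(dim), 0
        while True:  # exponential crossover: copy a contiguous run of genes
            trial[(k + L) % dim] = mutant[(k + L) % dim]
            L += 1
            if L >= dim or random.random() >= cr:
                break
        # greedy selection: keep the trial only if it is at least as good
        nxt.append(trial if sphere(trial) <= sphere(target) else target)
    return nxt

random.seed(42)
pop = [[random.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(10)]
init_best = min(sphere(p) for p in pop)
for _ in range(50):
    pop = de_generation(pop)
final_best = min(sphere(p) for p in pop)
```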

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not use fixed F and CR parameters but instead allows the algorithm to adjust their values with a certain probability, controlled by two additional parameters.

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. The Generalized Opposition-Based Differential Evolution (GODE) algorithm [44] uses the Opposition-Based Learning concept, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population, but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter that controls the probability of applying the opposition-based search.
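The core transformation is simple: for a solution with components in [a, b], the generalized opposite is obtained componentwise as x* = k(a + b) − x, where k is drawn uniformly from [0, 1] (k = 1 recovers plain opposition). A minimal sketch with illustrative bounds:

```python
import random

# Generalized opposition: x* = k * (a + b) - x for each component, with
# k ~ U[0, 1]; k = 1 gives standard opposition-based learning. In GODE,
# components falling outside [a, b] are typically reset inside the bounds.

def generalized_opposite(x, a, b, k=None):
    if k is None:
        k = random.random()
    return [k * (a + b) - xi for xi in x]

opp = generalized_opposite([0.2, 0.7], 0.0, 1.0, k=1.0)
```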

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. It has several parameters that rule the way the candidate solution is moved and how the step size is adapted. All these parameters will be subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large-scale global optimization problems. As with the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.
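The dimension-wise search of MTS-LS1 can be sketched as follows. This is a simplified version under stated assumptions: the original algorithm manages the search range per solution and restores it differently, and the parameter values here are illustrative:

```python
# Simplified sketch of the MTS-LS1 idea: improve one dimension at a time
# with an adaptive search range sr, halved whenever no dimension improves.

def mts_ls1(f, x, sr=1.0, iters=100, sr_min=1e-8):
    best = f(x)
    for _ in range(iters):
        improved = False
        for d in range(len(x)):
            old = x[d]
            x[d] = old - sr            # try a full step down
            val = f(x)
            if val < best:
                best, improved = val, True
            else:
                x[d] = old + 0.5 * sr  # then half a step up
                val = f(x)
                if val < best:
                    best, improved = val, True
                else:
                    x[d] = old         # restore if neither move helps
        if not improved:
            sr *= 0.5                  # shrink the search range
            if sr < sr_min:
                break
    return x, best

x, val = mts_ls1(lambda v: sum(t * t for t in v), [3.0, -2.0, 1.5])
```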

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages are that it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and its robustness against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing-population-size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size each time the stopping criterion is satisfied. It was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been applied to several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES with several values of the sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate convergence, has also been tested with both algorithms.

2.3. Comparison Metric

To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination ($R^2$). This is the proportion of variability in a data set that is accounted for by the statistical model, and it provides a measure of how well future outcomes are likely to be predicted by the model. $R^2$ usually takes a value between 0 and 1 (although it can sometimes be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

$R^2$ is calculated as follows: given a reference data set with values $y_i$, each of which has an associated predicted value $f_i$, and with $\bar{y}$ denoting the mean of the observed data, the total sum of squares ($SS_{\mathrm{tot}}$) and the residual sum of squares ($SS_{\mathrm{res}}$) are defined as

$$SS_{\mathrm{tot}} = \sum_{i} (y_i - \bar{y})^2, \qquad SS_{\mathrm{res}} = \sum_{i} (y_i - f_i)^2. \tag{15}$$

$R^2$ is then expressed as

$$R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}. \tag{16}$$

The coefficient of determination, however, does not take into account regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination ($\bar{R}^2$) [50], which takes into account the complexity of the regression model and the number of observations. $\bar{R}^2$ is calculated as follows:

$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}, \tag{17}$$

where $n$ is the size of the sample set and $p$ is the number of coefficients of the regression model. As in the case of $R^2$, a value of $\bar{R}^2$ closer to 1 means a better fit. We used $\bar{R}^2$ as the fitness metric for all our curve-fitting calculations.
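The comparison metric amounts to a few lines of code; the data values below are synthetic, for illustration only:

```python
# Coefficient of determination and its adjusted variant, which penalizes
# regression models with more coefficients.

def r2(y, f):
    y_mean = sum(y) / len(y)
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)   # total sum of squares
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, f))  # residual sum
    return 1.0 - ss_res / ss_tot

def adjusted_r2(y, f, p):
    """p is the number of coefficients of the regression model."""
    n = len(y)
    return 1.0 - (1.0 - r2(y, f)) * (n - 1) / (n - p - 1)

y = [1.0, 2.0, 3.0, 4.0, 5.0]
assert r2(y, y) == 1.0
# for the same raw fit, a model with more coefficients scores lower
f = [1.1, 1.9, 3.2, 3.8, 5.1]
assert adjusted_r2(y, f, 3) < adjusted_r2(y, f, 1) < r2(y, f)
```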

2.4. Parameter Tuning

All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices according to the Taguchi method [51]. In this method, the concept of signal-to-noise (SN) ratio is introduced to measure, in a controlled manner, the sensitivity of the quality characteristic under investigation to external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall into the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as $\bar{R}^2$. For this category, the SN ratio estimate is defined in (18), where $n$ denotes the total number of instances and $y_i$ the target values (the adjustment of the model in this case):

$$SN = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right). \tag{18}$$
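The "bigger is better" SN ratio can be computed directly from the model-adjustment values obtained for a given configuration; a minimal sketch with illustrative values:

```python
import math

# "Bigger is better" signal-to-noise ratio from the Taguchi method,
# computed over the model-adjustment values of one configuration.

def sn_bigger_is_better(values):
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# a configuration with uniformly higher adjustments gets a higher SN ratio
assert sn_bigger_is_better([0.99, 0.98, 0.97]) > sn_bigger_is_better([0.90, 0.70, 0.80])
```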

This method allows the execution of a limited number of configurations while still reporting significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations depends on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those identified by the Taguchi method as the best from the SN ratio point of view.

2.5. Statistical Validation

To validate the results obtained and the conclusions derived from them, it is very important to use appropriate statistical tests. In this paper we conducted the thorough statistical validation discussed in this section.

First, the nonparametric Friedman test [52] was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, we continued the comparison with post hoc tests. In particular, we report the p values for two nonparametric post hoc tests: the Friedman [52–54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should be considered in order to control the familywise error rate. In our experimentation, we used Holm's procedure [54], which has been successfully applied in several past studies [12, 56].

Apart from the p values reported by the statistical tests, we also report the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance on each function and report the average ranking computed across all the functions. For example, given the mean performances of two algorithms A and B on a benchmark of three functions, A: (0.90, 0.70, 0.85) and B: (0.80, 0.95, 0.60), their relative rankings would be A: (1, 2, 1) and B: (2, 1, 2), and thus their corresponding average rankings are 1.33 and 1.67.

(ii) nWins: the number of other algorithms than which a given algorithm is statistically better, minus the number of algorithms than which it is statistically worse, according to the Wilcoxon Signed Rank Test in a pairwise comparison [57].
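The average-ranking computation can be sketched as follows, for three hypothetical algorithms whose mean performances (to be maximized) are illustrative values:

```python
# Average Friedman-style ranking over a set of benchmark functions, for
# mean performances to be maximized. Algorithm names and values are
# hypothetical; ties are not handled in this simplified sketch.

perf = {                     # algorithm -> mean performance per function
    "A": [0.90, 0.70, 0.85],
    "B": [0.80, 0.95, 0.60],
    "C": [0.50, 0.40, 0.30],
}

def average_rankings(perf):
    algos = list(perf)
    n_funcs = len(next(iter(perf.values())))
    ranks = {a: [] for a in algos}
    for j in range(n_funcs):
        # rank 1 goes to the best (highest) performance on function j
        ordered = sorted(algos, key=lambda a: perf[a][j], reverse=True)
        for r, a in enumerate(ordered, start=1):
            ranks[a].append(r)
    return {a: sum(r) / len(r) for a, r in ranks.items()}

avg = average_rankings(perf)
```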

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.

3. Results and Discussion

We conducted thorough experimentation to elucidate whether the selected metaheuristics can be successfully applied to the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curve models presented in Section 2.1.2.

First, a parameter tuning of the seven selected metaheuristics was carried out, whose results are presented in Section 3.1. Then, final runs with the best configuration of each algorithm were performed; their results are presented in Sections 3.2 and 3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.

3.1. Parameter Tuning of the Selected Metaheuristics

As described in Section 2.4, to conduct our experimentation we followed a fractional design based on orthogonal matrices according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case) while still yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was performed on a subset of randomly chosen curves before using the selected values in the overall comparison. Table 1 contains, for each algorithm, all of its relevant parameters, highlighting in bold the values finally selected from among all those tested, according to the measures reported by the Taguchi method.

Table 1: Parameter tuning of the selected metaheuristics.

Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting multiple repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to a fixed interval to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem

As stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the coefficient $k$ of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little trouble by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with very high precision. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case for the NLLS algorithm, in both the constrained and unconstrained versions.

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

Given the virtually nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point, suggesting that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem

To provide more insight into the results obtained for the AMPA open receptors curves problem, we decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as one large validation data set. The results show that three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) systematically rank among the best and, in most cases, are significantly better than the baseline NLLS algorithm.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function

The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE). The p values reported by the statistical validation (Table 6 for unadjusted values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as will be confirmed in the following sections.

3.3.2. Results with the 8-Term Fourier Series

For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function the attained precision decreased slightly, but again several metaheuristics (all the population-based algorithms, which also include the GA) outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions compared to the 9 of the previous model). Also, on this problem the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series

The results for this model break the trend started in the previous two cases and continued in the remaining two functions. Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which once more reaches a very high precision. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This suggests that it is the nature of the components of the Gauss series that makes the problem harder for DE-based algorithms (it is a summation of squared exponential terms with correlated coefficients whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model, in both the constrained and unconstrained versions. It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, detailed in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function

In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with a precision of more than and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. This improvement is probably due to the small problem size, which allows an easier component-by-component optimization. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.
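To make the setting concrete, the sketch below fits a 2-term exponential model with a classic DE/rand/1/bin loop on synthetic data. It is an illustrative reimplementation under our own parameter choices (population size, F, CR, bounds), not the configuration used in the experiments:

```python
import numpy as np

def exp2(t, p):
    # 2-term exponential model: y = a*exp(b*t) + c*exp(d*t), 4 parameters.
    a, b, c, d = p
    return a * np.exp(b * t) + c * np.exp(d * t)

def de_fit(t, y, bounds, pop_size=40, gens=200, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin least-squares fitting loop (a sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([np.sum((exp2(t, p) - y) ** 2) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee one mutated component
            trial = np.where(cross, mutant, pop[i])
            f_trial = np.sum((exp2(t, trial) - y) ** 2)
            if f_trial <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Synthetic target curve with known coefficients.
t = np.linspace(0.0, 5.0, 100)
y_true = exp2(t, np.array([1.5, -0.8, 0.5, -3.0]))
bounds = np.array([[-5, 5], [-5, 0], [-5, 5], [-5, 0]], dtype=float)
p_best, sse = de_fit(t, y_true, bounds)
```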

3.3.5. Results with the 9th Degree Polynomial

The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems to be) a strong attractor and those that are unable to find any good solution. The first group contains the three aforementioned algorithms that usually exhibit the best performance, IPOP-CMA-ES, Self-Adaptive DE, and DE, plus both NLLS configurations. The second group comprises the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although these are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision with this problem (a highest value of compared to values in the range of to ). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 discusses this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.
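A 9th degree polynomial is linear in its 10 coefficients, so ordinary least squares locates its unique least-squares optimum in closed form, which is consistent with the single strong attractor described above. A small illustration on a synthetic peak-and-decay curve of our own (not the simulation data):

```python
import numpy as np

# Synthetic curve with a fast rise and slow decay, qualitatively similar
# in shape to an activation curve (illustrative data only).
t = np.linspace(0.0, 1.0, 100)
y = np.exp(-3.0 * t) * (1.0 - np.exp(-20.0 * t))

# Degree-9 polynomial fit: a *linear* least-squares problem in 10 unknowns,
# solved directly by numpy.polyfit without any iterative search.
coeffs = np.polyfit(t, y, deg=9)
residual = np.sum((np.polyval(coeffs, t) - y) ** 2)
```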

3.3.6. Overall Comparison

In this section we offer an overall comparison of the algorithms on the five AMPA open receptors curve problems. To make this comparison, we considered the results of the five problems altogether, thus comparing the 2,500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear that IPOP-CMA-ES obtains the best results, with an average precision of , the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Behind IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take the second and third places in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also noteworthy is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems was followed, and the results, reported in Tables 21 and 22, confirm the superior performance of IPOP-CMA-ES, with significant differences with respect to all the algorithms in both statistical tests.
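As a sketch of the validation pipeline, the Holm step-down correction applied to the raw pairwise p values (Tables 21 and 22 report the raw and corrected values, respectively) can be implemented as follows; the p values below are hypothetical:

```python
import numpy as np

def holm_correction(p_values):
    """Holm's step-down correction for m pairwise tests against a control.

    The i-th smallest raw p value is multiplied by (m - i); corrected values
    are then made monotone non-decreasing and capped at 1 (a sketch of the
    standard procedure, not the authors' code).
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Hypothetical raw p values for pairwise comparisons against a control algorithm.
raw = [0.001, 0.02, 0.03, 0.5]
adj = holm_correction(raw)  # -> [0.004, 0.06, 0.06, 0.5]
```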

3.4. Biological Interpretation of the Results

To understand the biological implications of this study we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves

In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. First, the high fitness values obtained with many optimization techniques (most of them around ) show that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation can therefore be used to interpret and predict the glutamate diffusion process, provided that adequate means of calculating the coefficient are used. Second, selecting the appropriate curve-fitting technique is also important, since general-purpose, commonly used algorithms (such as the NLLS algorithm used in [11]) are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm2, a glutamate transporter concentration of 11560 molecules/μm2, a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters please refer to [11].) In this example the glutamate initial concentration was  mol/L and its coefficient of diffusion  μm2/ms. The curve-fitting solution produced , resulting in the following equation:

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.
Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.
Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.
Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.
Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.
Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.
Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.
Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.
Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.
Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.
Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.
Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.
Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.
Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.
Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.
Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.
Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.
Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.
Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.
Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.
Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the exponential decay model proposed with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example .

It is important to remember that this equation is the solution for one particular synaptic configuration. Each of the 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different value of the fitted coefficient) was obtained.
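As an illustration of this per-configuration fitting, the sketch below assumes a simple single-rate exponential decay C(t) = C0·exp(−kt) with one free coefficient k; the actual equation (9) is defined earlier in the paper and may differ, so this is a stand-in form applied to synthetic data:

```python
import numpy as np

# Stand-in decay model (assumption, not equation (9) itself):
# C(t) = C0 * exp(-k * t), with known C0 and a single free rate k.
C0, k_true = 1.0, 4.2
t = np.linspace(0.0, 2.0, 100)
c = C0 * np.exp(-k_true * t)

# With one free coefficient, a log-linear least-squares fit recovers k
# directly: log(C/C0) = -k * t, so k is minus the fitted slope.
k_fit = -np.polyfit(t, np.log(c / C0), deg=1)[0]
```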

3.4.2. AMPA Open Receptors Curves

The case of the AMPA open receptors curves requires further study. The optimization results (Tables 5 to 22) show that, when the appropriate metaheuristic is used, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than accuracy, with the exception of the 9th degree polynomial). The five mathematical models studied are compared in Table 23, selecting for each the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. Consequently, the question of which model is best suited in this case requires deeper analysis.

Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, however, we must consider the nature of the biological process modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically plausible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve-fit for every equation, fitted using the corresponding best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm2, a glutamate transporter concentration of 11560 molecules/μm2, a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

(2) The curve fit calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough not to affect the overall fitness, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulated ones (Figure 4(f)). This is because not all data points obtained during simulation are used when evaluating the fitness value. The high-resolution simulations on which this work is based produce 10,000-point curves per simulation, and testing all of these points would make the fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve-fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11] for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that pass very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomenon observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.
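The sampling step described above can be sketched as follows (the curve here is synthetic; in the paper the 10,000-point curves come from the simulations):

```python
import numpy as np

# A 10,000-point simulated curve is reduced to 100 evenly spaced points
# before fitness evaluation, cutting the cost of each fitness call by 100x.
t_full = np.linspace(0.0, 10.0, 10_000)
y_full = np.exp(-0.5 * t_full)  # synthetic stand-in for a simulated curve

idx = np.linspace(0, len(t_full) - 1, 100).astype(int)  # 100 evenly spaced indices
t_sample, y_sample = t_full[idx], y_full[idx]
```

Note that a rapidly oscillating curve can still pass close to these 100 sample points while differing wildly between them, which is exactly the overfitting failure mode observed for the 8-term Fourier series.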

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting, in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.

Taking all this into consideration, we can now select the most appropriate equation for the AMPA open receptors curve. Among the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss series and the 2-term exponential function over the 4-by-4 degree polynomial rational function. These two present almost identical numerical results, with slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with , , , and , resulting in the following equation (also plotted in Figure 4(c)):

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each of the 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of the four coefficients) was obtained.

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to experimental data coming from a synapse simulation process. Each of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been considered, and the selected metaheuristics have tried to find the best fit for each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve on currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted extensive experimentation that yields significant results. First, it has been shown that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results on other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can therefore be of great value to other researchers. Finally, the differing performance of the algorithms opens a new research line focused on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit, very accurately, a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration) the optimization results validate our proposed curve model. In the second one (synaptic receptor activation) several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all of them were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can be applied in future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be built on these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size; our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding throughout the Juan de la Cierva Program.

References

  1. K. Fuxe, A. Dahlström, M. Höistad et al., “From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission,” Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007. View at Publisher · View at Google Scholar · View at Scopus
  2. E. Syková and C. Nicholson, “Diffusion in brain extracellular space,” Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008. View at Publisher · View at Google Scholar · View at Scopus
  3. D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, “Shaping the synaptic signal: molecular mobility inside and outside the cleft,” Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011. View at Publisher · View at Google Scholar · View at Scopus
  4. J. Boucher, H. Kröger, and A. Sík, “Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk,” Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010. View at Publisher · View at Google Scholar · View at Scopus
  5. M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, “Lateral diffusion on tubular membranes: quantification of measurements bias,” PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011. View at Publisher · View at Google Scholar · View at Scopus
  6. J. R. Stiles and T. M. Bartol, “Monte carlo methods for simulating realistic synaptic microphysiology using MCell,” in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001. View at Google Scholar
  7. R. A. Kerr, T. M. Bartol, B. Kaminsky et al., “Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces,” SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008. View at Publisher · View at Google Scholar · View at MathSciNet
  8. S. J. Plimpton and A. Slepoy, “Microbial cell modeling via reacting diffusive particles,” Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005. View at Publisher · View at Google Scholar
  9. S. S. Andrews and D. Bray, “Stochastic simulation of chemical reactions with spatial resolution and single molecule detail,” Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004. View at Google Scholar
  10. S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, “Detailed simulations of cell biology with smoldyn 2.1,” PLoS Computational Biology, vol. 6, no. 3, 2010. View at Google Scholar · View at Scopus
  11. J. Montes, E. Gomez, A. Merchán-Pérez, J. DeFelipe, and J.-M. Peña, “A machine learning method for the prediction of receptor activation in the simulation of synapses,” PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013. View at Publisher · View at Google Scholar · View at Scopus
  12. A. LaTorre, S. Muelas, and J. M. Peña, “A comprehensive comparison of large scale global optimizers,” Information Sciences, 2014. View at Publisher · View at Google Scholar
  13. C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, “A genetic algorithm approach for localization of deep sources in MEG,” Physica Scripta, vol. T118, pp. 140–142, 2005. View at Publisher · View at Google Scholar
  14. E. Cuevas, D. Zaldivar, and M. Pérez-Cisneros, “A novel multi-threshold segmentation approach based on differential evolution optimization,” Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010. View at Publisher · View at Google Scholar · View at Scopus
  15. A. Ochoa, M. Tejera, and M. Soto, “A fitness function model for detecting ellipses with estimation of distribution algorithms,” in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010. View at Publisher · View at Google Scholar · View at Scopus
  16. A. LaTorre, S. Muelas, J.-M. Peña, R. Santana, Á. Merchán-Pérez, and J.-R. Rodríguez, “A differential evolution algorithm for the detection of synaptic vesicles,” in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011. View at Publisher · View at Google Scholar · View at Scopus
  17. R. Santana, S. Muelas, A. LaTorre, and J. M. Peña, “A direct optimization approach to the P300 speller,” in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011. View at Publisher · View at Google Scholar · View at Scopus
  18. S. Muelas, J. M. Peña, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, “Optimizing the design of composite panels using an improved genetic algorithm,” in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
  19. W. Wang, Z. Xiang, and X. Xu, “Self-adaptive differential evolution and its application to job-shop scheduling,” in Proceedings of the 7th International Conference on System Simulation and Scientific Computing—Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008. View at Publisher · View at Google Scholar · View at Scopus
  20. S. M. Elsayed, R. A. Sarker, and D. L. Essam, “GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems,” in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011. View at Publisher · View at Google Scholar · View at Scopus
  21. A. LaTorre, S. Muelas, and J. M. Peña, “Benchmarking a hybrid DERHC algorithm on real world problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.
  22. S. N. Parragh, K. F. Doerner, and R. F. Hartl, “Variable neighborhood search for the dial-a-ride problem,” Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010. View at Publisher · View at Google Scholar · View at Scopus
  23. S. Muelas, A. Latorre, and J.-M. Peña, “A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city,” Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013. View at Publisher · View at Google Scholar · View at Scopus
  24. J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, “An inverted ant colony optimization approach to traffic,” Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014. View at Publisher · View at Google Scholar
  25. S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, “Swarm intelligence applied in green logistics: a literature review,” Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015. View at Publisher · View at Google Scholar
  26. A. Gálvez, A. Iglesias, and J. Puig-Pey, “Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction,” Information Sciences, vol. 182, pp. 56–76, 2012. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  27. J. Yang, F. Liu, X. Tao, and X. Wang, “The application of evolutionary algorithm in B-spline curve fitting,” in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012. View at Publisher · View at Google Scholar · View at Scopus
  28. A. Gálvez and A. Iglesias, “A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing,” Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013. View at Publisher · View at Google Scholar · View at Scopus
  29. L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, “Parameter optimization for Bezier curve fitting based on genetic algorithm,” in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013. View at Publisher · View at Google Scholar
  30. Centro de Supercomputación y Visualización de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es/.
  31. P. Jonas, G. Major, and B. Sakmann, “Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus,” The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993. View at Publisher · View at Google Scholar · View at Scopus
  32. K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, “A monte carlo model reveals independent signaling at central glutamatergic synapses,” Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002. View at Publisher · View at Google Scholar
  33. D. A. Rusakov and D. M. Kullmann, “Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation,” The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998. View at Google Scholar
  34. K. Zheng, A. Scimemi, and D. A. Rusakov, “Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns,” Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008. View at Publisher · View at Google Scholar · View at Scopus
  35. A. Momiyama, R. A. Silver, M. Häusser et al., “The density of AMPA receptors activated by a transmitter quantum at the climbing fibre—Purkinje cell synapse in immature rats,” The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003. View at Publisher · View at Google Scholar · View at Scopus
  36. D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988. View at Publisher · View at Google Scholar · View at MathSciNet
  37. D. M. Bates and J. M. Chambers, “Nonlinear models,” in Statistical Models, S. J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/cole, 1992. View at Google Scholar
  38. MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a/.
  39. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
  40. J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975. View at MathSciNet
  41. F. Herrera and M. Lozano, “Gradual distributed real-coded genetic algorithms,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000. View at Publisher · View at Google Scholar · View at Scopus
  42. R. Storn and K. V. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at Publisher · View at Google Scholar · View at MathSciNet
  43. J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006. View at Publisher · View at Google Scholar · View at Scopus
  44. H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, “A scalability test for accelerated de using generalized opposition-based learning,” in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009. View at Publisher · View at Google Scholar · View at Scopus
  45. F. J. Solis and R. J. B. Wets, “Minimization by random search techniques,” Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  46. L.-Y. Tseng and C. Chen, “Multiple trajectory search for large scale global optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008. View at Publisher · View at Google Scholar · View at Scopus
  47. N. Hansen and A. Ostermeier, “Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996. View at Scopus
  48. N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001. View at Publisher · View at Google Scholar · View at Scopus
  49. A. Auger and N. Hansen, “A restart CMA evolution strategy with increasing population size,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005. View at Scopus
  50. H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
  51. G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005. View at Publisher · View at Google Scholar
  52. J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
  53. W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
  54. S. García, D. Molina, M. Lozano, and F. Herrera, “A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization,” Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009. View at Publisher · View at Google Scholar · View at Scopus
  55. F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945. View at Publisher · View at Google Scholar
  56. A. LaTorre, S. Muelas, and J.-M. Peña, “Evaluating the multiple offspring sampling framework on complex continuous optimization functions,” Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013. View at Publisher · View at Google Scholar · View at Scopus
  57. S. Muelas, J. M. Peña, V. Robles, A. LaTorre, and P. de Miguel, “Machine learning methods to analyze migration parameters in parallel genetic algorithms,” in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007. View at Google Scholar