Computational Intelligence and Neuroscience
Volume 2014, Article ID 383790, 12 pages
http://dx.doi.org/10.1155/2014/383790
Research Article

A Two-Layered Diffusion Model Traces the Dynamics of Information Processing in the Valuation-and-Choice Circuit of Decision Making

1Department of Medicine, Surgery & Neurosciences, University of Siena, Viale Bracci 2, 53100 Siena, Italy
2Eye-Tracking & Visual Application Lab, University of Siena, Viale Bracci 2, 53100 Siena, Italy
3Department of Social, Political and Cognitive Sciences, University of Siena, Via Roma 56, 53100 Siena, Italy

Received 29 October 2013; Revised 18 July 2014; Accepted 7 August 2014; Published 31 August 2014

Academic Editor: Pablo Varona

Copyright © 2014 Pietro Piu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A circuit that evaluates and selects among alternatives is considered a reliable model in neurobiology. The prominent contributions of the literature on this topic are reviewed. In this study, the valuation and choice stages of a decisional process during a Two-Alternative Forced-Choice (TAFC) task are represented as a two-layered network of computational cells, where information accrual and processing progress with nonlinear diffusion dynamics. The evolution of the response-to-stimulus map is thus modeled by two linked diffusive modules (2LDM) representing the neuronal populations involved in the valuation-and-decision circuit of decision making. Diffusion models are naturally appropriate for describing the accumulation of evidence over time. This allows the computation of the response times (RTs) in valuation and choice, under the hypothesis of an ex-Wald distribution. A nonlinear transfer function integrates the activities of the two layers. The input-output map based on the infomax principle makes the 2LDM consistent with the reinforcement learning approach. Results from simulated likelihood time series indicate that the 2LDM may account for the activity-dependent modulatory component of effective connectivity between the neuronal populations. Rhythmic fluctuations of the estimated gain functions in the delta-beta bands also support the compatibility of the 2LDM with the neurobiology of DM.

1. Introduction

Even simple decisions imply higher cognitive functions that integrate noisy sensory stimuli, prior knowledge, and the costs and benefits of possible actions as a function of their time of occurrence. Accumulation of noisy information is a reliable pattern performed by neural pools in cortical circuitry during the decision making (DM) process. This process is time-consuming, especially when the quality of information is poor and there are many possible alternatives to be evaluated and compared. There is broad consensus in DM studies on the existence of a phase of accumulation of evidence until a decision is made [1–11]; that is, the decision maker is expected to keep gathering information until the evidence in favor of one of the alternatives suffices. Thus, the stochastic integration of information up to a threshold gives rise to a speed-accuracy tradeoff (accuracy increases with slower response times) that is bounded by the costs of obtaining more information. In this context the response times (RTs) to the stimuli characterize the speed-accuracy tradeoff because they allow the identification of the time when a decision is made (although not yet completed by the motor action) [12]. RT studies have addressed the implementation of diffusive models for describing decisional behaviors and the identification of the neuronal areas related to decisional activity. DM is a process that involves different areas of the brain. These regions include the cortical areas that are supposed to integrate evidence supporting alternative actions and the basal ganglia (BG), which are hypothesized to act as a central switch gating behavioral requests [9–15]. Neurons in the middle temporal area (MT) are known to encode motion stimuli [13], while the decision process itself occurs in other areas, including the posterior parietal cortex and the prefrontal cortex.
Perceptual choice experiments with primates [2, 14] made it possible to relate the selective activation of neurons in the lateral intraparietal area (LIP) to the perceptual choice and the response time [15]; this activity persists throughout a delay between the stimulus and the saccadic movement. This implies that LIP neurons respond neither purely to a motor signal nor simply to sensory input [16]. Rather, LIP neurons are also supposed to contribute to the working memory associated with guiding the eye movement [17]; that is, they would store information about the target location. Neurons in the prefrontal cortex display similar properties during visual motion discrimination tasks [18]. Further studies of human neuroimaging and monkey single-neuron physiology have supported the hypothesis that the parietal and frontal cortices form a system for the temporal accrual of data and categorical decision making. These areas would exert executive control on sensory neurons by providing top-down signals that convey information on semantic categorization derived from the stimulus-response association [19, 20].

In natural environments several sensory stimuli produce different alternatives and hence demand the evaluation of different possible responses, that is, a variety of behaviors. In other terms, a selection problem also arises [21], whereby the (probability) distribution of the correct response has to take control of the individual’s motor plant [22]. Action selection would then resolve a conflict among decisional centers throughout the brain. A central switch that weighs the urgency and opportunity of specific responses to the stimuli is an optimal solution in computational terms, and it is physiologically plausible to take the basal ganglia (BG) as the neural basis for that switch. Accordingly, the BG gather input from all over the brain and, by sending tonic inhibition to midbrain and brain stem targets involved in motor actions, block the cortical control over these actions [23–25]. Therefore, the inhibition of the neurons in the output nuclei, caused by BG activity, determines the disinhibition of their targets, and the corresponding actions are selected. In other words, the BG, by acting as a central switch, would evaluate the evidence and facilitate the best-supported responses [22, 26, 27]. Many studies have reported a significant increase in the firing rate of the neurons of cortical areas representing the alternative choices during DM in visual tasks. The increase of the firing rates would then provide accumulation of evidence (i.e., information) related to the alternatives [1, 2]. The association between neural firing rates and the DM process is by now an accepted fact, and some points are worth mentioning. The ramping of the firing rates does not merely anticipate the motor action but also relates to target selection. Moreover, the rate of growth of the neural activity is proportional to the response times, and so it may predict the decision time.
In fact, it triggers the spiking burst of the downstream neurons (in SC and caudate), and their crossing of a defined threshold level marks the decision time. The ramping of the firing rates is also proportional to the prior probabilities of the alternatives and to their probabilities of being rewarded.

The main purpose of this work was to set the theoretical, neurobiologically sustainable bases for representing the two stages of valuation and choice of DM during a Two-Alternative Forced-Choice (TAFC) task in terms of two distinct layers of neuronal populations performing diffusive dynamics (2LDM), under the assumption that in DM among alternative options the cortical areas (lateral prefrontal and parietal cortex) integrate the corresponding weighted evidence of the alternatives, whilst the ventromedial prefrontal cortex and the striatum encode the value of the different options [28]. The secondary objective was to verify the ability of the 2LDM to account for the possible influence that the populations exert over each other. Therefore, time series reproducing the probability of performing the motor action (visual targeting) during a Two-Alternative Forced-Choice (TAFC) visual task were simulated. The power spectrum of the gain functions and synchronization analysis of the instantaneous phases of the activities of the neuronal populations in the two layers of the model suggested activity-dependent modulation of the effective connectivity between the populations. The so-called Two-Alternative Forced-Choice (TAFC) task has often characterized the experimental setting for DM analysis [5, 6, 11]. Bogacz and coauthors [29] showed that TAFC task models typically make three fundamental assumptions: (a) evidence favoring each alternative is integrated over time; (b) the process is subject to random fluctuations; and (c) the decision is made when sufficient evidence has accumulated favoring one alternative over the other. The major issue about the modality of integration of evidence is generally solved in favor of integrating the difference in evidence, rather than integrating the evidence for each alternative independently.
The application of diffusion models to the study of cognitive processes was introduced by Ratcliff [5], and since then they have retained their theoretical soundness in the context of the analysis of decision making under uncertainty [1, 2, 4, 6–11], because the diffusion model is relatively simple and well characterized [30] and has been proven to implement the optimal mechanism for TAFC decision making [22, 31]. In applying the diffusion model to the TAFC task, it is assumed that the accrual of noisy evidence corresponding to the two alternatives is carried on until their difference reaches a decisional threshold (Figure 1).

Figure 1: Drift diffusion model. The randomness of the path taken under the influence of noisy stimuli characterizes the diffusion models. A stimulus is represented in a diffusion equation by its influence on the drift rate of a random variable. This random variable, say the difference of evidence corresponding to the alternatives, accumulates the effects of the inputs over time until one of the boundaries is reached. The decision process ends when the evidence reaches the threshold, and the time at which this occurs is called the response time (RT). The RT depends on (a) the distance between the boundaries and the starting point, (b) the drift, that is, the rate at which the average (trend) of the random variable changes, and (c) the diffusion, that is, the variability of the path around the trend. These elements characterize the so-called drift diffusion model (DDM). The accumulation of evidence is then driven both by a deterministic component (drift) that is proportional to the stimulus intensity and by a stochastic noise component (diffusion) that makes the evidence deviate from its own trend. The rationale of the DDM is that, since the transmission and codification of the stimuli are inherently noisy, the quality of the feature extraction from such inputs may call for the accumulation of a sufficiently large sequence of stimuli to extract information [34]. Knowing the threshold level and the RT enables one to gain insight into the mechanism underlying the decision process [12, 88]. We can draw an analogy with a physical system and imagine the decisional process as the state of a “particle” moving within a potential well. Under this point of view, the persistence for relatively long periods of the state variable in the subthreshold area implies that the particle, still entangled in the potential well, enters an excited state where it remains for an exponentially distributed time interval with a certain decay time.
If the combination of input and noise is sufficiently strong, then the particle is able to jump the barrier, that is, the threshold, and the system returns to an equilibrium state. The dynamics of the particle thus may resolve into a relaxation process [38] characterized by oscillations between periods of subthreshold “disorder” inside the potential well and short impulses that push the system beyond the threshold into the rest state. This physical analogy gives a better sense of how the DDM may fit the evolution of the input-output map underlying the neuronal model of the decision making process.
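As an illustration of the mechanism in Figure 1, the DDM can be simulated directly. The sketch below is not taken from the paper; the parameter values are arbitrary choices for demonstration.

```python
import numpy as np

def simulate_ddm(drift=0.3, noise=1.0, threshold=1.0, dt=1e-3,
                 max_time=10.0, rng=None):
    """Simulate one DDM trial; return (choice, response time).

    The decision variable accumulates a deterministic drift plus
    Gaussian diffusion noise until it hits one of the two boundaries.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while t < max_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return +1, t    # boundary of the drift-favored alternative
        if x <= -threshold:
            return -1, t    # opposite boundary (error response)
    return 0, max_time      # no decision within the deadline

rng = np.random.default_rng(0)
choices, rts = zip(*(simulate_ddm(rng=rng) for _ in range(500)))
accuracy = np.mean(np.array(choices) == 1)
mean_rt = np.mean(rts)
```

For drift a, threshold z, and noise s, the probability of hitting the correct boundary is 1/(1 + exp(−2az/s²)), about 0.65 for the values above; raising z slows the simulated responses but increases this probability, which makes the speed-accuracy tradeoff explicit.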

It has been shown [5, 31] that, in experiments with human subjects performing TAFC tasks, the drift diffusion model accounts for both accuracy and reaction times (RTs). An advantage of the drift diffusion model (DDM) is that, for a given level of accuracy, it is the fastest decision maker. Accuracy tends to increase as the threshold rises, which results in a speed-accuracy tradeoff. This tradeoff is usually considered a basic parameter for interpreting the results of both behavioral and neurological experiments [12, 15, 32]. The surprising capability of the DDM to fit behavioral and neurological data seems to indicate that some decision making processes in the brain are really computed by a similar evidence-accumulating mechanism [33].

However, the canonical diffusion models assume that momentary evidence is accumulated continuously and at a constant rate, that is, linearly, over time until a decision threshold is reached [2, 15, 34, 35]. The assumption of linear integration of evidence in human decision making has recently been criticized because it misses the occurrence of refractory periods (“decisional blinks”) during DM, which are known in the psychological literature [36, 37]. On the contrary, the rate of evidence accumulation during DM has been found to fluctuate rhythmically in the delta band as a mechanism of sensory and attentional selection [38–40]. The static linear-nonlinear transfer functions (LN cascades) implemented in the 2LDM for modelling the probability of firing of the neuronal populations in response to the stimuli realize the hypothesis of nonlinear momentary evidence accumulation. Moreover, in the 2LDM a saturating, that is, sigmoidal, activation function is used; therefore, the link between mean population depolarization and expected firing rate (i.e., the input-output map) is parameterized by the slope of the sigmoidal function. Interestingly, the slope is allegedly related to the first- and second-order convolution kernels of the Volterra series, which represent a sufficient specification of population dynamics [41–43]. The driving influence and the activity-dependent modulatory components of the effective connectivity between neuronal populations can therefore be estimated by analyzing the synchronization between the gain functions of the two layers. Specifically, modulatory connections are revealed by asynchronous coupling, whilst driving connections relate to synchronous interactions [44].

Although evidence accumulation and choice formation are usually described as a one-stage process, such that a decision is given as soon as the decisional variable reaches a threshold, it is empirically still unknown whether decision making is performed in a single neuronal circuit [45]. Merging the two recognized stages of evaluation of alternatives and behavior/motor-action selection into one stage not only renders a model unable to explain the overlapping (feature-fusion) phenomenon arising from stimuli succeeding at high rate [46], but also seems counterintuitive with respect to the well-established statement that decisions are not unitary events, since they derive from two distinct sequential processes [28, 47]. Neither would a one-stage diffusion model be able to represent the nonlinear interactions between the cortical and subcortical neuronal populations involved throughout the brain, nor would it properly describe high-conflict choices over long time scales (>3 s), nor would it yield well-distinguished estimates of the times for evaluation and action selection. From a computational point of view, a single stage would be a natural outcome only if the activation functions of the computational layers in the multilevel neuronal network were linear, because any multilayer neuronal network, under the condition of linearly separable (i.e., independent) input patterns, could be reduced to a single layer of linear units. However, this assumption is not reasonable in a neurobiological context, because it rules out the nonlinear coupling among brain areas, that is, the activity-dependent connections.

The most intriguing two-stage models have been proposed in terms of integrate-and-fire attractor networks [33, 48–50], where the first network evaluates, through competitive learning, the evidence-biased firing rates of the neurons responding to each of the possible choices and consequently takes a provisional decision in favor of the most valued input. The second network provides final decisions on the basis of the confidence in the first-level decision, so that changes of the first-level decisions are possible. Positive feedback makes the integrate-and-fire attractor network a nonlinear model and hence consistent with the neurophysiology. Strikingly, the attractor network exhibits, at the local (i.e., cortical) level, nonlinear diffusive dynamics [48], where the biasing input stands for the drift and the stochastic spiking of the neurons provides the diffusion component. Hence, both the attractor network model and the 2LDM consider decision making inherently a process that involves two levels of computation with nonlinear diffusion dynamics. The attractor network model also contemplates the role of the basal ganglia as the driving system of the “global” competition for action/behavior selection, but in that case through a linear diffusion process. This marks the difference from the 2LDM, where the possible implication of BG activity is to be considered in terms of a nonlinear diffusion system. As mentioned above, independent, separable input patterns are necessary for linear integration, but this would hinder adaptive mechanisms. Moreover, since the adaptive tuning of the decision threshold is expected to be modulated by reward signals [51, 52], the dopamine-dependent corticostriatal synapses are described as the neurobiological locus of threshold modification [53].
This finding strengthens the assumption of nonlinear behavior of the BG, since spiny projection neurons display bistable behavior [54], and bistability calls for nonlinearity, feedback, and hysteresis, conditions consistent with the implementation of reinforcement learning in the BG.

The paper is organized as follows.

Section 2 is about neuronal population codes and the relationship between interpulse intervals and response times. The last part of the section is dedicated to the description of the two-layered diffusion model (2LDM).

Section 3 presents the results of the synchronization analysis between the instantaneous phases of the activities of the two neuronal populations and the power spectrum of the gain functions from the application of the 2LDM to simulated data, obtained by resampling time series of the probability of visual targeting (likelihood) recorded during a Two-Alternative Forced-Choice (TAFC) visual task.

Section 4 summarizes and discusses the main results and adds some comments about computational and neurobiological implications or potential developments of the 2LDM.

The appendix deals with statistical theory on distances between features.

2. Two-Layered Diffusion Model (2LDM)

2.1. Population Code

As long as the cells in a neural population have similar response properties, that is, act in a statistically similar way [55], the brain collects and organizes information from patterns of activity involving populations of neurons [56, 57]. Sanger [55] also describes how the input-output (stimulus-response) map stems from the modulation of information; that is, calculation on values represented by population codes (encoding) and feature extraction about the input stimuli (decoding) may be seen, in the brain, as relations between different population codes that provide internal representations of the input-output map. In this perspective, then, computation in the brain relies on commutations from one internal representation to another. By assuming that populations of neurons regulate the responses to stimuli, we can consider the effect of the accumulation of activity from a combination of two neural populations. The gathering and processing of information during the experiments would then elicit spike trains from the cells of each population within the observation interval. If we count the number of spikes emitted up to a given time, we obtain the variables that represent the sequence of evidence accumulation. Over time, the occurrence of noise makes these variables stochastic, and so the process of accumulation of spikes from the neural population traces a random pattern that is expected to end when it encounters a bound at a finite time [12]. The two processes by which the neural system learns the input stimuli thus determine the behavior of the decisional variable. This learning activity would then give rise to a first level of codification that provides the elaboration and probabilistic valuation of the input. In fact, we can imagine a binary code where the “1s” correspond to the times the bound is trespassed. Afterwards, the codification of the first population is “translated” into the second population.
This provides another binary code based on the overtaking of the second threshold, which ultimately drives the eye movements during the computational task. Hence, we can “translate” the likelihood into a pulsed binary code, say the -code, a nonlinear transform that at a given time assumes the value “1” (pulse) if the likelihood exceeds the threshold, or the value “0” (no pulse) otherwise.

2.2. Interpulse Intervals and Response Times

After the signal has been reformulated on the basis of the -code (Figure 2), we obtain a string of symbols, and the lengths of the sequences of zeros provide the holding times, that is, the empirical interpulse intervals (IPI); in other words, the recoded variable results in periods of subthreshold location interrupted by sequences of impulses. Analogously, from the second recoded variable we obtain the corresponding empirical holding times given its transform. Thus we can imagine a functional chain among the bounds, scaled by some opportune nonlinear transfer functions (without loss of generality, we set ). Let us assume that the pulse train behaves as a renewal process. The expected value and the variance of a renewal process may be obtained from the observed IPI data. In fact, for large times, the pulse count is normally distributed, with mean and variance determined by the mean and the standard deviation of the corresponding IPI sequence [58, 59]. Therefore, the time series can be reconstructed by averaging out over the neural population a Gaussian random variable with that mean and variance.
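The normal approximation quoted above can be checked numerically: for a renewal process with IPI mean μ and standard deviation σ, the pulse count up to time t is approximately Gaussian with mean t/μ and variance tσ²/μ³ when t is large. A minimal sketch with gamma-distributed intervals (an arbitrary choice for illustration, not a distribution used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 4.0, 0.25      # gamma IPIs: mean mu = 1.0, variance 0.25
mu = shape * scale
var_ipi = shape * scale**2
t_obs = 200.0                 # observation window, t_obs >> mu

def count_pulses(rng):
    """Count renewal events (pulses) occurring before t_obs."""
    elapsed, n = 0.0, 0
    while True:
        elapsed += rng.gamma(shape, scale)   # draw the next IPI
        if elapsed > t_obs:
            return n
        n += 1

counts = np.array([count_pulses(rng) for _ in range(2000)])
mean_theory = t_obs / mu                  # 200.0
var_theory = t_obs * var_ipi / mu**3      # 50.0
```

The empirical mean and variance of `counts` match `mean_theory` and `var_theory` closely, which is what licenses reconstructing the count series from a Gaussian with those moments.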

Figure 2: Example of binary encoding of information. The threshold value allows reading the variable as a binary code, where the 1s (pulses) occur when the signal exceeds the threshold. The lengths of the sequences of zeros provide the interpulse intervals (IPI).
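The recoding of Figure 2 can be sketched directly; the signal and threshold below are synthetic placeholders, not data from the experiments:

```python
import numpy as np

def binary_code_and_ipi(signal, theta):
    """Recode a signal as a pulse train (1 where signal >= theta) and
    return the lengths of the zero runs between consecutive pulses (IPI)."""
    pulses = (np.asarray(signal) >= theta).astype(int).tolist()
    ipis, run, seen_pulse = [], 0, False
    for bit in pulses:
        if bit == 1:
            if seen_pulse and run > 0:
                ipis.append(run)      # zero run closed by this pulse
            seen_pulse, run = True, 0
        else:
            run += 1
    return pulses, ipis

signal = [0.2, 0.1, 0.9, 0.3, 0.2, 0.1, 0.8, 0.7, 0.4]
pulses, ipis = binary_code_and_ipi(signal, theta=0.6)
# pulses -> [0, 0, 1, 0, 0, 0, 1, 1, 0]; ipis -> [3]
```

Note that leading and trailing zero runs are discarded, since an interpulse interval is only defined between two pulses.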

The importance of the IPI arises from the hypothesis that the information transferred within the nervous system is usually encoded also by the timing of spikes [60–62]. (Since we are ultimately dealing with a threshold-dependent variable, the first passage time of the scalar diffusion process that describes the evolution of the potential, that is, of the evidence, between two consecutive neuronal firings is implicitly involved; it is the theoretical counterpart of the IPI.) Thus, by studying the properties of the set of times corresponding to the crossings of the threshold, we both establish the relationship between the impulse rate and the IPI and solve the so-called first passage time problem [63], and hence the response time problem as well. We can then consider the IPI as the expression of the response time of the process through the threshold. Given this association, it becomes natural to compare the theoretical distribution of the response times to the observed distribution of the IPI. An impulse is elicited any time the process crosses the threshold and then starts again according to a renewal process. (This assumption is necessary to identify the time series of successive pulse times as a sample extracted from a population with the same distribution of the underlying random variable [64].) The question then is how to model the distribution of the response times. We hypothesized the ex-Wald distribution of the response times [65], whose cumulative distribution function is parameterized by the diffusion coefficient (i.e., the noise of the process), the drift and the threshold of the diffusion process, and the (positive) rate parameter of the exponential component.
The Wald-distributed component of the response time variable has its own cumulative distribution function, expressed through the standard Gaussian distribution. The estimation of the parameters of the response time distribution involves a backward procedure. Firstly, the binary variable must be determined for an initial value of the threshold, so as to obtain the distribution of the corresponding IPI. Secondly, the best combination of parameters for the RT distribution will be chosen as the one which minimizes some error function, say the root mean square error (RMSE) of the difference between the corresponding theoretical distribution of RT and the observed IPI distribution. (Indeed, the RMSE is a quadratic function of the errors and is optimal when the residuals are distributed as normal random variables; in that case the RMSE is a convex surface. On the contrary, in the presence of heavy-tailed distributions of the residuals, the RMSE becomes suboptimal, and it is better to use other criteria, for example, based on entropy measures. However, given the computational complexity involved in the inverse method for deriving the parameters of diffusion models, the RMSE may turn out to be economical.) Lastly, the parameters for the RT distribution of the second-layer variable can be estimated analogously by comparison with its interpulse-interval distribution. Of course, the final result of this sequential procedure depends on the initialization; that is, the particular initial value of the threshold may affect the estimated vector of optimal parameters. Therefore, the question is how to initialize the threshold. Its assignation to the average of the likelihood suggests an interesting interpretation from the information theory perspective. In fact, we may expect that the transfer function should be learned so as to maximize the mutual information between input and output subject to the noise effect. This is the so-called infomax principle, by which the process of stimulus learning gives rise to optimization algorithms [66].
If the noise that affects the system is Gaussian and independent of the input, then the mutual information between input and output reduces to the difference between the entropy of the output and the entropy of the noise [60]. This implies that, to improve information transmission, the entropy of the signal must be maximized. Therefore, the threshold value that corresponds to the maximal entropy of the binary signal is expected to be very close to the one that corresponds to the average likelihood. To recover the mapping from stimulus to impulse rate we can apply a nonlinear transformation of a convolution of the stimuli. By assuming the logistic function for the nonlinear transforming function, its argument is the convolution of the stimuli with an opportune kernel that is obtained in two stages. In the first stage, the transfer function estimate is computed for the input signal and the binary output signal representing the probability of impulses. The relationship between input and output is shaped by the static (i.e., time-invariant) transfer function, that is, the ratio of the cross power spectral density of input and output over the power spectral density of the input. In the second stage, the inverse discrete Fourier transform of this transfer function is computed. Since the inputs are generated from a Gaussian process, the convolution is Gaussian too. According to Bussgang’s theorem, the cross-correlation between input and output scales the autocovariance function of the input by a constant; therefore we can correct the kernel by this factor. Next, the convolution of the stimulus with the corrected kernel forms the argument of the logistic transfer function. This procedure yields a static linear-nonlinear model for the probability of firing of the neuronal populations in response to the stimuli, up to an additive noise term [67].
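The ex-Wald hypothesis describes each RT as the sum of a Wald (inverse Gaussian) first-passage time and an independent exponential component. A simulation sketch follows; the parameter values are illustrative, with the Wald mean standing in for threshold/drift and its shape for threshold²/noise²:

```python
import numpy as np

rng = np.random.default_rng(2)
wald_mean = 1.0      # threshold / drift of the diffusion component
wald_shape = 4.0     # threshold**2 / noise**2
exp_rate = 2.0       # rate of the exponential component

n = 50_000
rt = (rng.wald(wald_mean, wald_shape, size=n)
      + rng.exponential(1.0 / exp_rate, size=n))

# Moments of the ex-Wald: mean = wald_mean + 1/exp_rate,
# variance = wald_mean**3 / wald_shape + 1/exp_rate**2
mean_theory = wald_mean + 1.0 / exp_rate                # 1.5
var_theory = wald_mean**3 / wald_shape + exp_rate**-2   # 0.5
```

Fitting the three free parameters by minimizing the RMSE between the theoretical RT distribution and the empirical IPI distribution, as described above, would close the loop of the backward procedure.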

2.3. Structure of the Model

Let us consider the input-output map between the input and the final state of the decisional variable. Data inflow at a given time proceeds from the external input and from the recurrent output obtained at the previous time. This relation, which implies relatively complex computational paradigms, is mediated by populations of neurons in different areas of the brain. Cells of the first neuron population, activated by the input, respond according to a tuning curve and generate a time series of spikes. A counting variable accumulates the spikes until the first threshold is reached. This event affects the observable variable, and thus the final decisional state, through a second neuron population. The firing of its neurons is integrated in a variable that exceeds the second threshold and ultimately drives the path of the decisional variable. The state of the decisional variable at any time holds the whole information set available up to that time, including the implicit reward corresponding to the state at that time. By aiming at the maximization of the reward, the system would give rise to gap evaluation and error reduction, which ultimately involves a feedback circuitry. In this way, information backpropagates from the decision stage to the valuation stage in order to elicit the adaptation of the threshold in the valuation stage. This mechanism of reinforcement triggers the competition between the alternatives, and the valuation is ultimately addressed to the alternative most likely to be rewarded (Figure 3).

Figure 3: The two-layered diffusion model (2LDM) for decision making. Both stages (valuation and choice) are affected by noise. In the valuation stage the critical threshold indicates the firing rate of the neuronal populations involved, to which the expected reward would correspond. The outputs of this stage are then the differences between the responses of the observed neuronal activity to the stimuli provided by the alternatives and the target. These measurements enter the next stage, where the decision is taken so as to optimize some utility criterion (reward). Hence, the attainment of the threshold in the decision stage indicates the preferred alternative. Feedback information flows from the decision stage in order to elicit the adaptation of the boundary in the valuation layer. In this way, a mechanism of reinforcement determines the competition between the alternatives, and the valuation is biased toward the alternative most likely to be rewarded.

3. Simulation

3.1. Methods

In order to test the ability of the model to detect effective interactions between the neuronal populations, simulation of the 2LDM was carried out by resampling time series of conditional probabilities from a previous eye-tracking experiment. Nine subjects had been asked to look at two abstract images displayed on a screen for 5 seconds (s) at randomly assigned locations (left or right side). Each subject performed ten trials. The two images were balanced in extension and in photometric characteristics (color, luminance, and contrast). Eye movements had been recorded during the 5 s period (sampling frequency 1/50 ms), and at the end of that time the subjects declared which of the images they preferred. The likelihood, that is, the probability of visual targeting towards one of the two images conditional on the finally chosen stimulus, was then calculated over the total of 90 choices. One hundred surrogates of this likelihood time series were obtained by the iAAFT technique (iterated amplitude-adjusted Fourier transform) [68], which preserves the marginal distribution and power spectrum of the original signal. Next, Gaussian noise proportional to the standard deviation of the original likelihood was added at thirty randomly selected points in each surrogate series. The run test was applied to the resulting modified iAAFT surrogates, and only those for which the null hypothesis of mutual independence of the elements in the sequence was rejected were retained. This procedure guaranteed the generation of forty realistically structured data vectors to which the 2LDM was applied. A paired t-test was performed to compare the rates of the populations' activity variables. The power spectrum of the gain functions calculated for the two layers was reported. Hilbert transforms of the average rates of the populations' activity variables were produced so as to derive their instantaneous phases [69].
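The surrogate-generation step can be sketched with a minimal iAAFT implementation; the iteration count and random seed below are illustrative choices, not taken from the study:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterated amplitude-adjusted Fourier transform surrogate [68]:
    preserves the power spectrum and marginal distribution of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    amp = np.abs(np.fft.rfft(x))        # target spectral amplitudes
    sorted_x = np.sort(x)               # target marginal distribution
    s = rng.permutation(x)              # start from a random shuffle
    for _ in range(n_iter):
        # impose the target power spectrum, keeping current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        # impose the target marginal by rank-order remapping
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s
```

Each iteration alternately imposes the original power spectrum and the original marginal distribution, so the surrogate destroys nonlinear structure while preserving both linear properties of the likelihood series.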
The correntropy coefficient, a measure of correlation in the reproducing kernel Hilbert space (RKHS) that is appropriate for nonlinear relationships [70], operated as a coefficient of phase locking (i.e., synchronization) between the instantaneous phases of the two layers. Correntropy measures were calculated dynamically, that is, in running windows (of depth = 6 data points), over the phase signals.
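A minimal sketch of a centered correntropy coefficient with a Gaussian kernel follows; the kernel bandwidth `sigma` is a free parameter that would need tuning to the phase signals:

```python
import numpy as np

def correntropy_coefficient(x, y, sigma=1.0):
    """Centered correntropy coefficient (Gaussian kernel): a nonlinear
    correlation measure in the RKHS induced by the kernel [70]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = lambda u: np.exp(-u**2 / (2 * sigma**2))

    def centered(a, b):
        # pointwise kernel mean minus the mean over all cross pairs
        return k(a - b).mean() - k(a[:, None] - b[None, :]).mean()

    num = centered(x, y)
    den = np.sqrt(centered(x, x) * centered(y, y))
    return num / den if den > 0 else 0.0
```

Applied in running windows of 6 samples, as in the text, this yields a time-resolved synchronization index: identical inputs give a coefficient of 1, while independent inputs give values near 0.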

To use the phase locking indices in a meaningful way, we need to know their distribution under the null hypothesis of independent pairs of oscillatory activity. Only values that depart significantly from what would be expected for independent oscillators can be considered to reveal the presence of synchronization. The distribution of the index, computed for pairs drawn randomly from the surrogate ensembles, can be taken as an approximation of the distribution under the null hypothesis [71]. Therefore, the iAAFT surrogates of both average rates of the populations' activity variables were Hilbert-transformed, the resultant instantaneous phases obtained, and the time series of correntropy values between the surrogate phases computed over running windows (of the same size as before).
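The instantaneous phase itself can be obtained from the analytic signal, for instance via `scipy.signal.hilbert` (a standard implementation of the Hilbert-transform approach of [69]):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Instantaneous phase of a real signal via its analytic signal."""
    return np.angle(hilbert(x - np.mean(x)))
```

For a narrowband oscillation the unwrapped phase grows roughly linearly, and its derivative recovers the instantaneous frequency.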

To test the null hypothesis that the mean of the distances between the features is zero, the Weibull-like distribution of the distance variable was considered (see the appendix).
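The testing logic can be sketched as follows: fit a Weibull distribution to the distance values and flag as synchronized the points exceeding its 95th percentile. The shape and scale values below are hypothetical placeholders, not the parameters estimated in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical distance values; in the study the parameters were
# estimated from the surrogate ensemble.
distances = rng.weibull(1.5, size=500) * 0.4   # shape=1.5, scale=0.4
shape, loc, scale = stats.weibull_min.fit(distances, floc=0)
critical = stats.weibull_min.ppf(0.95, shape, scale=scale)
synchronized = distances > critical  # time points flagged as phase-locked
```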

3.2. Results

The resampled time series of the likelihood of visual targeting at the finally selected image had mean = 0.5853 and SD = 0.095. The original likelihood data series had mean = 0.6630 and SD = 0.0951. A paired-samples t-test was conducted to compare the rates of the populations' activity variables. There was a significant difference between the rates of the first population (mean = 0.2775, SD = 0.001) and those of the second (mean = 0.1891, SD = 0.0009); t(99) = 649.85. The power spectrum of the gain function of the second layer showed higher components than that of the first up to the (lower bound of the) beta band (Figure 4). The level of synchronization between the instantaneous phases, calculated by the Hilbert transform of the rates of the populations' activity variables, was determined in terms of correntropy coefficients (Figure 5). Departures from zero values indicate phase locking. To test the null hypothesis of an asynchronous state, the vector of correntropies between the surrogate instantaneous phases was taken as representative of the null hypothesis. The distance between the two correntropy vectors was expected to be distributed as a Weibull random variable (see the appendix and Figure 6). According to the Weibull-like distribution, we found that the test statistic, mean(distance)/S.E.(distance) = 15.323, was significantly different from zero. Values in the distance feature vector greater than the critical value of 0.756 (at the 5% significance level) revealed the times at which synchronization occurred (Figure 7). Synchronized activities of the neuronal populations were concentrated in a restricted time interval and also showed an isolated peak.

Figure 4: Gain functions. The plot displays the gain functions of the neuronal populations of the two layers. Both showed prominent rhythmic activity in the delta band; increased oscillations up to the beta band characterized the second population.
Figure 5: Time course of the correntropy coefficient between the phase signals of the two layers and between the corresponding surrogate phase signals. Correntropy is a measure of nonlinear correlation obtained by projecting the original vectors onto the reproducing kernel Hilbert space. Zero values correspond to independence between the signals.
Figure 6: Cumulative distribution function of the distance between the correntropy coefficients of the phase signals and of their surrogates, which follows a Weibull-like random variable.
Figure 7: Distance between the correntropy coefficients measured for the phase signals and their surrogates. Synchronized interaction between the two neuronal populations was identified where the correntropy distance exceeded the critical value (0.756), calculated from the Weibull distribution at the 5% significance level. Asynchronous interaction was prominent.

4. Conclusions

The model presented in this study assumes that the trajectories of an observable variable induced by the TAFC decision-making task are conditional on the final decisional state, and so they trace the information processing. Under this hypothesis, the possible association between the formation of a decision, as determined by the trajectory, and the final state of the decisional process can be investigated by considering that populations of neurons determine the neuronal responses to stimuli (Figure 3). More specifically, it is hypothesized here that the likelihood series are generated by the sequential activation of two neuronal populations and that the decisional process is the effect of the accumulation of activity by a pool of neuron populations. This engenders diffusive dynamics of the accumulated evidence. Thus, the proposed model, 2LDM, is, to a certain extent, an implementation of the two-stage circuitry of valuation and decision, which is computationally reliable in terms of both neurobiology and Bayesian theory [72, 73]. From this perspective, the likelihood ultimately relies on switches from one internal representation to another, according to their diffusive processes of activation.

There is a theoretical linkage between the 2LDM and the well-recognized integrate-and-fire attractor network model [33, 48–50], since both models rely on nonlinear diffusive dynamics. The major difference rests in the expected dynamics of the basal ganglia involved during the decision-making process, which we considered driven by nonlinear rather than linear patterns. Furthermore, the characterization of the input-output map in terms of the infomax principle ultimately makes the 2LDM an entropy-thresholding algorithm in which the model's parameters (threshold, diffusion noise, and drift) should be tuned to maximize the mutual information between the representations they engender and the inputs that feed the layers. This is consistent with Q-learning adaptation, since learning the "best" action on the two thresholds to maximize the cumulative entropy is equivalent to learning the optimal behavior that maximizes the reward [74, 75]. Nonlinearity in the 2LDM is given by static linear-nonlinear functions that express the gain of the input-output map, thereby overcoming the theoretical weakness inherent in canonical diffusion models, which assume that momentary evidence is accumulated continuously and at a constant rate, that is, linearly, until a decision threshold is reached. This way of modeling nonlinear dynamics is not a novelty in neuroscience, because it fits the Volterra series representation which, through the first- and second-order kernels, estimates the driving and modulatory influence that one population exerts on the other. The slope of the sigmoidal transfer function yields information about the effective connectivity between the neuronal populations, because it is a proxy for the Volterra kernels [76].
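The stated equivalence can be illustrated with a toy Q-learning loop in which the "actions" are candidate threshold settings and the "reward" is the entropy of the thresholded output; the channel, grid, and learning constants below are illustrative stand-ins, not the 2LDM's actual tuning procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
thresholds = np.linspace(0.1, 1.0, 10)   # candidate threshold settings
Q = np.zeros(len(thresholds))            # learned value of each setting
alpha, eps = 0.1, 0.2                    # learning rate, exploration

def entropy_reward(theta):
    """Stand-in reward: entropy (in bits) of the binarized output of a
    uniform input channel thresholded at theta."""
    x = rng.random(200)
    p = np.clip((x > theta).mean(), 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

for _ in range(2000):
    # epsilon-greedy action selection over the threshold grid
    a = rng.integers(len(thresholds)) if rng.random() < eps else Q.argmax()
    Q[a] += alpha * (entropy_reward(thresholds[a]) - Q[a])

best = thresholds[Q.argmax()]
```

With a uniform input, the learned best threshold tends to settle near the median (0.5), the setting that maximizes the output entropy, mirroring the entropy-thresholding reading of the infomax principle.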

Simulation was used to test the ability of the 2LDM to represent interactions between the neuronal populations on realistic time series and did not aim at investigating the underlying cognitive process. Synchronous interaction was present within a restricted median time interval, where, supposedly, the dynamics of the two neuronal populations were mutually reinforcing [44]. Otherwise, asynchronous interaction was prominent. This kind of finding is expected for modulatory (i.e., top-down) connections rather than for a driving influence. Neurobiological consistency of the results was also found in terms of the power spectrum of the gain functions, which showed rhythmic oscillations in the low-frequency bands (from the delta to the beta band). The spectral content of neuronal activity in the circuits of valuation and choice may yield information about the mechanisms underlying DM [77]. In fact, neuronal oscillations are associated with reverberating activity at local and large scales [78], and reverberation would elicit prolonged accumulation of evidence during decision making [79]. Delta-band oscillations in cortical areas have been associated with attention [80], while synchronization in the delta band is reported to be widespread, modulated by the different decision alternatives, and context specific [81]. Theta-band oscillations are expected to operate in many cognitive functions, including memory and DM [82, 83]. Striatal oscillations in the theta frequency range are prevalent, but activity in lower bands is also observed [84]. In a study of DM [85], oscillations in the alpha and beta frequency bands were found to be synchronized with the phase of delta and theta oscillations (phase-amplitude coupling) in the medial frontal cortex. This synchronization might reflect a mechanism of feedback valence coding in the medial frontal cortex.
Beta-band activity has been linked to reverberation, a possible mechanism for memory consolidation and accumulation of evidence [86], as well as to the computational operations of DM rather than to the neuronal representation of the sensory evidence [87]. Our finding of increased beta activity (although at the lower bound of the beta frequency range) in the second neuronal population, which would perform the selection of the optimal alternative, seems consistent with this latter perspective.

Improvement in the optimization of the 2LDM parameters is expected from considering error functions other than the RMSE when the distribution of the residuals is not Gaussian but heavy-tailed, exhibiting large skewness and kurtosis. A challenging task would be the implementation of further layers for studying the subcircuits possibly involved in the valuation or choice stages of DM (e.g., the direct and indirect pathways in the BG). Finally, the application of the 2LDM to specific cognitive experimental tasks would yield information about how speed-and-accuracy performance may vary on the basis of some psychometric or behavioral smoothing parameter.

Appendix

Statistics of Distances between Features

To measure the similarity between two feature vectors, many distance measures have been proposed [89, 90]. A common metric class of measures is the $L^p$-norm. The distance from one reference vector $\mathbf{x}$ to another feature vector $\mathbf{y}$ can be formalized as
$$d_p(\mathbf{x}, \mathbf{y}) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}.$$
In order to derive the distribution of this variable we can refer to the following.

Lemma A.1. For nonidentical and correlated random variables, their sum is distributed according to the generalized extreme value distribution (Gumbel, Fréchet, or Weibull).

Lemma A.2. If in Lemma A.1 the variables are upper bounded, the sum of variables is Weibull-distributed.

Theorem A.3. For nonidentical, correlated, and upper bounded variables , the random variable , expressed by their sum, adheres to the Weibull distribution.

Corollary A.4. For finite length feature vectors with nonidentical, correlated, and upper bounded values, the -distances for limited , from one reference feature vector to other feature vectors, adhere to the Weibull distribution.
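Corollary A.4 can be checked numerically: build feature vectors with nonidentical, correlated, and bounded entries, compute their L1 distances to a reference vector, and fit a Weibull distribution. The construction below (a shared common component plus clipped noise) is only one convenient way to induce correlation and boundedness:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def lp_distance(x, y, p=1):
    """Minkowski (L^p) distance between two feature vectors."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

# Correlated, bounded features: a shared component plus individual
# noise, confined to [0, 1] so the summands are upper bounded.
common = rng.random((2000, 1))
features = np.clip(0.5 * common + 0.5 * rng.random((2000, 50)), 0, 1)

reference = features[0]
d = np.array([lp_distance(reference, f) for f in features[1:]])

# Fit a (two-parameter) Weibull to the resulting distances.
shape, loc, scale = stats.weibull_min.fit(d, floc=0)
```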

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. J. D. Schall, “Neural basis of deciding, choosing and acting,” Nature Reviews Neuroscience, vol. 2, no. 1, pp. 33–42, 2001.
  2. M. N. Shadlen and W. T. Newsome, “Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey,” Journal of Neurophysiology, vol. 86, no. 4, pp. 1916–1936, 2001.
  3. K. F. Wong, A. C. Huk, M. N. Shadlen, and X. J. Wang, “Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making,” Frontiers in Computational Neuroscience, vol. 1, no. 6, pp. 1–11, 2007.
  4. P. L. Smith and R. Ratcliff, “Psychology and neurobiology of simple decisions,” Trends in Neurosciences, vol. 27, no. 3, pp. 161–168, 2004.
  5. R. Ratcliff, “A theory of memory retrieval,” Psychological Review, vol. 85, no. 2, pp. 59–108, 1978.
  6. M. Usher and J. L. McClelland, “The time course of perceptual choice: the leaky, competing accumulator model,” Psychological Review, vol. 108, no. 3, pp. 550–592, 2001.
  7. J. I. Gold and M. N. Shadlen, “Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward,” Neuron, vol. 36, no. 2, pp. 299–308, 2002.
  8. D. P. Hanes and J. D. Schall, “Neural control of voluntary movement initiation,” Science, vol. 274, no. 5286, pp. 427–430, 1996.
  9. R. Ratcliff, “The role of mathematical psychology in experimental psychology,” The Australian Journal of Psychology, vol. 50, pp. 129–130, 1998.
  10. R. Ratcliff and F. Tuerlinckx, “Estimating parameters of the diffusion model: approaches to dealing with contaminant reaction times and parameter variability,” Psychonomic Bulletin and Review, vol. 9, no. 3, pp. 438–481, 2002.
  11. R. Ratcliff, A. Cherian, and M. Segraves, “A comparison of Macaque behavior and superior colliculus neuronal activity to predictions from models of two-choice decisions,” Journal of Neurophysiology, vol. 90, no. 3, pp. 1392–1407, 2003.
  12. M. N. Shadlen, T. D. Hanks, A. K. Churchland, R. Kiani, and T. Yang, “The speed and accuracy of a simple perceptual decision: a mathematical primer,” in Bayesian Brain: Probabilistic Approaches to Neural Coding, K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao, Eds., The MIT Press, Cambridge, Mass, USA, 2007.
  13. W. T. Newsome, K. H. Britten, and J. A. Movshon, “Neuronal correlates of a perceptual decision,” Nature, vol. 341, no. 6237, pp. 52–54, 1989.
  14. M. N. Shadlen and W. T. Newsome, “Motion perception: seeing and deciding,” Proceedings of the National Academy of Sciences of the United States of America, vol. 93, no. 2, pp. 628–633, 1996.
  15. J. D. Roitman and M. N. Shadlen, “Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task,” Journal of Neuroscience, vol. 22, no. 21, pp. 9475–9489, 2002.
  16. X. Wang, “Probabilistic decision making by slow reverberation in cortical circuits,” Neuron, vol. 36, no. 5, pp. 955–968, 2002.
  17. H. Seo, D. J. Barraclough, and D. Lee, “Lateral intraparietal cortex and reinforcement learning during a mixed-strategy game,” Journal of Neuroscience, vol. 29, no. 22, pp. 7278–7279, 2009.
  18. J.-N. Kim and M. N. Shadlen, “Neural correlates of a decision in the dorsolateral prefrontal cortex of the macaque,” Nature Neuroscience, vol. 2, no. 2, pp. 176–185, 1999.
  19. A. J. Parker and K. Krug, “Neuronal mechanisms for the perception of ambiguous stimuli,” Current Opinion in Neurobiology, vol. 13, no. 4, pp. 433–439, 2003.
  20. C. Law and J. I. Gold, “Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area,” Nature Neuroscience, vol. 11, no. 4, pp. 505–513, 2008.
  21. P. Redgrave, T. J. Prescott, and K. Gurney, “The basal ganglia: a vertebrate solution to the selection problem?” Neuroscience, vol. 89, no. 4, pp. 1009–1023, 1999.
  22. R. Bogacz and K. Gurney, “The basal ganglia and cortex implement optimal decision making between alternative actions,” Neural Computation, vol. 19, no. 2, pp. 442–477, 2007.
  23. G. Chevalier, S. Vacher, J. M. Deniau, and M. Desban, “Disinhibition as a basic process in the expression of striatal functions. I. The striato-nigral influence on tecto-spinal/tecto-diencephalic neurons,” Brain Research, vol. 334, no. 2, pp. 215–226, 1985.
  24. J. M. Deniau and G. Chevalier, “Disinhibition as a basic process in the expression of striatal functions. II. The striato-nigral influence on thalamocortical cells of the ventromedial thalamic nucleus,” Brain Research, vol. 334, no. 2, pp. 227–233, 1985.
  25. A. Parent and L. N. Hazrati, “Functional anatomy of the basal ganglia. I. The cortico-basal ganglia-thalamo-cortical loop,” Brain Research Reviews, vol. 20, no. 1, pp. 91–127, 1995.
  26. M. C. Keuken, C. Müller-Axt, R. Langner, S. B. Eickhoff, B. U. Forstmann, and J. Neumann, “Brain networks of perceptual decision-making: an fMRI ALE meta-analysis,” Frontiers in Human Neuroscience, vol. 8, article 445, 2014.
  27. C. Lo and X. Wang, “Cortico-basal ganglia circuit mechanism for a decision threshold in reaction time tasks,” Nature Neuroscience, vol. 9, no. 7, pp. 956–963, 2006.
  28. J. W. Kable and P. W. Glimcher, “The neurobiology of decision: consensus and controversy,” Neuron, vol. 63, no. 6, pp. 733–745, 2009.
  29. R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen, “The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks,” Psychological Review, vol. 113, no. 4, pp. 700–765, 2006.
  30. P. L. Smith, “Stochastic dynamic models of response time and accuracy: a foundational primer,” Journal of Mathematical Psychology, vol. 44, no. 3, pp. 408–463, 2000.
  31. D. R. J. Laming, Information Theory of Choice-Reaction Times, John Wiley & Sons, New York, NY, USA, 1968.
  32. W. A. Wickelgren, “Speed-accuracy tradeoff and information processing dynamics,” Acta Psychologica, vol. 41, no. 1, pp. 67–85, 1977.
  33. G. Deco, E. T. Rolls, and R. Romo, “Stochastic dynamics as a principle of brain function,” Progress in Neurobiology, vol. 88, no. 1, pp. 1–16, 2009.
  34. R. Ratcliff and P. L. Smith, “A comparison of sequential sampling models for two-choice reaction time,” Psychological Review, vol. 111, no. 2, pp. 333–367, 2004.
  35. S. W. Link, “The relative judgment theory of two choice response time,” Journal of Mathematical Psychology, vol. 12, no. 1, pp. 114–135, 1975.
  36. H. Pashler, “Processing stages in overlapping tasks: evidence for a central bottleneck,” Journal of Experimental Psychology: Human Perception and Performance, vol. 10, no. 3, pp. 358–377, 1984.
  37. J. E. Raymond, K. L. Shapiro, and K. M. Arnell, “Temporary suppression of visual processing in an RSVP task: an attentional blink?” Journal of Experimental Psychology: Human Perception and Performance, vol. 18, no. 3, pp. 849–860, 1992.
  38. V. Wyart, V. de Gardelle, J. Scholl, and C. Summerfield, “Rhythmic fluctuations in evidence accumulation during decision making in the human brain,” Neuron, vol. 76, no. 4, pp. 847–858, 2012.
  39. P. Lakatos, G. Karmos, A. D. Mehta, I. Ulbert, and C. E. Schroeder, “Entrainment of neuronal oscillations as a mechanism of attentional selection,” Science, vol. 320, no. 5872, pp. 110–113, 2008.
  40. C. E. Schroeder and P. Lakatos, “Low-frequency neuronal oscillations as instruments of sensory selection,” Trends in Neurosciences, vol. 32, no. 1, pp. 9–18, 2009.
  41. S. Ostojic and N. Brunel, “From spiking neuron models to linear-nonlinear models,” PLoS Computational Biology, vol. 7, no. 1, Article ID e1001056, 2011.
  42. K. J. Friston, “Volterra kernels and connectivity,” in Human Brain Function, R. J. S. Franckowiak, C. Frith, R. Dolan et al., Eds., Academic Press, 2nd edition, 2003.
  43. K. J. Friston and C. Büchel, “Attentional modulation of effective connectivity from V2 to V5 in humans,” Proceedings of the National Academy of Sciences of the United States of America, vol. 97, pp. 7591–7596, 2000.
  44. K. J. Friston, “Brain function, nonlinear coupling, and neuronal transients,” The Neuroscientist, vol. 7, no. 5, pp. 406–418, 2001.
  45. P. Cisek and J. F. Kalaska, “Neural mechanisms for interacting with a world full of action choices,” Annual Review of Neuroscience, vol. 33, pp. 269–298, 2010.
  46. J. Rüter, N. Marcille, H. Sprekeler, W. Gerstner, and M. H. Herzog, “Paradoxical evidence integration in rapid decision processes,” PLoS Computational Biology, vol. 8, no. 2, Article ID e1002382, 2012.
  47. B. A. J. Reddi, “Decision making: the two stages of neuronal judgement,” Current Biology, vol. 11, no. 15, pp. R603–R606, 2001.
  48. E. T. Rolls, Emotions and Decision-Making Explained, Oxford University Press, 2014.
  49. G. Deco, E. T. Rolls, L. Albantakis, and R. Romo, “Brain mechanisms for perceptual and reward-related decision-making,” Progress in Neurobiology, vol. 103, pp. 194–213, 2013.
  50. A. Insabato, M. Pannunzi, E. T. Rolls, and G. Deco, “Confidence-related decision making,” Journal of Neurophysiology, vol. 104, no. 1, pp. 539–547, 2010.
  51. J. N. J. Reynolds and J. R. Wickens, “Dopamine-dependent plasticity of corticostriatal synapses,” Neural Networks, vol. 15, no. 4–6, pp. 507–521, 2002.
  52. X. J. Wang, “Neuronal circuit computation of choice,” in Neuroeconomics: Decision Making and the Brain, P. W. Glimcher, E. Fehr, C. Camerer, and R. A. Poldrack, Eds., Academic Press, 2008.
  53. X.-J. Wang, “Decision making in recurrent neuronal circuits,” Neuron, vol. 60, no. 2, pp. 215–234, 2008.
  54. E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, The MIT Press, Cambridge, Mass, USA, 2007.
  55. T. D. Sanger, “Neural population codes,” Current Opinion in Neurobiology, vol. 13, no. 2, pp. 238–249, 2003.
  56. W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget, “Bayesian inference with probabilistic population codes,” Nature Neuroscience, vol. 9, no. 11, pp. 1432–1438, 2006.
  57. T. P. Trappenberg, Fundamentals of Computational Neuroscience, Oxford University Press, New York, NY, USA, 2010.
  58. D. R. Cox, Renewal Process, Methuen, London, UK, 1962.
  59. H. C. Tuckwell, Introduction to Theoretical Neurobiology, vol. 2, Cambridge University Press, Cambridge, UK, 1988.
  60. F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek, Spikes: Exploring the Neural Code, MIT Press, Cambridge, Mass, USA, 1999.
  61. P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modelling of Neural Systems, The MIT Press, Boston, Mass, USA, 2001.
  62. M. T. Giraudo, R. M. Mininni, and L. Sacerdote, “On the asymptotic behavior of the parameter estimators for some diffusion processes: application to neuronal models,” Ricerche di Matematica, vol. 58, no. 1, pp. 103–127, 2009.
  63. P. Lánsky, C. E. Smith, and L. M. Ricciardi, “One-dimensional stochastic diffusion models of neuronal activity and related first passage time problems,” in Trends in Biological Cybernetics, J. Menon, Ed., vol. 1, pp. 153–162, 1990.
  64. L. Sacerdote and C. Zucca, “Inverse first passage time method in the analysis of neuronal interspike intervals of neurons characterized by time varying dynamics,” in Proceedings of the 1st International Symposium on Brain, Vision and Artificial Intelligence (BVAI '05), Naples, Italy, 2005.
  65. W. Schwarz, “The ex-Wald distribution as a descriptive model of response times,” Behavior Research Methods, Instruments, and Computers, vol. 33, no. 4, pp. 457–469, 2001.
  66. R. Linsker, “A local learning rule that enables information maximization for arbitrary input distributions,” Neural Computation, vol. 9, no. 8, pp. 1661–1665, 1997.
  67. F. Gabbiani and S. J. Cox, Mathematics for Neuroscientists, Academic Press, New York, NY, USA, 2010.
  68. T. Schreiber and A. Schmitz, “Improved surrogate data for nonlinearity tests,” Physical Review Letters, vol. 77, no. 4, pp. 635–638, 1996.
  69. M. G. Rosemblum and J. Kurths, “Analyzing synchronization phenomena from bivariate data by means of the Hilbert transform,” in Nonlinear Analysis of Physiological Data, pp. 91–99, Springer, 1998.
  70. J. C. Principe, Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives, Springer, 2010.
  71. D. P. Mandic, M. Chen, T. Gautama, M. M. van Hulle, and A. Constantinides, “On the characterization of the deterministic/stochastic and linear/nonlinear nature of time series,” Proceedings of The Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 464, no. 2093, pp. 1141–1160, 2008.
  72. M. Colombo and P. Seriés, “Bayes in the brain: on Bayesian modelling in neuroscience,” British Journal for the Philosophy of Science, 2012.
  73. M. Kawato, “Internal models for motor control and trajectory planning,” Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718–727, 1999.
  74. P. Yin, “Maximum entropy-based optimal threshold selection using deterministic reinforcement learning with controlled randomization,” Signal Processing, vol. 82, no. 7, pp. 993–1006, 2002.
  75. J. N. Kapur, P. K. Sahoo, and A. K. C. Wong, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, & Image Processing, vol. 29, no. 3, pp. 273–285, 1985.
  76. S. Ostojic and N. Brunel, “From spiking neuron models to linear-nonlinear models,” PLoS Computational Biology, vol. 7, no. 1, Article ID e1001056, 16 pages, 2011.
  77. M. Siegel, A. K. Engel, and T. H. Donner, “Cortical network dynamics of perceptual decision-making in the human brain,” Frontiers in Human Neuroscience, vol. 5, article 21, 2011.
  78. X. J. Wang, “Neural oscillations,” in Encyclopedia of Cognitive Science, L. Nabel, Ed., pp. 272–280, MacMillan, London, UK, 2003.
  79. M. Siegel and T. H. Donner, “Linking band-limited cortical population activity to fMRI and behavior,” in Integrating EEG and fMRI: Recording, Analysis, and Application, M. Ullsperger and S. Debener, Eds., pp. 271–294, University Press, New York, NY, USA, 2010.
  80. P. Fries, T. Womelsdorf, R. Oostenveld, and R. Desimone, “The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4,” The Journal of Neuroscience, vol. 28, no. 18, pp. 4823–4835, 2008.
  81. V. Nácher, A. Ledberg, G. Deco, and R. Romo, “Coherent delta-band oscillations between cortical areas correlate with decision making,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 37, pp. 15085–15090, 2013.
  82. M. A. Beulen, The role of theta oscillations in memory and decision making, M.S. thesis, University of Utrecht, Utrecht, The Netherlands, 2011.
  83. P. B. Sederberg, M. J. Kahana, M. W. Howard, E. J. Donner, and J. R. Madsen, “Theta and gamma oscillations during encoding predict subsequent recall,” Journal of Neuroscience, vol. 23, no. 34, pp. 10809–10814, 2003.
  84. J. P. Bolan, A. Cali, and P. J. Magill, The Basal Ganglia VIII, Springer, New York, NY, USA, 2006.
  85. M. X. Cohen, C. E. Elger, and J. Fell, “Oscillatory activity and phase-amplitude coupling in the human medial frontal cortex during decision making,” Journal of Cognitive Neuroscience, vol. 21, no. 2, pp. 390–402, 2009.
  86. A. K. Engel and P. Fries, “Beta-band oscillations—signalling the status quo?” Current Opinion in Neurobiology, vol. 20, no. 2, pp. 156–165, 2010.
  87. R. C. DeCharms and A. Zador, “Neural representation and the cortical code,” Annual Review of Neuroscience, vol. 23, pp. 613–647, 2000.
  88. A. Voss, K. Rothermund, and J. Voss, “Interpreting the parameters of the diffusion model: an empirical validation,” Memory and Cognition, vol. 32, no. 7, pp. 1206–1220, 2004.
  89. E. Bertin, “Global fluctuations and Gumbel statistics,” Physical Review Letters, vol. 95, Article ID 170601, pp. 1–4, 2005.
  90. E. Bertin and M. Clusel, “Generalized extreme value statistics and sum of correlated variables,” Journal of Physics A: Mathematical and General, vol. 39, no. 24, article 001, pp. 7607–7619, 2006.