Advances in Mathematical Physics
Volume 2017, Article ID 8730859, 7 pages
Research Article

Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience

1Big Data Research Center, Hunan University of Commerce, Changsha 410205, China
2Key Laboratory of High Performance Computing and Stochastic Information Processing (HPCSIP) (Ministry of Education of China), College of Mathematics and Computer Science, Hunan Normal University, Changsha 410081, China
3School of Business Administration, Hunan University, Changsha 410081, China
4School of Finance, Guangdong University of Foreign Studies, Guangzhou 510006, China

Correspondence should be addressed to Chao Deng; dengchaohunan@163.com

Received 14 May 2017; Revised 25 October 2017; Accepted 22 November 2017; Published 17 December 2017

Academic Editor: Xavier Leoncini

Copyright © 2017 Yan Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper considers a -dimensional stochastic optimization problem in neuroscience. Assuming that the arm's movement trajectory is modeled by a high-order linear stochastic differential dynamical system in -dimensional space, we obtain the optimal trajectory, velocity, and variance explicitly by the stochastic control method, which allows us to establish exact analytical relationships between the various quantities. Moreover, the optimal trajectory of a reaching movement is almost a straight line, the optimal velocity is bell-shaped, and the optimal variance is consistent with the experimental Fitts law: the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.

1. Introduction

The effective control of neuronal activity is one of the most exciting topics in theoretical neuroscience, with great potential for applications in healthcare. Nowadays, the application of stochastic control methods in neuroscience has become a significant part of mainstream research. Among the many studies, we refer, for example, to Holden (1976) for models of the stochastic activity of neural aggregates, Iolov et al. [1] for the optimal control of single neuron spike trains, and Roberts et al. [2] for a review of the application of stochastic models of brain activity.

In this paper, we study trajectory planning and control in human arm movements. When a hand is moved to a target, the central nervous system must select one specific trajectory among an infinite number of possible trajectories that lead to the target position. The content of this paper comprises two parts: the first models the activities incorporating stochastic processes, and the second quantifies task goals as cost functions and applies the tools of optimal control theory to obtain the optimal behavior. Feng et al. [3] reviewed two optimal control problems at two different levels, neuronal activity control and movement control, and derived the optimal signals for both. Li et al. [4] considered the robust control of human arm movements. Based on the fuzzy interpolation of a nonlinear stochastic arm system, they simplified the complex noise-tolerant robust control of the human arm tracking problem by solving a set of linear matrix inequalities using Newton's iterative method via an interior point scheme for convex optimization. Singh et al. [5] modeled reaching movements in the presence of obstacles and solved a stochastic optimal control problem consisting of probabilistic collision avoidance constraints and a cost function that trades off effort against end-state variance in the presence of signal-dependent noise. For more details, we refer the reader to Campos and Calado [6], Berret et al. [7], and Mainprice et al. [8].

Yet all the above studies are restricted to 1-dimensional or low-dimensional spaces, whereas neuronal activity or a movement trajectory may live in a higher-dimensional space. In this paper, motivated by Feng et al. [3], we consider a stochastic control problem for arm movement within the framework of a -dimensional control space. Applying stochastic control theory, we solve the optimization problem explicitly and obtain the exact solutions for the optimal trajectory, velocity, and variance.

The remainder of this paper is organized as follows. Section 2 introduces the basic model setup of the high-order linear stochastic dynamical systems for movement trajectory. In Section 3, we derive the explicit expressions for the optimal trajectory, velocity, and variance. In Section 4, we provide a -dimensional optimization example, and concluding remarks are given in Section 5.

2. Model Setup

2.1. The Integrate-and-Fire Model

In this subsection, we recall the classical I&F (integrate-and-fire) model following Feng et al. [3], which describes the neuron activity, with and , where is the decay time constant. The synaptic input current is , with and Poisson processes with rates and , and and the magnitudes of each excitatory postsynaptic potential (EPSP) and inhibitory postsynaptic potential (IPSP); a cell receives EPSPs at excitatory synapses and IPSPs at inhibitory synapses. Once crosses from below, a spike is generated and is reset to . This model is termed the IF model.

Let , and use the usual diffusion approximation of the IF model (see Feng et al. [3] and Zhang and Feng [9]); then (1) can be rewritten as , where is a standard Brownian motion and is a constant. If , the input is derived from a Poisson process; if , the inputs are the so-called supra-Poisson inputs; and if , they are the so-called sub-Poisson inputs. In addition, a larger leads to more randomness in the synaptic inputs.
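The diffusion approximation above can be illustrated numerically. The following sketch simulates a leaky integrate-and-fire neuron in the standard form dV = (-V/γ + μ) dt + σ dB_t with threshold reset. Since the paper's own equation is not reproduced here, the symbols μ, σ, γ, V_th, V_reset and all numerical values are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def simulate_lif(mu=1.5, sigma=0.5, gamma=10.0, v_th=1.0, v_reset=0.0,
                 dt=1e-3, t_max=2.0, seed=0):
    """Euler-Maruyama simulation of the diffusion approximation
    dV = (-V/gamma + mu) dt + sigma dB_t, with reset at the threshold v_th."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v = v_reset
    path = np.empty(n)
    spikes = []
    for i in range(n):
        # Euler-Maruyama step: drift plus sqrt(dt)-scaled Gaussian increment.
        v += (-v / gamma + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:          # threshold crossing: record spike, reset membrane
            spikes.append(i * dt)
            v = v_reset
        path[i] = v
    return path, spikes
```

With these illustrative parameters the drift at threshold is still positive (μ - v_th/γ > 0), so the neuron fires repeatedly, mimicking the suprathreshold regime.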

2.2. General Linear Stochastic Differential Equation

In this subsection, we extend the one-dimensional I&F model (3) to a system of -dimensional stochastic differential equations in which the solution process enters linearly. Such processes arise in the estimation and control of linear systems, in economics, and in various other fields (see Liu [10]): , where is an -dimensional Brownian motion independent of the -dimensional initial vector , and the matrices , , and are nonrandom, measurable, and locally bounded, respectively.

Now we define an matrix function satisfying the following matrix differential equation: , where is the identity matrix. This equation has a unique (absolutely continuous) solution defined for , and, for each , the matrix is nonsingular.
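For constant coefficients, the fundamental matrix is the matrix exponential Φ(t) = e^{At}, and the properties used here (Φ(0) = I, the semigroup identity Φ(s+t) = Φ(s)Φ(t), and nonsingularity) are easy to check numerically. Below is a minimal numpy sketch for a diagonalizable A; the particular 2×2 matrix is an illustrative choice, not taken from the paper.

```python
import numpy as np

def fundamental_matrix(A, t):
    """Phi(t) = e^{At} for a diagonalizable A, via eigendecomposition:
    e^{At} = P diag(e^{lam_i t}) P^{-1}."""
    lam, P = np.linalg.eig(A)
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

# Illustrative companion-type matrix with distinct eigenvalues -1 and -2.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Phi = fundamental_matrix(A, 0.5)
```

Nonsingularity follows from det Φ(t) = e^{tr(A) t} > 0, which the sketch reproduces numerically.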

By Itô’s rule, it is easily verified that  solves (4).

We suppose that  and introduce the mean vector and covariance matrix functions as follows: . From (4), we can show that these hold for every . In particular,  and  solve the linear equations:
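The mean and covariance equations stated above can be integrated numerically. Since the paper's own displayed equations are not reproduced here, the sketch below uses the textbook moment ODEs of a linear SDE dX = (AX + a) dt + Σ dB, namely m' = Am + a and V' = AV + VAᵀ + ΣΣᵀ; treat the forms and the test parameters as assumptions.

```python
import numpy as np

def moment_odes(A, a, Sigma, m0, V0, dt=1e-4, t_max=1.0):
    """Forward-Euler integration of the moment ODEs of the linear SDE
    dX = (A X + a) dt + Sigma dB:
        m'(t) = A m(t) + a
        V'(t) = A V(t) + V(t) A^T + Sigma Sigma^T
    Returns (mean, covariance) at time t_max."""
    m = np.array(m0, dtype=float)
    V = np.array(V0, dtype=float)
    Q = Sigma @ Sigma.T          # constant diffusion term
    for _ in range(int(t_max / dt)):
        m = m + (A @ m + a) * dt
        V = V + (A @ V + V @ A.T + Q) * dt
    return m, V
```

As a sanity check, for the scalar Ornstein-Uhlenbeck case A = -1, a = 0, Σ = 1 with m(0) = 1 and V(0) = 0, the closed forms are m(t) = e^{-t} and V(t) = (1 - e^{-2t})/2.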

3. Optimization Problem Formulation

We consider a simple model of a high-order linear stochastic dynamical system in -dimensional space. For simplicity of notation, we suppose that each component of the trajectory in -dimensional space satisfies the following stochastic differential equation (SDE): , where  is a 1-dimensional Brownian motion and  and  are constants. We call  the control signal. Now let . Then we have . Thus,  is the position along some direction in space; the matrix  is a constant matrix;  is the control signal vector; and  is a vector.
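Rewriting a k-th order scalar SDE as a first-order vector SDE, as done above, places a companion ("shift") matrix in the drift: the state vector stacks the position and its derivatives, the off-diagonal identity block encodes d/dt x^(i) = x^(i+1), and the last row holds the equation's coefficients. A small sketch of that construction follows; the coefficient values are placeholders, since the paper's constant matrix is not reproduced here.

```python
import numpy as np

def companion_matrix(coeffs):
    """Companion matrix A for the k-th order linear equation
        x^(k) = c_0 x + c_1 x' + ... + c_{k-1} x^(k-1),
    acting on the state vector (x, x', ..., x^(k-1))."""
    k = len(coeffs)
    A = np.zeros((k, k))
    A[:-1, 1:] = np.eye(k - 1)   # shift block: d/dt x^(i) = x^(i+1)
    A[-1, :] = coeffs            # last row carries the equation coefficients
    return A
```

For example, `companion_matrix([-2.0, -3.0])` encodes x'' = -2x - 3x' and has the distinct eigenvalues -1 and -2, matching the diagonalizability assumption used later.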

Optimization Problem. For simplicity of notation, we let . For a point  (in fact,  expresses the arrival position component of the trajectory in the direction of component ) and two positive numbers , we intend to find a control signal  which satisfies the constraint  and such that the variance attains its minimum at ; that is, . Let ; by (6), we have, for , . Therefore, by (15) and (17), we have . By calculating the matrix , we easily get the following results.

Lemma 1. For  and , . In particular, for , .

Proof. Since , by the definition of (see (14)) and the multiplication of matrices, we get (19) at once.
Since , it is easy to get (20).
For simplicity of notation, we suppose that  has  different eigenvalues (in this case,  is diagonalizable). Hence  is similar to the diagonal matrix diag , and there exists an invertible matrix  such that  diag . Therefore, there are nonzero real numbers  and distinct eigenvalues  such that . We introduce the following notations:

Theorem 2. Under the constrained control condition (18), the following results hold: (i)  for ; (ii) .

Proof. (i) Since , we can differentiate both sides of (17) up to order  and, by Lemma 1, we get, for , . That is to say, . Since  and , differentiating (24) again, we obtain . Therefore, . (ii) By (21) and Lemma 1, (23) can be expressed as ; that is, . Since the  are distinct, using the multiplication of matrices and replacing  in the above equation, we get result (ii) at once.

Note. If  has multiplicity  as an eigenvalue of  and  is a diagonal matrix, we can also choose  independent functions of the form . In this case, we can obtain the same result as that in Theorem 2 by a similar approach.

By (17), it is easily seen that . Thus we only need to minimize the first term in (29), since minimizing each term of the last equality in (29) implies minimizing , and, by (26), the control signal in the second term of (29) is a constant for . Now we apply the calculus of variations to the first term in (29), that is, . To this end, let us define the control signal set . For any  and , we have . By (30),  must satisfy , which gives . Comparing (33) with the first constraint in , we conclude that  almost surely for , with parameters . Hence the solution of the original problem is , where  are the unique solution of the following equations (by the result of Theorem 2):

Similar equations hold for the other components in -dimensional space.

From the results above, we can obtain the following conclusions.

Theorem 3. Under the optimal control framework set up here with , the optimal mean trajectory is a straight line. When , the optimal control problem is degenerate; that is, the optimal control signal is a delta function, and (29) with  gives us an exact relationship between time and variance.

Proof. The proof is similar to that of Theorem 1 in Feng et al. [3]; we omit it.

Remark 4. When , the results of Theorem 3 are consistent with Feng et al. [3]. The finding is also in agreement with the experimental Fitts law (see Fitts [11]); that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target point.

4. Example in 3-Dimensional Space

We consider a simple model of (arm) movements. Let  be the position of the hand at time ; we then have , where , ,  are parameters,  is the control signal,  is a diagonal matrix with diagonal elements , respectively, and  is a standard Brownian motion. In physics, (37) is the well-known Kramers equation. In neuroscience, it is observed in all in vivo experiments that the noise strength is proportional to the signal strength, and hence the signals received by the muscle take the form of (37) (see Feng et al. [3]).
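The behavior of such a system can be explored by simulation. The sketch below applies Euler-Maruyama to one coordinate of a simplified second-order system dx = v dt, dv = u(t) dt + c u(t) dB_t, a stand-in for (37) in which, as stated above, the noise strength is proportional to the control signal; the coefficient c and the bang-bang control used in the example are illustrative assumptions, not the authors' optimal signal.

```python
import numpy as np

def simulate_reach(u, c=0.1, dt=1e-3, seed=0):
    """Euler-Maruyama for dx = v dt, dv = u dt + c*u dB
    (signal-dependent noise). u is the discretized control signal."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    xs = np.empty(len(u))
    for i, ui in enumerate(u):
        # Noise amplitude scales with the control signal ui.
        v += ui * dt + c * ui * np.sqrt(dt) * rng.standard_normal()
        x += v * dt
        xs[i] = x
    return xs

# Illustrative bang-bang control over [0, 1]: accelerate, then decelerate,
# so the velocity profile is triangular and ends near zero.
n = 1000
u = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])
xs = simulate_reach(u)
```

With the noise switched off (c = 0), this control drives the hand to x(1) = 1/4 with zero terminal velocity, which makes the stochastic runs easy to compare against the deterministic target.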

For a point  and two positive numbers , , we intend to find a control signal  which satisfies , where  means that each component of it is in . To stabilize the hand, we further require that the hand stay at  for a while, that is, in the time interval , which naturally requires that the velocity be zero at the end of the movement. The physical meaning of the problem considered here is clear: at time , the hand will reach the position  (see (38)) as precisely as possible (see (39)). Without loss of generality, we assume that  and .

To use the results of the previous section, we rewrite the optimal control problem posed above as a -order linear stochastic dynamical system in -dimensional space, that is, . Similar equations hold for  and .

If we let  denote the moving velocity in the direction of the -coordinate, (40) becomes the following -order linear SDE:

Comparing with (12), it is easy to see that

Since , where , by calculation we know that , where . Hence, by (8), we get

Therefore, (34), (35), and (36) become  almost surely for , with parameters . Hence the solution of the original problem is , with  given by the following equations (by Theorem 2):

5. Conclusion

The experimental study of human movement has shown that voluntary reaching movements obey Fitts law: the longer the time taken for a reaching movement, the greater the accuracy with which the hand arrives at the end point. In this paper, we study a stochastic control problem for a reaching movement within a -dimensional space. We solve this stochastic control problem explicitly and obtain analytical solutions for the optimal signals, optimal velocity, and optimal variance. Furthermore, we find that the optimal control is also consistent with Fitts law. This implies that the straight-line trajectory is a natural consequence of optimal stochastic control principles, under a nondegenerate optimal control signal.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This research is partially supported by the Natural Science Foundation of Guangdong Province, China (no. 2017A030310575), and Innovation Team Project of Guangdong Colleges and Universities (no. 2016WCXTD012). Yingchun Deng is partially supported by the National Natural Science Foundation of China (no. 11671132), the Scientific Research Fund of Hunan Provincial Education Department, China (no. 17K057), and the Construct Program of the Key Discipline in Hunan Province.


References

1. A. Iolov, S. Ditlevsen, and A. Longtin, “Stochastic optimal control of single neuron spike trains,” Journal of Neural Engineering, vol. 11, no. 4, Article ID 046004, 2014.
2. J. A. Roberts, K. J. Friston, and M. Breakspear, “Clinical applications of stochastic dynamic models of the brain, part II: a review,” Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, vol. 2, no. 3, pp. 225–234, 2017.
3. J. Feng, X. Chen, H. C. Tuckwell, and E. Vasilaki, “Some optimal stochastic control problems in neuroscience—a review,” Modern Physics Letters B, vol. 18, no. 21-22, pp. 1067–1085, 2004.
4. C.-W. Li, C.-C. Lo, and B.-S. Chen, “Robust sensorimotor control of human arm model under state-dependent noises, control-dependent noises and additive noises,” Neurocomputing, vol. 167, pp. 61–75, 2015.
5. A. K. Singh, S. Berman, and I. Nisky, “Stochastic optimal control for modeling reaching movements in the presence of obstacles: theory and simulation,”
6. F. M. M. O. Campos and J. M. F. Calado, “Approaches to human arm movement control—a review,” Annual Reviews in Control, vol. 33, no. 1, pp. 69–77, 2009.
7. B. Berret, E. Chiovetto, F. Nori, and T. Pozzo, “Evidence for composite cost functions in arm movement planning: an inverse optimal control approach,” PLoS Computational Biology, vol. 7, no. 10, Article ID e1002183, 2011.
8. J. Mainprice, R. Hayne, and D. Berenson, “Goal set inverse optimal control and iterative replanning for predicting human reaching motions in shared workspaces,” IEEE Transactions on Robotics, vol. 32, no. 4, pp. 897–908, 2016.
9. X. Zhang and J. Feng, “Computational modeling of neuronal networks,” in Encyclopedia of Biophysics, pp. 344–353, Springer, Berlin, Germany, 2013.
10. J. Liu, “Portfolio selection in stochastic environments,” Review of Financial Studies, vol. 20, no. 1, pp. 1–39, 2006.
11. P. M. Fitts, “The information capacity of the human motor system in controlling the amplitude of movement,” Journal of Experimental Psychology, vol. 47, no. 6, pp. 381–391, 1954.