Computational and Mathematical Methods in Medicine

Volume 2015 (2015), Article ID 585409, 16 pages

http://dx.doi.org/10.1155/2015/585409

## Comparative Sensitivity Analysis of Muscle Activation Dynamics

^{1}Institut für Mathematik, Universität Koblenz, 56070 Koblenz, Germany

^{2}Institut für Sport- und Bewegungswissenschaft, Universität Stuttgart, Allmandring 28, 70569 Stuttgart, Germany

^{3}Institut für Sportwissenschaft, Lehrstuhl für Bewegungswissenschaft, Friedrich-Schiller-Universität, Seidelstraße 20, 07749 Jena, Germany

^{4}Stuttgart Research Centre for Simulation Technology, Pfaffenwaldring 7a, 70569 Stuttgart, Germany

Received 15 December 2014; Accepted 5 February 2015

Academic Editor: Eduardo Soudah

Copyright © 2015 Robert Rockenfeller et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and thus detect superfluous parameters. An experimenter could use it to identify particularly high sensitivities and thus improve parameter estimation. Hatze’s nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac’s linear model. Unlike Zajac’s model, however, Hatze’s model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze’s model that combines best with a particular muscle force-length relation.

#### 1. Introduction

Scientific knowledge is gained by an interplay between quantitative real world measurements of physical, chemical, or biological phenomena and the development of mathematical models for understanding the dynamical processes behind them. In general, such phenomena are determined as spatiotemporal patterns of physical measures (state variables). Modelling consists of distinguishing the surrounding world from the system that yields the phenomena and formulating a mathematical description of the system, a model, that can predict values of the state variables. The calculations depend on model parameters and often on measured input variables. By changing parameter values and analysing the resulting changes in the values of the state variables, the model may then be used as a predictive tool. This way, the model’s validity can be verified. If the mathematical model description is moreover derived from first principles, the model has the potential to explain the phenomena in a causal sense.

Calculating the sensitivities of a model’s predicted output, that is, the system’s state variables, with respect to model parameters is a means of eliminating redundancy and indeterminacy from models and thus helps to identify valid models. Sensitivity analyses can be helpful both in model-based experimental approaches and in purely theoretical work. A modelling theoretician could be looking for parameters to which all state variables are insensitive. Such parameters might be superfluous. An experimenter may inspect the model that represents his working hypothesis and analyse which of the model’s state variables are specifically sensitive to a selected parameter. The experimenter would then have to measure exactly this state variable to identify the value of the selected parameter.

In a biomechanical study Scovil and Ronsky [1] applied sensitivity analysis to examine the dynamics of a mechanical multibody system: a runner’s skeleton coupled to muscle activation-contraction dynamics. They calculated specific sensitivity coefficients in three slightly different ways. A sensitivity coefficient is the difference quotient calculated from dividing the change in a state variable by the change in a model parameter value, evaluated in a selected system state [2]. The corresponding partial derivative may be simply called “sensitivity,” and a sensitivity function is accordingly the time evolution of a sensitivity [2]. Lehman and Stark [2] had proposed a more general and unified approach than Scovil and Ronsky [1], which allows systematically calculating the sensitivities of any dynamical system described in terms of ordinary differential equations. As an example of sensitivity functions, Lehman and Stark [2] had applied their proposed method to a muscle-driven model of saccadic eye movement. By calculating a percentage change in a state variable value per percentage change in a parameter value, all sensitivities can be made comprehensively comparable, even across models.

A sensitivity as defined so far is of first order. Methodically, we aim to go one step further, namely, to calculate second order sensitivities. These measures are suited to quantify how much the sensitivity of a state variable with respect to one model parameter depends on changing another parameter. By analysing second order sensitivities, the strength of the parameters’ interdependent influence on model dynamics can be determined. In addition to this so-called local sensitivity analysis, we will take the whole parameter variability into account by calculating global sensitivities according to Chan et al. [3] and Saltelli and Chan [4]. This approach allows translating the impact of one parameter on a state variable into a parameter’s importance, by fully accounting for its interdependent influence in combination with all other parameters’ sensitivities.

In this study, we will apply the sensitivity analysis to models that predict how the activity of a muscle (its chemical state) changes when the muscle is stimulated by neural signals (electrical excitation). Such models are used for simulations of muscles’ contractions coupled to their activation dynamics. Models for coupled muscular dynamics are often part of neuromusculoskeletal models of biological movement systems. In particular, we try to rate two specific model variants of activation dynamics formulated by Zajac [5] and by Hatze [6]. As a first result, we present an example of a simplified version of the Zajac [5] model, in which sensitivity functions can in fact be calculated in closed form. Subsequently, we calculate the sensitivities numerically with respect to all model parameters in both models, aiming at an increased understanding of the influence of changes in model parameters on the solutions of the underlying ordinary differential equations (ODEs). Additionally, we discuss which of the two models may be physiologically more accurate. The arguments come from a mixture of three different aspects: sensitivity analysis, others’ experimental findings, and an additional attempt to best fit different combinations of activation dynamics and force-length relations of the contractile element (CE) in a muscle to known data on shifts in optimal CE length with muscle activity [7].

#### 2. Two Models for Muscle Activation Dynamics

Macroscopically, a muscle fibre or an assembly thereof, a muscle belly, is often mapped mathematically by a one-dimensional massless thread called “contractile component” or “contractile element” (CE) [8–12]. Its absolute length is $\ell_{CE}$, which may be normalised to the optimal fibre length $\ell_{CE,opt}$ by $\ell_{CE,rel} = \ell_{CE}/\ell_{CE,opt}$. In macroscopic muscle models, the CE muscle force is usually modelled as a function of a force-(CE-)length relation, a force-(CE-)velocity relation, and (CE-)activity $q$. Commonly, the muscle activity $q$ represents the number of attached cross-bridges within the muscle, normalised to the maximum number available ($q_0 \le q \le 1$). It can also be considered as the concentration of bound Ca$^{2+}$-ions in the muscle sarcoplasma relative to its physiological maximum. The parameter $q_0$ represents the minimum activity that is assumed to occur without any stimulation [6].

We analyse two different formulations of muscle activation dynamics, that is, the time (its symbol: $t$) evolution of the muscle activity $q(t)$. One formulation of muscle activation dynamics was suggested by Zajac [5], which we modified slightly to take $q_0$ into account:

$$\dot q(t) = \frac{1}{\tau}\left[\sigma(t) - \big(\beta + (1-\beta)\,\sigma(t)\big)\,\frac{q(t)-q_0}{1-q_0}\right] \tag{1}$$

with the initial condition $q(0) = q_{init}$. In this context, $\sigma(t)$ (with $0 \le \sigma \le 1$) is supposed to represent the (electrical) stimulation of the muscle, being a parameter for controlling muscle dynamics. It represents the output of the nervous system’s dynamics applied to the muscle, which in turn interacts with the skeleton, the body mass distribution, the external environment, and therefore the nervous system in a feedback loop. Electromyographic (EMG) signals can be seen as a compound of such neural stimulations collected in a finite volume (being the input to a number of muscle fibres) over a frequency range and coming from a number of (moto-)neurons. The parameter $\tau$ denotes the activation time constant, and $\beta \le 1$ is the ratio of activation to deactivation time constants (deactivation boost).
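To make this formulation concrete, the following minimal Python sketch integrates the modified Zajac dynamics for a constant stimulation. All numeric values ($\tau$, $\beta$, $q_0$, step size) are illustrative placeholders, not values recommended in [5].

```python
def zajac_rhs(q, sigma, tau=0.01, beta=0.5, q0=0.005):
    """Right-hand side of the modified Zajac dynamics (1):
    dq/dt = (1/tau) * [sigma - (beta + (1 - beta)*sigma) * (q - q0)/(1 - q0)]."""
    q_rel = (q - q0) / (1.0 - q0)
    return (sigma - (beta + (1.0 - beta) * sigma) * q_rel) / tau

def integrate(rhs, q_init, t_end, dt=1e-4, **params):
    """Fixed-step classical Runge-Kutta (RK4) integration of a scalar ODE."""
    q = q_init
    for _ in range(int(round(t_end / dt))):
        k1 = rhs(q, **params)
        k2 = rhs(q + 0.5 * dt * k1, **params)
        k3 = rhs(q + 0.5 * dt * k2, **params)
        k4 = rhs(q + dt * k3, **params)
        q += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return q

# Full stimulation drives the activity towards 1; zero stimulation lets it
# decay back towards the basic activity q0 (accelerated by beta < 1).
q_up = integrate(zajac_rhs, q_init=0.005, t_end=0.1, sigma=1.0)
q_down = integrate(zajac_rhs, q_init=q_up, t_end=0.5, sigma=0.0)
```

For $\sigma = 1$ the stationary value of $q$ is 1; for $\sigma = 0$ it is $q_0$, which the two calls above approach after several time constants.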

An alternative formulation of muscle activation dynamics was introduced by Hatze [6]:

$$\dot\gamma(t) = m\,\big(\sigma(t) - \gamma(t)\big). \tag{2}$$

We divided the original equation from Hatze [6] by the parameter $c$, which represents the maximum concentration of free Ca$^{2+}$-ions in the muscle sarcoplasma. Thus, the values of the corresponding normalised concentration $\gamma$ are $0 \le \gamma \le 1$. The activity is finally calculated by the function

$$q(\gamma, \ell_{CE,rel}) = \frac{q_0 + \big(\rho(\ell_{CE,rel})\,\gamma\big)^{\nu}}{1 + \big(\rho(\ell_{CE,rel})\,\gamma\big)^{\nu}} \tag{3}$$

and the parameter $c$ is shifted to the accordingly renormalised function

$$\rho(\ell_{CE,rel}) = \rho_c\,\frac{\ell_\rho - 1}{\ell_\rho/\ell_{CE,rel} - 1} \tag{4}$$

with $\rho_c = c\cdot\rho_0$ and the constant $\ell_\rho$. Two cases have been suggested by Hatze [13], $\nu = 2$ and $\nu = 3$ (each with its own value of $\rho_0$), which have been applied in the literature [7, 8, 14, 15]. By substituting (2) and (3) into $\dot q = (\partial q/\partial\gamma)\,\dot\gamma$ and resubstituting the inverse of (3) afterwards, Hatze’s formulation of an activation dynamics can be transformed into a nonlinear differential equation directly in terms of the activity:

$$\dot q(t) = \frac{m\,\nu\,\big(1-q(t)\big)}{1-q_0}\left[\rho\,\sigma(t)\,\big(1-q(t)\big)^{1/\nu}\,\big(q(t)-q_0\big)^{(\nu-1)/\nu} - \big(q(t)-q_0\big)\right] \tag{5}$$

with the initial condition $q(0) = q_{init}$.
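A minimal sketch of Hatze's two-stage formulation, integrating the normalised Ca-ion dynamics (2) and mapping the result through the activity function (3). The values of $m$, $\rho$, $\nu$, and $q_0$ are illustrative assumptions, and $\rho$ is treated as a constant, that is, the length dependence (4) is dropped for simplicity.

```python
def gamma_rhs(gamma, sigma, m=10.0):
    """Hatze's normalised free-Ca-ion dynamics (2): dgamma/dt = m*(sigma - gamma)."""
    return m * (sigma - gamma)

def activity(gamma, rho=5.0, nu=3, q0=0.005):
    """Hatze's activity function (3) with rho taken as a constant
    (length dependence (4) omitted); rho, nu, q0 are illustrative values."""
    x = (rho * gamma) ** nu
    return (q0 + x) / (1.0 + x)

# Explicit Euler integration of (2) under constant full stimulation.
gamma, dt = 0.0, 1e-4
for _ in range(20_000):          # 2 s of simulated time
    gamma += dt * gamma_rhs(gamma, sigma=1.0)
q = activity(gamma)              # activity corresponding to the final gamma
```

With $\sigma = 1$, the concentration $\gamma$ saturates at 1 and the resulting activity approaches, but never exactly reaches, 1 because of the sigmoidal form of (3).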

The solutions of both formulations of activation dynamics, (1) and (5), can now be directly compared by integrating them with the same initial condition $q(0) = q_{init}$ and the same stimulation $\sigma(t)$.

#### 3. Local First and Second Order Sensitivity of ODE Systems regarding Their Parameters

Let $t \in [t_0, T] \subset \mathbb{R}$ and $M, \Lambda \in \mathbb{N}$. We then consider a system of ordinary, first order initial value problems (IVP):

$$\dot y(t) = f\big(t, y(t), \rho\big), \tag{6}$$

where $y(t) \in \mathbb{R}^{M}$ denotes the vector of state variables, $f$ the vector of right hand sides of the ODE, and $\rho \in \mathbb{R}^{\Lambda}$ the set of parameters which the ODE depends on. The vector of initial conditions is abbreviated by

$$y(t_0) = y_0. \tag{7}$$

The first order sensitivity of the solution $y(t, \rho)$ with respect to the parameter set $\rho$ is defined as the matrix

$$S(t, \rho) := \frac{\partial y}{\partial \rho}(t, \rho) = \left(\frac{\partial y_i}{\partial \rho_j}(t, \rho)\right)_{i = 1,\dots,M;\; j = 1,\dots,\Lambda}. \tag{8}$$

Simplifying, we denote $y = y(t, \rho)$, $f = f(t, y, \rho)$, and $S = S(t, \rho)$ but keep the dependencies in mind. Because the solution $y$ might only be gained numerically rather than as a closed-form expression, we have to apply the well-known theory of sensitivity analysis as stated in Vukobratovic [16], Dickinson and Gelinas [17], Lehman and Stark [2], and ZivariPiran [18]. Differentiating (8) with respect to $t$ and applying the chain rule yield

$$\dot S = \frac{d}{dt}\,\frac{\partial y}{\partial \rho} = \frac{\partial}{\partial \rho}\,\dot y = \frac{\partial f}{\partial y}\,\frac{\partial y}{\partial \rho} + \frac{\partial f}{\partial \rho} \tag{9}$$

with $\partial f/\partial y$ being the gradient of $f$ with respect to the state variables. Hence we obtain the following ODE for the first order solution sensitivity:

$$\frac{d}{dt}\,\frac{\partial y_i}{\partial \rho_j} = \sum_{l=1}^{M}\frac{\partial f_i}{\partial y_l}\,\frac{\partial y_l}{\partial \rho_j} + \frac{\partial f_i}{\partial \rho_j}, \qquad \frac{\partial y_i}{\partial \rho_j}(t_0) = 0, \tag{10}$$

or in short terms

$$\dot S = J\,S + B, \qquad S(t_0) = 0_{M\times\Lambda}, \tag{11}$$

where $S$ is the sensitivity matrix and $J$ is the Jacobian matrix with $J_{il} = \partial f_i/\partial y_l$; furthermore, $B$ denotes the $(M\times\Lambda)$-matrix containing the partial derivatives $B_{ij} = \partial f_i/\partial \rho_j$ and $0_{M\times\Lambda}$ denotes the $(M\times\Lambda)$-matrix consisting of zeros only.
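As an illustration of (10) for the scalar case $M = 1$, the sketch below Euler-integrates Zajac's model (1) together with its sensitivity with respect to $\tau$ as one augmented system, and cross-checks the result against a central finite difference. Parameter values are illustrative placeholders.

```python
def rhs(q, sigma, tau, beta, q0):
    """Modified Zajac dynamics (1), scalar state q."""
    return (sigma - (beta + (1.0 - beta) * sigma) * (q - q0) / (1.0 - q0)) / tau

def rhs_dq(q, sigma, tau, beta, q0):
    """Jacobian J = df/dq of (1)."""
    return -(beta + (1.0 - beta) * sigma) / ((1.0 - q0) * tau)

def rhs_dtau(q, sigma, tau, beta, q0):
    """Parameter derivative B = df/dtau of (1) (f is proportional to 1/tau)."""
    return -rhs(q, sigma, tau, beta, q0) / tau

def solve_with_sensitivity(q_init, t_end, dt=1e-5, **p):
    """Euler-integrate the augmented system: the state q from (6) together
    with its sensitivity S = dq/dtau from (10), with S(t0) = 0."""
    q, S = q_init, 0.0
    for _ in range(int(round(t_end / dt))):
        dq = rhs(q, **p)
        dS = rhs_dq(q, **p) * S + rhs_dtau(q, **p)
        q, S = q + dt * dq, S + dt * dS
    return q, S

p = dict(sigma=1.0, tau=0.01, beta=0.5, q0=0.005)
q, S = solve_with_sensitivity(0.005, t_end=0.02, **p)

# Cross-check the sensitivity against a central finite difference in tau.
h = 1e-6
q_hi, _ = solve_with_sensitivity(0.005, 0.02, **{**p, "tau": p["tau"] + h})
q_lo, _ = solve_with_sensitivity(0.005, 0.02, **{**p, "tau": p["tau"] - h})
fd = (q_hi - q_lo) / (2.0 * h)
```

Because the Euler step of the sensitivity equation is exactly the $\tau$-derivative of the Euler step of the state equation, the two numbers agree very closely; the negative sign reflects that a slower time constant lowers the activity during activation.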

By analogy, the second order sensitivity of $y$ with respect to $\rho$ is defined as the following $(M\times\Lambda\times\Lambda)$-tensor:

$$R(t, \rho) := \frac{\partial^2 y}{\partial \rho^2}(t, \rho) \tag{12}$$

with

$$R_{ijk} = \frac{\partial^2 y_i}{\partial \rho_j\,\partial \rho_k}, \tag{13}$$

assuming $R_{ijk} = R_{ikj}$ for all $i, j, k$, therefore assuming that the prerequisites of Schwarz theorem (symmetry of the second derivatives) are fulfilled throughout. Differentiating (10) with respect to $\rho_k$ and applying the chain rule lead to the ODE

$$\dot R_{ijk} = \sum_{l=1}^{M}\left(\frac{\partial^2 f_i}{\partial \rho_k\,\partial y_l} + \sum_{m=1}^{M}\frac{\partial^2 f_i}{\partial y_m\,\partial y_l}\,S_{mk}\right) S_{lj} + \sum_{l=1}^{M}\frac{\partial f_i}{\partial y_l}\,R_{ljk} + \frac{\partial^2 f_i}{\partial \rho_k\,\partial \rho_j} + \sum_{l=1}^{M}\frac{\partial^2 f_i}{\partial y_l\,\partial \rho_j}\,S_{lk} \tag{14}$$

with $R_{ijk}(t_0) = 0$. For purposes beyond the aim of this paper, a condensed notation introducing the concept of tensor (or Kronecker) products as in ZivariPiran [18] may be helpful. For a practical implementation in MATLAB see Bader and Kolda [19].
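A quick numerical sanity check of a second-order sensitivity, without integrating the full tensor ODE (14): for the simple linear dynamics $\dot q = (\sigma - q)/\tau$ treated in Section 5, the solution $q(t) = \sigma + (q_{init} - \sigma)e^{-t/\tau}$ is known in closed form, so the mixed sensitivity $\partial^2 q/(\partial\sigma\,\partial\tau)$ can be approximated by nested central differences and compared with the analytic value. All numbers are illustrative.

```python
import math

def q_closed(t, sigma, tau, q_init):
    """Closed-form solution of the linearised dynamics dq/dt = (sigma - q)/tau:
    q(t) = sigma + (q_init - sigma) * exp(-t/tau)."""
    return sigma + (q_init - sigma) * math.exp(-t / tau)

def d_dsigma(t, sigma, tau, q_init, h=1e-6):
    """First-order sensitivity dq/dsigma via a central difference."""
    return (q_closed(t, sigma + h, tau, q_init)
            - q_closed(t, sigma - h, tau, q_init)) / (2.0 * h)

def d2_dsigma_dtau(t, sigma, tau, q_init, h=1e-6):
    """Mixed second-order sensitivity d^2 q/(dsigma dtau) by nested differences."""
    return (d_dsigma(t, sigma, tau + h, q_init)
            - d_dsigma(t, sigma, tau - h, q_init)) / (2.0 * h)

t, sigma, tau, q_init = 0.02, 1.0, 0.01, 0.0
num = d2_dsigma_dtau(t, sigma, tau, q_init)
# Analytic value: dq/dsigma = 1 - exp(-t/tau), hence
# d^2 q/(dsigma dtau) = -exp(-t/tau) * t / tau^2.
ana = -math.exp(-t / tau) * t / tau**2
```

The negative mixed sensitivity says that the sensitivity of $q$ to the stimulation decreases when the activation time constant grows.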

Furthermore, if an initial condition (see (7)) is considered as another parameter, we can derive a separate sensitivity differential equation by rewriting (6) in its integral form

$$y(t) = y_0 + \int_{t_0}^{t} f\big(s, y(s), \rho\big)\,ds. \tag{15}$$

Differentiating this equation with respect to $y_{0,k}$ yields

$$\frac{\partial y}{\partial y_{0,k}}(t) = e_k + \int_{t_0}^{t} \frac{\partial f}{\partial y}\big(s, y(s), \rho\big)\,\frac{\partial y}{\partial y_{0,k}}(s)\,ds \tag{16}$$

and differentiating again with respect to $t$ results in a homogeneous ODE for each component $k$; namely,

$$\frac{d}{dt}\,\frac{\partial y}{\partial y_{0,k}}(t) = J(t)\,\frac{\partial y}{\partial y_{0,k}}(t), \qquad \frac{\partial y}{\partial y_{0,k}}(t_0) = e_k, \tag{17}$$

where $e_k$ denotes the $k$th unit vector.

The parameters of our analysed models are supposed to represent physiological processes and therefore bear physical dimensions. For example, $m$ and $1/\tau$ are frequencies measured in (Hz), whereas $c$ is measured in (mol/L). Accordingly, $\partial q/\partial \tau$ would be measured in (Hz) and $\partial q/\partial m$ in (s) (note that each of our models consists of just one ODE and therefore we do not need a second index on the state variable). Normalisation provides a comprehensive comparison between all sensitivities, even across models. For any parameter, the value fixed for a specific simulation is a natural choice. For any state variable, we chose its current value at each point in time of the corresponding ODE solution. Hence, we normalise each sensitivity by multiplying it with the ratio of parameter value to state variable value to get the relative sensitivity

$$\tilde S_{ij}(t) = \frac{\rho_j}{y_i(t)}\,S_{ij}(t). \tag{18}$$

A relative sensitivity thus quantifies the percentage change in the $i$th state variable value per percentage change in the $j$th parameter value. This applies accordingly to the second order sensitivity:

$$\tilde R_{ijk}(t) = \frac{\rho_j\,\rho_k}{y_i(t)}\,R_{ijk}(t). \tag{19}$$

It can be shown that this method is valid and mathematically equivalent to another common method in which the whole model is nondimensionalised a priori [20]. A nonnormalised model formulation has the additional advantage of usually allowing a more immediate appreciation of, and transparent access for, experimenters. In the remainder of this paper, we always present and discuss relative sensitivity values normalised that way.
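The normalisation (18) is a one-liner; the numbers below are purely illustrative (an activity $q = 0.5$, a time constant $\tau = 0.01$ s, and an absolute sensitivity $\partial q/\partial\tau = -27$ Hz).

```python
def relative_sensitivity(S, rho_j, y_i):
    """Normalise an absolute sensitivity S = dy_i/drho_j into the relative,
    dimensionless (percent-per-percent) sensitivity of (18)."""
    return (rho_j / y_i) * S

# Illustrative numbers: q = 0.5, tau = 0.01 s, dq/dtau = -27 (1/s).
S_rel = relative_sensitivity(-27.0, 0.01, 0.5)   # dimensionless: -0.54
```

The Hz and s cancel, leaving a dimensionless measure: here a 1% increase in $\tau$ would decrease $q$ by about 0.54%.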

In our models the specific case $M = 1$ applies, so (10) and (14) simplify to the case $i = l = m = 1$ (no summations).

#### 4. Variance-Based Global Sensitivity Analysis

The differential sensitivity analysis above is called a local method because it does not take the physiological range of parameter values into account. Additionally factoring in such ranges characterises the so-called global methods. The main idea behind most global methods is to include a statistical component to scan the whole parameter space and combine the percentage change in a state variable value per percentage change in a parameter value with the variability of all of the parameters. The parameter space can be seen as a $\Lambda$-dimensional cuboid $Q = \prod_{j=1}^{\Lambda}\,[\rho_j^{\min}, \rho_j^{\max}]$, where $\rho_j^{\min}$ and $\rho_j^{\max}$ are the minimal and maximal values of the $j$th parameter and $\Lambda$ is the number of parameters. We can now fix a certain point $\rho^* \in Q$ and calculate the local gradient of the solution with respect to $\rho$ at $\rho^*$. The volume of the star-shaped area, investigated by changing only one parameter at a time and lying within a ball around $\rho^*$, vanishes in comparison to the volume of $Q$ for an increasing number of parameters [21]. For an overview of the numerous methods like ANOVA, FAST, regression, or Sobol’s indexing, the reader is referred to Saltelli and Chan [4] and Frey et al. [22].

In this section we sketch just the main idea of the variance-based sensitivity analysis approach as presented in Chan et al. [3], which is based on Sobol’s indexing. We chose this method because of its transparency and low computational cost. This method aims at calculating two measures of the sensitivity of a state variable with respect to a parameter $\rho_j$: the variance-based sensitivity function, denoted by $VBS_j$, and the total sensitivity index function, denoted by $TSI_j$. The $VBS_j$ functions give a normalised first order sensitivity quite similar to the relative sensitivity $\tilde S$ from the previous section but include the parameter range. The $TSI_j$ functions, however, additionally include higher order sensitivities and give a measure of the interdependencies of parameter influences.

A recipe for calculating $VBS_j$ and $TSI_j$ is as follows. First, set boundaries $[\rho_j^{\min}, \rho_j^{\max}]$ for all model parameters, either by model assumptions or by literature reference, thus fixing $Q$. Second, generate two sets of sample points $A = \{a_1, \dots, a_N\}$ and $B = \{b_1, \dots, b_N\}$ in $Q$, suited to represent the underlying probability distribution of each parameter, in our case the uniform distribution. Third, with $j$ indicating a parameter, generate sets of new sample points $C_j = \{c_{1j}, \dots, c_{Nj}\}$, $j = 1, \dots, \Lambda$, where $c_{ij}$ consists of all components (parameter values) of $b_i$ except for its $j$th component, which is replaced by the $j$th component of $a_i$. Consequently, $c_{ij}$ consists of the $j$th component of $a_i$ and every other component taken from $b_i$. Fourth, evaluate the model from (6) at all of the sample points, resulting in a family of solutions.

For this family perform the following calculations.

1. Compute the variance $V(t)$ of the family of all solutions as a function of time. This variance function indicates the general model output variety throughout the whole parameter range.
2. Compute the variances $V_j(t)$ of the family of solutions resulting from an evaluation of the model at all $a_i$ and $c_{ij}$, that is, for every $i = 1, \dots, N$ and $j = 1, \dots, \Lambda$. Each $V_j(t)$ is a function of time and indicates the model output variety if solely the value of parameter $\rho_j$ is changed.
3. Compute the variances $V_{\sim j}(t)$ of the family of solutions resulting from an evaluation of the model at all $b_i$ and $c_{ij}$, that is, for every $i = 1, \dots, N$ and $j = 1, \dots, \Lambda$. Each $V_{\sim j}(t)$ is a function of time and indicates the model output variety if the value of $\rho_j$ is fixed, whereas all other parameter values are changed.

Note that the computations in Chan et al. [3] are done using Monte-Carlo integrals as an approximation. The $VBS_j$ and $TSI_j$ can be finally calculated as

$$VBS_j(t) = \frac{V_j(t)}{V(t)}, \qquad TSI_j(t) = 1 - \frac{V_{\sim j}(t)}{V(t)}. \tag{20}$$

The normalisation entails additional properties of $VBS_j$ and $TSI_j$ (see [3, Figure 1]):

$$0 \le VBS_j(t) \le TSI_j(t), \qquad \sum_{j=1}^{\Lambda} VBS_j(t) \le 1 \le \sum_{j=1}^{\Lambda} TSI_j(t). \tag{21}$$

In other words, $VBS_j$ gives the normalised global first order sensitivity function of the solution with respect to $\rho_j$ in relation to the model output range. Accordingly, $TSI_j$ quantifies a relative impact of the variability in parameter $\rho_j$ on the model output, factoring in the interdependent influence in combination with all other parameters’ sensitivities. Chan et al. [3] suggested to denote the value $TSI_j$ as the “importance” of $\rho_j$.
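The recipe above can be sketched in a few lines of Python. The Monte-Carlo estimators below follow the $A/B/C_j$ sampling scheme described in the text (the specific product-form estimators are a common Sobol-type choice, not necessarily the exact formulas of Chan et al. [3]), demonstrated here on a simple additive test function instead of the ODE models.

```python
import random

def sobol_indices(f, d, N=50_000, seed=0):
    """Estimate first-order (VBS-like) and total (TSI-like) sensitivity
    indices of f: [0,1]^d -> R with the A/B/C_j sampling scheme."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(N)]
    B = [[rng.random() for _ in range(d)] for _ in range(N)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    f0 = sum(fA) / N                          # Monte-Carlo mean of the output
    V = sum(y * y for y in fA) / N - f0 * f0  # total output variance
    vbs, tsi = [], []
    for j in range(d):
        # c_ij: j-th component from a_i, all other components from b_i.
        fC = [f(b[:j] + [a[j]] + b[j + 1:]) for a, b in zip(A, B)]
        V_j = sum(ya * yc for ya, yc in zip(fA, fC)) / N - f0 * f0
        V_notj = sum(yb * yc for yb, yc in zip(fB, fC)) / N - f0 * f0
        vbs.append(V_j / V)
        tsi.append(1.0 - V_notj / V)
    return vbs, tsi

# Additive test function: no parameter interactions, so VBS_j ~= TSI_j;
# for uniform parameters on [0,1] the exact indices are 1/5 and 4/5.
vbs, tsi = sobol_indices(lambda r: r[0] + 2.0 * r[1], d=2)
```

For an additive function the first-order and total indices coincide; a gap between the two would flag interaction effects, exactly the information the $TSI_j$ adds over the $VBS_j$.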

#### 5. An Analytical Example for Local Sensitivity Analysis including a Link between Zajac’s and Hatze’s Formulations

By further simplifying Zajac’s formulation of an activation dynamics (1) through assuming a deactivation boost $\beta = 1$ (activation and deactivation time constants are equal) and a basic activity $q_0 = 0$, we obtain a linear ODE for this specific case,

$$\dot q(t) = \frac{1}{\tau}\big(\sigma - q(t)\big), \tag{22}$$

which is equivalent to Hatze’s equation (2) modelling the time evolution of the free Ca$^{2+}$-ion concentration. By analysing this specific case, we aim at making the above described sensitivity analysis method more transparent for the reader. Solving (22) yields

$$q(t) = \sigma + \big(q_{init} - \sigma\big)\,e^{-t/\tau}, \tag{23}$$

depending on just two parameters, $\sigma$ (stimulation: control parameter) and $\tau$ (time constant of activation: internal parameter), in addition to the initial value $q_{init} = q(0)$. The solution practically equals the value $\sigma$ after about $5\tau$, when the remaining difference has decayed to below 1% of the initial difference.
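The closed form (23) can be verified against a direct numerical integration of (22); the parameter values below are arbitrary illustrations.

```python
import math

def q_exact(t, sigma, tau, q_init):
    """Closed-form solution (23): q(t) = sigma + (q_init - sigma)*exp(-t/tau)."""
    return sigma + (q_init - sigma) * math.exp(-t / tau)

def q_numeric(t_end, sigma, tau, q_init, dt=1e-5):
    """Explicit Euler integration of the linear ODE (22): dq/dt = (sigma - q)/tau."""
    q = q_init
    for _ in range(int(round(t_end / dt))):
        q += dt * (sigma - q) / tau
    return q

sigma, tau, q_init = 0.8, 0.01, 0.05
err = abs(q_numeric(0.03, sigma, tau, q_init) - q_exact(0.03, sigma, tau, q_init))

# After five time constants the solution has practically reached sigma.
gap_after_5_tau = abs(q_exact(5 * tau, sigma, tau, q_init) - sigma)
```

The Euler error stays tiny for step sizes well below $\tau$, and the residual gap after $5\tau$ is below 1% of the full range, matching the statement above.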

We apply the more generally applicable, implicit methods (10) and (17) to determine the derivatives of the solution with respect to the parameters (the sensitivities), although we already know solution (23) in closed form. Hence, for the transparency of our method, we calculate the gradient of the right hand side of ODE (22),

$$\frac{\partial f}{\partial q} = -\frac{1}{\tau}, \qquad \frac{\partial f}{\partial \sigma} = \frac{1}{\tau}, \qquad \frac{\partial f}{\partial \tau} = -\frac{\sigma - q}{\tau^2}, \tag{24}$$

and insert these partial derivatives into (10) and (17). Solving the respective three ODEs for the three parameters ($\sigma$, $\tau$, and $q_{init}$) and normalising them according to (18) give the relative sensitivities of $q$ with respect to $\sigma$, $\tau$, and $q_{init}$ as functions of time (see Figure 1):