Special Issue: Computational Methods for Identification and Modelling of Complex Biological Systems
Review Article | Open Access
Alejandro F. Villaverde, "Observability and Structural Identifiability of Nonlinear Biological Systems", Complexity, vol. 2019, Article ID 8497093, 12 pages, 2019. https://doi.org/10.1155/2019/8497093
Observability and Structural Identifiability of Nonlinear Biological Systems
Observability is a modelling property that describes the possibility of inferring the internal state of a system from observations of its output. A related property, structural identifiability, refers to the theoretical possibility of determining the parameter values from the output. In fact, structural identifiability becomes a particular case of observability if the parameters are considered as constant state variables. It is possible to simultaneously analyse the observability and structural identifiability of a model using the conceptual tools of differential geometry. Many complex biological processes can be described by systems of nonlinear ordinary differential equations and can therefore be analysed with this approach. The purpose of this review article is threefold: (I) to serve as a tutorial on observability and structural identifiability of nonlinear systems, using the differential geometry approach for their analysis; (II) to review recent advances in the field; and (III) to identify open problems and suggest new avenues for research in this area.
A model is observable if it is theoretically possible to infer its internal state by observing its output. Model parameters can be considered as constant state variables. The particular case of parameter observability is called structural identifiability. Both concepts are structural in the sense that they depend only on the model equations; that is, they are completely determined by the system dynamics and output definition. They are not affected by limitations related to the frequency or accuracy of the experimental measurements, in contrast to the related concept of practical identifiability or estimability.
1. Introduction

The concept of observability was introduced by Kalman in 1960 for linear time-invariant systems [1, 2]. Conditions for checking observability of nonlinear systems were soon developed by several authors [3–7]. At the same time, the interest in parametric identifiability was growing among researchers using biological models, especially in biomedical applications. As a result, the concept of structural identifiability was introduced in 1970, when Bellman and Åström coined the term and presented the Laplace transform method for its study in the context of (linear) compartmental models [8].
Both concepts, observability and structural identifiability, are applicable to dynamic models of any kind: electrical, chemical, mechanical, biological, etc. Observability analysis, as well as the related question of observer design, has been and continues to be frequently investigated by systems and control theorists. In turn, researchers working in biological modelling (e.g., in mathematical biology and, more recently, in the systems biology community) have more often addressed structural identifiability issues. This is due to the fact that biological applications typically have more experimental limitations than engineering ones in terms of which measurements are feasible, making parameter identification a more challenging problem and calling for a deeper study of parametric identifiability issues and methods.
Observability and structural identifiability play a central role in system identification. There are a number of classic books on the subject, such as the ones by Walter and Pronzato [9] and Ljung [10]. In the context of biological modelling a very complete and recent reference is the book by DiStefano [11], which covers the topic of identifiability thoroughly, from both structural and practical points of view. The interested reader is also referred to [12], which reviews the different types of identifiability and related concepts, and to [13, 14], which deal specifically with structural identifiability. In a different context, Chatzis and coworkers have reviewed the observability and structural identifiability of nonlinear mechanical systems [15].
The present paper reviews observability and structural identifiability concepts and tools, with the aim of facilitating their application to biological models. Instead of attempting to discuss all the existing methodologies, it focuses on methods that adopt a differential geometry approach [16–18]. These properties may also be analysed with other symbolic approaches, such as power series [19–21], differential algebra [22–26], or others [27–29], to name just a few, as well as with seminumerical [30, 31] or numerical approaches [32, 33]. A comparison or discussion of the aforementioned methods is out of the scope of the present paper; the interested reader is again referred to [12–14, 34].
This manuscript begins by motivating the study in Section 2, illustrating the possible consequences of unobservability and unidentifiability. In Section 3 these concepts are analysed with the differential geometry approach, which provides a unified view of observability and structural identifiability and can be applied to a very general class of nonlinear systems. Section 4 reports recent developments in this area, and Section 5 concludes by suggesting some open problems as possible research directions.
2. Motivation: Implications of Unobservability and Unidentifiability in Biological Models
The importance of structural identifiability analysis has been recently stressed in different areas of biological modelling, such as animal science, pharmacodynamics, epidemiology, environmental modelling, physiology, neuroscience, oncology, and many more. On the other hand, assessing observability and structural identifiability can be difficult even for relatively small systems and becomes increasingly complicated as the model complexity increases. Furthermore, the theoretical foundations of the analyses have some aspects that are not fully studied yet. These reasons help explain why some modellers are reluctant to analyse these properties of their models, which might be understandable taking into account the fact that even the need of determining parameter values has been questioned in the context of biological modelling. However, such analysis is worth the effort, since lack of identifiability and/or observability can compromise the ability of a model to provide biological insight [36, 37, 44–46]. For example, one of the possible purposes of a model is to infer the values of certain parameters of interest; in such a case, identifiability is obviously desirable per se. Alternatively, the main purpose of the model may be to predict the dynamic behaviour of unmeasured states; in this case one is more interested in state observability than in parameter identifiability (although issues with the latter property may compromise the former).
As an example, consider the model of a possible glucose homeostasis mechanism depicted in Figure 1. This so-called βIG model describes the regulation of plasma glucose concentration (G) by means of insulin (I), which is secreted by pancreatic β cells. The model consists of three state variables (β, I, G) whose time courses are defined by nonlinear ordinary differential equations (ODEs) with five parameters. For the sake of the exercise, let us assume that glucose and β-cell mass are the measured outputs. In this case, if the model parameters are unknown, two of them are structurally unidentifiable. Figure 1 illustrates this fact by showing that changes in the model outputs (i.e., glucose concentration and β-cell mass) resulting from halving the value of one of these parameters can be compensated by doubling the value of the other. Therefore, it is not possible to distinguish between the two corresponding parameter vectors. This also entails that insulin is an unobservable state, since the impossibility of determining the true parameter vector leads to the impossibility of determining which of the time courses shown in the lower left plot of Figure 1 is the true one. Therefore, the model cannot be used for inferring insulin concentration from measurements of the other variables. This limitation can be overcome if the value of either of the two correlated parameters is known.
Such lack of structural identifiability can have important consequences. A nice illustration is given in a recent work in which Procopio et al. presented a model of the release of a cardiac damage biomarker, cardiac troponin T, with the purpose of diagnosing acute myocardial infarction in a clinical setting. After the authors realized that the first version of the model was structurally unidentifiable, which could potentially lead to wrong conclusions, they removed the redundancies in their model and obtained an equivalent one that was structurally identifiable.
Structural unidentifiability is related to unobservability, as shown in the βIG model example, in which the inability to estimate the parameters leads to wrong predictions of the unmeasured insulin concentration. However, unidentifiability does not always entail unobservability. As a trivial example, consider the case in which the value of one of the two correlated parameters is known. Then the βIG model becomes structurally identifiable and observable. If we now modify the model by replacing one of its parameters with the sum of two new parameters, the two new parameters would obviously be structurally unidentifiable, but the unmeasured state would remain observable. Therefore, it is desirable to analyse both the structural identifiability and observability of a model to decrease the possibility of drawing false conclusions from it.
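This distinction can be illustrated numerically with a hypothetical toy model (not the βIG model): two states with dynamics x1' = x2, x2' = -(a+b)·x2 and measured output y = x1, in which only the sum a + b affects the trajectories. The sketch below (NumPy assumed; all names are ours) uses the closed-form solution rather than a numerical integrator:

```python
import numpy as np

def simulate(a, b, x1_0=0.0, x2_0=1.0, t=np.linspace(0.0, 5.0, 50)):
    """Closed-form solution of x1' = x2, x2' = -(a+b)*x2 (toy model)."""
    k = a + b
    x2 = x2_0 * np.exp(-k * t)                      # unmeasured state
    x1 = x1_0 + x2_0 * (1.0 - np.exp(-k * t)) / k   # measured output y = x1
    return x1, x2

# Two parameter vectors with the same sum a + b = 3:
y1, s1 = simulate(a=1.0, b=2.0)
y2, s2 = simulate(a=2.5, b=0.5)

same_output = np.allclose(y1, y2)  # a and b are s.u.: only a+b matters
same_state = np.allclose(s1, s2)   # yet x2 is the same: observability holds
print(same_output, same_state)     # True True
```

Both comparisons succeed because the two parameter vectors share the same sum: a and b are structurally unidentifiable, yet every state trajectory, including the unmeasured x2, is fully determined by the output.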
Before concluding this section, it should be noted that a structurally identifiable model may nevertheless be practically unidentifiable; that is, the numerical estimates of its parameters may contain large errors due to insufficient or bad quality data. A recent example of this scenario is given by a study in which different models of cancer chemotherapy were analysed. The results showed that, although the models were structurally identifiable, they were not practically identifiable. This deficiency could lead to inferring incorrect cell cycle distributions and, as a result, to the choice of suboptimal therapies. It is thus reasonable to ask: if a model can be structurally identifiable and yet unidentifiable in practice, why should we care about analysing its structural identifiability in the first place? The answer is that practical and structural unidentifiability have different causes and also different remedies. Practical unidentifiability may be surmounted by using more informative data for calibration, but structural unidentifiabilities cannot be removed in this way (unless the new data involve modifying the output of the model, which strictly speaking entails modifying the model structure). Any attempt to remove a structural unidentifiability by incorporating more experimental data into the calibration (e.g., by sampling more densely or for a longer time) is doomed to fail, leading to a loss of resources and time. Practical identifiability analysis is not covered in this review; the interested reader is referred to [9, 11, 12].
In summary, it is advisable to analyse the observability and structural identifiability of a model before attempting to obtain insights from it. If this analysis reveals deficiencies, actions must be taken depending on the intended application of the model.
For example, if the intended application is for determining the value of a parameter that turns out to be structurally unidentifiable, it is necessary to eliminate this structural unidentifiability. There are several ways of achieving this. Sometimes it may be possible to determine the unidentifiable parameter by direct measurements, either of the parameter of interest or of the parameter(s) that are correlated with it. However, direct measurements of parameters are seldom possible. It is often more practical to measure additional state variables, which may make the model (or at least the parameter of interest) structurally identifiable; this possibility should be analysed before performing the experiments. Finally, if the experimental setup cannot be modified, or if it is not practical to obtain new experimental data, one can try to modify the model structure by reducing the number of parameters. This can be achieved by fixing some parameters to values taken from the literature or by merging several unidentifiable parameters into an identifiable one.
If the intended application of the model is for determining the system states, as opposed to the parameters, a structurally unidentifiable model may still be useful, as mentioned previously, as long as the states of interest are observable. In this case, lack of observability may be remedied in similar ways as lack of structural identifiability.
3. Background: Observability and Structural Identifiability
To define observability it is necessary to introduce the notion of distinguishable states.
Definition 1. Let M be a model with internal state x(t) ∈ R^n and measurable output y(t) ∈ R^m. Let y(t; x_0) denote the time evolution of the model output when the model is started from an initial state x_0 at time t_0. Two states x_1 and x_2 are indistinguishable if y(t; x_1) = y(t; x_2) for all t ≥ t_0. The set of states that are indistinguishable from x_0 is denoted by I(x_0).
A model is observable if it is possible to distinguish its internal state from any other state; that is:

Definition 2. A model M is observable at x_0 if I(x_0) = {x_0}.
Observability describes the possibility of determining the current state from present and future measurements. A similar concept, reconstructability, refers to determining the current state from present and past measurements.
3.1. Observability of Linear Systems
For illustration purposes, this subsection presents the special case of linear time-invariant (LTI) systems, whose equations can be written as

\[ \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t), \tag{1} \]

where θ is the parameter vector, u(t) ∈ R^r the input vector, x(t) ∈ R^n the state variable vector, and y(t) ∈ R^m the output vector. A, B, and C are constant matrices (whose entries may depend on θ) of dimensions n × n, n × r, and m × n, respectively. The dependence on t may be dropped for ease of notation.
Assessing the observability of (1) amounts to determining whether it is possible to infer its internal state, x, by observing its output, y. An intuitive way of obtaining a condition for checking observability is the following. The available knowledge consists of the output and its derivatives; that is,

\[ y = C x, \quad \dot{y} = C\dot{x} = C A x + C B u, \quad \ldots, \quad y^{(n-1)} = C A^{n-1} x + \Phi\left(u, \dot{u}, \ldots, u^{(n-2)}\right), \tag{2} \]

where Φ is a known matrix function of the input and its derivatives. Setting u = 0 and writing the above equations in matrix form leads to

\[ \begin{pmatrix} y \\ \dot{y} \\ \vdots \\ y^{(n-1)} \end{pmatrix} = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix} x = O\,x, \tag{3} \]

where the linear observability matrix, O, has been introduced. If O is invertible, one can uniquely obtain x from the knowledge of y and its derivatives, which is possible as long as rank(O) = n. This is known as the linear observability rank condition.
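As an illustration, the construction of O and the rank test can be coded in a few lines. The following sketch uses a hypothetical two-state system (the example and all names are ours, not from the text) and assumes NumPy:

```python
import numpy as np

def linear_obs_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1): the linear observability matrix."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical 2-state chain: x1 feeds x2, x2 decays; x2 never affects x1.
A = np.array([[0.0, 0.0],
              [1.0, -1.0]])

O_good = linear_obs_matrix(A, np.array([[0.0, 1.0]]))  # measure x2
O_bad = linear_obs_matrix(A, np.array([[1.0, 0.0]]))   # measure x1

print(np.linalg.matrix_rank(O_good))  # 2: observable (x1 influences y via x2)
print(np.linalg.matrix_rank(O_bad))   # 1: unobservable (x2 never reaches y)
```

Measuring the downstream state x2 makes the system observable, while measuring x1 does not, since x2 has no path to the output.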
“Complete” observability means that all the model states can be inferred from observations of the output.
3.2. Observability of Nonlinear Systems
Let us now consider nonlinear ODE models. In their most general form they can be written as

\[ \dot{x}(t) = f\left(x(t), \theta, u(t)\right), \qquad y(t) = g\left(x(t), \theta\right), \tag{4} \]

where f and g are analytic vector functions. A special case of (4) is that of nonlinear affine-in-the-input systems:

\[ \dot{x}(t) = f\left(x(t)\right) + g\left(x(t)\right)\,u(t), \qquad y(t) = h\left(x(t)\right). \tag{5} \]
Shortly after Kalman’s introduction of the concept of observability [1, 2], several researchers worked on its application to nonlinear systems of the type defined in (4) and (5). As a result, sufficient and/or necessary conditions for nonlinear observability were obtained [3–6], allowing the observability rank condition to be extended to this context. For nonlinear models, unlike for LTI models like (1), the derivatives of the output cannot be expressed in terms of the A, B, C arrays. It is therefore necessary to define a nonlinear version of the observability matrix, O(x); to this end Lie derivatives are used.
Definition 4. The Lie derivative of the output function g with respect to the dynamics f is defined by

\[ L_f g(x) = \frac{\partial g(x)}{\partial x}\, f(x, u). \tag{6} \]

Higher order Lie derivatives can be recursively calculated as

\[ L_f^{k} g(x) = \frac{\partial L_f^{k-1} g(x)}{\partial x}\, f(x, u), \qquad k \ge 2, \tag{7} \]

with L_f^1 g(x) = L_f g(x).
It can be noticed from (3) that the linear observability matrix, O, is the partial derivative of the derivatives of the output with respect to the states; that is,

\[ O = \frac{\partial}{\partial x} \begin{pmatrix} y \\ \dot{y} \\ \vdots \\ y^{(n-1)} \end{pmatrix}. \tag{8} \]
In a nonlinear model such as (4) with constant input, u(t) = u, the Lie derivative of the output function coincides with the time derivative of y, i.e., \dot{y} = L_f g(x). Thus, Lie derivatives can be used to calculate the nonlinear observability matrix O(x) for models with constant inputs as follows:

\[ O(x) = \frac{\partial}{\partial x} \begin{pmatrix} g(x) \\ L_f g(x) \\ \vdots \\ L_f^{\,n-1} g(x) \end{pmatrix}. \tag{9} \]
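The same construction can be carried out symbolically. The sketch below (SymPy assumed; the two-state system is a hypothetical example of ours) computes each Lie derivative by multiplying the Jacobian of the current row block by f:

```python
import sympy as sp

def lie_derivative(h, f, x):
    """L_f h = (dh/dx) * f, applied row-wise to a vector function h."""
    return h.jacobian(x) * f

def obs_matrix(h, f, x):
    """Stack the Jacobians of h, L_f h, ..., L_f^{n-1} h w.r.t. x."""
    rows, cur = [], h
    for _ in range(len(x)):
        rows.append(cur.jacobian(x))
        cur = lie_derivative(cur, f, x)
    return sp.Matrix.vstack(*rows)

# Hypothetical 2-state model: x1' = -x1**2, x2' = x1*x2, output y = x2.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-x1**2, x1*x2])
h = sp.Matrix([x2])

O = obs_matrix(h, f, x)
print(O.rank())  # 2: the rank condition holds generically
```

Here O = [[0, 1], [x2, x1]], whose determinant is -x2; the rank is 2 for almost all states, in line with the "almost everywhere" character of the symbolic rank test.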
The nonlinear version of the observability rank condition can be stated as follows.

Theorem 5 (Observability Rank Condition, ORC). If the model given by (4) satisfies rank(O(x_0)) = n, with O(x) defined in (9), then it is locally observable around x_0.
Two remarks are in order. First, it should be noted that the nonlinear observability rank condition (ORC) is a sufficient, but not strictly necessary, condition for nonlinear observability (unlike the linear case, in which the ORC is both sufficient and necessary). In the nonlinear case, the ORC is “almost necessary” in the sense that, if the model is locally observable around x_0, then rank(O(x)) = n for x in an open dense subset of the state space. This is a rather technical distinction, and in practice a failure to comply with the ORC is often considered a very strong indication of unobservability. Second, it should also be noted that the ORC determines local observability: if a model satisfies the ORC, it is possible to distinguish between two adjacent states, but there may still be distant states that are indistinguishable. A locally observable model is often (although not always) globally observable too.
3.3. Structural Local Identifiability as Observability
In this paper structural identifiability is considered as a particular case of observability. As noted in the preceding Section 3.2, nonlinear observability is a local concept, which means we will study structural local identifiability. The analysis of structural global identifiability requires other approaches [12–14]. Note however that the definitions provided here do not prevent a locally identifiable model from also being globally identifiable, and this will actually be the case in many practical applications.
Definition 6. A parameter θ_i in a model given by (4) is structurally locally identifiable (s.l.i.) if for almost any parameter vector θ* there is a neighbourhood N(θ*) such that the following property holds:

\[ \theta \in N(\theta^*) \ \text{and} \ y(t, \theta) = y(t, \theta^*) \ \Longrightarrow \ \theta_i = \theta_i^*. \tag{10} \]
Definition 7. A parameter θ_i is structurally unidentifiable (s.u.) if (10) does not hold in any neighbourhood of θ*.
Definition 8. A model is s.l.i. if all its parameters are s.l.i.
Definition 9. A model is s.u. if at least one of its parameters is s.u.
Structural identifiability can be considered as a particular case of observability by considering the parameters as state variables with zero dynamics, \dot{\theta} = 0 [31, 47–50]. The augmented state variable vector is

\[ \tilde{x} = \begin{pmatrix} x \\ \theta \end{pmatrix} \in \mathbb{R}^{\,n+q}. \tag{11} \]

Similar to the nonlinear observability matrix of (9), it is possible to define an augmented nonlinear observability-identifiability matrix, O_I(\tilde{x}), as

\[ O_I(\tilde{x}) = \frac{\partial}{\partial \tilde{x}} \begin{pmatrix} g(\tilde{x}) \\ L_f g(\tilde{x}) \\ \vdots \\ L_f^{\,n+q-1} g(\tilde{x}) \end{pmatrix}. \tag{12} \]

The corresponding rank test can then be stated as follows.

Theorem 10 (Observability-Identifiability Condition, OIC). If the model given by (4) satisfies rank(O_I(\tilde{x}_0)) = n + q, with O_I defined in (12), then it is locally observable and structurally locally identifiable around \tilde{x}_0.
Remark 11. Identifiability of individual parameters: if the OIC is fulfilled, all the parameters of the model are s.l.i. If the OIC does not hold, the model is s.u. and at least some parameter(s) are s.u. (and/or some states are unobservable). Since each column in O_I corresponds to the partial derivative with respect to a state or parameter, it is possible to determine which parameters (states) are structurally unidentifiable (unobservable) by removing the corresponding column and recalculating rank(O_I). If deleting the column does not change rank(O_I), then the parameter (state) is structurally unidentifiable (unobservable). We can thus define a Structural Identifiability Condition for a parameter as follows:
Theorem 12. Structural Identifiability Condition (SIC). Given a model defined by (4), its parameter θ_i is structurally locally identifiable in a neighbourhood of \tilde{x}_0 if rank(O_I^{\setminus i}(\tilde{x}_0)) < rank(O_I(\tilde{x}_0)), where O_I is the matrix defined in (12), and O_I^{\setminus i} is the array that results from removing the column corresponding to θ_i from O_I.
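The OIC and the SIC column test can be prototyped symbolically. The following sketch (SymPy assumed; the scalar model and its names th1, th2 are a hypothetical example of ours) analyses x' = th1·x with output y = th2·x, in which the output only constrains the product th2·x:

```python
import sympy as sp

x, th1, th2 = sp.symbols('x th1 th2')
z = sp.Matrix([x, th1, th2])        # augmented state: one state, two parameters
f = sp.Matrix([th1*x, 0, 0])        # x' = th1*x; parameters have zero dynamics
g = sp.Matrix([th2*x])              # output y = th2*x

# Build O_I from g and its n + q - 1 = 2 Lie derivatives.
rows, cur = [], g
for _ in range(len(z)):
    rows.append(cur.jacobian(z))
    cur = cur.jacobian(z) * f
OI = sp.Matrix.vstack(*rows)
r = OI.rank()
print(r)  # 2 < 3: the OIC fails, so something is s.u. or unobservable

# SIC column test: a variable is identifiable/observable iff deleting
# its column lowers the rank of O_I.
sli = {}
for i, name in enumerate(['x', 'th1', 'th2']):
    reduced = OI[:, [j for j in range(len(z)) if j != i]]
    sli[name] = reduced.rank() < r
print(sli)  # {'x': False, 'th1': True, 'th2': False}
```

Here th1 (the decay rate) is s.l.i., since deleting its column lowers the rank, while x and th2 are unobservable/s.u.: the output only determines their product.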
3.4. Example: Observability and Structural Identifiability Analysis of a Nonlinear Model
The approach described in Section 3.3 is demonstrated here by applying it to the nonlinear model used as motivating example in Section 2. This case study was briefly described in Section 2 and Figure 1, which shows its dynamic equations. It consists of n = 3 states, x = (β, I, G); m = 2 outputs, the measured β-cell mass and glucose concentration; q = 5 parameters; and one input. The augmented vector consisting of the states and parameters, \tilde{x}, has dimension n + q = 8.
The observability and structural identifiability of this system can be analysed with the observability-identifiability condition (OIC) of Theorem 10. To this end one must build the matrix O_I defined in (12). The first two rows in O_I correspond to the partial derivatives of the output function with respect to the states and parameters; since the outputs are two of the state variables (β and G), each of these rows has a single nonzero entry, a one in the column of the corresponding measured state.
The matrix made up of the two rows above has rank equal to two. Subsequent rows are calculated with Lie derivatives as defined in (6) and (7). In principle, up to n + q − 1 Lie derivatives must be symbolically calculated. However, in practice it may be possible to stop the calculation earlier: if the rank of the matrix does not increase after the addition of a new derivative, it is not necessary to calculate higher order derivatives, since they will not modify the rank.
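The early-stopping strategy can be sketched as follows (SymPy assumed; the model is a hypothetical toy of ours, not the βIG model):

```python
import sympy as sp

def oi_rank(g, f, z):
    """Build O_I block by block, stopping once the rank stops increasing."""
    cur = g
    OI = g.jacobian(z)
    rank = OI.rank()
    for _ in range(len(z) - 1):            # at most n + q - 1 Lie derivatives
        cur = cur.jacobian(z) * f          # next Lie derivative
        OI = sp.Matrix.vstack(OI, cur.jacobian(z))
        new_rank = OI.rank()
        if new_rank == rank:               # no increase: more rows cannot help
            break
        rank = new_rank
    return rank

# Hypothetical toy model: x1' = -p*x1, x2' = x1, measured output y = x1.
x1, x2, p = sp.symbols('x1 x2 p')
z = sp.Matrix([x1, x2, p])
r = oi_rank(sp.Matrix([x1]), sp.Matrix([-p*x1, x1, 0]), z)
print(r)  # 2 < 3: a deficiency is detected (x2 is unobservable)
```

In this toy case the rank stalls at 2, so the loop exits without wasting effort on further (and increasingly expensive) symbolic derivatives.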
The first Lie derivative is obtained as L_f g(\tilde{x}) = (∂g(\tilde{x})/∂\tilde{x}) f(\tilde{x}); its partial derivatives with respect to the components of \tilde{x} provide the third and fourth rows of O_I.
By adding the two rows corresponding to L_f g, the rank of O_I increases from two to three. Proceeding in the same manner, the rank of the matrix increases with every additional Lie derivative until it stalls: it is equal to 7 when O_I is built with either 5 or 6 Lie derivatives. Thus, with 6 derivatives we know that the model has some observability/identifiability issues, since its O_I matrix does not have full rank (7 < n + q = 8).
At this point we can determine the observability of each state and the structural identifiability of each parameter using the procedure described in Remark 11. This yields that the unmeasured state (insulin, I) is not observable and that two parameters are s.u. while the remaining three are s.l.i. It can be noticed that multiplying the dynamic equation of I shown in Figure 1 by one of the two s.u. parameters leads to a modified model in which the third state is the corresponding product instead of I, and the other s.u. parameter only appears in the equations as part of its product with the first one. This model formulation highlights the fact that only those products, rather than the individual values of I and the two s.u. parameters, are observable (identifiable).
4. Recent Developments
4.1. Computational Implementations of the Rank Conditions
The conditions described in Section 3 involve building observability () or observability-identifiability matrices () and calculating their rank. Building these arrays involves symbolic calculations, which can be performed in environments such as Mathematica (Wolfram Research, Champaign, IL, USA), MATLAB (MathWorks, Natick, MA, USA), or MAPLE (Maplesoft, Waterloo, ON, Canada). Some software tools provide advanced implementations of these calculations.
August and Papachristodoulou used semidefinite programming to evaluate the OIC (Theorem 10). They used SOSTOOLS, a free MATLAB toolbox that performs a sum of squares decomposition. This technique allows assessing identifiability for all parameter values within an interval; however, the computational cost of the rank calculation quickly becomes high as the problem size increases, which hinders the applicability of this method to medium-to-large models.
Another MATLAB tool is the STRIKE-GOLDD toolbox, publicly available software that analyses structural identifiability and observability using the OIC. It includes options such as performing partial analyses and decomposing the models, which can be helpful for analysing large models.
For rational systems, the Exact Arithmetic Rank (EAR) method is a numerical alternative for calculating the rank. It is based on an algorithm originally presented by Sedoglavic, which was extended and implemented in Mathematica by Jirstrand and coworkers.
4.2. Accessibility and the Role of Initial Conditions
The rank conditions of Theorems 5 and 10 provide results that are valid for “almost all” values of the variables (state and parameter vectors), that is, for all possible values except for a set of measure zero (a “thin set”). Consequently, for specific values there may be loss of identifiability. This was pointed out by Saccomani et al. [53, 54], who analysed this phenomenon with a differential algebra approach, tracing its cause to a loss of accessibility from certain initial conditions. Accessibility, also called reachability, is a property that describes the ability to move a system to any state in a neighbourhood of the initial one. Saccomani and coworkers noted that a loss of accessibility from specific initial conditions could lead to loss of structural identifiability.
This matter has been recently approached from the differential geometry viewpoint. It was remarked that loss of accessibility is not the only possible cause of loss of structural identifiability from specific initial conditions: this phenomenon can take place even for models that are not accessible from generic initial conditions. Furthermore, it was also noted that a decrease in rank(O_I) at a specific initial condition does not necessarily result in a loss of structural identifiability, even if the system is started at that initial condition. A method for finding potentially problematic initial condition vectors was also suggested, although it scales up poorly with system size.
4.3. The Role of Inputs
The methodology presented in Section 3 assumes that the input vector u is known and constant. Obviously, the same formulation can account for the case of unknown constant inputs simply by considering them as additional parameters, which are unknown and constant by definition. For known, time-varying inputs that are differentiable functions of time, a differential algebra approach would still be valid. However, the differential geometry procedure described in Section 3 needs to be extended in order to cope with this case. To this end it has recently been suggested to use extended Lie derivatives, which are defined as follows:
Definition 13. The extended Lie derivative of the output g with respect to the dynamics f is

\[ L_f^{e} g(\tilde{x}) = \frac{\partial g(\tilde{x})}{\partial \tilde{x}}\, f(\tilde{x}, u) + \sum_{j \ge 0} \frac{\partial g(\tilde{x})}{\partial u^{(j)}}\, u^{(j+1)}, \]

where u^{(j)} is the j-th derivative of the input u (with u^{(0)} = u). Higher order extended Lie derivatives are recursively calculated as:

\[ L_f^{e,k} g(\tilde{x}) = \frac{\partial L_f^{e,k-1} g(\tilde{x})}{\partial \tilde{x}}\, f(\tilde{x}, u) + \sum_{j \ge 0} \frac{\partial L_f^{e,k-1} g(\tilde{x})}{\partial u^{(j)}}\, u^{(j+1)}. \]
(Note that this definition considers a time-dependent input vector u(t), which is simply written as u for ease of notation.) Unlike the previously defined Lie derivatives of ((6), (7)), the extended Lie derivatives are equal to the output derivatives for time-varying inputs, \dot{y} = L_f^{e} g(\tilde{x}). Evaluating the OIC with an O_I built with extended Lie derivatives correctly determines the observability and structural identifiability of a model. Some models may require time-varying inputs in order to be identifiable. It has also been shown how the extended Lie derivatives can be used for experimental design, by determining the number of nonzero derivatives of the input that are required for structural identifiability.
The identifiability of the βIG model used in Sections 2 and 3.4 does not depend on the input derivatives. Hence in this section this situation will be illustrated with a different example, a two-compartment model given by (19). Compartmental models of this type are commonly used to describe physiological processes. Note that, although the model given by (19) is linear in the states, if the state vector is augmented with the parameters (as needed for structural identifiability analysis) the model becomes nonlinear.
This model is structurally unidentifiable from an experiment with a constant input, but becomes structurally identifiable with a continuous time-varying input such as a ramp. This is illustrated in Figure 2. The constant input result can be obtained by applying the procedure described in Section 3.3 as shown in Section 3.4. Since this model has n states and q parameters, it would require rank(O_I) = n + q to be observable and s.l.i. However, the aforementioned procedure yields a rank deficient O_I, and the procedure in Remark 11 determines that the state vector is observable but all the parameters are s.u. The time-varying input result is obtained by building O_I with the extended Lie derivatives of Definition 13; in the corresponding symbolic derivations the first input derivative \dot{u} is set to a (nonzero) constant value and higher order derivatives (\ddot{u}, …) are set to zero. This yields a full rank, rank(O_I) = n + q, with 5 derivatives, and the model is observable and s.l.i. These calculations can be performed with STRIKE-GOLDD2 and take less than one second on a standard computer. The difference in the results with \dot{u} = 0 and \dot{u} ≠ 0 is due to the presence of terms containing \dot{u} in some entries of O_I, whose contribution is needed for a full rank. Setting \dot{u} = 0 removes these terms and decreases the matrix rank, leading to a loss of identifiability.
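The role of input derivatives can be reproduced on a minimal hypothetical model of ours, x' = th1·u + th2, y = x, in which a constant input only reveals the lumped quantity th1·u + th2. The sketch below (SymPy assumed) mimics the extended Lie derivatives of Definition 13 by treating u and its derivatives as extra symbols with trivial dynamics (u' = du, du' = ddu):

```python
import sympy as sp

x, th1, th2, u, du, ddu = sp.symbols('x th1 th2 u du ddu')
z = sp.Matrix([x, th1, th2])                   # states + parameters
w = sp.Matrix([x, th1, th2, u, du])            # ... extended with u and u'
fw = sp.Matrix([th1*u + th2, 0, 0, du, ddu])   # time derivative of each entry of w
g = sp.Matrix([x])                             # output y = x

def build_OI(subs=None):
    rows, cur = [], g
    for _ in range(len(z)):
        rows.append(cur.jacobian(z))   # O_I columns: states and parameters only
        cur = cur.jacobian(w) * fw     # extended Lie derivative
        if subs:
            cur = cur.subs(subs)
    return sp.Matrix.vstack(*rows)

r_const = build_OI({du: 0, ddu: 0}).rank()  # constant input: u' = u'' = 0
r_ramp = build_OI().rank()                  # time-varying input: u' kept symbolic
print(r_const, r_ramp)  # 2 3
```

Substituting du = ddu = 0 reproduces the constant-input analysis and lowers the rank (th1 and th2 are only identifiable as a lump), mirroring the behaviour of the two-compartment example.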
It should be noted that this model can also be analysed with a differential algebra approach; for example, the COMBOS application obtains the same result in comparable time. Compared to the differential geometry approach, the advantages of the differential algebra method are its ability to distinguish between local and global identifiability and to find identifiable combinations. Its disadvantages are that in principle it cannot consider specific input derivatives being zero (e.g., \dot{u} ≠ 0 but \ddot{u} = 0) and that it typically has worse computational scale-up for models with large nonlinearities.
A different problem arises when the inputs are time-varying and unknown. Such inputs can be viewed as external disturbances, of which there are neither measurements nor information about their dependence on time. Martinelli extended the ORC to account for this situation for the case of nonlinear systems that are affine with respect to the inputs, which must be differentiable but may be known and/or unknown. To this end, the model defined by (5) is augmented in order to include an unknown input vector, w. It was further proposed to extend this model by augmenting the original state x to \tilde{x}, which includes the input w and its derivatives up to order k, that is, \tilde{x} = (x, w, \dot{w}, \ldots, w^{(k)}). An extended observability rank condition (EORC) was then presented, allowing the observability of systems with unknown inputs to be checked, although not that of the inputs themselves, at least in its published form. Although in [58, 59] the structural identifiability problem was not explicitly considered, it is of course possible to apply this idea to a joint observability and structural identifiability analysis.
4.4. Model Symmetries and Identifiable Combinations
If a set of parameters is found to be structurally unidentifiable, a question naturally arises: is it possible to reformulate the model by combining such parameters into an identifiable quantity? The answer to this question entails characterizing the form in which the structurally unidentifiable parameters are correlated. Many methods for structural identifiability analysis are capable of addressing this problem to a certain extent; however, no generally applicable and automatic procedure exists.
One of the first examples was the “exhaustive modelling” method for finding the set of models that are output indistinguishable from a given one. This procedure, also known as the similarity transformation approach, can be used to obtain structurally identifiable versions of linear compartmental models. An extension to controlled nonlinear models, which requires testing controllability and observability conditions, was presented later, and the case of uncontrolled systems was considered in [61, 62].
Differential algebra is a classic approach for the study of observability and structural identifiability. The equivalence between the observability definitions from the algebraic and differential geometric viewpoints was established for a class of rational systems. DAISY is a software tool that adopts the differential algebra approach to assess global structural identifiability and observability, and COMBOS is a tool specifically developed for finding identifiable parameter combinations using differential algebra concepts such as Gröbner bases [26, 66].
Other approaches to this problem use Lie transformations. One method is based on the generation of Lie algebras that represent the symmetries of the model equations; it uses random numerical specializations and is valid for autonomous, rational systems. Instead of using random specializations, another method finds Lie symmetries by transforming rational terms into linear terms. Finally, the aforementioned toolbox STRIKE-GOLDD, which uses Lie derivatives to calculate the observability-identifiability matrix O_I, includes a procedure for finding identifiable parameter combinations that is based on ideas from [47, 69, 70]. Briefly, it removes from O_I the columns corresponding to identifiable parameters and calculates a basis for the null space of the resulting matrix. The coefficients of this basis define a set of partial differential equations, whose solutions yield the identifiable combinations.
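The null-space procedure can be sketched on a toy example (SymPy assumed; the scalar model x' = th1·x, y = th2·x and its names are hypothetical, not from any of the cited tools). After removing the column of the identifiable parameter th1, the null space of the reduced O_I gives the coefficients of a PDE whose solution is the identifiable combination x·th2:

```python
import sympy as sp

x, th1, th2 = sp.symbols('x th1 th2')
z = sp.Matrix([x, th1, th2])
f = sp.Matrix([th1*x, 0, 0])       # x' = th1*x; parameters constant
g = sp.Matrix([th2*x])             # y = th2*x

rows, cur = [], g
for _ in range(len(z)):
    rows.append(cur.jacobian(z))
    cur = cur.jacobian(z) * f
OI = sp.Matrix.vstack(*rows)

# th1 is identifiable; keep only the columns of the problematic variables
# (x and th2) and compute a basis of the null space of the reduced matrix.
reduced = OI[:, [0, 2]]
basis = reduced.nullspace()
v = basis[0]
print(v.T)  # e.g. Matrix([[-x/th2, 1]]) up to scaling

# phi = x*th2 solves the PDE  v1*dphi/dx + v2*dphi/dth2 = 0, so the
# product x*th2 is the observable (identifiable) combination:
phi = x*th2
pde = v[0]*sp.diff(phi, x) + v[1]*sp.diff(phi, th2)
print(sp.simplify(pde))  # 0
```

The null-space vector encodes a scaling symmetry (x → λx, th2 → th2/λ) that leaves the output invariant; any function annihilated by the corresponding PDE, here x·th2, is an identifiable combination.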
4.5. Sloppiness, Dynamical Compensation, and Structural Identifiability
A structurally unidentifiable model can yield the same output for different parameter values. This situation might be interpreted as a sign of robustness of the system to changes in parameter values. However, while lack of identifiability is usually considered an undesirable model property, in certain contexts robustness is seen as a desirable property. This apparent contradiction highlights the subtle character of the relationship between identifiability and robustness. As an illustration of this relationship, this subsection discusses two concepts developed in recent years – sloppiness and dynamical compensation – that are related but not equivalent to unidentifiability.
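The opening statement can be checked numerically with a toy unidentifiable model (an illustrative example of my own): since only the sum k1 + k2 enters the output, different parameter pairs with the same sum produce exactly the same output trajectory.

```python
import numpy as np

# Toy unidentifiable model: dx/dt = -(k1 + k2)*x, y = x, whose solution is
# y(t) = x0 * exp(-(k1 + k2)*t); only the sum of the parameters matters.
t = np.linspace(0.0, 5.0, 100)

def output(k1, k2, x0=1.0):
    return x0 * np.exp(-(k1 + k2) * t)

# Two different parameter vectors with the same sum give identical outputs.
same = np.allclose(output(1.0, 2.0), output(2.5, 0.5))
```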
The first concept, sloppiness or sloppy models, was introduced in  to refer to the situation in which the model output is sensitive to changes in so-called stiff parameters, but largely insensitive to changes in sloppy parameters. Sloppiness was defined as the existence of a clear gap between the eigenvalues of the system’s Fisher information matrix (FIM), with large eigenvalues corresponding to stiff parameters and small eigenvalues corresponding to sloppy parameters. It was claimed that sloppiness is a universal feature of systems biology models , which would make it impossible to estimate all parameters accurately. More recent publications have provided new insights about sloppiness, as reviewed in . The concept of sloppiness, which has been linked to information theory, highlights the fact that a model’s output behaviour may still be tightly constrained despite the parameter values being only loosely constrained. Sloppiness provides a viewpoint for studying how distinguishable models are, and how they can be reduced. Several papers have clarified the relation between sloppiness and identifiability [73–76]. It is now understood that sloppiness is related to practical rather than structural identifiability, and that it is not equivalent to unidentifiability of any kind, meaning that sloppy models can indeed be identifiable.
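The eigenvalue-gap definition can be illustrated with a small numerical sketch (a toy example of my own, not taken from the sloppiness literature): a sum of two exponentials with similar decay rates, whose FIM is approximated from finite-difference output sensitivities under unit measurement noise.

```python
import numpy as np

# Toy sloppy model (illustrative): y(t) = exp(-k1*t) + exp(-k2*t) with
# similar decay rates, so the two sensitivity directions nearly coincide.
t = np.linspace(0.0, 5.0, 50)
k = np.array([1.0, 1.2])

def output(k):
    return np.exp(-k[0] * t) + np.exp(-k[1] * t)

# Output sensitivities with respect to each parameter, by central differences.
eps = 1e-6
S = np.column_stack([(output(k + eps * e) - output(k - eps * e)) / (2 * eps)
                     for e in np.eye(len(k))])

fim = S.T @ S                       # Fisher information matrix (unit noise)
eigs = np.linalg.eigvalsh(fim)      # eigenvalues in ascending order
gap = eigs[-1] / eigs[0]            # large ratio: stiff vs sloppy directions
```

Even with only two parameters the eigenvalue ratio spans roughly two orders of magnitude here: the stiff direction corresponds to varying both rates together, while the sloppy direction trades one rate off against the other with little effect on the output.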
The second concept, dynamical compensation (DC for short), was introduced in  as a property found in certain physiological circuits. Originally DC was defined simply as the invariance of the model output with respect to changes in a parameter value. It was immediately noted that according to this definition DC amounted to structural unidentifiability [77, 78]. (Note that the glucose homeostasis mechanism discussed in the Introduction was proposed in  as a possible mechanism for achieving DC; depending on its formulation—i.e., on which states are measured and which parameters are known—this model can be structurally unidentifiable). This equivalence between structural unidentifiability and the original definition of DC was not discussed in  and was potentially problematic, since the purpose of DC was to describe a phenomenon different to structural unidentifiability. More precisely, DC referred to the capability of a physiological circuit to maintain its dynamic behaviour unchanged after a change in the value of a model parameter, following a transition period. An alternative definition of DC that provided a more detailed description of the phenomenon and that took into account the relationship with structural identifiability was proposed in .
5. Open Problems and Future Directions
The differential geometry approach adopted in this review has been used to analyse observability and structural identifiability of nonlinear systems for more than forty years. The theoretical and computational advances made in the last decades have increased its applicability. However, there are still many challenges that call for more research in this area.
For example, an intrinsic limitation of the approach is that it yields only local results. Other methods, such as differential algebra, are capable of providing global structural identifiability results. They could serve as inspiration for extending, or hybridizing with, the differential geometry techniques so that they can perform global analyses.
Other desirable developments would consist of advanced implementations to alleviate the computational burden of the analyses. Such improvements, which may benefit from the use of parallelization and high performance computing techniques, would facilitate the application of these methods to the increasingly large models being built in the biological modelling community.
Another possible direction concerns the role of inputs in observability and identifiability analysis. Despite recent advances, there are still several open questions regarding this matter. It has been noted that certain models that are structurally unidentifiable from a single constant input experiment can become identifiable if a continuously time-varying input is used . In some cases the same improvement can be obtained with multiple constant input experiments [56, 79], or, equivalently, with a single experiment with a piecewise constant input. However, the question of when a time-varying input and multiple constant inputs are equivalent for the purpose of structural identifiability has not been answered yet. Likewise, the problem of analysing observability and structural identifiability in the presence of unmeasured inputs has not been fully solved yet.
Finally, an important open question is the relationship between observability/identifiability and model predictions. On the one hand, it is known that lack of the former can lead to errors in the latter. On the other hand, this is not necessarily the case: an unidentifiable model may still yield accurate predictions of certain quantities. Therefore, further insights into the requisites for accurate predictive modelling would be a valuable contribution.
Conflicts of Interest
The author declares that he has no conflicts of interest.
Acknowledgments
The author was supported by the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 686282 (“CANPATHPRO”) during the writing of this paper.
- R. E. Kalman, “Contributions to the theory of optimal control,” Boletín de la Sociedad Matemática Mexicana, vol. 5, pp. 102–119, 1960.
- R. Kalman, “On the general theory of control systems,” IFAC Proceedings Volumes, vol. 1, no. 1, pp. 491–502, 1960.
- E. W. Griffith and K. S. Kumar, “On the observability of nonlinear systems: I,” Journal of Mathematical Analysis and Applications, vol. 35, pp. 135–147, 1971.
- R. Hermann and A. J. Krener, “Nonlinear controllability and observability,” IEEE Transactions on Automatic Control, vol. 22, no. 5, pp. 728–740, 1977.
- Y. M. Kostyukovskii, “Simple conditions of observability of nonlinear controlled systems,” Avtomat. i Telemeh., no. 10, pp. 32–41, 1968.
- S. R. Kou, D. L. Elliott, and T. J. Tarn, “Observability of nonlinear systems,” Information and Control, vol. 22, no. 1, pp. 89–99, 1973.
- H. J. Sussmann and V. Jurdjevic, “Controllability of nonlinear systems,” Journal of Differential Equations, vol. 12, pp. 95–116, 1972.
- R. Bellman and K. J. Åström, “On structural identifiability,” Mathematical Biosciences, vol. 7, no. 3-4, pp. 329–339, 1970.
- E. Walter and L. Pronzato, “Identification of parametric models from experimental data,” in Communications and Control Engineering Series, Springer, London, UK, 1997.
- L. Ljung, System identification: theory for the user, Prentice Hall, Upper Saddle River, NJ, USA, 1999.
- J. J. DiStefano III, Dynamic Systems Biology Modeling and Simulation, Academic Press, 2015.
- A. F. Villaverde and A. Barreiro, “Identifiability of large nonlinear biochemical networks,” MATCH - Communications in Mathematical and in Computer Chemistry, vol. 76, no. 2, pp. 259–296, 2016.
- O.-T. Chis, J. R. Banga, and E. Balsa-Canto, “Structural identifiability of systems biology models: A critical comparison of methods,” PLoS ONE, vol. 6, no. 11, 2011.
- H. Miao, X. Xia, A. S. Perelson, and H. Wu, “On identifiability of nonlinear ODE models and applications in viral dynamics,” SIAM Review, vol. 53, no. 1, pp. 3–39, 2011.
- M. N. Chatzis, E. N. Chatzi, and A. W. Smyth, “On the observability and identifiability of nonlinear structural and mechanical systems,” Structural Control and Health Monitoring, vol. 22, no. 3, pp. 574–593, 2015.
- A. Isidori, Nonlinear control systems, Springer Science & Business Media, 1995.
- E. D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems, vol. 6, Springer Science & Business Media, 2013.
- M. Vidyasagar, Nonlinear Systems Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.
- O. Chiş, J. R. Banga, and E. Balsa-Canto, “GenSSI: A software toolbox for structural identifiability analysis of biological models,” Bioinformatics, vol. 27, no. 18, Article ID btr431, pp. 2610-2611, 2011.
- H. Pohjanpalo, “System identifiability based on the power series expansion of the solution,” Mathematical Biosciences, vol. 41, no. 1-2, pp. 21–33, 1978.
- E. Walter and Y. Lecourtier, “Global approaches to identifiability testing for linear and nonlinear state space models,” Mathematics and Computers in Simulation, vol. 24, no. 6, pp. 472–482, 1982.
- S. Audoly, G. Bellu, L. D’angiò, M. P. Saccomani, and C. Cobelli, “Global identifiability of nonlinear models of biological systems,” IEEE Transactions on Biomedical Engineering, vol. 48, no. 1, pp. 55–65, 2001.
- S. Diop and M. Fliess, “Nonlinear observability, identifiability, and persistent trajectories,” in Proceedings of the 30th IEEE Conference on Decision and Control, pp. 714–719, Brighton, UK, December 1991.
- H. Hong, A. Ovchinnikov, G. Pogudin, and C. Yap, “Global identifiability of differential models,” 2018, https://arxiv.org/abs/1801.08112.
- L. Ljung and T. Glad, “On global identifiability for arbitrary model parametrizations,” Automatica, vol. 30, no. 2, pp. 265–276, 1994.
- N. Meshkat, M. Eisenberg, and J. DiStefano III, “An algorithm for finding globally identifiable parameter combinations of nonlinear ODE models using Gröbner bases,” Mathematical Biosciences, vol. 222, no. 2, pp. 61–72, 2009.
- L. Denis-Vidal, G. Joly-Blanchard, and C. Noiret, “Some effective approaches to check the identifiability of uncontrolled nonlinear systems,” Mathematics and Computers in Simulation, vol. 57, no. 1-2, pp. 35–44, 2001.
- S. Vajda, K. R. Godfrey, and H. Rabitz, “Similarity transformation approach to identifiability analysis of nonlinear compartmental models,” Mathematical Biosciences, vol. 93, no. 2, pp. 217–248, 1989.
- X. Xia and C. H. Moog, “Identifiability of nonlinear systems with application to HIV/AIDS models,” IEEE Transactions on Automatic Control, vol. 48, no. 2, pp. 330–336, 2003.
- J. Karlsson, M. Anguelova, and M. Jirstrand, “An efficient method for structural identifiability analysis of large dynamic systems,” in Proceedings of the 16th IFAC Symposium on System Identification, vol. 16, pp. 941–946, 2012.
- A. Sedoglavic, “A probabilistic algorithm to test local algebraic observability in polynomial time,” Journal of Symbolic Computation, vol. 33, no. 5, pp. 735–755, 2002.
- A. Raue, C. Kreutz, T. Maiwald et al., “Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood,” Bioinformatics, vol. 25, no. 15, pp. 1923–1929, 2009.
- J. D. Stigter and J. Molenaar, “A fast algorithm to assess local structural identifiability,” Automatica, vol. 58, pp. 118–124, 2015.
- A. Raue, J. Karlsson, M. P. Saccomani, M. Jirstrand, and J. Timmer, “Comparison of approaches for parameter identifiability analysis of biological systems,” Bioinformatics, vol. 30, no. 10, pp. 1440–1448, 2014.
- O. Karin, A. Swisa, B. Glaser, Y. Dor, and U. Alon, “Dynamical compensation in physiological circuits,” Molecular Systems Biology, vol. 12, no. 11, article no. 886, 2016.
- R. Muñoz-Tamayo, L. Puillet, J. B. Daniel et al., “Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?” Animal, vol. 12, no. 4, pp. 701–712, 2018.
- D. L. I. Janzén, L. Bergenholm, M. Jirstrand et al., “Parameter identifiability of fundamental pharmacodynamic models,” Frontiers in Physiology, vol. 7, 2016.
- N. Tuncer, M. Martcheva, B. LaBarre, and S. Payoute, “Structural and practical identifiability analysis of Zika epidemiological models,” Bulletin of Mathematical Biology, vol. 80, no. 8, pp. 2209–2241, 2018.
- J. D. Stigter, M. B. Beck, and J. Molenaar, “Assessing local structural identifiability for environmental models,” Environmental Modeling and Software, vol. 93, pp. 398–408, 2017.
- T. R. Middendorf and R. W. Aldrich, “Structural identifiability of equilibrium ligand-binding parameters,” The Journal of General Physiology, vol. 149, no. 1, pp. 105–119, 2017.
- O. J. Walch and M. C. Eisenberg, “Parameter identifiability and identifiable combinations in generalized Hodgkin-Huxley models,” Neurocomputing, vol. 199, pp. 137–143, 2016.
- M. P. Saccomani and K. Thomaseth, “The Union between Structural and Practical Identifiability Makes Strength in Reducing Oncological Model Complexity: A Case Study,” Complexity, vol. 2018, Article ID 2380650, 10 pages, 2018.
- R. N. Gutenkunst, J. J. Waterfall, F. P. Casey, K. S. Brown, C. R. Myers, and J. P. Sethna, “Universally sloppy parameter sensitivities in systems biology models,” PLoS Computational Biology, vol. 3, no. 10, pp. 1871–1878, 2007.
- M. C. Eisenberg and H. V. Jain, “A confidence building exercise in data and identifiability: Modeling cancer chemotherapy as a case study,” Journal of Theoretical Biology, vol. 431, pp. 63–78, 2017.
- A. Procopio, S. De Rosa, C. Covello et al., “A model of cardiac troponin T release in patient with acute myocardial infarction,” in Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 435–440, Melbourne, VIC, December 2017.
- A. F. Villaverde and J. R. Banga, “Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation,” PLoS Computational Biology, vol. 13, no. 11, p. e1005878, 2017.
- M. Anguelova, Nonlinear observability and identifiability: General theory and a case study of a kinetic model for S. cerevisiae [Master’s thesis], Chalmers University of Technology and Göteborg University, 2004.
- M. Anguelova, Observability and identifiability of nonlinear systems with applications in biology [PhD thesis], Chalmers University of Technology, 2007.
- E. August and A. Papachristodoulou, “A new computational tool for establishing model parameter identifiability,” Journal of Computational Biology, vol. 16, no. 6, pp. 875–884, 2009.
- E. T. Tunali and T. J. Tarn, “New results for identifiability of nonlinear systems,” IEEE Transactions on Automatic Control, vol. 32, no. 2, pp. 146–154, 1987.
- S. Prajna, A. Papachristodoulou, and P. A. Parrilo, “Introducing SOSTOOLS: A general purpose sum of squares programming solver,” in Proceedings of the 41st IEEE Conference on Decision and Control, pp. 741–746, December 2002.
- A. F. Villaverde, A. Barreiro, and A. Papachristodoulou, “Structural Identifiability of Dynamic Systems Biology Models,” PLoS Computational Biology, vol. 12, no. 10, p. e1005153, 2016.
- L. D’Angiò, M. P. Saccomani, S. Audoly, and G. Bellu, “Identifiability of nonaccessible nonlinear systems,” in Positive Systems, R. Bru and S. Romero-Vivó, Eds., vol. 389, pp. 269–277, Springer, Berlin, Germany, 2009.
- M. P. Saccomani, S. Audoly, and L. D'Angiò, “Parameter identifiability of nonlinear systems: The role of initial conditions,” Automatica, vol. 39, no. 4, pp. 619–632, 2003.
- A. F. Villaverde and J. R. Banga, “Structural properties of dynamic systems biology models: Identifiability, reachability, and initial conditions,” Processes, vol. 5, no. 2, 2017.
- A. F. Villaverde, N. D. Evans, M. J. Chappell, and J. R. Banga, “Input-Dependent Structural Identifiability of Nonlinear Systems,” IEEE Control Systems Letters, vol. 3, no. 2, pp. 272–277, 2019.
- N. Meshkat, C. Er-zhen Kuo, and J. DiStefano, “On finding and using identifiable parameter combinations in nonlinear dynamic systems biology models and combos: a novel web implementation,” PLoS ONE, vol. 9, no. 10, Article ID e110261, 2014.
- A. Martinelli, “Extension of the observability rank condition to nonlinear systems driven by unknown inputs,” in Proceedings of the 23rd Mediterranean Conference on Control and Automation, MED 2015, pp. 589–595, Spain, June 2015.
- A. Martinelli, “Nonlinear Unknown Input Observability: Extension of the Observability Rank Condition,” IEEE Transactions on Automatic Control, vol. 64, no. 1, pp. 222–237, 2019.
- E. Walter and Y. Lecourtier, “Unidentifiable compartmental models: what to do?” Mathematical Biosciences, vol. 56, no. 1-2, pp. 1–25, 1981.
- N. D. Evans, M. J. Chapman, M. J. Chappell, and K. R. Godfrey, “Identifiability of uncontrolled nonlinear rational systems,” Automatica, vol. 38, no. 10, pp. 1799–1805, 2002.
- G. Joly-Blanchard and L. Denis-Vidal, “Some remarks about an identifiability result of nonlinear systems,” Automatica, vol. 34, no. 9, pp. 1151-1152, 1998.
- S. Diop and M. Fliess, “On nonlinear observability,” in Proceedings of 1st European Control Conference, pp. 152–157, 1991.
- S. Diop and Y. Wang, “Equivalence between algebraic observability and local generic observability,” in Proceedings of the 32nd IEEE Conference on Decision and Control. Part 3 (of 4), pp. 2864-2865, December 1993.
- G. Bellu, M. P. Saccomani, S. Audoly, and L. D'Angiò, “DAISY: a new software tool to test global identifiability of biological and physiological systems,” Computer Methods and Programs in Biomedicine, vol. 88, no. 1, pp. 52–61, 2007.
- N. Meshkat, C. Anderson, and J. DiStefano III, “Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations,” Mathematical Biosciences, vol. 233, no. 1, pp. 19–31, 2011.
- J. W. Yates, N. D. Evans, and M. J. Chappell, “Structural identifiability analysis via symmetries of differential equations,” Automatica, vol. 45, no. 11, pp. 2585–2591, 2009.
- B. Merkt, J. Timmer, and D. Kaschek, “Higher-order Lie symmetries in identifiability and predictability analysis of dynamic models,” Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 92, no. 1, 012920, 9 pages, 2015.
- M. J. Chappell and R. N. Gunn, “A procedure for generating locally identifiable reparameterisations of unidentifiable non-linear systems by the similarity transformation approach,” Mathematical Biosciences, vol. 148, no. 1, pp. 21–41, 1998.
- N. D. Evans and M. J. Chappell, “Extensions to a procedure for generating locally identifiable reparameterisations of unidentifiable systems,” Mathematical Biosciences, vol. 168, no. 2, pp. 137–159, 2000.
- K. S. Brown and J. P. Sethna, “Statistical mechanical approaches to models with many poorly known parameters,” Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 68, no. 2, Article ID 021904, 2003.
- M. K. Transtrum, B. B. Machta, K. S. Brown, B. C. Daniels, C. R. Myers, and J. P. Sethna, “Perspective: Sloppiness and emergent theories in physics, biology, and beyond,” The Journal of Chemical Physics, vol. 143, no. 1, Article ID 010901, 2015.
- J. F. Apgar, D. K. Witmer, F. M. White, and B. Tidor, “Sloppy models, parameter uncertainty, and the role of experimental design,” Molecular BioSystems, vol. 6, no. 10, pp. 1890–1900, 2010.
- O.-T. Chis, A. F. Villaverde, J. R. Banga, and E. Balsa-Canto, “On the relationship between sloppiness and identifiability,” Mathematical Biosciences, vol. 282, pp. 147–161, 2016.
- D. V. Raman, J. Anderson, and A. Papachristodoulou, “Delineating parameter unidentifiabilities in complex models,” Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 95, no. 3, 2017.
- C. Tönsing, J. Timmer, and C. Kreutz, “Cause and cure of sloppiness in ordinary differential equation models,” Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 90, no. 2, 2014.
- E. D. Sontag, “Dynamic compensation, parameter identifiability, and equivariances,” PLoS Computational Biology, vol. 13, no. 4, 2017.
- A. F. Villaverde and J. R. Banga, “Dynamical compensation in biological systems as a particular case of structural non-identifiability,” 2017, https://arxiv.org/abs/1701.02562.
- T. S. Ligon, F. Fröhlich, O. T. Chiş et al., “GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models,” Bioinformatics, vol. 34, no. 8, pp. 1421–1423, 2018.
Copyright © 2019 Alejandro F. Villaverde. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.