Jana L. Gevertz, Joanna R. Wares, "Developing a Minimally Structured Mathematical Model of Cancer Treatment with Oncolytic Viruses and Dendritic Cell Injections", Computational and Mathematical Methods in Medicine, vol. 2018, Article ID 8760371, 14 pages, 2018. https://doi.org/10.1155/2018/8760371
Developing a Minimally Structured Mathematical Model of Cancer Treatment with Oncolytic Viruses and Dendritic Cell Injections
Abstract
Mathematical models of biological systems must strike a balance between being complex enough to capture important biological features and simple enough to remain tractable through analysis or simulation. In this work, we rigorously explore how to balance these competing interests when modeling murine melanoma treatment with oncolytic viruses and dendritic cell injections. Previously, we developed a system of six ordinary differential equations containing fourteen parameters that well describes experimental data on the efficacy of these treatments. Here, we explore whether this previously developed model is the minimal model needed to accurately describe the data. Using a variety of techniques, including sensitivity analyses and a parameter sloppiness analysis, we find that our model can be reduced by one variable and three parameters and still give excellent fits to the data. We also argue that our model is not too simple to capture the dynamics of the data, and that the original and minimal models make similar predictions about the efficacy and robustness of protocols not considered in experiments. Reducing the model to its minimal form allows us to increase the tractability of the system in the face of parametric uncertainty.
1. Introduction
For many solid tumors, the most utilized cancer treatments are surgery, chemotherapy, and radiotherapy [1]. While this approach can effectively reduce tumor burden in the short term, long-term recurrence is the norm. This failure of conventional treatment modalities has spurred efforts to design novel cancer therapeutics.
One emerging treatment modality is oncolytic virotherapy. This technique involves targeting cancer cells using oncolytic viruses (OVs), standard viruses genetically engineered to replicate selectively in cancer cells. OV replication within a cancer cell creates a viral burden too large for the cell to support, which eventually causes the infected cancer cell to lyse [2]. The OVs that get released from the lysed cancer cell are then free to infect additional cancer cells. The lysing effects of OVs, while powerful, are also transient. In a clinical setting, OVs have generally proven to be insufficient to fully and permanently eradicate a solid tumor mass [2].
Another treatment modality receiving attention is gene therapy, defined as the introduction of genes of interest into cancer cells with therapeutic intent [3]. Gene therapy has been attempted using genes that mediate the release of cytokines, tumor suppressor genes, and apoptosis-related genes, to name a few. Independent of the gene of interest, this modality requires a vector that can efficiently deliver, and uniformly distribute, the gene product to solid tumors [4]. Oncolytic viruses can be used as such a gene-delivery vector.
The ability of oncolytic virotherapy to induce tumor cell lysis and to stimulate an antitumor immune response in a preclinical setting has led to a number of clinical trials for different tumor types. As of 2016, twenty different viruses have been studied as candidates for oncolytic virotherapy, and new candidate viruses continue to be studied [5]. In 2015, the U.S. Food and Drug Administration approved the first oncolytic virus therapy, T-VEC (Imlygic™), for the treatment of advanced melanoma [5]. A number of clinical trials using this and other oncolytic viruses are underway in both solid and liquid tumors, and while they tend to be very well tolerated, to date their efficacy has been inferior to other available therapies [5, 6]. That said, researchers continue to actively study the anticancer effects of oncolytic virotherapy, both as a drug to be used in combination with other modalities, and as a potential cancer vaccine [5, 6].
In the present work, we focus on preclinical data and model the use of OVs (the adenovirus (Ad) in particular) to deliver genes that boost the immune system's ability to identify, target, and kill cancer cells. The transgenes of interest in this study are 4-1BB ligand (4-1BBL) and interleukin (IL)-12. 4-1BB is a costimulatory member of the tumor-necrosis factor receptor superfamily that is expressed on activated CD4+ and CD8+ T cells [7]. The binding of 4-1BB to its ligand, 4-1BBL, promotes the outgrowth of type-1 T helper cells and cytolytic effector T cells [4]. IL-12 is a cytokine that strongly stimulates the differentiation of naïve CD4+ T cells to type-1 T helper cells. IL-12 has been determined to be one of the most efficient antitumor cytokines in experimental animal models [8]. For conciseness, we refer to oncolytic adenoviruses concurrently acting as a vector for both the 4-1BBL and IL-12 transgenes as Ad/4-1BBL/IL-12.
Recent preclinical work of Huang et al. has shown that Ad/4-1BBL/IL-12 can cause tumor regression in a mouse model of melanoma [4]. This debulking is a consequence of both tumor cell lysis and immune system stimulation resulting from the local release of 4-1BBL and IL-12. Ad/4-1BBL/IL-12 can also be combined with intratumorally injected dendritic cell (DC) vaccines, resulting in a greater antitumor response than elicited by either treatment alone [4]. DCs are immune cells that present antigens to other cells of the immune system. Antigen presentation triggers an adaptive immune response that results in the immune system actively seeking out cells expressing the presented cancer antigen [9]. Huang et al. developed DC vaccines by harvesting DCs from the bone marrow of tumor-bearing mice and exposing them ex vivo to tumor-associated antigens until maturation [4].
Given the combined effectiveness of Ad/4-1BBL/IL-12 and DC injections, it is natural to ask in what order, and at what dose, one should administer these therapeutics to elicit the maximal antitumor response. This is an experimentally time-consuming and costly question to address. Mathematical modeling techniques can help answer questions about complex biological systems without the associated experimental costs [10]. Differential equation models (frequently paired with experimental data) have been successfully used to improve treatment protocols involving oncolytic viruses [11–16]. Previously, we hierarchically developed and fit a mathematical model to the experimental data in Huang et al. [4]. This system of ordinary differential equations, involving six variables and fourteen non-initial-condition parameters, was shown to describe well the dynamics of OVs enhanced with one or more immunostimulatory molecules (4-1BBL, IL-12, or both), DC injections, and DC injections coupled with Ad/4-1BBL/IL-12 [17–19]. Note that for each treatment protocol, the model was fit to the average tumor volume data (averaged over the 8–9 mice in the treatment cohort, with some consideration of the standard deviation in the data).
Using the best-fit parameters obtained from the hierarchical fitting to the average, we previously discovered that administering three doses of OVs followed by three doses of DCs (OV-OV-OV-DC-DC-DC) is the optimal drug ordering [18] when constrained to the drug dosing and spacing used by Huang et al. [4]. Further analysis, however, led us to doubt the robustness of this prediction. First, we found that the doses of Ad/4-1BBL/IL-12 and DCs used in Huang et al.'s experiments [4] were near a bifurcation point; that is, slightly altering the dose or sequence could drastically change the efficacy of the protocol [18]. A full-scale robustness analysis of optimal protocols using the Virtual Expansion of Populations for Analyzing Robustness of Therapies (VEPART) procedure confirmed that our originally predicted optimal strategy is fragile [19].
The fragile nature of the optimal protocol raises doubts about whether it will actually be effective in individual mice in the experimental population, as individual mice generally have dynamics that deviate from the average. And, in the era of personalized medicine [20, 21], there is a strong emphasis on tailoring treatment protocols to individual patients. However, the difficulty of collecting and analyzing patientspecific data, especially in the face of intratumor and temporal heterogeneity, makes personalizing therapy a real challenge [22]. Even individualizing a mathematical model is a highly nontrivial task, as it often requires finding the bestfit parameters in a highdimensional parameter space, given a very limited amount of data about the individual. Therefore, before we can explore the challenging question of personalizing therapy using our model, we ask the following questions: does our mathematical model require all six variables and fourteen parameters to adequately describe the data? Or, can we simplify the structure of the model (number of variables and parameters) in order to make the model more amenable to personalization while retaining the goodness of fit to the experimental data? These are the questions that we answer in this work.
This paper is organized as follows. First, we briefly introduce the reader to the previously developed mathematical model of tumor growth and treatment with Ad/4-1BBL/IL-12 and DCs [17, 18]. Second, we introduce a collection of methods that we employ to test whether our original model is minimal in its structure. Third, we argue that the original model is not of minimal structure, supported by evidence from parameter 95% credible intervals, local and global sensitivity analyses of parameters, and a soft/stiff parameter analysis. This leads us to propose a minimal model that contains five variables (one fewer than the original model) and eleven non-initial-condition parameters (three fewer than the original model). Next, we show that this minimal model fits the experimental data as well as the original model, and we further argue that the model is not too simple to describe Huang et al.'s experimental data [4]. We conclude by showing that the minimal and original models make qualitatively similar predictions about the efficacy and robustness of treatment protocols not considered in our experimental dataset, which serves to further validate the sufficiency of the minimal model.
2. Methods
We begin by introducing the previously developed mathematical model that describes tumor-immune dynamics subject to treatment with either OVs, DCs, or both [17, 18]. We then expound upon the variety of techniques that we utilized to address whether the model is minimally structured, given the experimental data it was designed to fit.
2.1. Original Mathematical Model
Our original model consists of six ordinary differential equations, Equations (1)–(6), which track the uninfected tumor cell volume (U), the infected tumor cell volume (I), the free virions (V), the tumor-targeting T cells (T), the naïve T cells (A), and the injected dendritic cells (D). The equations are described term by term below.
When all parameters and time-varying terms are positive, this model captures the effects of tumor growth and response to treatment with Ad/4-1BBL/IL-12 and DCs. By allowing various parameters and time-varying terms to be identically zero, other treatment protocols tested in Huang et al. can also be described (e.g., an adenovirus that only mediates the release of 4-1BBL) [4]. These other models are defined more explicitly after the description of the full model.
In Equation (1), uninfected tumor cell volume, U, grows exponentially (at a rate λ) and, upon being infected by an oncolytic adenovirus, V, converts to infected cancer cell volume, I, at a density-dependent rate (βUV/N), where N is the total volume of cells (tumor cells and tumor-targeting T cells, N = U + I + T). The tumor-targeting T cells, T, indiscriminately kill both uninfected and infected tumor cells, with the rate of killing depending on the amount of IL-12 and 4-1BBL production (modeled through the killing rate κ in the term κUT/N).
In Equation (2), newly infected tumor cell volume is accumulated at a rate of βUV/N. The infected cells, I, are lysed by the virus or other mechanisms (at a rate of δ_I), thus acting as a source term for the virus by releasing free virions into the tissue space (with α virions released on average per cell, seen in Equation (3)). We again see the tumor-targeting T cells in action, killing I (modeled in the term κIT/N).
In Equation (3), treatment with OVs is represented with the time-dependent term u_V(t), determined by the drug delivery and dosing schedules of interest. Due to lysis of I, α virions are released on average per cell lysed, as discussed earlier. Virions are not impacted by the T cell population; any loss of virions is due to natural decay, at a rate of δ_V.
Equations (4) and (5) describe how the populations of T cells, T, and naïve T cells, A, change in time. The activation and recruitment of tumor-targeting T cells can happen in three ways: (1) through stimulation of the naïve T cell pool, which can asymmetrically divide to give rise to tumor-targeting T cells, due to increased IL-12 (modeled as proportional to I, with an activation rate of χ_A, as infected cells are the ones to release IL-12); (2) through stimulation of cytotoxic T cells due to increased 4-1BBL (also modeled as proportional to I, at a rate of c_T); and (3) through production of T cells due to the externally primed dendritic cells, D (at a rate of χ_D). T cells and naïve T cells also experience natural death (at rates of δ_T and δ_A, respectively).
Equation (6) describes how the population of injected dendritic cells, D, changes over time. The time-dependent term u_D(t) represents the treatment with DCs, determined by the drug delivery and dosing schedule of interest. Dendritic cells decay at a rate of δ_D.
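To make the role of the dosing terms concrete, the sketch below integrates a toy two-compartment system (not the fitted model; the compartments and all rate values are illustrative assumptions) and implements a time-dependent treatment term like u_V(t) as an instantaneous addition to the virus compartment at each injection time, a common numerical treatment of pulsed dosing:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, y0, doses, t_end, h=0.01):
    """Integrate y' = f(t, y), applying instantaneous dose vectors at the
    given times. `doses` maps an injection time to an additive jump in the
    state, which is one common way to implement pulsed treatment terms."""
    t, y = 0.0, np.asarray(y0, float)
    for td in sorted(doses) + [t_end]:
        while t < td - 1e-12:          # integrate up to the next event
            step = min(h, td - t)
            y = rk4_step(f, t, y, step)
            t += step
        if td in doses:                # apply the injection as a state jump
            y = y + np.asarray(doses[td], float)
    return y

# Toy 2-state example (tumor volume and virus); rates are illustrative only.
lam, beta, dV = 0.3, 0.05, 1.0
f = lambda t, y: np.array([lam * y[0] - beta * y[0] * y[1], -dV * y[1]])

# One virus injection of size 5 at day 2; simulate 6 days from volume 1.
final = simulate(f, [1.0, 0.0], {2.0: [0.0, 5.0]}, t_end=6.0)
```

Here the single injection slows, but does not reverse, the toy tumor's exponential growth; in the full model, u_V(t) and u_D(t) encode the entire dosing schedule in exactly this event-driven fashion.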
The data from Huang et al. were measured as tumor volume versus time for a variety of treatment protocols, averaged over 8–9 mice per protocol [4]. The treatment protocols increased in complexity, and therefore, our full model was designed and validated hierarchically against each dataset as follows:
(i) Model 1: No OV or DC treatment. Under no treatment, tumor data are modeled using Equation (1) only, with all parameters other than the growth rate set to 0.
(ii) Model 2: Treatment with OVs that replicate and lyse, but do not mediate the release of cytokines and costimulatory molecules. Tumor data under this treatment are modeled using Equations (1)–(3), with the T cell-related parameters set to 0.
(iii) Model 3: Treatment with OVs that lyse tumor cells and
(a) mediate the release of 4-1BBL (Ad/4-1BBL). Tumor data under this treatment are modeled using Equations (1)–(4), with the IL-12-associated parameters and the DC-associated parameter in Equation (4) set to 0.
(b) mediate the release of IL-12 (Ad/IL-12). Tumor data under this treatment are modeled using Equations (1)–(5), with the 4-1BBL-associated parameter and the DC-associated parameter in Equation (4) set to 0.
(c) mediate the release of both IL-12 and 4-1BBL (Ad/4-1BBL/IL-12). Tumor data under this treatment are modeled using Equations (1)–(5), with the DC-associated parameter in Equation (4) set to 0.
(iv) Model 4: Treatment with DCs only. Tumor data under this treatment are modeled using only Equations (1), (4), and (6). Equation (4) has the virus-associated parameters set to 0 in this case, and Equation (1) has the infection term set to 0.
(v) Model 5: Treatment with Ad/4-1BBL/IL-12 and DCs. This is modeled using the entirety of the system in Equations (1)–(6).
The experimental data increase from simple to more complex, allowing us to fit the model parameters in a hierarchical fashion. For instance, using Model 1, we fit the tumor growth rate λ to the control data in which the tumors grew without treatment. The best-fit value of λ was then used in subsequent versions of the model. Using previously fit parameters allowed us to reduce the dimension of parameter space at each step of the model development and fitting process. More details on the hierarchical development of the model can be found in our previous studies [17–19].
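As a sketch of the first step of this hierarchy, an exponential growth rate can be fit to control data by linear least squares on log-volume; the data values below are hypothetical stand-ins, not the measurements of Huang et al.:

```python
import numpy as np

# Hypothetical control-arm data: mean tumor volume over time (no treatment).
days = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
vol = np.array([50.0, 90.0, 160.0, 300.0, 540.0])

# Step 1 of the hierarchy: fit the exponential growth rate to control data.
# For V(t) = V0 * exp(lam * t), log V is linear in t, so ordinary least
# squares on log-volume recovers lam directly.
slope, intercept = np.polyfit(days, np.log(vol), 1)
lam = slope           # best-fit growth rate (per day)
V0 = np.exp(intercept)

# Step 2 onward: lam is now held fixed while later submodels fit their
# treatment-related parameters, shrinking the search space at each stage.
```

The same pattern repeats at each level of the hierarchy: parameters fit against simpler datasets are frozen before the next, more complex, submodel is fit.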
2.2. 95% Credible Intervals and Local Sensitivity Analysis
In our original works [17, 18], a single best-fit value was determined for each parameter in system (1)–(6). In our later work [19], we expanded our understanding of the best-fit parameter values by identifying the potential distribution for each parameter. We did this by creating one thousand bootstrap replicates [23] from each of our experimental datasets (control, Ad only, Ad/4-1BBL/IL-12, etc.) [19]. Each bootstrap replicate was created by sampling the mice in the original experimental dataset with replacement. The best-fit parameter values were found for each bootstrap replicate, and these were used to estimate the posterior marginal distribution on each fit parameter. In other words, each distribution we approximate is the distribution of the sample average of a parameter value, over one thousand resampled populations of the original cohort size. The 95% credible interval, the interval in which we can be 95% certain that the true value of the parameter is found, was then calculated from these approximated distributions. This is done by excluding the values that fall in the extreme tails of the distribution (2.5% of values in each tail). In this work, we identify poorly constrained parameters (that may not be needed in a minimal model) as those with very large 95% credible intervals.
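The bootstrap procedure can be sketched as follows, using a synthetic cohort and a simple exponential-growth fit in place of the full hierarchical fit (the cohort size, noise model, and fitting routine are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_growth_rate(cohort_curves, days):
    """Fit an exponential growth rate to the cohort-average curve
    (least squares on the log of the mean volume)."""
    mean_curve = cohort_curves.mean(axis=0)
    return np.polyfit(days, np.log(mean_curve), 1)[0]

# Hypothetical cohort: 9 mice measured on 5 days (rows = mice).
days = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
true_lam = 0.2
cohort = 50 * np.exp(true_lam * days) * rng.lognormal(0, 0.2, size=(9, 5))

# Bootstrap: resample mice with replacement, refit, collect the statistic.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(cohort), size=len(cohort))
    boot.append(fit_growth_rate(cohort[idx], days))

# 95% interval: drop 2.5% from each tail of the bootstrap distribution.
lo, hi = np.percentile(boot, [2.5, 97.5])
```

A parameter whose interval (lo, hi) spans several orders of magnitude is, in the sense used here, poorly constrained by the data.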
We have previously performed a local sensitivity analysis on the full system in Equations (1)–(6), along with the submodels that describe the simpler treatment protocols detailed above [19]. The parameters we focus on are the ones whose values could not be readily ascertained from experiments, as detailed in [18]. For each submodel, our local sensitivity analysis entails performing an exhaustive search about the best-fit parameters and identifying all parameter sets that give a fit within 10% of the optimal fit.
The optimal fit is defined as the set of parameters that minimizes the goodness-of-fit metric [19]:

fit = Σ_i V̄(t_i) (V(t_i) − V̄(t_i))² / σ²(t_i),  (7)

where V̄(t_i) is the average experimental tumor volume at day t_i, V(t_i) is the tumor volume at day t_i predicted by the submodel under consideration, and σ²(t_i) is the variance in the experimental tumor volume at day t_i. The fractional term in Equation (7) is a dimensionless measure of the error in which the squared error is divided by the variance in the experimental data. In this way, we require better fits to the average volume when the variance is small, in accordance with the principle of maximum likelihood estimation [24]. However, because instrumentation error in volume measurements is independent of tumor volume, calipers are imprecise for smaller tumor sizes [25, 26]. For this reason, we weight the dimensionless term in Equation (7) by the average tumor volume, as this does not artificially bias the algorithm to fit well at small tumor sizes at the expense of fitting well over the majority of the data points [19].
In this work, we use these data to identify parameters to which the model is highly insensitive (parameters that can vary widely without affecting the goodness of fit of the model). We then consider fixing each of those parameters, or even restructuring the equations to remove such parameters altogether.
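A minimal sketch of the metric and the 10% local sensitivity scan, using a hypothetical one-parameter model and synthetic data in place of the actual submodels:

```python
import numpy as np

def goodness_of_fit(model_vol, data_mean, data_var):
    """A plausible reading of the metric in Equation (7): squared error
    divided by the experimental variance (a dimensionless misfit), then
    weighted by the average tumor volume so that imprecise small-volume
    measurements do not dominate the fit."""
    return np.sum(data_mean * (model_vol - data_mean) ** 2 / data_var)

# Hypothetical data: roughly exponential growth with measurement scatter.
days = np.array([0.0, 3.0, 6.0, 9.0])
data_mean = 50 * np.exp(0.2 * days) * np.array([1.00, 1.06, 0.95, 1.03])
data_var = (0.15 * data_mean) ** 2

def model(lam):
    return 50 * np.exp(lam * days)

# Local sensitivity scan: which values of the growth rate still give a
# goodness-of-fit value within 10% of the optimal value?
grid = np.linspace(0.15, 0.25, 101)
fits = np.array([goodness_of_fit(model(l), data_mean, data_var) for l in grid])
best = fits.min()
ok = grid[fits <= 1.1 * best]
```

The width of `ok` relative to the best-fit value is the per-parameter deviation reported in the local sensitivity analysis: a parameter that can wander far while staying inside the 10% band is a candidate for fixing or removal.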
2.3. Global Sensitivity Analysis
We next take a more holistic view of parameter space by performing a global sensitivity analysis using the Sobol method [27]. To do this, we view parameter space as our input space, and the tumor volume as predicted by the corresponding ODE submodel of Equations (1)–(6) as the output space. The Sobol method allows us to determine how much of the total output variance is due to each individual parameter. If the Sobol index corresponding to a particular parameter (defined below) is small, the model dynamics are insensitive to varying that particular parameter.
To use the Sobol method, we first determine a realistic domain for each parameter in the model. Define the parameter vector p = (p_1, ..., p_14) such that each element of the vector corresponds to a parameter in our model, as specified in Equations (1)–(6).
The full set of choices for each element of the vector defines the 14-dimensional parameter space P; though in all of our analyses, we only consider a subset of this vector. We assume a uniform distribution for each parameter in the parameter space, where the minimum and maximum value for each parameter is determined from its 95% credible interval.
Here, we define the total output space as all possible values of Y(p), where, for a particular choice of parameters p, the output Y(p) is computed from the total tumor volume, U(t) + I(t), with U and I being the corresponding elements of the solution to the original model (1)–(6) for that particular choice of parameters, over the time window t = 0 to t_f, here 30 days.
In order to determine how the output varies with each parameter, we consider how much each parameter contributes to the total variance of the output space [27, 28]. First, we calculate the total variance of Y over the full parameter space, Var(Y). If possible, we would next calculate the variance of Y given the true value of p_i, to determine how much variance is lost due to p_i being fixed. However, we do not know the true value of p_i. Instead, we calculate the first-order sensitivity index (Sobol index) for each of the parameters, S_i, by calculating the expected value of Y over all other parameters conditioned on each value of p_i: E[Y | p_i]. We then compute the variance of this conditional expectation: Var(E[Y | p_i]).
The index S_i is then defined as the amount that parameter p_i contributes to the total variance, normalized by the total variance:

S_i = Var(E[Y | p_i]) / Var(Y),

where, as previously stated, the range of values considered for each parameter is determined by its associated 95% credible interval.
Sobol's key insight involves using an ANOVA decomposition to calculate the above variances [27, 28]. This allows for practical numerical schemes that employ Monte Carlo simulations to calculate the Sobol indices [27]. In this study, we utilized an available MATLAB toolbox, GSAT (https://www.mathworks.com/matlabcentral/fileexchange/40759-global-sensitivity-analysis-toolbox), to calculate all of the reported Sobol indices, using the FAST method and Sobol distributions for sampling. When Sobol's method determines that varying a parameter over its domain causes Y to change minimally, S_i will be small. Because such a parameter does not contribute much to the overall variance of the output, the model is considered insensitive to the value of that parameter. On the contrary, when varying a parameter over its domain causes Y to vary greatly, S_i will be large. A model is sensitive to the parameters corresponding to large S_i values.
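For intuition, the first-order index can be estimated directly from its conditional-expectation definition by brute-force Monte Carlo (in practice we used the GSAT toolbox; the toy two-parameter output function below is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_sobol(f, n_outer=200, n_inner=200, dim=2):
    """Brute-force first-order Sobol indices S_i = Var(E[Y | p_i]) / Var(Y),
    following the conditional-expectation definition in the text.
    Parameters are sampled uniformly on [0, 1]; production code would use
    the more efficient estimators implemented in toolboxes such as GSAT."""
    total_var = np.var(f(rng.random((n_outer * n_inner, dim))))
    S = []
    for i in range(dim):
        cond_means = []
        for v in np.linspace(0, 1, n_outer):
            sample = rng.random((n_inner, dim))
            sample[:, i] = v                      # condition on p_i = v
            cond_means.append(f(sample).mean())   # estimate E[Y | p_i = v]
        S.append(np.var(cond_means) / total_var)  # Var of the cond. mean
    return S

# Toy output: depends strongly on the first parameter, weakly on the second.
f = lambda p: 10 * p[:, 0] + 1 * p[:, 1]
S1, S2 = first_order_sobol(f)
```

As expected, the parameter with the large coefficient receives an index near one, and the weak parameter an index near zero, matching the interpretation of small S_i as insensitivity.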
2.4. Parameter Sloppiness Analysis
It is well established that the quantitative behavior of multiparametric biological models is much more sensitive to changes in certain combinations of parameters than to others, a phenomenon known as "sloppiness" [29]. Herein, we will use an established methodology (see [29, 30] for more details) to identify parametric combinations to which the model is highly sensitive (i.e., stiff directions) and combinations to which the model is highly insensitive (i.e., soft directions). This is done by performing an eigenvector decomposition of the Hessian matrix corresponding to our cost function in Equation (7). Large eigenvalues correspond to stiff eigendirections, whereas small eigenvalues correspond to soft eigendirections [29, 30].
The foundational step of performing a sloppiness analysis is to compute the cost manifold [30]. We do this by identifying all points in parameter space that give a fit within 10% of the optimal (generated for the previously described local sensitivity analysis). The data are then normalized in two ways. First, the value of each parameter is scaled by the optimal value of that parameter, hence the normalized axes we see in Figure 1. Second, for each parameter set p, we define the cost C(p) to be the deviation in the value of the goodness-of-fit metric at that parameter set compared with its value at the optimal parameter set. As we are only interested in parameters that give a fit within 10% of optimal, each point shown in parameter space in Figure 1 has a cost value in the range 0 to 0.1. We then find the best-fit quadratic surface describing C(p) using the polyfitn package for MATLAB (https://www.mathworks.com/matlabcentral/fileexchange/34765-polyfitn). Once the best-fit quadratic surface has been obtained (an example of which is visualized in Figure 1(a)), its associated Hessian matrix can be computed. The eigenvalues and eigenvectors of this matrix allow us to identify soft and stiff parameter directions.
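The pipeline of fitting a quadratic surface to the cost manifold and reading soft and stiff directions off its Hessian can be sketched as follows; the two-parameter cost function, its coefficients, and the sampling window are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cost over two normalized parameters (t1, t2): stiff along
# t1, soft along t2 (the coefficients are illustrative only).
def cost(t1, t2):
    return 5.0 * (t1 - 1) ** 2 + 0.01 * (t2 - 1) ** 2

# Sample near the optimum and keep points fitting within 10% of optimal,
# mimicking the cost-manifold construction described in the text.
t1 = rng.uniform(0.8, 1.2, 2000)
t2 = rng.uniform(0.0, 2.0, 2000)
c = cost(t1, t2)
keep = c <= 0.1
t1, t2, c = t1[keep], t2[keep], c[keep]

# Best-fit quadratic surface c ~ a*t1^2 + b*t2^2 + d*t1*t2 + e*t1 + f*t2 + g
# (polyfitn plays this role in MATLAB; here, ordinary least squares).
X = np.column_stack([t1**2, t2**2, t1 * t2, t1, t2, np.ones_like(t1)])
coef, *_ = np.linalg.lstsq(X, c, rcond=None)
a, b, d = coef[0], coef[1], coef[2]

# Hessian of the quadratic surface and its eigendecomposition.
H = np.array([[2 * a, d], [d, 2 * b]])
evals, evecs = np.linalg.eigh(H)   # ascending: soft direction comes first
```

The small eigenvalue's eigenvector lies along the t2 axis, identifying t2 as a soft parameter, while the large eigenvalue marks t1 as stiff; this is exactly the diagnostic applied to the T cell parameters below.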
We will first employ this analysis to study whether the OV model has any redundant parameters. This means we will compute the cost C as a function of the three T cell parameters fit in Model 3c, which is our model of treatment with Ad/4-1BBL/IL-12 only. We choose to focus on the T cell parameters because our local sensitivity results (see Section 3.1) suggest that the model is most insensitive to these parameters. The second question we will explore is whether the model is too simple to describe the experimental data. To this end, we turn to one of the major simplifying assumptions we made: that tumor growth is exponential in the absence of treatment. We consider instead a model without any treatment in which tumor growth is logistic rather than exponential. In this case, Model 1 is replaced with the model

dU/dt = rU(1 − U/K),

and we will be working in (r, K) parameter space, where r is the tumor growth rate (comparable to the exponential growth rate in the original Model 1) and K is the tumor carrying capacity. We perform a parameter sloppiness analysis to assess the carrying capacity's impact on the fit of the model to the data. While there are other intrinsic tumor growth terms that could be considered, and other structural terms in the model we could analyze, we focus on the exponential growth term because prior studies have indicated better fits to cancer growth data using functional forms other than exponential growth [31].
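The reason a carrying capacity can behave as a soft parameter is visible directly in the closed-form logistic solution: when K far exceeds the volumes reached during the experiment, logistic and exponential growth are nearly indistinguishable. A quick check, with assumed (illustrative) values of the initial volume, growth rate, and a 30-day window:

```python
import numpy as np

def exponential(t, V0, r):
    return V0 * np.exp(r * t)

def logistic(t, V0, r, K):
    """Closed-form solution of V' = r V (1 - V/K) with V(0) = V0."""
    return K * V0 * np.exp(r * t) / (K + V0 * (np.exp(r * t) - 1))

t = np.linspace(0, 30, 31)   # 30-day observation window
V0, r = 50.0, 0.2            # assumed initial volume and growth rate

# Maximum relative difference between logistic and exponential growth:
# with a huge K, the two curves nearly coincide; with a modest K, the
# logistic curve visibly bends away from the exponential one.
rel_diff_large_K = np.max(
    np.abs(logistic(t, V0, r, 1e7) - exponential(t, V0, r)) / exponential(t, V0, r))
rel_diff_small_K = np.max(
    np.abs(logistic(t, V0, r, 5e4) - exponential(t, V0, r)) / exponential(t, V0, r))
```

When the data only sample the early, near-exponential regime, K can be varied over a wide range with almost no effect on the fit, which is precisely the signature of a soft direction in the (r, K) sloppiness analysis.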
2.5. Robustness Analysis
Once we propose a minimal system, we need to validate that this reduced system and the original system in (1)–(6) make qualitatively (and possibly quantitatively) similar predictions. One way to do this is to compare the fits of the original and reduced models to the data. As a further way to validate the proposed minimal system, we will employ the Virtual Expansion of Populations for Analyzing Robustness of Therapies (VEPART) method [19] to see if the original and reduced systems make similar predictions about treatment efficacy in situations for which we do not have experimental data.
The VEPART method is a way not only to predict optimal treatment protocols, but also to assess the robustness of those protocols. In particular, when a dataset is mathematically modeled, typically the model is fit to the average experimental data (tumor volume at each time point, averaged over the N mice in the treatment cohort). From a model that well describes the average data, optimization techniques can be utilized to identify the best way to control the tumor, given the drugs under consideration and any constraints on their usage. However, a protocol that is optimal for treating the "average" tumor may or may not be effective in a tumor whose dynamics deviate from the average. If a protocol elicits the same qualitative response in samples that deviate from the average, we call the protocol robust. If a protocol results in a qualitatively different treatment response in samples that deviate from the average, the protocol is classified as fragile [19].
We have previously undertaken a robust/fragile treatment analysis of system (1)–(6) using the VEPART method [19]. This method, which is summarized in Figure 2, begins with time course data from a sample population, for which a mathematical model is developed and fit to the average of the data. Bootstrapping the data allows for an estimation of the posterior distribution on each of the fit parameters (in our case, at different stages of the hierarchical model development process). The distributions are then pseudorandomly sampled, only selecting values within the 95% credible interval, and selecting simultaneously fit parameters together to preserve the covariance structure in the data. This procedure results in a full parameterization of the mathematical model, which we call a "virtual population." We generate 1000 such virtual populations, perform an optimal treatment analysis on each population, and compare treatment response across virtual populations to assess robustness. Full details of the method can be found in [19].
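The virtual population sampling step can be sketched as follows, with a hypothetical two-parameter bootstrap output standing in for the real hierarchical fits; whole rows are resampled so that co-fit parameters keep their covariance, and draws outside the 95% credible box are rejected:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bootstrap output: each row is a jointly fit parameter pair
# (e.g., a killing rate and a stimulation rate) from one bootstrap replicate.
boot_params = rng.multivariate_normal(
    [1.0, 0.5], [[0.04, 0.03], [0.03, 0.04]], size=1000)

# 95% credible interval per parameter (2.5% trimmed from each tail).
lo = np.percentile(boot_params, 2.5, axis=0)
hi = np.percentile(boot_params, 97.5, axis=0)

def sample_virtual_population():
    """Draw one 'virtual population': pick a whole bootstrap parameter
    vector (preserving the covariance between co-fit parameters) and
    accept it only if every component lies inside its 95% interval."""
    while True:
        row = boot_params[rng.integers(len(boot_params))]
        if np.all((lo <= row) & (row <= hi)):
            return row

virtual_pops = np.array([sample_virtual_population() for _ in range(1000)])
```

Each accepted row fully parameterizes the model, so an optimal-treatment analysis can then be run per virtual population and compared across the ensemble to classify a protocol as robust or fragile.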
In this work, we will apply the VEPART method to our proposed minimal model and ask whether this model gives qualitatively similar robust/fragile predictions to those of the original model. If the reduced model indeed gives similar predictions, this provides an additional level of validation that important information was not lost by simplifying our original model to the proposed minimal one.
3. Results and Discussion
3.1. 95% Credible Intervals and Local Sensitivity Analysis
Here, we explore whether any parameters can be fixed or removed entirely from the model without compromising the goodness of fit to the data by considering the following: (1) the 95% credible intervals for each fit parameter and (2) the extent to which a single parameter can deviate from its best-fit value and still give a goodness-of-fit metric (the value defined in Equation (7)) within 10% of the optimal value (Table 1).

If we start by looking at the 95% credible intervals (Table 1), we find that three of the parameters have very tight credible intervals. Here, tight means that the interval in which we can be 95% certain that the true value of the parameter is found only contains numbers that vary over at most one order of magnitude. On the contrary, the T cell-related parameters have 95% credible intervals containing values that vary over four to five orders of magnitude. This indicates that we have much less certainty about the value of these T cell parameters, and that the model's fit to the data may not heavily depend on their precise values.
To determine how much the fit depends on these parameters, we next consider how much a parameter value can vary from its best-fit value and still give a goodness-of-fit metric within 10% of the optimal value. Using the data in Table 1, the χ_A parameter, which represents the IL-12-induced activation rate of naïve T cells, stands out from the other parameters. This parameter can vary by 17,300% from its best-fit value while still ensuring the goodness-of-fit metric is within 10% of the optimal value. No other parameters considered come close to being able to deviate this much, with the next largest deviations being only 47.2% and 46.3%. Furthermore, even choosing χ_A = 0 gives a fit within 10% of the optimal, as shown in the cost function in Figure 3. Combining this local sensitivity result with the very large 95% credible interval for χ_A suggests we can set this parameter to zero in the model. Since the model uses an initial condition of A(0) = 0, setting χ_A = 0 does more than just eliminate a parameter from the model; it actually eliminates the entire naïve T cell variable A from the model, since the only source term of A in Equation (5) is proportional to χ_A.
3.2. Global Sensitivity Analysis
To further our investigation, we next expand our study of parameter sensitivity from a local one to a global one. In particular, we conducted a Sobol sensitivity analysis on the three T cell parameters that get fit in the Ad/4-1BBL/IL-12 model (Model 3c). The larger the Sobol index for a parameter, the more sensitive the dynamics of the model are to the value of that parameter. We found that χ_A has the smallest first-order Sobol index among the T cell parameters, in spite of the large 95% credible interval used in its calculation.
Thus, the global sensitivity analysis further confirms the local sensitivity analysis: the model appears most insensitive to the choice of χ_A. Since we previously showed that χ_A = 0 is a viable choice for the model to achieve a goodness of fit within 10% of the optimal fit, this lends more credence to the notion that a minimal model could have χ_A = 0. And, as previously argued, this means that a minimal model need not include the naïve T cell equation at all.
3.3. Parameter Sloppiness Analysis of Ad/4-1BBL/IL-12 Model
To further explore the hypothesis that we can set this parameter to zero and thereby remove the corresponding variable from our model, we conducted a sloppiness analysis of the Ad/4-1BBL/IL-12 model (Model 3c) in parameter space. We found that the Hessian matrix corresponding to the best-fit ellipsoid has the following eigenvalue–eigenvector pairs:
While the first two eigenvectors do not clearly point in the direction of any single parameter, the final eigenvector points clearly along the axis of the first parameter. Furthermore, the eigenvalue associated with this eigenvector is very small. Such a small eigenvalue means the length of the ellipsoid along the direction of that eigenvector is very large; therefore, the parameter can be classified as a soft parameter whose value has a minimal impact on the model dynamics.
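The stiff/soft classification can be illustrated with a small numerical sketch: eigendecompose the Hessian of the cost at the best fit, and read the soft directions off the small eigenvalues, recalling that the ellipsoid's semi-axis lengths scale as one over the square root of the eigenvalues. The matrix below is a hypothetical example, not the Hessian reported for Model 3c.

```python
import numpy as np

# Sloppiness analysis sketch: directions with small Hessian eigenvalues
# are "soft" (the cost barely changes along them); large eigenvalues mark
# "stiff" directions. This Hessian is hypothetical, with one soft axis.

H = np.array([[1e-4, 0.0, 0.0],
              [0.0,  2.0, 0.3],
              [0.0,  0.3, 5.0]])

eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues in ascending order
soft_direction = eigvecs[:, 0]         # eigenvector of smallest eigenvalue

# Semi-axis lengths of the best-fit ellipsoid scale as 1/sqrt(eigenvalue),
# so the smallest eigenvalue gives the longest (softest) axis.
semi_axes = 1.0 / np.sqrt(eigvals)
```

Here the smallest eigenvalue's eigenvector points along the first parameter axis, so that parameter is soft, mirroring the argument made above for the first parameter of Model 3c.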
3.4. A Minimal Model
Taken together, these analyses suggest that this parameter can be fixed (at zero, as argued above), and therefore that the corresponding variable can be removed from system (1)–(6) without sacrificing the model's fit to the data. If we remove the variable A from consideration, the three parameters that appear only in its equation are no longer needed. This leaves us with the following minimal model, which contains five variables (one fewer than the original) and eleven non-initial-condition parameters (three fewer than the original):
When considering the treatment protocol Ad/4-1BBL/IL-12, the T cell recruitment parameter in this minimal model represents the joint impact that 4-1BBL and IL-12 have on recruiting tumor-targeted T cells. If we were considering only treatment with Ad/4-1BBL, it would represent the impact of 4-1BBL alone, and if we were considering only treatment with Ad/IL-12, it would represent the impact of IL-12 alone on T cell recruitment. The interpretation of all other parameters is unchanged from the original model.
We have repeated the process of hierarchically finding the best-fit parameters for this minimal model and for its submodels corresponding to the different treatment protocols (Models 1–5, as detailed previously). Table 2 gives the values of the best-fit parameters, along with the percent change in their values compared with the original model. These changes ranged from a 16% decrease to a 5% increase in the value of a single parameter (see Table 2).

Figure 4 illustrates how well the full model (Ad/4-1BBL/IL-12 and DCs) and submodel 3c (Ad/4-1BBL/IL-12, at two different doses) in their original and minimal forms fit the experimental data. Visually, the fits to the data from the minimal model (Figure 4(b)) and the original model (Figure 4(a)) are so similar that they cannot be distinguished. Therefore, to quantify how the fit has changed from the original model to the minimal model, we compare the goodness-of-fit metric in each case (see Table 2). Recall that the goal of parameter fitting was to minimize this metric; an increase therefore means the model fits the data worse, and a decrease means it fits the data better. We found that the goodness-of-fit metric (combined for both doses) went up about 3% (a 3% worse fit) for submodel 3c describing treatment with Ad/4-1BBL/IL-12. Surprisingly, and likely attributable to the random nature of the simulated annealing scheme used to fit the parameters (see [19] for details), the goodness-of-fit metric actually went down insignificantly (a 0.08% better fit) in the full model accounting for Ad/4-1BBL/IL-12 and DCs. This allows us to conclude that the minimal model describes the experimental data as well as the original model.
3.5. Is the Minimal Model Too Simple?
We have demonstrated that our minimal model is sufficient to describe the experimental data when tumors are treated with Ad/4-1BBL/IL-12 either in isolation or in combination with DC injections. Here, we explore the question of whether the model is too simple to describe the data. To begin, we turned to the model selection methods of the Akaike information criterion (AIC) [32] and its variant AICc, which corrects for small sample sizes [33], along with the Bayesian information criterion (BIC) [34]. These can be used to evaluate different models and assign a numerical score to each model based on the goodness of fit to the data and the number of parameters in the model. This allows models based on different assumptions to be compared, with the aim of identifying the most plausible model [35]. To further validate our prediction that the minimal model is sufficient to describe the experimental data, we calculate the AIC, AICc, and BIC under the assumption that absolute model error is independent and normally distributed [35]:

AIC = N ln(F/N) + 2k,  AICc = AIC + 2k(k + 1)/(N − k − 1),  BIC = N ln(F/N) + k ln N,

where N is the number of time points for which we have data and k is the number of model parameters. Note that we are using a modified version of these formulas, with our goodness-of-fit function F in place of the usual residual sum of squares. All three criteria assign the lowest score to, and therefore "select," our minimal model of Ad/4-1BBL/IL-12 + DCs over the original model (see Table 3).
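Assuming the standard normal-error forms with the goodness-of-fit value in place of the residual sum of squares, the selection computation can be sketched as follows. The fit values and counts below are illustrative placeholders, not the entries of Table 3.

```python
import math

# Information criteria sketch: the standard normal-error AIC/AICc/BIC
# formulas with the goodness-of-fit value F in place of the residual sum
# of squares. The numbers in the example call are illustrative only.

def information_criteria(F, N, k):
    """Return (AIC, AICc, BIC) for a fit with goodness-of-fit F,
    N data points, and k parameters."""
    aic = N * math.log(F / N) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (N - k - 1)  # small-sample correction
    bic = N * math.log(F / N) + k * math.log(N)
    return aic, aicc, bic

# A model with fewer parameters is "selected" (lower score) when its
# slightly worse fit is outweighed by the complexity penalty:
original = information_criteria(F=1.00, N=30, k=14)
minimal = information_criteria(F=1.03, N=30, k=11)
```

With these illustrative numbers, the eleven-parameter model scores lower on all three criteria despite its 3% worse fit, the same qualitative outcome reported in Table 3.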

This information-theoretic approach suggests that our minimal model is not too simple when compared with the original model. However, it does not consider other components of the model that may make it too simplistic. To further investigate how much model complexity is needed to adequately describe the data, we explore the impact of using a two-parameter growth model, the logistic equation (Equation (10)), to describe tumor growth without treatment. This is in comparison with the currently used one-parameter exponential growth term.
We approach this using a parameter sloppiness analysis. Since this growth model contains only two parameters, the analysis occurs in two-dimensional parameter space, which allows for convenient visualization of the results (see Figure 1). In particular, Figure 1 shows all normalized points in parameter space that give a goodness of fit within 10% of the optimal fit. Figure 1(a) shows the best-fit ellipsoid to these points, and Figure 1(b) shows the eigenvectors of the Hessian associated with this ellipsoid.
We find that the tumor growth rate is a stiff parameter, as the eigenvector extending nearly along its axis has a relatively large eigenvalue. This means the value of the growth rate cannot deviate significantly from its optimum and still give a strong fit to the data. By contrast, the carrying capacity is a soft parameter, as the eigenvector extending nearly along its axis (distorted in the figure due to scaling differences between the horizontal and vertical axes) has a small corresponding eigenvalue. This means that the carrying capacity can deviate significantly from its optimal value and still give a strong fit to the data; the model for tumor growth is highly insensitive to its value. Since expanding the model to include logistic growth instead of exponential growth would introduce a soft parameter, we conclude that an exponential growth term is sufficient to capture tumor behavior without treatment. This sufficiency can be explained by revisiting the experimental data: the time scales over which tumor growth is considered are sufficiently short that the tumor is still in its near-exponential growth regime (even though its growth would eventually plateau). Hence, the model is highly insensitive to the choice of a carrying capacity, and the added complexity is not needed in this model.
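The insensitivity to the carrying capacity can also be seen directly: over a short observation window, logistic trajectories with widely different carrying capacities are nearly indistinguishable from exponential growth, so short-time data constrain the growth rate but not the carrying capacity. All numbers below are synthetic, chosen only to illustrate the effect.

```python
import numpy as np

# While the tumor volume stays far below the carrying capacity K, the
# logistic solution is nearly identical to exponential growth, so data
# from a short window cannot pin down K. Synthetic illustration only.

r, V0 = 0.5, 1.0
t = np.linspace(0.0, 5.0, 20)              # short observation window
exp_growth = V0 * np.exp(r * t)            # exponential trajectory

def logistic(t, r, K, V0=1.0):
    # Closed-form solution of dV/dt = r V (1 - V/K), V(0) = V0.
    return K * V0 * np.exp(r * t) / (K + V0 * (np.exp(r * t) - 1.0))

# Changing K by orders of magnitude barely changes the trajectory:
sse = {K: np.sum((logistic(t, r, K) - exp_growth) ** 2)
       for K in (1e3, 1e4, 1e5)}
```

The squared deviation from the exponential curve is tiny for every carrying capacity tried and shrinks as the carrying capacity grows, which is exactly the signature of a soft parameter on these time scales.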
3.6. Robustness Analysis
We have presented a variety of evidence that system (11)–(15) represents a minimally, but not too minimally, structured model describing treatment with immunoenhanced OVs and dendritic cell injections. Here, we create "virtual populations," as specified in the VEPART method, to classify various treatment protocols as either robust (effective in a large fraction of virtual populations) or fragile (ineffective in a large fraction of virtual populations). If the minimal model and the original model yield similar predictions, this lends further support to the claim that the dynamics we are interested in capturing are adequately described by the minimal model.
The constraints imposed on this analysis, based on the experimental design in [4], are as follows: (1) one treatment is administered per day; (2) there are six days of treatment, with three of the days being Ad/4-1BBL/IL-12 and three being DCs; and (3) the dose is fixed at the dose used in the experimental work [4], or at a different fixed dose if specified. This results in twenty possible treatment protocols per fixed dose, and these protocols are then ranked from the quickest time to tumor eradication (defined as the tumor volume dropping below the estimated volume of a single cell) to the largest volume after thirty days [19].
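The protocol count follows from choosing which three of the six treatment days receive the OV dose, C(6, 3) = 20. A short sketch of the enumeration (the hyphenated protocol naming is ours, matching the OV-OV-OV-DC-DC-DC convention used below):

```python
from itertools import combinations

# Enumerate the treatment protocols considered in the robustness
# analysis: six treatment days, three receiving the oncolytic virus (OV)
# and three the dendritic cell (DC) injection. Choosing the three OV days
# out of six gives C(6, 3) = 20 distinct schedules.

def protocols(days=6, ov_days=3):
    result = []
    for ov in combinations(range(days), ov_days):
        schedule = ["DC"] * days
        for d in ov:
            schedule[d] = "OV"
        result.append("-".join(schedule))
    return result

all_protocols = protocols()
```

The schedules OV-OV-OV-DC-DC-DC and DC-DC-DC-OV-OV-OV discussed in this section are two of the twenty entries in this list.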
In our robustness analysis of the original model, the protocol OV-OV-OV-DC-DC-DC was found to be optimal for the dose used in the experimental data. It was deemed optimal because it led to tumor eradication in the largest fraction of the 1000 virtual populations [19]. However, the response to this "optimal" protocol varied significantly across virtual populations, as shown in Figure 5(a): it was the best protocol to apply in 72.2% of the virtual populations, but the worst in 13.8%. Therefore, we previously classified the doses used in the experiments of Huang et al. [4] as fragile, since different virtual populations respond very differently to the same treatment protocol, including the one predicted to be "optimal."
Here, we repeated the robustness analysis on the minimal model at the experimental dose used in Huang et al. [4]; the results are shown in Figure 5(b). The protocol found to optimize treatment response in the minimal model is the same as in the original model, a good indication that the two models have similar dynamics. Furthermore, the "optimal" protocol was the best protocol to apply in 77.4% of the virtual populations (+5.2% relative to the original model), but the worst in 9.4% (−4.4% relative to the original model). In other words, the optimal protocol predicted at the experimental dose used in Huang et al. [4] is fragile whether we assess robustness using the original model or the minimal model.
The VEPART method was also applied to the original and minimal models in two regions of dosing space that differ from the experimental dose used in Huang et al. [4]. First, we considered increasing the Ad/4-1BBL/IL-12 dose by 50% while simultaneously decreasing the DC dose by 50% (high OV/low DC). In the original model, the optimal protocol at the high OV/low DC dose remained OV-OV-OV-DC-DC-DC [19], and this treatment proved more robust than the same treatment at the experimental dose. The same qualitative result holds when we consider the minimal model (data not shown).
Next, we considered decreasing the Ad/4-1BBL/IL-12 dose by 50% while simultaneously increasing the DC dose by 50% (low OV/high DC). In this case, the optimal treatment predicted by both the original and the minimal model is DC-DC-DC-OV-OV-OV. In the original model, this protocol proved the most robust of all, as it caused tumor eradication in a significant majority of the virtual populations (84.2%, see Figure 6(a)) and ranked as the best protocol in 100% of them [19]. In the minimal model, the optimal protocol again proves the most robust of those considered, causing tumor eradication in 95.4% of the virtual populations (+11.2% relative to the original model) and, just as in the original model, ranking as the best protocol in 100% of the virtual populations.
Therefore, despite some quantitative changes, the main conclusion of our VEPART analysis remains unaltered whether we consider the original or the minimal model: the experimental dose used in Huang et al. [4] is still fragile (see Figure 5(b)), and we do not recommend treating in this region of dosing space. Instead, treatment should occur in the low OV/high DC region of dosing space, where the optimal protocol DC-DC-DC-OV-OV-OV is predicted to be robust (see Figure 6). The similarity of the predictions from the original and minimal models lends further support to the sufficiency of our minimal model.
4. Conclusions
In this study, we tackled a common challenge faced by mathematical biologists: identifying when one has developed a minimal model to describe an experimental dataset. Our approach combined several methodologies, including an analysis of 95% credible intervals, a local sensitivity analysis, a global sensitivity analysis, a parameter sloppiness analysis, an information criteria analysis, and a comparison of model dynamics under a range of scenarios (treatment protocols). Applying these approaches to our model of oncolytic virus treatment in combination with dendritic cell injections led us to uncover that a reduced model, containing five (instead of six) variables and eleven (instead of fourteen) non-initial-condition parameters, is sufficient, and not too simplistic, to describe the experimental data in [4].
Although the results presented herein are particular to one dataset and its corresponding model, these analyses can be applied in many other scenarios where it is of value to have an analytically tractable model with a minimal number of parameters. That said, this was not a comprehensive study of all aspects of the model. While we focused closely on robustness to perturbations in parameter values, we did not consider robustness to initial conditions, or to many of the functional forms used in the model, as has been considered elsewhere [13, 36–39]. Further, we did not study whether our ordinary differential equation model is structurally identifiable, meaning its parameters can be identified from perfect noise-free data, or practically identifiable, meaning the parameters can be identified from imperfect, noisy data [40]. Despite not considering all possible ways to analyze the sufficiency of a mathematical model, we are able to simplify an existing mathematical model while providing evidence that the model is not too simple to describe important aspects of treatment with oncolytic viruses and dendritic cell injections.
While in many ways the approaches detailed herein run counter to the current trend of developing mechanistic models with significant biological detail, we believe that minimal models with sufficient complexity hold significant promise in the realm of precision medicine [20, 41, 42]. Precision/personalization raises a number of challenges, a significant one being the often sparse data available on an individual basis, coupled with the high dimensionality of parameter space even in a minimal model like the one proposed here. In future work, we will consider how global sensitivity analyses can help to identify the most important model parameters in a high-dimensional parameter space. This will allow us to leverage our minimal model to perform individualized fitting of mouse data, and to search for personalized optimal treatment protocols.
Data Availability
Previously reported murine tumor volume data were used to support this study. These prior studies (and datasets) are cited at relevant places within the text as reference [4].
Conflicts of Interest
The authors declare that they have no conflicts of interest in this work.
Acknowledgments
J.L.G. would like to thank Eduardo Sontag for his many discussions on some of the ideas presented in this article. Both J.L.G. and J.R.W. would like to acknowledge Chae-Ok Yun and Peter Kim for their collaboration and particularly for the exposure and access they provided to the rich dataset utilized in this work. The work of J.L.G. was partially supported by a Support of Scholarly Activity Award from The College of New Jersey. The work of J.R.W. was partially supported by an Enhanced Sabbatical Award from the University of Richmond. The authors acknowledge use of the ELSA high-performance computing cluster at The College of New Jersey for conducting the research reported in this paper. This cluster is funded by the National Science Foundation under grant number OAC-1828163.
References
 [1] M. Arruebo, N. Vilaboa, B. Sáez-Gutierrez et al., "Assessment of the evolution of cancer treatment therapies," Cancers, vol. 3, no. 3, pp. 3279–3330, 2011.
 [2] H. H. Wong, N. R. Lemoine, and W. Wang, "Oncolytic viruses for cancer therapy: overcoming the obstacles," Viruses, vol. 2, no. 1, pp. 78–106, 2010.
 [3] S. Das, M. Menezes, S. Bhatia et al., "Gene therapies for cancer: strategies, challenges and successes," Journal of Cellular Physiology, vol. 230, no. 2, pp. 259–271, 2015.
 [4] J. H. Huang, S. N. Zhang, K. J. Choi et al., "Therapeutic and tumor-specific immunity induced by combination of dendritic cells and oncolytic adenovirus expressing IL-12 and 4-1BBL," Molecular Therapy, vol. 18, no. 2, pp. 264–274, 2010.
 [5] L. Aurelian, "Oncolytic viruses as immunotherapy: progress and remaining challenges," OncoTargets and Therapy, vol. 9, pp. 2627–2637, 2016.
 [6] D. Ahn and T. Bekaii-Saab, "The continued promise and many disappointments of oncolytic virotherapy in gastrointestinal malignancies," Biomedicines, vol. 5, no. 4, p. 10, 2017.
 [7] T. Wen, J. Bukczynski, and T. Watts, "4-1BB ligand-mediated costimulation of human T cells induces CD4 and CD8 T cell expansion, cytokine production, and the development of cytolytic effector function," The Journal of Immunology, vol. 168, no. 10, pp. 4897–4906, 2002.
 [8] M. Colombo and G. Trinchieri, "Interleukin-12 in antitumor immunity and immunotherapy," Cytokine & Growth Factor Reviews, vol. 13, no. 2, pp. 155–168, 2002.
 [9] K. F. Bol, G. Schreibelt, W. R. Gerritsen, I. J. M. de Vries, and C. G. Figdor, "Dendritic cell-based immunotherapy: state of the art and beyond," Clinical Cancer Research, vol. 22, no. 8, pp. 1897–1906, 2016.
 [10] D. N. Santiago, J. P. Heidbuechel, W. M. Kandell et al., "Fighting cancer with mathematics and viruses," Viruses, vol. 9, no. 9, p. 239, 2017.
 [11] R. Eftimie and G. Eftimie, "Tumour-associated macrophages and oncolytic virotherapies: a mathematical investigation into a complex dynamics," Letters in Biomathematics, vol. 5, no. 1, pp. 6–35, 2018.
 [12] A. L. Jenner, C. O. Yun, P. S. Kim, and A. C. F. Coster, "Mathematical modelling of the interaction between cancer cells and an oncolytic virus: insights into the effects of treatment protocols," Bulletin of Mathematical Biology, vol. 80, no. 6, pp. 1615–1629, 2018.
 [13] N. L. Komarova and D. Wodarz, "ODE models for oncolytic virus dynamics," Journal of Theoretical Biology, vol. 263, no. 4, pp. 530–543, 2010.
 [14] F. Le Bœuf, C. Batenchuk, M. Vähä-Koskela et al., "Model-based rational design of an oncolytic virus with improved therapeutic potential," Nature Communications, vol. 4, no. 1, p. 1974, 2013.
 [15] K. J. Mahasa, A. Eladdadi, L. De Pillis, and R. Ouifki, "Oncolytic potency and reduced virus tumor-specificity in oncolytic virotherapy. A mathematical modelling approach," PLoS One, vol. 12, no. 9, Article ID e0184347, 2017.
 [16] D. Wodarz and N. Komarova, "Towards predictive computational models of oncolytic virus therapy: basis for experimental validation and model selection," PLoS One, vol. 4, no. 1, Article ID e4271, 2009.
 [17] P. Kim, J. Crivelli, I. K. Choi, C. O. Yun, and J. Wares, "Quantitative impact of immunomodulation versus oncolysis with cytokine-expressing virus therapeutics," Mathematical Biosciences and Engineering, vol. 12, no. 4, pp. 841–858, 2015.
 [18] J. Wares, J. Crivelli, C. O. Yun, I. K. Choi, J. Gevertz, and P. Kim, "Treatment strategies for combining immunostimulatory oncolytic virus therapeutics with dendritic cell injections," Mathematical Biosciences and Engineering, vol. 12, no. 6, pp. 1237–1256, 2015.
 [19] S. Barish, M. Ochs, E. Sontag, and J. Gevertz, "Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy," Proceedings of the National Academy of Sciences, vol. 114, no. 31, pp. E6277–E6286, 2017.
 [20] F. André, J. Ciccolini, J. Spano et al., "Personalized medicine in oncology: where have we come from and where are we going?" Pharmacogenomics, vol. 14, no. 8, pp. 931–939, 2013.
 [21] N. Kronik, Y. Kogan, M. Elishmereni, K. Halevi-Tobias, S. Vuk-Pavlović, and Z. Agur, "Predicting outcomes of prostate cancer immunotherapy by personalized mathematical models," PLoS One, vol. 5, no. 12, Article ID e15482, 2010.
 [22] M. Jamal-Hanjani, S. Quezada, J. Larkin, and C. Swanton, "Translational implications of tumor heterogeneity," Clinical Cancer Research, vol. 21, no. 6, pp. 1258–1266, 2015.
 [23] B. Efron, "Bootstrap methods: another look at the jackknife," Annals of Statistics, vol. 7, no. 1, pp. 1–26, 1979.
 [24] J. Pan and K. Fang, Maximum Likelihood Estimation, Springer, New York, NY, USA, 2002.
 [25] J. Feldman, R. Goldwasser, S. Mark, J. Schwartz, and I. Orion, "A mathematical model for tumor volume evaluation using two-dimensions," Journal of Applied Quantitative Methods, vol. 4, pp. 455–462, 2009.
 [26] G. Ayers, E. McKinley, P. Zhao et al., "Volume of preclinical xenograft tumors is more accurately assessed by ultrasound imaging than manual caliper measurements," Journal of Ultrasound in Medicine, vol. 29, no. 6, pp. 891–901, 2010.
 [27] I. M. Sobol, "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates," Mathematics and Computers in Simulation, vol. 55, no. 1–3, pp. 271–280, 2001.
 [28] F. Cannavó, "Sensitivity analysis for volcanic source modeling quality assessment and model selection," Computers & Geosciences, vol. 44, pp. 52–59, 2012.
 [29] R. N. Gutenkunst, J. J. Waterfall, F. P. Casey, K. S. Brown, C. R. Myers, and J. P. Sethna, "Universally sloppy parameter sensitivities in systems biology models," PLoS Computational Biology, vol. 3, no. 10, p. e189, 2007.
 [30] K. S. Brown and J. P. Sethna, "Statistical mechanical approaches to models with many poorly known parameters," Physical Review E, vol. 68, no. 2, Article ID 021904, 2003.
 [31] E. A. Sarapata and L. G. de Pillis, "A comparison and catalog of intrinsic tumor growth models," Bulletin of Mathematical Biology, vol. 76, no. 8, pp. 2010–2024, 2014.
 [32] H. Akaike, Information Theory and an Extension of the Maximum Likelihood Principle, Springer, New York, NY, USA, 1998.
 [33] C. M. Hurvich and C. L. Tsai, "Regression and time series model selection in small samples," Biometrika, vol. 76, no. 2, pp. 297–307, 1989.
 [34] G. Schwarz, "Estimating the dimension of a model," Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
 [35] H. Miao, C. Dykes, L. Demeter, and H. Wu, "Differential equation modeling of HIV viral fitness experiments: model identification, model selection, and multimodel inference," Biometrics, vol. 65, no. 1, pp. 292–300, 2009.
 [36] J. Gunawardena, Models in Systems Biology: The Parameter Problem and the Meanings of Robustness, vol. 1, Wiley, Hoboken, NJ, USA, 2010.
 [37] N. Barkai and S. Leibler, "Robustness in simple biochemical networks," Nature, vol. 387, no. 6636, pp. 913–917, 1997.
 [38] E. Batchelor and M. Goulian, "Robustness and the cycle of phosphorylation and dephosphorylation in a two-component regulatory system," Proceedings of the National Academy of Sciences, vol. 100, no. 2, pp. 691–696, 2003.
 [39] A. Mallavarapu, M. Thomson, B. Ullian, and J. Gunawardena, "Programming with models: modularity and abstraction provide powerful capabilities for systems biology," Journal of The Royal Society Interface, vol. 6, no. 32, pp. 257–270, 2009.
 [40] N. Meshkat, S. Sullivant, and M. Eisenberg, "Identifiability results for several classes of linear compartment models," Bulletin of Mathematical Biology, vol. 77, no. 8, pp. 1620–1651, 2015.
 [41] F. Emmert-Streib, "Personalized medicine: has it started yet? A reconstruction of the early history," Frontiers in Genetics, vol. 3, p. 313, 2012.
 [42] M. Verma, "Personalized medicine and cancer," Journal of Personalized Medicine, vol. 2, no. 1, pp. 1–14, 2012.
Copyright
Copyright © 2018 Jana L. Gevertz and Joanna R. Wares. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.