Abstract

In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example of a time-dependent linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.

1. Introduction

Significant steps forward in the analysis of repeated-measures data were made with the introduction of linear and nonlinear mixed-effects models [1–3], which distinguish within-subjects variance (from multiple measurements in each subject) from between-subjects variance (from multiple subjects being measured). Distinguishing these types of variance can also be thought of as explicitly modeling random error in the data. This can be useful in understanding how different individuals are from one another as compared to how different multiple measurements are for given individuals. In research on sleep and sleepiness, for example, breakthroughs made possible by mixed-effects models include elucidation of the dose-response effects of sustained sleep restriction on sleep architecture and neurobehavioral impairment [4, 5] and demonstration of the trait characteristics of individual differences in vulnerability to sleep loss [6]. In recent years, mixed-effects model approaches to statistical regression and analysis of variance have become widely available in statistical software packages. They are nowadays the methodology of choice for many repeated-measures investigations in sleep research and other fields of study.

A further advance was the introduction of a model individualization technique called Bayesian posterior distribution estimation or Bayesian forecasting. This technique was first used in sleep research to overcome a shortcoming of biomathematical models of fatigue and performance. Existing models did not account for individual differences in sleep regulation and vulnerability to sleep loss and therefore did not accurately predict performance for given individuals. Bayesian forecasting addressed this shortcoming by utilizing the separation of within- and between-subjects variance in model parameters as enabled by mixed-effects modeling [2]. In Bayesian forecasting, the between-subjects variance of model parameters serves as Bayesian prior information. Measurements from a new individual, not previously studied, are combined with the prior information to efficiently derive model parameters tailored to the new individual, thereby yielding a subject-specific mathematical model [2, 7, 8]. As a bonus, the Bayesian forecasting technique also yields quantitative estimates of the accuracy of individualized predictions made with the subject-specific mathematical model [9].

In a published example, Bayesian forecasting was implemented for the two-process model of sleep regulation [10, 11] to predict performance impairment of selected individuals undergoing a period of total sleep deprivation (see Figure 1). Comparisons with the individuals’ actual data revealed that the model parameters converged efficiently to those that best characterized each individual, and the response predictions were significantly more accurate than could have been achieved with the original, nonindividualized two-process model [7].

In Bayesian forecasting, individualized parameter estimates are derived from the posterior distributions of the parameters in question after combining prior distributions with the measurement(s) from the individual, and individualized model predictions for future outcomes or responses are obtained following estimation of the posterior distribution of the expected responses at given times. A variety of methods are available for obtaining parameter estimates and response predictions once posterior distributions have been estimated. We employ the Bayesian mean squared error (BMSE) [12] and make use of the Bayesian minimum mean squared error (MMSE) estimator [13, 14], which produces unbiased point estimates that minimize the BMSE [15]. The BMSE of parameter estimates and response predictions is dependent on the amount of data available for the individual at hand and the magnitude of between-subjects variance captured in the Bayesian prior distributions.

In cases where between-subjects variance is relatively large, such as performance responses to sleep loss [6, 16], measurement data for the individual at hand are more critical for making accurate individualized response predictions. Sparseness of such measurement data (e.g., due to practical or cost-based limitations) can result in unacceptably low levels of accuracy. For example, Bayesian forecasting could be used to develop a drowsy driver warning system based on a mathematical model of fatigue and performance [17, 18] calibrated to predict lateral lane deviation, using camera-based measurements of lane position to individualize the model for the driver. However, when lane markers are covered with snow and cameras are unable to determine vehicle position relative to the lane, the individualization effort becomes less effective.

To address this limitation, we consider the use of secondary variables to increase data availability, boost response prediction accuracy, and/or reduce data collection costs for individualized response prediction. For example, in the case of a drowsy driver warning system, camera-based measurements of lateral lane deviation serving as the primary response variable could be augmented with in-car secondary variables such as steering wheel variability or driver eyelid closure assessments. However, individualization of predictions based on two or more measurement variables would only be straightforward if individual differences in responses on these variables are equivalent (cf. [19]). Generally, this is not what the evidence shows. As a case in point, trait individual differences in vulnerability to performance impairment due to sleep loss vary considerably across outcome variables [6, 20], such that the most vulnerable individuals based on one variable are not necessarily also the most vulnerable individuals based on another variable. Therefore, when considering two or more response variables as the basis for individualized prediction, it is essential to account statistically for the level of congruence between the response variables.

Here we develop a multivariate statistical framework for individualized prediction of sleep or performance variables, based on Bayesian forecasting with measurements of a primary response variable, augmented with one or more measurements of secondary response variables. The response variables are assumed to follow equivalent dynamics over time, such that they can be described by the same model framework after appropriate scaling. This is a reasonable assumption in the case of, for example, models describing sleep variables measured repeatedly across multiple nights [21] and models describing performance changes over time in response to sleep loss [22]. We make use of multivariate Bayesian prior distributions of the primary and secondary variables, assumed to have been assessed in advance by means of mixed-effects modeling [2] or other suitable techniques. The between-subjects correlation(s) between the primary and secondary variables, used here to account for the level of congruence between the response variables, is assumed to have been estimated as part of the covariance matrix of the multivariate Bayesian prior distributions. The between-subjects correlation(s) are assumed to be at least moderately strong, lest the secondary variables contain essentially no information about the primary response variable that rises above the level of measurement noise.

Our focus in this paper is on prediction accuracy in the multivariate Bayesian forecasting technique. We develop the technique by first considering the details of making individualized response predictions and estimating their accuracy for a simple univariate intercept model. To demonstrate how secondary, correlated responses can be used to make more accurate individualized predictions, we expand the intercept model to include both primary and secondary responses. Assuming fixed costs of data collection for each of the responses, we show how to optimize data collection given a desired level of accuracy for predictions of the primary response variable.

We then consider a linear approximation of the homeostatic component of the two-process model [11] and derive a closed form equation of the BMSE to quantify prediction accuracy in this time-dependent model. For this example, we study the problem of optimizing the timing of measurements in order to enhance the individualized prediction accuracy most efficiently. Finally, we consider more complicated bivariate models, both linear and nonlinear, for which the BMSE cannot be determined in closed form. For these models, we describe a process of numerically assessing the BMSE for individualized prediction based on a primary response variable in Bayesian forecasting augmented with measurements of a secondary response.

2. Subject-Specific Bayesian Models

First we discuss a modeling framework for Bayesian forecasting. Consider a response variable $y$, dependent on a subject-specific trait parameter $\theta$. Let $i$ be an index for individual, ranging from $1$ to $N$, and let $j$ be an index for observations ordered by time, nested within individual. Suppose that the response variable can be modeled by
$$y_{ij} = f\left(t_{ij}, \theta_i\right) + \varepsilon_{ij}, \qquad (1)$$
where $f$ represents the model function, $t_{ij}$ represents a fixed measurement time, $\theta_i$ represents a random subject-specific parameter, and $\varepsilon_{ij}$ represents additive measurement error. We assume that the distributions for $\theta_i$ and $\varepsilon_{ij}$ are known. Equation (1) and the distributions of $\theta_i$ and $\varepsilon_{ij}$ constitute a population model.

Limiting our focus to a particular individual, we may remove the subscript $i$ and model the subject's responses as
$$y_j = f\left(t_j, \theta\right) + \varepsilon_j, \qquad (2)$$
where $\theta$ is used to denote $\theta_i$.

Suppose that a total of $n$ responses ($y_1, \ldots, y_n$) have been measured for the individual at hand, and let $n+1$ denote the index of a response at some future time $t_{n+1}$. We consider a prediction (estimator) of the expected response $E[y_{n+1}]$, which we denote as $\hat{y}_{n+1}$. Our interest is in constructing $\hat{y}_{n+1}$ such that the expected squared error for an arbitrary, given individual from the population is minimized.

We define the accuracy using the squared error $\left(\hat{y}_{n+1} - E[y_{n+1}]\right)^2$. The expected accuracy, which is referred to as the Bayesian mean squared error (BMSE), is thus given by
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = E\left[\left(\hat{y}_{n+1} - E\left[y_{n+1}\right]\right)^2\right], \qquad (3)$$
where the expectation is taken with respect to the marginal probability density function (pdf) of $\mathbf{y} = (y_1, \ldots, y_n)$ and $\theta$.

We refer to the prediction that minimizes the BMSE as a minimum mean squared error (MMSE) prediction. After observing data $\mathbf{y}$ from a particular individual and constructing the MMSE prediction $\hat{y}_{n+1}$, we seek to assess the expected accuracy of this particular prediction. This can be done with confidence intervals on $E[y_{n+1}]$, obtained from quantiles of the posterior distribution of $E[y_{n+1}]$. In the following sections, we describe specific types of models and investigate the BMSE and the MMSE that minimizes it.

3. Univariate Random Intercept Model

We consider a random intercept model obtained from (2) by letting $f(t_j, \theta) = \theta$:
$$y_j = \theta + \varepsilon_j, \qquad (4)$$
where
$$\theta \sim N\left(\mu_\theta, \omega^2\right), \qquad (5)$$
$$\varepsilon_j \sim N\left(0, \sigma^2\right), \quad j = 1, \ldots, n. \qquad (6)$$
For a particular individual, the expected response at time $t_{n+1}$ is
$$E\left[y_{n+1}\right] = \theta. \qquad (7)$$
It follows that the MMSE prediction of $E[y_{n+1}]$ is
$$\hat{y}_{n+1} = \hat{\theta}, \qquad (8)$$
where $\hat{\theta}$ is the MMSE estimator of $\theta$. The estimator $\hat{\theta}$ (and therefore $\hat{y}_{n+1}$) is given by [23]:
$$\hat{\theta} = \mu_\theta + \alpha\left(\bar{y} - \mu_\theta\right), \qquad (9)$$
where $\bar{y}$ is the sample mean of the $n$ measured responses and
$$\alpha = \frac{\omega^2}{\omega^2 + \sigma^2/n}. \qquad (10)$$
The variance of the posterior distribution of $\theta$ (and therefore of $E[y_{n+1}]$) is [23]:
$$\mathrm{var}\left(\theta \mid \mathbf{y}\right) = \frac{\sigma^2\omega^2}{n\,\omega^2 + \sigma^2}. \qquad (11)$$
Furthermore, the BMSE of $\hat{\theta}$ (and therefore of $\hat{y}_{n+1}$) is given by [23]:
$$\mathrm{Bmse}\left(\hat{\theta}\right) = \frac{\sigma^2\omega^2}{n\,\omega^2 + \sigma^2} = \frac{\omega^2}{1 + n\,\omega^2/\sigma^2}. \qquad (12)$$

The MMSE prediction (given by (8) and (9)) represents a trade-off between knowledge about the population and knowledge about the subject at hand. This trade-off is embodied by the weighting factor $\alpha$ in (10). When no data are available for the subject at hand, $\alpha = 0$, resulting in a prediction made at $\mu_\theta$, where $\mu_\theta$ is, in this case, representative of the population mean as well as the population mean response:
$$\mu_\theta = E\left[\theta\right] = E\left[E\left[y_{n+1} \mid \theta\right]\right] = E\left[y_{n+1}\right]. \qquad (13)$$
As we begin to collect subject-specific data, the weighting factor moves towards the value $\alpha = 1$, resulting in a prediction that converges to the subject-specific sample mean $\bar{y}$ as data collection continues.

Likewise, when no data are available for the subject at hand, the expected accuracy of the prediction is $\mathrm{Bmse}(\hat{y}_{n+1}) = \omega^2$. The term $\omega^2$ is, in this case, representative of the population variance as well as the variance in mean response over the population:
$$\omega^2 = \mathrm{var}\left(\theta\right) = \mathrm{var}\left(E\left[y_{n+1} \mid \theta\right]\right), \qquad (14)$$
where the inner expectation is evaluated with respect to the conditional distribution of $y_{n+1}$ given $\theta$, and all other expectations are evaluated with respect to the marginal distribution of $\theta$. Per (12), as we begin to collect subject-specific data, the expected accuracy of the prediction improves (i.e., the BMSE decreases, where smaller is better).

To illustrate how predictions in this model depend on the amount of subject-specific data collected, we conducted a simulation of model (4) for a particular individual. For this example, the parameter $\theta$ for the individual was chosen far from the population mean $\mu_\theta$ (when compared to the magnitude of the population variance $\omega^2$), in order to make the transition from the population mean to the true expected response large enough so as not to be obscured by measurement noise in the example. We simulated $n$ data points for the individual with measurement error standard deviation $\sigma$. The MMSE prediction $\hat{y}_{n+1}$ was calculated by incorporating observations iteratively. Figure 2 shows $\hat{y}_{n+1}$ plotted against the number of data points used. The variance of the posterior distribution for $\theta$ from (11) was used to construct confidence intervals on $E[y_{n+1}]$. As expected, for the individual considered, the prediction began at the population mean and moved to the true expected response with shrinking confidence interval as more simulated data were collected.
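To make the iteration concrete, the following sketch implements (9)–(11) as data accumulate. The numerical values are illustrative assumptions, not the settings used for Figure 2.

```python
import numpy as np

# Univariate random intercept model: iterative MMSE prediction (assumed values).
mu_theta, omega2 = 10.0, 4.0   # prior mean and between-subjects variance of theta
sigma2 = 9.0                   # within-subjects (measurement error) variance
theta_true = 14.0              # subject-specific value, deliberately far from mu_theta
n_max = 20

rng = np.random.default_rng(1)
y = theta_true + rng.normal(0.0, np.sqrt(sigma2), n_max)

for n in range(n_max + 1):
    if n == 0:
        theta_hat, post_var = mu_theta, omega2           # no data: prior only
    else:
        alpha = omega2 / (omega2 + sigma2 / n)           # weighting factor, eq. (10)
        theta_hat = mu_theta + alpha * (y[:n].mean() - mu_theta)  # MMSE estimate, eq. (9)
        post_var = sigma2 * omega2 / (n * omega2 + sigma2)        # posterior variance, eq. (11)
    ci = 1.96 * np.sqrt(post_var)                        # 95% confidence interval half-width
    print(f"n={n:2d}  prediction={theta_hat:6.3f}  95% CI ±{ci:5.3f}")
```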

4. Bivariate Subject-Specific Bayesian Models

When subject-specific data are sparse, individualized predictions may not, on average, reach acceptable levels of accuracy. Accuracy may be improved by including data from a secondary subject-specific data source. However, individual differences in one response variable may not be identical to individual differences in another (e.g., [6, 21]). Therefore, data from a secondary response variable may not simply be used as a substitute for the primary response variable. Rather, to improve prediction accuracy on the primary response by incorporating data from the secondary response, the between-subjects correlation between the primary and secondary response variables must be taken into account.

Here we derive the average accuracy of individualized predictions of a primary response variable based on distinct primary and secondary subject-specific response variables, accounting for between-subjects correlation between the two responses. Let $i$ be an index for individual, let $k$ ($k = 1, 2$) be an index for response type, and let $j$ be an index for observations ordered by time, nested within individual and response type. Suppose that the response can be modeled by
$$y_{ikj} = f_k\left(t_{ikj}, \theta_{ik}\right) + \varepsilon_{ikj}, \qquad (15)$$
where $f_k$ represents the model function, $t_{ikj}$ represents measurement time, $\theta_{ik}$ represents a random subject-specific parameter associated with response type $k$, and $\varepsilon_{ikj}$ represents additive measurement error. Limiting our focus to a particular individual, we may remove the subscript $i$ and model the subject's responses as
$$y_{kj} = f_k\left(t_{kj}, \theta_k\right) + \varepsilon_{kj}, \qquad (16)$$
where $\theta_k$ is used to denote $\theta_{ik}$. Suppose that a total of $n_k$ responses have been observed for each response type $k$ for the individual at hand. In the next sections, we focus on constructing the MMSE prediction for the expected primary response, $E[y_{1,n_1+1}]$.

5. Bivariate Random Intercept Model

We consider the bivariate random intercept model obtained from (16) by letting $f_k(t_{kj}, \theta_k) = \theta_k$:
$$y_{kj} = \theta_k + \varepsilon_{kj}, \quad k = 1, 2, \quad j = 1, \ldots, n_k, \qquad (17)$$
where $\varepsilon_{kj}$ denotes measurement error for response type $k$. The scalar model can be converted to vector form by concatenating the responses for each response type,
$$\mathbf{y}_k = \left(y_{k1}, \ldots, y_{kn_k}\right)^T, \qquad (18)$$
and then concatenating the response vectors of different types,
$$\mathbf{y} = \begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \end{pmatrix}. \qquad (19)$$
A similar assembly of the measurement errors can be done so that
$$\boldsymbol{\varepsilon} = \begin{pmatrix} \boldsymbol{\varepsilon}_1 \\ \boldsymbol{\varepsilon}_2 \end{pmatrix}. \qquad (20)$$
An assembly of the parameters can be accomplished by first constructing the parameter vector,
$$\boldsymbol{\theta} = \begin{pmatrix} \theta_1 \\ \theta_2 \end{pmatrix}, \qquad (21)$$
and the design matrix,
$$X = \begin{pmatrix} \mathbf{1}_{n_1} & \mathbf{0}_{n_1} \\ \mathbf{0}_{n_2} & \mathbf{1}_{n_2} \end{pmatrix}, \qquad (22)$$
where $\mathbf{1}_{n_k}$ and $\mathbf{0}_{n_k}$ denote column vectors of $n_k$ ones and zeros, respectively, so that the single individual model of (17) can be vectorized as
$$\mathbf{y} = X\boldsymbol{\theta} + \boldsymbol{\varepsilon}. \qquad (23)$$
We consider the case where subject-specific traits and measurement errors are both normally distributed,
$$\boldsymbol{\theta} \sim N\left(\boldsymbol{\mu}_\theta, \Omega\right), \qquad (24)$$
$$\boldsymbol{\varepsilon} \sim N\left(\mathbf{0}, \Sigma\right), \qquad (25)$$
where $\boldsymbol{\mu}_\theta$, $\Omega$, and $\Sigma$ are fixed population characteristics.

Correlations between primary and secondary response variables $y_{1j}$ and $y_{2j}$ can be modeled as arising from a correlation between $\theta_1$ and $\theta_2$ (between-subjects correlation) or correlations between $\varepsilon_{1j}$ and $\varepsilon_{2j}$ (within-subjects correlation). Here, we assume that response correlations arise from between-subjects correlations:
$$\Omega = \begin{pmatrix} \omega_1^2 & \rho\,\omega_1\omega_2 \\ \rho\,\omega_1\omega_2 & \omega_2^2 \end{pmatrix}, \qquad (26)$$
where $\rho$ ($-1 < \rho < 1$) represents the between-subjects correlation between primary and secondary response variable means and $\omega_k^2$ represents the between-subjects variance for response variable $k$. Furthermore, we assume that measurement errors are uncorrelated with response variable-specific variance $\sigma_k^2$, so that correlations between the response variables arise only from the between-subjects components. For subject-specific repeated-measures data with no covariates for two response variables, it may be fair to consider the error variance within subjects to be independent as long as perturbations from the intercepts do not tend to be common over both response types.

The error covariance matrix for response type $k$ is a diagonal matrix with dimension $n_k \times n_k$, where each nonzero element is the type-specific error variance $\sigma_k^2$,
$$\Sigma_k = \sigma_k^2\, I_{n_k}. \qquad (27)$$
The full error covariance matrix can then be built from the type-specific blocks,
$$\Sigma = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix}. \qquad (28)$$
The bivariate Bayesian model we consider here is fully characterized by (23)–(28).

As was the case for model (4), for a particular individual, the expected primary response at time $t_{1,n_1+1}$ is
$$E\left[y_{1,n_1+1}\right] = \theta_1. \qquad (29)$$
The MMSE prediction is
$$\hat{y}_{1,n_1+1} = \hat{\theta}_1. \qquad (30)$$
The MMSE estimator for model (23) is [23]:
$$\hat{\boldsymbol{\theta}} = \boldsymbol{\mu}_\theta + \Omega X^T\left(X\Omega X^T + \Sigma\right)^{-1}\left(\mathbf{y} - X\boldsymbol{\mu}_\theta\right). \qquad (31)$$
The MMSE estimator $\hat{\theta}_1$ can be extracted as the first element of $\hat{\boldsymbol{\theta}}$.

The variance of the posterior distribution of $\boldsymbol{\theta}$ is given by
$$\mathrm{var}\left(\boldsymbol{\theta} \mid \mathbf{y}\right) = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}. \qquad (32)$$
The variance of the posterior distribution of $\theta_1$ (and therefore of $E[y_{1,n_1+1}]$) can be obtained by extracting the first element from the diagonal of (32). The BMSE of the MMSE estimator can be obtained from the parameter BMSE matrix $M_{\hat{\theta}}$, which is given by [23]:
$$M_{\hat{\theta}} = E\left[\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)^T\right] = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}. \qquad (33)$$
The BMSE of $\hat{\theta}_1$ (and therefore of $\hat{y}_{1,n_1+1}$) can be obtained by extracting the first element from the diagonal of $M_{\hat{\theta}}$. Substituting (22), (26), and (28) into (33), we find that the BMSE for $\hat{\theta}_1$ can be simplified as follows:
$$\mathrm{Bmse}\left(\hat{\theta}_1\right) = \frac{\omega_1^2\left(1 + n_2\gamma_2\left(1-\rho^2\right)\right)}{\left(1 + n_1\gamma_1\right)\left(1 + n_2\gamma_2\right) - n_1 n_2\gamma_1\gamma_2\rho^2}, \qquad (34)$$
where
$$\gamma_k = \frac{\omega_k^2}{\sigma_k^2}. \qquad (35)$$
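The closed form (34) can be checked numerically against the matrix expression (33); the population values in the sketch below are assumptions for illustration only.

```python
import numpy as np

# Numerical check of eq. (34) against eq. (33); hypothetical population values.
omega1, omega2, rho = 2.0, 1.5, 0.8     # between-subjects SDs and correlation
sigma1, sigma2 = 3.0, 2.0               # within-subjects (error) SDs
n1, n2 = 2, 5                           # numbers of primary/secondary observations

Omega = np.array([[omega1**2, rho*omega1*omega2],
                  [rho*omega1*omega2, omega2**2]])
X = np.vstack([np.column_stack([np.ones(n1), np.zeros(n1)]),
               np.column_stack([np.zeros(n2), np.ones(n2)])])   # eq. (22)
Sigma = np.diag([sigma1**2]*n1 + [sigma2**2]*n2)                # eqs. (27)-(28)

# Parameter BMSE matrix, eq. (33); its (1,1) element is Bmse(theta1_hat).
M = np.linalg.inv(np.linalg.inv(Omega) + X.T @ np.linalg.inv(Sigma) @ X)

# Closed form, eqs. (34)-(35).
g1, g2 = omega1**2/sigma1**2, omega2**2/sigma2**2
bmse = omega1**2 * (1 + n2*g2*(1 - rho**2)) / \
       ((1 + n1*g1)*(1 + n2*g2) - n1*n2*g1*g2*rho**2)

print(M[0, 0], bmse)   # the two values agree
```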

Figure 3 illustrates the dependence of the BMSE of $\hat{\theta}_1$ on the number of observations from the primary and secondary responses. For this illustration, the population variance parameters $\omega_1^2$, $\omega_2^2$, $\sigma_1^2$, and $\sigma_2^2$ and the between-subjects correlation $\rho$ were fixed at representative values. The figure shows the decrease of the BMSE as a function of the collection of secondary response measurements, given different numbers of primary response measurements. For a large number of primary response measurements, little change in BMSE is derived from the secondary response. However, for a small number of primary response measurements, the BMSE decreases substantially with just a few measurements of the secondary response.

To further illustrate the bivariate Bayesian forecasting procedure, simulated subject-specific parameter pairs $(\theta_1, \theta_2)$ were generated from the population distribution given by (24) for a sample of individuals, as shown in Figure 4. From this simulated set, individual #19 (circled in Figure 4), with subject-specific parameter vector $\boldsymbol{\theta} = (\theta_1, \theta_2)^T$, was chosen to illustrate the transition of the primary response prediction from the population mean response to the true expected response through Bayesian forecasting. For this individual, we simulated errors from (25) with response-specific error variances $\sigma_1^2$ and $\sigma_2^2$. Bivariate responses were then constructed using (23).

The MMSE prediction $\hat{y}_{1,n_1+1}$ for individual #19 was iteratively determined, first assuming that only primary responses were observed and then assuming that pairs of primary and secondary responses were observed. The iterative estimates for both cases are shown in Figure 5, along with the simulated data. The variance of the posterior distribution for $\theta_1$ from (32) was used to construct 95% confidence intervals on $E[y_{1,n_1+1}]$. For the individual considered, the predictions based on both response variables (purple line) were more accurate than the predictions based on only the primary response (blue line) for the first few iterations of MMSE prediction. Note, however, that while this behavior of the prediction accuracy is found on average, as follows from (34), it is not necessarily true for each and every individual to which the procedure may be applied. Thus, caution is needed in relying on bivariate Bayesian forecasting for improved prediction accuracy of specific individuals; improved prediction accuracy can only be counted on in the average over individuals.
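A minimal sketch of this iterative procedure, using the estimator (31) with assumed population values (not those underlying Figures 4 and 5):

```python
import numpy as np

# Iterative bivariate MMSE prediction via eq. (31); all values assumed.
mu = np.array([10.0, 8.0])                     # prior means of (theta1, theta2)
omega1, omega2, rho = 2.0, 1.5, 0.8
sigma1, sigma2 = 3.0, 2.0
Omega = np.array([[omega1**2, rho*omega1*omega2],
                  [rho*omega1*omega2, omega2**2]])

rng = np.random.default_rng(19)
theta = rng.multivariate_normal(mu, Omega)     # subject-specific pair, eq. (24)
n_max = 5
e1 = rng.normal(0, sigma1, n_max)              # measurement errors, eq. (25)
e2 = rng.normal(0, sigma2, n_max)

for n in range(1, n_max + 1):                  # incorporate n response pairs
    y = np.concatenate([theta[0] + e1[:n], theta[1] + e2[:n]])    # eq. (23)
    X = np.vstack([np.column_stack([np.ones(n), np.zeros(n)]),
                   np.column_stack([np.zeros(n), np.ones(n)])])   # eq. (22)
    Sigma = np.diag([sigma1**2]*n + [sigma2**2]*n)                # eq. (28)
    S = X @ Omega @ X.T + Sigma
    theta_hat = mu + Omega @ X.T @ np.linalg.solve(S, y - X @ mu) # eq. (31)
    print(f"n={n}  prediction={theta_hat[0]:.3f}  true={theta[0]:.3f}")
```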

6. Data Collection Cost Minimization

When Bayesian forecasting is applied to individualize predictions, data must be collected to tailor the population model to the individual at hand. In certain sleep research applications, such as forecasting of sleep parameters across nights or predicting performance deficits across periods of sleep deprivation, and in a wide range of other biomedical contexts, this requires creating multiple opportunities for taking measurements. This may be an expensive proposition, and reducing the number of measurement bouts needed to obtain the necessary data could entail considerable cost savings. By measuring secondary responses and incorporating these through bivariate Bayesian forecasting, it may be possible to achieve a given level of prediction accuracy at lower overall cost of data acquisition. Here we explore this possibility in the case of the bivariate random intercept model.

We consider a scenario in which the cost of collecting an observation on the primary response is $c_1$, the cost of collecting an observation on the secondary response is $c_2$, and the total cost of data collection is the sum of primary and secondary response costs accrued,
$$C = c_1 n_1 + c_2 n_2, \qquad (36)$$
where $n_1, n_2 \geq 0$. For this scenario, we determine how many observations $n_1$ and $n_2$ we may expect to have to collect from each response type in order to minimize the total cost of achieving, on average, a given prediction accuracy $D$:
$$\mathrm{Bmse}\left(\hat{y}_{1,n_1+1}\right) = D. \qquad (37)$$

We can simplify the minimization problem by removing either $n_1$ or $n_2$ from both the total cost equation and the nonnegativity constraints on the number of data points. Using $\hat{y}_{1,n_1+1} = \hat{\theta}_1$ (see (30)), it follows that the BMSE of $\hat{y}_{1,n_1+1}$ is equal to the BMSE of $\hat{\theta}_1$ (i.e., $\mathrm{Bmse}(\hat{y}_{1,n_1+1}) = \mathrm{Bmse}(\hat{\theta}_1)$). Fixing (37) therefore implies that $\mathrm{Bmse}(\hat{\theta}_1) = D$, which can be used with (34) to obtain a relationship between the number of primary and secondary observations needed to meet the average accuracy criterion $D$:
$$n_1 = \frac{\omega_1^2\left(1 + n_2\gamma_2\left(1-\rho^2\right)\right) - D\left(1 + n_2\gamma_2\right)}{D\,\gamma_1\left(1 + n_2\gamma_2\left(1-\rho^2\right)\right)}. \qquad (38)$$
We then substitute (38) into (36):
$$C = c_1\,\frac{\omega_1^2\left(1 + n_2\gamma_2\left(1-\rho^2\right)\right) - D\left(1 + n_2\gamma_2\right)}{D\,\gamma_1\left(1 + n_2\gamma_2\left(1-\rho^2\right)\right)} + c_2 n_2. \qquad (39)$$
The constraint $n_1 \geq 0$ can be equivalently formulated as an upper bound on $n_2$:
$$n_2 \leq \frac{\omega_1^2 - D}{\gamma_2\left(D - \omega_1^2\left(1-\rho^2\right)\right)}. \qquad (40)$$
Consideration of this constraint is only necessary if there is a certain number of measurements on the secondary response for which it is possible to obtain the desired accuracy without measurement of the primary response; that is,
$$\lim_{n_2\to\infty}\mathrm{Bmse}\left(\hat{\theta}_1\right)\Big|_{n_1=0} \leq D. \qquad (41)$$
Substituting from (34) with $n_1 = 0$, this latter condition can be reformulated as follows:
$$\omega_1^2\left(1-\rho^2\right) \leq D. \qquad (42)$$
Thus, $n_2$ has an upper bound when $D$, the desired BMSE of $\hat{\theta}_1$, is not smaller than $\omega_1^2(1-\rho^2)$, the minimum BMSE of $\hat{\theta}_1$ which can be obtained using only the secondary response. It follows that the constraint set for the minimization problem is
$$0 \leq n_2 \leq \frac{\omega_1^2 - D}{\gamma_2\left(D - \omega_1^2\left(1-\rho^2\right)\right)} \ \text{ if } \omega_1^2\left(1-\rho^2\right) \leq D; \qquad n_2 \geq 0 \ \text{ otherwise}. \qquad (43)$$
If $n_2$ exceeds its upper bound, then the number of secondary response measurements is more than what is minimally needed to meet the average accuracy criterion (37).
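The relationship (38) can be checked numerically: plugging the returned $n_1$ back into (34) should reproduce the target accuracy $D$. A short sketch with assumed values:

```python
import numpy as np

# Check of eq. (38): the n1 computed for a given n2 reproduces D in eq. (34).
omega1, rho = 2.0, 0.8
g1, g2 = 0.5, 0.7          # gamma_k = omega_k^2 / sigma_k^2, assumed
D = 1.0                    # desired average accuracy (BMSE), assumed

def bmse(n1, n2):          # eq. (34)
    return omega1**2 * (1 + n2*g2*(1 - rho**2)) / \
           ((1 + n1*g1)*(1 + n2*g2) - n1*n2*g1*g2*rho**2)

def n1_for(n2):            # eq. (38)
    u = 1 + n2*g2*(1 - rho**2)
    return (omega1**2 * u - D*(1 + n2*g2)) / (D * g1 * u)

for n2 in [0.0, 1.0, 4.0]:
    print(n2, n1_for(n2), bmse(n1_for(n2), n2))   # last column equals D
```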

The minimal cost solution occurs either on the boundary of the region defined by (43) or at a local minimum in the interior of this region. Figure 6 shows how different values of the error variance $\sigma_1^2$ can result in either boundary or interior solution types. For this demonstration, the remaining population parameters $\omega_1^2$, $\omega_2^2$, $\rho$, and $\sigma_2^2$ are held fixed, the costs of measurement are assumed to be $c_1$ = \$500 and $c_2$ = \$100, and the BMSE of $\hat{\theta}_1$ is fixed at the value $D$.

In cases where the solution lies in the interior of (43) at a local minimum, the solution must occur at critical points of (39), which can be found by setting to zero the derivative of the total cost with respect to $n_2$:
$$\frac{dC}{dn_2} = c_2 - \frac{c_1\gamma_2\rho^2}{\gamma_1 u^2} = 0, \qquad (44)$$
where
$$u = 1 + n_2\gamma_2\left(1-\rho^2\right). \qquad (45)$$
Substituting the above expression for $u$ into (44) and solving for $n_2$, we obtain the following two critical points:
$$n_2^{\pm} = \frac{\pm\sqrt{c_1\gamma_2\rho^2/\left(c_2\gamma_1\right)} - 1}{\gamma_2\left(1-\rho^2\right)}. \qquad (46)$$
The smaller critical point $n_2^-$ can be disregarded as a possible solution since it is always less than zero. The second derivative at $n_2^+$,
$$\frac{d^2C}{dn_2^2} = \frac{2\,c_1\gamma_2^2\rho^2\left(1-\rho^2\right)}{\gamma_1 u^3}, \qquad (47)$$
is positive when $u > 0$ (i.e., when $n_2 > -1/(\gamma_2(1-\rho^2))$), which implies that the cost function exhibits a local minimum at this point. If the local minimum is inside the region defined by (43) (see Figure 6(a)), then the solution to the cost minimization problem is
$$n_2 = n_2^{+} = \frac{\sqrt{c_1\gamma_2\rho^2/\left(c_2\gamma_1\right)} - 1}{\gamma_2\left(1-\rho^2\right)}, \qquad (48)$$
where $n_1$ is determined by substituting $n_2$ into (38), and the total cost is found from (36).

Alternatively, if $n_2^+$ is below the lower boundary of the region defined by (43) (see Figure 6(b)), then the minimal cost solution involves collecting no data from the secondary response. The conditions for which the secondary response is not part of the minimal cost solution are as follows:
$$n_2^{+} \leq 0 \iff \frac{c_1}{c_2} \leq \frac{\gamma_1}{\gamma_2\rho^2}, \qquad (49)$$
where $\gamma_k$ reflects the between-to-within variance ratio for the $k$th response type. For this case, the solution which achieves, on average, the level of accuracy $D$ is found from (38):
$$n_2 = 0, \qquad n_1 = \frac{\omega_1^2 - D}{D\,\gamma_1}. \qquad (50)$$

Finally, if $n_2^+$ is above the upper boundary of the region defined by (43) (see Figure 6(c)), then the solution for $n_2$ occurs at this boundary, where all the data are collected from the secondary response and none from the primary response. For this case, the solution is as follows:
$$n_1 = 0, \qquad n_2 = \frac{\omega_1^2 - D}{\gamma_2\left(D - \omega_1^2\left(1-\rho^2\right)\right)}. \qquad (51)$$

Figure 7 illustrates the number of observations required from the primary and secondary responses to obtain a fixed level of accuracy on average, for different values of the between-subjects correlation $\rho$ between primary and secondary response parameters. For the example shown, the population model parameters $\omega_1^2$, $\omega_2^2$, $\sigma_1^2$, and $\sigma_2^2$, the cost parameters $c_1$ and $c_2$, and the desired level of accuracy $D$ were held fixed. The figure illustrates the three cases described by (48), (50), and (51).
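The case analysis of (48), (50), and (51) can be collected into a small solver. The sketch below is a direct implementation under the assumptions of this section; all numerical inputs in the usage line are hypothetical placeholders (loosely WASO-flavored), not the fitted estimates of Section 7.

```python
import numpy as np

# Minimal-cost solver covering the three cases (48), (50), and (51).
def optimal_design(omega1, g1, g2, rho, c1, c2, D):
    n2 = (np.sqrt(c1*g2*rho**2/(c2*g1)) - 1) / (g2*(1 - rho**2))    # eq. (48)
    if n2 <= 0:                                   # case (50): primary response only
        return (omega1**2 - D)/(D*g1), 0.0
    if D >= omega1**2*(1 - rho**2):               # eq. (42): n2 is bounded above
        n2_max = (omega1**2 - D)/(g2*(D - omega1**2*(1 - rho**2)))  # eq. (40)
        if n2 >= n2_max:                          # case (51): secondary response only
            return 0.0, n2_max
    u = 1 + n2*g2*(1 - rho**2)
    n1 = (omega1**2*u - D*(1 + n2*g2))/(D*g1*u)   # eq. (38)
    return n1, n2

# Assumed inputs: omega1 in min, D in min^2 (15 min accuracy squared).
n1, n2 = optimal_design(omega1=20.0, g1=0.6, g2=0.6, rho=0.6,
                        c1=1250.0, c2=150.0, D=15.0**2)
print(n1, n2, 1250*n1 + 150*n2)   # fractional nights and total cost
```

With actual population estimates in place of these placeholders, the same function traces out the three solution regimes illustrated in Figure 7.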

7. Example: Efficient Assessment of an Individual’s Characteristic Wakefulness after Sleep Onset

To illustrate the cost minimization approach outlined in the previous section, we apply it in an example involving the assessment of wakefulness after sleep onset (WASO) in laboratory-based sleep studies. Here we define WASO as the duration of intermittent wakefulness during a sleep period, between the time of sleep onset and the time of final awakening. WASO can be measured by polysomnography (PSG), that is, measuring the sleep electroencephalogram (EEG) and other physiological sleep signals and scoring sleep/wake states, typically in 30 s epochs, based on those signals. PSG is the gold standard procedure for sleep/wake assessment, but it is labor-intensive and expensive to perform. WASO may also be measured in the laboratory using wrist actigraphy (i.e., wrist activity monitoring), which is considerably less expensive. Although actigraphy is not considered a gold standard for measuring WASO, the correspondence with PSG-based WASO is at least moderate in healthy populations [24].

We base our example on data from a sample of subjects (ages 22–38; 15 females) who spent between 6 and 13 nights and days inside a controlled laboratory environment with 10 h in bed for sleep (22:00–08:00) each day. The Institutional Review Board (IRB) of Washington State University approved the research, and subjects gave written informed consent. WASO was measured using both PSG (WASO-P) and actigraphy (WASO-A). PSG recordings were performed using digital equipment (Nihon Kohden, Foothill Ranch, CA). Sleep stages and periods of wakefulness were scored in 30 s epochs using standard criteria [25], and WASO-P was calculated from the scored records. Actigraphic recordings were made with Motionlogger wrist actigraphs (Ambulatory Monitoring, Inc., Ardsley, NY). Sleep and wakefulness were assessed from the actigraphic records using the automated algorithm of [26], which calculated WASO-A.

Let WASO-P be the primary response (as it is the gold standard measure) and let WASO-A be the secondary response. Assuming that WASO-P and WASO-A are normally distributed around distinct subject-specific means, we apply the model defined by (23)–(28) to our example. We anticipate that the subject-specific means for WASO-P and WASO-A are positively correlated. We aim to determine a cost-effective data collection scheme given a specific level of desired accuracy in estimates of an individual's mean WASO-P. For illustration purposes, we assume a fixed cost of $1,250 per night for WASO-P and $150 per night for WASO-A. We wish to estimate an individual's mean WASO-P to an average accuracy (i.e., the square root of the BMSE, $\sqrt{D}$) of 15 min. Using the equations derived in the previous section, we estimate the number of nights of WASO-P and WASO-A that most cost-effectively achieves the desired level of accuracy on average.

We estimated the population model parameters using the nlme package for R [27]. This package fits mixed-effects models using the approach of [28]. The maximum likelihood estimation method tends to underestimate variance parameters [29]; therefore, we estimated these parameters using the restricted maximum likelihood method [6].

In our dataset, the overall mean for WASO-A was estimated to be lower than the overall mean for WASO-P, indicating that WASO-A tended to underestimate the total amount of WASO as compared to PSG. The estimated between-subjects standard deviations for WASO-P and WASO-A were found to be the same. There was a substantial between-subjects correlation $\rho$ between subject means for WASO-P and WASO-A, albeit with a wide 95% confidence interval. The within-subjects variation around the subject mean was estimated separately for WASO-P and WASO-A (i.e., $\sigma_1^2$ and $\sigma_2^2$).

We determine from (43) that $n_2$, the number of actigraphy nights, is not bounded above; that is, we cannot achieve our desired accuracy with actigraphy alone. Further, we find from (46) that $n_2^+$ is within the feasible region defined by (43), and, therefore, the solution to the cost minimization problem is given by (48). Applying these equations, we achieve an average accuracy of 15 min at minimal cost by collecting 3.99 nights of actigraphy and 0.73 nights of PSG (see Figure 8(a)). An approximately optimal solution in the integer domain is found by the common practice of rounding the optimal continuous solution to the nearest integer values [30]. We verify through a grid search that the minimal cost integer value solution neighbors the analytic solution and can be obtained by rounding down to three nights of actigraphy and up to one night of PSG. This yields a total cost of $1,700. In contrast, achieving the same accuracy with PSG alone would require 2.24 nights, or in practice three nights of measurement, for a total cost of $3,750.
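The integer-domain grid search can be sketched as follows; because the fitted WASO estimates appear only in the original article, placeholder population values are assumed here.

```python
import numpy as np

# Grid search in the integer domain (cf. the rounding step above).
# Population values are assumed placeholders, not the fitted WASO estimates.
omega1, rho, g1, g2 = 20.0, 0.6, 0.6, 0.6
c1, c2, D = 1250.0, 150.0, 15.0**2

def bmse(n1, n2):                              # eq. (34)
    return omega1**2*(1 + n2*g2*(1 - rho**2)) / \
           ((1 + n1*g1)*(1 + n2*g2) - n1*n2*g1*g2*rho**2)

feasible = [(c1*n1 + c2*n2, n1, n2)
            for n1 in range(6) for n2 in range(11) if bmse(n1, n2) <= D]
print(min(feasible))   # cheapest integer design under the assumed values
```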

Note that these results are highly dependent on the estimated between-subjects correlation, for which the 95% confidence interval was wide. The minimal cost solution also depends on the level of accuracy that is required; see Figures 8(b) and 8(c) for scenarios with an average accuracy of 14 min and 13 min, respectively.

8. Linear Models with Time Dependency

For both the univariate and bivariate linear models, time dependency can be introduced by adding time as a covariate. This complicates the construction of a design matrix that enables predictions with a given average accuracy. We show that, in models with time dependency, the BMSE of a predicted response depends on the times at which responses are measured, and this dependency can be summarized by the mean and variance of the measurement times.

To illustrate, we consider a linear approximation of a time-dependent model known as the two-process model of sleep regulation [10, 31]. It has been shown that, for a range of sleep/wake scenarios, the two-process model can describe temporal changes in waking cognitive performance as the algebraic difference between two functions describing physiological processes: the homeostatic pressure for sleep and the circadian pressure for wakefulness [11]. Here we focus solely on modeling the homeostatic pressure for sleep, the dynamics of which are specified separately for sleep and for wakefulness. The dynamics can be modeled using the recursive formulation of [10]:
$$S_m = \begin{cases} S_{m-1}\,e^{-\Delta t/\tau_d} & \text{during sleep}, \\ 1 - \left(1 - S_{m-1}\right)e^{-\Delta t/\tau_r} & \text{during wakefulness}, \end{cases} \qquad (52)$$
where $S_m$ represents the homeostatic pressure after the $m$th time step of duration $\Delta t$, $S_{m-1}$ represents the homeostatic pressure at one time step before (i.e., at time $t_m - \Delta t$), $\Delta t$ is typically fixed at 0.5 h, and $\tau_d$ and $\tau_r$ are time constants for the decay and rise of the homeostatic process during sleep and wakefulness, respectively.

We divide the sleep/wake schedule into periods of sleep and periods of wake, both indexed by $p$. Let $S_0^{(p)}$ represent the initial homeostatic pressure for the $p$th period of wakefulness. It has been proposed that change over time in the model is better modeled as linear rather than exponential [32]. For the purpose of the present example, we adopt this idea and approximate the model over a period of continuous wakefulness using a linear interpolation between the start and end points of the wake period (see Figure 9).

Let $y_j$ represent (hypothetical) measurements of the build-up of homeostatic pressure for sleep during wakefulness. These data can be modeled using the following approximation:
$$y_j = S_0^{(p)} + \frac{t_j}{T}\left(S_T^{(p)} - S_0^{(p)}\right) + \varepsilon_j, \qquad (53)$$
where $t_j$ represents the amount of time elapsed since awakening, $T$ is used to denote the duration of the wake period, $S_T^{(p)}$ denotes the homeostatic pressure at the end of the wake period, and $\varepsilon_j$ represents normally distributed measurement error.

We consider the homeostatic process over a repeating schedule consisting of 16 h of wakefulness and 8 h of sleep. In this schedule, individuals maintain a steady state for which the homeostatic pressure at the onset of the wake period is constant across days; that is, $S_0^{(p)} = S_0$ for all $p$. This allows us to derive the homeostatic pressure at the start of wakefulness as a function of $\tau_d$ and $\tau_r$:
$$S_0 = \frac{e^{-8/\tau_d}\left(1 - e^{-16/\tau_r}\right)}{1 - e^{-8/\tau_d}\,e^{-16/\tau_r}}. \qquad (54)$$
The equation for the homeostatic pressure during a particular wake period can thus be written as
$$E\left[y_j\right] = \alpha + \beta\,t_j, \qquad (55)$$
where
$$\alpha = S_0, \qquad (56)$$
$$\beta = \frac{S_T - S_0}{T}, \quad \text{with } S_T = 1 - \left(1 - S_0\right)e^{-T/\tau_r}. \qquad (57)$$
In matrix form, the model can be written as
$$\mathbf{y} = X\boldsymbol{\theta} + \boldsymbol{\varepsilon}, \qquad (58)$$
where the design matrix is given by
$$X = \begin{pmatrix} 1 & t_1 \\ \vdots & \vdots \\ 1 & t_n \end{pmatrix}, \qquad (59)$$
and the parameter vector is given by
$$\boldsymbol{\theta} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}. \qquad (60)$$
Analogous to (5), but for two parameters, we assume the following prior distribution on model parameters:
$$\boldsymbol{\theta} \sim N\!\left(\boldsymbol{\mu}_\theta,\ \Omega\right), \qquad \Omega = \begin{pmatrix} \omega_\alpha^2 & 0 \\ 0 & \omega_\beta^2 \end{pmatrix}. \qquad (61)$$
As in (6), we assume that the errors are independent realizations from a normal distribution with zero mean and variance $\sigma^2$. Analogous to (11) and (12), the BMSE for the MMSE prediction of the expected (primary) response at some time $t_{n+1}$ in the univariate case is as follows (see Appendix A):
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \frac{\dfrac{1}{\omega_\beta^2} + \dfrac{t_{n+1}^2}{\omega_\alpha^2} + \dfrac{n}{\sigma^2}\left(s_t^2 + \left(t_{n+1} - \bar{t}\right)^2\right)}{\dfrac{1}{\omega_\alpha^2\omega_\beta^2} + \dfrac{n\left(\bar{t}^2 + s_t^2\right)}{\omega_\alpha^2\sigma^2} + \dfrac{n}{\omega_\beta^2\sigma^2} + \dfrac{n^2 s_t^2}{\sigma^4}}, \qquad (62)$$
where $\bar{t}$ denotes the mean of the measurement times,
$$\bar{t} = \frac{1}{n}\sum_{j=1}^{n} t_j, \qquad (63)$$
and $s_t^2$ denotes the variance of these times,
$$s_t^2 = \frac{1}{n}\sum_{j=1}^{n}\left(t_j - \bar{t}\right)^2. \qquad (64)$$

Our task is to determine the measurement times $t_1, \ldots, t_n$ for this example that will minimize $\mathrm{Bmse}(\hat{y}_{n+1})$. Equation (62) shows that $\mathrm{Bmse}(\hat{y}_{n+1})$ depends on the measurement times only through their mean $\bar{t}$ and variance $s_t^2$. Consequently, instead of conducting an $n$-dimensional minimization of $\mathrm{Bmse}(\hat{y}_{n+1})$ over all the measurement times, we can write $\mathrm{Bmse}(\hat{y}_{n+1})$ as a function of $\bar{t}$ and $s_t^2$ and conduct a two-dimensional minimization. In doing this, we find that $\mathrm{Bmse}(\hat{y}_{n+1})$ is minimized both in the limit $s_t^2 \to \infty$ (see Appendix B) and also, more practically relevant, when
$$\bar{t} = t_{n+1}\left(1 + \frac{\sigma^2}{n\,\omega_\alpha^2}\right), \qquad s_t^2 = 0. \qquad (65)$$
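The closed form (62) and the optimum (65) can be verified numerically against the matrix form of Appendix A; the parameter values below are assumptions for illustration.

```python
import numpy as np

# Check of eq. (62) against the matrix form (A.1), and of the optimum (65).
wa2, wb2, s2 = 4.0, 0.25, 1.0        # prior variances (intercept, slope); error variance
n, t_pred = 4, 10.0

def bmse_matrix(times):
    X = np.column_stack([np.ones(len(times)), times])             # eq. (59)
    M = np.linalg.inv(np.diag([1/wa2, 1/wb2]) + X.T @ X / s2)     # eq. (A.9)
    x = np.array([1.0, t_pred])
    return x @ M @ x                                              # eq. (A.1)

def bmse_closed(tbar, st2):                                       # eq. (62)
    num = 1/wb2 + t_pred**2/wa2 + (n/s2)*(st2 + (t_pred - tbar)**2)
    den = (1/(wa2*wb2) + n*(tbar**2 + st2)/(wa2*s2)
           + n/(wb2*s2) + n**2*st2/s2**2)
    return num/den

times = np.array([1.0, 3.0, 6.0, 9.0])
print(bmse_matrix(times), bmse_closed(times.mean(), times.var()))  # agree

t_opt = t_pred*(1 + s2/(n*wa2))      # eq. (65): all n measurements at this time
print(bmse_matrix(np.full(n, t_opt)), s2*wa2/(s2 + n*wa2))         # both equal the minimum
```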

The optimal mean measurement time for this example, given by (65), lies slightly above the prediction time; in the limit as $\omega_\alpha^2 \to \infty$, $\mathrm{Bmse}(\hat{y}_{n+1})$ is minimized when the data are collected at times such that $\bar{t} = t_{n+1}$. If the prior variance on the intercept is equal to the error variance (i.e., $\omega_\alpha^2 = \sigma^2$), the effect of the Bayesian prior is equivalent to increasing the value of $\bar{t}$ that produces the minimum by $t_{n+1}/n$ (i.e., by the factor $1 + 1/n$). This adjustment is the same as what would manifest with no prior information on the intercept when adding an additional measurement time, $t = 0$, to the design matrix $X$.

The absolute minimum of $\mathrm{Bmse}(\hat{y}_{n+1})$ in (62) is not always located in the feasible region in this example, as defined by $0 \leq t_j \leq T$. More specifically, the absolute minimum of the unconstrained case is not located inside the feasible region if and only if (see Appendix B):
$$t_{n+1}\left(1 + \frac{\sigma^2}{n\,\omega_\alpha^2}\right) > T. \qquad (66)$$
Under this condition we hypothesize that, within the feasible region, $\mathrm{Bmse}(\hat{y}_{n+1})$ exhibits an absolute minimum when all the data are collected at time $T$. This is easy to show for the case of one measurement time (i.e., $n = 1$), and Appendix C contains a proof for the case of two measurement times (i.e., $n = 2$).

For $n > 2$, we conducted a simulation study to search for a counterexample (i.e., a case where the value of $\mathrm{Bmse}(\hat{y}_{n+1})$ when all data are collected at time $T$ is not the smallest value of $\mathrm{Bmse}(\hat{y}_{n+1})$ within the feasible region). For the simulation study, we simulated 10,000,000 times, drawing the variance components, the number of observations $n$, and the prediction time $t_{n+1}$ from ranges chosen as follows. Concerning the ranges of the variance components, note that the location of the minimum of $\mathrm{Bmse}(\hat{y}_{n+1})$ is invariant to the scale of the response. This can be demonstrated by scaling the variance matrices $\Omega$ and $\Sigma$ by a common factor and showing that the resulting $\mathrm{Bmse}(\hat{y}_{n+1})$ is then scaled by the same factor. The conclusion is that the shape of the surface of $\mathrm{Bmse}(\hat{y}_{n+1})$ depends on the variance components only through their relative magnitudes. Furthermore, we argue that if any variance component is more than three orders of magnitude greater than any other component, then it would be advantageous to simplify the model by removing the smaller component. As such, all cases for which this model is reasonable can be covered by a range spanning three orders of magnitude for each variance component. Further, concerning the number of observations, we expect that if there is a counterexample, it can be found at relatively small $n$; we therefore restricted $n$ to a limited range above $n = 2$. Finally, the range for $t_{n+1}$ is determined specifically so that the absolute minimum lies outside the feasible region (i.e., so that condition (66) holds).

For each simulation, we compared $\mathrm{Bmse}(\hat{y}_{n+1})$ at the hypothesized minimum, where $\bar{t} = T$, $s_t^2 = 0$, to a randomly chosen set of time points, where each $t_j$ is an independent realization from the following distribution:
$$t_j \sim U\left(0, T\right). \qquad (67)$$
For each of the 10,000,000 simulations, $\mathrm{Bmse}(\hat{y}_{n+1})$ at the hypothesized minimum was indeed smaller than $\mathrm{Bmse}(\hat{y}_{n+1})$ at the randomly chosen time points. Thus, we found no evidence against our original hypothesis that $\mathrm{Bmse}(\hat{y}_{n+1})$ exhibits an absolute minimum if all the data are collected at time $T$. An analytical proof is beyond the scope of this paper.
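A scaled-down sketch of this search (10,000 draws rather than 10,000,000; the sampling ranges are assumptions consistent with the discussion above):

```python
import numpy as np

# Scaled-down version of the counterexample search; ranges are assumed.
rng = np.random.default_rng(0)

def bmse(times, wa2, wb2, s2, t_pred):
    X = np.column_stack([np.ones(len(times)), times])
    M = np.linalg.inv(np.diag([1/wa2, 1/wb2]) + X.T @ X / s2)
    x = np.array([1.0, t_pred])
    return x @ M @ x

counterexamples = 0
for _ in range(10_000):
    wa2, wb2, s2 = 10**rng.uniform(-1.5, 1.5, 3)   # three-decade variance ranges
    n = int(rng.integers(3, 21))                   # n > 2
    T = rng.uniform(1.0, 24.0)                     # wake-period duration (assumed range)
    # Draw t_pred so that condition (66) holds, i.e., the unconstrained
    # minimum lies outside the feasible region.
    t_pred = rng.uniform(T/(1 + s2/(n*wa2)), T)
    b_hyp = bmse(np.full(n, T), wa2, wb2, s2, t_pred)       # all data at time T
    b_rnd = bmse(rng.uniform(0, T, n), wa2, wb2, s2, t_pred)
    counterexamples += b_rnd < b_hyp
print(counterexamples)    # 0 expected: no counterexample found
```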

We now extend our analysis to consider a time-dependent model with both primary and secondary responses. We formulate the model as follows:
$$y_{kj} = \alpha_k + \beta_k\,t_{kj} + \varepsilon_{kj}, \quad k = 1, 2, \quad j = 1, \ldots, n_k. \qquad (70)$$
In matrix form, the model is
$$\mathbf{y} = X\boldsymbol{\theta} + \boldsymbol{\varepsilon}, \qquad (71)$$
where the design matrix is given by
$$X = \begin{pmatrix} \mathbf{1}_{n_1} & \mathbf{0} & \mathbf{t}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{1}_{n_2} & \mathbf{0} & \mathbf{t}_2 \end{pmatrix}, \quad \mathbf{t}_k = \left(t_{k1}, \ldots, t_{kn_k}\right)^T, \qquad (72)$$
and the parameter vector is given by
$$\boldsymbol{\theta} = \left(\alpha_1, \alpha_2, \beta_1, \beta_2\right)^T. \qquad (73)$$

Let us assume the following prior distribution on model parameters: $\boldsymbol{\theta} \sim N(\boldsymbol{\mu}_\theta, \Omega)$, with the intercepts $(\alpha_1, \alpha_2)$ correlated between subjects with correlation $\rho_\alpha$ (and between-subjects variances $\omega_{\alpha_1}^2$ and $\omega_{\alpha_2}^2$), the slopes $(\beta_1, \beta_2)$ correlated between subjects with correlation $\rho_\beta$ (and between-subjects variances $\omega_{\beta_1}^2$ and $\omega_{\beta_2}^2$), and intercepts independent of slopes. As in (28), we assume that the errors are independent realizations from a normal distribution with zero mean and type-specific variances $\sigma_1^2$ and $\sigma_2^2$. The BMSE of estimates of the primary response at time $t_{1,n_1+1}$ is given by
$$\mathrm{Bmse}\left(\hat{y}_{1,n_1+1}\right) = \mathbf{x}^T M_{\hat{\theta}}\,\mathbf{x}, \qquad (74)$$
where
$$\mathbf{x} = \left(1, 0, t_{1,n_1+1}, 0\right)^T, \qquad (75)$$
$$M_{\hat{\theta}} = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}, \qquad (76)$$
and where the elements of $X^T\Sigma^{-1}X$ depend on the measurement times only through $\bar{t}_1$, the mean of the primary measurement times,
$$\bar{t}_1 = \frac{1}{n_1}\sum_{j=1}^{n_1} t_{1j}, \qquad (77)$$
$\bar{t}_2$, the mean of the secondary measurement times,
$$\bar{t}_2 = \frac{1}{n_2}\sum_{j=1}^{n_2} t_{2j}, \qquad (78)$$
$s_{t_1}^2$, the variance of the primary measurement times,
$$s_{t_1}^2 = \frac{1}{n_1}\sum_{j=1}^{n_1}\left(t_{1j} - \bar{t}_1\right)^2, \qquad (79)$$
and $s_{t_2}^2$, the variance of the secondary measurement times,
$$s_{t_2}^2 = \frac{1}{n_2}\sum_{j=1}^{n_2}\left(t_{2j} - \bar{t}_2\right)^2. \qquad (80)$$

Our task in the bivariate case of the example is to determine the primary measurement times and the secondary measurement times that minimize $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$. Equations (74) and (76) show that $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ depends on the measurement times only through their response-specific means and variances, $\bar{t}_1$, $\bar{t}_2$, $s_{t_1}^2$, and $s_{t_2}^2$. Consequently, it is sufficient to minimize $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ with respect to $\bar{t}_1$, $\bar{t}_2$, $s_{t_1}^2$, and $s_{t_2}^2$ and choose any set of measurement times with these means and variances. We find that $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ can be minimized by collecting data at the following times (see Appendix D):
$$\bar{t}_1 = t_{1,n_1+1}\left(1 + \frac{\sigma_1^2}{n_1\,\omega_*^2}\right), \qquad s_{t_1}^2 = 0, \qquad (81)$$
$$\bar{t}_2 = 0, \qquad s_{t_2}^2 = 0, \qquad (82)$$
where $\omega_*^2 = \omega_{\alpha_1}^2\left(1 + n_2\gamma_2(1-\rho_\alpha^2)\right)/\left(1 + n_2\gamma_2\right)$, with $\gamma_2 = \omega_{\alpha_2}^2/\sigma_2^2$, is the variance of the conditional prior distribution of the primary intercept given the secondary observations. The optimal mean measurement time for the primary response variable, $\bar{t}_1$ in (81), lies slightly above the prediction time. In the limit as $\rho_\alpha \to 0$, the solution becomes the univariate solution:
$$\bar{t}_1 = t_{1,n_1+1}\left(1 + \frac{\sigma_1^2}{n_1\,\omega_{\alpha_1}^2}\right). \qquad (83)$$
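A numerical spot-check of the design (81)-(82): under the stated assumptions, its BMSE should not exceed that of any randomly drawn feasible design when (84) does not hold. All parameter values in the sketch are illustrative assumptions.

```python
import numpy as np

# Spot-check of the design (81)-(82) against random feasible designs.
rng = np.random.default_rng(2)
va1, va2, vb1, vb2 = 4.0, 3.0, 0.2, 0.2      # intercept/slope prior variances (assumed)
ra, rb = 0.8, 0.5                            # rho_alpha, rho_beta (assumed)
s1, s2 = 1.0, 1.0                            # error variances (assumed)
n1, n2, t_pred, T = 3, 5, 8.0, 16.0

Omega = np.zeros((4, 4))
Omega[:2, :2] = [[va1, ra*np.sqrt(va1*va2)], [ra*np.sqrt(va1*va2), va2]]
Omega[2:, 2:] = [[vb1, rb*np.sqrt(vb1*vb2)], [rb*np.sqrt(vb1*vb2), vb2]]

def bmse(t1, t2):
    X = np.zeros((n1 + n2, 4))
    X[:n1, 0], X[:n1, 2] = 1.0, t1           # primary rows: intercept 1, slope 1
    X[n1:, 1], X[n1:, 3] = 1.0, t2           # secondary rows: intercept 2, slope 2
    Sinv = np.diag([1/s1]*n1 + [1/s2]*n2)
    M = np.linalg.inv(np.linalg.inv(Omega) + X.T @ Sinv @ X)     # eq. (76)
    x = np.array([1.0, 0.0, t_pred, 0.0])                        # eq. (75)
    return x @ M @ x                                             # eq. (74)

g2 = va2/s2
w_star = va1*(1 + n2*g2*(1 - ra**2))/(1 + n2*g2)
t_opt = t_pred*(1 + s1/(n1*w_star))          # eq. (81)
b_opt = bmse(np.full(n1, t_opt), np.zeros(n2))
b_rand = min(bmse(rng.uniform(0, T, n1), rng.uniform(0, T, n2)) for _ in range(2000))
print(b_opt, b_rand)    # b_opt should not exceed b_rand
```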

Again, the absolute minimum of $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ is not always located in the feasible region defined by $0 \leq t_{kj} \leq T$. More specifically, the absolute minimum of the unconstrained function is located outside of the feasible region if and only if (see Appendix D):
$$t_{1,n_1+1}\left(1 + \frac{\sigma_1^2}{n_1\,\omega_*^2}\right) > T. \qquad (84)$$
Under this condition, it seems logical that the minimum would occur if we were to collect all the primary data at time $T$ and all the secondary data at time zero. However, a simulation study similar to that described above revealed counterexamples, which occurred when the between-subjects correlation between the slopes, $\rho_\beta$, was high. When $T$ was decreased (or the ranges of the variance components for the slope were increased), counterexamples also occurred at smaller values of $\rho_\beta$. Therefore, the optimal measurement scheme for the bivariate case appears to depend on $\rho_\beta$ and $T$. When $\rho_\beta$ is small or $T$ is large (in comparison to the magnitudes of the slope variance components), the optimal design seems to be collecting all the primary data at time $T$ and all the secondary data at time zero. When $\rho_\beta$ is large or $T$ is small (compared to the magnitudes of the slope variance components), better designs are likely to be found numerically.

In summary, the average prediction accuracy for the simple linear time-dependent model depends not only on the number of measurements collected, but also on the times when these measurements are taken. In the univariate case, the prediction accuracy depends on these times only through their mean and variance. In the example of the two-process model, the optimal mean of the measurement times is slightly after the prediction time, where the delay increases with more prior information on the intercept and with fewer or less informative data. When little prior information on the intercept is available, it is usually possible to collect data so that the absolute minimum of $\mathrm{Bmse}(\hat{y}_{n+1})$ is achieved. In the case where the theoretical minimum cannot be achieved (i.e., when condition (66) is satisfied), minimization of $\mathrm{Bmse}(\hat{y}_{n+1})$ can be achieved by collecting all the data at time $T$.

In the bivariate case of our example, the prediction accuracy depends on the measurement times only through the means and variances of the primary and secondary measurement times. Furthermore, $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ is minimized by centering the primary measurement times slightly above the prediction time (see (81)), as in the univariate case, and collecting all secondary measurements at time zero. As in the univariate case, when little prior information on the primary intercept is available, it is usually possible to collect data so that the absolute minimum of $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ is achieved. In the case where this minimum cannot be achieved (i.e., when condition (84) is satisfied), minimization of $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ can usually be obtained, for the parameter ranges considered in our simulation, by collecting all primary data at time $T$ and all secondary data at time zero.

9. Nonlinear Models with Time Dependency

Lastly, we focus briefly on the nonlinear case, where the BMSE generally lacks a closed form solution. Obtaining the BMSE of the MMSE estimator for nonlinear models typically requires numerical integration of the joint probability density of the data and the parameters. We illustrate this with an example in which we numerically estimate the prediction BMSE given a nonlinear model and a single primary response measurement and show how it can be improved by a secondary response measurement.

We consider a two-parameter sinusoidal model of circadian (i.e., 24 h) rhythmicity, defined for a given subject $i$ and a bivariate response $(y_{i1j}, y_{i2j})$, as follows:
$$y_{ikj} = a_{ik}\sin\!\left(\frac{2\pi\left(t_{kj} + \phi_i\right)}{24}\right) + \varepsilon_{ikj}, \quad k = 1, 2, \qquad (85)$$
where $t$ is in hours; $a_{ik}$ represents a response-specific amplitude; and $\phi_i$ represents the phase, which is assumed to be common to both response types. For our example, we assume a multivariate normal prior for the subject-specific amplitudes and phase, with the amplitudes of the two response types correlated between subjects:
$$\left(a_{i1}, a_{i2}, \phi_i\right) \sim N\left(\boldsymbol{\mu}, \Omega\right). \qquad (86)$$
We simulated individuals from this model using normally distributed errors, with response-specific standard deviations $\sigma_1$ and $\sigma_2$. For each individual, we simulated a single primary response and a single secondary response, each at a fixed measurement time. Bayesian forecasting was performed using Markov chain Monte Carlo (MCMC) with a chain length of 100,000 to obtain the MMSE predictions $\hat{y}_1(t)$ for each individual at time $t$. Predictions were constructed using the primary data point alone, and also using both the primary data point and the secondary data point.
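A compact sketch of this procedure using random-walk Metropolis sampling; the prior means, covariances, error levels, and measurement times below are assumptions, not the values used for Figure 10.

```python
import numpy as np

# MCMC-based forecast for model (85); all numerical settings are assumed.
rng = np.random.default_rng(7)
mu = np.array([2.0, 1.5, 3.0])                 # prior means of (a1, a2, phi)
C = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.5, 0.0],
              [0.0, 0.0, 2.0]])                # prior covariance (a1-a2 correlated)
s1, s2 = 0.5, 0.5                              # error SDs
truth = rng.multivariate_normal(mu, C)         # one simulated individual
t1, t2 = 6.0, 9.0                              # one measurement time per response
y1 = truth[0]*np.sin(2*np.pi*(t1 + truth[2])/24) + rng.normal(0, s1)
y2 = truth[1]*np.sin(2*np.pi*(t2 + truth[2])/24) + rng.normal(0, s2)

Cinv = np.linalg.inv(C)
def log_post(p, use_secondary):
    lp = -0.5*(p - mu) @ Cinv @ (p - mu)       # log prior (up to a constant)
    lp -= 0.5*((y1 - p[0]*np.sin(2*np.pi*(t1 + p[2])/24))/s1)**2
    if use_secondary:
        lp -= 0.5*((y2 - p[1]*np.sin(2*np.pi*(t2 + p[2])/24))/s2)**2
    return lp

def mcmc(use_secondary, steps=100_000):
    p, lp = mu.copy(), log_post(mu, use_secondary)
    draws = np.empty((steps, 3))
    for i in range(steps):                     # random-walk Metropolis
        q = p + rng.normal(0, 0.2, 3)
        lq = log_post(q, use_secondary)
        if np.log(rng.uniform()) < lq - lp:
            p, lp = q, lq
        draws[i] = p
    return draws[steps//10:]                   # drop burn-in

t_pred = 15.0
for flag in (False, True):
    d = mcmc(flag)
    pred = (d[:, 0]*np.sin(2*np.pi*(t_pred + d[:, 2])/24)).mean()  # MMSE prediction
    print("with secondary" if flag else "primary only", pred)
```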

Figure 10 shows these predictions for a randomly chosen individual. The 95% confidence intervals on $E[y_1(t)]$ were constructed using the quantiles of the posterior distribution for $E[y_1(t)]$. For the individual considered, the predictions based on both response variables (purple line and shading) are at many times substantially more accurate (i.e., they have smaller posterior variance) than the predictions based on only the primary response (blue line and shading).

The average accuracy over individuals was assessed at time $t$ by estimating the BMSE, as follows (cf. (3)):
$$\widehat{\mathrm{Bmse}}\left(\hat{y}_1(t)\right) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_{i1}(t) - E\left[y_{i1}(t)\right]\right)^2, \qquad (87)$$
where $N$ is the number of simulated individuals. The estimated BMSE of $\hat{y}_1(t)$ when using only the primary data points was 0.53, and the estimated BMSE of $\hat{y}_1(t)$ when using both the primary and secondary data points was 0.20. These results suggest that there are nonlinear modeling scenarios where secondary response variables can improve predictions of primary response variables considerably via between-subjects correlation.

10. Discussion

In this paper, we illustrated in a Bayesian, repeated-measures framework how to improve the prediction accuracy of the expected response, as measured by the BMSE, for a primary response variable in the presence of a secondary response variable that is correlated with the primary response variable between subjects. To set up the general procedure of improving prediction accuracy through Bayesian forecasting, we constructed the BMSE for a simple univariate random intercept model.

We applied the general procedure to a bivariate random intercept model and derived the BMSE of the primary response predictions for this model. We studied how the BMSE depends on the number of observations from the secondary response and found that, for a fixed number of primary response observations, the BMSE remains bounded below no matter how many secondary observations are collected, as reflected in the constraint (40). The potential value of considering a secondary response may be assessed by considering whether this lower bound represents a meaningful improvement in the expected prediction accuracy.

Assuming the availability of a reasonably highly correlated secondary variable, we also addressed the problem of determining the number of primary and secondary measurements needed to obtain a given average accuracy at minimal cost. We derived equations for the solution to this problem and illustrated their use with an example from sleep research. Given previously observed means, variances, and between-subjects correlation for polysomnographic (primary) and actigraphic (secondary) measurements of wakefulness after sleep onset, and assuming reasonable measurement costs, we found that, to obtain an average prediction accuracy of 15 min, the use of actigraphy in addition to polysomnography resulted in a substantial reduction in estimated data collection costs as compared to polysomnography alone.

We then considered a steady-state, linear approximation of the homeostatic component for the two-process model of sleep regulation [10, 31] in the univariate case. We found that the minimization of the BMSE with respect to the vector of times at which the data are collected can be divided into two subcases. In one subcase, the BMSE is minimized by collecting data at times with a mean value slightly above the time at which predictions are to be made, with the offset being inversely proportional to the variance of the prior on the intercept. In the second subcase, all of the data is to be collected at the maximal time point. We extended the results of this example to the bivariate case, which again can be divided into two subcases. In one subcase, the BMSE is minimized by collecting the data so that the secondary measurement times have a mean of zero, and the primary variable is collected at times with a mean value somewhat above the time at which the predictions are to be made, like in the univariate case. In the other subcase, the time points that minimize the BMSE may best be found numerically.

Finally, we considered a nonlinear circadian model and determined the improvement in individualized prediction accuracy from a single primary data point versus both a single primary data point and a single secondary data point. For this particular model (85), we found the improvement to be substantial, suggesting that there are cases for nonlinear models where a secondary variable can substantially improve prediction accuracy for the primary variable, given a reasonably high between-subjects correlation.

In conclusion, depending on the between- and within-subjects variance components and the between-subjects correlation between primary and secondary responses, using secondary response data can be effective in increasing the individualized prediction accuracy on the primary response variable in Bayesian forecasting.

This work represents an improvement over the work of Chandler and colleagues [19], who proposed incorporating secondary variables in individualized performance predictions as covariates in a generalized linear model. An advantage of their approach is that it accounts for perturbations on system dynamics from external factors that are common to the outcome variables considered. A drawback is that the approach does not accumulate information about individual differences over time and therefore does not become increasingly accurate for individualized predictions as more data are collected for the individual at hand. Furthermore, the technique requires secondary data to be measured at the same time as the primary data is to be predicted. This is not a requirement for the presently proposed method, for which individualized predictions can be made for any given time, even if secondary data are unavailable then.

That said, here we considered only Bayesian models with diagonally structured error covariance matrices. Such models do not account for correlation within subjects between response types. The work of Chandler and colleagues [19] does account for such correlation, using a fixed linear relationship between primary and secondary responses. We did not consider this possibility here, using merely a diagonal error covariance matrix with one parameter for each response type. However, the Bayesian modeling framework in this paper can be expanded to account for correlation within subjects between response types by adding additional structure to the error covariance matrix.

Finally, the models described here assume that error variance is constant over repeated measures and across different subjects. This constraint can be relaxed easily by allowing the error covariance matrix to have different elements for different individuals.

The multivariate repeated-measures Bayesian forecasting framework presented here may be useful in a variety of clinical settings. One example is modeling the disabling effects of chronic back pain, where pain-related fear may be a good choice for a secondary variable [33]. For other examples, a rich literature in this area can be found in the domain of anesthesiology [34], where clinical applications of multivariate Bayesian forecasting abound.

Appendices

A. Expression for the Univariate Linear Time-Dependent Model

Theorem A.1. The BMSE of the MMSE prediction of the expected response in the univariate, linear, time-dependent model given by (58) is given by
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \mathbf{x}_{n+1}^T M_{\hat{\theta}}\,\mathbf{x}_{n+1}, \qquad (A.1)$$
where
$$\mathbf{x}_{n+1} = \begin{pmatrix} 1 \\ t_{n+1} \end{pmatrix}, \qquad M_{\hat{\theta}} = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}, \qquad (A.2)$$
where $t_{n+1}$ represents the time for which the expected response is predicted, $\Omega$ is the prior variance matrix for the parameter vector $\boldsymbol{\theta}$, $X$ is the design matrix, $\Sigma$ is the error covariance matrix, and $\mathrm{Bmse}$ is defined in (3).

Proof. Since
$$E\left[y_{n+1}\right] = \mathbf{x}_{n+1}^T\boldsymbol{\theta} \qquad (A.3)$$
and since the MMSE estimator commutes over linear transformations [23], the MMSE estimator of the expected response is given by
$$\hat{y}_{n+1} = \mathbf{x}_{n+1}^T\hat{\boldsymbol{\theta}}. \qquad (A.4)$$
Therefore,
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = E\left[\left(\mathbf{x}_{n+1}^T\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)\right)^2\right] = \mathbf{x}_{n+1}^T\,E\left[\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)^T\right]\mathbf{x}_{n+1}, \qquad (A.5)$$
where the parameter BMSE matrix is given by [23] as follows:
$$M_{\hat{\theta}} = E\left[\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)\left(\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\right)^T\right] = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}. \qquad (A.6)$$
Therefore, the BMSE of the estimated response is given by (A.1).

Theorem A.2. The BMSE of MMSE predictions for the univariate, linear, time-dependent model given by (58) is given by
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \frac{\dfrac{1}{\omega_\beta^2} + \dfrac{t_{n+1}^2}{\omega_\alpha^2} + \dfrac{n}{\sigma^2}\left(s_t^2 + \left(t_{n+1} - \bar{t}\right)^2\right)}{\dfrac{1}{\omega_\alpha^2\omega_\beta^2} + \dfrac{n\left(\bar{t}^2 + s_t^2\right)}{\omega_\alpha^2\sigma^2} + \dfrac{n}{\omega_\beta^2\sigma^2} + \dfrac{n^2 s_t^2}{\sigma^4}}, \qquad (A.7)$$
where $\mathrm{Bmse}(\hat{y}_{n+1})$ is given in (A.1).

Proof. For this model, the inverse of the between-subjects covariance matrix is given by
$$\Omega^{-1} = \begin{pmatrix} 1/\omega_\alpha^2 & 0 \\ 0 & 1/\omega_\beta^2 \end{pmatrix}. \qquad (A.8)$$
Furthermore, the parameter BMSE matrix is given by (see Theorem A.1)
$$M_{\hat{\theta}} = \left(\Omega^{-1} + X^T\Sigma^{-1}X\right)^{-1}, \qquad (A.9)$$
where $\mathbf{x}_{n+1}$ is the covariate vector at the time at which we want to make predictions:
$$\mathbf{x}_{n+1} = \begin{pmatrix} 1 \\ t_{n+1} \end{pmatrix}. \qquad (A.10)$$
The error variance matrix is as follows:
$$\Sigma = \sigma^2 I_n. \qquad (A.11)$$
The design matrix is as follows:
$$X = \begin{pmatrix} 1 & t_1 \\ \vdots & \vdots \\ 1 & t_n \end{pmatrix}. \qquad (A.12)$$
The matrix multiplication $X^T\Sigma^{-1}X$ can be computed to be
$$X^T\Sigma^{-1}X = \frac{1}{\sigma^2}\begin{pmatrix} n & \sum_j t_j \\ \sum_j t_j & \sum_j t_j^2 \end{pmatrix}. \qquad (A.13)$$
The matrix inverse can then be computed as follows:
$$M_{\hat{\theta}} = \frac{1}{\Delta}\begin{pmatrix} \dfrac{1}{\omega_\beta^2} + \dfrac{\sum_j t_j^2}{\sigma^2} & -\dfrac{\sum_j t_j}{\sigma^2} \\ -\dfrac{\sum_j t_j}{\sigma^2} & \dfrac{1}{\omega_\alpha^2} + \dfrac{n}{\sigma^2} \end{pmatrix}, \quad \Delta = \left(\frac{1}{\omega_\alpha^2} + \frac{n}{\sigma^2}\right)\left(\frac{1}{\omega_\beta^2} + \frac{\sum_j t_j^2}{\sigma^2}\right) - \frac{\left(\sum_j t_j\right)^2}{\sigma^4}. \qquad (A.14)$$
Computing the quadratic form, we find that
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \frac{\dfrac{1}{\omega_\beta^2} + \dfrac{1}{\sigma^2}\sum_j t_j^2 - \dfrac{2t_{n+1}}{\sigma^2}\sum_j t_j + t_{n+1}^2\left(\dfrac{1}{\omega_\alpha^2} + \dfrac{n}{\sigma^2}\right)}{\Delta}. \qquad (A.15)$$
We note the following decomposition of the sum of squared times:
$$\sum_{j=1}^{n} t_j^2 = n\left(\bar{t}^2 + s_t^2\right). \qquad (A.16)$$
Including this decomposition (and $\sum_j t_j = n\bar{t}$), we find that
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \frac{\dfrac{1}{\omega_\beta^2} + \dfrac{t_{n+1}^2}{\omega_\alpha^2} + \dfrac{n}{\sigma^2}\left(s_t^2 + \left(t_{n+1} - \bar{t}\right)^2\right)}{\dfrac{1}{\omega_\alpha^2\omega_\beta^2} + \dfrac{n\left(\bar{t}^2 + s_t^2\right)}{\omega_\alpha^2\sigma^2} + \dfrac{n}{\omega_\beta^2\sigma^2} + \dfrac{n^2 s_t^2}{\sigma^4}}. \qquad (A.17)$$

B. Unconstrained Minimization of the BMSE for the Univariate, Time-Dependent Linear Model

Theorem B.1. For the univariate, linear, time-dependent model given by (58), $\mathrm{Bmse}(\hat{y}_{n+1})$ as given by (A.17) exhibits an absolute minimum at a point $(\bar{t}, s_t^2)$ if and only if
$$\mathrm{Bmse}\left(\hat{y}_{n+1}\right) = \frac{\sigma^2\omega_\alpha^2}{\sigma^2 + n\,\omega_\alpha^2}. \qquad (B.1)$$

Proof. Recall that $\bar{t}$ and $s_t^2$ are defined by (63) and (64). To prevent ambiguous notation, let $v = s_t^2$, and write (A.17) as $\mathrm{Bmse} = \mathrm{num}/\mathrm{den}$. Consider the derivative of $\mathrm{Bmse}$ with respect to $v$:
$$\frac{\partial\,\mathrm{Bmse}}{\partial v} = \frac{\left(n/\sigma^2\right)\mathrm{den} - \left(n/\left(\omega_\alpha^2\sigma^2\right) + n^2/\sigma^4\right)\mathrm{num}}{\mathrm{den}^2}. \qquad (B.2)$$
We can represent this derivative more simply as follows:
$$\frac{\partial\,\mathrm{Bmse}}{\partial v} = \frac{a\,\mathrm{den} - b\,\mathrm{num}}{\mathrm{den}^2}, \qquad (B.3)$$
where the values of $a$ and $b$ are found explicitly from (B.2). It is easy to show that $a, b > 0$ and $a\,\mathrm{den} \leq b\,\mathrm{num}$, and consequently
$$\frac{\partial\,\mathrm{Bmse}}{\partial v} \leq 0. \qquad (B.4)$$
Therefore, for a fixed $\bar{t}$, $\mathrm{Bmse}$ either decreases or remains constant as we increase $v$. It follows that, for a fixed value of $\bar{t}$, $\mathrm{Bmse}$ is bounded below by its value in the limit as $v \to \infty$:
$$\lim_{v\to\infty}\mathrm{Bmse} = \frac{a}{b} = \frac{\sigma^2\omega_\alpha^2}{\sigma^2 + n\,\omega_\alpha^2}. \qquad (B.5)$$
Since the limit of $\mathrm{Bmse}$ does not depend on the values of $\bar{t}$ and $v$, any values of $\bar{t}$ and $v$ for which $\mathrm{Bmse}$ takes on the value given by (B.5) are an absolute minimum of $\mathrm{Bmse}$.

Theorem B.2. For the univariate, linear, time-dependent model given by (58), an absolute minimum of $\mathrm{Bmse}(\hat{y}_{n+1})$ as given by (A.17), inside the region defined by $0 \leq t_j \leq T$, can be obtained by collecting data at times such that
$$\bar{t} = t_{n+1}\left(1 + \frac{\sigma^2}{n\,\omega_\alpha^2}\right), \qquad s_t^2 = 0, \qquad (B.6)$$
if and only if
$$t_{n+1}\left(1 + \frac{\sigma^2}{n\,\omega_\alpha^2}\right) \leq T. \qquad (B.7)$$

Proof. The critical points of the unconstrained function are found by taking the derivative of $\mathrm{Bmse}$ with respect to $\bar{t}$ and $v$ and setting these derivatives to zero. The derivative with respect to $\bar{t}$ is as follows:
$$\frac{\partial\,\mathrm{Bmse}}{\partial\bar{t}} = \frac{-\left(2n/\sigma^2\right)\left(t_{n+1} - \bar{t}\right)\mathrm{den} - \left(2n\bar{t}/\left(\omega_\alpha^2\sigma^2\right)\right)\mathrm{num}}{\mathrm{den}^2}. \qquad (B.8)$$
The derivative with respect to $v$ is given by (B.2). Setting both derivatives equal to zero, we find the following critical point:
$$\left(\bar{t}, v\right) = \left(t_{n+1}\left(1 + \frac{\sigma^2}{n\,\omega_\alpha^2}\right),\ 0\right). \qquad (B.9)$$
Evaluating the BMSE at this critical point, we find that
$$\mathrm{Bmse} = \frac{\sigma^2\omega_\alpha^2}{\sigma^2 + n\,\omega_\alpha^2}, \qquad (B.10)$$
which is sufficient to show that $\mathrm{Bmse}$ exhibits an absolute minimum at this point (see Theorem B.1).
Given the constraint that $0 \leq t_j \leq T$ for each $j$, (B.9) is a feasible point only when $\bar{t} \leq T$ (with $s_t^2 = 0$, all measurements are taken at the single time $\bar{t}$). It is easy to see that $\bar{t} > 0$ in all cases. Therefore, the absolute minimum can be achieved by collecting all data at $\bar{t}$ if and only if $\bar{t} \leq T$.

C. Constrained Minimization of the BMSE for the Univariate, Time-Dependent Linear Model with Two Time Points

Theorem C.1. For the univariate, linear, time-dependent model given by (58) with $n = 2$ and measurement times $t_1, t_2 \in [0, T]$, if
$$t_{n+1}\left(1 + \frac{\sigma^2}{2\,\omega_\alpha^2}\right) > T, \qquad (C.1)$$
then $\mathrm{Bmse}(\hat{y}_{n+1})$ as given by (A.17) is minimized by collecting data at times such that $t_1 = t_2 = T$.

Proof. Substituting $n = 2$ into (A.15) we find that
$$\mathrm{Bmse}\left(t_1, t_2\right) = \frac{\dfrac{1}{\omega_\beta^2} + \dfrac{t_{n+1}^2}{\omega_\alpha^2} + \dfrac{\left(t_1 - t_{n+1}\right)^2 + \left(t_2 - t_{n+1}\right)^2}{\sigma^2}}{\dfrac{1}{\omega_\alpha^2\omega_\beta^2} + \dfrac{t_1^2 + t_2^2}{\omega_\alpha^2\sigma^2} + \dfrac{2}{\omega_\beta^2\sigma^2} + \dfrac{\left(t_1 - t_2\right)^2}{\sigma^4}}. \qquad (C.2)$$
From Theorem B.2 we know that there are no critical points inside the feasible region, and the solution must therefore exist on the boundary of the region, which is defined by the following line segments:
$$L_1{:}\ t_1 = 0, \quad L_2{:}\ t_1 = T, \quad L_3{:}\ t_2 = 0, \quad L_4{:}\ t_2 = T, \qquad (C.3)$$
with the remaining coordinate ranging over $[0, T]$ in each case. Note that $\mathrm{Bmse}$ is symmetric in $(t_1, t_2)$:
$$\mathrm{Bmse}\left(t_1, t_2\right) = \mathrm{Bmse}\left(t_2, t_1\right). \qquad (C.4)$$
Therefore, it is sufficient to consider only line segments $L_3$ and $L_4$. We first consider finding the minimum value on line segment $L_4$. Setting $t_2 = T$ and then taking the derivative of $\mathrm{Bmse}$ with respect to $t_1$, and solving for $t_1$, we find two critical points, $t_1^{(1)} > t_1^{(2)}$ (the cubic terms in the numerator of the derivative cancel, leaving a quadratic equation in $t_1$). Applying condition (C.1) to $t_1^{(1)}$, we find that $t_1^{(1)} > T$. Taking the second derivative of $\mathrm{Bmse}$ with respect to $t_1$, such that $t_2 = T$, and evaluating it at $t_1^{(1)}$ results in a positive value. Therefore $t_1^{(1)}$ represents a minimum.
It is easy to see that $t_1^{(2)}$ is always negative. Taking the second derivative of $\mathrm{Bmse}$ with respect to $t_1$, such that $t_2 = T$, and evaluating it at $t_1^{(2)}$ results in a negative value. Therefore $t_1^{(2)}$ represents a maximum.
Since the maximum occurs at a value $t_1^{(2)} < 0$, the minimum occurs at a value $t_1^{(1)} > T$, and these are the only two critical points, the BMSE must be decreasing over the line segment, and the minimum BMSE on the line segment therefore occurs at $t_1 = T$, $t_2 = T$.
We next consider finding the minimum value on the line segment $L_3$. Setting $t_2 = 0$ and then taking the derivative of $\mathrm{Bmse}$ with respect to $t_1$, and solving for $t_1$, we find two further critical points, $t_1^{(3)} > t_1^{(4)}$. Applying the bound given by (C.1), we find that $t_1^{(3)} > T$; an analogous argument shows that $t_1^{(4)}$ is always negative.
Taking the second derivative of $\mathrm{Bmse}$ with respect to $t_1$, such that $t_2 = 0$, and evaluating it at $t_1^{(3)}$ results in a positive value (the roots of the expression that would make it vanish are imaginary); therefore, $t_1^{(3)}$ is a minimum. Given that the BMSE must always be positive, and there are only two critical points on this segment, this implies that $t_1^{(4)}$ is a maximum. Since the maximum occurs at a value of $t_1 < 0$ and the minimum occurs at a value of $t_1 > T$, the minimum on the line segment must occur at the value $t_1 = T$.
In summary, the minimum value on the line segment $L_3$ occurs at $t_1 = T$, $t_2 = 0$. Applying symmetry, the BMSE at this point is the same as the BMSE at the point $(t_1, t_2) = (0, T)$, which is the maximum point for the line segment $L_4$. Therefore, the minimum BMSE does not occur on the line segment $L_3$. Again, by symmetry, this means that the minimum BMSE does not occur on the line segment $L_1$. For the line segment $L_4$, the minimum BMSE occurs at $(T, T)$. Applying symmetry, the minimum BMSE on the line segment $L_2$ also occurs at $(T, T)$. Since these two points are the same, we find that the overall minimum BMSE within the feasible region occurs at the point $t_1 = t_2 = T$.

D. Unconstrained Minimization of the BMSE for the Bivariate, Time-Dependent Linear Model

Theorem D.1. For the bivariate, linear, time-dependent model given by (70), $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ as given by (74) exhibits an absolute minimum at a point $(\bar{t}_1, \bar{t}_2, s_{t_1}^2, s_{t_2}^2)$ if and only if
$$\mathrm{Bmse}\left(\hat{y}_{1,n_1+1}\right) = \frac{\omega_{\alpha_1}^2\left(1 + n_2\gamma_2\left(1 - \rho_\alpha^2\right)\right)}{\left(1 + n_1\gamma_1\right)\left(1 + n_2\gamma_2\right) - n_1 n_2\gamma_1\gamma_2\rho_\alpha^2}, \qquad (D.1)$$
where $\gamma_k = \omega_{\alpha_k}^2/\sigma_k^2$.

Proof. Recall that $\bar{t}_1$, $\bar{t}_2$, $s_{t_1}^2$, and $s_{t_2}^2$ are defined in (77), (78), (79), and (80). To prevent ambiguous notation, let $v_1 = s_{t_1}^2$ and $v_2 = s_{t_2}^2$, and write (74) as $\mathrm{Bmse} = \mathrm{num}/\mathrm{den}$. Consider the derivative of $\mathrm{Bmse}$ with respect to $v_1$:
$$\frac{\partial\,\mathrm{Bmse}}{\partial v_1} = \frac{a_1\,\mathrm{den} - b_1\,\mathrm{num}}{\mathrm{den}^2}, \qquad (D.2)$$
where
$$a_1 = \frac{\partial\,\mathrm{num}}{\partial v_1}, \qquad b_1 = \frac{\partial\,\mathrm{den}}{\partial v_1}. \qquad (D.3)$$
It is easy to show that $a_1, b_1 > 0$ and $a_1\,\mathrm{den} \leq b_1\,\mathrm{num}$, and consequently
$$\frac{\partial\,\mathrm{Bmse}}{\partial v_1} \leq 0. \qquad (D.4)$$
Consider also the derivative of $\mathrm{Bmse}$ with respect to $v_2$:
$$\frac{\partial\,\mathrm{Bmse}}{\partial v_2} = \frac{a_2\,\mathrm{den} - b_2\,\mathrm{num}}{\mathrm{den}^2}, \qquad (D.5)$$
where
$$a_2 = \frac{\partial\,\mathrm{num}}{\partial v_2}, \qquad b_2 = \frac{\partial\,\mathrm{den}}{\partial v_2}. \qquad (D.6)$$
It can be shown that $a_2, b_2 > 0$ and $a_2\,\mathrm{den} \leq b_2\,\mathrm{num}$, and consequently
$$\frac{\partial\,\mathrm{Bmse}}{\partial v_2} \leq 0. \qquad (D.7)$$
Therefore, for fixed $\bar{t}_1$ and $\bar{t}_2$, $\mathrm{Bmse}$ either decreases or remains constant as we increase $v_1$ or $v_2$. It follows that, for fixed values of $\bar{t}_1$ and $\bar{t}_2$, $\mathrm{Bmse}$ is bounded below by its value in the limit as $v_1, v_2 \to \infty$:
$$\lim_{v_1, v_2\to\infty}\mathrm{Bmse} = \frac{\omega_{\alpha_1}^2\left(1 + n_2\gamma_2\left(1 - \rho_\alpha^2\right)\right)}{\left(1 + n_1\gamma_1\right)\left(1 + n_2\gamma_2\right) - n_1 n_2\gamma_1\gamma_2\rho_\alpha^2}. \qquad (D.8)$$
In this limit both slopes are known exactly, and the prediction BMSE reduces to that of the bivariate random intercept model (cf. (34)). Since the limit of $\mathrm{Bmse}$ does not depend on $\bar{t}_1$ or $\bar{t}_2$, any values of $\bar{t}_1$, $\bar{t}_2$, $v_1$, and $v_2$ for which $\mathrm{Bmse}$ takes on the value given by (D.8) are an absolute minimum of $\mathrm{Bmse}$.

Theorem D.2. For the bivariate, linear, time-dependent model given by (70), an absolute minimum of $\mathrm{Bmse}(\hat{y}_{1,n_1+1})$ as given by (74), inside the region defined by $0 \leq t_{kj} \leq T$, can be obtained by collecting data at times such that
$$\bar{t}_1 = t_{1,n_1+1}\left(1 + \frac{\sigma_1^2}{n_1\,\omega_*^2}\right), \quad s_{t_1}^2 = 0, \quad \bar{t}_2 = 0, \quad s_{t_2}^2 = 0, \qquad (D.9)$$
with $\omega_*^2$ as defined following (82), if and only if
$$t_{1,n_1+1}\left(1 + \frac{\sigma_1^2}{n_1\,\omega_*^2}\right) \leq T. \qquad (D.10)$$

Proof. Substituting (D.9) into (74), we find that
$$\mathrm{Bmse}\left(\hat{y}_{1,n_1+1}\right) = \frac{\omega_{\alpha_1}^2\left(1 + n_2\gamma_2\left(1 - \rho_\alpha^2\right)\right)}{\left(1 + n_1\gamma_1\right)\left(1 + n_2\gamma_2\right) - n_1 n_2\gamma_1\gamma_2\rho_\alpha^2}, \qquad (D.11)$$
which is sufficient to show that any such point is an absolute minimum of the unconstrained function (see Theorem D.1).
Given the constraint that $0 \leq t_{kj} \leq T$ for each $k$ and $j$, (D.9) is a feasible point only when $\bar{t}_1 \leq T$ (with $s_{t_1}^2 = 0$, all primary measurements are taken at the single time $\bar{t}_1$). It is easy to see that $\bar{t}_1 > 0$ in all cases. Therefore, the absolute minimum can be achieved by collecting all primary data at $\bar{t}_1$ and all secondary data at time zero if and only if $\bar{t}_1 \leq T$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to Hongbo Dong for help with some of the mathematical proofs in the paper. This research was supported by ONR Grant N00014-13-1-0302 and in part by FMCSA Award DTMC75-07-D-00006 and FAA Award DTFAAC-11-A-00003.