Abstract

The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. For models fit to four subjects with this methodology, the predictions on validation sets yielded a higher correlation between observed and predicted values than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.

1. Introduction

Wiener networks are widely used in modeling complex nonlinear systems. These networks have the ability to model a wide range of data types, such as gas concentrations [1], blood glucose concentrations [2], and pH levels [3], and their structure can yield parameters with phenomenological meaning [2]. In this work, Wiener networks are used to first convert inputs into their corresponding dynamic responses and then to pass these dynamic responses through a second-order linear regression function to obtain the fitted output response. The parameters needed to convert the inputs into dynamic responses are referred to as dynamic parameters and the parameters of the regression function as static parameters. However, estimating these parameters can be quite challenging, as the behavior of these networks can be highly nonlinear in the dynamic parameters, and the number of parameters, which can be large, increases rapidly as inputs are added. Overfitting can also be a major issue. Given that Wiener networks utilize differential equations for the conversion to dynamic responses, stiffness, which is the situation where the derivative increases or decreases rapidly while the solution to the differential equation does not [4], is another concern. This phenomenon causes an algorithm to take very small steps (i.e., progress slowly) in order to reach an optimal solution. Another issue is that the process being modeled could change over time due to degradation of equipment, an increase in production, or one of many other reasons. While this change could be very gradual, it implies that the fitted model will eventually degrade. If the process is online, then a new model will need to be found expediently to minimize downtime and to take the new conditions and inputs into account.

The basic purpose of this article is to present a new methodology for fitting Wiener networks to large input datasets which can overcome these challenges. In the process control literature this is called "process identification." By fitting subsets of the parameters iteratively, we can deal with a large number of parameters and their nonlinearity, as numerical instability in the next iteration is less likely when fitting a subset of the parameters. During optimization, parameter step size is controlled by the value of the objective function, which deals with stiffness. To avoid overfitting, we will utilize what is called "supervised learning" in the statistics literature [5]. In supervised learning, the dataset is broken up into three subsets: training, validation, and test. The model is fit to the training set with the validation set used to determine the number of iterations to use to guard against overfitting. The test set is scrutinized at the end of the optimization process to further evaluate if overfitting has occurred. While our methodology does not perform the optimization as fast as some of the other popular algorithms, utilizing parallel computing has sped up the optimization of a Wiener network using our methodology by roughly 25% in MATLAB [6].

There are other methods for fitting Wiener networks. Due to the Wiener network's nonlinear nature, these are iterative techniques. Note that a nonlinear modeling problem is one in which the model is a nonlinear function of the unknown parameters. Here the objective is to obtain a set of parameters that minimizes the sum of squared residuals (i.e., the least squares objective function). Popular techniques for this optimization objective include the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm [7]. Many mathematics/statistics software packages, such as Minitab (Minitab, Inc., State College, PA), R [8], SAS (SAS Institute Inc., Cary, NC), and MATLAB, can implement both of these algorithms. The Levenberg-Marquardt algorithm, which is given in detail in the appendix, is a compromise between the Gauss-Newton algorithm and the method of steepest descent. We have found that fitting all parameters simultaneously using either algorithm can result in overfitting. Even fitting subsets of parameters with just one algorithm has resulted in a model that performs worse than our methodology. Given that the Wiener network used here has a conditionally linear structure, since fixing the dynamic parameters yields a linear model, another potential approach is that of Barham and Drane [9]. They fit four different models to argue that alternating between using the usual least squares formula for estimating the linear parameters and a modification of the Gauss-Newton algorithm suggested by Hartley [10] for the nonlinear parameters will generally perform better than either Hartley's modified Gauss-Newton algorithm or the Levenberg-Marquardt algorithm alone. However, using least squares to fit the linear parameters tended to badly overfit the model irrespective of the dynamic parameters. A final method we reviewed in this work is the GRG2 algorithm [11], which is utilized in the Solver program in Microsoft Excel. This was used to fit the Wiener networks used to model blood glucose concentrations in [2]. It attempts to fit models using a generalized reduced gradient algorithm, which can handle constraints on parameters and is found to be fairly robust. However, for supervised learning with this implementation in Excel, the algorithm must be paused after each iteration for inspection, which becomes very time consuming if there are more than a few parameters in the model.

The remainder of this article is organized as follows. First, a detailed description of the Wiener network for multiple inputs and a single output is given to establish the problem context. The details of the methodology are then presented in the third section, and an example illustrating the algorithm is given in the fourth section. Finally, concluding remarks and some ideas for future work are given in the last section.

2. The Wiener Network

In this section, a detailed description of a Wiener network is given to establish the context of the problem. These networks have a powerful structure for modeling nonlinear dynamic systems. A block diagram with 𝑝 inputs and one output is given in Figure 1.

As shown in Figure 1, each input $\mathbf{x}_i$ is first passed through a dynamic linear block, denoted $g(\mathbf{x}_i)$, and converted into its corresponding dynamic variable $\mathbf{v}_i$. Following Rollins et al. [2], we will use the following second-order plus-lead plus-dead-time differential equation:
\[
\tau_i^2(t,X)\,\frac{d^2 v_i}{dt^2}(t) + 2\zeta_i(t,X)\,\tau_i(t,X)\,\frac{dv_i}{dt}(t) + v_i(t,X) = \tau_{ai}(t,X)\,\frac{dx_i}{dt}\bigl(t-\theta_i\bigr) + x_i\bigl(t-\theta_i\bigr), \tag{2.1}
\]
where $\tau_i$ is a time constant, $\tau_{ai}$ is a lead parameter, $\zeta_i$ is a damping coefficient, and $\theta_i$ denotes dead time. For simplicity, we will assume that the dynamic parameters are time and space invariant, that is, $\tau_i(t,X)=\tau_i$, $\tau_{ai}(t,X)=\tau_{ai}$, and $\zeta_i(t,X)=\zeta_i$, and for the rest of this section, fix $\theta_i=0$. Also, when referring to $v_i(t,X)$, we will write $v_i(t)$ henceforth. We will use $\boldsymbol{\tau}$, $\boldsymbol{\tau}_a$, and $\boldsymbol{\zeta}$ to denote all time constants, lead parameters, and damping coefficients, respectively.

To find a recursive definition for $\mathbf{v}_i$, a forward difference approximation to $(dv_i/dt)(t)$ will be used. First let $j = t/\Delta t$ and $d_i = \theta_i/\Delta t$ so that $v_i(t) = v_{ij}$ and $x_i(t-\theta_i) = x_{i,j-d_i}$. Thus,
\[
\frac{dv_{ij}}{dt} \approx \frac{v_i(t) - v_i(t-\Delta t)}{\Delta t} = \frac{v_{ij} - v_{i,j-1}}{\Delta t}, \tag{2.2}
\]
\[
\frac{d^2 v_{ij}}{dt^2} \approx \frac{\bigl(dv_{ij}/dt\bigr) - \bigl(dv_{i,j-1}/dt\bigr)}{\Delta t} \tag{2.3}
\]
\[
\approx \frac{\bigl(\bigl(v_{ij} - v_{i,j-1}\bigr)/\Delta t\bigr) - \bigl(\bigl(v_{i,j-1} - v_{i,j-2}\bigr)/\Delta t\bigr)}{\Delta t} \tag{2.4}
\]
\[
\approx \frac{v_{ij} - 2v_{i,j-1} + v_{i,j-2}}{\Delta t^2}. \tag{2.5}
\]
By substituting (2.2) and (2.5) into (2.1),
\[
v_{ij} = \frac{2\tau^2 + 2\zeta\tau\,\Delta t}{\tau^2 + 2\zeta\tau\,\Delta t + \Delta t^2}\, v_{i,j-1} - \frac{\tau^2}{\tau^2 + 2\zeta\tau\,\Delta t + \Delta t^2}\, v_{i,j-2} + \frac{(\tau_a + \Delta t)\,\Delta t}{\tau^2 + 2\zeta\tau\,\Delta t + \Delta t^2}\, x_{i,j-d_i} - \frac{\tau_a\,\Delta t}{\tau^2 + 2\zeta\tau\,\Delta t + \Delta t^2}\, x_{i,j-d_i-1}. \tag{2.6}
\]
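To make the recursion in (2.6) concrete, the following sketch shows how the dynamic response of a single input could be computed from its sampled values. This is a minimal Python/NumPy illustration under our own naming (the original work used MATLAB), not the authors' implementation.

import numpy as np

def dynamic_response(x, tau, zeta, tau_a, d, dt=5.0):
    # Approximate the dynamic variable v_i of (2.6) for one sampled input x,
    # where d = theta_i / dt is the dead time in samples and dt is the
    # sampling interval (five minutes in the glucose example).
    x = np.asarray(x, dtype=float)
    den = tau**2 + 2.0 * zeta * tau * dt + dt**2
    c1 = (2.0 * tau**2 + 2.0 * zeta * tau * dt) / den  # weight on v_{i,j-1}
    c2 = tau**2 / den                                  # weight on v_{i,j-2}
    c3 = (tau_a + dt) * dt / den                       # weight on x_{i,j-d}
    c4 = tau_a * dt / den                              # weight on x_{i,j-d-1}
    v = np.zeros_like(x)
    for j in range(2, len(x)):
        xd = x[j - d] if j - d >= 0 else 0.0           # delayed input values
        xd1 = x[j - d - 1] if j - d - 1 >= 0 else 0.0
        v[j] = c1 * v[j - 1] - c2 * v[j - 2] + c3 * xd - c4 * xd1
    return v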

Next, all of these dynamic variables are passed through a static nonlinear block, denoted $f(\mathbf{v})$ in Figure 1, in order to obtain the predicted value of the response variable at time $t$. Following Rollins et al. [2], we use a second-order regression function with linear terms, quadratic terms, and second-order interaction terms, giving
\[
y_j = a_0(t,X) + \sum_{i=1}^{p} a_i(t,X)\, v_{ij} + \sum_{i=1}^{p} b_i(t,X)\, v_{ij}^2 + \sum_{i=1}^{p-1}\sum_{k=i+1}^{p} c_{ik}(t,X)\, v_{ij} v_{kj} + \epsilon_j, \tag{2.7}
\]
where $\epsilon_j$ is a normally distributed error term with mean 0 and variance $\sigma^2$, and, for any $k \neq j$, $\epsilon_j$ and $\epsilon_k$ are independent. Again assume that the parameters are invariant with respect to time and space, for example, $a_0(t,X) = a_0$. The vector of parameters corresponding to the linear terms will be denoted by $\mathbf{a}$, the quadratic terms by $\mathbf{b}$, and the interaction terms by $\mathbf{c}$.
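Similarly, the static block in (2.7) can be illustrated for a single time index as below; this is a hedged sketch with our own names, where C is a p-by-p array whose upper triangle holds the interaction coefficients.

import numpy as np

def static_block(v, a0, a, b, C):
    # Evaluate the fitted response of (2.7) for the dynamic variables
    # v = (v_1j, ..., v_pj), given the intercept a0, linear coefficients a,
    # quadratic coefficients b, and interaction coefficients C[i, k] (i < k).
    v = np.asarray(v, dtype=float)
    p = len(v)
    y = a0 + np.dot(a, v) + np.dot(b, v**2)
    for i in range(p - 1):
        for k in range(i + 1, p):
            y += C[i, k] * v[i] * v[k]
    return y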

3. The Proposed Parameter Estimation Algorithm

In this section we describe the proposed algorithm for solving the nonlinear regression problem given in the previous section. Following Rollins et al. [2], the objective of this modeling problem is to maximize the true but unknown correlation coefficient between the measured and fitted glucose concentrations, denoted $\rho_{y,\hat{y}}$ and estimated by π‘Ÿfit. More specifically, under this objective a model is declared useful if and only if
\[
\rho_{y,\hat{y}} > 0. \tag{3.1}
\]
The meaning of this criterion is that predictions of blood glucose concentrations from the model decrease and increase with measured blood glucose concentrations beyond some degree of mere chance; that is, there is true positive correlation. Notwithstanding, the closer this value is to the upper limit of 1, the more useful the model. Therefore, to achieve this objective, one seeks to identify a model with a sufficiently large value of π‘Ÿfit. To this end, the data are separated into a set for training and a set for validation and/or testing. The training set is used to build the model, and the validation (or testing) set is used to evaluate the model against data that were not directly used by the optimization process to estimate the model parameters. However, due to the highly complex mapping of the parameters into the response space of π‘Ÿfit, the following indirect criterion is used:
\[
\text{Maximize } r_{\mathrm{fit}} \text{ by minimizing } \operatorname{SSE}(\boldsymbol{\Theta}) = \sum_{i=1}^{n}\bigl(y_i - \hat y_i\bigr)^2 \quad \text{subject to: } \tau_i > 0,\ \zeta_i > 0,\ \theta_i \ge 0\ \forall i, \tag{3.2}
\]
where $\boldsymbol{\Theta}$ is a vector representing the estimated dynamic and static parameters $\boldsymbol{\tau}, \boldsymbol{\zeta}, \boldsymbol{\tau}_a, \boldsymbol{\theta}, \mathbf{a}, \mathbf{b}, \mathbf{c}$ and $n$ is the number of observations in the training set. The objective criterion is used under the assumption that minimizing SSE is equivalent to maximizing π‘Ÿfit. While there is no formal proof for this assumption, experimental evidence supports a strong tendency for this relationship [2].

A model that is nonlinear in its parameters, such as the proposed structure, does not have the property that the sum of the residuals equals 0, as in the case of linear regression. However, under (3.2), the sum of the residuals in the training set should be small, and thus a secondary criterion on the closeness of $y_i$ and $\hat y_i$ can be used to judge accuracy. This measure of accuracy, denoted the average absolute error (AAE), is defined as
\[
\mathrm{AAE} = \frac{\sum_{j=i_{\mathrm{init}}}^{i_{\mathrm{fin}}} \bigl| y_j - \hat y_j \bigr|}{i_{\mathrm{fin}} - i_{\mathrm{init}}}, \tag{3.3}
\]
where $i_{\mathrm{init}}$ is the initial observation used for calculation of this statistic and $i_{\mathrm{fin}}$ is the final observation used. Hence accuracy is judged to increase as AAE decreases.
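A direct transcription of (3.3), assuming zero-based arrays and that both endpoints are included, could look as follows (names are ours):

import numpy as np

def aae(y, y_hat, i_init, i_fin):
    # Average absolute error (3.3) over observations i_init, ..., i_fin,
    # with the denominator written exactly as in (3.3).
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    resid = np.abs(y[i_init:i_fin + 1] - y_hat[i_init:i_fin + 1])
    return np.sum(resid) / (i_fin - i_init)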

Thus, in addition to sufficiently large π‘Ÿfit values for both the training and test/validation datasets, an acceptable model must also have a relatively small value of AAE in training. This secondary criterion is not imposed in testing/validation because (3.2) forces small residuals for the training data only. Furthermore, if a model is capable of a high π‘Ÿfit, as demonstrated in training, then high accuracy can be obtained with effective feedback correction or feedback control to reduce or eliminate bias.

To obtain a useful model, the proposed method fits a subset of the parameters at a time using an iterative approach, and the validation set is used to terminate optimization to guard against overfitting. As mentioned before, this was done because attempts to fit every parameter at once tend to overfit the training set. To determine which iteration is best, each iteration was given a score based on a linear combination of two statistics whose maximum is 1: π‘Ÿfit and the correlation π‘ŸVal between observed and predicted response values in the validation set. To establish notation, one can write the correlation between datasets $\{x_i\}_{i=1}^{K}$ and $\{t_i\}_{i=1}^{K}$ as
\[
r_{\mathrm{Val}} = \frac{\sum_{i=1}^{K}\bigl(x_i - \bar x\bigr)\bigl(t_i - \bar t\bigr)}{\sqrt{\sum_{i=1}^{K}\bigl(x_i - \bar x\bigr)^2 \sum_{i=1}^{K}\bigl(t_i - \bar t\bigr)^2}}, \qquad \bar x = \frac{1}{K}\sum_{i=1}^{K} x_i, \qquad \bar t = \frac{1}{K}\sum_{i=1}^{K} t_i, \tag{3.4}
\]
where $K$ is the total number of observations in the dataset. This statistic was used in order to ensure that the predictions in the validation set properly track changes found in the actual data.
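The correlation in (3.4) is the ordinary Pearson correlation; a small sketch in Python (names are ours) is:

import numpy as np

def corr(x, t):
    # Pearson correlation (3.4) between two equal-length sequences.
    x, t = np.asarray(x, dtype=float), np.asarray(t, dtype=float)
    xc, tc = x - x.mean(), t - t.mean()
    return np.sum(xc * tc) / np.sqrt(np.sum(xc**2) * np.sum(tc**2))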

After setting the starting values for the parameters, we then execute Algorithm 1 of our methodology. Note that we divide π‘Ÿfit by .9 and π‘ŸVal by .8 before applying the coefficients. This was done because, in practice, these values appear to be the maximum attainable values for π‘Ÿfit and π‘ŸVal in the example in the following section. These values can be changed if one has a rough guess as to what the maximum attainable values are. The score is then calculated as .1β‹…π‘Ÿfit/.9+.9β‹…π‘ŸVal/.8. Starting with π‘Ž1, we successively add 0.2 until the score decreases. We then subtract 0.2 in order to retain the maximum score. This is done for each π‘Žπ‘–. If choosing π‘Žπ‘–=0.2 yields a lower score than setting π‘Žπ‘–=0, then we successively subtract 0.2 from π‘Žπ‘– until the score decreases and add 0.2 back once a decrease is observed.

𝑠 𝑐 π‘œ π‘Ÿ 𝑒 = . 1 β‹… π‘Ÿ fi t / . 8 + . 9 β‹… π‘Ÿ V a l / . 8
for   𝑖 = 1 to 11  do
 repeat
   𝑠 𝑐 π‘œ π‘Ÿ 𝑒 π‘œ 𝑙 𝑑 = 𝑠 𝑐 π‘œ π‘Ÿ 𝑒
   π‘Ž 𝑖 = π‘Ž 𝑖 + . 2
  Update 𝑠 𝑐 π‘œ π‘Ÿ 𝑒
 untill   𝑠 𝑐 π‘œ π‘Ÿ 𝑒 < 𝑠 𝑐 π‘œ π‘Ÿ 𝑒 π‘œ 𝑙 𝑑
 if   π‘Ž 𝑖 = 0   then
  Replace addition operation with subtraction operation and repeat the loop.
 end  if
end  for
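A minimal sketch of Algorithm 1 in Python is given below. Here score_fn is assumed to recompute π‘Ÿfit and π‘ŸVal for a candidate vector of linear coefficients and return the weighted score; the step size of 0.2 follows the text, and all names are ours.

import numpy as np

def coarse_search(a, score_fn, step=0.2):
    # Algorithm 1: coordinate-wise grid search over the linear coefficients a.
    # score_fn maps a coefficient vector to .1*r_fit/.9 + .9*r_Val/.8.
    a = np.array(a, dtype=float)
    for i in range(len(a)):
        for direction in (step, -step):
            improved = False
            best = score_fn(a)
            while True:
                a[i] += direction
                new = score_fn(a)
                if new < best:
                    a[i] -= direction    # retain the value with the best score
                    break
                best, improved = new, True
            if improved:                  # only try subtracting 0.2 if adding
                break                     # it never helped for this coefficient
    return a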

We then attempt to fit a model using all parameters with the Levenberg-Marquardt and Gauss-Newton algorithms [7]. The Levenberg-Marquardt algorithm is given in the appendix, as the Gauss-Newton algorithm used here is simply the Levenberg-Marquardt algorithm with 𝛼=0. The Levenberg-Marquardt algorithm is run nine times, each time with the same starting parameters. The only difference between these trials is the value of the damping parameter 𝛼 used, which was 100, 10, 1, .01, .0001, 1e−6, 1e−8, 1e−10, or 0. Note that 𝛼 affects the step size of each iteration, and each model responds differently to each value of 𝛼 chosen. For each iteration of each trial, a score is calculated as before, and the parameter estimates corresponding to the iteration with the highest score that satisfies the parameter constraints among these trials are returned at the end of this loop.
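The damping-parameter sweep described above can be organized as in the sketch below. Here lm_iterates is a placeholder for a routine that yields the parameter estimate after each Levenberg-Marquardt iteration (see the appendix), constraints_ok checks the constraints in (3.2), and score_fn is the weighted score; none of these names come from the original work.

def best_lm_trial(theta0, lm_iterates, score_fn, constraints_ok):
    # Run the Levenberg-Marquardt algorithm from the same starting values for
    # each damping value and keep the iterate with the highest score that
    # satisfies the parameter constraints.
    alphas = [100, 10, 1, 1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 0]
    best_theta, best_score = theta0, score_fn(theta0)
    for alpha in alphas:
        for theta in lm_iterates(theta0, alpha):
            s = score_fn(theta)
            if constraints_ok(theta) and s > best_score:
                best_theta, best_score = theta, s
    return best_theta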

The next stage of this methodology takes the parameter estimates from the previous stage and proceeds to fit subsets of these parameters at a time. Since using one algorithm exclusively has generally yielded weaker results, we chose to use three different algorithms and compare their results. The first algorithm attempted by our methodology is the BFGS algorithm [7]. This is a quasi-Newton method in that it approximates the Hessian, which is used in Newton's method, with a rank-2 update that depends on the Jacobian. Note that we avoid the difficulty of inverting a matrix by using the Sherman-Morrison-Woodbury formula [12]. We also used a trust-region version of this algorithm, in that we fixed the maximum step size of the algorithm. However, it is not guaranteed that a step from this algorithm will result in a decrease in the objective function. Hence the second algorithm we use is the conjugate gradient algorithm [13]. This algorithm requires very little storage and is based on the idea of conjugate directions, although these directions lose conjugacy if the objective function is not well approximated by a quadratic. However, the conjugate gradient algorithm tends to take a large step followed by several small steps, and these large steps tend to overfit when fitting Wiener networks. On top of this, the derivatives of the objective function with respect to the dynamic parameters must be approximated and can at times be unreliable. Thus the final algorithm we use is the Nelder-Mead algorithm [12]. This algorithm first generates 𝑛+1 points equidistant from one another and from the starting values, which is called a simplex. It then uses function evaluations to move the simplex toward a minimum as well as to expand or shrink the simplex. Thus it uses more function evaluations in place of derivatives. Details on these algorithms are given in the appendix. Note that to run the Nelder-Mead algorithm, the built-in function fminsearch in MATLAB was used. The number of iterations run for the Nelder-Mead algorithm per iteration of this stage and the trust-region radius of the BFGS algorithm were chosen based on the value of π‘Ÿfit at the start of each respective algorithm.
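As one hedged illustration of fitting only a subset of the parameters while the rest are held fixed, the sketch below wraps the SSE objective of (3.2) so that a chosen index set is optimized with SciPy's Nelder-Mead routine (the original work used MATLAB's fminsearch); the function and argument names are ours.

import numpy as np
from scipy.optimize import minimize

def fit_subset(theta, idx, sse, max_iter=50):
    # Optimize theta[idx] by Nelder-Mead while holding the other entries fixed.
    # theta : current estimates of all parameters
    # idx   : indices of the subset to be fit (e.g., the time-constant block)
    # sse   : callable mapping a full parameter vector to the SSE of (3.2)
    theta = np.asarray(theta, dtype=float)

    def objective(sub):
        full = theta.copy()
        full[idx] = sub
        return sse(full)

    res = minimize(objective, theta[idx], method="Nelder-Mead",
                   options={"maxiter": max_iter})
    out = theta.copy()
    out[idx] = res.x
    return out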

As each iteration of each algorithm is determined from a subset of the parameters, once again a score is calculated, and the parameter estimates with the highest score, where π‘Ÿfit has increased over the value found from the starting parameters and every constraint is satisfied, are chosen as the new parameter estimates. There are four methods of choosing the subsets in our methodology. For the first method, we fit 𝐚 first, then 𝐛, 𝐜, 𝝉, 𝜻, and finally π‰π‘Ž. This is repeated, up to three times, if the score of the resulting parameter estimates has improved by at least 0.0001 over the score of the parameter estimates used as starting values to fit 𝐚. For the second method, the first subset of parameters to be fit is the set of static parameters that depend on the first input, that is, {π‘Ž1,𝑏1,𝑐12,𝑐13,…,𝑐1,11}. The second subset of parameters is the set of static parameters that depend on the second input, that is, {π‘Ž2,𝑏2,𝑐12,𝑐23,𝑐24,…,𝑐2,11}. Since we have eleven inputs, there will be eleven such subsets. The final three subsets of parameters to be fit are 𝝉, 𝜻, and π‰π‘Ž. Again this is repeated up to three times if a score increase of at least 0.0001 is observed each time. For the third method, we fit the same subsets of static parameters as in the second method. However, we use different subsets of the dynamic parameters. We first fit three subsets of 𝝉 in this order: {𝜏1,𝜏2,𝜏3}, {𝜏4,𝜏5,𝜏6,𝜏7}, {𝜏8,𝜏9,𝜏10,𝜏11}. Then the corresponding subsets of 𝜻 are fit in the same order, followed by these subsets of π‰π‘Ž. For the last method, the first subsets of parameters fit are {π‘Ž1,𝑏1},{π‘Ž2,𝑏2},…,{π‘Ž11,𝑏11}. Next we fit subsets of parameters corresponding to the interaction terms. First we fit the interaction parameters corresponding to the first input, {𝑐12,𝑐13,…,𝑐1,11}, followed by the second input, {𝑐12,𝑐23,𝑐24,…,𝑐2,11}, and so on. After these parameters we fit the same subsets of dynamic parameters as in the third method. Assuming the value of π‘Ÿfit has increased by 0.002 since the start of the first stage, we update the coefficients for calculating the scores and the algorithm returns to the first stage. If not, then we attempt to fit the model one parameter at a time. If, after cycling through every parameter three times, the value of π‘Ÿfit has not increased from the first stage by 0.001, then the algorithm exits. Otherwise the coefficients are updated, and the algorithm returns to the first stage. After two iterations through each of the four methods, each method may only be repeated twice instead of three times.
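To make the four subset schedules concrete, the sketch below simply enumerates the parameter groups for the eleven-input case using symbolic names; it encodes only the ordering described above and is not taken from the original code.

p = 11  # number of inputs

# Method 1: whole blocks, in the order a, b, c, tau, zeta, tau_a.
method1 = [["a"], ["b"], ["c"], ["tau"], ["zeta"], ["tau_a"]]

# Method 2: all static parameters tied to input i, then the three dynamic blocks.
static_by_input = [[f"a{i}", f"b{i}"]
                   + [f"c{min(i, k)},{max(i, k)}" for k in range(1, p + 1) if k != i]
                   for i in range(1, p + 1)]
method2 = static_by_input + [["tau"], ["zeta"], ["tau_a"]]

# Method 3: same static subsets as method 2; dynamic parameters in three groups.
groups = [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
dynamic_groups = [[f"{name}{i}" for i in g]
                  for name in ("tau", "zeta", "tau_a") for g in groups]
method3 = static_by_input + dynamic_groups

# Method 4: (a_i, b_i) pairs, then the interaction terms per input, then the
# dynamic groups of method 3.
method4 = ([[f"a{i}", f"b{i}"] for i in range(1, p + 1)]
           + [[f"c{min(i, k)},{max(i, k)}" for k in range(1, p + 1) if k != i]
              for i in range(1, p + 1)]
           + dynamic_groups)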

Finally, we discuss how to update the coefficients of the score. For the first iteration of this methodology, the score is calculated as before: .1β‹…π‘Ÿfit/.9+.9β‹…π‘ŸVal/.8. To update the coefficients, first let 𝑀=π‘Ÿfit+4π‘ŸVal. Then they are updated to be π‘Ÿfit/𝑀 and 4π‘ŸVal/𝑀, respectively. This is done to force the correlation between the predicted and observed values in the validation set to be weighted heavily. This weighting can be altered depending on how important it is to the researcher to achieve a high π‘ŸVal. The weight of π‘Ÿfit is large enough so that a large increase in π‘Ÿfit can be chosen if a small enough decrease in π‘ŸVal is observed.
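The coefficient update can be restated in a couple of lines (names are ours):

def update_weights(r_fit, r_val):
    # Return the new coefficients applied to r_fit and r_Val in subsequent
    # score evaluations, so that the validation correlation is weighted heavily.
    w = r_fit + 4.0 * r_val
    return r_fit / w, 4.0 * r_val / w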

4. An Example: Blood Glucose Concentration Prediction of Type 2 Diabetics

We now illustrate our methodology and compare it to the other algorithms mentioned in this paper. Several subjects who exhibit significant variation in their blood glucose concentrations participated in a study to determine whether their blood glucose concentrations can be adequately predicted from a Wiener network using activity variables, food consumption, and time of day. Since type 2 diabetes affects each subject differently, a model was built for each individual. Due to time constraints in meeting the submission deadline, four of the subjects are evaluated in this work.

In order to obtain blood glucose concentrations, the Medtronic MiniMed CGMS System Gold continuous glucose monitoring system (Medtronic MiniMed, Northridge, CA) was used. The SenseWear Pro3 Body Monitoring System (BodyMedia, Inc., Pittsburgh, PA) was used to measure the activity variables used in building this model. From these devices, measurements of activity and blood glucose concentrations were obtained every five minutes. Subjects were also asked to record the food that they ingested during this time onto a PDA, which used the Weightmania Pro software (Edward A. Greenwood, Inc., Cambridge, MA). Other than the necessary downtime to download the data from these devices, data were collected twenty-four hours a day for four weeks. While the SenseWear Pro3 Body Monitoring System can measure over 30 activity variables, it was decided after much trial and error to use only 7 of these variables for the Wiener network. Of the other four variables, three of them, carbohydrates, fat, and protein, are food variables that measure the amount of each consumed in grams every five minutes. The final one, time of day, allows one to capture the circadian rhythm of each individual's body [14]. It assumes values from 0, denoting midnight, to 1439, denoting 11:59 pm. A table of all inputs is given in Table 1.

Due to the amount of data available for each subject, the first week of a subject's data was used to fit an individual model for that subject, and the subsequent two weeks were used as a validation set. The starting values were set to 0 for each π‘Žπ‘–, 𝑏𝑖, and π‘π‘–π‘˜ other than π‘Ž0, which was set to the mean response of the training set. The dynamic parameters were set to estimates found from fitting a pilot model. We fit models using six different methods. The first method was the proposed methodology (PM). The second method utilized the Gauss-Newton (GN) algorithm, and a third method used the GN algorithm in a manner similar to PM. More specifically, the parameters were fit using the GN algorithm with the same subsets as in PM. This particular method is called the modified GN algorithm. The fourth method utilized the Levenberg-Marquardt (LM) algorithm, and the fifth method was a modified LM algorithm, where the modifications were made in the same manner as for the modified GN algorithm. Finally, models were fit using the Excel Solver (ES) routine. Other than the ES routine, all models were fit using MATLAB on a computer with a 2.66 GHz Intel Core 2 Quad processor and 4 GB of RAM. The comparative results are given in Tables 2 and 3. For each subject, the correlation between predicted and observed blood glucose concentrations in the validation set is at least 0.48. Here we desire a high π‘ŸVal, as we wish to track the actual blood glucose concentration closely and do not wish to miss sudden changes in blood glucose concentration, particularly if it becomes very low (<40 mg/dL), due to the health consequences. This is why we chose the coefficients for the score as stated earlier.

We have split the results into two tables. Table 2 compares the methods that fit subsets of the parameters, and Table 3 compares the proposed methodology to the generic GN and LM algorithms. Looking at Table 2, we see that the modified GN algorithm had difficulty with subject 1, as π‘Ÿfit for this subject was 0.40. This indicates that 𝐽′𝐽 is ill-conditioned for this subject at the starting values, and since this could happen for other subjects, the Gauss-Newton algorithm alone would not be a good choice for fitting a Wiener network to these subjects. The LM algorithm does not typically have such a difficulty due to its damping parameter 𝛼, and the modified LM algorithm generally outperformed the modified GN algorithm. However, it should be noted that 𝛼 was set depending on the value of π‘Ÿfit: it was set to 100 if π‘Ÿfit < 0.3, 1 if 0.3 ≤ π‘Ÿfit < 0.5, 10⁻³ if 0.5 ≤ π‘Ÿfit < 0.7, and 10⁻⁶ if π‘Ÿfit ≥ 0.7. This was done in order to alleviate ill-conditioning and to allow for more aggressive steps when ill-conditioning is no longer a problem. We see that the modified LM and GN algorithms fit much faster than the proposed method, but this is not a major problem since we are fitting these models offline. As for the GRG2 algorithm in Excel, it appears to be very competitive with the proposed method for finding a model with a large π‘Ÿfit value, but the proposed method outperforms it on every subject's validation set.

As for Table 3, we compare the proposed methodology to fitting models under supervised learning with all parameters fit simultaneously using either the GN algorithm or the LM algorithm. Of course, these other algorithms fit much faster, as there is only one set of parameters to be fit. However, we see that π‘Ÿfit and π‘ŸVal are greater for each subject when fitting a model with the proposed methodology than with either of these algorithms.

5. Concluding Remarks

This paper presents a methodology that appears to find better parameter estimates for Wiener networks than previous algorithms. This methodology uses a score based on two statistics, π‘Ÿfit and π‘ŸVal, in order to avoid overfitting. It also uses a grid search and the Levenberg-Marquardt algorithm in order to improve on the starting parameters. Subsets of the parameters are then fit using the Nelder-Mead, BFGS, and conjugate gradient algorithms in order to overcome issues such as stiffness, poorly approximated derivatives, and nonlinearity. However, we believe there are a few things that can be done to enhance the algorithm.

First, instead of being obtained by solving the differential equation exactly, the dynamic variables used in the example were approximated. If possible, one should obtain exact values for the dynamic variables, as this will reduce error in the model. In the example presented, this is not possible, since there is no closed-form solution for the differential equation used in the dynamic blocks due to the πœ•π‘₯𝑖/πœ•π‘‘ term in (2.1).

Secondly, the parameter πœƒπ‘– was fixed for each input. This was done because πœƒπ‘– must be a multiple of 5, as measurements were taken every five minutes. Since the objective function would therefore not be continuous with respect to πœƒπ‘–, one would be unable to calculate derivatives with respect to it. Revising Algorithm 1 so that 5 is added to or subtracted from the estimate of πœƒπ‘– would be one possible way of overcoming this. This could be done at the end of each stage of the algorithm, one estimate of πœƒπ‘– at a time.

Another possible improvement is the elimination of constraints in the parameter space. Here the parameters πœπ‘– and πœπ‘– must be larger than 0. One way to deal with this issue is reparameterization, which is suggested in [15]. By setting πœπ‘– = exp(πœ†π‘–) and πœπ‘– = exp(𝛾𝑖), one can optimize with respect to πœ†π‘– and 𝛾𝑖 instead of πœπ‘– and πœπ‘–. This would eliminate the need to check whether πœπ‘– > 0 or πœπ‘– > 0 at any iteration of our methodology. The only concern is that approximating the Jacobian and the gradient will become even more difficult if this reparameterization is performed.
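For instance, an unconstrained optimizer would then work with πœ†π‘– and 𝛾𝑖, and the dynamic block would recover the positive parameters internally, roughly as in this small sketch (names are ours):

import numpy as np

def to_constrained(lmbda, gamma):
    # Map unconstrained (lambda_i, gamma_i) back to tau_i > 0 and zeta_i > 0.
    return np.exp(lmbda), np.exp(gamma)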

One last thing to discuss is the importance of starting parameters. While finding starting dynamic parameter values that would yield useful models regardless of the data would make implementation easier, there may be properties of the experiment worth exploiting. It is common knowledge that carbohydrates are digested and metabolized faster than fats. Thus the amount of time that a step change in carbohydrates affects the system is less than that of fat. This "residence time" can be calculated for input 𝑖 to be 2πœπ‘–πœπ‘–. For starting parameters, one idea could be to set the starting values for 𝜁1 and 𝜁2 to be equal and choose 𝜏1 to be less than 𝜏2.

Appendix

For these algorithms, let θ denote the vector of all parameters, θ̃ denote the subset of parameters to be fit, and f(x ∣ θ) denote the model of interest; see Algorithms 2, 3, 4, and 5. Also note that checks for convergence have been left out. First we give a short legend of the notation used in this appendix:
θ̂: current estimate of the parameters,
f: the vector [f(x_1 ∣ θ̂) f(x_2 ∣ θ̂) ⋯ f(x_n ∣ θ̂)],
∇F: ∇F(x ∣ θ̂),
F(x ∣ θ̂): Σ_i (y_i − f(x_i ∣ θ̂))²,
J: the Jacobian of F(x ∣ θ̂).

Set α.
Let ν = 2.
for i = 1 to 5 do
  Calculate the Jacobian J.
  if i = 1 then
    λ = max(J′J) ⋅ α
  else
    λ = λ ⋅ max(1/3, 1 − (2η − 1)³)
  end if
  G = J′f
  h = (J′J + λI)⁻¹G
  θ̂ = θ̂ + h
  η = (F(θ̂ − h) − F(θ̂)) / (h′(λh − G))
  if η > 0 then
    ν = 2
  else
    λ = λ ⋅ ν
    ν = 2ν
    if ν > 128 then
      break
    end if
  end if
end for

Let n = number of iterations to be run and σ = 1 × 10⁻⁸.
for i = 1 to n do
  if i = 1 then
    h = −∇F
  else
    β = max(0, ∇F′(∇F − ∇F(x ∣ θ̂ − h)) / (∇F(x ∣ θ̂ − h)′ ∇F(x ∣ θ̂ − h)))
    h = −∇F + βh
  end if
  k = −∇F′h
  α = −σ ⋅ k / (∇F(x ∣ θ̂ + σh)′h − k)
  θ̂ = θ̂ + αh
end for

Choose N = number of iterations to be run based on the starting value of R², and set i = 1.
Generate a simplex around the starting parameters θ̂; denote its points θ̃_1, θ̃_2, …, θ̃_{n+1}.
while i ≤ N do
  Reorder the points of the simplex such that F(x ∣ θ̃_1) ≤ F(x ∣ θ̃_2) ≤ ⋯ ≤ F(x ∣ θ̃_{n+1}).
  θ̄ = (1/n) Σ_{k=1}^{n} θ̃_k
  θ̃* = 2θ̄ − θ̃_{n+1}
  if F(x ∣ θ̃_1) ≤ F(x ∣ θ̃*) < F(x ∣ θ̃_n) then
    θ̃_{n+1} = θ̃*
    i = i + 1; next
  else if F(x ∣ θ̃*) < F(x ∣ θ̃_1) then
    θ̃** = 3θ̄ − 2θ̃_{n+1}
    if F(x ∣ θ̃**) < F(x ∣ θ̃*) then
      θ̃_{n+1} = θ̃**
    else
      θ̃_{n+1} = θ̃*
    end if
    i = i + 1; next
  else
    if F(x ∣ θ̃_n) ≤ F(x ∣ θ̃*) < F(x ∣ θ̃_{n+1}) then
      θ̃** = (3/2)θ̄ − (1/2)θ̃_{n+1}
      if F(x ∣ θ̃**) ≤ F(x ∣ θ̃*) then
        θ̃_{n+1} = θ̃**
        i = i + 1; next
      end if
    else
      θ̃** = (1/2)θ̄ + (1/2)θ̃_{n+1}
      if F(x ∣ θ̃**) < F(x ∣ θ̃_{n+1}) then
        θ̃_{n+1} = θ̃**
        i = i + 1; next
      end if
    end if
    for k = 2, 3, …, n + 1 do
      θ̃_k = (1/2)(θ̃_1 + θ̃_k)
    end for
    i = i + 1
  end if
end while

Let n = number of iterations to be run, σ = 1 × 10⁻¹², and B⁻¹ = I.
Choose δ based on the current value of R² and the subset of parameters to be fit.
for i = 1 to n do
  if i > 1 then
    t = J′Jh + (J − J*)′f
    B⁻¹ = B⁻¹ + ((h′t + t′B⁻¹t)/(h′t)²) hh′ − (B⁻¹th′ + ht′B⁻¹)/(h′t)
  end if
  F* = J′f
  h = −B⁻¹F*
  d = √(h′h)
  if d > δ then
    h = (δ/d)h
    d = δ
  end if
  θ̂ = θ̂ + h
  ρ = (F(x ∣ θ̂ − h) − F(x ∣ θ̂)) / (−h′F* − (1/2)‖Jh‖²)
  if ρ < .25 then
    δ = δ/2
  else if ρ > .75 then
    δ = max(δ, 3d)
  end if
  J* = J
end for

Acknowledgments

The authors would like to thank Jeanne Stewart and Kaylee Kotz for their help with data collection and BodyMedia, Inc. for their funding and donation of equipment.