Journal of Probability and Statistics


Research Article | Open Access


Sara Javadi, Abbas Bahrampour, Mohammad Mehdi Saber, Behshid Garrusi, Mohammad Reza Baneshi, "Evaluation of Four Multiple Imputation Methods for Handling Missing Binary Outcome Data in the Presence of an Interaction between a Dummy and a Continuous Variable", Journal of Probability and Statistics, vol. 2021, Article ID 6668822, 14 pages, 2021. https://doi.org/10.1155/2021/6668822

Evaluation of Four Multiple Imputation Methods for Handling Missing Binary Outcome Data in the Presence of an Interaction between a Dummy and a Continuous Variable

Academic Editor: Marek T. Malinowski
Received: 15 Oct 2020
Revised: 09 Mar 2021
Accepted: 22 Apr 2021
Published: 18 May 2021

Abstract

Multiple imputation by chained equations (MICE) is the most common method for imputing missing data. In the MICE algorithm, imputation can be performed with a variety of parametric and nonparametric methods. The default setting in implementations of MICE is for imputation models to include variables as linear terms only, with no interactions, but the omission of interaction terms may lead to biased results. Using simulated and real datasets, we investigated whether recursive partitioning creates appropriate variability between imputations and produces unbiased parameter estimates with appropriate confidence intervals. We compared four multiple imputation (MI) methods on a real and a simulated dataset. The MI methods were predictive mean matching with an interaction term in the imputation model in MICE (MICE-Interaction), classification and regression trees for specifying the imputation model in MICE (MICE-CART), the implementation of random forest (RF) in MICE (MICE-RF), and the MICE-Stratified method. We first selected secondary data and devised an experimental design consisting of 40 scenarios (2 × 5 × 4), which differed by the missing mechanism (MAR and MCAR), the rate of simulated missing data (10%, 20%, 30%, 40%, and 50%), and the imputation method (MICE-Interaction, MICE-CART, MICE-RF, and MICE-Stratified). We randomly drew 700 observations with replacement 300 times, and then the missing data were created. The evaluation was based on raw bias (RB) as well as five other measures that were averaged over the repetitions. Next, in a simulation study, we generated data 1000 times with a sample size of 700 and created missing data once for each dataset. The same criteria as for the real data were used to evaluate the performance of the methods in all scenarios of the simulation study.
We conclude that, when there is an interaction effect between a dummy and a continuous predictor, substantial gains are possible by using recursive partitioning for imputation rather than parametric methods, and that the MICE-Interaction method is always more efficient and convenient for preserving interaction effects than the other methods.

1. Introduction

The need to deal adequately with missing data is a practical challenge for researchers analyzing epidemiological data. A well-known approach (and the default in most statistical packages) to the missing data problem is complete case analysis (CCA), which omits subjects with missing values from the analysis. In some cases, such analyses are inefficient, since they sacrifice information from partially observed responses, and, in the worst situations, they may result in biased inferences about the parameters of interest [1]. An alternative to CCA is MI, which creates M copies of the dataset, replacing the missing values in each copy with independent random draws from the predictive distribution of the missing values under a specific model (the imputation model). Each dataset is analyzed separately, and the corresponding results (M point and M variance estimates) are combined according to Rubin’s combination rule [2].
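As a sketch of the pooling step, the function below applies Rubin's combination rule to M point and variance estimates; the input numbers are hypothetical, not values from this study.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool M point estimates and M variance estimates with Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()            # pooled point estimate
    ubar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    t = ubar + (1 + 1 / m) * b         # total variance of the pooled estimate
    return qbar, t

# Hypothetical coefficient and variance estimates from M = 5 imputed datasets
qbar, t = pool_rubin([0.52, 0.48, 0.55, 0.50, 0.49],
                     [0.010, 0.011, 0.009, 0.010, 0.012])
```

The total variance inflates the within-imputation variance by the between-imputation spread, which is what gives MI confidence intervals their extra width relative to a single imputation.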

Following Rubin [2], missing data mechanisms are classified as missing completely at random (MCAR), when the probability of missingness is independent of both the observed and unobserved data; missing at random (MAR), when the probability of missingness is independent of the unobserved data after conditioning on the observed data; and missing not at random (MNAR), when the probability of missingness depends on unobserved data even after conditioning on the observed data [3–6].

MICE, also called fully conditional specification (FCS), is commonly used for imputing missing data. The MICE method specifies the univariate distribution of each incomplete variable conditional on all other variables and creates imputations variable by variable. The MICE algorithm is a Gibbs sampler, a Bayesian simulation approach that generates random draws from the posterior distribution, conducting the univariate imputations sequentially until convergence [7–10].

In the MICE algorithm, the set of conditional distributions may not correspond to any joint distribution, since the user specifies univariate conditional distributions and the implied joint distribution may not actually exist. Despite this theoretical drawback of the MICE method, simulation studies suggest that it performs well in practice [11, 12]. A number of software packages are available to impute missing data using MICE methods. These include “IVEware” in SAS [11], “mice” [12, 13] and “mi” [14] in R, and “mi” and “ice” in STATA [15].

In the MICE strategy, when the data include an interaction effect, the interaction can be handled manually, either by specifying appropriate imputation models or by imputing the missing values in separate subgroups of the data. The default setting in implementations of MICE is for imputation models to include variables as linear terms only, with no interactions, but the omission of important nonlinear terms may lead to biased results [16].

Motivated by these challenges, several authors have developed more flexible techniques that can easily handle missing values in the presence of interactions. Automatic Interaction Detection was one of the first implementations of recursive partitioning [17]. Recursive partitioning techniques find the split that is most predictive of the response variable by searching through all predictor variables, and they model the interaction structure in the dataset by sequentially splitting it into increasingly homogeneous subgroups. In other words, since splits are conditional on previous splits, possible interactions are detected automatically. Others have combined recursive partitioning with imputation methods [3, 18], using CART as an imputation engine in MICE; CART in MICE is available as an option in “mice” in R.

CART methods have properties that make them attractive for imputation: they are robust against outliers, can deal with multicollinearity and skewed distributions, and are flexible enough to fit interactions and nonlinear relations. Indeed, many aspects of model fitting have been automated, so there is little tuning needed by the imputer [18].

However, to our knowledge, few evaluations have been done in simulation contexts, and even fewer have been done on real data in the presence of an interaction with a mixture of binary and continuous predictors. This lack of comparisons makes it difficult, if not impossible, to assess the relative merits of each procedure.

Thus, to determine which method best imputes missing values in a binary response under the MAR and MCAR missing mechanisms when the data include a mixture of continuous and binary predictors with an interaction, we need to evaluate the performance of the four methods on a real and a simulated dataset. This evaluation can be justified for three reasons. First, most studies that evaluated and compared the performance of these four methods did not include a mixture of continuous and binary predictors with a binary outcome. Second, no study has yet examined whether MICE-Interaction and MICE-Stratified yield proper imputations in datasets with an interaction between a dummy and a continuous predictor. Third, it is not clear how much bias a given proportion of MAR or MCAR missing data produces in the results, so even if we accept that one method is superior to the others, it is desirable to know how much of the bias that method can compensate for.

In light of the above, this study was carried out to evaluate the performance of four MI methods on real and simulated data. We first performed MICE-Stratified, filling in values separately for the two subgroups defined by the value of the binary variable in each dataset. We then used three further MI methods. For MICE-Interaction, we included the interaction term as just another variable in the predictor matrix and used the default predictor matrix in the mice function. We then used CART for specifying the imputation model in MICE (MICE-CART) and the implementation of random forest (RF) in MICE (MICE-RF). We do not provide full mathematical details in this article because our focus is on the simulations.

The paper is organized as follows. Section 2 first elaborates MICE-Interaction and MICE-Stratified, then considers the two main recursive partitioning techniques, namely CART and RF, and subsequently presents the incorporation of the CART and RF methods into the MICE framework. In Section 3, we conduct a secondary empirical study to investigate the performance of the discussed methods. In Section 4, a simulation study is carried out to investigate which of the discussed methods preserve interaction effects. The results of both studies are discussed in Section 5, where some final conclusions are also given.

2. Methods

2.1. MICE Approach

Suppose that the data are presented as X = (X_mis, X_obs), where X_mis consists of the columns of X with at least one missing value, X_obs consists of the columns of X that are completely observed, X_j is the jth column of X_mis, X_j^mis denotes the missing values in the jth column, X* is the currently imputed data matrix X, and k is the number of partially observed variables. Suppose that X_mis is ordered in nondecreasing numbers of missing values per column, and define X*(−j) as the matrix X* with its jth column removed. The implementation of standard MICE is then as follows:
(1) To fill in initial values for the missing data, define a matrix X* equal to X_obs; for each j = 1, …, k, the values X_j^mis are initially filled in by random draws from the predictive distribution of X_j conditional on X*, and the imputed version of X_j is attached to X* prior to incrementing j.
(2) For j = 1, …, k, replace the missing values of X_j with random draws from the predictive distribution of X_j conditional on X*(−j).
(3) Repeat steps 1 and 2 for a number of iterations. This procedure is performed for each variable with at least one missing value, yielding one complete dataset.
(4) Repeat steps 1–3 a number of times (M), resulting in M imputed datasets that are available for analysis. Each of the M imputed datasets is analyzed separately, and the results are combined using Rubin’s rules. It is standard to use generalized linear models as the basis of the posterior predictive distribution draws in steps 1 and 2.
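The chained-equations cycle above can be sketched for continuous data as follows. This is a simplified illustration: it uses linear imputation models, and the draws add residual noise only, ignoring the parameter uncertainty that proper posterior predictive draws would include.

```python
import numpy as np

rng = np.random.default_rng(0)

def mice_once(X, n_iter=10):
    """One chained-equations pass producing a single completed dataset.

    Simplified sketch of steps 1-3: linear models, noise-only draws."""
    X = X.copy()
    miss = np.isnan(X)
    # Step 1: initial fill with random draws from each column's observed values
    for j in range(X.shape[1]):
        if miss[:, j].any():
            obs = X[~miss[:, j], j]
            X[miss[:, j], j] = rng.choice(obs, miss[:, j].sum())
    # Steps 2-3: cycle through the incomplete columns for n_iter iterations
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j],
                                       rcond=None)
            resid = X[~miss[:, j], j] - A[~miss[:, j]] @ beta
            X[miss[:, j], j] = (A[miss[:, j]] @ beta
                                + rng.normal(0, resid.std(), miss[:, j].sum()))
    return X

# Step 4 would call mice_once M times and pool the analyses with Rubin's rules.
```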

For the MICE-Interaction method, the interaction is included in the predictor matrix. In other words, the interaction term is included as just another variable in the imputation model, and the mice function is then used, since this approach performs better than or equivalently to all other methods considered (the “mice” package in R supports this) [3, 7, 8, 19].

For the MICE-Stratified method, in the presence of an interaction effect between a dummy and a continuous variable, the dataset is stratified by the dummy variable, and the MICE method is then used to impute the missing values within each stratum.
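The stratified strategy can be sketched as below; `mean_impute` is a hypothetical stand-in imputer used only to keep the example self-contained (in practice each stratum would be imputed with MICE).

```python
import numpy as np

def impute_stratified(X, dummy_col, impute_fn):
    """Impute each stratum of the dummy variable separately, then recombine.

    `impute_fn` is any single-dataset imputer applied within a stratum."""
    X = X.copy()
    for level in np.unique(X[:, dummy_col]):
        rows = X[:, dummy_col] == level
        X[rows] = impute_fn(X[rows])
    return X

def mean_impute(S):
    """Illustrative imputer: fill NaNs with the stratum's column means."""
    S = S.copy()
    means = np.nanmean(S, axis=0)
    idx = np.where(np.isnan(S))
    S[idx] = np.take(means, idx[1])
    return S
```

Because every model is fitted within a stratum, any interaction between the dummy and the other predictors is preserved by construction, at the cost of smaller samples per imputation model.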

An alternative approach is described in [20], which defines a new class of nonparametric multiple imputation methods based on the CART or RF algorithms. These two methods fall under the umbrella of “recursive partitioning,” which allows the modelling of interactions internal to the data by sequentially partitioning the dataset into homogeneous subsets. Some researchers used the tree package and showed that the CART results for recovering interactions were uniformly better than standard techniques [18]. Shah and coworkers applied random forest techniques to both continuous and categorical outcomes, producing more efficient estimates than standard procedures [21]. A similar set of routines building on the rpart [22] and randomForest [23] packages was developed by Doove and coworkers [22]. The CART and RF methods are part of the mice package.

2.2. Imputation by MICE-CART

CART is a nonparametric recursive partitioning imputation method that presents its results as a tree structure. The root node, at the top of the tree, includes all members. The algorithm then explores the data to find the variable and cut-off that best separate the subjects into two child nodes. Subgroups are made by the optimal split according to a measure of homogeneity such as the Gini index [24]. Partitioning of each child node continues until a stopping criterion is reached, e.g., a predetermined number of observations in the final subsets [25]. Tree models therefore easily reveal interactions between independent variables [26].

Suppose that the data are presented as a matrix X = (X_obs, X_mis), where X_obs contains the columns of X with observed data for all variables and X_mis is the part of X that has at least one missing datum. Similar to MICE, each variable with missing data is in turn considered the dependent variable. The following steps indicate an implementation of CART in MICE:
(1) For each variable X_j with missing data (j = 1, …, k), fill in the initial values by random draws from the observed values of X_j, and update the matrix (denoted X*).
(2) Fit the CART using each X_j as outcome and X*(−j) as predictor variables; only subjects with observed values on X_j are used in this process.
(3) For subjects with missing values on X_j, find the terminal node they end up in according to the tree fitted in step 2; one observed value on X_j is randomly selected from the subset in this node and used for imputation.
(4) Repeat steps 2 and 3 for a number of iterations. This procedure is performed for each variable with at least one missing value, yielding one complete dataset.
(5) Repeat steps 1–4 a number of times (M), resulting in M imputed datasets.
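Steps 2 and 3 for a single continuous variable might be sketched as below, using scikit-learn's decision tree as the CART engine; the helper name and the use of a regression tree are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def cart_impute_column(X_others, y, miss):
    """Impute y[miss] by drawing donors from each case's CART terminal node.

    Fit a tree on the complete cases (step 2), drop each incomplete case
    down the tree, and sample one observed donor from its leaf (step 3).
    min_samples_leaf=5 mirrors the minimum leaf size used in the paper."""
    tree = DecisionTreeRegressor(min_samples_leaf=5, random_state=0)
    tree.fit(X_others[~miss], y[~miss])
    leaves_obs = tree.apply(X_others[~miss])   # leaf id of each complete case
    leaves_mis = tree.apply(X_others[miss])    # leaf id of each incomplete case
    donors = y[~miss]
    y = y.copy()
    y[miss] = [rng.choice(donors[leaves_obs == leaf]) for leaf in leaves_mis]
    return y
```

Because imputations are drawn from observed donors in the same leaf, imputed values automatically respect whatever splits (and hence interactions) the tree has found.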

To construct the tree, a minimum leaf size of 5 was used, with a deviance threshold of 0.0001. Additionally, the response variable (Y) was included as a predictor in the imputation models for each incomplete variable [3, 18, 20].

2.3. Imputation by MICE-RF

The algorithm for RF imputation is a modification of the CART algorithm discussed above. The first two steps are replaced by the construction of k bootstrapped datasets, k being the number of trees in the forest, and the fitting of k tree models. Optionally, each tree can be fitted using the full bootstrapped dataset or a random selection of the input variables. To avoid the reduced variability that would result from imputing based on an averaged tree, possibly due to the higher stability of such a tree relative to the individual trees, the imputed value is randomly selected from the union of the k donor pools. For more details on the algorithm, see [22].
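A sketch of the donor-pool union, under the same illustrative assumptions as the CART sketch: k trees are fitted to bootstrap samples of the complete cases, and each missing case draws one donor from its pooled terminal nodes.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def rf_impute_column(X_others, y, miss, k=10):
    """Impute y[miss] from the union of k trees' donor pools."""
    donors = [[] for _ in range(int(miss.sum()))]
    X_obs, y_obs = X_others[~miss], y[~miss]
    n = len(y_obs)
    for t in range(k):
        boot = rng.integers(0, n, n)                 # bootstrap the complete cases
        tree = DecisionTreeRegressor(min_samples_leaf=5, random_state=t)
        tree.fit(X_obs[boot], y_obs[boot])
        leaves_boot = tree.apply(X_obs[boot])
        leaves_mis = tree.apply(X_others[miss])
        for i, leaf in enumerate(leaves_mis):
            # collect this tree's leaf donors for case i
            donors[i].extend(y_obs[boot][leaves_boot == leaf])
    y = y.copy()
    y[miss] = [rng.choice(d) for d in donors]        # one draw from the pooled donors
    return y
```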

3. Empirical Study

We used empirical data from a population-based study collected in southeastern Iran [27]. The data were initially collected to address the main variables that encourage participants to change their bodies using a variety of methods such as diet, exercise, or drugs. A multistage sampling design was adopted in which the area was stratified into 10 blocks. From each block, approximately 120 households were selected: the first household was selected at random, and the remaining households were selected following a systematic sampling approach. In each household, only one person, aged 14–55, was interviewed.

3.1. Dependent and Independent Variables

The outcome variable of interest was whether participants had tried to change their body using a variety of methods such as diet, surgery, or drugs. The independent variables were BMI (body mass index), the body-esteem scale (BE), the perceived sociocultural pressure scale (PSPS), the physical appearance comparison scale (PACS), and Gender. These variables were measured with standard, validated questionnaires, and our data did not involve any missing values. Logistic regression analysis showed that BMI, BE, PACS4, PSPS, and Gender were independent variables that could influence eating disorders. More details are provided in [27].

In this study, we proposed the following complete-data model (1), which contains all five main variables and an interaction term between a dummy and a continuous variable, to guide the analysis of the datasets:

logit P(Y = 1) = β0 + β1·BMI + β2·PSPS + β3·BE + β4·PACS4 + β5·Gender + β6·(BMI × Gender),  (1)

where Y is the binary outcome variable and BMI × Gender is the interaction between BMI and Gender.

3.2. Simulation Study for Real Data (Generation of Missing Data)

In this section, the simulation was restricted to generating missing data in variable Y. For the experimental data, the following steps were repeated 1000 times:
Step 1. We randomly drew 700 observations, 300 times, with replacement from each of the datasets. Then 10% to 50% univariate missing data were created in the outcome variable via the MCAR and MAR mechanisms. MAR data were generated in variable Y in such a way that the probability of becoming missing depended on observed covariates.
Step 2. Applying the MI methods: the four MI methods considered were MICE-Interaction, MICE-CART, MICE-RF, and MICE-Stratified. The independent variables used in all methods were Gender, BMI, BE, PACS4, PSPS, and the interaction between Gender and BMI. The number of imputations was set to 5 [3, 28].
Step 3. Recalculation of the coefficients and other indexes [5] from the completed datasets, intended to represent how well MI was able to reduce the effect of the missing data on the estimates.
Step 4. Calculation of the measures that inform us about the statistical validity of a procedure: raw bias (RB), percent bias (PB), coverage rate (CR), model-based and empirical SE, and the estimated proportion of the variance attributable to the missing data (λ). The raw bias, the average difference between the true value of the parameter estimated from the real data and the value of the estimate after imputation, should be close to 0. Bias can also be expressed as percent bias (PB); for acceptable performance, we use an upper limit for PB of 5%. In more detail, the grades of PB are defined as negligible (0%–5%), minimal (5%–10%), moderate (10%–20%), heavy (20%–30%), and severe (>30%) [29, 30]. The coverage is the percentage of cases where the estimand, the true value of the parameter being estimated [20], is located within the 95% confidence interval around the estimate; it should be 95% or higher.
The width of the 95% confidence interval should be as small as possible (as long as the coverage does not fall below 95%) and is an indicator of statistical efficiency. It is important, however, to evaluate the coverage together with other measures, because high variances can lead to higher coverage rates; we regard the performance of an interval procedure as poor if its coverage drops below 90% [31]. The model-based SE is the mean of the SEs estimated across simulations, and the empirical SE is the SD of the estimates across simulations; the two should be similar and should match if Rubin’s variance estimate under a specific MI model is unbiased. Lastly, λ is the estimated proportion of the variation attributable to the missing data and is an indicator of the severity of the missing-data problem (the computation of RB, PB, and λ is based on equations (2)–(4)). The mice package of R was used to calculate these measurements.
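The per-scenario measures can be sketched as below; the inputs are the repetition-level coefficient estimates and standard errors for one parameter, and 1.96 is the normal-approximation critical value (PB is undefined when the true value is 0).

```python
import numpy as np

def evaluate(estimates, ses, true_value):
    """RB, PB, coverage rate, model-based SE, and empirical SE across repetitions."""
    estimates = np.asarray(estimates, dtype=float)
    ses = np.asarray(ses, dtype=float)
    rb = estimates.mean() - true_value                      # raw bias
    pb = 100 * abs(rb / true_value)                         # percent bias (true_value != 0)
    lo = estimates - 1.96 * ses
    hi = estimates + 1.96 * ses
    cr = np.mean((lo <= true_value) & (true_value <= hi))   # coverage rate
    model_se = ses.mean()                                   # model-based SE
    emp_se = estimates.std(ddof=1)                          # empirical SE
    return rb, pb, cr, model_se, emp_se
```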

A scientific estimand Q is a quantity of scientific interest that we could calculate if we observed the entire population. T is the total variance of (Q̂ − Q), and hence of Q̂ if Q̂ is unbiased. B is the extra variance caused by the fact that there are missing values in the sample, and B/M is the extra simulation variance caused by the fact that the number of imputations M is finite [3].
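Under these definitions, λ can be computed from the M point estimates and within-imputation variances as λ = (B + B/M)/T, with T = Ū + B + B/M:

```python
import numpy as np

def lambda_hat(estimates, variances):
    """Estimated proportion of total variance attributable to the missing data."""
    m = len(estimates)
    ubar = np.mean(variances)        # within-imputation variance, U-bar
    b = np.var(estimates, ddof=1)    # between-imputation variance, B
    t = ubar + b + b / m             # total variance, T
    return (b + b / m) / t
```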

4. Simulation Study

This simulation included the generation of artificial datasets and then the creation of missing data in the binary outcome variable. The following steps were performed to examine the imputation methods.
Step 1. We generated a finite population of size N = 700 for the binary outcome variable Y. The artificial data include one binary predictor variable and four continuous predictor variables, with the binary variable uncorrelated with the other variables. The number of continuous variables was kept higher than the number of categorical variables because the simulation was designed to resemble our real data. The binary variable was randomly drawn from a binomial distribution, with a Bernoulli(0.6) marginal distribution. The four continuous predictors were randomly drawn from a multivariate normal distribution, where the first two predictor variables had a pairwise correlation of r = 0.5 and the last two predictor variables were also pairwise correlated. The binary response was modeled using a GLM depending on the binary and continuous predictors. Y was dichotomized by thresholding the linear predictor so that 0.6 is the mean value of Y, after defining and standardizing the logit.

We used a simulation design that combined the sampling and missing data mechanisms. To achieve this, we first generated 1000 datasets with the described characteristics and then created missing values once for each complete dataset. Steps 2 and 3 are the same as those described in the real-data simulation section. The scenarios considered comprise two missing mechanisms (MAR and MCAR), five missing proportions (10%, 20%, 30%, 40%, and 50%), and four imputation methods (MICE-Interaction, MICE-CART, MICE-RF, and MICE-Stratified), leading to a combination of 2 × 5 × 4 = 40 scenarios. In total, 1000 repetitions were performed, and the average measures were calculated to compare the scenarios.
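A data-generating sketch along these lines is shown below; the regression coefficients (all 0.5) and the correlation of the second pair of continuous predictors are illustrative assumptions, since the paper fixes r = 0.5 only for the first pair, and only the MCAR mechanism is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=700):
    """Generate one artificial dataset: binary x5, four correlated normals,
    and a binary outcome from a logistic model with an x1*x5 interaction.

    Coefficients (0.5) and the (x3, x4) correlation are assumptions."""
    x5 = rng.binomial(1, 0.6, n)                      # binary predictor, Bernoulli(0.6)
    cov = np.array([[1.0, 0.5, 0.0, 0.0],
                    [0.5, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.5],
                    [0.0, 0.0, 0.5, 1.0]])
    x1, x2, x3, x4 = rng.multivariate_normal(np.zeros(4), cov, n).T
    lin = 0.5 * (x1 + x2 + x3 + x4 + x5 + x1 * x5)    # linear predictor with interaction
    p = 1 / (1 + np.exp(-lin))                        # logistic (GLM) response model
    y = rng.binomial(1, p).astype(float)
    return np.column_stack([y, x1, x2, x3, x4, x5])

def make_mcar(data, col=0, rate=0.3):
    """MCAR: each outcome value goes missing with a fixed probability."""
    out = data.copy()
    out[rng.random(len(out)) < rate, col] = np.nan
    return out
```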

5. Results

5.1. Results for Real Data

For the real dataset, we present only the results with respect to the interaction effects, in the form of plots. The results of the imputation methods under the MCAR and MAR missing mechanisms are shown in Figures 1–6, separated by the different measurements.

5.1.1. Raw Bias (RB)

All the imputation methods used to estimate the interaction were almost unbiased except the MICE-Stratified method; however, the bias values of the MICE-Stratified method were still lower than 0.1 at all the missing percentages. The MICE-Interaction and MICE-CART methods were less biased than the other two methods. We found that the bias values become higher with increasing missing percentages (Figure 1).

5.1.2. Percent Bias (PB)

The MICE-Interaction and MICE-CART methods were less biased than the other two methods in almost all scenarios. The MICE-Stratified method produced the highest PB at all missing percentages (Figure 2).

5.1.3. Coverage Rate (CR)

Among the imputation methods, the MICE-RF method had the highest coverage rates at all missing rates and under both mechanisms, while the coverage rate of the MICE-Stratified method was very low (Figure 3).

5.1.4. Model and Empirical SE

For all the MI approaches, the empirical and model-based standard errors were almost identical for the interaction effect (Figures 4 and 5).

5.1.5. The Proportion of Variation Attributable to the Missing Data

In general, λ values for all methods were less than 0.5. The MICE-Stratified and MICE-CART methods had lower λ values for the interaction than the others (for instance, MICE-Stratified led to the lowest λ under the MCAR missing mechanism at all missing percentages), which shows that the MICE-RF and MICE-Interaction methods indicated greater uncertainty than the other methods in estimating the interaction effect (Figure 6).

5.2. Results for Artificial Datasets

Tables 1–6 show RB, PB, CR, model-based SE, empirical SE, and λ for the imputation methods under the MCAR and MAR missing mechanisms; the tables are separated by the different measurements.


Table 1: Raw bias (RB) under the MAR and MCAR mechanisms. Methods: MCR = MICE-CART, MI = MICE-Interaction, MRF = MICE-RF, MS = MICE-Stratified.

Variable                  Method   MAR                                          MCAR
                                   10%     20%     30%     40%     50%          10%     20%     30%     40%     50%

X1 (BMIreal)              MCR      0.024   0.050   0.082   0.119   0.145        0.028   0.054   0.083   0.112   0.157
                          MI       0.001   0.006   0.012   0.016   0.020        0.002   0.008   0.012   0.000   0.020
                          MRF     −0.013  −0.024  −0.028  −0.038  −0.052       −0.009  −0.017  −0.026  −0.040  −0.040
                          MS      −0.071  −0.123  −0.177  −0.228  −0.278       −0.069  −0.129  −0.178  −0.236  −0.276
X2 (Betotal)              MCR     −0.010  −0.021  −0.034  −0.050  −0.058       −0.011  −0.023  −0.032  −0.044  −0.066
                          MI       0.001   0.001   0.004   0.000   0.009        0.000   0.000   0.004   0.008   0.003
                          MRF     −0.034  −0.064  −0.092  −0.124  −0.142       −0.031  −0.062  −0.087  −0.112  −0.139
                          MS      −0.087  −0.156  −0.213  −0.266  −0.308       −0.086  −0.157  −0.218  −0.267  −0.314
X3 (PACS4)                MCR     −0.023  −0.036  −0.052  −0.073  −0.086       −0.010  −0.016  −0.030  −0.041  −0.053
                          MI      −0.003   0.004   0.006   0.008   0.011        0.001   0.005   0.005   0.008   0.008
                          MRF     −0.054  −0.091  −0.129  −0.167  −0.196       −0.034  −0.066  −0.098  −0.123  −0.151
                          MS      −0.118  −0.184  −0.236  −0.278  −0.311       −0.086  −0.156  −0.217  −0.264  −0.312
X4 (PSPS)                 MCR     −0.020  −0.036  −0.055  −0.071  −0.095       −0.008  −0.022  −0.034  −0.046  −0.063
                          MI       0.000  −0.001   0.002  −0.001   0.000        0.003   0.002   0.002   0.005   0.004
                          MRF     −0.052  −0.094  −0.129  −0.167  −0.200       −0.032  −0.066  −0.097  −0.126  −0.156
                          MS      −0.113  −0.187  −0.237  −0.280  −0.314       −0.087  −0.158  −0.218  −0.271  −0.315
X5 (Gender)               MCR      0.000  −0.005  −0.022  −0.033  −0.035       −0.012  −0.025  −0.040  −0.062  −0.068
                          MI       0.004   0.007   0.003   0.003   0.018       −0.001   0.004  −0.006  −0.002   0.003
                          MRF     −0.015  −0.027  −0.054  −0.071  −0.080       −0.024  −0.043  −0.068  −0.086  −0.102
                          MS      −0.043  −0.070  −0.097  −0.121  −0.132       −0.051  −0.084  −0.115  −0.141  −0.162
X1∗X5 (BMIreal ∗ Gender)  MCR     −0.076  −0.144  −0.228  −0.304  −0.390       −0.072  −0.136  −0.225  −0.300  −0.401
                          MI      −0.002   0.000  −0.010   0.006  −0.003        0.002   0.001   0.000   0.003  −0.004
                          MRF     −0.128  −0.233  −0.345  −0.431  −0.517       −0.119  −0.231  −0.327  −0.417  −0.514
                          MS      −0.236  −0.405  −0.533  −0.628  −0.716       −0.231  −0.399  −0.534  −0.631  −0.722


Table 2: Percent bias (PB) under the MAR and MCAR mechanisms.

Variable                  Method   MAR                                          MCAR
                                   10%     20%     30%     40%     50%          10%     20%     30%     40%     50%

X1 (BMIreal)              MCR      0.048   0.101   0.165   0.239   0.291        0.056   0.109   0.167   0.226   0.315
                          MI       0.002   0.012   0.025   0.033   0.040        0.005   0.016   0.024   0.000   0.039
                          MRF     −0.025  −0.048  −0.056  −0.077  −0.103       −0.018  −0.034  −0.052  −0.079  −0.079
                          MS      −0.143  −0.246  −0.355  −0.455  −0.556       −0.138  −0.259  −0.355  −0.471  −0.552
X2 (Betotal)              MCR     −0.020  −0.043  −0.068  −0.101  −0.117       −0.021  −0.047  −0.064  −0.089  −0.132
                          MI       0.002   0.001   0.008   0.000   0.016        0.001   0.000   0.009   0.016   0.007
                          MRF     −0.068  −0.129  −0.185  −0.248  −0.286       −0.063  −0.124  −0.175  −0.224  −0.279
                          MS      −0.176  −0.314  −0.428  −0.533  −0.620       −0.173  −0.316  −0.437  −0.535  −0.631
X3 (PACS4)                MCR     −0.047  −0.072  −0.103  −0.146  −0.172       −0.020  −0.032  −0.061  −0.081  −0.106
                          MI      −0.006   0.008   0.012   0.016   0.022        0.001   0.009   0.010   0.016   0.016
                          MRF     −0.109  −0.181  −0.258  −0.334  −0.393       −0.069  −0.132  −0.196  −0.247  −0.302
                          MS      −0.235  −0.368  −0.472  −0.556  −0.622       −0.172  −0.312  −0.433  −0.527  −0.623
X4 (PSPS)                 MCR     −0.041  −0.072  −0.111  −0.144  −0.190       −0.017  −0.044  −0.068  −0.093  −0.126
                          MI       0.000  −0.003   0.005  −0.001   0.000        0.005   0.004   0.003   0.009   0.010
                          MRF     −0.104  −0.188  −0.259  −0.335  −0.401       −0.065  −0.134  −0.196  −0.254  −0.312
                          MS      −0.227  −0.375  −0.477  −0.563  −0.631       −0.175  −0.318  −0.438  −0.545  −0.633
X5 (Gender)               MCR      0.001  −0.021  −0.088  −0.129  −0.137       −0.047  −0.099  −0.161  −0.247  −0.279
                          MI       0.013   0.027   0.008   0.012   0.068       −0.005   0.014  −0.027  −0.010   0.008
                          MRF     −0.060  −0.109  −0.214  −0.283  −0.320       −0.093  −0.172  −0.273  −0.341  −0.408
                          MS      −0.172  −0.279  −0.384  −0.479  −0.522       −0.202  −0.338  −0.458  −0.563  −0.646
X1∗X5 (BMIreal ∗ Gender)  MCR     −0.076  −0.144  −0.228  −0.304  −0.391       −0.072  −0.136  −0.225  −0.301  −0.401
                          MI      −0.002   0.000  −0.010   0.006  −0.002        0.002   0.001  −0.001   0.003  −0.003
                          MRF     −0.128  −0.233  −0.345  −0.431  −0.518       −0.119  −0.231  −0.328  −0.417  −0.514
                          MS      −0.236  −0.406  −0.533  −0.629  −0.716       −0.231  −0.399  −0.534  −0.631  −0.722


Table 3: Coverage rate (CR) under the MAR and MCAR mechanisms.

Variable                  Method   MAR                                          MCAR
                                   10%     20%     30%     40%     50%          10%     20%     30%     40%     50%

X1 (BMIreal)              MCR      1.000   1.000   0.981   0.964   0.885        1.000   1.000   0.982   0.955   0.924
                          MI       1.000   1.000   0.998   0.997   0.984        1.000   1.000   0.998   0.992   0.985
                          MRF      1.000   1.000   1.000   0.998   0.986        1.000   1.000   1.000   0.998   0.994
                          MS       0.996   0.967   0.826   0.702   0.556        0.997   0.953   0.850   0.674   0.563
X2 (Betotal)              MCR      1.000   0.999   0.982   0.944   0.906        1.000   0.997   0.987   0.955   0.901
                          MI       1.000   0.999   0.998   0.994   0.983        1.000   1.000   0.998   0.995   0.986
                          MRF      1.000   0.999   0.987   0.942   0.903        1.000   1.000   0.991   0.963   0.875
                          MS       0.983   0.740   0.415   0.205   0.092        0.973   0.727   0.402   0.208   0.078
X3 (PACS4)                MCR      1.000   0.996   0.965   0.893   0.834        1.000   0.997   0.987   0.961   0.907
                          MI       1.000   1.000   0.998   0.991   0.987        1.000   0.999   0.999   0.998   0.983
                          MRF      1.000   0.980   0.919   0.798   0.678        1.000   0.997   0.964   0.926   0.850
                          MS       0.896   0.500   0.210   0.093   0.049        0.973   0.666   0.301   0.140   0.046
X4 (PSPS)                 MCR      1.000   0.994   0.955   0.907   0.824        1.000   1.000   0.983   0.946   0.899
                          MI       1.000   1.000   0.997   0.995   0.985        1.000   1.000   1.000   0.998   0.989
                          MRF      1.000   0.980   0.920   0.789   0.676        1.000   0.999   0.970   0.904   0.830
                          MS       0.893   0.489   0.205   0.088   0.042        0.978   0.648   0.295   0.110   0.047
X5 (Gender)               MCR      1.000   0.999   0.995   0.981   0.959        1.000   0.999   0.990   0.979   0.957
                          MI       1.000   1.000   0.998   0.997   0.989        1.000   1.000   0.999   0.996   0.988
                          MRF      1.000   1.000   0.998   0.999   0.994        1.000   1.000   0.999   0.997   0.992
                          MS       1.000   0.991   0.974   0.951   0.962        0.999   0.991   0.974   0.958   0.924
X1∗X5 (BMIreal ∗ Gender)  MCR      1.000   0.997   0.929   0.822   0.712        1.000   0.996   0.946   0.834   0.677
                          MI       1.000   1.000   0.999   0.998   0.990        1.000   0.999   0.999   0.987   0.983
                          MRF      1.000   0.988   0.867   0.714   0.558        1.000   0.984   0.908   0.734   0.545
                          MS       0.910   0.455   0.173   0.072   0.015        0.916   0.469   0.166   0.037   0.014


Table 4: Model-based SE under the MAR and MCAR mechanisms.

Variable                  Method   MAR                                          MCAR
                                   10%     20%     30%     40%     50%          10%     20%     30%     40%     50%

X1 (BMIreal)              MCR      0.161   0.167   0.174   0.183   0.190        0.161   0.169   0.174   0.182   0.190
                          MI       0.164   0.176   0.190   0.209   0.233        0.165   0.177   0.191   0.208   0.230
                          MRF      0.163   0.171   0.178   0.185   0.189        0.164   0.171   0.178   0.184   0.192
                          MS       0.149   0.146   0.146   0.146   0.149        0.149   0.146   0.146   0.146   0.148
X2 (Betotal)              MCR      0.114   0.118   0.122   0.124   0.128        0.115   0.119   0.122   0.125   0.128
                          MI       0.117   0.125   0.135   0.147   0.163        0.118   0.126   0.136   0.148   0.163
                          MRF      0.116   0.120   0.125   0.129   0.133        0.117   0.121   0.125   0.128   0.131
                          MS       0.105   0.102   0.101   0.101   0.102        0.105   0.102   0.101   0.101   0.102
X3 (PACS4)                MCR      0.105   0.109   0.112   0.113   0.117        0.105   0.109   0.112   0.114   0.118
                          MI       0.108   0.118   0.126   0.140   0.154        0.108   0.115   0.125   0.136   0.147
                          MRF      0.107   0.111   0.114   0.117   0.119        0.106   0.111   0.114   0.118   0.120
                          MS       0.095   0.093   0.092   0.092   0.093        0.096   0.093   0.092   0.092   0.093
X4 (PSPS)                 MCR      0.105   0.109   0.112   0.114   0.116        0.105   0.108   0.111   0.114   0.117
                          MI       0.109   0.117   0.127   0.140   0.154        0.108   0.115   0.124   0.135   0.149
                          MRF      0.107   0.111   0.115   0.117   0.119        0.106   0.110   0.113   0.117   0.120
                          MS       0.095   0.093   0.092   0.092   0.093        0.096   0.093   0.092   0.092   0.093
X5 (Gender)               MCR      0.192   0.198   0.204   0.210   0.215        0.192   0.199   0.204   0.210   0.215
                          MI       0.197   0.210   0.228   0.245   0.270        0.198   0.212   0.228   0.249   0.274
                          MRF      0.194   0.202   0.208   0.216   0.219        0.195   0.204   0.209   0.217   0.222
                          MS       0.178   0.175   0.174   0.175   0.178        0.179   0.175   0.174   0.176   0.178
X1∗X5 (BMIreal ∗ Gender)  MCR      0.238   0.246   0.254   0.264   0.270        0.239   0.249   0.255   0.261   0.268
                          MI       0.243   0.261   0.281   0.307   0.339        0.243   0.261   0.280   0.305   0.342
                          MRF      0.241   0.248   0.253   0.259   0.260        0.241   0.249   0.254   0.258   0.263
                          MS       0.209   0.199   0.192   0.189   0.189        0.210   0.199   0.193   0.189   0.188


Table 5: Empirical SE under the MAR and MCAR mechanisms.

Variable                  Method   MAR                                          MCAR
                                   10%     20%     30%     40%     50%          10%     20%     30%     40%     50%

X1 (BMIreal)              MCR      0.069   0.102   0.135   0.160   0.207        0.072   0.102   0.127   0.155   0.181
                          MI       0.063   0.089   0.122   0.141   0.187        0.065   0.093   0.117   0.149   0.172
                          MRF      0.060   0.082   0.101   0.110   0.124        0.064   0.083   0.095   0.111   0.122
                          MS       0.079   0.094   0.109   0.110   0.113        0.078   0.094   0.100   0.107   0.110
X2 (Betotal)              MCR      0.055   0.077   0.100   0.120   0.146        0.054   0.073   0.093   0.114   0.136
                          MI       0.050   0.067   0.088   0.103   0.130        0.050   0.068   0.083   0.105   0.132
                          MRF      0.050   0.062   0.072   0.080   0.093        0.049   0.061   0.071   0.082   0.092
                          MS       0.058   0.068   0.073   0.078   0.079        0.058   0.068   0.072   0.078   0.076
X3 (PACS4)                MCR      0.055   0.074   0.093   0.116   0.140        0.053   0.071   0.088   0.106   0.127
                          MI       0.051   0.066   0.083   0.105   0.122        0.048   0.067   0.079   0.095   0.121
                          MRF      0.051   0.060   0.069   0.079   0.087        0.047   0.060   0.067   0.074   0.083
                          MS       0.059   0.067   0.069   0.074   0.070        0.056   0.063   0.069   0.071   0.069
X4 (PSPS)                 MCR      0.055   0.077   0.094   0.113   0.141        0.051   0.070   0.087   0.110   0.126
                          MI       0.048   0.068   0.082   0.102   0.122        0.046   0.064   0.079   0.098   0.118
                          MRF      0.049   0.061   0.069   0.079   0.087        0.045   0.058   0.066   0.077   0.081
                          MS       0.059   0.066   0.067   0.071   0.069        0.054   0.065   0.069   0.071   0.069
X5 (Gender)               MCR      0.077   0.113   0.145   0.172   0.194        0.083   0.114   0.152   0.172   0.205
                          MI       0.072   0.105   0.136   0.168   0.208        0.073   0.108   0.143   0.171   0.217
                          MRF      0.070   0.095   0.111   0.122   0.135        0.071   0.094   0.118   0.120   0.135
                          MS       0.089   0.111   0.125   0.134   0.126        0.091   0.114   0.122   0.121   0.134
X1∗X5 (BMIreal ∗ Gender)  MCR      0.096   0.139   0.182   0.216   0.248        0.102   0.143   0.175   0.217   0.242
                          MI       0.086   0.126   0.172   0.214   0.265        0.089   0.131   0.173   0.219   0.258
                          MRF      0.085   0.115   0.132   0.143   0.147        0.088   0.116   0.131   0.143   0.143
                          MS       0.127   0.144   0.156   0.151   0.149        0.123   0.142   0.148   0.142   0.143


                              --------------- MAR ---------------     --------------- MCAR --------------
Variable            Method    10%     20%     30%     40%     50%     10%     20%     30%     40%     50%
BMIreal             MCR       0.069   0.121   0.178   0.232   0.267   0.067   0.132   0.171   0.222   0.264
                    MI        0.106   0.206   0.299   0.398   0.489   0.109   0.213   0.308   0.395   0.485
                    MRF       0.116   0.206   0.270   0.330   0.364   0.119   0.199   0.265   0.317   0.373
                    MS        0.010   0.038   0.080   0.123   0.173   0.010   0.038   0.076   0.121   0.171
Betotal             MCR       0.076   0.139   0.192   0.233   0.272   0.078   0.143   0.196   0.232   0.271
                    MI        0.111   0.205   0.296   0.394   0.486   0.113   0.213   0.309   0.394   0.484
                    MRF       0.125   0.213   0.290   0.345   0.392   0.133   0.217   0.279   0.332   0.369
                    MS        0.012   0.040   0.079   0.125   0.173   0.011   0.041   0.079   0.121   0.169
PACS4               MCR       0.093   0.156   0.205   0.233   0.279   0.080   0.143   0.200   0.230   0.279
                    MI        0.127   0.243   0.321   0.426   0.510   0.112   0.210   0.308   0.401   0.472
                    MRF       0.149   0.238   0.299   0.351   0.382   0.128   0.219   0.284   0.341   0.381
                    MS        0.014   0.046   0.084   0.123   0.175   0.012   0.038   0.079   0.129   0.172
PSPS                MCR       0.091   0.157   0.209   0.243   0.273   0.080   0.143   0.191   0.231   0.271
                    MI        0.130   0.239   0.329   0.429   0.508   0.113   0.216   0.307   0.395   0.486
                    MRF       0.152   0.236   0.304   0.351   0.384   0.130   0.218   0.274   0.327   0.380
                    MS        0.014   0.046   0.084   0.129   0.174   0.012   0.041   0.079   0.124   0.173
Gender              MCR       0.072   0.129   0.180   0.228   0.264   0.072   0.132   0.175   0.223   0.258
                    MI        0.105   0.201   0.304   0.384   0.472   0.112   0.217   0.306   0.400   0.482
                    MRF       0.116   0.199   0.261   0.323   0.349   0.114   0.208   0.259   0.319   0.358
                    MS        0.010   0.038   0.076   0.119   0.171   0.011   0.038   0.076   0.124   0.170
BMIreal ∗ Gender    MCR       0.095   0.159   0.219   0.274   0.309   0.095   0.171   0.220   0.263   0.301
                    MI        0.111   0.213   0.307   0.398   0.483   0.107   0.214   0.300   0.391   0.491
                    MRF       0.160   0.250   0.311   0.368   0.404   0.159   0.249   0.311   0.356   0.408
                    MS        0.015   0.048   0.089   0.135   0.187   0.015   0.047   0.092   0.134   0.185

Comparing the RB values for each coefficient gives a clear picture: across all scenarios, the MICE-Interaction method performed very well, with RB values that were almost zero. For instance, at 40% missing data under both mechanisms, the RB for the interaction term produced by MICE-Interaction was equal to zero (Table 1).

In terms of PB, the MICE-Interaction method also performed acceptably, with all PB values below the 5% threshold. For every coefficient involved in the interaction effect, at all missing-data percentages under both missing mechanisms, MICE-Interaction yielded negligible PB (Table 2).

MICE-Interaction, which correctly includes the interaction term in the imputation model, led to slightly higher coverage for the interaction effect than the other imputation methods; its CR values exceeded 95%. In almost all scenarios, the CR values of the MICE-Stratified method fell below 95% (Table 3).

Examining the model-based and empirical SEs shows that, across the simulations, the difference between the empirical variance and Rubin’s MI variance was small or negligible. The model-based SEs were larger than the empirical SEs, which kept coverage relatively good despite the slight bias (Tables 4-5).
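This comparison can be made concrete with a small, stdlib-only Python sketch (the point estimates and Rubin total variances below are illustrative, not values from the study): the empirical SE is the spread of point estimates across repetitions, while the model-based SE averages the per-repetition Rubin SEs.

```python
import math
import statistics

# Illustrative point estimates and Rubin total variances from ten
# simulated repetitions (made-up numbers, not from the study).
estimates = [0.52, 0.48, 0.55, 0.45, 0.50, 0.51, 0.49, 0.53, 0.47, 0.50]
total_vars = [0.0012, 0.0011, 0.0013, 0.0012, 0.0011,
              0.0012, 0.0013, 0.0011, 0.0012, 0.0012]

# Empirical SE: standard deviation of the point estimates over repetitions.
empirical_se = statistics.stdev(estimates)

# Model-based SE: average of the per-repetition Rubin standard errors.
model_se = statistics.mean(math.sqrt(v) for v in total_vars)

# A model-based SE exceeding the empirical SE gives slightly conservative
# confidence intervals, which helps coverage despite small bias.
assert model_se > empirical_se
```

When the model-based SE is at least as large as the empirical SE, as in the results above, nominal coverage tends to be preserved even in the presence of slight bias.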

5.2.1. The Proportion of Variation Attributable to the Missing Data (λ)

Examining the estimated values of λ, it is interesting to note that, in almost all scenarios, all methods performed acceptably on the λ criterion (λ < 0.5). The λ values obtained from MICE-CART and MICE-Stratified were lower than those of the other methods. MICE-Stratified yielded the lowest λ, showing that the influence of its imputation model on the final result was not larger than that of the complete-data model; its estimated values stayed below 0.2 even at high rates of missing data (Table 6).
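For a single coefficient, λ follows from Rubin’s pooling rules. The stdlib-only sketch below illustrates the computation; the function name and the five imputed estimates are made up for illustration.

```python
import statistics

def pool_lambda(estimates, variances):
    """Rubin's rules: pool m imputed-data estimates and return the pooled
    estimate, total variance, and lambda, the proportion of total variance
    attributable to the missing data."""
    m = len(estimates)
    qbar = statistics.mean(estimates)        # pooled point estimate
    ubar = statistics.mean(variances)        # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    t = ubar + (1 + 1 / m) * b               # total variance
    lam = (1 + 1 / m) * b / t                # proportion due to missingness
    return qbar, t, lam

# Five imputations of one regression coefficient (made-up numbers):
est = [0.48, 0.52, 0.50, 0.47, 0.53]
var = [0.010, 0.011, 0.010, 0.012, 0.011]
qbar, t, lam = pool_lambda(est, var)
assert lam < 0.5  # well below the lambda > 0.5 warning threshold
```

With m = 5 imputations, λ = (1 + 1/m)B/T, so a small between-imputation variance relative to the total keeps λ well below the 0.5 warning level.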

6. Discussion

In this study, four MI methods were evaluated: MICE-Stratified, MICE-Interaction, and two recursive partitioning techniques, CART and RF, used as imputation methods in MICE to capture interaction effects in the data. In addition, we examined how the percentage of missing data and the two missing mechanisms affected the performance of the MI methods. We studied raw bias, percent bias, coverage rate, model-based SE, empirical SE, and the proportion of variation attributable to the missing data after imputation by these methods. Overall, at lower percentages of missing values (at most 30%), all methods except MICE-Stratified appeared to produce reasonably high-quality imputations.

Consistent with similar studies, the MI methods were evaluated based on RB, PB, CR, model-based SE, empirical SE, and λ [3, 22]. RB, PB, and CR are measures that inform us about the statistical validity and precision of a procedure. RB is a measure of invalidity and should be close to zero. For acceptable performance, we used an upper limit for PB of 5% [3, 32]. CR is also a measure of validity, but at the same time it is a function of the average width of the confidence intervals, which is a measure of precision. If all is well, RB should be close to zero and coverage should be near 0.95. Methods having no bias and proper coverage are called randomization-valid [33]. If two methods are both randomization-valid, the method with the shorter confidence intervals is more efficient. The λ value is a function of the number of imputations and is interpreted as the proportion of the variation attributable to the missing data. If λ is high, say λ > 0.5, the influence of the imputation model on the final result is larger than that of the complete-data model; better imputation results in lower values of λ [2, 34, 35]. As we saw earlier, in almost all scenarios the λ values were less than 0.5. Although the lowest λ values were attributable to the MICE-Stratified and MICE-CART methods, the RB and CR values of these two methods showed MICE-CART to be superior to MICE-Stratified.
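To make these criteria concrete, the stdlib-only sketch below computes RB, PB, CR, and the empirical SE for one coefficient over simulated repetitions; the true value, estimates, and standard errors are made up for illustration.

```python
import statistics

def evaluate(true_value, estimates, ses, z=1.96):
    """Raw bias, percent bias, coverage rate, and empirical SE
    over simulation repetitions."""
    rb = statistics.mean(estimates) - true_value        # raw bias
    pb = 100 * abs(rb / true_value)                     # percent bias
    covered = [abs(est - true_value) <= z * se          # CI contains truth?
               for est, se in zip(estimates, ses)]
    cr = sum(covered) / len(covered)                    # coverage rate
    emp_se = statistics.stdev(estimates)                # empirical SE
    return rb, pb, cr, emp_se

true_beta = 0.5
estimates = [0.51, 0.47, 0.55, 0.46, 0.52, 0.49, 0.53, 0.48, 0.50, 0.54]
ses = [0.05] * 10
rb, pb, cr, emp_se = evaluate(true_beta, estimates, ses)
assert pb < 5   # acceptable by the 5% threshold used above
```

A randomization-valid method in this sense would show RB near zero together with CR near 0.95; among such methods, the one with narrower intervals is the more efficient.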

Our main result is that, among the methods compared, MICE-Interaction had the best performance at all missing percentages. The recursive partitioning methods, in turn, performed better than the MICE-Stratified method. Furthermore, the simulation results showed that the quality of the parameter estimates was almost identical under the two missing mechanisms.

An extension of this study would consider a dataset with more complex structures between the variables than ours. In addition, looking at each model closely, one can see that the design of this study aligns directly with the strengths of each recursive partitioning model. The recursive splits in the tree structure allow CART to capture higher-order interactions effectively, as long as the number of individuals in each leaf is large enough.

The imperfect imputation models that produced the bias of the recursive partitioning techniques may stem from the presence of main effects in the data: recursive partitioning techniques have difficulty modelling linear main effects. Capturing a linear main effect is hard because, with a binary tree model, it would take many fortuitous splits to recreate the structure [36]. We expect this problem to arise with other recursive partitioning techniques as well. A solution is provided by STIMA [37], which combines a linear main-effects model with recursive partitioning. The difficulty with modelling linear main effects also explains why, in our study, the recursive partitioning imputation methods led to somewhat higher biases for the main effects than the MICE-Interaction method.
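A toy, stdlib-only sketch illustrates the point: on purely linear data, a single tree-style split (a step function) leaves substantial error, whereas a linear fit would be exact; a tree needs many such splits to approximate the line. The data and the single-split rule below are illustrative, not the actual CART algorithm.

```python
import statistics

# Noise-free linear data: y = 2x on a grid from 0 to 2.
xs = [i / 10 for i in range(21)]
ys = [2 * x for x in xs]

# A depth-1 tree: one binary split at the median of x, predicting the
# mean of y within each half (a two-step approximation of the line).
split = statistics.median(xs)
left_mean = statistics.mean(y for x, y in zip(xs, ys) if x <= split)
right_mean = statistics.mean(y for x, y in zip(xs, ys) if x > split)
tree_pred = [left_mean if x <= split else right_mean for x in xs]

# Mean squared error of the step-function approximation.
tree_mse = statistics.mean((y - p) ** 2 for y, p in zip(ys, tree_pred))

# A linear fit would have zero error here; the single split does not.
assert tree_mse > 0.1
```

Adding more splits shrinks this error only gradually, which is why a tree needs many fortuitous splits to recover a linear main effect.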

Meng stated that the analysis procedure should be congenial to the imputation model; Bartlett et al. connected congeniality to compatibility by extending the joint distribution of the imputation model to include the substantive model [38, 39]. Uncongeniality can occur if the imputation model is specified as more restrictive than the complete-data model, or if it fails to account for important factors in the missing-data mechanism. Both types of omission introduce biased and possibly inefficient estimates [3]. In the MICE-Interaction method, the imputation model included the interaction effect, which made the imputation and analysis models compatible [40]; consequently, the method led to less bias in estimating the coefficients. The higher biases for the interaction effects under RF compared to CART may be explained by interactions that are missed in the tree-building process because of the bootstrap sampling and the (low) number of randomly preselected variables [41]. We therefore conclude that MICE-Interaction preserved the interaction effect best, and the MICE-Interaction method is recommended when a user expects interaction effects. For all methods, the quality of imputation declined as the percentage of missing data increased.

Some features of the present study can be considered strengths. For example, to our knowledge, this is the first time the performance of these four imputation methods has been evaluated on observations involving a combination of binary and continuous predictors in the presence of an interaction effect.

To improve the performance of MICE-RF, an extension of this study will look into changing the number of trees in the RF algorithm. Future work will consider the missing not at random mechanism instead of the MCAR and MAR mechanisms considered here. Finally, as mentioned earlier, datasets with more complex structures between the variables will be considered.

7. Conclusion

In summary, this paper offered a fair comparison between MICE-Stratified, MICE-Interaction, and tree-based imputation methods in the MICE algorithm. To our knowledge, this is the first paper to compare tree-based imputation in MICE to a parametric model that includes a true interaction effect between a dummy and a continuous variable.

MICE-Interaction had the highest coverage for the interaction effect. Importantly, parametric imputation should only be used when there is enough information to ensure that all necessary interaction terms are included in the imputation model. If one can accept reduced coverage for the interaction effect, the recursive partitioning methods are recommended, as they do not require specification of the imputation model.

Although there is still room for improvement, it can be concluded that while recursive partitioning methods were valuable for imputing datasets containing an interaction effect between a binary and a continuous variable, properly incorporating the interaction in the parametric imputation model (the MICE-Interaction method) led to much better performance.

Abbreviations

CR: Coverage rate
FCS: Fully conditional specification
JAV: Just another variable
MCR: MICE-CART
MRF: MICE-RF
MI: MICE-Interaction
MS: MICE-Stratified
MAR: Missing at random
MCAR: Missing completely at random
MI: Multiple imputation
PB: Percent bias
PMM: Predictive mean matching
RB: Raw bias
RF: Random forests
SE: Standard error.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The corresponding author has previously used the real data in “Addressing the Problem of Missing Data in Decision Tree Modelling” (link: https://doi.org/10.1080/02664763.2017.1284184).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. J. A. C. Sterne, I. R. White, J. B. Carlin et al., “Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls,” BMJ, vol. 338, no. 1, Article ID b2393, 2009.
  2. D. B. Rubin, Multiple Imputation for Nonresponse in Surveys, John Wiley & Sons, Hoboken, NJ, USA, 2004.
  3. S. Van Buuren, Flexible Imputation of Missing Data, CRC Press, Boca Raton, FL, USA, 2018.
  4. D. B. Rubin, “Multiple imputation after 18+ years,” Journal of the American Statistical Association, vol. 91, no. 434, pp. 473–489, 1996.
  5. J. Barnard and X.-L. Meng, “Applications of multiple imputation in medical studies: from AIDS to NHANES,” Statistical Methods in Medical Research, vol. 8, no. 1, pp. 17–36, 1999.
  6. R. J. Little and D. B. Rubin, Statistical Analysis with Missing Data, John Wiley & Sons, Hoboken, NJ, USA, 2019.
  7. S. Van Buuren and K. Oudshoorn, Flexible Multivariate Imputation by MICE, TNO, Leiden, Netherlands, 1999.
  8. S. Van Buuren, “Multiple imputation of discrete and continuous data by fully conditional specification,” Statistical Methods in Medical Research, vol. 16, no. 3, pp. 219–242, 2007.
  9. J. S. Liu, Monte Carlo Strategies in Scientific Computing, Springer Science & Business Media, Berlin, Germany, 2008.
  10. F. Li, M. Baccini, F. Mealli, E. R. Zell, C. E. Frangakis, and D. B. Rubin, “Multiple imputation by ordered monotone blocks with application to the anthrax vaccine research program,” Journal of Computational and Graphical Statistics, vol. 23, no. 3, pp. 877–892, 2014.
  11. T. E. Raghunathan and D. B. Rubin, “Roles for Bayesian techniques in survey sampling,” in Proceedings of the Silver Jubilee Meeting of the Statistical Society of Canada, Canada, 1998.
  12. S. Buuren and K. Groothuis-Oudshoorn, “MICE: multivariate imputation by chained equations in R,” Journal of Statistical Software, vol. 45, no. 3, pp. 1–68, 2010.
  13. S. Yang, Flexible Imputation of Missing Data, Chapman & Hall/CRC Press, Boca Raton, FL, USA, 2018.
  14. Y.-S. Su, A. E. Gelman, J. Hill, and M. Yajima, “Multiple imputation with diagnostics (Mi) in R: opening windows into the black box,” Journal of Statistical Software, vol. 45, no. 2, pp. 1–31, 2011.
  15. P. Royston and I. R. White, “Multiple imputation by chained equations (MICE): implementation in Stata,” Journal of Statistical Software, vol. 45, no. 4, pp. 1–20, 2011.
  16. S. R. Seaman, J. W. Bartlett, and I. R. White, “Multiple imputation of missing covariates with non-linear effects and interactions: an evaluation of statistical methods,” BMC Medical Research Methodology, vol. 12, no. 1, p. 46, 2012.
  17. J. N. Morgan and J. A. Sonquist, “Problems in the analysis of survey data, and a proposal,” Journal of the American Statistical Association, vol. 58, no. 302, pp. 415–434, 1963.
  18. L. F. Burgette and J. P. Reiter, “Multiple imputation for missing data via sequential regression trees,” American Journal of Epidemiology, vol. 172, no. 9, pp. 1070–1076, 2010.
  19. J. L. Schafer, Analysis of Incomplete Multivariate Data, Chapman and Hall/CRC, Boca Raton, FL, USA, 1997.
  20. L. L. Doove, S. Van Buuren, and E. Dusseldorp, “Recursive partitioning for missing data imputation in the presence of interaction effects,” Computational Statistics & Data Analysis, vol. 72, pp. 92–104, 2014.
  21. A. D. Shah, J. W. Bartlett, J. Carpenter, O. Nicholas, and H. Hemingway, “Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study,” American Journal of Epidemiology, vol. 179, no. 6, pp. 764–774, 2014.
  22. T. Therneau, B. Atkinson, and B. Ripley, Recursive Partitioning and Regression Trees. R Package ‘rpart’ (Version 4.1-11), R Foundation for Statistical Computing, Vienna, Austria, 2017.
  23. A. Liaw and M. Wiener, “Classification and regression by randomForest,” R News, vol. 2, no. 3, pp. 18–22, 2002.
  24. C. Strobl, A.-L. Boulesteix, and T. Augustin, “Unbiased split selection for classification trees based on the Gini index,” Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 483–501, 2007.
  25. J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning, Springer Series in Statistics, New York, NY, USA, 2001.
  26. L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, Chapman and Hall/CRC, Boca Raton, FL, USA, 1984.
  27. B. Garrusi, S. Garousi, and M. R. Baneshi, “Body image and body change: predictive factors in an Iranian population,” International Journal of Preventive Medicine, vol. 4, no. 8, pp. 940–948, 2013.
  28. J. L. Schafer, “Multiple imputation: a primer,” Statistical Methods in Medical Research, vol. 8, no. 1, pp. 3–15, 1999.
  29. H. Demirtas, S. A. Freels, and R. M. Yucel, “Plausibility of multivariate normality assumption when multiply imputing non-Gaussian continuous outcomes: a simulation assessment,” Journal of Statistical Computation and Simulation, vol. 78, no. 1, pp. 69–84, 2008.
  30. H. Demirtas, “Simulation driven inferences for multiply imputed longitudinal datasets,” Statistica Neerlandica, vol. 58, no. 4, pp. 466–482, 2004.
  31. L. M. Collins, J. L. Schafer, and C.-M. Kam, “A comparison of inclusive and restrictive strategies in modern missing data procedures,” Psychological Methods, vol. 6, no. 4, pp. 330–351, 2001.
  32. H. Demirtas and D. Hedeker, “Multiple imputation under power polynomials,” Communications in Statistics—Simulation and Computation, vol. 37, no. 8, pp. 1682–1695, 2008.
  33. D. Rubin, Multiple Imputation for Nonresponse in Surveys, John Wiley & Sons, New York, NY, USA, 1987.
  34. C. A. Bernaards, M. M. Farmer, K. Qi, G. S. Dulai, P. A. Ganz, and K. L. Kahn, “Comparison of two multiple imputation procedures in a cancer screening survey,” 2002.
  35. StataCorp LLC, Stata Multiple-Imputation Reference Manual, StataCorp LLC, College Station, TX, USA, 2013.
  36. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Science & Business Media, Berlin, Germany, 2009.
  37. E. Dusseldorp, C. Conversano, and B. J. Van Os, “Combining an additive and tree-based regression model simultaneously: STIMA,” Journal of Computational and Graphical Statistics, vol. 19, no. 3, pp. 514–530, 2010.
  38. X.-L. Meng, “Multiple-imputation inferences with uncongenial sources of input,” Statistical Science, vol. 9, no. 4, pp. 538–558, 1994.
  39. J. W. Bartlett, S. R. Seaman, I. R. White, and J. R. Carpenter, “Multiple imputation of covariates by fully conditional specification: accommodating the substantive model,” Statistical Methods in Medical Research, vol. 24, no. 4, pp. 462–487, 2015.
  40. E. Slade and M. G. Naylor, “A fair comparison of tree‐based and parametric methods in multiple imputation by chained equations,” Statistics in Medicine, vol. 39, no. 8, pp. 1156–1166, 2020.
  41. C. Strobl, J. Malley, and G. Tutz, “An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests,” Psychological Methods, vol. 14, no. 4, pp. 323–348, 2009.

Copyright © 2021 Sara Javadi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
