Abstract

Screening cytosine-phosphate-guanine dinucleotide (CpG) DNA methylation sites in association with covariate(s) of interest is often necessary because of the high dimensionality of methylation data. We incorporate surrogate variable analysis (SVA) into (ordinary or robust) linear regressions and utilize training and testing samples for nested validation to screen CpG sites. SVA accounts for variations in methylation not explained by the specified covariate(s) and adjusts for confounding effects. To make the method accessible to users, it is built into a user-friendly R package, ttScreening, with efficient algorithms implemented. Various simulations were conducted to examine the robustness and sensitivity of the method compared to two classical approaches for controlling multiple testing: the false discovery rate-based (FDR-based) and the Bonferroni-based methods. The proposed approach in general performs better and has the potential to control both type I and type II errors. We applied ttScreening to 383,998 CpG sites in association with maternal smoking, one of the leading factors for cancer risk.

1. Background

Due to its high throughput, accuracy, small sample requirement, and acceptable cost, the Illumina Infinium HumanMethylation450 BeadChip has been widely used to analyze deoxyribonucleic acid (DNA) methylation profiles in epigenetic studies that target various types of cancer. In particular, the Illumina Infinium assay utilizes a pair of probes (a methylated probe and an unmethylated probe) to measure the intensities of methylated and unmethylated alleles at the interrogated cytosine-phosphate-guanine dinucleotide (CpG) sites [1]. Two measures of DNA methylation are commonly used: beta-values and M-values. A beta-value is the ratio of the signal from the methylated probe relative to the sum of both methylated and unmethylated probe signals. Beta-values are in the range of [0, 1], with 0 being completely unmethylated and 1 being fully methylated. M-values are the log2 ratio of the intensities of the methylated and unmethylated probes and range over the whole real line [2, 3]. M-values are used more often in appreciation of their wide data range and variance homogeneity compared to beta-values.
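The two measures are related through a logit-type transformation, as the following minimal R sketch illustrates; the probe intensities are hypothetical, and in practice small offsets are often added to the denominators to stabilize the ratios:

```r
# Hypothetical probe intensities at three CpG sites
meth   <- c(900, 150, 500)   # methylated probe signal
unmeth <- c(100, 850, 500)   # unmethylated probe signal

beta <- meth / (meth + unmeth)   # beta-value, bounded in [0, 1]
m    <- log2(meth / unmeth)      # M-value, unbounded
# Equivalently: m == log2(beta / (1 - beta))
```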

Given the high dimensionality of high-throughput methylation data, when performing designed and possibly complicated statistical analyses it is wise to first target potentially important CpG sites, for instance, CpG sites potentially associated with single-nucleotide polymorphisms (SNPs) and/or other covariate(s) of interest. Otherwise, statistical power will be substantially lost. There is evidence that methylation is affected by genetic factors and by known exposures such as smoking [4, 5]. Screening CpG sites has become overwhelmingly important across multiple health fields of study such as cancer research, genetic diseases, and epigenetic research.

Common methods for screening CpG sites model the M-values in association with SNPs or other genetic or environmental factors under an assumption of linearity, with post hoc adjustment for multiple comparisons. The advantage of this approach is the flexibility of incorporating additional covariates and their interactions. The primary limitation lies in controlling for multiple testing. Two popular adjustment methods are the Bonferroni-based method [6, 7] and the Benjamini-Hochberg method for controlling the false discovery rate (FDR) [8, 9]. These methods alter the p value or critical value to control the type I error. The Bonferroni correction is the most conservative: it divides the significance level by the total number of comparisons (the number of CpG sites in this case), so that only terms of interest with p values below this adjusted threshold are declared significant. The FDR method first orders the p values, p(1) ≤ p(2) ≤ ⋯ ≤ p(m), and rejects the null hypotheses corresponding to the k smallest p values, where k is the largest i such that p(i) ≤ (i/m)α [9]. It follows that the conservative Bonferroni-based method cannot control the type II error while the FDR-based method cannot control the type I error.
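For reference, both classical adjustments are available in base R through p.adjust(); a minimal sketch with a hypothetical vector of per-site p values:

```r
pvals <- c(1e-8, 2e-4, 0.01, 0.04, 0.20)  # hypothetical per-CpG p values
alpha <- 0.05

sig.bonf <- p.adjust(pvals, method = "bonferroni") < alpha  # most conservative
sig.bh   <- p.adjust(pvals, method = "BH") < alpha          # Benjamini-Hochberg FDR
```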

There are other potential issues that arise when dealing with DNA methylation. It is possible that the variation in methylation cannot be fully explained by the known covariates and that there exist latent factors confounded with these known covariates [5]. To improve screening quality, it is thus important to account for variations introduced by such unknown factors. Furthermore, CpG sites screened from one data set may not be consistent with those from another data set, which directly affects the type I error rate and leads to a loss of power. It is thus equally important to improve the reproducibility of the selected CpG sites.

In this paper, we propose a novel combination of existing statistical techniques to screen genome-wide methylation. It takes the effects of unknown factors into account and achieves better reproducibility. The method has the ability to control both type I and type II errors while adjusting for covariates as well as latent variables. The proposed screening method incorporates surrogate variable analysis [5], which identifies unknown latent variables, in conjunction with a training and testing approach [10] across CpG methylation sites linearly associated with covariates of interest, including the identified surrogate variables. Independently, each method is well established for different purposes. We combine these methods to form, compared to existing methods, an improved and more efficient process to screen (filter) informative DNA methylation sites. In addition, this proposed method has been built into an efficient and user-friendly R package. In the following sections, further description and details of the proposed method can be found in Section 2, simulation studies and a real data application are included in Section 3, and we summarize the approach in Section 4.

2. Materials and Methods

The proposed screening procedure is built for analyzing the associations between methylation data (M-values), or other high-dimensional data, and covariates of interest and their potential interactions. It consists of two consecutive components: surrogate variable analysis, followed by a series of regressions with control for multiple testing. Surrogate variable analysis (SVA) aims to identify and estimate latent factors or surrogate variables (SVs) that potentially affect the association between known factors and the response variable, for example, SNPs (known factor) and DNA methylation (response variable) [5]. Including the estimated surrogate variables in the screening process has the potential to reduce unexplained variation, adjust for confounding effects, and consequently improve the accuracy of screening in terms of important variable identification [5]. In the context of DNA methylation, inclusion of the surrogate variables explains the variation in DNA methylation not explained by the covariates currently under consideration. This implies that the identified surrogate variables can be further used to identify important factors (markers) showing large contributions to the variation in the response variable explained by the surrogate variables [11]. These surrogate variables, along with other variable(s) of interest, can be included in the regression analysis as independent variables. After SVA, we then begin the screening process with regressions and adjust for multiple comparisons.

As mentioned in the introduction, several methods exist to adjust for multiple testing but lack the ability to control both type I and type II errors. We have elected to implement a method that controls both while simultaneously mitigating the issue of reproducibility. We let randomly chosen training and testing samples estimate and test the effects of the primary covariate(s); we term this the TT method. General ideas of this approach are discussed in Dobbin and Simon and in Faraggi and Simon [10, 12]. The method follows the concept of cross-validation. It has been shown that the implementation of the training and testing technique can provide better control of the type I error rate [10, 12]. In the next two subsections, we provide detailed steps and options available for both the surrogate variable analysis component and the TT method.

2.1. Identifying Surrogate Variables

Surrogate variables are inferred prior to screening using an algorithmic method developed by Leek and Storey (2007) called surrogate variable analysis [5]. Following the descriptions in Leek and Storey (2007), these SVs are constructed by removing the amount of methylation signal due to the variable(s) of interest and then decomposing the remaining residuals to identify an orthogonal basis of singular vectors that can be reproduced. These vectors are further examined for significant variation to form surrogate variables. Leek and Storey built an R package to perform surrogate variable analysis (SVA). The first step in SVA is to identify the number of surrogate variables based on the data using one of two methods, “be” or “leek,” as denoted in the package. The “be” method, the default choice in Leek and Storey’s R package, is based on a permutation procedure originally proposed by Buja and Eyuboglu in 1992 [13], while the “leek” method provides an interface to the asymptotic approach proposed by Leek in 2011 [14]. Once the number of surrogate variables is determined, they are estimated using one of three algorithms: the iteratively reweighted (“irw”), supervised (“supervised”), or two-step (“two-step”) method. The iteratively reweighted method empirically estimates control probes, the supervised method is used when control probes are known, and the two-step method is for general estimation of surrogate variables [15]. We elected to implement the “two-step” method following Leek and Storey [5]. Conditional on the data, a number of latent unknown variables are thus identified and estimated, and they are then incorporated into the regressions in association with DNA methylation as additional covariates.
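As a concrete illustration, the corresponding calls in the Bioconductor sva package might look as follows; meth (a CpG-by-sample matrix of M-values) and smoke (a covariate of interest) are hypothetical objects:

```r
library(sva)

pheno <- data.frame(smoke)                    # hypothetical covariate data
mod   <- model.matrix(~ smoke, data = pheno)  # full model: covariate(s) of interest
mod0  <- model.matrix(~ 1, data = pheno)      # null model: intercept only

n.sv  <- num.sv(meth, mod, method = "be")     # number of SVs ("be" permutation method)
svobj <- sva(meth, mod, mod0, n.sv = n.sv, method = "two-step")
sv    <- svobj$sv                             # estimated SVs, one column per variable
```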

2.2. The Training and Testing (TT) Screening Method

After surrogate variable analysis is complete, the TT screening method begins as an iterative process of randomly splitting the data into training and testing sets by a specified proportion. By default, 2/3 of the data are included in the training set, which is suggested in Dobbin and Simon [10] to maximize statistical power. Linear regressions are first applied to the training data to calculate the p values for the association between each CpG site and the covariates, including SVs, using either ordinary least squares (the default choice in our R package) or robust regression. Robust regression is a type of linear regression that allows for more relaxed assumptions about normality and the presence of outliers in the data. A CpG site is included as a candidate if the covariate(s) of interest is statistically significant according to a prespecified significance level, for example, 0.05. In our R package, we give the user the flexibility to define which term (covariate) is used to decide the selection of CpG sites. For example, suppose the defined right-hand side of the regression is x1 + x2 + x1 × x2, where x1 × x2 denotes the interaction of x1 and x2. If the decision to select a CpG site is based on one single term, for example, the significance of the interaction effect, the p value for the interaction term would be tested against the prespecified significance level of 0.05.

The process continues with these candidate CpG sites being further tested using the remaining subjects (the testing data set) with linear regressions. For one pair of training and testing data sets, a candidate CpG site is deemed important if the significance still holds in the testing data. The significance level for the testing data is by default set at the same level as for the training data, 0.05. This screening process is repeated for a prespecified number of iterations; at each iteration, a training and testing split is randomly drawn and a pool of candidate CpG sites is selected. We summarize this screening process in Figure 1. Across all iterations, CpG sites selected in at least a prespecified percentage of the iterations are included in the final pool of potentially important CpG sites; this threshold is the cutoff percentage of selection. Final estimates of the associations and statistical significance for the selected CpG sites are inferred using the complete data via the same analytical methods (i.e., linear regression including surrogate variables previously estimated from the complete data) as in the training and testing process. A cutoff percentage of 50% (i.e., selection in at least 50 of 100 total iterations) was used to determine the final pool of potentially important CpG sites. Suggestions on the determination of this predefined cutoff percentage are discussed later in Section 3.1.
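A minimal sketch of one training/testing iteration for a single CpG site is shown below; y (the site's M-values), x1 and x2 (numeric covariates), and sv (the surrogate variable matrix from the SVA step) are hypothetical objects:

```r
colnames(sv) <- paste0("sv", seq_len(ncol(sv)))
dat  <- data.frame(y = y, x1 = x1, x2 = x2, sv)
form <- reformulate(c("x1 * x2", colnames(sv)), response = "y")

train <- sample(nrow(dat), size = round(2/3 * nrow(dat)))  # 2/3 for training

# Training step: is the interaction term significant?
p.tr <- summary(lm(form, data = dat[train, ]))$coefficients["x1:x2", 4]

selected <- FALSE
if (p.tr < 0.05) {  # candidate site: re-test on the held-out samples
  p.te <- summary(lm(form, data = dat[-train, ]))$coefficients["x1:x2", 4]
  selected <- p.te < 0.05
}
# Repeating over many random splits and keeping sites selected in at least
# the cutoff percentage of iterations yields the final pool.
```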

The user-friendly R package, ttScreening, which implements the proposed screening procedure discussed above, is available from the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/ttScreening/index.html. The ttScreening package also provides access to two other screening methods: the FDR and Bonferroni methods. Various options, such as the type of linear regression and the surrogate variable estimation method, are available for the user to specify, while other options are data specific and need to be defined by the user. The package does, however, provide acceptable default values for those options that are not data specific. A list of package options along with descriptions is available in the package manual at https://cran.r-project.org/web/packages/ttScreening/ttScreening.pdf.
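A call might look like the sketch below; the argument names and values shown are assumptions based on the options described in this section, so consult the package manual above for the exact interface:

```r
install.packages("ttScreening")
library(ttScreening)

# Hypothetical inputs: 'meth' (M-value matrix) and covariates x1, x2
out <- ttScreening(y = meth,                  # methylation matrix (assumed name)
                   formula = ~ x1 + x2 + x1:x2,
                   imp.var = 3,               # index of the term driving selection (assumed)
                   data = data.frame(x1, x2),
                   sva.method = "two-step",   # SV estimation algorithm
                   iterations = 100,          # number of random splits
                   cv.cutoff = 50,            # cutoff percentage (assumed name)
                   train.alpha = 0.05,
                   test.alpha = 0.05)
```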

3. Results and Discussion

Simulations are used to demonstrate and assess the TT screening method in comparison with the FDR- and Bonferroni-based methods using the ttScreening R package. These are followed up by an application to a data set of 383,998 CpG sites and their association with maternal smoking.

3.1. Simulations

Simulation Scenarios. In total, 2,000 CpG sites were simulated for a sample of subjects. Of these 2,000 CpG sites, a number of sites, m, were assumed to be important; different settings of m were considered, among them m = 100 and m = 200. Among the important sites, DNA methylation at a portion of the CpG sites was associated with two variables, x1 and x2, their interaction, and five unobservable independent uncorrelated variables; the remaining important sites were associated with x1, the interaction of x1 and x2, and the unobservable variables. Variable x1 was generated from a normal distribution, and x2 is a four-level categorical variable generated from a multinomial distribution. The remaining (2,000 − m) CpG sites were only associated with the unobservable variables and were deemed unimportant. Linear regressions were applied to simulate the data. To assess the robustness of each method (TT screening and the FDR- and Bonferroni-based methods), we considered two types of random error in the regressions, one following a normal distribution and the other a skewed distribution with a small number of degrees of freedom. Combining the choices of m and the distributions of random error yields the full set of settings. For each setting, we generated 100 Monte Carlo (MC) replicates. The results presented below include means of the number of incorrect selections (rounded to the nearest integer), estimates of sensitivity, and estimates of specificity across the 100 MC replicates. The number of incorrect selections refers to the number of CpG sites misidentified (both false positive and false negative CpG sites) out of the 2,000 CpG sites.
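The following R sketch generates one replicate in the spirit of this scenario; the sample size, effect sizes, and distribution parameters are illustrative assumptions, since the exact values are not reproduced here:

```r
set.seed(2016)
n <- 300; n.cpg <- 2000; m <- 100          # sample size and site counts (assumed)

x1 <- rnorm(n)                              # continuous covariate
x2 <- sample(1:4, n, replace = TRUE)        # four-level covariate (treated numerically here)
u  <- matrix(rnorm(n * 5), n, 5)            # five unobservable variables

meth <- matrix(rnorm(n.cpg * n), n.cpg, n)  # noise-only unimportant sites
signal <- x1 + x1 * x2 + drop(u %*% rep(0.5, 5))  # one illustrative signal form
for (j in seq_len(m)) meth[j, ] <- meth[j, ] + signal  # important sites get signal
```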

3.2. Results

CpG sites were selected if the interaction effect of x1 and x2 was statistically significant. All package options were kept at their default values, with the training and testing significance levels both set to 0.05. The screening results (Table 1) indicate that, in general, the sensitivity of the FDR-based method is comparable to that of the TT screening method, but its specificity is lower when the number of important variables is not sparse, for example, when important sites make up 5% or more of the total. Compared to the Bonferroni-based method, the TT method in general gave better sensitivity and comparable specificity. These results were as expected, as the Bonferroni-based method lacks the ability to control the type II error while the FDR-based method cannot control the type I error well [16]. The TT screening method, on the other hand, has the potential to control both type I and type II errors. This is reflected in the results: the TT screening method overall produced the smallest number of incorrectly identified variables, a statistic incorporating information from both sensitivity and specificity. We also performed the screening using robust regressions, and similar results were obtained (not shown). These findings are invariant to the distribution of the random errors, normal or skewed.

Recall that the default cutoff percentage required for a CpG site to be treated as informative across all iterations was 50%, and the significance levels for both the training and testing data sets were 0.05. To further evaluate how the choices of this cutoff percentage and the significance levels in the training and testing steps influence the screening results, we chose a set of cutoff percentages ranging from 30% to 90% and set the training significance level to 0.1 instead of the default value of 0.05. All other settings were kept the same. As seen in Figure 2, across the different numbers of important CpG sites and sample sizes, taking a cutoff percentage close to 50% overall works best with a significance level of 0.05 for both the training and testing steps. If a higher significance level is chosen in the testing step, then a higher cutoff percentage should be used.

The above examination of the cutoff percentage used a single sample size. To assess whether and how sample size influences the choice of cutoff percentage, we repeated the above analyses for different sample sizes, including 400 and 800 (for each sample size, 100 MC replicates were generated), with all options in ttScreening() set at their default values except the cutoff percentage. The results (Figures 3–5) indicated that if the number of important variables is expected to be sparse (e.g., 0.5% of the total number of variables), the default cutoff value (50%) is a reasonable choice regardless of the sample size. On the other hand, if important variables are not sparse, for example, at least 5% of the total number of variables, then the choice of cutoff value tends to be influenced by the sample size, but only if the sample size is not large (e.g., ≤400). In this case, the smaller the sample size, the lower the cutoff percentage should be. This is as expected; when the sample size is smaller, the probability of true positives is lower. We thus do not expect a high proportion of correct selections, and consequently a lower cutoff percentage is desired. From our simulations, we recommend a 20% cutoff when both of the following conditions hold: (1) important variables are not expected to be sparse (which rarely happens in high-throughput and high-dimensional data) and (2) the sample is small relative to the number of candidate CpG sites. If the sample size is large, the default cutoff of 50% is still recommended. The selection results for the sample sizes considered, following these recommendations, are included in the appendix (Tables 3, 4, and 5).

To demonstrate the benefit of using the identified surrogate variables in screening, we reanalyzed the simulated data with the surrogate variable analysis excluded (this option is available in the package). The relative patterns of the statistics between the different methods were intact compared to the results when surrogate variables were included in the screening process. However, when surrogate variables were not considered, a drastic drop in mean sensitivity was observed across all the settings (Table 2). The sensitivity measures across all 100 replicates ranged from 0 to 30%, implying that all methods had trouble deciphering the CpG sites that are truly important. Instead of completely excluding surrogate variables, what if we include a set of surrogate variables in which each explains only a small portion of the variation (i.e., is less informative)? To examine this, we extracted less informative surrogate variables and reanalyzed the data. Findings similar to those in Table 2 were obtained (Table 6), indicating the importance of including informative surrogate variables in order to substantially improve the quality of selection.

We further examined the confounding effect of surrogate variables. To achieve this, we followed the same simulation scenarios noted earlier but, instead of assuming independence between all the (observed and unobserved) independent variables, we allowed the 5 unobserved variables to be correlated with the observed variable with a correlation of ρ^|i−j|, where i and j represent the (arbitrary) locations of the variables. For example, the observed variable is identified at location 0 and the 5 unobserved variables are at locations 1 through 5. We then generated 100 Monte Carlo replicates based on normal distributions. In the simulations, we considered the impact of both including and excluding the identified surrogate variables. The screening results (not shown) are consistent with the previous findings obtained when no correlations between unobserved and observed variables were assumed (Tables 1 and 2); that is, including the identified surrogate variables substantially improves the screening statistics (number of incorrect selections, sensitivity, and specificity).
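Such a correlation structure can be generated as in the sketch below, where the value of ρ and the sample size are illustrative assumptions:

```r
library(MASS)

rho <- 0.5; n <- 300                     # assumed values
loc <- 0:5                               # location 0: observed; 1-5: unobserved
Sigma <- rho^abs(outer(loc, loc, "-"))   # correlation rho^|i-j| between locations
vars  <- mvrnorm(n, mu = rep(0, 6), Sigma = Sigma)

x1 <- vars[, 1]    # observed covariate
u  <- vars[, -1]   # five unobserved variables, correlated with x1
```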

Based on the simulations, it is advised that users follow the default settings for the TT screening method: 2/3 of the data for training, the “two-step” method for the SVA step as described in the published literature [5], 100 iterations for the total number of TT screenings, 50% as the cutoff proportion of those 100 iterations, and a 0.05 significance level for both the training and testing data. We recommend 100 iterations as a balance between computing efficiency and adequate resampling of the subjects to decipher true associations. We recommend 50% as the default cutoff because informative CpGs are in general sparse compared to the number of candidate CpG sites, in which case, as seen in the simulations, the 50% cutoff percentage is suitable for both small and large sample sizes.

3.3. Real Data Analysis

We applied the ttScreening package to 383,998 CpG sites with DNA methylation available for 245 subjects. The data were collected from a study cohort of 18-year-old females on the Isle of Wight (IOW) in the United Kingdom [17]. We examined a single factor potentially associated with DNA methylation: maternal smoking status during pregnancy (0/1). Maternal smoking in utero is thought to increase the chances of asthma and wheezing in children and to modify defensive mechanisms such as xenobiotic detoxification systems, antioxidant responses, and damage repair mechanisms [18]. Certain modifications to such systems are known to increase the risk of lung cancer [18].

We applied the TT, FDR-based, and Bonferroni-based methods to identify potentially important CpG sites. In the TT method, 2/3 of the samples were used for training and the remainder for testing. The number of iterations was set at 100 with a relative frequency of 50% as the cutoff, and the significance level in both the training and testing steps was set at 0.05. The significance level for the FDR-based and Bonferroni-based methods was set to 0.05. OLS regression was used to estimate the associations and p values. Other settings were kept at the defaults.

The FDR- and Bonferroni-based methods identified ten and five CpG sites, respectively, associated with maternal smoking status. The five CpG sites identified by the Bonferroni-based method were also among the ten CpG sites identified by FDR. The TT screening method identified 91 CpG sites potentially linked to maternal smoking status. The 10 CpG sites collectively identified by Bonferroni and FDR were included in the 91 identified by TT. The locations of the significant CpG sites and their annotated genes are included in Table 7.

To understand the biological meaning of the identified sets of CpGs, pathway analysis was used. The genes annotated to each significant CpG site were extracted from the 450K array manifest file (v1.2; available at http://support.illumina.com/downloads/humanmethylation450_15017482_v1-2_product_files.html). Where a CpG site was annotated to more than one gene, all annotated genes were included. The resulting gene lists were analyzed with the Ingenuity Pathway Analysis tool (IPA; Qiagen). Statistical significance for each pathway is reported by IPA using p values. The set of 91 CpGs differentially methylated with maternal smoking identified by TT (which includes the 5 and 10 CpG sites identified by Bonferroni and FDR, resp.) mapped to 54 unique genes.

The 91 CpGs differentially methylated with maternal smoking included 18 previously identified top maternal smoking-associated CpG sites in AHRR, CYP1A1, GFI1, MYO1G, CNTNAP2 [4, 19–21], FRMD4A [4, 21], LRRC32, and intergenic CpGs near LOC284998 and PDE10A-SDIM1 [21]. Bonferroni identified five maternal smoking-associated CpGs and FDR identified ten (including all five identified by Bonferroni), all of which have been previously published in association with maternal smoking. However, the TT method identified many more CpGs that have been previously reported as statistically significant in other cohorts, suggesting that these are not simply false positive results but represent additional, truly maternal smoking-associated genes worthy of future investigation and validation. Our observation of differential methylation at age 18 in response to maternal smoking during pregnancy also agrees with previous observations that these epigenetic responses are preserved at least into adolescence [20]. The top pathways enriched among genes containing the 91 CpG sites included aryl hydrocarbon receptor signaling and xenobiotic metabolism signaling, which support a long-lasting effect of maternal smoking on metabolic pathways controlling responses to smoke exposure. Furthermore, these pathways overlap with other pathways, including the nicotine degradation II and nicotine degradation III pathways, as well as pathways for the metabolism of cigarette smoke components and for known effects of smoke exposure, such as melatonin degradation (Figure 6) [22]. The TT method also identified six differentially methylated CpG sites in HOXA2, whose mutation causes cleft palate [23], the risk of which is known to be affected by maternal smoking [21].

In conclusion, the CpGs detected by TT in association with maternal smoking were enriched among pathways related to the known biology of those processes. Disruption or modification of some of these pathways results in greater risk of lung cancer [18]. TT also identified CpGs located in genes not previously identified which indicates potentially new findings.

4. Conclusions

We developed a unique screening procedure, built into an R package, for the purpose of screening important variables. It includes the newly developed method involving training and testing steps (the TT screening method), along with two existing methods: one controlling the FDR and the other controlling the overall significance level through the Bonferroni correction. Simulations were used to demonstrate and assess the TT screening method.

Overall, the TT screening method produced sensitivity comparable to that of the FDR-based method and specificity comparable to that of the Bonferroni-based method. However, the number of misidentified CpG sites from the TT method is in general smaller than those from the other two approaches. The findings on sensitivity and specificity were as expected, because the Bonferroni-based method lacks the ability to control the type II error while the FDR-based method cannot control the type I error well [16]. The TT screening method, on the other hand, has the potential to control both type I and type II errors, which was supported by the smaller numbers of incorrect detections. It was also noticed that, by incorporating surrogate variable analysis, all three methods produced higher sensitivity measures. In the real data application, the TT method identified a larger number of CpGs than the other two methods, and the subsequent pathway analyses further support the practical strength of the proposed method. It is worth noting that since the screening process only identifies CpG sites, to properly assess the functionality of these sites it is critical to evaluate them jointly, for example, using pathway analyses.

An important contribution of the developed process is its computing efficiency. Although we combined three methods into one package, the computing is not burdensome. Screening 383,998 CpG sites (the real data analysis) with the ttScreening() function using default options takes only 82 minutes on a personal computer with a 2.0 gigahertz (GHz) central processing unit (CPU) and 4.0 gigabytes (GB) of random-access memory (RAM). The ttScreening() function automatically adjusts for multiple testing using all three methods: FDR, Bonferroni, and the training and testing method.

Lastly, the versatility of the proposed screening method allows it to be applied to a variety of scientific fields in which large data or high dimensionality is a computational problem. DNA methylation is of growing interest in all aspects of public health, specifically in cancer and genetics/epigenetics. TT reduces dimensionality in a timely manner while controlling type I and type II errors and adjusting for unknown latent variables estimated using surrogate variable analysis. Finally, the package incorporates a variety of options, which allows the user to create very specific settings while maintaining convenient usability.

Appendix

See Tables 3, 4, 5, 6, and 7 and Figures 3, 4, 5, and 6.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Many thanks to Dr. Nelis Soto-Ramirez for her effort in assisting the authors with the testing of the package. This work is supported by NIH Grant nos. R01AI091905 (Principal Investigator (PI): Karmaus) and R21AI099367 (PI: Zhang).