Abstract

Tian et al. (2007) introduced a so-called hidden sensitivity model for evaluating the association between two sensitive questions with binary outcomes. In practice, however, we sometimes need to assess the association between one sensitive binary variable (e.g., whether or not one is a drug user, whether the number of sex partners is 1 or >1, and so on) and one non-sensitive binary variable (e.g., good or poor health status, with or without cervical cancer, and so on). To address this issue, in this paper we propose a new survey scheme, called the combination questionnaire design/model, which consists of a main questionnaire and a supplemental questionnaire and which fully utilizes the information contained in the non-sensitive binary variable. The introduction of the supplemental questionnaire, which is in fact a design of direct questioning, can effectively reduce noncompliance, since more respondents will not be faced with the sensitive question. Likelihood-based inferences, including maximum likelihood estimates via the expectation-maximization algorithm, asymptotic confidence intervals, and bootstrap confidence intervals of the parameters of interest, are derived. A likelihood ratio test is provided for testing the association between the two binary random variables. Bayesian inferences are also discussed. Simulation studies are performed, and a cervical cancer data set from Atlanta is used to illustrate the proposed methods.

1. Introduction

Warner [1] introduced a randomized response technique to obtain truthful answers to questions with sensitive attributes. Using the Warner design, Kraemer [2] derived a bivariate correlation between an attribute with polytomous responses and an attribute with normally distributed responses. Fox and Tracy [3] derived an estimator of the Pearson product-moment correlation coefficient between two sensitive questions by treating randomized response observations as individual-level scores contaminated by random measurement error. Edgell et al. [4] considered the correlation between two sensitive questions using the unrelated-question design or the additive-constants design. Christofides [5] presented a randomized response technique with two randomization devices for estimating the proportion of individuals having two sensitive characteristics at the same time. Kim and Warde [6] considered a multinomial randomized response model which can handle untruthful responses; they also derived the Pearson product-moment correlation estimator, which may be used to quantify the linear relationship between two variables when multinomial response data are observed according to a randomized response procedure. However, all these randomized response procedures make use of one or two randomizing devices, which (i) entail extra costs in both efficiency and complexity, (ii) increase the cognitive load of randomized response techniques, and (iii) introduce new sources of error, such as misunderstanding of or cheating on the randomized response procedures [7].

From the perspective of incomplete categorical data design, Tian et al. [8] proposed a nonrandomized response model (called the hidden sensitivity model) for assessing the association between two sensitive questions with binary outcomes. To protect respondents' privacy and to avoid the use of any randomization device, they utilized a non-sensitive question in the questionnaire to indirectly obtain respondents' answers to the two sensitive questions. In the hidden sensitivity model, they implicitly assumed that all respondents are willing to follow the design instructions; in other words, noncompliance does not occur. In practice, however, we sometimes need to assess the association between one sensitive binary variable (e.g., whether or not one is a drug user, whether the number of sex partners is 1 or >1, and so on) and one non-sensitive binary variable (e.g., good or poor health status, with or without cervical cancer, and so on). To our knowledge, a survey design for addressing this issue and the corresponding statistical analysis methods are not yet available. Although we could directly adopt the hidden sensitivity model, the information contained in the non-sensitive binary variable would not be utilized in that design. Intuitively, such information can be used to enhance the degree of privacy protection, so that more respondents will not be faced with the sensitive question. The major objective of this paper is to propose a new survey design and to develop the corresponding statistical methods for analyzing sensitive data collected by this technique.

The rest of the paper is organized as follows. In Section 2, without using any randomizing device, we propose a survey scheme, called the combination questionnaire design/model, which consists of a main questionnaire and a supplemental questionnaire. Likelihood-based inferences, including maximum likelihood estimates via an expectation–maximization (EM) algorithm, asymptotic confidence intervals, and bootstrap confidence intervals of the parameters of interest, are derived in Section 3. A likelihood ratio test is also provided for testing the association between the two binary random variables. In Section 4, we discuss Bayesian inferences when prior information on the parameters is available. In Section 5, two simulation studies are performed to compare the efficiency of the proposed combination questionnaire model with that of the existing hidden sensitivity model of Tian et al. [8] (i.e., the main questionnaire only). A cervical cancer data set from Atlanta is used in Section 6 to illustrate the proposed methods. A discussion and an appendix on the mode of a grouped Dirichlet density and a sampling method from it are also presented.

2. The Survey Design

Assume that X is a sensitive binary random variable, Y is a non-sensitive binary random variable, and the two are correlated. Let {X = 1} denote the sensitive class (e.g., X = 1 if a respondent is a drug user) and {X = 0} denote the non-sensitive class (e.g., X = 0 if a respondent is not a drug user). Furthermore, let both {Y = 1} (e.g., Y = 1 if a respondent received at least some college training) and {Y = 0} (e.g., Y = 0 if a respondent graduated at most from some high school) be non-sensitive classes. Define θ = (θ_1, θ_2, θ_3, θ_4)ᵀ, where θ_1 = Pr(X = 0, Y = 0), θ_2 = Pr(X = 1, Y = 0), θ_3 = Pr(X = 1, Y = 1), and θ_4 = Pr(X = 0, Y = 1); then θ ∈ T_4, where T_n denotes the n-dimensional closed simplex in R^n. The objective is to make inferences on θ, on functions of θ, and on the odds ratio

ψ = (θ_1 θ_3) / (θ_2 θ_4).    (1)

The survey scheme consists of a main questionnaire and a supplemental questionnaire. To design the main questionnaire, which is to be assigned to group 1 with n respondents (n is specified by the investigators), we first introduce a non-sensitive question with three possible answers. Assume that W is a non-sensitive variate with trichotomous outcomes associated with this question and is independent of both X and Y. Define p_j = Pr(W = j) for j = 1, 2, 3. For example, let W = 1 (2, 3) denote that a respondent was born in January–April (May–August, September–December), and thus we could assume that p_1 ≈ p_2 ≈ p_3 ≈ 1/3. The main questionnaire is shown in Table 1, under which each respondent is asked to answer the non-sensitive question on W.

On the one hand, since Category I (i.e., {X = 0, Y = 0}) and Category IV (i.e., {X = 0, Y = 1}) are non-sensitive to each respondent, it is reasonable to assume that a respondent belonging to one of these two categories is willing to provide his/her truthful answer by putting a tick in the corresponding block of Table 1 (Block 1, 2, or 3 according to the value of W for Category I, and Block 4 for Category IV) according to his/her true status. On the other hand, Category II (i.e., {X = 1, Y = 0}) and Category III (i.e., {X = 1, Y = 1}) are usually sensitive to respondents. In this case, if a respondent belongs to Category II (III), he/she is designed to put a tick in Block 2 (3).

The supplemental questionnaire is designed as shown in Table 2, under which the m respondents (m is also specified by the investigators) in group 2 are asked to put a tick in Block 5 or Block 6 depending on their true status, that is, {Y = 0} or {Y = 1}. Since both {Y = 0} and {Y = 1} are non-sensitive classes, the supplemental questionnaire is in fact a design of direct questioning. Therefore, we call this design the combination questionnaire model.

Table 3 shows the cell probabilities, the observed frequencies {n_1, n_2, n_3, n_4}, and the unobservable frequencies {Z_2, Z_3} for the main questionnaire. The observed frequency n_2 is the sum of the frequency of Category I respondents with W = 2 and the frequency of those belonging to Category II, and the observed frequency n_3 is the sum of the frequency of Category I respondents with W = 3 and the frequency of those belonging to Category III. Note that n_1 + n_2 + n_3 + n_4 = n and that the Category IV count equals n_4; thus, only the Category II and Category III counts Z_2 and Z_3 are unobservable. Table 4 shows the cell probabilities {θ_1 + θ_2, θ_3 + θ_4} and the observed counts {m_1, m_2} for the supplemental questionnaire.
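To make the cell structure of Tables 3 and 4 concrete, the following minimal Python sketch simulates one data set from the combination questionnaire design. It assumes the cell probabilities described above, namely (p_1 θ_1, p_2 θ_1 + θ_2, p_3 θ_1 + θ_3, θ_4) for Blocks 1–4 of the main questionnaire and (θ_1 + θ_2, θ_3 + θ_4) for Blocks 5–6 of the supplemental questionnaire; the function name simulate_cq and the particular true values of θ are ours, chosen only for illustration.

```python
# Sketch: simulate one realization of the combination questionnaire data.
# Assumed cell structure (our reading of Tables 3 and 4): the main questionnaire
# yields counts (n1, n2, n3, n4) with probabilities (p1*t1, p2*t1 + t2, p3*t1 + t3, t4),
# and the supplemental questionnaire yields (m1, m2) with probabilities (t1 + t2, t3 + t4),
# where theta = (t1, t2, t3, t4) = (Pr(X=0,Y=0), Pr(X=1,Y=0), Pr(X=1,Y=1), Pr(X=0,Y=1)).
import numpy as np

rng = np.random.default_rng(2013)

def simulate_cq(theta, p, n, m):
    """Return observed counts (n1..n4) and (m1, m2) for group sizes n and m."""
    t1, t2, t3, t4 = theta
    p1, p2, p3 = p
    main_cells = np.array([p1 * t1, p2 * t1 + t2, p3 * t1 + t3, t4])
    supp_cells = np.array([t1 + t2, t3 + t4])
    n_counts = rng.multinomial(n, main_cells)
    m_counts = rng.multinomial(m, supp_cells)
    return n_counts, m_counts

theta_true = np.array([0.4, 0.2, 0.2, 0.2])   # illustrative values only
p = np.array([1 / 3, 1 / 3, 1 / 3])           # birth-date proxy, assumed uniform
print(simulate_cq(theta_true, p, n=600, m=400))
```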

Remark 1. The design of the main questionnaire is similar to that of the hidden sensitivity model of Tian et al. [8], while the design of the supplemental questionnaire is in fact a design of direct questioning, since both {Y = 0} and {Y = 1} are non-sensitive classes. Table 1 shows that Categories II and III are two sensitive subclasses. Therefore, putting a tick in Block 2 or Block 3 implies that the respondent could be suspected of carrying the sensitive attribute. Suppose that the cell probabilities of Blocks 2 and 3 sum to about one half (see Table 3); then around half of the, say, N respondents would be suspected of carrying the sensitive attribute if only the main questionnaire were employed. However, if besides the main questionnaire (with n respondents) the supplemental questionnaire (see Table 2) with m = N − n respondents is also used, then only about half of the n respondents will be suspected of carrying the sensitive attribute. In other words, in the proposed combination questionnaire model, the information in the non-sensitive binary variable can be used to enhance the degree of privacy protection, so that more respondents will not be faced with the sensitive question. This is why we introduce the supplemental questionnaire in addition to the main questionnaire.

Remark 2. In practice, to simplify the design itself, we suggest that both the sample size n of the main questionnaire and the sample size m of the supplemental questionnaire be fixed in advance, rather than only the total sample size N = n + m being fixed. In this way, the survey data are collected in two independent groups, resulting in a relatively simpler statistical analysis. In addition, the interviewees are randomly assigned to either group 1 or group 2.

3. Likelihood-Based Inferences

In this section, maximum likelihood estimates (MLEs) of θ and of the odds ratio ψ are derived by using the EM algorithm. In addition, asymptotic confidence intervals and bootstrap confidence intervals of an arbitrary function of θ are also provided. Finally, a likelihood ratio test is presented for testing the association between the two binary random variables.

3.1. MLEs via the EM Algorithm

A total of N = n + m respondents are classified into two groups by a randomization approach such that n respondents answer the questions in the main questionnaire and m respondents answer the questions in the supplemental questionnaire. Let Y_obs,1 = {n; n_1, n_2, n_3, n_4} denote the observed counts collected with the main questionnaire (see Table 3), where n = n_1 + n_2 + n_3 + n_4. The likelihood function of θ based on Y_obs,1 is

L_1(θ | Y_obs,1) ∝ (p_1 θ_1)^{n_1} (p_2 θ_1 + θ_2)^{n_2} (p_3 θ_1 + θ_3)^{n_3} θ_4^{n_4}.    (2)

Let Y_obs,2 = {m; m_1, m_2} denote the observed counts gathered with the supplemental questionnaire (see Table 4), where m = m_1 + m_2. The likelihood function of θ based on Y_obs,2 is

L_2(θ | Y_obs,2) ∝ (θ_1 + θ_2)^{m_1} (θ_3 + θ_4)^{m_2}.    (3)

Let Y_obs = {Y_obs,1, Y_obs,2}. Since Y_obs,1 and Y_obs,2 are independent, the observed-data likelihood function of θ is

L_CQ(θ | Y_obs) ∝ (p_1 θ_1)^{n_1} (p_2 θ_1 + θ_2)^{n_2} (p_3 θ_1 + θ_3)^{n_3} θ_4^{n_4} (θ_1 + θ_2)^{m_1} (θ_3 + θ_4)^{m_2},    (4)

where the subscript “CQ” denotes the “combination questionnaire” model.

Since the cell probabilities corresponding to the observed counts n_2 and n_3 in group 1 are in the form of a summation (i.e., p_2 θ_1 + θ_2 and p_3 θ_1 + θ_3), we cannot obtain explicit expressions for the MLEs of θ from the score equations of (4). By treating the observed counts n_2 and n_3 as incomplete data, we use the EM algorithm [9] to find the MLE of θ. The counts {Z_2, Z_3} in Table 3 can be viewed as missing data. Briefly, Z_1, Z_2, Z_3, and Z_4 represent the counts of respondents belonging to Categories I, II, III, and IV, respectively, so that Z_1 = n_1 + (n_2 − Z_2) + (n_3 − Z_3) and Z_4 = n_4. Thus, we denote the latent data by Y_mis = {Z_2, Z_3} and the complete data by Y_com = {Y_obs, Y_mis}. Note that all the p_j are known. Consequently, the complete-data likelihood function for θ is

L_CQ(θ | Y_com) ∝ θ_1^{Z_1} θ_2^{Z_2} θ_3^{Z_3} θ_4^{Z_4} (θ_1 + θ_2)^{m_1} (θ_3 + θ_4)^{m_2},    (5)

where Z_1 = n_1 + (n_2 − Z_2) + (n_3 − Z_3) and Z_4 = n_4.

By treating {Z_2, Z_3} as random variables, we note that the complete-data likelihood function (5) has the density form of a grouped Dirichlet distribution [10]. Ng et al. [11] derived the mode of a grouped Dirichlet density in explicit form (see the appendix). Hence, from (5) and (A.4), the complete-data MLEs of θ are given by

θ_1 = (Z_1 + Z_2 + m_1)/N × Z_1/(Z_1 + Z_2),  θ_2 = (Z_1 + Z_2 + m_1)/N × Z_2/(Z_1 + Z_2),
θ_3 = (Z_3 + Z_4 + m_2)/N × Z_3/(Z_3 + Z_4),  θ_4 = (Z_3 + Z_4 + m_2)/N × Z_4/(Z_3 + Z_4),    (6)

where N = n + m. Given Y_obs and θ, Z_2 follows the binomial distribution with parameters n_2 and θ_2/(p_2 θ_1 + θ_2), and Z_3 follows the binomial distribution with parameters n_3 and θ_3/(p_3 θ_1 + θ_3); that is,

Z_2 | (Y_obs, θ) ~ Binomial(n_2, θ_2/(p_2 θ_1 + θ_2)),  Z_3 | (Y_obs, θ) ~ Binomial(n_3, θ_3/(p_3 θ_1 + θ_3)).    (7)

Therefore, the E-step of the EM algorithm computes the following conditional expectations:

E(Z_2 | Y_obs, θ) = n_2 θ_2/(p_2 θ_1 + θ_2),  E(Z_3 | Y_obs, θ) = n_3 θ_3/(p_3 θ_1 + θ_3),    (8)

and the M-step updates (6) by replacing Z_2 and Z_3 with these conditional expectations.
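The E- and M-steps above translate directly into code. The sketch below is our illustrative implementation under the same assumed cell structure as before (it is not taken from the paper); em_mle and m_step are hypothetical names, and the M-step is the grouped Dirichlet mode described in the appendix.

```python
# Sketch of the EM algorithm (6) and (8) under the assumed cell structure.
import numpy as np

def m_step(z, m_counts):
    """Complete-data MLE: mode of the grouped Dirichlet likelihood (assumed form)."""
    z1, z2, z3, z4 = z
    m1, m2 = m_counts
    total = z1 + z2 + z3 + z4 + m1 + m2
    s1 = (z1 + z2 + m1) / total            # theta1 + theta2 at the mode
    s2 = (z3 + z4 + m2) / total            # theta3 + theta4 at the mode
    return np.array([s1 * z1 / (z1 + z2), s1 * z2 / (z1 + z2),
                     s2 * z3 / (z3 + z4), s2 * z4 / (z3 + z4)])

def em_mle(n_counts, m_counts, p, tol=1e-10, max_iter=5000):
    """EM for the combination questionnaire model (illustrative sketch)."""
    n1, n2, n3, n4 = n_counts
    p1, p2, p3 = p
    theta = np.array([0.25, 0.25, 0.25, 0.25])     # initial value
    for _ in range(max_iter):
        t1, t2, t3, t4 = theta
        # E-step: expected Category II / III counts hidden in Blocks 2 and 3
        z2 = n2 * t2 / (p2 * t1 + t2)
        z3 = n3 * t3 / (p3 * t1 + t3)
        z1 = n1 + (n2 - z2) + (n3 - z3)            # Category I count
        z4 = n4                                    # Category IV is fully observed
        new_theta = m_step((z1, z2, z3, z4), m_counts)
        if np.max(np.abs(new_theta - theta)) < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta
```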

Remark 3. Based on the observed-data likelihood function (4), we could use the Newton–Raphson algorithm to find the MLEs of θ. However, it is well known that the Newton–Raphson algorithm does not necessarily increase the log-likelihood and may even diverge [12, page 172]; in addition, it is sensitive to the initial values. One advantage of using the EM algorithm in the current situation is that both the E-step and the M-step have closed-form expressions. More importantly, the EM algorithm and the data augmentation algorithm of Tanner and Wong [13] share the same data augmentation structure in the Bayesian setting (see Section 4 for more details).

3.2. Asymptotic Confidence Intervals

Let Θ = (θ_1, θ_2, θ_3)ᵀ, so that θ_4 = 1 − θ_1 − θ_2 − θ_3. The asymptotic variance–covariance matrix of the MLE Θ̂ is then given by I_obs^{−1}(Θ̂), where I_obs(Θ) = −∂²ℓ_CQ(Θ | Y_obs)/∂Θ∂Θᵀ denotes the observed information matrix and ℓ_CQ(Θ | Y_obs) = log L_CQ(θ | Y_obs) is the observed-data log-likelihood function. From (4), we have

ℓ_CQ(Θ | Y_obs) = n_1 log θ_1 + n_2 log(p_2 θ_1 + θ_2) + n_3 log(p_3 θ_1 + θ_3) + n_4 log θ_4 + m_1 log(θ_1 + θ_2) + m_2 log(θ_3 + θ_4) + c,    (9)

where c is a constant not involving Θ. It is easy to show that

∂ℓ_CQ/∂θ_1 = n_1/θ_1 + n_2 p_2/a_2 + n_3 p_3/a_3 − n_4/θ_4 + m_1/b_1 − m_2/b_2,    (10)
∂ℓ_CQ/∂θ_2 = n_2/a_2 − n_4/θ_4 + m_1/b_1 − m_2/b_2,    (11)
∂ℓ_CQ/∂θ_3 = n_3/a_3 − n_4/θ_4,    (12)

where a_2 = p_2 θ_1 + θ_2, a_3 = p_3 θ_1 + θ_3, b_1 = θ_1 + θ_2, and b_2 = θ_3 + θ_4. Hence, the observed information matrix can be expressed as

I_obs(Θ) = (ι_{rs})_{3×3},    (13)

where

ι_{11} = n_1/θ_1^2 + n_2 p_2^2/a_2^2 + n_3 p_3^2/a_3^2 + n_4/θ_4^2 + m_1/b_1^2 + m_2/b_2^2,
ι_{12} = ι_{21} = n_2 p_2/a_2^2 + n_4/θ_4^2 + m_1/b_1^2 + m_2/b_2^2,
ι_{13} = ι_{31} = n_3 p_3/a_3^2 + n_4/θ_4^2,
ι_{22} = n_2/a_2^2 + n_4/θ_4^2 + m_1/b_1^2 + m_2/b_2^2,
ι_{23} = ι_{32} = n_4/θ_4^2,
ι_{33} = n_3/a_3^2 + n_4/θ_4^2.    (14)

Let se(θ̂_i) denote the standard error of θ̂_i for i = 1, 2, 3. Note that se(θ̂_i) can be estimated by the square root of the i-th diagonal element of I_obs^{−1}(Θ̂); we denote the estimated value by ŝe(θ̂_i). Thus, a (1 − α)100% normal-based asymptotic confidence interval for θ_i can be constructed as

[θ̂_i − z_{α/2} ŝe(θ̂_i), θ̂_i + z_{α/2} ŝe(θ̂_i)],    (15)

where z_{α/2} is the upper α/2 quantile of the standard normal distribution.

Let ϑ = h(θ_1, θ_2, θ_3, θ_4) be an arbitrary differentiable function of θ; for example, ϑ = θ_i, or ϑ is the odds ratio ψ = θ_1 θ_3/(θ_2 θ_4). The delta method (e.g., [14, page 34]) can be used to approximate the standard error of ϑ̂ = h(θ̂_1, θ̂_2, θ̂_3, θ̂_4), and a (1 − α)100% normal-based asymptotic confidence interval for ϑ is given by

[ϑ̂ − z_{α/2} ŝe(ϑ̂), ϑ̂ + z_{α/2} ŝe(ϑ̂)],    (16)

where

ŝe(ϑ̂) = { (∂h/∂Θ)ᵀ I_obs^{−1}(Θ) (∂h/∂Θ) }^{1/2} evaluated at Θ = Θ̂,    (17)

with θ_4 = 1 − θ_1 − θ_2 − θ_3 substituted into h before differentiation.
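As an illustration of how (15)–(17) can be computed in practice, the following sketch obtains the observed information matrix by numerically differentiating the log-likelihood (9) and then applies the delta method to the odds ratio. The log-likelihood and the definition psi = θ_1 θ_3/(θ_2 θ_4) follow our assumed parametrization; all function names are ours.

```python
# Sketch: asymptotic standard errors and a delta-method interval for psi.
import numpy as np
from scipy.stats import norm

def loglik(tri, n_counts, m_counts, p):
    t1, t2, t3 = tri
    t4 = 1.0 - t1 - t2 - t3
    n1, n2, n3, n4 = n_counts
    m1, m2 = m_counts
    p1, p2, p3 = p
    return (n1 * np.log(p1 * t1) + n2 * np.log(p2 * t1 + t2)
            + n3 * np.log(p3 * t1 + t3) + n4 * np.log(t4)
            + m1 * np.log(t1 + t2) + m2 * np.log(t3 + t4))

def hessian(f, x, eps=1e-5):
    """Central-difference Hessian of f at x."""
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

def odds_ratio(theta):
    t1, t2, t3, t4 = theta
    return (t1 * t3) / (t2 * t4)          # assumed definition of psi

def asymptotic_ci_psi(theta_hat, n_counts, m_counts, p, level=0.95):
    tri = np.asarray(theta_hat[:3], dtype=float)
    info = -hessian(lambda x: loglik(x, n_counts, m_counts, p), tri)
    cov = np.linalg.inv(info)             # asymptotic covariance of (t1, t2, t3)
    # Delta method: numerical gradient of psi with respect to (t1, t2, t3)
    eps, grad = 1e-6, np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        up = np.append(tri + d, 1 - np.sum(tri + d))
        dn = np.append(tri - d, 1 - np.sum(tri - d))
        grad[i] = (odds_ratio(up) - odds_ratio(dn)) / (2 * eps)
    se_psi = np.sqrt(grad @ cov @ grad)
    z = norm.ppf(0.5 + level / 2)
    psi = odds_ratio(theta_hat)
    return psi, (psi - z * se_psi, psi + z * se_psi)
```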

3.3. Bootstrap Confidence Intervals

When a normal-based asymptotic confidence interval such as (16) falls below the lower bound zero or above the upper bound one, the bootstrap approach [15] can be used to construct a bootstrap confidence interval for ϑ. Based on the obtained MLE θ̂ = (θ̂_1, θ̂_2, θ̂_3, θ̂_4)ᵀ, we independently generate

(n_1*, n_2*, n_3*, n_4*)ᵀ ~ Multinomial(n; p_1 θ̂_1, p_2 θ̂_1 + θ̂_2, p_3 θ̂_1 + θ̂_3, θ̂_4),    (18)
(m_1*, m_2*)ᵀ ~ Multinomial(m; θ̂_1 + θ̂_2, θ̂_3 + θ̂_4).    (19)

Having obtained Y_obs,1* = {n; n_1*, ..., n_4*} and Y_obs,2* = {m; m_1*, m_2*}, we can calculate the bootstrap replication ϑ̂* = h(θ̂_1*, θ̂_2*, θ̂_3*, θ̂_4*) based on Y_obs* = {Y_obs,1*, Y_obs,2*} via the EM algorithm specified by (6) and (8). Independently repeating this process G times, we obtain G bootstrap replications {ϑ̂*_g, g = 1, ..., G}. Consequently, a (1 − α)100% bootstrap confidence interval for ϑ is given by

[ϑ̂_L, ϑ̂_U],    (20)

where ϑ̂_L and ϑ̂_U are the 100(α/2) and 100(1 − α/2) percentiles of {ϑ̂*_g, g = 1, ..., G}, respectively.
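A minimal sketch of the parametric bootstrap interval (18)–(20), reusing the em_mle and odds_ratio helpers from the earlier sketches; it again rests on our reading of the multinomial cell structure.

```python
# Sketch: parametric bootstrap confidence interval for psi.
import numpy as np

def bootstrap_ci(theta_hat, p, n, m, B=1000, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    t1, t2, t3, t4 = theta_hat
    main_cells = [p[0] * t1, p[1] * t1 + t2, p[2] * t1 + t3, t4]
    supp_cells = [t1 + t2, t3 + t4]
    reps = []
    for _ in range(B):
        n_star = rng.multinomial(n, main_cells)      # draw (18)
        m_star = rng.multinomial(m, supp_cells)      # draw (19)
        theta_star = em_mle(n_star, m_star, p)       # EM sketch from Section 3.1
        reps.append(odds_ratio(theta_star))
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```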

3.4. The Likelihood Ratio Test for Testing Association

The likelihood ratio statistic can be used to test whether the two binary random variables X and Y are independent or correlated. The corresponding null and alternative hypotheses are [16, page 45]

H_0: ψ = 1  versus  H_1: ψ ≠ 1.    (21)

The likelihood ratio statistic is defined by

Λ = −2 { ℓ_CQ(θ̂_R | Y_obs) − ℓ_CQ(θ̂ | Y_obs) },    (22)

where θ̂_R denotes the restricted MLE of θ under H_0, θ̂ denotes the unrestricted MLE of θ, which can be obtained by the EM algorithm specified by (6) and (8), and ℓ_CQ(θ | Y_obs) = log L_CQ(θ | Y_obs).

To find the restricted MLE θ̂_R, we also employ the EM algorithm. Under H_0, we have

θ_1 = (1 − π_X)(1 − π_Y),  θ_2 = π_X(1 − π_Y),  θ_3 = π_X π_Y,  θ_4 = (1 − π_X) π_Y,    (23)

where π_X = Pr(X = 1) and π_Y = Pr(Y = 1). In other words, under H_0 we only have two free parameters, π_X and π_Y. Having obtained the restricted MLEs π̂_X and π̂_Y, we can compute the restricted MLE θ̂_R from (23) by

θ̂_R = ( (1 − π̂_X)(1 − π̂_Y), π̂_X(1 − π̂_Y), π̂_X π̂_Y, (1 − π̂_X) π̂_Y )ᵀ.    (24)

In what follows, we consider the computation of the restricted MLEs π̂_X and π̂_Y. Under H_0, the complete-data likelihood function (5) becomes

L_CQ(π_X, π_Y | Y_com) ∝ π_X^{Z_2 + Z_3} (1 − π_X)^{Z_1 + Z_4} π_Y^{Z_3 + Z_4 + m_2} (1 − π_Y)^{Z_1 + Z_2 + m_1},    (25)

so that the restricted MLEs of π_X and π_Y based on the complete data are given by

π̂_X = (Z_2 + Z_3)/n,  π̂_Y = (Z_3 + Z_4 + m_2)/(n + m),    (26)

respectively. Thus, the M-step of the EM algorithm calculates (26), and the E-step computes the conditional expectations given in (8), where θ_1, ..., θ_4 are defined by (23). Finally, under H_0, Λ asymptotically follows the chi-squared distribution with one degree of freedom.
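The restricted EM under H_0 and the resulting likelihood ratio test can be sketched as follows, reusing loglik and em_mle from the previous sketches. The factorization of (25) into the two binomial-type pieces follows our assumed cell structure and should be checked against the paper's Tables 3 and 4.

```python
# Sketch: likelihood ratio test of independence (H0: psi = 1).
import numpy as np
from scipy.stats import chi2

def restricted_em(n_counts, m_counts, p, tol=1e-10, max_iter=5000):
    """EM for (pi_x, pi_y) under H0: X and Y independent (illustrative sketch)."""
    n1, n2, n3, n4 = n_counts
    m1, m2 = m_counts
    n, m = sum(n_counts), sum(m_counts)
    pi_x, pi_y = 0.5, 0.5
    for _ in range(max_iter):
        t1, t2 = (1 - pi_x) * (1 - pi_y), pi_x * (1 - pi_y)
        t3, t4 = pi_x * pi_y, (1 - pi_x) * pi_y
        z2 = n2 * t2 / (p[1] * t1 + t2)              # E-step, as in (8)
        z3 = n3 * t3 / (p[2] * t1 + t3)
        z1, z4 = n1 + (n2 - z2) + (n3 - z3), n4
        new_x = (z2 + z3) / n                        # M-step, as in (26)
        new_y = (z3 + z4 + m2) / (n + m)
        if max(abs(new_x - pi_x), abs(new_y - pi_y)) < tol:
            pi_x, pi_y = new_x, new_y
            break
        pi_x, pi_y = new_x, new_y
    return np.array([(1 - pi_x) * (1 - pi_y), pi_x * (1 - pi_y),
                     pi_x * pi_y, (1 - pi_x) * pi_y])

def lr_test(n_counts, m_counts, p):
    theta_hat = em_mle(n_counts, m_counts, p)
    theta_r = restricted_em(n_counts, m_counts, p)
    lam = -2 * (loglik(theta_r[:3], n_counts, m_counts, p)
                - loglik(theta_hat[:3], n_counts, m_counts, p))
    return lam, chi2.sf(lam, df=1)                   # statistic and p-value
```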

4. Bayesian Inferences

To derive the posterior mode of θ, we employ the EM algorithm again. The latent data Y_mis = {Z_2, Z_3} are the same as those in Section 3.1. Based on the complete-data likelihood function (5), if the Dirichlet distribution Dirichlet(a_1, a_2, a_3, a_4) is adopted as the prior distribution of θ, then the complete-data posterior distribution is a grouped Dirichlet (GD) distribution with the formal definition given by (A.2); that is,

θ | (Y_obs, Y_mis) ~ GD_{4,2,2}(a*, b*),    (27)

where a* = (a_1 + Z_1, a_2 + Z_2, a_3 + Z_3, a_4 + Z_4)ᵀ, b* = (m_1, m_2)ᵀ, Z_1 = n_1 + (n_2 − Z_2) + (n_3 − Z_3), and Z_4 = n_4. The conditional predictive distribution is

f(Y_mis | Y_obs, θ) = Binomial(Z_2 | n_2, θ_2/(p_2 θ_1 + θ_2)) × Binomial(Z_3 | n_3, θ_3/(p_3 θ_1 + θ_3)).    (28)

Therefore, the M-step of the EM algorithm is to calculate the complete-data posterior mode, which is given in closed form by (A.4) with (a, b) replaced by (a*, b*), and the E-step is to replace {Z_2, Z_3} in a* by the conditional expectations given by (8).

In addition, based on (27) and (28), the data augmentation algorithm of Tanner and Wong [13] can be used to generate posterior samples of θ. A sampling method for the GD distribution in (27) is given in the appendix.
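A rough sketch of that data augmentation scheme is given below: the I-step imputes the latent Category II and III counts from their binomial conditionals (28), and the P-step draws θ from the grouped Dirichlet posterior (27) using the constructive representation given in the appendix. The prior, the partition {θ_1, θ_2} versus {θ_3, θ_4}, and all function names are our assumptions for illustration.

```python
# Sketch: data augmentation sampler for the posterior of theta.
import numpy as np

def sample_gdd(a, b, rng):
    """One draw from GD_{4,2,2}(a; b) with the partition {1,2} | {3,4}."""
    u = rng.dirichlet(a[:2])
    v = rng.dirichlet(a[2:])
    r = rng.beta(a[0] + a[1] + b[0], a[2] + a[3] + b[1])
    return np.concatenate([r * u, (1 - r) * v])

def da_sampler(n_counts, m_counts, p, a_prior=(1, 1, 1, 1), draws=20000, rng=None):
    rng = rng or np.random.default_rng(0)
    n1, n2, n3, n4 = n_counts
    theta = np.array([0.25, 0.25, 0.25, 0.25])
    out = np.empty((draws, 4))
    for g in range(draws):
        t1, t2, t3, t4 = theta
        # I-step: impute the latent Category II / III counts hidden in Blocks 2 and 3
        z2 = rng.binomial(n2, t2 / (p[1] * t1 + t2))
        z3 = rng.binomial(n3, t3 / (p[2] * t1 + t3))
        z = np.array([n1 + (n2 - z2) + (n3 - z3), z2, z3, n4])
        # P-step: draw theta from the grouped Dirichlet complete-data posterior
        theta = sample_gdd(np.array(a_prior) + z, m_counts, rng)
        out[g] = theta
    return out   # discard a burn-in portion before summarizing
```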

5. Simulation Studies

In this section, two simulation studies are conducted to compare the efficiency of the proposed combination questionnaire model with that of the hidden sensitivity model of Tian et al. [8] (i.e., the main questionnaire only), where the p_j = Pr(W = j) are assumed to be known and equal to 1/3 for j = 1, 2, 3. In the first simulated example, the total sample size N = n + m in the combination questionnaire model is taken to be the same as the sample size in the hidden sensitivity model. In the second example, the sample size n for the main questionnaire in the combination questionnaire model is assumed to be equal to the sample size in the hidden sensitivity model.

In the first simulated example, a total of N = n + m participants are interviewed using the combination questionnaire model. The true values of θ are listed in the second column of Table 5. We first generate the observed counts {n_1, ..., n_4} and {m_1, m_2} so that n_1 + ... + n_4 = n and m_1 + m_2 = m. The EM algorithm specified by (6) and (8) is used to calculate the MLEs of θ. We repeated this experiment 1000 times. The average MLEs of θ are reported in the third column of Table 5, and the corresponding bias, variance, and mean square error (MSE) are displayed in the fourth, fifth, and sixth columns of Table 5. Next, N participants are interviewed using the hidden sensitivity model (i.e., the main questionnaire only). The corresponding results are reported in the last four columns of Table 5.
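For readers who wish to reproduce a comparison of this kind, the following sketch runs the repeated-sampling experiment for the combination questionnaire model and summarizes bias, variance, and MSE; the true θ, the group sizes, and the number of replications are placeholders to be set as in Table 5, and em_mle is the EM sketch from Section 3.1.

```python
# Sketch: repeated-sampling summary (bias, variance, MSE) for the EM estimator.
import numpy as np

def simulation(theta_true, p, n, m, reps=1000, seed=1):
    rng = np.random.default_rng(seed)
    t1, t2, t3, t4 = theta_true
    est = np.empty((reps, 4))
    for r in range(reps):
        n_counts = rng.multinomial(n, [p[0] * t1, p[1] * t1 + t2, p[2] * t1 + t3, t4])
        m_counts = rng.multinomial(m, [t1 + t2, t3 + t4])
        est[r] = em_mle(n_counts, m_counts, p)
    bias = est.mean(axis=0) - np.asarray(theta_true)
    var = est.var(axis=0)
    return bias, var, bias ** 2 + var        # MSE = bias^2 + variance
```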

From Table 5, we can see that the MLEs of θ under both the combination questionnaire model and the hidden sensitivity model are very close to their true values, while the MSEs of the MLEs in the combination questionnaire model are slightly larger than those in the hidden sensitivity model. These numerical results are not surprising, since in the hidden sensitivity model Tian et al. [8] implicitly assumed that all respondents strictly follow the design instructions; in other words, noncompliance does not occur. The introduction of the supplemental questionnaire in the combination questionnaire model can effectively reduce noncompliance, since more respondents will not be faced with the sensitive question, while the cost of introducing such a supplemental questionnaire is that we inevitably lose a little efficiency.

In the second simulated example, we assume that a total of N = n + m participants are interviewed using the combination questionnaire model, while only n are interviewed using the hidden sensitivity model. We again repeat the experiment 1000 times. The corresponding results are reported in Table 6.

From Table 6, we can see that the MSEs of the MLEs of θ in the combination questionnaire model are smaller than those in the hidden sensitivity model. In addition, by comparing the fifth columns of Tables 5 and 6, we can see that the precisions of the estimates for the proposed combination questionnaire model in the second simulated example are substantially improved over those in the first simulated example.

6. Analyzing Cervical Cancer Data in Atlanta

Williamson and Haber [17] reported a study that examined the relationship between disease status of cervical cancer and the number of sex partners, among other risk factors. Cases were 20–79-year-old women in Fulton or DeKalb county in Atlanta, Georgia, who were diagnosed with and ascertained to have invasive cervical cancer. Controls were randomly chosen from the same counties and the same age range. Table 7 gives the cross-classification of the number of sex partners ("few, 0–3" or "many, ≥4", denoted by X = 0 or X = 1) and disease status (control or case, denoted by Y = 0 or Y = 1). Generally, a sizable proportion (13.5% in this example) of the responses would be missing because of the sensitive question about the number of sex partners in a telephone interview. The objective is to examine whether an association exists between the number of sex partners and the disease status of cervical cancer.

For the purpose of illustration, we presume that {X = 0} is non-sensitive, although having 0–3 sex partners may still be somewhat sensitive for some respondents. To illustrate the proposed design and approaches, we let W = 1 (2, 3) if a respondent was born in January–April (May–August, September–December). It is then reasonable to assume that p_j = Pr(W = j) ≈ 1/3 for j = 1, 2, 3 and that W is independent of the sensitive binary variate X and the non-sensitive binary variate Y. For the ideal situation (i.e., no sampling errors), the observed counts {n_1, n_2, n_3, n_4} from the main questionnaire, as shown in Tables 1 and 3, can be constructed from the complete responses in Table 7. On the other hand, we can view the missing data in Table 7 as the observed counts from the supplemental questionnaire as shown in Table 2; from Table 4, this gives the counts m_1 and m_2. Therefore, we obtain the observed data Y_obs = {n_1, n_2, n_3, n_4; m_1, m_2}.

6.1. Likelihood-Based Inferences

Using suitable initial values, the EM algorithm in (6) and (8) converged after a small number of iterations. The resultant MLEs of θ and ψ are given in the second column of Table 8. Inverting the observed information matrix (13)–(14) evaluated at the MLE yields the asymptotic variance–covariance matrix of the MLEs; the estimated standard errors of θ̂_1, θ̂_2, and θ̂_3 are the square roots of its main diagonal elements, and the estimated standard errors of θ̂_4, ψ̂, and the other functions of θ reported in Table 8 are obtained from the delta-method formula (17).

These estimated standard errors are listed in the third column of Table 8. From (15) and (16), we can obtain the 95% asymptotic confidence intervals of θ and ψ, which are shown in the fourth column of Table 8.

Based on (18) and (19), we generate 10,000 bootstrap samples. The corresponding 95% bootstrap confidence intervals of θ and ψ are displayed in the last column of Table 8.

To test the null hypothesis H_0: ψ = 1 against H_1: ψ ≠ 1, we need to obtain the restricted MLE θ̂_R. Using suitable initial values, the EM algorithm based on (26) and (8) converged in 19 iterations, and the restricted MLEs of θ are then obtained from (24). The log-likelihood ratio statistic equals 12.469 with a p-value of 0.0004137. Since this p-value is far less than 0.05, H_0 is rejected at the 0.05 level of significance. Thus, we can conclude that there is an association between the number of sex partners and cervical cancer status based on the current data. This conclusion agrees with the two 95% confidence intervals of the odds ratio shown in Table 8, both of which exclude the value 1.

6.2. Bayesian Inferences

When the Dirichlet(1, 1, 1, 1) distribution (i.e., the uniform distribution on T_4) is adopted as the prior distribution of θ, the posterior modes of θ are equal to the corresponding MLEs. Using suitable initial values, we employ the data augmentation algorithm to generate 1,000,000 posterior samples of θ and discard the first half of the samples as burn-in. The Bayesian estimates of θ and ψ are given in Table 9. Since the lower bound of the Bayesian credible interval of ψ is larger than 1, we again conclude that there is an association between the number of sex partners and cervical cancer status.

The posterior densities of the parameters of interest (in particular, ψ), estimated by a kernel density smoother, are plotted in Figures 1 and 2.

7. Discussion

In this paper, we have developed a general framework of design and analysis for the combination questionnaire model, which consists of a main questionnaire and a supplemental questionnaire. In fact, the main questionnaire (see Table 1) is a generalization of the nonrandomized triangular model [18] and a special case of the multi-category triangular model [19]. The supplemental questionnaire (see Table 2) is a design of direct questioning. The introduction of the supplemental questionnaire can effectively reduce noncompliance, since more respondents will not be faced with the sensitive question, while the cost of introducing such a supplemental questionnaire is that we inevitably lose a little efficiency. The combination questionnaire model can be used to gather information for evaluating the association between one sensitive binary variable and one non-sensitive binary variable.

We note that the proposed combination questionnaire model has one limitation in applications; namely, it cannot be applied to the situation where both categories {X = 0} and {X = 1} are sensitive, as with income. For example, let X = 0 if a respondent's annual income is $25,000 or less and X = 1 if it is more than $25,000. For such cases, it is worthwhile to develop new designs to address this issue. One way is to replace the main questionnaire in Table 1 by a four-category parallel model (see [20, Section 4.1]). The other way is to employ the parallel model [21] to collect information on X and to employ the design of direct questioning to collect information on Y; we could then use logistic regression to estimate the odds ratio. This is a topic of our further research.

Appendix

The Mode of a Grouped Dirichlet Density and a Sampling Method from a GDD

Let T_n denote the n-dimensional open simplex in R^n, that is,

T_n = { (x_1, ..., x_n)ᵀ : x_i > 0 for i = 1, ..., n, and x_1 + ... + x_n = 1 }.    (A.1)

A random vector x = (x_1, ..., x_n)ᵀ ∈ T_n is said to follow a grouped Dirichlet distribution (GDD) with two partitions if the density of x is

GD_{n,2,s}(x | a, b) = c_GD^{−1} ( ∏_{i=1}^{n} x_i^{a_i − 1} ) ( Σ_{i=1}^{s} x_i )^{b_1} ( Σ_{i=s+1}^{n} x_i )^{b_2},  x ∈ T_n,    (A.2)

where a = (a_1, ..., a_n)ᵀ is a positive parameter vector, b = (b_1, b_2)ᵀ is a non-negative parameter vector, s is a known positive integer less than n, and the normalizing constant is given by

c_GD = B( Σ_{i=1}^{s} a_i + b_1, Σ_{i=s+1}^{n} a_i + b_2 ) × ( ∏_{i=1}^{s} Γ(a_i) / Γ(Σ_{i=1}^{s} a_i) ) × ( ∏_{i=s+1}^{n} Γ(a_i) / Γ(Σ_{i=s+1}^{n} a_i) ).    (A.3)

We write x ~ GD_{n,2,s}(a, b) on T_n or x_{−n} = (x_1, ..., x_{n−1})ᵀ ~ GD_{n,2,s}(a, b) on V_{n−1} = { (x_1, ..., x_{n−1})ᵀ : x_i > 0, x_1 + ... + x_{n−1} < 1 } to distinguish the two equivalent representations.

If a_i > 1 for i = 1, ..., n, then the mode of the grouped Dirichlet density (A.2) is given by [11, 22]

x̂_i = ( (a_i − 1)/Δ_1 ) × ( Δ_1 + b_1 )/Δ,  i = 1, ..., s,
x̂_i = ( (a_i − 1)/Δ_2 ) × ( Δ_2 + b_2 )/Δ,  i = s + 1, ..., n,    (A.4)

where Δ_1 = Σ_{i=1}^{s} (a_i − 1), Δ_2 = Σ_{i=s+1}^{n} (a_i − 1), and Δ = Δ_1 + Δ_2 + b_1 + b_2.
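A small sketch of (A.4), assuming every a_i exceeds one and the partition {1, ..., s} versus {s + 1, ..., n}:

```python
# Sketch: mode of a grouped Dirichlet density, following (A.4).
import numpy as np

def gdd_mode(a, b, s):
    """Mode of GD_{n,2,s}(a; b) on the simplex, assuming every a_i > 1
    (our reading of the Ng et al. [11] result quoted above)."""
    a = np.asarray(a, dtype=float)
    d1, d2 = a[:s] - 1.0, a[s:] - 1.0
    total = d1.sum() + d2.sum() + b[0] + b[1]
    s1 = (d1.sum() + b[0]) / total          # total mass of the first group at the mode
    s2 = (d2.sum() + b[1]) / total
    return np.concatenate([s1 * d1 / d1.sum(), s2 * d2 / d2.sum()])
```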

The following procedure can be used to generate i.i.d. samples from a GDD [11]. Let (1) y^(1) ~ Dirichlet(a_1, ..., a_s) on T_s; (2) y^(2) ~ Dirichlet(a_{s+1}, ..., a_n) on T_{n−s}; (3) R ~ Beta( Σ_{i=1}^{s} a_i + b_1, Σ_{i=s+1}^{n} a_i + b_2 ); and (4) y^(1), y^(2), and R be mutually independent.

Define

x = (x_1, ..., x_n)ᵀ = ( R·y^(1)ᵀ, (1 − R)·y^(2)ᵀ )ᵀ.    (A.5)

Then x ~ GD_{n,2,s}(a, b) on T_n, where a = (a_1, ..., a_n)ᵀ and b = (b_1, b_2)ᵀ.
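The four-step construction can be coded directly; the sketch below returns i.i.d. GDD draws under the Beta parameters stated in step (3), which reflect our reading of the Ng et al. [11] result.

```python
# Sketch: i.i.d. sampling from GD_{n,2,s}(a; b) via the stochastic representation (A.5).
import numpy as np

def gdd_sample(a, b, s, size=1, rng=None):
    """Draws from GD_{n,2,s}(a; b): two independent Dirichlet vectors glued
    together by a Beta-distributed weight."""
    rng = rng or np.random.default_rng()
    a = np.asarray(a, dtype=float)
    u = rng.dirichlet(a[:s], size)                              # step (1)
    v = rng.dirichlet(a[s:], size)                              # step (2)
    r = rng.beta(a[:s].sum() + b[0], a[s:].sum() + b[1], size)  # step (3)
    return np.hstack([r[:, None] * u, (1 - r)[:, None] * v])    # step (4)
```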

Acknowledgments

The authors are grateful to the Editor, an Associate Editor, and two anonymous referees for their helpful comments and suggestions which led to the improvement of the paper. They also would like to thank Miss Yin LIU of The University of Hong Kong for her help in the simulation studies. Jun-Wu Yu's research was partially supported by a Grant (no. 09BTJ012) from the National Social Science Foundation of China and a Grant from Hunan Provincial Science and Technology Department (no. 11JB1176). Partial work was done when Jun-Wu Yu visited The University of Hong Kong. Guo-Liang Tian's research was partially supported by a Grant (HKU 779210M) from the Research Grant Council of the Hong Kong Special Administrative Region.