Abstract

Stress-strength models have been studied frequently in recent years. A useful extension of these models is the conditional stress-strength model. The maximum likelihood estimator of the conditional stress-strength parameter, the asymptotic distribution of this estimator, and its confidence intervals are obtained for the Kumaraswamy distribution. In addition, Bayesian estimation and the bootstrap method are applied to the model.

1. Introduction

In reliability, the quantity R = P(Y < X) is named the “stress-strength model.” This model has applications in several fields in addition to reliability, such as biostatistics, quality control, engineering, psychology, stochastic precedence, medicine, and probabilistic mechanical design. For a comprehensive review with details, refer to the study by Kotz et al. [1]. Following this reference, a sample in a clinical study can be considered in such a way that Y and X are assumed to be the results of a treatment group and a control group, respectively. Therefore, the expression (1 − R) measures the effect of treatment.

For some other applications, refer to the study by Ventura and Racugno (2011). In terms of reliability, X is considered the strength of a component, which is subjected to the stress Y. Therefore, R and (1 − R) indicate the system performance and the probability of system failure, respectively.

Some authors have extensively studied the quantity R, both in the parametric case for different distributions and in the nonparametric case. Refer to the study by Rezaei et al. [2] for a list of distributions used in this matter.

In usual situations, it is known that X and Y are bigger than two fixed values a and b. Especially when X and Y are the lifetimes of two components of a system, we may know that these components have been alive for a known time before we make inferences about R. Therefore, Saber and Khorshidian [3] introduced the conditional stress-strength model as follows:

R(a, b) = P(Y < X | X > a, Y > b). (1)

The quantity R is a special case of this quantity if we set a = b = 0. In the study by Saber and Khorshidian [3], if the independent random variables X and Y are continuous, then

R(a, b) = [1/(P(X > a)P(Y > b))] ∫ f_X(x)[F_Y(x) − F_Y(b)] dx, (2)

where the integral runs over x > max(a, b).

They worked on the exponential distribution in that first paper on conditional stress-strength models.

In this paper, the Kumaraswamy distribution is applied to conditional stress-strength models. The Kumaraswamy distribution is a continuous distribution taking values, like the beta distribution, in the interval [0, 1]. In this respect, it is very similar to the beta distribution. However, it has significant practical advantages: in particular, its cumulative distribution function is invertible in closed form, so quantiles and random variates are available explicitly.

In the studies by Kumaraswamy [4, 5], it was shown that hydrological data, for example, daily rainfall and daily flow, are not compatible with well-known and widely used distributions such as the normal, log-normal, and beta distributions. In addition, families such as the Johnson system and polynomial-transformed normal distributions have this problem too. Therefore, he defined a new probability density function, the sinepower probability density function, followed by the double-bounded density now known as the Kumaraswamy distribution. It seems that researchers dealing with such data have considered this type of distribution (see Sundar and Subbiah [6]; Fletcher and Ponnambalam [7]; Seifi et al. [8]; Ponnambalam et al. [9]; and Ganji et al. (2006)).

The Kumaraswamy (Kw) distribution has a probability density function (pdf) and a cumulative distribution function as follows:

f(x) = αβx^(α−1)(1 − x^α)^(β−1), 0 < x < 1, and F(x) = 1 − (1 − x^α)^β, 0 < x < 1,

respectively, where α and β are two positive shape parameters. Some works on stress-strength models using this distribution are as follows. Nadar et al. [10] studied classical and Bayesian estimation of R, and then Nadar and Kizilaslan [11] performed the same study but using upper record values. Estimation of the reliability of multicomponent stress-strength models (introduced by Bhattacharyya and Johnson (1974)) has been studied by Dey et al. [12]. Finally, the multicomponent stress-strength model based on progressively censored samples has been studied by Kohansal [13].
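Because the cdf above has a closed-form inverse, sampling by inversion is immediate. A minimal Python sketch (the function names are ours, not from the paper):

```python
import random

# Kumaraswamy(alpha, beta) on (0, 1): pdf, cdf, and an inverse-cdf sampler
# implied by the closed form F(x) = 1 - (1 - x^alpha)^beta.
def kw_pdf(x, alpha, beta):
    # f(x) = alpha*beta*x^(alpha-1)*(1 - x^alpha)^(beta-1)
    return alpha * beta * x**(alpha - 1) * (1 - x**alpha)**(beta - 1)

def kw_cdf(x, alpha, beta):
    # F(x) = 1 - (1 - x^alpha)^beta; closed form, unlike the beta cdf
    return 1 - (1 - x**alpha)**beta

def kw_sample(alpha, beta, rng=random):
    # Quantile function: F^{-1}(u) = (1 - (1 - u)^(1/beta))^(1/alpha)
    u = rng.random()
    return (1 - (1 - u)**(1/beta))**(1/alpha)
```

The inversion in kw_sample is the same one used by rkus() in the Appendix code.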

The rest of the paper is organized as follows: Section 2 is devoted to studying R(a, b) in the case of the Kumaraswamy distribution. In the continuation of Section 2, the ML estimator of R(a, b), its corresponding asymptotic distribution, and confidence intervals are presented. Furthermore, two methods, Bayesian estimation and the bootstrap, are applied to our recommended model. Finally, simulation results are presented in Section 3.

2. Conditional Stress-Strength Model for Kumaraswamy Distribution

In this section, the quantity R(a, b) in (2) is computed when the distribution of the components is Kumaraswamy.

Theorem 1. Suppose X ∼ Kw(α, β1) and Y ∼ Kw(α, β2) are independent random variables; then

R(a, b) = (β2/(β1 + β2)) ((1 − b^α)/(1 − a^α))^β1, if a < b,
R(a, b) = 1 − (β1/(β1 + β2)) ((1 − a^α)/(1 − b^α))^β2, if a > b,

and both expressions reduce to β2/(β1 + β2) when a = b.
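The closed form of Theorem 1 (as implemented by Rab() in the Appendix) can be checked against a direct Monte Carlo estimate of the conditional probability. A Python sketch, under the assumption R(a, b) = P(Y < X | X > a, Y > b) with X ∼ Kw(α, β1) and Y ∼ Kw(α, β2):

```python
import random

def r_ab(b1, b2, alf, a, b):
    # Closed form of Theorem 1 (mirrors Rab() in the Appendix code).
    if a < b:
        return (b2/(b1 + b2)) * ((1 - b**alf)/(1 - a**alf))**b1
    return 1 - (b1/(b1 + b2)) * ((1 - a**alf)/(1 - b**alf))**b2

def r_ab_mc(b1, b2, alf, a, b, n=200_000, seed=1):
    # Monte Carlo estimate of P(Y < X | X > a, Y > b) for independent
    # X ~ Kw(alf, b1), Y ~ Kw(alf, b2), using inverse-cdf sampling.
    rng = random.Random(seed)
    kw = lambda bet: (1 - (1 - rng.random())**(1/bet))**(1/alf)
    hits = total = 0
    while total < n:
        x, y = kw(b1), kw(b2)
        if x > a and y > b:          # keep only draws satisfying the condition
            total += 1
            hits += y < x
    return hits/total
```

With (α, β1, β2) = (5, 3.5, 3.25) and (a, b) = (0.1, 0.2), the design values of the Appendix, the two estimates agree to Monte Carlo accuracy.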

Proof. Substitute the Kumaraswamy functions F_X, F_Y, f_X, and f_Y in equation (2). First, let a < b; then the first expression follows. If a > b, the second expression follows in the same way. Finally, if a = b, the probability can be calculated like the previous cases.

In the continuation of this section, we find the MLE of R(a, b), and then the asymptotic distribution of this estimator is found in order to construct its confidence interval.
Let X1, …, Xn be a random sample of size n of X and Y1, …, Ym be a random sample of size m of Y such that the two samples are independent. Then, the likelihood function is obtained. To facilitate the calculation, use the form of the likelihood function in [14]. Therefore, the MLEs of the parameters β1, β2, and α are obtained by solving the likelihood equations. Then,

β̂1 = −n/Σ_{i=1}^{n} log(1 − X_i^α̂), β̂2 = −m/Σ_{j=1}^{m} log(1 − Y_j^α̂),

and the MLE α̂ is obtained by solving the following nonlinear equation:

(n + m)/α + Σ_i log X_i + Σ_j log Y_j + (n/Σ_i log(1 − X_i^α) + 1) Σ_i X_i^α log X_i/(1 − X_i^α) + (m/Σ_j log(1 − Y_j^α) + 1) Σ_j Y_j^α log Y_j/(1 − Y_j^α) = 0.

The above equation is solved by numerical methods.
Therefore, the MLE of R(a, b) becomes the plug-in estimator obtained by substituting α̂, β̂1, and β̂2 into the expressions of Theorem 1.
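The estimation scheme above can be sketched in a few lines. The following Python mirrors lalfa() and thetahat() from the Appendix, with a simple bisection standing in for R's uniroot; the bracket [1e-4, 20] follows the Appendix code:

```python
import math

def score_alpha(alf, xs, ys):
    # Profile score for the common shape alpha; its root is the MLE
    # (mirrors lalfa() in the Appendix).
    n, m = len(xs), len(ys)
    lx = sum(math.log(x) for x in xs); ly = sum(math.log(y) for y in ys)
    lxx = sum(math.log(1 - x**alf) for x in xs)
    lyy = sum(math.log(1 - y**alf) for y in ys)
    lxxx = sum(x**alf * math.log(x)/(1 - x**alf) for x in xs)
    lyyy = sum(y**alf * math.log(y)/(1 - y**alf) for y in ys)
    return (n + m)/alf + lx + ly + (n/lxx + 1)*lxxx + (m/lyy + 1)*lyyy

def mle(xs, ys, lo=1e-4, hi=20.0):
    # Bisection for alpha-hat, then the closed-form beta-hats:
    # beta1-hat = -n / sum log(1 - x_i^alpha-hat), and similarly for beta2-hat.
    flo = score_alpha(lo, xs, ys)
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if flo * score_alpha(mid, xs, ys) <= 0:
            hi = mid
        else:
            lo, flo = mid, score_alpha(mid, xs, ys)
    a_hat = 0.5*(lo + hi)
    b1_hat = -len(xs)/sum(math.log(1 - x**a_hat) for x in xs)
    b2_hat = -len(ys)/sum(math.log(1 - y**a_hat) for y in ys)
    return b1_hat, b2_hat, a_hat
```

This is a sketch only: it assumes the score changes sign on the bracket, which holds for well-behaved samples such as those simulated in Section 3.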

2.1. Asymptotic Distribution of the MLE of R(a, b)

In this section, we compute the asymptotic distributions of θ̂ = (α̂, β̂1, β̂2) and of the MLE of R(a, b). Hence, the Fisher information matrix of θ = (α, β1, β2), denoted by I(θ), is given as follows, where J(θ) is the observed information matrix, i.e., I(θ) = E[J(θ)], and the elements of J(θ) are as follows:

The elements of the Fisher information matrix are obtained by taking the expectations of the elements of the observed information matrix. The following integrals can be helpful when finding the elements of the Fisher information matrix, where B(·, ·) is the beta function and ψ(·) is the digamma function.

So, as n ⟶ ∞ and m ⟶ ∞, by using the multivariate CLT for the MLE θ̂ = (α̂, β̂1, β̂2), we have that θ̂ is asymptotically normal with mean θ and covariance matrix given by the inverse of the Fisher information matrix, I⁻¹(θ).

Now, we can use the Delta method to express the following theorem.

Theorem 2. As n ⟶ ∞ and m ⟶ ∞, the MLE of R(a, b) is asymptotically normal with mean R(a, b) and a variance determined by the delta method, where the coefficients b1, b2, and b3 are the partial derivatives of R(a, b) with respect to α, β1, and β2, respectively.

Proof. Let g denote the differentiable map taking (α, β1, β2) to R(a, b). Since θ̂ is asymptotically normal, by Cramér's theorem (the delta method), g(θ̂) is asymptotically normal with the stated variance, where the gradient of g has components b1, b2, and b3. Now, b1, b2, and b3 are computed from the partial derivatives of R(a, b), whose expressions differ for the cases a < b, a > b, and a = b, respectively.
Using the previous theorem, the confidence interval for R(a, b) can be obtained.

Theorem 3. A (1 − α)100% confidence interval for R(a, b) is given by the Wald interval centered at the MLE of R(a, b) with half-width equal to the normal quantile times the estimated asymptotic standard deviation of Theorem 2. In the above, the quantities are similar to those in Theorem 2, with α̂, β̂1, and β̂2 substituted instead of α, β1, and β2.
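In practice, the interval of Theorem 3 is computed from a plug-in variance. The following Python sketch illustrates the delta-method step with a numerical gradient; the covariance matrix cov is a hypothetical stand-in for the inverse Fisher information evaluated at the MLEs, and the parameter order (α, β1, β2) follows the Appendix code:

```python
import math

def r_ab(theta, a, b):
    # Closed form of Theorem 1 for the case a < b; theta = (alpha, beta1, beta2).
    alf, b1, b2 = theta
    return (b2/(b1 + b2)) * ((1 - b**alf)/(1 - a**alf))**b1

def grad(f, theta, h=1e-6):
    # Central finite differences: a numerical stand-in for the closed-form
    # derivatives (the b_i of Theorem 2 / moshtag() in the Appendix).
    g = []
    for i in range(len(theta)):
        tp = list(theta); tm = list(theta)
        tp[i] += h; tm[i] -= h
        g.append((f(tp) - f(tm))/(2*h))
    return g

def wald_ci(theta_hat, cov, a, b):
    # Delta method: Var(R-hat) ~ g' cov g, then R-hat +/- z * sqrt(Var).
    r = r_ab(theta_hat, a, b)
    g = grad(lambda t: r_ab(t, a, b), theta_hat)
    var = sum(g[i]*cov[i][j]*g[j] for i in range(3) for j in range(3))
    z = 1.959963984540054  # standard normal 0.975 quantile
    half = z*math.sqrt(var)
    return r - half, r + half
```

For instance, wald_ci((5.0, 3.5, 3.25), cov, 0.1, 0.2) with a (hypothetical) diagonal cov returns an interval centered at the point estimate.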

2.2. Bayesian Estimation of

In this section, we use the Bayesian method to approximate the posterior distribution of the quantities of interest in the conditional stress-strength model for the Kumaraswamy distribution, based on the MCMC method.

Therefore, suppose π(α, β1, β2) is the joint prior density for (α, β1, β2), such that α ∼ Gamma(a1, b1), β1 ∼ Gamma(a2, b2), and β2 ∼ Gamma(a3, b3).

In these priors, Gamma(a, b) denotes the gamma distribution with mean a/b and variance a/b², where a and b are known positive hyperparameters. We assume that the prior distributions of α, β1, and β2 are independent.

Then, the joint posterior density of (α, β1, β2) is proportional to the product of the likelihood function and the joint prior.

Hence, the full conditional densities of α, β1, and β2 follow, up to normalizing constants.

Bayesian inference for the parameters α, β1, and β2 can be performed using the Metropolis–Hastings algorithm (see Chib and Greenberg [15]), considering the conditional distributions as the target densities.
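As an illustration of the sampler, a random-walk Metropolis step for the common shape α can be written as below. The log target mirrors falfa() in the Appendix (likelihood terms times a Gamma(a1 + n + m, b1) kernel, with the Appendix's a1 = 1 and b1 = 0.5 as defaults); the proposal step size is an illustrative assumption, not a value from the paper:

```python
import math, random

def log_target(alf, xs, ys, b1s, b2s, a1=1.0, c1=0.5):
    # Log full conditional of alpha up to a constant, mirroring falfa():
    # (alf-1)*sum(log data) + (beta-1)*sum(log(1 - data^alf)) terms plus a
    # Gamma(a1 + n + m, c1) kernel; a1, c1 play the roles of a1, b1.
    if alf <= 1e-8:
        return -math.inf
    n, m = len(xs), len(ys)
    t = (alf - 1)*(sum(math.log(x) for x in xs) + sum(math.log(y) for y in ys))
    t += (b1s - 1)*sum(math.log(1 - x**alf) for x in xs)
    t += (b2s - 1)*sum(math.log(1 - y**alf) for y in ys)
    t += (a1 + n + m - 1)*math.log(alf) - c1*alf
    return t

def mh_alpha(n_iter, alf0, xs, ys, b1s, b2s, step=0.3, seed=7):
    # Random-walk Metropolis: propose alpha' ~ N(alpha, step^2) and accept
    # with probability min(1, target(alpha')/target(alpha)).
    rng = random.Random(seed)
    cur, chain = alf0, []
    for _ in range(n_iter):
        prop = rng.gauss(cur, step)
        diff = log_target(prop, xs, ys, b1s, b2s) - log_target(cur, xs, ys, b1s, b2s)
        if rng.random() < math.exp(min(0.0, diff)):
            cur = prop
        chain.append(cur)
    return chain
```

The analogous steps for β1 and β2 (mhbeta() in the Appendix) follow the same pattern with their own gamma kernels.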

2.3. Bootstrap Confidence Intervals

Similar to Section 2.2, we study the confidence interval based on the percentile bootstrap method for the case of a common shape parameter. The algorithm is as follows:
Step 1: For the samples x1, …, xn and y1, …, ym, calculate the MLEs (α̂, β̂1, β̂2).
Step 2: Use (α̂, β̂1) and (α̂, β̂2) to generate bootstrap samples x* and y*, respectively, and then, using the generated samples, calculate the bootstrap estimate of R(a, b), say R̂*(a, b).
Step 3: Repeat the previous step N times to generate the bootstrap estimates R̂*_1(a, b), …, R̂*_N(a, b).

Now, sort the N bootstrap estimates in ascending order. The approximate 100(1 − γ)% confidence interval of R(a, b) is then given by the pair of ordered bootstrap estimates of orders γ/2 and 1 − γ/2, where the order-q bootstrap estimate is the empirical q-quantile of R̂*_1(a, b), …, R̂*_N(a, b).
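A compact Python version of Steps 1–3 is given below. For brevity, the bootstrap statistic here is the empirical conditional proportion rather than the MLE-based estimate, and the resampling is nonparametric (with replacement from the data) rather than parametric from the fitted Kumaraswamy laws as in Step 2; the percentile mechanism is identical whichever estimator is plugged in, and all function names are ours:

```python
import random

def r_emp(xs, ys, a, b):
    # Empirical estimate of P(Y < X | X > a, Y > b): compare every retained
    # x (> a) with every retained y (> b).
    xa = [x for x in xs if x > a]
    yb = [y for y in ys if y > b]
    return sum(y < x for x in xa for y in yb)/(len(xa)*len(yb))

def percentile_boot_ci(xs, ys, a, b, n_boot=300, level=0.95, seed=3):
    # Steps 2-3: resample with replacement, re-estimate, take quantiles.
    rng = random.Random(seed)
    stats = sorted(
        r_emp([rng.choice(xs) for _ in xs], [rng.choice(ys) for _ in ys], a, b)
        for _ in range(n_boot)
    )
    g = (1 - level)/2
    return stats[int(g*n_boot)], stats[int((1 - g)*n_boot) - 1]
```

The returned pair is the percentile interval; replacing r_emp by the MLE-based plug-in estimator reproduces the algorithm of this section exactly.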

3. Simulation Study

In this section, we simulate the results for different sample sizes to better illustrate the methods presented in this paper (classical estimation, Bayesian estimation, and bootstrap). The codes of this section are provided in the Appendix. The two known threshold parameters a = 0.1 and b = 0.2 are used in our study. For the other parameters of the model, we consider two cases of (α, β1, β2), together with their corresponding values of R(a, b). In this simulation, several combinations of sample sizes (n, m) are used. All results are the means of 5000 iterations. Estimation has been accomplished by three methods: MLE, Bayesian, and bootstrap. For comparing these methods, we compute the bias and MSE. In addition, coverage probabilities (CPs) and lengths of confidence intervals have been computed. Our findings are presented in Tables 1–6. As these tables demonstrate, Bayesian estimation is the best among the three methods with respect to the criteria MSE, CP, and length of confidence interval. However, ML estimation has less bias than the other two studied methods.

4. Discussion and Conclusion

The conditional stress-strength model, an interesting extension of the stress-strength model in reliability, was studied for the Kumaraswamy distribution. Three estimation methods were applied for statistical inference in this model.

The stress-strength model has been investigated for many distributions, such as the generalized logistic distribution, the generalized failure rate distribution, and the Rayleigh and half-normal distributions (see, for instance, Rasekhi et al. [16] and Alamri et al. [17]). These distributions may be applied to the conditional stress-strength model, too. We are going to study this model for the generalized logistic distribution in the next step.

The generalized stress-strength parameter () introduced by Saber et al. [18] is given by

This quantity is related to a system with three components, while R(a, b) is defined for systems which have two components.
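Although the exact definition in [18] may differ, one natural three-component quantity is P(X < Y < Z) for independent components. A hedged Monte Carlo sketch, assuming Kumaraswamy margins with a common shape α:

```python
import random

def p_ordered_mc(alf, b1, b2, b3, n=120_000, seed=11):
    # Monte Carlo estimate of P(X < Y < Z) for independent X ~ Kw(alf, b1),
    # Y ~ Kw(alf, b2), Z ~ Kw(alf, b3). Illustrative only: the generalized
    # parameter of [18] may be defined differently.
    rng = random.Random(seed)
    kw = lambda bet: (1 - (1 - rng.random())**(1/bet))**(1/alf)
    hits = 0
    for _ in range(n):
        x, y, z = kw(b1), kw(b2), kw(b3)
        hits += x < y < z
    return hits/n
```

In the exchangeable case β1 = β2 = β3, this probability is exactly 1/6, which gives a quick sanity check.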

For future studies, a possible extension of (33) is recommended in the following:

Appendix

Program Codes

rm(list = ls())
# Design constants and starting values
a <- 0.1; b <- 0.2; n <- 50; m <- 50
alfa <- 5; beta1 <- 3.5; beta2 <- 3.25
N <- 1000; n.boot <- 200; ALFA <- 0.05
sigma2a <- 0.36; sigma2b <- 4
a1 <- 1; b1 <- 0.5; a2 <- 1.5; b2 <- 1; a3 <- 1.5; b3 <- 1.25
alfas <- 0.9; beta1s <- 6.25; beta2s <- 10.25
seed <- 32101456; n.mh <- 1; n.gs <- 10000

# Conditional stress-strength parameter R(a, b) of Theorem 1
Rab <- function(b1, b2, alf) {
  if (a < b) m1 <- (b2/(b1 + b2))*((1 - b^alf)/(1 - a^alf))^b1
  if (a > b) m1 <- 1 - (b1/(b1 + b2))*((1 - a^alf)/(1 - b^alf))^b2
  return(m1)
}

# Inverse-cdf sampler for the Kumaraswamy distribution
rkus <- function(n, alf, bet) {
  u <- runif(n); d <- (1 - u)^(1/bet); dd <- (1 - d)^(1/alf)
  return(dd)
}

# Score equation for the common shape parameter alpha
lalfa <- function(alf) {
  lx <- sum(log(x)); ly <- sum(log(y))
  lxx <- sum(log(1 - x^alf)); lyy <- sum(log(1 - y^alf))
  lxxx <- sum(x^alf*log(x)/(1 - x^alf)); lyyy <- sum(y^alf*log(y)/(1 - y^alf))
  d <- (n + m)/alf + lx + ly
  d1 <- (n/lxx + 1)*lxxx; d2 <- (m/lyy + 1)*lyyy
  return(d + d1 + d2)
}

# ML estimation of (beta1, beta2, alpha) from fresh samples
thetahat <- function(n, m, b1, b2, alf) {
  x <- rkus(n, alf, b1); y <- rkus(m, alf, b2)
  lalfa <- function(alf) {
    lx <- sum(log(x)); ly <- sum(log(y))
    lxx <- sum(log(1 - x^alf)); lyy <- sum(log(1 - y^alf))
    lxxx <- sum(x^alf*log(x)/(1 - x^alf)); lyyy <- sum(y^alf*log(y)/(1 - y^alf))
    d <- (n + m)/alf + lx + ly
    d1 <- (n/lxx + 1)*lxxx; d2 <- (m/lyy + 1)*lyyy
    return(d + d1 + d2)
  }
  alfahat <- uniroot(lalfa, c(0.0001, 20))$root
  lxx <- sum(log(1 - x^alfahat)); lyy <- sum(log(1 - y^alfahat))
  beta1hat <- -n/lxx; beta2hat <- -m/lyy
  return(c(beta1hat, beta2hat, alfahat))
}

# Fisher information matrix of (alpha, beta1, beta2)
fishermi <- function(n, m, b1, b2, alf) {
  a1 <- beta(2, b1 - 2); a2 <- beta(2, b2 - 2); a3 <- beta(2, b1 - 1); a4 <- beta(2, b2 - 1)
  c1 <- digamma(2) - digamma(b1); c2 <- digamma(2) - digamma(b2)
  c11 <- trigamma(2) - trigamma(b1); c21 <- trigamma(2) - trigamma(b2)
  c3 <- digamma(2) - digamma(b1 + 1); c4 <- digamma(2) - digamma(b2 + 1)
  J <- matrix(0, 3, 3)
  J[1, 1] <- (n + m + n*b1*(b1 - 1)*a1*c1^2*c11 + m*b2*(b2 - 1)*a2*c2*c21)/alf^2
  J[1, 2] <- J[2, 1] <- n*b1*a3*c3/alf
  J[1, 3] <- J[3, 1] <- m*b2*a4*c4/alf
  J[2, 2] <- n/b1^2; J[3, 3] <- m/b2^2
  return(J)
}

# Partial derivatives of R(a, b) with respect to (alpha, beta1, beta2)
moshtag <- function(b1, b2, alf) {
  if (a < b) {
    e1 <- -((b2/(b1 + b2))*(((1 - b^alf)/(1 - a^alf))^(b1 - 1)*
           (b1*(b^alf*log(b)/(1 - a^alf) - (1 - b^alf)*(a^alf*log(a))/(1 - a^alf)^2))))
    e2 <- (b2/(b1 + b2))*(((1 - b^alf)/(1 - a^alf))^b1*log((1 - b^alf)/(1 - a^alf))) -
          b2/(b1 + b2)^2*((1 - b^alf)/(1 - a^alf))^b1
    e3 <- (1/(b1 + b2) - b2/(b1 + b2)^2)*((1 - b^alf)/(1 - a^alf))^b1
  }
  if (a > b) {
    e1 <- (b1/(b1 + b2))*(((1 - a^alf)/(1 - b^alf))^(b2 - 1)*
          (b2*(a^alf*log(a)/(1 - b^alf) - (1 - a^alf)*(b^alf*log(b))/(1 - b^alf)^2)))
    e2 <- -((1/(b1 + b2) - b1/(b1 + b2)^2)*((1 - a^alf)/(1 - b^alf))^b2)
    e3 <- -((b1/(b1 + b2))*(((1 - a^alf)/(1 - b^alf))^b2*log((1 - a^alf)/(1 - b^alf))) -
          b1/(b1 + b2)^2*((1 - a^alf)/(1 - b^alf))^b2)
  }
  return(c(e1, e2, e3))
}

# Delta-method variance of the MLE of R(a, b)
sigma2 <- function(n, m, b1, b2, alf) {
  JJ <- fishermi(n, m, b1, b2, alf); mosh <- moshtag(b1, b2, alf)
  a5 <- t(mosh) %*% solve(JJ) %*% mosh
  return(a5)
}

# Monte Carlo study of the ML estimator
R120 <- Rab(beta1, beta2, alfa)
R12h <- bias12 <- MSE12 <- L12 <- CP12 <- rep(0, 0)
za <- qnorm(1 - ALFA/2)
for (i in 1:N) {
  x <- rkus(n, alfa, beta1); y <- rkus(m, alfa, beta2)
  tethat <- thetahat(n, m, beta1, beta2, alfa)
  b1hat <- tethat[1]; b2hat <- tethat[2]; ahat <- tethat[3]
  if ((b1hat > 2) & (b2hat > 2)) R12hat <- Rab(b1hat, b2hat, ahat)
  var12 <- sigma2(n, m, b1hat, b2hat, ahat)
  bias12[i] <- R12hat - R120; MSE12[i] <- (R12hat - R120)^2
  L12[i] <- 2*1.96*sqrt(var12)
  CP12[i] <- sum((R120 >= (R12hat - za*sqrt(var12))) & (R120 <= (R12hat + za*sqrt(var12))))
  R12h[i] <- R12hat
}
Rhat <- mean(R12h); bias <- mean(bias12); mse <- mean(MSE12); lm <- mean(L12); cp <- mean(CP12)

# Percentile bootstrap confidence interval
lower <- max(round(N*ALFA/2), 1); upper <- min(round(N*(1 - ALFA/2)), N)
fboot <- function(N) {
  R12bo <- rep(0, 0)
  for (i in 1:N) {
    x <- rkus(n, alfa, beta1); y <- rkus(m, alfa, beta2)
    tethat <- thetahat(n, m, beta1, beta2, alfa)
    b1hat <- tethat[1]; b2hat <- tethat[2]; ahat <- tethat[3]
    if ((b1hat > 2) & (b2hat > 2)) R12hat <- Rab(b1hat, b2hat, ahat)
    R12bo[i] <- R12hat
  }
  Rs <- sort(R12bo); lb <- Rs[lower]; ub <- Rs[upper]
  L <- ub - lb; CP <- sum((R120 >= lb) & (R120 <= ub))
  return(c(L, CP))
}
Lboot <- rep(0, 0); CPboot <- rep(0, 0)
for (i in 1:n.boot) {
  results <- fboot(N)
  Lboot[i] <- results[1]; CPboot[i] <- results[2]
}
Lboot <- mean(Lboot); CPboot <- mean(CPboot)
initial <- c(alfa, beta1, beta2, n, m, R120)
dmle <- c(Rhat, bias, mse, cp, lm)
dboot <- c(CPboot, Lboot)
dclassic <- c(initial, dmle, dboot)

# Kumaraswamy pdf and full conditionals for the Bayesian analysis
dkus <- function(x, alfa, beta) alfa*beta*x^(alfa - 1)*(1 - x^alfa)^(beta - 1)
falfa <- function(alfa, beta1, beta2, x, y) {
  n <- length(x); m <- length(y)
  d1 <- (prod(x)*prod(y))^(alfa - 1)
  d2 <- prod(1 - x^alfa)^(beta1 - 1)
  d3 <- prod(1 - y^alfa)^(beta2 - 1)
  d4 <- dgamma(alfa, a1 + m + n, b1)
  return(d1*d2*d3*d4)
}
fbeta <- function(beta, alfa, x, a, b) {
  n <- length(x)
  d1 <- prod(1 - x^alfa)^(beta - 1)
  d2 <- dgamma(beta, a + n, b)
  return(d1*d2)
}

# Metropolis-Hastings steps for alpha, beta1, and beta2
mhalfa <- function(nmh, alfas, beta1s, beta2s, x, y) {
  sample <- rep(0, nmh)
  for (i in 1:nmh) {
    alfa <- rnorm(1, alfas, sqrt(sigma2a))
    a.p1 <- falfa(alfa, beta1s, beta2s, x, y)
    a.p2 <- falfa(alfas, beta1s, beta2s, x, y)
    accept.prob <- a.p1/a.p2
    if (is.nan(accept.prob)) sample[i] <- alfas
    else {
      ratio <- min(1, accept.prob); u <- runif(1)
      if (u <= ratio) sample[i] <- alfa else sample[i] <- alfas
    }
    alfas <- sample[i]
  }
  return(mean(sample))
}
mhbeta <- function(nmh, betas, alfas, x, a, b) {
  sample <- rep(0, nmh)
  for (i in 1:nmh) {
    beta <- rnorm(1, betas, sqrt(sigma2b))
    a.p1 <- fbeta(beta, alfas, x, a, b)
    a.p2 <- fbeta(betas, alfas, x, a, b)
    accept.prob <- a.p1/a.p2
    if (is.nan(accept.prob)) sample[i] <- betas
    else {
      ratio <- min(1, accept.prob); u <- runif(1)
      if (u <= ratio) sample[i] <- beta else sample[i] <- betas
    }
    betas <- sample[i]
  }
  return(mean(sample))
}

# Gibbs sampler combining the Metropolis-Hastings steps
GSnew <- function(N.GS, N.MH, alfas, beta1s, beta2s, x, y) {
  n <- length(x); m <- length(y)
  alfa <- beta1 <- beta2 <- Rhat <- rep(0, N.GS)
  for (i in 1:N.GS) {
    alfa[i] <- mhalfa(N.MH, alfas, beta1s, beta2s, x, y)
    beta1[i] <- mhbeta(N.MH, beta1s, alfa[i], x, a2, b2)
    beta2[i] <- mhbeta(N.MH, beta2s, alfa[i], y, a3, b3)
    alfas <- alfa[i]; beta1s <- beta1[i]; beta2s <- beta2[i]
    Rhat[i] <- Rab(beta1[i], beta2[i], alfa[i])
  }
  R12h <- mean(Rhat); var12 <- var(Rhat)
  bias12 <- mean(Rhat - R120); MSE12 <- mean((Rhat - R120)^2)
  L12 <- 2*1.96*sqrt(var12)
  CP12 <- sum((R120 >= (R12h - L12/2)) & (R120 <= (R12h + L12/2)))
  R12h1 <- sort(Rhat)
  lownumber <- max(round(N.GS*ALFA/2), 1)
  upnumber <- min(round(N.GS*(1 - ALFA/2)), N.GS)
  c.l <- R12h1[lownumber]; c.u <- R12h1[upnumber]
  L12n <- c.u - c.l; CP12n <- sum((R120 >= c.l) & (R120 <= c.u))
  return(c(R12h, var12, bias12, MSE12, L12, CP12, L12n, CP12n))
}

# Monte Carlo study of the Bayes estimator
R1h <- bias1 <- MSE1 <- L1 <- L1n <- CP1 <- CP1n <- rep(0, 0)
for (i in 1:N) {
  x <- rkus(n, alfa, beta1); y <- rkus(m, alfa, beta2)
  resu <- GSnew(n.gs, n.mh, alfas, beta1s, beta2s, x, y)
  R1h[i] <- resu[1]; bias1[i] <- resu[3]; MSE1[i] <- resu[4]; L1[i] <- resu[5]
  L1n[i] <- resu[7]; CP1[i] <- resu[6]; CP1n[i] <- resu[8]
}
dbayes <- c(mean(R1h), mean(bias1), mean(MSE1), mean(L1), mean(CP1), mean(L1n), mean(CP1n))

Data Availability

The data used to support this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.