Abstract

Type-II censoring is an important data scheme in lifetime studies. The purpose of this paper is to obtain E-Bayesian predictive functions, based on observed order statistics, for two samples from the two-parameter Burr XII model. Predictive functions are developed for both point prediction and interval prediction based on type-II censored data, where median Bayesian estimation is a novel formulation for obtaining the Bayesian sample prediction, since the integral defining the Bayesian prediction directly does not exist. All predictions are obtained under symmetric and asymmetric loss functions. The two-sample technique is considered, and a gamma conjugate prior density is assumed. Illustrative examples are provided for all the scenarios considered in this article; both examples with real data and a Monte Carlo simulation are carried out to show that the new method is feasible. The results show that the Bayesian and E-Bayesian predictions under the two kinds of loss functions differ little for point prediction, while the E-Bayesian confidence intervals (CIs) under the two kinds of loss functions are almost identical and are more accurate for interval prediction.

1. Introduction

The Burr type XII distribution with two parameters was first introduced by Burr [1]. The probability density function (PDF) and cumulative distribution function (CDF) of this distribution can be, respectively, written as

f(x) = αθ x^(α−1) (1 + x^α)^(−(θ+1)), x > 0, α > 0, θ > 0, (1)

F(x) = 1 − (1 + x^α)^(−θ), x > 0. (2)

In the following, we shall denote it by Burr(α, θ), where α and θ are the shape parameters. In fact, it is basically a Pareto (type IV) model with the scale parameter set to 1 in Equation (1). Inference problems for the Burr distribution have been extensively investigated in the literature, and the distribution is extremely important in the study of biological, industrial, reliability, life testing, and quality control data. In a life or quality test, a random sample of size n is drawn from the distribution with CDF (2) and PDF (1), but instead of continuing until all n items have failed, the test is terminated at the time of the r-th failure (r < n). The ordered observed data are x(1) ≤ x(2) ≤ ⋯ ≤ x(r). Such data are called type-II censored data. Only the r smallest lifetimes are observed, because in some cases it would take too long to observe the failures of all individuals, so this censoring scheme saves both time and cost. The number of censored items is fixed before the test. Tekindal et al. [2] evaluated left-censored data through substitution, parametric, semiparametric, and nonparametric methods. Bayesian inference is widely recommended for studying censored data. Feroze and Aslam [3] studied Bayesian analysis of the Gumbel type II distribution under censored data. Tabassum et al. [4] discussed Bayesian inference for a mixture of half-normal distributions under censoring. Singh et al. [5] provided Bayesian estimation and prediction for the flexible Weibull model under a type-II censoring scheme.
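For concreteness, type-II censored data from this model can be simulated by inverse-CDF sampling and keeping the r smallest values. The following is a small Python sketch (function names are ours), assuming the parameterization F(x) = 1 − (1 + x^α)^(−θ) used in Algorithm 1:

```python
import random

def burr_xii_sample(n, alpha, theta, seed=0):
    """Draw n sorted variates from Burr XII via the inverse CDF
    F^{-1}(u) = ((1 - u)^(-1/theta) - 1)^(1/alpha)."""
    rng = random.Random(seed)
    return sorted((((1.0 - rng.random()) ** (-1.0 / theta)) - 1.0) ** (1.0 / alpha)
                  for _ in range(n))

def type2_censor(sample, r):
    """Type-II censoring: only the r smallest lifetimes are observed."""
    return sample[:r]

data = burr_xii_sample(20, alpha=1.0, theta=2.0)
observed = type2_censor(data, r=15)   # the test stops at the 15th failure
```

The inverse-CDF step mirrors the line `z2 = sort(((1-z1)^(-1/th)-1)^(1/af))` in Algorithm 1.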

Lewis [6] proposed the Burr(α, θ) distribution as a model for accelerated life test data representing breakdown times of an insulating fluid. Inference and prediction for the Burr(α, θ) distribution and some of its testing measures based on complete and censored samples have been discussed by many authors. Evans and Ragab [7] obtained Bayes estimates of θ and the reliability function based on type-II censored samples. Al-Hussaini and Jaheen [8, 9] obtained Bayesian estimates of the two parameters and of the reliability and failure rate functions of the Burr XII distribution. Ali Mousa [10] obtained empirical Bayes estimates of the parameter and the reliability function based on accelerated type-II censored data. Based on complete samples, Moore and Papadopoulos [11] obtained Bayes estimates of θ and the reliability function when the parameter α is assumed known. Ali Mousa and Jaheen [12] obtained approximate Bayes estimates of the two parameters and the reliability function of the Burr(α, θ) distribution based on progressive type-II censored samples. Jaheen [13] used generalized order statistics to obtain Bayesian inference for the Burr XII model. Based on progressive samples from the Burr(α, θ) distribution, Soliman [14] obtained Bayes estimates under both the symmetric (squared error) loss function and asymmetric (LINEX, general entropy) loss functions.

The E-Bayesian method is a special Bayesian method developed by Han [15], and it has become increasingly popular. It can be used to estimate the parameters of statistical distributions. Gonzalez-Lopez et al. [16] used the E-Bayesian method to gain flexibility in reliability-availability system estimation based on the exponential distribution under the squared error loss function. Han [17] estimated the system failure probability with the E-Bayesian method and revealed the relationships among the E-Bayesian estimators under three different prior distributions of the hyperparameters. Jaheen and Okasha [18] provided E-Bayesian parameter and reliability estimation for the Burr type XII model based on type-II censoring. However, these works only considered E-Bayesian parameter or reliability estimation for particular models; prediction for the Burr type XII model with type-II censored data has not been studied.

Prediction of future events on the basis of past and present information is a fundamental problem of statistics, arising in many contexts and producing varied solutions. As in estimation, a predictor can be either a point or an interval predictor. Parametric and nonparametric predictions have been considered in the literature. In many practical data-analytic situations, we are interested in obtaining prediction intervals for future observations.

Prediction has been applied in medicine, engineering, business, and other areas as well. Many authors have discussed prediction problems for various distributions; for research and review papers on prediction in nonparametric and parametric settings, see Al-Hussaini and Ahmad [19, 20], Al-Hussaini and Jaheen [8, 9], Ashour and El-Wakeel [21], Dunsmore [22], Guilbaud [23], Johnson et al. [24], Nigm et al. [25, 26], Patel [27], Sindhu et al. [28], and Singh et al. [5]. For more details, one can refer to Aitchison and Dunsmore [29] and Geisser [30].

In this article, an effort has been made to find Bayesian prediction bounds for future order statistics from the two-parameter Burr XII model based on type-II censored data using the two-sample prediction technique. E-Bayesian and Bayesian predictive function approaches are used to obtain point and interval predictors. Bayesian prediction is developed under symmetric and asymmetric loss functions in Section 2. E-Bayesian predictive functions, based on a conjugate prior for the parameter of interest and on symmetric and asymmetric loss functions, are derived in Section 3, together with conjectured properties of the E-Bayesian predictors. Finally, comparison between the new method and the corresponding Bayes techniques is made using Monte Carlo simulation in Section 4, and Section 5 concludes.

2. Bayesian Two-Sample Predictions

Suppose that X1, X2, …, Xn form a random sample from a distribution with CDF F(x) and PDF f(x), and denote the order statistics by X(1) ≤ X(2) ≤ ⋯ ≤ X(n). For the observations x(1) ≤ x(2) ≤ ⋯ ≤ x(n), the joint PDF of the order statistics is

f(x(1), …, x(n)) = n! ∏_{i=1}^{n} f(x(i)), x(1) ≤ ⋯ ≤ x(n). (3)

In particular, the joint PDF of the first r order statistics X(1) ≤ ⋯ ≤ X(r) is

f(x(1), …, x(r)) = [n!/(n − r)!] [∏_{i=1}^{r} f(x(i))] [1 − F(x(r))]^(n−r). (4)

Suppose that x = (x(1), …, x(r)) is a type-II censored sample of size r obtained from a life test on n items. Then the joint PDF (4) is also the likelihood function (LF), which can be written as

L(θ; x) = [n!/(n − r)!] [∏_{i=1}^{r} f(x(i))] [1 − F(x(r))]^(n−r). (5)

For the Burr XII distribution with PDF (1) and CDF (2), the likelihood function (LF) becomes

L(θ; x) ∝ (αθ)^r [∏_{i=1}^{r} x(i)^(α−1)/(1 + x(i)^α)] exp(−θT), (6)

where T = Σ_{i=1}^{r} log(1 + x(i)^α) + (n − r) log(1 + x(r)^α).
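The statistic T is the quantity computed as `t` in Algorithm 1. A small Python sketch (function name and toy numbers are ours):

```python
import math

def stat_T(obs, n, alpha):
    """T = sum_i log(1 + x_(i)^alpha) + (n - r) * log(1 + x_(r)^alpha)
    for a type-II censored sample obs = (x_(1), ..., x_(r)) out of n items."""
    r = len(obs)
    logs = [math.log(1.0 + x ** alpha) for x in obs]
    return sum(logs) + (n - r) * logs[-1]

# toy censored sample: 4 observed failures out of n = 6 items
T = stat_T([0.2, 0.5, 0.9, 1.4], n=6, alpha=1.0)
```

With α known, the likelihood depends on the data only through r and T, which is why T carries all the information needed for the posterior below.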

When α is known, the above functions can be regarded as functions of the parameter θ alone, and we suppose θ is a random variable. According to Bayesian theory, we use the gamma conjugate prior density for θ,

π(θ) = [k^c/Γ(c)] θ^(c−1) e^(−kθ), θ > 0, (7)

where c > 0 and k > 0. This prior was first used by Papadopoulos [31]. The posterior density of θ given the data can be obtained from (6) and (7) as

π(θ | x) = [(k + T)^(c+r)/Γ(c + r)] θ^(c+r−1) e^(−(k+T)θ), θ > 0, (8)

where T is as defined in (6).
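The conjugate update itself is simple: a Gamma(c, k) prior combined with a likelihood proportional to θ^r e^(−θT) gives a Gamma(c + r, k + T) posterior. A minimal Python sketch of the update (toy numbers are ours):

```python
def posterior_params(c, k, r, T):
    """Gamma(c, k) prior x likelihood proportional to theta^r * exp(-theta*T)
    -> Gamma(c + r, k + T) posterior, returned as (shape, rate)."""
    return c + r, k + T

shape, rate = posterior_params(c=0.5, k=0.8, r=10, T=4.2)
post_mean = shape / rate   # posterior mean of theta
```

This closed-form update is what makes all the predictive integrals below tractable as ratios of powers of (k + T).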

2.1. Bayesian Prediction Bounds

Assume that Y(1) ≤ Y(2) ≤ ⋯ ≤ Y(m) are the ordered observations in a second (future) sample of size m, independent of the informative sample x(1) ≤ ⋯ ≤ x(r) and drawn from the same population. Our aim is to develop a method to construct a Bayesian prediction for the s-th ordered lifetime Ys = Y(s), 1 ≤ s ≤ m, in the future sample. The PDF of Ys given θ is

g(ys | θ) = [1/B(s, m − s + 1)] [F(ys)]^(s−1) [1 − F(ys)]^(m−s) f(ys), ys > 0. (9)

Substitution of f and F, given by (1) and (2), respectively, into (9) and binomial expansion of [F(ys)]^(s−1) yield

g(ys | θ) = [1/B(s, m − s + 1)] Σ_{j=0}^{s−1} (−1)^j C(s−1, j) αθ ys^(α−1) (1 + ys^α)^(−θ(m−s+j+1)−1), (10)

where B(·, ·) is the beta function and C(s−1, j) is the binomial coefficient.

The Bayes predictive PDF of Ys is defined as

g*(ys | x) = ∫_0^∞ g(ys | θ) π(θ | x) dθ. (12)

Thus, combining (10) with the posterior PDF (8), one has

g*(ys | x) = [(c + r) α ys^(α−1) / (B(s, m − s + 1)(1 + ys^α)(k + T))] Σ_{j=0}^{s−1} (−1)^j C(s−1, j) [1 + (m − s + j + 1) log(1 + ys^α)/(k + T)]^(−(c+r+1)). (13)

To obtain the prediction bounds of Ys, we first need the predictive survival function S*(t) = P(Ys > t | x). It follows from (13) that

S*(t) = [1/B(s, m − s + 1)] Σ_{j=0}^{s−1} [(−1)^j C(s−1, j)/(m − s + j + 1)] [1 + (m − s + j + 1) log(1 + t^α)/(k + T)]^(−(c+r)). (14)

A two-sided 100δ% predictive interval (L, U) for Ys is obtained by taking L and U as the lower and upper confidence limits which satisfy

P(Ys > L | x) = S*(L) = (1 + δ)/2, P(Ys > U | x) = S*(U) = (1 − δ)/2. (15)

In this case, it is not possible to obtain the solutions analytically, and a suitable numerical technique is needed to solve these nonlinear equations. Sample fractiles are used in place of the population fractiles during the simulation process.
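As a sketch of this numerical step (using bisection rather than the Newton iteration of Algorithm 1; `surv` is a stand-in for the predictive survival function S*, and the toy survival function below is ours):

```python
def solve_survival(surv, target, lo=0.0, hi=1.0, tol=1e-10):
    """Find y with surv(y) = target by bisection; surv must be continuous,
    strictly decreasing in y, with surv(0) = 1 and surv(y) -> 0."""
    while surv(hi) > target:          # expand the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if surv(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S = lambda y: (1.0 + y) ** (-3.0)    # toy survival function, S(0) = 1
delta = 0.90
lower = solve_survival(S, (1.0 + delta) / 2.0)   # solves S(L) = 0.95
upper = solve_survival(S, (1.0 - delta) / 2.0)   # solves S(U) = 0.05
```

For this toy S the limits have the closed form q^(−1/3) − 1, which can be used to check the solver.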

By applying the formula due to Lingappaiah [32], one can obtain the simulated confidence limits from (14).

2.2. Special Cases

Case 1. To predict the first failure time Y1 in a future sample of size m, we set s = 1 in (14), so that

S*(t) = [1 + m log(1 + t^α)/(k + T)]^(−(c+r)). (16)

The case s = 1 is of particular interest; for instance, a lower limit for the first failure in a fleet of m items is called a safe warranty life or an assurance limit for the fleet.

Hence, inverting (16), the lower and upper 100δ% Bayesian prediction bounds for Y1 are given, respectively, by

L = {exp[((k + T)/m)(((1 + δ)/2)^(−1/(c+r)) − 1)] − 1}^(1/α), (17)

U = {exp[((k + T)/m)(((1 − δ)/2)^(−1/(c+r)) − 1)] − 1}^(1/α). (18)

Case 2. The predictive survival function of Ym (the last lifetime in a future sample of size m) can be obtained by setting s = m in (14), yielding

S*(t) = m Σ_{j=0}^{m−1} [(−1)^j C(m−1, j)/(j + 1)] [1 + (j + 1) log(1 + t^α)/(k + T)]^(−(c+r)). (19)

The bounds for Ym result from (15) and (19) by replacing Ys by Ym; the lower bound L and the upper bound U are then obtained with a suitable numerical technique for solving these nonlinear equations.

2.3. The Bayesian Predictor of Ys

With g*(ys | x) given by (13), the two-sample Bayes point predictor of Ys under the squared error loss function would be

ŷ_BS = E(Ys | x) = ∫_0^∞ ys g*(ys | x) dys. (20)

However, the integral in (20) tends to infinity, so the predictor does not exist. To solve this problem, we apply median Bayesian estimation for the two-sample Bayes prediction of Ys. Under the symmetric (squared error (SE)) loss function, by the definition of the median, the predictor ŷ_BS is the solution of the equation

S*(ŷ_BS) = P(Ys > ŷ_BS | x) = 1/2, (21)

which always has a unique solution because S*(t) is continuous and strictly decreasing from 1 to 0.

From now on, the Bayesian prediction of Ys refers to this median estimator; for convenience, we use the same notation ŷ_BS.

The Bayes point predictor of Ys under the asymmetric LINEX (BL) loss function with parameter a is given by

ŷ_BL = −(1/a) log E(e^(−aYs) | x) = −(1/a) log ∫_0^∞ e^(−a ys) g*(ys | x) dys. (23)
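The LINEX predictor can also be approximated by Monte Carlo once draws from the predictive distribution of Ys are available. A Python sketch, with a toy Exp(2) law standing in for the true predictive distribution (the sampler and numbers are ours):

```python
import math
import random

def linex_predictor(draws, a):
    """Bayes point predictor under LINEX loss with parameter a:
    -(1/a) * log E[exp(-a * Y)], estimated from predictive draws of Y."""
    m = sum(math.exp(-a * y) for y in draws) / len(draws)
    return -math.log(m) / a

rng = random.Random(1)
draws = [rng.expovariate(2.0) for _ in range(200000)]   # toy predictive law Exp(2)
est = linex_predictor(draws, a=1.0)
# for Y ~ Exp(lam): E[e^{-aY}] = lam/(lam + a), so the predictor is (1/a)*log((lam + a)/lam)
```

Note that for a > 0 this predictor always lies below the predictive mean (Jensen's inequality), reflecting the asymmetry of the LINEX loss.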

3. E-Bayesian Estimation of Ys

According to Han [33], the prior parameters c and k should be selected to guarantee that the prior π(θ) in (7) is a decreasing function of θ. The derivative of π(θ) with respect to θ is

dπ(θ)/dθ = [k^c/Γ(c)] θ^(c−2) e^(−kθ) [(c − 1) − kθ]. (24)

Thus, for 0 < c < 1 and k > 0, the prior π(θ) is a decreasing function of θ.

Assuming that the hyperparameters c and k in (7) are independent with joint prior density π(c, k), the E-Bayesian estimate of the parameter θ (the expectation of the Bayesian estimate of θ) is

θ̂_EB = ∬_D θ̂_B(c, k) π(c, k) dc dk, (25)

where D is the domain of c and k for which the prior density is decreasing in θ, and θ̂_B(c, k) is the Bayes estimate of θ. For more details, see Han [34] and Jaheen and Okasha [18].
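In practice this double integral is approximated by averaging the Bayes estimate over hyperparameter draws. A Python sketch, with a toy Bayes estimate (the posterior mean (c + r)/(k + T)) and hyperpriors c ~ Beta(3, 3), k ~ U(0, 1) mirroring the settings in Algorithm 1 (the function names and toy values of r and T are ours):

```python
import random

def e_bayes(bayes_est, draw_hyper, n_draws=50000, seed=3):
    """E-Bayesian estimate: expectation of the Bayes estimate over the
    joint hyperparameter prior, approximated by Monte Carlo averaging."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        c, k = draw_hyper(rng)
        total += bayes_est(c, k)
    return total / n_draws

# toy Bayes estimate (c + r)/(k + T) with r = 10, T = 4
est = e_bayes(lambda c, k: (c + 10.0) / (k + 4.0),
              lambda rng: (rng.betavariate(3.0, 3.0), rng.uniform(0.0, 1.0)))
```

By independence, the exact value here is E[c + 10] · E[1/(k + 4)] = 10.5 · ln(5/4), which the Monte Carlo average should reproduce closely.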

3.1. The E-Bayesian Predictor of Ys with Squared Error Loss Function

The E-Bayesian prediction of Ys is obtained based on three different distributions of the hyperparameters c and k. These distributions are used to investigate the influence of the different hyperparameter priors on the E-Bayesian prediction of Ys.

The following distributions of c and k may be used:

π1(c, k) = c^(u−1)(1 − c)^(v−1)/(b B(u, v)), 0 < c < 1, 0 < k < b,

π2(c, k) = 2(b − k) c^(u−1)(1 − c)^(v−1)/(b² B(u, v)), 0 < c < 1, 0 < k < b,

π3(c, k) = 2k c^(u−1)(1 − c)^(v−1)/(b² B(u, v)), 0 < c < 1, 0 < k < b, (26)

where B(u, v) is the beta function. For πi(c, k), i = 1, 2, 3, the E-Bayesian prediction of Ys with squared error loss function is obtained from (21) and (26) as

ŷ_EBSi = ∬_D ŷ_BS(c, k) πi(c, k) dc dk, i = 1, 2, 3. (27)
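The marginal priors of k above (uniform; density proportional to b − k; density proportional to k) can each be sampled by an inverse-CDF transform of a single uniform variate, mirroring the commented EBS1–EBS3 lines in Algorithm 1. A Python sketch (function name is ours):

```python
import random

def draw_k(prior, b, u):
    """Inverse-CDF draw of hyperparameter k on (0, b); u is a U(0,1) variate.
    prior 1: k ~ U(0, b); prior 2: density 2(b - k)/b^2; prior 3: density 2k/b^2.
    These mirror the EBS1-EBS3 lines in Algorithm 1."""
    if prior == 1:
        return b * u
    if prior == 2:
        return b * (1.0 - (1.0 - u) ** 0.5)
    return b * (u ** 0.5)

rng = random.Random(7)
ks = [draw_k(2, 1.0, rng.random()) for _ in range(100000)]
mean_k = sum(ks) / len(ks)   # density 2(1 - k) on (0, 1) has mean 1/3
```

Prior 2 favors small k and prior 3 favors large k, which is what drives the ordering of the three E-Bayesian estimates observed in the tables.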

3.2. E-Bayesian Point Predictor of Ys with LINEX Loss Function

Based on the LINEX loss function, the E-Bayesian prediction of Ys can be computed for the three hyperparameter distributions given by (26). For πi(c, k), i = 1, 2, 3, the E-Bayesian predictor of Ys with LINEX loss function is obtained from (23) and (26) as

ŷ_EBLi = ∬_D ŷ_BL(c, k) πi(c, k) dc dk, i = 1, 2, 3. (28)

Analytical and numerical computation of the integrals in (27) and (28) is very complicated. With Monte Carlo simulation, we can obtain all of the predictors; the samples are shown in Section 4.

3.3. Properties of E-Bayesian Point Predictor of Ys

As the integral in (20) does not exist, we cannot obtain an expression for the true predictive mean of Ys, and we can only obtain the median estimate of Ys from (21). Therefore, we cannot prove properties of the E-Bayesian point predictor of Ys analytically; based on our experience with E-Bayesian estimation [18], we can only conjecture the relations among the predictors as follows: (i)(ii)(iii)(iv)

These relationships can only be observed empirically, and we cannot give a proof. However, the examples in Section 4 confirm them.

In order to verify the parameter estimation, the sample prediction, and the above relationships, the following examples are given.

4. Monte Carlo Simulation and Comparisons

4.1. Illustrative Example with Real Data

To verify the estimation and prediction methods of this paper, we give two illustrative examples. A complete sample from a clinical trial describing relief times (in hours) for 50 arthritic patients was given by Wingo [35] and used recently by Ahmed et al. [36, 37] and Wu et al. [38]. Wingo [35] and Ahmed et al. [36, 37] showed that the Burr type XII model is acceptable for these data, and Ahmed et al. [37] obtained estimates of the parameters α and θ.

Example 1. The following data sample is drawn from the real data above and arranged as order statistics. The sample size is n = 15, with r = 10 observed values; the last m = 5 values (in bold in the source) are treated as censored and are predicted in Table 1. The prior and loss-function parameters are set as in Algorithm 1 (b = 1, u = 3, v = 3, a = 1). The data: 0.29, 0.35, 0.36, 0.44, 0.46, 0.49, 0.50, 0.52, 0.55, 0.55, 0.57, 0.59, 0.61, 0.70, 0.80.

Using these data, we can obtain the point and interval predictions of the last 5 censored values according to the method of this paper. With the results of Equations (21), (23), and (27)–(28), the different Bayesian and E-Bayesian predictors of Ys and their bounds are computed and presented in Table 1. The procedure is as follows: (i)For given values of the prior parameters b, u, and v, we generate hyperparameter samples from the beta and uniform priors, respectively(ii)Repeat step (i) 10,000 times. Using the real data above, we obtain the E-Bayes predictions of Ys and their bounds under the BS and BL loss functions by simulation(iii)The computational results are summarized in Table 1, where BLi is the Bayesian prediction and EBLi the E-Bayesian prediction of Ys under the LINEX loss function with the three hyperparameter distributions (i = 1, 2, 3); BSi (EBSi) is the Bayesian (E-Bayesian) predictor of Ys under the squared error loss function with the three hyperparameter distributions. BS CI corresponds to the 90% Bayesian confidence interval, EBSi CI to the 90% E-Bayesian confidence interval under squared error loss, and EBLi CI to the 90% E-Bayesian confidence interval under LINEX loss

Figure 1 shows the prediction curves. Here, BSiL (i = 1, 2, 3) is the lower bound and BSiU the upper bound of the corresponding Bayesian confidence interval; EBSiL and EBSiU are defined analogously. The three graphs give similar results: the BSi CIs completely cover the real data, but they are too wide to be very informative; in particular, the lower bound of BSi is far from the real-data curve. The EBSi CI and EBLi CI almost coincide, and their upper bounds are closest to the real data; BLi, BSi, EBLi, and EBSi almost coincide. Thus the point predictions BLi, BSi, EBLi, and EBSi differ little, the interval predictions EBSi CI and EBLi CI are almost identical, and the E-Bayesian interval predictions are more accurate. The next two examples give similar results, so we do not show their figures.

Example 2. Another example uses a data sample from the same source as Example 1, but with a larger sample size; the meaning of the other settings is the same as above. Here, n = 45, with r = 40 observed values and m = 5 censored values. The predictions of the last 5 (bold) values are given in Table 2. The data are as follows:
0.29, 0.29, 0.34, 0.34, 0.35, 0.36, 0.36, 0.36, 0.44, 0.44, 0.46, 0.46, 0.49, 0.49, 0.50, 0.50, 0.52, 0.54, 0.55, 0.55, 0.55, 0.56, 0.57, 0.58, 0.59, 0.59, 0.60, 0.60, 0.61, 0.61, 0.62, 0.64, 0.68, 0.70, 0.70, 0.71, 0.71, 0.71, 0.72, 0.73, 0.75, 0.75, 0.80, 0.80, 0.81.

To compare the predictors, the mean square error (MSE) is used to measure prediction accuracy. To compare the interval bounds, the 90% confidence limits of the real data are used to calculate the MSE of the estimated bounds. For convenience of comparison, the MSEs for Tables 1 and 2 are computed and collected in Tables 3 and 4.
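The source elides the MSE formula; we take it to be the standard MSE = (1/m) Σ_{s=1}^{m} (ŷ_s − y_s)² over the m predicted order statistics. A one-function Python sketch (toy numbers are ours):

```python
def mse(pred, actual):
    """Mean square error between predicted and realized order statistics."""
    assert len(pred) == len(actual)
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

err = mse([0.60, 0.66], [0.59, 0.70])
```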

4.2. Illustrative Example with Simulation

To illustrate the operability of the proposed methods, we also give an example with simulated data. The following data sample was generated from the two-parameter Burr type XII distribution with α = 1 and θ = 2; the sample size is n = 20, with r = 15 observed values and m = 5 censored values. To predict the 5 censored values, we set b = 1, u = 3, v = 3, and a = 1, as in Algorithm 1. The data are listed as follows:

0.05280128, 0.05560962, 0.06225709, 0.10117384, 0.10626390, 0.11375182, 0.19062614, 0.24794003, 0.24840441, 0.42198706, 0.46646103, 0.48875702, 0.55176242, 0.65954615, 0.66710844.

5. Conclusion

In this paper, E-Bayes point predictions and prediction bounds for ordered lifetimes in a future sample are discussed under symmetric and asymmetric loss functions. Two examples with real data and different sample sizes were used to examine the performance of the different predictions. Comparing Tables 3 and 4, we find that as the sample size increases, the MSEs of the Bayes and E-Bayes predictions decrease, and that the predictions vary with the loss function.

A simulation study was conducted to show the feasibility of the E-Bayes prediction given in this paper. All the predictions computed with different loss functions, different sample sizes, and different choices of the model parameters and censoring scheme are shown in Tables 1, 2, and 5, and the predictions under different loss functions are consistent with the relationships put forward in Section 3.3. From the results, we can draw the following conclusions: (i)From Tables 1, 2, and 5, the E-Bayes predictors under the three hyperparameter priors are close to each other and satisfy the conjectured relationships; they are consistent with the assumptions in Section 3.3(ii)If one uses the E-Bayesian approach for prediction, one would expect these predictors to be better (in the sense of MSE) than the Bayesian ones. In our examples this cannot be seen clearly for the different loss functions; the results are similar. However, because the E-Bayesian method uses more prior information than the ordinary Bayesian approach, it is more reliable(iii)As the sample size increases, the MSEs of the E-Bayesian predictors decrease, so increasing the sample size yields more accurate results(iv)The results establish that, for optimum decision-making, importance should be given to the choice of loss function and not just to the choice of prior distribution

The Burr distribution is extremely important in the study of biological, industrial, reliability, life testing, and quality control data. With the sample prediction functions, we can understand the trends and control them in time.

The Code for the Examples
two_R.txt
rm(list = ls())
#set.seed(99)
n = 200      # sample size
r = 195      # number of observed failures (type-II censoring)
af = 1       # Burr XII parameter alpha
th = 2       # Burr XII parameter theta (used only to generate data)
z1 = runif(n, 0, 1)
z2 = sort(((1 - z1)^(-1/th) - 1)^(1/af))   # Burr XII sample via the inverse CDF
x = z2[1:r]                                # the type-II censored sample
xaf = log(1 + x^af)
t = sum(xaf) + (n - r) * xaf[r]            # the statistic T of the likelihood
b = 1        # upper limit of hyperparameter k
u = 3        # beta prior parameters of hyperparameter c
v = 3
a = 1        # LINEX parameter (also the importance-sampling rate)
yebs = 0
yebl = 0
p = 0
yebs11 = 0
yebl11 = 0
while (p < 5000) {
  k = runif(1, 0, b)                # EBS1: k ~ U(0, b)
  # k1 = runif(1, 0, 1)             # for EBS2/EBS3
  # k = b * (1 - sqrt(1 - k1))      # EBS2: density 2(b - k)/b^2
  # k = b * sqrt(k1)                # EBS3: density 2k/b^2
  c = rbeta(1, u, v)                # c ~ Beta(u, v)
  m = n - r
  s = 1
  ybs11 = 0
  ybl11 = 0
  q = 5000
  y = rexp(q, a)                    # importance sample from Exp(a)
  while (s < m + 1) {
    ta = 1/beta(s, m - s + 1)
    rosg = ((-1)^(0:(s - 1)))/(beta(1:s, s + 1 - (1:s)) * s * (m - s + (1:s)))
    sg = m - s + (1:s)
    ybs = t(((1 + log(1 + y^af) %o% sg/(k + t))^(-r - c)) * exp(a * y)) * ((ta/a) * rosg)
    ybs1 = apply(ybs, 2, sum)
    ybs11 = rbind(ybs11, ybs1)
    ybl = t((1 + log(1 + y^af) %o% sg/(k + t))^(-r - c)) * (ta * rosg)
    ybl1 = apply(ybl, 2, sum)
    ybl11 = rbind(ybl11, ybl1)
    s = s + 1
  }
  ybs111 = apply(ybs11, 1, sum)/q
  ybl111 = apply(ybl11, 1, sum)/q
  ybl111 = -log(1 - ybl111)/a       # LINEX point predictor
  yebs = yebs + ybs111
  yebs11 = cbind(yebs11, ybs111)
  yebl = yebl + ybl111
  yebl11 = cbind(yebl11, ybl111)
  p = p + 1
}
yebs1 = yebs/p
yebl1 = yebl/p
yebs11 = yebs11[,1:p + 1]
yebl11 = yebl11[,1:p + 1]
deta = 0.90
low = ceiling((1 - deta) * p/2)
up = ceiling((1 + deta) * p/2)
yebs1low = apply(yebs11,1,sort)[low,]
yebs1up = apply(yebs11,1,sort)[up,]
yebl1low = apply(yebl11,1,sort)[low,]
yebl1up = apply(yebl11,1,sort)[up,]
ybslow = numeric(m)   # lower 100*deta% Bayesian prediction limits
ybsup = numeric(m)    # upper limits
for (s in 1:m) {
  ta = 1/beta(s, m - s + 1)
  rosg = ((-1)^(0:(s - 1)))/(beta(1:s, s + 1 - (1:s)) * s * (m - s + (1:s)))
  sg = m - s + (1:s)
  # lower limit: solve S*(z) = (1 + deta)/2 by Newton iteration
  z3 = ybs111[s]
  z4 = 1
  while ((1 - z4/z3)^2 > 10^(-16)) {
    f1 = sum(((1 + log(1 + z3^af) * sg/(k + t))^(-r - c)) * ta * rosg)
    f2 = sum(((1 + log(1 + z3^af) * sg/(k + t))^(-r - c - 1) * (z3^(af - 1)/(1 + z3^af))) * ta * rosg * sg * (r + c) * af/(k + t))
    z4 = z3
    z3 = z3 + (f1 - (1 + deta)/2)/f2
  }
  ybslow[s] = z3
  # upper limit: solve S*(z) = (1 - deta)/2
  z3 = ybs111[s + 1]
  z4 = 1
  while ((1 - z4/z3)^2 > 10^(-16)) {
    f1 = sum(((1 + log(1 + z3^af) * sg/(k + t))^(-r - c)) * ta * rosg)
    f2 = sum(((1 + log(1 + z3^af) * sg/(k + t))^(-r - c - 1) * (z3^(af - 1)/(1 + z3^af))) * ta * rosg * sg * (r + c) * af/(k + t))
    z4 = z3
    z3 = z3 + (f1 - (1 - deta)/2)/f2
  }
  ybsup[s] = z3
}
x
z2
ybs111[1:m + 1]
ybslow
ybsup
ybl111[1:m + 1]
yebs1[1:m + 1]
yebs1low[1:m + 1]
yebs1up[1:m + 1]
yebl1[1:m + 1]
yebl1low[1:m + 1]
yebl1up[1:m + 1]
Algorithm 1

Data Availability

The (DATA.doc) data used to support the findings of this study are included within Reference [36].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors thank the referees for their helpful remarks that improved the original manuscript. This work was financially supported by the Fundamental Research Funds for the Central Universities (WUT: 2019IA004, 2018IB016).

Supplementary Materials

The data for Example 1 and Example 2 to get the parameters’ initial estimators in this manuscript are from Reference [36], and it is illustrated in the manuscript. (Supplementary Materials)