Abstract

In many survival-analysis experiments, the death or failure of a subject may be attributable to more than one cause. Since the causes of failure may be dependent or independent, in this work we discuss a competing-risks lifetime model under progressive type-II censoring in which the removals follow a binomial distribution. We consider the Akshaya lifetime failure model under independent causes, with the number of subjects removed at each failure time following a binomial distribution with known parameters. Classical and Bayesian approaches are used for point and interval estimation of the parameters and parametric functions. The Bayes estimates are obtained using the Markov chain Monte Carlo (MCMC) method under symmetric and asymmetric loss functions, with the Metropolis–Hastings algorithm generating MCMC samples from the posterior density function. A simulated data set is used to diagnose the performance of the two techniques. As a practical illustration of the model, we analyze data representing the survival times of mice kept in a conventional germ-free environment, all of which were exposed to a fixed dose of radiation at the age of 5 to 6 weeks. There are 3 causes of death. In group 1, we consider thymic lymphoma as the first cause and all other causes as the second. Based on the mice data, the cumulative incidence function of the second cause is higher than that of the first cause.

1. Introduction

In many lifetime experiments in survival and reliability analysis, the death or failure of a subject may be attributable to more than one cause. As an example involving multiple causes of failure, Hoel [1] reported a laboratory survival experiment in which mice were given a dose of radiation at 6 weeks of age. In this experiment, there is more than one cause of death: the researchers recorded the cause of death as reticulum cell sarcoma, thymic lymphoma, or other causes. Another example, presented by Boag [2], records the cause of death of a subject as breast cancer or another cancer. In reliability experiments, there are numerous examples where subjects may fail due to one among several causes. In the traditional analysis of survival data, the experimenter is interested in the lifetime distribution under a single cause of failure, such as heart attack or cancer, and any other causes are combined and/or treated as censored data. In the last decade, researchers have considered different failure distributions for specific risks and developed models for two or more causes of failure. In competing-risks models, two observable variables are considered: the failure time and an indicator variable denoting the specific cause of failure of the item or individual.

In the literature, many authors have considered independent and dependent causes of failure. For instance, Sarhan et al. [3] discussed a competing-risks model in the presence of covariates using Weibull subdistributions. In most situations, the statistical analysis of competing-risks data assumes independent and/or dependent causes of failure. Abushal [4] studied parametric inference for the Akash distribution under type-II censoring, analyzing the relief times of patients, and Abushal [5] studied Bayesian estimation of the reliability characteristics of the Shanker distribution. Furthermore, Tan [6] developed a probabilistic conditional model to obtain efficient estimates of component failure probabilities from binomial system-testing masked data using the EM algorithm, and discussed the accuracy and capability of the model when the system configurations are series and parallel. Sarhan et al. [7] considered geometric component lifetimes and obtained ML and Bayes estimators of the component reliability measures in a multicomponent system in the presence of partially dependent masked system life-test data. Sarhan and Kundu [8] discussed masked system lifetime data from the geometric distribution and obtained Bayes estimators of the parameters and the component reliability function when the prior on the success probabilities is assumed to be beta; they also discussed a Bayesian procedure for minimizing the posterior risk of a parametric function under the squared-error loss function. Jiang [9] considered a Poisson shock model using masked system life data and obtained ML and Bayes estimators of the parameter and the survival function of each component, accounting for the influence of the masking level. Almarashi et al. [10] considered two causes of failure, where the lifetimes of units come from the exponential failure distribution.
The units are censored under a hybrid progressive type-I censoring scheme. They discussed maximum likelihood and Bayesian estimation procedures with asymptotic confidence intervals and Bayesian credible intervals, respectively. Almalki et al. [11] discussed the statistical analysis of type-II censored competing-risks data under a reduced new modified Weibull baseline. Abushal et al. [12] considered two independent causes of failure under a type-I censoring scheme and obtained ML and Bayes estimates of the parameters of the lifetime model; they also presented confidence intervals for the unknown parameters under both paradigms. For further reading, see [12]. The numerical procedures used to assess the quality of the theoretical conclusions are examined via the analysis of real data and Monte Carlo simulations.

Although the assumption of dependence between causes may be more realistic, there is some concern about the identifiability of the underlying lifetime model. The authors of [13, 14], and several others such as [15–17], have argued that without information on the covariance, it is not possible to use the experimental sample data without an assumption such as independence of the failure times. Competing-risks models have been studied by several authors for both dependent and independent causes of failure in parametric and nonparametric setups [14, 18, 19]. In the parametric setup, it is assumed that the risks follow a variety of lifetime distributions.

All researchers have the same goal in mind: to analyze competing-risks data. Such data come from experiments in areas such as demography, engineering, life science, health management, and so on. The main focus of an analytic technique is on statistics that can be used to plan future events. Data can be modeled in a variety of ways, both basic and complicated. A well-established methodology is to fit the data using a distribution-based procedure and then obtain the relevant statistics. The advantage of this strategy is that once a good model for the observations gathered in an experiment is found, all of the model's properties can be used immediately. However, the biggest challenge nowadays is searching for an appropriate model for the study.

Shankar and Ramadan et al. [20, 21] proposed a new one-parameter continuous distribution, named the Akshaya distribution, and its generalization, the power Akshaya distribution, for lifetime modeling in medical and engineering science. The hazard function of the Akshaya distribution is increasing or decreasing depending on its parameter, so the distribution is flexible enough to be used for analyzing data in many areas. Various properties of the Akshaya distribution are discussed by Shankar [20], who also derived the maximum likelihood estimator of the parameter when the causes of failure are either unknown or known. Shankar [20] discussed the statistical properties of the Akshaya distribution, calculated the maximum likelihood function for the complete- and censored-sample situations, and demonstrated fitting and analysis on a real data set.

In life-testing experiments, researchers conduct tests on human beings, electrical appliances, natural systems, and many other subjects. In such studies, the primary objective is to understand the basic nature of the observed lifetimes. Generally, conducting life-testing experiments is time-consuming and expensive, demanding a large amount of money, labor, and time. To reduce the cost and duration of the experiments, various types of censoring schemes have been developed in the literature. Censoring is inevitable in reliability and life-testing experiments, and the researcher is unable to obtain complete information for all individuals. Censoring in a life-testing experiment occurs when the experiment is terminated before all the units put on test have failed. The decision to terminate the experiment is made according to an available censoring scheme. In a clinical trial, for example, patients may depart from the trial, and the experiment may have to be stopped at a prefixed time point; in industrial experiments, the test subjects may fail accidentally.

In many studies, the removal of units at given failures is preplanned in order to save resources such as the cost associated with testing and time. The type-I and type-II censoring schemes are the two most common censoring schemes in life experiments. However, in some life experiments, the number of patients leaving the experiment cannot be prefixed and is random. Here, we consider independent competing-risks (multiple-failure) data under progressive type-II censoring with binomial removals. Cohen [22] considered sample specimens that remain after each stage of censoring and are observed until they fail or until a further stage of censoring is performed; he derived the maximum likelihood estimators for the normal and exponential distributions when the data are progressively censored. A novel regression-based technique to estimate the unknown parameters of the Pareto distribution under the progressive type-II censoring scheme was proposed by Seo et al. [23]. Kishan and Kumar [24] discussed Bayesian estimation for the Lindley distribution under progressive type-II censoring with binomial removals. Salah et al. [25] obtained estimates of the parameter of the two-parameter power exponential distribution under progressively type-II censored data with a fixed shape parameter. Li and Gui [26] obtained ML estimates, with their confidence intervals, for the parameters of the generalized Pareto distribution; they presented Bayes estimators using the Adaptive Rejection Metropolis Sampling algorithm to derive point estimates and credible intervals, and also estimated the survival and hazard functions of the distribution. A Monte Carlo simulation study was carried out to compare the performance of the three proposed methods under different data schemes. For a broad list of related references and further details on progressively censored samples, see [27–29].

The process of the progressive scheme is as follows. First, the experimenter puts n subjects on test at the initial stage. At this stage, the total number of failures m and the binomial probability p are fixed in advance; on the basis of n, m, and p, the censored values are generated for further analysis. At the first failure, the time X_1 is observed and R_1 of the n − 1 surviving units are selected using simple random sampling without replacement (SRSWOR) and removed from the test. At the second failure, the failure time X_2 is observed and R_2 of the n − 2 − R_1 surviving units are randomly selected and removed from the experiment. In general, at the i-th failure, the time X_i is observed and R_i of the surviving units are randomly selected with SRSWOR and removed. At the termination of the experiment, i.e., at the m-th failure, the failure time X_m is observed and all remaining units, i.e., R_m = n − m − (R_1 + ⋯ + R_{m−1}), are completely removed and the experiment is stopped; in the classical scheme the removal numbers are all prefixed [22].
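The removal mechanism described above can be sketched in a few lines of code. The following is an illustrative sketch (the function name and seed are ours, not from the paper): at each of the first m − 1 failures a binomial number of survivors is withdrawn, and whatever remains is withdrawn at the m-th failure.

```python
import numpy as np

rng = np.random.default_rng(7)

def binomial_removals(n, m, p):
    """Draw removal counts R_1,...,R_m for a progressive type-II scheme
    with binomial removals: at the i-th failure, R_i of the surviving
    units are removed, R_i ~ Binomial(n - m - sum of previous removals, p),
    and all units still on test are removed at the m-th failure."""
    removals = []
    left = n - m                  # units available for removal over the test
    for _ in range(m - 1):
        r = rng.binomial(left, p)
        removals.append(r)
        left -= r
    removals.append(left)         # R_m: remove everything still on test
    return removals

R = binomial_removals(n=30, m=10, p=0.3)
# The m failures plus all removals account for every unit put on test.
assert sum(R) + 10 == 30
```

Since the later removal counts are drawn from an ever-shrinking pool, the scheme automatically satisfies the accounting identity m + ΣR_i = n regardless of p.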

In the competing-risks scenario, the data from progressive type-II censoring with binomial removals are as follows. Here, m denotes the total number of observed failure times, δ_i indicates the cause of failure, and R_i denotes the number of subjects removed from the experiment at the respective failure time x_i. Note that type-II censoring and the complete sample are special cases of the progressive type-II censoring scheme, obtained by putting R_1 = ⋯ = R_{m−1} = 0 with R_m = n − m, and R_i = 0 for all i with m = n, respectively.

Research on competing-risks models is steadily growing. Many probability distributions have been proposed as the baseline hazard rate for the competing causes of the event of interest. An interesting survey of the classical distributions can be found in [30–35]. Table 1 shows a chronological review of competing-risks models under progressive censoring schemes with different baseline distributions.

The main goal of this paper is to study the competing-risks model under progressive type-II censoring with binomial removals. We consider causes of failure that are mutually independent and follow the Akshaya distribution. We derive point and interval estimates of the parameters and their functions under the classical and Bayesian paradigms. In the Bayesian estimation procedure, we consider three loss functions (one symmetric and two asymmetric) under gamma priors.

The paper is organized as follows. Section 2 describes the competing-risks model based on a progressively censored sample for the Akshaya distribution. In Section 3, we derive the likelihood function and obtain the maximum likelihood estimators of the parameters; the estimators of the component reliability and cumulative incidence functions are also discussed, along with two-sided asymptotic confidence intervals for the unknown parameters. Furthermore, in Section 4, we discuss the Bayes estimators of the parameters with two-sided Bayesian intervals. The Bayesian procedure is driven by the Markov chain Monte Carlo (MCMC) technique using prior information, and Bayesian confidence intervals are also discussed. In Section 5, the reviewed methodologies are illustrated with simulated data, and the analysis of a real data set is provided in Section 6 to illustrate the use of the methods applied in this paper.

2. Model Assumptions

In a life-testing experiment, we consider n identical and independent (iid) subjects put on test. Every subject on test operates under several risk factors, i.e., causes of failure, so each subject may fail from the 1st, 2nd, …, or s-th cause. Since any subject fails by only one cause among all causes, we record the cause of failure and study the behavior of all the causes, because some causes carry low risk and some high risk. In a life test, some subjects may fail and others may be removed or censored during the test, so the test is terminated either when the required number of failures occurs or when a censoring time is reached. There are two observable quantities for a failed subject: the subject's lifetime X and the cause of failure, say δ. Under censoring, there is only one observed quantity, namely the censoring time. We also need the following assumptions throughout this paper: (1) We put n iid subjects on the lifetime test, and the test is terminated at the m-th failure, m ≤ n. There are s mutually independent causes of failure acting on each subject. (2) Assume that X is the lifetime of a subject with cumulative distribution function F(x), survival function S(x), and probability density function f(x). (3) Assume that T_j is the lifetime under the j-th cause with cdf F_j(t), survival function S_j(t), pdf f_j(t), and hazard rate function h_j(t). (4) Assume that δ ∈ {1, 2, …, s} is an integer observable variable signifying the cause of failure of the system, with censored data indicated separately. (5) At the first failure, (a) we observe the two quantities X_1 and δ_1, and (b) R_1 of the surviving subjects are removed at random, where the removal follows a binomial distribution with parameters n − m and p. Here, we assume that the binomial parameter p is predefined based on the required experimental scenario. (6) In continuation, when the i-th subject fails (i = 2, …, m − 1): (a) we observe the two quantities X_i and δ_i, and (b) R_i of the surviving subjects are randomly removed, where R_i follows a binomial distribution with parameters n − m − Σ_{l=1}^{i−1} R_l and p. (7) At the last failure, i.e., the m-th failure terminating the experiment, we observe the quantities X_m and δ_m, and the rest of the surviving subjects, R_m = n − m − Σ_{l=1}^{m−1} R_l, are completely removed from the experiment. (8) Here, δ_i = j denotes that the i-th unit has failed due to cause j at time x_i. (9) For the given study, the parameter p of the binomial distribution is taken to be the same throughout the experiment. (10) We assume that the random variable T_j follows the Akshaya distribution with unknown parameter θ_j, say T_j ∼ Akshaya(θ_j), for j = 1, 2, …, s; that is, the survival function of the Akshaya distribution is given by

S_j(t) = [θ_j³(1 + t)³ + 3θ_j²(1 + t)² + 6θ_j(1 + t) + 6] e^{−θ_j t} / (θ_j³ + 3θ_j² + 6θ_j + 6),

and the hazard function is

h_j(t) = θ_j⁴(1 + t)³ / [θ_j³(1 + t)³ + 3θ_j²(1 + t)² + 6θ_j(1 + t) + 6],

where θ_j is the shape parameter. Based on the above assumptions, the observed data (x_1, δ_1, R_1), (x_2, δ_2, R_2), …, (x_m, δ_m, R_m) form the progressively type-II censored sample, where x_i indicates the i-th failure time, δ_i denotes the respective cause of failure, and R_i indicates the number of units randomly selected and removed from the experiment at failure time x_i. We will henceforth write x instead of X to simplify the notation.
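As a numerical companion to the Akshaya assumption, the sketch below implements the survival and hazard functions derived from the gamma-mixture form of the density, f(x) = θ⁴(1 + x)³e^{−θx}/D with D = θ³ + 3θ² + 6θ + 6; the function names are ours, and the expressions should be checked against [20].

```python
import math

def akshaya_sf(x, theta):
    """Survival function of the Akshaya distribution, obtained by
    integrating the density f(x) = theta^4 (1+x)^3 exp(-theta*x) / D,
    D = theta^3 + 3*theta^2 + 6*theta + 6 (a sketch; verify vs. [20])."""
    D = theta**3 + 3*theta**2 + 6*theta + 6
    num = theta**3*(1 + x)**3 + 3*theta**2*(1 + x)**2 + 6*theta*(1 + x) + 6
    return num / D * math.exp(-theta * x)

def akshaya_hazard(x, theta):
    """Hazard h(x) = f(x) / S(x); the exp(-theta*x) factors cancel."""
    num = theta**4 * (1 + x)**3
    den = theta**3*(1 + x)**3 + 3*theta**2*(1 + x)**2 + 6*theta*(1 + x) + 6
    return num / den

# S(0) = 1 and S is decreasing, as a survival function must be.
assert abs(akshaya_sf(0.0, 1.5) - 1.0) < 1e-12
assert akshaya_sf(2.0, 1.5) < akshaya_sf(1.0, 1.5)
```

A quick consistency check is that h(x)·S(x) matches −dS/dx numerically, which confirms the algebra of the mixture integration.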

2.1. Cumulative Incidence Function

A quantity of great interest in competing risks is the cumulative incidence function (CIF). It is defined as the risk of a given cause when the other causes are also present in the experiment; for example, the CIF provides an estimate of the cumulative probability of locoregional recurrence in the presence of other causes. In the presence of competing risks, the probability of occurrence of an event of type j, out of the s possible event types, up to a mission time t can be described in terms of the CIF for event type j. The CIF of cause j is

F_j(t) = ∫_0^t h_j(u) S(u) du,

where h_j is the cause-specific hazard rate, i.e., the hazard function associated with a specific cause when there are multiple events under consideration, and S(u) is the overall survival function.
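The CIF F_j(t) = ∫_0^t h_j(u)S(u)du has no simple closed form for Akshaya causes, but it is easy to evaluate by quadrature. The sketch below (our own helper, assuming the mixture-derived Akshaya survival and hazard expressions) uses a trapezoidal rule; with two identical causes each incidence tends to 1/2, which gives a quick sanity check.

```python
import numpy as np

def cif(t, theta_j, thetas, n_grid=20001):
    """Cumulative incidence F_j(t) = integral_0^t h_j(u) S(u) du, where
    S(u) = prod_k S_k(u) over independent Akshaya causes (quadrature sketch)."""
    def sf(x, th):
        D = th**3 + 3*th**2 + 6*th + 6
        return (th**3*(1 + x)**3 + 3*th**2*(1 + x)**2
                + 6*th*(1 + x) + 6) / D * np.exp(-th*x)
    def hz(x, th):
        den = th**3*(1 + x)**3 + 3*th**2*(1 + x)**2 + 6*th*(1 + x) + 6
        return th**4*(1 + x)**3 / den
    u = np.linspace(0.0, t, n_grid)
    S = np.prod([sf(u, th) for th in thetas], axis=0)   # overall survival
    g = hz(u, theta_j) * S                              # integrand h_j * S
    return float(np.sum((g[1:] + g[:-1]) / 2 * np.diff(u)))  # trapezoid

# With two identical causes, each incidence approaches 1/2 as t grows.
F1 = cif(50.0, 1.0, [1.0, 1.0])
assert abs(F1 - 0.5) < 2e-3
```

The same routine evaluated at a finite mission time gives the component CIF estimates used later; plugging in the MLEs θ̂_j yields the plug-in estimator by invariance.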

3. Maximum Likelihood Estimates

We now discuss parameter estimation for this model. We define the likelihood function of the observed sample under progressive type-II censoring with binomial removals. On the basis of the data described in the previous section, the likelihood function of the observed data can be written as

L_1 ∝ ∏_{i=1}^{m} f_{δ_i}(x_i) ∏_{l ≠ δ_i} S_l(x_i) [S(x_i)]^{R_i},

where S(x) = ∏_{j=1}^{s} S_j(x) is the overall survival function and f_{δ_i} is the density of the cause observed at the i-th failure.

From the relation f_j(x) = h_j(x) S_j(x) between the survival function, the hazard rate, and the density, the likelihood function becomes

L_1 ∝ ∏_{i=1}^{m} h_{δ_i}(x_i) [S(x_i)]^{1 + R_i}.

Now, the number of items R_i randomly selected and removed at the i-th failure time follows a binomial distribution with parameter p, such that

P(R_1 = r_1) = C(n − m, r_1) p^{r_1} (1 − p)^{n − m − r_1},

and, for i = 2, …, m − 1,

P(R_i = r_i | R_{i−1} = r_{i−1}, …, R_1 = r_1) = C(n − m − Σ_{l=1}^{i−1} r_l, r_i) p^{r_i} (1 − p)^{n − m − Σ_{l=1}^{i} r_l},

where 0 ≤ r_i ≤ n − m − Σ_{l=1}^{i−1} r_l. At the last failure in the experiment, the remaining items, if any exist, are removed from the experiment with probability one. In this regard, it is assumed that R_i and x_i are mutually independent for all i. Then, P(R = r; p) = P(R_1 = r_1) ∏_{i=2}^{m−1} P(R_i = r_i | R_{i−1} = r_{i−1}, …, R_1 = r_1).

Therefore,

P(R = r; p) = [(n − m)! / ((n − m − Σ_{i=1}^{m−1} r_i)! ∏_{i=1}^{m−1} r_i!)] p^{Σ_{i=1}^{m−1} r_i} (1 − p)^{(m−1)(n−m) − Σ_{i=1}^{m−1} (m − i) r_i}.

Substituting the expressions in equations (7) and (10) into (5), the observed likelihood function takes the following form:

L(θ_1, …, θ_s, p; data) ∝ L_1 × P(R = r; p).

Thus, the log-likelihood function is obtained by taking the natural logarithm of equation (12) and is defined in the following form:

The first partial derivatives of the log-likelihood with respect to θ_1, θ_2, and p involve the Kronecker delta δ(·). Using the observed data set, one can calculate the MLEs of the distribution parameters θ_1 and θ_2 and of the removal probability p by solving the likelihood equations ∂ℓ/∂θ_1 = 0, ∂ℓ/∂θ_2 = 0, and ∂ℓ/∂p = 0 with respect to θ_1, θ_2, and p.

Let θ̂_1, θ̂_2, and p̂ be the MLEs of θ_1, θ_2, and p, respectively. Using the invariance property of the MLEs, the MLEs of the component reliability and the component cumulative incidence function at a given time are obtained accordingly.

3.1. Confidence Intervals

Along with the point estimator, another statistic of interest is the interval estimator. A confidence interval defines a range of values that is intended to contain the true population value, and the probability that the interval includes the parameter value is called the confidence level. Since the ML estimators of the parameters cannot be expressed in analytic form, the exact distributions of the ML estimators cannot be derived.

3.1.1. Asymptotic Confidence Intervals

Here, the confidence interval is obtained using the asymptotic normality of the ML estimator. As the estimators in equations (13) and (14) are not available in closed form, it is not possible to derive their exact distributions. However, we can use the asymptotic distribution of the ML estimator to derive confidence intervals for the parameters. We know that the vector of MLEs is asymptotically multivariate normal with mean equal to the true parameter vector and variance-covariance matrix Σ, which can be calculated as the inverse of the Fisher information matrix; the (i, j)-th element of the Fisher information matrix is the negative expected second partial derivative of the log-likelihood with respect to the i-th and j-th parameters. Now, we use the delta method to obtain the asymptotic confidence interval of a parametric function, say g(θ). The delta method (Oehlert (1992)) provides a normal approximation for a continuous and differentiable function of a sequence of random variables that already has a normal limit in distribution. According to the delta method, the variance of g(θ̂) is estimated by ∇g(θ̂)ᵀ Σ̂ ∇g(θ̂).

So, the 100(1 − α)% confidence interval of g(θ) is obtained as g(θ̂) ∓ z_{α/2} √(Var[g(θ̂)]), where z_{α/2} is the upper α/2 point of the standard normal distribution.
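The delta-method variance ∇g(θ̂)ᵀ Σ̂ ∇g(θ̂) is a one-line computation once the gradient and the inverse Fisher information are available. A minimal sketch with toy numbers (the gradient, covariance matrix, and function g below are illustrative, not from the paper):

```python
import numpy as np

def delta_method_var(grad, cov):
    """First-order delta method: Var[g(theta_hat)] ~ grad' * Sigma * grad,
    where grad is the gradient of g at the MLE and Sigma the inverse
    observed Fisher information."""
    grad = np.asarray(grad, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return float(grad @ cov @ grad)

# Toy example: g(theta1, theta2) = theta1/theta2 evaluated at (2, 4),
# so the gradient is (1/theta2, -theta1/theta2^2) = (0.25, -0.125).
g_grad = [0.25, -0.125]
Sigma = [[0.04, 0.0],
         [0.0, 0.09]]
var_g = delta_method_var(g_grad, Sigma)
# 0.25^2*0.04 + 0.125^2*0.09 = 0.00390625
assert abs(var_g - 0.00390625) < 1e-12
```

The asymptotic interval then follows as g(θ̂) ∓ z_{α/2}·√var_g.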

3.1.2. Boot-p Confidence Interval

In this subsection, we use the parametric bootstrap confidence intervals proposed by Efron and Tibshirani [52]; the bootstrap method is useful when the normality assumption is invalid. For the boot-p confidence intervals, the necessary computational algorithm is as follows: (1) Calculate the ML estimate θ̂ under progressive type-II censoring with binomial removals. (2) Generate a failure-censored sample of size m from the fitted model using θ̂. (3) Use the sample from step 2 to obtain the boot-p estimate of the parameter θ, say θ̂*. (4) Repeat steps 2-3 B times to obtain the sequence of boot-p estimators θ̂*_1, …, θ̂*_B. (5) Arrange the obtained sequence in ascending order to receive θ̂*_{(1)} ≤ ⋯ ≤ θ̂*_{(B)}. (6) A two-sided 100(1 − α)% boot-p confidence interval is given by (θ̂*_{([Bα/2])}, θ̂*_{([B(1−α/2)])}), where [x] denotes the integer part of x.
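The six steps above can be condensed into a generic percentile-bootstrap routine. In the sketch below the estimator and resampler are placeholders: an exponential toy model with the sample mean as estimator stands in for the Akshaya MLE that would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(42)

def boot_p_interval(sample, estimator, resampler, B=1000, alpha=0.05):
    """Percentile (boot-p) interval: refit the estimator on B parametric
    resamples drawn from the fitted model, then take the empirical
    alpha/2 and 1 - alpha/2 order statistics (steps 1-6 of the text)."""
    theta_hat = estimator(sample)                    # step 1
    boots = np.sort([estimator(resampler(theta_hat, len(sample)))
                     for _ in range(B)])             # steps 2-5
    lo = boots[int(np.floor((alpha / 2) * B))]       # step 6
    hi = boots[int(np.floor((1 - alpha / 2) * B))]
    return lo, hi

# Toy check: exponential model, sample mean as the "MLE" placeholder.
data = rng.exponential(scale=2.0, size=200)
mean_est = lambda x: float(np.mean(x))
resample = lambda th, n: rng.exponential(scale=th, size=n)
lo, hi = boot_p_interval(data, mean_est, resample)
assert lo < mean_est(data) < hi
```

For the competing-risks model, `estimator` would solve the likelihood equations of Section 3 and `resampler` would regenerate a progressively censored competing-risks sample from the fitted Akshaya causes.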

4. Bayesian Procedure

We obtain Bayesian estimates of the parameters θ_1, θ_2, and p. To do so, we need the following assumptions:

Assumption 1. Assume that θ_1, θ_2, and p are independent.

Assumption 2. Also, θ_1 and θ_2 have gamma prior distributions with known nonnegative hyperparameters (a_1, b_1) and (a_2, b_2), respectively, and p has a beta prior distribution with known nonnegative hyperparameters (α, β). The gamma prior density function has the following form:

π(θ_j) = (b_j^{a_j} / Γ(a_j)) θ_j^{a_j − 1} e^{−b_j θ_j}, θ_j > 0, j = 1, 2,

and the beta density function has the following form:

π(p) = p^{α − 1} (1 − p)^{β − 1} / B(α, β), 0 < p < 1,

where B(α, β) = Γ(α)Γ(β)/Γ(α + β). The joint prior density function is the product π(θ_1) π(θ_2) π(p).

Assumption 3. A quadratic (squared-error) loss function is used; that is, the loss function is L(ĝ, g) = (ĝ − g)², where ĝ is an estimate of g.
Therefore, the posterior density function of (θ_1, θ_2, p), up to a normalizing constant, is proportional to the likelihood multiplied by the joint prior. According to Assumption 3, the Bayes estimate of a given function of the parameters, say g(θ_1, θ_2, p), is its posterior expectation. The integral in equation (28) and the normalizing constant present in equation (28) have no analytical solutions. Therefore, we use the MCMC simulation method to generate random samples from the joint posterior density in equation (28), compute the Bayes estimates of the parameters, and construct the associated credible intervals.

4.1. Markov Chain Monte Carlo (MCMC) Method

The Markov chain Monte Carlo (MCMC) method is one of the most important techniques in the Bayesian paradigm, and it has been broadly used as the main computational tool of Bayesian inference [53–57]. With an MCMC algorithm, we can summarize the posterior distribution without requiring the normalizing constant. One common MCMC technique is the Metropolis–Hastings algorithm, which generates random observations from a density known only up to a constant. A key ingredient of the method is a suitable distribution called the "proposal density," which should satisfy two conditions: (i) it mimics the actual posterior distribution function, and (ii) it is easy to simulate from. We obtain random samples from the proposal density through an acceptance-rejection rule. The following steps represent the Metropolis–Hastings algorithm for generating a random sample from the posterior density π(θ | x): (1) Set the initial point of the sequence, say θ^{(0)}. (2) Set the size N of the sequence to be generated. (3) For i = 1, …, N, repeat the following steps: (a) Set θ = θ^{(i−1)}. (b) Generate a new candidate point θ* from a candidate distribution q(θ* | θ). (c) Calculate the acceptance probability ρ = min{1, [π(θ*) q(θ | θ*)] / [π(θ) q(θ* | θ)]}. (d) Accept θ^{(i)} = θ* with probability ρ; otherwise set θ^{(i)} = θ.

Under some regularity conditions on the proposal density, the sequence of simulated values constitutes random draws that follow the posterior density π(θ | x). The Gibbs sampler helps to generate random samples from the full conditional distributions, with the MH algorithm used as a wrapper for those conditional posteriors that cannot be sampled directly. The necessary steps of the MH-within-Gibbs algorithm are presented in Algorithm 1.

(1) Consider an arbitrary initial point (θ_1^{(0)}, …, θ_k^{(0)}) for which the posterior density is positive.
(2) Put i = 1.
(3) Get θ_1^{(i)} from the full conditional distribution π(θ_1 | θ_2^{(i−1)}, …, θ_k^{(i−1)}, x).
(4) Obtain θ_2^{(i)} from the full conditional distribution π(θ_2 | θ_1^{(i)}, θ_3^{(i−1)}, …, θ_k^{(i−1)}, x).
 ⋮
(5) Obtain θ_k^{(i)} from the full conditional distribution π(θ_k | θ_1^{(i)}, …, θ_{k−1}^{(i)}, x).
(6) Put i = i + 1.
(7) Repeat steps 2–6 N times.
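The Metropolis–Hastings step at the heart of Algorithm 1 can be sketched as a random-walk sampler on the log scale, where the unknown normalizing constant of the posterior cancels in the acceptance ratio. A minimal illustration (the target density, step size, and seed are toy choices, not the model's posterior):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5):
    """Random-walk Metropolis-Hastings: propose theta* ~ N(theta, step^2)
    and accept with probability min(1, pi(theta*)/pi(theta)); the
    normalizing constant of the posterior cancels in this ratio."""
    chain = np.empty(n_iter)
    theta = theta0
    lp = log_post(theta)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain

# Sanity check on a known target: N(3, 1) specified up to a constant.
chain = metropolis_hastings(lambda th: -0.5 * (th - 3.0)**2, theta0=0.0)
assert abs(chain[1000:].mean() - 3.0) < 0.3
```

In the model of Section 4, `log_post` would be the log of the full conditional of θ_1, θ_2, or p given the other parameters and the data, cycled as in Algorithm 1.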

We discard the first several values from the generated chain to avoid initial-value effects. Also, diagnostic tools such as Cusum and ACF plots help in checking that the chain is stationary. In the end, we obtain a chain of size N, based on which we draw the required inferences; we will calculate the Bayesian credible intervals (BCIs) and highest posterior density (HPD) intervals.

4.2. Bayes Estimate using Balanced Loss Functions

In estimation theory, some difference is always observed between the true and estimated values of a parameter. With a symmetric loss function, the amount of risk assigned by the loss function to a negative error equals that assigned to a positive error of the same magnitude. Since the assumption of equal losses is not appropriate in many practical scenarios and may lead to misleading results, asymmetric loss functions are appropriate in such situations, where a negative error may be more serious than a positive error of the same magnitude, or vice versa. Here, we consider three balanced loss functions: (i) the balanced squared-error loss function (BSEL), (ii) the balanced LINEX loss function (BLINEX), and (iii) the balanced entropy loss function (BEL), introduced by Jozani et al. [? ]. Let δ be an estimator of the unknown parameter θ. The Bayes estimates of θ under the given loss functions are presented in Table 2.

The shape constants of the BLINEX and BEL losses are both nonzero real numbers. Gibbs sampling is one of the simplest MCMC algorithms and was introduced by [58]; Azzalini [59] helped to demonstrate the value of Gibbs algorithms for problems in the Bayesian paradigm. With the Gibbs sampling technique, the transition kernel is built up from the full conditional distributions.
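Given a chain of posterior draws, the three balanced-loss Bayes estimates reduce to simple Monte Carlo averages. The sketch below uses the commonly reported closed forms for BSEL, BLINEX, and BEL (to be checked against Table 2); δ₀ denotes a target estimator, e.g., the MLE, and ω, a, q are the weight and shape constants.

```python
import numpy as np

def balanced_bayes(draws, delta0, omega=0.2, a=1.0, q=1.0):
    """Bayes estimates under balanced losses from posterior MCMC draws,
    using the commonly reported closed forms (verify against Table 2):
      BSEL  : omega*delta0 + (1-omega)*E[theta]
      BLINEX: -(1/a)*log(omega*exp(-a*delta0) + (1-omega)*E[exp(-a*theta)])
      BEL   : (omega*delta0**(-q) + (1-omega)*E[theta**(-q)])**(-1/q)
    delta0 is a target estimator such as the MLE."""
    draws = np.asarray(draws, dtype=float)
    bsel = omega * delta0 + (1 - omega) * draws.mean()
    blinex = -np.log(omega * np.exp(-a * delta0)
                     + (1 - omega) * np.mean(np.exp(-a * draws))) / a
    bel = (omega * delta0**(-q)
           + (1 - omega) * np.mean(draws**(-q)))**(-1.0 / q)
    return bsel, blinex, bel

# With omega = 1 every balanced estimate collapses to the target delta0.
b1, b2, b3 = balanced_bayes(np.array([1.0, 2.0, 3.0]), delta0=2.0, omega=1.0)
assert abs(b1 - 2.0) < 1e-12 and abs(b2 - 2.0) < 1e-12 and abs(b3 - 2.0) < 1e-12
```

Setting ω = 0 instead recovers the ordinary SEL, LINEX, and entropy Bayes estimates, so the same routine covers both the balanced and unbalanced cases.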

4.3. Bayesian Intervals

First, we generate samples from the posterior densities of θ_1, θ_2, and p, and find the credible intervals for the unknown parameters and their functions. Here, we use the algorithm of [60] to evaluate the Bayesian credible and highest posterior density (HPD) intervals. The necessary steps of this algorithm are as follows: (a) Credible intervals: (i) generate a random sample through the M-H algorithm and sort it into ordered values; (ii) the 100(1 − α)% Bayesian credible interval for the parameter is given by the pair of order statistics cutting off probability α/2 in each tail. (b) Highest posterior density (HPD) intervals: (i) compute all possible 100(1 − α)% Bayesian credible intervals with their corresponding lengths; (ii) search for the credible interval with the smallest length; the credible interval with the smallest length is the HPD interval of the parameter.
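The HPD search in step (b) amounts to scanning consecutive order statistics of the chain for the shortest interval of the required coverage, in the spirit of [60]. A minimal sketch (function name is ours):

```python
import numpy as np

def hpd_interval(draws, alpha=0.05):
    """HPD interval from MCMC draws: among all 100(1-alpha)% credible
    intervals formed by consecutive order statistics, keep the shortest."""
    x = np.sort(np.asarray(draws, dtype=float))
    n = len(x)
    k = int(np.floor((1 - alpha) * n))   # interval width measured in ranks
    widths = x[k:] - x[:n - k]           # lengths of all candidate intervals
    j = int(np.argmin(widths))           # the shortest candidate wins
    return x[j], x[j + k]

# For a symmetric unimodal posterior the HPD is near the central interval,
# about (-1.96, 1.96) for a standard normal sample.
rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 20000)
lo, hi = hpd_interval(s)
assert -2.2 < lo < -1.7 and 1.7 < hi < 2.2
```

For a skewed posterior (such as that of a gamma-distributed parameter) the HPD interval is shorter than the equal-tail credible interval, which is why both are reported in the tables.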

5. Simulation Study

Shankar [20] proposed the Akshaya distribution as a mixture of the exponential(θ), gamma(2, θ), gamma(3, θ), and gamma(4, θ) distributions with respective mixing proportions θ³/D, 3θ²/D, 6θ/D, and 6/D, where D = θ³ + 3θ² + 6θ + 6.
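This mixture representation gives an immediate sampler for the Akshaya distribution: pick a gamma component with the stated weights, then draw from it. A sketch (our own helper, to be checked against [20]):

```python
import numpy as np

rng = np.random.default_rng(3)

def akshaya_rvs(theta, size):
    """Draw from Akshaya(theta) via its gamma-mixture representation:
    components Gamma(k, rate=theta), k = 1..4, with mixing weights
    theta^3/D, 3*theta^2/D, 6*theta/D, 6/D, D = theta^3+3*theta^2+6*theta+6."""
    D = theta**3 + 3*theta**2 + 6*theta + 6
    w = np.array([theta**3, 3*theta**2, 6*theta, 6.0]) / D
    shapes = rng.choice([1, 2, 3, 4], size=size, p=w)    # pick components
    return rng.gamma(shape=shapes, scale=1.0 / theta)    # rate -> scale

x = akshaya_rvs(theta=1.5, size=50000)
assert (x > 0).all()
```

As a check, the mixture mean is Σ w_k·k/θ = (θ³ + 6θ² + 18θ + 24)/(θD), which is about 1.801 for θ = 1.5, and the sample mean agrees closely.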

Here, we consider the statistical analysis of a simulated data set mimicking a real-life problem. In our study, we consider a system having two components, and we assume that both components follow the Akshaya distribution with parameters θ_1 and θ_2, respectively. Furthermore, we fix the values of θ_1 and θ_2 and generate the lifetimes T_1 and T_2, which are considered to be the lifetimes of component 1 and component 2 of a system, respectively. Since we are considering series systems of the components, the failure time of a system is the minimum of the two component lifetimes. We generate n observations in this way and also note their causes of failure. We assume that m observations fail and the remaining observations are removed from the experiment by progressive type-II binomial removals with removal probability p. We now discuss generating the progressively type-II binomially removed sample.

Table 3 shows the patterns of progressive binomial removals at each failure. The removal pattern is based on the probability value p of the binomial distribution. In our study, we assume different values of p for particular (n, m) pairs to generate the progressively type-II binomially censored lifetime data, and we provide estimates under various schemes by choosing several removal patterns through different binomial parameters. In the simulation study, we obtained the ML estimates of the parameters along with their MSEs. For interval estimation, the asymptotic confidence and boot-p intervals were also obtained. The ML estimates of the component cumulative incidence functions are calculated at a given time, and the component reliability and cumulative incidence functions are also obtained at given times. The confidence interval of the cumulative incidence function is also found using the delta method. The coverage probability (CP) of a confidence interval of a parameter is defined as the proportion of times that the interval contains the true value of interest; in the simulation, the CP is the ratio of the number of times the true parameter value lies inside the confidence interval to the total number of simulations.

For Bayesian estimation, the parameters of the system components are assigned gamma priors with known hyperparameters, and fixed hyperparameter values are assumed. For these values, we obtain the distribution parameters θ_1 and θ_2, the component reliability, and the component cumulative incidence function using the MCMC method. We run this process 7000 times and obtain the average estimates of the parameters and parametric functions along with their mean squared errors (MSEs). In this study, we consider the squared-error loss function as the symmetric loss function and the LINEX and entropy loss functions as asymmetric loss functions. For the LINEX loss function, we consider shape parameter values for underestimation and overestimation, the latter being 2; the same values are assumed for the entropy loss function. We obtain the component reliability and cumulative incidence function at a given time. We assume two sample sizes, with censoring schemes defined as

Tables 4 and 5 present the ML and Bayes estimates of θ_1 and θ_2, respectively, along with the corresponding MSEs; the Bayes estimates are obtained under the symmetric and asymmetric loss functions. Similarly, Tables 6–8 show the ML and Bayes estimates of the component cumulative incidence functions with their MSEs. The lengths of the asymptotic confidence and HPD intervals of θ_1 and θ_2 are reported in Tables 9 and 10, respectively. Similarly, the lengths of the asymptotic confidence and HPD intervals of the component reliability and cumulative incidence functions, with the respective coverage probabilities, are reported in Tables 11–13.

From all the tables, we observe that the MSEs of all estimates decrease as n increases, and, for fixed n, the MSEs of all the estimates decrease as m increases. In the Bayesian analysis, we report the MSEs under both the symmetric and asymmetric loss functions. Based on the simulation study, we found the following: (i) In general, as n or m increases, the MSEs of the corresponding ML and Bayes estimators decrease, while the reverse behavior is observed as the removal probability increases for fixed n and m. (ii) Bayesian estimation, since it uses additional information in the form of a prior, works more efficiently than the maximum likelihood approach: for each setting, the MSEs of the Bayes estimates are smaller than those of the ML estimates. (iii) The average lengths of the intervals decrease as the sample size increases for any scheme, and a similar pattern holds for the coverage probabilities; the Bayesian intervals beat the asymptotic ones in terms of average length, which is expected. (iv) As the removal probability increases for large values of n, the probability of observing failures decreases.

6. Data Study

In this section, we consider simulated and real data to illustrate the application of the Akshaya distribution and to show that the results for Akshaya lifetimes can be applied to real-life problems.

6.1. Simulated Data

We also consider a simulated data analysis to show how the results apply to real-life problems. The data are simulated from the considered Akshaya population, generating competing-risks data of a given size n. In this study, we considered m total system failures out of n, and the removal probability takes several values, including 0.15. Each pair records the failure time and the cause of failure of the system. The observed data with causes of failure are given in Table 14. The asymptotic and boot-p confidence intervals are calculated. We then obtain the distribution parameters, the component reliability, and the component cumulative incidence function using the MCMC method. The Bayesian estimation is performed under the symmetric and asymmetric loss functions. We also discuss the Bayesian credible and HPD intervals for the parameters and parametric functions. Tables 15-17 present the ML and Bayes estimates of the parameters with the lengths of their confidence intervals. The component reliability and cumulative incidence functions are obtained at the given time. The Bayes estimates, with the lengths of their CIs for different m and n, are given in Table 18. The cumulative incidence functions for both causes are obtained for the given values.
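The censoring mechanism used to generate such data can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's generator: the latent cause-specific lifetimes are taken exponential as a stand-in for the Akshaya subdistributions (whose sampling requires numerical inversion), and the rates are arbitrary. At each of the first m-1 observed failures, a binomial number of surviving units is withdrawn; all remaining survivors are withdrawn at the m-th failure.

```python
import numpy as np

def progressive_competing_risks(n, m, p, rates=(0.8, 0.5), seed=None):
    """Progressively type-II censored competing-risks data with binomial
    removals.  Returns m (failure_time, cause) pairs in increasing time order."""
    rng = np.random.default_rng(seed)
    # Latent cause-specific lifetimes; the system fails at their minimum.
    scales = np.array([1.0 / r for r in rates])
    latent = rng.exponential(scales, size=(n, len(rates)))
    times = latent.min(axis=1)
    causes = latent.argmin(axis=1) + 1        # causes labelled 1, 2, ...
    alive = list(np.argsort(times))           # survivors, sorted by lifetime
    data, removed = [], 0
    for i in range(m):
        u = alive.pop(0)                      # next observed failure
        data.append((float(times[u]), int(causes[u])))
        if i < m - 1:
            # Binomial removal among the units that can still be spared,
            # so that m failures always remain achievable.
            r = int(rng.binomial(n - m - removed, p))
        else:
            r = n - m - removed               # withdraw all survivors at the end
        removed += r
        if r > 0:
            drop = set(rng.choice(len(alive), size=r, replace=False).tolist())
            alive = [v for j, v in enumerate(alive) if j not in drop]
    return data
```

Capping each binomial draw at n - m minus the removals so far is what guarantees the scheme always yields exactly m observed failures.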

6.2. Real Data

The data represent the survival times of mice kept in a conventional germ-free environment, all of which were exposed to a fixed dose of radiation at an age of 5 to 6 weeks [1]. There are three causes of death. In group 1, we considered thymic lymphoma as the first cause and the other causes combined as the second cause. The cause-wise data, after applying type-II progressive censoring with binomial removals with the given removal probability, are shown in Table 19.

The ML and Bayes estimates, with the corresponding lengths of their confidence intervals, are presented in Tables 20-22. In the Bayesian paradigm, all estimation is performed under the various loss functions. A graphical representation of the samples generated by the MCMC method for both components is presented in Figure 1. The component reliability and cumulative incidence functions are obtained at the given time.
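The Metropolis-Hastings step behind the generated samples in Figure 1 can be sketched as follows. This is an illustrative random-walk sampler, not the paper's implementation: it targets the posterior of an exponential rate under a gamma prior, chosen because that posterior is known in closed form and so can be checked; the Akshaya posterior has the same likelihood-times-prior structure but no conjugate form. The prior hyperparameters, step size, and burn-in length are all assumptions.

```python
import numpy as np

def mh_posterior_rate(data, a=2.0, b=1.0, n_iter=7000, burn=1000, seed=0):
    """Random-walk Metropolis-Hastings for the rate of an exponential
    likelihood under a Gamma(a, b) prior.  Returns the post-burn-in chain."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float)

    def log_post(th):
        if th <= 0:
            return -np.inf                     # rate must be positive
        # log-likelihood + log Gamma(a, b) prior density, up to a constant
        return (len(x) + a - 1) * np.log(th) - th * (x.sum() + b)

    th, chain = 1.0, []
    for _ in range(n_iter):
        prop = th + 0.2 * rng.normal()         # symmetric random-walk proposal
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < log_post(prop) - log_post(th):
            th = prop
        chain.append(th)
    return np.array(chain[burn:])
```

With conjugacy, the exact posterior here is Gamma(n + a, sum(x) + b), so the chain mean can be validated directly, which is a useful sanity check before moving to a non-conjugate target.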

7. Conclusion

We studied the competing-risks model under progressive type-II censoring with binomial removals. The lifetimes of the objects were assumed to follow Akshaya subdistributions with unknown parameters, and the number of items or individuals removed at each failure time follows a binomial distribution. The classical and Bayesian approaches were used for the point and interval estimation of the parameters and parametric functions. The Bayes estimates were obtained by the Markov Chain Monte Carlo (MCMC) method under symmetric and asymmetric loss functions, with the Metropolis-Hastings algorithm applied to generate the MCMC samples from the posterior density function. A simulated data set was used to assess the performance of the techniques. The ML and Bayes estimates, with the corresponding lengths of their confidence intervals, were presented, and the component reliability and cumulative incidence functions were obtained from both the simulated and the real data sets. Ignoring the competing-risks structure yields a model that is not appropriate for such a system, so the proposed approach can be expected to give more accurate results. Overview of the analysis of survival competing risks:
(i) Cumulative incidence functions (CIFs) should be used to estimate the likelihood of each type of competing risk.
(ii) Researchers must decide whether the research goal is to answer etiologic questions or to estimate incidence or predict prognosis.
(iii) When estimating incidence or predicting prognosis in the presence of competing risks, use the Fine-Gray subdistribution hazard model.
(iv) When addressing etiologic questions, use the cause-specific hazard model.
(v) In some cases, both types of regression models should be estimated for each competing risk in order to fully understand the effect of covariates on the incidence and rate of occurrence of each outcome.

The results for all competing causes, as well as the cause-specific and subdistribution hazard functions, must be presented. This approach allows for a more comprehensive understanding not only of the effects of prognostic factors but also of the absolute risks associated with the various outcomes in the study sample. It is difficult for decision makers to weigh all hazards while making clinical decisions. Owing to the availability of software, analysis of the cumulative incidence function has become increasingly popular and widely reported in recent years. Biases can occur when the Kaplan-Meier estimator is used to estimate the cumulative incidence of the event of interest, as well as when a proportional hazards model for the cause-specific hazard function is used to estimate the effects of covariates on the cumulative incidence function. Incorrectly treating competing events as censoring events has practical implications in these analyses: in general, the more competing events there are, the larger the resulting bias is likely to be. When the percentage of competing events is larger than 10%, the scientific objectives of the analysis, as well as the suitable choice of end point and technique of analysis, must be carefully considered.
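The contrast between the Kaplan-Meier complement and the cumulative incidence function can be made concrete. The sketch below, illustrative only and not tied to the paper's data, implements the standard nonparametric CIF estimator for an uncensored competing-risks sample (a simplified Aalen-Johansen form): each cause-k event contributes the current all-cause survival probability divided by the number still at risk.

```python
import numpy as np

def cumulative_incidence(times, causes, cause):
    """Nonparametric CIF for one cause, uncensored competing-risks sample.

    CIF_k(t) = sum over event times t_i <= t of S(t_i-) * d_k(t_i) / n(t_i),
    where S is the all-cause Kaplan-Meier estimate."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    c = np.asarray(causes)[order]
    n = len(t)
    surv, cif = 1.0, 0.0
    grid, values = [], []
    for i in range(n):
        at_risk = n - i
        if c[i] == cause:
            cif += surv / at_risk          # cause-k increment
        surv *= 1.0 - 1.0 / at_risk        # all-cause KM step
        grid.append(t[i])
        values.append(cif)
    return np.array(grid), np.array(values)
```

Without censoring, the CIF for each cause converges to that cause's event proportion, whereas the complement of a cause-specific Kaplan-Meier curve (treating other causes as censoring) overshoots it, which is exactly the bias discussed above.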

In our case study, we demonstrated that a variable can have an effect on the incidence of an outcome that differs from its effect on the outcome's cause-specific hazard. This emphasises the importance of investigating the effects on both the incidence and the cause-specific hazard functions of all event types in order to develop a comprehensive understanding of the various competing events. Competing risks are events that prevent the occurrence of the outcome of interest. In some cases, more clarity may be required before proceeding with the analysis to determine what constitutes a competing risk. For example, if a person develops one type of heart disease, can he or she later develop another type of heart disease, or are the two conditions mutually exclusive, precluding the later second disease? Such clinical issues must be addressed during the design phase, before conducting the statistical analysis. For further reading, see Abushal [61, 62].

In conclusion, competing risks are common in survival research. We encourage analysts to fully utilise the variety of statistical methods developed in the statistical literature for the analysis of survival data. Investigators must be aware of competing risks and their potential impact on statistical analyses. Researchers must choose the best method to address the study objectives and ensure that the analysis results are correctly interpreted.

8. Discussion and Scope of Future Research

Parametric survival regression models refer to a set of statistical probability distributions used in reliability engineering and lifespan data analysis. Serial coupling of identical binomial or exponential components is no longer a requirement for system lifetime distributions. Survival data are more difficult to work with since they are characterized by uncertainty, risk, and complexity, which highlights the importance of establishing a sound risk structure, methodology, and approach. The promise of the suggested family has been demonstrated by fitting it to a real-world data set, and the statistical analysis shows that it provides a better fit. We plan to investigate a number of issues regarding the analysis of various forms of data using a masked model with independent and dependent failure causes. The most important applications of this study lie in the fields of media and electronic equipment.

Data Availability

All of the data used in this study are contained within the article.

Conflicts of Interest

The authors warrant that they do not have any conflicts of interest to disclose.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code 22UQU4310063DSR04.