Abstract

This paper examines the application of the Wiener process as a degradation model. Its appropriateness as a degradation model is discussed and demonstrated with the aid of Monte Carlo simulations. In particular, for monotonically degrading systems, this paper demonstrates that the irreversible accumulation of damage can be modelled by the Wiener maximum process. First passage times of the Wiener process and its maximum process are also shown to coincide. Practical advantages of assessing system reliability from degradation data are highlighted by applying the Wiener process model to real gallium arsenide (GaAs) laser degradation data for telecommunication systems. The results of the real data application demonstrate that degradation analysis allows conclusions about system reliability to be reached earlier without compromising estimation accuracy, a major practical advantage.

1. Introduction

Assessing the reliability of technical systems from failure time information is increasingly becoming a challenge. New design-for-reliability technologies continue to be developed. This has resulted in highly reliable systems that operate for long periods with few or no failures, even under accelerated conditions (Ye and Xie [1]). For such technical systems, collecting sufficient failure time information for reliability assessment is a costly exercise. Depending on the application, an alternative approach is to use information gathered on the state of the system and its performance while in operation, called degradation data. Through the use of suitable models and data analysis methods, registered degradation data can be converted to system reliability information which can be utilised for reliability assessment (Guo and Liao [2]).

The rationale is based on the observation that ageing failures are linked to an underlying degradation process (Lehmann [3]; McLinn [4]). For most manufactured systems, the physical condition degrades as the system ages, such as automobile tyre wear. For some systems, however, degradation occurs in system performance, such as the light intensity of a light-emitting diode (LED) dropping with usage. Physical or performance degradation has the interpretation of damage to the system. It accumulates with time or mission and ultimately causes failure when the accumulated damage reaches a failure threshold defined by industrial standards. The system failure time distribution and its parameters are derived from the analysis of degradation data and the deterioration mechanism. Based on the derived distribution, reliability metrics of interest such as the mean time to failure (MTTF) and percentiles are determined.

Degradation models fall into two broad categories, namely, general path models and stochastic process models (Meeker et al. [5]). General path models have a well-developed theory. They are essentially mixed-effects regression models and can therefore incorporate covariates and random effects in a flexible way (Coble and Hines [6]). Their limitation is an inability to capture the time-varying behaviour of systems and the uncertainty ingrained in the evolution of system degradation over time. System degradation, on the other hand, is naturally governed by a random mechanism that is best described by a stochastic process (Limon et al. [7]; Gorjian et al. [8]). In view of their random nature, stochastic process models allow for a natural explanation of the unexplained randomness of degradation over time resulting from unobserved environmental factors. This study assumes a stochastic process model for system degradation paths. Its main objective is to investigate the application of the Wiener process in degradation modelling. With the help of Monte Carlo simulations, applications where the Wiener process is a suitable degradation model are demonstrated.

1.1. Overview

The remainder of the paper is organised as follows. In Section 2, the basis of the Wiener process as a degradation model and parameter estimation are reviewed. Known results are also demonstrated using Monte Carlo simulations. The application of the Wiener maximum process for monotone degradation is the subject of Section 3. In Section 4, a real data application involving GaAs laser degradation data for telecommunication systems is presented. The study ends with concluding remarks in Section 5.

2. The Wiener Process as a Degradation Model

The Wiener process is the basic model for the random accumulation of degradation over time (Kahle and Lehmann [9]; Wang [10]). Its basis is that the degradation increment in an immeasurably small time interval is the sum of a large number of small, independent random stress effects (additive superposition).

Denote by $S_n$ the sum $S_n = X_1 + X_2 + \cdots + X_n$, where $X_1, X_2, \ldots, X_n$ are independent random variables with finite means $\mu_i$ and finite variances $\sigma_i^2$. Assume none of the $X_i$ dominates the rest. Then, by the central limit theorem, the standardisation of $S_n$, denoted by $Z_n = \left(S_n - \sum_{i=1}^{n}\mu_i\right)\big/\sqrt{\sum_{i=1}^{n}\sigma_i^2}$, converges under the Lindeberg condition (Beichelt [11]) to a normal distribution. That is,
$$\lim_{n \to \infty} P(Z_n \le z) = \Phi(z), \quad (1)$$
where $\Phi$ is the standard normal distribution function. Thus, the degradation increments over an interval $(s, t]$ are normally distributed. Accordingly, the Wiener process $\{W(t), t \ge 0\}$ has the following properties:
(1) For all $0 \le s < t$, the degradation increment $W(t) - W(s)$ is normally distributed with mean $0$ and variance $\sigma^2 (t - s)$, where $\sigma^2 > 0$ is a variance parameter.
(2) For any set of disjoint time intervals, the increments are independent random variables distributed as described in property 1.
(3) For any constant $\tau > 0$ and $0 \le s < t$, $W(t + \tau) - W(s + \tau)$ has the same distribution as $W(t) - W(s)$. That is, $\{W(t)\}$ has stationary increments.
(4) $W(0) = 0$ almost surely.

System degradation generally has a nonzero mean. An obvious improvement of the Wiener process model is to include a drift parameter $\mu$ reflecting the rate of degradation. This yields the one-dimensional Wiener process with drift
$$X(t) = \mu t + \sigma B(t), \quad t \ge 0, \quad (2)$$
where $\sigma > 0$ is the diffusion parameter and $\{B(t), t \ge 0\}$ is standard Brownian motion on $[0, \infty)$ capturing the stochastic evolution of the degradation process. Thus, $E[X(t)] = \mu t$ and $\mathrm{Var}[X(t)] = \sigma^2 t$. Consequently,
$$X(t) \sim N(\mu t, \sigma^2 t). \quad (3)$$
Unless indicated otherwise, technical systems having the same design are assumed to have common drift and variance parameters.
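To make the model concrete, the following minimal Python sketch simulates sample paths of $X(t) = \mu t + \sigma B(t)$ by summing independent Gaussian increments on a discrete grid. The function name and parameter values are illustrative and are not taken from the paper's figures.

```python
import numpy as np
import matplotlib.pyplot as plt

def wiener_with_drift(mu, sigma, t_max, n_steps, n_paths, rng):
    """Simulate paths of X(t) = mu*t + sigma*B(t) via Gaussian increments."""
    dt = t_max / n_steps
    t = np.linspace(0.0, t_max, n_steps + 1)
    # increments are N(mu*dt, sigma^2*dt), independent across steps and paths
    dX = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_paths, n_steps))
    X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dX, axis=1)], axis=1)
    return t, X

rng = np.random.default_rng(seed=1)
t, X = wiener_with_drift(mu=1.0, sigma=0.5, t_max=10.0,
                         n_steps=1000, n_paths=20, rng=rng)
plt.plot(t, X.T, lw=0.8)
plt.xlabel("t"); plt.ylabel("X(t)")
plt.show()
```

Because the increments are exact draws from $N(\mu\,\Delta t, \sigma^2\,\Delta t)$, the simulated values have the correct joint distribution at the grid points for any step size.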

2.1. Wiener Process Model for Nonmonotone Degradation

An assumption that is often valid in applications is that physical or performance degradation is a continuous process. Accordingly, the sample paths of the stochastic process describing system degradation ought to be restricted to continuous functions.

Simulated sample paths of a Wiener process for a fixed drift and several values of the volatility parameter $\sigma$ are presented in Figure 1. The process trajectories in Figure 1 are continuous functions. It is therefore not surprising that the Wiener process is the basic model for a degradation process. Describing system degradation by the Wiener process implies that physical or performance degradation can increase or decrease with time. While this might not be meaningful in many degradation applications, it is applicable to degradation processes whose levels vary bidirectionally over time when observed closely. Examples include:
(1) the gain of a transistor or the extent of propagation delay (Lu [12]);
(2) cracks healing and CD4 blood cell counts fluctuating (Singpurwalla [13]);
(3) the resistance of a structure alternating with time in the framework of structural reliability (Dong and Cui [14]).

For monotone degradation processes such as wear-out, the application of the Wiener process is only an approximation, which is especially good when the volatility $\sigma$ is small relative to the drift $\mu$. In this case, the trajectories are approximately monotone (see the bottom-right panel of Figure 1), since the jagged teeth in the evolving paths of the Wiener process are appreciably smoothed out. Alternatively, all factors contributing to nonmonotone behaviour in a Wiener process model may be attributed to pure noise and modelled accordingly.

2.2. Lifetime Estimation and Failure-Time Distribution

Assuming a Wiener process model, the system lifetime $T$ is the time at which $X(t)$ crosses the critical degradation level $\omega$ for the first time. That is, $T$ is the first passage time of the Wiener process to $\omega$. It is given by
$$T = \inf\{ t \ge 0 : X(t) \ge \omega \}. \quad (4)$$

It is well known (Chhikara and Folks [15]) that $T$ is distributed as inverse Gaussian with probability density function (pdf)
$$f_T(t) = \frac{\omega}{\sqrt{2\pi \sigma^2 t^3}} \exp\left\{ -\frac{(\omega - \mu t)^2}{2\sigma^2 t} \right\}, \quad t > 0. \quad (5)$$

A useful reparameterisation of the density in Equation (5), in terms of the development of statistical properties analogous to those of the normal distribution (Tweedie [16]), is obtained by setting
$$\mu_T = \frac{\omega}{\mu}, \qquad \lambda = \frac{\omega^2}{\sigma^2}. \quad (6)$$

This yields the reparameterised inverse Gaussian distribution with pdf and cumulative distribution function (cdf) given as
$$f_T(t) = \sqrt{\frac{\lambda}{2\pi t^3}} \exp\left\{ -\frac{\lambda (t - \mu_T)^2}{2\mu_T^2 t} \right\}, \quad t > 0, \quad (7)$$
$$F_T(t) = \Phi\!\left[ \sqrt{\frac{\lambda}{t}}\left( \frac{t}{\mu_T} - 1 \right) \right] + \exp\!\left( \frac{2\lambda}{\mu_T} \right) \Phi\!\left[ -\sqrt{\frac{\lambda}{t}}\left( \frac{t}{\mu_T} + 1 \right) \right], \quad (8)$$
respectively, where $\mu_T > 0$ is the mean and $\lambda > 0$ is the shape parameter. The inverse Gaussian distribution is right-skewed and bounded below at zero. Figure 2 illustrates the probability density function of the inverse Gaussian for a fixed mean $\mu_T$ and several values of the shape parameter $\lambda$.
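The reparameterised density and distribution functions are available in standard software. As a check, the sketch below evaluates Equation (7) directly and matches it against SciPy's invgauss distribution, which uses the parameterisation mu = $\mu_T/\lambda$ with scale = $\lambda$; the numerical values are illustrative.

```python
import numpy as np
from scipy import stats

def ig_pdf(t, mean, shape):
    """Inverse Gaussian pdf in the (mean, shape) parameterisation of Equation (7)."""
    return np.sqrt(shape / (2.0 * np.pi * t**3)) * np.exp(
        -shape * (t - mean) ** 2 / (2.0 * mean**2 * t)
    )

mean, shape = 5.0, 20.0                               # illustrative values
dist = stats.invgauss(mu=mean / shape, scale=shape)   # same distribution in SciPy

t = np.linspace(0.1, 15.0, 200)
assert np.allclose(ig_pdf(t, mean, shape), dist.pdf(t))
print(dist.cdf(5.0))   # F_T(5), i.e., Equation (8) evaluated at t = 5
```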

The result that $T$ is inverse Gaussian is demonstrated using Monte Carlo simulations. In particular, a large number of sample paths were simulated, and the failure threshold $\omega$ was set low enough relative to the drift to ensure that all systems are tested to failure, as illustrated in Figure 3.

System degradation is simulated at discrete times. Interpolation using splines ensures that the resulting first passage times are continuous, and thus unique and more representative.
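A compact version of this simulation experiment is sketched below. It draws Wiener paths with drift, extracts first passage times to a threshold (using linear rather than spline interpolation, for brevity), and overlays the theoretical inverse Gaussian density implied by Equation (6). All parameter values are illustrative, and the horizon t_max is assumed large enough that every path crosses the threshold.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
mu, sigma, omega = 1.0, 0.5, 10.0          # illustrative drift, volatility, threshold
n_paths, n_steps, t_max = 5000, 4000, 40.0
dt = t_max / n_steps
t = np.linspace(0.0, t_max, n_steps + 1)

dX = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_paths, n_steps))
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dX, axis=1)], axis=1)

# first passage time per path, linearly interpolated between grid points
fpt = np.empty(n_paths)
for i in range(n_paths):
    k = np.argmax(X[i] >= omega)           # first grid index at or above omega
    fpt[i] = t[k - 1] + dt * (omega - X[i, k - 1]) / (X[i, k] - X[i, k - 1])

# overlay the theoretical IG(mean = omega/mu, shape = omega^2/sigma^2) density
mean, shape = omega / mu, omega**2 / sigma**2
grid = np.linspace(fpt.min(), fpt.max(), 200)
plt.hist(fpt, bins=50, density=True, alpha=0.5)
plt.plot(grid, stats.invgauss(mu=mean / shape, scale=shape).pdf(grid), "g-")
plt.xlabel("first passage time")
plt.show()
```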

Figure 4 shows histograms of the resulting first passage times for different path parameter combinations. Additionally, based on the parameterisation in Equation (6), the theoretical inverse Gaussian pdf is also represented in green. It follows from Figure 4 that the histograms closely resemble the theoretical inverse Gaussian pdf. This illustrates that first passage times of a Wiener process with drift are indeed distributed as inverse Gaussian.

2.2.1. Maximum Likelihood Estimation of Path Model Parameters

Path model parameters are estimated based on registered degradation data or derived first passage times. The former applies to highly reliable systems where failure does not interrupt observation of the degradation process, that is, $\omega \to \infty$. Denote by $x_{ij}$ the degradation measure of the $i$th system at inspection time $t_{ij}$, $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, m_i$. Then, the $j$th degradation increment for the $i$th system is $\Delta x_{ij} = x_{ij} - x_{i,j-1}$. From Equation (3), $\Delta x_{ij} \sim N(\mu\,\Delta t_{ij}, \sigma^2\,\Delta t_{ij})$, where $\Delta t_{ij} = t_{ij} - t_{i,j-1}$, with $x_{i0} = 0$ and $t_{i0} = 0$.

Consequently, $\Delta x_{i1}, \Delta x_{i2}, \ldots, \Delta x_{im_i}$ are the degradation increments for system $i$ with pdf
$$f(\Delta x_{ij}) = \frac{1}{\sqrt{2\pi \sigma^2 \Delta t_{ij}}} \exp\left\{ -\frac{(\Delta x_{ij} - \mu \Delta t_{ij})^2}{2\sigma^2 \Delta t_{ij}} \right\}. \quad (9)$$
Since the Wiener process has normally distributed increments, the likelihood function for system $i$ is
$$L_i(\mu, \sigma^2) = \prod_{j=1}^{m_i} \frac{1}{\sqrt{2\pi \sigma^2 \Delta t_{ij}}} \exp\left\{ -\frac{(\Delta x_{ij} - \mu \Delta t_{ij})^2}{2\sigma^2 \Delta t_{ij}} \right\}. \quad (10)$$

The corresponding log-likelihood for the $i$th system is given by
$$\ell_i(\mu, \sigma^2) = -\frac{m_i}{2}\ln(2\pi\sigma^2) - \frac{1}{2}\sum_{j=1}^{m_i} \ln \Delta t_{ij} - \sum_{j=1}^{m_i} \frac{(\Delta x_{ij} - \mu \Delta t_{ij})^2}{2\sigma^2 \Delta t_{ij}}. \quad (11)$$

Taking partial derivatives of the log-likelihood function in Equation (11) with respect to $\mu$ and $\sigma^2$ gives
$$\frac{\partial \ell_i}{\partial \mu} = \frac{1}{\sigma^2}\sum_{j=1}^{m_i} \left( \Delta x_{ij} - \mu \Delta t_{ij} \right), \quad (12)$$
$$\frac{\partial \ell_i}{\partial \sigma^2} = -\frac{m_i}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{j=1}^{m_i} \frac{(\Delta x_{ij} - \mu \Delta t_{ij})^2}{\Delta t_{ij}}. \quad (13)$$

The maximum likelihood estimators (MLEs) $\hat\mu_i$ and $\hat\sigma_i^2$ are obtained by simultaneously solving Equations (12) and (13). They are
$$\hat\mu_i = \frac{\sum_{j=1}^{m_i} \Delta x_{ij}}{\sum_{j=1}^{m_i} \Delta t_{ij}} = \frac{x_{im_i}}{t_{im_i}}, \quad (14)$$
$$\hat\sigma_i^2 = \frac{1}{m_i} \sum_{j=1}^{m_i} \frac{(\Delta x_{ij} - \hat\mu_i \Delta t_{ij})^2}{\Delta t_{ij}}. \quad (15)$$

Wiener process degradation increments are independent. Hence, the log-likelihood function of their full set is
$$\ell(\mu, \sigma^2) = \sum_{i=1}^{n} \ell_i(\mu, \sigma^2), \quad (16)$$
and the MLEs for the model parameters $\mu$ and $\sigma^2$ are
$$\hat\mu = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m_i} \Delta x_{ij}}{\sum_{i=1}^{n}\sum_{j=1}^{m_i} \Delta t_{ij}}, \quad (17)$$
$$\hat\sigma^2 = \frac{1}{\sum_{i=1}^{n} m_i} \sum_{i=1}^{n}\sum_{j=1}^{m_i} \frac{(\Delta x_{ij} - \hat\mu \Delta t_{ij})^2}{\Delta t_{ij}}. \quad (18)$$
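A short sketch of the full-set estimators in Equations (17) and (18), applied to simulated increment data with a common inspection grid, is given below; the function name and parameter values are illustrative.

```python
import numpy as np

def wiener_mle_from_increments(times, paths):
    """MLEs of (mu, sigma^2) from degradation increments, Equations (17)-(18).
    times: shared inspection grid of shape (m+1,); paths: array of shape (n, m+1)."""
    dt = np.diff(times)                      # (m,)
    dx = np.diff(paths, axis=1)              # (n, m)
    mu_hat = dx.sum() / (dt.sum() * paths.shape[0])
    sigma2_hat = np.mean((dx - mu_hat * dt) ** 2 / dt)
    return mu_hat, sigma2_hat

# usage on simulated data with known mu = 1.0 and sigma^2 = 0.25
rng = np.random.default_rng(3)
times = np.linspace(0.0, 10.0, 51)
dt = np.diff(times)
dx = rng.normal(1.0 * dt, 0.5 * np.sqrt(dt), size=(30, dt.size))
paths = np.concatenate([np.zeros((30, 1)), np.cumsum(dx, axis=1)], axis=1)
print(wiener_mle_from_increments(times, paths))  # close to (1.0, 0.25)
```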

For most applications, however, $\omega$ is finite, as determined by industrial standards. System lifetimes are the first passage times of the $n$ sampled systems. Denote by $t_1, t_2, \ldots, t_n$ these first passage times. Their density is given in Equation (5) for the underlying Wiener degradation process. The likelihood function is thus
$$L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{\omega}{\sqrt{2\pi \sigma^2 t_i^3}} \exp\left\{ -\frac{(\omega - \mu t_i)^2}{2\sigma^2 t_i} \right\}, \quad (19)$$
with log-likelihood function
$$\ell(\mu, \sigma^2) = n \ln \omega - \frac{n}{2}\ln(2\pi\sigma^2) - \frac{3}{2}\sum_{i=1}^{n} \ln t_i - \sum_{i=1}^{n} \frac{(\omega - \mu t_i)^2}{2\sigma^2 t_i}. \quad (20)$$

Maximising the log-likelihood function in Equation (20) with respect to the process parameters $\mu$ and $\sigma^2$ yields
$$\frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2}\left( n\omega - \mu \sum_{i=1}^{n} t_i \right), \quad (21)$$
$$\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} \frac{(\omega - \mu t_i)^2}{t_i}. \quad (22)$$

The MLEs for $\mu$ and $\sigma^2$ are obtained by simultaneously solving Equations (21) and (22). They are
$$\hat\mu = \frac{n\omega}{\sum_{i=1}^{n} t_i}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} \frac{(\omega - \hat\mu t_i)^2}{t_i}. \quad (23)$$
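The closed-form estimators in Equation (23) are straightforward to compute. The sketch below applies them to inverse Gaussian first passage times generated under known parameters; the function name and values are illustrative.

```python
import numpy as np
from scipy import stats

def wiener_fpt_mle(t, omega):
    """MLEs of (mu, sigma^2) from first passage times via Equation (23)."""
    t = np.asarray(t, dtype=float)
    mu_hat = omega / t.mean()                       # equals n*omega / sum(t_i)
    sigma2_hat = np.mean((omega - mu_hat * t) ** 2 / t)
    return mu_hat, sigma2_hat

# usage on simulated first passage times with mu = 1.0 and sigma^2 = 0.25
rng = np.random.default_rng(5)
omega = 10.0
mean, shape = omega / 1.0, omega**2 / 0.25          # Equation (6)
t = stats.invgauss(mu=mean / shape, scale=shape).rvs(size=2000, random_state=rng)
print(wiener_fpt_mle(t, omega))                     # close to (1.0, 0.25)
```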

Values of $\hat\mu$ and $\hat\sigma^2$ are obtained from Equation (23) based on the first passage times from the simulated degradation paths. The results, together with the true path parameter values, are presented in Table 1.

They show that parameter estimates of the first passage time distribution in Equation (5) recover the path parameters used in the simulation. This is equally true for parameter estimates of the transformed distribution in Equation (7), the results of which are presented in Table 2.

Thus, the Monte Carlo simulations corroborate the well-known result that first passage times of a Wiener process with drift obey the inverse Gaussian law.

2.2.2. Interval Estimation of Model Parameters

Maximum likelihood estimators are point estimates obtained from sample data. Accordingly, the values of $\hat\mu$ and $\hat\sigma^2$ are subject to sampling fluctuations and may or may not be close to the quantities being estimated. It is therefore important to quantify the uncertainty associated with parameter estimates. Confidence intervals are very useful in quantifying the uncertainty in point estimates due to sampling error arising from limited sample sizes. Exact confidence intervals can be constructed if, for example, the sampling distribution of the estimator is known. Otherwise, and for large samples, approximate confidence intervals are used. MLEs are asymptotically normal. Hence, confidence intervals for a parameter $\theta$ are constructed by the asymptotic normal approximation. Also called Wald confidence intervals, normal approximation confidence intervals are based on the Wald statistic
$$Z = \frac{\hat\theta - \theta}{\widehat{se}(\hat\theta)}. \quad (24)$$

The standard error of $\hat\theta$ is determined by the second derivative of the log-likelihood function with respect to $\theta$, which quantifies the curvature of the log-likelihood function. That is,
$$\widehat{se}(\hat\theta) = \left[ -\frac{\partial^2 \ell(\theta; \mathbf{t})}{\partial \theta^2} \right]^{-1/2}, \quad (25)$$
evaluated at $\theta = \hat\theta$, where $\mathbf{t} = (t_1, t_2, \ldots, t_n)$ is a vector of first passage times. The quantity $-\partial^2 \ell(\theta; \mathbf{t})/\partial \theta^2$ is the observed information. When constructing normal approximation confidence intervals, however, the observed information is often replaced by the expected or Fisher information
$$I(\theta) = -E\left[ \frac{\partial^2 \ell(\theta; \mathbf{T})}{\partial \theta^2} \right]. \quad (26)$$

The resulting $100(1-\alpha)\%$ confidence interval for $\theta$ is given by
$$\hat\theta \pm z_{1-\alpha/2}\, \widehat{se}(\hat\theta). \quad (27)$$

Alternatively, the statistic
$$Z = \frac{\ln\hat\theta - \ln\theta}{\widehat{se}(\ln\hat\theta)} \quad (28)$$
is used instead. Observe that $\ln\hat\theta$ is unrestricted in sign. Hence, the distribution of the statistic in Equation (28) is in general closer to a $N(0, 1)$ distribution than is that of the Wald statistic in Equation (24). Thus, after transforming back by exponentiation, the confidence interval for $\theta$ is
$$\left[ \hat\theta / w, \; \hat\theta \cdot w \right], \qquad w = \exp\left\{ z_{1-\alpha/2}\, \widehat{se}(\hat\theta) / \hat\theta \right\}. \quad (29)$$
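Both interval forms are easily computed once a standard error is available. The sketch below implements Equations (27) and (29) for a generic positive parameter, using the delta-method approximation $\widehat{se}(\ln\hat\theta) \approx \widehat{se}(\hat\theta)/\hat\theta$; the inputs are illustrative.

```python
import numpy as np
from scipy import stats

def wald_ci(theta_hat, se, level=0.95):
    """Normal-approximation (Wald) interval, Equation (27)."""
    z = stats.norm.ppf(0.5 + level / 2)
    return theta_hat - z * se, theta_hat + z * se

def log_wald_ci(theta_hat, se, level=0.95):
    """Log-transformed Wald interval for a positive parameter, Equation (29)."""
    z = stats.norm.ppf(0.5 + level / 2)
    w = np.exp(z * se / theta_hat)   # se(ln theta_hat) ~ se/theta_hat (delta method)
    return theta_hat / w, theta_hat * w

print(wald_ci(0.25, 0.08))       # may extend towards or below zero in small samples
print(log_wald_ci(0.25, 0.08))   # respects the positivity constraint
```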

Often in reliability, only a few system failures are observed. In this case, large sample normal theory is inexact. Rather, the likelihood ratio confidence bounds method is often preferred. It is based on the likelihood ratio equation
$$-2 \ln\left[ \frac{L(\theta)}{L(\hat\theta)} \right] = \chi^2_{\alpha; k}, \quad (30)$$
where $L(\theta)$ is the likelihood function for the unknown parameter $\theta$, $L(\hat\theta)$ is the likelihood function evaluated at the MLE $\hat\theta$, and $\chi^2_{\alpha; k}$ is the chi-squared statistic with probability $\alpha$ and $k$ degrees of freedom, where $k$ is the number of jointly estimated parameters. A rearrangement of Equation (30) yields
$$L(\theta) = L(\hat\theta)\, e^{-\chi^2_{\alpha; k}/2}, \quad (31)$$
where the terms on the right-hand side are known exactly. The confidence limits for $\theta$ are the minimum and maximum values of $\theta$ for which Equation (31) holds.

Contour plots are a useful way of simultaneously estimating likelihood ratio confidence bounds on the parameters. Equation (31) has no closed-form solution; hence, a numerical solution is required instead. A crude approach is to hold one parameter constant while iterating on the other until an acceptable solution is reached. Figure 5 gives contour plots for the first passage times in Figure 4.
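One workable numerical scheme, sketched below under illustrative values, profiles the nuisance parameter out in closed form and scans a grid for the set of drift values satisfying Equation (31); a chi-squared quantile with one degree of freedom is used here because a bound on a single parameter is sought.

```python
import numpy as np
from scipy import stats

def profile_loglik_mu(mu, t, omega):
    """Profile log-likelihood of the drift mu (sigma^2 maximised out) for the
    first passage time likelihood in Equation (19), up to an additive constant."""
    sigma2 = np.mean((omega - mu * t) ** 2 / t)   # conditional MLE of sigma^2
    return -0.5 * t.size * np.log(sigma2)

def lr_bounds_mu(t, omega, level=0.95):
    """Grid-scan likelihood ratio bounds for mu based on Equation (31)."""
    mu_hat = omega / t.mean()
    cutoff = stats.chi2.ppf(level, df=1) / 2.0    # halved -2 log LR threshold
    grid = np.linspace(0.2 * mu_hat, 2.0 * mu_hat, 2000)
    ll = np.array([profile_loglik_mu(m, t, omega) for m in grid])
    keep = grid[ll >= profile_loglik_mu(mu_hat, t, omega) - cutoff]
    return keep.min(), keep.max()

rng = np.random.default_rng(9)
omega = 10.0
dist = stats.invgauss(mu=(omega / 1.0) / (omega**2 / 0.25), scale=omega**2 / 0.25)
t = dist.rvs(size=25, random_state=rng)
print(lr_bounds_mu(t, omega))   # interval that should contain mu = 1.0
```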

Confidence bounds for $\mu_T$ get narrower with decreasing volatility. This is expected because low variability results in smoother sample paths (Figure 3) and hence less variability in first passage times (Figure 4). The bounds for $\lambda$, however, get wider with decreasing volatility. This is a result of the scaling in Equation (6) and can be seen in the peakedness of the theoretical and empirical densities in Figures 2 and 4, respectively.

2.3. Sampling Distributions of First Passage Time Distribution Parameters

First passage times in Figure 4 are based on a large number of simulated sample paths. In practice, however, only a few systems are tested, for economic reasons. Hence, trajectories for a small sample and a large sample of systems are simulated. Values of $\hat\mu_T$ and $\hat\lambda$ are obtained from the resulting first passage times. This procedure is repeated many times for each sample size, yielding the respective sampling distributions in Figures 6 and 7.

Sampling distributions for $\hat\mu_T$ are fairly symmetrical regardless of sample size. Those of $\hat\lambda$ are right-skewed for small samples and close to normal for large samples. Therefore, approximate normal confidence intervals for $\lambda$ may be appropriate only for large samples, whereas for $\mu_T$, any sample size may apply.

The performance of $\hat\mu_T$ and $\hat\lambda$ is also assessed for different sample sizes and volatility parameters. In particular, bias and empirical standard error (EmpSE) are reported together with their Monte Carlo standard errors (Monte Carlo SEs). Bias is the amount by which $\hat\theta$ exceeds $\theta$ on average. Unbiasedness is a key property in frequentist theory; however, small biases may be traded off for other good properties. The EmpSE estimates the long-run standard deviation of $\hat\theta$ over the replications. It is a measure of the precision (efficiency) of the estimator. The Monte Carlo SE provides an estimate of the SE of the estimated performance measure as a result of using a finite number of replications (Morris et al. [17]). Another important measure is the mean squared error (MSE), measuring the accuracy of $\hat\theta$ as an estimator of $\theta$. It is a function of the bias of $\hat\theta$ and its variability.

Expressions of these measures, together with their Monte Carlo SE, are given in Table 3.
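The sketch below computes bias, EmpSE, and MSE together with their Monte Carlo SEs from a vector of replicated estimates, following the standard expressions in Morris et al. [17]; the input data are illustrative.

```python
import numpy as np

def performance(estimates, truth):
    """Bias, EmpSE, MSE and their Monte Carlo SEs over n_sim replications,
    following the expressions summarised in Table 3 (Morris et al. [17])."""
    est = np.asarray(estimates, dtype=float)
    n = est.size
    bias = est.mean() - truth
    emp_se = est.std(ddof=1)
    mse = np.mean((est - truth) ** 2)
    return {
        "bias": bias,
        "bias_mcse": emp_se / np.sqrt(n),
        "emp_se": emp_se,
        "emp_se_mcse": emp_se / np.sqrt(2 * (n - 1)),
        "mse": mse,
        "mse_mcse": np.std((est - truth) ** 2, ddof=1) / np.sqrt(n),
    }

# usage: replicated estimates of a parameter with illustrative true value 10.0
rng = np.random.default_rng(13)
fake_estimates = rng.normal(10.1, 0.8, size=2000)
print(performance(fake_estimates, truth=10.0))
```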

Performance estimates for these measures are reported in Table 4 for small and large numbers of sample paths and for different volatility parameters.

As expected, bias, EmpSE, and their Monte Carlo SEs decrease as the sample size increases. Additionally, the estimation of $\mu_T$ is better with smaller volatility. This is also expected since first passage times have more variability for large $\sigma$, as can be seen from Figure 1. For $\lambda$, however, the estimation is worse with smaller volatility. This is intuitive since a smaller $\sigma$ leads to higher $\lambda$ values for fixed $\omega$, as explained by the inverse relation in Equation (6). Accordingly, the higher scale of the $\lambda$ values translates to higher values of the performance measures. Estimates of MSE can be derived from those of bias and variability; hence, they are not reported here.

3. Wiener Maximum Process Model for Monotone Degradation

System degradation often proceeds in one direction only and is hence monotone. This irreversible accumulation of damage can be explained by the Wiener maximum process
$$M(t) = \max_{0 \le s \le t} X(s), \quad (32)$$
which, by definition, is nondecreasing in its argument. It has initial condition $M(0) = 0$ since $X(0) = 0$. Recall that $T$ is the first time $X(t)$ passes the failure threshold $\omega$. Since $X(t)$ has continuous sample paths, the occurrence of the event $\{X(t) \ge \omega\}$ at time $t$ implies that the event $\{T \le t\}$ has already been realised. That is,
$$\{X(t) \ge \omega\} \subseteq \{T \le t\}. \quad (33)$$

For the Wiener maximum process, however, $\{M(t) \ge \omega\}$ occurs if $X$ crosses $\omega$ at least once in the closed interval $[0, t]$, given that $M(0) = 0 < \omega$. That is, $M(t) \ge \omega$ if and only if $T \le t$. It follows therefore that
$$P(M(t) \ge \omega) = P(T \le t) = F_T(t). \quad (34)$$

Hence, $M(t)$ crosses $\omega$ at exactly the same time that the process $X(t)$ crosses the same threshold. That is, the first passage time of the Wiener maximum process to $\omega$, given by
$$T_M = \inf\{ t \ge 0 : M(t) \ge \omega \}, \quad (35)$$
coincides with that of $X(t)$ to the same failure threshold, as shown in Figure 8.
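The coincidence of the two first passage times is easy to verify numerically: the running maximum of a simulated path first reaches the threshold at exactly the same grid index as the path itself, as the sketch below illustrates under illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(11)
mu, sigma, omega = 1.0, 0.5, 10.0       # illustrative values
n_steps, t_max = 4000, 40.0
dt = t_max / n_steps

dX = rng.normal(mu * dt, sigma * np.sqrt(dt), size=n_steps)
X = np.concatenate([[0.0], np.cumsum(dX)])
M = np.maximum.accumulate(X)            # Wiener maximum process M(t)

# first grid times at which X and M reach the threshold coincide
k_X = np.argmax(X >= omega)
k_M = np.argmax(M >= omega)
assert k_X == k_M
print("first passage index:", k_X, "time:", k_X * dt)
```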

Figure 9 shows the distribution of first passage times of both the Wiener process and the Wiener maximum process based on simulated sample paths. The Wiener maximum process is a more realistic model when it is important that the degradation process be monotone, as is often the case in practice. Figure 9 confirms the result that Wiener and Wiener maximum process first passage times have the same distribution. This is particularly apparent when the volatility parameter $\sigma$ is much smaller than the drift parameter $\mu$. Hence, system failure times assuming a Wiener maximum process are also distributed as inverse Gaussian with the density specified in Equation (5). Thus, the popularity of the Wiener process stems from the fact that it is applicable to both nonmonotone and monotone degradation.

4. Real Data Application

Real data from a degradation test of gallium arsenide (GaAs) lasers for telecommunication systems are considered in this section. The laser uses a built-in feedback circuit to maintain a constant light output. As it ages, the laser requires more current to maintain the constant light output. The first time a 10% increase in current is needed to achieve the constant light output, the laser is considered to have failed. That is, $\omega = 10\%$. The data are from an accelerated degradation test involving 15 randomly sampled lasers. The lasers were tested for 4000 hours at an elevated temperature of 80°C. The elevated temperature was estimated by engineers to accelerate failure by a factor of approximately 40. Table C.17 in Meeker and Escobar [18] contains more information about the test. The data are plotted in Figure 10.

By the end of the test at 4000 hours, three lasers had failed. The laser has a desired lifetime of at least 200,000 hours at the use-level temperature of 20°C. This amounts to a corresponding lifetime of 5,000 hours at the elevated temperature of 80°C. Consequently, the estimation target is the laser's unreliability at 5,000 hours, $F(5000)$. The GaAs laser sample degradation paths in Figure 10 appear to be monotone. Hence, the Wiener maximum process is a reasonable model. First passage times of $M(t)$ to $\omega$, however, coincide with those of the Wiener process with drift to the same failure threshold, as demonstrated in Figure 8. Hence, the simpler model is assumed, and path model parameter estimates $\hat\mu$ and $\hat\sigma^2$ are obtained from Equation (23). The uncertainty associated with the estimates is quantified using bootstrap and jackknife methods (Tibshirani and Efron [19]). The former entails randomly drawing samples of size 15 with replacement from the 15 lasers and re-estimating the parameters for each resample. The bootstrap normal approximate 95% confidence interval for a parameter $\theta$ is
$$\hat\theta \pm 1.96\, \widehat{se}_{\mathrm{boot}}, \quad (36)$$
where $\widehat{se}_{\mathrm{boot}}$ is the standard deviation of the bootstrap replicates.

The jackknife sequentially leaves the $i$th laser out, for $i = 1, 2, \ldots, n$, and estimates $\hat\theta_{(i)}$ from the degradation data on the remaining $n - 1$ lasers. The approximate jackknife confidence interval is given by
$$\hat\theta \pm t_{1-\alpha/2,\, n-1}\, \widehat{se}_{\mathrm{jack}}, \quad (37)$$
where $t_{1-\alpha/2,\, n-1}$ is the $100(1-\alpha/2)$th percentile of the $t$ distribution with $n - 1$ degrees of freedom, and $\widehat{se}_{\mathrm{jack}}$ is the jackknife standard error estimate given by
$$\widehat{se}_{\mathrm{jack}} = \sqrt{\frac{n-1}{n} \sum_{i=1}^{n} \left( \hat\theta_{(i)} - \bar\theta_{(\cdot)} \right)^2 }, \qquad \bar\theta_{(\cdot)} = \frac{1}{n}\sum_{i=1}^{n} \hat\theta_{(i)}. \quad (38)$$
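The sketch below illustrates both resampling schemes for the drift estimator of Equation (23). For brevity it resamples derived first passage times rather than full degradation paths, and the input times are hypothetical values, not the laser data.

```python
import numpy as np
from scipy import stats

def fpt_mle_mu(t, omega):
    """Drift MLE from first passage times, Equation (23)."""
    return omega / np.mean(t)

def bootstrap_ci(t, omega, level=0.95, n_boot=2000, seed=0):
    """Normal-approximation bootstrap interval for the drift, Equation (36)."""
    rng = np.random.default_rng(seed)
    reps = np.array([fpt_mle_mu(rng.choice(t, size=t.size, replace=True), omega)
                     for _ in range(n_boot)])
    z = stats.norm.ppf(0.5 + level / 2)
    mu_hat, se = fpt_mle_mu(t, omega), reps.std(ddof=1)
    return mu_hat - z * se, mu_hat + z * se

def jackknife_ci(t, omega, level=0.95):
    """Leave-one-out jackknife interval for the drift, Equations (37)-(38)."""
    n = t.size
    loo = np.array([fpt_mle_mu(np.delete(t, i), omega) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    q = stats.t.ppf(0.5 + level / 2, df=n - 1)
    mu_hat = fpt_mle_mu(t, omega)
    return mu_hat - q * se, mu_hat + q * se

# usage on hypothetical first passage times (hours) for omega = 10 (% increase)
t_obs = np.array([3600., 3900., 4100., 4300., 4550., 4700., 4900., 5100.,
                  5300., 5450., 5600., 5800., 6000., 6300., 6700.])
print(bootstrap_ci(t_obs, 10.0))
print(jackknife_ci(t_obs, 10.0))
```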

Table 5 reports $\hat\mu$ and $\hat\sigma^2$ from the degradation analysis of the full 4000-hour laser data together with their approximate 95% confidence intervals. An important advantage of assessing system reliability from degradation data is that conclusions are reached earlier without compromising estimation accuracy. Therefore, laser data available at three earlier test times are also analysed. The results in Table 5 show that analysis of the laser data available after the different test times yielded comparable path parameter estimates. Such shorter tests allow highly reliable systems to be released early and corrective action on the unreliable ones to be taken sooner. Figure 11 shows the first passage time densities derived from the path parameter estimates in Table 5 using Equation (6). There do not appear to be major differences between the densities, particularly for the three longest test durations. This is not a surprising result, as it has already been reported by Hove [20], though a general path model was assumed instead.

Estimates of the desired probability of failing by 5,000 hours, $\hat F(5000)$, from the analysis of the laser data available after the different test times are presented in Table 6.

Lower percentiles, often useful when determining a warranty period for example, are also reported. These results show that the two shortest-duration analyses yielded estimates more comparable to those of the full 4000-hour analysis than the intermediate-duration analysis did. The path parameter estimates in Table 5 and the densities in Figure 11 reflect this finding. This is surprising, since the shorter analyses utilise less registered degradation data than the intermediate one.

Further analyses (not reported here) of the laser data available at a sequence of intermediate test times revealed changes in the estimated unreliability values across the different test times. It follows from the results presented in Figure 12 that shorter tests (in red) yield unreliability estimates that are comparable to those from the full 4000-hour data.

5. Concluding Remarks

When assessing reliability for highly reliable systems, degradation tests are an attractive alternative to life tests that record only failure times. This is especially so when few or no failures are observed in life tests of practical length and a close relationship exists between system failure and the level of degradation. In this paper, the use of the Wiener process for reliability assessment is reviewed. Monte Carlo simulations are used to demonstrate known results and to quantify performance measures. In particular, the well-known result that first passage times of a Wiener process with drift to a fixed barrier are distributed as inverse Gaussian is demonstrated. Additionally, the performance of the MLEs of the inverse Gaussian parameters was assessed. The findings are as follows:
(1) The performance of $\hat\mu_T$ suffers some small upward bias. The small bias suggests that if the number of replications in the simulation study is increased unboundedly, then the long-run average of the estimates will not be far from their true values. Bias values decrease both with an increase in sample size and with a decrease in volatility. For $\hat\lambda$, however, bias values appear to be large, but this is a result of the scale. They decrease with sample size but increase with decreasing volatility.
(2) The variability of $\hat\mu_T$ and $\hat\lambda$ is significantly lower for large sample sizes, as expected. The seemingly large variability of $\hat\lambda$ is again a matter of scale, explaining why it increases with decreasing volatility.

First passage times of the Wiener maximum process to a fixed threshold are shown to coincide with those of the Wiener process with drift. This is in line with the presented theoretical result and is important for modelling strictly monotone degradation processes. In the main, the real data application demonstrated a considerable reduction in test duration without compromising estimation quality.

Data Availability

The GaAs laser data used in this study are from Meeker, W. Q. and Escobar, L. A., Statistical Methods for Reliability Data, John Wiley and Sons, 1998 (page 642), and have been cited.

Conflicts of Interest

The authors declare that they have no conflicts of interest.