Mathematical Problems in Engineering
Volume 2015, Article ID 292197, 8 pages
http://dx.doi.org/10.1155/2015/292197
Research Article

Reliability Assessment of CNC Machining Center Based on Weibull Neural Network

School of Mechanical Science and Engineering, Jilin University, Changchun 130025, China

Received 23 July 2015; Revised 24 October 2015; Accepted 28 October 2015

Academic Editor: Marco Mussetta

Copyright © 2015 Zhaojun Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

CNC machining centers, as key devices in the modern manufacturing industry, are complicated electrohydraulic products. Reliability is the most important index of CNC machining centers. However, simple life distributions hardly reflect the true reliability law of a complex system with many kinds of failure mechanisms. Owing to the Weibull model's versatility and relative simplicity and artificial neural networks' (ANNs) strong approximation capability, both are widely used in reliability engineering and elsewhere. Considering the advantages of these two models, this paper defines a novel model: the Weibull neural network (WNN). The WNN inherits the hierarchical structure of ANNs, which includes three layers, namely, the input layer, hidden layer, and output layer. Based on more than 3000 h of field test data of CNC machining centers, the WNN has been successfully applied to comprehensive operation data analysis. The results show that the WNN has good approximation ability and generalization performance in the reliability assessment of CNC machining centers.

1. Introduction

Common life distributions, like the normal distribution, lognormal distribution, and Weibull distribution, are usually too simple for system reliability modeling [1, 2]. CNC machining centers are complex repairable systems whose reliability behavior cannot be captured precisely by these simple life distribution models. Mixture distributions have been widely used throughout the development of modern statistics. The application of mixture distributions can be traced back to the late 19th century, while the Weibull mixture distribution started in the 1950s [3–5]. At present, the most common Weibull mixture distribution is the twofold Weibull distribution [6, 7]. Multifold Weibull mixture distributions have seldom been used so far, for two reasons: their large number of parameters is hard to estimate, and their poor generalization performance makes it difficult to avoid overfitting.

With the rapid development of computer technology, artificial neural networks (ANNs), machine learning models with powerful nonlinear approximation ability, have been developed and have found wide application [8–10]. They are often used to model the nonlinear relationship between the input and output of a complex system [11]. However, ANNs are prone to overfitting, a problem that has attracted many researchers [12, 13]. Improving the generalization performance of artificial neural networks is a key point in solving the overfitting problem.

In this paper, the Weibull neural network (WNN) is defined based on the advantages of the Weibull mixture distribution and artificial neural networks. In this network, the hierarchical structure of the radial-basis function (RBF) network, which has a simple structure and powerful nonlinear approximation performance [14, 15], is adopted. The RBF network was proposed by Moody and Darken [16, 17] with three layers, namely, the input layer, hidden layer, and output layer. The input layer is a series of source nodes that connects the network to the reliability data of CNC machining centers. The hidden layer applies a finite Weibull mixture distribution model connecting the input layer and the output layer. The output layer is the probability density of the data. The finite Weibull mixture distribution [7] is applied as the hidden layer nodes function (HLNF). Wide applicability and varied distribution curve shapes are the main characteristics of the finite Weibull mixture distribution, which suits not only the life distributions of electronic products but also those of mechanical parts.

This paper focuses on two key issues of the WNN: developing an efficient learning method and improving the generalization performance of the WNN. The rest of the paper is organized as follows: a definition of the Weibull neural network (WNN) is given in Section 2, with an introduction to the basic characteristics of artificial neural networks and the Weibull mixture distribution. Section 3 presents the learning process of the Weibull neural network (WNN). Section 4 offers field test data of CNC machining centers and applies them in comprehensive simulations and reliability assessment by the WNN. For comparison, the authors also analyze the data with the two-parameter Weibull distribution (TPWD). Finally, conclusions are given in Section 5.

2. Weibull Neural Network

2.1. Artificial Neural Networks

Artificial neural networks (ANNs) [18] are abstractions and simulations of certain basic characteristics of biological neural networks. As complex nonlinear approximation models, ANNs rely on the complexity of the network structure: by adjusting the internal connections between nodes, they achieve the purpose of training and can learn any complex nonlinear relationship, with strong robustness and fault tolerance [19].

The hierarchical structure is the most common structure of ANNs, used in the BP neural network [20], the RBF neural network [14, 15], and so on. A hierarchical ANN can be divided into several layers by function: the input layer, one or more intermediate layers (also called hidden layers), and the output layer. The layers are connected in order, as shown in Figure 1. The input layer is responsible for receiving input information from the outside and transferring it to the neurons of the hidden layer. A neuron, the fundamental building block of neural networks, is an information processing unit; neuron models with different transformation functions have various information processing abilities. The hidden layer is the internal information processing layer of the neural network, responsible for information conversion; according to the required information processing capacity, it may be designed as one or more layers. The final layer, the output layer, supplies the response of the neural network to the activation pattern (signal) applied to the input layer.

Figure 1: Hierarchical structure of artificial neural networks (ANNs).

Under the external stimulus of input samples, a neural network continuously changes its connection weights as well as its topology, so that the output of the network approaches the desired output. This process is called the learning process of the neural network. In this process, the adjustments of the connection weights follow certain rules, called learning rules.

2.2. Weibull Mixture Distribution

The Weibull distribution, including the two-parameter and three-parameter types, is the most common life distribution; it was originally proposed by the Swedish physicist Waloddi Weibull for studying the life of components [21]. In practical applications, the Weibull distribution is often used as the basic model of more complex distributions, such as the Weibull mixture distribution [7], the Weibull competing-risk distribution [22], the Weibull parallel distribution, and the Weibull segmentation distribution [23]. Among them, the Weibull mixture distribution is the most widely used.

In many cases, a sample population may be composed of two or more subsamples. Because of differences in design methods, raw materials, manufacturing processes, and other aspects, products may follow different life distributions under different conditions. If the sample population is composed of m subsamples, the i-th subsample's cumulative distribution function is expressed as F_i(t), its probability density as f_i(t), and its mixture weight as w_i; then the cumulative mixture distribution function of the sample population is shown below:

F(t) = Σ_{i=1}^{m} w_i F_i(t). (1)

The corresponding probability density function for the mixture distribution is shown below:

f(t) = Σ_{i=1}^{m} w_i f_i(t). (2)

The general form of the Weibull mixture distribution is (2), which is called the m-fold Weibull mixture distribution; m-fold refers to m distinct subsamples. The mixture weights should satisfy 0 < w_i < 1 and Σ_{i=1}^{m} w_i = 1.

Each F_i(t) is a two-parameter Weibull cumulative distribution with shape parameter β_i and scale parameter η_i, or a three-parameter Weibull cumulative distribution with shape parameter β_i, scale parameter η_i, and location parameter γ_i. The two-parameter Weibull cumulative distribution is shown below:

F_i(t) = 1 − exp[−(t/η_i)^{β_i}],  t ≥ 0. (3)

The three-parameter Weibull cumulative distribution is shown below:

F_i(t) = 1 − exp[−((t − γ_i)/η_i)^{β_i}],  t ≥ γ_i. (4)
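As a minimal sketch (not the authors' code), the two- and three-parameter Weibull CDFs and the m-fold mixture can be evaluated as follows; all function and variable names are illustrative:

```python
import math

def weibull_cdf(t, beta, eta, gamma=0.0):
    """Weibull CDF; gamma = 0 gives the two-parameter form,
    gamma > 0 the three-parameter (location-shifted) form."""
    if t <= gamma:
        return 0.0
    return 1.0 - math.exp(-((t - gamma) / eta) ** beta)

def mixture_cdf(t, weights, betas, etas):
    """m-fold Weibull mixture CDF: F(t) = sum_i w_i * F_i(t)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "mixture weights must sum to 1"
    return sum(w * weibull_cdf(t, b, e)
               for w, b, e in zip(weights, betas, etas))
```

For example, `mixture_cdf(1000.0, [0.4, 0.6], [1.2, 3.0], [800.0, 2500.0])` mixes an early-failure component (shape < 1.5) with a wear-out component (shape 3).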

The most important feature of mixture Weibull distributions is their diversity in shape. Taking the twofold two-parameter mixture Weibull distribution as an example, there are four basic shapes of the density function, as shown in Figure 2. It is therefore worthwhile to study the nonlinear approximation performance of the mixture Weibull distribution.

Figure 2: Basic types of density functions of twofold mixture Weibull distribution with two parameters.
2.3. Weibull Neural Network Model

In practical applications, mixture Weibull distributions have two significant limitations: it is difficult to estimate their parameters precisely and to choose a suitable number of folds. Yet the number of folds seriously affects the generalization performance of mixture Weibull distributions: too few folds lead to underfitting, while too many result in overfitting. To improve the generalization performance of mixture Weibull distributions, the Weibull neural network is proposed in this paper. The Weibull neural network is a kind of mixture distribution but differs from traditional mixture Weibull distributions in structure and learning process.

The hierarchical structure of the Weibull neural network is identical to that of the radial-basis function (RBF) network, including the three layers shown in Figure 3. The input layer is made up of source nodes that connect the network to the reliability data. The hidden layer connects the input layer and the output layer; a finite Weibull mixture distribution is used as the hidden layer function in the network. The output layer is the probability density of the data. The connection between the input layer and the hidden layer is probabilistic: each input datum is connected to each hidden layer node with a certain probability. There is a linear weighted connection between the hidden layer and the output layer.

Figure 3: Hierarchical structure of Weibull neural network (WNN).

In the hierarchical structure of the Weibull neural network (WNN), the input of the network is expressed as T = {t₁, t₂, …, t_N}, where N is the number of input data. In this paper, the hidden layer nodes function (HLNF) is a finite Weibull mixture distribution expressed as φ_k(t). The output of the Weibull neural network (WNN) is expressed as f(t), the estimated probability density of the data. The connection probability between the input layer and the hidden layer is expressed as p_{ik}, meaning the probability of the i-th datum being sampled by the k-th node. The process is random sampling with replacement, so the value of the connection probability is p_{ik} = 1/N. The connection weight between the hidden layer and the output layer is expressed as v_k, meaning the weight of the k-th node in the output. According to the characteristics of mixture distributions, v_k needs to satisfy

Σ_{k=1}^{K} v_k = 1, (5)

where K is the number of hidden layer nodes. As every node has the same weight, v_k = 1/K.

As shown in Figure 3, the input layer achieves a nonlinear mapping from the input data to the hidden layer nodes function (HLNF) φ_k(t), while the output layer achieves a linear mapping from the HLNF to the output data. The mathematical model is shown below:

f(t) = Σ_{k=1}^{K} v_k φ_k(t) = (1/K) Σ_{k=1}^{K} φ_k(t). (6)

Each hidden layer node applies the finite Weibull mixture distribution shown in (7), so the hidden layer nodes function (HLNF) is calculated by (7) and (8):

φ_k(t) = Σ_{j=1}^{m} w_{kj} f_{kj}(t), (7)

f_{kj}(t) = (β_{kj}/η_{kj}) (t/η_{kj})^{β_{kj}−1} exp[−(t/η_{kj})^{β_{kj}}], (8)

where m is the folds number of the mixture Weibull distribution and Σ_{j=1}^{m} w_{kj} = 1.
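The forward pass of the WNN described above, equally weighted hidden nodes, each an m-fold Weibull mixture, can be sketched as follows (illustrative names, not the authors' implementation):

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull probability density."""
    return (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))

def node_pdf(t, node):
    """HLNF: finite Weibull mixture density of one hidden node.
    node = (weights, betas, etas), one entry per fold."""
    w, b, e = node
    return sum(wi * weibull_pdf(t, bi, ei) for wi, bi, ei in zip(w, b, e))

def wnn_output(t, nodes):
    """Output layer: equally weighted (v_k = 1/K) sum of node densities."""
    return sum(node_pdf(t, n) for n in nodes) / len(nodes)
```

With a single one-fold node this reduces to a plain Weibull density, which makes the mapping easy to check by hand.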

3. Parameter Estimation of Weibull Neural Network (WNN)

As shown above, in order to define the hidden layer, we need to find the hidden layer nodes function (HLNF) and determine the number of hidden layer nodes; then the Weibull neural network (WNN) can finally be constructed. This process is called parameter estimation of the WNN and can be divided into three steps: parameter estimation of the HLNF, selection of the HLNF, and determination of the number of hidden layer nodes. The detailed process follows.

3.1. Parameter Estimation of HLNF

The expectation maximization (EM) algorithm is widely used to estimate the parameters of mixture distributions [24]. Because the learning processes of the neuron functions are independent, the EM algorithm can be used to estimate the parameters of each neuron function separately [25]. The EM algorithm is an iterative algorithm based on maximum likelihood estimation; each iteration is divided into two steps, namely, the expectation step (E-step) and the maximization step (M-step). The E-step calculates the expectation of the likelihood function under the current parameter estimates; the M-step maximizes that expectation. The core idea of the EM algorithm is to estimate the parameters from the input data by iterating these expectations. The whole EM algorithm is as follows.

Step 1. Initialize the parameters: means μ_j, variances σ_j², and mixture weights w_j, j = 1, …, m.

Step 2. E-step: calculate the responsivity γ_{ij} of the j-th component for the i-th datum according to

γ_{ij} = w_j f_j(t_i) / Σ_{l=1}^{m} w_l f_l(t_i). (9)

Step 3. M-step: calculate the weight w_j, mean μ_j, and variance σ_j² according to

w_j = (1/N) Σ_{i=1}^{N} γ_{ij},  μ_j = Σ_{i=1}^{N} γ_{ij} t_i / Σ_{i=1}^{N} γ_{ij},  σ_j² = Σ_{i=1}^{N} γ_{ij} (t_i − μ_j)² / Σ_{i=1}^{N} γ_{ij}. (10)

Step 4. Repeat Steps 2 and 3 until the maximum likelihood function value converges.

In Step 3, μ_j, σ_j², β_j, and η_j satisfy (11) in the EM algorithm:

μ_j = η_j Γ(1 + 1/β_j),  σ_j² = η_j² [Γ(1 + 2/β_j) − Γ²(1 + 1/β_j)]. (11)

However, (11) is a transcendental equation whose analytical solution is hard to calculate.

In each iteration of the EM algorithm, let g_j = σ_j²/μ_j². When β_j > 0, the following holds:

g_j = Γ(1 + 2/β_j)/Γ²(1 + 1/β_j) − 1. (12)

According to (12), there is a monotonic relationship between g_j and β_j. Therefore, RBF interpolation can be used to establish the mapping from g_j to β_j, and β_j can be calculated from g_j. Then, according to the following, η_j can easily be obtained:

η_j = μ_j / Γ(1 + 1/β_j). (13)
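A hedged sketch of the estimation procedure of Section 3.1: EM with a moment-matching M-step, in which the transcendental relation (11)–(12) is inverted numerically. The paper uses RBF interpolation for this inversion; the sketch below substitutes simple bisection, which exploits the same monotonicity. All names are illustrative:

```python
import math

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull probability density."""
    return (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))

def cv2(beta):
    """Squared coefficient of variation sigma^2/mu^2 of a Weibull distribution;
    by (12) it depends on the shape parameter beta only."""
    g1 = math.gamma(1.0 + 1.0 / beta)
    return math.gamma(1.0 + 2.0 / beta) / (g1 * g1) - 1.0

def beta_from_cv2(target, lo=0.1, hi=20.0):
    """Invert the transcendental relation numerically; cv2 is monotonically
    decreasing in beta (the paper uses RBF interpolation instead)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cv2(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def em_weibull_mixture(data, m, iters=50):
    """EM for an m-fold Weibull mixture with a moment-matching M-step."""
    n = len(data)
    w = [1.0 / m] * m
    betas = [1.0 + j for j in range(m)]   # crude spread of initial shapes
    etas = [sum(data) / n] * m
    for _ in range(iters):
        # E-step: responsivities gamma_ij, cf. (9)
        resp = []
        for t in data:
            num = [w[j] * weibull_pdf(t, betas[j], etas[j]) for j in range(m)]
            s = sum(num) or 1e-300
            resp.append([x / s for x in num])
        # M-step: weighted moments, cf. (10), then (12)-(13) for beta and eta
        for j in range(m):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu = sum(r[j] * t for r, t in zip(resp, data)) / nj
            var = sum(r[j] * (t - mu) ** 2 for r, t in zip(resp, data)) / nj
            betas[j] = beta_from_cv2(var / (mu * mu))
            etas[j] = mu / math.gamma(1.0 + 1.0 / betas[j])
    return w, betas, etas
```

On data drawn from a single Weibull distribution, the one-fold fit recovers the shape and scale parameters to within sampling error.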

3.2. Selection of HLNF

The selection of the HLNF has a great impact on the generalization performance of the neural network and on the efficiency of the learning process. A single Weibull distribution as the neuron function may lead to underfitting; conversely, a multifold Weibull mixture distribution as the neuron function may lead to overfitting. It is therefore necessary to design an algorithm to select the most appropriate HLNF. Based on the idea of random sampling, this research uses the value of the maximum likelihood function as the index to select, by multiple sampling, the most appropriate finite Weibull mixture distribution as the hidden layer nodes function (HLNF). The selection algorithm is as follows.

Step 1. Based on the original sample data T, use the bootstrap methodology [26] to generate r groups of training samples and r corresponding groups of testing samples.

Step 2. For the r groups of training samples, use the EM algorithm to estimate the parameters of the finite Weibull mixture distributions from 1 to m_max folds, separately obtaining the mixture weights w_j, shape parameters β_j, and scale parameters η_j, j = 1, …, c, for each candidate fold number c = 1, …, m_max.

Step 3. According to the parameters estimated from the training samples in Step 2, the maximum likelihood function values of the r groups of testing samples can be calculated by (14), where f_c(t) is obtained by formulas (6), (7), and (8). The maximum likelihood function values form an evaluation matrix of size r × m_max:

ln L_{a,c} = Σ_{t ∈ T_a^test} ln f_c(t),  a = 1, …, r;  c = 1, …, m_max. (14)

Step 4. According to (15), the mean evaluation value of the r groups of testing samples corresponding to each c-fold Weibull mixture distribution can be obtained:

L̄_c = (1/r) Σ_{a=1}^{r} ln L_{a,c}. (15)

Select the fold number m* with the maximum mean evaluation value; the m*-fold Weibull mixture distribution is then the most appropriate finite Weibull mixture distribution, that is, the hidden layer nodes function (HLNF), for the original sample data T.
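The bootstrap selection of Steps 1–4 can be sketched as follows; `fit(sample, m)` stands in for any m-fold Weibull mixture estimator (for example, the EM procedure of Section 3.1), and all names are illustrative:

```python
import math
import random

def weibull_loglik(sample, params):
    """Log-likelihood of a Weibull mixture (weights, betas, etas) on a sample."""
    w, betas, etas = params
    ll = 0.0
    for t in sample:
        p = sum(wi * (bi / ei) * (t / ei) ** (bi - 1.0) * math.exp(-((t / ei) ** bi))
                for wi, bi, ei in zip(w, betas, etas))
        ll += math.log(max(p, 1e-300))  # guard against log(0)
    return ll

def select_folds(data, fit, m_max, n_boot=20, seed=0):
    """Pick the fold count with the highest mean bootstrap test log-likelihood."""
    rng = random.Random(seed)
    n = len(data)
    scores = [0.0] * m_max
    for _ in range(n_boot):
        # bootstrap resampling with replacement for train and test groups
        train = [data[rng.randrange(n)] for _ in range(n)]
        test = [data[rng.randrange(n)] for _ in range(n)]
        for m in range(1, m_max + 1):
            scores[m - 1] += weibull_loglik(test, fit(train, m))
    return max(range(1, m_max + 1), key=lambda m: scores[m - 1])
```

Because candidates are compared on held-out bootstrap samples rather than the training data, an over-parameterized fold count gains no advantage, which is the overfitting guard the section describes.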

3.3. Determining the Number of Hidden Layer Nodes

The number of nodes in the hidden layer is associated not only with the function between input and output, but also with the sample size, random noise, and so forth. Generally, too few nodes cause poor recognition and fitting performance, while too many nodes easily fit random noise and also degrade recognition performance. Therefore, choosing an appropriate number of nodes in the hidden layer is critical for improving the generalization performance of the network.

Repeated sampling helps weaken the influence of random noise. Therefore, as the number of nodes in the hidden layer dynamically increases, the shape of the density function of the sample data tends to become stable during the learning process of the WNN. For this learning process, a similarity coefficient is defined to determine the stopping condition.

When the numbers of nodes in the hidden layer are k and k + 1, define the corresponding density functions as f_k(t) and f_{k+1}(t), respectively. Then the similarity coefficient (SC) between f_k(t) and f_{k+1}(t) is defined as

SC = ∫ min(f_k(t), f_{k+1}(t)) dt / ∫ max(f_k(t), f_{k+1}(t)) dt. (16)

As shown in Figure 4, the similarity coefficient (SC) is the ratio between the intersection and the union of the areas covered by f_k(t) and f_{k+1}(t). The theoretical value of the similarity coefficient lies in the interval [0, 1]. A larger SC means a higher similarity between f_k(t) and f_{k+1}(t); when SC = 1, the two density functions are identical.

Figure 4: Schematic diagram of similarity coefficient.
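The similarity coefficient of (16) can be approximated numerically, for instance with a midpoint rule (an illustrative sketch, not the authors' implementation):

```python
def similarity_coefficient(f1, f2, t_max, n=2000):
    """SC = area of min(f1, f2) / area of max(f1, f2), approximated by
    the midpoint rule on [0, t_max]; f1, f2 are density functions."""
    dt = t_max / n
    inter = union = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        a, b = f1(t), f2(t)
        inter += min(a, b) * dt
        union += max(a, b) * dt
    return inter / union
```

Identical densities give SC = 1, and densities with disjoint support give SC = 0, matching the interpretation above.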

By combining the HLNF, the bootstrap algorithm, and the EM algorithm, the specific algorithm to determine the number of hidden layer nodes is given as follows.

Step 1. Initialize the threshold SC₀, which defines the stopping condition of the algorithm.

Step 2. Using the bootstrap algorithm, generate a first group of training sample data, estimate the parameters of the HLNF corresponding to that sample data, and obtain φ₁(t); set k = 1.

Step 3. Using the bootstrap algorithm, generate the (k + 1)-th group of training sample data, estimate the parameters of the HLNF corresponding to that sample data, and obtain φ_{k+1}(t). Then the WNN model can be obtained:

f_{k+1}(t) = (1/(k + 1)) Σ_{l=1}^{k+1} φ_l(t),

where φ_l(t) is the probability density function of the l-th hidden layer node.

Step 4. According to (16), the similarity coefficient SC between the k-node and (k + 1)-node WNN can be computed to judge whether it meets the stopping condition SC ≥ SC₀. If SC does not satisfy the condition, set k = k + 1 and go to Step 3; if it satisfies the condition, end the training.

Finally, the WNN is defined, and the reliability assessment process is shown in Figure 5. In this process, at the third step, once the parameters of the WNN are obtained, the probability density based on the WNN is also obtained.

Figure 5: Flow chart of reliability assessment based on WNN.

4. Data Collection and Analysis

In order to validate the WNN model, we collected field test data from 23 CNC machining centers, with more than 3000 h of running time for each. After data preprocessing, the times between failures obtained within the operation time are listed in Table 1.

Table 1: Time between failures of 23 CNC machining centers.

For the failure data in Table 1, according to Section 3.2, the twofold Weibull mixture distribution is selected as the neuron function. A similarity coefficient threshold is set as the stopping condition in the learning process of the Weibull neural network; according to it, the number of hidden layer nodes is calculated as 51. Following the steps in Section 3.1, the parameters of the hidden layer nodes function (HLNF) are estimated; the mixture weights w, shape parameters β, and scale parameters η of the 51 nodes are shown in Table 2. Substituting these three parameters into (6), (7), and (8) gives the probability density, plotted as the blue line in Figure 6.

Table 2: Parameters estimation results of WNN.
Figure 6: The probability density curves on WNN and TPWD.

The distribution law of the times between failures of the 23 CNC machining centers is modeled by the WNN. For comparison, the general two-parameter Weibull distribution (TPWD) is used to analyze the same data, and the probability density curves of the two methods are shown in Figure 6. The blue curve is the probability density from the WNN and the red one is from the TPWD. The probability density function curves of the data are continuous and differentiable in this case, presenting the good generalization performance inherited from the Weibull method.

Different from the single peak of the TPWD probability density curve, the blue curve has another peak around 1500 h, which means the CNC machining center is more likely to fail after running about 2500 h. The probability density curve from the WNN reveals more accurate information about the distribution law of times between failures than that from the TPWD; the two-peak curve from the WNN better approximates the actual condition. In other words, the WNN has better approximation ability than the TPWD in distribution modeling of life data.

The mean time between failures (MTBF) describes the expected time between two failures for a repairable system [27]. MTBF is the major reliability index of CNC machining centers, and point estimation and interval estimation are the most common methods for obtaining it. To evaluate the WNN and the TPWD more thoroughly, both MTBF estimates are calculated for the two models; the two MTBF estimates of the WNN are calculated by the methods in [28–30]. The comparison results are shown in Table 3.
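For reference, the MTBF point estimate of a Weibull mixture follows directly from its parameters, since each component has mean η_i Γ(1 + 1/β_i); a minimal sketch (illustrative, not the exact method of [28–30]):

```python
import math

def mixture_mtbf(weights, betas, etas):
    """Point estimate of MTBF for a Weibull mixture:
    E[T] = sum_i w_i * eta_i * Gamma(1 + 1/beta_i)."""
    return sum(w * e * math.gamma(1.0 + 1.0 / b)
               for w, b, e in zip(weights, betas, etas))
```

For a WNN with equally weighted nodes, the overall point estimate is simply the average of the per-node values.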

Table 3: MTBF of CNC machining centers.

The results show that the MTBF point estimate under the WNN is larger than that under the TPWD by 14.2864 h. The MTBF interval estimates, at the same confidence level, also differ considerably: the lower limit of the WNN is lower than that of the TPWD, and the upper limit of the WNN is higher than that of the TPWD by 16.192 h. Moreover, the confidence interval of the WNN MTBF interval estimate is wider than that of the TPWD by over 23.5 h, equivalent to 10.58% of the TPWD interval. These large differences in the interval and point estimates of MTBF further show that the WNN and the TPWD reveal different distribution laws of the times between failures of CNC machining centers, and that the TPWD carries a large error in the reliability assessment of CNC machining centers. Because the WNN has better approximation ability than the TPWD in distribution modeling of the life data of CNC machining centers, combined with the above comparison, the WNN is closer to the actual distribution law of times between failures.

5. Conclusions

The basic idea discussed in this paper is the application of the Weibull neural network to complex system reliability assessment. General reliability models easily result in overfitting or underfitting in the reliability modeling process; their poor generalization performance cannot reflect the actual life distribution law of reliability data. To address this problem, through analyzing the characteristics of artificial neural networks and the Weibull mixture distribution, the authors propose the Weibull neural network (WNN) for system reliability modeling. A common neural network structure, the hierarchical structure, is adopted in the Weibull neural network, and a three-step learning process for the Weibull neural network is proposed in this paper. In the learning process, using an interpolation method to solve the transcendental equation significantly improves the computational efficiency of the EM algorithm. Finally, a practical application case is presented: the WNN is used to analyze the distribution law of times between failures of a certain type of CNC machining center. The probability density function curve of the data is continuous and differentiable in the case, presenting good generalization performance. For further comparison, the authors introduce the two-parameter Weibull distribution (TPWD) to calculate the MTBF by point estimation and by interval estimation. The result of the case indicates that the Weibull neural network (WNN) has better approximation ability than the TPWD in distribution modeling of life data, and that the WNN can be popularized and applied to reliability assessment of complex systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publishing of this paper.

Acknowledgments

Research in this paper was supported by National Science and Technology Major Project of China: reliability promotion of thousands Chinese CNC machining centers (Grant no. 2013ZX04011-012); and Jilin province science and technology development plan: reliability system design for key functional components of CNC machine tools (Grant no. 20130302009GX).

References

  1. M. Rausand and A. Hoyland, System Reliability Theory: Models, Statistical Methods, and Applications, John Wiley & Sons, 2003.
  2. F. Downton, “Bivariate exponential distributions in reliability theory,” Journal of the Royal Statistical Society Series B: Methodological, vol. 32, pp. 408–417, 1970.
  3. J. H. K. Kao, “A graphical estimation of mixed Weibull parameters in life-testing of electron tubes,” Technometrics, vol. 1, no. 4, pp. 389–407, 1959.
  4. D. M. Titterington, A. F. M. Smith, and U. E. Makov, Statistical Analysis of Finite Mixture Distributions, John Wiley & Sons, New York, NY, USA, 1985.
  5. W. Mendenhall and R. J. Hader, “Estimation of parameters of mixed exponentially distributed failure time distributions from censored life test data,” Biometrika, vol. 45, pp. 504–520, 1958.
  6. E. E. Elmahdy and A. W. Aboutahoun, “A new approach for parameter estimation of finite Weibull mixture distributions for reliability modeling,” Applied Mathematical Modelling, vol. 37, no. 4, pp. 1800–1810, 2013.
  7. T. Bučar, M. Nagode, and M. Fajdiga, “Reliability approximation using finite Weibull mixture distributions,” Reliability Engineering and System Safety, vol. 84, no. 3, pp. 241–251, 2004.
  8. C.-K. Goh, E.-J. Teoh, and K. C. Tan, “Hybrid multiobjective evolutionary design for artificial neural networks,” IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1531–1548, 2008.
  9. G. Z. Li, T. Y. Liu, and G. F. Wu, “Improving generalization ability of neural networks ensemble with multi-task learning,” Journal of Computational Information Systems, vol. 2, no. 4, pp. 1235–1240, 2006.
  10. M. Riedmiller and H. Braun, “A direct adaptive method for faster backpropagation learning: the RPROP algorithm,” in Proceedings of the IEEE International Conference on Neural Networks, H. Ruspini, Ed., pp. 586–591, IEEE, San Francisco, Calif, USA, 1993.
  11. S. Haykin, Neural Networks and Learning Machines, Prentice Hall, 2008.
  12. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.
  13. P. M. Bentler and D. G. Bonett, “Significance tests and goodness of fit in the analysis of covariance structures,” Psychological Bulletin, vol. 88, no. 3, pp. 588–606, 1980.
  14. N. Mai-Duy and R. I. Tanner, “A collocation method based on one-dimensional RBF interpolation scheme for solving PDEs,” International Journal of Numerical Methods for Heat and Fluid Flow, vol. 17, no. 2, pp. 165–186, 2007.
  15. S. Z. Li, X. Y. Li, and T. M. Jiang, “A prediction method of life and reliability for CSALT using Grey RBF neural networks,” in Proceedings of the IEEE 16th International Conference on Industrial Engineering & Engineering Management, pp. 699–703, IEEE, Beijing, China, October 2009.
  16. J. Moody, “Fast learning in multi-resolution hierarchies,” in Advances in Neural Information Processing Systems I, vol. 1, pp. 29–39, Morgan Kaufmann Publishers, 1989.
  17. J. Moody and C. J. Darken, “Fast learning in networks of locally-tuned processing units,” Neural Computation, vol. 1, no. 2, pp. 281–294, 1989.
  18. S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, 1999.
  19. C. Jutten and J. Herault, “Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture,” Signal Processing, vol. 24, no. 1, pp. 1–10, 1991.
  20. Z. C. Xue and H. J. Wang, “The application of artificial BP neural networks and Monte-Carlo method for the reliability analysis on frame structure,” Applied Mechanics & Materials, vol. 204–208, pp. 3256–3259, 2012.
  21. W. Weibull, “A statistical distribution function of wide applicability,” Journal of Applied Mechanics, vol. 18, pp. 293–297, 1951.
  22. J. I. McCool, “Competing risk and multiple comparison analysis for bearing fatigue tests,” Tribology Transactions, vol. 21, no. 4, pp. 271–284, 1978.
  23. R. C. Elandt-Johnson and N. L. Johnson, Survival Models and Data Analysis, John Wiley & Sons, New York, NY, USA, 1980.
  24. T. Denœux, “Maximum likelihood estimation from fuzzy data using the EM algorithm,” Fuzzy Sets and Systems, vol. 183, pp. 72–91, 2011.
  25. S. Nandi and I. Dewan, “An EM algorithm for estimating the parameters of bivariate Weibull distribution under random censoring,” Computational Statistics and Data Analysis, vol. 54, no. 6, pp. 1559–1569, 2010.
  26. V. A. Jochen and J. P. Spivey, “Probabilistic reserves estimation using decline curve analysis with the bootstrap method,” in Proceedings of the SPE Annual Technical Conference and Exhibition, pp. 589–598, Society of Petroleum Engineers, Denver, Colo, USA, October 1996.
  27. M. J. Mondro, “Approximation of mean time between failure when a system has periodic maintenance,” IEEE Transactions on Reliability, vol. 51, no. 2, pp. 166–167, 2002.
  28. R. Billinton and P. Wang, “Teaching distribution system reliability evaluation using Monte Carlo simulation,” IEEE Transactions on Power Systems, vol. 14, no. 2, pp. 397–403, 1999.
  29. J. G. Yang, Z. M. Wang, G. Wang, and G. Zhang, “Likelihood ratio test interval estimation of reliability indices for numerical control machine tools,” Journal of Mechanical Engineering, vol. 48, no. 2, pp. 9–15, 2012.
  30. A. Mustafa, “Reliability equivalence factors for some systems with mixture Weibull failure rates,” African Journal of Mathematics and Computer Science Research, vol. 2, no. 1, pp. 6–13, 2009.