Abstract

Aiming at the problem that various types of uncertainties, such as randomness, fuzziness, and interval uncertainty, coexist in structural reliability analysis, a discretization analysis method of hybrid reliability for uncertain structures is proposed in this article based on evidence theory (ET). Firstly, in order to establish a hybrid reliability model based on ET, a generalized density method (GDM) is developed, improving on the entropy equivalent method (EEM), to transform fuzzy variables into equivalent random variables. Exploiting the discrete nature of the basic probability assignment (BPA) in evidence theory, both the random variables and the fuzzy variables (as equivalent random variables) are discretized into subintervals according to the six-sigma rule. The BPA of each subinterval is then computed, so that every focal element is assigned a BPA and the evidence structure characterization of the random and fuzzy variables is realized. Secondly, using the Fmincon function in MATLAB, which implements the sequential quadratic programming (SQP) algorithm, the minimum and maximum values of the performance function over each focal element are obtained directly. Production rules are then used to classify the focal elements by their location relative to the limit state, so that the belief and plausibility measures can be computed numerically. Finally, an engineering example, verified against the Monte Carlo Simulation (MCS) method, demonstrates the feasibility and accuracy of the proposed method.

1. Introduction

There are many inevitable uncertainties in practical structural engineering problems [1]. With increasingly strict requirements on product quality and reliability, quantifying, controlling, and managing the effects of uncertainty have become important, sometimes even imperative [2]. Uncertainty can be regarded as the difference between the present state of knowledge and complete knowledge. It can be classified into two general types: aleatory uncertainty and epistemic uncertainty [3]. Aleatory uncertainty, also called objective uncertainty, derives from the randomness of the environment, the inhomogeneity of materials, and the inherent variation associated with a physical system. Epistemic uncertainty, also termed subjective uncertainty, stems from a lack of knowledge or data; collecting more information or increasing knowledge therefore decreases its level.

Probability theory has been considered the most suitable choice for aleatory uncertainty quantification when sufficient data are available to construct accurate probability distributions or probability density functions (PDFs) [4, 5]. However, in real engineering problems, data about the uncertain variables are usually so scanty that the probabilistic characteristics (or PDFs) are difficult to obtain, which limits the application of reliability analysis methods based on probability theory. To address this problem, nonprobabilistic models are used to handle epistemic uncertainty. Ben-Haim [6] proposed the nonprobabilistic reliability method, which has since been widely applied to uncertainty problems. Currently, different uncertainty measures and analysis methods, such as fuzzy sets [7], convex models [8], possibility theory [9], and evidence theory [10, 11], have been developed to handle epistemic uncertainty. Among these methods, fuzzy sets suit situations where the available data are insufficient to define an accurate probability distribution. Purba [12] proposed a fuzzy-based reliability method to evaluate basic events of system fault trees when precise probability distributions are not available. For many complex problems, only the lower and upper bounds of the uncertain variables can be obtained, and convex models have then been applied to compute the interval of the uncertain output. Ben-Haim [6, 13] used the convex set (or info-gap) model to describe uncertainty and quantified reliability by the maximum uncertainty fluctuation that the system can sustain. Guo et al. [14] proposed a measure system and analysis method of "nonprobabilistic reliability" by quantifying the uncertain structural parameters as interval variables. Chen et al. [1] proposed a nonprobabilistic response surface limit method to perform nonprobabilistic reliability analysis of structures based on the interval model. Possibility theory is suitable for epistemic uncertainty with conflict-free information, because in possibility theory the evidence from different experts is assumed consistent.

Based on the previous studies, it is obvious that nonprobabilistic reliability analysis requires only a small amount of uncertainty information and depends far less on data than probabilistic reliability analysis. It therefore overcomes the limitations of probabilistic methods and provides a basis for reliability analysis when data samples are insufficient. Compared with other nonprobabilistic reliability methods, evidence theory (ET) is more general for modeling epistemic uncertainty [15] and can be viewed as an extension of classical probability theory. Based on the basic probability assignment function, the uncertainty of a proposition is described by the probability bounds given by the belief and plausibility functions. As an uncertainty reasoning method, ET analyzes epistemic uncertainty in a manner close to human thought, and hence can describe and handle incomplete and even conflicting information reasonably. Recently, many scholars have studied reliability analysis under epistemic uncertainty based on ET. Jiang et al. [16] proposed a new ET-based reliability analysis method that efficiently reduces the computational cost for uncertain structures. Zhang et al. [17] proposed an efficient response surface method to evaluate structural reliability using ET. Bae et al. [18] used ET to quantify the epistemic uncertainty of large-scale structures. Tao et al. [19] developed a novel evidence-based fuzzy model and the corresponding combination method, which reduces the computational cost and effectively combines uncertainty information coming from multiple sources. Helton et al. [20] introduced a sampling-based computational strategy to represent epistemic uncertainty in model predictions with ET. Xie et al. [21] presented an ET-based implementation framework for the quantification of margins and uncertainties (QMU) under mixed uncertainty.

In fact, uncertain variables can be described as random variables when data samples are adequate and a precise probability distribution can be obtained; when the data samples are insufficient, the uncertainty can instead be modeled with evidence theory. Consequently, an important class of random-interval hybrid reliability analysis problems arises naturally, and its efficient solution is of great significance for the reliability design of many complex products. For this problem, several numerical methods, including the function approximation method [22], the iterative rescaling method [23], and the probability bounds (p-box) approach [24], have been proposed to estimate the lower and upper bounds of the structural reliability in the presence of both random and interval variables. In addition, Luo et al. [25] presented a combined probabilistic and set-valued description based on the multiellipsoid convex model for grouped uncertain-but-bounded variables. Yang et al. [26] developed an efficient and accurate method for hybrid reliability analysis with both random and interval variables based on an active learning Kriging model. Xie et al. [27] proposed an efficient hybrid reliability analysis method with random and interval variables by decomposing the nested probability analysis loop and interval analysis loop into two separate loops. Gao et al. [28] presented a hybrid probabilistic and interval method for engineering problems described by a mixture of random and interval variables. Du et al. [29] presented an optimization design and solution algorithm based on a hybrid reliability model with random and interval variables. Qiu et al. [30] investigated the reliability analysis of random-interval structural systems by combining classical reliability theory with interval analysis. Wang et al. [31] evaluated the reliability of a probabilistic-interval hybrid structural system based on the interval reliability model and probabilistic operation. Mourelatos et al. [32] proposed an efficient reliability-based design optimization (RBDO) method based on ET to handle a mixture of aleatory and epistemic uncertainties. However, besides random and interval variables, practical engineering problems often involve fuzzy information, characterized by conceptual ambiguity arising from subjective human cognition; such uncertain variables are appropriately described by fuzzy variables. Therefore, two further kinds of hybrid reliability analysis problems, namely, random-fuzzy and random-fuzzy-interval, should be studied. Li et al. [33] presented a new algorithm for uncertainty propagation in fuzzy and random reliability analysis. Balu et al. [34] studied the reliability analysis problem with both random and fuzzy variables based on the Fourier transform. An et al. [35] presented a new hybrid reliability index and its solution method based on a random-fuzzy-interval model. Ni et al. [36] established a new hybrid reliability model containing randomness, fuzziness, and nonprobabilistic uncertainties based on the structural fuzzy random reliability and nonprobabilistic set-based models. Wang et al. [37] proposed a new reliability analysis method based on convex models for uncertain structures that may contain randomness, fuzziness, and nonprobabilistic uncertainties.

As the literature survey reveals, the hybrid reliability analysis of structures in which various uncertain variables coexist has attracted increasing attention in recent years. Because aleatory and epistemic uncertainties are usually present in a structure simultaneously, several theories must often be combined for uncertainty quantification; that is, uncertainty is usually the combined effect of two or more uncertain variables. A reliability analysis method that considers only a single type of uncertain variable therefore cannot make full use of the available uncertainty information. It is thus of great practical significance to investigate hybrid reliability analysis with multiple types of uncertain variables. In this paper, aiming at uncertain structures that simultaneously contain random variables, fuzzy variables, and interval variables, a discretization analysis method of hybrid reliability is proposed based on evidence theory.

2. Fundamentals of Evidence Theory

Currently, corresponding mathematical methods have been adopted to deal with the various types of uncertainty. For instance, randomness (random variables) is studied with probability theory and mathematical statistics, while fuzziness (fuzzy variables) is studied with fuzzy set theory and fuzzy mathematics. The most representative stochastic method is the Monte Carlo Simulation (MCS) method based on probability theory, one of the statistical methods that handles nonlinear problems effectively. MCS is highly accurate, but its computational cost is extremely high, especially at small failure probability levels [38]. Because it requires large data samples and many repeated function evaluations to guarantee convergence of the simulation results, MCS is difficult to apply in engineering practice. However, it is often used as a reference solution to test the accuracy of other, newer methods [26].

Evidence theory is an uncertainty modeling theory built on a frame of discernment; it was first proposed and developed by Dempster and Shafer and is also called D-S theory [10, 11]. Owing to the flexibility of its basic axioms, ET has an intrinsic capability to cope with both aleatory and epistemic uncertainties within one framework, without unnecessary assumptions [18]. Its most important feature is that uncertainty information is described by "interval estimation" rather than "point estimation", which means that the results are usually bounded intervals rather than single values. ET rests on the following important concepts.

2.1. Frame of Discernment

In evidence theory, a frame of discernment (FD) needs to be predefined as a set of mutually exclusive elementary propositions, and hence it can be viewed as a finite sample space in probability theory [32]. For instance, if the FD is given as $\Theta = \{\theta_1, \theta_2\}$, then $\theta_1$ and $\theta_2$ are elementary propositions and mutually exclusive to each other. All the possible subset propositions of $\Theta$ form a power set $2^{\Theta} = \{\emptyset, \{\theta_1\}, \{\theta_2\}, \Theta\}$, and every possible outcome of an arbitrary proposition of concern corresponds to one of these subsets.

2.2. Basic Probability Assignment

As an important concept in ET, the basic probability assignment (BPA) represents the degree of belief in a proposition. Let $\Theta$ be an FD and $A$ an arbitrary subset of $\Theta$; the BPA is assigned through a mapping function $m: 2^{\Theta} \to [0,1]$ that must satisfy the following three axioms:

$$m(A) \ge 0, \;\; \forall A \in 2^{\Theta}; \qquad m(\emptyset) = 0; \qquad \sum_{A \in 2^{\Theta}} m(A) = 1 \tag{1}$$

Here $m$ is the basic probability assignment function, and $m(A)$ denotes the BPA of the event $A$. It represents the degree of belief in the event $A$, playing a role similar to that of the probability density function in probability theory [32]. Any set $A$ with $m(A) > 0$ is called a focal element (FE).

2.3. Belief and Plausibility Functions

Probability theory does not allow any impreciseness in the given information, so it gives a single-valued result [18]. However, due to the lack of information in practical engineering, it is more reasonable to present a bound on the total degree of belief in evidence theory, as opposed to the single probability value given as the final result in probability theory. Specifically, if $A$ is an arbitrary subset of the FD, the probability of the event $A$ can be represented by an interval $[\mathrm{Bel}(A), \mathrm{Pl}(A)]$. Bel and Pl are called the belief function and plausibility function; $\mathrm{Bel}(A)$ and $\mathrm{Pl}(A)$ are called the belief measure and plausibility measure, respectively, and can be viewed as the lower and upper bounds of a probability measure. The degree of belief $\mathrm{Bel}(A)$ and degree of plausibility $\mathrm{Pl}(A)$ are calculated using the following formulas:

$$\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B) \tag{2}$$

$$\mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B) \tag{3}$$

As shown in Figure 1, $\mathrm{Bel}(A)$ and $\mathrm{Pl}(A)$ range from 0 to 1, and the plausibility interval $[0, \mathrm{Pl}(A)]$ contains the belief interval $[0, \mathrm{Bel}(A)]$ and the uncertainty interval $[\mathrm{Bel}(A), \mathrm{Pl}(A)]$. As a lower probability bound, $\mathrm{Bel}(A)$ can be interpreted as the degree of belief that the event $A$ would occur, while $\mathrm{Pl}(A)$ measures the upper bound of its probability. Thus, the true probability $P(A)$ is bounded between $\mathrm{Bel}(A)$ and $\mathrm{Pl}(A)$,

$$\mathrm{Bel}(A) \le P(A) \le \mathrm{Pl}(A) \tag{4}$$

and its approximate value can be taken as $P(A) \approx 0.5\,[\mathrm{Bel}(A) + \mathrm{Pl}(A)]$.

The uncertainty interval $[\mathrm{Bel}(A), \mathrm{Pl}(A)]$ represents the numerical magnitude of the uncertainty information; namely, $\mathrm{Pl}(A) - \mathrm{Bel}(A)$ represents the degree of uncertainty about $A$. Thus, ET can distinguish between "unknown" and "uncertain", as shown in Figure 1.
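
Formulas (2) and (3) reduce to simple sums over the focal elements. As a minimal illustration (not from the paper; the function name and the representation of focal elements as interval rows are our own assumptions), the following MATLAB sketch computes the belief and plausibility of a one-dimensional query interval:

    function [bel, pl] = belief_plausibility(focal, bpa, A)
    % focal : N-by-2 matrix, row i = [lower, upper] bounds of focal element i
    % bpa   : N-by-1 vector of BPA values, summing to 1
    % A     : query interval [aLo, aHi]
    inA    = focal(:,1) >= A(1) & focal(:,2) <= A(2);   % focal elements inside A
    meetsA = focal(:,2) >= A(1) & focal(:,1) <= A(2);   % focal elements meeting A
    bel = sum(bpa(inA));                                % formula (2): lower bound
    pl  = sum(bpa(meetsA));                             % formula (3): upper bound
    end

For instance, with focal = [0 1; 1 2; 2 3] and bpa = [0.2; 0.5; 0.3], the query A = [0.5 2.5] yields Bel(A) = 0.5 (only [1, 2] lies entirely inside A) and Pl(A) = 1.0 (all three focal elements intersect A).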

2.4. Combination Rules

Assume $\mathrm{Bel}_1, \ldots, \mathrm{Bel}_n$ are belief functions defined on the FD $\Theta$, where $n$ is the number of belief functions, and let $m_1, \ldots, m_n$ denote the corresponding BPAs. If all of them are present and the joint BPA is denoted as $m(A)$, then the joint BPA is calculated by Dempster's rule:

$$m(A) = \frac{1}{1-K} \sum_{A_1 \cap \cdots \cap A_n = A} \prod_{i=1}^{n} m_i(A_i), \quad A \neq \emptyset; \qquad m(\emptyset) = 0 \tag{5}$$

$$K = \sum_{A_1 \cap \cdots \cap A_n = \emptyset} \prod_{i=1}^{n} m_i(A_i)$$

Here $K$ is the conflict coefficient; the larger $K$ is, the more intense the conflict among the evidence sources.
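
For intuition, consider the special case of two independent sources whose focal elements are the same singletons, so that two focal elements intersect only when they are identical. A minimal MATLAB sketch of formula (5) under this assumption (the function name is hypothetical):

    function [m12, K] = dempster_combine(m1, m2)
    % m1, m2 : BPA vectors of two sources over the same singleton focal elements
    agree = m1(:) .* m2(:);     % products where both sources name the same element
    K     = 1 - sum(agree);     % conflict coefficient: mass sent to the empty set
    m12   = agree / (1 - K);    % formula (5): renormalized joint BPA
    end

For example, m1 = [0.6 0.3 0.1] and m2 = [0.5 0.4 0.1] give K = 0.57 and m12 ≈ [0.698 0.279 0.023].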

3. Hybrid Reliability Model Based on Evidence Theory

Generally speaking, uncertain variables are divided into random variables, fuzzy variables, and interval variables [28], as shown in Figure 2(a). Firstly, when the data on an uncertain variable are sufficient, namely, its probability density function (PDF) is known, the variable is treated as a random variable. Secondly, if the probability distribution information is inaccurate and no further information about the distribution is available, a membership function (MF) can be employed to describe the distribution, and the variable is regarded as a fuzzy variable. Thirdly, when only the lower and upper bounds of an uncertain variable are known but neither the PDF nor the MF can be determined, it is appropriate to describe the variable by a nonprobabilistic interval variable. Note that probability density functions can be considered a special case of membership functions, and interval variables can be treated as uniformly distributed random variables.

It can be seen from Figure 2 that uncertain variables cannot be regarded as random or fuzzy variables when the statistical data are insufficient to obtain accurate PDFs or MFs, because either the PDF or the MF must be known in reliability analysis. However, the lower and upper bounds of uncertain variables are often easy to determine; for example, structure dimensions are readily obtained from the designer. Therefore, the interval variable suits situations where sufficient data are not available and only the lower and upper bounds of the uncertain variables are known. In this study, the interval reliability model is used as the base, and the various types of uncertain variables are characterized and processed under the unified framework of ET. As shown in Figure 2(b), the proposed hybrid reliability model can be described by a triangular structure with a "center of gravity". The flow chart of the hybrid reliability model analysis is given in Figure 3.

3.1. Discretization of Random Variables

Random variables are a form of uncertainty backed by detailed statistical data and explicit statistical laws; the PDF is usually used to describe their concrete distribution. Due to the discrete property of the BPA of evidence variables, each continuous variable must be truncated and discretized in order to establish a hybrid reliability model based on evidence theory; in this way the evidence structure (or BPA structure) characterization of the uncertain variables is realized. In practical situations, the normal distribution is the most important probability distribution, and it is also the basis of many statistical analysis methods. Moreover, correlated or nonnormal variables can be transformed into independent standard normal variables by means of the Rosenblatt transformation [39]. Thus, this article takes normally distributed random variables as an example to show how the evidence structure characterization is achieved.

Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a vector of random variables, where the $X_i$ are mutually independent and obey normal distributions $N(\mu_i, \sigma_i^2)$, and let $f_{X_i}(x)$ be the PDF of the $i$-th continuous random variable $X_i$. As shown in Figure 4(a), within the distribution range of $X_i$, select the mean value $\mu_i$ as the center and $\delta\sigma_i$ as the truncation radius, and apply the six-sigma rule to truncate the continuous random variable [40]; the identification interval $[\mu_i - 6\sigma_i, \mu_i + 6\sigma_i]$ is thereby obtained. It is clear that the larger the truncation parameter $\delta$, the higher the calculation accuracy. Although the distribution range of the random variable is $(-\infty, +\infty)$, its probability of lying within the identification interval reaches 99.9999998% according to the six-sigma rule. In other words, when $\delta = 6$, the uncertainty information of the random variable within the identification interval can be considered complete.

To achieve the evidence structure (or BPA structure) characterization of a random variable, the random variable $X_i$ is uniformly discretized into $k$ subintervals within the identification interval $[\mu_i - 6\sigma_i, \mu_i + 6\sigma_i]$, denoted as $[x_{i,j}, x_{i,j+1}]$, $j = 1, 2, \ldots, k$. Each subinterval of equal length (see Figure 4(a)) is treated as a focal element $A_{ij}$; these focal elements are contiguously distributed and their number is $k$. The basic probability assignment of the focal element $A_{ij}$ is then calculated as

$$m(A_{ij}) = \int_{x_{i,j}}^{x_{i,j+1}} f_{X_i}(x)\,\mathrm{d}x \tag{6}$$

According to the geometric meaning of (6), the BPA of a focal element equals the area under the PDF curve over the corresponding subinterval. Hence, the quadl function in MATLAB, based on the adaptive Lobatto algorithm [41], is applied to compute the BPAs of the focal elements. Theoretically, when $k \to \infty$, $\mathrm{Bel}(A) = \mathrm{Pl}(A) = P(A)$; in any case, the true reliability $P(A)$ is bracketed according to inequation (4). However, the computational cost increases drastically if $k$ is too large, especially for multidimensional problems: if the number of discrete subintervals (focal elements) in each of $n$ dimensions is $k$, the total number of joint focal elements is $k^n$. To alleviate the computational cost, $k$ is usually selected as 4, 8, 16, 32, etc.

It is noteworthy that, to ensure that the BPAs sum exactly to 1 after truncation, a normalization should be performed. Thus, the proportional compensation method is used to modify the BPAs in this paper; namely,

$$m'(A_{ij}) = \frac{m(A_{ij})}{\sum_{j=1}^{k} m(A_{ij})}, \quad j = 1, 2, \ldots, k \tag{7}$$
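
As a concrete illustration of (6) and (7), the following MATLAB sketch (with hypothetical parameter values; normcdf is used in place of the quadl integration, which is exact for a normal PDF) builds the BPA structure of one normal random variable:

    mu = 100; sigma = 5; k = 8;                      % assumed mean, std, subinterval count
    edges = linspace(mu - 6*sigma, mu + 6*sigma, k + 1);   % six-sigma identification interval
    focal = [edges(1:end-1)', edges(2:end)'];        % k equal-length focal elements
    bpa = normcdf(focal(:,2), mu, sigma) ...
        - normcdf(focal(:,1), mu, sigma);            % formula (6): PDF area over each subinterval
    bpa = bpa / sum(bpa);                            % formula (7): proportional compensation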

3.2. Equivalent Randomization of Fuzzy Variables

The purpose of the equivalent randomization of fuzzy variables is as follows: once the equivalent random variables are obtained, the discretization method above can also be employed for continuous fuzzy variables, so that the evidence structure characterization of fuzzy variables is realized. It is well known, however, that a PDF used to describe a random variable must satisfy the requirements of "regularity" and "normalization". Thus, the main idea behind the equivalent randomization of fuzzy variables is to find the relationship between the MF and the PDF.

At present, the transformation between fuzzy variables and random variables is mainly based on the entropy equivalent method (EEM). Its core is to make the fuzzy entropy [42] of the fuzzy variable equal to the probability entropy [43] of the random variable, whereby the fuzzy variable is converted into a random variable of the desired distribution type. However, this approach has three major drawbacks. (1) From the principle of the EEM, the distribution type of the equivalent random variable (i.e., of its PDF) must be prescribed during the transformation; the distribution type is therefore not unique, which affects the correctness of the results. (2) The EEM is an approximate transformation method, so its accuracy is limited. (3) The fuzzy entropy and the probability entropy are merely scalars that quantify only the magnitude of the overall uncertainty of their respective information, whereas the MF and the PDF each express a mapping between the independent variable and the corresponding function value; if a fuzzy variable is transformed by the EEM, the original distribution information of the fuzzy variable is lost. In this work, a generalized density method (GDM) is developed to reflect the distribution information of the original fuzzy variable more reasonably and accurately. This method makes the equivalent random variable maintain the same distribution type as the original fuzzy variable.

Assume $\mu_Y(x)$ is the membership function of a continuous fuzzy variable $Y$; clearly its value domain is $[0, 1]$, so it possesses regularity but not necessarily normalization. Then $\mu_Y(x)$ should be normalized to obtain an equivalent random variable whose density has the properties of a PDF. The normalization principle is that the integral of the MF over its interval equals 1. In this study, the MF is transformed into a PDF by

$$f_Y(x) = \frac{\mu_Y(x)}{\int_{-\infty}^{+\infty} \mu_Y(x)\,\mathrm{d}x} \tag{8}$$

This is a simple processing method, because (8) merely divides the original MF by a positive constant. Obviously, the denominator of (8) is the integral of the MF over its whole range, and its value equals the area of the geometric shape enclosed by the MF and the x-axis.

It should be noted that the essence of (8) is to standardize the membership function: it not only preserves the distribution information of the original MF, but also satisfies the required regularity and normalization. Because the transformation from the fuzzy variable to the equivalent random variable is an equivalent mathematical transition, the probability distribution of the fuzzy variable is not changed. In other words, the function $f_Y(x)$ corresponds to the density of the original MF at each value and still carries the fuzziness of the original fuzzy variable, as shown in Figure 4(b). In this work, (8) is defined as the generalized density function (GDF), its maximum value equals the maximum of the MF divided by the denominator of (8), and that denominator is called the normalization factor.

It is obvious that the GDM realizes a same-type transformation between MFs and PDFs, so that the probability density function can indeed be considered a special case of the membership function. In converting a fuzzy variable into an equivalent random variable, the GDM needs neither to presume the distribution form of the fuzzy variable nor to apply any additional special treatment to it, so it is more accurate and reasonable in principle.

It is worth noting that although (6) was introduced for normally distributed random variables, it is not limited to them. Therefore, once the generalized density function is obtained, the processing method for the PDF of a random variable can be applied to the continuous fuzzy variable (or its MF), and the evidence structure (or BPA structure) characterization of the fuzzy variable is realized. The evidence structure of a triangular fuzzy variable is shown in Figure 4(b). Certainly, any other PDFs, or even MFs, can be assigned a BPA structure by the same method. Overall, owing to its flexible framework, the BPA structure in ET can model both aleatory and epistemic uncertainties; that is, different types of uncertainty information (random, fuzzy, and interval variables) are incorporated into one framework to quantify uncertainty in this study.
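
A sketch of the whole GDM pipeline for a single fuzzy variable (the triangular MF parameters below are assumed for illustration and are not those of the paper's example): the MF is normalized by its integral as in (8), and the resulting GDF is discretized into a BPA structure exactly as a PDF would be in (6) and (7):

    l = 80; c = 100; u = 120;                             % hypothetical triangular MF
    mf  = @(x) max(0, min((x - l)/(c - l), (u - x)/(u - c)));
    nf  = integral(mf, l, u);                             % normalization factor of (8)
    gdf = @(x) mf(x) / nf;                                % generalized density function
    k = 8;
    edges = linspace(l, u, k + 1);                        % discretize the fuzzy interval
    bpa = arrayfun(@(i) integral(gdf, edges(i), edges(i+1)), 1:k);  % formula (6)
    bpa = bpa / sum(bpa);                                 % formula (7)

Note that the GDF keeps the triangular shape of the original MF, whereas the EEM would replace it with a normal density.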

3.3. Analysis Method of Hybrid Reliability Model

Considering that random variables, fuzzy variables, and interval variables are contained in the uncertain structure simultaneously, the performance function can be expressed as

$$G = g(\mathbf{X}, \mathbf{Y}, \mathbf{Z}) \tag{9}$$

where $\mathbf{X} = (X_1, \ldots, X_n)$ is the $n$-dimensional random variable vector described by the probability model, $\mathbf{Y} = (Y_1, \ldots, Y_p)$ is the $p$-dimensional fuzzy variable vector described by the fuzzy model, and $\mathbf{Z} = (Z_1, \ldots, Z_l)$ is the $l$-dimensional interval variable vector described by the interval model.

The computational strategy for (9) is as follows. The fuzzy variables are first transformed into equivalent random variables $\mathbf{Y}'$ through the GDM, so that the performance function (9) can be rewritten as

$$G = g(\mathbf{X}, \mathbf{Y}', \mathbf{Z}) \tag{10}$$

which contains only random and interval variables. Subsequently, both the random variables and the equivalent random variables are uniformly discretized into finite subintervals (focal elements); hence the hybrid reliability problem is turned into a nonprobabilistic reliability problem with only interval variables. Both types of uncertain variables are thus transformed into uncertain-but-bounded interval variables (evidence variables), the BPA of each subinterval is solved, and every focal element is assigned a BPA. In this way, the evidence structure characterization of the random and fuzzy variables is realized.

Suppose each random variable $X_i$ in $\mathbf{X}$ is discretized into $n_i$ subintervals ($i = 1, \ldots, n$); an arbitrary subinterval corresponds to a focal element $A_i$ with BPA $m(A_i)$. Each equivalent random variable $Y'_j$ in $\mathbf{Y}'$ is discretized into $p_j$ subintervals ($j = 1, \ldots, p$); an arbitrary subinterval corresponds to a focal element $B_j$ with BPA $m(B_j)$. Each interval variable $Z_k$ in $\mathbf{Z}$ is discretized into $l_k$ subintervals ($k = 1, \ldots, l$); an arbitrary subinterval corresponds to a focal element $C_k$ with BPA $m(C_k)$. The total number of joint focal elements is then

$$N = \prod_{i=1}^{n} n_i \cdot \prod_{j=1}^{p} p_j \cdot \prod_{k=1}^{l} l_k \tag{11}$$

Meanwhile, the $q$-th ($q = 1, 2, \ldots, N$) focal element $D_q$ is a joint interval composed of $A_i$, $B_j$, and $C_k$. Similar to the joint probability density function in probability theory, the joint BPA handles the case in which the FD contains multiple evidence variables. If the evidence variables are mutually independent, then $K = 0$ when the evidence combination is carried out with (5), so the joint basic probability assignment of the $q$-th focal element is calculated by

$$m(D_q) = \prod_{i=1}^{n} m(A_i) \cdot \prod_{j=1}^{p} m(B_j) \cdot \prod_{k=1}^{l} m(C_k) \tag{12}$$

where $A_i$, $B_j$, and $C_k$ range over the power sets of the focal elements of $\mathbf{X}$, $\mathbf{Y}'$, and $\mathbf{Z}$, respectively. After transforming the fuzzy variables into equivalent random variables, the reliability region of the uncertain structure can be defined by an expression containing only random and interval variables; namely,

$$R = \{(\mathbf{X}, \mathbf{Y}', \mathbf{Z}) \mid g(\mathbf{X}, \mathbf{Y}', \mathbf{Z}) \ge 0\} \tag{13}$$

As Figure 5(a) shows, $D_i$ represents the $i$-th focal element of the Cartesian product $X_1 \times X_2$. The variables $X_1$ and $X_2$ are each uniformly discretized into 4 subintervals; hence there are 16 ($4 \times 4 = 16$) focal elements in the FD. The joint BPA $m(D_i)$ is the product of the basic probability assignments of the corresponding subintervals of $X_1$ and $X_2$. In the Cartesian product, there is no overlap between the focal elements, which means that the focal elements constructed from the independent variables $X_1$ and $X_2$ are mutually exclusive. Typically, the two-dimensional focal elements are geometrically rectangular; the height over each rectangle reflects the magnitude of its joint BPA, and the sum of all joint BPAs is 1, as shown in Figure 5(b). Obviously, for an $n$-dimensional problem, the focal elements are multidimensional boxes in the FD.
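
The construction of the joint focal elements and their BPAs by formula (12) is a plain Cartesian product. A two-variable MATLAB sketch (the marginal focal elements and BPAs are made up for illustration):

    focalX = [0 1; 1 2];  bpaX = [0.4; 0.6];        % hypothetical marginals
    focalY = [5 6; 6 7];  bpaY = [0.7; 0.3];
    [q, r] = ndgrid(1:size(focalX,1), 1:size(focalY,1));
    jointFocal = [focalX(q(:),:), focalY(r(:),:)];  % rows: [xLo xHi yLo yHi] boxes
    jointBPA   = bpaX(q(:)) .* bpaY(r(:));          % formula (12); sums to 1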

3.4. Belonging Judgement and Classification Algorithm for Focal Elements

According to the relative position between the focal elements and the limit-state equation $g = 0$, the focal elements can be divided into three categories: those belonging to the reliability region, the intersecting region, and the failure region, as shown in Figure 6. Then, by formula (2), $\mathrm{Bel}(R)$ is the sum of the joint BPAs of the focal elements lying entirely within the reliability region; similarly, by formula (3), $\mathrm{Pl}(R)$ is the sum of the joint BPAs of the focal elements lying entirely or partially within the reliability region. As shown in Figure 5(a), $\mathrm{Bel}(R)$ and $\mathrm{Pl}(R)$ are expressed as

$$\mathrm{Bel}(R) = \sum_{D_q \subseteq R} m(D_q) \tag{14}$$

$$\mathrm{Pl}(R) = \sum_{D_q \cap R \neq \emptyset} m(D_q) \tag{15}$$

In order to judge the belonging of the focal elements and classify them, the relative location between each focal element and the limit-state equation must be determined. That is, the minimum and maximum values of the performance function over each focal element need to be calculated:

$$G_{\min}^{q} = \min_{\mathbf{x} \in D_q} g(\mathbf{x}), \qquad G_{\max}^{q} = \max_{\mathbf{x} \in D_q} g(\mathbf{x}) \tag{16}$$

where $G_{\min}^{q}$ and $G_{\max}^{q}$ represent the minimum and maximum values of the performance function over the focal element $D_q$, respectively. For a focal element, if $G_{\min}^{q} \ge 0$, the focal element belongs to the reliability region (Figure 6(a)) and contributes to both $\mathrm{Bel}(R)$ and $\mathrm{Pl}(R)$; if $G_{\min}^{q} < 0$ and $G_{\max}^{q} \ge 0$, the focal element belongs to the intersecting region (Figure 6(b)) and contributes only to $\mathrm{Pl}(R)$; if $G_{\max}^{q} < 0$, the focal element belongs to the failure region (Figure 6(c)) and contributes to neither $\mathrm{Bel}(R)$ nor $\mathrm{Pl}(R)$.

Methods for finding the minimum and maximum values of the performance function include the sampling method, the vertex method [44], and numerical optimization. Among these, the sampling method (such as MCS) can handle any function type regardless of dimensionality, but its computational cost is very high and its accuracy depends strongly on the number of sampling points. The vertex method reduces the computational cost by evaluating only the vertices of each focal element, but it is valid only under the assumption that the performance function is monotonic.

In this paper, computing the minimum and maximum values of the performance function over each focal element is first expressed as a constrained optimization problem, which is solved with the Fmincon function based on the sequential quadratic programming (SQP) algorithm in MATLAB [45]; the extreme values over each focal element are then obtained directly and easily. The Fmincon function in the MATLAB optimization toolbox is an efficient solver for nonlinear constrained extremum problems. The optimization model and the commonly used call format are

$$\min_{\mathbf{x}} g(\mathbf{x}) \quad \text{s.t.} \quad \mathbf{lb} \le \mathbf{x} \le \mathbf{ub} \tag{17}$$

    [x, Fval] = fmincon(@g, x0, A, b, Aeq, beq, lb, ub)

where the constraint conditions are the linear bound constraints of each interval, Fval is the minimum value of the performance function, and x0 is the initial point (or initial vector) of the search, here set as the midpoint of each interval. More details about the Fmincon function can be found in [46]. Note that Fmincon returns a minimum; to compute the maximum of the performance function, the function is first negated, and the minimized Fval is then negated again to obtain the maximum.
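
A minimal sketch of this two-call strategy over one box-shaped focal element (the performance function and bounds are invented for illustration; the 'sqp' option selects the SQP algorithm described above):

    g   = @(x) x(1)^2 - x(2);                 % hypothetical performance function
    lb  = [1; 2];  ub = [2; 3];               % bounds of the focal element
    x0  = (lb + ub) / 2;                      % midpoint as the initial point
    opt = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
    [~, Gmin] = fmincon(g, x0, [], [], [], [], lb, ub, [], opt);  % minimum, formula (16)
    [~, negG] = fmincon(@(x) -g(x), x0, [], [], [], [], lb, ub, [], opt);
    Gmax = -negG;                             % invert the minimized -g to get the maximum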

In addition, since the Fmincon function is used to solve (16) in MATLAB, it is easy to judge the belonging of the focal elements and classify them (Figure 6) by production rules. Production rules conform to the human way of thinking and are generally expressed in the form "IF P THEN Q", abbreviated as P → Q; namely, if the premise P holds, then the conclusion Q holds. Thus, the belonging judgement for focal elements (Figure 6) can be simplified into two rules: IF the minimum over a focal element is nonnegative THEN it counts toward Bel(R); IF the maximum is nonnegative THEN it counts toward Pl(R). The detailed flowchart is shown in Figure 3. The core of this classification algorithm is illustrated by the pseudocode:

    for q = 1 : N
        if Gmin(q) >= 0
            Bel = Bel + m(q);
        end
        if Gmax(q) >= 0
            Pl = Pl + m(q);
        end
    end

To sum up, the numerical calculation of $\mathrm{Bel}(R)$ and $\mathrm{Pl}(R)$ in the hybrid reliability model is

$$\mathrm{Bel}(R) = \sum_{G_{\min}^{q} \ge 0} m(D_q), \qquad \mathrm{Pl}(R) = \sum_{G_{\max}^{q} \ge 0} m(D_q) \tag{18}$$

That is, $\mathrm{Bel}(R)$ is the sum of the joint BPAs of the focal elements over which the minimum of the performance function is greater than or equal to 0, and $\mathrm{Pl}(R)$ is the sum of the joint BPAs of the focal elements over which the maximum is greater than or equal to 0.

4. Example

A crank-slider mechanism, modified from [15], is investigated as shown in Figure 7. The external force is 280 kN; the inner and outer diameters of the coupler are 28 mm and 56 mm, respectively. The yield strength s of the coupler is a triangular fuzzy variable, whose membership function is expressed as formula (19). The random variables are the length a of the crank and the length b of the coupler. Furthermore, the precise distributions of the coefficient of friction between the slider and the ground and of the offset are unavailable, but their intervals and BPAs can be obtained from expert opinions and limited historical data. The distributions of the uncertain variables are listed in Table 1.

In Table 1, parameters 1 and 2 represent the mean value and standard deviation for the random variables, respectively, and the focal elements and the corresponding BPAs for the interval variables, respectively. The performance function is defined as the difference between the material strength and the maximum stress of the coupler (formula (20)).

4.1. Computation of Bel(R) and Pl(R)

In this study, to demonstrate the feasibility and accuracy of the proposed approach, the EEM and the GDM are both used to convert the fuzzy variable into an equivalent random variable. On the one hand, considering that the normal distribution is the most common distribution type in reliability analysis and that other distribution types can be transformed into it by means of the EEM, the fuzzy variable is transformed by the EEM into an equivalent normally distributed random variable with the corresponding equivalent standard deviation and equivalent mean value (in MPa). On the other hand, since the area enclosed by the MF and the x-axis is 60, the generalized density function is $f(x) = \mu(x)/60$ and its maximum value is $1/60$. The membership function of the fuzzy variable and its equivalent random variables are plotted in Figure 8.

In this article, to avoid systematic errors caused by human factors, orthogonal experimental design (OED) is utilized to arrange and analyze the numbers of discrete subintervals scientifically. The OED method is a discrete optimization method that effectively handles multifactor (multivariable) problems and is characterized by its balancing property; namely, for every pair of columns, all combinations of factor levels occur once. Because a, b, and s need to be discretized into subintervals, an orthogonal table is used to arrange the numbers of subintervals. Bel(R) and Pl(R) are the response indices (RI), and levels 1, 2, and 3 represent 4, 8, and 16 subintervals, respectively. The results of this orthogonal experimental design are provided in Table 2, where each column represents a factor (variable) and each row a combination of factor levels.

It is worth mentioning that the equivalent random variable cannot be truncated directly by the six-sigma rule. This is because the identification interval given by the six-sigma rule extends beyond the true fuzzy interval (the support of the MF), and the part outside the fuzzy interval has no actual physical meaning; the two tails of the equivalent random variable therefore need to be truncated, as shown in Figure 8(b). In addition, to avoid excessive truncation error, the truncated tail probabilities must be reassigned to the sequence of focal elements obtained by discretizing the fuzzy interval, as in equation (7).

Because the random variables and the equivalent random variables are both discretized into subintervals according to the six-sigma rule, all the uncertain variables can eventually be treated as "interval variables". Therefore, when the Fmincon function is used to find the minimum and maximum values of the performance function over each focal element, the constraints of each interval variable form a "latitude and longitude" grid of bounds (see Figure 5(a)), and the corresponding optimization results form a grid of data points, as shown in Figure 9.

The reliability can be approximated as $P(R) \approx 0.5\,[\mathrm{Bel}(R) + \mathrm{Pl}(R)]$ according to inequation (4). The reliability estimates at k = 4, 8, and 16 are thereby obtained (Table 3) from the results of the orthogonal experimental design (Table 2); the number of samples in the MCS is 1,000,000 (Figure 10). When k ≤ 16, the reliability values of the EEM and the GDM are conservative compared with the MCS method, while the reliability value of the EEM is higher (toward the safe side) than that of the GDM.

The reasons are as follows. (1) The six-sigma rule truncates the continuous equivalent random variable, so part of the uncertainty information represented by the original membership function is lost. (2) The EEM transforms the membership function into a normal distribution function without accounting for its distribution form, which produces a more pronounced focusing effect in the focal element sequence (the BPA concentrates around the mean value); the EEM reliability value is therefore greater than that of the GDM, so the structure tends to appear safer. (3) The GDM completely preserves the distribution information of the original membership function; that is, the equivalent random variable maintains the one-to-one mapping between the original MF and the independent variable (Figure 8). In the GDM this focusing effect of the focal element sequence is alleviated to some extent, so the result tends to be conservative. This analysis indicates that, although the EEM reliability value may be larger than that of the GDM, the GDM is superior to the EEM in transforming the membership function. Therefore, the GDM is reasonable for the reliability analysis of engineering structures, and the feasibility of developing a generalized density method to transform fuzzy variables into equivalent random variables is demonstrated in this paper.

Figure 11 shows the relative errors among the three methods as the number of subintervals varies. As the curves show, the relative error decreases as the number of discrete subintervals increases; that is, to obtain a more accurate reliability estimate, the number of discrete subintervals must be increased. For instance, the relative error between the GDM and MCS is 7.60% at k = 4 but only 2.70% at k = 32. Moreover, the results of the proposed method (GDM) are very close to those of the conventional method (EEM), indicating the fine accuracy of the GDM: the largest relative error between the GDM and the EEM is 1.68% in the 4-subinterval case and only 0.49% in the 32-subinterval case. This shows that the proposed method attains the same accuracy as the conventional method. It is apparent that the more discrete subintervals there are, the higher the computational accuracy; but if k is too large, the amount of calculation undoubtedly becomes huge.

4.2. Effect Analysis of the Number of Discrete Subintervals on Reliability

For the OED method, a range analysis is performed after the experimental data are collected. The range analysis evaluates the effect of each factor level (the number of discrete subintervals) on the RI: the larger the range, the greater the effect of a level change on the RI, and vice versa. Based on Table 2, the range analysis results are summarized in Table 4, and the calculation principles are described as follows.

In Table 4, there are six orthogonal indices in this range analysis: $\bar{K}_i^{\mathrm{E}}$ and $\bar{K}_i^{\mathrm{G}}$ for each of the three variables a, b, and s, where the superscripts E and G stand for the EEM and the GDM. $\bar{K}_i^{\mathrm{E}}$ ($i$ stands for levels 1-3) is defined as the mean value of the response index for the EEM at the same level of each variable, and $\bar{K}_i^{\mathrm{G}}$ is defined analogously for the GDM:

$$\bar{K}_i^{\mathrm{E}} = \frac{1}{t} \sum_{\mathrm{level}(j) = i} \mathrm{RI}_j^{\mathrm{E}} \tag{21}$$

$$\bar{K}_i^{\mathrm{G}} = \frac{1}{t} \sum_{\mathrm{level}(j) = i} \mathrm{RI}_j^{\mathrm{G}} \tag{22}$$

where $t$ is the number of runs at the same level in each column (variable); in this case, $t = 3$.

The ranges $R^{\mathrm{E}}$ and $R^{\mathrm{G}}$ are the main indices analyzed in the orthogonal experiment. $R^{\mathrm{E}}$ is defined as the difference between the maximum and minimum of $\bar{K}_i^{\mathrm{E}}$ in the column of the corresponding variable, and $R^{\mathrm{G}}$ likewise for $\bar{K}_i^{\mathrm{G}}$:

$$R^{\mathrm{E}} = \max_i \bar{K}_i^{\mathrm{E}} - \min_i \bar{K}_i^{\mathrm{E}} \tag{23}$$

$$R^{\mathrm{G}} = \max_i \bar{K}_i^{\mathrm{G}} - \min_i \bar{K}_i^{\mathrm{G}} \tag{24}$$

where the superscripts E and G represent the EEM and the GDM, respectively.
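
A compact MATLAB sketch of this range analysis for one response index (the design matrix is a standard L9 orthogonal array; the RI values are placeholders, not the results of Table 2):

    design = [1 1 1; 1 2 2; 1 3 3; 2 1 2; 2 2 3; 2 3 1; 3 1 3; 3 2 1; 3 3 2];  % levels of a, b, s
    RI = [0.91 0.93 0.95 0.92 0.94 0.95 0.93 0.94 0.96]';   % placeholder Bel(R) per run
    Kbar = zeros(3, 3);                        % rows: levels 1-3, columns: a, b, s
    for f = 1:3
        for lev = 1:3
            Kbar(lev, f) = mean(RI(design(:, f) == lev));   % formulas (21)/(22), t = 3
        end
    end
    R = max(Kbar) - min(Kbar);                 % formulas (23)/(24): range per factor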

From the range analysis in Table 4, there is a clear ordering among a, b, and s. Whether the EEM or the GDM is used to convert the fuzzy variable into the equivalent random variable, the order is always s > a = b; that is, s has the greatest effect on Bel(R) and Pl(R), followed by a and b. This shows that the number of discrete subintervals of s has the greatest effect on the reliability of the crank-slider mechanism.

To visually illustrate the trend (or effect direction) of the RI with the number of discrete subintervals, a response index trend diagram is plotted in Figure 12 from the results of the orthogonal experimental design (Table 2). Figure 12 shows that, whether the EEM or the GDM is adopted, changing the number of subintervals of a and b has very little effect on Bel(R) and Pl(R) (these curves are nearly horizontal), which remain almost unchanged as the number of discrete subintervals increases. In contrast, changing the number of subintervals of s has a great effect on Bel(R) (these curves are strictly monotonically increasing) but little effect on Pl(R). Meanwhile, the gap between Bel(R) and Pl(R) tends to decrease as the number of discrete subintervals increases. Therefore, with the numbers of discrete subintervals of a and b held fixed, the computational cost can be reduced by increasing only the number of discrete subintervals of s.

4.3. Effect Analysis of Uncertain Variables on Reliability

Taking the results of the first, second, and third rows in Table 2, the cumulative belief function (CBF) and cumulative plausibility function (CPF) graphs are plotted. Figure 13 shows the CBF and CPF curves for different numbers of subintervals of a, b, and s. The CBF and CPF results are all staircase curves, which results from the discrete property of the BPA in evidence theory. Most importantly, the Bel(R) and Pl(R) values of the proposed method (GDM) correspond well to those of the conventional method (EEM), again illustrating that the GDM achieves the accuracy of the EEM in evidence-theory-based reliability analysis.

Moreover, the CBF and CPF represent, respectively, the lower bound Bel(R) and the upper bound Pl(R) of the reliability for different values of the performance function G, so the true probability distribution, obtained here with the MCS method as shown in Figure 14, lies between the CBF and the CPF. In other words, the CBF and CPF bound all possible probability distributions of the performance function G. Furthermore, comparing the 4-subinterval, 8-subinterval, and 16-subinterval cases shows that as the number of subintervals of each uncertain variable increases, the gap between the CBF and CPF gradually decreases. A smaller gap means less epistemic uncertainty associated with the performance function G; that is, more information leads to a lower level of epistemic uncertainty for the crank-slider mechanism. Therefore, if more information about the epistemic uncertainty becomes available and the number of subintervals approaches infinity, the CBF and CPF converge to the probability distribution, and only aleatory uncertainty remains.

Obviously, the hybrid reliability model proposed in this paper is not limited to the reliability analysis of the crank-slider mechanism. It can be applied to the hybrid reliability analysis of any uncertain structure in which the three types of uncertain variables (random, fuzzy, and interval variables) coexist, and it can equally be used when only one or two of these three types of variables appear in the reliability analysis.

5. Conclusions

This article develops a new hybrid reliability model and its solution method for uncertain structures based on evidence theory. The model turns the hybrid reliability problem into a nonprobabilistic reliability problem with only interval variables, and it can accurately solve uncertain problems containing random, fuzzy, and interval variables at the same time. In the proposed method, the evidence structure characterization of the uncertain variables and the numerical calculation of the belief and plausibility measures are realized, and the feasibility of a generalized density method for transforming fuzzy variables into equivalent random variables is demonstrated. The hybrid reliability analysis of a crank-slider mechanism was performed, and the effects of the number of discrete subintervals and of the uncertain variables on the reliability were analyzed. The results demonstrate that the proposed method (GDM) achieves the calculation accuracy of the conventional method (EEM) in evidence-theory-based reliability analysis, while ensuring high accuracy relative to the MCS method when the number of subintervals is large enough.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their gratitude to the National Natural Science Foundation of China (no. 51435011) and the Science & Technology Ministry Innovation Method Program (no. 2017IM040100).