Abstract

This paper proposes evidence theory based methods to both quantify epistemic uncertainty and validate computational models. Three types of epistemic uncertainty concerning the model input data are considered: sparse point data, interval data, and probability distributions with uncertain parameters. Through the proposed methods, the given data are described as corresponding probability distributions for uncertainty propagation through the computational model and, in turn, for model validation. The proposed evidential model validation method is inspired by the idea of Bayesian hypothesis testing and the Bayes factor: it compares the model predictions with the observed experimental data so as to assess the predictive capability of the model and support the decision on model acceptance. Building on the idea of the Bayes factor, the frame of discernment of Dempster-Shafer evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result can be guaranteed, and the hypothesis about the model testing supported by the most evidence will be favored by the BPA. The validity of the proposed methods is illustrated through a numerical example.

1. Introduction

Complex real-world phenomena are increasingly modeled by sophisticated computational models with few or no full-scale experiments. Owing to the rapid development of computer technology, model-based simulations have become dominant in the design and analysis of complex engineering systems, so that reducing the cost and time of engineering development increasingly depends on these models rather than on full-scale testing. The quality of a model prediction is influenced by various sources of uncertainty, such as model assumptions, solution approximations, variability in model inputs and parameters, and data uncertainty due to sparse and imprecise information. When the model is used for system risk assessment or for certification of reliability and safety under actual use conditions, it is crucial to quantify the uncertainty and confidence in the model prediction in order to support risk-informed decision making; hence the model needs to be subjected to rigorous, quantitative verification and validation (V&V) before it can be applied to practical problems with confidence [1]. Model validation is an important component of quantification of margins and uncertainties (QMU) analysis, which is intimately connected with the assessment and representation of uncertainty [2]. The process of model validation measures the extent of agreement between the model prediction and experimental observations [3]. Our work focuses on model validation under epistemic uncertainty in both the model inputs and the available observed experimental evidence.

Modeling complex systems is complicated, since many factors interact with each other [48]. A key component of model validation is the specific, rigorous treatment of the numerous sources and different types of uncertainty. Uncertainty can be roughly divided into two types: aleatory and epistemic. The term aleatory uncertainty describes the inherent variation of the physical system. Such variation is usually due to the random nature of the input data and can be mathematically represented by a probability distribution once enough experimental data is available. Epistemic uncertainty in nondeterministic systems arises from ignorance, lack of knowledge, or incomplete information. These definitions are adopted from the papers by Oberkampf et al. [9–11]. Epistemic uncertainty regarding a variable can be of two types: a poorly known stochastic quantity [12] or a poorly known deterministic quantity [13]. In this paper, we are concerned only with the former type, where sparse and imprecise information (i.e., sparse point data and/or interval data) is available regarding a stochastic quantity; as a result, the distribution type and/or the distribution parameters are uncertain. Uncertainty enters through both the model inputs and the observed experimental evidence. Previous studies on the treatment of epistemic uncertainty employ methods such as evidence theory [14, 15], fuzzy sets [16], entropy models [17, 18], and convex models of uncertainty [19]; these are intended primarily for uncertainty quantification, and it remains unclear how to implement model validation under epistemic uncertainty. The model validation method in this paper uses Dempster-Shafer (D-S) evidence theory to address the situation where the epistemic uncertainty arises from sparse and imprecise data concerning the model inputs, and both the input quantities and the observed experimental evidence appear in point form and interval form.

Another issue to be addressed is that the final decision on model acceptance has to be made with discretion. Model uncertainty can be reduced through a deeper investigation of the system, or, if some empirical information about the system behavior is available, the model can be assessed and a decision made regarding its acceptance or rejection. This paper investigates the latter approach and assumes that the validity of a model is judged only by its output, the investigation of its mechanism being unavailable. In previous studies, Sankararaman et al. [20, 21] assess the validity of the model and make the decision using the Bayes factor, which is based on the idea of Bayesian hypothesis testing and is the ratio of the likelihoods of the observed experimental data under two competing hypotheses: accept the model or reject the model. However, they decide whether to accept the model by a single Bayes factor calculation, which can be incidental and insufficiently robust.

Bayesian information fusion techniques have been evolving in many fields; nonetheless, effective performance can only be achieved if adequate and appropriate prior and conditional probabilities are available; otherwise the results of Bayesian methods can be imprecise, even far from the truth. As an extension of Bayesian theory, the Dempster-Shafer (D-S) evidence theory [14] uses the basic probability assignment (BPA) to quantify evidence and uncertainty [17, 22–26]. Compared with probability, the BPA has the advantage of efficiently modeling unknown information [27]. D-S evidence theory models how the uncertainty of a given hypothesis or discourse diminishes as groups of BPAs accumulate during the reasoning process [27–30]. The effectiveness of this method has been demonstrated in many fields. In this paper, first, a method based on the concepts of D-S evidence theory is proposed to reduce the parameter uncertainty of probability distributions for random input variables whose available information is in the form of sparse point data and interval data. In the case of point data, each point generates a BPA over the alternative distribution parameters of the input variable; in the case of interval data, the BPAs are obtained by sampling point data from the intervals. Through our method, each model input set comprising multiple point and/or interval data is described by a single probability density function (PDF) before uncertainty propagation analysis and model validation. For faithfulness to the available data, when little distribution parameter information is given, we also construct PDFs using a computational statistics technique, the bootstrapping method. Moreover, for model validation we propose an evidential method involving the probability distribution of the experimental evidence as well as that of the model prediction, the latter derived through uncertainty propagation. The proposed method borrows the idea of Bayesian hypothesis testing and the definition of the Bayes factor in [20, 31], from which groups of BPAs for the decision on model acceptance are generated.

The remainder of this paper is organized as follows. Section 2 provides an overview of the basic concepts and notation of Dempster-Shafer evidence theory. Section 3 describes how the evidence based method eliminates the parameter uncertainty of the probability distributions (epistemic uncertainty) of the model inputs. The validation method, which draws on the idea of Bayesian hypothesis testing and the Bayes factor, is presented in Section 4. The proposed methods are illustrated through a steady state heat transfer problem in Section 5. Finally, we conclude the paper in Section 6.

2. Dempster-Shafer Theory of Evidence

The real world is full of uncertainty, and how to deal with uncertain information is still an open issue [32–36]. Many mathematical tools, such as AHP [37–40], fuzzy sets [41–43], D numbers [44–47], evidential reasoning [30, 48–51], and rough sets [52], have been adopted with wide applications. In this section, the main concepts of the Dempster-Shafer (D-S) evidence theory are reviewed. The theory, introduced by Dempster [53] and extended by Shafer [14], is concerned with the question of belief in a proposition and systems of propositions. Compared with the Bayesian probability model, the merits of D-S evidence theory have been recognized in various fields. First, D-S evidence theory can handle uncertainty or imprecision embedded in the evidence [54]. In contrast to the Bayesian probability model, in which probability masses can be assigned only to singleton subsets, in D-S evidence theory probability masses can be assigned to both singletons and compound sets; thus more evidence can be expressed about the hypotheses and the distribution among the singletons, and the theory can be viewed as a generalization of classic probability theory. Second, in D-S evidence theory, no prior distribution is needed before the combination of BPAs from individual information sources. Third, D-S evidence theory allows one to specify a degree of ignorance, rather than requiring that all hypotheses be assigned precise probabilities. Finally, conflict management is improved with evidence theory [55–57]. The main notions of D-S evidence theory are introduced below.

Definition 1 (frame of discernment). Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ be a finite set of exhaustive and mutually exclusive elements, each $\theta_i$ being called a proposition or a hypothesis. The set $\Theta$ is called the frame of discernment. Denote the power set of $\Theta$ by $2^{\Theta}$, which contains all $2^N$ subsets of $\Theta$, including the empty set $\emptyset$.

Definition 2 (mass function). For a frame of discernment $\Theta$, a mass function is a mapping $m: 2^{\Theta} \to [0, 1]$, also called a basic probability assignment (BPA), satisfying

$$m(\emptyset) = 0, \qquad \sum_{A \in 2^{\Theta}} m(A) = 1, \tag{1}$$

where $A$ is any element of $2^{\Theta}$, and the BPA $m(A)$ expresses how strongly the evidence supports the particular proposition $A$.

Definition 3 (Dempster's rule of combination). Suppose $m_1$ and $m_2$ are two BPAs formed from the information obtained from two different information sources on the same frame of discernment $\Theta$; Dempster's rule of combination, also called the orthogonal sum and denoted by $m = m_1 \oplus m_2$, is defined as

$$m(A) = \begin{cases} \dfrac{1}{1 - K} \displaystyle\sum_{B \cap C = A} m_1(B)\, m_2(C), & A \neq \emptyset, \\[2mm] 0, & A = \emptyset, \end{cases} \tag{2}$$

with

$$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \tag{3}$$

where $A$, $B$, and $C$ are elements of $2^{\Theta}$, $1/(1 - K)$ is a normalization constant, and $K$ represents the basic probability mass associated with conflict among the sources of evidence, called the conflict coefficient of the two BPAs. The larger the value of $K$ is, the more conflicting the sources are and the less informative their combination is.
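For concreteness, the combination rule in (2) and (3) can be sketched in Python as follows. This is a minimal illustration in which BPAs are represented as dictionaries mapping frozensets (subsets of the frame of discernment) to masses; the function name and data representation are our own choices, not part of the original formulation.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination (Eqs. (2) and (3)) for two BPAs.

    Each BPA is a dict mapping a frozenset (a subset of the frame of
    discernment) to its basic probability mass.
    """
    fused = {}
    conflict = 0.0  # K in Eq. (3): mass falling on the empty set
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; combination undefined")
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Example: two BPAs over the frame {A, B}
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.1, A | B: 0.3}
m2 = {A: 0.5, B: 0.2, A | B: 0.3}
print(combine(m1, m2))
```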

3. Probability Distributions of Model Input Data

The input of a mathematical or computational model is denoted by $X$ and the model prediction, or output, by $Y$; both $X$ and $Y$ are sets of the corresponding variables. Generally, there are three types of data in $X$, as mentioned in [20]:

(a) sufficient data to construct a precise probability density function (aleatory uncertainty);
(b) sparse point data and/or interval data (epistemic uncertainty);
(c) a probability density function with uncertain parameters (epistemic uncertainty).

Here, the paper presents an evidential method to represent the model input data and reduce their epistemic uncertainty as much as possible, both for applying uncertainty propagation techniques to obtain the model prediction and for the purpose of model validation. Through our method, each input set of the model can be described using a single probability density function (PDF), as illustrated in Figure 1 [20, 58]. Since the scenario concerns epistemic uncertainty, for inputs of type (c) above a parameter set consisting of several alternative combinations of parameter statistics is provided by engineers and experts in addition to the input data. Such epistemic uncertainty is handled by the proposed evidence theory based method, while a computational statistics method addresses condition (b) above. The proposed method is discussed in detail below for sparse point data and interval data.

3.1. Evidential Probability Distribution of Data with Epistemic Uncertainty

The methods developed for model validation mostly address the case where the experimental evidence is in the form of point data. However, in some cases interval data exist as experimental evidence rather than point data [59–61]. For convenience, the sparse model input data obtained in this paper are expressed as point data $x_i$ ($i = 1$ to $n$) and interval data $[\underline{x}_j, \overline{x}_j]$ ($j = 1$ to $m$). As mentioned above, the alternative PDF parameter set is expressed as $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$, where each $\theta_k$ is a parameter combination of a known probability density function type such as exponential, normal, lognormal, or Poisson, for example, the mean and the standard deviation of a normal distribution. The PDF of the input is represented by $f(x \mid \theta_k)$. It is noteworthy that the PDF for the model input data is conditioned on the choice of $\theta_k$ from the set $\Theta$ [62–65]; therefore, every element of the set should be evaluated against the available evidence, that is, the obtained model input data. The basic probability assignment (BPA) of each element of $\Theta$ is derived according to the principle that the closer an alternative PDF is to the real PDF of the model inputs, the higher the value $m(\theta_k)$ ($k = 1$ to $N$) is. A good and useful BPA should contain most of the information provided by the data source and should be suitable for the subsequent process (the combination of BPAs and decision making). To this end, the proposed method incorporates the available model input data with the alternative PDFs to extract the BPA information sufficiently.

The normal distribution is commonly encountered in practice and is used throughout statistics, the natural sciences, and the social sciences as a simple model for complex phenomena. To explore the details, we use normal PDFs $N(\mu_k, \sigma_k^2)$ ($k = 1$ to $N$) as illustration. The steps for obtaining BPAs from the available model input data are as follows.

3.1.1. Step 1

First, given the normal PDF parameter set provided by experts and engineers, the corresponding normal PDFs can be determined and their curves drawn. Each PDF is coded as a proposition of the frame of discernment in D-S evidence theory; that is, $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$.

3.1.2. Step 2

In this step, the BPAs are derived from both the model input data and the normal PDFs. For a point datum $x_i$ ($i = 1$ to $n$), we determine the BPA with the help of the mean value theorem and regard observing $x_i$ as approximately equal to observing the interval $[x_i - \varepsilon, x_i + \varepsilon]$, where $\varepsilon$ is an infinitesimally small positive number. In this way, we obtain a BPA $m_i$ from the $i$th evidence source by calculating the intersections of the vertical line $x = x_i$ with the normal PDF curves,

$$y_{ik} = f(x_i \mid \mu_k, \sigma_k) = \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left(-\frac{(x_i - \mu_k)^2}{2\sigma_k^2}\right), \quad k = 1, \ldots, N,$$

which yields $N$ intersections with the alternative PDFs with uncertain parameters. The singleton masses $m_i(\{\theta_k\})$ are then obtained by normalizing the intersection values $y_{ik}$, the residual mass is assigned to $m_i(\Theta)$, and the BPA values of the remaining propositions in the power set of $\Theta$ are set equal to 0. In the evidential setting, $m_i(\Theta)$ is regarded as the quantified expression of the global ignorance about the optimal model input PDF parameters. As for the interval data $[\underline{x}_j, \overline{x}_j]$ ($j = 1$ to $m$), since each interval is usually rather small, we discretize it and sample a certain number of points from it to stand in for the interval; the BPA generating process is then the same as for point data. By this means we obtain additional groups of BPAs, all of which are used to help make the final decision about the optimal model input PDF.
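The following sketch shows one way to turn a single point datum into a BPA over the alternative parameter sets. The normalization used here (intersection heights scaled by each curve's peak, renormalized if they exceed one, with the residual assigned to $\Theta$) is an illustrative choice, not necessarily the exact scheme above, and the parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bpa_from_point(x, params):
    """Generate a BPA over alternative normal PDFs from one point datum.

    params: list of (mu, sigma) tuples, one per proposition theta_k.
    Returns a dict mapping frozensets of proposition indices to masses;
    the whole frame Theta carries the residual (ignorance) mass.
    """
    # Intersection heights y_ik of the line x = x_i with each PDF curve
    y = np.array([norm.pdf(x, mu, sigma) for mu, sigma in params])
    # Illustrative normalization: scale by each curve's peak height ...
    memb = y / np.array([norm.pdf(mu, mu, sigma) for mu, sigma in params])
    # ... and renormalize if the scaled heights sum to more than one
    if memb.sum() > 1.0:
        memb = memb / memb.sum()
    bpa = {frozenset([k]): m for k, m in enumerate(memb)}
    bpa[frozenset(range(len(params)))] = max(0.0, 1.0 - memb.sum())
    return bpa

params = [(5.0, 0.2), (5.1, 0.3), (5.2, 0.25)]  # hypothetical alternatives
print(bpa_from_point(5.02, params))
```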

3.1.3. Step 3

Based on the work done in Step 2, we combine all the obtained BPAs (that is, multiple mass functions) using Dempster's rule of combination as shown in (2) and (3). According to the fused result, the proposition $\theta_k$ ($k = 1$ to $N$) with the largest combined mass is the evidence-supported optimal distribution parameter choice. Thus, the approximate real model input PDF is estimated as the normal distribution with parameters $\theta_k$. It is noteworthy that if $m(\Theta)$ turns out to be the largest, we conclude that the alternative distributions are too similar to discriminate, or that the given data is too sparse to support a choice. Here, the engineers and experts who have provided $\Theta$ are assumed to be reliable.
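Assuming the `combine` and `bpa_from_point` sketches given earlier, the fusion and decision step can be expressed as follows; the data values are placeholders for the sparse point data.

```python
from functools import reduce

data = [4.98, 5.21, 5.02]  # sparse point data (placeholder values)
bpas = [bpa_from_point(x, params) for x in data]
fused = reduce(combine, bpas)         # Dempster's rule over all sources
best = max(fused, key=fused.get)      # evidence-supported proposition
print(fused, best)
```

If `best` is a singleton, its parameter combination defines the estimated input PDF; if it is the whole frame, the data are too sparse or the alternatives too similar to discriminate.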

3.2. Probability Distribution without Uncertain Parameters

Different from the situation in Section 3.1, consider another situation where only the sparse model input data are given and the PDF must be estimated without a set of expert-supplied parameter alternatives. As before, the model input data set includes both points $x_i$ ($i = 1$ to $n$) and intervals $[\underline{x}_j, \overline{x}_j]$ ($j = 1$ to $m$). Because the sample size is limited, an empirical distribution is hard to construct. However, the basic idea of the bootstrapping method [66] is that inference about a population from sample data can be modeled by resampling the sample data (sampling with replacement) and performing inference on the resampled data. This technique allows the estimation of the sampling distribution of almost any statistic using random sampling methods [67]. With prior knowledge of the distribution type, we can estimate the distribution by using the bootstrapping method to determine the parameter statistics. Given both point and interval data, the intervals are first discretized into a finite number of points; the bootstrapping method is then applied, and the statistics (the mean and standard deviation of the population, justified by the central limit theorem) are estimated and used to construct the PDF of the model input.
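A minimal sketch of this procedure, with the discretization granularity and resample count as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_normal_params(points, intervals, n_disc=100, n_boot=1000):
    """Estimate (mean, std) of an assumed normal population from sparse
    point data and interval data via bootstrapping.

    Each interval is discretized into n_disc equally spaced points; the
    pooled sample is then repeatedly resampled with replacement.
    """
    pooled = list(points)
    for lo, hi in intervals:
        pooled.extend(np.linspace(lo, hi, n_disc))
    pooled = np.asarray(pooled)
    means = np.empty(n_boot)
    stds = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(pooled, size=pooled.size, replace=True)
        means[b] = resample.mean()
        stds[b] = resample.std(ddof=1)
    return means.mean(), stds.mean()

# Hypothetical interval bounds; the point values follow Section 5.1
mu_h, sigma_h = bootstrap_normal_params(
    points=[0.58, 0.52], intervals=[(0.45, 0.55), (0.55, 0.65)])
print(mu_h, sigma_h)
```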

4. Model Validation

Through the above sections, each model input is represented by a single probability distribution. The next step is to propagate the input uncertainty through the mathematical model and determine the PDF of the model output. Given the statistical distributions of the input variables, various methods are available to carry out the probabilistic analysis that quantifies the uncertainty in the model output, such as Monte Carlo simulation [68] and response surface methods [69, 70]. The choice of method depends on the nature of the model used for predicting the output and on the requirements for accuracy and efficiency. In this setting, the model prediction takes a probabilistic distribution form; therefore model validation amounts to comparing the observed experimental evidence with the probability distribution of the model output. This section develops the evidential model validation metric inspired by the idea of Bayesian hypothesis testing and the Bayes factor.
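As an example, Monte Carlo propagation of two normally distributed inputs through a model can be sketched as below; the model function, parameter values, and sample size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(g, k_params, h_params, n=100_000):
    """Monte Carlo uncertainty propagation: sample the input PDFs and
    push each sample through the model g to obtain output samples."""
    k = rng.normal(*k_params, size=n)
    h = rng.normal(*h_params, size=n)
    return g(k, h)

# Placeholder model: any vectorized function of the two inputs
y = propagate(lambda k, h: k / h, k_params=(5.0, 0.2), h_params=(0.55, 0.05))
print(y.mean(), y.std())  # summary of the output distribution
```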

4.1. Bayes Factor

Bayesian hypothesis testing estimates the probability of a hypothesis given the observed experimental evidence $D$. Bayesian methods may use Bayes factors, introduced by Jeffreys [71], to compare hypotheses. A Bayes factor $B$ is a Bayesian alternative to frequentist hypothesis testing, most often used to compare multiple models and determine which better fits the experimental evidence; it is the posterior odds in favor of a hypothesis divided by the prior odds in favor of that hypothesis. Generally, for two hypotheses $H_0$ and $H_1$, the related probabilities can be updated using Bayes' theorem as [72]

$$\frac{P(H_0 \mid D)}{P(H_1 \mid D)} = \frac{P(D \mid H_0)}{P(D \mid H_1)} \cdot \frac{P(H_0)}{P(H_1)}. \tag{8}$$

The first term on the right-hand side of (8) is the Bayes factor $B$. In the context of model validation, the two hypotheses $H_0$ and $H_1$ may be defined as "the model is correct" and "the model is incorrect," respectively.

4.2. Evidential Model Acceptance Decision Making

Consider a provided model input data set; the model predicts an output set consisting of single deterministic quantities, each corresponding to a specific input data circumstance and a measured experimental dataset $D$. We take inspiration from the developed Bayes factor in [20],

$$B = \frac{P(D \mid H_0)}{P(D \mid H_1)}, \tag{9}$$

where (9) takes this form because "the epistemic uncertainty in the model inputs and the validation data have already been converted into probabilistic information" by the authors [20]. In this paper, we use the bootstrapping method to estimate the statistics needed to infer the PDF of the observed experimental evidence $D$, so as to replace the numerator in (9). With our redefined equation (9), we obtain two values: the numerator and the denominator. Accordingly, we constitute the frame of discernment for the decision on model acceptance with two propositions, "the model is correct" ($H_0$) and "the model is incorrect" ($H_1$), such that $\Theta = \{H_0, H_1\}$. As for the BPA value of each proposition, the two curves in Figure 2 illustrate the construction, and the two intersection points, $x_1$ and $x_2$, are the quantities of interest: against the background of D-S evidence theory, the BPA values of the propositions in $2^{\Theta}$ are defined from these intersection points in (10)–(13). Hence, one BPA is obtained, and $m(\Theta)$ is the quantification of the ignorance about the model testing. After obtaining all groups of BPAs, we combine them using Dempster's rule of combination as shown in (2) and (3), obtaining $m = m_1 \oplus m_2 \oplus \cdots$. According to the fused result, whichever of the three propositions $H_0$, $H_1$, and $\Theta$ owns the largest BPA value determines the outcome of the model testing; if it is $H_0$ or $H_1$, the result is self-evident, yet if $m(\Theta)$ turns out to be the largest, we conclude that the given data is too sparse for making a decision and other methods need to be employed. It should be mentioned that the evidence distance may be another alternative to the Bayes factor [73], which is still under research.
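A rough sketch of such a construction is given below. As an illustrative stand-in for (10)–(13), the mass for "model correct" is taken as the overlap between the prediction PDF and the evidence PDF, the mass for "model incorrect" as its complement, and a discounting factor moves part of the mass to $\Theta$ as ignorance; the evidence PDF parameters are hypothetical.

```python
import numpy as np
from scipy.stats import norm

H0, H1 = frozenset(["correct"]), frozenset(["incorrect"])
THETA = H0 | H1

def validation_bpa(pred_pdf, evid_pdf, grid, alpha=0.9):
    """BPA over {model correct, model incorrect} from the overlap of the
    model-prediction PDF and the evidence PDF.

    alpha < 1 discounts both masses, assigning the rest to Theta as
    ignorance; this allocation is an illustrative stand-in for
    Eqs. (10)-(13), not the exact published scheme.
    """
    dx = grid[1] - grid[0]
    overlap = np.minimum(pred_pdf(grid), evid_pdf(grid)).sum() * dx
    return {H0: alpha * overlap, H1: alpha * (1.0 - overlap), THETA: 1.0 - alpha}

grid = np.linspace(0.0, 60.0, 2001)
pred = lambda y: norm.pdf(y, 17.12, 10.04)  # normal stand-in for the prediction PDF
evid = lambda y: norm.pdf(y, 18.0, 9.0)     # hypothetical bootstrapped evidence PDF
print(validation_bpa(pred, evid, grid))
# The per-experiment BPAs are then fused with Dempster's rule (combine above).
```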

Dempster-Shafer evidence theory is concerned with the question of belief in a proposition and systems of propositions and is capable of quickly converging most of the BPA mass to the dominant proposition. Dempster's rule of combination is robust to incidental disagreements between a model prediction and its corresponding measured evidence (caused, for example, by measurement error or sparse evidence). In such cases the Bayes factor in (9) can be unreliable and imprecise, whether computed only once or averaged over several runs, while the proposed evidence theory based method handles these conditions properly. In particular, the BPAs reflect the partial lack of information available for decision making, which can be used to reject the model under consideration if the associated uncertainty is too high.

5. Numerical Example

In this section, a numerical example illustrating the proposed model validation methods is presented. For the purpose of illustration, several types of epistemic uncertainty, that is, sparse point data, interval data, and distribution parameter uncertainty, appear simultaneously in the model input data. Moreover, the experimental observations include both point data and interval data. The units of the following variables are omitted to simplify the illustration.

5.1. Problem Description and the Data

The steady state heat transfer in a thin wire of length $L$, with thermal conductivity $k$ and convective heat coefficient $h$, is of interest; the temperature prediction at the midpoint of the wire is desired. The problem is assumed to be essentially one-dimensional, and it is assumed that the solution can be obtained from the boundary value problem (14) [20, 21]. Rebba et al. [21] assumed that the temperatures at the ends of the wire are both zero (i.e., $T(0) = T(L) = 0$) and that the heat source $Q$ is given [74]; the conditions used in this paper describe an ideal scenario. The length $L$ of the wire is deterministic. It is desired to predict the midpoint temperature $T(L/2)$.
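To make the prediction step concrete, the following finite-difference sketch solves a boundary value problem of the standard fin-conduction form, $-k\,T''(x) + h\,T(x) = Q$ with $T(0) = T(L) = 0$; this form and the parameter values are our assumptions for illustration, with the exact statement of (14) given in [20, 21].

```python
import numpy as np

def midpoint_temperature(k, h, Q, L=1.0, n=101):
    """Solve -k T'' + h T = Q on (0, L) with T(0) = T(L) = 0 by central
    finite differences and return the temperature at the midpoint."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    main = 2.0 * k / dx**2 + h   # diagonal of the discrete operator
    off = -k / dx**2             # off-diagonal entries
    A = (np.diag(np.full(n - 2, main))
         + np.diag(np.full(n - 3, off), 1)
         + np.diag(np.full(n - 3, off), -1))
    T_inner = np.linalg.solve(A, np.full(n - 2, Q))
    T = np.concatenate(([0.0], T_inner, [0.0]))
    return T[n // 2]

print(midpoint_temperature(k=5.0, h=0.55, Q=10.0))  # placeholder Q
```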

For the sake of illustration, $k$ and $h$ are random variables here. The PDF of the conductivity $k$ of the wire is assumed to be normal but with uncertain distribution parameters; several alternatives are provided by experts and engineers, as shown in Table 1, which displays a parameter set consisting of five alternative combinations of parameter statistics. The input model data concerning $k$ comprise two intervals and three point values, 4.98, 5.21, and 5.02. Besides, suppose that the distribution of the convective heat coefficient $h$ is normal but not yet available. Instead, it is described using two intervals and two point values, 0.58 and 0.52.

Suppose that, for the given end temperatures of the wire and the model parameters $k$ and $h$, the numerical model (14) predicts a set of temperatures at $x = L/2$. A wire whose properties match the measured values used as model inputs is tested three times to measure the temperature at $x = L/2$, and the measured temperatures differ across the experiments; denote the three measured datasets by $D_1$, $D_2$, and $D_3$. It is required to assess whether the observed experimental evidence supports the numerical model in (14). The steps of the validation procedure are detailed below.

5.2. Probability Distribution of Model Input Data

As described in Section 3, the first task is to describe the provided model input data with a proper PDF. For this numerical example, the PDF of the conductivity $k$ of the wire is described by the experts and engineers as normal but with uncertain distribution parameters, that is, a normal distribution conditioned on the given set of alternative parameters. There are thus five alternative PDF curves, depicted in Figure 3, against which the input data are used to constitute groups of BPAs for eliminating the parametric uncertainty in the PDF of $k$. Figure 3(b) is the partial enlarged view of the intersections of the provided data with the curves, from which the BPA values are determined for information fusion and decision making. The generated BPAs to be combined are displayed in Table 2 ($m_1$ represents the BPA generated from the leftmost point on the $x$-axis, and the rest follow in a similar fashion). The fusion result is shown in Table 3, with the curves coded as numbers 1 to 5 as in Table 1; from it we can rank the five alternative PDFs and conclude that the approximate real PDF of the thermal conductivity $k$ of the wire is the normal distribution with the top-ranked parameter combination, that is, the red curve in Figure 3(a).

Since there is little information about the distribution of the convective heat coefficient $h$ of the wire, which is described only by two intervals and the two point values 0.58 and 0.52, this is the situation discussed in Section 3.2, and discretization plus the bootstrapping method apply. We discretize the interval data into 1000 points in total and estimate the parameter statistics of the normal distribution of $h$; the resulting PDF curve is shown in Figure 4.

5.3. Probability Distribution of Model Prediction

Now that the random variables $k$ and $h$ are both described in PDF form, the uncertainty propagation technique is applied in order to estimate the distribution of the midpoint temperature, which is also a random variable. As discussed in Section 4, the uncertainty propagation technique used here is Monte Carlo simulation, with samples from the input PDFs evaluated through the computational model in (14). The model predicted PDF is obtained and shown in Figure 6 as the red curve, a lognormal distribution with mean 17.12 and variance $10.04^2$; this is the PDF to be evaluated in the evidential model validation below.

5.4. Evidential Model Validation

Given the model predictions and the three groups of corresponding measured experimental evidence, the next step is to estimate the PDFs of the evidence ($D_i$, $i = 1$ to 3) through the bootstrapping method; the results are depicted in Figure 5. We then plot the evidence PDFs and the model prediction PDF together in Figure 6. From the intersections indicated by the red dotted lines, three tuples of values are derived to generate three groups of BPAs for information fusion and model acceptance decision making (shown in Table 4); the calculation follows (10)–(13). The fusion result is displayed in Table 5, and it shows that the experimental evidence agrees with the model; the computational model in (14) is therefore acceptable.

6. Conclusion

This paper proposes an evidence theory based method to quantify epistemic uncertainty in computational models, covering three types of epistemic uncertainty concerning the model input data: sparse point data, interval data, and probability distributions with uncertain parameters. We also develop an evidential method for model validation, inspired by the idea of Bayesian hypothesis testing and the Bayes factor, which compares the model predictions with the measured experimental data so as to assess the predictive capability of the model and support the decision on model acceptance. The bootstrapping technique from statistics is used to estimate the statistics needed to infer the probability density function (PDF) of the data involved in the model validation process. Through the proposed methods, the given data, in both point and interval form, are regarded as random variables and described by corresponding probability distributions for uncertainty propagation through the computational model and, in turn, for model validation. Building on the idea of the Bayes factor, the frame of discernment of D-S evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result can be guaranteed, and the hypothesis about the model testing supported by the most evidence is favored by the BPA, thus aiding the decision on model acceptance.

A numerical example concerning the prediction of the midpoint temperature of a wire demonstrates that the proposed methods can effectively handle epistemic uncertainty in the context of model validation. Future work will extend the methods to more complex model validation scenarios in which both epistemic and aleatory uncertainty, as well as further types of epistemic uncertainty such as qualitative model data and categorical variables, are taken into consideration.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

Wei Deng and Xi Lu contributed equally to this study.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (Grant nos. 61573290 and 61503237).