Abstract

Structural loads, on the one hand, and the structural capacity to withstand them, on the other, are probabilistic in nature, as neither can be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches to the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be convenient and efficient. However, it suffers from certain disadvantages, the most important being the very large number of samples required to handle small probabilities, which leads to a high computational cost. In this paper, a simple algorithm is proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. The revised approach is then presented in a step-by-step flowchart for easy programming and implementation.

1. Introduction

The main purpose of regulations and existing approaches in the analysis and design of structures, ranging from buildings to geotechnical structures, is to ensure safety and proper performance under probable loads. Safety means that structures should not fail under typical loads. Such a failure need not take the form of structural collapse and can instead be defined as a certain level of failure under what is known as the "performance level" in civil engineering regulations [1, 2].

Assume a load $S$ (or the stress and strain caused by the load) is to be applied to a structure with a load-bearing capacity of $R$; if $R$ is greater than $S$, the structure is safe, while $S$ being greater than $R$ renders the structure unsafe, incurring a level of damage which depends on the difference between $R$ and $S$. Therefore, a Limit State Function (LSF) can be defined as follows:

$$\mathrm{LSF} = g(R, S) = R - S.$$

Negative values of the LSF imply that the capacity is not adequate to bear the applied load. Therefore, the structure is likely to undergo some sort of failure. On the other hand, positive LSF values indicate adequacy of the structural capacity; that is, the structure is likely to remain safe under the given load. With the LSF value established, the failure probability can now be defined as follows:

$$P_f = P(\mathrm{LSF} < 0) = P(R - S < 0).$$

A great deal of research has been devoted to methods for calculating this probability [3, 4]. One of the common methods for this purpose is Monte Carlo sampling [5–7]. Although this method is effective and widely used in research, it suffers from a problem that limits its scope of application: when the probability is small, the number of samples required by the Monte Carlo method to make adequate predictions increases significantly, making the analysis difficult. As such, the aim of this paper is to propose an algorithm that estimates the failure probability when fewer than the required number of samples is available, thereby addressing this issue.

As shown in the Monte Carlo sampling algorithm (Figure 1), first, a sufficient number of random numbers is generated according to the probability distribution of each variable. The LSF is then evaluated for the generated random numbers, so that each random number corresponds to an LSF value. By counting the number of less-than-zero LSF values and dividing the count by the total number of data points, an estimate of the failure probability is obtained.
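For illustration, the counting procedure described above can be sketched in a few lines of Python. The interface and the limit state function in the example call are placeholders of our own choosing, not the case study discussed later:

```python
import numpy as np

def monte_carlo_pf(lsf, sample_random_vars, n_samples, seed=0):
    """Estimate P(LSF < 0) by plain Monte Carlo counting.

    lsf                : function mapping sampled variables to LSF values
    sample_random_vars : function returning n_samples draws of the random variables
    """
    rng = np.random.default_rng(seed)
    x = sample_random_vars(rng, n_samples)       # draw from each variable's distribution
    g = lsf(x)                                   # evaluate the limit state function
    return np.count_nonzero(g < 0) / n_samples   # fraction of less-than-zero outcomes

# Illustrative placeholder only: capacity R ~ N(200, 20), load S ~ N(100, 15)
pf = monte_carlo_pf(
    lsf=lambda x: x[0] - x[1],                   # g = R - S
    sample_random_vars=lambda rng, n: (rng.normal(200, 20, n),
                                       rng.normal(100, 15, n)),
    n_samples=100_000,
)
print(pf)
```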

However, the problem is that if the probability of failure is so small that it renders the number of samples inadequate, the method loses its efficiency. In other words, as no less-than-zero output is likely to be found, the probability of failure is calculated as zero.

In such cases, there are two logical workarounds. The first possible solution is to increase the number of random numbers and repeat the Monte Carlo analysis. However, depending on the available time and resources, this may not always be possible. The second is to seek an approach through which the probability can be approximated using the available limited number of data points. As described in the following section, this paper focuses on the second solution and proposes an algorithm to address this issue.

2. Problem Description

2.1. The Problem of the Monte Carlo Method

The problem with Monte Carlo sampling is that if the probability of failure is a small value, a large number of samples are needed in order to predict it accurately, causing a sharp increase in the required cost and time. Denoting the actual value of the failure probability as $P_f$ and the value estimated by Monte Carlo sampling as $\hat{P}_f$, Soong and Grigoriu [8] showed that the relationship between $P_f$ and $\hat{P}_f$ can be expressed as follows:

$$E[\hat{P}_f] = P_f, \qquad \mathrm{Var}[\hat{P}_f] = \frac{P_f\,(1 - P_f)}{N}, \qquad \delta_{\hat{P}_f} = \frac{\sqrt{\mathrm{Var}[\hat{P}_f]}}{E[\hat{P}_f]} = \sqrt{\frac{1 - P_f}{N\,P_f}},$$

where $N$ is the total number of samples and $E[\hat{P}_f]$, $\mathrm{Var}[\hat{P}_f]$, and $\delta_{\hat{P}_f}$ are the expected value, the variance, and the coefficient of variation of the estimated probability, respectively. As can be seen, increasing $N$ reduces the variance and dispersion of the Monte Carlo estimate, making the results less uncertain.

Now, if the Monte Carlo sampling method is used to calculate a probability of around one percent with a coefficient of variation of 5 percent, the following number of samples is required:

$$N = \frac{1 - P_f}{\delta_{\hat{P}_f}^{2}\,P_f} = \frac{1 - 0.01}{(0.05)^2 \times 0.01} = 39{,}600.$$
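The same calculation can be wrapped in a small helper; the function name and interface are ours, while the formula is simply the coefficient-of-variation relation above:

```python
def required_samples(pf, target_cov):
    """Monte Carlo sample count needed so that the coefficient of variation of
    the estimated probability does not exceed target_cov, using
    Var = pf * (1 - pf) / N."""
    return (1.0 - pf) / (target_cov**2 * pf)

print(required_samples(pf=0.01, target_cov=0.05))   # -> 39600.0 samples
```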

As seen above, a low failure probability requires a large number of samples, making the method difficult to apply. The root cause of this problem is that the failure probability is governed by the tail of the fitted distribution function, a region into which few or no samples of a limited set usually fall. Conventional Monte Carlo sampling is therefore a poor approximation of the tail in general, and a small error in the tail leads to a large error in the estimated failure probability. One solution is to generate a greater number of simulations in order to obtain enough samples in the tail and, consequently, a better approximation; however, this makes the method prohibitively expensive. The other solution is to use algorithms that generate more random numbers near the tail. These algorithms mostly use techniques that change the dispersion of the random numbers so as to generate more of them in a particular direction or region, such as the tail [9–12]. However, these methods require further calculations that increase the model's complexity. The purpose of this paper is to provide an approximate but simple method for estimating small failure probabilities that can easily be programmed and implemented.

2.2. Quantification of the Problem

To further illustrate the problem with the Monte Carlo sampling method, assume we want to estimate the probability of failure by taking 25 samples, where the LSF is calculated as $\mathrm{LSF} = R - S$, with $R$ being a random variable with a log-normal distribution (mean of 180 and standard deviation of 20) and $S$ a random variable with an Extreme Type I distribution (mean of 110 and standard deviation of 15). We solved the problem in this case as follows (a code sketch of these steps is given after the list):

(i) Two groups of 25 random numbers (ranging from zero to one) were generated for $R$ and $S$ using a uniform distribution. Uniform random number generators can be found in statistical and spreadsheet software [13, 14] as well as in the library functions of programming languages [15, 16]. The generated random numbers for $R$ and $S$, denoted $u_{R,i}$ and $u_{S,i}$, are listed in the second and fifth columns of Table 1, respectively.

(ii) For each random number $u_{R,i}$, a corresponding value of $R$, $R_i$, was generated. For this purpose, a set of standard-normal random variables $z_i$ was first generated from the $u_{R,i}$ using the inverse CDF of the standard-normal distribution, $z_i = \Phi^{-1}(u_{R,i})$. Then, using the relationship between the log-normal and the standard-normal distribution, the corresponding log-normal variables were produced as $R_i = \exp(\lambda + \zeta z_i)$, where $\lambda$ and $\zeta$ are the mean and standard deviation of $\ln R$.

(iii) For each random number $u_{S,i}$, a corresponding value of $S$, $S_i$, was then generated using the inverse CDF of the Extreme Type I distribution.

(iv) Subtracting $S_i$ from $R_i$, the values of $\mathrm{LSF}_i = R_i - S_i$ were calculated, as reported in the seventh column of Table 1.

(v) Finally, the probability of $\mathrm{LSF} < 0$ was estimated as $P_f = N_f/N$, where $N_f$ is the number of less-than-zero cases and $N$ is the total number of data points.
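The following Python sketch reproduces steps (i)–(v) using SciPy's inverse CDFs. The moment-to-parameter conversions for the log-normal and Extreme Type I distributions are the standard ones and are our assumption, as the paper does not list them explicitly; the specific random numbers naturally differ from those in Table 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 25

# (i) uniform random numbers for R (capacity) and S (load)
u_R = rng.uniform(size=n)
u_S = rng.uniform(size=n)

# (ii) R ~ log-normal with mean 180, std 20: convert moments to the parameters
# of ln R, then apply the inverse-CDF transform
mu_R, sd_R = 180.0, 20.0
zeta = np.sqrt(np.log(1.0 + (sd_R / mu_R) ** 2))   # std of ln R
lam = np.log(mu_R) - 0.5 * zeta**2                 # mean of ln R
z = stats.norm.ppf(u_R)                            # standard-normal variables
R = np.exp(lam + zeta * z)

# (iii) S ~ Extreme Type I (Gumbel, largest value) with mean 110, std 15
mu_S, sd_S = 110.0, 15.0
beta = sd_S * np.sqrt(6.0) / np.pi                 # scale parameter
loc = mu_S - np.euler_gamma * beta                 # location parameter
S = stats.gumbel_r.ppf(u_S, loc=loc, scale=beta)

# (iv) limit state function values
lsf = R - S

# (v) Monte Carlo estimate of the failure probability
pf = np.count_nonzero(lsf < 0) / n
print(pf)   # with only 25 samples this is very likely to be 0.0
```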

As can be seen, there were no less-than-zero values, yielding an unreasonable zero estimate of the failure probability from the Monte Carlo method.

The reason for this is that the failure probability is so small in this case that 25 samples are far too few for the Monte Carlo method to predict it with an acceptable level of accuracy. To investigate further, we developed a code for this case study and repeated the Monte Carlo analysis with 100, 1000, 5000, 10,000, 15,000, and 20,000 random numbers. The results are presented in Table 2.

As can be seen, increasing the number of samples significantly improves the accuracy of the predicted probability. However, this process, especially in cases where the LSF involves many random variables or is not explicitly available, would be very costly and difficult. This is why an assistant algorithm, run alongside the Monte Carlo algorithm, seems necessary to provide an estimate of the failure probability with a lower number of samples.

3. Proposed Method

As explained, due to the low number of available samples, the conventional Monte Carlo method could not accurately calculate the failure probability. Furthermore, increasing the number of samples was not a viable alternative due to time and resource limitations. Therefore, we propose an approximation algorithm for calculating the failure probability under these conditions.

Instead of counting the number of less-than-zero samples, as the Monte Carlo method does, the proposed algorithm uses the trend the data display as they approach the negative border. For this purpose, the LSF is first evaluated for the sampled random variables. The LSF values are then sorted in ascending order ($\mathrm{LSF}_1 \le \mathrm{LSF}_2 \le \dots \le \mathrm{LSF}_N$). The probability associated with each sorted value $\mathrm{LSF}_i$ is approximated by the plotting-position formula $P_i = i/(N+1)$ [8]. Here, it is assumed that these values are likely to follow the standard-normal distribution; therefore, the inverse standard-normal distribution is adopted to calculate the corresponding standard-normal variable, $z_i = \Phi^{-1}(P_i)$. The values of $z_i$ are then plotted against $\mathrm{LSF}_i$, and a curve is fitted to them. Curve fitting algorithms are not discussed in this study because, depending on the type and distribution of the data, different algorithms may be appropriate; however, it is recommended to select a curve fitting algorithm that approximates the data with the lowest possible error. In this paper, the least squares method is used for curve fitting [17, 18]. The fitted curve takes the place of the large number of samples usually required; that is, instead of generating many more samples, a trend is identified from the curve using the available data. The intersection of the curve with the vertical axis gives the standard-normal variable $z_0$ corresponding to $\mathrm{LSF} = 0$, which marks the onset of failure. The value of this probability can then be easily calculated using the standard-normal distribution function, $\Phi$, as follows:

$$P_f = \Phi(z_0).$$

The proposed algorithm is shown in Figure 2.
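A minimal sketch of this procedure is given below, assuming a straight-line least-squares fit of $z$ against the LSF values; since the paper leaves the choice of curve fitting algorithm open, the polynomial degree is exposed as a parameter:

```python
import numpy as np
from scipy import stats

def estimate_small_pf(lsf_values, degree=1):
    """Approximate P(LSF < 0) from a small set of LSF samples:

    1. sort the LSF values in ascending order,
    2. assign plotting-position probabilities p_i = i / (N + 1),
    3. map them to standard-normal variables z_i = Phi^{-1}(p_i),
    4. least-squares fit of z versus LSF,
    5. evaluate Phi at the fitted curve's intercept (LSF = 0).
    """
    g = np.sort(np.asarray(lsf_values, dtype=float))
    n = g.size
    p = np.arange(1, n + 1) / (n + 1.0)        # plotting-position probabilities
    z = stats.norm.ppf(p)                      # corresponding standard-normal variables
    coeffs = np.polyfit(g, z, deg=degree)      # least-squares fit of z(LSF)
    z0 = np.polyval(coeffs, 0.0)               # intercept: z at LSF = 0
    return stats.norm.cdf(z0)                  # estimated failure probability
```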

To demonstrate, the previously presented case study was solved using the proposed algorithm, following the step-by-step procedure below, which provided an estimate of the probability of failure.

Step 1. As shown in the second column of Table 3, the LSF values were sorted in ascending order, so that the smallest and largest values were denoted by $\mathrm{LSF}_1$ and $\mathrm{LSF}_{25}$, respectively.

Step 2. For each $\mathrm{LSF}_i$, a probability was calculated using the Gumbel plotting-position formula [8], $P_i = i/(N+1)$, as reported in the third column of Table 3.

Step 3. As shown in the fourth column of Table 3, for each $P_i$, a corresponding standard-normal variable $z_i = \Phi^{-1}(P_i)$ was calculated.

Step 4. $z_i$ was plotted against $\mathrm{LSF}_i$ (Figure 3).

Step 5. A curve was fitted to the data by least squares (Figure 3), with the equation of the fitted curve being as follows:

Step 6. The intersection of the fitted curve with the vertical axis (the intercept $z_0$, i.e., the value of $z$ at $\mathrm{LSF} = 0$) was calculated from the fitted equation.

Step 7. The failure probability was then calculated using the CDF of the standard-normal distribution as $P_f = \Phi(z_0)$.
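Continuing the earlier sketches, applying the `estimate_small_pf` function to the 25-sample `lsf` array generated in Section 2.2 collapses Steps 1–7 into a single call:

```python
pf_small_sample = estimate_small_pf(lsf, degree=1)   # lsf: 25 LSF samples from the earlier sketch
print(pf_small_sample)                                # small but nonzero, unlike the 25-sample count
```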

As can be seen, the proposed algorithm approximated the failure probability to an acceptable level of accuracy, equivalent to the estimate provided by the Monte Carlo method with 20,000 random numbers.

4. Discussion

To better evaluate the proposed method, it needs to be applied to other scenarios and the results checked. Thus, different LSFs and random variables are defined here to study the efficiency of the proposed method in estimating the exceedance probability. In this regard, three LSFs were defined as follows:

Two different load-cases were assumed for each LSF, each with different random variables. Thus, a total of six load-cases were defined, as shown in Table 4. It should be noted that the only data required by the proposed method are the LSF random values; the types of variables used to construct the LSF, and their distribution functions, have no direct effect on the proposed algorithm. The ability to use random variables with different distribution functions is therefore one of the capabilities of the proposed method, which is examined in this section.

Each of the six load-cases was then analyzed by the conventional Monte Carlo method using 25, 1000, 10,000, and 20,000 samples. The results are shown in columns 2 to 6 of Table 5. In addition, the same load-cases were analyzed by the proposed method so that the results could be examined carefully. These results are shown in column 7 of Table 5, and the probability plots are depicted in Figure 4.

As can be seen, in almost all cases the probability of failure predicted by the proposed method with 25 samples is close to that of the conventional Monte Carlo method with 20,000 samples. This demonstrates the efficiency of the proposed method in predicting small probabilities of failure. It should be noted, however, that the proposed method is an approximation algorithm and aims only to approximate the failure probability. Therefore, if increasing the number of samples is possible, the conventional Monte Carlo method is the better choice; otherwise, the proposed algorithm can be used to estimate the probability.

5. Application of Proposed Method

The proposed method can be used to evaluate the reliability of building or geotechnical structures. To demonstrate this capability, a geotechnical case study involving rock blasting excavation is used in this section. The objective is to estimate the load-bearing capacity of the rock mass against the explosion load. After the explosion, the rock medium around the explosion point undergoes a severe shock load and is intensely cracked. The size of this crushed zone should be limited to a certain area to minimize the side effects of the explosion. For this purpose, we use the proposed method to estimate the probability of the crushed zone exceeding a certain radius; in other words, we try to predict the chance of cracks extending beyond that radius.

To start the calculation, we must first introduce the model of Esen et al. [19]. Based on a series of in situ tests on concrete and rock samples, Esen et al. [19] developed a formula to predict the crushing zone radius around the blast-hole. Their formula is as follows:

where $r_c$ is the crushed zone radius (mm), $r_0$ is the blast-hole radius (mm), $P_b$ is the blast-hole pressure (Pa), $K$ is the stiffness of the rock mass (Pa), and $\sigma_c$ is the uniaxial compressive strength of the rock. $P_b$ and $K$ can be calculated, respectively, from

where $E_d$ is the dynamic elastic modulus (Pa), $\nu_d$ is the dynamic Poisson's ratio, $\rho_e$ is the unexploded explosive density, and $\mathrm{VOD}$ is the detonation velocity (m/s). In the next step, the parameters involved are defined as random variables. Here, we assumed that these variables follow a normal probability distribution with the characteristics listed in Table 6.

The LSF was set as the difference between 400 mm and the crushed zone radius:

$$\mathrm{LSF} = 400 - r_c.$$

The probability of $\mathrm{LSF} < 0$ is identical to the probability of the crushed zone radius exceeding 400 mm, which is the target of this section. Besides the proposed method, we used the conventional Monte Carlo method, the First-Order Reliability Method (FORM), and the Second-Order Reliability Method (SORM) to calculate this probability and compare the results [20–23].
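The Monte Carlo part of such a comparison could be wired up as sketched below, assuming normal random variables as in Table 6 (whose values are not reproduced here) and a placeholder `crushed_zone_radius` function standing in for the Esen et al. [19] expressions; the function and variable names are ours:

```python
import numpy as np

def crushed_zone_radius(samples):
    """Placeholder for the Esen et al. [19] model: should compute the blast-hole
    pressure P_b and rock mass stiffness K from the sampled variables and return
    the crushed zone radius r_c in mm. The expressions are not reproduced here."""
    raise NotImplementedError

def exceedance_probability(variables, n_samples=20_000, threshold_mm=400.0, seed=0):
    """Monte Carlo estimate of P(r_c > threshold), i.e. P(LSF < 0) with
    LSF = threshold - r_c. `variables` maps each Table 6 parameter name to its
    (mean, standard deviation), all assumed normally distributed."""
    rng = np.random.default_rng(seed)
    samples = {name: rng.normal(mu, sd, n_samples)
               for name, (mu, sd) in variables.items()}
    lsf = threshold_mm - crushed_zone_radius(samples)
    return np.count_nonzero(lsf < 0) / n_samples   # the same lsf array could also
                                                   # be fed to estimate_small_pf
```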

The Monte Carlo sampling method was applied with 25, 1000, 5000, 10,000, and 20,000 samples. The results are shown in Table 7. As seen, the exceedance probability converged to 3.465 percent at 20,000 samples. The proposed method was then used to analyze the same problem; the probability graph is depicted in Figure 5, and the results are shown in Table 8.

To solve the case study problem by the FORM and SORM methods, a computer program called Risk Tool (RT) [24, 25], developed for reliability analysis and risk assessment of structures, was used. To this end, the random variables were first defined in the "Models" section of the software according to the values in Table 6, and the LSF defined above was entered in the "Functions" part. The required settings for both the FORM and SORM analyses were made in the "Methods" tab, and the analyses were then performed using the RT software. The results are shown in Table 9.

The results of the four methods are compared in Figure 6. As seen, the proposed algorithm closely approximates the results of both the Monte Carlo and SORM methods. However, there is a relatively large difference between the results of FORM and the other methods. This is due to the linear approximation used by FORM, which does not match the nonlinear LSF of this case study; consequently, FORM could not approximate the exceedance probability as accurately as the other three methods.

6. Conclusion

In this paper, it was first discussed that the Monte Carlo method requires a large number of samples to return acceptable results for low failure probabilities. A simplified method was then presented to estimate such probabilities to an equivalent level of accuracy using a smaller number of samples. A simple case study was introduced, with the proposed algorithm implemented step by step. The efficiency of the proposed method was then evaluated using six additional load-cases. Finally, the application of the proposed method was demonstrated on a geotechnical project, and the results were compared with those of other reliability methods.

The following considerations should be noted when using the proposed algorithm:

(1) The proposed method merely gives an estimate of the failure probability. Therefore, in cases where increasing the number of samples is possible, the original Monte Carlo sampling method is preferred over the proposed algorithm.

(2) The intercept of the curve fitted to the data is highly dependent on the curve fitting algorithm used, which was not discussed in this paper. As a general rule, a curve fitting algorithm with the lowest possible error is recommended.

(3) Since the estimated probability depends on the generated random numbers, repeating the analysis may produce slightly different results. It is therefore recommended that the analysis be repeated several times and the average value used as the final result.

(4) As already explained, the problem with the Monte Carlo method is that only a few of the random numbers are generated in the tail of the distribution function, which makes it difficult to estimate the failure probability from a small number of samples. Algorithms proposed by other researchers to improve the performance of Monte Carlo sampling, such as importance sampling techniques, particular patterns for random number generation [9, 10], or sampling in a certain range of numbers [11, 12], do not conflict with our algorithm and can therefore be used in conjunction with the proposed method.

Competing Interests

The authors declare that they have no competing interests.