Prediction of Ultimate Bearing Capacity of Cohesionless Soils Using Soft Computing Techniques
This study examines the potential of two soft computing techniques, namely, support vector machines (SVMs) and genetic programming (GP), to predict the ultimate bearing capacity of cohesionless soils beneath shallow foundations. The width of footing (B), depth of footing (D), length-to-width ratio (L/B) of footings, density of soil (γ or γ′), and angle of internal friction (φ) were used as model input parameters to predict the ultimate bearing capacity (qu). The results of the present models were compared with those obtained by three theoretical approaches, artificial neural networks (ANNs), and a fuzzy inference system (FIS) reported in the literature. The statistical evaluation of results shows that the presently applied paradigms are better than the theoretical approaches and compete well with the other soft computing techniques. The performance evaluation of the GP model results based on multiple error criteria confirms that GP is very efficient in accurately predicting the ultimate bearing capacity of cohesionless soils when compared with the other models considered in this study.
Design of foundations is performed based on two criteria: ultimate bearing capacity and limiting settlement. The ultimate bearing capacity is governed by the shear strength of the soil and is estimated by theories proposed by Terzaghi, Meyerhof, Hansen, Vesic, and others. However, the different bearing capacity formulae show a wide degree of variability when estimating the bearing capacity of footings on cohesionless soils. Moreover, these theories are validated through laboratory studies performed on small-scale models. Due to the "scale effect" for large-scale foundations on dense sand, the shearing strains show considerable variation along the slip line, and the average mobilized angle of shearing resistance along the slip line is smaller than the maximum value (φmax) obtained by plane shear tests. Thus, the use of φmax may lead to an overestimated bearing capacity value in calculations based on the different formulae [1–4].
In the recent past, the use of soft computing techniques has attracted many researchers, and these techniques have been applied quite successfully to many complex geotechnical engineering problems. Artificial neural networks (ANNs) are probably the most popular among these tools, having been applied to the prediction of the bearing capacity of cohesionless soils, bearing capacity of piles, settlement prediction, liquefaction, and slope stability problems. Support vector machines (SVMs) are a recent addition to the soft computing family that uses statistical learning theory as its working principle. SVM and its variants have been applied to geotechnical problems such as prediction of pile load capacity, settlement of foundations, slope stability, and liquefaction potential.
The evolutionary computational techniques may be a better alternative for solving regression problems, as they follow an optimization strategy with progressive improvement towards the global optimum. They start with possible trial solutions within a decision space, and the search is guided by genetic operators and the principle of "survival of the fittest". The Genetic Algorithm (GA), introduced by Holland and popularized by Goldberg, is one of the most popular and powerful evolutionary optimization techniques, but it cannot be used to evolve complex models such as equations. This limitation is overcome by Genetic Programming (GP), introduced by Koza, which works on the principle of GA but evolves expressions or computer programs instead of the strings used in GA. In this paper, SVM and GP are used as alternate paradigms to predict the bearing capacity of cohesionless soils under shallow foundations.
2. Support Vector Machine
Support vector machine (SVM) is a relatively recent addition to the family of soft computing techniques, evolved from the concept of statistical learning theory explored by Boser et al. SVM performs the regression by using a set of nonlinear functions that are defined in a high-dimensional space. SVM solves nonlinear regression problems by the principle of structural risk minimization (SRM), where the risk is measured using Vapnik's ε-insensitive loss function. SVM uses a risk function consisting of the empirical error and a regularization term. More details on SRM can be found in Cortes and Vapnik. Consider a set of input-output pairs {(x1, y1), (x2, y2), …, (xN, yN)} as the training dataset, where x ∈ Rⁿ is the input, y ∈ R is the output, Rⁿ is the n-dimensional vector space, and R is the one-dimensional vector space. In this problem, the width of footing (B), depth of footing (D), the length-to-width ratio (L/B) of footings, density of soil (γ or γ′), and angle of internal friction (φ) were used as model input parameters to predict the ultimate bearing capacity (qu). Hence, for this problem, n = 5 and y = qu.
The intention of SVM is to fit a function f(x) that can approximately predict the output value when a new set of predictors (input variables) is supplied.
The ε-insensitive loss function can be described as follows:

Lε(y, f(x)) = 0, if |y − f(x)| ≤ ε,
Lε(y, f(x)) = |y − f(x)| − ε, otherwise.

This defines an ε-tube so that if the predicted value is within the tube, the loss is zero; otherwise the loss is equal to the absolute value of the deviation minus ε. This concept is depicted in Figure 1.
SVM attempts to find a function f(x) whose deviation from the actual output is at most ε for all the training data and which, at the same time, is as flat as possible.
Consider a linear function of the form

f(x) = w · x + b,

where w is an adjustable weight vector and b is the scalar threshold. Flatness means the search for a small value of w. It can be represented as a minimization problem with an objective function comprising the Euclidean norm as follows:

minimize (1/2)‖w‖².

Some allowance for errors may also be introduced. Two slack parameters ξ and ξ* are introduced to penalize the samples with error more than ε, thus eliminating the infeasible constraints of the optimization problem. The modified formulation takes the following form:

minimize (1/2)‖w‖² + C Σᵢ (ξᵢ + ξᵢ*)
subject to yᵢ − (w · xᵢ + b) ≤ ε + ξᵢ, (w · xᵢ + b) − yᵢ ≤ ε + ξᵢ*, ξᵢ, ξᵢ* ≥ 0.

The constant C determines the tradeoff between the flatness of f and the amount up to which deviations larger than ε are tolerated. The above optimization problem is solved by Vapnik using the Lagrange multiplier method. The solution is given by

f(x) = Σᵢ (αᵢ − αᵢ*)(xᵢ · x) + b,

where αᵢ and αᵢ* are the Lagrange multipliers, the xᵢ with nonzero multipliers are known as support vectors, and the sum runs over the number of support vectors.
Some Lagrange multipliers (αᵢ, αᵢ*) will be zero, which implies that these training samples are irrelevant to the final solution (known as sparseness of the solution). The training objects with nonzero Lagrange multipliers are called support vectors. When linear regression is not appropriate, the input data have to be mapped into a high-dimensional feature space through a nonlinear mapping, and linear regression is performed in that feature space. A kernel function is used to transform the nonlinear data from the input space to the feature space, so that linear fitting in the new space is equivalent to nonlinear fitting in the original space:

f(x) = Σᵢ (αᵢ − αᵢ*) K(xᵢ, x) + b,

where K(xᵢ, xⱼ) is the kernel function, xᵢ and xⱼ are inputs, and K(xᵢ, xⱼ) represents the dot product in the high-dimensional feature space.
The functions which satisfy Mercer's theorem can be used as kernels for fitting the data. Polynomial functions, the radial basis function (RBF), and splines are the most commonly used kernel functions for data fitting using SVM. The mathematical forms of some popular kernel functions can be found in the literature.
3. Genetic Programming
Genetic Programming (GP), introduced by Koza, is an automatic programming technique for evolving computer programs that solve, or approximately solve, problems. GP is basically an optimization paradigm that can also be effectively applied to genetic symbolic regression (GSR). GSR involves finding a mathematical expression, in symbolic form, relating finite values of a set of independent variables (x) and a set of dependent variables (y). GP works on Darwin's theory of natural selection in evolution. Here, a population is progressively improved by selectively discarding the not-so-fit individuals and breeding new children to form better populations. Like other evolutionary algorithms, the solution starts with a random population of individuals (equations or computer programs). Each possible solution can be visualized as a "parse tree" comprising a terminal set (input variables) and functions (general operators such as +, −, *, /, and logarithmic or trigonometric functions). The "fitness" is a measure of how closely a trial solution solves the problem; the objective function, the minimization of error between estimated and observed values, is the fitness function. The solution sets in a population associated with the "best fit" individuals are reproduced more often than the less fit solution sets. GP iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operators such as reproduction, mutation, and crossover. The different genetic operations are described in detail in the literature. The basic procedure of GP is presented as a flow chart in Figure 3.
In the recent past, GP has been effectively applied to solve a wide range of geotechnical engineering problems [20–22]. GP can evolve an explicit equation, or an equivalent computer program, relating the input and output variables, which is a more understandable depiction of the cause-effect process. Some literature suggests that the program-based GP approach (i.e., GP algorithms that produce a computer program for estimating the predictand value for a given set of predictors) can perform as well as an equation-based approach and other soft computing tools like ANN [23–26]. A program-based GP approach is adopted for the present study.
4. Model Development and Results
The primary step in model development for the estimation of the bearing capacity of cohesionless soils underneath shallow foundations is the identification of the parameters that affect the bearing capacity. The basic form of the bearing capacity equation for a cohesionless soil (c = 0) is

qu = γ D Nq sq dq + 0.5 γ B Nγ sγ dγ,

where B is the width of the foundation, D is the depth of the foundation, γ is the unit weight of sand, Nq and Nγ are the bearing capacity factors, sq and sγ are the shape factors, and dq and dγ are the depth factors. These factors primarily depend on the angle of shearing resistance, the unit weight of the sand, and the geometry of the foundation.
The main factors affecting the bearing capacity of a footing are its width (least lateral dimension, B), length (L), shape (square, rectangular, or circular), and depth of embedment (D). Of all the physical properties of the foundation, the depth has the greatest effect on the bearing capacity. There are other factors, such as the compressibility and thickness of the soil layer beneath the foundation, that contribute to a lesser degree. The effect of compressibility is small, except for loose densities, and is generally less important in bearing capacity computation. Moreover, there are insufficient data to consider compressibility as well as the thickness of the soil stratum. Therefore, they are not considered in this study.
The data used in the present study have been adopted from Padmini et al. The five input parameters used for model development are the width of footing (B), depth of footing (D), footing geometry (L/B), unit weight of sand (γ), and angle of shearing resistance (φ). The ultimate bearing capacity (qu) is the single output. The data thus compiled comprise a total of 97 datasets, consisting of load test results for square, rectangular, and strip footings of different sizes tested in sand beds of various densities. Out of the 97 datasets, 78 are used for training and 19 for validation in all the experiments considered in this study. The data division is done in such a way that the same 19 datasets used by Padmini et al. are kept as the validation dataset, to enable a comparison of the results of the present study with those obtained by the ANN and FIS models of Padmini et al.
4.2. Development of SVM Model
The data mining software WEKA 3.6.1, developed by Witten and Frank, is used for developing the SVM models. In this study an ε-variant of SVM (ε-SVM) is used for support vector regression, and the loss function parameter (ε) is fixed at 0.001. Initially, a polynomial kernel of degree (d) 2 is used to fit a nonlinear model. The selection of the regularization parameter C and the kernel-specific parameters (d and γ for the polynomial and RBF kernels, resp.) may influence the results. A large value of C indicates that the objective is only to minimize the empirical risk, which makes the learning machine more complex. On the other hand, a smaller C may cause learning errors and poor approximation.
A trial-and-error approach is followed to find the optimal value of C for the model with the polynomial kernel. A C value of 100 is found to give satisfactory performance. A radial basis function (RBF) kernel is then used to build a nonlinear SVM model, and a suitable combination of the control parameters C and γ gives very good training performance. The plots between the observed and predicted values for the training dataset are shown in Figure 4 (polynomial kernel) and Figure 5 (RBF kernel). These plots indicate that the models are well trained.
4.3. Development of GP Model
The genetic programming software DISCIPULUS is used for developing the GP model. The models are created in the form of "evolved" computer programs, as GP uses Darwinian natural selection to create them. Using this model, the output for statistically similar input data can be predicted with high accuracy. The initial control parameters used for the problem are a population size of 500, a crossover probability of 0.95, and a mutation probability of 0.5. The basic arithmetic functions (addition, subtraction, multiplication, and division: +, −, *, /) constitute the function set. The fitness function is the root-mean-square error between the measured and predicted values of the ultimate bearing capacity. The best program generated by the GP software for predicting the ultimate bearing capacity of cohesionless soils is given in the appendix. The plot between the observed and predicted values for the training dataset is shown in Figure 6. This plot indicates that the model is well trained.
5. Results and Discussions
The efficiency of the developed models is analyzed by different statistical performance evaluation criteria such as the correlation coefficient (R), coefficient of efficiency (E), root-mean-square error (RMSE), mean bias error (MBE), and mean absolute relative error (MARE). The equations of the different performance evaluation measures are presented in Table 1, in which o stands for the observed output value, p represents the computed output value, overbars denote mean values, and N represents the number of data points. The different performance evaluation criteria estimated for the training dataset are presented in Table 2. The predictions for the testing dataset using the different models are presented in Table 3, and the performance evaluation for these predictions is presented in Table 4. However, it is to be noted that the ANN and FIS results presented in Table 4 are deduced from the relative error (RE) values reported by Padmini et al. From Table 4 it can be inferred that the correlation coefficient and coefficient of efficiency are the highest (0.997 and 0.996) and the error criteria RMSE, MBE, and MARE are the least (44.967, 4.01, and 7.69) for the GP-based modeling.
Further, the scatter plots between the observed and predicted values of the ultimate bearing capacity for the SVM models are presented in Figure 7 (polynomial kernel) and Figure 8 (RBF kernel), with 5% error bar lines plotted alongside. Such a plot can be used to indicate the range of standard deviation and to determine whether the differences are statistically significant. A similar plot for the predictions of the GP model is presented in Figure 9. A perusal of the plots shows that, for the GP-based predictions, all points lie within the specified confidence interval of 95%. Also, from Table 4 it is seen that the R and E values are close to unity and the different error criteria are much smaller for all of the applied soft computing tools when compared with the theoretical models. Thus, it can be inferred that all the soft computing methods outperform the theoretical approaches in the prediction of bearing capacity.
A statistical evaluation of the predictions by the different soft computing models for the testing dataset is performed and presented in Table 5. The standard deviation, average deviation, and coefficient of variation of the GP model results (607.91, 454.59, and 1.059) show close agreement with those of the observed values (600.02, 459.48, and 1.052), followed by those of the SVM (RBF) model. This confirms the robustness of the newly applied paradigms.
The different performance evaluation measures of the SVM-based modeling (Tables 4 and 5) show that the performance of the RBF-based SVM is comparable to the ANN and FIS results. Moreover, SVM involves only a small number of control parameters (such as C, ε, and the kernel parameters), whereas ANN involves a large number of such parameters, and finding their optimal combination is a tedious process. Thus, the SVM approach is quite simple to implement. Further, the performance evaluation of the GP-based results (Tables 4 and 5) shows that R, E, and the different error criteria are better for the GP model when compared with the theoretical methods, the SVM models, and the interpreted results of ANN and FIS. Thus, GP is proven to be a reliable alternative soft computing technique for the prediction of the ultimate bearing capacity of shallow foundations on cohesionless soil.
In this paper the application of two relatively recent soft computing techniques, SVM and GP, is investigated for the prediction of the ultimate bearing capacity of cohesionless soils beneath shallow foundations. The SVM results are competitive and demand the optimal selection of only a few control parameters when compared with ANN. Performance evaluation based on multiple error criteria shows that the errors are the least and the correlation coefficient (R) and coefficient of efficiency (E) are the highest for the GP-based modeling compared with the SVM, ANN, FIS, and the different theoretical models considered in this study. The GP-based modeling is found to be superior in terms of quality, and it gives the output in the form of computer programs, which enables the user to apply them to a new set of input data to predict the ultimate bearing capacity. Thus, GP can be recommended as a robust soft computing paradigm to predict the ultimate bearing capacity of soil.
The C++ program to predict the ultimate bearing capacity of cohesionless soils is given here. The input variables v[…] represent, in order, the width of footing (B), the depth of footing (D), the length-to-width ratio (L/B), the field density (γ), and the angle of shearing resistance (φ). The variables f[…] are temporary computation variables that the GP software creates; the output of the program is the value remaining in the first of these variables after the program executes. This program needs to be run in the DISCIPULUS software environment to get the predictand value for a new set of predictors.
B. Best Program
#define TRUNC(x)(((x)>=0) ? floor(x): ceil(x))
#define C_FPREM (_finite(f/f) ? f-(TRUNC(f/f)
#define C_F2XM1 (((fabs(f)<=1) && (!_isnan(f))) ? (pow(2,f)-1): ((!_finite(f) && !_isnan(f) && (f<0)) ? -1: f))
float DiscipulusCFunction(float v)
long double f;
long double tmp = 0;
int cflag = 0;
L14: cflag=(f < f);
L17: tmp=f; f=f; f=tmp;
L26: if (cflag) f = f;
L35: cflag=(f < f);
L50: tmp=f; f=f; f=tmp;
L59: if (cflag) f = f;
L74: if (!cflag) f = f;
L89: if (!cflag) f = f;
L93: if (!cflag) f = f;
if (!_finite(f)) f=0;
This paper is a part of a research work carried out at the Department of Civil Engineering, TKM College of Engineering, Kollam, Kerala, India, in 2010. The authors thank the Department of Civil Engineering, TKM College of Engineering, Kollam, for providing all necessary help. They also thank the anonymous reviewers who helped to improve the quality of the paper.
K. Terzaghi, Theoretical Soil Mechanics, John Wiley & Sons, New York, NY, USA, 1943.
G. G. Meyerhof, “Some recent research on the bearing capacity of foundations,” Canadian Geotechnical Journal, vol. 1, no. 1, pp. 16–26, 1963.
J. B. Hansen, “A general formula for bearing capacity,” Danish Geotechnical Institute Bulletin, vol. 11, 1961.
A. S. Vesic, “Analysis of ultimate loads of shallow foundations,” Journal of Soil Mechanics and Foundation Division, vol. 99, no. 1, pp. 45–73, 1973.
M. A. Shahin, H. R. Maier, and M. B. Jaksa, “Artificial neural network applications in geotechnical engineering,” Australian Geomechanics, vol. 36, no. 1, pp. 49–62, 2001.
J. H. Holland, Adaptation in Natural and Artificial System, Ann Arbour Science Press, Ann Arbor, Mich, USA, 1975.
D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in 5th Annual ACM Workshop on COLT, D. Haussler, Ed., pp. 144–152, ACM Press, Pittsburgh, Pa, USA, 1992.
V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
S. T. Khu, S. Y. Liong, V. Babovic, H. Madsen, and N. Muttil, “Genetic programming and its application in real-time runoff forecasting,” Journal of the American Water Resources Association, vol. 37, no. 2, pp. 439–451, 2001.
I. H. Witten and E. Frank, Data Mining, Morgan Kaufmann, San Francisco, Calif, USA, 2000.
F. D. Francone, Discipulus Owner’s Manual Version 3.0 DRAFT, Machine Learning Technologies, Littleton, Colo, USA, 1998.