International Scholarly Research Notices

Research Article | Open Access

Volume 2012 | Article ID 628496 | 10 pages
Prediction of Ultimate Bearing Capacity of Cohesionless Soils Using Soft Computing Techniques

Academic Editor: M. Abbod
Received: 31 Jul 2011
Accepted: 07 Sep 2011
Published: 05 Dec 2011


This study examines the potential of two soft computing techniques, namely, support vector machines (SVMs) and genetic programming (GP), to predict the ultimate bearing capacity of cohesionless soils beneath shallow foundations. The width of footing (B), depth of footing (D), length-to-width ratio (L/B) of footings, density of soil (γ or γ'), angle of internal friction (Φ), and so forth were used as model input parameters to predict the ultimate bearing capacity (q_u). The results of the present models were compared with those obtained by three theoretical approaches, artificial neural networks (ANNs), and a fuzzy inference system (FIS) reported in the literature. The statistical evaluation of the results shows that the presently applied paradigms are better than the theoretical approaches and compete well with the other soft computing techniques. The performance evaluation of the GP model results based on multiple error criteria confirms that GP is very efficient in accurately predicting the ultimate bearing capacity of cohesionless soils when compared with the other models considered in this study.

1. Introduction

Design of foundations is performed based on two criteria: ultimate bearing capacity and limiting settlement. The ultimate bearing capacity is governed by the shear strength of the soil and is estimated by the theories proposed by Terzaghi [1], Meyerhof [2], Hansen [3], Vesic [4], and others. However, the different bearing capacity formulae show a wide degree of variability when estimating the bearing capacity of footings on dense sand and other cohesionless soils. Moreover, these bearing capacity theories are validated through laboratory studies performed on small-scale models. Due to the "scale effect" for large-scale foundations on dense sand, the shearing strains show considerable variation along the slip line, and the average mobilized angle of shearing resistance along the slip line is smaller than the maximum value (Φ_max) obtained by plane shear tests [5]. Thus, the use of Φ_max may lead to an overestimated bearing capacity value in calculations based on the different formulae [1–4].

In the recent past, soft computing techniques have attracted many researchers and have been applied quite successfully to solve many complex geotechnical engineering problems. Artificial neural networks (ANNs) are probably the most popular among these tools and have been applied to the prediction of the bearing capacity of cohesionless soils [5], the bearing capacity of piles, settlement prediction, liquefaction, and slope stability problems [6]. Support vector machines (SVMs) are a more recent addition to the soft computing family and use statistical learning theory as their working principle. SVM and its variants have been applied to geotechnical problems such as prediction of pile load capacity [7], settlement of foundations [8], slope stability [9], and liquefaction potential [10].

Evolutionary computational techniques may be a better alternative for solving regression problems, as they follow an optimization strategy with progressive improvement towards the global optimum. They start with possible trial solutions within a decision space, and the search is guided by genetic operators and the principle of "survival of the fittest" [11]. The genetic algorithm (GA), introduced by Holland [11] and explored further by Goldberg [12], is one of the most popular and powerful evolutionary optimization techniques, but it cannot be used to evolve complex models such as equations. This limitation is overcome by genetic programming (GP), introduced by Koza [13], which works on the principle of GA but evolves expressions or computer programs instead of the strings used in GA. In this paper, SVM and GP are used as alternative paradigms to predict the bearing capacity of cohesionless soils under shallow foundations.

2. Support Vector Machine

Support vector machine (SVM) is a relatively recent addition to the family of soft computing techniques, evolved from the concept of statistical learning theory explored by Boser et al. [14]. SVM performs regression by using a set of nonlinear functions defined in a high-dimensional space. SVM solves nonlinear regression problems by the principle of structural risk minimization (SRM), where the risk is measured using Vapnik's ε-insensitive loss function [15]. SVM uses a risk function consisting of the empirical error and a regularization term. More details on SRM can be found in Cortes and Vapnik [16]. Consider a set of input-output pairs as the training dataset, [(x1, y1), (x2, y2), …, (xl, yl)], x ∈ R^N, y ∈ r, where x is the input, y is the output, R^N is the N-dimensional vector space, and r is the one-dimensional vector space. In this problem, the width of footing (B), depth of footing (D), length-to-width ratio (L/B) of footings, density of soil (γ or γ'), angle of internal friction (Φ), and so forth were used as model input parameters to predict the ultimate bearing capacity (q_u). Hence, for this problem, x = [B, D, (L/B), γ, Φ] and y = [q_u].

The intention of SVM is to fit a function that can approximately predict the output value when supplied with a new set of predictors (input variables).

The ε-insensitive loss function can be described as follows:

L_ε(y) = 0, for |f(x) − y| ≤ ε, (1)

otherwise,

L_ε(y) = |f(x) − y| − ε. (2)

This defines an ε-tube, so that if the predicted value is within the tube the loss is zero; otherwise the loss is equal to the absolute value of the deviation minus ε. This concept is depicted in Figure 1.
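As a small illustration, the loss in (1) and (2) can be written as a single function; this is a minimal sketch, and the function name and any test values are illustrative rather than part of the original study:

```cpp
#include <cmath>

// Vapnik's epsilon-insensitive loss: zero inside the epsilon-tube,
// linear in the deviation beyond it.
double epsilon_loss(double y, double fx, double eps) {
    double dev = std::fabs(fx - y);       // |f(x) - y|
    return dev <= eps ? 0.0 : dev - eps;  // (1) inside the tube, (2) outside
}
```

A prediction within ε of the observed value thus contributes nothing to the empirical risk.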

SVM attempts to find a function f(x) that deviates from the actual output by at most ε and, at the same time, is as flat as possible.

Consider a linear function of the form

f(x) = (w · x) + b,  w ∈ R^N, b ∈ r, (3)

where w is an adjustable weight vector and b is the scalar threshold. Flatness means the search for a small value of w. It can be represented as a minimization problem with an objective function comprising the Euclidean norm, as follows:

Minimize: (1/2)||w||²
Subject to: y_i − [(w · x_i) + b] ≤ ε,  i = 1, 2, 3, …, l,
            [(w · x_i) + b] − y_i ≤ ε,  i = 1, 2, 3, …, l. (4)

Some allowance for errors beyond ε may also be introduced. Two slack parameters ξ and ξ* are introduced to penalize the samples with error greater than ε; thus the infeasible constraints of the optimization problem are eliminated. The modified formulation takes the following form:

Minimize: (1/2)||w||² + C Σ_{i=1}^{l} (ξ_i + ξ_i*)
Subject to: y_i − [(w · x_i) + b] ≤ ε + ξ_i,  i = 1, 2, 3, …, l,
            [(w · x_i) + b] − y_i ≤ ε + ξ_i*,  i = 1, 2, 3, …, l,
            ξ_i ≥ 0, ξ_i* ≥ 0,  i = 1, 2, 3, …, l. (5)

The constant 0 < C < ∞ determines the tradeoff between the flatness of f(x) and the amount up to which deviations larger than ε are tolerated [17]. The above optimization problem is solved by Vapnik [15] using the Lagrange multiplier method. The solution is given by

f(x) = Σ_{i=1}^{M} (α_i − α_i*)(x_i · x) + b,  where b = −(1/2) w · (x_r + x_s), (6)

where x_s and x_r are support vectors and M is the number of support vectors.

Some Lagrange multipliers (α_i, α_i*) will be zero, which implies that these training samples are irrelevant to the final solution (known as the sparseness of the solution). The training objects with nonzero Lagrange multipliers are called support vectors. When linear regression is not appropriate, the input data have to be mapped into a high-dimensional feature space through a nonlinear mapping, and linear regression is then performed in that feature space [14]. A kernel function K is used to transform the nonlinear data from the input space to the feature space, where the fit is linear; linear fitting in the new space is then equivalent to nonlinear fitting in the original space:

K(x_i, x_j) = φ(x_i) · φ(x_j), (7)

where K is the kernel function, x_i and x_j are inputs, and φ(x_i) · φ(x_j) is the dot product in the high-dimensional space.

Thus, (6) can be replaced by

f(x) = Σ_{i=1}^{M} (α_i − α_i*) K(x_i, x) + b. (8)

The concept of nonlinear mapping is depicted in Figure 2.
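To make the kernel form of (8) concrete, the following sketch evaluates f(x) with an RBF kernel over a set of support vectors. The support vectors, coefficients (α_i − α_i*), and bias passed in any call are hypothetical, not values from the trained model of this study:

```cpp
#include <cmath>
#include <vector>

// RBF kernel K(u, v) = exp(-||u - v||^2 / (2 * sigma^2))
double rbf_kernel(const std::vector<double>& u,
                  const std::vector<double>& v, double sigma) {
    double d2 = 0.0;
    for (std::size_t k = 0; k < u.size(); ++k)
        d2 += (u[k] - v[k]) * (u[k] - v[k]);
    return std::exp(-d2 / (2.0 * sigma * sigma));
}

// Equation (8): f(x) = sum_i (alpha_i - alpha_i*) K(x_i, x) + b,
// where the sum runs over the M support vectors.
double svr_predict(const std::vector<std::vector<double>>& support_vectors,
                   const std::vector<double>& coeff,  // alpha_i - alpha_i*
                   double b, double sigma,
                   const std::vector<double>& x) {
    double f = b;
    for (std::size_t i = 0; i < support_vectors.size(); ++i)
        f += coeff[i] * rbf_kernel(support_vectors[i], x, sigma);
    return f;
}
```

Only the support vectors (nonzero α_i − α_i*) enter this sum, which is why the sparse solution is cheap to evaluate.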

Functions that satisfy Mercer's theorem can be used as kernels for fitting the data [14]. Polynomial functions, the radial basis function (RBF), and splines are the most commonly used kernel functions for data fitting with SVM. The mathematical forms of some popular kernel functions can be found in [18].

3. Genetic Programming

Genetic programming (GP), introduced by Koza [13], is an automatic programming technique for evolving computer programs that solve, or approximately solve, problems. GP is basically an optimization paradigm that can also be effectively applied to genetic symbolic regression (GSR). GSR involves finding a mathematical expression, in symbolic form, relating finite values of a set of independent variables (x_i) to a set of dependent variables (y_i) [19]. GP works on Darwin's theory of natural selection in evolution: a population is progressively improved by selectively discarding the not-so-fit individuals and breeding new children to form better populations. Like other evolutionary algorithms, the solution starts with a random population of individuals (equations or computer programs). Each possible solution can be visualized as a "parse tree" comprising a terminal set (input variables) and functions (general operators such as +, −, *, /, and logarithmic or trigonometric functions). The "fitness" is a measure of how closely a trial solution solves the problem; here the objective function, the minimization of error between estimated and observed values, serves as the fitness function. The solution sets in a population associated with the "best fit" individuals are reproduced more often than the less fit solution sets. GP iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operators such as reproduction, mutation, and crossover. The different genetic operations are described in detail in [13]. The basic procedure of GP is presented as a flow chart in Figure 3.
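As an illustration of the parse-tree representation described above, the sketch below evaluates a tree built from a terminal set and arithmetic operators. This is a minimal, hypothetical structure for illustration only; DISCIPULUS, used later in this study, evolves linear instruction sequences rather than explicit trees, as the program in the appendix shows:

```cpp
#include <memory>

// One node of a GP parse tree: an operator over subtrees,
// a terminal variable 'x', or a numeric constant 'c'.
struct Node {
    char op = 'c';        // '+', '-', '*', '/', 'x', or 'c'
    double value = 0.0;   // used when op == 'c'
    std::unique_ptr<Node> left, right;

    double eval(double x) const {
        switch (op) {
            case 'x': return x;
            case 'c': return value;
            case '+': return left->eval(x) + right->eval(x);
            case '-': return left->eval(x) - right->eval(x);
            case '*': return left->eval(x) * right->eval(x);
            default:  return left->eval(x) / right->eval(x);  // '/'
        }
    }
};

// Helper to build nodes (a real GP system would create,
// cross over, and mutate these trees randomly).
std::unique_ptr<Node> make_node(char op, double value = 0.0) {
    auto n = std::make_unique<Node>();
    n->op = op;
    n->value = value;
    return n;
}
```

Fitness evaluation then amounts to calling eval on each candidate tree over the training data and scoring the prediction error.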

In the recent past, GP has been effectively applied to solve a wide range of geotechnical engineering problems [20–22]. GP can evolve an explicit equation, or an equivalent computer program, relating the input and output variables, which is a more understandable depiction of the cause-effect process. Some literature suggests that the program-based GP approach (i.e., GP algorithms that produce a computer program for estimating the predictand value for a given set of predictors) can perform as well as an equation-based approach and other soft computing tools such as ANN [23–26]. A program-based GP approach is adopted for the present study.

4. Model Development and Results

The primary step in model development for the estimation of the bearing capacity of cohesionless soils underneath shallow foundations is the identification of the parameters that affect the bearing capacity. The basic form of the equation for the bearing capacity of cohesionless soil is [5]

q_u = γ D N_q S_q d_q + 0.5 B γ N_γ S_γ d_γ, (9)

where B is the width of the foundation, D is the depth of the foundation, γ is the unit weight of sand, N_q and N_γ are the bearing capacity factors, S_q and S_γ are the shape factors, and d_q and d_γ are the depth factors. These factors primarily depend on the angle of shearing resistance, the unit weight of the sand, and the geometry of the foundation.
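For illustration, (9) translates directly into code. The factor values passed in any call are hypothetical; in practice N_q, N_γ, the shape factors, and the depth factors come from bearing capacity charts or formulae such as those of [1–4]:

```cpp
// Equation (9): q_u = gamma*D*Nq*Sq*dq + 0.5*B*gamma*Ngamma*Sgamma*dgamma
// With B and D in m and gamma in kN/m^3, q_u is obtained in kPa.
double ultimate_bearing_capacity(double B, double D, double gamma,
                                 double Nq, double Ngamma,
                                 double Sq, double Sgamma,
                                 double dq, double dgamma) {
    double surcharge_term = gamma * D * Nq * Sq * dq;               // depth contribution
    double width_term = 0.5 * B * gamma * Ngamma * Sgamma * dgamma; // width contribution
    return surcharge_term + width_term;
}
```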

The main factors affecting the bearing capacity of a footing are its width (least lateral dimension, B), length (L), shape (square, rectangular, or circular), and depth of embedment (D). Of all the physical properties of the foundation, the depth of embedment has the greatest effect on the bearing capacity. Some other factors, such as the compressibility and the thickness of the soil layer beneath the foundation, contribute to a lesser degree [5]. The effect of compressibility is small, except for loose densities, and is generally less important in bearing capacity computation [5]. Moreover, there are insufficient data to consider compressibility as well as the thickness of the soil stratum; therefore, they are not considered in this study.

4.1. Database

The data used in the present study have been adopted from Padmini et al. [5]. The five input parameters used for model development are the width of footing (B), depth of footing (D), footing geometry (L/B), unit weight of sand (γ), and angle of shearing resistance (Φ). The ultimate bearing capacity (q_u) is the single output. The compiled data comprise a total of 97 datasets, consisting of load test results for square, rectangular, and strip footings of different sizes tested in sand beds of various densities. Of the 97 datasets, 78 are used for training and 19 for validation in all the experiments considered in this study. The data division is done in such a way that the same 19 datasets used by Padmini et al. [5] are kept as the validation dataset, to enable comparison of the results of the present study with those obtained by ANN and FIS in Padmini et al. [5].

4.2. Development of SVM Model

The data mining software WEKA 3.6.1, developed by Witten and Frank [27], is used for developing the SVM models. In this study the ε-variant of SVM (ε-SVM) is used for support vector regression, and the loss function parameter ε is fixed at 0.001. Initially, a polynomial kernel of degree (d) 2 is used to fit a nonlinear model. The selection of the regularization parameter C and the kernel-specific parameters (d and σ for the polynomial and RBF kernels, resp.) may influence the results. A large value of C indicates that the objective is only to minimize the empirical risk, which makes the learning machine more complex. On the other hand, a smaller C may cause learning errors with poor approximation [28].

A trial and error approach is followed to find the optimal value of C for the model with the polynomial kernel. A C value of 100 is found to give satisfactory performance. Then a radial basis function (RBF) kernel is used to fit a second nonlinear SVM model. The combination of control parameters C = 250 and σ = 3 gives very good training performance. The plots of observed against predicted values for the training dataset are shown in Figure 4 (polynomial kernel) and Figure 5 (RBF kernel). These plots indicate that the models are well trained.

4.3. Development of GP Model

The genetic programming software DISCIPULUS [29] is used for developing the GP model. The models are created in the form of "evolved" computer programs, as GP uses Darwinian natural selection to create them. Using such a model, the output for statistically similar input data can be predicted with high accuracy. The initial control parameters used for the problem are a population size of 500, a crossover probability of 0.95, and a mutation probability of 0.5. The basic arithmetic functions (addition, subtraction, multiplication, and division: +, −, *, /) constitute the function set. The fitness function is the root-mean-square error between the measured and predicted values of the ultimate bearing capacity. The best program generated by the GP software for predicting the ultimate bearing capacity of cohesionless soils is given in the appendix. The plot of observed against predicted values for the training dataset is shown in Figure 6. This plot indicates that the model is well trained.

5. Results and Discussions

The efficiency of the developed models is analyzed by different statistical performance evaluation criteria, such as the correlation coefficient (R), coefficient of efficiency (E), root-mean-square error (RMSE), mean bias error (MBE), and mean absolute relative error (MARE). The equations of the different performance evaluation measures are presented in Table 1, in which y_o stands for the observed output value, y_c represents the computed output value, ȳ_o is the mean of the observed values, ȳ_c is the mean of the computed values, and n is the number of data points. The performance evaluation criteria estimated for the training dataset are presented in Table 2. The predictions for the testing dataset using the different models are presented in Table 3, and the performance evaluation for these predictions is presented in Table 4. It is to be noted, however, that the ANN and FIS results presented in Table 4 are deduced from the relative error (RE) values reported by Padmini et al. [5]. From Table 4 it can be inferred that the correlation coefficient and coefficient of efficiency are the highest (0.997 and 0.996) and the error criteria RMSE, MBE, and MARE are the least (44.967, 4.01, and 7.69) for the GP-based modeling.

Table 1: Evaluation criteria and equations.

Coefficient of correlation (R): R = Σ_{i=1}^{n} (y_oi − ȳ_o)(y_ci − ȳ_c) / [√(Σ_{i=1}^{n} (y_oi − ȳ_o)²) √(Σ_{i=1}^{n} (y_ci − ȳ_c)²)]
Coefficient of efficiency (E): E = 1 − Σ_{i=1}^{n} (y_oi − y_ci)² / Σ_{i=1}^{n} (y_oi − ȳ_o)²
Root-mean-square error (RMSE): RMSE = √(Σ_{i=1}^{n} (y_oi − y_ci)² / n)
Mean bias error (MBE): MBE = (1/n) Σ_{i=1}^{n} (y_ci − y_oi)
Percentage relative error (RE): RE = [(y_o − y_c) / y_o] × 100
Percentage mean absolute relative error (MARE): MARE = (1/n) Σ_{i=1}^{n} |RE|
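The error criteria in Table 1 are straightforward to compute; a minimal sketch (the function names are illustrative) is:

```cpp
#include <cmath>
#include <vector>

// Root-mean-square error: RMSE = sqrt(sum (yo_i - yc_i)^2 / n)
double rmse(const std::vector<double>& yo, const std::vector<double>& yc) {
    double s = 0.0;
    for (std::size_t i = 0; i < yo.size(); ++i)
        s += (yo[i] - yc[i]) * (yo[i] - yc[i]);
    return std::sqrt(s / yo.size());
}

// Mean bias error: MBE = (1/n) sum (yc_i - yo_i)
double mbe(const std::vector<double>& yo, const std::vector<double>& yc) {
    double s = 0.0;
    for (std::size_t i = 0; i < yo.size(); ++i)
        s += yc[i] - yo[i];
    return s / yo.size();
}

// Percentage mean absolute relative error: MARE = (1/n) sum |RE_i|,
// where RE_i = (yo_i - yc_i) / yo_i * 100
double mare(const std::vector<double>& yo, const std::vector<double>& yc) {
    double s = 0.0;
    for (std::size_t i = 0; i < yo.size(); ++i)
        s += std::fabs((yo[i] - yc[i]) / yo[i]) * 100.0;
    return s / yo.size();
}
```

Note that MBE keeps the sign of the bias (underprediction gives a negative value), whereas RMSE and MARE measure only magnitude.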

Table 2: Performance evaluation for the training dataset.

Performance index | Meyerhof [2] | Hansen [3] | Vesic [4] | SVM* (poly kernel, C = 100; d = 2) | SVM* (RBF kernel, C = 250; σ = 3) | GP*

RMSE (kPa) | 260.0015 | 282.8295 | 268.7726 | 123.7923 | 36.8280 | 65.0650
MBE (kPa) | −78.3587 | −95.8701 | −103.2620 | −9.5919 | −3.5807 | −0.8618

* Present study.

Table 3: Observed bearing capacity (kPa) for the testing dataset and predictions by Meyerhof [2], Hansen [3], Vesic [4], ANN, FIS, SVM (poly kernel, C = 100, d = 2; RBF kernel, C = 250, σ = 3), and GP.

Table 4: Performance evaluation for the testing dataset.

Performance index | Meyerhof [2] | Hansen [3] | Vesic [4] | ANN [5] | FIS [5] | SVM* (poly kernel, C = 100; d = 2) | SVM* (RBF kernel, C = 250; σ = 3) | GP*

RMSE (kPa) | 269.947 | 287.099 | 302.269 | 62.620 | 98.002 | 125.087 | 130.102 | 44.967
MBE (kPa) | −127.724 | −139.611 | −158.552 | −21.891 | 3.923 | 4.11 | 4.75 | 4.019

* Present study.

Further, scatter plots of observed against predicted values of the ultimate bearing capacity for the SVM models are presented in Figure 7 (polynomial kernel) and Figure 8 (RBF kernel), and a similar plot for the predictions of the GP model is presented in Figure 9. The 5% error bar lines are plotted along with these scatter plots; such a plot can be used to indicate the range of standard deviation and to determine whether the differences are statistically significant [8]. A perusal of the plots shows that, for the GP-based predictions, all points lie within the specified confidence interval of 95%. Also, from Table 4 it is seen that the R and E values are closer to unity and the different error criteria are much smaller for all of the applied soft computing tools when compared with the theoretical models. Thus, it can be inferred that all the soft computing methods outperform the theoretical approaches in the prediction of bearing capacity.

A statistical evaluation of the predictions by the different soft computing models for the testing dataset is presented in Table 5. The standard deviation, average deviation, and coefficient of variation of the GP model results (607.91, 454.59, and 1.059) show close agreement with those of the observed values (600.02, 459.48, and 1.052), followed by those of the SVM (RBF) model. This confirms the robustness of the newly applied paradigms.

Table 5: Statistical evaluation of predictions for the testing dataset.

Statistical measure | Observed | ANN | FIS | SVM (Poly)* | SVM (RBF)* | GP*

Average deviation | 459.48 | 455.92 | 488.4 | 501.1 | 468.73 | 454.59
Standard deviation | 600.02 | 609.3 | 645.8 | 641.0 | 608.60 | 607.91
Coefficient of variation | 1.052 | 1.112 | 1.106 | 1.059 | 1.059 | 1.059

* Present study.

The different performance evaluation measures of the SVM-based modeling (Tables 4 and 5) show that the performance of the RBF-based SVM is comparable with the ANN and FIS results. Also, SVM involves only a small number of control parameters (such as C and σ), whereas ANN involves a large number of such parameters, and finding their optimal combination is a tedious process; the SVM approach is thus quite simple to implement. Further, the performance evaluation of the GP-based results (Tables 4 and 5) shows that R, E, and the different error criteria are better for the GP model than for the theoretical methods, the SVM models, and the interpreted results of ANN and FIS. Thus, GP proves to be a reliable alternative soft computing technique for the prediction of the ultimate bearing capacity of shallow foundations on cohesionless soil.

6. Conclusions

In this paper, the application of two relatively recent soft computing techniques, SVM and GP, is investigated for the prediction of the ultimate bearing capacity of cohesionless soils beneath shallow foundations. The SVM results are competitive and demand the optimal selection of only a few control parameters when compared with ANN. Performance evaluation based on multiple error criteria shows that the errors are the least, and the correlation coefficient (R) and coefficient of efficiency (E) the highest, for the GP-based modeling compared with the SVMs, ANN, FIS, and the different theoretical models considered in this study. The GP-based modeling is found to be superior in terms of quality, and it gives the output in the form of computer programs, which enables the user to apply them to a new set of input data to predict the ultimate bearing capacity. Thus, GP can be recommended as a robust soft computing paradigm to predict the ultimate bearing capacity of soil.


A. Note

The C++ program to predict the ultimate bearing capacity of cohesionless soils is given below. v[0] to v[4] represent the input parameters: width of footing (B), depth of footing (D), length-to-width ratio (L/B), field density (γ), and angle of shearing resistance (Φ). f[0], f[1], and so forth are temporary computation variables that the GP software creates. The output of the program is the value remaining in f[0] after the program executes. This program needs to be run in the DISCIPULUS software environment to obtain the predictand value for a new set of predictors.

B. Best Program

#include <math.h>
#include <float.h>  /* _finite, _isnan (MSVC runtime) */

#define TRUNC(x) (((x)>=0) ? floor(x) : ceil(x))
#define C_FPREM (_finite(f[0]/f[1]) ? f[0]-(TRUNC(f[0]/f[1])*f[1]) : f[0]/f[1])
#define C_F2XM1 (((fabs(f[0])<=1) && (!_isnan(f[0]))) ? (pow(2,f[0])-1) : ((!_finite(f[0]) && !_isnan(f[0]) && (f[0]<0)) ? -1 : f[0]))

float DiscipulusCFunction(float v[])
{
   long double f[8] = {0};  /* temporary registers, initialised to zero */
   long double tmp = 0;
   int cflag = 0;
   L0: f[0]/=-1.364008665084839f;
   L1: f[0]+=f[1];
   L2: f[0]=-f[0];
   L3: f[0]-=v[0];
   L4: f[0]+=v[4];
   L5: f[0]+=v[4];
   L6: f[0]=cos(f[0]);
   L7: f[0]+=f[0];
   L8: f[0]+=f[0];
   L9: f[0]+=f[0];
   L10: f[0]*=v[1];
   L11: f[0]+=v[4];
   L12: f[0]-=1.252994060516357f;
   L13: f[0]*=pow(2,TRUNC(f[1]));
   L14: cflag=(f[0] < f[1]);
   L15: f[0]=sqrt(f[0]);
   L16: f[0]*=0.2877938747406006f;
   L17: tmp=f[1]; f[1]=f[0]; f[0]=tmp;
   L18: f[0]-=v[3];
   L19: f[0]*=-0.494312047958374f;
   L20: f[0]*=0.7790718078613281f;
   L21: f[0]*=f[1];
   L22: f[0]=fabs(f[0]);
   L23: f[0]=cos(f[0]);
   L24: f[0]=-f[0];
   L25: f[0]=sqrt(f[0]);
   L26: if (cflag) f[0] = f[1];
   L27: f[0]+=v[4];
   L28: f[0]*=0.9955191612243652f;
   L29: f[0]*=0.4281637668609619f;
   L30: f[0]-=f[0];
   L31: f[0]-=v[3];
   L32: f[0]/=v[0];
   L33: f[0]-=0.9955191612243652f;
   L34: f[0]+=v[4];
   L35: cflag=(f[0] < f[1]);
   L36: f[0]-=f[1];
   L37: f[0]/=f[0];
   L38: f[0]*=pow(2,TRUNC(f[1]));
   L39: f[0]-=f[1];
   L40: f[0]=-f[0];
   L41: f[0]=fabs(f[0]);
   L42: f[0]=-f[0];
   L43: f[0]*=0.4281637668609619f;
   L44: f[0]*=f[0];
   L45: f[0]*=f[0];
   L46: f[0]/=f[0];
   L47: f[0]*=-0.7297487258911133f;
   L48: f[0]/=1.084159851074219f;
   L49: f[0]+=v[4];
   L50: tmp=f[0]; f[0]=f[0]; f[0]=tmp;
   L51: f[0]/=f[1];
   L52: f[0]+=0.7790718078613281f;
   L53: f[0]+=v[4];
   L54: f[0]-=f[1];
   L55: f[0]*=f[1];
   L56: f[0]*=0.4281637668609619f;
   L57: f[0]-=v[3];
   L58: f[0]-=-0.9486191272735596f;
   L59: if (cflag) f[0] = f[1];
   L60: f[0]/=v[2];
   L61: f[0]+=v[4];
   L62: f[0]-=v[2];
   L63: f[0]-=v[2];
   L64: f[0]-=v[2];
   L65: f[0]*=v[1];
   L66: f[0]+=v[4];
   L67: f[0]*=f[1];
   L68: f[0]*=0.4281637668609619f;
   L69: f[0]-=v[3];
   L70: f[1]*=f[0];
   L71: f[0]*=f[0];
   L72: f[0]-=f[1];
   L73: f[1]+=f[0];
   L74: if (!cflag) f[0] = f[1];
   L75: f[0]-=1.987620830535889f;
   L76: f[0]-=v[4];
   L77: f[0]-=1.987620830535889f;
   L78: f[0]-=v[4];
   L79: f[0]-=v[4];
   L80: f[0]-=0.7361507415771484f;
   L81: f[0]-=1.987620830535889f;
   L82: f[0]-=v[4];
   L83: f[0]-=1.987620830535889f;
   L84: f[0]-=v[4];
   L85: f[0]-=1.501374244689941f;
   L86: f[0]+=v[2];
   L87: f[0]-=v[4];
   L88: f[0]-=1.530829906463623f;
   L89: if (!cflag) f[0] = f[1];
   L90: f[0]+=v[3];
   L91: f[0]-=v[4];
   L92: f[0]+=-1.907608032226563f;
   L93: if (!cflag) f[0] = f[1];
   L94: f[0]+=v[3];
   L95: f[0]+=v[3];
   L96: f[0]+=v[3];
   L97: f[0]+=v[3];
   L98: f[0]+=v[3];
   L99: f[0]+=v[3];
   L100: f[0]+=v[3];
   L101: f[0]+=v[3];
   L102: f[0]+=v[3];
   L103: f[0]+=v[3];
   L104: f[0]+=v[3];
   L105: f[0]+=v[3];
   L106: f[0]+=v[3];
   L107: f[0]+=v[3];
   L108: f[0]+=v[2];
   L109: f[0]+=v[3];
   if (!_finite(f[0])) f[0]=0;
   return f[0];
}


Acknowledgments

This paper is a part of a research work carried out at the Department of Civil Engineering, TKM College of Engineering, Kollam, Kerala, India, in 2010. The authors thank the Department of Civil Engineering, TKM College of Engineering, Kollam, for providing all necessary help. They also thank the anonymous reviewers who helped to improve the quality of the paper.


References

  1. K. Terzaghi, Theoretical Soil Mechanics, John Wiley & Sons, New York, NY, USA, 1943.
  2. G. G. Meyerhof, "Some recent research on the bearing capacity of foundations," Canadian Geotechnical Journal, vol. 1, no. 1, pp. 16–26, 1963.
  3. J. B. Hansen, "A general formula for bearing capacity," Danish Geotechnical Institute Bulletin, vol. 11, 1961.
  4. A. S. Vesic, "Analysis of ultimate loads of shallow foundations," Journal of Soil Mechanics and Foundation Division, vol. 99, no. 1, pp. 45–73, 1973.
  5. D. Padmini, K. Ilamparuthi, and K. P. Sudheer, "Ultimate bearing capacity prediction of shallow foundations on cohesionless soils using neurofuzzy models," Computers and Geotechnics, vol. 35, no. 1, pp. 33–46, 2008.
  6. M. A. Shahin, H. R. Maier, and M. B. Jaksa, "Artificial neural network applications in geotechnical engineering," Australian Geomechanics, vol. 36, no. 1, pp. 49–62, 2001.
  7. P. Samui, "Prediction of friction capacity of driven piles in clay using the support vector machine," Canadian Geotechnical Journal, vol. 45, no. 2, pp. 288–295, 2008.
  8. P. Samui and T. G. Sitharam, "Least-square support vector machine applied to settlement of shallow foundations on cohesionless soils," International Journal for Numerical and Analytical Methods in Geomechanics, vol. 32, no. 17, pp. 2033–2043, 2008.
  9. P. Samui, "Slope stability analysis: a support vector machine approach," Environmental Geology, vol. 56, no. 2, pp. 255–267, 2008.
  10. M. Pal, "Support vector machines-based modelling of seismic liquefaction potential," International Journal for Numerical and Analytical Methods in Geomechanics, vol. 30, no. 10, pp. 983–996, 2006.
  11. J. H. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor Science Press, Ann Arbor, Mich, USA, 1975.
  12. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
  13. J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
  14. B. E. Boser, I. M. Guyon, and V. N. Vapnik, "A training algorithm for optimal margin classifiers," in 5th Annual ACM Workshop on COLT, D. Haussler, Ed., pp. 144–152, ACM Press, Pittsburgh, Pa, USA, 1992.
  15. V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
  16. C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  17. A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
  18. S. Rajasekaran, S. Gayathri, and T. L. Lee, "Support vector regression methodology for storm surge predictions," Ocean Engineering, vol. 35, no. 16, pp. 1578–1587, 2008.
  19. S. T. Khu, S. Y. Liong, V. Babovic, H. Madsen, and N. Muttil, "Genetic programming and its application in real-time runoff forecasting," Journal of the American Water Resources Association, vol. 37, no. 2, pp. 439–451, 2001.
  20. A. A. Javadi, M. Rezania, and M. M. Nezhad, "Evaluation of liquefaction induced lateral displacements using genetic programming," Computers and Geotechnics, vol. 33, no. 4-5, pp. 222–233, 2006.
  21. A. Johari, G. Habibagahi, and A. Ghahramani, "Prediction of soil-water characteristic curve using genetic programming," Journal of Geotechnical and Geoenvironmental Engineering, vol. 132, no. 5, pp. 661–665, 2006.
  22. B. S. Narendra, P. V. Sivapullaiah, S. Suresh, and S. N. Omkar, "Prediction of unconfined compressive strength of soft grounds using computational intelligence techniques: a comparative study," Computers and Geotechnics, vol. 33, no. 3, pp. 196–208, 2006.
  23. S. B. Charhate, M. C. Deo, and S. N. Londhe, "Inverse modeling to derive wind parameters from wave measurements," Applied Ocean Research, vol. 30, no. 2, pp. 120–129, 2008.
  24. S. Gaur and M. C. Deo, "Real-time wave forecasting using genetic programming," Ocean Engineering, vol. 35, no. 11-12, pp. 1166–1172, 2008.
  25. K. Ustoorikar and M. C. Deo, "Filling up gaps in wave data with genetic programming," Marine Structures, vol. 21, no. 2-3, pp. 177–195, 2008.
  26. S. S. Kashid, S. Ghosh, and R. Maity, "Streamflow prediction using multi-site rainfall obtained from hydroclimatic teleconnection," Journal of Hydrology, vol. 395, no. 1-2, pp. 23–38, 2010.
  27. I. H. Witten and E. Frank, Data Mining, Morgan Kaufmann, San Francisco, Calif, USA, 2000.
  28. P. S. Yu, S. T. Chen, and I. F. Chang, "Support vector regression for real-time flood stage forecasting," Journal of Hydrology, vol. 328, no. 3-4, pp. 704–716, 2006.
  29. F. D. Francone, Discipulus Owner's Manual Version 3.0 DRAFT, Machine Learning Technologies, Littleton, Colo, USA, 1998.

Copyright © 2012 S. Adarsh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
