Research Article  Open Access
Doddy Prayogo, Yudas Tadeus Teddy Susanto, "Optimizing the Prediction Accuracy of Friction Capacity of Driven Piles in Cohesive Soil Using a Novel Self-Tuning Least Squares Support Vector Machine", Advances in Civil Engineering, vol. 2018, Article ID 6490169, 9 pages, 2018. https://doi.org/10.1155/2018/6490169
Optimizing the Prediction Accuracy of Friction Capacity of Driven Piles in Cohesive Soil Using a Novel Self-Tuning Least Squares Support Vector Machine
Abstract
This research presents a novel hybrid prediction technique, namely, the self-tuning least squares support vector machine (ST-LSSVM), to accurately model the friction capacity of driven piles in cohesive soil. The hybrid approach uses LSSVM as a supervised-learning-based predictor to build an accurate input-output relationship of the dataset and the symbiotic organisms search (SOS) method to optimize the σ and γ parameters of the LSSVM. Evaluation and investigation of the ST-LSSVM were conducted on 45 training data and 20 testing data of driven pile load tests compiled from previous studies. The prediction accuracy of the ST-LSSVM was then compared to that of other machine learning methods, namely, LSSVM and BPNN, and was benchmarked against the previous neural network (NN) results from Goh using the coefficient of correlation (R), mean absolute error (MAE), and root mean square error (RMSE). The comparison showed that the ST-LSSVM performed better than LSSVM, BPNN, and NN in terms of R, RMSE, and MAE. This comprehensive evaluation confirms the capability of the hybrid SOS-LSSVM approach to accurately model the friction capacity of driven piles in clay, making it a reliable and robust assistance tool to help geotechnical engineers estimate friction pile capacity.
1. Introduction
Deep foundations built through the past years have been made of concrete, steel, or timber piles, which are either driven and precast or bored and cast-in-situ. Driven piles are frequently used in developing countries with a vast array of suburban and rural areas as foundations to support heavily loaded structures, for example, high-rise buildings and bridges. Recently, a variation of driven piles, jack-in piles, has also been used successfully as foundations for high-rise buildings in urban areas due to their lower vibration and noise compared to conventional driven piles [1].
Despite all the developments in the driven pile method, the design of driven piles still relies heavily on semi-empirical methods to estimate shaft resistance (f_{s}) and end resistance (f_{b}) [1–3]. When no bearing layer is present, as is the case for driven piles in cohesive soil, f_{b} becomes negligible and a large percentage of the pile load is carried by f_{s}. Consequently, the f_{s} that the soil can provide is very important in pile design. To date, there has been no comprehensive assessment of f_{s} prediction methods [3]; efforts have been limited to comparing f_{s} predictions from various semi-empirical methods with results from pile load tests [3–5]. To overcome this limitation, other efforts have been dedicated to predicting f_{s} through machine learning techniques [6–8].
In civil engineering, machine learning techniques have developed into an important research area. Several studies reveal the advantages of using machine learning techniques over traditional methods in establishing a better predictive model [9–11]. Recently, the least squares support vector machine (LSSVM) has become one of the most widely used machine learning techniques for handling a variety of complex problems [12–15]. Although acceptable prediction results have been reported, improper parameter tuning may weaken the learning process of LSSVM, resulting in lower accuracy. Building a more accurate predictive model can be achieved by optimizing the LSSVM parameters: the regularization parameter (γ), which governs the tradeoff between minimizing model complexity and minimizing training error, and the kernel parameter (σ) of the radial basis function (RBF), which describes the nonlinear mapping between the input space and the high-dimensional feature space.
Identifying the optimal parameters is itself an optimization problem; therefore, many recent studies combine a machine learning technique with a metaheuristic-based optimizer instead of using a machine learning technique alone [16–21]. Accordingly, this research presents a new hybrid prediction method called the self-tuning least squares support vector machine (ST-LSSVM) to accurately model the friction capacity of driven piles. The hybrid ST-LSSVM approach combines the symbiotic organisms search (SOS) and LSSVM techniques: SOS is used to optimize the σ and γ parameters of the LSSVM, while the LSSVM builds an accurate input-output relationship of the dataset by performing as a supervised-learning-based predictor. A total of 45 training data and 20 testing data from Goh [6] have been employed to validate the performance of the proposed method. The ST-LSSVM method is further compared with LSSVM and BPNN and is benchmarked against the previous results from Goh [6] using the coefficient of correlation (R), mean absolute error (MAE), and root mean square error (RMSE).
2. The Proposed Self-Optimized Machine Learning Framework
The objective of the proposed hybrid method is to improve the learning ability of LSSVM by automatically searching for an optimized set of LSSVM parameters. The collaborative integration of LSSVM-based regression and SOS enables the LSSVM to accurately determine the complicated relationship between the input variables and the output variable of the given historical data. The LSSVM and SOS are briefly described below.
2.1. Machine Learning Technique: Least Squares Support Vector Machine (LSSVM)
LSSVM was first introduced by Suykens and Vandewalle [12] as a modification of the conventional support vector machine (SVM). LSSVM uses a least squares loss function, which allows for function estimation while reducing computational cost. Where highly nonlinear spaces occur, the RBF kernel is chosen as the kernel function in LSSVM, as it brings more promising results than other kernels [12, 22]. The following model of interest underlies the functional relationship between one or more independent variables and a response variable [12, 23]:

\[y(\mathbf{x}) = \mathbf{w}^{T}\varphi(\mathbf{x}) + b\]

where \(\mathbf{w} \in \mathbb{R}^{n_h}\) is the weight vector, \(b \in \mathbb{R}\) is the bias term, and \(\varphi(\cdot): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n_h}\) is the mapping to the high-dimensional feature space. In LSSVM for regression analysis, given a training dataset \(\{\mathbf{x}_k, y_k\}_{k=1}^{N}\), the optimization problem is formulated as follows:

\[\min_{\mathbf{w}, b, \mathbf{e}} J(\mathbf{w}, \mathbf{e}) = \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + \frac{\gamma}{2}\sum_{k=1}^{N} e_k^{2}\]

subject to

\[y_k = \mathbf{w}^{T}\varphi(\mathbf{x}_k) + b + e_k, \quad k = 1, \ldots, N\]

where \(e_k \in \mathbb{R}\) are the error variables and \(\gamma \geq 0\) denotes the regularization constant.
In the previous optimization problem, the objective function consists of a regularization term and a sum of squared fitting errors. This cost function is similar to the standard procedure for training feedforward neural networks and is closely related to ridge regression. However, the primal problem becomes impossible to solve directly when \(\mathbf{w}\) becomes infinite-dimensional. In this case, the dual problem should be derived after constructing the Lagrangian [12].
The Lagrangian is given by

\[\mathcal{L}(\mathbf{w}, b, \mathbf{e}; \boldsymbol{\alpha}) = J(\mathbf{w}, \mathbf{e}) - \sum_{k=1}^{N}\alpha_k\left\{\mathbf{w}^{T}\varphi(\mathbf{x}_k) + b + e_k - y_k\right\}\]

where \(\alpha_k\) are the Lagrange multipliers. The conditions for optimality are given by

\[\frac{\partial \mathcal{L}}{\partial \mathbf{w}} = 0 \rightarrow \mathbf{w} = \sum_{k=1}^{N}\alpha_k\varphi(\mathbf{x}_k), \qquad \frac{\partial \mathcal{L}}{\partial b} = 0 \rightarrow \sum_{k=1}^{N}\alpha_k = 0,\]
\[\frac{\partial \mathcal{L}}{\partial e_k} = 0 \rightarrow \alpha_k = \gamma e_k, \qquad \frac{\partial \mathcal{L}}{\partial \alpha_k} = 0 \rightarrow \mathbf{w}^{T}\varphi(\mathbf{x}_k) + b + e_k - y_k = 0.\]

After elimination of \(\mathbf{e}\) and \(\mathbf{w}\), the following linear system is obtained:

\[\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \boldsymbol{\Omega} + \gamma^{-1}\mathbf{I} \end{bmatrix}\begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix}\]

where \(\mathbf{y} = [y_1, \ldots, y_N]^{T}\), \(\mathbf{1} = [1, \ldots, 1]^{T}\), and \(\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_N]^{T}\). The kernel function is applied as follows:

\[\Omega_{kl} = \varphi(\mathbf{x}_k)^{T}\varphi(\mathbf{x}_l) = K(\mathbf{x}_k, \mathbf{x}_l), \quad k, l = 1, \ldots, N.\]

The resulting LSSVM model for function estimation is expressed as

\[y(\mathbf{x}) = \sum_{k=1}^{N}\alpha_k K(\mathbf{x}, \mathbf{x}_k) + b\]

where \(\alpha_k\) and \(b\) are the solution of the linear system above.

The kernel function that is most often utilized is the RBF kernel, defined as follows:

\[K(\mathbf{x}, \mathbf{x}_k) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{x}_k\|^{2}}{\sigma^{2}}\right)\]

where \(\sigma\) is the kernel function parameter.
The γ parameter controls the penalty imposed on data points that deviate from the regression function, while the σ parameter directly affects the smoothness of the regression function. Proper setting of these two tuning hyperparameters is therefore required to ensure the best performance of the predictive model.
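As an illustration only (not the authors' implementation), the dual linear system above can be solved directly with a few lines of NumPy; the function names and the toy data below are assumptions of this sketch:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    # K(x, z) = exp(-||x - z||^2 / sigma^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, gamma, sigma2):
    """Solve the dual system [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, support values alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma2):
    # y(x) = sum_k alpha_k K(x, x_k) + b
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b
```

For example, fitting y = sin(2πx) on 20 points with γ = 100 and σ² = 0.1 reproduces the training targets almost exactly; a larger γ penalizes fitting errors more heavily, while σ² controls the smoothness of the kernel.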
2.2. Metaheuristic Optimization Algorithm: Symbiotic Organisms Search (SOS)
SOS, developed by Cheng and Prayogo [24], is a recently developed metaheuristic algorithm that draws inspiration from the dependency-based symbiotic interactions commonly found among organisms in nature. Like many other metaheuristics, SOS guides the searching process using special operators applied to candidate solutions; it iterates over organisms encoding candidate solutions to find the global solution in the search space; it requires a maximum number of evaluations and other common control parameters; and it preserves the better solutions through a selection mechanism.
Nevertheless, there is a key difference: SOS does not need algorithm-specific parameters, whereas, for example, particle swarm optimization (PSO) relies on the social factor, inertia weight, and cognitive factor. SOS therefore requires no extra parameter-tuning work, which is a huge advantage, since improper tuning of such parameters can trap the obtained solutions in local optima regions. Since its first development in 2014, SOS has been successfully utilized to solve many optimization problems in various research areas [25–31].
At the beginning, SOS creates a random ecosystem matrix (population) of viable candidate solutions to the problem. The user specifies the number of organisms within the ecosystem, called the ecosystem size. Each row of the matrix represents an organism, analogous to an individual in many other algorithms; each virtual organism encodes a candidate solution together with its corresponding objective value. Once the ecosystem has been generated, the search begins.
The search consists of three distinct phases inspired by the best-known forms of symbiosis, which organisms use to improve their long-term survival advantage and fitness (Figure 1). Throughout the searching process, these are the three ways in which organisms benefit from interacting with one another. The SOS algorithm adopts a greedy selection scheme: an updated organism replaces the current organism only if its fitness is better. Once an organism has finished all three phases, the best organism is updated. The phases form a continual cycle until the stopping criterion is reached.
2.2.1. Mutualism Phase
The mutualism phase models a relationship in which both sides benefit; a classic example is the interaction between flowers and bees. The mathematical formulation for the mutualism phase is shown as follows:

\[X_i^{new} = X_i + \mathrm{rand}(0, 1) \times (X_{best} - MV \times BF_1)\]
\[X_j^{new} = X_j + \mathrm{rand}(0, 1) \times (X_{best} - MV \times BF_2)\]

where \(X_i\) and \(X_j\) represent the two current organisms engaged in mutualism; \(X_{best}\) represents the current best organism; the mutual vector \(MV\) models the mutualism interaction between the two current organisms; \(X_i^{new}\) and \(X_j^{new}\) represent the updated organisms after the interaction; and the benefit factors \(BF_1\) and \(BF_2\) are two random values of either 1 or 2 representing the level of benefit of each organism. Meanwhile, \(MV\) can be calculated using the following formulation:

\[MV = \frac{X_i + X_j}{2}\]
2.2.2. Commensalism Phase
In the commensalism phase, one organism benefits from the relationship while the other is neither helped nor harmed; a common example is the relationship between sharks and remora fish. The mathematical formulation for the commensalism phase is shown as follows:

\[X_i^{new} = X_i + \mathrm{rand}(-1, 1) \times (X_{best} - X_j)\]

where \(\mathrm{rand}(-1, 1)\) is the uniform random parameter between −1 and 1.
2.2.3. Parasitism Phase
Finally, parasitism is a relationship in which one side benefits and the other is harmed; for instance, the plasmodium parasite transfers from one human host to the next using the Anopheles mosquito. In this phase, the beneficiary gets fitter while the harmed organism is likely to perish. The mathematical formulation for the parasitism phase is shown as follows:

\[X_{parasite} = X_i \otimes B_1 + \left[\mathrm{rand}(0, 1) \times (ub - lb) + lb\right] \otimes B_2\]

where \(X_{parasite}\) is the artificial parasite created from \(X_i\) that threatens the existence of a randomly selected organism \(X_j\): if the parasite has better fitness, it replaces \(X_j\) in the ecosystem. Here, \(B_1\) and \(B_2 = 1 - B_1\) are the binary random vector and its complement, respectively; \(ub\) and \(lb\) are the upper and lower bounds of the search space; and \(\mathrm{rand}(0, 1)\) is the uniform random parameter between 0 and 1.
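The three phases above can be sketched in Python as follows. This is a minimal illustrative implementation, not the authors' code; the helper names and bounds handling are assumptions, and the benefit factors follow the standard SOS formulation:

```python
import numpy as np

def sos_minimize(f, lb, ub, eco_size=20, max_iter=100, seed=0):
    """Minimize f over the box [lb, ub] using the three SOS phases."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    eco = rng.uniform(lb, ub, size=(eco_size, dim))  # ecosystem of organisms
    fit = np.array([f(x) for x in eco])

    def partner(i):                     # random index different from i
        j = rng.integers(eco_size - 1)
        return int(j + (j >= i))

    for _ in range(max_iter):
        for i in range(eco_size):
            best = eco[np.argmin(fit)]
            # Mutualism: i and j both move toward the best organism, guided
            # by the mutual vector MV and benefit factors BF in {1, 2}.
            j = partner(i)
            mv = (eco[i] + eco[j]) / 2.0
            for idx, bf in ((i, rng.integers(1, 3)), (j, rng.integers(1, 3))):
                cand = np.clip(eco[idx] + rng.random(dim) * (best - mv * bf), lb, ub)
                fc = f(cand)
                if fc < fit[idx]:       # greedy selection
                    eco[idx], fit[idx] = cand, fc
            # Commensalism: i benefits from j; j is unaffected.
            best = eco[np.argmin(fit)]
            j = partner(i)
            cand = np.clip(eco[i] + rng.uniform(-1.0, 1.0, dim) * (best - eco[j]), lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                eco[i], fit[i] = cand, fc
            # Parasitism: a parasite cloned from i, with randomly selected
            # dimensions re-randomized, tries to displace a random organism j.
            j = partner(i)
            b1 = rng.integers(0, 2, dim)
            parasite = eco[i] * b1 + (rng.random(dim) * (ub - lb) + lb) * (1 - b1)
            fc = f(parasite)
            if fc < fit[j]:
                eco[j], fit[j] = parasite, fc
    k = int(np.argmin(fit))
    return eco[k], fit[k]
```

On a simple 2D sphere function over [−5, 5]², this sketch converges close to the global minimum within 100 iterations, without any algorithm-specific control parameters beyond ecosystem size and iteration count.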
2.3. Cross-Validation Technique and Performance Measurement
Training and testing processes are essential in establishing the prediction model. In the training process, a dataset is used to build a prediction model through the machine learning method. In the testing process, the established prediction model is validated on a new dataset. Using the entire dataset for training might cause an "overfitting" phenomenon, that is, the prediction model fits the dataset extremely well but is useless for a new, unseen dataset. Hence, to avoid the overfitting problem, the training dataset is often divided into two subsets: the larger portion serves as the "training subset" and the smaller portion as the "validation subset." The validation subset is used to validate the model built. This approach helps ensure that the established prediction model performs well in predicting the testing dataset.
To eliminate the randomness in partitioning the training dataset, the k-fold cross-validation technique is employed [32, 33]. During this process, k-fold cross-validation creates k nonoverlapping subsets from the historical data. The first (k − 1) subsets are used to train the inference model, which in this case is ST-LSSVM, and the remaining kth subset is used to validate the result. The process repeats itself k times until every subset has been used exactly once as the validating subset. The value of k is a free parameter and can be set to any suitable number; in the current research, k was set to 10 to keep the computational time reasonable. The data were thus randomly divided into 10 equally sized groups: in each fold, one subset is used for validation while the other nine are used for training. The term "tenfold" means that 10% of the data are used as the validating subset while the remaining 90% are used as the training subset.
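A minimal sketch of the tenfold partitioning described above, assuming NumPy and using illustrative function names (the `fit`/`predict` callables stand in for any learner):

```python
import numpy as np

def kfold_splits(n, k=10, seed=42):
    # Shuffle the indices once, then cut into k nearly equal,
    # non-overlapping folds.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_val_rmse(fit, predict, X, y, k=10):
    """Average validation RMSE over k folds: each fold is held out once."""
    folds = kfold_splits(len(y), k)
    scores = []
    for i, val in enumerate(folds):
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[trn], y[trn])
        err = predict(model, X[val]) - y[val]
        scores.append(np.sqrt(np.mean(err ** 2)))
    return float(np.mean(scores))
```

Because every record appears in exactly one validating subset, the averaged score does not depend on a single lucky (or unlucky) partition of the training data.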
The performance measures used to evaluate the predictive methods are further described in Table 1. The performance measures are implemented on the predicted output results of the training and testing data. The lowest RMSE and MAE values, together with the highest R value, indicate the best model outcome.
Table 1: Performance measures used in this study.

Coefficient of correlation: \(R = \dfrac{n\sum y\,y' - \left(\sum y\right)\left(\sum y'\right)}{\sqrt{\left[n\sum y^{2} - \left(\sum y\right)^{2}\right]\left[n\sum y'^{2} - \left(\sum y'\right)^{2}\right]}}\)

Mean absolute error: \(\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\left|y'_i - y_i\right|\)

Root mean square error: \(\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y'_i - y_i\right)^{2}}\)

where \(y\) is the actual value; \(y'\) is the predicted value; and \(n\) is the number of data samples.
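The three measures in Table 1 can be computed directly; a short sketch with illustrative names, NumPy assumed:

```python
import numpy as np

def r_value(y, y_pred):
    # Pearson coefficient of correlation R between actual and predicted values.
    return float(np.corrcoef(np.asarray(y, float), np.asarray(y_pred, float))[0, 1])

def mae(y, y_pred):
    # Mean absolute error, in the same units as the target (here kPa).
    return float(np.mean(np.abs(np.asarray(y_pred, float) - np.asarray(y, float))))

def rmse(y, y_pred):
    # Root mean square error; penalizes large deviations more than MAE.
    return float(np.sqrt(np.mean((np.asarray(y_pred, float) - np.asarray(y, float)) ** 2)))
```

Note that R measures only the linear association between actual and predicted values (a systematically biased model can still score R = 1), which is why RMSE and MAE are reported alongside it.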
2.4. Integration of LSSVM and SOS with Cross-Validation
The procedure of ST-LSSVM explains the interaction of the proposed method in using training data and testing data to deliver the best prediction results. As mentioned earlier, the training dataset was divided into a training subset and a validating subset. In the training process, the predictive model was constructed by allowing SOS to determine the optimal LSSVM parameters. Figure 2 presents the framework of the ST-LSSVM method.
To mitigate overfitting, k-fold cross-validation was chosen for parameter selection, and the statistical performance measures were calculated over all subsets and folds. The best parameters are the parameter set that produces the minimum average RMSE on the validating subsets throughout the tenfold training simulation. Meanwhile, the testing dataset was used to evaluate the performance of the trained LSSVM model on unseen data after the optimization finished. The whole optimization process was automated by SOS: while SOS concentrated on optimizing the two LSSVM hyperparameters (γ and σ) to reduce prediction errors, LSSVM addressed the learning and curve fitting.
3. Experimental Results
3.1. Historical Dataset
This research uses historical data compiled by Goh [6], consisting of 45 training data and 20 testing data of load test records. The data-driven models mentioned here were based on load test records for driven piles in clay from various sources, including Vijayvergiya [34], Flaate and Selnes [35], and Semple and Rigden [36]. Some records were scaled down to laboratory conditions, which means that the results should be used for similar conditions; actual field data may not always fall within the range used in this study. Thus, dimensional analysis and scaling effects will need to be considered to effectively apply these results in actual field practice.
Each load test record is described by four input variables: effective vertical stress (kPa), pile length (m), undrained shear strength (kPa), and pile diameter (cm). The output of the load test is the friction capacity (kPa). Statistical descriptions of the training and testing datasets of load test records are reported in Table 2.

In the training dataset, the pile length varied from 4.6 m to 96 m, while the pile diameter ranged from a minimum of 13.5 cm up to 76.7 cm. Effective vertical stress and undrained shear strength ranged from 19 to 718 kPa and from 10 to 335 kPa, respectively, and friction capacity ranged from 8 to 192.1 kPa. In the testing dataset, pile length and pile diameter ranged from 8 to 66.4 m and from 11.4 to 61 cm, respectively. Undrained shear strength ranged from 9 to 185 kPa, and effective vertical stress from 21 to 244 kPa. Finally, measured friction capacity was between 9 and 88.8 kPa.
3.2. Training and Testing Processes
To build the pile capacity prediction model from previous data, the training process is essential, and the k-fold cross-validation method is used to make the pile capacity model as accurate as possible. The training process of ST-LSSVM with the given training dataset is simulated 10 times based on cross-validation, with each of the 10 subsets used exactly once as a validating subset. After the training process finishes, the model is ready for predicting a new, unseen testing dataset. The complete training results are provided in the Appendix.
Through trial and error, suitable parameter settings for ST-LSSVM were determined: (1) the maximum number of iterations is set to 100, (2) the population size is set to 30, and (3) the search range for the γ and σ^{2} parameters is varied from 10^{−10} to 10^{10} as suggested in [37]. Using the given training dataset, the training procedure begins with a random initial population of hyperparameters. In every iteration, ST-LSSVM simulates 10-fold cross-validation over the training and validating subsets and stores the average RMSE value of the validating subsets across the folds as the fitness value. The fitness value starts from a high RMSE of 7.596 and iteratively decreases and converges to an RMSE of 6.831, as shown in Figure 3. Figure 4 shows the historical records of the hyperparameter selection process. The final set of parameters producing the lowest RMSE on the validation subsets is 81369676 for γ and 8622 for σ^{2}. Finally, the complete testing results are provided in the Appendix and evaluated using the three performance measures described above.
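The fitness evaluation described above (mean validation RMSE over 10 folds, with γ and σ² sampled on a log10 scale over [10⁻¹⁰, 10¹⁰]) can be sketched as follows. For brevity, a simple random search stands in for SOS here, and all function names are illustrative rather than the authors' implementation:

```python
import numpy as np

def rbf(X1, X2, sigma2):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def lssvm_train(X, y, gamma, sigma2):
    # Dual linear system of LSSVM regression.
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, support values alpha

def fitness(X, y, gamma, sigma2, k=10, seed=0):
    # Mean validation RMSE over a k-fold split: the value the optimizer minimizes.
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    scores = []
    for i, val in enumerate(folds):
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        try:
            b, alpha = lssvm_train(X[trn], y[trn], gamma, sigma2)
        except np.linalg.LinAlgError:
            return np.inf  # ill-conditioned system: reject this parameter set
        pred = rbf(X[val], X[trn], sigma2) @ alpha + b
        scores.append(np.sqrt(np.mean((pred - y[val]) ** 2)))
    return float(np.mean(scores))

def tune(X, y, n_trials=100, seed=1):
    # gamma and sigma^2 are sampled on a log10 scale over [1e-10, 1e10].
    rng = np.random.default_rng(seed)
    best_fit, best_params = np.inf, None
    for _ in range(n_trials):
        gamma, sigma2 = 10.0 ** rng.uniform(-10.0, 10.0, size=2)
        f = fitness(X, y, gamma, sigma2)
        if f < best_fit:
            best_fit, best_params = f, (gamma, sigma2)
    return best_fit, best_params
```

Replacing the random sampler with the SOS update phases yields the ST-LSSVM scheme of Figure 2: the same fitness function is simply minimized by a more efficient searcher.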
4. Results and Discussions
For comparison purposes, this research applied other machine learningbased predictive methods which are LSSVM and backpropagation neural network (BPNN). The setting of BPNN follows the default parameter setting of MATLAB neural network toolbox, and Levenberg–Marquardt is chosen to train the BPNN [38]. The hyperparameters of LSSVM follow the default setting suggested in the publication of Suykens and Vandewalle [12]. Additionally, the previous result from the literature published by Goh [6] is collected for benchmarking the result obtained from the proposed method.
Figure 5 shows the obtained training and testing results of ST-LSSVM. The actual and predicted outputs of both the training and testing phases showed a good fit to a straight line. The R values for the training and testing phases reported in this experiment also reflect the high accuracy and superior performance of the trained ST-LSSVM model.
As mentioned earlier, RMSE and MAE have also been utilized besides R to provide an accurate evaluation of the competing methods. Table 3 compiles the prediction results of each method for further analysis. The results showed that the ST-LSSVM model effectively facilitated constructing an optimized predictive model over the default LSSVM method. By implementing the self-optimized framework, the testing results of LSSVM were improved by 0.079 in R, 5.2019 kPa in RMSE, and 3.0419 kPa in MAE. Additionally, the results of ST-LSSVM are also superior to those of BPNN in terms of R, RMSE, and MAE. When compared with the neural network (NN) previously published by Goh [6], ST-LSSVM performs relatively better: it achieves slightly better testing results than NN in two categories (R and MAE) while producing significantly better training results than NN in all categories. This comprehensive evaluation confirmed the capability of SOS and LSSVM for accurately modeling the friction capacity based on load test records.

5. Conclusions
In this study, a new method for predicting friction pile capacity has been established based on load test records. This research extends the current body of knowledge on the capability of LSSVM in this prediction task. Genuine load test records were collected, and the proposed model achieved accurate prediction results. The main purpose of this study was to investigate a hybrid computational intelligence system and its efficacy in optimizing the LSSVM parameters to improve the accuracy of friction capacity forecasts of driven piles on the given load test records.
Further analysis suggested that LSSVM, combined with SOS, can facilitate the construction of an optimized predictive model for friction capacity. Given that modeling friction pile capacity is complex and highly nonlinear, the obtained R, RMSE, and MAE values for both training and testing are very satisfactory. The new method thus provides a reliable and robust assistance tool to help geotechnical engineers estimate friction pile capacity.
Appendix
Prediction Results by STLSSVM for Training and Testing Data


Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
 C. Chow and Y. Tan, "Jack-in pile design–Malaysian experience and design approach to EC7," IEM Course on Eurocode, vol. 7, 2009.
 M. F. Randolph, "Science and empiricism in pile foundation design," Géotechnique, vol. 53, no. 10, pp. 847–875, 2003.
 A. S. Azzouz, M. M. Baligh, and A. J. Whittle, "Shaft resistance of piles in clay," Journal of Geotechnical Engineering, vol. 116, no. 2, pp. 205–221, 1990.
 M. Y. Abu-Farsakh and H. H. Titi, "Assessment of direct cone penetration test methods for predicting the ultimate capacity of friction driven piles," Journal of Geotechnical and Geoenvironmental Engineering, vol. 130, no. 9, pp. 935–944, 2004.
 C. Roohnavaz, "Driven pile construction control procedures and design," Proceedings of the Institution of Civil Engineers-Geotechnical Engineering, vol. 163, no. 5, pp. 241–255, 2010.
 A. Goh, "Empirical design in geotechnics using neural networks," Géotechnique, vol. 45, no. 4, pp. 709–714, 1995.
 P. Samui, "Prediction of friction capacity of driven piles in clay using the support vector machine," Canadian Geotechnical Journal, vol. 45, no. 2, pp. 288–295, 2008.
 S. Suman, S. K. Das, and R. Mohanty, "Prediction of friction capacity of driven piles in clay using artificial intelligence techniques," International Journal of Geotechnical Engineering, vol. 10, no. 5, pp. 469–475, 2016.
 W. T. Chan, Y. K. Chow, and L. F. Liu, "Neural network: an alternative to pile driving formulas," Computers and Geotechnics, vol. 17, no. 2, pp. 135–156, 1995.
 J.-S. Chou, C.-K. Chiu, M. Farfoura, and I. Al-Taharwa, "Optimizing the prediction accuracy of concrete compressive strength based on a comparison of data-mining techniques," Journal of Computing in Civil Engineering, vol. 25, no. 3, pp. 242–253, 2011.
 N.-D. Hoang, A.-D. Pham, Q.-L. Nguyen, and Q.-N. Pham, "Estimating compressive strength of high performance concrete with Gaussian process regression model," Advances in Civil Engineering, vol. 2016, Article ID 2861380, 8 pages, 2016.
 J. A. K. Suykens and J. Vandewalle, "Least squares support vector machine classifiers," Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.
 P. Samui and D. P. Kothari, "Utilization of a least square support vector machine (LSSVM) for slope stability analysis," Scientia Iranica, vol. 18, no. 1, pp. 53–58, 2011.
 P. Samui, "Least square support vector machine and relevance vector machine for evaluating seismic liquefaction potential using SPT," Natural Hazards, vol. 59, no. 2, pp. 811–822, 2011.
 B. G. Aiyer, D. Kim, N. Karingattikkal, P. Samui, and P. R. Rao, "Prediction of compressive strength of self-compacting concrete using least square support vector machine and relevance vector machine," KSCE Journal of Civil Engineering, vol. 18, no. 6, pp. 1753–1758, 2014.
 M.-Y. Cheng and D. Prayogo, "Modeling the permanent deformation behavior of asphalt mixtures using a novel hybrid computational intelligence," in Proceedings of the 33rd International Symposium on Automation and Robotics in Construction (ISARC 2016), pp. 1009–1015, International Association for Automation and Robotics in Construction, Auburn, AL, USA, July 2016.
 M.-Y. Cheng, D. Prayogo, and Y.-W. Wu, "Novel genetic algorithm-based evolutionary support vector machine for optimizing high-performance concrete mixture," Journal of Computing in Civil Engineering, vol. 28, no. 4, p. 06014003, 2014.
 J.-S. Chou and J. P. P. Thedja, "Metaheuristic optimization within machine learning-based classification system for early warnings related to geotechnical problems," Automation in Construction, vol. 68, pp. 65–80, 2016.
 A.-D. Pham, N.-D. Hoang, and Q.-T. Nguyen, "Predicting compressive strength of high-performance concrete using metaheuristic-optimized least squares support vector regression," Journal of Computing in Civil Engineering, vol. 30, no. 3, p. 06015002, 2016.
 M.-Y. Cheng, D. K. Wibowo, D. Prayogo, and A. F. V. Roy, "Predicting productivity loss caused by change orders using the evolutionary fuzzy support vector machine inference model," Journal of Civil Engineering and Management, vol. 21, no. 7, pp. 881–892, 2015.
 M.-Y. Cheng, D. Prayogo, Y.-H. Ju, Y.-W. Wu, and S. Sutanto, "Optimizing mixture properties of biodiesel production using genetic algorithm-based evolutionary support vector machine," International Journal of Green Energy, vol. 13, no. 15, pp. 1599–1607, 2016.
 J. A. K. Suykens, T. Van Gestel, and J. De Brabanter, Least Squares Support Vector Machines, World Scientific, Singapore, 2002.
 J. A. K. Suykens, L. Lukas, and J. Vandewalle, "Sparse approximation using least squares support vector machines," in Proceedings of the 2000 IEEE International Symposium on Circuits and Systems. Emerging Technologies for the 21st Century, pp. 757–760, Geneva, Switzerland, May 2000.
 M.-Y. Cheng and D. Prayogo, "Symbiotic organisms search: a new metaheuristic optimization algorithm," Computers & Structures, vol. 139, pp. 98–112, 2014.
 M.-Y. Cheng, D. Prayogo, and D.-H. Tran, "Optimizing multiple-resources leveling in multiple projects using discrete symbiotic organisms search," Journal of Computing in Civil Engineering, vol. 30, no. 3, p. 04015036, 2016.
 D.-H. Tran, M.-Y. Cheng, and D. Prayogo, "A novel multiple objective symbiotic organisms search (MOSOS) for time-cost-labor utilization tradeoff problem," Knowledge-Based Systems, vol. 94, pp. 132–145, 2016.
 M.-Y. Cheng, C.-K. Chiu, Y.-F. Chiu et al., "SOS optimization model for bridge life cycle risk evaluation and maintenance strategies," Journal of the Chinese Institute of Civil and Hydraulic Engineering, vol. 26, no. 4, pp. 293–308, 2014.
 A. E.-S. Ezugwu, A. O. Adewumi, and M. E. Frîncu, "Simulated annealing based symbiotic organisms search optimization algorithm for traveling salesman problem," Expert Systems with Applications, vol. 77, pp. 189–210, 2017.
 A. Panda and S. Pani, "A symbiotic organisms search algorithm with adaptive penalty function to solve multiobjective constrained optimization problems," Applied Soft Computing, vol. 46, pp. 344–360, 2016.
 V. F. Yu, A. A. N. P. Redi, C.-L. Yang, E. Ruskartina, and B. Santosa, "Symbiotic organisms search and two solution representations for solving the capacitated vehicle routing problem," Applied Soft Computing, vol. 52, pp. 657–672, 2017.
 H. Kamankesh, V. G. Agelidis, and A. Kavousi-Fard, "Optimal scheduling of renewable microgrids considering plug-in hybrid electric vehicle charging demand," Energy, vol. 100, pp. 285–297, 2016.
 P. Zhang, "Model selection via multifold cross validation," Annals of Statistics, vol. 21, no. 1, pp. 299–313, 1993.
 R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 1995), vol. 14, pp. 1137–1145, Montreal, QC, Canada, August 1995.
 V. Vijayvergiya, "A new way to predict capacity of piles in clay," in Proceedings of the 4th Annual Offshore Technology Conference, vol. 2, pp. 865–871, Houston, TX, USA, May 1972.
 K. Flaate and P. Selnes, "Side friction of piles in clay," in Proceedings of the 9th International Conference on Soil Mechanics and Foundation Engineering, pp. 517–522, Tokyo, Japan, July 1977.
 R. M. Semple and W. J. Rigden, "Shaft capacity of driven pipe piles in clay," in Proceedings of Analysis and Design of Pile Foundations (ASCE 1984), pp. 59–79, San Francisco, CA, USA, October 1984.
 M.-Y. Cheng and N.-D. Hoang, "Risk score inference for bridge maintenance project using evolutionary fuzzy least squares support vector machine," Journal of Computing in Civil Engineering, vol. 28, no. 3, p. 04014003, 2014.
 M. H. Beale, M. T. Hagan, and H. B. Demuth, Neural Network Toolbox™ Reference, MATLAB R2015b, The MathWorks, Natick, MA, USA, 1992.
Copyright
Copyright © 2018 Doddy Prayogo and Yudas Tadeus Teddy Susanto. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.