Complexity

Volume 2019, Article ID 3094670, 13 pages

https://doi.org/10.1155/2019/3094670

## An Effective ABC-SVM Approach for Surface Roughness Prediction in Manufacturing Processes

^{1}Guangxi Key Laboratory of Manufacturing Systems and Advance Manufacturing Technology, Guangxi University, Nanning 530004, China
^{2}Department of Mechanical and Marine Engineering, Beibu Gulf University, Qinzhou 535011, China
^{3}Graduate School of Business and Law, RMIT University, Melbourne 3000, Australia
^{4}School of Mechanical and Electric Engineering, Guangzhou University, Guangzhou 510006, China

Correspondence should be addressed to Haibin Ouyang; oyhb1987@163.com

Received 5 December 2018; Revised 25 April 2019; Accepted 12 May 2019; Published 13 June 2019

Academic Editor: Matilde Santos

Copyright © 2019 Juan Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

It is difficult to accurately predict the response of some stochastic and complicated manufacturing processes. Data-driven learning methods, which can mine unseen relationships between influence parameters and outputs, are regarded as an effective solution. In this study, the support vector machine (SVM) is applied to develop prediction models for machining processes; the kernel function and loss function are the Gaussian radial basis function and the ε-insensitive loss function, respectively. To improve the prediction accuracy and reduce the parameter adjustment time of the SVM model, the artificial bee colony (ABC) algorithm is employed to optimize the internal parameters of the SVM model. Further, to evaluate the optimization performance of ABC in determining the parameters of SVM, this study compares the prediction performance of SVM models optimized by well-known evolutionary and swarm-based algorithms (differential evolution (DE), genetic algorithm (GA), particle swarm optimization (PSO), and ABC) and analyzes the ability of these optimization algorithms in terms of their optimization mechanisms and convergence speed, based on experimental datasets of turning and milling. Experimental results indicate that the four selected evaluation indicators, which reflect prediction accuracy and adjustment time, are better for ABC-SVM than for DE-SVM, GA-SVM, and PSO-SVM, except for three indicator values of DE-SVM for AISI 1045 steel in the case where the training set is large enough to develop the prediction model. The ABC algorithm has fewer control parameters, faster convergence speed, and stronger searching ability than the DE, GA, and PSO algorithms for optimizing the internal parameters of the SVM model. These results shed light on choosing a satisfactory optimization algorithm of SVM for manufacturing processes.

#### 1. Introduction

Traditional manufacturing is gradually developing towards smart manufacturing. The purpose of smart manufacturing is to integrate big data, advanced analytical technology, large-scale high-performance computing, and the Industrial Internet of Things (IIoT) into traditional manufacturing to produce high-quality, customizable products at lower cost [1]. Accurate prediction of product quality is a prerequisite for achieving smart manufacturing systems or processes. The surface roughness of machined workpieces is an important technical indicator of product quality [2]. Generally speaking, performance indicators of a product such as fatigue strength, corrosion resistance, lifespan, and reliability impose corresponding requirements on surface roughness [3]. In addition, complexity and uncertainty exist during the machining process, so predicting the response of such a process with reasonable accuracy is rather difficult [4]. Therefore, an effective prediction model of surface roughness for machined surfaces is especially important [5]. To date, data-driven machine learning techniques are the main approach for predicting surface roughness in different machining techniques (e.g., turning, grinding, and electro-discharge machining) [6–8]. Data-driven approaches use learning algorithms and experimental data to capture the underlying influence of control parameters on outputs and to build prediction models, so that an in-depth understanding of the underlying physical processes is not a prerequisite [9]. Multivariable regression analysis [10, 11], response surface methodology [12, 13], artificial neural networks (ANN) [14–16], and support vector machines (SVM) [17–19] are the most widely applied data-driven approaches for modeling machined surface roughness. Other techniques, such as ensembles, are also used for surface roughness prediction. Bustillo et al. [20] proposed ensemble algorithms to achieve an accurate prediction model of surface roughness, and an intensive comparison with an ANN approach was carried out on an experimental dataset. The results showed that ensemble learning removes the need to tune neural network parameters and also improves the prediction accuracy of the model. Grzenda et al. [21] developed a hybrid algorithm that combined a genetic algorithm (GA) with neural networks to predict surface roughness; the GA was used to decide on the dataset transformation and network architecture. The experimental results revealed the superiority of multilayer perceptrons based on the architecture and transformation found by the proposed algorithm. Compared to ANN, SVM is a powerful learning model that avoids problems of training efficiency, testing efficiency, and overfitting [22]. In addition, owing to the structure of the SVM model, its insensitive zone can capture the small-scale random variations produced in the responses of a stochastic process, which is beneficial to the robustness of the model. Çaydaş and Ekici [17] developed three different types of support vector machines (LS-SVM, Spider SVM, and SVM-KM) and an ANN to establish prediction models for the surface roughness of AISI 304 austenitic stainless steel. Their results showed that the prediction performance of all the SVMs was better than that of the ANN. The grid search method was used to determine the internal parameters of the SVM (penalty factor C and kernel parameter γ), but the values of the internal parameters obtained by grid search depend on the choice of the jumping interval. Ramesh et al. [18] used SVM to construct a prediction model of surface finish for end milling of 6061 aluminum. The SVM model predicted with an 8.34% error, a better performance than the regression model with a 9.71% error. However, the iteration and choice of the SVM internal parameters were not reported.

Nian and Devdas Shetty [23] proposed least squares SVM (LS-SVM) to predict the surface roughness of AISI 4340 steel and AISI D2 steel, and the experimental results indicated that the determination coefficient (0.9435) of the proposed LS-SVM model was higher than those of the analysis of variance (ANOVA) and NN models, which were 0.1917 and 0.7266, respectively. The tuning parameters were found using the coupled simulated annealing (CSA) method. Wang et al. [24] established an LS-SVM model with a radial basis function and an exponential model to predict surface roughness in lens precision turning. A comparison of the LS-SVM and exponential models was also carried out, and the LS-SVM model was found to deliver better prediction precision for surface roughness. Chaotic particle swarm optimization (CPSO) and the leave-one-out cross-validation (LOO-CV) method were combined to determine the regularization parameter and kernel width parameter. Aich and Simul [22] adopted SVM to develop a model of the average surface roughness parameter (Ra) for the electrical discharge machining (EDM) process. Particle swarm optimization (PSO) was employed to optimize the SVM parameters, including the regularization parameter C, the radius ε of the loss-insensitive hypertube, and the standard deviation of the kernel function, and the tuning process and search ranges of the parameters were shown in detail. Xie et al. [25] proposed a novel surface roughness prediction model based on energy consumption, and an SVM model was used to predict the surface roughness value in turning. The internal parameters (C, ε, and γ) of the SVM were optimized by PSO. Moreover, the proposed method was compared with the PSO-relevance vector machine (PSO-RVM). The experimental results showed that the proposed model had the lowest mean relative error and was effective.

It is well known that the values of the internal parameters of SVM can greatly influence the accuracy of a prediction model. The PSO algorithm has been the main optimization algorithm for determining the optimal internal parameter combination of SVM prediction models for manufacturing processes. However, little research has evaluated the prediction performance of SVM with other parameter optimization algorithms for surface roughness prediction during machining. The artificial bee colony (ABC) algorithm has advantages in global optimization, with fewer parameters and strong robustness [26, 27]. Thus, we investigate the ability of an SVM model optimized by the ABC algorithm (ABC-SVM) to predict surface roughness using three experimental datasets of machining processes. Moreover, to determine the performance of the ABC algorithm for optimizing the SVM model, this paper systematically compares and evaluates the capability of ABC and other common optimization algorithms (DE, GA, and PSO) for determining the internal parameters of the SVM model, and the reasons for the corresponding prediction results are analyzed in detail in terms of the optimization mechanisms and processes of these algorithms. This can provide an effective guide for selecting internal parameter optimization algorithms for the prediction model. A literature survey reveals that no such work has been reported to date.
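As a concrete illustration of the ABC-SVM idea, the sketch below implements a minimal artificial bee colony search in plain Python and uses it to minimize a stand-in objective over a two-dimensional box. In the paper's setting, the objective would be the cross-validation error of the SVM as a function of its internal parameters (e.g., C and γ); here a simple quadratic bowl replaces it so the example is self-contained. All names (`abc_minimize`, `n_sources`, `limit`) are ours, not taken from the paper.

```python
import random

def abc_minimize(f, bounds, n_sources=10, limit=20, iters=100, seed=0):
    """Minimal artificial bee colony for minimizing f over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    X = [rand_point() for _ in range(n_sources)]   # food sources (candidate parameter sets)
    F = [f(x) for x in X]                          # objective values
    trials = [0] * n_sources                       # stagnation counters
    def neighbor(i):
        # Perturb one random dimension of source i towards/away from a random partner k.
        k = rng.randrange(n_sources)
        while k == i:
            k = rng.randrange(n_sources)
        j = rng.randrange(dim)
        v = X[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
        lo, hi = bounds[j]
        v[j] = min(max(v[j], lo), hi)              # keep the candidate inside the box
        return v
    def greedy(i, v):
        # Greedy selection: accept the neighbor only if it improves source i.
        fv = f(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1
    for _ in range(iters):
        for i in range(n_sources):                 # employed bee phase
            greedy(i, neighbor(i))
        fit = [1.0 / (1.0 + fi) for fi in F]       # fitness for minimization
        total = sum(fit)
        for _ in range(n_sources):                 # onlooker bee phase (roulette wheel)
            r, acc, i = rng.uniform(0.0, total), 0.0, 0
            for idx, w in enumerate(fit):
                acc += w
                if r <= acc:
                    i = idx
                    break
            greedy(i, neighbor(i))
        for i in range(n_sources):                 # scout bee phase
            if trials[i] > limit:
                X[i] = rand_point()
                F[i] = f(X[i])
                trials[i] = 0
    best = min(range(n_sources), key=lambda i: F[i])
    return X[best], F[best]

# Stand-in objective: a quadratic bowl over an illustrative (C, gamma) search box.
obj = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2
best_x, best_f = abc_minimize(obj, [(0.01, 100.0), (0.001, 10.0)])
```

The three phases mirror the standard ABC scheme: employed bees refine each food source, onlookers bias further refinement towards good sources via roulette-wheel selection, and scouts reinitialize sources whose trial counters exceed `limit`.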

#### 2. Support Vector Machine (SVM)

The support vector machine, proposed by Vapnik [28], is a supervised learning system. It is constructed on the structural risk minimization (SRM) principle rather than empirical risk minimization (ERM). ERM is used in many artificial intelligence modeling approaches, including the neural network approach; as a result, poor generalization and model overfitting are often encountered in the neural network approach when the sample size is small. In contrast to ERM, SRM minimizes the error on the training sample while also minimizing the upper bound of the expected risk. Thus, SVM possesses a stronger generalization ability [29].

The prediction of the support vector machine can be described as follows. Let the training sample be defined as T = {(x_i, y_i), i = 1, 2, …, l}, where y_i is the output corresponding to the input vector x_i. The nonlinear mapping function φ(·) maps the input sample into a high-dimensional feature space via the kernel function, so that each point (x_i, y_i) in the training sample fits as closely as possible to a linear model, i.e., the regression function (the mapped prediction model) shown in (1):

$$f(\mathbf{x}) = \mathbf{w}^{T}\phi(\mathbf{x}) + b, \tag{1}$$

where w and b are the weight vector and deviation of the regression function. It is known from (1) that an optimal choice of w and b is a prerequisite for building an accurate model. SRM minimizes the empirical risk and reduces the generalization error (namely, overfitting) of the model at the same time. Thus, in order to penalize overfitting of a model based on the training sample, a loss function is introduced into SVM. Several loss functions have been developed for different kinds of problems [28]; in process modeling problems, the ε-insensitive loss function is commonly used. Therefore, the optimization problem based on the regularized risk minimization rule can be written as follows:

$$\min_{\mathbf{w},\,b,\,\xi,\,\xi^{*}} \; \frac{1}{2}\left\|\mathbf{w}\right\|^{2} + C\sum_{i=1}^{l}\left(\xi_{i}+\xi_{i}^{*}\right)$$

$$\text{subject to}\quad y_{i}-\mathbf{w}^{T}\phi(\mathbf{x}_{i})-b \le \varepsilon+\xi_{i},\quad \mathbf{w}^{T}\phi(\mathbf{x}_{i})+b-y_{i} \le \varepsilon+\xi_{i}^{*},\quad \xi_{i},\,\xi_{i}^{*} \ge 0, \tag{2}$$

where i = 1, …, l, and ξ_i and ξ_i* are the positive slack variables introduced to cope with otherwise infeasible constraints of the optimization problem [28, 30]. For the ε-insensitive loss function, ε is the radius of the insensitive hypertube, as shown in Figure 1. For points inside the insensitive hypertube, the loss is deemed to be zero, whereas for points outside it, the training error is deemed significant and is penalized through the penalty factor C, which strikes a balance between the flatness and the complexity of the model. The complexity of the model and the possibility of overfitting increase as C increases, but if C is too small, the training errors may increase. The number of support vectors is controlled by the size of the insensitive hypertube, because the points outside the insensitive zone form the support vector group [22].
An increase in the radius ε of the insensitive hypertube results in fewer support vectors and lower flexibility of the model. This may be beneficial for removing or reducing the influence of small stochastic noise on the output sample; however, a larger ε does not necessarily imply that the unseen target function will be picked up.
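The behavior of the ε-insensitive loss described above can be made concrete with a few lines of plain Python (an illustrative sketch; the function name is ours, not from the paper):

```python
# eps-insensitive loss: zero inside the insensitive tube of radius eps,
# linear in the distance to the tube boundary outside it.
def eps_insensitive_loss(y_true, y_pred, eps):
    return max(0.0, abs(y_true - y_pred) - eps)

# With eps = 0.1: the first point lies inside the tube (|1.00 - 1.05| = 0.05 <= 0.1),
# so it contributes zero loss; the other two lie outside and are penalized by
# their distance beyond the tube boundary (0.30 - 0.1 = 0.20 and 0.20 - 0.1 = 0.10).
losses = [eps_insensitive_loss(y, p, 0.1)
          for y, p in [(1.00, 1.05), (1.00, 1.30), (2.00, 1.80)]]
```

Only the points outside the tube, the second and third here, would become support vectors; widening the tube (larger ε) shrinks that set, matching the trade-off discussed above.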