Abstract

It is difficult to accurately predict the response of some stochastic and complicated manufacturing processes. Data-driven learning methods, which can mine unseen relationships between influence parameters and outputs, are regarded as an effective solution. In this study, support vector machine (SVM) is applied to develop prediction models for machining processes. The kernel function and loss function are the Gaussian radial basis function and the ε-insensitive loss function, respectively. To improve the prediction accuracy and reduce the parameter adjustment time of the SVM model, the artificial bee colony (ABC) algorithm is employed to optimize the internal parameters of the SVM model. Further, to evaluate the optimization performance of ABC in determining the parameters of SVM, this study compares the prediction performance of SVM models optimized by well-known evolutionary and swarm-based algorithms (differential evolution (DE), genetic algorithm (GA), particle swarm optimization (PSO), and ABC) and analyzes the ability of these optimization algorithms in terms of their optimization mechanisms and convergence speed, based on experimental datasets of turning and milling. Experimental results indicate that the four selected evaluation indicator values, which reflect prediction accuracy and adjustment time, are better for ABC-SVM than for DE-SVM, GA-SVM, and PSO-SVM, except for three indicator values of DE-SVM for AISI 1045 steel, in the case where the training set is sufficient to develop the prediction model. The ABC algorithm has fewer control parameters, faster convergence speed, and stronger searching ability than the DE, GA, and PSO algorithms for optimizing the internal parameters of the SVM model. These results shed light on choosing a satisfactory optimization algorithm of SVM for manufacturing processes.

1. Introduction

Traditional manufacturing is gradually developing towards smart manufacturing. The purpose of smart manufacturing is to integrate big data, advanced analytical technology, large-scale high-performance computing, and the Industrial Internet of Things (IIoT) into traditional manufacturing to produce high-quality customizable products at lower costs [1]. Accurate prediction of product quality during processing is a prerequisite for achieving smart manufacturing systems or processes. The surface roughness of machined workpieces is an important technical indicator for testing product quality [2]. Generally speaking, performance indicators of the product such as fatigue strength, corrosion resistance, lifespan, and reliability impose corresponding requirements on the surface roughness [3]. In addition, complexity and uncertainty exist during the machining process, so predicting the response of such a process with reasonable accuracy is rather difficult [4]. Therefore, an effective prediction model of surface roughness for machined surfaces is especially important [5]. To date, data-driven machine learning techniques are the main prediction approaches for surface roughness applied in different machining techniques (e.g., turning, grinding, and electro-discharge machining) [6–8]. Data-driven approaches use learning algorithms and experimental data to capture the underlying influence of control parameters on outputs and to build prediction models, so that an in-depth understanding of the underlying physical processes is not a prerequisite [9]. Multivariable regression analysis [10, 11], response surface methodology [12, 13], artificial neural networks (ANN) [14–16], and support vector machines (SVM) [17–19] are the most widely applied data-driven approaches for modeling machined surface roughness. Other techniques like ensembles are also used for surface roughness prediction. Bustillo et al. [20] proposed ensemble algorithms to achieve an accurate prediction model of surface roughness, and an intensive comparison with an ANN approach was carried out on an experimental dataset. This result showed that ensemble learning can remove the operation of tuning neural network parameters and also improve the prediction accuracy of the model. Grzenda et al. [21] developed a hybrid algorithm that combined a genetic algorithm (GA) with neural networks to predict surface roughness; the GA was used to decide on the data set transformation and network architecture, and the experimental results revealed the superiority of the multilayer perceptrons based on the architecture and transformation found by the proposed algorithm. Compared to ANN, SVM is a powerful learning model that avoids the problems of training efficiency, testing efficiency, and overfitting [22]. In addition, owing to the structure of the SVM model, its insensitive zone can capture the small-scale random variations produced in the responses of a stochastic process, which is beneficial for the robustness of the model. Çaydaş and Ekici [17] developed three different types of support vector machines (LS-SVM, Spider SVM, and SVM-KM) and an ANN to establish prediction models for the surface roughness of AISI 304 austenitic stainless steel. Their results showed that the prediction performance of all the SVMs was better than that of the ANN. The grid search method was used to determine the internal parameters of SVM (the penalty factor C and the kernel parameter), but the internal parameter values obtained by grid search depend on the chosen grid step. Ramesh et al.
[18] used SVM to construct a prediction model of surface finish for end milling of 6061 aluminum. The SVM model predicted with 8.34% error, a better performance than the regression model with 9.71% error. However, the iteration and choice of the SVM internal parameters were not reported. Zhang and Shetty [23] proposed least squares SVM (LS-SVM) to predict the surface roughness of AISI 4340 steel and AISI D2 steel, and the experimental results indicated that the determination coefficient (0.9435) of the proposed LS-SVM model was higher than those of the analysis of variance (ANOVA) and NN, which were 0.1917 and 0.7266, respectively. The tuning parameters were found using the coupled simulated annealing (CSA) method. Wang et al. [24] established an LS-SVM model with a radial basis function, together with an exponential model, to predict surface roughness in the precision turning of lenses. A comparison of the LS-SVM and exponential models was also carried out, and the LS-SVM model was found to be capable of better prediction precision for surface roughness. The chaotic particle swarm optimization (CPSO) and leave-one-out cross-validation (LOO-CV) methods were combined to determine the regularization parameter and the kernel width parameter. Aich and Banerjee [22] adopted SVM to develop a model of the average surface roughness parameter (Ra) for the electrical discharge machining (EDM) process. Particle swarm optimization (PSO) was employed to optimize the SVM parameters, including the regularization parameter C, the radius ε of the loss-insensitive hypertube, and the standard deviation of the kernel function, and the tuning process and search ranges of the parameters were shown in detail. Xie et al. [25] proposed a novel surface roughness prediction model based on energy consumption, with an SVM model used to predict the surface roughness value in turning. The internal parameters (C, ε, and σ) of the SVM were optimized by PSO. Moreover, this proposed method was compared with the PSO-relevance vector machine (PSO-RVM). The experimental results showed that the proposed model had the lowest mean relative error and was effective.

It is well known that the internal parameter values of SVM can greatly influence the accuracy of a prediction model. PSO has been the main optimization algorithm used to determine the optimal internal parameter combination of SVM prediction models for manufacturing processes. However, little research has evaluated the prediction performance of SVM with other parameter optimization algorithms for surface roughness prediction during machining. The artificial bee colony (ABC) algorithm has advantages for global optimization, with fewer parameters and strong robustness [26, 27]. Thus we investigate the ability of an SVM model optimized by the ABC algorithm (ABC-SVM) to predict surface roughness using three experimental datasets of machining processes. Moreover, to determine the performance of the ABC algorithm for optimizing the SVM model, this paper systematically compares and evaluates the capability of ABC and other common optimization algorithms (DE, GA, and PSO) for determining the internal parameters of the SVM model, and the reasons for the corresponding prediction results are analyzed in detail in terms of the optimization mechanisms and processes of these algorithms. This can provide an effective guide for selecting internal parameter optimization algorithms for prediction models. The literature survey made so far reveals that no such work has been reported to date.

2. Support Vector Machine (SVM)

Support vector machine, proposed by Vapnik [28], is a supervised learning system. It is constructed on the structural risk minimization (SRM) principle rather than empirical risk minimization (ERM). ERM is used in many artificial intelligence modeling approaches, including the neural network approach, so poor generalization and model overfitting are often encountered in neural networks when the sample size is small. In contrast to ERM, SRM minimizes the error on the training sample while also minimizing the upper bound of the expected risk. Thus SVM possesses a stronger generalization ability [29].

The prediction of the support vector machine can be described as follows. Let the training sample be defined as $\{(x_i, y_i),\ i = 1, \ldots, N\}$, where $y_i$ is the output corresponding to the input vector $x_i$. The nonlinear mapping function $\varphi(\cdot)$ maps the input sample to a high-dimensional feature space via a kernel function, so that each point $(x_i, y_i)$ in the training sample fits as closely as possible to a linear model, i.e., the regression function (the mapped prediction model) shown in (1):

$$ f(x) = w^{T}\varphi(x) + b, \quad (1) $$

where $w$ and $b$ are the weight vector and the bias of the regression function. It is known from (1) that an optimal choice of $w$ and $b$ is a prerequisite for building an accurate model. SRM minimizes the empirical risk and reduces the generalization error (namely overfitting) of a model at the same time. Thus, in order to penalize overfitting of a model based on the training sample, a loss function is introduced into SVM. Several loss functions have been developed for different kinds of problems [28]; in process modeling problems, the $\varepsilon$-insensitive loss function, which assigns zero loss when $|y - f(x)| \le \varepsilon$ and loss $|y - f(x)| - \varepsilon$ otherwise, is commonly used. Therefore, the optimization problem based on the regularized risk minimization rule can be written as follows:

$$ \min_{w,\,b,\,\xi,\,\xi^{*}} \ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{N}\left(\xi_i + \xi_i^{*}\right) $$
$$ \text{s.t.} \quad y_i - w^{T}\varphi(x_i) - b \le \varepsilon + \xi_i, \quad w^{T}\varphi(x_i) + b - y_i \le \varepsilon + \xi_i^{*}, \quad \xi_i,\ \xi_i^{*} \ge 0, \quad (2) $$

where $i = 1, \ldots, N$; $\xi_i$ and $\xi_i^{*}$ are the positive slack variables introduced to cope with infeasible constraints of the optimization problem [28, 30]. For the $\varepsilon$-insensitive loss function, $\varepsilon$ is the radius of the hypertube of the insensitive zone shown in Figure 1. For points inside the insensitive hypertube, the loss is deemed to be zero, whereas for points outside it, the training error loss is deemed to be significant and the penalization is calculated through the penalty factor $C$, which sets a balance between flatness and complexity of the model. The complexity of the model and the possibility of overfitting increase as $C$ increases, but if $C$ is too small, the training errors may increase. The number of support vectors is controlled by the size of the insensitive hypertube, because the points outside of the insensitive zone form the support vector group [22]. An increase in the radius $\varepsilon$ of the insensitive hypertube results in fewer support vectors and lower flexibility of the model. This may be beneficial for removing or reducing the influence of small stochastic noise on the output sample; however, a larger $\varepsilon$ does not necessarily imply that the invisible target function is picked up.

In most cases, optimization problem (2) is solved more easily by considering its dual problem. The process is as follows.

Using the Lagrange function, we obtain

$$ L = \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{N}\left(\xi_i + \xi_i^{*}\right) - \sum_{i=1}^{N}\alpha_i\left(\varepsilon + \xi_i - y_i + w^{T}\varphi(x_i) + b\right) - \sum_{i=1}^{N}\alpha_i^{*}\left(\varepsilon + \xi_i^{*} + y_i - w^{T}\varphi(x_i) - b\right) - \sum_{i=1}^{N}\left(\eta_i\xi_i + \eta_i^{*}\xi_i^{*}\right), \quad (3) $$

where $\alpha_i$, $\alpha_i^{*}$, $\eta_i$, and $\eta_i^{*}$ are Lagrange multipliers.

The dual variables in (3) have to satisfy the positivity constraints, i.e., $\alpha_i, \alpha_i^{*}, \eta_i, \eta_i^{*} \ge 0$, and for the solution of the optimization problem, the partial derivatives of $L$ with respect to the primal variables $(w, b, \xi_i, \xi_i^{*})$ are zero at the saddle point, i.e.,

$$ \frac{\partial L}{\partial b} = \sum_{i=1}^{N}\left(\alpha_i^{*} - \alpha_i\right) = 0, \quad (4) $$
$$ \frac{\partial L}{\partial w} = w - \sum_{i=1}^{N}\left(\alpha_i - \alpha_i^{*}\right)\varphi(x_i) = 0, \quad (5) $$
$$ \frac{\partial L}{\partial \xi_i} = C - \alpha_i - \eta_i = 0, \quad (6) $$
$$ \frac{\partial L}{\partial \xi_i^{*}} = C - \alpha_i^{*} - \eta_i^{*} = 0. \quad (7) $$

Substituting (4), (5), (6), and (7) into (3) yields the dual optimization problem as follows:

$$ \max_{\alpha,\,\alpha^{*}} \ -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(\alpha_i - \alpha_i^{*}\right)\left(\alpha_j - \alpha_j^{*}\right)K(x_i, x_j) - \varepsilon\sum_{i=1}^{N}\left(\alpha_i + \alpha_i^{*}\right) + \sum_{i=1}^{N}y_i\left(\alpha_i - \alpha_i^{*}\right) $$
$$ \text{s.t.} \quad \sum_{i=1}^{N}\left(\alpha_i - \alpha_i^{*}\right) = 0, \quad \alpha_i,\ \alpha_i^{*} \in [0, C], \quad (8) $$

where $K(x_i, x_j) = \varphi(x_i)^{T}\varphi(x_j)$ is the kernel function.

The dual variables $\eta_i$ and $\eta_i^{*}$ have been eliminated through conditions (6) and (7), because these variables no longer appear in the dual objective function but only in the dual feasibility conditions. Equation (5) can be rewritten as

$$ w = \sum_{i=1}^{N}\left(\alpha_i - \alpha_i^{*}\right)\varphi(x_i). \quad (9) $$

The kernel function is introduced in the SVM model to address the curse of dimensionality [27]; it allows the operations to be carried out in the feature space. Several kinds of kernel functions are recommended in the literature [29, 30]. One of them is the Gaussian radial basis function with standard deviation $\sigma$:

$$ K(x_i, x) = \exp\left(-\frac{\|x - x_i\|^{2}}{2\sigma^{2}}\right). \quad (10) $$

This kernel is mostly used because it is potentially better for handling higher-dimensional input spaces. Therefore, the final model with the optimum combination of parameters C, ε, and σ is expressed in [31] as

$$ f(x) = \sum_{i=1}^{N}\left(\alpha_i - \alpha_i^{*}\right)K(x_i, x) + b. \quad (11) $$
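As an illustration, model (11) corresponds directly to scikit-learn's SVR with an RBF kernel. The following minimal sketch (with placeholder data and illustrative parameter values, not the paper's settings) shows the mapping; note that scikit-learn parameterizes the kernel width as gamma = 1/(2σ²).

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder data standing in for (cutting parameters -> Ra) samples.
X = np.random.rand(20, 4)
y = np.random.rand(20)

C, epsilon, sigma = 10.0, 0.01, 1.0          # illustrative values only
model = SVR(kernel="rbf", C=C, epsilon=epsilon,
            gamma=1.0 / (2.0 * sigma**2))    # gamma encodes the RBF width
model.fit(X, y)                              # solves the dual problem (8)
y_pred = model.predict(X)                    # evaluates model (11)
```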

3. Artificial Bee Colony (ABC) Algorithm

The artificial bee colony (ABC) algorithm is an optimization method proposed by Karaboga [32] in 2005 to imitate the foraging behavior of honey bees. It is a fast swarm-intelligence method. One characteristic of the algorithm is that it does not need special information about the problem; it only needs to compare the relative quality of candidate solutions. The global optimum is eventually found by the colony based on the local optimizing behavior of individual artificial bees.

3.1. Bees Behavior Analysis

The artificial bee colony algorithm, derived from this biological behavior, consists of three basic components, namely, food sources, employed bees, and unemployed bees.

(i) Food sources: the position of a food source represents a feasible solution to the optimization problem. Many factors influence the quality of a food source, such as its distance from the nest, the richness of its energy, and the difficulty of extracting that energy. For simplicity, a single quantity is used to represent the quality of a food source.

(ii) Employed bees: they are attached to a particular food source which they are currently exploiting. They carry information (the distance, the direction, and the profitability of the food source) about this particular source and share it with other bees waiting in the nest with a certain probability.

(iii) Unemployed bees: they are looking for a food source to exploit. The unemployed bees include onlookers, who wait in the nest and find a food source through the information shared by the employed bees, and scouts, who search for new food sources randomly.
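As a rough illustration, these components map naturally onto a small data structure (a sketch; the class and field names are our assumptions, not the paper's):

```python
import numpy as np

class FoodSource:
    """One candidate solution in ABC (here: a (C, epsilon, sigma) triple)."""
    def __init__(self, position: np.ndarray, quality: float):
        self.position = position  # feasible solution in the search space
        self.quality = quality    # single scalar summarizing source quality
        self.trials = 0           # exploitations without improvement (used by scouts)
```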

3.2. Algorithm Implementation Process

In the ABC algorithm, the number of employed bees or onlooker bees is equal to the number of food sources. The search process of the ABC algorithm can be summarized as follows. Employed bees search around their food sources and update the positions (solutions) in their memory, replacing a previous source whenever a new source has a better nectar amount (fitness value). Employed bees share the information about food sources with onlooker bees by waggle dancing in the nest. Onlooker bees receive this information and choose food sources with a probability related to their nectar amount. Like employed bees, they check the nectar amount of the candidate source and update the position in their memory if the new source is better. During the search and update by employed and onlooker bees, if the number of times a food source has been exploited reaches the set upper limit without any improvement of its nectar amount, the food source is discarded, the corresponding employed bee becomes a scout bee, and the scout bee then generates a new food source randomly.

There are three control parameters in the basic ABC: the colony size, the maximum number of exploitations, and the maximum cycle number.

4. Experiment

Machining is a complicated process. Many parameters influence surface roughness in the machining process. According to the literature [6], these influencing factors can be divided into four classes: machining parameters, cutting tool properties, workpiece properties, and cutting phenomena; some of them are not measurable or controllable in actual machining. Once the machined workpiece is given, the workpiece properties cannot be modified and the cutting phenomena cannot be controlled by the operator [21]. But the machining parameters and cutting tool properties can be chosen to obtain good processing quality. The main machining parameters are cutting speed, feed rate, and depth of cut (or radial depth of cut and axial depth of cut). Among the cutting tool properties, the tool nose radius is a frequently selected parameter for analyzing surface roughness. Therefore, in our study, the influence parameters of surface roughness are mainly selected from the cutting speed, feed rate, depth of cut, and tool nose radius. In this paper, we conduct experiments on turning or milling processes for three different materials: AISI 1045 steel, TC18 titanium alloy, and Compacted Graphite Iron. For AISI 1045 steel, the machining form is turning, the processing machine is a CKD6150A, and the workpiece is a cylinder of 40 mm diameter. A cemented carbide tool is selected. The four most dominating control parameters for AISI 1045 steel, namely, cutting speed (v) (m/min), feed rate (f) (mm/rev), depth of cut (ap) (mm), and tool nose radius (r) (mm), are considered as input parameters. For TC18 titanium alloy and Compacted Graphite Iron, the experiments are both conducted on a milling machine (VDL-600A), and the workpieces are cubes. The selected tool is a cemented carbide tool; the tool types for TC18 titanium alloy and Compacted Graphite Iron are 1135NB9010 and bgp-800-fmb27, respectively. For TC18 titanium alloy, the machining parameters including cutting speed (v) (m/min), feed rate (f) (mm/rev), radial depth of cut (ae) (mm), and axial depth of cut (ap) (mm) are considered as input parameters. For Compacted Graphite Iron, the input parameters are cutting speed (v) (m/min) and feed speed (vf) (mm/min). The response parameter for all three experiments is the average surface roughness (Ra) of the machined surface. Based on the available settings of the processing machines, the levels of the input parameters of AISI 1045 steel, TC18 titanium alloy, and Compacted Graphite Iron are selected and presented in Tables 1, 2, and 3, respectively. The radial and axial depths of cut for Compacted Graphite Iron are fixed at 75 mm and 1 mm, respectively.

The parameter combinations for AISI 1045 steel are generated by the Box-Behnken experimental design method [33]; an orthogonal experimental design is selected to determine the parameter combinations of the milling process for TC18 titanium alloy. The average surface roughness of the workpieces is measured with a Mitutoyo Surftest SJ-310, a mobile surface roughness meter. There are 29 unique parameter combinations for AISI 1045 steel, of which 20 are chosen randomly for training the learning process and the remaining 9 are left for testing. For TC18 titanium alloy, there are 9 parameter combinations, of which 6 are used as the training set and 3 as the testing set. For Compacted Graphite Iron, there are 20 parameter combinations, of which 16 are used as the training set and 4 as the testing set.
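A random split of this kind can be sketched as follows (the seed is an arbitrary assumption; the actual data are given in the supplementary tables):

```python
import numpy as np

rng = np.random.default_rng(0)             # assumed seed, for illustration only
idx = rng.permutation(29)                  # 29 AISI 1045 parameter combinations
train_idx, test_idx = idx[:20], idx[20:]   # 20 for training, 9 for testing
```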

5. Analysis and Discussion

Prediction models for Ra of AISI 1045 steel, TC18 titanium alloy, and Compacted Graphite Iron are produced by SVM based on the training samples, and the prediction models are tested on the testing samples. The versions of the optimization algorithms compared in the text are their standard versions.

5.1. Model Development

As described in Section 2, the parameters with the greatest influence in the SVM model are C, ε, and σ. The importance of parameter determination for an accurate SVM prediction model is obvious. The artificial bee colony (ABC) algorithm can be used effectively for this purpose. Moreover, other well-known evolutionary and swarm-based algorithms, such as the differential evolution (DE) algorithm, the genetic algorithm (GA), and particle swarm optimization (PSO), may also serve the purpose. In order to evaluate the prediction performance of the ABC-SVM model and select a satisfactory optimization and prediction model, a comparative study of the performances of the ABC-SVM, DE-SVM [34], GA-SVM [35], and PSO-SVM [36] models is carried out.

Since SVM reduces the probability of generalization error and a high training accuracy cannot guarantee good prediction, minimizing the mean square error (MSE) of the predicted testing outputs is used to determine the internal parameters of SVM in this study. MSE is defined as

$$ \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^{2}, \quad (12) $$

where $N$ denotes the number of experiments and $y_i$ and $\hat{y}_i$ are the experimental and predicted values, respectively. MSE is regarded as the objective function for determining C, ε, and σ.

The termination criterion is set through a predefined maximum number of iterations. A simple ABC algorithm for searching the optimal combination of C, ε, and σ with minimum MSE is as follows.

Step 1. Choose the number of bees in the colony (NP) and the number of solutions, i.e., food source positions (n); the numbers of employed bees and onlooker bees are both n. The maximum number of iterations is MaxIter, and the maximum number of repeated exploitations of a food source is limit. Set D = 3, which is the dimension, i.e., the number of optimization parameters (C, ε, σ).

Step 2. Randomly initialize the positions of the n food sources $x_i$, i.e., n initial combinations of C, ε, and σ.

Step 3. Set iter = 1 and the exploitation counters $L_i = 0$ (i = 1, ..., n).

Step 4. Set i = 1.

Step 5. The ith employed bee generates a new food source $v_i$ nearby $x_i$ using

$$ v_{ij} = x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right), \quad (13) $$

where $j \in \{1, \ldots, D\}$; $x_k$ ($k \ne i$) is randomly selected from the n food sources. $\phi_{ij}$ is a uniformly distributed random number between $-1$ and $1$ and determines the degree of disturbance.

Step 6. Calculate the MSE (objective function value) of $v_i$ and $x_i$; if the MSE value of $v_i$ is smaller (better) than that of $x_i$, then replace $x_i$ with $v_i$ and set $L_i = 0$ and $i = i + 1$; otherwise $x_i$ is kept unaltered; set $L_i = L_i + 1$ and $i = i + 1$.

Step 7. Check whether $i > n$; if yes go to Step 8; otherwise go to Step 5.

Step 8. Set i = 1.

Step 9. The ith onlooker bee selects one of the food sources shared by the employed bees with the probability $p_i$ shown in (14) below and generates a new food source $v_i$ according to (13) nearby the selected food source:

$$ p_i = \frac{\mathrm{fit}_i}{\sum_{j=1}^{n}\mathrm{fit}_j}, \quad (14) $$

where $\mathrm{fit}_i$ is the fitness of the ith food source, commonly taken as $\mathrm{fit}_i = 1/(1 + \mathrm{MSE}_i)$.

Step 10. Calculate the MSE of $v_i$; if the MSE value of $v_i$ is smaller than that of $x_i$, then replace $x_i$ with $v_i$ and set $L_i = 0$ and $i = i + 1$; otherwise $x_i$ is kept unaltered; set $L_i = L_i + 1$ and $i = i + 1$.

Step 11. Check whether $i > n$; if yes go to Step 12; otherwise go to Step 9.

Step 12. Find the smallest MSE value over all food sources and record it as bestmse.

Step 13. Check whether there exists k such that $L_k > \mathrm{limit}$; if yes go to Step 14; otherwise go to Step 16.

Step 14. The corresponding kth employed bee becomes a scout bee, and the scout bee generates a new nectar source by

$$ x_{kj} = x_j^{\min} + \mathrm{rand}(0, 1)\left(x_j^{\max} - x_j^{\min}\right), \quad (15) $$

where $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper limits of C, ε, and σ, respectively. The new source replaces the corresponding kth food source; set $L_k = 0$.

Step 15. Calculate the MSE of the new $x_k$; if it is smaller than bestmse, then bestmse is updated to this value; otherwise bestmse is kept unaltered.

Step 16. Set iter = iter + 1.

Step 17. Check whether iter > MaxIter holds; if yes go to Step 18; otherwise go to Step 4.

Step 18. Save the position of the food source with bestmse as the optimum position, i.e., the optimum combination of C, ε, and σ with minimum MSE in fitting the SVM regression model on the sample data.

Therefore, the final position of the food source becomes the global optimum in the given search space. The flow chart for finding the optimal combination of C, ε, and σ in the SVM model through the ABC algorithm by minimizing MSE is shown in Figure 2.
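To make Steps 1–18 concrete, the following is a compact, runnable sketch of the ABC search over (C, ε, σ), assuming scikit-learn's SVR as the regression engine and synthetic placeholder data in place of the experimental sets; the bounds, seed, and colony size are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Synthetic stand-ins for (cutting parameters -> Ra); replace with real data.
X_train, y_train = rng.random((20, 4)), rng.random(20)
X_test, y_test = rng.random((9, 4)), rng.random(9)

LOW, HIGH = 1e-4, 100.0    # assumed shared search range for C, epsilon, sigma
D = 3                      # dimension: (C, epsilon, sigma)
N_FOOD = 10                # food sources = employed bees = onlooker bees
LIMIT = N_FOOD * D         # abandonment limit, cf. Eq. (16)
MAX_ITER = 100

def mse(sol):
    """Objective of Eq. (12): test-set MSE of an SVR fitted with (C, eps, sigma)."""
    C, eps, sigma = sol
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=1.0 / (2 * sigma**2))
    model.fit(X_train, y_train)
    return np.mean((model.predict(X_test) - y_test) ** 2)

foods = rng.uniform(LOW, HIGH, size=(N_FOOD, D))   # Step 2: random init
costs = np.array([mse(f) for f in foods])
trials = np.zeros(N_FOOD, dtype=int)               # Step 3: counters L_i

def neighbor(i):
    """Eq. (13): perturb one coordinate toward/away from a random partner."""
    k = rng.choice([j for j in range(N_FOOD) if j != i])
    j = rng.integers(D)
    v = foods[i].copy()
    v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(v, LOW, HIGH)                   # keep within bounds

def try_replace(i, v):
    """Greedy selection between parent and candidate (Steps 6 and 10)."""
    c = mse(v)
    if c < costs[i]:
        foods[i], costs[i], trials[i] = v, c, 0
    else:
        trials[i] += 1

for _ in range(MAX_ITER):
    for i in range(N_FOOD):                        # employed-bee phase
        try_replace(i, neighbor(i))
    fit = 1.0 / (1.0 + costs)                      # fitness for Eq. (14)
    p = fit / fit.sum()
    for i in rng.choice(N_FOOD, N_FOOD, p=p):      # onlooker-bee phase
        try_replace(i, neighbor(i))
    worn = np.argmax(trials)                       # scout-bee phase
    if trials[worn] > LIMIT:                       # Eq. (15): restart the source
        foods[worn] = rng.uniform(LOW, HIGH, D)
        costs[worn] = mse(foods[worn])
        trials[worn] = 0

best = np.argmin(costs)
print("best (C, epsilon, sigma):", foods[best], "MSE:", costs[best])
```

The greedy replacement in try_replace implements Steps 6 and 10, the probabilistic choice in the onlooker phase implements Step 9, and at most one scout is activated per cycle, matching Steps 13–15.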

For the average surface roughness prediction in the three experiments of Section 4, the common parameter values used in each optimization algorithm, including the population size, the maximum iteration number, the parameter range, and the number of runs, are set to be the same. For all three experiments, the population size is set to 20, the maximum iteration number is 100, and the search range of the internal parameters C, ε, and σ is the same for all algorithms. Each experiment is repeated 30 times. It is well known that the control parameters of optimization algorithms have a significant influence on their performance, but in most comparative studies the values of these parameters differ for different types of problems.

The other specific parameters of each optimization algorithm are shown as follows.

GA settings: the genetic algorithm (GA) is a computational model that simulates natural evolution following Darwin's theory of biological evolution and the genetic mechanism. It searches for optimal solutions by simulating the natural evolutionary process. There are many improved versions of the GA. In our experiments, a binary-coded standard GA is used; it consists of fitness scaling, seeded selection, random selection, crossover, mutation, and elitism [26]. The single-point crossover rate is set to 0.8, and the mutation rate is set to 0.01. In the standard GA, genetic diversity lost during reproduction and crossover is restored by the mutation operation. The generation gap indicates the proportion of the population being replaced; its value is chosen to be 0.9.

DE settings: the differential evolution (DE) algorithm is an efficient global optimization algorithm. It is a population-based heuristic search algorithm in which each individual in the population corresponds to a solution vector. The evolution process of DE is similar to the genetic algorithm, including crossover, mutation, and selection operations; the main difference is that genetic algorithms rely on the crossover operation to obtain better solutions, while DE relies on mutation. In our experiments, the scaling factor F is set to 0.8; F takes values in the interval [0, 2] and influences the difference fluctuation between two solutions. The crossover rate determines the diversity of the population, and its value is set to 0.9 as advised in [37].

PSO settings: particle swarm optimization is an evolutionary algorithm developed by J. Kennedy and R. C. Eberhart. It starts from random solutions and finds the optimal solution through iteration, with the quality of a solution evaluated by a fitness function. In PSO, the global optimum is found by following the currently searched optimal values; there are no crossover and mutation operations in the solution search process, so it is simpler than the genetic algorithm. The cognitive factor (c1) and social factor (c2) are constants representing the weights of personal and population experience, respectively; in this study, both are set to 1.8. The inertia weight controls how a particle's previous velocity affects its velocity in the next iteration; its value is 0.6 as advised in [38].

ABC settings: the standard ABC employed in our experiments has only one control parameter, limit, besides the common parameters (population size and maximum iteration number). If the quality of a food source has not improved after it has been exploited limit times, the food source is not exploited anymore and is abandoned. The limit value suggested in [26] is expressed through the optimization dimension (D) and the colony size (NP):

$$ \mathrm{limit} = \frac{NP \times D}{2}. \quad (16) $$
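Collected in one place, the settings above might be encoded as follows (a sketch; the dictionary layout and key names are ours, not from any particular library):

```python
# Shared settings for all four optimizers (Section 5.1).
COMMON = {"population_size": 20, "max_iterations": 100, "runs": 30}

D = 3                                # optimization dimension: (C, epsilon, sigma)
NP = COMMON["population_size"]

# Algorithm-specific control parameters as reported above.
SETTINGS = {
    "GA":  {"crossover_rate": 0.8, "mutation_rate": 0.01, "generation_gap": 0.9},
    "DE":  {"scaling_factor_F": 0.8, "crossover_rate": 0.9},
    "PSO": {"c1": 1.8, "c2": 1.8, "inertia_weight": 0.6},
    "ABC": {"limit": NP * D // 2},   # Eq. (16): (NP x D) / 2
}
```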

5.2. Results and Discussion

Based on the training samples and testing samples of the machining experiments for AISI 1045 steel, TC18 titanium alloy, and Compacted Graphite Iron, ABC-SVM, DE-SVM, GA-SVM, and PSO-SVM models are used to predict the average surface roughness.

In order to quantitatively evaluate the prediction performance, the mean square error (MSE) (i.e., the fitness function), the mean absolute error (MAE), the determination coefficient (R2), and the average running time (T) are selected as the evaluation indicators. MAE and R2 are given as

$$ \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|, $$
$$ R^{2} = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^{2}}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^{2}}, $$

where $N$ denotes the number of experiments and $y_i$ and $\hat{y}_i$ are the measured and predicted values, respectively.

MSE and MAE evaluate the deviation between the predicted and measured values: MAE reflects the true average error, while MSE magnifies larger errors; the smaller they are, the higher the prediction accuracy. The determination coefficient R2 of the model represents the proportion of the total variation that is explained and is one of the indicators measuring the effectiveness of the established model; the closer R2 is to 1, the better the model fits the actual values. The average running time (T) reflects the time complexity of the model and the prediction efficiency of the different optimization algorithms.
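For reference, the three accuracy indicators can be computed directly with scikit-learn's metrics (a sketch; the running time T would be measured by timing the optimization loop itself, e.g., with time.perf_counter()):

```python
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def evaluate(y_true, y_pred):
    """Return the three accuracy indicators used in the comparison tables."""
    return {"MSE": mean_squared_error(y_true, y_pred),
            "MAE": mean_absolute_error(y_true, y_pred),
            "R2":  r2_score(y_true, y_pred)}
```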

In order to make the comparison clear, values below 10^-12 are assumed to be 0, and after obtaining the optimal parameters (C, ε, and σ) by ABC, DE, GA, and PSO, the random seed is set to 7 in the SVM prediction program. The average running time (T) and the mean values, best values, worst values, and standard deviations (StdDev) of MSE, R2, and MAE obtained over 30 runs for the three groups of prediction samples are reported in Tables 4–6.

From Tables 4–6, regarding the predicted average surface roughness based on the machining data of AISI 1045 steel, TC18 titanium alloy, and Compacted Graphite Iron, all models demonstrate high prediction accuracy and similar stability on AISI 1045 steel, because all optimization algorithms use operators that produce population diversity and variable step sizes. It can be seen from Table 4 that DE-SVM has the best values of MSE, R2, and MAE, followed by ABC-SVM and GA-SVM, and the prediction accuracy of ABC-SVM is only slightly lower than that of DE-SVM. For TC18 titanium alloy, shown in Table 5, the prediction results of all models are not ideal, since the size of the training sample is too small; all values of R2 are negative. Table 6 shows that the highest prediction accuracy for the average surface roughness of Compacted Graphite Iron is produced by ABC-SVM, and the prediction accuracy of GA-SVM and PSO-SVM is much lower than that of ABC-SVM. Through a comprehensive comparison across Tables 4–6, we can see that ABC-SVM has more stable prediction accuracy than the other three prediction models when the sample size is sufficient to establish a predictive model. Furthermore, it is clear that ABC-SVM costs the least time in calculating the prediction; namely, ABC-SVM has the lowest time complexity compared to the others.

For the optimization of MSE by ABC-SVM, DE-SVM, GA-SVM, and PSO-SVM with the same initial seeds, the convergence histories of the simulation process for minimizing the MSE of the three machining materials by the four prediction models are shown in Figures 3–5, respectively. These figures show that the ABC algorithm converges faster when optimizing the internal parameters of the SVM model than the DE, GA, and PSO algorithms. This observation is consistent with the result that ABC-SVM has the lowest time complexity compared to the others.

In sum, compared to the DE-SVM, GA-SVM, and PSO-SVM models, ABC-SVM maintains better prediction performance for average surface roughness on our experimental data whose training sets are sufficient to develop the prediction model. For different machining data, the ABC algorithm can find better internal parameters for the support vector machine model in less time than DE, GA, and PSO. The reasons for this conclusion can be understood from the following aspects of the optimization process of the ABC algorithm.

(a) New or Candidate Solution Production. In ABC, the production of a new or candidate solution depends on the difference between a randomly selected part of the corresponding parent and a randomly selected solution in the population, rather than on the crossover operator employed to produce candidate solutions in GA and DE. This operation improves the convergence rate of the search toward a local minimum.

(b) Solution Selection. In the ABC algorithm, a greedy method, as in DE, is carried out by employed bees and onlooker bees to select between the candidate and the parent solutions. Stochastic selection of solutions based on fitness values (i.e., solution quality), similar to the ''roulette wheel selection'' in GA, is implemented by onlooker bees. This selection scheme allows the promising areas of the search space to be searched in a shorter time and in more detail.

(c) Population Diversity. In GA and DE, diversity in the population is obtained by the mutation operation, which modifies part of a solution. In ABC, apart from partial modification of a solution, which is similar to the mutation process in GA and DE, diversity is mainly controlled by abandoning an entire solution in the population and letting a scout put a stochastically generated new one into the population. The combination of these two control mechanisms strikes a good balance between global search and local search.

(d) Control Parameters. Besides the common parameters in Section 5.1, there are two additional parameters (crossover rate and scaling factor) to be selected in a standard DE, three more control parameters (crossover rate, mutation rate, and generation gap) for a standard GA, and at least three control parameters (cognitive and social factors and inertia weight) for a basic PSO. In contrast, there is only one control parameter in the ABC algorithm, namely, limit.

Finally, the optimized values of C, ε, and σ that minimize MSE for the average surface roughness prediction of the three materials are shown in Tables 7–9, and the prediction results are shown in Tables 10–12.

6. Concluding Remarks

The average surface roughness (Ra) is an important qualitative aspect of the products obtained in a manufacturing process. An accurate prediction model of Ra is a precondition for maintaining good processing quality and effective adjustment of processing parameters in smart manufacturing. Because of the stochastic nature of the machining process, small-scale random fluctuations of Ra are inevitable. An SVM model established with optimum internal parameters (C, ε, and σ) can efficiently capture these random variations and robustly predict Ra. In order to seek an efficient optimization algorithm for determining the optimum internal parameters of SVM and guaranteeing its prediction performance, this paper developed an SVM model whose internal parameters are optimized by the ABC algorithm. Moreover, a comprehensive and detailed comparison study on the performances of the SVM model optimized by well-known evolutionary and swarm-based algorithms (DE, GA, PSO, and ABC) was carried out to choose a satisfactory optimization algorithm for the SVM model, and the mechanisms of these optimization algorithms were also deeply analyzed and compared. The ABC-SVM, DE-SVM, GA-SVM, and PSO-SVM models were used to predict the average surface roughness on three groups of experimental data: AISI 1045 steel from the turning experiment and TC18 titanium alloy and Compacted Graphite Iron from the milling experiments. Mean square error (MSE), determination coefficient (R2), mean absolute error (MAE), and average running time (T) were selected as the evaluation indicators. The comparison results reveal that the ABC-SVM model performs better than DE-SVM, GA-SVM, and PSO-SVM in terms of prediction accuracy and running time. This is due to the advantages of the ABC algorithm over the DE, GA, and PSO algorithms in four aspects of the optimization process: new or candidate solution production, solution selection, population diversity, and control parameters.

Finally, we would like to conclude the paper by mentioning some future research directions. First, a more efficient parameter optimization algorithm for intelligent prediction models can be developed. Second, we can focus on designing fast and accurate machine learning algorithms based on big data in smart manufacturing that can be applied to high-dimensional problems and real-time prediction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

Juan Lu and Xiaoping Liao are co-first authors.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This research was supported by the Guangxi Key Laboratory of Processing for Nonferrous Metals and Featured Materials, the National Natural Science Foundation of China (NSFC) (Grant numbers 51665005 and 61806058), the Innovation Project of Guangxi Graduate Education (Grant number YCBZ2017015), the Project of Guangxi Colleges and Universities Key Laboratory Breeding Base for Coastal Mechanical Equipment Design, Manufacturing and Control (Grant number GXLH2016ZD-06), the Guangxi Key Laboratory of Manufacturing Systems and Advanced Manufacturing Technology (Grant number 17-259-05S008), and the High-level Research Project of Qinzhou University (Grant number 16PYSJ06).

Supplementary Materials

The supplementary materials contain the data used in this study, i.e., the machining parameter combinations and corresponding surface roughness values of the turning experiment for AISI 1045 steel and the milling experiments for TC18 titanium alloy and Compacted Graphite Iron.