Abstract

This work addresses the prediction and optimization of the average surface roughness (Ra) and maximum flank wear (Vbmax) of 6061 aluminum alloy during high-speed milling. The investigation was conducted on a DMU 50 CNC 5-axis machine with Ultracut FX 6090 cutting fluid. Four factors were examined: the table feed rate, cutting speed, depth of cut, and cutting length. Three levels of each factor were used to conduct 81 experimental runs, in which Ra and Vbmax were measured as the response parameters. We applied a two-pronged approach that combines machine learning (ML) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to model and optimize Ra and Vbmax. Four ML models were used to predict Ra and Vbmax: linear regression (LIN), support vector machine regression (SVR), gradient boosted trees (GBR), and an artificial neural network (ANN). The input variables were the significant factors that affect surface quality and tool wear: the feed rate, depth of cut, cutting speed, and cutting time. Several quality metrics were employed to quantify the performance of the models: the root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R2). SVR and ANN were found to have the best predictive performance for Ra and Vbmax, respectively. These models and the NSGA-II-based approach were then employed for multiobjective optimization of the cutting parameters during high-speed milling of aluminum 6061. Fifty Pareto solutions were found, with Ra in the range of 0.257 to 0.308 µm and Vbmax in the range of 136.198 to 137.133 µm. Experimental validations were then conducted to confirm that the optimum solution was within an acceptable error range: the absolute percentage errors for Ra and Vbmax were 2.5% and 1.5%, respectively. This work thus proposes an effective strategy for efficiently combining machine learning techniques with the NSGA-II multiobjective optimization algorithm.
The experimental validations have reflected the potential for applying this strategy in various machining-optimization problems.

1. Introduction

Aluminum alloys have been widely employed in various areas of engineering, such as the automotive and aerospace industries. For instance, various automotive components are made out of aluminum alloys, including wheels, panels, structures, pistons, brake drums, and piston sleeves, while aluminum alloy aircraft parts include fittings, gears, and shafts [1]. One of the main advantages of this material is its tremendous strength-to-weight ratio in comparison to steel and cast iron. Therefore, it can be used as a favorable alternative to these materials in manufacturing.

Aluminum alloys are often used in traditional machining processes with typical cutting conditions. However, traditional machining is considered to have low efficiency, especially from the perspectives of machining cost and surface quality. As an alternative, high-speed milling can provide surface quality and gloss comparable to those obtained with a grinding method [2]. Moreover, high-speed milling allows us to obtain a better surface-roughness finish and better geometric accuracy than traditional machining methods. Furthermore, surface roughness also decreases during high-speed drilling as the cutting speed increases, as observed experimentally by Kannan et al. [3].

High-speed machining can also avoid the effects of ductility and built-up edges on the surface finish of aluminum alloys. In other words, high-speed machining can result in better roughness, longer tool life, and a higher material removal rate. Thus, this technique significantly helps to increase productivity, and studies on high-speed milling are being done to improve the cost and time of machining processes.

Parameter selection for a high-speed milling process is crucial because the parameters directly affect the manufacturing process. The average surface roughness Ra is one of the most critical performance criteria in such a process and is required to ensure the desired aesthetics, corrosion resistance, fatigue strength, and tribological characteristics of the product. Various experimental studies have pointed out that Ra is affected by the feed rate, cutting speed, depth of cut, tool geometry, tool wear, temperature, and built-up edge formation [4]. Tool wear affects the finished surface and dimensional tolerance of the product, as well as the stability of the machining operation [5]. Pimenov et al. [6] used Grey relational analysis (GRA) to find optimal cutting parameters for face milling AISI 1045 steel; to implement multiobjective optimization by GRA, multiple regression analysis was used to model the surface roughness, material removal rate, sliding distance, and tool life as functions of the feed per tooth, cutting speed, and flank wear. The wear of the cutting tool affects the surface roughness of the part in finish milling, so it is necessary to determine the correlation between wear and roughness to improve machining efficiency [7, 8]. Therefore, more in-depth studies on the process parameters are needed to achieve the desired characteristics.

Recently, several studies have been conducted on the prediction of surface roughness using numerical techniques, such as machine learning [9]. Various ML techniques have also been used to predict surface roughness, such as random forest, regression trees, radial basis regression [10], gradient boosting trees [11], decision trees [12], support vector machine [13], and artificial neural network methods [14, 15]. The input parameters used in these studies include the spindle speed, feed rate, depth of cut, vibration along axes, cutting fluids, and cutting forces. The results indicated that ANN models have more potential for predicting Ra than traditional regression techniques [16].

The prediction of tool wear also plays a crucial role in the industry because it allows us to obtain proper planning and control of machining parameters, as well as optimized cutting conditions. Machining must be carefully monitored to predict wear over time. Different ML algorithms have been widely applied to predict tool wear, including ANNs [17], random forest [18], SVMs [19], decision trees, and feedforward BpNN [20]. According to the literature, various variables affect the prediction of tool wear, including the feed rate, depth of cut, cutting speed, and cutting force. Because of the multivariable and nonlinear correlations between the control and the performance variables, it is difficult to establish an accurate processing model to determine optimal machining conditions.

Interestingly, combinations of ML models and optimization algorithms have not been widely investigated, especially for the problem of high-speed machining. Recently, studies have integrated ANNs with genetic algorithms to optimize cutting parameters with minimum surface roughness in a milling process [21]. Metaheuristic algorithms can effectively deal with multiobjective optimization in engineering problems [22]. Moreover, these algorithms can efficiently optimize multiple objectives simultaneously [23]. Multiobjective methodologies have been successfully implemented in cutting-parameter optimization [24]. Unune et al. combined NSGA-II and an ANN to model and optimize the material removal rate (MRR) and Ra in grinding [25]. Kayaroganam et al. [26] combined a fuzzy model and the NSGA-II technique to determine the optimal drilling conditions for the minimum thrust force and torque in reinforcing AA6061 aluminum alloy drilling.

When machining AISI 6061 aluminum alloy, the surface roughness and tool wear depend on various cutting parameters and involve complex nonlinear problems. Therefore, defining performance parameters is challenging. Although experimental studies have given different formulas for determining Ra and tool wear values, it is difficult to define general formulas. Therefore, the use of machine learning and optimization techniques can help uncover nonlinear relationships between desired goals and problem inputs, especially the depth of cut, speed feed, cutting speed, cutting time, and tool type. Without solving complicated mechanical equations, the proposed machine learning model can effectively predict and analyze the surface roughness and tool wear when machining Al 6061.

Interestingly, the NSGA-II multiobjective optimization technique allows for the optimization of surface roughness and tool wear simultaneously. Lastly, the proposed optimization strategy has been validated using empirical tests. The information obtained could help to assess instrument surface roughness and wear quickly while reducing the required number of costly and time-consuming laboratory experiments.

Recent studies on high-speed milling have commonly been based on mathematical models and single-objective optimization [2, 27, 28]. However, to our knowledge, research on multiobjective optimization in high-speed milling 6061 aluminum alloy is rare, especially with a combination of machine learning and multiobjective optimization algorithms. Therefore, the aim of this work is to find the optimum solution to minimize Ra and Vbmax simultaneously in the high-speed milling of 6061 aluminum alloy.

Four predictive modeling algorithms were analyzed: LIN, SVR, GBR, and ANN. The results were compared using the following metrics: RMSE, MAE, and R2. The two best predictive models were then combined with NSGA-II to find the optimum combination of input variables that achieves the optimization goals. Finally, the optimal values of the cutting parameters were validated by five experiments.

2. Research Significance

The surface roughness and tool wear of 6061 aluminum alloy during high-speed milling are complex nonlinear problems that are influenced by a variety of cutting parameters, making estimation difficult. Although a number of experimental investigations have been conducted to address this issue, it is difficult to derive a generalized formulation that takes into account all of the influential variables. Machine learning and optimization techniques could be used to investigate nonlinear correlations between desired targets and problem inputs, such as the feed rate, cutting speed, depth of cut, and cutting length.

For the first time, a hybrid machine learning and NSGA-II optimization technique was created and trained to assess surface roughness and tool wear of 6061 aluminum alloy during high-speed milling in this study. The model was trained and verified using experimental data gathered from the available literature. The approach was able to predict and analyze the surface roughness and tool wear without having to solve difficult mechanical equations. Notably, the multiobjective NSGA-II optimization technique allowed for simultaneous optimization of the surface roughness and tool wear. Lastly, the proposed optimization approach was tested in experiments. The results could be used for quick measurements of surface roughness and tool wear and reduce the need for costly and time-consuming laboratory studies.

3. Materials and Methods

3.1. Methodology

A combination of multiobjective optimization techniques, NSGA-II and ML, was used to find optimal solutions. Four predictive machine learning algorithms were first used to predict Ra and Vbmax: LIN, SVR, GBR, and ANN. The two best models were then identified and combined with the NSGA-II algorithm to define the optimal machining parameters. The processing conditions in wet machining, namely the table feed rate, cutting speed, depth of cut, and cutting length, were considered as the input parameters for the problem.

The flowchart in Figure 1 illustrates the study methodology. The methodology involved the collection of experimental data, dataset extraction, feature selection, and data normalization to predict Ra and Vbmax using the four predictive modeling methods. The figure also illustrates how to find the best hyperparameters by using the GridSearchCV technique. The final model was selected to predict Ra and tool wear based on the smallest value of RMSE for the test dataset. Then, the optimal solutions were identified on the Pareto front according to the constraints and minimum Ra and Vbmax. Five verification experiments were then conducted to validate the optimal values of Ra and Vbmax found by the numerical model.
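The workflow above (split, normalize, fit the four model families, select by test RMSE) can be sketched as follows. This is an illustrative outline only: the synthetic data, column meanings, and model settings are placeholders, not the study's experimental dataset or tuned hyperparameters.

```python
# Sketch of the model-selection pipeline: normalize, fit four model
# families, keep the one with the smallest test-set RMSE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(81, 4))     # stand-in: feed rate, speed, depth, time
y = 0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 2] + 0.01 * rng.normal(size=81)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)          # data normalization step
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "LIN": LinearRegression(),
    "SVR": SVR(),
    "GBR": GradientBoostingRegressor(random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                        random_state=0),
}
rmse = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te)) ** 0.5
        for name, m in models.items()}
best = min(rmse, key=rmse.get)               # final model: smallest test RMSE
```

In the actual study, each candidate model would additionally be tuned with GridSearchCV before this comparison.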

3.2. Machine Learning Techniques
3.2.1. Linear Regression

The main objective of linear regression is to find the relationship between the input data and the target variable. When there is only one input variable, the method is called simple linear regression, and when there are several input variables, it is called multiple linear regression. Linear regression is a powerful statistical method for finding the relation between input and output variables. Therefore, it has been employed for many applications. For example, linear regression has been used for face recognition. Other applications in fields such as mechanical and civil engineering can also be found [29]. However, the technique is only suitable for linear problems.
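As a minimal illustration of the idea, multiple linear regression can be solved directly by ordinary least squares; the data below are arbitrary, with a known linear target so the recovered coefficients can be checked.

```python
# Multiple linear regression via ordinary least squares (numpy).
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 1.0      # known linear target
A = np.hstack([X, np.ones((len(X), 1))])     # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # solve min ||A b - y||^2
# coef recovers the slopes and intercept: [2.0, 0.5, 1.0]
```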

3.2.2. SVM Regression

The SVM method can be used for classification and regression problems and is one of the classical machine learning techniques. The method was first proposed by Vapnik et al. [30]. Many applications have been proposed using SVMs as prediction models, and they have been found to perform well. For example, Byvatov and Schneider used an SVM in a data-driven method for bioinformatic applications [31]. An SVM can also be employed in hydrology, biology, and many other applications [32].

Based on statistical learning, the main idea of the SVM is to divide a given input dataset into two categories separated by a hyperplane. The SVM maps the input data to points in space with the aim of maximizing the gap between the two subsets of data; the points closest to the hyperplane are called support vectors. One of the main advantages of the SVM method is its ability to work with a multidimensional input space, which is beneficial in terms of computer memory. However, it scales poorly to large datasets, and noise in the input data needs to be filtered out before training.
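A small SVR fit (scikit-learn) illustrates the regression behavior described above; the RBF kernel, C, and epsilon values here are arbitrary choices, not the study's tuned settings.

```python
# Fit an epsilon-insensitive SVR to a smooth 1-D function and inspect
# its support vectors.
import numpy as np
from sklearn.svm import SVR

X = np.linspace(0, 6, 40).reshape(-1, 1)
y = np.sin(X).ravel()

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
pred = model.predict([[1.5]])     # close to sin(1.5)
sv = model.support_vectors_       # the "support vectors" named above
```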

3.2.3. Gradient Boosted Trees

The gradient boosted tree (GBR) method is a supervised learning algorithm that provides accurate predictive models [33]. The main strengths of GBR are its computation time, predictive accuracy, and scalability compared to other machine learning models. In view of these advantages, GBR has been applied in many scientific fields. For example, in one study [34], GBR was employed for the prediction of miRNA-related disease. In banking, GBR has been used to predict a US banking meltdown. In mechanical machining, GBR is also considered an excellent approach for predicting mechanical properties [33]. GBR is a gradient tree boosting algorithm in which overfitting is avoided by introducing regularization terms.
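A brief sketch of gradient boosting with the kind of regularization controls mentioned above (shallow trees, subsampling); the settings and data are placeholders, not the study's tuned hyperparameters.

```python
# Gradient boosting on a simple nonlinear target; subsample < 1.0 and a
# shallow max_depth act as regularization against overfitting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1]             # nonlinear target

gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                max_depth=2, subsample=0.8,
                                random_state=0).fit(X, y)
pred = gbr.predict([[0.5, 0.5, 0.5]])        # true value is 0.5
```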

3.2.4. ANN Regression

ANNs are machine learning models that are inspired by the biological neural networks of human brains [35]. The main idea of an ANN is to learn by detecting patterns and relations between components in input data. In other words, it can be said that an ANN is constructed through experience, not programming. There are several types of ANNs, such as backpropagation neural networks [36], probabilistic neural networks [37], convolutional neural networks [38], time-recurrent neural networks [39], and long- and short-term memory networks [40]. ANNs consist of several artificial neurons (or computing nodes) that send and receive signals to and from one another.

The performance of an ANN model depends heavily on the way in which the neurons are connected to each other. In general, ANN models have three main layers: (i) an input layer where the input data are entered, (ii) hidden layers where the model is trained and tested using the input data from the previous layer, and (iii) an output layer where the results are exported. An advantage of an ANN model is that it will work with any type of data [41], which is highly useful for problems where data are collected from multiple sources and contain much noise. Another advantage is its suitability for parallel computing, which can help it to process large datasets within reasonable processing time.

However, there are some drawbacks to this approach. For example, the information in an ANN is stored across the entire network, so it consumes a great deal of memory, especially with large datasets. In addition, a large number of trials are often required to improve the control of the behavior of the network.
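The three-layer structure described above can be sketched with scikit-learn's MLPRegressor as an illustrative stand-in for the study's ANN; the architecture, activation choice, and data here are arbitrary.

```python
# A network with an input layer (2 features), two hidden layers, and an
# output layer, trained on a nonlinear interaction target.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
y = X[:, 0] * X[:, 1]                        # nonlinear interaction

ann = MLPRegressor(hidden_layer_sizes=(32, 32), activation="relu",
                   max_iter=5000, random_state=0).fit(X, y)
r2 = ann.score(X, y)                         # training R^2
```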

3.2.5. Significance of Models Used

The following are some of the benefits of using the GBR approach. First, the model generally offers high prediction accuracy. Second, multiple loss functions and hyperparameter tuning options can be used to optimize the model. Third, little data preprocessing is necessary prior to training, and the approach is often effective with both categorical and numerical data. Finally, GBR can deal with missing data points effectively.

The advantages of the support vector regression method are the following. The method works well when there is a clear margin of separation in the data, it is effective in high-dimensional spaces, and it remains effective when the number of dimensions exceeds the number of samples.

The following are some of the benefits of using the artificial neural network method. The capacity to solve complicated problems with nonlinear input-output relationships is the first advantage of an ANN model. Another advantage of the ANN technique is that it eliminates the need for assumptions and preconstraints during simulation. The method can examine complex nonlinear relationships and analyze data with a large number of dimensions. Because of its structure, which is made up of multiple nodes, an ANN is capable of solving high-dimensional complicated problems with good performance.

In contrast, fuzzy logic approaches have several disadvantages in artificial intelligence and machine learning. Because such systems rely on imprecise data and inputs, their accuracy is limited. There is no one-size-fits-all strategy for applying fuzzy logic to a problem, so several solutions to a single problem may emerge, causing confusion, and the results are often not widely accepted due to their inaccuracy. A major disadvantage is that fuzzy logic control systems are fully reliant on human knowledge and expertise: their rules must be updated regularly, they do not incorporate machine learning or neural networks, and their validation and verification require extensive testing.

3.2.6. Performance Assessment

In this study, RMSE, MAE, and R2 were employed as quality metrics to construct efficient ML models [42]:

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2},

MAE = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|,

R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2},

where y_i and \hat{y}_i are the actual and predicted values, respectively, \bar{y} is the mean of the actual values, and N is the total number of observations. These metrics are the most popular in regression problems. A better model is indicated by higher values of R2 and lower values of RMSE and MAE.

The coefficient of determination (R2) is a good assessment metric for determining how well a model fits the input variables. However, this coefficient alone does not allow for the detection of overfitting. RMSE and MAE are goodness-of-fit metrics. When evaluating a given model, RMSE is prioritized because it has distinct advantages over MAE and R2 [43]: unlike MAE, RMSE does not involve the absolute-value function, which is awkward to handle in many mathematical calculations. Therefore, when comparing the predictive accuracy of different regression models, RMSE is the first choice.
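The three metrics can be computed directly from paired actual and predicted values; the numbers below are illustrative only.

```python
# Direct computation of RMSE, MAE, and R^2 for a small set of
# actual/predicted value pairs.
import numpy as np

y_true = np.array([0.30, 0.28, 0.35, 0.26])
y_pred = np.array([0.31, 0.27, 0.33, 0.27])

err = y_true - y_pred
rmse = np.sqrt(np.mean(err ** 2))            # root mean squared error
mae = np.mean(np.abs(err))                   # mean absolute error
r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```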

3.3. Multiobjective Optimization

The NSGA-II algorithm was utilized according to the steps reported by Deb et al. [44]. The algorithm is employed by setting an initial population according to a nondominant criterion. Then, the initial population or set of individuals is changed iteratively for the optimization process. After an assessment, the individuals with better fitness are selected as parents and evolve according to the principle of natural selection using crossover, mutation, and selection to produce a new generation of offspring. This process is repeated until a stopping criterion is met. The mathematical-style pseudocode of NSGA-II is described in Figure 2.
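The two mechanisms at the heart of the ranking step above, fast non-dominated sorting and crowding distance, can be sketched as follows for a two-objective minimization problem. This is a simplified illustration, not the implementation used in the study.

```python
# Core NSGA-II mechanisms: non-dominated sorting and crowding distance
# (both objectives minimized).
import numpy as np

def non_dominated_fronts(F):
    """Return index lists of successive Pareto fronts (F: n x m)."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    dom_count = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)       # i dominates j
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1            # i is dominated by j
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def crowding_distance(F):
    """Crowding distance of each point within one front."""
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf  # boundary points always kept
        span = F[order[-1], k] - F[order[0], k] or 1.0
        for i in range(1, n - 1):
            d[order[i]] += (F[order[i + 1], k] - F[order[i - 1], k]) / span
    return d

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
fronts = non_dominated_fronts(F)             # first front: points 0, 1, 2
```

In the full algorithm, parents are selected by front rank and then by crowding distance, preserving both convergence and diversity.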

There is a need to optimize the machining productivity and production cost in manufacturing processes. Achieving this consists of finding the optimal configuration to avoid wastage of material, labor cost, energy, time, cutting tool, and expenses while maintaining the output requirements of the product. Therefore, cutting parameters have to be optimized [46]. One of the critical technical problems in machining is simultaneously achieving two criteria: minimum Ra and minimum tool wear. To address this issue, NSGA-II was utilized to obtain Pareto-optimal solutions.

Recently, studies have integrated ANNs with genetic algorithms to optimize cutting parameters with minimum surface roughness in milling processes [47]. Metaheuristic algorithms can effectively deal with multiobjective optimization in engineering problems [28, 48]. Moreover, these algorithms can efficiently optimize multiple objectives simultaneously [23]. Multiobjective methodologies have been successfully implemented in cutting-parameter optimization [49]. Unune et al. combined NSGA-II and an ANN to model and optimize the material removal rate (MRR) and Ra in grinding [25]. Kayaroganam et al. [26] used a fuzzy model and NSGA-II to determine the optimal drilling conditions for the minimum thrust force and torque in reinforcing AA6061 aluminum alloy drilling.

The advantages of NSGA-II are as follows. First, it employs nondominated sorting approaches to obtain a solution that is as close to the Pareto-optimal as possible. Second, it employs crowding distance approaches to promote solution diversity. Finally, it employs elitist approaches to maintain an existing population's best solution for the following generation. Therefore, the NSGA-II technique for multiobjective optimization was selected in the present work.

4. Experimental Setup

4.1. Experimental Design

Experiments were performed on a DMU 50 CNC 5-axis milling machine with a maximum spindle speed of 14,000 rpm and a maximum feed rate of 30,000 mm/min. The workpiece material was 6061 aluminum alloy with a length, width, and height of 150 mm, 15 mm, and 150 mm, respectively (Figure 3). The long edge of the workpiece was aligned parallel to the X-direction of the machine, and the workpiece was clamped firmly in the milling vise. The chemical composition of 6061 aluminum alloy is indicated in Table 1 according to the manufacturer.

The insert tool used in this study followed the American National Standards Institute (ANSI) code APMT1135PDER-M2. According to the manufacturer, the insert tool geometry has a corner radius of 0.8 mm, a major clearance angle of 11°, and an insert-included angle of 85°. Two indexable parallelogram carbide inserts were mounted on a tool shank (300R C20-20-150 2 T, Sumitomo, Japan). The length of the tool shank was 150 mm, and the diameter was 20 mm. The geometric parameters of the cutter can be found at https://www.mitsubishicarbide.net/.

The inserts’ nose radius was 0.8 mm. The basic machining parameters for high-speed milling were the cutting speed (Vc), table feed rate (vf), and axial depth of cut (a) under fluid overflow lubrication (Ultracut FX 6090). The process parameters are shown in Table 2. Experiments were conducted using the setup in Figure 4. In this study, a full factorial design was used to construct the experimental matrix. Compared with a fractional factorial design, a full factorial design delivers more information at a similar or lower cost and can reveal optimal conditions more quickly. Additional factors can be investigated without incurring additional costs, and the effects of a factor can be quantified at several levels of the other factors, which leads to results that are applicable to a wide range of experimental settings. Therefore, a full factorial design was selected to conduct the 81 experiments.
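The 81-run full factorial matrix (four factors at three levels) can be enumerated directly; the coded levels below are placeholders for the actual settings in Table 2.

```python
# Enumerate a four-factor, three-level full factorial design: 3^4 = 81 runs.
from itertools import product

feed_rates = [1, 2, 3]    # coded levels; real values come from Table 2
speeds = [1, 2, 3]
depths = [1, 2, 3]
lengths = [1, 2, 3]

runs = list(product(feed_rates, speeds, depths, lengths))
```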

Cutting speed is an important parameter that is commonly used to define high-speed milling [50]. For example, Ming et al. [51] performed a milling experiment for aluminum alloy using cutting speeds of 2,500 to 15,000 rpm and table feed rates of 250 to 1,500 mm/min. In another study, Zaghbani et al. [52] varied the cutting speed from 2,926 to 7,523 rpm and the table feed rate from 292 to 1,400 mm/min. Based on a literature review and the characteristics of the available equipment, an experiment was performed with the cutting-parameter ranges indicated in Table 2.

4.2. Acquisition of Data

Ra was calculated according to the ISO 1997 standard (the measurement range was 4 mm). The value was displayed through the software SurfTest SJ USB Communication Tool Ver 5.007, which was connected with a measurement device (MITUTOYO-Surftest SJ-210 Portable Surface Roughness Tester). The final Ra value of each experiment was determined by the average value of three measurements along the toolpath. The instruments used for the measurement of surface roughness and tool wear are shown in Figure 5.

The cutting tool’s flank wear was measured by a LEICA DM750 M microscope system, and the values were displayed through LAS EZ software. In each experiment, Vbmax was measured for two inserts per cut, and the average value was recorded as the final tool wear, as shown in Table 3 (see also Figure 6). Over the whole experimental procedure, 81 experimental runs were carried out for 10, 30, and 50 machining strokes, where a machining stroke was a length of 150 mm. The measurement process was conducted in standard laboratory conditions at room temperature, and each experiment was repeated three times. The machining time can therefore be calculated from the cutting length [53]:

t = \frac{L}{z \, f_z \, n},

where L is the cutting length, calculated by multiplying the number of machining strokes by the length of a stroke (150 mm). The numbers of machining strokes during the experiments were 10, 30, and 50, which correspond to cutting lengths of 1500, 4500, and 7500 mm, respectively. z is the number of teeth, n is the spindle speed (rev/min), which is determined by the cutting speed Vc and the nominal diameter DC of the cutting tool, and fz is the feed rate per tooth (mm/t), which can be calculated as [53]

f_z = \frac{v_f}{z \, n},

where vf is the table feed rate (mm/min). The cutting time for each experimental run was calculated and is shown in Table 3.
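A worked example of these relations, assuming the standard forms t = L/(z·fz·n) and fz = vf/(z·n), and using values that appear in the text (two inserts, 30 strokes of 150 mm, and a reported feed rate and spindle speed):

```python
# Machining time and feed per tooth from the cutting length and feed rate.
stroke_len = 150.0          # mm per machining stroke
L = 30 * stroke_len         # cutting length: 4500 mm
vf = 2700.0                 # table feed rate, mm/min
n = 10345.0                 # spindle speed, rev/min
z = 2                       # number of teeth (inserts)

fz = vf / (z * n)           # feed per tooth, mm/t
t = L / (z * fz * n)        # machining time, min; simplifies to L / vf
```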

5. Results and Discussion

5.1. Effect of Cutting Parameters on Surface Roughness and Tool Wear

Figures 7(a) and 7(b) show the effect of the cutting parameters on the surface roughness and tool wear, respectively. Generally, as Figure 7(a) indicates, a higher depth of cut leads to reduced surface roughness, while a higher table feed rate, cutting speed, and number of strokes (i.e., cutting length or cutting time) increase the surface roughness. However, some parameter combinations do not follow this rule. At a = 0.2 mm, vf = 2,700 mm/min, and Vc = 10,345 rev/min, the surface roughness is reduced when the cutting time increases. For Vc = 13,528 rev/min, Ra is reduced with increasing cutting time at vf = 3,557 mm/min and a = 0.2 mm, at vf = 4,050 mm/min and a = 0.4 mm, and at vf = 2,700 mm/min and a = 0.6 mm.

The surface quality in milling aluminum alloy is affected by the formation of built-up edges. A higher speed of chip flow increases the friction with the blade and the tool wear, but improves the surface finish and can reduce the friction resistance [54]. As Figure 7(b) indicates, higher vf, Vc, a, and number of strokes lead to increased tool wear. The minimum tool wear of 133.420 µm is reached at vf = 2,700 mm/min, Vc = 10,345 rev/min, a = 0.2 mm, and 10 strokes. Thus, the value of Ra changes irregularly with the cutting parameters, and it is necessary to determine the cutting-parameter values for which Ra and Vbmax are minimized together.

5.2. Hyperparameter Tuning of Machine Learning Models

Optimization of the model is the biggest challenge in obtaining a machine learning solution. Optimization of hyperparameters is done to find the model parameters that achieve the best performance measured on the validation set for a given machine learning algorithm. Hyperparameters control the learning process and impact predictive performance. Moreover, a suitable selection of hyperparameters can avoid the overfitting and underfitting of the model and enhance the predictive accuracy.

There are many common strategies for hyperparameter optimization, such as manual tuning, grid search, random search, Bayesian optimization, gradient-based optimization, and evolutionary optimization [55]. This study used grid search (GS), a traditional technique for hyperparameter tuning that finds the optimal hyperparameters by evaluating a grid of combinations in some order [56]. The GS technique is easy to use and implement but becomes less efficient when there is a large number of parameters [57]. To mitigate this problem, Zöller et al. [58] proposed a procedure to approach the global optimum: the search starts with an ample space, which is then narrowed, and this step is repeated multiple times. Accordingly, this work used GS to find the hyperparameters for all considered ML models. All the simulations were done using Python on a Dell Vostro with 12 GB of RAM and an Intel® Core™ i5-9400 CPU @ 2.90 GHz. The optimal grid values found for the models are indicated in Table 4.
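A GridSearchCV sketch for the SVR hyperparameters named in the text (kernel, C, gamma); the grid values and data are illustrative, not the study's actual search space.

```python
# Grid search over SVR hyperparameters, scored by cross-validated RMSE.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(size=(60, 4))
y = X[:, 0] + 0.5 * X[:, 1] ** 2

grid = {"kernel": ["rbf"], "C": [1, 10, 100], "gamma": ["scale", 0.1]}
search = GridSearchCV(SVR(), grid, cv=3,
                      scoring="neg_root_mean_squared_error").fit(X, y)
best_params = search.best_params_     # combination with lowest CV RMSE
```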

Among the four machine learning models employed in this study, the LIN model does not have any hyperparameters, whereas the remaining models have several sensitive ones. As shown in Table 4, the hyperparameters tuned were the kernel, C (regularization parameter), degree, and gamma for SVR [59]; n_estimators, learning_rate, max_depth, and subsample for GBR; and batch_size, epochs, optimizer, and hidden layers for the ANN [60]. The RMSE on the test dataset was employed as the principal error metric to determine the optimal values.

5.3. Predictive Performance of Models
5.3.1. Predictive Performance for Ra

Table 5 shows the performance of the four ML models for the prediction of Ra on the training and testing datasets. On the training dataset, the GBR model exhibits the best predictive performance, with the lowest value of RMSE and the highest value of R2. On the testing dataset, in contrast, the SVR model exhibits the lowest values of RMSE and MAE and the highest value of R2. According to all error metrics on the testing dataset, SVR therefore gives the best predictive performance for Ra, followed by ANN, LIN, and GBR. Finally, Figures 8 and 9 show the line and scatter plots of predicted and measured values of Ra for the training and testing datasets.

5.3.2. Predictive Performance for Tool Wear

Table 6 shows the accuracy metrics of the four ML models for predicting Vbmax on the training and testing datasets. On the training dataset, GBR shows the best predictive performance, with the smallest values of RMSE and MAE and the highest value of R2. However, this does not hold on the testing dataset, where its RMSE and MAE values are larger than those of the ANN, and the ANN model shows the highest value of R2. According to all error metrics on the testing dataset, the predictive performance is best with the ANN, followed by GBR, LIN, and SVR.

Moreover, the ANN's R2 values are 0.998 and 0.994 on the training and testing datasets, respectively. This evaluation confirms that the ANN model is the most efficient in predicting tool wear. Finally, Figures 10 and 11 show the line and scatter plots of predicted and measured values for the training and testing datasets.

The performance was compared when using different activation functions, such as relu, softmax, sigmoid, softplus, softsign, tanh, selu, elu, and exponential functions. Other parameters of the model remained fixed. Table 7 shows the effects of the different activation functions on the assessment criteria.

As shown in Table 7, the relu activation function exhibits the best performance across all error metrics: RMSE = 0.923, MAE = 0.637, and R2 = 0.998 on the training dataset, and RMSE = 1.506, MAE = 1.090, and R2 = 0.994 on the testing dataset, i.e., the highest R2 and the smallest RMSE and MAE. Lastly, it should be noted that the parametric study here was limited to activation functions; Erkan et al. [61] provide a more complete parametric study of an artificial neural network model (including the learning algorithm and the number of neurons).
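A comparison of this kind can be sketched by retraining the same network once per activation function and recording the test RMSE. scikit-learn's MLPRegressor is used here as a stand-in (it exposes only a subset of the activations listed in Table 7), and the data and layer sizes are illustrative.

```python
# Sketch of the activation-function comparison; MLPRegressor is a stand-in
# for the ANN in the text, and the synthetic data is illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 4))
y = np.sin(X @ np.array([1.0, 2.0, -1.0, 0.5]))
X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]

results = {}
for act in ["relu", "tanh", "logistic"]:
    # All other hyperparameters are held fixed, as in the text.
    model = MLPRegressor(hidden_layer_sizes=(16, 16), activation=act,
                         max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    results[act] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(results)  # test RMSE per activation; the smallest value wins
```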

5.3.3. Multiobjective Optimization by NSGA-II

Surface roughness and tool wear must be as low as possible in machining. As deduced previously, the best ML models for predicting Ra and Vbmax are SVR and ANN, respectively (using the optimal hyperparameters indicated in Table 4); these models are denoted SVR_reg_Ra and ANN_reg_Vbmax in the following. The multiobjective problem is therefore formulated as

Objectives: minimize Ra = SVR_reg_Ra(x) and minimize Vbmax = ANN_reg_Vbmax(x),

subject to the constraint that each component of x (the table feed rate, cutting speed, depth of cut, and cutting time) stays within the bounds of the experimental design.

The NSGA-II algorithm was implemented in Python. Control parameters were selected to operate the algorithm, such as population size, the maximum number of generations, crossover rate, mutation rate, and selection rate [62]. Table 8 shows the control parameters used in the present work. “Population size” is the initial set of solutions corresponding to each generation. A small “population size” limits the ability of the exploration of the search space and crossover operations, but a large population size can be computationally complex.

“Maximum generations” indicates the number of iterations until the algorithm terminates. “Crossover” represents the frequency with which crossovers are performed; its value affects the convergence speed: a high value results in fast convergence, and a low value in slow convergence. “Mutation probability” represents how often parts of an individual solution undergo random perturbations. Finally, the “selection rate” is the probability with which parents are selected to produce offspring by applying crossover and mutation [63]. In this study, a reasonable convergence rate was obtained with a population size of 50, a maximum of 100 generations, a crossover rate of 0.85, a mutation rate of 0.25, and a selection rate of 0.2.
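The ranking step at the core of NSGA-II can be illustrated with a minimal non-dominated sort. The sketch below extracts the first Pareto front from a set of (Ra, Vbmax) pairs, both to be minimized; the candidate points are invented for illustration and are not the Table 10 solutions.

```python
# Minimal non-dominated ranking, the core of NSGA-II's sorting step.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the points not dominated by any other point (first front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Illustrative (Ra, Vbmax) candidates only.
candidates = [(0.30, 136.5), (0.26, 137.1), (0.28, 136.8), (0.31, 137.2)]
print(pareto_front(candidates))  # (0.31, 137.2) is dominated by (0.30, 136.5)
```

NSGA-II repeats this ranking over successive fronts and adds a crowding-distance measure to spread solutions along each front.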

After the NSGA-II algorithm parameters were assigned, the algorithm converged successfully after 250 function evaluations. The Pareto curve is shown in Figure 12. Pareto solutions are marked in red. The first performance objective, Ra, was found to lie between 0.257 and 0.308 μm, and the second performance objective, the tool wear (Vbmax), was found to be between 136.198 and 137.133 μm in the 50 Pareto solutions. As shown in Figure 12, the values of the optimization objective function conform to the Pareto curve, and the curve is continuous.

The optimal configuration is shown in Table 9. It is recommended that the table feed rate be between 2,700 and 2,707.411 mm/min, while the cutting speed should be between 10,345 and 10,345.08 m/min, the depth of cut should be between 0.435 and 0.600 mm, and the cutting time should be approximately 33.33 seconds. The values of the Pareto solution set are shown in Table 10.

5.4. Validation of Predicted Results

To verify the Pareto solution results, confirmatory experiments were performed. The cutting parameters of optimal solution numbers 1, 2, 22, 26, and 48 were randomly selected from Table 10, and the results are compared in Table 11. As indicated in the table, the test results are close to the predicted values found through optimization with NSGA-II: the highest absolute percentage errors of Ra and Vbmax are 2.5% and 1.5%, respectively. Therefore, the combination of NSGA-II and ML can be used to obtain the desired Ra and Vbmax in high-speed milling operations.
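The error measure used in this comparison is the absolute percentage error between the measured and predicted values. A one-line sketch, with illustrative numbers rather than the Table 11 data:

```python
# Absolute percentage error between a measured and a predicted value.
def ape(measured, predicted):
    return abs(measured - predicted) / measured * 100.0

# Illustrative Ra values (µm), not taken from Table 11.
print(round(ape(0.280, 0.273), 2))  # -> 2.5
```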

6. Conclusion

This work has modeled and optimized the process of high-speed milling of 6061 aluminum alloy. The present study used ML models to predict the performance characteristics Ra and Vbmax more robustly and accurately than the traditional approach. Moreover, hybridizing the ML models with a multiobjective optimization algorithm provided a set of optimal solutions, from which any solution that achieves minimal values of Ra and Vbmax simultaneously can be chosen. The main conclusions of this work are summarized as follows [64, 65]:

(i) 81 experiment runs were performed to determine the surface roughness and tool wear. To avoid underfitting and overfitting and to enhance the predictive accuracy, the hyperparameters of the models were tuned using the GridSearchCV technique. The results showed that SVR and ANN performed better than the other models in predicting Ra and Vbmax, respectively, when considering RMSE, MAE, and R2. For Ra, SVR obtained RMSE = 0.014, MAE = 0.012, and R2 = 0.973, i.e., lower errors and a higher R2 than the other models. For Vbmax, the ANN obtained RMSE = 1.506, MAE = 1.090, and R2 = 0.994, again the best values compared with LIN, SVR, and GBR.

(ii) After applying the NSGA-II technique, the average surface roughness (Ra) ranged between 0.257 and 0.308 μm, and Vbmax ranged between 136.198 and 137.133 μm in the 50 Pareto solutions. The feed rate ranged between 2,700 and 2,707.411 mm/min, the cutting speed between 10,345 and 10,345.08 m/min, the depth of cut between 0.435 and 0.600 mm, and the cutting time was approximately 33.33 seconds.

(iii) The experimental verification results showed that the absolute percentage errors of Ra and Vbmax were 2.5% and 1.5%, respectively.

Thus, this work confirmed that the multiobjective optimization approach provided good performance regarding the quality metrics for Ra and Vbmax. Nevertheless, more studies are needed to develop an intelligent system using NSGA-II as a decision-making tool to integrate user preferences. In further studies, the cutting forces should be measured and analyzed for a better understanding of the mechanical process.

Appendix

A. Experimental Data

In this work, the experimental data points were scaled to the range [0, 1], as is common in machine learning to minimize the bias between variables. The procedure for scaling a variable x is shown in Equation (A.1), which involves two parameters, ϕ and ψ: ψ is the minimum of the considered variable x, and ϕ is its maximum. Finally, a reverse transformation can also be deduced from Equation (A.1) for converting data from the scaled space back to the original one (Table 12).
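Assuming Equation (A.1) is the standard min–max form implied by the description (ψ the minimum, ϕ the maximum of the variable), the forward and reverse transformations can be sketched as follows; the sample depth-of-cut values are illustrative.

```python
# Min-max scaling to [0, 1] and its inverse, consistent with the
# description of Eq. (A.1): psi is the minimum, phi the maximum.
import numpy as np

def scale(x, psi, phi):
    return (x - psi) / (phi - psi)

def unscale(x_scaled, psi, phi):
    return x_scaled * (phi - psi) + psi

x = np.array([0.2, 0.4, 0.6])      # illustrative depth-of-cut values (mm)
psi, phi = x.min(), x.max()
x_s = scale(x, psi, phi)
print(x_s)                          # [0.  0.5 1. ]
assert np.allclose(unscale(x_s, psi, phi), x)  # round trip recovers x
```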

Abbreviations

Ra:Average surface roughness
Vbmax:Maximum flank wear
NSGA-II:Nondominated Sorting Genetic Algorithm
LIN:Linear regression
SVR:Support vector machine regression
GBR:Gradient boosting tree
ANN:Artificial neural network
RMSE:Root mean squared error
MAE:Mean absolute error
R2:Coefficient of determination
:Table feed rate
:Cutting speed
a:Depth of cut
Tc:Cutting time
L:Cutting length.

Data Availability

The Excel data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.