Abstract

Cutting parameter optimization dramatically affects the production time, cost, profit rate, and quality of the final products in milling operations. Aiming to select the optimum machining parameters in multitool milling operations such as corner milling, face milling, pocket milling, and slot milling, this paper presents a novel version of TLBO, TLBO with dynamic assignment learning strategy (DATLBO), in which all the learners are divided into three categories in the "Learner Phase" based on their results: good learners, moderate learners, and poor learners. Good learners are self-motivated and learn by themselves; each moderate learner uses a probabilistic approach to select one good learner to learn from; each poor learner likewise uses a probabilistic approach to select several moderate learners to learn from. The CEC2005 contest benchmark problems are first used to illustrate the effectiveness of the proposed algorithm. Finally, the DATLBO algorithm is applied to a multitool milling process based on the maximum profit rate criterion with five practical technological constraints. The unit time, unit cost, and profit rate obtained from the Handbook (HB), the Feasible Direction (FD) method, the Genetic Algorithm (GA), five other TLBO variants, and DATLBO are compared, illustrating that the proposed approach is more effective than HB, FD, GA, and the five other TLBO variants.

1. Introduction

In modern manufacturing, determining optimal cutting parameters is of great importance for improving product quality, reducing machining costs, and maximizing the profit rate. The main cutting parameters in multitool milling operations are the feed per tooth, the cutting velocity, and the radial and axial depths of cut. Conventional methods of selecting cutting parameters rely either on operator experience or on machining data from handbooks. However, the cutting parameters obtained from these sources are, in most cases, extremely conservative and therefore cannot deliver high productivity. It is thus necessary to develop new techniques for the cutting parameter optimization problem.

Many mathematical programming techniques have been used for cutting parameter optimization over the past few decades. In earlier studies, Gupta et al. [1] developed an integer programming model for determining the optimal subdivision of the depth of cut in constrained multipass turning. Subsequently, Wang et al. [2] used deterministic graphical programming to optimize cutting conditions for single-pass turning operations. Shin and Joo [3] used dynamic programming to determine optimum machining conditions under practical constraints. Petropoulos [4] developed a geometric programming model for the optimal selection of machining rate variables.

Although these mathematical programming techniques have been applied to the cutting parameter optimization problem, the studies above omit some important cutting constraints. Considering constraints such as surface roughness, cutting force, cutting velocity, machining power, and tool life, the cutting parameter optimization problem is very complicated, and the additional variables introduced by the number of passes complicate the solution procedure further. These mathematical programming techniques tend to converge to local optima and may only be useful for a specific problem.

Recently, nontraditional optimization approaches have been developed to solve the cutting parameter optimization problem. Shunmugam et al. [5] used a Genetic Algorithm (GA) to optimize cutting parameters in multipass milling operations, using total production cost as the objective function. Li et al. [6] developed a two-phase GA to optimize the spindle speed and feed and to select the tools for drilling blind holes in parallel drilling operations with minimum completion time. Krimpenis and Vosniakos [7] used a GA to optimize rough milling of parts with sculptured surfaces, selecting process parameters such as feed rate, cutting speed, width of cut, raster pattern angle, spindle speed, and number of machining slices of variable thickness. Although GA has some advantages over traditional techniques, its successful application depends on the population size and the diversity of individual solutions in the search space. If this diversity cannot be maintained before the global optimum is reached, GA may prematurely converge to a local optimum. Liu and Wang [8] proposed a modified GA to optimize milling parameter selection, in which the operating domain is defined and shifted around the current optimal point during evolution so that the convergence speed and accuracy are improved. Wang et al. [9] presented a parallel genetic simulated annealing algorithm to select optimal machining parameters for multipass milling operations; the Taguchi method was initially used to predict cutting parameter performance measures, and then the GA was utilized to optimize the cutting conditions. Subsequently, Öktem [10] discussed the use of an Artificial Neural Network (ANN) and GA for predicting the combinations of cutting parameters that yield the best surface roughness. Li et al. [11] suggested combining the ANN and GA to minimize the makespan in production scheduling problems. António et al.
[12] used a GA based on an elitist strategy to minimize the manufacturing cost of multipass cutting parameters in face milling operations. Zhou et al. [13] applied a fuzzy particle swarm optimization (PSO) algorithm to select the machining parameters for milling operations. Zarei et al. [14] proposed a Harmony Search (HS) algorithm to determine the optimum cutting parameters for a multipass face milling operation. Mahdavinejad et al. [15] developed a new hybrid optimization approach, combining an immune algorithm with an ANN, to predict the effect of milling parameters on the final surface roughness of Ti-6Al-4V work pieces. Briceno et al. [16] selected an ANN for modeling and simulating the milling process; an orthogonal design with equally spaced dimensioning showed that the ANN is a good method for defining process parameters. Venkata Rao and Pawar [17] used an Artificial Bee Colony (ABC) algorithm to minimize the production time of a multipass milling process by determining optimal process parameters such as the number of passes, depth of cut for each pass, cutting velocity, and feed. Onwubolu [18] used a new optimization technique based on tribes to select optimum machining parameters in multipass milling operations such as plain milling and face milling, simultaneously considering multipass rough machining and finish machining.

Although these nontraditional approaches have improved the optimization of machining parameters in milling operations, they require many algorithm-specific controlling parameters in addition to the common parameters such as the number of generations and the population size. For instance, the GA involves crossover and mutation probabilities; similarly, the HS algorithm includes a harmony memory considering rate, a bandwidth rate, and a random selection rate. These specific controlling parameters significantly affect the performance of the abovementioned algorithms: improper parameter settings either increase the computational time or cause the search to fall into a local optimum. There thus remains a need for efficient and effective optimization algorithms for cutting parameter determination.

Very recently, Rao et al. proposed the Teaching-Learning-Based Optimization (TLBO) [19] algorithm. This stochastic, swarm-intelligence-based algorithm offers rapid convergence and simple computation and requires no specific controlling parameters beyond the common parameters such as the number of generations and the population size. However, it has some undesirable dynamical properties that degrade its searching ability [20]. One of the most important issues is that the population tends to be trapped in local optima because of diversity loss. To improve the performance of the original TLBO, several modified or improved algorithms have been proposed in recent years, such as teaching-learning-based optimization with dynamic group strategy (DGSTLBO) [20], teaching-learning-based optimization with neighborhood search (NSTLBO) [21], an elitist teaching-learning-based optimization algorithm (ETLBO) [22], and a variant of teaching-learning-based optimization with differential learning (DLTLBO) [23]. These modified TLBOs perform better than the original TLBO on classical benchmark functions. However, none of these variants addresses the assignment problem, that is, ensuring that each learner is assigned appropriate learning objects in the "Learner Phase." To this aim, we present a novel version of TLBO, TLBO with dynamic assignment learning strategy (DATLBO), in which all the learners are divided into three categories in the "Learner Phase": good learners, moderate learners, and poor learners.
Good learners are self-motivated and learn by themselves; each moderate learner uses a probabilistic approach to select one good learner to learn from; each poor learner likewise uses a probabilistic approach to select several moderate learners to learn from. This modification both preserves the diversity of the population, discouraging premature convergence, and balances the explorative and exploitative tendencies of the search. A case study in multitool milling operations is used to verify DATLBO. The results are compared with those obtained from GA [24], the feasible direction method [25], handbook recommendations [26], and five other TLBO variants.

The paper is organized as follows. Section 2 gives a short introduction to the modeling of milling operations. The original TLBO algorithm and the proposed DATLBO algorithm are described in Section 3. A case study of multitool milling parameter optimization is presented in Section 4, and the summary and conclusions are given in Section 5. The last section provides the nomenclature.

2. Modeling of Milling Operations

Milling is a machining process which uses rotary multiple tooth cutters to remove material from a work piece. Figure 1 displays two kinds of milling operations: end milling and face milling. As the cutter rotates, each tooth removes a small amount of material from the advancing work piece during each spindle revolution.

In this study, face milling and end milling operations are considered. For multitool milling operations, it is very important to select the optimum process parameters such as depth of cut, feed per tooth, and cutting velocity. Depth of cut is usually predetermined by the work piece geometry and operation sequence. Hence, determining machining process parameters can be simplified to determining the proper cutting velocity and feed per tooth.

2.1. Modeling of Unit Time, Unit Cost, and Profit Rate

The mathematical model of multitool milling operations formulated in this study is based on the research of Tolouei-Rad and Bidhendi [25]. The decision variables considered for this model are the cutting velocity (v) and the feed per tooth (f_z). The objective is to maximize the profit rate. The unit production time for a single part in multitool milling operations is the sum of the setup time, the machining time, and the tool changing time:

T_u = t_s + Σ_{i=1}^{m} (t_mi + t_cs · t_mi / T_i),

where the machining time of the ith operation is

t_mi = (π D_i L_i) / (1000 v_i f_zi z_i).

The unit cost for producing a part in multitool milling operations is the sum of the material cost, setup cost, machining cost, tool cost, and tool changing cost:

C_u = c_mat + (c_l + c_o) t_s + Σ_{i=1}^{m} [ (c_l + c_o) t_mi + ( (c_l + c_o) t_cs + c_t,i ) · t_mi / T_i ],

where the tool life T_i of the ith tool is obtained from a Taylor-type tool life equation of the form v_i T_i^{n_i} = C_i.

For multitool milling operations, the profit rate is

P_r = (S_p − C_u) / T_u.
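With these definitions, the three objective quantities can be sketched in code. The following is a minimal illustration assuming the standard prorated tool-change formulation of the model; symbol names such as `c_lo` (the combined labor and overhead rate) and the `Operation` container are ours, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    t_m: float    # machining time of the operation (min)
    t_cs: float   # tool changing time (min)
    T: float      # tool life (min)
    c_t: float    # cutting tool cost ($)

def unit_time(t_s, ops):
    """Unit production time: setup + machining + prorated tool-change time."""
    return t_s + sum(op.t_m + op.t_cs * op.t_m / op.T for op in ops)

def unit_cost(c_mat, c_lo, t_s, ops):
    """Unit cost: material + setup + machining + prorated tool costs.
    c_lo is the combined labor and overhead rate ($/min)."""
    cost = c_mat + c_lo * t_s
    for op in ops:
        cost += c_lo * op.t_m                              # machining cost
        cost += (c_lo * op.t_cs + op.c_t) * op.t_m / op.T  # prorated tool costs
    return cost

def profit_rate(s_p, c_mat, c_lo, t_s, ops):
    """Profit rate P_r = (S_p - C_u) / T_u."""
    return (s_p - unit_cost(c_mat, c_lo, t_s, ops)) / unit_time(t_s, ops)
```

The profit rate couples time and cost, so a parameter choice that minimizes cost alone need not maximize it.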

2.2. Milling Process Constraints

Feed per tooth must lie in the range determined by the minimum and maximum feed per tooth of the machine tool:

f_z,min ≤ f_z ≤ f_z,max.

The optimum cutting velocity must lie in the range determined by the minimum and maximum cutting velocity of the machine tool:

v_min ≤ v ≤ v_max.

The total cutting force cannot exceed the permitted cutting force, so the cutting force constraint is

F ≤ F_per.

Machining power cannot exceed the effective maximum machining power. Therefore, the power constraint is

P ≤ η P_m,

where P is the power required for the operation, P_m is the motor power, and η is the machine tool efficiency factor.

The produced surface roughness cannot exceed the maximum attainable surface roughness; therefore the surface roughness constraint is R_a ≤ R_a,max, where R_a is computed from the feed per tooth and the tool geometry using the expressions given in [25] for end milling and for face milling, respectively.
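Collecting the bounds above, a feasibility check for one candidate (v, f_z) pair can be sketched as follows. This is a minimal illustration: the force, power, and roughness values are assumed to be precomputed from the empirical expressions of [25] and are passed in as numbers:

```python
def is_feasible(v, fz, *, v_bounds, fz_bounds, F, F_per, P, P_m, eta, Ra, Ra_max):
    """Check the five technological constraints for one milling operation.

    F (cutting force, N), P (required power, kW), and Ra (surface
    roughness, um) are assumed precomputed for the candidate (v, fz).
    """
    v_min, v_max = v_bounds
    fz_min, fz_max = fz_bounds
    return (fz_min <= fz <= fz_max      # feed per tooth range
            and v_min <= v <= v_max     # cutting velocity range
            and F <= F_per              # total cutting force limit
            and P <= eta * P_m          # machining power limit
            and Ra <= Ra_max)           # surface roughness limit
```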

3. Teaching-Learning-Based Optimization

3.1. TLBO Algorithm

Inspired by the philosophy of teaching and learning, Rao et al. presented teaching-learning-based optimization (TLBO) [19], developed by simulating a classical learning process in a class. Like other population-based methods such as GA, DE, and PSO, TLBO uses a population of candidate solutions, called learners, whose positions are initialized randomly in the search space. The teacher is a highly learned person who shares his or her knowledge with the learners in a class; the quality of the teacher affects the outcome (i.e., the grades or marks) of the learners. Furthermore, learners also learn from interactions among themselves.

In the original TLBO algorithm, the different design variables are analogous to the different subjects offered to learners, and a learner's result is analogous to the "fitness" in other population-based optimization techniques. The learning process of TLBO is divided into two phases: the "Teacher Phase" and the "Learner Phase."

Teacher Phase. During the teacher phase, the best learner, the teacher, tries to raise the mean result of the class in the subject taught, depending on his or her capability. In practice, however, a teacher can move the mean of a class only part of the way toward his or her own level. Suppose that X_teacher denotes the teacher and M denotes the mean of the learners at any iteration. The difference between the existing mean and the teacher is given as

Difference_Mean = r · (X_teacher − T_F · M),

where r is a random vector whose elements are random numbers in the range [0, 1], and T_F is a teaching factor that decides the extent to which the mean is changed; its value can be either 1 or 2, chosen as T_F = round(1 + rand). Based on this difference, the updating formula for learner X_i in the teacher phase is

X_new,i = X_old,i + Difference_Mean.

Learner Phase. During the learner phase, each learner interacts with other learners to improve his or her knowledge through formal communications, presentations, group discussions, and so forth. If a randomly selected learner X_j has more knowledge than learner X_i, then X_i learns something new from X_j. For a minimization problem, the learner phase is based on the following expression:

X_new,i = X_old,i + r · (X_i − X_j) if f(X_i) < f(X_j),
X_new,i = X_old,i + r · (X_j − X_i) otherwise,

where r is a random vector whose elements lie in [0, 1].
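The two phases above can be sketched as a minimal Python implementation of the original TLBO for a minimization problem. This is an illustrative sketch, not the authors' code; the `tlbo` function name, bound clipping, and greedy-acceptance details are our assumptions:

```python
import numpy as np

def tlbo(f, lb, ub, pop_size=30, max_iter=100, seed=0):
    """Minimize f over the box [lb, ub]^D with the original TLBO.

    Greedy selection: a candidate replaces a learner only if it
    improves the objective value.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    X = rng.uniform(lb, ub, size=(pop_size, D))
    fit = np.array([f(x) for x in X])
    for _ in range(max_iter):
        # --- Teacher Phase ---
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop_size):
            TF = rng.integers(1, 3)  # teaching factor: 1 or 2
            cand = np.clip(X[i] + rng.random(D) * (teacher - TF * mean), lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
        # --- Learner Phase ---
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            if fit[i] < fit[j]:
                cand = X[i] + rng.random(D) * (X[i] - X[j])
            else:
                cand = X[i] + rng.random(D) * (X[j] - X[i])
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
    best = np.argmin(fit)
    return X[best], fit[best]
```

Note that, beyond the population size and iteration count, no algorithm-specific controlling parameters appear, which is the property the introduction emphasizes.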

3.2. DATLBO Algorithm

In the original TLBO, each learner interacts randomly with other learners in the "Learner Phase," which has a certain blindness and does not assure a correct assignment of learning objects to each learner. During the course of optimization, this slows the convergence rate. Motivated by this fact, we propose a novel version of TLBO, TLBO with dynamic assignment learning strategy (DATLBO).

3.2.1. Dynamic Assignment Learning Strategy

Studies reveal that birds employ different strategies to conduct the mating process in their society; the ultimate success of a bird in raising a brood with superior features depends on the appropriateness of the assignment strategy it uses [27]. As noted above, in the original TLBO each learner interacts randomly with other learners in the "Learner Phase," which has a certain degree of blindness and does not assure a correct assignment of learning objects to each learner. An appropriate assignment strategy can play an important role in improving the results of the whole class. Inspired by the bird mating process, all the learners are divided into three categories based on their results: good learners, those with the best results; moderate learners, those with intermediate results; and poor learners, those with the worst results. Each category has its own learning pattern. By means of this assignment, the entire class is split into different categories of learners according to their level, and each learner is assigned an appropriate learning object. The way in which each category produces a candidate solution is explained in detail below.

3.2.2. Each Category Learning Pattern in the Dynamic Assignment Learning Strategy

Each good learner is able to self-learn without the help of others in the "Learner Phase"; that is to say, good learners are self-motivated and try to learn by themselves. Therefore, each good learner tries to increase his or her knowledge of a certain subject by making a small probabilistic change in that subject. From the optimization point of view, the good learners perform exploitation of the best solutions found so far. The self-learning pattern of each good learner is implemented in Pseudocode 1, where X_new is the learner newly generated from X_i, D is the problem dimension, r1, r2, and r3 are uniformly distributed random numbers in the range [0, 1], mcf is the self-motivated factor of each good learner, and step denotes the step size.

(1) For j = 1 to D do
(2)  If r1 < mcf
(3)   X_new(j) = X_i(j) + step * (r2 - r3);
(4)  Else
(5)   X_new(j) = X_i(j);
(6)  End If
(7) End For

Each moderate learner selects one of the good learners with a probabilistic approach and learns from that good learner; a good learner with more knowledge has a better chance of being selected. The learning pattern of each moderate learner is implemented in Pseudocode 2, where X_new is the learner newly generated from X_i, w is a time-varying weight that adjusts the importance of the selected good learner, r is a vector whose elements are distributed randomly in the range [0, 1], X_g is the object selected from the good learners, mcf is the self-motivated factor of each moderate learner, and ub and lb are the upper and lower bounds of the elements, respectively.

(1) X_new = X_i + w * r ∘ (X_g - X_i);
(2) If r1 < mcf
(3)  X_new(k) = lb(k) + r2 * (ub(k) - lb(k));  (k is a random dimension)
(4) End If

Each poor learner tends to learn from two or more moderate learners, selected with a probabilistic approach. The learning pattern of each poor learner is implemented in Pseudocode 3, where X_new is the learner newly generated from X_i, w is a time-varying weight that adjusts the importance of the selected moderate learners, nm is the number of selected moderate learners, r is a vector whose elements are distributed randomly in the range [0, 1], X_m,k is the kth object selected from the moderate learners, r1 is a random number in the range [0, 1], mcf is the self-motivated factor of each poor learner, and ub and lb are the upper and lower bounds of the elements, respectively.

(1) X_new = X_i + (w / nm) * Σ_{k=1}^{nm} r ∘ (X_m,k - X_i);
(2) If r1 < mcf
(3)  X_new(k) = lb(k) + r2 * (ub(k) - lb(k));  (k is a random dimension)
(4) End If

As explained above, the pseudocode of the dynamic assignment learning strategy is given in Pseudocode 4.

Begin
Set: [fitness, idx] = sort(fitness); X = X(idx, :); split the sorted class into good, moderate, and poor learners;
(1) For each good learner X_i
(2)  For j = 1 to D do
(3)   If r1 < mcf
(4)    X_new(j) = X_i(j) + step * (r2 - r3);
(5)   Else
(6)    X_new(j) = X_i(j);
(7)   End If
(8)  End For
(9) End For
(10) index1 = roulette wheel 1 (fitness of the good learners);
(11) For each moderate learner X_i
(12)  For j = 1 to D do
(13)   X_new(j) = X_i(j) + w * r(j) * (X_index1(j) - X_i(j));
(14)  End For
(15)  If r1 < mcf
(16)   X_new(k) = lb(k) + r2 * (ub(k) - lb(k));  (k is a random dimension)
(17)  End If
(18) End For
(19) index2 = roulette wheel 2 (fitness of the moderate learners, nm);
(20) For each poor learner X_i
(21)  X_new = X_i;
(22)  For k = 1 to nm
(23)   For j = 1 to D do
(24)    X_new(j) = X_new(j) + (w / nm) * r(j) * (X_index2(k)(j) - X_i(j));
(25)   End For
(26)  End For
(27)  If r1 < mcf
(28)   X_new(k) = lb(k) + r2 * (ub(k) - lb(k));
(29)  End If
(30) End For
End
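As an illustration of the dynamic assignment strategy, one "Learner Phase" pass can be sketched in Python as follows. This is our reading of the strategy, not the authors' implementation; the roulette-wheel weighting and the random restart of a single dimension are assumptions consistent with the description above:

```python
import numpy as np

def dynamic_assignment_phase(X, fit, lb, ub, *, n_good, n_mod, nm=2,
                             mcf=0.25, w=0.5, step=0.001, rng=None):
    """One 'Learner Phase' with dynamic assignment (illustrative sketch).

    Learners are sorted by fitness (minimization) and split into good,
    moderate, and poor categories; each category generates candidates
    with its own pattern. Returns (candidates, sorted X, sorted fitness)
    so a greedy acceptance step can be applied outside.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, D = X.shape
    order = np.argsort(fit)
    X, fit = X[order].copy(), fit[order].copy()

    def roulette(indices):
        # fitness-proportional selection over a slice (lower fitness = better)
        w_sel = 1.0 / (1.0 + fit[indices] - fit[indices].min())
        return rng.choice(indices, p=w_sel / w_sel.sum())

    new = X.copy()
    for i in range(n):
        if i < n_good:                       # good: probabilistic self-learning
            mask = rng.random(D) < mcf
            new[i] = np.where(mask,
                              X[i] + step * (rng.random(D) - rng.random(D)),
                              X[i])
        elif i < n_good + n_mod:             # moderate: learn from one good learner
            g = roulette(np.arange(n_good))
            new[i] = X[i] + w * rng.random(D) * (X[g] - X[i])
        else:                                # poor: learn from nm moderate learners
            cand = X[i].copy()
            for _ in range(nm):
                m = roulette(np.arange(n_good, n_good + n_mod))
                cand = cand + (w / nm) * rng.random(D) * (X[m] - X[i])
            new[i] = cand
        if i >= n_good and rng.random() < mcf:   # occasional restart of one gene
            k = rng.integers(D)
            new[i, k] = lb[k] + rng.random() * (ub[k] - lb[k])
    return np.clip(new, lb, ub), X, fit
```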

3.2.3. Each Category Parameters Adjustment in the Dynamic Assignment Learning Strategy

To apply the dynamic assignment learning strategy in the DATLBO algorithm, the appropriate parameters have to be tuned. The most important parameter appears to be the proportion of each category of learners in the class; it is suggested that the percentages of good, moderate, and poor learners be set to 20%, 50%, and 30% of the class, respectively. One assigned good learner and two or three assigned moderate learners are enough. The self-motivated factor mcf lies between 0 and 1, and the weight w can be set between 0.9 and 1; small values of the weight may harm the performance of the assignment learning strategy. It is better to select mcf as an increasing linear function that changes from a small value near zero (e.g., 0.1) to a large one near 1 (e.g., 0.9). This allows learners to change their subject abilities with high probability at the beginning of the assignment learning strategy; the probability then decreases over the generations and helps the learners converge to the global solution. The step size can be selected on the order of 10^-3 or 10^-2. To provide a good balance between local and global search, the step-size coefficient decreases linearly from a value near 2 to a small one near 0.
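The time-varying parameters described above (an increasing self-motivated factor and a decreasing step-size coefficient) can be generated with a simple linear schedule. The endpoints below follow the text, while the function names are our own:

```python
def linear_schedule(start, end, t, t_max):
    """Linearly interpolate a control parameter from `start` at t = 0
    to `end` at t = t_max (t is the generation counter)."""
    return start + (end - start) * t / t_max

def mcf_at(t, t_max):
    """Self-motivated factor: increases from 0.1 to 0.9."""
    return linear_schedule(0.1, 0.9, t, t_max)

def step_scale_at(t, t_max):
    """Step-size coefficient: decreases from 2.0 toward 0.0."""
    return linear_schedule(2.0, 0.0, t, t_max)
```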

3.3. The Pseudocode of DATLBO Algorithm

By incorporating the dynamic assignment learning strategy in “Learner Phase” into the original TLBO framework, the DATLBO algorithm is developed. The pseudocode of DATLBO algorithm is presented in Pseudocode 5.

Input: population size NP, dimension D, bounds lb and ub, maximum number of function evaluations FESMAX, category sizes (good, moderate, poor), number of assigned moderate learners nm, self-motivated factor mcf, weight w, step size step;
(1) Generate an initial population X of NP learners randomly within [lb, ub];
(2) FES = NP;
(3) While FES <= FESMAX
(4)  Evaluate the objective function values: fitness = f(X);
(5)  Find the best learner (teacher) X_teacher and the mean M = mean(X);
(6)  // Teacher Phase
(7)  For i = 1 to NP
(8)   T_F = round(1 + rand);
(9)   X_new,i = X_i + r * (X_teacher - T_F * M);
(10)   If f(X_new,i) < f(X_i)
(11)    X_i = X_new,i;
(12)   End If
(13)  End For
(14)  // Learner Phase with dynamic assignment (Pseudocode 4)
(15)  Sort the class by fitness and split it into good, moderate, and poor learners;
(16)  Update each good learner by self-learning (Pseudocode 1);
(17)  Select one good learner for each moderate learner by roulette wheel and update the moderate learner (Pseudocode 2);
(18)  Select nm moderate learners for each poor learner by roulette wheel and update the poor learner (Pseudocode 3);
(19)  Accept each new learner only if it improves the objective value;
(20)  Update FES;
(21) End While
End

3.4. Experiments and Comparisons
3.4.1. Benchmark Functions Used in Experiments

To analyze and compare the performance and accuracy of the DATLBO algorithm, a set of CEC2005 benchmark functions is used in the experiments. Based on their shape characteristics, the benchmark functions are grouped into unimodal functions (F1 to F5) and basic multimodal functions (F6 to F12). Brief descriptions of these benchmark functions are listed in Table 1. For more details about the definition of the benchmark functions, refer to [28].

3.4.2. Experimental Platform, Termination Criterion, and Parameter

All experiments are run on the same machine, with a Celeron 2.26 GHz CPU, 2 GB memory, and the Windows XP operating system, using Matlab 7.9. To reduce statistical errors, each experiment is independently run 25 times on the twelve test functions with 30 variables, with 300,000 function evaluations (FES) as the stopping criterion.

The parameter settings of the DATLBO algorithm are as follows: the numbers of good learners, moderate learners, and poor learners are set to 10, 25, and 15, respectively; the roulette wheel is used as the selection approach; the numbers of assigned good learners and moderate learners are set to 1 and 2, respectively; the self-motivated factor and the step size are chosen as described in Section 3.2.3. The parameters of the other algorithms agree with the original papers.

3.4.3. Performance Metric

The mean value and standard deviation (SD) of the function error value f(x_best) − f(x*) are recorded to evaluate the performance of each algorithm, where f(x_best) and f(x*) denote the best fitness value found and the true global optimum of the test problem, respectively. To verify whether the overall optimization performance of the algorithms differs significantly, a statistical analysis is performed on the results obtained by the algorithms over all problems. Specifically, to compare the DATLBO algorithm with the other five algorithms, the Wilcoxon rank sum test [29] at a 0.05 significance level is used to evaluate whether the median fitness values of two algorithms differ significantly.

3.4.4. Numerical Experiments and Results

In this section, DATLBO algorithm is compared with five other TLBO variants. Each corresponding table presents the experimental results, and the last three rows of each table summarize the comparison results. The best results are shown in bold.

From the statistical results in Table 2, we can see that none of the algorithms perfectly solves all twelve CEC2005 benchmark functions. From the Wilcoxon rank sum test results listed in the last three rows of Table 2, it is clear that the DATLBO algorithm outperforms the original TLBO algorithm on all but three of the test functions. Although the DLTLBO algorithm outperforms DATLBO on one test function, DATLBO is significantly better than DLTLBO on most of the others. Through the ensemble of the assignment learning strategy, the DATLBO algorithm achieves promising results on both unimodal and multimodal functions, outperforming the TLBO, DGSTLBO, ETLBO, NSTLBO, and DLTLBO algorithms on nine, nine, seven, nine, and nine out of twelve test functions, respectively. The convergence graphs of the six TLBO algorithms on the twelve test functions are shown in Figure 2; they indicate that the DATLBO algorithm achieves better convergence speed and solution quality in most cases than the other five TLBO variants. Overall, the performance of the DATLBO algorithm is significantly better than that of the original TLBO, ETLBO, NSTLBO, DGSTLBO, and DLTLBO algorithms.

4. Case Study for Milling Operation

4.1. Problem Description

To compare the performance of the DATLBO algorithm with the other algorithms, a case study from [25] is used. The work piece has four machined features, namely, a step, a pocket, and two slots. The milling operation schematic is shown in Figure 3: the three-dimensional view in Figure 3(a), the front view in Figure 3(b), and the top view in Figure 3(c). The objective is to find the optimum machining parameters that maximize the profit rate. Table 4 shows the limits of the cutting velocity and feed per tooth for this case study. The five milling operations, face milling, corner milling, pocket milling, slot milling 1, and slot milling 2, are listed in Table 3, and the tool data for each operation are given in Table 5.

The machine tool is a vertical CNC milling machine with the motor power P_m (kW) and efficiency factor η given in [25]. The work piece material is 10L50 leaded steel with a hardness of 225 BHN. The remaining cost rates, time allowances, and tool life and cutting force coefficients, for both HSS and carbide tools, are all taken from [25].

4.2. Applications of DATLBO Algorithm to Case Study

In this case study, Deb's heuristic constraint handling method is used to handle the constraints within DATLBO and the five other TLBO variants. A tournament selection operator is used in Deb's heuristic method [30] to compare two solutions. The following three heuristic rules are implemented:
Rule 1: if one solution is feasible and the other infeasible, the feasible solution is preferred.
Rule 2: if both solutions are feasible, the solution having the better objective function value is preferred.
Rule 3: if both solutions are infeasible, the solution having the smaller constraint violation is preferred.
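The three rules can be expressed as a small comparison function. A minimal sketch, assuming each solution is summarized by its objective value (the profit rate, to be maximized) and a scalar total constraint violation:

```python
def deb_better(sol_a, sol_b):
    """Deb's feasibility rules: each solution is an (objective, violation)
    pair, where violation == 0 means feasible and the objective is to be
    maximized. Returns True if sol_a is preferred over sol_b."""
    fa, va = sol_a
    fb, vb = sol_b
    if va == 0 and vb > 0:    # Rule 1: feasible beats infeasible
        return True
    if va > 0 and vb == 0:
        return False
    if va == 0 and vb == 0:   # Rule 2: better objective wins
        return fa > fb
    return va < vb            # Rule 3: smaller violation wins
```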

These rules are applied at the end of the teacher and learner phases, where Deb's constraint handling method is used to select the new solution. For this case study, DATLBO and the five other TLBO variants are each run 25 times, with a maximum of 300,000 function evaluations as the termination condition. The population size and the dimension are set to 30 and 10, respectively.

Tables 6–11 show the optimal feed per tooth and cutting velocity obtained by DATLBO and the five other TLBO variants for the five operations when the maximum profit rate is reached. Figures 4–6 and Table 12 compare the unit cost, unit time, and profit rate obtained by HB, FD, GA, DATLBO, and the five other TLBO variants. Compared to the HB solution, the unit cost decreases by 38%, 39%, 40%, 41%, 43%, 41%, 45%, and 46%; the unit time decreases by 41%, 44%, 46%, 48%, 48%, 47%, 47%, and 51%; and the profit rate increases by 2.51, 2.73, 2.94, 3.00, 3.18, 2.96, 3.25, and 3.66 times for FD, GA, TLBO, DGSTLBO, ETLBO, NSTLBO, DLTLBO, and DATLBO, respectively. The maximum profit rate given by the DATLBO algorithm is 3.31 $/min, better than that of all the other optimization methods applied to the same model.

5. Summary and Conclusions

A novel version of the TLBO algorithm, the DATLBO algorithm, is proposed for cutting parameter selection in multitool milling operations. In the proposed DATLBO algorithm, all the learners are divided into three categories in the "Learner Phase" based on their results: good learners, moderate learners, and poor learners. Good learners are self-motivated and learn by themselves; each moderate learner uses a probabilistic approach to select one good learner to learn from; each poor learner likewise uses a probabilistic approach to select several moderate learners to learn from. DATLBO is applied to a case study of a multitool milling selection problem based on the maximum profit rate criterion with five practical technological constraints. Significant improvements are obtained with the DATLBO algorithm in comparison with the results of the HB, FD, GA, TLBO, DGSTLBO, ETLBO, NSTLBO, and DLTLBO algorithms. These results show that the DATLBO algorithm is an effective alternative for the optimization of machining parameters in multitool milling operations.

Nomenclature

:Tool clearance angle (deg.)
, :Axial and radial depths of cut (mm)
:Tool lead (corner) angle (deg.)
:Chip cross-sectional area (mm2)
, :Labor cost and overhead costs ($/min)
, , :Machining cost, cost of raw material per part, and cutting tool cost ($)
:Unit cost ($)
:Cutting force equation constant
:Cutting force coefficient
:Tool diameter (mm)
:Machine tool efficiency factor
:Feed per tooth (mm/tooth)
Cutting force and permitted cutting force (N)
, :Slenderness ratio and exponent of slenderness ratio
:th milling operation
:Distance traveled by tool to perform operation (mm)
:Correction factor of cutting force under different experiment conditions
:Number of machining operations
, :Tool life exponent and spindle speed (rpm)
, :Required power for the operation and motor power (kW)
:Profit rate ($/min)
:Influential exponent tool on cutting force
:Contact proportion of cutting edge with work piece per revolution
:A vector which is distributed randomly in
, :Value of surface roughness and attainable surface roughness (μm)
:Sale price of the product ($)
, , :Machining time, setup time, and tool changing time (min)
Teacher:The best learner
:Teaching factor
, :Tool life (min) and unit production time (min)
:Maximum number of algorithm iterations
:Influential exponent axial depths of cut on cutting force
, , :Cutting velocity, recommended cutting velocity by handbook, and optimum cutting velocity (m/min)
:Tool wear exponent
:Influential exponent spindle speed on cutting force
:th learner
:Influential exponent radial depths of cut on cutting force
:Influential exponent feed per tooth on cutting force
:Number of tool teeth
:Feasible region
:Arbitrarily small real number
:Constant between zero and one
:Random vector sequence in probability space
:One convergence point with specific probability
:One convergence point with probability 1.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to acknowledge the financial support for this work from the National Natural Science Foundation of China (51575442 and 61402361) and the Shaanxi Province Education Department (11JS074).