Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 309327, 10 pages
Research Article

Design Optimization of Mechanical Components Using an Enhanced Teaching-Learning Based Optimization Algorithm with Differential Operator

Department of Mechanical Engineering, United Institute of Technology, Coimbatore 641020, India

Received 19 March 2014; Revised 12 July 2014; Accepted 27 July 2014; Published 25 September 2014

Academic Editor: Albert Victoire

Copyright © 2014 B. Thamaraikannan and V. Thirunavukkarasu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper studies in detail the background and implementation of a teaching-learning based optimization (TLBO) algorithm with a differential operator for the optimization of a few mechanical components that are essential in most mechanical engineering applications. Like most other heuristic techniques, TLBO is a population-based method and uses a population of solutions to proceed towards the global solution. A differential operator is incorporated into TLBO for an effective search for better solutions. To validate the effectiveness of the proposed method, three typical optimization problems are considered in this research: first, minimizing the weight of a belt-pulley drive; second, minimizing the volume of a closed coil helical spring; and finally, minimizing the weight of a hollow shaft. Simulation results on these mechanical component optimization problems reveal the ability of the proposed methodology to find better optimal solutions than other optimization algorithms.

1. Introduction

The problem of volume minimization of a closed coil helical spring was solved using traditional techniques under some constraints. A graphical technique was used by Y. V. M. Reddy and B. S. Reddy to optimize the weight of a hollow shaft while satisfying a few constraints. Moreover, Reddy et al. optimized the weight of a belt-pulley drive under some constraints using geometric programming [1–3].

The majority of mechanical designs include an optimization task in which engineers consider certain objectives such as weight, wear, strength, deflection, corrosion, and volume, depending on the requirements. However, design optimization for a complete mechanical system leads to a cumbersome objective function with a large number of design variables and complex constraints [4–6]. Hence, it is general practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly or system. For example, in a centrifugal pump, the optimization of the impeller is computationally and mathematically simpler than the optimization of the complete pump. Analytical and numerical methods for calculating the extremes of a function have long been applied to engineering computations. Although these traditional optimization procedures perform well in many practical cases, they may fail in more complex design situations. In real-world design optimization problems, the number of design variables can be very large, and their influence on the objective function to be optimized can be very complicated (nonconvex), with a nonlinear character. The objective function may have many local optima, whereas the designer is interested in the global optimum or at least a reasonable and acceptable optimum [7, 8].

Optimization is a method of obtaining the best result under the given circumstances. It plays a vital role in machine design because mechanical components should be designed in an optimal manner. While designing machine elements, optimization helps in a number of ways: to reduce material cost, to ensure better service of components, to increase the production rate, and so on [9–12]. Thus, optimization techniques can effectively be used to ensure an optimal production rate. Several methods are available in the optimization literature. Some of them are direct search methods and others are gradient-based methods. In direct search methods, only the function value is necessary, whereas gradient-based methods require gradient information to determine the search direction. However, there are some difficulties with most of the traditional optimization methods, as discussed below.

A large body of literature is available on traditional methods for solving the above problems. However, the traditional techniques have a variety of drawbacks, paving the way for the advent of new and versatile methodologies to solve such optimization problems. In particular, such problems cannot be handled by classical methods (e.g., gradient methods) that only compute local optima. So, there remains a need for efficient and effective optimization methods for mechanical design problems. Continuous research is being conducted in this field, and nature-inspired heuristic optimization methods are proving to be better than deterministic methods and thus are widely used [13–16].

The most commonly used evolutionary optimization technique is the genetic algorithm (GA). However, GA provides only a near-optimal solution for a complex problem with a large number of variables and constraints. This is mainly due to the difficulty of determining the optimum controlling parameters, such as the population size, crossover rate, and mutation rate. A change in the algorithm parameters changes the effectiveness of the algorithm. The same is the case with PSO, which uses an inertia weight and social and cognitive parameters. Similarly, ABC [17] requires optimum controlling parameters: the numbers of bees (employed, scout, and onlooker), the limit, and so forth. HS requires the harmony memory consideration rate, the pitch adjusting rate, and the number of improvisations. Therefore, efforts must continue towards developing an optimization technique that is free from algorithm-specific parameters; that is, no such parameters are required for the working of the algorithm. This aspect is considered in the present work.

Recently, a new optimization technique, known as teaching-learning based optimization (TLBO), has been developed by Rao et al. [5, 18–20]. It is one of the recent evolutionary algorithms and is based on the natural phenomenon of the teaching and learning process. It has already proved its superiority over other existing optimization techniques such as GA, ABC, PSO, harmony search (HS), DE, and hybrid PSO. This research also proposes a hybrid method combining TLBO and a differential mechanism. Here, TLBO is performed as a base-level search procedure that directs the search towards the optimal region; later, an exact method (SQP) is used to fine-tune that region and obtain the final solution.

2. Mathematical Formulation

In this section, the detailed design considerations of the closed coil helical spring, the optimum design of the hollow shaft, and the optimal design of the belt-pulley drive are discussed. These problems are adopted from [9], which uses a GA as the optimization tool.

Case 1 (Closed Coil Helical Spring). The helical spring is made up of a wire coiled in the form of a helix, primarily intended for compressive and tensile loads (Figure 1). The cross-section of the wire from which the spring is made may be circular, square, or rectangular. Two forms of helical springs are used, namely, the compression helical spring and the tension helical spring. The helical spring is said to be closely coiled when the spring wire is coiled so closely that the plane containing each turn is nearly at right angles to the axis of the helix and the wire is subjected to torsion (Figure 1). Shear stress is produced in the helical spring due to twisting. The applied load is parallel to or along the axis of the spring.
The optimization criterion is to minimize the volume of a closed coil helical spring under several constraints (Figure 1). The problem may be stated mathematically as follows.

Figure 1: Schematic representation of a closed coil helical spring.

The volume of the spring is minimized subject to the constraints discussed below.

Stress Constraint. The shear stress must be less than the specified allowable value. Here, the maximum working load and the allowable shear stress are set to 453.6 kgf and 13288.02 kgf/cm², respectively.

Configuration Constraint. The free length of the spring must be less than the maximum specified value. In computing the spring constant, the shear modulus is taken as 808543.6 kgf/cm².

The deflection under the maximum working load follows from the spring constant. The spring length under the maximum working load is assumed to be 1.05 times the solid length, from which the free length follows; the constraint requires this free length to be less than the maximum value, which is set to 35.56 cm.

Furthermore, the wire dia must exceed the specified minimum value, which is set to 0.508 cm.

The outside dia of the coil must be less than the maximum specified value, which is set to 7.62 cm.

The mean coil dia must be at least three times the wire dia to ensure that the spring is not tightly wound. In addition, the deflection under preload must be less than the maximum specified value; the preload is set to 136.08 kgf.

The allowable maximum deflection under preload in this constraint is set to 15.24 cm.

The combined deflection must be consistent with the free length. Strictly speaking, this constraint should be an equality; however, it is intuitively clear that, at convergence, the constraint function will always be zero.

The deflection from preload to maximum load must be equal to the specified value of 3.175 cm. Since the corresponding constraint function should always converge to zero, it is treated as an inequality constraint.

During optimization, the ranges of the design variables are restricted to specified intervals. The above-mentioned problem is therefore a constrained optimization problem with a single objective function subjected to eight constraints.
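
As a concrete illustration, the objective and a few of the constraints above can be sketched in code. The formulation below is an assumption: since the equations are not reproduced legibly in the text, the formulas are the classical closed coil spring design relations, reconstructed around the limit values quoted above (453.6 kgf, 13288.02 kgf/cm², and so on). The variable names `d` (wire dia, cm), `D` (mean coil dia, cm), and `N` (number of active coils) are our own, not notation confirmed by the source.

```python
import math

# Constants follow the values quoted in the text; the formulas are the
# classical spring-design relations and are an assumption, not verbatim
# from the paper.
F_MAX = 453.6        # maximum working load, kgf
S_ALLOW = 13288.02   # allowable shear stress, kgf/cm^2
G_MOD = 808543.6     # shear modulus, kgf/cm^2
L_MAX = 35.56        # maximum free length, cm
D_WIRE_MIN = 0.508   # minimum wire dia, cm
D_OUT_MAX = 7.62     # maximum outside dia, cm
F_PRE = 136.08       # preload, kgf
DELTA_PRE_MAX = 15.24  # allowable deflection under preload, cm

def spring_volume(d, D, N):
    # volume of wire in a closed coil spring with two inactive end coils
    return 0.25 * math.pi ** 2 * D * d ** 2 * (N + 2)

def spring_constraints(d, D, N):
    C = D / d                                        # spring index
    wahl = (4 * C - 1) / (4 * C - 4) + 0.615 / C     # Wahl stress factor
    tau = 8 * wahl * F_MAX * D / (math.pi * d ** 3)  # shear stress
    k = G_MOD * d ** 4 / (8 * D ** 3 * N)            # spring rate, kgf/cm
    free_len = F_MAX / k + 1.05 * (N + 2) * d        # assumed free length
    # each entry must be <= 0 for a feasible design
    return [tau - S_ALLOW,          # stress constraint
            free_len - L_MAX,       # configuration constraint
            D_WIRE_MIN - d,         # minimum wire dia
            (D + d) - D_OUT_MAX,    # maximum outside dia
            3.0 - C,                # mean coil dia >= 3 * wire dia
            F_PRE / k - DELTA_PRE_MAX]  # deflection under preload
```

A solver would minimize `spring_volume` while driving every entry of `spring_constraints` to a nonpositive value.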

Case 2 (Optimum Design of Hollow Shaft). A shaft is a rotating member which transmits power from one point to another (Figure 2). Shafts may be divided into two groups, namely, (i) transmission shafts and (ii) machine shafts. Shafts which are used to transmit power between the source and the machines absorbing power are called transmission shafts. Machine shafts are those which form an integral part of the machine itself; a common example of a machine shaft is the crank shaft. Figure 2 shows the schematic representation of a hollow shaft.

Figure 2: Schematic representation of a hollow shaft.

The objective of this study is to minimize the weight of the hollow shaft. Substituting the length and the density of the shaft material as 50 cm and 0.0083 kg/cm³, respectively, one obtains the weight of the shaft as a function of its inner and outer dia. It is subjected to the following constraints.

The twisting failure criterion is calculated from the torsion formula: the twisting moment that the shaft can carry must be greater than the applied twisting moment.

Substituting the applied twisting moment and the shear modulus as 1.0 × 10⁵ kg-cm and 0.84 × 10⁶ kg/cm², respectively (with the angle of twist taken per metre of length), one obtains the first constraint. The critical buckling load is given by a standard expression; substituting the twisting moment, Poisson's ratio, and Young's modulus as 1.0 × 10⁵ kg-cm, 0.3, and 2.0 × 10⁵ kg/cm², respectively, the second constraint is obtained. The ranges of the variables are likewise restricted to specified intervals.
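
The weight objective and the torsional stress relation can be sketched as follows. The weight formula for a hollow circular shaft is standard; the length (50 cm) and density (0.0083 kg/cm³) are the values given in the text, while the function and variable names are our own.

```python
import math

LENGTH = 50.0     # shaft length, cm (value given in the text)
DENSITY = 0.0083  # density of shaft material, kg/cm^3 (value given in the text)

def shaft_weight(d_out, d_in):
    # weight of a hollow circular shaft: density * length * cross-sectional area
    return DENSITY * LENGTH * math.pi / 4.0 * (d_out ** 2 - d_in ** 2)

def shear_stress(torque, d_out, d_in):
    # maximum torsional shear stress tau = T * (d_out / 2) / J, with the
    # polar moment of inertia J = pi/32 * (d_out^4 - d_in^4)
    J = math.pi / 32.0 * (d_out ** 4 - d_in ** 4)
    return torque * (d_out / 2.0) / J
```

For a fixed torque, enlarging the outer dia lowers the shear stress, which is why the constraint trades off directly against the weight objective.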

Case 3 (Optimum Design of Belt-Pulley Drive). Belts are used to transmit power from one shaft to another by means of pulleys which rotate at the same speed or at different speeds (Figure 3). Stepped flat belt drives are mostly used in factories and workshops where a moderate amount of power is to be transmitted. Generally, the weight of the pulleys acts on the shaft and bearings, and shaft failure is most commonly due to the weight of the pulleys (Table 1). In order to prevent shaft and bearing failure, weight minimization of the flat belt drive is essential. The schematic representation of a belt-pulley drive is shown in Figure 3.

Table 1: Comparison of the results obtained by GA with the published results Case 1.
Figure 3: Schematic representation of a belt-pulley drive.

Objective Function. The weight of the pulleys is considered as the objective function to be minimized. After assuming fixed ratios among the pulley thicknesses, expressing the dependent dia in terms of the independent design variables, and substituting the material density (in kg/cm³), the objective function reduces to a function of the pulley dia and the belt width. It is subjected to the following constraints.

The transmitted power can be represented in terms of the belt tensions and the tangential velocity of the pulley. Substituting the rated power (in hp), the allowable tensile stress of the belt material (in kg/cm²), the belt thickness (in cm), and the pulley speed (in rpm), one obtains the power transmission constraint. Assuming that the width of the pulley is less than or equal to one-fourth of the dia of the first pulley yields the second constraint. The ranges of the variables are likewise restricted to specified intervals.
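
Because the symbols and intermediate substitutions in this case were lost in reproduction, a faithful restatement is not possible; the sketch below only illustrates the structure of the problem under a deliberately simple assumed model in which each pulley is treated as a solid disc, together with the width constraint stated in the text. The density value and all names here are hypothetical.

```python
import math

RHO = 0.0071  # assumed pulley material density, kg/cm^3 (hypothetical value)

def pulley_weight(diameters, thicknesses):
    # assumed disc model: each pulley weighs rho * pi/4 * d^2 * t; the
    # paper's actual pulley geometry is not recoverable from the text
    return sum(RHO * math.pi / 4.0 * d ** 2 * t
               for d, t in zip(diameters, thicknesses))

def width_ok(b, d1):
    # width of the pulley at most one-fourth of the dia of the first pulley
    return b <= d1 / 4.0
```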

3. Optimization Procedure

Classical search and optimization techniques demonstrate a number of difficulties when faced with complex problems. The major difficulty arises when a single algorithm is applied to solve a number of different problems, because each classical method is designed to solve only a particular class of problems efficiently; such a method does not have the breadth to solve different types of problems. Moreover, most of the classical methods do not have a global perspective and often converge to a locally optimal solution. Another difficulty is that classical methods cannot be efficiently utilized in a parallel computing environment: most classical algorithms are serial in nature, and hence little additional advantage can be derived from them.

Over the past few years, a number of search and optimization techniques, drastically different in principle from the classical methods, have been receiving increasing attention. These methods mimic a particular natural phenomenon to solve an optimization problem; the genetic algorithm and simulated annealing are a few among these nature-inspired techniques.

4. Teaching-Learning Based Optimization

Teaching-learning based optimization (TLBO) is an optimization technique developed by Rao et al. [5, 18–20], based on the teaching-learning process in a class involving the teacher and the students. Like other nature-inspired algorithms, TLBO is also a population-based technique with a predefined population size that uses the population of solutions to arrive at the optimal solution. In this method, the population consists of the students in a class, and the design variables are the subjects taken up by the students. Each candidate solution comprises the design variables responsible for the knowledge scale of a student, and the objective function value symbolizes the knowledge of a particular student. The solution having the best fitness in the population (among all students) is considered as the teacher.

More specifically, an individual student within the population represents a single possible solution to a particular optimization problem: a real-valued vector whose number of elements is the dimension of the problem, which in the TLBO context represents the number of subjects that an individual, either student or teacher, enrolls to learn or teach. The algorithm then tries to improve the individuals by changing them during the Teacher and Learner Phases, where an individual is only replaced if his or her new solution is better than the previous one. The algorithm repeats until it reaches the maximum number of generations.

During the Teacher Phase, the teaching role is assigned to the best individual. The algorithm attempts to improve the other individuals by moving their positions towards the position of the teacher by referring to the current mean position of the individuals. This mean is constructed from the mean value of each parameter within the problem space (dimension) and represents the qualities of all students of the current generation. Equation (39) simulates how student improvement may be influenced by the difference between the teacher's knowledge and the qualities of all students. For stochastic purposes, two randomly generated parameters are applied within the equation: a random number ranging between 0 and 1, and a teaching factor which can be either 1 or 2, thus emphasizing the importance of student quality.
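
Assuming the standard TLBO form of (39), X_new = X + r (X_teacher − T_F · mean) (the equation itself is not reproduced legibly above), the Teacher Phase can be sketched as follows. The random sources are injectable so the step can be exercised deterministically; all names are our own.

```python
import random

def teacher_phase(pop, fitness, rnd=random.random, tf=None):
    """One TLBO Teacher Phase (minimization): every student moves by
    r * (teacher - TF * mean), with r in [0, 1) drawn per subject and
    the teaching factor TF in {1, 2}."""
    n, dim = len(pop), len(pop[0])
    best = min(range(n), key=lambda i: fitness[i])
    teacher = pop[best]
    mean = [sum(x[j] for x in pop) / n for j in range(dim)]
    if tf is None:
        tf = random.randint(1, 2)   # teaching factor, 1 or 2
    return [[x[j] + rnd() * (teacher[j] - tf * mean[j]) for j in range(dim)]
            for x in pop]
```

With `rnd` returning 0 the population is unchanged, which makes the step easy to sanity-check.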

During the Learner Phase, a student tries to improve his or her knowledge by peer learning from an arbitrary student other than himself or herself. If the peer is better, the student moves towards the peer (40); otherwise, the student moves away from the peer (41). If the student performs better by following (40) or (41), he or she is accepted into the population. The algorithm continues its iterations until reaching the maximum number of generations.
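
The Learner Phase described by (40) and (41) can be sketched in the same style, again under the assumption of the standard TLBO update (the equations are not reproduced legibly above); the peer selection and random weight are injectable for deterministic testing.

```python
import random

def learner_phase(pop, fitness, fobj, rnd=random.random, pick=random.choice):
    """One TLBO Learner Phase (minimization): each student learns from an
    arbitrary peer, moving towards a better peer (40) or away from a worse
    one (41); a move is kept only if it improves the student."""
    new_pop, new_fit = [], []
    n, dim = len(pop), len(pop[0])
    for i in range(n):
        j = pick([k for k in range(n) if k != i])   # arbitrary peer, j != i
        sign = 1.0 if fitness[j] < fitness[i] else -1.0
        cand = [pop[i][k] + sign * rnd() * (pop[j][k] - pop[i][k])
                for k in range(dim)]
        fc = fobj(cand)
        if fc < fitness[i]:          # greedy acceptance
            new_pop.append(cand)
            new_fit.append(fc)
        else:
            new_pop.append(pop[i])
            new_fit.append(fitness[i])
    return new_pop, new_fit
```

Because acceptance is greedy, no student's fitness can worsen across one phase.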

Additionally, when applied to constrained optimization problems, infeasible individuals must be handled appropriately in order to determine whether one individual is better than another. For comparing two individuals, the TLBO algorithm, according to [14–17], utilizes Deb's constraint handling method [4]:
(i) if both individuals are feasible, the fitter individual (with the better value of the fitness function) is preferred;
(ii) if one individual is feasible and the other is infeasible, the feasible individual is preferred;
(iii) if both individuals are infeasible, the individual having the smaller constraint violation (obtained by summing all the normalized constraint violations) is preferred.
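
The three rules can be written directly as a comparator. The sketch below assumes constraints of the form g_k(x) ≤ 0 and that the caller has already normalized the constraint values, as the text describes.

```python
def violation(gs):
    # total violation of constraints expressed as g_k(x) <= 0; the values
    # are assumed to be already normalized by the caller
    return sum(max(0.0, g) for g in gs)

def better(f1, g1, f2, g2):
    """True if solution 1 is preferred over solution 2 under Deb's rules."""
    v1, v2 = violation(g1), violation(g2)
    if v1 == 0.0 and v2 == 0.0:
        return f1 < f2        # (i) both feasible: better fitness wins
    if v1 == 0.0 or v2 == 0.0:
        return v1 == 0.0      # (ii) feasible beats infeasible
    return v1 < v2            # (iii) both infeasible: smaller violation wins
```

Note that in rule (ii) the objective values are ignored entirely: a feasible solution is preferred even when its fitness is far worse.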

Differential Operator. All students can generate new positions in the search space using information derived from different students. To ensure that a student learns from good exemplars and to minimize the time wasted on poor directions, the student is allowed to learn from the exemplars until he or she ceases improving for a certain number of generations, called the refreshing gap. We observe three main differences between the DTLBO algorithm and the original TLBO [4]:
(1) whereas the sensing distance is used to identify the neighboring members of each student as exemplars, this mechanism utilizes the potential of all students as exemplars to guide a student's new position;
(2) instead of learning from the same exemplar student for all dimensions, each dimension of a student can, in general, learn from a different student; in other words, each dimension of a student may learn from the corresponding dimension of a different student according to the proposed equation (42);
(3) the neighbor used for each dimension of a student's position update is chosen randomly (taking care that repetitions are avoided), which improves the thorough exploration capability of the original TLBO and offers a large possibility of avoiding premature convergence in complex optimization problems.

Compared to the original TLBO, the DTLBO algorithm searches more promising regions to find the global optimum. The difference between TLBO and DTLBO is that the differential operator is applied on top of the basic TLBO and, rather greedily, accepts a new solution for a student only when it is better, instead of letting all the students be updated as in KH. The original TLBO is very efficient and powerful but highly prone to premature convergence. Therefore, to evade premature convergence and further improve the exploration ability of the original TLBO, differential guidance is used to tap useful information in all the students when updating the position of a particular student. Equation (42) expresses this differential mechanism: each element of a student's position vector is updated using the corresponding element of a randomly chosen student, with the random peer index generated separately for each dimension and never equal to the index of the student being updated.
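
Since (42) is not reproduced legibly in the source, the following is a hedged sketch of the per-dimension differential guidance as described in words: each dimension j of student i is pulled towards the corresponding dimension of a freshly drawn random peer r ≠ i. The `scale` parameter and all names are our own assumptions.

```python
import random

def differential_update(pop, i, scale=1.0, rnd=random.random,
                        pick=random.choice):
    """Assumed form of the differential mechanism of (42): for every
    dimension j, student i moves towards the j-th component of a peer
    chosen independently for that dimension, never student i itself."""
    peers = [k for k in range(len(pop)) if k != i]
    return [pop[i][j] + scale * rnd() * (pop[pick(peers)][j] - pop[i][j])
            for j in range(len(pop[0]))]
```

Drawing the peer independently per dimension is what distinguishes this operator from the Learner Phase, which uses a single peer for the whole vector.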

Figure 4 shows the differential mechanism for choosing the neighbor students in (42). It assumes that the dimension of the considered problem is 5 and that the student (population) size is 6 during the progress of the search. Once the neighbor students are identified using the sensing distance, the position of the individual is updated (with all neighbor students) as shown in Figure 4. This is an effort to avoid premature convergence and to explore a large promising region in the early phase of the run, so as to search the whole space extensively.

Figure 4: Differential operator illustrated.

5. The Pseudocode of the Proposed Refined TLBO Algorithm

The following steps enumerate the step-by-step procedure of the teaching-learning based optimization algorithm refined using the differential operator scheme.
(1) Initialize the number of students (population), the ranges of the design variables, the iteration count, and the termination criterion.
(2) Randomly generate the students using the design variables.
(3) Evaluate the fitness function using the generated (new) students.
//Teacher Phase//
(4) Calculate the mean of each design variable in the problem.
(5) Identify the best solution amongst the students, based on their fitness values, as the teacher. Use the differential operator scheme to fine-tune the teacher.
(6) Modify all other students with reference to the mean and the teacher identified in steps 4 and 5.
//Learner Phase//
(7) Evaluate the fitness function using the students modified in step 6.
(8) Randomly select any two students and compare their fitness. Modify the student whose fitness value is better than the other's, again using the differential operator scheme. Reject the unfit student.
(9) Replace the student fitness and its corresponding design variables.
(10) Repeat step 8 until all the students have participated in the test, ensuring that no pair of students repeats the test.
(11) Ensure that the final modified student strength equals the original strength, with no duplication of candidates.
(12) Check the termination criterion; if it is not met, repeat from step 4.
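
The steps above can be condensed into a short, self-contained sketch. Everything below is an illustrative assumption rather than the authors' implementation: the differential fine-tuning of steps 5 and 8 is folded into greedy teacher- and peer-based moves, bounds are enforced by clipping, and the objective in the test is a stand-in sphere function, not one of the paper's design problems.

```python
import random

def dtlbo(fobj, bounds, n_pop=20, n_gen=100, seed=1):
    """Minimal sketch of the refined TLBO loop (steps 1-12), minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    # steps 1-3: initialize and evaluate the population
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pop)]
    fit = [fobj(x) for x in pop]
    for _ in range(n_gen):
        # Teacher Phase (steps 4-6): move towards teacher, away from the mean
        t = min(range(n_pop), key=lambda i: fit[i])
        mean = [sum(x[j] for x in pop) / n_pop for j in range(dim)]
        tf = rng.randint(1, 2)                       # teaching factor
        for i in range(n_pop):
            cand = clip([pop[i][j] + rng.random() * (pop[t][j] - tf * mean[j])
                         for j in range(dim)])
            fc = fobj(cand)
            if fc < fit[i]:                          # greedy acceptance
                pop[i], fit[i] = cand, fc
        # Learner Phase (steps 7-10): pairwise peer learning
        for i in range(n_pop):
            j = rng.choice([k for k in range(n_pop) if k != i])
            sign = 1.0 if fit[j] < fit[i] else -1.0
            cand = clip([pop[i][k] + sign * rng.random() * (pop[j][k] - pop[i][k])
                         for k in range(dim)])
            fc = fobj(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    b = min(range(n_pop), key=lambda i: fit[i])
    return pop[b], fit[b]
```

For example, `dtlbo(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)` drives a two-dimensional sphere function towards the origin.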

6. Results and Discussions

In this section, simulation experiments for the above three optimization problems are carried out. For comparison with the proposed TLBO based method, this research also adopts four nature-inspired optimization methods, namely, GA [11], PSO [12], ABC [13], and TLBO [14]. All four methods are original versions without any modification.

Parameter Settings of the Algorithms

Genetic Algorithm. Population size is 100; crossover probability is 0.80; mutation probability is 0.010; number of generations is 3000.

Particle Swarm Optimization. Particle size is 30; number of generations is 3000.

Artificial Bee Colony. Population size is 50; employed bees are 50; onlooker bees are 50; number of generations is 3000.

Teaching-Learning Based Optimization. Population size is 50; number of generations is 3000.

For the proposed TLBO based algorithm, the same parameter values as above are used (Tables 2 and 3). As detailed above, these optimization methods require algorithm-specific parameters that affect their performance: GA requires the crossover probability, mutation rate, and selection method; PSO requires the learning factors, the variation of the inertia weight, and the maximum velocity; ABC requires the numbers of employed and onlooker bees. On the other hand, TLBO requires only the number of individuals and the iteration number (Figures 5, 6, 7, and 8).

Table 2: Best, worst, and mean production cost produced by the various methods for Case 1.
Table 3: Comparison of the results obtained by GA with the published results Case 2.
Figure 5: Convergence plot of the various methods for Case 1.
Figure 6: Convergence (magnified) plot of the various methods for Case 1.
Figure 7: Convergence plot of the various methods for Case 2.
Figure 8: Convergence plot of the various methods for Case 3.

A comparison of the results obtained by GA with the published results is given in Table 6. These results are summarized from the 50 independent trial runs of each technique. It is observed that the optimal values obtained by GA are slightly better than the published results. It is important to highlight that the performance of a GA depends upon its parameter selection; thus, there is still a chance for further improvement of the results, although the GA parameters were selected after a careful study (Tables 4 and 5).

Table 4: Best, worst, and mean production cost produced by the various methods for Case 2.
Table 5: Comparison of the results obtained by GA with the published results Case 3.
Table 6: Best, worst, and mean production cost produced by the various methods for Case 3.

Figure 9 shows the results of the 50 different trial runs of the proposed technique for all three cases, demonstrating its robustness in producing the optimum values.

Figure 9: Final cost of the optimization obtained for all test cases using DTLBO method.

7. Conclusion

In this paper, three different mechanical component optimization problems, namely, weight minimization of a hollow shaft, weight minimization of a belt-pulley drive, and volume minimization of a closed coil helical spring, have been investigated. A new teaching-learning based optimization (TLBO) algorithm with a differential operator has been proposed to solve the above problems and evaluated against different performance criteria, such as the best fitness, the mean solution, the average number of function evaluations required, and the convergence rate. The results show the better performance of the proposed TLBO based algorithm over other nature-inspired optimization methods for the design problems considered. Although this research focuses on three typical mechanical component optimization problems, each with a minimal number of constraints, the proposed method can be extended to the optimization of other engineering design problems, which will be considered in future work.


:Width of the pulley, cm
:Ratio of mean coil dia to wire dia
:Dia of spring wire, cm
:Dia of any pulley, cm
:Dia of the first pulley, cm
:Dia of the third pulley, cm
:Dia of the second pulley, cm
:Dia of the fourth pulley, cm
:Inner dia of hollow shaft, cm
:Outer dia of hollow shaft, cm
:Minimum wire dia, cm
:Mean coil dia of spring, cm
:Maximum outside dia of spring, cm
:Young's modulus, kgf/cm²
:rpm of the second pulley
:rpm of the fourth pulley
:Number of active coils
:rpm of any pulley
:Power transmitted by belt-pulley drive, hp
:Any nonnegative real number
:Allowable shear stress, kgf/cm²
:Thickness of the belt, cm
:Thickness of the first pulley, cm
:Thickness of the third pulley, cm
:Thickness of the second pulley, cm
:Thickness of the fourth pulley, cm
:Twisting moment on shaft, kgf-cm
:Maximum working load, kgf
:Preload compressive force, kgf
:Shear modulus, kgf/cm²
:Polar moment of inertia, cm⁴
:Ratio of inner dia to outer dia
:Spring stiffness, kgf/cm
:Free length, cm
:Maximum free length, cm
:Length of shaft, cm
:rpm of the first pulley
:rpm of the third pulley
:Weight of shaft, kg
:Weight of pulleys, kg
:Tangential velocity of pulley, cm/s
:Volume of spring wire, cm³
:A random number
:Tension at the tight side, kgf
:Tension at the slack side, kgf
:Critical twisting moment, kgf-cm.
Greek Symbols
:Spread factor
:Poisson's ratio
:Cumulative probability
:Perturbance factor
:Deflection under preload, cm
:Maximum perturbance factor
:Allowable maximum deflection under preload, cm
:Deflection from preload to maximum load, cm
:Deflection under maximum working load, cm
:Angle of twist, degree
:Density of shaft material, kg/cm³
:Allowable tensile stress of belt material, kg/cm².

Conflict of Interests

The authors do not have any conflict of interests in this research work.


  1. J. N. Siddall, Optimal Engineering Design, Principles and Application, Marcel Dekker, New York, NY, USA, 1982.
  2. Y. V. M. Reddy and B. S. Reddy, “Optimum design of hollow shaft using graphical techniques,” Journal of Industrial Engineering, vol. 7, p. 10, 1997.
  3. Y. V. M. Reddy, “Optimal design of belt drive using geometric programming,” Journal of Industrial Engineering, vol. 3, p. 21, 1996.
  4. K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 2, p. 115, 1995.
  5. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems,” Computer-Aided Design, vol. 43, no. 3, pp. 303–315, 2011.
  6. S. Rajeev and C. S. Krishnamoorthy, “Discrete optimization of structures using genetic algorithms,” Journal of Structural Engineering, vol. 118, no. 5, pp. 1233–1250, 1992.
  7. R. V. Rao and V. J. Savsani, Mechanical Design Optimization Using Advanced Optimization Techniques, Springer, London, UK, 2012.
  8. A. N. Haq, K. Sivakumar, R. Saravanan, and V. Muthiah, “Tolerance design optimization of machine elements using genetic algorithm,” The International Journal of Advanced Manufacturing Technology, vol. 25, no. 3-4, pp. 385–391, 2005.
  9. A. K. Das and D. K. Pratihar, “Optimal design of machine elements using a genetic algorithm,” Journal of the Institution of Engineers, vol. 83, no. 3, pp. 97–104, 2002.
  10. Y. Peng, S. Wang, J. Zhou, and S. Lei, “Structural design, numerical simulation and control system of a machine tool for stranded wire helical springs,” Journal of Manufacturing Systems, vol. 31, no. 1, pp. 34–41, 2012.
  11. S. Lee, M. Jeong, and B. Kim, “Die shape design of tube drawing process using FE analysis and optimization method,” International Journal of Advanced Manufacturing Technology, vol. 66, no. 1–4, pp. 381–392, 2013.
  12. H. Wang and Z. Zhu, “Optimization of the parameters of hollow axle for high-speed passenger car by orthogonal design,” Journal of Soochow University Engineering Science Edition, vol. 31, no. 6, pp. 6–9, 2011.
  13. S. He, E. Prempain, and Q. H. Wu, “An improved particle swarm optimizer for mechanical design optimization problems,” Engineering Optimization, vol. 36, no. 5, pp. 585–605, 2004.
  14. K. M. Ragsdell and D. T. Phillips, “Optimal design of a class of welded structures using geometric programming,” Journal of Engineering for Industry, Transactions of the ASME, vol. 98, no. 3, pp. 1021–1025, 1976.
  15. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
  16. R. C. Eberhart and Y. Shi, “Particle swarm optimization: developments, applications and resources,” in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, 2001.
  17. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
  18. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems,” Information Sciences, vol. 183, no. 1, pp. 1–15, 2012.
  19. T. Niknam, R. Azizipanah-Abarghooee, and J. Aghaei, “A new modified teaching-learning algorithm for reserve constrained dynamic economic dispatch,” IEEE Transactions on Power Systems, vol. 28, no. 2, pp. 749–763, 2013.
  20. P. K. Roy and S. Bhui, “Multi-objective quasi-oppositional teaching learning based optimization for economic emission load dispatch problem,” International Journal of Electrical Power and Energy Systems, vol. 53, pp. 937–948, 2013.