Journal of Applied Mathematics
Volume 2012 (2012), Article ID 860681, 15 pages
http://dx.doi.org/10.1155/2012/860681
Research Article

An Elite Decision Making Harmony Search Algorithm for Optimization Problem

1Department of Mathematics, Zhejiang A&F University, Zhejiang 311300, China
2Department of Mathematics, Zhejiang Sci-Tech University, Zhejiang 310018, China
3State Key Laboratory of Software Engineering, Wuhan University, Hubei 430072, China

Received 5 April 2012; Revised 26 May 2012; Accepted 10 June 2012

Academic Editor: Ricardo Perera

Copyright © 2012 Lipu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following some probability rule. The generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on standard benchmark optimization problems from the literature, including minimization problems with continuous design variables and with integer variables. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants in finding good solutions.

1. Introduction

In 2001, Geem et al. [1] proposed a new metaheuristic algorithm, the harmony search (HS) algorithm, which imitates the music improvisation process. In that algorithm, a harmony in music is analogous to an optimization solution vector, and the musicians' improvisations are analogous to local and global search schemes in optimization techniques. The HS algorithm does not require initial values for the decision variables. Furthermore, instead of a gradient search, the HS algorithm uses a stochastic random search based on the harmony memory considering rate and the pitch adjusting rate, so that derivative information is unnecessary. These features increase the flexibility of the HS algorithm and have led to its application to optimization problems in different areas, including music composition [2], Sudoku puzzle solving [3], structural design [4, 5], ecological conservation [6], and aquifer parameter identification [7]. Interested readers may refer to the review papers [8–10] and the references therein for further understanding.

The HS algorithm is good at identifying high-performance regions of the solution space in a reasonable time but has trouble performing local search in numerical applications. To improve the fine-tuning characteristic of the HS algorithm, Mahdavi et al. [11] discussed the impact of constant parameters on the HS algorithm and presented a new strategy for tuning these parameters. Wang and Huang [12] used the harmony memory (HM) (the set of solution vectors) to automatically adjust parameter values. Fesanghary et al. [13] used a sequential quadratic programming technique to speed up local search and improve the precision of the HS algorithm's solutions. Omran and Mahdavi [14] proposed the so-called global-best HS algorithm, in which concepts from swarm intelligence are borrowed to enhance the performance of the HS algorithm, such that the new harmony can mimic the best harmony in the HM. Geem [15] proposed a stochastic derivative, based on an HS algorithm, for optimizing problems with discrete variables and problems in which the mathematical derivative of the function cannot be obtained analytically. Pan et al. [16] used the good information captured in the current global best solution to generate new harmonies. Jaberipour and Khorram [17] described two HS algorithms based on a parameter-adjusting technique. Yadav et al. [18] designed an HS algorithm that maintains a proper balance between diversification and intensification throughout the search process by automatically selecting the proper pitch adjustment strategy based on its HM. Pan et al. [19] divided the whole HM into many small-sized sub-HMs, performed the evolution in each sub-HM independently, and thus presented a local-best harmony search algorithm with dynamic subpopulations. Later on, similar mutation and crossover strategies were adopted by Islam et al. [20] in designing an adaptive differential evolution algorithm, which obtained excellent results for global numerical optimization.

In political science and sociology, a small minority (an elite) always holds the most power in making decisions; this is elite decision making. One could imagine that the good information captured in the current elite harmonies can likewise be well utilized to generate new harmonies. Thus, in our elite decision making HS (EDMHS) algorithm, the new harmony is randomly generated between the best and the second-best harmonies in the historic HM, following some probability rule. The generated harmony vector replaces the worst harmony in the HM only if its fitness (measured in terms of the objective function) is better than that of the worst harmony. These generating and updating procedures are repeated until a near-optimal solution vector is obtained. To demonstrate the effectiveness and robustness of the proposed algorithm, various benchmark optimization problems are used, including minimization problems with continuous design variables and with integer variables. Numerical results reveal that the proposed algorithm is very effective.

This paper is organized as follows. Section 2 reviews the general harmony search algorithm and its recently developed variants. Section 3 introduces our method, which has the "elite decision making" property. Section 4 presents numerical results for some well-known benchmark problems. Finally, conclusions are given in the last section.

2. Harmony Search Algorithm

In the whole paper, the optimization problem is specified as follows:

  Minimize f(x), subject to x_i ∈ X_i, i = 1, 2, …, N,  (2.1)

where f(x) is the objective function, x is the vector of decision variables x_i, N is the number of decision variables, and X_i is the range of possible values for each decision variable, that is, X_i = [x_i^L, x_i^U], where x_i^L and x_i^U are the lower and upper bounds of the i-th decision variable, respectively.

2.1. The General HS Algorithm

The general HS algorithm requires the following parameters: HMS (harmony memory size), HMCR (harmony memory considering rate), PAR (pitch adjusting rate), and bw (bandwidth vector).

Remark 2.1. HMCR, PAR, and bw are very important factors for the efficiency of HS methods and can be useful in adjusting the convergence rate of the algorithm toward optimal solutions. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm.
The procedure for a harmony search consists of Steps 1–4.

Step 1. Create and randomly initialize an HM of size HMS. The HM matrix is initially filled with as many solution vectors as the HMS. Each component of a solution vector is generated using a uniformly distributed random number between the lower and upper bounds [x_i^L, x_i^U] of the corresponding decision variable, where i ∈ [1, N].
The HM of size HMS can be represented by the matrix

  HM = [ x_1^1    x_2^1    ⋯  x_N^1
         x_1^2    x_2^2    ⋯  x_N^2
         ⋮        ⋮            ⋮
         x_1^HMS  x_2^HMS  ⋯  x_N^HMS ].  (2.2)

Step 2. Improvise a new harmony from the HM or from the entire possible range. After defining the HM, improvisation is performed by generating a new harmony vector x′ = (x′_1, x′_2, …, x′_N). Each component of the new harmony vector is generated according to

  x′_i ← x′_i ∈ {x_i^1, x_i^2, …, x_i^HMS}  with probability HMCR,
  x′_i ← x′_i ∈ X_i                          with probability 1 − HMCR,  (2.3)

where HMCR is defined as the probability of selecting a component from the HM members, and (1 − HMCR) is, therefore, the probability of generating a component randomly from the possible range of values. Every x′_i obtained from the HM is examined to determine whether it should be pitch adjusted. This operation uses the PAR parameter, which is the rate of pitch adjustment, as follows:

  x′_i ← x′_i ± rand[0, 1] × bw  with probability PAR,
  x′_i ← x′_i                     with probability 1 − PAR,  (2.4)

where rand[0, 1] is a random number uniformly distributed between 0 and 1.

Step 3. Update the HM. If the new harmony is better than the worst harmony in the HM, include the new harmony into the HM and exclude the worst harmony from the HM.

Step 4. Repeat Steps 2 and 3 until the maximum number of searches is reached.
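The four steps above can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the function name, the box-bound representation, and the default parameter values are assumptions made here for demonstration.

```python
import random

def harmony_search(f, lower, upper, hms=20, hmcr=0.9, par=0.3, bw=0.01,
                   max_iter=10000):
    """Minimize f over box bounds [lower, upper] with the general HS scheme."""
    n = len(lower)
    # Step 1: fill the harmony memory (HM) with HMS random solution vectors.
    hm = [[random.uniform(lower[i], upper[i]) for i in range(n)]
          for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(max_iter):
        # Step 2: improvise a new harmony component by component.
        new = []
        for i in range(n):
            if random.random() < hmcr:          # memory consideration
                xi = hm[random.randrange(hms)][i]
                if random.random() < par:       # pitch adjustment
                    xi += random.uniform(-1.0, 1.0) * bw
                    xi = min(max(xi, lower[i]), upper[i])
            else:                               # random selection
                xi = random.uniform(lower[i], upper[i])
            new.append(xi)
        # Step 3: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda k: fit[k])
        fx = f(new)
        if fx < fit[worst]:
            hm[worst], fit[worst] = new, fx
        # Step 4: the loop repeats until max_iter searches are done.
    best = min(range(hms), key=lambda k: fit[k])
    return hm[best], fit[best]
```

On a simple convex function such as the two-dimensional sphere, this sketch drives the best fitness close to zero within a few thousand improvisations.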

2.2. The Improved HS Algorithm

To improve the performance of the HS algorithm and eliminate the drawbacks associated with fixed values of PAR and bw, Mahdavi et al. [11] proposed an improved harmony search (IHS) algorithm that uses variable PAR and bw in the improvisation step. In their method, PAR and bw change dynamically with the generation number as follows:

  PAR(gn) = PAR_min + ((PAR_max − PAR_min) / MaxItr) × gn,  (2.5)

where PAR(gn) is the pitch adjusting rate for each generation, PAR_min is the minimum pitch adjusting rate, PAR_max is the maximum pitch adjusting rate, and MaxItr and gn are the maximum and current search numbers, respectively. Furthermore,

  bw(gn) = bw_max exp(c × gn),  (2.6)

where

  c = ln(bw_min / bw_max) / MaxItr.  (2.7)

Numerical results reveal that the HS algorithm with variable parameters can find better solutions than HS and other heuristic or deterministic methods and is a powerful search algorithm for various engineering optimization problems; see [11].
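The parameter schedules (2.5)–(2.7) are easy to check numerically. The sketch below is illustrative; the function name is ours, and the default ranges are the PAR and bw settings quoted later in Section 4.

```python
import math

def ihs_parameters(gn, max_itr, par_min=0.4, par_max=0.9,
                   bw_min=0.0001, bw_max=1.0):
    """Dynamic PAR and bw of the IHS algorithm, per (2.5)-(2.7)."""
    par = par_min + (par_max - par_min) / max_itr * gn  # (2.5): linear growth
    c = math.log(bw_min / bw_max) / max_itr             # (2.7): decay constant
    bw = bw_max * math.exp(c * gn)                      # (2.6): exponential decay
    return par, bw
```

At gn = 0 the schedules start at (PAR_min, bw_max) and at gn = MaxItr they reach (PAR_max, bw_min), so pitch adjustment becomes more frequent but finer-grained as the search proceeds.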

2.3. Global Best Harmony Search (GHS) Algorithm

In 2008, Omran and Mahdavi [14] presented a GHS algorithm by modifying the pitch adjustment rule. Unlike the basic HS algorithm, the GHS algorithm generates a new harmony vector x′ by making use of the best harmony vector x^best = (x_1^best, x_2^best, …, x_n^best) in the HM. The pitch adjustment rule is given as follows:

  x′_j = x_k^best,  (2.8)

where k is a random integer between 1 and n. The performance of the GHS was investigated and compared with HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems.
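Rule (2.8) has a subtlety worth making explicit: the copied dimension k is drawn at random, so it need not match the index j of the component being adjusted. A minimal sketch (function name and memory layout are our assumptions, with lower fitness taken as better):

```python
import random

def ghs_pitch_adjust(hm, fitness):
    """GHS pitch adjustment (2.8): return a randomly chosen component k of the
    best harmony in memory; k need not equal the index j being filled."""
    best = min(range(len(hm)), key=lambda i: fitness[i])  # best = lowest fitness
    k = random.randrange(len(hm[best]))                   # random dimension k
    return hm[best][k]
```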

2.4. A Self-Adaptive Global Best HS (SGHS) Algorithm

In 2010, Pan et al. [16] presented an SGHS algorithm for solving continuous optimization problems. In that algorithm, a new improvisation scheme is developed so that the good information captured in the current global best solution can be well utilized to generate new harmonies. The pitch adjustment rule is given as follows:

  x′_j = x_j^best,  (2.9)

where j = 1, …, n. Numerical experiments on benchmark problems showed that the proposed SGHS algorithm was more effective in finding better solutions than the existing HS, IHS, and GHS algorithms.

3. An Elite Decision Making HS Algorithm

The key difference between the proposed EDMHS algorithm and the IHS, GHS, and SGHS algorithms lies in the way the new harmony is improvised.

3.1. EDMHS Algorithm for Continuous Design Variables Problems

The EDMHS has exactly the same steps as the IHS, except that the improvisation step is modified as follows.

In this step, a new harmony vector x′ = (x′_1, x′_2, …, x′_N)^T is generated from

  x′_i ← x′_i ∈ [HM(s, i), HM(b, i)]  with probability HMCR,
  x′_i ← x′_i ∈ X_i                    with probability 1 − HMCR,  (3.1)

where HM(s, i) and HM(b, i) are the i-th elements of the second-best harmony and the best harmony in the HM, respectively.
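Rule (3.1) can be sketched as follows. Since the paper says the new component is randomly generated between the two elite harmonies, we assume a uniform draw over that interval; the function name and memory layout are illustrative assumptions.

```python
import random

def edmhs_improvise(hm, fitness, lower, upper, hmcr=0.9):
    """Sketch of rule (3.1): with probability HMCR each component is drawn
    uniformly from the interval spanned by the best and second-best
    harmonies; otherwise it is drawn from the whole range X_i."""
    order = sorted(range(len(hm)), key=lambda i: fitness[i])
    best, second = hm[order[0]], hm[order[1]]
    new = []
    for i in range(len(lower)):
        if random.random() < hmcr:
            lo, hi = sorted((second[i], best[i]))
            new.append(random.uniform(lo, hi))          # elite interval
        else:
            new.append(random.uniform(lower[i], upper[i]))
    return new
```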

3.2. EDMHS Algorithm for Integer Variables Problems

Many real-world applications require the variables to be integers. Methods developed for continuous variables can be applied to such problems by rounding the real optimum values to the nearest integers [14, 21]. However, in many cases, this rounding-off approach may result in an infeasible solution or a poor suboptimal solution, and it may miss alternative solutions.

In the EDMHS algorithm for integer programming, we generate an integer solution vector in both the initialization step and the improvisation step; that is, each component of the new harmony vector is generated according to

  x′_i ← round(x′_i), x′_i ∈ [HM(s, i), HM(b, i)]  with probability HMCR,
  x′_i ← x′_i ∈ X_i                                 with probability 1 − HMCR,  (3.2)

where round(·) denotes rounding to the nearest integer. The pitch adjustment operates as follows:

  x′_i ← x′_i ± 1  with probability PAR,
  x′_i ← x′_i       with probability 1 − PAR.  (3.3)
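Rules (3.2)–(3.3) can be sketched along the same lines as the continuous case. The uniform draw over the elite interval, the function name, and the parameter defaults are our assumptions for illustration.

```python
import random

def edmhs_integer_improvise(hm, fitness, lower, upper, hmcr=0.9, par=0.4):
    """Sketch of rules (3.2)-(3.3): round a draw from the interval spanned by
    the best and second-best harmonies, then shift by +/-1 with probability PAR."""
    order = sorted(range(len(hm)), key=lambda i: fitness[i])
    best, second = hm[order[0]], hm[order[1]]
    new = []
    for i in range(len(lower)):
        if random.random() < hmcr:
            lo, hi = sorted((second[i], best[i]))
            xi = round(random.uniform(lo, hi))      # rule (3.2): integer draw
            if random.random() < par:
                xi += random.choice((-1, 1))        # rule (3.3): +/-1 adjustment
        else:
            xi = random.randint(lower[i], upper[i])
        new.append(xi)
    return new
```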

4. Numerical Examples

This section examines the performance of the EDMHS algorithm on examples with continuous and integer variables. Several examples taken from the optimization literature are used to show the validity and effectiveness of the proposed algorithm. The parameters for all the algorithms are as follows: HMS = 20, HMCR = 0.90, PAR_min = 0.4, PAR_max = 0.9, bw_min = 0.0001, and bw_max = 1.0. During the run, PAR and bw are generated according to (2.5) and (2.6), respectively.

4.1. Some Simple Continuous Variables Examples

For the following five examples, we adopt the same variable ranges as presented in [4]. Each problem is run for 5 independent replications; the mean fitness of the solutions for the four HS variants, IHS, GHS, SGHS, and EDMHS, is presented in the tables.

4.1.1. Rosenbrock Function

Consider the following:

  f(x) = 100(x_2 − x_1^2)^2 + (1 − x_1)^2.  (4.1)

Due to a long, narrow, curved valley present in the function, the Rosenbrock function [4, 22] is probably the best-known test case. The minimum of the function is located at x* = (1.0, 1.0) with a corresponding objective function value of f(x*) = 0.0. The four algorithms were applied to the Rosenbrock function using bounds between −10.0 and 10.0 for the two design variables x_1 and x_2. After 50,000 searches, we arrived at Table 1.
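As a quick sanity check, the standard two-dimensional Rosenbrock function and its stated minimum can be verified directly (the function name is ours):

```python
def rosenbrock2(x1, x2):
    """Two-dimensional Rosenbrock function, as in (4.1)."""
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2
```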

Table 1: Four HS algorithms for Rosenbrock function.
4.1.2. Goldstein and Price Function I (with Four Local Minima)

Consider the following:

  f(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)]
         × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)].  (4.2)

The Goldstein and Price function I [4, 13, 23] is an eighth-order polynomial in two variables. The function has four local minima, one of which is global, as follows: f(1.2, 0.8) = 840.0, f(1.8, 0.2) = 84.0, f(−0.6, −0.4) = 30, and f(0.0, −1.0) = 3.0 (the global minimum). In this example, the bounds for the two design variables x_1 and x_2 were set between −5.0 and 5.0. After 8,000 searches, we arrived at Table 2.
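The standard Goldstein and Price function and the stated minima can be checked numerically (the function name is ours):

```python
def goldstein_price(x1, x2):
    """Goldstein and Price function I, as in (4.2)."""
    a = 1 + (x1 + x2 + 1) ** 2 * (19 - 14 * x1 + 3 * x1 ** 2
                                  - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2)
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (18 - 32 * x1 + 12 * x1 ** 2
                                       + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2)
    return a * b
```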

Table 2: Four HS algorithms for Goldstein and Price function I.
4.1.3. Eason and Fenton’s Gear Train Inertia Function

Consider the following:

  f(x) = (1/10) [12 + x_1^2 + (1 + x_2^2)/x_1^2 + (x_1^2 x_2^2 + 100)/(x_1 x_2)^4].  (4.3)

This function [4, 24] is a minimization problem for the inertia of a gear train. The minimum of the function is located at x* = (1.7435, 2.0297) with a corresponding objective function value of f(x*) = 1.744152006740573. The four algorithms were applied to the gear train inertia function problem using bounds between 0.0 and 10.0 for the two design variables x_1 and x_2. After 800 searches, we arrived at Table 3.
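The stated minimum can be verified by evaluating the standard Eason and Fenton function at the reported minimizer (the function name is ours):

```python
def gear_train_inertia(x1, x2):
    """Eason and Fenton's gear train inertia function, as in (4.3)."""
    return 0.1 * (12 + x1 ** 2 + (1 + x2 ** 2) / x1 ** 2
                  + (x1 ** 2 * x2 ** 2 + 100) / (x1 * x2) ** 4)
```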

Table 3: Four HS algorithms for Eason and Fenton's gear train inertia function.
4.1.4. Wood Function

Consider the following:

  f(x) = 100(x_2 − x_1^2)^2 + (1 − x_1)^2 + 90(x_4 − x_3^2)^2 + (1 − x_3)^2
         + 10.1[(x_2 − 1)^2 + (x_4 − 1)^2] + 19.8(x_2 − 1)(x_4 − 1).  (4.4)

The Wood function [4, 25] is a fourth-degree polynomial that is a particularly good test of convergence criteria and simulates features of many physical problems quite well. The minimum solution of the function is obtained at x* = (1, 1, 1, 1)^T, and the corresponding objective function value is f(x*) = 0.0. When applying the four algorithms to the function, the four design variables x_1, x_2, x_3, x_4 were initialized with random values bounded between −5.0 and 5.0. After 70,000 searches, we arrived at Table 4.
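The standard Wood function and its minimum can be checked directly (the function name is ours):

```python
def wood(x1, x2, x3, x4):
    """Wood function, as in (4.4)."""
    return (100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2
            + 90 * (x4 - x3 ** 2) ** 2 + (1 - x3) ** 2
            + 10.1 * ((x2 - 1) ** 2 + (x4 - 1) ** 2)
            + 19.8 * (x2 - 1) * (x4 - 1))
```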

Table 4: Four HS algorithms for Wood function.
4.1.5. Powell Quartic Function

Consider the following:

  f(x) = (x_1 + 10x_2)^2 + 5(x_3 − x_4)^2 + (x_2 − 2x_3)^4 + 10(x_1 − x_4)^4.  (4.5)

Since the second derivative of the Powell quartic function [4, 26] becomes singular at the minimum point, it is quite difficult to obtain the minimum solution (i.e., f(0, 0, 0, 0) = 0.0) using gradient-based algorithms. When applying the EDMHS algorithm to the function, the four design variables x_1, x_2, x_3, x_4 were initialized with random values bounded between −5.0 and 5.0. After 50,000 searches, we arrived at Table 5.
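The standard Powell quartic function and its minimum can likewise be checked (the function name is ours):

```python
def powell_quartic(x1, x2, x3, x4):
    """Powell quartic function, as in (4.5)."""
    return ((x1 + 10 * x2) ** 2 + 5 * (x3 - x4) ** 2
            + (x2 - 2 * x3) ** 4 + 10 * (x1 - x4) ** 4)
```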

Table 5: Four HS algorithms for Powell quartic function.

It can be seen from Tables 1–5 that, compared with the IHS, GHS, and SGHS algorithms, the EDMHS produces much better results on the test functions. Figures 1–5 present typical solution history graphs along the iterations for the five functions, respectively. It can be observed that the evolution curves of the EDMHS algorithm reach a lower level than those of the compared algorithms. Thus, it can be concluded that, overall, the EDMHS algorithm outperforms the other methods on the above examples.

Figure 1: Convergence of Rosenbrock function.
Figure 2: Convergence of Goldstein and Price function I.
Figure 3: Convergence of Eason and Fenton function.
Figure 4: Convergence of Wood function.
Figure 5: Convergence of Powell quartic function.
4.2. More Benchmark Problems with 30 Dimensions

To test the performance of the proposed EDMHS algorithm more extensively, we proceed to evaluate and compare the IHS, GHS, SGHS, and EDMHS algorithms on the following 6 benchmark optimization problems listed in CEC2005 [27] with 30 dimensions.

(1) Sphere function:

  f(x) = Σ_{i=1}^{n} x_i^2,  (4.6)

where the global optimum is x* = 0 and f(x*) = 0 for −100 ≤ x_i ≤ 100.

(2) Schwefel's problem:

  f(x) = −Σ_{i=1}^{n} x_i sin(√|x_i|),  (4.7)

where the global optimum is x* = (420.9687, …, 420.9687) and f(x*) = −12569.5 for −500 ≤ x_i ≤ 500.

(3) Griewank function:

  f(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1,  (4.8)

where the global optimum is x* = 0 and f(x*) = 0 for −600 ≤ x_i ≤ 600.

(4) Rastrigin function:

  f(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10],  (4.9)

where the global optimum is x* = 0 and f(x*) = 0 for −5.12 ≤ x_i ≤ 5.12.

(5) Ackley's function:

  f(x) = −20 exp(−0.2 √((1/30) Σ_{i=1}^{n} x_i^2)) − exp((1/30) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,  (4.10)

where the global optimum is x* = 0 and f(x*) = 0 for −32 ≤ x_i ≤ 32.

(6) Rosenbrock's function:

  f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2],  (4.11)

where the global optimum is x* = (1, …, 1) and f(x*) = 0 for −5 ≤ x_i ≤ 10.
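These six standard benchmark functions can be written down compactly and their optima checked (function names are ours; for the standard Schwefel problem the minimum value is negative, about −418.9829 n, so it is only checked approximately here for n = 30):

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def schwefel(x):
    return -sum(v * math.sin(math.sqrt(abs(v))) for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    n = len(x)  # n = 30 in the paper's setting, matching the 1/30 factors
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / n)
            + 20.0 + math.e)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```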

The parameters for the IHS algorithm are HMS = 5, HMCR = 0.9, bw_max = (x_j^U − x_j^L)/20, bw_min = 0.0001, PAR_min = 0.01, and PAR_max = 0.99; for the GHS algorithm, HMS = 5, HMCR = 0.9, PAR_min = 0.01, and PAR_max = 0.99.

Table 6 presents the average error (AE) values and standard deviations (SD) over 30 independent runs of the compared HS algorithms on the 6 test functions with dimension equal to 30.

Table 6: AE and SD generated by the compared algorithms.
4.3. Integer Variables Examples

Six commonly used integer programming benchmark problems are chosen to investigate the performance of the EDMHS integer algorithm. For all the examples, the design variables x_i, i = 1, …, N, are initialized with random integer values bounded between −100 and 100. Each problem is run for 5 independent replications, each with approximately 800 searches, and the optimal solution vectors are obtained in all cases.

4.3.1. Test Problem 1

Consider the following:

  f_1(x) = (9x_1^2 + 2x_2^2 − 11)^2 + (3x_1 + 4x_2^2 − 7)^2,  (4.12)

where x* = (1, 1)^T and f_1(x*) = 0; see [14, 21, 28].
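The stated optimum of (4.12) is easy to confirm (the function name is ours):

```python
def f1(x1, x2):
    """Integer test problem 1, as in (4.12)."""
    return (9 * x1 ** 2 + 2 * x2 ** 2 - 11) ** 2 + (3 * x1 + 4 * x2 ** 2 - 7) ** 2
```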

4.3.2. Test Problem 2

Consider the following:

  f_2(x) = (x_1 + 10x_2)^2 + 5(x_3 − x_4)^2 + (x_2 − 2x_3)^4 + 10(x_1 − x_4)^4,  (4.13)

where x* = (0, 0, 0, 0)^T and f_2(x*) = 0; see [14, 21, 28].

4.3.3. Test Problem 3

Consider the following:

  f_3(x) = 2x_1^2 + 3x_2^2 + 4x_1x_2 − 6x_1 − 3x_2,  (4.14)

with three optimal solutions

  x*_1 = (4, −2)^T, x*_2 = (3, −2)^T, x*_3 = (2, −1)^T,  (4.15)

and f_3(x*) = −6; see [14, 21, 29].

4.3.4. Test Problem 4

Consider the following:

  f_4(x) = x^T x,  (4.16)

where x* = (0, 0, 0, 0, 0)^T and f_4(x*) = 0; see [14, 21, 30].

4.3.5. Test Problem 5

Consider the following:

  f_5(x) = −(15, 27, 36, 18, 12) x + x^T Q x,  Q = [  35  −20  −10   32  −10
                                                     −20   40   −6  −31   32
                                                     −10   −6   11   −6  −10
                                                      32  −31   −6   38  −20
                                                     −10   32  −10  −20   31 ],  (4.17)

with two optimal solutions x* = (0, 11, 22, 16, 6)^T and x* = (0, 12, 23, 17, 6)^T and f_5(x*) = −737; see [21, 28].

4.3.6. Test Problem 6

Consider the following:

  f_6(x) = 3803.84 − 138.08x_1 − 232.92x_2 + 123.08x_1^2 + 203.64x_2^2 + 182.25x_1x_2,  (4.18)

where x* = (0, 1)^T and f_6(x*) = 3833.12; see [21, 28].

5. Conclusion

This paper presented an EDMHS algorithm for solving continuous and integer optimization problems. The proposed EDMHS algorithm applies a newly designed scheme to generate candidate solutions so as to benefit from the good information inherent in the best and second-best solutions in the historic HM.

Further work is still needed to investigate the behavior of EDMHS and to adapt this strategy to real-world optimization problems.

Acknowledgments

This research was supported by a grant from the National Natural Science Foundation of China (no. 11171373) and a grant from the Natural Science Foundation of Zhejiang Province (no. LQ12A01024).

References

  1. Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, no. 2, pp. 60–68, 2001.
  2. Z. W. Geem and J. Y. Choi, "Music composition using harmony search algorithm," in Proceedings of the Applications of Evolutionary Computing, pp. 593–600, April 2007.
  3. Z. Geem, "Harmony search algorithm for solving Sudoku," in Knowledge-Based Intelligent Information and Engineering Systems, pp. 371–378, Springer.
  4. K. S. Lee and Z. W. Geem, "A new structural optimization method based on the harmony search algorithm," Computers and Structures, vol. 82, no. 9-10, pp. 781–798, 2004.
  5. M. P. Saka, "Optimum geometry design of geodesic domes using harmony search algorithm," Advances in Structural Engineering, vol. 10, no. 6, pp. 595–606, 2007.
  6. Z. Geem and J. Williams, "Ecological optimization using harmony search," in Proceedings of the American Conference on Applied Mathematics, pp. 24–26, 2008.
  7. M. T. Ayvaz, "Simultaneous determination of aquifer parameters and zone structures with fuzzy c-means clustering and meta-heuristic harmony search algorithm," Advances in Water Resources, vol. 30, no. 11, pp. 2326–2338, 2007.
  8. Z. W. Geem, "Harmony search applications in industry," Soft Computing Applications in Industry, vol. 226, pp. 117–134, 2008.
  9. Z. Geem, Music-Inspired Harmony Search Algorithm: Theory and Applications, vol. 191, Springer, 2009.
  10. G. Ingram and T. Zhang, "Overview of applications and developments in the harmony search algorithm," Music-Inspired Harmony Search Algorithm, vol. 191, pp. 15–37, 2009.
  11. M. Mahdavi, M. Fesanghary, and E. Damangir, "An improved harmony search algorithm for solving optimization problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
  12. C. M. Wang and Y. F. Huang, "Self-adaptive harmony search algorithm for optimization," Expert Systems with Applications, vol. 37, no. 4, pp. 2826–2837, 2010.
  13. M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, and Y. Alizadeh, "Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems," Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 33-40, pp. 3080–3091, 2008.
  14. M. G. H. Omran and M. Mahdavi, "Global-best harmony search," Applied Mathematics and Computation, vol. 198, no. 2, pp. 643–656, 2008.
  15. Z. W. Geem, "Novel derivative of harmony search algorithm for discrete design variables," Applied Mathematics and Computation, vol. 199, no. 1, pp. 223–230, 2008.
  16. Q.-K. Pan, P. N. Suganthan, M. F. Tasgetiren, and J. J. Liang, "A self-adaptive global best harmony search algorithm for continuous optimization problems," Applied Mathematics and Computation, vol. 216, no. 3, pp. 830–848, 2010.
  17. M. Jaberipour and E. Khorram, "Two improved harmony search algorithms for solving engineering optimization problems," Communications in Nonlinear Science and Numerical Simulation, vol. 15, pp. 3316–3331, 2010.
  18. P. Yadav, R. Kumar, S. Panda, and C. Chang, "An intelligent tuned harmony search algorithm for optimisation," Information Sciences, vol. 196, pp. 47–72, 2012.
  19. Q. Pan, P. Suganthan, J. Liang, and M. Tasgetiren, "A local-best harmony search algorithm with dynamic subpopulations," Engineering Optimization, vol. 42, pp. 101–117, 2010.
  20. S. Islam, S. Das, S. Ghosh, S. Roy, and P. Suganthan, "An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, vol. 42, no. 2, pp. 482–500, 2012.
  21. E. Laskari, K. Parsopoulos, and M. Vrahatis, "Particle swarm optimization for integer programming," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, pp. 1582–1587.
  22. H. H. Rosenbrock, "An automatic method for finding the greatest or least value of a function," The Computer Journal, vol. 3, pp. 175–184, 1960.
  23. A. A. Goldstein and J. F. Price, "On descent from local minima," Mathematics of Computation, vol. 25, pp. 569–574, 1971.
  24. E. D. Eason and R. G. Fenton, "A comparison of numerical optimization methods for engineering design," Journal of Engineering for Industry, vol. 96, no. 1, pp. 196–200, 1974.
  25. A. Colville, A Comparative Study on Nonlinear Programming Codes, IBM Corporation, Philadelphia Scientific Center, 1970.
  26. A. Conn, K. Scheinberg, and P. Toint, "On the convergence of derivative-free methods for unconstrained optimization," in Approximation Theory and Optimization: Tributes to M. J. D. Powell, pp. 83–108, 1997.
  27. P. Suganthan, N. Hansen, J. Liang et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Tech. Rep. 2005005, Nanyang Technological University, Singapore, 2005.
  28. A. Glankwahmdee, S. Judith, and L. Gary, "Unconstrained discrete nonlinear programming," Engineering Optimization, vol. 4, no. 2, pp. 95–107, 1979.
  29. S. S. Rao, Engineering Optimization: Theory and Practice, John Wiley & Sons, Hoboken, NJ, USA, 2009.
  30. G. Rudolph, "An evolutionary algorithm for integer programming," in Proceedings of the 3rd Conference on Parallel Problem Solving from Nature (PPSN '94), pp. 139–148, Jerusalem, Israel, October 1994.