Abstract

This paper describes a new variant of the harmony search algorithm inspired by the well-known phenomenon of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following a probability rule. A newly generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems from the literature, including minimization problems with continuous design variables and with integer variables. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants in finding good solutions.

1. Introduction

In 2001, Geem et al. [1] proposed a new metaheuristic algorithm, the harmony search (HS) algorithm, which imitates the music improvisation process. In that algorithm, a harmony in music is analogous to a solution vector of the optimization problem, and the musicians' improvisations are analogous to the local and global search schemes of optimization techniques. The HS algorithm does not require initial values for the decision variables. Furthermore, instead of a gradient search, the HS algorithm uses a stochastic random search based on the harmony memory considering rate and the pitch adjusting rate, so that derivative information is unnecessary. These features increase the flexibility of the HS algorithm and have led to its application to optimization problems in different areas, including music composition [2], Sudoku puzzle solving [3], structural design [4, 5], ecological conservation [6], and aquifer parameter identification [7]. Interested readers may refer to the review papers [8-10] and the references therein.

The HS algorithm is good at identifying high-performance regions of the solution space in a reasonable time but struggles to perform local search in numerical applications. In order to improve the fine-tuning characteristic of the HS algorithm, Mahdavi et al. [11] discussed the impact of constant parameters on the HS algorithm and presented a new strategy for tuning these parameters. Wang and Huang [12] used the harmony memory (HM) (the set of solution vectors) to automatically adjust parameter values. Fesanghary et al. [13] used a sequential quadratic programming technique to speed up local search and improve the precision of the HS algorithm's solutions. Omran and Mahdavi [14] proposed the so-called global best HS algorithm, in which concepts from swarm intelligence are borrowed to enhance the performance of the HS algorithm, so that the new harmony can mimic the best harmony in the HM. Also, Geem [15] proposed a stochastic derivative, based on the HS algorithm, for optimizing problems with discrete variables and problems in which the mathematical derivative of the function cannot be obtained analytically. Pan et al. [16] used the good information captured in the current global best solution to generate new harmonies. Jaberipour and Khorram [17] described two HS algorithms based on a parameter-adjusting technique. Yadav et al. [18] designed an HS algorithm that maintains a proper balance between diversification and intensification throughout the search process by automatically selecting the proper pitch adjustment strategy based on its HM. Pan et al. [19] divided the whole HM into many small-sized sub-HMs and performed the evolution in each sub-HM independently, thus presenting a local-best harmony search algorithm with dynamic subpopulations. Later on, the mutation and crossover strategies used in [19] were adopted by Islam et al. [20] in designing a differential evolution algorithm, which obtained excellent results for global numerical optimization.

In political science and sociology, a small minority (the elite) always holds the most power in making decisions; this is elite decision making. One could imagine that the good information captured in the current elite harmonies can likewise be utilized to generate new harmonies. Thus, in our elite decision making HS (EDMHS) algorithm, each new harmony is randomly generated between the best and the second-best harmonies in the historic HM, following a probability rule. The generated harmony vector replaces the worst harmony in the HM only if its fitness (measured in terms of the objective function) is better than that of the worst harmony. These generating and updating procedures repeat until a near-optimal solution vector is obtained. To demonstrate the effectiveness and robustness of the proposed algorithm, various benchmark optimization problems are used, including minimization problems with continuous design variables and with integer variables. Numerical results reveal that the proposed algorithm is very effective.

This paper is organized as follows. In Section 2, the general harmony search algorithm and its recently developed variants are reviewed. Section 3 introduces our method, which has the "elite decision making" property. Section 4 presents numerical results for some well-known benchmark problems. Finally, conclusions are given in the last section.

2. Harmony Search Algorithm

Throughout the paper, the optimization problem is specified as follows:
\[
\text{Minimize } f(x), \quad \text{subject to } x_i \in X_i, \; i = 1, 2, \ldots, N, \tag{2.1}
\]
where $f(x)$ is the objective function, $x$ is the vector of decision variables $x_i$, $N$ is the number of decision variables, and $X_i$ is the range of possible values for each decision variable, that is, $X_i = \{x_i : x_i^L \le x_i \le x_i^U\}$, where $x_i^L$ and $x_i^U$ are the lower and upper bounds of each decision variable, respectively.

2.1. The General HS Algorithm

The general HS algorithm requires the following parameters:
HMS: harmony memory size;
HMCR: harmony memory considering rate;
PAR: pitch adjusting rate;
bw: bandwidth vector.

Remark 2.1. HMCR, PAR, and bw are very important factors for the high efficiency of HS methods and can be useful in adjusting the convergence rate of the algorithm to the optimal solution. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm.
The procedure for harmony search consists of Steps 1-4.

Step 1. Create and randomly initialize an HM of size HMS. The HM matrix is initially filled with as many solution vectors as the HMS. Each component of a solution vector is generated using a uniformly distributed random number between the lower and upper bounds of the corresponding decision variable, $[x_i^L, x_i^U]$, where $i \in [1, N]$.
The HM of size HMS can be represented by a matrix as
\[
\text{HM} = \begin{bmatrix}
x_1^1 & x_2^1 & \cdots & x_N^1 \\
x_1^2 & x_2^2 & \cdots & x_N^2 \\
\vdots & \vdots & & \vdots \\
x_1^{\text{HMS}} & x_2^{\text{HMS}} & \cdots & x_N^{\text{HMS}}
\end{bmatrix}. \tag{2.2}
\]
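For concreteness, a minimal Python sketch of this initialization step might look as follows (the helper name `initialize_hm` and the use of NumPy are our own illustration, not part of the original algorithm description):

```python
import numpy as np

def initialize_hm(f, lower, upper, hms, rng):
    """Step 1: fill the harmony memory with HMS random solution vectors,
    each component drawn uniformly from [x_i^L, x_i^U]."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    hm = rng.uniform(lower, upper, size=(hms, lower.size))
    fitness = np.apply_along_axis(f, 1, hm)  # objective value of each row
    return hm, fitness

# Example: a 20-vector memory for a 2-variable problem on [-10, 10]^2.
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x**2))
hm, fit = initialize_hm(sphere, [-10, -10], [10, 10], hms=20, rng=rng)
```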

Step 2. Improvise a new harmony from the HM or from the entire possible range. After defining the HM, the improvisation is performed by generating a new harmony vector $x' = (x_1', x_2', \ldots, x_N')$. Each component of the new harmony vector is generated according to
\[
x_i' \leftarrow \begin{cases}
x_i' \in \text{HM}(\cdot, i) & \text{with probability HMCR}, \\
x_i' \in X_i & \text{with probability } 1 - \text{HMCR},
\end{cases} \tag{2.3}
\]
where HMCR is defined as the probability of selecting a component from the HM members, and $1 - \text{HMCR}$ is, therefore, the probability of generating a component randomly from the possible range of values. Every $x_i'$ obtained from the HM is examined to determine whether it should be pitch adjusted. This operation uses the parameter PAR, which is the rate of pitch adjustment, as follows:
\[
x_i' \leftarrow \begin{cases}
x_i' \pm \text{rand}[0,1] \times \text{bw} & \text{with probability PAR}, \\
x_i' & \text{with probability } 1 - \text{PAR},
\end{cases} \tag{2.4}
\]
where $\text{rand}[0,1]$ is a randomly generated number between 0 and 1.
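A sketch of this improvisation step in Python might read as follows (the bound clipping at the end is a common safeguard we add for illustration; it is not stated in (2.3)-(2.4)):

```python
import numpy as np

def improvise(hm, lower, upper, hmcr, par, bw, rng):
    """Step 2: generate a new harmony following the memory-consideration
    rule (2.3) and the pitch-adjustment rule (2.4)."""
    hms, n = hm.shape
    x_new = np.empty(n)
    for i in range(n):
        if rng.random() < hmcr:
            # memory consideration: i-th component of a random HM member
            x_new[i] = hm[rng.integers(hms), i]
            if rng.random() < par:
                # pitch adjustment: shift by at most one bandwidth
                x_new[i] += rng.choice([-1.0, 1.0]) * rng.random() * bw
        else:
            # random selection from the whole admissible range X_i
            x_new[i] = rng.uniform(lower[i], upper[i])
    # keep adjusted components inside the bounds (our safeguard)
    return np.clip(x_new, lower, upper)
```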

Step 3. Update the HM. If the new harmony is better than the worst harmony in the HM, include the new harmony into the HM and exclude the worst harmony from the HM.

Step 4. Repeat Steps 2 and 3 until the maximum number of searches is reached.
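Steps 3 and 4 then amount to a simple replace-the-worst loop; a sketch continuing the helpers above (assuming minimization and an `improvise_fn` closure like the one just shown):

```python
import numpy as np

def hs_loop(f, hm, fit, improvise_fn, max_iter):
    """Steps 3-4: improvise, replace the worst HM member whenever the
    new harmony improves on it, and repeat until the budget is spent."""
    for _ in range(max_iter):
        x_new = improvise_fn(hm)
        f_new = f(x_new)
        worst = int(np.argmax(fit))     # worst = largest objective value
        if f_new < fit[worst]:          # Step 3: update the HM
            hm[worst], fit[worst] = x_new, f_new
    best = int(np.argmin(fit))
    return hm[best], fit[best]
```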

2.2. The Improved HS Algorithm

To improve the performance of the HS algorithm and eliminate the drawbacks associated with fixed values of PAR and bw, Mahdavi et al. [11] proposed an improved harmony search (IHS) algorithm that uses variable PAR and bw in the improvisation step. In their method, PAR and bw change dynamically with the generation number, as expressed below:
\[
\text{PAR}(\text{gn}) = \text{PAR}_{\min} + \frac{\text{PAR}_{\max} - \text{PAR}_{\min}}{\text{MaxItr}} \times \text{gn}, \tag{2.5}
\]
where $\text{PAR}(\text{gn})$ is the pitch adjusting rate for each generation, $\text{PAR}_{\min}$ is the minimum pitch adjusting rate, $\text{PAR}_{\max}$ is the maximum pitch adjusting rate, and MaxItr and gn are the maximum and current search numbers, respectively. We have
\[
\text{bw}(\text{gn}) = \text{bw}_{\max} e^{c \times \text{gn}}, \tag{2.6}
\]
where
\[
c = \frac{\ln\left(\text{bw}_{\min}/\text{bw}_{\max}\right)}{\text{MaxItr}}. \tag{2.7}
\]
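In code, the two schedules are one-liners; a minimal sketch (the default values match the parameter settings used later in Section 4):

```python
import math

def ihs_parameters(gn, max_itr, par_min=0.4, par_max=0.9,
                   bw_min=0.0001, bw_max=1.0):
    """Dynamic PAR and bw of the IHS algorithm, eqs. (2.5)-(2.7)."""
    par = par_min + (par_max - par_min) / max_itr * gn   # eq. (2.5)
    c = math.log(bw_min / bw_max) / max_itr              # eq. (2.7)
    bw = bw_max * math.exp(c * gn)                       # eq. (2.6)
    return par, bw

# PAR rises linearly from 0.4 to 0.9 while bw decays from 1.0 to 0.0001:
print(ihs_parameters(0, 50_000))       # approximately (0.4, 1.0)
print(ihs_parameters(50_000, 50_000))  # approximately (0.9, 0.0001)
```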

Numerical results reveal that the HS algorithm with variable parameters can find better solutions than HS and other heuristic or deterministic methods and is a powerful search algorithm for various engineering optimization problems; see [11].

2.3. Global Best Harmony Search (GHS) Algorithm

In 2008, Omran and Mahdavi [14] presented the GHS algorithm by modifying the pitch adjustment rule. Unlike the basic HS algorithm, the GHS algorithm generates a new harmony vector $x'$ by making use of the best harmony vector $x^{\text{best}} = (x_1^{\text{best}}, x_2^{\text{best}}, \ldots, x_n^{\text{best}})$ in the HM. The pitch adjustment rule is given as follows:
\[
x_j' = x_k^{\text{best}}, \tag{2.8}
\]
where $k$ is a random integer between 1 and $n$. The performance of the GHS was investigated and compared with HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems.
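Interpreted per component, rule (2.8) can be sketched as follows (the function name is ours; `fit` holds the objective value of each HM row, so the best harmony is the row with the smallest value):

```python
import numpy as np

def ghs_component(hm, fit, rng):
    """GHS rule (2.8): the new component x'_j is copied from a randomly
    chosen position k of the best harmony in the HM."""
    best = hm[int(np.argmin(fit))]      # best harmony (minimization)
    k = int(rng.integers(hm.shape[1]))  # random position, 1 <= k <= n
    return best[k]
```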

2.4. A Self-Adaptive Global Best HS (SGHS) Algorithm

In 2010, Pan et al. [16] presented an SGHS algorithm for solving continuous optimization problems. In that algorithm, a new improvisation scheme is developed so that the good information captured in the current global best solution can be well utilized to generate new harmonies. The pitch adjustment rule is given as follows:
\[
x_j' = x_j^{\text{best}}, \tag{2.9}
\]
where $j = 1, \ldots, n$. Numerical experiments on benchmark problems showed that the SGHS algorithm was more effective in finding better solutions than the existing HS, IHS, and GHS algorithms.
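The corresponding sketch of rule (2.9) differs from the GHS rule only in that the position is preserved:

```python
import numpy as np

def sghs_component(hm, fit, j):
    """SGHS rule (2.9): the new component x'_j is taken from the same
    position j of the current global best harmony."""
    best = hm[int(np.argmin(fit))]
    return best[j]
```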

3. An Elite Decision Making HS Algorithm

The key difference between the proposed EDMHS algorithm and the IHS, GHS, and SGHS algorithms lies in the way the new harmony is improvised.

3.1. EDMHS Algorithm for Continuous Design Variables Problems

The EDMHS has exactly the same steps as the IHS, with the exception that Step 2 (improvisation) is modified as follows.

In this step, a new harmony vector $x' = (x_1', x_2', \ldots, x_N')^T$ is generated from
\[
x_i' \leftarrow \begin{cases}
x_i' \in [\text{HM}(s, i), \text{HM}(b, i)] & \text{with probability HMCR}, \\
x_i' \in X_i & \text{with probability } 1 - \text{HMCR},
\end{cases} \tag{3.1}
\]
where $\text{HM}(s, i)$ and $\text{HM}(b, i)$ are the $i$th elements of the second-best harmony and the best harmony in the HM, respectively.
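Our reading of rule (3.1) can be sketched in Python as follows (pitch adjustment, which proceeds as in the IHS, is omitted here; the `min`/`max` guard simply orders the interval endpoints):

```python
import numpy as np

def edmhs_improvise(hm, fit, lower, upper, hmcr, rng):
    """EDMHS improvisation, eq. (3.1): with probability HMCR, draw each
    component uniformly between the second-best and best harmonies in
    the HM; otherwise draw it from the whole range X_i."""
    order = np.argsort(fit)            # ascending: order[0] is the best
    b, s = hm[order[0]], hm[order[1]]  # best and second-best harmonies
    n = hm.shape[1]
    x_new = np.empty(n)
    for i in range(n):
        if rng.random() < hmcr:
            lo, hi = min(s[i], b[i]), max(s[i], b[i])
            x_new[i] = rng.uniform(lo, hi)  # inside [HM(s,i), HM(b,i)]
        else:
            x_new[i] = rng.uniform(lower[i], upper[i])
    return x_new
```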

3.2. EDMHS Algorithm for Integer Variables Problems

Many real-world applications require the variables to be integers. Methods developed for continuous variables can be used to solve such problems by rounding off the real optimum values to the nearest integers [14, 21]. However, in many cases, this rounding-off approach may result in an infeasible solution or a poor suboptimal solution and may miss alternative solutions.

In the EDMHS algorithm for integer programming, we generate integer solution vectors in both the initialization step and the improvisation step; that is, each component of the new harmony vector is generated according to
\[
x_i' \leftarrow \begin{cases}
\text{round}\big(x_i' \in [\text{HM}(s, i), \text{HM}(b, i)]\big) & \text{with probability HMCR}, \\
x_i' \in X_i & \text{with probability } 1 - \text{HMCR},
\end{cases} \tag{3.2}
\]
where $\text{round}(\cdot)$ denotes rounding off to the nearest integer. The pitch adjustment is operated as follows:
\[
x_i' \leftarrow \begin{cases}
x_i' \pm 1 & \text{with probability PAR}, \\
x_i' & \text{with probability } 1 - \text{PAR}.
\end{cases} \tag{3.3}
\]
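A sketch of the integer improvisation rules (3.2)-(3.3), under the same conventions as above (here `lower` and `upper` are assumed to be integer bounds):

```python
import numpy as np

def edmhs_improvise_int(hm, fit, lower, upper, hmcr, par, rng):
    """Integer EDMHS improvisation: round a value drawn between the
    second-best and best harmonies, eq. (3.2), then pitch-adjust the
    component by +/-1 with probability PAR, eq. (3.3)."""
    order = np.argsort(fit)
    b, s = hm[order[0]], hm[order[1]]
    n = hm.shape[1]
    x_new = np.empty(n, dtype=int)
    for i in range(n):
        if rng.random() < hmcr:
            lo, hi = min(s[i], b[i]), max(s[i], b[i])
            x_new[i] = int(round(rng.uniform(lo, hi)))  # eq. (3.2)
            if rng.random() < par:
                x_new[i] += int(rng.choice([-1, 1]))    # eq. (3.3)
        else:
            x_new[i] = int(rng.integers(lower[i], upper[i] + 1))
    return x_new
```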

4. Numerical Examples

This section examines the performance of the EDMHS algorithm on continuous- and integer-variable examples. Several examples taken from the optimization literature are used to show the validity and effectiveness of the proposed algorithm. The parameters for all the algorithms are given as follows: $\text{HMS} = 20$, $\text{HMCR} = 0.90$, $\text{PAR}_{\min} = 0.4$, $\text{PAR}_{\max} = 0.9$, $\text{bw}_{\min} = 0.0001$, and $\text{bw}_{\max} = 1.0$. During the run of the algorithms, PAR and bw are generated according to (2.5) and (2.6), respectively.

4.1. Some Simple Continuous Variables Examples

For the following five examples, we adopt the same variable ranges as presented in [4]. Each problem is run for 5 independent replications, and the mean fitness of the solutions for the four HS variants, IHS, GHS, SGHS, and EDMHS, is presented in the tables.

4.1.1. Rosenbrock Function

Consider the following:
\[
f(x) = 100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2. \tag{4.1}
\]
Due to the long, narrow, curved valley present in the function, the Rosenbrock function [4, 22] is probably the best-known test case. The minimum of the function is located at $x^* = (1.0, 1.0)$ with a corresponding objective function value of $f(x^*) = 0.0$. The four algorithms were applied to the Rosenbrock function using bounds between −10.0 and 10.0 for the two design variables $x_1$ and $x_2$. After 50,000 searches, we arrived at Table 1.

4.1.2. Goldstein and Price Function I (with Four Local Minima)

Consider the following:
\[
\begin{aligned}
f(x) = {} & \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \\
& \times \left[30 + \left(2x_1 - 3x_2\right)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right].
\end{aligned} \tag{4.2}
\]
The Goldstein and Price function I [4, 13, 23] is an eighth-order polynomial in two variables. The function has four local minima, one of which is global, as follows: $f(1.2, 0.8) = 840.0$, $f(1.8, 0.2) = 84.0$, $f(-0.6, -0.4) = 30$, and $f(0.0, -1.0) = 3.0$ (global minimum). In this example, the bounds for the two design variables ($x_1$ and $x_2$) were set between −5.0 and 5.0. After 8000 searches, we arrived at Table 2.

4.1.3. Eason and Fenton’s Gear Train Inertia Function

Consider the following:
\[
f(x) = \frac{1}{10}\left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{\left(x_1 x_2\right)^4}\right). \tag{4.3}
\]
This function [4, 24] describes a minimization problem for the inertia of a gear train. The minimum of the function is located at $x^* = (1.7435, 2.0297)$ with a corresponding objective function value of $f(x^*) = 1.744152006740573$. The four algorithms were applied to the gear train inertia function using bounds between 0.0 and 10.0 for the two design variables $x_1$ and $x_2$. After 800 searches, we arrived at Table 3.

4.1.4. Wood Function

Consider the following:
\[
\begin{aligned}
f(x) = {} & 100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2 + 90\left(x_4 - x_3^2\right)^2 + \left(1 - x_3\right)^2 \\
& + 10.1\left[\left(x_2 - 1\right)^2 + \left(x_4 - 1\right)^2\right] + 19.8\left(x_2 - 1\right)\left(x_4 - 1\right).
\end{aligned} \tag{4.4}
\]
The Wood function [4, 25] is a fourth-degree polynomial that is a particularly good test of convergence criteria and simulates a feature of many physical problems quite well. The minimum of the function is obtained at $x^* = (1, 1, 1, 1)^T$, and the corresponding objective function value is $f(x^*) = 0.0$. When applying the four algorithms to the function, the four design variables $x_1, x_2, x_3, x_4$ were initially structured with random values bounded between −5.0 and 5.0. After 70,000 searches, we arrived at Table 4.

4.1.5. Powell Quartic Function

Consider the following:
\[
f(x) = \left(x_1 + 10x_2\right)^2 + 5\left(x_3 - x_4\right)^2 + \left(x_2 - 2x_3\right)^4 + 10\left(x_1 - x_4\right)^4. \tag{4.5}
\]
Since the second derivative of the Powell quartic function [4, 26] becomes singular at the minimum point, it is quite difficult to obtain the minimum solution (i.e., $f(0, 0, 0, 0) = 0.0$) using gradient-based algorithms. When applying the EDMHS algorithm to the function, the four design variables $x_1, x_2, x_3, x_4$ were initially structured with random values bounded between −5.0 and 5.0. After 50,000 searches, we arrived at Table 5.

It can be seen from Tables 1-5 that, compared with the IHS, GHS, and SGHS algorithms, the EDMHS produces much better results for the test functions. Figures 1-5 present typical solution history graphs along the iterations for the five functions, respectively. It can be observed that the evolution curves of the EDMHS algorithm reach a lower level than those of the other compared algorithms. Thus, it can be concluded that, overall, the EDMHS algorithm outperforms the other methods on the above examples.

4.2. More Benchmark Problems with 30 Dimensions

To test the performance of the proposed EDMHS algorithm more extensively, we proceed to evaluate and compare the IHS, GHS, SGHS, and EDMHS algorithms on the following 6 benchmark optimization problems listed in CEC2005 [27], with 30 dimensions.

(1) Sphere function:
\[
f(x) = \sum_{i=1}^{n} x_i^2, \tag{4.6}
\]
where the global optimum is $x^* = 0$ and $f(x^*) = 0$, for $-100 \le x_i \le 100$.

(2) Schwefel problem:
\[
f(x) = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{\left|x_i\right|}\right), \tag{4.7}
\]
where the global optimum is $x^* = (420.9687, \ldots, 420.9687)$ and $f(x^*) = -12569.5$, for $-500 \le x_i \le 500$.

(3) Griewank function:
\[
f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, \tag{4.8}
\]
where the global optimum is $x^* = 0$ and $f(x^*) = 0$, for $-600 \le x_i \le 600$.

(4) Rastrigin function:
\[
f(x) = \sum_{i=1}^{n}\left(x_i^2 - 10\cos\left(2\pi x_i\right) + 10\right), \tag{4.9}
\]
where the global optimum is $x^* = 0$ and $f(x^*) = 0$, for $-5.12 \le x_i \le 5.12$.

(5) Ackley's function:
\[
f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{30}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{30}\sum_{i=1}^{n} \cos\left(2\pi x_i\right)\right) + 20 + e, \tag{4.10}
\]
where the global optimum is $x^* = 0$ and $f(x^*) = 0$, for $-32 \le x_i \le 32$.

(6) Rosenbrock's function:
\[
f(x) = \sum_{i=1}^{n-1}\left(100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right), \tag{4.11}
\]
where the global optimum is $x^* = (1, \ldots, 1)$ and $f(x^*) = 0$, for $-5 \le x_i \le 10$.
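For reference, the six benchmarks translate directly into code; a sketch written from eqs. (4.6)-(4.11), with $n$ taken from the input length so that $n = 30$ reproduces the setting above:

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)                                              # (4.6)

def schwefel(x):
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))                   # (4.7)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0  # (4.8)

def rastrigin(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)      # (4.9)

def ackley(x):
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
            + 20.0 + np.e)                                           # (4.10)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)  # (4.11)

# Sanity checks at the known optima (n = 30):
assert sphere(np.zeros(30)) == 0.0
assert abs(ackley(np.zeros(30))) < 1e-12
assert rosenbrock(np.ones(30)) == 0.0
```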

The parameters for the IHS algorithm are $\text{HMS} = 5$, $\text{HMCR} = 0.9$, $\text{bw}_{\max} = (x_j^U - x_j^L)/20$, $\text{bw}_{\min} = 0.0001$, $\text{PAR}_{\min} = 0.01$, and $\text{PAR}_{\max} = 0.99$; for the GHS algorithm, $\text{HMS} = 5$, $\text{HMCR} = 0.9$, $\text{PAR}_{\min} = 0.01$, and $\text{PAR}_{\max} = 0.99$.

Table 6 presents the average error (AE) values and standard deviations (SD) over 30 independent runs of the compared HS algorithms on the 6 test functions with dimension equal to 30.

4.3. Integer Variables Examples

Six commonly used integer programming benchmark problems are chosen to investigate the performance of the integer EDMHS algorithm. For all the examples, the design variables $x_i$, $i = 1, \ldots, N$, are initially structured with random integer values bounded between −100 and 100. Each problem is run for 5 independent replications, each with approximately 800 searches, and the optimal solution vectors are obtained in all cases.

4.3.1. Test Problem 1

Consider the following:
\[
f_1(x) = \left(9x_1^2 + 2x_2^2 - 11\right)^2 + \left(3x_1 + 4x_2^2 - 7\right)^2, \tag{4.12}
\]
where $x^* = (1, 1)^T$ and $f_1(x^*) = 0$; see [14, 21, 28].

4.3.2. Test Problem 2

Consider the following:
\[
f_2(x) = \left(x_1 + 10x_2\right)^2 + 5\left(x_3 - x_4\right)^2 + \left(x_2 - 2x_3\right)^4 + 10\left(x_1 - x_4\right)^4, \tag{4.13}
\]
where $x^* = (0, 0, 0, 0)^T$ and $f_2(x^*) = 0$; see [14, 21, 28].

4.3.3. Test Problem 3

Consider the following:
\[
f_3(x) = 2x_1^2 + 3x_2^2 + 4x_1 x_2 - 6x_1 - 3x_2, \tag{4.14}
\]
where
\[
x_1^* = (4, -2)^T, \quad x_2^* = (3, -2)^T, \quad x_3^* = (2, -1)^T, \tag{4.15}
\]
and $f_3(x^*) = -6$; see [14, 21, 29].

4.3.4. Test Problem 4

Consider the following:
\[
f_4(x) = x^T x, \tag{4.16}
\]
where $x^* = (0, 0, 0, 0, 0)^T$ and $f_4(x^*) = 0$; see [14, 21, 30].

4.3.5. Test Problem 5

Consider the following:
\[
f_5(x) = -(15, 27, 36, 18, 12)\,x + x^T \begin{bmatrix}
35 & -20 & -10 & 32 & -10 \\
-20 & 40 & -6 & -31 & 32 \\
-10 & -6 & 11 & -6 & -10 \\
32 & -31 & -6 & 38 & -20 \\
-10 & 32 & -10 & -20 & 31
\end{bmatrix} x, \tag{4.17}
\]
where $x^* = (0, 11, 22, 16, 6)^T$ and $x^* = (0, 12, 23, 17, 6)^T$ with $f_5(x^*) = -737$; see [21, 28].

4.3.6. Test Problem 6

Consider the following:
\[
f_6(x) = 3803.84 - 138.08x_1 - 232.92x_2 + 123.08x_1^2 + 203.64x_2^2 + 182.25x_1 x_2, \tag{4.18}
\]
where $x^* = (0, 1)^T$ and $f_6(x^*) = 3774.56$; see [21, 28].

5. Conclusion

This paper presented an EDMHS algorithm for solving continuous and integer optimization problems. The proposed EDMHS algorithm applies a newly designed scheme to generate candidate solutions so as to benefit from the good information inherent in the best and the second-best solutions in the historic HM.

Further work is still needed to investigate the effectiveness of EDMHS and to adapt this strategy to real-world optimization problems.

Acknowledgments

This research is supported by a grant from the National Natural Science Foundation of China (no. 11171373) and a grant from the Natural Science Foundation of Zhejiang Province (no. LQ12A01024).