Mathematical Problems in Engineering
Volume 2017, Article ID 3498363, 25 pages
https://doi.org/10.1155/2017/3498363
Research Article

Elite Opposition-Based Water Wave Optimization Algorithm for Global Optimization

1School of Computer and Electronics Information, Guangxi University, Nanning 530004, China
2College of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006, China
3Key Laboratory of Guangxi High Schools Complex System and Computational Intelligence, Nanning 530006, China

Correspondence should be addressed to Yongquan Zhou; yongquanzhou@126.com

Received 13 August 2016; Revised 4 November 2016; Accepted 15 November 2016; Published 15 January 2017

Academic Editor: Manuel Doblaré

Copyright © 2017 Xiuli Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Water wave optimization (WWO) is a novel metaheuristic method based on shallow water wave theory; it has a simple structure, is easy to implement, and performs well even with a small population. To further improve its convergence speed and calculation precision, this paper proposes an elite opposition-based water wave optimization (EOBWWO) algorithm and applies it to function optimization and structural engineering design problems. The improvement comprises three major optimization strategies: an elite opposition-based (EOB) learning strategy enhances the diversity of the population, a local neighborhood search strategy is introduced to strengthen the local search in the breaking operation, and an improved propagation operator gives the algorithm a better balance between exploration and exploitation. EOBWWO is verified on 20 benchmark functions and two structural engineering design problems, and its performance is compared against that of state-of-the-art algorithms. Experimental results show that the proposed algorithm has a faster convergence speed, higher calculation precision (the exact solution is even obtained on some benchmark functions), and a higher degree of stability than the comparative algorithms.

1. Introduction

Optimization problems are wide and varied; in many areas of scientific and engineering computation, a majority of the problems people encounter can be formulated as objective optimization problems, so the study of optimization has long been a very active field. Optimization methods can be divided into two types, deterministic and stochastic. Although deterministic methods are relatively mature, their application conditions are restrictive and they have difficulty handling large-scale optimization problems, which has driven the development of stochastic methods, especially heuristic optimization methods.

In recent years, heuristic optimization algorithms, especially metaheuristic optimization algorithms, have attracted the attention of many researchers. Metaheuristic optimization algorithms originate from the simulation of various physical, biological, social, and other natural phenomena to solve optimization problems. As stochastic optimization methods, metaheuristics have the advantages of simplicity and generality, strong robustness, suitability for parallel processing, and a wide range of applications. Owing to these advantages, many such algorithms have been proposed, such as particle swarm optimization (PSO) [1], the genetic algorithm (GA) [2], ant colony optimization (ACO) [3], the artificial bee colony (ABC) [4], cuckoo search (CS) [5], the bat algorithm (BA) [6], the firefly algorithm (FA) [7], the flower pollination algorithm (FPA) [8], and water wave optimization (WWO) [9].

Water wave optimization (WWO) is a relatively new metaheuristic for global optimization, initially proposed by Zheng in 2015 [9] and inspired by shallow water wave theory [10]. WWO has the advantages of a simple framework and therefore ease of implementation, and it performs well even with a small population size [9]. At present, as a new metaheuristic optimization method, WWO has been successfully applied to optimization problems such as high-speed train scheduling [9] and the traveling salesman problem (TSP) [11].

In order to further improve the performance of WWO, several modified approaches have been introduced. Zhang et al. [12] proposed an improved version with variable population size (VC-WWO) and developed a comprehensive learning mechanism in the refraction operator to increase solution diversity. Zheng and Zhang [13] developed a simplified version of WWO (Sim-WWO) that leaves out the refraction operator; to better balance exploration and exploitation, and to partially compensate for the removal of the refraction operator, a population size reduction strategy was introduced. To apply WWO to a combinatorial optimization problem, the traveling salesman problem (TSP), Wu et al. [11] redefined the propagation, breaking, and refraction operators of the original WWO. In this paper, an improved water wave optimization algorithm based on an elite opposition-based learning strategy (EOBWWO) is applied to function optimization and structural engineering design problems. The improvement consists of three parts: an elite opposition-based learning (EOBL) strategy enhances the diversity of the population, a local neighborhood search strategy is introduced to strengthen the local search in the breaking operation, and an improved propagation operator gives the algorithm a better balance between exploration and exploitation. We tested the performance of EOBWWO on 20 benchmark functions and two structural engineering design problems. The experimental results show that the proposed algorithm has significant performance advantages, including a fast convergence speed and high calculation precision; in addition, the improved algorithm is able to obtain the exact solution on some test functions.

This paper is organized into the following sections. Section 2 briefly introduces the original WWO algorithm, Section 3 presents a detailed description of the EOBWWO algorithm, Section 4 describes the simulation experiments and discusses the results, and Section 5 gives the conclusion.

2. Water Wave Optimization (WWO) Algorithm

The water wave optimization (WWO) algorithm, developed by Zheng [9], is inspired by shallow water wave theory; each individual in the population is analogous to a “water wave” object with a wave height $h$ and a wavelength $\lambda$. Without loss of generality, suppose there is a maximization problem

$\max_{x \in X} f(x)$, (1)

where $f$ is the objective function. The practical problem can then be compared with the shallow water wave model, with the following correspondence: the solution space corresponds to the seabed area, each solution $x$ to a wave, and the fitness $f(x)$ to the (inverse of the) vertical distance from the wave to the seabed.

When the population is initialized, the wave height $h$ of each wave is set to a constant $h_{\max}$ and the wavelength $\lambda$ is generally set to 0.5. The fitness value of each water wave is inversely proportional to its vertical distance to the seabed: the nearer a wave is to the seabed, the larger its fitness value, the larger its wave height, and the shorter its wavelength, as illustrated in Figure 1. During the optimization process, the algorithm searches the solution space globally by simulating the propagation, breaking, and refraction operations of water waves.

Figure 1: Different wave shapes in deep and shallow water.
2.1. Propagation

In WWO, all water waves have to be propagated once at each generation. Assume $x$ is the original water wave, $x'$ is the new wave created by the propagation operator, and $n$ is the dimension of the maximization problem. The propagation operation shifts each dimension of the original water wave as

$x'(d) = x(d) + \operatorname{rand}(-1,1) \cdot \lambda \cdot L(d), \quad d = 1, 2, \ldots, n$, (2)

where $\operatorname{rand}(-1,1)$, a uniformly distributed random number fixed in $[-1,1]$, is used to control the propagation step, $\lambda$ is the wavelength of $x$, and $L(d)$ is the length of the $d$th dimension of the search space. If the new position $x'(d)$ falls outside the $d$th dimension of the search space, it is reset randomly as

$x'(d) = lb(d) + \operatorname{rand}(0,1) \cdot (ub(d) - lb(d))$, (3)

where $lb(d)$ and $ub(d)$ are the lower and upper bounds of the $d$th dimension of the search space and $\operatorname{rand}(0,1)$ is a random number within the range $[0,1]$.

After propagation, we evaluate the fitness of $x'$; if $f(x') > f(x)$, $x'$ replaces $x$ in the population and the wave height of $x'$ is reset to $h_{\max}$; otherwise, $x$ remains and, in order to simulate the energy dissipation of a wave in the process of propagation, its height is decreased by one.

It is a natural phenomenon that when a wave travels from deep water to shallow water, its wave height increases and its wavelength decreases, as illustrated in Figure 1. To simulate this phenomenon, WWO updates the wavelength of each wave after each generation as

$\lambda = \lambda \cdot \alpha^{-(f(x) - f_{\min} + \varepsilon)/(f_{\max} - f_{\min} + \varepsilon)}$, (4)

where $\alpha$ is a control parameter named the wavelength reduction coefficient, $f_{\max}$ and $f_{\min}$ are the maximum and minimum fitness values among the current population, respectively, and $\varepsilon$ is a very small positive constant to avoid division by zero.
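For illustration, the propagation and wavelength-update operators of (2)–(4) can be written as a minimal Python sketch; the function names are ours, the default $\alpha = 1.026$ echoes the value suggested in [9], and the $\varepsilon$ value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng()

def propagate(x, lam, lb, ub):
    """Propagation (2): shift every dimension by a random step scaled by
    the wave's wavelength lam and the dimension length L(d) = ub - lb."""
    L = ub - lb
    x_new = x + rng.uniform(-1.0, 1.0, x.size) * lam * L
    # Reset coordinates that left the search space, as in (3).
    out = (x_new < lb) | (x_new > ub)
    x_new[out] = lb[out] + rng.uniform(0.0, 1.0, int(out.sum())) * L[out]
    return x_new

def update_wavelengths(lams, fits, alpha=1.026, eps=1e-31):
    """Wavelength update (4): fitter waves receive shorter wavelengths."""
    f_min, f_max = fits.min(), fits.max()
    return lams * alpha ** (-(fits - f_min + eps) / (f_max - f_min + eps))
```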

2.2. Breaking

In water wave theory, as the energy of a wave increases, its crest becomes steeper and steeper, and the wave breaks into a series of solitary waves when the velocity of the crest exceeds the wave celerity. After propagation, WWO performs breaking only on a wave that becomes a new best solution $x^*$, which improves the diversity of the population. The detailed process is as follows: first, we randomly select $k$ dimensions (where $k$ is a random number between 1 and a predefined number $k_{\max}$) and, on each selected dimension $d$ of the original wave, generate a solitary wave $x'$ as

$x'(d) = x(d) + \mathcal{N}(0,1) \cdot \beta \cdot L(d)$, (5)

where $\mathcal{N}(0,1)$ is a Gaussian random number with mean 0 and standard deviation 1 and $\beta$ is the breaking coefficient. If the solitary wave with the best fitness is better than $x^*$, it replaces $x^*$; otherwise, $x^*$ remains.
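Continuing the sketch above, the breaking operator of (5) might look as follows; the defaults $\beta = 0.25$ and $k_{\max} = 12$ are assumptions for illustration, not the paper's tuned settings.

```python
def breaking(x_best, f, lb, ub, beta=0.25, k_max=12):
    """Breaking (5): perturb k randomly chosen dimensions of the new best
    wave with Gaussian noise and keep the best resulting solitary wave."""
    n = x_best.size
    k = rng.integers(1, min(k_max, n) + 1)        # how many dimensions to break
    dims = rng.choice(n, size=k, replace=False)
    best, best_fit = x_best, f(x_best)
    for d in dims:
        solitary = x_best.copy()
        solitary[d] += rng.normal() * beta * (ub[d] - lb[d])
        solitary[d] = np.clip(solitary[d], lb[d], ub[d])
        fit = f(solitary)
        if fit > best_fit:                        # maximization, as in Section 2
            best, best_fit = solitary, fit
    return best
```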

2.3. Refraction

In WWO, the refraction operation is performed only on a wave whose height has decreased to zero, in order to avoid search stagnation; it simulates the phenomenon of a wave ray that is not perpendicular to the isobath being deflected. In refraction, each dimension of the new wave $x'$ is calculated as a Gaussian random number centered halfway between the original position and the best-known solution $x^*$:

$x'(d) = \mathcal{N}\!\left(\dfrac{x^*(d) + x(d)}{2},\; \dfrac{|x^*(d) - x(d)|}{2}\right)$. (6)

Following refraction, the wave height of $x'$ is also reset to $h_{\max}$; meanwhile, its wavelength is updated as

$\lambda' = \lambda \, \dfrac{f(x)}{f(x')}$. (7)
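A matching sketch of the refraction operator (6)–(7), in the same style as before; the clipping to the search bounds is our addition.

```python
def refract(x, x_star, lam, f, lb, ub, h_max=12):
    """Refraction (6): resample a stagnant wave (height 0) around the
    midpoint between the wave and the best solution x_star, then rescale
    its wavelength by (7) and reset its height to h_max."""
    mu = (x_star + x) / 2.0                       # midway point, per dimension
    sigma = np.abs(x_star - x) / 2.0              # half the distance, per dim
    x_new = np.clip(rng.normal(mu, sigma), lb, ub)
    lam_new = lam * f(x) / f(x_new)               # wavelength update (7)
    return x_new, lam_new, h_max
```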

To sum up, the role of the propagation operator is to make high-fitness waves exploit small areas and low-fitness waves explore large areas, the breaking operator enhances the local search around the promising best wave, and the refraction operation helps avoid search stagnation and thus reduces premature convergence. The basic framework of WWO is given in Algorithm 1 [9].

Algorithm 1: The WWO algorithm.
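Putting the three operators together gives a compact version of Algorithm 1. The following is a minimal sketch for a maximization problem, reusing the helper sketches above; the defaults (population size 10, $h_{\max} = 12$, 1000 iterations) are illustrative only.

```python
def wwo(f, lb, ub, pop_size=10, h_max=12, max_iter=1000):
    """A minimal sketch of the WWO framework (Algorithm 1), maximizing f."""
    n = lb.size
    X = lb + rng.uniform(size=(pop_size, n)) * (ub - lb)
    lams = np.full(pop_size, 0.5)                 # initial wavelengths
    hs = np.full(pop_size, h_max)                 # initial wave heights
    fits = np.array([f(x) for x in X])
    g = int(np.argmax(fits))                      # index of the best wave
    for _ in range(max_iter):
        for i in range(pop_size):
            x_new = propagate(X[i], lams[i], lb, ub)
            f_new = f(x_new)
            if f_new > fits[i]:                   # better: replace the wave
                X[i], fits[i], hs[i] = x_new, f_new, h_max
                if f_new >= fits[g]:              # new population best: break it
                    g = i
                    X[i] = breaking(X[i], f, lb, ub)
                    fits[i] = f(X[i])
            else:                                 # worse: dissipate energy
                hs[i] -= 1
                if hs[i] == 0:                    # stagnant: refract toward best
                    X[i], lams[i], hs[i] = refract(X[i], X[g], lams[i], f, lb, ub)
                    fits[i] = f(X[i])
        lams = update_wavelengths(lams, fits)
    g = int(np.argmax(fits))
    return X[g], fits[g]
```

For example, `wwo(lambda x: -np.sum(x**2), np.full(10, -100.0), np.full(10, 100.0))` maximizes the negated sphere function over $[-100, 100]^{10}$.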

3. The EOBWWO Algorithm

In order to improve the global and local search abilities of WWO and to obtain an even better balance between exploration and exploitation, three optimization strategies are applied to the original WWO: the elite opposition-based learning (EOBL) strategy [42], the local neighborhood search (LNS) strategy [43], and an improved propagation operator.

3.1. Elite Opposition-Based Learning (EOBL) Strategy

The optimization process of the WWO algorithm can be regarded as a continual transformation of its search space. When the algorithm falls into a local optimum, the current search space is unlikely to contain the global optimal solution, so it is very important to guide the current solution space toward the region of the global optimal solution. To enhance the global search (i.e., exploration) ability of WWO, the elite opposition-based learning (EOBL) strategy is introduced.

Before introducing EOBL, we first explain opposition-based learning (OBL) [44]. The main idea of OBL is to generate the opposition solution of the current solution, evaluate the current solution and the opposition solution at the same time, and choose the better one to enter the next iteration. Assume $X = (x_1, x_2, \ldots, x_n)$ is a point in the current population, where $n$ is the dimension of the search space and $x_j \in [lb_j, ub_j]$; its opposition point $\tilde{X} = (\tilde{x}_1, \ldots, \tilde{x}_n)$ is defined as

$\tilde{x}_j = lb_j + ub_j - x_j$. (8)

Since the opposition solution generated by OBL may not be more favorable than the current one for finding the global optimal solution, in this paper we use the elite opposition-based learning (EOBL) strategy instead. EOBL is a recent technique in the field of computational intelligence, and its model can be described as follows. Suppose the elite individual (the optimal individual in the population) is $X^e = (x^e_1, x^e_2, \ldots, x^e_n)$; for an individual $X_i = (x_{i,1}, \ldots, x_{i,n})$, the elite opposition solution $\breve{X}_i = (\breve{x}_{i,1}, \ldots, \breve{x}_{i,n})$ is defined as

$\breve{x}_{i,j} = k \,(da_j + db_j) - x^e_j, \quad i = 1, \ldots, N, \; j = 1, \ldots, n$, (9)

where $N$ is the population size, $k \in U(0,1)$ is a generalized coefficient, and $[da_j, db_j]$ is the dynamic boundary of the $j$th dimension of the search space, obtained by

$da_j = \min_{1 \le i \le N} x_{i,j}, \qquad db_j = \max_{1 \le i \le N} x_{i,j}$. (10)

A fixed boundary is not conducive to preserving the search experience; we therefore replace it with the dynamic boundary above, so that the opposition solutions are located in the gradually narrowing search space. Moreover, if the dynamic boundary operator makes $\breve{x}_{i,j}$ jump out of $[lb_j, ub_j]$, the following rule is used to reset it:

$\breve{x}_{i,j} = \operatorname{rand}(da_j, db_j)$. (11)

EOBL generates the opposition population according to the elite individual and evaluates the current population and the opposition population at the same time; in addition, it makes full use of the fact that elite individuals carry more useful search information than ordinary individuals, which improves the diversity of the population to a certain extent. EOBL can thus enhance the global exploration ability of WWO.
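A sketch of the EOBL step of (9)–(11) in the same Python style; the exact placement of the elite term follows our reading of (9) and should be treated as an assumption.

```python
def elite_opposition(X, elite, lb, ub):
    """EOBL (9)-(11): build an opposition population from the elite
    individual using the dynamic bounds of the current population."""
    N, n = X.shape
    da, db = X.min(axis=0), X.max(axis=0)         # dynamic boundary (10)
    k = rng.uniform(0.0, 1.0, size=(N, 1))        # generalized coefficient
    X_opp = k * (da + db) - elite                 # elite opposition (9)
    # Reset coordinates that jumped out of [lb, ub], as in (11).
    out = (X_opp < lb) | (X_opp > ub)
    resets = da + rng.uniform(size=(N, n)) * (db - da)
    X_opp[out] = resets[out]
    return X_opp
```

The current and opposition populations are then evaluated together, and the fittest $N$ individuals enter the next iteration, as in the loop sketch given after Algorithm 2 below.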

3.2. Local Neighborhood Search (LNS)

WWO performs the breaking operator only on the new best solution, to enhance the local search around it. To further enhance the local search ability and thereby improve the convergence speed, the local neighborhood search (LNS) model [43] is added before the breaking operation.

The main idea of LNS is to use the best solution found so far in a small neighborhood of the current solution, rather than in the entire population, to update the current solution. The experience of an individual's neighborhood is considered when updating the individual's location, and the graph of their interconnections is called the neighborhood structure. Suppose the WWO population is $\{X_1, X_2, \ldots, X_N\}$, where each $X_i$ is an $n$-dimensional vector. The indices of the vectors are assigned randomly in order to maintain the diversity of each neighborhood. For each vector $X_i$ we define a neighborhood of radius $k$ ($k$ is a nonzero integer with $2k + 1 \le N$) consisting of $X_{i-k}, \ldots, X_i, \ldots, X_{i+k}$; for this purpose we suppose the vectors are arranged on a ring topology according to their indices, so that, for example, $X_N$ and $X_2$ are both neighbors of $X_1$. Figure 2 illustrates the concept of the local neighborhood model. Note that the neighborhood topology is static, being defined on the set of indices. The LNS model is described by

$L_i = X_i + \alpha \,(X_{n\text{-}best_i} - X_i) + \beta \,(X_p - X_q)$, (12)

where $X_{n\text{-}best_i}$ is the best vector in the neighborhood of $X_i$, $p, q \in [i-k, i+k]$ with $p \neq q \neq i$, and $\alpha$ and $\beta$ are the scaling factors. In the improved version of WWO, the new best solution is updated according to (12), and the breaking operation is then performed on the updated solution:

$x'(d) = x^{*\prime}(d) + \mathcal{N}(0,1) \cdot \beta \cdot L(d)$, (13)

where $x^{*\prime}$ is the best solution updated by LNS. A sketch of the LNS update follows Figure 2 below.

Figure 2: Neighborhood ring topology of radius 2.
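The LNS update (12) on the ring topology of Figure 2 can be sketched as follows; the values $\alpha = \beta = 0.5$ are placeholder scaling factors, not the paper's settings.

```python
def lns_update(X, fits, i, k=2, alpha=0.5, beta=0.5):
    """LNS (12): move X[i] toward the best vector in its radius-k ring
    neighborhood, plus a scaled random difference vector."""
    N = X.shape[0]
    neigh = [(i + j) % N for j in range(-k, k + 1)]       # ring indices
    n_best = max(neigh, key=lambda j: fits[j])            # neighborhood best
    p, q = rng.choice([j for j in neigh if j != i], size=2, replace=False)
    return X[i] + alpha * (X[n_best] - X[i]) + beta * (X[p] - X[q])
```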
3.3. Improvement of WWO

In the original WWO, all water waves are propagated once at each generation, and the search behavior of each wave is affected by the other waves in the group. Similar to PSO [1, 45, 46], an inertia weight $w$ is embedded into (2) in order to make use of past experience. Moreover, as shown in (2) and (4), the propagation operator makes high-fitness waves search small regions and low-fitness waves explore large regions during the global search. In (2), the search step is controlled by a random number fixed in the range $[-1, 1]$, which is not entirely reasonable: the search step should be fairly large at the beginning, to strengthen the probability of reaching the optimal regions, and should then decrease gradually as the iterations proceed, to enhance the local exploitation ability. The random step in the propagation process is therefore improved by referencing the method of [47], giving the improved propagation operator

$x'(d) = w \cdot x(d) + \operatorname{rand}(-1,1) \cdot \lambda \cdot L(d) \cdot \left(c_1 + (c_2 - c_1)\,\dfrac{T - t}{T}\right)$, with $w = w_{\max} - (w_{\max} - w_{\min})\,\dfrac{t}{T}$, (14)

where $w_{\max}$ and $w_{\min}$ are, respectively, the maximum and minimum inertia weight ($w_{\min} < w_{\max}$), $t$ is the current iteration number, $T$ is the maximum iteration number, and $c_1$ and $c_2$ are two selected constants with $c_1 < c_2$, so that the step factor decreases from $c_2$ to $c_1$ over the run. The improved propagation operator not only makes use of past experience but also makes the search step decrease gradually as the iterations proceed. The whole pseudocode of EOBWWO is summarized in Algorithm 2.
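A sketch of the improved propagation operator (14), continuing the sketches above; the linear schedules and the defaults $w_{\max} = 0.9$, $w_{\min} = 0.4$, $c_1 = 0.2$, $c_2 = 1.0$ are our assumptions for illustration.

```python
def propagate_improved(x, lam, lb, ub, t, T,
                       w_max=0.9, w_min=0.4, c1=0.2, c2=1.0):
    """Improved propagation (14): inertia weight w decreases linearly
    from w_max to w_min, and the step factor shrinks from c2 to c1."""
    w = w_max - (w_max - w_min) * t / T           # decreasing inertia weight
    step = c1 + (c2 - c1) * (T - t) / T           # decreasing step factor
    L = ub - lb
    x_new = w * x + rng.uniform(-1.0, 1.0, x.size) * lam * L * step
    out = (x_new < lb) | (x_new > ub)
    x_new[out] = lb[out] + rng.uniform(size=int(out.sum())) * L[out]
    return x_new
```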

Algorithm 2: The framework of EOBWWO algorithm.
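The following sketch shows one possible reading of how the three strategies slot into the main loop of Algorithm 2, reusing the helper sketches above; for simplicity, wave heights and wavelengths stay attached to population slots across the EOBL selection.

```python
def eobwwo(f, lb, ub, pop_size=10, h_max=12, max_iter=1000):
    """A minimal sketch of the EOBWWO framework (Algorithm 2), maximizing f."""
    n = lb.size
    X = lb + rng.uniform(size=(pop_size, n)) * (ub - lb)
    lams = np.full(pop_size, 0.5)
    hs = np.full(pop_size, h_max)
    fits = np.array([f(x) for x in X])
    for t in range(max_iter):
        # 1) EOBL: evaluate the current and opposition populations together
        #    and keep the pop_size fittest individuals.
        elite = X[np.argmax(fits)]
        pool = np.vstack([X, elite_opposition(X, elite, lb, ub)])
        pool_fits = np.array([f(x) for x in pool])
        keep = np.argsort(pool_fits)[-pop_size:]
        X, fits = pool[keep], pool_fits[keep]
        g = int(np.argmax(fits))
        # 2) Improved propagation (14) with the usual WWO bookkeeping.
        for i in range(pop_size):
            x_new = propagate_improved(X[i], lams[i], lb, ub, t, max_iter)
            f_new = f(x_new)
            if f_new > fits[i]:
                X[i], fits[i], hs[i] = x_new, f_new, h_max
                if f_new >= fits[g]:
                    g = i
                    # 3) LNS (12) refines the new best before breaking (13).
                    cand = np.clip(lns_update(X, fits, i), lb, ub)
                    if f(cand) > fits[i]:
                        X[i], fits[i] = cand, f(cand)
                    X[i] = breaking(X[i], f, lb, ub)
                    fits[i] = f(X[i])
            else:
                hs[i] -= 1
                if hs[i] == 0:
                    X[i], lams[i], hs[i] = refract(X[i], X[g], lams[i], f, lb, ub)
                    fits[i] = f(X[i])
        lams = update_wavelengths(lams, fits)
    g = int(np.argmax(fits))
    return X[g], fits[g]
```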

4. Simulation Experiments and Results Analysis

In order to verify the effectiveness and efficiency of EOBWWO, 20 standard test functions are applied in this section. The details of the 20 benchmark functions [48, 49], including functional form, scope, optimal solution, and number of iterations, are given in Table 1. The 20 benchmark functions can be divided into three groups: unimodal functions ($f_1$–$f_6$) as group 1, multimodal functions ($f_7$–$f_{14}$) as group 2, and low-dimension functions ($f_{15}$–$f_{20}$) as group 3. Among the unimodal functions is one whose global optimum is located in a smooth, long, and narrow parabolic valley; when a traditional gradient method reaches the edge of the valley, it is difficult for it to continue the global search, and the objective value changes only very slowly along the long, narrow region, so the function is a good test of algorithm performance. Among the multimodal functions are typical nonlinear functions with many local minima and wide search spaces, which are generally considered complex problems that are difficult to handle. In general, unimodal functions are suitable for evaluating exploitation, whereas multimodal functions tend to be a good choice for evaluating exploration [50].

Table 1: Benchmark test functions.

The rest of this section is organized as follows: the experimental settings are given in Section 4.1, the experimental results for 30 dimensions are presented and discussed in Section 4.2, high-dimension test results (100, 1000, and 10000 dimensions) for some unimodal and multimodal functions are described in Section 4.3, and two design problems are presented in Section 4.4.

4.1. Experimental Setting and Comparative Methods

The empirical analysis was conducted on a computer with an Intel(R) Xeon CPU at 3.5 GHz and 8 GB of memory; the operating system is Windows 7, and the programs are written in MATLAB 2012a.

The scope and dimension of the variables have a significant influence on the complexity of the optimization. The scope of each benchmark function and the dimensions of the low-dimension functions are given in Table 1. The unimodal and multimodal functions are tested with dimensions of 30, 100, 1000, and 10000.

The performance of the proposed EOBWWO algorithm is evaluated by comparing it with five state-of-the-art metaheuristic algorithms: ABC [4], CS [5], FPA [8], BA [6], and WWO [9]; the parameter settings of the aforementioned algorithms are given in Table 2.

Table 2: The parameter settings for the six algorithms.
4.2. Experiment Results and Discussion

Tables 3 and 4 report the results for dimension 30 on the standard benchmark functions listed in Table 1. All function optimization experiments in this paper are repeated 30 times to ensure statistical credibility. Four evaluation indicators are used: Max, Min, Median, and Std, which represent the worst fitness value, the best fitness value, the median of the test results, and the standard deviation, respectively. The last column of Tables 3–5 gives the rank of each algorithm among the six algorithms in terms of the median value. For each benchmark function, the minimum values, the best median value, and the minimum standard deviation among the six algorithms are shown in bold.

Table 3: Experiment results of unimodal functions for different algorithms (Dim = 30).
Table 4: Experiment results of multimodal functions for different algorithms (Dim = 30).
Table 5: Experiment results of low-dimension functions for different algorithms.

Moreover, nonparametric Wilcoxon rank-sum tests were conducted on the results of EOBWWO and each comparative algorithm on the 20 benchmark functions, and the test results are shown in Table 6, where a value of $h = 1$ indicates that EOBWWO and the comparative method are statistically different with 95% confidence, while $h = 0$ implies that there is no statistical difference [9].

Table 6: Statistical comparison between EOBWWO and the other five algorithms.
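As an illustration of this testing procedure, the rank-sum comparison for one function could be computed with SciPy as follows; the arrays here are stand-in data, not the paper's recorded results.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Stand-ins for the 30 recorded best fitness values of two algorithms
# on one benchmark function.
eobwwo_runs = rng.normal(loc=0.0, scale=1e-8, size=30)
other_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)

stat, p_value = ranksums(eobwwo_runs, other_runs)
h = int(p_value < 0.05)   # h = 1: statistically different at 95% confidence
print(h, p_value)
```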

In Table 3, on the unimodal group, EOBWWO obtains the exact solution on all functions except one and achieves the minimum standard deviation. Although it ranks only fifth on one function, its standard deviation there is still smaller than those of the other algorithms. All of this means that EOBWWO has higher calculation precision and better stability in the optimization of the unimodal functions.

On group 2, the 8 multimodal functions, Table 4 shows that EOBWWO can find the exact solution for six of the functions, with zero standard deviation on five of them. In addition, EOBWWO obtains the best median value on all functions. For the two remaining functions, the worst fitness value, best fitness value, median value, and standard deviation of EOBWWO are all smaller than those of the other five algorithms. The multimodal functions are more complex than the unimodal ones because of their local minima, so this analysis indicates that EOBWWO has a strong global search ability and higher calculation precision.

The results for group 3, the 6 low-dimension functions, are given in Table 5. For $f_{15}$, EOBWWO obtains the minimum values of all indicators, including the worst fitness, best fitness, median, and standard deviation, among the comparative algorithms. For $f_{16}$, ABC, CS, WWO, and EOBWWO can find the exact solution, and the standard deviations of ABC and EOBWWO are zero. For $f_{17}$, ABC, CS, WWO, and EOBWWO have the same median and best fitness values, while the standard deviation of CS is the smallest. For $f_{18}$, FPA obtains the better fitness value and median, but the standard deviation of WWO is the best. For $f_{19}$, the best fitness value, median value, and standard deviation of FPA are better than those of EOBWWO. And for $f_{20}$, ABC, CS, WWO, and EOBWWO all obtain the exact solution, with CS having the better standard deviation. From this analysis of Table 5, we can conclude that EOBWWO has certain advantages in dealing with low-dimension functions.

In summary, as a result of the three major optimization strategies introduced in the improvement, the calculation precision of EOBWWO is better than that of the comparative algorithms on most benchmark functions; on three functions it is inferior to ABC, FPA, and FPA, respectively. Moreover, EOBWWO can find the exact solution, with zero standard deviation, on a number of the benchmark functions, which shows its higher calculation precision and stronger stability; several of these are multimodal functions, which shows that EOBWWO has better global search performance.

To show the performance of EOBWWO clearly, Figures 3–22 present the convergence curves, and Figures 23–42 the ANOVA tests of the global minimum, for the benchmark functions in Table 1. From Figures 3–22, the convergence rate of EOBWWO is obviously faster than that of the other comparison algorithms, including WWO, and on several functions the exact solution is obtained by EOBWWO. All of this indicates that EOBWWO has a faster convergence speed and higher calculation precision than the other comparative algorithms. From Figures 23–42, it is easy to see that the standard deviation of EOBWWO is much smaller for most functions, and even zero for some of them; this implies that EOBWWO has strong stability.

Figures 3–22: Convergence curves of the six algorithms on the 20 benchmark functions of Table 1.
Figures 23–42: ANOVA tests of the global minimum on the 20 benchmark functions of Table 1.
4.3. High-Dimension Function Test Results

To validate the performance of EOBWWO more comprehensively, in this subsection we choose four functions, two unimodal and two multimodal, and test all six algorithms on them in 100, 1000, and 10000 dimensions. The results of all the algorithms on the four functions are summarized in Table 7. The maximum number of iterations of each algorithm on each function is consistent with Table 1.

Table 7: Experiment results of high-dimension functions for different algorithms (Dim = 100, 1000, and 10000).

For the first unimodal function, Table 7 shows clearly that EOBWWO outperforms the other comparative algorithms for dimensions 100, 1000, and 10000. As the dimension increases, EOBWWO still obtains the exact solution, and its standard deviation remains zero. Among the five comparison algorithms, as far as the change of the median value across dimensions is concerned, the stability of FPA is better, but BA obtains the minimum value in each dimension. From the results for the second unimodal function in Table 7, although the performance of EOBWWO is not outstanding when the dimension is 30, as the dimension increases EOBWWO not only obtains the minimum value in each dimension but also has the smallest standard deviation and a small range; in addition, for EOBWWO the change in the order of magnitude of the median across dimensions is the smallest. These results provide strong evidence that EOBWWO performs well on complex functions. Among the other comparative algorithms, BA performs best, followed by CS, then WWO, and then ABC and FPA in fourth and fifth place, respectively.

For the multimodal functions, EOBWWO obtains the exact solution with zero standard deviation on both functions for dimensions 100, 1000, and 10000. Taking the median value into account, BA performs best among the comparative algorithms; however, considering the order of magnitude of the median, the differences between the five algorithms are not obvious.

Furthermore, additional high-dimensional tests of EOBWWO were conducted on five more of the benchmark functions; the detailed experimental results are shown in Table 8. As Tables 7 and 8 and the above analysis show, EOBWWO can handle high-dimensional functions efficiently and stably.

Table 8: Experiment results of high-dimension functions of EOBWWO.
4.4. Structural Engineering Design Examples

Many structural design problems in the real world are constrained optimization problems that are nonlinear with complex constraints, and in some cases a known optimal solution does not even exist. To evaluate the performance of EOBWWO further, in this subsection EOBWWO is used to solve two structural design problems: the design of a tension/compression spring and the design of a welded beam.

4.4.1. Test Problem 1: Design of a Tension/Compression Spring

The design of a tension/compression spring was first introduced by Belegundu [14] and deals with the optimal design of a spring for minimum weight. As shown in Figure 43, the problem has three design variables: the wire diameter $x_1$, the mean coil diameter $x_2$, and the number of active coils $x_3$. The minimum weight is subject to constraints on minimum deflection, shear stress, surge frequency, and the outside diameter [15]. In the standard formulation, the problem is

minimize $f(x) = (x_3 + 2)\,x_2\,x_1^2$ (15)

subject to

$g_1(x) = 1 - \dfrac{x_2^3 x_3}{71785\,x_1^4} \le 0$, (16)

$g_2(x) = \dfrac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \dfrac{1}{5108\,x_1^2} - 1 \le 0$, (17)

$g_3(x) = 1 - \dfrac{140.45\,x_1}{x_2^2 x_3} \le 0$, (18)

$g_4(x) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0$, (19)

where the variable ranges are set as in [29]: $0.05 \le x_1 \le 2.00$, $0.25 \le x_2 \le 1.30$, and $2.00 \le x_3 \le 15.0$. Table 9 lists the optimal solution for the compression spring design obtained by EOBWWO; the results are based on 20 independent runs, with 5000 iterations of EOBWWO per run. In dealing with this constrained optimization problem, we first determine whether the four constraints are satisfied. If they are all satisfied, the objective is calculated according to formula (15) and compared with the original fitness value, and the better result is taken as the fitness value of the constrained optimization problem; otherwise, the original fitness value is kept and the iteration continues.

Table 9: Statistical results of best tension/compression spring model obtained by EOBWWO.
Figure 43: The tension/compression spring problem.

When using the EOBWWO algorithm on the tension/compression spring constrained optimization problem, whenever the position of an individual in the current population is changed (a new solution is created) and the new solution has to be evaluated, the following steps are applied.

Step 1. Calculate the values of the four constraint conditions (see (16)–(19)) and check whether all of them are satisfied. If so, go to Step 2; otherwise, go to Step 3.

Step 2. Calculate the fitness value of the new solution by formula (15) and compare it with the original fitness value. According to the comparison result, determine whether to update the current individual. Go to Step 4.

Step 3. Keep the original individual when the new solution violates any constraint. Go to Step 4.

Step 4. Continue with the subsequent operations of the algorithm.

The reason for keeping an individual that violates any of the constraint conditions is that the population implements the elite opposition-based learning (EOBL) strategy and the propagation operation at every iteration; an individual that violates a constraint now may therefore come to satisfy the constraints in the next iteration.
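A sketch of this feasibility rule for the spring problem, using the standard constraint expressions (16)–(19); the helper names are ours.

```python
import numpy as np

def spring_weight(x):
    """Objective (15); x = [wire diameter, coil diameter, active coils]."""
    return (x[2] + 2.0) * x[1] * x[0] ** 2

def spring_constraints(x):
    """Constraints (16)-(19) in g(x) <= 0 form (standard formulation)."""
    d, D, N = x
    return np.array([
        1.0 - D**3 * N / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ])

def accept(x_new, x_old):
    """Steps 1-3: take the new solution only if it is feasible and it
    improves the (minimized) spring weight; otherwise keep the old one."""
    if np.any(spring_constraints(x_new) > 0.0):   # Step 3: infeasible
        return x_old
    if spring_weight(x_new) < spring_weight(x_old):
        return x_new                              # Step 2: improvement
    return x_old
```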

As one of the most well-known design benchmark problems, this problem has been studied by many researchers. Belegundu [14] introduced it and applied eight different mathematical optimization techniques to it. Arora [15] solved it using a numerical optimization technique called constraint correction at constant cost. Table 10 summarizes the optimal results for the tension/compression spring design obtained by EOBWWO and by other researchers.

Table 10: Best results of compression spring by different model.

As seen from Table 10, the proposed method obtains the best overall design, with an objective value of 0.012665234, and the results obtained by EOBWWO are better than those of the comparative methods.

4.4.2. Test Problem 2: Design of a Welded Beam

As another well-known design benchmark problem, the objective of the welded beam problem is to minimize the overall cost of fabrication subject to constraints on the shear stress $\tau$, the bending stress in the beam $\sigma$, the buckling load on the bar $P_c$, the end deflection of the beam $\delta$, and side constraints. As depicted in Figure 44 [28], the problem consists of four design variables: the thickness of the weld $x_1$, the length of the welded joint $x_2$, the width of the beam $x_3$, and the thickness of the beam $x_4$.

Figure 44: The welded beam problem.

The problem can be formulated as follows (in the standard formulation [28]):

minimize $f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4\,(14.0 + x_2)$ (20)

subject to

$g_1(x) = \tau(x) - \tau_{\max} \le 0$, $g_2(x) = \sigma(x) - \sigma_{\max} \le 0$, $g_3(x) = x_1 - x_4 \le 0$,
$g_4(x) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4\,(14.0 + x_2) - 5.0 \le 0$, $g_5(x) = 0.125 - x_1 \le 0$,
$g_6(x) = \delta(x) - \delta_{\max} \le 0$, $g_7(x) = P - P_c(x) \le 0$, (21)

where

$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\,\dfrac{x_2}{2R} + (\tau'')^2}$, $\tau' = \dfrac{P}{\sqrt{2}\,x_1 x_2}$, $\tau'' = \dfrac{MR}{J}$,
$M = P\left(L + \dfrac{x_2}{2}\right)$, $R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}$, $J = 2\left\{\sqrt{2}\,x_1 x_2 \left[\dfrac{x_2^2}{12} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}$,
$\sigma(x) = \dfrac{6PL}{x_4 x_3^2}$, $\delta(x) = \dfrac{4PL^3}{E x_3^3 x_4}$, $P_c(x) = \dfrac{4.013 E \sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$,

with $P = 6000$ lb, $L = 14$ in., $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $\tau_{\max} = 13600$ psi, $\sigma_{\max} = 30000$ psi, and $\delta_{\max} = 0.25$ in.

The EOBWWO algorithm was run to find the minimum fabrication cost of this design problem, with the ranges of the four design variables $x_1$, $x_2$, $x_3$, $x_4$ set as in [30]. As before, in searching for the minimum overall fabrication cost, we first determine whether the seven constraint conditions are satisfied when calculating the fitness value; if they are all satisfied, formula (20) is used to calculate the fitness value. In each run, EOBWWO uses 5000 function evaluations to locate the optimal solution. The experimental results obtained by EOBWWO for the welded beam design are listed in Table 11. Whenever the position of an individual in the current population is changed and the new solution has to be evaluated, the steps used by EOBWWO for the welded beam problem are the same as those described above for the compression spring.

Table 11: The optimal solution of the welded beam design example obtained by EOBWWO.
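For completeness, the cost and constraint functions of the standard welded beam formulation (20)–(21) can be sketched in the same style as the spring problem; the constant values follow the standard statement of the problem.

```python
import numpy as np

def beam_cost(x):
    """Objective (20): fabrication cost; x = [x1, x2, x3, x4]."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def beam_constraints(x, P=6000.0, L=14.0, E=30e6, G=12e6,
                     tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """The seven constraints of (21) in g(x) <= 0 form."""
    x1, x2, x3, x4 = x
    tau1 = P / (np.sqrt(2.0) * x1 * x2)                      # primary shear
    M = P * (L + x2 / 2.0)
    R = np.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * np.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau2 = M * R / J                                         # torsional shear
    tau = np.sqrt(tau1**2 + 2.0 * tau1 * tau2 * x2 / (2.0 * R) + tau2**2)
    sigma = 6.0 * P * L / (x4 * x3**2)                       # bending stress
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)                # end deflection
    Pc = (4.013 * E * np.sqrt(x3**2 * x4**6 / 36.0) / L**2) \
         * (1.0 - x3 / (2.0 * L) * np.sqrt(E / (4.0 * G)))   # buckling load
    return np.array([
        tau - tau_max,
        sigma - sigma_max,
        x1 - x4,
        0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
        0.125 - x1,
        delta - delta_max,
        P - Pc,
    ])
```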

Table 12 compares these results with those of other optimization algorithms reported in the literature. It can be seen clearly from Table 12 that the proposed EOBWWO algorithm performs much better than the other algorithms on the welded beam design; the optimal cost obtained by EOBWWO is 1.69634711, with the corresponding design vector given in Table 11.

Table 12: The optimal solution of the welded beam design example using different methods.

5. Conclusions

In this paper, three strategies are added to the original WWO algorithm to further improve its convergence speed and calculation precision for function optimization and structural engineering design problems. The elite opposition-based (EOB) learning strategy enhances the global exploration capability by increasing population diversity. The local neighborhood search strategy enhances the local exploitation capability by intensifying the local search around the promising optimal solution. In addition, the improved propagation operator gives the algorithm a better balance between exploration and exploitation. With these three strategies, EOBWWO can deal with function optimization, including multimodal functions, and structural design problems. The results on 20 benchmark functions and two structural design problems in Section 4 demonstrate that EOBWWO outperforms the comparative algorithms on most benchmark functions, as well as the other solution methods on the two structural design problems, and that it significantly improves the convergence speed and calculation precision of the original WWO algorithm. Several important issues remain for further research on EOBWWO. On the one hand, structural design problems are not only widespread in the real world but are also generally nonlinear, constrained optimization problems; other design problems could therefore be solved with EOBWWO in future work, such as the multidimensional knapsack problem, the permutation flow shop scheduling problem [51], and the graph coloring problem. On the other hand, further improvements could be introduced to EOBWWO and WWO to enhance their ability to deal with relevant problems: more elaborate settings of parameters such as the breaking coefficient $\beta$ and the wavelength reduction coefficient $\alpha$, multiple-population strategies, and combinations with other optimization algorithms are all good choices. In addition, multiobjective optimization problems will also be a focus of future research.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the National Science Foundation of China under Grants nos. 61463007 and 61563008 and the Guangxi Natural Science Foundation under Grant no. 2016GXNSFAA380264.

References

  1. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, November-December 1995.
  2. J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, MIT Press, Cambridge, Mass, USA, 1975.
  3. K. Socha and M. Dorigo, “Ant colony optimization for continuous domains,” European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
  4. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  5. X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NABIC '09), pp. 210–214, Coimbatore, India, December 2009.
  6. X. S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), vol. 284 of Studies in Computational Intelligence, pp. 65–74, Springer, Berlin, Germany, 2010.
  7. X.-S. Yang, “Multiobjective firefly algorithm for continuous optimization,” Engineering with Computers, vol. 29, no. 2, pp. 175–184, 2013.
  8. X. S. Yang, “Flower pollination algorithm for global optimization,” in Unconventional Computation and Natural Computation: 11th International Conference, UCNC 2012, Orléans, France, September 3–7, 2012. Proceedings, vol. 7445 of Lecture Notes in Computer Science, pp. 240–249, Springer, Berlin, Germany, 2012.
  9. Y.-J. Zheng, “Water wave optimization: a new nature-inspired metaheuristic,” Computers & Operations Research, vol. 55, pp. 1–11, 2015.
  10. H. Huang, Dynamics of Surface Waves in Coastal Waters, Springer, Berlin, Germany, 2009.
  11. X.-B. Wu, J. Liao, and Z.-C. Wang, “Water wave optimization for the traveling salesman problem,” in Intelligent Computing Theories and Methodologies: 11th International Conference, ICIC 2015, Fuzhou, China, August 20–23, 2015, Proceedings, Part I, vol. 9225 of Lecture Notes in Computer Science, pp. 137–146, Springer, Berlin, Germany, 2015.
  12. B. Zhang, M.-X. Zhang, J.-F. Zhang, and Y.-J. Zheng, “A water wave optimization algorithm with variable population size and comprehensive learning,” in Intelligent Computing Theories and Methodologies, vol. 9225 of Lecture Notes in Computer Science, pp. 124–136, Springer International, Cham, Switzerland, 2015.
  13. Y. J. Zheng and B. Zhang, “A simplified water wave optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '15), Sendai, Japan, May 2015.
  14. A. D. Belegundu, A Study of Mathematical Programming Methods for Structural Optimization, Department of Civil and Environmental Engineering, Iowa University, 1982.
  15. J. S. Arora, Introduction to Optimum Design, McGraw-Hill, New York, NY, USA, 1989.
  16. C. A. C. Coello, “Use of a self-adaptive penalty approach for engineering optimization problems,” Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
  17. C. A. C. Coello, “Constraint-handling using an evolutionary multiobjective optimization technique,” Civil Engineering & Environmental Systems, vol. 17, no. 4, pp. 319–346, 2000.
  18. C. A. C. Coello and E. M. Montes, “Constraint-handling in genetic algorithms through the use of dominance-based tournament selection,” Advanced Engineering Informatics, vol. 16, no. 3, pp. 193–203, 2002.
  19. S. He, E. Prempain, and Q. H. Wu, “An improved particle swarm optimizer for mechanical design optimization problems,” Engineering Optimization, vol. 36, no. 5, pp. 585–605, 2004.
  20. C. A. C. Coello and R. L. Becerra, “Efficient evolutionary optimization through the use of a cultural algorithm,” Engineering Optimization, vol. 36, no. 2, pp. 219–236, 2004.
  21. K. H. Raj, R. S. Sharma, G. S. Mishra, A. Dua, and C. Patvardhan, “An evolutionary computational technique for constrained optimization in engineering design,” Journal of the Institution of Engineers (India): Mechanical Engineering Division, vol. 86, pp. 121–128, 2005.
  22. A.-R. Hedar and M. Fukushima, “Derivative-free filter simulated annealing method for constrained continuous global optimization,” Journal of Global Optimization, vol. 35, no. 4, pp. 521–549, 2006.
  23. Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained engineering design problems,” Engineering Applications of Artificial Intelligence, vol. 20, no. 1, pp. 89–99, 2007.
  24. E. M. Montes and C. A. C. Coello, “An empirical study about the usefulness of evolution strategies to solve constrained optimization problems,” International Journal of General Systems, vol. 37, no. 4, pp. 443–473, 2008.
  25. M. G. H. Omran and A. Salman, “Constrained optimization using CODEQ,” Chaos, Solitons and Fractals, vol. 42, no. 2, pp. 662–668, 2009.
  26. V. S. Aragón, S. C. Esquivel, and C. A. Coello Coello, “A modified version of a T-cell Algorithm for constrained optimization problems,” International Journal for Numerical Methods in Engineering, vol. 84, no. 3, pp. 351–378, 2010.
  27. B. Akay and D. Karaboga, “Artificial bee colony algorithm for large-scale problems and engineering design optimization,” Journal of Intelligent Manufacturing, vol. 23, no. 4, pp. 1001–1014, 2012.
  28. A. H. Gandomi, X.-S. Yang, A. H. Alavi, and S. Talatahari, “Bat algorithm for constrained optimization tasks,” Neural Computing & Applications, vol. 22, no. 6, pp. 1239–1255, 2013.
  29. A. H. Gandomi, “Interior search algorithm (ISA): a novel approach for global optimization,” ISA Transactions, vol. 53, no. 4, pp. 1168–1183, 2014.
  30. A. Baykasoğlu and F. B. Ozsoydan, “Adaptive firefly algorithm with chaos for mechanical design optimization problems,” Applied Soft Computing, vol. 36, pp. 152–164, 2015.
  31. K. Deb, “Optimal design of a welded beam via genetic algorithms,” AIAA Journal, vol. 29, no. 11, pp. 2013–2015, 1991.
  32. J. P. B. Leite and B. H. V. Topping, “Improved genetic operators for structural engineering optimization,” Advances in Engineering Software, vol. 29, no. 7–9, pp. 529–562, 1998.
  33. C. A. C. Coello, “Self-adaptive penalties for GA-based optimization,” in Proceedings of the 1999 Congress on Evolutionary Computation (CEC '99), pp. 573–580, July 1999.
  34. K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics & Engineering, vol. 186, no. 2–4, pp. 311–338, 2000.
  35. X. Hu, R. C. Eberhart, and Y. Shi, “Engineering optimization with particle swarm,” in Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03), pp. 53–57, Indianapolis, Ind, USA, April 2003.
  36. J.-L. Liu, “Novel orthogonal simulated annealing with fractional factorial analysis to solve global optimization problems,” Engineering Optimization, vol. 37, no. 5, pp. 499–519, 2005.
  37. M. Mahdavi, M. Fesanghary, and E. Damangir, “An improved harmony search algorithm for solving optimization problems,” Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
  38. M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, and Y. Alizadeh, “Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems,” Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 33–40, pp. 3080–3091, 2008.
  39. A. Kaveh and S. Talatahari, “Engineering optimization with hybrid particle swarm and ant colony optimization,” Asian Journal of Civil Engineering, vol. 10, no. 6, pp. 611–628, 2009.
  40. A. Kaveh and S. Talatahari, “An improved ant colony optimization for constrained engineering design problems,” Engineering Computations, vol. 27, no. 1, pp. 155–182, 2010.
  41. A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Mixed variable structural optimization using Firefly Algorithm,” Computers & Structures, vol. 89, no. 23-24, pp. 2325–2336, 2011.
  42. X.-Y. Zhou, Z.-J. Wu, H. Wang, K.-S. Li, and H.-Y. Zhang, “Elite opposition-based particle swarm optimization,” Acta Electronica Sinica, vol. 41, no. 8, pp. 1647–1652, 2013.
  43. S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential evolution using a neighborhood-based mutation operator,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 526–553, 2009.
  44. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and the International Conference on Intelligent Agents, Web Technologies and Internet Commerce, vol. 1, pp. 695–701, Vienna, Austria, November 2005.
  45. J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning, pp. 760–766, Springer, New York, NY, USA, 2010.
  46. Y. Peng and B.-L. Lu, “A hierarchical particle swarm optimizer with latin sampling based memetic algorithm for numerical optimization,” Applied Soft Computing, vol. 13, no. 5, pp. 2823–2836, 2013.
  47. B. Zhang, H. Yuan, L. Sun, J. Shi, Z. Ma, and L. Zhou, “A two-stage framework for bat algorithm,” Neural Computing & Applications, 2016.
  49. S. Surjanovic and D. Bingham, “Virtual library of simulation experiments: test functions and datasets,” 2013, http://www.sfu.ca/~ssurjano.
  50. S. Saremi, S.-Z. Mirjalili, and S.-M. Mirjalili, “Evolutionary population dynamics and grey wolf optimizer,” Neural Computing and Applications, vol. 26, no. 5, pp. 1257–1263, 2015.
  51. H. Bargaoui and O. B. Driss, “Multi-agent model based on tabu search for the permutation flow shop scheduling problem,” Advances in Distributed Computing & Artificial Intelligence Journal, vol. 3, no. 8, pp. 519–527, 2014.