Scientific Programming
Volume 2019, Article ID 2401818, 20 pages
https://doi.org/10.1155/2019/2401818
Research Article

Dynamically Dimensioned Search Embedded with Piecewise Opposition-Based Learning for Global Optimization

1School of Economics and Management, Harbin Engineering University, Harbin 150001, China
2Faculty of Mechanics, Kim Il Sung University, Pyongyang 950003, Democratic People’s Republic of Korea
3College of Civil and Building Engineering, Kyambogo University, Kampala, Uganda
4College of Economics and Management, Northeast Forestry University, Harbin 150040, China

Correspondence should be addressed to Fu Yan; yanfuphd@163.com

Received 2 January 2019; Revised 4 April 2019; Accepted 12 May 2019; Published 26 May 2019

Academic Editor: Basilio B. Fraguela

Copyright © 2019 Jianzhong Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Dynamically dimensioned search (DDS) is a well-known optimization algorithm in the field of single solution-based heuristic global search algorithms. Its successful application to the calibration of watershed environmental parameters has attracted extensive attention from researchers. The DDS algorithm converges to the global optimum in the best case or to a good local optimum in the worst case; in other words, its performance is easily affected by the optimization conditions, and the algorithm therefore suffers from low robustness and limited scalability. In this work, an improved version of DDS called DDS-POBL is proposed. In DDS-POBL, two effective methods are applied to improve the performance of the DDS algorithm: piecewise opposition-based learning is introduced to guide the DDS search in the right direction, and the golden section method is used to search for more promising areas. Numerical experiments are performed on a set of 23 classic test functions, and the results show significant improvements in the optimization performance of DDS-POBL compared to DDS. Experimental results using different parameter values demonstrate the high solution quality, strong robustness, and scalability of the proposed DDS-POBL algorithm. A comparative performance analysis between DDS-POBL and other powerful algorithms has been carried out using statistical significance tests. The results show that DDS-POBL works better than PSO, CoDE, MHDA, NaFA, and CMA-ES and gives very competitive results when compared to INMDA and EEGWO. Moreover, the parameter calibration application of the Xinanjiang model demonstrates the effectiveness of DDS-POBL in a real optimization problem.

1. Introduction

The rapid development of the productivity of human society has brought a great demand for optimization algorithms. Obtaining good solutions to the complex optimization problems of the real world has become the specialized task of optimization algorithms. Traditional optimization algorithms such as Newton's method and the gradient method, which are based on mathematical theory, can hardly solve these complex optimization problems due to their extreme computational burdens. Therefore, highly efficient optimization algorithms have become a focus of research in recent years. Metaheuristic algorithms, inspired by various phenomena of nature, are among the prevailing highly efficient algorithms. The biggest characteristic of these algorithms is that they continuously evaluate candidate solutions through multiple iterations and try to improve upon those solutions. Metaheuristic algorithms are usually classified into two main categories [1]: single solution-based heuristic global search algorithms and population-based heuristic algorithms. Some of the famous single solution-based heuristic global search algorithms are simulated annealing (SA) [2], the threshold accepting method (TA) [3], microcanonical annealing (MA) [4], tabu search (TS) [5], guided local search (GLS) [6], and dynamically dimensioned search (DDS) [7, 8]. Population-based ones include evolutionary algorithms (EA) [9], genetic algorithms (GA) [10], particle swarm optimization (PSO) [11], the dragonfly algorithm (DA) [12, 13], and shuffled complex evolution (SCE) algorithms [14].

According to the No Free Lunch (NFL) theorem [15], it is hard for researchers to propose a metaheuristic algorithm that is best suited for solving all optimization problems. That is to say, a particular algorithm may show very promising solutions on certain problems but not on others. From this point of view, both single solution-based heuristic global search algorithms and population-based heuristic algorithms have their respective strengths and weaknesses. The main trouble they all encounter is a very low rate of convergence, which brings both a high computational burden and low result accuracy and limits their applications in the real world. This study focuses on single solution-based heuristic global search algorithms, especially the DDS algorithm, and tries to rectify its slow convergence speed and low solution accuracy.

The dynamically dimensioned search (DDS) algorithm, introduced by Tolson and Shoemaker [7], is a relatively new addition to the family of single solution-based heuristic global search algorithms. In the initial iterations, the algorithm mainly performs a global search, converting to a local search in the later iterations. This special search mechanism of the DDS algorithm is achieved by dynamically and probabilistically reducing the number of dimensions in the neighborhood [7]. Different versions of DDS have been proposed and successfully applied to practical engineering optimization problems: the hybrid discrete dynamically dimensioned search (HD-DDS) was used to solve discrete, single-objective, constrained water distribution system (WDS) design problems [8]; the modified dynamically dimensioned search (MDDS) was presented for parameter optimization of a distributed hydrological model [16]; the DDS algorithm was used to automate the calibration of an unsteady river flow model [17]; the Pareto archived dynamically dimensioned search (PA-DDS) was applied to multi-objective optimization [18]; and a combination of the filter method and dynamically dimensioned search was designed for constrained global optimization problems [19]. Although the DDS algorithm overcomes the common drawbacks of single solution-based search algorithms to some extent, it still does not completely resolve the slow convergence to the global optimum in the best case, or to an acceptable local optimum in the worst case.

The drawback of the DDS algorithm, namely that the global optimum is obtained only in the best case while a local optimum is reached in the worst case, keeps DDS from being a perfect algorithm. In practical applications, most optimization problems involve complex constraints. To overcome this drawback and obtain a solution that is globally optimal or close to it, DDS needs to be improved. Through an in-depth analysis of the potential-solution update principle of the DDS algorithm, we found that DDS does not have a clear search direction when updating the potential solution. This made us realize that the lack of a clear search direction may be the main reason why DDS cannot converge to the vicinity of the global optimum under poor conditions. In [20], POBL is introduced as an algorithm that can guide the search direction in each dimension. In this paper, we use POBL to guide the search direction of the DDS algorithm. In addition, the golden section (GS) search is used in conjunction with POBL to quickly gather the potential solution near the global optimum and accelerate the convergence of DDS. Hence, an improved version of the dynamically dimensioned search algorithm, named the "dynamically dimensioned search algorithm embedded with piecewise opposition-based learning" (DDS-POBL), is presented in this work. The proposed algorithm reconstructs the framework of the DDS algorithm by introducing piecewise opposition-based learning (POBL). The bounds of each variable are updated dynamically over the iterations by adopting the golden ratio and the piecewise opposite numbers of the obtained best positions. A dynamically dimensioned search algorithm combined with dynamic bounds adjustment and opposition-based learning is the major contribution of this work.

The rest of the paper is organized as follows: Section 2 gives a brief description of the DDS algorithm. Section 3 describes the motivation and the proposed DDS-POBL algorithm. Several numerical experiments and statistical analyses are reported in Section 4, where comparison experiments between the proposed DDS-POBL, the standard DDS, and other state-of-the-art algorithms are also carried out. Finally, a brief conclusion and future research directions are given in Section 5.

2. Dynamically Dimensioned Search Algorithm

The DDS algorithm is a greedy type of algorithm developed by Tolson and Shoemaker in 2007 [7]. The main purpose of the DDS algorithm is to solve calibration problems in the context of watershed simulation models. Since it is easy to program, is based on a simple concept, and takes both global and local search into account, it has attracted extensive attention from researchers. The main difference between the DDS algorithm and existing optimization algorithms is that the neighborhood is dynamically adjusted by changing the dimension of the search, and no algorithm parameter needs tuning. In the optimization process, the DDS algorithm obtains a good approximation of the globally optimal solution, rather than the precise global optimum, within a specified maximum number of function evaluations. Thus, DDS is suited for computationally expensive optimization problems such as distributed watershed model calibration.

As mentioned above, the DDS algorithm takes both global and local search into account in its iterations because the algorithm searches globally in the initial iterations and becomes increasingly local when approaching the maximum allowable number of iterations [7, 16]. In each iteration i, each of the D decision variables is included in the perturbation neighborhood {N} with probability P(i). The expression of this probability is as follows:

P(i) = 1 − ln(i)/ln(m),

where i indicates the number of the current iteration and m represents the maximum number of iterations.

At each iteration i, a new potential solution x_new is obtained by perturbing the current best solution x_best only in the randomly selected dimensions. The perturbation magnitudes are sampled using a standard normal random variable N(0, 1), reflecting at the decision variable bounds as follows:

x_new,j = x_best,j + r (x_j_max − x_j_min) N(0, 1),

where r is a scalar neighborhood size perturbation factor, x_j_max and x_j_min correspond to the upper and lower bounds of the jth dimension (variable), and N(0, 1) denotes a standard normal random number.

In order to choose between the current best solution x_best and the trial solution x_new for the next iteration, a greedy search method is employed. The current best x_best is replaced by the trial x_new if the objective function value F(x_new) is smaller than F(x_best); otherwise, the current best position is retained for the next iteration. The pseudocode of the DDS algorithm is presented in Algorithm 1.

Algorithm 1: DDS algorithm.
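As a concrete illustration, the probability decay, dimension-wise Gaussian perturbation with reflection at the bounds, and greedy selection described above can be sketched in Python. This is a minimal reading of the published DDS scheme for box-constrained minimization; the function names and the exact handling of reflection overshoot are our own simplification, not the authors' code.

```python
import math
import random

def dds(f, lb, ub, max_iter=500, r=0.2, seed=None):
    """Minimal sketch of dynamically dimensioned search (minimization)."""
    rng = random.Random(seed)
    d = len(lb)
    # Start from a uniformly random point inside the bounds.
    best = [lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
    f_best = f(best)
    for i in range(1, max_iter + 1):
        # Probability of including each dimension decays with iterations,
        # shifting the search from global to local: P(i) = 1 - ln(i)/ln(m).
        p = 1.0 - math.log(i) / math.log(max_iter)
        dims = [j for j in range(d) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(d)]  # always perturb at least one dimension
        trial = best[:]
        for j in dims:
            # Gaussian perturbation scaled by r times the variable's range.
            x = best[j] + r * (ub[j] - lb[j]) * rng.gauss(0.0, 1.0)
            # Reflect at the bounds; if reflection overshoots, clamp to the bound.
            if x < lb[j]:
                x = 2 * lb[j] - x
                if x > ub[j]:
                    x = lb[j]
            elif x > ub[j]:
                x = 2 * ub[j] - x
                if x < lb[j]:
                    x = ub[j]
            trial[j] = x
        f_trial = f(trial)
        if f_trial <= f_best:  # greedy selection keeps the better point
            best, f_best = trial, f_trial
    return best, f_best
```

For example, `dds(lambda v: sum(x * x for x in v), [-5.0] * 5, [5.0] * 5)` drives a 5-dimensional sphere function toward its minimum at the origin within the 500-iteration budget.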

3. Proposed DDS-POBL Algorithm

3.1. Analysis of the Low Searching Ability of DDS Algorithm

As described above, the DDS algorithm is not only a local optimization algorithm but also has a strong ability to search globally. At the beginning, the probability P(i), which determines how many dimensions are perturbed in the neighborhood, is large, which makes the DDS algorithm search globally. As the iterations increase, this probability decreases gradually, and the search becomes local. During this search, the trial candidate is obtained by perturbing the current best point with a standard normal distribution, which brings more useful information to the search.

Based on the above analysis, it can be seen that the DDS algorithm successfully transforms from a global search into a local search as the perturbation probability decreases and that the diversity of the trial candidate solution is obtained from the current best solution through disturbance by the standard normal distribution. But two natural questions then arise: what is the effective search direction for the DDS algorithm, and in which promising area is the trial candidate solution or the current best solution more likely to approach the global optimum? Unfortunately, it is difficult to find reasonable answers in the existing literature. This is the main drawback of the DDS algorithm, and it is probably the reason why the final solution of DDS is always a good approximation of the globally optimal solution rather than the precise global optimum. This tells us that it is very important to adaptively adjust the search direction and search area during the iteration process. Therefore, an improved search mechanism that guides the DDS algorithm to search efficiently is needed to avoid premature convergence and stagnation in local optima.

To accomplish this task, in this work, the dynamically dimensioned search algorithm embedded with piecewise opposition-based learning has been proposed, in which the search direction is determined by piecewise opposition-based learning strategy, and the promising areas are obtained by adjusting the boundaries of variables using the golden ratio.

3.2. Piecewise Opposition-Based Learning (POBL) Theory

Before the detailed description of the piecewise opposition-based learning theory, it is necessary to define the important concepts related to this algorithm.

Definition 1. Let P = (x_1, x_2, …, x_n) be a point in n-dimensional space, where x_j ∈ [a_j, b_j] and j = 1, 2, …, n. The piecewise opposite point of P is defined as P+ = (x_1+, x_2+, …, x_n+), where each component x_j+ is computed by the piecewise opposition rule of [20].

Let P be a candidate solution in an n-dimensional solution space with x_j ∈ [a_j, b_j]. The quality of a candidate solution is measured by calculating its fitness function value f(P). If f(P+) ≥ f(P) (for a maximization problem) or f(P+) ≤ f(P) (for a minimization task), then P is replaced with P+; otherwise, P is preserved for the next iteration. A formal description of the piecewise opposition-based learning algorithm is presented in Algorithm 2.

Algorithm 2: POBL and GS algorithm [21].
3.3. Piecewise Opposition-Based Learning Determines the Right Search Direction

For current heuristic optimization algorithms, there is no fixed way to find the best search direction. In general, researchers tend to use a greedy search method, namely trial and error, to determine the search direction of a candidate solution x. At each iteration, the algorithm tries to improve upon this candidate x until it eventually converges to the ideal optimal solution or meets certain predefined termination criteria. Therefore, the computational burden of these algorithms depends on the quality of the candidate solution. However, the initial estimates are not always close to the actual solution; in cases where they lie on the opposite side of the search space from where the actual solution resides, convergence is computationally expensive [21]. According to Tizhoosh [21], simultaneously searching in all directions for the candidate solution is an effective way to compensate for a poor initial guess. The POBL algorithm guides the search in the right direction based on this intuition: the opposite estimate x+ of x is taken into account together with x at each iteration of the algorithm. Therefore, introducing POBL theory into the DDS algorithm can effectively guide the algorithm to search in the right direction. Meanwhile, in order to obtain a better initial guess, we set the current best solution of the standard DDS algorithm as the initial candidate solution of the POBL algorithm, as shown in Algorithm 2.
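The pairing of a candidate with its opposite can be illustrated with the classic opposition rule of Tizhoosh [21], in which a point is reflected within its bounds and the better of the pair is kept. Note that this is an illustrative stand-in: the piecewise variant of [20] additionally partitions each interval before reflecting, and the helper names below are ours.

```python
def opposite_point(x, lb, ub):
    """Classic opposition: reflect each coordinate within its bounds.

    (Illustrative stand-in -- the piecewise rule of [20] partitions
    each interval [a_j, b_j] before reflecting.)
    """
    return [a + b - xi for xi, a, b in zip(x, lb, ub)]

def obl_step(f, x, lb, ub):
    """Evaluate x and its opposite; keep the better one (minimization)."""
    x_op = opposite_point(x, lb, ub)
    return x if f(x) <= f(x_op) else x_op
```

For instance, with bounds [0, 10] per dimension, the opposite of (9, 9) is (1, 1); for a sphere objective the opposite is kept, since it lies on the correct side of the search space.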

3.4. The Golden Section (GS) Can Guide the Solutions to Search for Promising Areas

The golden section (GS) is one of the most remarkable phenomena present in nature, and successful applications of GS-based rules have been widely observed. One of them is the optimization method based on the GS. The core of this method is to divide the search space into several sections or groups, separating the space in a way that allows for better functionality, spacing, and distribution [22]. It is well known that the feasible or theoretical optimal solution of an optimization problem lies in a certain interval of its solution space; therefore, if such a promising interval can be found quickly, the algorithm will quickly find the optimal solution. At present, the bisection method and the GS method are the two main fast search methods that gradually reduce the interval containing the minimum, and the search efficiency of the GS method is higher than that of the bisection method: in practice, the GS method can achieve with only 16 trials results that require 2,500 experiments with the bisection method. Therefore, GS search is an efficient method to gradually reduce the interval in which the minimum value is located. The key invariant is that, no matter how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the lowest value evaluated so far [23]. According to [23], the specific implementation steps and principle are as follows:

Figure 1 shows a single step of the technique for finding a minimum of a unimodal function in a one-dimensional solution space. As described in Figure 1, the values of the function f(x) are represented on the vertical axis, and the horizontal axis is the range of the parameter x. First, the function values at the three points x1, x2, and x3 are calculated, so f(x1), f(x2), and f(x3) are known. Since f(x2) is smaller than either f(x1) or f(x3), it is clear that the interval from x1 to x3 is a promising area in which to find the minimum.

Figure 1: Schematic diagram of the golden section.

The next operation in the minimization process is to "probe" the function by evaluating it at a new point x4. Since the function is unimodal, a value smaller or larger than f(x2) can be found between x1 and x3, and it is most effective to choose x4 in the larger of the two subintervals, i.e., between x2 and x3. As shown in Figure 1, if the function yields f(x4) < f(x2), a minimum lies between x2 and x3, and x2, x4, and x3 become the new triplet of points. However, if the function yields f(x4) > f(x2), then a minimum lies between x1 and x4, and the new triplet of points is x1, x2, and x4. In either case, we obtain a narrower promising search area guaranteed to contain the minimum of the function.
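The triplet-shrinking procedure described above is the standard golden section search. A compact Python version for a one-dimensional unimodal function might look as follows (a textbook sketch, not the paper's code):

```python
import math

INVPHI = (math.sqrt(5) - 1) / 2  # golden ratio conjugate, ~0.618

def golden_section_search(f, a, b, tol=1e-8):
    """Shrink [a, b] around the minimum of a unimodal function f."""
    # Two interior probe points placed at golden-ratio fractions of [a, b].
    c, d = b - INVPHI * (b - a), a + INVPHI * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INVPHI * (b - a)
            fc = f(c)
        else:                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INVPHI * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Because each step reuses one previously evaluated interior point, only one new function evaluation is needed per iteration, which is the source of the method's efficiency over bisection-style probing.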

3.5. Design of Proposed DDS Embedded with Piecewise Opposition-Based Learning: DDS-POBL

Based on the analysis of the POBL algorithm, it can be seen that a good choice of initial guess is crucial for the algorithm to quickly find the right search direction. To apply the POBL algorithm to the DDS algorithm and help it find the right search direction, we use the current best solution obtained by the DDS algorithm as the initial guess for POBL. After using the POBL algorithm to determine the optimal search direction for the DDS algorithm, the next step is to use the GS search method to guide the DDS algorithm toward the most promising interval. By determining the correct search direction of the DDS algorithm and guiding its search toward the most promising area, the proposed DDS-POBL algorithm greatly improves on the global and local optimization performance of the standard DDS algorithm, and its range of applications is also greatly expanded.

The pseudocode of the proposed DDS-POBL algorithm is described in Algorithm 3. In Algorithm 3, we first initialize the parameters of the DDS algorithm, setting r = 0.2 and the iteration counter to zero. Then, we run the DDS algorithm (lines 1 through 31 of the pseudocode) to obtain a better solution x_best and its fitness value F_best. Next, we invoke Algorithm 2 to find the right search direction, using x_best as the initial input for the POBL algorithm. A potential optimal solution is found by executing the POBL algorithm, i.e., the first through ninth lines of Algorithm 2. Finally, the GS method is used to search for the most promising interval, i.e., the 10th through 18th lines of Algorithm 2 are executed.

Algorithm 3: DDS-POBL algorithm.
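One plausible way to realize the golden-ratio bound update described above is to contract each variable's interval around the incumbent best solution, keeping a golden-ratio fraction of its current width. The helper below is a hypothetical sketch of that step, not the exact rule of Algorithm 3; the name `shrink_bounds` and the clipping behavior are our assumptions.

```python
INVPHI = (5 ** 0.5 - 1) / 2  # golden ratio conjugate, ~0.618

def shrink_bounds(lb, ub, best, keep=INVPHI):
    """Hypothetical bound contraction toward the incumbent best:
    each interval is narrowed to a `keep` fraction of its width,
    centred on best[j] and clipped to the original bounds."""
    new_lb, new_ub = [], []
    for a, b, x in zip(lb, ub, best):
        half = keep * (b - a) / 2.0
        new_lb.append(max(a, x - half))
        new_ub.append(min(b, x + half))
    return new_lb, new_ub
```

Repeatedly applying such a contraction between DDS passes concentrates the search in a progressively smaller region around the best solution found so far, which matches the intent, if not necessarily the letter, of the bound-update step.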

4. Numerical Experiments and Results

4.1. Benchmark Functions and Parameter Settings

A set of benchmark functions including 23 test problems was collected from the literature [13, 24–26] and applied to evaluate the performance of the proposed algorithm. The selected test functions are classified into two categories: unimodal and multimodal benchmark test functions. The unimodal problems, with only one global optimum and no local optima, are suitable for investigating the exploitation ability of the DDS-POBL algorithm. Multimodal problems, in contrast, are often used to investigate the exploration performance of an algorithm and its ability to avoid local optima, since they have more than one local optimal solution [27]. A detailed description of the selected test functions and their respective global optima is summarized in Table 1, where f_min represents the global optimal value.

Table 1: Summary of the benchmark test functions used in the experiment of this work.

In order to obtain a fair comparison, we set the neighborhood perturbation parameter and the maximum number of iterations of the DDS and DDS-POBL algorithms to 0.2 and 500, respectively, i.e., r = 0.2 and m = 500. The DDS-POBL and conventional DDS algorithms were coded in MATLAB R2015a, and all experiments were performed on a personal computer (Core i5 @ 2.50 GHz–2.70 GHz, 64 GB RAM).

4.2. The Evaluation Metrics of Function Optimization Performance

When evaluating the optimization performance of an algorithm, researchers often prefer to investigate its efficiency through metrics such as the acceleration rate (AR) and success rate (SR). According to [21, 27], AR is a metric related to the convergence speed of the algorithm. In the present work, AR is employed to compare the convergence rate of the DDS-POBL algorithm against that of the DDS algorithm. It is defined as follows:

AR = NFC_DDS / NFC_DDS-POBL,

where NFC indicates the number of function calls; the NFCs reported in the present experiments are averaged over 50 trials for each function. AR > 1 means that the convergence rate of DDS-POBL is faster than that of DDS, AR = 1 means that DDS-POBL has the same convergence rate as DDS, and AR < 1 means that the convergence rate of DDS-POBL is slower than that of DDS.

The success rate (SR) is an important metric for evaluating the performance of an optimization algorithm. It is defined as the ratio of the number of trials in which the algorithm reaches the desired value before reaching the maximum number of function calls to the total number of experimental trials:

SR = (number of successful trials) / (total number of trials).

Based on the descriptions of AR and SR, the average acceleration rate AR_ave and the average success rate SR_ave over the n benchmark functions can be calculated as

AR_ave = (1/n) Σ_{k=1}^{n} AR_k,  SR_ave = (1/n) Σ_{k=1}^{n} SR_k.
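The two metrics and their averages are straightforward to compute; a small sketch follows (the helper names are ours):

```python
def acceleration_rate(nfc_dds, nfc_dds_pobl):
    """AR > 1 means DDS-POBL needed fewer function calls than DDS."""
    return nfc_dds / nfc_dds_pobl

def success_rate(n_success, n_trials):
    """Fraction of trials reaching the target within the call budget."""
    return n_success / n_trials

def averages(ar_values, sr_values):
    """Average AR and SR over a set of benchmark functions."""
    return (sum(ar_values) / len(ar_values),
            sum(sr_values) / len(sr_values))
```

For example, if DDS needs 1,000 function calls and DDS-POBL needs 500 to reach the same target, then AR = 2.0; 45 successes in 50 trials gives SR = 0.9.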

4.3. Comparison with Conventional DDS Algorithm

A comparison of the optimization performance of the proposed DDS-POBL algorithm and the conventional dynamically dimensioned search (DDS) algorithm on the 23 test problems is presented in Table 2. In this experiment, each test function was run 50 times for 30 and 60 dimensions to observe the performance of both algorithms. Four common criteria were used to compare the algorithms [25], i.e., Best, Mean, Worst, and Standard deviation.

Table 2: Best, Mean, Worst, and Standard deviation value obtained by the DDS and DDS-POBL algorithm for 23 test functions. The best results are highlighted in italic.

From Table 2, for both 30 and 60 dimensions, the DDS-POBL algorithm has better "Best," "Mean," "Worst," and "St. dev" values than the conventional DDS algorithm for all test functions except f21 and f23. For four test functions (i.e., f7, f8, f13, and f21), the DDS-POBL algorithm achieves the theoretical optimum (0). Moreover, on test functions f1–f5, f14–f17, f19, and f20, the DDS-POBL algorithm provides results much closer to the global optimum. Both DDS-POBL and DDS obtain the global optimum on test function f21. DDS has a better "St. dev" value than DDS-POBL on function f23 and a better "Best" value on function f18 for dimension 30. In addition, the DDS algorithm provides solutions close to those of the DDS-POBL algorithm on test functions f9, f11, f12, f18, f22, and f23.

To analyze the convergence of the algorithms, the convergence curves of DDS-POBL and DDS on some typical test functions are plotted for 30 and 60 dimensions in Figure 2. All convergence curves are plotted using the mean result over the 50 runs. In the figure, the number of iterations is indicated on the horizontal axis, and the vertical axis represents the mean of the logarithms (log) of the objective function values. As displayed in Figure 2, the DDS-POBL algorithm converges faster than the classical DDS algorithm on all of these benchmark test functions. This means that the guidance of the search direction is correct and that the bounds updated by the GS steer the search toward more promising areas.

Figure 2: Convergence curves for D = 30 and D = 60 of certain typical functions. (a) Function f1. (b) Function f2. (c) Function f3. (d) Function f4. (e) Function f5. (f) Function f8. (g) Function f10. (h) Function f12. (i) Function f14. (j) Function f16. (k) Function f17. (l) Function f19. (m) Function f20. (n) Function f22.

To better evaluate the single solution-based search algorithms, we calculated the acceleration rate (AR) and success rate (SR) (as described in Section 4.2) for the benchmark functions. These results are recorded in Table 3. As described in Table 3, the proposed DDS-POBL algorithm converges to the desired result faster than the conventional DDS algorithm on 15 out of 23 test functions; on the other 8 test functions, the convergence speed of the DDS-POBL algorithm is equal to that of the DDS algorithm. From the average acceleration rate AR_ave in Table 3, it can be observed that the DDS-POBL algorithm has a superior average convergence rate to DDS. Overall, in terms of achieving the global optimum and avoiding local optima, the DDS-POBL algorithm outperforms DDS.

Table 3: Comparison results of acceleration rates and success rates between the DDS-POBL algorithm and DDS algorithm.
4.4. Comparison with Other State-of-the-Art Algorithms

To further show the superiority of DDS-POBL, we compared it with eight state-of-the-art optimization algorithms: the particle swarm optimizer (PSO) [28], the piecewise opposition-based learning harmony search (POHS) [20], completely derandomized self-adaptation evolution strategies (CMA-ES) [29], composite differential evolution (CoDE) [30], the memory-based hybrid dragonfly algorithm (MHDA) [28], the exploration-enhanced grey wolf optimizer (EEGWO) [24], the hybrid method based on the DA and the improved NM simplex algorithm (INMDA) [13], and the firefly algorithm with neighborhood attraction (NaFA) [31]. In this experiment, the parameter settings of EEGWO, INMDA, and DDS-POBL are as follows: the population size is 30, the maximum number of iterations is 500, the number of independent experiments is 50, and the other algorithm-specific parameters are consistent with the original literature. Since the function values for each test function of the POHS, CMA-ES, MHDA, and NaFA algorithms were taken directly from their original papers, the parameter settings of these algorithms remain the same as in those papers.

In this experiment, the best fitness values are averaged (represented by "Mean"), and the corresponding standard deviation values (indicated by "St. dev") are computed. Table 4 records the experimental results obtained by DDS-POBL and the other eight algorithms for D = 30. Please note that the best results achieved for each test function are highlighted in italic.

Table 4: Comparison of the DDS-POBL algorithm with other state-of-the-art algorithms (D = 30).

The comparison results presented in Table 4 show that the proposed DDS-POBL algorithm yields better results than PSO on 7 test functions (f1, f3, f4, f5, f6, f9, and f13) but not on 2 test functions (f11 and f12). On functions f11 and f12, POHS achieves better results than DDS-POBL, while DDS-POBL achieves better results on f1, f5, f6, and f13. The DDS-POBL algorithm provides better optimization values than the CMA-ES algorithm for 8 out of 9 benchmark test functions, while CMA-ES provides a better "Mean" result on function f11. Among the 9 test problems, the DDS-POBL algorithm shows poorer optimization results than CoDE only on function f12. With respect to MHDA, DDS-POBL obtains better "Mean" and "St. dev" results on functions f1, f3, f4, f5, f6, f9, and f13, but MHDA performs better on functions f11 and f12. When compared to INMDA, DDS-POBL provides very competitive results on all functions except f9, f11, and f12: INMDA obtains better results than DDS-POBL on f11 and f12 but worse results on f9. NaFA obtains better "Mean" values than DDS-POBL on functions f11 and f12. Compared with EEGWO, DDS-POBL obtains competitive results for five functions (f1, f3, f4, f5, and f9) and similar results for one function (f13); in addition, DDS-POBL obtains better results than EEGWO for one function (f6) but worse results for two functions (f11 and f12).

For the Rastrigin function, not all optimization algorithms can converge to the global optimal value of zero, as can easily be seen in the existing literature. For example, the POHS algorithm in [20] reaches an optimized value of 2.08E+01 for the Rastrigin function, while the optimized value is 5.90E−07 when using MHDA [28]. In [31], the optimized value obtained by NaFA for the Rastrigin function is 2.09E+01, and that obtained by CoDE [30] is 3.41E+01. In this paper, the solution of the Rastrigin function obtained by the DDS and DDS-POBL algorithms is not the global one. Although the value 29.8488 obtained by DDS-POBL is smaller than that obtained by CoDE, it is still far from zero. This may be one of the drawbacks of the DDS and DDS-POBL algorithms.

As can be seen from Table 4, the optimization performance of DDS-POBL is very close to that of EEGWO and INMDA, and it is difficult to determine which algorithm is better. Therefore, the comparison of algorithms should be supported by statistical analysis. To conduct a scientific statistical analysis of the comparison results, we adopted two statistical methods, namely the sign test and the Wilcoxon signed-rank test, taken from [32]. The statistical analysis results are given in Tables 5 and 6. In the sign test shown in Table 5, DDS-POBL shows a significant improvement over CMA-ES and CoDE at a significance level of α = 0.05 and over PSO and MHDA at a significance level of α = 0.1. From the Wilcoxon signed-rank test results recorded in Table 6, we can see that DDS-POBL outperforms CMA-ES at a significance level of α = 0.01 and outperforms CoDE, NaFA, and MHDA at α = 0.1. In addition, DDS-POBL is inferior to INMDA at a significance level of α = 0.05, is not significantly better than PSO, and is not significantly worse than EEGWO.

Table 5: Summary of sign test results on six state-of-the-art algorithms.
Table 6: Wilcoxon signed-rank test results on six state-of-the-art algorithms.
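The sign test used above reduces to a two-sided binomial test on the number of per-function wins. A minimal sketch in pure Python (the paired values below are illustrative placeholders, not the paper's data):

```python
from math import comb

def sign_test_p(a, b):
    """Two-sided sign test on paired per-function results (lower is better).

    Counts how often `a` beats `b`, discards ties, and returns the
    two-sided binomial tail probability under the null p = 0.5.
    """
    wins = sum(x < y for x, y in zip(a, b))
    n = sum(x != y for x, y in zip(a, b))  # ties are discarded
    k = max(wins, n - wins)
    return sum(comb(n, i) for i in range(k, n + 1)) * 2 / 2**n

# Hypothetical mean best values for two algorithms on nine functions.
a = [1e-8, 2e-3, 0.5, 29.8, 1e-6, 3.2, 0.0, 4e-4, 7e-2]
b = [5e-6, 8e-3, 0.9, 34.1, 2e-5, 2.9, 1e-3, 6e-4, 9e-2]
print(sign_test_p(a, b))  # 8 wins out of 9 -> p ≈ 0.039, significant at α = 0.05
```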

In order to reveal which algorithm reaches the vicinity of the global solution fastest, Figure 3 displays the convergence curves of DDS-POBL, EEGWO, and INMDA on functions f1, f5, f11, and f13. As can be seen in Figure 3, DDS-POBL and EEGWO show almost the same convergence rate, and both are inferior to INMDA on functions f1 and f5. On function f11, EEGWO presents the fastest convergence rate, followed by INMDA, with DDS-POBL the slowest. For function f13, DDS-POBL and EEGWO show the fastest convergence rate, while INMDA is slower than the former two, but all three converge to the global optimum.

Figure 3: Convergence curves of DDS-POBL, EEGWO, and INMDA with D = 30. (a) Function f1. (b) Function f5. (c) Function f11. (d) Function f13.
4.5. Robustness of the Proposed Algorithm

For the DDS algorithm, performance is largely determined by the scalar neighborhood size perturbation factor r. The authors of [7] suggested that the optimization performance of DDS is best when r = 0.2. In the present work, in order to investigate the effect of different values of r on the performance of the DDS-POBL algorithm, r is varied over five values, i.e., 0.1, 0.2, 0.3, 0.5, and 0.9, with the rest of the parameters kept the same as in the previous subsection. All 23 test functions listed in Table 1 were run 20 times independently. Table 7 records the experimental results obtained by DDS-POBL with the five values of r for D = 30, where “Mean” indicates the mean best objective function value and “St. dev” represents the corresponding standard deviation. For quick recognition, the best result for each function is marked in italics.

Table 7: Descriptive results of DDS-POBL using five different values of r for 23 functions (D = 30).
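The role of r can be seen in a minimal sketch of the DDS candidate-generation step, following the description in [7]: each dimension selected for perturbation receives a normally distributed step whose standard deviation is r times the variable range, and the probability of selecting a dimension decays over iterations. Function and variable names here are illustrative, not the paper's implementation:

```python
import numpy as np

def dds_candidate(x_best, lower, upper, i, max_iter, r=0.2, rng=None):
    """One DDS candidate: perturb a shrinking random subset of dimensions.

    The selection probability 1 - ln(i)/ln(max_iter) decays with the
    iteration counter i, so the search narrows over time.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = len(x_best)
    p = 1.0 - np.log(i) / np.log(max_iter)
    mask = rng.random(d) < p
    if not mask.any():                       # always perturb at least one dimension
        mask[rng.integers(d)] = True
    step = r * (upper - lower) * rng.standard_normal(d)
    x_new = np.where(mask, x_best + step, x_best)
    # Reflect at the bounds, then clamp any remaining violation.
    x_new = np.where(x_new < lower, 2 * lower - x_new, x_new)
    x_new = np.where(x_new > upper, 2 * upper - x_new, x_new)
    return np.clip(x_new, lower, upper)
```

A small r keeps candidates near the current best (local refinement), while a large r widens the neighborhood, which is why the sweep over r in Table 7 probes the algorithm's robustness.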

As described in Table 7, the overall optimization performance of the DDS-POBL algorithm with r = 0.2 is superior to the other cases. The specific results are as follows: when r is 0.1 or 0.5, 8 best results are obtained; when r is 0.2, 13 best results are obtained; when r is 0.3, 7 best results are obtained; and when r is 0.9, 6 best results are obtained. However, the optimization results for the different values of r are not significantly different except on one function (f13). Figure 4 shows the average convergence speed and performance of the DDS-POBL algorithm for the different values of r. The proposed DDS-POBL algorithm therefore has good robustness.

Figure 4: Convergence curves for D = 30 with different values of r on some typical functions. (a) f1. (b) f2. (c) f3. (d) f4. (e) f5. (f) f8. (g) f10. (h) f14. (i) f15. (j) f17. (k) f9. (l) f20.
4.6. Scalability of the Proposed Algorithm

To further investigate the scalability and optimization performance of the proposed algorithm in high-dimensional spaces, experiments are repeated on some typical test functions (f1, f2, f5–f8, f10, f13, f16, f17, and f20–f21) with dimensions 100, 300, and 500. Each test function is run 30 times independently, and the remaining parameters use the same settings as in Section 4.2. Table 8 records the results of this experiment.

Table 8: Evaluation of the performance of the DDS-POBL algorithm over 30 independent runs for dimensions 100, 300, and 500.

As can be seen from Table 8, greatly increasing the dimension of the test functions does not degrade the optimization performance of the DDS-POBL algorithm by much. Although the performance of DDS-POBL declines as the dimension increases on several test functions (i.e., f1, f2, f5, f6, f10, f16, f17, and f20), the reduction is small enough to be ignored. In addition, the optimization performance of DDS-POBL on several test functions (f7, f8, f13, and f21) does not change with increasing dimension, and on all of them the global optimum 0 is obtained.

4.7. Application of the Proposed Method to Parameter Calibration of Xinanjiang Model (Day Model)

The Xinanjiang model is a well-known hydrological model put forward by Zhao R. J. of Hohai University; it is a conceptual watershed hydrological model with distributed parameters, and its detailed description can be found in reference [33]. There are many studies on parameter optimization of this model, mainly including the surrogate modeling approach [34], the genetic algorithm (GA) [35], and SCE-UA [36]. In this subsection, we choose this model as a real optimization problem to test the efficiency of the proposed method and select the Yanduhe catchment of the Three Gorges, with a drainage area of 601 km² [37], as the study area. This area has a humid climate, good vegetation, and loose soil, and it is divided into 59 basic units, including 30 outer units and 29 inner units. The area, chain length, and slope length of the inner and outer units were surveyed, and the results are shown in Table 9. The historical runoff data used run from January 1, 1981, to December 30, 1981.

Table 9: Result of basic cell survey.

The parameters of the Xinanjiang model are divided into four categories, comprising 15 parameters in total [31]. The first category, “evapotranspiration parameters,” includes K, WUM, WLM, and C. The second category, “runoff production parameters,” consists of WM, B, and IMP. The third category, “parameters of runoff separation,” contains SM, EX, KSS, and KG. The last category, “runoff concentration parameters,” includes KKSS, KKG, CS, and L. Among these parameters, WM, WUM, WLM, B, C, and IMP are insensitive parameters that are generally valued by experience; their values are recorded in Table 10. The other nine parameters are sensitive. Of these, K, EX, and L do not need to be calibrated and are determined by experience as shown in Table 10. The remaining six sensitive parameters need to be calibrated, and their ranges are described in Table 11. To further verify the effectiveness of the proposed algorithm, we used DDS-POBL and DDS to calibrate these six sensitive parameters; the results are shown in Table 11. Table 12 lists the proportion of the runoff calculation relative error falling in different intervals. Figure 5 plots the fit between the calculated flow and the observed flow for the DDS-POBL and DDS algorithms after calibrating the six sensitive parameters.

Table 10: Empirical values of 6 parameters.
Table 11: Calibration results for 6 sensitive parameters.
Table 12: The proportion of different intervals of runoff calculation relative error.
Figure 5: Fitting of calculation runoff and observed runoff of DDS-POBL and DDS.
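The calibration itself reduces to a bound-constrained minimization: the six sensitive parameters are the decision variables, and the misfit between simulated and observed runoff is the objective that DDS or DDS-POBL minimizes. A hedged sketch of one plausible objective (the `simulate_runoff` call and the toy stand-in model are placeholders, not the paper's Xinanjiang implementation):

```python
import numpy as np

def relative_error_objective(params, observed, simulate_runoff):
    """Mean absolute relative error between simulated and observed runoff.

    `simulate_runoff` stands in for a Xinanjiang day-model run driven by
    the six sensitive parameters (SM, KSS, KG, KKSS, KKG, CS); it is a
    placeholder, not the paper's implementation.
    """
    simulated = simulate_runoff(params)
    return float(np.mean(np.abs(simulated - observed) / np.abs(observed)))

# Toy stand-in model: observed runoff scaled by a parameter-dependent factor.
obs = np.array([12.0, 30.5, 8.2, 4.1, 55.0])
toy_model = lambda p: obs * (1.0 + 0.01 * np.sum(p))
print(relative_error_objective(np.zeros(6), obs, toy_model))  # exact fit: 0.0
```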

It can be seen from Table 12 that, in the runoff calculation, for the relative error intervals [0, 20] and (30, 100], the proportion for DDS is larger than that for DDS-POBL, while in the interval (20, 30], the proportion for DDS-POBL is much larger than that for DDS. In addition, the proportion for DDS-POBL in the relative error interval (20, 30] is greater than the sum of the proportions for DDS over the interval [0, 50]. From Figure 5, DDS-POBL has a slightly poorer fit than DDS during the relatively smooth periods of runoff variation, but the DDS fit is significantly inferior to that of DDS-POBL when the runoff changes drastically.
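Interval proportions of the kind reported in Table 12 can be computed from a runoff series as follows (a sketch with the interval edges discussed above; the observed/simulated values are illustrative):

```python
import numpy as np

def error_interval_proportions(observed, simulated, edges=(0, 20, 30, 100)):
    """Fraction of time steps whose relative error (%) falls in each interval."""
    rel_err = 100.0 * np.abs(simulated - observed) / np.abs(observed)
    counts, _ = np.histogram(rel_err, bins=edges)
    return counts / len(rel_err)

obs = np.array([10.0, 20.0, 30.0, 40.0])
sim = np.array([11.0, 25.0, 45.0, 80.0])   # 10%, 25%, 50%, 100% relative error
print(error_interval_proportions(obs, sim))  # [0.25 0.25 0.5 ]
```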

5. Conclusions

An improved version of DDS combined with piecewise opposition-based learning, called DDS-POBL, has been proposed in this work. Compared with DDS, DDS-POBL has been significantly improved in the two following aspects: first, piecewise opposition-based learning strategies are introduced to help the DDS algorithm search in the correct direction; second, the golden section method is used to guide potential solutions toward more promising areas. Several numerical experiments were performed to verify the advantages of the proposed approach. Firstly, tests were done on 23 benchmark test functions; the simulation results showed that on 15 of the 23 functions, DDS-POBL had better optimization performance than traditional DDS. Secondly, nine typical test functions were chosen to compare DDS-POBL with other state-of-the-art algorithms; the experimental results revealed that the proposed DDS-POBL algorithm offers highly competitive results in most cases. Thirdly, an experiment varying the scalar neighborhood size perturbation factor r was performed to investigate robustness; the results showed that DDS-POBL provides optimization results with little difference across values of r. Furthermore, the parameter calibration application of the Xinanjiang model reveals the superiority of DDS-POBL over DDS in a practical optimization problem.
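The two ingredients named above can be sketched generically; this is an illustrative reconstruction of the building blocks (opposite points and golden-section interior points over box bounds), not the exact DDS-POBL update rule:

```python
import numpy as np

PHI = (np.sqrt(5) - 1) / 2  # golden ratio conjugate, ~0.618

def opposite_point(x, lower, upper):
    """Opposition-based learning: the opposite of x within [lower, upper]."""
    return lower + upper - x

def golden_interior_points(lower, upper):
    """The two golden-section interior points of the interval [lower, upper]."""
    span = upper - lower
    return upper - PHI * span, lower + PHI * span

# A candidate, its opposite, and the golden-section points of the search box
# can all be evaluated, keeping whichever has the best objective value.
x = np.array([2.0, -3.0])
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
print(opposite_point(x, lo, hi))  # [-2.  3.]
```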

In addition, several representative large-scale test functions were selected as experimental objects to verify the scalability of the DDS-POBL algorithm, and the results on these large-scale problems showed that the proposed method scales well. However, for functions f11 and f12, the proposed DDS-POBL algorithm could not find a satisfactory result, so DDS-POBL is still not an ideal algorithm. In future work, this line of research and its applications can be explored further.

Data Availability

The Excel data and MATLAB code used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This project was supported by the National Natural Science Foundation of China (Grant no. 71841054), Soft Science Foundation of Heilongjiang Province (Grant no. GC16D102), Key Program of Economic and Social of Heilongjiang Province (Grant no. KY10900170004), Philosophy and Social Science Research Planning Program of Heilongjiang Province (Grant no. 17JYH49), Research on Supply Potential and Business Decision of Larch Carbon potential in Heilongjiang Province (Grant no. 2572018BM07), and Research on Forest Carbon Sink Supply Potential and Policy Tools in State-Owned Forest Area of Heilongjiang province (Grant no. 18GLD289).

References

  1. I. Boussaïd, J. Lepagnot, and P. Siarry, “A Survey on optimization metaheuristics,” Information Sciences, vol. 237, pp. 82–117, 2013.
  2. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  3. G. Dueck and T. Scheuer, “Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing,” Journal of Computational Physics, vol. 90, no. 1, pp. 161–175, 1990.
  4. M. Creutz, “Microcanonical Monte Carlo simulation,” Physical Review Letters, vol. 50, no. 19, pp. 1411–1414, 1983.
  5. F. S. Wen and C. S. Chang, “Tabu search approach to alarm processing in power systems,” IEE Proceedings-Generation, Transmission and Distribution, vol. 144, no. 1, pp. 31–38, 1997.
  6. C. Voudouris, E. P. K. Tsang, and A. Alsheddy, “Guided local search,” in Handbook of Metaheuristics, M. Gendreau and J.-Y. Potvin, Eds., pp. 321–361, Springer, Berlin, Germany, 2010.
  7. B. A. Tolson and C. A. Shoemaker, “Dynamically dimensioned search algorithm for computationally efficient watershed model calibration,” Water Resources Research, vol. 43, no. 1, Article ID W01413, 2007.
  8. B. A. Tolson, M. Asadzadeh, H. R. Maier, and A. Zecchin, “Hybrid discrete dynamically dimensioned search (HD-DDS) algorithm for water distribution system design optimization,” Water Resources Research, vol. 45, no. 12, Article ID W12416, 2009.
  9. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 3rd edition, 1996.
  10. J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–72, 1992.
  11. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, November-December 1995.
  12. S. Mirjalili, “Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems,” Neural Computing and Applications, vol. 27, no. 4, pp. 1053–1073, 2016.
  13. J. Xu and F. Yan, “Hybrid Nelder-Mead algorithm and dragonfly algorithm for function optimization and the training of a multilayer perceptron,” Arabian Journal for Science and Engineering, vol. 44, no. 4, pp. 3473–3487, 2019.
  14. Q. Y. Duan, V. K. Gupta, and S. Sorooshian, “Shuffled complex evolution approach for effective and efficient global minimization,” Journal of Optimization Theory and Applications, vol. 76, no. 3, pp. 501–521, 1993.
  15. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
  16. X. Huang, W. Liao, X. Lei et al., “Parameter optimization of distributed hydrological model with a modified dynamically dimensioned search algorithm,” Environmental Modelling and Software, vol. 52, pp. 98–110, 2014.
  17. F.-R. Lin, N.-J. Wu, C.-H. Tu, and T.-K. Tsay, “Automatic calibration of an unsteady river flow model by using dynamically dimensioned search algorithm,” Mathematical Problems in Engineering, vol. 2017, Article ID 7919324, 19 pages, 2017.
  18. M. Asadzadeh and B. A. Tolson, “Pareto archived dynamically dimensioned search with hypervolume-based selection for multi-objective optimization,” Engineering Optimization, vol. 45, no. 12, pp. 1489–1509, 2013.
  19. M. J. F. G. Macêdo, M. F. P. Costa, A. M. A. C. Rocha, and E. W. Karas, “Combining filter method and dynamically dimensioned search for constrained global optimization,” in Computational Science and Its Applications-ICCSA 2017, pp. 119–134, Springer, Cham, Switzerland, 2017.
  20. R. Sarkhel, N. Das, A. K. Saha, and M. Nasipuri, “An improved Harmony Search Algorithm embedded with a novel piecewise opposition based learning algorithm,” Engineering Applications of Artificial Intelligence, vol. 67, pp. 317–330, 2018.
  21. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), vol. 1, pp. 695–701, Vienna, Austria, November 2005.
  22. E. Cuevas, L. Enríquez, D. Zaldívar, and M. Pérez-Cisneros, “A selection method for evolutionary algorithms based on the Golden Section,” Expert Systems with Applications, vol. 106, pp. 183–196, 2018.
  23. J. A. Koupaei, S. M. M. Hosseini, and F. M. M. Ghaini, “A new optimization algorithm based on chaotic maps and golden section search method,” Engineering Applications of Artificial Intelligence, vol. 50, pp. 201–214, 2016.
  24. W. Long, J. Jiao, X. Liang, and M. Tang, “An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization,” Engineering Applications of Artificial Intelligence, vol. 68, pp. 63–80, 2018.
  25. W. Gao, S. Liu, and L. Huang, “A novel artificial bee colony algorithm based on modified search equation and orthogonal learning,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
  26. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
  27. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
  28. S. R. K.S. and S. Murugan, “Memory based hybrid dragonfly algorithm for numerical optimization problems,” Expert Systems with Applications, vol. 83, pp. 63–78, 2017.
  29. N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
  30. Y. Wang, Z. Cai, and Q. Zhang, “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 55–66, 2011.
  31. H. Wang, W. Wang, X. Zhou et al., “Firefly algorithm with neighborhood attraction,” Information Sciences, vol. 382-383, pp. 374–387, 2017.
  32. J. Derrac, J. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 3–18, 2011.
  33. R. J. Zhao, “The Xinanjiang model applied in China,” Journal of Hydrology, vol. 135, no. 1–4, pp. 371–381, 1992.
  34. Y. Ye, X. Song, J. Zhang, F. Kong, and G. Ma, “Parameter identification and calibration of the Xin’anjiang model using the surrogate modeling approach,” Frontiers of Earth Science, vol. 8, no. 2, pp. 264–281, 2014.
  35. Q. J. Wang, “The genetic algorithm and its application to calibrating conceptual rainfall-runoff models,” Water Resources Research, vol. 27, no. 9, pp. 2467–2471, 1991.
  36. Q. Duan, S. Sorooshian, and V. Gupta, “Effective and efficient global optimization for conceptual rainfall-runoff models,” Water Resources Research, vol. 28, no. 4, pp. 1015–1031, 1992.
  37. X. Song, F. Kong, C. Zhan, J. Han, and X. Zhang, “Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach,” Water Science and Engineering, vol. 6, no. 1, pp. 1–17, 2013.