Complexity
Volume 2018, Article ID 4730957, 14 pages
https://doi.org/10.1155/2018/4730957
Research Article

Multipopulation Management in Evolutionary Algorithms and Application to Complex Warehouse Scheduling Problems

1Department of Electrical Engineering, Shaoxing University, Shaoxing, Zhejiang, China
2School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China

Correspondence should be addressed to Haiping Ma; mhping1981@126.com

Received 3 February 2018; Accepted 18 March 2018; Published 26 April 2018

Academic Editor: Liang Hu

Copyright © 2018 Yadong Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Multipopulation is an effective optimization strategy that is often used in evolutionary algorithms (EAs) to improve performance. However, determining the number of subpopulations during the evolution process for a given problem is remarkably difficult, and this number may significantly affect the optimization ability of EAs. This paper proposes a simple multipopulation management strategy that dynamically adjusts the subpopulation number across the different phases of evolution. The proposed method uses the individual distances within a subpopulation as well as the population distances between subpopulations to determine the subpopulation number, which is essential for maintaining population diversity and enhancing exploration ability. Furthermore, the proposed multipopulation management strategy is embedded into popular EAs to solve real-world complex automated warehouse scheduling problems. Experimental results show that the proposed multipopulation EAs are easy to implement and outperform regular single-population algorithms by a large margin.

1. Introduction

Evolutionary algorithms (EAs) are fast and robust computation methods for global optimization and have been widely applied to numerous real-world problems [1–5]. In recent years, the concept of multipopulation has frequently been discussed and used to improve the optimization performance of EAs. In this approach, the original population is first divided into several small subpopulations for special purposes such as large-scale problems and dynamic optimization problems. Then, particular evolution mechanisms and operations, for example, crossover and mutation in genetic algorithms (GAs), are executed on each subpopulation. The purpose of multipopulation is to maintain population diversity and enhance exploration ability, which is the crucial factor in avoiding premature convergence when handling optimization problems.

Existing studies on multipopulation demonstrate that this strategy has become one of the most effective methods for enhancing EA performance [6, 7]. The key reasons are as follows: the overall population is divided into multiple subpopulations, so population diversity can be maintained because different subpopulations can be located in different search domains; different areas can be searched simultaneously, enabling the separated populations to rapidly find optimal solutions; and various population-based EAs can be quickly and easily integrated with multipopulation methods. Chang [8] utilized a modified particle swarm optimization (PSO) with multiple subpopulations for multimodal function optimization problems; simulation results on complex multimodal functions showed that the best particles of the subpopulations located both the global and local solutions. Nseef et al. [9] proposed an adaptive multipopulation artificial bee colony (ABC) algorithm for dynamic optimization problems, where multiple subpopulations were used to cope with dynamic changes and to maintain diversity; experimental results showed that the proposed multipopulation ABC was superior to regular ABC on all tested instances. Wu et al. [10] used differential evolution (DE) with a multipopulation-based ensemble of mutation strategies for optimizing benchmark functions, where each subpopulation performed one mutation strategy; experimental comparisons showed the competitive performance of the proposed method. A hybrid multipopulation GA was employed in [11] for the dynamic facility layout problem, where the entire solution space was separated into different parts and each subpopulation represented a separate part; simulation results showed the superiority of the proposed algorithm over other algorithms. Niu et al. [12] proposed a symbiosis-based alternative learning multi-swarm PSO algorithm, where communication between subpopulations used a learning method to select one exemplar out of the center position, the local best position, and the historical best position, incorporating the experience of the internal and external subpopulations to keep population diversity; the experimental results exhibited better convergence speed and optimality. Xiao et al. [13] presented a novel multipopulation coevolution immune optimization algorithm (IOA) for most of the existing multimodal benchmarks, where coevolution of three subpopulations was promoted through a self-adjusted clone operator to enhance exploration and exploitation; the authors showed that their method outperformed three known immune algorithms and several other EAs. Turky and Abdullah [14] introduced external archiving into a multipopulation harmony search (HS) algorithm to solve dynamic optimization problems; results on moving peak benchmarks showed that their modified version was better than the original harmony search algorithm. A multipopulation cooperative bat algorithm (BA) was used in [15] for an artificial neural network model, which mainly depended on the connection weights and network structure; experimental results showed a significant improvement when applying the proposed algorithm to all test cases. Ozsoydan and Baykasoǧlu [16] employed a multipopulation firefly algorithm (FA) to tackle dynamic optimization problems; experiments on moving peak benchmarks showed that the proposed algorithm significantly improved performance. Mauša and Galinac Grbac [17] proposed a coevolutionary multipopulation genetic programming (GP) approach that combined colonization and migration with three ensemble selection strategies for classification in software defect prediction; computational results demonstrated the efficiency of the proposed method.

Although multipopulation methods have been successful in solving optimization problems, most of them use a preset, constant number of subpopulations during the optimization process [18]. The subpopulation number has an important impact on the performance of multipopulation EAs, because it is related to the difficulty of the problem, which is not known in advance. For a given problem, an EA may need different subpopulation numbers during different phases of the search process. For example, an algorithm with many subpopulations may search effectively during the initial phase of the optimization process, whereas an algorithm with few subpopulations may search better during the later phase. Therefore, it is potentially beneficial to dynamically manage the number of subpopulations during the evolution process, based on the difficulty of the problem; this may yield outstanding results without requiring dedicated evolution operators. Nonetheless, in most of the multipopulation EA literature [19, 20], the dynamic management of the subpopulation number is rarely mentioned.

In order to address this issue, this paper proposes a multipopulation management strategy to improve EA optimization performance. The proposed method uses some simple rules to dynamically manage the subpopulation number to maintain population diversity. To verify its effectiveness, the proposed multipopulation management strategy is embedded into various popular algorithms to construct multipopulation EAs. Then they are tested on a set of CEC benchmark functions and further applied to real-world complex automated warehouse scheduling problems. Experimental results show that the proposed multipopulation management strategy can help EAs to obtain excellent results and outperform some state-of-the-art single-population algorithms in the literature.

The original contributions of this paper are as follows: a dynamic management strategy for the subpopulation number is proposed to boost population diversity while preserving simplicity for EAs; the proposed multipopulation management strategy is embedded into several popular EAs, including the stud genetic algorithm (SGA), population-based incremental learning (PBIL), self-adaptive differential evolution (SaDE), and standard particle swarm optimization (PSO2011); and the optimization ability of these multipopulation EAs is investigated on a set of benchmark functions, after which they are applied to solve real-world automated warehouse scheduling problems.

The remainder of the paper is organized as follows. Section 2 gives a detailed description of the multipopulation management strategy. Section 3 discusses the integration of the multipopulation management strategy with popular EAs. Section 4 compares the performance of several EAs in conjunction with the multipopulation management strategy on benchmark functions and complex warehouse scheduling problems. Section 5 provides conclusions and suggestions for future research.

2. Multipopulation Management Strategy

This section presents the basic challenges for multipopulation EAs, as well as our proposed dynamic management for the subpopulation number.

2.1. Challenges for Multipopulation Methods

In EAs, diversity refers to differences among candidate solutions. As noted throughout the EA literature, evolutionary progress depends fundamentally on the existence of variation in the population, and high population diversity greatly contributes to EA performance. Diversity loss often results in premature convergence, because EAs become trapped in local optima and lack the population diversity needed to escape. The crucial point for EAs solving optimization problems is therefore how to maintain population diversity. In recent years, multipopulation methods have been introduced into EAs and have become one of the most successful ways to improve optimization performance, because such methods keep candidate solutions scattered over the entire search space. This feature helps population-based EAs to quickly reach the global optimal solutions.

To make multipopulation methods more efficient, several crucial design issues need to be addressed. The first question is how to determine the number of subpopulations. Too many subpopulations may waste the limited computational resources, whereas too few may limit the effect of the multipopulation strategy. The second question is how to determine the search area of each subpopulation. If the search area of a subpopulation is too small, the small isolated subpopulation may converge to a local optimal solution; in this case, diversity is lost and the algorithm can hardly make any progress. On the contrary, if the search area of a subpopulation is too large, it is almost equal to the search area of the original population, and it is very hard to obtain better performance than the regular algorithms. The third question is how subpopulations should communicate. Many researchers believe that communication between subpopulations is very helpful during optimization, because information can be shared among subpopulations, which accelerates the search process and helps find promising solutions. In current studies, the communication between subpopulations is controlled by four parameters: (i) a communication rate that defines the number of solutions in a subpopulation to be sent to other subpopulations; (ii) a communication policy that determines which solutions are to be replaced by ones from other subpopulations; (iii) a communication interval that sets the frequency of communication; and (iv) a connection topology that defines how subpopulations are connected.
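The four communication parameters described above can be collected in a small configuration object. The sketch below is purely illustrative; the field names and example values are our assumptions, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class CommunicationConfig:
    """Hypothetical container for the four communication parameters
    between subpopulations; names and values are illustrative."""
    rate: int        # (i) number of solutions sent to other subpopulations
    policy: str      # (ii) which solutions get replaced, e.g. "worst"
    interval: int    # (iii) generations between communication events
    topology: str    # (iv) connection topology, e.g. "ring"

# Example configuration: every 5 generations, each subpopulation sends
# 2 solutions over a ring topology, replacing the receivers' worst ones.
cfg = CommunicationConfig(rate=2, policy="worst", interval=5, topology="ring")
```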

For these challenges, most existing multipopulation methods just use predefined values, which are based on empirical experience, to determine the parameter setting of subpopulations. Some other studies assume that some information of optimization problems is known. In these cases, problem information can be used to guide the configuration of multipopulation parameters. However, for most of the cases, we need to deeply explore these challenging issues to develop excellent multipopulation EAs.

For the first challenge mentioned above, a good method is to use an appropriate number of subpopulations to maintain population diversity. There are two approaches. The first is to use a fixed number of subpopulations; most current multipopulation methods fall into this group. Its advantage is simplicity of implementation: we only need to create a fixed number of subpopulations in advance for the optimization problem. Generally speaking, the more peaks a fitness function has, the more subpopulations are needed. However, it is not practical to obtain the number of peaks of the fitness function for real problems, and the distribution and shape of the fitness function may also play a role in configuring the subpopulation number. The second approach is to use a variable number of subpopulations. The difficulty here is deciding when the number of subpopulations should be increased or decreased. To maintain population diversity, the subpopulation number might need to differ between states of the evolution process. For example, the early stages need a large number of subpopulations so that candidate solutions can scatter over the entire search space, which leads to high population diversity; in the later stages, however, a small number of subpopulations favors reduced diversity for fast convergence to the global best solutions. So the dynamic change of the subpopulation number should be in line with the population diversity. In many previous studies [21, 22], a main population is split into multiple parts to increase the subpopulation number, and a set of small subpopulations is merged into a main population to decrease it. The common characteristic of these methods is that they often adopt a specific clustering method to perform these operations. Therefore, additional knowledge about clustering is needed, which increases the complexity of the algorithms; this is undesirable when solving complex problems under limited computational resources.

Undoubtedly, it is attractive to dynamically manage the number of subpopulations during the evolution process without introducing additional complex mechanisms or requiring advance knowledge of the difficulty of the optimization problem.

2.2. Ways to Manage Subpopulation Number

This part introduces a simple and effective multipopulation management strategy to dynamically increase or decrease the subpopulation number. Compared with other multipopulation methods, the proposed method directly addresses the challenge of determining the number of subpopulations.

In managing the subpopulation number, four basic rules are considered:
(1) The maximum number of subpopulations is limited, in order to prevent the computational burden of too many coevolving subpopulations.
(2) The subpopulation number decreases when we merge similar subpopulations or delete existing subpopulations.
(3) The subpopulation number increases when we create new subpopulations or divide an existing subpopulation.
(4) Interaction between subpopulations is not considered, because its effects would confound the investigation of the multipopulation management strategy itself.

Based on the above basic rules, two open problems need to be solved in the proposed multipopulation management strategy. First, what is the best condition for increasing or decreasing the subpopulation number? Before answering this question, recall that the purpose of multipopulation methods is to maintain population diversity, so an effective method is to link the control condition for the subpopulation number to population diversity. Quantitatively, we need a simple way to measure diversity, and Euclidean distance is probably the most widely used diversity measure. The Euclidean distance between two solutions $x_i$ and $x_j$ can be calculated as

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{n} (x_{i,k} - x_{j,k})^2}, \quad (1)$$

where $x_{i,k}$ and $x_{j,k}$ denote the $k$th solution variables of $x_i$ and $x_j$, respectively, and $n$ is the number of solution variables.

Equation (1) measures the individual distance between two solutions. In the multipopulation method, we further need the population distance between two subpopulations. To calculate population distance, the concept of the average population is introduced: a vector of the average values of all solutions in the same population. We use $\bar{x}$ to represent the average population, which is calculated as

$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \quad (2)$$

where $N$ is the size of the population.

The corresponding standard deviation around the average population is calculated as

$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} d(x_i, \bar{x})^2}. \quad (3)$$

Then, we define the population distance between two subpopulations $P_1$ and $P_2$ as

$$PD(P_1, P_2) = d(\bar{x}_1, \bar{x}_2) - 2(\sigma_1 + \sigma_2), \quad (4)$$

where $d(\bar{x}_1, \bar{x}_2)$ is the Euclidean distance between the average populations $\bar{x}_1$ and $\bar{x}_2$, and $\sigma_1$ and $\sigma_2$ are the standard deviations for subpopulations $P_1$ and $P_2$, respectively. Equation (4) reflects the fact that the population distance between two subpopulations depends both on the Euclidean distance between their average populations and on the solution distribution within each subpopulation. If $PD(P_1, P_2)$ is small, the areas occupied by subpopulations $P_1$ and $P_2$ overlap, which means that they are highly similar. The coefficient 2 is set on the basis of the normal distribution characteristic [23], according to which about 95% of solutions lie within two standard deviations of the average population.
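The diversity measures of equations (1)–(4) can be sketched in a few lines of NumPy. Note that the exact algebraic form of the population distance (a Euclidean distance between average populations, discounted by twice the sum of the standard deviations) is our reconstruction of the garbled source text; small or negative values indicate overlapping subpopulations.

```python
import numpy as np

def individual_distance(x_i, x_j):
    """Equation (1): Euclidean distance between two solutions."""
    return np.linalg.norm(np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float))

def average_population(pop):
    """Equation (2): component-wise mean of all solutions in a population."""
    return np.mean(np.asarray(pop, dtype=float), axis=0)

def population_std(pop):
    """Equation (3): root-mean-square distance of the individuals
    from the average population."""
    avg = average_population(pop)
    dists = [individual_distance(x, avg) for x in pop]
    return np.sqrt(np.mean(np.square(dists)))

def population_distance(pop1, pop2):
    """Equation (4), as reconstructed: distance between the average
    populations minus two standard deviations of each subpopulation.
    Small or negative values mean the occupied areas overlap."""
    d = individual_distance(average_population(pop1), average_population(pop2))
    return d - 2.0 * (population_std(pop1) + population_std(pop2))
```

For example, two tight clusters far apart yield a clearly positive population distance, while two interleaved clusters yield a negative one.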

Next, we consider several diversity cases, defined as follows:
(1) diversity between individuals in the same subpopulation;
(2) diversity between a subpopulation and other subpopulations;
(3) diversity between the current subpopulation and its parent subpopulation.

Note that the first case is taken as individual diversity; the second and third cases are taken as population diversity.

Once we have a method for measuring diversity, only a threshold value $\varepsilon$ needs to be set to decide the condition for increasing or decreasing the subpopulation number. For individuals in the same subpopulation, if $d(x_i, x_j) > \varepsilon$ for any one of the individual distances, we consider the individuals different; if $d(x_i, x_j) \le \varepsilon$ for all individual distances, they are considered similar. In the same way, for any two subpopulations, if $PD(P_1, P_2) > \varepsilon$, they are regarded as diverse, and if $PD(P_1, P_2) \le \varepsilon$, we consider them similar. Note that similarity is the opposite of diversity; $\varepsilon$ can be a user-defined parameter or an adaptive parameter.

Second, how do we increase or decrease the subpopulation number during the evolution process? That is, how do we create new subpopulations or delete redundant ones? Based on the diversity concepts above, the multipopulation management strategy can be roughly classified into three cases. In the first case, when all individuals in a certain subpopulation are similar, we first randomly preserve one individual, because they are almost identical, and then delete the subpopulation. Meanwhile, we create a new subpopulation consisting of the following individuals: 1/3 are copies of the preserved individual, 1/3 are drawn from the best individuals in the entire population, and 1/3 are randomly generated. In the second case, when a subpopulation is similar to other subpopulations within the whole population, we directly delete it to save computational resources. In the third case, when a subpopulation is similar to its parent subpopulation, we create a new subpopulation consisting of the following individuals: 50% are copies of the best individual in the latest generation, and the other 50% are generated randomly from the neighborhood of the best individual using a normal distribution. Note that the maximum number of subpopulations is limited to a preset value to prevent too many subpopulations from being involved. Figure 1 illustrates the multipopulation management strategy during the evolution process.
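The three management cases above can be sketched as a single pass over the current subpopulations. The helper callables (`fitness`, `ind_dist`, `pop_dist`, `random_solution`, `neighbor`) are assumptions standing in for problem-specific code, and `global_best` is assumed to hold at least one third of a subpopulation's size in best-so-far individuals; this is a sketch of the strategy, not the authors' implementation.

```python
import random

def manage_subpopulations(subpops, parents, fitness, ind_dist, pop_dist,
                          epsilon, global_best, random_solution, neighbor,
                          max_subpops):
    """Sketch of the three management cases of Section 2.2."""
    managed = []
    for sp, parent in zip(subpops, parents):
        n = len(sp)
        # Case 2: similar to an already-kept subpopulation -> delete it.
        if any(pop_dist(sp, kept) <= epsilon for kept in managed):
            continue
        # Case 1: all individuals similar -> rebuild: 1/3 copies of one
        # preserved individual, 1/3 best individuals of the entire
        # population, 1/3 randomly generated solutions.
        if all(ind_dist(a, b) <= epsilon
               for i, a in enumerate(sp) for b in sp[i + 1:]):
            kept = random.choice(sp)
            third = n // 3
            sp = ([kept] * third + list(global_best[:third])
                  + [random_solution() for _ in range(n - 2 * third)])
        # Case 3: similar to its parent subpopulation -> 50% copies of
        # the best recent individual, 50% normally distributed neighbors.
        elif parent is not None and pop_dist(sp, parent) <= epsilon:
            best = min(sp, key=fitness)
            half = n // 2
            sp = [best] * half + [neighbor(best) for _ in range(n - half)]
        # The preset cap on the subpopulation number is enforced here.
        if len(managed) < max_subpops:
            managed.append(sp)
    return managed
```

A tiny 1-dimensional run illustrates the behavior: a collapsed subpopulation is rebuilt with fresh material, a duplicate subpopulation is deleted, and a diverse one passes through unchanged.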

Figure 1: Multipopulation management strategy during evolution process.

3. Integration with EAs

Now we integrate the proposed multipopulation management strategy with EAs to form multipopulation EAs; the flowchart is shown in Figure 2. The procedure starts by setting the parameters. It creates a population of candidate solutions and evaluates them. Next, the population of solutions is divided into multiple subpopulations, and each subpopulation runs an EA paradigm to generate its own offspring. Then, the subpopulations undergo diversity judgement and, based on the diversity levels, subpopulations are created or deleted. Finally, the stopping condition is checked. In this way, EAs can always dynamically manage the subpopulation number, leading to better performance than the corresponding single-population EAs. The pseudocode of multipopulation EAs is shown in Algorithm 1.

Algorithm 1: Pseudocode of the proposed multipopulation EAs.
Figure 2: Flowchart of the proposed multipopulation EAs.

The main steps of the proposed multipopulation EAs are further described in detail below.

Step 1 (set parameters). The main parameters of the proposed multipopulation EAs are initialized. They include the parameters of EA paradigms, the maximum number of iterations, the size of population, the number of initial subpopulations, and the maximum number of subpopulations.

Step 2 (initialize the population of solutions). It randomly generates a set of candidate solutions between the lower and upper boundaries.

Step 3 (evaluate the population of solutions). The fitness of the generated solutions is calculated using the objective function.

Step 4 (divide the population). The initial population is divided into multiple subpopulations with the same population size, and each subpopulation is randomly assigned from solutions in the population.

Step 5 (create offspring subpopulations). Different subpopulations are executed independently by EA paradigms to generate their own offspring subpopulations. Note that we can use the same EA for all subpopulations, or a different EA for each subpopulation.

Step 6 (judge diversity and manage subpopulations). If a subpopulation is similar to other subpopulations, directly delete this subpopulation. If it is similar to its parent subpopulation, create a new subpopulation. If individuals are similar in a subpopulation, firstly delete this subpopulation and then create a new subpopulation. For details see Section 2.2.

Step 7 (evaluate offspring subpopulations and check the stopping condition). If the termination criterion is not met, go to Step 5; otherwise, terminate and output the evaluation results. Here the termination criterion is the maximum number of function evaluations.
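Steps 1–7 can be sketched as a single driver loop. Here `evolve_step` stands in for any EA paradigm's one-generation operator (SGA, PBIL, SaDE, PSO2011, ...) and `manage` for the diversity judgement of Step 6; both are assumptions supplied by the caller, and the sketch minimizes a fitness function over a box-bounded continuous space.

```python
import random

def multipopulation_ea(fitness, dim, bounds, evolve_step, manage,
                       num_subpops=3, subpop_size=25, max_evals=10000):
    """Sketch of the main loop of the proposed multipopulation EAs."""
    lo, hi = bounds
    # Steps 1-3: set parameters, initialize randomly, evaluate.
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(num_subpops * subpop_size)]
    evals = len(pop)
    # Step 4: random division into equally sized subpopulations.
    random.shuffle(pop)
    subpops = [pop[i * subpop_size:(i + 1) * subpop_size]
               for i in range(num_subpops)]
    best = min(pop, key=fitness)
    while evals < max_evals:                      # Step 7: stopping check
        parents = subpops
        # Step 5: each subpopulation generates its own offspring.
        subpops = [evolve_step(sp) for sp in subpops]
        evals += sum(len(sp) for sp in subpops)
        # Step 6: diversity judgement creates or deletes subpopulations.
        subpops = manage(subpops, parents)
        best = min([best] + [min(sp, key=fitness) for sp in subpops],
                   key=fitness)
    return best
```

With a toy `evolve_step` that simply contracts solutions toward the origin and an identity `manage`, the loop drives the sphere function toward zero, which shows the control flow is sound.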

4. Experimental Results

In this section the performances of the proposed multipopulation EAs are investigated. Section 4.1 describes the experimental setup, Section 4.2 presents performance comparisons on the 2013 CEC benchmark functions, and Section 4.3 applies the proposed multipopulation EAs to a real-world complex automated warehouse scheduling problem.

4.1. Simulation Setup

The performances of the proposed multipopulation EAs are evaluated on the 28 benchmark functions presented in Table 1, which are from the 2013 Congress on Evolutionary Computation (CEC) [24].

Table 1: 2013 CEC benchmark functions, where the search range of all functions is . More details about these functions can be found in [24].

The popular EAs used in this paper include SGA, PBIL, SaDE, and PSO2011. We select SGA because it is an improvement of the basic GA that uses the best individual in each generation for crossover [25]. We select PBIL because it is the most successful variant of estimation of distribution algorithms [26]. We select SaDE because it is one of the most powerful DE algorithms and has demonstrated excellent performance on many problems [27, 28]. We select PSO2011 because it is popular in the literature and contains improvements gained as a result of many years of PSO studies [29, 30].

The next step is to set the parameters of each multipopulation EA. For SGA we use real coding, roulette-wheel selection, single-point crossover with a crossover probability of 1, and a mutation probability of 0.001. For PBIL, the learning rate and the number of best and worst individuals used to update the probabilities in each generation are set as in [26], and the standard deviation for mutation linearly decreases from 10% of the parameter range at the first generation to 2% at the final generation [26]. The SaDE parameter settings are adapted according to the learning progress [27]: the scaling factor F is randomly sampled from the normal distribution N(0.5, 0.3), and the crossover rate CR follows the normal distribution N(0.5, 0.1). For PSO2011, we use an inertia weight and a cognitive constant, with the social constant for neighborhood interaction set equal to the cognitive constant.
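The SaDE parameter sampling just described can be sketched directly. Two details are our assumptions: the second argument of N(·,·) is read as a standard deviation, and the crossover rate is clipped to [0, 1] after sampling.

```python
import random

def sade_parameters(rng=random):
    """Sample SaDE control parameters as described above:
    F ~ N(0.5, 0.3) and CR ~ N(0.5, 0.1).  Clipping CR to [0, 1]
    and reading the second argument as a standard deviation are
    our assumptions."""
    F = rng.gauss(0.5, 0.3)
    CR = min(1.0, max(0.0, rng.gauss(0.5, 0.1)))
    return F, CR
```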

In addition, we initially use three subpopulations, and the maximum subpopulation number is set to six. Each subpopulation uses the same EA, and the population size of each subpopulation is 25. For fair comparison, the population size of the single-population EAs is set to 150. Each function is evaluated at a fixed dimension under a fixed function evaluation budget. All algorithms are terminated when the maximum number of function evaluations is reached or when the objective function error value falls below 10^-6.

4.2. Performance Comparisons

We simulate each algorithm 25 times on each benchmark, and the results are shown in Tables 2 and 3. The tables show that multipopulation SaDE (M-SaDE) performs best on 18 functions (F4, F6, F7, F9, F10, F11, F12, F13, F14, F15, F16, F18, F19, F20, F21, F22, F26, and F28), and multipopulation PSO2011 (M-PSO2011) performs best on 9 functions (F2, F3, F5, F8, F17, F23, F24, F25, and F27). For function F1, both M-SaDE and M-PSO2011 obtain the global optimal solutions. Furthermore, we briefly consider the types of functions for which the various algorithms are best-suited. Tables 2 and 3 show that the best-performing algorithm on each of the unimodal functions (F1–F5) is always M-PSO2011, and the best-performing algorithm on each of the multimodal functions (F6–F20) is always M-SaDE. We also note from these tables that M-PSO2011 performs as well as M-SaDE on the composition functions (F21–F28). This implies that no single algorithm can be the best for every problem, and, for different optimization problems, different algorithms have their own superiority to obtain the best performance.

Table 2: Comparison of the best error values on the 2013 CEC benchmark functions for SGA and PBIL. In the table, "S-" and "M-" denote the single-population and multipopulation versions of the algorithms, respectively. Each entry gives the mean value and the corresponding standard deviation of 25 independent simulations. Average CPU times (minutes) are shown in the last row of the table.
Table 3: Comparison of the best error values on the 2013 CEC benchmark functions for SaDE and PSO2011. In the table, "S-" and "M-" denote the single-population and multipopulation versions of the algorithms, respectively. Each entry gives the mean value and the corresponding standard deviation of 25 independent simulations. The best result in each row (Tables 2 and 3 combined) is shown in bold font. Average CPU times (minutes) are shown in the last row of the table.

Tables 2 and 3 also show that the total performances of multipopulation EAs, including M-SGA, M-PBIL, M-SaDE, and M-PSO2011, are better than their corresponding single-population EAs. This may be due to the proposed multipopulation management strategy that enhances population diversity to improve optimization performance.

The average running times of all algorithms are shown in the last rows of Tables 2 and 3. Here MATLAB® is used as the programming language, and the computer has a 2.40 GHz Intel Pentium® 4 CPU with 4 GB of memory. We find that the average running times of the multipopulation EAs are less than those of their corresponding single-population algorithms; for example, the average running time of multipopulation SGA (M-SGA) is less than that of single-population SGA (S-SGA). The reason is that the multipopulation EAs use multiple parallel subpopulations to reduce computation time while keeping the same total population size as the corresponding single-population algorithms. Multiple subpopulations are also amenable to parallel processing, which can further reduce computational effort.

In order to further compare the performance of the multipopulation and single-population EAs, we use the Wilcoxon method to test for statistical significance. The Wilcoxon method is a nonparametric statistical test that determines whether differences between groups of data are statistically significant when the assumption that the differences are independent and identically normally distributed is not satisfied [31–33]. The Wilcoxon test results are shown in Table 4, where the pairs are marked if the difference between each pair of algorithms is statistically significant.
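For readers unfamiliar with the test, a minimal Wilcoxon signed-rank statistic for paired samples can be computed as below. This is a sketch only: zero differences are simply dropped, tied absolute differences are not given average ranks, and the error values shown are illustrative numbers, not data from Tables 2 and 3.

```python
def wilcoxon_signed_rank(a, b):
    """Minimal Wilcoxon signed-rank statistic for paired samples
    (no tie or zero-difference correction)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    # Ranks start at 1, smallest absolute difference first.
    w_plus = sum(r + 1 for r, i in enumerate(order) if diffs[i] > 0)
    w_minus = sum(r + 1 for r, i in enumerate(order) if diffs[i] < 0)
    return min(w_plus, w_minus)

# Hypothetical per-function mean errors of a single-population and a
# multipopulation variant on 8 benchmarks (illustrative values only).
single = [320.0, 15.0, 4.8, 2200.0, 91.0, 6.4, 770.0, 55.0]
multi = [110.0, 14.0, 0.9, 800.0, 88.0, 2.0, 310.0, 49.0]
w = wilcoxon_signed_rank(single, multi)
# The statistic is compared against a tabulated critical value; for
# n = 8 pairs at the two-sided 5% level this is commonly 3, so small
# w indicates a statistically significant difference.
```

In practice, `scipy.stats.wilcoxon` provides the full test with tie handling and p-values.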

Table 4: Wilcoxon test results for pairwise algorithm comparisons. If the difference between the algorithms is statistically significant, the pairs are marked as follows: “x-o” shows that the left algorithm is better than the right one; “o-x” shows that the right algorithm is better than the left one. The B/S/W row at the bottom shows the total scores, where “B” denotes the number of times the left algorithm performs better, “S” denotes the number of times the two algorithms perform the same, and “W” denotes the number of times the left algorithm performs worse than the right one.

The results in Table 4 are divided into S-SGA versus M-SGA group, S-PBIL versus M-PBIL group, S-SaDE versus M-SaDE group, and S-PSO2011 versus M-PSO2011 group. For each pair of algorithms we calculate B/S/W scores, where “B” denotes the number of times that the left algorithm performs better than the right one, “S” denotes the number of times that the left algorithm performs the same as the right one (statistically speaking), and “W” denotes the number of times that the left algorithm performs worse than the right one.

Table 4 shows that, for S-SGA versus M-SGA, the B/S/W score is 1/7/20: S-SGA outperforms M-SGA once, the two are statistically the same seven times, and M-SGA outperforms S-SGA twenty times. For S-PBIL versus M-PBIL, the B/S/W score is 2/3/23: S-PBIL outperforms M-PBIL twice, the two are statistically the same three times, and M-PBIL outperforms S-PBIL twenty-three times. For S-SaDE versus M-SaDE, the B/S/W score is 0/5/23: S-SaDE never outperforms M-SaDE, the two are statistically the same five times, and M-SaDE outperforms S-SaDE twenty-three times. For S-PSO2011 versus M-PSO2011, the B/S/W score is 1/5/22: S-PSO2011 outperforms M-PSO2011 once, the two are statistically the same five times, and M-PSO2011 outperforms S-PSO2011 twenty-two times. These results show that the multipopulation versions of these algorithms are significantly better than their single-population versions on the CEC 2013 benchmark functions, which further verifies the conclusions drawn from Tables 2 and 3. The statistical results show that the proposed multipopulation management strategy is a good way to improve the optimization performance of EAs. The reason for its competitive performance is that the multipopulation EAs effectively manage the number of subpopulations in the different phases of evolution, which maintains population diversity far better than single-population EAs.

4.3. Application to Complex Warehouse Scheduling Problems

In this section, the proposed multipopulation EAs are applied to the real-world complex automated warehouse scheduling problems described in [34], which are formulated as constrained single-objective optimization problems. Warehouse scheduling in the supply chain is challenging because it is a proven NP-hard problem, one of the most challenging classes of combinatorial optimization [35, 36]. The layout of the warehouse system we study is shown in Figure 3; details of the cost function, constraints, and parameter settings of the warehouse scheduling problem can be found in [34].

Figure 3: Layout of the warehouse system.

In this experiment, we use the same single-population and multipopulation EAs as in the benchmark simulations. The constraint-handling method is based on the feasibility rules of Deb [37], which have demonstrated promising performance in dealing with constraints. We consider five simulation schemes. Scheme 1 (20 storage products and 20 retrieval products) is described as follows:

p1: (12, 6, 2, 50, 1), p2: (51, 8, 1, 45, 1), p3: (40, 3, 5, 46, 1), p4: (37, 3, 4, 113, 1),
p5: (14, 7, 1, 40, 1), p6: (13, 5, 2, 50, 1), p7: (45, 5, 5, 50, 1), p8: (18, 7, 3, 33, 1),
p9: (7, 3, 2, 35, 1), p10: (11, 4, 3, 110, 1), p11: (15, 2, 1, 34, 1), p12: (36, 8, 4, 40, 1),
p13: (8, 6, 5, 47, 1), p14: (56, 2, 3, 34, 1), p15: (42, 8, 3, 30, 1), p16: (23, 4, 1, 50, 1),
p17: (4, 6, 2, 50, 1), p18: (13, 1, 1, 36, 1), p19: (39, 6, 4, 42, 1), p20: (45, 6, 5, 55, 1),
p21: (50, 8, 1, 67, 2), p22: (3, 1, 2, 74, 2), p23: (55, 4, 3, 68, 2), p24: (6, 7, 4, 85, 2),
p25: (17, 4, 5, 74, 2), p26: (60, 7, 3, 80, 2), p27: (35, 6, 2, 67, 2), p28: (15, 2, 1, 70, 2),
p29: (57, 4, 2, 80, 2), p30: (2, 1, 4, 62, 2), p31: (25, 8, 2, 76, 2), p32: (5, 2, 4, 64, 2),
p33: (17, 5, 3, 76, 2), p34: (41, 2, 5, 91, 2), p35: (19, 3, 2, 70, 2), p36: (20, 1, 1, 82, 2),
p37: (42, 6, 2, 88, 2), p38: (32, 6, 3, 75, 2), p39: (9, 7, 1, 82, 2), p40: (58, 5, 4, 69, 2)

In Scheme 1, there are 20 storage products (p1–p20) and 20 retrieval products (p21–p40). Each product pi is denoted by a tuple (x, y, z, w, t), where x, y, and z are the Euclidean coordinates of its storage unit, w is a weight coefficient that affects the scheduling quality, and t = 1 indicates a storage product while t = 2 indicates a retrieval product.
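The scheme data can be encoded directly as 5-tuples. The sketch below is illustrative only: the actual cost function of [34] is not reproduced here, so a plain Euclidean travel distance between consecutive storage units stands in for it, and the type and function names are assumptions, not the paper's notation.

```python
import math
from typing import NamedTuple

class Product(NamedTuple):
    x: int; y: int; z: int   # Euclidean coordinates of the storage unit
    w: int                   # weight coefficient affecting scheduling quality
    kind: int                # 1 = storage product, 2 = retrieval product

# First few entries of Scheme 1
p1 = Product(12, 6, 2, 50, 1)
p2 = Product(51, 8, 1, 45, 1)
p21 = Product(50, 8, 1, 67, 2)

def dist(a: Product, b: Product) -> float:
    """Euclidean distance between two storage units."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def route_length(route):
    """Total travel distance along an ordered route of products."""
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

print(round(route_length([p1, p2, p21]), 2))
```
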

Scheme 2 (20 storage, 16 retrieval products) includes all the storage products of Scheme 1 but only its first 16 retrieval products. Scheme 3 (20 storage, 12 retrieval products) includes all the storage products of Scheme 1 but only its first 12 retrieval products. Scheme 4 (16 storage, 20 retrieval products) includes the first 16 storage products of Scheme 1 and all of its retrieval products. Scheme 5 (12 storage, 20 retrieval products) includes the first 12 storage products of Scheme 1 and all of its retrieval products.
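The constraint-handling comparison based on the feasibility rules of Deb [37] is easy to state in code. The sketch below follows the standard formulation of those rules (a feasible solution beats an infeasible one; two feasible solutions compare by objective value; two infeasible ones by total constraint violation); the function names are illustrative and not taken from [34].

```python
def total_violation(constraints):
    """Sum of positive parts of inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in constraints)

def deb_better(f_a, g_a, f_b, g_b):
    """Return True if solution A is preferred over B under Deb's rules.

    f_*: objective value (minimization); g_*: list of constraint values
    for constraints of the form g_i(x) <= 0.
    """
    va, vb = total_violation(g_a), total_violation(g_b)
    if va == 0 and vb == 0:      # both feasible: lower objective wins
        return f_a < f_b
    if va == 0 or vb == 0:       # exactly one feasible: it wins
        return va == 0
    return va < vb               # both infeasible: smaller violation wins

# A feasible solution beats an infeasible one even with a worse objective
print(deb_better(10.0, [-1.0], 1.0, [0.5]))   # → True
```
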

Optimization results of the single-population and multipopulation EAs are summarized in Table 5. The table shows that M-SaDE performs best on all five schemes, achieving the lowest scheduling quality effect. Furthermore, the multipopulation EAs perform significantly better than the single-population EAs on all schemes of the warehouse scheduling problem, which again verifies the conclusions drawn from the benchmark function tests.

Table 5: Optimization results for the 5 schemes of the warehouse scheduling problem. Each entry gives the mean value and corresponding standard deviation of the scheduling quality effect. The best results in each row are shown in boldface font.

A sample scheduling route produced by multipopulation SaDE for Scheme 1 is shown in Table 6. The scheduling solution is divided into 5 routes: machine 1 implements routes 1 and 2, and machine 2 implements routes 3, 4, and 5. Each route visits 8 storage units, where the first 4 are used to store products and the last 4 are used to retrieve products.
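The route structure described above (five routes of 4 storage stops followed by 4 retrieval stops, with machine 1 running routes 1–2 and machine 2 running routes 3–5) can be sketched as a decoding step. This is purely illustrative; the actual solution encoding used in [34] may differ.

```python
def decode_routes(storage_ids, retrieval_ids,
                  route_machine=(1, 1, 2, 2, 2), per_route=4):
    """Split ordered storage/retrieval product indices into routes.

    Each route takes `per_route` storage products followed by
    `per_route` retrieval products; route_machine maps route -> machine.
    """
    routes = []
    for r, machine in enumerate(route_machine):
        s = storage_ids[r * per_route:(r + 1) * per_route]
        t = retrieval_ids[r * per_route:(r + 1) * per_route]
        routes.append({"machine": machine, "route": r + 1, "orders": s + t})
    return routes

# Scheme 1 has 20 storage products (p1-p20) and 20 retrieval products (p21-p40)
routes = decode_routes(list(range(1, 21)), list(range(21, 41)))
print(routes[0])
```
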

Table 6: Scheduling orders as optimized by multipopulation SaDE. “Machine” denotes the machine index, “Route” denotes the route index, and “Scheduling orders” lists the orders that the machine implements on that route.

5. Conclusions

In this paper, we first propose a management strategy for the number of subpopulations based on individual distance and population distance; it dynamically increases or decreases the subpopulation number during the evolution process to maintain population diversity. Then we integrate the proposed multipopulation management strategy into several EAs, including SGA, PBIL, SaDE, and PSO2011, to develop new multipopulation EAs. Next, the proposed multipopulation EAs are tested on the CEC benchmark functions; empirical results show that any single-population EA can easily be extended to a multipopulation EA by the proposed strategy, and that the resulting multipopulation methods obtain better optimization performance than their single-population counterparts. Finally, these multipopulation EAs are used to solve real-world complex automated warehouse scheduling problems, and experimental results show that they obtain satisfactory solutions.

In future research, at least three directions are envisioned. First, the proposed multipopulation management strategy has been integrated into several EAs and has improved their optimization performance; it could be extended to further EAs and swarm intelligence algorithms, for example, the fireworks algorithm (FWA) [38, 39], the brain storm optimization (BSO) algorithm [40, 41], the teaching-learning-based optimization (TLBO) algorithm [42–44], and the Jaya optimization algorithm [45]. Second, this paper does not consider communication between subpopulations. Undoubtedly, the interaction of subpopulations is important for enhancing optimization performance through information sharing between subpopulations, so how to implement their adaptive interaction is an additional direction for future study. Third, solving real-world application problems is the perpetual goal of EAs, and it would be fruitful to apply the proposed multipopulation EAs to various complex real-world problems. More applications of the proposed multipopulation EAs can be expected in the near future with focused research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. H. Ma and D. Simon, Evolutionary Computation with Biogeography-Based Optimization, John Wiley & Sons, Hoboken, NJ, USA, 2017.
  2. Z. Yang, K. Li, Q. Niu, and Y. Xue, “A comprehensive study of economic unit commitment of power systems integrating various renewable generations and plug-in electric vehicles,” Energy Conversion and Management, vol. 132, pp. 460–481, 2017.
  3. H. Ma, S. Ye, D. Simon, and M. Fei, “Conceptual and numerical comparisons of swarm intelligence optimization algorithms,” Soft Computing, vol. 21, no. 11, pp. 3081–3100, 2017.
  4. X. Li, J. Lai, and R. Tang, “A hybrid constraints handling strategy for multiconstrained multiobjective optimization problem of microgrid economical/environmental dispatch,” Complexity, Article ID 6249432, 12 pages, 2017.
  5. Z. Yang, K. Li, Q. Niu, and Y. Xue, “A novel parallel-series hybrid meta-heuristic method for solving a hybrid unit commitment problem,” Knowledge-Based Systems, vol. 134, pp. 13–30, 2017.
  6. P. Siarry, A. Pétrowski, and M. Bessaou, “A multipopulation genetic algorithm aimed at multimodal optimization,” Advances in Engineering Software, vol. 33, no. 4, pp. 207–213, 2002.
  7. J. Alami, A. E. Imrani, and A. Bouroumi, “A multipopulation cultural algorithm using fuzzy clustering,” Applied Soft Computing, vol. 7, no. 2, pp. 506–519, 2007.
  8. W.-D. Chang, “A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems,” Applied Soft Computing, vol. 33, pp. 170–182, 2015.
  9. S. K. Nseef, S. Abdullah, A. Turky, and G. Kendall, “An adaptive multi-population artificial bee colony algorithm for dynamic optimisation problems,” Knowledge-Based Systems, vol. 104, pp. 14–23, 2016.
  10. G. Wu, R. Mallipeddi, P. N. Suganthan, R. Wang, and H. Chen, “Differential evolution with multi-population based ensemble of mutation strategies,” Information Sciences, vol. 329, pp. 329–345, 2016.
  11. H. Pourvaziri and B. Naderi, “A hybrid multi-population genetic algorithm for the dynamic facility layout problem,” Applied Soft Computing, vol. 24, pp. 457–469, 2014.
  12. B. Niu, H. Huang, L. Tan, and Q. Duan, “Symbiosis-based alternative learning multi-swarm particle swarm optimization,” IEEE Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 4–14, 2017.
  13. J. Xiao, W. Li, B. Liu, and P. Ni, “A novel multi-population coevolution immune optimization algorithm,” Soft Computing, 2015.
  14. A. M. Turky and S. Abdullah, “A multi-population harmony search algorithm with external archive for dynamic optimization problems,” Information Sciences, vol. 272, pp. 84–95, 2014.
  15. N. S. Jaddi, S. Abdullah, and A. Hamdan, “Multi-population cooperative bat algorithm-based optimization of artificial neural network model,” Information Sciences, vol. 294, pp. 628–644, 2015.
  16. F. B. Ozsoydan and A. Baykasoğlu, “A multi-population firefly algorithm for dynamic optimization problems,” in Proceedings of the IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS 2015), France, December 2015.
  17. G. Mauša and T. Galinac Grbac, “Co-evolutionary multi-population genetic programming for classification in software defect prediction: an empirical case study,” Applied Soft Computing, vol. 55, pp. 331–351, 2017.
  18. C. Li, T. T. Nguyen, M. Yang, S. Yang, and S. Zeng, “Multi-population methods in unconstrained continuous dynamic environments: the challenges,” Information Sciences, vol. 296, no. 1, pp. 95–118, 2015.
  19. R. Shang, Y. Wang, J. Wang, L. Jiao, S. Wang, and L. Qi, “A multi-population cooperative coevolutionary algorithm for multi-objective capacitated arc routing problem,” Information Sciences, vol. 277, pp. 609–642, 2014.
  20. H. Cheng, S. Yang, and X. Wang, “Immigrants-enhanced multi-population genetic algorithms for dynamic shortest path routing problems in mobile ad hoc networks,” Applied Artificial Intelligence, vol. 26, no. 7, pp. 673–695, 2012.
  21. C. Li and S. Yang, “A clustering particle swarm optimizer for dynamic optimization,” in Proceedings of the 11th IEEE Congress on Evolutionary Computation (CEC '09), pp. 439–446, IEEE Press, Piscataway, NJ, USA, May 2009.
  22. X. P. Wang and L. X. Tang, “An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization,” Information Sciences, vol. 348, pp. 124–141, 2016.
  23. H. Kwasnicka and M. Gierusz, “Managing of cooperative genetic algorithms by intelligent agent,” in Proceedings of the 5th International Conference on Intelligent Systems Design and Applications (ISDA '05), pp. 564–569, Poland, September 2005.
  24. J. J. Liang, B. Y. Qu, P. N. Suganthan, and A. G. Hernandez-Diaz, “Problem definitions and evaluation criteria for the CEC 2013 special session and competition on real-parameter optimization,” Technical Report, Nanyang Technological University, Singapore, pp. 1–39, 2013.
  25. W. Khatib and P. J. Fleming, “The stud GA: a mini revolution?” in Parallel Problem Solving from Nature, pp. 683–691, Springer, New York, NY, USA, 1998.
  26. P. Larrañaga and J. A. Lozano, Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Kluwer Academic Publishers, Boston, MA, USA, 2002.
  27. S.-Z. Zhao, P. N. Suganthan, and S. Das, “Self-adaptive differential evolution with multi-trajectory search for large-scale optimization,” Soft Computing, vol. 15, no. 11, pp. 2175–2185, 2011.
  28. S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
  29. M. Omran and M. Clerc, Particle Swarm Central, http://www.particleswarm.info/, 2011.
  30. K. Yu, X. Wang, and Z. Wang, “Multiple learning particle swarm optimization with space transformation perturbation and its application in ethylene cracking furnace optimization,” Knowledge-Based Systems, 2015.
  31. M. M. Al-Rifaie and T. Blackwell, “Bare bones particle swarms with jumps,” Lecture Notes in Computer Science, vol. 7461, pp. 49–60, 2012.
  32. J. Demsar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
  33. J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 3–18, 2011.
  34. H. Ma, S. Su, D. Simon, and M. Fei, “Ensemble multi-objective biogeography-based optimization with application to automated warehouse scheduling,” Engineering Applications of Artificial Intelligence, vol. 44, pp. 79–90, 2015.
  35. J.-P. Gagliardi, J. Renaud, and A. Ruiz, “Models for automated storage and retrieval systems: a literature review,” International Journal of Production Research, vol. 50, no. 24, pp. 7110–7125, 2012.
  36. A. T. Almaktoom, “Stochastic reliability measurement and design optimization of an inventory management system,” Complexity, Article ID 1460163, 9 pages, 2017.
  37. K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2–4, pp. 311–338, 2000.
  38. Y. Tan and Y. Zhu, “Fireworks algorithm for optimization,” Lecture Notes in Computer Science, vol. 6145, pp. 355–364, 2010.
  39. S. Ye, H. Ma, S. Xu, W. Yang, and M. Fei, “An effective fireworks algorithm for warehouse-scheduling problem,” Transactions of the Institute of Measurement and Control, vol. 39, no. 1, pp. 75–85, 2017.
  40. S. Cheng, Q. Qin, J. Chen, and Y. Shi, “Brain storm optimization algorithm: a review,” Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.
  41. Y. H. Shi, “Brain storm optimization algorithm,” in Proceedings of the International Conference on Swarm Intelligence, vol. 6728, pp. 303–309, Springer, Heidelberg, Germany, 2011.
  42. Z. Yang, K. Li, Q. Niu, Y. Xue, and A. Foley, “A self-learning teaching-learning based optimization for dynamic economic/environmental dispatch considering multiple plug-in electric vehicle loads,” Journal of Modern Power Systems and Clean Energy, vol. 2, no. 4, pp. 298–307, 2014.
  43. K. Yu, X. Wang, and Z. Wang, “An improved teaching-learning-based optimization algorithm for numerical and engineering optimization problems,” Journal of Intelligent Manufacturing, vol. 27, no. 4, pp. 831–843, 2016.
  44. K. Yu, X. Chen, X. Wang, and Z. Wang, “Parameters identification of photovoltaic models using self-adaptive teaching-learning-based optimization,” Energy Conversion and Management, vol. 145, pp. 233–246, 2017.
  45. K. Yu, J. Liang, B. Qu, X. Chen, and H. Wang, “Parameters identification of photovoltaic models using an improved JAYA optimization algorithm,” Energy Conversion and Management, vol. 150, pp. 742–753, 2017.