Abstract

Multipopulation is an effective optimization strategy that is often used in evolutionary algorithms (EAs) to improve optimization performance. However, it is remarkably difficult to determine the number of subpopulations for a given problem during the evolution process, which may significantly affect the optimization ability of EAs. This paper proposes a simple multipopulation management strategy to dynamically adjust the subpopulation number in different phases throughout the evolution. The proposed method makes use of individual distances within a subpopulation as well as population distances between subpopulations to determine the subpopulation number, which is essential for maintaining population diversity and enhancing exploration ability. Furthermore, the proposed multipopulation management strategy is embedded into popular EAs to solve real-world complex automated warehouse scheduling problems. Experimental results show that the proposed multipopulation EAs can easily be implemented and outperform regular single-population algorithms to a large extent.

1. Introduction

Evolutionary algorithms (EAs) are fast and robust computational methods for global optimization and have been widely applied to numerous real-world problems [1–5]. In recent years, the concept of multipopulation has been frequently discussed and used to improve the optimization performance of EAs. In this regard, the original population is first divided into several small subpopulations for special purposes, such as large-scale problems and dynamic optimization problems. Then, particular evolution mechanisms and operations, for example, crossover and mutation in genetic algorithms (GAs), are executed. The purpose of multipopulation is to maintain population diversity and enhance exploration ability, which are crucial factors in avoiding premature convergence when handling optimization problems.

Existing studies on multipopulation demonstrate that this strategy has become one of the most effective methods for enhancing EA performance [6, 7]. The key reasons can be categorized as follows: it divides the overall population into multiple subpopulations, so that population diversity can be maintained because different subpopulations can be located in different search domains; it is able to search different areas simultaneously, allowing the separated populations to rapidly find optimal solutions; and various population-based EAs can be quickly and easily integrated with multipopulation methods. In the work of Chang [8], a modified particle swarm optimization (PSO) with multiple subpopulations was applied to multimodal function optimization problems. Simulation results on complex multimodal functions showed that the global and local optima were located by the best particles of the subpopulations. Nseef et al. [9] proposed an adaptive multipopulation artificial bee colony (ABC) algorithm for dynamic optimization problems, where multiple subpopulations were used to cope with dynamic changes and to maintain diversity. Experimental results showed that the proposed multipopulation ABC was superior to the regular ABC on all tested instances. In the work of Wu et al. [10], differential evolution (DE) with a multipopulation-based ensemble of mutation strategies was used to optimize benchmark functions, where each subpopulation performed one mutation strategy. Experimental comparisons showed the competitive performance of the proposed method. A hybrid multipopulation GA was employed in [11] for the dynamic facility layout problem, where the entire solution space was separated into different parts and each subpopulation represented a separate part. Simulation results showed that the proposed algorithm was superior to other algorithms. Niu et al. [12] proposed a symbiosis-based alternative learning multi-swarm PSO algorithm, in which the communication between subpopulations used a learning method to select one exemplar out of the center position, the local best position, and the historical best position, including the experience of the internal and external subpopulations, in order to keep the population diverse. The experimental results exhibited better performance in terms of convergence speed and optimality. Xiao et al. [13] presented a novel multipopulation coevolution immune optimization algorithm (IOA) for most of the existing multimodal benchmarks, where coevolution of three subpopulations was promoted through a self-adjusted clone operator to enhance exploration and exploitation. The authors showed that their method outperformed three known immune algorithms and several other EAs. The introduction of external archiving into a multipopulation harmony search (HS) algorithm to solve dynamic optimization problems was presented by Turky and Abdullah [14]. The results on moving peak benchmarks showed that their modified version was better than the original harmony search algorithm. A multipopulation cooperative bat algorithm (BA) was used in [15] to train an artificial neural network model, which mainly depends on the connection weights and network structure. Experimental results showed a significant improvement when applying the proposed algorithm to all test cases. Ozsoydan and Baykasoǧlu [16] employed a multipopulation firefly algorithm (FA) to tackle dynamic optimization problems. The experiments on moving peak benchmarks showed that the proposed algorithm significantly improved performance. Mauša and Galinac Grbac [17] proposed a coevolutionary multipopulation genetic programming (GP) approach that combined colonization and migration with three ensemble selection strategies for classification in software defect prediction. Computational results demonstrated the efficiency of the proposed method.

Although multipopulation methods have shown success in solving optimization problems, most of them use a preset, constant number of subpopulations during the optimization process [18]. The subpopulation number has an important impact on the performance of multipopulation EAs, because it is related to the difficulty of the problem, which is not known in advance. For a given problem, an EA may need different subpopulation numbers during different phases of the search process. For example, an algorithm with many subpopulations may search effectively during the initial phase of the optimization process, whereas an algorithm with a few subpopulations may search better during the later phase. Therefore, it is potentially beneficial to dynamically manage the number of subpopulations during the evolution process based on the difficulty of the problem. This may produce outstanding results without requiring dedicated evolution operators. Nonetheless, in most of the multipopulation EA literature [19, 20], the dynamic management of the subpopulation number is rarely mentioned.

In order to address this issue, this paper proposes a multipopulation management strategy to improve EA optimization performance. The proposed method uses a few simple rules to dynamically manage the subpopulation number so as to maintain population diversity. To verify its effectiveness, the proposed multipopulation management strategy is embedded into various popular algorithms to construct multipopulation EAs. These algorithms are then tested on a set of CEC benchmark functions and further applied to real-world complex automated warehouse scheduling problems. Experimental results show that the proposed multipopulation management strategy helps EAs obtain excellent results and outperform several state-of-the-art single-population algorithms from the literature.

The original contributions of this paper are as follows: a dynamic management strategy for the subpopulation number is proposed to boost population diversity while preserving the simplicity of EAs; the proposed multipopulation management strategy is embedded into several popular EAs, including the stud genetic algorithm (SGA), population-based incremental learning (PBIL), self-adaptive differential evolution (SaDE), and standard particle swarm optimization (PSO2011); and the optimization ability of these multipopulation EAs is investigated on a set of benchmark functions and further demonstrated on real-world automated warehouse scheduling problems.

The remainder of the paper is organized as follows. Section 2 gives a detailed description of the multipopulation management strategy. Section 3 discusses the integration of the multipopulation management strategy with popular EAs. Section 4 compares the performance of several EAs in conjunction with the multipopulation management strategy on benchmark functions and complex warehouse scheduling problems. Section 5 provides conclusions and suggestions for future research.

2. Multipopulation Management Strategy

This section presents the basic challenges for multipopulation EAs, as well as our proposed dynamic management for the subpopulation number.

2.1. Challenges for Multipopulation Methods

In EAs, diversity refers to differences among candidate solutions. As noted in much of the EA literature, evolutionary progress fundamentally relies on variation within the population, and high population diversity contributes greatly to EA performance. Diversity loss often results in premature convergence, because EAs become trapped in local optima and lack the population diversity needed to escape. The crucial point for EAs solving optimization problems is therefore how to maintain population diversity. In recent years, multipopulation methods have been introduced into EAs and have become one of the most successful approaches to improving optimization performance, because such methods keep candidate solutions scattered over the entire search space. This feature helps population-based EAs to quickly reach globally optimal solutions.

To make multipopulation methods more efficient, several crucial challenges in algorithm design need to be addressed. The first question is how to determine the number of subpopulations. Too many subpopulations distributed over the search space may waste the limited computational resources, whereas too few subpopulations may limit the effect of the multipopulation strategy. The second question is how to determine the search area of each subpopulation. If the search area of a subpopulation is too small, the small isolated subpopulation may converge to a local optimum; in this case, diversity is lost and the algorithm can hardly make any progress. On the contrary, if the search area of a subpopulation is too large, it is almost equal to the search area of the original population, and it is very hard to obtain better optimization performance than with the regular algorithms. The third question is how subpopulations should communicate. Many researchers believe that communication between subpopulations is very helpful during optimization because information can be shared among subpopulations, which accelerates the search process and may also lead to promising solutions. In current studies, the communication between subpopulations is typically controlled by four parameters: (i) a communication rate that defines the number of solutions in a subpopulation to be sent to other subpopulations; (ii) a communication policy that determines which solutions are to be replaced by ones from other subpopulations; (iii) a communication interval that sets the frequency of communication; and (iv) a connection topology that defines how subpopulations are connected.
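As an illustration only (the field names in this Python sketch are ours, not taken from the cited studies), these four communication parameters could be grouped into a single configuration object:

```python
from dataclasses import dataclass

@dataclass
class CommunicationConfig:
    """Hypothetical container for the four subpopulation-communication parameters."""
    rate: int        # (i) number of solutions a subpopulation sends to the others
    policy: str      # (ii) which local solutions get replaced, e.g. "worst" or "random"
    interval: int    # (iii) communicate every `interval` generations
    topology: str    # (iv) connection topology, e.g. "ring" or "fully_connected"

# Example: migrate the 2 best solutions every 10 generations over a ring topology.
comm = CommunicationConfig(rate=2, policy="worst", interval=10, topology="ring")
```

Collecting the parameters in one structure makes it easy to study the effect of each one in isolation.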

For these challenges, most existing multipopulation methods simply use predefined values, based on empirical experience, to determine the parameter settings of the subpopulations. Some other studies assume that certain information about the optimization problem is known; in these cases, problem information can be used to guide the configuration of the multipopulation parameters. For most cases, however, these challenging issues need to be explored in depth in order to develop excellent multipopulation EAs.

For the first challenge mentioned above, a good approach is to use an appropriate number of subpopulations to maintain population diversity. There are two ways to do this. The first way is to use a fixed number of subpopulations; most current multipopulation methods fall into this group. The advantage of this approach is that it is simple to implement: we only need to create a fixed number of subpopulations in advance for the optimization problem. Generally speaking, the more peaks the fitness function has, the more subpopulations are needed. However, it is not practical to obtain the number of peaks of the fitness function for real problems. In addition, the distribution and shape of the fitness function may also play a role in configuring the subpopulation number. The second way is to use a variable number of subpopulations. The difficulty here is deciding when the number of subpopulations should be increased or decreased. To maintain population diversity, the subpopulation number may need to differ between states of the evolution process. For example, in the early stages a large number of subpopulations is needed so that candidate solutions can scatter over the entire search space, which leads to high population diversity. In the later stages, however, a small number of subpopulations favors reducing diversity so as to converge quickly to the global best solutions. The dynamic change of the subpopulation number should therefore be in line with the population diversity. In many previous studies [21, 22], a main population is split into multiple parts to increase the subpopulation number, and a set of small subpopulations is merged into a main population to decrease it. The common characteristic of these methods is that they often adopt a specific clustering method to carry out these operations. Therefore, additional knowledge about clustering is needed, which increases the complexity of the algorithms; this is undesirable for solving complex problems with limited computational resources.

Undoubtedly, it is a good idea to dynamically manage the number of subpopulations during the evolution process without introducing additional complex mechanisms or requiring advance knowledge of the difficulty of the optimization problem.

2.2. Ways to Manage Subpopulation Number

This part introduces a simple and effective multipopulation management strategy to dynamically increase or decrease the subpopulation number. Compared with other multipopulation methods, the proposed method focuses on the challenge of determining the number of subpopulations.

In managing the subpopulation number, four basic rules are considered:
(1) The maximum number of subpopulations is limited in order to prevent the computational burden of too many coevolving subpopulations.
(2) The subpopulation number decreases when we merge similar subpopulations or delete existing subpopulations.
(3) The subpopulation number increases when we create new subpopulations or divide an existing subpopulation.
(4) The interaction between subpopulations is not considered, so that it does not confound the investigation of the multipopulation management strategy itself.

Based on the above basic rules, two open problems need to be solved in the proposed multipopulation management strategy. First, what is the best condition for increasing or decreasing the subpopulation number? Before answering this question, recall that the purpose of multipopulation methods is to maintain population diversity, so an effective approach is to link the control condition for the subpopulation number to the population diversity. Quantitatively, we need simple approaches to measure diversity. Euclidean distance is probably the most widely used diversity measure. The Euclidean distance between two solutions $X_a$ and $X_b$ can be calculated as
$$d(X_a, X_b) = \sqrt{\sum_{i=1}^{D} \left(x_{a,i} - x_{b,i}\right)^{2}}, \quad (1)$$
where $x_{a,i}$ and $x_{b,i}$ denote the $i$th solution variables of the solutions $X_a$ and $X_b$, respectively, and $D$ is the number of solution variables.
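As a minimal sketch, assuming real-valued solutions stored as NumPy vectors, (1) can be computed as follows:

```python
import numpy as np

def individual_distance(x_a: np.ndarray, x_b: np.ndarray) -> float:
    """Euclidean distance between two candidate solutions, as in (1)."""
    return float(np.sqrt(np.sum((x_a - x_b) ** 2)))

# Example: two solutions with three variables each.
print(individual_distance(np.array([1.0, 2.0, 3.0]), np.array([4.0, 6.0, 3.0])))  # 5.0
```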

Equation (1) measures the individual distance between two solutions. In the multipopulation method, we further need to consider the population distance between two subpopulations. To calculate the population distance, the concept of the average population is introduced, which is the vector of the average values of all solutions in the same population. We use $\bar{X}$ to represent the average population, calculated as
$$\bar{X} = \frac{1}{N}\sum_{j=1}^{N} X_j, \quad (2)$$
where $N$ is the size of the population.

The corresponding standard deviation of the average population is calculated as
$$\sigma = \sqrt{\frac{1}{N}\sum_{j=1}^{N} d\left(X_j, \bar{X}\right)^{2}}. \quad (3)$$
Then, we define the population distance between two subpopulations $P_a$ and $P_b$ as
$$D(P_a, P_b) = d\left(\bar{X}_a, \bar{X}_b\right) - k\left(\sigma_a + \sigma_b\right), \quad (4)$$
where $d(\bar{X}_a, \bar{X}_b)$ is the Euclidean distance between the average populations $\bar{X}_a$ and $\bar{X}_b$, and $\sigma_a$ and $\sigma_b$ are the standard deviations of subpopulations $P_a$ and $P_b$, respectively. Equation (4) reflects the fact that the population distance between two subpopulations depends on the Euclidean distance between their average populations and on the solution distribution within each subpopulation. If $D(P_a, P_b)$ is small, the areas occupied by subpopulations $P_a$ and $P_b$ overlap, which means that they are highly similar. The coefficient $k$ is set to 2 on the basis of the normal distribution [23], according to which about 95% of solutions lie within two standard deviations of the average population.
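Building on the sketch above, and on our reading of (2)–(4) (the exact form of the discount term in (4) follows our reconstruction, so treat this as an illustrative sketch rather than the authors' reference code), the population distance could be computed as:

```python
import numpy as np

def population_distance(pop_a: np.ndarray, pop_b: np.ndarray, k: float = 2.0) -> float:
    """Population distance in the spirit of (4): the Euclidean distance between the two
    average populations, discounted by k standard deviations of each subpopulation.
    pop_a, pop_b have shape (num_individuals, num_variables)."""
    mean_a, mean_b = pop_a.mean(axis=0), pop_b.mean(axis=0)            # average populations, (2)
    sigma_a = np.sqrt(np.mean(np.sum((pop_a - mean_a) ** 2, axis=1)))  # spread around the mean, (3)
    sigma_b = np.sqrt(np.mean(np.sum((pop_b - mean_b) ** 2, axis=1)))
    return float(np.linalg.norm(mean_a - mean_b) - k * (sigma_a + sigma_b))

# A small (or negative) value indicates that the two subpopulations overlap.
```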

Next, we consider several diversity cases, which are defined as follows:
(1) the diversity between individuals in the same subpopulation;
(2) the diversity between a subpopulation and other subpopulations;
(3) the diversity between the current subpopulation and its parent subpopulation.

Note that the first case is taken as individual diversity; the second and third cases are taken as population diversity.

Once a method of measuring diversity is available, only a threshold value $\varepsilon$ needs to be set to decide when to increase or decrease the subpopulation number. For individuals in the same subpopulation, if any individual distance exceeds $\varepsilon$, we regard the individuals as different; if all individual distances are no greater than $\varepsilon$, they are considered similar. In the same way, for any two subpopulations, if their population distance exceeds $\varepsilon$, they are regarded as diverse; otherwise, they are regarded as similar. Note that similarity is the opposite of diversity, and $\varepsilon$ can be a user-defined parameter or an adaptive parameter.
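A possible encoding of these similarity rules, with eps standing for the threshold $\varepsilon$ and reusing the population_distance sketch given after (4) above, is:

```python
import numpy as np
from itertools import combinations

def individuals_all_similar(subpop: np.ndarray, eps: float) -> bool:
    """True if every pairwise individual distance within the subpopulation is at most eps."""
    return all(np.linalg.norm(a - b) <= eps for a, b in combinations(subpop, 2))

def subpopulations_similar(pop_a: np.ndarray, pop_b: np.ndarray, eps: float) -> bool:
    """True if the population distance (4) between two subpopulations is at most eps."""
    return population_distance(pop_a, pop_b) <= eps   # sketched after (4) above
```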

Second, how do we increase or decrease the subpopulation number during the evolution process? That is, how do we create new subpopulations or delete redundant ones? Based on the diversity concepts mentioned above, the multipopulation management strategy can be roughly divided into three cases. In the first case, when all individuals in a given subpopulation are similar, we randomly preserve one of the individuals (because they are almost identical) and delete the subpopulation. Meanwhile, we create a new subpopulation consisting of the following individuals: 1/3 are copies of the preserved individual, 1/3 are taken from the best individuals in the entire population, and 1/3 are randomly generated. In the second case, when a subpopulation is similar to other subpopulations within the whole population, we simply delete it to save computational resources. In the third case, when a subpopulation is similar to its parent subpopulation, we create a new subpopulation consisting of the following individuals: 50% are copies of the best individual of the latest generation, and the other 50% are generated randomly from the neighborhood of the best individual according to a normal distribution. Note that the maximum number of subpopulations is limited to the preset value to prevent too many subpopulations from being involved. Figure 1 shows the multipopulation management strategy during the evolution process.
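The first and third cases could be implemented roughly as below (an illustrative sketch; best_global, bounds, sigma, and rng are assumed inputs, and the 1/3 and 50% proportions follow the description above):

```python
import numpy as np

def rebuild_after_internal_collapse(subpop, best_global, bounds, rng):
    """Case 1: all individuals in `subpop` are similar. Keep one representative at random,
    then rebuild: 1/3 copies of the kept individual, 1/3 copies of the best individual
    found so far, and 1/3 randomly generated individuals."""
    n, dim = subpop.shape
    kept = subpop[rng.integers(n)]
    lower, upper = bounds
    third = n // 3
    random_part = rng.uniform(lower, upper, size=(n - 2 * third, dim))
    return np.vstack([np.tile(kept, (third, 1)),
                      np.tile(best_global, (third, 1)),
                      random_part])

def rebuild_near_best(best_latest, n, sigma, rng):
    """Case 3: the subpopulation resembles its parent. New subpopulation: 50% copies of the
    best individual of the latest generation, 50% sampled normally around it."""
    half = n // 2
    neighbours = best_latest + rng.normal(0.0, sigma, size=(n - half, best_latest.size))
    return np.vstack([np.tile(best_latest, (half, 1)), neighbours])

# Case 2 (a subpopulation similar to another subpopulation) simply deletes the redundant one.
```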

3. Integration with EAs

Now we integrate the proposed multipopulation management strategy with EAs to form multipopulation EAs; the flowchart is shown in Figure 2. The algorithm starts by setting the parameters. It then creates a population of candidate solutions and evaluates them. Next, the population is divided into multiple subpopulations, and each subpopulation runs an EA paradigm to generate its own offspring. The subpopulations then undergo diversity judgement, and, based on the diversity levels, subpopulations are created or deleted. Finally, the stopping condition is checked. In this way, EAs can always dynamically manage the subpopulation number, leading to better performance than the corresponding single-population EAs. The pseudocode of the multipopulation EAs is shown in Algorithm 1.

Set the parameters of the proposed multi-population EAs
Randomly initialize the entire population
Evaluate the fitness of all candidate solutions in the population
Divide the population into multiple subpopulations
While the halting criterion is not satisfied do
    For each subpopulation do
        Perform an independent EA to create its own offspring subpopulation
    End for
    Judge diversity and manage the offspring subpopulations (see Section 2.2 for details)
    Evaluate the fitness of all offspring subpopulations
End while

The main steps of the proposed multipopulation EAs are further described in detail below.

Step 1 (set parameters). The main parameters of the proposed multipopulation EAs are initialized. They include the parameters of EA paradigms, the maximum number of iterations, the size of population, the number of initial subpopulations, and the maximum number of subpopulations.

Step 2 (initialize the population of solutions). It randomly generates a set of candidate solutions between the lower and upper boundaries.

Step 3 (evaluate the population of solutions). The fitness of the generated solutions is calculated using the objective function.

Step 4 (divide the population). The initial population is divided into multiple subpopulations of equal size, and each subpopulation is randomly assigned solutions from the population.
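For example, randomly assigning 75 solutions to three subpopulations of 25 (the initial setting used later in Section 4.1) could look like this; the problem dimension of 30 is only a placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform(-100, 100, size=(75, 30))   # 75 candidate solutions, 30 variables
shuffled = rng.permutation(population)               # random assignment of solutions
subpopulations = np.array_split(shuffled, 3)         # three subpopulations of 25 each
print([sp.shape for sp in subpopulations])           # [(25, 30), (25, 30), (25, 30)]
```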

Step 5 (create offspring subpopulations). The subpopulations are evolved independently by EA paradigms to generate their own offspring subpopulations. Note that we can use the same EA for all subpopulations, or a different EA for each subpopulation.

Step 6 (judge diversity and manage subpopulations). If a subpopulation is similar to other subpopulations, directly delete this subpopulation. If it is similar to its parent subpopulation, create a new subpopulation. If individuals are similar in a subpopulation, firstly delete this subpopulation and then create a new subpopulation. For details see Section 2.2.

Step 7 (evaluate offspring subpopulations and check the stopping condition). If the termination criterion is not met, go to Step 5; otherwise, terminate and output the evaluation results. Here the termination criterion is the maximum number of function evaluations.
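Putting Steps 1–7 together, the sketch below shows one way to wire the overall loop; evolve_subpopulation is a crude placeholder standing in for one generation of SGA, PBIL, SaDE, or PSO2011, and the subpopulation-management step of Section 2.2 is only marked by a comment rather than implemented:

```python
import numpy as np

def sphere(x):
    """Simple stand-in objective for illustration (minimization)."""
    return float(np.sum(x ** 2))

def evolve_subpopulation(subpop, fitness, rng):
    """Placeholder for one generation of an EA paradigm (SGA, PBIL, SaDE, PSO2011, ...).
    Here it is only a crude Gaussian perturbation of every solution."""
    return subpop + rng.normal(0.0, 0.1, size=subpop.shape)

def multipopulation_ea(objective, dim=10, n_subpops=3, subpop_size=25,
                       max_subpops=6, max_evals=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Steps 1-4: set parameters, initialize, evaluate, divide into subpopulations.
    subpops = [rng.uniform(-100, 100, size=(subpop_size, dim)) for _ in range(n_subpops)]
    evals, best, best_f = 0, None, np.inf
    while evals < max_evals:                                  # Step 7: stopping condition
        for i, sp in enumerate(subpops):
            fit = np.array([objective(x) for x in sp])
            evals += len(sp)
            if fit.min() < best_f:
                best_f, best = float(fit.min()), sp[fit.argmin()].copy()
            subpops[i] = evolve_subpopulation(sp, fit, rng)   # Step 5
        # Step 6: diversity judgement and subpopulation management (Section 2.2)
        # would create or delete subpopulations here, never exceeding max_subpops.
    return best, best_f

best_solution, best_fitness = multipopulation_ea(sphere)
print(best_fitness)
```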

4. Experimental Results

In this section the performances of the proposed multipopulation EAs are investigated. Section 4.1 describes the experimental setup, Section 4.2 presents performance comparisons on the 2013 CEC benchmark functions, and Section 4.3 applies the proposed multipopulation EAs to a real-world complex automated warehouse scheduling problem.

4.1. Simulation Setup

The performances of the proposed multipopulation EAs are evaluated on the 28 benchmark functions presented in Table 1, which are from the 2013 Congress on Evolutionary Computation (CEC) [24].

The popular EAs used in this paper include SGA, PBIL, SaDE, and PSO2011. We select SGA because it is an improvement of the basic GA that uses the best individual in each generation for crossover [25]. We select PBIL because it is the most successful variant of estimation of distribution algorithms [26]. We select SaDE because it is one of the most powerful DE algorithms and has demonstrated excellent performance on many problems [27, 28]. We select PSO2011 because it is popular in the literature and contains improvements gained as a result of many years of PSO studies [29, 30].

The next step is to set the parameters of each multipopulation EA. For SGA we use real coding, roulette-wheel selection, single-point crossover with a crossover probability of 1, and a mutation probability of 0.001. For PBIL, the learning rate and the number of best and worst individuals used to update the probabilities in each generation follow [26], and the standard deviation for mutation linearly decreases from 10% of the parameter range at the first generation to 2% of the parameter range at the final generation. The SaDE parameter settings are adapted according to the learning progress [27]: the scaling factor is randomly sampled from the normal distribution N(0.5, 0.3), and the crossover rate follows the normal distribution N(0.5, 0.1). For PSO2011, the inertia weight, the cognitive constant, and the social constant for neighborhood interaction (which is set equal to the cognitive constant) follow the standard PSO2011 settings [29, 30].
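For illustration, the SaDE control parameters described above can be resampled like this (the clipping range is our assumption, not part of the original settings):

```python
import numpy as np

rng = np.random.default_rng()
F  = np.clip(rng.normal(0.5, 0.3), 0.0, 1.0)   # scaling factor sampled from N(0.5, 0.3)
CR = np.clip(rng.normal(0.5, 0.1), 0.0, 1.0)   # crossover rate sampled from N(0.5, 0.1)
```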

In addition, we initially use three subpopulations, and the maximum subpopulation number is set to six. Each subpopulation uses the same EA, and the population size of each subpopulation is 25. For fair comparisons, the population size of the single-population EAs is set to 150. Each function is evaluated at the dimension and function evaluation budget prescribed by the CEC 2013 test suite. All algorithms are terminated when the maximum number of function evaluations is reached or when the objective function error value falls below $10^{-6}$.
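For reference, the multipopulation settings stated above can be collected in one place (the dimension and evaluation budget follow the CEC 2013 protocol and are therefore not repeated here):

```python
experiment_config = {
    "initial_subpopulations": 3,
    "max_subpopulations": 6,
    "subpopulation_size": 25,
    "single_population_size": 150,  # baseline single-population EAs
    "runs_per_function": 25,        # see Section 4.2
    "target_error": 1e-6,           # stop early when the error drops below this
}
```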

4.2. Performance Comparisons

We run each algorithm 25 times on each benchmark function, and the results are shown in Tables 2 and 3. The tables show that multipopulation SaDE (M-SaDE) performs best on 18 functions (F4, F6, F7, F9, F10, F11, F12, F13, F14, F15, F16, F18, F19, F20, F21, F22, F26, and F28), and multipopulation PSO2011 (M-PSO2011) performs best on 9 functions (F2, F3, F5, F8, F17, F23, F24, F25, and F27). For function F1, both M-SaDE and M-PSO2011 obtain the global optimal solution. Furthermore, we briefly consider the types of functions for which the various algorithms are best suited. Tables 2 and 3 show that the best-performing algorithm on the unimodal functions (F1–F5) is mostly M-PSO2011, and the best-performing algorithm on the multimodal functions (F6–F20) is mostly M-SaDE. We also note from these tables that M-PSO2011 performs as well as M-SaDE on the composition functions (F21–F28). This implies that no single algorithm is the best for every problem; for different optimization problems, different algorithms have their own advantages.

Tables 2 and 3 also show that the total performances of multipopulation EAs, including M-SGA, M-PBIL, M-SaDE, and M-PSO2011, are better than their corresponding single-population EAs. This may be due to the proposed multipopulation management strategy that enhances population diversity to improve optimization performance.

The average running times of all algorithms are shown in the last rows of Tables 2 and 3. Here MATLAB® is used as the programming language, and the computer is a 2.40 GHz Intel Pentium® 4 CPU with 4 GB of memory. We find that the average running times of the multipopulation EAs are lower than those of their corresponding single-population algorithms; for example, the average running time of multipopulation SGA (M-SGA) is lower than that of single-population SGA (S-SGA). The reason is that the multipopulation EAs use multiple parallel subpopulations to reduce computation time with the same total population size as the corresponding single-population algorithms. Multiple subpopulations are also amenable to parallel processing, which can further reduce the computational effort.

In order to further compare the performance of the multipopulation and single-population EAs, we use the Wilcoxon method to test for statistical significance. The Wilcoxon method is a nonparametric statistical test that determines whether the differences between groups of data are statistically significant when the assumption that the differences are independent and identically normally distributed is not satisfied [31–33]. The Wilcoxon test results are shown in Table 4, where a pair is marked if the difference between the two algorithms is statistically significant.

The results in Table 4 are divided into S-SGA versus M-SGA group, S-PBIL versus M-PBIL group, S-SaDE versus M-SaDE group, and S-PSO2011 versus M-PSO2011 group. For each pair of algorithms we calculate B/S/W scores, where “B” denotes the number of times that the left algorithm performs better than the right one, “S” denotes the number of times that the left algorithm performs the same as the right one (statistically speaking), and “W” denotes the number of times that the left algorithm performs worse than the right one.
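The B/S/W tallies can be reproduced with a rank-sum test; the sketch below uses SciPy, assumes that lower error is better, and takes the 0.05 significance level as our own assumption:

```python
import numpy as np
from scipy.stats import ranksums

def bsw_score(left_runs, right_runs, alpha=0.05):
    """Compare two algorithms function by function and return the B/S/W tallies.
    left_runs, right_runs: lists with one array of final error values (25 runs) per benchmark."""
    better = same = worse = 0
    for left, right in zip(left_runs, right_runs):
        _, p = ranksums(left, right)
        if p >= alpha:                              # no statistically significant difference
            same += 1
        elif np.median(left) < np.median(right):    # lower error is better
            better += 1
        else:
            worse += 1
    return better, same, worse
```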

Table 4 shows that, for S-SGA versus M-SGA, the B/S/W score is 1/7/20, which indicates that S-SGA outperforms M-SGA one time, S-SGA is statistically the same as M-SGA seven times, and M-SGA outperforms S-SGA twenty times. For S-PBIL versus M-PBIL, the B/S/W score is 2/3/23, which indicates that S-PBIL outperforms M-PBIL two times, S-PBIL is statistically the same as M-PBIL three times, and M-PBIL outperforms S-PBIL twenty-three times. For S-SaDE versus M-SaDE, the B/S/W score is 0/5/23, which indicates that S-SaDE never outperforms M-SaDE, S-SaDE is statistically the same as M-SaDE five times, and M-SaDE outperforms S-SaDE twenty-three times. For S-PSO2011 versus M-PSO2011, the B/S/W score is 1/5/22, which indicates that S-PSO2011 outperforms M-PSO2011 one time, S-PSO2011 is statistically the same as M-PSO2011 five times, and M-PSO2011 outperforms S-PSO2011 twenty-two times. These results show that the multipopulation versions of these algorithms are significantly better than their single-population versions on the CEC 2013 benchmark functions, which further verifies the conclusions drawn from Tables 2 and 3. Such statistical results show that the proposed multipopulation management strategy is a good way to improve the optimization performance of EAs. The reason for its competitive performance is that the multipopulation EAs effectively manage the number of subpopulations in different evolution phases throughout the evolution, which helps maintain population diversity compared with single-population EAs.

4.3. Application to Complex Warehouse Scheduling Problems

In this section, the proposed multipopulation EAs are applied to the real-world complex automated warehouse scheduling problems described in [34], which are formulated as constrained single-objective optimization problems. Warehouse scheduling in the supply chain is challenging because it has been proved to be NP-hard, one of the most challenging classes of combinatorial optimization problems [35, 36]. The layout of the warehouse system we study is shown in Figure 3, and more details about the cost function, constraints, and parameter settings of the warehouse scheduling problem can be found in [34].

In this experiment, we use the same single-population and multipopulation EAs as those in the benchmark simulations. The constraint-handling method is based on the feasibility rules of Deb [37], which have demonstrated promising performance in dealing with constraints. We consider five simulation schemes. Scheme 1 is described as follows:
p1: (12, 6, 2, 50, 1), p2: (51, 8, 1, 45, 1), p3: (40, 3, 5, 46, 1), p4: (37, 3, 4, 113, 1),
p5: (14, 7, 1, 40, 1), p6: (13, 5, 2, 50, 1), p7: (45, 5, 5, 50, 1), p8: (18, 7, 3, 33, 1),
p9: (7, 3, 2, 35, 1), p10: (11, 4, 3, 110, 1), p11: (15, 2, 1, 34, 1), p12: (36, 8, 4, 40, 1),
p13: (8, 6, 5, 47, 1), p14: (56, 2, 3, 34, 1), p15: (42, 8, 3, 30, 1), p16: (23, 4, 1, 50, 1),
p17: (4, 6, 2, 50, 1), p18: (13, 1, 1, 36, 1), p19: (39, 6, 4, 42, 1), p20: (45, 6, 5, 55, 1),
p21: (50, 8, 1, 67, 2), p22: (3, 1, 2, 74, 2), p23: (55, 4, 3, 68, 2), p24: (6, 7, 4, 85, 2),
p25: (17, 4, 5, 74, 2), p26: (60, 7, 3, 80, 2), p27: (35, 6, 2, 67, 2), p28: (15, 2, 1, 70, 2),
p29: (57, 4, 2, 80, 2), p30: (2, 1, 4, 62, 2), p31: (25, 8, 2, 76, 2), p32: (5, 2, 4, 64, 2),
p33: (17, 5, 3, 76, 2), p34: (41, 2, 5, 91, 2), p35: (19, 3, 2, 70, 2), p36: (20, 1, 1, 82, 2),
p37: (42, 6, 2, 88, 2), p38: (32, 6, 3, 75, 2), p39: (9, 7, 1, 82, 2), p40: (58, 5, 4, 69, 2).

In Scheme 1, there are 20 storage products and 20 retrieval products. Each unit p is denoted by a tuple (x, y, z, w, f), where x, y, and z are the Euclidean coordinates of the storage unit, w is a weight coefficient that affects the scheduling quality, and f = 1 indicates a storage product while f = 2 indicates a retrieval product.

Scheme 2 includes all the storage products of Scheme 1 but only the first 16 retrieval products. Scheme 3 includes all the storage products of Scheme 1 but only the first 12 retrieval products. Scheme 4 includes the first 16 storage products of Scheme 1 and all of the retrieval products. Scheme 5 includes the first 12 storage products of Scheme 1 and all of the retrieval products.
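To make the scheme definitions concrete, the sketch below builds Schemes 2–5 from Scheme 1 by filtering on the product flag and slicing (only the first few tuples of Scheme 1 are shown; the variable names are ours):

```python
# Each unit is (x, y, z, weight, flag): flag 1 = storage product, flag 2 = retrieval product.
scheme1 = [
    (12, 6, 2, 50, 1), (51, 8, 1, 45, 1), (40, 3, 5, 46, 1),   # ... p4-p20 omitted here
    (50, 8, 1, 67, 2), (3, 1, 2, 74, 2), (55, 4, 3, 68, 2),    # ... p24-p40 omitted here
]

storage   = [p for p in scheme1 if p[4] == 1]
retrieval = [p for p in scheme1 if p[4] == 2]

scheme2 = storage + retrieval[:16]   # all storage products, first 16 retrieval products
scheme3 = storage + retrieval[:12]   # all storage products, first 12 retrieval products
scheme4 = storage[:16] + retrieval   # first 16 storage products, all retrieval products
scheme5 = storage[:12] + retrieval   # first 12 storage products, all retrieval products
```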

The optimization results of the different single-population and multipopulation EAs are summarized in Table 5. From the table, it can be seen that M-SaDE performs best in all scheme cases, achieving the lowest value of the scheduling objective. Furthermore, we find that the multipopulation EAs perform significantly better than the single-population EAs on these cases of the warehouse scheduling problem, which again verifies the conclusions obtained on the benchmark functions.

A sample multipopulation SaDE scheduling route output is shown in Table 6 for Scheme 1. It is seen that the warehouse scheduling problem is divided into 5 routes, where machine 1 implements routes 1 and 2, and machine 2 implements routes 3, 4, and 5. Each route includes 8 storage units, where the first 4 storage units are used to store products, and the last 4 storage units are used to retrieve products.

5. Conclusions

In this paper, we first propose a management strategy for the number of subpopulations based on individual distance and population distance, which dynamically increases or decreases the subpopulation number during the evolution process to maintain population diversity. We then integrate the proposed multipopulation management strategy into EAs, including SGA, PBIL, SaDE, and PSO2011, to develop new multipopulation EAs. Next, the proposed multipopulation EAs are tested on CEC benchmark functions, and the empirical results show that any single-population EA can easily be extended to a multipopulation EA by the proposed strategy and that the proposed multipopulation methods obtain better optimization performance than the single-population EAs. Finally, these multipopulation EAs are used to solve real-world complex automated warehouse scheduling problems, and the experimental results show that the proposed multipopulation EAs can obtain satisfactory solutions.

In future research, at least three directions are envisioned. First, the proposed multipopulation management strategy has been integrated into several EAs and improves their optimization performance; it could be extended to more EAs and swarm intelligence algorithms, for example, the fireworks algorithm (FWA) [38, 39], the brain storm optimization (BSO) algorithm [40, 41], the teaching-learning-based optimization (TLBO) algorithm [42–44], and the Jaya optimization algorithm [45]. Second, in this paper we do not consider the communication between subpopulations. Undoubtedly, the interaction of subpopulations is important for enhancing optimization performance through information sharing between subpopulations, so how to implement adaptive interaction between subpopulations is an additional direction for future study. Third, solving real-world application problems is the perpetual goal of EAs, and it would be fruitful to apply the proposed multipopulation EAs to various complex real-world problems. There is no doubt that more applications of the proposed multipopulation EAs will emerge in the near future with focused research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.