Abstract

The cooperative coevolution (CC) algorithm features a "divide-and-conquer" problem-solving process. This feature has great potential for large-scale global optimization (LSGO) but also induces some inherent problems of CC if a problem is improperly decomposed. In this work, a novel CC named selective multiple population- (SMP-) based CC (CC-SMP) is proposed to enhance the cooperation of subproblems by addressing two challenges: finding informative collaborators whose fitness and diversity are qualified and adapting to the dynamic landscape. In particular, a CMA-ES-based multipopulation procedure is employed to identify local optima which are then shared as potential informative collaborators. A restart-after-stagnation procedure is incorporated to help the child populations adapt to the dynamic landscape. A biobjective selection is also incorporated to select qualified child populations according to the criteria of informative individuals (fitness and diversity). Only the selected child populations are active in the next evolutionary cycle while the others are frozen to save computing resources. In the experimental study, the proposed CC-SMP is compared to 7 state-of-the-art CC algorithms on twenty 1000-dimensional benchmark functions. Statistical comparisons confirm the significant superiority of the CC-SMP. In addition, the behavior of the SMP scheme and the sensitivity to the cooperation frequency are also analyzed.

1. Introduction

Large-scale global optimization (LSGO) refers to optimization problems that include hundreds or even thousands of decision variables. The huge number of decision variables may induce the curse of dimensionality and extremely complex interdependencies among variables, both of which are challenging for conventional optimization approaches. In addition, the increase of the decision variables may also change the problem's properties.

Cooperative coevolution (CC) algorithms have great potential to conduct LSGO in a divide-and-conquer manner. The original problem is decomposed into several subproblems, each of which is optimized by a separate coevolutionary population. Since these subproblems are relatively small and optimized concurrently, highly efficient problem solving is expected.

To properly use CC, two issues should be carefully considered: cooperation among subproblems and problem decomposition (also called variable grouping). Since each subproblem just represents a segment of the original problem, one subproblem needs to persistently combine its individuals with collaborators provided by the other subproblems. The resultant whole solutions are then evaluated according to the objective function of the original problem. Obviously, improper collaborators may misguide the fitness evaluation and therefore be harmful to global optimization. In fact, early theoretical works have proved that conventional CC algorithms are likely to converge to a Nash equilibrium rather than a global optimum if the collaborators cannot provide sufficient information for cooperation.

Plenty of work has been conducted on problem decomposition to minimize the interdependency among subproblems [1–6]. If the subproblems are fully independent of each other, the divide-and-conquer feature of CC algorithms can be directly utilized to optimize large-scale problems. However, fully accurate decomposition is usually impractical, especially for large-scale problems. Although some pioneering works [7–10] have studied ways to address the inherent problems of CC by collecting informative collaborators, they cannot be directly applied to LSGO.

In our previous work [11], niching-based multimodal optimization (NMMO) was incorporated into the optimization procedure of each subproblem. The optima found by the NMMO procedures are exchanged among subproblems, which enhances the collection of informative collaborators and significantly improves the performance on LSGO. Nevertheless, in [11] the NMMO optimizer is only triggered occasionally because its convergence efficiency is relatively low due to the simultaneous search in different regions. Therefore, the optimization is still mainly conducted by another local search optimizer.

According to our further study, we found that it is not necessary to locate and refine all optima of a subproblem. If some regions are properly selected to be searched, the construction of informative collaborators can still be achieved. In addition, since the number of regions or niches is reduced, the convergence rate of the NMMO procedure is accordingly improved. Thus, a proper selection scheme is expected to be helpful for optimizing subproblems and providing informative collaborators simultaneously.

Bearing this idea in mind, in this work a selective multipopulation scheme is proposed to enhance the cooperation among subproblems. To find informative collaborators, multiple populations are used to search different regions simultaneously. Different from common multipopulation- or niching-based methods, which are essentially for multimodal optimization, in our algorithm only some representative populations are selected according to fitness and diversity and evolved at each iteration. Thus, only one multipopulation-based optimizer is used in each subproblem to not only find informative collaborators but also conduct global optimization. Besides, the collaborative information is adaptively updated among subproblems. The proposed algorithm is tested on the 20 LSGO benchmark functions [12] launched at the 2010 IEEE Congress on Evolutionary Computation (CEC'10) and compared with 7 state-of-the-art CC algorithms. Experimental results validate its effectiveness.

The remainder of this paper is organized as follows. Background knowledge and motivation are presented in Section 2. In Section 3, the detail of the proposed method is described. Behavior analysis and comparison are given in Section 4. In the last section, we summarize our work and draw the conclusions.

2. Background and Motivation

In this section, we briefly introduce the CC algorithm and CMA-ES. Then, the background knowledge and challenges of CC are presented. We point out that the optimizer of each subproblem should not only provide informative collaborators with good diversity and fitness but also adapt to dynamic landscapes.

2.1. Background
2.1.1. CC Algorithm

The pseudocode of the general CC framework is shown in Algorithm 1. At first, the original problem is decomposed into subproblems with a certain grouping method, and then the context vector is initialized. Note that each subproblem is only a part of the original problem. Therefore, we need collaborative information (CI) from other subproblems to evaluate individuals in the current subproblem. Typically, the collaborative information is the current best solution in each subproblem. After initializing the subproblem populations (SP), the subproblems are optimized in a round-robin fashion. The optimizer could be any evolutionary algorithm. The historical best solution of each subproblem is often stored as the collaborative information. The algorithm terminates when the number of fitness evaluations (FEs) exceeds the maximum number. In line 10, the best solution is output as the final result.

1 Decompose the original problem into m subproblems;
2 Randomly initialize the context vector CI;
3 Initialize the subproblem populations SP_1, …, SP_m;
4 while Termination = false do
5    for i = 1 to m do
6       SP_i ← Optimizer(SP_i, CI); update CI with the best solution of SP_i;
7    end
8 end
9 best ← the best complete solution found so far;
10 return best
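
For illustration, the round-robin loop of Algorithm 1 can be written as a minimal Python sketch. The names (cc_optimize, optimize_group) and the signature of the subproblem optimizer are hypothetical placeholders for any evolutionary algorithm; the sketch only shows how the context vector completes partial solutions for evaluation.

import numpy as np

def cc_optimize(f, groups, optimize_group, bounds, max_fes):
    # f: objective of the original n-dimensional problem
    # groups: list of index arrays, one per subproblem (grouping result)
    # optimize_group: hypothetical optimizer that evolves one subproblem for
    #                 one generation and returns (best_segment, used_fes)
    low, high = bounds
    n = sum(len(g) for g in groups)
    # The context vector holds the current best segment of each subproblem
    # and completes partial solutions so that f can evaluate them.
    context = np.random.uniform(low, high, n)
    fes = 0
    while fes < max_fes:                 # termination check (line 4)
        for g in groups:                 # round-robin over subproblems (line 5)
            segment, used = optimize_group(f, g, context)
            context[g] = segment         # share the new best as collaborative information
            fes += used
    return context, f(context)           # output the best solution (line 10)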
2.1.2. CMA-ES

In the multiple population framework used in this work, each population is evolved by an independent local search optimizer. The covariance matrix adaptation evolution strategy (CMA-ES) was proposed by Hansen and Ostermeier [13]. By sampling and adaptively updating a normal distribution, the CMA-ES shows a powerful local search ability in continuous space. The main formula of CMA-ES is shown as follows:

$$\mathbf{x}_k \sim \mathbf{m} + \sigma \mathcal{N}(\mathbf{0}, \mathbf{C}), \quad k = 1, \ldots, \lambda, \tag{1}$$

where $\mathbf{m}$ is the sampling center, $\sigma$ is the step size, and $\mathbf{C}$ is the covariance matrix which determines the shape of the distribution. The parameters $\mathbf{m}$, $\sigma$, and $\mathbf{C}$ are updated by different recombination, cumulation, and selection strategies [14].
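
As a concrete illustration, the sampling step of (1) can be sketched in a few lines of Python. This is a minimal sketch of the sampling only; the update rules of $\mathbf{m}$, $\sigma$, and $\mathbf{C}$ follow [14] and are omitted here.

import numpy as np

def cma_sample(mean, sigma, cov, lam, rng):
    # Draw lam offspring from N(mean, sigma^2 * cov), cf. (1).
    # Decompose C = B diag(d^2) B^T so that B @ (d * z) has covariance C.
    eigvals, B = np.linalg.eigh(cov)
    d = np.sqrt(np.maximum(eigvals, 0.0))
    z = rng.standard_normal((lam, len(mean)))   # z_k ~ N(0, I)
    return mean + sigma * (z * d) @ B.T         # x_k ~ N(mean, sigma^2 * C)

rng = np.random.default_rng(0)
offspring = cma_sample(np.zeros(5), 0.5, np.eye(5), lam=8, rng=rng)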

2.2. Challenges for CC
2.2.1. Dynamic Landscapes

Due to the divide-and-conquer nature of CC, fitness evaluation in a subproblem is related to the other subproblems. In other words, the fitness landscape is dynamic when there exists interdependency among the decision variables in different subproblems. This can be demonstrated by the following example.

Equation (2) is the Ackley function with $n$ dimensions:

$$f(\mathbf{x}) = -20 \exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi z_i)\right) + 20 + e, \quad \mathbf{z} = M\mathbf{x}, \tag{2}$$

where an orthogonal matrix $M$ is used to rotate the coordinates, which makes the function nonseparable. The Ackley function is widely used to build benchmark functions for assessing optimization methods. In this case, let $n = 4$ and arbitrarily group the four decision variables into two subproblems (each consisting of 2 decision variables). Thus, the resultant two subproblems are interdependent.

Figure 1 shows the corresponding landscapes of the first subproblem when cooperating with four different collaborators from the second subproblem. It can be seen that not only the shape of the landscape but also the location of the optimum may vary accordingly. This reveals the dynamic nature of the optimization procedure in subproblems in the context of CC. Therefore, the optimizer of subproblems should be able to respond to the change of landscapes.
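
This dynamic behavior is easy to reproduce. The following self-contained Python sketch uses a random rotation of our own choosing (not the exact setup behind Figure 1) and locates the optimum of the first subproblem on a grid for two different collaborators:

import numpy as np

def ackley(z):
    n = len(z)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(z ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * z)) / n) + 20.0 + np.e)

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal rotation

def f(x):                                           # rotated, nonseparable Ackley
    return ackley(M @ x)

def best_of_subproblem1(collaborator, grid):
    # Best (x1, x2) on a grid while (x3, x4) is fixed to the collaborator.
    vals = [(f(np.array([a, b, *collaborator])), (a, b))
            for a in grid for b in grid]
    return min(vals)

grid = np.linspace(-2.0, 2.0, 81)
for collaborator in (np.zeros(2), np.array([1.0, -1.0])):
    val, argmin = best_of_subproblem1(collaborator, grid)
    print(collaborator, "->", argmin, round(float(val), 3))
# Both the fitness values and the location of subproblem 1's optimum
# change with the collaborator, i.e., the landscape is dynamic.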

2.2.2. Inherent Problems of CC

The evolutionary game theory (EGT) has been used to theoretically analyze CC [8, 15]. It has been proved that conventional CC is likely to converge to the Nash equilibrium which is relatively easy to reach. Specifically, if the Nash equilibrium is a local optimum with a large attractive basin, local rather than global optimization will be the result of conventional CC. We use the maximum of two quadratics (MTQ) function to visually demonstrate this phenomenon. The MTQ function, formulated in (4), is a typical function that may lead to local optimization of CC and is widely used in the literature [7, 15–18] to test CC algorithms:

$$\mathrm{MTQ}(x, y) = \max_{i \in \{1, 2\}} h_i \left[ 1 - \frac{16}{s_i}(x - x_i)^2 - \frac{16}{s_i}(y - y_i)^2 \right], \tag{4}$$

where $h_i$ determines the height of a peak, $s_i$ affects the covering area, and $(x_i, y_i)$ is the coordinate of a peak.

Figure 2 shows the landscape of an MTQ function with two optima. The global optimum locates on the small peak (covering a small attractive basin) while the local optimum locates on the large peak. According to the principle of EGT, it is easy to reach the Nash equilibrium on the large peak, where each subproblem can be rewarded easily. From the perspective of optimization, this means that global optimization is unlikely to be achieved unless the global optimum lies in a large attractive basin where the Nash equilibrium is easy to reach. Notably, this is not the same as premature convergence in genetic algorithms, which can be addressed by maintaining proper population diversity. As for CC, if the optimization of a subproblem has converged (taking Figure 2 for instance, the population for one variable converges to 0.2), then due to the cooperative nature of CC, the search area of the other subproblem is particularly restricted (its search space is restricted to the solid blue line in Figure 2). In such a case, the population cannot search the remaining area where the global optimum may actually be located.
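
The trap can be reproduced with a toy MTQ instance. The peak parameters below are our own illustrative choices (a narrow high peak and a wide low peak), not the exact values behind Figure 2:

import numpy as np

def mtq(x, y):
    # Narrow, high peak: the global optimum at (0.8, 0.8).
    small_high = 1.0 - 40.0 * ((x - 0.8) ** 2 + (y - 0.8) ** 2)
    # Wide, low peak: the locally optimal Nash equilibrium at (0.2, 0.2).
    large_low = 0.9 - 2.0 * ((x - 0.2) ** 2 + (y - 0.2) ** 2)
    return max(small_high, large_low)

xs = np.linspace(0.0, 1.0, 1001)
# Once the population for y has converged to 0.2, the search for x is
# restricted to the slice mtq(x, 0.2), whose maximum lies on the large peak:
print(xs[np.argmax([mtq(x, 0.2) for x in xs])])   # ~0.2, the trap
# A slice through the other peak would reveal the true global optimum:
print(max(mtq(x, 0.8) for x in xs))               # 1.0 at x = 0.8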

2.3. Motivation

The inherent problem of CC is caused by its cooperative nature. Thus, a proper cooperation scheme should be designed to provide and utilize informative collaborators. Intuitively, representative information is found and sent to the other subproblem so as to help it "understand" its counterpart. Take Figure 2 as an example: once one subproblem has converged, the search space of the other is actually the solid red line in Figure 2. If both the current best optimum (on the large peak) and the local optimum (on the small peak) can be properly obtained and provided as collaborators, the solid blue line that represents an additional search space can be drawn. Along this line, CC will find the real global optimum. Therefore, the current global and local optima are good representatives of a subproblem. They can be used as informative collaborators to help CC jump out from the locally optimal Nash equilibrium.

Nevertheless, there is no need to find all optima as in multimodal optimization. To save computing resources and improve the convergence rate, only the optima with great potential to be the real global optimum are worthy of being exploited. Thus, a selective property should be additionally incorporated into multimodal optimization.

In this paper, we particularly design a multipopulation optimizer for CC. In this optimizer, a selection strategy allows potential optima to use more computing resources. Besides, the populations in each subproblem can be adaptively restarted to address the dynamic landscape challenge of CC.

3. Proposed Algorithm

In this paper, we propose a selective multipopulation (SMP) scheme to enhance the cooperation among subproblems. The resultant CC algorithm is named CC-SMP for short. The main extension of the CC-SMP is obtaining and sharing more information among subproblems in a uniform selection-based multipopulation framework.

3.1. Algorithm Framework

The pseudocode of the CC-SMP is described in Algorithm 2. First, the original problem is decomposed into several subproblems with a given grouping method (line 1). Then, the coevolutionary populations and the information pool are initialized. Specifically, each subproblem is allocated a set of coevolutionary child populations of a predefined size. These populations are randomly generated according to the grouping result (lines 3–6). Besides, 50 additional complete solutions are randomly generated and evaluated. The fittest solution is copied to the information pool as the initial collaborative information (lines 7–10). The initial coevolutionary populations of the subproblems are evaluated via collaborating with the initial collaborative information (line 11).

After the initialization procedure, the main optimization procedure is executed (lines 12–22) till the number of fitness evaluations (FEs) exceeds the predefined maximum. A multipopulation-based optimizer is used for each subproblem. This optimizer differs from conventional multipopulation optimizers, since not all of the child populations have the same opportunity to evolve. A biobjective selection is conducted to identify the child populations with good fitness and diversity (lines 14–15), and only these become active to evolve. The child populations that are not selected are frozen in the current iteration to save computing resources.

  Input: the original problem, a grouping method, the number of child populations
  Result: One run of CC-SMP
1 Decompose the original problem into m subproblems;
2 pool ← ∅; FEs ← 0;
3 for i = 1 to m do
4    Randomly initialize the child populations of subproblem i;
5    Initialize the CMA-ES parameters of each child population;
6 end
7 Randomly generate 50 complete solutions;
8 Evaluate the 50 solutions and update FEs;
9 best ← the fittest of the 50 solutions;
10 CI ← best; copy CI into pool;
11 Evaluate all child populations via collaborating with CI;
12 while FEs < maxFEs do
13    for i = 1 to m do
14       Compute the fitness and diversity of the child populations of subproblem i
         // see Algorithm 3;
15       active ← the nondominated child populations of subproblem i;
16       Evolve each active child population for one generation by the multiple CMA-ES;
17       Check each active child population for stagnation and restart it if necessary
         // see Algorithm 4;
18       Update best and pool with the newly found solutions;
19    end
20    CI ← collaborators selected from pool by fitness and diversity;
21    Update FEs;
22 end
23 return best

To address the challenge of the dynamic landscape, a stagnation check is made for the child populations in each subproblem (line 17). The stagnated child populations are restarted in response to the dynamic landscapes. In order to efficiently find and exploit local optima, the CMA-ES, a popular local search algorithm, is used for each child population. Accordingly, several active child populations may simultaneously search different areas of a subproblem. If any child population gets stuck, a new local optimum is regarded as having been found and sufficiently exploited. Since we use the information of optima to construct the collaborative information (CI) of CC, a newly found optimum means the CI should be updated. In such a case, CI is updated by selecting elements from the information pool, which consists of the historical best solution (best) and the centers of the CMA-ES optimizers (line 20). Those elements are also selected according to fitness and diversity, like the selection of active populations, to avoid combinatorial explosion.

3.2. Selective Multiple Populations (SMP)

In this work, multiple child populations are used to continually explore and exploit local optima in each subproblem. As mentioned before, not all child populations evolve at every iteration. To improve the convergence rate and save computing resources, only some child populations with great potential are selected to evolve. The following biobjective selection is proposed to identify potential child populations.

For a given subproblem, child populations are randomly initialized at first with their own evolutionary parameters (e.g., center, step size, and covariance matrix for the CMA-ES). At each generation, a biobjective selection is conducted to select child populations which have competitive fitness and diversity. Then, the selected child populations evolve while the remaining ones are frozen till the next generation. In particular, the diversity of a given child population is defined as its minimum Manhattan distance to the other child populations. The Manhattan distance indicates the similarity between vectors. The Manhattan distance between $n$-dimensional vectors $\mathbf{a}$ and $\mathbf{b}$ is formulated in (5):

$$d(\mathbf{a}, \mathbf{b}) = \sum_{j=1}^{n} \left| a_j - b_j \right|, \tag{5}$$

where $a_j$ is the $j$th dimension of vector $\mathbf{a}$. The diversity of $\mathbf{a}_i$ is measured as

$$\operatorname{div}(\mathbf{a}_i) = \min_{1 \le k \le K,\, k \ne i} d(\mathbf{a}_i, \mathbf{a}_k), \tag{6}$$

where $K$ is the number of vectors. Note that in this work the diversity of a given population refers to its center (in the context of CMA-ES). In other words, the vectors $\mathbf{a}_i$, $i = 1, \ldots, K$, used in (5) are actually the centers of the child populations, and $K$ is the number of child populations. Given the diversity and fitness of the centers, selection could be conducted according to the following biobjective minimization:

$$\min\ \left( f(\mathbf{a}_i),\ -\operatorname{div}(\mathbf{a}_i) \right). \tag{7}$$

The nondominated sorting method used in NSGA-II [19] is employed to rank the centers. Child populations with nondominated centers are selected to evolve in the next iteration. Algorithm 3 shows the detail of the selection process.

Input: the centers c_1, …, c_K of the child populations and their fitness values
Output: the selected child populations
 // compute the diversity of each center
1 for i = 1 to K do
2    for k = 1 to K, k ≠ i do
3       update div(c_i) with d(c_i, c_k) // (6);
4    end
5 end
6 Select the child populations whose centers are nondominated under (7) // nondominated sorting;
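
A compact Python sketch of this biobjective selection is given below. It computes the diversity of each center via (5) and (6) and keeps the nondominated child populations under (7); a plain O(K²) dominance check stands in for the full NSGA-II sorting, which suffices here because only the first front is needed.

import numpy as np

def select_active(centers, fitness):
    # centers: (K, n) array of CMA-ES centers; fitness: length-K array.
    K = len(centers)
    # Pairwise Manhattan distances, eq. (5).
    dist = np.abs(centers[:, None, :] - centers[None, :, :]).sum(axis=2)
    np.fill_diagonal(dist, np.inf)
    diversity = dist.min(axis=1)                   # eq. (6): nearest-neighbor distance
    obj = np.column_stack([fitness, -diversity])   # minimize both, eq. (7)
    selected = []
    for i in range(K):
        dominated = any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                        for j in range(K) if j != i)
        if not dominated:
            selected.append(i)                     # i is on the first front
    return selected

rng = np.random.default_rng(2)
centers = rng.uniform(-5.0, 5.0, (10, 3))
fitness = np.array([np.sum(c ** 2) for c in centers])   # toy sphere fitness
print(select_active(centers, fitness))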

Compared with conventional multipopulation evolutionary algorithms, the SMP is easy to conduct and evolves efficiently. In conventional algorithms, the management of the child populations, including parameters (search space, step size, etc.), lifetime, and update strategy, is relatively complex, and additional parameters may be involved. In contrast, in the SMP, selected child populations evolve and restart (as presented in Section 3.3) separately without a merging/splitting operation. The biobjective selection makes the child populations search in distributed areas with good diversity. Besides, selecting potential populations to be active for evolution also saves computing resources and speeds up the convergence rate.

3.3. Stagnation and Restart

In this work, we propose a restart scheme so that multiple child populations are capable of exploring local optima continually. In addition, this restart scheme enables the optimizer of each subproblem to react to a dynamic landscape. In particular, the best fitness values of the active child populations are tracked to identify stagnation. Stagnated populations are then restarted to maintain the proper diversity which is essential for continual search (exploring new optima and searching in a dynamic landscape).

The pseudocode of stagnation identification is shown in Algorithm 4. As mentioned before, the CMA-ES is used as the optimizer of each child population. The stagnation criteria for a child population are borrowed from [20]. For a given child population, the best and median fitness values (termed $hist_{best}$ and $hist_{med}$, resp.) in the last $120 + 30n/\lambda$ generations are persistently archived and updated, where $n$ is the dimension of the subproblem and $\lambda$ is the number of the sampling points of CMA-ES in a generation. In this work, $\lambda$ is set to $4 + \lfloor 3 \ln n \rfloor$ as suggested in [21].

Input: a child population and its fitness archives hist_best and hist_med
Output: the (possibly restarted) child population
1 Append the best fitness of the current generation to hist_best;
2 Append the median fitness of the current generation to hist_med;
3 Check whether the 20 newest values in hist_best and hist_med improve on the 20 oldest ones;
4 Check whether the range of the recent best fitness values exceeds the threshold ε;
5 if stagnation is identified then
6    Restart the child population at its opposite position;
7    Reset its CMA-ES parameters and clear hist_best and hist_med;
8    Update the information pool;
9 end

Three criteria are used to check whether a CMA-ES optimizer gets stuck. The first two depend on the data sets in $hist_{best}$ and $hist_{med}$: the 20 newest values in $hist_{best}$ and in $hist_{med}$ should be smaller than the corresponding 20 oldest ones. As for the third, the difference between the maximum and minimum of the recent fittest values should be larger than a predefined threshold $\varepsilon$. If any of these three criteria is not fulfilled, the corresponding CMA-ES optimizer is identified as stagnated.

In this work, we restart a child population at its opposite position [22] (line 6). The opposite position of $x_j$ is defined as $\check{x}_j = l_j + u_j - x_j$, where $[l_j, u_j]$ bounds variable $x_j$. Reference [23] shows that hybridizing opposition-based learning (OBL) with CC can improve the performance of CC. The key idea of OBL is that the opposite of a candidate solution has a relatively high probability of obtaining a better fitness value.
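
The two mechanisms can be sketched together in Python as follows. The stagnation test is our reading of the criteria above (median comparison of the 20 newest versus the 20 oldest archived values, plus the range test with threshold eps); the restart position is the opposite point defined in [22].

import numpy as np

def is_stagnated(hist_best, hist_med, history_len, eps=1e-12, tail=20):
    # hist_best / hist_med: per-generation best and median fitness values
    # (minimization) over the last history_len generations.
    if len(hist_best) < history_len:
        return False                           # archive not yet full
    hb, hm = np.asarray(hist_best), np.asarray(hist_med)
    improving_best = np.median(hb[-tail:]) < np.median(hb[:tail])
    improving_med = np.median(hm[-tail:]) < np.median(hm[:tail])
    moving = (hb[-tail:].max() - hb[-tail:].min()) > eps
    # Stagnation if any of the three criteria is violated.
    return not (improving_best and improving_med and moving)

def opposite(x, low, high):
    # Opposition-based restart position: x_j -> l_j + u_j - x_j.
    return low + high - x

center = np.array([0.3, -1.2])
print(opposite(center, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))  # [-0.3, 1.2]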

3.4. Selection of Collaborative Information

In the CC algorithm, the cooperation between subproblems is achieved by exchanging collaborative information. Each subproblem is just a part of the original problem, but only complete solutions can be evaluated. To evaluate the individuals of a given subproblem, the individuals need to be combined with collaborative information to form complete solutions. In traditional CC algorithms, only the best individual of each subproblem is provided as collaborative information. As mentioned before, such cooperation may lead to convergence to a Nash equilibrium which may be suboptimal.

To address this issue, it is necessary to provide more informative collaborators to enhance the cooperation among subproblems. According to the description of CC-SMP, each subproblem provides its historical best solution and the centers of its active CMA-ES populations. An intuitive way to construct complete solutions is to fully mix the collaborators and the individual at hand. However, as the number of subproblems increases, the number of the resultant complete solutions increases exponentially, which may lead to a large number of fitness evaluations. To avoid such a combinatorial explosion, the number of informative collaborators should be properly limited. In this work, the collaborative information (the historical best solution and the centers of active CMA-ES populations) is offered in terms of complete solutions and archived in the information pool. As shown in Figure 3, a complete solution consists of a solution segment of the subproblem and the very collaborator which is used to conduct fitness evaluation. Then, a few informative collaborators with good diversity and fitness are selected according to the selection scheme illustrated in Algorithm 3.
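
In code, forming and evaluating such a complete solution amounts to splicing the segment into a stored collaborator. A minimal sketch follows; the function name and signature are our own illustration:

import numpy as np

def evaluate_segment(f, segment, group, collaborator):
    # f: objective of the original problem
    # segment: candidate values for the variables indexed by group
    # collaborator: a complete solution taken from the information pool
    full = collaborator.copy()
    full[group] = segment            # overwrite this subproblem's slice
    return f(full)                   # one fitness evaluation of the original problem

# With a handful of selected collaborators per subproblem, each individual
# costs only a few FEs instead of a number that grows exponentially with
# the number of subproblems.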

4. Experiments

In this section, we test and verify the proposed CC-SMP algorithm. First, the CC-SMP is compared with seven state-of-the-art CC algorithms for LSGO. Second, the behavior of CC-SMP is analyzed in the context of CC. Third, parameter sensitivity is analyzed to further understand the performance of the CC-SMP.

4.1. General Parameter Settings

The following experiments are conducted using the benchmark suite for LSGO which was launched at CEC'10 (2010 IEEE Congress on Evolutionary Computation). This benchmark suite includes twenty 1000-dimensional real-valued functions. As shown in Table 1, different basic functions and separability levels are considered to generate the benchmark functions. According to the separability, the benchmark functions can be classified into five categories:
(i) C1: fully separable (F1–F3)
(ii) C2: partially separable with a single 50-dimensional nonseparable group (F4–F8)
(iii) C3: partially separable with 10 50-dimensional nonseparable groups (F9–F13)
(iv) C4: partially separable with 20 50-dimensional nonseparable groups (F14–F18)
(v) C5: fully nonseparable (F19–F20)

Unless otherwise specified, a given algorithm is run 25 times on each function to obtain the average performance for comparison. Each run terminates when the number of fitness evaluations (FEs) exceeds the maximum number specified by the benchmark protocol ($3 \times 10^6$). As for the CC-SMP, the subproblems evolve one by one, and each subproblem evolves one generation in a single cycle. The differential grouping (DG) [2] is used to decompose the original problem into several subproblems before the CC-SMP algorithm is conducted.

Note that according to our preliminary experimental results, in each cycle of the proposed CC-SMP the number of active optimizers (i.e., CMA-ES instances) is always limited (from 1 to 6) even if there are 50 optimizers. In addition, the number of optimizers does not significantly affect the effectiveness and efficiency, because most of the optimizers are not selected to be active in the optimization procedure according to our selection strategy. To make sure there are enough candidates to be selected, we set this number to 10 in the following experiments.

4.2. Performance Comparison

Here we compare the CC-SMP with seven state-of-the-art CC algorithms: DECC-D [5], MLCC [24], DECC-DML [5], DECC-DG [2], DECC-I [2], CBCC-1 [25], and CBCC-2 [25]. The grouping methods of these CC algorithms are presented in Table 2. Note that in some algorithms (i.e., CC-SMP, DECC-DG, CBCC-1, and CBCC-2) problem decomposition has been conducted before optimization. In the following experiments, the number of FEs used for grouping is not counted. The exact FEs consumed by variable grouping on each function are given in [2]. The optimizer of these seven compared algorithms is the SaNSDE [26], a self-adaptive differential evolution algorithm with neighborhood search.

The mean and standard deviation values of the algorithms over 25 independent runs are compared in Table 3. For each function, the best result (with a significance level of 0.05) is marked in bold. As can be seen, the proposed CC-SMP performs the best on 12 out of 20 functions, followed by DECC-I (4/20), DECC-DG (2/20), and CBCC-2 (2/20). In addition, in Table 4 pairwise Wilcoxon test comparisons between the CC-SMP and each peer algorithm are made on each test function. The CC-SMP performs significantly better in 77.9% (109 out of 140) of the comparison cases but significantly worse in only 16.4% (23 out of 140). Therefore, the selective multipopulation strategy effectively enhances the CC algorithm on LSGO problems.

To better understand the dynamics of the CC-SMP, Figure 4 shows the average fitness values of the CC-SMP and three compared algorithms over 25 independent runs. It can be seen that on most of the test functions the CC-SMP converges more quickly.

The Friedman average rankings are given in Table 5 to conduct multiple statistical comparisons according to different separabilities (fully, partially, and nonseparable) and function features (unimodal and multimodal). It can be seen that the CC-SMP also obtains the best ranking results.

4.3. Behavior Analysis

Although ten child populations are initialized for each subproblem, at each iteration only some of them are selected to be active for evolution. In this section, the behavior of the SMP is studied to understand how the SMP selects active populations. For each subproblem, we record the fittest value of each active (selected by the SMP) child population during the evolution process. Three typical selective behaviors can be observed in Figures 5–7, described as follows:

First, several child populations evolve simultaneously, and the best-performing one is always selected to be active. Figure 5 shows the dynamics of the selected child populations in the 22nd subproblem when optimizing F1 (forty 25-D subcomponents). It can be seen that the 10th child population is always selected to evolve until a certain number of FEs is reached. Actually, the 10th child population has discovered the global optimum by then and is restarted afterwards (avoiding getting stuck for a long time). This means that the SMP gives the best-performing child population more evolution opportunities. In addition to the best-performing child population, the diversity-controlling criterion in the SMP also keeps several other child populations occasionally active to search for the global optimum.

Second, if the landscape is unimodal or the diversity of the child populations is relatively small, only the best-performing child population is selected to persistently evolve. Figure 6 shows the dynamics of the selected child populations in the 17th subproblem when optimizing F9 (unimodal). It can be seen that the 10th child population is always selected to be active. Although several other child populations are also selected at the beginning, only the 10th one is persistently selected after a certain number of FEs. In such a case, the corresponding subproblem can be optimized by only one child population and computing resources are saved.

Third, in a multimodal landscape the persistently selected child populations easily get stuck but can be restarted by the SMP to keep a proper level of diversity. Figure 7 shows the dynamics of the selected child populations in the 29th subproblem when optimizing F10 (multimodal). It can be seen that stagnation takes place in the 2nd and 7th child populations. However, due to the restart scheme in the SMP, both can be selected to be active again. In addition, every time a child population is restarted, an alternative one is selected to take the lead role of evolution and to persistently evolve till stagnation.

4.4. Sensitivity Analysis to Cooperation Frequency

According to the description in Section 3.2, the main idea of the SMP is to improve the cooperation among subproblems by sharing collaborators which are more informative. Whenever a subproblem updates its collaborators from the information pool, a cooperation takes place.

In the CC-SMP, a subproblem triggers a cooperation only when it is stagnated and about to be restarted. To further investigate the impact of the frequency with which the subproblems cooperate, in the following experiments the CC-SMP is implemented with four fixed cooperation frequencies. Note that the cooperation frequency denotes the number of cycles (lines 13–21 in Algorithm 2) that are executed between every two occurrences of cooperation.

Table 6 shows the performance under the four cooperation frequencies. It can be seen that on 13 out of 20 functions the performance changes obviously (the maximal value is larger than 1.5 times the minimal value). In particular, a frequency of 1 always leads to a relatively poor performance and therefore an obvious change of performance. This is because a frequency of 1 means that the subproblems update their collaborators in every cycle. In such a case, the landscape may change at a high frequency and there is not sufficient time for the optimizer to explore the new landscape. Accordingly, the quality of the informative collaborators, and thus of the resultant cooperation, decreases. This can also be verified by the convergence curves on selected functions shown in Figure 8. It can be seen that the algorithm with a frequency of 1 always obtains the lowest convergence rate.

Although Table 6 and Figure 8 do not show a clear superiority of the restart-triggered cooperation (used in CC-SMP by default) over the best fixed frequencies, the restart strategy implies an adaptive behavior when conducting cooperation: there is no need to carefully set a fixed cooperation frequency, since a cooperation is triggered as soon as a stagnation takes place. As seen from Figure 9, on different functions the algorithm with the restart scheme adaptively adjusts its cooperation frequency. More cooperation takes place when solving multimodal functions, because more child populations need to search simultaneously to obtain better informative collaborators from the multimodal landscape. These child populations need to jump out of local optima and update the collaborative information, which increases the number of occurrences of cooperation.

5. Conclusions

In this paper, we have proposed a novel CC algorithm named CC-SMP whose cooperation among subproblems is enhanced by the selective multiple population (SMP) scheme. The main motivation of this paper is to provide informative collaborators among subproblems throughout the CC process. By properly utilizing those collaborators, the inherent problem of CC algorithms caused by inaccurate problem decomposition is expected to be addressed. We have identified two challenges in obtaining informative collaborators. First, the informative collaborators should be representative individuals with good fitness and diversity. Second, due to the interactive nature of CC algorithms, the landscape of the subproblems is dynamic rather than static.

To address these two challenges, the SMP scheme has been proposed to enhance the cooperation of CC algorithms. A multipopulation scheme with a restart-after-stagnation procedure has been designed to conduct optimization for each subproblem. Several child populations simultaneously search in different local areas and locate the corresponding local optima as informative collaborators. The restart-after-stagnation procedure has been incorporated to help the child populations adapt to a dynamic landscape. To save computing resources and speed up convergence, not all child populations evolve all the time. A biobjective selection has also been incorporated to identify the qualified child populations according to the criteria of representative individuals (fitness and diversity). Only the selected child populations are active in the next evolutionary cycle while the others are frozen to save computing resources. In the experimental studies, the proposed CC-SMP has been compared with 7 state-of-the-art CC algorithms on twenty 1000-dimensional benchmark functions. Both the pairwise comparison (Wilcoxon test) and the multiple comparison (Friedman ranking) results confirm the significant superiority of the proposed CC-SMP. In addition, behavior analysis of the SMP scheme and sensitivity analysis to the cooperation frequency have been conducted to further understand the CC-SMP.

Future work can focus on realizing the cooperation among subproblems more efficiently and accurately. Techniques from the machine learning field may be used to settle the combinatorial explosion problem between subproblems. The data generated in the optimization process may also be utilized to improve the quality of the collaborative information in our follow-on work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been supported by the National Natural Science Foundation of China (nos. 6117235 and 61473233).