Abstract

This article proposes an evolutionary multiagent framework of the cooperative co-evolutionary multiobjective model (CCMO-EMAS), aimed at equipment layout optimization in engineering. In this framework, each agent encodes a multiobjective cooperative co-evolutionary model together with its algorithms and corresponding settings. In each iteration, the agents are executed in turn, and each agent optimizes one subpopulation obtained from the system decomposition. A collaboration mechanism is introduced to build complete solutions and evaluate individuals in the cooperative co-evolutionary algorithm. After each subpopulation is optimized once, the corresponding agent is evaluated according to the improvement of the system memory. Moreover, the agent team itself is evolved through an elite genetic algorithm. Finally, the proposed CCMO-EMAS framework is verified on a multimodule satellite equipment layout problem.

1. Introduction

Researchers have presented diverse viewpoints on the concept of the agent according to their research fields, and there is still no uniform definition. Currently, using agent-based models to solve optimization problems is attracting the attention of an increasing number of researchers [1]. A multiagent system (MAS) is a set of agents that interact with each other under defined rules or given conditions. By combining multiagent systems with evolutionary algorithms, several researchers have proposed agent-based evolutionary algorithms [2].

There are mainly three patterns for agent-based evolutionary algorithms. (1) The MAS is used as a management layer to control the evolutionary procedure. In such methods, agents have their own functions, and optimization is achieved through coordination and interaction among the agents. Therefore, when solving different problems, the multiagent model must be reestablished according to the situation. For instance, Cardon et al. [3] used a GA together with a multiagent system to solve multiobjective job shop scheduling problems. (2) The MAS represents the population of the algorithm. The typical method in this field is the multiagent genetic algorithm. As an example, Zhong et al. [4] treated each solution in the population as one agent, and all agents are distributed on a grid in a two-dimensional space, with each node linking only to its four neighboring nodes. This is equivalent to decomposing the solution and search space, which is effective in improving the diversity of the algorithm. Such an approach is better regarded as an algorithm than as a framework. (3) Each agent in the MAS represents an algorithm model, where the representative work is the research of Hanna and Cagan [5]. This evolutionary multiagent system (EMAS) is a framework that can integrate many kinds of evolutionary algorithms in a MAS and changes the algorithm model through the evolution of agents. The EMAS framework adjusts the algorithm models and parameter settings through collaboration and interaction between agents, and thereby obtains better solutions.

In the past decades, EMAS has been successful in solving engineering design problems, especially in terms of cooperation and functional combination of algorithms. Many representative systems have been proposed, such as A-Team [6] and A-Design [7]. However, the main problem of these systems is that the algorithm and parameters represented by an agent usually remain unchanged during the optimization process, so the full potential of each agent cannot be realized. Based on the above research, Landry and Cagan provided a universal EMAS framework [8] which enhanced the cooperation of agents and further proposed three levels of cooperation between agents: full cooperation, partial cooperation, and no cooperation. The algorithm parameters are also encoded into the genes of the agent and change with the evolution of the agent. Landry and Cagan [9] studied EMAS further and proposed a protocol-based EMAS method, which encodes the optional operators of different algorithms into the genes of the agent. Egan et al. [10] applied EMAS to an intricate design problem in bio-nanotechnology and realized cooperation between humans and the evolutionary agent system through a graphical interface. As noted above, EMAS has achieved excellent results for single-objective problems. However, it has deficiencies in solving multiobjective combinational optimization, such as Pareto nondominated ranking and low selection pressure. References on EMAS with a multiobjective evolutionary model are scarce, especially for multiobjective combinational problems that require divide and conquer [11, 12]. Taking the multicabin satellite equipment layout problem as an example, the conventional method is to decompose the original problem into several subproblems based on the principle of divide and conquer and then to solve them with a multiobjective cooperative co-evolutionary algorithm [13].

The main work of this paper is to propose an evolutionary multiagent framework of the multiobjective cooperative co-evolutionary model (CCMO-EMAS), which is compared with two original multiobjective cooperative co-evolutionary algorithms on an engineering problem, i.e., the multicabin satellite equipment layout problem (MLSELP). The paper is organized as follows. Section 2 provides a brief introduction to EMAS, and the CCMO-EMAS framework is proposed in Section 3. The model of the MLSELP is introduced in Section 4. The framework is validated on the MLSELP, and the optimization results of all considered algorithms are analyzed and discussed in Section 5. Finally, conclusions and future work are given in Section 6.

2. Evolutionary Multiagent System (EMAS)

In the description of Landry and Cagan [8], an EMAS is composed of multiple agents, also known as an agent team. The gene of each agent in the team represents or describes an algorithm model, including its configuration and parameters. The agent team uses multiple algorithm models to solve the same problem, and the solutions are shared and refined through a shared memory and the search space of the problem. The structure of EMAS is shown in Figure 1. It is noteworthy that the agent team also evolves in order to improve the algorithm models. Agents in EMAS are encoded with a binary string, which builds different algorithm models according to the gene positions of the chromosome. The operation process of EMAS is as follows:

Firstly, each agent must be activated before evolution so that the algorithm it represents is executed. During activation, the algorithm is assigned initial solutions and runs a certain number of iterations to generate new solutions based on the parameters stored in the agent. The initial solutions can be taken from the shared memory or generated randomly. Based on these two sources of initial solutions, EMAS defines three levels of cooperation between agents: (1) full cooperation, where the whole input population is taken from the shared memory; (2) partial cooperation, where each member of the input population has a 50% chance of coming from the shared memory; and (3) no cooperation, where the whole input population is randomly generated. In EMAS, the shared memory not only provides the initial populations for agents but also acts as an external archive storing the best solutions found so far. The objective values of the new solutions are used to evaluate the agent: after each activation, the agent's fitness is proportional to the difference between the best of the new solutions and the best solution in the shared memory, and the shared memory is then updated. To keep the population size in the shared memory constant, EMAS removes surplus solutions based on their objective values. If the best newly generated solution is worse than the best one in the shared memory, the agent's fitness remains unchanged and the algorithm continues to run for a certain number of iterations. In addition, each agent has an attribute called "age", initialized to zero. Each time the algorithm in the agent runs, the agent's age increases by one. If the age reaches a specified value, the agent undergoes a mutation operation, and its fitness value decreases at the same time.
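As an illustration of the three cooperation levels, the following Python sketch shows how an agent's initial population could be drawn from the shared memory or generated at random; the function and argument names (`shared_memory`, `make_random_solution`) are our own illustrative choices, not part of the original EMAS implementation.

```python
import random

def initial_population(level, shared_memory, pop_size, make_random_solution):
    """Draw an agent's starting solutions under one of the three EMAS
    cooperation levels (illustrative sketch; shared_memory is assumed
    to be a non-empty list of solutions)."""
    population = []
    for _ in range(pop_size):
        if level == "full":
            # full cooperation: every initial solution comes from the shared memory
            population.append(random.choice(shared_memory))
        elif level == "partial":
            # partial cooperation: 50% chance of reusing a shared-memory solution
            if random.random() < 0.5:
                population.append(random.choice(shared_memory))
            else:
                population.append(make_random_solution())
        else:
            # no cooperation: every initial solution is randomly generated
            population.append(make_random_solution())
    return population
```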

Secondly, the agent team evolves based on the fitness values. In EMAS, the evolutionary operation of agents is similar to that of a binary genetic algorithm, including reproduction, crossover, mutation, and selection. Agents with better fitness values are selected as parent agents and allowed to preserve beneficial genes. Each parent agent is only permitted to reproduce once in each iteration. In reproduction, the two parent agents are duplicated, and a single-point crossover is executed to generate two offspring agents.

Meanwhile, each new offspring agent also has a 50% probability of mutating into a completely different agent. As mentioned above, an agent might also mutate passively when its age reaches the specified value. After crossover and mutation, the offspring agents are activated and their fitness values are calculated. Agents with the same fitness are further sorted by age, and the youngest is chosen. This process repeats until the termination conditions are satisfied. In this EMAS, the three levels of cooperation between agents achieve excellent results on benchmark test functions, but the method easily falls into local optima when solving combinational problems with complicated constraints.

As mentioned above, EMAS has mostly been used to solve single-objective optimization problems and rarely multiobjective ones. Moreover, the EMAS framework does not consider co-evolutionary algorithms and has difficulty solving combinational problems with a large number of variables.

3. An Evolutionary Multiagent Framework of the Multiobjective Cooperative Co-Evolutionary Model (CCMO-EMAS)

Compared with conventional heuristic algorithms, cooperative co-evolutionary algorithms (CCEAs) are generally used to solve complex combinational problems with more variables. A CCEA is based on the idea of divide and conquer, that is, decomposing the complex problem into several subproblems. In CCEAs, the initial population of system solutions is first divided into several populations of subsystem solutions based on the problem decomposition. At each iteration, a heuristic algorithm optimizes the subsystem populations one after another. To reduce the effects of coupling, the fitness function of each subsystem is the same as that of the system. Therefore, a significant problem arises in building complete solutions for the evaluation of each subsystem, which we call the collaboration mechanism. Especially in multiobjective cooperative co-evolutionary algorithms (such as NSCCGA [14] and MOCCA [15]), the collaboration mechanism is the critical factor in the performance of the algorithm. Meanwhile, parameter setting is another problem and plays an essential role in combinational problems. To address these problems, an evolutionary multiagent framework of multiobjective cooperative co-evolutionary algorithms (CCMO-EMAS) is proposed in this study.
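As a minimal sketch of the decomposition step, the snippet below splits a population of complete solutions (flat variable vectors) into subpopulations by contiguous index blocks; the even split is an assumption for illustration, and a real decomposition would follow the problem structure.

```python
def decompose(population, n_subsystems):
    """Split each complete solution (a flat list of variables) into
    n_subsystems contiguous blocks, yielding one subpopulation per
    subsystem (an even split is assumed here for simplicity)."""
    n_vars = len(population[0])
    block = n_vars // n_subsystems
    subpopulations = []
    for k in range(n_subsystems):
        lo = k * block
        hi = (k + 1) * block if k < n_subsystems - 1 else n_vars
        subpopulations.append([solution[lo:hi] for solution in population])
    return subpopulations
```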

3.1. Structure of the CCMO-EMAS

Similar to the structure of EMAS, CCMO-EMAS contains an agent team with many agents. The structure of CCMO-EMAS is shown in Figure 2. In CCMO-EMAS, each agent is defined as a multiobjective cooperative co-evolutionary model together with its algorithms and settings. The initial population of the system is decomposed into several subpopulations (i.e., populations of subsystems) based on the characteristics of the combinational problem. CCMO-EMAS uses several subsystem memories to store these subpopulations and to provide initial solutions for the agents. The number of subsystem memories depends on the system decomposition and determines the number of agents. Furthermore, a system memory is employed as an external archive to preserve excellent solutions and to provide representative system solutions for the collaboration mechanism. In multiobjective combinational problems, the nondominated solutions of the system are usually selected as the excellent solutions.

The gene of an agent represents the algorithm type and related parameters, mainly including the setting of the collaboration mechanism. Over the iterations of CCMO-EMAS, the algorithms and settings evolve along with the evolution of the agent team, where each agent optimizes one subsystem. In each iteration of CCMO-EMAS, the agents are first activated, which means running the algorithm model encoded in the agent. The initial solutions of each algorithm come from a different subsystem, and each agent corresponds to one subsystem. Therefore, it is impossible for several agents to have the same initial population.

After the activation, the agents are evaluated by their scores and ages, which depend on the degree to which their algorithms improve the solutions. In this study, this degree of improvement is measured by the update of the nondominated solutions in the system memory. After evaluation, the agent team is evolved by crossover and mutation. Finally, an elite strategy is used to select the newly generated agents, realizing the improvement of the multiobjective cooperative co-evolutionary algorithms. The flowchart of CCMO-EMAS is shown in Figure 3.

It is worth noting that the purpose of CCMO-EMAS is to obtain the most suitable multiobjective cooperative co-evolutionary algorithms for a specific type of combinational problem, and the optimal solutions of the system generated along the way are used to evaluate the agents.

The pseudocode of CCMO-EMAS is shown in Algorithm 1.

Input: f: multiobjective function; P: initial solution set; N: number of agents in the team; AT: initial agent team; T_max: number of iterations
Output: optimal agent team AT; solutions in the system memory M_S
(1) Initialize T = 0 and i = 0;
(2) Decompose population P into subpopulations P_1, ..., P_N;
(3) Copy P_i into the subsystem memories M_i (i = 1, ..., N);
(4) while T < T_max do
(5)  for each agent a_i in AT do
(6)   Decode agent a_i into a multiobjective cooperative co-evolutionary algorithm A;
(7)   Choose the subpopulation P_i in M_i as the initial population of algorithm A;
(8)   Use algorithm A to optimize P_i based on the collaboration mechanism in a_i;
(9)   Update the system memory M_S through the complete solutions built from P_i;
(10)  Obtain the score and age of agent a_i;
(11) end for
(12) Use the elite genetic algorithm to evolve the agent team AT based on the scores and ages;
(13) Update the subsystem memories M_i through the optimized subpopulations;
(14) T = T + 1;
(15) end while
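A compact Python rendering of the loop in Algorithm 1 is given below as a sketch; `run_algorithm`, `score_agent`, and `evolve_team` stand for the agent's decoded algorithm, the system-memory update, and the elite genetic algorithm, respectively, and are assumed interfaces rather than the authors' implementation.

```python
def ccmo_emas(agent_team, subsystem_memories, system_memory, n_iterations,
              run_algorithm, score_agent, evolve_team):
    """Schematic main loop mirroring Algorithm 1. Each agent k optimizes the
    k-th subpopulation; its score is the improvement of the system memory."""
    for _ in range(n_iterations):                              # line (4)
        for k, agent in enumerate(agent_team):                 # line (5)
            subpop = subsystem_memories[k]                     # line (7)
            # lines (6)-(8): decode the agent and optimize the subpopulation
            new_subpop, complete_solutions = run_algorithm(agent, subpop, system_memory)
            subsystem_memories[k] = new_subpop
            # lines (9)-(10): update the archive and score the agent
            agent["score"] = score_agent(system_memory, complete_solutions)
        agent_team = evolve_team(agent_team)                   # line (12)
    return agent_team, system_memory
```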
3.2. Agent Definition

In CCMO-EMAS, the agent adopts real-number coding to denote the multiobjective cooperative co-evolutionary model. The chromosome of an agent contains five gene positions, represented by five real numbers. The first two positions denote the collaboration mechanism, which covers the subsystem memory selection and the representative selection. As described above, the collaboration mechanism in this framework builds complete solutions for fitness calculation by combining the subpopulation with representatives of other subpopulations or of the system memory. The last three positions represent the settings of the algorithm in the cooperative co-evolutionary model, namely the algorithm type, the crossover factor, and the mutation factor. The structure of the agent chromosome can be seen in Figure 4.

The specific meaning of each gene position is as follows (a schematic decoding is sketched after the list):
(i) Selection of subsystem memory: this determines which subsystem memory provides the initial population for the agent. Once an agent has chosen a subsystem memory, the other agents cannot choose it.
(ii) Selection of representative: this determines how to select a representative to form the complete solution. The details are introduced in Section 3.4.
(iii) Algorithm: this determines which algorithm is chosen as the basic algorithm of the multiobjective cooperative co-evolutionary model. Considering the requirements of the algorithm comparison and the selection constraints of the latter parameters, this paper only provides two optional multiobjective evolutionary algorithms, NSGA-II and NSDE.
(iv) Crossover operator: this assigns the value of the crossover operator (CR) in the algorithm.
(v) Mutation operator: this assigns the value of the mutation operator (F) in the algorithm.
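A schematic decoding of the five-gene chromosome might look as follows; the mapping from gene values in [0, 1] to concrete settings is an illustrative assumption, not the paper's exact encoding.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    subsystem: int        # index of the subsystem memory feeding this agent
    representative: str   # one of the four selection modes of Section 3.4
    algorithm: str        # basic multiobjective algorithm
    crossover: float      # crossover factor CR
    mutation: float       # mutation factor F

def decode_agent(genes, n_subsystems):
    """Map a five-gene real-valued chromosome (values in [0, 1]) to an
    algorithm configuration; the discretization below is illustrative."""
    rep_modes = ["BC", "RC", "BA", "RA"]
    return AgentConfig(
        subsystem=min(int(genes[0] * n_subsystems), n_subsystems - 1),
        representative=rep_modes[min(int(genes[1] * len(rep_modes)), len(rep_modes) - 1)],
        algorithm="NSGA-II" if genes[2] < 0.5 else "NSDE",
        crossover=genes[3],
        mutation=genes[4],
    )
```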

3.3. Activation and Evolution

In CCMO-EMAS, every agent is activated at the beginning of each iteration. In the activation stage, the algorithm represented by the agent runs and optimizes the subpopulation of the corresponding subsystem. The primary function of the activation process is to generate new solutions and update the subpopulation of the subsystem. According to the collaboration mechanism, the new solutions and the representative are combined into complete solutions of the system, which are stored in the system memory. Afterwards, the system memory is updated and solutions are removed based on nondominated rank and crowding distance. In this study, the score of an agent is proportional to its ability to make positive changes to the nondominated solution set in the system memory: the number of replaced solutions in the system memory is the score of the agent.

Sometimes, it is difficult to adequately reflect the quality of an agent based only on its score. As each agent only optimizes the subproblem of one subsystem, the score also depends largely on the collaboration mechanism. In multiobjective cooperative co-evolutionary algorithms, the quality of the nondominated solutions in the system memory is also closely related to the representative. In particular, when the score of an agent is zero, the agent can reselect a representative to form complete solutions and update the system memory again. The age of the agent is initialized to zero in each iteration, and it increases by one each time the agent reselects a representative. Once the age of an agent reaches three, the agent ends its activation and receives a bad evaluation. In this case, the multiobjective cooperative co-evolutionary algorithm in the agent is considered less effective and will directly undergo the mutation operation. Through the age mechanism, the agent gains more opportunities to use different representatives, which can improve the cooperative ability of the multiobjective cooperative co-evolutionary algorithm. The flowchart of agent activation is shown in Figure 5.
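The score and age logic described above can be sketched as follows; `build_complete_solutions` is an assumed callable that applies the current representative selection, the retry count reflects our reading of the age limit, and the crowding-distance truncation of the archive is omitted for brevity.

```python
def dominates(a, b):
    """Pareto dominance for minimization: objective vector a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_system_memory(archive, new_entries):
    """Insert nondominated newcomers into the system memory (a list of
    (objective vector, solution) pairs) and return how many archived
    entries they displace; this count is used as the agent's score."""
    replaced = 0
    for objs, sol in new_entries:
        dominated = [entry for entry in archive if dominates(objs, entry[0])]
        for entry in dominated:
            archive.remove(entry)
        replaced += len(dominated)
        if not any(dominates(entry[0], objs) for entry in archive):
            archive.append((objs, sol))
    return replaced

def activate(agent, build_complete_solutions, archive, max_age=3):
    """Score the agent; while the score is zero and the age limit has not
    been reached, reselect a representative (age + 1) and try again."""
    agent["age"] = 0
    agent["score"] = update_system_memory(archive, build_complete_solutions(agent))
    while agent["score"] == 0:
        agent["age"] += 1                       # reselect a representative
        if agent["age"] >= max_age:
            break                               # ends activation with a bad evaluation
        agent["score"] = update_system_memory(archive, build_complete_solutions(agent))
    return agent["score"]
```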

As noted above, obtaining the score and age of an agent requires calculating the nondominated ranking and crowding distance more than once for multiobjective combinational problems. To improve computational efficiency, an elitist strategy is adopted in CCMO-EMAS, which retains a certain number of excellent agents. Meanwhile, the parent individuals are selected from the other agents based on their scores and ages using roulette selection: the higher the score and the younger the age, the greater the probability of being selected as a parent agent. The parent agents then execute single-point crossover and Gaussian mutation to generate new agents, and the next-generation agent team is composed of these newly generated agents together with the elite agents.
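The team-level evolution could then be sketched as below; each agent is represented as a dict with "genes" (five reals in [0, 1]), "score", and "age", and the score/age weighting and mutation scale are illustrative choices rather than the paper's exact settings.

```python
import random

def evolve_team(team, elite_fraction=0.1, sigma=0.1):
    """Elite-preserving evolution of the agent team: keep the best agents,
    fill the rest by roulette selection (higher score, lower age preferred),
    single-point crossover, and Gaussian mutation of the real-valued genes."""
    ranked = sorted(team, key=lambda a: (-a["score"], a["age"]))
    n_elite = max(1, int(elite_fraction * len(team)))
    next_team = [dict(a) for a in ranked[:n_elite]]                  # elitism

    weights = [(a["score"] + 1.0) / (a["age"] + 1.0) for a in team]  # roulette weights
    while len(next_team) < len(team):
        p1, p2 = random.choices(team, weights=weights, k=2)
        cut = random.randrange(1, len(p1["genes"]))                  # single-point crossover
        genes = p1["genes"][:cut] + p2["genes"][cut:]
        genes = [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))    # Gaussian mutation, clipped
                 for g in genes]
        next_team.append({"genes": genes, "score": 0, "age": 0})
    return next_team
```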

These steps are repeated until the termination conditions are reached. The repeated iteration of the agent team continually explores the algorithm space of the multiobjective cooperative co-evolutionary model, which is beneficial for obtaining more applicable multiobjective cooperative co-evolutionary algorithms for multiobjective combinational problems.

3.4. Collaboration Mechanism

As described above, EMAS defines three levels of cooperation between agents, two of which (partial and no cooperation) rely on randomly generated solutions in each iteration. For combinational problems with performance constraints, full cooperation is a better choice than the other two cooperation levels. Especially after the solutions have evolved to a certain stage, it is difficult to satisfy the performance constraints using randomly initialized populations under partial or no cooperation. Without full cooperation between the agents, the quality of the nondominated solutions in the system memory is hard to improve, and the evaluation of the agents is affected.

Different from EMAS, multiobjective cooperative co-evolutionary algorithms must employ a collaboration mechanism to evaluate the solutions of a subsystem. The collaboration mechanism, which can also be called the collaboration method for complete solutions, combines a subsystem solution with the representative of excellent solutions to form a system solution. In CCMO-EMAS, a new cooperation mode is realized through the collaboration mechanism in a cooperative co-evolution framework.

Agents employ two collaboration methods: the first combines the respective solutions from each subsystem memory into the representative [16], and the second selects the representative directly from the system memory [17]. Regardless of the collaboration method, each agent has to cooperate with the other agents. In CCMO-EMAS, each collaboration method also has two selection modes, best selection and random selection, so there are four representative selection modes for agents (a schematic sketch follows this list):
(i) Best collaboration of subsystem memory (BC): the representative is selected randomly from the Pareto optimal solutions in each subsystem memory to form a complete solution.
(ii) Random collaboration of subsystem memory (RC): the representative is selected randomly from each subsystem memory to form a complete solution.
(iii) Best selection of system memory (BA): a complete representative is selected randomly from the Pareto optimal solutions in the system memory.
(iv) Random selection of system memory (RA): a complete representative is selected randomly from the system memory.
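A schematic sketch of the four modes is given below; `pareto_front` is an assumed helper that returns the nondominated subset of a memory, and solutions are treated as flat lists so that subsystem parts can be concatenated or spliced.

```python
import random

def build_complete_solution(mode, part, k, subsystem_memories, system_memory, pareto_front):
    """Combine the solution `part` of subsystem k with a representative chosen
    under one of the four modes (BC, RC, BA, RA). Memories are lists of
    solutions; pareto_front(pop) returns its nondominated subset."""
    if mode in ("BC", "RC"):
        # collaborate with the other subsystem memories, piece by piece
        pieces = []
        for j, memory in enumerate(subsystem_memories):
            if j == k:
                pieces.append(part)
            else:
                pool = pareto_front(memory) if mode == "BC" else memory
                pieces.append(random.choice(pool))
        return [gene for piece in pieces for gene in piece]
    # "BA" or "RA": take a complete representative from the system memory
    pool = pareto_front(system_memory) if mode == "BA" else system_memory
    representative = list(random.choice(pool))
    start = sum(len(memory[0]) for memory in subsystem_memories[:k])
    representative[start:start + len(part)] = part      # splice in the agent's part
    return representative
```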

As a research emphasis of multiobjective cooperative co-evolutionary algorithms, each kind of representative selection has its own characteristics at different stages of evolution.

4. Multicabin Satellite Equipment Layout Case

This section describes the details of the multicabin satellite equipment layout problem and the related configurations. The mathematical model is given in Section 4.1. The engineering requirements and algorithm configurations are presented in Section 4.2. The performance metrics are detailed in Section 4.3.

4.1. Problem Description

The multimodule satellite equipment layout problem is a multiobjective engineering problem that is nonlinear, noncontinuous, and multimodal. The case is taken from [18], which updated the simplified international communication satellite of [19]. In this problem, a set of $n$ independent components is to be laid out on the support surfaces of the satellite, where $n$ is the number of components. The design variable of the $i$-th component is $(x_i, y_i, z_i, \alpha_i, s_i)$, where $x_i$, $y_i$, $z_i$ are the centroid coordinates, $\alpha_i$ is the angle between the long side of the cuboid's bottom surface and the $x$-axis, and $s_i$ is the serial number of the supporting surface. In this case, $\alpha_i$ is restricted so that all the cuboids are placed in an orthogonal layout. A simplified layout scheme of the multicabin satellite is shown in Figure 6, together with the reference coordinate system, the coordinate system of the multicabin satellite, and the coordinate systems of the components. The transformation matrices between these three coordinate systems can be obtained from [20].

The mathematical model of this multiobjective optimization and the related formulas are as follows:

$$\min F(X) = \left(F_1(X), F_2(X), F_3(X)\right),$$

where $F(X)$ indicates the three objective functions of the multimodule satellite equipment layout and $F_k(X)$ ($k = 1, 2, 3$) represents the moment of inertia, the inertia angle, and the centroid distance, respectively:

$$F_1(X) = J_x + J_y + J_z,\qquad F_2(X) = |\theta_x| + |\theta_y| + |\theta_z|,\qquad F_3(X) = \sqrt{(x_c - x_e)^2 + (y_c - y_e)^2 + (z_c - z_e)^2},$$

where $J_x$, $J_y$, and $J_z$ are the moments of inertia about each axis ($x$, $y$, $z$) of the reference coordinate system, respectively. Also, $\theta_x$, $\theta_y$, and $\theta_z$ are the angles between the principal axes of inertia and the axes of the cabin. Besides, $(x_c, y_c, z_c)$ and $(x_e, y_e, z_e)$ are the calculated centroid and the expected centroid of the satellite module system. The formulas of the moment of inertia are as follows:

$$J_x = \sum_{i=1}^{n}\left[J_{xi} + m_i\left((y_i - y_c)^2 + (z_i - z_c)^2\right)\right],\quad
J_y = \sum_{i=1}^{n}\left[J_{yi} + m_i\left((x_i - x_c)^2 + (z_i - z_c)^2\right)\right],\quad
J_z = \sum_{i=1}^{n}\left[J_{zi} + m_i\left((x_i - x_c)^2 + (y_i - y_c)^2\right)\right],$$

where $J_{xi}$, $J_{yi}$, and $J_{zi}$ are the moments of inertia of each component and $m_i$ is its mass. It is noteworthy that the formulas for the cylinder and the cuboid are different. The formulas for a cylinder of radius $r_i$ and height $h_i$ are shown below:

$$J_{xi} = J_{yi} = \frac{m_i\left(3r_i^2 + h_i^2\right)}{12},\qquad J_{zi} = \frac{m_i r_i^2}{2}.$$

The formulas for a cuboid with bottom edges $a_i$, $b_i$ and height $h_i$ are similar:

$$J_{xi} = \frac{m_i\left(b_i^2 + h_i^2\right)}{12},\qquad J_{yi} = \frac{m_i\left(a_i^2 + h_i^2\right)}{12},\qquad J_{zi} = \frac{m_i\left(a_i^2 + b_i^2\right)}{12}.$$

The formulas of the calculated centroid are shown below:

$$x_c = \frac{\sum_{i=1}^{n} m_i x_i}{\sum_{i=1}^{n} m_i},\qquad y_c = \frac{\sum_{i=1}^{n} m_i y_i}{\sum_{i=1}^{n} m_i},\qquad z_c = \frac{\sum_{i=1}^{n} m_i z_i}{\sum_{i=1}^{n} m_i}.$$

The formulas of the inertia angle are shown below:

$$\theta_x = \frac{1}{2}\arctan\frac{2J_{yz}}{J_y - J_z},\qquad \theta_y = \frac{1}{2}\arctan\frac{2J_{xz}}{J_z - J_x},\qquad \theta_z = \frac{1}{2}\arctan\frac{2J_{xy}}{J_x - J_y},$$

where $J_{xy}$, $J_{xz}$, and $J_{yz}$ are the products of inertia, whose formulas are as follows:

$$J_{xy} = \sum_{i=1}^{n} m_i (x_i - x_c)(y_i - y_c),\quad J_{xz} = \sum_{i=1}^{n} m_i (x_i - x_c)(z_i - z_c),\quad J_{yz} = \sum_{i=1}^{n} m_i (y_i - y_c)(z_i - z_c).$$

The nonoverlapping constraint is shown as follows:

$$\Delta V_{ij} = 0,\qquad i \neq j,\ \ i, j = 0, 1, \ldots, n,$$

where $\Delta V_{ij}$ is the overlap volume of any two components $i$ and $j$, as well as of a component $i$ and the bulkhead of the cabin (denoted by index $0$).

The detailed geometry of the satellite cabin and components can be seen in [19].
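Based on the formulas reconstructed above (our rendering, not the authors' code), the three objectives of a candidate layout can be computed as in the following sketch; each component is described by a small dict whose keys are illustrative assumptions.

```python
import math

def objectives(components, expected_centroid):
    """Compute F1 (total moment of inertia), F2 (sum of inertia angles), and
    F3 (centroid distance) for a layout. Each component is a dict with mass m,
    centroid (x, y, z), and either radius r and height h (cylinder) or
    edges a, b and height h (cuboid)."""
    total_mass = sum(c["m"] for c in components)
    xc = sum(c["m"] * c["x"] for c in components) / total_mass
    yc = sum(c["m"] * c["y"] for c in components) / total_mass
    zc = sum(c["m"] * c["z"] for c in components) / total_mass

    Jx = Jy = Jz = Jxy = Jxz = Jyz = 0.0
    for c in components:
        m, dx, dy, dz = c["m"], c["x"] - xc, c["y"] - yc, c["z"] - zc
        if "r" in c:                                   # cylinder
            jx = jy = m * (3 * c["r"] ** 2 + c["h"] ** 2) / 12
            jz = m * c["r"] ** 2 / 2
        else:                                          # cuboid
            jx = m * (c["b"] ** 2 + c["h"] ** 2) / 12
            jy = m * (c["a"] ** 2 + c["h"] ** 2) / 12
            jz = m * (c["a"] ** 2 + c["b"] ** 2) / 12
        Jx += jx + m * (dy ** 2 + dz ** 2)             # parallel-axis terms
        Jy += jy + m * (dx ** 2 + dz ** 2)
        Jz += jz + m * (dx ** 2 + dy ** 2)
        Jxy += m * dx * dy
        Jxz += m * dx * dz
        Jyz += m * dy * dz

    theta_x = 0.5 * math.atan2(2 * Jyz, Jy - Jz)       # inertia angles
    theta_y = 0.5 * math.atan2(2 * Jxz, Jz - Jx)
    theta_z = 0.5 * math.atan2(2 * Jxy, Jx - Jy)

    xe, ye, ze = expected_centroid
    F1 = Jx + Jy + Jz
    F2 = abs(theta_x) + abs(theta_y) + abs(theta_z)
    F3 = math.sqrt((xc - xe) ** 2 + (yc - ye) ** 2 + (zc - ze) ** 2)
    return F1, F2, F3
```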

4.2. Engineering Requirements and Algorithm Configurations

The multicabin satellite is composed of two cabins and eight bearing surfaces, on which 120 components need to be placed. The components are simplified to 72 cylinders and 48 cuboids in order to reduce the computational cost. All components are required to be placed on the bearing surfaces, and the cuboids are placed orthogonally.

The technical specifications for the satellite cabin system are as follows (a simple feasibility check is sketched after the list):
(i) The sum of the moments of inertia should be less than the allowable value of 2030.
(ii) The sum of the inertia angles should be less than the allowable value of 0.09 rad.
(iii) The distance between the expected centroid and the calculated centroid (centroid distance) should be less than the allowable value of 6 mm.
(iv) None of the components (including the bulkhead of the satellite) may interfere with one another.
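A simple feasibility check against these allowable values might look as follows (a sketch; the moment-of-inertia threshold is quoted without its unit, as in the specification above).

```python
def is_feasible(F1, F2, F3, overlap_volume,
                J_max=2030.0, theta_max=0.09, d_max=6.0):
    """Check a candidate layout against the allowable values: total moment of
    inertia, total inertia angle (rad), centroid distance (mm), and the
    noninterference requirement (zero overlap volume)."""
    return (F1 <= J_max and F2 <= theta_max and F3 <= d_max
            and overlap_volume == 0.0)
```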

The settings of CCMO-EMAS are as follows. The number of system memories is one, and the number of subsystem memories is eight, the same as the number of agents. The size of all memories is 200. CCMO-EMAS uses NSDE and NSGA-II as the two basic algorithms in the multiobjective cooperative co-evolutionary model. The number of CCMO-EMAS iterations is 25, the algorithm in each agent runs for 50 generations, and the total number of generations is 10000. The proportion of elite agents is 10%.

Two multiobjective cooperative co-evolutionary algorithms, NSCCGA and NSCCDE, are compared with CCMO-EMAS, and their configurations are similar. The algorithm population is decomposed into eight subpopulations, and the population size is also 200. The number of iterations of NSCCDE and NSCCGA is also 25, the algorithm for each subpopulation runs for 50 generations, and the total number of generations is likewise 10000.

Furthermore, a noncooperative algorithm, NSGA-II, is also used to solve the problem. The population size is 200, and the total number of generations is 10000.

4.3. Metrics of Performance

In this paper, the multiobjective combinational problem of the satellite cabins has three objectives, so at least three performance indicators are required to evaluate the performance of the algorithms; this is common practice and widely applied in the multiobjective optimization literature. The three performance indicators are as follows [21] (a compact implementation sketch follows the list):
(i) Convergence performance indicator. The generational distance (GD) is the average distance from the Pareto front obtained by an algorithm to the real Pareto optimal front,
$$GD = \frac{1}{n}\sqrt{\sum_{i=1}^{n} d_i^2},$$
where $n$ is the number of solutions on the obtained Pareto front and $d_i$ is the minimum Euclidean distance between the $i$-th obtained solution and the real Pareto front. The smaller the value of GD, the better the convergence of the Pareto optimal solutions; GD reflects the exploitation ability of an algorithm.
(ii) Distribution performance indicator. The spacing (SP) is used to evaluate the distribution of the Pareto optimal solution set,
$$SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d} - d_i\right)^2},\qquad d_i = \min_{j \neq i}\sum_{k=1}^{M}\left|f_k^i - f_k^j\right|,$$
where $n$ is the size of the Pareto solution set, $M$ is the number of objective functions, and $\bar{d}$ is the mean of the $d_i$. The smaller the value of SP, the more uniform the solutions on the obtained Pareto front; SP indicates the ability of an algorithm to maintain diversity.
(iii) Dispersion performance indicator. The maximum spread (MS) is used to evaluate how well the obtained nondominated solutions cover the real Pareto optimal front,
$$MS = \sqrt{\frac{1}{M}\sum_{k=1}^{M}\left[\frac{\min\left(f_k^{\max}, F_k^{\max}\right) - \max\left(f_k^{\min}, F_k^{\min}\right)}{F_k^{\max} - F_k^{\min}}\right]^2},$$
where $M$ is the number of objectives, $f_k^{\max}$ and $f_k^{\min}$ are the maximum and minimum values of the $k$-th objective on the obtained Pareto front, and $F_k^{\max}$ and $F_k^{\min}$ are the maximum and minimum values of the $k$-th objective on the real Pareto front. The larger the value of MS, the better the spread of the obtained nondominated solutions; MS reveals the exploration scope of the nondominated solutions.
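The three indicators can be computed as in the sketch below; fronts are lists of objective vectors, and the reference front plays the role of the real Pareto front discussed next.

```python
import math

def generation_distance(front, reference):
    """GD: average distance from the obtained front to the reference front."""
    dists = [min(math.dist(p, q) for q in reference) for p in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)

def spacing(front):
    """SP: uniformity of the obtained front (0 means perfectly even spacing)."""
    d = [min(sum(abs(pi - qi) for pi, qi in zip(p, q))
             for q in front if q is not p) for p in front]
    d_mean = sum(d) / len(d)
    return math.sqrt(sum((d_mean - di) ** 2 for di in d) / (len(d) - 1))

def maximum_spread(front, reference):
    """MS: coverage of the reference front by the obtained front."""
    M = len(front[0])
    total = 0.0
    for k in range(M):
        f_max, f_min = max(p[k] for p in front), min(p[k] for p in front)
        F_max, F_min = max(p[k] for p in reference), min(p[k] for p in reference)
        total += ((min(f_max, F_max) - max(f_min, F_min)) / (F_max - F_min)) ** 2
    return math.sqrt(total / M)
```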

Because the real Pareto optimal front of this practical engineering problem is unknown, we integrate all the Pareto fronts obtained by the various algorithms into an approximate optimal front and use it as the reference Pareto front for evaluation [22].

5. Results and Discussions

In this section, the multicabin satellite equipment layout problem is solved by CCMO-EMAS and three other algorithms, and the results are analyzed and discussed.

5.1. Result Analysis of CCMO-EMAS

To show the position of the Pareto front more clearly, a reference basis point (RBS) is taken as a basis point in the solution space. The RBS is the solution that just meets the noninterference constraints and the allowable values of the design objectives, i.e., the moment of inertia equals the allowable value of 2030, the inertia angle equals 0.09 rad, and the centroid distance equals 6 mm. As shown in Figure 7, the Pareto front obtained by CCMO-EMAS is far from the RBS and satisfies the engineering allowable values. In summary, Pareto solutions that meet the allowable values should be better than the RBS.

To further analyze the shape and distribution of the Pareto front obtained by CCMO-EMAS, a 2D view of the Pareto front is given in Figure 7. Therein, the color becomes deeper toward the lower positions as the centroid distance F3 becomes smaller. The three objectives of the Pareto front are listed in Table 1 and Figure 8. It can be seen in Figure 7 that the solutions with minimum F1 and minimum F2 lie at the two ends of the Pareto front, and the solution with minimum F3 has the deepest color. This illustrates that the Pareto front obtained by CCMO-EMAS has a balanced distribution and accords with the character of a Pareto front. A balanced solution of the three objectives is selected based on practical experience (selected optimal Pareto solution, SOPS), and the 2D layout scheme of the SOPS can be seen in Figure 9.

5.2. Comparison and Analysis of Different Algorithms

The computing results of CCMO-EMAS, NSCCDE, NSCCGA, and NSGA-II are compared below. As shown in Figure 10 and Table 2, the mean convergence performance of CCMO-EMAS ranks first among the four algorithms, and NSCCDE, NSCCGA, and NSGA-II rank second, third, and last, respectively. Specifically, the mean convergence performance of CCMO-EMAS is 14.34% better than that of NSCCDE, 300% better than NSCCGA, and 500% better than NSGA-II. The mean values of the distribution performance are ranked in the order CCMO-EMAS, NSCCGA, NSGA-II, and NSCCDE: the mean distribution performance of CCMO-EMAS is 53.16% better than NSCCGA, 97.18% better than NSGA-II, and 188.64% better than NSCCDE. The mean dispersion performance of CCMO-EMAS is 26.82% better than NSGA-II, 56.33% better than NSCCGA, and 106.28% better than NSCCDE.

As mentioned above, the convergence, distribution, and dispersion performance of CCMO-EMAS all rank first among the four algorithms, which shows that CCMO-EMAS has advantages in the following aspects. Firstly, CCMO-EMAS can use multiple algorithms, and their parameters are adaptive. According to the no-free-lunch theorem, different algorithms have their own advantages; therefore, an approach that uses two algorithms simultaneously can perform better than one using a single algorithm. Secondly, CCMO-EMAS employs a system memory as an external archive to save the nondominated solutions found so far, and continually updating this archive is beneficial for improving the quality of the Pareto front. Thirdly, CCMO-EMAS adopts two kinds of collaboration mechanisms consisting of four representative selection modes. It can also reselect the representative during the activation of an agent, which is equivalent to using several representatives and helps to improve the dispersion and distribution performance.

However, CCMO-EMAS still has some shortcomings. The main one is that its computing time is much longer than those of the other algorithms: the average computing time of CCMO-EMAS is 845% longer than NSGA-II, 600% longer than NSCCDE, and 373% longer than NSCCGA. This problem needs to be addressed in future research.

5.3. Comparison and Analysis of Collaboration Methods

In order to study the effect of the different collaboration methods, each representative selection mode in CCMO-EMAS is compared and analyzed. The four modes are best selection (BA), best collaboration (BC), random selection (RA), and random collaboration (RC). In each comparison run, CCMO-EMAS uses only one of the collaboration methods to solve the problem, and the results are then compared and analyzed.

As seen in Figure 11 and Table 3, the convergence performance of BA ranks first, BC second, RA third, and RC last. The mean convergence performance of BA is 13.18% better than BC, 17.41% better than RA, and 78.03% better than RC. The convergence performances of BA and BC are better, which illustrates that selecting the representative from the optimal solutions is helpful to the convergence performance of the algorithm. The convergence performance of RC falls far behind the other three collaboration methods.

The distribution performance of the four collaboration methods is ranked in the order RA, BA, BC, and RC. The mean distribution performance of RA is 25% better than BA, 89.32% better than BC, and 525% better than RC. The distribution performances of RA and BA outperform the others, so selecting the complete representative from the system memory has a positive effect on the distribution performance of the algorithm. Meanwhile, the distribution performance of RC is far worse than those of the other three collaboration methods.

In terms of the dispersion performance, BC ranks first, RA second, BA third, and RC last. The mean dispersion performance of BC is 21.42% better than RA, 30.89% better than BA, and 31.73% better than RC, so the dispersion performance of BC is the best. The dispersion performances of the other three selection modes are nearly the same.

The computing times of all four methods are also very close to each other.

5.4. Analysis of Problem Characteristics

The previous subsections compared and analyzed the different algorithms and collaboration methods in the multiobjective cooperative co-evolutionary models of CCMO-EMAS. To further study the characteristics of multiobjective cooperative co-evolutionary models on a specific multiobjective combinational problem, this section analyzes the occurrence frequency of the different basic algorithms and collaboration methods at different iteration stages. The purpose is to build targeted multiobjective cooperative co-evolutionary models for specific problems in the future.

As can be seen in Figure 12, in the earlier evolutionary stage of the agents (the first five iterations of CCMO-EMAS), NSDE is used more often than NSGA-II, which illustrates that NSDE has more advantages in the earlier stage for this problem. It can be concluded from the previous computing results that NSDE performs better than NSGA-II in improving convergence. In the earlier stage of evolution, convergence is the general requirement, so the agents tend to use NSDE as the basic algorithm more frequently. In the middle and later evolutionary stages of the agents (the last five iterations of CCMO-EMAS), NSGA-II is used more often than NSDE. In particular, the agents rarely use NSDE in the last 5–10 iterations, because the convergence rate of the algorithm drops sharply once it has converged to a certain degree. To better explore the solution space, the agents then require algorithms with better dispersion and distribution performance, and NSGA-II is mainly used in iterations 10–25 of the agents' evolution. The algorithm selection of CCMO-EMAS is relatively simple because only two basic algorithms are used; in future research, more algorithms will be added.

Another research emphasis is the occurrence frequency of the collaboration methods. CCMO-EMAS employs different representative selection modes at different stages, with varying effects. As can be seen in Figure 13, in the earlier evolutionary stage of the agents (the first five iterations of CCMO-EMAS), BC and RA are used more often than BA. In the earlier iterations, the algorithms have not yet converged to a particular degree, so it is necessary to explore every direction of the solution space. At this stage, the quality of the nondominated solutions in the system memory is poor and still accumulating; BA is beneficial to the convergence of the algorithms, but it depends on the solution quality in the system memory. In the middle evolutionary stage of the agents (iterations 5–20 of CCMO-EMAS), BA and RA are the two most frequently used modes because they are archive-based collaboration methods, while BC is also used to improve the dispersion of the algorithms; RC is still rarely used in this period. In the later evolutionary stage of the agents (iterations 20–25 of CCMO-EMAS), BC and RA are the two most used, which shows that dispersion and distribution performance, rather than convergence, have become the main objectives at this stage. BA and RC are less chosen in this period; RC is the worst of the four collaboration methods and is rarely used.

To sum up, it is appropriate to use different basic algorithms and collaboration methods at different iteration stages for a specific class of multiobjective combinational problems.

6. Conclusions and Future Work

The primary contribution of this study is an evolutionary multiagent framework of the multiobjective cooperative co-evolutionary model. In this CCMO-EMAS framework, each agent represents a multiobjective cooperative co-evolutionary algorithm. Over the iterations of CCMO-EMAS, the algorithms evolve with the evolution of the agents, and different basic algorithms and collaboration methods are applied at different evolutionary stages according to the characteristics of the specific problem. CCMO-EMAS is used to solve a complex multiobjective problem, the MLSELP. Through analysis and comparison, the proposed framework obtains better results than the other three algorithms.

The analysis and discussion show that CCMO-EMAS has several advantages for multiobjective combinational problems. Firstly, CCMO-EMAS can obtain high-quality solutions without depending on a specific algorithm or on manually tuned parameters. Secondly, as a learning tool, CCMO-EMAS has the potential to guide the evolution process. Finally, CCMO-EMAS offers flexibility and validity when handling variable design spaces. The main disadvantage of CCMO-EMAS is its long calculation time; to improve computational efficiency, a parallel computing framework will be introduced into CCMO-EMAS in our next study.

In this multiobjective combinational problem, 120 components are laid out on the support surfaces of the satellite, including 72 cylinders and 48 cuboids. It is assumed that all components are rigid bodies with uniformly distributed mass. The relevant geometric parameters are shown in Figure 8. The total mass of the satellite cabin system is 1553.068 kg; its centroid coordinate relative to the reference coordinate system and its inertia matrix are taken from the problem definition in [18].

The dimensions and masses of components are shown in Table 4.

The data of the SOPS layout scheme by CCMO-EMAS is shown in Table 5.

Data Availability

The experimental data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the International Cooperation Project of the Shandong Academy of Sciences (Award 2019GHZD08), the 2019 Shandong Province Key R&D Plan (Award 2019JZZY010126), the China National Key R&D Plan (Award 2018YFE0197700), and the Key R&D Program of Shandong (Award 2017GGX50107). The authors are grateful for the support of the Taishan Scholar Specially Recruited Experts program. This research was also funded by the National Natural Science Foundation of China (Grants 61472062 and 91546123).