Abstract

Nature-inspired computing has attracted considerable attention since its origin, especially in the field of multiobjective optimization. This paper proposes a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA). A novel mutation operator, named the layered disruption method, is integrated into the proposed algorithm to enhance the exploration and exploitation abilities of DMOEOA. To demonstrate the advantages of the proposed algorithm, DMOEOA is compared with five existing multiobjective optimization algorithms on a variety of benchmark problems. The test results indicate that DMOEOA exhibits better performance on these problems, with a better balance between convergence and distribution. In addition, the proposed algorithm and the five existing multiobjective optimization algorithms are applied to the structural optimization of an elastic truss. The obtained results demonstrate that DMOEOA not only performs well on benchmark problems but is also expected to find wide application in real-world engineering optimization problems.

1. Introduction

Conventional mathematical optimization methods have the disadvantage of getting trapped in local optima for nonlinear optimization problems. Moreover, such optimization algorithms are often highly complex and specialized. Inspired by the idea of biological evolution in nature, metaheuristic optimization algorithms have attracted considerable attention due to their ability to avoid local optima and their ease of implementation, and research on optimization algorithms has developed rapidly with their emergence. Many nature-inspired optimization algorithms have been proposed in the past few decades, including particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], evolution strategies (ES) [3], the genetic algorithm (GA) [4], the artificial bee colony algorithm (ABC) [5], the gravitational search algorithm (GSA) [6], the bat algorithm (BA) [7], the flower pollination algorithm (FPA) [8], the grey wolf optimizer (GWO) [9], the whale optimization algorithm (WOA) [10], disruption particle swarm optimization (DPSO) [11], and the equilibrium optimization algorithm (EO) [12]. Most of them are used to handle single-objective optimization problems.

However, real-world optimization problems usually involve more than one objective to be optimized; that is, real problems are commonly multiobjective. In contrast to a single-objective problem, a multiobjective problem takes several conflicting objectives into consideration simultaneously. Instead of a single optimal solution, a multiobjective optimization problem usually admits a set of alternative trade-offs between the objectives, called Pareto optimal solutions [13]. Traditional methods for handling multiobjective optimization problems sometimes cannot produce well-distributed solutions along the Pareto front and may have difficulty finding Pareto optimal solutions in nonconvex regions [14]. In 1985, Schaffer [15] proposed the vector evaluated genetic algorithm (VEGA) and applied it, for the first time, to an optimization problem involving multiple objectives. A series of early multiobjective optimization algorithms based on Pareto optimality were then proposed, such as the multiple objective genetic algorithm (MOGA) [16], the niched Pareto genetic algorithm (NPGA) [17], and the nondominated sorting genetic algorithm (NSGA) [18]. These algorithms are characterized by individual selection based on nondominated ranking and by population diversity maintenance based on a fitness-sharing mechanism. Owing to the effectiveness of nature-inspired optimization algorithms, research on multiobjective optimization algorithms has attracted much attention in the past few decades. Several classical multiobjective evolutionary algorithms have been proposed, including the nondominated sorting genetic algorithm version 2 (NSGA-II) [19], region-based selection in evolutionary multiobjective optimization (PESA2) [20], the improved strength Pareto evolutionary algorithm (SPEA2) [21], the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [22], multiobjective particle swarm optimization (MOPSO) [23], and the multiobjective simulated-annealing algorithm (MOSA) [24]. In recent years, various novel multiobjective optimization algorithms have been proposed, such as the multiobjective gravitational search algorithm (MOGSA) [25], the grid-based evolutionary algorithm (GrEA) [26], the multiobjective grey wolf optimizer (MOGWO) [27], the multiobjective ant lion optimizer (MOALO) [28], and the multiobjective whale optimization algorithm (MOWOA) [29].

By absorbing the effective search mechanism of the equilibrium optimization algorithm and combining it with a novel mutation operator proposed in this work, this paper presents a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA) that is able to handle multiobjective optimization problems. The novel mutation operator, named the layered disruption method, is proposed here for the first time with the aim of enhancing the exploration and exploitation abilities of DMOEOA. In addition, according to the No Free Lunch theorem [30], no single optimization algorithm can solve all optimization problems effectively. This theorem also provides researchers with opportunities and motivation to propose new multiobjective optimization algorithms.

The rest of this paper is organized as follows. The basic concepts of multiobjective optimization problems and the grid mechanism are given in Section 2. The equilibrium optimization operator and the layered disruption method are introduced in Section 3. Section 4 provides experimental results and analysis of DMOEOA on benchmark functions against five multiobjective optimization algorithms; the analysis of the layered disruption method and a parametric study are also conducted in this section. In addition, the application of DMOEOA to the structural optimization of an elastic truss is presented in Section 5. Finally, some concluding remarks are given in Section 6.

2. Basic Concepts

In this section, the concepts of multiobjective optimization problems (MOPs) are given first, then some definitions of grid mechanism are provided.

2.1. Multiobjective Optimization Problems

The optimization of a problem with more than one objective is called multiobjective optimization. Without loss of generality, an MOP can be formulated as a minimization problem as follows:

\min F(x) = \left(f_1(x), f_2(x), \ldots, f_M(x)\right), \quad \text{subject to } x_j^L \le x_j \le x_j^U, \; j = 1, 2, \ldots, D,

where x = (x_1, x_2, \ldots, x_D) refers to the decision vector in the search space \Omega, f_i(x) denotes the ith objective to be optimized in the objective space, and x_j^L and x_j^U represent the lower limit and upper limit of the jth decision variable, respectively.

Definition 1. (Pareto dominance). Given two decision vectors x_1 and x_2 with corresponding objective vectors F(x_1) and F(x_2), x_1 dominates x_2 (denoted as x_1 \prec x_2) if and only if f_i(x_1) \le f_i(x_2) for every objective i and f_j(x_1) < f_j(x_2) for at least one objective j.

Definition 2. (Pareto optimality). A solution x^* is Pareto optimal if and only if there is no other solution x in the search space \Omega such that x \prec x^*.

Definition 3. (Pareto optimal set). The set of all Pareto optimal solutions is called the Pareto optimal set (PS), defined as PS = \{x \in \Omega \mid \nexists\, x' \in \Omega,\; x' \prec x\}.

Definition 4. (Pareto optimal front). Given the Pareto optimal set (PS), the Pareto optimal front is defined as PF = \{F(x) \mid x \in PS\}.
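To make these definitions concrete, the following minimal Python sketch checks Pareto dominance between two objective vectors and extracts the nondominated subset of a small population; the function names and example data are illustrative only and are not taken from the paper.

import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 (minimization)."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def nondominated(objectives):
    """Return the indices of the nondominated solutions in a population."""
    objectives = np.asarray(objectives)
    keep = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            keep.append(i)
    return keep

# Example: three candidate solutions of a biobjective minimization problem
print(nondominated([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0]]))  # -> [0, 1]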

2.2. Grid Mechanism

Grid mechanism [23, 26, 29] is introduced into DMOEOA due to its conciseness and high efficiency. In this mechanism, each individual is assigned a grid location in each dimension of the objective space. The grid mechanism is able to reflect the diversity and convergence of the obtained solutions. Some definitions of grid mechanism used in this work are as follows [26]:

Definition 5. (Grid boundary). Let f_k^{min} and f_k^{max} represent the minimum and maximum values of the kth objective among the current solutions. Following [26], the lower limit lb_k and the upper limit ub_k of the grid in the kth objective are

lb_k = f_k^{\min} - \frac{f_k^{\max} - f_k^{\min}}{2\, div}, \qquad ub_k = f_k^{\max} + \frac{f_k^{\max} - f_k^{\min}}{2\, div},

where div represents the number of divisions (i.e., grids) of the objective space in each dimension.

Definition 6. (Grid location). The grid location of an individual x in the kth objective can be determined as

GL_k(x) = \left\lceil \frac{f_k(x) - lb_k}{d_k} \right\rceil,

where d_k = (ub_k - lb_k)/div is the width of the grid in the kth objective and \lceil \cdot \rceil denotes the rounding-up function. Figure 1 illustrates the grid locations of two individuals in a biobjective space.

Definition 7. (Grid ranking). The grid ranking of an individual x is defined as the summation of its grid locations over all objectives,

GR(x) = \sum_{k=1}^{M} GL_k(x).

The smaller the GR value of an individual, the more individuals in the obtained solutions it dominates. As shown in Figure 1, one individual has a grid ranking of 4 while another has a grid ranking of 6, which means that the former is closer to the true Pareto front than the latter.

Definition 8. (Grid coordinate point distance). The normalized Euclidean distance between an individual and the minimal boundary point of its grid is called the grid coordinate point distance (GCPD), defined as

GCPD(x) = \sqrt{\sum_{k=1}^{M} \left( \frac{f_k(x) - \left(lb_k + (GL_k(x) - 1)\, d_k\right)}{d_k} \right)^2}.

Among individuals that have the same grid ranking, the one with the smaller GCPD value should be selected first. For example, in Figure 1, two individuals share the same GR value; the one with the smaller GCPD should be preferred. The general framework of the grid mechanism is shown in Algorithm 1.

Input: Pop, K = number of objectives, N = size of the population
Output: Pop
Create the grid for Pop by equations (7)-(8)
for i = 1 : N do
  for j = 1 : K do
    Calculate GLj for each individual by equation (9)
  end for
  Calculate GR and GCPD for each individual by equations (10)-(11)
end for
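As a companion to Algorithm 1, the following Python sketch computes grid boundaries, grid locations, grid rankings, and GCPD values for a population of objective vectors. It is a minimal illustration assuming the GrEA-style boundary widening of [26]; it is not a literal transcription of equations (7)-(11).

import numpy as np

def grid_setup(F, div=10):
    """Grid mechanism sketch (Definitions 5-8). F is an (N, K) matrix of objective values."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    pad = (fmax - fmin) / (2.0 * div)                  # half-cell widening (assumed, as in GrEA)
    lb, ub = fmin - pad, fmax + pad                    # grid boundaries (Definition 5)
    d = (ub - lb) / div                                # cell width per objective
    d = np.where(d == 0, 1.0, d)                       # guard against a degenerate objective
    GL = np.ceil((F - lb) / d).astype(int)             # grid location, rounded up (Definition 6)
    GL = np.clip(GL, 1, div)
    GR = GL.sum(axis=1)                                # grid ranking (Definition 7)
    corner = lb + (GL - 1) * d                         # minimal boundary point of each cell
    GCPD = np.sqrt((((F - corner) / d) ** 2).sum(axis=1))  # Definition 8
    return GL, GR, GCPD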

3. The Proposed Algorithm

3.1. Equilibrium Optimizer (EO)

The equilibrium optimization algorithm was first proposed by Faramarzi et al. [12]. The equilibrium optimizer is inspired by the control-volume mass-balance model, which is applied to the estimation of dynamic and equilibrium states. In the equilibrium optimizer, each individual (solution) with its concentration (position) is regarded as a search agent: each individual in the population plays the role of a solution, and the individual's concentration is analogous to a particle's position in the particle swarm optimization algorithm [1]. For more information about EO, the reader may refer to [12]. Owing to its simple principle, easy implementation, and fast convergence, EO has been widely applied to various single-objective optimization problems, including economic dispatch [31], structural design optimization [32], and image segmentation [33]. The concentration-updating formulation of EO is as follows [12]:

\vec{C} = \vec{C}_{eq} + \left(\vec{C} - \vec{C}_{eq}\right)\vec{F} + \frac{\vec{G}}{\vec{\lambda} V}\left(1 - \vec{F}\right),

where the volume V is taken as unity, \vec{C}_{eq} refers to the concentration of the equilibrium candidate, \vec{F} and \vec{G} represent the exponential term and the generation rate, respectively, and \vec{\lambda} is a random vector in the interval [0, 1] whose length equals the number of dimensions of the individual's concentration \vec{C}.

3.1.1. Equilibrium Pool and Equilibrium Candidate

The equilibrium state indicates the final convergence state of EO. At the beginning of the search process, there is no knowledge about the final equilibrium state, so equilibrium candidates are used to provide a search guide for the individuals in the population. In the equilibrium optimizer, the equilibrium candidates are the four best individuals found during the whole optimization process according to their fitness values, plus one additional individual whose concentration is the average of those four best individuals. These five individuals constitute the equilibrium pool.

However, for multiobjective optimization problems there is usually a set of alternative trade-offs between the objectives, so the solutions cannot be sorted by a single fitness value. Therefore, in DMOEOA, the classical external repository of MOPSO [23] is used to construct the equilibrium pool. The solutions in the external repository are regarded as the equilibrium candidates, and the repository keeps a historical record of the nondominated solutions found along the whole search process. In each iteration, each individual updates its concentration (position) using roulette-wheel selection among the equilibrium candidates: the more equilibrium candidates share the same grid location in the equilibrium pool, the less likely each of them is to be selected to guide the particles in the population. This selection method helps maintain the diversity of the obtained solutions during the search process.
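A minimal sketch of how this roulette-wheel choice of an equilibrium candidate could be realised is given below, assuming that a candidate's selection weight is the reciprocal of the number of archive members sharing its grid location; the exact weighting used by DMOEOA is not reproduced here, so this is illustrative only.

import numpy as np

def select_equilibrium_candidate(grid_locations, rng=np.random.default_rng()):
    """Pick the index of one equilibrium candidate from the external repository.
    Assumption: a candidate's weight is the reciprocal of how many archive members
    occupy the same grid cell, so crowded cells are chosen less often."""
    keys = [tuple(map(int, gl)) for gl in grid_locations]
    counts = {k: keys.count(k) for k in set(keys)}
    weights = np.array([1.0 / counts[k] for k in keys])
    return int(rng.choice(len(keys), p=weights / weights.sum()))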

3.1.2. Exponential Term

The concentration-updating rule is mainly controlled by the exponential term \vec{F}:

\vec{F} = e^{-\vec{\lambda}(t - t_0)},

where t is a function of the iteration number that decreases as the iterations proceed,

t = \left(1 - \frac{Iter}{Max\_iter}\right)^{a_2\, Iter / Max\_iter},

in which Iter and Max\_iter represent the current iteration and the maximum number of iterations, respectively, and a_2 is a constant that controls the exploitation ability of EO. With the aim of achieving high convergence by slowing down the search speed, t_0 is defined as follows:

t_0 = \frac{1}{\vec{\lambda}} \ln\!\left(-a_1\, \mathrm{sign}(\vec{r} - 0.5)\left[1 - e^{-\vec{\lambda} t}\right]\right) + t,

where a_1 is a constant that affects the exploration ability, \mathrm{sign}(\vec{r} - 0.5) controls the direction of exploration and exploitation, and \vec{r} is a random vector in [0, 1]. In this work, the values of a_1 and a_2 are set to 2 and 1, respectively, consistent with the original EO algorithm. Therefore, the exponential term can be formulated as follows:

\vec{F} = a_1\, \mathrm{sign}(\vec{r} - 0.5)\left(e^{-\vec{\lambda} t} - 1\right).

3.1.3. Generation Rate

The generation rate \vec{G} plays an important role in the equilibrium algorithm and is used to improve the exploitation ability of EO:

\vec{G} = \vec{G}_0\, e^{-\vec{k}(t - t_0)},

where \vec{G}_0 represents the initial value and \vec{k} indicates the decay constant. The initial value is

\vec{G}_0 = \vec{GCP}\left(\vec{C}_{eq} - \vec{\lambda}\vec{C}\right), \qquad \vec{GCP} = \begin{cases} 0.5\, r_1, & r_2 \ge GP \\ 0, & r_2 < GP \end{cases}

where \vec{GCP} is called the generation rate control parameter, GP represents the generation probability, which is set to 0.5 according to the original EO algorithm, and r_1 and r_2 are two random numbers in [0, 1]. This study assumes \vec{k} = \vec{\lambda}. Thus, the generation rate can be formulated as follows:

\vec{G} = \vec{G}_0\, \vec{F}.
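Putting these pieces together, the following Python sketch performs one concentration update for a single individual, following the EO formulas summarized above from [12]; the random-number handling and vectorization details are illustrative assumptions rather than the authors' implementation.

import numpy as np

def eo_update(C, C_eq, it, max_it, a1=2.0, a2=1.0, GP=0.5, rng=np.random.default_rng()):
    """One EO concentration update for an individual C guided by candidate C_eq (sketch)."""
    dim = C.size
    lam = rng.random(dim)                                   # turnover rate lambda
    r = rng.random(dim)
    t = (1.0 - it / max_it) ** (a2 * it / max_it)           # time function
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0)    # exponential term
    r1, r2 = rng.random(), rng.random()
    GCP = 0.5 * r1 if r2 >= GP else 0.0                     # generation rate control parameter
    G0 = GCP * (C_eq - lam * C)                             # initial generation rate
    G = G0 * F                                              # generation rate (k assumed equal to lambda)
    V = 1.0                                                 # unit volume
    return C_eq + (C - C_eq) * F + (G / (lam * V)) * (1.0 - F)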

3.2. Layered Disruption Method (LDM)

Inspired by the disruption phenomenon in astrophysics, an operator named "Disruption" and its variants have been introduced into single-objective evolutionary algorithms [11, 34, 35]. In this paper, a layered disruption method is integrated into a multiobjective equilibrium optimization algorithm to enhance its exploration and exploitation abilities.

3.2.1. Disruption Phenomenon

"When a swarm of gravitationally bound particles having a total mass m approaches too close to a massive object M, the swarm tends to be torn apart. The same thing can happen to a solid body held together by gravitational forces when it approaches a much more massive object" [36]. This is called the disruption phenomenon, which originates from astrophysics [36]. As shown in Figure 2, the swarm is torn apart when the disruption condition given in [11] is satisfied, in which R is the distance between the center of mass of the swarm m and the mass M, and r represents the radius of the swarm m.

3.2.2. Layered Disruption Condition

In order to simulate the disruption phenomenon, individuals in the population with the same grid ranking (GR) value are treated as one group, and different groups have different disruption conditions. This differs from Liu et al. [11], Sarafrazi et al. [34], and Ding et al. [35], in which all individuals are treated as one group. Here, the disruption coefficient Q_i of the ith group is defined in terms of the number of groups n_g and the index i obtained after sorting all groups in increasing order of their GR values.

In the ith group, the S_i individuals with the smallest GCPD values are treated as a whole and denoted as the mass M; the other individuals, which will be disrupted, have the total mass m. S_i is defined from U_i, the number of individuals in the ith group, together with the rounding-up function \lceil \cdot \rceil.

3.2.3. Disruption Operator

When an individual satisfies the disruption condition, a random number D drawn from the Cauchy distribution is utilized to disrupt the individual.

The disruption equation (27) updates the position vector of individual i, where Iter is the current iteration, Max\_iter represents the maximum iteration, and D refers to the disruption operator, a matrix consisting of a set of Cauchy random numbers. It is worth noting that different dimensions of an individual have different Cauchy random numbers, which differs from Liu et al. [11], in which all dimensions of an individual share the same Cauchy random number.

We can observe that an individual with large GR and GCPD values is more likely to be disrupted so that it explores a wide region at the early stage of the search. As the number of iterations increases, the individual instead fully exploits its surrounding area. Therefore, the disruption method proposed in this paper is able to enhance both the exploration and exploitation abilities of the proposed algorithm. The general framework of the layered disruption method is shown in Algorithm 2, and an illustrative sketch follows the pseudocode.

Input: Pop, Ui = number of individuals in the ith group, ng = number of groups
Output: Pop
for i = 1 : ng do
  Si = ⌈Qi Ui⌉
  Calculate GCPD of each individual in Pop by equation (11)
  Sort the individuals in the ith group in increasing order of their GCPD values
  for j = Si + 1 : Ui do
    Disrupt the jth individual in the ith group by disruption equation (27)
  end for
end for
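The following Python sketch illustrates one possible realisation of Algorithm 2. The grouping by grid ranking, the sorting by GCPD, and the per-dimension Cauchy perturbation follow the description above, but the keep-fraction rule, the decay schedule, and the exact form of the position update are assumptions standing in for the paper's disruption coefficient and disruption equation (27), which are not reproduced here.

import numpy as np

def layered_disruption(X, GR, GCPD, it, max_it, rng=np.random.default_rng()):
    """Sketch of the layered disruption method (LDM) on positions X (N x D)."""
    X = X.copy()
    groups = np.sort(np.unique(GR))                       # one group per grid-ranking value
    n_groups = len(groups)
    for rank_index, g in enumerate(groups, start=1):
        members = np.where(GR == g)[0]
        members = members[np.argsort(GCPD[members])]       # increasing GCPD within the group
        keep = 1.0 - rank_index / (n_groups + 1.0)         # assumed: worse-ranked groups keep fewer
        S = max(1, int(np.ceil(keep * len(members))))      # members retained as the mass M (assumed rule)
        scale = 1.0 - it / max_it                          # assumed decay: wide search early, local late
        best = members[0]
        for j in members[S:]:                              # disrupt the remaining individuals
            D = rng.standard_cauchy(X.shape[1]) * scale    # a different Cauchy number per dimension
            X[j] = X[j] + D * (X[j] - X[best])
    return X
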
3.3. The Pseudocode of the DMOEOA Algorithm

The pseudocode of the DMOEOA algorithm is shown in Figure 3.

3.4. Computational Complexity Analysis of the DMOEOA Algorithm

The computational complexity of an algorithm indicates the number of resources required to run it and can reflect the performance of the algorithm. Let N denote the number of individuals in the population and M the number of objectives. The computational complexity of the main steps of DMOEOA is shown in Table 1.

Therefore, the overall computational complexity of DMOEOA is O(MN^2), which is the same as that of the algorithms employed for comparison with DMOEOA in this paper, including MOPSO, MOALO, NSGA-II, MOWOA, and MOGWO.

4. Simulation Results and Discussion

4.1. Parameter Setting and Instances

In this section, three standard benchmark test suites, namely the ZDT [37], DTLZ [38], and UF [39] suites, are utilized to validate the performance of the proposed DMOEOA algorithm. The optimal Pareto fronts of these test functions include continuous, discontinuous, convex, and concave shapes. Five multiobjective optimization algorithms, MOPSO, MOALO, MOWOA, NSGA-II, and MOGWO, are employed for comparison with DMOEOA. The algorithm parameters are listed in Table 2 and are selected in accordance with the original algorithms. For all of the following simulation experiments, the maximum number of iterations and the population size are set to 300 and 200, respectively. For the ZDT [37] and UF [39] suites, the dimension of the search space is set to 30, and for the DTLZ suites [38] it is set to 12. To reduce the influence of randomness on the results, each algorithm is run 30 times on each benchmark test function.

4.2. Performance Metrics

To measure how close the Pareto front produced by DMOEOA is to the optimal Pareto front and how diverse the obtained solutions are, two performance metrics are employed to quantify the performance of the multiobjective optimization algorithms: the Inverted Generational Distance (IGD) [40] and the Delta metric [19].

The Inverted Generational Distance and Delta metrics are formulated as follows:

IGD = \frac{1}{n} \sum_{i=1}^{n} d_i,

where n represents the number of true Pareto optimal solutions and d_i indicates the Euclidean distance between the ith true Pareto optimal solution and its nearest solution in the external repository. In addition to reflecting the convergence of the obtained solutions, IGD reflects their uniformity and coverage: the smaller the IGD value, the better the coverage and convergence of the obtained solutions.

\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} \left| d_i - \bar{d} \right|}{d_f + d_l + (N - 1)\,\bar{d}},

where d_i is the Euclidean distance between consecutive solutions in the obtained set and \bar{d} is the mean of these distances, d_f and d_l represent the distances between the extreme solutions of the true Pareto front and the boundary solutions of the obtained set, and N is the number of obtained solutions. The smaller the Delta value, the better the diversity of the solution set.
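For reference, the two metrics can be computed as in the following Python sketch; the biobjective Delta implementation assumes the obtained front is sorted along the first objective and that the two extreme points of the true Pareto front are supplied by the caller.

import numpy as np

def igd(true_front, obtained):
    """Inverted Generational Distance: mean distance from each true Pareto point
    to its nearest obtained solution (smaller is better)."""
    true_front, obtained = np.asarray(true_front), np.asarray(obtained)
    d = np.linalg.norm(true_front[:, None, :] - obtained[None, :, :], axis=2)
    return d.min(axis=1).mean()

def delta(obtained, extremes):
    """Delta spread metric for a biobjective front, following NSGA-II [19].
    `extremes` holds the two extreme points of the true Pareto front."""
    F = np.asarray(obtained)
    F = F[np.argsort(F[:, 0])]                     # sort along the first objective
    d = np.linalg.norm(np.diff(F, axis=0), axis=1) # distances between consecutive solutions
    d_mean = d.mean()
    d_f = np.linalg.norm(F[0] - np.asarray(extremes[0]))   # distance to the first extreme
    d_l = np.linalg.norm(F[-1] - np.asarray(extremes[1]))  # distance to the last extreme
    return (d_f + d_l + np.abs(d - d_mean).sum()) / (d_f + d_l + (len(F) - 1) * d_mean)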

4.3. Discussion and Analysis

This section provides the statistical results of DMOEOA and the five multiobjective optimization algorithms MOPSO, MOALO, MOWOA, NSGA-II, and MOGWO for the IGD and Delta metrics. The results obtained by these six algorithms on the test functions are shown in Tables 3 and 4 and Figures 4 and 5, with the best value shown in bold. In addition, the Wilcoxon rank-sum test is employed to compare the IGD results obtained by DMOEOA and the five compared algorithms at a significance level of 0.05. The IGD results for the two-objective and three-objective test functions are shown in Tables 3 and 4, respectively, in which each entry marks whether the proposed algorithm is better than, similar to, or worse than its corresponding competitor. The results are also summarized in a wins/ties/losses form, indicating on how many test functions DMOEOA wins against, ties with, and loses to each competitor.

The statistical results shown in Table 3 indicate that DMOEOA provides better performance in convergence and coverage than MOPSO, MOALO, MOWOA, and NSGA-II. From Figure 5, we can observe that MOPSO shows better diversity of the obtained solutions than the other five algorithms on ZDT1. The statistical results on ZDT2 and ZDT3 for IGD in Table 3 show that the proposed DMOEOA algorithm provides better average and standard deviation values of IGD than the other five algorithms. Since IGD reflects both the convergence and coverage performance of an algorithm, this means that DMOEOA provides better convergence and coverage of the obtained solutions on ZDT3. In Figure 5, the boxplot of Delta on ZDT2 indicates that DMOEOA and MOALO show similar performance in the diversity of the obtained solutions, and the boxplot of Delta on ZDT3 suggests that DMOEOA performs better in the diversity of the obtained solutions than MOALO, MOWOA, and MOGWO.

As shown in Table 3, the statistical results of the six algorithms on the ZDT4 and ZDT6 test problems for IGD show that the proposed DMOEOA algorithm outperforms the other five algorithms on average, and the Wilcoxon rank-sum test results indicate that the proposed algorithm has superiority in both coverage and convergence performance. From Figure 5, we can observe that although MOPSO and NSGA-II outperform the other four algorithms in the diversity of the obtained solutions on ZDT4 and ZDT6, these two algorithms show poor convergence ability on the ZDT4 and ZDT6 test functions.

The statistical results of the algorithms on UF1 and UF2 for IGD in Table 3 show that the proposed DMOEOA algorithm provides better results on average and standard deviation of IGD than the other five algorithms, which means that the DMOEOA algorithm shows better performance in convergence and coverage on UF1 and UF2. As shown in Figure 5, the boxplot of Delta on UF1 indicates that DMOEOA provides better performance in the diversity of obtained solutions than MOALO, NSGA-II, and MOGWO. In contrast, the boxplot of Delta on UF2 indicates that MOPSO shows better performance in diversity than the other five algorithms.

The best results on average and standard deviation of IGD for UF3 belong to MOGWO and MOWOA, respectively (see Table 3). The statistical results of the algorithms on UF4 for IGD (see Table 3) indicate that DMOEOA shows better performance in convergence and coverage than the other five algorithms. The boxplot of Delta on UF3 and UF4 shown in Figure 5 indicates that DMOEOA has better performance in diversity than MOALO, MOWOA and NSGA-II.

The UF5 test function has a discontinuous Pareto optimal front. As shown in Figure 4, the best obtained Pareto fronts of the six algorithms on UF5 suggest that the nondominated solutions obtained by DMOEOA are more uniformly distributed than those of the other five algorithms. According to the Wilcoxon rank-sum test results, the convergence ability of the proposed algorithm on UF5 is similar to that of MOGWO. The statistical results for the IGD metric on UF6 (see Table 3) show that the convergence and coverage performance of the proposed DMOEOA algorithm is similar to that of MOPSO, MOALO, and MOWOA.

The UF7 benchmark has a linear Pareto optimal front. Compared with test functions that have disconnected Pareto optimal fronts, it is easier for algorithms to obtain well-distributed solutions on the UF7 test problem. The statistical results for the IGD metric on UF7 (see Table 3) show that the DMOEOA algorithm has better performance in convergence and coverage than MOPSO, MOALO, and NSGA-II. As depicted in Figure 5, the boxplot of Delta on UF7 suggests that the DMOEOA algorithm shows better performance in the diversity of the obtained nondominated solutions than MOALO, MOGWO, and MOWOA.

UF8, UF9, and UF10 are triobjective test problems, and these three benchmarks have complex Pareto optimal fronts, which makes them challenging for all six algorithms. As shown in Figure 4, the best obtained Pareto optimal front of DMOEOA on UF8 is better distributed than those of the other five algorithms. Meanwhile, the statistical results shown in Table 4 suggest that DMOEOA provides better average and standard deviation values of IGD. It can therefore be stated that the proposed DMOEOA algorithm has better performance in both convergence and distribution than the other five algorithms on the UF8 test problem. From Figure 4, we can observe that all six algorithms show poor convergence and distribution on UF9. Compared with the other five algorithms, DMOEOA has superiority in both coverage and convergence of the obtained solutions on UF10, according to the Wilcoxon rank-sum test results shown in Table 4.

DTLZ1 and DTLZ2 are triobjective test problems with multiple local Pareto optimal fronts. The statistical results of the algorithms for IGD on DTLZ1 in Table 4 show that the proposed DMOEOA algorithm performs better in convergence than the other five algorithms. As shown in Figure 4, the best obtained Pareto front of NSGA-II on DTLZ2 is far from the true Pareto optimal front. In contrast, the obtained Pareto fronts of DMOEOA and MOPSO are more uniformly distributed than those of the other four algorithms. Meanwhile, the statistical results of DMOEOA for IGD on DTLZ2 in Table 4 show that DMOEOA and MOPSO have superiority in convergence ability.

Both DTLZ3 and DTLZ4 have concave Pareto optimal fronts. The statistical results of the algorithms on DTLZ3 and DTLZ4 for IGD (see Table 4) show that DMOEOA achieves better average and standard deviation values of IGD than the other five algorithms. MOPSO and MOGWO provide better performance in the diversity of the obtained solutions than the other four algorithms on both DTLZ3 and DTLZ4 (see Figure 5).

DTLZ5 and DTLZ6 are both three-objective test problems with degenerate Pareto optimal fronts. As shown in Table 4, the statistical results for IGD on DTLZ5 indicate that DMOEOA has performance in convergence and coverage similar to that of MOPSO and MOGWO. On DTLZ6, MOPSO shows better performance in both diversity and convergence of the obtained solutions than DMOEOA, MOALO, and NSGA-II (see Table 4 and Figure 5).

DTLZ7 is disconnected in both the Pareto optimal set and the Pareto optimal front. In Table 4, the statistical results for IGD on DTLZ7 suggest that DMOEOA provides better average and standard deviation values of IGD than the other five algorithms, which means that the DMOEOA algorithm shows superiority in both convergence and coverage on DTLZ7. The boxplot of Delta on DTLZ7 indicates that DMOEOA shows better performance in the diversity of the obtained solutions than MOALO, MOWOA, and NSGA-II (see Figure 5).

The above results demonstrate that the DMOEOA algorithm achieves competitive and promising results on the multiobjective test functions, especially the three-objective test problems, with a better balance between convergence and distribution. The statistical results for IGD demonstrate the high convergence ability of DMOEOA. The layered disruption method plays an important role in improving the convergence and distribution performance of DMOEOA: LDM prompts the population to conduct extensive searches in each iteration, and as the number of iterations increases, each individual fully exploits its surrounding area. Thus, the exploration and exploitation abilities of the proposed algorithm are enhanced.

4.4. Analysis of Layered Disruption Method (LDM)

In this work, the layered disruption method (LDM) is introduced into DMOEOA with the aim of enhancing its exploration and exploitation abilities, so it is important to investigate the impact of LDM on DMOEOA. In this section, the ZDT3, ZDT4, ZDT6, and DTLZ2 test problems, which include discontinuous, convex, and concave Pareto optimal fronts, are employed as test instances. The nondominated solutions obtained by DMOEOA and by DMOEOA without LDM (denoted as MOEOA) are recorded at different intermediate generations to observe the impact of LDM on the proposed algorithm. The recorded intermediate generations are 20, 40, 60, 80, 100, 150, and 300. The other parameters of the DMOEOA algorithm are the same as in Table 2. The simulation results are depicted in Figures 6-9. The results on ZDT3 depicted in Figure 6 show that DMOEOA is able to find the true optimal Pareto front after the 60th generation; in contrast, MOEOA cannot completely converge to the true optimal solutions even at the 300th generation. As shown in Figure 7, DMOEOA converges to the true Pareto front after the 80th generation on ZDT4, whereas MOEOA shows poor convergence and distribution ability on ZDT4. Similarly, the simulation results on ZDT6 in Figure 8 indicate that DMOEOA converges to the optimal Pareto front after the 60th generation, while for MOEOA there are still some poor solutions at the 300th generation; moreover, the distribution of the best obtained Pareto solutions of DMOEOA is better than that of MOEOA. Compared with MOEOA, DMOEOA is also able to find the optimal solutions of DTLZ2, shown in Figure 9, faster. From the simulation results depicted in Figures 6-9, we can observe that LDM is able to enhance the exploration and exploitation abilities of the proposed algorithm.

4.5. Parametric Study

The proposed DMOEOA algorithm introduces the grid mechanism, the layered disruption method is built on this mechanism, and the number of grid divisions is a key parameter of the grid. Therefore, it is necessary to investigate the effect of the number of grid divisions on the performance of the DMOEOA algorithm. In this section, the UF5-UF10 test problems are utilized as test instances, and the Inverted Generational Distance (IGD) is employed as the performance metric. We performed runs with different numbers of grid divisions, ranging from 5 to 15, to observe the effect of this parameter on the performance of the DMOEOA algorithm. The other parameters of the DMOEOA algorithm are the same as in Table 2. To reduce the influence of randomness on the results, the proposed algorithm is run 30 times on each benchmark test function for each given number of grid divisions.

As shown in Figure 10, as the number of grid divisions increases from 5 to 9, the mean IGD values of DMOEOA on the UF5, UF6, UF8, and UF9 test instances decrease gradually, and then the mean IGD values increase with further divisions. For the UF7 and UF10 test instances, the mean IGD values increase slowly as the number of grid divisions grows from 9 to 15. From Figure 10, we can observe that too many or too few divisions degrade the performance of the algorithm, whereas an appropriate number of grid divisions improves the convergence and coverage ability of the proposed algorithm. In general, DMOEOA performs well with about nine divisions for both biobjective and triobjective test problems.

5. Application in Structural Optimization of an Elastic Truss

In this section, the proposed DMOEOA algorithm is applied to the structural optimization of a 4-bar elastic truss as a demonstration. The 4-bar elastic truss design optimization problem is a well-known engineering problem in the structural optimization field [41]. The structure of the 4-bar truss is shown in Figure 11.

The truss is designed with the joint displacement and the structural volume as the two objectives, and the cross-sectional areas of the members are the design variables. Mathematically, the two objectives f_1(A) and f_2(A) represent the structural volume and the joint displacement of the truss, respectively, where A_i is the cross-sectional area of the ith member; an illustrative formulation is sketched below.
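For readers who wish to reproduce the experiment, the following Python sketch encodes the 4-bar truss objectives using the formulation commonly reported for this benchmark in the literature (e.g., following [41]); the numerical constants (load F, modulus E, member length L, allowable stress sigma) and the variable bounds below are assumptions taken from that common formulation rather than values stated in this paper.

import numpy as np

# Assumed constants of the common 4-bar truss formulation (not stated in this paper)
F, E, L, SIGMA = 10.0, 2.0e5, 200.0, 10.0

def truss_objectives(x):
    """Return (structural volume, joint displacement) for cross-section areas x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    volume = L * (2.0 * x1 + np.sqrt(2.0) * x2 + np.sqrt(x3) + x4)
    displacement = (F * L / E) * (2.0 / x1 + 2.0 * np.sqrt(2.0) / x2
                                  - 2.0 * np.sqrt(2.0) / x3 + 2.0 / x4)
    return volume, displacement

# Assumed variable bounds of the common formulation
a = F / SIGMA
bounds = [(a, 3 * a), (np.sqrt(2) * a, 3 * a), (np.sqrt(2) * a, 3 * a), (a, 3 * a)]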

The IGD metric is utilized as the performance metric, and the same five algorithms, MOPSO, MOALO, MOWOA, NSGA-II, and MOGWO, are employed for comparison with DMOEOA. The maximum number of iterations and the population size are both set to 100. To reduce the influence of randomness on the results, each algorithm is run 30 times. The statistical results obtained by the six algorithms on this structural optimization problem are shown in Table 5 and Figure 12.

The statistical results in Table 5 suggest that DMOEOA provides better best, average, and standard deviation values of IGD than the other five algorithms. Although MOPSO obtains the better worst-case IGD value, the superiority of DMOEOA in convergence is significant. From Figure 12, we can observe that DMOEOA is able to converge to the true optimal Pareto front, whereas NSGA-II and MOALO show poor distribution of the obtained solutions on this structural optimization problem.

6. Conclusion

This paper proposes a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA) that integrates a layered disruption method (LDM) proposed in this work to enhance the exploration and exploitation abilities of the algorithm. To validate the effectiveness of DMOEOA, three benchmark test suites were selected, and DMOEOA was compared with five other multiobjective optimization algorithms, including well-known and state-of-the-art methods. The test results suggest that the DMOEOA algorithm performs well on these test problems, with a better balance between convergence and distribution. The impact of the layered disruption method was analyzed, and the influence of the number of grid divisions on the performance of the proposed algorithm was discussed. Moreover, the proposed algorithm was also applied to the structural optimization of a four-bar elastic truss. Compared with the other five optimizers, the results show that DMOEOA not only performs well on benchmark test functions but is also expected to find wide application in engineering design optimization problems. Future research should focus on applying the proposed DMOEOA algorithm to constrained real-world engineering problems and many-objective optimization problems.

Data Availability

All data included in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors are grateful to Binbin Chen for helpful discussions. This research was funded by the General Program of the National Natural Science Foundation of China, "A study on the water absorption property of the buoyancy material for the full ocean depth manned submersible" (Grant no. 51879157), the "Construction of a Leading Innovation Team" project of the Hangzhou Municipal Government, and the Startup Funding of New-Joined PI of Westlake University (Grant no. 041030150118).