Research Article | Open Access
Complexity, Volume 2019, Article ID 5301284, 11 pages
https://doi.org/10.1155/2019/5301284

A Multiobjective Brain Storm Optimization Algorithm Based on Decomposition

Cai Dai and Xiujuan Lei

Academic Editor: Matilde Santos
Received: 01 Oct 2018; Revised: 18 Dec 2018; Accepted: 14 Jan 2019; Published: 22 Jan 2019

Abstract

The brain storm optimization (BSO) algorithm is a simple and effective evolutionary algorithm, but existing multiobjective brain storm optimization algorithms often have low search efficiency. This paper combines the decomposition technique with a multiobjective brain storm optimization algorithm (MBSO/D) to improve the search efficiency. Given weight vectors transform a multiobjective optimization problem into a series of subproblems, and the decomposition technique determines the neighboring clusters of each cluster. Solutions of adjacent clusters generate new solutions to update the population, and an adaptive selection strategy is used to balance exploration and exploitation. Furthermore, MBSO/D is compared with three efficient state-of-the-art algorithms, e.g., NSGAII and MOEA/D, on twenty-two test problems. The experimental results show that MBSO/D is more efficient than the compared algorithms on most test problems.

1. Introduction

Multiobjective optimization problems (MOPs) [1-4], which have conflicting objectives, exist widely in the real world. Unlike single-objective optimization problems, a MOP has a series of optimal solutions. Since the number of optimal solutions may be infinite, which makes it impractical to obtain all optimal solutions at once, good diversity and convergence of the obtained solutions are the main goals of a good algorithm for MOPs. Although metaheuristics [5-7] can deal with MOPs, evolutionary algorithms (EAs) [8, 9] naturally approximate the set of objective vectors of Pareto optimal solutions [10] with a population of nondominated solutions.

Multiobjective evolutionary algorithms (MOEAs) exploit population evolution to achieve a set of nondominated solutions, i.e., the optimal solutions of the current population, and are effective methods for solving MOPs. Over the past decades, many MOEAs [11-30] have been proposed, such as multiobjective genetic algorithms [11], multiobjective particle swarm optimization algorithms [12-14], multiobjective differential evolution algorithms [15, 16], multiobjective immune clone algorithms [17], group search optimizers [18], and evolutionary algorithms based on decomposition [19-22]. The MOEA based on decomposition (MOEA/D) [21] may be the most widely studied in recent years. In MOEA/D, a set of uniformly distributed weight vectors and aggregate functions decomposes a MOP into a series of single- or multiobjective subproblems, and all subproblems are simultaneously optimized in one population evolution. Moreover, MOEA/D uses neighbor information to improve the search efficiency and maintains the diversity of the obtained solutions through the uniformly distributed weight vectors and aggregate functions.

In 2011, Shi proposed the brain storm optimization (BSO) algorithm [24], a simple and promising algorithm that simulates the human brainstorming process [24]. Since its invention, it has attracted much attention in the evolutionary computation research community, and many works [25, 26] have developed and applied the BSO algorithm. In BSO, the solutions are grouped into several clusters, and one or more individuals are selected to generate new solutions by some genetic operators. Several multiobjective BSO algorithms [27-29] have been proposed to solve MOPs. In these multiobjective BSO algorithms, the population is updated by new solutions only after the solutions are clustered, which may slow convergence.

In this paper, the decomposition technique is fused with a multiobjective brain storm optimization algorithm (MBSO/D) to solve MOPs. The main motivation of MBSO/D is to improve the performance of multiobjective BSO algorithms (MBSOs) by using the decomposition technique. MBSO/D mainly includes three parts to solve MOPs effectively. First, the current solutions are automatically clustered as each subproblem is optimized, and the size of each cluster is the same; second, the decomposition technique determines the neighboring clusters of each cluster, and parent solutions are selected from adjacent clusters to produce new solutions, which can improve the search efficiency; third, an adaptive selection strategy is used to balance exploration and exploitation.

We have recently done some work [31, 32] on evolutionary algorithms based on decomposition. In [31], a crossover operator based on uniform design and a selection strategy based on decomposition are designed to improve the performance of decomposition-based multiobjective evolutionary algorithms. In [32], we design an initialization method for weight vectors and a new adaptive weight vector adjustment strategy. Compared to our previous works [31, 32], the main originality of this manuscript is that the decomposition technique is used for the first time in a multiobjective brain storm optimization algorithm to improve the performance of MBSOs; the main difference is an adaptive selection strategy, in which the selection probability is updated adaptively according to the success rate of the selection strategy, used to balance exploration and exploitation.

The remainder of this paper is organized as follows. Related work on MOPs and BSO is briefly reviewed in Section 2. Section 3 describes MBSO/D in detail. Experimental results and discussions are contained in Section 4. Section 5 gives the conclusions and some possible directions for future work.

2. Related Work

The related definitions of MOPs, BSO, and aggregation functions are stated in this section.

2.1. Multiobjective Optimization

A MOP can be mathematically formulated as follows [30]:

$$\min\; F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T, \quad \text{subject to } x \in \Omega, \tag{1}$$

where $x = (x_1, x_2, \ldots, x_n)$ is an $n$-dimensional decision variable whose feasible region is $\Omega$; the MOP in (1) includes $m$ objective functions $f_1, \ldots, f_m$. In the following, some important definitions for MOPs are given. For two solutions $x, y \in \Omega$, if $f_i(x) \le f_i(y)$ for each $i \in \{1, \ldots, m\}$ and $f_j(x) < f_j(y)$ for at least one $j \in \{1, \ldots, m\}$, then $x$ dominates $y$ (denoted $x \prec y$). If a solution $x^*$ is not dominated by any other solution, $x^*$ is called a Pareto optimal solution. The set of all Pareto optimal solutions forms the Pareto set (PS), and the Pareto optimal front (PF) is the set of the objective vectors of all Pareto optimal solutions.
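For concreteness, the dominance relation just defined can be checked with a few lines of Python; this sketch assumes minimization, and the function name and NumPy dependency are illustrative choices rather than anything prescribed by the paper:

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

assert dominates([1.0, 2.0], [2.0, 2.0])      # better in f1, equal in f2
assert not dominates([1.0, 3.0], [2.0, 2.0])  # this pair is incomparable
assert not dominates([2.0, 2.0], [1.0, 3.0])
```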

2.2. Brain Storm Optimization Algorithm

Humans may work together to solve problems that cannot be solved by one person. Shi proposed the brain storm optimization algorithm [24] based on this human idea-generation process. In a BSO algorithm, a clustering algorithm groups the solutions into several clusters, and one or more randomly selected clusters produce new offspring. Each new offspring is compared only with the best solution of the same cluster to update the population. When parents are chosen from one cluster or from neighboring clusters, the local search ability is strengthened; when the new offspring are generated by parents selected from several random clusters, the global search ability is heightened. The pseudocode of the BSO algorithm is shown in Pseudocode 1.

(1) Initialization: generate N solutions randomly or by a problem-specific method and evaluate these solutions.
(2) While the stopping criterion is not met do
(3)   Clustering: cluster the solutions into clusters by a clustering algorithm;
(4)   Generating new solutions: randomly select one or two cluster(s) to produce new solutions;
(5)   Updating: compare each new solution with the existing solution of the same cluster; the better one is kept and recorded as the new solution.
(6) End while
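To make the loop concrete, the following is a minimal single-objective BSO sketch in Python patterned on Pseudocode 1. The nearest-center clustering shortcut, the selection probability p_one, and the shrinking Gaussian step are illustrative assumptions, not the exact settings of the original algorithm [24]:

```python
import numpy as np

def bso(f, dim, n=50, m=5, iters=200, p_one=0.8, seed=0):
    """Minimal BSO loop: cluster, generate new solutions, keep the better one."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (n, dim))
    fit = np.array([f(x) for x in pop])
    for t in range(iters):
        # Crude clustering: assign each solution to the nearest of m randomly
        # chosen centers (a stand-in for the k-means used in original BSO).
        centers = pop[rng.choice(n, m, replace=False)]
        labels = ((pop[:, None] - centers) ** 2).sum(-1).argmin(1)
        for i in range(n):
            if rng.random() < p_one:
                # One cluster: perturb a member of solution i's own cluster.
                idx = np.flatnonzero(labels == labels[i])
                base = pop[rng.choice(idx)]
            else:
                # Two solutions from anywhere: blend them (global search).
                a, b = rng.choice(n, 2, replace=False)
                w = rng.random()
                base = w * pop[a] + (1.0 - w) * pop[b]
            # Illustrative shrinking Gaussian step (not Shi's exact schedule).
            child = base + np.exp(-3.0 * t / iters) * rng.normal(size=dim)
            fc = f(child)
            if fc < fit[i]:  # keep the better of new vs existing solution
                pop[i], fit[i] = child, fc
    return pop[fit.argmin()], fit.min()

best_x, best_f = bso(lambda x: float((x ** 2).sum()), dim=10)
```

With a full k-means step and the original step-size schedule, this skeleton becomes the canonical BSO; the multiobjective variants discussed below replace the scalar comparison with dominance- or aggregation-based updates.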

3. The Proposed Algorithm

This paper combines the decomposition technique with a multiobjective brain storm optimization algorithm (MBSO/D) to address MOPs. The major parts of this algorithm are three strategies based on decomposition: a cluster strategy, an update strategy, and a selection strategy. These strategies are described in this section.

3.1. Motivation

The main motivation of this paper is to design a MOEA that achieves a set of solutions evenly distributed on the true PF. The BSO algorithm uses group information to address problems, which can help MOEAs improve their performance, and MOEAs use neighboring information to optimize the population, which can improve their performance. In this paper, this optimization idea is utilized to enhance the performance of a multiobjective BSO algorithm. First, new solutions update the population by using the update strategy of MOEA/D [21]; then, solutions are automatically clustered as each subproblem is optimized. Second, the decomposition technique determines the neighboring clusters of each cluster, and parent solutions are selected from adjacent clusters. Third, an adaptive selection strategy is used to balance exploration and exploitation.

MOEA/D decomposes a MOP into a series of subproblems by weight vectors and aggregate functions. For each solution, the solution itself and some of its neighbor solutions are selected as parents to generate a new solution; then, some of its neighbor solutions are updated based on the aggregate functions and the new solution. Thus, all subproblems are simultaneously optimized in one population evolution. Each aggregate function makes some solutions converge to the corresponding weight vector, which improves the convergence of the algorithm, while the diversity of the solutions is maintained by the uniformly distributed weight vectors. The main advantages of MOEA/D are that the diversity of the obtained solutions can be controlled by the given weight vectors and that neighbor information is used to generate new offspring, which improves the search efficiency.
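The decomposition machinery described above can be sketched in a few lines of Python: uniformly distributed weight vectors for a two-objective problem, the standard Tchebycheff aggregation from MOEA/D [21] (which the aggregation function g used later is assumed to match), and each vector's T closest neighbors; the values N = 101 and T = 10 are arbitrary examples:

```python
import numpy as np

def tchebycheff(fx, lam, z_star):
    """g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i| (MOEA/D [21])."""
    return float(np.max(lam * np.abs(np.asarray(fx) - z_star)))

# N uniform weight vectors for two objectives and each vector's T nearest
# neighbors by Euclidean distance: the neighborhood structure B(i).
N, T = 101, 10
lam = np.stack([np.linspace(0, 1, N), np.linspace(1, 0, N)], axis=1)
dist = np.linalg.norm(lam[:, None] - lam[None, :], axis=-1)
B = np.argsort(dist, axis=1)[:, :T]  # row i: the T closest indices (incl. i)

z_star = np.zeros(2)
print(tchebycheff([0.4, 0.7], lam[50], z_star))  # subproblem 50 scores 0.35
```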

3.2. Cluster and Update

In this work, each given weight vector defines a cluster, and all clusters have the same size. The best solution of each cluster is determined by the corresponding weight vector and the aggregation function [31]:

$$x_{best}^{i} = \arg\min_{x \in C_i} g\left(x \mid \lambda^{i}, z^{*}\right), \tag{2}$$

where $g$ is an aggregation function, $z^{*}$ is the reference point, and $x_{best}^{i}$ is the current best solution of the $i$-th cluster $C_i$. After a new solution is produced by some cluster, one solution of the neighboring clusters of this cluster may be replaced by the new solution. If the best solution is not better than the new solution, a solution (other than the best solution) is randomly selected from the cluster and replaced by the new solution, and the current best solution of this cluster is redefined by (2). This update strategy reduces the computing effort. Moreover, in this paper, one new solution replaces at most two solutions, which balances convergence and diversity: if only one solution is replaced, the diversity of the population is maintained but the rate of convergence may be lower; if many solutions are replaced, the rate of convergence is enhanced but the diversity of the population is reduced.
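The following Python sketch illustrates this update for a single cluster. The names are hypothetical, and comparing the new solution with the cluster's best through the aggregation value of Eq. (2) is an assumption about the comparison rule left implicit above:

```python
import numpy as np

def update_cluster(cluster, fits, y, fy, lam, z_star, rng):
    """Sketch of the Section 3.2 update for one cluster. cluster is a list of
    decision vectors and fits the matching list of objective vectors; the
    aggregation-based comparison is an assumption, not the authors' code."""
    g = lambda fv: np.max(lam * np.abs(fv - z_star))       # Tchebycheff, Eq. (2)
    best = min(range(len(cluster)), key=lambda i: g(fits[i]))
    if g(fy) <= g(fits[best]):                             # best is not better than y
        j = rng.choice([i for i in range(len(cluster)) if i != best])
        cluster[j], fits[j] = y, fy                        # replace a non-best member
        return True                                        # caller caps replacements at two
    return False

# Toy usage: a cluster of three solutions for weight vector (0.5, 0.5).
rng = np.random.default_rng(0)
cluster = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
fits = [np.array([0.3, 0.7]), np.array([0.5, 0.5]), np.array([0.8, 0.2])]
update_cluster(cluster, fits, np.array([0.4, 0.4]), np.array([0.4, 0.4]),
               np.array([0.5, 0.5]), np.zeros(2), rng)
```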

3.3. Selection Strategy and Crossover Operator

An effective selection strategy helps crossover operators perform the search more effectively. In this BSO algorithm, according to an adaptive selection probability, a new solution is produced by either one cluster or three clusters. The adaptive selection probability is calculated by the following formulation:

$$p_{t+1} = \frac{n_{1}^{t}}{n_{1}^{t} + n_{3}^{t} + \varepsilon}, \tag{3}$$

where $t$ is the evolution generation; $p_t$ is the selection probability of the $t$-th generation; $n_1^t$ and $n_3^t$ are the numbers of kept new solutions generated by one cluster and by three clusters, respectively; and $\varepsilon$, set to a small number, keeps the denominator from being 0. If a random number is smaller than $p_t$, parents are selected from one cluster to produce a new solution, which may enhance exploitation; otherwise, parents are selected from three clusters to produce a new solution, which strengthens exploration.
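A minimal sketch of this adaptive update, under the reconstruction of Eq. (3) given above, shows how the surviving-offspring counts steer the probability:

```python
def update_probability(n_one, n_three, eps=1e-6):
    """Selection probability of Eq. (3), as reconstructed here: the share of
    surviving offspring that were bred within a single cluster; eps keeps
    the denominator from being zero when nothing survived."""
    return n_one / (n_one + n_three + eps)

# If 30 single-cluster offspring and 10 three-cluster offspring survived,
# the next generation leans toward exploitation: p is roughly 0.75.
print(update_probability(30, 10))
```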

Based on the selection probability and the decomposition technique, a selection strategy is designed to balance local search and global search. For each weight vector, its closest weight vectors are worked out according to the Euclidean distances between any two weight vectors. For each $i \in \{1, \ldots, N\}$, set $B(i) = \{i_1, \ldots, i_T\}$, where $\lambda^{i_1}, \ldots, \lambda^{i_T}$ are the $T$ closest weight vectors to $\lambda^i$ and $K$ is the size of each cluster. Then set

$$P = \begin{cases} B(i), & \text{if } rand_1 < \delta, \\ \{1, \ldots, N\}, & \text{otherwise,} \end{cases} \tag{4}$$

where $rand_1$ and $rand_2$ are two random numbers; $\delta$ is set to 0.9, the same as in [21]; and $x_{best}^i$ is the current best solution of the $i$-th cluster. When the set $P$ is determined, two indexes $r_1$ and $r_2$ are randomly selected from $P$, and one solution $y$ is generated from $x_{best}^i$, $x^{r_1}$, and $x^{r_2}$ by the following formula:

$$y_j = \begin{cases} x_{best,j}^{i} + F \times \left(x_{j}^{r_1} - x_{j}^{r_2}\right), & \text{if } rand_2 < CR, \\ x_{best,j}^{i}, & \text{otherwise,} \end{cases} \tag{5}$$

where $F$ is a scale factor which controls the length of the exploration vector $(x^{r_1} - x^{r_2})$; $CR$ is a constant value, namely, the crossover rate; and $y_j$ indicates the $j$-th component of $y$, $j = 1, \ldots, n$. If the solution $x_{best}^i$ dominates $y$, $y$ is reset by the following formula:

$$y_j = \begin{cases} x_{best,j}^{i} - F \times \left(x_{j}^{r_1} - x_{j}^{r_2}\right), & \text{if } rand_2 < CR, \\ x_{best,j}^{i}, & \text{otherwise.} \end{cases} \tag{6}$$

If the new solution generated by (5) is dominated by $x_{best}^i$, the new solution generated by (6) may be better than it with high probability; equation (6) thus serves to enhance the search efficiency. When the two indexes $r_1$ and $r_2$ are randomly selected from $B(i)$, local search is carried out to improve convergence; when they are randomly selected from $\{1, \ldots, N\}$, global search is carried out.
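The two operators can be sketched as one Python function, with Eq. (6) read here as reversing the direction of the difference vector; both the binomial-mask form and that reversal are reconstructions of the garbled formulas above, and F = CR = 0.5 follows the settings later reported in Section 4.2:

```python
import numpy as np

def offspring(x_best, x_r1, x_r2, F=0.5, CR=0.5, sign=1.0, rng=None):
    """Eq. (5) as reconstructed (DE-style): perturb CR-selected components of
    the cluster's best solution along x_r1 - x_r2. Calling with sign=-1.0
    reverses the search direction, which is how Eq. (6) is read here when
    x_best dominates the first offspring."""
    rng = rng or np.random.default_rng()
    y = x_best.copy()
    mask = rng.random(x_best.size) < CR
    mask[rng.integers(x_best.size)] = True  # ensure at least one component moves
    y[mask] = x_best[mask] + sign * F * (x_r1[mask] - x_r2[mask])
    return y

rng = np.random.default_rng(3)
x_best, x_r1, x_r2 = rng.random((3, 5))
y = offspring(x_best, x_r1, x_r2, rng=rng)                    # Eq. (5)
y_retry = offspring(x_best, x_r1, x_r2, sign=-1.0, rng=rng)   # Eq. (6), if needed
```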

3.4. Pseudocode of the Proposed Algorithm

In this subsection, the pseudocode of the multiobjective brain storm optimization algorithm based on decomposition (MBSO/D) is shown in Pseudocode 2.

Input:
  a MOP (1);
  a stopping criterion;
  N: the number of weight vectors (clusters);
  K: the size of each cluster;
  T: the size of the neighborhood;
  λ^1, ..., λ^N: a set of uniformly distributed weight vectors.
Output: the objective vectors F(x_best^1), ..., F(x_best^N).
Initialization: generate an initial population and determine the reference point z*; randomly cluster
  the initial population into N clusters C_1, ..., C_N with size K and determine the
  best solution x_best^i of each cluster by Eq. (2); determine B(i) = {i_1, ..., i_T}, where
  λ^{i_1}, ..., λ^{i_T} are the T closest weight vectors to λ^i; set p = 0.5.
While the number of function evaluations is less than the maximum do
  Set n_1 = 0 and n_3 = 0
  For i = 1, ..., N do
    According to Eq. (4), randomly select two indexes r_1 and r_2; use x_best^i, x^{r_1}, and x^{r_2}
      to generate an offspring y by Eq. (5).
    If x_best^i dominates y, regenerate y by Eq. (6).
    Update of z*: for j = 1, ..., m, if z_j > f_j(y), then set z_j = f_j(y).
    Update of neighboring solutions: set c = 0 and S = B(i).
    While c < 2 and S ≠ ∅ do
      Randomly select an index k from S and set S = S \ {k};
      If g(y | λ^k, z*) ≤ g(x_best^k | λ^k, z*)
        Randomly select a solution (other than x_best^k) from cluster C_k, replace it with y,
          and redefine x_best^k by Eq. (2);
        c = c + 1;
        If y was generated by one cluster
          n_1 = n_1 + 1;
        Else
          n_3 = n_3 + 1;
        End if
      End if
    End while
  End for
  Use Eq. (3) to update p.
End while

In MBSO/D, first, the solutions in the initial population are randomly clustered into N clusters with size K, and the best solution of each cluster is determined by (2); second, for the best solution of each cluster (the for loop in Pseudocode 2), some solutions are selected according to (3)-(4), and the best solution and the selected solutions generate a new solution by (5) or (6), which produces good offspring; third, some neighboring solutions of the best solution are updated by the new solution and the aggregation function (the while loop in Pseudocode 2), which helps the algorithm obtain a set of solutions with good diversity and convergence; finally, the selection probability is updated by (3) to balance exploitation and exploration.

In MBSO/D, a MOP is solved by updating the neighboring solutions of each solution. The aggregation function is used to update the neighboring solutions of each solution, so the whole population is updated at the same time. This update strategy makes the solutions converge to their subproblems, which ensures the convergence of the algorithm; in addition, the weight vectors are uniformly distributed, which ensures the diversity of the algorithm. Hence, this update strategy can solve MOPs. Moreover, in the original BSO, solutions are clustered by some clustering algorithm, whereas in this MBSO each subproblem is considered a cluster, and the solutions are automatically clustered while the population is updated; the clustering in this MBSO therefore requires no additional computation.
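To show how the clusters, the reconstructed operators, and the update strategy fit together, the following self-contained Python sketch runs an MBSO/D-flavored loop on ZDT1 as a stand-in problem. It is an illustration assembled from the reconstructed equations, not the authors' MATLAB implementation; boundary handling by clipping and the parameter values are added assumptions, and the Eq. (6) retry is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """ZDT1 (bi-objective, minimization), a stand-in test problem."""
    g = 1.0 + 9.0 * np.mean(x[1:])
    return np.array([x[0], g * (1.0 - np.sqrt(x[0] / g))])

# Decomposition setup: N weight vectors, one cluster of K solutions each.
N, K, T, dim, gens = 50, 5, 10, 10, 200
w = np.stack([np.linspace(0, 1, N), np.linspace(1, 0, N)], axis=1) + 1e-6
B = np.argsort(np.linalg.norm(w[:, None] - w[None], axis=-1), axis=1)[:, :T]
pop = rng.random((N, K, dim))
fit = np.array([[f(x) for x in cluster] for cluster in pop])
z = fit.reshape(-1, 2).min(axis=0)                 # reference point z*
agg = lambda fv, i: np.max(w[i] * np.abs(fv - z))  # Tchebycheff, as in Eq. (2)
p = 0.5                                            # initial selection probability

for t in range(gens):
    n1 = n3 = 0
    for i in range(N):
        one = rng.random() < p                     # one cluster vs three clusters
        src = (i, i) if one else tuple(rng.choice(N, 2, replace=False))
        k_best = min(range(K), key=lambda k: agg(fit[i][k], i))
        best = pop[i][k_best]
        r1, r2 = pop[src[0]][rng.integers(K)], pop[src[1]][rng.integers(K)]
        mask = rng.random(dim) < 0.5               # CR = 0.5, as in Section 4.2
        y = best.copy()
        y[mask] = np.clip(best[mask] + 0.5 * (r1[mask] - r2[mask]), 0.0, 1.0)
        fy = f(y)
        z = np.minimum(z, fy)                      # update the reference point
        replaced = 0
        for j in rng.permutation(B[i]):            # replace at most two neighbors
            if replaced == 2:
                break
            k = rng.integers(K)
            if agg(fy, j) < agg(fit[j][k], j):
                pop[j][k], fit[j][k] = y, fy
                replaced += 1
        if replaced:
            n1, n3 = n1 + one, n3 + (not one)
    if n1 + n3:
        p = n1 / (n1 + n3 + 1e-6)                  # Eq. (3), as reconstructed
```

On this toy setup the population drifts toward the ZDT1 front; the full algorithm additionally retries rejected offspring with Eq. (6) and tracks each cluster's best solution across generations.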

This multiobjective BSO algorithm is compared with other multiobjective BSO algorithms, e.g., MBSO-C [28] (MBSO with Cauchy mutation) and MBSO-DE [27] (MBSO with differential evolution); the major differences are as follows.

In MBSO-C and MBSO-DE, after the solutions are clustered by some clustering method, some solutions of the population are replaced by the newly generated solutions. In MBSO/D, by contrast, the population is first updated by the newly generated solutions, and the solutions are then automatically clustered as each subproblem is optimized. Moreover, in MBSO-C and MBSO-DE, a newly generated solution is compared with other solutions of the same cluster to update that cluster, which may reduce the convergence pressure; in MBSO/D, the aggregate function is used to update the population, which may enhance the convergence pressure.

4. Experimental Studies

In this section, the performance of MBSO/D is verified by comparing it with existing multiobjective optimization algorithms, e.g., NSGAII [11] and MOEA/D [21]. Two test suites are used in this experiment: ten test problems of the CEC 2009 competition [33] and seven DTLZ problems [34]. The number of decision variables is set to 30, 7, 21, 12, and 22 for F1-F10, DTLZ1 and DTLZ3, DTLZ2 and DTLZ4-DTLZ6, and DTLZ7, respectively.

4.1. Performance Metrics

Three performance metrics are adopted to quantify the performance of the algorithms: the generational distance (GD) [35], the inverted generational distance (IGD) [35], and the hypervolume indicator (HV) [36]. GD evaluates the convergence of an algorithm, whereas IGD and HV evaluate both the diversity and the convergence of the solutions obtained by an algorithm. Roughly 2000 points uniformly sampled on the Pareto fronts are used in the calculation of IGD and GD for each test problem; in the calculation of HV, a fixed reference point is used for each test problem. Moreover, the Wilcoxon rank-sum test [37] is employed at a significance level of 0.05; it tests whether the performance of MBSO/D is significantly better ("+"), statistically similar ("="), or significantly worse ("-") compared with that of the other algorithms.
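For reference, GD and IGD can be computed in a few lines of Python; the sketch below uses the plain-mean form of GD (some papers use a power-mean variant instead) and a linear front purely as an example:

```python
import numpy as np

def igd(ref_front, approx_front):
    """IGD: mean distance from each sampled PF point to its nearest obtained
    objective vector (smaller is better; reflects convergence and spread)."""
    d = np.linalg.norm(ref_front[:, None] - approx_front[None, :], axis=-1)
    return d.min(axis=1).mean()

def gd(ref_front, approx_front):
    """GD (plain-mean form): mean distance from each obtained objective
    vector to the sampled PF; a convergence-only measure."""
    return igd(approx_front, ref_front)

# Roughly 2000 uniformly sampled PF points, as in the paper, for a linear front.
pf = np.stack([np.linspace(0, 1, 2000), np.linspace(1, 0, 2000)], axis=1)
approx = pf[::100] + 0.01        # a slightly shifted 20-point approximation
print(gd(pf, approx), igd(pf, approx))
```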

4.2. Parameters Setting

All algorithms are implemented in MATLAB and run independently thirty times with a maximal number of function evaluations of 100 000 on all test problems. For fair comparisons, the population size and the maximal number of function evaluations of the compared algorithms are the same as in this work, and the other parameters of NSGAII and MOEA/D are the same as in the original literature. In MBSO/D, the scale factor and the crossover rate are set to F = 0.5 and CR = 0.5; the size of each cluster is set to 5; the population size is set to 105 for all compared algorithms on each test problem; the size of the neighborhood list is set to ; and the probability of choosing a mate subproblem from its neighborhood is set to 0.9.

4.3. Algorithm Performance Analyses

The statistical results of the GD, IGD, and HV metrics obtained by each MOEA are presented in Tables 1 and 2, and the performance of MBSO/D is checked against these statistical results. In these tables, the best result for each problem is the one reported for the winning algorithm.


Table 1: Mean and standard deviation of the IGD, GD, and HV metrics obtained by MBSO/D, MOEA/D, and NSGAII.

Problem  Algorithm   IGD mean    IGD std   GD mean     GD std    HV mean     HV std
F1       MBSO/D      0.0172      0.0068    0.0108      0.0020    0.8448      0.0073
         MOEA/D      0.0750(+)   0.0513    0.0150(+)   0.0082    0.7622(+)   0.0515
         NSGAII      0.0951(+)   0.0355    0.0135(+)   0.0028    0.7298(+)   0.0517
F2       MBSO/D      0.0050      0.0001    0.0025      0.0003    0.8679      0.0004
         MOEA/D      0.0137(+)   0.0032    0.0050(+)   0.0006    0.8538(+)   0.0036
         NSGAII      0.0095(+)   0.0005    0.0101(+)   0.0006    0.8614(+)   0.0007
F3       MBSO/D      0.0326      0.0101    0.0168      0.0043    0.8231      0.0123
         MOEA/D      0.1194(+)   0.0939    0.0145(-)   0.0067    0.7538(+)   0.0604
         NSGAII      0.0351(+)   0.0255    0.0130(-)   0.0015    0.8328(-)   0.0195
F4       MBSO/D      0.0038      0.0001    0.0004      0.0001    0.5382      0.0001
         MOEA/D      0.0084(+)   0.0009    0.0040(+)   0.0002    0.5300(+)   0.0023
         NSGAII      0.0056(+)   0.0003    0.0050(+)   0.0003    0.5348(+)   0.0004
F5       MBSO/D      0.4074      0.0718    0.4799      0.0896    0.0767      0.0599
         MOEA/D      0.4342(+)   0.1547    0.3598(-)   0.1601    0.1199(-)   0.0926
         NSGAII      0.4643(+)   0.1087    0.3094(-)   0.1325    0.0782(=)   0.0817
F6       MBSO/D      0.1334      0.0677    0.3879      0.9174    0.4485      0.0811
         MOEA/D      0.2116(+)   0.1383    0.1467(-)   0.0554    0.3911(+)   0.0946
         NSGAII      0.1935(+)   0.0897    0.0079(-)   0.0100    0.3900(+)   0.0889
F7       MBSO/D      0.0168      0.0024    0.0222      0.0097    0.6770      0.0048
         MOEA/D      0.0794(+)   0.1676    0.0166(-)   0.0127    0.6165(+)   0.1386
         NSGAII      0.0782(+)   0.1203    0.0067(-)   0.0016    0.6066(+)   0.1002
F8       MBSO/D      0.0903      0.0080    0.0443      0.0093    0.6444      0.0198
         MOEA/D      0.0939(+)   0.0114    0.0136(-)   0.0025    0.6516(=)   0.0158
         NSGAII      0.1482(+)   0.0242    0.7099(+)   0.6017    0.5616(+)   0.0427
F9       MBSO/D      0.0770      0.0135    0.1484      0.1113    0.9797      0.0276
         MOEA/D      0.1039(+)   0.0448    0.0857(-)   0.0375    0.9067(+)   0.0638
         NSGAII      0.1666(+)   0.0709    0.9917(+)   0.8967    0.7647(+)   0.1496
F10      MBSO/D      0.3442      0.0772    7.3988      3.3987    0.3175      0.1025
         MOEA/D      0.3597(+)   0.2113    0.1555(-)   0.1425    0.3100(+)   0.1442
         NSGAII      0.3505(+)   0.0682    3.4759(-)   3.5608    0.1811(+)   0.0624
DTLZ1    MBSO/D      0.0186      0.0001    0.0071      0.0001    0.1404      0.0001
         MOEA/D      0.0314(+)   0.0016    0.0075(+)   0.0002    0.1295(+)   0.0011
         NSGAII      0.0356(+)   0.0500    0.0183(+)   0.0611    0.1335(+)   0.0204
DTLZ2    MBSO/D      0.0522      0.0037    0.0184      0.0009    0.7380      0.0027
         MOEA/D      0.0813(+)   0.0053    0.0209(+)   0.0010    0.6673(+)   0.0107
         NSGAII      0.0692(+)   0.0021    0.0234(+)   0.0013    0.7011(+)   0.0057
DTLZ3    MBSO/D      0.0624      0.0515    0.0177      0.0017    0.7327      0.0577
         MOEA/D      0.0807(+)   0.0048    0.0204(+)   0.0009    0.6709(+)   0.0115
         NSGAII      0.0692(+)   0.0024    0.0231(+)   0.0146    0.7113(+)   0.0062
DTLZ4    MBSO/D      0.0530      0.0027    0.0174      0.0008    0.7421      0.0018
         MOEA/D      0.0822(+)   0.0053    0.0202(+)   0.0010    0.6788(+)   0.0155
         NSGAII      0.1299(+)   0.1628    0.0218(+)   0.0037    0.6748(+)   0.0853
DTLZ5    MBSO/D      0.0186      0.0015    0.0077      0.0047    0.4281      0.0012
         MOEA/D      0.0121(-)   0.0030    0.0006(-)   0.0001    0.4174(+)   0.0083
         NSGAII      0.0053(-)   0.0003    0.0011(-)   0.0002    0.4378(-)   0.0003
DTLZ6    MBSO/D      0.0207      0.0003    0.0035      0.0040    0.4260      0.0002
         MOEA/D      0.0118(-)   0.0038    0.0004(+)   0.0001    0.4190(+)   0.0092
         NSGAII      0.0555(+)   0.0260    0.0668(+)   0.0249    0.3745(+)   0.0291
DTLZ7    MBSO/D      0.0784      0.0005    0.0071      0.0005    1.2913      0.0013
         MOEA/D      0.1558(+)   0.0239    0.0079(+)   0.0010    0.9319(+)   0.0031
         NSGAII      0.1124(+)   0.0935    0.0157(+)   0.0132    1.0256(+)   0.0024

“+” means that MBSO/D outperforms its competitor algorithm, “-” means that MBSO/D is worse than its competitor algorithm, and “=” means that the competitor algorithm has the same performance as MBSO/D.

Table 2: Best and mean values of the GD and Δ metrics obtained by MBSO/D, MBSO-DE, and MBSO-C on five ZDT problems.

Problem  Algorithm   GD best   GD mean    Δ best    Δ mean
ZDT1     MBSO/D      0.0009    0.0009     0.0563    0.0569
         MBSO-DE     0.0010    0.0011     0.1008    0.1257
         MBSO-C      0.0695    0.0912     0.5105    0.5529
ZDT2     MBSO/D      0.0007    0.0008     0.0270    0.0274
         MBSO-DE     0.0007    0.0008     0.0997    0.1253
         MBSO-C      0.0725    0.0905     0.4898    0.5588
ZDT3     MBSO/D      0.0010    0.0012     0.4126    0.4187
         MBSO-DE     0.0011    0.0012     0.4126    0.4188
         MBSO-C      0.0443    0.0589     0.5708    0.6364
ZDT4     MBSO/D      0.0022    0.0023     0.1405    0.1460
         MBSO-DE     2.9322    13.8379    1.2958    1.3968
         MBSO-C      6.4966    15.2905    0.8362    0.9699
ZDT6     MBSO/D      0.0008    0.0009     0.0062    0.0082
         MBSO-DE     0.0037    0.0040     0.5322    0.5346
         MBSO-C      0.0580    0.0813     0.6969    0.7425

4.3.1. Comparisons of MBSO/D with NSGAII and MOEA/D

This subsection presents the comparison results on the IGD, GD, and HV metrics for seventeen problems. Table 1 gives the mean and standard deviation values for the three compared algorithms. Table 1 shows that, in terms of the HV and IGD metrics, the results obtained by MBSO/D are better than those obtained by NSGAII and MOEA/D on more than fourteen problems, which indicates that the final solutions obtained by MBSO/D have better diversity than those obtained by NSGAII and MOEA/D as well as good convergence; the mean values of the GD metric obtained by MBSO/D are smaller than those obtained by MOEA/D and NSGAII on at least nine problems. These results imply that MBSO/D can obtain a set of solutions with better convergence than MOEA/D and NSGAII on most test problems.

Moreover, MBSO/D outperforms NSGAII and MOEA/D in solving DTLZ1 and DTLZ3, which emphasizes that the selection strategy and crossover operators are advantageous for problems with multiple local fronts. NSGAII is better at solving DTLZ5; the higher mean IGD value of MBSO/D arises because the update strategy is not suitable for MOPs with degenerated PFs. Based on the median values of the IGD metric, Figure 1 plots the nondominated solutions obtained by MBSO/D. It can be observed that the obtained objective vectors of the nondominated solutions uniformly cover the full true PF on these seventeen test problems except for F5, F6, F9, and F10. MBSO/D cannot converge to the true PF on F5, F6, F9, and F10 because the maximal number of function evaluations is small and the number of decision variables is large.

The main goal of MOEAs is to obtain a set of solutions with good diversity and convergence. To test the algorithms' ability to accomplish this goal, Figure 2 shows the evolution of the average IGD metric value of the current population with the number of function evaluations for F1, F2, DTLZ1, and DTLZ3. Figure 2 indicates that MBSO/D converges, in terms of the number of function evaluations, much faster than MOEA/D and NSGAII in minimizing the IGD metric value on these four test problems. In other words, for MOPs with many local PFs (DTLZ1 and DTLZ3), MBSO/D quickly converges to the true PF while maintaining the diversity of the obtained solutions; for two-objective problems (F1 and F2), MBSO/D finds a better set of solutions than MOEA/D and NSGAII. These comparisons suggest that the selection strategy of MBSO/D is good at balancing exploration and exploitation and that the update strategy can maintain diversity.

4.3.2. Comparisons of MBSO/D with MBSO-C and MBSO-DE

In this subsection, we compare MBSO/D with two MBSOs (MBSO-C [28] and MBSO-DE [27]) to verify the performance of MBSO/D. MBSO/D is compared with MBSO-C and MBSO-DE on five ZDT [23] problems (ZDT1-ZDT4 and ZDT6). The experimental results of MBSO-C and MBSO-DE are taken directly from the original literature. For fair comparisons, the population size of MBSO/D is set to 100 for the five problems; the maximal number of function evaluations is set to 300 000, which is smaller than that of MBSO-C and MBSO-DE; and the other parameter settings are those of Section 4.2.

The mean and best values of the GD and Δ [11] metrics obtained by MBSO/D, MBSO-C, and MBSO-DE are displayed in Table 2. It can be seen from this table that, in terms of these two metrics, the results obtained by MBSO/D are better than those obtained by MBSO-C and MBSO-DE, which illustrates that MBSO/D can achieve a set of solutions with better diversity and convergence than MBSO-C and MBSO-DE on all five problems. These comparisons imply that MBSO/D is better at balancing exploration and exploitation than MBSO-C and MBSO-DE on these problems. Based on the median values of the GD metric, the objective vectors of the nondominated solutions obtained by MBSO/D on the five ZDT test problems are plotted in Figure 3, which visually shows the superior performance of MBSO/D. These results indicate that MBSO/D can effectively approach the true PFs.

4.3.3. Comparisons of MBSO/D with MOHS/D

In this subsection, MBSO/D is compared with MOHS/D [38], which hybridizes MOEA/D with the harmony search algorithm to solve MOPs. MBSO/D is compared with MOHS/D on five ZDT problems and seven DTLZ problems. The experimental results of MOHS/D are taken directly from the original literature to make a fair comparison, and the population sizes of MBSO/D are set to 100 and 200 for two-objective and three-objective problems, respectively. For each problem, the maximal number of function evaluations is the same as for MOHS/D [38]; the other parameter settings are the same as in Section 4.2.

Table 3 summarizes the best, median, and worst values of the IGD metric obtained by MBSO/D and MOHS/D on these twelve test problems. It can be seen from this table that the median IGD values obtained by MBSO/D are smaller than those obtained by MOHS/D on the five ZDT problems, DTLZ3, and DTLZ6, and bigger than those obtained by MOHS/D on the other five DTLZ problems. This indicates that MBSO/D can obtain a set of solutions with better convergence and diversity than MOHS/D on most of these twelve test problems.


Table 3: Best, median, and worst IGD values obtained by MBSO/D and MOHS/D.

Problem  Algorithm   Best      Median    Worst
ZDT1     MBSO/D      1.62e-3   1.71e-3   2.04e-3
         MOHS/D      1.37e-3   1.86e-3   2.18e-3
ZDT2     MBSO/D      8.42e-4   1.02e-3   2.45e-3
         MOHS/D      2.26e-3   2.26e-3   3.01e-3
ZDT3     MBSO/D      6.14e-4   9.52e-4   1.26e-3
         MOHS/D      9.15e-4   1.19e-3   1.76e-3
ZDT4     MBSO/D      1.86e-4   1.62e-4   5.21e-3
         MOHS/D      1.76e-4   1.64e-4   4.23e-3
ZDT6     MBSO/D      1.62e-4   1.87e-4   3.41e-4
         MOHS/D      1.59e-4   1.91e-4   2.42e-4
DTLZ1    MBSO/D      1.77e-2   1.81e-2   1.92e-2
         MOHS/D      4.69e-3   1.28e-2   4.16e-2
DTLZ2    MBSO/D      5.12e-2   5.22e-2   5.44e-2
         MOHS/D      3.23e-3   4.08e-3   4.71e-3
DTLZ3    MBSO/D      6.13e-2   6.24e-2   8.26e-2
         MOHS/D      8.62e-2   1.33e-1   2.00e-1
DTLZ4    MBSO/D      6.23e-3   5.30e-2   5.74e-2
         MOHS/D      8.43e-3   9.76e-3   1.06e-2
DTLZ5    MBSO/D      5.49e-3   1.86e-2   2.61e-2
         MOHS/D      1.34e-3   1.40e-3   1.45e-3
DTLZ6    MBSO/D      1.57e-2   2.07e-2   2.51e-2
         MOHS/D      1.21e-2   2.21e-2   3.35e-2
DTLZ7    MBSO/D      6.25e-2   7.85e-2   9.21e-2
         MOHS/D      2.89e-2   3.00e-2   3.07e-2

5. Conclusions

In this paper, we proposed a multiobjective brain storm optimization algorithm based on the idea of decomposition, called MBSO/D. In this approach, the update strategy of MOEA/D [21] is used in the MBSO, which balances diversity and convergence well. Simultaneously, a selection strategy is adopted to improve the search efficiency of the algorithm. Moreover, MBSO/D is compared with NSGAII, MOEA/D, MBSO-C, MBSO-DE, and MOHS/D on test sets with complicated PSs or many local PFs. According to the performance analyses, MBSO/D shows competitive performance on most test problems against the compared MOEAs. These results imply that the update strategy and the selection strategy of MBSO/D help MBSO obtain a set of solutions with good diversity and convergence. However, for a few benchmark functions, the proposed algorithm shows shortcomings because the update strategy is not suitable for some MOPs with degenerated PFs. In the future, we will study the application of this algorithm to real-world problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by National Natural Science Foundation of China (nos. 61502290, 61806120, 61672334, 61673251, and 61401263), the Fundamental Research Funds for the Central Universities (GK201901010), China Postdoctoral Science Foundation (no. 2015M582606), and Natural Science Basic Research Plan in Shaanxi Province of China (no. 2016JQ6045).

References

1. R. Cheng, T. Rodemann, M. Fischer, M. Olhofer, and Y. Jin, “Evolutionary many-objective optimization of hybrid electric vehicle control: from general optimization to preference articulation,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 1, no. 2, pp. 97–111, 2017.
2. R. Roy and J. Mehnen, “Dynamic multi-objective optimisation for machining gradient materials,” CIRP Annals - Manufacturing Technology, vol. 57, no. 1, pp. 429–432, 2008.
3. X. Xue and Y. Wang, “Using memetic algorithm for instance coreference resolution,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 2, pp. 580–591, 2016.
4. S. Palaniappan, S. Zein-Sabatto, and A. Sekmen, “Dynamic multiobjective optimization of war resource allocation using adaptive genetic algorithms,” in Proceedings of the 2001 IEEE SoutheastCon, pp. 160–165, 2001.
5. H. Li and D. Landa-Silva, “An adaptive evolutionary multi-objective approach based on simulated annealing,” Evolutionary Computation, vol. 19, no. 4, pp. 561–595, 2011.
6. L. Zuo, L. Shu, S. Dong, C. Zhu, and T. Hara, “A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing,” IEEE Access, vol. 3, pp. 2687–2699, 2015.
7. Y. Zhang, D.-W. Gong, and J. Cheng, “Multi-objective particle swarm optimization approach for cost-based feature selection in classification,” IEEE Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 64–75, 2017.
8. W. K. Mashwani, A. Salhi, M. Asif, R. Adeeb, and M. Sulaiman, “Enhanced version of multi-algorithm genetically adaptive for multiobjective optimization,” International Journal of Advanced Computer Science and Applications, vol. 6, no. 12, pp. 279–287, 2015.
9. W. K. Mashwani, A. Salhi, O. Yeniay, H. Hussian, and M. Jan, “Hybrid non-dominated sorting genetic algorithm with adaptive operators selection,” Applied Soft Computing, vol. 56, pp. 1–18, 2017.
10. X. Ma, F. Liu, Y. Qi et al., “A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 275–298, 2016.
11. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
12. M. Kiani and S. H. Pourtakdoust, “State estimation of nonlinear dynamic systems using weighted variance-based adaptive particle swarm optimization,” Applied Soft Computing, vol. 34, pp. 1–17, 2015.
13. B. Tang, Z. Zhu, H.-S. Shin, A. Tsourdos, and J. Luo, “A framework for multi-objective optimisation based on a new self-adaptive particle swarm optimisation algorithm,” Information Sciences, vol. 420, pp. 364–385, 2017.
14. H. Han, W. Lu, and J. Qiao, “An adaptive multiobjective particle swarm optimization based on multiple adaptive methods,” IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2754–2767, 2017.
15. Q. H. Lin, Q. F. Zhu, P. Huang et al., “A novel hybrid multi-objective immune algorithm with adaptive differential evolution,” Computers and Operations Research, vol. 62, pp. 95–111, 2015.
16. X. P. Wang and L. X. Tang, “An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization,” Information Sciences, vol. 348, pp. 124–141, 2016.
17. R. Shang, L. Jiao, F. Liu, and W. Ma, “A novel immune clonal algorithm for MO problems,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 35–50, 2012.
18. Z.-H. Zhan, J. Li, J. Cao, J. Zhang, H. S.-H. Chung, and Y.-H. Shi, “Multiple populations for multiple objectives: a coevolutionary technique for solving multiobjective optimization problems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
19. H. Zhang, X. Zhang, X.-Z. Gao, and S. Song, “Self-organizing multiobjective optimization based on decomposition with neighborhood ensemble,” Neurocomputing, vol. 173, pp. 1868–1884, 2016.
20. S. Jiang and S. Yang, “An improved multiobjective optimization evolutionary algorithm based on decomposition for complex Pareto fronts,” IEEE Transactions on Cybernetics, vol. 46, no. 2, pp. 421–437, 2015.
21. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
22. M. Ming, R. Wang, Y. Zha, and T. Zhang, “Pareto adaptive penalty-based boundary intersection method for multi-objective optimization,” Information Sciences, vol. 414, pp. 158–174, 2017.
23. E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
24. Y. Shi, “An optimization algorithm based on brainstorming process,” International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.
25. S. Cheng, Q. Qin, J. Chen, and Y. Shi, “Brain storm optimization algorithm: a review,” Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.
26. X. Ma, Y. Jin, and Q. Dong, “A generalized dynamic fuzzy neural network based on singular spectrum analysis optimized by brain storm optimization for short-term wind speed forecasting,” Applied Soft Computing, vol. 54, pp. 296–312, 2017.
27. X. Guo, Y. Wu, L. Xie, S. Cheng, and J. Xin, “An adaptive brain storm optimization algorithm for multiobjective optimization problems,” in Advances in Swarm and Computational Intelligence, vol. 9140 of Lecture Notes in Computer Science, pp. 365–372, 2015.
28. Y. Shi, J. Xue, and Y. Wu, “Multi-objective optimization based on brain storm optimization algorithm,” International Journal of Swarm Intelligence Research, vol. 4, no. 3, pp. 1–21, 2013.
29. L. Xie and Y. Wu, “A modified multi-objective optimization based on brain storm optimization algorithm,” in Advances in Swarm Intelligence, vol. 8795 of Lecture Notes in Computer Science, pp. 328–339, 2014.
30. D. A. Van Veldhuizen, Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA, 1999.
31. C. Dai and X. Lei, “An improvement decomposition-based multi-objective evolutionary algorithm with uniform design,” Knowledge-Based Systems, vol. 125, pp. 108–115, 2017.
32. C. Dai and X. Lei, “A decomposition-based multi-objective evolutionary algorithm with adaptive weight adjustment,” Complexity, vol. 2018, Article ID 1753071, 20 pages, 2018.
33. Q. F. Zhang and P. N. Suganthan, “Final report on CEC’09 MOEA competition,” Tech. Rep., School of CS and EE, University of Essex, UK, and School of EEE, Nanyang Technological University, Singapore, 2009, http://dces.essex.ac.uk/staff/qzhang/moeacompetition09.htm.
34. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 825–830, May 2002.
35. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. Da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003.
36. K. Deb, A. Sinha, and S. Kukkonen, “Multi-objective test problems, linkages, and evolutionary methodologies,” in Proceedings of the 8th Annual Genetic and Evolutionary Computation Conference (GECCO '06), pp. 1141–1148, Washington, Wash, USA, 2006.
37. S. Robert, J. Torrie, and D. Dickey, Principles and Procedures of Statistics: A Biometrical Approach, McGraw-Hill, New York, NY, USA, 1997.
38. I. A. Doush, M. Q. Bataineh, and M. El-Abd, “The hybrid framework for multi-objective evolutionary optimization based on harmony search algorithm,” in Real-Time Intelligent Systems (RTIS), vol. 756 of Lecture Notes in Real-Time Intelligent Systems, pp. 134–142, 2017.

Copyright © 2019 Cai Dai and Xiujuan Lei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

