Abstract

The brain storm optimization (BSO) algorithm is a simple and effective evolutionary algorithm, but some existing multiobjective brain storm optimization algorithms have low search efficiency. This paper combines the decomposition technology with the multiobjective brain storm optimization algorithm (MBSO/D) to improve the search efficiency. A set of given weight vectors transforms a multiobjective optimization problem into a series of subproblems, and the decomposition technology determines the neighboring clusters of each cluster. Solutions of adjacent clusters generate new solutions to update the population, and an adaptive selection strategy is used to balance exploration and exploitation. Besides, MBSO/D is compared with several efficient state-of-the-art algorithms, e.g., NSGAII and MOEA/D, on twenty-two test problems. The experimental results show that MBSO/D is more efficient than the compared algorithms and can improve the search efficiency on most test problems.

1. Introduction

Multiobjective optimization problems (MOPs) [1–4], which have conflicting objectives, exist widely in the real world. Different from single-objective optimization problems, a MOP has a series of optimal solutions. Since the number of optimal solutions may be infinite, which makes it impractical to obtain all optimal solutions at once, good diversity and convergence of the obtained solutions are the main goals of a good algorithm for MOPs. Although metaheuristics [5–7] can deal with MOPs, evolutionary algorithms (EAs) [8, 9] naturally approximate the set of objective vectors of Pareto optimal solutions [10] with a population of nondominated solutions.

Multiobjective evolutionary algorithms (MOEAs) make use of population evolution to achieve a set of nondominated solutions, which are the optimal solutions of the current population, and they are effective methods for solving MOPs. In recent decades, many MOEAs [11–30] have been proposed, such as multiobjective genetic algorithms [11], multiobjective particle swarm optimization algorithms [12–14], multiobjective differential evolution algorithms [15, 16], multiobjective immune clone algorithms [17], the group search optimizer [18], and evolutionary algorithms based on decomposition [19–22]. The MOEA based on decomposition (MOEA/D) [21] may be the most widely studied in recent years. In MOEA/D, a set of uniformly distributed weight vectors and aggregate functions decomposes a MOP into a series of single-objective or multiobjective subproblems, and all subproblems are simultaneously optimized in one population evolution. Moreover, MOEA/D uses neighbor information to improve the search efficiency and maintains the diversity of the obtained solutions through the uniformly distributed weight vectors and aggregate functions.

In 2011, Shi proposed the brain storm optimization (BSO) [24] algorithm, a simple and promising algorithm. It simulates the collective behavior of human beings known as brainstorming [24]. Since its invention in 2011, BSO has attracted much attention in the evolutionary algorithm research community, and many works [25, 26] develop and apply it. In BSO, the solutions are divided into several clusters, and one or more individuals are selected to generate new solutions by some genetic operators. Some multiobjective BSO algorithms [27–29] have been proposed to solve MOPs. In these multiobjective BSO algorithms, the population is updated by new solutions only after the solutions are clustered, which may slow the speed of convergence.

In this paper, the decomposition technology is fused with the multiobjective brain storm optimization algorithm (MBSO/D) to solve MOPs. The main motivation of MBSO/D is to improve the performance of multiobjective BSO algorithms (MBSOs) by using the decomposition technology. MBSO/D mainly includes three parts to solve MOPs effectively. First, the current solutions are automatically clustered as each subproblem is optimized, and the size of each cluster is the same; second, the decomposition technology determines the neighboring clusters of each cluster, and parent solutions are selected from adjacent clusters to produce new solutions, which can improve the search efficiency; third, an adaptive selection strategy is used to balance exploration and exploitation.

We have recently done some work [31, 32] on evolutionary algorithms based on decomposition. In [31], a crossover operator based on uniform design and a selection strategy based on decomposition are designed to improve the performance of decomposition-based multiobjective evolutionary algorithms. In [32], we design an initialization method for weight vectors and a new adaptive weight vector adjustment strategy. Compared to our previous works [31, 32], the main originality of this manuscript is that the decomposition technology is used for the first time in a multiobjective brain storm optimization algorithm to improve the performance of MBSOs; the main difference is an adaptive selection strategy, in which the selection probability is updated adaptively according to the success rate of the selection strategy, used to balance exploration and exploitation.

The remainder of this paper is organized as follows. The related works about MOPs and BSO are briefly reviewed in Section 2. Section 3 describes MBSO/D in detail. Experimental results and discussions are contained in Section 4. Section 5 gives the conclusions and some possible directions for future work.

2. Related Works

The related definitions of MOPs, BSO, and aggregation functions are stated in this section.

2.1. Multiobjective Optimization

The mathematical formulation of a MOP is as follows [30]:

$$\min\ F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T, \quad \text{subject to } x \in \Omega, \tag{1}$$

where $x = (x_1, x_2, \ldots, x_n)$ is an $n$-dimensional decision variable whose feasible region is $\Omega$; a MOP (1) includes $m$ objective functions $f_1(x), \ldots, f_m(x)$. In the following, some important definitions of MOPs are displayed. For two solutions $x, y \in \Omega$, if $f_i(x) \le f_i(y)$ for each $i \in \{1, \ldots, m\}$ and $F(x) \ne F(y)$, then $x$ dominates $y$ (denoted $x \prec y$). If a solution $x^*$ is not dominated by any other solution, $x^*$ is called a Pareto optimal solution. The Pareto set (PS) is formed by all Pareto optimal solutions. The Pareto optimal front (PF) is the set of the objective vectors of all Pareto optimal solutions.
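For illustration, the dominance relation defined above can be checked mechanically; the following is a minimal sketch in Python (the function name and array convention are ours, not from the paper):

```python
import numpy as np

def dominates(fx: np.ndarray, fy: np.ndarray) -> bool:
    """Return True if objective vector fx dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return bool(np.all(fx <= fy) and np.any(fx < fy))

# (1, 2) dominates (2, 2); neither of (1, 3) and (3, 1) dominates the other.
assert dominates(np.array([1.0, 2.0]), np.array([2.0, 2.0]))
assert not dominates(np.array([1.0, 3.0]), np.array([3.0, 1.0]))
```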

2.2. Brain Storm Optimization Algorithm

Humans may work together to solve problems which cannot be solved by one person alone. Shi proposed the brain storm optimization algorithm [24] based on this human idea generation process. This algorithm is presented as follows. In a BSO algorithm, a clustering algorithm clusters the solutions into several clusters. One or two clusters are randomly selected to produce new offspring. A new offspring is compared only with the existing solution in the same cluster to update the population. When parents are chosen from one cluster or from neighboring clusters, the local search ability is strengthened. When the new offspring are generated by parents selected from two random clusters, the global search ability is heightened. The pseudocode of the BSO algorithm is shown in Pseudocode 1.

(1) Initialization: Generate $n$ solutions randomly or by a problem-specific method and evaluate these solutions
(2) While the stopping criterion is not met do
(3)   Clustering: cluster the $n$ solutions into $m$ clusters by a clustering algorithm;
(4)   Generating new solutions: randomly select one or two cluster(s) to produce new solutions;
(5)   Updating: compare each new solution with the existing solution in the same cluster; the better one is kept and recorded as the new solution
(6) End while
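As a rough illustration of this cluster-generate-update cycle, the sketch below implements a single-objective BSO on a toy problem (Python; the k-means clustering, Gaussian perturbation, simplified two-individual combination, and all parameter values are our assumptions, not settings from [24]):

```python
import numpy as np
from scipy.cluster.vq import kmeans2  # simple k-means for the clustering step

def bso(f, dim=10, n=50, m=5, p_one=0.8, sigma=0.1, iters=200, seed=0):
    """Minimal single-objective BSO sketch: cluster, generate, update."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        _, labels = kmeans2(pop, m, minit='points', seed=int(rng.integers(1 << 31)))
        for i in range(n):
            if rng.random() < p_one:     # one cluster: local search around an idea
                base = pop[rng.choice(np.flatnonzero(labels == labels[i]))]
            else:                        # combine two ideas (simplified two-cluster step)
                a, b = rng.choice(n, 2, replace=False)
                w = rng.random()
                base = w * pop[a] + (1 - w) * pop[b]
            child = base + sigma * rng.standard_normal(dim)  # perturb the idea
            fc = f(child)
            if fc < fit[i]:              # keep the better solution
                pop[i], fit[i] = child, fc
    return pop[np.argmin(fit)], fit.min()

# Usage: minimize the sphere function
best_x, best_f = bso(lambda x: float(np.sum(x ** 2)))
```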

3. The Proposed Algorithm

This paper combines the decomposition technology with the multiobjective brain storm optimization algorithm (MBSO/D) to address MOPs. The major contribution of this algorithm is that three strategies based on decomposition (a cluster strategy, an update strategy, and a selection strategy) are developed. These three strategies are described in this section.

3.1. Motivation

The main motivation of this paper is to design a MOEA that achieves a set of solutions evenly distributed on the true PF. The BSO algorithm uses group information to address problems, which can help MOEAs improve their performance. MOEAs use neighboring information to optimize the population, which can improve their performance. In this paper, this optimization idea is utilized to enhance the performance of the multiobjective BSO algorithm. First, new solutions update the population by using the update strategy of MOEA/D [21]; then, solutions are automatically clustered as each subproblem is optimized. Second, the decomposition technology determines the neighboring clusters of each cluster, and parent solutions are selected from adjacent clusters. Third, an adaptive selection strategy is used to balance exploration and exploitation.

MOEA/D decomposes a MOP into a series of subproblems by weight vectors and aggregate functions. For each solution, it and some of its neighboring solutions are selected as parents to generate a new solution; then, some of its neighboring solutions are updated by the aggregate functions and the new solution, so all subproblems are simultaneously optimized in one population evolution. Each aggregate function makes some solutions converge to the corresponding weight vector, which can improve the convergence of the algorithm. In addition, the diversity of solutions is maintained by the uniformly distributed weight vectors. The main advantages of MOEA/D are that the diversity of the obtained solutions can be determined by the given weight vectors and that neighbors' information is used to generate new offspring, which can improve the search efficiency.
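For concreteness, the Tchebycheff function commonly used in MOEA/D can be sketched as follows (Python; MBSO/D's own aggregation function appears as (2) in Section 3.2, so treating it as Tchebycheff here is an illustrative assumption):

```python
import numpy as np

def tchebycheff(fx: np.ndarray, weight: np.ndarray, z_star: np.ndarray) -> float:
    """Tchebycheff aggregation g(x | lambda, z*) = max_j lambda_j * |f_j(x) - z*_j|.
    Each weight vector turns the MOP into one scalar subproblem to minimize."""
    return float(np.max(weight * np.abs(fx - z_star)))

# Two subproblems of the same bi-objective MOP, differing only in the weight vector
fx = np.array([0.4, 0.6])   # objective vector of a solution
z = np.array([0.0, 0.0])    # reference (ideal) point
print(tchebycheff(fx, np.array([0.9, 0.1]), z))  # 0.36: this subproblem stresses objective 1
print(tchebycheff(fx, np.array([0.1, 0.9]), z))  # 0.54: this subproblem stresses objective 2
```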

3.2. Cluster and Update

In this work, each given weight vector fixes a cluster, and the size of each cluster is made the same. The best solution of each cluster is determined by the corresponding weight vector and the aggregation function [31]:

$$x_i^{best} = \arg\min_{x \in C_i} g(x \mid \lambda^i, z^*), \tag{2}$$

where $g$ is an aggregation function (e.g., the Tchebycheff function), $z^*$ is the reference point, and $x_i^{best}$ is the current best solution of the $i$-th cluster $C_i$. After a new solution is produced by some cluster, one solution of the neighboring clusters of this cluster may be replaced by this new solution. If the best solution is not better than the new solution, a solution (other than the best solution) is randomly selected from the cluster and replaced by the new solution, and the current best solution of this cluster is then redetermined by (2). This update strategy can reduce the computing effort. Moreover, in this paper, one new solution replaces two solutions at most, which balances convergence and diversity: if only one solution is replaced, the diversity of the population is maintained, but the rate of convergence may be lower; if many solutions are replaced, the rate of convergence is enhanced, but the diversity of the population is reduced.
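A sketch of this update rule, under the assumption that (2) uses the Tchebycheff function, might look like this (Python; the data layout and names are ours):

```python
import numpy as np

def g(fx, weight, z_star):
    """Tchebycheff aggregation; assumed form of the function in (2)."""
    return float(np.max(weight * np.abs(fx - z_star)))

def update_cluster(cluster_F, best_idx, new_F, weight, z_star, rng):
    """Try to insert objective vector new_F into one cluster (rows of cluster_F).
    If the cluster's best solution is not better on this subproblem, a random
    non-best member is replaced and the best index is redetermined by (2)."""
    if g(new_F, weight, z_star) <= g(cluster_F[best_idx], weight, z_star):
        candidates = [j for j in range(len(cluster_F)) if j != best_idx]
        cluster_F[rng.choice(candidates)] = new_F   # replace a random non-best member
        scores = [g(fv, weight, z_star) for fv in cluster_F]
        return int(np.argmin(scores)), True         # new best index, replacement done
    return best_idx, False

# Usage: a cluster of K = 3 objective vectors for one subproblem
rng = np.random.default_rng(1)
C = np.array([[0.2, 0.8], [0.5, 0.5], [0.7, 0.4]])
best, replaced = update_cluster(C, 0, np.array([0.1, 0.6]),
                                np.array([0.5, 0.5]), np.zeros(2), rng)
```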

3.3. Selection Strategy and Crossover Operator

An effective selection strategy can help crossover operators perform the search more effectively. In this BSO algorithm, according to an adaptive selection probability, a new solution is produced by either one cluster or three clusters. The adaptive selection probability is calculated by the following formulation:

$$p_{t+1} = \frac{n_1}{n_1 + n_3 + \varepsilon}, \tag{3}$$

where $t$ is the evolution generation; $p_t$ is the selection probability at the $t$-th generation; $n_1$ and $n_3$ are the numbers of kept new solutions generated by one cluster and by three clusters, respectively; and $\varepsilon$, which is set to a small number, keeps the denominator from being 0. If a random number is smaller than $p_t$, parents are selected from one cluster to produce a new solution, which may enhance exploitation; otherwise, parents are selected from three clusters to produce a new solution, which can strengthen exploration.
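A direct transcription of (3) as reconstructed above (Python; the exact placement of ε is our reading of the garbled formula):

```python
def update_selection_prob(n_one: int, n_three: int, eps: float = 1e-6) -> float:
    """Selection probability of (3): the share of surviving offspring that were
    generated from one cluster; eps keeps the denominator from being zero."""
    return n_one / (n_one + n_three + eps)

# If 30 kept offspring came from one cluster and 10 from three clusters,
# the next generation exploits with probability ~0.75.
print(update_selection_prob(30, 10))
```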

Based on the selection probability and the decomposition technology, a selection strategy is designed to balance local search and global search. For each weight vector, its $T$ closest weight vectors are worked out according to the Euclidean distances between any two weight vectors. For each $i = 1, \ldots, N$, set $B(i) = \{i_1, \ldots, i_T\}$, where $\lambda^{i_1}, \ldots, \lambda^{i_T}$ are the $T$ closest weight vectors to $\lambda^i$ and $K$ is the size of each cluster. Then set

$$P = \begin{cases} C_i, & \text{if } rand_2 < p_t, \\ B(i), & \text{if } rand_2 \ge p_t \text{ and } rand_1 < \delta, \\ \{1, 2, \ldots, N\}, & \text{otherwise,} \end{cases} \tag{4}$$

where $rand_1$ and $rand_2$ are two random numbers; $\delta$ is set to 0.9, the same as in [21]; and $x_i^{best}$ is the current best solution of the $i$-th cluster. Once the set $P$ is determined, two indexes $r_1$ and $r_2$ are randomly selected from $P$, and one solution is generated from $x_i^{best}$, $x_{r_1}$, and $x_{r_2}$ by the following formula:

$$y_j = \begin{cases} x_{i,j}^{best} + F \cdot (x_{r_1,j} - x_{r_2,j}), & \text{if } rand < CR, \\ x_{i,j}^{best}, & \text{otherwise,} \end{cases} \tag{5}$$

where $F$ is a scale factor which controls the length of the exploration vector $(x_{r_1} - x_{r_2})$; $CR$ is a constant value, namely, the crossover rate; $y_j$ indicates the $j$-th component of $y$; and $j = 1, \ldots, n$. If the solution $x_i^{best}$ dominates $y$, $y$ is reset by the following formula:

$$y_j = \begin{cases} x_{i,j}^{best} - F \cdot (x_{r_1,j} - x_{r_2,j}), & \text{if } rand < CR, \\ x_{i,j}^{best}, & \text{otherwise.} \end{cases} \tag{6}$$

If the new solution generated by (5) is dominated by $x_i^{best}$, the new solution generated by (6), which reverses the search direction, may be better than $x_i^{best}$ with high probability; (6) thus serves to enhance the search efficiency. When the two indexes $r_1$ and $r_2$ are randomly selected from $C_i$ or $B(i)$, local search is carried out to improve the convergence; when $r_1$ and $r_2$ are randomly selected from $\{1, 2, \ldots, N\}$, global search is carried out.
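The two generation formulas, as reconstructed in (5) and (6), amount to a DE-style crossover around the cluster's best solution; a sketch follows (Python; the reversed difference vector in (6) is our reading of the garbled equation):

```python
import numpy as np

def generate_offspring(x_best, x_r1, x_r2, F=0.5, CR=0.5, reverse=False, rng=None):
    """Eq. (5): y_j = x_best_j + F*(x_r1_j - x_r2_j) with probability CR, else x_best_j.
    Eq. (6) (reverse=True) flips the difference vector to search the opposite direction."""
    rng = rng or np.random.default_rng()
    sign = -1.0 if reverse else 1.0
    mask = rng.random(x_best.shape) < CR   # binomial crossover per component
    return np.where(mask, x_best + sign * F * (x_r1 - x_r2), x_best)

rng = np.random.default_rng(0)
x_best = np.zeros(3)
x_r1, x_r2 = np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])
y = generate_offspring(x_best, x_r1, x_r2, rng=rng)                  # Eq. (5)
# If x_best dominates y in objective space, retry with the reversed direction:
y2 = generate_offspring(x_best, x_r1, x_r2, reverse=True, rng=rng)   # Eq. (6)
```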

3.4. Pseudocode of the Proposed Algorithm

In this subsection, the pseudocode of the multiobjective brain storm optimization algorithm based on decomposition (MBSO/D) is displayed in Pseudocode 2.

Input:
  MOP (1)
  A stopping criterion
  $N$: the number of weight vectors (the clusters)
  $K$: the size of each cluster
  $T$: the size of the neighborhood, $T < N$
  $\lambda^1, \ldots, \lambda^N$: a set of uniformly distributed weight vectors
Output: Objective vectors $F(x_1^{best}), \ldots, F(x_N^{best})$
Initialization: Generate an initial population of $N \times K$ solutions and determine $z^*$; randomly cluster
  the initial population into $N$ clusters $C_1, \ldots, C_N$ with size $K$ and determine the
  best solution $x_i^{best}$ of each cluster by Eq. (2); determine $B(i) = \{i_1, \ldots, i_T\}$, where
  $\lambda^{i_1}, \ldots, \lambda^{i_T}$ are the closest weight vectors to $\lambda^i$; set $p_1 = 0.5$ and $t = 1$
While the function evaluation times are less than the maximum function evaluation times do
  Set $n_1 = 0$ and $n_3 = 0$
  For $i = 1, \ldots, N$ do
    According to Eq. (4), randomly select two indexes $r_1$ and $r_2$; use $x_i^{best}$, $x_{r_1}$, and $x_{r_2}$ to generate offspring $y$
    by Eq. (5).
    If $x_i^{best}$ dominates $y$, $y$ is regenerated by Eq. (6).
    Update of $z^*$: For $j = 1, \ldots, m$, if $z_j^* > f_j(y)$, then set $z_j^* = f_j(y)$
    Update of Neighboring Solutions: set $c = 0$ and $S = B(i)$
    While $c < 2$ and $S \neq \emptyset$
      randomly select an index $k$ from $S$;
      If $g(y \mid \lambda^k, z^*) \le g(x_k^{best} \mid \lambda^k, z^*)$
        randomly select a solution $x$ (not $x_k^{best}$) from cluster $C_k$, set $x = y$,
        redetermine $x_k^{best}$ by Eq. (2), and set $c = c + 1$.
        If $y$ was generated by one cluster
          $n_1 = n_1 + 1$;
        Else
          $n_3 = n_3 + 1$;
        End if
      End if
      $S = S \setminus \{k\}$;
    End while
  End for
  Use Eq. (3) to update $p_t$; set $t = t + 1$
End while

In MBSO/D, first, the solutions in the initial population are randomly clustered into N clusters of size K and the best solution of each cluster is determined by (2); second, for the best solution of each cluster (the for loop in Pseudocode 2), some solutions are selected according to (3)-(4), and the best solution and the selected solutions generate a new solution by (5) or (6), which produces good offspring; third, some neighboring solutions of the best solution are updated by the new solution and the aggregation function (the while loop in Pseudocode 2), which helps the algorithm obtain a set of solutions with good diversity and convergence; finally, the selection probability is updated by (3) to balance exploitation and exploration.
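Putting these steps together, a heavily simplified, self-contained skeleton of this flow for a bi-objective problem might look as follows (Python; the Tchebycheff form of (2), the mating-pool construction, all parameter values, and the toy problem are our assumptions; see Pseudocode 2 for the full procedure):

```python
import numpy as np

def tcheby(fv, w, z):
    """Assumed Tchebycheff form of the aggregation function g in Eq. (2)."""
    return float(np.max(w * np.abs(fv - z)))

def mbsod_sketch(f, dim, N=20, K=5, T=5, Fs=0.5, CR=0.5, gens=100, seed=0):
    """Simplified MBSO/D skeleton for a bi-objective f on the box [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    W = np.stack([(i / (N - 1), 1 - i / (N - 1)) for i in range(N)])  # uniform weight vectors
    B = np.argsort(((W[:, None] - W[None]) ** 2).sum(-1), 1)[:, :T]  # T nearest vectors each
    X = rng.random((N, K, dim))                                      # N clusters with K members
    FV = np.array([[f(x) for x in c] for c in X], dtype=float)       # objective values (N, K, 2)
    z = FV.reshape(-1, 2).min(0)                                     # reference point z*
    best = [int(np.argmin([tcheby(v, W[i], z) for v in FV[i]])) for i in range(N)]
    p = 0.5                                                          # initial selection probability
    for _ in range(gens):
        n1 = n3 = 0
        for i in range(N):
            one = rng.random() < p                 # adaptive choice of the mating pool
            if one:                                # exploit: two members of cluster i
                a, b = rng.choice(K, 2, replace=False)
                pa, pb = X[i, a], X[i, b]
            else:                                  # explore: bests of two neighboring clusters
                ja, jb = rng.choice(B[i], 2, replace=False)
                pa, pb = X[ja, best[ja]], X[jb, best[jb]]
            xb = X[i, best[i]]
            mask = rng.random(dim) < CR                                     # Eq. (5)
            y = np.clip(np.where(mask, xb + Fs * (pa - pb), xb), 0, 1)
            fy = np.asarray(f(y), dtype=float)
            if np.all(FV[i, best[i]] <= fy) and np.any(FV[i, best[i]] < fy):
                y = np.clip(np.where(mask, xb - Fs * (pa - pb), xb), 0, 1)  # Eq. (6)
                fy = np.asarray(f(y), dtype=float)
            z = np.minimum(z, fy)                  # update reference point
            replaced = 0
            for k in rng.permutation(B[i]):        # replace at most two neighboring solutions
                if replaced >= 2:
                    break
                if tcheby(fy, W[k], z) <= tcheby(FV[k, best[k]], W[k], z):
                    r = rng.choice([m for m in range(K) if m != best[k]])
                    X[k, r], FV[k, r] = y, fy
                    best[k] = int(np.argmin([tcheby(v, W[k], z) for v in FV[k]]))
                    replaced += 1
            if replaced:                           # count surviving offspring by origin
                n1, n3 = n1 + one, n3 + (not one)
        p = n1 / (n1 + n3 + 1e-6)                  # Eq. (3)
    return X, FV

# Usage on a toy bi-objective problem with a convex front
X, FV = mbsod_sketch(lambda x: (float(x[0]), float(1 - np.sqrt(x[0]) + 0.5 * np.mean(x[1:]))), dim=5)
```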

In MBSO/D, the MOPs are solved by updating the neighboring solutions of each solution. We use the aggregation function to update the neighboring solutions of each solution, so the whole population is updated at the same time. This update strategy makes the solutions converge to the subproblems, which ensures the convergence of the algorithm; in addition, the weight vectors are uniformly distributed, which ensures the diversity of the algorithm. So, this update strategy can solve MOPs. Moreover, in the original BSO, solutions are clustered by some clustering algorithm, but in this MBSO each subproblem is considered a cluster, and the solutions are automatically clustered while the population is updated. The clustering in this MBSO therefore requires no additional computation.

This multiobjective BSO algorithm is compared with other multiobjective BSO algorithms, e.g., MBSO-C [28] (MBSO with Cauchy mutation) and MBSO-DE [27] (MBSO with differential evolution); the major differences are as follows.

In MBSO-C and MBSO-DE, after the solutions are clustered by some clustering method, some solutions of the population are replaced by the newly generated solutions. In the proposed MBSO/D, the population is first updated by the newly generated solutions; then, the solutions are automatically clustered as each subproblem is optimized. In MBSO-C and MBSO-DE, a newly generated solution is compared with other solutions of the same cluster to update this cluster, which may reduce the convergence pressure; in this algorithm, the aggregation function is used to update the population, which may enhance the convergence pressure.

4. Experimental Studies

In this section, the performance of MBSO/D is verified by comparing it with existing multiobjective optimization algorithms, e.g., NSGAII [11] and MOEA/D [21]. Two test suites are used in this experiment: ten test problems of the CEC 2009 competition [33] and seven DTLZ problems [34]. The number of decision variables is set to 30, 7, 21, 12, and 22 for F1–F10, DTLZ1, DTLZ3, DTLZ2 and DTLZ4–DTLZ6, and DTLZ7, respectively.

4.1. Performance Metrics

Three performance metrics are adopted to quantify the performance of the algorithms: generational distance (GD) [35], inverted generational distance (IGD) [35], and the hypervolume indicator (HV) [36]. GD evaluates the convergence performance of an algorithm, while both IGD and HV evaluate the diversity and convergence of the solutions obtained by an algorithm. Roughly 2000 points uniformly sampled on the Pareto front are used in the calculation of IGD and GD for each test problem, and a fixed reference point is used in the calculation of HV. Moreover, the Wilcoxon rank-sum test [37] is employed at a significance level of 0.05; it tests whether the performance of MBSO/D is significantly better ("+"), statistically similar ("="), or significantly worse ("−") than that of the compared algorithms.
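For reference, IGD (and GD, by swapping the roles of the two sets) can be computed as follows (Python; this is a standard mean-distance definition, not code from the paper; some papers use a root-mean-square variant of GD):

```python
import numpy as np

def igd(pf_ref: np.ndarray, obtained: np.ndarray) -> float:
    """Inverted generational distance: mean Euclidean distance from each
    reference PF point to its nearest obtained objective vector.
    Lower is better; it penalizes poor convergence and poor coverage."""
    d = np.linalg.norm(pf_ref[:, None, :] - obtained[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def gd(pf_ref: np.ndarray, obtained: np.ndarray) -> float:
    """Generational distance: mean distance from each obtained vector to the PF."""
    return igd(obtained, pf_ref)

# Usage: ~2000 uniformly sampled PF points, as in this section's experiments
t = np.linspace(0, 1, 2000)
pf = np.stack([t, 1 - np.sqrt(t)], axis=1)   # e.g., ZDT1's true PF
approx = pf[::100] + 0.01                    # a shifted 20-point approximation
print(igd(pf, approx), gd(pf, approx))
```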

4.2. Parameters Setting

All algorithms are implemented in the MATLAB language and run independently thirty times with a maximal number of function evaluations of 100 000 on all test problems. For fair comparisons, the population size and the maximal number of function evaluations of the compared algorithms are the same as in this work, and the other parameters of NSGAII and MOEA/D are the same as in the original literature. In MBSO/D, $F = 0.5$ and $CR = 0.5$; the size of each cluster is set to 5; the population size is set to 105 for all compared algorithms on each test problem; the size of the neighborhood list is set to ; the probability of choosing a mate subproblem from its neighborhood is set to 0.9.

4.3. Algorithm Performance Analyses

The statistical results of the GD, IGD, and HV metrics obtained by each MOEA are presented in Tables 1 and 2; the performance of MBSO/D is assessed by these statistical results. In these tables, the highlighted bold results indicate the best results.

4.3.1. Comparisons of MBSO/D with NSGAII and MOEA/D

This subsection presents the comparison results on IGD, GD, and HV for seventeen problems. Table 1 gives the mean and standard deviation values for the three compared algorithms. It can be seen from Table 1 that, in terms of the HV and IGD metrics, the results obtained by MBSO/D are better than those obtained by NSGAII and MOEA/D on more than fourteen problems, which indicates that the final solutions obtained by MBSO/D have better diversity than those obtained by NSGAII and MOEA/D as well as good convergence. The mean values of the GD metric obtained by MBSO/D are smaller than those obtained by MOEA/D and NSGAII on more than nine problems. These results imply that MBSO/D can obtain a set of solutions with better convergence than MOEA/D and NSGAII on most test problems.

Moreover, MBSO/D outperforms NSGAII and MOEA/D in solving DTLZ1 and DTLZ3, which indicates that the selection strategy and crossover operators have an advantage in solving problems with multiple local fronts. NSGAII is better at solving DTLZ5; the reason for the higher mean IGD value of MBSO/D is that the update strategy is not suitable for MOPs with degenerated PFs. According to the median values of the IGD metric, Figure 1 plots the nondominated solutions obtained by MBSO/D. It is observed that the objective vectors of the obtained nondominated solutions uniformly cover the full true PF on these seventeen test problems except for F5, F6, F9, and F10. MBSO/D cannot converge to the true PF on F5, F6, F9, and F10 because the maximal number of function evaluations is small and the number of decision variables is large.

The main goal of MOEAs is to obtain a set of solutions with good diversity and convergence. To test the algorithms' ability to accomplish this goal, Figure 2 shows the evolution of the average IGD metric value of the current population with the number of function evaluations for F1, F2, DTLZ1, and DTLZ3. Figure 2 indicates that MBSO/D converges much faster than MOEA/D and NSGAII, in terms of the number of function evaluations, in minimizing the IGD metric value for these four test problems. In other words, for MOPs with many local PFs (DTLZ1 and DTLZ3), MBSO/D can quickly converge to the true PF and maintain the diversity of the obtained solutions; for two-objective problems (F1 and F2), MBSO/D can find a better set of solutions than MOEA/D and NSGAII. These comparisons suggest that the selection strategy of MBSO/D is good at balancing exploration and exploitation and that the update strategy can maintain diversity.

4.3.2. Comparisons of MBSO/D with MBSO-C and MBSO-DE

In this subsection, we compare MBSO/D with two MBSOs (MBSO-C [28] and MBSO-DE [27]) to verify the performance of MBSO/D. MBSO/D is compared with MBSO-C and MBSO-DE on five ZDT [23] problems (ZDT1–ZDT4 and ZDT6). The experimental results of MBSO-C and MBSO-DE are taken directly from the original literature. For fair comparisons, the population size of MBSO/D is set to 100 for the five problems; the maximal number of function evaluations is set to 300 000, which is smaller than that of MBSO-C and MBSO-DE; the other parameter settings are the same as in Section 4.2.

The mean and best values of the GD and diversity metric $\Delta$ [11] obtained by MBSO/D, MBSO-C, and MBSO-DE are displayed in Table 2. It can be seen from this table that, in terms of these two metrics, the results obtained by MBSO/D are better than those obtained by MBSO-C and MBSO-DE, which illustrates that MBSO/D can achieve a set of solutions with better diversity and convergence than MBSO-C and MBSO-DE on all five problems. These comparisons imply that MBSO/D is better at balancing exploration and exploitation than MBSO-C and MBSO-DE on these problems. According to the median values of the GD metric, the objective vectors of the nondominated solutions obtained by MBSO/D on the five ZDT test problems are plotted in Figure 3, which visually shows the superior performance of MBSO/D. These plots indicate that MBSO/D can effectively approach the true PFs.

4.3.3. Comparisons of MBSO/D with MOHS/D

In this subsection, MBSO/D is compared with MOHS/D [38], which hybridizes MOEA/D with the harmony search algorithm to solve MOPs. MBSO/D is compared with MOHS/D on five ZDT problems and seven DTLZ problems. The experimental results of MOHS/D are taken directly from the original literature to make a fair comparison, and the population sizes of MBSO/D are set to 100 and 200 for the two-objective and three-objective problems, respectively. For each problem, the maximal number of function evaluations is the same as for MOHS/D [38]; the other parameter settings are the same as in Section 4.2.

Table 3 summarizes the best, median, and worst values of the IGD metric obtained by MBSO/D and MOHS/D on these twelve test problems. It can be seen from this table that the median IGD values obtained by MBSO/D are smaller than those obtained by MOHS/D on the five ZDT problems, DTLZ3, and DTLZ6, whereas the median IGD values obtained by MBSO/D are bigger than those obtained by MOHS/D on the other five DTLZ problems. These results indicate that MBSO/D can obtain a set of solutions with better coverage and diversity than MOHS/D on most of these twelve test problems.

5. Conclusions

In this paper, we proposed a multiobjective brain storm optimization algorithm, called MBSO/D, based on the idea of decomposition. In this approach, the update strategy of MOEA/D [21] is used in the MBSO, which can well balance diversity and convergence. Simultaneously, a selection strategy is adopted to improve the search efficiency of the algorithm. Moreover, MBSO/D is compared with NSGAII, MOEA/D, MBSO-C, MBSO-DE, and MOHS/D on test sets with complicated PSs or many local PFs. According to the performance analyses, MBSO/D shows competitive performance on most test problems against the compared MOEAs. These results imply that the update strategy and the selection strategy in MBSO/D can help the MBSO obtain a set of solutions with good diversity and convergence. However, for a few benchmark functions, the proposed algorithm shows shortcomings because the update strategy is not suitable for some MOPs with degenerated PFs. In the future, we will study the application of this algorithm to real-world problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by National Natural Science Foundation of China (nos. 61502290, 61806120, 61672334, 61673251, and 61401263), the Fundamental Research Funds for the Central Universities (GK201901010), China Postdoctoral Science Foundation (no. 2015M582606), and Natural Science Basic Research Plan in Shaanxi Province of China (no. 2016JQ6045).