The Scientific World Journal

Special Issue: Computational Intelligence and Metaheuristic Algorithms with Applications

Research Article | Open Access

Volume 2014 | Article ID 745921 | https://doi.org/10.1155/2014/745921

Carolina Lagos, Broderick Crawford, Enrique Cabrera, Ricardo Soto, José-Miguel Rubio, Fernando Paredes, "Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm", The Scientific World Journal, vol. 2014, Article ID 745921, 10 pages, 2014. https://doi.org/10.1155/2014/745921

Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

Academic Editor: Xin-She Yang
Received: 09 Apr 2014
Accepted: 27 Jun 2014
Published: 31 Aug 2014

Abstract

Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on the algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric.

1. Introduction

Evolutionary algorithms (EAs) are an effective alternative to (approximately) solve several large and complex optimisation problems, as they are able to find good solutions for a wide range of problems in acceptable computational time. Although less studied, EAs for multiobjective optimisation (MO) problems, called evolutionary multiobjective optimisation (EMO) algorithms, have proven to be very effective. In fact, during the last two decades, several authors have focused their efforts on the development of EMO algorithms for a wide range of MO problems. For instance, Maravall and de Lope [1] use a genetic algorithm (GA) to solve the multiobjective dynamic optimisation of an automatic parking system. In [2] the authors propose an improvement to the well-known NSGA algorithm (called NSGA-II) based on an elitist approach. In [3] the author presents an EMO algorithm applied to a specific variation of the well-studied capacitated vehicle routing problem (CVRP), where the author includes in the EMO algorithm an explicit collective memory method, namely, the extended virtual loser (EVL). Other well-known EMO algorithms developed during the last two decades are PAES [4] and MO particle swarm optimisation [5]. More recently, hybrid techniques have also been applied to a large number of optimisation problems (see [6]). A comprehensive literature review related to EMO algorithms can be found in [7].

EMO algorithms have some problems that must be taken into account, though. For instance, they tend to fall into premature convergence with low evolution efficiency [8]. This is because the implicit information embodied in the evolution process and the domain knowledge of the optimisation problem are not fully exploited in the solution approach [9]. To overcome these problems, one can make use of implicit evolution information. Reynolds [10] proposes an EA, called the cultural algorithm (CA), which is inspired by the human cultural evolution process and makes use of the implicit evolution information generated at each iteration. CAs have a dual evolution structure which consists of two spaces: the population space and the belief space. On the one hand, the population space works as in any other EA, that is, using evolutionary operators such as mutation and crossover. On the other hand, in the belief space, implicit knowledge is extracted from selected individuals in the population and stored in a different form. This knowledge is then used to guide the whole evolution process in the population space so that it can induce the population to escape from local optimal solutions. It has been shown that CAs can effectively improve evolution performance [9]. Although less studied, CAs have also been used to solve MO problems. Coello et al. [11] and Coello et al. [12], two remarkable surveys on CAs, only mention the work of Coello and Landa [13] as an example of a CA application solving MO problems. More recently, Zhang et al. [14] present a CA which is enhanced by using a particle swarm optimisation algorithm; this enhanced CA is applied to a fuel distribution MO problem. Srinivasan and Ramakrishnan [15] present a MO cultural algorithm applied in the data mining domain. In [16] the authors applied a CA to a biobjective portfolio selection problem using normative knowledge in the belief space. In [17] the authors present a formal framework to implement MO cultural algorithms.
To the best of our knowledge, [18] is the only article that uses CAs to solve the biobjective uncapacitated facility location problem (BOUFLP). Furthermore, we did not find any article which compares the performance of CAs using different evolutionary strategies at the belief space level. Thus, in this paper we present an extension of the biobjective cultural algorithm (BOCA) developed in [18]. We use two different strategies at the belief space level and compare the performance of our new algorithms with the performance of the previous one. We also compare their results with other well-known EMO algorithms such as PAES and NSGA-II. The obtained solutions are compared using the hypervolume S metric proposed in [19].

The remainder of this paper is organised as follows. Section 2 gives an overview of MO, focused on EMO algorithms. Section 2.1 briefly presents the BOUFLP and some of its distinctive features. In Section 3, we describe our implementation of the BOCA algorithm and the main differences between our implementation and the one in [18]. Section 3.2 presents the BOCA algorithm applied to a set of well-known instances from the literature. Finally, Section 4 presents the conclusions of this work.

2. Overview

In this section we give an overview of topics related to this paper. In Section 2.1, MO concepts are presented, emphasising EMO algorithms and the state of the art. More specifically, we focus on the development of EMO algorithms for MO combinatorial optimisation (MOCO) problems. In Section 2.2 we present the BOUFLP formulation based on a cost-coverage approach. Finally, in Section 2.3 we present an overview of CAs and their multiobjective extension. Details of our CA implementation are also presented at the end of this section.

2.1. (Evolutionary) Multiobjective Optimisation

In this section we briefly introduce the main principles of MO problems and, particularly, MOCO problems. For a comprehensive review of this topic see [20, 21]. In this paper we will make use of the following notation for the comparison of objective vectors (solutions). Let y¹ and y² be two vectors in R^p. We say that y¹ ≦ y² if y¹_k ≤ y²_k for every k = 1, …, p. Similarly, we will say that y¹ ≤ y² if y¹ ≦ y² but y¹ ≠ y². Finally, we say that y¹ < y² if y¹_k < y²_k for every k. A solution x ∈ R^n, with n being equal to the number of decision variables, is called an efficient solution and its image y = f(x), with f : R^n → R^p, a nondominated point of the MO problem if there is no x′ ∈ R^n such that f(x′) ≤ f(x). In [22] the author describes several excellence relations; these relations establish strict partial orders in the set of all nondominated points related to different aspects of their quality. Previously, in [23, 24] the authors considered several outperformance relations to address the closeness of the set of nondominated points found by an algorithm to the actual set of nondominated points, called the Pareto Frontier (PF). In [25] a comprehensive explanation of the desirable features of an approximation to the PF is presented. In this paper, we choose the S metric, which is properly explained in [24]. The S metric calculates the hypervolume of a multidimensional region [19] and allows the integration of aspects that are individually measured by other metrics. An advantage of the S metric is that each algorithm can be assessed independently of the other algorithms involved in the study. However, the values S(A) and S(B) of two approximation sets A and B cannot be used to derive whether either set entirely dominates the other. Figure 1(a) shows a situation where S(A) > S(B) and set A completely dominates set B. Figure 1(b) shows a situation where S(A) > S(B) but neither A dominates B nor B dominates A.
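For a biobjective maximisation problem, the dominance test and the S (hypervolume) metric can be sketched as follows. This is a minimal illustration, not code from the paper: the function names and the reference-point convention are our own assumptions.

```python
def dominates(y1, y2):
    """True if point y1 dominates y2 (both objectives maximised):
    y1 is at least as good in every objective and strictly better in one."""
    return all(a >= b for a, b in zip(y1, y2)) and any(a > b for a, b in zip(y1, y2))

def hypervolume_2d(points, ref_point):
    """Area dominated by a set of mutually nondominated 2D points,
    measured with respect to a reference point (both objectives maximised)."""
    # Sort by the first objective, descending; y then increases monotonically.
    pts = sorted(points, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref_point[1]
    for x, y in pts:
        hv += (x - ref_point[0]) * (y - prev_y)  # add one horizontal strip
        prev_y = y
    return hv
```

Normalising this area by the area of the ideal region yields the percentage S values reported in the tables below.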

In this paper we use EAs to solve the BOUFLP. An EA is a stochastic search procedure inspired by the evolution process in nature. In this process, individuals evolve and the fitter ones have a better chance of reproduction and survival. The reproduction mechanisms favour the characteristics of the stronger parents and hopefully produce better children, guaranteeing the presence of those characteristics in future generations [26]. EAs have been successfully applied to a large number of both single- and multiobjective optimisation problems. Comprehensive reviews of EMO algorithms are presented in [11, 24] and, more recently, in [12]. A more general review of hybrid heuristics solving MOCO problems, where EMO algorithms are also included, is presented in [27].

2.2. Biobjective Uncapacitated Facility Location Problem

The facility location problem (FLP) is one of the most important problems for companies that aim to distribute products to their customers. The problem consists of selecting sites to install plants, warehouses, and distribution centres, allocating customers to serving facilities, and interconnecting facilities by flow assignment decisions. Comprehensive reviews and analyses of the FLP are presented in [28–30].

In this paper we consider a two-level supply chain, where a single plant serves a set of warehouses, which serve a set of end customers or retailers. Figure 2 shows this configuration.

Two main (conflicting) objectives can be identified in the FLP:
(i) minimise the total cost associated with facility installation and customer allocation and
(ii) maximise the customer coverage rate.

Several works on single-objective optimisation have been carried out considering these two objectives separately. On the one hand, the uncapacitated FLP (UFLP) is one of the most studied FLPs in the literature; in the UFLP the main goal is to minimise the location-allocation cost of the network. On the other hand, median FLPs are among the most common FLPs focused on coverage maximisation. The most important FLP models are well described and formalised in [29]. MO FLPs have also been well studied in the literature during the last two decades; a survey on this topic can be found in [31].

As we mentioned before, in this paper we solve the BOUFLP, which has been modelled with minisum and maxisum objectives (cost and coverage). The following model formulation is based on [32]. Let I be the set of potential facilities and J the set of customers. Let f_i be the fixed cost of opening facility i ∈ I and d_j the demand of customer j ∈ J. Let c_ij be the cost of assigning customer j to facility i and t_ij the distance between facility i and customer j. Let D be the maximal covering distance; that is, customers within this distance to an open facility are considered well served. Let N_j = {i ∈ I : t_ij ≤ D} be the set of facilities that could serve customer j within the maximal covering distance D. Let y_i be 1 if facility i is open and 0 otherwise. Let x_ij be 1 if the whole demand of customer j is served by facility i and 0 otherwise. Objective functions z1 (minimised) and z2 (maximised) are as follows:

min z1 = Σ_{i∈I} f_i y_i + Σ_{i∈I} Σ_{j∈J} c_ij x_ij,  (6)
max z2 = Σ_{j∈J} Σ_{i∈N_j} d_j x_ij,  (7)

subject to

Σ_{i∈I} x_ij = 1, for all j ∈ J,  (2)
x_ij ≤ y_i, for all i ∈ I, j ∈ J,  (3)
y_i ∈ {0, 1}, for all i ∈ I,  (4)
x_ij ∈ {0, 1}, for all i ∈ I, j ∈ J.  (5)

Equation (6) represents the total operating cost; the first term corresponds to the location cost, that is, the sum of the fixed costs of all the open facilities, and the second term represents the allocation cost, that is, the cost of attending customer demand by an open facility. Equation (7) measures coverage as the sum of the demand of customers attended by open facilities within the maximal covering distance. Equations (2) and (3) ensure that each customer is attended by exactly one facility; equation (3) also forces customers to be assigned to open facilities. Finally, equations (4) and (5) set the decision variables as binary.
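As an illustration, the two BOUFLP objectives (cost and coverage) can be evaluated for a candidate solution as follows. The data-structure names are our own assumptions, not the paper's instance format.

```python
def evaluate(y, x, fixed_cost, demand, assign_cost, dist, D):
    """Return (total cost, covered demand) for an open-facility vector y
    and an assignment matrix x, where x[i][j] == 1 iff customer j is
    served by facility i and D is the maximal covering distance."""
    n_fac, n_cust = len(y), len(demand)
    # Cost objective: fixed costs of open facilities plus allocation costs.
    z1 = sum(fixed_cost[i] for i in range(n_fac) if y[i])
    z1 += sum(assign_cost[i][j]
              for i in range(n_fac) for j in range(n_cust) if x[i][j])
    # Coverage objective: a customer's demand counts only if it is served
    # within the maximal covering distance D.
    z2 = sum(demand[j]
             for i in range(n_fac) for j in range(n_cust)
             if x[i][j] and dist[i][j] <= D)
    return z1, z2
```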

2.3. Biobjective Cultural Algorithm

The experience and beliefs accepted by a community in a social system are the main motivations for the creation of CAs. Originally proposed by Reynolds [10], CAs model the evolution of cultural systems based on the principles of human social evolution; in this case, evolution is seen as an optimisation process [10]. CAs guide the evolution of the population based on acquired knowledge: knowledge acquired during previous iterations is passed on to future generations, which accelerates the convergence of the algorithm to good solutions [33]. Domain knowledge is modelled separately from the population because there is a certain independence between them, which allows us to model them separately in order to enhance the overall algorithm performance. Figure 3 shows this interaction.

CAs are mainly characterised by presenting two inheritance systems: one at the population level, called the population space, and the other at the knowledge level, called the belief space. This key feature is designed to increase the learning rates and convergence of the algorithm and thus to produce a more responsive system for a number of problems [34]. Moreover, it allows us to identify two significant levels of knowledge: a microevolutionary level (represented by the population space) and a macroevolutionary level (represented by the belief space) [35].

CAs have the following components: the population space (a set of individuals with independent features) [35]; the belief space (knowledge extracted from individuals of previous generations and stored) [34]; the communication protocol, which connects the two spaces and defines the rules on the type of information to be exchanged between them through the acceptance and influence functions; and, finally, the knowledge sources, which are described in terms of their ability to coordinate the distribution of individuals depending on the nature of a problem instance [35]. These knowledge sources can be of the following types: circumstantial, normative, domain, topographic, and historical.

The most distinctive feature of CAs is the use of the belief space which through an influence function affects future generations. For this reason, in this paper we focus on the effect on the algorithm performance of changes in such an influence function. To do this, we have considered results obtained previously in [18], where the authors used an influence function based on historical knowledge, and we compare those results with our BOCA implementation which considers two influence functions: the first one based on circumstantial knowledge and the second one based on normative knowledge. Algorithm 1 shows the general procedure of our BOCA algorithm.

begin
 t = 0;
 initialise Population P(t);
 initialise BeliefSpace B(t);
 while t < maxGenerations do
  Evaluate(P(t));
  E(t) = bestIndividuals(P(t));
  updateBeliefSpace(B(t), accept(E(t)));
  influence(P(t), B(t));
  t = t + 1;
  P(t) = newPopulation(P(t − 1));
 end
end
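The general procedure above can be rendered in Python as follows. All operator names (evaluate, best_individuals, accept, update_belief_space, influence, new_population) are placeholders for the BOUFLP-specific operators described in the text, not code from the paper.

```python
def boca(init_population, init_belief_space, max_generations, ops):
    """Generic cultural-algorithm main loop: the belief space is updated
    from accepted elite individuals and in turn influences the population."""
    population = init_population()
    belief_space = init_belief_space()
    for _ in range(max_generations):
        scores = ops["evaluate"](population)
        elite = ops["best_individuals"](population, scores)
        # Accepted individuals refresh the cultural knowledge.
        belief_space = ops["update_belief_space"](belief_space,
                                                 ops["accept"](elite))
        # The belief space guides the next generation.
        population = ops["influence"](population, belief_space)
        population = ops["new_population"](population)
    return belief_space
```

With dummy operators over integers, three generations of "increment everyone and remember the best" leave the belief space holding the best values seen so far.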

To initialise the population, we use a semirandom function. In its first phase, this function stochastically defines the set of facilities that will be opened (selected facilities). Then, we allocate each customer to a selected facility, minimising the cost function while trying not to deteriorate the coverage function. This strategy provides better results than using completely random initial populations, and its additional computational cost is marginal.
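A minimal sketch of this semirandom initialisation is shown below, under two simplifying assumptions of ours: phase one opens each facility with probability 0.5 (the actual probability is not stated in the paper), and phase two only minimises assignment cost.

```python
import random

def semirandom_individual(n_fac, n_cust, assign_cost, rng=random):
    # Phase 1: stochastically choose the facilities to open (at least one).
    y = [rng.random() < 0.5 for _ in range(n_fac)]
    if not any(y):
        y[rng.randrange(n_fac)] = True
    open_fac = [i for i in range(n_fac) if y[i]]
    # Phase 2: greedily allocate each customer to the cheapest open facility.
    x = [[0] * n_cust for _ in range(n_fac)]
    for j in range(n_cust):
        best = min(open_fac, key=lambda i: assign_cost[i][j])
        x[best][j] = 1
    return y, x
```

By construction every customer is assigned to exactly one open facility, so the individual always satisfies constraints (2) and (3).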

To obtain the next generation, two parents are used in a recombination process. To avoid local optima, we do not overuse the culture: one parent is selected from the population to maintain diversity and the other is selected from the belief space to influence the next generation. The belief space keeps a list of all the individuals which meet some criteria, and these criteria depend on which knowledge the algorithm implements. In this paper, the circumstantial knowledge selects the best individuals found so far for each objective function; thus, one individual gives us information on the best value found for the cost objective and the other does the same for the coverage objective. The historical knowledge stores a list of individuals with the best fitness value found so far, where the fitness value is calculated as the hypervolume covered by an individual. Finally, the normative knowledge considers a list of individuals which are pairwise nondominated with respect to the other individuals of their generation.
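The circumstantial and normative selection rules described above can be sketched as follows; historical knowledge, which ranks individuals by their hypervolume, is omitted for brevity. Individuals are represented here just by their objective pairs (z1, z2), with z1 minimised and z2 maximised, and all names are illustrative.

```python
def circumstantial(population):
    """Keep the best individual found for each objective separately."""
    best_cost = min(population, key=lambda ind: ind[0])   # lowest z1
    best_cover = max(population, key=lambda ind: ind[1])  # highest z2
    return [best_cost, best_cover]

def normative(population):
    """Keep the individuals that are nondominated within this generation."""
    def dominated_by(a, b):
        # b dominates a: no worse in both objectives and not identical.
        return b[0] <= a[0] and b[1] >= a[1] and b != a
    return [a for a in population
            if not any(dominated_by(a, b) for b in population)]
```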

Let m be the number of available facilities and n the number of customers of our BOUFLP. In this paper the decision variables y and x are represented by a binary vector of length m and a binary m × n matrix, respectively. Comparing two different solutions (individuals) requires an evaluation criterion; in this paper we use the same criterion explained in [18].

3. Computational Experiments

In this section we present the set of instances that are used in this study as well as results obtained by our BOCA implementation.

3.1. Instances Presentation

The instances used in this paper are random instances created with a problem generator that follows the methodology of UflLib [36]. Previous works in the literature have also used this generator to create their test instances [18, 26].

The BOCA algorithm has several parameters that need to be set. As in [18], the number of generations considered in this paper is 100, and the population size is set to 100. Mutation is applied with a given probability in the population space and with a different probability in the belief space; both values differ from those used in [18]. These values were chosen because, together, they yield the best performance of the algorithm on a set of test instances. Thus, although the resulting values differ from those used in [18], the method we use to set them is the same as in that work, which is important in order to fairly compare the different BOCA algorithms.

3.2. Results and Discussion

In this section we compare the results obtained by the previous BOCA algorithm (see [18] for further details) and our approach. Moreover, a comparison between results obtained by well-known EMO algorithms such as NSGA-II and PAES and our BOCA algorithm is also presented in this section.

Tables 1 and 2 show the results obtained by the BOCA implementations using historical [18], circumstantial, and normative knowledge, respectively. In the same way, Tables 3 and 4 present the results obtained by the well-known NSGA-II and PAES algorithms. For each algorithm, the S value (%), the execution time (in seconds), and the number of efficient solutions have been included in these tables. As we mentioned before, we want to produce a large set of efficient solutions, an S value close to 100% (ideal), and a small execution time. For the sake of easy reading, we have split the set of instances into two subsets.


Instance BOCA (historical) BOCA (circumstantial) BOCA (normative)
T (sec) S (%) |E| T (sec) S (%) |E| T (sec) S (%) |E|

252 0.7057 13 167 0.7058 13 37 0.6963 11
197 0.6136 11 176 0.6136 11 27 0.6135 10
186 0.6993 9 184 0.6992 9 33 0.6992 9
115 0.5729 5 193 0.5729 5 28 0.5728 3
152 0.6351 8 168 0.6352 8 32 0.6352 8
183 0.5982 9 186 0.5961 8 34 0.5944 8
464 0.8071 18 793 0.7828 16 1908 0.7300 12
1247 0.8013 47 598 0.7798 23 1201 0.7090 30
1177 0.7552 50 611 0.7374 21 1305 0.7098 29
450 0.7471 21 960 0.6307 10 1450 0.6048 15
940 0.7790 46 610 0.7628 32 1725 0.7078 28
740 0.7245 37 536 0.6656 14 446 0.7010 28
1473 0.7878 52 2386 0.6685 27 3481 0.7140 42
1043 0.7611 37 4266 0.6418 27 5412 0.6200 32
1165 0.7883 53 1644 0.7171 24 3941 0.7160 27
735 0.6345 17 1437 0.5849 17 3419 0.6010 20
1459 0.8039 53 2333 0.6725 21 2745 0.7158 35
1434 0.7903 43 2399 0.6535 26 2811 0.7040 48


Instance BOCA (historical) BOCA (circumstantial) BOCA (normative)
T (sec) S (%) |E| T (sec) S (%) |E| T (sec) S (%) |E|

135 0.7795 7 196 0.7795 7 36 0.7721 8
130 0.5856 5 128 0.5813 4 48 0.5807 4
238 0.7410 14 229 0.7278 12 101 0.7170 13
260 0.7261 14 222 0.7246 14 182 0.7218 12
115 0.8117 6 185 0.7250 4 25 0.8020 5
285 0.6517 18 245 0.6340 15 31 0.5967 12
683 0.7650 38 736 0.6945 26 1208 0.7119 20
944 0.7323 59 882 0.7133 30 1014 0.7115 24
1351 0.6335 72 683 0.6263 28 1076 0.6106 28
618 0.5214 32 857 0.4947 27 472 0.4824 29
905 0.7473 45 917 0.6869 37 1820 0.6647 25
800 0.8775 42 565 0.8502 24 781 0.8310 35
802 0.8345 23 885 0.8105 18 2371 0.8090 20
1221 0.7634 53 1405 0.7134 24 2678 0.7050 35
1410 0.7915 58 1706 0.7169 13 2791 0.6940 45
567 0.7498 10 604 0.6720 17 3841 0.6420 15
939 0.6393 37 482 0.5470 27 5712 0.5870 25
1904 0.8016 67 1745 0.7560 34 4958 0.7150 54


Instance NSGA-II PAES
T (sec) S (%) |E| T (sec) S (%) |E|

1344 0.7057 13 372 0.7057 13
1511 0.6136 11 394 0.6136 11
1859 0.6993 9 326 0.6993 9
3186 0.5729 5 305 0.5729 5
1748 0.6351 8 378 0.6351 8
1650 0.5982 9 384 0.5982 9
1633 0.8071 18 499 0.7795 16
1345 0.8012 43 622 0.7915 41
1394 0.7538 43 603 0.7384 29
1874 0.7508 20 525 0.7342 20
1413 0.7783 40 595 0.7572 33
1474 0.6676 36 511 0.5300 12
2385 0.7913 43 1386 0.7391 25
2522 0.7597 39 1231 0.6729 23
2298 0.7665 34 1269 0.6900 26
2575 0.6344 17 1233 0.6106 15
2446 0.8072 40 1251 0.7590 23
2259 0.7862 38 1218 0.6461 27


Instance NSGA-II PAES
T (sec) S (%) |E| T (sec) S (%) |E|

1439 0.7795 7 397 0.7795 7
1430 0.5856 5 405 0.5856 5
1288 0.741 13 433 0.7410 13
1747 0.7261 14 440 0.7261 14
1766 0.8117 6 363 0.8117 6
1261 0.6517 17 394 0.6517 17
1566 0.7609 30 562 0.7134 27
1525 0.7285 49 563 0.6899 34
1345 0.6330 51 578 0.5991 53
2562 0.5333 23 502 0.5211 21
1433 0.7631 31 564 0.7360 26
1460 0.8088 47 585 0.7602 32
2698 0.8447 23 1280 0.6925 10
2289 0.7515 41 1241 0.4970 13
2284 0.7988 52 1279 0.6836 26
2603 0.7500 10 1192 0.7253 14
2301 0.6387 35 1212 0.5021 16
3178 0.8029 43 1260 0.7152 32

We then compare our BOCA implementations with the one presented in [18]. Tables 5 and 6 show a comparison between those algorithms. As we can see, when compared in terms of the S value (the bigger, the better), the BOCA algorithm using historical knowledge performs consistently better than the ones using circumstantial and normative knowledge. In fact, the historical variant obtains an S value that is, on average, 5.8% bigger than the one obtained by the circumstantial variant and 6.5% bigger than the value obtained by the normative variant. When compared in terms of the CPU time needed to reach the number of iterations (generations), the historical variant is, on average, faster than both of the other algorithms, although for one subset of the instances the times required by the circumstantial and normative variants are, on average, quite similar to it (only a 1.6% difference). Finally, when we look at the number of efficient solutions found by each algorithm, we can see that, again, the historical variant outperforms the other two algorithms: the average number of efficient solutions it finds is about 20% bigger than the number obtained by the other two approaches.


Instance ΔS (%) ΔT (%) Δ|E| (%) (vs. circumstantial) ΔS (%) ΔT (%) Δ|E| (%) (vs. normative)

−0.014 33.73 0.00 1.332 85.32 15.38
0.000 10.66 0.00 0.016 86.29 9.09
0.014 1.08 0.00 0.014 82.26 0.00
0.000 −67.83 0.00 0.017 75.65 40.00
−0.016 −10.53 0.00 −0.016 78.95 0.00
0.351 −1.64 11.11 0.635 81.42 11.11
3.011 −70.91 11.11 9.553 −311.21 33.33
2.683 52.04 51.06 11.519 3.69 36.17
2.357 48.09 58.00 6.012 −10.88 42.00
15.58 −113.33 52.38 19.047 −222.22 28.57
2.080 35.11 30.43 9.140 −83.51 39.13
8.130 27.57 62.16 3.244 39.73 24.32
15.143 −61.98 48.08 9.368 −136.32 19.23
15.675 −309.01 27.03 18.539 −418.89 13.51
9.0320 −41.12 54.72 9.172 −238.28 49.06
7.8170 −95.51 0.00 5.280 −365.17 −17.65
16.345 −59.90 60.38 10.959 −88.14 33.96
17.310 −67.29 39.53 10.920 −96.03 −11.63


Instance ΔS (%) ΔT (%) Δ|E| (%) (vs. circumstantial) ΔS (%) ΔT (%) Δ|E| (%) (vs. normative)

0.000 −45.19 0.00 0.949 73.33 −14.29
0.734 1.54 20.00 0.837 63.08 20.00
1.781 3.78 14.29 3.239 57.56 7.14
0.207 14.62 0.00 0.592 30.00 14.29
10.681 −60.87 33.33 1.195 78.26 16.67
2.716 14.04 16.67 8.439 89.12 33.33
9.216 −7.76 31.58 6.941 −76.87 47.37
2.595 6.57 49.15 2.840 −7.42 59.32
1.137 49.44 61.11 3.615 20.36 61.11
5.121 −38.67 15.63 7.480 99.92 9.38
8.082 −1.33 17.78 11.053 −101.10 44.44
3.111 29.38 42.86 5.299 2.38 16.67
2.876 −10.35 21.74 3.056 −195.64 13.04
6.550 −15.07 54.72 7.650 −119.33 33.96
9.425 −20.99 77.59 12.318 −97.94 22.41
10.376 −6.53 −70.00 14.377 −577.43 −50.00
14.438 48.67 27.03 8.181 −508.31 32.43
5.689 8.35 49.25 10.803 −160.40 19.40

The results above are consistent with the good performance obtained by the approach in [18]. Moreover, the results show that the performance of the BOCA algorithm depends largely on the selected knowledge, which can make a difference in terms of the S value, the execution time, and the number of efficient solutions found by the algorithm. This is an important finding, as it points out the relevance of the choice of a specific type of knowledge.

We now compare the circumstantial and normative BOCA variants to the well-known NSGA-II and PAES algorithms. Tables 7 and 8 show a comparison between the normative variant and the NSGA-II and PAES algorithms. As we can see, although the normative variant obtains, on average, an S value 6.8% lower than the one obtained by the NSGA-II algorithm, it is more than three times faster. Moreover, when it is compared to the PAES algorithm, the obtained S values are, on average, equivalent, while the normative variant is around 30% faster than PAES. PAES obtains, on average, 9.32% more efficient points, though.


Instance ΔS (%) ΔT (%) Δ|E| (%) (vs. NSGA-II) ΔS (%) ΔT (%) Δ|E| (%) (vs. PAES)

−1.35 −3532.43 −18.18 −1.35 −905.41 −18.18
−0.02 −5496.30 −10.00 −0.02 −1359.26 −10.00
−0.01 −5533.33 0.00 −0.01 −887.88 0.00
−0.02 −11278.57 −66.67 −0.02 −989.29 −66.67
0.02 −5362.50 0.00 0.02 −1081.25 0.00
−0.64 −4752.94 −12.50 −0.64 −1029.41 −12.50
−10.56 14.41 −50.00 −6.78 73.85 −33.33
−13.00 −11.99 −43.33 −11.64 48.21 −36.67
−6.20 −6.82 −48.28 −4.03 53.79 0.00
−24.14 −29.24 −33.33 −21.40 63.79 −33.33
−9.96 18.09 −42.86 −6.98 65.51 −17.86
4.76 −230.49 −28.57 24.39 −14.57 57.14
−10.83 31.49 −2.38 −3.52 60.18 40.48
−22.53 53.40 −21.88 −8.53 77.25 28.13
−7.05 41.69 −25.93 3.63 67.80 3.70
−5.56 24.69 15.00 −1.60 63.94 25.00
−12.77 10.89 −14.29 −6.04 54.43 34.29
−11.68 19.64 20.83 8.22 56.67 43.75


Instance ΔS (%) ΔT (%) Δ|E| (%) (vs. NSGA-II) ΔS (%) ΔT (%) Δ|E| (%) (vs. PAES)

−0.96 −3897.22 12.50 −0.96 −1002.78 12.50
−0.84 −2879.17 −25.00 −0.84 −743.75 −25.00
−3.35 −1175.25 0.00 −3.35 −328.71 0.00
−0.60 −859.89 −16.67 −0.60 −141.76 −16.67
−1.21 −6964.00 −20.00 −1.21 −1352.00 −20.00
−9.22 −3967.74 −41.67 −9.22 −1170.97 −41.67
−6.88 −29.64 −50.00 −0.21 53.48 −35.00
−2.39 −50.39 −104.17 3.04 44.48 −41.67
−3.67 −25.00 −82.14 1.88 46.28 −89.29
−10.55 −442.80 20.69 −8.02 −6.36 27.59
−14.80 21.26 −24.00 −10.73 69.01 −4.00
2.67 −86.94 −34.29 8.52 25.10 8.57
−4.41 −13.79 −15.00 14.40 46.01 50.00
−6.60 14.53 −17.14 29.50 53.66 62.86
−15.10 18.17 −15.56 1.50 54.17 42.22
−16.82 32.23 33.33 −12.98 68.97 6.67
−8.81 59.72 −40.00 14.46 78.78 36.00
−12.29 35.90 20.37 −0.03 74.59 40.74

Finally, Tables 9 and 10 show a comparison between the circumstantial variant and the NSGA-II and PAES algorithms. The circumstantial variant performs quite similarly to the PAES algorithm with respect to both the S value and the number of obtained efficient solutions; however, it is faster than PAES. A similar situation occurs when it is compared to the NSGA-II algorithm: although NSGA-II obtains better values for both the S metric and the number of efficient solutions, the circumstantial variant is much faster. This can be explained by the very fast performance that our algorithm obtains on the set of small instances; indeed, if we only consider the medium and large instances, the execution times obtained by both algorithms are quite similar to each other. This result confirms the good performance of the BOCA algorithm outlined in [18]. Furthermore, our results confirm that this good performance with respect to other well-known EMO algorithms does not depend on which type of knowledge is considered. However, as we mentioned before, the choice of the knowledge used in the BOCA algorithm is an important issue and has an impact on the algorithm performance.


Instance ΔS (%) ΔT (%) Δ|E| (%) (vs. NSGA-II) ΔS (%) ΔT (%) Δ|E| (%) (vs. PAES)

0.01 −704.79 0.00 0.01 −122.75 0.00
0.00 −758.52 0.00 0.00 −123.86 0.00
−0.01 −910.33 0.00 −0.01 −77.17 0.00
0.00 −1550.78 0.00 0.00 −58.03 0.00
0.02 −940.48 0.00 0.02 −125.00 0.00
−0.35 −787.10 −12.50 −0.35 −106.45 −12.50
−3.10 −105.93 −12.50 0.42 37.07 0.00
−2.74 −124.92 −86.96 −1.50 −4.01 −78.26
−2.22 −128.15 −104.76 −0.14 1.31 −38.10
−19.04 −95.21 −100.00 −16.41 45.31 −100.00
−2.03 −131.64 −25.00 0.73 2.46 −3.13
−0.30 −175.00 −157.14 20.37 4.66 14.29
−18.37 0.04 −59.26 −10.56 41.91 7.41
−18.37 40.88 −44.44 −4.85 71.14 14.81
−6.89 −39.78 −41.67 3.78 22.81 −8.33
−8.46 −79.19 0.00 −4.39 14.20 11.76
−20.03 −4.84 −90.48 −12.86 46.38 −9.52
−20.31 5.84 −46.15 1.13 49.23 −3.85


Instance