Abstract

Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature. Among them are several recently proposed methods, such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

1. Introduction

Today, optimization problems receive much attention in artificial intelligence. There are several types of optimization, such as numerical [1], linear [2], continuous [3], or combinatorial optimization [4]. Problems arising in these fields are typically of high complexity, and many of them have direct real-world applications. For these reasons, many different techniques designed to tackle these problems can be found in the literature.

Some classical examples of these techniques are simulated annealing [5], tabu search [6], the genetic algorithm (GA) [7, 8], ant colony optimization [9], and particle swarm optimization [10]. Since their proposal, all these metaheuristics have been widely applied in a large number of fields. In fact, these techniques are the focus of many research studies nowadays [11–14].

Despite the existence of these conventional algorithms, the development of new metaheuristics for solving optimization problems remains a topic of interest in the scientific community. On the one hand, optimization problems represent a great challenge because they are hard to solve, so the development of new techniques that outperform the existing ones is of interest to researchers. On the other hand, optimization problems (such as routing problems) are very important from a social perspective, since their resolution directly affects the economy and sustainability in terms of cost reduction and energy saving.

In this way, many metaheuristics have been proposed recently and applied successfully to various fields and problems. One example is the imperialist competitive algorithm (ICA) [15]. This population-based metaheuristic, proposed by Gargari and Lucas in 2007, is based on the concept of imperialism. In ICA, individuals are called countries and are divided into various empires, which evolve independently. Throughout the execution, the empires battle each other with the aim of conquering colonies. When one empire conquers all the colonies, the algorithm converges to the final solution. Some examples of its application can be seen in recent papers [16, 17]. Another example is the artificial bee colony. This technique was proposed in 2005 by Karaboga [18, 19] for multimodal and multidimensional numeric problems. Since then, it has been used frequently in the literature for solving different kinds of problems [20–22]. The artificial bee colony is a swarm-based technique which emulates the foraging behaviour of honey bees. Its population consists of a colony with three kinds of bees: employed, onlooker, and scout bees, each with a different behaviour. The harmony search, presented by Geem et al. in 2001, is another example [23, 24]. This metaheuristic mimics the improvisation of music players. In this case, each musical instrument corresponds to a decision variable, a musical note is the value of a variable, and the harmony represents a solution vector. With the intention of imitating musicians in a jam session, variables take random values or previously memorized good values in order to find the optimal solution. This algorithm is also used frequently in the literature [25–27].

The bat-inspired algorithm is a more recent technique [28, 29]. This metaheuristic, proposed by Yang in 2010, is based on the echolocation behaviour of microbats, which can find their prey and discriminate different kinds of insects even in complete darkness. Yang and Deb proposed the cuckoo search algorithm in 2009 [30, 31]. As its authors state in [30], this metaheuristic is based on the obligate brood parasitic behaviour of some cuckoo species in combination with the Lévy flight behaviour of some birds and fruit flies. Another recently developed technique which is very popular today is the firefly algorithm [32, 33]. This nature-inspired algorithm is based on the flashing behaviour of fireflies, which acts as a signal system to attract other fireflies. Like the aforementioned techniques, these metaheuristics have been the focus of several research studies [34–40] and review papers [41–43].

As can be seen, there are many metaheuristics in the literature for solving optimization problems. Although several techniques have been mentioned, many other recently developed ones could be cited, such as spider monkey optimization [44] or the seeker optimization algorithm [45]. This large number of existing techniques demonstrates the growing interest in the field, on which several books, special issues of journals, and conference proceedings are published annually. Moreover, combinatorial optimization is a widely studied field in artificial intelligence nowadays. Since many of the problems arising in this field are NP-hard [46], they are particularly interesting for researchers, and this kind of optimization is the subject of a large number of works every year [47–49]. This scientific interest is the reason why this study focuses on this sort of optimization.

This paper focuses on one recently proposed metaheuristic called Golden Ball (GB). This technique is a multiple-population metaheuristic based on soccer concepts. A preliminary version of the GB and some basic results were first introduced in 2013 by Osaba et al. [50]. The final version of the GB and its practical use for solving complex problems were presented this very year (2014) by the same authors [51]. In that paper, the GB is introduced and compared with some similar metaheuristics of the literature. In addition, it is successfully applied to two different routing problems: the traveling salesman problem (TSP) [52] and the capacitated vehicle routing problem (CVRP) [53]. The results obtained by GB were compared with those obtained by two different GAs and two distributed genetic algorithms (DGAs) [54, 55]. As a conclusion of that study, it can be said that the GB outperforms these four algorithms when applied to the TSP and CVRP.

The authors of that study claim that GB is a technique for solving combinatorial optimization problems. Even so, they only prove its effectiveness with two simple routing problems, the TSP and the CVRP. This is the reason that motivates the work presented in this paper. Thus, the objective of this paper is to verify whether the GB is a promising metaheuristic for solving combinatorial optimization problems, performing a more comprehensive and rigorous experimentation than that presented to date. Thereby, in this research study, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the ones used in [51]: the asymmetric traveling salesman problem (ATSP) [56] and the vehicle routing problem with backhauls (VRPB) [57]. Furthermore, in order to verify that the GB is also applicable to other types of problems apart from routing, two additional problems have been used in the experimentation: the n-queen problem (NQP) [58] and the one-dimensional bin packing problem (BPP) [59]. As in [51], the results obtained by GB are compared with those obtained by two different GAs and two DGAs. Besides, with the objective of performing a rigorous comparison, two statistical tests are conducted to compare these outcomes: the well-known normal distribution z-test and the Friedman test.

The rest of the paper is structured as follows. In Section 2, the GB is introduced. In Section 3, the problems used in the experimentation are described. Then, in Section 4, the experimentation conducted is described. In Section 5, the results obtained are shown and the statistical tests are performed. This work finishes with the conclusions and future work (Section 6).

2. Golden Ball Metaheuristic

In this section, the GB is described. As mentioned in Section 1, the GB is a multiple-population metaheuristic which borrows several concepts from soccer. To begin with, the technique starts with the initialization phase (Section 2.1). In this first phase, the whole population of solutions (called players) is created. Then, these players are divided among the different subpopulations (called teams). Each team has its own training method (or coach). Once this initial phase has been completed, the competition phase begins (Section 2.2). This second phase is divided into seasons. Each season is composed of weeks, in which the teams train independently and face each other, creating a league competition. At the end of every season, a transfer procedure takes place, in which the players and coaches can switch teams. The competition phase is repeated until the termination criterion is met (Section 2.3). The entire procedure of the technique can be seen in Figure 1. The different steps that form the technique are now explained in detail.

2.1. Initialization Phase

As has been said, the first step of the execution is the creation of the initial population. This population is composed of PT · NT solutions, called players, where PT is the number of players per team and NT is the number of teams. Both parameters must have a value higher than 1.

After the whole population has been created, the players are randomly divided among the NT different teams. Once the players are divided between the teams, each one is identified by a variable indicating its number within its team and the team it belongs to. The total set of teams forms the league.

Furthermore, every player has its own quality, represented by a real number that is determined by a cost function. This function depends on the problem; for example, for some routing problems it is equivalent to the traveled distance, while for the NQP it is the number of collisions. In addition, each team has a captain, which is the player with the best quality of the team.

It should be borne in mind that, depending on the problem characteristics, the objective is to minimize or maximize this cost function. In the problems used in this paper, the lower the quality value of a player is, the better the player is.

Moreover, each team has an associated strength value. This value is crucial for the matches between teams (Section 2.2.2). It is logical to think that the better the players are, the stronger their team is. Thereby, if one team is strong, it can win more matches and be better positioned in the classification of the league. In this way, the strength value of a team is equal to the average quality of the players of that team, that is, the sum of the quality values of its PT players divided by PT.
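To make this phase more concrete, the following Java sketch creates a population of PT · NT random permutation-encoded players, divides it into teams, and computes each team's captain and strength as just described. It is a simplified illustration written for this description, not the implementation of [51]; the cost function used here is only a synthetic placeholder.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class GBInitialization {

    // Placeholder cost function: lower is better (here, number of elements out of place).
    static double cost(List<Integer> player) {
        int outOfPlace = 0;
        for (int i = 0; i < player.size(); i++) {
            if (player.get(i) != i) outOfPlace++;
        }
        return outOfPlace;
    }

    public static void main(String[] args) {
        int nt = 4;   // number of teams (NT)
        int pt = 12;  // players per team (PT)
        int problemSize = 30;
        Random rnd = new Random(0);

        // Create the whole population of NT * PT random permutation-encoded players.
        List<List<Integer>> population = new ArrayList<>();
        for (int p = 0; p < nt * pt; p++) {
            List<Integer> player = new ArrayList<>();
            for (int i = 0; i < problemSize; i++) player.add(i);
            Collections.shuffle(player, rnd);
            population.add(player);
        }

        // Randomly divide the players into NT teams of PT players each.
        Collections.shuffle(population, rnd);
        for (int t = 0; t < nt; t++) {
            List<List<Integer>> team = population.subList(t * pt, (t + 1) * pt);

            // Captain: the player with the best (lowest) quality of the team.
            List<Integer> captain = team.get(0);
            double qualitySum = 0.0;
            for (List<Integer> player : team) {
                double q = cost(player);
                qualitySum += q;
                if (q < cost(captain)) captain = player;
            }
            // Strength: average quality of the team's players.
            double strength = qualitySum / pt;
            System.out.printf("Team %d: captain quality = %.1f, strength = %.1f%n",
                    t + 1, cost(captain), strength);
        }
    }
}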

Once the initialization phase is completed, the competition phase begins. This phase is repeated iteratively until the ending criterion is met.

2.2. Competition Phase

This is the central phase of the metaheuristic. In this stage, each team evolves independently and improves its players (Section 2.2.1). Additionally, in this phase, the teams face each other, creating a league competition (Section 2.2.2). This league helps to decide the player transfers between teams (Section 2.2.3). The competition stage is divided into seasons. Each season has two different periods of player transfers. In each season, every team faces each of the others twice; in this way, each team plays 2NT-2 matches per season. Lastly, each team performs a number of training sessions equal to the number of matches played.

2.2.1. Training of Players

As in real life, trainings are the processes that make players improve their quality. In GB, each team has its own training method, and some methods are more effective than others. This fact makes some teams improve more than others. There are two kinds of training methods in GB: conventional trainings and custom trainings.

Conventional trainings are those that are performed regularly throughout the season. This type of training is applied individually to each player. A training method is a successor function, which works on a particular neighborhood structure in the solution space. Taking the TSP as an example, one training function could be the 2-opt [60]. As has been said, each team has its own training function, which acts as the coach of the team. The training function is assigned randomly in the initialization phase, and each player uses the method of its team. In each training session, the function is applied a certain number of times, and a newly generated player is accepted only if it improves the quality of the current one. Besides, this process can change the captain of a team, if one player comes to outperform the quality of its captain.

It is worth mentioning that a training session has its own termination criterion. A training session ends when a certain number of successors have been generated without improving the quality of the trained player. This number is proportional to the size of the neighborhood of the team's training function. For example, taking the 2-opt and a 30-node TSP instance, the training ends when a number of successors equal to the size of that neighborhood have been generated without improvement, with the size of the problem being 30. Figure 2 schematizes this process.
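The following Java sketch illustrates one conventional training session as described above, using a 2-opt-style move as the successor function. It is an illustrative reconstruction: the acceptance rule (keep only improving successors) and the no-improvement counter follow the description given here, while the limit passed in the example (n(n-1)/2) is only an assumed neighborhood size, not necessarily the one used in [51].

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ConventionalTrainingSketch {

    // Example cost: total "distance" of a permutation on a synthetic symmetric matrix.
    static double cost(List<Integer> tour, double[][] d) {
        double total = 0.0;
        for (int i = 0; i < tour.size(); i++) {
            total += d[tour.get(i)][tour.get((i + 1) % tour.size())];
        }
        return total;
    }

    // 2-opt style successor: reverse a random segment of the permutation.
    static List<Integer> twoOpt(List<Integer> tour, Random rnd) {
        List<Integer> neighbor = new ArrayList<>(tour);
        int i = rnd.nextInt(neighbor.size());
        int j = rnd.nextInt(neighbor.size());
        Collections.reverse(neighbor.subList(Math.min(i, j), Math.max(i, j) + 1));
        return neighbor;
    }

    // One training session: apply the successor function repeatedly, keeping only
    // improvements, and stop after 'noImprovementLimit' successors without improvement
    // (in GB this limit is proportional to the size of the operator's neighborhood).
    static List<Integer> trainingSession(List<Integer> player, double[][] d,
                                         int noImprovementLimit, Random rnd) {
        List<Integer> best = new ArrayList<>(player);
        int withoutImprovement = 0;
        while (withoutImprovement < noImprovementLimit) {
            List<Integer> candidate = twoOpt(best, rnd);
            if (cost(candidate, d) < cost(best, d)) {
                best = candidate;          // the new player is accepted only if it improves
                withoutImprovement = 0;
            } else {
                withoutImprovement++;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 10;
        double[][] d = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) d[i][j] = Math.abs(i - j);
        List<Integer> player = new ArrayList<>();
        for (int i = 0; i < n; i++) player.add(i);
        Collections.shuffle(player, rnd);
        System.out.println("before: " + cost(player, d));
        // n*(n-1)/2 is used here as an assumed neighborhood size for the 2-opt.
        List<Integer> trained = trainingSession(player, d, n * (n - 1) / 2, rnd);
        System.out.println("after:  " + cost(trained, d));
    }
}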

Furthermore, the fact that every team explores the solution space in a different way increases the exploration and exploitation capacity of the GB. This occurs because a different training method is used for each team, and it is enhanced by the fact that players can change teams.

On the other hand, the procedure of custom trainings is different. These trainings are performed when a player receives a certain number of conventional training sessions without experiencing any improvement (in this study, this number has been set to a fixed value). When this happens, the player can be considered trapped in a local optimum. A custom training is conducted by two players: the trapped player and the captain of its team. The purpose of this operation is to help the player escape from the local optimum and to redirect it to another promising region of the solution space. From a practical point of view, a custom training combines two players (like the crossover operator of a GA), resulting in a new player who replaces the trapped one. Taking the TSP as an example, a function that combines the characteristics of two players could be the order crossover (OX) [61] or the partially mapped crossover [62]. Custom training thus contributes to a thorough exploration of the solution space.

2.2.2. Matches between Teams

In GB, as in the real world, two teams are involved in a match. Each match consists of a series of goal chances (one per pair of players), which are resolved in the following way: first, the players of both teams are sorted by quality (in descending order). Then, each player faces the player occupying the same position in the other team. The player with the higher quality wins the goal chance, and its team scores a goal. As can be surmised, the team that scores more goals wins the match. Furthermore, the team that wins the match obtains 3 points and the loser obtains 0 points. If both teams obtain the same number of goals, each one receives one point. These points are used to build a team classification, sorting the teams by the points obtained over a whole season. Figure 3 shows the flowchart of a match.
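The following Java sketch illustrates how a match can be resolved under this scheme. It is a simplified reconstruction: player qualities are given as cost values (lower is better, as in the problems of this paper), and a goal chance between players of equal quality is assumed to produce no goal, a detail not specified in the description above.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MatchSketch {

    // Returns 3 if team A wins, 0 if it loses, and 1 if the match is drawn
    // (the same points the team would receive in the league classification).
    static int pointsForTeamA(List<Double> qualitiesA, List<Double> qualitiesB) {
        List<Double> a = new ArrayList<>(qualitiesA);
        List<Double> b = new ArrayList<>(qualitiesB);
        a.sort(Comparator.naturalOrder()); // lower cost value = better player here
        b.sort(Comparator.naturalOrder());
        int goalsA = 0, goalsB = 0;
        for (int i = 0; i < a.size(); i++) {   // one goal chance per pair of players
            if (a.get(i) < b.get(i)) goalsA++;
            else if (b.get(i) < a.get(i)) goalsB++;
        }
        if (goalsA > goalsB) return 3;
        if (goalsA < goalsB) return 0;
        return 1;
    }

    public static void main(String[] args) {
        List<Double> teamA = List.of(10.0, 12.0, 15.0, 20.0);
        List<Double> teamB = List.of(11.0, 13.0, 14.0, 25.0);
        System.out.println("Points for team A: " + pointsForTeamA(teamA, teamB));
    }
}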

2.2.3. Player Transfers between Teams

The transfers are the processes in which the teams exchange their players. There are two types of transfers in GB: season transfers and special transfers. The former are the conventional ones, and they take place twice per season. In these transfers, the points obtained by each team and its position in the league are crucial factors. In this way, in the middle and at the end of each season, the teams placed in the top half of the league classification “hire” the best players of the teams located in the lower half, and the teams of the bottom half receive in return the worst players of the top teams. In addition, the better the position of a team is, the better the player it receives is. In other words, the best team is reinforced with the best player of the worst team of the league, the second-placed team obtains the best player of the penultimate team, and so on. Finally, if the league has an odd number of teams, the team placed in the middle position does not exchange any player.
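A minimal Java sketch of this season transfer step is shown below. It assumes, following the description above, that the teams are ordered by league points and that each top-half team exchanges its worst player for the best player of its "mirror" team in the bottom half; player qualities are again cost values (lower is better). This pairing is one reading of the description, not code from [51].

import java.util.ArrayList;
import java.util.List;

public class SeasonTransferSketch {

    // teamsByRanking: index 0 = league leader, last index = last-placed team.
    static void seasonTransfers(List<List<Double>> teamsByRanking) {
        int nt = teamsByRanking.size();
        for (int k = 0; k < nt / 2; k++) {
            List<Double> topTeam = teamsByRanking.get(k);
            List<Double> bottomTeam = teamsByRanking.get(nt - 1 - k); // mirror team
            int worstOfTop = indexOfMax(topTeam);      // worst player of the top team
            int bestOfBottom = indexOfMin(bottomTeam); // best player of the bottom team
            double tmp = topTeam.get(worstOfTop);
            topTeam.set(worstOfTop, bottomTeam.get(bestOfBottom));
            bottomTeam.set(bestOfBottom, tmp);
        }
        // With an odd number of teams, the middle team is not touched.
    }

    static int indexOfMin(List<Double> xs) {
        int best = 0;
        for (int i = 1; i < xs.size(); i++) if (xs.get(i) < xs.get(best)) best = i;
        return best;
    }

    static int indexOfMax(List<Double> xs) {
        int worst = 0;
        for (int i = 1; i < xs.size(); i++) if (xs.get(i) > xs.get(worst)) worst = i;
        return worst;
    }

    public static void main(String[] args) {
        List<List<Double>> league = new ArrayList<>();
        league.add(new ArrayList<>(List.of(5.0, 7.0, 9.0)));   // leader
        league.add(new ArrayList<>(List.of(6.0, 8.0, 10.0)));
        league.add(new ArrayList<>(List.of(4.0, 11.0, 12.0)));
        league.add(new ArrayList<>(List.of(3.0, 13.0, 14.0))); // last place
        seasonTransfers(league);
        System.out.println(league);
    }
}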

On the other hand, the special transfers are sporadic exchanges that can occur at any time during a season. If a player receives a certain number of conventional and custom trainings without experiencing any improvement, it changes team (in this study, this number has been set to PT conventional training sessions without improvement). This change is made in order to obtain a different kind of training, and it does not matter whether the destination team is worse than the current one. Additionally, with the aim of keeping the number of players per team constant, the transferred player is exchanged with a random player of the destination team.

As the authors state in [51], these exchanges help the search process, since this change of neighborhood improves the exploration capacity of the technique.

Lastly, it is noteworthy that another sort of transfer exists in GB. In this case, the transfers are not performed with players but with team coaches. This process has been called cessation of coaches. In each period of season transfers, the teams positioned in the bottom half of the league classification change their training method, hoping to obtain another kind of training which improves the performance of the team. The new training method is chosen randomly among all the training types existing in the system, allowing repetitions between different teams. This random neighborhood change increases the exploration capacity of the metaheuristic.

2.3. Termination Criterion

The termination criterion is a critical factor in the development of a metaheuristic. On the one hand, it has to allow the search to examine a wide area of the solution space; on the other hand, if it is not strict enough, it can lead to a considerable waste of time. In this way, the termination criterion of the GB is composed of three clauses, which are described below.

Specifically, the execution of the GB finishes when (1) the sum of the qualities of all the captains has not improved with respect to the previous season, (2) the sum of the strengths of all the teams has not improved with respect to the previous season, and (3) there is no improvement in the best solution found in relation to the previous season. When these three conditions are fulfilled, the player with the best quality in the whole system is returned as the final solution.
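The following small Java sketch expresses this stopping rule for minimization problems: the search stops only when none of the three quantities has improved with respect to the previous season.

public class TerminationSketch {

    static boolean shouldStop(double captainsSumPrev, double captainsSumNow,
                              double strengthsSumPrev, double strengthsSumNow,
                              double bestCostPrev, double bestCostNow) {
        boolean captainsImproved = captainsSumNow < captainsSumPrev;    // clause (1)
        boolean strengthsImproved = strengthsSumNow < strengthsSumPrev; // clause (2)
        boolean bestImproved = bestCostNow < bestCostPrev;              // clause (3)
        // Execution finishes only when all three clauses hold simultaneously.
        return !captainsImproved && !strengthsImproved && !bestImproved;
    }

    public static void main(String[] args) {
        System.out.println(shouldStop(100.0, 98.0, 400.0, 400.0, 90.0, 90.0));  // false
        System.out.println(shouldStop(100.0, 100.0, 400.0, 400.0, 90.0, 90.0)); // true
    }
}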

3. Description of the Used Problems

As has been mentioned in the introduction of this study, four combinatorial optimization problems have been used in the experimentation. In this section, these problems are described. The first two are routing problems: the ATSP (Section 3.1) and the VRPB (Section 3.2). Besides, with the aim of verifying whether the GB is also a promising technique for other kinds of problems apart from routing, the NQP (Section 3.3) and the BPP (Section 3.4) have been used.

It is important to highlight that the objective of the present paper is not to find optimal solutions to these problems; in fact, there are multiple efficient techniques in the literature with that objective. Instead, these four problems have been used as benchmarking problems. In this way, the objective of using them is to compare the performance of the GB with that of the GAs and DGAs and to conclude which one obtains better results using the same parameters and functions.

3.1. Asymmetric Traveling Salesman Problem

Like its symmetric version (the TSP), the ATSP is of great scientific interest, and it has been used in many research studies since its formulation [63, 64]. The problem can be defined on a complete graph G = (V, A), where V is the set of vertexes representing the nodes of the system and A is the set of arcs representing the connections between nodes. Each arc (i, j) has an associated distance cost dij. Unlike in the TSP, in the ATSP the distance cost between two nodes depends on the direction of the flow; that is, dij need not equal dji. Thereby, the objective of the ATSP is to find a route that, starting and finishing at the same node, visits every node exactly once and minimizes the total distance traveled. In this way, the objective function is the total distance of the route.

In this study, the solutions for the ATSP are encoded using the permutation representation [65]. According to this encoding, each solution is represented by a permutation of node labels, which indicates the order in which the nodes are visited. For a 10-node instance, for example, one feasible solution would be encoded as a permutation of the ten nodes, and its fitness would be the sum of the distances of the arcs connecting consecutive nodes in the permutation, plus the distance of the arc returning to the starting node. This situation is depicted in Figure 4.
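As an illustration of this evaluation, the following Java sketch computes the fitness of a permutation-encoded ATSP solution over an asymmetric distance matrix. The matrix and the tour are arbitrary example data, not an instance of the benchmark used later.

public class AtspFitnessSketch {

    static double tourCost(int[] tour, double[][] dist) {
        double total = 0.0;
        for (int i = 0; i < tour.length; i++) {
            int from = tour[i];
            int to = tour[(i + 1) % tour.length]; // wraps around to the starting node
            total += dist[from][to];              // dist[from][to] may differ from dist[to][from]
        }
        return total;
    }

    public static void main(String[] args) {
        double[][] dist = {
            {0, 2, 9, 10},
            {1, 0, 6, 4},
            {15, 7, 0, 8},
            {6, 3, 12, 0}
        };
        int[] tour = {0, 1, 3, 2}; // permutation of the 4 nodes
        System.out.println("Tour cost: " + tourCost(tour, dist)); // 2 + 4 + 12 + 15 = 33
    }
}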

3.2. Vehicle Routing Problem with Backhauls

The VRPB is a variant of the basic VRP in which customers can demand either a delivery or a pick-up of certain goods [57]. In this problem, all deliveries are performed before the pick-ups. This is because, otherwise, goods would have to be moved around inside the vehicle, which could be counterproductive; for example, collected materials might be placed at the front of the cargo area while some undelivered goods remain at the back. The VRPB is widely used in the literature thanks to its applicability to the real world and to its solving complexity [66–68].

The VRPB can be defined on a complete graph G = (V, A), where V is the set of vertexes and A is the set of arcs. One vertex represents the depot, and the remaining ones are the customers. Besides, in order to facilitate the formulation, the set of customers can be separated into two subsets [69]. The first one, called the linehaul customers, contains those users who demand the delivery of goods. The second one, called the backhaul customers, contains those who demand the pick-up of a certain amount of material. To express the customer demands, positive values are used for linehaul customers and negative values for backhaul ones.

Additionally, a fleet of vehicles with a limited capacity is available. The objective of the VRPB is to find a set of routes with minimum cost such that (i) each route starts and ends at the depot, (ii) each customer is visited by exactly one route, (iii) all deliveries are made before the pick-ups, and (iv) the total demand of the customers visited by one route does not exceed the capacity of the vehicle that performs it.

Finally, the permutation representation is also used for this problem [70], and the routes are encoded as permutations of nodes. In addition, to distinguish the different routes of a solution, they are separated by zeros. As an example, suppose a set of five linehaul customers and seven backhaul customers. One possible solution with three vehicles would be a permutation of these twelve customers separated by zeros into three routes, and its fitness would be the sum of the distances of the three routes. In Figure 5(a), an example of a VRPB instance is depicted, and in Figure 5(b), a possible solution for this instance is shown.
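The following Java sketch shows how a zero-separated VRPB solution can be decoded and its traveled distance computed. For brevity it only evaluates the distance; the capacity and delivery-before-pick-up constraints mentioned above are not checked here, and the distance matrix is arbitrary example data.

import java.util.ArrayList;
import java.util.List;

public class VrpbFitnessSketch {

    static double totalDistance(int[] encodedSolution, double[][] dist) {
        double total = 0.0;
        List<Integer> route = new ArrayList<>();
        for (int gene : encodedSolution) {
            if (gene == 0) {                 // route separator: evaluate the finished route
                total += routeDistance(route, dist);
                route.clear();
            } else {
                route.add(gene);
            }
        }
        total += routeDistance(route, dist); // last route (no trailing separator needed)
        return total;
    }

    static double routeDistance(List<Integer> route, double[][] dist) {
        if (route.isEmpty()) return 0.0;
        double d = dist[0][route.get(0)];            // depot -> first customer
        for (int i = 0; i < route.size() - 1; i++) {
            d += dist[route.get(i)][route.get(i + 1)];
        }
        d += dist[route.get(route.size() - 1)][0];   // last customer -> depot
        return d;
    }

    public static void main(String[] args) {
        // 5 customers (1..5) plus the depot (node 0); symmetric distances for simplicity.
        double[][] dist = {
            {0, 4, 5, 6, 3, 7},
            {4, 0, 2, 5, 6, 8},
            {5, 2, 0, 3, 7, 9},
            {6, 5, 3, 0, 4, 2},
            {3, 6, 7, 4, 0, 5},
            {7, 8, 9, 2, 5, 0}
        };
        int[] solution = {1, 2, 0, 3, 5, 0, 4}; // three routes separated by zeros
        System.out.println("Fitness: " + totalDistance(solution, dist));
    }
}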

3.3. n-Queen Problem

The NQP is a generalization of the problem of putting eight nonattacking queens on a chessboard [71], which was introduced by Bezzel in 1848 [72]. The NQP consists in placing n queens on an n x n chessboard so that they cannot attack each other. This problem is a classical combinatorial design problem (constraint satisfaction problem), which can also be formulated as a combinatorial optimization problem [73]. In this paper, the NQP has been formulated as a combinatorial optimization problem, where a solution is coded as an n-tuple (q1, q2, ..., qn), which is a permutation of the set {1, 2, ..., n}. Each qi represents the row occupied by the queen positioned in the ith column. Using this representation, vertical and horizontal collisions are avoided, and the complexity of the problem becomes n!. Thereby, the fitness function is defined as the number of diagonal collisions along the board. Notice that the ith and jth queens collide diagonally if |qi - qj| = |i - j|, with i different from j.

In this way, the objective of the NQP is to minimize the number of conflicts, the ideal fitness being zero. A possible solution for an 8-queen chessboard is depicted in Figure 6. According to the encoding explained above, the solution represented in this figure is a permutation of {1, ..., 8}, and its fitness is 3, since there are three diagonal collisions (between queens 4 and 3, 6 and 5, and 6 and 8). This same formulation has been used before in the literature [74, 75].
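The fitness function just described can be computed as in the following Java sketch, which counts the pairs of queens sharing a diagonal under the permutation encoding; the example permutation is arbitrary.

public class NqpFitnessSketch {

    static int diagonalCollisions(int[] queens) {
        int collisions = 0;
        for (int i = 0; i < queens.length; i++) {
            for (int j = i + 1; j < queens.length; j++) {
                if (Math.abs(queens[i] - queens[j]) == Math.abs(i - j)) {
                    collisions++; // queens in columns i and j share a diagonal
                }
            }
        }
        return collisions;
    }

    public static void main(String[] args) {
        // Rows occupied by the queens of columns 1..8 (an arbitrary permutation of 1..8).
        int[] solution = {2, 4, 6, 8, 3, 1, 7, 5};
        System.out.println("Diagonal collisions: " + diagonalCollisions(solution));
    }
}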

3.4. One-Dimensional Bin Packing Problem

The packing of items into boxes or bins is a daily task in distribution and production. Depending on the item characteristics, as well as the form and capacity of the bins, a wide variety of packing problems can be formulated. An introduction to bin packing problems can be found in [59]. The BPP is the simplest one, and it has frequently been used as a benchmarking problem [76–78]. The BPP consists of a set of items, each with an associated size, and an unlimited supply of bins with the same capacity. The objective of the BPP is to pack all the items into a minimum number of bins. In this way, the objective function is the number of bins, which has to be minimized.

In this study, the solutions are encoded as permutations of items. To count the number of bins needed by a solution, the item sizes are accumulated, in the order given by the permutation, in a counter variable. When this accumulated size exceeds the bin capacity, the number of bins is incremented by 1 and the counter is reset, starting a new bin. As an example, suppose a simple instance of 9 items with three different sizes and a given bin capacity. One possible solution would be a permutation of the 9 items, and its fitness would be 3 (the number of bins needed to hold all the items). This example is represented in Figure 7.
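The following Java sketch illustrates this fitness evaluation under one plausible reading of the accumulation rule: whenever the next item does not fit in the current bin, a new bin is opened and the item is placed there. The item sizes and capacity are arbitrary example data.

public class BppFitnessSketch {

    static int binsNeeded(int[] itemSizesInOrder, int capacity) {
        int bins = 1;  // the first bin is opened at the start
        int load = 0;  // accumulated size in the current bin
        for (int size : itemSizesInOrder) {
            if (load + size > capacity) { // item does not fit: open a new bin for it
                bins++;
                load = size;
            } else {
                load += size;
            }
        }
        return bins;
    }

    public static void main(String[] args) {
        int capacity = 10;
        int[] solution = {4, 4, 3, 5, 2, 6, 4, 2, 5}; // item sizes, in the order of the permutation
        System.out.println("Bins needed: " + binsNeeded(solution, capacity));
    }
}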

4. Experimentation Setup

In this section, the experimentation performed is described. According to the study carried out in [51], the GB metaheuristic provides some originality with respect to the well-known techniques that can be found in the literature today. Analyzing the philosophy and the way GB works, it can be concluded that the DGA is the technique which shares the most similarities with it. Among other similarities, in the evolution of their individuals both metaheuristics rely on two operators, a local one and a cooperative one, which are used for exploitation and exploration, respectively. In addition, both techniques are easy to apply to combinatorial optimization problems.

For these reasons, to prove the quality of the GB, two single-population GAs and two DGAs are used for the experimentation. The general characteristics of these four techniques are explained in Section 4.1. In addition, the parametrization of all the algorithms is described in the same section. The details of the experimentation are introduced in Section 4.2.

4.1. General Description of Developed Techniques

As has been mentioned, the outcomes obtained by the GB are compared with those obtained by two basic single-population GAs and two different DGAs. The structure used for both GAs is the one represented in Algorithm 1, which is considered the conventional one. Furthermore, Algorithm 2 depicts the structure of both DGAs, which is also the conventional one.

(1) Initialization of the population
(2) while  termination criterion not reached  do
(3)   Parents selection
(4)   Crossover phase
(5)   Mutation phase
(6)   Survivors selection
(7) end
(8) Return the best individual found

(1)   Initialization of the subpopulations
(2)  while  termination criterion not reached  do
(3)   for  each subpopulation  do
(4)     Parents selection
(5)     Crossover phase
(6)     Mutation phase
(7)     Survivors selection
(8)   end
(9)  Individual migration phase
(10) end
(11) Return the best individual found

On the one hand, for one GA and one DGA, conventional operators and parameters have been used, that is, a high crossover probability and a low mutation probability, following the concepts outlined in many previous studies [54, 79, 80]. On the other hand, for the other GA and DGA, the parameters have been adjusted to be similar to those used for the GB. Thereby, the numbers of cooperative movements (crossovers and custom trainings) and individual movements (mutations and conventional trainings) performed are the same. In addition, the same functions have been used for these two algorithms and the GB. In this way, the only difference between them is the structure, so it can be deduced which algorithm obtains better results when using the same operators the same number of times.

The population size used for each metaheuristic is 48, and all the initial solutions have been generated randomly. For the DGAs, this population has been divided into 4 subpopulations of 12 individuals. For the GB, the whole population is likewise divided into 4 teams of 12 players each. The crossover probability and mutation probability of the conventionally parameterized GA are 95% and 5%, respectively. On the other hand, different crossover and mutation probabilities have been used for every subpopulation of the conventionally parameterized DGA: 95%, 90%, 80%, and 75% for the crossover probability, and 5%, 10%, 20%, and 25% for the mutation probability. Finally, for the remaining GA and DGA, the crossover and mutation probabilities have been chosen to fit the GB parameters.

In relation to the parent selection criterion for the GAs and DGAs, each individual of the population is first selected as a parent with a probability equal to the crossover probability. If an individual is selected for the crossover, the other parent is selected randomly. Regarding the survivor function, a 100% elitist function has been developed for one GA–DGA pair, and a 50% elitist-random one (meaning that half of the surviving population is composed of the best individuals, while the remaining ones are selected randomly) for the other pair. In both DGAs, the classic best-replace-worst migration strategy has been used. In this strategy, every subpopulation shares its best individual with the following deme in a ring topology. This communication happens every generation, and the immigrant replaces the worst individual of the receiving deme. Ultimately, the execution of the GAs and DGAs finishes when there are n generations without improvement in the best solution, where n is the size of the problem. The problem size is the number of customers in the two routing problems, the number of queens in the NQP, and the number of items in the BPP.

The successor functions employed by GB as conventional training functions for the ATSP, NQP, and BPP are the following (a sketch of two of them is given after the list).

(i) 2-opt and 3-opt: these functions, proposed by Lin for the TSP [60], have been widely used in the literature [81–83]. These operators eliminate at random 2 (for the 2-opt) or 3 (for the 3-opt) arcs of the solution and create two or three new arcs, avoiding the generation of cycles.

(ii) Insertion function: this operator selects and extracts one random node of a solution and inserts it in another random position. Because of its easy implementation and its good performance, this function is often used in the literature for any kind of permutation-encoded problem [84, 85].

(iii) Swapping function: this well-known function is also widely employed in many research studies [86]. In this case, two nodes of a solution are selected randomly, and they swap their positions.
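As an illustration, the following Java sketch implements the insertion and swapping functions for a generic permutation-encoded solution; it is a generic reconstruction written for this description, not the authors' code.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SuccessorFunctionsSketch {

    static List<Integer> insertion(List<Integer> solution, Random rnd) {
        List<Integer> neighbor = new ArrayList<>(solution);
        int from = rnd.nextInt(neighbor.size());
        Integer node = neighbor.remove(from);   // extract a random node
        int to = rnd.nextInt(neighbor.size() + 1);
        neighbor.add(to, node);                 // reinsert it at a random position
        return neighbor;
    }

    static List<Integer> swap(List<Integer> solution, Random rnd) {
        List<Integer> neighbor = new ArrayList<>(solution);
        int i = rnd.nextInt(neighbor.size());
        int j = rnd.nextInt(neighbor.size());
        Integer tmp = neighbor.get(i);          // exchange the positions of two nodes
        neighbor.set(i, neighbor.get(j));
        neighbor.set(j, tmp);
        return neighbor;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        List<Integer> solution = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        System.out.println(insertion(solution, rnd));
        System.out.println(swap(solution, rnd));
    }
}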

These successor functions have also been used as mutation functions for one of the DGAs (a different function for each subpopulation). For the remaining three algorithms, the 2-opt has been utilized for this purpose, since it is the one that obtains the best results.

For the same problems (ATSP, NQP, and BPP), GB uses the half crossover (HX) operator [81] as its custom training function. This operator is a particular case of the traditional one-point crossover in which the cut point is always made in the middle of the solution. Assuming a 10-node instance, an example of this function can be seen in Figure 8. This function has also been used as the crossover operator for one GA and one DGA, whereas for the other GA and DGA the well-known order crossover (OX) [61] has been implemented as the crossover function. An example of the OX is shown in Figure 9. Finally, Table 1 summarizes the characteristics of the GAs and DGAs.
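Since the repair mechanism of the HX is not detailed here, the following Java sketch applies the usual repair for permutation encodings: the child keeps the first half of one parent, and the remaining elements are added in the order in which they appear in the other parent (the same idea the HRX described below uses for routes). It should be read as an illustration, not as the authors' implementation.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HalfCrossoverSketch {

    static List<Integer> halfCrossover(List<Integer> parentA, List<Integer> parentB) {
        int cut = parentA.size() / 2;                 // cut point always in the middle
        List<Integer> child = new ArrayList<>(parentA.subList(0, cut));
        Set<Integer> used = new HashSet<>(child);
        for (Integer gene : parentB) {                // fill with the missing genes, in B's order
            if (!used.contains(gene)) {
                child.add(gene);
            }
        }
        return child;
    }

    public static void main(String[] args) {
        List<Integer> parentA = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        List<Integer> parentB = List.of(10, 8, 6, 4, 2, 9, 7, 5, 3, 1);
        System.out.println(halfCrossover(parentA, parentB));
        // -> [1, 2, 3, 4, 5, 10, 8, 6, 9, 7]
    }
}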

Regarding the VRPB, the 2-opt and Insertion functions are also used as conventional training functions. These operators, as Savelsbergh called them [87], are intraroute functions, which means that they work within a specific route. Additionally, two interroute functions have been developed.

(i) Insertion Routes: this function selects and extracts one random node from a random route. After that, this node is reinserted in a random position in another randomly selected route. This function can create new routes.

(ii) Swapping Routes: this operator randomly selects two nodes from two randomly selected routes, and these nodes are swapped.

It is noteworthy that all these functions take into account both the vehicle capacity and the class of the nodes’ demands, never producing infeasible solutions. As in the previous problems, these same operators have also been used as mutation functions for one of the DGAs (a different operator for each subpopulation), while the Insertion Routes operator has been used for the same purpose in the remaining three algorithms.

For the VRPB, the half route crossover (HRX) has been used as the custom training function. This function has been used recently in several studies [49, 85], and it operates as follows: first, half of the routes of one parent are directly inserted into the child; then, the nodes that remain to be inserted are added in the same order in which they appear in the other parent. Like the above functions, the HRX takes the VRPB constraints into account and does not generate infeasible solutions. In Figure 10, an example of the HRX procedure on a 20-node instance is shown. Additionally, Table 2 summarizes the characteristics of the GAs and DGAs for the VRPB. Finally, the features of the GB for the four problems are depicted in Table 3.

4.2. Description of the Experimentation

In this section, the basic aspects of the experimentation are detailed. All the tests have been run on an Intel Core i7 3930 computer running at 3.20 GHz with 16 GB of RAM, using Microsoft Windows 7 as the operating system. All the techniques were coded in Java. For the ATSP, 19 instances have been employed, which have been obtained from the TSPLib benchmark [88]. These 19 instances are all the ones available on the TSPLib webpage (https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/). Additionally, 12 instances have been utilized for the VRPB. These instances have been created by the authors of this study. With the aim of allowing the replication of this experimentation, the benchmark used is available upon request and can be obtained from the personal site of the corresponding author of this paper (http://paginaspersonales.deusto.es/e.osaba). The first 6 instances of the benchmark have been taken from the VRPTW Solomon benchmark (http://w.cba.neu.edu/~msolomon/problems.htm). For these instances, the time constraints have been removed, and the demand types have been modified in order to create backhaul and linehaul customers; the vehicle capacities and the amounts of customer demand have been retained. On the other hand, the remaining instances have been taken from the VRPWeb, and they belong to the Christofides/Eilon CVRP set (http://neo.lcc.uma.es/vrp). In this case, only the nature of the demands has been modified. These problem instances have been adapted for experimentation purposes, and so their optimal solutions are unknown.

In regard to the NQP, 15 different instances have been developed. The name of each instance describes the number of queens and the dimension of the chessboard; for example, the 20-queen instance consists of placing 20 queens on a 20x20 board. For this problem, the optimum is not shown, since it is 0 in every case. Lastly, regarding the BPP, 16 instances have been chosen from the well-known Scholl/Klein benchmark (http://www.wiwi.uni-jena.de/entscheidung/binpp/index.htm). These cases are named NxCyWz_v, where x is 1 (50 items), 2 (100 items), 3 (200 items), or 4 (500 items); y is 1 (capacity of 100), 2 (capacity of 120), or 3 (capacity of 150); z is 1 (item sizes between 1 and 100) or 2 (item sizes between 20 and 100); and v is A, B, or C, as a benchmark indexing parameter.

Each instance has been run 40 times. Besides, with the intention of conducting a fair and rigorous comparison of the outcomes, two different statistical tests have been performed: the normal distribution z-test and the Friedman test. Thanks to these tests, it can be concluded whether the differences in the results are statistically significant or not. The details of these statistical tests are explained in the next section.

5. Experimentation Results

In this section, the results obtained by each technique for the chosen problems are shown and analysed. In addition, the statistical tests are also depicted in this section. First, the results and statistical tests are displayed (Section 5.1). Then, the analysis of the outcomes obtained is conducted (Section 5.2).

5.1. Results and Statistical Tests

In this section, the outcomes and statistical tests are shown. In Table 4, the results obtained for the ATSP are introduced. Furthermore, in Table 5, the outcomes obtained for the VRPB are presented. The results obtained for the NQP and BPP are detailed in Tables 6 and 7, respectively. For each instance, the average result and standard deviation are shown. Additionally, average runtimes are also displayed (in seconds).

As mentioned, two statistical tests have been performed on these outcomes. The first one is the normal distribution z-test, by which the results obtained by the GB are compared with those obtained by the other techniques. Thanks to this test, it can be concluded whether the differences between GB and the other techniques are statistically significant or not. The statistic has the following form:

z = (X_GB - X_i) / sqrt(sd_GB^2 / n_GB + sd_i^2 / n_i),

where X_GB is the average of the GB, sd_GB its standard deviation, and n_GB its sample size, and X_i, sd_i, and n_i are the average, standard deviation, and sample size of the technique i, respectively.

It is noteworthy that the GB has been compared with the four other implemented metaheuristics, so the technique i can be any of the two GAs or the two DGAs. The confidence interval has been set at 95%. In this way, the result of the test can be positive (+), if the difference is significant (|z| > 1.96) and favors GB; negative (−), if it is significant and favors the other technique; or neutral (∗), if |z| ≤ 1.96 and the difference is therefore not significant. A + thus indicates that GB is significantly better, while a − indicates that it obtains substantially worse solutions. In this study, the numerical value of z is also displayed, so that the difference in results can be seen more easily. In Table 8, the tests performed for the chosen problems are shown.

The second statistical test conducted is the Friedman test. In Table 9, the overall rankings calculated with this test are summarized, where the smaller the score is, the better the ranking is. This ranking is obtained by considering the average results of each technique and comparing them instance by instance. Furthermore, in order to check whether there are statistical differences between the developed techniques, the value of the Friedman statistic X^2 is also depicted in Table 9. This value has been obtained using the following formula:

X^2 = (12N / (K(K + 1))) [ sum_j R_j^2 - K(K + 1)^2 / 4 ],

where N is the number of problem instances (e.g., N = 19 for the ATSP), K is the number of techniques (K = 5), and R_j is the average Friedman ranking score of technique j. The confidence interval has been stated at the 99% confidence level. The critical point in a chi-square distribution with 4 degrees of freedom is 13.277. Thereby, if X^2 > 13.277, it can be concluded that there are significant differences between the five techniques; otherwise, the differences are not substantial.

5.2. Analysis of the Results

Looking at the results presented in the previous section, one clear conclusion can be drawn: the GB outperforms the other techniques in terms of solution quality. Analyzing Tables 4–7, it can be seen that GB obtains better results than the other metaheuristics in 95.16% of the instances (59 out of 62). In the remaining 3 instances, GB obtains the same outcomes as one or more of the other techniques; GB has never obtained worse results. In addition, as Table 8 demonstrates, GB obtains significantly better results in 95.96% of the cases (238 out of 248), the differences being insignificant in the remaining 4.04%. The conclusions that can be drawn from a problem-by-problem analysis are the same. Regarding the ATSP, the GB gets better results in 94.73% of the cases (18 out of 19); in the remaining instance, GB obtains the same results as some of the other techniques. Furthermore, according to Table 8, GB is substantially better in 97.36% of the confrontations (74 out of 76). In regard to the VRPB and BPP, GB outperforms the other alternatives in 100% of the cases, and the differences are significant in 93.75% (45 out of 48) of the confrontations for the VRPB and in 100% (64 out of 64) for the BPP. Finally, in relation to the NQP, GB proves to be better in 86.66% of the instances; in the remaining 2 cases, it obtains the same results as one or more of the other techniques. Besides, the differences are significantly better for the GB in 91% of the confrontations (55 out of 60).

Lastly, observing the results offered by the Friedman test (Table 9), it can be seen that GB is the best-ranked technique for all the problems. In addition, all the values of the Friedman statistic are higher than the critical point, 13.277. For this reason, it can be concluded that there are significant differences among the results obtained by the five techniques for all the problems.

The reasons why GB performs better than the other algorithms are the same as those presented in [51]. On the one hand, the GB combines local improvement phases (conventional trainings) with cooperative phases (custom trainings and player transfers) and competitive phases (matches). The technique gives greater importance to the autonomous improvement of the players, while the other four algorithms are more focused on the cooperation and competition between individuals. GB does use cooperation between players, through the custom trainings, but this resource is employed to escape local optima and to increase the exploration capacity of the technique; for this reason, this kind of training is used sporadically and only when it is beneficial for the search process. Besides, in the GB metaheuristic, players can explore different neighborhood structures. This feature is another mechanism for avoiding local optima, and it helps the players to explore the solution space in different ways. On the other hand, the GAs and DGAs also have some mechanisms to avoid local optima, but these mechanisms are not as powerful as those of the GB.

Regarding runtimes, GB is faster than the GA and DGA configured with conventional parameters, while the GA and DGA adjusted to the GB parameters need times similar to those of GB. This fact gives an advantage to GB, since it obtains better results than the rest of the techniques while needing runtimes similar to the best of them.

The reason why GB is faster than these two algorithms can be explained following the same concepts introduced in several recently published works [49, 81]. Comparing individual improvement operators (mutations and conventional trainings) with cooperative operators (crossovers and custom trainings), the former need less time: they operate on one solution and perform a simple modification that can be made in minimal time, whereas cooperative operators work with two different solutions and their working procedures are more complex, needing more runtime. GB performs fewer cooperative movements than the conventionally parameterized GA and DGA, and this fact is clearly reflected in the runtimes. Additionally, GB and the two adjusted algorithms obtain similar runtimes because they use their operators in similar proportions.

Another noteworthy fact is the robustness of the GB. The standard deviation of the GB is lower than that of the other techniques in 93.54% of the instances (58 out of 62). This means that the differences between the worst and the best results found for an instance are smaller for the GB than for the four other algorithms. This fact provides robustness and reliability to the metaheuristic, which is very important for real-world problems.

As a final conclusion, it can be stated that the GB has proved to be better than the other four metaheuristics for all the problems used. In this way, adding these outcomes to those presented in [51] for the TSP and CVRP, it can be confirmed that the GB is a promising technique for solving combinatorial optimization problems.

6. Conclusions and Further Work

The Golden Ball is a recently published multiple-population metaheuristic based on soccer concepts. Until now, its performance had been tested only with two simple routing problems, the TSP and the CVRP. In this paper, the quality of this technique has been examined by applying it to four additional combinatorial problems. Two of them are routing problems, which are more complex than the previously used ones: the ATSP and the VRPB. Furthermore, one constraint satisfaction problem (NQP) and one combinatorial design problem (BPP) have also been used. In the presented work, GB has been tested with 62 new problem instances. The outcomes obtained by the GB have been compared with those obtained by two different GAs and two DGAs. Additionally, two statistical tests have been conducted, in order to perform a rigorous and fair comparison. As a conclusion, combining the results obtained in this study with those obtained in [51], it can be concluded that the GB is a promising metaheuristic for solving combinatorial optimization problems.

As future work, we plan to apply the GB to other types of optimization problems. In addition, we intend to compare the technique with other population-based metaheuristics, both conceptually and in terms of results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.