Mathematical Problems in Engineering
Volume 2018, Article ID 2183214, 23 pages
https://doi.org/10.1155/2018/2183214
Research Article

Adaptive Black Hole Algorithm for Solving the Set Covering Problem

1Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
2Escuela de Ingeniería Civil Informática, Universidad de Valparaíso, Valparaíso, Chile
3Universidad Técnica Federico Santa María, Valparaíso, Chile
4Escuela de Ingeniería Industrial, Universidad Diego Portales, Santiago, Chile

Correspondence should be addressed to Rodrigo Olivares; rodrigo.olivares@uv.cl

Received 5 December 2017; Revised 5 August 2018; Accepted 3 September 2018; Published 16 October 2018

Academic Editor: Georgios Dounias

Copyright © 2018 Ricardo Soto et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Evolutionary algorithms have been used to solve several optimization problems, showing efficient performance. Nevertheless, when these algorithms are applied, deciding on appropriate values for their parameters is difficult. Typically, parameters such as population size, selection rate, and operator probabilities are specified before the algorithm is run. This process is known as offline control and is even considered an optimization problem in itself. On the other hand, online parameter control is a variation of the algorithm's original version. The main idea is to vary the parameters so that the algorithm of interest can provide the best convergence rate and thus achieve the best performance. In this paper, we propose an adaptive black hole algorithm able to dynamically adapt its population according to solving performance. For that, we use autonomous search, a technique that enables the problem solver to control and adapt its own parameters and heuristics during solving in order to be more efficient without the knowledge of an expert user. In order to test this approach, we solve the set covering problem, a classical optimization benchmark with many industrial applications such as line balancing production, crew scheduling, service installation, and databases, among several others. We illustrate encouraging experimental results, where the proposed approach is able to reach various global optima for a well-known instance set from Beasley's OR-Library, while outperforming various modern metaheuristics.

1. Introduction

Nature-inspired algorithms draw on natural phenomena and behaviours. These algorithms are a subcategory of evolutionary algorithms, which are algorithmic schemes that evolve the solution in each iteration [1]. Evolutionary algorithms use the intelligence of the previous solution to generate the new one [2]. The efficiency of these techniques has been tested in innumerable works. However, it is well known that the performance of metaheuristics depends largely on correct parameter settings. In fact, finding the appropriate values for the parameters of an algorithm is considered a nontrivial task. Previous research on this task can be classified into two major forms of parameter setting (see Figure 1): parameter tuning and parameter control. Parameter tuning is the process of finding good parameter values before the run of the solver (by using, for instance, training instances) and then launching the algorithm with these values, which remain static during solving. On the contrary, parameter control launches the algorithm with initial parameter values that are updated during solving time. Parameter control is divided into three branches according to the degree of autonomy of the strategies. The control is deterministic when the parameter value is modified by some fixed, predetermined rule using no information from the search. The control is called adaptive when feedback from the search is employed to determine the variation of a given parameter value, for instance, the population size. Finally, the control is self-adaptive when the parameters are encoded into individuals in order to evolve conjointly with the other variables of the problem.
This applies to some metaheuristics; for instance, in evolutionary algorithms the better values of these encoded parameters may lead to better individuals, which in turn have more chances to propagate these good configurations to next generations.

Figure 1: Classification of parameter setting.

This paper proposes an adaptive approach for parameter control by using autonomous search, a particular case of adaptive systems that improve their solving performance by modifying and adjusting themselves to the problem at hand, either by adaptation or supervised adaptation [3]. Autonomous search is applied to the black hole algorithm, a population-based metaheuristic inspired by the black hole phenomenon. The behaviour of this algorithm is principally controlled by two elements: the number of solutions and the event horizon. The main idea is to automatically adapt the population according to the performance exhibited by the algorithm; this population variation indirectly influences the event horizon.

To evaluate the adaptive approach on the black hole algorithm, we address the set covering problem, one of Karp's well-known 21 NP-complete problems. Interesting experimental results are obtained, where the proposed approach is able to obtain various global optima for a set of 65 well-known set covering problem instances, also outperforming several recently reported techniques.

The remainder of this paper is organized as follows: The related work is introduced in the next section. A brief description of the set covering problem is presented in Section 3. Section 4 details the metaheuristic employed in this work, including its adaptive variation, the binary representation, and the repair of unfeasible solutions. Section 5 illustrates the experimental results, and finally, we conclude and suggest some lines of future research in Section 6.

2. Related Work

In the evolutionary computing field, many optimization problems have widely been treated by using evolutionary algorithms, such as [4–6] and also [7–9]. In this research line, the parameter setting approach [10] is one of the most studied challenges [11–17]. However, less work can be observed in the literature when other metaheuristics are involved. For instance, a parameter adaptation study on ant colony optimization is reported in [18], a firefly algorithm for solving the optimal capacitor placement problem is described in [19], a self-adaptive artificial bee colony for constrained numerical optimization is presented in [20], and a modified cuckoo search algorithm for solving engineering problems can be found in [21, 22]. Considering more closely related works, a variation of the genetic algorithm that adjusts its population size is found in [23]. The basic idea is to adapt the actual population size depending on the difficulty the algorithm has in its ultimate goal of generating new child chromosomes that outperform their parents. Similarly, [24] presents a version of the cuckoo search algorithm that adapts the parameter reflecting the probability that a nest will be abandoned or updated. In both cases, the adaptive approach depends on solution quality. Finally, in [25] we can observe a variation of the artificial bee colony that consists in controlling the perturbation frequency.

On the other hand, the set covering problem has been widely explored in the optimization and mathematical programming sphere [26]. Early works used exact techniques, for instance branch-and-bound and branch-and-cut algorithms [27–30]. Greedy algorithms have been proposed as a real alternative for solving the set covering problem; nevertheless, their deterministic nature hinders the generation of high-quality solutions [31]. Along this line, it is well known that exact methods are in general unable to tackle large instances of NP-complete problems; consequently, much research has been devoted to the study of efficient metaheuristics to solve hard instances of the set covering problem in a bounded amount of time. For instance, ant colony optimization [32–35], simulated annealing [36], tabu search [37], and genetic algorithms [38] have extensively been proposed to tackle the classic set covering problem. In recent years, research has been driven towards solving this problem by using recent bioinspired algorithms, such as teaching-learning-based optimization [39], firefly optimization [40], cat swarm optimization [41], shuffled frog leaping [42], artificial bee colony [43], cuckoo search [44], and the black hole algorithm [44], among others.

Finally, we can say that an adaptive approach for bioinspired algorithms to solve the set covering problem (or other optimization problems) is an interesting research line that has not yet been highly exploited.

3. Set Covering Problem

In this section we present the set covering problem, which consists in finding a set of columns at the lowest possible cost that covers a set of rows (needs). Line balancing production [45], crew scheduling [46–48], service installation [49, 50], databases [51], production flow-line optimization [52], and Boolean expressions [53] are some practical applications of the set covering problem. Formally, we define the problem as follows: let A = (a_ij) be a binary matrix with m rows × n columns, and let c = (c_1, …, c_n) be a vector representing the cost of each column j, assuming that c_j > 0. Then, we can observe that column j covers row i if a_ij = 1. The corresponding mathematical model is stated in the following:

minimize  Σ_{j=1}^{n} c_j x_j    (1)

subject to  Σ_{j=1}^{n} a_ij x_j ≥ 1,  ∀ i ∈ {1, …, m}    (2)

x_j ∈ {0, 1},  ∀ j ∈ {1, …, n}    (3)

The aim is to minimize the sum of column costs, where x_j = 1 if column j is part of the solution and x_j = 0 otherwise. The constraints of the set covering problem guarantee that every row i is covered by at least one column j.
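As an illustration, the model above can be expressed directly in Python; the instance data here is invented for the example, not taken from the paper:

```python
# Illustrative set covering instance (invented for this sketch).
# A is the m x n binary coverage matrix; c holds the column costs.
A = [
    [1, 0, 1, 0],  # row 0 is covered by columns 0 and 2
    [0, 1, 0, 1],  # row 1 is covered by columns 1 and 3
    [1, 1, 0, 0],  # row 2 is covered by columns 0 and 1
]
c = [3, 2, 1, 4]

def is_feasible(x, A):
    """Coverage constraint: every row must have at least one selected column."""
    return all(any(a and xj for a, xj in zip(row, x)) for row in A)

def cost(x, c):
    """Objective: sum of the costs of the selected columns."""
    return sum(cj for cj, xj in zip(c, x) if xj)

x = [1, 1, 0, 0]           # select columns 0 and 1
print(is_feasible(x, A))   # True: rows 0, 1, 2 are all covered
print(cost(x, c))          # 5
```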

4. Black Hole Algorithm

The black hole algorithm is a population-based algorithm inspired by the black hole phenomenon [54, 55]. A black hole is a region of space with so much mass concentrated in it that there is no way for a nearby object to escape its gravitational pull. Anything falling into a black hole, including light, cannot escape.

Similar to other population-based algorithms, the black hole algorithm begins with an initial population of potential solutions, called “stars”. At each iteration, the best solution is chosen to be the black hole, which then starts attracting other candidates around it. If a star crosses the event horizon of the black hole (Eq. (5)), it is swallowed and gone forever. In such a case, a new star (potential solution) is randomly generated, placed in the search space, and starts a new search.

The black hole has the ability to absorb the stars that surround it. After initializing the black hole and stars, the black hole begins by absorbing the stars around it and all the stars start moving towards the black hole. The absorption of stars by the black hole is formulated as follows:

x_i(t+1) = x_i(t) + rand × (x_BH − x_i(t)),  i = 1, 2, …, N    (4)

where x_i(t) and x_i(t+1) are the components of solution i at iterations t and t+1, respectively. x_BH is the component of the best solution (black hole) in the search space. rand is a random number in the interval [0, 1]. Finally, N represents the number of stars (potential solutions).
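The per-component update rule can be sketched in a few lines of Python:

```python
import random

def move_star(star, black_hole, rng=random.random):
    """Move a star towards the black hole, component by component:
    x_i(t+1) = x_i(t) + rand * (x_BH - x_i(t))."""
    return [xi + rng() * (bh - xi) for xi, bh in zip(star, black_hole)]

# With rand fixed at 0.5 the star moves exactly halfway towards the black hole:
print(move_star([0.0, 0.0], [2.0, 4.0], rng=lambda: 0.5))  # [1.0, 2.0]
```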

While moving towards the black hole, a star may reach a location with lower cost than the black hole. In such case, the black hole moves to the location of that star and vice versa. Then the algorithm will continue with the black hole in the new location and then stars start moving towards this new location.

In addition, there is the probability of crossing the event horizon while stars move towards the black hole. Every star (potential solution) that crosses the event horizon of the black hole will be swallowed by it. Every time a candidate is sucked in, another potential solution (star) is born and distributed randomly in the search space, starting a new search. This keeps the number of potential solutions constant. The next iteration takes place after all the stars have moved.

The radius of the event horizon in the black hole algorithm is calculated using the following equation:

R = f_BH / Σ_{i=1}^{N} f_i    (5)

where f_BH is the fitness value of the black hole, f_i is the fitness value of the i-th star, and N is the number of stars (potential solutions).

When the distance between a star and the black hole (best candidate) is less than the event-horizon radius, that candidate collapses, and a new star is created and distributed randomly in the search space. Based on the above description, the main steps of the black hole algorithm are described in detail in Algorithm 1.
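Putting the movement and event-horizon steps together, one iteration of the continuous algorithm can be sketched as follows; the bounds handling and the Euclidean distance are assumptions of this sketch, not details taken from the paper:

```python
import math
import random

def black_hole_step(stars, fitness_fn, bounds, rng=random):
    """One iteration of the black hole algorithm (minimisation sketch):
    pick the best star as the black hole, move the others towards it,
    and replace any star that falls inside the event horizon."""
    fits = [fitness_fn(s) for s in stars]
    bh_idx = min(range(len(stars)), key=fits.__getitem__)
    bh, f_bh = stars[bh_idx], fits[bh_idx]
    lo, hi = bounds
    radius = f_bh / sum(fits)          # event-horizon radius
    for i in range(len(stars)):
        if i == bh_idx:
            continue
        # Movement towards the black hole.
        stars[i] = [xi + rng.random() * (b - xi)
                    for xi, b in zip(stars[i], bh)]
        # A star crossing the event horizon is reborn at a random location.
        if math.dist(stars[i], bh) < radius:
            stars[i] = [rng.uniform(lo, hi) for _ in stars[i]]
    return bh, f_bh

# Since the black hole itself is never moved, the best fitness never worsens:
rng = random.Random(1)
stars = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
sphere = lambda v: sum(x * x for x in v)
f0 = min(sphere(s) for s in stars)
for _ in range(100):
    bh, f_bh = black_hole_step(stars, sphere, (-5.0, 5.0), rng)
print(f_bh <= f0)  # True
```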

Algorithm 1: Black hole algorithm.

The black hole algorithm begins with the loading and processing phases. Then, at Lines 3-8, an initial population of stars is randomly generated, where each vector of binary values corresponds to one star or potential solution (Line 5). For each star, the fitness value is calculated by evaluating the objective function (Line 7).

Then, until a termination criterion is met (a maximum number of iterations or a sufficiently good solution), the fitness of each potential solution is evaluated (Lines 11-37). As previously mentioned, the set covering problem is a minimization problem. This evaluation is handled by the comparison presented at Line 13. If the new minimum value is less than the global minimum, the global minimum is replaced by the new minimum value and the best solution is stored as the black hole.

If the fitness of a star crosses the event horizon of the black hole, replace it with a new star in a random location in the search space. This process is described in the for loop statement at Lines 19-28.

Finally, the last for loop statement (Lines 29-36) generates new solutions according to Equation (4), for each dimension of the solution. The resulting value belongs to the real domain and must be brought to the binary domain; thus a transfer function is used to transform the real value into a binary one.

4.1. Adaptive Black Hole Algorithm

The basic version of the black hole algorithm performs no parameter control. If we observe the population size parameter, we can see that it is set before the run of the metaheuristic. At each iteration of the algorithm, the solutions are altered by updating their position, or when they are absorbed by the black hole, in which case a new random solution (star) is created. However, the total number of solutions is preserved. The second part of the metaheuristic uses an absorption process, called the event horizon, which is a stochastic procedure influenced by the best solution, discarding bad solutions that seem to lead to poor results. This is not necessarily a correct idea, since a bad solution can often drive the search towards an optimal one.

These concerns are very important for the performance of the algorithm, which is why we have decided to use the event horizon logic to calculate the value of the population size parameter in an autonomous way. For that, we use autonomous search, a particular case of adaptive systems that improve their solving performance by modifying and adjusting themselves to the problem at hand, either by adaptation or supervised adaptation (please see [3, 56] for details). This approach has been successfully applied in constraint programming, using bioinspired algorithms to control the resolution process of solver tools [57, 58]. The objective of autonomous search is to allow the metaheuristic to self-adapt the value of the parameter during the run, according to the algorithm's convergence.

The literature presents several works illustrating how to improve evolutionary algorithms by dynamically controlling parameters during solving time [13–17, 22]. We have decided to modify the original black hole algorithm, following the autonomous search principles, by building a procedure that adaptively varies the number of solutions (the population size parameter) according to the performance exhibited during search.

We are inspired by this approach to improve the local search process of the black hole algorithm. The main objective is to give the algorithm the ability to adjust its population size. When the algorithm stagnates, some solutions are improved according to the best solution. Conversely, if solutions are far away from the best solution, the worst stars are eliminated and new ones are randomly generated.

We vary only the population parameter, since it is the only one available to be modified during solving time; this makes the implementation easier while improving the solving process. We believe this approach also provides know-how for future experiments involving adaptive populations. The procedure is described in Algorithm 2.

Algorithm 2: Adaptive approach for the population size parameter ().

Inputs of the procedure are the value of the population size, the current iteration, and the stagnation number, which represents the number of iterations during which the best solution does not improve, commonly called “local search”.

At this point, we calculate the distance between the best and worst solutions according to the cost of each one, and then divide it by the sum of all costs (Line 2). This value describes the percentage of maximum separation between solutions. A low percentage indicates that solutions are homogeneous and skewed towards the best solution, in which case the procedure creates new solutions (Lines 5-18). To create these solutions, we reuse the percentage to determine whether they are cloned from the best solution (Lines 8-10) or randomly generated (Lines 12-16). On the other hand, a high percentage means that solutions are heterogeneous; thus the procedure removes the worst solutions (Lines 20-22).

In both cases, the calculated percentage can also be seen as a mechanism to support the exploration and exploitation phases. If solutions are homogeneous, the procedure allows exploring towards new solutions, while if solutions are heterogeneous, the procedure converges towards a set of similar solutions.
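The resizing step can be sketched as follows; the homogeneity threshold and the number of stars added or removed (`threshold`, `delta`) are assumptions of this sketch, since the paper derives them from Algorithm 2 rather than fixed constants:

```python
import random

def adapt_population(pop, costs, threshold=0.1, delta=2, rng=random):
    """Adaptive population resizing sketch (minimisation).
    pop   : list of solutions (lists of bits)
    costs : parallel list of objective values
    threshold, delta : assumed values, not taken from the paper."""
    order = sorted(range(len(pop)), key=costs.__getitem__)
    best, worst = order[0], order[-1]
    # Percentage of maximum separation between solutions.
    pct = (costs[worst] - costs[best]) / sum(costs)
    if pct < threshold:
        # Homogeneous population: inject new stars, cloned from the best
        # solution or generated at random, driven by the same percentage.
        for _ in range(delta):
            if rng.random() < pct:
                pop.append(list(pop[best]))
            else:
                pop.append([rng.randint(0, 1) for _ in pop[best]])
    else:
        # Heterogeneous population: drop the delta worst stars.
        for i in sorted(order[-delta:], reverse=True):
            del pop[i]
    return pop
```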

For the computational experiments we use 40 potential solutions and 4000 generations. Although the proposed approach is adaptive, it is necessary to define an initial value for the population size.

4.2. Binary Approaches

It is known that the variables of the set covering problem are limited to binary values, namely, x_j ∈ {0, 1}; for this reason, we use a binary representation for each solution given by the black hole algorithm, as shown in Figure 2.

Figure 2: Binary solution representation.

In this paper, we use the standard black hole algorithm [59], where each star represents a candidate solution.

However, the standard version of the black hole algorithm is designed to solve problems with real domains. This is resolved by applying binarization strategies, which force elements to move in a binary domain. A binarization strategy is composed of a transfer function and a discretization method. In this work, we tested 32 different binarization strategies.

We evaluate different transfer functions, separated into two families [60, 61]: S-Shape and V-Shape (see Table 1).

Table 1: S-Shape and V-Shape transfer functions.

Once a transfer function is applied, the input real number is mapped to a real number belonging to the interval [0, 1]. Then, a discretization method is required to produce a binary value from the real one. For achieving this, we test four different methods:
(1) Standard: if the condition rand < T(x) is satisfied, the standard method returns 1; otherwise, it returns 0.
(2) Complement: if the condition is satisfied, the complement method returns the complement of the current value.
(3) Static probability: a probability is generated and it is evaluated against the transfer function.
(4) Elitist: the Elitist Roulette discretization method, also known as Monte Carlo, randomly selects among the best individuals of the population, with a probability proportional to fitness.
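As a sketch, here is one representative function from each family together with the standard and complement discretization methods; the concrete functions in Table 1 may differ, these are common choices from the transfer-function literature:

```python
import math
import random

def s_shape(x):
    """S-shaped transfer function (sigmoid): maps R to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shape(x):
    """V-shaped transfer function: |tanh(x)|, also mapping R into [0, 1)."""
    return abs(math.tanh(x))

def standard(t, rng=random.random):
    """Standard discretization: 1 when rand < T(x), 0 otherwise."""
    return 1 if rng() < t else 0

def complement(t, current_bit, rng=random.random):
    """Complement discretization: flip the current bit when rand < T(x)."""
    return 1 - current_bit if rng() < t else current_bit

# Binarize a real-valued star component:
print(standard(s_shape(0.8), rng=lambda: 0.5))  # 1, since s_shape(0.8) ≈ 0.69
```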

4.3. Heuristic Feasibility Operator

Generally, metaheuristics may provide solutions that violate the constraints of the problem. For instance, a new set covering problem solution with uncovered rows clearly violates a subset of constraints. In order to provide feasible solutions, the algorithm needs additional operators. To this end, we employ a heuristic operator that guarantees the generation of feasible solutions and additionally eliminates column redundancy.

For making all solutions feasible, we calculate for each column j the ratio between its cost c_j and the number of constraint-matrix rows covered by column j. The unfeasible solutions are repaired by adding to the solution the columns with the lowest ratio. After this, a local optimization step is applied, where column redundancy is eliminated. A column is redundant when it can be deleted without affecting the feasibility of the solution.

Algorithm 3 starts with the initialization of variables from the instance in Lines 1-5. The recognition of the rows that are not covered is in Lines 6 and 7. Between statements 8 and 18 is the “greedy” heuristic. On the one hand, between instructions 8 and 12, the columns with the lowest ratio are added to the solution. On the other hand, between Lines 13 and 18, the redundant columns with higher costs are deleted while the solution remains feasible.
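The repair operator described above can be sketched as a ratio-driven greedy cover followed by redundancy elimination; the tie-breaking rule is an assumption of this sketch:

```python
def repair(x, A, c):
    """Heuristic feasibility operator sketch: greedily add the column with
    the lowest cost / (newly covered rows) ratio until every row is covered,
    then drop redundant columns, most expensive first."""
    m, n = len(A), len(c)
    covered = [any(A[i][j] and x[j] for j in range(n)) for i in range(m)]
    while not all(covered):
        def ratio(j):
            gain = sum(1 for i in range(m) if not covered[i] and A[i][j])
            return c[j] / gain if gain else float("inf")
        best = min(range(n), key=ratio)  # ties broken by lowest index
        x[best] = 1
        for i in range(m):
            if A[i][best]:
                covered[i] = True
    # Redundancy elimination: a column is redundant if removing it
    # keeps the solution feasible.
    for j in sorted((j for j in range(n) if x[j]), key=lambda j: -c[j]):
        x[j] = 0
        if not all(any(A[i][k] and x[k] for k in range(n)) for i in range(m)):
            x[j] = 1  # removing j broke feasibility; keep it
    return x

A = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
c = [3, 2, 1, 4]
print(repair([0, 0, 0, 0], A, c))  # [0, 1, 1, 0] -- feasible, cost 3
```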

Algorithm 3: Heuristic feasibility operator.

5. Experimental Results

After applying the adaptive approach, we have analyzed the time complexity of the basic algorithm to illustrate that our proposal does not affect its performance. If we observe each control flow, statement, and expression of the basic black hole algorithm, we can determine that the time complexity is given by O(I · N · D), where I is a constant representing the maximum number of iterations (and is therefore dropped), N is the number of solutions, and D is the dimension of each solution. In the worst case, the basic algorithm is upper bounded by O(N · D).

Now, evaluating the time complexity of our adaptive approach, we determine that the upper bound is given by O(N′ · D), where N′ is the new number of solutions and D is the dimension of each solution. This procedure is activated only when the number of stagnant iterations reaches the local search criterion ls; therefore it is independent of the main algorithm and, as a consequence, its performance and time complexity are not altered.

With respect to the space analysis of our approach, we consider the space used in main memory. In this context, every time the adaptive algorithm is launched, we observe that the usage of main memory does not exceed 8%. The basic algorithm uses a similar percentage of main memory.

The performance of the adaptive black hole algorithm is experimentally evaluated by using 65 instances of the set covering problem organized in 11 sets taken from Beasley's OR-Library [62]. Table 2 describes, for each instance set, the number of rows or constraints m, the number of columns or variables (dimension) n, the range of costs, and the density (percentage of nonzeroes in the matrix).

Table 2: Instances from Beasley’s OR-Library.

In order to reduce the instance size of the set covering problem, we have used a preprocessed instance set. Different preprocessing methods have been proposed in [63]. We employ two of them, which have been proved to be the most effective ones:
(i) Column Domination. The nonunicost set covering problem holds different column costs; when every row covered by a column j is also covered by another column k of lower or equal cost, we say that column j is dominated by column k, and column j is removed.
(ii) Column Inclusion. If a row is covered by only one column after the above domination, it means that there is no better column to cover that row; consequently this column must be included in the optimal solution.
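A sketch of the two reductions; the tie-breaking rule for columns of equal cost is an assumption of this sketch:

```python
def preprocess(A, c):
    """Instance reduction sketch: column domination and column inclusion.
    Returns (kept, forced): the surviving columns and the columns that
    must belong to any optimal solution."""
    m, n = len(A), len(c)
    cover = [frozenset(i for i in range(m) if A[i][j]) for j in range(n)]
    # Column domination: drop j when a column k covers every row that j
    # covers at a lower cost (ties broken by index, an assumption here).
    kept = [j for j in range(n)
            if not any(k != j and cover[j] <= cover[k]
                       and (c[k], k) < (c[j], j)
                       for k in range(n))]
    # Column inclusion: a row covered by a single remaining column
    # forces that column into the solution.
    forced = set()
    for i in range(m):
        cols = [j for j in kept if A[i][j]]
        if len(cols) == 1:
            forced.add(cols[0])
    return kept, forced

A = [[1, 1], [0, 1]]
c = [5, 2]
print(preprocess(A, c))  # ([1], {1}) -- column 0 is dominated by column 1
```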

The results are evaluated using the relative percentage deviation (RPD). The RPD value quantifies the deviation of the objective value Z from Z_opt, which in our case is the best known value for each instance, and it is calculated as follows:

RPD = 100 × (Z − Z_opt) / Z_opt
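As a minimal sketch, the measure can be computed directly:

```python
def rpd(z, z_opt):
    """Relative percentage deviation of objective Z from the best known Z_opt."""
    return 100.0 * (z - z_opt) / z_opt

print(rpd(110, 100))  # 10.0
print(rpd(100, 100))  # 0.0, i.e. the best known value was reached
```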

The minimum (Min), maximum (Max), and average (Avg) of the solutions obtained were computed over 30 runs for each one of the 65 test instances. We test all the combinations of transfer functions and discretization methods over all these instances. Before testing all instances, we performed a sampling phase to evaluate which binarization strategy would show the best performance. In the appendix we include preliminary results of the sampling phase (see Tables 9 and 10 and Figures 7 and 8). The binarization strategy that achieved the best results in this sampling phase was used for the black hole algorithm in all subsequent experiments.

5.1. Black Hole Algorithms Comparison

In this section, we compare the proposed adaptive black hole algorithm with the basic one. To the best of our knowledge, no autonomous search implementation on top of the black hole algorithm has yet been proposed that could be included in the comparison.

The algorithm has been implemented in Java SE 7 and the experiments have been launched on a 2.3 GHz Intel Core i3 with 4 GB RAM running Windows 7 (available at https://goo.gl/AEYXJq). The initial parameter setting is as follows: a population size of 40, 4000 iterations, a local search criterion of 40, and 30 executions for each instance. Bold font and an underlined row indicate when the adaptive black hole algorithm outperforms its basic version.

Table 3 illustrates the results obtained for all instances from groups 4, 5, 6, A, B, C, and D. Regarding instance sets 4 and 6, the adaptive approach proposed in this work (BH-AS) dramatically improves the results by 50%, reaching 4 global optima. Considering instance sets 5 and 6, we can observe that the basic black hole algorithm exhibits great performance, achieving 80% of the optimal values. However, the adaptive control of the population size parameter improves the results, obtaining 2 more optimal values. Now, if we analyze instances from groups A, B, C, and D, we can see that again the basic black hole algorithm shows outstanding efficiency on the set covering problem, optimally solving 15 of 20 instances. Nevertheless, the adaptive approach shows that it can improve even further, finding one more optimal value for instance C.1.

Table 3: Computational results of groups 4, 5, 6, A, B, C, and D.

Table 4 presents the results obtained for instances from groups NRE, NRF, NRG, and NRH. If we consider that the basic black hole algorithm reaches 3 optimal values out of 20, there is a wide margin for improvement. Taking into account that the autoadaptive method outperforms the basic BH algorithm in 11 of 17 possible instances, 2 of them being global optima, we can say that BH-AS performs significantly better than the basic version.

Table 4: Computational results of groups NRE, NRF, NRG, and NRH.

Figure 3 illustrates convergence charts for the hardest instances of each instance group. Here, we observe that for group 4 the convergence of BH-AS is clearly faster than that of the basic algorithm. For group 5, BH-AS begins with a bad quality solution but in the middle of the process improves its performance, outperforming the basic BH. A similar behaviour can be seen for the NRE and NRH groups. The performance on instances from groups 6, A, B, C, and D is similar, all of them keeping an early convergence. Finally, for benchmarks from groups NRF and NRG, the convergence of BH-AS is clearly earlier than that of its competitor.

Figure 3: Convergence charts for the hardest instances of each instance group.

Now, if we regard the solving times required for reaching the solutions, we may observe that times are very similar for the two algorithms. However, we must consider that BH-AS needs the computation of the adaptive process and is still able to outperform the basic BH in terms of optimum values reached. We can also observe a small difference in solving times in favor of BH-AS with respect to BH.

5.2. BH-AS v/s Original Approach

After analyzing the efficiency exhibited by BH-AS and the basic algorithm for solving the set covering problem, we have observed that the variation is minimal, and it is not possible to state which technique is the best from the raw results alone. Therefore, in order to show a significant difference between the adaptive approach and the original black hole algorithm, we perform a contrast statistical test for each instance: the Kolmogorov-Smirnov-Lilliefors test to analyze the distribution of the samples [64] and Wilcoxon's signed-rank test [65] to compare the results statistically.

For both tests, we consider a hypothesis evaluation, which is analyzed assuming a fixed significance level; i.e., p-values smaller than this level determine that the corresponding hypothesis cannot be assumed. Both tests were conducted using GNU Octave (available at http://goo.gl/jtHn8i).

The first test allows us to analyze whether the samples follow a normal distribution. For that, we run the algorithm 30 times for each instance and then apply the test, whose p-values let us decide on the distribution of the samples. To proceed, we propose the following hypotheses:
(i) H0 states that the samples follow a normal distribution.
(ii) H1 states the opposite.

The conducted test yielded p-values lower than the significance level; therefore H0 cannot be assumed.

Then, as the samples are independent and cannot be assumed to follow a normal distribution, it is not feasible to use the central limit theorem to approximate the distribution of the sample mean as Gaussian. Therefore, we use a nonparametric test for evaluating the heterogeneity of the samples: the Wilcoxon signed-rank test, a paired test that compares the medians of two distributions. To proceed, we propose the following new hypotheses:
(i) H0: the RPD values achieved by the basic black hole algorithm are lower than or equal to those achieved by BH-AS.
(ii) H1 states the opposite.
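The paper runs the test in GNU Octave; purely as an illustration, the signed-rank statistic itself (without the p-value computation) can be sketched in Python:

```python
def wilcoxon_statistic(a, b):
    """Wilcoxon signed-rank statistic for paired samples (sketch):
    rank the nonzero |differences| (average ranks over ties) and return
    W = min(W+, W-), the smaller of the signed rank sums."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # Find the group of ties sharing the same |difference|.
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

print(wilcoxon_statistic([1, 3, 5], [0, 1, 2]))  # 0.0 -- every pair favours b
```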

Table 5 compares the approaches for all tested instances via the Wilcoxon signed-rank test. As the same significance level is used, a smaller p-value defines that H0 cannot be assumed. Again, bold font and an underlined row are used for a winning value of the metaheuristic stated in the column of the table; e.g., for instance 4.1, BH-AS is better than the basic algorithm as its p-value is lower than the significance level, so H0 cannot be assumed.

Table 5: Statistical test.

According to the results, the number of instances with p-values below the significance level is 12 in favour of the basic black hole algorithm and 40 in favour of BH-AS. The rest do not provide significant information. These results illustrate that the performance of BH-AS is better than that of the original approach.

5.3. BH-AS v/s Other Optimization Techniques

To evidence the performance of our adaptive approach, we perform a comparison with different approximation techniques: binary cat swarm optimization (BCSO) [41], binary firefly optimization (BFO) [40], binary shuffled frog leaping algorithm (BSFLA) [42], binary artificial bee colony (BABC) [43], and binary electromagnetism-like algorithm (BELA) [66].

We additionally incorporate a comparison by using Mixed Integer Linear Programming (MIP) as an exact solving method, implemented on the MiniZinc G12 MIP solver. With this solver, the instances are solved within a maximum time of 8 hours. If no solution is found by this point, the result is marked t.o. (time-out).

Tables 6, 7, and 8 illustrate that the proposed approach is able to reach competitive results in contrast to those modern optimization techniques.

Table 6: Comparison results for instance set of groups 4, 5, 6, and A.
Table 7: Computational results for instances set of groups B, C, D, NRE, NRF, and NRG.
Table 8: Computational results for instances of group NRH.
Table 9: Sampling phase: hardest instances for groups 4, 5, 6, A, and B.
Table 10: Sampling phase: hardest instances for groups C, D, NRE, NRF, BRG, and NRH.

For group 4, the adaptive approach shows outstanding behaviour, reaching 80% of the total optimum values, while BFO is the only other technique finding optimum values. However, if we compare the adaptive black hole algorithm with MIP, the results illustrate that the exact method is more efficient than the adapted algorithm in terms of solving time.

Considering groups 5 and 6, we can observe that the adaptive black hole algorithm is able to find all optimum values in reduced solving times. Comparing these results with the other metaheuristics, we can state that our approach outperforms them, and it also outperforms MIP in terms of solving time.

Finally, for the biggest instances, from group A to group NRH, the exact MIP method is unable to find a solution within the time limit (time-out). Figures 4, 5, and 6 illustrate the comparison between the adaptive approach and the approximate algorithms when solving the hardest instances of each group.

Figure 4: Algorithm comparisons for instances 4.10, 5.10, and 6.5.
Figure 5: Algorithm comparisons for instances A.5, B.5, C.5, and D.5.
Figure 6: Algorithm comparisons for instances NRE.5, NRF.5, NRG.5, and NRH.5.
Figure 7: Sampling phase. Hardest instances for groups 4, 5, 6, A, B, and C.
Figure 8: Sampling phase. Hardest instances for groups D, NRE, NRF, NRG, and NRH.

Observing instances 4.10 and 6.5, we can see that BH-AS is better than BCSO and BELA. For instance D.5, BH-AS is better than all studied approximate techniques, and for NRG.5 the results indicate that BH-AS is the best alternative. Finally, comparing BH-AS with MIP, our approach outperforms the exact method, which is unable to solve the hardest groups.

6. Conclusions and Future Work

In this paper, we have presented an adaptive approach for the black hole algorithm to solve different instances of the set covering problem. This approach is based on the online adaptation of the population size, a parameter that is normally fixed before the run of the metaheuristic. To this end, we use autonomous search, which is a particular case of adaptive systems that improve their solving performance by modifying and adjusting themselves to the problem at hand, either by self-adaptation or by supervised adaptation. We have added to the core algorithm an effective preprocessing phase that filters and discards values leading to infeasible solutions. In addition, we included a set of binarization strategies to adapt the black hole algorithm to the binary domain. We have tested 65 nonunicost instances from Beasley's OR-Library, where several global optimum values that were not reached by the basic black hole algorithm were achieved via the autoadaptive approach. We have also compared the proposed adaptive approach using nonparametric statistical tests, and the results are conclusive.
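The general flavour of the online population-size control summarized above can be sketched as follows. The concrete grow/shrink policy, the bounds, and the function name are assumptions made for illustration only, not the paper's exact rule.

```python
import random

def adapt_population(population, fitness, improved, min_size=10, max_size=60):
    """Illustrative online population-size control (policy is an assumption):
    shrink the population while the search keeps improving (intensify around
    the good stars), and inject a random individual when it stagnates
    (diversify). Assumes binary solution vectors and minimization."""
    ranked = sorted(population, key=fitness)      # best solutions first
    if improved and len(ranked) > min_size:
        return ranked[:-1]                        # discard the worst star
    if not improved and len(ranked) < max_size:
        dim = len(ranked[0])
        ranked.append([random.randint(0, 1) for _ in range(dim)])
    return ranked
```

Such a hook would be called once per iteration with a boolean performance indicator (e.g., whether the best fitness improved over a sliding window), which is the kind of solving-time feedback autonomous search relies on.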

As future work, we plan to experiment with autoadaptive approaches in recent bioinspired algorithms and to provide a larger comparison of techniques for solving the set covering problem. The integration of autonomous search can lead the research towards new lines of study, such as dynamically selecting the best binarization strategy during solving according to performance indicators, as analogously studied in [67, 68].

Appendix

The sampling phase allows us to know, in a preliminary way, which binarization strategy exhibits the best results. Tables 9 and 10 show the results obtained when we tested all binarization strategies. Moreover, we include a set of charts (Figures 7 and 8) that summarize the computational experiments. We can observe that the combination that includes the Elitist discretization has the greatest number of convergences towards the lowest known value. For this reason, we have used this binarization strategy for all experiments.
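As a concrete illustration of such a two-step binarization strategy, the sketch below pairs an S-shaped (sigmoid) transfer function with the elitist discretization rule, under which a bit is copied from the best-known solution with the probability given by the transfer function. The sigmoid is only one possible choice of transfer function; the exact pairings compared in the sampling phase are those reported in Tables 9 and 10.

```python
import math
import random

def binarize(move, best_bit):
    """Two-step binarization (illustrative pairing):
    1) an S-shaped transfer function maps the real-valued move to [0, 1];
    2) the Elitist rule copies the corresponding bit of the best-known
       solution with that probability, and sets the bit to 0 otherwise."""
    prob = 1.0 / (1.0 + math.exp(-move))   # sigmoid transfer function
    return best_bit if random.random() < prob else 0

# A strongly positive move almost surely inherits the best solution's bit;
# a strongly negative move almost surely resets the bit to 0.
print(binarize(100.0, 1), binarize(-100.0, 1))
```

In the full algorithm this function would be applied component-wise to each star's updated position before evaluating the (binary) set covering objective.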

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

Ricardo Soto is supported by Grant CONICYT/FONDECYT/REGULAR/1160455. Broderick Crawford is supported by Grant CONICYT/FONDECYT/REGULAR/1171243. Carla Taramasco and Rodrigo Olivares are supported by CONICYT/FONDEF/IDeA/ID16I10449, FONDECYT/STIC-AMSUD/17STIC-03, FONDECYT/MEC/MEC80170097, and CORFO-CENS 16CTTS-66390 through the National Center on Health Information Systems. Moreover, Rodrigo Olivares is supported by Postgraduate Grant Pontificia Universidad Católica de Valparaíso (INF-PUCV 2015-2017).

References

  1. X. Yang, “Nature-inspired mateheuristic algorithms: success and new challenges,” Journal of Computer Engineering and Information Technology, vol. 1, no. 1, 2012.
  2. A. P. Piotrowski, M. J. Napiorkowski, J. J. Napiorkowski, and P. M. Rowinski, “Swarm Intelligence and Evolutionary Algorithms: Performance versus speed,” Information Sciences, vol. 384, pp. 34–85, 2017.
  3. Y. Hamadi, E. Monfroy, and F. Saubion, “What is autonomous search?” in Hybrid Optimization, vol. 45 of Springer Optimization and Its Applications, pp. 357–391, Springer, New York, NY, USA, 2011.
  4. D. Gong, X. Ji, J. Sun, and X. Sun, “Interactive evolutionary algorithms with decision-maker’s preferences for solving interval multi-objective optimization problems,” in Communications in Computer and Information Science, pp. 23–29, Springer, Berlin, Germany, 2012.
  5. Y. Liu, D. Gong, J. Sun, and Y. Jin, “A Many-Objective Evolutionary Algorithm Using A One-by-One Selection Strategy,” IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2689–2702, 2017.
  6. J. Sun, D. Gong, X. Zeng, and N. Geng, “An ensemble framework for assessing solutions of interval programming problems,” Information Sciences, vol. 436/437, pp. 146–161, 2018.
  7. D. Gong, J. Sun, and Z. Miao, “A set-based genetic algorithm for interval many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 47–60, 2018.
  8. J. Sun, D. Gong, and X. Sun, “Solving interval multi-objective optimization problems using evolutionary algorithms with preference polyhedron,” in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 729–736, ACM Press, Dublin, Ireland, July 2011.
  9. D.-W. Gong, N.-N. Qin, and X.-Y. Sun, “Evolutionary algorithms for optimization problems with uncertainties and hybrid indices,” Information Sciences, vol. 181, no. 19, pp. 4124–4138, 2011.
  10. T. Roeper and E. Williams, “The theory of parameters and syntactic development,” in Parameter Setting, pp. 191–215, Springer, Netherlands, 1987.
  11. Á. E. Eiben, R. Hinterding, and Z. Michalewicz, “Parameter control in evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 124–141, 1999.
  12. G. Karafotias, M. Hoogendoorn, and A. E. Eiben, “Parameter Control in Evolutionary Algorithms: Trends and Challenges,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 167–187, 2015.
  13. C. Salto and E. Alba, “Designing heterogeneous distributed GAs by efficiently self-adapting the migration period,” Applied Intelligence, vol. 36, no. 4, pp. 800–808, 2012.
  14. A. K. Qin and P. N. Suganthan, “Self-adaptive differential evolution algorithm for numerical optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1785–1791, Edinburgh, Scotland, UK, September 2005.
  15. W. Yi, L. Gao, X. Li, and Y. Zhou, “A new differential evolution algorithm with a hybrid mutation operator and self-adapting control parameters for global optimization problems,” Applied Intelligence, vol. 42, no. 4, pp. 642–660, 2015.
  16. M. Han, S. Liao, J. Chang, and C. Lin, “Dynamic group-based differential evolution using a self-adaptive strategy for global optimization problems,” Applied Intelligence, vol. 39, no. 1, pp. 41–56, 2013.
  17. K. Liang, X. Yao, and C. S. Newton, “Adapting self-adaptive parameters in evolutionary algorithms,” Applied Intelligence, vol. 15, no. 3, pp. 171–180, 2001.
  18. T. Stutzle, M. Lopez-Ibanez, P. Pellegrini et al., “Parameter adaptation in ant colony optimization,” in Autonomous Search, pp. 191–215, Springer, Berlin, Germany, 2012.
  19. J. Olamaei, M. Moradi, and T. Kaboodi, “A new adaptive modified firefly algorithm to solve optimal capacitor placement problem,” in Proceedings of the 18th Electric Power Distribution Network Conference (EPDC 2013), Kermanshah, Iran, May 2013.
  20. X. Li and M. Yin, “Self-adaptive constrained artificial bee colony for constrained numerical optimization,” Neural Computing and Applications, vol. 24, no. 3-4, pp. 723–734, 2014.
  21. S. Mahmoudi and S. Lotfi, “Modified cuckoo optimization algorithm (MCOA) to solve graph coloring problem,” Applied Soft Computing, vol. 33, pp. 48–64, 2015.
  22. T. T. Nguyen and D. N. Vo, “Modified cuckoo search algorithm for short-term hydrothermal scheduling,” International Journal of Electrical Power & Energy Systems, vol. 65, pp. 271–281, 2015.
  23. M. Affenzeller, S. Wagner, and S. Winkler, “Self-adaptive population size adjustment for genetic algorithms,” in Computer Aided Systems Theory – EUROCAST 2007, pp. 820–828, Springer, Berlin, Germany, 2007.
  24. X. Li and M. Yin, “Modified cuckoo search algorithm with self adaptive parameter method,” Information Sciences, vol. 298, pp. 80–97, 2015.
  25. B. Akay and D. Karaboga, “A modified Artificial Bee Colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, pp. 120–142, 2012.
  26. A. Caprara, M. Fischetti, and P. Toth, “Algorithms for the set covering problem,” Annals of Operations Research, vol. 98, pp. 353–371, 2000.
  27. E. Balas, “A dynamic subgradient-based branch-and-bound procedure for set covering,” Location Science, vol. 5, no. 3, article 203, 1997.
  28. J. E. Beasley, “An algorithm for set covering problem,” European Journal of Operational Research, vol. 31, no. 1, pp. 85–93, 1987.
  29. B. Yelbay, S. I. Birbil, and K. Bülbül, “The set covering problem revisited: an empirical study of the value of dual information,” Journal of Industrial and Management Optimization, vol. 11, no. 2, pp. 575–594, 2015.
  30. R. A. Rushmeier and G. L. Nemhauser, “Experiments with parallel branch-and-bound algorithms for the set covering problem,” Operations Research Letters, vol. 13, no. 5, pp. 277–285, 1993.
  31. V. Chvatal, “A greedy heuristic for the set-covering problem,” Mathematics of Operations Research, vol. 4, no. 3, pp. 233–235, 1979.
  32. B. Crawford, R. Soto, E. Monfroy, F. Paredes, and W. Palma, “A hybrid Ant algorithm for the set covering problem,” International Journal of Physical Sciences, vol. 6, no. 19, pp. 4667–4673, 2011.
  33. Z.-G. Ren, Z.-R. Feng, L.-J. Ke, and Z.-J. Zhang, “New ideas for applying ant colony optimization to the set covering problem,” Computers & Industrial Engineering, vol. 58, no. 4, pp. 774–784, 2010.
  34. R. Hadji, M. Rahoual, E. Talbi, and V. Bachelet, “Ant colonies for the set covering problem,” in Proceedings of ANTS 2000, pp. 63–66, 2000.
  35. L. Lessing, I. Dumitrescu, and T. Stutzle, “A comparison between ACO algorithms for the set covering problem,” in Ant Colony Optimization and Swarm Intelligence, vol. 3172 of Lecture Notes in Computer Science, pp. 1–12, Springer, Berlin, Germany, 2004.
  36. M. J. Brusco, L. W. Jacobs, and G. M. Thompson, “A morphing procedure to supplement a simulated annealing heuristic for cost- and coverage-correlated set-covering problems,” Annals of Operations Research, vol. 86, pp. 611–627, 1999.
  37. M. Caserta, “Tabu search-based metaheuristic algorithm for large-scale set covering problems,” in Metaheuristics: Progress in Complex Systems Optimization, pp. 43–63, Springer, Boston, Mass, USA, 2007.
  38. J. E. Beasley and P. C. Chu, “A genetic algorithm for the set covering problem,” European Journal of Operational Research, vol. 94, no. 2, pp. 392–404, 1996.
  39. Y. Lu and F. J. Vasko, “An OR Practitioner's Solution Approach for the Set Covering Problem,” International Journal of Applied Metaheuristic Computing, vol. 6, no. 4, pp. 1–13, 2015.
  40. B. Crawford, R. Soto, M. Olivares-Suárez, and F. Paredes, “A binary firefly algorithm for the set covering problem,” Advances in Intelligent Systems and Computing, vol. 285, pp. 65–73, 2014.
  41. B. Crawford, R. Soto, N. Berríos et al., “A binary cat swarm optimization algorithm for the non-unicost set covering problem,” Mathematical Problems in Engineering, vol. 2015, Article ID 578541, 8 pages, 2015.
  42. B. Crawford, R. Soto, C. Peña, W. Palma, F. Johnson, and F. Paredes, “Solving the Set Covering Problem with a Shuffled Frog Leaping Algorithm,” in Intelligent Information and Database Systems, vol. 9012 of Lecture Notes in Computer Science, pp. 41–50, Springer International Publishing, Cham, Switzerland, 2015.
  43. R. Cuesta, B. Crawford, R. Soto, and F. Paredes, “An Artificial Bee Colony Algorithm for the Set Covering Problem,” in Modern Trends and Techniques in Computer Science, vol. 285 of Advances in Intelligent Systems and Computing, pp. 53–63, Springer International Publishing, Cham, Switzerland, 2014.
  44. R. Soto, B. Crawford, R. Olivares et al., “Solving the non-unicost set covering problem by using cuckoo search and black hole optimization,” Natural Computing, vol. 16, no. 2, pp. 213–229, 2017.
  45. M. Salveson, “The assembly line balancing problem,” Journal of Industrial Engineering, vol. 6, pp. 18–25, 1955.
  46. E. K. Baker, L. D. Bodin, W. F. Finnegan, and R. J. Ponder, “Efficient Heuristic Solutions to an Airline Crew Scheduling Problem,” AIIE Transactions, vol. 11, no. 2, pp. 79–85, 1979.
  47. J. J. Bartholdi III, “A guaranteed-accuracy round-off algorithm for cyclic scheduling and set covering,” Operations Research, vol. 29, no. 3, pp. 501–510, 1981.
  48. J. Rubin, “Technique for the solution of massive set covering problems, with application to airline crew scheduling,” Transportation Science, vol. 7, no. 1, pp. 34–48, 1973.
  49. C. Toregas, R. Swain, C. ReVelle, and L. Bergman, “The location of emergency service facilities,” Operations Research, vol. 19, no. 6, pp. 1363–1373, 1971.
  50. W. Walker, “Using the set-covering problem to assign fire companies to fire houses,” Operations Research, vol. 22, no. 2, pp. 275–277, 1974.
  51. K. Munagala, S. Babu, R. Motwani, and J. Widom, “The pipelined set cover problem,” in Database Theory – ICDT 2005, vol. 3363 of Lecture Notes in Computer Science, pp. 83–98, Springer, Berlin, Germany, 2005.
  52. S. Helber, K. Schimmelpfeng, R. Stolletz, and S. Lagershausen, “Using linear programming to analyze and optimize stochastic flow lines,” Annals of Operations Research, vol. 182, pp. 193–211, 2011.
  53. M. A. Breuer, “Simplification of the covering problem with application to Boolean expressions,” Journal of the ACM, vol. 17, pp. 166–181, 1970.
  54. A. Hatamlou, “Black hole: a new heuristic optimization approach for data clustering,” Information Sciences, vol. 222, pp. 175–184, 2013.
  55. S. Kumar, D. Datta, and S. K. Singh, “Black hole algorithm and its applications,” in Studies in Computational Intelligence, vol. 575, pp. 147–170, Springer International Publishing, Cham, Switzerland, 2015.
  56. Y. Hamadi, E. Monfroy, and F. Saubion, Autonomous Search, Springer, Berlin, Germany, 2012.
  57. R. Soto, B. Crawford, R. Olivares et al., “Online control of enumeration strategies via bat algorithm and black hole optimization,” Natural Computing, vol. 16, no. 2, pp. 241–257, 2017.
  58. R. Soto, B. Crawford, R. Olivares et al., “Using autonomous search for solving constraint satisfaction problems via new modern approaches,” Swarm and Evolutionary Computation, vol. 30, pp. 64–77, 2016.
  59. M. Nemati, H. Momeni, and N. Bazrkar, “Binary Black Holes Algorithm,” International Journal of Computer Applications, vol. 79, no. 6, pp. 36–42, 2013.
  60. S. Mirjalili and A. Lewis, “S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization,” Swarm and Evolutionary Computation, vol. 9, pp. 1–14, 2013.
  61. B. Crawford, R. Soto, G. Astorga, J. García, C. Castro, and F. Paredes, “Putting Continuous Metaheuristics to Work in Binary Search Spaces,” Complexity, vol. 2017, Article ID 8404231, 19 pages, 2017.
  62. J. Beasley, “OR-Library,” 1990, https://goo.gl/lO1UQ6.
  63. M. L. Fisher and P. Kedia, “Optimal solution of set covering/partitioning problems using dual heuristics,” Management Science, vol. 36, no. 6, pp. 674–688, 1990.
  64. H. W. Lilliefors, “On the Kolmogorov-Smirnov test for normality with mean and variance unknown,” Journal of the American Statistical Association, vol. 62, no. 318, pp. 399–402, 1967.
  65. H. B. Mann and D. R. Whitney, “On a test of whether one of two random variables is stochastically larger than the other,” Annals of Mathematical Statistics, vol. 18, pp. 50–60, 1947.
  66. R. Soto, B. Crawford, A. Muñoz, F. Johnson, and F. Paredes, “Pre-processing, repairing and transfer functions can help binary electromagnetism-like algorithms,” in Artificial Intelligence Perspectives and Applications, vol. 347 of Advances in Intelligent Systems and Computing, pp. 89–97, Springer, Cham, Switzerland, 2015.
  67. R. Soto, B. Crawford, S. Misra et al., “Choice functions for autonomous search in constraint programming: GA vs. PSO,” Tehnički vjesnik, vol. 20, no. 4, pp. 621–627, 2013.
  68. R. Soto, B. Crawford, W. Palma et al., “Boosting autonomous search for CSPs via skylines,” Information Sciences, vol. 308, pp. 38–48, 2015.