Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 716069, 12 pages
A Hybrid Soft Computing Approach for Subset Problems
1Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
2Universidad Finis Terrae, Santiago 7500000, Chile
3Universidad Autónoma de Chile, Santiago 7500000, Chile
4CNRS, LINA, Université de Nantes, Nantes 44322, France
5Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
6Escuela de Ingeniería Industrial, Universidad Diego Portales, Santiago 8370179, Chile
Received 7 April 2013; Revised 21 June 2013; Accepted 22 June 2013
Academic Editor: Ker-Wei Yu
Copyright © 2013 Broderick Crawford et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set can be partitioned into smaller subsets: every item must be contained in exactly one partition. Related problems are set packing (every item is contained in at most one partition) and set covering (every item is contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving this kind of problem. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It can solve complex combinatorial problems for which traditional mathematical techniques may fail. On the other hand, in constraint programming, the solving process of constraint satisfaction problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistencies either prior to or during search. Our hybrid approach was tested with set covering and set partitioning benchmark datasets, and we observed that embedding this filtering technique in the constructive phase of ACO improved its performance.
The set covering problem (SCP) and set partitioning problem (SPP) have many applications, including those involving routing, scheduling, stock cutting, electoral redistricting, and other important real-life situations [1, 2]. Although the best known application of the SPP is airline crew scheduling [3, 4], several other applications exist, including vehicle routing problems (VRP) [5, 6] and query processing. The main disadvantage of SPP-based models is the need to explicitly generate a large set of possibilities to obtain good solutions. Additionally, in many cases, a prohibitive amount of time is needed to find the exact solution.
Furthermore, set partitioning problems occur as subproblems in various combinatorial optimization problems. In airline scheduling, a subtask called crew scheduling takes as input a set of crew pairings; the selection of crew pairings that minimizes cost while ensuring that each flight is covered exactly once can be modeled as a set partitioning problem [1, 9]. In [10, 11], solving a particular case of the VRP, the dial-a-ride problem (DARP), also uses an SPP decomposition approach.
Because the SPP formulation has proved useful for modeling important industrial problems (or phases of them), it is our interest to solve it with novel techniques. In this work, we solve some test instances of SPP and SCP (the SCP is considered a relaxation of the SPP) with ant colony optimization (ACO) algorithms and some hybridizations of ACO with a constraint programming (CP) technique: constraint propagation.
ACO is a swarm intelligence metaheuristic inspired by the foraging behavior of real ant colonies. Ants deposit pheromone on the ground, marking paths from the nest to food sources that other members of the colony can then identify and follow. Since the early nineties, ACO has attracted the attention of researchers, and many successful applications to optimization problems have been reported.
Some good approaches applying ACO to subset problems already exist. In one of them, ACO is applied to the set packing problem using two solution construction strategies based on exploration and exploitation. In general, the same holds for the SCP: ACO is applied only as a construction algorithm, and the approach is tested only on some small SCP instances. More recent works apply ant computing to the SCP and related problems using techniques to remove redundant columns and local search to improve solutions [17–22].
The best performing metaheuristics for the SPP are genetic algorithms [23, 24]. Taking these results into account, the incomplete approach of ant computing can be considered a good alternative for these problems when complete techniques are unable to find the optimal solution in a reasonable time.
ACO is of limited effectiveness in solving very strongly constrained problems: problems whose neighborhoods contain few solutions, or none at all, so that local search is of very limited use. Probably the most significant such problem is the SPP. A direct implementation of the basic ACO framework is incapable of obtaining feasible solutions for many standard test instances of SPP [25–27].
Trying to solve larger instances of SPP with the original ant system (AS) or ant colony system (ACS) implementation leads to many infeasible labelings of variables, and the ants cannot obtain complete solutions using the classic transition rule when they move in their neighborhood. The root of the problem is that simply following the random-proportional transition rule, that is, learning and reinforcing good paths, is no longer enough, since this rule does not check for constraint consistency.
In order to improve this aspect of ACO, we add a constraint programming mechanism to the construction phase of ACO so that only feasible partial solutions are generated. The CP mechanism incorporates information about the instantiation of variables after the current decision. In general, ACO algorithms are competitive with other optimization techniques when applied to problems that are not overly constrained; however, when solving highly constrained problems, their performance degrades. When a problem is highly constrained, the difficulty lies in finding feasible solutions at all. This is where CP comes into play, because such problems are the target problems for CP solvers. CP is a programming paradigm in which a combinatorial optimization problem is modeled as a discrete optimization problem by specifying the constraints that a feasible solution must meet. The CP approach to searching for a feasible solution often works by iterating constraint propagation and the addition of further constraints, which transforms the problem without changing its solutions. Constraint propagation is the mechanism that reduces the domains of the decision variables with respect to the given set of constraints.
Although the idea of obtaining synergy from the hybridization of ACO with CP is not novel [31–36], our proposal is a bit different. We explore adding to the ACO algorithm a mechanism to check constraint consistency that is usually used in complete techniques: arc consistency. Other kinds of cooperation between ACO and CP have been shown in the literature, where combinatorial optimization problems are solved in a generic way by a two-phase algorithm. The first phase aims to create a hot start for the second: it samples the solution space and applies reinforcement learning techniques as implemented in ACO to create pheromone trails. During the second phase, a CP optimizer performs a complete tree search guided by the pheromone trails previously accumulated.
Here, we propose the addition of a lookahead mechanism in the construction phase of ACO so that only feasible solutions are generated. The lookahead mechanism incorporates information about the instantiation of variables after the current decision. The idea differs from those proposed in [31, 32], whose authors proposed a lookahead function evaluating the pheromone in the shortest common supersequence problem and estimating the quality of a partial solution of an industrial scheduling problem, respectively.
This paper is organized as follows. In Section 2, we explain the problem. In Section 3, we describe the ACO framework. In Section 4, we present the definitions considered in constraint propagation. Our hybrid proposal is described in Section 5. In Section 6, we present the experimental results obtained. Finally, in Section 7, we conclude the paper and give some perspectives for future research.
2. Problem Description
The SPP is the problem of partitioning a given set into mutually disjoint subsets while minimizing a cost function defined as the sum of the costs associated with each of the eligible subsets.
In the SPP matrix formulation, we are given an m x n matrix A = (a_ij) in which all the matrix elements are either zero or one. Additionally, each column j is given a nonnegative cost c_j.
Let I = {1, ..., m} and J = {1, ..., n} be the row set and column set, respectively.
We say that a column j covers a row i if a_ij = 1. Let x_j be a binary variable which is one if column j is chosen and zero otherwise. The SPP can be defined formally as minimizing the total cost of the chosen columns (1) subject to constraints (2), which enforce that each row is covered by exactly one column. The SCP is an SPP relaxation: the goal in the SCP is to choose a subset of the columns of minimal weight, using constraints (3) that enforce that each row is covered by at least one column.
The following notation is also often used to complete the description of the problem: for each row i, the set of columns covering it, and for each column j, the set of rows it covers.
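The display equations (1)–(3) were lost from this copy. With c_j the column costs, a_ij the 0-1 matrix entries, and x_j the binary selection variables, the standard forms (a reconstruction consistent with the surrounding text) read:

```latex
% Set partitioning problem (SPP): objective (1) and constraints (2)
\min \sum_{j \in J} c_j x_j
\quad \text{subject to} \quad
\sum_{j \in J} a_{ij} x_j = 1 \quad \forall i \in I,
\qquad x_j \in \{0, 1\} \quad \forall j \in J.

% Set covering problem (SCP): the equality (2) is relaxed to the covering
% inequality (3)
\sum_{j \in J} a_{ij} x_j \ge 1 \quad \forall i \in I.
```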
3. Ant Colony Optimization for Set Partitioning Problems
In this section, we briefly present ACO algorithms and describe their use to solve the SPP. More details about ACO algorithms can be found in the literature.
The basic idea of ACO algorithms comes from the capability of real ants to find the shortest paths between the nest and food source. From a combinatorial optimization point of view, the ants are looking for good solutions. Real ants cooperate in their search for food by depositing pheromone on the ground. An artificial ant colony simulates this behavior implementing artificial ants as parallel processes whose role is to build solutions using a randomized constructive search driven by pheromone trails and heuristic information of the problem.
An important topic in ACO is the adaptation of the pheromone trails during algorithm execution to take into account the accumulated search experience: reinforcing the pheromone associated with good solutions and letting the pheromone on the components evaporate over time in order to avoid premature convergence. ACO can be applied in a very straightforward way to the SPP. The columns are chosen as the solution components, and each column j has an associated cost c_j and pheromone trail tau_j. Each column can be visited by an ant only once, and a final solution has to cover all rows. A walk of an ant over the graph representation corresponds to the iterative addition of columns to the partial solution obtained so far. Each ant starts with an empty solution and adds columns until a cover is completed. A pheromone trail tau_j and a heuristic information eta_j are associated with each eligible column j. A column to be added is chosen with a probability that depends on the pheromone trail and the heuristic information. The most common form of the ACO decision policy (transition rule probability) when ants work with components assigns to each column j not yet in the partial solution S^k of ant k the probability tau_j^alpha * eta_j^beta divided by the sum of tau_l^alpha * eta_l^beta over all eligible columns l, where alpha and beta are two parameters which determine the relative influence of the pheromone trail and the heuristic information in the probabilistic decision [13, 18].
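As a sketch, the random-proportional rule just described can be implemented as follows (the column indices and the pheromone and heuristic values in the example are illustrative):

```python
import random

def transition_probability(candidates, tau, eta, alpha, beta):
    """Random-proportional rule: p_j is proportional to
    tau_j^alpha * eta_j^beta over the still-eligible columns."""
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    total = sum(weights)
    return [w / total for w in weights]

def choose_column(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Sample one column according to the transition rule probabilities."""
    probs = transition_probability(candidates, tau, eta, alpha, beta)
    return random.choices(candidates, weights=probs, k=1)[0]
```

With equal pheromone and heuristic values, every eligible column is chosen with the same probability, which is the expected behavior of the rule.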
3.1. Pheromone Trail
One of the most crucial design decisions to be made in ACO algorithms is the modeling of the set of pheromones. In the original ACO implementation for the TSP, the choice was to put a pheromone value on every link between a pair of cities, but for other combinatorial problems pheromone values can often be assigned to the decision variables (first-order pheromone values). In this work, the pheromone trail is put on the problem's components (each eligible column j) instead of the problem's connections. Setting a good pheromone quantity is not a trivial task either. The quantity of pheromone trail laid on columns is based on the idea that the more pheromone trail on a particular item, the more profitable that item is. The pheromone deposited on each component is therefore proportional to its frequency in the ants' solutions (in this work, we divided this frequency by the number of ants).
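A minimal sketch of this update, assuming a standard evaporation step followed by the frequency-based deposit described above (the parameter names are illustrative):

```python
def update_pheromone(tau, ant_solutions, rho):
    """Evaporate all trails, then deposit on each column an amount equal
    to its frequency across the ants' solutions divided by the number of
    ants, as described in Section 3.1."""
    n_ants = len(ant_solutions)
    freq = {}
    for solution in ant_solutions:
        for j in solution:
            freq[j] = freq.get(j, 0) + 1
    for j in tau:
        tau[j] = (1.0 - rho) * tau[j] + freq.get(j, 0) / n_ants
    return tau
```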
3.2. Heuristic Information
In this paper, we use a dynamic heuristic information that depends on the partial solution of an ant. It can be defined as eta_j = e_j / c_j, where e_j is the so-called cover value, that is, the number of additional rows covered when adding column j to the current partial solution, and c_j is the cost of column j. In other words, the heuristic information relates the number of additional rows a column covers to its cost. An ant ends the solution construction when all rows are covered.
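The dynamic heuristic can be sketched directly from this definition (the row-set representation of a column is an assumption of this sketch):

```python
def heuristic_info(column_rows, covered_rows, cost):
    """Dynamic heuristic eta_j = e_j / c_j: the cover value (number of
    rows the column would newly cover) divided by the column's cost."""
    e = len(column_rows - covered_rows)  # cover value e_j
    return e / cost if e > 0 else 0.0
```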
3.3. AS and ACS
ACS differs from AS in the following aspects. First, it exploits the search experience accumulated by the ants more strongly than AS does through the use of a more aggressive action choice rule. Second, pheromone evaporation and pheromone deposit take place only on the columns belonging to the best so far solution. Third, each time an ant chooses a column , it removes some pheromone from the component increasing the exploration. ACS has demonstrated better performance than AS in a wide range of problems.
ACS exploits a pseudorandom-proportional transition rule in the solution construction: ant k chooses the column maximizing tau_l^alpha * eta_l^beta when q <= q0, and otherwise follows the transition rule probability (6), where q is a random number uniformly distributed in [0, 1] and q0 is a parameter that controls how strongly the ants deterministically exploit the pheromone trail and the heuristic information. It should be mentioned that ACS uses a candidate list to restrict the number of available choices considered at each construction step. The candidate list contains a number of the best rated columns according to the heuristic criterion.
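A sketch of the pseudorandom-proportional rule, reusing the same scoring as the random-proportional rule (names are illustrative):

```python
import random

def acs_choose(candidates, tau, eta, alpha, beta, q0):
    """ACS pseudorandom-proportional rule: with probability q0 pick the
    best-rated column deterministically (exploitation); otherwise sample
    with the random-proportional rule (exploration)."""
    score = lambda j: tau[j] ** alpha * eta[j] ** beta
    if random.random() <= q0:
        return max(candidates, key=score)
    weights = [score(j) for j in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

Setting q0 close to 1 makes the search greedy; lowering it recovers the exploratory behavior of AS.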
Trying to solve larger instances of SPP with the original AS or ACS implementation leads to many infeasible labelings of variables, and the ants cannot obtain complete solutions. In this paper, we explore the addition of an arc consistency mechanism in the construction phase of ACO so that only feasible solutions are generated. A direct implementation of the basic ACO framework is incapable of obtaining feasible solutions for many SPP instances.
3.4. The ACO Framework
Each ant starts with an empty solution and adds columns until a cover is completed. But simply determining whether a column belongs to the partial solution or not is not good enough.
The traditional ACO decision policy (4) does not work for the SPP because, in this traditional selection process of the next columns, the ants ignore the problem constraints when a variable is instantiated. In the worst case, over the iterative steps, it is possible to assign values to some variables that make it impossible to obtain a complete solution (see Algorithm 1).
4. Constraint Propagation
Constraint propagation is crucial in CP, and it appears under different names: constraint relaxation, filtering algorithms, narrowing algorithms, constraint inference, simplification algorithms, label inference, local consistency enforcing, rules iteration, and chaotic iteration.
Constraint propagation embeds any reasoning which consists in explicitly forbidding values or combinations of values for some variables of a problem because a given subset of its constraints cannot be satisfied otherwise. Arc consistency is the most well-known way of propagating constraints.
4.1. Arc Consistency
Arc-consistency is one of the most used filtering techniques in constraint satisfaction for reducing the combinatorial space of problems. Arc-consistency is formally defined as a local consistency within the constraint programming field. A local consistency defines properties that the constraint problem must satisfy after constraint propagation. Constraint propagation is simply the process of enforcing the given local consistency on the problem. In the following, some necessary definitions are stated.
Definition 1 (constraint). A constraint c is a relation defined on a sequence of variables X(c), called the scheme of c. c is the subset of the product of their domains that contains the combinations of values (or tuples) that satisfy c. |X(c)| is called the arity of c. A constraint c with scheme X(c) = (x_1, ..., x_k) is also noted as c(x_1, ..., x_k).
Definition 2 (constraint network). A constraint network, also known as a constraint satisfaction problem (CSP), is defined by a triple N = (X, D, C), where (i) X is a finite sequence of integer variables X = (x_1, ..., x_n), (ii) D is the corresponding set of domains for X, that is, D = D(x_1) x ... x D(x_n), where D(x_i) is the finite set of values that variable x_i can take, (iii) C is a set of constraints C = {c_1, ..., c_e}, where the variables in the scheme X(c_i) of each constraint are in X.
Definition 3 (projection). A projection of a constraint c on a sequence of variables Y contained in X(c) is denoted as pi_Y(c); it defines the relation with scheme Y that contains the tuples that can be extended to a tuple on X(c) satisfying c.
As previously mentioned, arc-consistency is one of the most used ways of propagating constraints. Arc-consistency was initially defined for binary constraints [40, 41], that is, constraints involving two variables. We here give the more general definition for constraints of arbitrary arity, named generalized arc-consistency (GAC).
Definition 4 ((generalized) arc consistency). Given a network N = (X, D, C), a constraint c, and a variable x_i in X(c), (i) a value v_i in D(x_i) is consistent with c in D if and only if there exists a valid tuple satisfying c in which x_i takes the value v_i; such a tuple is called a support for (x_i, v_i) on c; (ii) the domain D is (generalized) arc-consistent on c for x_i if and only if all the values in D(x_i) are consistent with c in D; (iii) the network N is (generalized) arc-consistent if and only if D is (generalized) arc-consistent for all variables in X on all constraints in C.
As an example, let us consider the non-arc-consistent network depicted on the left side of Figure 1. It considers three variables x_1, x_2, and x_3, their domains, and two inequality constraints c_1 and c_2. Enforcing arc-consistency allows one to eliminate some inconsistent values. For instance, when the first constraint is verified, the value 2 is removed from the first domain, since there is no value greater than it in the second domain. Another value is removed because no support for it exists, and this removal in turn triggers a further removal when the second constraint is checked. The resulting arc-consistent network is depicted on the right side of Figure 1.
As previously illustrated, the main idea of this process is the revision of arcs, that is, eliminating every value in the domain of a variable that is inconsistent with a given constraint c. This notion is encapsulated in the function Revise3. This function takes each value in the domain of the variable (line 2) and searches for a support on the constraint c (line 3). If no support exists, the value is eliminated from the domain. Finally, the function reports whether the domain has been changed by returning true, or false otherwise (line 8).
Algorithm 3 is responsible for ensuring that every domain is consistent with the set of constraints. This is done by a loop that revises arcs until no change happens. The function begins by filling a list Q with the variable-constraint pairs to be revised. The idea is to keep the pairs for which the domain is not yet ensured to be arc-consistent with respect to the constraint; this avoids useless calls to Revise3, as done in more basic algorithms such as AC1 and AC2. Then, a loop takes pairs from Q (line 2) and Revise3 is called (line 4). If Revise3 returns true, the revised domain is checked for emptiness; if it is empty, the algorithm returns false. Otherwise, a value of another variable may have lost its support, so all pairs that could be affected must be reinserted in the list Q. The algorithm ends once Q is empty, returning true when all arcs have been verified and the remaining values of the domains are arc-consistent with respect to all constraints.
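The Revise3/AC3 pair can be sketched as follows for binary constraints, with each directed arc given as a relation between the two variables (the dictionary representation of the network is an assumption of this sketch):

```python
def revise(domains, x, y, relation):
    """Revise3: drop every value of x that has no support in y."""
    changed = False
    for v in list(domains[x]):
        if not any(relation(v, w) for w in domains[y]):
            domains[x].discard(v)
            changed = True
    return changed

def ac3(domains, constraints):
    """AC3 over binary constraints given as {(x, y): relation(vx, vy)}.
    Returns False if some domain is wiped out, True otherwise."""
    queue = list(constraints)
    while queue:
        x, y = queue.pop(0)
        if revise(domains, x, y, constraints[(x, y)]):
            if not domains[x]:
                return False
            # x lost values: arcs pointing at x must be revised again
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True
```

On a small network with the constraint x < y, the procedure prunes the unsupported extreme values from both domains, as in the Figure 1 example.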
5. Hybridization of Ants and Constraint Programming
Hybrid algorithms provide appropriate compromises between exact (or complete) search methods and approximate (or incomplete) methods; some efforts have been made to integrate constraint programming (exact methods) into ant algorithms (stochastic local search methods) [31–36].
A hybridization of ACO and CP can be approached from two directions: we can take either ACO or CP as the base algorithm and try to embed the other method into it. One way to integrate CP into ACO is to let it reduce the possible candidates among the not yet instantiated variables participating in the same constraints as the current variable. A different approach is to embed ACO within CP; the point at which ACO can interact with CP is the labeling phase, using ACO to learn a value ordering that is more likely to produce good solutions.
In this work, ACO uses CP in the variable selection (when ACO adds a column to the partial solution). The CP algorithm used in this paper is the AC3 filtering procedure. It enforces consistency between pairs of a not yet instantiated variable and an instantiated variable; that is, when a value is assigned to the current variable, any value in the domain of a future variable that conflicts with this assignment is removed from the domain.
The AC3 filtering procedure, taking into account the topology of the constraint network (i.e., which sets of variables are linked by a constraint and which are not), guarantees that at each step of the search all constraints between already assigned variables and not yet assigned variables are consistent; this means that columns are chosen only if they do not generate any conflict with the next column to be chosen. A new transition rule is then developed by embedding AC3 in the ACO framework (see Algorithm 4, lines 7 to 12).
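The key effect of the filtering on the construction step, for the SPP, is to restrict the candidate list to columns that do not clash with the rows already covered; a minimal sketch of such a filter (the set-based column representation is an assumption):

```python
def feasible_candidates(columns, rows_of, covered_rows):
    """SPP consistency filter for the construction phase: a column is
    eligible only if none of its rows is already covered, since each
    row must be covered exactly once."""
    return [j for j in columns if not (rows_of[j] & covered_rows)]
```

The transition rule is then applied only over the filtered list, so infeasible partial solutions are never extended.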
5.1. ACO + CP to SCP
In Figure 2, we present an explanation of how ACO + CP works when solving the SCP. One variable represents the current variable, some variables are already instantiated, and the rest are not yet instantiated (the order of variable instantiation is given by the enumeration strategy in use).
(1) Constraint propagation assigning value 0 to future variables. When a column is instantiated, then for each row it covers, the other columns that can also cover that row can be set to 0, as long as all rows covered by each of these columns are already covered (the search space is reduced, which also favors the optimization).
(2) Constraint propagation assigning value 1 to future variables. If some column is the only one that can cover a row, this column can be set to 1.
5.2. ACO + CP to SPP
In Figure 3, we present an explanation of how ACO + CP works when solving the SPP. One variable represents the current variable, one variable is already instantiated, and the rest are not yet instantiated.
(1) Backtracking on the current variable. If any of the rows covered by a column has already been covered, the current column cannot be instantiated because it violates a constraint, and backtracking must be performed.
(2) Constraint propagation assigning value 0 to future variables. When a column is instantiated, then for each previously uncovered row it covers, the other columns that can also cover that row should be set to 0 (this constraint propagation reduces the search space; in practice, the value 1 is eliminated from the domain of each such uninstantiated variable).
(3) Backtracking on the current variable and constraint propagation assigning value 1 to future variables. If any of these columns (which were assigned the value 0) is the only column that can cover another row, the current column cannot be instantiated because it does not lead to a solution, and backtracking must be performed.
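The SPP propagation rules above can be sketched as a single routine (the data layout and helper names are illustrative, not the paper's implementation):

```python
def propagate(rows_of, all_rows, covered_rows, chosen, excluded):
    """Sketch of the SPP propagation rules:
    (a) exclude (set to 0) every unchosen column sharing a covered row;
    (b) if some uncovered row is left with no eligible column, report
        failure so the caller can backtrack on the current column."""
    for j in rows_of:
        if j not in chosen and rows_of[j] & covered_rows:
            excluded.add(j)  # rule (2): value 1 removed from its domain
    for r in all_rows - covered_rows:
        if not any(r in rows_of[j] for j in rows_of if j not in excluded):
            return False  # rules (1)/(3): dead end, backtrack
    return True
```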
6. Experimental Evaluation
We have implemented AS and ACS and the proposed AS + CP and ACS + CP algorithms. The effectiveness of the proposed algorithms was evaluated experimentally using SCP and SPP test instances from Beasley's OR-library. Each instance was solved 12 times (instances not solved are marked in the tables), and the algorithms were run with 100 ants and a maximum of 200 iterations. Table 1 shows the value considered for each standard ACO parameter: the relative influence of the pheromone trail, the relative influence of the heuristic information, the pheromone evaporation rate, the threshold used in the ACS pseudorandom-proportional action choice rule, and the coefficient used in the ACS local pheromone trail update; the ACS candidate list size was 300. For each instance, the initial pheromone was calculated as follows:
Algorithms were implemented using ANSI C, GCC 3.3.6, under a 2.0 GHz Intel Core2 Duo T5870 with 1 Gb RAM running Microsoft Windows XP Professional.
Tables 2(a) and 2(b) describe problem instances, and they show the problem code, the number of rows (constraints), the number of columns (decision variables), the Density (i.e., the percentage of nonzero entries in the problem matrix), and the best known cost value for each instance Opt (IP optimal) of the SCP and SPP instances used in the experimental evaluation.
Computational results (best cost obtained) are shown in Tables 3(a), 3(b), 4(a), 4(b), 5(a), 5(b), 6(a), and 6(b) and in Figures 4 and 5. The quality of a solution is evaluated using the relative percentage deviation (RPD) and relative percentage improvement (RPI) measures. The RPD value quantifies the deviation of the obtained objective value from the best known cost value for each instance (see the third column), and the RPI value quantifies the improvement of the obtained objective value over an initial solution (see the fourth column).
These measures are computed as follows:
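The formulas themselves did not survive in this copy; writing Z for the obtained cost, Z_opt for the best known cost, and Z_init for the initial solution cost, the standard definitions consistent with the description above (our reconstruction) are:

```latex
\mathrm{RPD} = \frac{Z - Z_{\mathrm{opt}}}{Z_{\mathrm{opt}}} \times 100,
\qquad
\mathrm{RPI} = \frac{Z_{\mathrm{init}} - Z}{Z_{\mathrm{init}} - Z_{\mathrm{opt}}} \times 100.
```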
For all the implemented algorithms, the solution quality and the computational effort (in seconds) are related using the marginal relative improvement (MIC) measure (see the fifth column). This measure quantifies the improvement achieved per unit of CPU time. The solution time is measured in CPU seconds, and it is the time each algorithm takes to first reach its final best solution.
The results expressed in terms of the average RPD, average RPI, and average MIC show the effectiveness of AS + CP and ACS + CP over AS and ACS to solve SCP (see Tables 3(a), 3(b), 4(a), and 4(b)). Our hybrid solver provides high quality near optimal solutions, and it has the ability to generate them for a variety of instances.
From Tables 5(a), 5(b), 6(a), and 6(b), it can be observed that AS + CP and ACS + CP obtained better results than AS and ACS when solving the SPP (in terms of average RPD and average RPI), but they are a bit worse in average MIC. This indicates that the computational effort is slightly higher with the hybridization, but our approach can obtain optimal solutions in some instances where AS or ACS failed. Our hybrid approach shows an excellent tradeoff between the quality of the solutions obtained and the computational effort required.
7. Conclusions
Highly constrained combinatorial optimization problems have proved to be a challenge for constructive metaheuristics. In this paper, the ACO framework has been modified so that it may be applied to constraint satisfaction problems in general and hard constrained instances in particular.
A direct implementation of the basic ACO framework is incapable of obtaining feasible solutions for many strongly constrained problems. In order to improve this aspect of ACO, we integrated a constraint programming mechanism in the construction phase of ACO; thus, only feasible partial solutions are generated.
The effectiveness of the proposed rule was tested on benchmark problems: we solved the SCP and SPP with ACO using constraint propagation in its transition rule, and the results were compared with pure ACO algorithms. Regarding efficiency, the computational effort required is almost the same.
An interesting extension of this work would be the hybridization of AC3 with other metaheuristics. The use of autonomous search (AS) in conjunction with constraint programming would also be a promising direction to follow. Autonomous search is a new research field that provides practitioners with systems able to autonomously self-tune their performance while effectively solving problems. Its major strength and originality lie in the fact that problem solvers can perform self-improvement operations based on analysis of the performance of the solving process. In [46, 47], the order in which the variables are selected for instantiation is determined by a choice function that dynamically selects, from a set of variable ordering heuristics, the one that best matches the current problem state; this combination can accelerate the resolution process, especially on harder instances. Related results also show that some phases of reactive propagation are beneficial to the main hybrid algorithm, and hybridization strategies are thus crucial for deciding when to perform, or not, constraint propagation.
Acknowledgment
The author Fernando Paredes is supported by FONDECYT-Chile Grant 1130455.
References
- E. Balas and M. Padberg, “Set partitioning: a survey,” Management Sciences Research Report, Defense Technical Information Center, Fort Belvoir, Va, USA, 1976.
- T. A. Feo and M. G. C. Resende, “A probabilistic heuristic for a computationally difficult set covering problem,” Operations Research Letters, vol. 8, no. 2, pp. 67–71, 1989.
- A. Mingozzi, M. A. Boschetti, S. Ricciardelli, and L. Bianco, “A set partitioning approach to the crew scheduling problem,” Operations Research, vol. 47, no. 6, pp. 873–888, 1999.
- M. Mesquita and A. Paias, “Set partitioning/covering-based approaches for the integrated vehicle and crew scheduling problem,” Computers and Operations Research, vol. 35, no. 5, pp. 1562–1575, 2008, special issue: algorithms and computational methods in feasibility and infeasibility.
- J. P. Kelly and J. Xu, “A set-partitioning-based heuristic for the vehicle routing problem,” INFORMS Journal on Computing, vol. 11, no. 2, pp. 161–172, 1999.
- M. L. Balinski and R. E. Quandt, “On an integer program for a delivery problem,” Operations Research, vol. 12, no. 2, pp. 300–304, 1964.
- R. D. Gopal and R. Ramesh, “Query clustering problem: a set partitioning approach,” IEEE Transactions on Knowledge and Data Engineering, vol. 7, no. 6, pp. 885–899, 1995.
- G. B. Alvarenga and G. R. Mateus, “A two-phase genetic and set partitioning approach for the vehicle routing problem with time windows,” in Proceedings of the 4th International Conference on Hybrid Intelligent Systems (HIS '04), M. Ishikawa, S. Hashimoto, M. Paprzycki et al., Eds., pp. 428–433, IEEE Computer Society, 2004.
- F. Barahona and R. Anbil, “On some difficult linear programs coming from set partitioning,” Discrete Applied Mathematics, vol. 118, no. 1-2, pp. 3–11, 2002, special Issue devoted to the ALIO-EURO Workshop on Applied Combinatorial Optimization.
- R. Borndorfer, M. Grotschel, F. Klostermeier, and C. Kuttner, “Telebus berlin: vehicle scheduling in a dial-a-ride system,” Tech. Rep. SC 9723, Konrad-Zuse-Zentrum fur Informationstechnik, Berlin, Germany, 1997.
- B. Crawford, C. Castro, and E. Monfroy, “Solving dial-a-ride problems with a low-level hybridization of ants and constraint programming,” in Proceedings of the 2nd International Work-Conference on the Interplay between Natural and Artificial Computation (IWINAC '07), J. Mira and J. R. Alvarez, Eds., vol. 4528 of Lecture Notes in Computer Science, pp. 317–327, Springer, 2007.
- K. Apt, Principles of Constraint Programming, Cambridge University Press, New York, NY, USA, 2003.
- M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
- B. Chandra Mohan and R. Baskaran, “A survey: ant colony optimization based recent research and implementation on several engineering domain,” Expert Systems with Applications, vol. 39, no. 4, pp. 4618–4627, 2012.
- G. Leguizamón and Z. Michalewicz, “A new version of ant system for subset problems,” in Proceedings of the Congress on Evolutionary Computation (CEC '99), P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and A. Zalzala, Eds., IEEE Press, July 1999.
- X. Gandibleux, X. Delorme, and V. T'Kindt, “An ant colony optimisation algorithm for the set packing problem,” in Proceedings of the 4th International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS '04), pp. 49–60, 2004.
- M. Rahoual, R. Hadji, and V. Bachelet, “Parallel ant system for the set covering problem,” in Ant Algorithms, pp. 262–267, 2002.
- L. Lessing, I. Dumitrescu, and T. Stützle, “A comparison between ACO algorithms for the set covering problem,” in Proceedings of the 4th International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS '04), pp. 1–12, 2004.
- Z.-G. Ren, Z.-R. Feng, L.-J. Ke, and Z.-J. Zhang, “New ideas for applying ant colony optimization to the set covering problem,” Computers and Industrial Engineering, vol. 58, no. 4, pp. 774–784, 2010.
- R. M. D. A. Silva and G. L. Ramalho, “Ant system for the set covering problem,” in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 3129–3133, October 2001.
- R. Hadji, M. Rahoual, E. Talbi, and V. Bachelet, “Ant colonies for the set covering problem,” in Proceedings of 2nd International Workshop on Ant Algorithms (ANTS '00), M. Dorigo, Ed., pp. 63–66, Brussels, Belgium, 2000.
- M. H. Mulati and A. A. Constantino, “Ant-Line: a line-oriented ACO algorithm for the set covering problem,” in Proceedings of the 30th International Conference of the Chilean Computer Science Society (SCCC '11), pp. 265–274, Curicó, Chile, November 2011.
- P. C. Chu and J. E. Beasley, “Constraint handling in genetic algorithms: the set partitioning problem,” Journal of Heuristics, vol. 4, no. 4, pp. 323–357, 1998.
- D. Levine, “A parallel genetic algorithm for the set partitioning problem,” Tech. Rep., Illinois Institute of Technology, Chicago, Ill, USA, 1994.
- V. Maniezzo and M. Milandri, “An ant-based framework for very strongly constrained problems,” in Ant Algorithms, pp. 222–227, 2002.
- V. Maniezzo and M. Roffilli, “Very strongly constrained problems: an ant colony optimization approach,” Cybernetics and Systems, vol. 39, no. 4, pp. 395–424, 2008.
- M. Randall and A. Lewis, “Modifications and additions to ant colony optimisation to solve the set partitioning problem,” in Proceedings of the 6th IEEE International Conference on e-Science Workshops (e-ScienceW '10), pp. 110–116, Los Alamitos, Calif, USA, December 2010.
- M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996.
- M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
- C. Blum, “Ant colony optimization: introduction and recent trends,” Physics of Life Reviews, vol. 2, no. 4, pp. 353–373, 2005.
- R. Michel and M. Middendorf, “An island model based ant system with lookahead for the shortest supersequence problem,” in Parallel Problem Solving from Nature, vol. 1498 of Lecture Notes in Computer Science, pp. 692–701, Springer, Berlin, Germany, 1998.
- C. Gagné, M. Gravel, and W. Price, “A look-ahead addition to the ant colony optimization metaheuristic and its application to an industrial scheduling problem,” in Proceedings of the 4th Metaheuristics International Conference (MIC '01), J. Sousa, Ed., pp. 79–84, Porto, Portugal, July 2001.
- B. Meyer and A. T. Ernst, “Integrating ACO and constraint propagation,” in Proceedings of the 4th International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS '04), pp. 166–177, 2004.
- B. Crawford and C. Castro, “ACO with lookahead procedures for solving set partitioning and covering problems,” in Proceedings of the Workshop on Combination of Metaheuristic and Local Search with Constraint Programming Techniques, Nantes, France, November 2005.
- B. Crawford and C. Castro, “Integrating lookahead and post processing procedures with ACO for solving set partitioning and covering problems,” in Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing (ICAISC '06), L. Rutkowski, R. Tadeusiewicz, L. A. Zadeh, and J. M. Zurada, Eds., vol. 4029 of Lecture Notes in Computer Science, pp. 1082–1090, Springer, 2006.
- D. R. Thiruvady, C. Blum, B. Meyer, and A. T. Ernst, “Hybridizing Beam-ACO with constraint programming for single machine job scheduling,” in Hybrid Metaheuristics, M. J. Blesa, C. Blum, L. D. Gaspero, A. Roli, M. Sampels, and A. Schaerf, Eds., vol. 5818 of Lecture Notes in Computer Science, pp. 30–44, Springer, Berlin, Germany, 2009.
- M. Khichane, P. Albert, and C. Solnon, “Strong combination of ant colony optimization with constraint programming optimization,” in Proceedings of the 7th International Conference on Integration of Artificial Intelligence and Operations Research (CPAIOR '10), A. Lodi, M. Milano, and P. Toth, Eds., vol. 6140 of Lecture Notes in Computer Science, pp. 232–245, Springer, 2010.
- C. Bessiere, “Constraint propagation,” in Handbook of Constraint Programming, F. Rossi, P. van Beek, and T. Walsh, Eds., pp. 29–84, Elsevier, Rio de Janeiro, Brazil, 2006.
- F. Rossi, P. van Beek, and T. Walsh, Handbook of Constraint Programming, Elsevier, Rio de Janeiro, Brazil, 2006.
- A. K. Mackworth, “Consistency in networks of relations,” Artificial Intelligence, vol. 8, no. 1, pp. 99–118, 1977.
- A. Mackworth, “On reading sketch maps,” in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '77), pp. 598–606, 1977.
- R. Dechter and D. Frost, “Backjump-based backtracking for constraint satisfaction problems,” Artificial Intelligence, vol. 136, no. 2, pp. 147–188, 2002.
- J. E. Beasley, “OR-Library: distributing test problems by electronic mail,” Journal of the Operational Research Society, vol. 41, no. 11, pp. 1069–1072, 1990.
- R. Soto, B. Crawford, C. Galleguillos, E. Monfroy, and F. Paredes, “A hybrid AC3-tabu search algorithm for solving Sudoku puzzles,” Expert Systems with Applications, vol. 40, no. 15, pp. 5817–5821, 2013.
- Y. Hamadi, E. Monfroy, and F. Saubion, “An introduction to autonomous search,” in Autonomous Search, Y. Hamadi, E. Monfroy, and F. Saubion, Eds., pp. 1–11, Springer, Berlin, Germany, 2011.
- B. Crawford, C. Castro, E. Monfroy, R. Soto, W. Palma, and F. Paredes, “Dynamic selection of enumeration strategies for solving constraint satisfaction problems,” Romanian Journal of Information Science and Technology, vol. 15, no. 2, pp. 106–128, 2012.
- B. Crawford, R. Soto, E. Monfroy, W. Palma, C. Castro, and F. Paredes, “Parameter tuning of a choice-function based hyperheuristic using particle swarm optimization,” Expert Systems with Applications, vol. 40, no. 5, pp. 1690–1695, 2013.
- E. Monfroy, C. Castro, B. Crawford, R. Soto, F. Paredes, and C. Figueroa, “A reactive and hybrid constraint solver,” Journal of Experimental and Theoretical Artificial Intelligence, vol. 25, no. 1, pp. 1–22, 2013.
- T. Müller, “Solving set partitioning problems with constraint programming,” in Proceedings of the 6th International Conference on the Practical Application of Prolog and the 4th International Conference on the Practical Application of Constraint Technology (PAPPACT '98), pp. 313–332, London, UK, March 1998.
- M. Krieken, H. Fleuren, and M. Peeters, “Problem reduction in set partitioning problems,” Discussion Paper 2003-80, Tilburg University, Center for Economic Research, Tilburg, The Netherlands, 2003.