Mathematical Problems in Engineering

Volume 2015, Article ID 637809, 10 pages

http://dx.doi.org/10.1155/2015/637809

## A Variable Depth Search Algorithm for Binary Constraint Satisfaction Problems

Department of Technology and Maritime Innovation, Buskerud and Vestfold University College, P.O. Box 4, 3199 Borre, Norway

Received 7 October 2014; Revised 5 March 2015; Accepted 1 April 2015

Academic Editor: Jianming Shi

Copyright © 2015 N. Bouhmala. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The constraint satisfaction problem (CSP) is a widely used paradigm for modeling a broad spectrum of optimization problems in artificial intelligence. This paper presents a fast metaheuristic for solving binary constraint satisfaction problems. The method can be classified as a variable depth search metaheuristic combining greedy local search with a self-adaptive weighting strategy on the constraint weights. Several metaheuristics using various penalty weight mechanisms on the constraints have been developed in the past. What distinguishes the proposed metaheuristic from earlier work is the update of *k* variables during each iteration when moving from one assignment of values to another. The benchmark is based on hard random constraint satisfaction problems with several features that make them of great theoretical and practical interest. The results show that the proposed metaheuristic is capable of solving hard, previously unsolved problems that remain a challenge for both complete and incomplete methods. In addition, the proposed metaheuristic is remarkably faster than existing solvers when tested on previously solved instances. Finally, in contrast to other metaheuristics, it requires no parameter tuning, making it highly suitable in practical scenarios.

#### 1. Introduction

Organizations such as companies and public institutions are confronted daily with a large number of combinatorial optimization problems occurring in many different application domains, such as Operations Research (e.g., scheduling and assignment), hardware design (verification and testing, placement and layout), financial decision making (option trading or portfolio management), and even biology (DNA sequencing). Combinatorial optimization refers to optimization problems whose search space (i.e., the set of all feasible solutions) is discrete. The constraint satisfaction problem (CSP), which can model a wide spectrum of combinatorial optimization problems arising in artificial intelligence, has become an important field of study in both theoretical and applied computer science. Constraint technology is making a considerable commercial impact worldwide due to its ability to solve highly complex applications operating in the most demanding environments. ILOG and Cosytec are two of the leading companies producing software based on this technology. A large number of systems based on constraint technology have been developed. Examples include the APACHE system [1] used at Roissy Airport in Paris, the PLAN system [2], a medium- to long-term scheduling system for aircraft assembly lines, the COBRA system [3], which generates work plans for train drivers and conductors of North Western Trains in the UK, and TAP-AI, a planning system for crew assignment at the airline SAS [4]. Disaster management, in which technology and humans must work together faultlessly and every step of a mission must be meticulously planned, and which has long confronted nations with mass casualties and huge financial tolls, is another research area where solutions based on constraint technology have recently received great attention [5, 6].
The Handbook of Constraint Programming [7] lists example applications from several areas modeled as CSPs. The remainder of the paper is organized as follows. Section 2 defines the constraint satisfaction problem. Section 3 provides a survey of methods used to solve it. Section 4 introduces the proposed metaheuristic in detail. Section 5 presents the results, while Section 6 concludes the paper.

#### 2. CSP

The CSP consists of assigning values to variables while satisfying certain constraints. Constraints can be given explicitly, by listing all possible tuples, or implicitly, by describing a relation in some mathematical form. As a domain example, consider problems that occur in production scheduling. Scheduling is concerned with the allocation of resources to activities with the goal of optimizing some performance objectives while satisfying certain restrictions or constraints. Depending on the problem posed, resources may refer to machines, humans, and so forth; activities could be manufacturing operations; objectives could be the minimization of the schedule length; and constraints may state the precedence relationships among activities, as they govern the schedule solution.

A CSP is a tuple $\langle X, D, C \rangle$, where:

(i) $X$ is a finite set of variables: $X = \{x_1, x_2, \ldots, x_n\}$;

(ii) $D$ is a finite set of domains: $D = \{D_1, D_2, \ldots, D_n\}$. Thus each variable $x_i$ has a corresponding discrete domain $D_i$ from which it can be instantiated;

(iii) $C$ is a finite set of constraints: $C = \{C_1, C_2, \ldots, C_m\}$. Each $k$-ary constraint restricts a $k$-tuple of variables and specifies a subset of $D_1 \times \cdots \times D_k$, each element of which is a combination of values that the variables cannot take simultaneously. This set is referred to as the no-good set (i.e., a set of assignments not contained in any solution).

A solution to a CSP requires the assignment of values to each of the variables from their domains such that all the constraints on the variables are satisfied. In this paper, attention is focused on binary CSPs, where all constraints are binary; that is, they are based on the Cartesian product of the domains of two variables. However, any nonbinary CSP can theoretically be converted to a binary CSP [8, 9]. The structure of a binary CSP can be better visualized by a graph where the set of vertices corresponds to the variables and each edge represents a constraint connecting the pair of variables involved in this constraint. The CSP in its general form is NP-complete [10] and has been extensively studied due to its simplicity and applicability [7]. The simplicity of the problem coupled with its intractability makes it an ideal platform for exploring new algorithmic techniques. This has led to the development of several algorithms for solving CSPs which usually fall into two main categories: systematic algorithms and local search algorithms.
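As a concrete illustration (not taken from the paper), a binary CSP can be represented directly in code: variables, their domains, and, for each constrained pair of variables, the no-good set of forbidden value pairs. The sketch below models a tiny inequality-style instance; all names and values are illustrative.

```python
# A minimal binary CSP: variables, domains, and binary constraints
# given as no-good sets (value pairs the variables may NOT take together).
variables = ["x1", "x2", "x3"]
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}

constraints = {
    ("x1", "x2"): {(0, 0), (1, 1)},   # encodes x1 != x2
    ("x2", "x3"): {(0, 0), (1, 1)},   # encodes x2 != x3
}

def violated(assignment, constraints):
    """Return the number of constraints violated by a full assignment."""
    return sum(
        (assignment[u], assignment[v]) in nogoods
        for (u, v), nogoods in constraints.items()
    )

solution = {"x1": 0, "x2": 1, "x3": 0}
assert violated(solution, constraints) == 0   # all constraints satisfied
```

A solution is thus any full assignment whose pair of values, for every constrained edge, avoids that edge's no-good set.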

#### 3. A Brief Survey of Methods

Systematic search algorithms rely on a systematic exploration of the search space. These methods [11–16] aim at exploring the entire solution space using tree search algorithms. The two main components of a tree search are the way to go forward, that is, which decision is taken at which point of the search, and the way to go backwards, that is, the backtracking strategy that defines how the algorithm behaves when an inconsistency is detected. In practice, methods based on systematic tree search may fail to solve large and complex CSP instances because the required computing time may become prohibitive. For instance, a CSP with $n$ variables, each with a domain of size $d$, has a search space proportional to $d^n$, that is, exponential in the number of variables. Most searches arising in CSPs occur over spaces that are far too large to be searched exhaustively. One way to overcome the combinatorial explosion is to give up completeness. Stochastic local search (SLS) algorithms use this strategy and have gained popularity due to their conceptual simplicity and good performance. These methods start with an initial assignment of values to variables, generated randomly or heuristically. During each iteration, a new solution is selected from the neighborhood of the current one by performing a move. A move might consist of changing the value of one randomly selected variable. Choosing a good neighborhood, and a method for searching it, is usually guided by intuition, because very little theory is available as a guide. If the new solution provides a better value in light of the objective function, it becomes the current one. To avoid premature convergence, SLS methods resort to some form of randomization (a noise probability) to escape local minima and better explore the search space. The search is iterated until a termination criterion is reached.
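The generic SLS loop described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the instance, the noise probability, and the iteration cap are arbitrary choices.

```python
import random

def violated(assignment, constraints):
    """Number of constraints whose no-good set contains the current value pair."""
    return sum((assignment[u], assignment[v]) in ng
               for (u, v), ng in constraints.items())

def sls(variables, domains, constraints, noise=0.1, max_iters=10000, seed=0):
    """Generic 1-exchange stochastic local search for a binary CSP."""
    rng = random.Random(seed)
    # Initial solution: a random value for each variable.
    cur = {x: rng.choice(domains[x]) for x in variables}
    for _ in range(max_iters):
        if violated(cur, constraints) == 0:
            return cur                       # all constraints satisfied
        x = rng.choice(variables)            # variable to move
        if rng.random() < noise:
            v = rng.choice(domains[x])       # random-walk step to escape minima
        else:
            # Greedy step: value minimizing the number of violated constraints.
            v = min(domains[x],
                    key=lambda val: violated({**cur, x: val}, constraints))
        cur[x] = v
    return None                              # no solution within the budget

variables = ["x1", "x2", "x3"]
domains = {x: [0, 1, 2] for x in variables}
constraints = {("x1", "x2"): {(v, v) for v in range(3)},   # x1 != x2
               ("x2", "x3"): {(v, v) for v in range(3)}}   # x2 != x3
sol = sls(variables, domains, constraints)
```

The noise step is what keeps the search from stalling in a local minimum, at the cost of occasionally undoing good moves.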
Most algorithms applied to CSPs use the so-called 1-exchange neighborhood, under which two solutions are direct neighbors if, and only if, they differ at most in the value assigned to one variable. A basis for many SLS algorithms is the minimum conflict heuristic (MCH) [17]. MCH iteratively modifies the assignment of a single variable in order to minimize the number of violated constraints. Since the introduction of MCH, a large number of local search heuristics have been proposed to tackle CSPs. Representative state-of-the-art SLS algorithms in the literature include the break method for escaping from local minima [18] and various enhancements of MCH (e.g., a randomized iterative improvement of MCH called WMCH [19], and MCH with tabu search [20, 21]); there is also a large body of work on evolutionary algorithms for CSPs [22–26] for interested readers. Weight-based algorithms are motivated by the intuition that, by introducing weights on variables or constraints, local minima can be avoided and the search process can learn to distinguish between critical and less critical constraints. Methods belonging to this category include GENET [27], guided local search [28], discrete Lagrangian search [29], the exponentiated subgradient method [30], scaling and probabilistic smoothing [31], evolutionary algorithms combined with stepwise adaptation of weights [32–34], and methods based on dynamically adapting weights on variables [35], or on both variables and constraints [36]. Weighting schemes have also been combined with systematic methods to reduce the size of the search tree and consequently speed up the solving time [37–39]. Recently, an improved version of Squeaky Wheel Optimization (SWO) [40], which originated in [41], has been proposed for the scheduling problem. In SWO, a greedy algorithm is used to construct an initial solution, which is then analyzed in order to identify those tasks that, if improved, are likely to improve the objective function score.
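The weighting idea behind this family of methods can be illustrated with a breakout-style sketch: each constraint carries a weight, the search greedily minimizes the weighted sum of violated constraints, and whenever it is trapped in a local minimum the weights of the currently violated constraints are increased. This is a generic illustration of the technique, not any one of the cited algorithms; the instance and parameters are illustrative.

```python
import random

def weighted_cost(assignment, constraints, weights):
    """Weighted sum of violated constraints under the current assignment."""
    return sum(weights[p] for p, ng in constraints.items()
               if (assignment[p[0]], assignment[p[1]]) in ng)

def breakout(variables, domains, constraints, max_iters=10000, seed=0):
    """Greedy descent on weighted violations; at a local minimum,
    increase the weights of the still-violated constraints."""
    rng = random.Random(seed)
    weights = {p: 1 for p in constraints}          # all weights start at 1
    cur = {x: rng.choice(domains[x]) for x in variables}
    for _ in range(max_iters):
        cost = weighted_cost(cur, constraints, weights)
        if cost == 0:
            return cur
        # Best 1-exchange move under the weighted cost.
        best = min(((x, v) for x in variables for v in domains[x]),
                   key=lambda m: weighted_cost({**cur, m[0]: m[1]},
                                               constraints, weights))
        if weighted_cost({**cur, best[0]: best[1]}, constraints, weights) < cost:
            cur[best[0]] = best[1]
        else:
            # Local minimum: penalize the constraints still violated,
            # reshaping the cost surface so an improving move appears.
            for p, ng in constraints.items():
                if (cur[p[0]], cur[p[1]]) in ng:
                    weights[p] += 1
    return None

variables = ["x1", "x2", "x3"]
domains = {x: [0, 1] for x in variables}
constraints = {("x1", "x2"): {(0, 0), (1, 1)},     # x1 != x2
               ("x2", "x3"): {(0, 0), (1, 1)}}     # x2 != x3
sol = breakout(variables, domains, constraints)
```

The weight increments accumulate over the run, so constraints that are repeatedly violated become progressively more expensive to leave unsatisfied.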
The improved version provides additional postprocessing transformations to explore the neighborhood, enhanced with a stochastic local search algorithm. Methods based on large neighborhood search have also recently attracted several researchers for solving the CSP [42]. The central idea is to reduce the size of the local search space by relying on a continual relaxation (removing elements from the solution) and reoptimization (reinserting the removed elements). Systematic methods exhibit poor performance on large problems because bad decisions made early in the search persist for exponentially long times. In contrast, stochastic local search methods replace systematicity with stochastic techniques for diversifying the search. However, the lack of systematicity makes remembering the history of past states problematic. To this end, hybrid search methods offering desirable aspects of both systematic and local search methods are becoming more and more popular, and interested readers may refer to [43–45] for a deeper understanding of these mixed methods.

#### 4. Variable Depth Search Algorithm

Traditional local search algorithms for solving CSPs start from an initial solution $\sigma$ and repeatedly replace it with a better solution from its neighborhood $N(\sigma)$ until no better solution is found in $N(\sigma)$, where $N(\sigma)$ is the set of solutions obtained from $\sigma$ by updating the value of one selected variable. A solution is called locally optimal if no better solution exists in $N(\sigma)$. The algorithm proposed in this paper belongs to the class of variable depth search algorithms, in which an existing solution is not modified merely by changing a single variable; instead, the changes affect as many variables as possible when moving from one solution to another. The algorithm is inspired by the famous Kernighan-Lin algorithm used for solving the graph partitioning problem [46] and the traveling salesman problem [47]. The idea is to replace the search for one favorable move (i.e., the update of one variable) with a search for a favorable sequence of moves (i.e., the update of a series of variables), using a score criterion to guide the search. The different steps of the algorithm are described in Algorithm 1.

(i) Random-initial-solution: the algorithm starts by building an initial solution, constructed simply by assigning to each variable $x_i$ a random value from its domain $D_i$ (Line 5 of Algorithm 1). Based on these values, the status of each constraint is set to either violated or nonviolated.

(ii) Assign-Initial-Weights: during this step the algorithm assigns a fixed weight equal to 1 to each constraint (Line 6 of Algorithm 1). The distribution of weights over constraints is a key factor in the success of the algorithm. During the course of the search, the algorithm forces hard constraints (i.e., those with large weights) to be satisfied, thereby preventing the algorithm from getting stuck at a local optimum at a later stage.

(iii) Stopping criterion: the outer loop (Line 7 of Algorithm 1) determines the stopping criterion of the algorithm. The algorithm stops if a solution has been found (i.e., all the constraints are satisfied) or if a time limit has been reached.

(iv) Random-selected-variable: a starting random variable, from which the search process begins, is selected and added to the set $S$ (Lines 9, 10, and 11 of Algorithm 1).

(v) Inner loop: the inner loop (Lines 12–18 of Algorithm 1) proceeds by repeatedly selecting, for each variable removed from the set $S$, the value from its domain producing the highest score. Given a choice between several equally high scores, the algorithm picks one value at random. The score of a variable $x_i$ is defined as the increase in the number of nonviolated constraints (equivalently, the decrease in the number of violated constraints) if $x_i$ is assigned the value $v$. The score is given by