Abstract

Many optimization problems (from academia or industry) require the use of a local search to find a satisfying solution in a reasonable amount of time, even if optimality is not guaranteed. Usually, local search algorithms operate in a search space which contains complete solutions (feasible or not) to the problem. In contrast, in Consistent Neighborhood Search (CNS), after each variable assignment, the conflicting variables are deleted to keep the partial solution feasible, and the search can stop when all the variables have a value. In this paper, we formally propose a new heuristic solution method, CNS, whose search behavior lies between exhaustive tree search and local search working with complete solutions. We then discuss, with a unified view, the great success of some existing heuristics, which can be considered within the CNS framework, in various fields: graph coloring, frequency assignment in telecommunication networks, vehicle fleet management with maintenance constraints, and satellite range scheduling. Moreover, some lessons are drawn in order to provide guidelines for the adaptation of CNS to other problems.

1. Introduction

An exact method (e.g., branch-and-bound, dynamic programming, Lagrangian relaxation-based methods) guarantees the optimality of the provided solution. However, for a large number of applications and most real-life optimization problems, such methods need a prohibitive amount of time to find an optimal solution, because such problems are NP-hard [1]. For these difficult problems, one should prefer to quickly find a satisfying solution, which is the goal of heuristic solution methods. There mainly exist three families of heuristics: constructive heuristics (a solution is built step by step from scratch, like the greedy algorithm), local search heuristics (a solution is iteratively modified: this will be discussed below), and evolutionary heuristics (a population of solutions is managed, like genetic algorithms and ant algorithms). In this paper, only the context of local search methods will be considered.

A local search heuristic starts with an initial solution and tries to improve it iteratively. At each iteration, a modification, called move, of the current solution is performed in order to generate a neighbor solution. The definition of a move, that is, the definition of the neighborhood structure, depends on the considered problem. The most popular local search methods are simulated annealing [2], tabu search [3], threshold algorithms [4], variable neighborhood search [5], and guided local search [6].

Within a local search context, the usual approach consists in working with complete solutions, that is, each variable has a value and the solution might be feasible or not. In the latter case, a penalty function is often used, which depends on the number of violated constraints. In contrast, in Consistent Neighborhood Search (CNS), partial feasible solutions are used. Thus, not every variable has a value, but there is no constraint violation. In such a case, the goal is to minimize the number of nonassigned variables, and a move is performed in at least two phases: (1) give a value to an unassigned variable 𝑠𝑖, and (2) delete the value of the created conflicting variables (i.e., the variables different from 𝑠𝑖 involved in a constraint violation). An intermediate phase might occur between these two phases, which consists in adjusting the value of conflicting variables under some specific conditions. In this paper, which is an extension of [7] and [8], we formally introduce the CNS methodology and the adaptation of tabu search within its framework, then we discuss, with a unified terminology, the great success of some existing heuristics, which can however be considered as belonging to the CNS methodology, for various NP-hard constrained combinatorial problems. For each problem, the reader is referred to the associated paper to have references on the NP-hard aspect, the literature review, and the detailed experimental conditions (computer, language, etc.). For each problem, comparisons have to be done carefully because the conditions of experimentation were not always the same. Remember however that a heuristic is generally performed until the potential to improve the best encountered solution becomes poor. In addition, for each considered problem, the CNS approach will always be compared with state-of-the-art methods, even if such methods are not very recent.

The paper is organized as follows. In Section 2, the CNS methodology is proposed. Then, heuristics for various problems are presented within a CNS framework: graph coloring (Section 3), frequency assignment with polarization (Section 4), car fleet management with maintenance constraints (Section 5), and satellite range scheduling (Section 6). The paper ends with a conclusion.

2. The CNS Methodology
In this section, we introduce the CNS methodology and situate it among existing optimization methods.

2.1. Presentation of the Method

Let (𝑃) be the considered problem with 𝑛 variables 𝑠1, …, 𝑠𝑛, let 𝑓 be the objective function to minimize, and let 𝐶 be the set of constraints to satisfy. Each variable 𝑠𝑖 can only have a value in its value domain 𝐷𝑖. A solution of (𝑃) is denoted 𝑠 = (𝑠1, …, 𝑠𝑛), where 𝑠𝑖 ∈ 𝐷𝑖. Solution 𝑠 is feasible if it satisfies all the constraints in 𝐶. In most local search methods, the search space contains complete solutions; that is, each variable 𝑠𝑖 has a value in 𝐷𝑖, and the solutions can be feasible or not. If the search space only contains feasible solutions, the goal is generally to directly minimize the given objective function 𝑓 associated with (𝑃); otherwise, the aim often consists in minimizing 𝑓(𝑠) + 𝛼⋅𝑝(𝑠), where 𝑝(𝑠) penalizes the constraint violations associated with 𝑠 and 𝛼 is a parameter which gives more or less importance to the constraint violations. In contrast, a specificity of CNS consists in working with partial and feasible solutions, that is, solutions where some 𝑠𝑖’s do not have a value but all the constraints are satisfied. In such a case, the goal is to minimize the number 𝑓(𝑠) of nonassigned variables in 𝑠, and the process of course stops if 𝑓(𝑠) = 0.

Therefore, three search spaces are possible: (1) the complete and feasible search space 𝑆(feasible), (2) the complete and not necessarily feasible search space 𝑆(penalty), where unfeasible solutions are penalized, and (3) the partial and feasible search space 𝑆(partial). When working in 𝑆(feasible), it can be very difficult to define a move which maintains the feasibility of the solution. When working in 𝑆(penalty), it is challenging to define a move which does not increase 𝑝(𝑠) too much, to tune the above-mentioned parameter 𝛼, and to find a feasible solution, because 𝑆(penalty) is much larger than 𝑆(feasible). We will see that such drawbacks are avoided when working in 𝑆(partial).

An important feature of CNS is the definition of the neighborhood structure in 𝑆(partial). In most local search methods, in order to generate a neighbor solution 𝑠′ from the current solution 𝑠, a move 𝑚 consists in changing the value of one (or more) variable(s) of 𝑠. The set of neighbor solutions of 𝑠 is denoted 𝑁(𝑠). Let 𝑑(𝑠, 𝑠′) be the distance between 𝑠 and 𝑠′ ∈ 𝑁(𝑠). Usually, 𝑑(𝑠, 𝑠′) is proportional to the number of modified variables when moving from 𝑠 to 𝑠′; thus 𝑑(𝑠, 𝑠′) is a constant for all 𝑠′ ∈ 𝑁(𝑠).

In contrast, in CNS, any move 𝑚 is performed in at least two phases. (1) Assignment phase: a value of 𝐷𝑖 is assigned to a nonassigned variable 𝑠𝑖. Let 𝐶(𝑚) be the set of conflicting variables (excluding 𝑠𝑖) created by move 𝑚 (a variable is in conflict if it is involved in at least one constraint violation). (2) Reassignment phase (optional): reduce the set 𝐶(𝑚) as follows: for each variable of 𝐶(𝑚), if it is possible to assign a new admissible value to it without creating new conflicts, do it. (3) Repairing phase: in order to keep the partial solution feasible, remove the value of all the remaining variables of 𝐶(𝑚).

Therefore, the distance between 𝑠 and a neighbor solution in 𝑁(𝑠) is usually not a constant.

In most local search algorithms, the selected neighbor solution 𝑠′ of the current solution 𝑠 is usually (one of) the best solution(s) (according to 𝑓 or 𝑓 + 𝛼⋅𝑝) chosen among a sample of 𝑁(𝑠). Sampling is usually unavoidable because it is too time consuming to evaluate all the neighbor solutions of 𝑠, either because 𝑁(𝑠) is too large or because it is cumbersome to evaluate a single move 𝑚. An important issue is thus to determine the sample (random or not) as well as its size.

In contrast, in CNS, all the neighbor solutions can be considered at each iteration. This is possible in a reasonable amount of time for two reasons: (1) it is quick to evaluate a neighbor solution by incremental computation, as its cost is simply |𝐶(𝑚)|; (2) the number of nonassigned variables in the current solution 𝑠 is in general small when compared, for example, with the size |𝑁(𝑠)| of the neighborhood of 𝑠 in a standard local search approach (working in 𝑆(feasible) or in 𝑆(penalty)).

We have now all the ingredients to formulate a pseudo-code of CNS in Algorithm 1.

Initialization: generate an initial solution 𝑠, set 𝑠* = 𝑠 and 𝑓* = 𝑓(𝑠);
While a stopping time condition is not met and 𝑓* > 0, do
 1. initialize the value of the best move: set 𝑔* = +∞;
 2. generate the best move: for each nonassigned variable 𝑠𝑖 and each value 𝑑𝑗 ∈ 𝐷𝑖, test move 𝑚 = (𝑠𝑖, 𝑑𝑗) on 𝑠 as follows:
   (a) assignment phase: give value 𝑑𝑗 to variable 𝑠𝑖 and compute the associated set 𝐶(𝑚) of conflicting variables;
   (b) reassignment phase (optional): for each variable 𝑠𝑟 of 𝐶(𝑚), if it is possible to assign another admissible value to 𝑠𝑟
   without augmenting the number of violations, do it and remove 𝑠𝑟 from 𝐶(𝑚);
   (c) let 𝑠cand be the so obtained candidate neighbor solution (which might be nonfeasible at this stage);
   (d) update the best candidate move: if |𝐶(𝑚)| < 𝑔*, set 𝑠′ = 𝑠cand and 𝑔* = |𝐶(𝑚)|;
 3. repairing phase on the best move: remove the value of the 𝑔* conflicting variables of 𝑠′, and let the resulting
  solution be the new current solution 𝑠;
 4. update the record: if 𝑓(𝑠) < 𝑓*, set 𝑠* = 𝑠 and 𝑓* = 𝑓(𝑠);
Output: solution 𝑠* (which is a complete feasible solution if 𝑓* = 0);
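To make the pseudocode concrete, here is a minimal, self-contained sketch of Algorithm 1 in Python, illustrated on the n-queens problem (our own choice of example, not from the paper). The tabu mechanism and the optional reassignment phase are omitted, and ties between best moves are broken at random; the function names are ours.

```python
import random

# A sketch of Algorithm 1 on n-queens: variable i is the queen of row i,
# its value is a column, and two queens conflict when they share a column
# or a diagonal. No tabu list, no reassignment phase: a simplification.

def conflicts(s, i, v):
    """Variables of the partial solution s in conflict with assigning s[i] = v."""
    return [j for j, w in s.items()
            if j != i and (w == v or abs(w - v) == abs(j - i))]

def cns(n, max_iters=200000, seed=0):
    rng = random.Random(seed)
    s = {}                                      # partial feasible solution
    for _ in range(max_iters):
        if len(s) == n:                         # f(s) = 0: complete and feasible
            return s
        best, g = None, float("inf")            # g plays the role of g*
        # explore the whole neighborhood: every (unassigned variable, value)
        for i in range(n):
            if i in s:
                continue
            for v in range(n):
                cost = len(conflicts(s, i, v))  # incremental cost |C(m)|
                if cost < g or (cost == g and rng.random() < 0.5):
                    best, g = (i, v), cost
        i, v = best
        for j in conflicts(s, i, v):            # repairing phase
            del s[j]
        s[i] = v                                # assignment phase
    return s

sol = cns(8)
```

Without the tabu list, the random tie-breaking is what keeps this toy version from cycling; the paper's tabu adaptation, discussed next, handles this more robustly.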

In summary, CNS is an approach dealing with partial feasible solutions, which can explore the whole neighborhood of the current solution at each iteration because a straightforward incremental computation can be designed. Many local search methods (e.g., tabu search, simulated annealing, random walk, and threshold algorithms) can be adapted within the framework of CNS.

The adaptation of tabu search within the framework of CNS is now discussed. A generic and standard version of tabu search can be described as follows, assuming that 𝑓 has to be minimized. First, tabu search needs an initial solution as input. Then, the algorithm generates a sequence of neighbor solutions. When a move is performed from 𝑠 to 𝑠′, the inverse of that move is forbidden during the following 𝑡 (parameter) iterations (with some exceptions). The solution 𝑠′ is computed as 𝑠′ = argmin{𝑓(𝑠″) | 𝑠″ ∈ 𝑁′(𝑠)}, where 𝑁′(𝑠) is the subset of 𝑁(𝑠) containing all solutions 𝑠″ which can be obtained from 𝑠 either by performing a move that is not tabu or such that 𝑓(𝑠″) < 𝑓(𝑠*), where 𝑠* is the best solution encountered along the search so far. Usually, 𝑁′(𝑠) is too large, and only a sample of neighbor solutions is selected from 𝑁′(𝑠) to be evaluated. The choice of the sample often has a strong impact on the final results. The process is stopped, for example, when an optimal solution is found (when its value is known) or when a fixed number of iterations have been performed. Many variants and extensions of this basic algorithm can be found, for example, in [9].

Tabu search adapted within the framework of CNS has the following specificities: it works in 𝑆(partial); it minimizes the number 𝑓(𝑠) of nonassigned variables instead of the original objective function; it explores the whole neighborhood of the current solution, relying on an efficient and straightforward incremental computation after each move; and, when a value is given to a variable 𝑠𝑖 (while other values might be adjusted or deleted), it is then tabu to remove the value of 𝑠𝑖 for a certain number of iterations.

2.2. Search Characteristics of CNS

We now compare the general strategy of three kinds of optimization methods: tree search, standard local search, and CNS. These methods have very different ways to visit the search tree, where the root (the top node in Figures 1 to 4) is the empty solution (no variable is assigned), and the leaves (the bottom nodes in Figures 1 to 4) are complete solutions (all the variables have a value), which can be feasible or not. In these four figures, an empty node is not visited by the considered method, whereas a black node is visited, and a black arrow indicates a performed move from one node to another. Let 𝑆 be the set of all the possible nodes in the search tree, and let 𝑋 be the subset of 𝑆 which is mainly visited by a specific algorithm. We will see that 𝑋 differs drastically from one method to the other.

Tree search algorithms visit neighbor nodes in the search tree. The visited subtree 𝑋 is likely to be vertical for Depth-First Search as illustrated in Figure 1, where only a few leaves might be visited. In contrast, Breadth-First Search usually focuses on the top of the search tree because it often needs a prohibitive amount of time to go down, as illustrated in Figure 2 where no leaves are visited.

A very different strategy characterizes standard local search methods: as illustrated in Figure 3, only leaves are visited, which is a major advantage when compared to tree search methods. However, standard local search algorithms usually have the following drawbacks. On the one hand, if constraint violation is forbidden, the search space 𝑋 is not necessarily connected; that is, it is not always possible to join two leaves with a sequence of moves. In such a case, the search might be trapped in a connected component of 𝑆 which does not contain good solutions. On the other hand, if constraint violation is allowed but penalized during the search, the number of leaves is drastically increased, and the search might mainly focus on nonfeasible leaves, as it can be challenging to define the penalty function 𝑝 and to tune its associated parameter 𝛼.

Even if CNS can be considered as a local search method, it mainly explores nodes which are close to the leaves, as illustrated in Figure 4. Because all the leaves correspond to complete and feasible solutions, CNS stops as soon as a leaf is reached.

CNS can start its search from the root, that is, from the solution where no variable has a value. In such a case, its first iterations basically consist in greedily assigning a value to a variable until the current solution becomes saturated, that is, until the repairing phase becomes unavoidable. CNS can also start its search from a node located below the root if an external procedure is used to generate its initial solution. The more efficient such an external procedure is, the closer the first explored node will be to the bottom of 𝑆. Last but not least, CNS can perform jumps in 𝑆. Thus, the search space 𝑋 is likely to be connected.

Therefore, CNS does not encounter the above-described drawbacks associated with tree search and standard local search methods. Notice however that there exists an implicit enumeration method able to perform jumps over 𝑆, called Resolution Search and proposed in [10].

3. Graph Coloring

The main reference associated with this section is [11]. The authors proposed a tabu search heuristic for the graph coloring problem, which we denote CNS-GCP for the sake of simplicity.

3.1. Description of the Problem

Given a graph 𝐺=(𝑉,𝐸) with vertex set 𝑉 and edge set 𝐸, the 𝑘-coloring problem (𝑘-GCP) consists in assigning an integer (called color) in {1,,𝑘} to every vertex such that two adjacent vertices have different colors. The Graph Coloring Problem (GCP) consists in finding a 𝑘-coloring with the smallest possible value of 𝑘 (called the chromatic number and denoted 𝜒). Both problems are NP-hard [1], and many heuristics were proposed to solve them. For a recent survey, the reader is referred to [12]. Starting at most with 𝑘=|𝑉|, an upper bound on the chromatic number of 𝐺 can be determined by solving a series of 𝑘-GCPs with decreasing values of 𝑘 until no feasible 𝑘-coloring can be obtained. Only such a strategy, which leads to the best results, will be considered below.

3.2. Description of the Method within a CNS Framework

The best 𝑘-coloring heuristics are based on two approaches. In 𝑆(penalty), the constraint that the endpoints of an edge should have different colors is relaxed. Thus, the strategy consists in allowing conflicts (a conflict occurs if two adjacent vertices have the same color) while minimizing the number of conflicts. In a local search context, a straightforward move is thus to change the color of a conflicting vertex, as proposed in [13].

In contrast, in 𝑆(partial), the constraint imposing that all vertices should be colored is relaxed, but conflicts are forbidden. We have 𝐷𝑖 = {1, …, 𝑘} for each 𝑠𝑖. In such a case, the value 𝑠𝑖 of a solution 𝑠 = (𝑠1, …, 𝑠𝑛) in 𝑆(partial) indicates the color of vertex 𝑖, which is in the set {1, …, 𝑘}, and there is no value (or an artificial value 0) if vertex 𝑖 is not colored. The goal is to minimize the number 𝑓(𝑠) = |{𝑖 | 𝑠𝑖 = 0}| of uncolored vertices. A move 𝑚 = (𝑠𝑖, 𝑐) consists in first assigning a color 𝑐 to an uncolored vertex 𝑖 (assignment phase), and then (repairing phase) in removing the color of the created conflicting vertices (i.e., all the vertices adjacent to 𝑖 which have color 𝑐). Then, all the moves which would remove the color 𝑐 from vertex 𝑖 are tabu for a certain number of iterations. This number is dynamically managed and is proportional to the variation of the objective function 𝑓. At each iteration, the best nontabu move is performed (ties are randomly broken).
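The move and tabu mechanism just described can be sketched as follows. This is an illustrative simplification with our own function names: the tabu tenure is fixed rather than dynamically managed, there is no aspiration criterion, and the decreasing-k strategy of Section 3.1 wraps the k-GCP solver.

```python
import random

# A sketch of the CNS-GCP move: color an uncolored vertex, uncolor the
# adjacent vertices carrying that color, and forbid uncoloring the newly
# colored vertex for a fixed tenure (the paper uses a dynamic tenure).
# adj[i] lists the neighbors of vertex i; color 0 means "uncolored".

def cns_k_gcp(adj, k, max_iters=20000, tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(adj)
    s = [0] * n
    tabu = {}                                # (vertex, color) -> protected until
    for it in range(max_iters):
        uncolored = [i for i in range(n) if s[i] == 0]
        if not uncolored:
            return s                         # feasible k-coloring found
        moves = []
        for i in uncolored:
            for c in range(1, k + 1):
                conf = [j for j in adj[i] if s[j] == c]
                # tabu: the move would uncolor a recently colored vertex
                if any(tabu.get((j, c), -1) >= it for j in conf):
                    continue
                moves.append((len(conf), rng.random(), i, c, conf))
        if not moves:
            continue                         # wait for tabu statuses to expire
        _, _, i, c, conf = min(moves)        # best move, ties broken at random
        for j in conf:                       # repairing phase
            s[j] = 0
        s[i] = c                             # assignment phase
        tabu[(i, c)] = it + tenure
    return None                              # no k-coloring found in the budget

def upper_bound(adj):
    """Decreasing-k strategy: solve a series of k-GCPs until failure."""
    k, best = len(adj), None
    while k >= 1:
        col = cns_k_gcp(adj, k)
        if col is None:
            break
        best, k = col, max(col) - 1
    return best
```

For example, on a 5-cycle (an odd cycle, hence chromatic number 3), this strategy should settle on a proper 3-coloring after failing at k = 2.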

3.3. Numerical Comparison with Other Methods

It is shown in [14] and in [11] that the most difficult benchmark instances from the DIMACS Challenge (see ftp://dimacs.rutgers.edu/pub/challenge/graph/) are the ones presented in Table 1. Below, CNS-GCP is compared with other state-of-the-art coloring heuristics, which are Tabucol [13], GH [15], MOR [16], and MMT [17]. Tabucol is a standard tabu search working in 𝑆(penalty). GH, MOR, and MMT are all population-based methods which use local search procedures. GH uses Tabucol to improve offspring solutions, whereas MMT uses a procedure close to CNS-GCP. MOR works in the same search space as CNS-GCP but uses simulated annealing instead of tabu search, and much more sophisticated moves.

A CPU time limit of 60 minutes on a Pentium 4 (2 GHz, 512 MB of RAM) was considered for CNS-GCP. The first two columns of Table 1, respectively, indicate the name and the number 𝑛 of vertices of the graph. The third column contains two numbers, the first one being the chromatic number (a “?” is put when it is not known), and the second one the best upper bound 𝑘 ever found by a heuristic. Then, for every algorithm, the smallest 𝑘 such that a feasible 𝑘-coloring was found is reported. We can observe that CNS-GCP is rather competitive with the best coloring methods. However, it is much simpler!

4. Frequency Assignment with Polarization

The main reference associated with this section is [18], where the frequency assignment problem with polarization (FAPP) was considered. The authors proposed a tabu search approach working in 𝑆(partial), denoted CNS-FAPP below.

4.1. Description of the Problem

The FAPP concerns a Hertzian telecommunication network made up of antennae located at a set of geographical sites. A Hertzian liaison joins two sites by one or more paths. Hence, a path is a unidirectional radioelectric bond, established between antennae at distinct sites, which has a given frequency and polarization. Let 𝐹𝑖 and 𝑃𝑖, respectively, be the allowed frequency set and polarization set for path 𝑖, where 𝑃𝑖 ∈ {{−1}, {1}, {−1, 1}}. The FAPP consists in finding, for each path, a frequency and a polarization satisfying the following set of constraints.

Let 𝐼𝐶 be the set of the imperative constraints, which are of four types: 𝑝𝑖 = 𝑝𝑗, 𝑝𝑖 ≠ 𝑝𝑗, |𝑓𝑖 − 𝑓𝑗| = 𝜀𝑖𝑗, and |𝑓𝑖 − 𝑓𝑗| ≠ 𝜀𝑖𝑗, where 𝜀𝑖𝑗 ≥ 0. In addition, some electromagnetic compatibility constraints (𝐸𝐶𝐶s) require a minimal distance between the frequencies of two paths: |𝑓𝑖 − 𝑓𝑗| ≥ 𝛾𝑖𝑗 if 𝑝𝑖 = 𝑝𝑗, and |𝑓𝑖 − 𝑓𝑗| ≥ 𝛿𝑖𝑗 if 𝑝𝑖 ≠ 𝑝𝑗. This constraint controls the interference phenomenon, which is why the required distance between frequencies depends on their polarizations; it is smaller if the polarizations are different (i.e., 𝛿𝑖𝑗 ≤ 𝛾𝑖𝑗). Unfortunately, most problems do not have feasible solutions because the domains are too restrictive or the requirements too numerous. Consequently, some deterioration is allowed by permitting some interference, which has to be minimized. With this aim, for the 𝐸𝐶𝐶 constraints, a progressive relaxation is authorized and expressed by relaxation levels; level 0 corresponds to no relaxation, and going from level 𝑘 to level 𝑘 + 1 involves the relaxation of some or all the frequency gaps, the maximum relaxation level being 10. Formally, the required gaps satisfy 𝛾⁰𝑖𝑗 ≥ ⋯ ≥ 𝛾ᵏ𝑖𝑗 ≥ ⋯ ≥ 𝛾¹⁰𝑖𝑗 if 𝑝𝑖 = 𝑝𝑗, and 𝛿⁰𝑖𝑗 ≥ ⋯ ≥ 𝛿ᵏ𝑖𝑗 ≥ ⋯ ≥ 𝛿¹⁰𝑖𝑗 if 𝑝𝑖 ≠ 𝑝𝑗 (1); in an artificial 11th level, 𝛾¹¹𝑖𝑗 = 𝛿¹¹𝑖𝑗 = 0, so there is no 𝐸𝐶𝐶.

Let 𝐸𝐶𝐶𝑘 be the set of 𝐸𝐶𝐶 constraints at level 𝑘 (for 0 ≤ 𝑘 ≤ 10); that is, each constraint belonging to 𝐸𝐶𝐶𝑘 is associated with its 𝛾ᵏ𝑖𝑗 and 𝛿ᵏ𝑖𝑗 gaps. More precisely, |𝑓𝑖 − 𝑓𝑗| ≥ (|𝑝𝑖 + 𝑝𝑗|/2)⋅𝛾ᵏ𝑖𝑗 + (|𝑝𝑖 − 𝑝𝑗|/2)⋅𝛿ᵏ𝑖𝑗. Accordingly, a feasible solution at level 𝑘 is an assignment of all the paths satisfying all the imperative constraints 𝐼𝐶 and all the 𝐸𝐶𝐶𝑘 constraints. If such a solution exists, the problem is said to be 𝑘-feasible. Every problem is assumed to be 11-feasible.
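Since 𝑝𝑖, 𝑝𝑗 ∈ {−1, 1}, the two coefficients in this combined inequality act as 0/1 selectors: |𝑝𝑖 + 𝑝𝑗|/2 equals 1 exactly when the polarizations are equal, and |𝑝𝑖 − 𝑝𝑗|/2 exactly when they differ. The small sketch below (with made-up gap values and our own function name) illustrates this:

```python
# The level-k ECC check: |p_i + p_j|/2 and |p_i - p_j|/2 select which of
# the two gaps (gamma_k or delta_k) applies, since polarizations are +-1.

def ecc_ok(fi, pi, fj, pj, gamma_k, delta_k):
    required = abs(pi + pj) / 2 * gamma_k + abs(pi - pj) / 2 * delta_k
    return abs(fi - fj) >= required

# Same polarization: the larger gap gamma_k applies (|100 - 107| = 7 < 10).
same = ecc_ok(100, 1, 107, 1, gamma_k=10, delta_k=5)
# Opposite polarizations: the smaller gap delta_k applies (7 >= 5).
diff = ecc_ok(100, 1, 107, -1, gamma_k=10, delta_k=5)
```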

Consequently, the objective of the problem is, in order of priority, to (1) minimize the lowest relaxation level 𝑘 for which a 𝑘-feasible solution exists; (2) minimize 𝑉(𝑘−1), the number of constraints of 𝐸𝐶𝐶𝑘−1 violated at level 𝑘 − 1; and (3) minimize the sum of the 𝑉(𝑖)’s for 0 ≤ 𝑖 < 𝑘 − 1, where 𝑉(𝑖) denotes the number of constraints of 𝐸𝐶𝐶𝑖 violated at level 𝑖.
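One convenient way to handle such a three-level hierarchical objective is lexicographic comparison. The sketch below (our own encoding, with a hypothetical list v of per-level violation counts) maps a candidate solution to a tuple, so that Python's built-in tuple ordering ranks solutions:

```python
# Lexicographic encoding of the FAPP objective: (level k, violations of
# ECC_{k-1}, sum of violations at levels below k-1). v[i] is a hypothetical
# count of violated ECC_i constraints.

def fapp_objective(k, v):
    return (k, v[k - 1] if k >= 1 else 0, sum(v[:max(k - 1, 0)]))

# A solution feasible at level 3 always beats one feasible at level 4,
# regardless of their lower-level violation counts.
a = fapp_objective(3, [9, 9, 9, 0])
b = fapp_objective(4, [0, 0, 0, 1, 0])
```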

4.2. Description of the Method within a CNS Framework

The strategy adopted for the resolution consists in transforming the FAPP optimization problem into 11 decision problems according to the relaxation level on the 𝐸𝐶𝐶; each FAPP(𝑘) contains both the 𝐼𝐶 and the 𝐸𝐶𝐶𝑘 constraints. This enables us to introduce some filtering treatments to reduce the frequency and the polarization domains. Starting from level 𝑘=11 where an initial solution is provided by a greedy constructive method, the general algorithm works in a downward fashion: each time a 𝑘-feasible solution is found, a lower level is considered.

A solution 𝑠 = (𝑠1, …, 𝑠𝑛) indicates for each path 𝑖 its associated resource (𝑓𝑖, 𝑝𝑖), where 𝑓𝑖 ∈ 𝐹𝑖 and 𝑝𝑖 ∈ 𝑃𝑖. Thus, 𝐷𝑖 = 𝐹𝑖 × 𝑃𝑖. In the assignment phase, a pair (𝑓𝑖, 𝑝𝑖) is given to the chosen nonassigned candidate path 𝑖. Then, in the repairing phase, this assignment is propagated to its neighbors in the constraint network, and, if necessary, the conflicting neighboring values are deleted in order to satisfy the 𝐼𝐶 and 𝐸𝐶𝐶𝑘 constraints. This was done efficiently using incremental computing on specific data structures, allowing variable domains to be dynamically reduced.

A tabu list is needed to prevent cycling, which occurs when there is an attempt to reinstantiate the variables most recently deleted from the current partial solution. Indeed, all the values (𝑓𝑗, 𝑝𝑗) likely to delete the value 𝑠𝑖 = (𝑓𝑖, 𝑝𝑖) affected by the move are classified tabu during some iterations; the tabu tenure is proportional to the number of times this resource was assigned.

The considered problem was the subject of the Challenge ROADEF 2001 (organized by the French Society of Operations Research and Decision Analysis), involving 27 research teams (see http://uma.ensta-paristech.fr/conf/roadef-2001-challenge/). Table 2 presents the results obtained by the five best teams. During the competition, only one run was allowed, and the computing time was limited to one hour on a Pentium 3 (500 MHz, 128 MB of RAM). Table 2 details the hierarchical objective function, by giving first the relaxation level 𝑘, then the sum of all the unsatisfied 𝐸𝐶𝐶𝑘−1, and finally the sum of all the unsatisfied 𝐸𝐶𝐶𝑖, where 𝑖 varies from 0 to 𝑘 − 2. The first column indicates the instance names with the instance number and the number 𝑛 of considered paths. For example, 02-0250 is instance 2 with 250 paths.

The first approach, developed by Bisaillon’s team and referred to as TS-VN, is a local search based on tabu search with a variable neighborhood. The algorithm 𝐻+𝐶𝑃, developed by Caseau, combines constraint propagation with heuristics such as Large Neighborhood Search and Limited Discrepancy Search. The third method, 𝐿𝑆-𝐶𝐶, developed by Gavranovic, is a typical local search guided by the constraint cost; at each level, it builds frequency trees, ignoring the polarization constraints, and then it tries to optimize the polarization allocation. In a similar way, classical tabu search procedures (simply denoted Tabu) working with complete solutions were implemented by Schindl’s team. Finally, the last column gives the results obtained by CNS-FAPP.

We can observe the efficiency of CNS-FAPP when compared to the other methods. Care is needed because the indicated values are the best among 10 runs. CNS-FAPP finds the optimal 𝑘 level for 37 instances out of 40. And last but not least, CNS-FAPP was the winner of the Challenge!

5. Fleet Management with Maintenance Constraints

The main reference associated with this section is [19], where a rather complex solution method was proposed for a problem which can be formulated as a car fleet management problem with maintenance constraints (denoted CAR below for the sake of simplicity). The particularity of the problem is that feasible solutions are very easy to find, but they can cost a lot. Thus, 𝑆(partial) was designed to avoid assigning the most expensive value to each variable.

5.1. Description of the Problem

The problem retained for the Challenge ROADEF 1999 was an inventory management problem (see http://www.roadef.org/content/roadef/challenge.htm for the details), where a cost function has to be minimized. A car rental company manages a stock of cars of different types. It receives requests from customers asking for cars of specific types for a given time horizon. Basically a request is characterized by its start and end times, by a required car type, and by the number of required cars. All requests are supposed to be known for the considered time horizon. The satisfaction of all customer requests is mandatory. If there are not enough cars available in stock, the company can react in three different ways: (1) upgrading: it can offer a better car type to the customer (but the company encounters the additional associated cost); (2) subcontracting: the company can decide to subcontract some requests to other providers, which is generally the most expensive alternative; (3) purchasing new cars, which then belong to the stock of the company for the rest of the time horizon.

Two types of maintenance constraints make the problem difficult: (1) a maximum time of use without maintenance is given for each car type (each maintenance has a duration, a cost and a number of workers needed to perform it); (2) the company has a fixed number of maintenance workers, which means that the maintenances should be scheduled so that the capacity of the workshop is never exceeded. In addition, the following costs are also known: the costs (fixed and time dependent) associated with the assignment of a car to a request, and the inventory cost per day of a car in stock (rented or not). The goal is to satisfy all the requests while minimizing the total cost.

5.2. Description of the Method within a CNS Framework

The general pseudocode of the method, denoted CNS-CAR, is summarized in Algorithm 2. First, an initial solution is greedily generated. Step 1 of the main loop tries to improve the current solution without changing the set of purchased cars (with the use of two tabu search procedures working in 𝑆(partial), denoted TS1-CAR and TS2-CAR below), while the second step generates a new solution with a different set of purchased cars. The stopping criterion is a time limit of one hour, as imposed by the organizers of the Challenge.

Initialization: generate an initial solution 𝑠 ;
While the time limit is not reached, do
 1. try to improve 𝑠 without changing the set of purchased cars, with the successive use of TS1-CAR and TS2-CAR;
 2. update 𝑠 by purchasing a car or by removing a previously purchased car (the requests associated with a removed car
  are initially subcontracted).

In TS1-CAR, a solution 𝑠 can be modeled as follows. Let 𝑠𝑟 = 𝑡 if request 𝑟 is performed by a car of type 𝑡 of the fleet (purchased or not), and 𝑠𝑟 has no value (or an artificial value, say 0) if request 𝑟 is subcontracted to another provider. Thus, 𝑆(partial) is defined in order to minimize the number of subcontracted requests. A neighbor 𝑠′ of a solution 𝑠 is obtained by assigning a car 𝑐 of type 𝑡 to a subcontracted request 𝑟 (i.e., the corresponding 𝑠𝑟 equals 𝑡 instead of 0). To make such a change feasible, in the repairing phase, requests covered by 𝑐 that overlap with 𝑟 are subcontracted (i.e., the associated 𝑠𝑗 values are set to 0), and the maintenances of car 𝑐 are possibly rescheduled in a greedy fashion while satisfying the maintenance constraints. If it is not possible, other 𝑠𝑗’s such that 𝑠𝑗 = 𝑡 might be set to 0 in order to create more room to schedule the maintenances. If it is still not possible to generate a feasible schedule for the maintenances (because of the maintenance schedules of the other car types), such a move is not considered further.
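The core of the TS1-CAR repairing phase can be sketched as follows, ignoring the maintenance rescheduling that the actual method handles greedily; the (id, start, end) request representation and the function names are ours:

```python
# A sketch of the TS1-CAR move without maintenance handling: assigning a
# car to a subcontracted request bumps every overlapping request already
# covered by that car back to subcontracting (the repairing phase).

def overlaps(a, b):
    """Half-open interval overlap between two (id, start, end) requests."""
    return a[1] < b[2] and b[1] < a[2]

def assign_request(schedule, car, req):
    """Assign req to car; return the requests bumped to subcontracting."""
    bumped = [r for r in schedule[car] if overlaps(r, req)]
    schedule[car] = [r for r in schedule[car] if not overlaps(r, req)]
    schedule[car].append(req)
    return bumped

schedule = {"car1": [("r1", 0, 10), ("r2", 12, 20)]}
bumped = assign_request(schedule, "car1", ("r3", 8, 14))
```

Here the new request spans [8, 14) and overlaps both existing requests, so both are bumped; whether the move is worthwhile is then measured by the number of bumped (i.e., newly subcontracted) requests, playing the role of |𝐶(𝑚)|.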

TS2-CAR is an extension of TS1-CAR in the following sense: (1) it works on several car types during the same move; (2) it tries to reduce the total cost not only by assigning cars to subcontracted requests, but also by avoiding upgrades; (3) a reassignment phase is performed; (4) the repairing phase has more options to validate the move proposed in the assignment phase. A neighbor 𝑠′ of a solution 𝑠 is obtained by assigning a car of type 𝑡 to a request 𝑟, where 𝑟 is subcontracted or covered by a car of type 𝑡′ ≠ 𝑡 in 𝑠, type 𝑡′ being an upgrade of type 𝑡. In other words, 𝑠𝑟 equals 𝑡 instead of 0 or 𝑡′. The reassignment and repairing phases are performed simultaneously as follows: all the requests 𝐶𝑡 covered by the cars of type 𝑡 might be reassigned within car type 𝑡 (while considering an exact method for a specific case of the graph coloring problem), and it is allowed to subcontract some requests of 𝐶𝑡. In such a phase, the maintenance schedule of all the cars might change (in a greedy fashion or by the use of an exact method). In the two tabu procedures, when a request 𝑟 is assigned to a car type 𝑡 (i.e., 𝑠𝑟 is set equal to 𝑡), it is then tabu to remove the value 𝑡 from 𝑠𝑟 for a certain number of iterations.

Diversification procedures were also used, based on the following idea: the requests which have not been subcontracted for a large number of iterations are subcontracted, in order to make room for other requests in the schedule.

5.3. Numerical Comparison with Other Methods

In Tables 3 and 4, the results for the 16 benchmark instances of the Challenge are reported. CNS-CAR is compared with the four best methods (among the thirteen proposed heuristics) of the Challenge. The winners of the contest were Briant and Bouzgarrou. Their algorithm mainly relies on linear programming ignoring the maintenance constraints, and then adjusts the solution according to the maintenance constraints. The name of an instance is coded with a vector (𝑥, 𝑦, 𝑧, 𝑤), where 𝑥 is the number of requests, 𝑦 is the number of car types, 𝑧 is the capacity of the workshop, and 𝑤 is equal to 𝑏 if purchases are allowed, and to 𝑛𝑏 otherwise. The time horizon of all instances is [0, 730], corresponding to a period of 2 years.

The algorithm was run with a time limit of one hour on a Pentium Pro (200 MHz, 64 MB of RAM), as imposed by the organizers of the Challenge. The results are shown in Table 3 (for instances where purchases are forbidden) and Table 4 (associated with the same instances, but where purchases are allowed). The column labeled 𝐵𝑒𝑠𝑡 contains the best known solution for each instance. An asterisk is put when CNS-CAR was able to equal or improve the previous best known solution. Some of these best results were obtained when using different parameters from those mentioned above (for tuning purposes) or by running CNS-CAR for more than one hour. The next four columns contain the percentage gap with respect to 𝐵𝑒𝑠𝑡 obtained by the four best methods, labeled with the initials of the members of each team, namely, BB (for Briant and Bouzgarrou), AGHKU (for Asdemir, Karslioğlu, Gürbüz, and Ünal), B (for Bayrak), and DD (for Dhaenens-Flipo and Durand). The next column contains the percentage gap with respect to 𝐵𝑒𝑠𝑡 obtained with CNS-CAR. For each instance, ten runs of CNS-CAR were executed, and average results are reported. The last line of each column indicates average results. We can observe that CNS-CAR gives on average better results than those obtained by the four best competitors of the Challenge.

6. Satellite Range Scheduling

The main reference associated with this section is [20], in which the problem is referred to as the daily photograph scheduling problem of an earth observation satellite (denoted simply SAT below). The authors proposed a tabu search approach working in 𝑆(partial), denoted CNS-SAT below. Note that CNS-like approaches were also very successfully adapted to other satellite range scheduling problems: the multiresource satellite range scheduling problem [21], in which more than one resource is available, and a satellite range scheduling problem with partial acquisition and transition times [22].

6.1. Description of the Problem

The considered satellite range scheduling problem can be described as follows [23]. Let 𝑃 = {𝑝1, …, 𝑝𝑛} be the set of 𝑛 candidate photographs which can be scheduled to be taken on the next day. A set of possibilities is associated with each photograph 𝑝𝑖, corresponding to the different ways to take 𝑝𝑖: (1) for a monophotograph 𝑝𝑖, there are three possibilities, because a monophotograph can be taken by any of the three cameras (front, middle, and rear) on the satellite; (2) for a stereophotograph 𝑝𝑖, there is one single possibility, because a stereophotograph requires simultaneously the front and the rear camera. With each monophotograph 𝑝𝑖 ∈ 𝑃 are associated three pairs of elements (𝑝𝑖, camera_1), (𝑝𝑖, camera_2), and (𝑝𝑖, camera_3). Similarly, with each stereophotograph 𝑝𝑖 ∈ 𝑃 is associated one pair (𝑝𝑖, camera_13). Letting 𝑛1 and 𝑛2 be, respectively, the number of mono- and stereophotographs in 𝑃 (where 𝑛 = 𝑛1 + 𝑛2), there are in total 𝑚 = 3𝑛1 + 𝑛2 possible pairs of elements for the given set 𝑃 of candidates. Now, associating a binary (decision) variable 𝑠𝑖 with each such pair, a photograph schedule corresponds to a binary vector 𝑠 = (𝑠1, 𝑠2, …, 𝑠𝑚), where 𝑠𝑖 = 1 if the corresponding pair (photo, camera) is present in the schedule, and 𝑠𝑖 = 0 otherwise. For example, if 𝑃 = {𝑝1, 𝑝2, 𝑝3}, where 𝑝1 and 𝑝2 are monophotographs and 𝑝3 is a stereophotograph, then 𝑠 = (1, 0, 0, 0, 0, 0, 1) represents a schedule in which 𝑝1 is taken by camera 1, 𝑝2 is rejected, and 𝑝3 is taken by cameras 1 and 3.

The SAT problem is to find a subset 𝑃′ of 𝑃 which satisfies all the imperative constraints and maximizes the sum of the profits of the photographs in 𝑃′. The objective function can be defined as follows. First, the profit of a pair (𝑝, camera) (or its 0-1 variable) is defined as the profit of the photograph 𝑝. The total profit of all the pairs of the given set 𝑃 is then represented by a vector 𝑔 = (𝑔1, 𝑔2, …, 𝑔𝑚), where 𝑔𝑖 = 𝑔𝑗 (𝑖 ≠ 𝑗) if 𝑔𝑖 and 𝑔𝑗 correspond to two different pairs of elements involving the same photograph 𝑝, that is, (𝑝, camera_𝑥) and (𝑝, camera_𝑦). Then the total profit value of a schedule 𝑠 = (𝑠1, 𝑠2, …, 𝑠𝑚) is the sum of the profits of the photographs in 𝑠, that is, 𝑓(𝑠) = ∑_{𝑖=1}^{𝑚} 𝑔𝑖𝑠𝑖.

A capacity constraint is the following. A size is associated with each photograph 𝑝𝑖, which represents the amount of memory required to record 𝑝𝑖 when it is taken. The size of a pair (𝑝, camera) (or its 0-1 variable) is defined as the size of the photograph 𝑝. The total size of all the pairs of the given set 𝑃 is then represented by a vector 𝑐 = (𝑐1, 𝑐2, …, 𝑐𝑚), where 𝑐𝑖 = 𝑐𝑗 (𝑖 ≠ 𝑗) if 𝑐𝑖 and 𝑐𝑗 correspond to two different pairs of elements involving the same photograph 𝑝, that is, (𝑝, camera_𝑥) and (𝑝, camera_𝑦). The capacity constraint states that the sum of the sizes of the photographs in a schedule 𝑠 = (𝑠1, 𝑠2, …, 𝑠𝑚) cannot exceed the maximal recording capacity on board, which is expressed as ∑_{𝑖=1}^{𝑚} 𝑐𝑖𝑠𝑖 ≤ Max_capacity.
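The objective and the capacity constraint are both linear, so they transcribe directly. A small sketch, with made-up numeric values for the seven pairs of the running example:

```python
# Direct transcription of the two formulas: the total profit
# f(s) = sum of g_i * s_i, and the check sum of c_i * s_i <= Max_capacity.

def profit(s, g):
    """Total profit f(s) of schedule s under profit vector g."""
    return sum(gi * si for gi, si in zip(g, s))

def capacity_ok(s, c, max_capacity):
    """True if the schedule fits the on-board recording capacity."""
    return sum(ci * si for ci, si in zip(c, s)) <= max_capacity

g = [5, 5, 5, 8, 8, 8, 12]  # pairs of the same photograph share one profit
c = [2, 2, 2, 3, 3, 3, 4]   # and one size (values made up for illustration)
s = [1, 0, 0, 0, 0, 0, 1]
```

Here 𝑓(𝑠) = 5 + 12 = 17 and the used capacity is 2 + 4 = 6.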

Binary constraints involving the nonoverlapping of two trials, the minimal transition time between two successive trials of a camera, and some constraints involving limitations on instantaneous data flow are conveniently expressed by simple relations over two pairs (photo, camera). A binary constraint forbids the simultaneous presence of a pair (𝑝𝑖, 𝑘𝑖) and another pair (𝑝𝑗, 𝑘𝑗) in a schedule. If 𝑠𝑖 and 𝑠𝑗 are the corresponding decision variables of two such pairs, then a binary constraint is defined as follows: 𝑠𝑖 + 𝑠𝑗 ≤ 1. Let C2 denote the set of all such pairs (𝑠𝑖, 𝑠𝑗) which should verify the above binary constraint.

Some constraints involving limitations on instantaneous data flow cannot be expressed in the form of binary constraints as above. These remaining constraints may however be expressed by relations over three pairs (photo, camera). A ternary constraint forbids the simultaneous presence of three pairs (𝑝𝑖, 𝑘𝑖), (𝑝𝑗, 𝑘𝑗), and (𝑝𝑙, 𝑘𝑙). Letting 𝑠𝑖, 𝑠𝑗, and 𝑠𝑙 be the decision variables corresponding to these pairs, such a ternary constraint is written as follows: 𝑠𝑖 + 𝑠𝑗 + 𝑠𝑙 ≤ 2. Let C3_1 denote the set of all such triplets (𝑠𝑖, 𝑠𝑗, 𝑠𝑙) which should verify this ternary constraint.

Finally, we need to be sure that a schedule contains no more than one pair from {(𝑝, 𝑘𝑖), (𝑝, 𝑘𝑗), (𝑝, 𝑘𝑙)} for any (mono) photograph 𝑝. Letting 𝑠𝑖, 𝑠𝑗, and 𝑠𝑙 be the decision variables corresponding to these pairs, this ternary constraint is expressed as 𝑠𝑖 + 𝑠𝑗 + 𝑠𝑙 ≤ 1. Clearly, there are exactly 𝑛1 ternary constraints of this type. Let C3_2 denote the set of all such triplets (𝑠𝑖, 𝑠𝑗, 𝑠𝑙) which verify this second type of ternary constraint. C3 denotes the union of C3_1 and C3_2, that is, C3 = C3_1 ∪ C3_2.
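The three constraint families can be checked mechanically. A sketch, assuming the constraint sets are given as tuples of variable indices (a representation chosen here for illustration):

```python
# Feasibility test for a schedule s against the binary constraints C2
# (s_i + s_j <= 1) and the two families of ternary constraints:
# C3_1 (s_i + s_j + s_l <= 2) and C3_2 (s_i + s_j + s_l <= 1).
# Constraint sets are given as tuples of variable indices.

def feasible(s, C2, C3_1, C3_2):
    if any(s[i] + s[j] > 1 for i, j in C2):
        return False
    if any(s[i] + s[j] + s[l] > 2 for i, j, l in C3_1):
        return False
    # C3_2: at most one camera selected per monophotograph
    if any(s[i] + s[j] + s[l] > 1 for i, j, l in C3_2):
        return False
    return True
```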

6.2. Description of the Method

In contrast with the previous problems, where all the constraints are considered to define the search space, a partially constrained search space C is considered here, composed of all binary vectors of 𝑚 elements satisfying the constraints C2 and C3 above. The relaxation of the capacity constraint helps to obtain better results and to accelerate the search. Let 𝑠 = (𝑠1, 𝑠2, …, 𝑠𝑚) ∈ C and 𝑠′ = (𝑠′1, 𝑠′2, …, 𝑠′𝑚); then 𝑠′ is a neighbor of 𝑠, that is, 𝑠′ ∈ 𝑁(𝑠), if and only if the following conditions are verified: (1) there is exactly one 𝑖 such that 𝑠𝑖 = 0 and 𝑠′𝑖 = 1 (1 ≤ 𝑖 ≤ 𝑚); (2) for the above 𝑖, 𝑠′𝑗 = 0 for all (𝑠𝑖, 𝑠𝑗) ∈ C2 (1 ≤ 𝑗 ≤ 𝑚); and (3) for the above 𝑖, 𝑠′𝑗 + 𝑠′𝑘 ≤ 1 for all (𝑠𝑖, 𝑠𝑗, 𝑠𝑘) ∈ C3_1 (1 ≤ 𝑗, 𝑘 ≤ 𝑚).

Thus, a neighbor of 𝑠 can be obtained by adding a pair (photo, camera) (i.e., flipping a variable 𝑠𝑖 from 0 to 1) in the current schedule and then dropping some pairs (photo, camera) (i.e., flipping some 𝑠𝑗’s from 1 to 0) to repair binary and ternary constraint violations.
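This add-and-repair neighborhood can be sketched as follows. The function name and the index-tuple representation of C2 and C3_1 are illustrative assumptions; the repair logic follows conditions (2) and (3) above.

```python
# Sketch of the add-and-repair neighborhood: set s_i = 1, then flip to 0
# any variable that now conflicts with s_i through a binary constraint
# (s_i + s_j <= 1) or a ternary constraint of C3_1 (sum <= 2).

def neighbor(s, i, C2, C3_1):
    """Return a neighbor of s obtained by setting s_i = 1 and repairing."""
    s = list(s)
    s[i] = 1
    for a, b in C2:                  # binary repairs
        if i == a and s[b]:
            s[b] = 0
        elif i == b and s[a]:
            s[a] = 0
    for trip in C3_1:                # ternary repairs
        if i in trip and sum(s[k] for k in trip) > 2:
            # drop one of the other two selected variables of the triple
            s[next(k for k in trip if k != i and s[k])] = 0
    return s
```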

During the search, the capacity constraint may be violated by the current solution 𝑠=(𝑠1,𝑠2,,𝑠𝑚); that is, the total size of 𝑠 may exceed the maximal allowed capacity. To satisfy the capacity constraint, the following mechanism is used. Each time the current solution is improved, the capacity constraint is checked. If the constraint is violated, the solution is immediately repaired by suppressing the elements 𝑠𝑖 which have the worst ratio 𝑔𝑖/𝑐𝑖 until the capacity constraint is satisfied.
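The repair rule above (drop the selected pair with the worst profit-to-size ratio until the schedule fits) can be written compactly. A sketch with an assumed function name and illustrative data:

```python
# Sketch of the periodic capacity repair: while the total size exceeds
# Max_capacity, suppress the selected element with the worst g_i / c_i.

def repair_capacity(s, g, c, max_capacity):
    s = list(s)
    while sum(ci * si for ci, si in zip(c, s)) > max_capacity:
        worst = min((i for i, si in enumerate(s) if si),
                    key=lambda i: g[i] / c[i])  # worst profit/size ratio
        s[worst] = 0
    return s
```

For example, with profits [10, 1, 6], sizes [5, 5, 3], and a capacity of 8, the schedule (1, 1, 1) uses 13 units; the middle element has the worst ratio (1/5) and is dropped first, after which the schedule fits.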

Each time a move is carried out, a single variable 𝑠𝑖 flips from 0 to 1, and several 𝑠𝑗’s flip from 1 to 0. It is then tabu to flip these 𝑠𝑗 values back from 0 to 1 during tabu(𝑗) iterations, where tabu(𝑗) = 𝐶(𝑗) + 𝛼·freq(𝑗), 𝐶(𝑗) is the number of binary and ternary constraints involving the element 𝑠𝑗, freq(𝑗) is the number of times 𝑠𝑗 has been flipped from 1 to 0 since the beginning of the search, and 𝛼 is an instance-dependent coefficient which defines a penalty factor for each move. The rationale is the following: a variable involved in a large number of constraints naturally runs a higher risk of being flipped during a move than a variable with few constraints on it, so it is logical to give a longer tabu tenure to a move whose variable has many constraints on it. The second part of the function aims to penalize a move which is repeated too often. Note that intensification and diversification procedures were also used to enhance the efficiency of the general method.
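The tenure formula is simple enough to state in code. The bookkeeping arrays num_constraints (for 𝐶(𝑗)) and flip_count (for freq(𝑗)) are assumed names:

```python
# The tenure formula tabu(j) = C(j) + alpha * freq(j): heavily constrained
# and frequently flipped variables stay tabu longer. Array names assumed.

def tabu_tenure(j, num_constraints, flip_count, alpha):
    """Tenure (in iterations) before s_j may flip back from 0 to 1."""
    return num_constraints[j] + alpha * flip_count[j]
```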

6.3. Numerical Comparison with Other Methods

Experiments are carried out on a set of 20 realistic instances provided by the CNES (French National Space Agency) and described in detail in [23]. These instances belong to two different sets: without capacity constraint (13 instances) and with capacity constraint (7 instances). The instances without capacity constraint, as well as one instance with capacity constraint, will not be discussed: they are very easy to solve, either with an exact method or with the above-described CNS-SAT algorithm. The other six instances have from 488 to 1057 candidate photographs, giving up to 2355 binary variables and 35933 constraints. Existing exact algorithms are unable to solve these instances optimally.

The best known nonexact algorithm was a tabu search, TS-SAT, proposed by the CNES. The main differences with the CNS-SAT algorithm are the following: (1) TS-SAT uses a different (integer) formulation of the problem; (2) it manipulates only feasible solutions (the search space is thus 𝑆(feasible)); (3) it uses a different neighborhood structure; (4) it considers only a sample of neighbor solutions to make a move; (5) the tabu tenure for each move is randomly taken from predefined (very small) ranges.

To solve an instance, CNS-SAT is allowed to run 9 million iterations on a PC (200 MHz, 32 MB of RAM), which is considered reasonable by the CNES. CNS-SAT was run 100 times on each instance with different random seeds, and the average value is returned for each instance. The first three columns of Table 5 give the name of the instance, the number of candidate photographs 𝑛, and the number of 0-1 variables 𝑚. Columns 4 and 5, respectively, show the best profit 𝑓TS and the associated computing time timeTS (in seconds) obtained with TS-SAT. Columns 6 and 7 give the average profit value 𝑓CNS and the average time timeCNS needed by CNS-SAT to find such a solution. It is easy to see that CNS-SAT is both more effective and faster than TS-SAT (see also the line labeled “Average”).

7. Conclusion

In this paper, we propose and discuss a generic method for combinatorial optimization problems. Its consideration within various fields shows that CNS is very efficient, robust, quick, and relatively easy to implement. Note that other heuristic solution methods which were not discussed here could also be considered within the CNS framework (e.g., [24, 25]). In contrast to tree search, CNS mainly focuses on the bottom part of the search tree (i.e., it evolves close to the leaves). In contrast with standard local search methods, on the one hand, it can perform jumps in the search tree, which means that the search space is likely to be connected. On the other hand, there is no need to extend the search space by considering infeasible solutions and penalizing them with a function which can be difficult to design and tune.

CNS is especially well adapted when the optimization problem can be divided into a series of decision problems. This was the case for three of the presented applications.
(i) Graph coloring can be tackled by searching for 𝑘-colorings with decreasing values of 𝑘.
(ii) The frequency assignment problem can be approached at level 𝑘 by only considering interference constraints at level 𝑘 and imperative constraints. Then, if a feasible solution is found at level 𝑘, level 𝑘 − 1 is considered.
(iii) The car fleet management problem can be considered with a fixed number 𝑘 of cars in stock (𝑘 being first equal to the number of existing cars in stock), and the provided solution will be the least costly one among the different considered values of 𝑘 (𝑘 can increase if cars are purchased).
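The decomposition pattern shared by these three applications can be sketched as an outer loop, shown here for the graph coloring case. The names minimize_k and solve_decision are illustrative; solve_decision stands for a hypothetical CNS call that returns a solution at level 𝑘, or None on failure.

```python
# Sketch of the outer loop that turns an optimization problem into a
# series of decision problems: try level k, and decrease k while the
# decision solver (e.g., CNS searching for a feasible k-coloring) succeeds.

def minimize_k(k_start, solve_decision):
    """Return (smallest feasible k, its solution), or None if none found."""
    k, best = k_start, None
    while k >= 1:
        solution = solve_decision(k)   # CNS on the decision problem at level k
        if solution is None:           # failure: keep the last feasible level
            break
        best = (k, solution)
        k -= 1
    return best
```

For frequency assignment the same loop applies with the interference level playing the role of 𝑘; for fleet management, 𝑘 would instead be varied over candidate fleet sizes and the least costly outcome kept.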

CNS is a very flexible method for at least four reasons.
(i) It can manage various ways of representing a solution: the component 𝑖 of solution 𝑠, denoted 𝑠𝑖, can involve a single piece of information (e.g., the color of a vertex, the car type of a client request, a binary decision value associated with the selection of a photograph or not) or several data of different natures (e.g., a frequency and a polarization, or a binary decision value for the selection of a photograph together with the associated camera). This allows better management of the repairing phase.
(ii) It can consider various types of constraints. It is well adapted for constraints linking two or three variables together, because the repairing phase is usually straightforward in such situations. If a specific constraint involves several variables, such a constraint can be relaxed (at least to save CPU time), as was the case for the satellite range scheduling problem, where the capacity constraint was only considered at specific iterations.
(iii) It is also well adapted for problems where the unassigned variables actually correspond to an expensive assignment for the considered problem (e.g., a nonassigned variable corresponds to a subcontracted request for the car rental company).
(iv) Other significant ingredients, such as intensification or diversification procedures, can easily be added within the framework of CNS to enhance its efficiency.

CNS can easily be combined with evolutionary heuristics, like genetic or adaptive memory algorithms. This has already been performed successfully for graph coloring [17] and for a satellite range scheduling problem [21]; in both cases, the resulting methods are the best for the considered problems. Therefore, a relevant avenue of research would consist in hybridizing CNS with other techniques for other optimization problems.