Abstract
Many optimization problems (from academia or industry) require the use of a local search to find a satisfactory solution in a reasonable amount of time, even if optimality is not guaranteed. Usually, local search algorithms operate in a search space which contains complete solutions (feasible or not) to the problem. In contrast, in Consistent Neighborhood Search (CNS), after each variable assignment, the conflicting variables are deleted to keep the partial solution feasible, and the search stops when all the variables have a value. In this paper, we formally propose a new heuristic solution method, CNS, whose search behavior lies between exhaustive tree search and local search working with complete solutions. We then discuss, with a unified view, the great success of some existing heuristics, which can be considered within the CNS framework, in various fields: graph coloring, frequency assignment in telecommunication networks, vehicle fleet management with maintenance constraints, and satellite range scheduling. Moreover, some lessons are drawn in order to provide guidelines for the adaptation of CNS to other problems.
1. Introduction
An exact method (e.g., branch-and-bound, dynamic programming, Lagrangian relaxation-based methods) guarantees the optimality of the provided solution. However, for a large number of applications and most real-life optimization problems, such methods need a prohibitive amount of time to find an optimal solution, because such problems are NP-hard [1]. For these difficult problems, one often prefers to quickly find a satisfactory solution, which is the goal of heuristic solution methods. There are mainly three families of heuristics: constructive heuristics (a solution is built step by step from scratch, as in greedy algorithms), local search heuristics (a solution is iteratively modified, as discussed below), and evolutionary heuristics (a population of solutions is managed, as in genetic algorithms and ant algorithms). In this paper, only the context of local search methods is considered.
A local search heuristic starts with an initial solution and tries to improve it iteratively. At each iteration, a modification, called move, of the current solution is performed in order to generate a neighbor solution. The definition of a move, that is, the definition of the neighborhood structure, depends on the considered problem. The most popular local search methods are simulated annealing [2], tabu search [3], threshold algorithms [4], variable neighborhood search [5], and guided local search [6].
Within a local search context, the usual approach consists in working with complete solutions; that is, each variable has a value, and the solution may be feasible or not. In the latter case, a penalty function, which depends on the number of violated constraints, is often used. In contrast, Consistent Neighborhood Search (CNS) uses partial feasible solutions: not every variable has a value, but there is no constraint violation. In such a case, the goal is to minimize the number of nonassigned variables, and a move is performed in at least two phases: (1) assign a value to an unassigned variable, and (2) delete the values of the resulting conflicting variables (i.e., the other variables involved in a constraint violation). An intermediate phase might occur between these two phases, which consists in adjusting the values of conflicting variables under some specific conditions. In this paper, which is an extension of [7] and [8], we formally introduce the CNS methodology and the adaptation of tabu search within its framework; then we discuss, with a unified terminology, the great success of some existing heuristics, which can be considered as belonging to the CNS methodology, for various NP-hard constrained combinatorial problems. For each problem, the reader is referred to the associated paper for references on the NP-hardness, the literature review, and the detailed experimental conditions (computer, language, etc.). For each problem, comparisons have to be made carefully because the experimental conditions were not always the same. Remember, however, that a heuristic is generally run until the potential to improve the best encountered solution becomes poor. In addition, for each considered problem, the CNS approach is always compared with state-of-the-art methods, even if such methods are not very recent.
The paper is organized as follows. In Section 2, the CNS methodology is proposed. Then, heuristics for various problems are presented within a CNS framework: graph coloring (Section 3), frequency assignment with polarization (Section 4), car fleet management with maintenance constraints (Section 5), and satellite range scheduling (Section 6). The paper ends with a conclusion.
2. Consistent Neighborhood Search
In this section, we introduce the CNS methodology and situate it among optimization methods.
2.1. Presentation of the Method
Consider a problem with a set of decision variables, an objective function f to minimize, and a set of constraints to satisfy. Each variable can only take a value in its own value domain. A solution assigns a value to each variable; it is feasible if it satisfies all the constraints. In most local search methods, the search space contains complete solutions; that is, each variable has a value, and the solutions can be feasible or not. If the search space only contains feasible solutions, the goal is generally to directly minimize the given objective function f; otherwise, the aim often consists in minimizing an augmented function f + α·p, where the penalty function p measures the constraint violations of a solution and the parameter α gives more or less importance to these violations. In contrast, a specificity of CNS consists in working with partial and feasible solutions, that is, solutions where some variables do not have a value but all the constraints are satisfied. In such a case, the goal is to minimize the number of nonassigned variables, and the process of course stops when every variable has a value.
Therefore, three search spaces are possible: (1) the complete and feasible search space; (2) the complete, not necessarily feasible search space, where infeasible solutions are penalized; and (3) the partial and feasible search space. When working in the first space, it can be very difficult to define a move which maintains the feasibility of the solution. When working in the second space, it is challenging to define a move which does not increase the penalty term too much, to tune the above-mentioned penalty parameter, and to find a feasible solution, because this space is much larger than the complete and feasible one. We will see that such drawbacks are avoided when working in the partial and feasible search space.
An important feature of CNS is the definition of the neighborhood structure in the partial and feasible search space. In most local search methods, in order to generate a neighbor solution from the current solution, a move consists in changing the value of one (or more) variable(s). The distance between the current solution and a neighbor is usually proportional to the number of modified variables; thus, it is a constant over the whole neighborhood.
In contrast, in CNS, any move is performed in at least two phases. (1) Assignment phase: a value is assigned to a nonassigned variable; this move creates a set of conflicting variables, excluding the newly assigned one (a variable is in conflict if it is involved in at least one constraint violation). (2) Reassignment phase (optional): reduce this conflict set as follows: for each variable of the set, if it is possible to assign a new admissible value to it without creating new conflicts, do it. (3) Repairing phase: in order to keep the partial solution feasible, remove the values of all the variables remaining in the conflict set.
Therefore, in CNS, the distance between the current solution and a neighbor solution is usually not a constant.
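The three phases above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the callbacks `conflicts` and `admissible` are hypothetical placeholders for problem-specific routines.

```python
def cns_move(assignment, var, value, conflicts, admissible):
    """One CNS move on a partial feasible assignment (a dict).

    conflicts(a)     -> set of variables violating a constraint under a
    admissible(a, u) -> values for u that create no conflict under a
    """
    assignment = dict(assignment)          # work on a copy
    assignment[var] = value                # (1) assignment phase
    conflict_set = conflicts(assignment) - {var}
    for u in sorted(conflict_set):         # (2) reassignment phase (optional)
        for w in admissible(assignment, u):
            assignment[u] = w              # adjusted without creating new conflicts
            conflict_set.discard(u)
            break
    for u in conflict_set:                 # (3) repairing phase
        del assignment[u]                  # drop the remaining conflicting values
    return assignment
```

On a small coloring-style instance, assigning a value that clashes with a neighbor whose value cannot be adjusted simply deletes that neighbor's value, keeping the partial solution feasible.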
In most local search algorithms, the selected neighbor of the current solution is usually (one of) the best solution(s), according to the objective or penalized objective function, chosen among a sample of the neighborhood. Sampling is usually unavoidable because it is too time consuming to evaluate all the neighbor solutions, either because the neighborhood is too large or because it is cumbersome to evaluate a single move. An important issue is thus to determine the sample (random or not) as well as its size.
In contrast, in CNS, all the neighbor solutions can be considered at each iteration. This is possible in a reasonable amount of time for two reasons: (1) it is quick to evaluate a neighbor solution by incremental computation: its value is simply the current number of nonassigned variables, minus one (for the newly assigned variable), plus the number of deleted values; (2) the number of nonassigned variables in the current solution is in general small when compared, for example, with the size of the neighborhood in a standard local search approach (working with complete solutions).
We now have all the ingredients to formulate a pseudo-code of CNS in Algorithm 1.
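A self-contained sketch of such a CNS loop, demonstrated here on a toy coloring instance, might look as follows; the callback `conflicts` is a hypothetical problem-specific routine, and this is an illustration rather than the authors' code.

```python
import itertools
import random

def consistent_neighborhood_search(variables, domain, conflicts, max_iter=1000, seed=0):
    """CNS loop (sketch): keep a partial feasible assignment, evaluate every
    (variable, value) move, perform (one of) the best, repair, and stop as
    soon as every variable has a value.

    conflicts(a, v) -> assigned variables (other than v) clashing with v.
    """
    rng = random.Random(seed)
    assignment = {}                            # start from the root: all unassigned
    for _ in range(max_iter):
        unassigned = [v for v in variables if v not in assignment]
        if not unassigned:
            return assignment                  # a leaf is reached: complete and feasible
        best, best_cost = [], None
        for var, val in itertools.product(unassigned, domain):
            assignment[var] = val              # tentative assignment phase
            clashing = conflicts(assignment, var)
            cost = len(unassigned) - 1 + len(clashing)   # incremental evaluation
            del assignment[var]
            if best_cost is None or cost < best_cost:
                best, best_cost = [(var, val, clashing)], cost
            elif cost == best_cost:
                best.append((var, val, clashing))
        var, val, clashing = rng.choice(best)  # ties are broken randomly
        assignment[var] = val
        for u in clashing:                     # repairing phase
            del assignment[u]
    return assignment
```

Note that the cost of a move is obtained incrementally, exactly as described above: the number of nonassigned variables minus one, plus the number of values that would be deleted.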
In summary, CNS is an approach dealing with partial feasible solutions, which can explore the whole neighborhood of the current solution at each iteration because a straightforward incremental computation can be designed. Many local search methods (e.g., tabu search, simulated annealing, random walk, and threshold algorithms) can be adapted within the framework of CNS.
The adaptation of tabu search within the framework of CNS is now discussed. A generic and standard version of tabu search can be described as follows, assuming that a function f has to be minimized. First, tabu search needs an initial solution as input. Then, the algorithm generates a sequence of neighbor solutions. When a move is performed from the current solution to a neighbor, the inverse of that move is forbidden during the following t (a parameter) iterations (with some exceptions). The next solution is the best one, according to f, within the subset of the neighborhood containing all solutions which can be obtained from the current one either by performing a nontabu move or by performing a tabu move leading to a solution better than the best one encountered along the search so far. Usually, the neighborhood is too large, and only a sample of neighbor solutions is selected and evaluated. The choice of the sample often has a strong impact on the final results. The process is stopped, for example, when an optimal solution is found (when its value is known) or when a fixed number of iterations have been performed. Many variants and extensions of this basic algorithm can be found, for example, in [9].
Tabu search adapted within the framework of CNS has the following specificities: it works in the partial and feasible search space; it minimizes the number of nonassigned variables instead of the original objective function; it explores the whole neighborhood of the current solution; it uses an efficient and straightforward incremental computation after each move (when a value is given to a variable, other values might be adjusted or deleted); and after a variable receives a value, it is tabu to remove that value for a certain number of iterations.
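The tabu mechanism above can be kept in a small structure mapping each variable to the last iteration at which undoing its assignment remains forbidden; a candidate move whose conflict set contains a tabu variable is then rejected. This is a generic sketch (names are hypothetical), not the authors' data structure.

```python
class TabuList:
    """Forbid undoing a recent assignment: after a variable gets a value,
    deleting that variable's value is tabu for `tenure` iterations."""

    def __init__(self, tenure=7):
        self.tenure = tenure
        self.until = {}                        # variable -> last tabu iteration

    def forbid(self, var, iteration):
        """Called right after `var` receives a value at `iteration`."""
        self.until[var] = iteration + self.tenure

    def is_tabu(self, var, iteration):
        """True if removing the value of `var` is still forbidden."""
        return self.until.get(var, -1) >= iteration
```

A fixed tenure is the simplest choice; as discussed in the applications below, the tenure can also be managed dynamically.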
2.2. Search Characteristics of CNS
We now compare the general strategy of three kinds of optimization methods: tree search, standard local search, and CNS. These methods visit the search tree in very different ways. In this tree, the root (the top node in Figures 1 to 4) is the empty solution (no variable is assigned), and the leaves (the bottom nodes in Figures 1 to 4) are complete solutions (all the variables have a value), which can be feasible or not. In these four figures, an empty node is not visited by the considered method, whereas a black node is visited, and a black arrow indicates a performed move from one node to another. Considering the set of all the possible nodes in the search tree, we will see that the subset which is mainly visited differs drastically from one method to the other.
Tree search algorithms visit neighbor nodes in the search tree. The visited subtree is likely to be vertical for Depth-First Search as illustrated in Figure 1, where only a few leaves might be visited. In contrast, Breadth-First Search usually focuses on the top of the search tree because it often needs a prohibitive amount of time to go down, as illustrated in Figure 2 where no leaves are visited.
A very different strategy characterizes standard local search methods: as illustrated in Figure 3, only leaves are visited, which is a major advantage when compared to tree search methods. However, standard local search algorithms usually have the following drawbacks. On the one hand, if constraint violations are forbidden, the search space is not necessarily connected; that is, it is not always possible to join two leaves with a sequence of moves. In such a case, the search might be trapped in a connected component which does not contain good solutions. On the other hand, if constraint violations are allowed but penalized during the search, the number of leaves is drastically augmented, and the search might mainly focus on infeasible leaves, as it can be challenging to define the penalty function and to tune its associated parameter.
Even if CNS can be considered as a local search method, it mainly explores nodes which are close to the leaves, as illustrated in Figure 4. Because all the leaves correspond to complete and feasible solutions, CNS stops as soon as a leaf is reached.
CNS can start its search from the root, that is, with no assigned variable. In such a case, its first iterations basically consist in greedily assigning a value to a variable until the current solution becomes saturated, that is, until the repairing phase becomes unavoidable. CNS can also start its search from a node located below the root if an external procedure is used to generate its initial solution. The more efficient such an external procedure is, the closer the first explored node will be to the bottom of the search tree. Last but not least, CNS can perform jumps in the search tree. Thus, the search space is likely to be connected.
Therefore, CNS does not encounter the above-described drawbacks associated with tree search and standard local search methods. Notice, however, that there exists an implicit enumeration method able to perform jumps over the search tree, called Resolution Search and proposed in [10].
3. Graph Coloring
The main reference associated with this section is [11]. The authors proposed a tabu search heuristic for the graph coloring problem, which we denote CNS-GCP for the sake of simplicity.
3.1. Description of the Problem
Given a graph G with vertex set V and edge set E, the k-coloring problem (k-GCP) consists in assigning an integer (called a color) in {1, …, k} to every vertex such that two adjacent vertices have different colors. The Graph Coloring Problem (GCP) consists in finding a k-coloring with the smallest possible value of k (called the chromatic number of G). Both problems are NP-hard [1], and many heuristics were proposed to solve them. For a recent survey, the reader is referred to [12]. Starting with at most k = |V| colors, an upper bound on the chromatic number of G can be determined by solving a series of k-GCPs with decreasing values of k until no feasible k-coloring can be obtained. Only this strategy, which leads to the best results, will be considered below.
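The decreasing-k strategy can be sketched as a simple driver around any k-GCP solver; `solve_k_gcp` is a hypothetical callback standing for a heuristic such as the one described below.

```python
def color_upper_bound(solve_k_gcp, n_vertices):
    """Upper bound on the chromatic number: solve a series of k-GCPs with
    decreasing k, starting from k = n_vertices (a trivially feasible value),
    until no feasible k-coloring is found."""
    best = None
    for k in range(n_vertices, 0, -1):
        coloring = solve_k_gcp(k)      # feasible k-coloring, or None on failure
        if coloring is None:
            break
        best = (k, coloring)
    return best                        # smallest successful k and its coloring
```

In practice, each k-GCP run is itself time limited, so the returned value is an upper bound rather than a proof of the chromatic number.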
3.2. Description of the Method within a CNS Framework
The best k-coloring heuristics are based on two approaches. In the first, the constraint that the endpoints of an edge should have different colors is relaxed. Thus, the strategy consists in allowing conflicts (a conflict occurs if two adjacent vertices have the same color) while minimizing the number of conflicts. In a local search context, a straightforward move is thus to change the color of a conflicting vertex, as proposed in [13].
In contrast, in the second approach, the constraint imposing that all the vertices should be colored is relaxed, but conflicts are forbidden. Each vertex can take any color in {1, …, k}. The value of a vertex in a solution indicates its color, and there is no value (or an artificial value 0) if the vertex is not colored. The goal is to minimize the number of uncolored vertices. A move consists in first assigning a color to an uncolored vertex (assignment phase) and then (repairing phase) removing the colors of the created conflicting vertices (i.e., all the adjacent vertices which have the chosen color). Then, all the moves which would remove the color of the newly colored vertex are tabu for a certain number of iterations. This number is dynamically managed and is proportional to the variation of the objective function. At each iteration, the best nontabu move is performed (ties are broken randomly).
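The coloring move just described can be sketched in a few lines; this is an illustration of the mechanism, with hypothetical names, not the code of [11].

```python
def coloring_move(colors, adj, vertex, color):
    """CNS-GCP-style move (sketch): color an uncolored vertex, then uncolor
    the adjacent vertices that carry the same color (repairing phase).

    colors : dict vertex -> color (partial, conflict-free)
    adj    : dict vertex -> iterable of adjacent vertices
    """
    new = dict(colors)
    new[vertex] = color                                   # assignment phase
    clashing = [u for u in adj[vertex] if new.get(u) == color and u != vertex]
    for u in clashing:                                    # repairing phase
        del new[u]
    return new, clashing
```

The returned list of clashing vertices gives the move's incremental cost (minus one, plus its length) and the vertices whose recoloring can be declared tabu.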
3.3. Numerical Comparison with Other Methods
It is shown in [14] and in [11] that the most difficult benchmark instances from the DIMACS Challenge (see ftp://dimacs.rutgers.edu/pub/challenge/graph/) are the ones presented in Table 1. Below, CNS-GCP is compared with other state-of-the-art coloring heuristics, namely, Tabucol [13], GH [15], MOR [16], and MMT [17]. Tabucol is a standard tabu search working with complete, not necessarily feasible solutions. GH, MOR, and MMT are all population-based methods which use local search procedures. GH uses Tabucol to improve offspring solutions, whereas MMT uses a procedure close to CNS-GCP. MOR works in the same search space as CNS-GCP but uses simulated annealing instead of tabu search, with much more sophisticated moves.
A CPU time limit of 60 minutes on a Pentium 4 (2 GHz, 512 MB of RAM) was considered for CNS-GCP. The first two columns of Table 1, respectively, indicate the name and the number of vertices of the graph. The third column contains two numbers, the first one being the chromatic number (a “?” is put when it is not known) and the second one the best upper bound ever found by a heuristic. Then, for every algorithm, the smallest k such that a feasible k-coloring was found is reported. We can observe that CNS-GCP is rather competitive with the best coloring methods. However, it is much simpler!
4. Frequency Assignment with Polarization
The main reference associated with this section is [18], where the frequency assignment problem with polarization (FAPP) was considered. The authors proposed a tabu search approach working with partial feasible solutions, denoted CNS-FAPP below.
4.1. Description of the Problem
The FAPP concerns a Hertzian telecommunication network made up of antennae located at a set of geographical sites. A Hertzian liaison joins two sites by one or more paths. Hence, a path is a unidirectional radioelectric bond, established between antennae at distinct sites, which has a given frequency and polarization. Each path has an allowed frequency set and an allowed polarization set. The FAPP consists in finding, for each path, a frequency and a polarization satisfying the following set of constraints.
First, a set of imperative constraints, of four types, imposes a given relation (equality or difference) between the frequencies or the polarizations of two paths. In addition, some electromagnetic compatibility (CEM) constraints require a minimal distance between the frequencies of two paths; this required distance depends on their polarizations, being smaller if the polarizations are different. Such constraints control the interference phenomenon. Unfortunately, most problems do not have feasible solutions because the domains are too restrictive or the requirements too numerous. Consequently, some deterioration is allowed by permitting some interference, which has to be minimized. With this aim, for the CEM constraints, a progressive relaxation is authorized and expressed by relaxation levels: level 0 corresponds to no relaxation, going from one level to the next involves the relaxation of some or all the frequency gaps, and the maximum relaxation level is 10. Formally, at level 11, all the frequency gaps vanish, so no CEM constraint remains.
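A CEM constraint between two paths can be checked as sketched below; the function and parameter names are hypothetical, and the two gap values stand for the (level-dependent) required distances with equal and different polarizations.

```python
def cem_satisfied(freq_i, pol_i, freq_j, pol_j, gap_same, gap_diff):
    """Electromagnetic compatibility between two paths (sketch): the
    frequency distance must reach a minimal gap which depends on the
    polarizations, the gap being typically smaller when they differ."""
    required = gap_same if pol_i == pol_j else gap_diff
    return abs(freq_i - freq_j) >= required
```

Relaxing a level then simply amounts to checking the same predicate with smaller gap values.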
At each relaxation level, every CEM constraint is associated with its (possibly relaxed) frequency gaps. Accordingly, a feasible solution at a given level is an assignment of all the paths satisfying all the imperative constraints and all the CEM constraints relaxed at that level. If such a solution exists, the problem is said to be feasible at that level. Every problem is assumed to be 11-feasible.
Consequently, the objective of the problem is, in order of priority, to (1) minimize the lowest relaxation level for which a feasible solution exists, (2) minimize the number of CEM constraints violated at the level just below it, and (3) minimize the sum of the CEM constraints violated at all lower levels.
4.2. Description of the Method within a CNS Framework
The strategy adopted for the resolution consists in transforming the FAPP optimization problem into 11 decision problems according to the relaxation level of the CEM constraints; each decision problem contains both the imperative and the relaxed CEM constraints. This enables us to introduce some filtering treatments to reduce the frequency and the polarization domains. Starting from the level at which an initial solution is provided by a greedy constructive method, the general algorithm works in a downward fashion: each time a feasible solution is found at some level, a lower level is considered.
A solution indicates, for each path, its associated resource, that is, a (frequency, polarization) pair taken from its allowed sets. In the assignment phase, such a pair is given to the chosen nonassigned candidate path. Then, in the repairing phase, this assignment is propagated to its neighbors in the constraint network, and, if necessary, the conflicting neighboring values are deleted in order to satisfy the imperative and CEM constraints. This was done efficiently using incremental computing on specific data structures, allowing variable domains to be dynamically reduced.
A tabu list is needed to prevent cycling, which occurs when there is an attempt to instantiate the last deleted variables in the current partial solution. Indeed, all the values likely to delete the variable assigned by the move are classified tabu during some iterations; the tabu tenure is proportional to the number of times the involved resource was assigned.
The considered problem was the subject of the Challenge ROADEF 2001 (organized by the French Society of Operations Research and Decision Analysis), involving 27 research teams (see http://uma.ensta-paristech.fr/conf/roadef-2001-challenge/). Table 2 presents the results obtained by the five best teams. During the competition, only one run was allowed, and the computing time was limited to one hour on a Pentium 3 (500 MHz, 128 MB of RAM). Table 2 details the hierarchical objective function by giving first the relaxation level, then the number of unsatisfied CEM constraints at the level just below it, and finally the sum of the unsatisfied CEM constraints at all lower levels. The first column indicates the instance names with the instance number and the number of considered paths. For example, 02-0250 is instance 2 with 250 paths.
The first approach, developed by Bisaillon’s team, is a local search based on tabu search with a variable neighborhood. The second algorithm, developed by Caseau, combines constraint propagation with heuristics such as Large Neighborhood Search and Limited Discrepancy Search. The third method, developed by Gavranovic, is a typical local search guided by the constraint cost; at each level, it builds frequency trees, ignoring the polarization constraints, and then it tries to optimize the polarization allocation. In a similar way, classical tabu search procedures working with complete solutions were implemented by Schindl’s team. Finally, the last column gives the results obtained by CNS-FAPP.
We can observe the efficiency of CNS-FAPP when compared to the other methods. Care is needed, however, because the indicated values are the best among 10 runs. CNS-FAPP finds the optimal level for 37 instances out of 40. And last but not least, CNS-FAPP was the winner of the Challenge!
5. Fleet Management with Maintenance Constraints
The main reference associated with this section is [19], where a rather complex solution method was proposed for a problem which can be formulated as a car fleet management problem with maintenance constraints (denoted CAR for the sake of simplicity). The particularity of the problem is that feasible solutions are very easy to find but can cost a lot. Thus, the method was designed to avoid assigning the most expensive value to each variable.
5.1. Description of the Problem
The problem retained for the Challenge ROADEF 1999 was an inventory management problem (see http://www.roadef.org/content/roadef/challenge.htm for the details), where a cost function has to be minimized. A car rental company manages a stock of cars of different types. It receives requests from customers asking for cars of specific types for a given time horizon. Basically a request is characterized by its start and end times, by a required car type, and by the number of required cars. All requests are supposed to be known for the considered time horizon. The satisfaction of all customer requests is mandatory. If there are not enough cars available in stock, the company can react in three different ways: (1) upgrading: it can offer a better car type to the customer (but the company encounters the additional associated cost); (2) subcontracting: the company can decide to subcontract some requests to other providers, which is generally the most expensive alternative; (3) purchasing new cars, which then belong to the stock of the company for the rest of the time horizon.
Two types of maintenance constraints make the problem difficult: (1) a maximum time of use without maintenance is given for each car type (each maintenance has a duration, a cost and a number of workers needed to perform it); (2) the company has a fixed number of maintenance workers, which means that the maintenances should be scheduled so that the capacity of the workshop is never exceeded. In addition, the following costs are also known: the costs (fixed and time dependent) associated with the assignment of a car to a request, and the inventory cost per day of a car in stock (rented or not). The goal is to satisfy all the requests while minimizing the total cost.
5.2. Description of the Method within a CNS Framework
The general pseudocode of the method, denoted CNS-CAR, is summarized in Algorithm 2. First, an initial solution is greedily generated. Step 1 of the main loop tries to improve the current solution without changing the set of purchased cars (with the use of two tabu search procedures working with partial feasible solutions, denoted TS1-CAR and TS2-CAR below), while the second step generates a new solution with a different set of purchased cars. The stopping criterion is a time limit of one hour, as imposed by the organizers of the Challenge.
In TS1-CAR, a solution can be modeled as follows: each request is associated with the car type of the fleet (purchased or not) performing it and has no value (or an artificial value, say 0) if it is subcontracted to another provider. The objective is then to minimize the number of subcontracted requests. A neighbor of a solution is obtained by assigning a car of some type to a subcontracted request. To make such a change feasible, in the repairing phase, the requests covered by that car which overlap with the newly assigned request are subcontracted (i.e., their associated values are set to 0), and the maintenances of the car are possibly rescheduled in a greedy fashion while satisfying the maintenance constraints. If this is not possible, other requests assigned to the same car type might be subcontracted in order to create more room to schedule the maintenances. If it is still not possible to generate a feasible schedule for the maintenances (because of the maintenance schedules of the other car types), such a move is not considered further.
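The core of this repairing phase, namely, subcontracting the requests of a car that overlap the newly inserted one, can be sketched as follows (maintenance rescheduling is omitted; names and the interval representation are assumptions for illustration).

```python
def repair_overlaps(assigned, new_request):
    """TS1-CAR-style repairing phase (sketch): requests are (start, end)
    half-open intervals covered by the same car; inserting new_request
    sends every overlapping assigned request back to subcontracting."""
    kept, subcontracted = [], []
    for start, end in assigned:
        if start < new_request[1] and new_request[0] < end:   # intervals overlap
            subcontracted.append((start, end))
        else:
            kept.append((start, end))
    kept.append(new_request)
    return sorted(kept), subcontracted
```

In the full method, the move is then validated only if a feasible maintenance schedule can still be built for the modified car.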
TS2-CAR is an extension of TS1-CAR in the following sense: (1) it works on several car types during the same move; (2) it tries to reduce the total cost not only by assigning cars to subcontracted requests but also by avoiding upgrades; (3) a reassignment phase is performed; (4) the repairing phase has more options to validate the move proposed in the assignment phase. A neighbor of a solution is obtained by assigning a car of a given type to a request which is either subcontracted or covered by a car of an upgraded type. The reassignment and repairing phases are performed simultaneously as follows: all the requests covered by the cars of the considered type might be reassigned within that car type (using an exact method for a specific case of the graph coloring problem), and it is allowed to subcontract some of these requests. In such a phase, the maintenance schedules of all the cars might change (in a greedy fashion or by the use of an exact method). In the two tabu procedures, when a request is assigned to a car type, it is then tabu to remove that assignment for a certain number of iterations.
Diversification procedures were also used, based on the following idea: the requests which have not been subcontracted for a large number of iterations are subcontracted, in order to make room for other requests in the schedule.
5.3. Numerical Comparison with Other Methods
In Tables 3 and 4, the results for the 16 benchmark instances of the Challenge are reported. CNS-CAR is compared with the four best methods (among the thirteen proposed heuristics) of the Challenge. The winners of the contest were Briant and Bouzgarrou. Their algorithm mainly applies linear programming while ignoring the maintenance constraints and then adjusts the solution according to the maintenance constraints. The name of an instance encodes the number of requests, the number of car types, the capacity of the workshop, and whether purchases are allowed. The time horizon of all instances corresponds to a period of 2 years.
The algorithm was run with a time limit equivalent to one hour on a Pentium Pro (200 MHz, 64 MB of RAM), as imposed by the organizers of the Challenge. The results are shown in Table 3 (for instances where purchases are forbidden) and Table 4 (associated with the same instances, but where purchases are allowed). One column contains the best known solution for each instance. An asterisk is put when CNS-CAR was able to equal or improve the previous best known solution. Some of these best results have been obtained when using parameters different from those mentioned above (for tuning purposes) or by running CNS-CAR for more than one hour. The next four columns contain the percentage gap with respect to the best known solution obtained by the four best methods, labeled with the initials of the members of each team, namely, BB (for Briant and Bouzgarrou), AGHKU (for Asdemir, Karslioğlu, Gürbüz, and Ünal), B (for Bayrak), and DD (for Dhaenens-Flipo and Durand). The next column contains the percentage gap with respect to the best known solution obtained with CNS-CAR. For each instance, ten runs of CNS-CAR were executed, and average results are reported. The last line of each column indicates average results. We can observe that CNS-CAR gives on average better results than those obtained by the four best competitors of the Challenge.
6. Satellite Range Scheduling
The main reference associated with this section is [20], in which the problem is referred to as the daily photograph scheduling problem of an earth observation satellite (denoted SAT below). The authors proposed a tabu search approach working with partial feasible solutions, denoted CNS-SAT below. Note that CNS-like approaches were also very successfully adapted to other satellite range scheduling problems: the multiresource satellite range scheduling problem [21], where more than one resource is available, and a satellite range scheduling problem with partial acquisition and transition times [22].
6.1. Description of the Problem
The considered satellite range scheduling problem can be described as follows [23]. Consider the set of candidate photographs which can be scheduled to be taken on the next day. A set of possibilities is associated with each photograph, corresponding to the different ways to take it: (1) for a mono photograph, there are three possibilities, because it can be taken by any of the three cameras (front, middle, and rear) of the satellite; (2) for a stereo photograph, there is one single possibility, because it simultaneously requires the front and the rear cameras. Hence, with each mono photograph are associated three (photo, camera) pairs, and with each stereo photograph is associated one such pair. Letting n1 and n2 be, respectively, the numbers of mono and stereo photographs among the candidates, there are in total 3·n1 + n2 possible (photo, camera) pairs. Now, associating a binary (decision) variable with each such pair, a photograph schedule corresponds to a binary vector with one entry per pair, equal to 1 if the corresponding (photo, camera) pair is present in the schedule and to 0 otherwise. For example, if the candidates are two mono photographs followed by a stereo photograph, then the vector (1, 0, 0, 0, 0, 0, 1) represents a schedule in which the first photograph is taken by camera 1, the second one is rejected, and the third one is taken by cameras 1 and 3.
The SAT problem is to find a subset P' of P which satisfies all the imperative constraints and maximizes the sum of the profits of the photographs in P'. The objective function can be defined as follows. First, the profit gi of a pair (p, camera) (or of its 0-1 variable xi) is defined as the profit of the photograph p. The total profit of all the pairs of the given set is then represented by a vector g = (g1, g2, ..., gn), where gi = gj if xi and xj correspond to two different pairs of elements involving the same photograph p. Then the total profit value of a schedule x is the sum of the profits of the photographs in x, that is, g1·x1 + g2·x2 + ... + gn·xn.
A capacity constraint is the following. A size is associated with each photograph p, which represents the amount of memory required to record p when it is taken. The size si of a pair (p, camera) (or of its 0-1 variable xi) is defined as the size of the photograph p. The total size of all the pairs of the given set is then represented by a vector s = (s1, s2, ..., sn), where si = sj if xi and xj correspond to two different pairs of elements involving the same photograph p. The capacity constraint states that the sum of the sizes of the photographs in a schedule cannot exceed the maximal recording capacity Max on board, which is expressed as s1·x1 + s2·x2 + ... + sn·xn ≤ Max.
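The objective and the capacity constraint are both simple dot products with the schedule vector. A minimal sketch, with hypothetical profit and size data (note that the three pairs of a mono photograph carry the same profit and size):

```python
# Hypothetical data for the 7-variable example: g_i = g_j and s_i = s_j
# whenever two pairs involve the same photograph.
g = [10, 10, 10, 4, 4, 4, 20]   # profits, one entry per (photo, camera) pair
s = [3, 3, 3, 1, 1, 1, 5]       # sizes (memory needed on board)
Max = 8                          # maximal recording capacity

def profit(x):
    """Total profit g·x of a schedule x."""
    return sum(gi * xi for gi, xi in zip(g, x))

def size(x):
    """Total size s·x of a schedule x."""
    return sum(si * xi for si, xi in zip(s, x))

x = [1, 0, 0, 0, 0, 0, 1]
profit(x)        # 10 + 20 = 30
size(x) <= Max   # 3 + 5 = 8: capacity satisfied
```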
Binary constraints involving the nonoverlapping of two trials and the minimal transition time between two successive trials of a camera, as well as some constraints involving limitations on instantaneous data flow, are conveniently expressed by simple relations over two pairs (photo, camera). A binary constraint forbids the simultaneous presence of a pair and another pair in a schedule. If xi and xj are the corresponding decision variables of two such pairs, then a binary constraint is defined as follows: xi + xj ≤ 1. Let C2 denote the set of all such pairs (i, j) which should verify the above binary constraint.
Some constraints involving limitations on instantaneous data flow cannot be expressed in the form of binary constraints as above. These remaining constraints may however be expressed by relations over three pairs (photo, camera). A ternary constraint forbids the simultaneous presence of three pairs. Letting xi, xj, and xk be the decision variables corresponding to these pairs, such a ternary constraint is written as follows: xi + xj + xk ≤ 2. Let C3_1 denote the set of all such triplets (i, j, k) which should verify this ternary constraint.
Finally, we need to be sure that a schedule contains no more than one pair among (p, camera 1), (p, camera 2), and (p, camera 3) for any mono photograph p. Letting xi, xj, and xk be the decision variables corresponding to these three pairs, this ternary constraint is expressed as xi + xj + xk ≤ 1. Clearly, there are exactly n1 ternary constraints of this type. Let C3_2 denote the set of all such triplets which should verify this second type of ternary constraint, and let C3 denote the union of C3_1 and C3_2, that is, C3 = C3_1 ∪ C3_2.
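A feasibility check over C2 and C3 is then a handful of inequality tests. In the sketch below, the constraint sets are hypothetical index tuples for the 7-variable example (they are not taken from a real instance):

```python
# Hypothetical constraint sets over variable indices 0..6.
C2   = [(0, 3)]                  # binary constraints:  x_i + x_j       <= 1
C3_1 = [(0, 4, 6)]               # ternary constraints: x_i + x_j + x_k <= 2
C3_2 = [(0, 1, 2), (3, 4, 5)]    # one pair per mono photo: sum of its
                                 # three variables <= 1

def respects_C2_C3(x):
    """True if the schedule x satisfies every constraint in C2 and C3."""
    ok2  = all(x[i] + x[j] <= 1 for i, j in C2)
    ok31 = all(x[i] + x[j] + x[k] <= 2 for i, j, k in C3_1)
    ok32 = all(x[i] + x[j] + x[k] <= 1 for i, j, k in C3_2)
    return ok2 and ok31 and ok32

respects_C2_C3([1, 0, 0, 0, 0, 0, 1])   # True
respects_C2_C3([1, 0, 0, 1, 0, 0, 0])   # False: violates x_0 + x_3 <= 1
```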
6.2. Description of the Method
In contrast with the previous problems, where all the constraints are considered to define the search space, a partially constrained search space X is considered here, which is composed of all binary vectors of n elements satisfying constraints C2 and C3 above. The relaxation of the capacity constraint helps to obtain better results and to accelerate the search. Let x and x' belong to X; then x' is a neighbor of x, that is, x' ∈ N(x), if and only if the following conditions are verified: (1) there is exactly one i such that xi = 0 and x'i = 1; (2) for the above i, x'j = 0 for all (i, j) in C2; and (3) for the above i, x'j = 0 or x'k = 0 for all (i, j, k) in C3_1.
Thus, a neighbor x' of x can be obtained by adding a pair (photo, camera) to the current schedule (i.e., flipping a variable xi from 0 to 1) and then dropping some pairs (photo, camera) (i.e., flipping some xj's from 1 to 0) to repair binary and ternary constraint violations.
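The add-then-repair move can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the choice of which conflicting variable to drop in a ternary constraint is arbitrary here, constraints of type C3_2 would be repaired the same way, and the constraint sets are passed in as index tuples as in the earlier sketch:

```python
def move(x, i, C2, C3_1):
    """Flip x_i from 0 to 1, then flip conflicting variables back to 0
    so that every constraint in C2 and C3_1 is satisfied again."""
    y = list(x)
    y[i] = 1
    for a, b in C2:                       # repair x_a + x_b <= 1
        if i in (a, b):
            y[b if i == a else a] = 0
    for triple in C3_1:                   # repair x_a + x_b + x_c <= 2
        if i in triple and sum(y[t] for t in triple) > 2:
            victim = next(t for t in triple if t != i and y[t] == 1)
            y[victim] = 0                 # drop one (arbitrary) conflicting pair
    return y
```

For instance, with C2 = [(0, 1)] and C3_1 = [(0, 2, 3)], applying `move([0, 1, 1, 1], 0, ...)` adds variable 0, drops variable 1 (binary conflict), and drops one of variables 2 or 3 (ternary conflict).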
During the search, the capacity constraint may be violated by the current solution x; that is, the total size of x may exceed the maximal allowed capacity Max. To satisfy the capacity constraint, the following mechanism is used. Each time the current solution is improved, the capacity constraint is checked. If the constraint is violated, the solution is immediately repaired by suppressing the elements which have the worst profit/size ratio until the capacity constraint is satisfied.
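This repair mechanism amounts to a greedy knapsack-style drop loop. A minimal sketch, assuming the ratio in question is gi/si (profit over size, which is the natural reading but an assumption here):

```python
def repair_capacity(x, g, s, Max):
    """While s·x exceeds Max, drop the selected pair with the worst
    profit/size ratio g_i/s_i (assumed interpretation of 'worst ratio')."""
    y = list(x)
    while sum(si * yi for si, yi in zip(s, y)) > Max:
        worst = min((i for i, yi in enumerate(y) if yi == 1),
                    key=lambda i: g[i] / s[i])
        y[worst] = 0
    return y
```

For example, with profits [10, 1], sizes [5, 5], and Max = 5, the schedule [1, 1] is repaired to [1, 0]: the second pair has the worse ratio (1/5 versus 10/5) and is dropped first.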
Each time a move is carried out, a single variable xi flips from 0 to 1, and several xj's flip from 1 to 0. It is then tabu to flip these xj values again from 0 to 1 during tabu(j) iterations, where tabu(j) = α·const(j) + freq(j), with const(j) the number of binary and ternary constraints involving the element xj, freq(j) the number of times xj has been flipped from 1 to 0 since the beginning of the search, and α an instance-dependent coefficient which defines a penalty factor for each move. To explain this, a variable involved in a large number of constraints naturally runs more risk of being flipped during a move than a variable with few constraints on it. It is thus logical to give a longer tabu tenure to a move whose variable has many constraints on it. The second part of the function aims to penalize a move which repeats too often. Note that intensification and diversification procedures were also used to enhance the efficiency of the general method.
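The tenure rule combines a static term (constraint degree) and a dynamic term (flip frequency). A sketch under the assumption that the formula is tabu(j) = α·const(j) + freq(j), as reconstructed above; the function name and data layout are illustrative:

```python
def tenure(j, alpha, n_constraints, flips_to_zero):
    """Number of iterations during which x_j may not be flipped back to 1.

    n_constraints[j]: how many binary/ternary constraints involve x_j
    flips_to_zero[j]: how often x_j has been flipped 1 -> 0 so far
    alpha:            instance-dependent penalty coefficient
    """
    return alpha * n_constraints[j] + flips_to_zero[j]

# A heavily constrained, often-dropped variable stays tabu longer:
tenure(0, 2, [5], [3])   # 2*5 + 3 = 13 iterations
```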
6.3. Numerical Comparison with Other Methods
Experiments are carried out on a set of 20 realistic instances provided by the CNES (French National Space Agency) and described in detail in [23]. These instances belong to two different sets: without capacity constraint (13 instances) and with capacity constraint (7 instances). The instances without capacity constraints, as well as one instance with a capacity constraint, will not be commented on: they are very easy to solve, either with an exact method or with the above-described CNS-SAT algorithm. The other six instances have from 488 to 1057 candidate photographs, giving up to 2355 binary variables and 35933 constraints. Existing exact algorithms are unable to solve these instances optimally.
The best known nonexact algorithm was a tabu search TS-SAT proposed by the CNES. The main differences with the CNS-SAT algorithm are the following: (1) TS-SAT uses a different (integer) formulation of the problem; (2) it manipulates only feasible solutions (its search space thus contains only solutions satisfying all the constraints); (3) it uses a different neighborhood structure; (4) it considers only a sample of neighbor solutions to make a move; (5) the tabu tenure for each move is randomly taken from predefined (very small) ranges.
To solve an instance, CNS-SAT is allowed to run 9 million iterations on a PC (200 MHz, 32 MB of RAM), which is considered as reasonable by the CNES. CNS-SAT was run 100 times on each instance with different random seeds, and the average value is returned for each instance. The first three columns of Table 5 give the name of the instance, the number of candidate photographs, and the number n of 0-1 variables. Columns 4 and 5, respectively, show the best profit and the associated computing time (in seconds) obtained with TS-SAT. Columns 6 and 7 give the average profit value and the average time needed by CNS-SAT to find such a solution. It is easy to see that CNS-SAT is much more efficient and quicker than TS-SAT (see also the line labeled “Average”).
7. Conclusion
In this paper, we propose and discuss a generic method for combinatorial optimization problems. Its consideration within various fields shows that CNS is very efficient, robust, quick, and relatively easy to implement. Note that other heuristic solution methods which were not discussed here could also be considered within the CNS framework (e.g., [24, 25]). In contrast to tree search, CNS mainly focuses on the bottom part of the search tree (i.e., it evolves close to the leaves). In contrast with standard local search methods, on the one hand, it can perform jumps in the search tree, which means that the search space is likely to be connected. On the other hand, there is no need to extend the search space by considering unfeasible solutions and penalizing them with a function which can be difficult to design and to tune.
CNS is especially well adapted when the optimization problem can be divided into a series of decision problems, as was the case for three of the presented applications. (i) Graph coloring can be tackled by the search of k-colorings with decreasing values of k. (ii) The frequency assignment problem can be approached at level k by only considering the interference constraints of level k and the imperative constraints; then, if a feasible solution is found at level k, the next level is considered. (iii) The car fleet management problem can be considered with a fixed number k of cars in stock (k being first equal to the number of existing cars in stock), and the provided solution will be the least costly solution among the different considered values of k (k can increase if cars are purchased).
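The decomposition scheme of point (i) can be sketched generically: repeatedly solve the k-coloring decision problem with decreasing k until it fails. The driver below is a hypothetical outline in which `find_k_coloring` stands in for one CNS run on the decision problem:

```python
def minimize_colors(k0, find_k_coloring):
    """Decrease k from k0 until the k-coloring decision problem fails;
    return the smallest feasible k and its coloring."""
    k, best = k0, None
    while True:
        sol = find_k_coloring(k)     # one CNS run on the k-coloring problem
        if sol is None:              # no feasible k-coloring found
            return k + 1, best       # last k that succeeded, and its solution
        best, k = sol, k - 1

# Toy stand-in for a CNS run: pretend every k >= 3 is colorable.
demo = minimize_colors(5, lambda k: f"{k}-coloring" if k >= 3 else None)
```

The same skeleton fits points (ii) and (iii), with the interference level or the fleet size playing the role of k.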
CNS is a very flexible method for at least four reasons. (i) It can manage various ways of representing a solution: the component i of a solution s, denoted s(i), can involve a single piece of information (e.g., the color of a vertex, the car type of a client request, a binary decision value associated with the selection of a photograph or not), or several data of different natures (e.g., a frequency and a polarization, or a binary decision value for the selection of a photograph and the associated camera). This allows one to better manage the repairing phase. (ii) It can consider various types of constraints. It is well adapted for constraints linking two or three variables together, because the repairing phase is usually straightforward in such situations. If a specific constraint involves several variables, such a constraint can be relaxed (at least to save CPU time), as was the case for the satellite range scheduling problem, where the capacity constraint was only considered at specific iterations. (iii) It is also well adapted for some problems where the unassigned variables actually correspond to an expensive assignment for the considered problem (e.g., a nonassigned variable corresponds to a subcontracted request for the car rental company). (iv) Other significant ingredients can easily be added within the framework of CNS to enhance its efficiency, such as intensification or diversification procedures.
CNS can easily be combined with evolutionary heuristics, like genetic or adaptive memory algorithms. This was already successfully performed for graph coloring [17] and a satellite range scheduling problem [21]. In both cases, the resulting methods are the best for the considered problems. Therefore, a relevant avenue of research would consist in hybridizing CNS with other techniques for other optimization problems.