Abstract

Most memetic algorithms (MAs) for graph partitioning reduce the cut size of partitions using iterative improvement. But this local process considers one vertex at a time and fails to move clusters between subsets when the movement of any single vertex increases cut size, even though moving the whole cluster would reduce it. A new heuristic identifies clusters from the population of locally optimized random partitions that must anyway be created to seed the MA, and as the MA runs it makes beneficial cluster moves. Results on standard benchmark graphs show significant reductions in cut size, in some cases improving on the best result in the literature.

1. Introduction

Consider an unweighted undirected graph $G = (V, E)$, where $V$ is a set of vertices and $E$ is the set of edges that connect them. A $k$-way partition of the graph is a partitioning of the vertex set into $k$ disjoint subsets $C_1, C_2, \dots, C_k$. A partition is said to be balanced if the difference in size between the largest and the smallest subset is at most 1, that is, for all $i$ and $j$, $\big||C_i| - |C_j|\big| \le 1$. The cut size of a partition is defined to be the number of edges connecting vertices in different subsets of the partition. The $k$-way graph partitioning problem is the problem of finding a balanced $k$-way partition with the minimum cut size. If $k = 2$, it can be called bipartitioning, and if $k > 2$, multiway partitioning. These problems arise in applications such as sparse matrix factorization, network partitioning, layout and floor planning, circuit placement, social network analysis, and software-defined networking [1, 2].
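To make these definitions concrete, the following C sketch (our illustration, not code from any cited implementation) computes the cut size of a partition; the graph is stored as an edge list, and part[v] holds the index of the subset containing vertex v, matching the gene encoding used later in Section 5.

#include <stddef.h>

/* An edge of an unweighted undirected graph. */
typedef struct { int u, v; } edge;

/* Cut size: the number of edges whose endpoints lie in different subsets. */
int cut_size(const edge *edges, size_t num_edges, const int *part)
{
    int cut = 0;
    for (size_t i = 0; i < num_edges; i++)
        if (part[edges[i].u] != part[edges[i].v])
            cut++;
    return cut;
}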

For general graphs, partitioning is known to be NP-hard [3]. Bui and Jones [4] have shown that even finding good approximate solutions is NP-hard.

Therefore, many heuristic methods have been proposed; some of them work well, but they cannot, of course, guarantee optimality. The simplest heuristics are iterative improvement partitioning (IIP) methods [5, 6], exemplified by the Kernighan-Lin (KL) [7] and Fiduccia-Mattheyses (FM) [8] algorithms. These algorithms only produce solutions that approximate local optima, but this limitation can be overcome by hybridizing them with metaheuristics such as simulated annealing [9], genetic algorithms (GAs) [10], tabu search [11, 12], or ant colony optimization [13]. Recently, a number of techniques based on GAs have achieved notable results for bipartitioning [14–19] and multiway partitioning [20–26]. Kim et al. [27] have surveyed this work.

The use of IIP for local optimization of the partitions produced by a GA becomes less effective as the graph becomes larger. We will show that this is because IIP often fails to move densely interconnected subgraphs, called clusters, between subsets, and hence fails to find partitions with small cut sizes.

The goal of the work reported in this paper is to overcome the barriers to effective search presented by clusters, by modifying the GA so that it helps to move clusters appropriately. We present a memetic algorithm (MA), which is a GA combined with local optimization, in which a heuristic finds clusters in each generation by examining the population of individuals, each of which represents a partition, rather than trying to identify them directly from a single graph. It then moves some of these clusters. This heuristic supplements the well-known ability of MAs to provide attractive initial points for local optimization. Experimental results show that this approach can substantially improve the performance of an MA. The contributions of this work are summarized as follows.
(i) We provide a detailed explanation of the difficulty of moving clusters in graph partitioning and provide experimental results quantifying the impact of clusters on the search for partitions with a small cut size.
(ii) We present a heuristic for detecting and moving clusters, which is based on a new, population-based measure of the distance between vertices called genic distance.
(iii) We show that this heuristic substantially improves the ability of an MA to find good partitions.

The remainder of this paper is organized as follows. In Section 2 we briefly introduce IIP algorithms and the test graphs used in our experiments. In Section 3 we investigate the difficulty of moving clusters in graph partitioning. In Section 4 we describe our new cluster-handling heuristic, and an MA that uses this heuristic is described in Section 5. In Section 6 we present experimental results, and in Section 7 we draw conclusions.

2. Preliminaries

2.1. Iterative Improvement Algorithms in Bipartitioning

Iterative improvement partitioning starts with a random partition. This is refined in a series of passes. At the start of each pass, all the vertices are free to move between subsets. IIP selects vertices and moves them, but each vertex is only moved once during a pass. At the end of the pass, the best partition found during the pass is identified and used as the input to the next pass. Passes continue until there is no further improvement.

There are a number of IIP algorithms, of which KL [7] is often considered to be the first reasonable heuristic for bipartitioning. In KL, the movement of vertices during a pass is restricted to the swapping of a pair of vertices between subsets.

Let $P = \{A, B\}$ be a partition of $V$ into two subsets $A$ and $B$. We define the gain $g_v$ associated with a vertex $v$ to be the reduction in cut size obtained by moving $v$ to the other subset. By extension, the gain obtained by swapping vertices $a \in A$ and $b \in B$ can be expressed as follows:
$$g_{(a,b)} = g_a + g_b - 2\delta(a,b),$$
where $\delta(a,b) = 1$ if $(a,b) \in E$ and $0$ otherwise. KL selects the pair $(a, b)$ with the highest value of $g_{(a,b)}$ and effects the exchange. The vertices $a$ and $b$ are not considered again during the current pass. A sequence of pairs $(a_1, b_1), (a_2, b_2), \dots$ is selected in this way. The algorithm then chooses the $j$ that maximizes $\sum_{i=1}^{j} g_{(a_i,b_i)}$ and exchanges $a_1, \dots, a_j$ with $b_1, \dots, b_j$. KL performs further passes until no improvement is possible.
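These gain computations translate directly into code. The following C sketch is our illustration, assuming a 0/1 adjacency matrix adj stored row-major and side[v] ∈ {0, 1} giving the subset of each vertex:

/* g_v: external edges minus internal edges for vertex v. */
int vertex_gain(int v, int n, const int *adj, const int *side)
{
    int gain = 0;
    for (int u = 0; u < n; u++) {
        if (u == v || !adj[v * n + u]) continue;
        gain += (side[u] != side[v]) ? 1 : -1;
    }
    return gain;
}

/* g_(a,b) = g_a + g_b - 2*delta(a,b), for a and b on opposite sides. */
int swap_gain(int a, int b, int n, const int *adj, const int *side)
{
    return vertex_gain(a, n, adj, side) + vertex_gain(b, n, adj, side)
         - 2 * adj[a * n + b];
}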

FM is another widely used IIP algorithm, which is similar to KL, except that it only moves one vertex at a time. This makes FM faster than KL, with little loss in partition quality. Several variants of KL and FM exist [15, 28, 29].

2.2. Local Optimization Algorithms for Multiway Partitioning

There are three main schemes for multiway partitioning, which are developments of the recursive, pair-wise, and direct approaches [21] to bipartitioning. The recursive KL algorithm bisects the graph recursively until there are $k$ subsets. The pair-wise KL [7] starts with an arbitrary $k$-way partition; it picks two subsets at a time from the $k$ subsets and performs bipartitioning to reduce the cut size between each pair. Sanchis [30] extended the FM algorithm to multiway partitioning and showed that the direct method performed better than recursion. The extended algorithm considers moving each vertex from its current subset to every other subset. To perform local optimization in the proposed MA for multiway partitioning, we use a variant of this algorithm, called EFM (extended FM) [21]; its time complexity is analyzed in [21].
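The gain computation at the heart of the direct scheme can be sketched as follows (our illustration of the idea, not the published EFM code): for a vertex v, the gain of moving it to subset s is the number of edges from v into s minus the number of edges from v to its current subset.

/* Fill gain[s] for s = 0..k-1; gain[part[v]] is not meaningful. */
void move_gains(int v, int n, int k, const int *adj,
                const int *part, int *gain)
{
    int internal = 0;
    for (int s = 0; s < k; s++) gain[s] = 0;
    for (int u = 0; u < n; u++) {
        if (u == v || !adj[v * n + u]) continue;
        if (part[u] == part[v]) internal++;   /* edges that would become cut */
        else gain[part[u]]++;                 /* edges that would be uncut */
    }
    for (int s = 0; s < k; s++)
        gain[s] -= internal;
}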

2.3. Local Search in Memetic Algorithms

It is already clear that combining a GA with local optimization algorithms is an effective approach to the graph partitioning problem [15]. Some authors have explored fast but weak local optimization algorithms. For example, in [31, 32], 2-opt was used to relocate border vertices, which are those with edges that connect to vertices in other subsets. Bui and Moon [10] obtained better results with KL by allowing only a single pass, while restricting the number of vertices to be swapped. Conversely, other authors have reported notable improvements by enhancing local optimization algorithms. For bipartitioning, Kim and Moon [15] suggested a new KL-based local optimization algorithm, formulated using a new type of gain, called lock gain, which only takes into account the edges that connect a vertex to vertices that have already been moved. Combined with a GA, this algorithm obtained impressive results on most benchmark graphs. For multiway partitioning, combinations of MAs with specialized local optimization algorithms have shown good results [20, 21]. Steenbeek et al. [18] proposed what they called a cluster enhancement heuristic, which they combined with an MA and reported successful results; their heuristic uses vertex swaps to identify clusters, and the MA handles only the moving of clusters between subsets.

2.4. Test Graphs

We tested our MA on Johnson's benchmark graphs [9], which have been widely used in other studies [10, 11, 14–17, 20, 21, 23, 33–36]. They comprise 8 random graphs (G) and 8 random geometric graphs (U). The two classes of graphs are briefly described below.
(i) G$n.d$: a random graph on $n$ vertices, in which any two vertices are connected by an edge with an independent probability $p$. The probability is chosen so that the expected vertex degree, $p(n-1)$, is $d$.
(ii) U$n.d$: a random geometric graph on $n$ vertices that lie in the unit square and whose coordinates are chosen uniformly from the unit interval. Every pair of vertices separated by a distance of $t$ or less is connected by an edge. The expected degree of a vertex is $n\pi t^2$.
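For illustration, a graph in the style of the U class can be generated as follows; this is a sketch under our reading of the definition above, and the derivation of the distance threshold t from the target degree d (t = sqrt(d/(n*pi))) is ours.

#include <stdlib.h>
#include <math.h>

/* Random geometric graph: n points uniform in the unit square; vertices
 * within distance t are adjacent, so the expected degree is about n*pi*t^2. */
void make_geometric(int n, double d, double *x, double *y, int *adj)
{
    const double pi = acos(-1.0);
    double t = sqrt(d / (n * pi));    /* choose t so that n*pi*t^2 = d */
    for (int i = 0; i < n; i++) {
        x[i] = (double)rand() / RAND_MAX;
        y[i] = (double)rand() / RAND_MAX;
    }
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double dx = x[i] - x[j], dy = y[i] - y[j];
            adj[i * n + j] = (i != j && dx * dx + dy * dy <= t * t);
        }
}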

3. Difficulty of Moving Clusters

Suppose that the cluster shown in Figure 1(a) is involved in a bipartitioning problem. The four vertices in this cluster are fully interconnected, and they all belong to the same subset. Moving this cluster to the other subset, across the dotted line in Figure 1(b), will reduce the cut size of the partition by 4. However, there is no incentive to move any single vertex, because they all have negative gains: the gains of the four vertices are −1, −4, −2, and −1, respectively. (The individual gains sum to −8, but each of the six intra-cluster edges is counted as a loss by both of its endpoints, and these edges remain uncut when the vertices move together; correcting by +2 per intra-cluster edge gives a net gain of −8 + 2 × 6 = 4.) This example illustrates how IIP algorithms may miss a significant reduction in cut size that could be achieved by moving several vertices together.

The baleful effect of clusters on local search algorithms trying to solve the graph partitioning problem motivated this study. Kim [37, 38] indicated that graph partitioning is hard primarily due to the difficulty of moving clusters. Dutt and Deng [39, 40] have also observed that an IIP method applied to circuit partitioning can fail because of the difficulty of dealing with clusters that straddle subsets.

3.1. Experimental Support

We designed experiments to quantify the effect of clusters on IIP algorithms, represented by the KL algorithm. Using the cluster detection method to be described in Section 4.1, we find clusters in the graph and select one at random. We then take a locally optimal bipartition $P$ obtained by KL and move the selected cluster to the other subset, creating a perturbed partition $P_c$. Applying KL to $P_c$, we obtain a new local optimum $P_c'$.

Assuming that $n_c$, the number of vertices in the cluster, is small, $P_c'$ can be expected to match $P$ if KL is successful in moving the cluster back. However, if KL fails to return the perturbed cluster to its original subset, the cut size of the partition may increase. Repeating this experiment, we derive $p_c$, an approximation of the probability that the cut size of $P_c'$ is larger than that of $P$.

For comparison, we perturbed $n_c$ vertices selected at random within a locally optimized partition, by moving them to the other subset. We call this partition $P_r$. We apply the KL algorithm to $P_r$ and obtain a new locally optimal partition $P_r'$. Repeating this experiment, we derive $p_r$, an approximation of the probability that the cut size of $P_r'$ is larger than that of $P$.

Table 1 shows the values of $p_c$ and $p_r$ for the 16 benchmark graphs. We see that $p_c$ is always larger than $p_r$, as we would expect. We also notice that the gap between $p_c$ and $p_r$ is larger on the geometric graphs (U) than on the random graphs (G).

4. Cluster-Handling Heuristic

Cluster analysis is a well-known problem for which plenty of algorithms exist, many of which require a lot of computation. The insight that motivates our heuristic is that the application of a local optimization process, such as IIP, to a randomly partitioned graph creates a modified partition in which clusters tend to be wholly allocated to one subset or another (and are then difficult to move, as we have already observed). A single partition of this sort is of little help in identifying clusters, because the clusters are not separated at all within each subset; but if we create many random partitions and optimize them, we can reasonably infer that vertices that find themselves in the same subset in most of these partitions belong to the same cluster. We can make this inference in a structured way using the “genic distance” metric that we will introduce. This approach to cluster analysis may seem indirect, but it is efficient in the context of an evolutionary approach to graph partitioning, because the set of partitions required for finding clusters using genic distance is also the population which we must create to be evolved by our MA.

One way of dealing with clusters is to devise a local optimization heuristic that can identify clusters [18, 19, 38]. However, this prevents us from building on previous studies of IIP algorithms.

Our approach is to add an additional heuristic to our MA, which finds and moves clusters. The heuristic identifies clusters in the population of partitions which have already been optimized locally. It selects clusters with higher gains and moves them. IIP local optimization is then applied again.

4.1. Cluster Detection

Let $I[\cdot]$ be an indicator function; that is, $I[\text{true}] = 1$ and $I[\text{false}] = 0$. Then, we can trivially establish the following.

Fact 1. If $p \rightarrow q$ is true, then $I[p] \le I[q]$.

Proof. From Table 2.

Fact 2. $I[p \vee q] \le I[p] + I[q]$.

Proof. From Table 3.

We now define a measure called genic distance, which quantifies the extent to which two vertices connected by an edge can be considered to belong to the same cluster. We denote the genic distance of an edge $(u, v) \in E$ within a population of $p$ locally optimized partitions as $d_g(u, v)$, which can be expressed as follows:
$$d_g(u, v) = \sum_{i=1}^{p} I[g_u^i \ne g_v^i],$$
where $g_u$ and $g_v$ are the genes corresponding to $u$ and $v$, respectively, and $g_v^i$ is the value of gene $g_v$ (i.e., the index of the subset that vertex $v$ belongs to) in the $i$th individual. For convenience, we assume that each vertex has an edge that connects it to itself, so that $d_g(v, v) = 0$ for each vertex $v$. Then, the following proposition holds.

Proposition 1. For each population, $d_g$ is a pseudometric on $V$.

Proof. Since $I[\cdot] \ge 0$, $d_g(u, v) \ge 0$ for all $u, v$. It is enough to show the following three conditions.
(i) $d_g(v, v) = 0$: this is immediate, since $I[g_v^i \ne g_v^i] = 0$ for every $i$.
(ii) Symmetry. Let $u$ and $v$ be in $V$: $d_g(u, v) = \sum_{i=1}^{p} I[g_u^i \ne g_v^i] = \sum_{i=1}^{p} I[g_v^i \ne g_u^i] = d_g(v, u)$.
(iii) Triangle Inequality. Consider any three vertices $u, v, w \in V$. If $g_u^i = g_v^i$ and $g_v^i = g_w^i$, then $g_u^i = g_w^i$ for each $i$. By contraposition, if $g_u^i \ne g_w^i$, then $g_u^i \ne g_v^i$ or $g_v^i \ne g_w^i$. For each $i$, by Facts 1 and 2,
$$I[g_u^i \ne g_w^i] \le I[(g_u^i \ne g_v^i) \vee (g_v^i \ne g_w^i)] \le I[g_u^i \ne g_v^i] + I[g_v^i \ne g_w^i].$$
By summing the above inequalities for all $i$, we get $\sum_{i=1}^{p} I[g_u^i \ne g_w^i] \le \sum_{i=1}^{p} I[g_u^i \ne g_v^i] + \sum_{i=1}^{p} I[g_v^i \ne g_w^i]$. Therefore we have $d_g(u, w) \le d_g(u, v) + d_g(v, w)$. That is, $d_g$ satisfies the triangle inequality.

Proposition 1 suggests that the measure is reasonable. A pseudometric space is a generalization of a metric space in which points need not be distinguishable; thus it is possible that $d_g(u, v) = 0$ for some edge $(u, v)$ with distinct vertices $u \ne v$.
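The definition translates into a simple population scan. The following C sketch (ours) computes the genic distance of an edge (u, v), where pop[i][v] is the subset index of vertex v in the i-th locally optimized individual:

/* d_g(u, v) = number of individuals in which u and v lie in different
 * subsets, i.e., the sum of the indicators I[g_u^i != g_v^i]. */
int genic_distance(int u, int v, int p, int *const *pop)
{
    int d = 0;
    for (int i = 0; i < p; i++)
        if (pop[i][u] != pop[i][v])
            d++;
    return d;
}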

Our heuristic detects clusters by collecting a number of local optima and computing genic distances for all the edges in the graph. This takes $O(p|E|)$ time, but the cost is negligible since the computation is a preprocess performed before the MA runs. The heuristic temporarily eliminates edges with genic distances greater than a threshold value $\tau$, which we set to the smallest value that satisfies a fixed selection criterion. Each remaining connected component containing more than three vertices is considered to be a cluster.
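One straightforward realization of this detection step, sketched below in C, skips the eliminated edges and unions the endpoints of the surviving ones; the union-find structure is our choice, as the paper does not specify the component-finding method. The edge type is the one from the sketch in Section 1, and gdist[i] is the genic distance of edges[i].

static int find_root(int *parent, int v)
{
    while (parent[v] != v) {
        parent[v] = parent[parent[v]];   /* path halving */
        v = parent[v];
    }
    return v;
}

/* After this runs, vertex v belongs to a cluster iff
 * comp_size[find_root(parent, v)] > 3. */
void detect_clusters(int n, const edge *edges, size_t num_edges,
                     const int *gdist, int tau, int *parent, int *comp_size)
{
    for (int v = 0; v < n; v++) { parent[v] = v; comp_size[v] = 1; }
    for (size_t i = 0; i < num_edges; i++) {
        if (gdist[i] > tau) continue;            /* edge eliminated */
        int ru = find_root(parent, edges[i].u);
        int rv = find_root(parent, edges[i].v);
        if (ru != rv) { parent[ru] = rv; comp_size[rv] += comp_size[ru]; }
    }
}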

Figure 2 shows how our heuristic detects clusters. Figure 2(a) shows an example graph with 11 vertices and 15 edges. Four individuals, corresponding to locally optimized partitions from the population, are shown in Figure 2(b). In Figure 2(c), each edge is labeled with its genic distance. If the threshold value of genic distance is 1, then the edges with larger genic distances, indicated by dotted lines, are eliminated, and four connected components remain. The last two of these connected components are considered too small to be clusters. Thus the two clusters shaded in Figure 2(c) remain as candidates for moving.

4.2. Cluster-Moving Scheme

Our heuristic improves the offspring of each generation after crossover by moving the clusters that were detected using the technique described in the previous subsection. To select the clusters to be moved and their target subsets, we introduce a measure called cluster gain, such that $\mathrm{gain}(c, S)$ is the reduction in the cut size of the partition when all the vertices in cluster $c$ are moved to the subset $S$. For example, moving the cluster in Figure 1(a) to the other subset of the partition is associated with a cluster gain of 4.

This cluster-moving scheme, described in Algorithm 1, is applied to each individual generated by crossover, which is a partition that may be unbalanced. However, cut size and cluster gain are well defined on unbalanced partitions. Our scheme does not consider moving every cluster in every partition, because we found that making all clusters movable causes premature convergence of the MA. Thus, our heuristic selects $m$ clusters at random as candidates for moving; in our experiments, we set $m$ to 5. We compute the cluster gain that results from moving each candidate cluster to each of the other subsets. The candidate cluster and destination subset with the highest cluster gain are selected. If this highest gain is positive, all the vertices in the selected cluster are moved to the selected subset, and the cluster is removed from the set of candidates. This process is repeated until no candidates remain or no move yields a positive cluster gain. No attempt is made to balance the partition during cluster-moving; balancing is performed later.

Select candidates: m clusters selected at random;
do  {
 Calculate gain(c, S) for each candidate cluster c and subset S;
 (c*, S*) := the pair with the highest gain(c, S);
 g* := gain(c*, S*);
 if  g* > 0  then  {
  Move cluster c* to subset S*;
  Remove c* from candidates;
 }
}  until  (candidates = ∅  or  g* ≤ 0)
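The cluster gain used in Algorithm 1 can be computed with one scan of the edge list, as in the following C sketch (ours); it also covers clusters that straddle subsets after crossover, where intra-cluster edges may currently be cut.

/* Reduction in cut size if every vertex v with in_cluster[v] != 0 moves
 * to subset dest; each edge contributes (cut before) - (cut after). */
int cluster_gain(const edge *edges, size_t num_edges,
                 const int *part, const int *in_cluster, int dest)
{
    int gain = 0;
    for (size_t i = 0; i < num_edges; i++) {
        int u = edges[i].u, v = edges[i].v;
        int before = (part[u] != part[v]);
        int after;
        if (in_cluster[u] && in_cluster[v])
            after = 0;                      /* both endpoints end up in dest */
        else if (in_cluster[u])
            after = (dest != part[v]);
        else if (in_cluster[v])
            after = (dest != part[u]);
        else
            after = before;                 /* edge untouched by the move */
        gain += before - after;
    }
    return gain;
}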

5. Memetic Search Framework

CMPA (cluster-moving memetic partitioning algorithm) is a memetic algorithm that we have designed for graph partitioning. In this MA, an individual is a $k$-way partition. Each gene in an individual corresponds to a vertex and has a value between 0 and $k-1$, which indicates the subset to which the vertex belongs; that is, the $i$th gene $g_i \in \{0, 1, \dots, k-1\}$. It is a steady-state MA, meaning that only one offspring is produced in each generation. Crossover is followed by a cluster-moving step and then local optimization.

Algorithm 2 shows the processes that make up CMPA, which we now describe in detail.
(i) Initialization. When the MA starts, $p$ individuals (i.e., partitions) are created at random. Then each individual is improved by local optimization. We set $p$ to 30 for bipartitioning and 50 for multiway partitioning ($k = 8$ or $k = 32$).
(ii) Selection. We used roulette-wheel-based proportional selection. The probability that the best individual is chosen was set to four times the probability that the worst is chosen. The fitness value of the $i$th individual is expressed as $f_i = (C_w - C_i) + (C_w - C_b)/3$, where $C_b$, $C_w$, and $C_i$ are the cut sizes of the partitions corresponding to the best, the worst, and the $i$th individual, respectively.
(iii) Normalization. Laszewski [31] first used normalization to improve the performance of a GA, and variants of the technique have been suggested in [23, 26, 41, 42]. The parent individuals are normalized before crossover, following [31, 33]. The subset of one parent which shares the largest number of vertices with a subset of the other parent is selected, and that subset is numbered 0. This process is repeated, incrementing the index, until all subsets are numbered.
(iv) Crossover and Cluster Moving. We used a standard five-point crossover. After crossover, the cluster-handling heuristic described in Section 4.2 is applied to the individual. At this point, individuals usually correspond to unbalanced partitions. We select a random location in the individual and adjust the values of the genes to the right of this location (treating the individual as a circular string) until the partition is balanced. This has the effect of a mutation, and no further mutation operator was introduced.
(v) Local Optimization. KL [7] was used for the bisection problems, and EFM (extended FM) [21] was used for multiway partitioning ($k = 8$ or $k = 32$).
(vi) Replacement. We used a replacement scheme due to Bui and Moon [10]. If an offspring is better than its closer parent, the MA replaces that parent. Otherwise, if it is better than its other parent, that parent is replaced. Otherwise it replaces the worst individual in the population.
(vii) Stopping Condition. This is based on consecutive failures to replace an individual's parents. Termination is triggered by 30 consecutive failures in bipartitioning and 50 in multiway partitioning.

Create initial population of fixed size p;
Apply local optimization to each member of population;
Calculate genic distances from population;
Find clusters and store their information;
do  {
 Select parents P1 and P2 from population;
 Normalization(P1, P2);
 offspring := Crossover(P1, P2);
 Cluster-moving(offspring);
 Local-optimization(offspring);
 Replace(population, offspring);
}  until  (stopping condition);
return the best solution;
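The selection step of Algorithm 2 can be sketched in C as follows. The fitness formula below, f_i = (C_w − C_i) + (C_w − C_b)/3, is our reconstruction: it is the simplest linear form that gives the best individual exactly four times the selection probability of the worst, as stated above.

#include <stdlib.h>

/* Roulette-wheel proportional selection over p individuals, where cut[i]
 * is the cut size of individual i and best/worst index the extremes. */
int select_index(const int *cut, int p, int best, int worst)
{
    double total = 0.0;
    for (int i = 0; i < p; i++)
        total += (cut[worst] - cut[i]) + (cut[worst] - cut[best]) / 3.0;
    double r = total * rand() / ((double)RAND_MAX + 1.0);
    for (int i = 0; i < p; i++) {
        double f = (cut[worst] - cut[i]) + (cut[worst] - cut[best]) / 3.0;
        if ((r -= f) < 0.0) return i;
    }
    return best;   /* fallback for round-off or a uniform population */
}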

6. Experimental Results

We conducted experiments on 2-way, 8-way, and 32-way partitioning. Table 4 shows the performance on bipartitioning of an MA combined with KL (denoted KL-MA) and of CMPA. Table 5 shows the performance of the genetic extended FM algorithm (GEFM) [21], one of the most effective existing approaches, and CMPA on 8-way partitioning, and Table 6 gives the results for 32-way partitioning. CMPA uses the cluster-handling heuristic, but KL-MA and GEFM do not. We performed 1,000 runs for bipartitioning and 100 runs for 8-way and 32-way partitioning. All the programs were written in C, compiled with GNU's gcc compiler, and run on a Pentium IV 2.8 GHz computer under Linux. In the tables, the bold-faced numbers indicate the lower of the average cut sizes obtained by the two algorithms. CMPA outperformed KL-MA and GEFM on most graphs, with comparable running times.

CMPA underperformed on some random graphs, which may be due to the weak cluster structure of these graphs. Its average performance was better on all the geometric graphs. This suggests that effective cluster handling matters more on geometric graphs, consistent with the results in Section 3.1.

The local optimization algorithm is much more expensive than the cluster-handling heuristic; thus, CMPA does not take much longer to run than KL-MA or GEFM. On average, CMPA required 14.14% more time than KL-MA for the bipartitioning problems and 2.02% more time than GEFM in 32-way partitioning; it ran 5.52% faster than GEFM in 8-way partitioning.

7. Concluding Remarks

We devised a cluster-moving heuristic and combined it with a memetic algorithm (MA) for graph partitioning. Experiments on 2-way, 8-way, and 32-way partitioning showed that this heuristic significantly improved the performance of the MA, especially on the 32-way partitioning.

The method of moving clusters that we have introduced addresses a significant weakness in standard IIP algorithms. The idea of adding an operation to complement a local optimization algorithm might be used to address other deficiencies in MAs.

Our method of cluster detection is based on a measure that we call genic distance, which is designed to reflect the extent to which two vertices connected by an edge belong to the same cluster. Instead of computing genic distances once in an initialization phase, an MA could recompute these distances as it progresses: this might improve the accuracy of cluster detection, at some cost in time. We believe that genic distance might also be useful in solving other problems involving clusters, such as the maximum clique problem.

Appendix

Results on Real-World Graphs

We also tested CMPA on four real-world benchmark graphs used in [11, 15, 43]. The sizes of the graphs are given in Table 7. We conducted experiments on 2-way and 8-way partitioning, performing 100 runs for bipartitioning and 50 runs for 8-way partitioning on an Intel Core i7 3.6 GHz computer under Linux. Table 8 compares CMPA with KL-MA on bipartitioning, and Table 9 compares CMPA with GEFM on 8-way partitioning. In the tables, the bold-faced numbers indicate the lower of the average cut sizes obtained by the two algorithms. As in Section 6, CMPA generally performed better than the other algorithms, with comparable running times.

Disclosure

A preliminary version of this paper appeared in Proceedings of the Genetic and Evolutionary Computation Conference, 2007 (p. 1520).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the ICT R&D program of MSIP/IITP, Korea (10045253, The development of SDN/OpenFlow-Based Enterprise Network Controller Technology) project. The authors would like to thank Professor Byung-Ro Moon and Dr. Yongjoo Song for their valuable suggestions for improving this paper. The authors also thank Jong-Pil Kim for providing the source code of GEFM [21].