Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 475018, 16 pages
http://dx.doi.org/10.1155/2012/475018
Research Article

A VNS Metaheuristic with Stochastic Steps for Max 3-Cut and Max 3-Section

1Research Center of Security and Future, School of Finance, Jiangxi University of Finance and Economics, Nanchang 330013, China
2Key Laboratory of Management, Decision and Information Systems, Academy of Mathematics and Systems Science, CAS, Beijing 100190, China

Received 15 February 2012; Accepted 30 May 2012

Academic Editor: John Gunnar Carlsson

Copyright © 2012 Ai-fan Ling. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A heuristic algorithm based on variable neighborhood search (VNS) is proposed to solve the Max 3-cut and Max 3-section problems. By establishing a neighborhood structure for the Max 3-cut problem, we propose a local search algorithm and a variable neighborhood global search algorithm with two stochastic search steps to obtain a global solution. We report numerical results and comparisons with the well-known 0.836-approximation algorithm. The results show that the proposed heuristic can efficiently obtain high-quality solutions and outperforms the 0.836-approximation algorithm numerically on the NP-hard Max 3-cut and Max 3-section problems.

1. Introduction

Given a graph $G(V;E)$ with node set $V$ and edge set $E$, the Max 3-cut problem is to find a partition $S_0, S_1, S_2$ of the set $V$ such that $S_0 \cup S_1 \cup S_2 = V$, $S_i \cap S_j = \emptyset$ ($i \neq j$), and the sum of the weights on the edges connecting the different parts is maximized. Like the Max cut problem, the Max 3-cut problem has long been known to be NP-complete [1], even for unweighted graphs [2], and it also has applications in circuit layout design, statistical physics, and so on [3]. However, owing to the complexity of this problem, progress on it has been much slower than on the Max cut problem. Based on the semidefinite programming relaxation proposed by Goemans and Williamson [4], Frieze and Jerrum [5] obtained a 0.800217-approximation algorithm for the Max 3-cut problem. More recently, Goemans and Williamson [6] and Zhang and Huang [7] improved Frieze and Jerrum's 0.800217 approximation ratio to 0.836 using a complex semidefinite programming relaxation of the Max 3-cut problem.

For the purpose of our analysis, we first introduce some notation. We denote the complex conjugate of $y = a + ib$ by $\bar{y} = a - ib$, where $i = \sqrt{-1}$ is the imaginary unit, and the real and imaginary parts of a complex number by $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$, respectively. For an $n$-dimensional complex vector $\mathbf{y} \in \mathbb{C}^n$ (written in bold) and an $n \times n$ complex matrix $Y \in \mathbb{C}^{n \times n}$, we write $\mathbf{y}^*$ and $Y^*$ for their conjugate transposes; that is, $\mathbf{y}^* = \bar{\mathbf{y}}^T$ and $Y^* = \bar{Y}^T$. The set of $n$-dimensional real symmetric (positive semidefinite) matrices and the set of $n$-dimensional complex Hermitian (positive semidefinite) matrices are denoted by $\mathcal{S}^n$ ($\mathcal{S}^n_+$) and $\mathcal{H}^n$ ($\mathcal{H}^n_+$), respectively. We sometimes write $A \succeq 0$ to indicate $A \in \mathcal{S}^n_+$ (or $A \in \mathcal{H}^n_+$). For any two complex vectors $\mathbf{u}, \mathbf{v} \in \mathbb{C}^n$, their inner product is $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^* \mathbf{v}$. For any two complex matrices $A, B \in \mathcal{H}^n$, their inner product is $\langle A, B \rangle = \mathrm{Tr}(B^* A) = \sum_{i,j} \bar{b}_{ij} a_{ij}$, where $A = (a_{ij})$ and $B = (b_{ij})$. Finally, $|\cdot|$ denotes the modulus of a complex number, and $\|\cdot\|$ denotes the 2-norm of a complex vector or the Frobenius norm of a complex matrix.

Let the third roots of unity be denoted by $\omega^0 = 1$, $\omega = \omega^1 = e^{i(2\pi/3)}$, and $\omega^2 = e^{i(4\pi/3)}$. Introducing complex variables $y_i \in \{1, \omega, \omega^2\}$, $i = 1, \ldots, n$, it is not hard to verify that

$$\frac{2}{3} - \frac{1}{3}\bar{y}_i y_j - \frac{1}{3}\bar{y}_j y_i = \frac{2}{3}\left(1 - \mathrm{Re}\left(\bar{y}_i y_j\right)\right), \tag{1.1}$$

which equals 1 if $y_i \neq y_j$ and 0 otherwise. Denote $S_k = \{i \mid y_i = \omega^k\}$, $k = 0, 1, 2$, and $\mathbf{y} = (y_1, \ldots, y_n)^T$. Then the Max 3-cut problem can be expressed as

$$(\mathrm{M3C}) \quad \max\; f(\mathbf{y}) = \frac{2}{3}\sum_{i<j} w_{ij}\left(1 - \mathrm{Re}\left(\bar{y}_i y_j\right)\right) \quad \text{s.t.}\;\; \mathbf{y} \in \left\{1, \omega, \omega^2\right\}^n, \tag{1.2}$$

where $\mathbf{y} \in \{1, \omega, \omega^2\}^n$ means that $y_i \in \{1, \omega, \omega^2\}$, $i = 1, \ldots, n$, and $W = (w_{ij})$ is the weight matrix of the given graph.
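To make (1.1) and (1.2) concrete: each term $(2/3)(1 - \mathrm{Re}(\bar{y}_i y_j))$ contributes 1 when the endpoints receive different roots of unity and 0 otherwise, so $f(\mathbf{y})$ is exactly the total weight of the cut edges. A minimal Python/numpy sketch (the function name is ours, not the paper's):

```python
import numpy as np

OMEGA = np.exp(2j * np.pi / 3)  # primitive third root of unity

def max3cut_value(W, y):
    """Evaluate f(y) = (2/3) * sum_{i<j} w_ij * (1 - Re(conj(y_i) * y_j)).

    W: symmetric (n x n) weight matrix; y: entries in {1, w, w^2}.
    The factor (2/3)(1 - Re(conj(y_i) y_j)) is 1 when y_i != y_j and 0
    otherwise, so the result is the total weight crossing the 3-partition.
    """
    n = len(y)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += W[i, j] * (2.0 / 3.0) * (1.0 - np.real(np.conj(y[i]) * y[j]))
    return total
```

For the complete (unweighted) triangle, assigning the three distinct roots cuts all three edges, while a constant assignment cuts none.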

By relaxing the complex variable $y_i$ into an $n$-dimensional complex vector $\mathbf{y}_i$, we get a complex semidefinite programming (CSDP) relaxation of (M3C) as follows:

$$(\mathrm{CSDP}) \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\left(1 - \mathrm{Re}\left(\mathbf{y}_i^* \mathbf{y}_j\right)\right) \quad \text{s.t.}\;\; \left\|\mathbf{y}_i\right\| = 1,\; i = 1, \ldots, n;\;\; \left\langle A^k_{ij}, Y \right\rangle \ge -1,\; i, j = 1, \ldots, n,\; k = 0, 1, 2;\;\; Y \succeq 0, \tag{1.3}$$

where $Y_{ij} = \mathbf{y}_i^* \mathbf{y}_j$, $A^k_{ij} = \omega^k \mathbf{e}_i \mathbf{e}_j^T + \bar{\omega}^k \mathbf{e}_j \mathbf{e}_i^T$, and $\mathbf{e}_i$ denotes the vector with zeros everywhere except for a unit in the $i$th component. It is easy to verify that the constraints $\langle A^k_{ij}, Y \rangle \ge -1$ can be expressed as

$$\mathrm{Re}\left(\omega^k Y_{ij}\right) \ge -\frac{1}{2}, \quad k = 0, 1, 2. \tag{1.4}$$

To get an approximate solution of M3C, Goemans and Williamson [6] do not solve the CSDP directly, but an equivalent real SDP of the following form (although some solvers, such as SeDuMi [8] and the earlier version of SDPT3-4.0 [9], can deal with SDPs with complex data, this does not reduce the dimension of the problem):

$$(\mathrm{RSDP}) \quad \max\; \frac{1}{2}\left\langle \begin{pmatrix} Q & O \\ O & Q \end{pmatrix}, X \right\rangle$$
$$\text{s.t.}\;\; \left\langle \begin{pmatrix} \mathbf{e}_i\mathbf{e}_i^T & O \\ O & \mathbf{e}_i\mathbf{e}_i^T \end{pmatrix}, X \right\rangle = 2,\; i = 1, \ldots, n,$$
$$\left\langle \begin{pmatrix} \mathrm{Re}\,A^k_{ij} & -\mathrm{Im}\,A^k_{ij} \\ \mathrm{Im}\,A^k_{ij} & \mathrm{Re}\,A^k_{ij} \end{pmatrix}, X \right\rangle \ge -2,\; 1 \le i < j \le n,\; k = 0, 1, 2,$$
$$\left\langle \begin{pmatrix} A^0_{ij} & O \\ O & -A^0_{ij} \end{pmatrix}, X \right\rangle = 0,\; 1 \le i < j \le n,$$
$$\left\langle \begin{pmatrix} O & A^0_{ij} \\ A^0_{ij} & O \end{pmatrix}, X \right\rangle = 0,\; 1 \le i < j \le n,$$
$$\left\langle \begin{pmatrix} O & \mathbf{e}_i\mathbf{e}_i^T \\ \mathbf{e}_i\mathbf{e}_i^T & O \end{pmatrix}, X \right\rangle = 0,\; i = 1, \ldots, n,$$
$$X \in \mathcal{S}^{2n}_+, \tag{1.5}$$

where $Q = (1/3)(\mathrm{diag}(W\mathbf{e}) - W)$ is (one third of) the Laplacian matrix of the given graph and $O$ is the $n \times n$ all-zeros matrix.

In RSDP, the first, third, and fourth classes of equality constraints ensure that $X_{ii} = 1$, $i = 1, 2, \ldots, n$, and that $X$ has the form

$$X = \begin{pmatrix} R & S \\ -S & R \end{pmatrix}. \tag{1.6}$$

The final two classes of equality constraints ensure that $S_{ii} = 0$ ($i = 1, \ldots, n$) and that $S$ is a skew-symmetric matrix.

If $X$ is an optimal solution of RSDP, then the complex matrix $\hat{Y} = R + iS$ is an optimal solution of CSDP. One can then randomly generate a complex vector $\boldsymbol{\xi} \sim N(0, \hat{Y})$ and set

$$y_i = \begin{cases} 1, & \text{if } \mathrm{Arg}\left(\xi_i\right) \in \left[0, \tfrac{2\pi}{3}\right), \\ \omega, & \text{if } \mathrm{Arg}\left(\xi_i\right) \in \left[\tfrac{2\pi}{3}, \tfrac{4\pi}{3}\right), \\ \omega^2, & \text{if } \mathrm{Arg}\left(\xi_i\right) \in \left[\tfrac{4\pi}{3}, 2\pi\right), \end{cases} \tag{1.7}$$

where $\mathrm{Arg}(\cdot) \in [0, 2\pi)$ denotes the principal value of the argument of a complex number. Goemans and Williamson [6] verified (see also Zhang and Huang [7]) that

$$E\,f(\mathbf{y}) \ge 0.836 \left\langle Q, \hat{Y} \right\rangle. \tag{1.8}$$
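One trial of the rounding rule (1.7) can be sketched as follows, assuming a Hermitian positive semidefinite $Y$ from the relaxation is already at hand. The Cholesky-based complex Gaussian sampler, the small diagonal ridge, and the function name are our implementation choices, not prescribed by the paper:

```python
import numpy as np

OMEGA = np.exp(2j * np.pi / 3)

def round_csdp_solution(Y, rng=None):
    """One trial of the complex rounding (1.7).

    Y: Hermitian PSD matrix from the CSDP relaxation (unit diagonal).
    Draws xi ~ complex N(0, Y) via a Cholesky factor, then assigns each
    y_i the cube root of unity whose 120-degree sector contains Arg(xi_i).
    """
    rng = np.random.default_rng(rng)
    n = Y.shape[0]
    # factor Y = L L*; tiny ridge keeps the factorization numerically safe
    L = np.linalg.cholesky(Y + 1e-12 * np.eye(n))
    xi = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    args = np.angle(xi) % (2 * np.pi)                       # Arg in [0, 2*pi)
    sectors = np.floor(args / (2 * np.pi / 3)).astype(int)  # 0, 1, or 2
    return OMEGA ** sectors
```

With $Y = I$ the components are independent, and every rounded entry is one of the three cube roots of unity.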

The algorithm proposed by Goemans and Williamson [6] achieves a very good approximation ratio, and RSDP can be solved by interior point methods, but the 0.836-approximation algorithm is not practical for numerical study of the Max 3-cut problem. From RSDP, one can see that for a graph with $n$ nodes, RSDP has $2n + 5n(n-1)/2$ constraints and $3n(n-1)/2$ slack variables arising from the inequality constraints. That is, RSDP has a $2n$-dimensional unknown symmetric positive semidefinite matrix variable, a $3n(n-1)/2$-dimensional unknown vector variable, and $2n + 5n(n-1)/2$ constraints, and many of its constraint matrices, although sparse, have no explicit block diagonal structure. For instance, when $n = 100$, RSDP is already a very high-dimensional semidefinite program with 14850 slack variables and 24950 constraints. Moreover, instances with 50 to 100 nodes are only a class of universal, medium-scale Max 3-cut problems. Hence, solving such an RSDP relaxation of M3C with any existing SDP software is very time consuming, so the 0.836-approximation algorithm is not suitable for computational study of the Max 3-cut problem. This limitation of solving M3C via the CSDP (or RSDP) relaxation motivates us to find a new, efficient, and fast algorithm for the Max 3-cut problem for practical purposes.

In the current paper, we first define a $K$-neighborhood structure for the Max 3-cut problem and design a local search algorithm to find a local maximizer. We then propose a variable neighborhood search (VNS) metaheuristic with stochastic steps, in the spirit of the VNS originally introduced by Mladenović and Hansen [10], by which we can efficiently find a high-quality global approximate solution of the Max 3-cut problem. Further, by combining it with a greedy algorithm, we extend the proposed algorithm to the Max 3-section problem. To the best of our knowledge, this is the first computational study of the Max 3-cut problem. To test the performance of the proposed algorithm, we compare its numerical results with those of Goemans and Williamson's 0.836-approximation algorithm.

This paper is organized as follows. In Section 2, we give some definitions and lemmas. In Section 3, we present the VNS metaheuristic for solving the Max 3-cut problem. The VNS is extended to the Max 3-section problem in Section 4. In Section 5, we give some numerical results and comparisons.

2. Preliminaries

In this section, we establish some definitions and facts needed in the sequel. For the third roots of unity $1, \omega, \omega^2$, we have

$$\left|1 - \omega\right|^2 = \left|\omega - \omega^2\right|^2 = \left|1 - \omega^2\right|^2 = 3. \tag{2.1}$$

Denote $\mathbb{S} = \{1, \omega, \omega^2\}^n$. Then, based on (2.1), for any $\mathbf{y} \in \mathbb{S}$, we may define a $K$-neighborhood of $\mathbf{y}$ as follows.

Definition 2.1. For any $\mathbf{y} \in \mathbb{S}$ and any positive integer $K$ ($1 \le K \le n$), one defines the $K$-neighborhood of $\mathbf{y}$, denoted by $N_K(\mathbf{y})$, as the set

$$N_K(\mathbf{y}) = \left\{ \mathbf{z} \in \mathbb{S} \;\middle|\; \|\mathbf{z} - \mathbf{y}\|^2 = \sum_{i=1}^n \left|z_i - y_i\right|^2 \le 3K \right\}. \tag{2.2}$$

In particular, if $K = 1$, we write the 1-neighborhood $N_1(\mathbf{y})$ of $\mathbf{y}$ as $N(\mathbf{y})$.

The boundary of the $K$-neighborhood $N_K(\mathbf{y})$ is defined by $\partial N_K(\mathbf{y}) = \{\mathbf{z} \in \mathbb{S} \mid \|\mathbf{y} - \mathbf{z}\|^2 = 3K\}$. Clearly, $N(\mathbf{y}) = \partial N(\mathbf{y})$. If $\mathbf{z} \in \partial N_K(\mathbf{y})$, we call $\mathbf{z}$ a $K$-neighbor of $\mathbf{y}$. By Definition 2.1, a point $\mathbf{y}$ and its $K$-neighbor $\mathbf{z}$ differ in exactly $K$ components. A straightforward count gives the number of elements of $\partial N_K(\mathbf{y})$, namely $|\partial N_K(\mathbf{y})| = 2^K \binom{n}{K}$; in particular, $|\partial N(\mathbf{y})| = 2n$ when $K = 1$.
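These counting facts are easy to check numerically. In the sketch below (helper names are ours), the neighborhood order of $\mathbf{z}$ relative to $\mathbf{y}$ is recovered as $\|\mathbf{z} - \mathbf{y}\|^2 / 3$, using (2.1):

```python
import numpy as np
from math import comb
from itertools import product

OMEGA = np.exp(2j * np.pi / 3)
ROOTS = (1 + 0j, OMEGA, OMEGA**2)

def neighborhood_order(z, y):
    """Number of components in which z and y differ, computed as
    ||z - y||^2 / 3: by (2.1), |w^a - w^b|^2 = 3 whenever a != b, so
    z lies on the boundary of N_K(y) exactly when this value is K."""
    d2 = float(np.sum(np.abs(np.asarray(z) - np.asarray(y)) ** 2))
    return int(round(d2 / 3.0))

def boundary_size(n, K):
    """|boundary of N_K(y)| = 2^K * C(n, K): choose the K positions that
    change, then one of the 2 remaining cube roots for each of them."""
    return 2**K * comb(n, K)
```

Enumerating all of $\{1, \omega, \omega^2\}^3$ confirms the count $2^1 \binom{3}{1} = 6$ for the 1-neighbors of the vector in Example 2.2.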

Example 2.2. Let $\mathbf{y} = (\omega, \omega, \omega^2)^T \in \{1, \omega, \omega^2\}^3$. Then $(1, \omega, \omega^2)^T \in N(\mathbf{y})$, $(1, \omega^2, \omega^2)^T \in \partial N_2(\mathbf{y}) \subset N_2(\mathbf{y})$, and $(1, \omega^2, \omega)^T \in \partial N_3(\mathbf{y}) \subset N_3(\mathbf{y})$.

Definition 2.3. For any $u \in \{0, 1, 2\}$, define two maps from $\{1, \omega, \omega^2\}$ to itself as follows: $\tau_i(\omega^u) = \omega^{u+i} \in \{1, \omega, \omega^2\}$, $i = 1, 2$ (with the exponent taken modulo 3).

Clearly, for any $u \in \{0, 1, 2\}$, $\tau_i(\omega^u) \neq \omega^u$, $i = 1, 2$, and $\tau_1(\omega^u) \neq \tau_2(\omega^u)$. By Definition 2.3, for any $\mathbf{z} \in N(\mathbf{y})$ there exists a unique component of $\mathbf{z}$, say $z_k$, such that $z_k \neq y_k$ and either $z_k = \tau_1(y_k)$ or $z_k = \tau_2(y_k)$, while all other components of $\mathbf{z}$ and $\mathbf{y}$ coincide. For simplicity, for any $\mathbf{z} \in N(\mathbf{y})$ with $z_k \neq y_k$ and $z_i = y_i$ ($i = 1, \ldots, n$, $i \neq k$), we write $\mathbf{z} = \tau^k_1(\mathbf{y})$ or $\mathbf{z} = \tau^k_2(\mathbf{y})$ according as $z_k = \tau_1(y_k)$ or $z_k = \tau_2(y_k)$. By Definitions 2.1 and 2.3, for any $\mathbf{y} \in \mathbb{S}$, we can construct its 1-neighborhood points using the maps of Definition 2.3; that is, we have the following result.

Lemma 2.4. Let $\tau_i(\cdot)$ ($i = 1, 2$) be defined by Definition 2.3. Then, for any $\mathbf{y} \in \mathbb{S}$ and any fixed positive integer $k$ ($1 \le k \le n$), one has

$$\tau^k_i(\mathbf{y}) \in N(\mathbf{y}), \quad i = 1, 2; \tag{2.3}$$

that is, $\tau^k_1(\mathbf{y})$ and $\tau^k_2(\mathbf{y})$ are two 1-neighborhood points of $\mathbf{y}$.

Definition 2.5. A point $\hat{\mathbf{y}} \in \mathbb{S}$ is called a $K$-local maximizer of the function $f$ over $\mathbb{S}$ if $f(\hat{\mathbf{y}}) \ge f(\mathbf{y})$ for all $\mathbf{y} \in N_K(\hat{\mathbf{y}})$. Furthermore, if $f(\hat{\mathbf{y}}) \ge f(\mathbf{y})$ for all $\mathbf{y} \in \mathbb{S}$, then $\hat{\mathbf{y}}$ is called a global maximizer of $f$ over $\mathbb{S}$. A 1-local maximizer of $f$ is also simply called a local maximizer of $f$ over $\mathbb{S}$.

3. VNS for Max 3-Cut

3.1. Local Search Algorithm

Let $\mathbf{y}^0 = (y^0_1, \ldots, y^0_n)^T \in \mathbb{S}$ be a feasible solution of problem M3C. If $\mathbf{y}^0$ is not a local maximizer of $f$, then we may find a $\tilde{\mathbf{y}} \in N(\mathbf{y}^0)$ such that $f(\tilde{\mathbf{y}}) = \max\{f(\mathbf{y}) \mid \mathbf{y} \in N(\mathbf{y}^0)\}$. Clearly $f(\tilde{\mathbf{y}}) \ge f(\mathbf{y}^0)$. If $\tilde{\mathbf{y}}$ is still not a local maximizer of $f$, we replace $\mathbf{y}^0$ with $\tilde{\mathbf{y}}$ and repeat the process until a point $\hat{\mathbf{y}}$ satisfying $f(\hat{\mathbf{y}}) = \max\{f(\mathbf{y}) \mid \mathbf{y} \in N(\hat{\mathbf{y}})\}$ is found, which indicates that $\hat{\mathbf{y}}$ is a local maximizer of $f$.

For any positive integer $k$ ($1 \le k \le n$), let $\mathbf{y}^k = (y^k_1, \ldots, y^k_n)^T = \tau^k_i(\mathbf{y}^0) \in N(\mathbf{y}^0)$ ($i = 1, 2$); that is,

$$y^k_i = y^0_i, \quad i = 1, \ldots, k-1, k+1, \ldots, n; \qquad y^k_k \neq y^0_k. \tag{3.1}$$

Denote

$$\delta(k) = f\left(\mathbf{y}^0\right) - f\left(\mathbf{y}^k\right). \tag{3.2}$$

Then we have the following result, whose proof is clear.

Lemma 3.1. Consider

$$\delta(k) = \begin{cases} \dfrac{2}{3}\displaystyle\sum_{i=1}^{k-1} w_{ik}\,\mathrm{Re}\left(\bar{y}^0_i\left(y^k_k - y^0_k\right)\right) + \dfrac{2}{3}\displaystyle\sum_{j=k+1}^{n} w_{kj}\,\mathrm{Re}\left(\left(\bar{y}^k_k - \bar{y}^0_k\right) y^0_j\right), & k > 1; \\[2mm] \dfrac{2}{3}\displaystyle\sum_{j=k+1}^{n} w_{kj}\,\mathrm{Re}\left(\left(\bar{y}^k_k - \bar{y}^0_k\right) y^0_j\right), & k = 1. \end{cases} \tag{3.3}$$

Based on Lemma 3.1, if we know the value $f(\mathbf{y}^0)$, then we can obtain the value $f(\mathbf{y}^k)$ at the next iterate $\mathbf{y}^k$ by computing $\delta(k)$ via (3.3), instead of evaluating $f(\mathbf{y}^k)$ directly, which sharply reduces the computational cost. By Definition 2.1, for fixed $k$ there exist two points satisfying (3.1); that is, when $\mathbf{y}^k \in N(\mathbf{y}^0)$ and (3.1) holds, either $y^k_k = \tau_1(y^0_k)$ or $y^k_k = \tau_2(y^0_k)$. For convenience, we denote $\delta(k)$ by $\delta_1(k)$ when $y^k_k = \tau_1(y^0_k)$ and by $\delta_2(k)$ when $y^k_k = \tau_2(y^0_k)$. In what follows, we describe the local search algorithm for the Max 3-cut problem, denoted LSM3C, by which we can obtain a local maximizer of $f(\mathbf{y})$ over $\mathbb{S}$.

LSM3C proceeds as follows.
(1) Input any initial feasible solution $\mathbf{y}^0$ of problem M3C.
(2) For $k$ from 1 to $n$: set $\mathbf{z}^k_1 = \tau^k_1(\mathbf{y}^0)$ and calculate $\delta_1(k)$; then set $\mathbf{z}^k_2 = \tau^k_2(\mathbf{y}^0)$ and calculate $\delta_2(k)$.
(3) Find $\delta_{i^*}(k^*)$ such that

$$\delta_{i^*}\left(k^*\right) = \min\left\{\delta_1(1), \delta_2(1), \ldots, \delta_1(k), \delta_2(k), \ldots, \delta_1(n), \delta_2(n)\right\}. \tag{3.4}$$

(4) If $\delta_{i^*}(k^*) \ge 0$, set $\hat{\mathbf{y}} = \mathbf{y}^0$, return $\hat{\mathbf{y}}$, and stop. Otherwise, go to the next step.
(5) Set $\mathbf{y}^0 = \tau^{k^*}_{i^*}(\mathbf{y}^0)$; go to Step (2).
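The steps above can be sketched compactly in Python (function names are ours). Each candidate move flips one component to one of its two alternatives, and its effect on $f$ is the $O(n)$ increment of Lemma 3.1 rather than a full $O(n^2)$ re-evaluation:

```python
import numpy as np

OMEGA = np.exp(2j * np.pi / 3)

def cut_value(W, y):
    """f(y) = (2/3) sum_{i<j} w_ij (1 - Re(conj(y_i) y_j)): total cut weight."""
    n = len(y)
    return (2.0 / 3.0) * sum(
        W[i, j] * (1.0 - np.real(np.conj(y[i]) * y[j]))
        for i in range(n) for j in range(i + 1, n))

def local_search_m3c(W, y0):
    """LSM3C sketch.  For every k, both alternatives tau_1(y_k), tau_2(y_k)
    are scored by delta(k) = f(y0) - f(y^k), computed in O(n) per move as
    in Lemma 3.1 (using Re(conj(a) b) = Re(a conj(b)) to merge the two sums).
    The most negative delta is taken; the search stops at a 1-local maximizer,
    i.e. when every delta >= 0."""
    y = np.array(y0, dtype=complex)
    n = len(y)
    while True:
        best_delta, best_k, best_val = 0.0, None, None
        for k in range(n):
            for new in (OMEGA * y[k], OMEGA**2 * y[k]):  # tau_1, tau_2
                diff_conj = np.conj(new - y[k])
                delta = (2.0 / 3.0) * sum(
                    W[j, k] * np.real(diff_conj * y[j])
                    for j in range(n) if j != k)
                if delta < best_delta:
                    best_delta, best_k, best_val = delta, k, new
        if best_k is None:   # no improving 1-neighbor remains
            return y
        y[best_k] = best_val
```

On the complete triangle, starting from the constant assignment, the search reaches the optimal cut of value 3 in two moves.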

3.2. Variable Neighborhood Stochastic Search

Let $\hat{\mathbf{y}}$ be a local maximizer obtained by LSM3C and $K_{\max}$ ($1 < K_{\max} \le n$) a fixed positive integer. We now describe the variable neighborhood search (VNS) with stochastic steps, by which we can find an approximate global maximizer of problem M3C. The proposed VNS algorithm has three phases. First, for a given positive integer $K \le K_{\max}$, a $K$-neighborhood point, say $\mathbf{y}'$, is randomly selected; that is, $\mathbf{y}' \in N_K(\hat{\mathbf{y}})$. Next, a solution, say $\hat{\hat{\mathbf{y}}}$, is obtained by applying algorithm LSM3C to $\mathbf{y}'$. Finally, the current solution jumps from $\hat{\mathbf{y}}$ to $\hat{\hat{\mathbf{y}}}$ if the latter improves on the former; otherwise, the order $K$ of the neighborhood is increased by one while $K < K_{\max}$, and the above steps are repeated until a stopping condition is met. This VNS variant, called VNS-k [11], can be stated as follows.

VNS-k proceeds as follows.
(1) Arbitrarily choose a point $\mathbf{y}^0 \in \mathbb{S}$, run LSM3C starting from $\mathbf{y}^0$, and denote the obtained local maximizer by $\hat{\mathbf{y}}$. Set $K = 1$.
(2) Randomly take a point $\mathbf{y}' \in \partial N_{I(K)}(\hat{\mathbf{y}})$, run LSM3C again from $\mathbf{y}'$, and denote the new local maximizer by $\hat{\hat{\mathbf{y}}}$.
(3) If $f(\hat{\hat{\mathbf{y}}}) > f(\hat{\mathbf{y}})$, then set $\hat{\mathbf{y}} = \hat{\hat{\mathbf{y}}}$ and $K = 1$; go to Step (2).
(4) If $K < K_{\max}$ ($\le n$), set $K = K + 1$ and go to Step (2). Otherwise, return $\hat{\mathbf{y}}$ as an approximate global solution of problem M3C and stop.

The subscript $I(K)$ in Step (2) is a function of $K$ and is itself a positive integer not greater than $n$. It embodies the main device of converting the current neighborhood of the local maximizer $\hat{\mathbf{y}}$ into another neighborhood of $\hat{\mathbf{y}}$. For a given $K_{\max}$, let $m = \lfloor n / K_{\max} \rfloor$ and $K_0 = n - m K_{\max}$, where $\lfloor a \rfloor$ denotes the integral part of $a$. We divide the $n$ neighborhoods of $\hat{\mathbf{y}}$, $N(\hat{\mathbf{y}}), N_2(\hat{\mathbf{y}}), \ldots, N_K(\hat{\mathbf{y}}), \ldots, N_n(\hat{\mathbf{y}})$, into $K_{\max}$ neighborhood blocks $N_{I(1)}(\hat{\mathbf{y}}), \ldots, N_{I(K_{\max})}(\hat{\mathbf{y}})$ such that, for $K = 1, 2, \ldots, K_{\max} - K_0$,

$$N_{(K-1)m+1}(\hat{\mathbf{y}}) \subseteq N_{I(K)}(\hat{\mathbf{y}}) \subseteq N_{Km}(\hat{\mathbf{y}}), \tag{3.5}$$

and, for $K = K_{\max} - K_0 + 1, \ldots, K_{\max}$,

$$N_{(K-1)(m+1)+1-(K_{\max}-K_0)}(\hat{\mathbf{y}}) \subseteq N_{I(K)}(\hat{\mathbf{y}}) \subseteq N_{K(m+1)-(K_{\max}-K_0)}(\hat{\mathbf{y}}). \tag{3.6}$$

To obtain the $K_{\max}$ neighborhood blocks of $\hat{\mathbf{y}}$, we divide the set $\{1, 2, \ldots, n\}$ into $K_{\max}$ disjoint subsets, where each of the first $K_{\max} - K_0$ subsets contains $m$ integers and each of the last $K_0$ subsets contains $m + 1$ integers. For any integer $K$ ($\le K_{\max}$), let

$$I(K) = (K-1)m + \lfloor cm \rfloor + 1, \quad K = 1, 2, \ldots, K_{\max} - K_0, \tag{3.7}$$

or

$$I(K) = \left(K_{\max} - K_0\right)m + \left(K - 1 - K_{\max} + K_0\right)(m+1) + \lfloor (m+1)c \rfloor + 1, \quad K = K_{\max} - K_0 + 1, \ldots, K_{\max}. \tag{3.8}$$

Then we randomly choose a point $\mathbf{y}' \in \partial N_{I(K)}(\hat{\mathbf{y}})$, where $c \in (0, 1)$ is a random number drawn from the uniform distribution $\mathcal{U}(0, 1)$, so that $N_{I(K)}(\hat{\mathbf{y}})$ satisfies (3.5) or (3.6).
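One way to realize this block partition in code (a sketch under our naming; a uniform draw over a block plays the role of the variate $c$ in (3.7) and (3.8)):

```python
import numpy as np

def block_orders(n, k_max):
    """Partition the neighborhood orders {1, ..., n} into k_max consecutive
    blocks as in (3.5)-(3.8): with m = floor(n / k_max) and K0 = n - m*k_max,
    the first k_max - K0 blocks contain m orders each and the last K0 blocks
    contain m + 1 orders each."""
    m = n // k_max
    k0 = n - m * k_max
    blocks, start = [], 1
    for k in range(1, k_max + 1):
        size = m if k <= k_max - k0 else m + 1
        blocks.append(list(range(start, start + size)))
        start += size
    return blocks

def sample_order(blocks, k, rng=None):
    """Stochastic step: I(K) is drawn uniformly from block K (1-based)."""
    rng = np.random.default_rng(rng)
    return int(rng.choice(blocks[k - 1]))
```

For $n = 10$ and $K_{\max} = 3$ this gives $m = 3$, $K_0 = 1$, and the blocks $\{1,2,3\}$, $\{4,5,6\}$, $\{7,8,9,10\}$.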

VNS-k stops when the maximum neighborhood order $K_{\max}$ is reached. Additionally, we consider another termination criterion based on a maximum CPU time, and denote the resulting variant VNS-t. VNS-t can obtain a better solution than VNS-k, since it effectively runs VNS-k several times within the maximum allowed time $t_{\max}$, but it generally spends more computational time. VNS-t can be stated as follows.

VNS-t proceeds as follows.
(1) Set $t_{\mathrm{CPU}} = 0$, run VNS-k from an arbitrary initial point $\mathbf{y}^0 \in \mathbb{S}$, and let $\hat{\mathbf{y}}$ be the local optimal solution obtained.
(2) If $K = K_{\max}$ ($\le n$), go to Step (3).
(3) If $t_{\mathrm{CPU}} < t_{\max}$, set $K = 1$ and go to Step (2) of VNS-k. Otherwise, return $\hat{\mathbf{y}}$ as an approximate global solution of problem M3C and stop.

We mention that this differs from the classical variable neighborhood search metaheuristic originally proposed by Mladenović and Hansen [10]. In order to obtain a global optimal solution or a high-quality approximate solution of problem M3C, we use two stochastic steps in VNS. First, for a fixed $K$, a $K$-neighbor of $\hat{\mathbf{y}}$ is chosen randomly. Second, by the definition of $I(K)$, when we change the neighborhood of $\hat{\mathbf{y}}$ from $N_{I(K-1)}$ to $N_{I(K)}$, $N_{I(K)}$ may be any one of the neighborhoods $N_{(K-1)m+j}$, $j = 1, 2, \ldots, m$, of $\hat{\mathbf{y}}$, as decided by the random number $c$. In VNS, the positive integer $K_{\max}$ determines the largest neighborhood block of $\hat{\mathbf{y}}$ that is searched, which in turn directly determines the CPU time of VNS. Thanks to the second stochastic step, we may choose a $K_{\max}$ that is relatively small compared with $n$, which decreases the computational time.

4. A Greedy Algorithm for Max 3-Section

When the number of nodes $n$ is a multiple of three and the condition $|S_0| = |S_1| = |S_2| = n/3$ is imposed, the Max 3-cut problem becomes the Max 3-section problem. Noting that $1 + \omega + \omega^2 = 0$, the Max 3-section problem can be formulated as the following program M3S:

$$(\mathrm{M3S}) \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\left(1 - \mathrm{Re}\left(\bar{y}_i y_j\right)\right) \quad \text{s.t.}\;\; \sum_{i=1}^n y_i = 0,\;\; \mathbf{y} \in \mathbb{S}, \tag{4.1}$$

and its CSDP relaxation is

$$(\mathrm{CSDP1}) \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\left(1 - \mathrm{Re}\left(\mathbf{y}_i^* \mathbf{y}_j\right)\right) \quad \text{s.t.}\;\; \left\langle \mathbf{e}\mathbf{e}^T, Y \right\rangle = 0;\;\; \left\|\mathbf{y}_i\right\| = 1,\; i = 1, \ldots, n;\;\; \left\langle A^k_{ij}, Y \right\rangle \ge -1,\; i, j = 1, \ldots, n,\; k = 0, 1, 2;\;\; Y \succeq 0, \tag{4.2}$$

where $\mathbf{e}$ is the column vector of all ones. Andersson [12] extended Frieze and Jerrum's random rounding method to M3S and obtained a $(2/3 + O(1/n^3))$-approximation algorithm, which is currently the best approximation ratio for M3S; see also the recent work of Gaur et al. [13]. The author of the present paper considered a special case of the Max 3-section problem and obtained a 0.6733-approximation algorithm; see Ling (2009) [14].

Clearly, the feasible region of problem M3S is a subset of $\mathbb{S}$, and the optimal value of problem M3S is not greater than that of problem M3C. Assume that we have obtained a global optimal solution or a high-quality approximate solution $\hat{\mathbf{y}}$ of problem M3C. Obviously, $\hat{\mathbf{y}}$ may not satisfy the condition $\sum_{i=1}^n \hat{y}_i = 0$, but we may adjust $\hat{\mathbf{y}}$ by a greedy algorithm into a new feasible solution $\mathbf{y}^s$ satisfying $\sum_{i=1}^n y^s_i = 0$. This is the motivation for the greedy algorithm we propose for the Max 3-section problem.

For the sake of our analysis, we assume without loss of generality that the local maximizer $\hat{\mathbf{y}}$ satisfies $|S_0| = \max\{|S_0|, |S_1|, |S_2|\}$; that is, $S_0 = \{i \mid \hat{y}_i = 1\}$ is the subset of $V$ with maximum cardinality. Indeed, if $|S_k| = \max\{|S_0|, |S_1|, |S_2|\}$ for some $k \neq 0$ ($k = 1, 2$), we may set $y^N_i = \bar{\omega}^k \hat{y}_i$, $i = 1, \ldots, n$. The resulting new solution $\mathbf{y}^N = (y^N_1, \ldots, y^N_n)$ does not change the objective value, since $f(\hat{\mathbf{y}}) = f(\bar{\omega}^k \hat{\mathbf{y}})$ ($k = 1, 2$); moreover, the new partition $\{S^N_0, S^N_1, S^N_2\}$ induced by $\mathbf{y}^N$ satisfies $|S^N_0| = \max\{|S^N_0|, |S^N_1|, |S^N_2|\}$. Under our assumption, four possible cases remain for the partition $S = \{S_0, S_1, S_2\}$.

Case 1. $|S_0| \ge |S_1| \ge n/3 \ge |S_2|$.

Case 2. $|S_0| \ge n/3 \ge |S_1| \ge |S_2|$.

Case 3. $|S_0| \ge |S_2| \ge n/3 \ge |S_1|$.

Case 4. $|S_0| \ge n/3 \ge |S_2| \ge |S_1|$.
The size-adjusting greedy algorithms for Cases 3 and 4 are similar to those for Cases 1 and 2. Hence, we mainly consider Cases 1 and 2 for adjusting the partition of $V$ from $S = \{S_0, S_1, S_2\}$ to $S' = \{S'_0, S'_1, S'_2\}$ such that $|S'_k| = n/3$, $k = 0, 1, 2$. Denote

$$\delta_0(i) = \sum_{j \in S_1 \cup S_2} w_{ij},\; i \in S_0; \qquad \delta_{01}(i) = \sum_{j \in S_1} w_{ij},\; i \in S_0; \qquad \delta_{10}(i) = \sum_{j \in S_0} w_{ij},\; i \in S_1;$$
$$\delta_{02}(i) = \sum_{j \in S_2} w_{ij},\; i \in S_0; \qquad \delta_{20}(i) = \sum_{j \in S_0} w_{ij},\; i \in S_2; \qquad \delta_{12}(i) = \sum_{j \in S_2} w_{ij},\; i \in S_1; \qquad \delta_{21}(i) = \sum_{j \in S_1} w_{ij},\; i \in S_2. \tag{4.3}$$

Then a simple computation gives

$$\delta_0(i) = \delta_{01}(i) + \delta_{02}(i) \;\text{ for each } i \in S_0; \qquad \sum_{i \in S_k} \delta_{kl}(i) = \sum_{i \in S_l} \delta_{lk}(i),\; k, l = 0, 1, 2,\; k \neq l;$$
$$f(\hat{\mathbf{y}}) = \sum_{i \in S_0} \delta_0(i) + \sum_{i \in S_1} \delta_{12}(i) = \sum_{i \in S_0} \delta_{01}(i) + \sum_{i \in S_0} \delta_{02}(i) + \sum_{i \in S_1} \delta_{12}(i) = d_{01} + d_{02} + d_{12}, \tag{4.4}$$

where $d_{01} = \sum_{i \in S_0} \delta_{01}(i)$, $d_{02} = \sum_{i \in S_0} \delta_{02}(i)$, and $d_{12} = \sum_{i \in S_1} \delta_{12}(i)$.
In what follows, we describe the size-adjusting greedy algorithms (SAGAs) for Cases 1 and 2 and denote them SAGA1 and SAGA2, respectively.
SAGA1 proceeds as follows.
(1) Calculate

$$m_{02} = \frac{\sum_{i \in S_0} \delta_{02}(i)}{\left|S_0\right|}, \qquad m_{12} = \frac{\sum_{i \in S_1} \delta_{12}(i)}{\left|S_1\right|}. \tag{4.5}$$

(2) If $m_{02} \ge m_{12}$, order $S_1 = \{j_1, j_2, \ldots, j_{|S_1|}\}$ so that $\delta_{12}(j_l) \ge \delta_{12}(j_{l+1})$, $l = 1, 2, \ldots, |S_1| - 1$. Set $S'_1 = \{j_1, j_2, \ldots, j_{n/3}\}$ and $S'_2 = S_2 \cup (S_1 \setminus S'_1)$, and recalculate

$$\delta_{02}(i) = \sum_{j \in S'_2} w_{ij} \tag{4.6}$$

for each $i \in S_0$. Order $S_0 = \{i_1, i_2, \ldots, i_{|S_0|}\}$ so that $\delta_{02}(i_k) \ge \delta_{02}(i_{k+1})$. Set $S'_0 = \{i_1, i_2, \ldots, i_{n/3}\}$ and $S'_2 = S'_2 \cup (S_0 \setminus S'_0)$.
(3) If $m_{02} < m_{12}$, order $S_0 = \{i_1, i_2, \ldots, i_{|S_0|}\}$ so that $\delta_{02}(i_k) \ge \delta_{02}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$; set $S'_0 = \{i_1, i_2, \ldots, i_{n/3}\}$ and $S'_2 = S_2 \cup (S_0 \setminus S'_0)$, and then recalculate

$$\delta_{12}(i) = \sum_{j \in S'_2} w_{ij} \tag{4.7}$$

for each $i \in S_1$. Order $S_1 = \{j_1, j_2, \ldots, j_{|S_1|}\}$ so that $\delta_{12}(j_k) \ge \delta_{12}(j_{k+1})$; set $S'_1 = \{j_1, j_2, \ldots, j_{n/3}\}$ and $S'_2 = S'_2 \cup (S_1 \setminus S'_1)$.
(4) Return the current partition $S' = \{S'_0, S'_1, S'_2\}$; stop.
SAGA2 proceeds as follows.
(1) Calculate $d_{01} = \sum_{i \in S_0} \delta_{01}(i)$, $d_{02} = \sum_{i \in S_0} \delta_{02}(i)$, and

$$m_{01} = \frac{d_{01}}{\left|S_0\right|}, \qquad m_{02} = \frac{d_{02}}{\left|S_0\right|}. \tag{4.8}$$

(2) If $m_{01} \le m_{02}$, order

$$S_0 = \left\{i_1, i_2, \ldots, i_{|S_0|}\right\} \tag{4.9}$$

so that $\delta_{01}(i_k) \ge \delta_{01}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$. Set

$$S'_0 = \left\{i_1, i_2, \ldots, i_{|S_0| - q_1}\right\}, \qquad S'_1 = S_1 \cup \left(S_0 \setminus S'_0\right), \tag{4.10}$$

where $q_1 = (n/3) - |S_1|$. Recalculate

$$\delta_{02}(i) = \sum_{j \in S_2} w_{ij}, \quad i \in S'_0, \tag{4.11}$$

and order

$$S'_0 = \left\{i_1, i_2, \ldots, i_{|S'_0|}\right\} \tag{4.12}$$

so that $\delta_{02}(i_k) \ge \delta_{02}(i_{k+1})$, $k = 1, 2, \ldots, |S'_0| - 1$. Set

$$S''_0 = \left\{i_1, i_2, \ldots, i_{n/3}\right\}, \qquad S'_2 = S_2 \cup \left(S'_0 \setminus S''_0\right). \tag{4.13}$$

(3) If $m_{01} > m_{02}$, order

$$S_0 = \left\{i_1, i_2, \ldots, i_{|S_0|}\right\} \tag{4.14}$$

so that $\delta_{02}(i_k) \ge \delta_{02}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$. Set

$$S'_0 = \left\{i_1, i_2, \ldots, i_{|S_0| - q_2}\right\}, \qquad S'_2 = S_2 \cup \left(S_0 \setminus S'_0\right), \tag{4.15}$$

where $q_2 = (n/3) - |S_2|$. Recalculate

$$\delta_{01}(i) = \sum_{j \in S_1} w_{ij}, \quad i \in S'_0, \tag{4.16}$$

and order

$$S'_0 = \left\{i_1, i_2, \ldots, i_{|S'_0|}\right\} \tag{4.17}$$

so that $\delta_{01}(i_k) \ge \delta_{01}(i_{k+1})$, $k = 1, 2, \ldots, |S'_0| - 1$. Set

$$S''_0 = \left\{i_1, i_2, \ldots, i_{n/3}\right\}, \qquad S'_1 = S_1 \cup \left(S'_0 \setminus S''_0\right). \tag{4.18}$$

(4) Return the current partition $S' = \{S''_0, S'_1, S'_2\}$; stop.
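For illustration only, the rebalancing idea behind SAGA1 and SAGA2 can be compressed into a single greedy loop; the sketch below is ours and is not the case-by-case algorithm above verbatim:

```python
import numpy as np

OMEGA = np.exp(2j * np.pi / 3)
ROOTS = (1 + 0j, OMEGA, OMEGA**2)

def rebalance(W, y):
    """Simplified greedy size adjustment in the spirit of SAGA (not
    SAGA1/SAGA2 verbatim): while some part exceeds n/3, move one vertex
    from a largest part to a smallest part, choosing the vertex whose
    move sacrifices the least cut weight.  Assumes 3 divides n.
    Each move shrinks the total imbalance, so the loop terminates."""
    y = list(y)
    n = len(y)
    target = n // 3
    while True:
        parts = [[i for i in range(n) if np.isclose(y[i], r)] for r in ROOTS]
        sizes = [len(p) for p in parts]
        if max(sizes) == target:        # sizes sum to n = 3*target: all equal
            return np.array(y)
        src = int(np.argmax(sizes))     # an oversized part
        dst = int(np.argmin(sizes))     # an undersized part
        # moving i from src to dst cuts its edges into src \ {i} and
        # un-cuts its edges into dst; pick the most profitable vertex
        best = max(parts[src],
                   key=lambda i: sum(W[i, j] for j in parts[src] if j != i)
                               - sum(W[i, j] for j in parts[dst]))
        y[best] = ROOTS[dst]
```

The returned assignment has equal part sizes, so it satisfies the balance condition $\sum_i y_i = 0$.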

5. Numerical Results

This section reports experimental results for some instances of the Max 3-cut and Max 3-section problems obtained with the proposed VNS metaheuristic, together with a quantitative comparison with the 0.836-approximation algorithm. The computational experiments were performed on an Intel Pentium 4 processor at 2.0 GHz with 512 MB of RAM, and all algorithms were coded in Matlab. Because the RSDP relaxation of M3C contains many slack variables, many constraints, and matrix variables without block diagonal structure, in our numerical comparisons we chose SDPT3-4.0 [9], one of the best-known semidefinite programming solvers, to solve the RSDP relaxation of M3C.

All our test problems are generated randomly in the following way. Let $p \in (0, 1)$ be a constant and $r \in (0, 1)$ a random number. If $r \le p$, then there is an edge between nodes $i$ and $j$ with weight $w_{ij}$, a random integer between 1 and 10; otherwise, $w_{ij} = 0$, that is, there is no edge between nodes $i$ and $j$. Because of the memory limits of SDPT3, when $n > 200$, RSDP becomes a huge semidefinite program with no fewer than 59700 slack variables and 99900 constraints and exceeds the memory capacity of SDPT3. Hence, in the numerical experiments we consider 30 instances with $p = 0.1, 0.3, 0.6$ and $n$ varying from 20 to 200.
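The instance generator just described can be sketched as follows (the function name is ours):

```python
import numpy as np

def random_instance(n, p, seed=None):
    """Random test graph as described above: each pair (i, j), i < j,
    independently receives an integer weight in {1, ..., 10} with
    probability p, and weight 0 (no edge) otherwise."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() <= p:
                W[i, j] = W[j, i] = rng.integers(1, 11)  # integer in [1, 10]
    return W
```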

First, we examine the influence of $K_{\max}$ on the quality of the solution obtained by VNS-k. For a given graph, we take $K_{\max} = 3, 5, 10, 15, 30$; Table 1 presents the results, where W$np$ in the first column of this and the following tables means a graph randomly generated with $n$ nodes and density $p$; for instance, W30.6 denotes a graph generated randomly with $n = 30$ and $p = 0.6$. We find from Table 1 that the influence of $K_{\max}$ on the objective value (denoted Obj in Table 1) is slight when $K_{\max} > 5$, but the CPU time increases sharply as $K_{\max}$ increases. This result is not surprising: because $I(K) > K$, we randomly choose the point $\mathbf{y}'$ in $\partial N_{I(K)}(\hat{\mathbf{y}})$ instead of $\partial N_K(\hat{\mathbf{y}})$, which avoids choosing too large a $K_{\max}$ and the attendant CPU-time cost. Hence, in the subsequent numerical comparisons we fix $K_{\max} = 5$ for all test problems.

tab1
Table 1: The objective value obtained by VNS for M3C with different 𝐾 m a x .

Second, we compare the VNS metaheuristics (VNS-k, VNS-t) with the 0.836-approximation algorithm on all test problems. To avoid the effect of initial points, for each test problem, after RSDP is solved, we run the rounding procedure of the 0.836-approximation algorithm and the VNS metaheuristic ten times each.

Table 2 gives the numerical comparison. In Table 2, Obj$_{\mathrm{rsdp}}$ is the optimal value of problem RSDP, that is, an upper bound for M3C; Obj$_{\mathrm{GM}}$ is the largest value obtained by the 0.836-approximation algorithm over the ten runs; Obj$_{\mathrm{vns}}$ is the largest value obtained by VNS for M3C over the ten runs; $m$ and s.v. are the numbers of constraints and slack variables (s.v.), respectively; and $t_{\mathrm{GM}}$ and $t_{\mathrm{vns}\text{-}k}$ are the average times (in seconds) of the two algorithms over the ten runs. For the maximum CPU time of VNS-t we take $t_{\max} = 2 t_{\mathrm{vns}\text{-}k}$, although the actual CPU time of VNS-t may exceed $t_{\max}$. Additionally, to measure solution quality, we take

$$\rho = \frac{\mathtt{Obj}_{\mathrm{vns}} - \mathtt{Obj}_{\mathrm{rsdp}}}{\mathtt{Obj}_{\mathrm{rsdp}}} = \frac{\mathtt{Obj}_{\mathrm{vns}}}{\mathtt{Obj}_{\mathrm{rsdp}}} - 1 \tag{5.1}$$

for M3C and

$$\rho = \frac{\mathtt{Obj}_{\mathrm{vns+saga}} - \mathtt{Obj}_{\mathrm{rsdp}}}{\mathtt{Obj}_{\mathrm{rsdp}}} = \frac{\mathtt{Obj}_{\mathrm{vns+saga}}}{\mathtt{Obj}_{\mathrm{rsdp}}} - 1 \tag{5.2}$$

for M3S. Clearly, $\rho$ reflects how close the solution obtained by VNS is to the optimal value of RSDP. One can see from Table 2 that (1) the VNS metaheuristic not only obtains a better solution than the 0.836-approximation algorithm on all test problems, but its elapsed CPU time is also much less than that of the 0.836-approximation algorithm on all test problems; and (2) the solution quality is improved by VNS-t on most test problems when the termination criterion is the maximum CPU time, though VNS-t spends more computational time than VNS-k. The improvement is reflected by $\rho_t - \rho_k$ in the final column of Table 2; on average, VNS-t improves the objective by 0.91 percentage points.

tab2
Table 2: The numerical comparisons of 0.836-approximate algorithm with VNS metaheuristic.

Finally, we consider solving M3S by combining VNS-k with the size-adjusting greedy algorithm SAGA stated in Section 4. Let $\hat{\mathbf{y}}$ be an approximate solution of M3C obtained by VNS; we then obtain an approximate solution of M3S from SAGA. The numerical results are reported in Table 3, in which Obj$_{\mathrm{vns+saga}}$ stands for the largest value obtained by VNS-k plus SAGA for M3S. Although the size-adjusting algorithm may decrease the objective value obtained by VNS, Table 3 shows that the changes in objective value are very slight. In particular, the objective values of some problems, such as W150.3, do not decrease but instead increase. We do not compare the obtained results with Andersson's 2/3-approximation algorithm, because we find that all approximate solutions of M3S obtained by VNS plus SAGA are still better than those of the 0.836-approximation algorithm, with the only exceptions of W30.1 and W30.3.

tab3
Table 3: The numerical results of combining VNS-k metaheuristic with SAGA for M3S.

6. Conclusions

A variable neighborhood stochastic metaheuristic has been proposed in this paper to solve the Max 3-cut and Max 3-section problems. Our algorithms can solve Max 3-cut and Max 3-section instances of various sizes and densities. Although the 0.836-approximation algorithm has very good theoretical guarantees, our comparisons indicate that, numerically, the proposed VNS metaheuristic is superior to the well-known 0.836-approximation algorithm and can efficiently obtain very-high-quality solutions of the Max 3-cut and Max 3-section problems.

We mention that the proposed algorithm can in fact handle the higher-dimensional G-set graphs created by Prof. Rinaldi using the graph generator rudy. However, we cannot give numerical comparisons with the 0.836-approximation algorithm on them, since the RSDP relaxations of these problems exceed the memory limits of all current SDP software. Additionally, if we increase $K_{\max}$ or $t_{\max}$ in the numerical implementation, the quality of the M3C solution found by VNS will improve further.

Funding

This work is supported by the National Natural Science Foundation of China (nos. 71001045 and 10971162), the China Postdoctoral Science Foundation (no. 20100480491), the Natural Science Foundation of Jiangxi Province of China (no. 20114BAB211008), and the Jiangxi University of Finance and Economics Support Program Funds for Outstanding Youths.

Acknowledgment

The author would like to thank the editor and an anonymous referee for their numerous suggestions for improving the paper.

References

  1. R. M. Karp, “Reducibility among combinatorial problems,” in Complexity of Computer Computations, R. Miller and J. Thatcher, Eds., pp. 85–103, Plenum Press, New York, NY, USA, 1972.
  2. M. R. Garey, D. S. Johnson, and L. Stockmeyer, “Some simplified NP-complete graph problems,” Theoretical Computer Science, vol. 1, no. 3, pp. 237–267, 1976.
  3. F. Barahona, M. Grötschel, G. Reinelt, and M. Jünger, “An application of combinatorial optimization to statistical physics and circuit layout design,” Operations Research, vol. 36, no. 3, pp. 493–513, 1988.
  4. M. X. Goemans and D. P. Williamson, “Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,” Journal of the Association for Computing Machinery, vol. 42, no. 6, pp. 1115–1145, 1995.
  5. A. Frieze and M. Jerrum, “Improved approximation algorithms for MAX k-CUT and MAX BISECTION,” in Integer Programming and Combinatorial Optimization, E. Balas and J. Clausen, Eds., vol. 920, pp. 1–13, 1995.
  6. M. X. Goemans and D. P. Williamson, “Approximation algorithms for MAX-3-CUT and other problems via complex semidefinite programming,” Journal of Computer and System Sciences, vol. 68, no. 2, pp. 442–470, 2004.
  7. S. Zhang and Y. Huang, “Complex quadratic optimization and semidefinite programming,” SIAM Journal on Optimization, vol. 16, no. 3, pp. 871–890, 2006.
  8. J. F. Sturm, “Using SeDuMi 1.02, ‘a MATLAB toolbox for optimization over symmetric cones’,” Optimization Methods and Software, vol. 11, no. 1–4, pp. 625–653, 1999.
  9. K.-C. Toh, M. J. Todd, and R. H. Tütüncü, “SDPT3 version 4.0 (beta)—a MATLAB software for semidefinite-quadratic-linear programming,” 2004, http://www.math.nus.edu.sg/~mattohkc/sdpt3.html.
  10. N. Mladenović and P. Hansen, “Variable neighborhood search,” Computers & Operations Research, vol. 24, no. 11, pp. 1097–1100, 1997.
  11. P. Hansen, N. Mladenović, and J. A. Moreno Pérez, “Variable neighbourhood search: methods and applications,” Annals of Operations Research, vol. 175, pp. 367–407, 2010.
  12. G. Andersson, “An approximation algorithm for Max p-Section,” Lecture Notes in Computer Science, vol. 1563, pp. 237–247, 1999.
  13. D. R. Gaur, R. Krishnamurti, and R. Kohli, “The capacitated max k-cut problem,” Mathematical Programming, vol. 115, no. 1, pp. 65–72, 2008.
  14. A.-F. Ling, “Approximation algorithms for Max 3-section using complex semidefinite programming relaxation,” in Combinatorial Optimization and Applications, vol. 5573 of Lecture Notes in Computer Science, pp. 219–230, 2009.