Abstract

A heuristic algorithm based on variable neighborhood search (VNS) is proposed to solve the Max 3-cut and Max 3-section problems. By establishing a neighborhood structure for the Max 3-cut problem, we propose a local search algorithm and a variable neighborhood global search algorithm with two stochastic search steps to obtain a global solution. We report numerical results and comparisons with the well-known 0.836-approximation algorithm. The numerical results show that the proposed heuristic efficiently obtains high-quality solutions and outperforms the 0.836-approximation algorithm on the NP-hard Max 3-cut and Max 3-section problems.

1. Introduction

Given a graph $G(V,E)$ with node set $V$ and edge set $E$, the Max 3-cut problem is to find a partition $S_0, S_1, S_2$ of $V$, with $S_0 \cup S_1 \cup S_2 = V$ and $S_i \cap S_j = \emptyset$ ($i \neq j$), that maximizes the sum of the weights of the edges connecting different parts. Like the Max cut problem, the Max 3-cut problem has long been known to be NP-complete [1], even for unweighted graphs [2], and it also has applications in circuit layout design, statistical physics, and so on [3]. However, owing to the complexity of this problem, research on it has progressed much more slowly than on the Max cut problem. Based on the semidefinite programming relaxation proposed by Goemans and Williamson [4], Frieze and Jerrum [5] obtained a 0.800217-approximation algorithm for the Max 3-cut problem. Recently, Goemans and Williamson [6] and Zhang and Huang [7] improved Frieze and Jerrum's 0.800217-approximation ratio to 0.836 using a complex semidefinite programming relaxation of the Max 3-cut problem.

For the purpose of our analysis, we first introduce some notation. We denote the complex conjugate of $y = a + ib$ by $\bar y = a - ib$, where $i = \sqrt{-1}$ is the imaginary unit, and we write the real and imaginary parts of a complex number as $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$, respectively. For an $n$-dimensional complex vector $\mathbf{y} \in \mathbb{C}^n$ (written in bold) and an $n \times n$ complex matrix $Y \in \mathbb{C}^{n \times n}$, we write $\mathbf{y}^*$ and $Y^*$ for their conjugate transposes; that is, $\mathbf{y}^* = \bar{\mathbf{y}}^T$ and $Y^* = \bar{Y}^T$. The set of $n$-dimensional real symmetric (positive semidefinite) matrices and the set of $n$-dimensional complex Hermitian (positive semidefinite) matrices are denoted by $\mathcal{S}^n$ ($\mathcal{S}^n_+$) and $\mathcal{H}^n$ ($\mathcal{H}^n_+$), respectively. We sometimes write $A \succeq 0$ to indicate $A \in \mathcal{S}^n_+$ (or $A \in \mathcal{H}^n_+$). For any two complex vectors $\mathbf{u}, \mathbf{v} \in \mathbb{C}^n$, their inner product is $\langle\mathbf{u},\mathbf{v}\rangle = \mathbf{u}\cdot\mathbf{v} = \mathbf{u}^*\mathbf{v}$. For any two complex matrices $A, B \in \mathcal{H}^n$, their inner product is $\langle A, B\rangle = A \cdot B = \mathrm{Tr}(B^* A) = \sum_{i,j} \bar b_{ij} a_{ij}$, where $A = (a_{ij})$ and $B = (b_{ij})$. $\|\cdot\|$ denotes the modulus of a complex number, the 2-norm of a complex vector, or the $F$-norm of a complex matrix.

Let the third roots of unity be denoted by $\omega^0 = 1$, $\omega = \omega^1 = e^{i(2\pi/3)}$, $\omega^2 = e^{i(4\pi/3)}$. Introducing complex variables $y_i \in \{1, \omega, \omega^2\}$, $i = 1, \ldots, n$, it is not hard to see that
$$\frac{2}{3} - \frac{1}{3}\bar y_i y_j - \frac{1}{3}\bar y_j y_i = \frac{2}{3}\bigl(1 - \mathrm{Re}(\bar y_i y_j)\bigr). \tag{1.1}$$
Denote $S_k = \{i : y_i = \omega^k\}$, $k = 0, 1, 2$, and $\mathbf{y} = (y_1, \ldots, y_n)^T$. Then the Max 3-cut problem can be expressed as
$$\mathrm{M3C}: \quad \max\; f(\mathbf{y}) = \frac{2}{3}\sum_{i<j} w_{ij}\bigl(1 - \mathrm{Re}(\bar y_i y_j)\bigr) \quad \text{s.t.}\; \mathbf{y} \in \{1, \omega, \omega^2\}^n, \tag{1.2}$$
where $\mathbf{y} \in \{1, \omega, \omega^2\}^n$ means that $y_i \in \{1, \omega, \omega^2\}$, $i = 1, \ldots, n$, and $W = (w_{ij})$ is the weight matrix of the given graph.
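As a quick illustration, the objective in (1.2) can be evaluated directly from a complex assignment. The following Python sketch (the function name and the small 3-node example are ours, not from the paper) shows that each cut edge contributes exactly its weight, since $1 - \mathrm{Re}(\bar\omega^u \omega^v) = 3/2$ whenever $u \neq v$ and $0$ when $u = v$:

```python
import numpy as np

def max3cut_objective(y, W):
    """f(y) = (2/3) * sum_{i<j} w_ij * (1 - Re(conj(y_i) * y_j))."""
    n = len(y)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += W[i, j] * (1.0 - (np.conj(y[i]) * y[j]).real)
    return (2.0 / 3.0) * total

w = np.exp(2j * np.pi / 3)                      # primitive third root of unity
W = np.array([[0., 1., 2.], [1., 0., 3.], [2., 3., 0.]])
y = np.array([1.0 + 0j, w, w**2])               # three nodes in three parts
# every edge is cut, so f(y) equals the total weight 1 + 2 + 3 = 6
```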

By relaxing each complex variable $y_i$ into an $n$-dimensional complex unit vector $\mathbf{y}_i$, we get a complex semidefinite programming (CSDP) relaxation of (M3C) as follows:
$$\mathrm{CSDP}: \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\bigl(1 - \mathrm{Re}(\mathbf{y}_i \cdot \mathbf{y}_j)\bigr) \quad \text{s.t.}\; \|\mathbf{y}_i\| = 1,\; i = 1, 2, \ldots, n; \quad A^k_{ij} \cdot Y \geq -1,\; i, j = 1, 2, \ldots, n,\; k = 0, 1, 2; \quad Y \succeq 0, \tag{1.3}$$
where $Y_{ij} = \mathbf{y}_i \cdot \mathbf{y}_j$, $A^k_{ij} = \omega^k \mathbf{e}_i \mathbf{e}_j^T + \omega^{-k} \mathbf{e}_j \mathbf{e}_i^T$, and $\mathbf{e}_i$ denotes the vector with zeros everywhere except for a unit in the $i$th component. It is easy to verify that the constraints $A^k_{ij} \cdot Y \geq -1$ can be expressed as
$$\mathrm{Re}\bigl(\omega^k Y_{ij}\bigr) \geq -\frac{1}{2}, \quad k = 0, 1, 2. \tag{1.4}$$
To get an approximate solution of M3C, Goemans and Williamson [6] do not solve the CSDP directly, but solve an equivalent real SDP of the following form (although some software packages, such as SeDuMi [8] and the earlier versions of SDPT3-4.0 [9], can deal with SDPs with complex data, this does not reduce the dimension of the problem):
$$\mathrm{RSDP}: \quad \max\; \frac{1}{2}\begin{bmatrix} Q & O \\ O & Q \end{bmatrix} \cdot X$$
$$\text{s.t.}\; \begin{bmatrix} \mathbf{e}_i \mathbf{e}_i^T & O \\ O & \mathbf{e}_i \mathbf{e}_i^T \end{bmatrix} \cdot X = 2, \quad i = 1, 2, \ldots, n,$$
$$\begin{bmatrix} \mathrm{Re}\bigl(A^k_{ij}\bigr) & -\mathrm{Im}\bigl(A^k_{ij}\bigr) \\ \mathrm{Im}\bigl(A^k_{ij}\bigr) & \mathrm{Re}\bigl(A^k_{ij}\bigr) \end{bmatrix} \cdot X \geq -2, \quad 1 \leq i < j \leq n,\; k = 0, 1, 2,$$
$$\begin{bmatrix} A^0_{ij} & O \\ O & -A^0_{ij} \end{bmatrix} \cdot X = 0, \quad 1 \leq i < j \leq n,$$
$$\begin{bmatrix} O & A^0_{ij} \\ A^0_{ij} & O \end{bmatrix} \cdot X = 0, \quad 1 \leq i < j \leq n,$$
$$\begin{bmatrix} O & \mathbf{e}_i \mathbf{e}_i^T \\ \mathbf{e}_i \mathbf{e}_i^T & O \end{bmatrix} \cdot X = 0, \quad i = 1, 2, \ldots, n, \qquad X \in \mathcal{S}^{2n}_+, \tag{1.5}$$
where $Q = (1/3)(\mathrm{diag}(W\mathbf{e}) - W)$ is the (scaled) Laplace matrix of the given graph and $O$ is the $n \times n$ all-zeros matrix.

In RSDP, the first, third, and fourth classes of equality constraints ensure that $X_{ii} = 1$, $i = 1, 2, \ldots, n$, and that $X$ has the form
$$X = \begin{bmatrix} R & -S \\ S & R \end{bmatrix}. \tag{1.6}$$
The final two classes of equality constraints ensure that $S_{ii} = 0$ ($i = 1, \ldots, n$) and that $S$ is skew-symmetric.

If $X$ is an optimal solution of RSDP, then the complex matrix $\widehat Y = R + iS$ is an optimal solution of CSDP. One can then randomly generate a complex vector $\xi \sim N(0, \widehat Y)$ and set
$$\hat y_i = \begin{cases} 1, & \text{if } \mathrm{Arg}(\xi_i) \in [0, 2\pi/3), \\ \omega, & \text{if } \mathrm{Arg}(\xi_i) \in [2\pi/3, 4\pi/3), \\ \omega^2, & \text{if } \mathrm{Arg}(\xi_i) \in [4\pi/3, 2\pi), \end{cases} \tag{1.7}$$
where $\mathrm{Arg}(\cdot) \in [0, 2\pi)$ denotes the principal value of the argument of a complex number. Goemans and Williamson [6] verified that (see also Zhang and Huang [7])
$$f(\hat{\mathbf{y}}) \geq 0.836 \cdot \bigl(Q \cdot \widehat Y\bigr). \tag{1.8}$$
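The rounding (1.7) can be sketched in Python as follows. This is our illustrative version only: the Cholesky-based sampling of $\xi \sim N(0,\widehat Y)$ and the small jitter term are implementation choices, not from the paper.

```python
import numpy as np

def round_solution(Y_hat, rng):
    """Sample xi ~ N(0, Y_hat) for Hermitian PSD Y_hat, then round each
    component to the third root of unity whose sector contains Arg(xi_i)."""
    n = Y_hat.shape[0]
    L = np.linalg.cholesky(Y_hat + 1e-10 * np.eye(n))   # jitter for safety
    g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    xi = L @ g
    ang = np.mod(np.angle(xi), 2.0 * np.pi)             # Arg in [0, 2*pi)
    w = np.exp(2j * np.pi / 3)
    return np.where(ang < 2 * np.pi / 3, 1.0 + 0j,
                    np.where(ang < 4 * np.pi / 3, w, w**2))

rng = np.random.default_rng(0)
y_hat = round_solution(np.eye(3, dtype=complex), rng)   # identity as a toy Y_hat
```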

The algorithm proposed by Goemans and Williamson [6] achieves a very good approximation ratio, and RSDP can be solved by interior point algorithms, but the 0.836-approximation algorithm is not practical for a numerical study of the Max 3-cut problem. From RSDP, one can see that for a graph with $n$ nodes, RSDP has $2n + 5n(n-1)/2$ constraints and $3n(n-1)/2$ slack variables arising from the inequality constraints. That is to say, RSDP has a $2n$-dimensional unknown symmetric positive semidefinite matrix variable, a $3n(n-1)/2$-dimensional unknown vector variable, and $2n + 5n(n-1)/2$ constraints, and many of its constraint matrices, although sparse, lack an explicit block diagonal structure. For instance, when $n = 100$, RSDP becomes a very high-dimensional semidefinite programming problem with 14850 slack variables and 24950 constraints. Moreover, as is well known, graphs with 50 to 100 nodes are only common, medium-scale instances of the Max 3-cut problem. Hence, it is very time consuming to solve such an RSDP relaxation of M3C using any existing SDP software. As a result, the 0.836-approximation algorithm is not suitable for a computational study of the Max 3-cut problem. This limitation of solving M3C via the CSDP (or RSDP) relaxation motivates us to find a new efficient and fast algorithm for the Max 3-cut problem for practical purposes.

In the current paper, we first establish a definition of the $K$-neighborhood structure of the Max 3-cut problem and design a local search algorithm to find a local maximizer. We then propose a variable neighborhood search (VNS) metaheuristic with stochastic steps, in the framework originally introduced by Mladenović and Hansen [10], by which we can efficiently find a high-quality global approximate solution of the Max 3-cut problem. Further, by combining it with a greedy algorithm, we extend the proposed algorithm to the Max 3-section problem. To the best of our knowledge, this is the first computational study of the Max 3-cut problem. To test the performance of the proposed algorithm, we compare its numerical results with those of Goemans and Williamson's 0.836-approximation algorithm.

This paper is organized as follows. In Section 2, we give some definitions and lemmas. In Section 3, we present the VNS metaheuristic for solving the Max 3-cut problem. The VNS is extended to the Max 3-section problem in Section 4. In Section 5, we give some numerical results and comparisons.

2. Preliminaries

In this section, we establish some definitions and facts for later use. For the third roots of unity $1, \omega, \omega^2$, we have the following fact:
$$\|1 - \omega\|^2 = \|\omega - \omega^2\|^2 = \|1 - \omega^2\|^2 = 3. \tag{2.1}$$
Denote $\mathbb{S} = \{1, \omega, \omega^2\}^n$. Based on (2.1), for any $\mathbf{y} \in \mathbb{S}$, we may define a $K$-neighborhood of $\mathbf{y}$ as follows.

Definition 2.1. For any $\mathbf{y} \in \mathbb{S}$ and any positive integer $K$ ($1 \leq K \leq n$), one defines the $K$-neighborhood of $\mathbf{y}$, denoted by $N_K(\mathbf{y})$, as the set
$$N_K(\mathbf{y}) = \Bigl\{\mathbf{z} \in \mathbb{S} : \|\mathbf{z} - \mathbf{y}\|^2 = \sum_{i=1}^n \|z_i - y_i\|^2 \leq 3K\Bigr\}. \tag{2.2}$$
In particular, if $K = 1$, we write the 1-neighborhood $N_1(\mathbf{y})$ of $\mathbf{y}$ as $N(\mathbf{y})$.

The boundary of the $K$-neighborhood $N_K(\mathbf{y})$ is defined by $\partial N_K(\mathbf{y}) = \{\mathbf{z} \in \mathbb{S} : \|\mathbf{y} - \mathbf{z}\|^2 = 3K\}$. Clearly, $N(\mathbf{y}) = \partial N(\mathbf{y})$. If $\mathbf{z} \in \partial N_K(\mathbf{y})$, we call $\mathbf{z}$ a $K$-neighbor of $\mathbf{y}$. By Definition 2.1, the point $\mathbf{y}$ and a $K$-neighbor $\mathbf{z}$ differ in exactly $K$ components. A straightforward count gives the number of elements of $\partial N_K(\mathbf{y})$, namely $|\partial N_K(\mathbf{y})| = 2^K C_n^K$. In particular, $|\partial N(\mathbf{y})| = 2n$ when $K = 1$.
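The count $|\partial N_K(\mathbf{y})| = 2^K C_n^K$ follows because a $K$-neighbor changes exactly $K$ of the $n$ components, each to one of the two other roots of unity. A two-line Python check (the function name is ours):

```python
from math import comb

def boundary_size(n, K):
    # choose which K components to change; each changed component
    # can take either of the 2 remaining roots of unity
    return 2**K * comb(n, K)
```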

Example 2.2. Let $\mathbf{y} = (\omega, \omega, \omega^2)^T \in \{1, \omega, \omega^2\}^3$. Then $(1, \omega, \omega^2)^T \in N(\mathbf{y})$, $(1, \omega^2, \omega^2)^T \in \partial N_2(\mathbf{y}) \subset N_2(\mathbf{y})$, and $(1, \omega^2, \omega)^T \in \partial N_3(\mathbf{y}) \subset N_3(\mathbf{y})$.

Definition 2.3. For any $u \in \{0, 1, 2\}$, define two maps from $\{1, \omega, \omega^2\}$ to itself as follows: $\tau_i(\omega^u) = \omega^{u+i} \in \{1, \omega, \omega^2\}$, $i = 1, 2$.

Clearly, for any $u \in \{0, 1, 2\}$, $\tau_i(\omega^u) \neq \omega^u$, $i = 1, 2$, and $\tau_1(\omega^u) \neq \tau_2(\omega^u)$. By Definition 2.3, for any $\mathbf{z} \in N(\mathbf{y})$ there exists a unique component of $\mathbf{z}$, say $z_k$, such that $z_k \neq y_k$ and either $z_k = \tau_1(y_k)$ or $z_k = \tau_2(y_k)$, while all other components of $\mathbf{z}$ and $\mathbf{y}$ coincide. For simplicity, for any $\mathbf{z} \in N(\mathbf{y})$ with $z_k \neq y_k$ and $z_i = y_i$ ($i = 1, \ldots, n$, $i \neq k$), we write $\mathbf{z} = \tau^k_1(\mathbf{y})$ or $\mathbf{z} = \tau^k_2(\mathbf{y})$ according to whether $z_k = \tau_1(y_k)$ or $z_k = \tau_2(y_k)$. By Definitions 2.1 and 2.3, for any $\mathbf{y} \in \mathbb{S}$, we can construct its 1-neighborhood points using the maps of Definition 2.3; that is, we have the following result.

Lemma 2.4. Let $\tau_i(\cdot)$ ($i = 1, 2$) be defined by Definition 2.3. Then, for any $\mathbf{y} \in \mathbb{S}$ and any fixed positive integer $k$ ($1 \leq k \leq n$), one has
$$\tau^k_i(\mathbf{y}) \in N(\mathbf{y}), \quad i = 1, 2, \tag{2.3}$$
that is, $\tau^k_1(\mathbf{y})$ and $\tau^k_2(\mathbf{y})$ are two 1-neighborhood points of $\mathbf{y}$.

Definition 2.5. A point $\hat{\mathbf{y}} \in \mathbb{S}$ is called a $K$-local maximizer of the function $f$ over $\mathbb{S}$ if $f(\hat{\mathbf{y}}) \geq f(\mathbf{y})$ for all $\mathbf{y} \in N_K(\hat{\mathbf{y}})$. Furthermore, if $f(\hat{\mathbf{y}}) \geq f(\mathbf{y})$ for all $\mathbf{y} \in \mathbb{S}$, then $\hat{\mathbf{y}}$ is called a global maximizer of $f$ over $\mathbb{S}$. A 1-local maximizer of $f$ is also simply called a local maximizer of $f$ over $\mathbb{S}$.

3. VNS for Max 3-Cut

3.1. Local Search Algorithm

Let $\mathbf{y}^0 = (y^0_1, \ldots, y^0_n)^T \in \mathbb{S}$ be a feasible solution of problem M3C. If $\mathbf{y}^0$ is not a local maximizer of $f$, then we may find a $\tilde{\mathbf{y}} \in N(\mathbf{y}^0)$ such that $f(\tilde{\mathbf{y}}) = \max\{f(\mathbf{y}) : \mathbf{y} \in N(\mathbf{y}^0)\}$. Clearly $f(\tilde{\mathbf{y}}) \geq f(\mathbf{y}^0)$. If $\tilde{\mathbf{y}}$ is still not a local maximizer of $f$, we replace $\mathbf{y}^0$ with $\tilde{\mathbf{y}}$ and repeat the process until a point $\hat{\mathbf{y}}$ satisfying $f(\hat{\mathbf{y}}) = \max\{f(\mathbf{y}) : \mathbf{y} \in N(\hat{\mathbf{y}})\}$ is found, which means that $\hat{\mathbf{y}}$ is a local maximizer of $f$.

For any positive integer $k$ ($1 \leq k \leq n$), let $\mathbf{y}^k = (y^k_1, \ldots, y^k_n)^T = \tau^k_i(\mathbf{y}^0) \in N(\mathbf{y}^0)$ ($i = 1, 2$); that is,
$$y^k_i = y^0_i, \quad i = 1, 2, \ldots, k-1, k+1, \ldots, n; \qquad y^k_k \neq y^0_k. \tag{3.1}$$
Denote
$$\delta(k) = f\bigl(\mathbf{y}^0\bigr) - f\bigl(\mathbf{y}^k\bigr). \tag{3.2}$$
Then we have the following result, whose proof is straightforward.

Lemma 3.1. Consider
$$\delta(k) = \begin{cases} \dfrac{2}{3}\displaystyle\sum_{i=1}^{k-1} w_{ik}\,\mathrm{Re}\bigl[\bar y^0_i \bigl(y^k_k - y^0_k\bigr)\bigr] + \dfrac{2}{3}\displaystyle\sum_{j=k+1}^{n} w_{kj}\,\mathrm{Re}\bigl[\bigl(\bar y^k_k - \bar y^0_k\bigr) y^0_j\bigr], & k > 1; \\[2mm] \dfrac{2}{3}\displaystyle\sum_{j=k+1}^{n} w_{kj}\,\mathrm{Re}\bigl[\bigl(\bar y^k_k - \bar y^0_k\bigr) y^0_j\bigr], & k = 1. \end{cases} \tag{3.3}$$

Based on Lemma 3.1, if we know the value $f(\mathbf{y}^0)$, then we can obtain the objective value $f(\mathbf{y}^k)$ at the next iterate $\mathbf{y}^k$ by computing $\delta(k)$ via (3.3), instead of evaluating $f(\mathbf{y}^k)$ directly, which sharply reduces the computational cost. By Definition 2.1, there exist two points satisfying (3.1) for each fixed $k$; that is, when $\mathbf{y}^k \in N(\mathbf{y}^0)$ and (3.1) holds, either $y^k_k = \tau_1(y^0_k)$ or $y^k_k = \tau_2(y^0_k)$. For convenience, we write $\delta(k)$ as $\delta_1(k)$ when $y^k_k = \tau_1(y^0_k)$ and as $\delta_2(k)$ when $y^k_k = \tau_2(y^0_k)$. In what follows, we describe the local search algorithm for the Max 3-cut problem, denoted LSM3C, by which we can obtain a local maximizer of the function $f(\mathbf{y})$ over $\mathbb{S}$.
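Since $W$ is symmetric and $\mathrm{Re}[(\bar a - \bar b)c] = \mathrm{Re}[\bar c(a - b)]$, both cases of (3.3) collapse to a single $O(n)$ sum over row $k$ of $W$. A Python sketch of the incremental update (all names ours), checked against a brute-force evaluation of $f$:

```python
import numpy as np

def f_brute(y, W):
    """Direct O(n^2) evaluation of the Max 3-cut objective f(y)."""
    n = len(y)
    return (2.0 / 3.0) * sum(W[i, j] * (1.0 - (np.conj(y[i]) * y[j]).real)
                             for i in range(n) for j in range(i + 1, n))

def delta(y, W, k, new_val):
    """delta(k) = f(y) - f(y^k), where y^k replaces component k by new_val;
    only row/column k of the symmetric W is touched, so the cost is O(n)."""
    n = len(y)
    s = sum(W[i, k] * (np.conj(y[i]) * (new_val - y[k])).real
            for i in range(n) if i != k)
    return (2.0 / 3.0) * s
```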

For LSM3C, one has the following.
(1) Input any initial feasible solution $\mathbf{y}^0$ of problem (M3C).
(2) For $k$ from 1 to $n$, set $\mathbf{z}^k_1 = \tau^k_1(\mathbf{y}^0)$ and calculate $\delta_1(k)$; then set $\mathbf{z}^k_2 = \tau^k_2(\mathbf{y}^0)$ and calculate $\delta_2(k)$.
(3) Find $\delta_{i^*}(k^*)$ as follows:
$$\delta_{i^*}\bigl(k^*\bigr) = \min\bigl\{\delta_1(1), \delta_2(1), \ldots, \delta_1(k), \delta_2(k), \ldots, \delta_1(n), \delta_2(n)\bigr\}. \tag{3.4}$$
(4) If $\delta_{i^*}(k^*) \geq 0$, then set $\hat{\mathbf{y}} = \mathbf{y}^0$, return $\hat{\mathbf{y}}$, and stop. Otherwise, go to the next step.
(5) Set $\mathbf{y}^0 = \tau^{k^*}_{i^*}(\mathbf{y}^0)$; go to Step 2.
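A compact Python sketch of LSM3C follows (helper names ours). For readability it recomputes $\delta$ from row $k$ of $W$ inline rather than batching the $\delta_1, \delta_2$ values as in Step 2, but the move rule is the same: take the single-component change with the most negative $\delta$ and stop when $\delta_{i^*}(k^*) \geq 0$.

```python
import numpy as np

OMEGA = np.exp(2j * np.pi / 3)

def f_val(y, W):
    n = len(y)
    return (2.0 / 3.0) * sum(W[i, j] * (1.0 - (np.conj(y[i]) * y[j]).real)
                             for i in range(n) for j in range(i + 1, n))

def lsm3c(y, W):
    """Best-improvement local search over 1-neighborhoods: apply the
    component change that increases f most, until no change does."""
    y = y.copy()
    n = len(y)
    while True:
        best_d, best_k, best_v = 0.0, None, None
        for k in range(n):
            for step in (OMEGA, OMEGA**2):      # tau_1 and tau_2
                v = y[k] * step
                # delta(k) = f(y) - f(y with y[k] -> v)
                d = (2.0 / 3.0) * sum(W[i, k] * (np.conj(y[i]) * (v - y[k])).real
                                      for i in range(n) if i != k)
                if d < best_d - 1e-12:
                    best_d, best_k, best_v = d, k, v
        if best_k is None:                      # delta* >= 0: local maximizer
            return y
        y[best_k] = best_v
```

On the unit-weight triangle, for instance, starting from the all-ones vector the search reaches the 3-coloring cut of value 3.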

3.2. Variable Neighborhood Stochastic Search

Let $\hat{\mathbf{y}}$ be a local maximizer obtained by LSM3C and let $K_{\max}$ ($1 < K_{\max} \leq n$) be a fixed positive integer. We now describe the variable neighborhood search (VNS) with stochastic steps, by which we can find an approximate global maximizer of problem (M3C). The proposed VNS algorithm has three phases. First, for a given positive integer $K < K_{\max}$, a $K$-neighborhood point, say $\mathbf{y}$, is randomly selected; that is, $\mathbf{y} \in N_K(\hat{\mathbf{y}})$. Next, a solution, say $\hat{\hat{\mathbf{y}}}$, is obtained by applying algorithm LSM3C to $\mathbf{y}$. Finally, the current solution jumps from $\hat{\mathbf{y}}$ to $\hat{\hat{\mathbf{y}}}$ if the latter improves on the former. Otherwise, the order $K$ of the neighborhood is increased by one when $K < K_{\max}$, and the above steps are repeated until some stopping condition is met. This VNS variant, denoted VNS-k [11], can be described as follows.

For VNS-k, one has the following.
(1) Arbitrarily choose a point $\mathbf{y}^0 \in \mathbb{S}$, run LSM3C starting from $\mathbf{y}^0$, and denote the obtained local maximizer by $\hat{\mathbf{y}}$. Set $K = 1$.
(2) Randomly take a point $\mathbf{y} \in \partial N_{I(K)}(\hat{\mathbf{y}})$, run LSM3C again from $\mathbf{y}$, and denote the new local maximizer by $\hat{\hat{\mathbf{y}}}$.
(3) If $f(\hat{\hat{\mathbf{y}}}) > f(\hat{\mathbf{y}})$, then set $\hat{\mathbf{y}} = \hat{\hat{\mathbf{y}}}$ and $K = 1$; go to Step 2.
(4) If $K < K_{\max}$ ($\leq n$), set $K = K + 1$; go to Step 2. Otherwise, return $\hat{\mathbf{y}}$ as an approximate global solution of problem M3C and stop.
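The whole shake-then-search loop can be sketched as follows (Python; all helper names ours). For simplicity this sketch shakes on $\partial N_K(\hat{\mathbf{y}})$ directly rather than on the $\partial N_{I(K)}$ neighborhood blocks, and it uses a first-improvement local search as a stand-in for LSM3C.

```python
import numpy as np
import random

OMEGA = np.exp(2j * np.pi / 3)

def f_val(y, W):
    n = len(y)
    return (2.0 / 3.0) * sum(W[i, j] * (1.0 - (np.conj(y[i]) * y[j]).real)
                             for i in range(n) for j in range(i + 1, n))

def local_search(y, W):
    """First-improvement 1-neighborhood search (a simple stand-in for LSM3C)."""
    y = y.copy()
    improved = True
    while improved:
        improved = False
        for k in range(len(y)):
            for step in (OMEGA, OMEGA**2):
                z = y.copy()
                z[k] = y[k] * step
                if f_val(z, W) > f_val(y, W) + 1e-9:
                    y, improved = z, True
    return y

def shake(y, K, rng):
    """Random point on the K-neighborhood boundary: rotate exactly K
    randomly chosen components by omega or omega^2."""
    z = y.copy()
    for k in rng.sample(range(len(y)), K):
        z[k] = z[k] * (OMEGA if rng.random() < 0.5 else OMEGA**2)
    return z

def vns_k(W, K_max, seed=0):
    rng = random.Random(seed)
    n = W.shape[0]
    y = local_search(np.array([OMEGA**rng.randrange(3) for _ in range(n)]), W)
    K = 1
    while K <= K_max:
        z = local_search(shake(y, K, rng), W)
        if f_val(z, W) > f_val(y, W) + 1e-9:
            y, K = z, 1          # improvement: jump and restart from K = 1
        else:
            K += 1               # no improvement: enlarge the neighborhood
    return y
```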

The subscript $I(K)$ in Step 2 is a function of $K$ and is also a positive integer not greater than $n$. $I(K)$ embodies the key device of switching from the current neighborhood of the local maximizer $\hat{\mathbf{y}}$ to another neighborhood of $\hat{\mathbf{y}}$. For a given $K_{\max}$, let $m = \lfloor n / K_{\max} \rfloor$ and $K_0 = n - m K_{\max}$, where $\lfloor a \rfloor$ denotes the integral part of $a$. We divide the $n$ neighborhoods $N(\hat{\mathbf{y}}), N_2(\hat{\mathbf{y}}), \ldots, N_K(\hat{\mathbf{y}}), \ldots, N_n(\hat{\mathbf{y}})$ of $\hat{\mathbf{y}}$ into $K_{\max}$ neighborhood blocks $N_{I(1)}(\hat{\mathbf{y}}), \ldots, N_{I(K_{\max})}(\hat{\mathbf{y}})$, such that, for $K = 1, 2, \ldots, K_{\max} - K_0$,
$$N_{(K-1)m+1}(\hat{\mathbf{y}}) \subseteq N_{I(K)}(\hat{\mathbf{y}}) \subseteq N_{Km+1}(\hat{\mathbf{y}}), \tag{3.5}$$
and, for $K = K_{\max} - K_0 + 1, \ldots, K_{\max}$,
$$N_{(K-1)(m+1)+1}(\hat{\mathbf{y}}) \subseteq N_{I(K)}(\hat{\mathbf{y}}) \subseteq N_{K(m+1)}(\hat{\mathbf{y}}). \tag{3.6}$$
To obtain the $K_{\max}$ neighborhood blocks $N_{I(K)}(\hat{\mathbf{y}})$, $K = 1, \ldots, K_{\max}$, of $\hat{\mathbf{y}}$, we divide the set $\{1, 2, \ldots, n\}$ into $K_{\max}$ disjoint subsets, where each of the first $K_{\max} - K_0$ subsets has $m$ integers and each of the last $K_0$ subsets has $m + 1$ integers. For any integer $K$ ($\leq K_{\max}$), let
$$I(K) = (K-1)m + \lfloor p \cdot m + 1 \rfloor, \quad K = 1, 2, \ldots, K_{\max} - K_0, \tag{3.7}$$
or
$$I(K) = \bigl(K_{\max} - K_0\bigr)m + \lfloor (m+1) \cdot p + 1 \rfloor + (m+1)\bigl((K-1) - \bigl(K_{\max} - K_0\bigr)\bigr), \quad K = K_{\max} - K_0 + 1, \ldots, K_{\max}. \tag{3.8}$$
We then randomly choose a point $\mathbf{y}$ in $\partial N_{I(K)}(\hat{\mathbf{y}})$, where $p \in (0, 1)$ is a random number drawn from the uniform distribution $\mathcal{U}(0, 1)$, so that $N_{I(K)}(\hat{\mathbf{y}})$ satisfies (3.5) or (3.6).

VNS-k stops when the maximum $K$-neighborhood is reached. Additionally, we consider another termination criterion for VNS based on a maximum CPU time, and denote the resulting variant VNS-t. VNS-t can obtain a better solution than VNS-k, since VNS-t effectively runs VNS-k several times within the maximum allowed time $t_{\max}$, but it generally spends more computational time. VNS-t can be stated as follows.

For VNS-t, one has the following.
(1) Set $t_{\mathrm{CPU}} = 0$, run VNS-k from an arbitrary initial point $\mathbf{y}^0 \in \mathbb{S}$, and let $\hat{\mathbf{y}}$ be the obtained local optimal solution.
(2) If $K = K_{\max}$ ($\leq n$), go to Step 3.
(3) If $t_{\mathrm{CPU}} < t_{\max}$, then set $K = 1$ and go to Step 2 of VNS-k. Otherwise, return $\hat{\mathbf{y}}$ as an approximate global solution of problem M3C and stop.

We mention that this differs from the classical variable neighborhood search metaheuristic originally proposed by Mladenović and Hansen [10]. In order to obtain a global optimal solution or a high-quality approximate solution of problem M3C, we use two stochastic steps in VNS. First, for a fixed $K$, a $K$-neighbor of $\hat{\mathbf{y}}$ is chosen randomly. Second, by the definition of $I(K)$, when we change the neighborhood of $\hat{\mathbf{y}}$ from $N_{I(K-1)}$ to $N_{I(K)}$, $N_{I(K)}$ may be any neighborhood among $N_{(K-1)m+j}$, $j = 1, 2, \ldots, m$, of $\hat{\mathbf{y}}$, as decided by the random number $p$. In VNS, the positive integer $K_{\max}$ determines the largest neighborhood block of $\hat{\mathbf{y}}$ to be searched, which in turn directly determines the CPU time of VNS. Thanks to the second stochastic step, we may choose a $K_{\max}$ that is relatively small compared with $n$, which decreases the computational time.

4. A Greedy Algorithm for Max 3-Section

When the number of nodes $n$ is a multiple of three and the condition $|S_0| = |S_1| = |S_2| = n/3$ is required, the Max 3-cut problem becomes the Max 3-section problem. Noting that $1 + \omega + \omega^2 = 0$, the Max 3-section problem can be formulated as the following programming problem M3S:
$$\mathrm{M3S}: \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\bigl(1 - \mathrm{Re}(\bar y_i y_j)\bigr) \quad \text{s.t.}\; \sum_{i=1}^n y_i = 0, \quad \mathbf{y} \in \mathbb{S}, \tag{4.1}$$
and its CSDP relaxation is
$$\mathrm{CSDP1}: \quad \max\; \frac{2}{3}\sum_{i<j} w_{ij}\bigl(1 - \mathrm{Re}(\mathbf{y}_i \cdot \mathbf{y}_j)\bigr) \quad \text{s.t.}\; \mathbf{e}\mathbf{e}^T \cdot Y = 0; \quad \|\mathbf{y}_i\| = 1,\; i = 1, 2, \ldots, n; \quad A^k_{ij} \cdot Y \geq -1,\; i, j = 1, 2, \ldots, n,\; k = 0, 1, 2; \quad Y \succeq 0, \tag{4.2}$$
where $\mathbf{e}$ is the column vector of all ones. Andersson [12] extended Frieze and Jerrum's random rounding method to M3S and obtained a $(2/3) + O(1/n^3)$-approximation algorithm, which is the current best approximation ratio for M3S; see also the recent research of Gaur et al. [13]. The author of the current paper considered a special Max 3-section problem and obtained a 0.6733-approximation algorithm; see Ling (2009) [14].

Clearly, the feasible region of problem M3S is a subset of $\mathbb{S}$, and the optimal value of problem M3S is not greater than that of problem M3C. Assume that we have obtained a global optimal solution or a high-quality approximate solution $\hat{\mathbf{y}}$ of problem M3C. Of course, $\hat{\mathbf{y}}$ may not satisfy the condition $\sum_{i=1}^n \hat y_i = 0$. But we may adjust $\hat{\mathbf{y}}$ by a greedy algorithm to obtain a new feasible solution $\mathbf{y}^s$ satisfying $\sum_{i=1}^n y^s_i = 0$. This is the motivation for the greedy algorithm we propose for the Max 3-section problem.

For the sake of our analysis, without loss of generality, we assume that the local maximizer $\hat{\mathbf{y}}$ satisfies $|S_0| = \max\{|S_0|, |S_1|, |S_2|\}$. This means that $S_0 = \{i : \hat y_i = 1\}$ is the subset of $V$ with the largest cardinality. If instead $|S_k| = \max\{|S_0|, |S_1|, |S_2|\}$ for some $k \neq 0$ ($k = 1, 2$), then we may set $y^c_i = \omega^{3-k} \hat y_i$, $i = 1, \ldots, n$. The resulting new solution $\mathbf{y}^c = (y^c_1, \ldots, y^c_n)$ does not change the objective value, since $f(\hat{\mathbf{y}}) = f(\omega^{3-k} \hat{\mathbf{y}})$ ($k \neq 0$, $k = 1, 2$); moreover, the new partition $\{S^c_0, S^c_1, S^c_2\}$ based on $\mathbf{y}^c$ satisfies $|S^c_0| = \max\{|S^c_0|, |S^c_1|, |S^c_2|\}$. Under our assumption, the partition $S = \{S_0, S_1, S_2\}$ still falls into one of four possible cases.

Case 1. $|S_0| \geq |S_1| \geq n/3 \geq |S_2|$.

Case 2. $|S_0| \geq n/3 \geq |S_1| \geq |S_2|$.

Case 3. $|S_0| \geq |S_2| \geq n/3 \geq |S_1|$.

Case 4. $|S_0| \geq n/3 \geq |S_2| \geq |S_1|$.
The size-adjusting greedy algorithms for Cases 3 and 4 are similar to those for Cases 1 and 2. Hence, we mainly consider Cases 1 and 2 for adjusting the partition of $V$ from $S = \{S_0, S_1, S_2\}$ to $\widetilde S = \{\widetilde S_0, \widetilde S_1, \widetilde S_2\}$ such that $|\widetilde S_k| = n/3$, $k = 0, 1, 2$. Denote
$$\delta_0(i) = \sum_{j \in S_1 \cup S_2} w_{ij}, \; i \in S_0, \qquad \delta_{01}(i) = \sum_{j \in S_1} w_{ij}, \; i \in S_0, \qquad \delta_{10}(i) = \sum_{j \in S_0} w_{ij}, \; i \in S_1,$$
$$\delta_{02}(i) = \sum_{j \in S_2} w_{ij}, \; i \in S_0, \qquad \delta_{20}(i) = \sum_{j \in S_0} w_{ij}, \; i \in S_2, \qquad \delta_{12}(i) = \sum_{j \in S_2} w_{ij}, \; i \in S_1, \qquad \delta_{21}(i) = \sum_{j \in S_1} w_{ij}, \; i \in S_2. \tag{4.3}$$
Then, it follows from simple computation that
$$\delta_0(i) = \delta_{01}(i) + \delta_{02}(i) \quad \text{for each } i \in S_0, \qquad \sum_{i \in S_k} \delta_{kl}(i) = \sum_{i \in S_l} \delta_{lk}(i), \quad k, l = 0, 1, 2, \; k \neq l,$$
$$f(\hat{\mathbf{y}}) = \sum_{i \in S_0} \delta_0(i) + \sum_{i \in S_1} \delta_{12}(i) = \sum_{i \in S_0} \delta_{01}(i) + \sum_{i \in S_0} \delta_{02}(i) + \sum_{i \in S_1} \delta_{12}(i) = d_{01} + d_{02} + d_{12}, \tag{4.4}$$
where $d_{01} = \sum_{i \in S_0} \delta_{01}(i)$, $d_{02} = \sum_{i \in S_0} \delta_{02}(i)$, and $d_{12} = \sum_{i \in S_1} \delta_{12}(i)$.
In what follows, we describe the size-adjusting greedy algorithms (SAGAs) for Cases 1 and 2, denoting the greedy algorithms for the two cases by SAGA1 and SAGA2, respectively.
For SAGA1, one has the following.
(1) Calculate
$$m_{02} = \frac{\sum_{i \in S_0} \delta_{02}(i)}{|S_0|}, \qquad m_{12} = \frac{\sum_{i \in S_1} \delta_{12}(i)}{|S_1|}. \tag{4.5}$$
(2) If $m_{02} \geq m_{12}$, let $S_1 = \{j_1, j_2, \ldots, j_{|S_1|}\}$, where $\delta_{12}(j_l) \geq \delta_{12}(j_{l+1})$, $l = 1, 2, \ldots, |S_1| - 1$. Set $\widetilde S_1 = \{j_1, j_2, \ldots, j_{n/3}\}$ and $\widehat S_2 = S_2 \cup (S_1 \setminus \widetilde S_1)$, and recalculate
$$\delta'_{02}(i) = \sum_{j \in \widehat S_2} w_{ij} \tag{4.6}$$
for each $i \in S_0$. Let $S_0 = \{i_1, i_2, \ldots, i_{|S_0|}\}$, where $\delta'_{02}(i_k) \geq \delta'_{02}(i_{k+1})$. Set $\widetilde S_0 = \{i_1, i_2, \ldots, i_{n/3}\}$ and $\widetilde S_2 = \widehat S_2 \cup (S_0 \setminus \widetilde S_0)$.
(3) If $m_{02} < m_{12}$, let $S_0 = \{i_1, i_2, \ldots, i_{|S_0|}\}$, where $\delta_{02}(i_k) \geq \delta_{02}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$. Set $\widetilde S_0 = \{i_1, i_2, \ldots, i_{n/3}\}$ and $\widehat S_2 = S_2 \cup (S_0 \setminus \widetilde S_0)$, and then recalculate
$$\delta'_{12}(i) = \sum_{j \in \widehat S_2} w_{ij} \tag{4.7}$$
for each $i \in S_1$. Set $\widetilde S_1 = \{j_1, j_2, \ldots, j_{n/3}\}$ and $\widetilde S_2 = \widehat S_2 \cup (S_1 \setminus \widetilde S_1)$, where here $\delta'_{12}(j_k) \geq \delta'_{12}(j_{k+1})$.
(4) Return the current partition $\widetilde S = \{\widetilde S_0, \widetilde S_1, \widetilde S_2\}$; stop.
For SAGA2, one has the following.
(1) Calculate $d_{01} = \sum_{i \in S_0} \delta_{01}(i)$, $d_{02} = \sum_{i \in S_0} \delta_{02}(i)$, and
$$m_{01} = \frac{d_{01}}{|S_0|}, \qquad m_{02} = \frac{d_{02}}{|S_0|}. \tag{4.8}$$
(2) If $m_{01} \leq m_{02}$, let
$$S_0 = \bigl\{i_1, i_2, \ldots, i_{|S_0|}\bigr\}, \tag{4.9}$$
where $\delta_{01}(i_k) \geq \delta_{01}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$. Set
$$\widehat S_0 = \bigl\{i_1, i_2, \ldots, i_{|S_0| - q_1}\bigr\}, \qquad \widetilde S_1 = S_1 \cup \bigl(S_0 \setminus \widehat S_0\bigr), \tag{4.10}$$
where $q_1 = (n/3) - |S_1|$. Recalculate
$$\delta'_{02}(i) = \sum_{j \in S_2} w_{ij}, \quad i \in \widehat S_0, \tag{4.11}$$
and let
$$\widehat S_0 = \Bigl\{i'_1, i'_2, \ldots, i'_{|\widehat S_0|}\Bigr\}, \tag{4.12}$$
where $\delta'_{02}(i'_k) \geq \delta'_{02}(i'_{k+1})$, $k = 1, 2, \ldots, |\widehat S_0| - 1$. Set
$$\widetilde S_0 = \bigl\{i'_1, i'_2, \ldots, i'_{n/3}\bigr\}, \qquad \widetilde S_2 = S_2 \cup \bigl(\widehat S_0 \setminus \widetilde S_0\bigr). \tag{4.13}$$
(3) If $m_{01} > m_{02}$, let
$$S_0 = \bigl\{i_1, i_2, \ldots, i_{|S_0|}\bigr\}, \tag{4.14}$$
where $\delta_{02}(i_k) \geq \delta_{02}(i_{k+1})$, $k = 1, 2, \ldots, |S_0| - 1$. Set
$$\widehat S_0 = \bigl\{i_1, i_2, \ldots, i_{|S_0| - q_2}\bigr\}, \qquad \widetilde S_2 = S_2 \cup \bigl(S_0 \setminus \widehat S_0\bigr), \tag{4.15}$$
where $q_2 = (n/3) - |S_2|$. Recalculate
$$\delta'_{01}(i) = \sum_{j \in S_1} w_{ij}, \quad i \in \widehat S_0, \tag{4.16}$$
and let
$$\widehat S_0 = \Bigl\{i'_1, i'_2, \ldots, i'_{|\widehat S_0|}\Bigr\}, \tag{4.17}$$
where $\delta'_{01}(i'_k) \geq \delta'_{01}(i'_{k+1})$, $k = 1, 2, \ldots, |\widehat S_0| - 1$. Set
$$\widetilde S_0 = \bigl\{i'_1, i'_2, \ldots, i'_{n/3}\bigr\}, \qquad \widetilde S_1 = S_1 \cup \bigl(\widehat S_0 \setminus \widetilde S_0\bigr). \tag{4.18}$$
(4) Return the current partition $\widetilde S = \{\widetilde S_0, \widetilde S_1, \widetilde S_2\}$; stop.
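A much-simplified greedy rebalancing in the same spirit can be sketched in Python as follows. This is our illustrative sketch, not the exact SAGA1/SAGA2 case analysis: while some part is oversized, it moves out of the largest part the vertex whose transfer to the smallest part sacrifices the least cut weight.

```python
import numpy as np

def rebalance(labels, W):
    """Greedily adjust a 3-partition (labels in {0,1,2}) until each part
    has exactly n/3 vertices, moving the cheapest vertex each time."""
    labels = labels.copy()
    n = len(labels)
    target = n // 3
    sizes = [int((labels == k).sum()) for k in range(3)]
    while max(sizes) > target:
        src = int(np.argmax(sizes))
        dst = int(np.argmin(sizes))
        members = np.where(labels == src)[0]
        # moving i from src to dst: edges i-dst become uncut (cost up),
        # edges i-src become cut (cost down); pick the cheapest vertex
        costs = [W[i, labels == dst].sum() - W[i, labels == src].sum()
                 for i in members]
        i = int(members[int(np.argmin(costs))])
        labels[i] = dst
        sizes[src] -= 1
        sizes[dst] += 1
    return labels
```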

5. Numerical Results

This section describes the experimental results obtained for some instances of the Max 3-cut and Max 3-section problems using the proposed VNS metaheuristic. We also give a quantitative comparison with the 0.836-approximation algorithm. The computational experiments were performed on an Intel Pentium 4 processor at 2.0 GHz with 512 MB of RAM, and all algorithms were coded in Matlab. Because the RSDP relaxation of M3C involves many slack variables, many constraints, and matrix variables without a block diagonal structure, in our numerical comparisons we choose SDPT3-4.0 [9], one of the best-known solvers for semidefinite programming, to solve the RSDP relaxation of M3C.

All our test problems are generated randomly in the following way. Let $p \in (0, 1)$ be a constant and $r \in (0, 1)$ a random number. If $r \leq p$, then there is an edge between nodes $i$ and $j$ with weight $w_{ij}$, a random integer between 1 and 10. Otherwise, $w_{ij} = 0$; that is, there is no edge between nodes $i$ and $j$. Because of the memory limits of SDPT3, when $n > 200$, RSDP becomes a huge semidefinite programming problem with no fewer than 59700 slack variables and 99900 constraints and exceeds the memory available to SDPT3. Hence, in the numerical experiments, we consider 30 instances with $p = 0.1, 0.3, 0.6$ and $n$ varying from 20 to 200.
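The instance generator described above can be reproduced as follows (Python; the function name and the seeding are ours). A W30.6-style instance corresponds to a call with $n = 30$ and $p = 0.6$.

```python
import numpy as np

def random_graph(n, p, seed=0):
    """Edge between i and j with probability p; weight a uniform
    integer in 1..10; no edge means weight 0."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() <= p:
                W[i, j] = W[j, i] = rng.integers(1, 11)
    return W

W = random_graph(30, 0.6)   # an instance of the family "W30.6"
```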

Firstly, we examine the influence of ๐พ m a x on the quality of the solution obtained by VNS-k. For a given graph, we take ๐พ m a x = 3, 5, 10, 15, 30; Table 1 presents the results. Here Wnp in the first column of this table and the following tables means a graph generated randomly with ๐‘› nodes and density ๐‘; for instance, W30.6 is a graph generated randomly with ๐‘› = 3 0 and ๐‘ = 0.6. We find from Table 1 that the influence of ๐พ m a x on the objective value (denoted by Obj in Table 1) is slight when ๐พ m a x > 5, but the CPU time increases sharply as ๐พ m a x increases. This result is not surprising: because ๐ผ ( ๐พ ) > ๐พ, we choose randomly a point ๐ฒ in โˆ‚N_{I(K)}( ฬ‚ ๐ฒ ) instead of โˆ‚N_K( ฬ‚ ๐ฒ ), which avoids choosing too large a ๐พ m a x and thereby incurring more CPU time. Hence, in the subsequent numerical comparisons, we fix ๐พ m a x = 5 for all test problems.
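The role of ๐พ m a x as a termination bound can be illustrated by a generic VNS-k skeleton. The paper's neighborhood structure and local search are defined in earlier sections; here a simple one-node-move descent and a "relabel k random nodes" shake stand in for them, so this is only an illustrative Python sketch, not the authors' implementation:

```python
import random

def cut_value(w, part):
    """Max 3-cut objective: total weight of edges whose endpoints differ."""
    n = len(w)
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if part[i] != part[j])

def contrib(w, part, i, lab):
    """Cut weight node i would contribute if it carried label `lab`."""
    return sum(w[i][j] for j in range(len(w)) if j != i and part[j] != lab)

def local_search(w, part):
    """Move single nodes to their best label until no move improves the cut."""
    improved = True
    while improved:
        improved = False
        for i in range(len(w)):
            best = max(range(3), key=lambda lab: contrib(w, part, i, lab))
            if contrib(w, part, i, best) > contrib(w, part, i, part[i]):
                part[i] = best
                improved = True
    return part

def vns_k(w, k_max=5, seed=None):
    rng = random.Random(seed)
    n = len(w)
    part = local_search(w, [rng.randrange(3) for _ in range(n)])
    k = 1
    while k <= k_max:                      # stop once k exceeds K_max
        cand = part[:]
        for i in rng.sample(range(n), min(k, n)):
            cand[i] = rng.randrange(3)     # shake: relabel k random nodes
        cand = local_search(w, cand)
        if cut_value(w, cand) > cut_value(w, part):
            part, k = cand, 1              # improvement: restart at k = 1
        else:
            k += 1                         # no improvement: widen the shake
    return part, cut_value(w, part)
```

Larger ๐พ m a x means more (and wider) shakes before termination, which explains the sharply growing CPU time with only marginal gains in the objective.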

Secondly, we compare the VNS (VNS-k, VNS-t) metaheuristic with the 0.836-approximate algorithm on all test problems. To avoid the effect of initial points, for each test problem, after RSDP is solved, we run the rounding procedure of the 0.836-approximate algorithm and the VNS metaheuristic ten times each.

Table 2 gives the numerical comparisons. In Table 2, Objrsdp is the optimal value of problem RSDP, that is, an upper bound for M3C. ObjGM is the largest value obtained by the 0.836-approximate algorithm in the ten runs, and Objvns is the largest value obtained by VNS for M3C in the ten runs. ๐‘š and s.v. are the numbers of constraints and slack variables (s.v.), respectively. ๐‘ก G M and ๐‘ก v n s - ๐‘˜ are the average times (in seconds) of the two algorithms over the ten runs. For the maximum CPU time of VNS-t, we take ๐‘ก m a x = 2 ๐‘ก v n s - ๐‘˜, although the actual CPU time of VNS-t may exceed ๐‘ก m a x. Additionally, to measure the quality of solutions, we take
$$\rho = \frac{\mathrm{Obj}_{vns} - \mathrm{Obj}_{rsdp}}{\mathrm{Obj}_{rsdp}} = \frac{\mathrm{Obj}_{vns}}{\mathrm{Obj}_{rsdp}} - 1 \quad (5.1)$$
for M3C and
$$\rho = \frac{\mathrm{Obj}_{vns+saga} - \mathrm{Obj}_{rsdp}}{\mathrm{Obj}_{rsdp}} = \frac{\mathrm{Obj}_{vns+saga}}{\mathrm{Obj}_{rsdp}} - 1 \quad (5.2)$$
for M3S. Clearly, ๐œŒ reflects how close the solution obtained by VNS is to the optimal value of RSDP. One can see from Table 2 that (1) the VNS metaheuristic not only obtains a better solution than the 0.836-approximate algorithm for all test problems, but its elapsed CPU time is also much less than that of the 0.836-approximate algorithm for all test problems; and (2) the solution quality is improved by VNS-t for most test problems when the termination criterion of VNS is the maximum CPU time, although VNS-t spends more computational time than VNS-k. The improvement is reflected by โˆ‡ ๐œŒ = ๐œŒ ๐‘ก โˆ’ ๐œŒ ๐‘˜ in the final column of Table 2; on average, VNS-t improves the result by 0.91 percentage points.
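As a small sanity check, the measure (5.1)/(5.2) is a relative gap to the RSDP upper bound: since Obj_rsdp bounds the heuristic value from above, ๐œŒ is nonpositive and values nearer 0 are better (the function name below is ours):

```python
def rho(obj_heur, obj_rsdp):
    """Relative gap (5.1)/(5.2): obj_heur / obj_rsdp - 1, nonpositive
    whenever obj_rsdp is a valid upper bound on obj_heur."""
    return obj_heur / obj_rsdp - 1.0
```

For instance, a heuristic value of 83.6 against an upper bound of 100 gives ๐œŒ = -0.164.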

Finally, we consider the solution of M3S by combining VNS-k with the greedy size-adjustment algorithm SAGA stated in Section 4. Let ฬ‚ ๐ฒ be an approximate solution of M3C obtained by VNS; SAGA then yields an approximate solution of M3S. The numerical results are reported in Table 3, in which Obj v n s + s a g a stands for the largest value obtained by VNS-k plus SAGA for M3S. Although the size-adjustment algorithm may decrease the objective value obtained by VNS, Table 3 shows that the changes in the objective values are very slight. In particular, the objective values of some problems, such as W150.3, do not decrease but actually increase. We do not compare the obtained results with Andersson's 2/3-approximate algorithm, because all approximate solutions of M3S obtained by VNS plus SAGA are already better than those of the 0.836-approximate algorithm, with the only exceptions being W30.1 and W30.3.

6. Conclusions

A variable neighborhood stochastic metaheuristic was proposed in this paper to solve the Max 3-cut and Max 3-section problems. Our algorithms can handle Max 3-cut and Max 3-section instances of different sizes and densities. Although the 0.836-approximate algorithm has very good theoretical guarantees, our numerical comparisons indicate that the proposed VNS metaheuristic outperforms the well-known 0.836-approximate algorithm and can efficiently obtain very high-quality solutions for the NP-hard Max 3-cut and Max 3-section problems.

We mention that the proposed algorithm can in fact handle the higher-dimensional G-set graph problems created by Prof. Rinaldi using the graph generator rudy. However, we cannot give numerical comparisons with the 0.836-approximate algorithm, since the RSDP relaxations of these problems exceed the memory limits of all current SDP software. In addition, if we increase ๐พ m a x or ๐‘ก m a x in the numerical implementation, the solution quality of M3C obtained by VNS can be further improved.

Funding

This work was supported by the National Natural Science Foundation of China (nos. 71001045 and 10971162), the China Postdoctoral Science Foundation (no. 20100480491), the Natural Science Foundation of Jiangxi Province of China (no. 20114BAB211008), and the Jiangxi University of Finance and Economics Support Program Funds for Outstanding Youths.

Acknowledgment

The authors would like to thank the editor and an anonymous referee for their numerous suggestions for improving the paper.