Mathematical Problems in Engineering

Volume 2018, Article ID 3586731, 8 pages

https://doi.org/10.1155/2018/3586731

## An Order Effect of Neighborhood Structures in Variable Neighborhood Search Algorithm for Minimizing the Makespan in an Identical Parallel Machine Scheduling

^{1}Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia

^{2}Faculty of Engineering, Mechanical Engineering Department, Helwan University, Cairo 11732, Egypt

Correspondence should be addressed to Mohammed A. Noman; mmohammed1@ksu.edu.sa

Received 11 September 2017; Revised 6 February 2018; Accepted 27 February 2018; Published 22 April 2018

Academic Editor: Ton D. Do

Copyright © 2018 Ibrahim Alharkan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A variable neighborhood search (VNS) algorithm is proposed for scheduling identical parallel machines. The objective is to study the effect of adding a new neighborhood structure and of changing the order of the neighborhood structures on minimizing the makespan. To enhance the quality of the final solution, a machine-based encoding method and five neighborhood structures are used in the VNS. Two initial-solution methods are employed in two versions of the improved VNS (IVNS): a longest processing time (LPT) initial solution, denoted HIVNS, and a random initial solution, denoted RIVNS. The proposed versions are compared with the LPT, simulated annealing (SA), genetic algorithm (GA), modified variable neighborhood search (MVNS), and improved variable neighborhood search (IVNS) algorithms from the literature. Computational results show that changing the order of the neighborhood structures and adding a new neighborhood structure can yield a better solution in terms of average makespan.

#### 1. Introduction

Identical parallel machine scheduling (IPMS) with the objective of minimizing the makespan is a combinatorial optimization problem. It was shown to be NP-hard by Garey and Johnson [1], so no polynomial-time algorithm is known for it. Exact algorithms such as branch and bound [2] and cutting plane algorithms [3] solve this type of IPMS problem and find optimal solutions for small instances. As the problem size increases, however, exact algorithms become inefficient and require excessive time to obtain a solution.

These disadvantages create a need for heuristics and metaheuristics that deliver optimal or near-optimal solutions within a reasonable amount of time. The Longest Processing Time (LPT) rule, described by Mokotoff [4], is the first heuristic applied to IPMS; it has a tight worst-case performance bound of 4/3 − 1/(3*m*), where *m* is the number of parallel machines. LPT sorts the jobs in nonincreasing order of processing time and then assigns them one by one to the currently least loaded machine until all jobs are assigned. The LPT heuristic performs well for the makespan criterion, but the solution obtained is often only a local optimum. Later, Coffman et al. [5] proposed the MULTIFIT algorithm, which is based on techniques from bin-packing. Blackstone Jr. and Phillips [6] proposed a simple heuristic that improves an LPT sequence by exchanging jobs between processors to reduce the makespan. Lee and Massey [7] combined the LPT and MULTIFIT heuristics into a new one that uses the LPT schedule as the initial solution for MULTIFIT; the combined heuristic performs better than LPT, and its error bound is no worse than MULTIFIT's. Yue [8] proved the bound for MULTIFIT to be 13/11. Lee and Massey [9] extended the MULTIFIT algorithm and showed that the error bound of the extended algorithm is only 1/10. Garey and Johnson [1] proposed a 3-phase composite heuristic consisting of a constructive phase and two improvement phases with no preliminary sorting of processing times, and showed that it is quicker than LPT. Ho and Wong [10] introduced a Two-Machine Optimal Scheduling method that uses lexicographic search; it performs better than the LPT, MULTIFIT, and MULTIFIT-extension algorithms, and it takes less CPU time than the latter two.
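The LPT rule just described can be sketched in Python as follows (a minimal illustrative sketch, not the authors' implementation; the job data in the usage example are hypothetical):

```python
import heapq

def lpt_schedule(processing_times, m):
    """Longest Processing Time rule: sort jobs by nonincreasing
    processing time, then assign each to the least loaded machine."""
    jobs = sorted(range(len(processing_times)),
                  key=lambda j: processing_times[j], reverse=True)
    loads = [(0, i) for i in range(m)]   # min-heap of (load, machine index)
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for j in jobs:
        load, i = heapq.heappop(loads)   # currently least loaded machine
        assignment[i].append(j)
        heapq.heappush(loads, (load + processing_times[j], i))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

# Hypothetical instance: 4 jobs on 2 machines.
cmax, sched = lpt_schedule([4, 3, 3, 2], 2)
print(cmax)  # → 6
```

The heap keeps the "least loaded machine" lookup at O(log m) per job, so the whole rule runs in O(n log n + n log m) time.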

Riera et al. [11] proposed two approximate algorithms that use LPT as an initial solution and compared them with dynamic programming and MULTIFIT algorithms. Their first algorithm exchanges pairs of jobs to improve the makespan. Their second algorithm schedules a job such that the completion time and processing time of the selected job are near the bound; its results are similar to MULTIFIT's, but it requires less CPU time. Cheng and Gen [12] applied a memetic algorithm to minimize the maximum weighted absolute lateness on parallel machines and showed that it outperforms a genetic algorithm and the conventional heuristics. Ghomi and Ghazvini [13] proposed a pairwise interchange algorithm that gives near-optimal solutions in a short time. Min and Cheng [14] proposed a genetic algorithm (GA) using machine-based encoding and showed that it outperforms LPT and SA and is suitable for large-scale IPMS problems. Gupta and Ruiz-Torres [15] proposed the LISTFIT heuristic based on bin-packing and list scheduling; LISTFIT generates optimal or near-optimal solutions and outperforms the LPT, MULTIFIT, and COMBINE heuristics. Costa et al. [16] proposed an algorithm inspired by the immune systems of vertebrate animals. Lee et al. [17] proposed a simulated annealing (SA) approach for makespan minimization on IPMS that uses LPT as the initial solution; computational results showed that the SA heuristic outperforms the LISTFIT and pairwise interchange (PI) algorithms and is efficient for large-scale problems. Tang and Luo [18] proposed a new iterated local search (ILS) algorithm combined with a variable number of cyclic exchanges, and experiments showed that the algorithm is efficient. Akyol and Bayhan [19] proposed a dynamical neural network that employs time-varying penalty parameters.
The simulation results showed that the proposed algorithm generates feasible solutions and finds better makespans than LPT. Kashan and Karimi [20] presented a discrete particle swarm optimization (DPSO) algorithm for makespan minimization; computational results showed that a hybridized DPSO (HDPSO) algorithm outperforms both the SA and DPSO algorithms. Sevkli and Uysal [21] proposed a modified variable neighborhood search (MVNS) based on exchange and move neighborhood structures; computational results demonstrated that it outperforms both the GA and LPT algorithms. Min and Cheng [14] proposed a harmony search (HS) algorithm with dynamic subpopulation (DSHS); results showed that DSHS outperforms SA and HDPSO for many instances, with an execution time of less than 1 second for all computations. Chen et al. [22] proposed a discrete harmony search (DHS) algorithm that uses a discrete encoding scheme to initialize the harmony memory (HM) and redefines the improvisation scheme for generating new harmonies so that it suits this combinatorial optimization problem; the study also hybridized a local search method with DHS to speed up the local search. Computational results showed that the DHS algorithm is very competitive with other heuristics in the literature. Jing and Jun-qing [23] proposed an efficient variable neighborhood search (EVNS) that uses four neighborhood structures and has two versions: one uses an LPT sequence as the initial solution, and the other uses a random sequence. Computational results demonstrated that EVNS is efficient in searching for global or near-global optima. M. Sevkli and A. Z. Sevkli [24] proposed a stochastically perturbed particle swarm optimization algorithm (SPPSO) and compared it with two recent PSO algorithms.
They concluded that the SPPSO algorithm produces better results than DPSO and PSOspv in terms of the number of optimal solutions found. Laha [25] proposed an improved simulated annealing (SA) heuristic; computational results showed that it outperforms the best-known heuristic in the literature, and it is also easy to implement. In this paper, the algorithm proposed by Jing and Jun-qing [23] in their paper "Efficient variable neighborhood search for identical parallel machines scheduling" is used with two changes: the order of the neighborhood structures is changed, and another neighborhood structure is added, giving five neighborhood structures in our proposed algorithm.

The remaining sections of this paper are organized as follows. In Section 2, a brief description of the IPMS problem is given. In Section 3, the steps of the proposed algorithm are described in detail and its neighborhood structures are explained. In Section 4, computational results are discussed. Conclusions are drawn in Section 5.

#### 2. Problem Description

The identical parallel machine scheduling (IPMS) problem can be described as follows.

A set of *n* independent jobs is to be processed on *m* identical parallel machines, where the processing time of job *j* on any machine is given by *p_j*.

A job can be processed on only one machine at a time, and a machine cannot process more than one job at a time. Priority and precedence constraints are not considered. Job cancellation is not allowed, and a job, once started, completes its processing on a machine without interruption.

The objective is to minimize the maximum completion time, known as the makespan, of scheduling the jobs on the machines.

This scheduling problem can be described by the three-field notation *α*|*β*|*γ* as *P_m*||*C*_max, where *P* indicates the parallel machine environment, *m* indicates the number of machines, the empty *β* field indicates that there are no additional constraints in this problem, and *C*_max indicates that the objective is to minimize the makespan.
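To make the objective concrete: for any fixed assignment of jobs to machines, *C*_max is simply the largest total load over the machines. A small illustrative sketch (with hypothetical data):

```python
def makespan(assignment, p):
    """C_max: the maximum total processing time over all machines."""
    return max(sum(p[j] for j in machine) for machine in assignment)

# Hypothetical: p = [3, 5, 2, 4]; machine 0 runs jobs 0 and 3, machine 1 runs jobs 1 and 2.
print(makespan([[0, 3], [1, 2]], [3, 5, 2, 4]))  # → 7
```

Here the machine loads are 3 + 4 = 7 and 5 + 2 = 7, so the schedule is perfectly balanced and *C*_max = 7.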

This problem is interesting because minimizing the makespan has the effect of balancing the load over the various machines, which is an important goal in practice.

#### 3. Development of the Proposed (IVNS) Algorithm

##### 3.1. Basic VNS

Variable neighborhood search (VNS) is a metaheuristic proposed by Mladenović and Hansen [26] that enhances solution quality through systematic changes of neighborhoods. The main steps of the basic VNS algorithm can be summarized as follows. Initialization: select the set of neighborhood structures N_k, k = 1, …, k_max; obtain an initial solution x; and choose a stopping condition. Then repeat the following steps until the stopping condition is satisfied:

(1) Set k = 1.

(2) Repeat the following steps until k = k_max:

(a) Shaking: generate a point x′ at random from the kth neighborhood N_k(x) of x.

(b) Local search: apply some local search method with x′ as the initial solution; denote by x″ the local optimum so obtained.

(c) Move or not: if the local optimum x″ is better than the incumbent x, move there (x ← x″) and continue the search with N_1 (k = 1); otherwise, set k = k + 1.
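The basic VNS loop above can be sketched generically in Python (a schematic sketch of basic VNS, not the paper's IVNS; the toy usage example, minimizing |x| over the integers with an identity local search, is purely illustrative):

```python
import random

def vns(x0, neighborhoods, local_search, cost, max_iters=200):
    """Basic VNS: shake in neighborhood N_k, apply local search,
    and return to N_1 whenever the incumbent improves."""
    x = x0
    for _ in range(max_iters):                 # stopping condition: iteration budget
        k = 0
        while k < len(neighborhoods):
            x_shake = neighborhoods[k](x)      # (a) shaking
            x_local = local_search(x_shake)    # (b) local search
            if cost(x_local) < cost(x):        # (c) move or not
                x, k = x_local, 0              # accept and restart from N_1
            else:
                k += 1                         # try the next neighborhood
    return x

# Toy illustration: minimize |x| with two shaking radii (step 1 and step 3).
random.seed(0)
best = vns(10,
           neighborhoods=[lambda x: x + random.choice([-1, 1]),
                          lambda x: x + random.choice([-3, 3])],
           local_search=lambda x: x,           # identity, for illustration only
           cost=abs)
print(best)
```

In a scheduling context, x would be a machine-based job assignment, `cost` the makespan, and each entry of `neighborhoods` one of the shaking moves.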

##### 3.2. Improved Variable Neighborhood Search (IVNS) Algorithm

As mentioned earlier, the proposed algorithm is an extension of the algorithm of Jing and Jun-qing [23].

The proposed algorithm has two versions, each with two variants, as shown in Figure 1. In the first version, a new neighborhood structure is added to the four neighborhood structures proposed by Jing and Jun-qing [23], while in the second version the order of these neighborhood structures is changed. Each version uses either an LPT [20] or a random initial solution; the resulting variants are referred to as "HIVNS" and "RIVNS," respectively. All variants of the proposed algorithm use the same five neighborhood structures, which are discussed in the following section.
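As a preview of what a neighborhood structure looks like under a machine-based encoding, two common moves, relocating a job to another machine and swapping two jobs between machines, can be sketched as follows (illustrative only; the paper's five specific structures and their order are defined in the following section):

```python
import random

def move_job(assignment):
    """Relocate one randomly chosen job to a different machine."""
    a = [list(machine) for machine in assignment]   # copy; keep the original intact
    src = random.choice([i for i, jobs in enumerate(a) if jobs])
    dst = random.choice([i for i in range(len(a)) if i != src])
    job = a[src].pop(random.randrange(len(a[src])))
    a[dst].append(job)
    return a

def swap_jobs(assignment):
    """Exchange one job between two different nonempty machines."""
    a = [list(machine) for machine in assignment]
    i, k = random.sample([idx for idx, jobs in enumerate(a) if jobs], 2)
    ji, jk = random.randrange(len(a[i])), random.randrange(len(a[k]))
    a[i][ji], a[k][jk] = a[k][jk], a[i][ji]
    return a

# Hypothetical solution: jobs 0–3 on two machines.
neighbor = move_job([[0, 1], [2, 3]])
```

Both moves preserve the job set (every job stays assigned to exactly one machine), which is the invariant any neighborhood structure for this encoding must maintain.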