Mathematical Problems in Engineering

Volume 2015, Article ID 612604, 10 pages

http://dx.doi.org/10.1155/2015/612604

## A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

^{1}Department of Mathematical Sciences, Shiraz University of Technology, Shiraz 71555-313, Iran

^{2}Department of Industrial Engineering, Shiraz University of Technology, Shiraz 71555-313, Iran

Received 11 June 2015; Revised 21 September 2015; Accepted 20 October 2015

Academic Editor: Marco Mussetta

Copyright © 2015 Mostafa Khorramizadeh and Vahid Riahi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The flow shop scheduling problem with mixed blocking constraints and makespan minimization is investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. To assess the performance of the proposed algorithm, two well-known sets of test problems are considered. Computational results show that the presented algorithm is competitive with well-known algorithms from the literature, especially on large-sized problems.

#### 1. Introduction

The flow shop scheduling problem has been extensively studied for over 50 years because of its significance in both theory and industrial applications [1]. As an important branch of flow shop scheduling, the blocking flow shop has attracted much attention in recent years. In the blocking flow shop, because of the lack of intermediate buffer storage between consecutive machines, a job has to stay on its current machine until the next machine becomes available, even if it has already completed processing on that machine [2]. This increases waiting time and thus reduces the efficiency of production. In the classical flow shop problem, the buffer capacity between machines is considered unlimited. Some flow shop problems involve blocking constraints such as RSb, RCb, and RCb* (these blocking constraints are described in Section 2). For problems with the classical blocking constraint (RSb), Wang et al. developed a hybrid genetic algorithm [3] and a hybrid harmony search algorithm [4] for flow shop scheduling with limited buffers, and Ribas et al. [5] proposed an iterated greedy algorithm. The RCb constraint was introduced for the first time by Dauzere-Peres [6]. For the RCb constraint, an integer linear programming model, lower bounds, and a metaheuristic are presented in [7]; these problems have also been solved by a metaheuristic algorithm in [8]. A new blocking constraint, called RCb*, was proposed by Trabelsi et al. [9]; it is an RCb constraint simultaneously subject to different types of blocking constraints on successive machines in a process. In [10], the authors proposed some heuristics for the flow shop problem with mixed RSb, RCb, and RCb* constraints and solved it with a genetic algorithm. In [11], the complexity of a production system where the RSb and RCb constraints are mixed is studied.

In this paper, we propose a metaheuristic for minimizing the makespan in flow shop scheduling problems with mixed blocking constraints. The method is based on the ideas of bee colony optimization and is developed using Taguchi orthogonal arrays and the path relinking method. Moreover, some heuristic local search methods and perturbation procedures are designed to improve the efficiency of the resulting algorithm. To compare the presented algorithm with some efficient algorithms from the literature, two sets of well-known test problems are used. The numerical results show that the presented algorithm is competitive with well-known algorithms from the literature.

This paper is structured as follows: the mixed blocking flow shop problem is described in Section 2. Section 3 describes the proposed algorithm. Numerical results are given in Section 4. Finally, the paper is concluded in Section 5.

#### 2. Problem Description

To describe the problem, we follow the same approach as [10]. Flow shop scheduling problems are a class of scheduling problems in which flow control enables an appropriate sequencing of jobs for processing on a set of machines or other resources, in compliance with given processing orders. In particular, it is desired to maintain a continuous flow of processing tasks with a minimum of idle time and a minimum of waiting time. Flow shop scheduling is a special case of job shop scheduling in which there is a strict order for all operations to be performed on all jobs. Flow shop scheduling applies to production facilities as well as to computing designs. The deterministic flow shop scheduling problem consists of a finite set of n jobs to be processed on a finite set of m machines M_1, ..., M_m. Each job j must be processed on every machine, and its routing consists of the operations O_{j1}, O_{j2}, ..., O_{jm}, performed in this order. Operation O_{jk} needs an execution time p_{jk} on machine M_k. For k = 1, ..., m, only one job can be executed on machine M_k at any time. Preemptive operations are not allowed. The objective is to minimize the time at which all operations are completed, that is, the makespan. Several different cases of flow shop can be considered, such as the classical flow shop problem without any blocking constraint, the flow shop problem with only one blocking constraint between all machines, and flow shop problems in which different blocking constraints are mixed. One constraint met in industrial problems is that a job blocks a machine until this job starts on the next machine in its routing. This classical blocking constraint is denoted by RSb (Release when Starting blocking). Another constraint is that a machine becomes available to treat its next operation as soon as its current job is finished on the following machine, regardless of whether or not the job has left that machine. This blocking constraint is denoted by RCb (Release when Completing blocking).
In a large production line, different types of blocking constraints can be encountered, depending on the intermediate storage between machines, the characteristics of the machines, and technical constraints. To characterize a flow shop problem where different blocking constraints are mixed, a vector A = (A_1, ..., A_{m-1}) is introduced that contains the blocking constraints between machines. Element A_k is the blocking constraint between machines M_k and M_{k+1}. This vector has m-1 elements (as many elements as the number of transitions between machines). See [10] for details and a mathematical programming formulation of the problem. For the flow shop scheduling problem, we use a permutation of jobs as the solution representation. For example, suppose there are six jobs and four machines. Then π = (2, 4, 1, 6, 3, 5) is a permutation of the six jobs, and this solution represents a schedule in which the jobs are processed on each machine in the sequence 2, 4, 1, 6, 3, 5.
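As an illustration of how blocking propagates delays, the makespan of a given permutation under the pure RSb constraint (RSb between every pair of consecutive machines) can be computed with a standard departure-time recursion. The following is a minimal sketch of that special case only, not the paper's mixed-constraint objective:

```python
def rsb_makespan(perm, p):
    """Makespan of a permutation flow shop in which every pair of
    consecutive machines is subject to the RSb blocking constraint
    (no intermediate buffers).

    perm -- job indices in processing order
    p    -- p[j][k] is the processing time of job j on machine k
    """
    m = len(p[perm[0]])
    depart = [0] * (m + 1)  # depart[k]: time the previous job left machine k
    for j in perm:
        new = [0] * (m + 1)
        t = depart[0]       # job j enters machine 0 once it is released
        for k in range(m):
            # finish on machine k, then wait until machine k+1 is free
            # (depart[m] stays 0: no blocking after the last machine)
            t = max(t + p[j][k], depart[k + 1])
            new[k] = t
        depart = new
    return depart[m - 1]
```

For example, with processing times p = [[1, 5], [3, 1], [2, 2]] and sequence (0, 1, 2), job 1 is blocked on the first machine until time 6 and the makespan is 10, whereas the same instance without blocking finishes at 9.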

#### 3. Bee Colony Algorithm

To describe the bee colony algorithm, we use the same approach as [12]. The bee colony (BC) algorithm is a stochastic P-metaheuristic that belongs to the class of swarm intelligence algorithms and is among the most widely studied and applied to real-world problems. The basic artificial bee colony algorithm [12] classifies bees into three categories: employed bees, onlooker bees, and scout bees; the onlookers and the scouts are also called unemployed bees. Employed bees are associated with particular food sources, which they exploit. Onlooker bees wait for the waggle dances performed by the employed bees in order to choose food sources, and scout bees carry out a random search in the environment surrounding the nest for new food sources. In the BC algorithm, each solution of the problem under consideration, represented by an n-dimensional real-valued vector, is called a food source, and the nectar amount of the food source is evaluated by the fitness value. The following steps show the template of the bee algorithm, inspired by the food foraging behavior of a bee colony:

1. Initialization: compute an initial population.
2. Employed bee stage: find food sources by exploring the environment randomly.
3. Onlooker bee stage: select the food sources after sharing information with the employed bees.
4. Scout bee stage: select one of the solutions; then replace it with a new randomly generated solution.
5. Remember the best food source found so far.
6. If a termination criterion has not been satisfied, go to step 2; otherwise, stop the procedure and report the best food source found so far.

Because the basic BC algorithm was originally designed for continuous function optimization, a new variant of the BC algorithm is presented in this section to make it applicable to mixed blocking flow shop problems with the makespan criterion.
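The template above can be sketched as a generic loop; the four phase procedures are placeholders to be filled in with the methods of Sections 3.1-3.4, and all names here are illustrative:

```python
def bee_colony(init_population, employed, onlooker, scout, fitness, max_iters):
    """Generic skeleton of the BC template. The phase procedures are
    passed in as functions (stand-ins for Sections 3.1-3.4)."""
    pop = init_population()
    best = min(pop, key=fitness)
    for _ in range(max_iters):      # termination criterion: iteration budget
        pop = employed(pop)         # explore neighborhoods of current members
        pop = onlooker(pop)         # intensify around promising members
        pop = scout(pop)            # recombine/replace to escape stagnation
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand             # remember the best food source so far
    return best
```

Minimizing fitness corresponds to the makespan criterion; the skeleton only fixes the order of the phases and the bookkeeping of the incumbent.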

##### 3.1. Initialization

In the BC algorithm, the initial population is often generated randomly. To guarantee an initial population of certain quality and diversity, we generate one solution by using the NEH heuristic of Nawaz et al. [13]. Then, to maintain the diversity of the initial population, the other solutions are randomly generated over the entire search space. In Section 4, we show the impact of combining the NEH solution with random solutions instead of using only random solutions. The framework of the NEH heuristic is as follows:

1. Compute the total processing time on all machines for each job. Then obtain a sequence σ by sorting the jobs in nonincreasing order of their total processing time.
2. Take the first two jobs of σ, evaluate the two possible subsequences of these jobs, and select the better one as the current sequence S. Set i = 3.
3. Take the i-th job of σ and insert it into the i possible positions of the current sequence S to obtain i subsequences. Select the subsequence with the minimum makespan as the current sequence for the next iteration and set i = i + 1. Repeat this step until all jobs are sequenced and the final sequence is found.
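The NEH steps above can be sketched compactly. For illustration the sketch evaluates sequences with the classical (unlimited-buffer) flow shop makespan; the paper would evaluate them with the mixed blocking objective instead:

```python
def makespan(seq, p):
    """Classical permutation flow shop makespan (unlimited buffers)."""
    m = len(p[0])
    c = [0] * m  # c[k]: completion time of the last scheduled job on machine k
    for j in seq:
        for k in range(m):
            # start after the machine is free and the job leaves machine k-1
            c[k] = max(c[k], c[k - 1] if k else 0) + p[j][k]
    return c[-1]

def neh(p):
    """NEH heuristic of Nawaz, Enscore, and Ham."""
    n = len(p)
    # step 1: sort jobs by nonincreasing total processing time
    order = sorted(range(n), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        # steps 2-3: try every insertion position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq
```

For a two-job instance p = [[1, 4], [3, 2]], both insertion orders are evaluated and the sequence (0, 1) with makespan 7 is retained.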

##### 3.2. Employed Bee Phase

According to the basic BC algorithm, the employed bees find new solutions in the neighborhood of their current positions. Let P = {π_1, ..., π_{PS}} be the population, where PS is the population size. For each member π_i, we first apply a perturbation method to π_i and generate a new solution π'_i. Then the path relinking procedure is executed on π_i and π'_i and produces a set of candidate solutions. Finally, a local search (described below) is applied to the best candidate solution with a small probability p_l, and the current solution is replaced by the best candidate solution. The pseudocode of the employed bee phase is as follows:

1. Let P be the current population and set i = 1.
2. Apply the perturbation method to π_i to obtain a new solution π'_i.
3. Let C be the set of candidate solutions obtained by executing the path relinking method on π_i and π'_i.
4. Use the local search method to improve the best candidate solution with probability p_l.
5. Replace π_i with the best candidate solution.
6. Let i = i + 1. If i > PS, then stop. Otherwise, go to step 2.

To complete the discussion, we need to describe the procedures for perturbation, path relinking, and local search. The perturbation procedure is based on the insertion operator. In the insertion operator Insert(i, j), two positions i and j are selected, the job at position i is inserted into position j, and all jobs between positions i and j are shifted accordingly [14]. In the perturbation method, a random position i is first selected. Then, for all j ≠ i, the operator Insert(i, j) is applied to the current solution, and the best obtained solution is chosen as the perturbed solution. Path relinking is applied in order to explore the search space between π_i and π'_i. Path relinking is based on the interchange operator. Consider two distinct positions i and j. The operator Swap(i, j) exchanges the job at position i with the job at position j. In what follows, the path relinking procedure is explained through an example. Let π_1 = (1, 2, 3, 4, 5) and π_2 = (3, 1, 2, 5, 4). First, job 3, which is in position 1 of π_2, is chosen. Since job 3 is in position 3 of π_1, we apply the operator Swap(1, 3) to π_1 and generate x_1 = (3, 2, 1, 4, 5). In the next step, job 1, which is in position 2 of π_2, is selected. Since job 1 is in position 3 of x_1, we apply Swap(2, 3) to x_1 to obtain x_2 = (3, 1, 2, 4, 5). Similarly, the next generated solution is x_3 = (3, 1, 2, 5, 4), obtained by executing Swap(4, 5) on x_2. The set of candidate solutions obtained by the path relinking method is {x_1, x_2, x_3}. As mentioned earlier, the local search method is applied to the best candidate solution with probability p_l. The local search method is also based on the exchange operator. In the local search method, a position i is first randomly selected. Then, for all j ≠ i, the operator Swap(i, j) is applied to the current solution, and the best obtained solution is kept. The local search method is repeated until no improvement is observed for a certain number of iterations (MaxCounter).
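The insertion and interchange operators, the perturbation, and the path relinking walk can be sketched as follows; the objective function passed to the perturbation is an assumed stand-in for the makespan:

```python
import random

def insert_op(perm, i, j):
    """Insertion operator: move the job at position i to position j,
    shifting the jobs in between."""
    q = perm[:]
    q.insert(j, q.pop(i))
    return q

def swap_op(perm, i, j):
    """Interchange operator: exchange the jobs at positions i and j."""
    q = perm[:]
    q[i], q[j] = q[j], q[i]
    return q

def perturb(perm, objective):
    """Perturbation: pick a random position i, try every insertion of
    that job, and keep the best resulting permutation."""
    i = random.randrange(len(perm))
    return min((insert_op(perm, i, j) for j in range(len(perm)) if j != i),
               key=objective)

def path_relinking(start, guide):
    """Walk from start toward guide, one interchange per step, collecting
    every intermediate permutation as a candidate solution."""
    cur, candidates = start[:], []
    for i in range(len(cur)):
        if cur[i] != guide[i]:
            cur = swap_op(cur, i, cur.index(guide[i]))
            candidates.append(cur)
    return candidates
```

For example, relinking (1, 2, 3, 4, 5) toward (3, 1, 2, 5, 4) produces the candidates (3, 2, 1, 4, 5), (3, 1, 2, 4, 5), and (3, 1, 2, 5, 4), each one swap away from its predecessor.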

##### 3.3. Onlooker Bee Phase

In the onlooker bee phase, we try to improve the quality of the members of the population. For this purpose, we repeat the following procedure a certain number of times (MaxIter). First, a member of the population is randomly chosen. Then the perturbation method (as described in Section 3.2) is applied to the selected member. Finally, an improvement method (described below) is applied to the perturbed solution, and if the resulting solution is better than the selected member, the selected member is replaced by the resulting solution. The pseudocode of the onlooker bee phase is as follows:

1. Set t = 1.
2. Let π be a random member of the population.
3. Let π' be the solution obtained by applying the perturbation method to π.
4. Apply the improvement method to π' to generate a new solution π''.
5. If π'' is better than π, then set π = π''.
6. Let t = t + 1. If t > MaxIter, then stop. Otherwise, go to step 2.

Two local search methods, LS1 and LS2, are used in the improvement method. LS1 is exactly the local search method described in Section 3.2 and is based on the exchange operator. LS2 is obtained by using the insertion operator instead of the exchange operator in LS1. In LS2, a job is removed from the permutation and inserted into the other positions; the permutation with the best result of these insertions is retained for the next iteration. In the improvement method, LS1 is first applied to the given solution π, and a new solution π' is generated. If π' is better than π, we set π = π'. Then LS2 is applied to π, and another solution π'' is produced. Similarly, if π'' is better than π, we set π = π''. If the current solution is not improved by the application of LS1 and LS2, then the perturbation method (described in Section 3.2) is applied to π. This process is repeated until no improvement is observed for a certain number (MaxCnt) of iterations. The pseudocode of the improvement method is as follows:

1. Set t = 1 and let π be the current solution.
2. Let π' be the solution obtained after an application of LS1 to π. If π' is better than π, then set π = π'.
3. Let π'' be the solution obtained after an application of LS2 to π. If π'' is better than π, then set π = π''.
4. If π was not improved by the application of LS1 and LS2, then apply the perturbation method to π and replace π with the perturbed solution.
5. Let t = t + 1. If t > MaxCnt, then stop. Otherwise, go to step 2.
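A sketch of the improvement method, alternating an exchange-based LS1 and an insertion-based LS2 and perturbing on stagnation; the counter limits and the toy objective used below are assumed values, not the paper's tuned parameters:

```python
import random

def _swap(p, i, j):
    q = p[:]; q[i], q[j] = q[j], q[i]; return q

def _insert(p, i, j):
    q = p[:]; q.insert(j, q.pop(i)); return q

def _best_move(perm, i, objective, move):
    """Best permutation from applying `move` between position i and
    every other position j."""
    return min((move(perm, i, j) for j in range(len(perm)) if j != i),
               key=objective)

def local_search(perm, objective, move, max_counter=10):
    """LS1 with move=_swap (exchange), LS2 with move=_insert (insertion).
    Stops after max_counter consecutive non-improving iterations."""
    best, counter = perm[:], 0
    while counter < max_counter:
        cand = _best_move(best, random.randrange(len(best)), objective, move)
        if objective(cand) < objective(best):
            best, counter = cand, 0
        else:
            counter += 1
    return best

def improve(perm, objective, max_cnt=10):
    """Improvement method: apply LS1 then LS2; if neither helps, perturb
    the current solution and continue until max_cnt stagnant rounds."""
    cur, best, counter = perm[:], perm[:], 0
    while counter < max_cnt:
        improved = False
        for move in (_swap, _insert):            # LS1, then LS2
            cand = local_search(cur, objective, move)
            if objective(cand) < objective(cur):
                cur, improved = cand, True
        if objective(cur) < objective(best):
            best = cur[:]                        # remember the incumbent
        if improved:
            counter = 0
        else:                                    # stagnation: perturb cur
            cur = _best_move(cur, random.randrange(len(cur)), objective, _insert)
            counter += 1
    return best
```

Note that the perturbation may accept a worse current solution, so the incumbent is tracked separately and returned at the end.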

##### 3.4. Scout Bee Phase

In this phase, the algorithm tries to generate new solutions by executing the following procedure a certain number of times (MaxLoop). First, two distinct solutions π_a and π_b are randomly selected from the population. Then the better one is determined. Without loss of generality, let π_a be the better one. In the next step, a combination method is applied to π_a and the best solution found so far, π*, to generate a new solution. Finally, π_a is replaced by the newly generated solution. The pseudocode of the scout bee phase is as follows:

1. Set t = 1.
2. Let π_a and π_b be two members of the population chosen randomly.
3. If π_b is better than π_a, then exchange the roles of π_a and π_b.
4. Let π' be the solution obtained after an application of the combination method to π_a and π* (π* is the best solution found so far).
5. If π' is better than π_a, then set π_a = π'; otherwise, keep π_a.
6. Let t = t + 1. If t > MaxLoop, then stop. Otherwise, go to step 2.

In the following, we describe the combination method. The ideas of combination methods are usually taken from the crossover operators of evolutionary algorithms. In this paper, however, the combination method is based on Taguchi orthogonal arrays, so it is required to briefly describe them; we follow the same approach as [15]. Taguchi orthogonal arrays are concerned with n factors, where each factor has l levels. The purpose is to determine the best setting of each factor's level. Clearly, examining all possible combinations often requires a lot of computational effort and is inefficient. Therefore, a small but representative sample of all combinations is considered. Let r be the number of elements of this sample. The sample can be represented by an array with r rows and n columns, denoted L_r(l^n). The members of the sample must satisfy three conditions: every level must occur the same number of times in every column; for any two columns, every combination of levels must occur the same number of times; and the members of the sample must be uniformly distributed over the whole space of all possible combinations. In this paper, the parameter l is always equal to 2 and can be omitted from the notation; in other words, the simpler notation L_r can be used instead of L_r(2^n). To use L_r in the combination method, we need to explain how the best level of each factor is determined. Let S_{il} = Σ_j z_{jil} f_j, where f_j is the objective function value of the j-th member of the sample, z_{jil} = 1 if the level of factor i in row j is l, and z_{jil} = 0 otherwise. Level l* is the best level of factor i if S_{il*} = min_l S_{il}.
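The balance conditions and the best-level rule can be made concrete for the two-level array L_4(2^3); the makespan values f_j in the usage below are made up for illustration:

```python
# The two-level orthogonal array L4(2^3): r = 4 rows, n = 3 factors.
L4 = [(0, 0, 0),
      (0, 1, 1),
      (1, 0, 1),
      (1, 1, 0)]

def is_orthogonal(oa, levels=2):
    """Check the balance conditions: every level occurs equally often in
    each column, and every pair of levels occurs equally often in each
    pair of columns."""
    rows, cols = len(oa), len(oa[0])
    for c in range(cols):
        if len({sum(1 for r in oa if r[c] == l) for l in range(levels)}) != 1:
            return False
    for c1 in range(cols):
        for c2 in range(c1 + 1, cols):
            pairs = [(r[c1], r[c2]) for r in oa]
            if any(pairs.count(pr) != rows // levels ** 2
                   for pr in set(pairs)):
                return False
    return True

def best_levels(oa, f):
    """For each factor i, pick the level l minimizing
    S_il = sum of f_j over the rows j where factor i takes level l."""
    best = []
    for i in range(len(oa[0])):
        s = {}
        for j, row in enumerate(oa):
            s[row[i]] = s.get(row[i], 0) + f[j]
        best.append(min(s, key=s.get))
    return best
```

For example, with makespans f = (3, 7, 4, 5) for the four row-generated solutions, the best levels are (1, 0, 0): for the first factor, S_{1,0} = 3 + 7 = 10 exceeds S_{1,1} = 4 + 5 = 9, so level 1 wins, and similarly for the other two columns.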

In the following, an example shows how Taguchi orthogonal arrays are used to combine members of the population. Consider the orthogonal array L_4(2^3), with rows (0, 0, 0), (0, 1, 1), (1, 0, 1), and (1, 1, 0); here r = 4, n = 3, and l = 2. In the combination method, 5 solutions are generated and the best one is returned as the result: 4 solutions x_1, ..., x_4 are generated by using the rows of L_4, and one solution x_5 is generated by using the best level of each factor. First, we explain how solutions are generated from the rows of L_4, where level 1 means that a piece is taken from the first parent and level 0 means that it is taken from the second parent. Assume that we want to combine two solutions π_1 = (6, 4, 1, 3, 2, 5) and π_2 = (3, 1, 6, 2, 5, 4) by using the third row, (1, 0, 1), of L_4. In the first step, cut points c_1 = 2 and c_2 = 4 are randomly chosen, and π_1 and π_2 are cut into three pieces; these pieces are characterized by grey and white colors. Since the first component of the third row is 1, the first piece of the combined solution is taken from the first piece of π_1; therefore, the first piece of the combined solution is (6, 4). The second piece would be (6, 2), taken from π_2, because the second component of the third row is 0. But here, 6 is repeated in the first piece of the combined solution as well; therefore, only 2 appears in the combined solution, and the entry corresponding to 6 is left blank. Similarly, as the third component of the third row is 1, the third piece, (2, 5), is taken from π_1, and the entry corresponding to the repeated job 2 is left blank. The outcome of this procedure is (6, 4, _, 2, _, 5). Note that the missing jobs of the combined solution are 1 and 3. Now, we consider the first parent π_1. The first blank position receives 1, because 1 appears first in π_1; finally, the second blank position receives 3, and the resulting solution is x_3 = (6, 4, 1, 2, 3, 5). This procedure is depicted in Figure 1. Now, we explain how the best level of each factor is used to construct a solution. As noted before, to compute the best level of each factor, we have to compute S_{il} for i = 1, 2, 3 and l = 0, 1. Consider Figure 2. S_{1,0} is computed by using the first column of L_4: since level 0 appears in the first and second rows of the first column, we have S_{1,0} = f_1 + f_2. Similarly, S_{1,1} is also computed by using the first column: since level 1 appears in the third and fourth rows, we have S_{1,1} = f_3 + f_4. In this example S_{1,1} < S_{1,0}; therefore, the best level of the first factor is 1. By following the same procedure, the best levels of the second and third factors are 1 and 0, respectively. Now, since the best level of the first factor is 1, the first piece of the new solution x_5 is taken from π_1. Similarly, the second and third pieces of x_5 are taken from π_1 and π_2, respectively, with the same repair of repeated jobs. Finally, the best of x_1, ..., x_5 is returned as the output of the combination method.
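The piecewise combination with repair described in this example can be sketched as follows; the cut points and parent permutations are arbitrary illustrations:

```python
def combine(p1, p2, cuts, row):
    """Piecewise combination of two permutations driven by one row of a
    two-level orthogonal array: level 1 takes the piece from p1, level 0
    from p2; repeated jobs are blanked, and the missing jobs are filled
    into the blanks in the order they appear in p1."""
    bounds = [0] + list(cuts) + [len(p1)]
    out, seen = [], set()
    for piece, level in enumerate(row):
        src = p1 if level == 1 else p2
        for pos in range(bounds[piece], bounds[piece + 1]):
            job = src[pos]
            if job in seen:
                out.append(None)        # duplicate: leave the slot blank
            else:
                out.append(job)
                seen.add(job)
    fill = iter(j for j in p1 if j not in seen)   # missing jobs, p1 order
    return [job if job is not None else next(fill) for job in out]
```

With π_1 = (6, 4, 1, 3, 2, 5), π_2 = (3, 1, 6, 2, 5, 4), cut points (2, 4), and the row (1, 0, 1), the partial result (6, 4, _, 2, _, 5) is repaired to (6, 4, 1, 2, 3, 5).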