Abstract

Production scheduling helps avoid stock accumulation, reduce losses, decrease or even eliminate idle machines, and make better use of machines so that customer orders are answered on time and requested materials are supplied when needed. In flexible job-shop scheduling production systems, time and costs can be reduced by assigning and sequencing operations on the available machines; this is an NP-hard problem. The scheduling objective is to minimize the maximal completion time of all operations, known as the makespan. Different methods and algorithms have been proposed for solving this problem, and a well-scheduled production system has a significant influence on improving effectiveness and attaining organizational goals. In this paper, a new algorithm for flexible job-shop scheduling problems (FJSSP-GSPN) based on the gravitational search algorithm (GSA) is proposed. In the proposed method, the flexible job-shop scheduling problem is first modeled as a colored Petri net with CPN Tools, and then the schedule is generated by the GSA. The experimental results show that the proposed method performs well in comparison with other algorithms.

1. Introduction

A classic job-shop scheduling problem consists of 𝑁 independent jobs to be processed on 𝑀 machines. Each job includes one or more operations that must be executed sequentially, and each operation requires a specific processing time. The flexible job-shop scheduling problem is a generalization of the classic job-shop problem in which each operation of a job can be processed on any one of a set of eligible machines.

The purpose of scheduling is to determine the sequence of operations on each machine such that the precedence order within each job is respected and the total completion time of the operations (the makespan) is minimized.
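To make this setting concrete, the following C# sketch (illustrative types and data, not taken from the paper's implementation) models a small flexible job-shop instance with jobs, sequential operations, and per-machine processing times, and computes the makespan of one fixed machine assignment under a simple greedy, job-by-job schedule.

using System;
using System.Collections.Generic;

// One operation of a job: processing time on each machine that can perform it.
class Operation
{
    public Dictionary<int, int> TimeOnMachine = new Dictionary<int, int>();
}

// A job is an ordered list of operations that must be executed sequentially.
class Job
{
    public List<Operation> Operations = new List<Operation>();
}

class FjspInstance
{
    public int MachineCount;
    public List<Job> Jobs = new List<Job>();

    // Makespan of a schedule in which operation h of job j runs on assignedMachine[j][h],
    // scheduled greedily job by job: an operation starts as soon as both its job
    // predecessor has finished and its machine is free (a simplification of the model).
    public int Makespan(int[][] assignedMachine)
    {
        var machineFree = new int[MachineCount];
        int makespan = 0;
        for (int j = 0; j < Jobs.Count; j++)
        {
            int jobReady = 0;
            for (int h = 0; h < Jobs[j].Operations.Count; h++)
            {
                int m = assignedMachine[j][h];
                int p = Jobs[j].Operations[h].TimeOnMachine[m];
                int start = Math.Max(jobReady, machineFree[m]);
                jobReady = start + p;
                machineFree[m] = jobReady;
                makespan = Math.Max(makespan, jobReady);
            }
        }
        return makespan;
    }
}

class Demo
{
    static void Main()
    {
        // Tiny instance: 2 jobs, 2 machines; the second operation of job 0 is only
        // allowed on machine 1 (partial flexibility).
        var inst = new FjspInstance { MachineCount = 2 };
        var j0 = new Job();
        j0.Operations.Add(new Operation { TimeOnMachine = { [0] = 3, [1] = 5 } });
        j0.Operations.Add(new Operation { TimeOnMachine = { [1] = 2 } });
        var j1 = new Job();
        j1.Operations.Add(new Operation { TimeOnMachine = { [0] = 4, [1] = 4 } });
        inst.Jobs.Add(j0);
        inst.Jobs.Add(j1);

        // Assignment: job 0 -> machines 0 then 1, job 1 -> machine 0.
        int[][] assignment = { new[] { 0, 1 }, new[] { 0 } };
        Console.WriteLine("Makespan of this assignment: " + inst.Makespan(assignment));
    }
}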

In this paper, the FJSSP-GSPN algorithm, based on gravitational local search and timed Petri nets, is proposed for optimizing the schedule length of the FJSSP. The proposed approach comprises two stages: in the first stage, the system is modeled with a timed Petri net and the results are simulated with the Petri net method; in the second stage, a new algorithm based on the gravitational local search algorithm, called FJSSP-GSPN, is applied.

In the simulation stage, in order to evaluate the performance of the system, a job with several suboperations is simulated with CPN Tools. In the second stage, the gravitational search algorithm and the proposed solution are used to find suitable times for executing the operations of a job, which in effect means assigning the proper machine to each operation.

The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 analyzes the problem, and Section 4 presents its disjunctive graph model. Section 5 introduces colored Petri nets, and Section 6 describes the simulation phase with CPN Tools. Section 7 explains the gravitational search algorithm, and Section 8 explains the proposed solution based on the gravitational search algorithm. Section 9 reports the experimental results, and Section 10 concludes the paper.

2. Related Work

The flexible job-shop scheduling problem is one of the most important combinatorial optimization problems and is NP-hard.

The job-shop scheduling problem (JSSP) has been studied for more than 50 years in both academic and industrial environments, and recently much research has also addressed the flexible job-shop scheduling problem (FJSSP).

Brucker and Schlie [1], who first considered job-shop scheduling with multipurpose machines, offered a polynomial algorithm for solving the flexible job-shop problem with two jobs. For real-world problems with more than two jobs, two approaches have been used: the hierarchical approach and the integrated approach.

In the hierarchical approach, assigning operations to machines and determining the operation sequences are performed separately; in other words, assignment and sequencing are treated as independent subproblems, the idea being that decomposing the main problem into an assignment problem and a sequencing problem reduces its complexity. In the integrated approach, assignment and sequencing are solved together. The hierarchical approach is used frequently precisely because of this decomposition. Brandimarte [2] was the first to apply it to the FJSSP: he solved the assignment (routing) subproblem with dispatching rules and then focused on solving the resulting scheduling problem with a tabu search algorithm.

Jain and Meeran [3] provided a concise overview of JSPs over the last few decades and highlighted the main techniques. The JSP belongs to the most difficult class of combinatorial optimization problems. Garey et al. [4] demonstrated that JSPs are nondeterministic polynomial-time hard (NP-hard); hence an exact solution cannot be found in a reasonable computation time. The single-objective JSP has attracted wide research attention. Most studies of single-objective JSPs aim at a schedule that minimizes the time required to complete all jobs, that is, the makespan. Many approximate methods have been developed to overcome the limitations of exact enumeration techniques.

These approximate approaches include simulated annealing (SA) (Lourenço [5]), tabu search (Sun et al. [6]; Nowicki and Smutnicki [7]; Pezzella and Merelli [8]), and genetic algorithms (GA) (Bean [9]; Kobayashi et al. [10]; Gonçalves et al. [11]; Wang and Zheng [12]).

Fattahi et al. [13] considered hierarchical and integrated approaches to scheduling flexible job-shop production systems. Based on these approaches and on two heuristics, simulated annealing (SA) and tabu search (TS), they offered six combined algorithms and compared them.

They concluded that the combined algorithms based on SA and TS together with the hierarchical approach provide better solutions than the other algorithms. In their article they also offered a new technique for representing the solution structure of flexible job-shop scheduling problems.

I. C. Choi and D. S. Choi [14] presented a local search algorithm for job-shop scheduling problems in which every operation may have an alternative operation. In this setting, a machine and a processing time are assigned to every operation, and for some operations alternative machines and processing times are also available. Moreover, a setup time that depends on the preceding operation is considered for each operation.

Xia and Wu [15] presented a hybrid optimization approach for multiobjective flexible job-shop scheduling problems. In their study, a combination of SA and particle swarm optimization (PSO) is used: the PSO algorithm handles the assignment problem, that is, it determines which machine each operation uses, while the value of the objective function is calculated by the SA algorithm, which is executed once for each particle of the PSO algorithm.

Mastrolilli and Gambardella [16] proposed a tabu search procedure with effective neighborhood functions for the flexible job-shop problem. Many authors have proposed first assigning operations to machines and then determining the sequence of operations on each machine. Pezzella et al. [17] and Gao et al. [18] proposed hybrid genetic and variable neighborhood descent algorithms for this problem. Only a few papers consider parallel algorithms for the FJSP. Yazdani et al. [19] propose a parallel variable neighborhood search (VNS) algorithm for the FJSP based on independent VNS runs. Defersha and Chen [20] describe a coarse-grained parallel genetic algorithm for the FJSP based on the island model of parallelization, focusing on the genetic operators used and on the scalability of the parallel algorithm. Both papers concentrate on the parallelization side of the methodology and do not exploit any special properties of the FJSP.

In this study, we first consider the problem with the primary process only, ignoring substitute processes, and use the resulting completion time as an upper bound. Then a local search procedure looks for a better solution by using dispatching rules; different dispatching rules are considered in this local search procedure.

3. Flexible Job-Shop Scheduling Problem Systems’ Analysis

In this section, a mathematical model (a mixed integer linear program) is presented for better understanding of the problem and for solving small instances optimally.

A flexible job-shop scheduling production system contains 𝑁 jobs to be processed on 𝑀 machines. Each job includes several operations, and each operation can be performed by any machine from a given set of eligible machines. As flexible job-shop scheduling systems are of considerable importance in production centers, they have attracted the attention of production unit managers.

Furthermore, the specific mathematical characteristics of this problem, which have led to effective solution strategies, are of interest to researchers in this area of mathematics. The simplest form of the flexible job-shop scheduling production system is the classic job-shop scheduling production system, which schedules 𝑛 jobs 𝐽1, 𝐽2, …, 𝐽𝑛 on a set of π‘š machines 𝑀1, 𝑀2, …, π‘€π‘š.

Each job 𝑗 has β„Žπ‘— operations that must be executed serially. Subscript 𝑗 indicates the job, subscript β„Ž the operation, and subscript 𝑖 the machine. The purpose of scheduling is to determine the sequence of operations on each machine such that a predefined objective function, such as the makespan, is optimized.

Each job consists of a sequence of operations 𝑂𝑗,β„Ž, β„Ž = 1, …, β„Žπ‘—, where 𝑂𝑗,β„Ž denotes the β„Žth operation of the 𝑗th job and β„Žπ‘— the number of operations required by the 𝑗th job. The set of machines is denoted by 𝑀 = {𝑀1, 𝑀2, …, π‘€π‘š}.

To each operation 𝑂𝑗,β„Ž (the β„Žth operation of job 𝑗), a set of machines capable of performing that operation is assigned; this set is denoted by 𝑀𝑗,β„Ž βŠ‚ 𝑀. Each machine has its own processing time for an operation, denoted by 𝑃𝑖,𝑗,β„Ž.

In this study, the set 𝑀𝑗,β„Ž is described by the binary parameter π‘Žπ‘–,𝑗,β„Ž: if π‘Žπ‘–,𝑗,β„Ž = 1, machine 𝑖 is capable of performing operation 𝑂𝑗,β„Ž. For the assignment, we use the binary variable 𝑦𝑖,𝑗,β„Ž, whose value is determined by the model: if 𝑦𝑖,𝑗,β„Ž = 1, machine 𝑖 is selected among the eligible machines to perform operation 𝑂𝑗,β„Ž.

Eventually, the values of the variables 𝑦𝑖,𝑗,β„Ž give the solution of the assignment problem, that is, which of its eligible machines performs each operation.

For solving the sequencing problem, we consider a start time π‘‘π‘˜,𝑙 and a finish time π‘“π‘‘π‘˜,𝑙 for each operation; the values of these variables are determined by the model. Moreover, a dummy job whose number of operations equals the number of machines is considered as the initial job.

In this model, we use the binary variable $x_{i,j,h,k,l}$. If this variable equals 1, operation $O_{k,l}$ is implemented on machine $i$ immediately after operation $O_{j,h}$. Also, $su_{i,f,k}$ denotes the setup time of job $k$ on machine $i$ when the preceding job belongs to family $f$. The parameters are defined as

$$F_{f,k} = \begin{cases} 1 & \text{if } k \in f \\ 0 & \text{otherwise,} \end{cases} \qquad a_{i,j,h} = \begin{cases} 1 & \text{if } O_{j,h} \text{ can be performed on machine } i \\ 0 & \text{otherwise.} \end{cases} \tag{3.1}$$

The decision variables of this model are

$$y_{i,j,h} = \begin{cases} 1 & \text{if machine } i \text{ is selected for operation } O_{j,h} \\ 0 & \text{otherwise,} \end{cases} \qquad x_{i,j,h,k,l} = \begin{cases} 1 & \text{if } O_{j,h} \text{ immediately precedes } O_{k,l} \text{ on machine } i \\ 0 & \text{otherwise.} \end{cases} \tag{3.2}$$

The remaining notation is: $C_{\max}$: the makespan (maximum completion time); $M$: a large positive number (big-M constant); $t_{k,l}$: start time of operation $O_{k,l}$; $ft_{k,l}$: finish time of operation $O_{k,l}$; $P_{i,k,l}$: processing time of operation $O_{k,l}$ on machine $i$; and $s_{i,j,k}$: setup time of job $k$ on machine $i$ if the previous job is job $j$.

Given the parameters $su_{i,f,k}$, $P_{i,j,h}$, $a_{i,j,h}$, $fa$, $m$, and $n$, the FJSP is modeled as follows:

(1) $\min C_{\max}$;
(2) $t_{k,l} + y_{i,k,l}\,P_{i,k,l} \le ft_{k,l}$ for $i=1,\dots,m$, $k=1,\dots,n$, $l=1,\dots,h_k$;
(3) $s_{i,j,k} = \sum_{f=1}^{fa} F_{f,j}\,su_{i,f,k}$ for $i=1,\dots,m$, $k=1,\dots,n$, $j=1,\dots,n$;
(4) $ft_{k,l} \le t_{k,l+1}$ for $k=1,\dots,n$, $l=1,\dots,h_k-1$;
(5) $ft_{k,l} \le C_{\max}$ for $k=1,\dots,n$, $l=1,\dots,h_k$;
(6) $y_{i,k,l} \le a_{i,k,l}$ for $i=1,\dots,m$, $k=1,\dots,n$, $l=1,\dots,h_k$;
(7) $t_{j,h} + P_{i,j,h} + s_{i,j,k} \le t_{k,l} + (1 - x_{i,j,h,k,l})M$ for $j=0,\dots,n$, $k=1,\dots,n$, $h=1,\dots,h_j$, $l=1,\dots,h_k$, $i=1,\dots,m$;
(8) $ft_{j,h} + s_{i,j,k} \le t_{j,h+1} + (1 - x_{i,k,l,j,h+1})M$ for $j=1,\dots,n$, $k=0,\dots,n$, $h=1,\dots,h_j-1$, $l=1,\dots,h_k$, $i=1,\dots,m$;
(9) $\sum_{i=1}^{m} y_{i,j,h} = 1$ for $j=0,\dots,n$, $h=1,\dots,h_j$;
(10) $\sum_{j}\sum_{h} x_{i,j,h,k,l} = y_{i,k,l}$ for $i=1,\dots,m$, $k=1,\dots,n$, $l=1,\dots,h_k$;
(11) $\sum_{k}\sum_{l} x_{i,j,h,k,l} = y_{i,j,h}$ for $i=1,\dots,m$, $j=1,\dots,n$, $h=1,\dots,h_j$;
(12) $x_{i,j,h,k,l} \le y_{i,j,h}$ for $j=1,\dots,n$, $k=1,\dots,n$, $h=1,\dots,h_j$, $l=1,\dots,h_k$, $i=1,\dots,m$;
(13) $x_{i,j,h,k,l} \le y_{i,k,l}$ for $j=1,\dots,n$, $k=1,\dots,n$, $h=1,\dots,h_j$, $l=1,\dots,h_k$, $i=1,\dots,m$;
(14) $x_{i,k,l,k,l} = 0$ for $i=1,\dots,m$, $k=1,\dots,n$, $l=1,\dots,h_k$;
(15) $s_{i,k,k} = 0$ for $i=1,\dots,m$, $k=1,\dots,n$;
(16) $x_{i,j,h,k,l},\; y_{i,j,h} \in \{0,1\}$.

Constraint 1 is the objective function, which minimizes the maximum completion time. Constraint 2 relates the start and finish times of each operation. Constraint 3 introduces the setup time of each job. Constraints 4 and 8 enforce the precedence requirements. Constraint 5 defines $C_{\max}$. Constraint 6 ensures that the machine required for each operation is selected from the machines eligible for that operation. Constraint 7 guarantees that if operation $l$ of job $k$ is performed on machine $i$ immediately after operation $h$ of job $j$, then it starts only after operation $h$ of job $j$ has finished and after the setup time of machine $i$ has elapsed. Constraint 9 ensures that exactly one machine is selected among all eligible machines for each operation. Constraints 10 and 11 imply that on machine $i$ only one operation is performed immediately after, and immediately before, any other operation. Constraints 12 and 13 imply that each operation is performed, before or after other operations, only on a machine it is assigned to. Constraint 14 guarantees that no operation is processed more than once.
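As a small illustration of how the big-M term in constraint (7) switches the precedence requirement on and off, the following C# sketch (hypothetical numbers; M is simply a sufficiently large constant) evaluates the linearized inequality for a single pair of operations.

using System;

class BigMCheck
{
    const int M = 100000;   // a number larger than any feasible completion time

    // Constraint (7) in linear form: t_jh + P_ijh + s_ijk <= t_kl + (1 - x) * M.
    // When x = 1 (O_kl directly follows O_jh on machine i) the inequality is binding;
    // when x = 0 the big-M term makes it trivially satisfied.
    static bool SatisfiesPrecedence(int t_jh, int p_ijh, int s_ijk, int t_kl, int x)
    {
        return t_jh + p_ijh + s_ijk <= t_kl + (1 - x) * M;
    }

    static void Main()
    {
        // O_jh starts at 0, takes 5, setup 1; O_kl starts at 6 and follows it on the same machine.
        Console.WriteLine(SatisfiesPrecedence(0, 5, 1, 6, 1));   // True: 6 <= 6
        // Same data, but O_kl starts too early (at 4) while x = 1: constraint violated.
        Console.WriteLine(SatisfiesPrecedence(0, 5, 1, 4, 1));   // False
        // With x = 0 the constraint is inactive regardless of the start times.
        Console.WriteLine(SatisfiesPrecedence(0, 5, 1, 4, 0));   // True
    }
}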

4. The Disjunctive Graph Model

The JSSP can be described by a disjunctive graph $G = (V, C \cup D)$, where

(1) $V$ is the set of nodes representing the operations of the jobs, together with two special nodes, a source (0) and a sink, representing the beginning and the end of the schedule, respectively;
(2) $C$ is the set of conjunctive arcs representing the technological sequences of the operations;
(3) $D$ is the set of disjunctive arcs representing pairs of operations that must be performed on the same machine.

The processing time of each operation is the weight attached to the corresponding node.

Figure 2 shows the graph representation of the problem given in Table 1. The Gantt chart is a convenient way of visually representing a solution of the FJSSP; an example of a solution for the 3 Γ— 3 problem of Table 1 is given in Figure 1.

Job-shop scheduling can also be viewed as defining the ordering between all operations that must be processed on the same machine, that is, to fix precedences between these operations. In the disjunctive graph model, this is done by turning all undirected (disjunctive) arcs into directed ones. A selection is a set of directed arcs selected from disjunctive arcs. By definition, a selection is complete if all the disjunctions are selected. It is consistent if the resulting directed graph is acyclic.
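The consistency test mentioned above amounts to a cycle check on the directed graph obtained from the conjunctive arcs and the selected disjunctive arcs. The following C# sketch (hypothetical node numbering) performs this check with Kahn's topological sort.

using System;
using System.Collections.Generic;
using System.Linq;

class DisjunctiveGraph
{
    // A selection is consistent iff the directed graph formed by the conjunctive arcs plus
    // the selected (oriented) disjunctive arcs is acyclic; Kahn's topological sort checks this.
    static bool IsAcyclic(int nodeCount, List<(int u, int v)> arcs)
    {
        var inDegree = new int[nodeCount];
        var adj = new List<int>[nodeCount];
        for (int i = 0; i < nodeCount; i++) adj[i] = new List<int>();
        foreach (var (u, v) in arcs) { adj[u].Add(v); inDegree[v]++; }

        var queue = new Queue<int>(Enumerable.Range(0, nodeCount).Where(i => inDegree[i] == 0));
        int visited = 0;
        while (queue.Count > 0)
        {
            int node = queue.Dequeue();
            visited++;
            foreach (int next in adj[node])
                if (--inDegree[next] == 0) queue.Enqueue(next);
        }
        return visited == nodeCount;   // all nodes removed => no cycle
    }

    static void Main()
    {
        // Three operations: conjunctive arc 0 -> 1 plus a disjunction oriented as 1 -> 2: consistent.
        Console.WriteLine(IsAcyclic(3, new List<(int, int)> { (0, 1), (1, 2) }));           // True
        // Orienting a further disjunction as 2 -> 0 closes the cycle 0 -> 1 -> 2 -> 0: inconsistent.
        Console.WriteLine(IsAcyclic(3, new List<(int, int)> { (0, 1), (1, 2), (2, 0) }));   // False
    }
}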

5. Colored Petri Nets

The theory of Petri nets was born from the thesis defended by Carl Adam Petri at the Faculty of Mathematics and Physics of the Technical University of Darmstadt (Germany) in 1962, entitled β€œKommunikation mit Automaten” (β€œCommunication with Automata”). At the end of the 1960s and the beginning of the 1970s, researchers from MIT in the USA developed the foundations of the concept of Petri nets as we know it today.

According to Murata [21], Petri nets are a type of bipartite, directed, and weighted graph, which can capture the dynamics of a discrete-event system. The Petri nets provide a compact representation of a system because they do not represent explicitly all the space of states from the modeled system.

An ordinary Petri net is a 4-tuple PN = (P, T, Pre, Post), formed by a finite set of places 𝑃 of dimension 𝑛, a finite set of transitions 𝑇 of dimension π‘š, an input function Pre: 𝑃 × π‘‡ β†’ β„•, and an output function Post: 𝑃 × π‘‡ β†’ β„•. With each place a nonnegative integer number of tokens is associated.

Models with time restrictions can be developed via Petri nets, as shown for example, in [22]. Manufacturing, transportation, and telecommunication systems are some of the examples of application of that methodology.

A limitation of the ordinary Petri nets, also called place/transition Petri nets, is the fact that they demand a large quantity of places and transitions to represent complex systems (as most real systems are). As the net expands, the general view of the modeled system starts to get compromised, and the analysis of the modeled system becomes difficult to do.

Real systems often contain similar processes that occur in parallel or concurrently and differ from each other only in their inputs and outputs. In colored Petri nets, the number of places, transitions, and arcs is generally reduced considerably by adding data to the structure of the net.

According to Jensen [23] a more compact representation of a Petri net is obtained via the association of a data set (denominated token colors) to each token. The concept of color is analogous to the concept of type, common among the programming languages.

Colored Petri nets (CPNs) are a tool for modeling and validating discrete-event systems. CPNs are used to analyze and obtain significant and useful information about the structure and dynamic behavior of the modeled system. They mainly focus on synchronization, concurrency, and asynchronous events, and their graphic features support the visualization of the modeled system, including the priority relations and structural effects of synchronous and asynchronous events. The main difference between CPNs and ordinary Petri nets (PNs) is that in CPNs the tokens are distinguishable while in PNs they are not; the word β€œcolored” refers to this distinguishing attribute of the tokens. The relation between CPNs and ordinary PNs is analogous to that between a high-level programming language and assembly code (a low-level programming language). Theoretically, the two have the same computational power, but in practice CPNs have greater modeling power, just as high-level languages offer better structuring facilities.

A drawback of classic CPN simulation is that it is not adaptive, so the information gathered in previous runs cannot be exploited. If more than one transition is enabled, any of them may be chosen for the next firing. This characteristic of colored Petri nets means that although several events may occur concurrently and their incidences differ, the way they occur does not change over time, which contrasts with the real, dynamic world. A simulation is comparable to an execution of the main program. Our purpose is to use the simulated model to analyze the performance of the system, so that its problems and weak points can be identified. However, classic CPN tools by themselves can do nothing to improve the system or solve these problems, nor can they predict the next optimized situation.

According to Jensen [23], a colored Petri net is a 9-tuple

$$\mathrm{CPN} = (\Gamma, P, T, A, N, C, G, E, I), \tag{5.1}$$

where $\Gamma$ is a finite, nonempty set of types, called the color set; $P$ is a finite set of places of dimension $n$; $T$ is a finite set of transitions of dimension $m$; $A$ is a finite set of arcs such that $P \cap T = P \cap A = T \cap A = \emptyset$; $N$ is a node function, defined from $A$ into $P \times T \cup T \times P$; $C$ is a color function, defined on $P$ with values in $\Gamma$; $G$ is a guard function, defined on $T$; $E$ is an arc expression function, defined on $A$; and $I$ is an initialization function, defined on $P$.

The color set determines the types, operations, and functions that can be associated with the expressions used in the net (arc functions, guards, colors, etc.). The sets P, T, A, and N have a significance analogous to the vertex and precedence function sets defined for ordinary Petri nets. The color function maps every place of the net into a color set. Guard functions map the transitions of the net, moderating the flow of tokens according to Boolean expressions. Arc functions map each arc of the net, associating it with an expression compatible with the possible color sets. Finally, the initialization function maps the places of the net, associating them with the existing multisets.

A Petri net has four main components: (1) token (●), which indicates the presence of an item in the system; (2) place, a temporary holder of tokens; (3) arc (β†’), which shows the direction of token flow; (4) transition, which represents an operation in the system. A system can be modeled using only these four simple elements.
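As a minimal sketch of these four elements, the following C# fragment (hypothetical place and transition names, one token per arc) fires the transitions of a tiny net that models a machine picking up a waiting job and releasing it when finished.

using System;
using System.Collections.Generic;
using System.Linq;

// A minimal place/transition net: places hold integer token counts, and a transition
// fires when every input place holds at least one token, consuming one token from each
// input place and producing one token in each output place.
class PetriNet
{
    public Dictionary<string, int> Marking = new Dictionary<string, int>();            // place -> tokens
    public Dictionary<string, (string[] In, string[] Out)> Transitions =
        new Dictionary<string, (string[], string[])>();

    public bool IsEnabled(string t) => Transitions[t].In.All(p => Marking[p] > 0);

    public void Fire(string t)
    {
        if (!IsEnabled(t)) throw new InvalidOperationException(t + " is not enabled");
        foreach (var p in Transitions[t].In) Marking[p]--;    // consume
        foreach (var p in Transitions[t].Out) Marking[p]++;   // produce
    }
}

class PetriDemo
{
    static void Main()
    {
        // jobWaiting + machineIdle --(start)--> inProgress --(finish)--> done + machineIdle.
        var net = new PetriNet
        {
            Marking = { ["jobWaiting"] = 1, ["machineIdle"] = 1, ["inProgress"] = 0, ["done"] = 0 }
        };
        net.Transitions["start"] = (new[] { "jobWaiting", "machineIdle" }, new[] { "inProgress" });
        net.Transitions["finish"] = (new[] { "inProgress" }, new[] { "done", "machineIdle" });

        net.Fire("start");
        net.Fire("finish");
        Console.WriteLine("done = " + net.Marking["done"] + ", machineIdle = " + net.Marking["machineIdle"]);
    }
}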

5.1. CPN Tool

CPN Tools is an industrial-strength computer tool for constructing and analysing CPN models. Using CPN Tools, it is possible to investigate the behaviour of the modelled system using simulation, to verify properties by means of state space methods and model checking, and to conduct simulation-based performance analysis. User interaction with CPN Tools is based on direct manipulation of the graphical representation of the CPN model using interaction techniques, such as tool palettes and marking menus. The functionality of the tool can be extended with user-defined Standard ML functions [24].

6. The First Stage (Simulating Problem with CPN Tool)

Based on the problem analysis and the remarks above, the performance of the scheduling operation in a flexible job-shop production system is simulated with CPN Tools. The simulation covers one job, which may contain one or more operations, implemented on one machine; it can also be extended to more operations or even more jobs. First, suppose (Figure 3) that we have one job with one operation and a job shop with several machines (up to 5 machines).

Next to each place, there is an inscription which determines the set of token colors (data values) that the tokens on the place are allowed to have. The set of possible token colors is specified by means of a type (as known from programming languages) and is called the color set of the place. By convention the color set is written below the place. The places have the color sets INT and STRING; color sets are defined using the CPN ML keyword colset.

The color sets are defined as follows: colset STRING = string; colset INT = int;

The arc expressions are written in the CPN ML programming language and are built from typed variables, constants, operators, and functions. When all variables in an expression are bound to values (of the correct type), the expression can be evaluated. An arc expression evaluates to a multiset of token colors. As an example, consider the two arc expressions i and st on the three arcs connected to the transition. They contain the variables i and st, declared as follows: var i : INT; var st : STRING;

According to the problem analysis, this job could be implemented on any of the machines, but the ideal machine is the one that performs the job in the least time. Therefore, each machine sends its processing time for the related job to the comparison section, which determines the minimum among the input values and displays it together with the name of the corresponding machine.
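A minimal C# sketch of this comparison step (hypothetical machine names and times) simply selects the machine reporting the smallest processing time:

using System;
using System.Collections.Generic;
using System.Linq;

class Comparator
{
    static void Main()
    {
        // Processing time reported by each machine for the same operation (machine name -> time).
        var reported = new Dictionary<string, int>
        {
            ["M1"] = 7, ["M2"] = 4, ["M3"] = 9, ["M4"] = 4, ["M5"] = 6
        };

        // Pick the machine with the smallest reported time (ties broken by machine name order).
        var best = reported.OrderBy(kv => kv.Value).ThenBy(kv => kv.Key).First();
        Console.WriteLine($"Assign operation to {best.Key} (time {best.Value})");
    }
}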

Now suppose that we have one job with two operations; according to the problem definition, these operations must be implemented serially (Figure 4). Thus, only after the first operation has been completed is the second operation enabled, by sending a return signal.

As can be seen, this simulation has been implemented for one job with up to 5 operations on five machines (Figure 5), but it could equally be modeled for 𝐽 jobs with 𝑁 operations and 𝑀 machines.

7. Gravitational Search Algorithm

In GSA, optimization is done by using gravitational rules and movement rules in an artificial discrete-time system.

The system space is the same as the problem definition space. According to the law of gravity, the behavior and state of other masses are perceived through gravitational forces, so this force can be used as a tool for transferring information. The proposed approach can be applied to any optimization problem in which every candidate answer can be defined as a point in the space and its degree of similarity to other answers can be expressed as a distance; the mass of each point is determined by the objective function. In the first step, the system space is determined: it is a multidimensional coordinate system over the problem definition space.

Each point of the space is a candidate solution of the problem, and the search agents are a set of masses.

Each mass has three properties:

(a) state (position), (b) gravitational mass, (c) inertial mass.

These concepts correspond to the active gravitational mass and the inertial mass of physics.

In physics, the active gravitational mass is a measure of the strength of the gravitational field due to a body, and the inertial mass is a measure of the body's resistance to changing its motion. These two properties need not be equal; their values are determined by the fitness of each mass. The state of a mass is a point of the space, that is, one of the candidate solutions. After the system is formed, its laws are specified.

We suppose that only the law of gravity and the law of motion hold; their general forms are similar to the laws of nature and are defined as follows.

Gravity Rule: every mass in the artificial system attracts every other mass toward itself. The magnitude of this force is directly proportional to the gravitational mass of the attracting body and inversely proportional to the distance between the two masses.

Movement Rule: the current velocity of each mass equals a fraction of its previous velocity plus its change of velocity (acceleration); the acceleration equals the force applied to the mass divided by its inertial mass.

In the following, we explain the principles of this algorithm. Suppose a system with $s$ masses in which the state of the $i$th mass is defined by relation (7.1), where $x_i^d$ denotes the position of the $i$th mass in dimension $d$ and $n$ denotes the number of dimensions of the search space:

$$X_i = \left(x_i^1, \dots, x_i^d, \dots, x_i^n\right). \tag{7.1}$$

For minimization problems, $\mathrm{best}(t)$ and $\mathrm{worst}(t)$ are calculated as follows (for maximization problems it suffices to swap the two definitions):

$$\mathrm{best}(t) = \min_{j \in \{1,\dots,s\}} \mathrm{fit}_j(t), \qquad \mathrm{worst}(t) = \max_{j \in \{1,\dots,s\}} \mathrm{fit}_j(t). \tag{7.2}$$

The fitness of the current population is mapped to masses through relations (7.3) and (7.4), where $M_i(t)$ and $\mathrm{fit}_i(t)$ denote the mass and the fitness of the $i$th agent at time $t$, respectively:

$$q_i(t) = \frac{\mathrm{fit}_i(t) - \mathrm{worst}(t)}{\mathrm{best}(t) - \mathrm{worst}(t)}, \tag{7.3}$$

$$M_i(t) = \frac{q_i(t)}{\sum_{j=1}^{s} q_j(t)}. \tag{7.4}$$

In this system, the force $F_{ij}^d(t)$ applied on mass $i$ by mass $j$ at time $t$ along dimension $d$ is given by relation (7.5), where $G(t)$ is the gravitational constant, initialized at the start of the algorithm and decreased over time:

$$F_{ij}^d(t) = G(t)\,\frac{M_j(t) \times M_i(t)}{R_{ij}(t) + \varepsilon}\left(x_j^d(t) - x_i^d(t)\right). \tag{7.5}$$

$R_{ij}$ is the Euclidean distance between agent $i$ and agent $j$, and $\varepsilon$ is a small constant that prevents the denominator from becoming zero:

$$R_{ij} = \sqrt{\left(x_2 - x_1\right)^2 + \left(y_2 - y_1\right)^2 + \left(z_2 - z_1\right)^2 + \dots + \left(n_2 - n_1\right)^2}. \tag{7.6}$$

The force applied on mass $i$ along dimension $d$ at time $t$ equals the resultant of the forces exerted by the $k$ best masses of the population, where $K\mathrm{best}$ denotes the set of the $k$ masses with the best fitness. The value of $k$ is not constant but decreases linearly with time: at the beginning all masses attract one another, while as time passes the number of attracting members decreases. The sum of the forces applied on mass $i$ in dimension $d$ is given by relation (7.7), where $\mathrm{rand}_j$ is a random number uniformly distributed in the interval $[0,1]$:

$$F_i^d(t) = \sum_{j \in K\mathrm{best},\, j \ne i} \mathrm{rand}_j\, G(t)\,\frac{M_j(t) \times M_i(t)}{R_{ij}(t) + \varepsilon}\left(x_j^d(t) - x_i^d(t)\right). \tag{7.7}$$

According to Newton’s second law of motion, each mass acquires an acceleration along dimension $d$ that is proportional to the force applied on it:

$$a_i^d(t) = \frac{F_i^d(t)}{M_i(t)} \;\Longrightarrow\; a_i^d(t) = \sum_{j \in K\mathrm{best},\, j \ne i} \mathrm{rand}_j\, G(t)\,\frac{M_j(t)}{R_{ij}(t) + \varepsilon}\left(x_j^d(t) - x_i^d(t)\right). \tag{7.8}$$

The velocity of each mass equals a random fraction of its current velocity plus its acceleration, as given by relation (7.9); the random coefficient $\mathrm{rand}_i$, uniformly distributed in $[0,1]$, keeps the search stochastic:

$$v_i^d(t+1) = \mathrm{rand}_i \times v_i^d(t) + a_i^d(t). \tag{7.9}$$

Finally, each mass moves; clearly, the larger the velocity along a dimension, the larger the displacement in that dimension. The new position of the $i$th agent is given by relation (7.10):

$$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1). \tag{7.10}$$

At the start, each mass (agent) is placed at a random point of the space, that is, at a random candidate solution. At every iteration the masses are evaluated, and the change in the position of each mass is computed from relations (7.2)–(7.10); the system parameters $G$ and $M$ are updated at each stage.

The stop condition can be a fixed amount of elapsed time. The pseudocode of this algorithm is presented in Table 2.
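As a compact illustration of relations (7.2)–(7.10), the following C# sketch applies the GSA update rules to a generic continuous minimization problem (the sphere function); the population size, the decay of G(t), and the shrinking of Kbest are illustrative assumptions, and the scheduling-specific encoding of the next section is omitted.

using System;
using System.Linq;

class GsaSketch
{
    static readonly Random Rng = new Random(1);

    static double Fitness(double[] x) => x.Sum(v => v * v);   // objective to be minimized

    static void Main()
    {
        int s = 20, dim = 4, maxIter = 200;
        double g0 = 100.0, eps = 1e-9;

        var pos = new double[s][];
        var vel = new double[s][];
        for (int i = 0; i < s; i++)
        {
            pos[i] = Enumerable.Range(0, dim).Select(_ => Rng.NextDouble() * 10 - 5).ToArray();
            vel[i] = new double[dim];
        }

        for (int t = 0; t < maxIter; t++)
        {
            double G = g0 * Math.Exp(-8.0 * t / maxIter);     // gravity constant decreasing with time
            var fit = pos.Select(Fitness).ToArray();
            double best = fit.Min(), worst = fit.Max();       // relation (7.2) for minimization

            // Relations (7.3)-(7.4): map fitness to normalized masses (guard against best == worst).
            var q = fit.Select(f => worst - best < eps ? 1.0 : (f - worst) / (best - worst)).ToArray();
            double qSum = q.Sum();
            var mass = q.Select(v => v / qSum).ToArray();

            // Kbest: the heaviest masses, shrinking linearly from s agents down to 1.
            int k = Math.Max(1, s - (s - 1) * t / maxIter);
            var kBest = Enumerable.Range(0, s).OrderByDescending(i => mass[i]).Take(k).ToArray();

            // Relations (7.7)-(7.8): accumulate the acceleration acting on every agent.
            var acc = new double[s][];
            for (int i = 0; i < s; i++)
            {
                acc[i] = new double[dim];
                foreach (int j in kBest)
                {
                    if (j == i) continue;
                    double r = Math.Sqrt(pos[i].Zip(pos[j], (a, b) => (a - b) * (a - b)).Sum());
                    for (int d = 0; d < dim; d++)
                        acc[i][d] += Rng.NextDouble() * G * mass[j] / (r + eps) * (pos[j][d] - pos[i][d]);
                }
            }

            // Relations (7.9)-(7.10): update velocities and positions.
            for (int i = 0; i < s; i++)
                for (int d = 0; d < dim; d++)
                {
                    vel[i][d] = Rng.NextDouble() * vel[i][d] + acc[i][d];
                    pos[i][d] += vel[i][d];
                }
        }

        Console.WriteLine("Best fitness found: " + pos.Select(Fitness).Min());
    }
}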

8. Proposed Method Based on Gravitational Search Algorithm (FJSSP-GSPN)

In the gravitational search algorithm, each search agent should contain the information needed to solve the problem; for example, at any time each agent should know, for every point of the search space, which operation is being executed on which machine. According to the problem definition (see Table 3), each operation can be executed on a set of machines, but at any time only one job is executed on each machine, and then the next operation should be executed.

A closer look shows that Table 3 is similar to the table of the N-queens (β€œN-minister”) problem, in which exactly one queen is placed in each column (one job for each machine); the only difference is that in the new table several jobs may be executed on the same machine. To summarize this table we use a one-dimensional array and assign one copy of it to each agent of the search space (Figure 6).

Each cell of this array corresponds to one column of the table, and the value stored in that cell is the number of the machine on which the related job will be executed. For example, the second cell of the array indicates that the second job (second column) will be executed on the third machine.
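A minimal C# sketch of this encoding (with hypothetical data) stores one machine number per job and decodes it back into an assignment:

using System;

class AssignmentEncoding
{
    static void Main()
    {
        // One cell per job (one column of Table 3); the stored value is the machine chosen for that job.
        // Example data: job 1 -> machine 3, job 2 -> machine 3, job 3 -> machine 1, ...
        int[] agent = { 3, 3, 1, 5, 2 };

        for (int job = 0; job < agent.Length; job++)
            Console.WriteLine($"Job {job + 1} is implemented on machine {agent[job]}");

        // Unlike the N-queens analogy, several jobs may share the same machine,
        // so repeated values in the array are allowed.
    }
}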

In the gravitational search algorithm, each agent of the search space holds a one-dimensional array that summarizes the current assignment of operations to machines. Thus, with five masses, five search agents are used to find the goal state (the minimum time for performing the operations).

To ensure that a bigger mass corresponds to a better state, we subtract the total time of implementing the operations from a constant value (this value can be the maximum allowed time for implementing a job, which acts as an upper bound). The result of this subtraction is π‘žπ‘–, in accordance with formula (7.3). Then, following formula (7.4), dividing the fitness of one agent by the sum of the fitnesses of all agents yields the mass of that agent.
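The following C# sketch (hypothetical completion times and upper bound) carries out exactly this conversion: the completion time of each agent is subtracted from a constant, and the results are normalized so that better (shorter) schedules receive larger masses.

using System;
using System.Linq;

class MassFromMakespan
{
    static void Main()
    {
        // Hypothetical total completion times produced by five search agents (masses).
        double[] totalTime = { 42, 55, 38, 60, 47 };
        double cap = 100;   // an assumed upper bound on the completion time

        // Bigger value = better (shorter) schedule: the "subtract from a constant" step.
        double[] q = totalTime.Select(t => cap - t).ToArray();

        // Relation (7.4): normalize so that the masses sum to one.
        double sum = q.Sum();
        double[] mass = q.Select(v => v / sum).ToArray();

        for (int i = 0; i < mass.Length; i++)
            Console.WriteLine($"Agent {i}: time {totalTime[i]}, mass {mass[i]:F3}");
    }
}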

The computation of the force, acceleration, velocity, and position of each mass is carried out per dimension, and the dimensions are independent of each other.

Consider a two-dimensional space. If two masses lie in the same column, then during the calculations along dimension X the computation between them is skipped, since the second mass exerts no force on the first mass along dimension X.

For example, in Figure 7 the two masses A and B are placed in the same column, so they exert no force on each other along dimension 𝑋.

Similarly, the two masses 𝐢 and 𝐷 are placed in the same row and therefore exert no force on each other along dimension π‘Œ. The pairs of masses (B, C), (B, D), (C, D), and (A, D), however, exert force on each other along dimensions 𝑋 and π‘Œ, so the calculations are applied to them completely.

Therefore, as a first condition we check that the two masses are not parallel in the dimension of interest. Then, to compute the sum of the forces applied to the related mass, we need to determine the forces exerted by the masses that belong to the 𝐾best set (Algorithm 2).

The 𝐾best array is initialized with the value βˆ’1. According to the gravitational algorithm, at the first moment of the run all masses exert force on one another; after checking the first condition, we add the mass numbers to the 𝐾best set, as shown in Algorithm 1.

// Reset the K_best list, then add every mass i that is not in the same column
// (dimension X) as the current mass k; arr maps a grid cell to the mass number stored there.
for (int j = 0; j < j_num; j++)
    K_best[j] = -1;

for (int i = 0; i < mass_num; i++)
{
    // Clamp positions that have drifted outside the n x n search grid.
    if (Loc_arr[0, i] >= n) Loc_arr[0, i] = n - 1;
    if (Loc_arr[1, i] >= n) Loc_arr[1, i] = n - 1;

    if (Loc_arr[0, k] != Loc_arr[0, i])        // not parallel to mass k along dimension X
    {
        for (int j = 0; j < mass_num; j++)
            if (K_best[j] == -1)               // first free slot in the K_best list
            {
                K_best[j] = arr[Loc_arr[0, i], Loc_arr[1, i]];
                break;
            }
    }
}

// Sum over K_best the force applied on mass k along dimension X (relation (7.7)).
l = 0;
while ((K_best[l] >= 0) && (number <= mass_num))
{
    k_best_T = (K_best[l] > 0) ? K_best[l] - 1 : 0;   // stored mass number -> array index
    R = Math.Sqrt(Math.Pow(Loc_arr[0, k_best_T] - Loc_arr[0, k], 2) +
                  Math.Pow(Loc_arr[1, k_best_T] - Loc_arr[1, k], 2));
    F_arr[0, k] = F_arr[0, k] + (rand_obj.Next(100) / 100.0) * G *
                  (Math.Abs(hiu_mass[k_best_T] - hiu_mass[k]) / (R + E)) *
                  Math.Abs(Loc_arr[0, k_best_T] - Loc_arr[0, k]);
    l++; number++;                                    // advance to the next member of K_best
}
A_mass = F_arr[0, k] / hiu_mass[k];                                  // acceleration, relation (7.8)
V_arr[0, k] = ((rand_obj.Next(100) / 100.0) * V_arr[0, k]) + A_mass; // velocity, relation (7.9)
x_temp = Loc_arr[0, k] + Math.Round(V_arr[0, k]);                    // new position, relation (7.10)

It is obvious that, according to the gravitational algorithm, at the next moment we should add the condition of β€œbeing a heavier mass” to the first condition; that is, in addition to the non-parallelism condition, only those masses that are heavier than the current mass should be added to the 𝐾best set.

Now, we could write calculations based on Algorithm 2.

Based on the force applied by each agent on the respective mass, the GSA quantities (the sum of the forces and the distances) should be calculated.

We could use the following pseudocode for these calculations (Algorithm 3).

// Same force accumulation as Algorithm 2, but on the one-dimensional state array used
// in the proposed method: sum the force applied on mass k by every member of k_Best.
l = 0;
while (k_Best[l] != -1)
{
    if (k_Best[l] > 0)
        k_Best_Temp = k_Best[l] - 1;   // convert the stored mass number to an array index
    else
        k_Best_Temp = 0;
    R = Math.Sqrt(Math.Pow(gls_Loc_Arr[k_Best_Temp] - gls_Loc_Arr[k], 2) +
                  Math.Pow(gls_Loc_Arr[k_Best_Temp] - gls_Loc_Arr[k], 2));
    f_Arr[k] = f_Arr[k] + (rand_obj.Next(100) / 100.0) * G *
               (Math.Abs(gls_Hiu[k_Best_Temp] - gls_Hiu[k]) / (R + E)) *
               Math.Abs(gls_Loc_Arr[k_Best_Temp] - gls_Loc_Arr[k]);
    l++;                               // advance to the next member of k_Best
}

The new positions of the masses have now been specified, and it is clear that the search agent should obtain a new state and finally a new mass at the new position of the search space. But how should these changes of state and mass be created?

In the proposed solution, we divide the state array among the N dimensions of the search space, that is, we assign a group of cells of the state array to each dimension.

For example, we may have a state array with six cells and a three-dimensional search space, in which case we assign two cells to dimension 𝑋, two cells to dimension π‘Œ, and two cells to dimension 𝑍 (Figure 8).

Note that the assignment of dimensions to cells is arbitrary, but when the position of an agent changes in the search space, the movement is associated with a particular dimension, and only the cells corresponding to that dimension may be changed in the state array, while the other values remain constant. In this way the agents move along their corresponding dimensions.

The way the values are changed is important and is explained as follows.

When an agent starts to move along one dimension, each value corresponding to that dimension is adjusted according to the distance moved, and then, by calling the neighborhood function, we determine which replacement value in the state array reduces the total spent time, so that the corresponding mass reaches a better state.
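The sketch below illustrates this dimension-restricted neighborhood move in C#, under simplifying assumptions: a hypothetical processing-time table, the sum of processing times in place of the true makespan, and a greedy choice of the best machine for each cell of the moved dimension.

using System;
using System.Linq;

class DimensionPartitionedMove
{
    // Hypothetical processing time of each job (row) on each machine (column).
    static readonly int[,] Time =
    {
        { 4, 2, 7 },
        { 3, 6, 5 },
        { 8, 4, 3 },
        { 5, 5, 2 },
        { 6, 3, 4 },
        { 2, 7, 6 },
    };

    // Simplified objective: total processing time of the chosen machines.
    static int TotalTime(int[] state) =>
        Enumerable.Range(0, state.Length).Sum(job => Time[job, state[job]]);

    static void Main()
    {
        // Six-cell state array, two cells per search-space dimension (X, Y, Z), as in Figure 8.
        int[] state = { 0, 1, 1, 0, 2, 2 };           // state[j] = machine chosen for job j
        int[][] cellsOfDim = { new[] { 0, 1 }, new[] { 2, 3 }, new[] { 4, 5 } };

        int moveDim = 1;                              // the agent moved along dimension Y
        int before = TotalTime(state);

        // Neighborhood step: only the cells belonging to the moved dimension may change;
        // greedily pick, for each of them, the machine that lowers the total time the most.
        foreach (int cell in cellsOfDim[moveDim])
        {
            int bestMachine = state[cell];
            for (int m = 0; m < Time.GetLength(1); m++)
                if (Time[cell, m] < Time[cell, bestMachine]) bestMachine = m;
            state[cell] = bestMachine;
        }

        Console.WriteLine($"Total time before: {before}, after: {TotalTime(state)}");
    }
}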

If only one mass remains in the search space, the search is finished, and from the array of the remaining (best) mass, the list of machines is obtained for processing the remaining operations of the respective jobs, so that the production time is minimized.

For instance, the array in Figure 9 shows that if the first job is implemented by the third machine, the second job by the fifth machine, and so on, then we obtain the ideal time for producing or performing the related jobs, that is, the time required for performing the jobs on the specified machines with the above-mentioned operations, ignoring other times (such as material supply, path stops, and delivery times).

9. Experimental Results

To illustrate the effectiveness and performance of the proposed algorithm, we consider 43 instances from two classes of standard JSP test problems: instances FT06, FT10, and FT20 designed by Fisher and Thompson, and instances LA01–LA40 designed by Lawrence. All the test problems are taken from ftp://mscmga.ms.ic.ac.uk/pub/jobshop1.txt [30].

All runs were carried out on a system with an Intel Core i5 2.4 GHz processor and 4 GB of RAM. The algorithm was coded in C# under the Windows XP operating system. The numerical results are compared with those reported in the existing literature using other approaches [25–30], including several heuristic and metaheuristic algorithms. Take the benchmark problem FT10 as an example.

Table 4 summarizes the results of the experiments on the 43 instances. The table reports the name of each problem, its size (𝑛 Γ— π‘š), the value of the best-known solution for each problem (πΆβˆ—), and the value of the best solution found by the proposed algorithm (GSPN). From the table, it can be seen that the proposed algorithm finds the best-known solution for 37 instances, and the deviation of the minimum found makespan from the best-known solution is very small for the remaining ones. The proposed algorithm yields good solutions with respect to almost all other algorithms. These results indicate that incorporating the gravitational search mechanism into the Petri net model facilitates the escape from local minima and increases the possibility of finding a better solution. Therefore, it can be concluded that the proposed GSPN solves the problem efficiently.

10. Conclusion

In this paper, a gravitational search algorithm for solving the flexible job-shop scheduling problem has been presented. Owing to its special structure, our gravitational-search-based algorithm works faster and more efficiently than other known algorithms. Our primary objective was to show that exploiting gravity leads to an efficient heuristic for the FJSSP. We presented computational results obtained by testing the algorithms developed in this paper on a number of benchmark problems; the gravitational search algorithm yields excellent results for almost all problems. Finally, we believe that the methodology used in this paper can be extended to other scheduling problems.

Acknowledgments

The authors gratefully acknowledge the comments of the referee which improved the presentation of the paper.