Abstract

A particle swarm optimization (PSO) algorithm has been used to solve the single machine total weighted tardiness problem (SMTWT) with unequal release dates. Three different solution approaches have been combined to find the best solutions. Genetic algorithms (GA) and simulated annealing (SA) have been used to build a subhybrid solution system. In this subhybrid system, GA obtains a solution at any stage, and that solution is taken by SA and used as an initial solution. When SA finds a better solution, it stops working and returns this solution to GA. After GA finishes working, the obtained solution is given to PSO, which searches for a still better solution and then sends its result back to GA. In this way the three solution methods work together. The neurohybrid system uses PSO as the main optimizer, while SA and GA serve as local search tools. At each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the final solution of the hybrid-PSO system. NDR checks consecutive jobs according to the total weighted tardiness criterion. The whole system is named the neurohybrid-PSO solution system.

1. Introduction

Particle swarm optimization (PSO) is an evolutionary computation technique developed by Eberhart and Kennedy [1]. Its working principle is based on modelling the motion of bird flocks. Like GA, PSO is a population based search technique: the algorithm operates on a given population throughout the search procedure. However, PSO has no selection (filtering) procedure, and because of this feature it differs from genetic algorithms. In this study, a hybrid system based on PSO has been proposed to solve the single machine total weighted tardiness problem with unequal release dates. In the single machine system to be solved, only one job can be processed at a time. Each job has a processing time, a due date, a tardiness penalty (weight), and a release date. Preemption is not allowed. The single machine total weighted tardiness problem with unequal release dates arises in the manufacturing industry, the chemical process industry, electronic and computer engineering, service systems, and many other areas. Single machine total weighted tardiness (SMTWT) is one of the most studied scheduling problems. The computation method of SMTWT can be seen in Table 4. The SMTWT problem with unequal release dates is NP-hard. Recently, metaheuristics such as particle swarm optimization (PSO), genetic algorithms (GA), simulated annealing (SA), and ant colony optimization (ACO) have been applied to solve single machine scheduling problems.

Generally, exact solutions of the SMTWT problem can be obtained by using branch-and-bound algorithms (Kan et al. [2], Potts and Van Wassenhove [3], and Abdul-Razaq et al. [4]) and dynamic programming (Held and Karp [5] and Baker and Schrage [6, 7]). Another effective heuristic method for minimizing mean tardiness is the adjacent pairwise interchange (API) method, developed by Fry et al. [8].

The single machine total weighted tardiness problem with unequal release dates is denoted $1|r_j|\sum w_j T_j$. A new dominance rule for the $1|r_j|\sum w_j T_j$ problem, which can be used to reduce the number of alternatives in any exact approach, was proposed by Akturk and Ozdemir [9]. A neurodominance rule has been used for the single machine tardiness problem with unequal release dates by Cakar [10]. Mahnam and Moslehi [11] studied the problem of minimizing the sum of maximum earliness and tardiness on a single machine with unequal release times; it has been proven that this problem is NP-hard in the strong sense, and a branch-and-bound algorithm has been developed as an exact method. Eren [12] considered the single machine scheduling problem with unequal release dates and a learning effect. The problem of scheduling jobs with release dates, due dates, weights, and equal processing times on a single machine has been studied by Van den Akker et al. [13], with the objective of minimizing total weighted tardiness. Kooli and Serairi [14] solved the SMTWT problem with unequal release dates using a mixed integer programming approach. Yin et al. [15] used several dominance properties and a branch-and-bound algorithm together with a honey bees optimization algorithm (MBO) to solve the SMTWT problem with unequal release dates. Wu et al. [16] applied a simulated annealing approach to solve the SMTWT problem with unequal release dates.

Matsuo et al. [17] used a simulated annealing algorithm for the single machine total weighted tardiness (SMTWT) problem. Crauwels et al. [18] presented a comparative study of several heuristic methods, such as TS (Tabu search), GA, and SA, for the SMTWT problem; the best among these heuristics was TS. Den Besten et al. [19] and Merckle and Middendorf [20] used ant colony optimization (ACO) for the SMTWT problem. Laguna et al. [21] discussed the use of three local search strategies within a Tabu search method for the approximate solution of a single machine scheduling problem. Cheng et al. [22] proposed a hybrid algorithm based on the ACO metaheuristic to minimize total tardiness.

Tasgetiren et al. [23] used PSO to solve the SMTWT problem and compared it with ACO and ILS (iterated local search). Panneerselvam [24] proposed a simple heuristic to solve a single machine scheduling problem. Sen et al. [25] published a detailed survey about SMTWT. Additionally, a review study about SMTWT has been done by Koulamas [26]. Yang et al. [27] proposed a combined approach with PSO and SA to improve the performance of PSO.

PSO, GA, and SA are search algorithms based on different search topologies. Since each method has a different search mechanism, the best solutions they obtain and the steps needed to reach them may differ. When these algorithms are used as a hybrid, that is, as a combined system, both the quality of the best solution obtained and the processing time are improved. Therefore, GA and SA are used together with the PSO based algorithm. The designed search system consisting of GA and SA is named the subhybrid solution system and works interactively to search for the best solution. Here, SA supports GA: SA takes the best solution found by GA as its initial solution, and when it finds a better solution, it transfers this better solution back to GA. When the SA-supported GA, that is, the subhybrid solution system, finds its best solution and stops working, it sends this best solution to the PSO algorithm. PSO takes this best solution into its population and tries to find a better one. The system consisting of the subhybrid solution system and PSO is named the hybrid-PSO system. When PSO finishes working, it sends the obtained solution back to GA, and the solution loop continues in this way. This continuously working system also prevents PSO from getting stuck in a local minimum. Sometimes all solutions in the swarm may become identical, and to escape from this unwanted situation different algorithms are used to support PSO. In the proposed solution system, PSO is much less likely to get stuck in a local minimum because the subhybrid solution system based on the combination of GA and SA is used. When the final solution is obtained by the hybrid solution system, the neurodominance rule (NDR) is applied to it. NDR checks whether, in the final solution, any job violates the order of an adjacent job and, if so, flags it. The criterion NDR uses to swap consecutive jobs is the TWT criterion. This overall solution system is named the neurohybrid-PSO system. In this study, the performances of the PSO and neurohybrid-PSO systems have been compared, and it is observed that neurohybrid-PSO performs better.

This paper is organized as follows. Section 2 explains the computational structure of PSO. In Section 3, the solution steps and parameters used for PSO are discussed. Section 4 explains how the SPV rule works. Section 5 discusses genetic algorithms. Section 6 discusses the simulated annealing algorithm. Section 7 presents the working structure of the neurodominance rule. In Section 8, the neurohybrid-PSO solution system and its working mechanism are explained. In Section 9, the experimental design, computational results, and analysis of the neurohybrid-PSO solution system are reported.

2. Computational Structure of Particle Swarm Optimization

Particle swarm optimization (PSO) is a population based evolutionary algorithm introduced by Russell Eberhart and James Kennedy in 1995. The algorithm is modelled on the behaviour of bird and fish swarms when they are looking for food or escaping from danger. Pseudocode of PSO can be seen in Algorithm 1. Because PSO finds solutions quickly, requires few parameters, and is less prone to getting trapped in local minima, it has advantages over other algorithms.

START
Initialize the population: assign random velocity and position values to each particle
REPEAT
   FOR i = 1 TO POPULATION SIZE
        Compute the fitness value of particle i;
        Update the personal best (pbest) value;
        Update the global best (gbest) value;
        Update the velocity and position values;
   END of FOR
UNTIL the defined stopping criterion is met
END

PSO consists of elements named particles, where each particle generates a different solution alternative to the related problem. The community of these particles is named a swarm. All particles in the swarm begin the search process by taking random values in the solution space. Each particle has two vectorial components: a position vector ($x_i$) and a velocity vector ($v_i$). The position vector keeps the position information of the particle, and the velocity vector keeps the amount of displacement and the direction of the particle. In PSO, which is an iterative algorithm, the velocity components and thereby the position component are updated in each iteration. The new velocity value of a particle is calculated by using the experience obtained in the previous iterations, the general experience of the swarm, and randomness:
$$v_i^{t+1} = w\,v_i^{t} + c_1 r_1 (pbest_i - x_i^{t}) + c_2 r_2 (gbest - x_i^{t}),$$
where $t$ is the number of iterations, $c_1$ is a self-learning factor, $c_2$ is a social learning factor, $r_1$ and $r_2$ are randomly generated numbers between $[0,1]$, $w$ is the inertia weight, $pbest_i$ is the best position value found by particle $i$ in the latest iterations and is named the local best value, and $gbest$ is the best position value found in the swarm until that time and is named the global best value. The new velocity value of the particle is thus calculated by using the previous velocity value, the local best value, and the global best value.

The new position vector is computed by adding the new velocity vector to the old position vector, as shown in
$$x_i^{t+1} = x_i^{t} + v_i^{t+1}.$$
The position values of the particles are taken, and the quality of the proposed solutions is determined by using a fitness function. The fitness function is an evaluation function which takes the position values of the particles as input parameters and generates numerical values. In minimization problems, particles having smaller fitness values are preferred to particles having greater fitness values; conversely, in maximization problems, particles having greater fitness values are preferred to particles having smaller fitness values.

As a result, each particle in PSO starts the search with random position and velocity values. In each iteration, the velocity and position values are updated and a fitness value is generated by using the fitness function. Additionally, in each iteration the local best value of each particle and the global best value of the swarm are updated. After a certain number of iterations, the best value of the swarm will be the solution presented by the PSO algorithm for the given problem [28].

3. Solution Steps and Used Parameters of Particle Swarm Optimization Algorithms

Step 1. Assigning initialization values, one has to do the following.
(A) Set $t = 0$ as the iteration counter starting value.
(B) Generate $n$ particles randomly. The continuous position values are randomly established; the following equation is used for the uniform construction of the initial continuous position values of particle $i$:
$$x_{ij}^{0} = x_{\min} + (x_{\max} - x_{\min}) \times r,$$
where $j$ is the dimension (job) index and $r$ is a uniform random number between 0 and 1. The population size has been taken as 30.
(C) Initial velocities are generated by a similar formula:
$$v_{ij}^{0} = v_{\min} + (v_{\max} - v_{\min}) \times r,$$
and the continuous velocity values are restricted to the range $[v_{\min}, v_{\max}]$, where $r$ is a uniform random number between 0 and 1.
(D) Apply the smallest position value (SPV) rule to find the sequence represented by each particle.
(E) Evaluate each particle in the swarm using the fitness function and compute the personal best ($pbest_i$).
(F) Obtain the best fitness value ($gbest$) by comparing all of the personal bests.

Step 2. Running the counter, one has $t = t + 1$.

Step 3. Updating the inertia weight, where $\alpha$ is a decreasing factor: $w^{t} = w^{t-1} \times \alpha$.

Step 4. Updating the velocity,
$$v_{ij}^{t} = w^{t-1} v_{ij}^{t-1} + c_1 r_1 (pbest_{ij} - x_{ij}^{t-1}) + c_2 r_2 (gbest_{j} - x_{ij}^{t-1}).$$
The social and cognitive parameters $c_1$ and $c_2$ have been taken as fixed constants, and $r_1$ and $r_2$ are uniform random numbers between $[0, 1]$.

Step 5. Updating the position, $x_{ij}^{t} = x_{ij}^{t-1} + v_{ij}^{t}$.

Step 6. Find the sequence by applying the SPV rule.

Step 7. Compute the fitness values using the new sequences, compare the previous personal best with the current personal best, and select the more successful particle.

Step 8. Update the global best ($gbest$).

Step 9. Stopping criterion: if the program reaches the maximum number of iterations, then stop.
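Putting Steps 1-9 together, the following sketch shows a compact PSO with SPV decoding, written in Python purely for illustration. The swarm size of 30 follows the text above, but the position/velocity ranges, the values of c1 and c2, and the inertia settings are common default assumptions rather than the exact settings of the paper; twt is assumed to be a function returning the total weighted tardiness of a job sequence given processing times p, release dates r, due dates d, and weights w.

import random

def pso_spv(p, r, d, w, twt, n_particles=30, max_iter=100,
            x_rng=(0.0, 4.0), v_rng=(-4.0, 4.0), c1=2.0, c2=2.0,
            w0=0.9, alpha=0.99):
    n = len(p)
    spv = lambda x: sorted(range(n), key=lambda j: x[j])          # Step 6: SPV decoding
    X = [[random.uniform(*x_rng) for _ in range(n)] for _ in range(n_particles)]
    V = [[random.uniform(*v_rng) for _ in range(n)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [twt(spv(x), p, r, d, w) for x in X]                # Step 1(E)
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]                      # Step 1(F)
    w_in = w0
    for _ in range(max_iter):                                     # Steps 2-9
        w_in *= alpha                                             # Step 3: decreasing inertia
        for i in range(n_particles):
            for j in range(n):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w_in * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))      # Step 4
                V[i][j] = max(v_rng[0], min(v_rng[1], V[i][j]))   # clamp velocity
                X[i][j] += V[i][j]                                # Step 5
            f = twt(spv(X[i]), p, r, d, w)                        # Steps 6-7
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f                   # Step 8
    return spv(gbest), gbest_f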

4. The Application of the SPV Rule and Demonstration of a Particle

The solution space to be searched in PSO can be shown as a matrix. Each row of this matrix represents a particle, that is, a job order for the SMTWT problem with unequal release dates. The total weighted tardiness of each job order gives the personal best, and the best of these gives the global best. A job order is found from the position values by using the SPV rule, and the TWT value is calculated according to this job order. The computation method of the completion times can be seen in Figure 2.

Here, the job schedule is found by ordering the position values from the smallest to the biggest. As seen from Table 1, the job schedule is 4-6-5-8-1-7-2-3.
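The SPV decoding can be sketched as follows. The data below are made up for illustration (they do not reproduce Table 1): job indices are sorted by increasing continuous position value, and the resulting sequence is evaluated with the total weighted tardiness of a non-preemptive single machine with release dates.

def spv_sequence(positions):
    """SPV rule: jobs are ordered by increasing continuous position value."""
    return sorted(range(len(positions)), key=lambda j: positions[j])

def total_weighted_tardiness(sequence, p, r, d, w):
    """TWT of a job sequence on a single machine with release dates (no preemption)."""
    t, twt = 0, 0
    for j in sequence:
        t = max(t, r[j]) + p[j]          # a job cannot start before its release date
        twt += w[j] * max(0, t - d[j])   # weighted tardiness of job j
    return twt

# Illustrative (made-up) data for 4 jobs
positions = [1.8, -0.4, 0.9, 2.3]
p, r, d, w = [3, 2, 4, 1], [0, 1, 0, 2], [5, 4, 9, 6], [2, 1, 3, 1]
seq = spv_sequence(positions)            # -> [1, 2, 0, 3]
print(seq, total_weighted_tardiness(seq, p, r, d, w))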

5. Genetic Algorithms

Genetic algorithms are search and optimization methods based on natural selection principles. The principles of genetic algorithms were first presented by John Holland, and after the presentation of these fundamental principles many scientific studies have been published. Unlike traditional optimization methods, genetic algorithms do not work directly on the parameter set but on its coded form. Genetic algorithms work according to probabilistic rules and need only the objective function. They do not search the whole solution space but only a certain part of it; thus they carry out an efficient search and reach a solution in a shorter time. Another important advantage of GA is that it examines a population of solutions simultaneously and in this way does not get stuck in local solutions.

Genetic algorithms do not deal with the problem itself but with its coded representation. The coding model is built based on the structure of the problem. First the initial population is formed. The operations are carried out with the determined crossover and mutation rates, and in each population the individuals are ranked according to the best fitness value. The algorithm is terminated when the defined number of populations is reached or the determined fitness value is obtained.

The application of GA for the solution of the SMTWT problem with unequal release dates can be described as follows. Each chromosome represents one job order. The fitness function is TWT. The Linear Order Crossover (LOX) method has been used as the crossover method; the working principle of LOX can be seen in Cakar [29] and Cakar [10]. The rates and values determined for the solution are given below:
crossover rate: 100%,
mutation rate: 4%,
number of populations (generations): 250,
population size: 100.
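As a rough illustration of the crossover step, the sketch below implements one common form of Linear Order Crossover (LOX) for permutation-encoded chromosomes; it is a minimal, assumed variant and may differ in detail from the LOX version described in Cakar [29].

import random

def lox_crossover(parent1, parent2):
    """Linear Order Crossover (one common variant) for job-permutation chromosomes.

    A random segment of parent1 is copied to the child; the remaining positions
    are filled with the missing jobs in the order they appear in parent2.
    """
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]                 # keep the segment from parent1
    fill = [job for job in parent2 if job not in child]
    slots = [i for i in range(n) if child[i] is None]
    for i, job in zip(slots, fill):                   # fill the rest in parent2's order
        child[i] = job
    return child

# Example with two 8-job permutations (illustrative data)
p1 = [4, 6, 5, 8, 1, 7, 2, 3]
p2 = [1, 2, 3, 4, 5, 6, 7, 8]
print(lox_crossover(p1, p2))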

6. Simulated Annealing

Simulated annealing (SA) is a heuristic algorithm that has been applied very successfully to combinatorial optimization problems. The SA algorithm is a technique that takes as its model and reference the annealing process of molten metals during cooling. The objective function values of the solutions produced by this method show a general decrease over time. However, due to the structure of the algorithm, in some cases solutions with higher objective function values are also accepted. In this way the algorithm does not get stuck at a local minimum and keeps searching for a better solution or a better local minimum. The SA algorithm is a useful heuristic search algorithm that gives solutions near the best solutions, especially for combinatorial optimization problems.

In Figure 1 the flowchart and pseudocode of the SA algorithm are given [30, 31]. As seen in the figure, SA starts with an initial solution ($S$), an initial temperature ($T$), and an iteration number ($L$). The role of the temperature is to control the probability of accepting a worsening solution. The iteration number, on the other hand, determines the number of repetitions performed until a stable state is reached at the current temperature [32, 33]. The temperature can be given the meaning of an implicit flexibility index: at the beginning of the search, that is, at high temperatures, there is considerable flexibility to move to a worse solution, whereas later in the search, at lower temperatures, less of this flexibility remains. Based on this, a new neighborhood solution ($S'$) is generated through a heuristic perturbation of the existing solution. If the change in the objective function is an improvement, the neighborhood solution ($S'$) becomes the new solution. Even if the change in the objective function is not an improvement, the neighborhood solution may still be accepted as the new solution with a suitable probability based on the temperature. This mechanism allows the search to escape from a local optimum and keeps open the possibility of finding the global optimum. If there is no change after a certain number of iterations, the algorithm is stopped; if the solution is still improving, the algorithm continues with a new temperature value.

To generate a new solution, two different operators have been used: a swap operator and an inverse operator. The swap operator is the same as the mutation process of GA. In the inverse operator, a group of consecutive jobs is randomly chosen and then reversed in order. Thus, a new solution alternative is obtained.
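The sketch below illustrates the two neighborhood operators together with a standard Metropolis-style acceptance test. The geometric cooling schedule and the acceptance probability exp(-delta/T) are common SA choices assumed here, not parameters taken from the paper; cost stands for a function returning the total weighted tardiness of a given job sequence.

import math
import random

def swap_move(seq):
    """Swap operator: exchange two randomly chosen jobs (like GA mutation)."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def inverse_move(seq):
    """Inverse operator: reverse a randomly chosen block of consecutive jobs."""
    s = seq[:]
    a, b = sorted(random.sample(range(len(s)), 2))
    s[a:b + 1] = reversed(s[a:b + 1])
    return s

def simulated_annealing(seq, cost, T=100.0, alpha=0.95, iters_per_temp=50, T_min=1e-3):
    """Minimal SA loop: accept worse neighbors with probability exp(-delta / T)."""
    best = current = seq
    while T > T_min:
        for _ in range(iters_per_temp):
            neighbor = random.choice((swap_move, inverse_move))(current)
            delta = cost(neighbor) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / T):
                current = neighbor
                if cost(current) < cost(best):
                    best = current
        T *= alpha  # geometric cooling
    return best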

7. Neurodominance Rule

The neurodominance rule is obtained by training a backpropagation artificial neural network (BPANN) on data prepared by applying the API method. It is an intelligent system that decides the priority of two consecutive jobs based on the TWT criterion. If any sequence violates the neurodominance rule, the violating jobs are swapped according to the total weighted tardiness criterion.

The starting time of job $i$, the processing time of job $i$, the due date of job $i$, the weight of job $i$, the processing time of job $j$, the due date of job $j$, the weight of job $j$, the release date of job $i$, and the release date of job $j$ were given as inputs to the BPANN. The values "0" and "1" were used to determine the precedence of the jobs: if the output value of the BPANN is "0," then job $i$ should precede job $j$; if the output value of the BPANN is "1," then job $j$ should precede job $i$ [10].
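A minimal sketch of how such a rule could be applied as a post-processing step is given below; ndr_precedes stands in for the trained BPANN from Cakar [10] (its weights and exact inputs are not reproduced here), and the fallback of accepting a swap only when it strictly lowers the total weighted tardiness is an assumption made to keep the sketch well behaved.

def apply_ndr(sequence, p, r, d, w, ndr_precedes, twt):
    """Scan adjacent job pairs; if the (hypothetical) trained network says a pair
    is in the wrong order, swap it and keep the swap only if TWT strictly improves
    (this also guarantees termination)."""
    seq = sequence[:]
    improved = True
    while improved:
        improved = False
        for k in range(len(seq) - 1):
            i, j = seq[k], seq[k + 1]
            if not ndr_precedes(i, j, seq, p, r, d, w):   # network says j should come first
                candidate = seq[:k] + [j, i] + seq[k + 2:]
                if twt(candidate, p, r, d, w) < twt(seq, p, r, d, w):
                    seq = candidate
                    improved = True
    return seq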

The working mechanism of the neurodominance rule is explained with an example; see Tables 2 and 3.

8. Neurohybrid System Based on PSO

In this system, PSO works first, and then the obtained solution is improved by using GA and SA together as a hybrid system. GA and SA work interactively: SA takes the best solution found by GA, improves it, and sends the better solution back to GA. When GA finishes working, the best solution obtained by the subhybrid system is transferred to the initial population of the PSO algorithm, which searches for a better solution. The best solution found by PSO (the hybrid-PSO solution) is sent to GA again, and GA and SA work interactively to search for a better solution. This loop stops when the obtained solution satisfies the predefined stopping criterion. Because these three search algorithms work interactively and each has a different search mechanism, faster and better solutions may be found. Finally, the neurodominance rule (NDR) is applied to the final solution and, based on the total weighted tardiness criterion, violating orders are corrected, which improves the solution further. The overall system is named neurohybrid-PSO. The general working principle of the proposed solution system is shown in Figure 3, and the detailed working system is presented in Figures 4 and 5.
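The following high-level sketch shows one possible way to organize this interaction; the function names (run_pso, run_ga_with_sa, apply_ndr), the stopping rule based on a fixed number of outer cycles, and the twt fitness callable are assumptions made for illustration, not the authors' exact implementation.

def neurohybrid_pso(initial_population, twt, run_pso, run_ga_with_sa, apply_ndr,
                    outer_cycles=10):
    """Illustrative orchestration of the neurohybrid-PSO loop:
    PSO <-> (GA interleaved with SA), finished by the neurodominance rule."""
    best = min(initial_population, key=twt)
    population = initial_population
    for _ in range(outer_cycles):
        # Subhybrid stage: GA explores, SA exploits GA's best and feeds it back.
        ga_best = run_ga_with_sa(seed=best)
        # Hybrid-PSO stage: inject the subhybrid best into the swarm and continue.
        population, pso_best = run_pso(population, seed=ga_best)
        if twt(pso_best) < twt(best):
            best = pso_best
    # Final polishing with the neurodominance rule.
    return apply_ndr(best)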

9. Experimental Design and Solutions

To evaluate the success of the PSO based neurohybrid system (NHPSO), the experiments have been repeated 100 times using a randomly generated set of 8100 sample problems. Upper and lower bounding schemes have been used as performance measurement criteria. The processing time intervals, weight intervals, and numbers of jobs are shown in Table 5; both processing times and weights are integers. The relative range of due dates (RDD) and the average tardiness factor (TF) were each selected from a fixed set of values. For each job $j$, an integer due date $d_j$ was generated from the uniform distribution $[P(1 - TF - RDD/2),\; P(1 - TF + RDD/2)]$, where $P = \sum_j p_j$ represents the total processing time. Release dates are produced from a uniform distribution between 0 and an upper limit proportional to the total processing time.
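A minimal instance generator following this scheme might look as follows; the specific parameter ranges (processing times in [1, 10], weights in [1, 10], release dates in [0, P]) are placeholders standing in for the values listed in Table 5, not the exact settings used in the paper.

import random

def generate_instance(n, rdd, tf, p_range=(1, 10), w_range=(1, 10)):
    """Generate one SMTWT instance with unequal release dates.

    Due dates follow the standard RDD/TF scheme: d_j ~ U[P(1-TF-RDD/2), P(1-TF+RDD/2)],
    where P is the total processing time.
    """
    p = [random.randint(*p_range) for _ in range(n)]
    w = [random.randint(*w_range) for _ in range(n)]
    P = sum(p)
    d_low = max(0, int(P * (1 - tf - rdd / 2)))
    d_high = max(d_low, int(P * (1 - tf + rdd / 2)))
    d = [random.randint(d_low, d_high) for _ in range(n)]
    r = [random.randint(0, P) for _ in range(n)]      # placeholder upper limit
    return p, w, d, r

p, w, d, r = generate_instance(n=20, rdd=0.4, tf=0.6)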

The COVERT, ATC, WSPT, WDD, WPD, EDD, LPT, SPT, and CR priority rules are first applied to the randomly generated problems. COVERT and ATC are dynamic priority rules, and the others are static rules. The rules used are listed below, and the initial population of PSO has been constituted from the solutions obtained by applying them:
(1) COVERT
(2) ATC
(3) EDD
(4) SPT
(5) LPT
(6) CR
(7) WSPT
(8) WDD
(9) WPD
(10) FCFS
In Table 6, the average of the best values of PSO in the initial population and the best solutions obtained by PSO are compared, and the amount of improvement is reported. Furthermore, NHPSO has been applied to the same problems, and the amount of improvement compared to the initial solutions is also given. As can be clearly seen in Table 6, the hybrid solution system achieves a better improvement. Additionally, the contribution of NDR to the hybrid system is also demonstrated in Table 6, and a comparison of GA and SA can be seen in Tables 7 and 8.
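As an illustration of how such seed solutions could be produced, the sketch below builds sequences with a few of the static rules (WSPT, EDD, SPT, LPT); the dynamic rules COVERT and ATC need the current time in their priority indices and are omitted here for brevity.

def wspt(p, w, d, r):   # weighted shortest processing time: largest w_j / p_j first
    return sorted(range(len(p)), key=lambda j: -w[j] / p[j])

def edd(p, w, d, r):    # earliest due date first
    return sorted(range(len(p)), key=lambda j: d[j])

def spt(p, w, d, r):    # shortest processing time first
    return sorted(range(len(p)), key=lambda j: p[j])

def lpt(p, w, d, r):    # longest processing time first
    return sorted(range(len(p)), key=lambda j: -p[j])

def seed_population(p, w, d, r):
    """Build part of the PSO initial population from static priority-rule sequences."""
    return [rule(p, w, d, r) for rule in (wspt, edd, spt, lpt)]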

The solution values given by PSO and NHPSO over 100 generations are presented graphically in Figure 6. It is evident that the proposed NHPSO works better, reaching the solution more quickly and giving a better solution within a given number of generations. These features are provided to NHPSO by the interactively working SA and GA and by applying NDR to the final solution. A comparison of PSO, GA, and SA can be seen in Figure 7.

The linear lower bound was originally obtained by Potts and Van Wassenhove [34] using a Lagrangian relaxation approach with subproblems that are total weighted completion time problems. An additional derivation of it has been presented by Abdul-Razaq et al. [4], based on reducing the total weighted tardiness criterion to a linear function, that is, to a total weighted completion time problem. For job $j$, $j = 1, \ldots, n$, let $C_j$ be the completion time of job $j$, so that the tardiness is $T_j = \max\{C_j - d_j, 0\}$. Assume that $v = (v_1, \ldots, v_n)$ is a vector of linear weights, that is, weights belonging to the linear function $v_j (C_j - d_j)$, chosen so that $0 \le v_j \le w_j$. Then $w_j T_j \ge v_j (C_j - d_j)$ for every job, and a lower bound can be written by using the given linear function:
$$LB(v) = \sum_{j=1}^{n} v_j C_j - \sum_{j=1}^{n} v_j d_j \le \sum_{j=1}^{n} w_j T_j.$$
This demonstrates that the solution of the total weighted completion time problem gives a lower bound on the total weighted tardiness problem. The lower bounding scheme has also been used by Akturk and Yildirim [35] and Cakar [10] in their studies.
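A minimal sketch of computing one such bound is given below; it simply takes $v_j = w_j$ and relaxes the release dates to zero, in which case $\sum v_j C_j$ is minimized by ordering jobs by non-increasing $v_j / p_j$ (WSPT). This is only an illustration of the principle, not the tighter bound used in the cited papers.

def linear_lower_bound(p, w, d):
    """Lower bound on total weighted tardiness via the linear relaxation
    w_j T_j >= v_j (C_j - d_j), taking v_j = w_j and release dates relaxed to zero."""
    order = sorted(range(len(p)), key=lambda j: -w[j] / p[j])  # WSPT order minimizes sum v_j C_j
    t, lb = 0, 0
    for j in order:
        t += p[j]
        lb += w[j] * (t - d[j])
    return max(lb, 0)   # the trivial bound 0 always holds as well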

In Tables 9-11, it is clearly seen that SA, GA, PSO, and NHPSO improve the linear lower bound for the different numbers of jobs. The proposed NHPSO achieves a better improvement than PSO, GA, and SA. Furthermore, NDR also contributes to the improvement of the lower bound.

The numbers of better, equal, and worse lower bounds obtained for the given examples are presented in Tables 12-14. The amount of improvement is significant at the 99.5% confidence level.

10. Conclusion

In this study, we proposed a neurohybrid-PSO system to solve the total weighted tardiness problem with unequal release dates. It is known that hybrid intelligent systems work better than their individual components. In the proposed hybrid system, GA and SA work interactively with each other to support PSO and increase the performance of the PSO algorithm. It has been shown in the paper that the proposed neurohybrid-PSO works better than PSO. The NDR method has been applied to the solution obtained from the hybrid-PSO, which works interactively with GA and SA, to improve the solution further. Computational results showed that NDR improves the performance of the hybrid-PSO system. It can also be seen that the neurohybrid-PSO solution system improves the upper and lower bounding schemes. In the future, the proposed solution system may be applied to the single machine total weighted tardiness problem with double due dates.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.