Abstract

The Just-In-Time (JIT) scheduling problem is an important subject of study. It essentially constitutes the problem of scheduling critical business resources in an attempt to optimize given business objectives. The problem is NP-Hard in nature, hence requiring efficient solution techniques. To solve the JIT scheduling problem presented in this study, a new local search metaheuristic algorithm, namely, the enhanced Best Performance Algorithm (eBPA), is introduced. This forms part of the initial study of the algorithm on scheduling problems. The problem setting is the allocation of a large number of jobs to multiple identical machines running in parallel, where the due date of a job is characterized by a window frame of time rather than a specific point in time. The performance of the eBPA is compared against Tabu Search (TS) and Simulated Annealing (SA), two well-known local search metaheuristic algorithms. The results show the potential of the eBPA as a metaheuristic algorithm.

1. Introduction

Scheduling problems constitute a large and important field of study. They involve the allocation of production (or operational) resources with the intent of optimizing business objectives. Such objectives may include reduced operational costs, reduced production time, increased customer satisfaction, and increased profits in optimizing production processes or service delivery. Several categorizations of scheduling problems are found in the literature [1, 2]. Of particular interest in this study, however, is the problem of Just-in-Time (JIT) scheduling [3, 4].

JIT scheduling, as described by Taiichi Ohno (commonly referred to as the father of JIT), means that, “in a flow process, the right parts needed in assembly reach the assembly line at the time they are needed and only in the amount needed” [5]. Ohno perfected JIT principles at Toyota manufacturing plants in Japan while serving as vice president of manufacturing. At the time, Toyota created high-quality vehicles at relatively low cost compared to its competitors, in spite of the country’s lack of natural resources. The success of implementing JIT techniques in manufacturing gave Toyota a prominent position within the automobile sector.

Observing Toyota’s success, many organizations on a global scale have adopted and implemented JIT techniques with relative success. Proper implementations of JIT techniques have been documented to yield improved product quality, improved service delivery, improved customer satisfaction, improved employer and employee relations, decreased production costs, reduced levels of inventory, and increased profit turnover [6]. Organizations have further benefited by remaining competitive within industry, offering products and/or services at competitive costs without compromising quality. These factors constitute important business objectives, as organizations remain competitive on the basis of cost, quality, and service delivery [7].

The JIT scheduling problem is largely studied in the sectors of engineering, manufacturing, and service delivery [1]. The objective is the optimized delivery of business resources that meets demand, rather than manufacturing or supplying less or in surplus. JIT scheduling objectives are summarized as follows [8].
(1) Competitiveness: companies remain competitive in offering high quality products and services at relatively low costs in meeting demands.
(2) Efficient production processes: the objective is greater productivity while maintaining quality and minimizing costs.
(3) Improved quality of products: production of smaller quantities allows for better assessment checks, resulting in improved product quality.
(4) Minimized wastage: this reduces costs and saves time and effort in business.
(5) Reduced inventory: this minimizes investments, as capital will not be held (or lost) in holding excess inventory.
(6) Efficient space utilization: less inventory means more space available.
(7) Improved customer satisfaction: the on-time delivery of quality products and services at competitive rates earns customer satisfaction.
(8) Improved supplier relations: supplier relations are strengthened through the organized delivery of goods and services as required.
The JIT scheduling problem is NP-Hard [3, 9, 10]. This study investigates the JIT scheduling problem and determines solutions using metaheuristic techniques. The study proposes an enhanced Best Performance Algorithm (eBPA), which is an enhancement of our recently introduced Best Performance Algorithm (BPA) in the literature [11–13]. eBPA, though similar to BPA, improves on the initial implementation of the latter to enhance its efficiency and execution time. Further details on the initial BPA algorithm can be found in Chetty and Adewumi [11, 12]. This study compares eBPA with two other standard metaheuristics to ascertain its efficiency in handling this JIT scheduling problem. The objective of this study is therefore to test the potential of the eBPA algorithm by comparing its solutions against those of well-known local search metaheuristic algorithms on an NP-Hard problem. This study constitutes initial research on the eBPA algorithm for scheduling problems.

Previous studies on JIT scheduling problems have investigated both the single and multiple machine scenarios. Many optimization techniques have been investigated in determining solutions, including both exact and heuristic algorithms. Ronconi and Kawamura [14] investigated a single machine JIT scheduling problem with restrictive common due dates. The objective was the minimization of the earliness and tardiness penalties. The study proposed a Branch and Bound algorithm which used lower bounds and pruning rules exploiting properties of the problem. The algorithm was investigated using 280 jobs, characterized by different due dates, and proved effective in outperforming the CPLEX optimization software. Monette et al. [15] studied a JIT job-shop scheduling problem. Jobs were characterized by earliness and tardiness penalties with respect to their due dates, and the objective was the minimization of these penalties. The study presented a constraint programming algorithm; this was a filtering algorithm based on machine relaxation. The study investigated a large range of benchmark test instances, 72 problems in total, and the algorithm proved very effective in determining 29 of the best-known solutions for the problems studied. Dereniowski and Kubiak [16] studied a JIT multislot scheduling problem. In this problem, processing time was divided into time slots rather than a single due date for the jobs. The intent of the study was the minimization of the schedule makespan. The study presented algorithms for both the single and parallel machine problem instances.

Süer et al. [17] studied a single machine scheduling problem with nonzero ready times. Jobs were assumed to have arrived at different times, with the arrival times being known in advance. The objective was determining the job sequences that minimize tardiness; preemption was not allowed. The study investigated the Genetic Algorithm (GA) and compared its solutions to known optimal solutions for small to large size problems. Results showed that GA determined optimal solutions for smaller instances and near-optimal solutions for larger instances. van Laarhoven et al. [18] investigated the Simulated Annealing (SA) algorithm in finding the minimum makespan in large instances of job-shop scheduling problems. The results showed that SA found shorter makespans than tailored deterministic algorithms, at the expense of greater execution times. The conclusion was that the disadvantage of expensive computation times was compensated by the simplicity of the algorithm and the higher quality solutions determined. Sidhoum et al. [19] studied a JIT scheduling problem in a parallel machine environment. Jobs were characterized by distinct due dates and earliness and tardiness penalties. The research was motivated by the difficulty of determining lower bounds for JIT scheduling problems in single and parallel machine environments. A simple heuristic algorithm was presented. Results showed the differences between the lower and upper bound values for the single and parallel machine environments to be around 1% for the problem instances investigated. McMullen [20] applied Tabu Search to a mixed-model production scheduling problem at an assembly line. The objective was to determine an assembly sequence that optimized the assembly process with respect to part-usage rates and the number of setups involved. Results showed that the multiple-objective problem of minimizing part-usage and setup time could be valuable from a managerial perspective. Naso et al. [21] investigated a hybridized algorithm constructed using GA and a constructive heuristic for a JIT delivery problem in supply chain management. The problem setting was that of a ready-mixed concrete delivery service, trying to best coordinate the supply of concrete from producers to customers on time. Apart from problem complexity, strict time constraints forbade early or tardy delivery of ready-mixed concrete. The problem objective was a delivery schedule that maximized profit while minimizing risk. The case study presented used actual industrial data. The hybridized algorithm was compared to four constructive heuristics, and results showed that it determined superior solutions.

The rest of this paper is structured as follows. Section 2 describes the JIT scheduling problem studied and presents its mathematical formulation. Section 3 describes and presents the local search metaheuristic algorithms. Section 4 presents and discusses the experimental results obtained. Finally, Section 5 draws conclusions and outlines possible future work.

2. Problem Description and Mathematical Formulation

The allocation of company resources to meet business demands is critical to the success of an organization. Therefore, in the JIT problem formulation, the untimely scheduling of business resources that misses expected due dates is accompanied by penalty factors called earliness and tardiness penalties. An earliness penalty is incurred when a job (which implies a service rendered or an item being produced) is scheduled before its expected time. The implication of an earliness penalty relates, for example, to the cost of holding inventory before its expected time. A tardiness penalty, in turn, is incurred when a job completes after its expected due date. This could imply, for example, customer dissatisfaction.

The due date of a job refers to either a specific point in time or an interval specified by a window frame of time. The job’s due date is important: it relates to the demand for products or services at predetermined times. The inability of an organization to provide on-time delivery of products and/or services opens the door to its competitors in industry.

In a perfect scheduling environment, resources would be made available exactly as required. Realistically, however, the limited availability of resources and differences in demand result in resources becoming available before or after expected due dates. Hence, the JIT scheduling problem relates to minimizing the earliness penalty, the tardiness penalty, or both in scheduling resources [1]. Optimizing a JIT schedule is difficult due to these conflicting objectives.

Most JIT investigations have studied the scheduling of jobs on a single machine where the due dates are specific points in time. This research studies a JIT problem of scheduling jobs on parallel machines where the due dates are window frames of time. The single machine scenario is easier to model and solve, although in industry it carries the possibility of bottlenecking. Surprisingly, far fewer papers have appeared on JIT problems of scheduling jobs on multiple and parallel machines.

The mathematical model presented in this study is that given in Adamu and Abass [22]. This study takes the opportunity to correct the original mathematical formulation by removing irrelevant constraints and reformulating the objective function in terms of the schedule. Also, although the formulation is a maximization model, the original study presented solutions for a minimization model. These inconsistencies present the opportunity for this problem to be restudied.

In the mathematical formulation given below, the left and right hand sides of a window interval of time represent the earliest start time (where the job becomes available for processing) and the latest due date (by which the job must be completed). The jobs are scheduled starting from time zero. The problem's objective is the maximization of the total weight of all on-time jobs. Here $w_j$ is the weight of job $j$; it relates to the importance of job $j$ being delivered on time. This problem assumes equivalent earliness and tardiness penalties; these penalty factors are not considered in the objective function. The mathematical formulation is as follows.

Indices:
(i) $i$: indicative of each machine, that is, $i = 1, 2, \ldots, m$.
(ii) $j$: indicative of each job, that is, $j = 1, 2, \ldots, n$.

Parameters:
(i) $a_j$: representing the left hand side of the due window of job $j$. This is the earliest start time of job $j$.
(ii) $d_j$: representing the right hand side of the due window of job $j$. This is the latest expected completion time of job $j$.
(iii) $p_j$: representing the processing time of each job $j$.
(iv) $s_{ij}$: representing the actual start time of job $j$ on machine $i$.
(v) $C_j(S)$: given a schedule $S$, represents the completion time of job $j$ on machine $i$, that is, $C_j(S) = s_{ij} + p_j$. Hence, job $j$ is said to be early if $C_j(S) < a_j + p_j$, tardy if $C_j(S) > d_j$, else on-time if $a_j + p_j \le C_j(S) \le d_j$.
(vi) $w_j$: weight of job $j$.

Variables:
(i) $x_{ij}$: equal to 1 if job $j$ is scheduled on-time on machine $i$ in schedule $S$, and 0 otherwise.

Objective function:

Maximize
$$\sum_{i=1}^{m} \sum_{j=1}^{n} w_j x_{ij} \tag{1}$$

Subject to constraints:
$$a_j \, x_{ij} \le s_{ij} \quad \text{and} \quad (s_{ij} + p_j)\, x_{ij} \le d_j, \quad \forall i, j, \tag{2}$$
$$\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, 2, \ldots, n, \tag{3}$$
$$x_{ij} \in \{0, 1\}, \quad \forall i, j. \tag{4}$$

Equation (1) represents the total weight of all on-time jobs. Constraint (2) ensures that if job $j$ is scheduled on machine $i$ it will start and complete processing between its earliest start time $a_j$ and latest finishing time $d_j$. Constraint (3) ensures that job $j$ will be assigned to at most one machine $i$. Constraint (4) represents a job being either on-time, early, or tardy, with $x_{ij} = 1$ representing on-time and $x_{ij} = 0$ otherwise.

The problem assumptions are as follows.
(1) Setup time is included in processing time; hence, preemption is not allowed. When a job is completed, there is no delay in starting the next job on machine $i$.
(2) There is no delay in machine processing. When job $j$ starts, it is expected to be completed as represented by its processing time $p_j$.
(3) Only one job can be processed at any given time on machine $i$.
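To make the objective concrete in code, the following is a minimal illustrative sketch, in Java (the implementation language used in Section 4), of evaluating the total weight of all on-time jobs for a given schedule. The array and field names are our own assumptions about a reasonable representation, not an excerpt from the study's implementation.

```java
/**
 * Illustrative sketch (not the authors' code): evaluates the objective of a
 * schedule, i.e., the total weight of all on-time jobs. Assumed
 * representation: job j has due window [a[j], d[j]], processing time p[j],
 * and weight w[j]; machine[j] = -1 if j is unscheduled, and start[j] is the
 * start time assigned to j on its machine.
 */
public final class JitObjective {

    /** Returns the total weight of all on-time jobs in the schedule. */
    static double totalOnTimeWeight(double[] a, double[] d, double[] p,
                                    double[] w, int[] machine, double[] start) {
        double total = 0.0;
        for (int j = 0; j < w.length; j++) {
            if (machine[j] < 0) continue;          // job not scheduled at all
            double completion = start[j] + p[j];   // C_j(S) = s_ij + p_j
            boolean onTime = start[j] >= a[j]      // not early
                          && completion <= d[j];   // not tardy
            if (onTime) total += w[j];             // x_ij = 1 contributes w_j
        }
        return total;
    }
}
```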

3. Methodology

This study investigates three local search metaheuristic algorithms for the problem of JIT scheduling presented in Section 2. Meta (loosely speaking) refers to an algorithm operating at a level beyond, or higher than, that of a simple heuristic. Heuristic means “to find” or to “discover by trial and error” [23]. There are no formal definitions of what heuristic and metaheuristic algorithms are in computational studies; however, the trend is to classify stochastic algorithms based on randomization into these categories. The metaheuristic algorithms investigated include eBPA, Tabu Search (TS), and Simulated Annealing (SA).

TS and SA are well-known local search metaheuristic algorithms. eBPA, on the other hand, is an enhancement of the Best Performance Algorithm (BPA) previously introduced in the literature by Chetty and Adewumi [11, 12]. The improvement lies essentially in the implementation aspect of the algorithm. eBPA improves on both the efficiency and execution time performance of BPA, having a simpler and completely new design. For this reason, standard implementations of all algorithms are compared in determining solutions for the JIT scheduling problem studied. The presentation of the eBPA and the documentation of its potential in solving an NP-Hard problem are the primary objectives of this study. The algorithms are presented and explained in the subsections below.

3.1. Enhanced Best Performance Algorithm

The eBPA is modeled on the analogy of professional athletes desiring to improve on their best registered performances within competitive environments. Numerous sporting disciplines exist, yet the principles are the same: professional athletes desire to perfect their levels of skill in order to beat their personal best performances and those of their competitors. Before turning professional, all athletes start off with a simple love for the sport and a desire to succeed. Thereafter, with constant practice and strategy, their skill levels increase. They learn by trial and error and improve on their strengths and weaknesses. Knowledge is also acquired by learning from mentors and/or other athletes. In becoming professional, the ultimate goal of an athlete is to develop a level of skill that would deliver a performance surpassing their personal best registered performances.

Apart from coaching, an effective strategy could be to maintain an archive of a limited number of the athletes’ best registered performances. This could be in the form of recordings. Recordings contain both the techniques used and the result determined in the delivery of performances. Athletes can use this information to identify strengths and weaknesses in the delivery of performances. With this knowledge, weaknesses can be improved upon or new techniques can be learned. In making appropriate changes and with sufficient practice, refined skills can be achieved. The objective is for the athlete to develop a level of skill that would allow for their best performance to be superseded.

Technique (or skill) in this context refers to a solution determined by an optimization algorithm. The result of executing a performance refers to the result of evaluating the objective function using this solution. Therefore, there are notable similarities between an athlete improving on skill level and an optimization algorithm determining improved solutions. Based on this analogy, the enhanced Best Performance Algorithm (eBPA) has been modeled. There are five guiding rules governing the eBPA:
(1) An athlete maintains an archive of recordings for a limited number of their best performances delivered within competitive environments.
(2) An athlete reviews a performance from this archive and makes appropriate changes to the technique used, with the hope that executing the new technique determines a performance that at least meets the minimum criterion for acceptance into the archive.
(3) If an improved performance is determined, the archive is updated by replacing the recording of the worst performance with the recording of the new one.
(4) The new recording is then chosen as the next recording to be reviewed by the athlete (given a certain probability).
(5) Only performances with unique techniques are allowed to be registered in the list.
To artificially simulate this analogy, the eBPA maintains a limited number of the best solutions found in a list called the Performance List (PL). Here, solutions correspond to the recordings stored in an archive. Solution attributes (i.e., the design variables of a solution) distinguish one solution from the next. Therefore, only solutions that are unique may be inserted into the PL. Disallowing duplicate solutions prevents the algorithm from reworking solutions previously visited. In the PL, the best and worst solutions (determined by their solution qualities, or results) must be indexed. Another solution from the list, chosen to be worked with, must also be indexed; this is called the working solution.

To try and determine solutions that improve over those already registered in the PL, local search changes are applied to the solution indexed as the working solution. The new solution determined by applying the change is referred to as an update of the working solution. If the result of this solution at least improves on the result of the worst solution, or is equivalent in solution quality but unique in its design variables, then the PL is updated by replacing the worst solution with the new one. The new solution is then indexed as the working solution. If this solution's result improves on the best solution's result (i.e., that indexed by the best index), then it is also indexed as the new best solution. The worst solution must also be redetermined and reindexed upon an update of the PL being made. If an update of the PL is not achieved, then local search changes will continue to be applied to the solution indexed as the working solution. However, given a certain probability, the worked-with solution for the next iteration could also be the solution that had been updated in the current iteration (i.e., the updated working solution). The possibility of the algorithm working with the updated working solution, for the next iteration, represents the willingness of an athlete to work with a new but disimproved technique. Working with disimproved solutions could possibly lead to improved solutions being found.

These strategies represent eBPA's ability to work with both improved and disimproved solutions. A solution is considered improved if the updated working solution's result at least improves on the working solution's result. A solution is considered disimproved in two ways. Firstly, the result of the updated working solution may cause the PL to be updated without the actual working solution's result in the PL being improved upon. This means that a better solution is found which, although being disimproved relative to the current working solution, still meets the minimum requirements for acceptance into the PL. Secondly, a disimproved solution is accepted to be worked with if the updated working solution does not cause an update of the PL and the probability factor is satisfied. This will cause the updated working solution to be the worked-with solution in the next iteration. Accepting disimproved solutions is eBPA's strategy for escaping local entrapment and cycling.

To further constrain (or relax) acceptance into the PL, the eBPA allows the PL to be dynamically resized. Larger sizes have a less restrictive acceptance criterion compared to smaller sizes, because the quality of the worst registered solution is worse. Therefore, strategically reducing the PL size is used to further intensify the search in promising neighborhood regions. Similarly, strategically increasing the PL size can be used to escape local entrapment.
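The PL bookkeeping described above can be sketched as follows. This is an illustrative Java fragment under our own naming (PerformanceList, tryInsert, and so on) and data representation; it is not the authors' implementation, only a sketch of the acceptance, uniqueness, and resizing rules.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative sketch of the Performance List (PL): a bounded list of unique
 * elite solutions whose worst entry defines the acceptance threshold.
 * Names and representation are assumptions, not the authors' code.
 */
final class PerformanceList {
    private final List<int[]> solutions = new ArrayList<>(); // e.g., job-to-machine assignments
    private final List<Double> fitness = new ArrayList<>();
    private int capacity;

    PerformanceList(int capacity) { this.capacity = capacity; }

    /** Index of the worst (lowest-fitness) entry; this defines the acceptance threshold. */
    int worstIndex() {
        int worst = 0;
        for (int i = 1; i < fitness.size(); i++)
            if (fitness.get(i) < fitness.get(worst)) worst = i;
        return worst;
    }

    /** Accepts s only if it is unique and at least matches the worst registered result. */
    boolean tryInsert(int[] s, double f) {
        for (int[] registered : solutions)
            if (Arrays.equals(registered, s)) return false;  // duplicates are disallowed
        if (solutions.size() < capacity) {
            solutions.add(s); fitness.add(f);
            return true;
        }
        int worst = worstIndex();
        if (f >= fitness.get(worst)) {                       // maximization; equal quality allowed since unique
            solutions.set(worst, s); fitness.set(worst, f);  // replace the worst entry
            return true;
        }
        return false;
    }

    /** Dynamic resizing: shrinking discards worst entries, tightening acceptance. */
    void resize(int newCapacity) {
        while (solutions.size() > newCapacity) {
            int w = worstIndex();
            solutions.remove(w);
            fitness.remove(w);
        }
        capacity = newCapacity;
    }
}
```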

After the termination criterion is satisfied, the solution indexed by the best index will be returned as the best solution found. This solution is representative of the best technique determined by an athlete. The eBPA is shown in Algorithm 1.

(1)     Initialize the index variables best = 0, worst = 0, working = 0
(2) Set the size of the Performance List, that is, |PL|
(3) Set probability p_f
(4) Set the first solution in the Performance List, that is, PL[working] = S
(5) Calculate the fitness value of S, that is, F(S)
(6) Set the worked-with solution W = PL[working]
(7) Set Boolean variable updated = false
(8) while not Stopping_Criterion_Met() do (e.g., for i = 1 to I_max do)
 (8.1) if resize() then
    (8.1.1) resize_PL()
 (8.2) end if
 (8.3) if updated = false then
    (8.3.1) S′ = Determine_Solution(W)
 (8.4) else
    (8.4.1) S′ = Determine_Solution(PL[working])
    (8.4.2) updated = false
 (8.5) end if
 (8.6) F(S′) = Determine_Fitness(S′)
 (8.7) if F(S′) better than F(PL[worst]) then
    (8.7.1) updated = perform_Update(S′)
 (8.8) end if
 (8.9) if random() ≤ p_f then
    (8.9.1) W = S′
 (8.10) end if
(9) end while (or, e.g., end for)
(10) return PL[best]

3.2. Tabu Search

TS is a neighborhood search algorithm based on the analogy of something that should not be touched or interfered with [24, 25]. This is implemented by maintaining a limited number of elite solutions (or specific solution attributes) in a list called the Tabu List (TL). The TL is commonly implemented in a first-in-first-out (FIFO) manner, hence recording the most recent best solutions found. In searching the neighborhood of a solution $S$, that is, $N(S)$, the maximum number of neighbors considered is $|N(S)| - |TL|$, as any solution recorded in the TL has a tabu status and will not be interfered with. This technique reduces the risk of cycling around local optima, while the acceptance of disimproved moves is the metaheuristic technique by which TS escapes premature convergence.

TS also employs other strategies such as an aspiration condition, diversification, and intensification. An aspiration condition overrules the tabu status of a solution/attribute. For example, if a solution is found which improves on the best solution but uses a tabu attribute, the tabu status is overruled and the solution is accepted as the best solution found. Diversification is the analogy of a random restart. Intensification constructs other solutions from some of the best attributes of the best solution(s) found. Diversification and intensification additionally help prevent the trap of premature convergence. TS guides the search deterministically, with its strategies modeled around its main feature, which is memory.
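The FIFO tabu memory and the aspiration test can be sketched as below. This is an illustrative Java fragment; the class and method names are our own assumptions rather than the study's code.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

/**
 * Illustrative sketch of the TS memory described above: a fixed-size FIFO
 * Tabu List (TL) plus an aspiration test. Names are assumptions, not the
 * authors' code.
 */
final class TabuMemory {
    private final Deque<int[]> tabuList = new ArrayDeque<>();
    private final int capacity; // |TL|

    TabuMemory(int capacity) { this.capacity = capacity; }

    /** A candidate is tabu if it is currently recorded in the TL. */
    boolean isTabu(int[] candidate) {
        for (int[] t : tabuList)
            if (Arrays.equals(t, candidate)) return true;
        return false;
    }

    /** Aspiration: a tabu candidate is still admissible if it beats the best fitness. */
    boolean admissible(int[] candidate, double f, double bestF) {
        return !isTabu(candidate) || f > bestF;
    }

    /** FIFO insertion: when the TL is full, the oldest entry is released. */
    void remember(int[] solution) {
        if (tabuList.size() == capacity) tabuList.removeFirst();
        tabuList.addLast(solution);
    }
}
```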

TS is implemented in this study by recording overall best solutions. Using a solution $S$, a candidate list (CL) of solutions neighboring $S$ is determined. The best candidate is then selected as the new current solution for the next iteration. If this solution improves on the best solution $S_{best}$, then $S_{best}$ is updated to be this candidate, which also gets inserted into the TL in a FIFO manner. An intensification strategy is implemented such that, if no solution improving on $S_{best}$ is found within a set number of objective function evaluations, a randomly selected best solution from the TL becomes the next current solution after a move is applied to it. The algorithm for TS is shown in Algorithm 2.

(1)     Initialize S to be the initial solution
(2) Set S_best = S
(3) Evaluate the fitness of S, that is, F(S)
(4) Set F(S_best) (the fitness of S_best) = F(S)
(5) Set the size of the Tabu List, that is, |TL|
(6) Set the size of the Candidate List, that is, |CL|
(7) Initiate the Tabu List and the Candidate List, that is, TL = ∅, CL = ∅
(8) for i = 1 to I_max do
(8.1) CL = Generate_New_Candidate_List(S)
(8.2) S = Find_Best_Candidate(CL)
(8.3) F(S) = Determine_Fitness(S)
(8.4) if F(S) better than F(S_best) then
   (8.4.1) S_best = S
   (8.4.2) F(S_best) = F(S)
   (8.4.3) Update TL with S
(8.5) else
   (8.5.1) if Intensification_Criterion_Met() then
    (8.5.1.1) S = Reset_Current()
   (8.5.2) end if
(8.6) end if
(9) end for
(10) return S_best

3.3. Simulated Annealing

SA [26, 27] is a Markov chain optimization technique modeled on the analogy of heated metal annealing to an equilibrium state. At higher temperatures the atomic composition of metal is more volatile, making the metallic structure unstable. However, when the metal starts to cool, the atomic structure becomes less volatile, allowing it to stabilize. When completely cooled, an equilibrium state of stability is reached. For the annealing process to be successful, the rate of temperature decrease must be slow.

High temperatures allow SA to explore different neighborhood regions of the solution space more easily. At these temperatures, local search changes will allow the search trajectory to wander from one neighborhood region to the next, accepting both improved and disimproved solutions using elements of randomization. At higher temperatures the ability to accept disimproved solutions is greater than at lower temperatures. Accepting disimproved solutions is SA's technique for escaping local entrapment. The explorative ability of SA will identify the promising neighborhood regions, but as temperature decreases by a constant rate $\alpha$, the explorative ability will decrease and the exploitative ability will increase. At lower temperatures, exploitation searches neighborhood regions for local optimum points in trying to determine the best solution found by the algorithm. The best solution found by SA is returned when its lowest temperature is reached, which is symbolic of the equilibrium state.
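The temperature-dependent acceptance rule just described is the classical Metropolis criterion. For this maximization problem it can be sketched in Java as follows; this is an illustrative fragment under our own naming, not the study's code.

```java
import java.util.Random;

/**
 * Illustrative sketch of SA's acceptance rule for a maximization problem,
 * with geometric cooling. Names are assumptions, not the authors' code.
 */
final class Metropolis {
    private final Random random = new Random();

    /**
     * Improvements are always accepted; disimprovements are accepted with
     * probability exp((fNew - fCurrent) / t), which shrinks as t decreases.
     */
    boolean accept(double fCurrent, double fNew, double t) {
        if (fNew > fCurrent) return true; // improvement
        return random.nextDouble() < Math.exp((fNew - fCurrent) / t);
    }

    /** Geometric cooling: t is multiplied by alpha (0 < alpha < 1) each step. */
    static double cool(double t, double alpha) { return t * alpha; }
}
```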

SA is implemented by starting off with equivalent $S$ and $S_{best}$ solutions. At each temperature $T$ (reduced by a rate $\alpha$), a number of local search moves are performed on the solution $S$ to produce solutions $S'$. If a solution $S'$ is found which improves on $S$, then $S$ is assigned to be $S'$. However, given a certain Metropolis probability, $S$ can also be assigned a disimproved solution. If $S$ improves on $S_{best}$, then $S_{best}$ is assigned to be $S$. This process continues until $T$ reaches its final temperature $T_f$. The algorithm for SA is shown in Algorithm 3.

(1)  Initialize S to be the initial solution
(2) Set S_best = S
(3) Evaluate the fitness of S, that is, F(S)
(4) Set F(S_best) (the fitness of S_best) = F(S)
(5) Initiate starting temperature T = T_0 and final temperature T_f
(6) while T > T_f do
   (6.1) for i = 1 to I_T do
     (6.1.1) S′ = Determine_Solution(S)
     (6.1.2) F(S′) = Determine_Fitness(S′)
     (6.1.3) if F(S′) better than F(S) then
       (6.1.3.1) accept = true
     (6.1.4) else
       (6.1.4.1) Calculate acceptance probability P = exp((F(S′) − F(S)) / T)
       (6.1.4.2) if P > random() then
            (6.1.4.2.1) accept = true
       (6.1.4.3) end if
     (6.1.5) end if
     (6.1.6) if accept then
       (6.1.6.1) accept = false
       (6.1.6.2) S = S′
       (6.1.6.3) F(S) = F(S′)
       (6.1.6.4) if F(S) better than F(S_best) then
            (6.1.6.4.1) S_best = S
            (6.1.6.4.2) F(S_best) = F(S)
       (6.1.6.5) end if
     (6.1.7) end if
   (6.2) end for
   (6.3) Update T according to the cooling schedule, that is, T = αT
(7) end while
(8) return S_best

4. Results and Discussion

Simulations were run using job sets of sizes $n \in \{500, 1500, 2500\}$, tested on machine sets of sizes $m \in \{2, 5, 10, 15, 20\}$. For each job $j$, its processing time $p_j$ was randomly determined to fall within a fixed interval. To set the start and completion times $a_j$ and $d_j$ for job $j$, two “traffic congestion ratio” variables, $r_1$ and $r_2$, were randomly selected from a predefined set. Using $r_1$, $a_j$ was randomly generated to fall within an interval determined by $r_1$; using $r_2$, $d_j$ was randomly generated to fall within an interval determined by $r_2$.
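The generation recipe can be sketched as follows. Because the exact interval bounds and the congestion-ratio set are not reproduced above, the values pMin, pMax, horizon, and RATIOS below are placeholders; only the overall structure follows the described recipe.

```java
import java.util.Random;

/**
 * Illustrative instance generator following the recipe described above.
 * The bounds pMin/pMax, the horizon, and the ratio set RATIOS are
 * placeholders, not the values used in the study.
 */
final class InstanceGenerator {
    private static final double[] RATIOS = {0.2, 0.5, 0.8, 1.0}; // placeholder set
    private final Random random = new Random();

    /** Returns {p, a, d}: processing times, earliest starts, and due dates. */
    double[][] generate(int n, double pMin, double pMax, double horizon) {
        double[] p = new double[n], a = new double[n], d = new double[n];
        for (int j = 0; j < n; j++) {
            p[j] = pMin + random.nextDouble() * (pMax - pMin);        // processing time p_j
            double r1 = RATIOS[random.nextInt(RATIOS.length)];       // congestion ratio r_1
            double r2 = RATIOS[random.nextInt(RATIOS.length)];       // congestion ratio r_2
            a[j] = random.nextDouble() * r1 * horizon;                // earliest start a_j
            d[j] = a[j] + p[j] + random.nextDouble() * r2 * horizon;  // due date; window fits p_j
        }
        return new double[][] {p, a, d};
    }
}
```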

To test the algorithms fairly, a set of jobs was initially generated and passed in as the input parameter to each of the algorithms, and this set was then used to test the algorithms on a particular machine set. In this way each algorithm used the same job set when tested on a particular machine set, so the results were determined fairly for comparative purposes. To determine average performance results, each algorithm was run 10 times for each job-machine combination; 10 runs were considered sufficient given the expensive computational times of the metaheuristic algorithms. From the 10 runs, per job-machine combination, the best solution of each algorithm is compared. The best solution is referred to as the best fitness value (BFV); this is the highest total weight of all on-time jobs from the 10 runs, per job-machine combination per algorithm. Comparisons of average solution performances are also documented, namely, the average fitness value (AFV) solutions and the average execution time (AVG) performances.
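This protocol is easy to express in code. The sketch below, an illustration under our own Solver abstraction rather than the study's harness, records the BFV, AFV, and average execution time over the runs on a fixed instance.

```java
/**
 * Illustrative sketch of the evaluation protocol: run one algorithm several
 * times on the same instance and record the best fitness value (BFV),
 * average fitness value (AFV), and average execution time. The Solver
 * interface is our own abstraction, not the study's code.
 */
interface Solver {
    double solve(double[][] instance); // returns the total weight of on-time jobs
}

final class Evaluation {
    /** Returns {BFV, AFV, average milliseconds} over the given number of runs. */
    static double[] evaluate(Solver solver, double[][] instance, int runs) {
        double bfv = Double.NEGATIVE_INFINITY, sumFitness = 0, sumMillis = 0;
        for (int r = 0; r < runs; r++) {
            long start = System.currentTimeMillis();
            double f = solver.solve(instance);            // same job set on every run
            sumMillis += System.currentTimeMillis() - start;
            bfv = Math.max(bfv, f);                       // best fitness value
            sumFitness += f;
        }
        return new double[] {bfv, sumFitness / runs, sumMillis / runs};
    }
}
```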

To further test the algorithms fairly, their parameter settings were chosen such that each metaheuristic algorithm executed for exactly 1,000,000 objective function evaluations per run (a consistency check is given below). The parameter settings were as follows.
(i) eBPA: |PL| was set at 5, the number of iterations at 1,000,000, and p_f at 0.005.
(ii) TS: |TL| was set at 7, |CL| at 100, and the number of iterations at 10,000.
(iii) SA: the number of moves per temperature was set at 1,000, T_0 at 115, T_f at 0.005, and α at 0.99.
The program was written in Java, using the NetBeans 7.0 Integrated Development Environment. All simulations were run on the same platform: a computer with a Windows 7 Enterprise operating system, an Intel Core i5 CPU, 4 GB of RAM, and a 500 GB hard drive. The findings of the simulations are documented in Table 1.
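As a point of clarity, the evaluation budgets implied by these settings agree:

$$\text{eBPA: } 1{,}000{,}000 \times 1, \qquad \text{TS: } 10{,}000 \times 100, \qquad \text{SA: } \left\lceil \frac{\ln(0.005/115)}{\ln(0.99)} \right\rceil \times 1{,}000 \approx 1{,}000 \times 1{,}000,$$

each equal to 1,000,000 objective function evaluations per run.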

Table 1 gives the statistical values of the best (BFV) and average (AFV) fitness values of each algorithm, per machine set, for the class of 500 jobs. For clarity, the overall best and average fitness value solutions, per machine set, are highlighted in bold font. From Table 1 it is seen that eBPA determined the overall BFV solutions for all machine sets. On average, eBPA determined the overall AFV solutions for machine sets 2 and 10, while SA determined the overall AFV solutions for machine sets 5, 15, and 20. However, these SA solutions are only marginally superior to eBPA's. TS proved to be the weakest of the algorithms.

Graphical comparisons of the algorithms' best and average fitness value solutions, as determined from Table 1, are seen in Figures 1 and 2.

Table 2 gives the statistical values of the average execution times in milliseconds (ms) for the algorithms, per machine set, for the class of 500 jobs. Although the average execution times of the algorithms are fairly similar, eBPA executed the fastest for machine sets 2, 5, 10, and 20, and TS executed the fastest for machine set 15. The relatively fast execution times of eBPA relate to its small Performance List (PL) size, which strategically decreased as the algorithm iterated. This caused the acceptance criterion to become increasingly restrictive, allowing for greater exploitation in accepting fewer solutions to update the PL. This allowed eBPA to identify stronger solutions and explains its relatively fast execution times. A graphical comparison of the statistics given in Table 2 is seen in Figure 3.

For the class of 500 jobs it is concluded that the eBPA was the strongest algorithm.

Table 3 gives the statistical values for the overall BFV and AFV solutions, per machine set, for the class of 1,500 jobs. From Table 3 it is observed that eBPA determined the overall BFV solutions for all machine sets except machine set 10, for which SA determined the overall BFV solution. eBPA also determined the overall AFV solutions for all machine sets. SA again determined superior solutions over TS.

Graphical comparisons of the algorithms' best and average fitness value solutions, as determined from Table 3, are seen in Figures 4 and 5.

Table 4 gives the statistics of the average execution times of the metaheuristic algorithms, per machine set, for the class of 1,500 jobs. It is observed that the average execution times were much more competitive for this class of jobs. eBPA performed faster on average for machine sets 2, 10, and 20. TS performed the fastest for machine set 5, and SA performed the fastest for machine set 15. Graphical comparisons of the execution time performances are seen in Figure 6.

For the class of 1,500 jobs it is also concluded that the eBPA was the strongest algorithm.

Table 5 gives the statistical values of the BFV and AFV solutions for each algorithm, per machine set, for the class of 2,500 jobs. From Table 5 it is seen that eBPA determined better BFV and AFV solutions for machine sets 10 and 15, while SA determined better BFV and AFV solutions for machine sets 2 and 20. For machine set 5, eBPA determined a better AFV solution and SA determined a better BFV solution.

Graphical comparisons of the algorithms' best and average fitness value solutions, as determined from Table 5, are seen in Figures 7 and 8.

Table 6 gives the statistics of the average execution times for the metaheuristic algorithms, per machine set, for the class of 2,500 jobs. It is observed that for this class, TS executed the fastest for machine set 2, SA executed the fastest for machine set 5, and eBPA executed the fastest for machine sets 10, 15, and 20. Graphical comparisons of the execution time performances are seen in Figure 9.

For the class of 2,500 jobs, both eBPA and SA performed similarly in determining an equivalent number of best solutions. However, eBPA executed the fastest for most machine sets.

The strong performance of the eBPA on this JIT scheduling problem shows its potential as a metaheuristic algorithm. Although standard implementations of the algorithms were compared, the documented results are significant in that the techniques employed by the eBPA prove very competitive against those of TS and SA, which are well-known and very competitive local search metaheuristic algorithms in the literature. The strength of eBPA lies in its memory structure and the techniques used in allowing the population of solutions contained within it to direct the search. Solutions registered in the PL will have identified the most attractive points within the neighborhood regions of the solution space. However, eBPA uses the information of the worst solution in the list as a strategic point from which to move the search forward. The memory structure adapts dynamically in accepting solutions that satisfy the acceptance criterion, and each solution inserted into the PL serves as the next working solution. This strategy allows eBPA to use a population of solutions to direct the search, rather than using the population as a network to exploit a neighborhood region.

As the search iterates and the worst solution in the PL is improved upon, the acceptance criterion becomes more restrictive, allowing for greater levels of exploitation. Exploitation is further increased by the PL dynamically reducing in size, cutting away worst solutions in a strategic manner. This constrains the acceptance criterion further and allows the algorithm to exploit quality solutions as the PL narrows in size. The solutions accepted into the PL do not need to be the best overall; however, along the way the best solution will be found. An added advantage of eBPA is its simplistic design and the few parameter settings that it requires.

5. Conclusion

The Just-In-Time (JIT) scheduling problem is of great significance to both academics and industries. The objective is to determine operational processes that allocate limited business resources efficiently to optimize given business objectives. These objectives may include the optimization of operational costs, operational times, inventory storage, customer and supplier relations, and profit margins. In this study, the JIT problem of allocating a large number of jobs to be processed on parallel machines was investigated. A job represents a business resource required to be made available during a specific window interval of time; an example is the delivery of vehicles to customers that require rented vehicles within a specific time frame. The objective was therefore to determine a schedule that maximizes the total weight of all on-time jobs that could be scheduled. This is an NP-Hard problem.

To determine solutions, we proposed the eBPA algorithm and investigated its results alongside the well-known TS and SA techniques for comparison purposes. The results show that eBPA performed competitively against both TS and SA in terms of the best and average fitness values obtained, as well as execution times, thus showing good potential for other NP-Hard problems. Further study will therefore investigate the performance of eBPA on other types of discrete optimization problems and perhaps compare it with population-based techniques such as Genetic Algorithms [28] and Particle Swarm Optimization [29]. We also aim to further improve its performance through hybridization and parameter improvement.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The financial assistance of the National Research Foundation (DAAD-NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the DAAD-NRF.