Abstract

In this study, a fuzzy-neural ensemble and geometric rule fusion approach is presented to optimize the performance of job dispatching in a wafer fabrication factory using an intelligent rule. The proposed methodology modifies a previous study by fusing two dispatching rules and diversifying the job slacks in novel ways. To this end, the geometric mean of the neighboring distances of slacks is maximized. In addition, a fuzzy c-means (FCM) and backpropagation network (BPN) ensemble approach is proposed to estimate the remaining cycle time of a job, which is an important input to the new rule. A new aggregation mechanism is also designed to enhance the robustness of the FCM-BPN ensemble approach. To validate the effectiveness of the proposed methodology, several experiments were conducted; the experimental results support its effectiveness.

1. Introduction

The scheduling of complex manufacturing systems is usually an NP-hard problem (see Table 1), which means that it is very difficult for the production controller to find the best schedule within a reasonable period of time. A viable strategy for resolving this problem is to develop a digital manufacturing model of the complex manufacturing system, based on which production simulation can be carried out to search for the optimal schedule efficiently. From this viewpoint, this study attempts to optimize the performance of job dispatching in a wafer fabrication factory with the aid of the factory's digital manufacturing model. In this field, a digital manufacturing model is mainly used for three purposes: to evaluate the performance of a scheduling method, to optimize the scheduling performance, and to generate test data for subsequent optimization.

A wafer fabrication factory is a very complex manufacturing system characterized by thousands of machines, tens of product types, various priorities, reentrant process flows, uncertain demand, and other factors. In addition, building a wafer fabrication factory usually costs billions of dollars. This capital-intensive nature makes the efficient use of a wafer fabrication factory through good production management a very important task. However, some studies [1–3] have noted that job scheduling in a semiconductor manufacturing factory is a very difficult task. As a result, many wafer fabrication factories suffer from lengthy cycle times and therefore cannot promise their customers an attractive due date. Moreover, owing to the uncertainty of the cycle time, the risk of missing the due date is also high.

Many different methods can be used for job dispatching in a wafer fabrication factory, such as dispatching rules, heuristics, data mining-based approaches [4, 5], agent technologies [4, 6–8], and simulation. Gupta and Sivakumar [9] classified the approaches for scheduling a wafer fabrication system into four categories: heuristic rules, mathematical programming techniques, neighborhood search methods, and artificial intelligence techniques. Many studies have been devoted to the development of advanced dispatching rules for a wafer fabrication factory. For example, Altendorfer et al. [10] proposed the work in parallel queue (WIPQ) rule to maximize throughput at a low level of work in process (WIP). Zhang et al. [11] proposed the dynamic bottleneck detection (DBD) approach, in which workstations are classified into several categories and different dispatching rules, including first-in first-out (FIFO), the shortest processing time until the next bottleneck (SPNB), and critical ratio (CR), are applied to these categories. Cao et al. [12] proposed a drum-buffer-rope-based scheduling method for semiconductor manufacturing systems that focused on the control of the bottleneck machines. Lee et al. [13] emphasized past scheduling experiences and used Petri nets to model semiconductor manufacturing activities, so that the scheduling of a semiconductor manufacturing process was based on the token movements in the corresponding Petri net. Wang et al. [14] considered the scheduling of unrelated parallel machines in semiconductor manufacturing and, after reducing the problem, proposed some local heuristics. Chen [15] smoothed the fluctuation in the estimated remaining cycle time, balanced it with that of the mean release rate, and proposed the nonlinear fluctuation smoothing policy for mean cycle time (NFSMCT). Hu et al. [16] divided the process flow into several stages and protected the bottleneck step at each reentrant stage from system fluctuations. Although these dispatching rules are relatively easy to use, they cannot produce optimal or near-optimal scheduling results. For that, a scheduling problem has to be formulated as a mathematical programming problem, whose optimal solution gives the optimal schedule of the manufacturing system. However, the mathematical programming problem of scheduling a wafer fabrication factory is too large and can hardly be solved. To resolve this problem, several treatments have been taken in the literature.

The first way is to dynamically choose the most suitable dispatching rule for the wafer fabrication factory from the existing rules. Chern and Liu [17] proved that, under some conditions, general family-based scheduling rules are better than individual job scheduling rules in terms of machine utilization for multiserver, multiple-job reentrant production systems. Hsieh et al. [5] chose one rule from the fluctuation smoothing policy for mean cycle time (FSMCT), the fluctuation smoothing policy for cycle time variation (FSVCT), largest deviation first (LDF), one step ahead (OSA), and FIFO, based on the results of extensive production simulation.

The second way is to add adjustable parameters to the dispatching rule and then optimize them. For example, Chen [18] proposed the one-factor tailored NFSMCT (1f-TNFSMCT) rule and the one-factor tailored nonlinear FSVCT (1f-TNFSVCT) rule; various values of the parameter were tried in production simulation, and the one giving the best performance was chosen. Chen [19] used more parameters and proposed 2f-TNFSMCT and 2f-TNFSVCT. Both Dabbas and Fowler [20] and Dabbas et al. [21] combined several dispatching rules into a single rule by forming a linear combination with relative weights; however, a systematic procedure for determining the weights was missing. Similarly, Chiang et al. [22] developed a genetic algorithm (GA) for generating good dispatching rules by combining multiple rules linearly. Chen and Wang [23] proposed a biobjective nonlinear fluctuation smoothing rule with an adjustable factor (1f-biNFS) to optimize both the average cycle time and the cycle time standard deviation at the same time. More degrees of freedom seem to be helpful to the performance of customizable rules; for this reason, Chen et al. [24] extended 1f-biNFS to a biobjective fluctuation smoothing rule with four adjustable factors (4f-biNFS). One drawback of these methods is that only static factors are used, and they must be determined in advance. To this end, most studies (e.g., [15–20]) performed extensive simulations, which is not only time consuming but also fails to consider enough possible combinations of the factors. To mitigate this, Kim et al. [25] suggested three techniques for accelerating rule comparison using production simulation.

The third way is to estimate the best schedule from limited simulation results. In this manner, only a few combinations of the adjustable parameters are tried in the production simulation, and an estimation technique is then applied to estimate the scheduling performance from the parameter values. To this end, Dabbas et al. [26] applied design of experiment (DOE) techniques as well as the desirability function approach. Li and Sigurdur [27] transformed historical schedules into appropriate data files that were mined to find out which past scheduling decisions corresponded to the best practices. Harrath et al. [7] proposed a hybrid genetic algorithm (GA) and data mining approach to determine the optimal scheduling plan of a job shop, in which a GA was used to generate a learning population of good solutions; these solutions were then mined to extract decision rules that could be transformed into a metaheuristic. Koonce and Tsai [4] proposed a similar methodology. Chen [28] attempted to relate the scheduling performance to the parameter values using a backpropagation network (BPN); however, the explanatory ability of the BPN was not as good as expected. Recently, Chen [29] constructed a nonlinear programming (NLP) model to optimize the parameter values in 2f-TNFSMCT and 2f-TNFSVCT by maximizing the standard deviation of slacks, which was considered to reduce possible ties. However, the NLP model was very difficult to optimize and required a lot of time to solve.

In short, the existing approaches have the following problems:
(1) A more effective approach to optimize the parameter values is needed.
(2) Maximizing the standard deviation of slacks may lead to a situation in which most slacks concentrate on the two extremes.
(3) How to avoid carrying out large-scale, time-consuming production simulation experiments is worth exploring.
(4) New applications of the digital manufacturing model are still to be discovered.

To solve some of these problems and to further improve the performance of job scheduling in a wafer fabrication factory, Chen's approach has been modified, and a fuzzy-neural-ensemble and geometric rule fusion approach is proposed in this study. The proposed methodology has the following new characteristics:
(1) To diversify the slacks, the geometric mean of the neighboring distances of slacks is considered, rather than the standard deviation.
(2) Diversifying the slacks leads to an NLP problem that is not easy to solve. To solve this problem, a systematic procedure has been established to search for the optimal solution of the NLP problem in an effective manner.
(3) The fuzzy c-means (FCM) and backpropagation network (BPN) ensemble approach [30] was also modified to estimate the remaining cycle time of a job. With a novel aggregation mechanism, the FCM-BPN ensemble approach is not only accurate but also robust to untrained data.
(4) The two existing rules, 2f-TNFSMCT and 2f-TNFSVCT, are fused in a novel way so that two objectives, the average cycle time and the cycle time variation, can be improved at the same time. Data fusion is a technique that gathers information and combines data from multiple sources in order to achieve inferences; this is more efficient, and potentially more accurate, than inference from a single source.

The differences between the proposed methodology and the previous methods are summarized in Table 2.

The remainder of this paper is organized as follows. In Section 2, a new rule is proposed through the fusion of the two rules, 2f-TNFSMCT and 2f-TNFSVCT, in a novel way. To determine the values of the parameters in the new rule, an NLP model is established, and a systematic procedure is used to solve it. In addition, because the remaining cycle time of each job is a necessary input to the new rule, the FCM-BPN ensemble approach is applied to estimate it. In Section 3, the performance of the proposed methodology is evaluated through several experiments. Finally, this paper is concluded in Section 4.

2. Methodology

The variables and parameters that will be used in this study are defined in the Abbreviations section.

The proposed methodology includes the following six steps.

Step 1. Fuse the two rules, 2f-TNFSMCT and 2f-TNFSVCT, to form the new rule 2f-biFS.

Step 2. Establish an NLP model to optimize the parameters of the new rule.

Step 3. Apply a systematic procedure to solve the NLP problem.

Step 4. Estimate the remaining cycle time of a job using the FCM-BPN ensemble approach.

Step 5. Incorporate the estimated remaining cycle time into the new rule.

Step 6. Carry out experiments to evaluate the performance of the new rule.

The flow chart of the proposed methodology is shown in Figure 1. The rule proposed in this study is therefore denoted FCM-BPN-ensemble-2f-biFS. Some steps correspond to the operations of the digital manufacturing model, as summarized in Table 3.

2.1. The Wafer Fabrication Environment

In this study, a wafer fabrication factory located in the Taichung Science Park of Taiwan is analyzed. The wafer fabrication factory has a monthly capacity of about 25,000 wafers. Currently, more than 10 types of memory products are produced with 58 nm~110 nm technologies, using more than 500 workstations. Machine failures and repair times basically follow exponential and uniform distributions, respectively. Jobs released into the factory are assigned one of three priorities, that is, "normal", "hot", and "super hot"; jobs with higher priorities are processed first. The current scheduling policy in the wafer fabrication factory is FIFO. As a result, the longest average cycle time exceeds three months, with a variation of more than 300 hours. The wafer fabrication factory is therefore seeking better scheduling rules to replace FIFO.

2.2. The New Rule

Information fusion has been widely used in scheduling and other system control applications. Zhao et al. [31] gathered and fused historical passenger flow data of urban rail transportation to calculate the characteristic data for real-time scheduling. Lebret et al. [32] presented an intelligence fusion system based on gain-scheduling control and hybrid neural networks for system fault detection and fault-tolerant control. Li et al. [33] measured and fused the collected raw data for the scheduling of a multihop hierarchical sensor network. In this study, the two basic fluctuation smoothing (FS) rules [34] are fused, in order to obtain better control over the scheduling of the wafer fabrication factory. The first is FSMCT, in which the slack of job $j$ at step $u$ is defined as

$$SKM_{ju} = \frac{j}{\bar{\lambda}} - RCT_{ju}, \quad (1)$$

which is aimed at reducing the average cycle time. Reducing the cycle time is a critical task in managing a wafer fabrication factory, especially in a mass-production setting such as a memory fabrication factory [35]. The second rule is FSVCT:

$$SKV_{ju} = R_j - RCT_{ju}, \quad (2)$$

which is aimed at minimizing the cycle time standard deviation. Some variants of the two rules are summarized in Table 4 [18, 19, 23, 36, 37]. Jobs with smaller slack values ($SKM_{ju}$ or $SKV_{ju}$) are processed earlier. However, a tie is formed if more than one job has the same slack, which results in scheduling difficulties.
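As a concrete illustration, the following minimal Python sketch sequences a hypothetical queue by the slacks in (1) and (2). The function names, job data, and mean release rate are invented for the example.

```python
def fsmct_slack(j, mean_release_rate, rct):
    """FSMCT slack (1): the job's nominal arrival time j / lambda-bar minus its
    estimated remaining cycle time; a smaller slack means dispatch earlier."""
    return j / mean_release_rate - rct

def fsvct_slack(release_time, rct):
    """FSVCT slack (2): release time minus estimated remaining cycle time."""
    return release_time - rct

# Hypothetical queue: (job index, release time [h], estimated remaining cycle time [h])
jobs = [(1, 0.0, 1200.0), (2, 6.0, 950.0), (3, 12.0, 1100.0)]
lam = 0.5  # assumed mean release rate (jobs per hour)

by_fsmct = sorted(jobs, key=lambda jb: fsmct_slack(jb[0], lam, jb[2]))
by_fsvct = sorted(jobs, key=lambda jb: fsvct_slack(jb[1], jb[2]))
print([jb[0] for jb in by_fsmct], [jb[0] for jb in by_fsvct])
```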

Chen [19] normalized the parameters in the FS rules and then divided them, which led to the 2f-TNFSMCT rule (3) and the 2f-TNFSVCT rule (4), each of which contains two adjustable factors $\zeta_1$ and $\zeta_2$ with $0 \le \zeta_1, \zeta_2 \le 1$. In Chen's study, three models, (6)~(8), were established to form the combination of $\zeta_1$ and $\zeta_2$. However, no model can guarantee the optimization of the scheduling performance. To solve this problem, Wang et al. [37] advocated that diversifying the slacks of jobs reduces their overlap, which avoids mis-scheduling and is conducive to the scheduling performance. To this end, Wang et al. maximized the standard deviation of the slacks. However, this treatment may lead to a situation in which most slacks concentrate on the two extremes [29], as illustrated in Figure 2.

In this study, to diversify the slacks, the geometric mean of the neighboring distances of the slacks is maximized instead:

$$\max \left( \prod_{j=1}^{n-1} \left( SK_{(j+1)} - SK_{(j)} \right) \right)^{1/(n-1)}, \quad (9)$$

where $SK_{(j)}$ denotes the $j$th smallest slack. An example is illustrated in Figure 3. Obviously, to maximize the geometric mean of the neighboring distances, the slacks need to be evenly distributed.

However, (9) is not easy to optimize directly. To tackle this, the following procedure is established (a code sketch follows the example in Table 5):
(1) Change the values of the parameters.
(2) Derive the slack of each job.
(3) Exclude jobs with very large or very small slacks.
(4) Sort the slacks of the remaining jobs.
(5) Calculate the distance between each neighboring pair.
(6) Add a small value, for example, 0.01, to each distance to avoid zero distances (ties).
(7) Obtain the geometric mean of the distances.
(8) If the geometric mean has been maximized, stop; otherwise, return to (1).

As a result, in 2f-TNFSMCT, the following objective function is to be optimized:

$$\max Z_1 = \left( \prod_{j=1}^{n-1} \left( SKM_{(j+1)} - SKM_{(j)} \right) \right)^{1/(n-1)}, \quad (10)$$

where $SKM_{(j)}$ is the slack of the job that ranks $j$th according to 2f-TNFSMCT. On the other hand, in 2f-TNFSVCT, the following objective function is to be optimized:

$$\max Z_2 = \left( \prod_{j=1}^{n-1} \left( SKV_{(j+1)} - SKV_{(j)} \right) \right)^{1/(n-1)}, \quad (11)$$

where $SKV_{(j)}$ is the slack of the job that ranks $j$th according to 2f-TNFSVCT. To fuse the two rules into a single one, that is, 2f-biFS, a natural way is to maximize the weighted geometric mean of the two objective functions:

$$\max Z = Z_1^{\omega} \cdot Z_2^{1-\omega}, \quad (12)$$

subject to several linear and nonlinear constraints that tie $\zeta_1$ and $\zeta_2$ together through the chosen model among (6)~(8) and limit them to within $[0, 1]$. This model is obviously an NLP problem. Since $\zeta_1$ and $\zeta_2$ are limited to be within $[0, 1]$, the following exhaustive search algorithm can be used to reach the optimal solution of the NLP problem in an effective way.

Step 1. Choose a model from (6)~(8).

Step 2. Determine the weight $\omega$.

Step 3. Let $\zeta_1 = 0$.

Step 4. Calculate $\zeta_2$ according to the chosen model.

Step 5. Calculate the slack of each job according to 2f-TNFSMCT.

Step 6. Calculate $Z_1$ according to (10).

Step 7. Calculate the slack of each job according to 2f-TNFSVCT.

Step 8. Calculate $Z_2$ according to (11).

Step 9. Calculate $Z = Z_1^{\omega} Z_2^{1-\omega}$ according to (12). If $Z$ is greater than the best value found so far, $Z^*$, then let $Z^* = Z$ and record the current $\zeta_1$.

Step 10. Increase $\zeta_1$ by the search step. If $\zeta_1$ is greater than 1, go to Step 11; otherwise, return to Step 4.

Step 11. Search the neighborhood of the current optimal solution to fine-tune the value of $\zeta_1$.

An example is provided in Table 5 to illustrate the procedure mentioned above. When the nonlinear model in (7) is adopted, the optimal objective function value is $Z^* = 47.1$ at the corresponding optimal value of $\zeta_1$.
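To make the procedure concrete, the sketch below computes the geometric-mean objective and performs the exhaustive search in Python. It is a sketch under assumptions: the relation between $\zeta_2$ and $\zeta_1$ is a placeholder for the chosen model among (6)~(8), and the slack generator is a dummy supplied only so that the code runs.

```python
import numpy as np

def geo_mean_distance(slacks, eps=0.01):
    """Objective (9)~(11): geometric mean of the distances between neighboring
    sorted slacks; eps (e.g., 0.01) is added to avoid zero distances (ties)."""
    gaps = np.diff(np.sort(np.asarray(slacks, float))) + eps
    return float(np.exp(np.mean(np.log(gaps))))  # log-space product avoids overflow

def exhaustive_search(slacks_of, omega, step=0.01):
    """Scan zeta1 over [0, 1]; zeta2 is derived from zeta1 by the chosen model
    among (6)~(8) (a placeholder linear model is used here)."""
    best = (-np.inf, None)
    zeta1 = 0.0
    while zeta1 <= 1.0 + 1e-9:
        zeta2 = 1.0 - zeta1                      # placeholder for models (6)~(8)
        z1 = geo_mean_distance(slacks_of("MCT", zeta1, zeta2))
        z2 = geo_mean_distance(slacks_of("VCT", zeta1, zeta2))
        z = z1 ** omega * z2 ** (1.0 - omega)    # weighted geometric mean (12)
        if z > best[0]:
            best = (z, zeta1)
        zeta1 += step
    return best  # (Z*, optimal zeta1); neighborhood fine-tuning may follow

# Dummy slack generator for demonstration only
rng = np.random.default_rng(0)
demo = lambda rule, z1, z2: rng.normal(0.0, 1.0 + z1, size=20)
print(exhaustive_search(demo, omega=0.5))
```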

A parametric analysis was also performed. The objective function values associated with various values of $\zeta_1$ are shown in Figure 4. In theory, the objective function value converges to a small positive value.

The remaining cycle time of a job is an important input to the 2f-biFS rule. To estimate it, the FCM-BPN ensemble approach [30] was modified, and a new aggregation mechanism was proposed, as described in the following sections.

2.3. The FCM Approach

FCM has been applied for a variety of purposes in wafer fabrication factories. For instance, Liu and Chen [38] proposed a modified FCM algorithm, along with a quality index, to cluster the characteristic values of low-yield wafers. Wang [39] proposed a hybrid scheme combining entropy FCM (EFCM) with spectral clustering to denoise noisy wafer maps and extract meaningful defect clusters. In the proposed methodology, the remaining cycle times of all jobs in front of a machine must be estimated before the order of these jobs is determined. In addition, job classification is considered beneficial to the accuracy of remaining cycle time estimation [30, 36–39]. For these reasons, in the FCM-BPN ensemble approach, jobs are classified into clusters using FCM. The use of FCM has the following advantages:
(1) Classification is a subjective concept, so an absolute (crisp) job classification may not be correct. In a fuzzy classification method like FCM, a job belongs to more than one cluster with different degrees of membership, which addresses this problem. The FCM-BPN ensemble approach considers the estimates from all of the clusters and may therefore be more robust than FCM-BPN without the ensemble.
(2) In a crisp clustering method, some clusters may contain very few examples. In contrast, in FCM all examples belong to every cluster to different degrees, which avoids this problem.

The objective function of FCM is to minimize the weighted sum of squared errors:

$$\min \sum_{k=1}^{K} \sum_{j=1}^{n} \mu_{jk}^{m} e_{jk}^{2}, \quad (16)$$

where $K$ is the required number of clusters; $n$ is the number of jobs; $\mu_{jk}$ indicates the membership with which job $j$ belongs to cluster $k$; $e_{jk}$ measures the distance from job $j$ to the centroid of cluster $k$; and $m$ is a parameter that adjusts the fuzziness and is usually set to 2. The procedure of FCM starts from the normalization of the data. The (normalized) attributes of job $j$ are placed in vector $x_j$. Subsequently, an initial guess of the clustering results is generated; the performance of FCM is highly sensitive to this initial guess. After each round of clustering, the centroid of each cluster is updated as follows:

$$\bar{x}_k = \frac{\sum_{j=1}^{n} \mu_{jk}^{m} x_j}{\sum_{j=1}^{n} \mu_{jk}^{m}}, \quad (17)$$

in which

$$\mu_{jk} = \frac{1}{\sum_{q=1}^{K} \left( e_{jk}/e_{jq} \right)^{2/(m-1)}}, \quad (18)$$

$$e_{jk} = \left\| x_j - \bar{x}_k \right\|, \quad (19)$$

where $\bar{x}_k$ is the centroid of cluster $k$; $\mu_{jk}$ and $e_{jk}$ are updated at the same time. The iterative process of job clustering continues until the clustering results converge:

$$\max_{j,k} \left| \mu_{jk}^{(t)} - \mu_{jk}^{(t-1)} \right| < d, \quad (20)$$

where $d$ is a real number representing the threshold for the convergence of the memberships. A problem of FCM is that the number of clusters must be decided in advance. To this end, the separate distance test ($S$ test) proposed by Xie and Beni [40] is applicable:

$$S = \frac{\sum_{k=1}^{K} \sum_{j=1}^{n} \mu_{jk}^{m} e_{jk}^{2} / n}{e_{\min}^{2}}, \quad (21)$$

$$e_{\min} = \min_{p \ne q} \left\| \bar{x}_p - \bar{x}_q \right\|, \quad (22)$$

where $K$ is the number of categories. According to (22) and (21), $e_{\min}$ is a function of $K$, and $S$ is a function of $e_{\min}$. Therefore, several values of $K$ can be tried, and the value minimizing $S$ determines the most suitable number of clusters. After FCM, the original digital manufacturing model has been split into several smaller ones that are easier to deal with.
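A minimal NumPy sketch of this clustering step is given below. It implements the standard FCM updates (17)~(20) and the $S$ test (21)~(22); the paper itself uses the MATLAB Fuzzy Logic Toolbox, and the job attributes here are randomly generated for illustration.

```python
import numpy as np

def fcm(X, K, m=2.0, d=1e-4, max_iter=200, seed=0):
    """Standard fuzzy c-means on normalized job attributes X of shape (n, dims)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), K))
    U /= U.sum(axis=1, keepdims=True)            # memberships of each job sum to 1
    for _ in range(max_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]                         # cf. (17)
        e = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12  # cf. (19)
        U_new = 1.0 / ((e[:, :, None] / e[:, None, :]) ** (2 / (m - 1))).sum(axis=2)  # cf. (18)
        done = np.abs(U_new - U).max() < d       # convergence test, cf. (20)
        U = U_new
        if done:
            break
    return U, C

def s_test(X, U, C, m=2.0):
    """Xie-Beni separate distance test, cf. (21)~(22); a smaller S is better."""
    e2 = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) ** 2
    e_min2 = min(np.sum((C[p] - C[q]) ** 2)
                 for p in range(len(C)) for q in range(len(C)) if p != q)
    return float((U ** m * e2).sum()) / (len(X) * e_min2)

# Try several K and keep the one minimizing S (hypothetical normalized data)
X = np.random.default_rng(1).random((24, 3))
S = {K: s_test(X, *fcm(X, K)) for K in range(2, 7)}
print(min(S, key=S.get), S)
```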

An example is provided in Table 6 to illustrate the application of FCM. All decision and response variables have been normalized into [0.1, 0.9] to allow for future values that may be greater or smaller than all of the historical values. The Fuzzy Logic Toolbox of MATLAB was used to implement the FCM approach.

The results of the $S$ test are summarized in Table 7. In this case, the optimal number of clusters was 3. The clustering results for $K = 3$ are shown in Table 8. If each job is assigned to the cluster in which its membership is highest, the classification results are as shown in Table 9.

2.4. The BPN Approach

After clustering, a BPN is constructed for each cluster. A portion of the jobs in each cluster is fed into the BPN as "training examples" to determine its parameter values. The configuration of the BPN is established as follows. The BPN is a three-layer multiple-input single-output (MISO) network. Its inputs are the normalized values of the parameters associated with a job. There is a single hidden layer with $L$ neurons. The output from the BPN is the normalized value of the remaining cycle time estimate. The activation function used in each layer is the log-sigmoid function (see Figure 5). The Levenberg-Marquardt algorithm is applied to train the BPN because of its efficiency; BPN training using the Levenberg-Marquardt algorithm has been described in many past studies in this field [29] and therefore is not repeated here.
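The sketch below mirrors the stated configuration in Python/NumPy, with one substitution: to keep the example self-contained, it is trained with plain gradient descent instead of the Levenberg-Marquardt algorithm, and the 16-job data set is hypothetical.

```python
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPN:
    """Three-layer MISO network with log-sigmoid activations in the hidden and
    output layers. Gradient descent is substituted here for the paper's
    Levenberg-Marquardt training (MATLAB Neural Network Toolbox)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.H = logsig(X @ self.W1 + self.b1)   # hidden-layer outputs
        return logsig(self.H @ self.W2 + self.b2)

    def fit(self, X, y, lr=0.5, epochs=5000):
        for _ in range(epochs):
            o = self.forward(X)
            d_o = (o - y) * o * (1.0 - o)        # output delta (logsig derivative)
            d_h = np.outer(d_o, self.W2) * self.H * (1.0 - self.H)
            self.W2 -= lr * self.H.T @ d_o / len(X)
            self.b2 -= lr * d_o.mean()
            self.W1 -= lr * X.T @ d_h / len(X)
            self.b1 -= lr * d_h.mean(axis=0)

# Hypothetical cluster of 16 jobs: first 12 for training, the rest for testing;
# inputs and outputs are assumed normalized (e.g., into [0.1, 0.9]) beforehand
rng = np.random.default_rng(1)
X, y = rng.random((16, 3)), 0.1 + 0.8 * rng.random(16)
net = BPN(n_in=3, n_hidden=6)
net.fit(X[:12], y[:12])
test_estimates = net.forward(X[12:])
```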

Take cluster 1 in the previous example. There are 16 jobs in this cluster. They were split into two parts, the training data (the first 12 jobs) and the testing data (the remaining jobs), and a BPN was then constructed to estimate the cycle times of the jobs in this cluster. The Neural Network Toolbox of MATLAB was used to implement the BPN approach. The estimation results are summarized in Table 10. The estimation performance was basically very good; for the testing data, however, the estimation error was relatively large.

The estimation accuracy can be evaluated with the following indexes:

$$\mathrm{MAE} = \frac{\sum_{j=1}^{n} \left| a_j - o_j \right|}{n},$$

$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{j=1}^{n} \frac{\left| a_j - o_j \right|}{a_j},$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{j=1}^{n} \left( a_j - o_j \right)^2}{n}},$$

where $a_j$ and $o_j$ denote the actual value and the forecast of job $j$, respectively, and $n$ is the total number of data. In this example, MAE = 43 (hrs), MAPE = 3.7%, and RMSE = 62 (hrs). The performance of the FCM-BPN approach was also compared with those of a BPN trained using the Levenberg-Marquardt algorithm and an FCM-BPN trained using the gradient descent algorithm. The results are shown in Table 11.
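The three indexes can be computed with a few lines of Python; the sample values below are invented.

```python
import numpy as np

def accuracy_indexes(actual, forecast):
    """MAE, MAPE (%), and RMSE between actual values a_j and forecasts o_j."""
    a, o = np.asarray(actual, float), np.asarray(forecast, float)
    mae = float(np.abs(a - o).mean())
    mape = float(100.0 * (np.abs(a - o) / a).mean())
    rmse = float(np.sqrt(((a - o) ** 2).mean()))
    return mae, mape, rmse

# Hypothetical actual and estimated remaining cycle times (hours)
print(accuracy_indexes([1150, 980, 1320], [1190, 945, 1296]))
```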

2.5. Aggregating the Estimates from BPNs

In past studies, a job was usually classified into the cluster with the highest membership. However, that makes FCM no different from crisp classifiers. To tackle this problem, Chen [30] applied the BPNs of all clusters to estimate the cycle time of a job and used another BPN to aggregate these estimates. However, the BPN aggregator is very sensitive and may produce unexpected results for untrained data. In addition, Chen [30] also showed that the aggregation performance of a simple linear combination was even worse. For these reasons, a new aggregation mechanism is proposed as follows.

According to (18), $\mu_{jk}$ is inversely proportional to $e_{jk}^{2/(m-1)}$; therefore,

$$e_{jk} \propto \mu_{jk}^{-(m-1)/2}.$$

Further, according to (19), the estimation error is proportional to the distance to the centroid. For this reason, a natural way to aggregate the estimates from the BPNs is

$$RCT_{ju} = \frac{\sum_{k=1}^{K} \mu_{jk}^{(m-1)/2} RCT_{juk}}{\sum_{k=1}^{K} \mu_{jk}^{(m-1)/2}},$$

where $RCT_{juk}$ is the remaining cycle time of job $j$ estimated by the BPN of cluster $k$: the larger the membership, the smaller the expected error, and therefore the greater the weight.
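The sketch below illustrates the aggregation mechanism. The weighting exponent follows from the membership-distance relation derived above (so each weight is $\sqrt{\mu_{jk}}$ when $m = 2$); it is one plausible reading of the mechanism, and the memberships and estimates are invented.

```python
import numpy as np

def aggregate_estimates(memberships, estimates, m=2.0):
    """Fuse one job's per-cluster BPN estimates of the remaining cycle time.
    Each weight grows with the job's membership in the cluster: since the
    distance to a centroid scales as mu**(-(m - 1) / 2), weighting by
    mu**((m - 1) / 2) (sqrt(mu) for m = 2) favors closer clusters."""
    mu = np.asarray(memberships, float)
    w = mu ** ((m - 1.0) / 2.0)
    return float(w @ np.asarray(estimates, float) / w.sum())

# One job, three clusters: memberships and per-cluster estimates (hours)
print(aggregate_estimates([0.62, 0.30, 0.08], [1180.0, 1240.0, 1430.0]))
```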

In the previous example, after aggregation, the estimation results are as shown in Table 12. The estimation accuracy, measured in terms of MAE, for example, was slightly improved from 43 hours to 40.5 hours. It is worth noting that the estimation error was significantly reduced for the untrained (testing) data, which supports the robustness of the aggregation mechanism. In Table 13, the performances of several aggregation mechanisms are also compared. In the linear combination, the weighted sum of the estimates from all of the BPNs was obtained, in which each weight was equal to the membership of the job in the corresponding cluster. According to the experimental results, although the BPN aggregation mechanism achieved the best estimation accuracy, it may generate considerable deviations for untrained (testing) data. In contrast, the proposed aggregation mechanism did not have this problem, and its estimation accuracy was also quite good.

3. Experiments

Twelve scheduling policies, including FIFO, EDD, CR, SRPT, FSVCT, FSMCT, DBD [11], NFS [23], 2f-TNFSMCT, 2f-TNFSVCT, the slack-diversifying rule [29], and the proposed FCM-BPN-ensemble-2f-biFS, were applied to schedule the target wafer fabrication factory. In total, data on 1000 jobs from five major cases were collected and separated by product type and priority.

In FIFO, jobs were sequenced on each machine first by their priorities and then by their arrival times at the machine. In EDD, jobs were sequenced first by their priorities and then by their due dates, which were set equal to the release time plus a multiple of the total processing time:

$$d_j = R_j + k \cdot TPT_j,$$

where $k$ is the cycle time multiplier, determined from the cycle time statistics. In CR, jobs were sequenced first by their priorities and then by their critical ratios:

$$CR_{ju} = \frac{d_j - t}{RPT_{ju}}.$$

In SRPT, jobs were sequenced first by their priorities and then by their remaining processing times. In FSVCT and FSMCT, jobs were sequenced on each machine first by their priorities and then by their slack values, which were determined by (1) and (2); to support this, the remaining cycle time statistics were collected from the historical data. The operation of NFS is similar, except that its slack incorporates an adjustable factor that ranges between 0 and 1 and was optimized by trying various values in the experiment. In DBD, workstations are first classified into four categories, and then CR + FIFO, the shortest processing time until the next bottleneck (SPNB) + CR + FIFO, shortest processing time (SPT) + CR + FIFO, and CR + FIFO are applied to sequence jobs in these categories, respectively. In the slack-diversifying rule [29], the weighted sum of the standard deviations of the slacks calculated by 2f-TNFSMCT and 2f-TNFSVCT was obtained, and jobs were sequenced according to this value.
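For reference, a short Python sketch of the EDD and CR sequencing keys as reconstructed above; the priority encoding, cycle time multiplier, and job data are invented.

```python
def edd_due_date(release_time, total_processing_time, k):
    """EDD due date: release time plus k times the total processing time,
    where k is the cycle time multiplier set from cycle time statistics."""
    return release_time + k * total_processing_time

def critical_ratio(due_date, now, remaining_processing_time):
    """CR: time remaining until the due date over the remaining processing
    time; smaller ratios are more urgent (one common form; variants exist)."""
    return (due_date - now) / remaining_processing_time

# Hypothetical jobs: (priority, release [h], total proc. [h], remaining proc. [h]);
# priority 0 is the hottest, so an ascending sort processes it first
jobs = [(1, 0.0, 400.0, 250.0), (0, 20.0, 380.0, 300.0), (1, 35.0, 410.0, 150.0)]
now, k = 100.0, 2.2
cr_order = sorted(jobs, key=lambda j: (
    j[0], critical_ratio(edd_due_date(j[1], j[2], k), now, j[3])))
print([jobs.index(j) for j in cr_order])
```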

Two performance measures, the average cycle time and the cycle time standard deviation of each case, achieved by the twelve scheduling policies were observed and are summarized in Tables 14 and 15.

According to the experimental results, the following points can be made:
(1) For the average cycle time, the proposed FCM-BPN-ensemble-2f-biFS outperformed the eleven existing policies. The most significant advantage was over the 2f-TNFSVCT rule, about 28% on average.
(2) The proposed FCM-BPN-ensemble-2f-biFS also surpassed the existing rules, especially FSMCT, in reducing the cycle time standard deviation. The average advantage was up to 61%.
(3) Both the slack-diversifying rule [29] and the proposed FCM-BPN-ensemble-2f-biFS achieved very good scheduling performances, which supports slack diversification as a viable strategy for improving the performances of similar rules.
(4) In addition, the performance of the proposed FCM-BPN-ensemble-2f-biFS was better than that of the slack-diversifying rule [29]. This shows that the treatments taken in this study, namely the new way of slack diversification and the formation of the BPN ensemble, were indeed effective. To differentiate their effects, additional experiments were conducted, and the results are shown in Figure 6. Forming the BPN ensemble was more effective than the new way of slack diversification, which is reasonable because, in theory, slack diversification has its limits.
(5) If the cycle time is long, the remaining cycle time will be much longer than the remaining processing time, which renders SRPT ineffective. As a result, SRPT performed poorly in reducing the average cycle times of such cases, for example, case IV. FSMCT has similar problems.
(6) In contrast, the performances of EDD and CR were satisfactory for cases with short cycle times, because the cycle time (multiplier) is then less likely to deviate from the estimated value.
(7) The performances of different fusion mechanisms were also compared. To this end, a linear fusion in which the slack values derived by the two rules were simply added was also tested. The results are shown in Figure 7. The effects of the linear fusion mechanism on the average cycle time were poor, and the proposed geometric fusion mechanism was clearly the better alternative.

4. Conclusions and Directions for Future Research

To further improve the performance of job scheduling in a wafer fabrication factory, the slack-diversifying rule of Chen [29] was modified, and FCM-BPN-ensemble-2f-biFS was proposed in this study. In the proposed methodology, two existing rules, 2f-TNFSMCT and 2f-TNFSVCT, are fused by maximizing the geometric mean of the neighboring distances of slacks. In addition, to enhance the accuracy of estimating the remaining cycle time, the FCM-BPN ensemble approach is applied with a novel aggregation mechanism that reflects the operation of FCM and does not overreact to untrained data.

The effectiveness of the proposed FCM-BPN-ensemble-2f-biFS was evaluated with several experiments. According to the experimental results, the following conclusions were drawn:
(1) The proposed FCM-BPN-ensemble-2f-biFS outperformed the eleven existing methods/rules in reducing the average cycle time and the cycle time standard deviation simultaneously.
(2) To diversify the slacks of jobs, maximizing the geometric mean of the neighboring distances of slacks is a better choice than maximizing the standard deviation of slacks.
(3) Improving the accuracy of the remaining cycle time estimation was shown to be a more effective way to improve the scheduling performance, since slack diversification has its limits.

However, this study has its limits, which can only be resolved by applying the methodology to an actual wafer fabrication factory. In addition, developing different versions of the proposed rule for bottleneck and nonbottleneck machines can be considered in future studies. Further, to optimize the values of the parameters, other search algorithms that reach near-optimal solutions in less time should be developed.

Abbreviations

$j$: Job index
$u$: Step number
$CR_{ju}$: The critical ratio of job $j$ at step $u$
$CT_j$: The cycle time of job $j$
$CTE_j$: The estimated cycle time of job $j$
$d_j$: The due date of job $j$
$R_j$: The release time of job $j$
$RCT_{ju}$: The estimated remaining cycle time of job $j$ from step $u$
$RPT_{ju}$: The remaining processing time of job $j$ from step $u$
$SCT_{ju}$: The step cycle time of job $j$ until step $u$
$SK_{ju}$: The slack of job $j$ at step $u$
$SKM_{ju}$: The slack of job $j$ at step $u$ in FSMCT and its variants
$SKV_{ju}$: The slack of job $j$ at step $u$ in FSVCT and its variants
$t$: The current time
$TPT_j$: The total processing time of job $j$
$\bar{\lambda}$: Mean release rate
$h_l$: The output from hidden-layer node $l$, $l = 1 \sim L$
$w_{il}^{h}$: The connection weight between input node $i$ and hidden-layer node $l$, $i = 1 \sim I$
$w_l^{o}$: The connection weight between hidden-layer node $l$ and the output node
$x_{ij}$: Inputs to the BPN, $i = 1 \sim I$
$\theta_l^{h}$: The threshold on hidden-layer node $l$
$\theta^{o}$: The threshold on the output node.

Acknowledgment

This work was supported by the National Science Council of Taiwan.