Abstract

This paper discusses the stock size selection problem (Chambers and Dyson, 1976), which is of relevance in the float glass industry. Given a fixed integer N, generally between 2 and 6 (but potentially larger), we find the N best sizes for intermediate stock from which to cut a roster of orders. An objective function is formulated with the purpose of minimizing wastage, and the problem is phrased as a combinatorial optimization problem involving the selection of columns of a cost matrix. Some bounds and heuristics are developed, and two exact algorithms (depth-first search and branch-and-bound) are applied to the problem, as well as one approximate algorithm (NOMAD). It is found that wastage reduces dramatically as N increases, but this trend becomes less pronounced for larger values of N (beyond 6 or 7). For typical values of N, branch-and-bound is able to find the exact solution within a reasonable amount of time.

1. Introduction

The cutting and packing of stock are important problems in the metal, paper, wood, and glass industries (amongst others). Consequently, many researchers have treated these problems as mathematical optimization problems and derived good algorithms for their solution. In particular, the stock-cutting problem is concerned with the cutting of specific rectangles (orders) desired by customers from larger shapes (blanks) produced during the manufacturing process. This problem was first treated as a linear programming problem in [1] for one-dimensional stock cutting and in [2] for two-dimensional stock cutting, and it has since been extensively studied in various forms. Indeed, Sweeney and Paternoster [3] reviewed more than 400 books, articles, dissertations, and working papers on stock cutting and packing in 1992, and since then new work has appeared (e.g., [4, 5]). In general, the stock-cutting problem is concerned with cutting many smaller rectangles (or other shapes) from a fixed larger rectangle. A related, but less well-known, problem is the selection of stock sizes or blanks (the larger rectangles) from which to cut orders. This problem is of particular importance in the float glass industry, where “holding good stock sizes appears to have at least as big an impact on trim loss as cutting up the stock plates efficiently” [6].

A typical glass manufacturing plant receives hundreds of different sized orders per year for a single material and thickness of glass. A single order size will typically need to be cut hundreds, thousands, or tens of thousands of times to satisfy customer demand. In the production of float glass, a continuous ribbon of flat glass is produced in the manufacturing plant. This ribbon is cut on-line into large sizes (blanks) that are stored and cut as needed into specific order sizes. This two-stage cutting process is carried out for various practical reasons: it is costly and sometimes impossible to cut the many different order sizes directly on the float-line, and it is also sometimes infeasible to store the many different order sizes in advance. Given expected order sizes and numbers, the stock size selection problem is the problem of deciding which large sizes (blanks) to cut on the float line in order to minimize wastage after all the orders have been cut from these blanks.

In this paper, we study the stock size selection problem as it applies to a local South African float glass manufacturing plant. Given a list of orders and a small positive integer, $N$, as well as cutting limitations on the float line, we show how to find the $N$ blank sizes to cut on the float line that minimize the wastage when the orders are later cut from these sizes. Our treatment of the problem differs from that of [6] because we restrict our attention to the case (relevant to the industry) where only one type of order is cut from a blank at a time. We reduce the optimization problem to a column selection problem, which we then solve exactly with depth-first search (DFS) and branch-and-bound methods and heuristically using NOMAD.

Section 2 provides a formal description of the problem which is mathematically modeled in Section 3. Section 4 introduces various algorithms for solving the problem as formulated, with results following in Section 5. Finally, some concluding remarks and insights are mentioned in Section 6.

2. Problem Description

2.1. The Meta Problem

A float glass factory receives a roster of orders from numerous clients. These orders come in a wide variety of quantities and sizes. For example, motor corporations may place large orders for windscreens, while some clients may require a single, unique order.

Blanks are large pieces of glass that are cut directly on the production line at the glass factory. It is from these blanks that individual orders are later cut. It is desirable to cut a small, fixed number, $N$, of different blank types on the production line (with each blank type being cut an arbitrarily large number of times). Each order is then assigned to be cut from one of these blank types, and some glass is lost as wastage in this process. Given a value for $N$ (typically between 2 and 6) and a list of orders, the stock size selection problem asks for the dimensions (widths and lengths) of the $N$ blank types to be cut on-line so as to minimize the wastage when the orders are cut from these blanks. The dimensions are constrained by the float-line parameters and cutting machinery and thus must lie in a range between a known fixed minimum width ($w_{\min}$) and length ($l_{\min}$) and a known fixed maximum width ($w_{\max}$) and length ($l_{\max}$).

2.2. Blank Cutting and Problem Assumptions

We make three key assumptions about the process of cutting orders from blanks in this paper.
(1) Each individual blank is only ever cut into copies of a single order, as illustrated in Figure 2 (as opposed to a complex stock-cutting problem like those considered in [2]).
(2) Each order is cut from only one type of blank. That is, each blank type can be a cutting medium for many orders, but each order can only be assigned to one blank type for cutting. Figure 1 portrays an example of this mapping relationship.
(3) The blank cannot be rotated before cutting the order.

The last point above is relevant in the industry, because for certain types of flat glass it is necessary to preserve the direction of the grain relative to the order dimensions.

Referring to Figure 2, we notice a shaded region that results from cutting an order from the blank. This region represents a wasted section of the blank. Blank cutting invariably results in off-cut wastage, primarily because of the three problem assumptions outlined above: more often than not, the chosen order dimensions will not allow for “perfect fits.” Indeed, Figure 2 reveals that no further copies of the order can be cut from the blank. However, there are cases with no wastage: in Figure 1, the last order has no off-cut since it shares the same dimensions as its blank. The primary aim of this optimization problem is to minimize the wastage from cutting by finding a fixed number, $N$, of optimal blank types to satisfy the list of orders.

3. Model Formulation

3.1. A Basic Objective Function

Let there be $m$ orders, each specified by a triple $(n_i, w_i, l_i)$, where $n_i$ indicates the quantity of the order to be cut and $w_i$ and $l_i$ give the width and length of the order, respectively. Let the $N$ chosen blank types have widths $W_1, \dots, W_N$ and lengths $L_1, \dots, L_N$. For each order index $i$ running from $1$ to $m$, assign it to be cut from blank $b_i \in \{1, \dots, N\}$. The wastage associated with such a set-up is

    C = \sum_{i=1}^{m} \left( \left\lceil \frac{n_i}{k_{i b_i}} \right\rceil W_{b_i} L_{b_i} - n_i w_i l_i \right),    (1)

where

    k_{ij} = \left\lfloor \frac{W_j}{w_i} \right\rfloor \left\lfloor \frac{L_j}{l_i} \right\rfloor    (2)

is the number of copies of order $i$ that fit into a single blank of type $j$. The first term in the sum indicates the quantity of blank $b_i$ needed to cater for the $i$th order multiplied by the area of this blank. The second term is the total area of the $i$th order that needs to be cut. The difference gives the wastage on the $i$th order.

The above function must be minimized over all possible blank types as well as assignments $b_i$ of orders to blanks for cutting. The latter values are in fact uniquely determined by the former. Notice that each term in the sum that makes up the objective function is independent of the others: the choice of $b_i$ only influences the $i$th term, and therefore we can pick each one independently so as to minimize this $i$th term. Since for each value of $i$ there are only $N$ values of $b_i$ to choose from, and $N$ is typically very small, this is not a challenging subproblem. The objective function thus depends only on the choice of blank sizes:

    C(W_1, L_1, \dots, W_N, L_N) = \sum_{i=1}^{m} \min_{1 \le j \le N} c_{ij},    (3)

with $c_{ij}$ given by

    c_{ij} = \left\lceil \frac{n_i}{k_{ij}} \right\rceil W_j L_j - n_i w_i l_i.    (4)
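To make this concrete, the following minimal Python sketch (an illustration with names of our own choosing, not the paper's MATLAB implementation) evaluates the objective (3) directly from a list of orders and candidate blank sizes; it returns infinity if some order fits into none of the blanks:

    import math

    def total_wastage(orders, blanks):
        # orders: list of triples (n_i, w_i, l_i); blanks: list of pairs (W_j, L_j).
        total = 0
        for n, w, l in orders:
            best = math.inf
            for W, L in blanks:
                k = math.floor(W / w) * math.floor(L / l)   # copies per blank, eq. (2)
                if k > 0:
                    best = min(best, math.ceil(n / k) * W * L - n * w * l)   # eq. (4)
            total += best   # eq. (3): the best blank is chosen independently per order
        return total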

3.2. Transition to Combinatorial Optimization

Before considering methods to solve the problem, we give some thought to the search space, that is, the space of all possible dimensions of all the blanks. At first glance, this seems like a $2N$-dimensional continuous space, with two variables (width and length) associated with each blank type. This is a bleak prospect, as the objective function itself is far from continuous. It is clear, however, that very small changes in the dimensions $(W_j, L_j)$ of a blank type will often have no qualitative effect on the solution: the same number of each order will still fit in the blank, and so the $k_{ij}$ and $b_i$ will be unchanged. Small decreases in $W_j$ and $L_j$ of this type are then guaranteed to decrease the objective function (which is locally bilinear in these regions where the $k_{ij}$ and $b_i$ are constant). We can thus continue to decrease $W_j$ and $L_j$ until $W_j$ is a multiple $a w_i$ of the width of some order $i$ and $L_j$ is a (possibly different) multiple $b l_i$ of that order's length. It follows that the dimensions of the blanks can be restricted to the set of multiples of the dimensions of the orders to be cut from them:

    W = a w_i, \quad L = b l_i, \quad a, b \in \mathbb{Z}^{+}, \ 1 \le i \le m,    (5)

and so the search space is

    \mathcal{B} = \{ (a w_i, b l_i) : 1 \le i \le m; \ a, b \in \mathbb{Z}^{+}; \ w_{\min} \le a w_i \le w_{\max}; \ l_{\min} \le b l_i \le l_{\max} \}.    (6)

This set is discrete and also bounded and can easily be enumerated.
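A minimal enumeration sketch in Python (illustrative names only) that generates this set from the order list and the machine limits $w_{\min}, w_{\max}, l_{\min}, l_{\max}$:

    import math

    def candidate_blanks(orders, w_min, w_max, l_min, l_max):
        # Widths are multiples a*w_i and lengths multiples b*l_i of the same
        # order i, restricted to the machine limits, as in (5) and (6).
        blanks = set()
        for _, w, l in orders:
            a = max(1, math.ceil(w_min / w))
            while a * w <= w_max:
                b = max(1, math.ceil(l_min / l))
                while b * l <= l_max:
                    blanks.add((a * w, b * l))
                    b += 1
                a += 1
        return sorted(blanks)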

Figure 3 details two examples of how the orders' dimensions are used to generate the search space of possible blanks.

The problem is now fundamentally a combinatorial optimization problem. The objective is to optimally choose a finite, predefined number ($N$) of blanks from $\mathcal{B}$ and then optimally assign orders to be cut from these respective choices, as depicted in Figure 1, such that wastage is minimised.

3.3. The Cost Matrix Data Structure

It is possible to restructure the optimization problem introduced above into a suggestive form that is simple to write down and reason about. To achieve this, a 2-dimensional array, called the cost matrix, is employed, with each row representing an order and each column a blank from the generation procedure outlined in the previous section. Each element of this matrix represents the waste associated with cutting order $i$ from blank type $j$, namely $c_{ij}$ as defined in (4).

We can write the matrix as $C = (c_{ij})$, where $1 \le i \le m$ and $1 \le j \le n$, with $n$ the number of candidate blanks. For example, if the element in position $(2, 3)$ of the matrix were 12, it would indicate that cutting order 2 from blank 3 results in a wastage of 12 units.

A point in the search space corresponds to a selection $S$ of a set of $N$ columns of this matrix, and the objective function can then clearly be rewritten in terms of the matrix as

    f(S) = \sum_{i=1}^{m} \min_{j \in S} c_{ij}.    (7)
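In code, the matrix and objective might be assembled as follows (an illustrative Python sketch; order-blank pairs that do not fit are marked with infinity):

    import math

    def cost_matrix(orders, blanks):
        # c[i][j] is the waste of cutting order i from blank type j, eq. (4);
        # math.inf marks blanks into which order i does not fit.
        C = []
        for n, w, l in orders:
            row = []
            for W, L in blanks:
                k = math.floor(W / w) * math.floor(L / l)
                row.append(math.ceil(n / k) * W * L - n * w * l if k > 0 else math.inf)
            C.append(row)
        return C

    def objective(C, S):
        # f(S): the sum over rows of the minimum entry within the selected columns S.
        return sum(min(row[j] for j in S) for row in C)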

3.4. Similarity to the p-Median Problem

Interestingly, this problem is similar to another combinatorial problem known as the p-median problem. Briefly, the p-median problem involves locating $p$ facilities to satisfy a set of demand points. Assigning a demand point to be satisfied by a particular facility incurs some cost. An example of this quantified cost may be a community having to travel some number of kilometres to reach a designated clinic in rural Western Australia [7].

The objective of the p-median problem is to minimise the sum of these costs accrued from satisfying demand points.

Like the glass cutting problem, which stores waste values in its cost matrix, the p-median problem captures its costs in a 2D array as well. Investigating the p-median problem's integer programming formulation gives one a sense of what the cutting problem's formulation may look like [8]:

    \min \sum_{i} \sum_{j} d_{ij} x_{ij}    (9)

subject to

    \sum_{j} x_{ij} = 1 \quad \forall i,    (10)
    x_{ij} \le y_j \quad \forall i, j,    (11)
    \sum_{j} y_j = p,    (12)
    x_{ij}, y_j \in \{0, 1\} \quad \forall i, j.    (13)

Notice that $d_{ij}$ is the cost or distance associated with demand point $i$ being satisfied by potential facility $j$. It is then $D = (d_{ij})$ that is the cost matrix in the above formulation.

Constraint (10) expresses the requirement that all demand points must be satisfied. Constraint (11) prevents any user's demand from being satisfied from a location with no open facility. The total number of facilities, designated $p$, is set by constraint (12).

The glass cutting problem in this paper shares many concepts with the p-median problem.

Referring to Table 1, there are a number of noteworthy similarities. Firstly, both are minimization problems: the objective is to minimise the costs/wastage that will inherently be accumulated by how we decide to satisfy demands (p-median problem) or assign orders (glass cutting problem). These decisions can be captured in a binary decision variable, as seen in the p-median problem formulation. Recall that only a finite number, $N$, of blanks can be chosen; likewise, the p-median problem locates exactly $p$ facilities. These are analogous constraints that can be formulated similarly.

3.5. An Integer Programming Formulation

Recognizing its likeness to the p-median problem, an integer programming formulation for the glass cutting problem can be defined as follows:

    \min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}    (15)

such that

    \sum_{j=1}^{n} x_{ij} = 1, \quad i = 1, \dots, m,    (16)
    \sum_{j=1}^{n} H\left( \sum_{i=1}^{m} x_{ij} \right) \le N,    (17)
    x_{ij} \in \{0, 1\} \quad \forall i, j,    (18)
    H(z) = \begin{cases} 1, & z > 0, \\ 0, & z \le 0. \end{cases}    (19)

Constraint (18) defines the binary decision variable $x_{ij}$, explained above, which indicates whether order $i$ is cut from blank $j$. Constraint (16) stipulates that every order must be cut. Constraint (17) limits one to selecting at most $N$ blanks to cut the orders. Equation (19) defines $H$ as a Heaviside step function, imperative to the operation of constraint (17).
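Constraint (17) is nonlinear because of the Heaviside function; in practice it can be linearized exactly as in the p-median formulation, by introducing binary variables $y_j$ indicating whether blank $j$ is used, together with $x_{ij} \le y_j$ and $\sum_j y_j \le N$. The sketch below illustrates this linearized formulation using the open-source PuLP package (an illustration only, not how our results were obtained; infinite entries of the cost matrix must first be replaced by a large constant or their variables omitted):

    import pulp

    def solve_stock_selection(C, N):
        # C: finite m-by-n cost matrix; N: maximum number of blank types.
        m, n = len(C), len(C[0])
        prob = pulp.LpProblem("stock_size_selection", pulp.LpMinimize)
        x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)]
             for i in range(m)]
        y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(n)]
        prob += pulp.lpSum(C[i][j] * x[i][j] for i in range(m) for j in range(n))
        for i in range(m):
            prob += pulp.lpSum(x[i]) == 1          # (16): every order is cut
            for j in range(n):
                prob += x[i][j] <= y[j]            # linearizes the coupling in (17)
        prob += pulp.lpSum(y) <= N                 # (17): at most N blank types
        prob.solve()
        return [j for j in range(n) if y[j].value() > 0.5]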

4. Methodology

It may be prudent to visualise the forthcoming optimization procedure for the glass cutting problem; both selection and assignment take place on the cost matrix. Consider an example cost matrix $C$ with 5 orders, seen by the number of rows, and 7 blanks, seen by the number of columns.

Looking at this matrix, the mathematical problem can be defined as follows.

Given a matrix $C$, select $N$ columns of $C$ so that the sum over the rows of the minimum coefficients within the selected columns is as small as possible.

Table 2 shows an example array that captures wastage according to the example cost matrix $C$. Take note that only 3 of the 7 blanks can be chosen ($N = 3$). In this case, three particular blanks have been chosen. In accordance with the definition above, the minimum coefficient of each row within these chosen columns is selected, and these minima are summed. This sum represents the total wastage of the proposed solution, and it is this value that needs to be minimised.

Clearly, the choice of the columns (i.e., blanks) determines the objective function value of the problem. The assignment component is performed automatically once the columns have been chosen, since the minimum coefficient of each row within the chosen columns is selected. The following sections explain the three optimization techniques utilised to choose the columns effectively.

4.1. Depth-First Search

Depth-first search (DFS) is an algorithm that explores a tree or graph data structure. The search begins at the root node of the tree, usually representing the starting state of a problem. Its strategy is to constantly seek to branch “deeper” from the current node. If the current node has no unexplored edges, the algorithm “backtracks” to the current node's predecessor and attempts to branch again in the depth-first manner. This process of backtracking and branching continues until all nodes reachable from the root node have been explored [9].

It is important to note that DFS will always visit every state in the search space. It is guaranteed, therefore, to find the optimal solution, but if the search space is large, it can take a very long time to do so. With regard to our problem, the algorithm iterates through every possible combination of selected columns in the cost matrix. This means that the algorithm evaluates every possible combination of blanks, calculating the wastage for each of these solutions.

The root node of the tree is the empty set: no columns selected. The children of a node are obtained by appending to the set a single new column to the right of all the columns already in the set. The leaf nodes consist of sets of size $N$; reaching a leaf node in the search tree corresponds to a feasible proposed solution. The objective function is evaluated for the solution and the current best is updated if necessary. The depth-first search method we utilise is outlined in Algorithm 1.

Algorithm 1: Depth-first search over column subsets (rendered here as runnable Python).

import math

def dfs(C, N, path=(), v=None):
    # C: m-by-n cost matrix; N: number of columns to select.
    # path: columns selected so far (strictly increasing indices);
    # v[i]: minimum of row i over the columns in path (inf if path is empty).
    n = len(C[0])
    if v is None:
        v = [math.inf] * len(C)
    if len(path) == N:
        # Leaf node: a feasible selection; its cost is the sum of the row minima.
        return sum(v), path
    start = path[-1] + 1 if path else 0
    best_cost, best_path = math.inf, None
    # Children append one new column to the right of all columns in path,
    # leaving enough columns to complete the remaining selections.
    for j in range(start, n - (N - len(path)) + 1):
        child_v = [min(vi, row[j]) for vi, row in zip(v, C)]
        cost, p = dfs(C, N, path + (j,), child_v)
        if cost < best_cost:
            best_cost, best_path = cost, p
    return best_cost, best_path

4.2. Branch-and-Bound Method

As previously mentioned, depth-first search needs to visit every point in the search space in order to find the optimal solution. This quickly becomes prohibitive as the search space gets large. Branch-and-bound is a modified version of depth-first search that takes advantage of known bounds on the objective function value in order to prune the search tree down to a more manageable size. The method was first described in [10].

The basic idea behind branch-and-bound is to keep track of an upper bound, $U$, on the global optimum value as the search proceeds (usually just the best value found so far) and, at each node, to compute a node-specific lower bound, $B$, on the objective function value that could be obtained by continuing to expand the children of that node. If $B \ge U$ for some node, then that node can be pruned from the search without any risk of missing the minimum value. Branch-and-bound is thus an exact algorithm that finds the global optimum solution, but with good bounds it can do so in a fraction of the time it would take to apply DFS.

We describe below how we constructed our upper and lower bounds for branch-and-bound, as well as some useful preprocessing.

4.2.1. Upper Bounds

Any feasible point provides an upper bound on the minimization problem. To begin with, we made use of two heuristics to generate “good” feasible points that could be used as initial upper bounds for the branch-and-bound algorithm. The first heuristic begins with all $n$ columns included in the selection $S$. It then successively removes columns greedily, in such a way as to increase the cost function as little as possible at each step, until only $N$ columns remain; it returns these columns as its solution. The second heuristic starts with $S = \emptyset$ and successively adds columns greedily, in such a way as to decrease the cost function as much as possible at each step, until there are $N$ columns in $S$; it returns these columns as its solution. Empirically, this tends to be a rather good guess.
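A sketch of the second (additive) heuristic in Python, reusing the cost matrix representation above (illustrative names; ties are broken arbitrarily):

    import math

    def greedy_add(C, N):
        # Start from the empty set and repeatedly add the column that yields
        # the smallest objective value, mirroring the second heuristic above.
        m, n = len(C), len(C[0])
        v = [math.inf] * m       # current row minima over the selected columns
        S = []
        for _ in range(N):
            best_j, best_val = None, None
            for j in range(n):
                if j in S:
                    continue
                val = sum(min(vi, C[i][j]) for i, vi in enumerate(v))
                if best_val is None or val < best_val:
                    best_j, best_val = j, val
            S.append(best_j)
            v = [min(vi, C[i][best_j]) for i, vi in enumerate(v)]
        return S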

As the algorithm progresses and finds new solutions, it keeps a constant record of the best solution found so far. This best solution acts as the upper bound when deciding whether to prune a node.

Before proceeding, we point out a mathematical convenience that we will use when discussing the lower bounds. Suppose that at a specific node in the DFS tree we have already selected a set $S$ of $k$ columns, and that we will select the remaining columns (call them $R$, with $|R| = N - k$) from the part of the matrix to the right of all the columns in $S$ as we proceed with our search. We create the vector $v$ as follows:

    v_i = \min_{j \in S} c_{ij}, \quad i = 1, \dots, m.    (21)

The objective function for the remaining columns can be written as

    f(S \cup R) = \sum_{i=1}^{m} \min\left( v_i, \min_{j \in R} c_{ij} \right).    (22)

This was a useful technique in the code itself, because only the vector $v$ needed to be passed down the tree in order to reconstruct the full function evaluation at all of the children. It was also useful in reasoning about lower bounds, as illustrated below.
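In code, this convenience amounts to a component-wise minimum when descending to a child (illustrative Python sketch):

    def child_vector(v, C, j):
        # Adding column j to the selection updates v by a component-wise minimum.
        return [min(vi, row[j]) for vi, row in zip(v, C)]

    # At the root (S empty), v = [math.inf] * m, and the objective of a completed
    # selection is simply sum(v) once all N columns have been folded in.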

4.2.2. First Lower Bound

Let us assume that the algorithm is at a node in the tree with $v$ defined as above, and let the last column in $S$ be column $q$ of the matrix. Then, for any choice of $R \subseteq \{q + 1, \dots, n\}$,

    f(S \cup R) = \sum_{i=1}^{m} \min\left( v_i, \min_{j \in R} c_{ij} \right) \ge \sum_{i=1}^{m} \min\left( v_i, \min_{q < j \le n} c_{ij} \right),    (23)

where the inequality follows from the fact that $R$ is a subset of the set of columns $j$ with $q < j \le n$.

This is the first lower bound: the cost at any node is bounded below by the cost of including all remaining columns from here onwards.
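An illustrative Python rendering of (23), assuming $q$ is not the last column of the matrix (otherwise the node is a leaf):

    def lower_bound_1(v, C, q):
        # Cost of hypothetically including every column to the right of column q.
        return sum(min(vi, min(row[q + 1:])) for vi, row in zip(v, C))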

4.2.3. Second Lower Bound

Next, we ask what the reduction in cost would be due to the addition of some column, $j$, into the set. We define

    r(j) = \sum_{i=1}^{m} \max\left( v_i - c_{ij}, 0 \right).    (24)

We also define the reduction in cost due to the addition of a set of columns, $T$:

    r(T) = f(S) - f(S \cup T).    (25)

Now, for two columns $j_1$ and $j_2$,

    r(\{j_1, j_2\}) = \sum_{i=1}^{m} \max\left( \max(v_i - c_{i j_1}, 0), \max(v_i - c_{i j_2}, 0) \right) \le r(j_1) + r(j_2),    (26)

where the inequality follows from the fact that, in each row, one of the two terms contributing to $r(j_1) + r(j_2)$ is precisely equal to the corresponding term in the first sum, and the other is positive or zero.

This could be generalized to the result

    r(T) \le \sum_{j \in T} r(j).    (27)

Now, let $T^{*}$ be the set with $|T^{*}| = N - k$ that maximizes $r(T)$; then clearly $S \cup T^{*}$ is also the set that minimizes the objective function, and so its function value is the theoretical lower bound at the current vertex. Now sort the remaining columns by their $r(j)$-values, pick the $N - k$ largest such values, and put the corresponding columns together into a set, $T'$. We have

    r(T^{*}) \le \sum_{j \in T^{*}} r(j) \le \sum_{j \in T'} r(j).    (28)

Thus this last sum is an upper bound on the maximum reduction possible. This yields the lower bound on the objective function:

    f(S \cup R) \ge f(S) - \sum_{j \in T'} r(j).    (29)

This is the second lower bound. It can only be applied when there are no infinite values in $v$ (when the reductions become infinite, some of the inequalities above break down). Extending this bound to the case where there are infinities in $v$ would be of great use in this problem and is suggested for future work.
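An illustrative Python rendering of (29), where r is the number of columns still to be selected; per the caveat above, it requires all entries of $v$ to be finite:

    def lower_bound_2(v, C, q, r):
        # Largest r single-column reductions among the columns right of q, eq. (28).
        reductions = sorted(
            (sum(max(vi - row[j], 0) for vi, row in zip(v, C))
             for j in range(q + 1, len(C[0]))),
            reverse=True)
        return sum(v) - sum(reductions[:r])   # eq. (29); sum(v) = f(S)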

4.2.4. Presorting of Columns

Because of the way we set up the branch-and-bound algorithm, a column towards the right of the cost matrix will never appear as the parent of a column towards the left. This means that, if we wish to prune a large number of nodes, we should ensure that the most important columns are towards the left of the matrix and the less important columns are towards the right. We presorted the columns of the cost matrix as follows, as sketched in the code after this list.
(1) Rank the entries in each row of the matrix from lowest cost to highest cost: assign 1 to the lowest cost, 2 to the second lowest, and so forth.
(2) Find the column with the most first places (break ties with 2s, then 3s, etc.) and move it to the front of the matrix.
(3) Repeat steps (1) and (2) with the rest of the columns in the matrix.
This guarantees that, of any two columns, the one to the left has won one of these minicontests against the other and is in some sense of lower cost, and hence more important. The result of this can be seen in Figure 4.
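An illustrative Python sketch of this presorting; since the ranks are computed once from the matrix, the iterative minicontest reduces to a lexicographic sort of the columns by their rank counts (ties within a row are broken arbitrarily):

    def presort_columns(C):
        # rank_counts[j][r] counts how often column j is the (r+1)-th cheapest
        # entry in a row. Columns are ordered by most first places, then most
        # second places, and so on.
        n = len(C[0])
        rank_counts = [[0] * n for _ in range(n)]
        for row in C:
            for r, j in enumerate(sorted(range(n), key=lambda col: row[col])):
                rank_counts[j][r] += 1
        perm = sorted(range(n), key=lambda j: rank_counts[j], reverse=True)
        return perm, [[row[j] for j in perm] for row in C]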

4.3. NOMAD

Nonlinear optimization by mesh adaptive direct search (NOMAD) is a software package designed for a broad class of optimization problems. NOMAD provides a C++ implementation of the mesh adaptive direct search (MADS) algorithm described in [11–15]. More specifically, NOMAD solves optimization problems of the form

    \min_{x \in \Omega} f(x),

where $\Omega = \{ x \in X : c_j(x) \le 0, \ j \in J \}$, the functions $f, c_j : X \to \mathbb{R} \cup \{\infty\}$ for all $j \in J$, and $X$ is a subset of $\mathbb{R}^n$ [14]. As already mentioned, NOMAD makes use of the MADS algorithm; it is essentially an iterative method that uses blackbox functions to evaluate trial points on a mesh [16].

Each MADS iteration comprises three steps: the poll, the search, and finally the update. The search step allows trial points to be created anywhere on the mesh, while the poll step is more strictly defined because the convergence analysis of the method relies on it. Ideally, the algorithm converges globally to a point that satisfies local optimality conditions based on the functions defining the problem.

Loosely, the technique can be described as follows: an arbitrary point is selected and a local minimum is found; repeating this process iteratively yields numerous local minima. Provided enough iterations are conducted, a fairly accurate global minimum can be selected from this set. Considering the problem at hand, a glass manufacturer may be dissatisfied with the time taken to find an exact solution using the DFS and branch-and-bound algorithms. NOMAD, however, offers the alternative of speed, albeit at the risk of an inexact solution. The results highlight this comparison.

5. Results and Analysis

All algorithms and methods were implemented in MATLAB 2013a on a PC with an Intel Core i7-3930K (12 logical cores at 3.9 GHz), 64 GB of 1600 MHz RAM, and two GTX 690 graphics cards. We ran the algorithms on an industrial dataset with 33 orders, which produced 151 possible blank sizes from which to choose.

5.1. Complexity Analysis
5.1.1. Blank Choices and the Solution Space

As was mentioned previously, the number of blanks, $N$, that can be used to satisfy the orders is established a priori. It is appropriate to investigate how this decision affects the size of the solution space and hence the performance of the algorithms.

Being a combinatorial optimization problem, each solution is made up of a number of different choices. Increasing the number of blanks available to choose from, as well as the number $N$ that we allow to be selected, invariably increases the problem's size, as there is an increasing number of possible combinations.

Using our blank generation procedure and a hypothetical data set of orders, imagine that a total of 151 blanks are produced. Referring to Table 3, by altering the number of selections from these 151 blanks, we can see how the solution space grows: choosing $N$ of the 151 blanks gives $\binom{151}{N}$ possible solutions. Immediately, one notices that the solution space grows extremely rapidly, to the extent that graphing this increase without employing a log scale would not allow for effective viewing. Figure 5 is a log-scaled bar graph that allows us to visualise how the solution space grows when increasing the number of blanks that can be chosen.
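These sizes are binomial coefficients and can be reproduced directly (a small Python check, assuming Python 3.8+ for math.comb):

    from math import comb

    # Size of the solution space when choosing N of the 151 generated blanks.
    for N in range(2, 8):
        print(N, comb(151, N))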

5.1.2. A Theoretical Upper Bound on the Number of Blanks Generated Given the Order Lists

As we can see from the above analysis, the number of blanks that can be chosen dramatically increases the number of possible solutions. Obviously, a higher number of possible blanks to start with will also increase the size of the search space. Recall that the entity that determines how many blanks are generated is the order list, since it is from these orders that possible blanks are derived. We now attempt to find a theoretical upper bound in an effort to get some idea of the order list's influence on the problem size.

First, let us assume that there are $m$ orders. An order $i$, for the purposes of this analysis, is represented by $(w_i, l_i)$, where $w_i$ is the width and $l_i$ is the length of order $i$. Each blank is constructed from multiples of some order $i$. Therefore, the blank's dimensions of width, $W$, and length, $L$, can be written as follows:

    W = a w_i, \quad L = b l_i, \quad a, b \in \mathbb{Z}^{+}.    (33)

There are a minimum width, $w_{\min}$, and a minimum length, $l_{\min}$, that a blank can have, determined by the smallest order that needs to be cut. Similarly, a blank also has a maximum width and length ($w_{\max}$ and $l_{\max}$, resp.), which are determined by equipment and other factors. Taking this into account, we can say that

    w_{\min} \le a w_i \le w_{\max}, \quad l_{\min} \le b l_i \le l_{\max}.    (34)

Considering only the width component of (33), we are able to obtain an interval for $a$, defined as the multiple of the width of order $i$ used to construct a blank. Recall that $a \in \mathbb{Z}^{+}$, and so we must utilize ceiling and floor functions to ensure that $a$ is a feasible value. Similarly, we can obtain a range for $b$:

    \left\lceil \frac{w_{\min}}{w_i} \right\rceil \le a \le \left\lfloor \frac{w_{\max}}{w_i} \right\rfloor, \quad \left\lceil \frac{l_{\min}}{l_i} \right\rceil \le b \le \left\lfloor \frac{l_{\max}}{l_i} \right\rfloor.    (35)

In order to obtain an upper bound on the number of blanks, $n$, generated for an order list, we now sum, over all orders, the product of the number of elements in the intervals for $a$ and $b$:

    n \le \sum_{i=1}^{m} \left( \left\lfloor \frac{w_{\max}}{w_i} \right\rfloor - \left\lceil \frac{w_{\min}}{w_i} \right\rceil + 1 \right) \left( \left\lfloor \frac{l_{\max}}{l_i} \right\rfloor - \left\lceil \frac{l_{\min}}{l_i} \right\rceil + 1 \right).    (36)

However, we can simplify (36) by noting that

    \left\lfloor \frac{w_{\max}}{w_i} \right\rfloor - \left\lceil \frac{w_{\min}}{w_i} \right\rceil + 1 \le \frac{w_{\max} - w_{\min}}{w_i} + 1 \le \frac{w_{\max} - w_{\min}}{w_{\min}} + 1    (37)

and

    \left\lfloor \frac{l_{\max}}{l_i} \right\rfloor - \left\lceil \frac{l_{\min}}{l_i} \right\rceil + 1 \le \frac{l_{\max} - l_{\min}}{l_i} + 1 \le \frac{l_{\max} - l_{\min}}{l_{\min}} + 1,    (38)

because $w_i \ge w_{\min}$ and $l_i \ge l_{\min}$ (the smallest blank dimensions are set by the smallest order). We then continue and find an upper bound:

    n \le m \left( \frac{w_{\max} - w_{\min}}{w_{\min}} + 1 \right) \left( \frac{l_{\max} - l_{\min}}{l_{\min}} + 1 \right).    (39)

The progression to (39) reveals that the number of generated blanks is bounded linearly by the number of orders, $m$, that need to be cut.
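For a given order list and machine limits, the sum in (36) is straightforward to evaluate (an illustrative Python sketch):

    import math

    def blank_count_bound(orders, w_min, w_max, l_min, l_max):
        # Evaluates the sum in (36): the number of feasible (a, b) pairs per order.
        total = 0
        for _, w, l in orders:
            n_a = math.floor(w_max / w) - math.ceil(w_min / w) + 1
            n_b = math.floor(l_max / l) - math.ceil(l_min / l) + 1
            total += max(n_a, 0) * max(n_b, 0)
        return total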

5.2. Algorithm Performance
5.2.1. Execution Times

Table 4 gives the execution times of the branch-and-bound, depth-first search, and NOMAD algorithms. As we would expect, the exact solution procedures, DFS and branch-and-bound, are heavily affected by the increase in problem size that comes with increasing the number of selectable blanks. Indeed, DFS and branch-and-bound were only tested up to 5 and 6 selectable blanks, respectively: DFS took over 2 hours to complete with 5 selectable blanks, and branch-and-bound took in excess of 3 hours for the 6-selectable-blank case. The results for all three algorithms can be seen in Figure 6.

We note that branch-and-bound, an improvement on the brute-force enumeration of DFS, keeps a relatively low computation time up to about 4 selectable blanks. Looking at the area of interest, Figure 7, both exact solution methods were competitive with the NOMAD heuristic in terms of speed up until 3 blanks were made selectable. Thereafter, the exact solution procedures, most notably DFS, suffer as a result of the increasing problem size.

NOMAD is consistently fast in its execution, requiring less than 10 seconds of computation time in all cases; however, a heuristic often sacrifices solution quality for speed, as we will see later.

5.2.2. Branch and Bound Details

The execution time results indicate that the branch-and-bound algorithm is significantly faster than the DFS algorithm for this problem, even though it does more work at each node. This indicates that a significant portion of the search space is being pruned. Figure 8 illustrates the fraction of the search space (leaf nodes) actually visited by the branch-and-bound method for different values of $N$. Observe how this fraction decreases rapidly as $N$ increases. It is conceivable that, with better bounds, this decrease may even keep pace with the rapid expansion in search space size.

In the code for branch-and-bound, we first applied lower bound 1 and then, for nodes that were not pruned, we tried lower bound 2. It was not at all clear that this second step would successfully prune any nodes after the first step had failed. Figure 9 indicates, however, that a significant portion of nodes were pruned by the second lower bound. A third lower bound, a slight generalization of the first, was also attempted; it did not prune many extra nodes after the first lower bound was applied and significantly slowed down the algorithm, and so it was removed.

5.2.3. Solution Quality of NOMAD

Table 5 shows the wastage values obtained by DFS, branch-and-bound, and NOMAD. DFS and branch-and-bound are exact solution procedures and so finish with the same wastage: the optimal minimal wastage, given in the minimum wastage column. NOMAD, however, is a heuristic and does not guarantee an optimal value. In fact, NOMAD performs rather erratically in solution quality, as can be seen in Figure 10. In several cases we notice significant error percentages; a solution that is 60% worse than optimal would most certainly be unsuitable. NOMAD's inconsistency makes it an unreliable tool if minimising wastage is the primary objective.

5.2.4. A Simplicity versus Optimality Consideration

A trend that we notice in the results of all algorithms is the decreasing wastage with increasing numbers of selectable blanks. This is not surprising, since allowing more blanks to be selected enhances our ability to cater for all the orders, reducing wastage. This can be seen in Figure 11. There is one case where NOMAD produced more wastage despite having more selectable blanks, but this can be explained by the erratic behaviour that NOMAD exhibited during testing.

This trend of decreasing wastage is not so extraordinary; practitioners in the glass manufacturing industry are aware of the prospect of further reducing wastage in this manner. It is rather a question of simplicity versus optimality, since operating the machinery at the factory becomes more complicated as one increases the number of blanks involved in satisfying the orders.

6. Conclusions

The selection of blanks to satisfy orders holds great significance in the glass manufacturing industry. Random and heuristic methods for guessing the best blank sizes tend to result in relatively high wastage. Minimising this loss translates to a meaningful benefit for a glass manufacturing enterprise. Furthermore, it is possible that these ideas will find application in the metal, paper, and wood industries (amongst others).

Making the transition to a discrete combinatorial problem proved worthwhile. Presented with any order list, we are able to generate a set of feasible blanks. This provided us with the flexibility to apply tried and tested algorithms from the field of combinatorial optimization, such as branch-and-bound, to optimally select these blanks. Identifying a similarity with the p-median problem allowed us to formulate the problem effectively, making it a matter of selecting columns. Future work may involve developing a more elaborate model; for example, it would be beneficial to formulate a robust model that accommodates different cutting machines and the option to rotate when cutting. We may also wish to include factors other than wastage in the objective function. In particular, it may be of interest to model the risk of an order being cancelled or changed in the stock size selection process.

It was shown that the number of blanks, $n$, in the search space is bounded linearly by the number of orders, $m$. Nevertheless, the actual computation time is not linear in this number, because we need to choose a subset of size $N$ from these blank sizes. We expect the computation time to scale like $\binom{n}{N} = O(n^N)$, and so it is polynomial in $n$ and exponential in $N$.

The branch-and-bound implementation performed well on larger problem sizes, whilst still providing an optimal solution. The upper and lower bound estimates and the presorting of columns were effective at trimming the solution space, so that only a small fraction of the search space had to be explored to find the optimal solution. An interesting potential improvement involves adapting the second lower bound estimate to handle cases where there are infinities in the cost matrix, and investigating other bounding and pruning strategies. As it stands, branch-and-bound takes around 3 hours to compute the 6-blank case for our data and less than an hour for 5 blanks. These are feasible times for industry. NOMAD had a somewhat erratic performance when it came to solution quality, on one occasion differing from the optimal solution by 60%. Its strength lies in fast execution times, under 10 seconds in all tested cases. One might run NOMAD several times to try to get a sense of whether it is near the global minimum in the solution space rather than in a grossly suboptimal local minimum. It may also be possible to fine-tune NOMAD towards solving this particular problem. Our results, however, are not entirely positive. Further work may include developing a heuristic that combines relatively fast execution times with good, consistent solution quality.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank Riaan Von Wielligh from PFG Building Glass for his assistance in the problem description as well as the members of the 2013 MISG South Africa study group.