The Scientific World Journal

Special Issue: Recent Advances on Bioinspired Computation

Research Article | Open Access

Volume 2014 |Article ID 967254 | https://doi.org/10.1155/2014/967254

Xiang Li, Mohammad Reza Bonyadi, Zbigniew Michalewicz, Luigi Barone, "A Hybrid Evolutionary Algorithm for Wheat Blending Problem", The Scientific World Journal, vol. 2014, Article ID 967254, 13 pages, 2014. https://doi.org/10.1155/2014/967254

A Hybrid Evolutionary Algorithm for Wheat Blending Problem

Academic Editor: X. Yang
Received: 09 Nov 2013
Accepted: 30 Dec 2013
Published: 20 Feb 2014

Abstract

This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed.

1. Introduction

Wheat is Australia’s most important grain crop. About 80 percent of Australia’s wheat is exported, making Australia the world’s fourth-largest exporter of wheat. Usually, wheat is sold to the central collection sites via truck in batches, called loads. When submitted, each load is weighed and sampled, and the result of quality checks is given. There are 10 to 20 attributes, such as protein content and moisture, that are checked. A grade is assigned to each load according to the result of the quality check. This grade is used to deliver products (wheat) within given specifications. There are 26 grades in total; each one has its own quality requirements and price. The value of the wheat is determined by its grade (see Table 1).


Grade | Protein lower bound | Protein upper bound | Price per tonne
G1 | 11.0% | 12.5% | $240
G2 | 10.0% | 11.0% | $220
For example, consider grades G1 and G2 (for simplicity, only the protein content is presented here). To be graded as G1, the protein content of wheat must be within the 11.0%–12.5% range, and for G2 the range is 10.0% to 11.0%. G1 has a higher protein requirement and a higher price. Now let us consider three loads of wheat and how they will be graded.

As shown in Table 2, L1 (with 11.5% protein) is graded as G1, L2 (with 10.5% protein) is graded as G2, and L3 (with 10.0% protein) is graded as G2. Note that, although L1 and L2 have higher protein values than the required lower bounds, the price does not increase.


Load | Protein | Grade | Price per tonne | Tonnes
L1 | 11.5% | G1 | $240 | 100
L2 | 10.5% | G2 | $220 | 100
L3 | 10.0% | G2 | $220 | 80
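The grading rule in Tables 1 and 2 is simple enough to sketch in code. A minimal illustration (bounds and prices taken from Table 1, with higher grades checked first; the function name is ours):

```python
# Protein bounds and prices per tonne from Table 1, best grade first.
GRADES = [
    ("G1", 11.0, 12.5, 240),  # grade, protein lower bound %, upper bound %, $/tonne
    ("G2", 10.0, 11.0, 220),
]

def grade_of(protein_pct):
    """Return (grade, price per tonne) for a load, or None if nothing matches."""
    for name, lo, hi, price in GRADES:
        if lo <= protein_pct <= hi:
            return name, price
    return None

# The three loads of Table 2:
# grade_of(11.5) -> ("G1", 240); grade_of(10.5) -> ("G2", 220); grade_of(10.0) -> ("G2", 220)
```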

In fact, there are many cases where the quality of wheat is above the minimum requirement or cases where the wheat is just short of obtaining a higher grade. One way to improve the overall value is to blend the wheat.

Blending is the process of mixing wheat of different qualities. This is usually done by blending low-quality (low price) wheat with some high-quality wheat to achieve a better overall value. Blending is a vital part of the entire wheat supply chain and, as discussed below, plays a major role in generating profit.

By blending different loads, the mixture (called a lot) can be assigned a new grade based on the weighted average quality results.

Figures 1 and 2 present two examples to illustrate the basics of blending.

In Figure 1, there are two loads, L1 and L2. L1 is 100 tonnes, with a protein percentage of 11.5% that would be graded as G1. L2 is 100 tonnes, with a protein percentage of 10.5% that would be graded as G2. The price of G1 is $240 per tonne and G2 is $220 per tonne.

Suppose that the requirement of G1 is to have at least 11.0% of protein. Clearly, L1 exceeds the protein requirement of G1 (with no additional benefit) and can be mixed with L2 to achieve a better total value. If L1 and L2 are blended together, the mixed lot will have a protein percentage of 11.0% and thus still meet the requirement of G1. This results in an increase of total value: the value before blending sums to $46,000 and the value after blending is $48,000, realising an uplift of $2,000.
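The arithmetic behind this example is just a tonnage-weighted average. A small sketch (grade requirements reduced to protein lower bounds, as in the example):

```python
def blend_value(loads, grades):
    """Value of blending all loads into one lot, graded by weighted-average protein.

    loads:  list of (tonnes, protein %) pairs
    grades: list of (protein lower bound %, price per tonne), best grade first
    """
    tonnes = sum(t for t, _ in loads)
    protein = sum(t * p for t, p in loads) / tonnes  # weighted average protein
    for min_protein, price in grades:
        if protein >= min_protein:
            return tonnes * price
    return 0

grades = [(11.0, 240), (10.0, 220)]   # G1, G2
before = 100 * 240 + 100 * 220        # L1 sold as G1, L2 as G2: $46,000
after = blend_value([(100, 11.5), (100, 10.5)], grades)
# after == 48000, an uplift of $2,000 as in Figure 1
```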

Figure 2 represents a more complicated example. There are two loads, L1 and L3. L1 is 100 tonnes, with a protein percentage of 11.5% that would be graded as G1. L3 is 80 tonnes, with a protein percentage of 10.0% that would be graded as G2. The price of G1/G2 is still $240/$220 per tonne. Since L3 has less protein than L2, blending all of L1 and L3 together no longer meets the protein requirement of G1.

Instead, we can split L3 into two subloads, L4 with 50 tonnes and L5 with 30 tonnes, and then blend L1 and L4 together to form a G1 lot. In this case, the value before blending sums to $41,600 and the value after blending is $42,600, increasing profit by $1,000.

The growers in Australia do not actually do the physical blending work. However, they can sell their wheat at the blended price if a blending plan is provided. Thus, the result of a provided blending plan is directly related to their profit. In fact, for growers with hundreds of loads of wheat, the profit of blending can easily exceed $200,000.

However, building a good blending plan is a very complex task even for experts. As an example, in Australia, the grading standard includes up to 20 attributes (protein, moisture, screening, earth, etc.) and 2 unique constraints (discussed in Section 2) to determine the grade of wheat. Moreover, one individual grower might have more than 500 loads of wheat, all with different qualities. As a result, a good blending plan often needs hours or days of work.

During the harvest season, the prices of wheat change daily or even more frequently. As the price of wheat changes, the optimal way to blend also changes. This indicates that not only the quality of the blending plan is important, but also the time taken to generate it. A good plan created after hours of work might already be outdated due to price changes, and there is not enough time to manually build a good blending plan every time the price changes. A tool which can generate the blending plan in a short period of time is in high demand.

This paper extends our previous work [1], where we proposed a linear programming guided hybrid evolutionary algorithm to address the wheat blending problem. The proposed algorithm hybridizes an evolutionary algorithm, a heuristic algorithm, and a linear programming algorithm. In addition, a heuristic based initialization method is used to reduce the search space, and a local search is applied to fine-tune the final result. During the 2013 harvest season, the proposed algorithm helped thousands of growers build their blending plans and generated tens of millions of dollars in profit for the growers. In this paper, more experimental results are presented, and a detailed stage-by-stage performance analysis is included as well.

The rest of the paper is organized as follows. Section 2 introduces the blending problem in detail. Then Section 3 provides some background on related work for solving the underlying problem. The proposed hybrid evolutionary algorithm is described in Section 4. In Section 5, the proposed algorithm is applied to the test cases, and the results of a comparison with a heuristic algorithm in current use are provided. The impact of each stage is included in Section 6, and Section 7 concludes the paper.

2. Model of the Problem

Wheat is one of the most important agricultural commodities in Australia and is one of Australia’s most valuable exports. Blending is an important stage in the whole wheat supply chain. Before milling, wheat with different levels of quality may be mixed together to balance cost and quality. The price of wheat is based on many quality attributes, and some wheat may have higher quality values than required. In these cases, high-quality wheat can be blended with low-quality wheat to balance the quality, thereby achieving a better overall value.

This problem can be described by the following model. Assume that n is the number of loads, m is the number of grades, L_i represents a load, and G_j represents a grade. Also, L, G, p_i, and q_j represent the set of all loads, the set of all grades, the price per tonne of load L_i, and the price per tonne of grade G_j wheat, respectively. Consider that x is the decision variable vector defined by x_ij, which is the number of tonnes from load L_i that is blended into a grade G_j lot. The objective of this blending problem is to

maximize f(x) = Σ_{i=1..n} Σ_{j=1..m} (q_j − p_i) · x_ij.    (1)

In (1), (q_j − p_i) · x_ij is the profit earned when load L_i is blended into a lot with grade G_j. This reflects the fact that maximizing the profit generated by blending is desirable.

Then, t_i is the original number of tonnes of load L_i, and (2) and (3) indicate that the total tonnes of load L_i used in blending should always be greater than or equal to 0 and less than or equal to its original tonnage:

x_ij ≥ 0, for all i, j,    (2)
Σ_{j=1..m} x_ij ≤ t_i, for all i.    (3)

Equations (4) and (5) are the constraints on the quality standards of each grade. k represents one quality attribute, for example, the protein percentage. a_ik, max_jk, and min_jk are the value of quality attribute k for load L_i, the maximum requirement of quality attribute k for grade G_j, and the minimum requirement of quality attribute k for grade G_j, respectively. The weighted average of quality attribute k for the blended lot with grade G_j should always be within the min/max range:

Σ_{i=1..n} a_ik · x_ij ≤ max_jk · Σ_{i=1..n} x_ij, for all j, k,    (4)
Σ_{i=1..n} a_ik · x_ij ≥ min_jk · Σ_{i=1..n} x_ij, for all j, k.    (5)
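As an illustration, constraints (2)–(5) can be checked directly for a candidate allocation. A sketch in Python (the dictionary layout is our own, not from the paper):

```python
def is_feasible(x, tonnes, attrs, bounds):
    """Check constraints (2)-(5) for an allocation x.

    x:      {(i, j): tonnes of load i blended into the grade-j lot}
    tonnes: {i: original tonnes t_i of load i}
    attrs:  {(i, k): value a_ik of quality attribute k for load i}
    bounds: {(j, k): (min_jk, max_jk) for grade j, attribute k}
    """
    if any(v < 0 for v in x.values()):                          # (2)
        return False
    for i, t in tonnes.items():
        if sum(v for (li, _), v in x.items() if li == i) > t:   # (3)
            return False
    for j in {gj for _, gj in x}:
        lot = sum(v for (_, gj), v in x.items() if gj == j)
        if lot == 0:
            continue
        for k in {k for _, k in bounds}:
            b = bounds.get((j, k))
            if b is None:
                continue
            # weighted-average attribute value of the lot
            avg = sum(attrs[i, k] * v for (i, gj), v in x.items() if gj == j) / lot
            lo, hi = b
            if not (lo <= avg <= hi):                           # (4), (5)
                return False
    return True

# Figure 1: blending all of L1 and L2 into one G1 lot is feasible.
feasible = is_feasible({("L1", "G1"): 100, ("L2", "G1"): 100},
                       {"L1": 100, "L2": 100},
                       {("L1", "protein"): 11.5, ("L2", "protein"): 10.5},
                       {("G1", "protein"): (11.0, 12.5)})  # True
```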

Linear constraints usually cannot model real-world problems precisely. In our problem, there are two nonlinear constraints involved, which make the problem quite unique.

Firstly, the Australian standards suggest that the weighing of wheat is precise down to the 10-kilogram range. Thus x is required to be an integer vector, since tonne-based weighting is used. This hard constraint corresponds to

x_ij ∈ Z, x_ij ≥ 0, for all i, j.    (6)

There is also one more constraint which further complicates the problem. As proposed, it is possible to take just a part of a load to use in blending, known as a split. However, the total number of splits allowed for the entire blending plan is limited, and this limit may differ from grower to grower. This constraint is included in

Σ_{i=1..n} ( Σ_{j=1..m} ⌈ x_ij / t_i ⌉ − ⌈ ( Σ_{j=1..m} x_ij ) / t_i ⌉ ) ≤ s,    (7)

where s is the number of splits allowed and ⌈x⌉ is the ceiling function, which returns the smallest integer not less than x.

3. Related Work

In this section, related algorithms for solving the general blending problem are detailed, and a brief introduction to epsilon level constraint handling is included.

3.1. Linear Programming

Linear programming (LP) is an optimization technique that has been designed for addressing continuous space (decision variables are continuous) optimization problems. LP requires that the objective function and constraints are all linear, and LP algorithms are able to solve such optimization problems to optimality. There are many methods to solve linear programming problems, such as the simplex, criss-cross, and interior point methods [2].

In this blending problem, the objective function (1) and constraints (2), (3), (4), and (5) are all linear. Thus the linear relaxed version, which only considers (1) to (5), can be solved efficiently using a linear programming algorithm. There have been a few attempts to solve similar blending problems (with only linear constraints) using linear programming algorithms, especially before the early 1990s [3].

However, for this problem, (6) and (7) affect the model significantly. The result from a linear-relaxed model might break either or both of these constraints. Firstly, linear programming operates in continuous space; thus there is no guarantee that the result is feasible for (6). Secondly, the result might use any number of splits, which breaks (7). Both constraints are important for the business: constraint (6) is clearly stated in the Australian standards, and (7) comes from the capacity limitation of the shared storage space. In addition, (7) is also used by the business to control hidden operational costs.

We can partially solve the problem of (6) by rounding the results. A simple half-up rounding will do the job, but then the result is no longer guaranteed to satisfy all the constraints from (2) to (5). In our experiments, the result after half-up rounding remains feasible in around 30% of cases. Some heuristic based rounding methods can increase this to 60%, but those methods are computationally expensive and are not the focus of this paper.
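For reference, half-up rounding differs from Python's built-in round(), which rounds halves to even. A sketch of the rounding step (the feasibility re-check against (2)–(5) is still required afterwards):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x):
    """Round to whole tonnes, halves upward (built-in round(0.5) would give 0)."""
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

# A fractional LP allocation, rounded to satisfy the integer constraint (6):
lp_solution = {("L1", "G1"): 99.5, ("L2", "G1"): 66.25}
rounded = {k: round_half_up(v) for k, v in lp_solution.items()}
# rounded == {("L1", "G1"): 100, ("L2", "G1"): 66}
```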

Again, we can use rounding (if the variable representation is transformed from x_ij to x_ij/t_i, capping the cases where the value is rounded up to 1) to address the problem of (7). However, the modifications needed are significant, and feasibility of the solution is not guaranteed either. The result is also quite possibly suboptimal, since the extra splits used usually contribute major sources of profit.

3.2. Mixed Integer Programming

Blending problems are also often modelled as mixed integer programming problems, especially for real-world cases [4]. Integer programming (IP) is a type of linear programming in which decision variables are integers, and mixed integer linear programming (MILP) is a variant where only some of the variables are constrained to be integers. There are different methods for IP/MILP: some are exact (such as methods which use branch and bound or cutting planes) and some are approximation methods. In the exact methods, normally the relaxed version of the problem is solved by LP, and then this information is used (e.g., in branch and bound) to find optimal solutions. However, the time complexity of these methods is exponential [3].

There are many studies using exact algorithms to solve blending problems. Bilgen and Ozkarahan proposed a mixed-integer linear programming model for optimizing a wheat supply chain. The objective is to minimize the total cost for blending, loading, transportation, and storage [5]. Ashayeri et al. apply the model to the blending of chemical fertilizers [6]. Jia and Ierapetritou also use a mixed-integer linear programming model to optimize the blending of gasoline [7]. The MILP model is also used in the blending of water [8] and oil [9].

MILP models the problem more precisely than LP, since (6) is not relaxed. However, (7) is still not handled. In addition, execution speed limits the use of exact methods here: one grower can have up to 700 loads, and exact algorithms may need days to finish. Thus, exact methods are not applicable for this problem. In fact, unlike academic researchers, real-world users are usually more concerned about the speed of the tool than the optimality of the solution. Users might be happy to have a cup of coffee while waiting for the result, but, in general, waiting for hours is not acceptable, especially in decision support systems. As a rule of thumb, a casual user usually prefers a tool that is fast and generates a quality result, though not necessarily the optimal result.

3.3. Metaheuristic

Metaheuristic algorithms are also a popular choice for solving complex mixed-integer programming problems [10]. Examples include applying evolution strategies to the problem of optimal multilayer coating design [11] and to optimize chemical engineering plants [12]. Other cases include an ant colony system for optimizing electrical power distribution networks [13] and genetic algorithms to optimize antenna design [14], the deployment of patrol manpower [15], and exosensor distribution for smart home systems [16]. Yokota et al. proposed a genetic algorithm to solve nonlinear mixed integer programming [17], and there are many other algorithms created for solving general MILP [18–20].

However, those algorithms are either too general or too specific for the underlying problem. To obtain the best result, real-world constraints like (7) usually need specially designed methods and intensive tuning [21].

3.4. Evolutionary Algorithm

An evolutionary algorithm (EA) is a stochastic population-based metaheuristic that mimics biologically inspired operators such as mutation, recombination, and selection. In an EA, a set (known as the population) of initially generated candidate solutions (known as individuals) is processed iteratively (generations: the main loop of the EA). In each generation, a subset of the individuals in the population is selected (via the selection operator, mimicking competition between individuals). The selected individuals are then modified (via the mutation and/or recombination operators), resulting in a new set of individuals. This subset is merged into the original population, and after a selection process (mimicking the “survival of the fittest”), a new population is generated. This process is repeated until a termination criterion is met (such as reaching the maximum generation limit or the solution not improving for a long time) [22].
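The loop just described can be condensed into a few lines. The sketch below is a generic (1 + λ) variant on a toy maximization problem, not the paper's algorithm; all names are illustrative:

```python
import random

def evolve(fitness, init, mutate, offspring=4, max_gens=200, seed=0):
    """Minimal (1 + lambda) evolutionary loop: mutate, evaluate, keep the best."""
    rng = random.Random(seed)
    best, best_fit = init, fitness(init)
    for _ in range(max_gens):
        for _ in range(offspring):
            cand = mutate(best, rng)
            f = fitness(cand)
            if f >= best_fit:  # elitist survivor selection
                best, best_fit = cand, f
    return best, best_fit

# Toy problem: maximize the number of ones in a 20-bit vector.
n = 20
flip = lambda x, rng: [b ^ (rng.random() < 1 / n) for b in x]  # per-bit flip
best, fit = evolve(sum, [0] * n, flip)
```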

In many practical cases, it has been reported that hybridizing an EA with other methods is effective [23]. There are many ways to do this. For example, one way is to incorporate other methods to create problem dependent operators [24]. Another is to apply another method to improve the final solutions found by the EA [25]; it is also possible to use a problem specific representation [26] or to run one or more algorithms interactively [27].

3.5. Heuristic Algorithm

One existing tool has been used by growers to help them build the blending plan, and it uses a heuristic based algorithm. The heuristic is based on the fact that protein percentage is the main attribute differentiating grades. Thus the algorithm tries to find a load that has the best ratio. Given a load L_i and a target lot with grade G_j, the ratio can be calculated as

r_ij = (q_j − p_i) / (Pmin_j − pr_i),    (8)

where q_j is the unit price of grade G_j wheat, p_i is the unit price of load L_i, Pmin_j is the minimum protein requirement of grade G_j, and pr_i is the protein percentage of load L_i, respectively.
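Assuming the profit-per-protein-gap form of the ratio described above (our reading of the formula), the selection criterion can be sketched as:

```python
def blend_ratio(grade_price, load_price, grade_min_protein, load_protein):
    """Profit gained per percentage point of protein still needed for the upgrade.

    A large ratio means a cheap upgrade: a big price gain for a small protein gap.
    """
    profit = grade_price - load_price         # $/tonne gained by regrading
    gap = grade_min_protein - load_protein    # protein % the load is short by
    return profit / gap

# L3 (10.0% protein, valued as G2 at $220) targeted at G1 ($240, min 11.0%):
# blend_ratio(240, 220, 11.0, 10.0) == 20.0
```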

After that, the algorithm tries to find one or more companion loads which have better quality attributes to improve the weighted average quality. The combination of the selected load, the target lot, and the companion loads is called a blend. The algorithm stops if it cannot find any profitable blend.

The method used to find the companion loads is an exhaustive search over all combinations of 3 (or fewer) loads. The whole process is summarized in Algorithm 1. The NOT_VALID method tests whether any constraint violation is introduced. The NO_PROFIT method tests whether any profit is generated.

hasNext = true
while hasNext
  hasNext = false
  l, g = SELECT_BY_BEST_RATIO(L)
  blends = COMBINATIONS(L, l, g)
  best = null
  for each blend in blends
    if NOT_VALID(blend)
      continue
    end if
    if NO_PROFIT(blend)
      continue
    end if
    if best == null or blend is better than best
      best = blend
    end if
  end for
  if best is not null
    APPLY(best)
    hasNext = true
  end if
end while

This algorithm was a lot faster than blending manually, but the generated result was often suboptimal. The tool could solve some simple problems, but for more complex cases the user typically used it to generate a base solution and then tweaked it to get a better result (in fact, our proposed algorithm follows the same approach as the users: it generates an initial solution first and then tweaks it). The users were generally happy with the tool but were always seeking a better one that could consistently generate a quality blending plan while keeping the execution time short.

3.6. Epsilon Level Constraint Handling

Epsilon (ε) level constraint handling (εLCH) is a method that transforms constrained optimization problems into unconstrained ones [28]. The transformation is done by replacing the ordinary comparison operator with the ε level comparison operator, which combines constraint violation values and objective values when evaluating candidate solutions.

In short, the ε level comparison compares two solutions by their constraint violation values first. The solution with the lower constraint violation value is ranked higher. However, if both violation values are under a small threshold ε, the constraint violation values are ignored, and the two solutions are compared only by their objective function values.

Suppose there are two candidate solutions x_1 and x_2, f_1 and f_2 are their objective values, and φ_1 and φ_2 are their constraint violation values; then the ε level comparison operator <_ε is defined by the following:

(f_1, φ_1) <_ε (f_2, φ_2)  ⇔  f_1 < f_2, if φ_1, φ_2 ≤ ε or φ_1 = φ_2;
                               φ_1 < φ_2, otherwise.
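A sketch of the comparison operator for a minimization setting, following the description above (names are ours):

```python
def eps_less(f1, phi1, f2, phi2, eps):
    """True if solution 1 ranks ahead of solution 2 under the epsilon level comparison."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2     # both violations within the threshold: objectives decide
    return phi1 < phi2     # otherwise: the smaller violation wins

# With eps = 0.1, small violations are ignored:
# eps_less(1.0, 0.05, 2.0, 0.08, 0.1) -> True  (objectives decide)
# eps_less(5.0, 0.0, 1.0, 0.5, 0.1)  -> True   (violation decides)
```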

There are many ways to control the threshold ε. The formula used by the proposed algorithm is given in Section 4.

4. The Proposed Hybrid Evolutionary Algorithm

The proposed algorithm contains four stages: search space reduction, initialization, evolutionary loop, and local search. The overall flow is shown in Algorithm 2.

BEGIN
  search space reduction (Stage 1)
  initialization (Stage 2)
  while not terminated
    mutation (Stage 3-1)
    heuristic (Stage 3-2)
    simplex (Stage 3-3)
  end while
  local search (Stage 4)
END ALGORITHM

In Sections 4.1 to 4.4, each of these stages is presented. Firstly, the algorithm tries to eliminate all the obviously bad choices using predefined rules. Then the algorithm solves the linear-relaxed version of the problem and uses the result as a clue to build an initial solution to the nonrelaxed version. After that, the algorithm tweaks the solution in an iterative fashion. In each iteration, an evolutionary algorithm optimizes which loads to blend, a heuristic chooses the right loads to split, and a linear programming algorithm finds the optimal way to split. The final tune-up is done by a local search.

Additionally, Section 4.5 introduces a specially designed constraint handling method that is incorporated into the algorithm to encourage the exploration of infeasible regions. The local search in the main loop is covered in Section 4.6.

4.1. Search Space Reduction

In this stage, the algorithm tries to eliminate some obviously bad choices before it starts the stochastic process. This is done by a rule-based filtering process. The rules are based on advice from domain experts and experimental results. Some rules include the following.
(i) Never blend a load into a lot that requires at least another 2% of protein. As protein percentages generally range from 10% to 14%, overcoming a 2% margin is too costly.
(ii) Never blend a load that has an extra 1.5% of protein above the grade requirement, unless there are only a few choices. This rule attempts to save the good-quality loads for a better global result.
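A sketch of the filtering step under these two rules (the exception for loads with few choices is omitted; the thresholds follow the text, while the data layout is ours):

```python
def filter_blends(loads, grades, max_deficit=2.0, max_surplus=1.5):
    """Drop (load, grade) pairs ruled out by the two domain rules above.

    loads:  {name: protein %}, grades: {name: minimum protein %}
    """
    keep = []
    for li, p in loads.items():
        for gj, need in grades.items():
            if need - p >= max_deficit:    # rule (i): protein gap too large
                continue
            if p - need >= max_surplus:    # rule (ii): wastes a good load
                continue
            keep.append((li, gj))
    return keep

loads = {"L1": 11.5, "L2": 10.5, "L3": 8.9}
grades = {"G1": 11.0, "G2": 10.0}
kept = filter_blends(loads, grades)
# L3 -> G1 is dropped (needs 2.1% more protein);
# L1 -> G2 is dropped (1.5% above the G2 requirement).
```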

After this filtering process, the search space (number of possible blends) of the problem is greatly reduced.

The key to this stage is to ensure that no bad choices are made. To do so, the thresholds of the rules are carefully chosen; these values almost guarantee that the filtering has no negative impact on finding the optimal solution.

4.2. Initialization

Since execution speed is crucial for this problem, the algorithm uses a heuristic based initialization method instead of random initialization. This might sacrifice some diversity of solutions, but the algorithm obtains a good basic solution with the least computation.

The algorithm starts by applying the simplex algorithm [3] to solve a linear-relaxed version of the problem. The linear-relaxed version is the same problem but considers only constraints (2) to (5). Then the algorithm uses the heuristic (8) to build a solution to the nonrelaxed problem, but with a threshold of 15: only loads with a profit-protein ratio greater than 15 are considered. After that, the algorithm extracts the common parts of both solutions and generates an initial solution based on them.
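The extraction of common parts can be pictured as an intersection of the two plans. A sketch (taking the smaller tonnage where the plans agree is our assumption):

```python
def common_allocations(lp_plan, heuristic_plan):
    """Keep only the blends both methods agree on, at the smaller tonnage.

    Plans map (load, grade) -> tonnes; the intersection seeds the initial
    solution, and disagreements are left for the evolutionary loop.
    """
    return {
        key: min(lp_plan[key], heuristic_plan[key])
        for key in lp_plan.keys() & heuristic_plan.keys()
    }

lp_plan = {("L1", "G1"): 100, ("L2", "G1"): 100, ("L3", "G2"): 80}
heur_plan = {("L1", "G1"): 100, ("L3", "G2"): 50}
seed = common_allocations(lp_plan, heur_plan)
# seed == {("L1", "G1"): 100, ("L3", "G2"): 50}
```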

The threshold value of 15 is very high for the profit-protein ratio; this ensures that the algorithm is not too greedy at the beginning. The simplex result is used to double-check this and also serves as a clue for reaching the global optimum. The decisions made in this step are then fixed and cannot be modified by later stages.

The purpose of this stage is to generate a basic solution with no or few bad choices and to further reduce the search space. Since a very high threshold is chosen, we can ensure that the decisions made are all obviously good ones.

4.3. Evolutionary Loop

This is the main loop where new solutions are generated. It contains an evolutionary algorithm to optimize which loads to blend, a heuristic to choose the right loads to split, and a linear programming algorithm to find the optimal way to split. The operators used in this stage are as follows.
(i) Mutation: for a randomly selected load, change its allocation to a random lot.
(ii) Heuristic: choose which loads to split. For all possible combinations of load L_i and target lot with grade G_j, apply 2-way tournament selection to choose the combinations with the best profit-protein ratio, as in (8).
(iii) Simplex algorithm: the loads selected in the heuristic step form a subproblem, which is solved with unlimited splits allowed using the simplex algorithm.

In each iteration, the algorithm modifies the existing solution by applying the mutation operator one or more times (with some probability). The probabilities ensure that the algorithm can perform bigger variations. The generated new solution contains no splits and is called the raw solution. Then, the algorithm iterates over all the loads using the heuristic mentioned above and tries to find good candidates to split. After that, the algorithm builds a linear-relaxed model with only the selected loads and solves it using the simplex algorithm. The generated solution is called the split solution and always satisfies (7), since the number of variables in the model is bounded by the split limit.

Thus, each solution actually has two forms: the raw form and the split form. Note that the mutation operates only on the raw form, while the simplex algorithm results in a solution in the split form. Also, the result of the simplex algorithm might not satisfy (6); in those cases, rounding is applied.

4.4. Local Search

Random modification is usually very inefficient when the result is close to the optimal point: finding any improvement requires a great deal of luck, and it is often much more time consuming than an exhaustive search. Thus, a local search method is applied at the end to fine-tune the result. It tries to find all possible combinations that could give an increase in profit. The procedure is as follows.
(i) For all possible combinations of load L_i and target lot with grade G_j, apply the one which gives the most profit, until there is no combination that could generate any profit.
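The posttuning procedure is a plain greedy loop. A sketch with an illustrative gain callback (the API is ours):

```python
def local_search(moves, gain):
    """Greedily apply the most profitable move until no move is profitable.

    moves: candidate (load, lot) reallocations; gain(move, applied) returns the
    profit of a move given the moves applied so far (illustrative signature).
    """
    applied = []
    while moves:
        best = max(moves, key=lambda m: gain(m, applied))
        if gain(best, applied) <= 0:
            break                     # no remaining move makes money
        applied.append(best)
        moves = [m for m in moves if m != best]
    return applied

gains = {"a": 5, "b": 3, "c": -1}
order = local_search(["a", "b", "c"], lambda m, applied: gains[m])
# order == ["a", "b"]; "c" is rejected because it loses money
```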

4.5. Constraint Handling

As a highly constrained problem, the search space is generally separated by the constraints into many isolated feasible regions. The simplex result from the initialization stage is used here to guide the search in jumping out of a single feasible region. The idea is to depenalise any blend that can also be found in the simplex result. Such a blend might be a bad move by itself but is also possibly a vital part of a bigger, profitable blend.

More specifically, if any blend violates any of the constraints (2) to (5) and the same blend can be found in the linear-relaxed result, its constraint violation value is reduced. In the reduction formula, v is the original constraint violation value, v′ is the reduced constraint violation value, s is the number of splits allowed, and u is the number of splits used by the simplex result, respectively.

Also, in this algorithm, solutions are compared using the ε level comparison operator. The value of ε is set according to the following equations:

ε(0) = φ(x_0),
ε(t) = ε(0) · (1 − t / T), 0 < t < T,
ε(t) = 0, t ≥ T,

where ε(0) is the initial value, φ(x_0) is the constraint violation value of the best solution in the initialization step, ε(t) is the value at iteration t, and T is the iteration limit. This formula suggests that the method focuses on finding feasible solutions when t ≥ T.

4.6. Local Search in the Evolutionary Loop

Within the main evolutionary loop, each generated solution also gets a chance to perform a single local search step; this is used to speed up convergence. Many solutions of different quality are generated during the evolutionary loop, and they are all good starting points for the local search. The algorithm performs only a single local search step, which ensures that the result does not suffer significantly from premature convergence.

4.7. Summary

The complete steps are shown in Algorithm 3.

input: L, G, s
output: x_best
SEARCH_SPACE_REDUCTION()
x_best = INITIALIZE()
while none of the termination conditions is met
  x = x_best
  for k = 1 to λ
    x' = MUTATE(x)
    r = RANDOM(0, 1)
    while r < p_m
      x' = MUTATE(x')
      r = RANDOM(0, 1)
    end while
    x' = ROUND(SIMPLEX(x', GET_SPLIT(x')))
    r = RANDOM(0, 1)
    if r < p_l and x' is feasible
      x' = LOCAL_SEARCH(x')
    end if
    if x' is better than x_best
      x_best = x'
    end if
  end for
  x_best = LOCAL_SEARCH_ALL(x_best)
end while

The parameters are:
(i) λ: the number of offspring;
(ii) p_m: the probability of applying an additional mutation;
(iii) p_l: the probability of applying one local search step within the evolutionary loop.

And the termination conditions are defined as:
(1) no improvement after a given number of consecutive iterations;
(2) the total number of evaluations exceeds a given limit.

5. Experimental Results

In this section, the proposed algorithm is applied to 20 selected real-world and 73 artificial test cases. All real-world test cases are created using data from past years and should cover the most typical scenarios. The proposed algorithm is compared with the existing heuristic based algorithm, and the results are averaged over 20 runs for each test case.

5.1. Parameters Setting

The proposed algorithm has been implemented as a web service, running on distributed servers. To improve convergence, we always set the population size to 1 and use elitism selection. The main parameters in this experiment were set as follows:(i),(ii),(iii),(iv),(v).

The values of these parameters were selected manually. This set of parameters gives the best averaged result on the 10 real-world test cases (R1–R10; see Section 5.2).

Larger population sizes were also tested. They are completely applicable, but there is no fundamental improvement up to a size of 4, and beyond that the execution time increases significantly. In cases where the population size is more than 1, 2-way tournament selection is used.

5.2. Test Cases

The 10 real-world test cases (R1–R10) were selected by domain experts, aiming to cover the most typical scenarios. The number of loads per case ranges from 26 to 718 and is the dominant factor in complexity. R8 is the largest case and quite possibly the most complex. R6 is a combined case (loads from two growers) to test an extreme scenario. The profit generated ranges from thousands to a quarter-million dollars. Note that test cases R1–R10 do not have any limit on the number of splits allowed.

The results of the proposed algorithm are compared with those of the heuristic based tool in current use. The benchmark is the known best results, which were optimized manually by domain experts (supported by computer tools). The experts spent weeks on those cases and believe the results are good enough to serve as the benchmark.

Test cases RS1–RS10 are the same as R1–R10, but with only 1 split allowed. These cases are more constrained and are harder (slower) to optimize. Note that there are no known best results for these cases (the proposed hybrid algorithm outperforms the manually optimized ones). Instead, we use the known best results from R1–R10 as upper bounds.

The 28 artificial tests (A1–A28) are simple test cases that contain many typical pitfalls. The number of loads ranges from 3 to 7. The first 20 tests (A1–A20) do not require any split to reach the optimal solution; the remaining eight (A21–A28) do.

There are 45 further artificial tests (AE1–AE45), formed as pairwise combinations of the real-world test cases R1–R10. These tests are more time consuming but also offer more room for optimization. The linear-relaxed result serves as the upper bound.
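The count of 45 combined cases follows directly from taking every unordered pair of the 10 real-world cases; a quick check, using the load counts from Table 3:

```python
from itertools import combinations

# Number of loads per real-world case (from Table 3).
loads = {"R1": 34, "R2": 145, "R3": 26, "R4": 332, "R5": 127,
         "R6": 718, "R7": 129, "R8": 610, "R9": 49, "R10": 47}

pairs = list(combinations(loads, 2))   # all unordered pairs
assert len(pairs) == 45                # C(10, 2) = 45 combined tests

# e.g. combining R1 and R2 yields a case with 34 + 145 = 179 loads
print(loads["R1"] + loads["R2"])  # 179
```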

5.3. Results

Table 3 shows the results for cases with an unlimited number of splits allowed. With the split-limit constraint relaxed, these cases are relatively easy. The proposed algorithm found a close-to-optimal result in all cases, while the heuristic algorithm only succeeded in the simplest ones. R8 is the only case where the hybrid algorithm finishes more than 1% from the known best result. In all cases, the hybrid algorithm is significantly faster.


| Test case | Number of loads | Known best: splits used | Heuristic: % to known best | Heuristic: splits used | Heuristic: time (s) | Hybrid: % to known best | Hybrid: splits used | Hybrid: time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R1 | 34 | 3 | 10.3 | 5 | 15.3 | 0 | 3 | 1.4 |
| R2 | 145 | 2 | 17.9 | 3 | 11.9 | 0 | 2 | 2.3 |
| R3 | 26 | 0 | 0 | 0 | 1.4 | 0 | 0 | 1.0 |
| R4 | 332 | 1 | 0 | 0 | 6.5 | 0 | 0 | 5.9 |
| R5 | 127 | 2 | 23.7 | 5 | 22.1 | 0 | 2 | 2.2 |
| R6 | 718 | 2 | 36.5 | 46 | 732.4 | 0 | 3 | 49.9 |
| R7 | 129 | 2 | 15.8 | 2 | 28.9 | 0 | 3 | 2.2 |
| R8 | 610 | 5 | 20.4 | 22 | 139.7 | 1.2 | 6 | 25.3 |
| R9 | 49 | 2 | 2.2 | 7 | 10.1 | 0 | 2 | 1.6 |
| R10 | 47 | 2 | 9.6 | 3 | 14.5 | 0 | 2 | 2.9 |

Table 4 shows the results for cases with only one split allowed. In real-world use, the split limit is normally set between 1 and 10, depending on the user's choice. The proposed algorithm still outperforms the heuristic algorithm in both quality and speed, and the results are very close to the upper bound except for RS1. Note that for RS5, RS6, and RS8, the heuristic algorithm generates better results than in the corresponding unlimited-split cases. This suggests that the heuristic algorithm easily gets stuck in local optima.


| Test case | Number of loads | Heuristic: % to upper bound | Heuristic: splits used | Heuristic: time (s) | Hybrid: % to upper bound | Hybrid: splits used | Hybrid: improvement over heuristic | Hybrid: time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RS1 | 34 | 10.6 | 1 | 11.7 | 6.6 | 1 | 4.4 | 1.4 |
| RS2 | 145 | 18.1 | 1 | 9.4 | 1.8 | 1 | 19.8 | 2.6 |
| RS3 | 26 | 0 | 0 | 1.4 | 0 | 0 | 0 | 1.0 |
| RS4 | 332 | 0 | 0 | 6.5 | 0 | 0 | 0 | 5.9 |
| RS5 | 127 | 23 | 1 | 17.2 | 0.1 | 0 | 29.6 | 3.5 |
| RS6 | 718 | 18.4 | 1 | 572.5 | 0.3 | 1 | 22.2 | 58.6 |
| RS7 | 129 | 15.8 | 1 | 28.4 | 0.5 | 0 | 18.1 | 2.3 |
| RS8 | 610 | 18.4 | 1 | 85.9 | 1.3 | 0 | 20.9 | 37.8 |
| RS9 | 49 | 2.7 | 0 | 7.8 | 0.5 | 0 | 2.2 | 1.6 |
| RS10 | 47 | 9.8 | 1 | 12.1 | 0.3 | 1 | 10.4 | 2.9 |

Table 5 shows the results of the artificial tests (A1–A28). To pass a test, the generated blending plan must be identical to, or of equal value with, the precalculated optimal result. The proposed algorithm passes all the tests, while the heuristic fails on 5 cases.


| Test case | Number of loads | Heuristic algorithm | Hybrid algorithm |
| --- | --- | --- | --- |
| A1 | 4 | Pass | Pass |
| A2 | 7 | Pass | Pass |
| A3 | 6 | Pass | Pass |
| A4 | 6 | Pass | Pass |
| A5 | 3 | Pass | Pass |
| A6 | 5 | Pass | Pass |
| A7 | 4 | Pass | Pass |
| A8 | 6 | Pass | Pass |
| A9 | 3 | Pass | Pass |
| A10 | 3 | Fail | Pass |
| A11 | 3 | Pass | Pass |
| A12 | 3 | Pass | Pass |
| A13 | 5 | Pass | Pass |
| A14 | 3 | Pass | Pass |
| A15 | 5 | Pass | Pass |
| A16 | 4 | Pass | Pass |
| A17 | 7 | Pass | Pass |
| A18 | 5 | Pass | Pass |
| A19 | 4 | Fail | Pass |
| A20 | 5 | Pass | Pass |
| A21 | 7 | Pass | Pass |
| A22 | 7 | Pass | Pass |
| A23 | 3 | Fail | Pass |
| A24 | 6 | Pass | Pass |
| A25 | 4 | Fail | Pass |
| A26 | 4 | Fail | Pass |
| A27 | 3 | Pass | Pass |
| A28 | 4 | Pass | Pass |

Table 6 shows the results for the combination cases (AE1–AE45). Again, the proposed algorithm outperforms the heuristic algorithm. AE5, AE13, AE20, AE27, and AE44 are the only cases where the results are more than 3% from the linear-relaxed upper bound. The table also shows that the heuristic algorithm rarely generates good solutions for large test cases (such as AE26, AE28, AE33, AE36, and AE37). This suggests that the heuristic algorithm may be too greedy at the beginning and cannot escape the resulting local optima. In contrast, the results of the proposed algorithm do not suffer much from a large number of loads. Additionally, the running time of the proposed algorithm grows significantly more slowly than that of the heuristic algorithm.


| Test case | Number of loads | Heuristic: % to upper bound | Heuristic: splits used | Heuristic: time (s) | Hybrid: % to upper bound | Hybrid: splits used | Hybrid: improvement over heuristic | Hybrid: time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AE1 | 179 | 19.7 | 2 | 21.4 | 2.4 | 2 | 21.49 | 5.5 |
| AE2 | 60 | 4.3 | 1 | 18.0 | 1.8 | 1 | 2.67 | 3.0 |
| AE3 | 366 | 38.0 | 8 | 192.6 | 1.3 | 5 | 59.36 | 7.5 |
| AE4 | 161 | 22.7 | 1 | 22.3 | 1.4 | 4 | 27.46 | 5.5 |
| AE5 | 752 | 36.0 | 6 | 912.1 | 7.8 | 12 | 43.92 | 46.4 |
| AE6 | 163 | 12.5 | 6 | 13.5 | 2.2 | 4 | 11.75 | 4.9 |
| AE7 | 644 | 28.5 | 9 | 491.7 | 1.5 | 8 | 37.85 | 62.5 |
| AE8 | 83 | 4.2 | 1 | 19.1 | 0.7 | 2 | 3.57 | 2.9 |
| AE9 | 81 | 5.2 | 3 | 18.1 | 1.4 | 1 | 4.00 | 3.7 |
| AE10 | 171 | 26.2 | 1 | 21.9 | 2.5 | 3 | 32.20 | 6.3 |
| AE11 | 477 | 20.5 | 8 | 57.4 | 0.9 | 8 | 24.70 | 9.1 |
| AE12 | 272 | 15.3 | 4 | 39.0 | 1.6 | 5 | 16.17 | 4.1 |
| AE13 | 863 | 16.5 | 4 | 738.1 | 4.8 | 7 | 14.01 | 39.2 |
| AE14 | 274 | 19.1 | 5 | 24.7 | 1.0 | 3 | 22.41 | 5.8 |
| AE15 | 755 | 31.1 | 7 | 438.1 | 1.0 | 9 | 43.67 | 83.1 |
| AE16 | 194 | 20.4 | 2 | 35.6 | 2.1 | 5 | 23.01 | 3.2 |
| AE17 | 192 | 12.5 | 4 | 20.4 | 1.5 | 4 | 12.54 | 5.5 |
| AE18 | 358 | 28.0 | 5 | 145.8 | 2.3 | 5 | 35.67 | 7.3 |
| AE19 | 153 | 5.3 | 4 | 16.6 | 1.9 | 5 | 3.66 | 2.8 |
| AE20 | 744 | 31.3 | 9 | 478.6 | 10.6 | 12 | 30.12 | 77.7 |
| AE21 | 155 | 25.2 | 4 | 15.9 | 1.3 | 4 | 32.05 | 6.3 |
| AE22 | 636 | 36.3 | 6 | 405.6 | 2.2 | 9 | 53.64 | 48.1 |
| AE23 | 75 | 15.1 | 3 | 13.9 | 0.8 | 4 | 16.94 | 4.4 |
| AE24 | 73 | 17.6 | 2 | 16.7 | 0.3 | 2 | 20.86 | 3.2 |
| AE25 | 459 | 25.9 | 4 | 89.5 | 1.4 | 7 | 33.05 | 18.0 |
| AE26 | 1050 | 36.0 | 18 | 654.7 | 1.7 | 13 | 53.62 | 164.4 |
| AE27 | 461 | 25.3 | 2 | 358.6 | 3.8 | 1 | 28.70 | 26.8 |
| AE28 | 942 | 26.1 | 5 | 1796.6 | 0.8 | 10 | 34.26 | 120.9 |
| AE29 | 381 | 8.1 | 12 | 746.7 | 0.6 | 6 | 8.19 | 12.2 |
| AE30 | 379 | 15.4 | 2 | 510.5 | 0.2 | 4 | 18.00 | 9.6 |
| AE31 | 845 | 32.5 | 9 | 757.2 | 1.7 | 8 | 45.59 | 89.2 |
| AE32 | 256 | 24.6 | 2 | 289.0 | 0.6 | 5 | 31.83 | 12.8 |
| AE33 | 737 | 52.2 | 6 | 1038.5 | 0.2 | 5 | 108.92 | 41.9 |
| AE34 | 176 | 28.9 | 4 | 23.6 | 0.9 | 3 | 39.36 | 7.0 |
| AE35 | 174 | 9.3 | 5 | 32.5 | 1.4 | 6 | 8.68 | 5.1 |
| AE36 | 847 | 22.7 | 11 | 998.2 | 0.8 | 9 | 28.21 | 130.7 |
| AE37 | 1328 | 51.6 | 10 | 1104.3 | 2.6 | 6 | 101.24 | 219.1 |
| AE38 | 767 | 54.7 | 7 | 501.3 | 1.6 | 4 | 117.31 | 31.5 |
| AE39 | 765 | 36.1 | 4 | 695.0 | 2.3 | 4 | 52.87 | 72.5 |
| AE40 | 739 | 34.3 | 6 | 390.0 | 2.1 | 7 | 48.82 | 60.8 |
| AE41 | 178 | 25.4 | 3 | 47.3 | 0.3 | 4 | 33.58 | 4.1 |
| AE42 | 176 | 21.0 | 4 | 34.8 | 0.6 | 3 | 25.90 | 8.7 |
| AE43 | 659 | 41.6 | 8 | 767.3 | 1.3 | 5 | 69.17 | 37.7 |
| AE44 | 657 | 26.4 | 5 | 568.2 | 3.8 | 8 | 30.73 | 23.1 |
| AE45 | 96 | 11.3 | 2 | 27.4 | 0.3 | 3 | 12.37 | 3.7 |

6. Performance Evaluation

The proposed hybrid algorithm consists of four stages: search space reduction, initialization, the evolutionary loop, and the final tune-up. In this section, the contribution of each stage is evaluated.
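The four stages can be summarized schematically as follows; this is non-runnable pseudocode, and every helper name is ours rather than the paper's:

```python
def optimize(loads, split_limit):
    # Stage 1: rule-based filtering to reduce the search space
    candidates = reduce_search_space(loads)
    # Stage 2: initialization from the LP relaxation and a heuristic plan
    lp_plan = solve_linear_relaxation(candidates)      # simplex solver
    plan = merge_common_parts(lp_plan, heuristic_plan(candidates))
    # Stage 3: evolutionary loop (mutation, local search, LP repair)
    while not termination_reached():
        plan = evolutionary_step(plan, split_limit)
    # Stage 4: final tune-up by local search
    return local_search_tune_up(plan)
```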

Tables 7 and 8 show the results when each of these functions is disabled in turn. All figures are percentages relative to the full algorithm: the value columns indicate the loss of solution quality, and the time columns indicate the change in speed (less than 100 means the reduced algorithm runs faster than the full version).


| Test case | No search space reduction: value (%) | No search space reduction: time (%) | No initialization: value (%) | No initialization: time (%) | No main loop: value (%) | No main loop: time (%) |
| --- | --- | --- | --- | --- | --- | --- |
| R1 | 100 | 95.83 | 100 | 135.09 | 75.53 | 25.65 |
| R2 | 99.13 | 92.86 | 100 | 178.67 | 80.15 | 32.48 |
| R3 | 100 | 93.88 | 100 | 114.50 | 100 | 33.18 |
| R4 | 100 | 92.24 | 100 | 120.91 | 100 | 33.12 |
| R5 | 100 | 92.88 | 100 | 140.90 | 93.34 | 21.39 |
| R6 | 98.79 | 92.01 | 100 | 119.67 | 86.92 | 29.90 |
| R7 | 100 | 92.34 | 100 | 155.93 | 95.11 | 32.20 |
| R8 | 98.77 | 93.76 | 99.94 | 154.80 | 72.92 | 22.78 |
| R9 | 100 | 95.43 | 100 | 111.32 | 100 | 31.79 |
| R10 | 100 | 94.53 | 100 | 135.21 | 95.46 | 26.58 |
| RS1 | 100 | 92.22 | 100 | 112.37 | 75.53 | 25.65 |
| RS2 | 97.50 | 92.37 | 100 | 155.02 | 80.15 | 32.48 |
| RS3 | 100 | 95.68 | 100 | 115.96 | 100 | 33.18 |
| RS4 | 100 | 94.95 | 100 | 143.69 | 100 | 33.12 |
| RS5 | 100 | 93.14 | 100 | 125.32 | 93.34 | 21.39 |
| RS6 | 98.69 | 94.32 | 100 | 112.92 | 86.92 | 29.90 |
| RS7 | 100 | 95.30 | 100 | 195.11 | 95.11 | 32.20 |
| RS8 | 96.50 | 94.41 | 99.21 | 115.04 | 72.92 | 22.78 |
| RS9 | 100 | 94.87 | 100 | 118.70 | 100 | 31.79 |
| RS10 | 100 | 93.47 | 100 | 157.74 | 95.46 | 26.58 |


| Test case | No constraint handling: value (%) | No constraint handling: time (%) | No local search in loop: value (%) | No local search in loop: time (%) | No final tune-up: value (%) | No final tune-up: time (%) |
| --- | --- | --- | --- | --- | --- | --- |
| R1 | 100 | 97.36 | 97.59 | 82.74 | 100 | 99.87 |
| R2 | 85.09 | 95.09 | 82.64 | 77.85 | 100 | 99.14 |
| R3 | 100 | 96.62 | 100 | 84.63 | 100 | 98.68 |
| R4 | 100 | 97.63 | 100 | 80.67 | 100 | 97.59 |
| R5 | 73.62 | 95.70 | 83.74 | 77.46 | 100 | 99.33 |
| R6 | 70.15 | 96.09 | 88.22 | 79.76 | 100 | 98.63 |
| R7 | 100 | 94.52 | 95.59 | 86.82 | 100 | 98.20 |
| R8 | 82.74 | 93.06 | 87.95 | 79.54 | 99.9 | 97.37 |
| R9 | 100 | 95.97 | 100 | 87.22 | 100 | 99.73 |
| R10 | 98.28 | 94.47 | 97.99 | 81.94 | 100 | 97.88 |
| RS1 | 99.52 | 97.72 | 97.69 | 83.66 | 100 | 98.51 |
| RS2 | 70.42 | 95.53 | 88.15 | 81.55 | 99.9 | 99.08 |
| RS3 | 100 | 97.37 | 100 | 89.30 | 100 | 98.07 |
| RS4 | 100 | 94.66 | 100 | 81.50 | 100 | 98.63 |
| RS5 | 95.14 | 95.01 | 81.10 | 78.79 | 100 | 99.62 |
| RS6 | 80.95 | 96.62 | 82.00 | 82.49 | 99.9 | 98.35 |
| RS7 | 97.60 | 97.92 | 92.81 | 80.91 | 100 | 99.31 |
| RS8 | 93.69 | 97.99 | 97.92 | 79.60 | 100 | 97.73 |
| RS9 | 100 | 95.70 | 100 | 89.28 | 100 | 98.26 |
| RS10 | 98.16 | 97.31 | 98.63 | 80.85 | 100 | 99.68 |

The search space reduction stage requires around 7% of the running time. However, it builds a good base for further optimization: in some cases the quality of the solution drops if this stage is missing.

The initialization stage greatly reduces the processing time required; for RS7, the time needed almost doubles without it. For R8 and RS8, initialization also improves the quality of the result.

The main loop contributes the largest improvement in the quality of the generated solution. For cases like R3, R4, and R9, the algorithm can still reach good solutions using only initialization and posttuning, but not for the other cases. The results for RS1–RS10 are identical to those for R1–R10 because, without the main loop, the algorithm cannot use any split.

The constraint handling methods (linear-guided and epsilon level comparison) require little time but can improve the result by up to 30% for nontrivial cases. This suggests that the constraint handling methods help the algorithm escape local optima.
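The epsilon level comparison can be sketched as follows; this is a simplified version of the ε-level ordering of Takahama and Sakai, with solutions represented as (profit, total constraint violation) pairs (the representation is our assumption):

```python
def eps_level_better(a, b, eps):
    """True if solution a beats solution b under epsilon level comparison.

    If both total violations are within eps, the solutions are treated as
    feasible and compared by profit (maximized); otherwise the smaller
    violation wins. This is what lets mildly infeasible but high-profit
    solutions survive and guide the search out of local optima."""
    profit_a, viol_a = a
    profit_b, viol_b = b
    if viol_a <= eps and viol_b <= eps:
        return profit_a > profit_b
    return viol_a < viol_b
```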

The local search in the main loop plays a major role in improving the quality of the solution. It consumes a significant share of the running time, but the cost is worthwhile: as mentioned before, in the later stages of the optimization process an exhaustive search is usually more efficient than stochastic variation.
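The exhaustive neighborhood scan can be sketched as a plain hill-climb; the neighborhood generator here is a hypothetical stand-in for the paper's actual moves (e.g. reassigning one load to another blend):

```python
def local_search(plan, value, neighbors):
    """Repeatedly move to the best improving neighbor; stop at a local
    optimum, i.e. when no neighbor improves the current plan."""
    improved = True
    while improved:
        improved = False
        best = max(neighbors(plan), key=value, default=plan)
        if value(best) > value(plan):
            plan, improved = best, True
    return plan
```

On a toy one-dimensional objective, this climbs step by step to the nearest peak and then stops.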

The final tune-up improves the result slightly in some cases at little extra execution time. Note that the local search in the main loop can partially substitute for the final tune-up, since they are essentially the same method; this stage simply ensures that no profit is left unclaimed.

7. Conclusions and Future Work

In this paper, a hybrid evolutionary algorithm for solving the Australian wheat blending problem is proposed. The algorithm starts with a filtering process, based on predefined rules suggested by domain experts, that reduces the search space. It then generates an initial solution by extracting the common parts of the solution to the linear-relaxed version of the problem and the solution produced by a heuristic method. The main loop combines an evolutionary algorithm, a heuristic method, and the simplex algorithm to improve the solution while maintaining its feasibility. For constraint handling, the result of the linear-relaxed problem is used in conjunction with the epsilon level comparison; these methods help the algorithm explore infeasible regions more efficiently. A final tune-up is performed by a local search method. The proposed algorithm was tested on 20 real-world and 73 artificial test cases. Results show that it always finds equal or better results than the existing heuristic algorithm.

For further study, the parameter settings of this algorithm could be investigated. One promising direction is an adaptive scheme for controlling the mutation probability, the local search probability, and especially the threshold used in LCH.

There are also additional functionalities requested by the growers. Growers sign supply contracts before the harvest and want to fulfil these contracts at minimum cost while maximizing the profit from the remaining product. Additionally, it is sometimes beneficial for a grower to buy wheat from other growers, so the growers also want the optimizer to generate blending plans that consider trading between multiple growers. The blending of other types of wheat has also been requested.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially funded by the ARC Discovery Grants DP0985723, DP1096053, and DP130104395, as well as by Grant N N519 5788038 from the Polish Ministry of Science and Higher Education (MNiSW).


Copyright © 2014 Xiang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

