Abstract

A stand-level, multiobjective evolutionary algorithm (MOEA) for determining a set of efficient thinning regimes satisfying two objectives, that is, value production for sawlog harvesting and volume production for a pulpwood market, was successfully demonstrated for a Eucalyptus fastigata trial in Kaingaroa Forest, New Zealand. The MOEA approximated the set of efficient thinning regimes (with a discontinuous Pareto front) by employing a Pareto-based ranking scheme developed by Fonseca and Fleming (1993), also known as the Multiobjective Genetic Algorithm (MOGA). In this paper we solve the same problem using an improved version of a fitness sharing Pareto ranking algorithm, the Nondominated Sorting Genetic Algorithm (NSGA II), whose original form was developed by Srinivas and Deb (1994), and examine the results. Our findings indicate that NSGA II approximates the entire Pareto front, whereas MOGA only determines a subdomain of the Pareto points.

1. Introduction

Forest estate planning requires a suite of possible thinning regimes for each management unit, the estate being an amalgam of many management units. Goals for the forest estate are satisfied by choosing a mosaic of thinning regimes across all the management units, and the preferred choices are normally arrived at through some form of optimisation [1]. Chikumbo and Nicholas [2] demonstrated how a suite of thinning regimes at the management-unit level might be approximated.

Chikumbo and Nicholas [2] formulated a stand-level optimisation problem to determine a set of efficient thinning regimes. Each regime defined an initial planting stocking, frequency of thinning, timing of thinning, intensity of thinning, final crop number prior to clearfelling, and rotation length. The problem was solved as a multiobjective optimisation to simultaneously maximise sawlog (or value) production and pulpwood (or volume) production, which are conflicting objectives. Traditionally, thinning regimes for most tree crops are determined either for value or for volume production but not for both. This normally means an initially high tree stocking density and a short rotation with no thinning operations for volume production, and a more complex regime with thinning operations, pruning, and longer rotations for value production. The emphasis for value production is on maximising the volume of clear wood per tree, rather than maximising biomass growth on as many trees as possible. Such an approach to forest management does not give the forester the flexibility to deal with volatile markets where the demands for sawlog and pulpwood tend to change rapidly. The ability to supply all products to the markets, with the flexibility to increase supply of whichever product is most in demand, requires a different tack to forest management. Chikumbo and Nicholas [2] demonstrated how to determine a set of efficient thinning regimes that are suitable for both volume and value production in different ratios or "tradeoffs," where the choice of a single regime was made on the basis of the product most in demand.

As such, the critical part of approximating the set of efficient thinning regimes was to find trade-off solutions (i.e., nondominated solutions), where for each solution an improvement in one objective could not be obtained without worsening the other [3]. For the mathematically savvy, nondominated solutions can be defined as follows.

Assume two solutions $a, b \in X$, where $X$ denotes the solution space, and let $f_1$ and $f_2$ be the (maximised) objective functions. Then $a$ is said to dominate $b$ (also written $a \succ b$) if and only if
$$f_i(a) \ge f_i(b) \quad \text{for all } i \in \{1,2\} \quad \text{and} \quad f_j(a) > f_j(b) \quad \text{for at least one } j \in \{1,2\}.$$

However, $a$ is said to cover $b$ ($a \succeq b$) if and only if
$$f_i(a) \ge f_i(b) \quad \text{for all } i \in \{1,2\}.$$

If neither $a$ dominates $b$ nor $b$ dominates $a$, then $a$ is nondominated by $b$ (and vice versa).

Therefore, the set of solutions within the entire search space $X$ that are not dominated by any other member of $X$ is called the Pareto-optimal set.
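As an illustration, the following minimal Python sketch (not part of the original study) implements this dominance test and filters a small set of hypothetical objective vectors down to its nondominated members, assuming both objectives are maximised.

```python
# A minimal sketch of Pareto dominance and nondominated filtering for
# two maximised objectives. The example points are hypothetical.

def dominates(fa, fb):
    """True if objective vector fa dominates fb (maximisation)."""
    return all(x >= y for x, y in zip(fa, fb)) and any(x > y for x, y in zip(fa, fb))

def nondominated(objective_vectors):
    """Return the indices of solutions not dominated by any other solution."""
    front = []
    for i, fi in enumerate(objective_vectors):
        if not any(dominates(fj, fi) for j, fj in enumerate(objective_vectors) if j != i):
            front.append(i)
    return front

# Example: (sawlog value, pulpwood volume) pairs for four candidate regimes.
points = [(10.0, 3.0), (8.0, 5.0), (7.0, 4.0), (10.0, 2.0)]
print(nondominated(points))  # -> [0, 1]; the third and fourth points are dominated
```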

The set of solutions to the multiobjective problem was determined using a competitive co-evolutionary genetic algorithm [4] with five subpopulations of 100 individuals each, computed over 1000 generations. This is an island model, characterised by multiple subpopulations instead of a single large population. The subpopulations evolve independently for a certain number of generations (the isolation time), after which a number of individuals are exchanged between subpopulations (a process called migration). Each subpopulation exerts selective pressure on the others, maintaining diversity far longer than any subpopulation would on its own and thereby guarding against premature convergence. When competition is superimposed between the subpopulations, those with higher mean fitness values maintain larger subpopulation sizes and receive more capable individuals, because they have a better chance of finding the global optimum [5].
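A rough sketch of such an island model is given below. It is a simplified, single-objective toy with a made-up fitness function and a mutation-only update, intended only to show the isolation-and-migration cycle described above rather than the ccGA used in [2, 4].

```python
# Toy island model: several subpopulations evolve independently and exchange
# migrants after each isolation period. Fitness and the (1+1)-style update are
# placeholders, not the paper's model.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy fitness to be maximised

def evolve_one_generation(pop):
    # Mutate each individual and keep the better of parent and offspring.
    offspring = [x + random.gauss(0, 0.1) for x in pop]
    return [max(p, o, key=fitness) for p, o in zip(pop, offspring)]

def island_model(n_islands=5, size=20, generations=200,
                 isolation_time=20, n_migrants=2):
    subpops = [[random.uniform(-10, 10) for _ in range(size)]
               for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        subpops = [evolve_one_generation(pop) for pop in subpops]
        if gen % isolation_time == 0:           # end of an isolation period
            for i in range(n_islands):          # migrate to the next island
                migrants = random.sample(subpops[i], n_migrants)
                neighbour = subpops[(i + 1) % n_islands]
                neighbour.sort(key=fitness)     # replace the weakest individuals
                neighbour[:n_migrants] = migrants
    return subpops

best = max((x for pop in island_model() for x in pop), key=fitness)
print(round(best, 2))   # converges close to the optimum at 3.0
```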

The Fonseca and Fleming ranking scheme [6] was used to determine the nondominated solutions. Only six nondominated thinning regimes were obtained, far fewer than expected, since there should be as many nondominated solutions as possible, up to the size of the population [7]. A close inspection revealed that the Fonseca and Fleming ranking scheme quickly lost diversity, resulting in "genetic drift," where convergence occurs in only one "niche" of the search space. The island model formulation was not sufficient to offset this genetic drift or avoid the loss of diversity. Chikumbo and Nicholas [2], in their first attempt at estimating efficient thinning regimes for E. fastigata, were unaware that MOGA could not prevent "crowding" (i.e., the inability to maintain diversity and encourage a good spread of solutions). The more generations the algorithm was run for, the more the Pareto front points exhibited crowding, with points superimposed on each other in certain subregions of the Pareto front. The workings of MOGA are explained later. This paper explores an alternative ranking scheme that is better at maintaining diversity, so that rather than only a few points in a limited number of subregions of the Pareto front being estimated, all the points on the Pareto front are determined.

The concept of applying evolutionary algorithms (population-based search algorithms that mimic the evolutionary process) to multiobjective optimisation first appeared in the literature in the late 1960s, but it was not until 1985 that Schaffer [8] published the first algorithm. Figure 1 shows a diagrammatic representation of an evolutionary algorithm and how it works. Evolutionary algorithms work on populations of individuals instead of single solutions. In this way the search is performed in a parallel manner [9].

At the beginning of the computation, a number of individuals (the population), determined by the user, are randomly initialised. The objective function(s) is/are then evaluated for these individuals to give their "fitness value(s)"; thus, the first/initial generation is produced. If the optimisation criteria are not met, the creation of a new generation starts, guided by the processes of "selection," "crossover/recombination," and "mutation." Individuals are selected according to their fitness for the production of offspring. Parents are recombined to produce offspring, and all offspring are mutated with a certain probability. The fitness of the offspring is then computed, and the offspring are inserted into the population, replacing the parents and producing a new generation. This cycle is repeated until the optimisation criteria are reached [9].
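The generic loop described above can be sketched as follows; this is a schematic, real-coded example with made-up operators (binary tournament selection, arithmetic crossover, Gaussian mutation), not the GEATbx implementation used later in the paper.

```python
# Schematic single-objective evolutionary algorithm loop.
import random

def evolutionary_algorithm(fitness, lb, ub, pop_size=100, generations=200,
                           crossover_rate=0.9, mutation_rate=0.1):
    n = len(lb)
    # Random initialisation within the bound constraints.
    pop = [[random.uniform(lb[i], ub[i]) for i in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: binary tournament on fitness.
            p1, p2 = (max(random.sample(range(pop_size), 2), key=lambda i: scores[i])
                      for _ in range(2))
            a, b = pop[p1][:], pop[p2][:]
            # Crossover/recombination: arithmetic blend of the two parents.
            if random.random() < crossover_rate:
                w = random.random()
                a = [w * x + (1 - w) * y for x, y in zip(a, b)]
            # Mutation: small Gaussian perturbation, clipped to the bounds.
            child = [min(ub[i], max(lb[i], g + (random.gauss(0, 0.1 * (ub[i] - lb[i]))
                                                if random.random() < mutation_rate else 0)))
                     for i, g in enumerate(a)]
            new_pop.append(child)
        pop = new_pop   # offspring replace the parents (generational scheme)
    return max(pop, key=fitness)

# Toy usage: maximise -(x-2)^2 - (y+1)^2 on [-5, 5]^2.
print(evolutionary_algorithm(lambda v: -(v[0] - 2) ** 2 - (v[1] + 1) ** 2,
                             lb=[-5, -5], ub=[5, 5]))
```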

Today there exist many variants of evolutionary algorithms, namely, genetic algorithms [10], evolutionary strategies [11, 12], and evolutionary programming [13].

The subject of multiobjective optimisation is extremely complex and mathematically difficult, with many underresearched areas and outstanding problems [14]. Evolutionary algorithms have become established as the method of choice for exploring the search space in multiobjective optimisation problems [15].

In the absence of preference information among the objectives, there is no single solution to a multiobjective optimisation problem [16], because under these circumstances optimality takes on a whole new meaning. This was first proposed by Edgeworth [17] and later generalised by Pareto [18], and determining a set of nondominated solutions is what is now commonly known as Pareto optimality. Evolutionary algorithms have become the default method for approximating the Pareto front of multiobjective optimisation problems, partly because there are hardly any alternative methods and partly because evolutionary algorithms can search many Pareto-optimal solutions in parallel and exploit similarities among solutions through the recombination/crossover operation [15]. For multiobjective evolutionary algorithms (MOEAs), the Pareto-optimal solutions are approximated in a single optimisation run, for efficiency.

Various implementations of MOEA have appeared in the literature, each with a different approach and algorithm for estimating the Pareto front. Some of the more advanced algorithms currently in use are as follows:
(a) Schaffer's Vector Evaluated Genetic Algorithm (VEGA)—switching objective type [8];
(b) Hajela and Lin's Weighting-Based Genetic Algorithm (HLGA)—parameter variation type [19];
(c) Fonseca and Fleming's Multiobjective Genetic Algorithm (MOGA)—Pareto-based goal interaction [6];
(d) Horn, Nafpliotis, and Goldberg's Niched Pareto Genetic Algorithm (NPGA)—cooperative sharing [20];
(e) Srinivas and Deb's Nondominated Sorting Genetic Algorithm (NSGA)—fitness sharing Pareto [16];
(f) Coello's Min-Max Optimisation (CMMO)—ideal non-Pareto feasibility vetting [21];
(g) Zitzler and Thiele's Strength Pareto Evolutionary Algorithm (SPEA)—elitist external niching type [22];
(h) Corne, Knowles, and Oates' Pareto Envelope-Based Selection Algorithm (PESA)—integrated selection and diversity maintenance via a hypergrid crowding strategy [23];
(i) Knowles and Corne's Pareto Archived Evolution Strategy (PAES)—single parent, local search [24].

In this paper the optimal thinning regime problem for Eucalyptus fastigata [2] is revisited and solved using single-population MOGA and NSGA II algorithms. The GEATbx software, which runs in MATLAB and employs MOGA, was used in the analysis and became our default. NSGA II was chosen as a comparison because of its wide use and reputation in the field of evolutionary computation. The results are compared, not to determine which of the two is better, but rather to determine a greater depth of the Pareto front and so provide more thinning regimes to choose from. A brief description of the data and model formulation is given in the next section, followed by a concise account of the MOGA and NSGA II algorithms. Sections 4 and 5 present the findings in some detail, and a conclusion on the extent of the Pareto front is then presented.

2. Data, Objective Functions, and State Equations

The data for approximating the Pareto front using the MOGA and NSGA II algorithms came from a Nelder trial [25] of E. fastigata established in 1979 in Kaingaroa Forest, New Zealand. The details can be found in Chikumbo and Nicholas [2].

The emphasis of this paper is on approximating the solutions, or vectors of decision variables, that satisfy the constraints and optimise an objective vector function whose elements represent the two objective functions. The decision variables, $x_1, \ldots, x_9$, form the nine genes of each chromosome or individual, which represents a potential solution, with bound constraints given by a lower bound (lb) and an upper bound (ub) as shown in Figure 2. An evolutionary algorithm is therefore initiated with a set of randomly generated potential solutions that are driven towards optimality over generations, where each generation applies selection, recombination/crossover, and mutation to the set of potential solutions, or population.
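For concreteness, a chromosome of this kind could be initialised as in the sketch below. The mapping of the nine genes to the decision variables and the bound values are illustrative guesses only; the actual encoding and bounds are those of Figure 2 and Table 2.

```python
# Hypothetical 9-gene chromosome: [initial stocking, three thinning ages,
# three thinning intensities, final crop number, rotation length].
# The lb/ub values are invented for illustration and do not reproduce Figure 2.
import random

lb = [1000,  4,  8, 12,  50,  50,  50, 100, 20]
ub = [2500, 10, 15, 20, 800, 600, 400, 500, 40]

def random_individual():
    """One potential solution: a vector of 9 decision variables within [lb, ub]."""
    return [random.uniform(lo, hi) for lo, hi in zip(lb, ub)]

population = [random_individual() for _ in range(100)]   # illustrative population size
print(population[0])
```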

These objective functions, or fitness functions, form a mathematical description of performance and are in conflict with each other:
$$\max_{x} \; F(x) = [J_1(x), J_2(x)],$$
subject to inequality constraints $g_j(x) \le 0$, $j = 1, \ldots, J$, and equality constraints $h_k(x) = 0$, $k = 1, \ldots, K$, where $J_1$ and $J_2$ are the objective functions and $x = (x_1, \ldots, x_9)^{T}$ is the vector of decision variables.

The objective functions were $J_1$ for sawlog production (NZ$ m6 stems−1 ha−2) and $J_2$ for pulpwood production (NZ$ m3 ha−2), where $t$ is time in years and fhv_value and fhv_vol are the forest holding values [26] for sawlog production and pulpwood production, respectively.

The relationship in objective function (6) concentrates volume production per tree, which is the goal for sawlog production. State equation (8) tracks the number of trees per hectare (sph) harvested (i.e., the control) subject to the two objectives and to the growth dynamics in state equations (9)–(11), which describe stand basal area in square metres per hectare (sba), mean stand height of the 100 largest trees per hectare (mht), and volume in cubic metres per hectare (vol), respectively. State equations (9)–(11) are discrete-time dynamical equations [27], which have been demonstrated to accurately capture growth dynamics influenced by changes in tree spacing, initial planting stocking, and thinning treatment [28]. The parameters of these state equations are shown in Table 1.

Constraint (13) ensures that the total number of trees thinned over the complete rotation does not exceed the initial number of trees planted. It is handled by a penalty multiplier in the objective functions (3) and (4) that is equal to one when the constraint is not violated and zero otherwise.
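In code, this penalty scheme amounts to the following sketch (the variable names are illustrative, not the paper's notation):

```python
# Death-penalty style constraint handling: the objective value is multiplied
# by 1 if the thinnings never remove more trees than were planted, 0 otherwise.
def penalty(initial_stocking, thinned_per_operation):
    """Return 1.0 if total removals stay within the planted stocking, else 0.0."""
    return 1.0 if sum(thinned_per_operation) <= initial_stocking else 0.0

def penalised_objective(raw_value, initial_stocking, thinned_per_operation):
    return raw_value * penalty(initial_stocking, thinned_per_operation)

print(penalised_objective(1500.0, 1100, [400, 300, 250]))  # feasible   -> 1500.0
print(penalised_objective(1500.0, 1100, [600, 400, 250]))  # infeasible -> 0.0
```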

The thinning regime problem for E. fastigata is a nonlinear optimal control problem governed by dynamical equations (9)–(11). These dynamical equations describe how the state changes in time under some rules, that is, the growth dynamics of E. fastigata, and they are mathematical "black box" representations of the underlying physical growth processes. The state is described by a collection of variables that fully characterise growth, and the model parameters are time variant because they vary with the changing number of stems per hectare. The state is a vector of values in a Euclidean state space and evolves with time along a trajectory through that space. The value of the state at any time is governed by its past values, as well as by the past values of all external influences on the system. The external influences that are manipulated to regulate the state constitute the control inputs, which are the decision variables.

The aim in the E. fastigata control problem is to determine the control inputs that produce a desired outcome, represented by the two objective functions, (4) and (5). Finding analytical solutions to nonlinear optimal control problems is a nontrivial task and has only been achieved satisfactorily for limited classes of systems; solving a multiobjective optimal control problem analytically is more difficult still. We therefore employ evolutionary algorithms for solving the E. fastigata control problem, as first demonstrated by Chikumbo [29] for a single-objective control problem. A summary of the genetic algorithms used for the E. fastigata control problem is shown in Table 2 for the MOGA and NSGA II algorithms, which are explained in more detail in the following section.

3. Ranking Schemes for Estimating the Pareto Front

This section explains the basis of the algorithms used to estimate the Pareto front, particularly MOGA and NSGA II. The explanation is kept in sync with the findings in this paper.

Many approaches are now in use for MOEA (with different ranking schemes that alter the “fitness landscape” of the search space by adding more “ordering information”), and the key to their success lies in how they accomplish fitness assignment and selection and how they maintain diversity to achieve a well-distributed Pareto (trade-off) front [15]. For fitness assignment and selection, there exist two groups of ranking schemes, that is, non-Pareto [8, 19] and Pareto-based [30] schemes. The MOGA and NSGA II considered here are both Pareto-based approaches.

Goldberg [30] made a key observation that, for a multimodal search space with multiple peaks, different subdomains (which he called niches) exist with unique stable subgroups of individuals (analogous to species). A conventional evolutionary algorithm will tend to converge its individuals to only one niche (a phenomenon known as genetic drift), so important information that could have been obtained from the other peaks, or subdomains of importance, is missed. This is the reason why Chikumbo and Nicholas [2] could find only 6 nondominated solutions for the E. fastigata problem using the MOGA algorithm. The use of ranking schemes encourages the formation of artificial niches, thereby preventing genetic drift and recovering nondominated solutions from most of these important subdomains of the search space.

To maintain diversity in the population, fitness sharing may be used [31]. The idea of fitness sharing is to subdivide the population into several subpopulations based on the similarity among individuals. Note that "similarity" in the context of MOEA can be measured in the space of the decision variables or in the space of the objective functions. Fitness sharing is defined in the following way:
$$sh(d_{ij}) = \begin{cases} 1 - \dfrac{d_{ij}}{\sigma_{\text{share}}}, & d_{ij} < \sigma_{\text{share}},\\ 0, & \text{otherwise.} \end{cases} \tag{14}$$
In expression (14), $d_{ij}$ indicates the distance between solutions/individuals $i$ and $j$ (in whichever space is used), and $\sigma_{\text{share}}$ is a parameter (or threshold) that defines the size of a niche or neighbourhood. Any solutions within this distance will be considered part of the same niche. The fitness, $f_i$, of an individual is modified as follows:
$$f_i' = \frac{f_i}{m_i}, \qquad m_i = \sum_{j=1}^{N} sh(d_{ij}),$$
where $m_i$ is the niche count, that is, the (weighted) number of individuals located in the neighbourhood (or niche) of the $i$th individual.
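A minimal sketch of this shared-fitness computation, assuming the triangular sharing function above and Euclidean distance in the objective space (the sample points and the value of sigma_share are illustrative), is as follows:

```python
# Fitness sharing as in expression (14): divide each raw fitness by its niche
# count m_i = sum_j sh(d_ij). Distances are Euclidean in objective space.
import math

def sharing(d, sigma_share):
    return max(0.0, 1.0 - d / sigma_share)

def shared_fitness(raw_fitness, objectives, sigma_share=0.5):
    shared = []
    for i, fi in enumerate(raw_fitness):
        m_i = sum(sharing(math.dist(objectives[i], objectives[j]), sigma_share)
                  for j in range(len(raw_fitness)))
        shared.append(fi / m_i)          # m_i >= 1 because sh(d_ii) = 1
    return shared

# Two individuals crowded together have their fitness reduced more than a loner.
print(shared_fitness([1.0, 1.0, 1.0], [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]))
```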

Several other schemes for maintaining diversity and encouraging a good spread of solutions are possible, such as crowding. In a crowding model, crowding of solutions anywhere in the search space is discouraged, thereby providing the diversity needed to maintain multiple nondominated solutions. The crowding distance of a solution is determined from the sum of the distances to its neighbours in the space defined by the objective function values.
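As an illustration, the standard NSGA II crowding-distance calculation for a single nondominated rank can be sketched as follows (a generic implementation, not taken from the paper):

```python
# Crowding distance for objective vectors within one nondominated rank:
# boundary solutions get infinite distance, interior ones the sum of
# normalised gaps between their nearest neighbours in each objective.
def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):                                   # one objective at a time
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")  # preserve the extremes
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (front[c][k] - front[a][k]) / span
    return dist

print(crowding_distance([(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (6.0, 1.0)]))
# -> [inf, 1.35, 1.3, inf]; the two extreme points are always retained.
```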

3.1. Fonseca and Fleming’s Multiobjective Genetic Algorithm (MOGA)

The ranking scheme proposed by Fonseca and Fleming [6] assigns each individual a rank (in the objective function space) based on the number of population members that dominate it; that is, ranking is done according to the degree of domination, and the more members of the current population that dominate a particular individual, the poorer its rank. MOGA also uses fitness sharing in the objective function space, and mating is restricted. Reproduction probabilities were determined by means of exponential ranking, after which the fitness values were averaged and shared among individuals having identical ranks [15]. Finally, stochastic universal sampling, which provides zero bias and minimum spread (i.e., the range of possible values for the number of offspring of an individual), was used to fill the sampling pool.
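The rank assignment itself can be sketched as below, taking an individual's rank as one plus the number of population members that dominate it (the convention usually attributed to Fonseca and Fleming); the fitness sharing, mating restriction, and sampling steps are omitted. The dominates() helper is the one sketched in the Introduction.

```python
def dominates(fa, fb):
    # Pareto dominance for two maximised objectives (as in the Introduction sketch).
    return all(x >= y for x, y in zip(fa, fb)) and any(x > y for x, y in zip(fa, fb))

def moga_ranks(objective_vectors):
    """Rank of individual i = 1 + number of individuals that dominate i."""
    return [1 + sum(dominates(other, fi) for other in objective_vectors)
            for fi in objective_vectors]

print(moga_ranks([(10.0, 3.0), (8.0, 5.0), (7.0, 4.0), (10.0, 2.0)]))
# -> [1, 1, 2, 2]: the first two individuals are nondominated (rank 1).
```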

The main strength of MOGA is that it is efficient and relatively easy to implement. It has also been successful in solving optimal control problems where it has exhibited very good overall performance [32].

3.2. Srinivas and Deb’s Nondominated Sorting Genetic Algorithm (NSGA II)

NSGA [16] is a nondomination-based genetic algorithm for multiobjective optimisation. It is an effective algorithm that was initially popular but was later criticised for its computational complexity, its lack of elitism (i.e., a mechanism for retaining in the population the best individuals found so far), and the need to choose a value for the sharing parameter a priori [33]. Consequently, Deb et al. [34] developed a second-generation algorithm, NSGA II. This improved version uses a better sorting algorithm, incorporates elitism, and does not require the sharing parameter to be chosen a priori. NSGA II is one of the most successful evolutionary multiobjective optimisation algorithms.

Convergence to the Pareto front is made possible by nondominated sorting. This sorting ranks individuals by iteratively determining the nondominated solutions in the population, assigning those individuals the next best rank, and removing them from the population. Favouring individuals with large crowding distances maintains diversity within a rank. NSGA II, being elitist, keeps as many nondominated solutions as possible, up to the size of the population [7]. The elitist selection works as follows. After the offspring's fitness (nondominated rank and crowding distance) is evaluated, parents and offspring compete for survival, with Pareto dominance applied to the combined population of parents and offspring; the least dominated solutions survive to form the population of the next generation. The sampling pool is filled using tournament selection, where a number of individuals (typically from two up to the population size) are chosen randomly and the "best" individual from that group is selected as a parent for the next generation.
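A compact sketch of this elitist survival step is given below. It uses a simple (quadratic) nondominated sort rather than NSGA II's fast sort, reuses the dominates() and crowding_distance() helpers from the earlier sketches, and omits variation and tournament selection; it is an illustrative outline, not the authors' implementation.

```python
def nondominated_sort(objs):
    """Partition indices into fronts: front 0 holds the nondominated solutions."""
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = sorted(i for i in remaining
                       if not any(dominates(objs[j], objs[i])
                                  for j in remaining if j != i))
        fronts.append(front)
        remaining -= set(front)
    return fronts

def nsga2_survivors(objs, pop_size):
    """Indices of the pop_size least dominated, least crowded individuals."""
    survivors = []
    for front in nondominated_sort(objs):
        if len(survivors) + len(front) <= pop_size:
            survivors.extend(front)                    # the whole front fits
        else:
            cd = crowding_distance([objs[i] for i in front])
            order = sorted(range(len(front)), key=lambda k: cd[k], reverse=True)
            survivors.extend(front[k] for k in order[:pop_size - len(survivors)])
            break
    return survivors

# Combined parent + offspring objective vectors, trimmed back to a population of 2.
objs = [(10.0, 3.0), (8.0, 5.0), (7.0, 4.0), (10.0, 2.0)]
print(nsga2_survivors(objs, 2))   # -> [0, 1], the nondominated pair
```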

4. Results

The MOGA for the optimal thinning regime problem ran for 25 minutes using MATLAB's [35] "SPMD" parallel computing construct for the objective function evaluations, on a MacBook Pro with a 2.93 GHz Intel Core 2 Duo processor. An SPMD block is a set of MATLAB statements denoting a single program executed by the available processors, with multiple data assigned to the processors for computation in parallel. In this case it was more of an "MPSD" (multiple programs, single data) arrangement, since each processor computed a different objective function from the same decision variable vector. The NSGA II had a longer turnaround time of 82 minutes and also utilised parallel computing, using the MATLAB "PARFOR" construct for the objective function evaluations. PARFOR is an extension of the normal FOR loop statement in which the iterations of the statements in the loop are executed in parallel on separate processors.

The MOGA produced a total of 153 nondominated solutions, whereas NSGA II produced 500. Chikumbo and Nicholas [2] ran their competitive coevolutionary genetic algorithm (ccGA) with 5 subpopulations, using MOGA for Pareto front estimation and the same hardware but with no parallel computing. That computation ran for over twice as many generations and took 39 minutes, but approximated only 6 nondominated solutions because of genetic drift. Table 3 shows a summary of these results.

Many of the MOGA nondominated solutions were replicates and were superimposed on each other when plotted, as shown in Figure 3. It is also clear from the graph that the Pareto front was disjointed, a reminder of the difficulty of solving such problems with conventional techniques, as experienced by Chikumbo and Mareels [36]. Table 4 shows the first few nondominated solutions; the complete set is given in Table 6.

The NSGA II nondominated solutions were more widely spread, revealing a greater extent of the Pareto front that was not obvious with MOGA, as shown in Figure 4. The Pareto front was even more disjointed, being composed of a set of disjoint continuous sets in the objective function space. The disjointed parts gave rise to isolated subsets of nondominated solutions, while each continuous part (or topology) was associated with a specific network of decision variables, that is, thinning regimes. A snippet of the isolated nondominated solutions is shown in Table 5, and the complete results are found in Table 7.

When the MOGA solutions are superimposed on the NSGA II ones, it becomes clear that these solutions originate from the same niches in the search space, as shown in Figure 5.

The 6 nondominated solutions predicted by Chikumbo and Nicholas [2] occupied the niche close to the origin but were not included in the graphical plot in Figure 5 because they could not be seen clearly at that scale.

Investigating the implications of the overlaps, or lack thereof, of the Pareto sets in the objective function domain involved comparing them in the decision variable domain (or phenotype) using the Kruskal-Wallis test [37]. This is a nonparametric version of the classical one-way analysis of variance and an extension of the Wilcoxon rank sum test [38], specifically for testing equality of population medians among groups (which in this case were the three Pareto sets: ccGA-MOGA, MOGA, and NSGA II). The ranks were determined by ordering the data from smallest to largest across all groups and taking the numeric index of this ordering.
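For readers who wish to reproduce this kind of comparison, a hedged sketch using scipy (rather than the MATLAB statistics routines used in the study) is shown below; the three samples of initial planting stockings are invented for illustration only.

```python
# Kruskal-Wallis comparison of one decision variable across the three Pareto
# sets, with a pairwise Wilcoxon rank sum follow-up as in Figure 7.
from scipy.stats import kruskal, ranksums

ccga_moga = [1000, 1050, 1000, 1100, 1000, 1050]   # illustrative values only
moga      = [1000, 1000, 1100, 1050, 1000, 1150]
nsga2     = [1000, 1400, 1800, 2000, 1250, 1600]

h_stat, p_value = kruskal(ccga_moga, moga, nsga2)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

print(ranksums(moga, nsga2))   # pairwise Wilcoxon rank sum test [38]
```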

4.1. Initial Planting Stocking

A p value that is practically zero means that the null hypothesis is rejected, suggesting that at least one sample median of the initial planting stockings from the different Pareto sets is significantly different. The box plots in Figure 6 show that the NSGA II Pareto set is different from the other two, which are similar within the 25th and 75th percentiles. The NSGA II set includes further initial planting densities (shown as "+" signs in Figure 6) that are higher than the range overlapping those from the ccGA-MOGA and MOGA Pareto sets. The Wilcoxon rank sum test, plotted in Figure 7, shows more clearly the difference between the population medians of the initial planting stockings for these Pareto sets.

4.2. Final Crop Number

A p value of 0.1542 and the box plots in Figure 8 suggest some overlap between the final crop numbers from the three Pareto sets, where the MOGA and NSGA II sets are respectively lower and higher than the common overlapping range of final crop numbers (as shown by the "+" signs in Figure 8). The mean ranking in Figure 9 emphasises the degree of overlap of the final crop numbers from the three Pareto sets.

4.3. Rotation Length

The p value indicated a difference in the rotation length medians amongst the Pareto sets. The NSGA II Pareto set showed a much greater range of 25–35 years, whereas ccGA-MOGA was consistently 35 years and MOGA ranged only between 34 and 35 years. The box plots are shown in Figure 10. The summary of the mean ranks of the rotation lengths in Figure 11 clarifies the overlap between the ccGA-MOGA and MOGA population medians for the rotation lengths, and also between ccGA-MOGA and NSGA II.

4.4. Frequency of Thinning

The NSGA II Pareto set was the only one that included a regime with no thinnings at all, where the initial planting stocking was 2000 stems ha−1 with final harvesting at age 25 years. Also included in that set was a regime with a single thinning of 200 stems ha−1 at age 20 and final harvesting at age 25 years. For the rest of the thinning regimes from all the Pareto sets, a total of three thinnings per rotation was the norm, with small differences in the timings and intensities.

4.5. Timing and Intensity of Thinning

For brevity only the description (without the box plots and the plotted mean ranks) is given here for the timing and intensity of thinnings.

Timings for the first thinnings showed similarities between NSGA II and ccGA-MOGA, with the timing medians overlapping at 5 years; MOGA had a higher median of 6 years. The intensities of the first thinnings also showed some similarities across all the Pareto sets, although the NSGA II intensities covered a wider range than the other two.

The timings for the second thinnings were similar for MOGA and NSGA II, and the intensities from all the Pareto sets were also similar, with NSGA II again showing a wider range of intensities.

The timings for the third thinnings showed dissimilarities between MOGA and NSGA II, but the thinning intensities were similar across all three Pareto sets, with both MOGA and NSGA II showing a wider range of thinning intensities.

5. Discussion

The preceding results suggest that the nondominated solutions from the three Pareto sets belong to the same family of solutions. It is clear that the Pareto front is disjointed, with many topologies, and the differences observed in the decision variables (i.e., initial planting stocking; final crop number; rotation length; frequency, timing, and intensity of thinning) are mere reflections of the different topologies of the Pareto front, or subdomains (niches) of the nondominated solutions. It also seems that the computational efficiency of MOGA or ccGA-MOGA is due to the Fonseca and Fleming ranking scheme attempting to predict only part of the Pareto front, whereas NSGA II appears to approximate all the Pareto points, hence the differences in the turnaround times shown in Table 3. This is despite employing parallel computing for both the MOGA and NSGA II algorithms. It is also important to note that, with just an Intel Core 2 Duo processor, it may not be possible to see appreciable improvements in computational efficiency.

The question that lingers with the Fonseca and Fleming ranking scheme is how it determines which subdomain of the Pareto front to approximate and whether the user can have control over which subdomain is estimated. The same question can be raised about the NSGA II algorithm: whether it can be made to concentrate on estimating Pareto points in a specific subdomain rather than all the Pareto points, making the algorithm more computationally efficient. Although the answer to this question is beyond the scope of this paper, it does seem clear that MOGA and NSGA II suit different output and computational requirements. Where computational efficiency is of paramount importance, MOGA would be the appropriate choice, but at the expense of approximating only a part of the Pareto front, and without the ability to choose which subregion of the front is estimated. Large problems characterised by many objectives (more than three) and a large set of decision variables may benefit from the application of MOGA. However, when it is vitally important to approximate the entire Pareto front, NSGA II would be ideal, but with a computational overhead. Note also that most research in evolutionary algorithms has centred on dealing with 2 or 3 objective functions, and anything beyond that is still an area of active research [39].

A larger set of nondominated solutions from the search space implies greater flexibility in decision making in sifting through a plethora of alternatives based on prevailing preferences and values of the decision maker(s). From a silvicultural management perspective, this is good news for the forester, as such flexibility at a stand level facilitates implementing a mosaic of thinning regimes across an estate to meet sawlog and pulpwood demands from markets that are plagued with uncertainty.

The ability of NSGA II to maintain diversity in the search space and so approximate an entire Pareto front lies in the way it does its fitness assignment and diversity maintenance, as alluded to earlier. The crowding model in NSGA II demonstrably worked, encouraging a good spread of solutions over all the subdomains, or topologies, of the Pareto front. Although MOGA also has the ability to form niches and distribute the population over the Pareto-optimal region, its fitness sharing is done in the objective function domain. This maintains diversity in the objective function values but does not necessarily do so for the decision variables, which are important to the decision maker(s) [16]. An attempt to understand this further led to running the MOGA and NSGA II models over 1000 generations. MOGA converged to a single point on the Pareto front, whereas NSGA II kept all its 500 Pareto points. As for ccGA-MOGA, which is simply a modified MOGA with 5 subpopulations linked through migration instead of one large population [2], diversity was maintained through the subpopulations [5], and convergence to a single Pareto point after 1000 generations was avoided. To construct a better MOGA that will not converge to a single point on the Pareto front, several niching and elitist models would have to be considered; Obayashi et al. [40] reviewed some of these models.

At this stage there is no similar study in the problem domain of optimal silvicultural regimes against which the results obtained here can be compared. However, there is work in other disciplines that compares well with some of the findings in this paper.
(a) Ducheyne et al. [41] found NSGA II to produce better results than MOGA, although the latter had a more evenly spread-out Pareto front, for a biobjective forest estate management problem where the estate consisted of 295 stands (or management units). The problem was run for only 50 generations, leaving open the question of whether the results would have differed had the problem been run over more generations. The Pareto front was also smooth and continuous, in sharp contrast to the one in this paper.
(b) Zitzler et al. [15] provided a systematic comparison of various approaches to biobjective function optimisation using carefully chosen test functions. MOGA and the first-generation NSGA were among these approaches. Some of the test functions produced smooth, well-distributed nondominated sets, and the most difficult ones produced disjointed Pareto fronts. NSGA was ranked better than MOGA for performance.
(c) Sağ and Çunkaş [42] also compared various approaches, including NSGA, NSGA II, MOGA, VEGA, SPEA, SPEA2 (second-generation SPEA), NPGA, and PESA, using test functions for multiobjective optimisation. Their general finding was that while NSGA II, SPEA2, and PESA could keep track of all the feasible solutions found during the optimisation, the number of Pareto-optimal solutions of the other algorithms remained poor. On a more difficult problem with two objective functions and two constraints, where the Pareto set was discontinuous and divided into 5 regions, NSGA II, SPEA2, and PESA again kept track of all the feasible solutions. MOGA was poor at finding points at the extreme ends of the curve, although it gave a good sampling of solutions at its midsection; the number of Pareto-optimal solutions nevertheless remained poor for MOGA.

This paper has attempted to provide an insight into some of the well-established algorithms for estimating MOEA Pareto fronts at a stand level. The intention was not to minimise or disregard the important work already done in other disciplines. Instead, the aim was to bring forest analysts closer to the MOEA community so that both can interact and mutually benefit. If forest analysts take on board some of the findings presented here, then the goals of this paper will have been accomplished.

6. Conclusion

The results clearly indicate that, for the stand-level optimal thinning regime problem, NSGA II is capable of approximating an entire Pareto front at the expense of computational efficiency, whereas MOGA and ccGA-MOGA, though computationally efficient, will crowd solutions and estimate only a part of the Pareto front. The key to NSGA II's wider coverage in estimating the entire Pareto front lies in its ability to share information across individual potential solutions and to maintain diversity in the search space while collectively satisfying the two objectives, sawlog production and pulpwood production. For a better MOGA that will approximate an entire Pareto front, a niching and elitist model would have to be included to provide a means of controlling the number, location, extent, and distribution of solutions in the search space. In its current form, MOGA suffers from genetic drift and will converge to a single point on the Pareto front if run for many generations. With the island model version, ccGA-MOGA, where the population is divided into subpopulations, genetic drift may be reduced, although not sufficiently to avoid estimating only a subregion of the Pareto front. The final take-home message is that, for a MOEA, it is important to choose an approach that is fit for purpose, that is, one that strikes the appropriate balance between computational efficiency and the extent of the Pareto front to be estimated.

Acknowledgments

This project was funded by the New Zealand Foundation for Research Science and Technology under Contract no. C04X0806, “Protecting and Enhancing the Environment through Forestry,” and the contribution from Future Forest Research Ltd is gratefully acknowledged. The author is also thankful to Dr. Peter Clinton (Scion) and Dr. Ruth Falshaw (Scion) for their final comments on this paper.