Mathematical Problems in Engineering

Volume 2015, Article ID 349781, 17 pages

http://dx.doi.org/10.1155/2015/349781

## A Multiobjective Genetic Algorithm Based on a Discrete Selection Procedure

^{1}School of Science, Southwest University of Science and Technology, Mianyang 621010, China
^{2}Australasian Joint Research Centre for Building Information Modelling, School of Built Environment, Curtin University, Perth, WA 6845, Australia
^{3}Department of Housing and Interior Design, Kyung Hee University, Seoul 136701, Republic of Korea
^{4}School of Mathematics, Anhui Normal University, Wuhu 430000, China
^{5}School of Mathematics, Chongqing Normal University, Chongqing 404100, China

Received 12 November 2014; Revised 15 January 2015; Accepted 20 January 2015

Academic Editor: Jianxiong Ye

Copyright © 2015 Qiang Long et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The multiobjective genetic algorithm (MOGA) is a direct search method for multiobjective optimization problems. It is based on the process of the genetic algorithm, whose population-based character is well suited to this setting. Compared with traditional multiobjective algorithms, whose aim is to find a single Pareto solution, a MOGA intends to identify a set of Pareto solutions. When solving multiobjective optimization problems with a genetic algorithm, one needs to consider both the elitism and the diversity of the solutions. Normally, however, there is a trade-off between the two: for some multiobjective problems, elitism and diversity conflict with each other, so the solutions obtained by a MOGA have to be balanced with respect to both. In this paper, we propose metrics that numerically measure the elitism and diversity of solutions, and the optimum order method is applied to identify the solutions with better elitism and diversity metrics. We test the proposed method on some well-known benchmarks and compare its numerical performance with that of other MOGAs; the results show that the proposed method is efficient and robust.

#### 1. Introduction

In this paper, we consider the following multiobjective optimization problem:

$$
\text{(MOP)}\qquad \min_{x \in X} F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right)^T,
$$

where $F : \mathbb{R}^n \to \mathbb{R}^m$ is a multiobjective function (vector-valued function), $X = \{x \in \mathbb{R}^n : l \le x \le u\}$ is a box set, and $l$ and $u$ are the lower bound and upper bound, respectively. We assume that each component function $f_i$ of $F$ is Lipschitz continuous but not necessarily differentiable.
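As a concrete illustration of this problem form (our example, not one taken from the paper), Schaffer's classic bi-objective test problem fits the template: two Lipschitz-continuous objectives over a box set.

```python
# Schaffer's bi-objective test problem as an instance of the problem form
# min F(x) subject to box constraints l <= x <= u (illustrative example).

def F(x):
    """Vector-valued objective F(x) = (f1(x), f2(x))."""
    f1 = x ** 2
    f2 = (x - 2.0) ** 2
    return (f1, f2)

# Box set X = {x : l <= x <= u}.
l, u = -5.0, 10.0

# Both objectives are Lipschitz continuous on the box; the Pareto
# solutions of this problem are exactly the points of [0, 2].
print(F(0.0))  # (0.0, 4.0): optimal for f1, poor for f2
print(F(2.0))  # (4.0, 0.0): optimal for f2, poor for f1
print(F(1.0))  # (1.0, 1.0): a compromise Pareto solution
```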

Multiobjective optimization has extensive applications in engineering and management [1–3]. Most optimization problems arising in real-world applications have multiple objectives and can be modeled as multiobjective optimization problems. However, due to theoretical and computational challenges, multiobjective optimization problems are not easy to solve numerically. Multiobjective optimization has therefore attracted a great deal of research over the last decades [4–8].

So far, there are two types of methods for solving multiobjective optimization problems: indirect methods and direct methods. An indirect method converts the multiple objectives into a single one. One strategy is to combine the objective functions using utility theory [9] or the weighted sum method [10, 11]. The difficulty with such methods lies in selecting a utility function or proper weights that satisfy the decision-maker's preference. Another indirect method is to reformulate all the objectives but one as constraints. However, it is not easy to determine the upper bounds of these objectives: small upper bounds could exclude some Pareto solutions, whereas large upper bounds could enlarge the objective function value space and lead to sub-Pareto solutions. Additionally, an indirect method can obtain only a single Pareto solution in each run, whereas in practical applications decision-makers often prefer a number of Pareto solutions from which they can choose according to their preferences.
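The weighted-sum strategy can be sketched as follows; the toy objectives, the coarse grid standing in for a single-objective solver, and the weight values are illustrative assumptions. Note that each choice of weights recovers only a single Pareto solution per run, as discussed above.

```python
# Weighted-sum scalarization (indirect method): combine the objectives
# with normalized weights and minimize the single resulting function.
# Illustrative sketch; the grid search stands in for any scalar solver.

def scalarize(objectives, weights):
    """Return the single-objective function  sum_i w_i * f_i(x)."""
    return lambda x: sum(w * f(x) for w, f in zip(weights, objectives))

f1 = lambda x: x ** 2            # toy bi-objective problem
f2 = lambda x: (x - 2.0) ** 2

# Candidate points from the box [-5, 10] (step 0.01).
grid = [-5.0 + 15.0 * k / 1500 for k in range(1501)]

# Each weight vector yields one compromise solution on the Pareto set.
for w1 in (0.2, 0.5, 0.8):
    g = scalarize([f1, f2], [w1, 1.0 - w1])
    best = min(grid, key=g)
    print(w1, round(best, 2))    # minimizer is x = 2 * (1 - w1)
```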

Direct methods are devoted to exploring the entire set of Pareto solutions or a representative subset. However, it is extremely hard or even impossible to obtain the entire set of Pareto solutions for most multiobjective optimization problems, except in some simple cases. Therefore, stepping back to a representative subset is preferred. The genetic algorithm, as a population-based algorithm, is a good choice for achieving this goal. The generic single-objective genetic algorithm can be modified to find a set of nondominated solutions in a single run [12–14]. The ability of the genetic algorithm to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems. The crossover and mutation operators of the genetic algorithm can be applied to various domains defined by different objectives, which in turn creates new nondominated solutions in unexplored parts of the Pareto front. In addition, a multiobjective genetic algorithm does not require the user to prioritize, scale, or weight the objectives. Therefore, the genetic algorithm is one of the most popular metaheuristic approaches for solving multiobjective optimization problems [15–17].

The first multiobjective optimization method based on the genetic algorithm, called the vector evaluated genetic algorithm (VEGA), was proposed by Schaffer [18]. Afterwards, several multiobjective evolutionary algorithms were developed, such as the multiobjective genetic algorithm (MOGA) [19], the niched Pareto genetic algorithm (NPGA) [20], the weight-based genetic algorithm (WBGA) [21], the random weighted genetic algorithm (RWGA) [22], the nondominated sorting genetic algorithm (NSGA) [23], the strength Pareto evolutionary algorithm (SPEA) [24], the improved SPEA (SPEA2) [25], the Pareto-archived evolution strategy (PAES) [26], and the Pareto envelope-based selection algorithm (PESA) [27].

There are two basic criteria for measuring a set of solutions to a multiobjective optimization problem [28].

(1) *Elitism*. The obtained solutions should be as close to the real Pareto solutions as possible. Since the image set of the real Pareto solutions is the real Pareto frontier, this can be measured by the closeness between the real Pareto frontier and the image of the obtained solutions.

(2) *Diversity*. In order to describe the Pareto solutions extensively, the obtained solutions should be distributed uniformly over the set of real Pareto solutions. The diversity of the obtained solutions is measured by the diversity of their images.
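The paper proposes its own metrics for these criteria in Section 3. Purely as an illustration of how the two criteria can be quantified, the sketch below uses two widely known proxies from the literature, not the metrics of this paper: generational distance for elitism and spacing for diversity.

```python
import math

# Illustrative metrics (not those proposed in this paper):
#   - generational distance: mean distance from each obtained point to
#     its nearest point on the true Pareto front (smaller = more elite);
#   - spacing: standard deviation of nearest-neighbour distances within
#     the obtained front (smaller = more uniform, i.e., more diverse).

def generational_distance(front, true_front):
    return sum(min(math.dist(p, q) for q in true_front) for p in front) / len(front)

def spacing(front):
    dists = [min(math.dist(p, q) for q in front if q is not p) for p in front]
    mean = sum(dists) / len(dists)
    return math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))

# True front sampled from the line f1 + f2 = 1 (toy example).
true_front = [(t, 1.0 - t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
obtained = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]

print(generational_distance(obtained, true_front))  # ~0.0943
print(spacing(obtained))                            # 0.0 (evenly spread)
```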

These two criteria of Pareto solutions are often in conflict with each other. Therefore, one has to balance the trade-off between elitism and diversity. The aim of this paper is to introduce new techniques to tackle these issues. The rest of the paper is organized as follows. In Section 2, we review some basic definitions of multiobjective optimization and the process of genetic algorithm. In Section 3, we propose an improved genetic algorithm for solving multiobjective optimization problems. In Section 4, some numerical experiments are carried out and the results are analyzed. Section 5 concludes the paper.

#### 2. Preliminaries

In this section, we first review some definitions and theorems in the multiobjective optimization and then introduce the general procedure of genetic algorithm.

##### 2.1. Definitions in Multiobjective Optimization

First of all, we present the following notation, which is often used in vector optimization. Given two vectors $a = (a_1, \ldots, a_m)^T$ and $b = (b_1, \ldots, b_m)^T$, we write
(i) $a = b$ if $a_i = b_i$ for all $i = 1, \ldots, m$;
(ii) $a \leqq b$ if $a_i \le b_i$ for all $i = 1, \ldots, m$;
(iii) $a \le b$ if $a_i \le b_i$ for all $i = 1, \ldots, m$ and there is at least one index $i_0$ such that $a_{i_0} < b_{i_0}$; that is, $a \ne b$;
(iv) $a < b$ if $a_i < b_i$ for all $i = 1, \ldots, m$.
The relations "$\geqq$," "$\ge$," and "$>$" can be defined similarly. In this paper, when $a \le b$ we say that $a$ dominates $b$, or $b$ is dominated by $a$ (in some literature, $b$ is said to dominate $a$ in this case; we reverse the definition since we solve minimization problems in this paper).
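Under the convention above, the dominance test for minimization can be written as a short function (a minimal sketch; objective vectors are represented as plain tuples):

```python
# Pareto dominance for minimization, following the convention above:
# a dominates b when a_i <= b_i in every objective and a_j < b_j in at
# least one objective.

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((1.0, 2.0), (2.0, 3.0)))  # True: better in both objectives
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: the vectors are incomparable
print(dominates((1.0, 2.0), (1.0, 2.0)))  # False: equal vectors do not dominate
```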

*Definition 1. *Suppose that $Y \subset \mathbb{R}^m$ and $\bar{y} \in Y$. If $\bar{y} \leqq y$ for any $y \in Y$, then $\bar{y}$ is called an *absolute optimal point* of $Y$.

An absolute optimal point is an ideal point, but it may not exist.

*Definition 2. *Let $Y \subset \mathbb{R}^m$ and $\bar{y} \in Y$. If there is no $y \in Y$ such that $y \le \bar{y}$ (or $y < \bar{y}$), then $\bar{y}$ is called an *efficient point* (or *weakly efficient point*) of $Y$.

The sets of absolute optimal points, efficient points, and weakly efficient points of $Y$ are denoted by $Y_{ab}$, $Y_{e}$, and $Y_{we}$, respectively. For the problem MOP, $X$ is called the *decision variable space* and its image set $Y = F(X) = \{F(x) : x \in X\}$ is called the *objective function value space*.

*Definition 3. *Suppose that $\bar{x} \in X$. If $F(\bar{x}) \leqq F(x)$ for any $x \in X$, then $\bar{x}$ is called an *absolute optimal solution* of the problem MOP. The set of absolute optimal solutions is denoted by $X_{ab}$.

The concept of the absolute optimal solution is a direct extension of that of single-objective optimization. It is the ideal solution, but in most cases it does not exist.

*Definition 4. *Suppose that $\bar{x} \in X$. If there is no $x \in X$ such that $F(x) \le F(\bar{x})$ (or $F(x) < F(\bar{x})$), that is, if $F(\bar{x})$ is an efficient point (or weakly efficient point) of the objective function value space $Y = F(X)$, then $\bar{x}$ is called an *efficient solution* (or *weakly efficient solution*) of the problem MOP. The sets of efficient solutions and weakly efficient solutions are denoted by $X_{e}$ and $X_{we}$, respectively.

Another name for an efficient solution is *Pareto solution*, a concept introduced by Koopmans and Reiter in 1951 [29]. The meaning of a Pareto solution $\bar{x}$ is that there is no feasible solution $x$ whose objective values are all at least as good as those of $\bar{x}$ and strictly better in at least one objective; in other words, $\bar{x}$ is a best solution in the sense of "$\le$." Another intuitive interpretation is that a Pareto solution cannot be improved with respect to any objective without worsening at least one of the other objectives. The set of Pareto solutions is thus the set of efficient solutions $X_{e}$, and its image set is called the *Pareto frontier*. The following two theorems are well known.
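Applied to a finite population, Definition 4 yields a simple pairwise filter for the nondominated (Pareto) subset; the sample objective vectors below are illustrative:

```python
# Extracting the nondominated (Pareto) subset of a finite set of
# objective vectors by pairwise comparison -- a direct application of
# the definition of efficiency to a finite population.

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
# (3,3) is dominated by (2,2); (5,5) is dominated by every other point.
print(nondominated(pop))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```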

Theorem 5 (see [6]). *For the multiobjective optimization problem MOP, it holds that $X_{ab} \subseteq X_{e} \subseteq X_{we}$; that is, every absolute optimal solution is an efficient solution, and every efficient solution is a weakly efficient solution.*

Theorem 6 (see [6]). *For the objective function value space $Y = F(X)$, if the sets of efficient points and weakly efficient points (i.e., $Y_{e}$ and $Y_{we}$, resp.) are known, then, in the feasible set $X$, it holds that $X_{e} = \{x \in X : F(x) \in Y_{e}\}$ and $X_{we} = \{x \in X : F(x) \in Y_{we}\}$.*

Theorem 5 illustrates the relationship among the sets of absolute optimal solutions, efficient solutions, and weakly efficient solutions. Theorem 6 reveals that the preimages in $X$ of the efficient points (or weakly efficient points) are exactly the efficient solutions (or weakly efficient solutions) of the problem MOP.

##### 2.2. Genetic Algorithm


The genetic algorithm is one of the most important evolutionary algorithms. It was introduced by Holland in the 1960s and then developed by his students and colleagues at the University of Michigan during the 1960s and 1970s [30]. Over the last two decades, the genetic algorithm has been enriched by a substantial body of literature, such as [31–34]. Nowadays, various genetic algorithms are applied in different areas, for example, mathematical programming, combinatorial optimization, automatic control, and image processing.

Suppose that $P(t)$ and $C(t)$ represent the parents and the offspring at the $t$th generation, respectively. Then, the general structure of the genetic algorithm can be described by the following pseudocode.

*General Structure of Genetic Algorithm*

```
Step 1 (initialization):
    Generate the initial population P(0).
    Set the crossover rate, the mutation rate, and the maximal generation time.
    Let t = 0.
Step 2: while the maximal generation time is not reached, do:
    Crossover operator:  generate offspring C1(t).
    Mutation operator:   generate offspring C2(t).
    Evaluation:          compute the fitness of P(t) and C(t) = C1(t) ∪ C2(t).
    Selection operator:  build the next population P(t + 1).
    Let t = t + 1 and go to Step 2.
End
```

From the pseudocode, we can see that there are three important operators in a general genetic algorithm: the crossover, mutation, and selection operators. The implementation of these operators depends heavily on the encoding scheme.
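The pseudocode can be fleshed out as a minimal runnable genetic algorithm. The real-valued encoding, the operator choices, the parameter values, and the scalar toy fitness below are our illustrative assumptions; the paper replaces scalar selection with the multiobjective procedure of Section 3.

```python
import random

# Minimal real-coded genetic algorithm following the general structure:
# crossover, mutation, evaluation, and elitist truncation selection.
# All parameter values and the toy objective are illustrative.

random.seed(0)

def fitness(x):                      # toy objective: minimize x^2
    return x * x

def crossover(p1, p2):               # arithmetic crossover
    a = random.random()
    return a * p1 + (1 - a) * p2

def mutate(x, rate=0.1, step=0.5):   # Gaussian mutation with given rate
    return x + random.gauss(0, step) if random.random() < rate else x

POP, GENS, CX_RATE = 20, 50, 0.9
population = [random.uniform(-5.0, 5.0) for _ in range(POP)]  # P(0)

for t in range(GENS):
    offspring = []
    while len(offspring) < POP:
        p1, p2 = random.sample(population, 2)
        child = crossover(p1, p2) if random.random() < CX_RATE else p1
        offspring.append(mutate(child))
    # Selection: keep the POP best individuals of parents + offspring.
    population = sorted(population + offspring, key=fitness)[:POP]

print(min(fitness(x) for x in population))  # close to 0
```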

#### 3. A New Multiobjective Genetic Algorithm


In this section, we present a new multiobjective genetic algorithm for solving the problem MOP. We first propose a ranking strategy called the optimum order method and then metrics for the elitism and diversity of solutions. Finally, a new selection operator for the genetic algorithm is designed using the optimum order method together with the elitism and diversity metrics.

Strictly speaking, the term "solution" refers to a point in the decision variable space, while the corresponding point in the objective function value space is called the "image of the solution." However, most of the following discussion takes place in the objective function value space. To simplify the presentation, where no confusion arises we refer to both a point in the decision variable space and its image in the objective function value space as the "solution."

##### 3.1. The Optimum Order Method


In numerical optimization, in order to compare the performance of different solutions, it is necessary to assign a fitness value to each solution. For a single-valued function, the fitness is normally its function value. However, assigning a fitness to a multiobjective function is not straightforward. So far, there are three typical approaches. The first is the weighted sum approach [22, 35], which converts the multiple objectives into a single one using normalized weights $w_i \ge 0$, $i = 1, \ldots, m$, with $\sum_{i=1}^{m} w_i = 1$; choosing the weight parameters is not an easy task in this approach. The second is to alter the objective functions [18, 25], which randomly divides the current population into $m$ equal subpopulations $P_1, \ldots, P_m$; each solution in the subpopulation $P_i$ is then assigned a fitness value based on the objective function $f_i$. In fact, this approach is a straightforward extension of the single-objective genetic algorithm. The last is the Pareto-ranking approach [4, 23, 36], which is a direct application of the definition of Pareto solutions. In the following, we present a ranking strategy called the optimum order method [6].

*Definition 7. *Let $x_i, x_j \in P$, where $P$ is the current set of solutions, and let $F = (f_1, \ldots, f_m)^T$. For any $l \in \{1, 2, \ldots, m\}$, define

$$
b_{ij}^{l} = \begin{cases} 1, & f_l(x_i) \le f_l(x_j), \\ 0, & \text{otherwise.} \end{cases} \tag{8}
$$

Then, $b_{ij} = \sum_{l=1}^{m} b_{ij}^{l}$ is called the *optimal number* of $x_i$ corresponding to $x_j$ for all objectives. Furthermore, $b_i = \sum_{j \ne i} b_{ij}$ is defined as *the total optimal number* of $x_i$ corresponding to all the other solutions for all objectives.

Obviously, for a minimization problem, a larger total optimal number corresponds to a better solution. Therefore, optimal numbers can be regarded as criteria for ranking a set of solutions. Based on this observation, we propose the following algorithm to rank a population of solutions.

*Algorithm 8 (optimum order method (OOM)). *Consider the following steps.

*Step 1 (input).* Read in the population of solutions and their objective function values.

*Step 2 (compute optimal numbers).* Compute the optimal number and the total optimal number of each solution; fill these numbers into Table 1 according to (8).

*Step 3 (rank the solutions).* Rearrange the solutions in decreasing order of their total optimal numbers $b_i$; more precisely, denote by $x_{(1)}$ the solution with the largest total optimal number, by $x_{(2)}$ the solution with the second largest, and so on.
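Steps 1–3 admit a short sketch. Counting ties with "$\le$" and breaking equal total optimal numbers by original order are assumptions of this reconstruction.

```python
# Optimum order method (minimal sketch).  For each pair of solutions and
# each objective, count 1 when solution i is at least as good as solution
# j (ties counted with <= is an assumption of this sketch); summing over
# objectives and over the other solutions gives the total optimal numbers,
# which induce the ranking.

def total_optimal_numbers(values):
    """values[i] is the objective vector of solution i (minimization)."""
    n = len(values)
    totals = []
    for i in range(n):
        b_i = sum(
            sum(1 for l in range(len(values[i])) if values[i][l] <= values[j][l])
            for j in range(n) if j != i
        )
        totals.append(b_i)
    return totals

def oom_rank(values):
    """Indices of solutions sorted by decreasing total optimal number."""
    totals = total_optimal_numbers(values)
    return sorted(range(len(values)), key=lambda i: -totals[i])

vals = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(total_optimal_numbers(vals))  # [3, 4, 3, 2]
print(oom_rank(vals))               # [1, 0, 2, 3]
```

Note that the dominated solution (3.0, 3.0) receives the smallest total optimal number and is ranked last, while the balanced Pareto solution (2.0, 2.0) is ranked first, consistent with the observation that larger optimal numbers indicate better solutions.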