Mathematical Problems in Engineering


Research Article | Open Access

Volume 2011 | Article ID 695087 | 10 pages | https://doi.org/10.1155/2011/695087

A Novel Ranking Method Based on Subjective Probability Theory for Evolutionary Multiobjective Optimization

Academic Editor: Yuri Vladimirovich Mikhlin
Received: 10 Mar 2011
Revised: 30 Jun 2011
Accepted: 21 Jul 2011
Published: 15 Sep 2011

Abstract

Many engineering problems are modeled as evolutionary multiobjective optimization problems, yet they often call for only one best solution rather than a set of Pareto optimal solutions. The decision maker's subjective information plays an important role in choosing the best solution from several Pareto optimal solutions. Generally, decision making is performed after Pareto optimality. In this paper, we attempt to incorporate the decision maker's subjective preferences with Pareto optimality for ranking chromosomes. A new ranking method based on subjective probability theory is proposed in order to explore and capture the true nature of the chromosomes on the Pareto optimal front. The properties of the ranking rule are proven, and its transitivity is presented as well. Simulation results compare the performance of the proposed ranking approach with the Pareto-based ranking method on two multiobjective optimization cases, demonstrating the effectiveness of the new ranking approach.

1. Introduction

Evolutionary multiobjective optimization (EMO) is widely used in various engineering fields for analyzing complex criteria [1]. General evolutionary algorithms (EAs) first generate Pareto optimal solutions based on Pareto optimality [2–4] and then choose the best one through user interaction [5]. The first step, obtaining the Pareto optimal set, is usually not only time consuming but also generates many redundant solutions. In fact, most engineering cases require only one solution, without additional choices. However, little effort has been put into incorporating subjective information into Pareto optimality to reduce the search space in the first step. This paper therefore proposes a new ranking method that generates, in a single step, one solution consistent with the subjective information.

The paper is organized as follows. The next section introduces the biobjective optimization case from which the proposed method originates. Based on this simple case, the new ranking method based on subjective probability is proposed in Section 3. In order to establish the soundness of the new method, several theoretical proofs of its properties are presented in Section 4. In Section 5, the proposed method is applied to two different EMO algorithms, genetic programming and particle swarm optimization, to verify its validity for ranking large numbers of chromosomes. The first simulation identifies the model design of a classical nonlinear autoregressive with exogenous inputs (NARX) model, a popular foundation for model construction [6]. In the second simulation, a test function with multimodality is considered. Experimental results are presented along with a comparison to the corresponding Pareto-based EMO approaches. Concluding remarks are given in Section 6.

2. Preliminary

In this section, we give an example to introduce the idea of ranking chromosomes based on subjective probability theory.

Consider waiting in line to buy a ticket, with two strategies: the ticket costs 1, but we have to wait 100 minutes; or we wait only 70 minutes, but the ticket costs 2. Which choice would you make when considering the absolute amounts of money and time lost or gained? On the other hand, which choice would you make when considering the percentage of money and time lost or gained?

Obviously, this is a biobjective minimization problem with time and money as the two objectives. Assume that 𝑓1 and 𝑓2 denote the two objectives, and that 𝑢={𝑢1,𝑢2} and 𝑣={𝑣1,𝑣2} are two points on the Pareto front. The two strategies described above can then be expressed as 𝑢={100,1} and 𝑣={70,2}.

First, assume that the decision maker's attitude is influenced by the absolute difference between the values of the two strategies for each objective: |𝑢1−𝑣1|=|100−70|=30 and |𝑢2−𝑣2|=|1−2|=1. Suppose the decision maker holds the first strategy; he has two choices: change his assets and hold the second strategy, or keep his assets and stay with the first strategy. If he changes to the second strategy, he saves 30 minutes but pays 1 more; otherwise, he loses the opportunity to save 30 minutes but saves 1. In contrast, suppose he holds the second strategy: if he changes from the second strategy to the first, he loses 30 minutes and saves 1; otherwise, he keeps the 30-minute saving at the cost of 1 more.

Table 1 shows the decision table for this case, using the multicriteria decision-making model. For our problem, there are four actions and two criteria for evaluating the decision maker's attitude. Since both objectives are minimized, a positive consequence can be read as the degree of satisfaction if the corresponding action is taken, and a negative one as the degree of disappointment. The decision maker will then choose the action with the higher degree of satisfaction in terms of the two criteria. In order to balance the two criteria, we can compare the consequences of different actions based on a utility function 𝑈(time, money) or a weighted sum ∑𝑖 𝑤𝑖∗𝑓𝑖 (𝑖 = 1, 2), where 𝑤𝑖 is the importance index of each criterion (𝑤1 for time and 𝑤2 for money).


Actions | Criterion 1: time (𝑓1) | Criterion 2: money (𝑓2)
Change from 𝑢 to 𝑣 | 30 | −1
No change from 𝑢 to 𝑣 | −30 | 1
Change from 𝑣 to 𝑢 | −30 | 1
No change from 𝑣 to 𝑢 | 30 | −1

But measuring the decision maker's evaluation only by the absolute difference of two consequences is limited. In general, the decision maker's evaluation changes as his current assets change. This is the basis of the proposed method for capturing the true nature of the chromosomes on the Pareto optimal front. For example, when the decision maker holds the first strategy, his current assets are 100 minutes and 1. If he changes to the second strategy, his time asset gains (100−70)/100 = 0.3 relative to his present time asset (100 minutes), while his money asset loses (2−1)/1 = 1 relative to his present money asset (1). In contrast, when his current assets are 70 minutes and 2, if he gives up what he owns in order to reach 100 minutes and 1, his time asset loses (100−70)/70 = 3/7 relative to his present time asset (70 minutes), while his money asset gains (2−1)/2 = 0.5 relative to his present money asset (2). Therefore, if the decision maker's situation changes, so may his attitude. This is an important property to take into account for chromosome ranking. Based on this observation, we propose a new ranking method that assesses the decision maker's attitude in the context of his current assets and re-evaluates as those assets change. Table 2 presents the decision table when the new ranking method is applied to our example.


Actions | Criterion 1: time (𝑓1) | Criterion 2: money (𝑓2)
Change from 𝑢 to 𝑣 | 30/100 = 0.3 | −1/1 = −1
No change from 𝑢 to 𝑣 | −30/100 = −0.3 | 1/1 = 1
Change from 𝑣 to 𝑢 | −30/70 = −3/7 | 1/2 = 0.5
No change from 𝑣 to 𝑢 | 30/70 = 3/7 | −1/2 = −0.5
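The relative consequences in Table 2 can be reproduced in a few lines. This is an illustrative sketch of the belief/unbelief degree computation (our own, not code from the paper), with each strategy written as a (time, money) pair:

```python
def belief_degree(u, v):
    """Per-objective relative gain/loss of moving from assets u to assets v
    (minimization): positive = belief (improvement), negative = unbelief."""
    return [(ui - vi) / ui for ui, vi in zip(u, v)]

u = [100, 1]  # first strategy: 100 minutes, cost 1
v = [70, 2]   # second strategy: 70 minutes, cost 2

print(belief_degree(u, v))  # change from u to v: [0.3, -1.0]
print(belief_degree(v, u))  # change from v to u: [-0.42857142857142855, 0.5], i.e., [-3/7, 1/2]
```

The two printed rows correspond to the "Change from 𝑢 to 𝑣" and "Change from 𝑣 to 𝑢" rows of Table 2; negating them gives the "No change" rows.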

In Table 2, a positive consequence means the degree of belief if the corresponding action is taken, and a negative value means the degree of unbelief. We therefore define the belief/unbelief degree of an action changing from 𝑢𝑖 to 𝑣𝑖 as

𝑏𝑖(𝑢,𝑣) = (𝑢𝑖−𝑣𝑖)/𝑢𝑖. (2.1)

It can be seen that even though the absolute time differences of the four actions are the same, the degrees of belief and unbelief differ greatly as the decision maker's current assets change. In Table 1, the degree of satisfaction of changing from 𝑢 to 𝑣 is the same as that of not changing from 𝑣 to 𝑢, and likewise for the degree of disappointment. But in Table 2, when the decision maker holds 𝑢, the degree of belief of changing from 𝑢 to 𝑣 is 0.3 with respect to time, while that of not changing is 1 with respect to money; on the other hand, when the decision maker holds 𝑣, the degree of belief of changing from 𝑣 to 𝑢 is 0.5 with respect to money, while that of not changing is 3/7 with respect to time. Hence, the rank of 𝑣 and 𝑢 cannot be determined by the traditional utility functions, and a new ranking rule is proposed in the following section.

3. A Ranking Rule Based on Subjective Probability

From the above example, we acknowledge an implicit assumption that a decision maker’s degrees of belief/unbelief are always conditional upon his current situation. Thus, we define a decision maker’s subjective probability 𝑃(𝐴∣𝑥) to denote the probability of the decision maker taking action 𝐴 when his current fortune is 𝑥.

Assume that there are two actions {𝐴1,𝐴2} in the event space. When the decision maker stands at the point 𝑢, 𝐴1 denotes that the change from 𝑢 to 𝑣 is approved, while 𝐴2 denotes that it is not. Thus, 𝑃(𝐴1∣𝑢)+𝑃(𝐴2∣𝑢)=1.

If 𝑃(𝐴1∣𝑢)>𝑃(𝐴2∣𝑢), the decision maker will change 𝑢 to 𝑣; in other words, he believes 𝑣 is better than 𝑢. Otherwise, he prefers 𝑢 to 𝑣. We therefore propose to model the decision maker's preference for ranking based on his subjective probability.

The decision maker’s subjective probabilities 𝑃(𝐴𝑖∣𝑢) are expressed in terms of the degree of subjective belief/unbelief for the action 𝐴𝑖 when the decision maker owns 𝑢. As seen in Table 2, the consequences show the degrees of subjective belief/unbelief of each action in the context of the decision maker's current assets. Note that the range of the belief degree is (0,+∞) and the range of the unbelief degree is (−∞,0). To avoid unbounded values when the denominator of the belief/unbelief degree equation is zero, there are two postprocessing options for 𝑏𝑖(𝑢,𝑣) in (2.1). One is to add an infinitesimally small number to the denominator when it is zero, chosen so that it has almost no influence on the outcome of the optimization. The other is to apply a special utility function to 𝑏𝑖(𝑢,𝑣) that maps the infinite range to a finite one, such as the arc-tangent function. But the second option carries a risk: a nonlinear utility function changes the relationships among the criteria, making it hard for the user to set the priorities of the different criteria. Therefore, the first option is used here, which keeps the ranking strategy intuitive for the user.

Since different criteria are considered to be at different priority levels, we define 𝜔1 and 𝜔2 as the weights of the belief/unbelief degrees of criteria 1 and 2, respectively, with 𝜔1+𝜔2=1. In order to ensure 𝑃(𝐴𝑖∣𝑢)∈[0,1], the subjective probabilities are written as

𝑃(𝐴1∣𝑢) = [𝜔1∗((𝑢1−𝑣1)/𝑢1) + 𝜔2∗((𝑢2−𝑣2)/𝑢2) + 1]/2, (3.1)
𝑃(𝐴2∣𝑢) = [𝜔1∗((𝑣1−𝑢1)/𝑢1) + 𝜔2∗((𝑣2−𝑢2)/𝑢2) + 1]/2. (3.2)

The same derivation also applies when the decision maker stands at the point 𝑣, which gives

𝑃(𝐴1∣𝑣) = [𝜔1∗((𝑣1−𝑢1)/𝑣1) + 𝜔2∗((𝑣2−𝑢2)/𝑣2) + 1]/2,
𝑃(𝐴2∣𝑣) = [𝜔1∗((𝑢1−𝑣1)/𝑣1) + 𝜔2∗((𝑢2−𝑣2)/𝑣2) + 1]/2. (3.3)

Let 𝑑(𝑢,𝑣) = 𝜔1∗((𝑢1−𝑣1)/𝑢1) + 𝜔2∗((𝑢2−𝑣2)/𝑢2) and 𝑑(𝑣,𝑢) = 𝜔1∗((𝑣1−𝑢1)/𝑣1) + 𝜔2∗((𝑣2−𝑢2)/𝑣2). We propose the following ranking rule.
(1) If 𝑑(𝑢,𝑣) > 𝑑(𝑣,𝑢), then 𝑃(𝐴1∣𝑢) > 𝑃(𝐴2∣𝑢), so 𝑢 should be replaced by 𝑣. In this case 𝑣≻𝑢, that is, rank(𝑣) < rank(𝑢).
(2) If 𝑑(𝑢,𝑣) < 𝑑(𝑣,𝑢), then 𝑃(𝐴1∣𝑢) < 𝑃(𝐴2∣𝑢), so 𝑢 should not be replaced by 𝑣. In this case 𝑣≺𝑢, that is, rank(𝑣) > rank(𝑢).
(3) If 𝑑(𝑢,𝑣) = 𝑑(𝑣,𝑢), then 𝑣∼𝑢, that is, rank(𝑣) = rank(𝑢).
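The subjective probabilities and the ranking rule can be implemented directly. This is an illustrative sketch (ours, not the paper's code); the small constant EPS follows the first postprocessing option of adding a tiny number to guard zero denominators:

```python
EPS = 1e-12  # tiny guard against zero denominators, per the first postprocessing option

def d(u, v, w):
    """Weighted net belief of swapping u for v, judged from u's current assets."""
    return sum(wi * (ui - vi) / (ui + EPS) for wi, ui, vi in zip(w, u, v))

def p_change(u, v, w):
    """Subjective probability P(A1|u) of approving the change from u to v, as in (3.1)."""
    return (d(u, v, w) + 1.0) / 2.0

def compare(u, v, w):
    """Ranking rule: 'v' if d(u,v) > d(v,u) (v preferred), 'u' if '<', else 'tie'."""
    duv, dvu = d(u, v, w), d(v, u, w)
    return 'v' if duv > dvu else ('u' if duv < dvu else 'tie')

w = [0.5, 0.5]                    # equal priority for time and money
u, v = [100.0, 1.0], [70.0, 2.0]  # the two ticket strategies of Section 2
print(p_change(u, v, w))          # about 0.325: the decision maker leans against changing
print(compare(u, v, w))           # 'u'
```

With equal weights the rule keeps 𝑢={100,1}: the relative money loss of moving to 𝑣 outweighs the relative time gain.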

4. Method Analysis

This section analyzes the ranking of two trade-off solutions 𝑢 and 𝑣 with the proposed method.

Theorem 4.1. If 𝑃(𝐴1∣𝑢)>𝑃(𝐴2∣𝑢), then 𝑃(𝐴1∣𝑣)<𝑃(𝐴2∣𝑣).

Proof. Since 𝑃(𝐴1∣𝑢)>𝑃(𝐴2∣𝑢), we have 𝜔1∗((𝑢1−𝑣1)/𝑢1)+𝜔2∗((𝑢2−𝑣2)/𝑢2)>0.
For a multiobjective optimization problem, one term is positive and the other negative; the two symmetric cases lead to the same conclusion, so we assume 𝜔1∗|(𝑢1−𝑣1)/𝑢1| > 𝜔2∗|(𝑢2−𝑣2)/𝑢2| with 𝑢1>𝑣1 and 𝑢2<𝑣2. From 𝑢1>𝑣1 it follows that 𝜔1∗|(𝑣1−𝑢1)/𝑣1| > 𝜔1∗|(𝑢1−𝑣1)/𝑢1|, and from 𝑢2<𝑣2 that 𝜔2∗|(𝑣2−𝑢2)/𝑣2| < 𝜔2∗|(𝑢2−𝑣2)/𝑢2|. Consequently, 𝜔1∗|(𝑣1−𝑢1)/𝑣1| > 𝜔2∗|(𝑣2−𝑢2)/𝑣2|, that is, 𝑃(𝐴1∣𝑣)<𝑃(𝐴2∣𝑣).
In conclusion, when 𝑃(𝐴1∣𝑢)>𝑃(𝐴2∣𝑢), then 𝑃(𝐴1∣𝑣)<𝑃(𝐴2∣𝑣); in other words, when 𝑑(𝑢,𝑣)>0, then 𝑑(𝑣,𝑢)<0. In this case the decision maker prefers to hold 𝑣 instead of 𝑢. This theorem is consistent with ranking rule (1).

Theorem 4.2. When 𝑃(𝐴1∣𝑢)<𝑃(𝐴2∣𝑢), the relation between 𝑃(𝐴1∣𝑣) and 𝑃(𝐴2∣𝑣) determines the final ranking.

Proof. From 𝑃(𝐴1∣𝑢)<𝑃(𝐴2∣𝑢), we have 𝜔1∗|(𝑢1−𝑣1)/𝑢1| < 𝜔2∗|(𝑢2−𝑣2)/𝑢2|. Since 𝜔1∗|(𝑣1−𝑢1)/𝑣1| > 𝜔1∗|(𝑢1−𝑣1)/𝑢1| and 𝜔2∗|(𝑣2−𝑢2)/𝑣2| < 𝜔2∗|(𝑢2−𝑣2)/𝑢2|, the comparison of 𝜔1∗|(𝑣1−𝑢1)/𝑣1| and 𝜔2∗|(𝑣2−𝑢2)/𝑣2| remains undetermined. Therefore, we consider the following situations corresponding to the different relations between 𝑃(𝐴1∣𝑣) and 𝑃(𝐴2∣𝑣).
(1) When 𝑑(𝑣,𝑢)>0, that is, 𝑃(𝐴1∣𝑣)>𝑃(𝐴2∣𝑣): since 𝑑(𝑢,𝑣)<0 and 𝑑(𝑣,𝑢)>0, we have 𝑣≺𝑢, that is, rank(𝑣)>rank(𝑢). This case obeys ranking rule (2).
(2) When 𝑑(𝑣,𝑢)<0, that is, 𝑃(𝐴1∣𝑣)<𝑃(𝐴2∣𝑣): 𝑃(𝐴1∣𝑢) and 𝑃(𝐴1∣𝑣) are compared, and the point with the higher probability should be changed, that is,
(i) if 𝑃(𝐴1∣𝑢)>𝑃(𝐴1∣𝑣), that is, 𝑑(𝑢,𝑣)>𝑑(𝑣,𝑢), then 𝑣≻𝑢, that is, rank(𝑣)<rank(𝑢);
(ii) if 𝑃(𝐴1∣𝑢)<𝑃(𝐴1∣𝑣), that is, 𝑑(𝑢,𝑣)<𝑑(𝑣,𝑢), then 𝑣≺𝑢, that is, rank(𝑣)>rank(𝑢);
(iii) if 𝑃(𝐴1∣𝑢)=𝑃(𝐴1∣𝑣), that is, 𝑑(𝑢,𝑣)=𝑑(𝑣,𝑢), then 𝑣∼𝑢, that is, rank(𝑣)=rank(𝑢).
In conclusion, when 𝑃(𝐴1∣𝑢)<𝑃(𝐴2∣𝑢), different relations between 𝑃(𝐴1∣𝑣) and 𝑃(𝐴2∣𝑣) lead to different rankings, all obeying the proposed ranking model.

Theorem 4.3 (transitivity). When 𝜔1=𝜔2, if 𝑑(𝑢,𝑣)>𝑑(𝑣,𝑢) and 𝑑(𝑣,𝑤)>𝑑(𝑤,𝑣), then 𝑑(𝑢,𝑤)>𝑑(𝑤,𝑢).

Proof. Since 𝑑(𝑢,𝑣)−𝑑(𝑣,𝑢) = −𝜔1∗(𝑣1/𝑢1−𝑢1/𝑣1) − 𝜔2∗(𝑣2/𝑢2−𝑢2/𝑣2), assume 𝑘1 = 𝑣1/𝑢1 > 1 and 𝑘2 = 𝑢2/𝑣2 > 1; then 𝑑(𝑢,𝑣)−𝑑(𝑣,𝑢) = −𝜔1∗(𝑘1−1/𝑘1) + 𝜔2∗(𝑘2−1/𝑘2).
In the case 𝜔1=𝜔2, 𝑑(𝑢,𝑣)−𝑑(𝑣,𝑢) = 𝜔1∗(𝑘2−𝑘1)∗(1+1/(𝑘1𝑘2)). Because 𝜔1,𝑘1,𝑘2>0 and 𝑑(𝑢,𝑣)−𝑑(𝑣,𝑢)>0, it follows that 𝑘2>𝑘1.
Similarly, assume 𝑚1 = 𝑤1/𝑣1 > 1 and 𝑚2 = 𝑣2/𝑤2 > 1; from 𝑑(𝑣,𝑤)>𝑑(𝑤,𝑣) we get 𝑚2>𝑚1.
Hence, 𝑢 and 𝑤 can be ranked based on

𝑑(𝑢,𝑤)−𝑑(𝑤,𝑢) = 𝜔1∗((𝑢2/𝑤2)−(𝑤1/𝑢1))∗(1+(𝑢1𝑤2)/(𝑤1𝑢2)) = 𝜔1∗(𝑘2𝑚2−𝑘1𝑚1)∗(1+1/(𝑘1𝑚1𝑘2𝑚2)) > 0. (4.1)
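The transitivity under equal weights can also be checked numerically. Rearranging, 𝑑(𝑢,𝑣)−𝑑(𝑣,𝑢) = 𝜔1∗(𝑢1/𝑣1−𝑣1/𝑢1)+𝜔2∗(𝑢2/𝑣2−𝑣2/𝑢2), and for 𝜔1=𝜔2 this is positive exactly when 𝑢1𝑢2 > 𝑣1𝑣2; since ordering by the product of the objective values is a total order, transitivity follows. The following sketch (our own sanity check, not part of the paper) verifies this equivalence on random points:

```python
import random

def d(u, v, w):
    """Weighted net belief of changing from u to v (Section 3)."""
    return sum(wi * (ui - vi) / ui for wi, ui, vi in zip(w, u, v))

# With h(x) = x - 1/x, h(a) + h(b) = (a + b)(ab - 1)/(ab) for a, b > 0,
# so for equal weights: d(u,v) > d(v,u)  <=>  (u1/v1)*(u2/v2) > 1  <=>  u1*u2 > v1*v2.
random.seed(1)
w = [0.5, 0.5]
ok = True
for _ in range(10000):
    u = [random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)]
    v = [random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)]
    ok &= (d(u, v, w) > d(v, u, w)) == (u[0] * u[1] > v[0] * v[1])
print(ok)  # True
```

Because the preference reduces to comparing products, the transitive chain 𝑘2>𝑘1, 𝑚2>𝑚1 in the proof is just the product order carried through.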

5. Simulation

In this section, as an extension, we apply the proposed approach to a general EMO setting, ranking hundreds of candidates, and discuss the improvement. Genetic programming (GP) and particle swarm optimization (PSO), two popular evolutionary algorithms for multiobjective optimization, are combined with the proposed ranking method in the following simulations.

5.1. The Proposed Ranking Method with MOGP

Previous research [7, 8] has found multiobjective genetic programming (MOGP) to be a good EMO tool for discovering and optimizing the structure of nonlinear models. We therefore consider a nonlinear system design problem and compare the performance of the proposed approach with traditional Pareto-based EMO methods.

Assume an unknown system expressed in the form of a general regression model as

𝐲 = 10𝐱4𝐱3 + 5𝐱3 + 5 + 𝐧, (5.1)

where 𝐲 is the output vector observed after the input data pass through the unknown system in the presence of additive white noise 𝐧 with zero mean and variance 0.01. The input data are expressed by an input regression matrix with four features 𝐗={𝐱1,𝐱2,𝐱3,𝐱4}. The vectors 𝐱1, 𝐱2, 𝐱3 are independent, randomly generated variables, while 𝐱4 is the sum of 𝐱1 and 𝐱2. This increases the diversity of structures, so more candidate solutions have approximately the same performance but different structural complexity. Three objectives are required for this problem: minimizing the complexity of the model structure, minimizing the number of features involved in the model, and minimizing the mean squared error (MSE) of the output.
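For reference, the training data for this experiment can be sketched as follows. The paper does not specify the input distribution, so uniform inputs on [0, 1) are an assumption here, and the model follows a literal reading of (5.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                    # 2000 training records (Section 5.1)
x1, x2, x3 = (rng.uniform(0.0, 1.0, n) for _ in range(3))
x4 = x1 + x2                                # x4 is deliberately correlated with x1 and x2
noise = rng.normal(0.0, np.sqrt(0.01), n)   # additive white noise: zero mean, variance 0.01
y = 10 * x4 * x3 + 5 * x3 + 5 + noise       # literal reading of (5.1)
print(y.shape)                              # (2000,)
```

The correlated feature 𝐱4 is what lets structurally different candidate models achieve nearly identical MSE, which is the point of the experiment.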

Thereby, Pareto-based EMO methods generate many Pareto optimal solutions in the first step. Here, we first used a popular Pareto-based EMO method, the nondominated sorting genetic algorithm (NSGA-II) described in [9], to identify the unknown nonlinear system. NSGAII-GP is implemented in two steps: Pareto optimality and final decision making. For the nonlinear system design problem, the final decision generally prefers the smallest approximation error in the Pareto optimal set. Its results are then compared with those obtained by applying the proposed ranking approach to the same problem. For the proposed ranking approach, we set all weight parameters to 1.

We used 2000 records as training data. All genetic programming runs used the same simulation parameters: population size = 100, generations = 20, maximum tree depth = 5, crossover probability = 0.7, and mutation probability = 0.3. Table 3 presents the results of the proposed ranking approach and NSGAII-GP over 10 trials.

(a) Results of the MOGP algorithm with the proposed ranking approach

Trial | Structure | Number of features | MSE
1 | 𝐱3 + 𝐱3𝐱4 | 2 | 1.8479e-4
2 | 𝐱3 + 𝐱4𝐱3 | 2 | 1.8265e-4
3 | 𝐱3 + 𝐱4𝐱3 | 2 | 1.8143e-4
4 | 𝐱4𝐱3 + 𝐱3 | 2 | 1.7979e-4
5 | 𝐱3 + 𝐱3𝐱4 | 2 | 1.7688e-4
6 | 𝐱3 + 𝐱4𝐱3 | 2 | 1.7720e-4
7 | 𝐱4𝐱3 + 𝐱3 | 2 | 1.8062e-4
8 | 𝐱4𝐱3 + 𝐱3 | 2 | 1.7902e-4
9 | 𝐱3 + 𝐱3𝐱4 | 2 | 1.8638e-4
10 | 𝐱3 + 𝐱4𝐱3 | 2 | 1.7611e-4

(b) Results of the MOGP algorithm with NSGA-II

Trial | Structure | Number of features | MSE
1 | (𝐱3 + 𝐱2) + 𝐱1 + (𝐱1𝐱2𝐱3𝐱2) + (𝐱4 + 𝐱3)(𝐱3 + 𝐱4) + 𝐱1𝐱1 + 𝐱3𝐱4 | 4 | 1.8479e-4
2 | ((𝐱4 + 𝐱1) + 𝐱3𝐱4) + 𝐱1𝐱1𝐱1 + (𝐱4𝐱4)(𝐱3𝐱4) | 3 | 1.8265e-4
3 | (𝐱4𝐱3)(𝐱4𝐱2)(𝐱4 + 𝐱2)𝐱3𝐱2 + (𝐱4 + 𝐱3) + (𝐱3𝐱4) | 3 | 1.8143e-4
4 | 𝐱2𝐱3 + 𝐱4𝐱3 + (𝐱2 + 𝐱3) + 𝐱4 + (𝐱4 + 𝐱3)𝐱4𝐱4(𝐱3𝐱3)(𝐱2𝐱2) | 3 | 1.7979e-4
5 | (𝐱2𝐱2)(𝐱2𝐱3)(𝐱3𝐱1) + (𝐱3 + 𝐱4 + 𝐱3𝐱4) + (𝐱2 + 𝐱3𝐱3) | 4 | 1.7688e-4
6 | 𝐱1𝐱1 + ((𝐱3 + 𝐱4) + 𝐱4𝐱3) + (𝐱4𝐱3(𝐱4𝐱4))(𝐱2𝐱3) | 4 | 1.7720e-4
7 | ((𝐱1𝐱1)𝐱1 + 𝐱1) + (𝐱4𝐱3𝐱4𝐱2) + (𝐱3 + 𝐱4𝐱4) | 3 | 1.8062e-4
8 | ((𝐱1𝐱1) + 𝐱4𝐱3) + ((𝐱1 + 𝐱3) + 𝐱3𝐱3) + 𝐱3𝐱3𝐱4𝐱4 + 𝐱4 | 3 | 1.7902e-4
9 | (𝐱1𝐱1𝐱1) + 𝐱1𝐱2 + 𝐱3 + 𝐱2 + 𝐱4𝐱3 + 𝐱3𝐱3 + 𝐱1 | 4 | 1.8638e-4
10 | (𝐱3 + 𝐱4𝐱3) + 𝐱1𝐱1𝐱1𝐱1 + (𝐱4 + 𝐱4𝐱3)((𝐱2 + 𝐱2)𝐱3𝐱2) | 4 | 1.7611e-4

Comparing parts (a) and (b) of Table 3 shows that NSGAII-GP cannot obtain the optimal model structure, while the new GP method with the proposed ranking approach easily converges to the best solution. In fact, NSGA-II can only find the Pareto optimal set, from which designers must use multicriteria decision making (MCDM) techniques to obtain the best solution; realizing MCDM always requires a complex ranking procedure over all Pareto optimal solutions. Hence, the new MOGP algorithm using the proposed ranking approach provides a better way to solve the nonlinear system design problem, obtaining a satisfactory solution without complex comparisons among candidate solutions.

5.2. The Proposed Ranking Method with MOPSO

Multiobjective particle swarm optimization (MOPSO) is widely used in a variety of applications, such as neural network training, with the outstanding advantages of simple implementation and low computational cost [10]. This section applies the proposed ranking method to the MOPSO algorithm to generate a new algorithm. The test function T4 proposed by Zitzler [11] is used to test the new algorithm's ability to handle multimodality.

As is known, the test function T4 contains 21^9 local Pareto optimal sets, shown as the red triangles in Figure 1; the global Pareto optimal front is formed by the function 𝑔=1, shown as the blue line formed by the red circles in Figure 1. Thus, general Pareto optimality methods usually get trapped in local Pareto optimal fronts and can hardly reach the entire global Pareto front. From Figure 1, it can be seen that there is more than one local Pareto optimal front in this problem, and not all local Pareto optimal sets are distinguishable in the objective space. As a result, general Pareto optimality methods tend to stop at local Pareto optimal solutions and miss some global Pareto optimal solutions. Therefore, a MOPSO using the Pareto optimal ranking method has much trouble obtaining the global Pareto optimal sets in the Pareto optimality step. Consequently, the global optimal solution is likely to be omitted in that step, which directly degrades the performance of the decision-making step.

However, when the proposed ranking method is applied with MOPSO to solve this problem, it works very well; more importantly, there is no risk of missing the global optimal solution, and only one step is needed to obtain the final unique result. Assume that the first objective 𝑓1 is at a higher priority level than the second objective 𝑓2; the weights of the belief/unbelief degrees of 𝑓1 and 𝑓2 are then defined with the relationship 𝑤1/𝑤2 = 2. Figure 2 presents the convergence curves of the objectives 𝑓1 (black) and 𝑓2 (blue). It can be seen that the proposed ranking method makes the objective with the higher priority converge quickly to its global optimum, after which the other objectives naturally converge to the global optima that satisfy the first objective. This simulation was run 100 times and the results are stable. Therefore, we conclude that the proposed ranking method with MOPSO can reach a final result located on the goal Pareto front and escape local optima when the multiobjective optimization problem is multimodal.
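As a minimal sketch (ours, with hypothetical candidate values), the pairwise rule can rank a swarm's objective vectors under the priority ratio 𝑤1/𝑤2 = 2. Note that Theorem 4.3 proves transitivity only for equal weights, so comparator-based sorting with unequal weights is a heuristic:

```python
from functools import cmp_to_key

EPS = 1e-12  # small guard against zero denominators (Section 3)

def d(u, v, w):
    """Weighted net belief of swapping u for v, judged from u's current position."""
    return sum(wi * (ui - vi) / (ui + EPS) for wi, ui, vi in zip(w, u, v))

def prefer(u, v, w):
    """Comparator: negative when u should rank ahead of v (rule 2), positive for rule 1."""
    duv, dvu = d(u, v, w), d(v, u, w)
    return -1 if duv < dvu else (1 if duv > dvu else 0)

w = [2 / 3, 1 / 3]  # w1/w2 = 2: f1 has the higher priority
# hypothetical (f1, f2) objective vectors standing in for particles of the swarm
swarm = [[0.2, 5.0], [0.5, 2.0], [1.0, 1.0], [0.1, 9.0]]
ranked = sorted(swarm, key=cmp_to_key(lambda a, b: prefer(a, b, w)))
print(ranked[0])  # [0.1, 9.0]: the candidate with the smallest f1 wins under this priority
```

Because 𝑓1 is weighted twice as heavily, the candidate with the best relative 𝑓1 value is ranked first even though its 𝑓2 value is the worst, matching the behavior described for Figure 2.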

6. Summary

A new ranking approach for evolutionary multiobjective optimization was presented. Compared with Pareto-based EMO algorithms, the main advantage of the proposed ranking approach is that it produces, in one step and without a separate Pareto optimality stage, a final solution consistent with the subjective information. More importantly, the approach uses the belief/unbelief degree to express the user's subjective choice instead of the conventional absolute difference of two consequences, because the belief/unbelief degree takes the current assets into account rather than measuring different quantities on the same scale. Therefore, the proposed function captures the true nature of chromosome ranking better than a simple weighted sum of the objectives. The validity of this approach for ranking large numbers of candidate solutions in EMO was demonstrated by two simulations. The first simulation concerns nonlinear system design; its results show that the MOGP with the proposed ranking approach achieves higher solution accuracy and faster convergence than a popular multiobjective GP algorithm, NSGAII-GP. Furthermore, the proposed ranking approach was applied to MOPSO to assess its performance on a multimodal optimization problem. Compared with the general Pareto optimality method, the proposed ranking approach with subjective preference information performs better in obtaining the final global optimal solution.

References

  1. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Chichester, UK, 2001.
  2. K. Deb, S. Agrawal, A. Pratab, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  3. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” in Proceedings of the Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems (EUROGEN 2001), K. Giannakoglou, Ed., Athens, Greece, September 2001.
  4. Y. Jin and B. Sendhoff, “Pareto-based multiobjective machine learning: an overview and case studies,” IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 38, no. 3, pp. 397–415, 2008.
  5. C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Genetic and Evolutionary Computation Series, Springer, New York, NY, USA, 2nd edition, 2007.
  6. K. Rodriguez-Vazquez, C. M. Fonseca, and P. J. Fleming, “Identifying the structure of nonlinear dynamic systems using multiobjective genetic programming,” IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 34, no. 4, pp. 531–545, 2004.
  7. G. N. Beligiannis, L. V. Skarlas, S. D. Likothanassis, and K. G. Perdikouri, “Nonlinear model structure identification of complex biomedical data using a genetic-programming-based technique,” IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 6, pp. 2184–2190, 2005.
  8. M. Willis, H. Hiden, M. Hinchliffe, B. Mckay, and G. W. Barton, “Systems modelling using genetic programming,” Computers and Chemical Engineering, vol. 21, no. 1, pp. 1161–1166, 1997.
  9. K. Deb, S. Agrawal, A. Pratab, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  10. M. R. Sierra and C. A. Coello Coello, “Multi-objective particle swarm optimizers: a survey of the state-of-the-art,” International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006.
  11. E. Zitzler, Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications, Ph.D. thesis, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, 1999.

Copyright © 2011 Shuang Wei and Henry Leung. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
