Research Article  Open Access
A Novel Ranking Method Based on Subjective Probability Theory for Evolutionary Multiobjective Optimization
Abstract
Many engineering problems are modeled as evolutionary multiobjective optimization problems, yet they usually call for a single best solution rather than a set of Pareto optimal solutions. The decision maker's subjective information plays an important role in choosing the best solution from several Pareto optimal ones. Generally, this decision-making process is carried out after Pareto optimality. In this paper, we attempt to incorporate the decision maker's subjective sense into Pareto optimality for chromosome ranking. A new ranking method based on subjective probability theory is thus proposed in order to explore and comprehend the true nature of the chromosomes on the Pareto optimal front. The properties of the ranking rule are proven, and its transitivity is established as well. Simulation results compare the performance of the proposed ranking approach with the Pareto-based ranking method on two multiobjective optimization cases, demonstrating the effectiveness of the new ranking approach.
1. Introduction
Evolutionary multiobjective optimization (EMO) is widely used in various engineering fields for analyzing complex criteria [1]. General evolutionary algorithms (EAs) first generate the Pareto optimal solutions based on Pareto optimality [2–4] and then choose the best one through user interaction [5]. The first step, obtaining the Pareto optimal set, is usually not only time consuming but also generates many redundant solutions. In fact, most engineering cases require only one solution, without any additional choices. However, little effort has been put into incorporating subjective information into Pareto optimality to reduce the search space in the first step. This paper therefore proposes a new ranking method that generates, in a single step, one solution consistent with the subjective information.
The paper is organized as follows. The next section introduces a bi-objective optimization case from which the proposed method originates. Based on this simple case, the new ranking method based on subjective probability is proposed in Section 3. In order to establish the soundness of the new method, several theoretical proofs of its properties are presented in Section 4. In Section 5, the proposed method is applied within two different EMO algorithms, genetic programming and particle swarm optimization, to verify its validity for ranking large numbers of chromosomes. The first simulation identifies the model design of a classical nonlinear autoregressive with exogenous inputs (NARX) model, which is a popular foundation for model construction [6]. In the second simulation, a test function with multimodality is also discussed. Experimental results are presented along with a comparison to the corresponding Pareto-based EMO approaches. Concluding remarks are given in Section 6.
2. Preliminary
In this section, we give an example to introduce the idea of ranking chromosomes based on subjective probability theory.
Consider waiting in line to buy a ticket, for which there are two strategies: the ticket costs $1, but we have to wait for 100 minutes; or we spend only 70 minutes waiting, but the ticket costs $2. Which choice would you make if you consider the absolute amounts of money and time lost or earned? On the other hand, which choice would you make if you consider the percentage of money and time lost or earned?
Obviously, this is a bi-objective minimization problem, with time and money as the two objectives. Let f_1 (time) and f_2 (money) denote the two objectives, and let A and B be two points on the Pareto front. The two strategies described in this problem can then be expressed as A = (100, 1) and B = (70, 2).
First, assume that a decision maker's attitude is influenced by the absolute difference between the values of the two strategies for each objective; that is, the differences are |100 − 70| = 30 minutes of time and |1 − 2| = $1 of money. Suppose that the decision maker holds the first strategy; he has two choices: change his assets and hold the second strategy, or not change his assets and still hold the first strategy. If he changes his mind and takes the second strategy, he saves 30 minutes but pays $1 more; otherwise, he loses the opportunity of saving 30 minutes but saves $1 at the same time. In contrast, suppose that he holds the second strategy: if he changes his mind from the second strategy to the first, he loses 30 minutes but saves $1; otherwise, he keeps the 30-minute saving but pays $1 more.
Table 1 shows the decision table of this case, using the multi-criteria decision-making model. For our problem, there are four actions and two criteria to evaluate the decision maker's attitude. Since both objectives are minimized, we can consider that a positive consequence means the degree of satisfaction if the corresponding action is taken, and a negative one means the degree of disappointment if the action is taken. Then, the decision maker will choose the action with the higher degree of satisfaction in terms of the two criteria. In order to balance the two criteria, we can compare the consequences of different actions based on a utility function U(time, money) or according to a weighted function, where the weights denote the importance indices of the different criteria (one for time and one for money).
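The absolute-difference model above can be sketched in a few lines of code. The importance weights below are hypothetical placeholders, since the example weight values mentioned in the text are not reproduced here.

```python
# Sketch of the absolute-difference decision model for the ticket example.
# Strategy A: wait 100 minutes, pay $1; strategy B: wait 70 minutes, pay $2.
A = (100.0, 1.0)
B = (70.0, 2.0)

def consequence(x, y, i):
    # Positive: satisfaction (criterion i improves by changing x -> y);
    # negative: disappointment. Both criteria are minimized.
    return x[i] - y[i]

# Hypothetical importance weights for (time, money).
w = (0.0625, 0.5)

def weighted_utility(x, y):
    return sum(w[i] * consequence(x, y, i) for i in range(2))

print(consequence(A, B, 0))    # 30.0: changing A -> B saves 30 minutes
print(consequence(A, B, 1))    # -1.0: changing A -> B costs $1 more
print(weighted_utility(A, B))  # 1.375: under these weights, the change looks favorable
```

Note that this model depends only on the differences, not on what the decision maker currently holds, which is exactly the limitation discussed next.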

But it is limited to measure the decision maker's evaluation only according to the absolute difference of two consequences. In general, the decision maker changes his evaluation as his current assets change. This is the basis of the proposed method for comprehending the true nature of the chromosomes on the Pareto optimal front. For example, when the decision maker holds the first strategy, his current assets are 100 minutes and $1; if he changes to the second strategy, his time asset obtains benefit with reference to his present time asset (100 mins), and his money asset loses benefit with reference to his present money asset ($1). In contrast, when his current assets are 70 minutes and $2, if he gives up what he owns in order to achieve 100 minutes and $1, his time asset loses benefit with reference to his present time asset (70 mins), and his money asset obtains benefit with reference to his present money asset ($2). Therefore, if the situation of the decision maker changes, then so may his attitude. This is an important property that we should take into account for chromosome ranking. Based on this observation, we propose a new ranking method that assesses the decision maker's attitude in the context of his current assets and then redetermines the evaluation as his current assets change. Table 2 presents the decision table when the new ranking method is applied to our example.

In Table 2, a positive consequence means the degree of belief if the corresponding action is taken; a negative value means the degree of unbelief of the corresponding action. So we define the belief/unbelief degree of an action changing from point x to point y, with respect to criterion i, as

d_i(x → y) = (f_i(x) − f_i(y)) / f_i(x),  i = 1, 2.  (2.1)
It can be seen that even though the absolute time differences of the four actions are the same, the degrees of belief and unbelief differ greatly as the current assets of the decision maker change. In Table 1, the degree of satisfaction of changing from the first strategy to the second is the same as that of not changing when holding the second, and so is the degree of disappointment. But in Table 2, when the decision maker holds the first strategy, the degree of belief of changing to the second is 0.3 with respect to time, and that of not changing is 1 with respect to money; on the other hand, when the decision maker holds the second strategy, the degree of belief of changing to the first is 0.5 with respect to money, and that of not changing is 3/7 with respect to time. Hence, the ranks of the two strategies cannot be determined by the traditional utility functions. A new ranking rule is therefore proposed in the following section.
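The degrees quoted above can be reproduced with a short sketch, assuming the degree of a change is the criterion's difference divided by the holder's current value of that criterion, which matches the numbers 0.3, 1, 0.5, and 3/7 in the text:

```python
# Belief/unbelief degrees of Table 2: the change from x to y on criterion i
# is measured relative to the decision maker's current asset x[i], not as
# an absolute difference. Both criteria are minimized.
A = (100.0, 1.0)  # first strategy: 100 minutes, $1
B = (70.0, 2.0)   # second strategy: 70 minutes, $2

def degree(x, y, i):
    # positive -> degree of belief, negative -> degree of unbelief
    return (x[i] - y[i]) / x[i]

# Holding A: belief 0.3 in time, unbelief 1 in money.
print(degree(A, B, 0))  # 0.3
print(degree(A, B, 1))  # -1.0
# Holding B: belief 0.5 in money, unbelief 3/7 in time.
print(degree(B, A, 1))  # 0.5
print(degree(B, A, 0))  # -3/7, about -0.4286
```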
3. A Ranking Rule Based on Subjective Probability
From the above example, we acknowledge an implicit assumption that a decision maker's degrees of belief/unbelief are always conditional upon his current situation. Thus, we define a decision maker's subjective probability P_x(a) to denote the probability of the decision maker taking action a when his current fortune is x.
Assume that there are two actions, a_1 and a_2, in the event space. When the decision maker stands at the point x, a_1 denotes that "change from x to y" is approved, while a_2 denotes that "change from x to y" is not approved. Thus, P_x(a_1) + P_x(a_2) = 1.
If P_x(a_1) > P_x(a_2), the decision maker will change to y; in other words, he believes y is better than x. Otherwise, he prefers x to y. Thus, we propose to model the decision maker's preference for ranking based on his subjective probability.
The decision maker's subjective probabilities are expressed according to the degrees of subjective belief/unbelief for the actions when the decision maker owns a given point. As we have seen in Table 2, the consequences show the degrees of subjective belief/unbelief of each action in the context of the decision maker's current assets. It is noted that the range of the belief degree is (0, 1) and the range of the unbelief degree is (−∞, 0). In order to avoid the infinity issue that arises when the denominator of the belief/unbelief degree equation (2.1) is zero, there are two postprocessing options. One is to add an infinitesimally small number to the denominator when its value is zero; such a small number should have almost no influence on the outcome of the optimization. The other is to apply some special utility function to (2.1) to map the infinite range onto a finite one, such as the arctangent function. However, the second option carries the risk that the priorities of the different criteria become hard for the user to determine, because a nonlinear utility function changes the relationships between the criteria. Therefore, the first postprocessing option is used here to avoid the infinity issue, which keeps the ranking strategy intuitive for the user.
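The first postprocessing option can be sketched as a guarded version of the degree computation; the epsilon value below is an assumption, standing in for the "infinitesimally small number" mentioned in the text.

```python
# Guarded belief/unbelief degree: when the denominator f_i(x) is zero,
# substitute a tiny constant so the degree stays finite. EPS is an
# assumed value, not one specified in the text.
EPS = 1e-12

def safe_degree(x, y, i, eps=EPS):
    # belief/unbelief degree (f_i(x) - f_i(y)) / f_i(x), denominator guarded
    denom = x[i] if x[i] != 0 else eps
    return (x[i] - y[i]) / denom

print(safe_degree((100.0, 1.0), (70.0, 2.0), 0))  # 0.3, unaffected by the guard
print(safe_degree((0.0, 1.0), (2.0, 1.0), 0))     # very large negative, but finite
```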
Since different criteria are considered to be at different priority levels, we define w_1 and w_2 as the weights of the belief/unbelief degrees of criteria 1 and 2, respectively, with w_1 + w_2 = 1. In order to make sure that P_x(a_1) + P_x(a_2) = 1, the subjective probabilities are written as
The same derivation also works when the decision maker stands at the point y, giving P_y(b_1) and P_y(b_2), where b_1 denotes that "change from y to x" is approved. Suppose p = P_x(a_1) and q = P_y(b_1); we propose a new ranking rule described as follows.
(1) If p > q, then the decision maker is more inclined to abandon x than y; thereby x should be substituted by y. Therefore, in this case, rank(y) < rank(x), that is, y is ranked ahead of x.
(2) If p < q, then x should not be substituted by y. Therefore, in this case, rank(x) < rank(y), that is, x is ranked ahead of y.
(3) If p = q, then rank(x) = rank(y), that is, x and y receive the same rank.
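A minimal sketch of ranking two trade-off points by comparing the subjective probabilities of the two "change" actions follows. Because the normalization formula for the subjective probabilities did not survive extraction, a logistic squashing of the weighted belief/unbelief degrees is used below as an illustrative stand-in, and the equal criterion weights are assumptions.

```python
import math

A = (100.0, 1.0)  # ticket example: 100 minutes, $1
B = (70.0, 2.0)   # 70 minutes, $2
w = (0.5, 0.5)    # assumed equal criterion weights

def degree(x, y, i):
    # belief (>0) / unbelief (<0) degree of changing x -> y on criterion i
    return (x[i] - y[i]) / x[i]

def p_change(x, y):
    # Probability that the holder of x approves the change x -> y.
    # The logistic map is an illustrative stand-in, not the paper's formula.
    d = sum(w[i] * degree(x, y, i) for i in range(2))
    return 1.0 / (1.0 + math.exp(-d))

def rank(x, y):
    # Returns -1 if x is ranked ahead of y, 1 if y is ranked ahead, 0 if tied:
    # the point whose holder is keener to abandon it is ranked behind.
    p, q = p_change(x, y), p_change(y, x)
    if p > q:
        return 1
    if p < q:
        return -1
    return 0

print(rank(A, B))  # -1: A is ranked ahead of B under these assumed weights
```

The comparator is antisymmetric by construction, so swapping the arguments flips the result.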
4. Method Analysis
This section analyzes the ranking of two trade-off points, x and y, with the proposed method.
Theorem 4.1. When , then .
Proof. Since , then 0,
For the multiobjective optimization problem, one term is positive and the other is negative; otherwise the two points would yield the same solution. Hence, here we assume , , . Derived from this assumption, we have ; . Consequently, , that is, .
As a conclusion, when , then is certain. In other words, when , then . In this case, the decision maker prefers to hold instead of . It is observed that this theorem is consistent with ranking rule 1.
Theorem 4.2. When , the relation of and determines the final rank sequence.
Proof. According to , we have . But it is found that ; , so the comparison of and is still unknown. Therefore, we should consider the following situations corresponding to different relations of and .
(1) When , that is, , the ranking model is if , and thus , that is, . This case obeys ranking rule 2.
(2) When , that is, , and are compared, and the point with the higher probability should be changed, that is,
(i) if , that is, , thus , that is, ;
(ii) if , that is, , thus , that is, ;
(iii) if , that is, , thus , that is, .
As a conclusion, when , different relations of and would cause different rankings, which still obey the proposed ranking model.
Theorem 4.3. Transitivity property: when , if and , then .
Proof. Since , assume that , , then we have .
In the case with , . Because , , it can be derived that .
Similarly, assume that , , according to , we can get that .
Hence, and can be ranked based on
5. Simulation
In the simulation part, as an extension, we apply the proposed approach to ranking hundreds of candidates within a general EMO framework and discuss the improvement. Genetic programming (GP) and particle swarm optimization (PSO), two popular evolutionary algorithms for multiobjective optimization, are combined with the proposed ranking method in the following simulations.
5.1. The Proposed Ranking Method with MOGP
Previous research [7, 8] has found that multiobjective genetic programming (MOGP) is a good EMO tool for discovering and optimizing the structure of nonlinear models. We therefore consider a nonlinear system design problem and compare the performance of the proposed approach with that of traditional Pareto-based EMO methods.
Assume an unknown system expressed in the form of a general regression model, y = f(U) + e, where y is the output vector observed after the input data pass through the unknown system in the presence of the additive white noise e, with zero mean and variance 0.01. The input data are expressed by an input regression matrix U with four different features. Three of the feature vectors are independent variables generated at random, while the fourth is the sum of two of the others. This means that the diversity of structures is increased and more candidate solutions have approximately the same performance but different structural complexity. Three objectives are required for this problem: minimizing the complexity of the model structure, minimizing the number of features involved in the model, and minimizing the mean squared error (MSE) of the output.
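A hedged sketch of this data-generating setup is given below. The concrete nonlinear map, the input distributions, and which two features sum to form the dependent one are all assumptions; the source only specifies four features, one linearly dependent on the others, and additive white noise with zero mean and variance 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of training records, as in the simulation setup

# Three independent features plus one dependent feature.
u1 = rng.uniform(-1.0, 1.0, n)
u2 = rng.uniform(-1.0, 1.0, n)
u3 = rng.uniform(-1.0, 1.0, n)
u4 = u1 + u2  # assumed: the dependent feature is the sum of the first two
X = np.column_stack([u1, u2, u3, u4])

def unknown_system(X):
    # Purely illustrative nonlinear map standing in for the unknown system.
    return 0.5 * X[:, 0] ** 2 - 0.3 * X[:, 1] * X[:, 2]

# Additive white noise with zero mean and variance 0.01 (std 0.1).
noise = rng.normal(0.0, np.sqrt(0.01), n)
y = unknown_system(X) + noise
```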
Thereby, the Pareto-based EMO methods would generate more Pareto optimal solutions in the first step. Here, we first used a popular Pareto-based EMO method, the nondominated sorting genetic algorithm (NSGA-II) described in [9], to identify the unknown nonlinear system. NSGA-II-GP is implemented in two steps: Pareto optimality and final decision making. For the nonlinear system design problem, the final decision generally prefers the smallest approximation error in the Pareto-optimal set. Consequently, its results are compared with those obtained after applying the proposed ranking approach to this problem. For the proposed ranking approach, we set all the weight parameters to 1.
We used 2000 records of training data. All of the genetic programming algorithms used the same simulation parameters: population size = 100, generations = 20, maximum tree depth = 5, crossover probability = 0.7, and mutation probability = 0.3. Table 3 presents the results of the proposed ranking approach and NSGA-II-GP over 10 trials.
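The shared GP settings above can be collected in a small configuration dict; the key names are illustrative, not taken from any particular GP library.

```python
# Simulation parameters shared by all GP runs in this subsection.
GP_PARAMS = {
    "population_size": 100,
    "generations": 20,
    "max_tree_depth": 5,
    "crossover_probability": 0.7,
    "mutation_probability": 0.3,
    "n_training_records": 2000,
    "n_trials": 10,
}
print(GP_PARAMS["population_size"], GP_PARAMS["generations"])  # 100 20
```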
(a)  
 
(b)  

Comparing the results of (a) and (b) in Table 3, it can be found that NSGA-II-GP cannot obtain the optimal model structure, while the new GP method with the proposed ranking approach easily converges to the best solution. Actually, NSGA-II can only find the Pareto-optimal set, from which designers must use multi-criteria decision-making (MCDM) techniques to obtain the best solution. But the realization of MCDM always needs a complex ranking procedure over all the Pareto optimal solutions. Hence, the new MOGP algorithm using the proposed ranking approach provides a better way to solve the nonlinear system design problem: it obtains a satisfactory solution without complex comparisons among candidate solutions.
5.2. The Proposed Ranking Method with MOPSO
Multiobjective particle swarm optimization (MOPSO) is widely used in a variety of applications, such as neural networks, owing to its outstanding advantages of simple implementation and low computational cost [10]. This section applies the proposed ranking method to the MOPSO algorithm to generate a new algorithm. The test function T4 proposed by Zitzler [11] is used to test the new algorithm's ability to deal with multimodality.
As we know, the test function T4 contains 21^9 local Pareto optimal sets, shown as the red triangles in Figure 1; the global Pareto optimal front is formed where g(x) = 1, that is, f_2 = 1 − sqrt(f_1), shown as the blue line formed by the red circles in Figure 1. Thus, general Pareto-optimality methods usually sink into the local Pareto optimal fronts and can hardly obtain the whole global Pareto front. From Figure 1, it is seen that there is more than one local Pareto optimal front in this simulation problem and, additionally, not all local Pareto-optimal sets are distinguishable in the objective space. This issue causes general Pareto-optimality methods to stop at some local Pareto optimal solutions and miss some global Pareto optimal solutions. Therefore, an MOPSO using the Pareto optimal ranking method has much trouble obtaining the global Pareto optimal sets in the Pareto-optimality step. Consequently, the global optimal solution is likely to be omitted in the Pareto-optimality step, which directly degrades the performance of the decision-making step.
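For reference, the test function T4 (commonly known as ZDT4) can be sketched as follows; on the global Pareto front g(x) = 1, so f_2 = 1 − sqrt(f_1).

```python
import math

def zdt4(x):
    # Zitzler's T4 (ZDT4): x[0] in [0, 1], x[1:] in [-5, 5], typically len(x) = 10.
    f1 = x[0]
    g = 1.0 + 10.0 * (len(x) - 1) + sum(
        xi ** 2 - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:]
    )
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# A point on the global front: all tail variables zero gives g(x) = 1,
# so f2 = 1 - sqrt(f1).
x_star = [0.25] + [0.0] * 9
f1, f2 = zdt4(x_star)
print(f1, f2)  # 0.25 0.5
```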
However, when the proposed ranking method with the MOPSO is applied to solve this problem, it works very well; more importantly, there is no risk of missing the global optimal solution, and only one step is needed to obtain the final unique result. Assume that the first objective f_1 is at a higher priority level than the second objective f_2; then the weights of the belief/unbelief degrees of the objectives f_1 and f_2 are defined to satisfy w_1 > w_2. Figure 2 presents the convergence curves in terms of the objectives f_1 (black) and f_2 (blue), respectively. It is found that the proposed ranking method makes the objective with the higher priority quickly converge to its global optimum, and the other objectives then naturally converge to the global optima that satisfy the first objective. This simulation was run 100 times and the results were stable. Therefore, it is concluded that the proposed ranking method with the MOPSO can achieve a final result located on the global Pareto front and escape the local optima when the multiobjective optimization problem is multimodal.
6. Summary
A new ranking approach for evolutionary multiobjective optimization was presented. Compared with Pareto-based EMO algorithms, the main advantage of the proposed ranking approach is that it produces, in one step and without a separate Pareto-optimality stage, a final solution consistent with the subjective information. More importantly, this approach uses the belief/unbelief degree to express the subjective choice of the user instead of the conventional absolute difference of two consequences, because the belief/unbelief degree takes the current assets into account rather than measuring different quantities on the same scale. Therefore, the proposed function can comprehend the true nature of chromosome ranking better than a simple weighted sum of the objectives. The validity of this approach for ranking large numbers of candidate solutions in EMO is demonstrated by two simulations. The first simulation concerns nonlinear system design; its results show that the MOGP with the proposed ranking approach attains higher solution accuracy and faster convergence than a popular multiobjective GP algorithm, NSGA-II-GP. Furthermore, the proposed ranking approach is applied to the MOPSO to demonstrate its performance on a multimodal optimization problem. Compared with the general Pareto-optimality method, the proposed ranking approach with subjective preference information performs better in obtaining the final global optimal solution.
References
 K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Chichester, UK, 2001.
 K. Deb, S. Agrawal, A. Pratab, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
 E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: improving the strength Pareto evolutionary algorithm," in Proceedings of the Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems (EUROGEN 2001), K. Giannakoglou, Ed., Athens, Greece, September 2001.
 Y. Jin and B. Sendhoff, "Pareto-based multiobjective machine learning: an overview and case studies," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 38, no. 3, pp. 397–415, 2008.
 C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Genetic and Evolutionary Computation Series, Springer, New York, NY, USA, 2nd edition, 2007.
 K. Rodriguez-Vazquez, C. M. Fonseca, and P. J. Fleming, "Identifying the structure of nonlinear dynamic systems using multiobjective genetic programming," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 34, no. 4, pp. 531–545, 2004.
 G. N. Beligiannis, L. V. Skarlas, S. D. Likothanassis, and K. G. Perdikouri, "Nonlinear model structure identification of complex biomedical data using a genetic-programming-based technique," IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 6, pp. 2184–2190, 2005.
 M. Willis, H. Hiden, M. Hinchliffe, B. McKay, and G. W. Barton, "Systems modelling using genetic programming," Computers and Chemical Engineering, vol. 21, no. 1, pp. 1161–1166, 1997.
 K. Deb, S. Agrawal, A. Pratab, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
 M. R. Sierra and C. A. Coello Coello, "Multi-objective particle swarm optimizers: a survey of the state-of-the-art," International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006.
 E. Zitzler, Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications, Ph.D. thesis, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, 1999.
Copyright
Copyright © 2011 Shuang Wei and Henry Leung. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.