Research Article  Open Access
Le Quang Thuy, Nguyen Thi Bach Kim, Nguyen Tuan Thien, "Generating Efficient Outcome Points for Convex Multiobjective Programming Problems and Its Application to Convex Multiplicative Programming", Journal of Applied Mathematics, vol. 2011, Article ID 464832, 21 pages, 2011. https://doi.org/10.1155/2011/464832
Generating Efficient Outcome Points for Convex Multiobjective Programming Problems and Its Application to Convex Multiplicative Programming
Abstract
Convex multiobjective programming problems and multiplicative programming problems have important applications in finance, economics, bond portfolio optimization, engineering, and other fields. This paper presents a simple algorithm for generating a number of efficient outcome points for convex multiobjective programming problems. As an application, we propose an outer approximation algorithm in the outcome space for solving convex multiplicative programs. Computational results are reported for several test problems.
1. Introduction
The convex multiobjective programming problem involves simultaneously minimizing p noncomparable convex objective functions f_1, ..., f_p over a nonempty convex feasible region X in R^n and may be written as

(CMP)   Min f(x) = (f_1(x), ..., f_p(x))   subject to x ∈ X,

where f = (f_1, ..., f_p). When X is a polyhedral convex set and f_1, ..., f_p are linear functions, problem (CMP) is said to be a linear multiobjective programming problem.
For a nonempty set Q ⊂ R^p, we denote by Min(Q) and WMin(Q) the sets of all efficient points and weakly efficient points of Q, respectively; that is,

Min(Q) = {y ∈ Q : there is no y' ∈ Q such that y' ≤ y and y' ≠ y},
WMin(Q) = {y ∈ Q : there is no y' ∈ Q such that y' < y}.
Here, for any two vectors y, z ∈ R^p, the notations y ≤ z and y < z mean z - y ∈ R^p_+ and z - y ∈ int R^p_+, respectively, where R^p_+ is the nonnegative orthant of R^p and int R^p_+ is the interior of R^p_+. By definition, Min(Q) ⊂ WMin(Q).
The set Y = f(X) is called the outcome set (or image) of X under f. A point x^0 ∈ X is said to be an efficient solution for problem (CMP) if f(x^0) ∈ Min(Y). For simplicity of notation, let X_E denote the set of all efficient solutions for problem (CMP). When f(x^0) ∈ WMin(Y), x^0 is called a weakly efficient solution for problem (CMP), and the set of all weakly efficient solutions is denoted by X_WE. It is clear that X_E and X_WE are the preimages of Min(Y) and WMin(Y) under f, respectively. We will refer to Min(Y) and WMin(Y) as the efficient outcome set and the weakly efficient outcome set for problem (CMP), respectively.
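The efficiency concepts above can be made concrete on a finite set of outcome vectors. The following sketch (an illustration only; the function name and toy data are ours, not the paper's) filters a finite subset of R^p down to its efficient points under the componentwise ordering:

```python
def pareto_filter(points):
    """Return the efficient (Pareto-minimal) points of a finite set:
    y is efficient if no other point z satisfies z <= y componentwise
    with z != y (minimization in every coordinate)."""
    pts = [tuple(map(float, p)) for p in points]

    def dominates(z, y):
        return z != y and all(zi <= yi for zi, yi in zip(z, y))

    return [y for y in pts if not any(dominates(z, y) for z in pts)]

# Toy outcome set in R^2: (1, 3) and (2, 1) are efficient, (2, 3) is dominated.
print(pareto_filter([(1.0, 3.0), (2.0, 1.0), (2.0, 3.0)]))
# → [(1.0, 3.0), (2.0, 1.0)]
```

For a continuous outcome set Y this finite scan does not apply, which is precisely why the paper develops an algorithm based on convex subproblems.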
The goal of problem (CMP) is to generate the sets X_E and X_WE, or at least subsets of them. However, it has been shown that, in practice, the decision maker prefers to base his or her choice of a most preferred solution primarily on Min(Y) and WMin(Y) rather than on X_E and X_WE. Arguments to this effect are given in [1].
It is well known that the task of generating X_E, X_WE, Min(Y), WMin(Y), or significant portions of these sets for problem (CMP) is difficult. This is because they are, in general, nonconvex sets, even in the case of the linear multiobjective programming problem.
Problem (CMP) arises in a wide variety of applications in engineering, economics, network planning, production planning, and operations research, especially in multicriteria design and multicriteria decision making (see, for instance, [2, 3]). Many of the approaches for analyzing convex multiobjective programming problems involve generating the sets X_E, X_WE, Min(Y), and WMin(Y), or a subset thereof, without any input from the decision maker (see, e.g., [1, 2, 4–13] and the references therein). For a survey of recent developments, see [6].
This paper has two purposes: (i) the first is to propose an algorithm for generating a number of efficient outcome points for the convex multiobjective programming problem (CMP), depending on the requirements of the decision maker (Algorithm 1 in Section 2); computational experiments show that this algorithm is quite efficient; (ii) as an application, we present an outer approximation algorithm for solving, in the outcome space, the convex multiplicative programming problem (CP) associated with problem (CMP) (Algorithm 2 in Section 3), where problem (CP) can be formulated as

(CP)   min ∏_{i=1}^p f_i(x)   subject to x ∈ X.
It is well known that problem (CP) is a global optimization problem and is NP-hard, even in special cases such as when p = 2, X is a polyhedron, and f_i is linear for each i (see [14]). Because of the wide range of its applications, this problem has attracted a lot of attention from both researchers and practitioners, and many algorithms have been proposed for solving it globally (see, e.g., [10, 14–20] and the references therein).
The paper is organized as follows. In Section 2, we present Algorithm 1 for generating efficient outcome points for the convex multiobjective programming problem (CMP), together with its theoretical basis. To illustrate the performance of Algorithm 1, we use it to generate efficient points for a sample problem. Algorithm 2 for solving the convex multiplicative programming problem associated with problem (CMP), together with numerical examples, is described in Section 3.
2. Generating Efficient Outcome Points for Problem (CMP)
2.1. Theoretical Basis
Assume henceforth that X is a nonempty, compact convex set given by

X = {x ∈ R^n : g_j(x) ≤ 0, j = 1, ..., m},   (2.1)

where all the g_j are convex functions on R^n. Then the set Y + R^p_+, where Y = f(X), is a nonempty, full-dimensional convex set in R^p. Furthermore, from Yu [21, page 22, Theorem 3.2], we know that Min(Y) = Min(Y + R^p_+).
For each i ∈ {1, ..., p}, we consider the following problem:

(P_i)   min f_i(x)   subject to x ∈ X.

We denote the optimal value of problem (P_i) by y^I_i and an optimal solution of this problem by x^i.
Let y^I = (y^I_1, ..., y^I_p), where y^I_i is the optimal value of problem (P_i). As usual, the point y^I is said to be an ideal efficient point of Y. It is clear that if y^I ∈ Y, then Min(Y) = {y^I}. Therefore, we suppose that y^I ∉ Y. Obviously, by definition, if x^i is an optimal solution for problem (P_i), then f(x^i) is an optimal solution for the problem min{y_i : y ∈ Y}, and the optimal values of these two problems are equal.
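Computing the ideal point amounts to p scalar convex minimizations, one per objective. The sketch below approximates these minimizations by a crude grid search over a hypothetical feasible set (a quarter disk; neither the set nor the objectives are the paper's example); in practice each component would be obtained by a convex programming solver:

```python
import math

# Hypothetical instance (NOT the paper's example):
# X = {x in R^2_+ : x1^2 + x2^2 <= 4}, a quarter disk of radius 2,
# f1(x) = (x1 - 3)^2 + x2^2,   f2(x) = x1^2 + (x2 - 3)^2.
fs = [lambda x: (x[0] - 3.0) ** 2 + x[1] ** 2,
      lambda x: x[0] ** 2 + (x[1] - 3.0) ** 2]

# Crude polar grid over X; a real implementation would solve each
# problem (P_i) with a convex programming solver instead.
N = 200
grid = [(r * math.cos(t), r * math.sin(t))
        for r in (2.0 * i / N for i in range(N + 1))
        for t in (math.pi / 2 * j / N for j in range(N + 1))]

ideal = [min(f(x) for x in grid) for f in fs]
print(ideal)  # the ideal point y^I, here approximately (1, 1)
```

Here f1 is minimized at (2, 0) and f2 at (0, 2), so the ideal point (1, 1) is attained by two different feasible points, illustrating why y^I itself is generally infeasible in outcome space.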
To generate various efficient outcome points in Min(Y), the algorithm will rely upon the point ŷ = (ŷ_1, ..., ŷ_p), where ŷ_i = y^I_i - α for each i, and α is any real number satisfying condition (2.4), which ensures in particular that ŷ_i < y^I_i for all i.
We consider the set G given by

G = {y ∈ R^p : ŷ ≤ y ≤ b},

where b ∈ R^p is chosen so that y ≤ b for every y ∈ Min(Y). It is obvious that G is a nonempty, full-dimensional compact convex set in R^p. The set G is instrumental in Algorithm 1, to be presented in Section 2.2, for generating efficient outcome points for problem (CMP).
Remark 2.1. In 1998, Benson [1] presented an outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiobjective linear programming problem. The set G here seems analogous to the set considered by Benson [1]. Note, however, that ŷ ∉ Y + R^p_+ and that G is not necessarily a subset of Y + R^p_+. Figure 1 illustrates the set G in the case p = 2.
Proposition 2.2. Min(Y) ⊂ G.
Proof. This result is easy to show by using Yu [21, page 22, Theorem 3.2] and the definition of the point ŷ; the proof is therefore omitted.
Let Γ = G \ int(Y + R^p_+). It is clear that ŷ ∈ Γ. The following fact will play an important role in establishing the validity of our algorithm.
Proposition 2.3. For each point q ∈ Γ, let ȳ(q) denote the unique point on the boundary of Y + R^p_+ that belongs to the line segment connecting q and b. Then ȳ(q) ∈ Min(Y).
Proof. Let ȳ = ȳ(q). Since Y + R^p_+ is a closed convex set and ȳ belongs to the boundary of Y + R^p_+, the set (Y + R^p_+) - ȳ is also a closed convex set containing the origin 0 of the outcome space, and 0 belongs to its boundary. According to the separation theorem [22], there is a nonzero vector λ ∈ R^p such that

⟨λ, y⟩ ≥ 0   for all y ∈ (Y + R^p_+) - ȳ.   (2.7)
Let Δ be the simplex with vertices 0, e^1, ..., e^p, where e^1, ..., e^p are the unit vectors of R^p. Since e^i ∈ R^p_+, we have e^i ∈ (Y + R^p_+) - ȳ for each i. From (2.7), by taking y to be e^i, we see that

λ_i ≥ 0 for all i = 1, ..., p, that is, λ ∈ R^p_+ \ {0}.   (2.8)
Furthermore, the expression (2.7) can be written as

⟨λ, y⟩ ≥ ⟨λ, ȳ⟩   for all y ∈ Y + R^p_+,   (2.9)
that is,

ȳ is an optimal solution of min{⟨λ, y⟩ : y ∈ Y + R^p_+}.   (2.10)
According to [23, Chapter 4, Theorem 2.10], a point y^0 is a weakly efficient point of Y + R^p_+ if and only if there is a nonzero vector λ ∈ R^p_+ such that y^0 is an optimal solution to the convex programming problem

min{⟨λ, y⟩ : y ∈ Y + R^p_+}.   (2.11)
Combining this fact, (2.8), and (2.10) gives ȳ ∈ WMin(Y + R^p_+). To complete the proof, it remains to show that ȳ ∈ Min(Y). Assume, on the contrary, that ȳ ∉ Min(Y).
By definition, we have

y^I_i = min{f_i(x) : x ∈ X} = min{y_i : y ∈ Y}, i = 1, ..., p,

where, for each i, X^i is the optimal solution set for the following problem:

(P_i)   min f_i(x)   subject to x ∈ X.
It is easy to see that the optimal values of the two problems min{y_i : y ∈ Y} and min{y_i : y ∈ Y + R^p_+} are the same. From this fact and the definition of the point ŷ, it follows that

ŷ_i < y_i for every y ∈ Y + R^p_+ and every i = 1, ..., p.
Therefore, if ȳ ∉ Min(Y), then there is y* ∈ Y + R^p_+ such that y* ≤ ȳ and y* ≠ ȳ. Since ȳ ∈ WMin(Y + R^p_+), we always have y*_i = ȳ_i for at least one index i and y*_j < ȳ_j for at least one index j. By the choice of the point ŷ, we have ŷ_i < y*_i for all i. Hence, ȳ could not be the unique point of the segment connecting q and b on the boundary of Y + R^p_+.
This contradiction proves that ȳ must belong to the efficient outcome set Min(Y).
Remark 2.4. Proposition 2.3 can be viewed as an extension of Benson's result [1, Proposition 2.4]. Benson showed that the unique point on the boundary of the set considered in [1] that lies on the corresponding segment belongs to the weakly efficient outcome set WMin(Y); here we prove that ȳ(q) belongs to the efficient outcome set Min(Y).
Remark 2.5. Let q be a given point in Γ. From Propositions 2.2 and 2.3, the line segment connecting q and b contains a unique point ȳ(q) which lies on the boundary of Y + R^p_+. To determine this efficient outcome point ȳ(q), we have to find the unique value t_q, 0 ≤ t_q ≤ 1, such that q + t_q(b - q) belongs to the boundary of Y + R^p_+ (see Figure 2). It means that t_q is the optimal value for the problem

min{t : q + t(b - q) ∈ Y + R^p_+, 0 ≤ t ≤ 1}.
By definition, it is easy to see that t_q is also the optimal value for the following convex programming problem with a linear objective function:

(P(q))   min t   subject to f(x) ≤ q + t(b - q), x ∈ X, 0 ≤ t ≤ 1.

Note that t_q exists because the feasible region of problem (P(q)) is a nonempty, compact convex set. Furthermore, by definition, it is easy to show that if (x_q, t_q) is an optimal solution for problem (P(q)), then x_q is an efficient solution for the convex multiobjective programming problem (CMP), that is, x_q ∈ X_E. For the sake of convenience, x_q is said to be an efficient solution associated with ȳ(q) = q + t_q(b - q), and ȳ(q) is said to be an efficient outcome point generated by q. It is easily seen that, by varying the choice of the point q ∈ Γ, the decision maker can generate multiple points in Min(Y). In theory, if q could vary over all of Γ, we could generate all of the properly efficient points of Y (see [24]) in this way.
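Numerically, the value t_q can also be approximated without a convex solver whenever a membership oracle for Y + R^p_+ is available: bisection on the segment from q to b isolates the boundary crossing. This is a simplified stand-in for the convex program described above, shown on a toy region of our own choosing (not the paper's data) whose boundary is the hyperbola y1*y2 = 1:

```python
def boundary_point(q, b, inside, tol=1e-10):
    """Approximate the unique point where the segment from q (outside
    Y + R^p_+) to b (inside it) crosses the boundary, via bisection."""
    lo, hi = 0.0, 1.0  # invariant: point(lo) is outside, point(hi) inside
    point = lambda t: tuple(qi + t * (bi - qi) for qi, bi in zip(q, b))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inside(point(mid)):
            hi = mid
        else:
            lo = mid
    return point(hi)

# Toy region (assumed): Y + R^2_+ = {y > 0 : y1 * y2 >= 1},
# whose boundary is the hyperbola y1 * y2 = 1.
inside = lambda y: y[0] > 0 and y[1] > 0 and y[0] * y[1] >= 1.0
y_bar = boundary_point((0.5, 0.5), (2.0, 2.0), inside)
print(y_bar)  # ≈ (1.0, 1.0)
```

The bisection converges linearly, whereas solving (P(q)) directly also returns the efficient solution x_q, which the oracle approach cannot provide.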
The following Proposition 2.6 shows that, for each efficient outcome point ȳ(q) generated by a given point q ∈ Γ, we can determine new points which belong to Γ and differ from q. This is accomplished by a technique called reverse polyblock cutting.
A set of the form P = ∪_{v ∈ V} [v, b], where [v, b] = {y ∈ R^p : v ≤ y ≤ b} and V ⊂ [a, b] is a finite set, is called a reverse polyblock in the hyperrectangle [a, b] with vertex set V. A vertex v ∈ V is said to be proper if there is no vertex w ∈ V \ {v} such that w ≤ v. It is clear that a reverse polyblock is completely determined by its proper vertices.
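As a small illustration of the definition (the function name and the sample vertices are ours), the improper vertices of a reverse polyblock can be discarded by a pairwise dominance scan:

```python
def proper_vertices(V):
    """A vertex v of a reverse polyblock is proper if no other vertex w
    satisfies w <= v componentwise: the box [w, b] would then contain
    [v, b], making v redundant."""
    V = [tuple(v) for v in V]
    return [v for v in V
            if not any(w != v and all(wi <= vi for wi, vi in zip(w, v))
                       for w in V)]

# (0.6, 0.7) is improper: it is dominated by the vertex (0.5, 0.5).
print(proper_vertices([(0.2, 0.8), (0.5, 0.5), (0.8, 0.2), (0.6, 0.7)]))
# → [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
```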
Proposition 2.6 (see, e.g., [20]). Let C be a nonempty compact convex set contained in a reverse polyblock P with vertex set V. Let v ∈ V, and let ȳ be the unique point on the boundary of C that belongs to the line segment connecting v and b. Then P' = P \ {y : y < ȳ} is a reverse polyblock with vertex set V' = (V \ {v}) ∪ {v^1, ..., v^p}, where

v^i = v + (ȳ_i - v_i)e^i, i = 1, ..., p,   (2.18)

where, as usual, e^i denotes the i-th unit vector of R^p.
Remark 2.7. By Proposition 2.3, the point ȳ described in Proposition 2.6 belongs to Min(Y). From (2.18), it is easy to see that v^i ≥ v for all i and that, for each i, the vertex v^i belongs to Γ, because v^i ≤ ȳ and ȳ lies on the boundary of Y + R^p_+. The points v^1, ..., v^p are called the new vertices corresponding to v.
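Formula (2.18) is a one-line coordinate replacement. A minimal sketch (the names are ours):

```python
def split_vertex(v, y_bar):
    """New vertices after cutting the open box (v, y_bar) out of the
    reverse polyblock: by formula (2.18), v^i agrees with v except
    that its i-th coordinate is replaced by the i-th coordinate of y_bar."""
    p = len(v)
    return [tuple(y_bar[i] if j == i else v[j] for j in range(p))
            for i in range(p)]

print(split_vertex((0.0, 0.0), (1.0, 2.0)))  # → [(1.0, 0.0), (0.0, 2.0)]
```

Geometrically, removing the open box strictly below ȳ from [v, b] leaves the union of the p boxes [v^i, b], which is exactly the vertex set update of Proposition 2.6.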
2.2. The Algorithm
After the initialization step, the algorithm can execute the iteration step several times to generate a number of efficient outcome points for problem (CMP), depending on the user's requirements. Note that the construction of the box G in Substep 1.1 of Algorithm 1 involves solving p convex programming problems, each of which has a simple linear objective function and the same feasible region. Let K be a positive integer. The algorithm for generating efficient outcome points for problem (CMP), together with the efficient solutions associated with them, is described as follows.
Algorithm 1. Step 1 (Initialization). (1.1) Construct G = [ŷ, b], where ŷ and b are described in Section 2.1. Set V_1 = {ŷ}. (1.2) Set E_Y = ∅ (the set of efficient outcome points), E_X = ∅ (the set of efficient solutions), n_1 = 1 (the number of elements of the set V_1), and k = 1.
Step 2 (Iteration). See Substeps 2.1 through 2.3 below. (2.1) Set V = V_k. (2.2) For each v ∈ V do begin: Find an optimal solution (x_v, t_v) and the optimal value t_v of problem (P(v)). Set ȳ(v) = v + t_v(b - v), E_Y := E_Y ∪ {ȳ(v)}, and E_X := E_X ∪ {x_v} (see Remark 2.5). Determine the new vertices v^1, ..., v^p corresponding to v via formula (2.18). Set V_{k+1} := (V_k \ {v}) ∪ {v^1, ..., v^p} and n_{k+1} := the number of elements of V_{k+1}. end. (2.3) If at least K efficient outcome points have been generated, then terminate the algorithm; else set k := k + 1 and return to Step 2.
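The loop structure of Algorithm 1 can be sketched end to end. The code below substitutes a membership oracle and bisection for the convex subproblems (P(v)) and runs on an assumed toy region whose boundary is the hyperbola y1*y2 = 1, so every generated outcome point should satisfy y1*y2 ≈ 1; it illustrates the control flow only, not the paper's implementation:

```python
def generate_efficient_points(y_hat, b, inside, iterations=2, tol=1e-10):
    """Control-flow sketch of Algorithm 1. Each iteration projects every
    current reverse-polyblock vertex onto the boundary of Y + R^p_+
    along the segment toward the corner b (bisection stands in for the
    convex subproblem), records the efficient outcome point, and
    replaces the vertex by its p children as in formula (2.18)."""
    def boundary(v):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            pt = tuple(a + mid * (c - a) for a, c in zip(v, b))
            if inside(pt):
                hi = mid
            else:
                lo = mid
        return tuple(a + hi * (c - a) for a, c in zip(v, b))

    V, outcomes = [tuple(y_hat)], []
    for _ in range(iterations):
        children = []
        for v in V:
            y_bar = boundary(v)
            outcomes.append(y_bar)
            children.extend(tuple(y_bar[i] if j == i else v[j]
                                  for j in range(len(v)))
                            for i in range(len(v)))
        V = children
    return outcomes

# Assumed toy region: Y + R^2_+ = {y > 0 : y1 * y2 >= 1}.
inside = lambda y: y[0] > 0 and y[1] > 0 and y[0] * y[1] >= 1.0
pts = generate_efficient_points((0.1, 0.1), (3.0, 3.0), inside)
print(pts)  # 1 + 2 = 3 points, all with y1 * y2 ≈ 1
```

With p = 2 the vertex set doubles each iteration, matching the counts 1, 2, 4 seen in the worked example of Section 2.3.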
2.3. Example
To illustrate Algorithm 1, we consider the convex multiobjective programming problem (see Benson [14]) with p = 2, whose two convex objective functions and convex constraints defining X are given in [14].
What follows is a brief summary of the results of executing Algorithm 1 to determine 7 different efficient outcome points for this sample problem, together with the 7 efficient solutions associated with them.
Step 1. By solving problem (P_1), we obtain an optimal solution x^1 and the optimal value y^I_1. Then f(x^1) is an optimal solution for the problem min{y_1 : y ∈ Y}.
By solving problem (P_2), we obtain the optimal value y^I_2 and an optimal solution x^2. Hence, f(x^2) is an optimal solution for the problem min{y_2 : y ∈ Y}. From (2.4), we choose the point ŷ.
Then G = [ŷ, b]. Set V_1 = {ŷ}, E_Y = ∅, E_X = ∅, n_1 = 1, and k = 1.
Step 2 (The first time). Substep 2.1. Set V = V_1. Substep 2.2 (the set V has only one element, (1.0, 2.380437)). Let v = (1.0, 2.380437). By solving problem (P(v)), we obtain an optimal solution and the optimal value t_v; this yields the first efficient outcome point ȳ(v) and the first efficient solution x_v. Using formula (2.18), we obtain the two new vertices corresponding to v. Set V_2 to be the set of these vertices and n_2 = 2. Substep 2.3. Since only one efficient outcome point has been generated, set k = 2 and return to Step 2.
Step 2 (The second time). Substep 2.1. Set V = V_2. Substep 2.2 (the set V has 2 elements). (i) For the first vertex v, problem (P(v)) yields the second efficient outcome point, the associated efficient solution, and two new vertices via (2.18). (ii) For the second vertex v, problem (P(v)) likewise yields the third efficient outcome point, the associated efficient solution, and two new vertices. Set V_3 to be the set of the four new vertices and n_3 = 4. Substep 2.3. Since only three efficient outcome points have been generated, set k = 3 and return to Step 2.
Step 2 (The third time). Substep 2.1. Set V = V_3. Substep 2.2 (the set V has 4 elements). By calculations analogous to those above, we obtain the next four efficient outcome points generated by the four vertices of V_3, respectively, and the next four efficient solutions associated with them, respectively.
Since 7 efficient outcome points have now been generated, the algorithm terminates. Thus, after 3 iterations, we obtain 7 efficient outcome points and the 7 efficient solutions associated with them.
3. Application to Problem (CP)
Consider the convex multiplicative programming problem

(CP)   min ∏_{i=1}^p f_i(x)   subject to x ∈ X

associated with the convex multiobjective programming problem (CMP), where X is a nonempty compact convex set defined by (2.1) and f_i is convex on R^n and positive on X, i = 1, ..., p.
One of the direct reformulations of problem (CP) as an outcome-space problem is given by

(OP)   min φ(y) = ∏_{i=1}^p y_i   subject to y ∈ Y,

where Y is the outcome set of X under f. By assumption, we have Y ⊂ int R^p_+.
The following proposition establishes a link between the global solutions to problem (OP) and the efficient outcome set Min(Y) for problem (CMP).
Proposition 3.1. Any global optimal solution y* to problem (OP) must belong to the efficient outcome set Min(Y).
Proof. The proposition follows directly from the definition.
Remark 3.2. We invoke Propositions 3.1 and 2.2 to deduce that problem (OP) is equivalent to the following problem:

(OP1)   min φ(y) = ∏_{i=1}^p y_i   subject to y ∈ G ∩ (Y + R^p_+).

The relationship between the two problems (CP) and (OP) is described by the following theorem, which was given in [14, Theorem 2.2]. However, we give a full proof here for the reader's convenience.
Theorem 3.3. If y* is a global optimal solution to problem (OP), then any x* ∈ X such that f(x*) ≤ y* is a global optimal solution to problem (CP). Furthermore, the global optimal values of the two problems (OP) and (CP) are equal; that is, ∏_{i=1}^p f_i(x*) = ∏_{i=1}^p y*_i.
Proof. Suppose that y* is a global optimal solution to problem (OP) and that x* ∈ X satisfies

f(x*) ≤ y*.   (3.2)

By Proposition 3.1, y* ∈ Min(Y). Since f(x*) ∈ Y and f(x*) ≤ y*, from (3.2) we have f(x*) = y*. Thus, ∏_{i=1}^p f_i(x*) = φ(y*). Now, we show that x* is a global optimal solution of problem (CP). Indeed, assume on the contrary that there is a point x^0 ∈ X such that ∏_{i=1}^p f_i(x^0) < ∏_{i=1}^p f_i(x*). Combining this fact and (3.2) gives φ(f(x^0)) < φ(y*). Since f(x^0) ∈ Y, this contradicts the hypothesis that y* is a global optimal solution to problem (OP) and proves that x* is a global optimal solution to problem (CP). This completes the proof.
By Theorem 3.3 and Proposition 3.1, solving problem (CP) can be carried out in two stages: (1) find a global optimal solution y* to problem (OP1); then y* is also a global optimal solution to problem (OP); (2) find a global optimal solution x* to problem (CP) as a point x* ∈ X which satisfies f(x*) ≤ y*.
In the next subsection, an outer approximation algorithm is developed for solving problem (OP1).
3.1. Outer Approximation Algorithm for Solving Problem (OP1)
Starting with the reverse polyblock P_1 = G (see Section 2.1), the algorithm iteratively generates a sequence of reverse polyblocks P_k, k = 1, 2, ..., such that

P_1 ⊃ P_2 ⊃ ⋯ ⊃ G ∩ (Y + R^p_+).   (3.4)

For each k, the new reverse polyblock is constructed via the formula

P_{k+1} = P_k \ {y : y < ȳ^k},

where ȳ^k = ȳ(v^k), v^k is a global optimal solution to the problem min{φ(v) : v ∈ V_k}, and ȳ^k is the efficient outcome point generated by v^k.
For each k, let V_k denote the vertex set of the reverse polyblock P_k. The following Proposition 3.4 shows that the function φ achieves its minimum over a reverse polyblock at a proper vertex.
Proposition 3.4. Let φ(y) = ∏_{i=1}^p y_i, and let P ⊂ R^p_+ be a reverse polyblock. Consider the problem of minimizing φ(y) subject to y ∈ P. An optimal solution v* to this problem then exists, where v* is a proper vertex of P.
Proof. By definition, the objective function φ is continuous on R^p and P is compact, so the problem has an optimal solution y^0. For each y^0 ∈ P, there is a proper vertex v of P such that v ≤ y^0. Since φ is nondecreasing in each coordinate on R^p_+, this means φ(v) ≤ φ(y^0). By the definition of the function φ, we therefore have min{φ(y) : y ∈ P} = min{φ(v) : v ∈ V}, where V is the vertex set of P, and the proof is completed.
Remark 3.5. By Proposition 3.4, instead of solving the problem min{φ(y) : y ∈ P_k}, we solve the simpler problem min{φ(v) : v ∈ V_k}. From (3.4), it is clear that for each k, the optimal value β_k = min{φ(v) : v ∈ V_k} is a lower bound for problem (OP1), and {β_k} is a nondecreasing sequence; that is, β_k ≤ β_{k+1} for all k.
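Proposition 3.4 thus reduces the lower-bounding subproblem to a finite scan. A minimal sketch (the names and data are ours):

```python
from math import prod  # Python 3.8+

def lower_bound(vertices):
    """Lower bound beta_k for the outcome-space problem: the minimum of
    phi(y) = y1 * ... * yp over a reverse polyblock is attained at a
    vertex, so scanning the vertex set suffices."""
    best = min(vertices, key=prod)
    return best, prod(best)

v, beta = lower_bound([(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)])
print(v, beta)  # the minimizing vertex and its product, here 0.16
```

Scanning all vertices (rather than only the proper ones) gives the same minimum, since every improper vertex dominates a proper one componentwise.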
Let ε > 0 be a given sufficiently small real number, and let y be a feasible point of problem (OP1). Then φ(y) is an upper bound for problem (OP1). A feasible point y is said to be an ε-optimal solution to problem (OP1) if there is a lower bound β for this problem such that φ(y) - β ≤ ε.
Below, we present an algorithm for finding an ε-optimal solution to problem (OP1).
At the beginning of a typical iteration k of the algorithm, we have available from the previous iteration a nonempty reverse polyblock P_k that contains G ∩ (Y + R^p_+) and an upper bound α_k for problem (OP1). In iteration k, the problem min{φ(v) : v ∈ V_k} is first solved to obtain its optimal solution set. By construction, every optimal vertex belongs to Γ. Take any optimal vertex v^k. The optimal value β_k = φ(v^k) is the current best lower bound. Then, we solve the convex programming problem (P(q)) with q = v^k to obtain the optimal value t_k. By Proposition 2.3, the feasible point ȳ^k = v^k + t_k(b - v^k) is an efficient outcome point generated by v^k (see Remark 2.5). Now, the current best upper bound is α_{k+1} = min{α_k, φ(ȳ^k)}, and the feasible solution y^k satisfying φ(y^k) = α_{k+1} is said to be the current best feasible solution. If α_{k+1} - β_k ≤ ε, then the algorithm terminates, and y^k is an ε-optimal solution for problem (OP1). Otherwise, set P_{k+1} = P_k \ {y : y < ȳ^k}. According to Proposition 2.6, the vertex set of the reverse polyblock P_{k+1} is

V_{k+1} = (V_k \ {v^k}) ∪ {v^{k,1}, ..., v^{k,p}},

where v^{k,1}, ..., v^{k,p} are determined by formula (2.18). Figure 3 illustrates the first two steps of the algorithm in the case p = 2.
By construction, it is easy to see that the sequence {α_k} of upper bounds for problem (OP1) is nonincreasing; that is, α_{k+1} ≤ α_k for all k.
Now, the outer approximation algorithm for solving problem (OP1) is stated as follows.
Algorithm 2 (Outer approximation algorithm). Initialization Step
Construct G = [ŷ, b], where ŷ and b are described in Section 2.1. Choose ε > 0 (ε is a sufficiently small number). Set V_1 = {ŷ} and α_1 = M (M is a sufficiently large number; it can be viewed as an initialization of the upper bound). Set k = 1 and go to Iteration Step k.
Iteration Step [k]. See Substeps k.1 through k.5 below. (k.1) Determine the optimal solution set of the problem min{φ(v) : v ∈ V_k}. Choose an arbitrary optimal vertex v^k and set β_k = φ(v^k) (the current best lower bound). (k.2) Let q = v^k. Find the optimal value t_k of problem (P(q)) and set ȳ^k = v^k + t_k(b - v^k). (k.3) (Update the upper bound.) If φ(ȳ^k) < α_k, then α_{k+1} = φ(ȳ^k) (the current best upper bound) and y^k = ȳ^k (the current best feasible point); otherwise, α_{k+1} = α_k and y^k = y^{k-1}. (k.4) If α_{k+1} - β_k ≤ ε, then terminate the algorithm: y^k is an ε-optimal solution. Else, set P_{k+1} = P_k \ {y : y < ȳ^k} and determine the vertex set V_{k+1} by formula (2.18). (k.5) Set k := k + 1 and go to Iteration Step k.
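The whole bounding scheme of Algorithm 2 can be sketched in a few dozen lines under the same simplifications used earlier: a membership oracle plus bisection replaces the convex subproblem (P(q)), improper vertices are not pruned, and the region is an assumed toy set with boundary y1*y2 = 1 (so the exact optimal value of φ over it is 1). This illustrates the lower/upper bounding loop, not the paper's implementation:

```python
from math import prod  # Python 3.8+

def outer_approximation(y_hat, b, inside, eps=1e-3, max_iter=30, tol=1e-10):
    """Simplified sketch of Algorithm 2.  Lower bound: product at the
    minimizing vertex of the current reverse polyblock; upper bound:
    best efficient outcome point found so far; the minimizing vertex
    is then split into its p children via formula (2.18)."""
    def boundary(v):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if inside(tuple(a + mid * (c - a) for a, c in zip(v, b))):
                hi = mid
            else:
                lo = mid
        return tuple(a + hi * (c - a) for a, c in zip(v, b))

    V = [tuple(y_hat)]
    ub, best = float("inf"), None
    lb = prod(V[0])
    for _ in range(max_iter):
        v = min(V, key=prod)            # scan the vertex set (Prop. 3.4)
        lb = prod(v)                    # current lower bound beta_k
        if ub - lb <= eps:
            break                       # eps-optimal: terminate
        y_bar = boundary(v)             # efficient outcome point from v
        if prod(y_bar) < ub:
            ub, best = prod(y_bar), y_bar
        V.remove(v)                     # split v into its p children (2.18)
        V.extend(tuple(y_bar[i] if j == i else v[j] for j in range(len(v)))
                 for i in range(len(v)))
    return best, lb, ub

# Assumed toy region: Y + R^2_+ = {y > 0 : y1 * y2 >= 1}; the true
# minimum of phi over it is 1, attained on the hyperbola.
inside = lambda y: y[0] > 0 and y[1] > 0 and y[0] * y[1] >= 1.0
best, lb, ub = outer_approximation((0.1, 0.1), (3.0, 3.0), inside)
print(best, lb, ub)  # ub ≈ 1.0; lb climbs toward it from below
```

The gap ub - lb shrinks as the removed boxes exhaust the region between the polyblock vertices and the boundary, which is exactly the volume argument used in the proof of Theorem 3.6 below.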
Theorem 3.6. The algorithm terminates after finitely many iterations and yields an ε-optimal solution to problem (OP1).
Proof. Let ε > 0 be a given positive number. Since the function φ is uniformly continuous on the compact set G, we can choose a small enough number δ > 0 such that if y, y' ∈ G and ‖y - y'‖ ≤ δ, then |φ(y) - φ(y')| ≤ ε. Then, to prove the termination of the algorithm, we need only show that ‖ȳ^k - v^k‖ → 0 as k → ∞.
Observe first that the positive series formed by the volumes of the removed boxes is convergent, since the open boxes (v^k, ȳ^k) = {y : v^k < y < ȳ^k} are disjoint and all of them are contained in the closure of G. It follows that

vol(v^k, ȳ^k) = ∏_{i=1}^p (ȳ^k_i - v^k_i) → 0 as k → ∞.   (3.9)
Now, by the construction of ȳ^k and v^k, we have

ȳ^k - v^k = t_k(b - v^k)   (3.10)

for k = 1, 2, .... Note that all of the points v^k are contained in the set G. Furthermore, by the choice of ŷ (see (2.4)), the closure of G is a compact subset of the interior of the cone R^p_+. This observation implies that each component of b - v^k has a lower bound away from zero. Combining this fact, (3.9), and (3.10) implies that t_k → 0. Also, the observation implies that {b - v^k} is bounded. Finally, by (3.10), we have ‖ȳ^k - v^k‖ → 0.
3.2. Computational Results
First, Algorithm 2 was applied to solve the test example given by Benson in [14] (see the example in Section 2.3).
The calculation process for solving this example is described as follows.
Initialization. As in the example in Section 2.3, we have G = [ŷ, b] with ŷ = (1.0, 2.380437). We choose ε, set V_1 = {ŷ} and α_1 = M (the initial upper bound), and go to Iteration Step k = 1.
Iteration (k = 1). Step 1. We have V_1 = {ŷ}. Choose v^1 = ŷ and set β_1 = φ(v^1) (the current best lower bound). Step 2. Let q = v^1. Solving problem (P(q)), we obtain an optimal solution and the optimal value t_1; then ȳ^1 = v^1 + t_1(b - v^1). Step 3. Since φ(ȳ^1) < α_1, we have α_2 = φ(ȳ^1) (the current best upper bound) and y^1 = ȳ^1 (the current best feasible point). Step 4. Since α_2 - β_1 > ε, we set P_2 = P_1 \ {y : y < ȳ^1} and determine the vertex set V_2 by formula (2.18). Step 5. Set k = 2 and go to Iteration Step 2.
Iteration (k = 2). Step 1. We have V_2. Choose an optimal vertex v^2 and set β_2 = φ(v^2) (the current best lower bound). Step 2. Let q = v^2. The convex program (P(q)) has an optimal solution and the optimal value t_2; then ȳ^2 = v^2 + t_2(b - v^2). Step 3. Since φ(ȳ^2) < α_2, we set α_3 = φ(ȳ^2) (the current best upper bound) and y^2 = ȳ^2 (the current best feasible point). Step 4. Now, α_3 - β_2 > ε. Therefore, P_3 = P_2 \ {y : y < ȳ^2}, and V_3 is determined by formula (2.18). Step 5. Set k = 3 and go to Iteration Step 3.
Iteration (k = 3). Step 1. We have V_3; choose an optimal vertex v^3 and set β_3 = φ(v^3). Step 2. Let q = v^3. Problem (P(q)) has an optimal solution and the optimal value t_3; then ȳ^3 = v^3 + t_3(b - v^3). Step 3. Since φ(ȳ^3) < α_3, the current best upper bound is α_4 = φ(ȳ^3) and the current best feasible point is y^3 = ȳ^3. Step 4. Since α_4 - β_3 > ε, we set P_4 = P_3 \ {y : y < ȳ^3} and determine the set V_4. Step 5. Set k = 4 and go to Iteration Step 4.
After 42 iterations, the algorithm terminates with an ε-optimal solution y* and the corresponding lower bound. The optimal solutions for problem (OP) and for problem (CP) are then given by y* and x*, respectively. The approximate optimal value of problem (3.12) is 9.770252094.
Below, we present the results of computational experiments for two types of problems, using the tolerance ε in all numerical experiments. We tested our algorithm on each problem type with random data several times and report the average results. The numerical results are summarized in Tables 1 and 2.


Type 1. where , , , is an matrix, and .
Type 2. where