Abstract

One of the most important optimality conditions to aid in solving a vector optimization problem is the first-order necessary optimality condition that generalizes the Karush-Kuhn-Tucker condition. However, to obtain the sufficient optimality conditions, it is necessary to impose additional assumptions on the objective functions and on the constraint set. The present work is concerned with the constrained vector quadratic fractional optimization problem. It shows that sufficient Pareto optimality conditions and the main duality theorems can be established without the assumption of generalized convexity in the objective functions, by considering some assumptions on a linear combination of Hessian matrices instead. The main aspect of this contribution is the development of Pareto optimality conditions based on a similar second-order sufficient condition for problems with convex constraints, without convexity assumptions on the objective functions. These conditions might be useful to determine termination criteria in the development of algorithms.

1. Introduction

There are many contributions, concepts, and definitions that characterize and give the Pareto optimality conditions for solutions of a vector optimization problem (see, for instance, [1, 2]). One of the most important conditions is the first-order necessary optimality condition that generalizes the Karush-Kuhn-Tucker (KKT) condition. However, to obtain sufficient optimality conditions, it is necessary to impose additional assumptions (such as convexity and its generalizations) on the objective functions and on the constraint set.

In this paper, we deal with a particular case of the vector optimization problem (VOP), where each objective function consists of a ratio of two quadratic functions. Without generalized convexity assumptions on the objective functions, but by imposing some additional assumptions on a linear combination of Hessian matrices, Pareto optimality conditions are obtained and duality theorems are established. Let us consider the following vector quadratic fractional optimization problem: where is an open set, , , , and , , are continuously differentiable real-valued functions defined on . In addition, we assume that , , , are quadratic functions and for and . We denote by the feasible set of elements satisfying . We say that is a feasible point if . The value is the result of the th objective function if the decision maker chooses the action .

Fractional optimization problems arise frequently in decision-making applications, including management science, portfolio selection, cutting stock, and game theory, in the optimization of ratios such as performance/cost, profit/investment, or cost/time.

There are many contributions dealing with the scalar (single-objective) fractional optimization problem (FP) and the vector fractional optimization problem (VFP). In most of them, using convexity or generalized convexity, optimality conditions in the KKT sense and the main duality theorems for optimal points are obtained. With a parametric approach, which transforms the original problem into a simpler associated problem, Dinkelbach [3], Jagannathan [4], and Antczak [5] established optimality conditions, presented algorithms, and applied their approaches to an example (FP) consisting of quadratic functions. Using some known generalized convexity notions, Antczak [5], Khan and Hanson [6], Reddy and Mukherjee [7], Jeyakumar [8], and Liang et al. [9] established optimality conditions and theorems that relate the primal-dual pair of problem (FP). In Craven [10] and Weir [11], other results for the scalar optimization problem (FP) can be found.

Further, Liang et al. [12] extended their approach to the vector optimization case (VFP), considering duals of the types of Mond and Weir [13], Schaible [14], and Bector [15]. Considering the parametric approach of Dinkelbach [3], Jagannathan [4], and Bector et al. [16] and two classes of generalized convexity, Osuna-Gómez et al. [17] established weak Pareto optimality conditions and the main duality theorems for the differentiable vector optimization case (VFP). Santos et al. [18] extended these results to the more general nondifferentiable case (VFP). Jeyakumar and Mond [19] used generalized convexity to study the problem (VFP).

Few studies are found involving quadratic functions in both the numerator and the denominator of the ratio objective function. Most of them involve a mixture of linear and quadratic functions. The approaches most similar to the scalar quadratic fractional optimization problem (QFP) were considered in [20–24]. On the other hand, Benson [25] considered a pure (QFP) consisting of convex functions, for which some theoretical properties and optimality conditions are developed, and an algorithm and its convergence properties are presented.

The closest approaches to the vector optimization case were considered in [26–33]. Using an iterative computational test, Beato et al. [27, 28] characterized the Pareto optimal points for the problem , consisting of linear and quadratic functions, and some theoretical results were obtained by using the function linearization technique of Bector et al. [16]. Arévalo and Zapata [26], Konno and Inori [29], and Rhode and Weber [33] analyzed the portfolio selection problem. Kornbluth and Steuer [32] used an adapted Simplex method in the problem (VFP) consisting of linear functions. Korhonen and Yu [30, 31] proposed an iterative computational method for solving the problem , consisting of linear and quadratic functions, based on search directions and weighted sums.

The approach taken in this work is different from the previous ones. The main aspect of this contribution is the development of Pareto optimality conditions for a particular vector optimization problem based on a similar second-order sufficient condition for Pareto optimality for problems with convex constraints without the hypothesis of convexity on the objective functions. These conditions might be useful to determine termination criteria in the development of algorithms, and new extensions can be established from these, where more general vector optimization problems in which algorithms are based on quadratic approximations are used locally.

This paper is organized as follows. We start by defining some notations and basic properties in Section 2. In Section 3, the sufficient Pareto optimality conditions are established. In Section 4, the relationship among the associated problems is presented and duality theorems are established. Finally, comments and concluding remarks are presented in Section 5.

2. Preliminaries

Let denote the nonnegative real numbers and let denote the transpose of the vector . Furthermore, we will adopt the following conventions for inequalities between vectors. If and , then: if and only if , ; if and only if , ; if and only if , ; if and only if and . Similarly, we consider the equivalent conventions for the inequalities , , and .

Several optimality notions for the problem , generically referred to as Pareto optimality [34], can be found in the literature; two of them are defined as follows.

Definition 1. A feasible point is said to be a Pareto optimal solution of if there does not exist another such that .

Definition 2. A feasible point is said to be a weakly Pareto optimal solution of if there does not exist another such that .
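For intuition, the two componentwise dominance relations underlying Definitions 1 and 2 can be sketched in code. This is a minimal illustration for a minimization problem; the function names are ours, not the paper's.

```python
def dominates(y, z):
    """y Pareto-dominates z (minimization): y is componentwise <= z
    and strictly smaller in at least one component (cf. Definition 1)."""
    return all(yi <= zi for yi, zi in zip(y, z)) and \
           any(yi < zi for yi, zi in zip(y, z))


def strictly_dominates(y, z):
    """y strictly dominates z: y is strictly smaller in every component
    (the relation behind weak Pareto optimality, cf. Definition 2)."""
    return all(yi < zi for yi, zi in zip(y, z))
```

A feasible point is then Pareto optimal when no other feasible point produces an objective vector dominating its own, and weakly Pareto optimal when no other feasible point produces a strictly dominating objective vector.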

Hypotheses of convexity or generalized convexity on the objective functions will be avoided in this work, but we will use such hypotheses on the constraint set. We recall the definition of convexity, where denotes the gradient of the function at the point .

Definition 3. Let be a function defined on an open convex set and differentiable at . is called convex at if, for all , . When is convex on the set , we simply say that is convex.

Maeda [35] used the generalized Guignard constraint qualification (GGCQ) [36] to derive the following necessary Pareto optimality conditions for the problem (VOP) in the KKT sense. Assuming differentiability of the objective and constraint functions, Maeda guarantees the existence of Lagrange multipliers, all strictly positive, associated with the objective functions.

Lemma 4 (Maeda [35]). Let be a Pareto optimal solution of . Suppose that (GGCQ) holds at ; then there exist vectors , such that

For each and , we consider the objective functions defined as and , where , is symmetric, is symmetric and positive semidefinite, and , and , , with , where is the solution of the system ; that is, is the point at which the function attains its minimum, and this ensures that , . We do not consider the cases where has no solution.
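As a concrete sketch, each objective above is a ratio of two quadratics. The small pure-Python example below evaluates one such ratio; the specific matrices, vectors, and scalars are made up for illustration only — the paper requires the denominator's Hessian to be symmetric positive semidefinite and the denominator to be positive on the feasible set.

```python
def quad(M, v, c, x):
    """Evaluate the quadratic 0.5 * x^T M x + v^T x + c."""
    n = len(x)
    qform = sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.5 * qform + sum(v[i] * x[i] for i in range(n)) + c


def ratio_objective(A, a, alpha, B, b, beta, x):
    """One objective of the problem: a ratio of two quadratics."""
    den = quad(B, b, beta, x)
    assert den > 0, "the denominator must stay positive on the feasible set"
    return quad(A, a, alpha, x) / den


# Illustrative data (not from the paper): both quadratics share the
# Hessian 2*I and the same linear/constant parts, so the ratio equals 1.
A = B = [[2.0, 0.0], [0.0, 2.0]]
value = ratio_objective(A, [0.0, 0.0], 1.0, B, [0.0, 0.0], 1.0, [1.0, 0.0])
```

Here `value` is 2.0 / 2.0 = 1.0; changing only the numerator's data makes the ratio vary while the positivity check on the denominator is preserved.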

3. Sufficient Optimality Conditions

Without assumptions of generalized convexity, but imposing some additional assumptions on a linear combination of the Hessian matrices of the objective functions and , , we provide in the next theorem a sufficient condition that guarantees that a feasible point of is a Pareto optimal point. Similar to a second-order sufficient condition for Pareto optimality, this condition exploits the intrinsic characteristics of the problem .

In contrast to the objective functions, we assume that each is convex. Also, given , for each , we define the scalar functions and by

Theorem 5. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that If, for any , we obtain then is a Pareto optimal solution for .

Proof. Given , we obtain for each Thus, each function satisfies Suppose that is not a Pareto optimal solution of . Then there exists another point such that Since , , from (8) we obtain From (9), we have and we obtain inequalities with at least one strict inequality. Multiplying the inequalities above by their respective , , and summing all the products, we obtain Then, we have Substituting (3) into (14), we get Using (6) and (15), we obtain That is, On the other hand, by convexity of , we have, for each , Since , , we have However, since is a feasible point, condition (4) and , , imply that We conclude that which contradicts (17). Therefore, is a Pareto optimal solution for .

The expression in Theorem 5 is manipulated in a similar manner in [6, 7, 9, 12, 19]; however, some generalized convexity on the functions and is imposed there. In most of them, for each and , the hypotheses are that , , and , satisfy some generalized convexity. This is not the purpose of this work, but the constraint functions can be assumed to lie in a more general class of convex functions; for example, the generalized convexity of Liang et al. [9] can be used.

In the following, the Pareto optimal solution set is denoted by .

Corollary 6. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that (3), (4), and (5) are valid. If are positive semidefinite matrices for each , then .

Proof. By hypothesis, given and , we obtain Therefore, inequality (6) is valid and the result follows from Theorem 5.

To ensure that inequality (6) is valid, we start by exploring the features of the Hessian matrices of the objective functions of .

Negative values can occur in each term of the sum , which depends on each matrix , , and the vector . Let us check new conditions under which (6) is satisfied; that is, we want to ensure the result of Theorem 5 by analysing the function

Note that is a quadratic function without the linear part; thus in we obtain on if and only if ; that is, we can use classical results on quadratic optimization to check whether . The next corollary follows immediately from Theorem 5.

Corollary 7. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that (3), (4), and (5) are valid. If , then .

Using the previous results to check whether a feasible point is a Pareto optimal solution of , we propose the following computational test method.

Pareto Optimality Test

Step 1. Given , find vectors and such that (3) and (4) are valid. If such vectors and do not exist, then .

Step 2. Otherwise, solve . If , we say that has passed the Pareto optimality test and .

The Pareto optimality test starts with a feasible point; then it seeks to solve a system of linear equations containing unknowns, and , the inequalities , , and the two equalities (3) and (4). If this system has no solution, then the point does not satisfy the first-order necessary condition for Pareto optimality, so the method terminates, concluding that . Otherwise, in Step 2, a quadratic optimization problem on should be solved. If the minimum of the quadratic problem is nonnegative, then the procedure ends, concluding that . Otherwise, we say that has not passed the Pareto optimality test. Its complexity lies in solving a system of linear inequalities plus a quadratic optimization problem.
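In the unconstrained case, the check in Step 2 reduces to deciding whether the weighted combination of Hessians is positive semidefinite, since a homogeneous quadratic has a nonnegative minimum exactly when its matrix is positive semidefinite. The 2x2 sketch below is our own illustration, using a principal-minor test rather than an actual quadratic solver.

```python
def weighted_hessian(taus, hessians):
    """Linear combination sum_i tau_i * H_i of symmetric 2x2 matrices."""
    return [[sum(t * H[i][j] for t, H in zip(taus, hessians))
             for j in range(2)] for i in range(2)]


def is_psd_2x2(H):
    """A symmetric 2x2 matrix is positive semidefinite iff both diagonal
    entries and the determinant are nonnegative."""
    return (H[0][0] >= 0 and H[1][1] >= 0
            and H[0][0] * H[1][1] - H[0][1] * H[1][0] >= 0)
```

If the combined matrix passes this test, the homogeneous quadratic of Step 2 is bounded below by zero, so the candidate point passes the Pareto optimality test in this simplified setting.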

The next results, which address a linear combination of the Hessian matrices, can be used to develop a computational search method.

Looking at the previous Pareto optimality test, if the fixed point is treated as a variable , then the linear system in Step 1 becomes a nonlinear system in the variables , , , and the quadratic optimization problem in Step 2 becomes one of the type . This raises considerable difficulties. In order to reduce them, we further explore the characteristics of the matrix One possibility is to search for points such that becomes positive semidefinite. In this case depends only on .

Consider a fixed point ; the next theorem takes advantage of the symmetry and diagonalizations of the matrices and , , to give sufficient Pareto optimality conditions for a feasible point of . Consider the usual inner product in .

Theorem 8. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that (3), (4), and (5) are valid. Consider also, for each and , the following functions: where and are the columns of orthogonal matrices and , constructed from the normalized eigenvectors of the matrices and , respectively. If for all the following inequality is valid, where and are the eigenvalues of the matrices and associated with the eigenvectors and , respectively, then .

Proof. The matrices and , , are diagonalizable and can be rewritten as and , where and are diagonal matrices, with their diagonal formed by the eigenvalues and , , of the matrices and , respectively. Thus, we obtain Since, for all , we have , for all and , we conclude that . Therefore, inequality (6) is valid and the result follows from Theorem 5.
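The diagonalization used in the proof above can be illustrated with a closed-form symmetric 2x2 eigendecomposition. This is a stand-in, in our own notation, for a general routine such as `numpy.linalg.eigh`; the matrix below is a made-up example.

```python
import math

def eigh_2x2(M):
    """Eigenvalues and paired orthonormal eigenvectors of a symmetric
    2x2 matrix M, so that M = Q diag(lam) Q^T with Q = [v1 | v2]."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    if b == 0:  # already diagonal: standard basis vectors are eigenvectors
        return (a, c), ([1.0, 0.0], [0.0, 1.0])
    mean = (a + c) / 2.0
    radius = math.hypot((a - c) / 2.0, b)
    lam1, lam2 = mean - radius, mean + radius
    # (M - lam1*I) v = 0 is solved by v = (lam1 - c, b) when b != 0
    norm = math.hypot(lam1 - c, b)
    v1 = [(lam1 - c) / norm, b / norm]
    v2 = [-v1[1], v1[0]]  # orthogonal complement is the other eigenspace
    return (lam1, lam2), (v1, v2)
```

For M = [[2, 1], [1, 2]] this yields eigenvalues 1 and 3, and the two normalized eigenvectors form the orthogonal matrix whose columns play the role of the vectors in Theorem 8.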

Theorem 8 is not simple to use since (26) depends on all points of the feasible set; that is, it depends on the functions , , , , and . However, even if occurs for some and , inequality (6) can still be satisfied. In order to obtain (26), we present the next corollary, which follows immediately from the previous theorem.

Corollary 9. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that (3), (4), and (5) are valid. Consider also, for each and , where and are the columns of orthogonal matrices and , constructed from the normalized eigenvectors of the matrices , and , are the eigenvalues of the matrices and associated with the eigenvectors and , respectively. If, for all , one obtains for each and , then .

Proof. According to Theorem 8, it is enough to show that, for every feasible point and for all and , is valid. Given and a pair , we obtain Therefore, the result follows from Theorem 8.

From Corollary 9, if each quadratic function , , is nonnegative in the feasible set, then a feasible point satisfying (3), (4), and (5) is a Pareto optimal solution of .

Let . Then , and the nonnegativity of the quadratic depends on each matrix and each vector , where and . For example, the unconstrained requires that each matrix be positive semidefinite and that , where is a solution of the system .

Corollary 10. Let be a feasible point of . Suppose that the constraint function is convex for each and there exist vectors , , such that (3), (4), and (5) are valid. If, for each pair , the matrix is positive semidefinite and (see (28)), then .

Proof. By hypothesis, for all , we have for each pair . Therefore, the result follows from Corollary 9.

Given a pair , writing each entry of the matrix and each entry of the vector according to the entries of the eigenvectors and , where , we obtain, for each pair ,

We can draw some conclusions from (30). For example, for a fixed pair , the vector is a linear combination of the eigenvectors and . If , , or , then is a symmetric matrix. Moreover, if and there exists a pair such that , then the matrix . In this case, if there exists such that , is undefined. However, when (26) is required, it is possible to show that this situation cannot occur.

The results of Theorems 5 and 8 and their corollaries can be used to develop a method of searching for Pareto optimal solutions of , and they might be useful for determining termination criteria in the development of algorithms.

4. Duality

Matrix (24) defines a specific function, and by adding some assumptions about it, we obtain new results, such as a relationship between the problem and an associated scalar problem, as well as the main duality theorems.

In the scalar optimization case, Dinkelbach [3] and Jagannathan [4] used a parametric approach that transforms the fractional optimization problem into a new scalar optimization problem. Similarly, we consider the following parameterized problem associated with the problem : where , , , , and , , are defined in , and .

Using assumptions of generalized convexity, Osuna-Gómez et al. [17] presented the problem and obtained necessary and sufficient conditions for weak Pareto optimality and the main duality theorems. The results presented in [3, 4, 17] considered each objective function as , , and they studied the properties of the parameter . Following the ideas presented by Osuna-Gómez et al. [17], we obtain new results by considering directly , , where . However, by imposing hypotheses on the linear combination of matrices , and , we consider Pareto optimal solutions rather than weakly Pareto optimal solutions.

To characterize the solutions of problems of type (VOP), Geoffrion [37] used the solutions of associated scalar problems. Similarly, we consider the following weighted scalar problem associated with the problem (VQFP): where , , , , and , , are defined in , and , .
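The weighted scalarization combines the vector objective into a single scalar one. A minimal sketch of such a scalarization follows; the two toy ratio-style objectives and the weights are illustrative only, not taken from the paper.

```python
def weighted_sum(objectives, weights, x):
    """Scalarize a vector objective: sum_i w_i * f_i(x), with w_i > 0."""
    assert all(w > 0 for w in weights), "weights must be strictly positive"
    return sum(w * f(x) for w, f in zip(weights, objectives))


# Two toy ratio objectives on R^2 (made up for illustration); both have
# strictly positive quadratic denominators, as the problem requires.
f1 = lambda x: (x[0] ** 2 + 1.0) / (x[1] ** 2 + 1.0)
f2 = lambda x: (x[1] ** 2 + 1.0) / (x[0] ** 2 + 1.0)
value = weighted_sum([f1, f2], [0.5, 0.5], [0.0, 0.0])
```

In the spirit of Theorem 12 below, a minimizer of such a weighted scalar problem with strictly positive weights is Pareto optimal for the original vector problem.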

4.1. The Relationship between the Associated Problems

The next theorem and its proof are similar to Lemma 1.1 from [17], when Pareto optimal solutions (not necessarily weak) are considered.

Theorem 11. if and only if .

Proof. See Lemma 1.1 in [17], considering “” instead of “.”

In Section 3, we define the matrix , where and , . Let us now define the set , the function given by and, for each , the functions given by Then, we have , where , , and we can establish some relations among the associated problems , , and .

Theorem 12. If is an optimal solution of the weighted scalar problem , then .

Proof. Suppose that ; then there exists another point such that This contradicts the minimality of in .

Lemma 13. Let . Suppose that the constraint qualification (GGCQ) is satisfied at ; then there exist vectors and such that

Proof. Let , , , and , . Then and if , by Lemma 4, there exist and such that is a critical point, in the KKT sense, of the problem . That is, From (35), there exist , , , and such that Therefore, the result is valid.

Lemma 14. Let . If there exists , such that the matrix is positive semidefinite, then the objective function of is convex.

Proof. Given , we have, for each , Hence, for each objective function of , we have If there exists such that the matrix is positive semidefinite, then Therefore, the objective function of is convex.
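Lemma 14 rests on the standard fact that a quadratic q(x) = 0.5 x^T H x is convex exactly when H is positive semidefinite, which is equivalent to the gradient inequality q(y) >= q(x) + grad q(x)^T (y - x) holding for all x, y. The check below illustrates this on 2x2 examples of our own choosing.

```python
def q(H, x):
    """q(x) = 0.5 * x^T H x for a symmetric 2x2 matrix H."""
    return 0.5 * sum(x[i] * H[i][j] * x[j] for i in range(2) for j in range(2))


def grad_q(H, x):
    """Gradient of q at x, namely H x (H symmetric)."""
    return [sum(H[i][j] * x[j] for j in range(2)) for i in range(2)]


def convexity_gap(H, x, y):
    """q(y) - q(x) - grad_q(x)^T (y - x): nonnegative for all x, y
    if and only if H is positive semidefinite."""
    g = grad_q(H, x)
    return q(H, y) - q(H, x) - sum(g[i] * (y[i] - x[i]) for i in range(2))
```

With the positive semidefinite matrix 2*I the gap is nonnegative at every pair of points, while an indefinite matrix such as diag(1, -1) produces a negative gap, so the associated quadratic is not convex.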
Note that the hypothesis of semidefiniteness on the matrix or on the matrices , , , is pointwise; that is, it is required only at the point under consideration. However, in the next example, we describe a situation in which, for all and , we have , for all , and then , for all .

Example. Consider the problem , where and for all For all these functions, we obtain for all Therefore, in this example, each point satisfying (3), (4), and (5) is Pareto optimal. For example, for and , we have that is a Pareto optimal solution. Likewise, for and , we have that is a Pareto optimal solution.

Theorem 11 shows an equivalence between the associated problems and . The next theorem shows a relation between the problems and ; it thus provides a converse to Theorem 12.

Theorem 15. Let . Suppose that the constraint qualification (GGCQ) is satisfied at and that the constraint function is convex for each . Then there exists such that, if the matrix is positive semidefinite, then is an optimal solution of the weighted scalar problem .

Proof. If and satisfies (GGCQ), by Lemma 13, there exist and , such that Therefore, is a critical point of the weighted scalar problem , and since is positive semidefinite, by Lemma 14, the objective function of is convex. Since for each the constraint function is convex, it follows that is an optimal solution for .

4.2. Duality Theorems

For a given mathematical optimization problem, there are many types of duality. Two well-known duals are the Wolfe dual [38] and the Mond-Weir dual [13]. In this work, we consider the primal problem and discuss the Mond-Weir dual problem, but we use the associated problem to generate the constraint set of the dual problem. Let us consider the following vector quadratic fractional dual optimization problem : where and , , are the same quadratic functions defined in , and we denote its feasible set by .

Theorem 16 (weak duality). Let and . If is positive semidefinite and the constraint function is convex for each , then

Proof. If there are and such that , then Since , then , and implies that Since is positive semidefinite and each constraint function is convex, we can use Lemma 14 to conclude that the objective function of is convex, and which is a contradiction.

Theorem 17 (strong duality). Let . Suppose that (GGCQ) holds at ; then there exists such that is feasible for and the values of the objective function of and are equal. Moreover, if is positive semidefinite and the constraint function is convex for each , then .

Proof. If , by Lemma 13, there are and such that satisfies Then and the values of the objective functions of and are equal. Moreover, if is positive semidefinite, each constraint function is convex, and , then there exists another point such that contradicting weak duality (Theorem 16).

Theorem 18 (converse duality). Let and be a feasible point of the primal problem . If is positive semidefinite and the constraint function is convex for each , then .

Proof. If and , then , , and Therefore, is a critical point for the weighted scalar problem . Since is positive semidefinite, by Lemma 14, the objective function of is convex. Moreover, if each constraint function is convex, , then is an optimal solution of . Thus, by Theorem 12, we have .

We can obtain a second type of converse duality theorem by requiring more of the matrix function . Specifically, there must exist vectors such that is positive definite; that is, , , and .

Theorem 19 (strict converse duality). Let and such that If the matrix is positive definite and the constraint function is convex for each , then .

Proof. Suppose . Since and , then and . If each constraint function is convex, , we obtain Using the proof of Theorem 5, given and , for all , we have Therefore, for , we obtain and since is positive definite and , then by (51) which is a contradiction.

5. Conclusions

The main contribution of this work is the development of Pareto optimality conditions for a particular vector optimization problem, in which each objective function consists of a ratio of two quadratic functions and convexity is assumed only on the constraint set. We took advantage of the diagonalization of the Hessian matrices. We showed the relationship between this particular problem and two problems associated with it, and we used some assumptions on a linear combination of Hessian matrices to prove the main duality theorems. For this particular problem, the results presented in this work might be useful for determining termination criteria in the development of algorithms, and new extensions can be established for more general vector optimization problems in which algorithms based on quadratic approximations are used locally. In future work, we plan to develop algorithms using the concepts presented here.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are indebted to the anonymous reviewers for their helpful comments. W. A. Oliveira was supported by Coordination for the Improvement of Higher Level Personnel of Brazil (CAPES). A. Beato-Moreno was partially supported by Spain’s Ministry of Science and Technology under Grant MTM2007-63432. A. C. Moretti and L. L. Salles Neto were partially supported by National Council for Scientific and Technological Development of Brazil (CNPq) and Foundation for Research Support of the State of São Paulo (FAPESP).