International Scholarly Research Notices


Research Article | Open Access

Volume 2011 | Article ID 491941 | https://doi.org/10.5402/2011/491941

Anurag Jayswal, "Optimality and Duality for Nondifferentiable Minimax Fractional Programming with Generalized Convexity", International Scholarly Research Notices, vol. 2011, Article ID 491941, 19 pages, 2011. https://doi.org/10.5402/2011/491941

Optimality and Duality for Nondifferentiable Minimax Fractional Programming with Generalized Convexity

Academic Editor: H. C. So
Received: 15 Mar 2011
Accepted: 02 May 2011
Published: 28 Jun 2011

Abstract

We establish several sufficient optimality conditions for a class of nondifferentiable minimax fractional programming problems from the viewpoint of generalized convexity. These optimality criteria are then utilized as a basis for constructing dual models, and certain duality results are derived in the framework of generalized convex functions. Our results extend and unify some known results on minimax fractional programming problems.

1. Introduction

Several authors have been interested in optimality conditions and duality results for minimax programming problems. Necessary and sufficient conditions for generalized minimax programming were first developed by Schmitendorf [1]. Tanimoto [2] defined a dual problem and derived duality theorems for convex minimax programming problems using Schmitendorf's results.

Yadav and Mukherjee [3] also employed the optimality conditions of Schmitendorf [1] to construct two dual problems and derived duality theorems for differentiable fractional minimax programming problems. Chandra and Kumar [4] pointed out that the formulation of Yadav and Mukherjee [3] has some omissions and inconsistencies, and they constructed two new dual problems and proved duality theorems for differentiable fractional minimax programming. Liu et al. [5, 6], Liang and Shi [7], and Yang and Hou [8] paid considerable attention to minimax fractional programming problems and established sufficient optimality conditions and duality results.

Lai et al. [9] derived necessary and sufficient conditions for nondifferentiable minimax fractional problems with generalized convexity, applied these optimality conditions to construct a parametric dual model, and discussed duality theorems. Lai and Lee [10] obtained duality theorems for two parameter-free dual models of a nondifferentiable minimax fractional programming problem involving generalized convexity assumptions. Ahmad and Husain [11, 12] established sufficient optimality conditions and duality theorems for nondifferentiable minimax fractional programming problems under $(F,\alpha,\rho,d)$-convexity assumptions, thus extending the results of Lai et al. [9] and Lai and Lee [10]. Jayswal [13] discussed optimality conditions and duality results for nondifferentiable minimax fractional programming under $\alpha$-univexity. Yuan et al. [14] introduced the concept of generalized $(C,\alpha,\rho,d)$-convexity and focused their study on a nondifferentiable minimax fractional programming problem. Recently, Jayswal and Kumar [15] established sufficient optimality conditions and duality theorems for a class of nondifferentiable minimax fractional programming problems involving $(C,\alpha,\rho,d)$-convexity.

In the present paper, we discuss sufficient optimality conditions for a nondifferentiable minimax fractional programming problem from the viewpoint of generalized convexity. Subsequently, we apply the optimality conditions to formulate a dual problem, and we prove weak, strong, and strict converse duality theorems involving generalized convexity.

The paper is organized as follows. In Section 2, we present a few definitions and notations and recall a set of necessary optimality conditions for a nondifferentiable minimax fractional programming problem which will be needed in the sequel. In Section 3, we discuss sufficient optimality conditions under somewhat limited structures of generalized convexity. Furthermore, a dual problem is formulated and duality results are presented in Section 4. Finally, in Section 5, we summarize our main results and also point out some additional research opportunities arising from certain modifications of the principal problem model considered in this paper.

2. Notations and Preliminaries

Let $R^n$ denote the $n$-dimensional Euclidean space and let $R^n_+$ be its nonnegative orthant.

In this paper, we consider the following nondifferentiable minimax fractional programming problem:
\[
\min_{x\in R^n}\ \sup_{y\in Y}\ \frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}\quad\text{subject to}\quad g(x)\le 0,\tag{P}
\]
where $f,h\colon R^n\times R^m\to R$ and $g\colon R^n\to R^p$ are continuously differentiable functions, $Y$ is a compact subset of $R^m$, and $A$ and $B$ are $n\times n$ positive semidefinite matrices. The problem (P) is a nondifferentiable programming problem if either $A$ or $B$ is nonzero. If $A$ and $B$ are null matrices, then (P) reduces to the usual minimax fractional programming problem studied by Liang and Shi [7] and Yang and Hou [8].
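To make the structure of (P) concrete, the following sketch (ours, not part of the paper; all data are hypothetical) evaluates the objective of (P) at a fixed feasible point, with the compact set $Y$ discretized to finitely many points so that the inner supremum becomes a maximum.

```python
import numpy as np

def primal_objective(x, A, B, f, h, Y_grid):
    """Evaluate sup_{y in Y} [f(x,y) + <x,Ax>^{1/2}] / [h(x,y) - <x,Bx>^{1/2}]
    over a finite discretization Y_grid of the compact set Y."""
    sqrtA = np.sqrt(x @ A @ x)  # <x, Ax>^{1/2}
    sqrtB = np.sqrt(x @ B @ x)  # <x, Bx>^{1/2}
    return max((f(x, y) + sqrtA) / (h(x, y) - sqrtB) for y in Y_grid)

# Hypothetical toy data: n = 2, Y = [0, 1] discretized to 101 points.
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # positive semidefinite
B = np.array([[1.0, 0.0], [0.0, 1.0]])  # positive semidefinite
f = lambda x, y: (x[0] - y) ** 2 + x[1] ** 2 + 1.0  # keeps the numerator >= 0
h = lambda x, y: 4.0 + y                            # keeps the denominator > 0
Y_grid = np.linspace(0.0, 1.0, 101)

x = np.array([0.5, 0.5])
print(primal_objective(x, A, B, f, h, Y_grid))
```

The choices of $f$ and $h$ above enforce the standing assumptions of Section 2 ($f(x,y)+\langle x,Ax\rangle^{1/2}\ge 0$ and $h(x,y)-\langle x,Bx\rangle^{1/2}>0$) on this toy instance.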

Let $P=\{x\in R^n : g(x)\le 0\}$ be the set of all feasible solutions of (P). For each $(x,y)\in R^n\times R^m$, we define
\[
\phi(x,y)=\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}.\tag{2.1}
\]
Assume that for each $(x,y)\in R^n\times Y$, $f(x,y)+\langle x,Ax\rangle^{1/2}\ge 0$ and $h(x,y)-\langle x,Bx\rangle^{1/2}>0$.

Denote
\[
Y(x)=\Big\{y\in Y : \frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}=\sup_{z\in Y}\frac{f(x,z)+\langle x,Ax\rangle^{1/2}}{h(x,z)-\langle x,Bx\rangle^{1/2}}\Big\},
\]
\[
J=\{1,2,\ldots,p\},\qquad J(x)=\{j\in J : g_j(x)=0\},
\]
\[
K(x)=\Big\{(s,t,\tilde y)\in N\times R^s_+\times R^{ms} : 1\le s\le n+1,\ t=(t_1,t_2,\ldots,t_s)\in R^s_+\ \text{with}\ \sum_{i=1}^s t_i=1,\ \tilde y=(\bar y_1,\bar y_2,\ldots,\bar y_s),\ \bar y_i\in Y(x),\ i=1,2,\ldots,s\Big\}.\tag{2.2}
\]

Since $f$ and $h$ are continuously differentiable and $Y$ is a compact subset of $R^m$, it follows that for each $x\in P$, $Y(x)\ne\emptyset$. Thus, for any $\bar y_i\in Y(x)$, we have a positive constant $v^*=\phi(x,\bar y_i)$.

Definition 2.1. A functional $F\colon X\times X\times R^n\to R$ (where $X\subseteq R^n$) is said to be sublinear in its third argument if, for all $(x,x_0)\in X\times X$,
\[
F(x,x_0;a_1+a_2)\le F(x,x_0;a_1)+F(x,x_0;a_2)\quad\forall a_1,a_2\in R^n,
\]
\[
F(x,x_0;\alpha a)=\alpha F(x,x_0;a)\quad\forall\alpha\in R,\ \alpha\ge 0,\ \forall a\in R^n.\tag{2.3}
\]
The following result from Lai and Lee [10] is needed in the sequel.
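A simple concrete sublinear functional is $F(x,x_0;a)=\langle\eta(x,x_0),a\rangle$ for an arbitrary kernel $\eta\colon X\times X\to R^n$, which is linear, and hence sublinear, in $a$; this is exactly the specialization used later in Remark 3.5(ii). The following spot-check (our illustration; the particular $\eta$ is hypothetical) verifies subadditivity and positive homogeneity numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(x, x0):
    # Any map X x X -> R^n will do; this particular choice is illustrative.
    return x - x0

def F(x, x0, a):
    # F(x, x0; a) = <eta(x, x0), a> is linear in a, hence sublinear.
    return eta(x, x0) @ a

x, x0 = rng.normal(size=3), rng.normal(size=3)
a1, a2 = rng.normal(size=3), rng.normal(size=3)
alpha = 2.5  # any alpha >= 0

# Subadditivity: F(x,x0; a1+a2) <= F(x,x0; a1) + F(x,x0; a2).
assert F(x, x0, a1 + a2) <= F(x, x0, a1) + F(x, x0, a2) + 1e-12
# Positive homogeneity: F(x,x0; alpha*a) = alpha * F(x,x0; a).
assert abs(F(x, x0, alpha * a1) - alpha * F(x, x0, a1)) < 1e-12
```

For this linear choice subadditivity in fact holds with equality; general sublinear functionals need only the inequality.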

Lemma 2.2. Let $x^*$ be an optimal solution for (P) satisfying $\langle x^*,Ax^*\rangle>0$, $\langle x^*,Bx^*\rangle>0$, and let $\nabla g_j(x^*)$, $j\in J(x^*)$, be linearly independent. Then there exist $(s,t^*,\tilde y)\in K(x^*)$, $v^*\in R_+$, $u,v\in R^n$, and $\mu^*\in R^p_+$ such that
\[
\sum_{i=1}^s t_i^*\big\{\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big\}+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)=0,\tag{2.4}
\]
\[
f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)=0,\quad i=1,2,\ldots,s,\tag{2.5}
\]
\[
\sum_{j=1}^p\mu_j^*g_j(x^*)=0,\tag{2.6}
\]
\[
t^*\in R^s_+,\quad\sum_{i=1}^s t_i^*=1,\quad\bar y_i\in Y(x^*),\ i=1,2,\ldots,s,\tag{2.7}
\]
\[
\langle u,Au\rangle\le 1,\quad\langle v,Bv\rangle\le 1,\quad\langle x^*,Au\rangle=\langle x^*,Ax^*\rangle^{1/2},\quad\langle x^*,Bv\rangle=\langle x^*,Bx^*\rangle^{1/2}.\tag{2.8}
\]
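The normalization conditions (2.8) can be satisfied explicitly: when $\langle x^*,Ax^*\rangle>0$, the choice $u=x^*/\langle x^*,Ax^*\rangle^{1/2}$ gives $\langle u,Au\rangle=1$ and $\langle x^*,Au\rangle=\langle x^*,Ax^*\rangle^{1/2}$, and similarly for $v$ with $B$. A quick numerical check of this standard observation (our illustration with random hypothetical data, not part of the lemma's statement):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random positive definite matrix A (so <x, Ax> > 0 for x != 0).
M = rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)

x = rng.normal(size=4)
u = x / np.sqrt(x @ A @ x)  # candidate vector for the conditions (2.8)

assert abs(u @ A @ u - 1.0) < 1e-9                 # <u, Au> <= 1, with equality
assert abs(x @ A @ u - np.sqrt(x @ A @ x)) < 1e-9  # <x, Au> = <x, Ax>^{1/2}
```

This also explains the role of (2.8): $Au$ and $Bv$ act as subgradients of the nonsmooth terms $\langle x,Ax\rangle^{1/2}$ and $\langle x,Bx\rangle^{1/2}$ at $x^*$.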

It should be noted that both matrices $A$ and $B$ are assumed positive definite at the solution $x^*$ in the above lemma. If one of $\langle Ax^*,x^*\rangle$ and $\langle Bx^*,x^*\rangle$ is zero, or both $A$ and $B$ are singular at $x^*$, then, for $(s,t^*,\tilde y)\in K(x^*)$, we can take the set $Z_{\tilde y}(x^*)$ defined in Lai and Lee [10] by
\[
Z_{\tilde y}(x^*)=\big\{z\in R^n : \langle\nabla g_j(x^*),z\rangle\le 0,\ j\in J(x^*),\ \text{with any one of the following (i)--(iii) holding}\big\}:
\]
(i) $\langle Ax^*,x^*\rangle>0$, $\langle Bx^*,x^*\rangle=0$ $\Rightarrow$
\[
\Big\langle\sum_{i=1}^s t_i^*\Big(\nabla f(x^*,\bar y_i)+\frac{Ax^*}{\langle Ax^*,x^*\rangle^{1/2}}-v^*\nabla h(x^*,\bar y_i)\Big),\,z\Big\rangle+v^*\langle Bz,z\rangle^{1/2}<0,
\]
(ii) $\langle Ax^*,x^*\rangle=0$, $\langle Bx^*,x^*\rangle>0$ $\Rightarrow$
\[
\Big\langle\sum_{i=1}^s t_i^*\Big(\nabla f(x^*,\bar y_i)-v^*\Big(\nabla h(x^*,\bar y_i)-\frac{Bx^*}{\langle Bx^*,x^*\rangle^{1/2}}\Big)\Big),\,z\Big\rangle+\langle Az,z\rangle^{1/2}<0,
\]
(iii) $\langle Ax^*,x^*\rangle=0$, $\langle Bx^*,x^*\rangle=0$ $\Rightarrow$
\[
\Big\langle\sum_{i=1}^s t_i^*\big(\nabla f(x^*,\bar y_i)-v^*\nabla h(x^*,\bar y_i)\big),\,z\Big\rangle+\langle Az,z\rangle^{1/2}+v^*\langle Bz,z\rangle^{1/2}<0.\tag{2.9}
\]
If the condition $Z_{\tilde y}(x^*)=\emptyset$ is added in Lemma 2.2, then the conclusion of Lemma 2.2 still holds.

3. Sufficient Optimality Conditions

In this section, we present three sets of sufficient optimality conditions for (P) in the framework of generalized convexity.

Let $F\colon X\times X\times R^n\to R$ be a sublinear functional, $\phi_0,\phi_1\colon R\to R$, $\theta\colon R^n\times R^n\to R^n$, and $b_0,b_1\colon X\times X\to R_+$. Let $\rho_0,\rho_1$ be real numbers.

Theorem 3.1. Let $x^*\in P$ be a feasible solution for (P) for which there exist $v^*\in R_+$, $(s,t^*,\tilde y)\in K(x^*)$, $u,v\in R^n$, and $\mu^*\in R^p_+$ satisfying (2.4)–(2.8). Suppose that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_1,b_1,\rho_1$ such that, for all $x\in P$,
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)\ge-\rho_0\|\theta(x,x^*)\|^2\ \Longrightarrow\ b_0(x,x^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\Big)\ge 0,\tag{3.1}
\]
\[
b_1(x,x^*)\,\phi_1\Big(\sum_{j=1}^p\mu_j^*g_j(x^*)\Big)\ge 0\ \Longrightarrow\ F\Big(x,x^*;\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)\le-\rho_1\|\theta(x,x^*)\|^2.\tag{3.2}
\]
Further, assume that
\[
a\ge 0\ \Longrightarrow\ \phi_1(a)\ge 0,\tag{3.3}
\]
\[
\phi_0(a)\ge 0\ \Longrightarrow\ a\ge 0,\tag{3.4}
\]
\[
b_0(x,x^*)>0,\qquad b_1(x,x^*)\ge 0,\tag{3.5}
\]
\[
\rho_0+\rho_1\ge 0.\tag{3.6}
\]
Then $x^*$ is an optimal solution of (P).

Proof. Suppose to the contrary that $x^*$ is not an optimal solution of (P). Then there exists $x\in P$ such that
\[
\sup_{y\in Y}\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}<\sup_{y\in Y}\frac{f(x^*,y)+\langle x^*,Ax^*\rangle^{1/2}}{h(x^*,y)-\langle x^*,Bx^*\rangle^{1/2}}.\tag{3.7}
\]
We note that
\[
\sup_{y\in Y}\frac{f(x^*,y)+\langle x^*,Ax^*\rangle^{1/2}}{h(x^*,y)-\langle x^*,Bx^*\rangle^{1/2}}=\frac{f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}}{h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}}=v^*\tag{3.8}
\]
for $\bar y_i\in Y(x^*)$, $i=1,2,\ldots,s$, and that
\[
\frac{f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}}{h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}}\le\sup_{y\in Y}\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}.\tag{3.9}
\]
Thus, we have
\[
\frac{f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}}{h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}}<v^*\quad\text{for }i=1,2,\ldots,s.\tag{3.10}
\]
It follows that
\[
f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}-v^*\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big)<0\quad\text{for }i=1,2,\ldots,s.\tag{3.11}
\]
From (2.5), (2.7), (2.8), and (3.11), we get
\[
\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]<\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big].\tag{3.12}
\]
On the other hand, from (2.6), (3.3), and (3.5), we have
\[
b_1(x,x^*)\,\phi_1\Big(\sum_{j=1}^p\mu_j^*g_j(x^*)\Big)\ge 0.\tag{3.13}
\]
It follows from (3.2) that
\[
F\Big(x,x^*;\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)\le-\rho_1\|\theta(x,x^*)\|^2.\tag{3.14}
\]
From (2.4), the sublinearity of $F$, and (3.6), we get
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)\ge-\rho_0\|\theta(x,x^*)\|^2.\tag{3.15}
\]
Indeed, by (2.4) the third argument above equals $-\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)$, sublinearity gives $F(x,x^*;-a)\ge-F(x,x^*;a)$, and (3.14) together with (3.6) yields (3.15). Then by (3.1), we have
\[
b_0(x,x^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\Big)\ge 0.\tag{3.16}
\]
From (3.4), (3.5), and the above inequality, we obtain
\[
\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\ge 0,\tag{3.17}
\]
which contradicts (3.12). Therefore, $x^*$ is an optimal solution for (P). This completes the proof.

Remark 3.2. If both $A$ and $B$ are zero matrices, then Theorem 3.1 above reduces to Theorem 3.1 of Yang and Hou [8].

Theorem 3.3. Let $x^*\in P$ be a feasible solution for (P) for which there exist $v^*\in R_+$, $(s,t^*,\tilde y)\in K(x^*)$, $u,v\in R^n$, and $\mu^*\in R^p_+$ satisfying (2.4)–(2.8). Suppose that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_1,b_1,\rho_1$ such that, for all $x\in P$,
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)\ge-\rho_0\|\theta(x,x^*)\|^2\ \Longrightarrow\ b_0(x,x^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\Big)>0,\tag{3.18}
\]
or, equivalently,
\[
b_0(x,x^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\Big)\le 0\ \Longrightarrow\ F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)<-\rho_0\|\theta(x,x^*)\|^2,\tag{3.19}
\]
\[
b_1(x,x^*)\,\phi_1\Big(\sum_{j=1}^p\mu_j^*g_j(x^*)\Big)\ge 0\ \Longrightarrow\ F\Big(x,x^*;\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)\le-\rho_1\|\theta(x,x^*)\|^2.\tag{3.20}
\]
Further, assume that (3.3), (3.5), (3.6), and
\[
a\le 0\ \Longrightarrow\ \phi_0(a)\le 0\tag{3.21}
\]
are satisfied. Then $x^*$ is an optimal solution of (P).

Proof. Suppose to the contrary that $x^*$ is not an optimal solution of (P). Following the proof of Theorem 3.1, we get
\[
\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]<\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big].\tag{3.22}
\]
Using (3.5), (3.19), (3.21), and (3.22), we have
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)<-\rho_0\|\theta(x,x^*)\|^2.\tag{3.23}
\]
By (2.6), (3.3), (3.5), and (3.20), it follows that
\[
F\Big(x,x^*;\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)\le-\rho_1\|\theta(x,x^*)\|^2.\tag{3.24}
\]
On adding (3.23) and (3.24) and making use of the sublinearity of $F$ and (3.6), we have
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)<-(\rho_0+\rho_1)\|\theta(x,x^*)\|^2\le 0.\tag{3.25}
\]
On the other hand, (2.4) implies
\[
F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)=F(x,x^*;0)=0.\tag{3.26}
\]
Hence we have a contradiction to inequality (3.25). Therefore, $x^*$ is an optimal solution for (P). This completes the proof.

Theorem 3.4. Let $x^*\in P$ be a feasible solution for (P) for which there exist $v^*\in R_+$, $(s,t^*,\tilde y)\in K(x^*)$, $u,v\in R^n$, and $\mu^*\in R^p_+$ satisfying (2.4)–(2.8). Suppose that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_1,b_1,\rho_1$ such that, for all $x\in P$,
\[
b_0(x,x^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big[f(x,\bar y_i)+\langle x,Au\rangle-v^*\big(h(x,\bar y_i)-\langle x,Bv\rangle\big)\big]-\sum_{i=1}^s t_i^*\big[f(x^*,\bar y_i)+\langle x^*,Au\rangle-v^*\big(h(x^*,\bar y_i)-\langle x^*,Bv\rangle\big)\big]\Big)\ge 0\ \Longrightarrow\ F\Big(x,x^*;\sum_{i=1}^s t_i^*\big[\nabla f(x^*,\bar y_i)+Au-v^*\big(\nabla h(x^*,\bar y_i)-Bv\big)\big]\Big)\ge-\rho_0\|\theta(x,x^*)\|^2,
\]
\[
F\Big(x,x^*;\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big)\ge-\rho_1\|\theta(x,x^*)\|^2\ \Longrightarrow\ b_1(x,x^*)\,\phi_1\Big(\sum_{j=1}^p\mu_j^*g_j(x^*)\Big)>0.\tag{3.27}
\]
Further, assume that (3.3), (3.5), (3.6), and (3.21) are satisfied. Then $x^*$ is an optimal solution of (P).

Proof. The proof is similar to that of Theorem 3.3 and hence omitted.

Remark 3.5. (i) If both $A$ and $B$ are zero matrices, then Theorems 3.3 and 3.4 above reduce to Theorem 3.3 of Yang and Hou [8].
(ii) If $F(x,u;a)=\langle\eta(x,u),a\rangle$, where $\eta$ is a function from $X\times X$ to $R^n$, if $\phi_1\big(\sum_{j=1}^p\mu_jg_j(x)\big)=\phi_1\big(\sum_{j=1}^p\mu_jg_j(x)-\sum_{j=1}^p\mu_jg_j(x^*)\big)$, and if $\rho_0=\rho_1=0$, then Theorems 3.3 and 3.4 above reduce to Theorems 1(b) and 1(c) of Mishra et al. [16].

4. Duality

In this section, we present a dual model to (P) and establish weak, strong, and strict converse duality results.

To unify and extend dual models, we need to divide $\{1,2,\ldots,p\}$ into several parts. Let $J_\alpha$ $(0\le\alpha\le r)$ form a partition of $\{1,2,\ldots,p\}$; that is,
\[
J_\alpha\cap J_\beta=\emptyset\ \text{for}\ \alpha\ne\beta,\qquad\bigcup_{\alpha=0}^r J_\alpha=\{1,2,\ldots,p\}.\tag{4.1}
\]

We note that, for a (P)-optimal $x^*$, (2.6) implies
\[
\sum_{j\in J_\alpha}\mu_j^*g_j(x^*)=0,\quad\alpha=0,1,\ldots,r,\tag{4.2}
\]
since each term $\mu_j^*g_j(x^*)$ is nonpositive (as $\mu^*\ge 0$ and $g(x^*)\le 0$) while the terms sum to zero. We now recast the necessary conditions of Lemma 2.2 in the following form.

Lemma 4.1. Let $x^*$ be an optimal solution for (P) satisfying $\langle x^*,Ax^*\rangle>0$, $\langle x^*,Bx^*\rangle>0$, and let $\nabla g_j(x^*)$, $j\in J(x^*)$, be linearly independent. Then there exist $(s,t^*,\tilde y)\in K(x^*)$, $u,v\in R^n$, and $\mu^*\in R^p_+$ such that
\[
\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\Big\{\sum_{i=1}^s t_i^*\big(\nabla f(x^*,\bar y_i)+Au\big)+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big\}-\Big\{\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(x^*)\Big\}\sum_{i=1}^s t_i^*\big(\nabla h(x^*,\bar y_i)-Bv\big)=0,\tag{4.3}
\]
\[
\sum_{j\in J_\alpha}\mu_j^*g_j(x^*)=0,\quad\alpha=1,2,\ldots,r,\tag{4.4}
\]
\[
\mu^*\in R^p_+,\quad t_i^*\ge 0,\quad\sum_{i=1}^s t_i^*=1,\quad\bar y_i\in Y(x^*),\ i=1,2,\ldots,s,\tag{4.5}
\]
where $J_\alpha$ $(0\le\alpha\le r)$ is a partition of $\{1,2,\ldots,p\}$.

Proof. It suffices to establish (4.3). Substituting the value of $v^*$ given by (2.5) into (2.4), we obtain
\[
\sum_{i=1}^s t_i^*\Big\{\nabla f(x^*,\bar y_i)+Au-\frac{f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}}{h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}}\big(\nabla h(x^*,\bar y_i)-Bv\big)\Big\}+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)=0.\tag{4.6}
\]
Multiplying this equation by $\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)>0$ and using (2.5) again (the ratio in (4.6) equals $v^*$ for every $i$), we have
\[
\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\Big\{\sum_{i=1}^s t_i^*\big(\nabla f(x^*,\bar y_i)+Au\big)+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big\}-\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)\sum_{i=1}^s t_i^*\big(\nabla h(x^*,\bar y_i)-Bv\big)=0.\tag{4.7}
\]
The above equation together with (2.6), which gives $\sum_{j\in J_0}\mu_j^*g_j(x^*)=0$ by (4.2), implies that
\[
\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\Big\{\sum_{i=1}^s t_i^*\big(\nabla f(x^*,\bar y_i)+Au\big)+\sum_{j=1}^p\mu_j^*\nabla g_j(x^*)\Big\}-\Big\{\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(x^*)\Big\}\sum_{i=1}^s t_i^*\big(\nabla h(x^*,\bar y_i)-Bv\big)=0.\tag{4.8}
\]
Hence, the lemma is established.

Our dual model is as follows:
\[
\max_{(s,t,\tilde y)\in K(z)}\ \sup_{(z,\mu,u,v)\in H(s,t,\tilde y)}\ \frac{\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)}{\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)},\tag{D}
\]
where $H(s,t,\tilde y)$ denotes the set of all $(z,\mu,u,v)\in R^n\times R^p_+\times R^n\times R^n$ satisfying
\[
\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big\{\sum_{i=1}^s t_i\big(\nabla f(z,\bar y_i)+Au\big)+\sum_{j=1}^p\mu_j\nabla g_j(z)\Big\}-\Big\{\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big\}\sum_{i=1}^s t_i\big(\nabla h(z,\bar y_i)-Bv\big)=0,\tag{4.9}
\]
\[
\sum_{j\in J_\alpha}\mu_jg_j(z)\ge 0,\quad\alpha=1,2,\ldots,r,\qquad J_\alpha\cap J_\beta=\emptyset\ \text{for}\ \alpha\ne\beta,\qquad\bigcup_{\alpha=0}^r J_\alpha=\{1,2,\ldots,p\}.\tag{4.10}
\]

Theorem 4.2 (weak duality). Let $x$ be a feasible solution for (P), and let $(z,\mu,u,v,s,t,\tilde y)$ be a feasible solution for (D). Suppose that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_\alpha,b_\alpha,\rho_\alpha$, $\alpha=1,2,\ldots,r$, such that
\[
F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(\nabla f(z,\bar y_i)+Au\big)+\sum_{j\in J_0}\mu_j\nabla g_j(z)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(\nabla h(z,\bar y_i)-Bv\big)\Big)\ge-\rho_0\|\theta(x,z)\|^2\ \Longrightarrow\ b_0(x,z)\,\phi_0\Big(\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(x)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big)\Big)\ge 0,\tag{4.11}
\]
\[
b_\alpha(x,z)\,\phi_\alpha\Big(\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_jg_j(z)\Big)\ge 0\ \Longrightarrow\ F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_j\nabla g_j(z)\Big)\le-\rho_\alpha\|\theta(x,z)\|^2,\quad\alpha=1,2,\ldots,r.\tag{4.12}
\]
Further, assume that
\[
a\ge 0\ \Longrightarrow\ \phi_\alpha(a)\ge 0,\quad\alpha=1,2,\ldots,r,\tag{4.13}
\]
\[
\phi_0(a)\ge 0\ \Longrightarrow\ a\ge 0,\tag{4.14}
\]
\[
b_0(x,z)>0,\qquad b_\alpha(x,z)\ge 0,\quad\alpha=1,2,\ldots,r,\tag{4.15}
\]
\[
\rho_0+\sum_{\alpha=1}^r\rho_\alpha\ge 0.\tag{4.16}
\]
Then
\[
\sup_{y\in Y}\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}\ \ge\ \frac{\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)}{\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)}.\tag{4.17}
\]

Proof. Suppose to the contrary that
\[
\sup_{y\in Y}\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}<\frac{\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)}{\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)}.\tag{4.18}
\]
Then, since $h(x,y)-\langle x,Bx\rangle^{1/2}>0$, we get
\[
\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\big(f(x,y)+\langle x,Ax\rangle^{1/2}\big)<\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\big(h(x,y)-\langle x,Bx\rangle^{1/2}\big),\quad\forall y\in Y.\tag{4.19}
\]
Further, taking $y=\bar y_i$, multiplying by $t_i$, and summing, this implies
\[
\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{i=1}^s t_i\big(f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}\big)<\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big).\tag{4.20}
\]
Hence, we have
\[
\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(x)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big)<\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_0}\mu_jg_j(x).\tag{4.21}
\]
Using the facts that $\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)>0$ and $\sum_{j\in J_0}\mu_jg_j(x)\le 0$ in the last inequality, we have
\[
\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(x)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big)<0.\tag{4.22}
\]
From (4.11), (4.14), (4.15), and (4.22), we get
\[
F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(\nabla f(z,\bar y_i)+Au\big)+\sum_{j\in J_0}\mu_j\nabla g_j(z)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(\nabla h(z,\bar y_i)-Bv\big)\Big)<-\rho_0\|\theta(x,z)\|^2.\tag{4.23}
\]
Using $\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)>0$, (4.10), (4.13), and (4.15), we get
\[
b_\alpha(x,z)\,\phi_\alpha\Big(\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_jg_j(z)\Big)\ge 0,\quad\alpha=1,2,\ldots,r.\tag{4.24}
\]
From (4.12), we have
\[
F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_j\nabla g_j(z)\Big)\le-\rho_\alpha\|\theta(x,z)\|^2,\quad\alpha=1,2,\ldots,r.\tag{4.25}
\]
On adding (4.23) and (4.25) and making use of the sublinearity of $F$ and (4.16), we have
\[
F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(\nabla f(z,\bar y_i)+Au\big)+\sum_{j=1}^p\mu_j\nabla g_j(z)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(\nabla h(z,\bar y_i)-Bv\big)\Big)<0,\tag{4.26}
\]
which contradicts (4.9), since by (4.9) the third argument of $F$ above is zero and $F(x,z;0)=0$. This completes the proof.

Theorem 4.3 (weak duality). Let $x$ be a feasible solution for (P), and let $(z,\mu,u,v,s,t,\tilde y)$ be a feasible solution for (D). Suppose that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_\alpha,b_\alpha,\rho_\alpha$, $\alpha=1,2,\ldots,r$, such that
\[
b_0(x,z)\,\phi_0\Big(\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(f(x,\bar y_i)+\langle x,Ax\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(x)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(h(x,\bar y_i)-\langle x,Bx\rangle^{1/2}\big)\Big)<0\ \Longrightarrow\ F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i\big(\nabla f(z,\bar y_i)+Au\big)+\sum_{j\in J_0}\mu_j\nabla g_j(z)\Big]-\Big[\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)\Big]\sum_{i=1}^s t_i\big(\nabla h(z,\bar y_i)-Bv\big)\Big)<-\rho_0\|\theta(x,z)\|^2,
\]
\[
b_\alpha(x,z)\,\phi_\alpha\Big(\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_jg_j(z)\Big)\ge 0\ \Longrightarrow\ F\Big(x,z;\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_j\nabla g_j(z)\Big)<-\rho_\alpha\|\theta(x,z)\|^2,\quad\alpha=1,2,\ldots,r.\tag{4.27}
\]
Further, assume that (4.14), (4.15), and (4.16) are satisfied. Then
\[
\sup_{y\in Y}\frac{f(x,y)+\langle x,Ax\rangle^{1/2}}{h(x,y)-\langle x,Bx\rangle^{1/2}}\ \ge\ \frac{\sum_{i=1}^s t_i\big(f(z,\bar y_i)+\langle z,Az\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_jg_j(z)}{\sum_{i=1}^s t_i\big(h(z,\bar y_i)-\langle z,Bz\rangle^{1/2}\big)}.\tag{4.28}
\]

Proof. The proof is similar to that of the above theorem and hence omitted.

Theorem 4.4 (strong duality). Assume that $x^*$ is an optimal solution for (P) and that $\nabla g_j(x^*)$, $j\in J(x^*)$, are linearly independent. Then there exist $(s,t^*,\tilde y)\in K(x^*)$ and $(x^*,\mu^*,u,v)\in H(s,t^*,\tilde y)$ such that $(x^*,\mu^*,u,v,s,t^*,\tilde y)$ is an optimal solution for (D). If, in addition, the hypotheses of one of the weak duality theorems (Theorem 4.2 or Theorem 4.3) hold for every feasible point $(z,\mu,u,v,s,t,\tilde y)$ of (D), then the problems (P) and (D) have the same optimal values.

Proof. By Lemma 4.1, there exist $(s,t^*,\tilde y)\in K(x^*)$ and $(x^*,\mu^*,u,v)\in H(s,t^*,\tilde y)$ such that $(x^*,\mu^*,u,v,s,t^*,\tilde y)$ is feasible for (D); at this point the objective of (D) equals the optimal value of (P), by (2.5) and (4.2). Optimality of this feasible solution for (D) then follows from Theorem 4.2 or Theorem 4.3, as appropriate.

Theorem 4.5 (strict converse duality). Let $x^*$ and $(z^*,\mu^*,u,v,s,t^*,\tilde y)$ be optimal solutions for (P) and (D), respectively. Suppose that $\nabla g_j(x^*)$, $j\in J(x^*)$, are linearly independent and that there exist $F,\theta,\phi_0,b_0,\rho_0$ and $\phi_\alpha,b_\alpha,\rho_\alpha$, $\alpha=1,2,\ldots,r$, such that
\[
F\Big(x^*,z^*;\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i^*\big(\nabla f(z^*,\bar y_i)+Au\big)+\sum_{j\in J_0}\mu_j^*\nabla g_j(z^*)\Big]-\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\sum_{i=1}^s t_i^*\big(\nabla h(z^*,\bar y_i)-Bv\big)\Big)\ge-\rho_0\|\theta(x^*,z^*)\|^2\ \Longrightarrow\ b_0(x^*,z^*)\,\phi_0\Big(\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(x^*)\Big]-\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\Big)\ge 0,\tag{4.29}
\]
\[
b_\alpha(x^*,z^*)\,\phi_\alpha\Big(\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_j^*g_j(z^*)\Big)\ge 0\ \Longrightarrow\ F\Big(x^*,z^*;\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\sum_{j\in J_\alpha}\mu_j^*\nabla g_j(z^*)\Big)\le-\rho_\alpha\|\theta(x^*,z^*)\|^2,\quad\alpha=1,2,\ldots,r.\tag{4.30}
\]
Further, assume that (4.13), (4.15), (4.16), and
\[
\phi_0(a)\ge 0\ \Longrightarrow\ a>0\tag{4.31}
\]
hold. Then $x^*=z^*$; that is, $z^*$ is an optimal solution for (P).

Proof. Suppose to the contrary that $x^*\ne z^*$. From the strong duality theorem (Theorem 4.4), we know that
\[
\sup_{y\in Y}\frac{f(x^*,y)+\langle x^*,Ax^*\rangle^{1/2}}{h(x^*,y)-\langle x^*,Bx^*\rangle^{1/2}}=\frac{\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)}{\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)}.\tag{4.32}
\]
Then, we get
\[
\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\big(f(x^*,y)+\langle x^*,Ax^*\rangle^{1/2}\big)\le\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\big(h(x^*,y)-\langle x^*,Bx^*\rangle^{1/2}\big),\quad\forall y\in Y.\tag{4.33}
\]
Further, this implies
\[
\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)\le\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big).\tag{4.34}
\]
Hence, we have
\[
\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(x^*)\Big]-\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\le\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\sum_{j\in J_0}\mu_j^*g_j(x^*).\tag{4.35}
\]
Using the facts that $\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)>0$ and $\sum_{j\in J_0}\mu_j^*g_j(x^*)\le 0$ in the last inequality, we have
\[
\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i^*\big(f(x^*,\bar y_i)+\langle x^*,Ax^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(x^*)\Big]-\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)+\sum_{j\in J_0}\mu_j^*g_j(z^*)\Big]\sum_{i=1}^s t_i^*\big(h(x^*,\bar y_i)-\langle x^*,Bx^*\rangle^{1/2}\big)\le 0.\tag{4.36}
\]
From (4.15), (4.29), (4.31), and (4.36), we get
\[
F\Big(x^*,z^*;\sum_{i=1}^s t_i^*\big(h(z^*,\bar y_i)-\langle z^*,Bz^*\rangle^{1/2}\big)\Big[\sum_{i=1}^s t_i^*\big(\nabla f(z^*,\bar y_i)+Au\big)+\sum_{j\in J_0}\mu_j^*\nabla g_j(z^*)\Big]-\Big[\sum_{i=1}^s t_i^*\big(f(z^*,\bar y_i)+\langle z^*,Az^*\rangle^{1/2}\big)