Research Article | Open Access

# Optimality and Duality for Nondifferentiable Minimax Fractional Programming with Generalized Convexity

**Academic Editor:** H. C. So

#### Abstract

We establish several sufficient optimality conditions for a class of nondifferentiable minimax fractional programming problems from the viewpoint of generalized convexity. These optimality criteria are then used as a basis for constructing dual models, and certain duality results are derived in the framework of generalized convex functions. Our results extend and unify some known results on minimax fractional programming problems.

#### 1. Introduction

Several authors have studied optimality conditions and duality results for minimax programming problems. Necessary and sufficient conditions for generalized minimax programming were first developed by Schmitendorf [1]. Using Schmitendorf's results, Tanimoto [2] defined a dual problem and derived duality theorems for convex minimax programming problems.

Yadav and Mukherjee [3] also employed the optimality conditions of Schmitendorf [1] to construct two dual problems and derived duality theorems for differentiable fractional minimax programming problems. Chandra and Kumar [4] pointed out that the formulation of Yadav and Mukherjee [3] has some omissions and inconsistencies, and they constructed two new dual problems and proved duality theorems for differentiable fractional minimax programming. Liu et al. [5, 6], Liang and Shi [7], and Yang and Hou [8] paid much attention to the minimax fractional programming problem and established sufficient optimality conditions and duality results.

Lai et al. [9] derived necessary and sufficient conditions for a nondifferentiable minimax fractional problem with generalized convexity, applied these optimality conditions to construct a parametric dual model, and discussed duality theorems. Lai and Lee [10] obtained duality theorems for two parameter-free dual models of a nondifferentiable minimax fractional programming problem under generalized convexity assumptions. Ahmad and Husain [11, 12] established sufficient optimality conditions and duality theorems for the nondifferentiable minimax fractional programming problem under convexity assumptions, thus extending the results of Lai et al. [9] and Lai and Lee [10]. Jayswal [13] discussed optimality conditions and duality results for nondifferentiable minimax fractional programming under -univexity. Yuan et al. [14] introduced the concept of generalized -convexity and focused their study on a nondifferentiable minimax fractional programming problem. Recently, Jayswal and Kumar [15] established sufficient optimality conditions and duality theorems for a class of nondifferentiable minimax fractional programming problems involving -convexity.

In the present paper, we discuss sufficient optimality conditions for a nondifferentiable minimax fractional programming problem from the viewpoint of generalized convexity. Subsequently, we apply the optimality conditions to formulate a dual problem and prove weak, strong, and strict converse duality theorems involving generalized convexity.

The paper is organized as follows. In Section 2, we present a few definitions and notations and recall a set of necessary optimality conditions for a nondifferentiable minimax fractional programming problem, which will be needed in the sequel. In Section 3, we discuss sufficient optimality conditions under somewhat restricted forms of generalized convexity. Furthermore, a dual problem is formulated and duality results are presented in Section 4. Finally, in Section 5, we summarize our main results and point out some additional research opportunities arising from certain modifications of the principal problem model considered in this paper.

#### 2. Notations and Preliminaries

Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space and let $\mathbb{R}^n_+$ be its nonnegative orthant.

In this paper, we consider the following nondifferentiable minimax fractional programming problem: where and are continuously differentiable functions, is a compact subset of , and and are positive semidefinite matrices. The problem (P) is a nondifferentiable programming problem if either of these matrices is nonzero. If both are null matrices, then (P) reduces to the usual minimax fractional programming problem studied by Liang and Shi [7] and Yang and Hou [8].
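The displayed problem statement did not survive extraction here. In the notation commonly used for this problem class (e.g., in Lai and Lee [10]) — the symbols $f$, $h$, $g$, $B$, $D$, and $Y$ below are conventional choices, not recovered from this text — (P) is usually stated as:

```latex
(\mathrm{P})\qquad
\min_{x \in \mathbb{R}^n}\ \max_{y \in Y}\
\frac{f(x,y) + \left(x^{T} B x\right)^{1/2}}
     {h(x,y) - \left(x^{T} D x\right)^{1/2}}
\qquad \text{subject to}\quad g(x) \leq 0,
```

where $Y$ is the compact index set, $f$ and $h$ are the continuously differentiable functions, $g$ collects the constraint functions, and $B$ and $D$ are the positive semidefinite matrices mentioned above.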

Let denote the set of all feasible solutions of (P). For each , we define Assume that for each , , and .

Denote

Since and are continuously differentiable and is a compact subset of , it follows that for each , . Thus, for any , we have a positive constant .
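As a small numerical illustration of the minimax ratio objective and the positive parametric constant (conventionally denoted $k_0$) mentioned above, the following sketch applies a generalized Dinkelbach-type iteration to a toy smooth instance with a finite index set. The instance, the grid search, and all names here are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Toy instance (illustrative): minimize over x in [0.5, 2] the value
#   phi(x) = max_{y in Y} f(x, y) / h(x, y),
# with h positive on the feasible box, as assumed for problem (P).
def f(x, y):
    return x**2 + y**2

def h(x, y):
    return x + y

X = np.linspace(0.5, 2.0, 4001)  # dense grid standing in for the feasible set
Y = [1.0, 2.0]                   # finite compact index set

def parametric_min(lam):
    """Solve the Dinkelbach subproblem min_x max_y [f(x,y) - lam*h(x,y)]."""
    vals = np.max([f(X, y) - lam * h(X, y) for y in Y], axis=0)
    i = int(np.argmin(vals))
    return vals[i], X[i]

# Generalized Dinkelbach iteration: the next parameter is the minimax ratio
# at the minimizer of the current subproblem; the iterates converge to the
# optimal ratio value, which is positive on this instance.
x_k = 1.0
lam = max(f(x_k, y) / h(x_k, y) for y in Y)
for _ in range(50):
    _, x_k = parametric_min(lam)
    new_lam = max(f(x_k, y) / h(x_k, y) for y in Y)
    if abs(new_lam - lam) < 1e-12:
        break
    lam = new_lam

print(x_k, lam)  # approximate minimizer and minimax ratio value
```

For this instance the index $y = 2$ is active throughout, so the iteration reduces to minimizing $(x^2+4)/(x+2)$ and converges to $x \approx 0.8284$ with ratio $\approx 1.6569$; the positive limit value plays the role of the parametric constant in the text.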

*Definition 2.1.* A functional is said to be sublinear in its third argument if, for all ,
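The defining relations of Definition 2.1 are missing above. For a functional $F \colon \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ (a conventional setting, assumed here rather than recovered from the text), sublinearity in the third argument standardly means:

```latex
F(x,\bar{x};\,a_1 + a_2) \leq F(x,\bar{x};\,a_1) + F(x,\bar{x};\,a_2)
  \quad \text{for all } a_1, a_2 \in \mathbb{R}^n,
\qquad
F(x,\bar{x};\,\alpha a) = \alpha\, F(x,\bar{x};\,a)
  \quad \text{for all } \alpha \geq 0,\ a \in \mathbb{R}^n .
```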
The following result from Lai and Lee [10] is needed in the sequel.

Lemma 2.2. *Let be an optimal solution for (P) satisfying , and let , be linearly independent. Then there exist , , and such that
*

It should be noted that both matrices and are positive definite at the solution in the above lemma. If one of and is zero, or both and are singular at , then for , we can take as defined in Lai and Lee [10] by If we take this condition in Lemma 2.2, then its conclusion still holds.
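Conditions (2.4)–(2.8), which every theorem below invokes, are not reproduced above. In the form given by Lai and Lee [10] — with the conventional names $\bar{x}$, $s$, $t_i$, $\bar{y}_i$, $k_0$, $w$, $v$, $\mu_j$, which are assumptions of this sketch rather than notation recovered from this text — they typically read:

```latex
\begin{aligned}
&\nabla \sum_{i=1}^{s} t_i \Bigl\{ f(\bar{x},\bar{y}_i) + \bar{x}^{T} B w
  - k_0 \bigl( h(\bar{x},\bar{y}_i) - \bar{x}^{T} D v \bigr) \Bigr\}
  + \nabla \sum_{j=1}^{m} \mu_j g_j(\bar{x}) = 0, && (2.4)\\
&f(\bar{x},\bar{y}_i) + \bigl(\bar{x}^{T} B \bar{x}\bigr)^{1/2}
  - k_0 \Bigl( h(\bar{x},\bar{y}_i) - \bigl(\bar{x}^{T} D \bar{x}\bigr)^{1/2} \Bigr) = 0,
  \quad i = 1, \dots, s, && (2.5)\\
&\sum_{j=1}^{m} \mu_j g_j(\bar{x}) = 0, && (2.6)\\
&t_i \geq 0 \ (i = 1, \dots, s), \qquad \sum_{i=1}^{s} t_i = 1, && (2.7)\\
&w^{T} B w \leq 1, \quad v^{T} D v \leq 1, \quad
  \bigl(\bar{x}^{T} B \bar{x}\bigr)^{1/2} = \bar{x}^{T} B w, \quad
  \bigl(\bar{x}^{T} D \bar{x}\bigr)^{1/2} = \bar{x}^{T} D v. && (2.8)
\end{aligned}
```

The equation numbers are matched to the citations in the text on the assumption that the conditions appear in this standard order.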

#### 3. Sufficient Optimality Conditions

In this section, we present three sets of sufficient optimality conditions for (P) in the framework of generalized convexity.

Let be a sublinear functional, , , and . Let , be real numbers.

Theorem 3.1. *Let be a feasible solution for (P), and suppose there exist , , and satisfying (2.4)–(2.8). Suppose that there exist and such that
**
Further, assume that
**
Then is an optimal solution of (P).*

*Proof.* Suppose to the contrary that is not an optimal solution of (P). Then there exists such that
We note that
for , ,
Thus, we have
It follows that
From (2.5), (2.7), (2.8), and (3.11), we get
On the other hand, from (2.6), (3.3), and (3.5), we have
It follows from (3.2) that
From (2.4), the sublinearity of , and (3.6), we get
Then by (3.1), we have
From (3.4), (3.5), and the above inequality, we obtain
which contradicts (3.12). Therefore, is an optimal solution for (P). This completes the proof.

*Remark 3.2.* If both and *B* are zero matrices, then Theorem 3.1 above reduces to Theorem 3.1 given in Yang and Hou [8].

Theorem 3.3. *Let be a feasible solution for (P), and suppose there exist , , and satisfying (2.4)–(2.8). Suppose that there exist and such that
**
or equivalently,
**
Further, assume that (3.3), (3.5), (3.6), and
**
are satisfied. Then is an optimal solution of (P).*

*Proof.* Suppose to the contrary that is not an optimal solution of (P). Following the proof of Theorem 3.1, we get
Using (3.5), (3.19), (3.21), and (3.22), we have
By (2.6), (3.3), (3.5), and (3.20), it follows that
On adding (3.23) and (3.24), and making use of the sublinearity of and (3.6), we have
On the other hand, (2.4) implies
Hence we have a contradiction to inequality (3.25). Therefore, is an optimal solution for (P). This completes the proof.

Theorem 3.4. *Let be a feasible solution for (P), and suppose there exist , , and satisfying (2.4)–(2.8). Suppose that there exist and such that
**
Further, assume that (3.3), (3.5), (3.6), and (3.21) are satisfied. Then is an optimal solution of (P).*

*Proof.* The proof is similar to that of Theorem 3.3 and is hence omitted.

*Remark 3.5.* (i) If both and are zero matrices, then Theorems 3.3 and 3.4 above reduce to Theorem 3.3 given in Yang and Hou [8].

(ii) If where is a function from , , and , then Theorems 3.3 and 3.4 above reduce to Theorems 1(b) and 1(c) given by Mishra et al. [16].

#### 4. Duality

In this section, we present a dual model to (P) and establish weak, strong, and strict converse duality results.

To unify and extend the dual models, we need to divide into several parts. Let be a partition of , that is,
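The displayed partition conditions are missing here. Writing $M = \{1, \dots, m\}$ for the constraint index set (a conventional choice, assumed rather than recovered from the text), a partition $\{J_0, J_1, \dots, J_r\}$ of $M$ satisfies:

```latex
J_\alpha \subseteq M \ (\alpha = 0, 1, \dots, r), \qquad
\bigcup_{\alpha=0}^{r} J_\alpha = M, \qquad
J_\alpha \cap J_\beta = \emptyset \quad \text{for } \alpha \neq \beta .
```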

We note that for a (P)-optimal , (2.6) implies We now recast the necessary condition of Lemma 2.2 in the following form.

Lemma 4.1. *Let be an optimal solution for (P) satisfying , and let , be linearly independent. Then there exist , and such that
**
where is a partition of .*

*Proof.* It suffices to establish (4.3). From (2.4) and (2.5),
Multiplying the respective equations above by and adding them together, we have
The above equation together with (2.6) implies that
Hence, the lemma is established.

Our dual model is as follows: where denotes the set of all satisfying

Theorem 4.2 (weak duality). *Let be a feasible solution for (P), and let be a feasible solution for (4.18). Suppose that there exist and , such that
**
Further, assume that
**
Then
*

*Proof.* Suppose to the contrary that
then, we get
Further, this implies
Hence, we have
Using the fact that and and the last inequality, we have
From (4.11),(4.14),(4.15), and (4.22), we get
Using , (4.10), (4.13), and (4.15), we get
From (4.12), we have
On adding (4.23) and (4.25) and making use of sublinearity of and (4.16), we have
which contradicts (4.9). This completes the proof.

Theorem 4.3 (weak duality). *Let be a feasible solution for (P), and let be a feasible solution for (4.18). Suppose that there exist and , such that
**
Further, assume that (4.14), (4.15), and (4.16) are satisfied. Then
*

*Proof.* The proof is similar to that of the above theorem and is hence omitted.

Theorem 4.4 (strong duality). *Assume that is an optimal solution for (P) and that , are linearly independent. Then there exist and such that is an optimal solution for (4.18). If, in addition, the hypotheses of either weak duality theorem (Theorem 4.2 or Theorem 4.3) hold for a feasible point , then the problems (P) and (4.18) have the same optimal values.*

*Proof.* By Lemma 4.1, there exist and such that is feasible for (4.18). The optimality of this feasible solution for (4.18) follows from Theorem 4.2 or 4.3 accordingly.

Theorem 4.5 (strict converse duality). *Let and be optimal solutions for (P) and (4.18), respectively. Suppose that , are linearly independent and there exist , and such that
**
Further, assume (4.13), (4.15), (4.16),
**
Then , that is, is an optimal solution for (P).*

*Proof.* Suppose to the contrary that . From the strong duality theorem (Theorem 4.4), we know that
Then, we get
Further, this implies
Hence, we have
Using the fact that and and the last inequality, we have
From (4.15), (4.29), (4.31), and (4.36), we get
Using , (4.10), (4.13), and (4.15), we get
From (4.30), we have
On adding (4.37) and (4.39) and making use of sublinearity of and (4.16), we have