Journal of Applied Mathematics

Volume 2013 (2013), Article ID 641345, 11 pages

http://dx.doi.org/10.1155/2013/641345

## Optimality Condition and Wolfe Duality for Invex Interval-Valued Nonlinear Programming Problems

Department of Applied Mathematics and Physics, Xi’an University of Posts and Telecommunications, Xi’an 710121, China

Received 10 May 2013; Accepted 10 October 2013

Academic Editor: Deng-Feng Li

Copyright © 2013 Jianke Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The concepts of preinvexity and invexity are extended to interval-valued functions. Under the assumption of invexity, the Karush-Kuhn-Tucker sufficient and necessary optimality conditions for interval-valued nonlinear programming problems are derived. Based on the concepts of having no duality gap in the weak and strong sense, Wolfe duality theorems for invex interval-valued nonlinear programming problems are proposed in this paper.

#### 1. Introduction

In real-world applications of mathematical programming, one cannot ignore the possibility that a small uncertainty in the data can make the usual optimal solution completely meaningless from a practical viewpoint. So the major difficulty we face is how to seek a solution for these real-world optimization problems. There are several optimization models to deal with such problems. If the coefficients of an optimization problem are assumed to be random variables with known distributions, the problem can be categorized as a stochastic optimization problem. Stochastic optimization is a widely used and standard approach to dealing with uncertainty; for details on this topic one can see the books written by Birge and Louveaux [1], Kall and Mayer [2], and Prékopa [3]. If the coefficients of an optimization problem are assumed to be fuzzy variables, the problem can be categorized as a fuzzy optimization problem. The book edited by Delgado et al. [4] presents the main stream of this topic. However, there are several drawbacks of stochastic optimization and fuzzy optimization in real-world applications. First, the specifications of the distributions and membership functions in stochastic and fuzzy optimization problems are very subjective. Second, the approach of stochastic (or fuzzy) optimization requires the evaluation of the solution on the whole uncertainty set in order to determine its expected cost, which is computationally hard in general. Finally, one cannot guarantee that the real cost matches the expected cost in stochastic optimization, since the expected cost is only an estimate over the possible scenarios.

In recent years, some deterministic frameworks of optimization have been studied to overcome the drawbacks of stochastic optimization and fuzzy optimization. One of these deterministic methods is robust optimization, which is a worst-case-based method and does not need a probability distribution on the uncertainty set. Studies on robust optimization date back to 1973, when Soyster [5] proposed the first robust model for linear optimization problems with uncertain data. However, that model is very conservative in the sense that it protects against the worst-case scenario. The interest in robust formulations in the optimization community was revived in the 1990s. A number of important robust formulations and applications were introduced by Ben-Tal et al. [6], El Ghaoui et al. [7, 8], and Bertsimas and Sim [9], who provided a detailed analysis of the robust optimization framework in linear optimization and general convex programming. In robust optimization, the considered uncertainty set plays a crucial role, since it determines the level of protection of the solution. The solutions of robust optimization models might be too conservative if all scenarios are considered. Another deterministic method is interval-valued optimization, which provides an alternative choice for incorporating uncertainty into optimization problems. The coefficients in interval-valued optimization are assumed to be closed intervals. The bounds of uncertain data in interval-valued optimization are easier to handle than specifying the distributions and membership functions in stochastic and fuzzy optimization problems, respectively.

Duality theory has played a fundamental role in the area of constrained optimization and has been studied for over a century. The duality theory for interval linear programming problems with a real-valued objective function was discussed by Rohn [10]. Wu [11–14] has studied the duality theory for interval-valued programming problems. In [11], Wu proposed the Wolfe duality for interval-valued nonlinear programming problems. The Lagrangian duality for interval-valued nonlinear programming problems was also studied by Wu in [13]. Although the Wolfe and Lagrangian duality theory obtained in [11–13] can be applied to interval-valued linear programming problems, the results obtained in this way are complicated. Based on the concept of a scalar product of closed intervals, Wu [14] proposed new weak and strong duality theorems for interval-valued linear programming problems. Zhou and Wang [15] established an optimality sufficient condition and a mixed dual model for interval-valued nonlinear programming problems. However, these results were mainly established for interval-valued programming problems involving the optimization of convex objective functions over convex feasible regions. In real-world applications, not all practical problems fulfill the requirements of convexity. Consequently, generalized convex functions [16–21] have been introduced in order to weaken as much as possible the convexity requirements in results on optimality conditions and duality.

In this paper, we study the Karush-Kuhn-Tucker sufficient and necessary optimality conditions for interval-valued optimization problems under the assumption of generalized convexity. We extend the concepts of preinvexity and invexity for real-valued functions to interval-valued functions. Under the assumption of invexity, the Karush-Kuhn-Tucker sufficient and necessary optimality conditions for interval-valued optimization problems are derived for the purpose of proving the strong duality theorems. Using the concept of having no duality gap in the weak and strong sense, the corresponding strong duality theorems are then proposed. The results in this paper improve and extend the results of Wu [11–14] for interval-valued nonlinear optimization problems.

In Section 2 we present some basic concepts and properties of closed intervals and interval-valued functions, respectively. In Section 3, the Wolfe primal and dual pair of problems is proposed for interval-valued optimization problems. In Section 4, we extend the concepts of preinvexity and invexity for real-valued functions to interval-valued functions; under the assumption of invexity, the Karush-Kuhn-Tucker sufficient and necessary optimality conditions for interval-valued optimization problems are derived. In Section 5, we discuss the solvability of the Wolfe primal and dual problems under the assumption of invexity. In Section 6, the duality theorems in the weak and strong sense are established for invex interval-valued nonlinear optimization problems.

#### 2. Preliminaries

Let us denote by $\mathcal{I}$ the class of all closed intervals in $\mathbb{R}$. $A = [a^L, a^U] \in \mathcal{I}$ denotes a closed interval, where $a^L$ and $a^U$ mean the lower and upper bounds of $A$, respectively. Let $A = [a^L, a^U]$ and $B = [b^L, b^U]$ be in $\mathcal{I}$; we have (i) $A + B = \{a + b : a \in A, b \in B\} = [a^L + b^L, a^U + b^U]$; (ii) $-A = \{-a : a \in A\} = [-a^U, -a^L]$; (iii) $A - B = A + (-B) = [a^L - b^U, a^U - b^L]$ and $kA = \{ka : a \in A\}$, where $kA = [ka^L, ka^U]$ if $k \geq 0$ and $kA = [ka^U, ka^L]$ if $k < 0$.

Then, we can see that $k(A + B) = kA + kB$, where $k$ is a real number. A real number $a$ can be regarded as the closed interval $[a, a]$. Let $A$ be a closed interval; we write $A \succeq 0$, which will mean $a^L \geq 0$. For more details on the topic of interval analysis, one can refer to [22].
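The interval arithmetic above can be sketched in a few lines of Python; the class name and layout are illustrative and not from the paper.

```python
# Minimal sketch of closed-interval arithmetic on A = [lo, up].

class Interval:
    def __init__(self, lo, up):
        assert lo <= up, "lower bound must not exceed upper bound"
        self.lo, self.up = lo, up

    def __add__(self, other):
        # A + B = [a^L + b^L, a^U + b^U]
        return Interval(self.lo + other.lo, self.up + other.up)

    def __neg__(self):
        # -A = [-a^U, -a^L]
        return Interval(-self.up, -self.lo)

    def __sub__(self, other):
        # A - B = A + (-B) = [a^L - b^U, a^U - b^L]
        return self + (-other)

    def __rmul__(self, k):
        # kA = [k a^L, k a^U] if k >= 0, else [k a^U, k a^L]
        return Interval(k * self.lo, k * self.up) if k >= 0 else Interval(k * self.up, k * self.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.up}]"

A, B = Interval(1, 3), Interval(2, 5)
print(A + B)    # [3, 8]
print(A - B)    # [-4, 1]
print(-2 * A)   # [-6, -2]
```

Note that $A - B$ is not the Hukuhara difference discussed below; it is the set-based difference, which always widens the interval.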

We say that $A$ and $B$ are *comparable* if and only if $A \preceq_{LU} B$ or $A \succeq_{LU} B$. We write $A \preceq_{LU} B$ if and only if $a^L \leq b^L$ and $a^U \leq b^U$, and $A \prec_{LU} B$ if and only if $A \preceq_{LU} B$ and $A \neq B$; that is, one of the following (a1), (a2), or (a3) is satisfied: (a1) $a^L < b^L$ and $a^U < b^U$; (a2) $a^L \leq b^L$ and $a^U < b^U$; (a3) $a^L < b^L$ and $a^U \leq b^U$.

Therefore, if $A$ and $B$ are *not comparable*, then neither $A \preceq_{LU} B$ nor $B \preceq_{LU} A$ holds. In other words, if $A$ and $B$ are not comparable, then $a^L < b^L$ and $a^U > b^U$, or $a^L > b^L$ and $a^U < b^U$.
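A short Python helper makes the comparability test concrete; the componentwise LU convention ($A \preceq B$ iff the lower and upper bounds are both ordered) is assumed, and the function names are illustrative.

```python
# Sketch of the LU order on closed intervals a = (aL, aU), b = (bL, bU).

def lu_leq(a, b):
    """True when A precedes B in the LU order: aL <= bL and aU <= bU."""
    (aL, aU), (bL, bU) = a, b
    return aL <= bL and aU <= bU

def comparable(a, b):
    """A and B are comparable when one LU-precedes the other."""
    return lu_leq(a, b) or lu_leq(b, a)

print(comparable((1, 3), (2, 5)))  # True: [1,3] precedes [2,5]
print(comparable((1, 5), (2, 3)))  # False: [2,3] is nested inside [1,5]
```

Nested intervals with strictly crossed bounds are exactly the non-comparable pairs.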

The function $F$ defined on the Euclidean space $\mathbb{R}^n$ is called an *interval-valued function* if $F(x)$ is a closed interval in $\mathbb{R}$ for each $x \in \mathbb{R}^n$. $F$ can also be written as $F(x) = [F^L(x), F^U(x)]$, where $F^L$ and $F^U$ are real-valued functions defined on $\mathbb{R}^n$ and satisfy $F^L(x) \leq F^U(x)$ for every $x \in \mathbb{R}^n$. Wu ([23]) has presented the concepts of limit, continuity, and two kinds of differentiation of interval-valued functions.

*Definition 1 (see [23]). *Let $X$ be an open set in $\mathbb{R}^n$. An interval-valued function $F: X \to \mathcal{I}$ with $F(x) = [F^L(x), F^U(x)]$ is called *weakly differentiable* at $x_0$ if the real-valued functions $F^L$ and $F^U$ are differentiable at $x_0$ (in the usual sense).

Let $A, B \in \mathcal{I}$; if there exists a $C \in \mathcal{I}$ such that $A = B + C$, then $C$ is called the *Hukuhara difference* of $A$ and $B$. One also writes $C = A \ominus_H B$. When we say that the Hukuhara difference $A \ominus_H B$ exists, this means that $a^L - b^L \leq a^U - b^U$ and $A \ominus_H B = [a^L - b^L, a^U - b^U]$.
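The existence condition for the Hukuhara difference can be sketched numerically; the helper name is illustrative, and intervals are modeled as plain tuples under the assumption $C = [a^L - b^L,\, a^U - b^U]$ whenever that pair is a valid interval.

```python
# Hukuhara difference sketch: C = A ⊖_H B satisfies A = B + C, so
# C = (aL - bL, aU - bU), and it exists only when aL - bL <= aU - bU,
# i.e. when A is at least as wide as B.

def hukuhara_diff(a, b):
    (aL, aU), (bL, bU) = a, b
    cL, cU = aL - bL, aU - bU
    return (cL, cU) if cL <= cU else None  # None: difference does not exist

print(hukuhara_diff((3, 8), (1, 2)))  # (2, 6): width of A (5) >= width of B (1)
print(hukuhara_diff((1, 2), (3, 8)))  # None: A is narrower than B
```

This makes explicit why H-differentiability is a genuine restriction: the difference quotient in Definition 2 is only defined when the widths shrink compatibly.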

*Definition 2 (see [23]). *Let $X$ be an open set in $\mathbb{R}$. An interval-valued function $F: X \to \mathcal{I}$ is called *H-differentiable* at $x_0$ if there exists a closed interval $A(x_0) \in \mathcal{I}$ such that the limits
$$\lim_{h \to 0^{+}} \frac{F(x_0 + h) \ominus_H F(x_0)}{h} \quad \text{and} \quad \lim_{h \to 0^{+}} \frac{F(x_0) \ominus_H F(x_0 - h)}{h}$$
both exist and are equal to $A(x_0)$. In this case, $A(x_0)$ is called the *H-derivative* of $F$ at $x_0$.

Let $F$ be an interval-valued function defined on $\mathbb{R}^n$. One says that $F$ is *continuous* at $x_0$ if $\lim_{x \to x_0} F(x) = F(x_0)$.

*Definition 3 (see [23]). *Let $F = [F^L, F^U]$ be an interval-valued function defined on $\mathbb{R}^n$ and let $x_0 \in \mathbb{R}^n$ be fixed.(i)One says that $F$ is *weakly continuously differentiable* at $x_0$ if the real-valued functions $F^L$ and $F^U$ are continuously differentiable at $x_0$ (i.e., all the partial derivatives of $F^L$ and $F^U$ exist on some neighborhoods of $x_0$ and are continuous at $x_0$).(ii)One says that $F$ is *continuously H-differentiable* at $x_0$ if all of the partial H-derivatives exist on some neighborhoods of $x_0$ and are continuous at $x_0$ (in the sense of interval-valued functions).

Proposition 4 (see [23]). *Let $F$ be an interval-valued function defined on $\mathbb{R}^n$. If $F$ is H-differentiable at $x_0$, then $F$ is weakly differentiable at $x_0$; if $F$ is continuously H-differentiable at $x_0$, then $F$ is weakly continuously differentiable at $x_0$.*

#### 3. The Wolfe Primal and Dual Problems

In this section, we introduce the Wolfe primal and dual pair of problems for the conventional nonlinear programming problem, following Wu [12]. We consider the interval-valued optimization problem of minimizing an interval-valued objective $F(x) = [F^L(x), F^U(x)]$ subject to real-valued constraints $g_i(x) \leq 0$, $i = 1, \ldots, m$, where $F$ is an interval-valued function and the $g_i$, $i = 1, \ldots, m$, are real-valued functions.

We denote by the feasible set of primal problem (). We also denote by the set of all objective values of primal problem ().

*Definition 5 (see [12]). *Let $\bar{x}$ be a feasible solution of the primal problem. One says that $\bar{x}$ is a *nondominated solution* of the primal problem if there exists no feasible solution $x$ such that $F(x) \prec_{LU} F(\bar{x})$. In this case, $F(\bar{x})$ is called the *nondominated objective value* of $F$.

We denote by the set of all nondominated objective values of problem ().
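Definition 5's nondominance test can be sketched over a finite sample of objective intervals; the strict LU dominance convention ($A \prec B$ iff $A \preceq B$ componentwise and $A \neq B$) is an assumption here, and the helper names are illustrative.

```python
# Filter a finite set of interval objective values down to the
# nondominated ones under the strict LU order.

def lu_lt(a, b):
    """A strictly LU-dominates B: componentwise <= and not equal."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def nondominated(values):
    """Keep intervals not strictly LU-dominated by any other value."""
    return [v for v in values if not any(lu_lt(w, v) for w in values)]

objs = [(1, 4), (2, 3), (0, 5), (2, 6)]
print(nondominated(objs))  # [(1, 4), (2, 3), (0, 5)]
```

Only (2, 6) is discarded, because (1, 4) dominates it; the mutually non-comparable intervals (1, 4), (2, 3), and (0, 5) all survive, which is why the solution concept is a set of nondominated values rather than a single minimum.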

If we assume that the interval-valued function and the real-valued functions and , are differentiable on , the dual problem of () is formulated as follows:
We denote by the feasible set of dual problem () consisting of elements . We write
and denote by
the set of all objective values of the dual problem.

*Definition 6 (see [12]). *Let $(\bar{x}, \bar{\lambda})$ be a feasible solution of the dual problem. One says that $(\bar{x}, \bar{\lambda})$ is a *nondominated solution* of the dual problem if there exists no feasible solution $(x, \lambda)$ whose dual objective value strictly dominates that of $(\bar{x}, \bar{\lambda})$. In this case, the objective value at $(\bar{x}, \bar{\lambda})$ is called the *nondominated objective value* of the dual problem.

We denote by the set of all nondominated objective values of problem ().

#### 4. The KKT Optimality Conditions for Interval-Valued Optimization Problems

In this section, we extend the concepts of preinvex and invex for real-valued functions to interval-valued functions. Under the assumption of invexity, we propose the KKT optimality sufficient and necessary conditions for interval-valued optimization problems.

##### 4.1. Preinvexity and Invexity of the Interval-Valued Functions

The concept of convexity plays an important role in optimization theory. In recent years, the concept of convexity has been generalized in several directions using novel and innovative techniques. An important generalization of convex functions is the preinvex function, which was introduced by Weir and Mond ([19]) and by Weir and Jeyakumar ([20]). Yang et al. ([21]) have established characterizations of prequasi-invex functions under the conditions of lower semicontinuity, upper semicontinuity, and semistrict prequasi-invexity, respectively.

*Definition 7 (see [19, 20]). *A set $X \subseteq \mathbb{R}^n$ is said to be invex if there exists a vector function $\eta: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ such that $y + \lambda \eta(x, y) \in X$ for all $x, y \in X$ and all $\lambda \in [0, 1]$.

*Definition 8 (see [19, 20]). *Let $X \subseteq \mathbb{R}^n$ be an invex set with respect to $\eta$. Let $f: X \to \mathbb{R}$. One says that $f$ is preinvex if $f(y + \lambda \eta(x, y)) \leq \lambda f(x) + (1 - \lambda) f(y)$ for all $x, y \in X$ and all $\lambda \in [0, 1]$.
Hanson has also introduced the concept of an invex function in [17].

*Definition 9 (see [17]). *Let $X \subseteq \mathbb{R}^n$ be an invex set with respect to $\eta$, and let $f: X \to \mathbb{R}$ be differentiable. One says that $f$ is invex if $f(x) - f(y) \geq \eta(x, y)^T \nabla f(y)$ for all $x, y \in X$.

Pini ([18]) has shown that if $f$ is defined on an invex set $X$, and if it is preinvex and differentiable, then $f$ is also invex with respect to the same $\eta$, but the converse is not true in general. Wu has extended the concept of convexity to interval-valued functions in [11–14].
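The invexity inequality of Definition 9 can be spot-checked numerically. The sketch below uses the special case $\eta(x, y) = x - y$, for which invexity reduces to the ordinary gradient inequality of a convex function; the function choice and grid are illustrative, and a finite check is of course not a proof.

```python
# Numeric spot-check of the invexity inequality
#     f(x) - f(y) >= eta(x, y) * f'(y)
# for f(x) = x^2 with eta(x, y) = x - y, where the inequality is
# equivalent to (x - y)^2 >= 0 and so holds everywhere.

f = lambda x: x * x
df = lambda y: 2 * y          # derivative of f
eta = lambda x, y: x - y      # the convex special case of eta

grid = [i / 10 for i in range(-30, 31)]
ok = all(
    f(x) - f(y) >= eta(x, y) * df(y) - 1e-12  # tolerance for float rounding
    for x in grid
    for y in grid
)
print(ok)  # True
```

For a genuinely nonconvex invex function one would need a nonlinear $\eta$; the point of invexity is precisely that such an $\eta$ may exist even when $x - y$ fails.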

Now, we extend the concepts of preinvexity and invexity to the interval-valued functions.

*Definition 10. *Let $X$ be an invex set with respect to $\eta$, and let $F = [F^L, F^U]$ be an interval-valued function defined on $X$. One says that $F$ is *LU-preinvex* at $y$ with respect to $\eta$ if
$$F(y + \lambda \eta(x, y)) \preceq_{LU} \lambda F(x) + (1 - \lambda) F(y)$$
for each $\lambda \in [0, 1]$ and each $x \in X$.

*Definition 11. *Let $X$ be an invex set with respect to $\eta$, and let $F = [F^L, F^U]$ be an interval-valued function defined on $X$. One says that $F$ is *invex* at $y$ if the real-valued functions $F^L$ and $F^U$ are invex at $y$.

It is obvious that the particular case of an H-differentiable LU-convex interval-valued function is obtained by choosing $\eta(x, y) = x - y$ in an H-differentiable invex interval-valued function, but an H-differentiable invex interval-valued function may not be an H-differentiable LU-convex interval-valued function.

*Example 12. *Consider that , ; this interval-valued function is invex since and have a unique global minimizer at , where and is therefore invex. However, is not -convex at and therefore not -preinvex. As and , then for . Consider the following:
Taking , , we get . Then, , , so the real-valued function is not convex at and the interval-valued function is not -convex at .

Proposition 13. *Let $X$ be an invex set with respect to $\eta$, and let $F = [F^L, F^U]$ be an interval-valued function defined on $X$. The interval-valued function $F$ is LU-preinvex at $y$ with respect to $\eta$ if and only if the real-valued functions $F^L$ and $F^U$ are preinvex at $y$ with respect to the same $\eta$.*

*Proof. *By Definition 10, we have

Since and , then

The proof is complete.

Proposition 14. *Let $X$ be an invex set with respect to $\eta$, and let $F = [F^L, F^U]$ be an interval-valued function defined on $X$. If the interval-valued function $F$ is LU-preinvex with respect to $\eta$ and H-differentiable at $y$, then $F$ is invex at $y$ with respect to the same $\eta$.*

*Proof. *From Definition 10 and Proposition 13, we have

We can rewrite the two above inequalities as

Since , , and the interval-valued function is H-differentiable at , then the real-valued functions and are differentiable at by Definition 3. Divide by to obtain

Taking the limit as , we get

From the two above inequalities, we can see that and are invex at with respect to the same . By Definition 11, it can be shown that the interval-valued function is invex at with respect to the same .

##### 4.2. The KKT Optimality Conditions for Invex Interval-Valued Optimization Problems

Now we consider the following two real-valued optimization problems:

Wu ([12]) has proposed the following result.

Proposition 15 (see [12]). *(1) If is an optimal solution of problem , then is a nondominated solution of problem ();**(2) If is an optimal solution of problem , then is a nondominated solution of problem ().*

Now, we show that the KKT conditions are necessary and sufficient for optimality under the assumption of invexity, provided a modified Slater condition is satisfied.

Let us rename the constraint functions for as for . Let denote the set of active constraints at , which is defined by

Theorem 16 (KKT necessary conditions for P_{LU}). *Suppose that is an optimal solution of the problem of and there exists a point such that and that for all . Suppose, also, that and are differentiable for at and and are invex with respect to the same vector function . Then there exist , for such that
*

*Proof. *Since denote the set of active constraints at . Then,

If we can show that
the result will follow as in [16, 24] by applying Farkas' Lemma, where is a real-valued function.

Assume that (26) does not hold; then there exists such that

Since the Slater-type condition holds, then

By the invexity of , we have

Then
for all . Therefore, for some positive are small enough such that
which shows that is a feasible solution of P_{LU}. Since is an optimal solution of the problem P_{LU}, we have
then
for all . When , we have
which contradicts (27). Then, (26) is satisfied. By applying Farkas' Lemma and setting for , it can be shown that there exists such that

From (35), and = , ; , ; if . Then, we get
The result follows.

Theorem 17 (KKT necessary conditions for ). *Suppose that is a nondominated solution of primal problem () and there exists a point such that and that for all . Suppose, also, that is H-differentiable and are differentiable for at and and are invex with respect to the same vector function . Then there exist , for such that
*

*Proof. *Since denote the set of active constraints at . Then,

Suppose that there exists such that

Since the Slater-type condition holds, then
by the invexity of , we have
then
for all . Therefore, for some positive are small enough such that
which shows that is a feasible solution of primal problem (). Since is a nondominated solution of primal problem (), there exists no feasible solution such that , which means that there exists no feasible solution such that one of the following is satisfied.(1), and ;(2), and ;(3), and .

That is to say, we have the following results for the feasible solution of primal problem ():
then
for all . When , we have
which contradicts (39). Therefore, we conclude that the system of inequalities in (39) has no solution. According to Farkas' Lemma [24] and setting for , it can be shown that there exists such that

From (47), , ; , ; if . Then, we get
The result follows.

We can also show that the KKT sufficient condition holds under the assumption of invexity.

Theorem 18 (KKT sufficient conditions). *Suppose that the interval-valued function is H-differentiable and is differentiable for at and , , and are invex with respect to the same vector function . If there exist Lagrange multipliers for such that
**
where with components, then is a nondominated solution of primal problem ().*

*Proof. *Suppose, to the contrary, that is not a nondominated solution of (). Then, there exists a feasible solution such that . From Definition 11 and the assumptions, it can be shown that , and are invex at with respect to the same vector function for all .

From the feasibility of , we get

From (49), we have

Since and are invex at with respect to the same , and for all . Then

From (50) and (51), we have

From (52), we get

Since the interval-valued function is invex at with respect to , then and are invex at with respect to the same . We have

By (53)–(55), (57), we obtain
Similarly, from (53)–(54), (56), and (58), we have
which contradicts that . The result follows.
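The KKT machinery of Theorem 18 can be illustrated numerically on a toy problem: minimize $F(x) = [x^2,\, x^2 + 1]$ subject to $g(x) = 1 - x \leq 0$, whose candidate point is $\bar{x} = 1$. The stationarity form used below, $\nabla F^L(\bar{x}) + \nabla F^U(\bar{x}) + \sum_i \lambda_i \nabla g_i(\bar{x}) = 0$, follows Wu's convention and is an assumption, since the theorem's exact formula is not reproduced here; the function names are illustrative.

```python
# Hedged numeric sketch of a KKT sufficient-condition check for the toy
# interval-valued problem min [x^2, x^2 + 1] s.t. 1 - x <= 0.

dFL = lambda x: 2 * x    # gradient of F^L(x) = x^2
dFU = lambda x: 2 * x    # gradient of F^U(x) = x^2 + 1 (constant shift)
g   = lambda x: 1 - x    # feasible iff g(x) <= 0
dg  = lambda x: -1.0     # gradient of g

x_bar = 1.0
# Solve the assumed stationarity equation dFL + dFU + lam * dg = 0
# for the single multiplier lam.
lam = -(dFL(x_bar) + dFU(x_bar)) / dg(x_bar)

assert lam >= 0                     # dual feasibility
assert abs(lam * g(x_bar)) < 1e-12  # complementary slackness
assert g(x_bar) <= 1e-12            # primal feasibility
print(lam)  # 4.0
```

With a nonnegative multiplier, an active constraint, and stationarity all holding at $\bar{x} = 1$, the sketch is consistent with $\bar{x}$ being a nondominated solution, as both $F^L$ and $F^U$ are minimized over the feasible set $x \geq 1$ at that point.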

#### 5. Solvability

In this section, we discuss the solvability for Wolfe's primal and dual problems.

Lemma 19. *Let , , , and , be continuously differentiable on . Suppose that is a feasible solution of primal problem () and is a feasible solution of dual problem (). If , , and , , are invex at with respect to the same vector function , then the following statements hold true.*(i)*If , then .*(ii)*If , then .*(iii)*If , then .*(iv)*If , then .*

*Proof. *From Definitions 3 and 11, it can be shown that and are continuously differentiable on and invex at with respect to the same .

Since is a feasible solution of primal problem (), then
for all . Then we have
Then statement (i) holds true. If , then
it can be shown that statement (ii) holds. On the other hand, considering the real-valued function , we can also obtain statements (iii) and (iv) by using similar arguments.

Lemma 20. *Let , , and , , be continuously differentiable on . Suppose that is a feasible solution of primal problem () and is a feasible solution of dual problem (). If , and , , are invex at with respect to the same vector function , then the following statements hold true.*(i)*If , then .*(ii)*If , then .*(iii)*If , then .*(iv)*If , then .*

*Proof. *From Definitions 3 and 11, it can be shown that and are continuously differentiable on and invex at with respect to the same . Consider the following:

Then statement (i) holds true. If , then statement (ii) holds true by Lemma 19(iv). On the other hand, we can also obtain statements (iii) and (iv) by using similar arguments and Lemma 19(i) and (ii), respectively.

Proposition 21. *Let , and , , be continuously differentiable on . Suppose that is a feasible solution of primal problem () and is a feasible solution of dual problem (). If , and , , are invex at with respect to the same vector function , then the following statements hold true.*(i)If and are comparable, then .(ii)If and are not comparable, then or .

*Proof. *If , then using Lemma 19(i) and (iii). On the other hand, if , then using Lemma 20(i) and (iii).

If and are not comparable, then we have(1) and ;(2) and ;(3) and ;(4) and ;(5) and ;(6) and .

By using Lemma 19 (ii) and (iv), and Lemma 20 (ii) and (iv), it can be shown that or .

Theorem 22 (solvability). *Let , , and , , be invex with respect to the same vector function and continuously differentiable on . Suppose that is a feasible solution of dual problem () and . Then solves dual problem (); that is, .*

*Proof. *Suppose that is not a nondominated solution of dual problem (). Then there exists a feasible solution of dual problem () such that . According to the assumption of , there exists a feasible solution of primal problem () such that
It means that the following (a1) or (a2) or (a3) is satisfied:(a1) and ;(a2) and ;(a3) and .

If and are comparable, then, from Proposition 21(i), we get , which contradicts (65). If and are not comparable, we have or by using Proposition 21(ii), which contradicts one of (a1)–(a3). This completes the proof.

Theorem 23 (solvability). *Let , and , be invex with respect to the same vector function and continuously differentiable on . Suppose that is a feasible solution of primal problem () and . Then solves primal problem (); that is, .*

*Proof. *Suppose that is not a nondominated solution of primal problem (). Then there exists a feasible solution of primal problem () such that . According to the assumption of , there exists a feasible solution of dual problem () such that

It means that the following (c1) or (c2) or (c3) is satisfied:(c1) and ;(c2) and ;(c3) and .

If and are comparable, then, from Proposition 21(i), we get , which contradicts (66). If and are not comparable, we have or by using Proposition 21(ii), which contradicts one of (c1)–(c3). We complete the proof.

Theorem 24 (solvability). *Let , and , , be invex with respect to the same vector function and continuously differentiable on . Suppose that is a feasible solution of primal problem () and is a feasible solution of dual problem (). If , then solves primal problem () and solves dual problem ().*

*Proof. *The proof follows from Theorems 22 and 23.

#### 6. Duality Theorems

In this section, we present the weak and strong duality theorems under the assumption of invexity. Our results generalize the results of duality theorems by Wu in [11, 12].

Under the assumption of convexity, Wu ([11, 12]) has introduced two kinds of concepts of no duality gap and studied strong duality theorems.

*Definition 25 (see [11, 12]). *Two kinds of concepts of no duality gap are presented below.(i)We say that the primal problem () and dual problem () have no duality gap in weak sense if and only if Min.(ii)We say that the primal problem () and dual problem () have no duality gap in strong sense if and only if there exist and such that .

Wu ([11, 12]) has shown that if the primal problem () and dual problem () have no duality gap in the strong sense, then they have no duality gap in the weak sense.

Now, we establish strong duality theorems in weak and strong sense under the assumption of invexity, respectively.

Theorem 26 (strong duality theorem in weak sense). *Let , , and , , be invex with respect to the same vector function and continuously differentiable on . If one of the following conditions is satisfied:*(i)*there exists a feasible solution of primal problem () such that ,*(ii)*there exists a feasible solution of dual problem () such that ,*

Then the primal problem () and dual problem () have no duality gap in weak sense.

*Proof. *Suppose that condition (i) is satisfied; from Theorem 23, it can be shown that . According to the assumption of , there exists a feasible solution of dual problem () such that . Using arguments similar to those in the proof of Theorem 22 applied to (65), we have . Suppose that condition (ii) is satisfied; from Theorem 22, we have . Since , there exists a feasible solution of primal problem () such that . Using arguments similar to those in the proof of Theorem 23 applied to (66), we have . Then, the primal problem () and dual problem () have no duality gap in the weak sense.

Theorem 27 (strong duality theorem in strong sense). *Let , be invex with respect to the same vector function and continuously differentiable on . Suppose that is a solution of the problem (and is hence a nondominated solution of primal problem () by Proposition 15). If there exists a point such that and for all , , and if , then the primal problem () and dual problem () have no duality gap in the strong sense; that is to say, there exist such that solves dual problem () and .*

*Proof. *According to the assumptions and Theorem 16, there exist such that
It can be shown that is a feasible solution of dual problem () and . Using Theorem 24, we complete the proof.

#### 7. Conclusion

The Karush-Kuhn-Tucker optimality conditions and duality for interval-valued nonlinear optimization problems under the assumption of invexity are presented in this paper. Our results generalize the results of Wu in [11, 12]. Interval-valued optimization provides a deterministic framework for studying mathematical programming problems in the face of data uncertainty. The Karush-Kuhn-Tucker optimality conditions can also be used to obtain the nondominated solutions of interval-valued optimization problems. In future research, we may consider the Karush-Kuhn-Tucker optimality conditions and duality for multiobjective interval-valued nonlinear optimization problems under the assumption of generalized convexity.

#### Acknowledgments

This work was partially supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program nos. 2013JQ1020 and 2013KJXX-29), the National Natural Science Foundation of China (Program nos. 61302050, 61362029, 61100165, 11301415, and 61100166), the Special Funds for the Construction of Key Disciplines in Shaanxi Province, and the Science Plan Foundations of the Education Bureau of Shaanxi Province (nos. 11JK1051, 12JK0887, 2013JK1130, and 2013JK1182).

#### References

- J. R. Birge and F. Louveaux, *Introduction to Stochastic Programming*, Physica, New York, NY, USA, 1997.
- P. Kall and J. Mayer, *Stochastic Linear Programming: Models, Theory and Computation*, Springer, New York, NY, USA, 2nd edition, 2011.
- A. Prékopa, *Stochastic Programming*, Kluwer Academic Publishers Group, Boston, Mass, USA, 2004.
- M. Delgado, J. Kacprzyk, J. L. Verdegay, and M. A. Vila, Eds., *Fuzzy Optimization: Recent Advances*, Physica, New York, NY, USA, 1994.
- A. L. Soyster, “Convex programming with set-inclusive constraints and applications to inexact linear programming,” *Operations Research*, pp. 1154–1157, 1973.
- A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, *Robust Optimization*, Princeton Series in Applied Mathematics, Princeton University Press, Princeton, NJ, USA, 2009.
- L. El Ghaoui, F. Oustry, and H. Lebret, “Robust solutions to uncertain semidefinite programs,” *SIAM Journal on Optimization*, vol. 9, no. 1, pp. 33–52, 1999.
- L. El Ghaoui and H. Lebret, “Robust solutions to least-squares problems with uncertain data,” *SIAM Journal on Matrix Analysis and Applications*, vol. 18, no. 4, pp. 1035–1064, 1997.
- D. Bertsimas and M. Sim, “Tractable approximations to robust conic optimization problems,” *Mathematical Programming*, vol. 107, no. 1-2, pp. 5–36, 2006.
- J. Rohn, “Duality in interval linear programming,” in *Proceedings of an International Symposium on Interval Mathematics*, pp. 521–529, Academic Press, New York, NY, USA, 1980.
- H.-C. Wu, “Wolfe duality for interval-valued optimization,” *Journal of Optimization Theory and Applications*, vol. 138, no. 3, pp. 497–509, 2008.
- H.-C. Wu, “On interval-valued nonlinear programming problems,” *Journal of Mathematical Analysis and Applications*, vol. 338, no. 1, pp. 299–316, 2008.
- H.-C. Wu, “Duality theory for optimization problems with interval-valued objective functions,” *Journal of Optimization Theory and Applications*, vol. 144, no. 3, pp. 615–628, 2010.
- H.-C. Wu, “Duality theory in interval-valued linear programming problems,” *Journal of Optimization Theory and Applications*, vol. 150, no. 2, pp. 298–316, 2011.
- H.-C. Zhou and Y.-J. Wang, “Optimality condition and mixed duality for interval-valued optimization,” *Fuzzy Information and Engineering*, vol. 62, pp. 1315–1323, 2009.
- A. Ben-Israel and B. Mond, “What is invexity?” *Journal of the Australian Mathematical Society, Series B*, vol. 28, no. 1, pp. 1–9, 1986.
- M. A. Hanson, “On sufficiency of the Kuhn-Tucker conditions,” *Journal of Mathematical Analysis and Applications*, vol. 80, no. 2, pp. 545–550, 1981.
- R. Pini, “Invexity and generalized convexity,” *Optimization*, vol. 22, no. 4, pp. 513–525, 1991.
- T. Weir and B. Mond, “Pre-invex functions in multiple objective optimization,” *Journal of Mathematical Analysis and Applications*, vol. 136, no. 1, pp. 29–38, 1988.
- T. Weir and V. Jeyakumar, “A class of nonconvex functions and mathematical programming,” *Bulletin of the Australian Mathematical Society*, vol. 38, no. 2, pp. 177–189, 1988.
- X. M. Yang, X. Q. Yang, and K. L. Teo, “Characterizations and applications of prequasi-invex functions,” *Journal of Optimization Theory and Applications*, vol. 110, no. 3, pp. 645–668, 2001.
- R. E. Moore, *Methods and Applications of Interval Analysis*, vol. 2, SIAM, Philadelphia, Pa, USA, 1979.
- H.-C. Wu, “The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function,” *European Journal of Operational Research*, vol. 176, no. 1, pp. 46–59, 2007.
- H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” in *Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability*, J. Neyman, Ed., pp. 481–492, University of California Press, Berkeley, Calif, USA, 1951.