Abstract

We consider the linear programming problem with uncertainty set described by the -norm. We show that the robust counterpart of this problem is equivalent to a computationally tractable convex optimization problem. We provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertain coefficients obey independent and identically distributed normal distributions.

1. Introduction

Robust optimization is a rapidly developing methodology for addressing optimization problems under uncertainty. Compared with sensitivity analysis and stochastic programming, the robust optimization approach can handle cases where fluctuations of the data may be large and can guarantee satisfaction of hard constraints, which is required in some practical settings. The advantage of robust optimization is that it protects the optimal solution against any realization of the uncertainty in a given bounded uncertainty set. The robust linear optimization problem with uncertain data was first introduced by Soyster [1]. The basic idea is to assume that the vector of uncertain data can be any point (scenario) in the uncertainty set, to find a solution that satisfies all the constraints for any possible scenario from the uncertainty set, and to optimize the worst-case value of the objective function. Ben-Tal and Nemirovski [2, 3] and El Ghaoui et al. [4, 5] addressed the overconservatism of robust solutions by allowing the uncertainty sets for the data to be ellipsoids and proposed efficient algorithms to solve convex optimization problems under data uncertainty. Bertsimas et al. [6, 7] proposed a different approach to control the level of conservatism of the solution, which has the advantage of leading to a linear optimization model. For more about robust optimization, we refer the reader to [8–15].

Consider the following linear programming problem: where , is an uncertain matrix which belongs to an uncertainty set , , and is a given set. The robust counterpart of problem (1) is

An optimal solution is said to be a robust solution if and only if it satisfies all the constraints for any .

In this paper, we consider the linear optimization problem (1) with uncertainty set described by the -norm, not only to overcome the drawback that all possible values of the uncertain parameters are given the same weight, but also to take into account the robust cost of the robust optimization model mentioned in [6]. We show that the robust counterpart of problem (1) is a computationally tractable convex optimization problem. We also provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertain coefficients obey independent and identically distributed normal distributions.

The paper is structured as follows. In Section 2, we introduce the -norm and its dual norm and compare them with the Euclidean norm. In Section 3, we show that the linear optimization problem (1) with uncertainty set described by the -norm is equivalent to a convex programming problem. In Section 4, we provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertainty set is described by the -norm.

2. The -Norm

In this section, we introduce the -norm and its dual norm. Furthermore, we establish worst-case bounds on the proximity of the -norm to the Euclidean norm considered in Ben-Tal and Nemirovski [2, 10] and El Ghaoui et al. [4, 5].

2.1. The -Norm and Its Dual

We consider the th constraint of problem (1), . We denote by the set of coefficients , and , which takes values in the interval according to a symmetric distribution with mean equal to the nominal value . For every , we introduce a parameter , which takes values in the interval . Unlike the approach proposed in [1], we do not assume that all of the coefficients , will change. Our goal is to protect against the case in which up to of these coefficients are allowed to change and take their worst-case values simultaneously. Next, we introduce the following definition of the -norm.

Definition 1. For a given nonzero vector with , , we define the -norm as with .

Remark 2. Obviously, is indeed a norm, since:
(1) , and if and only if , since is not a zero vector, with , ;
(2) ;
(3) .

Remark 3. (1) Suppose that (i) ; (ii) , , , ; then the -norm degenerates into the -norm studied by Bertsimas and Sim [6]; that is,
(2) If and , then the -norm degenerates into , and we can get , .
(3) If and , then the -norm degenerates into , and we have , .
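The degenerate cases of Remark 3 can be checked numerically. A minimal sketch, assuming the Bertsimas–Sim form of the norm (the sum of the ⌊Γ⌋ largest absolute entries plus a fractional remainder); the function name `d_norm` is an illustrative choice, not notation from the paper:

```python
import math

def d_norm(y, gamma):
    # Gamma-norm of Bertsimas and Sim (assumed form): the sum of the
    # floor(gamma) largest absolute entries plus the fractional part
    # of gamma times the next largest absolute entry.
    a = sorted((abs(v) for v in y), reverse=True)
    k = min(int(math.floor(gamma)), len(a))
    value = sum(a[:k])
    if k < len(a):
        value += (gamma - k) * a[k]
    return value

y = [3.0, -1.0, 2.0]
print(d_norm(y, 1.0))   # 3.0  (gamma = 1: the l_infinity norm)
print(d_norm(y, 3.0))   # 6.0  (gamma = n: the l_1 norm)
print(d_norm(y, 1.5))   # 4.0  (interpolates: 3.0 + 0.5 * 2.0)
```

For 1 < Γ < n the norm interpolates between the two degenerate cases, which is exactly the flexibility exploited later when adjusting the level of conservatism.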
Next we derive the dual norm.

Proposition 4. The dual norm of the -norm is with , .

Proof. The norm is equivalent to
According to linear programming strong duality, we have
Then, if and only if is feasible.
We give the following dual norm by
From (10) we obtain that
Using LP duality again, we have Thus, and we obtain that

Remark 5. (1) When the -norm degenerates into -norm, we can get its dual norm:

(2) When the -norm degenerates into , we can get its dual norm:

(3) When the -norm degenerates into , we can get its dual norm:
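For the Bertsimas–Sim special case, the dual norm is known to be max(‖x‖∞, ‖x‖₁/Γ); assuming that form, the following sketch spot-checks the duality inequality ⟨x, y⟩ ≤ ‖x‖*·‖y‖ on random vectors (the function names are illustrative, not the paper's notation):

```python
import random

def d_norm(y, gamma):
    # Gamma-norm (Bertsimas-Sim form, assumed): floor(gamma) largest
    # absolute entries plus a fractional remainder.
    a = sorted((abs(v) for v in y), reverse=True)
    k = min(int(gamma), len(a))
    value = sum(a[:k])
    if k < len(a):
        value += (gamma - k) * a[k]
    return value

def dual_d_norm(x, gamma):
    # Known dual of the Gamma-norm: max(||x||_inf, ||x||_1 / gamma).
    return max(max(abs(v) for v in x), sum(abs(v) for v in x) / gamma)

rng = random.Random(0)
n, gamma = 5, 2.5
for _ in range(1000):
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    y = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    inner = sum(a * b for a, b in zip(x, y))
    # generalized Cauchy-Schwarz: <x, y> <= ||x||* ||y||
    assert inner <= dual_d_norm(x, gamma) * d_norm(y, gamma) + 1e-12
```

The degenerate cases of Remark 5 are consistent with this formula: for Γ = 1 it reduces to the l₁ norm (the dual of l∞), and for Γ = n it reduces to the l∞ norm (the dual of l₁).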

2.2. Comparison with the Euclidean Norm

In the related literature, uncertainty sets have typically been described using the Euclidean norm, so it is of interest to study the proximity between the -norm and the Euclidean norm.

Proposition 6. For every ,

Proof. First, we will give a lower bound on by solving the following problem: where .
Let ; we can get that , , and , . It is easy to see that the objective function can never decrease if we let , ; then we have that (20) is equivalent to the following problem:
Our goal is to maximize the convex function over a polytope; hence there exists an extreme point optimal solution of the above problem. The extreme points are where is the unit vector with the th element equal to one and the rest equal to zero. Obviously, the problem attains the optimal value . The inequality then follows by taking the square root.
Similarly, to obtain an upper bound on , we solve the following nonlinear optimization problem: clearly, the objective function can never increase with , , and we can show that the above problem is equivalent to the following problem:
First, we use the Lagrange multiplier method to reformulate the problem as
Applying the KKT conditions to this problem, an optimal solution can be found: otherwise, . It is easy to see that the optimal objective value is . Taking the square root, we have that that is, Since
we can deduce that with , .
Thus, we have Therefore, the result holds.

Remark 7. (1) When , we easily obtain the comparison between the -norm and the Euclidean norm; that is, The comparison between the dual of the -norm and the Euclidean norm is
(2) When -norm degenerates into and , the comparison results are
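The exact constants of Proposition 6 cannot be restated here, but for the Bertsimas–Sim special case an elementary sandwich, ‖y‖₂/√n ≤ ‖y‖_Γ ≤ min(Γ, √n)·‖y‖₂, follows from ‖y‖∞ ≤ ‖y‖_Γ ≤ min(Γ‖y‖∞, ‖y‖₁). A numeric spot-check of that sandwich (an illustration under the assumed form of the norm, not the proposition's own bound):

```python
import math
import random

def d_norm(y, gamma):
    # Gamma-norm (Bertsimas-Sim form, assumed)
    a = sorted((abs(v) for v in y), reverse=True)
    k = min(int(gamma), len(a))
    value = sum(a[:k])
    if k < len(a):
        value += (gamma - k) * a[k]
    return value

rng = random.Random(1)
n, gamma = 6, 2.5
root_n = math.sqrt(n)
for _ in range(2000):
    y = [rng.gauss(0.0, 1.0) for _ in range(n)]
    l2 = math.sqrt(sum(v * v for v in y))
    dg = d_norm(y, gamma)
    # sandwich between the Euclidean norm and the Gamma-norm
    assert l2 / root_n - 1e-12 <= dg <= min(gamma, root_n) * l2 + 1e-12
```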

3. Robust Counterpart

In this section, we will show that the robust formulation of (1) with the -norm is equivalent to a linear programming problem.

We consider the following robust formulation of (1) with the -norm:

If is selected as an integer, the protection function of the th constraint is Note that when , , the constraints are equivalent to those of the nominal problem. If , , , we recover the method of Soyster [1]. Likewise, if we assume that and , , we recover the method of Bertsimas and Sim [6]. Therefore, by varying , we have the flexibility of adjusting the robustness of the method against the level of conservatism of the solution.

We need the following proposition to reformulate (36) as a linear programming problem.

Proposition 8. Given a vector , the protection function of the th constraint is equivalent to the following linear programming problem:

Proof. An optimal solution of problem (39) obviously consists of variables at 1, which corresponds to a subset The objective function of problem (39) then becomes which is equivalent to problem (38).
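The combinatorial form of the protection function in Proposition 8 can be sketched directly for the Γ-norm special case of Remark 3 (an assumption; the half-widths `d` and the function name are illustrative):

```python
def protection(x, d, gamma):
    # Protection function of one constraint (Gamma-norm special case,
    # assumed): the worst-case extra slack needed when up to gamma of
    # the coefficients deviate by their half-widths d_j.
    contrib = sorted((dj * abs(xj) for dj, xj in zip(d, x)), reverse=True)
    k = min(int(gamma), len(contrib))
    value = sum(contrib[:k])
    if k < len(contrib):
        value += (gamma - k) * contrib[k]
    return value

x = [1.6, 1.6]
d = [0.5, 0.5]
print(protection(x, d, 0.0))  # 0.0: nominal problem, no protection
print(protection(x, d, 1.0))  # 0.8: one coefficient at its worst case
print(protection(x, d, 2.0))  # 1.6: full (Soyster-style) protection
```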

Next we will reformulate problem (36) as a linear programming problem.

Theorem 9. Problem (36) is equivalent to the following linear programming problem:

Proof. First, we consider the dual problem of (39): Since problem (39) is feasible and bounded for all , by strong duality, the dual problem (43) is also feasible and bounded, and their objective values coincide. By Proposition 8, we obtain that is equal to the objective function value of (43). Substituting into problem (36), we conclude that problem (36) is equivalent to the linear programming problem (42).

Remark 10. When -norm degenerates into -norm, we have the following robust counterpart of problem (36) [6]:
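The Bertsimas–Sim counterpart of Remark 10 can be handed to an off-the-shelf LP solver. A minimal sketch using `scipy.optimize.linprog` on a toy two-variable instance; the data `a`, `d`, `b` and the objective are invented for illustration:

```python
from scipy.optimize import linprog

def robust_lp(gamma, a=(1.0, 1.0), d=(0.5, 0.5), b=4.0):
    # Bertsimas-Sim robust counterpart of  max x1 + x2  subject to
    # a'x <= b, where each a_j may deviate by at most d_j and at most
    # gamma of them deviate simultaneously.  Since x >= 0 here,
    # |x_j| = x_j and the auxiliary |x| variables can be dropped.
    # decision variables: x1, x2, z, p1, p2
    c = [-1.0, -1.0, 0.0, 0.0, 0.0]        # minimize -(x1 + x2)
    A_ub = [
        [a[0], a[1], gamma, 1.0, 1.0],     # a'x + gamma*z + sum(p) <= b
        [d[0], 0.0, -1.0, -1.0, 0.0],      # d_1*x_1 <= z + p_1
        [0.0, d[1], -1.0, 0.0, -1.0],      # d_2*x_2 <= z + p_2
    ]
    b_ub = [b, 0.0, 0.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
    return -res.fun

print(robust_lp(0.0))  # 4.0 : nominal problem
print(robust_lp(1.0))  # 3.2 : one deviation protected
print(robust_lp(2.0))  # 8/3 : Soyster-style full protection
```

As the protection level grows from 0 to 2, the optimal value decreases monotonically; this is the price of robustness that the parameter trades off.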

4. Probabilistic Guarantees

In this section, we will provide probabilistic guarantees on the feasibility of an optimal robust solution when the uncertainty set is described by the -norm.

Proposition 11. We denote by and the set and the index, respectively, that achieve the maximum for in (38). Assume that is an optimal solution of problem (42). The probability that the th constraint is violated satisfies where

Proof. Let , be the solution of problem (36). Then the probability that the th constraint is violated is Let ; then we obtain the result.

Remark 12. Clearly, is related to and ; the role of the parameter or (for is a given vector) is to adjust the robustness of the proposed method against the level of conservatism of the solution. We call and the robust cost and the protection level; they control the tradeoff between the probability of violation and the effect on the objective function of the nominal problem.
Naturally, we want to bound the probability . The following result provides a bound that is independent of the solution .

Theorem 13. Let , be independent and symmetrically distributed random variables in ; then we have

Proof. Let . Then we obtain that where we use Markov’s inequality, the independence and symmetry of the distributions, and . Selecting , we obtain the result of Theorem 13.
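Assuming the bound takes the classical Chernoff-type form exp(−Γ²/(2n)) from the Bertsimas–Sim analysis (the displayed formula cannot be restated here), a Monte Carlo spot-check with uniform symmetric variables:

```python
import math
import random

# Spot-check P(sum_{j=1}^n eta_j >= Gamma) <= exp(-Gamma^2 / (2n))
# for i.i.d. symmetric eta_j in [-1, 1] (here: uniform), by Monte Carlo.
rng = random.Random(42)
n, gamma, trials = 10, 5.0, 200_000
hits = sum(
    sum(rng.uniform(-1.0, 1.0) for _ in range(n)) >= gamma
    for _ in range(trials)
)
empirical = hits / trials
bound = math.exp(-gamma ** 2 / (2 * n))
print(f"empirical = {empirical:.5f}, bound = {bound:.5f}")
```

For uniform variables the bound is quite loose (the empirical frequency is far below exp(−1.25) ≈ 0.287), as expected from a worst-case Chernoff argument over all bounded symmetric distributions.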

5. Conclusions

In this paper, we introduced the -norm and its dual, together with several propositions describing a new uncertainty set. We showed that the robust counterpart of a linear programming problem with uncertainty set described by the -norm is a computationally tractable convex optimization problem. We also provided probabilistic guarantees on the feasibility of an optimal robust solution when the uncertain coefficients obey independent and identically distributed normal distributions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11126346, 11201379) and the Fundamental Research Funds for the Central Universities of China (JBK130401).