Advances in Operations Research

Volume 2016, Article ID 5017369, 11 pages

http://dx.doi.org/10.1155/2016/5017369

## Certificates of Optimality for Mixed Integer Linear Programming Using Generalized Subadditive Generator Functions

Kevin K. H. Cheung^{1} and Babak Moazzez^{2}

^{1}School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada K1S 5B6
^{2}Department of Computer Science, University of California, Davis, CA 95616, USA

Received 10 November 2015; Revised 17 May 2016; Accepted 18 July 2016

Academic Editor: Imed Kacem

Copyright © 2016 Kevin K. H. Cheung and Babak Moazzez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We introduce generalized subadditive generator functions for mixed integer linear programs. Our results extend Klabjan’s work from pure integer programs with nonnegative entries to general MILPs. These functions suffice to achieve strong subadditive duality. Several properties of these functions are shown. We then use this class of functions to generate certificates of optimality for MILPs, and we report a computational study on knapsack problems investigating the efficiency of these certificates.

#### 1. Introduction

Most optimization problems in industry and science today are solved using commercial software packages. Such packages are generally reliable and fast, but they lack one important feature: verifiability of the computations and/or of optimality. Verification of reported benchmark results is crucial in computational research [1]. Commercial solvers rarely, if ever, disclose the exact procedure that was followed to solve a specific problem. Without it, on what basis does one trust a solution produced by these solvers? Even when one uses one’s own optimization code, bugs and mistakes remain possible.

This issue becomes more apparent when we encounter MILP instances for which different solvers return different solutions as optimal. See [2] for such examples and related discussions.

The answer to this problem is to obtain a certificate of optimality: information that proves the optimality of the solution at hand but is easier to check than the original optimization problem.

Suppose that a MILP has been solved using a solver. Verifying the optimality of the reported solution can be an arduous task. Assume that cutting planes were used to solve the instance. To verify optimality, we must verify that every cutting plane that was generated is indeed valid for the MILP. Both numerical round-off errors and coding errors can cause invalid inequalities to be generated, and verifying the validity of a cut is equivalent to solving another MILP as hard as the original problem. If a branch and bound method is used instead, then verification of optimality amounts to a tree traversal in which the correctness of the LP relaxation bound must be verified at each node. This becomes even more complicated when a branch and cut method is used.
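To make the cost of cut verification concrete, the sketch below checks candidate cuts against a toy pure IP whose feasible set is small enough to enumerate; the instance, the candidate cuts, and the helper name are our own illustrative choices, not taken from the paper. Verifying a cut amounts to minimizing its left hand side over the feasible set, which in general is itself a MILP.

```python
from itertools import product

def cut_is_valid(feasible, pi, pi0):
    """A cut pi @ x >= pi0 is valid iff min{pi x : x feasible} >= pi0;
    here that MILP is solved by brute-force enumeration."""
    return all(sum(p * xi for p, xi in zip(pi, x)) >= pi0 for x in feasible)

# Toy pure IP feasible set: {x in Z_+^2 : 2*x1 + 3*x2 >= 6, x <= 4}.
feasible = [x for x in product(range(5), repeat=2) if 2 * x[0] + 3 * x[1] >= 6]

print(cut_is_valid(feasible, (1, 1), 2))  # True: x1 + x2 >= 2 is valid
print(cut_is_valid(feasible, (1, 1), 3))  # False: x1 + x2 >= 3 cuts off (0, 2)
```

Enumeration is only viable on tiny instances; in general each such check is as hard as the original problem, which is exactly the point made above.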

For example, see [3], where the authors provide a certificate of optimality for an optimal TSP tour through 85,900 cities. This certificate consists of traversing the branch and bound tree, checking optimality at each node, and checking the validity of each cutting plane that was added. The time needed to check optimality using this certificate can reach 568.9 hours (about 24 days), while the actual time for solving the problem was 286.2 days.

On the other hand, a dual function whose value equals the objective value of the (purportedly) optimal solution verifies optimality immediately. In linear programming this is trivial, since the dual vector provides exactly such a certificate. In the case of MILP, however, the subadditive dual must be used to avoid duality gaps. This calls for a family of functions that are feasible for the subadditive dual and are also easy to evaluate.
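In the LP case, such a certificate check is a few lines of arithmetic. The sketch below is a minimal illustration (the function name, tolerance, and instance are our own choices): it verifies a claimed optimal pair for min{cx : Ax >= b, x >= 0} by checking primal feasibility, dual feasibility, and equality of objective values.

```python
def verify_lp_certificate(A, b, c, x, y, tol=1e-9):
    """Verify optimality of x for min{c x : A x >= b, x >= 0} using a dual
    vector y for max{y b : y A <= c, y >= 0}: feasibility of both sides
    plus equal objective values certifies optimality by strong LP duality."""
    m, n = len(A), len(A[0])
    primal_feas = all(sum(A[i][j] * x[j] for j in range(n)) >= b[i] - tol
                      for i in range(m)) and all(xj >= -tol for xj in x)
    dual_feas = all(sum(y[i] * A[i][j] for i in range(m)) <= c[j] + tol
                    for j in range(n)) and all(yi >= -tol for yi in y)
    same_value = abs(sum(c[j] * x[j] for j in range(n))
                     - sum(y[i] * b[i] for i in range(m))) <= tol
    return primal_feas and dual_feas and same_value

# min{x1 + 2*x2 : x1 + x2 >= 3, x >= 0}; claimed x* = (3, 0) with dual y* = (1,)
print(verify_lp_certificate([[1, 1]], [3], [1, 2], [3, 0], [1]))  # True
```

The check is linear-time in the size of the data, which is what makes the dual vector such an attractive certificate; the rest of the paper seeks an analogue for MILP.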

Subadditive generator functions defined by Klabjan [4] can serve as certificates of optimality for specific families of integer programs; however, they are restricted to pure integer programs with nonnegative entries. In this paper, we first generalize these functions to general mixed integer linear programs, and we show that they are feasible for the subadditive dual and sufficient for strong duality; that is, strong duality still holds when attention is restricted to this family. We then make use of these functions to generate certificates of optimality.

Sensitivity analysis is another important topic in MILP studies. It has been studied by Wolsey [5] using Chvátal functions and value functions of integer programs and by Hooker [6] using the branch and bound tree and inference duality (see also [7] for a survey). In both cases, however, performing sensitivity analysis is not easy for large problems. In this paper, we will see that, given an optimal dual feasible function, the tools for sensitivity analysis in linear programming generalize to mixed integer programming.

Section 2 states some preliminaries about subadditive duality. Generalized subadditive generator functions are introduced in Section 3. In Section 4, we show some more properties of these functions. Certificates of optimality and sensitivity analysis using subadditive generator functions are discussed in Section 5, and, in the last section, we present some of our computational experiments and numerical results.

#### 2. Preliminaries

Let $m$ and $n$ be positive integers. Let $A \in \mathbb{Q}^{m \times n}$, $b \in \mathbb{Q}^{m}$, and $c \in \mathbb{Q}^{n}$. Let $I \subseteq N := \{1, \ldots, n\}$ index the integer variables and let $C := N \setminus I$. Consider the following mixed integer linear program:
$$\min \{ cx : Ax = b,\ x \ge 0,\ x_j \in \mathbb{Z} \text{ for } j \in I \}.$$
The subadditive dual of the MILP [7] is given by
$$\max \{ F(b) : F(a^j) \le c_j \text{ for } j \in I,\ \bar{F}(a^j) \le c_j \text{ for } j \in C,\ F \in \Gamma^m \},$$
where $a^j$ denotes the $j$th column of $A$, $\bar{F}(d) = \limsup_{\delta \to 0^+} F(\delta d)/\delta$, and $\Gamma^m$ is the set of subadditive functions $F$ with $F(0) = 0$. Recall that a function $F \colon D \to \mathbb{R} \cup \{\infty\}$, where $D \subseteq \mathbb{R}^m$ is closed under addition (i.e., a monoid), is said to be subadditive if $F(d_1 + d_2) \le F(d_1) + F(d_2)$ for all $d_1, d_2 \in D$, with the convention that $\infty + \infty = \infty$.

Lemma 1. *If $F \in \Gamma^m$, then, for any $d \in \mathbb{R}^m$ with $\bar{F}(d) < \infty$ and any $\lambda \ge 0$, $F(\lambda d) \le \lambda \bar{F}(d)$.*

*Proof. *See [7].

Theorem 2 (weak duality; see [7]). *Suppose $x^*$ is a feasible solution to the MILP and $F$ is a feasible solution to its subadditive dual. Then, $F(b) \le cx^*$.*

*Proof. *The inequalities follow from the subadditivity of $F$ and Lemma 1.
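As a concrete toy illustration of the theorem (the instance and the choice of dual function are ours, not from the paper): the rounding function F(d) = ⌈d⌉ is subadditive, nondecreasing, and satisfies F(0) = 0, and for the single-constraint pure IP below it is dual feasible. Here the bound F(b) even attains the optimal value, so weak duality holds with equality.

```python
import math

# Toy pure IP: min{x1 + x2 : x1 + x2 >= 1.5, x in Z_+^2}; the optimum is 2.
c = [1, 1]
cols = [1, 1]      # columns a^1, a^2 of the single-row constraint matrix
b = 1.5

F = math.ceil      # F(d) = ceil(d): subadditive, nondecreasing, F(0) = 0

# Dual feasibility: F(a^j) <= c_j for every column j.
assert all(F(a) <= cj for a, cj in zip(cols, c))

# Weak duality: F(b) <= c x for every feasible x; the bound is attained
# at x = (2, 0), so this F also certifies optimality for the toy instance.
print(F(b))  # 2
```

Note how the rounding step closes the duality gap that the LP dual (which could only guarantee 1.5) would leave open.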

Theorem 3 (see [8]). *If the primal problem has a finite optimum, then so does the dual problem and in this case they are equal.*

Unlike in linear programming, finding an optimal dual subadditive function does not seem to be straightforward. Two well-known families are known [9], but they appear to be difficult to work with. However, in the case when all the entries of $A$ and $b$ are nonnegative and all variables are required to be integers, Klabjan [4] defined a family of subadditive functions that is sufficient for strong duality and computationally attractive.

In the case of linear programming, it is easy to see that $F(d) = vd$ for an LP dual feasible vector $v$ is a feasible subadditive function and, using such functions, the subadditive dual reduces to the LP dual.

The subadditive dual plays an important role in the study of mixed integer programming. Any feasible solution $F$ to the dual gives a lower bound $F(b)$ on the optimal value of the MILP. A dual feasible function $F$ with $F(b) = cx^*$ is a certificate of optimality for the MILP. Such an $F$ is the analogue of the dual vector in LP, and the reduced cost of column $j$ can be defined as $c_j - F(a^j)$. Many other properties of LP, such as complementary slackness, can also be extended to MILP. As in LP, optimal solutions can have nonzero values only for variables $j$ with $F(a^j) = c_j$ for an optimal $F$ (see [4] for more details).

#### 3. Generalized Subadditive Generator Functions

Klabjan [4] defined subadditive generator functions for pure integer programming problems with nonnegative entries as follows.

For a pure integer program with nonnegative and , a *subadditive generator function* is defined, for a given , as

where .

Also, a *subadditive ray generator function* is defined, for a given , as

where .

In [4], it is shown that these functions are feasible for the subadditive dual and that they are sufficient to achieve strong duality for an IP with nonnegative rational data. In this paper, we extend Klabjan’s work to general MILPs with no restriction on the input data. Let and define . Let be such that . Also, let be such that

*Definition 4. *For , and as specified above, define the subadditive generator function by

Also, for , a subadditive ray generator function is defined as

Note that if and , then . We will show in Section 5 that empty means that the LP relaxation of solves the as well, so is an optimal LP dual vector. Also, if and , then is identically . In general, one can choose and in many different ways. However, later in Section 5, we will show that, with specific choices of and ( and ), we can generate good certificates for . Also, note that .

For a polyhedron $P \subseteq \mathbb{R}^n$, the recession cone of $P$ is defined as $\operatorname{rec}(P) = \{ r \in \mathbb{R}^n : x + \lambda r \in P \text{ for all } x \in P \text{ and all } \lambda \ge 0 \}$. We will need the following result of Meyer [10].

Theorem 5 (Meyer [10]). *Given rational matrices and and a rational vector , let and let .*
(1) *There exist rational matrices and and a rational vector such that*
(2) *If is nonempty, the recession cones of and coincide.*

*Theorem 6. and .*

*Proof. *First, we will show that . We have

There are two cases: for , is either in or not. If , then is a feasible solution to the maximization problem

where is the unit vector with 1 as the th component and zero elsewhere. This gives us . If , then , so

which implies that is a feasible solution to the maximization problem and gives us .

To show that , first note that Gomory and Johnson [11] show that if is finite, then the limsup and the ordinary limit coincide. Now, we have

If is in , then is a feasible solution to the maximization problem

where is the unit vector. Then, the maximum will be greater than or equal to , so the limit will be greater than or equal to since . This gives us . If , then is a feasible solution to the maximization problem and a similar argument gives us .

Theorem 7. *Let be a finitely generated convex cone satisfying (5). Then, is subadditive for any choice of and .*

*Proof. *Let . To prove subadditivity, that is, , it suffices to show

If and are optimal solutions for the two maximization problems on the left, then is a feasible solution to the problem on the right and the result follows.
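A candidate dual function can also be sanity-checked for subadditivity numerically. The helper below is a hypothetical illustration (name and sampling scheme are ours): it spot-checks the subadditivity inequality on sampled pairs, using ⌈·⌉ as a simple stand-in, since evaluating the generator functions themselves requires solving the embedded maximization problems.

```python
import math
import random
from itertools import product

def looks_subadditive(F, samples):
    """Spot-check F(d1 + d2) <= F(d1) + F(d2) on all sampled pairs.
    A cheap sanity test for a candidate dual function, not a proof."""
    return all(F(d1 + d2) <= F(d1) + F(d2)
               for d1, d2 in product(samples, repeat=2))

random.seed(0)
samples = [round(random.uniform(0.1, 10), 3) for _ in range(40)]

print(looks_subadditive(math.ceil, samples))        # True: ceil is subadditive
print(looks_subadditive(lambda d: d * d, samples))  # False: d^2 is superadditive
```

Such a spot check can only refute subadditivity, never establish it; a proof like the one above is still needed.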

The following lemma shows that for any .

Lemma 8. *If is not identically , then .*

*Proof. *First, note that

Since satisfies , we get .

Suppose that . If , then, by subadditivity,

Hence, is identically , contradicting the assumption. If , then, by subadditivity,

implying that . The result now follows.

Theorem 9 (strong duality). *If is feasible, then there exist , a set , and a finitely generated convex cone with and . If is infeasible, then there exist , and a finitely generated convex cone with*

*Proof. *Without loss of generality, we can assume that , since otherwise one can multiply rows of with negative right hand side by to make nonnegative and then consider the new problem.

Let . Let be feasible, and let be valid inequalities for the set such that

Finding such valid inequalities is possible by Theorem 5, where it is shown that if and in are rational, then the convex hull of the feasible points is a (finitely generated) polyhedron. Let be an optimal dual vector where corresponds to constraints . Let . Then, we have

We show that, for these and , .

The dual program of the above LP is

The optimal value of this problem is . Let be a nonnegative vector with integer for and . We have

The last inequality holds because and is a valid inequality for the set (which follows from the fact that is valid for and and that contains all columns which are not entirely nonnegative). Recall here that and belong to (this implies that ). So we get

This implies . Also, we know that since is feasible to the subadditive dual problem for . So . Moreover, by Lemma 8, .

If is infeasible, with the same and as above, the problem has optimal value . So there exists some such that

where and . Since is feasible to the maximization problem in (22), we have and the proof is complete.

#### 4. Properties of Generalized Subadditive Generator Functions

If is any subadditive function with and is dual feasible, then is a valid inequality for [8].

In this section, we will show that, for any MILP, there is a finite set of generalized subadditive generator functions whose corresponding valid inequalities give a finite description of the convex hull of the MILP. For this reason, it is enough to restrict our attention to a subset of subadditive generator functions called basic subadditive generator functions.

In this section, denotes a submatrix of with column indices in , and is a subvector of with indices in .

Theorem 10. *The optimum value of is equal to , where for some and .*

*Proof. *By Theorem 9, there exists with . Choose , let , and consider . Then,

Theorem 11. *If is a finitely generated convex cone, then is a polyhedron.*

*Proof. * has a finite number of extreme rays. Let denote the set of extreme rays of . Since is finite, we have

which is obviously a (finitely generated) polyhedron.

*Definition 12. *A subadditive generator function is called basic if is an extreme point of (25).

It is obvious that one only needs basic subadditive generator functions. Since there are finitely many choices for and has finitely many extreme points, there are only finitely many basic subadditive generator functions. So the following theorem holds.

Theorem 13. *Given , and , there exists a finite set of subadditive generator functions such that the linear program

has the following properties:*
(1) *LP (28) is infeasible if and only if is infeasible.*
(2) *LP (28) has unbounded optimum value if and only if has unbounded optimum value.*
(3) *If neither of the cases above holds, then LP (28) has an optimal extreme point solution which is also optimal for .*

If is a convex polyhedron, then a face of is defined as , where is a valid inequality for . A facet is a face of dimension one less than that of the polyhedron. Since we know that the convex hull of feasible solutions to is polyhedral, that is, it can be described by a finite set of facet defining valid inequalities, we have the following corollary.

Corollary 14. *For and rational, there exists a finite set of subadditive generator functions such that is the convex hull of solutions to with right hand side .*

#### 5. Certificates of Optimality and Sensitivity Analysis

In this section, we show that subadditive generator functions can be used as certificates of optimality for MILPs.

*Definition 15. *A certificate of optimality for a MILP is the information that can be used to check optimality without having to solve the MILP itself.

Ideally, we are interested in types of certificates that allow us to perform the checking in (much) shorter time. By Theorem 9, any subadditive generator function with for which is smaller than can be used as a certificate of optimality.

*Definition 16. *For with optimal solution , is a certificate of optimality if . One calls a “good” certificate if . is called minimal if one has

Theorem 17. *Assume that . Let . Then, is a minimal certificate of optimality for if and only if the optimal solution of the linear programming relaxation of solves .*

*Proof. *Without loss of generality, assume that . If is such that , then with . This means that is the optimal solution to the linear programming dual of the LP relaxation of . Conversely, suppose that is the optimal solution to the linear programming dual of the LP relaxation. Then, we have with and obviously .

The following example shows that there might not be a unique optimal .

*Example 18. *Consider the following IP:

Any with and gives with . Any with will give .

Note that if the size of is much smaller than , then the certificate that we have is much easier to check, since the number of variables is dramatically reduced. However, there might still be instances where the size of is comparable to . In such cases, obviously, we do not have a good certificate. Fortunately, in our computational experiments, such cases were not observed for the families of problems considered. For example, consider the following pure ILP:

Clearly, there is a unique optimal solution ( and ). In this example, with .

Note that, in such cases where has large cardinality, we still have a certificate; that is, we have a (subadditive) dual feasible function with . Verifying optimality using this certificate is still less expensive than using the branch and bound tree as a certificate and verifying the validity of all cutting planes (as mentioned before, verifying the validity of a cutting plane is equivalent to solving a mixed integer program of the same size as the original MILP).

*Remark 19 (see [12]). *For with optimal solution , with can be used for sensitivity analysis.

We refer the reader to [5, 13], where the authors state the conditions under which primal feasibility, dual feasibility, and optimality still hold for and (optimal primal vector and subadditive dual function) after changes are made to the input of .
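A small illustration under a toy setup of our own (not from the paper): dual feasibility of a subadditive function does not involve the right hand side, so a dual feasible function obtained for one right hand side remains a valid lower bound after that right hand side changes, mirroring LP dual-based sensitivity analysis.

```python
import math

# Toy pure IP family: min{x1 + x2 : x1 + x2 >= b, x in Z_+^2}.
# F(d) = ceil(d) is dual feasible for every b, since F(a^j) = 1 <= c_j = 1,
# so F(b') remains a lower bound on the optimal value after b changes to b'.
F = math.ceil
for b_new in (1.5, 2.3, 4.0):
    print(b_new, "->", F(b_new))  # lower bounds 2, 3, and 4, respectively
```

For this particular family the bound is tight for every right hand side, but in general only the lower-bound property is guaranteed.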

In Definition 4, can be any convex cone satisfying (5). However, choosing to be the nonnegative orthant enables us to generate better certificates. For the rest of the paper, we focus on subadditive generator functions with and as stated previously in the proof of Theorem 9.

#### 5.1. Obtaining

Theorem 9 tells us that if we add a family of cutting planes , to the LP relaxation of , such that we have

then, if is an optimal dual vector where corresponds to constraints , . In other words, the existence of such is guaranteed by Theorem 9.

Note that if we add the cut to , then obviously we can use calculated using Theorem 9, since for that we have . However, based on our empirical observations through computational experiments, the size of in this case is usually equal or comparable to . We have observed that the more cuts we add to the LP relaxation of , the better the we get using Theorem 9. We have also observed that if we add enough cuts so that the optimal solution of the LP relaxation is , then the size of is much less than , so we can consider it a good certificate.

We should note that, by Theorem 9, the cutting planes , must be valid inequalities for the set . This leads us to the lifting problem: each cutting plane that we add to the LP relaxation of must first be lifted to become valid for the set .

Next, we show that the set is a face of , so the lifting problem is well defined. In other words, the following theorem shows that the feasible region of is a face of the polyhedral set .

Theorem 20. *The polyhedral set is a face of the polyhedral set , where and have rational entries.*

In order to prove this theorem, we first need a few lemmas.

Lemma 21. *Let be a polyhedron and let be a supporting hyperplane of . Then, is a face of .*

*Proof. *See [14].

Let and . Also, let and . Let and and , where is the vector of ones. Obviously, is a supporting hyperplane of . To see this, let be a feasible point of . This point is on the hyperplane and is also in and so in . Also, any point in will satisfy and hence (note that and ).

Lemma 22. *, where is the set of extreme points.*

*Proof. *Suppose that . This means that and is integral. So, we have . Now, we conclude that and . Since is integral, , which implies , and this completes the proof.

Lemma 23. *.*

*Proof. *Let be an extreme point of with . Since , we have and also is integral. Since and , we conclude that (we can take proper combinations of the rows of the two systems). Since is integral, and so .

*Proof of Theorem 20. *From Theorem 5, we know that

Also, it is obvious that since , , so

Moreover, we know

Now, let . Then, by a theorem of Minkowski and Weyl (see [14, page 88]), we can write

where and are extreme points and are extreme rays of . By Lemma 22, . Also, we have

But since are extreme rays of and , , and . This shows that . Now, using (37) and the fact that and are both convex sets, we can conclude .

Conversely, let . We can write

where and are extreme points and are extreme rays of . By Lemma 21 and by Lemma 23.

Now, we claim that for every . We know ; thus, we have . Moreover,

But since , it follows that . Since and , we conclude that and therefore for every , and the claim follows.

Now, by the above claim and the fact that is a convex set, we get the desired result.

Assume that has been solved to optimality using a cutting plane method and that we have access to all the cutting planes that have been added to the LP relaxation of .

Now, we can use a standard method of Espinoza et al. [15] to lift all the cuts that we need; then, by Theorem 9, the optimal dual vector of the LP relaxation will give us the desired . The steps of this algorithm are as follows.

Algorithm 24 (algorithm to obtain a certificate of optimality for MILP).
*Data*. A mixed integer linear program together with the finite set of cutting planes used to solve it.
*Result*. A certificate of optimality .
(1) For each cut, lift the cutting plane using the algorithm in [15] to become valid for the set in Theorem 9. This is possible by Theorem 20.
(2) Add these cuts to the LP relaxation of the MILP and solve the resulting LP, which gives .
(3) Find the optimal dual vector of this LP; is the portion of the dual vector corresponding to the original constraints in . Then is a certificate of optimality and .

In [16], it is shown that if we have a branch and bound tree instead of a set of cutting planes, we can still extract the cutting planes that we need from the tree. This is done by using two different families of cuts, namely, infeasibility cuts (extracted from an infeasible node of the tree) and disjunctive cuts (extracted from a disjunction, i.e., a branch, of the tree). It is also shown that these cuts can be lifted inexpensively to become valid for the original MILP. However, in this article, we assume that the cutting planes are already available to the user. This can be achieved by using a solver such as CPLEX or Coin-OR Cgl (Cut Generating Library). We have used Cgl in some of our computational experiments.

#### 6. Computational Studies and Numerical Results

When we allow the entries of the original matrix to be arbitrary, we must include in all columns with at least one negative element (note that, in the case of knapsack problems, a column has only one element). This increases the size of compared to the case when all entries of are nonnegative. Hence, when there are negative entries, we only report the ratio of the number of nonnegative columns in , that is, , over the number of variables with nonnegative entries .

A 0-1 knapsack problem is an optimization problem of the form
$$\max \Big\{ \sum_{j=1}^{n} c_j x_j : \sum_{j=1}^{n} a_j x_j \le b,\ x_j \in \{0, 1\} \text{ for } j = 1, \ldots, n \Big\}.$$
If there are continuous variables in the problem, then it is called a mixed integer 0-1 knapsack problem.
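For instances of the size used here to illustrate certificates, a brute-force solver is enough to cross-check reported optima. The sketch below is our own illustration (the function name and instance are not from the paper); the enumeration is exponential and assumes nonnegative objective coefficients, so it is suitable only for small test instances.

```python
from itertools import product

def knapsack_01(c, a, b):
    """Brute-force 0-1 knapsack max{c x : a x <= b, x in {0,1}^n};
    exponential enumeration, for verification on small instances only.
    Assumes c >= 0, so the all-zero vector is a valid starting incumbent."""
    best_val, best_x = 0, tuple(0 for _ in c)
    for x in product((0, 1), repeat=len(c)):
        if sum(ai * xi for ai, xi in zip(a, x)) <= b:
            val = sum(ci * xi for ci, xi in zip(c, x))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

print(knapsack_01([10, 7, 4], [5, 4, 3], 8))  # (14, (1, 0, 1))
```

A certificate, by contrast, lets one confirm such an optimum without re-enumerating the feasible set, which is the point of the experiments that follow.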

0-1 knapsack problems are probably the simplest problems for which one could find a certificate, since has length one. Our computational experiments show that, in most 0-1 mixed integer knapsack problems, the size of is significantly smaller than . In lower dimensions, the size of is usually about 10% of the size of , but when is large this ratio decreases to 1% on average, and even less depending on the problem type. In the best case, we had .

The results are even better when we are working with nonnegative entries, that is, when and in are nonnegative. This is expected, because all columns with at least one negative element must be put in . However, if we have a problem with many negative entries, we can multiply rows of by −1 to obtain a better-structured problem.

All the instances that we work with are generated randomly; that is, the coefficients, objective values, and right hand sides are chosen randomly from specific intervals. These instances include problems with coefficients in and also instances where the coefficients vary as the size of the problem increases.

In this section, we present our numerical experiments for each family of problems. Coin-OR Cbc [17] has been used as the MILP solver. Solving times are reported only for problems that take Cbc more than 0.1 seconds to solve. All computations were performed on a 64-bit Intel Core i3 M380 quad core 2.53 GHz CPU with 4 GB of RAM.

Tables 1 and 2 show the results for pure and mixed integer knapsack problems with nonnegative coefficients, respectively. Tables 3 and 4 show the results for pure and mixed integer knapsack problems, respectively, without any restriction on the input data.