Abstract

Consider a large mixed integer linear problem whose constraint matrix is sparse, with independent blocks and coupling constraints and variables. One group of constraints makes the direct application of the Benders decomposition scheme difficult. In this work we propose the following algorithm: a Lagrangian relaxation is applied to that set of constraints, and a heuristic process is presented for updating the multipliers through the resolution of the dual problem, structured on the basis of bundle methods. According to the proposed methodology, at each iteration of the algorithm a Benders decomposition is carried out, providing bounds for the value of the dual function and an ε-subgradient.

1. Introduction

The main objective of this work is to develop a methodology that combines the Benders decomposition with a heuristic for calculating the multipliers of a relaxed problem, in order to solve a large-scale integer linear problem.

Part of the constraints is relaxed, and the corresponding multipliers are updated by a heuristic process that solves a local model of the relaxed dual. In the algorithm presented, at each iteration, with the multiplier obtained, a few iterations of the Benders decomposition are applied to the relaxed problem, yielding an ε-subgradient together with lower and upper bounds on the optimal value. The motivation for this work comes from a large integer linear problem arising in the expansion planning of the transmission network of a digital telecommunication system in an urban area the size of Rio de Janeiro, Brazil [1]. Section 2 outlines decomposition methods for large-scale problems, with the features relevant to this work. Section 3 presents the integer linear model considered here. Details of the regularization methodology and of the Benders decomposition are given in Section 4. Section 5 presents the approximate bundle algorithm. Section 6 presents conclusions and future work.

2. Decomposition Methods for Large-Scale Problems

Large mixed integer linear programming problems are notoriously difficult to solve directly with commercial software. In such cases, Lagrangian relaxation, combined with subgradient optimization, is often used to obtain lower bounds on the optimal value of the objective function. These bounds can be used, for example, within a Branch-and-Bound method [2], or simply to measure the quality of feasible solutions. These properties are currently incorporated in commercial software [3]. Other strategies are also considered: obtaining upper bounds [4], more efficient routines for the generation of cuts, and the use of parallel processing [5]. Lagrangian relaxation was used by [6, 7] in their work on traveling salesman problems, and Branch-and-Bound and implicit enumeration methods gained considerably from Lagrangian relaxation in [8, 9]. Several questions arise concerning Lagrangian relaxation in integer linear problems, among them how to compute the Lagrange multipliers, how to choose among the various relaxations of the problem, and how to obtain feasible solutions to the primal problem. Techniques for solving the Lagrangian dual of combinatorial optimization problems in polynomial time, applying the ellipsoid algorithm [10] as a subroutine, were presented in [11]. Other methodologies use Lagrangian decomposition heuristics, combining the solution of the Lagrangian dual by the subgradient method with primal heuristics for feasible solutions [12]. These techniques were applied to “multicommodity” network flow problems in [13] and to capacitated location problems in [14]. The Benders decomposition [15] is an exact, finite method, effective when the number of integer variables is much smaller than the number of continuous variables, in which case the master problem has a dimension much smaller than the original problem. However, for large problems, the Benders master problem can be difficult to solve because of its size, to which is added a generally slow convergence speed, making the method inefficient in many cases. Moreover, computational experiments have shown that a general Branch-and-Bound code applied to the Benders master problem often produces a tree much larger than the one produced for solving the original problem. Thus, the disadvantage of this decomposition is often the difficulty of solving the master problem, which makes it inefficient.

Several papers have aimed at solving the master problem with higher overall efficiency. Among these, [2, 9, 13, 21, 56, 62] implement the Benders decomposition with Lagrangian relaxation applied to the cuts of the master problem [22–25]. This transfers the difficulty of the master problem to the iterative maximization of the dual function. In [26, 27] this approach is criticized for its lack of controllability (the optimal solution of the Benders master problem may never reach the optimum of the relaxed master problem) in the solution of the relaxed master problem. There are also suggestions on how to obtain a good initial set of cuts for the Benders master problem [28, 29]. Reference [30] suggests the use of the linear relaxation of the Benders master problem in a number of initial iterations.

Motivated by these shortcomings, [31, 32] developed the “Cross Decomposition”, which exploits the structures of the primal and dual problems, combining the advantages of the Dantzig-Wolfe [33, 34] and Benders [34, 35] decompositions. Reference [27] carried out a comparative study of several approaches to the Benders master problem, presenting an efficient method for solving integer linear problems, the “Cross Decomposition” [31, 32, 36–39]. Theoretical aspects of Benders decomposition together with the “Cross Decomposition” are also discussed in [36, 40]. Modifications of the “Cross Decomposition” for integer linear programming problems were made by [13, 37]. These modifications generalize the method of Kornai and Liptak [41], which eliminates the need to use the primal and dual master problems. The dynamics of this decomposition lies in the subproblems: it iterates between the primal and dual subproblems and, instead of using the last subproblem solution as input to the other, it uses an average of all previous subproblem solutions. The convergence proof of this methodology is found in [42]. For a certain class of location problems, exact solution methods structured from the “Cross Decomposition” are presented in [36, 40, 43]. A comparison of the Kornai-Liptak and Cross Decomposition techniques for linear problems with block-angular structure, together with computational results, is discussed in [16, 44]. Both methodologies have also been applied to organizational planning problems [45]. Reference [46] presented a simplified “Cross Decomposition” algorithm for problems with multiple-choice right-hand-side constraints. Applications involving stochastic transportation problems, including a comparative study with other methods, were addressed in [47].

The update of the multipliers can be carried out by various methods. If the problem is formulated as a linear problem, the simplex method is traditionally used. In general, however, the dual is nondifferentiable, and the classical approach is the subgradient method [1, 23–25, 48], which is known not to be a descent method. Although more complex, bundle techniques, originally developed in [49–51], are being increasingly used. The bundle method exploits data from previous iterations (the iterates, objective function values, and subgradients, that is, the bundle of information) to produce the new iterate. The method of ε-descent [52] builds on the conjugate subgradient methods of nondifferentiable programming [49, 50]. Kiwiel, in [53–55], provides new insight into the bundle method based on the classical cutting-plane methods developed in [56, 57]. The basic idea of this generalization of cutting planes is to add a quadratic regularization to the piecewise linear convex approximation of the objective function, a linearization generated by using subgradients. To avoid a large bundle, it is necessary to limit its size. Reference [58], for example, presented a selection strategy based on the multipliers associated with the subgradients of the local model, in which the number of subgradients kept in the bundle is tied to the dimension of the problem variables, and considered three approaches to specify the quadratic stabilization process, which are essentially equivalent. The first technique uses trust regions; see [59, 60]. The Moreau-Yosida regularization generates the proximal method used by [61]. A modern synthesis of bundle and variable metric techniques is built from the concept of Moreau-Yosida regularization in [62, 63]. Applications of the bundle method to control problems can be found in [64]; other applications using Lagrangian decomposition, networks, and comparative tests with other algorithms are developed in [60], and decompositions for large-scale and parallel optimization appear in [65]. According to Lemaréchal [66], “it is not an exaggeration to say that 90 percent of the applications of nondifferentiability appear in decompositions of one form or another, while the remaining 10 percent arise via the calculation of eigenvalues.” We also mention [67], where C. Lemaréchal observes that the biggest deficiency of nondifferentiable optimization is its speed of convergence.

3. Model

Consider the integer linear problem, motivated by an application in a telecommunications system [1], written as follows, where the matrices and the vectors involved have appropriate dimensions.

On the other hand, consider the corresponding feasible set, assumed nonempty and bounded, and therefore finite.

Consider the integer linear programming problem. A relaxation of the continuous variable generates the following relaxed problem.

Relaxing that variable as well, the following problem is obtained.

Relaxing the last block of constraints, we obtain the dual problem, in which the Lagrangian subproblem, for each multiplier, defines the dual function.

The purpose of this relaxation is to ensure the separability of the blocks of variables, in order then to apply the Benders decomposition.
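As a hedged illustration only, and not the paper's actual formulation, the block structure and the relaxation of the difficult group of constraints described above can be sketched in generic notation (the symbols c, d, A, B, E, G, b, g, X, and μ below are placeholders):

```latex
% Generic sketch of a block-structured integer linear problem in which one group
% of coupling constraints is relaxed; every symbol below is an illustrative placeholder.
\begin{align*}
(P)\qquad \min_{x,\,y}\;\; & c^{\top}x + d^{\top}y \\
\text{s.t.}\;\; & A x + B y \ge b      && \text{(constraints kept, treated by Benders)}\\
                & E x + G y \ge g      && \text{(difficult coupling block, to be relaxed)}\\
                & x \in X \subset \mathbb{Z}^{n}_{+}, \quad y \ge 0.
\end{align*}
% Relaxing the difficult block with a multiplier mu >= 0 gives the dual function
\begin{align*}
q(\mu) \;=\; \min_{x \in X,\; y \ge 0}\;
   \Bigl\{\, c^{\top}x + d^{\top}y + \mu^{\top}\bigl(g - E x - G y\bigr)
   \;:\; A x + B y \ge b \,\Bigr\},
\qquad \max_{\mu \ge 0}\; q(\mu),
\end{align*}
% and the inner minimization, now free of the difficult block, is the problem to
% which the Benders decomposition of Section 4 is applied.
```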

4. Regularization Methodology and Benders Decomposition

The slow convergence of algorithms structured on Benders decomposition when applied to large-scale integer linear programming problems motivated the development of a methodology to accelerate the classical method. The Benders decomposition is applied to the large-scale integer linear programming problem after a Lagrangian relaxation, and the Lagrange multipliers are updated by a bundle method.

4.1. Benders Decomposition for the Relaxed Problem

The Benders decomposition applied to the relaxed problem reformulates it as an equivalent problem containing only the integer variables and one continuous variable. Without loss of generality, assume that the problem has a finite optimal solution for every multiplier.

For each multiplier, the relaxed problem can be rewritten as follows, with the corresponding feasible set nonempty.

For the integer variable fixed, the inner minimization subproblem has its dual given as follows.

We assume that the dual polyhedra are uniformly bounded, if necessary by adding bounds on the variables.

Thus we can define the (finite) set of extreme points of this polyhedron. In this case, the dual function value is equal to the minimum over these extreme points and, for any subset of them, we obtain the relaxed Benders master problem.
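In generic notation only (placeholders, not the paper's symbols), and assuming the Lagrangian subproblem keeps the integer variable x and the continuous inner variable, the classical Benders reformulation sketched here illustrates the construction:

```latex
% Hedged sketch of the Benders reformulation of the Lagrangian subproblem for a
% fixed multiplier mu; all symbols are generic placeholders.
\begin{align*}
q(\mu) \;=\; \min_{x \in X}\Bigl\{\, c_\mu^{\top}x
   \;+\; \max_{u \in U}\; u^{\top}\bigl(b_\mu - A x\bigr) \,\Bigr\}
 \;=\; \min_{x \in X,\;\eta}\Bigl\{\, c_\mu^{\top}x + \eta \;:\;
   \eta \ge u_j^{\top}\bigl(b_\mu - A x\bigr),\ j \in J \,\Bigr\},
\end{align*}
% where U is the (bounded) dual polyhedron of the inner continuous subproblem,
% {u_j : j in J} is its finite set of extreme points, and restricting J to a
% subset gives the relaxed Benders master problem, whose optimal value is a
% lower bound on q(mu).
```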

4.2. Bounds

For the multiplier fixed, consider the upper bound built from the solutions obtained in the relaxed primal and dual subproblems.

If we assume, as is done in the algorithm, that the constraints of the relaxed Benders master problem are kept from one iteration to the next, then the lower bound provided by the master problem does not decrease.
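In the same generic notation (a hedged sketch, not the paper's formulas), the standard bounds produced by the relaxed master problem and the dual subproblem at iteration k read:

```latex
% Hedged sketch of the standard Benders bounds at a fixed multiplier mu;
% generic placeholders consistent with the previous sketch.
\begin{align*}
\underline{q}^{\,k}(\mu) &= c_\mu^{\top}x^{k} + \eta^{k}
  && \text{(optimal value of the relaxed master problem: a lower bound)},\\
\overline{q}^{\,k}(\mu)  &= \min_{i \le k}\;\Bigl\{ c_\mu^{\top}x^{i}
      + u^{i\top}\bigl(b_\mu - A x^{i}\bigr) \Bigr\}
  && \text{(best dual subproblem value: an upper bound)},\\[2pt]
\underline{q}^{\,k}(\mu) &\le q(\mu) \le \overline{q}^{\,k}(\mu),
\qquad \underline{q}^{\,k}(\mu) \le \underline{q}^{\,k+1}(\mu)
  \ \text{when the cuts are retained.}
\end{align*}
```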

4.3. Quadratic Regularization of the Dual Problem

The iterative solution of the dual problem, which maximizes the dual function and updates the multiplier, is carried out using a local regularized model, as in bundle methods. However, we do not know the exact value of the dual function at each point; we only have lower and upper bounds.

Suppose that we are at a given iteration. Consider the local model below, where the regularization parameter determines the size of the direction.

For the application of the bundle method, consider the following. (a) The computed value corresponds to some ε-subgradient of the dual function at the current point.

Indeed, for the current point and any other multiplier, the following holds.

Defining the error term accordingly, the vector belongs to the ε-subdifferential of the dual function. (b) The linear cuts correspond to the local polyhedral model and involve the value of the dual function; this value is replaced by the upper bound provided by the relaxed primal dual subproblem. Obtaining this bound may require a few iterations of the Benders algorithm.

Indeed, the bound is considered acceptable if the following test of the quality of the approximation is verified:

Note that the convergence of the Benders method ensures that the test will be satisfied in a finite number of iterations [15]. With this test it is guaranteed that the maximum error in the calculation of the dual function decreases from one iteration to the next. Indirectly, we also expect the approximation error to vanish.
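For reference, in generic notation (a standard fact, not a reconstruction of the paper's formulas), the ε-subgradient property invoked in item (a) is:

```latex
% Epsilon-subgradient (supergradient) inequality for the concave dual function q,
% stated with generic placeholders.
\begin{align*}
g \in \partial_{\varepsilon} q(\hat\mu)
\quad\Longleftrightarrow\quad
q(\mu) \;\le\; q(\hat\mu) + g^{\top}(\mu - \hat\mu) + \varepsilon
\quad \text{for all } \mu,
\end{align*}
% this is the sense in which the vector obtained from the partial Benders
% decomposition approximates a subgradient of the dual function.
```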

With this set of information, we have the “approximate” model

Thus, equivalently, the model can be written as follows. The regularized model has the (decomposed) cutting-plane process embedded in it and aims to determine an ascent direction through the accumulated residue, with the approximate calculation of the dual function carried out through the Benders decomposition. The lemma and the proposition that follow justify the existence and uniqueness of the solution of the quadratic subproblem, as well as of the aggregate subgradient.
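As a hedged sketch in generic notation (the paper's own model may differ in details), a proximal-bundle version of the regularized model reads:

```latex
% Hedged sketch of a proximal-bundle (regularized cutting-plane) model for
% maximizing the dual function q; the bars denote the upper bounds supplied by
% the partial Benders decomposition, and all symbols are generic placeholders.
\begin{align*}
\max_{\mu \ge 0}\;\; \check q_k(\mu) \;-\; \frac{1}{2 t_k}\,\|\mu - \hat\mu_k\|^2,
\qquad
\check q_k(\mu) \;=\; \min_{i \in B_k}\;
   \bigl\{\, \overline{q}_i + g_i^{\top}(\mu - \mu_i) \,\bigr\},
\end{align*}
% where B_k is the current bundle, g_i are the (epsilon-)subgradients collected
% at the points mu_i, and t_k > 0 is the proximal parameter that controls the
% size of the step away from the stability center \hat\mu_k.
```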

Lemma 1 (see the corresponding lemma in [69]). The problem (8) has a unique solution, characterized by the relation below; furthermore, the stated identity holds, with the aggregate quantities defined accordingly.

Proof. Assuming that the set generated by the linear constraints is nonempty, the existence and uniqueness of the solution follow from the positive definiteness of the quadratic term. The optimality condition for this solution is as follows.
From this condition, which can be rewritten equivalently, and considering (9), one recognizes the expression (11).

Proposition 2 (see the corresponding lemma in [69]). With the notation of Lemma 1, consider a quadratic function satisfying the inequality below, with equality at the solution point. Then the solution maximizes the function that follows.

Proof. Applying (9) and (11) and defining the appropriate quantities, the relations can be written as follows, with equality at the solution point. Subtracting the same term from both sides preserves the equality. Now note that the function on the right-hand side is maximized at the point corresponding to the solution given by (9).

This function is known as the aggregate linearization of the approximation, and it bounds the model, as described in Lemma 1.

A more convenient way to solve the quadratic subproblem is through its dual problem. Define the Lagrangian and its optimality conditions as follows:

From the stationarity and complementarity conditions, substituting these equations into the Lagrangian, we obtain the following quadratic problem, where the data are defined accordingly.

Note that the optimality conditions also provide the update of the multiplier, given in terms of the unique solution of this dual quadratic problem.
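To illustrate the multiplier update discussed above, the following is a minimal sketch, not the paper's code, of solving the standard dual of a proximal-bundle quadratic subproblem; the function names, the use of scipy.optimize, and the synthetic data are assumptions for illustration only:

```python
# Hedged sketch (not the paper's code): solving the dual of a proximal-bundle
# quadratic subproblem and recovering the multiplier update.  The dual below is
# the standard one, min over the simplex of (t/2)*||sum_i a_i g_i||^2 + sum_i a_i e_i,
# where g_i are the stored (epsilon-)subgradients and e_i their linearization errors.
import numpy as np
from scipy.optimize import minimize

def solve_bundle_dual(G, e, t):
    """G: (m, n) array of bundle subgradients, e: (m,) linearization errors, t > 0."""
    m = G.shape[0]

    def objective(a):
        p = G.T @ a                      # aggregate subgradient
        return 0.5 * t * p @ p + a @ e

    cons = [{"type": "eq", "fun": lambda a: a.sum() - 1.0}]   # convexity constraint
    bounds = [(0.0, None)] * m                                 # nonnegativity
    a0 = np.full(m, 1.0 / m)                                   # start at the barycenter
    res = minimize(objective, a0, bounds=bounds, constraints=cons, method="SLSQP")
    alpha = res.x
    p_agg = G.T @ alpha                  # aggregate (epsilon-)subgradient
    return alpha, p_agg

def update_multiplier(mu_hat, G, e, t):
    """Proximal-step update of the candidate multiplier."""
    _, p_agg = solve_bundle_dual(G, e, t)
    return mu_hat + t * p_agg            # candidate multiplier (project onto mu >= 0 if required)

# Tiny usage example with synthetic data
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.normal(size=(4, 3))
    e = np.abs(rng.normal(size=4))
    mu_hat = np.zeros(3)
    print(update_multiplier(mu_hat, G, e, t=1.0))
```

In this reading, the convex multipliers returned by the quadratic program play the role of the cut multipliers appearing in the optimality conditions, and the aggregate subgradient drives the proximal step.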

5. The Approximate Bundle Algorithm

5.1. Partial Benders Algorithm

At each iteration, the multiplier is used in the subproblem which, once solved, provides an upper bound and generates a new Benders cut to be included in the relaxed master problem. Solving the master provides a lower bound and a value of the variable y for the subproblem, which in turn is solved again. With the multiplier fixed, this process is repeated, accumulating all the cuts in the Benders master problem, until the test (6) is satisfied. At the end of this process, the resulting bounds and subgradient are taken to the regularized model for a new update of the multiplier.
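A minimal sketch, not the paper's implementation, of the inner loop just described; the callables solve_subproblem and solve_master stand for the problem-specific solvers and are assumptions of this illustration:

```python
# Hedged sketch (not the paper's code) of the partial Benders loop run for a
# fixed multiplier mu: the subproblem and master solvers are passed in as
# callables, since the actual model data are problem specific.
from typing import Callable, List, Tuple

def partial_benders(
    mu,
    solve_subproblem: Callable[[object, object], Tuple[float, object]],  # (mu, y) -> (upper bound, cut)
    solve_master: Callable[[List[object]], Tuple[float, object]],        # cuts -> (lower bound, y)
    y0,
    tol: float,
    max_iters: int = 50,
):
    """Accumulate Benders cuts until the upper and lower bounds are close enough."""
    cuts: List[object] = []
    y = y0
    upper, lower = float("inf"), float("-inf")
    for _ in range(max_iters):
        ub, cut = solve_subproblem(mu, y)   # dual subproblem: upper bound + new cut
        upper = min(upper, ub)
        cuts.append(cut)
        lower, y = solve_master(cuts)       # relaxed master: lower bound + new point
        if upper - lower <= tol:            # approximation quality test (cf. test (6))
            break
    return lower, upper, y, cuts
```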

Note. We chose to include in the quadratic model only the cut that corresponds to the test (6). However, all cuts could be included, leaving the selection and disposal policy for future work.

5.2. Approximate Armijo Test

An approximation of the Armijo test [8] is used here to determine whether the direction obtained increases the approximation of the dual function. Thereby, we have the following, where the quantities involved are given by (7).

Approximating the values of the dual function by the lower and upper bounds, we have the following.

For the bounds provided, an approximation of the Armijo test will be satisfied if the following holds, where the left-hand side is positive because of the relations below.

If we compare this with the test that corresponds to the exact calculation of the function, we observe that the difference between the current and the candidate values has been replaced by an approximation of that increase. We therefore expect that the stopping test of the approximate bundle method will not be triggered prematurely, since the quality test also ensures a good approximation of the function.
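A minimal sketch of the approximate acceptance rule, under the assumption, consistent with the discussion above, that the exact dual values are replaced by the available lower and upper bounds; the function and parameter names are illustrative:

```python
# Hedged sketch (not the paper's exact test): an Armijo-type acceptance rule in
# which the exact dual values are replaced by the bounds returned by the partial
# Benders decomposition.  m_armijo in (0, 1) is the usual Armijo fraction and
# predicted_increase is the increase predicted by the regularized model.
def approximate_armijo(
    lower_candidate: float,   # lower bound on q at the candidate multiplier
    upper_center: float,      # upper bound on q at the current stability center
    predicted_increase: float,
    m_armijo: float = 0.1,
) -> bool:
    """Serious step if the guaranteed increase reaches a fraction of the prediction."""
    # Using the pessimistic difference (lower bound at the candidate minus upper
    # bound at the center) guarantees that the true increase is at least as large.
    return lower_candidate - upper_center >= m_armijo * predicted_increase
```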

5.3. Regularization Algorithm for Updating the Multipliers with Relaxation

Before presenting the algorithm, and in order to keep the notation simple, we replace the model by an equivalent one, defined as follows.

Both notations are used without distinction.

Algorithm 3. Initialization: the stopping tolerances are given, as well as the maximum size of the bundle. Obtain an initial dual feasible solution and an initial feasible primal solution, that is, a solution of the corresponding subproblem. Calculate the associated function value and estimate the required bounds, for example, through one iteration of the Benders method. Choose the reduction factor of the Armijo test and the reduction factor of the approximation quality test. Initialize the set of ascent steps, the iteration counter, and the size of the bundle, together with the initial bundle and the initial model.

Step 1 (main calculation and stopping test). Let the unique solution of the quadratic problem be given, such that the relations below hold.
Compute the candidate multiplier and calculate the corresponding bounds through the partial Benders algorithm. If the stopping quantities are within the tolerances, stop.

Step 2 (approximate Armijo test). If the test is satisfied, it is a “serious step”; otherwise, it is a “null step”: go to Step 4.

Step 3 (serious step). Update the stability center. Add the current index to the set of ascent steps, and exchange the stored quantities by, respectively, the values below.

Step 4 (control of the bundle size). If the maximum size has been reached, then eliminate at least two of the bundle elements and insert the aggregate element.
Consider the new bundle thus obtained.

Step 5. Insert the new element into the bundle, with the appropriate values in the case of a serious step and in the case of a null step. Replace the previous data and update the model.

Step 6. Increment the iteration counter and return to Step 1.
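Putting the pieces together, the following is a hedged skeleton, not the paper's code, of Algorithm 3; the callables and the bookkeeping conventions are assumptions made only for illustration:

```python
# Hedged skeleton (not the paper's code) of the regularized multiplier-update
# loop of Algorithm 3.  The problem-specific pieces, the bundle QP solver and
# the partial Benders evaluation, are passed in as callables.
from typing import Callable, List, Tuple

def bundle_with_partial_benders(
    mu0,
    solve_bundle_qp: Callable[[list, object, float], Tuple[object, float]],  # (bundle, center, t) -> (candidate, predicted increase)
    evaluate_by_benders: Callable[[object], Tuple[float, float, object]],    # mu -> (lower bound, upper bound, subgradient)
    t: float = 1.0,
    m_armijo: float = 0.1,
    tol: float = 1e-4,
    max_bundle: int = 20,
    max_iters: int = 100,
):
    center = mu0
    lower_c, upper_c, g_c = evaluate_by_benders(center)
    bundle: List[tuple] = [(center, upper_c, g_c)]            # (point, upper bound, subgradient)

    for _ in range(max_iters):
        candidate, predicted = solve_bundle_qp(bundle, center, t)   # Step 1: quadratic subproblem
        if predicted <= tol:                                        # stopping test
            break
        lower_k, upper_k, g_k = evaluate_by_benders(candidate)      # partial Benders evaluation

        if lower_k - upper_c >= m_armijo * predicted:               # Step 2: approximate Armijo test
            center, lower_c, upper_c = candidate, lower_k, upper_k  # Step 3: serious step
        # otherwise: null step, the stability center is kept

        if len(bundle) >= max_bundle:                               # Step 4: bundle size control
            bundle = bundle[-(max_bundle - 2):]                     # drop the oldest elements
        bundle.append((candidate, upper_k, g_k))                    # Step 5: enrich the bundle

    return center, lower_c, upper_c                                 # Step 6 is the loop itself
```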

5.4. Partial Benders Algorithm for the Integer Linear Problem with Relaxation

Initialization: initialize the iteration counter.

Step 1. Solve the subproblem. If it has no solution, stop: the problem has no feasible solution. Otherwise, take its solution and generate a new constraint (cut) from it. Continue with Step 2.

Step 2. Solve the relaxed master problem.
Consider its optimal solution and continue with Step 3.

Step 3. Solve the resulting subproblem.
Take its solution and continue with Step 4.

Step 4 (approximation quality test). If the test is satisfied, stop.
Otherwise, update the iterates, increment the counter, and return to Step 1.

Remarks. (1) The stopping test of the algorithm adds, to the usual bundle tolerance, the requirement that the approximation of the function be reasonable. In fact, for crude approximations of the dual function, it is possible to have false serious steps with a falsely small error, hence the need for the ε-approximation. (2) The relaxed Benders master problem should include some heuristic for selecting cuts, since accumulating all the inequalities makes the subproblem explode in size.

We present in Figure 1 the scheme of the algorithm for the integer linear problem with relaxation.

5.5. About Convergence

We observe that, for tolerances small enough, the results cited guarantee the stability of the bundle algorithm. This can be seen by adding a positive parameter to the expression of the linearization errors and of the gains predicted by the model (see the corresponding lemma in [69]). Thus, in general, only local convergence is guaranteed. Furthermore, the approximation quality test should be sufficient to obtain overall convergence, because the iterative process eventually arrives at the usual formulation of the bundle method, with θ = 0, albeit at the risk of a high computational cost, as already noted. We now present the known result that guarantees that the Benders algorithm does not cycle.

Theorem 4. The vectors composed of the vertices and their multipliers generated at each iteration of the algorithm are all different.

Proof. Suppose that the first extreme points have been generated by the subproblem and that the corresponding regularized problem has been obtained. Then, by Step 2, the stated relation holds, and the optimal solution of this problem attains it for some index. Since this value is a lower bound on the optimal cost of the relaxed primal problem, together with (42) we obtain the corresponding inequality. On the other hand, in the next iteration the solution is a vertex, and the associated relation holds, where the latter is a solution of the subproblem. Since it is a feasible solution, the stated inequality holds, or, equivalently, its rearranged form. Combining (43) and (44), it follows that, if equality holds, then the current point solves the relaxed integer linear problem. Otherwise, the inequality is strict, in which case inequality (42) becomes the stated relation; on the other hand, from (48) we conclude the opposite bound, and therefore the generated vectors must be different.

Corollary 5. If the solution is repeated at an internal iteration, then the optimality criterion has been reached.

Proof. Suppose, by contradiction, that when solving the master problem with the accumulated cuts the solution is repeated. In this case, solving the problem, we would obtain a vector satisfying the equality for some index. However, this only occurs when the optimality criterion is reached.

6. Conclusion and Future Works

Our main goal was to present an alternative technique using Lagrangian relaxation for solving an integer linear programming problem. The work introduced a new algorithm structured around the relaxation of a block of constraints that makes the problem difficult when approached by traditional Benders techniques. We expect a computational advantage of this regularized process over other heuristic algorithms (Dantzig-Wolfe, subgradient), because its search direction is determined by processes similar to the bundle method, which has shown results superior to those methods in many large problems [60]. It also seems unlikely that the “Cross Decomposition” technique would be adaptable. As future work, we will investigate other applications in order to verify the efficiency of the method on structured problems and extend the decomposition to nonlinear and integer nonlinear problems, using the Lagrangian heuristic process together with the regularization. It is expected that other hybrid methodologies [71–74] can be applied in the solution of problem [1].

Acknowledgment

The authors are thankful to the National Council for Scientific and Technological Development (CNPq).