Mathematical Problems in Engineering
Volume 2012, Article ID 482193, 28 pages
http://dx.doi.org/10.1155/2012/482193
Research Article

A Two-Phase Support Method for Solving Linear Programs: Numerical Experiments

1Department of Technology, University of Laghouat, 03000, Algeria
2Laboratory of Modelling and Optimization of Systems (LAMOS), University of Bejaia, 06000, Algeria

Received 6 September 2011; Accepted 7 February 2012

Academic Editor: J. Jiang

Copyright © 2012 Mohand Bentobache and Mohand Ouamer Bibi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We develop a single artificial variable technique to initialize the primal support method for solving linear programs with bounded variables. We first recall the full artificial basis technique; then we present the proposed algorithm. In order to study the performance of the suggested algorithm, an implementation under the MATLAB programming language has been developed. Finally, we carry out an experimental study on CPU time and number of iterations over a large set of NETLIB test problems. These test problems are practical linear programs modelling various real-life problems arising from several fields such as oil refinery, audit staff scheduling, airline scheduling, industrial production and allocation, image restoration, multisector economic planning, and data fitting. It is shown that our approach is competitive with our implementation of the primal simplex method and with the primal simplex algorithm implemented in the well-known open-source LP solver LP_SOLVE.

1. Introduction

Linear programming (LP) is a mathematical discipline which deals with the problem of optimizing a linear function over a domain delimited by a set of linear equations or inequalities. The first formulation of an economic problem as a linear programming problem was given by Kantorovich (1939, [1]), and the general formulation was given later by Dantzig in his work [2]. LP is considered the most important technique in operations research. Indeed, it is widely used in practice, and most optimization techniques are based on LP ones. That is why many researchers have shown great interest in finding efficient methods to solve LP problems. Although some methods existed before 1947 [1], they were restricted to particular forms of the LP problem. Inspired by the work of Fourier on linear inequalities, Dantzig (1947, [3]) developed the simplex method, which is known to be very efficient for solving practical linear programs. However, in 1972, Klee and Minty [4] found an example on which the simplex method takes an exponential number of iterations.

In 1977, Gabasov and Kirillova [5] generalized the simplex method and developed the primal support method, which can start from any basis and any feasible solution and can move to the optimal solution through interior points or boundary points. The latter was adapted by Radjef and Bibi to solve LPs which contain two types of variables: bounded and nonnegative variables [6]. Later, Gabasov et al. developed the adaptive method to solve, in particular, linear optimal control problems [7]. This method was extended to solve general linear and convex quadratic problems [8–18]. In 1979, Khachian developed the first polynomial algorithm, an interior point one, to solve LP problems [19], but it is not efficient in practice. In 1984, Karmarkar presented for the first time an interior point algorithm competitive with the simplex method on large-scale problems [20].

The efficiency of the simplex method and its generalizations depends enormously on the initial point used for their initialization. That is why many researchers have taken a renewed interest in developing new initialization techniques. These techniques aim to find a good initial basis and a good initial point, and to use a minimum number of artificial variables in order to reduce memory space and CPU time. The first technique used to find an initial basic feasible solution for the simplex method is the full artificial basis technique [3]. In [21, 22], the authors developed a technique using only one artificial variable to initialize the simplex method. In his experimental study, Millham [23] showed that when the initial basis is available in advance, the single artificial variable technique can be competitive with the full artificial basis one. Wolfe [24] suggested a technique which consists of solving a new linear programming problem with a piecewise linear objective function (minimization of the sum of infeasibilities). In [25–31], crash procedures are developed to find a good initial basis.

In [32], a two-phase support method with one artificial variable for solving linear programming problems was developed. Its general principle is as follows: in the first phase, we start by searching for an initial support with the Gauss-Jordan elimination method; then we proceed to the search for an initial feasible solution by solving an auxiliary problem having one artificial variable and an obvious feasible solution. This obvious feasible solution can be an interior point of the feasible region. After that, in the second phase, we solve the original problem with the primal support method [5].

In [33, 34], we have suggested two approaches to initialize the primal support method with nonnegative variables and bounded variables: the first approach consists of applying the Gauss elimination method with partial pivoting to the system of linear equations corresponding to the main constraints, and the second consists of transforming the equality constraints into inequality constraints. After finding the initial support, we search for a feasible solution by adding only one artificial variable to the original problem; thus we get an auxiliary problem with an evident support feasible solution. An experimental study has been carried out on some NETLIB test problems. The results of the numerical comparison revealed that finding the initial support by the Gauss elimination method consumes considerable time, and that transforming the equality constraints into inequality ones increases the dimension of the problem. Hence, the proposed approaches are competitive with the full artificial basis simplex method for solving small problems, but they are not efficient for solving large problems.

In this work, we will first extend the full artificial basis technique presented in [7], to solve problems in general form, then we will combine a crash procedure with a single artificial variable technique in order to find an initial support feasible solution for the initialization of the support method. This technique is efficient for solving practical problems. Indeed, it takes advantage of sparsity and adds a reduced number of artificial variables to the original problem. Finally, we show the efficiency of our approach by carrying out an experimental study on some NETLIB test problems.

The paper is organized as follows: in Section 2, the primal support method for solving linear programming problems with bounded variables is reviewed. In Section 3, the different techniques to initialize the support method are presented: the full artificial basis technique and the single artificial variable one. Although the support method with full artificial basis is described in [7], it has never been tested on NETLIB test problems. In Section 4, experimental results are presented. Finally, Section 5 is devoted to the conclusion.

2. Primal Support Method with Bounded Variables

2.1. State of the Problem and Definitions

The linear programming problem with bounded variables is presented in the following standard form:
$$\max z = c^T x, \quad (2.1)$$
$$\text{s.t. } Ax = b, \quad (2.2)$$
$$l \le x \le u, \quad (2.3)$$
where $c$ and $x$ are $n$-vectors, $b$ is an $m$-vector, $A$ is an $(m \times n)$-matrix with $\operatorname{rank}(A) = m < n$, and $l$ and $u$ are $n$-vectors. In the following sections, we assume that $l > -\infty$ and $u < +\infty$. We define the following index sets:
$$I = \{1,2,\dots,m\}, \quad J = \{1,2,\dots,n\}, \quad J = J_N \cup J_B, \quad J_N \cap J_B = \emptyset, \quad |J_B| = m, \quad |J_N| = n - m. \quad (2.4)$$
Accordingly, we can write and partition the vectors and the matrix $A$ as follows:
$$x = x(J) = (x_j,\ j \in J) = \binom{x_N}{x_B}, \quad x_N = x(J_N) = (x_j,\ j \in J_N), \quad x_B = x(J_B) = (x_j,\ j \in J_B);$$
$$c = c(J) = (c_j,\ j \in J) = \binom{c_N}{c_B}, \quad c_N = c(J_N), \quad c_B = c(J_B);$$
$$l = l(J) = (l_j,\ j \in J), \quad u = u(J) = (u_j,\ j \in J);$$
$$A = A(I,J) = (a_{ij},\ i \in I,\ j \in J) = (a_1, \dots, a_j, \dots, a_n), \quad a_j = (a_{1j}, a_{2j}, \dots, a_{mj})^T,\ j = \overline{1,n};$$
$$A_i^T = (a_{i1}, a_{i2}, \dots, a_{in}),\ i = \overline{1,m}; \quad A = (A_N, A_B), \quad A_N = A(I, J_N), \quad A_B = A(I, J_B). \quad (2.5)$$
(i) A vector $x$ verifying constraints (2.2) and (2.3) is called a feasible solution of the problem (2.1)–(2.3).
(ii) A feasible solution $x^0$ is called optimal if $z(x^0) = c^T x^0 = \max c^T x$, where $x$ ranges over the set of all feasible solutions of the problem (2.1)–(2.3).
(iii) A feasible solution $x^\epsilon$ is said to be $\epsilon$-optimal or suboptimal if
$$z(x^0) - z(x^\epsilon) = c^T x^0 - c^T x^\epsilon \le \epsilon, \quad (2.6)$$
where $x^0$ is an optimal solution of the problem (2.1)–(2.3), and $\epsilon$ is a positive number chosen beforehand.
(iv) Consider a set of indices $J_B \subset J$ such that $|J_B| = |I| = m$. Then $J_B$ is called a support if $\det(A_B) = \det(A(I, J_B)) \ne 0$.
(v) The pair $\{x, J_B\}$, comprising a feasible solution $x$ and a support $J_B$, will be called a support feasible solution (SFS).
(vi) An SFS is called nondegenerate if $l_j < x_j < u_j$, $j \in J_B$.

Remark 2.1. An SFS is a more general concept than the basic feasible solution (BFS). Indeed, the nonsupport components of an SFS are not restricted to their bounds. Therefore, an SFS may be an interior point, a boundary point, or an extreme point, whereas a BFS is always an extreme point. That is why we can classify the primal support method in the class of interior search methods within the simplex framework [35].

(i) We define the Lagrange multipliers vector $\pi$ and the reduced costs vector $\Delta$ as follows:

$$\pi^T = c_B^T A_B^{-1}, \quad \Delta^T = \Delta^T(J) = \pi^T A - c^T = (\Delta_N^T, \Delta_B^T), \quad (2.7)$$

where $\Delta_N^T = c_B^T A_B^{-1} A_N - c_N^T$ and $\Delta_B^T = c_B^T A_B^{-1} A_B - c_B^T = 0$.

Theorem 2.2 (the optimality criterion [5]). Let $\{x, J_B\}$ be an SFS for the problem (2.1)–(2.3). Then the relations
$$\Delta_j \ge 0 \ \text{for}\ x_j = l_j, \quad \Delta_j \le 0 \ \text{for}\ x_j = u_j, \quad \Delta_j = 0 \ \text{for}\ l_j < x_j < u_j, \quad j \in J_N, \quad (2.8)$$
are sufficient and, in the case of nondegeneracy of the SFS $\{x, J_B\}$, also necessary for the optimality of the feasible solution $x$.

The quantity $\beta(x, J_B)$ defined by
$$\beta(x, J_B) = \sum_{\Delta_j > 0,\ j \in J_N} \Delta_j (x_j - l_j) + \sum_{\Delta_j < 0,\ j \in J_N} \Delta_j (x_j - u_j) \quad (2.9)$$
is called the suboptimality estimate. Thus, we have the following theorem [5].
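As an illustrative sketch (not the paper's MATLAB code), the estimate (2.9) can be computed as follows; indices here are 0-based, unlike the paper's 1-based notation:

```python
def suboptimality_estimate(delta, x, l, u, J_N):
    """Suboptimality estimate beta(x, J_B) of formula (2.9): positive reduced
    costs are weighted by the gap to the lower bound, negative ones by the gap
    to the upper bound. beta == 0 certifies optimality (Theorem 2.3)."""
    beta = 0.0
    for j in J_N:
        if delta[j] > 0:
            beta += delta[j] * (x[j] - l[j])
        elif delta[j] < 0:
            beta += delta[j] * (x[j] - u[j])
    return beta
```

Since every term is nonnegative ($x_j \ge l_j$ and $x_j \le u_j$ on the feasible box), the returned value is always $\ge 0$.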

Theorem 2.3 (sufficient condition for suboptimality). Let $\{x, J_B\}$ be an SFS for the problem (2.1)–(2.3) and $\epsilon$ an arbitrary positive number. If $\beta(x, J_B) \le \epsilon$, then the feasible solution $x$ is $\epsilon$-optimal.

2.2. The Primal Support Method

Let $\{x, J_B\}$ be an initial SFS and $\epsilon$ an arbitrary positive number. The scheme of the primal support method is described in the following steps:
(1) Compute $\pi^T = c_B^T A_B^{-1}$ and $\Delta_j = \pi^T a_j - c_j$, $j \in J_N$.
(2) Compute $\beta(x, J_B)$ with the formula (2.9).
(3) If $\beta(x, J_B) = 0$, then the algorithm stops with $\{x, J_B\}$, an optimal SFS.
(4) If $\beta(x, J_B) \le \epsilon$, then the algorithm stops with $\{x, J_B\}$, an $\epsilon$-optimal SFS.
(5) Determine the nonoptimal index set
$$J_{NNO} = \{ j \in J_N : \ \Delta_j < 0,\ x_j < u_j \ \text{ or } \ \Delta_j > 0,\ x_j > l_j \}. \quad (2.10)$$
(6) Choose an index $j_0$ from $J_{NNO}$ such that $|\Delta_{j_0}| = \max_{j \in J_{NNO}} |\Delta_j|$.
(7) Compute the search direction $d$ using the relations
$$d_{j_0} = -\operatorname{sign} \Delta_{j_0}, \quad d_j = 0,\ j \ne j_0,\ j \in J_N, \quad d(J_B) = -A_B^{-1} A_N d(J_N) = -A_B^{-1} a_{j_0} d_{j_0}. \quad (2.11)$$
(8) Compute $\theta_{j_1} = \min_{j \in J_B} \theta_j$, where $\theta_j$ is determined by the formula
$$\theta_j = \begin{cases} (u_j - x_j)/d_j, & \text{if } d_j > 0; \\ (l_j - x_j)/d_j, & \text{if } d_j < 0; \\ \infty, & \text{if } d_j = 0. \end{cases} \quad (2.12)$$
(9) Compute $\theta_{j_0}$ using the formula
$$\theta_{j_0} = \begin{cases} x_{j_0} - l_{j_0}, & \text{if } \Delta_{j_0} > 0; \\ u_{j_0} - x_{j_0}, & \text{if } \Delta_{j_0} < 0. \end{cases} \quad (2.13)$$
(10) Compute $\theta^0 = \min\{\theta_{j_1}, \theta_{j_0}\}$.
(11) Compute $\bar{x} = x + \theta^0 d$ and $\bar{z} = z + \theta^0 |\Delta_{j_0}|$.
(12) Compute $\beta(\bar{x}, J_B) = \beta(x, J_B) - \theta^0 |\Delta_{j_0}|$.
(13) If $\beta(\bar{x}, J_B) = 0$, then the algorithm stops with $\{\bar{x}, J_B\}$, an optimal SFS.
(14) If $\beta(\bar{x}, J_B) \le \epsilon$, then the algorithm stops with $\{\bar{x}, J_B\}$, an $\epsilon$-optimal SFS.
(15) If $\theta^0 = \theta_{j_0}$, then we put $\bar{J}_B = J_B$.
(16) If $\theta^0 = \theta_{j_1}$, then we put $\bar{J}_B = (J_B \setminus \{j_1\}) \cup \{j_0\}$.
(17) We put $x = \bar{x}$ and $J_B = \bar{J}_B$, and go to step (1).
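The steps above can be sketched in Python with dense linear algebra (a minimal illustration, not the paper's sparse MATLAB implementation; 0-based indices, and the function name is ours):

```python
import numpy as np

def support_method_iteration(A, c, l, u, x, J_B, eps=1e-9):
    """One pass through steps (1)-(17) of the primal support method.
    Returns the updated point, the updated support, and a status flag."""
    m, n = A.shape
    J_N = [j for j in range(n) if j not in J_B]
    A_B = A[:, J_B]
    pi = np.linalg.solve(A_B.T, c[J_B])               # step (1): multipliers
    delta = A.T @ pi - c                              # reduced costs, 0 on J_B
    beta = sum(delta[j] * (x[j] - l[j]) for j in J_N if delta[j] > 0) + \
           sum(delta[j] * (x[j] - u[j]) for j in J_N if delta[j] < 0)
    if beta <= eps:                                   # steps (2)-(4)
        return x, J_B, "optimal"
    J_NNO = [j for j in J_N
             if (delta[j] < 0 and x[j] < u[j]) or (delta[j] > 0 and x[j] > l[j])]
    j0 = max(J_NNO, key=lambda j: abs(delta[j]))      # steps (5)-(6)
    d = np.zeros(n)                                   # step (7): direction
    d[j0] = -np.sign(delta[j0])
    d[J_B] = np.linalg.solve(A_B, -A[:, j0] * d[j0])
    theta = {j: (u[j] - x[j]) / d[j] if d[j] > 0 else (l[j] - x[j]) / d[j]
             for j in J_B if d[j] != 0}               # step (8)
    j1 = min(theta, key=theta.get, default=None)
    theta_j1 = theta[j1] if theta else np.inf
    theta_j0 = x[j0] - l[j0] if delta[j0] > 0 else u[j0] - x[j0]  # step (9)
    theta0 = min(theta_j1, theta_j0)                  # step (10)
    x = x + theta0 * d                                # step (11)
    if theta0 == theta_j1:                            # steps (15)-(16)
        J_B = [j0 if j == j1 else j for j in J_B]
    return x, J_B, "continue"
```

Calling the function repeatedly until the status becomes "optimal" reproduces the loop of steps (1)–(17).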

3. Finding an Initial SFS for the Primal Support Method

Consider the linear programming problem written in the following general form:
$$\max z = c^T x, \quad \text{s.t. } A_1 x \le b_1,\ A_2 x = b_2,\ l \le x \le u, \quad (3.1)$$
where $c = (c_1, c_2, \dots, c_p)^T$, $x = (x_1, x_2, \dots, x_p)^T$, $l = (l_1, l_2, \dots, l_p)^T$, and $u = (u_1, u_2, \dots, u_p)^T$ are vectors in $\mathbb{R}^p$; $A_1$ is a matrix of dimension $(m_1 \times p)$, $A_2$ is a matrix of dimension $(m_2 \times p)$, $b_1 \in \mathbb{R}^{m_1}$, and $b_2 \in \mathbb{R}^{m_2}$. We assume that $l > -\infty$ and $u < +\infty$.

Let $m = m_1 + m_2$ be the number of constraints of the problem (3.1), and let $I = \{1, 2, \dots, m\}$ and $J^0 = \{1, 2, \dots, p\}$ be, respectively, the constraint index set and the original variable index set of the problem (3.1). We partition the set $I$ as $I = I_1 \cup I_2$, where $I_1$ and $I_2$ represent, respectively, the index sets of the inequality and equality constraints. We denote by $e(j)$ the $j$-vector of ones, that is, $e(j) = (1, 1, \dots, 1)^T \in \mathbb{R}^j$, and by $e_j$ the $j$th column of the identity matrix $I_m$ of order $m$.

After adding $m_1 = |I_1|$ slack variables to the problem (3.1), we get the following problem in standard form:
$$\max z = c^T x, \quad (3.2)$$
$$\text{s.t. } Ax + H_e x_e = b, \quad (3.3)$$
$$l \le x \le u, \quad (3.4)$$
$$0 \le x_e \le u_e, \quad (3.5)$$
where $A = (a_{ij},\ i \in I,\ j \in J^0) = \binom{A_1}{A_2}$, $b = (b_i,\ i \in I) = \binom{b_1}{b_2}$, $H_e = \binom{I_{m_1}}{0}$ is the $(m \times m_1)$-matrix $(e_i,\ i \in I_1) = (e_1, e_2, \dots, e_{m_1})$, $x_e = (x_{p+i},\ i \in I_1) = (x_{p+1}, x_{p+2}, \dots, x_{p+m_1})^T$ is the vector of the added slack variables, and $u_e = (u_{p+i},\ i \in I_1) = (u_{p+1}, u_{p+2}, \dots, u_{p+m_1})^T$ is its upper bound vector. In order to work with finite bounds, we set $u_{p+i} = M$, $i \in I_1$, that is, $u_e = M e(m_1)$, where $M$ is a finite, positive, and large real number chosen carefully.

Remark 3.1. The upper bounds $u_{p+i}$, $i \in I_1$, of the added slack variables can also be deduced as follows: $u_{p+i} = b_i - A_i^T h^i$, $i \in I_1$, where $h^i$ is a $p$-vector computed with the formula
$$h^i_j = \begin{cases} l_j, & \text{if } a_{ij} > 0; \\ u_j, & \text{if } a_{ij} < 0; \\ 0, & \text{if } a_{ij} = 0. \end{cases} \quad (3.6)$$
Indeed, from the system (3.3), we have $x_{p+i} = b_i - \sum_{j=1}^p a_{ij} x_j$, $i \in I_1$. By using the bound constraints (3.4), we get $x_{p+i} \le b_i - \sum_{j=1}^p a_{ij} h^i_j = b_i - A_i^T h^i$, $i \in I_1$. However, the experimental study shows that it is more efficient to set $u_{p+i}$, $i \in I_1$, to a given finite big value, because for large-scale problems, the deduction formula (3.6) takes much CPU time to compute bounds for the slack variables.
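The bound deduction of Remark 3.1 vectorizes naturally. A sketch in Python (the function name is ours; the paper's implementation is in MATLAB):

```python
import numpy as np

def slack_upper_bounds(A1, b1, l, u):
    """Formula (3.6): for each inequality row i, the slack bound is
    u_{p+i} = b_i - min over the box [l, u] of A_i^T x, attained by taking
    l_j where a_ij > 0, u_j where a_ij < 0, and anything where a_ij = 0."""
    h = np.where(A1 > 0, l, np.where(A1 < 0, u, 0.0))
    return b1 - np.sum(A1 * h, axis=1)
```

Each returned bound is tight in the sense that some box point attains it, which is exactly why the remark notes the formula is valid but costly for large instances.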

The initialization of the primal support method consists of finding an initial support feasible solution for the problem (3.2)–(3.5). In this section, inspired by the technique used to initialize interior point methods [36], and taking into account the fact that the support method can start with a feasible point and a support which are independent, we suggest a single artificial variable technique to find an initial SFS. Before presenting the suggested technique, we first extend the full artificial basis technique, originally presented in [7] for the standard form, to linear programs presented in the general form (3.1).

3.1. The Full Artificial Basis Technique

Let $x^+$ be a $p$-vector chosen between $l$ and $u$, and let $w$ be an $m$-vector such that $w = b - A x^+$. We consider the following subsets of $I_1$: $I_1^+ = \{i \in I_1 : w_i \ge 0\}$ and $I_1^- = \{i \in I_1 : w_i < 0\}$, and we assume without loss of generality that $I_1^+ = \{1, 2, \dots, k\}$, with $k \le m_1$, and $I_1^- = \{k+1, k+2, \dots, m_1\}$. Note that $I_1^+$ and $I_1^-$ form a partition of $I_1$, with $|I_1^+| = k$ and $|I_1^-| = m_1 - k$.

We make the following partition of $x_e$, $u_e$, and $H_e$: $x_e = \binom{x_e^+}{x_e^-}$, where
$$x_e^+ = (x_{p+i},\ i \in I_1^+) = (x_{p+1}, \dots, x_{p+k})^T, \quad x_e^- = (x_{p+i},\ i \in I_1^-) = (x_{p+k+1}, \dots, x_{p+m_1})^T; \quad (3.7)$$
$u_e = \binom{u_e^+}{u_e^-}$, where
$$u_e^+ = (u_{p+i},\ i \in I_1^+) = (u_{p+1}, \dots, u_{p+k})^T, \quad u_e^- = (u_{p+i},\ i \in I_1^-) = (u_{p+k+1}, \dots, u_{p+m_1})^T; \quad (3.8)$$
and $H_e = (H_e^+, H_e^-)$, where
$$H_e^+ = (e_i,\ i \in I_1^+) = I_m(I, I_1^+) = (e_1, \dots, e_k), \quad H_e^- = (e_i,\ i \in I_1^-) = I_m(I, I_1^-) = (e_{k+1}, \dots, e_{m_1}). \quad (3.9)$$
Hence, the problem (3.2)–(3.5) becomes
$$\max z = c^T x, \quad (3.10)$$
$$\text{s.t. } Ax + H_e^- x_e^- + H_e^+ x_e^+ = b, \quad (3.11)$$
$$l \le x \le u, \quad (3.12)$$
$$0 \le x_e^+ \le u_e^+, \quad 0 \le x_e^- \le u_e^-. \quad (3.13)$$

After adding $s = m - k$ artificial variables to the equations $k+1, k+2, \dots, m$ of the system (3.11), where $s$ is the number of elements of the artificial index set $I_a = I_1^- \cup I_2 = \{k+1, \dots, m_1, m_1+1, \dots, m\}$, we get the following auxiliary problem:
$$\max \psi = -e(s)^T x_a, \quad \text{s.t. } Ax + H_e^- x_e^- + H_e^+ x_e^+ + H_a x_a = b, \quad l \le x \le u,\ 0 \le x_e^+ \le u_e^+,\ 0 \le x_e^- \le u_e^-,\ 0 \le x_a \le u_a, \quad (3.14)$$
where
$$x_a = (x_{p+m_1+i},\ i = \overline{1,s}) = (x_{p+m_1+1}, \dots, x_{p+m_1+s})^T \quad (3.15)$$
represents the artificial variables vector,
$$H_a = (\operatorname{sign}(w_i)\, e_i,\ i \in I_a) = (\operatorname{sign}(w_{k+1})\, e_{k+1}, \operatorname{sign}(w_{k+2})\, e_{k+2}, \dots, \operatorname{sign}(w_m)\, e_m), \quad u_a = |w(I_a)| + \delta e(s) = (|w_i| + \delta,\ i \in I_a), \quad (3.16)$$
where $\delta$ is a real nonnegative number chosen in advance.

If we put $x_N = \binom{x}{x_e^-} \in \mathbb{R}^{p+m_1-k}$, $x_B = \binom{x_e^+}{x_a} \in \mathbb{R}^m$, $A_N = (A, H_e^-)$, $A_B = (H_e^+, H_a)$, $l_N = \binom{l}{0_{m_1-k}}$, $u_N = \binom{u}{u_e^-}$, $l_B = 0_{k+s} = 0_m$, $u_B = \binom{u_e^+}{u_a}$, and $c_B = \binom{0_k}{-e(s)}$, then we get the following auxiliary problem:
$$\max \psi = c_B^T x_B, \quad (3.17)$$
$$\text{s.t. } A_N x_N + A_B x_B = b, \quad (3.18)$$
$$l_N \le x_N \le u_N, \quad l_B \le x_B \le u_B. \quad (3.19)$$
The variable index set of the auxiliary problem (3.17)–(3.19) is
$$J = J^0 \cup \{p+i,\ i \in I_1\} \cup \{p+m_1+i,\ i = \overline{1,s}\} = \{1, \dots, p, p+1, \dots, p+m_1, p+m_1+1, \dots, p+m_1+s\}. \quad (3.20)$$
Let us partition this set as $J = J_N \cup J_B$, where
$$J_N = J^0 \cup \{p+i,\ i \in I_1^-\} = \{1, \dots, p, p+k+1, \dots, p+m_1\}, \quad J_B = \{p+i,\ i \in I_1^+\} \cup \{p+m_1+i,\ i = \overline{1,s}\} = \{p+1, \dots, p+k, p+m_1+1, \dots, p+m_1+s\}. \quad (3.21)$$
Note that the pair $\{y, J_B\}$, where
$$y = \binom{y_N}{y_B} = \binom{x_N}{x_B} = \begin{pmatrix} x \\ x_e^- \\ x_e^+ \\ x_a \end{pmatrix} = \begin{pmatrix} x^+ \\ 0_{m_1-k} \\ w(I_1^+) \\ |w(I_a)| \end{pmatrix}, \quad (3.22)$$
with $w(I_1^+) = (w_i,\ i \in I_1^+)$ and $w(I_a) = (w_i,\ i \in I_a)$, is a support feasible solution (SFS) for the auxiliary problem (3.17)–(3.19). Indeed, $y$ lies between its lower and upper bounds: for $\delta \ge 0$ we have
$$\begin{pmatrix} l \\ 0_{m_1-k} \\ 0_k \\ 0_s \end{pmatrix} \le y = \begin{pmatrix} x^+ \\ 0_{m_1-k} \\ w(I_1^+) \\ |w(I_a)| \end{pmatrix} \le \begin{pmatrix} u \\ M e(m_1-k) \\ M e(k) \\ |w(I_a)| + \delta e(s) \end{pmatrix}. \quad (3.23)$$
Furthermore, the $(m \times m)$-matrix
$$A_B = (e_1, e_2, \dots, e_k, \operatorname{sign}(w_{k+1})\, e_{k+1}, \operatorname{sign}(w_{k+2})\, e_{k+2}, \dots, \operatorname{sign}(w_m)\, e_m) \quad (3.24)$$
is invertible because
$$\det(A_B) = \prod_{i \in I_a} \operatorname{sign}(w_i) = \prod_{i=k+1}^m \operatorname{sign}(w_i) = \pm 1, \quad (3.25)$$
and $y$ verifies the main constraints: by replacing $x$, $x_e^-$, $x_e^+$, and $x_a$ with their values in the system (3.18), we get
$$A_N x_N + A_B x_B = (A, H_e^-) \binom{x^+}{0_{m_1-k}} + (H_e^+, H_a) \binom{w(I_1^+)}{|w(I_a)|} = A x^+ + H_e^+ w(I_1^+) + H_a |w(I_a)| = A x^+ + \sum_{i=1}^k w_i e_i + \sum_{i=k+1}^m w_i e_i = A x^+ + I_m w = A x^+ + b - A x^+ = b. \quad (3.26)$$
Therefore, the primal support method can be initialized with the SFS $\{y, J_B\}$ to solve the auxiliary problem. Let $\{y^*, J_B^*\}$ be the optimal SFS obtained after the application of the primal support method to the auxiliary problem (3.17)–(3.19), where
$$y^* = \begin{pmatrix} x^* \\ x_e^{-*} \\ x_e^{+*} \\ x_a^* \end{pmatrix}, \quad \psi^* = -e(s)^T x_a^*. \quad (3.27)$$
If $\psi^* < 0$, then the original problem (3.2)–(3.5) is infeasible.

Otherwise, if $J_B^*$ does not contain any artificial index, then $\{(x^*; x_e^*), J_B^*\}$ is an SFS for the original problem (3.2)–(3.5). If artificial indices remain, we delete them from the support $J_B^*$ and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.

In order to initialize the primal support method for linear programs with bounded variables written in standard form, Gabasov et al. [7] add $m$ artificial variables, where $m$ is the number of constraints. In this work, we are interested in solving the problem written in the general form (3.1), so we add artificial variables only for the equality constraints and for the inequality constraints with negative components of the vector $w$.

Remark 3.2. We have $I = I_1 \cup I_2 = I_1^+ \cup I_1^- \cup I_2 = I_1^+ \cup I_a$. Since in the relationship (3.22) we have $y_B = x_B = \binom{w(I_1^+)}{|w(I_a)|}$ and $w(I_1^+) \ge 0$, we get $y_B = |w(I_1^+ \cup I_a)| = |w(I)| = |w|$.

Remark 3.3. If we choose 𝑥+ such that 𝑥+=𝑙 or 𝑥+=𝑢, then the vector 𝑦, given by the relationship (3.22), will be a BFS. Hence, the simplex algorithm can be initialized with this point.
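The construction of Section 3.1 (the residual $w$, the split of $I_1$, and the signed artificial columns) can be sketched as follows in Python, with 0-based row indices; the function name and return layout are ours, and we arbitrarily assign sign $+1$ to artificial columns with $w_i = 0$ so that $A_B$ stays invertible:

```python
import numpy as np

def full_artificial_basis(A, b, I1, x_plus, delta=0.0):
    """Full artificial basis setup: I1 holds the inequality rows, the
    remaining rows are equalities. Returns the residual w, the artificial
    row set I_a, the signed artificial columns H_a (formula (3.16)), and
    the artificial upper bounds u_a = |w(I_a)| + delta."""
    m = len(b)
    w = b - A @ x_plus
    I1_plus = [i for i in I1 if w[i] >= 0]        # rows covered by a slack
    I_a = [i for i in range(m) if i not in I1_plus]
    H_a = np.zeros((m, len(I_a)))
    for k, i in enumerate(I_a):
        H_a[i, k] = 1.0 if w[i] >= 0 else -1.0    # sign(w_i) e_i column
    u_a = np.abs(w[I_a]) + delta
    return w, I_a, H_a, u_a
```

With $x^+ = l$ or $x^+ = u$, the resulting point is the BFS of Remark 3.3.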

Remark 3.4. If $b_1 \ge 0_{m_1}$, $l \le 0_p$, and $u \ge 0_p$, two cases can occur.

Case 1. If $b_2 \ge 0_{m_2}$, then we put $x^+ = 0_p$.

Case 2. If $b_2 < 0_{m_2}$, then we put $A_2 = -A_2$, $b_2 = -b_2$, and $x^+ = 0_p$.
In both cases, we get $b \ge 0_m$ and $w = b - A x^+ = b \ge 0$, therefore $I_1^- = \emptyset$. Hence $I_a = I_1^- \cup I_2 = I_2$, so we add only $m_2$ artificial variables, for the equality constraints.

3.2. The Single Artificial Variable Technique

In order to initialize the primal support method using this technique, we first start by searching an initial support, then we proceed to the search of an initial feasible solution for the original problem.

The application of the Gauss elimination method with partial pivoting to the system of equations (3.3) can give us a support $J_B = \{j_1, j_2, \dots, j_r\}$, where $r \le m$. However, the experimental study realized in [33, 34] reveals that this approach spends much time searching for the initial support. That is why it is important to take into account the sparsity of practical problems and to apply a procedure that finds a triangular basis among the columns corresponding to the original variables, that is, the columns of the matrix $A$. In this work, on the basis of the crash procedures presented in [26, 28–30], we present a procedure to find an initial support for the problem (3.2)–(3.5).

Procedure 1 (searching an initial support). (1) We sort the columns of the $(m \times p)$-matrix $A$ in increasing order of their number of nonzero elements. Let $L$ be the list of the sorted columns of $A$.
(2) Let $a_{j_0} = L(r)$ (the $r$th column of $L$) be the first column of the list $L$ verifying: $\exists\, i_0 \in I$ such that
$$|a_{i_0 j_0}| = \max_{i \in I} |a_{i j_0}|, \quad |a_{i_0 j_0}| > \text{pivtol}, \quad (3.28)$$
where pivtol is a given tolerance. Hence, we put $J_B = \{j_0\}$, $I_{piv} = \{i_0\}$, $k = 1$.
(3) Let $j_k$ be the index corresponding to the column $(r+k)$ of the list $L$, that is, $a_{j_k} = L(r+k)$. If $a_{j_k}$ has zero elements in all the rows having indices in $I_{piv}$, and if $\exists\, i_k \in I$ such that
$$|a_{i_k j_k}| = \max_{i \in I \setminus I_{piv}} |a_{i j_k}|, \quad |a_{i_k j_k}| > \text{pivtol}, \quad (3.29)$$
then we put $J_B = J_B \cup \{j_k\}$, $I_{piv} = I_{piv} \cup \{i_k\}$.
(4) We put $k = k+1$. If $r+k \le p$, then go to step (3); else go to step (5).
(5) We put $s = 0$, $I_a = \emptyset$, $J_a = \emptyset$.
(6) For all $i \in I \setminus I_{piv}$: if the $i$th constraint is originally an inequality constraint, then we add to $J_B$ the index of the corresponding slack variable $p+i$, that is, $J_B = J_B \cup \{p+i\}$. If the $i$th constraint is originally an equality constraint, then we put $s = s+1$ and add to it an artificial variable $x_{p+m_1+s}$, for which we set the lower bound to zero and the upper bound to a big, well-chosen value $M$. Thus, we put $J_B = J_B \cup \{p+m_1+s\}$, $J_a = J_a \cup \{p+m_1+s\}$, $I_a = I_a \cup \{i\}$.
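Steps (1)–(4) of the crash scan can be sketched as follows in Python (0-based indices; the function name is ours, and this sketch uses dense column scans where a real implementation would exploit sparse storage):

```python
import numpy as np

def crash_initial_support(A, pivtol=1e-6):
    """Scan the columns of A by increasing number of nonzeros and greedily
    keep a column when it is zero on all previously pivoted rows and has an
    acceptable pivot among the remaining rows. Returns the kept column
    indices and their pivot rows; uncovered rows are handled by step (6)."""
    m, p = A.shape
    order = sorted(range(p), key=lambda j: np.count_nonzero(A[:, j]))
    J_B, I_piv = [], []
    for j in order:
        col = A[:, j]
        if any(col[i] != 0 for i in I_piv):
            continue                      # would destroy triangularity
        free = [i for i in range(m) if i not in I_piv]
        if not free:
            break                         # the support is already complete
        i_k = max(free, key=lambda i: abs(col[i]))
        if abs(col[i_k]) > pivtol:
            J_B.append(j)
            I_piv.append(i_k)
    return J_B, I_piv
```

By construction, the selected columns can be permuted into a triangular matrix, so the support is invertible without any factorization work.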

Remark 3.5. The inputs of Procedure 1 are:
(i) the $(m \times p)$-matrix of constraints, $A$;
(ii) a pivoting tolerance fixed beforehand, pivtol;
(iii) a big and well-chosen value $M$.
The outputs of Procedure 1 are:
(i) $J_B = \{j_0, j_1, \dots, j_{m-1}\}$: the initial support for the problem (3.2)–(3.5);
(ii) $I_a$: the indices of the constraints for which artificial variables have been added;
(iii) $J_a$: the indices of the artificial variables added to the problem (3.2)–(3.5);
(iv) $s = |I_a| = |J_a|$: the number of artificial variables added to the problem (3.2)–(3.5).

After the application of Procedure 1 to the problem (3.2)–(3.5), we get the following linear programming problem:
$$\max z = c^T x, \quad \text{s.t. } Ax + H_e x_e + H_a x_a = b, \quad l \le x \le u,\ 0 \le x_e \le M e(m_1),\ 0 \le x_a \le M e(s), \quad (3.30)$$
where $x_a = (x_{p+m_1+1}, x_{p+m_1+2}, \dots, x_n)^T$ is the $s$-vector of the artificial variables added during the application of Procedure 1, with $n = p + m_1 + s$, and $H_a = (e_i,\ i \in I_a)$ is an $(m \times s)$-matrix.

Since we have got an initial support, we can start the procedure of finding a feasible solution for the original problem.

Procedure 2 (searching an initial feasible solution). Consider the following auxiliary problem:
$$\max \psi = -x_{n+1} - \sum_{j \in J_a} x_j, \quad \text{s.t. } Ax + H_e x_e + H_a x_a + \rho x_{n+1} = b, \quad l \le x \le u,\ 0 \le x_e \le M e(m_1),\ 0 \le x_a \le M e(s),\ 0 \le x_{n+1} \le 1, \quad (3.31)$$
where $x_{n+1}$ is an artificial variable, $\rho = b - A x^+$, and $x^+$ is a $p$-vector chosen between $l$ and $u$. We remark that
$$y = \begin{pmatrix} x \\ x_e \\ x_a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} x^+ \\ 0_{m_1} \\ 0_s \\ 1 \end{pmatrix} \quad (3.32)$$
is an obvious feasible solution for the auxiliary problem. Indeed,
$$A x^+ + H_e 0_{m_1} + H_a 0_s + (b - A x^+) = b. \quad (3.33)$$
Hence, we can apply the primal support method to solve the auxiliary problem (3.31), starting with the initial SFS $\{y, J_B\}$, where $J_B = \{j_0, j_1, \dots, j_{m-1}\}$ is the support obtained with Procedure 1. Let
$$y^* = \begin{pmatrix} x^* \\ x_e^* \\ x_a^* \\ x_{n+1}^* \end{pmatrix}, \quad J_B^*, \quad \psi^* \quad (3.34)$$
be, respectively, the optimal solution, the optimal support, and the optimal objective value of the auxiliary problem (3.31).
If $\psi^* < 0$, then the original problem (3.2)–(3.5) is infeasible.
Otherwise, if $J_B^*$ does not contain any artificial index, then $\{(x^*; x_e^*), J_B^*\}$ is an SFS for the original problem (3.2)–(3.5). If artificial indices remain, we delete them from the support $J_B^*$ and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.
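The data of the auxiliary problem (3.31) is cheap to assemble; a minimal sketch in Python (the function name is ours):

```python
import numpy as np

def single_artificial_setup(A, b, x_plus):
    """Procedure 2 setup: the single artificial column rho = b - A x_plus
    makes (x_plus, 0, 0, 1) an obvious feasible point of (3.31), since
    A x_plus + rho * 1 = b by construction."""
    rho = b - A @ x_plus
    residual = A @ x_plus + rho * 1.0 - b   # feasibility check, must be zero
    return rho, residual
```

Note that feasibility of the starting point holds for any $x^+$ between $l$ and $u$, which is what lets the method start from an interior point.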

Remark 3.6. The number of artificial variables $n_a = s + 1$ of the auxiliary problem (3.31) verifies the inequality $1 \le n_a \le m_2 + 1$.
The auxiliary problem will have only one artificial variable, that is, $n_a = 1$, when the initial support $J_B$ found by Procedure 1 consists only of original and slack variable indices ($J_a = \emptyset$); this holds in two cases.

Case 1. When all the constraints are inequalities, that is, $I_2 = \emptyset$.

Case 2. When $I_2 \ne \emptyset$ and step (4) of Procedure 1 ends with $I_{piv} \supseteq I_2$.
The case $n_a = m_2 + 1$ holds when step (4) of Procedure 1 stops with $I_{piv} \subseteq I_1$.

Remark 3.7. Let us choose a $p$-vector $x^+$ between $l$ and $u$ and two nonnegative vectors $v_e \in \mathbb{R}^{m_1}$ and $v_a \in \mathbb{R}^s$. If we put, in the auxiliary problem (3.31), $\rho = b - v - A x^+$, with $v = H_e v_e + H_a v_a$, then the vector
$$y = \begin{pmatrix} x \\ x_e \\ x_a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} x^+ \\ v_e \\ v_a \\ 1 \end{pmatrix} \quad (3.35)$$
is a feasible solution for the auxiliary problem. Indeed,
$$A x + H_e x_e + H_a x_a + \rho x_{n+1} = A x^+ + H_e v_e + H_a v_a + b - H_e v_e - H_a v_a - A x^+ = b. \quad (3.36)$$
We remark that if we put $v_e = 0_{m_1}$ and $v_a = 0_s$, we get $v = 0_m$ and $\rho = b - A x^+$, and we obtain the evident feasible point used in our numerical experiments, that is,
$$y = \begin{pmatrix} x \\ x_e \\ x_a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} x^+ \\ 0_{m_1} \\ 0_s \\ 1 \end{pmatrix}. \quad (3.37)$$
It is important here to cite two other special cases.
Case 1. If we choose the nonbasic components of $y$ equal to their bounds, then we obtain a BFS for the auxiliary problem; therefore, we can initialize the simplex algorithm with it.
Case 2. If we put $v_e = e(m_1)$ and $v_a = e(s)$, then $v = H_e e(m_1) + H_a e(s) = \sum_{i \in I_1} e_i + \sum_{i \in I_a} e_i$. Hence, the vector
$$y = \begin{pmatrix} x \\ x_e \\ x_a \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} x^+ \\ e(m_1) \\ e(s) \\ 1 \end{pmatrix} \quad (3.38)$$
is a feasible solution for the auxiliary problem (3.31), with $\rho = b - v - A x^+$.

Numerical Example
Consider the following LP problem:
$$\max c^T x, \quad \text{s.t. } A_1 x \le b_1,\ A_2 x = b_2,\ l \le x \le u, \quad (3.39)$$
where
$$A_1 = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & -3 \end{pmatrix}, \quad b_1 = \begin{pmatrix} 3 \\ 3 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 3 & 0 & 2 \\ 1 & 0 & 2 & 3 \end{pmatrix}, \quad b_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix},$$
$$c = (2, -3, -1, 1)^T, \quad l = 0_4, \quad u = (10, 10, 10, 10)^T, \quad x = (x_1, x_2, x_3, x_4)^T. \quad (3.40)$$
We put $M = 10^{10}$, $\text{pivtol} = 10^{-6}$, $x^+ = 0_4$, and we apply the two-phase primal support method using the single artificial variable technique to solve the problem (3.39).

Phase 1. After adding the slack variables $x_5$ and $x_6$ to the problem (3.39), we get the following problem in standard form:
$$\max z = c^T x, \quad \text{s.t. } Ax + H_e x_e = b, \quad l \le x \le u,\ 0_2 \le x_e \le u_e, \quad (3.41)$$
where
$$A = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & -3 \\ 0 & 3 & 0 & 2 \\ 1 & 0 & 2 & 3 \end{pmatrix}, \quad H_e = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 3 \\ 3 \\ 2 \\ 2 \end{pmatrix}, \quad x_e = \begin{pmatrix} x_5 \\ x_6 \end{pmatrix}, \quad u_e = \begin{pmatrix} M \\ M \end{pmatrix}. \quad (3.42)$$
The application of Procedure 1 to the problem (3.41) gives us the initial support $J_B = \{2, 1, 3, 5\}$. In order to find an initial feasible solution for the original problem, we add an artificial variable $x_7$ to the problem (3.41) and compute the vector $\rho$: $\rho = b - A x^+ = b$. Thus, we obtain the following auxiliary problem:
$$\max \psi = -x_7, \quad \text{s.t. } Ax + H_e x_e + \rho x_7 = b, \quad l \le x \le u,\ 0_2 \le x_e \le u_e,\ 0 \le x_7 \le 1. \quad (3.43)$$
Note that the pair $\{y, J_B\}$, where $y = (x_1, x_2, x_3, x_4, x_5, x_6, x_7)^T = (0, 0, 0, 0, 0, 0, 1)^T$ and $J_B = \{2, 1, 3, 5\}$, is an obvious support feasible solution for the auxiliary problem. Hence, the primal support method can be applied to solve the problem (3.43), starting with the SFS $\{y, J_B\}$.

Iteration 1. The initial support is $J_B = \{2, 1, 3, 5\}$, $J_N = \{4, 6, 7\}$; the initial feasible solution and the corresponding objective value are $y = (0, 0, 0, 0, 0, 0, 1)^T$ and $\psi = -1$.
The vector of multipliers is $\pi^T = (0, 0, 0, 0)$, and $\Delta_N^T = (\Delta_4, \Delta_6, \Delta_7) = (0, 0, 1)$. Hence, the reduced costs vector is $\Delta = (0, 0, 0, 0, 0, 0, 1)^T$.
The suboptimality estimate is $\beta(y, J_B) = \Delta_7 (x_7 - l_7) = 1 > 0$, so the current solution is not optimal. The set of nonoptimal indices is $J_{NNO} = \{7\}$, hence $j_0 = 7$.
In order to improve the objective function, we compute the search direction $d$: we have $d_7 = -\operatorname{sign} \Delta_7 = -1$, so $d_N = (d_4, d_6, d_7)^T = (0, 0, -1)^T$ and $d_B = -A_B^{-1} A_N d_N = A_B^{-1} \rho = (2/3, 3/2, 1/4, 11/4)^T$, in the order of $J_B = \{2, 1, 3, 5\}$. Hence, the search direction is $d = (3/2, 2/3, 1/4, 0, 11/4, 0, -1)^T$.
Since $d_j > 0$ for all $j \in J_B$, we have $\theta_j = (u_j - x_j)/d_j$, $j \in J_B$. So $\theta_2 = 15$, $\theta_1 = 20/3$, $\theta_3 = 40$, and $\theta_5 = 4M/11$; hence $\theta_{j_1} = \theta_1 = 20/3$, and $\theta_{j_0} = \theta_7 = x_7 - l_7 = 1$. The primal step length is therefore $\theta^0 = \min\{\theta_1, \theta_7\} = 1 = \theta_7$. The new solution is $\bar{y} = y + \theta^0 d = (3/2, 2/3, 1/4, 0, 11/4, 0, 0)^T$, and the new objective value is $\bar\psi = \psi + \theta^0 |\Delta_7| = 0$, so $\bar{y}$ is optimal for the auxiliary problem. Therefore, the pair $\{\mathbf{x}, J_B\}$, where $\mathbf{x} = (3/2, 2/3, 1/4, 0, 11/4, 0)^T$ and $J_B = \{2, 1, 3, 5\}$, is an SFS for the problem (3.41).

Phase 2. Iteration 1. The initial support is $J_B = \{2, 1, 3, 5\}$, and $J_N = \{4, 6\}$.
The initial feasible solution is $\mathbf{x} = (3/2, 2/3, 1/4, 0, 11/4, 0)^T$, and the objective value is $z = 3/4$.
The multipliers vector and the reduced costs vector are $\pi^T = (0, 5/4, -1, -1/2)$ and $\Delta = (0, 0, 0, -33/4, 0, 5/4)^T$. The suboptimality estimate is $\beta(\mathbf{x}, J_B) = \Delta_4 (x_4 - u_4) + \Delta_6 (x_6 - l_6) = 165/2 > 0$, so the initial solution is not optimal.
The set of nonoptimal indices is $J_{NNO} = \{4\}$; the entering index is $j_0 = 4$.
The search direction is $d = (3/2, -2/3, -9/4, 1, 1/4, 0)^T$.
The primal step length is $\theta^0 = \min\{\theta_3, \theta_4\} = \min\{1/9, 10\} = 1/9 = \theta_3$, so the leaving index is $j_1 = 3$. The new solution is $\bar{\mathbf{x}} = \mathbf{x} + \theta^0 d = (5/3, 16/27, 0, 1/9, 25/9, 0)^T$, and $\bar{z} = z + \theta^0 |\Delta_4| = 5/3$.
Iteration 2. The current support is 𝐽𝐵={2,1,4,5}, and 𝐽𝑁={3,6}.
The current feasible solution is 𝐱=(5/3,16/27,0,1/9,25/9,0)𝑇, and the objective value is 𝑧=5/3.
The multipliers vector and the reduced costs vector are $\pi^T = (0, 1/3, -1, 4/3)$ and $\Delta = (0, 0, 11/3, 0, 0, 1/3)^T$. The suboptimality estimate is $\beta(\bar{\mathbf{x}}, J_B) = \Delta_3 (x_3 - l_3) + \Delta_6 (x_6 - l_6) = 0$.
Therefore, the optimal solution and the optimal objective value of the original problem (3.39) are
$$\mathbf{x}^* = \left( \frac{5}{3}, \frac{16}{27}, 0, \frac{1}{9} \right)^T, \quad z^* = \frac{5}{3}. \quad (3.44)$$

4. Experimental Results

In order to perform a numerical comparison between the simplex method and the different variants of the support method, we have programmed them under the MATLAB programming language version 7.4.0 (R2007a).

Since the test problems available in the NETLIB library may have infinite bounds on the variables, we have built a sample of 68 test problems having finite bounds and the same optimal objective values as those of NETLIB. The principle of the building process is as follows: let $x^l$ and $x^u$ be the bounds obtained after converting a given test problem from the "mps" format to the "mat" format, and let $x^*$ be the optimal solution. We put $x_{\min} = \min\{x_j^*,\ j = \overline{1,p}\}$ and $x_{\max} = \max\{x_j^*,\ j = \overline{1,p}\}$. Thus, the constraint matrix and the objective coefficient vector of the new problem remain the same as those of NETLIB, but the new lower bounds $\tilde{l}$ and upper bounds $\tilde{u}$ are changed as follows:
$$\tilde{l}_j = \begin{cases} x_{\min}, & \text{if } x_j^l = -\infty; \\ x_j^l, & \text{if } x_j^l > -\infty, \end{cases} \qquad \tilde{u}_j = \begin{cases} x_{\max}, & \text{if } x_j^u = +\infty; \\ x_j^u, & \text{if } x_j^u < +\infty. \end{cases} \quad (4.1)$$
Table 1 lists the main characteristics of the 68 test problems considered, where NC, NV, NEC, and $D$ represent, respectively, the number of constraints, the number of variables, the number of equality constraints, and the density of the constraint matrix (the ratio between the number of nonzero elements and the total number of elements of the constraint matrix, multiplied by 100).
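Rule (4.1) amounts to clipping only the infinite entries of the bound vectors; a small sketch in Python (the function name is ours):

```python
import numpy as np

def finite_bounds(xl, xu, x_opt):
    """Rule (4.1): replace each infinite lower (resp. upper) bound by the
    smallest (resp. largest) component of the known optimal solution,
    leaving finite bounds untouched."""
    x_min, x_max = float(np.min(x_opt)), float(np.max(x_opt))
    lt = np.where(np.isneginf(xl), x_min, xl)
    ut = np.where(np.isposinf(xu), x_max, xu)
    return lt, ut
```

Because the known optimal solution stays inside the tightened box, the modified problem keeps the same optimal objective value.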

tab1
Table 1: Characteristics of the test problems.

Many LP solvers include a primal simplex algorithm for solving LP problems; we cite commercial solvers such as CPLEX [37] and MINOS [28], and open-source solvers such as LP_SOLVE [38], GLPK [39], and LPAKO [30]. The LP_SOLVE solver is well documented and widely used in applications and numerical experiments [40, 41]. Moreover, it has a MEX interface called mxlpsolve which can easily be installed and used with the MATLAB language. That is why we compare our algorithm with the primal simplex algorithm implemented in LP_SOLVE. However, MATLAB is an interpreted language, so it takes much more time to solve problems than the native C++ code used in the implementation of LP_SOLVE. For this reason, the numerical comparison carried out between our method and LP_SOLVE concerns only the number of iterations. In order to compare our algorithm with the primal simplex algorithm in terms of CPU time, we have developed our own simplex implementation, based on the one given in [42].

In order to compare the different solvers, we have given them the following names.

Solver1. SupportSav
The two-phase support method, where we use the suggested single artificial variable technique in the first phase, that is, the initial support is found by applying Procedure 1, and the feasible solution is found by applying Procedure 2.

Solver2. SupportFab
The two-phase support method, where we use the full artificial basis technique presented above in the first phase.

Solver3. SimplexFab
The two-phase primal simplex method, where we use the full artificial basis technique presented above in the first phase (we use the BFS presented in Remark 3.3, with 𝑥+=̃𝑙).

Solver4. LP_Solve_PSA
The primal simplex method implemented in the open-source LP solver LP_SOLVE. The different options are (Primal-Primal, PRICER_DANTZIG, No Scaling, No Presolving). For setting these options, the following MATLAB commands are used.(i)mxlpsolve(“set_simplextype”, lp, 5); % (Phase 1: Primal, Phase 2: Primal).(ii)mxlpsolve(“set_pivoting”, lp, 1); % (Dantzig’s rule).(iii)mxlpsolve(“set_scaling”, lp, 0); % (No scaling).

The Lagrange multipliers vector and the basic components of the search direction in the first three solvers are computed by solving the two systems of linear equations: 𝐴𝑇𝐵𝜋=𝑐𝐵 and 𝐴𝐵𝑑𝐵=𝑎𝑗0𝑑𝑗0, using the 𝐿𝑈 factorization of 𝐴𝐵 [43]. The updating of the 𝐿 and 𝑈 factors is done, in each iteration, using the Sherman-Morrison-Woodbury formula [42].
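The two solves performed at each iteration can be sketched as follows in Python (a dense illustration with `numpy.linalg.solve`; the implementation described in the paper instead reuses one LU factorization of $A_B$ for both systems and updates it between iterations):

```python
import numpy as np

def multipliers_and_direction(A_B, c_B, a_j0, d_j0):
    """The two linear systems of each iteration:
    A_B^T pi = c_B for the multipliers, and
    A_B d_B = -a_j0 * d_j0 for the basic part of the search direction."""
    pi = np.linalg.solve(A_B.T, c_B)
    d_B = np.linalg.solve(A_B, -a_j0 * d_j0)
    return pi, d_B
```

Sharing one factorization between both solves is what makes the LU approach cheaper than two independent solves.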

In the different implementations, we have used Dantzig's rule to choose the entering variable. To prevent cycling, the EXPAND procedure [44] is used.

We have solved the 68 NETLIB test problems listed in Table 1 with the solvers mentioned above on a personal computer with an Intel (R) Core (TM) 2 Quad CPU Q6600 2.4 GHz, 4 GB of RAM, working under the Windows XP SP2 operating system, by setting the different tolerances appropriately. We recall that the initial point $x^+$, needed in the three methods (Solvers 1, 2, and 3), must lie between its lower and upper bounds; after giving it several values and observing the variation of the CPU time, we concluded that it is more efficient to set $x^+ = \tilde{l}$. The upper bounds for the slack and artificial variables added in the methods SupportSav, SupportFab, and SimplexFab are set to $M = 10^{10}$.

Numerical results are reported in Tables 2, 3, 4, and 5, where nit1, nit, and cput denote, respectively, the number of phase-one iterations, the total number of iterations, and the CPU time in seconds needed by each method to reach the optimal objective values reported in [25] or [45]; cput1, shown in columns 2 and 7 of Table 2, is the CPU time needed to find the initial support with Procedure 1. The number of artificial variables added to the original problem in our implementations is listed in Table 6.

Table 2: CPU time and iterations number for Solver1 (SupportSav).
Table 3: CPU time and iterations number for Solver2 (SupportFab).
Table 4: CPU time and iterations number for Solver3 (SimplexFab).
Table 5: Iterations number for Solver4 (LP_SOLVE_PSA).
Table 6: The number of artificial variables added in Phase one of the three Solvers.

In order to compare the different solvers, we have ordered the problems by increasing sum of the numbers of constraints and variables (NC+NV), as they appear in the different tables, and we have computed the CPU time ratio (Table 7) and the iteration ratio (Table 8) of the different solvers over the solver SupportSav (Solver1), where for each test problem we have

cputr1j = cput(Solverj) / cput(Solver1), j = 2, 3,
nitr1j = nit(Solverj) / nit(Solver1), j = 2, 3, 4.   (4.2)

The above ratios (see [46]) indicate how many times SupportSav is better than the other solvers; ratios greater than one mean that our method is more efficient for the considered problem. Let S be one of the solvers 2, 3, and 4. If cputr1S ≥ 1 (resp., nitr1S ≥ 1), then our solver (Solver1) is competitive with SolverS in terms of CPU time (resp., in terms of iterations number). Ratios greater than or equal to one are shown in bold in Tables 7 and 8.
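The per-problem ratios of (4.2) amount to dividing each solver's metric by SupportSav's. A short sketch with made-up counts (the numbers below are illustrative, not taken from the paper's tables):

```python
def perf_ratios(metric, base="SupportSav"):
    # Ratio of each solver's metric over the base solver on one problem;
    # values >= 1 mean the base solver did at least as well.
    return {s: v / metric[base] for s, v in metric.items() if s != base}

# Illustrative per-problem iteration counts for the four solvers.
nit = {"SupportSav": 100, "SupportFab": 150, "SimplexFab": 180, "LP_SOLVE_PSA": 120}
nitr = perf_ratios(nit)
```

Here every ratio exceeds 1, so on this hypothetical problem SupportSav would be counted as competitive with all three other solvers.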

Table 7: CPU time ratio of the different solvers over SupportSav.
Table 8: Iteration ratio of the different solvers over SupportSav.

We plot the CPU time ratios of the solvers SupportFab and SimplexFab over SupportSav (Figures 1(a) and 1(c)), and the iteration ratios of each solver over SupportSav (Figures 1(b), 1(d), and 1(e)). Ratios greater than one correspond to the points above the line y = 1 in the graphs of Figure 1.

Figure 1: Ratios of the different solvers over SupportSav.

The analysis of the different tables and graphs leads us to the following observations.
(i) In terms of iterations number, SupportSav is competitive with SupportFab in solving 74% of the test problems, with an average iteration ratio of 1.33. In terms of CPU time, it is competitive with SupportFab in solving 66% of the test problems, with an average CPU time ratio of 1.20. In particular, for the LP "scagr25" (Problem number 32), SupportSav takes 5.14 times fewer iterations than SupportFab and solves the problem 4.55 times faster in terms of CPU time.
(ii) In terms of iterations number, SupportSav is competitive with SimplexFab in solving 79% of the test problems, with an average iteration ratio of 1.57. In terms of CPU time, it is competitive with SimplexFab in solving 68% of the test problems, with an average CPU time ratio of 1.35. The peaks of the ratio graphs correspond to the problems where our approach is 2 to 5 times faster than SimplexFab. In particular, for the LP "capri" (Problem number 21), SupportSav takes 5.29 times fewer iterations than SimplexFab and solves the problem 3.99 times faster in terms of CPU time.
(iii) In terms of iterations number, SupportSav is competitive with LP_SOLVE_PSA in solving 72% of the test problems, with an average iteration ratio of 1.49 (the majority of iteration ratios are above the line y = 1 in Figure 1(e)). In particular, for the LP "scsd8" (Problem number 55), our method takes 9.58 times fewer iterations than the primal simplex implementation of the open-source solver LP_SOLVE.
(iv) We remark from the last row of Table 6 that the total number of artificial variables added in order to find the initial support for SupportSav (13234) is considerably smaller than the totals for SimplexFab (27389) and SupportFab (27389).
(v) Summing the iterations needed to solve the 68 LPs for each solver, we find 143737 for SupportSav, 159306 for SupportFab, 163428 for SimplexFab, and 158682 for LP_SOLVE_PSA. The total number of iterations is therefore smallest for our solver.

5. Conclusion

In this work, we have proposed a single artificial variable technique to initialize the primal support method with bounded variables. An implementation under the MATLAB environment has been developed. In this implementation, we use the LU factorization of the basic matrix to solve the linear systems and the Sherman-Morrison-Woodbury formula to update the LU factors. We have then compared our approach (SupportSav) with the full artificial basis support method (SupportFab), the full artificial basis simplex method (SimplexFab), and the primal simplex implementation of the open-source solver LP_SOLVE (LP_SOLVE_PSA). The numerical results obtained are encouraging: the suggested method (SupportSav) is competitive with SupportFab, SimplexFab, and LP_SOLVE_PSA.

Note that during our experiments, we have observed that varying the initial support and the initial point x⁺ affects the performance (CPU time and iterations number) of the single artificial variable variant of the support method. This raises the question: how should the initial point and the initial support be chosen in order to improve the efficiency of the support method? In future work, we will try to apply crash procedures like those proposed in [25, 27] in order to initialize the support method with a good initial support. Furthermore, we will try to implement modern sparse linear algebra techniques to update the LU factors [47].

References

  1. L. V. Kantorovich, Mathematical Methods in the Organization and Planning of Production, Publication House of the Leningrad State University, 1939; translated in Management Science, vol. 6, pp. 366–422, 1960.
  2. G. B. Dantzig, “Maximization of a linear function of variables subject to linear inequalities,” in Activity Analysis of Production and Allocation, T. C. Koopmans, Ed., pp. 339–347, Wiley, New York, NY, USA, 1951.
  3. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, USA, 1963.
  4. V. Klee and G. J. Minty, “How good is the simplex algorithm?” in Inequalities III, O. Shisha, Ed., pp. 159–175, Academic Press, New York, NY, USA, 1972.
  5. R. Gabasov and F. M. Kirillova, Methods of Linear Programming, vol. 1–3, The Minsk University, 1977, 1978 and 1980.
  6. S. Radjef and M. O. Bibi, “An effective generalization of the direct support method,” Mathematical Problems in Engineering, vol. 2011, Article ID 374390, 18 pages, 2011.
  7. R. Gabasov, F. M. Kirillova, and S. V. Prischepova, Optimal Feedback Control, Springer, London, UK, 1995.
  8. R. Gabasov, F. M. Kirillova, and O. I. Kostyukova, “Solution of linear quadratic extremal problems,” Soviet Mathematics Doklady, vol. 31, pp. 99–103, 1985.
  9. R. Gabasov, F. M. Kirillova, and O. I. Kostyukova, “A method of solving general linear programming problems,” Doklady AN BSSR, vol. 23, no. 3, pp. 197–200, 1979 (Russian).
  10. R. Gabasov, F. M. Kirillova, and A. I. Tyatyushkin, Constructive Methods of Optimization. Part 1: Linear Problems, Universitetskoje, Minsk, Russia, 1984.
  11. E. A. Kostina and O. I. Kostyukova, “An algorithm for solving quadratic programming problems with linear equality and inequality constraints,” Computational Mathematics and Mathematical Physics, vol. 41, no. 7, pp. 960–973, 2001.
  12. E. Kostina, “The long step rule in the bounded-variable dual simplex method: numerical experiments,” Mathematical Methods of Operations Research, vol. 55, no. 3, pp. 413–429, 2002.
  13. M. O. Bibi, “Support method for solving a linear-quadratic problem with polyhedral constraints on control,” Optimization, vol. 37, no. 2, pp. 139–147, 1996.
  14. B. Brahmi and M. O. Bibi, “Dual support method for solving convex quadratic programs,” Optimization, vol. 59, no. 6, pp. 851–872, 2010.
  15. M. Bentobache and M. O. Bibi, “Two-phase adaptive method for solving linear programming problems with bounded variables,” in Proceedings of the YoungOR 17, pp. 50–51, University of Nottingham, UK, 2011.
  16. M. Bentobache and M. O. Bibi, “Adaptive method with hybrid direction: theory and numerical experiments,” in Proceedings of the Optimization, pp. 24–27, Universidade Nova de Lisboa, Portugal, 2011.
  17. M. O. Bibi and M. Bentobache, “The adaptive method with hybrid direction for solving linear programming problems with bounded variables,” in Proceedings of the Colloque International sur l’Optimisation et les Systèmes d’Information (COSI’11), pp. 80–91, University of Guelma, Algeria, 2011.
  18. M. O. Bibi and M. Bentobache, “A hybrid direction algorithm for solving linear programs,” in Proceedings of the International Conference on Discrete Mathematics & Computer Science (DIMACOS’11), pp. 28–30, University of Mohammedia, Morocco, 2011.
  19. L. G. Khachian, “A polynomial algorithm for linear programming,” Soviet Mathematics Doklady, vol. 20, pp. 191–194, 1979.
  20. N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
  21. S. I. Gass, Linear Programming: Methods and Applications, McGraw-Hill, New York, NY, USA, 1964.
  22. C. M. Shetty, “A simplified procedure for quadratic programming,” Operations Research, vol. 11, pp. 248–260, 1963.
  23. C. B. Millham, “Fast feasibility methods for linear programming,” Opsearch, vol. 13, pp. 198–204, 1976.
  24. P. Wolfe, “The composite simplex algorithm,” SIAM Review, vol. 7, no. 1, pp. 42–54, 1965.
  25. R. E. Bixby, “Implementing the simplex method: the initial basis,” ORSA Journal on Computing, vol. 4, no. 3, pp. 1–18, 1992.
  26. N. I. M. Gould and J. K. Reid, “New crash procedures for large systems of linear constraints,” Mathematical Programming, vol. 45, no. 1–3, pp. 475–501, 1989.
  27. I. Maros and G. Mitra, “Strategies for creating advanced bases for large-scale linear programming problems,” INFORMS Journal on Computing, vol. 10, no. 2, pp. 248–260, 1998.
  28. B. A. Murtagh and M. A. Saunders, “MINOS 5.5 User’s Guide,” Tech. Rep. SOL 83–20, Systems Optimization Laboratory, Stanford University, Stanford, Calif, USA, 1998.
  29. W. Orchard-Hays, Advanced Linear-Programming Computing Techniques, McGraw-Hill, New York, NY, USA, 1968.
  30. S. Lim and S. Park, “LPAKO: a simplex-based linear programming program,” Optimization Methods and Software, vol. 17, no. 4, pp. 717–745, 2002.
  31. H. D. Sherali, A. L. Soyster, and S. G. Baines, “Nonadjacent extreme point methods for solving linear programs,” Naval Research Logistics Quarterly, vol. 30, no. 1, pp. 145–161, 1983.
  32. M. Bentobache, A new method for solving linear programming problems in canonical form and with bounded variables, M.S. thesis, University of Bejaia, Algeria, 2005.
  33. M. Bentobache and M. O. Bibi, “Two-phase support method for solving linear programming problems with nonnegative variables: numerical experiments,” in Proceedings of the Colloque International sur l’Optimisation et les Systèmes d’Information (COSI’08), pp. 314–325, University of Tizi Ouzou, Algeria, 2008.
  34. M. Bentobache and M. O. Bibi, “Two-phase support method for solving linear programming problems with bounded variables: numerical experiments,” in Proceedings of the Colloque International sur l’Optimisation et les Systèmes d’Information (COSI’09), pp. 109–120, University of Annaba, Algeria, 2009.
  35. G. Mitra, M. Tamiz, and J. Yadegar, “Experimental investigation of an interior search method within a simplex framework,” Communications of the ACM, vol. 31, no. 12, pp. 1474–1482, 1988.
  36. R. J. Vanderbei, Linear Programming: Foundations and Extensions, Kluwer Academic Publishers, Princeton University, 2001.
  37. ILOG CPLEX, “9.0 User’s Manual,” 2003, http://www.ilog.com.
  38. “LP_SOLVE,” http://sourceforge.net/projects/lpsolve/files/lpsolve.
  39. A. Makhorin, “GNU Linear Programming Kit,” Reference Manual, Version 4.9, Draft Edition, 2006, http://www.gnu.org/software/glpk/glpk.html.
  40. P. G. García and Á. Santos-Palomo, “A deficient-basis dual counterpart of Paparrizos, Samaras and Stephanides’s primal-dual simplex-type algorithm,” in Proceedings of the 2nd Conference on Optimization Methods & Software and 6th EUROPT Workshop on Advances in Continuous Optimization, Prague, Czech Republic, 2007.
  41. S. R. Thorncraft, H. R. Outhred, and D. J. Clements, “Evaluation of open-source LP optimization codes in solving electricity spot market optimization problems,” in Proceedings of the 19th Mini-Euro Conference on Operations Research Models and Methods in the Energy Sector, Coimbra, Portugal, 2006.
  42. M. C. Ferris, O. L. Mangasarian, and S. J. Wright, Linear Programming with MATLAB, MPS-SIAM Series on Optimization, 2007.
  43. R. H. Bartels and G. H. Golub, “The simplex method of linear programming using LU decomposition,” Communications of the ACM, vol. 12, no. 5, pp. 266–268, 1969.
  44. P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, “A practical anti-cycling procedure for linearly constrained optimization,” Mathematical Programming, vol. 45, no. 1–3, pp. 437–474, 1989.
  45. D. M. Gay, “Electronic mail distribution of linear programming test problems,” Mathematical Programming Society COAL Bulletin, no. 13, 1985, http://www.netlib.org/lp/data.
  46. K. Paparrizos, N. Samaras, and G. Stephanides, “An efficient simplex type algorithm for sparse and dense linear programs,” European Journal of Operational Research, vol. 148, no. 2, pp. 323–334, 2003.
  47. J. J. H. Forrest and J. A. Tomlin, “Updated triangular factors of the basis to maintain sparsity in the product form simplex method,” Mathematical Programming, vol. 2, no. 1, pp. 263–278, 1972.