Mathematical Problems in Engineering
Volume 2011, Article ID 374390, 18 pages
http://dx.doi.org/10.1155/2011/374390
Research Article

An Effective Generalization of the Direct Support Method

Sonia Radjef1 and Mohand Ouamer Bibi2

1Department of Mathematics, Faculty of Sciences, USTOMB, Oran 31000, Algeria
2Department of Operations Research, LAMOS Laboratory, University of Béjaia, Béjaia 06000, Algeria

Received 4 November 2010; Accepted 17 February 2011

Academic Editor: Ezzat G. Bakhoum

Copyright © 2011 Sonia Radjef and Mohand Ouamer Bibi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The main objective of this paper is to solve a problem encountered in an industrial firm: the design of a weekly production plan that optimizes the quantities to be launched into production. One of the problems raised in that company can be modeled as a linear multiobjective program where the decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative. During the resolution of the multiobjective case, we were faced with the necessity of developing an effective method to solve the single-objective case without any increase in the size of the linear program, since the industrial instance to be solved is already very large. We therefore propose the extension of the direct support method presented in this paper. Its particularity is that it avoids the preliminary transformation of the decision variables: it handles the bounds as they are initially formulated. The method is effective, simple to use, and speeds up the resolution process.

1. Introduction

The company Ifri is one of the largest and most important Algerian companies in the agri-food sector. Ifri mainly produces mineral water and various drinks.

From January to October 2003, the company produced about 175 million bottles. Expressed in liters, the production over this period exceeded 203 million liters of finished products (all products included). Having covered the national market demand, Ifri turned to the acquisition of new international markets.

The main objective of our application [1] is to build a data-processing application that computes an optimal weekly production plan, replacing a planning practice based primarily on the good judgment and experience of the decision makers.

This problem amounts to optimizing the quantities to be launched into production. It is modeled as a multiobjective linear program where the objective functions and the constraints are linear, and the decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative.

Multicriteria optimization problems are a class of difficult optimization problems in which several different objective functions have to be considered at the same time. It is seldom the case that a single point optimizes all the objective functions simultaneously. Therefore, we search for the so-called efficient points, that is, feasible points having the property that no other feasible point improves one criterion without deteriorating at least one other.

In [2], we developed a method to solve the multiobjective linear programming problem described above. To avoid the preliminary transformation of the constraints, and hence the augmentation of the problem dimension, we proposed to extend the direct support method of Gabasov et al. [3], known in single-objective programming.

In [2], we proposed a procedure for finding an initial efficient extreme point, a procedure to test the efficiency of a nonbasic variable, and a method to compute all the efficient extreme points, the weakly efficient extreme points, and the 𝜖-weakly efficient extreme points of the problem.

A multiobjective linear program in which the two types of decision variables coexist can be presented in the following canonical form:
$$\max\ Cx+Qy,\qquad Ax+Hy=b,\qquad d^{-}\le x\le d^{+},\qquad y\ge 0,\tag{1.1}$$
where $C$ and $Q$ are $k\times n_x$ and $k\times n_y$ matrices, respectively, $A$ and $H$ are matrices of order $m\times n_x$ and $m\times n_y$, respectively, with $\operatorname{rank}(A\ H)=m<n_x+n_y$, $b\in\mathbb{R}^m$, $d^{-}\in\mathbb{R}^{n_x}$, and $d^{+}\in\mathbb{R}^{n_x}$.

We denote by $S$ the set of feasible decisions:
$$S=\bigl\{(x,y)\in\mathbb{R}^{n_x+n_y}:\ Ax+Hy=b,\ d^{-}\le x\le d^{+},\ y\ge 0\bigr\}.\tag{1.2}$$

Definition 1.1. A feasible decision $(x^0,y^0)\in\mathbb{R}^{n_x+n_y}$ is said to be efficient (or Pareto optimal) for the problem (1.1) if there is no other feasible solution $(x,y)\in S$ such that $Cx+Qy\ge Cx^0+Qy^0$ and $Cx+Qy\ne Cx^0+Qy^0$.

Definition 1.2. A feasible decision $(x^0,y^0)\in\mathbb{R}^{n_x+n_y}$ is said to be weakly efficient (or Slater optimal) for the problem (1.1) if there is no other feasible solution $(x,y)\in S$ such that $Cx+Qy>Cx^0+Qy^0$.

Definition 1.3. Let $\epsilon\in\mathbb{R}^k$, $\epsilon\ge 0$. A feasible decision $(x^\epsilon,y^\epsilon)\in S$ is said to be $\epsilon$-weakly efficient for the problem (1.1) if there is no other feasible solution $(x,y)\in S$ such that $Cx+Qy-Cx^\epsilon-Qy^\epsilon>\epsilon$.

Multiobjective linear programming consists of determining the whole set of efficient decisions, weakly efficient decisions, and $\epsilon$-weakly efficient decisions of problem (1.1) for given $C, Q, A, H, b, d^{-}$, and $d^{+}$.
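To make these definitions concrete, here is a minimal sketch that flags, in a finite list of candidate criterion vectors, those that no other candidate dominates. The function names and the toy data are ours, and the test only illustrates Definition 1.1 on a finite set of candidates, not over the whole polyhedron $S$.

```python
import numpy as np

def dominates(za, zb):
    """True if criterion vector za dominates zb in the Pareto sense:
    za >= zb componentwise with at least one strict inequality."""
    return np.all(za >= zb) and np.any(za > zb)

def efficient_mask(Z):
    """Mark the rows of Z (one criterion vector Cx+Qy per candidate)
    that no other row dominates -- Definition 1.1 on a finite set."""
    n = len(Z)
    return np.array([not any(dominates(Z[i], Z[j])
                             for i in range(n) if i != j)
                     for j in range(n)])

# Toy usage with k = 2 criteria: the third point is dominated by the others.
Z = np.array([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
print(efficient_mask(Z))   # [ True  True False]
```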

During the resolution process, we need to use an efficiency test of nonbasic variables. This test can be formulated as a single-objective linear program where the decision variables are of two types: upper and lower bounded variables and nonnegative variables. We propose in this paper to solve this latter problem by an adapted direct support method. Our approach is based on the principle of the methods developed by Gabasov et al. [3], which solve either a single-objective linear program with nonnegative decision variables or a single-objective linear program with bounded decision variables. Our work proposes a generalization to the single-objective linear program with both types of decision variables: upper and lower bounded variables and nonnegative variables.

This work is devoted to presenting this method. Its particularity is that it avoids the preliminary transformation of the decision variables: it handles the constraints of the problem as they are initially formulated. The method is effective, simple to use, and direct. It treats problems in a natural way and speeds up the whole resolution process, generating an important gain in memory space and CPU time. Furthermore, the method integrates a suboptimality criterion that permits stopping the algorithm at a desired accuracy. To the best of our knowledge, no other linear programming method uses this criterion, which could be useful in practical applications.

The principle of this iterative method is simple: starting with an initial feasible solution and an initial support, each iteration consists of finding an ascent direction and a step along this direction that improve the value of the objective function without leaving the feasible set of the problem. The initial feasible solution and the initial support can be computed independently. In addition, the initial feasible point need not be an extreme point, as it must be in the simplex method. The details of our multiobjective method will be presented in future work.

2. Statement of the Problem and Definitions

The canonical form of the program is as follows:
$$z(x,y)=c^t x+k^t y\to\max,\tag{2.1}$$
$$Ax+Hy=b,\tag{2.2}$$
$$d^{-}\le x\le d^{+},\tag{2.3}$$
$$y\ge 0,\tag{2.4}$$
where $c$ and $x$ are $n_x$-vectors, $k$ and $y$ are $n_y$-vectors, $b$ is an $m$-vector, $A=A(I,J_x)$ is an $m\times n_x$ matrix, and $H=H(I,J_y)$ is an $m\times n_y$ matrix, with $\operatorname{rank}(A\ H)=m<n_x+n_y$; $I=\{1,2,\dots,m\}$, $J_x=\{1,2,\dots,n_x\}$, $J_y=\{n_x+1,n_x+2,\dots,n_x+n_y\}$.

Let us set $J=J_x\cup J_y$, with $J_x=J_{xB}\cup J_{xN}$ and $J_y=J_{yB}\cup J_{yN}$, where $J_{xB}\cap J_{xN}=\emptyset$, $J_{yB}\cap J_{yN}=\emptyset$, and $|J_{xB}|+|J_{yB}|=m$.

We set $J_B=J_{xB}\cup J_{yB}$, $J_N=J\setminus J_B=J_{xN}\cup J_{yN}$, and we denote by $A_H$ the $m\times(n_x+n_y)$ matrix $(A\ H)$.

Let the vectors and the matrices be partitioned in the following way:
$$x=x(J_x)=(x_j,\ j\in J_x),\qquad y=y(J_y)=(y_j,\ j\in J_y),$$
$$x=\binom{x_B}{x_N},\qquad x_B=x(J_{xB})=(x_j,\ j\in J_{xB}),\qquad x_N=x(J_{xN})=(x_j,\ j\in J_{xN}),$$
$$y=\binom{y_B}{y_N},\qquad y_B=y(J_{yB})=(y_j,\ j\in J_{yB}),\qquad y_N=y(J_{yN})=(y_j,\ j\in J_{yN}),$$
$$c=\binom{c_B}{c_N},\qquad c_B=c(J_{xB})=(c_j,\ j\in J_{xB}),\qquad c_N=c(J_{xN})=(c_j,\ j\in J_{xN}),$$
$$k=\binom{k_B}{k_N},\qquad k_B=k(J_{yB})=(k_j,\ j\in J_{yB}),\qquad k_N=k(J_{yN})=(k_j,\ j\in J_{yN}),$$
$$A=A(I,J_x)=(a_{ij},\ 1\le i\le m,\ 1\le j\le n_x)=(a_j,\ j\in J_x)=(A_B\ A_N),$$
$$A_B=A(I,J_{xB}),\qquad A_N=A(I,J_{xN}),\qquad a_j\ \text{the }j\text{th column of }A,$$
$$H=H(I,J_y)=(h_{ij},\ 1\le i\le m,\ n_x+1\le j\le n_x+n_y)=(h_j,\ j\in J_y)=(H_B\ H_N),$$
$$H_B=H(I,J_{yB}),\qquad H_N=H(I,J_{yN}),\qquad h_j\ \text{the }j\text{th column of }H,$$
$$A_H=A_H(I,J)=(A\ H)=(A_{HB}\ A_{HN}),\qquad A_{HB}=A_H(I,J_{xB}\cup J_{yB})=(A_B\ H_B),\qquad A_{HN}=A_H(I,J_{xN}\cup J_{yN})=(A_N\ H_N).\tag{2.5}$$
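In code, this partitioning is pure index bookkeeping. A minimal sketch with numpy, assuming the column-ordering convention above (the $x$-columns first, then the $y$-columns, here 0-based) and borrowing the data of the example of Section 8:

```python
import numpy as np

# Data of the example in Section 8 (n_x = n_y = m = 2), 0-based indices.
A = np.array([[1.0, -1.0], [-7.0, 1.0]])   # m x n_x
H = np.array([[3.0,  2.0], [ 2.0, 3.0]])   # m x n_y
A_H = np.hstack([A, H])                    # the matrix A_H = (A H)
n_x, n = A.shape[1], A_H.shape[1]

J_xB, J_yB = [0], [2]                      # a support: |J_xB| + |J_yB| = m
J_B = J_xB + J_yB
J_N = [j for j in range(n) if j not in J_B]

A_HB = A_H[:, J_B]                         # (A_B H_B), the basic columns
A_HN = A_H[:, J_N]                         # (A_N H_N), the nonbasic columns
assert abs(np.linalg.det(A_HB)) > 1e-12    # support condition: det A_HB != 0
```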

Definition 2.1. (i) A vector $(x,y)$ satisfying the constraints (2.2)–(2.4) is called a feasible solution of the problem (2.1)–(2.4).
(ii) A feasible solution $(x^0,y^0)$ is said to be optimal if $z(x^0,y^0)=c^t x^0+k^t y^0=\max(c^t x+k^t y)$, where $(x,y)$ is taken among all the feasible solutions of the problem (2.1)–(2.4).
(iii) A feasible solution $(x^\epsilon,y^\epsilon)$ is called $\epsilon$-optimal or suboptimal if
$$z(x^0,y^0)-z(x^\epsilon,y^\epsilon)=c^t x^0-c^t x^\epsilon+k^t y^0-k^t y^\epsilon\le\epsilon,\tag{2.6}$$
where $(x^0,y^0)$ is an optimal solution of the problem (2.1)–(2.4) and $\epsilon$ is a nonnegative number fixed in advance.
(iv) The set $J_B=J_{xB}\cup J_{yB}\subset J$, $|J_B|=m$, is called a support if $\det A_{HB}=\det(A_B\ H_B)\ne 0$.
(v) A pair $\{(x,y),(J_{xB},J_{yB})\}$ formed by a feasible solution $(x,y)$ and a support $(J_{xB},J_{yB})$ is called a support feasible solution.
(vi) A support feasible solution is said to be nondegenerate if
$$d_j^{-}<x_j<d_j^{+},\ \ j\in J_{xB},\qquad y_j>0,\ \ j\in J_{yB}.\tag{2.7}$$

3. Increment Formula of the Objective Function

Let $\{(x,y),(J_{xB},J_{yB})\}$ be a support feasible solution for the problem (2.1)–(2.4), and let us consider any other feasible solution $(\bar x,\bar y)=(x+\Delta x,\ y+\Delta y)$.

We define two subsets $J_{yN}^{+}$ and $J_{yN}^{0}$ of $J_{yN}$ as follows:
$$J_{yN}^{+}=\{j\in J_{yN}:\ y_j>0\},\qquad J_{yN}^{0}=\{j\in J_{yN}:\ y_j=0\}.\tag{3.1}$$
The increment of the objective function is
$$\Delta z=-\bigl[(c_B^t,k_B^t)A_{HB}^{-1}A_N-c_N^t\bigr]\Delta x_N-\bigl[(c_B^t,k_B^t)A_{HB}^{-1}H_N-k_N^t\bigr]\Delta y_N.\tag{3.2}$$

The potential vector $u$ and the estimate vector $E$ are defined by
$$u^t=(c_B^t,k_B^t)A_{HB}^{-1},\qquad E^t=(E_B^t,E_N^t),\qquad E_B^t=(E_{xB}^t,E_{yB}^t)=(0,0),$$
$$E_N^t=(E_{xN}^t,E_{yN}^t),\qquad E_{xN}^t=u^t A_N-c_N^t,\qquad E_{yN}^t=u^t H_N-k_N^t.\tag{3.3}$$
The increment formula then takes the following final form:
$$\Delta z=-E_{xN}^t\Delta x_N-E_{yN}^t\Delta y_N.\tag{3.4}$$
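A short sketch of how (3.3) is evaluated in practice, with the hypothetical names of the previous snippet; solving against the transposed basis matrix avoids forming $A_{HB}^{-1}$ explicitly:

```python
import numpy as np

def potentials_and_estimates(A_H, ck, J_B, J_N):
    """Formula (3.3): u^t = (c_B^t, k_B^t) A_HB^{-1} and
    E_N^t = u^t A_HN - (c_N^t, k_N^t); ck stacks c and k over all columns."""
    u = np.linalg.solve(A_H[:, J_B].T, ck[J_B])   # A_HB^t u = (c_B, k_B)
    E_N = A_H[:, J_N].T @ u - ck[J_N]
    return u, E_N
```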

4. Optimality Criterion

Theorem 4.1. Let $\{(x,y),(J_{xB},J_{yB})\}$ be a support feasible solution for the problem (2.1)–(2.4). Then the relations
$$E_{xj}\ge 0\ \ \text{if }x_j=d_j^{-},\ j\in J_{xN},\qquad E_{xj}\le 0\ \ \text{if }x_j=d_j^{+},\ j\in J_{xN},\qquad E_{xj}=0\ \ \text{if }d_j^{-}<x_j<d_j^{+},\ j\in J_{xN},$$
$$E_{yj}\ge 0\ \ \text{if }y_j=0,\ j\in J_{yN},\qquad E_{yj}=0\ \ \text{if }y_j>0,\ j\in J_{yN},\tag{4.1}$$
are sufficient for the optimality of the feasible solution $(x,y)$. They are also necessary if the support feasible solution is nondegenerate.

Proof. Sufficiency.
Let $\{(x,y),(J_{xB},J_{yB})\}$ be a support feasible solution satisfying the relations (4.1). For any feasible solution $(\bar x,\bar y)$ of the problem (2.1)–(2.4), the increment formula (3.4) gives
$$\Delta z=-\sum_{j\in J_{xN}}E_{xj}(\bar x_j-x_j)-\sum_{j\in J_{yN}^{+}}E_{yj}(\bar y_j-y_j)-\sum_{j\in J_{yN}^{0}}E_{yj}(\bar y_j-y_j).\tag{4.2}$$
Since $d_j^{-}\le\bar x_j\le d_j^{+}$ for all $j\in J_x$, the relations (4.1) give
$$-\sum_{j\in J_{xN}}E_{xj}(\bar x_j-x_j)=-\sum_{j\in J_{xN},\,E_{xj}>0}E_{xj}(\bar x_j-d_j^{-})-\sum_{j\in J_{xN},\,E_{xj}<0}E_{xj}(\bar x_j-d_j^{+})\le 0.\tag{4.3}$$
On the other hand, the condition $\bar y_j\ge 0$ for all $j\in J_y$, together with $E_{yj}=0$ on $J_{yN}^{+}$ and $y_j=0$ on $J_{yN}^{0}$, implies that
$$-\sum_{j\in J_{yN}^{+}}E_{yj}(\bar y_j-y_j)-\sum_{j\in J_{yN}^{0}}E_{yj}(\bar y_j-y_j)=-\sum_{j\in J_{yN}^{0}}E_{yj}\bar y_j\le 0.\tag{4.4}$$
Hence
$$\Delta z=z(\bar x,\bar y)-z(x,y)\le 0,\qquad\text{that is,}\quad z(\bar x,\bar y)\le z(x,y).\tag{4.5}$$
The vector $(x,y)$ is consequently an optimal solution of the problem (2.1)–(2.4).
Necessity.
Let $\{(x,y),(J_{xB},J_{yB})\}$ be a nondegenerate optimal support feasible solution of the problem (2.1)–(2.4), and assume that the relations (4.1) are not satisfied, that is, there exists at least one index $j_0\in J_N=J_{xN}\cup J_{yN}$ such that
$$E_{xj_0}>0\ \text{with }x_{j_0}>d_{j_0}^{-},\ j_0\in J_{xN},\qquad\text{or}\qquad E_{xj_0}<0\ \text{with }x_{j_0}<d_{j_0}^{+},\ j_0\in J_{xN},$$
$$\text{or}\qquad E_{yj_0}<0,\ j_0\in J_{yN}^{0},\qquad\text{or}\qquad E_{yj_0}\ne 0,\ j_0\in J_{yN}^{+}.\tag{4.6}$$
We construct another feasible solution $(\bar x,\bar y)=(x+\theta l_x,\ y+\theta l_y)$, where $\theta$ is a positive real number and $l=\binom{l_x}{l_y}=\binom{l(J_x)}{l(J_y)}=l(J)$ is a direction vector constructed as follows. Two cases can arise:
(i) if $j_0\in J_{xN}$, we set
$$l_{xj_0}=-\operatorname{sign}E_{xj_0},\qquad l_{xj}=0,\ j\ne j_0,\ j\in J_{xN},\qquad l_{yj}=0,\ j\in J_{yN},\qquad l_B=\binom{l_{xB}}{l_{yB}}=A_{HB}^{-1}a_{j_0}\operatorname{sign}E_{xj_0},\tag{4.7}$$
where $a_{j_0}$ is the $j_0$th column of the matrix $A$;
(ii) if $j_0\in J_{yN}$, we set
$$l_{yj_0}=-\operatorname{sign}E_{yj_0},\qquad l_{yj}=0,\ j\ne j_0,\ j\in J_{yN},\qquad l_{xj}=0,\ j\in J_{xN},\qquad l_B=\binom{l_{xB}}{l_{yB}}=A_{HB}^{-1}h_{j_0}\operatorname{sign}E_{yj_0},\tag{4.8}$$
where $h_{j_0}$ is the $j_0$th column of the matrix $H$. By the construction of the direction $l$, the vector $(\bar x,\bar y)$ satisfies the principal constraint $A\bar x+H\bar y=b$.
In order to be a feasible solution of the problem (2.1)–(2.4), the vector $(\bar x,\bar y)$ must in addition satisfy the inequalities $d^{-}\le\bar x\le d^{+}$ and $\bar y\ge 0$, that is, in developed form,
$$d_j^{-}-x_j\le\theta l_{xj}\le d_j^{+}-x_j,\ \ j\in J_{xB},\qquad d_j^{-}-x_j\le\theta l_{xj}\le d_j^{+}-x_j,\ \ j\in J_{xN},$$
$$\theta l_{yj}\ge -y_j,\ \ j\in J_{yB},\qquad \theta l_{yj}\ge -y_j,\ \ j\in J_{yN}.\tag{4.9}$$
As the support feasible solution $\{(x,y),(J_{xB},J_{yB})\}$ is nondegenerate, we can always find a small positive number $\theta$ such that the relations (4.9) are satisfied; for such a $\theta$, the vector $(\bar x,\bar y)$ is a feasible solution of the problem (2.1)–(2.4). The increment formula gives in both cases
$$z(\bar x,\bar y)-z(x,y)=\theta E_{j_0}\operatorname{sign}E_{j_0}=\theta|E_{j_0}|>0,\tag{4.10}$$
where
$$E_{j_0}=E_{xj_0}\ \text{if }j_0\in J_{xN}\qquad\text{or}\qquad E_{j_0}=E_{yj_0}\ \text{if }j_0\in J_{yN}.\tag{4.11}$$
Therefore we have found another feasible solution $(\bar x,\bar y)\ne(x,y)$ with $z(\bar x,\bar y)>z(x,y)$, which contradicts the optimality of the feasible solution $(x,y)$. Hence the relations (4.1) are satisfied.
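The relations (4.1) translate directly into a membership test for each nonbasic index. A sketch under the conventions of the previous snippets (0-based indices below n_x are bounded $x$-variables, the rest nonnegative $y$-variables; all names are ours):

```python
import numpy as np

def violating_indices(E_N, J_N, v, d_minus, d_plus, n_x, tol=1e-9):
    """Nonbasic indices violating the optimality relations (4.1);
    v stacks the current x and y into one vector."""
    bad = []
    for E_j, j in zip(E_N, J_N):
        if j < n_x:                                  # bounded x-variable
            at_lo = abs(v[j] - d_minus[j]) <= tol
            at_hi = abs(v[j] - d_plus[j]) <= tol
            ok = ((at_lo and E_j >= -tol) or (at_hi and E_j <= tol)
                  or (not at_lo and not at_hi and abs(E_j) <= tol))
        else:                                        # nonnegative y-variable
            ok = (E_j >= -tol) if v[j] <= tol else (abs(E_j) <= tol)
        if not ok:
            bad.append(j)
    return bad
```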

5. The Suboptimality Condition

In order to estimate the difference between the optimal value $z(x^0,y^0)$ and the value $z(x,y)$ of any support feasible solution $\{(x,y),(J_{xB},J_{yB})\}$ with $E_y\ge 0$, we use the quantity
$$\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)=\sum_{j\in J_{xN},\,E_{xj}>0}E_{xj}(x_j-d_j^{-})+\sum_{j\in J_{xN},\,E_{xj}<0}E_{xj}(x_j-d_j^{+})+\sum_{j\in J_{yN}}E_{yj}y_j,\tag{5.1}$$
which is called the suboptimality estimate.

Theorem 5.1 (the suboptimality condition). Let $\{(x,y),(J_{xB},J_{yB})\}$ be a support feasible solution of the problem (2.1)–(2.4) and $\epsilon$ an arbitrary nonnegative number. If $E_y\ge 0$ and
$$\sum_{j\in J_{xN},\,E_{xj}>0}E_{xj}(x_j-d_j^{-})+\sum_{j\in J_{xN},\,E_{xj}<0}E_{xj}(x_j-d_j^{+})+\sum_{j\in J_{yN}}E_{yj}y_j\le\epsilon,\tag{5.2}$$
then the feasible solution $(x,y)$ is $\epsilon$-optimal.

Proof. We have
$$z(x^0,y^0)-z(x,y)\le\beta\bigl((x,y),(J_{xB},J_{yB})\bigr).\tag{5.3}$$
Then, if
$$\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)\le\epsilon,\tag{5.4}$$
we will have
$$z(x^0,y^0)-z(x,y)\le\epsilon,\tag{5.5}$$
and therefore $(x,y)$ is $\epsilon$-optimal.
In the particular case $\epsilon=0$, the feasible solution $(x,y)$ is consequently optimal.
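The estimate (5.1) is a plain sum over the nonbasic indices. A direct transcription, with the same hypothetical conventions as the snippets above (valid only when $E_y\ge 0$):

```python
def beta(E_N, J_N, v, d_minus, d_plus, n_x):
    """Suboptimality estimate (5.1); v stacks x and y, E_y >= 0 assumed."""
    total = 0.0
    for E_j, j in zip(E_N, J_N):
        if j < n_x:                       # bounded x-variable
            if E_j > 0:
                total += E_j * (v[j] - d_minus[j])
            elif E_j < 0:
                total += E_j * (v[j] - d_plus[j])
        else:                             # nonnegative y-variable
            total += E_j * v[j]
    return total
```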

6. Construction of the Algorithm

Given any nonnegative real number $\epsilon$ and an initial support feasible solution $\{(x,y),(J_{xB},J_{yB})\}$, the aim of the algorithm is to construct an $\epsilon$-optimal solution $(x^\epsilon,y^\epsilon)$ or an optimal solution $(x^0,y^0)$. An iteration of the algorithm consists of moving from $\{(x,y),(J_{xB},J_{yB})\}$ to another support feasible solution $\{(\bar x,\bar y),(\bar J_{xB},\bar J_{yB})\}$ such that $z(\bar x,\bar y)\ge z(x,y)$. For this purpose, we construct the new feasible solution $(\bar x,\bar y)$ as $(\bar x,\bar y)=(x,y)+\theta(l_x,l_y)$, where $l=(l_x,l_y)$ is an appropriate direction and $\theta$ is the step along this direction.

In this algorithm, the simplex metric is chosen: at each iteration we vary only one nonbasic component among those which do not satisfy the relations (4.1).

In order to obtain a maximal increment, we take $\theta$ as large as possible and choose the subscript $j_0$ such that
$$|E_{j_0}|=\max\bigl(|E_{xj_0}|,|E_{yj_0}|\bigr),\tag{6.1}$$
with
$$|E_{xj_0}|=\max\{|E_{xj}|,\ j\in J_{xN}^{NO}\},\qquad |E_{yj_0}|=\max\{|E_{yj}|,\ j\in J_{yN}^{NO}\},\tag{6.2}$$
where $J_{xN}^{NO}$ and $J_{yN}^{NO}$ are the subsets of $J_{xN}$ and $J_{yN}$, respectively, whose subscripts do not satisfy the optimality relations (4.1).
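The rule (6.1)-(6.2) simply picks, among the violating indices, the one with the largest absolute estimate; a one-liner under the conventions of the snippets above:

```python
def choose_j0(E_N, J_N, bad):
    """Rule (6.1)-(6.2): the violating index with the largest |E_j|."""
    E = dict(zip(J_N, E_N))
    return max(bad, key=lambda j: abs(E[j]))
```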

6.1. Computation of the Direction 𝑙

We have two cases.
(i) If $|E_{j_0}|=|E_{xj_0}|$, we set
$$l_{xj_0}=-\operatorname{sign}E_{xj_0},\qquad l_{xj}=0,\ j\ne j_0,\ j\in J_{xN},\qquad l_{yj}=0,\ j\in J_{yN},\qquad l_B=\binom{l_{xB}}{l_{yB}}=A_{HB}^{-1}a_{j_0}\operatorname{sign}E_{xj_0}.\tag{6.3}$$
(ii) If $|E_{j_0}|=|E_{yj_0}|$, we set
$$l_{yj_0}=-\operatorname{sign}E_{yj_0},\qquad l_{yj}=0,\ j\ne j_0,\ j\in J_{yN},\qquad l_{xj}=0,\ j\in J_{xN},\qquad l_B=\binom{l_{xB}}{l_{yB}}=A_{HB}^{-1}h_{j_0}\operatorname{sign}E_{yj_0}.\tag{6.4}$$
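Both cases (6.3)-(6.4) apply the same recipe: a unit move on the entering column $j_0$, compensated through the basic components so that $A_H\,l=0$. A sketch (the names are ours):

```python
import numpy as np

def direction(A_H, J_B, j0, E_j0):
    """Direction of (6.3)-(6.4) over all components; A_H @ l == 0 holds."""
    l = np.zeros(A_H.shape[1])
    l[j0] = -np.sign(E_j0)
    l[J_B] = np.linalg.solve(A_H[:, J_B], A_H[:, j0]) * np.sign(E_j0)
    return l
```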

6.2. Computation of the Step 𝜃

The step $\theta^0$ is taken as
$$\theta^0=\min(\theta_x,\theta_y).\tag{6.5}$$
(i) If $|E_{j_0}|=|E_{xj_0}|$, then $\theta_x=\min(\theta_{xj_0},\theta_{xj_1})$, where
$$\theta_{xj_0}=\begin{cases}d_{j_0}^{+}-x_{j_0},&\text{if }E_{xj_0}<0,\\ x_{j_0}-d_{j_0}^{-},&\text{if }E_{xj_0}>0,\end{cases}\qquad \theta_{xj_1}=\min\{\theta_{xj},\ j\in J_{xB}\},\tag{6.6}$$
with
$$\theta_{xj}=\begin{cases}\dfrac{d_j^{+}-x_j}{l_{xj}},&\text{if }l_{xj}>0,\\[4pt] \dfrac{d_j^{-}-x_j}{l_{xj}},&\text{if }l_{xj}<0,\\[4pt] \infty,&\text{if }l_{xj}=0.\end{cases}\tag{6.7}$$
The number $\theta_y$ is computed in the following way:
$$\theta_y=\theta_{yj_1}=\min\{\theta_{yj},\ j\in J_{yB}\},\tag{6.8}$$
where
$$\theta_{yj}=\begin{cases}-\dfrac{y_j}{l_{yj}},&\text{if }l_{yj}<0,\\[4pt] \infty,&\text{if }l_{yj}\ge 0.\end{cases}\tag{6.9}$$
(ii) If $|E_{j_0}|=|E_{yj_0}|$, then
$$\theta_x=\theta_{xj_1}=\min\{\theta_{xj},\ j\in J_{xB}\},\tag{6.10}$$
where
$$\theta_{xj}=\begin{cases}\dfrac{d_j^{+}-x_j}{l_{xj}},&\text{if }l_{xj}>0,\\[4pt] \dfrac{d_j^{-}-x_j}{l_{xj}},&\text{if }l_{xj}<0,\\[4pt] \infty,&\text{if }l_{xj}=0,\end{cases}\qquad \theta_y=\min(\theta_{yj_0},\theta_{yj_1}),\tag{6.11}$$
with
$$\theta_{yj_0}=\begin{cases}y_{j_0},&\text{if }E_{yj_0}>0,\\ \infty,&\text{if }E_{yj_0}<0,\end{cases}\qquad \theta_{yj_1}=\min\{\theta_{yj},\ j\in J_{yB}\},\tag{6.12}$$
where
$$\theta_{yj}=\begin{cases}-\dfrac{y_j}{l_{yj}},&\text{if }l_{yj}<0,\\[4pt] \infty,&\text{if }l_{yj}\ge 0.\end{cases}\tag{6.13}$$

The new feasible solution is
$$(\bar x,\bar y)=(x+\theta^0 l_x,\ y+\theta^0 l_y).\tag{6.14}$$
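Because every nonbasic component other than $j_0$ is frozen, the case distinctions (6.5)-(6.13) collapse, in code, into one ratio test over the moving components (the basic ones plus $j_0$ itself). A sketch under the conventions above:

```python
import numpy as np

def max_step(l, v, d_minus, d_plus, n_x, tol=1e-12):
    """Maximal feasible step (6.5)-(6.13) along l; returns (theta0, blocking j).
    Bounded x-variables may travel to the opposite bound, y's down to zero."""
    theta0, j_block = np.inf, None
    for j, lj in enumerate(l):
        if abs(lj) <= tol:
            continue                       # frozen component: no limit
        if j < n_x:
            t = (d_plus[j] - v[j]) / lj if lj > 0 else (d_minus[j] - v[j]) / lj
        else:
            t = -v[j] / lj if lj < 0 else np.inf
        if t < theta0:
            theta0, j_block = t, j
    return theta0, j_block
```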

6.3. The New Suboptimality Condition

Let us compute the suboptimality estimate of the new support feasible solution in the case $E_y\ge 0$. We have
$$\beta\bigl((\bar x,\bar y),(J_{xB},J_{yB})\bigr)=\sum_{j\in J_{xN},\,E_{xj}>0}E_{xj}(\bar x_j-d_j^{-})+\sum_{j\in J_{xN},\,E_{xj}<0}E_{xj}(\bar x_j-d_j^{+})+\sum_{j\in J_{yN}}E_{yj}\bar y_j.\tag{6.15}$$
(i) If $j_0\in J_{xN}$, then the components $\bar x_j$, $j\in J_{xN}$, are
$$\bar x_j=x_j,\ \ j\ne j_0,\qquad \bar x_{j_0}=\begin{cases}x_{j_0}-\theta^0,&\text{if }E_{xj_0}>0,\\ x_{j_0}+\theta^0,&\text{if }E_{xj_0}<0,\end{cases}\tag{6.16}$$
and the components $\bar y_j$ are
$$\bar y_j=y_j,\ \ j\in J_{yN}.\tag{6.17}$$
Hence
$$\beta\bigl((\bar x,\bar y),(J_{xB},J_{yB})\bigr)=\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)-\theta^0|E_{xj_0}|.\tag{6.18}$$
(ii) If $j_0\in J_{yN}$, then the components $\bar x_j$, $j\in J_{xN}$, are
$$\bar x_j=x_j,\ \ j\in J_{xN},\tag{6.19}$$
and the components $\bar y_j$, $j\in J_{yN}$, are
$$\bar y_j=y_j,\ \ j\ne j_0,\qquad \bar y_{j_0}=y_{j_0}-\theta^0.\tag{6.20}$$
Hence
$$\beta\bigl((\bar x,\bar y),(J_{xB},J_{yB})\bigr)=\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)-\theta^0 E_{yj_0}.\tag{6.21}$$
In both cases, we have
$$\beta\bigl((\bar x,\bar y),(J_{xB},J_{yB})\bigr)=\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)-\theta^0|E_{j_0}|,\tag{6.22}$$
with $|E_{j_0}|=|E_{xj_0}|$ or $|E_{j_0}|=E_{yj_0}$.

6.4. Changing the Support

If $\beta((\bar x,\bar y),(J_{xB},J_{yB}))\le\epsilon$, then the feasible solution $(\bar x,\bar y)$ is $\epsilon$-optimal and we stop the algorithm; otherwise, we change $J_B$ as follows:
(i) if $\theta^0=\theta_{xj_0}$ or $\theta^0=\theta_{yj_0}$, then $\bar J_B=J_B$, $\bar x=x+\theta^0 l_x$, $\bar y=y+\theta^0 l_y$;
(ii) if $\theta^0=\theta_{xj_1}$ or $\theta^0=\theta_{yj_1}$, then $\bar J_B=(J_B\setminus\{j_1\})\cup\{j_0\}$, $\bar x=x+\theta^0 l_x$, $\bar y=y+\theta^0 l_y$.

We then start a new iteration with the new support feasible solution $\{(\bar x,\bar y),(\bar J_{xB},\bar J_{yB})\}$, where the support $\bar J_B$ satisfies the algebraic condition
$$\det \bar A_{HB}=\det A_H(I,\bar J_B)\ne 0.\tag{6.23}$$
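In code, the support change reduces to one swap; when the step was limited by the entering variable itself ($\theta^0=\theta_{j_0}$), the blocking index equals $j_0$ and the support is kept. A sketch with the hypothetical names used so far:

```python
def update_support(J_B, j0, j_block):
    """Support change of Section 6.4: keep J_B if the entering variable
    blocks first; otherwise j_block leaves the support and j0 enters."""
    if j_block == j0:
        return list(J_B)                              # theta0 = theta_{j0}
    return [j0 if j == j_block else j for j in J_B]   # (J_B \ {j1}) U {j0}
```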

Remark 6.1. The step $\theta^0=\infty$ may happen only if $J_{xB}=\emptyset$, $|E_{j_0}|=|E_{yj_0}|$, and $\theta_y=\infty$. In such a case, the objective function is unbounded with respect to $y$.

7. Algorithm

Let $\epsilon$ be any nonnegative real number and $\{(x,y),(J_{xB},J_{yB})\}$ an initial support feasible solution. The steps of the algorithm are as follows.
(1) Compute the estimate vector:
$$E_N^t=E^t(J_N)=(E_{xN}^t,E_{yN}^t)=(u^t A_N-c_N^t,\ u^t H_N-k_N^t),\qquad u^t=(c_B^t,k_B^t)A_{HB}^{-1}.\tag{7.1}$$
(2) Optimality test of the support feasible solution $\{(x,y),(J_{xB},J_{yB})\}$.
(i) If $E_y\ge 0$, then
(a) compute the suboptimality estimate $\beta((x,y),(J_{xB},J_{yB}))$;
(b) if $\beta((x,y),(J_{xB},J_{yB}))=0$, stop: $\{(x,y),(J_{xB},J_{yB})\}$ is an optimal support solution;
(c) if $\beta((x,y),(J_{xB},J_{yB}))\le\epsilon$, stop: $\{(x,y),(J_{xB},J_{yB})\}$ is an $\epsilon$-optimal support solution;
(d) if $\beta((x,y),(J_{xB},J_{yB}))>\epsilon$, go to (3).
(ii) If $E_y\not\ge 0$, go directly to (3).
(3) Change the feasible solution $(x,y)$ into $(\bar x,\bar y)=(x+\theta^0 l_x,\ y+\theta^0 l_y)$.
(i) Choose a subscript $j_0$ by the rule (6.1)-(6.2).
(ii) Compute the appropriate direction $l=\binom{l_x}{l_y}$.
(iii) Compute the step $\theta^0$.
(a) If $\theta^0=\infty$, stop: the objective function is unbounded with respect to $y$.
(b) Otherwise, compute $(\bar x,\bar y)=(x+\theta^0 l_x,\ y+\theta^0 l_y)$.
(4) Optimality test of the new feasible solution $(\bar x,\bar y)$.
(i) If $E_y\ge 0$, then
(a) compute $\beta((\bar x,\bar y),(J_{xB},J_{yB}))=\beta((x,y),(J_{xB},J_{yB}))-\theta^0|E_{j_0}|$;
(b) if $\beta((\bar x,\bar y),(J_{xB},J_{yB}))=0$, stop: $\{(\bar x,\bar y),(J_{xB},J_{yB})\}$ is an optimal support solution;
(c) if $\beta((\bar x,\bar y),(J_{xB},J_{yB}))\le\epsilon$, stop: $\{(\bar x,\bar y),(J_{xB},J_{yB})\}$ is an $\epsilon$-optimal support solution;
(d) otherwise, go to (5).
(ii) If $E_y\not\ge 0$, go to (5).
(5) Change the support $J_B$ into $\bar J_B$.
(i) If $\theta^0=\theta_{xj_0}$ or $\theta^0=\theta_{yj_0}$, then
$$\bar J_{xB}=J_{xB},\quad \bar J_{xN}=J_{xN},\quad \bar J_{yB}=J_{yB},\quad \bar J_{yN}=J_{yN},\quad \bar x=x+\theta^0 l_x,\quad \bar y=y+\theta^0 l_y.\tag{7.2}$$
(ii) If $\theta^0=\theta_{xj_1}$ or $\theta^0=\theta_{yj_1}$, two cases can arise.
(a) Case $|E_{j_0}|=|E_{xj_0}|$:
if $\theta^0=\theta_{xj_1}$, then
$$\bar J_{xB}=(J_{xB}\setminus\{j_1\})\cup\{j_0\},\quad \bar J_{xN}=(J_{xN}\setminus\{j_0\})\cup\{j_1\},\quad \bar J_{yB}=J_{yB},\quad \bar J_{yN}=J_{yN};\tag{7.3}$$
if $\theta^0=\theta_{yj_1}$, then
$$\bar J_{xB}=J_{xB}\cup\{j_0\},\quad \bar J_{xN}=J_{xN}\setminus\{j_0\},\quad \bar J_{yB}=J_{yB}\setminus\{j_1\},\quad \bar J_{yN}=J_{yN}\cup\{j_1\}.\tag{7.4}$$
(b) Case $|E_{j_0}|=|E_{yj_0}|$:
if $\theta^0=\theta_{yj_1}$, then
$$\bar J_{yB}=(J_{yB}\setminus\{j_1\})\cup\{j_0\},\quad \bar J_{yN}=(J_{yN}\setminus\{j_0\})\cup\{j_1\},\quad \bar J_{xB}=J_{xB},\quad \bar J_{xN}=J_{xN};\tag{7.5}$$
if $\theta^0=\theta_{xj_1}$, then
$$\bar J_{xB}=J_{xB}\setminus\{j_1\},\quad \bar J_{xN}=J_{xN}\cup\{j_1\},\quad \bar J_{yB}=J_{yB}\cup\{j_0\},\quad \bar J_{yN}=J_{yN}\setminus\{j_0\}.\tag{7.6}$$
(iii) Go to (1) with the new support feasible solution $\{(\bar x,\bar y),(\bar J_{xB},\bar J_{yB})\}$, where $\bar x=x+\theta^0 l_x$ and $\bar y=y+\theta^0 l_y$.
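The following self-contained sketch strings the steps (1)-(5) together. It is our own compact rendering under the stated conventions (0-based indices, $x$-columns before $y$-columns, dense linear algebra, no anti-cycling safeguard), not production code:

```python
import numpy as np

def direct_support(A, H, c, k, d_minus, d_plus, J_B, x, y, eps=0.0, tol=1e-9):
    """Adapted direct support method, Sections 6-7 (a sketch, names ours)."""
    n_x = A.shape[1]
    A_H = np.hstack([A, H])
    n = A_H.shape[1]
    ck = np.concatenate([c, k])
    v = np.concatenate([x, y]).astype(float)          # stacked point (x, y)
    J_B = list(J_B)
    while True:
        J_N = [j for j in range(n) if j not in J_B]
        # (1) potentials and estimates, formula (7.1)
        u = np.linalg.solve(A_H[:, J_B].T, ck[J_B])
        E = {j: A_H[:, j] @ u - ck[j] for j in J_N}
        # (2) suboptimality estimate (5.1), defined when E_y >= 0
        if all(E[j] >= -tol for j in J_N if j >= n_x):
            beta = sum(E[j] * (v[j] - (d_minus[j] if E[j] > 0 else d_plus[j]))
                       for j in J_N if j < n_x and abs(E[j]) > tol)
            beta += sum(E[j] * v[j] for j in J_N if j >= n_x)
            if beta <= eps:
                return v[:n_x], v[n_x:], ck @ v, beta
        # (3) entering index j0: largest violating estimate, rule (6.1)
        def violated(j):
            if j < n_x:
                if abs(v[j] - d_minus[j]) <= tol:
                    return E[j] < -tol
                if abs(v[j] - d_plus[j]) <= tol:
                    return E[j] > tol
                return abs(E[j]) > tol
            return E[j] < -tol if v[j] <= tol else abs(E[j]) > tol
        j0 = max((j for j in J_N if violated(j)), key=lambda j: abs(E[j]))
        # direction (6.3)-(6.4): unit move on j0, compensated through J_B
        l = np.zeros(n)
        l[j0] = -np.sign(E[j0])
        l[J_B] = np.linalg.solve(A_H[:, J_B], A_H[:, j0]) * np.sign(E[j0])
        # maximal step (6.5)-(6.13): ratio test over the moving components
        theta, j1 = np.inf, j0
        for j in [j0] + J_B:
            if abs(l[j]) <= tol:
                continue
            if j < n_x:
                t = ((d_plus[j] - v[j]) / l[j] if l[j] > 0
                     else (d_minus[j] - v[j]) / l[j])
            else:
                t = -v[j] / l[j] if l[j] < 0 else np.inf
            if t < theta:
                theta, j1 = t, j
        if np.isinf(theta):
            raise ValueError("objective unbounded in y (Remark 6.1)")
        v += theta * l                                # new feasible solution
        # (5) support change, Section 6.4
        if j1 != j0:
            J_B[J_B.index(j1)] = j0
```

On the data of the numerical example of Section 8 below, this sketch performs the same two iterations as the worked computation and stops with $\beta=0$.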

8. Numerical Example

For the sake of clarity, let us illustrate the theoretical development of the method on the following linear program:
$$z(x,y)=2x_1-3x_2-y_3+y_4\to\max,$$
$$x_1-x_2+3y_3+2y_4=2,$$
$$-7x_1+x_2+2y_3+3y_4=2,$$
$$-2\le x_1\le 2,\qquad -4\le x_2\le 4,\qquad y_3\ge 0,\qquad y_4\ge 0,\tag{8.1}$$
where $x=(x_1,x_2)$ and $y=(y_3,y_4)$.

We define
$$A=\begin{pmatrix}1&-1\\-7&1\end{pmatrix},\qquad H=\begin{pmatrix}3&2\\2&3\end{pmatrix},\qquad c^t=(2,-3),\qquad k^t=(-1,1).$$

Let $(x,y)=(1,3,0,2)$ be an initial feasible solution of the problem. We set
$$J_B=J_{xB}\cup J_{yB}=\{1,3\},\qquad J_N=\{2,4\}.\tag{8.2}$$
Let $\epsilon=0$.

Thus, we have an initial support feasible solution $\{(x,y),J_B\}$ with
$$A_{HB}=\begin{pmatrix}1&3\\-7&2\end{pmatrix},\qquad A_{HN}=\begin{pmatrix}-1&2\\1&3\end{pmatrix},\qquad z(x,y)=-5.\tag{8.3}$$

First Iteration
Let us calculate
$$u^t=(c_B^t,k_B^t)A_{HB}^{-1}=\Bigl(-\frac{3}{23},-\frac{7}{23}\Bigr),\qquad E_N^t=u^t A_{HN}-(c_N^t,k_N^t)=\Bigl(\frac{65}{23},-\frac{50}{23}\Bigr).\tag{8.4}$$
Choice of $j_0$: among the nonoptimal indices $J_{xN}^{NO}\cup J_{yN}^{NO}=\{2,4\}$, $j_0$ is chosen such that $|E_{j_0}|$ is maximal; we then have $j_0=2$.
Computation of $l$:
$$l_{x2}=-1,\qquad l_{y4}=0,\qquad l_B=\binom{l_{x1}}{l_{y3}}=A_{HB}^{-1}a_2=\binom{-5/23}{-6/23}.\tag{8.5}$$
Hence
$$l_x=\Bigl(-\frac{5}{23},\,-1\Bigr),\qquad l_y=\Bigl(-\frac{6}{23},\,0\Bigr).\tag{8.6}$$
Computation of $\theta^0$:
$$\theta_{xj_0}=\theta_{x2}=x_2-d_2^{-}=7,\qquad \theta_{x1}=\frac{d_1^{-}-x_1}{l_{x1}}=\frac{69}{5},\qquad \theta_{y3}=-\frac{y_3}{l_{y3}}=0.\tag{8.7}$$
The maximal step is then
$$\theta^0=\theta_{yj_1}=\theta_{y3}=0.\tag{8.8}$$
Computation of $(\bar x,\bar y)$:
$$\bar x=x+\theta^0 l_x=(1,3),\qquad \bar y=y+\theta^0 l_y=(0,2).\tag{8.9}$$
Change of the support:
$$\bar J_{xB}=\{1,2\},\qquad \bar J_{xN}=\emptyset,\qquad \bar J_{yB}=\emptyset,\qquad \bar J_{yN}=\{3,4\}.\tag{8.10}$$

Second Iteration
We have
$$(x,y)=(1,3,0,2),\qquad J_{xB}=\{1,2\},\quad J_{xN}=\emptyset,\quad J_{yB}=\emptyset,\quad J_{yN}=\{3,4\},$$
$$A_{HB}=\begin{pmatrix}1&-1\\-7&1\end{pmatrix},\qquad A_{HN}=\begin{pmatrix}3&2\\2&3\end{pmatrix}.\tag{8.11}$$
We compute
$$u^t=(c_B^t,k_B^t)A_{HB}^{-1}=\Bigl(\frac{19}{6},\frac{1}{6}\Bigr),\qquad E_N^t=u^t A_{HN}-(c_N^t,k_N^t)=\Bigl(\frac{65}{6},\frac{35}{6}\Bigr).\tag{8.12}$$
Computation of $\beta((x,y),(J_{xB},J_{yB}))$:
$$\beta\bigl((x,y),(J_{xB},J_{yB})\bigr)=E_{y3}y_3+E_{y4}y_4=\frac{35}{3}.\tag{8.13}$$
Then $(x,y)$ is not optimal.
Choice of $j_0$: as the set of nonoptimal indices is $J_{xN}^{NO}\cup J_{yN}^{NO}=\{4\}$, we have $j_0=4$.
Computation of $l$:
$$l_{y4}=-1,\qquad l_{y3}=0,\qquad l_B=\binom{l_{x1}}{l_{x2}}=A_{HB}^{-1}h_4=\binom{-5/6}{-17/6}.\tag{8.14}$$
Hence $l_x=(-5/6,\,-17/6)$ and $l_y=(0,\,-1)$.
Computation of $\theta^0$:
$$\theta_{xj_1}=\min(\theta_{x1},\theta_{x2})=\min\Bigl(\frac{d_1^{-}-x_1}{l_{x1}},\frac{d_2^{-}-x_2}{l_{x2}}\Bigr)=\min\Bigl(\frac{18}{5},\frac{42}{17}\Bigr)=\frac{42}{17}=\theta_{x2},\qquad \theta_{y3}=\infty,\qquad \theta_{yj_0}=\theta_{y4}=2.\tag{8.15}$$
The maximal step is thus $\theta^0=\theta_{y4}=2$.
Consequently, the support remains unchanged:
$$\bar J_B=J_B=\{1,2\},\qquad \bar J_N=J_N=\{3,4\}.\tag{8.16}$$
Computation of $(\bar x,\bar y)$:
$$\bar x=x+\theta^0 l_x=\Bigl(-\frac{2}{3},-\frac{8}{3}\Bigr),\qquad \bar y=y+\theta^0 l_y=(0,0).\tag{8.17}$$
Computation of $\beta((\bar x,\bar y),(J_{xB},J_{yB}))$:
$$\beta\bigl((\bar x,\bar y),(J_{xB},J_{yB})\bigr)=0.\tag{8.18}$$
Then the vector $(\bar x,\bar y)=(-2/3,-8/3,0,0)$ is an optimal solution, and the maximal value of the objective function is $z^{*}=20/3$.
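As a check, feeding the data of (8.1) to the sketch from Section 7 reproduces this run; the snippet below assumes that the `direct_support` function defined there is in scope, and uses 0-based indices (the support $\{1,3\}$ becomes `[0, 2]`).

```python
import numpy as np

# Data of the example (8.1): x-variables are columns 0-1, y's are columns 2-3.
A = np.array([[1.0, -1.0], [-7.0, 1.0]])
H = np.array([[3.0, 2.0], [2.0, 3.0]])
c = np.array([2.0, -3.0]); k = np.array([-1.0, 1.0])
d_minus = np.array([-2.0, -4.0]); d_plus = np.array([2.0, 4.0])

x, y, z, beta = direct_support(A, H, c, k, d_minus, d_plus,
                               J_B=[0, 2], x=[1.0, 3.0], y=[0.0, 2.0])
print(x, y, z)   # expected: x = (-2/3, -8/3), y = (0, 0), z = 20/3
```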

9. Conclusion

The necessity of developing the method presented in this paper arose within a more complex optimization scheme involving the resolution of a multicriteria decision problem [2]. Indeed, an efficiency test of nonbasic variables has to be executed several times along the resolution process, and this test amounts to solving a single-criterion program with two kinds of variables: upper and lower bounded variables and nonnegative ones. This kind of linear model also appears as subproblems in quadratic programming [4] and optimal control, for example. In these cases, the use of the simplex method is not suitable, since the transformed problems are often degenerate. Another particularity of our method is that it uses a suboptimality criterion which can stop the algorithm at a desired precision. It is effective, fast, simple, and reduces the time of the whole optimization process.

References

  1. K. Ait Yahia and F. Benkerrou, "Méthodes d'aide à la planification de la production au niveau de la laiterie de Djurdjura (Ifri)," Mémoire d'Ingéniorat d'Etat, Université de Béjaïa, 2001.
  2. S. Radjef, "Sur la programmation linéaire multi-objectifs," Mémoire de Magister, Université de Béjaïa, 2001.
  3. R. Gabasov et al., Constructive Methods of Optimization, P.I. University Press, Minsk, Belarus, 1984.
  4. B. Brahmi and M. O. Bibi, "Dual support method for solving convex quadratic programs," Optimization, vol. 59, no. 6, pp. 851–872, 2010.