Abstract

The main objective of this paper is to solve a problem encountered in an industrial firm: the design of a weekly production plan that optimizes the quantities to be launched into production. One of the problems raised in that company can be modeled as a linear multiobjective program whose decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative. During the resolution of the multiobjective case, we faced the necessity of developing an effective method for the mono-objective case that does not increase the size of the linear program, since the industrial instance to solve is already very large. We therefore propose the extension of the direct support method presented in this paper. Its particularity is that it avoids any preliminary transformation of the decision variables: it handles the bounds as they are initially formulated. The method is effective, simple to use, and speeds up the resolution process.

1. Introduction

The company Ifri is one of the largest and most important Algerian companies in the agri-food field. Ifri mainly produces mineral water and various drinks.

From January to October of the year 2003, the company's production was about 175 million bottles. Expressed in liters, the production over this period exceeded 203 million liters of finished products (all products included). Having covered the national market demand, Ifri has turned to the acquisition of new international markets.

The main objective of our work [1] is to design a software application that produces an optimal weekly production plan, replacing a planning process based primarily on the good sense and experience of the decision makers.

This problem concerns the optimization of the quantities to be launched into production. It is modeled as a multiobjective linear program where the objective functions and the constraints are linear, and the decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative.

Multicriteria optimization problems are a class of difficult optimization problems in which several different objective functions have to be considered at the same time. It is seldom the case that a single point optimizes all the objective functions simultaneously. Therefore, we search for the so-called efficient points, that is, feasible points having the property that no other feasible point improves one criterion without deteriorating at least one other.

In [2], we developed a method to solve the multiobjective linear programming problem described above. To avoid the preliminary transformation of the constraints, and hence the increase in the problem dimension, we proposed to extend the direct support method of Gabasov et al. [3], known in single-objective programming.

In [2], we proposed a procedure for finding an initial efficient extreme point, a procedure to test the efficiency of a nonbasic variable, and a method to compute all the efficient extreme points, the weakly efficient extreme points, and the $\epsilon$-weakly efficient extreme points of the problem.

A multiobjective linear program in which the two types of decision variables coexist can be presented in the following canonical form:
$$ Cx + Qy \to \max, \quad Ax + Hy = b, \quad d^- \le x \le d^+, \quad y \ge 0, \tag{1.1} $$
where $C$ and $Q$ are $k \times n_x$ and $k \times n_y$ matrices, respectively, $A$ and $H$ are matrices of order $m \times n_x$ and $m \times n_y$, respectively, with $\operatorname{rank}(A \mid H) = m < n_x + n_y$, $b \in \mathbb{R}^m$, $d^- \in \mathbb{R}^{n_x}$, and $d^+ \in \mathbb{R}^{n_x}$.

We denote by $S$ the set of feasible decisions:
$$ S = \{ (x, y) \in \mathbb{R}^{n_x + n_y} : Ax + Hy = b,\ d^- \le x \le d^+,\ y \ge 0 \}. \tag{1.2} $$
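As a concrete illustration of membership in $S$, the sketch below checks the constraints of (1.2) on the small two-constraint instance that is treated in detail in Section 8; the function name `in_S` is ours, not the paper's.

```python
# Membership test for the feasible set S of (1.2): equality constraints,
# box bounds on x, and nonnegativity of y. Exact integer arithmetic.
def in_S(x, y, A, H, b, dminus, dplus):
    m = len(b)
    ok_eq = all(
        sum(A[i][j] * x[j] for j in range(len(x)))
        + sum(H[i][j] * y[j] for j in range(len(y))) == b[i]
        for i in range(m)
    )
    ok_x = all(dminus[j] <= x[j] <= dplus[j] for j in range(len(x)))
    ok_y = all(yj >= 0 for yj in y)
    return ok_eq and ok_x and ok_y

A = [[1, -1], [-7, 1]]; H = [[3, 2], [2, 3]]; b = [2, 2]
print(in_S([1, 3], [0, 2], A, H, b, [-2, -4], [2, 4]))  # True
```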

Definition 1.1. A feasible decision $(x^0, y^0) \in \mathbb{R}^{n_x + n_y}$ is said to be efficient (or Pareto optimal) for problem (1.1) if there is no other feasible solution $(x, y) \in S$ such that $Cx + Qy \ge Cx^0 + Qy^0$ and $Cx + Qy \ne Cx^0 + Qy^0$.

Definition 1.2. A feasible decision $(x^0, y^0) \in \mathbb{R}^{n_x + n_y}$ is said to be weakly efficient (or Slater optimal) for problem (1.1) if there is no other feasible solution $(x, y) \in S$ such that $Cx + Qy > Cx^0 + Qy^0$.

Definition 1.3. Let $\epsilon \in \mathbb{R}^k$, $\epsilon \ge 0$. A feasible decision $(x^\epsilon, y^\epsilon) \in S$ is said to be $\epsilon$-weakly efficient for problem (1.1) if there is no other feasible solution $(x, y) \in S$ such that $Cx + Qy - Cx^\epsilon - Qy^\epsilon > \epsilon$.

Multiobjective linear programming consists of determining the set of all efficient decisions, all weakly efficient decisions, and all $\epsilon$-weakly efficient decisions of problem (1.1) for given $C$, $Q$, $A$, $H$, $b$, $d^-$, and $d^+$.
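The dominance notions behind Definitions 1.1 and 1.2 can be stated compactly on objective vectors $z = Cx + Qy$; the helpers below are illustrative names of ours, not part of the method.

```python
# Pareto sense (Def. 1.1): zb dominates za when zb >= za componentwise
# with zb != za. Slater sense (Def. 1.2): strict inequality in every component.
def dominates(zb, za):
    return all(u >= v for u, v in zip(zb, za)) and list(zb) != list(za)

def strictly_dominates(zb, za):
    return all(u > v for u, v in zip(zb, za))

print(dominates([3, 2], [3, 1]))            # True: one criterion improves, none worsens
print(strictly_dominates([3, 2], [3, 1]))   # False: first component is tied
```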

During the resolution process, we need an efficiency test of nonbasic variables. This problem can be formulated as a single-objective linear program where the decision variables are of two types: upper and lower bounded variables and nonnegative variables. We propose in this paper to solve this latter problem by an adapted direct support method. Our approach is based on the principle of the methods developed by Gabasov et al. [3], which solve a single-objective linear program with nonnegative decision variables or with bounded decision variables. Our work proposes a generalization to the single-objective linear program with both types of decision variables: upper and lower bounded variables and nonnegative variables.

This work is devoted to presenting this method. Its particularity is that it avoids any preliminary transformation of the decision variables: it handles the constraints of the problem as they are initially formulated. The method is effective, simple to use, and direct. It treats problems in a natural way, speeds up the whole resolution process, and yields an appreciable gain in memory space and CPU time. Furthermore, the method integrates a suboptimality criterion which makes it possible to stop the algorithm at a desired accuracy. To the best of our knowledge, no other linear programming method uses this criterion, which could be useful in practical applications.

The principle of this iterative method is simple: starting from an initial feasible solution and an initial support, each iteration finds an ascent direction and a step along this direction that improve the value of the objective function without leaving the feasible region. The initial feasible solution and the initial support can be computed independently; in addition, the initial feasible point need not be an extreme point, as it must be in the simplex method. The details of our multiobjective method will be presented in future work.

2. Statement of the Problem and Definitions

The canonical form of the program is as follows:
$$ z(x, y) = c^t x + k^t y \to \max, \tag{2.1} $$
$$ Ax + Hy = b, \tag{2.2} $$
$$ d^- \le x \le d^+, \tag{2.3} $$
$$ y \ge 0, \tag{2.4} $$
where $c$ and $x$ are $n_x$-vectors, $k$ and $y$ are $n_y$-vectors, $b$ is an $m$-vector, $A = A(I, J_x)$ is an $m \times n_x$-matrix, and $H = H(I, J_y)$ is an $m \times n_y$-matrix, with $\operatorname{rank}(A \mid H) = m < n_x + n_y$; $I = \{1, 2, \dots, m\}$, $J_x = \{1, 2, \dots, n_x\}$, $J_y = \{n_x + 1, n_x + 2, \dots, n_x + n_y\}$.

Let us set $J = J_x \cup J_y$, with $J_x = J_{xB} \cup J_{xN}$ and $J_y = J_{yB} \cup J_{yN}$, where $J_{xB} \cap J_{xN} = \emptyset$, $J_{yB} \cap J_{yN} = \emptyset$, and $|J_{xB}| + |J_{yB}| = m$.

We set $J_B = J_{xB} \cup J_{yB}$, $J_N = J \setminus J_B = J_{xN} \cup J_{yN}$, and we denote by $AH$ the $m \times (n_x + n_y)$-matrix $(A \mid H)$.

Let the vectors and the matrices be partitioned in the following way:
$$ x = x(J_x) = (x_j,\ j \in J_x), \qquad y = y(J_y) = (y_j,\ j \in J_y), $$
$$ x = \begin{pmatrix} x_B \\ x_N \end{pmatrix}, \quad x_B = x(J_{xB}) = (x_j,\ j \in J_{xB}), \quad x_N = x(J_{xN}) = (x_j,\ j \in J_{xN}), $$
$$ y = \begin{pmatrix} y_B \\ y_N \end{pmatrix}, \quad y_B = y(J_{yB}) = (y_j,\ j \in J_{yB}), \quad y_N = y(J_{yN}) = (y_j,\ j \in J_{yN}), $$
$$ c = \begin{pmatrix} c_B \\ c_N \end{pmatrix}, \quad c_B = c(J_{xB}) = (c_j,\ j \in J_{xB}), \quad c_N = c(J_{xN}) = (c_j,\ j \in J_{xN}), $$
$$ k = \begin{pmatrix} k_B \\ k_N \end{pmatrix}, \quad k_B = k(J_{yB}) = (k_j,\ j \in J_{yB}), \quad k_N = k(J_{yN}) = (k_j,\ j \in J_{yN}), $$
$$ A = A(I, J_x) = (a_{ij},\ 1 \le i \le m,\ 1 \le j \le n_x) = (a_j,\ j \in J_x) = (A_B \mid A_N), $$
$$ A_B = A(I, J_{xB}), \quad A_N = A(I, J_{xN}), \quad a_j \text{ is the } j\text{th column of } A, $$
$$ H = H(I, J_y) = (h_{ij},\ 1 \le i \le m,\ n_x + 1 \le j \le n_x + n_y) = (h_j,\ j \in J_y) = (H_B \mid H_N), $$
$$ H_B = H(I, J_{yB}), \quad H_N = H(I, J_{yN}), \quad h_j \text{ is the } j\text{th column of } H, $$
$$ AH = AH(I, J) = (ah_{ij},\ 1 \le i \le m,\ 1 \le j \le n_x + n_y) = (ah_j,\ j \in J_x \cup J_y) = (AH_B \mid AH_N), $$
$$ AH_B = AH(I, J_{xB} \cup J_{yB}) = (A_B \mid H_B), \qquad AH_N = AH(I, J_{xN} \cup J_{yN}) = (A_N \mid H_N). \tag{2.5} $$

Definition 2.1. (i) A vector $(x, y)$ satisfying the constraints (2.2)–(2.4) is called a feasible solution of the problem (2.1)–(2.4).
(ii) A feasible solution $(x^0, y^0)$ is said to be optimal if $z(x^0, y^0) = c^t x^0 + k^t y^0 = \max (c^t x + k^t y)$, where the maximum is taken among all feasible solutions of the problem (2.1)–(2.4).
(iii) On the other hand, a feasible solution $(x^\epsilon, y^\epsilon)$ is called $\epsilon$-optimal or suboptimal if
$$ z(x^0, y^0) - z(x^\epsilon, y^\epsilon) = c^t x^0 - c^t x^\epsilon + k^t y^0 - k^t y^\epsilon \le \epsilon, \tag{2.6} $$
where $(x^0, y^0)$ is an optimal solution of the problem (2.1)–(2.4) and $\epsilon$ is a nonnegative number fixed in advance.
(iv) The set $J_B = J_{xB} \cup J_{yB} \subset J$, $|J_B| = m$, is called a support if $\det AH_B = \det (A_B \mid H_B) \ne 0$.
(v) A pair $\{(x, y), (J_{xB}, J_{yB})\}$ formed by a feasible solution $(x, y)$ and a support $(J_{xB}, J_{yB})$ is called a support feasible solution.
(vi) A support feasible solution is said to be nondegenerate if
$$ d^-_j < x_j < d^+_j \ \text{ for all } j \in J_{xB}, \qquad y_j > 0 \ \text{ for all } j \in J_{yB}. \tag{2.7} $$

3. Increment Formula of the Objective Function

Let {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} be a support feasible solution for the problem (2.1)–(2.4), and let us consider any other feasible solution (π‘₯,𝑦)=(π‘₯+Ξ”π‘₯,𝑦+Δ𝑦).

We define two subsets $J_{yN}^+$ and $J_{yN}^0$ of $J_{yN}$ as follows:
$$ J_{yN}^+ = \{ j \in J_{yN} : y_j > 0 \}, \qquad J_{yN}^0 = \{ j \in J_{yN} : y_j = 0 \}. \tag{3.1} $$
The increment of the objective function is
$$ \Delta z = -\left( (c_B^t, k_B^t) AH_B^{-1} A_N - c_N^t \right) \Delta x_N - \left( (c_B^t, k_B^t) AH_B^{-1} H_N - k_N^t \right) \Delta y_N. \tag{3.2} $$

The potential vector $u$ and the estimation vector $E$ are defined by
$$ u^t = (c_B^t, k_B^t) AH_B^{-1}, \qquad E^t = (E_B^t, E_N^t), \qquad E_B^t = (E_{xB}^t, E_{yB}^t) = (0, 0), $$
$$ E_N^t = (E_{xN}^t, E_{yN}^t), \qquad E_{xN}^t = u^t A_N - c_N^t, \qquad E_{yN}^t = u^t H_N - k_N^t. \tag{3.3} $$
Then the increment formula takes the following final form:
$$ \Delta z = -E_{xN}^t \Delta x_N - E_{yN}^t \Delta y_N. \tag{3.4} $$
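The quantities in (3.3) can be evaluated exactly on a small basis. The sketch below uses the data of the second iteration of the numerical example in Section 8 (basic columns $a_1, a_2$ of $A$, nonbasic columns $h_3, h_4$ of $H$); variable names are ours.

```python
from fractions import Fraction as F

# u^t = (c_B, k_B) AH_B^{-1} and E_N^t = u^t AH_N - (c_N, k_N), computed
# with exact rational arithmetic on a 2x2 basis.
AHB = [[F(1), F(-1)], [F(-7), F(1)]]   # AH_B = (a1 | a2)
AHN = [[F(3), F(2)], [F(2), F(3)]]     # AH_N = (h3 | h4)
cB = [F(2), F(-3)]                     # costs of the basic variables (both x here)
kN = [F(-1), F(1)]                     # costs of the nonbasic variables (both y here)

det = AHB[0][0] * AHB[1][1] - AHB[0][1] * AHB[1][0]
inv = [[AHB[1][1] / det, -AHB[0][1] / det],
       [-AHB[1][0] / det, AHB[0][0] / det]]
u = [cB[0] * inv[0][0] + cB[1] * inv[1][0],
     cB[0] * inv[0][1] + cB[1] * inv[1][1]]
EN = [u[0] * AHN[0][j] + u[1] * AHN[1][j] - kN[j] for j in range(2)]
print(u == [F(19, 6), F(1, 6)], EN == [F(65, 6), F(35, 6)])  # True True
```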

4. Optimality Criterion

Theorem 4.1. Let $\{(x, y), (J_{xB}, J_{yB})\}$ be a support feasible solution for the problem (2.1)–(2.4). Then the relations
$$ E_{xj} \ge 0 \ \text{if } x_j = d^-_j, \quad E_{xj} \le 0 \ \text{if } x_j = d^+_j, \quad E_{xj} = 0 \ \text{if } d^-_j < x_j < d^+_j, \quad j \in J_{xN}, $$
$$ E_{yj} \ge 0 \ \text{if } y_j = 0, \quad E_{yj} = 0 \ \text{if } y_j > 0, \quad j \in J_{yN}, \tag{4.1} $$
are sufficient for the optimality of the feasible solution $(x, y)$. They are also necessary if the support feasible solution is nondegenerate.

Proof. Sufficiency
Let {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} be a support feasible solution satisfying the relations (4.1). For any feasible solution (π‘₯,𝑦) of the problem (2.1)–(2.4), the increment formula (3.4) gives the following: Δ𝑧=βˆ’π‘—βˆˆπ½π‘₯𝑁𝐸π‘₯𝑗π‘₯π‘—βˆ’π‘₯π‘—ξ€Έβˆ’ξ“π‘—βˆˆπ½π‘¦π‘+πΈπ‘¦π‘—ξ€·π‘¦π‘—βˆ’π‘¦π‘—ξ€Έβˆ’ξ“π‘—βˆˆπ½π‘¦π‘0πΈπ‘¦π‘—ξ€·π‘¦π‘—βˆ’π‘¦π‘—ξ€Έ.(4.2)
Since π‘‘βˆ’π‘—β‰€π‘₯𝑗≀𝑑+𝑗, for all π‘—βˆˆπ½π‘₯, and from the relations (4.1), we have βˆ’ξ“π‘—βˆˆπ½π‘₯𝑁𝐸π‘₯𝑗π‘₯π‘—βˆ’π‘₯𝑗=βˆ’π‘—βˆˆπ½π‘₯𝑁,𝐸π‘₯𝑗>0𝐸π‘₯𝑗π‘₯π‘—βˆ’π‘‘βˆ’π‘—ξ€Έβˆ’ξ“π‘—βˆˆπ½π‘₯𝑁,𝐸π‘₯𝑗<0𝐸π‘₯𝑗π‘₯π‘—βˆ’π‘‘+𝑗≀0.(4.3)
On the other hand, the condition 𝑦𝑗β‰₯0,forallπ‘—βˆˆπ½π‘¦, implies that βˆ’ξ“π‘—βˆˆπ½π‘¦π‘+πΈπ‘¦π‘—ξ€·π‘¦π‘—βˆ’π‘¦π‘—ξ€Έβˆ’ξ“π‘—βˆˆπ½π‘¦π‘0πΈπ‘¦π‘—ξ€·π‘¦π‘—βˆ’π‘¦π‘—ξ€Έξ“=βˆ’π‘—βˆˆπ½π‘¦π‘0𝐸𝑦𝑗𝑦𝑗≀0.(4.4) Hence, Δ𝑧=𝑧π‘₯,π‘¦ξ€Έξ€·βˆ’π‘§(π‘₯,𝑦)≀0,𝑧π‘₯,𝑦≀𝑧(π‘₯,𝑦).(4.5) The vector (π‘₯,𝑦) is, consequently, an optimal solution of the problem (2.1)–(2.4).
Necessity
Let {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} be a nondegenerate optimal support feasible solution of the problem (2.1)–(2.4) and assume that the relations (4.1) are not satisfied, that is, there exists at least one index 𝑗0βˆˆπ½π‘=𝐽π‘₯𝑁βˆͺ𝐽𝑦𝑁 such that 𝐸π‘₯𝑗0>0,forπ‘₯𝑗0>π‘‘βˆ’π‘—0,𝑗0∈𝐽π‘₯𝑁𝐸,or,π‘₯𝑗0<0,forπ‘₯𝑗0<𝑑+𝑗0,𝑗0∈𝐽π‘₯𝑁𝐸,or,𝑦𝑗0<0,for𝑗0βˆˆπ½π‘¦π‘0𝐸,or,𝑦𝑗0β‰ 0,for𝑗0∈J𝑦𝑁+.(4.6) We construct another feasible solution (π‘₯,𝑦)=(π‘₯+πœƒπ‘™π‘₯,𝑦+πœƒπ‘™π‘¦), where πœƒ is a positive real number, and 𝑙π‘₯𝑙𝑦=𝑙(𝐽π‘₯)𝑙(𝐽𝑦)=𝑙(𝐽)=𝑙 is a direction vector, constructed as follows.
For this, two cases can arise: (i)if 𝑗0∈𝐽π‘₯𝑁, we set 𝑙π‘₯𝑗0=βˆ’sign𝐸π‘₯𝑗0,𝑙π‘₯𝑗=0,𝑗≠𝑗0,π‘—βˆˆπ½π‘₯𝑁,𝑙𝑦𝑗=0,π‘—βˆˆπ½π‘¦π‘,𝑙𝐡=𝑙π‘₯𝐡𝑙𝑦𝐡=π΄π»βˆ’1π΅π‘Žπ‘—0sign𝐸π‘₯𝑗0,(4.7) where π‘Žπ‘—0 is the 𝑗0th column of the matrix 𝐴; (ii)if 𝑗0βˆˆπ½π‘¦π‘, we set 𝑙𝑦𝑗0=βˆ’sign𝐸𝑦𝑗0,𝑙𝑦𝑗=0,𝑗≠𝑗0,π‘—βˆˆπ½π‘¦π‘,𝑙π‘₯𝑗=0,π‘—βˆˆπ½π‘₯𝑁,𝑙𝐡=𝑙π‘₯𝐡𝑙𝑦𝐡=π΄π»βˆ’1π΅β„Žπ‘—0sign𝐸𝑦𝑗0,(4.8)where β„Žπ‘—0 is the 𝑗0th column of the matrix 𝐻. From the construction of the direction 𝑙, the vector (π‘₯,𝑦) satisfies the principal constraint 𝐴π‘₯+𝐻𝑦=𝑏.
In order to be a feasible solution of the problem (2.1)–(2.4), the vector (π‘₯,𝑦) must in addition satisfy the inequalities π‘‘βˆ’β‰€π‘₯≀𝑑+ and 𝑦β‰₯0, or in its developed form π‘‘βˆ’π‘—βˆ’π‘₯π‘—β‰€πœƒπ‘™π‘₯𝑗≀𝑑+π‘—βˆ’π‘₯𝑗,π‘—βˆˆπ½π‘₯𝐡,π‘‘βˆ’π‘—βˆ’π‘₯π‘—β‰€πœƒπ‘™π‘₯𝑗≀𝑑+π‘—βˆ’π‘₯𝑗,π‘—βˆˆπ½π‘₯𝑁,πœƒπ‘™π‘¦π‘—β‰₯βˆ’π‘¦π‘—,π‘—βˆˆπ½π‘¦π΅,πœƒπ‘™π‘¦π‘—β‰₯βˆ’π‘¦π‘—,π‘—βˆˆπ½π‘¦π‘.(4.9) As the support feasible solution {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} is nondegenerate, we can always find a small positive number πœƒ such that the relations (4.9) are satisfied. Thus, for a small positive number πœƒ, we can state that the vector (π‘₯,𝑦) is a feasible solution for the problem (2.1)–(2.4). The increment formula gives in both cases 𝑧π‘₯,π‘¦ξ€Έβˆ’π‘§(π‘₯,𝑦)=πœƒπΈπ‘—0sign𝐸𝑗0||𝐸=πœƒπ‘—0||>0,(4.10) where 𝐸𝑗0=𝐸π‘₯𝑗0if𝑗0∈𝐽π‘₯𝑁or𝐸𝑗0=𝐸𝑦𝑗0if𝑗0βˆˆπ½π‘¦π‘.(4.11) Therefore, we have found another feasible solution (π‘₯,𝑦)β‰ (π‘₯,𝑦) with the inequality 𝑧(π‘₯,𝑦)>𝑧(π‘₯,𝑦) which contradicts the optimality of the feasible solution (π‘₯,𝑦). Hence the relations (4.1) are satisfied.
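The sign conditions (4.1) for a single nonbasic index can be transcribed directly; the helper below is an illustrative name of ours. For a bounded $x$-variable pass its bounds, and for a nonnegative $y$-variable pass `lo=0`, `hi=None`.

```python
# Checks whether the estimate E of one nonbasic variable satisfies (4.1),
# given the variable's current value and its bounds.
def optimal_sign(E, val, lo, hi):
    if hi is None:                 # y-variable: y >= 0
        return E >= 0 if val == 0 else E == 0
    if val == lo:
        return E >= 0
    if val == hi:
        return E <= 0
    return E == 0                  # interior point requires a zero estimate

print(optimal_sign(5, -4, -4, 4))   # True: at the lower bound with E > 0
print(optimal_sign(5, 3, -4, 4))    # False: interior point with nonzero E
```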

5. The Suboptimality Condition

In order to evaluate the difference between the optimal value $z(x^0, y^0)$ and the value $z(x, y)$ of any support feasible solution $\{(x, y), (J_{xB}, J_{yB})\}$ when $E_y \ge 0$, we use the following quantity:
$$ \beta((x, y), (J_{xB}, J_{yB})) = \sum_{j \in J_{xN},\, E_{xj} > 0} E_{xj} (x_j - d^-_j) + \sum_{j \in J_{xN},\, E_{xj} < 0} E_{xj} (x_j - d^+_j) + \sum_{j \in J_{yN}} E_{yj} y_j, \tag{5.1} $$
which is called the suboptimality condition.

Theorem 5.1 (the suboptimality condition). Let $\{(x, y), (J_{xB}, J_{yB})\}$ be a support feasible solution of the problem (2.1)–(2.4), and let $\epsilon$ be an arbitrary nonnegative number.
If $E_y \ge 0$ and
$$ \sum_{j \in J_{xN},\, E_{xj} > 0} E_{xj} (x_j - d^-_j) + \sum_{j \in J_{xN},\, E_{xj} < 0} E_{xj} (x_j - d^+_j) + \sum_{j \in J_{yN}} E_{yj} y_j \le \epsilon, \tag{5.2} $$
then the feasible solution $(x, y)$ is $\epsilon$-optimal.
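The bound (5.2) is easy to evaluate in practice. Below it is computed on the second-iteration data of the numerical example in Section 8, where the nonbasic $x$-part is empty and $(x, y) = (1, 3, 0, 2)$, so $\beta$ reduces to its $y$-part; variable names are ours.

```python
from fractions import Fraction as F

# beta = sum of E_yj * y_j over the nonbasic y-indices (the x-sums vanish here).
E_y = {3: F(65, 6), 4: F(35, 6)}
y = {3: F(0), 4: F(2)}
beta = sum(E_y[j] * y[j] for j in E_y)
print(beta)  # 35/3
```

Here the bound happens to be tight: the optimal value is $20/3$ and $z(1, 3, 0, 2) = -5$, so the true gap is $20/3 - (-5) = 35/3$.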

Proof. We have
$$ z(x^0, y^0) - z(x, y) \le \beta((x, y), (J_{xB}, J_{yB})). \tag{5.3} $$
Then, if
$$ \beta((x, y), (J_{xB}, J_{yB})) \le \epsilon, \tag{5.4} $$
we will have
$$ z(x^0, y^0) - z(x, y) \le \epsilon, \tag{5.5} $$
and therefore $(x, y)$ is $\epsilon$-optimal.
In the particular case where $\epsilon = 0$, the feasible solution $(x, y)$ is consequently optimal.

6. Construction of the Algorithm

Given any nonnegative real number $\epsilon$ and an initial support feasible solution $\{(x, y), (J_{xB}, J_{yB})\}$, the aim of the algorithm is to construct an $\epsilon$-optimal solution $(x^\epsilon, y^\epsilon)$ or an optimal solution $(x^0, y^0)$. An iteration of the algorithm consists of moving from $\{(x, y), (J_{xB}, J_{yB})\}$ to another support feasible solution $\{(\bar{x}, \bar{y}), (\bar{J}_{xB}, \bar{J}_{yB})\}$ such that $z(\bar{x}, \bar{y}) \ge z(x, y)$. For this purpose, we construct the new feasible solution as $(\bar{x}, \bar{y}) = (x, y) + \theta (l_x, l_y)$, where $l = (l_x, l_y)$ is an appropriate direction and $\theta$ is the step along this direction.

In this algorithm, the simplex metric is chosen: we thus vary only one component among those which do not satisfy the relations (4.1).

In order to obtain a maximal increment, we take $\theta$ as large as possible and choose the subscript $j_0$ such that
$$ |E_{j_0}| = \max \left( |E_{xj_0}|, |E_{yj_0}| \right), \tag{6.1} $$
with
$$ |E_{xj_0}| = \max_{j \in J_{xNNO}} |E_{xj}|, \qquad |E_{yj_0}| = \max_{j \in J_{yNNO}} |E_{yj}|, \tag{6.2} $$
where $J_{xNNO}$ and $J_{yNNO}$ are the subsets of $J_{xN}$ and $J_{yN}$, respectively, whose subscripts do not satisfy the optimality relations (4.1).
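The choice of $j_0$ by (6.1)–(6.2) is a simple argmax over the violated estimates. The values below are the first-iteration estimates of the example in Section 8 ($E_{x2} = 65/23$, $E_{y4} = -50/23$, both nonoptimal there).

```python
from fractions import Fraction as F

# j0 = index of the largest |E_j| among the indices violating (4.1).
E = {2: F(65, 23), 4: F(-50, 23)}     # nonoptimal indices only
j0 = max(E, key=lambda j: abs(E[j]))
print(j0)  # 2
```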

6.1. Computation of the Direction $l$

We have two cases.
(i) If $|E_{j_0}| = |E_{xj_0}|$, we set
$$ l_{xj_0} = -\operatorname{sign} E_{xj_0}, \quad l_{xj} = 0,\ j \ne j_0,\ j \in J_{xN}, \quad l_{yj} = 0,\ j \in J_{yN}, \quad l_B = \begin{pmatrix} l_{xB} \\ l_{yB} \end{pmatrix} = AH_B^{-1} a_{j_0} \operatorname{sign} E_{xj_0}. \tag{6.3} $$
(ii) If $|E_{j_0}| = |E_{yj_0}|$, we set
$$ l_{yj_0} = -\operatorname{sign} E_{yj_0}, \quad l_{yj} = 0,\ j \ne j_0,\ j \in J_{yN}, \quad l_{xj} = 0,\ j \in J_{xN}, \quad l_B = \begin{pmatrix} l_{xB} \\ l_{yB} \end{pmatrix} = AH_B^{-1} h_{j_0} \operatorname{sign} E_{yj_0}. \tag{6.4} $$
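Case (ii) can be checked numerically on the second iteration of the example in Section 8: $j_0 = 4$ is a $y$-variable with $E_{y4} = 35/6 > 0$, so $l_{y4} = -1$ and $l_B = AH_B^{-1} h_4 \operatorname{sign} E_{y4}$. The $2 \times 2$ solve below uses Cramer's rule; variable names are ours.

```python
from fractions import Fraction as F

# Solve AH_B l_B = h_4 exactly (sign factor is +1 here since E_y4 > 0).
AHB = [[F(1), F(-1)], [F(-7), F(1)]]
h4 = [F(2), F(3)]
det = AHB[0][0] * AHB[1][1] - AHB[0][1] * AHB[1][0]
lB = [(h4[0] * AHB[1][1] - AHB[0][1] * h4[1]) / det,
      (AHB[0][0] * h4[1] - h4[0] * AHB[1][0]) / det]
print(lB == [F(-5, 6), F(-17, 6)])  # True
```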

6.2. Computation of the Step $\theta$

The step πœƒ0 must be taken as follows: πœƒ0ξ€·πœƒ=minπ‘₯,πœƒπ‘¦ξ€Έ.(6.5)(i)If |𝐸𝑗0|=|𝐸π‘₯𝑗0|, then πœƒπ‘₯=min(πœƒπ‘₯𝑗0,πœƒπ‘₯𝑗1), where πœƒπ‘₯𝑗0=𝑑+𝑗0βˆ’π‘₯𝑗0,if𝐸π‘₯𝑗0π‘₯<0,𝑗0βˆ’π‘‘βˆ’π‘—0,if𝐸π‘₯𝑗0πœƒ>0,π‘₯𝑗1ξ‚€πœƒ=minπ‘₯𝑗,π‘—βˆˆπ½π‘₯𝐡,(6.6) with πœƒπ‘₯𝑗=⎧βŽͺβŽͺ⎨βŽͺβŽͺβŽ©π‘‘+π‘—βˆ’π‘₯𝑗𝑙π‘₯𝑗,if𝑙π‘₯𝑗𝑑>0,βˆ’π‘—βˆ’π‘₯𝑗𝑙π‘₯𝑗,if𝑙π‘₯𝑗<0,∞,if𝑙π‘₯𝑗=0.(6.7) The number πœƒπ‘¦ will be computed in the following way: πœƒπ‘¦=πœƒπ‘¦π‘—1ξ‚€πœƒ=min𝑦𝑗,π‘—βˆˆπ½π‘¦π΅ξ‚,(6.8) where πœƒπ‘¦π‘—=⎧βŽͺ⎨βŽͺβŽ©βˆ’π‘¦π‘—π‘™π‘¦π‘—,if𝑙𝑦𝑗<0,∞,if𝑙𝑦𝑗β‰₯0.(6.9)(ii)If |𝐸𝑗0|=|𝐸𝑦𝑗0|, then πœƒπ‘₯=πœƒπ‘₯𝑗1ξ‚€πœƒ=minπ‘₯𝑗,π‘—βˆˆπ½π‘₯𝐡,(6.10)

where πœƒπ‘₯𝑗=⎧βŽͺβŽͺ⎨βŽͺβŽͺβŽ©π‘‘+π‘—βˆ’π‘₯𝑗𝑙π‘₯𝑗,if𝑙π‘₯𝑗𝑑>0,βˆ’π‘—βˆ’π‘₯𝑗𝑙π‘₯𝑗,if𝑙π‘₯𝑗<0,∞,if𝑙π‘₯π‘—πœƒ=0,π‘¦ξ‚€πœƒ=min𝑦𝑗0,πœƒπ‘¦π‘—1,(6.11) with πœƒπ‘¦π‘—0=𝑦𝑗0,if𝐸𝑦𝑗0>0,∞,if𝐸𝑦𝑗0πœƒ<0,𝑦𝑗1ξ‚€πœƒ=min𝑦𝑗,π‘—βˆˆπ½π‘¦π΅ξ‚,(6.12) where πœƒπ‘¦π‘—=⎧βŽͺ⎨βŽͺβŽ©βˆ’π‘¦π‘—π‘™π‘¦π‘—,if𝑙𝑦𝑗<0,∞,if𝑙𝑦𝑗β‰₯0.(6.13)

The new feasible solution is
$$ (\bar{x}, \bar{y}) = (x + \theta^0 l_x,\ y + \theta^0 l_y). \tag{6.14} $$

6.3. The New Suboptimality Condition

Let us calculate the suboptimality condition of the new support feasible solution in the case $E_y \ge 0$. We have
$$ \beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB})) = \sum_{j \in J_{xN},\, E_{xj} > 0} E_{xj} (\bar{x}_j - d^-_j) + \sum_{j \in J_{xN},\, E_{xj} < 0} E_{xj} (\bar{x}_j - d^+_j) + \sum_{j \in J_{yN}} E_{yj} \bar{y}_j. \tag{6.15} $$
(i) If $j_0 \in J_{xN}$, then the components $\bar{x}_j$, $j \in J_{xN}$, are equal to
$$ \bar{x}_j = \begin{cases} x_j, & \text{for } j \ne j_0, \\ x_{j_0} - \theta^0, & \text{if } E_{xj_0} > 0, \\ x_{j_0} + \theta^0, & \text{if } E_{xj_0} < 0, \end{cases} \tag{6.16} $$
and the components $\bar{y}_j$ are
$$ \bar{y}_j = y_j, \quad \forall j \in J_{yN}. \tag{6.17} $$
Hence,
$$ \beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB})) = \beta((x, y), (J_{xB}, J_{yB})) - \theta^0 |E_{xj_0}|. \tag{6.18} $$
(ii) If $j_0 \in J_{yN}$, then the components $\bar{x}_j$, $j \in J_{xN}$, are equal to
$$ \bar{x}_j = x_j, \quad \forall j \in J_{xN}, \tag{6.19} $$
and the components $\bar{y}_j$, $j \in J_{yN}$, are
$$ \bar{y}_j = \begin{cases} y_j, & \text{for } j \ne j_0, \\ y_{j_0} - \theta^0, & \text{for } j = j_0. \end{cases} \tag{6.20} $$
Hence,
$$ \beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB})) = \beta((x, y), (J_{xB}, J_{yB})) - \theta^0 E_{yj_0}. \tag{6.21} $$
In both cases, we will have
$$ \beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB})) = \beta((x, y), (J_{xB}, J_{yB})) - \theta^0 |E_{j_0}|, \tag{6.22} $$
with $|E_{j_0}| = |E_{xj_0}|$ or $|E_{j_0}| = E_{yj_0}$.

6.4. Changing the Support

If 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))β‰€πœ–, then the feasible solution (π‘₯,𝑦) is πœ–-optimal and we can stop the algorithm; otherwise, we will change 𝐽𝐡 as follows: (i)if πœƒ0=πœƒπ‘₯𝑗0βˆ¨πœƒπ‘¦π‘—0, then 𝐽𝐡=𝐽𝐡, π‘₯=π‘₯+πœƒ0𝑙π‘₯, 𝑦=𝑦+πœƒ0𝑙𝑦, (ii)if πœƒ0=πœƒπ‘₯𝑗1βˆ¨πœƒπ‘¦π‘—1, then 𝐽𝐡=(𝐽𝐡⧡𝑗1)βˆͺ𝑗0, π‘₯=π‘₯+πœƒ0𝑙π‘₯, 𝑦=𝑦+πœƒ0𝑙𝑦.

Then we start a new iteration with the new support feasible solution $\{(\bar{x}, \bar{y}), (\bar{J}_{xB}, \bar{J}_{yB})\}$, where the support $\bar{J}_B$ satisfies the algebraic condition
$$ \det \overline{AH}_B = \det AH(I, \bar{J}_B) \ne 0. \tag{6.23} $$

Remark 6.1. The step $\theta^0 = \infty$ may occur only if $J_{xB} = \emptyset$, $|E_{j_0}| = |E_{yj_0}|$, and $\theta_y = \infty$. In such a case, the objective function is unbounded with respect to $y$.

7. Algorithm

Let πœ– be any nonnegative real number and {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} an initial support feasible solution. The steps of the algorithm are as follows. (1)Compute the estimations vector: 𝐸𝑑𝑁=𝐸𝑑𝐽𝑁=𝐸𝑑π‘₯𝑁,𝐸𝑑𝑦𝑁=ξ€·π‘’π‘‘π΄π‘βˆ’π‘π‘‘π‘,π‘’π‘‘π»π‘βˆ’π‘˜π‘‘π‘ξ€Έ,𝑒𝑑=𝑐𝑑𝐡,π‘˜π‘‘π΅ξ€Έπ΄π»βˆ’1𝐡.(7.1)(2)Optimality test of the support feasible solution {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)}. (i)If 𝐸𝑦β‰₯0, then (a)calculate the value of suboptimality 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)), (b)if 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))=0, the process is stopped with {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} as an optimal support solution, (c) if 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))β‰€πœ–, the process is stopped with {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} as an πœ–-optimal support solution, (d)if 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))>πœ–, go to (3), (ii) if 𝐸𝑦̸β‰₯0, go directly to (3). (3)Change the feasible solution (π‘₯,𝑦) by (π‘₯,𝑦): π‘₯=π‘₯+πœƒ0𝑙π‘₯ and 𝑦=𝑦+πœƒ0𝑙𝑦. (i)Choose a subscript 𝑗0. (ii)Compute the appropriate direction 𝑙=𝑙π‘₯𝑙𝑦. (iii)Compute the step πœƒ0. (a)If πœƒ0=∞ then the objective function is unbounded with respect to 𝑦 and the process is stopped. (b)Otherwise, compute (π‘₯,𝑦)=(π‘₯+πœƒ0𝑙π‘₯,𝑦+πœƒ0𝑙𝑦). (4)Optimality test of the new feasible solution (π‘₯,𝑦). (i)If 𝐸𝑦β‰₯0, then (a)calculate 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))=𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))βˆ’πœƒ0|𝐸𝑗0|, (b)if 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))=0, the process is stopped with {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} as an optimal support solution, (c) if 𝛽((π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡))β‰€πœ–, the process is stopped with {(π‘₯,𝑦),(𝐽π‘₯𝐡,𝐽𝑦𝐡)} as an πœ–-optimal support solution,(d)otherwise, go to (5). (ii)If 𝐸𝑦̸β‰₯0, then go to (5). 
(5) Change the support $J_B$ into $\bar{J}_B$.
(i) If $\theta^0 = \theta_{xj_0}$ or $\theta^0 = \theta_{yj_0}$, then
$$ \bar{J}_{xB} = J_{xB}, \quad \bar{J}_{xN} = J_{xN}, \quad \bar{J}_{yB} = J_{yB}, \quad \bar{J}_{yN} = J_{yN}, \quad \bar{x} = x + \theta^0 l_x, \quad \bar{y} = y + \theta^0 l_y. \tag{7.2} $$
(ii) If $\theta^0 = \theta_{xj_1}$ or $\theta^0 = \theta_{yj_1}$, two cases can arise:
(a) case where $|E_{j_0}| = |E_{xj_0}|$:
if $\theta^0 = \theta_{xj_1}$, then
$$ \bar{J}_{xB} = (J_{xB} \setminus j_1) \cup j_0, \quad \bar{J}_{xN} = (J_{xN} \setminus j_0) \cup j_1, \quad \bar{J}_{yB} = J_{yB}, \quad \bar{J}_{yN} = J_{yN}; \tag{7.3} $$
if $\theta^0 = \theta_{yj_1}$, then
$$ \bar{J}_{xB} = J_{xB} \cup j_0, \quad \bar{J}_{xN} = J_{xN} \setminus j_0, \quad \bar{J}_{yB} = J_{yB} \setminus j_1, \quad \bar{J}_{yN} = J_{yN} \cup j_1. \tag{7.4} $$
(b) case where $|E_{j_0}| = |E_{yj_0}|$:
if $\theta^0 = \theta_{yj_1}$, then
$$ \bar{J}_{yB} = (J_{yB} \setminus j_1) \cup j_0, \quad \bar{J}_{yN} = (J_{yN} \setminus j_0) \cup j_1, \quad \bar{J}_{xB} = J_{xB}, \quad \bar{J}_{xN} = J_{xN}; \tag{7.5} $$
if $\theta^0 = \theta_{xj_1}$, then
$$ \bar{J}_{xB} = J_{xB} \setminus j_1, \quad \bar{J}_{xN} = J_{xN} \cup j_1, \quad \bar{J}_{yB} = J_{yB} \cup j_0, \quad \bar{J}_{yN} = J_{yN} \setminus j_0. \tag{7.6} $$
(iii) Go to (1) with the new support feasible solution $\{(\bar{x}, \bar{y}), (\bar{J}_{xB}, \bar{J}_{yB})\}$, where $\bar{x} = x + \theta^0 l_x$ and $\bar{y} = y + \theta^0 l_y$.
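The steps above can be assembled into a compact loop. The sketch below runs the algorithm on the numerical example of Section 8, using exact rational arithmetic, 0-based indices ($x$-variables first, then $y$-variables), and a generic support swap for step (5); all helper names (`gauss`, `iterate`, `cols`) are ours, so this is a reading aid rather than the authors' implementation.

```python
from fractions import Fraction as F

INF = float('inf')

def gauss(M, rhs):
    """Solve the square system M z = rhs exactly by Gaussian elimination."""
    n = len(M)
    a = [list(M[i]) + [rhs[i]] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if a[r][c] != 0)
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c and a[r][c] != 0:
                f = a[r][c] / a[c][c]
                a[r] = [a[r][k] - f * a[c][k] for k in range(n + 1)]
    return [a[r][n] / a[r][r] for r in range(n)]

def iterate(cols, cost, lo, hi, nx, x, JB):
    """One iteration of the method: returns (optimal?, new x, new JB)."""
    n, m = len(cols), len(cols[0])
    JN = [j for j in range(n) if j not in JB]
    u = gauss([cols[j] for j in JB], [cost[j] for j in JB])  # AH_B^t u = cost_B
    E = {j: sum(u[i] * cols[j][i] for i in range(m)) - cost[j] for j in JN}
    bad = [j for j in JN                                      # violations of (4.1)
           if (j < nx and ((E[j] > 0 and x[j] > lo[j]) or (E[j] < 0 and x[j] < hi[j])))
           or (j >= nx and (E[j] < 0 or (E[j] != 0 and x[j] > 0)))]
    if not bad:
        return True, x, JB
    j0 = max(bad, key=lambda j: abs(E[j]))                    # rule (6.1)
    s = 1 if E[j0] > 0 else -1
    l = [F(0)] * n
    l[j0] = -s
    M = [[cols[JB[r]][i] for r in range(m)] for i in range(m)]  # AH_B by rows
    for r, v in enumerate(gauss(M, cols[j0])):
        l[JB[r]] = s * v                                      # l_B of (6.3)/(6.4)
    if j0 < nx:                                               # step for j0 itself
        t0, j1 = (hi[j0] - x[j0] if s < 0 else x[j0] - lo[j0]), j0
    else:
        t0, j1 = (x[j0] if s > 0 else INF), j0
    for j in JB:                                              # ratio test (6.7)/(6.9)
        if j < nx:
            t = (hi[j] - x[j]) / l[j] if l[j] > 0 else \
                (lo[j] - x[j]) / l[j] if l[j] < 0 else INF
        else:
            t = -x[j] / l[j] if l[j] < 0 else INF
        if t < t0:
            t0, j1 = t, j
    if t0 == INF:
        raise ValueError("objective unbounded")
    x = [x[j] + t0 * l[j] for j in range(n)]
    if j1 != j0:                                              # support change (7.3)-(7.6)
        JB = [j0 if j == j1 else j for j in JB]
    return False, x, JB

# Data of the Section 8 example: columns a1, a2, h3, h4; costs (c, k).
cols = [[F(1), F(-7)], [F(-1), F(1)], [F(3), F(2)], [F(2), F(3)]]
cost = [F(2), F(-3), F(-1), F(1)]
lo, hi = [F(-2), F(-4)], [F(2), F(4)]
x, JB = [F(1), F(3), F(0), F(2)], [0, 2]      # initial point, support {x1, y3}
done = False
while not done:
    done, x, JB = iterate(cols, cost, lo, hi, 2, x, JB)
z = sum(cost[j] * x[j] for j in range(4))
print(x, z)   # optimum reached: x = (-2/3, -8/3, 0, 0), z = 20/3
```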

8. Numerical Example

For the sake of clarity, let us illustrate the theoretical development of the method on the following linear program:
$$ z(x, y) = 2x_1 - 3x_2 - y_3 + y_4 \to \max, $$
$$ x_1 - x_2 + 3y_3 + 2y_4 = 2, \qquad -7x_1 + x_2 + 2y_3 + 3y_4 = 2, $$
$$ -2 \le x_1 \le 2, \qquad -4 \le x_2 \le 4, \qquad y_3 \ge 0, \qquad y_4 \ge 0, \tag{8.1} $$
where $x = (x_1, x_2)$ and $y = (y_3, y_4)$.

We define
$$ A = \begin{pmatrix} 1 & -1 \\ -7 & 1 \end{pmatrix}, \qquad H = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}, \qquad c^t = (2, -3), \qquad k^t = (-1, 1). $$

Let (π‘₯,𝑦)=(1302) be an initial feasible solution of the problem. We set 𝐽𝐡=𝐽π‘₯𝐡,𝐽𝑦𝐡={1,3},𝐽𝑁={2,4}.(8.2) Let πœ–=0.

Thus, we have an initial support feasible solution $\{(x, y), J_B\}$ with
$$ AH_B = \begin{pmatrix} 1 & 3 \\ -7 & 2 \end{pmatrix}, \qquad AH_N = \begin{pmatrix} -1 & 2 \\ 1 & 3 \end{pmatrix}, \qquad z(x, y) = -5. \tag{8.3} $$

First Iteration
Let us calculate
$$ u^t = (c_B^t, k_B^t) AH_B^{-1} = \left( -\tfrac{3}{23}, -\tfrac{7}{23} \right), \qquad E_N^t = u^t AH_N - (c_N^t, k_N^t) = \left( \tfrac{65}{23}, -\tfrac{50}{23} \right). \tag{8.4} $$
Choice of $j_0$:
Among the nonoptimal indices $J_{xNNO} \cup J_{yNNO} = \{2, 4\}$, $j_0$ is chosen such that $|E_{j_0}|$ is maximal; we then have $j_0 = 2$.
Computation of $l$:
$$ l_{x2} = -1, \qquad l_{y4} = 0, \qquad l_B = \begin{pmatrix} l_{x1} \\ l_{y3} \end{pmatrix} = AH_B^{-1} a_2 = \begin{pmatrix} -\tfrac{5}{23} \\ -\tfrac{6}{23} \end{pmatrix}. \tag{8.5} $$
Hence
$$ l_x = \begin{pmatrix} -\tfrac{5}{23} \\ -1 \end{pmatrix}, \qquad l_y = \begin{pmatrix} -\tfrac{6}{23} \\ 0 \end{pmatrix}. \tag{8.6} $$
Computation of $\theta^0$:
$$ \theta_{xj_0} = \theta_{x2} = x_2 - d^-_2 = 7, \qquad \theta_{x1} = \frac{d^-_1 - x_1}{l_{x1}} = \frac{69}{5}, \qquad \theta_{y3} = \frac{-y_3}{l_{y3}} = 0. \tag{8.7} $$
The maximal step is then
$$ \theta^0 = \theta_{yj_1} = \theta_{y3} = 0. \tag{8.8} $$
Computation of $(\bar{x}, \bar{y})$:
$$ \bar{x} = x + \theta^0 l_x = (1, 3), \qquad \bar{y} = y + \theta^0 l_y = (0, 2). \tag{8.9} $$
Change of the support:
$$ \bar{J}_{xB} = \{1, 2\}, \qquad \bar{J}_{xN} = \emptyset, \qquad \bar{J}_{yB} = \emptyset, \qquad \bar{J}_{yN} = \{3, 4\}. \tag{8.10} $$

Second Iteration
We have
$$ (x, y) = (1, 3, 0, 2), \quad J_{xB} = \{1, 2\}, \quad J_{xN} = \emptyset, \quad J_{yB} = \emptyset, \quad J_{yN} = \{3, 4\}, $$
$$ AH_B = \begin{pmatrix} 1 & -1 \\ -7 & 1 \end{pmatrix}, \qquad AH_N = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}. \tag{8.11} $$
We compute
$$ u^t = (c_B^t, k_B^t) AH_B^{-1} = \left( \tfrac{19}{6}, \tfrac{1}{6} \right), \qquad E_N^t = u^t AH_N - (c_N^t, k_N^t) = \left( \tfrac{65}{6}, \tfrac{35}{6} \right). \tag{8.12} $$
Computation of $\beta((x, y), (J_{xB}, J_{yB}))$:
$$ \beta((x, y), (J_{xB}, J_{yB})) = E_{y3} y_3 + E_{y4} y_4 = \tfrac{35}{3}. \tag{8.13} $$
Then $(x, y)$ is not optimal.
Choice of $j_0$: as the set of nonoptimal indices is $J_{xNNO} \cup J_{yNNO} = \{4\}$, we have $j_0 = 4$.
Computation of $l$:
$$ l_{y4} = -1, \qquad l_{y3} = 0, \qquad l_B = \begin{pmatrix} l_{x1} \\ l_{x2} \end{pmatrix} = AH_B^{-1} h_4 = \begin{pmatrix} -\tfrac{5}{6} \\ -\tfrac{17}{6} \end{pmatrix}. \tag{8.14} $$
Hence $l_x = (-5/6, -17/6)$ and $l_y = (0, -1)$.
Computation of $\theta^0$:
$$ \theta_{xj_1} = \min (\theta_{x1}, \theta_{x2}) = \min \left( \frac{d^-_1 - x_1}{l_{x1}}, \frac{d^-_2 - x_2}{l_{x2}} \right) = \min \left( \tfrac{18}{5}, \tfrac{42}{17} \right) = \tfrac{42}{17} = \theta_{x2}, \qquad \theta_{y3} = \infty, \qquad \theta_{yj_0} = \theta_{y4} = 2. \tag{8.15} $$
The maximal step is thus $\theta^0 = \theta_{y4} = 2$.
Consequently, the support remains unchanged:
$$ \bar{J}_B = J_B = \{1, 2\}, \qquad \bar{J}_N = J_N = \{3, 4\}. \tag{8.16} $$
Computation of $(\bar{x}, \bar{y})$:
$$ \bar{x} = x + \theta^0 l_x = \left( -\tfrac{2}{3}, -\tfrac{8}{3} \right), \qquad \bar{y} = y + \theta^0 l_y = (0, 0). \tag{8.17} $$
Computation of $\beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB}))$:
$$ \beta((\bar{x}, \bar{y}), (J_{xB}, J_{yB})) = 0. \tag{8.18} $$
Then the vector $(\bar{x}, \bar{y}) = (-2/3, -8/3, 0, 0)$ is an optimal solution, and the maximal value of the objective function is $z = 20/3$.
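The reported optimum can be verified by direct substitution into (8.1), using exact rational arithmetic:

```python
from fractions import Fraction as F

# Plug (x, y) = (-2/3, -8/3, 0, 0) into the constraints and the objective.
x1, x2, y3, y4 = F(-2, 3), F(-8, 3), F(0), F(0)
print(x1 - x2 + 3 * y3 + 2 * y4,        # first constraint: 2
      -7 * x1 + x2 + 2 * y3 + 3 * y4,   # second constraint: 2
      2 * x1 - 3 * x2 - y3 + y4)        # objective: 20/3
```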

9. Conclusion

The necessity of developing the method presented in this paper arose within a more complex optimization scheme involving the resolution of a multicriteria decision problem [2]. Indeed, an efficiency test of nonbasic variables has to be executed several times during the resolution process, and this test leads to solving a monocriterion program with two kinds of variables: upper and lower bounded variables and nonnegative ones. Such linear models also appear as subproblems in quadratic programming [4] and optimal control, for example. In these cases, the use of the simplex method is not suitable, since the transformed problems are often degenerate. Another particularity of our method is that it uses a suboptimality criterion which can stop the algorithm at a desired precision. It is effective, fast, simple, and reduces the time of the whole optimization process.