Mathematical Problems in Engineering

Volume 2011, Article ID 374390, 18 pages

http://dx.doi.org/10.1155/2011/374390

## An Effective Generalization of the Direct Support Method

Sonia Radjef^{1} and Mohand Ouamer Bibi^{2}

^{1}Department of Mathematics, Faculty of Sciences, USTOMB, Oran 31000, Algeria
^{2}Department of Operations Research, LAMOS Laboratory, University of Béjaia, Béjaia 06000, Algeria

Received 4 November 2010; Accepted 17 February 2011

Academic Editor: Ezzat G. Bakhoum

Copyright © 2011 Sonia Radjef and Mohand Ouamer Bibi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The main objective of this paper is to solve a problem encountered in an industrial firm: the design of a weekly production plan that optimizes the quantities to be launched into production. One of the problems raised in that company can be modeled as a linear multiobjective program in which the decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative. During the resolution of the multiobjective case, we faced the necessity of developing an effective method for the mono-objective case that does not increase the size of the linear program, since the industrial instance to solve is already very large. We therefore propose an extension of the direct support method, presented in this paper. Its particularity is that it avoids any preliminary transformation of the decision variables: it handles the bounds as they are initially formulated. The method is effective, simple to use, and speeds up the resolution process.

#### 1. Introduction

The company Ifri is one of the largest and most important Algerian companies in the agroalimentary field. *Ifri* produces mainly mineral water and various drinks.

From January to October of the year 2003, the company's production was about 175 million bottles. Expressed in liters, the production over this period exceeded 203 million liters of finished products (all products included). Having covered national market demand, *Ifri* turned to the acquisition of new international markets.

The main objective of our application [1] is to build a data-processing application that computes an optimal weekly production plan, replacing a planning process based primarily on the know-how and experience of the decision makers.

This problem amounts to optimizing the quantities to launch into production. It is modeled as a linear multiobjective program in which the objective functions are linear, the constraints are linear, and the decision variables are of two kinds: the first are upper and lower bounded, and the second are nonnegative.

Multicriteria optimization problems are a class of difficult optimization problems in which several different objective functions have to be considered at the same time. It is seldom the case that a single point optimizes all the objective functions simultaneously. Therefore, we search for the so-called *efficient* points, that is, feasible points with the property that no other feasible point improves all the criteria without deteriorating at least one of them.
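To make the efficiency notion concrete, a dominance check can be written in a few lines: a feasible point is discarded only if some other point is at least as good on every criterion and strictly better on one. The following Python sketch is illustrative (the function names and the sample criterion vectors are ours, not the paper's; maximization is assumed):

```python
def dominates(a, b):
    """True if criterion vector a dominates b (maximization):
    a >= b componentwise and a != b."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def efficient_points(points):
    """Keep the points whose criterion vectors are dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the criterion vectors (3, 1), (2, 2), (1, 1), and (3, 0), only the first two are efficient: (1, 1) and (3, 0) are both dominated by (3, 1), while (3, 1) and (2, 2) do not dominate each other.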

In [2], we developed a method to solve the multiobjective linear programming problem described above. To avoid the preliminary transformation of the constraints, and hence an increase in the problem dimension, we propose to extend the direct support method of Gabasov et al. [3], known in single-objective programming.

In [2], we proposed a procedure for finding an initial efficient extreme point, a procedure to test the efficiency of a nonbasic variable, and a method to compute all the efficient extreme points, the weakly efficient extreme points, and the ε-weakly efficient extreme points of the problem.

A multiobjective linear program in which the two types of decision variables coexist can be presented in the following canonical form:

$$\max\ Z(x, y) = Cx + Dy, \quad Ax + By = b, \quad d^- \le x \le d^+, \quad y \ge 0, \tag{1.1}$$

where $C$ and $D$ are $k \times n$ and $k \times p$ criteria matrices, respectively, $A$ and $B$ are matrices of order $m \times n$ and $m \times p$, respectively, with $b \in \mathbb{R}^m$, $d^-, d^+ \in \mathbb{R}^n$, $x \in \mathbb{R}^n$, and $y \in \mathbb{R}^p$.

We denote by $S$ the set of feasible decisions:

$$S = \{(x, y) \in \mathbb{R}^n \times \mathbb{R}^p : Ax + By = b,\ d^- \le x \le d^+,\ y \ge 0\}.$$

*Definition 1.1.* A feasible decision $(x^0, y^0)$ is said to be efficient (or Pareto optimal) for the problem (1.1) if there is no other feasible solution $(x, y)$ such that $Z(x, y) \ge Z(x^0, y^0)$ and $Z(x, y) \ne Z(x^0, y^0)$, where $Z$ denotes the vector of the objective functions.

*Definition 1.2.* A feasible decision $(x^0, y^0)$ is said to be weakly efficient (or Slater optimal) for the problem (1.1) if there is no other feasible solution $(x, y)$ such that $Z(x, y) > Z(x^0, y^0)$ componentwise.

*Definition 1.3.* Let $\epsilon \in \mathbb{R}^k$, $\epsilon \ge 0$. A feasible decision $(x^0, y^0)$ is said to be $\epsilon$-weakly efficient for the problem (1.1) if there is no other feasible solution $(x, y)$ such that $Z(x, y) > Z(x^0, y^0) + \epsilon$.

Multiobjective linear programming consists of determining the whole set of efficient decisions, weakly efficient decisions, and ε-weakly efficient decisions of the problem (1.1) for a given ε.

During the resolution process, we need an efficiency test of nonbasic variables. This test can be formulated as a single-objective linear program whose decision variables are of two types: upper and lower bounded variables and nonnegative variables. We propose in this paper to solve this latter problem by an adapted direct support method. Our approach is based on the principle of the methods developed by Gabasov et al. [3], which solve either a single-objective linear program with nonnegative decision variables or one with bounded decision variables. Our work proposes a generalization to the single-objective linear program with both types of decision variables: upper and lower bounded variables and nonnegative variables.

This work is devoted to presenting this method. Its particularity is that it avoids the preliminary transformation of the decision variables: it handles the constraints of the problem as they are initially formulated. The method is effective, simple to use, and direct. It allows us to treat problems in a natural way and speeds up the whole resolution process, generating an important gain in memory space and CPU time. Furthermore, the method integrates a suboptimality criterion which permits stopping the algorithm at a desired accuracy. To the best of our knowledge, no other linear programming method uses this criterion, which could be useful in practical applications.

The principle of this iterative method is simple: starting from an initial feasible solution and an initial support, each iteration finds an ascent direction and a step along this direction that improve the value of the objective function without leaving the feasible set. The initial feasible solution and the initial support can be computed independently of each other. Moreover, the initial feasible point need not be an extreme point, as it must be in the simplex method. The details of our multiobjective method will be presented in future work.

#### 2. Statement of the Problem and Definitions

The canonical form of the program is as follows:

$$\max\ z(x, y) = c^T x + d^T y, \tag{2.1}$$
$$Ax + By = b, \tag{2.2}$$
$$d^- \le x \le d^+, \tag{2.3}$$
$$y \ge 0, \tag{2.4}$$

where $c$, $d^-$, and $d^+$ are $n$-vectors, $d$ and $y$ are $p$-vectors, $b$ an $m$-vector, $A$ an $m \times n$-matrix, $B$ an $m \times p$-matrix, with $m \le n + p$, $\operatorname{rank}(A, B) = m$, and $d^- \le d^+$.

Let us set $J = \{1, \ldots, n + p\}$, such that $J = J_1 \cup J_2$, $J_1 \cap J_2 = \emptyset$, with $|J_1| = n$ and $|J_2| = p$, where $J_1$ indexes the bounded variables $x$ and $J_2$ indexes the nonnegative variables $y$.

We set $u = (x, y)$, $v = (c, d)$, and we denote by $M$ the $m \times (n + p)$-matrix $M = (A, B)$.

Let the vectors $u$, $v$ and the matrix $M$ be partitioned according to an index subset $J_B \subset J$ and its complement $J_N = J \setminus J_B$: $u = (u_B, u_N)$, $v = (v_B, v_N)$, $M = (M_B, M_N)$.

*Definition 2.1.* (i) A vector $u = (x, y)$, satisfying the constraints (2.2)–(2.4), is called a *feasible solution* of the problem (2.1)–(2.4).

(ii) A feasible solution $u^0$ is said to be *optimal* if $z(u^0) = \max z(u)$, where the maximum is taken among all the feasible solutions $u$ of the problem (2.1)–(2.4).

(iii) On the other hand, a feasible solution $u$ is called *ε-optimal* or *suboptimal* if $z(u^0) - z(u) \le \epsilon$, where $u^0$ is an optimal solution of the problem (2.1)–(2.4), and $\epsilon$ is a nonnegative number, fixed in advance.

(iv) The set $J_B \subset J$, with $|J_B| = m$, is called a *support* if $\det M_B \ne 0$, where $M_B$ is the submatrix formed by the columns of $M = (A, B)$ indexed by $J_B$.

(v) A pair $\{u, J_B\}$, formed by a feasible solution $u$ and a support $J_B$, is called a *support feasible solution*.

(vi) The support feasible solution $\{u, J_B\}$ is said to be *nondegenerate* if $d_j^- < x_j < d_j^+$ for $j \in J_B \cap J_1$ and $y_j > 0$ for $j \in J_B \cap J_2$.
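Definition 2.1 lends itself to a mechanical check: feasibility is the three constraint groups of (2.2)–(2.4), and a support is an index set whose columns form an invertible square submatrix. A small pure-Python sketch (the data layout and helper names are ours; the determinant is a naive Laplace expansion, adequate only for small examples):

```python
def det(m):
    """Naive Laplace-expansion determinant of a small square matrix."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def is_feasible(A, B, b, x, y, lo, hi, tol=1e-9):
    """Check A x + B y = b, lo <= x <= hi, and y >= 0."""
    for i, bi in enumerate(b):
        lhs = sum(A[i][j] * x[j] for j in range(len(x)))
        lhs += sum(B[i][j] * y[j] for j in range(len(y)))
        if abs(lhs - bi) > tol:
            return False
    return (all(l - tol <= xj <= u + tol for xj, l, u in zip(x, lo, hi))
            and all(yj >= -tol for yj in y))

def is_support(M, cols):
    """cols is a support if the m x m submatrix of M it selects is invertible."""
    sub = [[row[j] for j in cols] for row in M]
    return len(cols) == len(M) and abs(det(sub)) > 1e-12
```

For example, with one constraint $x_1 + x_2 + y_1 = 4$, the point $x = (1, 2)$, $y = (1)$ is feasible for bounds $0 \le x \le (2, 3)$, and any single column with a nonzero entry is a support.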

#### 3. Increment Formula of the Objective Function

Let be a support feasible solution for the problem (2.1)–(2.4), and let us consider any other feasible solution .

We define two subsets and of as follows: The increment of the objective function is as follows:

The potential vector and the estimations vector are defined by Then, the increment formula presents the following final form:

#### 4. Optimality Criterion

Theorem 4.1. *Let be a support feasible solution for the problem (2.1)–(2.4). Then, the following relations (4.1)
are sufficient for the optimality of the feasible solution. They are also necessary if the support feasible solution is nondegenerate.*

*Proof.* *Sufficiency.*

Let be a support feasible solution satisfying the relations (4.1). For any feasible solution of the problem (2.1)–(2.4), the increment formula (3.4) gives the following:

Since , for all , and from the relations (4.1), we have

On the other hand, the condition , implies that
Hence,
The vector is, consequently, an optimal solution of the problem (2.1)–(2.4).

*Necessity.*

Let be a nondegenerate optimal support feasible solution of the problem (2.1)–(2.4) and assume that the relations (4.1) are not satisfied, that is, there exists at least one index such that
We construct another feasible solution , where is a positive real number, and is a direction vector, constructed as follows.

For this, two cases can arise:

(i) if , we set
where is the th column of the matrix ;

(ii) if , we set
where is the th column of the matrix .

From the construction of the direction , the vector satisfies the principal constraint .

In order to be a feasible solution of the problem (2.1)–(2.4), the vector must in addition satisfy the inequalities and , or in its developed form
As the support feasible solution is nondegenerate, we can always find a small positive number such that the relations (4.9) are satisfied. Thus, for a small positive number , we can state that the vector is a feasible solution for the problem (2.1)–(2.4). The increment formula gives in both cases
where
Therefore, we have found another feasible solution with the inequality which contradicts the optimality of the feasible solution . Hence the relations (4.1) are satisfied.

#### 5. The Suboptimality Condition

In order to evaluate the difference between the optimal value and another value for any support feasible solution , when , we use the following formula:
which is called the *suboptimality condition*.

Theorem 5.1 (the suboptimality condition). *Let be a support feasible solution of the problem (2.1)–(2.4) and ε an arbitrary nonnegative number. If and
then the feasible solution is ε-optimal.*

*Proof. *We have
Then, if
we will have
therefore, is ε-optimal.

In the particular case where ε = 0, the feasible solution is consequently optimal.
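In Gabasov-style methods, the suboptimality estimate for a maximization problem is commonly accumulated over the nonbasic variables: a positive estimate is weighted by the distance to the lower bound, a negative one by the distance to the upper bound, and, for a nonnegative variable, a positive estimate is weighted by the distance to zero. The following Python sketch assumes that standard form (after [3]; the function and argument names are illustrative, not this paper's notation):

```python
def suboptimality_estimate(delta, x, lo, hi, y, delta_y):
    """Suboptimality estimate beta over the nonbasic variables (maximization).

    delta, x, lo, hi describe the bounded variables; y, delta_y the
    nonnegative ones. Assumed Gabasov-style form: each term is
    nonnegative, and beta == 0 exactly when the optimality relations hold.
    """
    beta = 0.0
    for dj, xj, lj, uj in zip(delta, x, lo, hi):
        if dj > 0:            # should sit at its lower bound
            beta += dj * (xj - lj)
        elif dj < 0:          # should sit at its upper bound
            beta += dj * (xj - uj)
    for dj, yj in zip(delta_y, y):
        if dj > 0:            # nonnegative variable should sit at zero
            beta += dj * yj
    return beta
```

Stopping as soon as this estimate drops below a prescribed ε is what yields an ε-optimal solution instead of an exact one.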

#### 6. Construction of the Algorithm

Given any nonnegative real number ε and an initial support feasible solution , the aim of the algorithm is to construct an ε-optimal solution or an optimal solution . An iteration of the algorithm consists of moving from to another support feasible solution such that the objective value improves. For this purpose, we construct the new feasible solution as follows: , where is the appropriate direction, and is the step along this direction.

In this algorithm, the simplex metric is chosen. We will thus vary only one component among those which do not satisfy the relations (4.1).

In order to obtain a maximal increment, we must take the step as great as possible and choose the subscript such that with where and are the subsets, respectively, of and , whose subscripts do not satisfy the optimality relations (4.1).

##### 6.1. Computation of the Direction

We have two cases:

(i) if , we set

(ii) if , we will set

##### 6.2. Computation of the Step

The step must be taken as follows: (i)If , then , where with The number will be computed in the following way: where (ii)If , then

where with where

The new feasible solution is
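The maximal feasible step along a direction is a standard ratio test: each component can move only until it hits one of its bounds, and the step is the smallest such limit. A Python sketch with illustrative names (an infinite result corresponds to the unbounded case flagged in Remark 6.1):

```python
import math

def max_step(x, d, lo, hi):
    """Largest theta >= 0 with lo <= x + theta * d <= hi (ratio test).

    Returns math.inf when no component blocks the move, i.e. the
    objective is unbounded along d.
    """
    theta = math.inf
    for xj, dj, lj, uj in zip(x, d, lo, hi):
        if dj > 1e-12:        # moving up: the upper bound blocks
            theta = min(theta, (uj - xj) / dj)
        elif dj < -1e-12:     # moving down: the lower bound blocks
            theta = min(theta, (lj - xj) / dj)
    return theta
```

A nonnegative variable is covered by the same test with `lo = 0` and `hi = math.inf` for that component.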

##### 6.3. The New Suboptimality Condition

Let us calculate the suboptimality condition of the new support feasible solution in the case of . We have (i)If , then the components , for , are equal to and the components are Hence, (ii)If , then the components , for , are equal to

and the components , for , are

Hence, In both cases, we will have with .

##### 6.4. Changing the Support

If , then the feasible solution is ε-optimal and we can stop the algorithm; otherwise, we will change the support as follows: (i) if , then , , ; (ii) if , then , , .

Then we start a new iteration with the new support feasible solution , where the support satisfies the algebraic condition

*Remark 6.1.* The case of an infinite step may happen only if , and . In such a case, the objective function is unbounded with respect to .

#### 7. Algorithm

Let ε be any nonnegative real number and an initial support feasible solution be given. The steps of the algorithm are as follows.

(1) Compute the estimations vector:

(2) Optimality test of the support feasible solution :
(i) if , then
(a) calculate the value of suboptimality ,
(b) if , the process is stopped with as an optimal support solution,
(c) if , the process is stopped with as an ε-optimal support solution,
(d) if , go to (3);
(ii) if , go directly to (3).

(3) Change the feasible solution by : and .
(i) Choose a subscript .
(ii) Compute the appropriate direction .
(iii) Compute the step :
(a) if , then the objective function is unbounded with respect to and the process is stopped;
(b) otherwise, compute .

(4) Optimality test of the new feasible solution :
(i) if , then
(a) calculate ,
(b) if , the process is stopped with as an optimal support solution,
(c) if , the process is stopped with as an ε-optimal support solution,
(d) otherwise, go to (5);
(ii) if , then go to (5).

(5) Change the support by :
(i) if , then
(ii) if , then two cases can arise:
(a) case where :
if , then
if , then
(b) case where :
if , then
if , then
(iii) go to (1) with the new support feasible solution , where and .

#### 8. Numerical Example

For the sake of clarity, let us illustrate the theoretical development of the method by considering the following linear program: where and .

We define , , , and .

Let be an initial feasible solution of the problem. We set Let .

Thus, we have an initial support feasible solution with

*First Iteration*

Let us calculate

*Choice of :*

Among the nonoptimal indices , is chosen such that is maximal; we then have .

*Computation of :*

Hence

*Computation of :*

The maximal step is then

*Computation of :*

*Change the support:*

*Second Iteration*

We have

We compute

*Computation of :*

Then, is not optimal.

*Choice of :*

As the set of nonoptimal indices is , we have .

*Computation of :*

Hence, .

*Computation of :*

The maximal step is thus .

Consequently, the support remains unchanged:

*Computation of :*

*Computation of :*
Then, the vector is an optimal solution and the maximal value of the objective function is .

#### 9. Conclusion

The necessity of developing the method presented in this paper arose during a more complex optimization scheme involving the resolution of a multicriteria decision problem [2]. Indeed, an efficiency test of nonbasic variables has to be executed several times along the resolution process, and this test requires solving a monocriterion program with two kinds of variables: upper and lower bounded variables and nonnegative ones. This kind of linear model also appears as a subproblem in quadratic programming [4] and in optimal control, for example. In these cases, the use of the simplex method is not suitable, since the transformed problems are often degenerate. Another particularity of our method is that it uses a suboptimality criterion which can stop the algorithm at a desired precision. It is effective, fast, simple, and reduces the time of the whole optimization process.

#### References

- K. Ait Yahia and F. Benkerrou, *Méthodes d'aide à la planification de la production au niveau de la laiterie de Djurdjura (Ifri)*, Mémoire d'Ingéniorat d'Etat, Université de Béjaïa, 2001.
- S. Radjef, *Sur la programmation linéaire multi-objectifs*, Mémoire de Magister, Université de Béjaïa, 2001.
- R. Gabasov et al., *Constructive Methods of Optimization*, P.I. University Press, Minsk, Belarus, 1984.
- B. Brahmi and M. O. Bibi, “Dual support method for solving convex quadratic programs,” *Optimization*, vol. 59, no. 6, pp. 851–872, 2010.