Advances in Operations Research


Research Article | Open Access

Volume 2015 | Article ID 487271 | 7 pages | https://doi.org/10.1155/2015/487271

Novel Interior Point Algorithms for Solving Nonlinear Convex Optimization Problems

Academic Editor: Ching-Jong Liao
Received: 13 Apr 2015
Revised: 26 Aug 2015
Accepted: 30 Aug 2015
Published: 16 Sep 2015

Abstract

This paper proposes three numerical algorithms based on Karmarkar's interior point technique for solving nonlinear convex programming problems subject to linear constraints. The first algorithm uses the Karmarkar idea together with linearization of the objective function. The second and third algorithms are modifications of the first algorithm using the Schrijver and Malek-Naseri approaches, respectively. These three novel schemes are tested against the algorithm of Kebbiche-Keraghel-Yassine (KKY). It is shown that the three novel algorithms are more efficient and converge to the correct optimal solution, while the KKY algorithm fails in some cases. Numerical results are given to illustrate the performance of the proposed algorithms.

1. Introduction

The simplex method had no serious competition until 1984, when Karmarkar proposed a new polynomial-time algorithm for solving linear programming problems, especially large scale problems [1, 2]. The polynomial complexity of Karmarkar's algorithm is an advantage over the exponential worst-case complexity of the simplex algorithm [3–5]. The improvement of Karmarkar's algorithm by Schrijver [6, 7] requires fewer iterations than Karmarkar's method. In 2004, Malek and Naseri [8] proposed a modified technique based on the interior point algorithm to solve linear programming problems more efficiently.

After the appearance of Karmarkar's algorithm for solving linear programming problems, researchers extended the algorithm to solve convex quadratic programming problems (see, e.g., [9]). Given the success of interior point methods for solving linear programming problems [10, 11], researchers have used linearization methods to solve convex nonlinear programming problems. In 2007, Kebbiche et al. [12] introduced a projective interior point method to solve more general problems with a nonlinear convex objective function. The objective of this paper is to propose an optimal step length in each iteration, in combination with Karmarkar's algorithm, so as to decrease the nonlinear objective function as fast as possible.

In Section 2, the extension of Karmarkar's algorithm to nonlinear programming problems is considered. In Section 3, starting from the technique of the KKY algorithm, a modified algorithm is presented. In Section 4, the convergence of the modified algorithm of Section 3 is proved; this algorithm is then combined with Schrijver's and Malek-Naseri's algorithms. In Section 5, numerical results are presented for the KKY algorithm, and the three suggested modified algorithms are compared with each other.

2. Extension of Karmarkar’s Algorithm for Nonlinear Problems

Consider the following nonlinear optimization problem:

$$\min\ f(x) \quad \text{s.t.}\quad Ax = 0,\ e^{T}x = n,\ x \ge 0, \qquad (1)$$

where $f:\mathbb{R}^{n}\to\mathbb{R}$ is a nonlinear, convex, and differentiable function, $A$ is an $m\times n$ matrix of rank $m$, and $x$ is an $n$-vector. The starting point $x^{0}=e$ is chosen to be feasible. $I$ is the identity matrix of order $n$ and $e^{T}=(1,\dots,1)$ is a row vector of ones.

Assume $x^{0}=e$, the simplex $S=\{x\in\mathbb{R}^{n}: x\ge 0,\ e^{T}x=n\}$, and the projective transformation of Karmarkar $T_{k}$ defined by $T_{k}(x)=y$ such that $y = nD_{k}^{-1}x/(e^{T}D_{k}^{-1}x)$, where $D_{k}=\operatorname{diag}(x^{k})$. Consider $f_{k}(y)=f(D_{k}y)$, where $f_{k}$ is a nonlinear, convex, and differentiable function and the optimal value of $f$ is zero. Applying the linearization process to the function $f_{k}$ in the neighborhood of the ball of center $e$, namely $f_{k}(y)\approx f_{k}(e)+\nabla f_{k}(e)^{T}(y-e)$, along with the Karmarkar projective transformation [12], we conclude that Problem (1) is equivalent to

$$\min\ \nabla f_{k}(e)^{T}y \quad \text{s.t.}\quad AD_{k}y=0,\ e^{T}y=n,\ y\ge 0. \qquad (2)$$
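
To make the change of variables concrete, the following Python sketch (our own illustration, not code from the paper; the function names are ours) implements the projective transformation and its inverse on the simplex $S$:

    import numpy as np

    def T(x_k, x):
        # Karmarkar projective transformation: y = n * D_k^{-1} x / (e^T D_k^{-1} x),
        # with D_k = diag(x_k); it maps the current iterate x_k to the center e.
        n = len(x_k)
        z = x / x_k                    # D_k^{-1} x
        return n * z / z.sum()

    def T_inv(x_k, y):
        # Inverse transformation: x = n * D_k y / (e^T D_k y).
        n = len(x_k)
        z = x_k * y                    # D_k y
        return n * z / z.sum()

    x_k = np.array([2.0, 1.0, 1.0])    # a point of S (components sum to n = 3)
    print(T(x_k, x_k))                 # -> [1. 1. 1.], the center e
    print(T_inv(x_k, T(x_k, x_k)))     # -> x_k recovered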

As a result, the optimal solution of the preceding problem lies along the negative projection of the gradient and is given as $y^{k+1}=e-\alpha\,P_{k}/\lVert P_{k}\rVert$, where $e$ is the center of the simplex $S$. $P_{k}$ is the projected gradient, which can be shown to be $P_{k}=\bigl[I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}\bigr]D_{k}\nabla f(x^{k})$, where $B_{k}=\begin{pmatrix}AD_{k}\\ e^{T}\end{pmatrix}$, $D_{k}=\operatorname{diag}(x^{k})$, and $k$ is the number of iterations. The selection of $\alpha$ as a step length is crucial to the efficiency of the algorithm.

The function $f_{k}(y)=f(D_{k}y)$ on $S$ is convex, since the function $f$ is convex. The optimal solution of Problem (2) is given by $y^{k+1}=e-\alpha\,P_{k}/\lVert P_{k}\rVert$ [12].

Consider the algorithm of Kebbiche-Keraghel-Yassine (KKY) for solving Problem (2).

KKY Algorithm. Let $\varepsilon>0$ be a given tolerance and let $x^{0}=e$.

Step 1. Compute $f(x^{0})$ and $\nabla f(x^{0})$. Put $k=0$.

Step 2. Build $D_{k}=\operatorname{diag}(x^{k})$ and $B_{k}=\begin{pmatrix}AD_{k}\\ e^{T}\end{pmatrix}$.
Compute $P_{k}=\bigl[I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}\bigr]D_{k}\nabla f(x^{k})$, $y^{k+1}=e-\alpha\,P_{k}/\lVert P_{k}\rVert$, and $x^{k+1}=nD_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.

Step 3. While $\lvert f(x^{k+1})-f(x^{k})\rvert>\varepsilon$, let $k=k+1$, and go to Step 2.
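
In code, one pass of this scheme looks as follows; this is a minimal Python sketch of our reading of the algorithm under the notation above (the paper's own experiments use MATLAB), with a fixed step length $\alpha$:

    import numpy as np

    def kky(A, f, grad_f, x0, alpha, eps=1e-4, max_iter=1000):
        # KKY scheme: linearize f at the simplex center in the transformed
        # space, step against the projected gradient, and map back.
        n = len(x0)
        e = np.ones(n)
        x = x0
        for _ in range(max_iter):
            D = np.diag(x)
            B = np.vstack([A @ D, e])                       # B_k = [A D_k; e^T]
            g = D @ grad_f(x)                               # D_k grad f(x^k)
            P = g - B.T @ np.linalg.solve(B @ B.T, B @ g)   # projected gradient P_k
            y = e - alpha * P / np.linalg.norm(P)           # fixed step from the center
            x_new = n * (D @ y) / np.sum(D @ y)             # x^{k+1} = T_k^{-1}(y^{k+1})
            if abs(f(x_new) - f(x)) <= eps:                 # stopping test of Step 3
                return x_new
            x = x_new
        return x

Note that nothing in the fixed-step update forces $y^{k+1}$ (and hence $x^{k+1}$) to remain nonnegative, which is the source of the failures reported next.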

However, as we can see in the following two examples, the KKY algorithm does not work for every nonlinear problem.

Example 1. Consider the quadratic convex problem:

For a given tolerance $\varepsilon$, after a number of iterations of the KKY algorithm the stopping condition $\lvert f(x^{k+1})-f(x^{k})\rvert\le\varepsilon$ holds and the algorithm stops, whereas the computed solution is not feasible.

Example 2. Consider the nonlinear convex problem: for the given tolerance, after a number of iterations of the KKY algorithm the stopping test is again satisfied, but the computed solution is not feasible.

Here, two difficulties arise. First, $x^{k}$ must stay feasible in each iteration. Second, the required tolerance must be reached in a moderate time. To overcome these difficulties we modify the KKY algorithm in the next section.

3. Modified Algorithm

3.1. Modifications

In the KKY [12] algorithm the step length $\alpha$ is used directly in the update $y^{k+1}=e-\alpha\,P_{k}/\lVert P_{k}\rVert$. As observed in Examples 1 and 2, for such values of $\alpha$ the KKY algorithm may produce infeasible solutions. In the standard Karmarkar algorithm [1, 3, 4], $\alpha r$ with $0<\alpha<1$ and $r=\sqrt{n/(n-1)}$ (the radius of the largest ball of center $e$ inscribed in $S$) is used instead of $\alpha$. Hence the solution in each iteration remains feasible. In this case, the optimal solution for Problem (2) is given by $y^{k+1}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$, where $\lVert y^{k+1}-e\rVert\le\alpha r$. Applying the KKY algorithm with step length $\alpha r$ to the problem in Example 2 gives a feasible solution; the algorithm stops once $\lvert f(x^{k+1})-f(x^{k})\rvert\le\varepsilon$, while the suitable accuracy is not reached. Therefore, in each iteration a line search method is used to find $\alpha$ such that the objective function decreases and the suitable tolerance is satisfied. Thus one may write Problem (2) in the following form:

$$\min\ \nabla f_{k}(e)^{T}y \quad \text{s.t.}\quad AD_{k}y=0,\ e^{T}y=n,\ \lVert y-e\rVert\le\alpha r. \qquad (5)$$

Lemma 3. The optimal solution for Problem (5) is given by $y^{*}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$.

Proof. We put $z=y-e$; then $AD_{k}z=AD_{k}y-Ax^{k}=AD_{k}y$ and $e^{T}z=e^{T}y-n$, and Problem (5) is equivalent to

$$\min\ \nabla f_{k}(e)^{T}z \quad \text{s.t.}\quad AD_{k}z=0,\ e^{T}z=0,\ \lVert z\rVert\le\alpha r. \qquad (6)$$

$z^{*}$ is a solution of Problem (6) if and only if there exist multipliers $u\in\mathbb{R}^{m}$, $v\in\mathbb{R}$, and $\lambda\ge 0$ such that

$$D_{k}\nabla f(x^{k}) + (AD_{k})^{T}u + ve + 2\lambda z^{*} = 0, \qquad \lVert z^{*}\rVert=\alpha r. \qquad (7)$$

Multiplying both sides of (7) by the projection matrix $P=I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}$, and since $PB_{k}^{T}=0$ and $Pz^{*}=z^{*}$ (because $B_{k}z^{*}=0$), we get $P_{k}+2\lambda z^{*}=0$. Then $z^{*}=-P_{k}/(2\lambda)$; by substituting in (7) and taking norms we find $2\lambda=\lVert P_{k}\rVert/(\alpha r)$. By assuming $\lambda>0$, we have $z^{*}=-\alpha r\,P_{k}/\lVert P_{k}\rVert$. And we have $y^{*}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$.
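
As a sanity check on Lemma 3, the short Python snippet below (our illustration; the data is random, with $A$ constructed so that $Ax^{k}=0$, and $r=\sqrt{n/(n-1)}$ as above) verifies that the closed-form point satisfies the constraints of Problem (5) with the ball constraint active:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 6, 2
    x_k = rng.uniform(0.5, 1.5, n)
    x_k *= n / x_k.sum()                            # e^T x_k = n, x_k > 0
    M = rng.normal(size=(m, n))
    A = M - np.outer(M @ x_k, x_k) / (x_k @ x_k)    # rows orthogonal to x_k: A x_k = 0

    D = np.diag(x_k)
    B = np.vstack([A @ D, np.ones(n)])
    g = D @ rng.normal(size=n)                      # stand-in for D_k grad f(x^k)
    P = g - B.T @ np.linalg.solve(B @ B.T, B @ g)

    alpha, r = 0.25, np.sqrt(n / (n - 1))
    y = np.ones(n) - alpha * r * P / np.linalg.norm(P)

    print(np.allclose(A @ D @ y, 0))                               # A D_k y = 0
    print(np.isclose(y.sum(), n))                                  # e^T y = n
    print(np.isclose(np.linalg.norm(y - np.ones(n)), alpha * r))   # ||y - e|| = alpha r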

Note that here we have proposed an algorithm similar to KKY, with step length $\alpha r$ in place of $\alpha$. This modified algorithm has the advantage that it finds a feasible approximate solution within the suitable tolerance.

3.2. Modified KKY Algorithm (MKKY)

Let $\varepsilon>0$ be a given tolerance and let $x^{0}=e$ be a strictly feasible point.

Step 1. Compute $f(x^{0})$, $\nabla f(x^{0})$, and $r=\sqrt{n/(n-1)}$. Put $k=0$ and $\alpha\in(0,1)$.

Step 2. Build $D_{k}=\operatorname{diag}(x^{k})$ and $B_{k}=\begin{pmatrix}AD_{k}\\ e^{T}\end{pmatrix}$.
Compute $P_{k}=\bigl[I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}\bigr]D_{k}\nabla f(x^{k})$, $y^{k+1}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$, and $x^{k+1}=nD_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.

Step 3. While $f(x^{k+1})\ge f(x^{k})$, put $\alpha=\alpha/2$.
Build $D_{k}$ and $B_{k}$.
Compute $P_{k}$ and $y^{k+1}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$.
Then $x^{k+1}=nD_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.
Let $x^{k+1}$ be the current iterate, and go to Step 3.

Step 4. While $\lvert f(x^{k+1})-f(x^{k})\rvert>\varepsilon$, compute $D_{k+1}$, $B_{k+1}$, and $P_{k+1}$.

Step 5. Let $k=k+1$, and go to Step 3.
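
The steps above leave the line search implicit; the following Python sketch makes the whole loop explicit. It is our reading of the algorithm, not the authors' MATLAB code, and the halving rule for $\alpha$ is our choice of line search:

    import numpy as np

    def mkky(A, f, grad_f, x0, alpha0=0.25, eps=1e-7, max_iter=10000):
        # Modified KKY: the step alpha*r keeps the iterate strictly feasible
        # (alpha0 <= 1), and alpha is halved until f actually decreases.
        n = len(x0)
        e = np.ones(n)
        r = np.sqrt(n / (n - 1))       # inradius of {x >= 0, e^T x = n}
        x = x0
        for _ in range(max_iter):
            D = np.diag(x)
            B = np.vstack([A @ D, e])  # B_k = [A D_k; e^T]
            g = D @ grad_f(x)
            P = g - B.T @ np.linalg.solve(B @ B.T, B @ g)   # projected gradient
            nP = np.linalg.norm(P)
            if nP < 1e-15:             # stationary point: nothing left to project
                return x
            alpha = alpha0
            while True:                # line search: shrink alpha until f decreases
                y = e - alpha * r * P / nP
                x_new = n * (D @ y) / np.sum(D @ y)         # back via T_k^{-1}
                if f(x_new) < f(x) or alpha < 1e-12:
                    break
                alpha /= 2
            if abs(f(x_new) - f(x)) <= eps:                 # stopping test
                return x_new
            x = x_new
        return x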

4. Convergence of the Modified Algorithm (MKKY)

In order to establish the convergence of the modified algorithm, we introduce a potential function associated with Problem (1), defined by

$$\varphi(x)=\sum_{j=1}^{n}\ln\frac{f(x)-z^{*}}{x_{j}}=n\ln\bigl(f(x)-z^{*}\bigr)-\sum_{j=1}^{n}\ln x_{j},$$

where $z^{*}$ is the optimal value of the objective function.

Lemma 4. If $y^{k+1}$ is the optimal solution of Problem (5), then $\nabla f_{k}(e)^{T}y^{k+1}<\nabla f_{k}(e)^{T}e$.

Proof. Since $y^{k+1}=e-\alpha r\,P_{k}/\lVert P_{k}\rVert$ and $\nabla f_{k}(e)^{T}P_{k}=\lVert P_{k}\rVert^{2}$, then

$$\nabla f_{k}(e)^{T}y^{k+1}-\nabla f_{k}(e)^{T}e=-\alpha r\,\frac{\nabla f_{k}(e)^{T}P_{k}}{\lVert P_{k}\rVert}=-\alpha r\lVert P_{k}\rVert<0.$$

Thus $\nabla f_{k}(e)^{T}y^{k+1}<\nabla f_{k}(e)^{T}e$ and a reduction is obtained in each iteration.

Lemma 5. If $y^{k+1}$ is the optimal solution of Problem (5), then

$$\nabla f_{k}(e)^{T}y^{k+1}-\nabla f_{k}(e)^{T}\bar{y}\le\Bigl(1-\frac{\alpha r}{R}\Bigr)\bigl(\nabla f_{k}(e)^{T}e-\nabla f_{k}(e)^{T}\bar{y}\bigr),$$

where $\bar{y}=T_{k}(\bar{x})$ for the optimal solution $\bar{x}$ of Problem (1) and $R=\sqrt{n(n-1)}$ is the radius of the smallest ball of center $e$ containing the simplex $S$.

Proof. Let $\bar{x}$ be the optimal solution of Problem (1); we can write $\bar{y}=T_{k}(\bar{x})$. $B(e,\alpha r)$ is the ball of center $e$ with radius $\alpha r$. There are two cases:
(i) If $\bar{y}\in B(e,\alpha r)$, then $\bar{y}$ is feasible for Problem (5), so $\nabla f_{k}(e)^{T}y^{k+1}\le\nabla f_{k}(e)^{T}\bar{y}$ and Lemma 5 holds.
(ii) If $\bar{y}\notin B(e,\alpha r)$, since the feasible set is convex, the intersection point of the boundary of $B(e,\alpha r)$ and the line segment between $e$ and $\bar{y}$ is feasible for Problem (5). Let $\hat{y}$ be this intersection point; then $\hat{y}$ satisfies $\hat{y}=(1-\lambda)e+\lambda\bar{y}$ for some $\lambda\in(0,1]$, and $\lVert\hat{y}-e\rVert=\alpha r$. Thus,
$$\lambda\lVert\bar{y}-e\rVert=\alpha r.$$
Hence, since $\lVert\bar{y}-e\rVert\le R$, we have
$$\lambda=\frac{\alpha r}{\lVert\bar{y}-e\rVert}\ge\frac{\alpha r}{R}.$$
Thus, since $y^{k+1}$ is optimal for Problem (5) and $\hat{y}$ is feasible,
$$\nabla f_{k}(e)^{T}y^{k+1}\le\nabla f_{k}(e)^{T}\hat{y}=(1-\lambda)\nabla f_{k}(e)^{T}e+\lambda\nabla f_{k}(e)^{T}\bar{y}.$$
Subtracting $\nabla f_{k}(e)^{T}\bar{y}$ from both sides and substituting the bound on $\lambda$ in the above inequality, we have
$$\nabla f_{k}(e)^{T}y^{k+1}-\nabla f_{k}(e)^{T}\bar{y}\le(1-\lambda)\bigl(\nabla f_{k}(e)^{T}e-\nabla f_{k}(e)^{T}\bar{y}\bigr)\le\Bigl(1-\frac{\alpha r}{R}\Bigr)\bigl(\nabla f_{k}(e)^{T}e-\nabla f_{k}(e)^{T}\bar{y}\bigr),$$
where $\nabla f_{k}(e)^{T}e-\nabla f_{k}(e)^{T}\bar{y}\ge 0$ because $\bar{y}$ is feasible for the linearized problem. Furthermore, since $f_{k}$ is convex, $f_{k}(y^{k+1})\le f_{k}(e)+\nabla f_{k}(e)^{T}(y^{k+1}-e)$; then the same relative reduction carries over to $f_{k}$.

Theorem 6. In every iteration of the algorithm MKKY, the potential function is reduced by a constant value $\delta$ such that $\varphi(x^{k+1})\le\varphi(x^{k})-\delta$.

Proof. Consider
$$\varphi(x^{k+1})-\varphi(x^{k})=n\ln\frac{f(x^{k+1})-z^{*}}{f(x^{k})-z^{*}}-\sum_{j=1}^{n}\ln y_{j}^{k+1},$$
where the first term is controlled by Lemma 5. We used the result demonstrated by Karmarkar [1]:
$$-\sum_{j=1}^{n}\ln y_{j}^{k+1}\le\frac{\alpha^{2}}{2(1-\alpha)}.$$
Then $\varphi(x^{k+1})\le\varphi(x^{k})-\delta$, where $\delta=\alpha-\frac{\alpha^{2}}{2(1-\alpha)}$.
If $\alpha=1/4$, then $\delta\ge 1/8$.
Therefore the potential function is reduced by at least the constant $\delta$ in each iteration.

Theorem 7. If $x^{0}=e$ is a feasible solution of Problem (1) and $x^{*}$ is the optimal solution with optimal value $z^{*}$, suppose the following assumptions hold: (1) $f(x^{0})-z^{*}\le 2^{L}$; (2) for any feasible solution $x$ with $f(x)-z^{*}\le 2^{-L}$, an exact optimal solution can be recovered.
Then the algorithm MKKY finds an optimal solution after $O(nL)$ iterations, where $L$ is the number of bits of the input data.

Proof. Consider
$$\varphi(x^{k})=n\ln\bigl(f(x^{k})-z^{*}\bigr)-\sum_{j=1}^{n}\ln x_{j}^{k},$$
where $\sum_{j=1}^{n}\ln x_{j}^{k}\le n\ln\bigl(e^{T}x^{k}/n\bigr)=0$ by the arithmetic-geometric mean inequality. By assumptions (1) and (2) we have, in the feasible region,
$$n\ln\bigl(f(x^{k})-z^{*}\bigr)\le\varphi(x^{k}).$$
According to Theorem 6, after $k$ iterations we have $\varphi(x^{k})\le\varphi(x^{0})-k\delta$. Thus,
$$n\ln\bigl(f(x^{k})-z^{*}\bigr)\le\varphi(x^{0})-k\delta\le n\ln\bigl(f(x^{0})-z^{*}\bigr)-k\delta\le nL\ln 2-k\delta.$$
Therefore, for $k\ge 2nL\ln 2/\delta=O(nL)$ we get $f(x^{k})-z^{*}\le 2^{-L}$, and by assumption (2) an optimal solution is obtained.

In the next section the MKKY algorithm is combined with the algorithm of Schrijver [6, 7] and Malek-Naseri's algorithm [8] to propose novel hybrid algorithms called Sch-MKKY (Schrijver-Modified Kebbiche-Keraghel-Yassine) and MN-MKKY (Malek-Naseri-Modified Kebbiche-Keraghel-Yassine). These algorithms differ from MKKY in the use of an optimal step length in each iteration.

4.1. Hybrid Algorithms

Let the step length in the modified algorithm be written as $\alpha r$, as in Section 3. In the Sch-MKKY and MN-MKKY algorithms the step length $\alpha$ is chosen according to Schrijver [6, 7] and to Malek and Naseri [8], respectively. It is easy to prove that the theorems in Section 4 hold for the Sch-MKKY and MN-MKKY algorithms; thus, with these step lengths, convergence is guaranteed.
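
Structurally, the two hybrids reuse the MKKY loop and differ only in the rule that produces $\alpha$, so a pluggable design is natural. The Python sketch below is our own illustration: `alpha_rule` is a hypothetical interface, and the actual step-length formulas are those given in [6, 7] and [8], not reproduced here.

    import numpy as np

    def hybrid_mkky(A, f, grad_f, x0, alpha_rule, eps=1e-7, max_iter=10000):
        # Same loop as mkky() above; only the step-length rule changes.
        n = len(x0)
        e = np.ones(n)
        r = np.sqrt(n / (n - 1))
        x = x0
        for k in range(max_iter):
            D = np.diag(x)
            B = np.vstack([A @ D, e])
            g = D @ grad_f(x)
            P = g - B.T @ np.linalg.solve(B @ B.T, B @ g)
            alpha = alpha_rule(n, k, P)        # Sch-MKKY / MN-MKKY plug in here
            y = e - alpha * r * P / np.linalg.norm(P)
            x_new = n * (D @ y) / np.sum(D @ y)
            if abs(f(x_new) - f(x)) <= eps:
                return x_new
            x = x_new
        return x

    # The plain MKKY choice corresponds to a constant rule, e.g.:
    mkky_rule = lambda n, k, P: 0.25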

5. Numerical Results

It is observed that the approximate solution from the KKY algorithm is not feasible in some instances (see Examples 8, 9, and 11 in Table 1). All programs have been written in MATLAB 7.0.4 with the given tolerance. The computed solutions for KKY, MKKY, Sch-MKKY, and MN-MKKY are given in Table 1.


Table 1: Computed solutions for the four algorithms (each block lists the components of the computed solution vector).

Examples      KKY              MKKY             Sch-MKKY         MN-MKKY

Example 8      1.1176285677     0.7993104755     0.7992885091     0.7984677572
               0.8925922545     1.2006283419     1.2006478942     1.2013756245
              −0.0102208222     0.0000611826     0.0000635969     0.0001566183
               1.3324440588     0.3980537916     0.3979927208     0.395765081

Example 9     −5.2199797446     0.7514629509     0.7514297188     0.7514274997
              −1.8871925195     0.0000032699     0.0000065967     0.0000068198
               0.4463304312     0.0000010511     0.0000021234     0.0000022020
               0.3791651531     1.0175937272     1.0172339995     1.0172101531
              −0.0087312913     0.0007335551     0.0007190583     0.0007180983
               0.5536695688     0.9999989489     0.9999978766     0.9999977980
               0.6232230772     0.0000006675     0.0000013472     0.0000013940

Example 10     0.0000000025     0.0000000156     0.0000000055     0.0000000080
               1.9999999829     1.9999999858     1.9999999749     1.9999999586
               0.0000000025     0.0000000156     0.0000000055     0.0000000080

Example 11     0.3124560511     0.7142857707     0.7142857350     0.7142857453
               0.3438507936     0.1428572168     0.1428571684     0.1428571600
               0.0001576431     0.0000002101     0.0000000702     0.0000000498
              −1.4064826279     0.0000001087     0.0000000363     0.0000000258

Example 8. Consider the quadratic convex problem:

As shown in Table 1, the KKY algorithm does not converge to the correct solution, while the computed solutions of MKKY and of the two hybrid algorithms are feasible. Example 8 is also solved using fmincon from MATLAB 7.0.4, and the absolute difference between the objective function values is reported as Error. In Table 2 the number of iterations, solution norms, optimal values of the objective function, Error, and the elapsed time for each algorithm are given.
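
A reader without MATLAB can reproduce the same kind of cross-check with SciPy. In the sketch below the objective and constraint matrix are placeholders (Example 8's data is not reproduced in this text), so it shows the method of comparison rather than the paper's numbers:

    import numpy as np
    from scipy.optimize import LinearConstraint, minimize

    n = 4
    def f(x):
        return float(x @ x)                    # placeholder convex objective

    A = np.array([[1.0, -1.0, 0.0, 0.0]])      # placeholder constraint matrix
    constraints = [LinearConstraint(A, 0.0, 0.0),                 # A x = 0
                   LinearConstraint(np.ones((1, n)), n, n)]       # e^T x = n
    res = minimize(f, np.ones(n), method="SLSQP",
                   bounds=[(0.0, None)] * n, constraints=constraints)
    print(res.x, res.fun)
    # The "Error" column of Table 2 is |res.fun - f(x_alg)| for each
    # algorithm's computed solution x_alg.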


Table 2: Number of iterations, norm of the computed solution, objective value, Error, and CPU time for each algorithm.

Examples      Algorithm    Iterations    Norm of solution    Objective value    Error    CPU time (sec)

Example 8     KKY          (infeasible)
              MKKY         549           1.4962795449        −7.1998265727                182.8281
              Sch-MKKY     515           1.4962672544        −7.1998196616                153.4063
              MN-MKKY      187           1.4958093895        −7.1995511207                 83.5313

Example 9     KKY          (infeasible)
              MKKY         181           1.6125110840        −1.3693639744                213.1250
              Sch-MKKY      87           1.6122679375        −1.3693475555                122.4229
              MN-MKKY       67           1.6122518087        −1.3693463904                 79.1875

Example 10    KKY           17           1.9999999829        −3.9999999317                 11.4844
              MKKY          80           1.9999999858        −3.9999999433                 46.5469
              Sch-MKKY      37           1.9999999749        −3.9999998994                 22.5625
              MN-MKKY       15           1.9999999586        −3.9999998343                 10.7031

Example 11    KKY          (infeasible)
              MKKY          61           0.7284314289        −0.2039626101                 35.7188
              Sch-MKKY      30           0.7284313844        −0.2039626256                 18.4531
              MN-MKKY       12           0.7284313929        −0.2039626212                  9.0781

Table 2 shows that the MN-MKKY algorithm is more efficient than the other algorithms in terms of both the number of iterations and the elapsed time.

In Figure 1, the number of iterations of the different algorithms is compared for various tolerances. It is observed that the MN-MKKY algorithm has the best performance for each required tolerance.

Example 9. Consider Example 2 again. Table 2 shows the computed solutions for the four algorithms. From Figure 1 it is clear that the MN-MKKY algorithm is the best for this example.

Example 10. Consider the quadratic convex problem:

Example 11. Consider the nonlinear convex problem:

6. Conclusion

Keeping two ideas in mind, (i) calculation of a feasible solution in each iteration and (ii) the fact that the objective function value must decrease in each iteration within the fixed desired tolerance, this paper proposed three hybrid algorithms for solving nonlinear convex programming problems, based on the interior point idea and using the various step lengths $\alpha$ of the Karmarkar, Schrijver, and Malek-Naseri techniques. These methods perform better than the standard Karmarkar algorithm, since in the latter one may not check the feasibility of the solution in each iteration.

Our numerical simulation shows that the MN-MKKY algorithm has the best performance among these algorithms. It requires fewer iterations to solve general nonlinear optimization problems with linear constraints, since it uses the step length of Malek-Naseri type.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
  2. H. Navidi, A. Malek, and P. Khosravi, “Efficient hybrid algorithm for solving large scale constrained linear programming problems,” Journal of Applied Sciences, vol. 9, no. 18, pp. 3402–3406, 2009.
  3. M. S. Bazaraa, J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, John Wiley & Sons, New York, NY, USA, 1984.
  4. H. A. Taha, Operations Research, Macmillan, New York, NY, USA, 1992.
  5. R. M. Karp, “George Dantzig's impact on the theory of computation,” Discrete Optimization, vol. 5, no. 2, pp. 174–185, 2008.
  6. C. Roos, T. Terlaky, and J. Vial, Theory and Algorithms for Linear Optimization, Princeton University, 2001.
  7. A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, NY, USA, 1986.
  8. A. Malek and R. Naseri, “A new fast algorithm based on Karmarkar's gradient projected method for solving linear programming problem,” Advanced Modeling and Optimization, vol. 6, no. 2, pp. 43–51, 2004.
  9. E. Tse and Y. Ye, “An extension of Karmarkar's projective algorithm for convex quadratic programming,” Mathematical Programming, vol. 44, no. 2, pp. 157–179, 1989.
  10. Z. Kebbiche and D. Benterki, “A weighted path-following method for linearly constrained convex programming,” Revue Roumaine de Mathématiques Pures et Appliquées, vol. 57, no. 3, pp. 245–256, 2012.
  11. P. Fei and Y. Wang, “A primal infeasible interior point algorithm for linearly constrained convex programming,” Control and Cybernetics, vol. 38, no. 3, pp. 687–704, 2009.
  12. Z. Kebbiche, A. Keraghel, and A. Yassine, “Extension of a projective interior point method for linearly constrained convex programming,” Applied Mathematics and Computation, vol. 193, no. 2, pp. 553–559, 2007.

Copyright © 2015 Sakineh Tahmasebzadeh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

