
Research Article | Open Access

Volume 2015 | Article ID 487271 | 7 pages | https://doi.org/10.1155/2015/487271

# Novel Interior Point Algorithms for Solving Nonlinear Convex Optimization Problems

Revised: 26 Aug 2015
Accepted: 30 Aug 2015
Published: 16 Sep 2015

#### Abstract

This paper proposes three numerical algorithms based on Karmarkar's interior point technique for solving nonlinear convex programming problems subject to linear constraints. The first algorithm uses the Karmarkar idea and linearization of the objective function. The second and third algorithms are modifications of the first algorithm using the Schrijver and Malek-Naseri approaches, respectively. These three novel schemes are tested against the algorithm of Kebbiche-Keraghel-Yassine (KKY). It is shown that the three novel algorithms are more efficient and converge to the correct optimal solution, while the KKY algorithm fails in some cases. Numerical results are given to illustrate the performance of the proposed algorithms.

#### 1. Introduction

The simplex method had no serious competition until 1984, when Karmarkar proposed a new polynomial-time algorithm to solve linear programming problems, especially large scale problems [1, 2]. The polynomial complexity of Karmarkar's algorithm is an advantage in comparison with the exponential complexity of the simplex algorithm. The improvement of Karmarkar's algorithm by Schrijver [6, 7] resulted in fewer iterations compared to Karmarkar's method. In 2004, Malek and Naseri [8] proposed a modified technique based on the interior point algorithm to solve linear programming problems more efficiently.

After the appearance of Karmarkar's algorithm for solving linear programming problems, researchers extended the algorithm to solve the convex quadratic programming problem (e.g., see [9]). Considering the success of interior methods for solving linear programming problems [10, 11], researchers used linearization methods to solve convex nonlinear programming problems. In 2007, Kebbiche et al. [12] introduced a projective interior point method to solve more general problems with a nonlinear convex objective function. The objective of this paper is to propose an optimal step length in each iteration, in combination with Karmarkar's algorithm, in order to decrease the nonlinear objective function as fast as possible.

In Section 2, the extension of Karmarkar's algorithm to nonlinear programming problems is considered. In Section 3, by considering the technique associated with the KKY algorithm, a modified algorithm is presented. In Section 4, the convergence of the modified algorithm of Section 3 is proved. This algorithm is then combined with Schrijver's and Malek-Naseri's algorithms. In Section 5, numerical results for the KKY algorithm and the three suggested modified algorithms are compared with each other.

#### 2. Extension of Karmarkar’s Algorithm for Nonlinear Problems

Consider the following nonlinear optimization problem:
$$\min f(x) \quad \text{s.t.}\quad Ax=0,\; e^{T}x=1,\; x\ge 0, \tag{1}$$
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is a nonlinear, convex, and differentiable function, $A$ is an $m\times n$ matrix of rank $m$, and $x\in\mathbb{R}^{n}$. The starting point $x^{0}=e/n$ is chosen to be feasible. $I$ is the identity matrix of order $n$ and $e^{T}=(1,\dots,1)$ is a row vector of ones.

Assume $D_{k}=\operatorname{diag}(x^{k})$, the simplex $S_{n}=\{x\in\mathbb{R}^{n}: e^{T}x=1,\ x\ge 0\}$, and the projective transformation of Karmarkar defined by $T_{k}(x)=D_{k}^{-1}x/(e^{T}D_{k}^{-1}x)$ such that $T_{k}(x^{k})=e/n=a$. Consider $g(y)=f\bigl(T_{k}^{-1}(y)\bigr)$, where $g$ is a nonlinear, convex, and differentiable function and the optimal value of $g$ is zero. Using the linearization of $g$ in the neighborhood of the ball of center $a$, $g(y)\approx g(a)+\nabla g(a)^{T}(y-a)$ for $y\in B(a,\alpha r)$, along with the Karmarkar projective transformation $T_{k}$, we conclude that Problem (1) is equivalent to
$$\min \nabla g(a)^{T}y \quad \text{s.t.}\quad AD_{k}y=0,\; e^{T}y=1,\; y\in B(a,\alpha r). \tag{2}$$

As a result, the optimal solution of the preceding problem lies along the negative projection of the gradient and is given as $y^{k+1}=a-\alpha r\,P_{k}/\lVert P_{k}\rVert$, where $a=e/n$ is the center of the simplex $S_{n}$. $P_{k}$ is the projected gradient, which can be shown to be $P_{k}=\bigl(I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}\bigr)D_{k}\nabla f(x^{k})$, where $B_{k}=\begin{bmatrix}AD_{k}\\ e^{T}\end{bmatrix}$, $D_{k}=\operatorname{diag}(x^{k})$, $r=1/\sqrt{n(n-1)}$, and $k$ is the number of iterations. The selection of $\alpha$ as a step length is crucial to enhance the efficiency of the algorithm.

The function $g$ on $S_{n}$ is convex, since the function $f$ on $S_{n}$ is convex. The optimal solution of Problem (2) is given by $x^{k+1}=T_{k}^{-1}(y^{k+1})=D_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.
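As an illustration, the projective transformation and its inverse can be sketched in a few lines of Python. This is a minimal sketch under the standard Karmarkar setup; the function names and NumPy formulation are ours, not the paper's:

```python
import numpy as np

def karmarkar_transform(x, xk):
    """Projective transformation T_k: maps x onto the simplex so that the
    current iterate xk lands on the center a = e/n.
    y = D^{-1} x / (e^T D^{-1} x), with D = diag(xk)."""
    z = x / xk           # D^{-1} x, elementwise since D is diagonal
    return z / z.sum()   # normalize so that e^T y = 1

def karmarkar_inverse(y, xk):
    """Inverse transformation T_k^{-1}: x = D y / (e^T D y)."""
    z = xk * y
    return z / z.sum()
```

By construction `karmarkar_transform(xk, xk)` returns the simplex center `e/n`, and the two maps are mutual inverses on the simplex.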

Consider the algorithm of Kebbiche-Keraghel-Yassine (KKY) for solving Problem (2).

KKY Algorithm. Let $\varepsilon>0$ be a given tolerance and let $x^{0}=a=e/n$.

Step 1. Compute $f(x^{0})$ and $\nabla f(x^{0})$. Put $k=0$.

Step 2. Build $D_{k}=\operatorname{diag}(x^{k})$ and $B_{k}=\begin{bmatrix}AD_{k}\\ e^{T}\end{bmatrix}$.
Compute $P_{k}=\bigl(I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}\bigr)D_{k}\nabla f(x^{k})$, $y^{k+1}=a-\alpha rP_{k}$, and $x^{k+1}=D_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.

Step 3. While $\lvert f(x^{k+1})-f(x^{k})\rvert>\varepsilon$, let $k=k+1$, and go to Step 2.
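One iteration of this Karmarkar-type scheme (scaling, gradient projection, move from the center, map back) can be sketched in Python. This is our own schematic, not the paper's code: the names, the toy data, and the use of the normalized direction $P_k/\lVert P_k\rVert$ are assumptions made for definiteness:

```python
import numpy as np

def projected_gradient_step(f_grad, xk, A, alpha):
    """One Karmarkar-type iteration (sketch): linearize f at the current
    iterate, project the scaled gradient onto the null space of
    B = [A D; e^T], step from the simplex center a = e/n against that
    direction, and map back with the inverse projective transformation."""
    n = xk.size
    D = np.diag(xk)
    B = np.vstack([A @ D, np.ones(n)])
    c = D @ f_grad(xk)                         # scaled gradient D_k grad f(x^k)
    # orthogonal projection onto null(B): P = (I - B^T (B B^T)^{-1} B) c
    P = c - B.T @ np.linalg.solve(B @ B.T, B @ c)
    a = np.ones(n) / n                          # center of the simplex
    r = 1.0 / np.sqrt(n * (n - 1))              # radius of the inscribed ball
    y = a - alpha * r * P / np.linalg.norm(P)   # move against the projection
    x_new = D @ y
    return x_new / x_new.sum()                  # inverse transformation
```

On a toy problem such as minimizing $x_3^2$ subject to $x_1=x_2$ on the simplex, one step from the center preserves $Ax=0$ and $e^{T}x=1$ while reducing $x_3$.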

However, as we can see in the following two examples, the KKY algorithm does not work for every nonlinear problem.

Example 1. Consider the quadratic convex problem:

For the given tolerance, the KKY algorithm stops after a finite number of iterations, since the stopping condition holds, whereas the computed solution is not feasible.

Example 2. Consider the nonlinear convex problem: For the given tolerance, the solution calculated by the KKY algorithm after a number of iterations is again not feasible.

Here, two difficulties arise. First, the iterate must stay feasible in each iteration. Second, the required tolerance must be satisfied in moderate time. To overcome these difficulties, we modify the KKY algorithm in the next section.

#### 3. Modified Algorithm

##### 3.1. Modifications

In the KKY [12] algorithm $y^{k+1}=a-\alpha rP_{k}$. As observed in Examples 1 and 2, for these values of $\alpha$ the KKY algorithm may produce an infeasible solution. In the standard Karmarkar algorithm [1, 3, 4], $P_{k}/\lVert P_{k}\rVert$ is used instead of $P_{k}$; hence the solution in each iteration remains feasible. In this case, the optimal solution for Problem (2) is given by $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$, where $0<\alpha<1$. Applying the KKY algorithm with this step to the problem in Example 2 gives a feasible solution. However, the algorithm stops after a number of iterations once the stopping condition holds, while the suitable accuracy is not reached. In each iteration, a line search method is therefore used to find $\alpha$ such that the objective function decreases and the suitable tolerance is satisfied. Thus one may write Problem (2) in the following form:
$$\min \nabla g(a)^{T}y \quad \text{s.t.}\quad AD_{k}y=0,\; e^{T}y=1,\; \lVert y-a\rVert\le\alpha r. \tag{5}$$

Lemma 3. The optimal solution for Problem (5) is given by $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$.

Proof. We put $z=y-a$, and then we have $B_{k}z=0$ and Problem (5) is equivalent to
$$\min \nabla g(a)^{T}z \quad \text{s.t.}\quad B_{k}z=0,\; \lVert z\rVert\le\alpha r. \tag{6}$$
$z^{*}$ is a solution of Problem (6) if and only if there exist multipliers $\lambda$ and $\mu\ge 0$ such that
$$\nabla g(a)+B_{k}^{T}\lambda+2\mu z^{*}=0. \tag{7}$$
Multiplying both sides of (7) by the projector $I-B_{k}^{T}(B_{k}B_{k}^{T})^{-1}B_{k}$ and since $B_{k}z^{*}=0$, we get $P_{k}+2\mu z^{*}=0$. Then $z^{*}=-P_{k}/(2\mu)$; by substituting in (7) we find that the ball constraint is active, that is, $\lVert z^{*}\rVert=\alpha r$. By assuming $2\mu=\lVert P_{k}\rVert/(\alpha r)$, we have $z^{*}=-\alpha rP_{k}/\lVert P_{k}\rVert$. And we have $y^{k+1}=a+z^{*}=a-\alpha rP_{k}/\lVert P_{k}\rVert$.

Note that here we have proposed an algorithm similar to KKY, where $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$. This modified algorithm has the advantage that it can find a feasible approximate solution within the suitable tolerance.

##### 3.2. Modified KKY Algorithm (MKKY)

Let $\varepsilon>0$ be a given tolerance and let $x^{0}$ be a strictly feasible point.

Step 1. Compute $f(x^{0})$, $\nabla f(x^{0})$, and $a=e/n$. Put $k=0$ and $\alpha\in(0,1)$.

Step 2. Build $D_{k}=\operatorname{diag}(x^{k})$ and $B_{k}=\begin{bmatrix}AD_{k}\\ e^{T}\end{bmatrix}$.
Compute $P_{k}$, $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$, and $x^{k+1}=D_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.

Step 3. While $\lvert f(x^{k+1})-f(x^{k})\rvert>\varepsilon$, put $k=k+1$.
Build $D_{k}$ and $B_{k}$.
Compute $P_{k}$ and $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$.
Then $x^{k+1}=D_{k}y^{k+1}/(e^{T}D_{k}y^{k+1})$.
Go to Step 4.

Step 4. While $f(x^{k+1})>f(x^{k})$, reduce $\alpha$ by the line search and recompute $y^{k+1}$ and $x^{k+1}$.

Step 5. Go to Step 3.
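The feasibility-preserving choice of the step length in Steps 3 and 4 can be mimicked with a simple backtracking rule. This is illustrative only: the paper's exact line search is not reproduced here, and the halving factor and termination limit are our assumptions:

```python
import numpy as np

def backtracking_alpha(f, a, r, P, alpha=1.0, shrink=0.5, max_iter=60):
    """Backtracking sketch: shrink alpha until the trial point
    y = a - alpha * r * P / ||P|| is strictly positive (interior of the
    simplex) and the objective f decreases, echoing the MKKY requirement
    that each iterate stay feasible while f is reduced."""
    d = -r * P / np.linalg.norm(P)   # normalized descent direction
    f0 = f(a)
    for _ in range(max_iter):
        y = a + alpha * d
        if np.all(y > 0) and f(y) < f0:
            return alpha, y          # acceptable feasible step found
        alpha *= shrink
    return 0.0, a                    # no acceptable step found
```

For instance, with a direction that would leave the simplex at the initial step length, one halving suffices to restore strict feasibility while still decreasing the objective.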

#### 4. Convergence for Modified Algorithm (MKKY)

In order to establish the convergence of the modified algorithm, we introduce a potential function associated with Problem (1) defined by
$$\phi(x)=\sum_{j=1}^{n}\ln\frac{f(x)-f^{*}}{x_{j}},$$
where $f^{*}$ is the optimal value of the objective function.
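Assuming a Karmarkar-type potential of the form $\phi(x)=\sum_{j}\ln\bigl((f(x)-f^{*})/x_{j}\bigr)$ (our reading of the definition above), its evaluation is straightforward; the naming below is ours:

```python
import numpy as np

def potential(f_x, f_star, x):
    """Karmarkar-type potential phi(x) = sum_j ln((f(x) - f*) / x_j),
    defined for strictly feasible x (x > 0) with f(x) > f*."""
    return float(np.sum(np.log((f_x - f_star) / x)))
```

A decrease of this quantity by a fixed amount per iteration is what Theorem 6 below establishes.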

Lemma 4. If $y^{k+1}$ is the optimal solution of Problem (5), then $\nabla g(a)^{T}y^{k+1}<\nabla g(a)^{T}a$.

Proof. Since $y^{k+1}=a-\alpha rP_{k}/\lVert P_{k}\rVert$ and $\nabla g(a)^{T}P_{k}=\lVert P_{k}\rVert^{2}$, then
$$\nabla g(a)^{T}y^{k+1}=\nabla g(a)^{T}a-\alpha r\lVert P_{k}\rVert.$$
Thus $\nabla g(a)^{T}y^{k+1}<\nabla g(a)^{T}a$ and a reduction is obtained in each iteration.

Lemma 5. If $y^{k+1}$ is the optimal solution of Problem (5), then
$$\nabla g(a)^{T}y^{k+1}\le\Bigl(1-\frac{\alpha}{n-1}\Bigr)\nabla g(a)^{T}a.$$

Proof. Let $y^{*}$ be the optimal solution of Problem (1) after the projective transformation, so that $\nabla g(a)^{T}y^{*}=0$; we can write $S_{n}\subset B(a,R)$, where $B(a,R)$ is the ball of center $a$ with radius $R=(n-1)r$. There are two cases: (i) If $y^{*}\in B(a,\alpha r)$, then $\nabla g(a)^{T}y^{k+1}\le\nabla g(a)^{T}y^{*}=0$ and Lemma 5 holds. (ii) If $y^{*}\notin B(a,\alpha r)$, since the feasible set of Problem (5) is convex, the intersection point of the boundary of $B(a,\alpha r)$ and the line segment between $a$ and $y^{*}$ is feasible for Problem (5). Let $\hat y$ be the intersection point; then $\hat y=(1-\lambda)a+\lambda y^{*}$ for some $\lambda$ with $\lVert\hat y-a\rVert=\lambda\lVert y^{*}-a\rVert=\alpha r$, and since $\lVert y^{*}-a\rVert\le R$ we have $\lambda\ge\alpha r/R$. Thus,
$$\nabla g(a)^{T}y^{k+1}\le\nabla g(a)^{T}\hat y=(1-\lambda)\nabla g(a)^{T}a+\lambda\nabla g(a)^{T}y^{*}=(1-\lambda)\nabla g(a)^{T}a\le\Bigl(1-\frac{\alpha r}{R}\Bigr)\nabla g(a)^{T}a.$$
Substituting $R=(n-1)r$ in the above inequality, we have $\nabla g(a)^{T}y^{k+1}\le\bigl(1-\frac{\alpha}{n-1}\bigr)\nabla g(a)^{T}a$.

Theorem 6. In every iteration of the algorithm MKKY, the potential function is reduced by a constant value $\delta$ such that $\phi(x^{k+1})\le\phi(x^{k})-\delta$.

Proof. Consider
$$\phi(y^{k+1})-\phi(a)=n\ln\frac{\nabla g(a)^{T}y^{k+1}}{\nabla g(a)^{T}a}+\sum_{j=1}^{n}\ln\frac{a_{j}}{y_{j}^{k+1}}.$$
We used the result demonstrated by Karmarkar [1]:
$$\sum_{j=1}^{n}\ln\frac{a_{j}}{y_{j}^{k+1}}\le\frac{\alpha^{2}}{2(1-\alpha)}.$$
Then $\phi(y^{k+1})\le\phi(a)-\delta$, where $\delta=\alpha-\frac{\alpha^{2}}{2(1-\alpha)}$.
If $\alpha=1/4$, then $\delta\ge 1/8$.
Therefore $\phi(x^{k+1})\le\phi(x^{k})-\delta$.

Theorem 7. If $x^{0}$ is a feasible solution of Problem (1) and $x^{*}$ is the optimal solution with optimal value $f^{*}$, then one has the following assumptions: (1) $f(x^{0})-f^{*}\le 2^{L}$; (2) for any feasible solution $x$, either $f(x)-f^{*}\ge 2^{-L}$ or $x$ is optimal.
The algorithm MKKY finds an optimal solution after $O(nL)$ iterations, where $L$ is the number of bits of the input data.

Proof. Consider
$$\phi(x)=n\ln\bigl(f(x)-f^{*}\bigr)-\sum_{j=1}^{n}\ln x_{j},$$
where $0<x_{j}\le 1$ in the feasible region. By assumptions (1) and (2) we then have $n\ln\bigl(f(x^{k})-f^{*}\bigr)\le\phi(x^{k})$. According to Theorem 6, after $k$ iterations, we have $\phi(x^{k})\le\phi(x^{0})-k\delta$. Thus,
$$n\ln\bigl(f(x^{k})-f^{*}\bigr)\le\phi(x^{0})-k\delta.$$
Therefore, for $k=O(nL)$ we obtain $f(x^{k})-f^{*}\le 2^{-L}$, and $x^{k}$ is an optimal solution.

In the next section the MKKY algorithm is combined with the algorithm of Schrijver [6, 7] and Malek-Naseri's algorithm [8] to propose novel hybrid algorithms called Sch-MKKY (Schrijver-Modified Kebbiche-Keraghel-Yassine) and MN-MKKY (Malek-Naseri-Modified Kebbiche-Keraghel-Yassine). These algorithms differ from MKKY in the use of an optimal step length in each iteration.

##### 4.1. Hybrid Algorithms

Let us assume that the step length and the search direction in the modified algorithm are expressed as before. In the Sch-MKKY and MN-MKKY algorithms, choose the step length according to the Schrijver [6, 7] and the Malek-Naseri [8] rules, respectively. It is easy to prove that the theorems in Section 4 hold for the Sch-MKKY and MN-MKKY algorithms. Thus with these step lengths the convergence is guaranteed.

#### 5. Numerical Results

It is observed that the approximate solution from the KKY algorithm is not feasible in some instances (see Examples 8, 9, and 11 in Table 1). All programs have been written in MATLAB 7.0.4 with the given tolerance. The computed solutions for KKY, MKKY, Sch-MKKY, and MN-MKKY are given in Table 1.

| Examples | KKY | MKKY | Sch-MKKY | MN-MKKY |
| --- | --- | --- | --- | --- |
| Example 8 | 1.1176285677 | 0.7993104755 | 0.7992885091 | 0.7984677572 |
|  | 0.8925922545 | 1.2006283419 | 1.2006478942 | 1.2013756245 |
|  | −0.0102208222 | 0.0000611826 | 0.0000635969 | 0.0001566183 |
|  | 1.3324440588 | 0.3980537916 | 0.3979927208 | 0.395765081 |
| Example 9 | −5.2199797446 | 0.7514629509 | 0.7514297188 | 0.7514274997 |
|  | −1.8871925195 | 0.0000032699 | 0.0000065967 | 0.0000068198 |
|  | 0.4463304312 | 0.0000010511 | 0.0000021234 | 0.0000022020 |
|  | 0.3791651531 | 1.0175937272 | 1.0172339995 | 1.0172101531 |
|  | −0.0087312913 | 0.0007335551 | 0.0007190583 | 0.0007180983 |
|  | 0.5536695688 | 0.9999989489 | 0.9999978766 | 0.9999977980 |
|  | 0.6232230772 | 0.0000006675 | 0.0000013472 | 0.0000013940 |
| Example 10 | 0.0000000025 | 0.0000000156 | 0.0000000055 | 0.0000000080 |
|  | 1.9999999829 | 1.9999999858 | 1.9999999749 | 1.9999999586 |
|  | 0.0000000025 | 0.0000000156 | 0.0000000055 | 0.0000000080 |
| Example 11 | 0.3124560511 | 0.7142857707 | 0.7142857350 | 0.7142857453 |
|  | 0.3438507936 | 0.1428572168 | 0.1428571684 | 0.1428571600 |
|  | 0.0001576431 | 0.0000002101 | 0.0000000702 | 0.0000000498 |
|  | −1.4064826279 | 0.0000001087 | 0.0000000363 | 0.0000000258 |

Example 8. Consider the quadratic convex problem:

As shown, the KKY algorithm does not converge to the correct solution, while the computed solutions of MKKY and the two hybrid algorithms are feasible. Example 8 is also solved using fmincon from MATLAB 7.0.4, and the absolute difference of the objective function values is reported as Error. In Table 2, the number of iterations, solution norms, optimal values of the objective function, Error, and the elapsed time for each algorithm are given.

| Examples | Algorithm | Iterations | Solution norm | Objective value | Error | CPU time (sec) |
| --- | --- | --- | --- | --- | --- | --- |
| Example 8 | KKY (infeasible) | — | — | — | — | — |
|  | MKKY | 549 | 1.4962795449 | −7.1998265727 |  | 182.8281 |
|  | Sch-MKKY | 515 | 1.4962672544 | −7.1998196616 |  | 153.4063 |
|  | MN-MKKY | 187 | 1.4958093895 | −7.1995511207 |  | 83.5313 |
| Example 9 | KKY (infeasible) | — | — | — | — | — |
|  | MKKY | 181 | 1.6125110840 | −1.3693639744 |  | 213.1250 |
|  | Sch-MKKY | 87 | 1.6122679375 | −1.3693475555 |  | 122.4229 |
|  | MN-MKKY | 67 | 1.6122518087 | −1.3693463904 |  | 79.1875 |
| Example 10 | KKY | 17 | 1.9999999829 | −3.9999999317 |  | 11.4844 |
|  | MKKY | 80 | 1.9999999858 | −3.9999999433 |  | 46.5469 |
|  | Sch-MKKY | 37 | 1.9999999749 | −3.9999998994 |  | 22.5625 |
|  | MN-MKKY | 15 | 1.9999999586 | −3.9999998343 |  | 10.7031 |
| Example 11 | KKY (infeasible) | — | — | — | — | — |
|  | MKKY | 61 | 0.7284314289 | −0.2039626101 |  | 35.7188 |
|  | Sch-MKKY | 30 | 0.7284313844 | −0.2039626256 |  | 18.4531 |
|  | MN-MKKY | 12 | 0.7284313929 | −0.2039626212 |  | 9.0781 |

Table 2 shows that the MN-MKKY algorithm is more efficient than the other algorithms in terms of the number of iterations and the elapsed time.

In Figure 1, for various tolerances, the number of iterations is compared for different algorithms. It is observed that the MN-MKKY algorithm has better performance for each required tolerance.

Example 9. Consider Example 2. Table 2 shows the computed solution for four different algorithms. From Figure 1, it is obvious that MN-MKKY algorithm is the best for this example.

Example 10. Consider the quadratic convex problem:

Example 11. Consider the nonlinear convex problem:

#### 6. Conclusion

With two ideas in mind, (i) calculation of a feasible solution in each iteration and (ii) the requirement that the objective function value decrease in each iteration for a fixed desired tolerance, this paper proposed three hybrid algorithms for solving nonlinear convex programming problems based on the interior point idea, using the step lengths $\alpha$ of the Karmarkar, Schrijver, and Malek-Naseri techniques. These methods perform better than the standard Karmarkar algorithm, since the latter does not check the feasibility of the solution in each iteration.

Our numerical simulations show that the MN-MKKY algorithm has the best performance among the tested algorithms. This algorithm requires fewer iterations to solve general nonlinear optimization problems with linear constraints, since it uses the step length of Malek-Naseri type.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. N. Karmarkar, “A new polynomial-time algorithm for linear programming,” Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
2. H. Navidi, A. Malek, and P. Khosravi, “Efficient hybrid algorithm for solving large scale constrained linear programming problems,” Journal of Applied Sciences, vol. 9, no. 18, pp. 3402–3406, 2009.
3. M. S. Bazaraa, J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, John Wiley & Sons, New York, NY, USA, 1984.
4. A. T. Hamdy, Operations Research, Macmillan, New York, NY, USA, 1992.
5. R. M. Karp, “George Dantzig's impact on the theory of computation,” Discrete Optimization, vol. 5, no. 2, pp. 174–185, 2008.
6. C. Roos, T. Terlaky, and J. Vial, Theory and Algorithms for Linear Optimization, Princeton University, 2001.
7. A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, NY, USA, 1986.
8. A. Malek and R. Naseri, “A new fast algorithm based on Karmarkar's gradient projected method for solving linear programming problem,” Advanced Modeling and Optimization, vol. 6, no. 2, pp. 43–51, 2004.
9. E. Tse and Y. Ye, “An extension of Karmarkar's projective algorithm for convex quadratic programming,” Mathematical Programming, vol. 44, no. 2, pp. 157–179, 1989.
10. Z. Kebbiche and D. Benterki, “A weighted path-following method for linearly constrained convex programming,” Revue Roumaine de Mathématiques Pures et Appliquées, vol. 57, no. 3, pp. 245–256, 2012.
11. P. Fei and Y. Wang, “A primal infeasible interior point algorithm for linearly constrained convex programming,” Control and Cybernetics, vol. 38, no. 3, pp. 687–704, 2009.
12. Z. Kebbiche, A. Keraghel, and A. Yassine, “Extension of a projective interior point method for linearly constrained convex programming,” Applied Mathematics and Computation, vol. 193, no. 2, pp. 553–559, 2007.
