Optimization Theory, Methods, and Applications in Engineering 2013
A Branch and Bound Reduced Algorithm for Quadratic Programming Problems with Quadratic Constraints
We propose a branch and bound reduced algorithm for quadratic programming problems with quadratic constraints. In this algorithm, we determine a lower bound of the optimal value of the original problem by constructing a linear relaxation programming problem. At the same time, in order to improve the quality of the approximation and to accelerate convergence, a rectangle reduction strategy is used in the algorithm. Numerical experiments show that the proposed algorithm is feasible and effective and can solve small- and medium-sized problems.
Quadratic programming problems with quadratic constraints play a very important role in global optimization, because quadratic functions are relatively simple among all nonlinear functions and can approximate many other functions. Studying quadratic problems is therefore a natural step toward a better understanding of general nonlinear problems, and quadratic programming problems with quadratic constraints have important applications in science and technology. Accordingly, whether in local optimization or in global optimization, quadratic programming problems have received extensive attention, and researching this kind of problem is clearly necessary. In this paper, we consider the following quadratic programming problem with quadratic constraints:
$$(\mathrm{QP}):\quad \min\; f_0(x)=x^{T}Q_0x+c_0^{T}x \quad \text{s.t.}\;\; f_j(x)=x^{T}Q_jx+c_j^{T}x\le b_j,\;\; j=1,\dots,m,\quad x\in X^{0}=\{x\in\mathbb{R}^{n}: l^{0}\le x\le u^{0}\},$$
where $Q_j$ ($j=0,1,\dots,m$) are $n$-dimensional symmetric matrices, $c_j\in\mathbb{R}^{n}$, $b_j\in\mathbb{R}$, $l^{0},u^{0}\in\mathbb{R}^{n}$, and $l^{0}\le u^{0}$.
In recent years, many researchers have studied this kind of problem and made notable progress. In , an effective lower bound of the optimal value of the original problem is provided using a Lagrangian lower estimate, and local optimal solutions are obtained by Newton's method, which is then also used to accelerate convergence to the global optimal solution. A decomposition approach is put forward in . Literature  organically combines the outer approximation method with the branch and bound technique and presents a new branch-reduce algorithm. Literature  combines the cutting plane algorithm with the branch and bound algorithm and puts forward a new algorithm. Literature  presents a branch and bound algorithm based on linear lower functions of bilinear functions. Based on , literature  puts forward a branch-reduce method aimed at the objective function and constraint conditions of the linear relaxation programming. A simplex branch and bound algorithm is proposed in . Many other methods for solving quadratic programming problems with quadratic constraints can be found in [8–15].
The rest of this paper is organized as follows. In Section 2, we construct the linear relaxation programming problem of problem (QP). In Section 3, we give the rectangle subdivision and reduction strategies. We describe the branch and bound algorithm in detail in Section 4 and prove its convergence. Finally, some numerical results demonstrate the effectiveness of the presented algorithm.
2. Linear Relaxation Programming
In this section, we construct a linear relaxation programming problem of the original problem.
Assume that $\lambda_j$ is the minimum eigenvalue of the matrix $Q_j$, for $j=0,1,\dots,m$. If $\lambda_j\ge 0$, replace $\lambda_j$ by $0$ and let $\hat{Q}_j=Q_j$; otherwise, let $\hat{Q}_j=Q_j-\lambda_j I$, where $I$ is the $n\times n$ identity matrix; then $\hat{Q}_j$ is positive semidefinite, and $f_j(x)=x^{T}\hat{Q}_jx+\lambda_j\sum_{i=1}^{n}x_i^{2}+c_j^{T}x$.
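This eigenvalue shift is easy to carry out numerically. The following sketch is our own illustration with NumPy (the names `psd_split`, `Q_hat`, and `lam`, and the sample matrix, are ours, not the paper's): it splits a symmetric matrix into a positive semidefinite part plus a multiple of the identity.

```python
import numpy as np

def psd_split(Q):
    """Split symmetric Q as Q = Q_hat + lam * I, where lam is the minimum
    eigenvalue clipped at 0, so Q_hat = Q - lam * I is positive semidefinite."""
    lam = min(np.linalg.eigvalsh(Q).min(), 0.0)   # lam <= 0; zero if Q is already PSD
    Q_hat = Q - lam * np.eye(Q.shape[0])
    return Q_hat, lam

Q = np.array([[1.0, 3.0], [3.0, 1.0]])            # indefinite: eigenvalues 4 and -2
Q_hat, lam = psd_split(Q)
print(round(lam, 6))                              # -2.0
print(bool(np.linalg.eigvalsh(Q_hat).min() >= -1e-9))  # True: Q_hat is PSD
```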
On the rectangle $X=\{x: l\le x\le u\}\subseteq X^{0}$, for each $j\in\{0,1,\dots,m\}$, we construct a linear lower function $L_j(x)$ of $f_j(x)$ on $X$.
Suppose that $l_i$ and $u_i$ are the $i$th components of $l$ and $u$, respectively. We know that, for each $i$, a linear lower function of $\lambda_j x_i^{2}$ on the interval $[l_i,u_i]$ is $\lambda_j\big((l_i+u_i)x_i-l_iu_i\big)$, since $\lambda_j\le 0$ and $x_i^{2}\le (l_i+u_i)x_i-l_iu_i$ on $[l_i,u_i]$. Therefore, $\lambda_j\sum_{i=1}^{n}\big((l_i+u_i)x_i-l_iu_i\big)$ is a linear lower function of $\lambda_j\sum_{i=1}^{n}x_i^{2}$ on the rectangle $X$; together with the supporting hyperplane of the convex part $x^{T}\hat{Q}_jx$ at the midpoint $x^{\mathrm{mid}}=(l+u)/2$, we construct the following linear function:
$$L_j(x)=\big(2\hat{Q}_jx^{\mathrm{mid}}+\lambda_j(l+u)+c_j\big)^{T}x+d_j, \tag{1}$$
where
$$d_j=-(x^{\mathrm{mid}})^{T}\hat{Q}_jx^{\mathrm{mid}}-\lambda_j\,l^{T}u.$$
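The chord underestimator can be checked numerically. Below is a small sketch with values of our own choosing ($\lambda=-2$ on $[-1,3]$); it confirms both that the chord lies below $\lambda x^{2}$ and that the worst-case gap equals $|\lambda|(u-l)^{2}/4$, attained at the interval midpoint.

```python
import numpy as np

def chord_lower(lam, l, u, x):
    """For lam <= 0, lam*((l+u)*x - l*u) underestimates lam*x**2 on [l, u]:
    the chord of the convex function x**2 lies above it, and multiplying
    by lam <= 0 flips the inequality."""
    return lam * ((l + u) * x - l * u)

lam, l, u = -2.0, -1.0, 3.0
xs = np.linspace(l, u, 101)
gap = lam * xs**2 - chord_lower(lam, l, u, xs)    # function minus underestimator
print(bool(gap.min() >= 0))                       # True: valid lower function
print(bool(abs(gap.max() - abs(lam) * (u - l)**2 / 4) < 1e-6))  # True: max gap = |lam|(u-l)^2/4
```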
We can obtain the following two theorems.
Theorem 1. For each $j\in\{0,1,\dots,m\}$, let $\hat{Q}_j$ be positive semidefinite. Then the linear function $L_j(x)$ is a lower function of $f_j(x)$ on the rectangle $X$; that is, $L_j(x)\le f_j(x)$ for all $x\in X$.
Proof. From the formula (1) and the definitions of the functions $f_j$ and $L_j$, for each $x\in X$, we have
$$f_j(x)-L_j(x)=\Big(x^{T}\hat{Q}_jx-2(x^{\mathrm{mid}})^{T}\hat{Q}_jx+(x^{\mathrm{mid}})^{T}\hat{Q}_jx^{\mathrm{mid}}\Big)+\lambda_j\sum_{i=1}^{n}\big(x_i^{2}-(l_i+u_i)x_i+l_iu_i\big).$$
Moreover, the matrix $\hat{Q}_j$ is positive semidefinite; then
$$x^{T}\hat{Q}_jx-2(x^{\mathrm{mid}})^{T}\hat{Q}_jx+(x^{\mathrm{mid}})^{T}\hat{Q}_jx^{\mathrm{mid}}=(x-x^{\mathrm{mid}})^{T}\hat{Q}_j(x-x^{\mathrm{mid}})\ge 0,$$
and, since $\lambda_j\le 0$ and $x_i^{2}-(l_i+u_i)x_i+l_iu_i=(x_i-l_i)(x_i-u_i)\le 0$ on $[l_i,u_i]$, each term of the second sum is nonnegative. Consequently, $L_j(x)\le f_j(x)$, for all $x\in X$, $j=0,1,\dots,m$.
Theorem 2. Assume that $\rho_j$ is the spectral radius of $\hat{Q}_j$ and $d(X)=\lVert u-l\rVert$ is the diameter of the rectangle $X$; then
$$0\le f_j(x)-L_j(x)\le \frac{\big(\rho_j+|\lambda_j|\big)\,d(X)^{2}}{4},\quad x\in X,$$
so that $f_j-L_j\to 0$ uniformly on $X$ as $d(X)\to 0$.
Proof. From the formula (1) and the definitions of the functions $f_j$ and $L_j$, we have
$$f_j(x)-L_j(x)=(x-x^{\mathrm{mid}})^{T}\hat{Q}_j(x-x^{\mathrm{mid}})+\lambda_j\sum_{i=1}^{n}(x_i-l_i)(x_i-u_i).$$
For $x\in X$, $\lVert x-x^{\mathrm{mid}}\rVert\le d(X)/2$, so the first term is at most $\rho_j d(X)^{2}/4$; and $-(x_i-l_i)(x_i-u_i)\le (u_i-l_i)^{2}/4$, so the second term is at most $|\lambda_j|\,d(X)^{2}/4$. Hence, the conclusion is established.
Therefore, from Theorem 1, we obtain the linear relaxation programming problem of (QP) on the rectangle $X$:
$$\mathrm{LP}(X):\quad \min\; L_0(x) \quad \text{s.t.}\;\; L_j(x)\le b_j,\;\; j=1,\dots,m,\quad x\in X.$$
Solving the problem $\mathrm{LP}(X)$, its optimal value $\mathrm{LB}(X)$ is obtained, which is a lower bound of the global optimum of the problem (QP) on the rectangle $X$.
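Solving the linear relaxation requires an LP solver; note, though, that simply dropping the linearized constraints and minimizing the linear objective over the box alone already yields a cheaper (weaker) valid lower bound, because a linear function attains its minimum over a box coordinatewise. A minimal sketch (the function name and sample values are ours):

```python
import numpy as np

def box_min_linear(c, l, u, c0=0.0):
    """Minimum of c^T x + c0 over the box l <= x <= u: each coordinate
    independently picks l_i if c_i >= 0, else u_i."""
    return c0 + float(np.sum(np.where(c >= 0, c * l, c * u)))

c = np.array([2.0, -1.0])
l = np.array([0.0, 0.0])
u = np.array([1.0, 3.0])
print(box_min_linear(c, l, u))   # -3.0, attained at x = (0, 3)
```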
3. The Subdivision and Reduction of the Rectangle
In this section, we give the bisection and reduction methods for the rectangle. Let $X=\{x: l\le x\le u\}\subseteq X^{0}$ be a rectangle, and let $d(X)=\lVert u-l\rVert$.
3.1. The Subdivision of the Rectangle
The method of subdivision of the rectangle is described as follows.
(i) Select the longest edge of the rectangle $X$; that is, choose $p$ such that $u_p-l_p=\max_{1\le i\le n}(u_i-l_i)$.
(ii) Let $v=(l_p+u_p)/2$. Then $X$ is divided into
$$X_1=\{x\in X: x_p\le v\},\qquad X_2=\{x\in X: x_p\ge v\},$$
with $X_1\cup X_2=X$.
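The longest-edge bisection can be sketched as follows, representing a rectangle by its corner vectors `l` and `u` (the names are ours):

```python
import numpy as np

def bisect_longest_edge(l, u):
    """Split the box [l, u] through the midpoint of its longest edge."""
    p = int(np.argmax(u - l))          # index of the longest edge
    v = (l[p] + u[p]) / 2.0
    u1, l2 = u.copy(), l.copy()
    u1[p], l2[p] = v, v                # X1 = [l, u1], X2 = [l2, u]
    return (l, u1), (l2, u)

(l1, u1), (l2, u2) = bisect_longest_edge(np.array([0.0, 0.0]), np.array([2.0, 6.0]))
print(l1, u1)   # [0. 0.] [2. 3.]
print(l2, u2)   # [0. 3.] [2. 6.]
```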
3.2. The Reduction of the Rectangle
Based on , in order to improve the convergence of the algorithm, we give two pruning methods for problem (QP). For a rectangle $X=\{x: l\le x\le u\}$, suppose that the objective function of $\mathrm{LP}(X)$ is $L_0(x)=\sum_{i=1}^{n}\alpha_ix_i+\alpha_0$, the constraint functions are $L_j(x)=\sum_{i=1}^{n}\beta_{ji}x_i+\beta_{j0}\le b_j$ ($j=1,\dots,m$), and the current upper bound of the optimal value of (QP) is denoted by $\mathrm{UB}$; let
$$\mathrm{RL}_0=\alpha_0+\sum_{i=1}^{n}\min\{\alpha_il_i,\alpha_iu_i\},\qquad \mathrm{RL}_j=\beta_{j0}+\sum_{i=1}^{n}\min\{\beta_{ji}l_i,\beta_{ji}u_i\},\quad j=1,\dots,m.$$
Theorem 3 (see ). If $\mathrm{RL}_0>\mathrm{UB}$, then there is no optimal solution of (QP) on $X$; otherwise, for any $i\in\{1,\dots,n\}$, if $\alpha_i>0$, then there is no optimal solution of (QP) on $X\cap\{x: x_i>\gamma_i\}$; if $\alpha_i<0$, then there is no optimal solution of (QP) on $X\cap\{x: x_i<\gamma_i\}$, where
$$\gamma_i=\frac{\mathrm{UB}-\mathrm{RL}_0+\min\{\alpha_il_i,\alpha_iu_i\}}{\alpha_i}.$$
Theorem 4 (see ). For any $j\in\{1,\dots,m\}$, if $\mathrm{RL}_j>b_j$, then there is no feasible, hence no optimal, solution of (QP) on $X$; otherwise, for any $i\in\{1,\dots,n\}$, if $\beta_{ji}>0$, then there is no optimal solution of (QP) on $X\cap\{x: x_i>\gamma_i^{j}\}$; if $\beta_{ji}<0$, then there is no optimal solution of (QP) on $X\cap\{x: x_i<\gamma_i^{j}\}$, where
$$\gamma_i^{j}=\frac{b_j-\mathrm{RL}_j+\min\{\beta_{ji}l_i,\beta_{ji}u_i\}}{\beta_{ji}}.$$
Rule 1. Compute $\mathrm{RL}_0$; if $\mathrm{RL}_0>\mathrm{UB}$, then $X$ is deleted; otherwise, for any $i\in\{1,\dots,n\}$:
If $\alpha_i>0$, let $u_i=\min\{u_i,\gamma_i\}$.
If $\alpha_i<0$, let $l_i=\max\{l_i,\gamma_i\}$.
Rule 2. For each $j\in\{1,\dots,m\}$, compute $\mathrm{RL}_j$; if $\mathrm{RL}_j>b_j$, then $X$ is deleted; otherwise, for any $i\in\{1,\dots,n\}$:
If $\beta_{ji}>0$, let $u_i=\min\{u_i,\gamma_i^{j}\}$.
If $\beta_{ji}<0$, let $l_i=\max\{l_i,\gamma_i^{j}\}$, where $\gamma_i^{j}$ is defined in Theorem 4.
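Bound-based range reduction of this kind is straightforward to implement. The following is a sketch, in notation entirely our own, of shrinking a box using a single valid linear inequality `c^T x + c0 <= UB`; it follows the same pattern as the rules above but is not the paper's exact routine.

```python
import numpy as np

def reduce_box(c, c0, UB, l, u):
    """Shrink [l, u] using the valid inequality c^T x + c0 <= UB.
    Returns tightened (l, u), or None if the box can hold no optimum."""
    l, u = l.copy(), u.copy()
    lo_terms = np.where(c >= 0, c * l, c * u)     # coordinatewise minima of c_i * x_i
    total_lo = c0 + lo_terms.sum()
    if total_lo > UB:                             # even the box minimum violates the bound
        return None
    for i in range(len(c)):
        rest = total_lo - lo_terms[i]             # lower bound of c0 plus the other terms
        if c[i] > 0:
            u[i] = min(u[i], (UB - rest) / c[i])  # from c_i * x_i <= UB - rest
        elif c[i] < 0:
            l[i] = max(l[i], (UB - rest) / c[i])
    return l, u

out = reduce_box(np.array([1.0, 1.0]), 0.0, 1.0,
                 np.array([0.0, 0.0]), np.array([2.0, 2.0]))
print(out)   # (array([0., 0.]), array([1., 1.])): x1 + x2 <= 1 with x >= 0 forces x_i <= 1
```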
4. The Algorithm Description and Convergence Analysis
Suppose that, at iteration $k$, the feasible region of the problem (QP) is denoted by $D$, $F$ represents the set of feasible points found so far, $X^{k}$ represents the rectangle about to be divided, the set of rectangles remaining after pruning is denoted by $\Omega_k$, and the current lower and upper bounds of the global optimal value of the problem (QP) are denoted by $\mathrm{LB}_k$ and $\mathrm{UB}_k$, respectively.
Step 1 (initialization). Set the tolerance $\varepsilon>0$, and let $k=0$, $\Omega_0=\{X^{0}\}$, and $F=\emptyset$. Solving the problem $\mathrm{LP}(X^{0})$, its optimal solution and optimal value are denoted by $x^{0}$ and $\mathrm{LB}(X^{0})$, respectively. Let $\mathrm{LB}_0=\mathrm{LB}(X^{0})$; then $\mathrm{LB}_0$ is a lower bound of the global optimal value of the problem (QP); if $x^{0}\in D$, let $F=\{x^{0}\}$, the upper bound is $\mathrm{UB}_0=f_0(x^{0})$, and the current best solution is $x^{*}=x^{0}$.
Step 2 (termination rule). If $\mathrm{UB}_k-\mathrm{LB}_k\le\varepsilon$ or $\Omega_k=\emptyset$, then stop; the global optimal solution $x^{*}$ and the global optimal value $\mathrm{UB}_k$ are output; otherwise, go to the next step.
Step 3 (selection rule). Select a rectangle $X^{k}$ which has the minimum lower bound in the rectangle set $\Omega_k$; that is, $\mathrm{LB}(X^{k})=\min\{\mathrm{LB}(X): X\in\Omega_k\}$.
Step 4 (subdivision rule). Using the subdivision method of Section 3.1, divide the rectangle $X^{k}$ into two subrectangles $X^{k,1}$ and $X^{k,2}$ with $X^{k,1}\cup X^{k,2}=X^{k}$.
Step 5 (reduction technique). Reduce the subrectangles using the reduction method of Section 3.2; without loss of generality, the new rectangles after reduction are also denoted by $X^{k,t}$, $t\in T$, where $T\subseteq\{1,2\}$ is the index set of the rectangles surviving reduction.
Step 6 (bounding rule). Solve $\mathrm{LP}(X^{k,t})$ for each $t\in T$, add any new feasible points to $F$, and let $\Omega_k=(\Omega_k\setminus\{X^{k}\})\cup\{X^{k,t}: t\in T\}$. The lower bound is $\mathrm{LB}_k=\min\{\mathrm{LB}(X): X\in\Omega_k\}$; the upper bound is $\mathrm{UB}_k=\min\{f_0(x): x\in F\}$.
The current best feasible solution is $x^{*}\in\arg\min\{f_0(x): x\in F\}$.
Step 7 (pruning rule). Let $\Omega_{k+1}=\{X\in\Omega_k: \mathrm{UB}_k-\mathrm{LB}(X)>\varepsilon\}$.
Step 8. Set $k=k+1$; go to Step 2.
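Steps 1-8 can be assembled into a compact skeleton. The sketch below is entirely our own simplification: it handles only the box-constrained case, minimizing a possibly nonconvex quadratic $x^{T}Qx+c^{T}x$ over a rectangle using a Section 2 style linear underestimator for lower bounds, midpoint evaluation for upper bounds, longest-edge bisection, and bound-based pruning. The paper's algorithm additionally carries the quadratic constraints and the reduction rules.

```python
import heapq
import numpy as np

def bb_min_quadratic(Q, c, l, u, eps=1e-4, max_iter=20000):
    """Branch-and-bound sketch: minimize f(x) = x^T Q x + c^T x over [l, u].
    Lower bounds come from a linear underestimator on each box (supporting
    plane of the PSD part at the midpoint plus chords of the concave part),
    minimized coordinatewise."""
    n = len(c)
    lam = min(np.linalg.eigvalsh(Q).min(), 0.0)   # lam <= 0
    Qh = Q - lam * np.eye(n)                      # PSD part: Q = Qh + lam * I

    def f(x):
        return float(x @ Q @ x + c @ x)

    def lower_bound(l, u):
        m = (l + u) / 2.0
        a = 2.0 * (Qh @ m) + lam * (l + u) + c    # coefficients of the underestimator
        b = float(-m @ Qh @ m - lam * np.dot(l, u))
        return b + float(np.sum(np.where(a >= 0, a * l, a * u)))

    x_best = (l + u) / 2.0
    UB = f(x_best)
    heap = [(lower_bound(l, u), 0, l, u)]
    tie = 1                                       # tiebreaker: arrays are never compared
    for _ in range(max_iter):
        if not heap:                              # every box pruned: UB is eps-optimal
            return x_best, UB, UB - eps
        LB, _, l, u = heapq.heappop(heap)
        if UB - LB <= eps:                        # termination rule
            return x_best, UB, LB
        p = int(np.argmax(u - l))                 # bisect the longest edge
        v = (l[p] + u[p]) / 2.0
        u1 = u.copy(); u1[p] = v
        l2 = l.copy(); l2[p] = v
        for lo, hi in ((l, u1), (l2, u)):
            mid = (lo + hi) / 2.0
            if f(mid) < UB:                       # midpoint updates the upper bound
                UB, x_best = f(mid), mid
            lb = lower_bound(lo, hi)
            if lb < UB - eps:                     # keep only boxes that might improve
                heapq.heappush(heap, (lb, tie, lo, hi))
                tie += 1
    return x_best, UB, min((h[0] for h in heap), default=UB - eps)

Q = np.array([[1.0, 3.0], [3.0, 1.0]])            # indefinite quadratic
c = np.zeros(2)
x, UB, LB = bb_min_quadratic(Q, c, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(round(UB, 3))   # -4.0: the global minimum, attained near (1, -1) and (-1, 1)
```

Best-first selection (the heap ordered by lower bound) mirrors the selection rule of Step 3, and the `lb < UB - eps` test plays the role of the pruning rule of Step 7.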
Theorem 5. (a) If the algorithm terminates in finitely many steps, then $x^{*}$ is an $\varepsilon$-global optimal solution of problem (QP).
(b) For each $k$, let $x^{k}$ be the best solution after step $k$. If the algorithm is infinite, then $\{x^{k}\}$ is a feasible solution sequence of problem (QP), any accumulation point $\bar{x}$ of $\{x^{k}\}$ is a global optimal solution of problem (QP), and $\lim_{k\to\infty}\mathrm{LB}_k=\lim_{k\to\infty}\mathrm{UB}_k=f_0(\bar{x})$.
Proof. (a) If the algorithm is finite, suppose that it terminates at step $k$. Because $\mathrm{LB}_k$ is obtained by solving the linear relaxations, $\mathrm{LB}_k\le v$, where $v$ is the global optimal value of problem (QP), and $x^{*}$ is a feasible solution of problem (QP). When $\mathrm{UB}_k-\mathrm{LB}_k\le\varepsilon$, the algorithm terminates. From Steps 1 and 6, we have $\mathrm{UB}_k=f_0(x^{*})$. Because $x^{*}$ is a feasible solution of problem (QP), $v\le f_0(x^{*})$. Thus
$$v\le f_0(x^{*})=\mathrm{UB}_k\le \mathrm{LB}_k+\varepsilon\le v+\varepsilon.$$
Therefore, $x^{*}$ is an $\varepsilon$-global optimal solution of problem (QP).
(b) If the algorithm is infinite, then it produces a feasible solution sequence $\{x^{k}\}$ of problem (QP), where, for each $k$, $x^{k}$ is the best feasible solution obtained after solving the problems $\mathrm{LP}(X^{k,t})$. From the iteration of the algorithm, we have
$$\mathrm{LB}_k\le v\le \mathrm{UB}_k=f_0(x^{k}). \tag{14}$$
Because the sequence $\{\mathrm{LB}_k\}$ does not decrease and has an upper bound, and $\{\mathrm{UB}_k\}$ does not increase and has a lower bound, the sequences $\{\mathrm{LB}_k\}$ and $\{\mathrm{UB}_k\}$ are both convergent. Taking the limits on both sides of (14), we have
$$\lim_{k\to\infty}\mathrm{LB}_k\le v\le \lim_{k\to\infty}\mathrm{UB}_k. \tag{15}$$
Let $\lim_{k\to\infty}\mathrm{LB}_k=\mathrm{LB}^{*}$ and $\lim_{k\to\infty}\mathrm{UB}_k=\mathrm{UB}^{*}$; then the formula (15) converts into
$$\mathrm{LB}^{*}\le v\le \mathrm{UB}^{*}. \tag{16}$$
Without loss of generality, assume that the sequence of rectangles $\{X^{k}\}$ satisfies $X^{k+1}\subseteq X^{k}$ and $x^{k}\in X^{k}$, with $x^{k}\to\bar{x}$. In our algorithm, the rectangles are divided into two equal parts continuously; then $d(X^{k})\to 0$, and, because of the continuity of the functions $f_j$ and Theorem 2,
$$\lim_{k\to\infty}\mathrm{LB}_k=\lim_{k\to\infty}L_0(x^{k})=f_0(\bar{x})=\lim_{k\to\infty}\mathrm{UB}_k.$$
So $\mathrm{LB}^{*}=v=\mathrm{UB}^{*}=f_0(\bar{x})$, and any accumulation point of $\{x^{k}\}$ is a global optimal solution of problem (QP).
5. Numerical Experiment
Several experiments are given to demonstrate the feasibility and effectiveness of our algorithm.
Example 1. From the algorithm, the initial rectangle is ; first, we solve the problem , whose optimal solution is  and optimal value is ; then  is a lower bound of the global optimal value of problem (QP). Because  is feasible,  is a set of current feasible solutions, the current upper bound is , and the current optimal solution is .
After that, based on our selection rule, we select the rectangle with the minimum lower bound to divide; then  is divided into two subrectangles  and  by the dividing method in Section 3.1; then we reduce the rectangles using the reduction technique in Section 3.2, and the new rectangle after reduction is denoted by . Solving the linear relaxation programming problem on the rectangle , its optimal value is ; then the lower bound of the original problem is not updated, remaining . Next, we choose  to divide, and so on until the 15th iteration, where ; solving the linear relaxation programming problem , its optimal solution is  and optimal value is ; meanwhile, the current upper bound is  and the current optimal solution is . Because , our termination rule is satisfied; then the optimal value of the original problem is , the lower bound of the optimal value is , and the optimal solution is ; here the lower bound of the optimal value is also the approximate optimal value, where the accuracy is .
Table 2 shows the results of Example 1 under different accuracies.
Example 2. The optimal value is 118.3838.
Example 3. The optimal value is 6.7778.
Example 4. The optimal value is 1.0000.
Example 5. The optimal value is −1.1629.
Example 6. The optimal value is −31.8878.
Example 7. The optimal value is −3.3205.
Example 8. The optimal value is .
We choose ; then the approximate optimal values satisfying the accuracy and the CPU running times are obtained; the results are shown in Table 1.
In this paper, we presented a branch and bound reduced algorithm for solving quadratic programming problems with quadratic constraints. By constructing a linear relaxation programming problem, a lower bound of the optimal value of the original problem can be obtained. Meanwhile, we used a rectangle reduction technique to improve the quality of the approximation and to accelerate convergence. Numerical experiments show the effectiveness of our algorithm.
This work is supported by the National Natural Science Foundation of China under Grant no. 11161001.
X. J. Zheng, X. L. Sun, and D. Li, “Convex relaxations for nonconvex quadratically constrained quadratic programming: matrix cone decomposition and polyhedral approximation,” Mathematical Programming B, vol. 129, no. 2, pp. 301–329, 2011.
S.-J. Qu, Y. Ji, and K.-C. Zhang, “A deterministic global optimization algorithm based on a linearizing method for nonconvex quadratically constrained programs,” Mathematical and Computer Modelling, vol. 48, no. 11-12, pp. 1737–1743, 2008.
H. Tuy and N. T. Hoai-Phuong, “A robust algorithm for quadratic optimization under quadratic constraints,” Journal of Global Optimization, vol. 37, no. 4, pp. 557–569, 2007.