Research Article  Open Access
Honglan Zhu, Qin Ni, Liwei Zhang, Weiwei Yang, "A Fractional Trust Region Method for Linear Equality Constrained Optimization", Discrete Dynamics in Nature and Society, vol. 2016, Article ID 8676709, 10 pages, 2016. https://doi.org/10.1155/2016/8676709
A Fractional Trust Region Method for Linear Equality Constrained Optimization
Abstract
A quasi-Newton trust region method with a new fractional model for linearly constrained optimization problems is proposed. The linear equality constraints are eliminated by a null space technique. The fractional trust region subproblem is solved by a simple dogleg method. The global convergence of the proposed algorithm is established. Numerical results for test problems show the efficiency of the trust region method with the new fractional model. These results provide a basis for further research on nonlinear optimization.
1. Introduction
In this paper, we consider the linear equality constrained optimization problem: minimize f(x) subject to the linear equality constraints, where the objective function is continuously differentiable, the constraint matrix has full column rank, and the number of constraints is smaller than the number of variables.
Trust region methods have advantages both in the theoretical analysis of convergence properties and in practice. Davidon proposed a conic model, which incorporates more information at each iteration (see [1]). It is believed that combining trust region techniques with a conic model is appealing, and this has attracted increasing attention from researchers (see [2–8]).
For unconstrained optimization problems, we proposed a new fractional model (see [9]), in which the additional parameter vectors are horizontal vectors and the model matrix is symmetric and approximates the Hessian of the objective at the current iterate. The corresponding trust region subproblem minimizes this model over a ball of radius measured in the Euclidean norm. For particular parameter choices the fractional model reduces to the conic model, and for others to the quadratic model. In order to ensure that the fractional model function is bounded over the trust region, we impose assumption (5). With the notation of (6), the subproblem (4) reduces to a simplified fractional trust region subproblem (7). However, in [9] the subproblem (7) was solved only along the quasi-Newton direction. In this paper, we make a further study, in which the subproblem (7) is solved by a generalized dogleg algorithm.
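The reduction to the conic model mentioned above can be made concrete. As an illustration only (not the paper's exact fractional model, whose formula is given in [9]), the following sketch evaluates Davidon's classical conic model, which the fractional model generalizes; the function name and the test values are our own.

```python
import numpy as np

def conic_model(f, g, B, a, s):
    """Evaluate Davidon's conic model at the trial step s:

        m(s) = f + g's / (1 - a's) + s'Bs / (2 (1 - a's)^2)

    f : function value at the current iterate,
    g : gradient, B : symmetric Hessian approximation,
    a : horizontal vector (a = 0 recovers the quadratic model).
    """
    t = 1.0 - a @ s          # collinear scaling denominator
    return f + (g @ s) / t + (s @ B @ s) / (2.0 * t**2)

# Sanity check: with a = 0 the conic model coincides with
# the usual quadratic model f + g's + s'Bs/2.
g = np.array([1.0, -2.0])
B = np.eye(2)
s = np.array([0.1, 0.2])
quad = 3.0 + g @ s + 0.5 * (s @ B @ s)
assert np.isclose(conic_model(3.0, g, B, np.zeros(2), s), quad)
```

The horizontal vector bends the level sets of the model; boundedness over the trust region requires the denominator to stay away from zero, which is the role of assumption (5) in the paper.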
For the linear equality constrained problem, we focus on solving it by a trust region method with the new fractional model. If the constraints are linear inequalities or some constraint functions are nonlinear, a few difficulties may arise; however, in these cases the linear equality constrained problem may appear as a subproblem. For example, inequality constraints can be removed by an active set technique or a barrier transformation, and nonlinear constraints can then be linearized.
In [10], Sun et al. established an algorithm for the problem (1)-(2) and proved its convergence; however, they did not consider the model computation. In [11], Lu and Ni proposed a trust region method with a new conic model for solving (1)-(2) and carried out numerical experiments.
In this paper, we use a simple dogleg method to solve the fractional model subproblem and present a quasi-Newton trust region algorithm for solving linear equality constrained optimization problems. This continues our work on the fractional model (see [9]); here the linear equality constraints (2) are eliminated by a null space technique.
This paper is organized as follows. In Section 2, we describe the fractional trust region subproblem. In Section 3, we give a generalized dogleg algorithm for solving the fractional subproblem. In Section 4, we propose a new quasi-Newton method based on the fractional model for solving linearly constrained optimization problems and prove its global convergence under reasonable assumptions. Numerical results are presented in Section 5.
2. The Fractional Trust Region Subproblem
In order to solve the problem (1)-(2), we assume that the constraint matrix has full column rank and that the constraints are consistent; that is, the current iterate always satisfies the linear constraints. Under this assumption, the constraint on the trial step is equivalent to requiring that the step lie in the null space of the constraint matrix. Therefore, combining (1)–(4), the trial step is computed by the subproblem (8)–(11), which minimizes a fractional function subject to the trust region constraint and the linear constraints.
In order to solve (8)–(11), we first remove the constraint (10) under the same assumption as in [9]; that is, we assume that the parameters satisfy (5). Then, the subproblem (8)–(11) can be rewritten as the reduced subproblem (12)–(14), with the notation defined in (6).
The null space technique (see [4, 12, 13]) is an important tool for solving equality constrained programming problems. In the following, we use this technique to eliminate the constraint (13). Since the constraint matrix has full column rank, there exist an orthogonal matrix and a nonsingular upper triangular matrix giving its QR factorization (15). Then, (13) can be rewritten accordingly, and every feasible point for (13) can be represented as a particular solution plus a term lying in the null space of the constraint matrix. The subproblem (12)–(14) then becomes (18)-(19), with reduced horizontal vectors, the reduced gradient, and the reduced Hessian approximation defined in (20). This subproblem has the same form as the subproblem (7) of the unconstrained case and can be regarded as the subproblem of an unconstrained minimization over the reduced space. Therefore, we can find a solution of (18)-(19) by the dogleg method. Moreover, since the null space basis is orthonormal, the norm of the reduced step equals the norm of the full step.
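The null space elimination described above can be sketched numerically. Assuming constraints of the form A^T x = b with A of full column rank (the symbols here are illustrative, since the paper's notation was lost in extraction), a full QR factorization of A yields an orthonormal basis of the null space of A^T, and every feasible step has the form Z u:

```python
import numpy as np

def null_space_basis(A):
    """Orthonormal basis Z for the null space of A^T via full QR.

    A is n x m with full column rank (m < n).  With the full QR
    factorization A = Q R and Q = [Y | Z] orthogonal, the last
    n - m columns Z of Q span {s : A^T s = 0}, so every feasible
    step for the constraints A^T x = b is s = Z u, u in R^(n-m).
    """
    n, m = A.shape
    Q, _ = np.linalg.qr(A, mode='complete')   # Q is n x n
    return Q[:, m:]                           # null-space basis Z

# Check: A^T (Z u) = 0 for any u, so x + Z u stays feasible,
# and Z'Z = I, so the reduced step u has the same norm as Z u.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
Z = null_space_basis(A)
u = rng.standard_normal(3)
assert np.allclose(A.T @ (Z @ u), 0.0)
```

In this setting the reduced gradient is Z^T g and the reduced Hessian approximation is Z^T B Z, which is how the subproblem (18)-(19) inherits the form of the unconstrained subproblem (7).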
3. The Dogleg Method of Fractional Trust Region Subproblem
Now we consider calculating the trial step of the new subproblem (18)-(19) by a simple dogleg method. Firstly, we recall the choice of the Newton point given by the following subalgorithm (see [9]).
Subalgorithm 3.1. Given ,
Step 1. Calculate , , , , and as defined in (20) and
Step 2. If , then , where
Step 3. If , then set . If , then . If , then set and .
In the following, we consider determining the steepest descent point of (18)-(19), which is given by Definition 1. Substituting the steepest descent direction into (18), the subproblem (18)-(19) reduces to the one-dimensional problem (25), with the notation defined in (26).
Definition 1. Let be the solution of (25). Then, is called a steepest descent point of (18)(19).
In order to discuss the stationary points, we compute the derivative of the one-dimensional objective, with the accompanying notation; by direct calculation, we obtain the expressions that follow.
During the analysis, we find that (29) and (30) may have no positive real zeros in most cases. Thus, in order to simplify the discussion, we impose the following assumption.
In order to discuss the feasible stationary points, we first define the sets containing all the positive extremum points of the two derivative factors, respectively, where these extremum points must lie inside the feasible region. It is easy to obtain the expressions that follow, with the accompanying notation.
Now, we have the following conclusions.
Remark 2. Suppose that (5) and (35) hold. (i) If , then from (34) we know that . (ii) If , then . From (37) and (33), we havewhere is defined in (37). However, from (37) and (35) we have . We assume ; then, from (5) and (27) we have where the last inequality is obtained by (21) and this conflicts with (40). Therefore, .
Similarly, we can prove that if , then .
Then, combining (36) with Remark 2, we define a set whose elements are determined by (37)–(39).
Hence, we obtain the following theorems.
Theorem 3. Suppose that (5) and (35) hold. Then, the solution of (25) is given by (43), with the accompanying notation, where the set is defined by (36).
Therefore, the steepest descent point is an approximate solution of the fractional trust region subproblem (18)(19), where is defined by (43).
Similarly, the fractional trust region subproblem (18)-(19) has the following property; since the proof is similar to that of Theorem 3.1 in [14], it is omitted.
Theorem 4. Suppose that (5) and (35) hold. If the stated condition holds, then the optimal solution of (18)-(19) must lie on the boundary of the trust region, where the relevant quantity is defined in Subalgorithm 3.1.
In order to propose a generalized dogleg method for (18)-(19), we set the dogleg path as in (45) and calculate the parameter at which the path reaches the trust region boundary. With the notation in (46), if the discriminant is nonnegative, then there exist two real roots, given by (47) (see [14, 15]).
Based on the preceding theorems and analysis, we now give a generalized dogleg algorithm for solving (18)(19).
Algorithm 5.
Step 1. Compute by Subalgorithm 3.1.
Step 2. If , then , and stop.
Step 3. If , then compute as defined in (32), and go to Step 4. Otherwise, go to Step 5.
Step 4. If or , go to Step 5. Otherwise, compute , where is defined by (43). If , where is defined by (27), then , and stop. If , go to Step 5. Otherwise, go to Step 6.
Step 5. Set the indicated quantities and compute the expression given below. If the stated condition holds, then set the step and stop. Otherwise, calculate the remaining quantity and go to Step 6.
Step 6. Calculate the two roots as defined in (47); then form the step as specified, where the path is defined by (45), and stop.
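Algorithm 5 operates on the fractional model (18)-(19). For orientation, a minimal sketch of the classical dogleg step for the quadratic model, the scheme that Algorithm 5 generalizes, might look as follows; the function name and the positive-definiteness assumption on B are ours, and the boundary-intersection root corresponds to the positive root in (47).

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Classical dogleg step for the quadratic model
    m(p) = g'p + p'Bp/2 within the trust region ||p|| <= delta.

    Interpolates between the steepest-descent (Cauchy) point and
    the quasi-Newton point; B is assumed positive definite.
    """
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                        # full step fits inside
    p_cauchy = -(g @ g) / (g @ B @ g) * g      # minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta / np.linalg.norm(g) * g  # truncated steepest descent
    # Otherwise solve ||p_cauchy + tau (p_newton - p_cauchy)|| = delta
    # for the positive root tau of a scalar quadratic equation.
    d = p_newton - p_cauchy
    a, b = d @ d, 2.0 * (p_cauchy @ d)
    c = p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + tau * d

# The returned step never leaves the trust region.
p = dogleg_step(np.array([1.0, 1.0]), np.eye(2), 0.5)
assert np.isclose(np.linalg.norm(p), 0.5)      # step hits the boundary
```

The fractional-model version replaces the quadratic model with (18) and the Newton point with the one from Subalgorithm 3.1, but the path logic, interior Newton point, truncated steepest descent, and boundary interpolation is the same.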
Then, we give the predicted descent bound, which is a lower bound on the predicted reduction in each iteration, where the model is defined by (18).
Theorem 6. Suppose that (5) and (35) hold, where . If is obtained by Algorithm 5, thenwhere .
This theorem is similar to that in [9] and its proof is omitted.
4. New QuasiNewton Algorithm and Its Global Convergence
In this section, we propose a quasi-Newton method with a fractional model for linear equality constrained optimization and prove its convergence under some reasonable conditions. In order to solve problem (1)-(2), we consider the fractional model approximation (53) of the objective function about the current iterate, where the parameter vectors, the reduced gradient, and the Hessian approximation at the kth iteration are defined as in (18). The trial step is chosen to minimize this model; the minimizer is unique if and only if the reduced Hessian approximation is positive definite. If the current iterate is feasible, then an equivalent form of (1)-(2) is the reduced unconstrained problem. In the following, we give our algorithm.
In the following, we consider the choice of the parameter vectors. We choose these vectors such that (53) satisfies the conditions (55) and (56). Obviously, (55) holds. Then, from (56), we obtain the relations given below.
If we make the stated choice, then the unknown parameter vectors can be obtained from (58)-(59). In the following, we give the derivation. First, we define some notation, where the auxiliary vectors are chosen to satisfy the stated conditions; for convenience, we omit the iteration index.
On the one hand, from (58) we obtain (64), with the accompanying notation. If the sequence of function values is monotonically decreasing and the Hessian approximation is positive definite, then (64) simplifies accordingly.
On the other hand, by left-multiplying (59) and combining it with (63), we obtain (66), from which a further relation follows, with the notation in (68). Similarly, by left-multiplying (59) again, from (63) and (66) we obtain another expression; substituting (68) into it yields the formula with the notation in (71). Then, from (66), we obtain the final parameter.
Now we give the new quasiNewton algorithm based on the fractional model (53).
Algorithm 7.
Step 0. Choose , , , and the initial trust region radius . Compute as defined in (15). Set .
Step 1 (stopping criterion). Compute , , and . If , then , and stop. If , go to Step 3.
Step 2. Compute the required quantities and update them by the rule given below.

Step 3. If the stated condition holds, then set the parameter vectors as specified, compute a step length such that the Wolfe-Powell conditions are satisfied, update the iterate and the trust region radius, and go to Step 1.
Step 4. By the parameters , , and get and .
Step 5. Compute the quantities defined in (62). If the first condition holds, then set the parameters to their default values, and calculate and set the remaining quantities accordingly. Otherwise, compute the quantities defined in (72) and (69). If either of the stated conditions holds, then reset the parameters. Otherwise, calculate the parameter vectors, where the quantities in (61) are determined by (71), (68), and (73).
Step 6. If the stated condition holds, then adjust the parameter accordingly. Update the remaining parameters in the same way such that (5) is satisfied.
Step 7. By the parameters , , , , and , solve the subproblem (18)(19) by Algorithm 5 to get . Set .
Step 8. Compute the ratio of the actual reduction to the predicted reduction, as defined below.

Step 9. Update the trust region radius according to the stated rule.

Step 10. If the step is accepted, then set the new iterate, increase the iteration counter, and go to Step 1. Otherwise, keep the current iterate and go to Step 6.
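Steps 8-10 follow the standard trust region pattern: compute the ratio of actual to predicted reduction, then expand, keep, or shrink the radius. The paper's exact thresholds in Step 9 were lost in extraction, so the following sketch uses common textbook constants, stated explicitly as assumptions.

```python
def update_radius(rho, delta, step_norm, eta1=0.25, eta2=0.75,
                  gamma1=0.5, gamma2=2.0):
    """One typical trust-region radius update.

    rho = (f(x) - f(x + s)) / (m(0) - m(s)) compares the actual
    reduction with the reduction predicted by the model.  The
    thresholds eta1, eta2 and factors gamma1, gamma2 are common
    textbook choices, not the paper's specific constants.
    Returns (new_delta, step_accepted).
    """
    if rho < eta1:                      # poor model agreement:
        return gamma1 * delta, False    # shrink radius, reject step
    accepted = True
    if rho > eta2 and abs(step_norm - delta) < 1e-12:
        return gamma2 * delta, accepted  # very good and step on boundary
    return delta, accepted               # acceptable: keep the radius

# Very good agreement with a boundary step expands the region;
# poor agreement shrinks it and rejects the step.
delta, ok = update_radius(0.9, 1.0, 1.0)
assert ok and delta == 2.0
delta, ok = update_radius(0.1, 1.0, 0.3)
assert (not ok) and delta == 0.5
```

Rejected steps loop back to the subproblem (Step 6 in Algorithm 7) with the smaller radius, which is the mechanism the convergence proof of Theorem 8 exploits.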
Next we present the global convergence theorem which says the reduced gradients converge to zero.
Theorem 8. Assume that (5) and (35) hold. If the objective function is continuously differentiable and bounded below on some set containing all iterates generated by Algorithm 7, and the stated sequences are uniformly bounded, then the reduced gradients converge to zero.
Proof. Assume that the theorem is false, so that the norms of the reduced gradients are bounded away from zero for all iterations. From the assumptions, the stated bounds hold for all iterations. From (52) and the assumptions of the theorem, we obtain the inequality below, where the constants are positive. Then, from Step 10 of Algorithm 7, we obtain the corresponding reduction bound. Since the objective is bounded from below and nonincreasing over accepted iterations, the sequence of function values is convergent, and the reductions tend to zero.
On the other hand, when a step is rejected, the radius is reduced. From Step 6 of Algorithm 7, we obtain the stated bound, and thus the inequality that follows. By computation, we obtain the next estimate. Then, from (80), we derive the bound below, which indicates that the radii shrink to zero. By the updating rule in Step 9 of Algorithm 7, however, the radii are bounded away from zero, which is a contradiction. The theorem is proved.
5. Numerical Tests
In this section, Algorithm 7 (abbreviated as FTR) is tested on problems chosen from [16, 17], listed in Table 1. We choose the linearly constrained problems HS9, HS48, HS49, HS50, Chen 3.3.1, and Chen 3.3.2. Moreover, in order to test Algorithm 7 more broadly, we designed further problems whose objective functions are those of Pro. 7–18 (see [14, 18]) and whose linear equality constraints are those of Pro. 1–6. If the additional parameter in Algorithm 7 vanishes, the method reduces to the conic model algorithm, which we call CTR. We solve the following 18 test problems by FTR and CTR and compare their results.

All the computations are carried out in Matlab R2012b on a microcomputer in double precision arithmetic. All tests use the same stopping criterion on the norm of the reduced gradient. The columns in the tables have the following meanings: Pro. denotes the number of the test problem; the second column gives the dimension of the test problem; Iter is the number of iterations; nf and ng are the numbers of function and gradient evaluations, respectively; the next column gives the Euclidean norm of the final reduced gradient; CPU(s) denotes the total iteration time of the algorithm in seconds. The parameters in these algorithms are set as stated.
The numerical comparison for 18 small-scale test problems is listed in Table 2. We can see that FTR is better than CTR on 15 tests in the number of iterations, and the remaining 3 tests are similar. Because FTR needs some extra algebraic computation for its parameters, it takes more time than CTR on small problems.

The numerical results for some large-scale problems are presented in Table 3. From Table 3, we find that for large-scale problems the CPU time of FTR is approximately the same as that of CTR, but FTR needs fewer iterations. From this comparison, we see that FTR is slightly more effective and robust on these large-scale test problems.

6. Conclusions

The fractional model in Algorithm 7 is an extension of the conic model. By using more function and gradient information from previous iterations and by choosing the parameters flexibly, the fractional model can approximate the original problem more closely. The global convergence of the proposed quasi-Newton trust region algorithm is also proved. Numerical experiments show that the algorithm is effective and robust, including on large-scale test problems. The theoretical and numerical results lead us to believe that the method is worthy of further study; for example, the fractional model could be used to solve nonlinear equality constrained optimization problems.
Competing Interests
The authors have no competing interests regarding this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant no. 11071117 and 71301060), the Natural Science Foundation of Jiangsu Province (BK20141409), Funding of Jiangsu Innovation Program for Graduate Education (KYZZ_0089) (“the Fundamental Research Funds for the Central Universities”), and the Humanistic and Social Science Foundation of Ministry of Education of China (12YJA630122).
References
[1] W. C. Davidon, “Conic approximations and collinear scalings for optimizers,” SIAM Journal on Numerical Analysis, vol. 17, no. 2, pp. 268–281, 1980.
[2] R. Schnabel, “Conic methods for unconstrained minimization and tensor methods for nonlinear equations,” in Mathematical Programming: The State of the Art, A. Bachem, M. Grötschel, and B. Korte, Eds., pp. 417–438, Springer, Heidelberg, Germany, 1982.
[3] D. C. Sorensen, “Newton's method with a model trust region modification,” SIAM Journal on Numerical Analysis, vol. 19, no. 2, pp. 409–426, 1982.
[4] W. Sun and Y. X. Yuan, “A conic trust-region method for nonlinearly constrained optimization,” Annals of Operations Research, vol. 103, pp. 175–191, 2001.
[5] C. X. Xu and X. Y. Yang, “Convergence of conic quasi-Newton trust region methods for unconstrained minimization,” Mathematical Application, vol. 11, no. 2, pp. 71–76, 1998.
[6] Y. X. Yuan, “A review of trust region algorithms for optimization,” in Proceedings of the International Congress on Industrial and Applied Mathematics (ICIAM '00), vol. 99, pp. 271–282, 2000.
[7] D. M. Gay, “Computing optimal locally constrained steps,” SIAM Journal on Scientific and Statistical Computing, vol. 2, no. 2, pp. 186–197, 1981.
[8] J.-M. Peng and Y.-X. Yuan, “Optimality conditions for the minimization of a quadratic with two quadratic constraints,” SIAM Journal on Optimization, vol. 7, no. 3, pp. 579–594, 1997.
[9] H. L. Zhu, Q. Ni, and M. L. Zeng, “A quasi-Newton trust region method based on a new fractional model,” Numerical Algebra, Control and Optimization, vol. 5, no. 3, pp. 237–249, 2015.
[10] W. Y. Sun, J. Y. Yuan, and Y. X. Yuan, “Conic trust region method for linearly constrained optimization,” Journal of Computational Mathematics, vol. 21, no. 3, pp. 295–304, 2003.
[11] X. P. Lu and Q. Ni, “A trust region method with new conic model for linearly constrained optimization,” OR Transactions, vol. 12, pp. 32–42, 2008.
[12] Q. Ni, Optimization Method and Program Design, Science Press, Beijing, China, 2009.
[13] L. W. Zhang and Q. Ni, “Trust region algorithm of new conic model for nonlinearly equality constrained optimization,” Journal on Numerical Methods and Computer Applications, vol. 31, no. 4, pp. 279–289, 2010.
[14] M. F. Zhu, Y. Xue, and F. S. Zhang, “A quasi-Newton type trust region method based on the conic model,” Numerical Mathematics, vol. 17, no. 1, pp. 36–47, 1995 (Chinese).
[15] X. P. Lu and Q. Ni, “A quasi-Newton trust region method with a new conic model for the unconstrained optimization,” Applied Mathematics and Computation, vol. 204, no. 1, pp. 373–384, 2008.
[16] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Springer, Berlin, Germany, 1981.
[17] X. Y. Chen, Research on the geometric algorithms for programs with constraints of linear equalities [M.S. thesis], Fujian Normal University, 2012.
[18] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
Copyright
Copyright © 2016 Honglan Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.