Abstract

A quasi-Newton trust region method with a new fractional model for linearly constrained optimization problems is proposed. The linear equality constraints are eliminated by a null space technique, and the fractional trust region subproblem is solved by a simple dogleg method. The global convergence of the proposed algorithm is established. Numerical results for test problems show the efficiency of the trust region method with the new fractional model. These results provide a basis for further research on nonlinear optimization.

1. Introduction

In this paper, we consider the linear equality constrained optimization problem:
$$\min_{x\in\mathbb{R}^{n}}\ f(x) \tag{1}$$
$$\text{s.t.}\ A^{T}x=b, \tag{2}$$
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is continuously differentiable, $A\in\mathbb{R}^{n\times m}$, $b\in\mathbb{R}^{m}$, and $m\le n$.

Trust region methods have advantages both in the theoretical analysis of convergence properties and in practical performance. Besides, Davidon proposed a conic model which incorporates more information at each iteration (see [1]). Combining trust region techniques with a conic model is therefore appealing, and it has attracted more and more attention from researchers (see [2–8]).

For unconstrained optimization problems, we proposed a new fractional model (see [9]):
$$m_k(s)=f(x_k)+\frac{g_k^{T}s}{1-a_k^{T}s}+\frac{1}{2}\,\frac{s^{T}B_ks}{(1-b_k^{T}s)(1-c_k^{T}s)},\tag{3}$$
where $a_k,b_k,c_k\in\mathbb{R}^{n}$ are horizontal vectors, $g_k=\nabla f(x_k)$, and $B_k$ is symmetric and an approximate Hessian of $f$ at $x_k$. Then, the trust region subproblem of the unconstrained optimization problem is
$$\min_{s}\ m_k(s)\quad\text{s.t.}\ \|s\|\le\Delta_k,\ \ 1-a_k^{T}s\ge\varepsilon,\ \ 1-b_k^{T}s\ge\varepsilon,\ \ 1-c_k^{T}s\ge\varepsilon,\tag{4}$$
where $\varepsilon$ is a sufficiently small positive number, $\|\cdot\|$ refers to the Euclidean norm, and $\Delta_k$ is the trust region radius. If $a_k=b_k=c_k$, then $m_k$ reduces to the conic model. If $a_k=b_k=c_k=0$, then $m_k$ is the quadratic model. In order to ensure that the fractional model function is bounded over the trust region $\{s:\|s\|\le\Delta_k\}$, we assume that $a_k$, $b_k$, and $c_k$ satisfy
$$\Delta_k\|a_k\|\le 1-\varepsilon,\quad \Delta_k\|b_k\|\le 1-\varepsilon,\quad \Delta_k\|c_k\|\le 1-\varepsilon.\tag{5}$$
We denote
$$\phi_k(s)=\frac{g_k^{T}s}{1-a_k^{T}s}+\frac{1}{2}\,\frac{s^{T}B_ks}{(1-b_k^{T}s)(1-c_k^{T}s)};\tag{6}$$
then, (4) reduces to a simplified fractional trust region subproblem:
$$\min_{s}\ \phi_k(s)\quad\text{s.t.}\ \|s\|\le\Delta_k.\tag{7}$$
However, in [9] the subproblem (7) was solved only along the quasi-Newton direction. In this paper, we carry this research further: the subproblem (7) is solved by a generalized dogleg algorithm.
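To make the model concrete, here is a minimal NumPy sketch (our own illustration; the paper gives no code, and the helper name fractional_model is ours) that evaluates (3) and checks the quadratic special case $a_k=b_k=c_k=0$:

```python
import numpy as np

def fractional_model(s, f0, g, B, a, b, c):
    """Evaluate the fractional model (3) at step s.

    f0 is f(x_k); g is the gradient g_k; B is the symmetric Hessian
    approximation B_k; a, b, c are the horizontal vectors a_k, b_k, c_k.
    """
    return (f0 + (g @ s) / (1.0 - a @ s)
            + 0.5 * (s @ B @ s) / ((1.0 - b @ s) * (1.0 - c @ s)))

# a = b = c reduces (3) to the conic model; a = b = c = 0 recovers the
# quadratic model, which we verify here.
rng = np.random.default_rng(0)
n = 4
g, s = rng.standard_normal(n), 0.1 * rng.standard_normal(n)
B, zero = np.eye(n), np.zeros(n)
assert np.isclose(fractional_model(s, 0.0, g, B, zero, zero, zero),
                  g @ s + 0.5 * s @ B @ s)
```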

In this paper, we focus on solving problems with linear equality constraints by a trust region method with the new fractional model. If the constraints are linear inequalities or some constraint functions are nonlinear, a few additional difficulties arise. However, in these cases the linear equality constrained problem may appear as a subproblem: for example, inequality constraints can be removed by an active set technique or a barrier transformation, and nonlinear constraints can then be linearized.

In [10], Sun et al. established an algorithm for problem (1)-(2) and proved its convergence; however, they did not consider the model computation. In [11], Lu and Ni proposed a trust region method with a new conic model for solving (1)-(2) and carried out numerical experiments.

In this paper, we use a simple dogleg method to solve the fractional model subproblems and present a quasi-Newton trust region algorithm for solving linear equality constrained optimization problems. This continues our work on the fractional model (see [9]); here the linear equality constraints (2) are eliminated by using null space techniques.

This paper is organized as follows. In Section 2, we describe the fractional trust region subproblem. In Section 3, we give a generalized dogleg algorithm for solving the fractional subproblem. In Section 4, we propose a new quasi-Newton method based on the fractional model for solving linearly constrained optimization problems and prove the global convergence of the proposed method under reasonable assumptions. The numerical results are presented in Section 5.

2. The Fractional Trust Region Subproblem

In order to solve the problem (1)-(2), we assume that $A$ is of full column rank and the constraints are consistent. That is, the current point $x_k$ always satisfies $A^{T}x_k=b$. Obviously, the constraint $A^{T}x=b$ is equivalent to $A^{T}s=0$ if $x=x_k+s$. Therefore, combining with (1)–(4), we obtain that the trial step $s_k$ is computed by the following subproblem:
$$\min_{s}\ m_k(s)=f(x_k)+\frac{g_k^{T}s}{1-a_k^{T}s}+\frac{1}{2}\,\frac{s^{T}B_ks}{(1-b_k^{T}s)(1-c_k^{T}s)}\tag{8}$$
$$\text{s.t.}\ A^{T}s=0,\tag{9}$$
$$1-a_k^{T}s\ge\varepsilon,\quad 1-b_k^{T}s\ge\varepsilon,\quad 1-c_k^{T}s\ge\varepsilon,\tag{10}$$
$$\|s\|\le\Delta_k.\tag{11}$$
It can be found that our trust region subproblem (8)–(11) is the minimization of a fractional function subject to the trust region constraint and the linear constraints.

In order to solve (8)–(11), we first consider removing constraint (10) by the same assumption as in [9]. That is, we assume that the parameters $a_k$, $b_k$, and $c_k$ satisfy (5); then, subproblem (8)–(11) can be rewritten as the following reduced subproblem:
$$\min_{s}\ \phi_k(s)\tag{12}$$
$$\text{s.t.}\ A^{T}s=0,\tag{13}$$
$$\|s\|\le\Delta_k,\tag{14}$$
where $\phi_k(s)$ is defined as in (6).

The null space technique (see [4, 12, 13]) is an important tool for solving equality constrained programming problems. In the following, we use this technique to eliminate constraint (13). Since $A$ has full column rank, there exist an orthogonal matrix $Q$ and a nonsingular upper triangular matrix $R$ such that
$$A=Q\begin{pmatrix}R\\0\end{pmatrix}=(Y\ \ Z)\begin{pmatrix}R\\0\end{pmatrix}=YR,\tag{15}$$
where $Q=(Y\ \ Z)$, $Y\in\mathbb{R}^{n\times m}$, and $Z\in\mathbb{R}^{n\times(n-m)}$. Then, (13) can be rewritten as
$$R^{T}Y^{T}s=0,\quad\text{that is,}\quad Y^{T}s=0.\tag{16}$$
Therefore, the feasible point for (13) can be represented by
$$s=Zu\tag{17}$$
for any $u\in\mathbb{R}^{n-m}$, where $Zu$ lies in the null space of $A^{T}$. Then, the subproblem (12)–(14) becomes
$$\min_{u}\ \psi_k(u)=\frac{\hat g_k^{T}u}{1-\hat a_k^{T}u}+\frac{1}{2}\,\frac{u^{T}\hat B_ku}{(1-\hat b_k^{T}u)(1-\hat c_k^{T}u)}\tag{18}$$
$$\text{s.t.}\ \|u\|\le\Delta_k,\tag{19}$$
where $\hat a_k$, $\hat b_k$, and $\hat c_k$ are the reduced horizontal vectors, $\hat g_k$ is the reduced gradient, $\hat B_k$ is the reduced Hessian approximation, and
$$\hat g_k=Z^{T}g_k,\quad \hat B_k=Z^{T}B_kZ,\quad \hat a_k=Z^{T}a_k,\quad \hat b_k=Z^{T}b_k,\quad \hat c_k=Z^{T}c_k.\tag{20}$$
It can be seen that this subproblem has the same form as the subproblem (7) of the unconstrained optimization problems, and it can be considered as the subproblem of an unconstrained minimization over $\mathbb{R}^{n-m}$. Therefore, we can find a solution of (18)-(19) by the dogleg method. Besides, it is easy to find that $\|s\|\le\Delta_k$ is equivalent to $\|u\|\le\Delta_k$ due to $Z^{T}Z=I$.
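As an illustration of this reduction, the following sketch (ours, under the assumptions above) computes $Z$ from a full QR factorization of $A$ and verifies that any $s=Zu$ satisfies the constraint and preserves the norm:

```python
import numpy as np

def null_space_basis(A):
    """Full QR factorization A = Q (R; 0) with Q = (Y  Z), cf. (15);
    the columns of Z form an orthonormal basis of the null space of A^T."""
    n, m = A.shape
    Q, R_full = np.linalg.qr(A, mode='complete')
    return Q[:, :m], Q[:, m:], R_full[:m, :]      # Y, Z, R

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])  # full column rank
Y, Z, R = null_space_basis(A)
u = np.array([0.7])
s = Z @ u                                            # feasible step s = Z u
assert np.allclose(A.T @ s, 0.0)                     # constraint (13) holds
assert np.isclose(np.linalg.norm(s), np.linalg.norm(u))  # ||s|| = ||u||

# Reduced quantities as in (20):
g, B = np.array([1.0, -2.0, 0.5]), np.eye(3)
g_hat, B_hat = Z.T @ g, Z.T @ B @ Z
```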

3. The Dogleg Method of Fractional Trust Region Subproblem

Now we consider calculating the trial step of the new subproblem (18)-(19) by a simple dogleg method. First, we recall the choice of the Newton point in the following subalgorithm (see [9]).

Subalgorithm  3.1. Given ,

Step 1. Calculate $\hat g_k$, $\hat B_k$, $\hat a_k$, $\hat b_k$, and $\hat c_k$ as defined in (20) and (21).

Step 2. If , then , where

Step 3. If , then set . If , then . If , then set and .
In the following, we consider determining the steepest descent point of (18)-(19), where the steepest descent point is defined by Definition 1. Let $u=-t\hat g_k$ with $t\ge 0$; from (18) we have
$$h(t)=\psi_k(-t\hat g_k)=\frac{-t\|\hat g_k\|^{2}}{1+t\,\hat a_k^{T}\hat g_k}+\frac{1}{2}\,\frac{t^{2}\,\hat g_k^{T}\hat B_k\hat g_k}{(1+t\,\hat b_k^{T}\hat g_k)(1+t\,\hat c_k^{T}\hat g_k)};\tag{24}$$
then, (18)-(19) becomes
$$\min\ h(t)\quad\text{s.t.}\ 0\le t\le \Delta_k/\|\hat g_k\|,\tag{25}$$
where $h(t)$ is defined by (24).

Definition 1. Let $t_k^{*}$ be the solution of (25). Then, $u_k^{SD}=-t_k^{*}\hat g_k$ is called a steepest descent point of (18)-(19).
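When the closed-form characterization of Theorem 3 below is not at hand, the one-dimensional problem (25) can be solved numerically. The sketch below does this with a bounded scalar minimization; it relies on our reconstruction of (24)-(25), and the helper names are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h(t, g_hat, B_hat, a_hat, b_hat, c_hat):
    """psi_k(-t * g_hat): the reduced model along the steepest descent
    direction, cf. our reconstruction of (24)."""
    gg = g_hat @ g_hat
    gBg = g_hat @ B_hat @ g_hat
    return (-t * gg / (1.0 + t * (a_hat @ g_hat))
            + 0.5 * t**2 * gBg
            / ((1.0 + t * (b_hat @ g_hat)) * (1.0 + t * (c_hat @ g_hat))))

def steepest_descent_point(g_hat, B_hat, a_hat, b_hat, c_hat, delta):
    """Solve (25) numerically on [0, delta/||g_hat||], return -t* g_hat."""
    t_max = delta / np.linalg.norm(g_hat)
    res = minimize_scalar(h, bounds=(0.0, t_max), method='bounded',
                          args=(g_hat, B_hat, a_hat, b_hat, c_hat))
    return -res.x * g_hat
```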

In order to discuss the stationary points of $h$, by computation the derivative of $h$ is given in (26), where the notation is defined in (27) and (28). By direct calculation, we obtain (29) and (30).

During the analysis, we find that if and , then (29) and (30) may have no positive real zeros in most cases, respectively. Thus, in order to simplify the discussion, we assume

In order to discuss the feasible stationary points of and , we first define the sets and which contain all the positive extremum points of and , respectively, and these extremum points should lie inside the feasible region. It is easy to obtain that , where .

Now, we have the following conclusions.

Remark 2. Suppose that (5) and (35) hold. (i) If , then from (34) we know that . (ii) If , then . From (37) and (33), we have , where is defined in (37). However, from (37) and (35) we have . We assume ; then, from (5) and (27) we have , where the last inequality is obtained by (21), and this conflicts with (40). Therefore, .
Similarly, we can prove that if , then .
Then, combining (36) and Remark 2, we define a set , where , , , , and are determined by (37)–(39).

Hence, it is easy to get the following theorems.

Theorem 3. Suppose that (5) and (35) hold. Then, the solution $t_k^{*}$ of (25) is , where , and is defined by (36).

Therefore, the steepest descent point $u_k^{SD}$ is an approximate solution of the fractional trust region subproblem (18)-(19), where $u_k^{SD}$ is defined by (43).

Similarly, the fractional trust region subproblem (18)-(19) has the following property. The proof is similar to that of Theorem 3.1 in [14], so it is omitted.

Theorem 4. Suppose that (5) and (35) hold, where . If $\|u_k^{N}\|>\Delta_k$, then the optimal solution of (18)-(19) must lie on the boundary of the trust region, where $u_k^{N}$ is defined in Subalgorithm  3.1.

In order to propose a generalized dogleg method for (18)-(19), we set
$$u(\tau)=u_k^{SD}+\tau\,(u_k^{N}-u_k^{SD})\tag{45}$$
and calculate $\tau$ such that $\|u(\tau)\|=\Delta_k$. Denote
$$\alpha=\|u_k^{N}-u_k^{SD}\|^{2},\quad \beta=(u_k^{SD})^{T}(u_k^{N}-u_k^{SD}),\quad \gamma=\|u_k^{SD}\|^{2}-\Delta_k^{2}.\tag{46}$$
If $\beta^{2}-\alpha\gamma\ge 0$, then there exist two real roots $\tau_{1}$ and $\tau_{2}$:
$$\tau_{1}=\frac{-\beta-\sqrt{\beta^{2}-\alpha\gamma}}{\alpha},\qquad \tau_{2}=\frac{-\beta+\sqrt{\beta^{2}-\alpha\gamma}}{\alpha},\tag{47}$$
where $\tau_{1}\le\tau_{2}$ (see [14, 15]).
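The roots in (47) are just the solutions of a scalar quadratic equation. The following helper (ours, not the paper's code) computes them for general $p=u_k^{SD}$ and $d=u_k^{N}-u_k^{SD}$:

```python
import numpy as np

def boundary_roots(p, d, delta):
    """Real roots tau of ||p + tau d|| = delta (our p = u_sd, d = u_n - u_sd),
    i.e. where the dogleg segment crosses the trust region boundary."""
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                 # the line misses the boundary
    sq = np.sqrt(disc)
    return (-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)
```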

Based on the preceding theorems and analysis, we now give a generalized dogleg algorithm for solving (18)-(19).

Algorithm 5.
Step  1. Compute $u_k^{N}$ by Subalgorithm  3.1.
Step  2. If $\|u_k^{N}\|\le\Delta_k$, then $u_k=u_k^{N}$, and stop.
Step  3. If , then compute as defined in (32), and go to Step  4. Otherwise, go to Step  5.
Step  4. If or , go to Step  5. Otherwise, compute $u_k^{SD}$, where $u_k^{SD}$ is defined by (43). If , where is defined by (27), then $u_k=u_k^{SD}$, and stop. If , go to Step  5. Otherwise, go to Step  6.
Step  5. Set and compute , where . If , then , and stop. Otherwise, calculate , and go to Step  6.
Step  6. Calculate $\tau_{1}$ and $\tau_{2}$ as defined in (47); then set $u_k=u(\tau_{2})$, where $u(\tau)$ is defined by (45), and stop.
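For orientation, here is a heavily simplified sketch of the dogleg choice behind Algorithm 5. It keeps only the generic Newton point / steepest descent point / boundary logic and omits the model-specific tests of Steps 3-5, so it is an illustration under our assumptions rather than the full algorithm:

```python
import numpy as np

def dogleg_step(u_newton, u_sd, delta):
    """Simplified skeleton of the dogleg choice in Algorithm 5; the
    model-specific safeguards of Steps 3-5 are omitted here."""
    if np.linalg.norm(u_newton) <= delta:       # Newton point inside: take it
        return u_newton
    if np.linalg.norm(u_sd) >= delta:           # even u_sd leaves the region
        return (delta / np.linalg.norm(u_sd)) * u_sd
    d = u_newton - u_sd                         # walk toward the Newton point
    a, b, c = d @ d, 2.0 * (u_sd @ d), u_sd @ u_sd - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # root tau_2 of (47)
    return u_sd + tau * d                       # boundary point u(tau_2)
```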
Then, we give the predicted descent bound, which is a lower bound of the predicted reduction at each iteration: (51), where $\psi_k$ is defined by (18).

Theorem 6. Suppose that (5) and (35) hold, where . If $u_k$ is obtained by Algorithm 5, then (52) holds, where .

This theorem is similar to that in [9] and its proof is omitted.

4. New Quasi-Newton Algorithm and Its Global Convergence

In this section, we propose a quasi-Newton method with a fractional model for linear equality constrained optimization and prove its convergence under some reasonable conditions. In order to solve problem (1)-(2), we consider the fractional model approximation of $f$ about $x_k$; that is,
$$f(x_k+s)\approx m_k(s)=f(x_k)+\frac{g_k^{T}s}{1-a_k^{T}s}+\frac{1}{2}\,\frac{s^{T}B_ks}{(1-b_k^{T}s)(1-c_k^{T}s)},\tag{53}$$
where $s=x-x_k$, $g_k=\nabla f(x_k)$, and the reduced model $\psi_k$ is defined as in (18). Thus, $g_k$ and $B_k$ are the corresponding gradient and Hessian approximations of the function $f$ at the $k$th iteration. We choose $s_k$ to minimize $m_k(s)$. There is a unique minimizer if and only if $B_k$ is positive definite. In the following, we give our algorithm. If the current iterate is the feasible point $x_k$, then an equivalent form of (1)-(2) is to solve the reduced unconstrained problem
$$\min_{u\in\mathbb{R}^{n-m}}\ f(x_k+Zu).\tag{54}$$
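In code, passing to (54) is a one-line change of variables; the wrapper below (our illustration) turns $f$ into the reduced objective and keeps every trial point feasible:

```python
import numpy as np

def reduced_objective(f, x_k, Z):
    """Wrap f into the reduced objective u -> f(x_k + Z u) of (54).
    Every u keeps the iterate feasible: A^T (x_k + Z u) = A^T x_k = b."""
    return lambda u: f(x_k + Z @ u)
```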

In the following, we consider the choice of the parameter vectors $a_k$, $b_k$, and $c_k$. We choose these vectors such that (53) satisfies the conditions (55) and (56), where (57). Obviously, (55) holds. Then, from (56), we have (58), where and .

If we choose (60), then the unknown parameters $a_k$, $b_k$, and $c_k$ can be obtained from (58)-(59). In the following, we give the derivation process of $a_k$, $b_k$, and $c_k$. First, we define some notations in (61)-(62), where the vectors and are chosen to satisfy (63). For convenience, we omit the index $k$ of and .

On the one hand, from (58) we have (64), where . If the sequence is monotonically decreasing and is positive definite, then we know that , and (64) becomes (65).

On the other hand, by left-multiplying (59) by and combining with (63), we have (66). Then, from (66), we have (67), where (68). Similarly, by left-multiplying (59) by , from (63) and (66) we have (69), where . Substituting (68) into the above equation, we have (70), where (71). And then from (66), we have (72) and (73).

Now we give the new quasi-Newton algorithm based on the fractional model (53).

Algorithm 7.
Step  0. Choose , , , and the initial trust region radius $\Delta_0$. Compute $Z$ as defined in (15). Set $k=0$.
Step  1 (stopping criterion). Compute , , and . If , then , and stop. If , go to Step  3.
Step  2. Compute . Update by , where .
Step  3. If , then set , , and , compute such that the Wolfe-Powell conditions are satisfied, and set and , , and go to Step  1.
Step  4. By the parameters , , and get and .
Step  5. Compute and as defined in (62). If , then set . Calculate and set . Otherwise, compute and as defined in (72) and (69). If or , then set . Otherwise, calculate , , and , where , , and in (61) are determined by (71), (68), and (73).
Step  6. If , then . Update and in the same way such that (5) is satisfied.
Step  7. With the parameters , , , , and , solve the subproblem (18)-(19) by Algorithm 5 to get $u_k$. Set $s_k=Zu_k$.
Step  8. Compute
$$\rho_k=\frac{f(x_k)-f(x_k+s_k)}{\psi_k(0)-\psi_k(u_k)},$$
where $\psi_k$ is defined by (18).
Step  9. Update the trust region radius: .
Step  10. If , then . Set , and go to Step  1. Otherwise, , , and go to Step  6.
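Steps 8-10 follow the usual trust region pattern: compare the actual reduction with the reduction predicted by the reduced model, then accept or reject the step and resize the radius. The sketch below illustrates this mechanism; the constants eta, shrink, and expand are placeholders for the parameters chosen in Step 0, whose exact values are not recoverable from our copy:

```python
import numpy as np

def trust_region_update(f, x, Z, u, psi_u, delta,
                        eta=0.1, shrink=0.5, expand=2.0):
    """One acceptance test in the spirit of Steps 8-10 of Algorithm 7.

    psi_u = psi_k(u) is the reduced model value at the trial step, so the
    predicted reduction is -psi_u > 0 (cf. Theorem 6).  The constants
    eta, shrink, expand are placeholders, not the paper's values.
    """
    s = Z @ u
    rho = (f(x) - f(x + s)) / (-psi_u)   # actual / predicted reduction
    if rho >= eta:                        # successful: accept and enlarge
        return x + s, expand * delta
    return x, shrink * delta              # unsuccessful: reject and shrink
```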

Next we present the global convergence theorem which says the reduced gradients converge to zero.

Theorem 8. Assume that (5) and (35) hold, where . If $f$ is continuously differentiable and bounded below on some set containing all iterates generated by Algorithm 7, and the sequences and are uniformly bounded, then
$$\lim_{k\to\infty}\|\hat g_k\|=0.$$

Proof. Assume that the theorem is false; then there is $\bar\varepsilon>0$ such that $\|\hat g_k\|\ge\bar\varepsilon$ for all $k$. From the assumption, we can assume that hold for all $k$. From (52) and the assumptions in the theorem, we have , where and are some positive constants. Then, from Step  10 of Algorithm 7, we have . Since $f$ is bounded from below and for all $k$, we have that $\{f(x_k)\}$ is convergent, and as $k\to\infty$.
On the other hand, when , . From Step  6 of Algorithm 7, we have , where . Thus, we have . By computing, we obtain . Then, from (80), we have , which indicates that . By the updating rule in Step  9 of Algorithm 7, we have , which is a contradiction to . The theorem is proved.

5. Numerical Tests

In this section, Algorithm 7 (abbreviated as FTR) is tested on some test problems chosen from [16, 17]. These test problems are listed in Table 1. We choose the linearly constrained problems HS9, HS48, HS49, HS50, Chen 3.3.1, and Chen 3.3.2. Moreover, in order to test Algorithm 7 more generally, we designed some problems whose objective functions are taken from Pro. 7–18 (see [14, 18]) and whose linear equality constraints are those of Pro. 1–6. If $a_k=b_k=c_k$ in Algorithm 7, we obtain the conic model algorithm, which we call CTR. We solve the following 18 test problems by FTR and CTR and compare their results.

All the computations are carried out in Matlab R2012b on a microcomputer in double precision arithmetic. All tests use the same stopping criterion . The columns in the tables have the following meanings: Pro. denotes the number of the test problem; $n$ is the dimension of the test problem; Iter is the number of iterations; nf and ng are the numbers of function and gradient evaluations, respectively; is the Euclidean norm of the final reduced gradient; CPU(s) denotes the total iteration time of the algorithm in seconds. The parameters in these algorithms are

The numerical comparison for the 18 small-scale test problems is listed in Table 2. We can see that FTR outperforms CTR on 15 tests in the number of iterations, and the remaining 3 tests are similar. Because FTR needs some extra algebraic computation for its parameters, it takes more time than CTR on small problems.

The numerical results for some large-scale problems are presented in Table 3. From Table 3, we find that for large-scale problems the CPU time of FTR is approximately the same as that of CTR, but FTR requires fewer iterations. From the above comparison, we see that FTR is slightly more effective and robust on these large-scale test problems.

The fractional model in Algorithm 7 is an extension of the conic model. By using more information from the function and gradient values of previous iterations and by choosing the parameters flexibly, the fractional model can approximate the original problem more closely. The global convergence of the proposed quasi-Newton trust region algorithm is also proved. Numerical experiments show that the algorithm is effective and robust, including on large-scale test problems. The theoretical and numerical results lead us to believe that the method is worthy of further study; for example, one could consider using the fractional model to solve nonlinear equality constrained optimization problems.

Competing Interests

The authors have no competing interests regarding this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 11071117 and 71301060), the Natural Science Foundation of Jiangsu Province (BK20141409), Funding of Jiangsu Innovation Program for Graduate Education (KYZZ_0089) ("the Fundamental Research Funds for the Central Universities"), and the Humanistic and Social Science Foundation of the Ministry of Education of China (12YJA630122).