Abstract

The preconditioned generalized shift-splitting (PGSS) iteration method is unconditionally convergent for solving saddle-point problems with nonsymmetric coefficient matrices. By using the PGSS iteration as the inner solver for the Newton method, we establish a class of Newton-PGSS methods for solving large sparse nonlinear systems whose nonsymmetric Jacobian matrices have saddle-point structure. For the newly presented method, we give the local convergence analysis and semilocal convergence analysis under the Hölder condition, which is weaker than the Lipschitz condition. In order to further raise the efficiency of the algorithm, we improve the method to obtain the modified Newton-PGSS method and prove its local convergence. Furthermore, we compare our new methods with the Newton-RHSS method, a competitive method for solving large sparse nonlinear systems with nonsymmetric saddle-point Jacobian matrices, and the numerical results show the efficiency of our new method.

1. Introduction

In this paper, we explore effective and convenient methods for solving the nonlinear nonsymmetric saddle-point problem
$$F(x) = 0, \qquad (1)$$
where $F: \mathbb{D} \subseteq \mathbb{R}^{n+m} \to \mathbb{R}^{n+m}$ is a continuously differentiable nonlinear function defined on an open convex subset $\mathbb{D}$ of the $(n+m)$-dimensional real linear space $\mathbb{R}^{n+m}$. Moreover, the Jacobian matrix $F'(x)$ is a large, sparse, nonsymmetric saddle-point matrix of the form
$$F'(x) = \begin{pmatrix} B(x) & E(x) \\ -E(x)^{T} & 0 \end{pmatrix}, \qquad (2)$$
where $B(x) \in \mathbb{R}^{n \times n}$ is a real positive definite matrix and $E(x) \in \mathbb{R}^{n \times m}$ is a full-column-rank matrix. This kind of large sparse nonsymmetric saddle-point nonlinear system (1) arises in many scientific and engineering computing areas, such as elastomechanics equations and the Stokes equations. Some of these problems cannot be solved analytically, so we can only seek efficient numerical approximations.

In the past, researchers have developed many methods to solve nonlinear systems [1–10]. Among these methods, the most typical and popular one for solving the nonlinear system (1) is the Newton method. The principle of solving nonlinear equations by the Newton method is very simple. At each step, we expand the nonlinear function at the current iterate $x_k$ by Taylor expansion and take its linear part to construct an approximate equation of the nonlinear equation. Then, we take the zero point of the approximate equation as the next iterate, which is represented as follows:
$$x_{k+1} = x_k - F'(x_k)^{-1} F(x_k), \qquad k = 0, 1, 2, \ldots$$

The sequence $\{x_k\}$ generated by this iteration converges to the numerical solution as $k \to \infty$ under certain conditions. We know that an excellent algorithm is not only accurate but also efficient. When the dimension $n$ is large, the cost of each step of the traditional Newton algorithm is very expensive. The reason for this is that, at each iterative step, the linear system
$$F'(x_k)\, s_k = -F(x_k) \qquad (4)$$
must be solved exactly and accurately. We hope to give up a little bit of "precision" in exchange for greater "efficiency." This idea led to the development of inexact Newton methods, which were first proposed by Dembo et al. [11]. In recent decades, the inexact Newton method has been extensively studied and applied in many fields. The linear equation (4) can be solved efficiently by methods that sacrifice some precision, while the computational cost and time are greatly reduced. In addition, we know that the traditional Newton method converges with second order, and increasing the order of convergence can make the algorithm converge to the exact solution faster. Therefore, we consider improving the Newton method to raise the order of convergence. Next, we introduce the traditional inexact Newton method and its modified version. In the inexact Newton methods, the termination condition for the Newton equation (4) is
$$\|F(x_k) + F'(x_k)\, s_k\| \le \eta_k \|F(x_k)\|,$$
where $s_k$ is obtained by applying some linear iterative method. The inexact Newton methods usually have the unified form shown in Algorithm 1.

(1) Let the initial guess $x_0$ be given.
(2) For $k = 0, 1, 2, \ldots$ until "convergence" do:
  Find some $\eta_k \in [0, 1)$ and $s_k$ that satisfy
  $$\|F(x_k) + F'(x_k)\, s_k\| \le \eta_k \|F(x_k)\|.$$
(3) Set $x_{k+1} = x_k + s_k$.
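To make the framework concrete, the following is a minimal Python sketch of the inexact Newton skeleton in Algorithm 1; it is an illustration, not the paper's implementation. GMRES stands in for a generic inner solver, and the names F, J, and all parameter values are assumptions (SciPy 1.12 or later is assumed for the rtol keyword of gmres).

```python
# A minimal sketch of the inexact Newton framework (Algorithm 1).
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_outer=50):
    """F: callable returning F(x); J: callable returning the Jacobian F'(x)."""
    x = x0.copy()
    norm_F0 = np.linalg.norm(F(x0))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * norm_F0:
            break
        # Inner iteration: solve F'(x) s = -F(x) only approximately, so that
        # ||F(x) + F'(x) s|| <= eta * ||F(x)||  (the forcing condition).
        s, _ = gmres(J(x), -Fx, rtol=eta)
        x = x + s
    return x
```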

Here, $F'(x_k)$ is the Jacobian matrix and $\eta_k$ is commonly called the forcing term, which is used to control the level of accuracy. The algorithm mentioned above has R-order of convergence at least two. Researchers have presented the modified Newton iteration to improve the convergence order, as shown in Algorithm 2.

(1) Let the initial guess $x_0$ be given.
(2) For $k = 0, 1, 2, \ldots$ until "convergence" do:
  Set $x_{k,0} = x_k$. For $i = 0, 1, \ldots, m-1$, find some $\eta_{k,i} \in [0, 1)$ and $s_{k,i}$ that satisfy
  $$\|F(x_{k,i}) + F'(x_k)\, s_{k,i}\| \le \eta_{k,i} \|F(x_{k,i})\|,$$
  and set $x_{k,i+1} = x_{k,i} + s_{k,i}$.
(3) Set $x_{k+1} = x_{k,m}$.

From what is mentioned above, inexact modified Newton methods only need to compute the Jacobian matrix once every m steps and therefore require less computation than the plain inexact Newton methods. This kind of method has R-order of convergence at least m + 1. In this paper, we take the modified Newton method as the outer iteration and the PGSS iteration method as the inner iteration, and we establish the modified Newton-PGSS method with m = 2. A sketch of the Jacobian-reuse idea is given below.
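As an illustration of the Jacobian reuse that gives the m-step modified scheme its advantage, here is a hedged Python sketch; the names and parameters are again illustrative, and GMRES is only a placeholder for the splitting inner iteration used later in this paper.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def modified_inexact_newton(F, J, x0, m=2, eta=0.1, tol=1e-10, max_outer=50):
    # The Jacobian is formed once per outer step and reused for m
    # Newton-like updates; m = 2 corresponds to the modified Newton-PGSS
    # setting adopted in this paper.
    x = x0.copy()
    norm_F0 = np.linalg.norm(F(x0))
    for _ in range(max_outer):
        Jx = J(x)                      # evaluated once every m updates
        for _ in range(m):
            Fx = F(x)
            if np.linalg.norm(Fx) <= tol * norm_F0:
                return x
            s, _ = gmres(Jx, -Fx, rtol=eta)
            x = x + s
    return x
```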

The inexact Newton methods consist of two parts: the inner iteration and the outer iteration. The outer iteration is the Newton method, which is used to solve the nonlinear problem, and each outer step has to solve a linear equation in order to generate the sequence $\{x_k\}$. Linear iterative methods, such as the classical splitting methods or the modern Krylov subspace methods [12, 13], are applied inside the Newton methods to solve the Newton equations approximately. A significant advantage of such inner-outer iterations is that one can reduce the storage and computation associated with inverting the Jacobian matrix at each step, so as to improve efficiency. Therefore, this kind of inner-outer iterative method has been widely studied. Newton–Krylov subspace methods [3], which utilize the Krylov subspace iteration methods as the inner iterations, have been effectively and successfully used in many fields; see [14–16].

From the discussion of the inexact Newton methods [1–4, 7, 8], we know that the efficiency of the inner iteration affects the efficiency of the whole algorithm. Thus, we want to find an excellent inner iteration so as to obtain efficient inner-outer iterative methods. In other words, an efficient linear iteration should be employed to solve the Newton equation (4) with a real nonsymmetric saddle-point Jacobian matrix. There are many ways to solve saddle-point linear problems [3, 17–25]. Recently, Cao et al. [26–29] proposed a method, based on the shift-splitting iteration method presented by Bai et al. [30], to solve the saddle-point problem. This method is more efficient than other algorithms such as the Uzawa-type iteration methods, the successive over-relaxation (SOR-like) iteration methods [31, 32], and the Hermitian and skew-Hermitian splitting (HSS) iteration methods [33–35]. In addition, the PGSS iteration method converges unconditionally, and the preconditioner generated by it is also very effective [26]. When applying the PGSS method to a linear system, only a single linear subsystem has to be solved at each iterative step. Furthermore, in order to increase the efficiency of the algorithm, we optimize the outer iteration and propose the modified Newton-PGSS method to solve the saddle-point problems. Because no Newton-type method has previously been applied to this kind of saddle-point system, we compare the Newton-PGSS method with traditional methods, for example, the Newton-RHSS method [31, 36, 37].

The organization of the paper is as follows. In Section 2, we review the PGSS iteration method. In Section 3, we introduce the Newton-PGSS method. In Sections 4 and 5, we offer the convergence properties of this method: we establish the local convergence theorem and the semilocal convergence properties, respectively, under some proper hypotheses for the Newton-PGSS method. We present the modified Newton-PGSS method in Section 6. Numerical examples are presented to confirm the efficiency of our new method in Section 7. Finally, in Section 8, some brief conclusions are given.

2. Preliminaries

First of all, we review the PGSS method [26] for solving the large sparse nonsymmetric saddle-point linear system
$$A u = b, \qquad A = \begin{pmatrix} B & E \\ -E^{T} & 0 \end{pmatrix} \in \mathbb{R}^{(n+m) \times (n+m)}, \qquad (6)$$
where $A$ is a real nonsymmetric saddle-point matrix, $B \in \mathbb{R}^{n \times n}$ is positive definite, and $E \in \mathbb{R}^{n \times m}$ has full column rank.

The PGSS Iteration Method [27]. Given an initial guess $u^{(0)}$, compute $u^{(k+1)}$ for $k = 0, 1, 2, \ldots$ using the following iteration scheme until $\{u^{(k)}\}$ satisfies the stopping criterion:
$$\frac{1}{2}\left(\Omega + A\right) u^{(k+1)} = \frac{1}{2}\left(\Omega - A\right) u^{(k)} + b, \qquad (7)$$
where $\Omega$ is a matrix of the form
$$\Omega = \begin{pmatrix} \alpha I_n & 0 \\ 0 & \beta I_m \end{pmatrix},$$
$I_n$ is the $n \times n$ identity matrix, $I_m$ is the $m \times m$ identity matrix, and $\alpha$ and $\beta$ are real numbers greater than 0. From (7), we obtain the PGSS iterative scheme
$$u^{(k+1)} = \Gamma u^{(k)} + c, \qquad (8)$$
where
$$\Gamma = (\Omega + A)^{-1}(\Omega - A), \qquad c = 2(\Omega + A)^{-1} b.$$

Here, $\Gamma$ is the iteration matrix of the PGSS iteration method.
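To illustrate how one sweep of such a shift-splitting iteration can be implemented, the following Python sketch realizes the unpreconditioned generalized shift-splitting step described above; the preconditioned variant and all parameter choices are beyond this sketch, and the function and variable names are assumptions, not the paper's interface.

```python
# A minimal sketch of the generalized shift-splitting iteration for the
# saddle-point system A u = b, with the splitting
# A = (1/2)(Omega + A) - (1/2)(Omega - A), Omega = diag(alpha*I_n, beta*I_m).
import numpy as np
from scipy.sparse import bmat, identity, csc_matrix
from scipy.sparse.linalg import splu

def gss_solver(B, E, b, alpha, beta, u0, tol=1e-8, max_it=500):
    n, m = E.shape
    A = bmat([[B, E], [-E.T, None]], format="csc")
    Omega = bmat([[alpha * identity(n), None],
                  [None, beta * identity(m)]], format="csc")
    M = 0.5 * (Omega + A)            # iteration: M u_{k+1} = N u_k + b
    N = 0.5 * (Omega - A)
    lu = splu(csc_matrix(M))         # factor M once, reuse on every sweep
    u = u0.copy()
    for _ in range(max_it):
        u = lu.solve(N @ u + b)
        if np.linalg.norm(b - A @ u) <= tol * np.linalg.norm(b):
            break
    return u
```

Factoring M once and reusing it across sweeps is what makes splittings of this type attractive when the same coefficient matrix is revisited by many inner iterations of a Newton step.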

Theorem 1 (see [27]). Let $A \in \mathbb{R}^{(n+m) \times (n+m)}$ in $Au = b$ be a nonsymmetric saddle-point matrix, let $\alpha$ be a nonnegative constant, and let $\beta$ be a positive constant. Then, the iteration matrix of PGSS is
$$\Gamma = (\Omega + A)^{-1}(\Omega - A),$$
which satisfies
$$\rho(\Gamma) < 1,$$
where $\rho(\cdot)$ denotes the spectral radius; that is, the PGSS iteration method converges unconditionally to the exact solution of the linear system (6).

3. The Newton-PGSS Method

In this section, we describe an inner-outer iteration method for solving systems of nonlinear equations with nonsymmetric saddle-point Jacobian matrices.

We use the Newton method as the outer iteration and apply the PGSS method as the inner solver; in other words, the PGSS iteration is employed to solve the following linear system at each outer step:
$$F'(x_k)\, s_k = -F(x_k). \qquad (12)$$

Then, we get the Newton-PGSS method for solving the nonlinear system (1).

The Newton-PGSS Method. Let $F: \mathbb{D} \subseteq \mathbb{R}^{n+m} \to \mathbb{R}^{n+m}$ be a continuously differentiable function with the nonsymmetric saddle-point Jacobian matrix $F'(x)$ at any $x \in \mathbb{D}$, and let
$$F'(x) = \begin{pmatrix} B(x) & E(x) \\ -E(x)^{T} & 0 \end{pmatrix},$$
where $B(x) \in \mathbb{R}^{n \times n}$ is a real positive definite matrix and $E(x) \in \mathbb{R}^{n \times m}$ is a full-column-rank matrix. Given an initial guess $x_0$, two positive constants $\alpha$ and $\beta$, and a sequence $\{l_k\}$ of positive integers, compute $x_{k+1}$ for $k = 0, 1, 2, \ldots$ until $\{x_k\}$ converges. The algorithm can be summarized as Algorithm 3.

(1) Given an initial guess $x_0$, a nonnegative constant $\alpha$, a positive constant $\beta$, and a positive integer sequence $\{l_k\}_{k=0}^{\infty}$.
(2) For $k = 0, 1, 2, \ldots$ until $\|F(x_k)\| \le \mathrm{tol} \cdot \|F(x_0)\|$ do:
 (2.1) Set $s_{k,0} := 0$.
 (2.2) For $l = 0, 1, \ldots, l_k - 1$, apply algorithm PGSS to the linear system (12):
   $$\frac{1}{2}\left(\Omega + F'(x_k)\right) s_{k,l+1} = \frac{1}{2}\left(\Omega - F'(x_k)\right) s_{k,l} - F(x_k),$$
  and obtain $s_{k,l_k}$ such that
   $$\|F(x_k) + F'(x_k)\, s_{k,l_k}\| \le \eta_k \|F(x_k)\|,$$
  where $\eta_k \in [0, 1)$ is the prescribed inner tolerance.
 (2.3) Set $x_{k+1} = x_k + s_{k,l_k}$.
From the recursion in step (2.2), we obtain the following uniform expression for $s_{k,l_k}$:
$$s_{k,l_k} = -\sum_{j=0}^{l_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(x_k),$$
where
$$M(x) = \frac{1}{2}\left(\Omega + F'(x)\right), \qquad N(x) = \frac{1}{2}\left(\Omega - F'(x)\right),$$
and
$$\Gamma(x) = M(x)^{-1} N(x).$$
Then, the Newton-PGSS method can be rewritten as
$$x_{k+1} = x_k - \sum_{j=0}^{l_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(x_k).$$
From the definitions of $M(x)$ and $N(x)$, we can obtain
$$\sum_{j=0}^{l_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} = \left(I - \Gamma(x_k)^{l_k}\right) F'(x_k)^{-1}.$$
Then, the Newton-PGSS method can be equivalently expressed as
$$x_{k+1} = x_k - \left(I - \Gamma(x_k)^{l_k}\right) F'(x_k)^{-1} F(x_k).$$
The Jacobian matrix can be rewritten as
$$F'(x) = M(x) - N(x),$$
with $M(x)$ and $N(x)$ given above.
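Putting the pieces together, a hedged sketch of the resulting inner-outer loop might look as follows; it reuses gss_solver from the sketch in Section 2, and jac_blocks and all parameters are illustrative assumptions, not the paper's interface.

```python
# A minimal sketch of the Newton-PGSS idea: the outer Newton step is
# computed by running the shift-splitting inner iteration a fixed number
# of times on F'(x_k) s = -F(x_k).
import numpy as np

def newton_gss(F, jac_blocks, x0, alpha, beta, inner_it=20,
               tol=1e-10, max_outer=50):
    # jac_blocks(x) is assumed to return (B(x), E(x)), the blocks of the
    # saddle-point Jacobian F'(x) = [[B(x), E(x)], [-E(x)^T, 0]].
    x = x0.copy()
    norm_F0 = np.linalg.norm(F(x0))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * norm_F0:
            return x
        B, E = jac_blocks(x)
        s = gss_solver(B, E, -Fx, alpha, beta,
                       u0=np.zeros_like(x), max_it=inner_it)
        x = x + s
    return x
```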

4. Local Convergence of the Newton-PGSS Method

In this section, we prove the local convergence of the Newton-PGSS method under the Hölder condition.

Let $F: \mathbb{D} \subseteq \mathbb{R}^{n+m} \to \mathbb{R}^{n+m}$ be G-differentiable on an open neighborhood $\mathbb{N}_0 \subset \mathbb{D}$ of a point $x_*$ at which $F(x_*) = 0$. Suppose $F'(x) = M(x) - N(x)$ is the generalized shift-splitting of the Jacobian matrix $F'(x)$, where $M(x)$ and $N(x)$ are defined as in Section 3. Suppose also that $F'(x)$ is continuous and positive definite at $x_*$.

Denote by $\mathbb{B}(x_*, r)$ an open ball centered at $x_*$ with radius $r > 0$.

Assumption 1. For all $x \in \mathbb{B}(x_*, r) \subset \mathbb{N}_0$, assume the following conditions hold.
(A1) The bounded condition: there exist positive constants $\beta$ and $\gamma$ such that
$$\|F'(x_*)^{-1}\| \le \beta, \qquad \|F'(x_*)\| \le \gamma.$$
(A2) The Hölder condition: there exist nonnegative constants $L_1$ and $L_2$ such that
$$\|M(x) - M(x_*)\| \le L_1 \|x - x_*\|^{p}, \qquad \|N(x) - N(x_*)\| \le L_2 \|x - x_*\|^{p},$$
with the exponent $p \in (0, 1]$. We write $L = L_1 + L_2$, so that $\|F'(x) - F'(x_*)\| \le L \|x - x_*\|^{p}$.

Remark 1. Note that the Lipschitz condition is the special case of the Hölder condition with exponent $p = 1$; in this case, we simply call the Hölder condition the Lipschitz condition. Hence, the Lipschitz condition is stronger than the Hölder condition.
Now, under Assumption 1, we establish the local convergence theorem for the Newton-PGSS method. The theorem describes the properties of the function F around the numerical solution and gives information about the radius of the convergence neighborhood; these properties and this information determine the local convergence behavior of the given method.

Lemma 1. Under Assumption 1, for all $x \in \mathbb{B}(x_*, r)$, if $\beta L r^{p} < 1$, then $F'(x)^{-1}$ exists, and the following inequalities hold for all $x, y \in \mathbb{B}(x_*, r)$:
$$\|F'(x)^{-1}\| \le \frac{\beta}{1 - \beta L \|x - x_*\|^{p}},$$
$$\|F'(x)\| \le \gamma + L \|x - x_*\|^{p},$$
$$\|F(y)\| \le \frac{L}{1 + p}\|y - x_*\|^{1+p} + \gamma \|y - x_*\|.$$

Proof. Since
$$\|F'(x_*)^{-1}\left(F'(x) - F'(x_*)\right)\| \le \beta L \|x - x_*\|^{p} \le \beta L r^{p} < 1,$$
by the Banach lemma, $F'(x)^{-1}$ exists and the inequality
$$\|F'(x)^{-1}\| \le \frac{\|F'(x_*)^{-1}\|}{1 - \|F'(x_*)^{-1}\|\,\|F'(x) - F'(x_*)\|} \le \frac{\beta}{1 - \beta L \|x - x_*\|^{p}}$$
holds, and
$$\|F'(x)\| \le \|F'(x_*)\| + \|F'(x) - F'(x_*)\| \le \gamma + L \|x - x_*\|^{p}.$$
Moreover, since $F(x_*) = 0$, it holds that
$$\|F(y)\| = \left\|\int_{0}^{1} F'\left(x_* + t(y - x_*)\right)(y - x_*)\,\mathrm{d}t\right\| \le \frac{L}{1 + p}\|y - x_*\|^{1+p} + \gamma \|y - x_*\|.$$
This completes the proof of Lemma 1.
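For the reader's convenience, the Banach perturbation lemma used above can be stated in generic notation as follows:

```latex
% Banach lemma: if A is nonsingular and \|A^{-1}\|\,\|B - A\| < 1,
% then B is nonsingular and
\|B^{-1}\| \le \frac{\|A^{-1}\|}{1 - \|A^{-1}\|\,\|B - A\|}.
% Taking A = F'(x_*) and B = F'(x) yields the first bound of Lemma 1.
```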

Theorem 2. Under the assumptions of Lemma 1, suppose $r \in (0, r_0)$ for a suitably defined radius $r_0$, and let the number of inner iteration steps satisfy $l_k \ge l^{*}$ for all $k$, where $l^{*}$ is a positive integer determined by the splitting and by a prescribed positive constant; here, the symbol $\lceil \cdot \rceil$ is used to denote the smallest integer no less than the corresponding real number, and the constants involved are all greater than 0.
Then, for any $x_0 \in \mathbb{B}(x_*, r)$ and any sequence $\{l_k\}$ of positive integers with $l_k \ge l^{*}$, the iteration sequence $\{x_k\}$ generated by the Newton-PGSS method is well defined and converges to $x_*$.

Proof. Denote the error at the k-th step by $\|x_k - x_*\|$. From the bounded condition, we obtain bounds on the splitting matrices over $\mathbb{B}(x_*, r)$; hence, by making use of the Banach lemma, we can obtain the existence of the required inverses together with the corresponding norm bounds, and, similarly, a bound on the iteration matrix $\Gamma(x)$. Now, we turn to estimating the error of the iteration defined above. The error at step k + 1 can be bounded by a function of the error at step k; this function is increasing in t and decreasing in c. In fact, for $x_0 \in \mathbb{B}(x_*, r)$, it follows that $x_1 \in \mathbb{B}(x_*, r)$. Hence, suppose $x_k \in \mathbb{B}(x_*, r)$ is valid for some positive integer k; then, by making use of the bounding function above again, we can straightforwardly deduce the corresponding estimate, which shows that the statement also holds true for k + 1. By mathematical induction, the whole sequence stays in $\mathbb{B}(x_*, r)$, and the error estimates imply that $\{x_k\}$ converges to $x_*$. This completes the proof of Theorem 2.

5. Semilocal Convergence of the Newton-PGSS Method

Assumption 2. For all $x \in \mathbb{B}(x_0, r)$, where $\mathbb{B}(x_0, r) \subset \mathbb{D}$, assume the following conditions hold.
(A1) The bounded condition: there exist positive constants $\beta$ and $\gamma$ such that
$$\|F'(x_0)^{-1}\| \le \beta, \qquad \|F(x_0)\| \le \gamma.$$
(A2) The Hölder condition: there exist nonnegative constants $L_1$ and $L_2$ such that, for all $x, y \in \mathbb{B}(x_0, r)$,
$$\|M(x) - M(y)\| \le L_1 \|x - y\|^{p}, \qquad \|N(x) - N(y)\| \le L_2 \|x - y\|^{p},$$
with the exponent $p \in (0, 1]$, and we define $L = L_1 + L_2$.

Lemma 2. Under Assumption 2, for all $x, y \in \mathbb{B}(x_0, r)$, $F'(x)^{-1}$ exists, and the corresponding inequalities (45), analogous to those of Lemma 1 with $x_*$ replaced by $x_0$, hold.

Proof. The proof is omitted since it is the same as that of Lemma 1.

Theorem 3. Under Assumption 2, for all $x, y \in \mathbb{B}(x_0, r)$, $F'(x)^{-1}$ exists, and the inequalities in (45) hold.
Now, we construct a sequence of majorizing scalar functions, with the constants satisfying suitable restrictions; let $t_0 = 0$, and let the scalar sequence $\{t_k\}$ be generated by the corresponding recursion. Some properties of these functions and of the sequence $\{t_k\}$ are given by the following lemmas.

Lemma 3. Assume that the constants satisfy the restrictions stated above. Then, under the corresponding condition on the parameters, the required inequalities for the majorizing functions and for the sequence $\{t_k\}$ hold.

Proof. The proof is omitted since it is straightforward.

Theorem 4. Under the assumptions of the lemmas in this section, with the parameters satisfying the restrictions above, define the radius $r$ accordingly, and let the number of inner iteration steps satisfy $l_k \ge l^{*}$, where $l^{*}$ is determined as in Theorem 2, the symbol $\lceil \cdot \rceil$ is used to denote the smallest integer no less than the corresponding real number, and a prescribed positive constant is involved. Then, the iteration sequence $\{x_k\}$ generated by the Newton-PGSS method is well defined and converges to a point $x_*$, which satisfies $F(x_*) = 0$.

Proof. Firstly, we construct the majorizing sequence $\{t_k\}$ and verify, using (49) and (50), that it is well defined, monotonically increasing, and bounded. Now, we assume the corresponding estimates hold for some index k and, by mathematical induction, extend them to k + 1. Furthermore, since the majorizing function is increasing on the relevant interval, the sequence $\{t_k\}$ is increasing and bounded; hence, its limit $t_*$ exists.
Next, we prove by mathematical induction that the iterates satisfy the majorization relations; in particular, we can derive the inequality $\|x_{k+1} - x_k\| \le t_{k+1} - t_k$ for all $k \ge 0$. Since the sequence $\{t_k\}$ converges to $t_*$, the sequence $\{x_k\}$ is a Cauchy sequence and therefore also converges. This completes the proof.

6. The Modified Newton-PGSS Method and Its Local Convergence

In this section, we improve the Newton-PGSS method, introduce the modified Newton-PGSS method, and briefly prove its local convergence.

The modified Newton method is a kind of algorithm based on the Newton method. Its principle is to reduce the number of times the inverse of the Jacobian matrix must be computed, making the algorithm more efficient: it only needs to work with the inverse of the Jacobian once every two steps. The format of the algorithm is shown below:
$$y_k = x_k - F'(x_k)^{-1} F(x_k),$$
$$x_{k+1} = y_k - F'(x_k)^{-1} F(y_k), \qquad k = 0, 1, 2, \ldots$$

Then, we get the modified Newton-PGSS method for solving the nonlinear system (1) (Algorithm 4).

(1) Given an initial guess $x_0$, a nonnegative constant $\alpha$, a positive constant $\beta$, and two positive integer sequences $\{l_k\}$ and $\{m_k\}$.
(2) For $k = 0, 1, 2, \ldots$ until $\|F(x_k)\| \le \mathrm{tol} \cdot \|F(x_0)\|$ do:
 (2.1) Set $d_{k,0} := 0$.
 (2.2) For $l = 0, 1, \ldots, l_k - 1$, apply algorithm PGSS to the linear system (12):
   $$\frac{1}{2}\left(\Omega + F'(x_k)\right) d_{k,l+1} = \frac{1}{2}\left(\Omega - F'(x_k)\right) d_{k,l} - F(x_k),$$
  and obtain $d_{k,l_k}$ such that
   $$\|F(x_k) + F'(x_k)\, d_{k,l_k}\| \le \eta_k \|F(x_k)\|.$$
 (2.3) Set $y_k = x_k + d_{k,l_k}$.
 (2.4) For $l = 0, 1, \ldots, m_k - 1$, apply algorithm PGSS to the linear system (12):
   $$\frac{1}{2}\left(\Omega + F'(x_k)\right) h_{k,l+1} = \frac{1}{2}\left(\Omega - F'(x_k)\right) h_{k,l} - F(y_k),$$
  and obtain $h_{k,m_k}$ such that
   $$\|F(y_k) + F'(x_k)\, h_{k,m_k}\| \le \tilde{\eta}_k \|F(y_k)\|.$$
 (2.5) Set $x_{k+1} = y_k + h_{k,m_k}$.
Here, $M(x)$, $N(x)$, and $\Gamma(x) = M(x)^{-1} N(x)$ are defined as in Section 3. We obtain the following uniform expressions for $d_{k,l_k}$ and $h_{k,m_k}$:
$$d_{k,l_k} = -\sum_{j=0}^{l_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(x_k), \qquad h_{k,m_k} = -\sum_{j=0}^{m_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(y_k).$$
Then, the modified Newton-PGSS method can be rewritten as
$$y_k = x_k - \sum_{j=0}^{l_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(x_k), \qquad x_{k+1} = y_k - \sum_{j=0}^{m_k - 1} \Gamma(x_k)^{j} M(x_k)^{-1} F(y_k).$$
The modified Newton-PGSS method can be equivalently expressed as
$$y_k = x_k - \left(I - \Gamma(x_k)^{l_k}\right) F'(x_k)^{-1} F(x_k), \qquad x_{k+1} = y_k - \left(I - \Gamma(x_k)^{m_k}\right) F'(x_k)^{-1} F(y_k).$$
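In code, the distinguishing feature of the modified scheme is that both inner PGSS solves of an outer step share the same Jacobian blocks. A minimal sketch under the same assumptions as the earlier ones (and reusing gss_solver from Section 2) is:

```python
# A minimal sketch of the modified Newton-PGSS outer loop (m = 2);
# all names and parameters are illustrative.
import numpy as np

def modified_newton_gss(F, jac_blocks, x0, alpha, beta,
                        inner_it=20, tol=1e-10, max_outer=50):
    x = x0.copy()
    norm_F0 = np.linalg.norm(F(x0))
    for _ in range(max_outer):
        if np.linalg.norm(F(x)) <= tol * norm_F0:
            return x
        B, E = jac_blocks(x)                      # formed once per outer step
        d = gss_solver(B, E, -F(x), alpha, beta,
                       u0=np.zeros_like(x), max_it=inner_it)
        y = x + d                                 # half step
        h = gss_solver(B, E, -F(y), alpha, beta,  # same Jacobian blocks
                       u0=np.zeros_like(x), max_it=inner_it)
        x = y + h
    return x
```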
In the following, we analyze the local convergence. The conditions (including the assumption) and the local convergence theorem are the same as those of Theorem 2 because the splitting matrices $M(x)$ and $N(x)$ are the same. Thus, we only restate them now.

Assumption 3. For all $x \in \mathbb{B}(x_*, r)$, assume the following conditions hold.
(A1) The bounded condition: there exist positive constants $\beta$ and $\gamma$ such that
$$\|F'(x_*)^{-1}\| \le \beta, \qquad \|F'(x_*)\| \le \gamma.$$
(A2) The Hölder condition: there exist nonnegative constants $L_1$ and $L_2$ such that
$$\|M(x) - M(x_*)\| \le L_1 \|x - x_*\|^{p}, \qquad \|N(x) - N(x_*)\| \le L_2 \|x - x_*\|^{p},$$
with the exponent $p \in (0, 1]$.

Lemma 4. Under Assumption 3, for all $x \in \mathbb{B}(x_*, r)$, if $\beta L r^{p} < 1$ with $L = L_1 + L_2$, then $F'(x)^{-1}$ exists, and the inequalities of Lemma 1 hold for all $x, y \in \mathbb{B}(x_*, r)$.

Theorem 5. Under the assumptions of Lemma 4, suppose $r \in (0, r_0)$ for a suitably defined radius $r_0$, and let the numbers of inner iteration steps satisfy $l_k, m_k \ge l^{*}$, where $l^{*}$ is a positive integer determined as in Theorem 2, the symbol $\lceil \cdot \rceil$ is used to denote the smallest integer no less than the corresponding real number, and the constants involved are all greater than 0.
Then, for any $x_0 \in \mathbb{B}(x_*, r)$ and any sequences $\{l_k\}$ and $\{m_k\}$ of positive integers with $l_k, m_k \ge l^{*}$, the iteration sequence $\{x_k\}$ generated by the modified Newton-PGSS method is well defined and converges to $x_*$.

Proof. It is the same as the proof of Theorem 2.
Theorem 5 shows that the modified Newton-PGSS method has a local convergence result similar to that of the Newton-PGSS method; a more precise statement is given as follows.

Theorem 6. Under the conditions of Theorem 5, for any $x_0 \in \mathbb{B}(x_*, r)$ and any corresponding sequences $\{l_k\}$ and $\{m_k\}$ of positive integers, the iteration sequence $\{x_k\}$ generated by the modified Newton-PGSS method is well defined and converges to $x_*$, and the corresponding error estimates hold.

Proof. The first part of the proof is the same as that of Theorem 2. From the definitions of $y_k$ and $x_{k+1}$ in Section 6 and from Lemma 4, we can easily derive an estimate for $\|y_k - x_*\|$ in terms of $\|x_k - x_*\|$. By Lemma 4, and similarly to the proof of Theorem 2, we can obtain the corresponding estimate for $\|x_{k+1} - x_*\|$ in terms of $\|y_k - x_*\|$. Combining (84) with (85), we can obtain a contraction estimate for $\|x_{k+1} - x_*\|$ in terms of $\|x_k - x_*\|$. By mathematical induction, this estimate holds for every nonnegative integer k. Because the contraction factor is less than 1, we conclude from (86) that $\{x_k\}$ converges to $x_*$ as $k \to \infty$. The proof of the theorem is completed.

7. Numerical Example

In this section, we show the efficiency of the modified Newton-PGSS method. Because such problems have not been analyzed before, we proceed in two steps: first, we compare the modified Newton-PGSS method with the Newton-PGSS method and the Newton-RHSS method, whose inner iterations are splitting methods; second, we discuss which method is more effective as a preconditioner in the Newton-GMRES algorithm. The numerical results in Example 1 were computed using MATLAB Version R2011b on an iMac with a 3.20 GHz Intel Core i5-6500 CPU and 8.00 GB RAM, with double-precision machine accuracy.

Example 1. Consider the Stokes flow problem: find $u$ and $q$ such that
$$\begin{cases} -\nu \Delta u + \nabla q = \tilde{f}, & \text{in } \Omega, \\ \nabla \cdot u = \tilde{g}, & \text{in } \Omega, \end{cases}$$
where $\Omega = (0, 1) \times (0, 1)$, $\partial \Omega$ is the boundary of $\Omega$, $u$ is a vector-valued function representing the velocity, $\nu > 0$ is the viscosity constant, $\Delta$ is the componentwise Laplace operator, and $q$ is a scalar function representing the pressure. By discretizing the problem above with the upwind scheme, we obtain the saddle-point structure in which
$$B = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix}, \qquad E = \begin{pmatrix} I \otimes G \\ G \otimes I \end{pmatrix},$$
where
$$T = \frac{\nu}{h^{2}} \operatorname{tridiag}(-1, 2, -1), \qquad G = \frac{1}{h} \operatorname{tridiag}(-1, 1, 0),$$
with $\otimes$ being the Kronecker product symbol. By applying the centered finite difference scheme on the equidistant discretization grid with the step size $h$, the system of nonlinear equations (1) is obtained, and its Jacobian matrix takes the saddle-point form (2).

Firstly, we compare the algorithms whose inner iterations are splitting methods, namely, Newton-RHSS, Newton-PGSS, and modified Newton-PGSS. The parameters needed in the problem are chosen by using the traversal method for the purpose of comparison: the initial guess $x_0$ is given, the stopping criterion for the outer iteration is a prescribed reduction of the norm of $F$, and the prescribed tolerances for controlling the accuracy of the inner and outer iterations are both set to the same value, which satisfies the inequality required by the convergence theory. For different inner tolerances (0.4, 0.2, and 0.1) and problem parameters $\nu = 1$ and $\nu = 0.1$, the results for the outer iteration count (outer IT), the inner iteration count (inner IT), and the CPU time are listed in the numerical tables corresponding to the respective inexact Newton methods. Because the linear system to be solved differs at each iteration, there is no way to find the optimal parameters theoretically. Thus, we find the most efficient variant of each algorithm by traversing its parameters and then tabulate the results. For the selection of a single parameter, we first traverse the parameter from 0 with an interval of 1. When the number of steps, the time, and the error show a decrease-and-then-increase trend, the traversal is stopped to determine the range of the parameter. We use this method to narrow the parameter range and obtain "the best parameters at present" until the result (such as the step count) does not change. For the selection of two parameters (denote them as $\alpha$ and $\beta$), we first fix $\beta$ and traverse $\alpha$ by using the single-parameter traversal method; then, we fix $\alpha$ and traverse $\beta$. We repeat this process until the result does not change. From Tables 1–6, we can see that Newton-PGSS performs better than Newton-RHSS in terms of iterative CPU time. Moreover, the modified Newton-PGSS algorithm is much better than Newton-RHSS in the number of iteration steps.
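For readers who wish to reproduce a problem of this type, the following Python sketch assembles the saddle-point blocks via the Kronecker-product construction commonly used in the shift-splitting literature; the exact scaling and the nonlinear terms of the paper's example were lost in extraction, so this is an assumed, representative form.

```python
# Sketch of the Kronecker-product construction of the Stokes-type
# saddle-point blocks: B = diag(I⊗T + T⊗I, I⊗T + T⊗I), E = [I⊗G; G⊗I],
# with T = (nu/h^2)*tridiag(-1, 2, -1) and G = (1/h)*tridiag(-1, 1, 0).
import numpy as np
from scipy.sparse import bmat, eye, kron, diags

def stokes_blocks(l, nu=1.0):
    h = 1.0 / (l + 1)
    T = (nu / h**2) * diags([-1, 2, -1], [-1, 0, 1], shape=(l, l))
    G = (1.0 / h) * diags([-1, 1], [-1, 0], shape=(l, l))
    I = eye(l)
    Lap = kron(I, T) + kron(T, I)          # discrete (negative) Laplacian
    B = bmat([[Lap, None], [None, Lap]])   # (2l^2) x (2l^2), positive definite
    E = bmat([[kron(I, G)], [kron(G, I)]]) # (2l^2) x l^2, full column rank
    return B.tocsc(), E.tocsc()
```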
As is well known, Krylov subspace methods are more efficient than stationary iterative methods for saddle-point problems. Secondly, therefore, we compare the effects of PGSS and RHSS as preconditioners for Newton-GMRES. From Tables 7–12, we can see that GMRES with the PGSS or RHSS preconditioner is more efficient than GMRES without preconditioning. Furthermore, PGSS is more efficient than RHSS as a preconditioner. In the inner iteration, RHSS and PGSS are used as preconditioners, and the preconditioned Krylov subspace method outperforms the unpreconditioned one in both CPU time and step number. Although the effect of PGSS as a preconditioner is not much better than that of RHSS when n is small, PGSS shows great advantages in both steps and CPU time compared with RHSS as n increases.

8. Conclusions

The Newton-PGSS method is a competitive method for solving large sparse nonlinear systems whose nonsymmetric Jacobian matrices have saddle-point structure. This is the first time such problems have been treated in this way: we utilize the PGSS iteration as the inner solver for the Newton equation. In addition, we establish a modified Newton-PGSS method for solving the same class of problems. We give the local convergence and semilocal convergence analyses of the new methods under proper conditions. Finally, the numerical results show that the modified Newton-PGSS method outperforms the other splitting-based methods in terms of CPU time and iteration steps. Furthermore, when we apply the Newton-GMRES method to solve the problems, PGSS as a preconditioner accelerates the algorithm and makes it more efficient than RHSS.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 11771393 and 11632015).