Abstract

The complexity of unclosed linear algebraic equations makes it difficult to obtain analytical solutions, and applying preprocessing techniques to the coefficient matrix has become an effective way to accelerate the convergence of iterative methods. Therefore, it is important to preprocess the structure of unclosed linear algebraic equations to reduce their complexity. Unclosed linear algebraic equations can be divided into symmetric linear equations and asymmetric linear equations. The former are preprocessed on the basis of a 2 × 2 block partition of the coefficient matrix, the latter are preprocessed by the improved QMRGCGS method, and the applications of the two methods are analyzed, respectively. The experimental results show that when the step is 500, the pretreatment time of the quasi-minimal residual generalized conjugate gradient squared 2 (QMRGCGS2) method is 34.23 s, that of the conjugate gradient squared 2 (CGS2) method is 35.14 s, and that of the conjugate gradient squared (CGS) method is 45.20 s, providing a new reference method and idea for solving and preprocessing unclosed linear algebraic equations.

1. Introduction

With the development of computer science and technology, computational mathematics has become increasingly important. Numerical calculation methods from computational mathematics are commonly adopted to deal with scientific and engineering computation problems [1]. Effective solution methods for large-scale unclosed algebraic systems are considered one of the important research topics in computational mathematics, because many scientific and engineering fields are inseparable from the numerical solution of differential or integral equations, such as structural mechanics, computational fluid mechanics, electromagnetic field calculation, material simulation and design, life science, aerodynamics, system science, medical science, astronomy, financial engineering, social science, and other soft sciences. For linear partial differential equations or integral equations, it is difficult to find analytical solutions because of their high complexity, and numerical methods are adopted as tools to solve this problem. The common numerical methods in computational mathematics, such as the finite element, finite difference, finite volume, moment, and meshless discretization methods, have been extensively studied. These computational problems are ultimately transformed into solving one or a group of large-scale unclosed linear algebraic equations [2].

In practical applications and engineering calculations, iterative methods are commonly used to solve linear equations. However, with the rapid development of science and technology and the increasing scale of the problems to be solved, the low efficiency of stationary iterative methods can no longer meet the needs of scientific computing, and in fact they are now rarely used alone to solve systems of equations. At the same time, tropical algebra can be used to solve combinatorial optimization problems; it combines concepts and methods from statistical physics, machine learning, and other fields and has been well applied in noise removal and optimal control. In addition, the homotopy perturbation method transforms the problem of solving certain non-linear partial differential equations into an initial value problem for ordinary differential equations through travelling wave transformation and homotopy perturbation theory, and it is a relatively common method for solving non-linear problems. Later, Young proposed the definition and basic concepts of non-stationary iterative methods. The non-stationary Richardson iteration is the first non-stationary iterative method, and it can be directly extended to the steepest descent method, the Chebyshev semi-iterative method, the preconditioned conjugate gradient (PCG) method, and the generalized CG (GCG) method. For the Krylov subspace methods represented by the CG method, the convergence rate is still related to the spectral distribution of the coefficient matrix. When the eigenvalues of the iteration matrix are more concentrated, the convergence of the iterative method is faster; when the spectral distribution is more dispersed, the convergence of the non-stationary iterative method tends to be slow in general, and sometimes the iteration does not converge at all. In this case, preprocessing can be applied to the coefficient matrix to make the spectrum of the iteration matrix cluster, which is an effective way to overcome a dispersed spectral distribution and accelerate the convergence of the iterative method [3]. In order to further reduce the complexity of unclosed algebraic equations and effectively optimize the convergence behavior of such methods, this paper studies the structure preprocessing method for unclosed linear algebraic equations. According to the type of equations, the treatment is divided into two parts, namely, the preprocessing of symmetric linear equations and the preprocessing of asymmetric linear equations.
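
The effect of a preconditioner on a Krylov iteration can be illustrated with a small preconditioned conjugate gradient loop. The sketch below is a generic textbook PCG in Python with a Jacobi (diagonal) preconditioner; the function name, the test matrix, and the choice of preconditioner are illustrative assumptions and are not the specific preprocessing scheme developed later in this paper.

import numpy as np

def pcg(A, b, apply_M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A.
    apply_M_inv(r) applies the preconditioner, approximating A^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = apply_M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # new search direction
        rz = rz_new
    return x

# Jacobi preconditioning: dividing by the diagonal clusters the spectrum of the
# preconditioned matrix for a diagonally dominant system, speeding up convergence.
A = np.diag([1.0, 10.0, 100.0, 1000.0]) + 0.1 * np.ones((4, 4))
b = np.ones(4)
x = pcg(A, b, lambda r: r / np.diag(A))
print(np.allclose(A @ x, b))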

2. The Structure Preprocessing Method for the System of Unclosed Linear Algebraic Equations

In the fields of natural science, aerodynamics, economic management, engineering technology, fluid mechanics, structural mechanics, and aerospace engineering, many practical problems can be reduced to solving a system of linear equations Ax = b, where the coefficient matrix A is an n × n matrix and x and b are n-dimensional column vectors.

The methods for solving linear equations mainly include the square root (Cholesky) method, the elimination method, the direct triangular decomposition method, determinant-based methods and matrix inversion, the Gauss–Seidel iteration method, the Jacobi iteration method, the successive over-relaxation (SOR) iteration method, and other iterative methods [4].
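
As an illustration of the stationary iterations listed above, the following is a minimal Jacobi iteration in Python; the splitting and the stopping rule are generic textbook choices and are not taken from this paper.

import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=5000):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k), with D = diag(A)."""
    D = np.diag(A)
    R = A - np.diag(D)                         # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Converges for strictly diagonally dominant coefficient matrices.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ jacobi(A, b), b))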

However, the above methods are not suitable for all linear equations; strictly speaking, they have great limitations in solving some specific problems. Therefore, some scholars have made efforts to develop preprocessing methods for linear equations [5]. These preprocessing methods generally split the coefficient matrix A of the linear equations in different ways, so that the iterative method converges to the solution of the linear equations. In [6], the general coefficient matrix of a linear equation system is transformed into a symmetric positive definite matrix, so that the problem of solving the original linear equation system can be transformed into the problem of finding the minimum of an equivalent variational problem. In addition, in [7], various algorithms for solving a linear equation system are given. Among them, the authors in [8] developed an improved Gauss–Seidel iterative method to solve non-convergent linear equations and selected appropriate processing factors to achieve iterative convergence of the linear equations.

Scholars have also made great efforts on solving large sparse linear equations. The GMRES algorithm, the preconditioned conjugate gradient method, the ICCG method, and other common methods are used to solve large sparse linear equations. Among them, the GMRES algorithm is currently one of the most effective algorithms for solving large sparse asymmetric linear equations, and Krylov subspace methods are commonly used to solve the preprocessing problem of large sparse linear equations. Moreover, various iterative methods have been developed on this basis, including the conjugate gradient method, the generalized minimal residual method, and so on. The conjugate gradient method lies between the steepest descent method and Newton's method: it uses only first-derivative information, yet it overcomes the slow convergence of the steepest descent method and avoids the need of Newton's method to store, compute, and invert the Hessian matrix. The generalized minimal residual method is also a non-stationary method; it has the advantages of saving storage space and requiring less computation, and it is suitable for parallel computing. However, the convergence of the two algorithms is easily affected by the application and the boundary conditions [9–13].

The preprocessing of linear equations is generally coupled with the computation of the iterative method. In such a rapidly developing information age, there are ever more demands on solving linear equations, and the requirements on solution speed are higher than ever. Therefore, how to make the solution process closer to practical applications, faster, and more accurate is a major problem to be considered.

For preprocessing methods for solving linear equations, the search for and selection of the preprocessor is the key link. The preprocessor G in this paper is chosen as an approximation to the inverse of the coefficient matrix A of the linear equations. The method for obtaining the inverse of a coefficient matrix is elaborated in detail in undergraduate algebra textbooks; in fact, the elementary transformation method is the most common one. Finding the inverse matrix by elementary transformations is part of the undergraduate curriculum, but the scale of matrices that can be inverted by hand in this way is generally small. However, in the fields of natural science, economic management, engineering technology, fluid mechanics, aerodynamics, structural mechanics, aerospace engineering, etc., the matrices that arise in practice are usually large-scale matrices or matrices with relatively large condition numbers, which can hardly be inverted by hand [14]. Based on these considerations, mathematical knowledge is combined with current computer knowledge, and an algorithm is constructed to carry out matrix inversion in computer software, so as to reduce the difficulty of matrix inversion encountered in the actual calculation process [15].
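
As a small illustration of carrying out the elementary transformation (Gauss–Jordan) inversion in software, the Python sketch below inverts a matrix by row operations on the augmented matrix [A | I]. It is a generic routine with partial pivoting, given only for illustration; for the large or ill-conditioned matrices mentioned above, one would in practice use an approximate inverse or a factorization rather than an explicit inverse.

import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by elementary row operations on the augmented matrix [A | I]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: bring the largest available pivot into place.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                      # scale the pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]   # eliminate the rest of the column
    return aug[:, n:]

A = np.array([[4.0, 1.0], [2.0, 3.0]])
G = gauss_jordan_inverse(A)                            # candidate preprocessor G approximating A^{-1}
print(np.allclose(G @ A, np.eye(2)))                   # True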

In this section, the preprocessing methods for symmetric linear equations are discussed. To be specific, the coefficient matrix is partitioned into block form, the commonly used preprocessing methods are then applied to the resulting saddle point problem, and three kinds of preprocessors are obtained.

2.1. Principle and Method

At present, the preprocessors for saddle point problems can be divided into block diagonal preprocessors, block triangular preprocessors, constraint preprocessors, and so on. The two types of linear equations obtained previously can be uniformly written in the following form by repartitioning:

In general, the above linear system can be regarded as a saddle point problem, where A is the saddle point matrix [16]; the remaining symbols denote, respectively, the scrambling parameter, a point in the set, the penalty parameter of the matrix, the constraint coefficient, and the penalty parameters after the union.

For a matrix with the above structure, the corresponding block diagonal preprocessor and block lower triangular preprocessor can be constructed as follows.

If the coefficient matrix A is symmetric, a constraint-type preprocessor of the following form can also be constructed, in which a symmetric matrix is used as an approximation to the corresponding block of A.
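
The exact block forms of these preprocessors were not preserved above, so the following Python sketch builds the standard block diagonal, block lower triangular, and constraint-type preprocessors for a saddle point matrix of the form [[A, B^T], [B, 0]]. The use of the Schur complement for the (2,2) block and of diag(A) as the symmetric approximation in the constraint preprocessor are conventional assumptions, not necessarily the forms used in this paper.

import numpy as np

def saddle_point_preprocessors(A, B):
    """Standard preprocessors for K = [[A, B^T], [B, 0]] (assumed block forms)."""
    n, m = A.shape[0], B.shape[0]
    S = B @ np.linalg.solve(A, B.T)             # Schur complement B A^{-1} B^T
    G = np.diag(np.diag(A))                     # symmetric approximation to the (1,1) block
    Zn = np.zeros((n, m))
    P_diag = np.block([[A, Zn], [Zn.T, S]])                     # block diagonal
    P_tri  = np.block([[A, Zn], [B, -S]])                       # block lower triangular
    P_con  = np.block([[G, B.T], [B, np.zeros((m, m))]])        # constraint type
    return P_diag, P_tri, P_con

# In practice each P is applied by solving P z = r inside a Krylov iteration
# rather than by forming P^{-1} explicitly.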

For the block diagonal preprocessor, the following conclusion can be obtained.

Lemma 1. If the block diagonal preprocessor is used to preprocess A, then the preprocessed coefficient matrix satisfies the following relation:

This lemma shows that the preprocessed matrix is diagonalizable and has at most four distinct eigenvalues: 0, 1, and (1 ± √5)/2 [17].

When T is non-singular, there are only three non-zero eigenvalues. This shows that, in this case, the Krylov subspace generated by the preprocessed matrix and the initial residual r0 has dimension no more than 3. Therefore, the Krylov subspace method terminates in at most 3 steps [18].
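
A quick numerical check of this clustering (and hence of the early termination) is sketched below. The construction assumes a symmetric positive definite (1,1) block, a full-rank constraint block, and the block diagonal preprocessor with the exact Schur complement, which is the standard setting for this kind of three-eigenvalue result; it is an illustration, not the paper's exact matrices.

import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))             # full row rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])          # saddle point matrix
S = B @ np.linalg.solve(A, B.T)                          # Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
# Up to rounding, the preprocessed matrix has only three distinct eigenvalues,
# so a Krylov method applied to it terminates in at most three steps.
print(sorted(set(np.round(eigs.real, 6))))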

Similarly, for the block lower triangular preprocessor, we have the following conclusion.

Lemma 2. Set

If the corresponding blocks are non-singular and of full rank, respectively, then all eigenvalues of the preprocessed matrix are equal to 1.

For the constrained preprocessor, the following conclusion can be obtained.

Lemma 3. Let A be a symmetric indefinite matrix of the following form, where the constraint block B is of full rank, the corresponding block is symmetric, and the indicated block is non-singular. Let Z be a group of basis vectors of the zero space of B, i.e., BZ = 0; then, the preprocessed matrix has the following properties: (1) it has 2n eigenvalues equal to 1; (2) the remaining m − n eigenvalues satisfy the following generalized eigenvalue problem, where the matrices involved are the residual matrix and the coefficient.
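
The eigenvalue structure described by Lemma 3 can be illustrated numerically with the sketch below. The dimensions and the choice of the diagonal of the (1,1) block as its symmetric approximation are our own assumptions (the lemma's exact notation was lost in extraction); the sketch only shows that, apart from a cluster of eigenvalues at 1, the remaining eigenvalues of the constraint-preconditioned matrix coincide with those of a smaller generalized eigenvalue problem projected onto the zero space of B.

import numpy as np
from scipy.linalg import null_space, eig

rng = np.random.default_rng(1)
n, m = 7, 3
M = rng.standard_normal((n, n))
A11 = (M + M.T) / 2 + n * np.eye(n)          # symmetric (1,1) block
B = rng.standard_normal((m, n))              # full-rank constraint block
G = np.diag(np.diag(A11))                    # symmetric approximation to A11 (assumed choice)

K = np.block([[A11, B.T], [B, np.zeros((m, m))]])
P = np.block([[G,   B.T], [B, np.zeros((m, m))]])    # constraint-type preprocessor

Z = null_space(B)                                    # basis of the zero space of B, BZ = 0
full = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
reduced = np.sort(eig(Z.T @ A11 @ Z, Z.T @ G @ Z)[0].real)

print(np.round(full, 4))      # 2*m eigenvalues equal to 1; the rest match 'reduced'
print(np.round(reduced, 4))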

2.2. Application Research

The following three preprocessing methods are now applied to symmetric algebraic linear equations. Let the equations be written as follows:

Let the quantities be defined as follows, where the first represents the coefficient, the second represents the symmetric positive definite mass matrix, the next are the symmetric indefinite mass matrices, and the last is the stationary point of the Lagrange function [19].

Then, A can be written as

For linear equation (10), the block diagonal and block lower triangular preprocessors are expressed as follows:

For the constrained preprocessor, select the following, where L is the constraint matrix. Then, the columns span the zero space of B, and the relation below holds, where Z is a zero-space vector. So, the concrete form of the constrained preprocessor is

3. Preprocessing of Unsymmetrical Algebraic Linear Equations

In this section, the preprocessing of unsymmetrical linear equations is studied. The BCG method and the related CGS method for unsymmetrical linear equations show typical irregular convergence behavior. Freund and Nachtigal put forward the quasi-minimal residual (QMR) method to remedy the convergence of the BCG method and produce a smooth convergence curve. However, like the BCG method, the QMR method requires products of both the coefficient matrix A and its transpose AT with vectors. In order to solve this problem, Freund proposed the TFQMR method, which has the quasi-minimal residual property and does not use products of AT with vectors. It is a Krylov subspace algorithm and performs well in experiments on solving large sparse linear equations [20–26].

In order to improve the convergence of the CGS method, Fokkema et al. extended the CGS method and derived the generalized CGS method, called the GCGS method. The GCGS method effectively modifies the convergence behavior of the CGS method and the BiCGSTAB method. At the same time, two new methods, CGS2 and SCGS, have been derived from it, but they do not have the quasi-minimal residual property, which leads to irregular convergence behavior for some complicated problems [27, 28]. In this section, quasi-minimal residuals are introduced into the GCGS method, yielding the QMRGCGS method. This class of methods includes the TFQMR method and the QMRCGSTAB method as special cases. At the same time, two new methods, namely, the QMRGCGS2 method and the QMRSCGS method, are derived on the basis of the QMRGCGS method.

3.1. GCGS Method

A vector x0 is taken as the initial solution of the equation Ax = b, and its residual vector is r0 = b − Ax0. The residual vector of the n-th iteration of the BCG method can be expressed as a BCG polynomial of degree n in A acting on r0. The residual vector of the n-th iteration of the CGS method is the square of this BCG polynomial in A acting on r0. The residual vector of the n-th iteration of the GCGS method differs from that of the CGS method: it is the product of the BCG polynomial and an approximate form of the BCG polynomial, both in A, acting on r0. From this, the GCGS method can be derived, and it is given as Algorithm 1 [29].

Select an initial guess x0 and some initial vector.
for n = 0, 1, 2, …
  Select …
  Choose …
  If the current iterate is accurate enough, then quit
End

In Algorithm 1, one particular choice of the parameters yields the CGS method, and another choice yields the BiCGSTAB method. Using the related BCG polynomial and taking the parameters correspondingly, the GCGS2 method is derived. With a further case-by-case choice of the parameters, in which one of them is taken as the reciprocal of the maximum eigenvalue of the coefficient matrix, the SCGS method is obtained [30–32].
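
Since the paper's symbols for these parameter choices were lost above, the sketch below simply shows the classical CGS method, the first special case mentioned, in Python. It is the standard Sonneveld recurrence, not the paper's GCGS algorithm, and the shadow residual equal to the initial residual is a conventional choice.

import numpy as np

def cgs(A, b, tol=1e-8, max_iter=1000):
    """Conjugate Gradient Squared (Sonneveld): applies the BCG polynomial twice,
    so no products with A^T are needed. Convergence can be irregular."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_tilde = r.copy()                     # shadow residual (conventional choice)
    u = r.copy()
    p = r.copy()
    rho = r_tilde @ r
    for _ in range(max_iter):
        v = A @ p
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        x += alpha * (u + q)
        r -= alpha * (A @ (u + q))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = r_tilde @ r
        beta = rho_new / rho
        u = r + beta * q
        p = u + beta * (q + beta * p)
        rho = rho_new
    return x

# Small unsymmetric test system.
A = np.array([[4.0, 1.0, 0.0], [0.5, 3.0, 1.0], [0.0, 0.5, 2.0]])
b = np.ones(3)
print(np.allclose(A @ cgs(A, b), b))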

3.2. QMRGCGS Method

From the above analysis, it can be seen that the CGS method uses the square of the BCG polynomial, and the GCGS method extends this by using the product of the BCG polynomial and an approximate BCG polynomial; however, the GCGS method still does not have the quasi-minimal residual property [33, 34]. Therefore, quasi-minimal residuals are introduced into the GCGS method, and the QMRGCGS method is derived [35, 36].
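
The recurrences of the QMRGCGS method itself are derived below. As a readable illustration of what introducing quasi-minimal residuals does, the following Python sketch applies a generic quasi-minimal residual smoothing step, in the style of Zhou and Walker, to the iterates and residuals produced by an underlying product method such as CGS. It is an assumed, simplified formulation for illustration only, not the paper's QMRGCGS recurrences.

import numpy as np

def qmr_smoothing(iterates, residuals):
    """Quasi-minimal residual smoothing of a given sequence of iterates x_k and
    residuals r_k. Returns smoothed iterates whose residual norms decrease
    (quasi-)monotonically instead of oscillating."""
    s = iterates[0].copy()
    t = residuals[0].copy()
    tau = np.linalg.norm(t)
    smoothed, norms = [s.copy()], [tau]
    for x_k, r_k in zip(iterates[1:], residuals[1:]):
        theta = np.linalg.norm(r_k) / tau
        c2 = 1.0 / (1.0 + theta**2)        # squared cosine of a Givens-type rotation
        tau = tau * theta * np.sqrt(c2)    # quasi-minimal residual bound
        s = (1.0 - c2) * s + c2 * x_k      # smoothed iterate
        t = (1.0 - c2) * t + c2 * r_k      # smoothed residual (equals b - A s)
        smoothed.append(s.copy())
        norms.append(np.linalg.norm(t))
    return smoothed, norms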

Set the following, where one symbol represents a QMRGCGS function and the others represent functions, and where, in the subsequent relation, one symbol represents the logarithmic equation of the QMRGCGS function and the others represent functions.

Let

According to Algorithm 1, we have the following, where the matrix involved is the weight matrix. So,

Let the following hold, where the set involved is the weight set. Then, it can be written in the form below, where the matrix involved is the function matrix, wherein

For the GCGS method, the following relation holds.

Since the two matrices are of order k, according to the definition, the following equation holds, where one symbol represents a vector and another represents a point on the weight set. Quasi-minimal residuals are now introduced. From equation (26), the iteration result of step m can be written in the following form; that is, there is a certain vector such that

Thus, the following holds. In fact, the free vector in equation (27) can be selected to make the residual minimal. However, since the matrix involved is dense and its column vectors are not orthogonal to each other, the amount of calculation would be too large, and quasi-minimal residuals are introduced instead.

As a weight matrix, take , and equation (28) can be written as

Let the vector be the solution of the least squares problem; that is, it satisfies

So, the solution of the QMRGCGS method can be expressed in the form below, where the vector satisfies equation (31).

A series of Givens transforms is then applied, transforming the matrix into upper triangular form, namely,

Let be the matrix obtained by removing the last row of and be the vector obtained by removing the last element of ; then, satisfying equation (31) can be expressed as

There are

Let the following hold, where the function involved is a minimum function. Then, the relation below holds, where one quantity is the last element of the vector and another is the corresponding cosine; then,

Thus,

In this way, the result of preprocessing of the unsymmetrical algebraic linear equations can be obtained.
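
Since the matrices in the derivation above were lost in extraction, the following Python sketch only illustrates the Givens step in a generic form: it reduces a (k+1) × k upper Hessenberg least squares problem, minimizing the norm of beta*e1 − H y, to an upper triangular system by successive Givens rotations, which is the same mechanism used here to obtain the quasi-minimal residual solution. The matrix H and the scalar beta are illustrative stand-ins, not the paper's notation.

import numpy as np

def givens_least_squares(H, beta):
    """Solve min_y || beta*e1 - H y || for a (k+1) x k upper Hessenberg H by
    Givens rotations; returns y and the residual norm |last entry of g|."""
    m, k = H.shape                              # m = k + 1
    R = H.astype(float).copy()
    g = np.zeros(m)
    g[0] = beta
    for j in range(k):
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        c, s = a / r, b / r                     # rotation that zeroes R[j+1, j]
        cols = slice(j, k)
        Rj, Rj1 = R[j, cols].copy(), R[j + 1, cols].copy()
        R[j, cols] = c * Rj + s * Rj1
        R[j + 1, cols] = -s * Rj + c * Rj1
        g[j], g[j + 1] = c * g[j] + s * g[j + 1], -s * g[j] + c * g[j + 1]
    y = np.linalg.solve(R[:k, :k], g[:k])       # back substitution on the triangular part
    return y, abs(g[k])

H = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 0.5]])    # small Hessenberg example
y, res = givens_least_squares(H, beta=1.0)
e1 = np.array([1.0, 0.0, 0.0])
print(np.isclose(res, np.linalg.norm(e1 - H @ y)))    # True: |g[k]| is the least squares residual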

3.2.1. Application Test of the QMRGCGS2 Method

The QMRGCGS2 method is used to preprocess the following asymmetric algebraic linear equations.

The equation has Dirichlet boundary conditions on the given domain. The grid size is chosen so that the Reynolds number on each element is less than 1. A stable discrete scheme is generated by using the central difference scheme [37, 38]. Figure 1 shows the relative residual norms and iteration steps obtained by running the CGS, CGS2, and QMRGCGS2 methods for 200 steps, with the stated relative residual norm and initial value. The workload and storage of every two iterations of the QMRGCGS2 method are equal to those of the CGS method and the CGS2 method, so that k and n satisfy the corresponding relationship [39, 40]. The iterative steps are shown in Figure 1.
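
The exact coefficients and grid of this test equation were not preserved above, so the following Python sketch builds a generic central-difference discretization of a two-dimensional convection-diffusion equation on the unit square with homogeneous Dirichlet boundaries. The convection parameter pe and the grid size N are illustrative assumptions chosen so that the cell Reynolds number pe*h/2 stays below 1, as required in the text.

import numpy as np
import scipy.sparse as sp

def convection_diffusion_matrix(N, pe):
    """Central-difference matrix for -u_xx - u_yy + pe*(u_x + u_y) on the unit square
    with Dirichlet boundaries; N interior points per direction, h = 1/(N+1)."""
    h = 1.0 / (N + 1)
    main = (2.0 / h**2) * np.ones(N)
    lower = (-1.0 / h**2 - pe / (2.0 * h)) * np.ones(N - 1)   # couples to the left/lower neighbour
    upper = (-1.0 / h**2 + pe / (2.0 * h)) * np.ones(N - 1)   # couples to the right/upper neighbour
    T = sp.diags([lower, main, upper], [-1, 0, 1])
    I = sp.identity(N)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()            # N^2 x N^2, nonsymmetric for pe != 0

A = convection_diffusion_matrix(N=32, pe=10.0)                # cell Reynolds number 10*h/2 < 1
print(A.shape, abs(A - A.T).max() > 0.0)                      # (1024, 1024) True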

From Figure 1, it can be seen that the relative residual norms of the CGS2, SCGS, GCGS, CGS, and GCGS2 methods show irregular convergence behavior, while the relative residual norm of the QMRGCGS2 method tends to show regular convergence behavior. That is to say, the QMRGCGS2 method has smoother convergence behavior than the CGS method and the GCGS2 method [41–43]. Finally, the CGS2 and CGS methods, which perform well, are selected to compare with the proposed QMRGCGS2 method on a practical application problem. The selected problem is Grenoble, which comes from the Harwell–Boeing sparse matrix collection and has an extensive application background. Its order is 115, and its number of non-zero elements is 421. The relative residual norms and iteration steps obtained by running the three methods for 150 steps are shown in Figure 2.

It can be seen from Figure 2 that the relative residual norms of the CGS2 and CGS methods on the practical Grenoble problem show irregular convergence behavior, while the relative residual norm of the proposed QMRGCGS2 method tends to show regular convergence behavior, that is, the method converges more smoothly. The pretreatment time is then calculated to verify the effectiveness of the preprocessing. The convergence criterion requires that the relative residual norm be less than 10^-8. Table 1 shows the iterations and calculation time required for convergence. The problem in Table 1 is the two-dimensional three-temperature energy discretization linear algebraic equation, and the scale of the grid nodes is 8000 (160 × 53).

According to Table 1, when the step is 100, the pretreatment time of the QMRGCGS2 method is 15.36 s, that of the CGS2 method is 20.36 s, and that of the CGS method is 22.36 s. When the step is 500, the pretreatment time of the QMRGCGS2 method is 34.23 s, that of the CGS2 method is 35.14 s, and that of the CGS method is 45.20 s. The test results show that the QMRGCGS2 method converges faster than the CGS and CGS2 methods, indicating the higher efficiency of the preprocessing method proposed in this paper.

4. Conclusions

In this paper, the preconditioning of symmetric linear equations and of asymmetric linear equations is studied, respectively. The experiments show that when the step is 500, the pretreatment time of the QMRGCGS2 method is 34.23 s, that of the CGS2 method is 35.14 s, and that of the CGS method is 45.20 s. The QMRGCGS2 method has smoother convergence behavior and faster convergence than the CGS method and the GCGS2 method. It provides a new method for solving asymmetric linear equations and has certain promotion value.

However, in the research on asymmetric linear equations, only the numerical experiment with m-step polynomial preprocessing has been carried out. In future research work, the effectiveness of the preprocessor when applied to other Krylov subspace methods will be studied. For the polynomial preprocessor in this paper, how to choose an optimal value is a question worthy of further study.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was funded by the Natural Science Foundation of China (Study on non-smooth decentralized control for a class of uncertain non-linear large-scale systems) (no. 61503122) and Tackling of Key Scientific and Technical Project of Henan Province (Non-smooth intelligent control of non-linear system and its application) (no. 202102210142).