Journal of Mathematics


Research Article | Open Access

Volume 2020 |Article ID 8531403 | https://doi.org/10.1155/2020/8531403

Jing-Mei Feng, San-Yang Liu, "A Three-Step Iterative Method for Solving Absolute Value Equations", Journal of Mathematics, vol. 2020, Article ID 8531403, 7 pages, 2020. https://doi.org/10.1155/2020/8531403

A Three-Step Iterative Method for Solving Absolute Value Equations

Academic Editor: Nan-Jing Huang
Received: 12 May 2020
Accepted: 17 Jun 2020
Published: 25 Jul 2020

Abstract

In this paper, we transform the problem of solving absolute value equations (AVEs) whose singular values are greater than 1 into the problem of finding a root of a system of nonlinear equations, and we propose a three-step algorithm for solving that system. The proposed method has global linear convergence and local quadratic convergence. Numerical examples show that this algorithm has high accuracy and a fast convergence speed for solving systems of nonlinear equations.

1. Introduction

In the last few decades, the system of absolute value equations has been recognized as an NP-hard and nondifferentiable problem which is equivalent to many mathematical problems, such as the generalized linear complementarity problem, the bilinear programming problem [1], the knapsack feasibility problem [2], the traveling agent problem [3], etc. However, compared with the above problems, it has a simple structure and is comparatively easy to solve, which has attracted much attention. This paper designs a three-step iterative algorithm to solve absolute value equations (AVEs) of the following type:

Ax − |x| = b, (1)

where A ∈ R^(n×n), b ∈ R^n, and |x| indicates the componentwise absolute value of x. In [4], some theoretical conclusions about the solutions of AVEs (1) are given, and sufficient conditions for the existence of unique solutions, nonnegative solutions, 2^n solutions, and no solutions are given for the first time. Subsequently, many scholars began to design effective algorithms to solve equations with different types of solutions. Mangasarian [1] proposed a generalized Newton method for solving AVEs (1) when it has a unique solution; 100 randomly generated AVEs (1) of dimension 1000 were then used to test the effectiveness of this method, which achieved high accuracy. On this basis, an improved generalized Newton algorithm with a dynamic step size was proposed [5], which further improved the accuracy of the solution. Feng and Liu [6] presented a two-step iterative algorithm which decreases the probability of a poor solution occurring in the first iteration, thus improving the global convergence of the whole algorithm. A new method for solving the system of absolute value equations was proposed in [7] using a preconditioning technique and successive over-relaxation; it was inspired by the Picard and Hermitian/skew-Hermitian splitting iteration methods for systems of nonlinear equations.
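For concreteness, the residual of AVEs (1) can be checked componentwise. The following is a minimal Python sketch (the paper's experiments used MATLAB; the matrix, vectors, and function name here are illustrative, not from the paper):

```python
# Minimal sketch: verify that a candidate x solves the AVE  A x - |x| = b
# for a hand-picked 2x2 instance (all data is illustrative, not from the paper).

def ave_residual(A, x, b):
    """Return the componentwise residual A x - |x| - b."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [Ax[i] - abs(x[i]) - b[i] for i in range(n)]

A = [[4.0, 1.0],
     [0.0, 3.0]]                     # singular values of A exceed 1
x = [1.0, -2.0]                      # candidate solution
b = ave_residual(A, x, [0.0, 0.0])   # b chosen so that x is an exact solution

print(ave_residual(A, x, b))  # -> [0.0, 0.0]
```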
Salkuyeh [8] presented the Picard-HSS iteration method for solving the system of absolute value equations, and there are also two CSCS-based iteration methods [9] and other iterative algorithms [10–12]. In addition, some iterative algorithms for solving systems of nonlinear equations are also of interest; for example, many three-step higher-order iterative algorithms have been designed to solve systems of nonlinear equations [13], and Cordero et al. proposed two iterative algorithms with fourth-order and fifth-order convergence in [14].

Considering the nondifferentiability of the absolute value equation, and taking the advantages of the above algorithms while overcoming their disadvantages, a three-step iterative algorithm is designed in this paper for effectively solving AVEs (1). Firstly, we give a three-step iterative formula which absorbs the advantages of both the classic generalized Newton method and higher-order iterative algorithms; it has the advantages of small error and fast iteration speed. Then, we prove that this method converges linearly to the global optimal solution of AVEs (1) and possesses local quadratic convergence. Finally, some numerical experiments and comparisons with both the classic generalized Newton method [1] and the two-step iterative algorithm [6] show that our method has high accuracy and a fast convergence speed.

The rest of this article is arranged as follows. Section 2 gives preliminary background on AVEs (1). In Section 3, we give the three-step iterative algorithm and prove the global convergence and local quadratic convergence of the proposed method. In Section 4, we present some numerical experiments. Section 5 ends the paper with some concluding remarks.

2. Preliminaries

We now describe some symbols and background. Let e and I be the vector of ones and the identity matrix, respectively. ||x|| is the two-norm (x^T x)^(1/2), and sign(x) is a vector with components equal to 1, 0, or −1 depending on whether the corresponding component of x is positive, zero, or negative. In addition, diag(sign(x)) is a diagonal matrix whose diagonal elements are the components of sign(x). The diagonal matrix D(x) represents a generalized Jacobian matrix of |x|:

D(x) = ∂|x| = diag(sign(x)). (2)
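The sign vector and the diagonal matrix diag(sign(x)) described above translate directly into code; a minimal sketch (the function names are ours, not the paper's):

```python
def sign_vec(x):
    """Componentwise sign of a vector: 1, 0, or -1 per entry."""
    return [(v > 0) - (v < 0) for v in x]

def D(x):
    """Generalized Jacobian of |x|: diag(sign(x)), returned as a full matrix."""
    s = sign_vec(x)
    n = len(x)
    return [[s[i] if i == j else 0 for j in range(n)] for i in range(n)]

print(sign_vec([3.5, 0.0, -2.0]))  # -> [1, 0, -1]
print(D([3.5, 0.0, -2.0]))         # -> [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
```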

We define a nonlinear function f as follows:

f(x) = Ax − |x| − b. (3)

A generalized Jacobian of f at x is

∂f(x) = A − D(x), (4)

where D(x) = diag(sign(x)).

For solving AVEs (1), a generalized Newton iteration algorithm was presented in [1], and the iteration formula is as follows:

x^(k+1) = (A − D(x^k))^(−1) b, k = 0, 1, 2, …. (5)
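For intuition, in the one-dimensional case the generalized Newton iteration of [1] reduces to the scalar update x_(k+1) = b / (a − sign(x_k)) for a·x − |x| = b with |a| > 1. A hedged sketch with made-up data (the function name and test values are ours):

```python
def sign(v):
    """Scalar sign: 1, 0, or -1."""
    return (v > 0) - (v < 0)

def newton_ave_1d(a, b, x0, tol=1e-12, max_iter=50):
    """Generalized Newton iteration for the scalar AVE  a*x - |x| = b
    (assumes |a| > 1):  x_{k+1} = b / (a - sign(x_k))."""
    x = x0
    for _ in range(max_iter):
        x_new = b / (a - sign(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# a = 3 and exact solution x* = 2, since 3*2 - |2| = 4 = b.
print(newton_ave_1d(3.0, 4.0, x0=-5.0))  # -> 2.0
```

Starting from the badly chosen x0 = −5, the iteration lands on the exact solution in two steps, illustrating the fast convergence the text attributes to the method.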

The effectiveness of this method was tested on 100 randomly generated 1000-dimensional AVEs (1) in which the singular values of A are not less than 1, and the method achieved high accuracy.

A two-step iterative method for solving AVEs (1) is presented in [6], and the iterative formula is

The example in [1] is used to verify the effectiveness of this algorithm. Compared with the generalized Newton method, its convergence speed is faster and the solution accuracy is improved.

For solving the system of nonlinear equations F(x) = 0, where F: R^n → R^n, many methods are given in [13–15], and these methods also offer many good ideas. Taking some of the advantages of the above approaches, and considering the particularity of the system of absolute value equations, this paper designs the following three-step iterative algorithm for AVEs (1).

3. Three-Step Iterative Algorithm

In this section, a three-step iterative algorithm is presented in the following form:

Based on this iterative formula (7), we design a new algorithm for solving AVEs (1).

The algorithm steps and the specific process are as follows:

Step 1. Randomly generate an initial vector x^0 for AVEs (1); set k = 0.
Step 2. Compute x^(k+1) by (7).
Step 3. If ||f(x^(k+1))|| < ε, stop; otherwise go to Step 4.
Step 4. Set k = k + 1 and go to Step 2.
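The loop structure of Steps 1–4 can be sketched in code. Since the paper's three-step formula (7) is not reproduced here, this sketch plugs in the classic generalized Newton update of [1] as the inner step; the stopping tolerance, data, and function names are all illustrative assumptions:

```python
def sign(v):
    return (v > 0) - (v < 0)

def solve2(M, r):
    """Solve a 2x2 linear system M y = r by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

def residual(A, x, b):
    """Componentwise residual A x - |x| - b."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) - abs(x[i]) - b[i]
            for i in range(n)]

def solve_ave(A, b, x0, eps=1e-10, max_iter=100):
    """Steps 1-4 of the algorithm skeleton, with the generalized Newton update
    (A - D(x^k)) x^{k+1} = b standing in for the elided three-step formula (7)."""
    x = x0
    for _ in range(max_iter):                  # Step 4: return to Step 2
        M = [[A[i][j] - (sign(x[i]) if i == j else 0)
              for j in range(2)] for i in range(2)]
        x = solve2(M, b)                       # Step 2: compute x^{k+1}
        if max(abs(r) for r in residual(A, x, b)) < eps:
            return x                           # Step 3: stop on small residual
    return x

A = [[4.0, 1.0], [0.0, 3.0]]
b = [1.0, -8.0]                                # chosen so x* = (1, -2) is exact
print(solve_ave(A, b, x0=[0.0, 0.0]))          # -> [1.0, -2.0]
```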

The global linear convergence and the local quadratic convergence of Algorithm 1 are given below.

(1) Initialization: an initial vector x^0; set k = 0.
(2) while ||f(x^k)|| > ε do
(3)   By (7), calculate x^(k+1).
(4)   By (3), calculate f(x^(k+1)).
(5)   Set k = k + 1.
(6) end while

Lemma 1 (see [4]). The minimum singular value of the matrix A is greater than 1 if and only if all the eigenvalues of A^T A are greater than 1.
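Lemma 1 can be spot-checked numerically: for a 2×2 matrix, the eigenvalues of A^T A are available in closed form via the quadratic formula, and the singular values of A are their square roots. A small sketch (the matrix is illustrative, not from the paper):

```python
import math

def eigs_sym2(S):
    """Eigenvalues of a symmetric 2x2 matrix, from trace and determinant."""
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    d = math.sqrt(tr * tr / 4 - det)
    return (tr / 2 - d, tr / 2 + d)

A = [[4.0, 1.0], [0.0, 3.0]]
# A^T A, formed entrywise.
AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
lam_min, lam_max = eigs_sym2(AtA)
sigma_min = math.sqrt(lam_min)   # smallest singular value of A

print(lam_min > 1.0, sigma_min > 1.0)  # -> True True, consistent with Lemma 1
```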

Lemma 2 (see [4]). If the minimum singular value of A exceeds 1, then for any diagonal matrix D whose diagonal elements equal ±1 or 0, (A − D)^(−1) exists.

Lemma 3 (see [1]) (Lipschitz continuity of the absolute value). Let x, y ∈ R^n; then

|| |x| − |y| || ≤ || x − y ||.
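The inequality in Lemma 3 is easy to sanity-check numerically; a small sketch with arbitrary test vectors (the data are ours):

```python
import math

def norm2(v):
    """Euclidean two-norm of a vector."""
    return math.sqrt(sum(t * t for t in v))

# Spot-check  || |x| - |y| || <= || x - y ||  on a few vector pairs.
pairs = [([3.0, -1.0], [-2.0, 4.0]),
         ([0.0, 5.0], [0.5, -5.0]),
         ([-1.0, -1.0], [2.0, 3.0])]
for x, y in pairs:
    lhs = norm2([abs(a) - abs(b) for a, b in zip(x, y)])
    rhs = norm2([a - b for a, b in zip(x, y)])
    print(lhs <= rhs + 1e-12)  # -> True for every pair
```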

Theorem 1. If the minimum singular value of the matrix A is greater than 1 and x^(k+1) = x^k for some k in the three-step algorithm (7), then x^k is a solution to AVEs (1).

Proof. We first simplify the first step in formula (7). Since x^(k+1) = x^k, it follows that Ax^k − |x^k| − b = 0. Hence, x^k is the solution to AVEs (1).

Theorem 2 (global linear convergence). If, for any diagonal matrix D whose diagonal elements are ±1 or 0, (A − D)^(−1) exists and is suitably bounded, then the three-step iterative algorithm (7) from any initial point converges linearly to a solution x* of any solvable AVEs (1).

Proof. Suppose that x* is a solution of AVEs (1), so that Ax* − |x*| = b.
The proof of Theorem 1 shows that the first step in formula (7) can be reduced to a linear equation in the intermediate iterate.
Subtracting the corresponding equation satisfied by x* and applying Lemma 3 together with the given conditions bounds the error of the first step by a contraction of the previous error. The second step in formula (7) can be simplified in the same way, and so can the third step, so the error is contracted again at each of the three steps. Hence the error converges to zero, and the sequence iterated by formula (7) linearly converges to a solution x*.

Theorem 3 (local quadratic convergence). For any diagonal matrix D with diagonal elements ±1 or 0, if A − D is nonsingular, then the sequence generated by the three-step iterative algorithm (7) converges quadratically to x*.

Proof (see [16]). We omit it here.

4. Numerical Examples

In this section, six examples of AVEs (1) are worked out to illustrate the feasibility and effectiveness of the proposed method. All the numerical computations are performed in MATLAB R2017a. We compare the proposed method (Three-SIA) with the two-step iterative algorithm [6] (Two-SIA) and the generalized Newton method [1] (GNM).

Example 1. The initial value is , and the exact solution to this problem is .

Example 2. The initial value is , and the exact solution to this problem is .

Example 3. The initial value is , and the exact solution to this problem is .

Example 4. The initial value is , and the exact solution to this problem is .

Example 5.

Example 6. For Examples 1–4, the numerical solutions and errors generated by the three algorithms in the first three iterations are given. We report the numerical solution after the i-th iteration and the error generated by the i-th iteration.
In Tables 1–4, from the numerical solutions and errors obtained by the three algorithms at each iteration, it is clear that the Three-SIA method has a fast convergence speed and high solution accuracy, especially in Examples 1 and 3.
The convergence curves of the three algorithms for solving the high-dimensional systems of absolute value equations in Examples 4 and 5 are shown in Figures 1 and 2. It can be seen from the figures that the Three-SIA method is the best in terms of convergence speed and accuracy, so Algorithm 1 is superior in convergence and solution quality for solving AVEs (1). Moreover, this algorithm takes a short time to solve low-dimensional absolute value equations, about 2 seconds, and about 10 seconds for high-dimensional equations such as 1000-dimensional ones. Overall, its efficiency is higher than that of the other algorithms.


Table 1. Numerical solutions and errors of GNM, Two-SIA, and Three-SIA for Example 1.

Table 2. Numerical solutions and errors of GNM, Two-SIA, and Three-SIA for Example 2.

Table 3. Numerical solutions and errors of GNM, Two-SIA, and Three-SIA for Example 3.

Table 4. Numerical solutions and errors of GNM, Two-SIA, and Three-SIA for Example 4.