Abstract

In this paper, we present two new generalized Gauss-Seidel (NGGS) iteration methods for solving the absolute value equation $Ax - |x| = b$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$ are given and $x \in \mathbb{R}^{n}$ is the unknown solution vector. Convergence results are established under mild assumptions, and numerical experiments demonstrate the effectiveness of the proposed approaches.

1. Introduction

Consider the absolute value equation (AVE)
$$Ax - |x| = b, \qquad (1)$$
where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$, and $|x|$ denotes the componentwise absolute value of the unknown vector $x \in \mathbb{R}^{n}$. A more general form of the AVE is
$$Ax + B|x| = b, \qquad (2)$$
where $B \in \mathbb{R}^{n \times n}$. When $B = -I$, where $I$ denotes the identity matrix, Eq. (2) reduces to the special form (1). AVEs are significant nondifferentiable and nonlinear problems that arise in optimization, e.g., in linear programming, journal bearings, convex quadratic programming, linear complementarity problems (LCPs), and network prices [1-13].
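
As a small illustration (with an arbitrarily chosen matrix and a prescribed solution, both hypothetical), an AVE of the form (1) with a known solution can be generated and its residual checked in MATLAB:

% Build a small AVE A*x - |x| = b with a prescribed (hypothetical) solution.
n = 4;
A = 5*eye(n) + rand(n);                    % arbitrary strictly diagonally dominant matrix
x_star = randn(n, 1);                      % prescribed solution
b = A*x_star - abs(x_star);                % right-hand side consistent with (1)
disp(norm(A*x_star - abs(x_star) - b))     % residual of (1); zero by construction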

Numerical techniques for AVEs have received considerable attention in recent years, and several approaches have been suggested. Li [14] proposed a preconditioned AOR iterative technique for solving AVE (1) and established new convergence results for the suggested scheme. Ke and Ma [15] introduced an SOR-like approach for solving AVE (1). Chen et al. [16] studied the idea of [15] further and presented an SOR-like approach with optimal parameters. Huang and Hu [17] reformulated the AVE as a standard LCP without any additional assumption and derived existence and convexity results for AVE (1). Fakharzadeh and Shams [18] recommended a mixed-type splitting (MTS) iterative scheme for solving AVE (1) and established new convergence properties. Zhang et al. [19] developed an algorithm that transforms the AVE into a convex optimization problem. Caccetta et al. [20] examined a smoothing Newton technique for solving (1) and showed that it is globally convergent under suitable conditions on $A$. Saheya et al. [11] investigated smoothing-type techniques for solving (1) and showed that their techniques enjoy global and local quadratic convergence. Gu et al. [21] proposed a nonlinear CSCS-like approach as well as a Picard-CSCS approach for solving (1) when the involved matrix is a Toeplitz matrix. Wu and Li [22] developed a special shift splitting technique for AVE (1) and demonstrated new convergence results for the approach. Edalatpour et al. [23] established a generalized Gauss-Seidel (GGS) technique for solving (1) and analyzed its convergence properties; see also [24-35] and the references therein.

This article describes two new iterative approaches for solving AVEs. The main contributions of the article are as follows: (i) we extend the GGS technique [23] to a more general setting by introducing two additional parameters that can accelerate the convergence of the procedure; (ii) we establish several new conditions under which the convergence of the newly developed methods is guaranteed.

The remainder of this paper is organized in the following manner. In Section 2, we present some notations and a lemma that will be used throughout the remainder of this study. In Section 3, we propose the NGGS procedures and discuss their convergence. We demonstrate the efficiency of these algorithms in Section 4 by providing numerical examples. In the last section, we make concluding remarks.

2. Preliminaries

Here, we briefly review the notation and concepts used throughout this article.

For $x \in \mathbb{R}^{n}$, we denote the norm and the componentwise absolute value of $x$ by $\|x\|$ and $|x|$, respectively. A matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is called a $Z$-matrix if $a_{ij} \le 0$ for $i \ne j$, and an $M$-matrix if it is a nonsingular $Z$-matrix with $A^{-1} \ge 0$.

Lemma 1 (see [36]). For any $x, y \in \mathbb{R}^{n}$, we have $\| |x| - |y| \| \le \| x - y \|$.
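
Assuming Lemma 1 takes this standard form and taking $\|\cdot\|$ to be the infinity norm used later in the proofs, the bound follows entrywise from the reverse triangle inequality:
$$\bigl||x_i| - |y_i|\bigr| \le |x_i - y_i|, \quad i = 1, \dots, n, \qquad \text{so} \qquad \max_{1 \le i \le n} \bigl||x_i| - |y_i|\bigr| \le \max_{1 \le i \le n} |x_i - y_i| = \|x - y\|_{\infty}.$$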

3. NGGS Iteration Methods

Here, we present the suggested methods (NGGS Method I and NGGS Method II) for solving AVE (1).

3.1. NGGS Method I for AVE

Recall that AVE (1) has the following form:

Multiplying, we then get

Let $A = D + L + U$, where $D$ is the diagonal part of $A$, and $L$ and $U$ are the strictly lower and strictly upper triangular parts of $A$, respectively. Furthermore, , where  and  denote the identity matrix. Using (3) and (4), NGGS Method I is suggested as

Using this iterative scheme, (5) can be written as

where  and  are defined in the Appendix. Note that if  and , then Eq. (6) reduces to the GGS method [23].
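
As a rough illustration of the Gauss-Seidel flavor of such schemes (a minimal sketch only, without the two acceleration parameters and therefore not the exact update (6)), one sweep for $Ax - |x| = b$ can be written in MATLAB as follows:

% Minimal sketch: one Gauss-Seidel-type sweep for A*x - |x| = b.
% Each scalar equation a_ii*t - |t| = r_i is solved exactly, assuming a_ii > 1,
% and the absolute value term uses the most recently updated entries of x.
function x = gs_ave_sweep(A, b, x)
n = length(b);
for i = 1:n
    r = b(i) - A(i,:)*x + A(i,i)*x(i);   % r_i = b_i - sum_{j~=i} a_ij*x_j
    if r >= 0
        x(i) = r/(A(i,i) - 1);           % nonnegative branch of a_ii*t - |t| = r_i
    else
        x(i) = r/(A(i,i) + 1);           % negative branch
    end
end
end

Repeating such sweeps until the residual of (1) is sufficiently small yields a basic Gauss-Seidel-type solver; the NGGS methods additionally exploit the two parameters introduced above.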

The next step in the analysis is to verify the convergence of NGGS method I by using the following theorem.

Theorem 2. Suppose that AVE (1) is solvable, and let the diagonal values of  and the matrix  be strictly row-wise diagonally dominant. If condition (7) holds, then the sequence generated by NGGS Method I converges to the unique solution of AVE (1).

Proof. We first prove . Clearly, if we put , then . If we assume that , we get
if we take

Multiplying both sides by , we get



where and . Also, we have
Thus, from (8) and (9), we get


So, we obtain

Uniqueness: Let  and  be two different solutions of AVE (1). Using (5), we get
From (10) and (11), we get

Based on Lemma 1 and Eq. (7), the above equation can be expressed as follows:




which is a contradiction. Thus, the solution of AVE (1) is unique.

Convergence: We consider  as the unique solution of AVE (1). Consequently, from (10) and

we deduce

By taking the infinity norm and applying Lemma 1, we have

and since , it follows that

According to the inequality above, the presented approach converges to the solution when condition (7) is met.
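
In practice, the dominance assumption of Theorem 2 is easy to verify; for instance, with a short MATLAB helper such as the following (a hypothetical utility, is_strictly_dd):

% Hypothetical helper: check strict row-wise diagonal dominance of A.
function tf = is_strictly_dd(A)
d = abs(diag(A));                 % |a_ii|
offdiag = sum(abs(A), 2) - d;     % sum over j ~= i of |a_ij|
tf = all(d > offdiag);            % strict dominance must hold in every row
end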

3.2. NGGS Method II for AVE

In this section, we describe NGGS Method II. Based on (3) and (4), the suggested method for solving AVE (1) can be expressed as follows (see the Appendix):

In the following, we will examine the convergence results for NGGS method II.

Theorem 3. Suppose that AVE (1) is solvable, and let the diagonal values of  and  be row-wise diagonally dominant. Then the sequence generated by NGGS Method II converges to the unique solution of AVE (1).

Proof. The uniqueness can be inferred directly from Theorem 2. For convergence, consider

From (4) and (12), we have

Therefore, the limit of the sequence solves AVE (1), which completes the proof.

4. Numerical Tests

Here, four examples are provided to illustrate the performance of the new approaches from three different perspectives: (i) the number of iterations (denoted by "Itr"); (ii) the computational time in seconds (denoted by "Time"); (iii) the residual error (denoted by "RSV").

Here, “RSV” is defined by

All numerical tests were conducted on a personal computer with a 1.80 GHz CPU (Intel(R) Core(TM) i5-3337U) and 4 GB of memory using MATLAB R2016a. In addition, the zero vector is used as the initial vector for Example 4.
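
For reference, given an iterate x, a residual error of the kind reported as "RSV" is commonly evaluated as a relative residual of (1); an assumed MATLAB form is

% Assumed form of the residual error for an iterate x of AVE (1).
rsv = norm(A*x - abs(x) - b)/norm(b);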

Example 4. Let

Calculate  with  such that . Here, the proposed methods are compared with two existing methods: the SOR-like technique with optimal parameters from [16] (denoted by SLM, with its parameter chosen as in [16]) and the shift splitting iteration approach from [22] (denoted by SM). The results are provided in Table 1.

Table 1 presents the results for various values of . The comparison shows that our proposed techniques are more efficient than the SLM and SM approaches in terms of "Itr" and "Time."

Example 5. Consider and with
where , ,  being a unit matrix, and . For Examples 5 and 6, we use the same stopping criterion and initial guess as in [18]. The recommended methods are compared with the AOR approach [14] and the mixed-type splitting (MTS) iterative technique [18]. The outcomes are summarized in Table 2.

In Table 2, we present the numerical results of the AOR method, the MTS method, NGGS Method I, and NGGS Method II. Our results indicate that the proposed methods are more effective than both the AOR and MTS approaches.

Example 6. Consider and with
where  and . The findings are summarized in Table 3.

Table 3 presents the results for various values of . The comparison shows that our proposed techniques are more efficient than the AOR and MTS approaches in terms of "Itr" and "Time."

Example 7. Let
and . Applying the same stopping criterion and initial guess as in [37], we compare the new approaches with the technique of [37] (denoted by SA, with its parameter chosen as in [37]) and the SOR-like technique of [15] (denoted by SOR).

Table 4 shows that all tested techniques can solve AVE (1) quickly. However, the "Itr" and "Time" values of the proposed approaches are smaller than those of the other known approaches. In conclusion, we find that the proposed approaches are feasible and useful for AVEs.

5. Conclusions

In this work, two novel NGGS approaches are presented for solving AVEs, and their convergence properties are discussed in detail. Numerical experiments demonstrate their effectiveness and show that the recommended procedures require fewer iteration steps and less computing time than the existing methods.

Appendix

Here, we describe the implementation of the novel methods. From (1), we have
$$Ax = |x| + b.$$

Thus, we can approximate the solution of (1) as follows:
$$x^{(k+1)} = A^{-1}\bigl(|x^{(k)}| + b\bigr), \qquad k = 0, 1, 2, \ldots$$

This approach is known as the Picard approach [9]. Our next discussion concerns the algorithm for NGGS Method I.
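
As a sketch (assuming the Picard form above and a hypothetical residual-based stopping rule with tolerance tol), the Picard approach can be implemented in MATLAB as follows:

% Sketch of the Picard approach for A*x - |x| = b.
function x = picard_ave(A, b, x0, tol, maxit)
x = x0;
for k = 1:maxit
    x_new = A\(abs(x) + b);                     % x^{k+1} = A^{-1}(|x^k| + b)
    if norm(A*x_new - abs(x_new) - b) <= tol    % assumed residual-based stopping rule
        x = x_new;
        return
    end
    x = x_new;
end
end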

The algorithm for NGGS Method I is as follows: (1) Select the parameters  and , an initial guess , and put . (2) Compute . (3) Calculate . (4) If , then end; otherwise, put  and go to step 2.
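
Schematically (with the per-iteration update abstracted as a function handle update, a hypothetical name, since steps (2) and (3) depend on the parameter-dependent formulas above), the loop structure of the algorithm reads in MATLAB:

% Schematic driver for an NGGS-type iteration; `update` stands in for the
% parameter-dependent steps (2)-(3) and is not specified here.
function x = nggs_driver(update, A, b, x0, tol, maxit)
x = x0;
for k = 1:maxit
    x = update(x);                          % steps (2)-(3): one NGGS update
    if norm(A*x - abs(x) - b) <= tol        % step (4): assumed residual-based test
        return
    end
end
end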

Similar considerations apply to the NGGS Method II.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author has no conflicts of interest regarding this submission.

Acknowledgments

The author would like to thank the anonymous referees for their significant comments and suggestions.