Abstract

The absolute value equations (AVEs) are significant nonlinear and non-differentiable problems arising in the optimization community. In this article, we provide two new iteration methods for solving AVEs. These approaches are based on the fixed point principle and a splitting of the coefficient matrix involving three extra parameters. The convergence of these procedures is established through several theorems, and the validity of our methodologies is demonstrated via numerical examples.

1. Introduction

In the last few decades, the AVE has been identified as an NP-hard and non-differentiable problem that is equivalent to numerous mathematical problems, such as bimatrix games, linear and quadratic programming, contact problems, network prices, and network equilibrium problems; see [16] for more details.

We consider the AVE problem of finding an $x \in \mathbb{R}^{n}$ such that
$$Ax - |x| = b. \tag{1}$$

Here $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$, and $|x|$ signifies the vector of the absolute values of the components of $x$. Note that Eq. (1) is a special case of the following generalized AVE:
$$Ax + B|x| = b, \tag{2}$$
where $B \in \mathbb{R}^{n \times n}$. Equation (2) was introduced by Rohn [1] and further studied in [7–11].

Furthermore, AVEs can be equivalently reformulated as linear complementarity problems (LCPs); these formulations are discussed in [12–14] and the references therein. Taking the well-known LCP as an example, assume that the LCP consists of determining $z \in \mathbb{R}^{n}$ such that
$$z \ge 0, \quad Mz + q \ge 0, \quad z^{T}(Mz + q) = 0, \tag{3}$$
where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^{n}$. With $w = Mz + q$ and the substitution $x = (z - w)/2$, the system (3) can be expressed as the AVE (1) with $A = (I - M)^{-1}(I + M)$ and $b = -(I - M)^{-1}q$, provided that $I - M$ is nonsingular. Mezzadri [15] established the equivalence between horizontal LCPs and AVEs. Furthermore, the unique solvability of system (1) and its relation to mixed-integer programming and LCP have been discussed by Prokopyev [16].
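To make this reduction concrete, the following Python sketch (with hypothetical data $M$ and $q$, and assuming $I - M$ is nonsingular) builds the AVE from an LCP and verifies the residual identity behind the substitution $x = (z - w)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Hypothetical LCP data; M is shifted so that I - M is safely nonsingular.
M = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
q = rng.standard_normal(n)
I = np.eye(n)

# AVE data from the reduction:  A = (I - M)^{-1}(I + M),  b = -(I - M)^{-1} q.
A = np.linalg.solve(I - M, I + M)
b = -np.linalg.solve(I - M, q)

# For any x, set z = x + |x| and w = |x| - x; then the AVE residual
# equals (I - M)^{-1} times the LCP residual  M z + q - w.
x = rng.standard_normal(n)
z, w = x + np.abs(x), np.abs(x) - x
assert np.allclose(A @ x - np.abs(x) - b,
                   np.linalg.solve(I - M, M @ z + q - w))
```

In particular, $x$ solves the AVE (1) exactly when the pair $(z, w)$ satisfies the LCP (3).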

Recently, the problem of solving AVEs has attracted much attention and has been studied extensively in the literature. For instance, Ali et al. [17] introduced generalized successive overrelaxation (GSOR) methods for AVE (1) and provided sufficient conditions for their convergence. Zhang and Wei [18] introduced a generalized Newton approach for solving (1) and established its global and finite convergence under the condition that the interval matrix $[A - I, A + I]$ is regular. Cruz et al. [19] established an inexact semi-smooth Newton approach for the AVE (1) and showed that the approach is globally convergent under a suitable condition on $\|A^{-1}\|$. Ke [20] presented a new iteration algorithm for solving (1) and proved new convergence conditions under suitable assumptions. Moosaei et al. [21] presented two approaches for solving the NP-hard AVEs when the singular values of $A$ exceed 1. Caccetta et al. [22] investigated a smoothing Newton method for (1) and showed that this method is globally convergent under the condition $\|A^{-1}\| < 1$. Chen et al. [23] discussed the optimal parameter SOR-like iteration technique for Eq. (1). Wu and Li [24] used the shift splitting (SS) technique to develop an iterative method for the AVE (1). For further methods, see [25–27] and the references therein.

Recently, Li and Dai [28] and Najafi and Edalatpanah [29] provided methods for solving LCPs using the fixed point principle. The objective of this study is to apply this idea to AVEs and to suggest efficient approaches for solving AVE (1). We make the following contributions:
(i) We divide the matrix $A$ into several parts and then combine this splitting with the fixed point formula, which accelerates the convergence of the proposed iterative procedures.
(ii) We establish convergence conditions for the newly designed approaches under various new situations.

This paper is structured as follows. The proposed strategies for solving AVE (1) are presented in Section 2, the numerical tests are discussed in Section 3, and the conclusion is given in Section 4.

2. Suggested Methods

In this section, we propose strategies to solve AVE (1). We begin by introducing some notation and auxiliary results.

We denote the spectral radius, the infinity norm, and the tridiagonal matrix of $A$ by $\rho(A)$, $\|A\|_{\infty}$, and $\operatorname{tridiag}(A)$, respectively.

Lemma 1 (see [30]). Let $x, y \in \mathbb{R}^{n}$ be two vectors. Then $\big||x| - |y|\big| \le |x - y|$.
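A quick numerical illustration of Lemma 1 on random vectors (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(6), rng.standard_normal(6)
# Componentwise reverse triangle inequality:  | |x| - |y| | <= |x - y|
assert np.all(np.abs(np.abs(x) - np.abs(y)) <= np.abs(x - y))
```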
In order to propose and examine the new iteration methods, the matrix $A$ is split into its triangular parts, and the parameterized splitting (6) built from them involves three parameters. Here $D$, $U$, $U^{T}$, and $L$ are the diagonal, the strictly upper triangular, the transpose of the strictly upper triangular, and the strictly lower triangular parts of $A$, respectively. The AVE (1) is equivalent to the fixed point problem of finding $x$ such that
$$x = x - \Omega\left(Ax - |x| - b\right),$$
where $\Omega$ is a positive diagonal matrix (for the choice of $\Omega$, see [31, 32] for more details). By utilizing splitting (6), we offer the following two new schemes, Method I and Method II, to solve AVEs (see Appendix A); a short sketch of the triangular decomposition appears below.
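The following Python sketch illustrates the triangular decomposition just named; the sign convention $A = D - L - U$ and the sample tridiagonal matrix are illustrative assumptions, not the parameterized splitting (6) itself:

```python
import numpy as np

def triangular_parts(A):
    """Return D, L, U with A = D - L - U: D diagonal, L strictly lower,
    U strictly upper (both negated, per the usual splitting convention)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    return D, L, U

A = np.array([[8.0, -1.0, 0.0],
              [-1.0, 8.0, -1.0],
              [0.0, -1.0, 8.0]])
D, L, U = triangular_parts(A)
assert np.allclose(A, D - L - U)
assert np.allclose(U, L.T)  # for this symmetric A, U coincides with L^T
```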

2.1. Method I

2.2. Method II

Now, we study the convergence of the presented iteration methods.

Theorem 2. Assume that the system (1) is solvable and that (6) is a splitting of $A$. Then the error of Method I satisfies a componentwise bound driven by a non-negative iteration matrix. Moreover, if the spectral radius of this iteration matrix is less than one, the sequence generated by Method I converges to the unique solution of the system (1).

Proof. Suppose $x^{*}$ is a solution of system (1); then $x^{*}$ satisfies the corresponding fixed point equation (14). Subtracting (14) from (10) and taking absolute values on each side, we can apply Lemma 1. Since the coefficient matrix on the left-hand side is invertible and its inverse is non-negative, we obtain a componentwise error bound whose iteration matrix is non-negative. Based on [31, 32], if the spectral radius of this matrix is less than one, then the sequence generated by Method I converges to the solution of system (1).
Uniqueness: suppose that $y^{*}$ is another solution of the AVE. Subtracting the fixed point equations satisfied by $x^{*}$ and $y^{*}$, we obtain the same componentwise bound for $|x^{*} - y^{*}|$; since the spectral radius of the iteration matrix is less than one, it follows that $x^{*} = y^{*}$. The proof is complete.
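The sufficient condition of Theorem 2 can be checked numerically once the iteration matrix is assembled. The sketch below shows the generic test on a hypothetical non-negative matrix $T$ standing in for the actual iteration matrix:

```python
import numpy as np

def spectral_radius(T):
    """rho(T): the largest eigenvalue magnitude of T."""
    return np.max(np.abs(np.linalg.eigvals(T)))

# Hypothetical non-negative iteration matrix (a stand-in for Method I's).
T = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.1],
              [0.0, 0.1, 0.2]])
assert spectral_radius(T) < 1  # the error contracts, so the iteration converges
```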

Theorem 3. Consider the sequences generated by Method II. Then the error satisfies a componentwise bound driven by a non-negative iteration matrix. Moreover, if the spectral radius of this matrix is less than one, the sequence generated by Method II converges to the unique solution of the system (1).

Proof. Suppose $x^{*}$ is a solution of system (1); then $x^{*}$ satisfies (26). Subtracting (26) from (11), taking absolute values on both sides, and using Lemma 1, we obtain a componentwise error bound. Based on Theorem 4.1 of [31] and Theorem 3.1 of [28, 29], if the spectral radius of the associated non-negative iteration matrix is less than one, the iteration sequence created by Method II converges.
The proof of uniqueness is similar to that in Theorem 2 and is omitted here.

3. Numerical Examples

In this section, five examples are provided to analyze the performance of the proposed methods from three standpoints:
(i) ‘Iter’ indicates the number of iteration steps.
(ii) ‘Time’ denotes the CPU time in seconds.
(iii) ‘RES’ signifies the 2-norm of the residual vectors.

Here, ‘RES’ is determined by
$$\mathrm{RES} := \left\| A x_{k} - |x_{k}| - b \right\|_{2}.$$
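In code, this residual norm is a direct transcription of the definition above:

```python
import numpy as np

def res(A, x, b):
    """2-norm of the AVE residual  A x - |x| - b."""
    return np.linalg.norm(A @ x - np.abs(x) - b)
```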

All calculations were done on a computer with an Intel(R) Core(TM) i5-3337U CPU (1.80 GHz) and 4 GB of memory, using MATLAB R2016a. In Examples 4 and 5, the same fixed starting guess $x_{0}$ is used.

First, we use numerical experiments to verify the convergence conditions of Theorems 2 and 3. Table 1 reports the outcomes.

In Table 1, we checked the convergence conditions of both theorems numerically; clearly, the two methods meet these conditions. To examine the performance of our new methods, we consider the following tests.

Example 4. Consider the AVE (1) with $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. The outcomes for several problem sizes and parameter values are shown in Table 2, and the graphs are displayed in Figures 1 and 2, respectively.

In Table 2, the given methods compute the AVE solution for different problem sizes and parameter values. We observe that increasing the parameter value makes the convergence of the given strategies quicker. The curves in Figures 1 and 2 display the effectiveness of the given procedures and show that convergence is faster when the parameter value is larger.

Example 5. Consider the AVE (1) with $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. The outcomes are presented in Table 3, and the graphical representations are shown in Figures 3 and 4, respectively.

In Table 3, we present the convergence behavior of the given methods for the considered problem sizes and parameter values. Clearly, the larger the parameter value, the faster the convergence of the given approaches. The corresponding graphs in Figures 3 and 4 illustrate the efficiency of the suggested approaches across these values.

Example 6. Consider the AVE (1) with $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. This example uses the same stopping criterion and starting guess as in [24]. Moreover, we compare the new techniques with the procedure described in [20] (denoted by AM) and the shift splitting iteration method reported in [24] (denoted by SS). The outcomes are presented in Table 4.

The results in Table 4 show that all methods solve the problem efficiently and accurately. Our techniques outperform the existing AM and SS methods in terms of both iteration steps (Iter) and solving time (Time).

Example 7. Consider the AVE (1) with a coefficient matrix assembled from Kronecker products, where $I$ is the identity matrix and $\otimes$ denotes the Kronecker product; $A$ and $b$ are defined blockwise accordingly. In Examples 7 and 8, we use the same starting guess and stopping criterion as given in [23]. Moreover, we compare the recommended techniques with the process shown in [23] (denoted by SRM) and the iteration scheme introduced in [17] (denoted by GSOR). These data are reported in Table 5.

In Table 5, all techniques compute the solution for various problem sizes. Based on these data, our recommended procedures provide better results than both the SRM and GSOR procedures.

Example 8. We consider the AVE (1) with $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. The data are reported in Table 6.

Based on Table 6, we observe that all procedures solve the problem efficiently and accurately. Our techniques are clearly more beneficial than the existing SRM and GSOR processes with respect to the iteration steps (Iter) and the solving time (Time).

4. Conclusion

We have introduced two novel iterative techniques for solving the AVE (1) and demonstrated that the offered approaches converge to the solution of the system (1) under proper selections of the involved parameters. The effectiveness of the recommended methods has also been evaluated numerically. The numerical results indicate that the suggested strategies are effective for large and sparse AVEs. For future research, the theoretical comparison and analysis of these iteration methods are of great interest.

Appendix

Here, we describe how to implement the suggested methods.

A. Method I

B. Method II

In both methods, the right-hand side also contains $|x_{k+1}|$, which is unknown. From (1) we have
$$x = A^{-1}\left(|x| + b\right).$$

Therefore, the unknown term can be approximated by lagging it one iteration:
$$x_{k+1} \approx A^{-1}\left(|x_{k}| + b\right).$$

This technique is named the Picard technique [27]; a minimal sketch of it is given below. We then present the algorithms for the proposed methods.
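The following Python sketch implements the Picard technique as described above; the test matrix, tolerance, and iteration cap are illustrative choices, not values from the experiments:

```python
import numpy as np

def picard_ave(A, b, x0=None, tol=1e-6, max_iter=500):
    """Picard iteration for A x - |x| = b: the unknown absolute value on
    the right-hand side is lagged one step, x_{k+1} = A^{-1}(|x_k| + b)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        x = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            return x, k + 1
    return x, max_iter

# Illustrative test: A = tridiag(-1, 8, -1), exact solution x* = (1, ..., 1)^T.
n = 100
A = 8 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = A @ np.ones(n) - np.ones(n)
x, iters = picard_ave(A, b)
print(iters, np.linalg.norm(x - np.ones(n)))
```

Methods I and II follow the same lagging idea but replace the plain solve with the parameterized splitting (6).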

B.1. Algorithms for Method I and Method II

Step 1: Select the parameters, a starting vector $x_{0}$, and set $k = 0$.

Step 2: Compute $|x_{k}|$.

Step 3: Calculate $x_{k+1}$ via (10) (Method I).

Step 4: Calculate $x_{k+1}$ via (11) (Method II).

Step 5: Stop if the stopping criterion is satisfied. Otherwise, set $k = k + 1$ and return to Step 2.
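Putting Steps 1–5 together, a generic driver can take the shape below; `method_update` is a hypothetical placeholder for the update formulas (10) or (11) of Method I or Method II:

```python
import numpy as np

def run_method(A, b, method_update, params, x0, tol=1e-6, max_iter=1000):
    """Generic driver for Method I / Method II (Steps 1-5).

    method_update: placeholder callable mapping the current iterate x_k
    to x_{k+1} according to (10) or (11); params holds the three
    splitting parameters and the positive diagonal matrix Omega.
    """
    x = x0.copy()                                         # Step 1
    for k in range(max_iter):
        x = method_update(A, b, x, params)                # Steps 2-4
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:  # Step 5
            return x, k + 1
    return x, max_iter
```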

Note that the idea behind considering these structures in Method I and Method II comes from [28, 29]. Several authors have discussed the use of such methods for the solution of LCPs; see [33, 34] and the references therein. In our study, we applied this concept to AVEs. In addition, the concept of the matrix $\Omega$, whose diagonal contains positive entries, is inspired by the work of [31, 32].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are thankful to the Deanship of Scientific Research, King Khalid University, Abha, Saudi Arabia, for financially supporting this work through the General Research Project under Grant no. R.G.P.2/160/43.