Abstract

In this work, the following system of nonlinear matrix equations is considered, where are arbitrary matrices and is the identity matrix of order . Conditions for the existence of a positive-definite solution are established, together with a convergence analysis of the newly developed algorithm for finding the maximal positive-definite solution and an estimate of its convergence rate. Four examples are provided to support the results.

1. Introduction and Preliminaries

Consider the system of matrix equations of the form where is a Hermitian positive-definite matrix (HPD, for short), are complex matrices of order , , , , , , and are mappings from the set of positive-definite matrices to itself, and is the conjugate transpose of .

We can see that the above equations incorporate several linear as well as nonlinear matrix equations (NMEs, for short). Over the last few decades, many researchers have studied systems (1) and (2) with different choices of , and , :
For (1), , , and , and [1]
For (1), , , , and [2, 3]
For (1), , , and [4]
For (1), , , and , [5, 6]
For (1), , , and , [7]
For (1) and (2), , , , and , and [8]
For (1) and (2), , , , and , and [9, 10]
For (1) and (2), , , , and , and [11]

For different types of applications of the Riccati equation, one can check [4, 12, 13].

Throughout the article, denotes the set of real numbers, denotes the set of natural numbers, , denotes the set of positive numbers, and .

(resp. and ) denotes the set of all Hermitian (resp. positive semidefinite and positive definite) matrices over and stands for the set of all matrices over .

For a matrix , denotes its singular values and denotes the sum of these values, i.e., the trace norm of . The Frobenius norm will be denoted by . For , (resp. ) indicates that is positive semidefinite (resp. positive definite). and stand for the zero and unit matrix in , respectively.
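For concreteness, the quantities just defined can be computed numerically. The sketch below (a NumPy illustration on arbitrary example matrices, not data from the paper) evaluates the singular values, the trace norm, and the Frobenius norm, and tests positive definiteness and the Loewner order via eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])

# Singular values of A
sigma = np.linalg.svd(A, compute_uv=False)

# Trace (nuclear) norm: the sum of the singular values
trace_norm = sigma.sum()

# Frobenius norm
fro_norm = np.linalg.norm(A, "fro")

def is_positive_definite(M, tol=1e-12):
    """Check M > 0 for a Hermitian matrix via its eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(M) > tol))

# X >= Y in the Loewner order iff X - Y is positive semidefinite
X = np.array([[3.0, 0.0], [0.0, 3.0]])
Y = np.array([[1.0, 0.0], [0.0, 2.0]])
loewner_ge = bool(np.all(np.linalg.eigvalsh(X - Y) >= -1e-12))
```

Note that the trace norm always dominates the Frobenius norm, since summing singular values dominates the root of the sum of their squares.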

2. Conditions in Support of the Existence of a Positive Definite Solution

We begin with two useful lemmas.

Lemma 1 (see [14]). If and are Hermitian matrices having the same order, then .

Lemma 2 (see [15]). If , then , for all .

Now, let us consider the system of NMEs of the form

Taking and , the system of nonlinear matrix equations (3) is equivalent to

Theorem 1. Let be a positive-definite solution (PDS, for short) of (4). Then, and .

Proof. Suppose that system (4) has a PDS . Then, from Lemma 2, which gives . This, in turn, implies that and .

Lemma 3 (see Theorem 2.1 in [16]). Let and be positive operators acting on a Hilbert space such that , , and . Then, hold for any .

Remark 1. In the case that in Lemma 3, we have and .

Theorem 2. Suppose that system (4) has a PDS . Then,
(i) and
(ii) , , , and
(iii) if is nonsingular, if is nonsingular, if is nonsingular, and if is nonsingular

Proof. To prove (i), since is a PDS of the NMEs (4), we have and . Together with (4), this gives and To prove (ii), from (6), we have Similarly, by (9), we can obtain and .
To prove (iii), if is nonsingular, then, by , we obtain We have Since , where and are the maximal and the minimal singular values of the matrix , respectively, by (11) and (12) and Remark 1, we obtain .
For the case of nonsingular matrices , , and , we can obtain similarly.

Theorem 3. System (4) has a PDS if and only if admit the following factorizations: where the matrices are nonsingular and the columns and are orthonormal. In this case, is a solution of (4).

Proof. If the system of NMEs (4) has a PDS , then for some nonsingular matrices . Rewrite equation (4) as or or, equivalently, Let Then, Now, (16) implies that and are orthonormal. Conversely, assume that have decomposition (13). Set . Then, which implies that is a PDS of (4).

Example 1. We can find (see Theorem 7.2.7, page 440, in [17]) for the following numerical experiment (2), which satisfies all the conditions of Theorem 3: Also, it is easy to derive It meets all of the requirements of Theorem 2.

3. Construction of Iteration Schemes

This section develops a new iteration scheme for the NMEs (4), in the spirit of [2, 11].

By pre- and postmultiplying the first and the second equation of (4), respectively, by and , we obtain

After some simple calculation, we have

Clearly, is a solution of (23) whenever solves equation (4). Conversely, if is a nonsingular solution of equation (23), then solves equation (4) as well.

Thus, to obtain an HPDS of (4), we need to solve (23). From (23), the iterative scheme is given as follows.

4. Convergence Analysis

This section proves that the sequences generated by Algorithm 1, with initial conditions and , converge to the minimal positive-definite solution of equation (4).

Step 1: input .
Take and for initialization and error tolerance . Set .
Step 2: compute and by the following iterative scheme:
Step 3: stop if . Otherwise, set , and go to Step 2.
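The exact update formulas of Algorithm 1 are not recoverable from this text, so the sketch below is a hedged illustration only: it applies the same fixed-point pattern (initialize, iterate, stop when the Frobenius-norm change falls below the tolerance) to the classical model equation X + AᴴX⁻¹A = Q, a well-studied member of this family of NMEs, not the authors' exact system (4).

```python
import numpy as np

def fixed_point_nme(A, Q, tol=1e-10, max_iter=500):
    """Basic fixed-point iteration for the model equation
    X + A^H X^{-1} A = Q (an assumed stand-in for system (4)).
    Starts from X_0 = Q and updates X_{k+1} = Q - A^H X_k^{-1} A,
    stopping when the Frobenius norm of the change drops below tol."""
    X = Q.copy()
    for k in range(max_iter):
        # np.linalg.solve(X, A) computes X^{-1} A without forming X^{-1}
        X_next = Q - A.conj().T @ np.linalg.solve(X, A)
        if np.linalg.norm(X_next - X, "fro") < tol:
            return X_next, k + 1
        X = X_next
    raise RuntimeError("no convergence within max_iter iterations")

# Example: a small A guarantees a positive-definite solution exists
A = 0.1 * np.array([[1.0, 2.0], [0.0, 1.0]])
Q = np.eye(2)
X, iters = fixed_point_nme(A, Q)
residue = np.linalg.norm(X + A.conj().T @ np.linalg.solve(X, A) - Q, "fro")
```

For this model equation, iterating from X₀ = Q produces a monotonically decreasing sequence bounded below by a solution, which mirrors the monotone-convergence argument used in Theorem 4.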

Theorem 4. Suppose that the system of NMEs (4) has a PDS and the sequence is generated by Algorithm 1 with initial values and . Let the pair be the minimal PDS of (4). Then, the sequence is well defined:

Proof. Let be any PDS of the system of NMEs (4). We will prove that and for all using mathematical induction.
For , we have and . Then, by (24), we have By Theorem 1, . Using this condition in Lemma 1 together with the NMEs (4), since . That is, . By Lemma 2, this implies that . Using this fact in Lemma 1 together with the NMEs (4), we obtain Hence, and hold true for .
Assume that and hold for . Since , by equation (24), Lemma 1, and the fact , we obtain that is, . Then, by this condition and Lemma 2, we have . Using this fact in Lemma 1 together with the NMEs (22) and the fact , we have Also, from the NMEs (24), we get that Next, using the fact together with the NMEs (24), Lemma 1, and the fact , it follows that that is, and . Using this fact with in equation (31), we have . Using this fact with Lemma 2, we have This fact, together with the NMEs (24), Lemma 1, and the fact , yields that is, or . Thereby, From the above, and hold for .
Thus, by the principle of induction, the relations and hold for all . So, the sequence is well defined, increasing, and bounded above. Let Then, is a PDS of the NMEs (4) by Algorithm 1. Since for any PDS of NMEs (4), is the minimal PDS of NMEs (4).

Remark 2. If is the maximal PDS of the system of NMEs (3), then, by the relationship between the NMEs (3) and (4), is the minimal PDS of NMEs (4). Therefore, by Theorem 4, the sequences generated by Algorithm 1 with the initializations and converge to the inverse of the maximal PDS of NMEs (3).

5. Rate of Convergence

Lemma 4 (see [18]). If and with , then, for every unitarily invariant norm, and .

Theorem 5. If is a PDS of the matrix equation (4) under Algorithm 1 and is arbitrary, then the following hold.
(A) and
(B) and
for all sufficiently large .

Proof. To prove (A), let be the PDS of equation (4). Then, from Algorithm 1, Thus, since , we have Now, since , using Lemma 4, we have which is (35).
Since , using Lemma 4, we obtain relation (38).
In the same way, we can prove (B).

6. Numerical Examples

Two examples are presented in this section in support of Algorithm 1. Take the residue as , the tolerance as  = 1e − 10, and the norm as the Frobenius norm. For comparative analysis, a basic fixed-point algorithm (BFP, for short) has been used:

These examples, together with tables and graphs showing different input data, solutions, iteration counts of the different schemes, errors, CPU time, average computational time, etc., are illustrated here. For better understanding, we have used line graphs, bar graphs, pie charts, and surface plots.
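The average-CPU-time figures described above can be produced with a simple timing harness. The sketch below is an assumption about the protocol (ten repeated runs with wall-clock timing, as suggested by the "ten experiments" mentioned later), and the workload passed in is a placeholder rather than the authors' solver.

```python
import time
import numpy as np

def average_cpu_time(solver, n_runs=10):
    """Average elapsed time of `solver` over n_runs experiments,
    as used for the average-CPU-time pie and bar charts."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()  # high-resolution wall clock
        solver()
        times.append(time.perf_counter() - t0)
    return sum(times) / len(times)

# Placeholder workload standing in for one run of Algorithm 1 or BFP
avg = average_cpu_time(lambda: np.linalg.inv(np.eye(50)))
```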

Example 2. Take After applying Algorithm 1 with the initial conditions and , we obtain .
Table 1 presents a comparison between our algorithm and the basic fixed point algorithm.
Figures 1 and 2 show the CPU-time graph and the iteration-versus-error graph, respectively.
Figures 3 and 4 show the pie chart of the average CPU time over ten experiments and the surface plot of the solution, respectively.

Example 3. In this example, a new, randomly chosen set of coefficients is used. The construction is as follows: Here, “r” and “fro” stand for the dimension and the Frobenius norm of the matrix, respectively. A different set of initial conditions has been chosen here as The result of this experiment using Algorithm 1 is shown in Table 2.
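The random construction of Example 3 (dimension r, rand, Frobenius-norm scaling) can be reproduced along the following lines; the exact scaling and the seed are assumptions, since the formulas themselves are not preserved in the text.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def random_coefficient(r):
    """Random r x r coefficient scaled by its Frobenius norm
    (an assumed normalization; the paper's exact formula is lost)."""
    M = rng.random((r, r))  # rand(r): uniform entries in [0, 1)
    return M / np.linalg.norm(M, "fro")

r = 5
A = random_coefficient(r)
tol = 1e-10  # tolerance value stated in the text
```

Scaling by the Frobenius norm keeps the coefficients small enough that fixed-point iterations of this type remain well behaved as r grows.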

Example 4. Here, we consider some randomly generated matrices: where r = dimension of the matrices, rand(r) = random matrix of order , and tol = 1e − 10. After applying Algorithm 1 (Algo1) and the basic fixed-point method under the initial conditions and , we obtain Table 3, which shows the various outputs for different r. The fifth column of Table 3 confirms the positive definiteness of the solutions in Example 4 for matrices of different orders.
The associated graphs (Figures 5–25) correspond to:
(i) dim = 3
(ii) dim = 5
(iii) dim = 8
(iv) dim = 12
(v) dim = 20
(vi) dim = 32
(vii) dim = 64
Figure 26 shows the average CPU time as bar graphs for the different dimensions.

Remark 3. After evaluating these examples with respect to different sets of parameters, we can infer from the above discussions that our algorithm is computationally less expensive.

Remark 4. We can infer from the above discussions that the solutions are positive definite.

7. Conclusion

In this study, a new iterative algorithm has been developed. All numerical tests are in agreement with the theoretical findings of this research. Based on the numerical results, we conclude that the new iterative approach is powerful and efficient in finding numerical solutions for a wide range of nonlinear matrix equations, including complex ones. It also produces very accurate results in fewer iterations and at lower computational cost compared with the basic fixed-point approach.

Data Availability

No data were used to support the findings of the study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Science and Engineering Research Board (SERB), India, for providing project funding under the scheme-CRG/2018/000615.