Abstract

This article presents a general six-step discrete-time Zhang neural network (ZNN) for time-varying tensor absolute value equations. Firstly, based on the Taylor expansion theory, we derive a general Zhang et al. discretization (ZeaD) formula, i.e., a general Taylor-type 1-step-ahead numerical differentiation rule for the first-order derivative approximation, which contains two free parameters. Based on the bilinear transform and the Routh–Hurwitz stability criterion, the effective domain of the two free parameters, which ensures the convergence of the general ZeaD formula, is analyzed. Secondly, based on the general ZeaD formula, we design a general six-step discrete-time ZNN (DTZNN) for time-varying tensor absolute value equations (TVTAVEs), whose steady-state residual error changes in a higher-order manner than those presented in the literature. Meanwhile, the feasible region of its step size, which determines its convergence, is also studied. Finally, experimental results corroborate that the general six-step DTZNN model is quite efficient for TVTAVE solving.

1. Introduction

Tensors are higher-order generalizations of matrices: a first-order tensor is a vector, and a second-order tensor is a matrix. Let $\mathbb{R}$ be the real field and $\mathbb{R}^{[m,n]}$ be the set of $m$-th-order $n$-dimensional tensors over the real field $\mathbb{R}$. In this paper, we are concerned with the following time-varying tensor absolute value equations (TVTAVEs):

$$\mathcal{A}(t)x^{m-1}(t) - |x(t)|^{[m-1]} = b(t), \quad (1)$$

where $\mathcal{A}(t) \in \mathbb{R}^{[m,n]}$, $x(t) \in \mathbb{R}^n$, $b(t) \in \mathbb{R}^n$, $t$ denotes the time variable, $\mathcal{A}(t)x^{m-1}(t)$ is some vector in $\mathbb{R}^n$ with

$$\big(\mathcal{A}(t)x^{m-1}(t)\big)_i = \sum_{i_2,\dots,i_m=1}^{n} a_{i i_2 \cdots i_m}(t)\, x_{i_2}(t) \cdots x_{i_m}(t), \quad i = 1, \dots, n, \quad (2)$$

and $|x(t)|^{[m-1]}$ is a vector in $\mathbb{R}^n$ with

$$\big(|x(t)|^{[m-1]}\big)_i = |x_i(t)|^{m-1}, \quad i = 1, \dots, n. \quad (3)$$

When $\mathcal{A}$ and $b$ are time invariant, the TVTAVE (1) reduces to the tensor absolute value equations [1]:

$$\mathcal{A}x^{m-1} - |x|^{[m-1]} = b, \quad (4)$$

and when $m = 2$, the above tensor absolute value equations further reduce to the well-known absolute value equations:

$$Ax - |x| = b,$$

where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$.
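For concreteness, the tensor–vector product $\mathcal{A}x^{m-1}$ and the componentwise power $|x|^{[m-1]}$ underlying (1) can be sketched numerically. The snippet below is a minimal illustration for $m = 3$; the helper names `tensor_vector_power` and `unit_tensor` are illustrative, and the residual convention is assumed to match (4):

```python
import numpy as np

def tensor_vector_power(A, x):
    """Vector A x^{m-1} for an m-th-order tensor A (here m = 3):
    (A x^2)_i = sum_{j,k} a_{ijk} x_j x_k."""
    return np.einsum('ijk,j,k->i', A, x, x)

def unit_tensor(n):
    """Third-order unit tensor: entries are 1 iff all indices coincide."""
    I = np.zeros((n, n, n))
    idx = np.arange(n)
    I[idx, idx, idx] = 1.0
    return I

x = np.array([1.0, -2.0, 0.5])
# For the unit tensor, (I x^{m-1})_i = x_i^{m-1}; here x_i^2.
print(tensor_vector_power(unit_tensor(3), x))    # [1.0, 4.0, 0.25]

# Residual of the TAVE  A x^{m-1} - |x|^{[m-1]} = b  at a point x solving it:
A = np.random.default_rng(0).standard_normal((3, 3, 3))
b = tensor_vector_power(A, x) - np.abs(x) ** 2   # choose b so x is a solution
residual = tensor_vector_power(A, x) - np.abs(x) ** 2 - b
print(np.max(np.abs(residual)))                  # 0.0
```

The `einsum` contraction makes the component formula (2) explicit: index `i` is free and all remaining indices are contracted against copies of `x`.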

The significance of the tensor arises from the fact that it has many applications in scientific and engineering fields. For example, when $m$ is even, the positive definiteness of the homogeneous polynomial form

$$f(x) = \mathcal{A}x^m = \sum_{i_1,\dots,i_m=1}^{n} a_{i_1 i_2 \cdots i_m}\, x_{i_1} x_{i_2} \cdots x_{i_m}$$

plays an important role in the stability analysis of nonlinear autonomous systems [2]. In addition, the system of time-varying (tensor) absolute value equations arises in a number of applications, such as $n$-person noncooperative games [3], nonlinear compressed sensing [4, 5], and so on. For example, due to dynamic changes in the market, the payment matrices/tensors of the $n$ persons in noncooperative games also change dynamically, and the corresponding dynamic Nash equilibrium can be obtained by solving a system of time-varying matrix/tensor absolute value equations. Since solving the absolute value equations is NP-hard, TVTAVEs (1) are also NP-hard. Thus, it is significant to find solutions of TVTAVEs (1) when they exist.

Recurrent neural networks have been deemed an important tool for handling many kinds of online computing problems. In this paper, we apply a special recurrent neural network, i.e., the Zhang neural network (ZNN), to solve TVTAVEs (1) and propose a general six-step discrete-time Zhang neural network, which includes many existing DTZNN models as its special cases. Let us briefly review ZNN. ZNN was initially proposed by the Chinese scholar Yunong Zhang in 2001 and has become an efficient tool for solving various time-varying problems, such as time-varying linear matrix equations, time-varying nonlinear optimization, the time-varying matrix eigenproblem, etc. ZNN is an interesting research topic and has been extensively studied in past years. Many efficient DTZNN models have been proposed in the literature, including the general two-step DTZNN model in [6] (its steady-state residual error changes in an $O(\tau^3)$ manner), the general three- or four-step DTZNN models in [7, 8] (their steady-state residual errors change in an $O(\tau^3)$ or $O(\tau^4)$ manner), and the general five-step DTZNN model in [9] (its steady-state residual error changes in an $O(\tau^4)$ manner), where $\tau$ is the sampling gap. Other special DTZNN models, which are included in the above general DTZNN models, can be found in [10–12] and the references therein. Note that the step size is an important parameter which determines the convergence of the corresponding DTZNN; the feasible region of the step size in [6–10] is rigorously proved, while [11, 12] only present the feasible region without rigorous proof. Meanwhile, the corresponding optimal step size is also investigated in [6, 7, 9] by establishing some optimization models.

The remainder of the paper is organized as follows. In Section 2, we first introduce some basic definitions and notations, and then transform the time-varying tensor absolute value equations into a tensor complementarity problem. In Section 3, we present a general Zhang et al. discretization (ZeaD) formula, i.e., a general Taylor-type 1-step-ahead numerical differentiation rule for the first-order derivative approximation. In Section 4, a general six-step discrete-time Zhang neural network for time-varying tensor absolute value equations is designed with rigorous theoretical analyses, and some special cases are given. In Section 5, some examples and their simulations are given to show the applications and efficiency of the obtained results. In Section 6, a brief conclusion is presented. Before ending this section, the main contributions of this paper are summarized below.

(1) A general Taylor-type 1-step-ahead numerical differentiation rule for the first-order derivative approximation is derived, whose truncation error is $O(\tau^4)$.

(2) A general six-step DTZNN model is designed to solve the time-varying tensor absolute value equations based on the above first-order derivative approximation. By the Routh–Hurwitz stability criterion, we prove that each steady-state residual error in the general six-step DTZNN changes in an $O(\tau^5)$ manner.

(3) The effective domain of the step size in the general six-step DTZNN model is studied.

(4) The efficiency of the general six-step DTZNN model is substantiated by numerical simulations.

2. Preliminaries

In this section, we first introduce some basic definitions and notations, and then transform the time-varying tensor absolute value equations into a tensor complementarity problem.

Let $\mathcal{I}$ be the $m$-th-order $n$-dimensional unit tensor, whose entries are 1 if and only if all their indices coincide and otherwise zero. A tensor is called a nonnegative tensor if all its entries are nonnegative, denoted by $\mathcal{A} \ge 0$. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ and $x \in \mathbb{R}^n$; $\mathcal{A}x^{m-2}$ is some matrix in $\mathbb{R}^{n \times n}$ with

$$\big(\mathcal{A}x^{m-2}\big)_{ij} = \sum_{i_3,\dots,i_m=1}^{n} a_{i j i_3 \cdots i_m}\, x_{i_3} \cdots x_{i_m}, \quad i, j = 1, \dots, n.$$

Furthermore, if a scalar $\lambda \in \mathbb{C}$ and a nonzero vector $x \in \mathbb{C}^n$ satisfy

$$\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$$

then we call $\lambda$ an eigenvalue of $\mathcal{A}$ and $x$ the corresponding eigenvector [2]. The spectral radius of a tensor $\mathcal{A}$ is defined by

$$\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } \mathcal{A}\}.$$

Definition 1. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. Then $\mathcal{A}$ is called

(1) a $Z$-tensor if all its diagonal entries are nonnegative and all its off-diagonal entries are nonpositive;

(2) an $\mathcal{M}$-tensor if it can be written as $\mathcal{A} = s\mathcal{I} - \mathcal{C}$ with $\mathcal{C} \ge 0$ and $s \ge \rho(\mathcal{C})$; furthermore, it is called a strong $\mathcal{M}$-tensor if $s > \rho(\mathcal{C})$.

The following theorems list the existence of solutions to the tensor absolute value equations [1], which can be easily extended to the time-varying tensor absolute value equations.
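The entrywise conditions in Definition 1 are straightforward to check numerically. The sketch below assumes a tensor stored as a dense `numpy` array of shape `(n,) * m`; `is_z_tensor` is an illustrative helper name, not notation from the paper:

```python
import numpy as np

def is_z_tensor(A):
    """Z-tensor check in the sense of Definition 1: diagonal entries
    a_{i...i} nonnegative, all off-diagonal entries nonpositive."""
    n = A.shape[0]
    diag_idx = tuple(np.arange(n) for _ in range(A.ndim))
    off = A.copy()
    off[diag_idx] = 0.0                  # zero out the diagonal entries
    return bool(np.all(A[diag_idx] >= 0) and np.all(off <= 0))

# A tensor of the form s*I - C with C >= 0 and s at least the largest
# diagonal entry of C is a Z-tensor:
n, m = 3, 3
C = np.random.default_rng(1).random((n,) * m)   # entries in [0, 1)
I = np.zeros((n,) * m)
I[np.arange(n), np.arange(n), np.arange(n)] = 1.0
print(is_z_tensor(2.0 * I - C))          # True
print(is_z_tensor(np.ones((n,) * m)))    # False: positive off-diagonal entries
```

The $\mathcal{M}$-tensor conditions additionally involve $\rho(\mathcal{C})$, the tensor spectral radius, which requires an eigenvalue solver and is not sketched here.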

Theorem 2. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. If $\mathcal{A}$ can be written as $\mathcal{A} = s\mathcal{I} - \mathcal{C}$ with $\mathcal{C} \ge 0$ and $s > \rho(\mathcal{C}) + 1$, then for every positive vector $b$, tensor absolute value equations (4) has a unique positive solution.

Theorem 3. Let $\mathcal{A}$ be a $Z$-tensor. Then $\mathcal{A}$ can be written as $\mathcal{A} = s\mathcal{I} - \mathcal{C}$ with $\mathcal{C} \ge 0$ and $s > \rho(\mathcal{C}) + 1$ if and only if for every positive vector $b$, tensor absolute value equations (4) has a unique positive solution.

Theorem 4. Let and be in the form of with and . If there exists a vector such that , then tensor absolute value equations (4) has a nonnegative solution.

Theorem 5. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ and $b \in \mathbb{R}^n$. If the multilinear system of equations $(\mathcal{A} - \mathcal{I})x^{m-1} = b$ has a nonnegative solution, then tensor absolute value equations (4) also has a solution.

The following theorem transforms the time-varying tensor absolute value equations into a tensor complementarity problem, which is a direct extension of Theorem 4.2 in [1].

Theorem 6. Let $\mathcal{A}(t) \in \mathbb{R}^{[m,n]}$ and $b(t) \in \mathbb{R}^n$. Define two mappings as follows:

Then TVTAVE (1) is equivalent to the following generalized time-varying complementarity problem (GTVCP):

For two constants $a$ and $b$, the Fischer–Burmeister function is defined as

$$\phi(a, b) = \sqrt{a^2 + b^2} - a - b. \quad (14)$$

An attractive property of $\phi$ is that $\phi(a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$, and $ab = 0$. A drawback of the function $\phi$ is that it is nonsmooth at the point $(a, b) = (0, 0)$. Then, Kanzow [13] designed a smooth form of $\phi$ by incorporating a parameter $\mu > 0$, which is defined as follows:

$$\phi_\mu(a, b) = \sqrt{a^2 + b^2 + 2\mu} - a - b. \quad (15)$$

Evidently, $\lim_{\mu \to 0^+} \phi_\mu = \phi$. For any $\mu > 0$, we have $\phi_\mu(a, b) = 0$ if and only if $a > 0$, $b > 0$, and $ab = \mu$. Further, for any fixed $\mu > 0$, the function $\phi_\mu$ is smooth with respect to $a$ and $b$.

Remark 7. The parameter $\mu$ is often termed the smoothing parameter in the nonlinear optimization literature, whose essence is to smooth the nonsmooth function $\phi$. That is, as $\mu \to 0^+$, the smooth function $\phi_\mu$ can approximate the nonsmooth function $\phi$ to arbitrary accuracy. In practice, we should set the initial value of the time-varying parameter $\mu(t)$ to a relatively small positive value and compel it to converge to zero quickly as time goes on.
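The two key properties above — $\phi_\mu \to \phi$ as $\mu \to 0^+$, and $\phi_\mu(a,b) = 0$ exactly when $a > 0$, $b > 0$, $ab = \mu$ — can be checked directly. This sketch assumes the sign convention of (14) and (15):

```python
import math

def fb(a, b):
    """Fischer-Burmeister function (14): zero iff a >= 0, b >= 0, a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

def fb_smooth(a, b, mu):
    """Kanzow's smoothed form (15): zero iff a > 0, b > 0, a*b = mu."""
    return math.sqrt(a * a + b * b + 2.0 * mu) - a - b

# Complementarity pair (a, b) = (0, 3): fb vanishes, fb_smooth is close for small mu.
print(fb(0.0, 3.0))                      # 0.0
for mu in (1e-1, 1e-3, 1e-6):
    print(abs(fb_smooth(0.0, 3.0, mu)))  # -> 0 as mu -> 0

# fb_smooth(a, b, mu) vanishes when a > 0, b > 0 and a*b = mu:
a, mu = 2.0, 0.5
print(abs(fb_smooth(a, mu / a, mu)))     # 0 (up to rounding)
```

Squaring $a + b = \sqrt{a^2 + b^2 + 2\mu}$ shows why the zero set is exactly $\{a > 0,\, b > 0,\, ab = \mu\}$: the square root condition forces $ab = \mu$ together with $a + b > 0$.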

For the linear discrete system

its Routh’s tableau is defined in Table 1. The elements in the first and second lines are the coefficients of (16), and from the third line on, the elements are computed according to the following rule:

In the following, all elements in the Routh’s tableau are collected in a matrix, which is termed as Routh’s matrix. In this paper, we shall use the following stability criterion proposed by Routh [14] and Hurwitz [15].

Theorem 8. Polynomial (16) is stable if and only if its coefficients are positive and all elements in the first column of its Routh’s tableau are positive.

Remark 9. There are many different stability criteria for different discrete systems or dynamical systems. For example, Zhang and Zhou [16] have established the global asymptotic stability of a periodic solution for a BAM neural network with time-varying delays, and Zhang and Cao [17] have presented the global exponential stability of periodic solutions of complex-valued neural networks of neutral type. Notably, the difference between the stability criterion proposed by Routh and Hurwitz and those in [16, 17] mainly lies in the following: (1) they are proposed for different systems; (2) the former is a sufficient and necessary condition, while the latter are only sufficient conditions.
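As an illustration of Theorem 8, the tableau construction can be sketched in a few lines. This is a minimal implementation that assumes no zero pivots arise in the first column; `routh_first_column` is an illustrative helper name:

```python
def routh_first_column(coeffs):
    """Build Routh's tableau for p(s) = c0*s^n + c1*s^(n-1) + ... + cn and
    return its first column; stability (Theorem 8) requires, together with
    positive coefficients, that all entries of this column be positive.
    Assumes no zero pivot appears in the first column."""
    n = len(coeffs) - 1
    width = (n // 2) + 1
    rows = [[0.0] * width for _ in range(n + 1)]
    for j, c in enumerate(coeffs):       # interleave coefficients into rows 0, 1
        rows[j % 2][j // 2] = float(c)
    for i in range(2, n + 1):            # cross-multiplication rule for later rows
        for j in range(width - 1):
            a, b = rows[i - 2][0], rows[i - 1][0]
            c, d = rows[i - 2][j + 1], rows[i - 1][j + 1]
            rows[i][j] = (b * c - a * d) / b
    return [rows[i][0] for i in range(n + 1)]

# p(s) = s^2 + 3s + 2 has roots -1 and -2: stable, first column all positive.
print(routh_first_column([1, 3, 2]))      # [1.0, 3.0, 2.0]

# p(s) = s^3 + s^2 + 2s + 8 has a right-half-plane root pair: a sign change.
print(routh_first_column([1, 1, 2, 8]))   # [1.0, 1.0, -6.0, 8.0]
```

In the paper the criterion is applied after the bilinear transform, so the polynomial being tabulated is in the transformed variable and "stable" means all roots lie in the open left half-plane.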

The following is the definition of the Taylor-type 1-step-ahead numerical differentiation rule for the first-order derivative approximation, which is often termed as Zhang et al. discretization (ZeaD) formula [18].

Definition 10. The general $n$-step ZeaD formula is defined as follows:

$$\dot{x}_k = \frac{1}{\tau} \sum_{i=0}^{n} a_i x_{k+1-i} + O(\tau^p), \quad (19)$$

where $n$ is the number of steps of the ZeaD formula; $a_i$ ($i = 0, 1, \dots, n$) denote the coefficients; $O(\tau^p)$ denotes the truncation error; $x_k$ is the value of $x(t)$ at time instant $t_k = k\tau$, i.e., $x_k = x(t_k)$; $k$ denotes the updating index. Equation (19) with $a_n \ne 0$ and $p \ge 1$ is termed the $n$-step $p$th-order ZeaD formula.
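As a sanity check on Definition 10, the order of a 1-step-ahead rule can be estimated empirically: halving $\tau$ should shrink the truncation error by roughly $2^p$. The sketch below uses the simplest instance (the Euler-type rule, $n = 1$, $p = 1$); `euler_1step_ahead` and `empirical_order` are illustrative helper names:

```python
import math

def euler_1step_ahead(x, k, tau):
    """Simplest rule in the form of Definition 10 (n = 1):
    x_dot_k ~ (x_{k+1} - x_k) / tau, truncation error O(tau)."""
    return (x(k + 1, tau) - x(k, tau)) / tau

def empirical_order(rule, exact, taus, t=1.0):
    """Estimate the truncation order from errors at successively halved tau."""
    errs = []
    for tau in taus:
        k = round(t / tau)                       # index of the time instant t
        x = lambda j, tau: math.sin(j * tau)     # sampled trajectory x(t) = sin t
        errs.append(abs(rule(x, k, tau) - exact(k * tau)))
    return [math.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]

orders = empirical_order(euler_1step_ahead, math.cos, [0.1, 0.05, 0.025])
print(orders)    # each entry close to 1, confirming first-order accuracy
```

The same harness applies to any coefficient set $a_0, \dots, a_n$ satisfying (19), which is a convenient way to double-check a derived formula before using it inside a DTZNN model.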

Definition 11. Let $F: \mathbb{R}^n \to \mathbb{R}^n$ be a function. If for all $x, y \in \mathbb{R}^n$ with $x \ne y$, there exists an index $i_0$ such that $x_{i_0} \ne y_{i_0}$ and

$$(x_{i_0} - y_{i_0})\big[F_{i_0}(x) - F_{i_0}(y)\big] > 0,$$

then the function $F$ is said to be a $P$-function.

3. The General Six-Step Fourth-Order ZeaD Formula

In this section, based on Concepts 1–3 excerpted from [19], which are listed in Appendix A, we shall present a general six-step fourth-order ZeaD formula with truncation error $O(\tau^4)$ and analyze its convergence.

Theorem 12. The general six-step fourth-order ZeaD formula (21), with coefficients (22), is convergent if and only if the six inequalities in (23) hold. Moreover, the general six-step fifth-order ZeaD formula (24), with coefficients (25), is divergent.

Proof. The proof is presented in Appendix B.

Remark 13. The system (23) is nonempty. In fact, for the two free parameter values given below, the corresponding Routh’s matrix is as follows, where a blank entry means that the element is vacant. Then, the left-hand sides of the inequalities in (23) are all positive, which indicates that these two values satisfy the system. Thus, the six-step fourth-order ZeaD formula of [20] is convergent. Dropping the $O(\tau^4)$ truncation term on the right-hand side of (21), we obtain the computable general six-step discretization formula (28), whose coefficients are the same as those in (21). Evidently, if we use (28) to approximate $\dot{x}_k$, the truncation error is $O(\tau^4)$.

4. The General Six-Step DTZNN for TVTAVEs

In this section, based on the general Taylor-type 1-step-ahead numerical differentiation rule presented in Section 3, we shall design a general six-step DTZNN for TVTAVEs.

We have transformed TVTAVEs into GTVCP (13) in Section 2, which is NP-hard in general. Now, let us further transform GTVCP (13) into a system of nonsmooth equations, and then use a sequence of smooth equations to approximate it.

GTVCP (13) can be written as finding $x(t)$ such that

Then, based on the Fischer–Burmeister function $\phi$ defined by (14), this system can be written as

where

However, this system is nonsmooth. Then, based on the smoothed function $\phi_\mu$ defined by (15), it can be equivalently written as

where

Set . For any fixed , the partial derivative of with respect to is

where

Lemma 14. When the matrix is nonsingular, the Jacobian matrix is also nonsingular.

Proof. We only need to prove that the matrix is nonsingular. Since the matrix is nonsingular, we only need to prove that the matrix is nonsingular. Then, from the nonsingularity of the matrix, we only need to ensure that the matrix is nonsingular, which is exactly the assumption of the lemma. This completes the proof.

Remark 15. From Theorem 3.3 in [21], when the underlying mapping is a continuously differentiable $P$-function and the associated weighting matrix is a positive diagonal matrix, the matrix in Lemma 14 is nonsingular.
Setting the error function in the following Zhang neural network (ZNN) design formula [22]:

$$\dot{e}(t) = -\gamma e(t), \quad \gamma > 0,$$

we get the continuous-time ZNN (CTZNN) model (41) for TVTAVEs. Setting $t = t_k = k\tau$ ($k = 0, 1, 2, \dots$) in (41), we get the discrete-time ZNN model (42) for TVTAVEs. Substituting the general six-step fourth-order ZeaD formula (28) into the left-hand side of (42), we get the general six-step DTZNN model (43), where the two free parameters satisfy the six inequalities given in (23), and $h = \tau\gamma$ is termed the step size of general six-step DTZNN model (43), which determines its convergence.
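Because the six-step coefficients and the mapping inside (43) are problem specific, the overall recipe can be illustrated on a simpler stand-in. The following sketch applies the same ZNN design formula, discretized with the simplest one-step (Euler) rule, to a small time-varying linear system $A(t)x = b(t)$; all matrices and parameter values here are illustrative assumptions, not the paper's test problems:

```python
import numpy as np

# Stand-in problem A(t) x = b(t); error function e(x, t) = A(t) x - b(t).
# The ZNN design formula e_dot = -gamma * e gives
#     A x_dot + A_dot x - b_dot = -gamma * e,
# and an Euler step with sampling gap tau yields the update below
# (the six-step rule (28) would replace the Euler step in model (43)).
A = lambda t: np.array([[2.0 + np.sin(t), 0.3], [0.1, 2.0 + np.cos(t)]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
A_dot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b_dot = lambda t: np.array([np.cos(t), -np.sin(t)])

tau, gamma = 0.01, 10.0        # sampling gap and design parameter; h = tau*gamma
x = np.array([1.0, 1.0])       # rough initial state
for k in range(2000):
    t = k * tau
    e = A(t) @ x - b(t)
    x_dot = np.linalg.solve(A(t), -A_dot(t) @ x + b_dot(t) - gamma * e)
    x = x + tau * x_dot        # Euler step

final_residual = np.linalg.norm(A(2000 * tau) @ x - b(2000 * tau))
print(final_residual)          # small steady-state residual
```

With $h = \tau\gamma = 0.1$ the recursion contracts the residual by roughly $1 - h$ per step, so the iterate tracks the time-varying solution with a small steady-state residual; higher-order ZeaD formulas shrink that residual further.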
For any given free parameters satisfying (23), let us analyze the effective domain of the step size $h$ to ensure the convergence of (43). Firstly, let us rewrite (43) as a sixth-order homogeneous difference equation; the proof is similar to that of Theorem 4.1 in [6], but for completeness we give the details.

Theorem 16. Suppose that the sequence generated by the general six-step DTZNN model (43) is such that the relevant auxiliary sequence is bounded. Then, the residual sequence satisfies the homogeneous difference equation (44).

Proof. General six-step DTZNN model (43) can be written in an equivalent form. Substituting (21) into the left-hand side of this equality, we obtain an equation, i.e., (47), in which the truncation term is absorbed due to the boundedness of the relevant sequence. Then (47) implies (48). Since (21) also holds for the underlying mapping, expanding the left-hand side of (48) yields a further equation. By some simple manipulations, we can then easily derive (44). This completes the proof.

The characteristic equation of Equation (44) is

Theorem 17. For any free parameters satisfying (23), general six-step DTZNN model (43) is convergent, i.e., its steady-state residual error changes in an $O(\tau^5)$ manner, if and only if the step size $h$ satisfies the condition below, whose bounds are defined in the proof that follows.

Proof. Substituting the bilinear transform $z = (1 + w)/(1 - w)$ into (51) gives a transformed polynomial, whose coefficients are as displayed. All numbers in the Routh’s tableau of this polynomial are collected in the matrix shown. Then, according to Theorem 8 and the Routh–Hurwitz stability criterion, when the step size $h$ satisfies the following 10 inequalities, the general six-step DTZNN model (43) is convergent. The proof is completed.

Remark 18. For the particular free parameter values of Remark 13, we get the six-step DTZNN model in [20]. For this case, by Theorem 17, we get the effective domain of $h$ as follows, which is the same as that given in [20].

Before ending this section, let us investigate the optimal step size of general six-step DTZNN model (43). Let $z_1, \dots, z_6$ be the six roots of (51). Generally speaking, the smaller the maximal modulus of these roots is, the faster the model converges. Then, based on Vieta’s formulas, the optimal step size can be determined by the following model:

where $\mathbb{C}$ denotes the set of complex numbers and the remaining quantities are the coefficients of (51). For the free parameter values of Remark 13, we get the following concrete model:

Solving this model, we get the optimal step size. If we set the objective function in another form, the optimal step size obtained is the same as that in [20].
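The minimize-the-maximal-root-modulus idea behind this optimal step-size model can be sketched by a direct grid scan. The one-parameter characteristic family below is a hypothetical stand-in for (51) (it is the error recursion of a one-step Euler-type DTZNN), not the paper's actual sixth-degree polynomial:

```python
import numpy as np

def spectral_radius(coeffs):
    """Maximal modulus of the roots of the characteristic polynomial."""
    return max(abs(r) for r in np.roots(coeffs))

def best_step_size(char_poly, h_grid):
    """Scan candidate step sizes and keep the one minimizing the maximal
    root modulus of char_poly(h), mirroring the optimal-step-size model."""
    radii = [spectral_radius(char_poly(h)) for h in h_grid]
    i = int(np.argmin(radii))
    return h_grid[i], radii[i]

# Hypothetical family z - (1 - h): the single root is z = 1 - h, so the
# radius is minimized (driven to zero) at h = 1 within the feasible (0, 2).
char_poly = lambda h: [1.0, -(1.0 - h)]
h_grid = np.linspace(0.05, 1.95, 191)
h_opt, rho = best_step_size(char_poly, h_grid)
print(h_opt, rho)
```

For the genuine sixth-degree polynomial (51), the same scan applies once its $h$-dependent coefficients are supplied, and it gives a numerical cross-check of any closed-form optimum derived via Vieta’s formulas.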

5. Numerical Results

In this section, we shall present some numerical examples and their simulations to show the effectiveness of general six-step DTZNN model (43), which is compared with the simplest discrete-time Zhang neural network, i.e., the DTZN model in [23]. Furthermore, we set the two free parameters in (43) to specific values, and the corresponding model is denoted by NDTZNN. Given the initial point, we generate the next four iterates of NDTZNN by

which are the one-, two-, three-, and five-step DTZNN models recently presented in the literature. We use the following function to evaluate the accuracy of the two tested models:

Example 19. Consider the following time-invariant tensor absolute value equations: where

We set the sampling gap, the design parameter, and the initial points for this test. The numerical results are plotted in Figure 1. Figure 1 illustrates that NDTZNN successfully solved this problem. More specifically, Figure 1(a) shows that the states of NDTZNN oscillate at first and, after a short time, nearly overlap with the true solution. In addition, Figure 1(b) shows that the errors of NDTZNN are almost strictly decreasing with respect to the iteration counter, and the final error is very small. These numerical results substantiate the efficacy of the proposed NDTZNN for time-invariant tensor absolute value equations.

Example 20. Consider the following time-varying tensor absolute value equations: where

We use DTZN and NDTZNN to solve this problem with the given initial point and parameter settings. The numerical results are plotted in Figure 2, which illustrates that although DTZN begins to oscillate sooner than NDTZNN, the performance of NDTZNN is much better than that of DTZN, and the errors generated by DTZN and NDTZNN change in $O(\tau^2)$ and $O(\tau^5)$ manners, respectively. Furthermore, the error curve generated by NDTZNN always lies below that generated by DTZN, which indicates that the performance of NDTZNN is consistently better than that of DTZN.

Now, let us verify Theorem 17, that is, that the final error changes in an $O(\tau^5)$ manner. We choose different values of the sampling gap $\tau$ and the design parameter. The numerical results generated by DTZN and NDTZNN are listed in Table 2.

The following results are summarized from Table 2.

(i) The error generated by DTZN changes in an $O(\tau^2)$ manner, i.e., the error reduces by about $10^2$ times when the value of $\tau$ decreases by 10 times. This coincides with the theoretical result of [23].

(ii) The error generated by NDTZNN changes in an $O(\tau^5)$ manner, i.e., the error reduces by about $10^5$ times when the value of $\tau$ decreases by 10 times. This coincides with the theoretical result of Theorem 17.
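The order estimates behind such a table come from a one-line computation: if the error changes in an $O(\tau^p)$ manner, then $p \approx \log_{10}\big(\mathrm{err}(\tau) / \mathrm{err}(\tau/10)\big)$. The values below are illustrative, not taken from Table 2:

```python
import math

def order_from_errors(err_coarse, err_fine, ratio=10.0):
    """Estimated order p from errors at sampling gaps tau and tau/ratio."""
    return math.log(err_coarse / err_fine, ratio)

print(order_from_errors(1e-3, 1e-5))    # 2.0 -> O(tau^2)-type behavior
print(order_from_errors(1e-2, 1e-7))    # 5.0 -> O(tau^5)-type behavior
```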

Example 21. Consider the following medium-scale time-invariant tensor absolute value equations: where

We set the problem dimensions and the sampling gap for this test. We use NDTZNN to solve this problem, and the generated numerical results are plotted in Figure 3, which shows that NDTZNN successfully solves this medium-scale problem and that the generated error curve is strictly decreasing with respect to the iteration counter $k$.

6. Conclusion

In this paper, we have proposed a general six-step DTZNN model for the time-varying tensor absolute value equations, which constitute an NP-hard problem. The steady-state residual error of the proposed DTZNN model changes in an $O(\tau^5)$ manner. Based on the bilinear transform and the Routh–Hurwitz stability criterion, the effective domains of the free parameters and the step size are presented. Some numerical results are presented to illustrate the efficiency of the proposed DTZNN model.

It is worth pointing out that future research will focus on the following three directions: (1) Can the analysis procedure of Theorem 12 be extended to deduce novel ZeaD formulas with higher-order truncation errors? For example, does a seven-step fifth-order ZeaD formula exist? (2) It is well known that continuous-time neural networks can be accelerated by incorporating a nonlinear activation function. However, to the best of the authors’ knowledge, there is no research on discrete-time Zhang neural networks equipped with nonlinear activation functions in the literature, and designing such a network may be an interesting research direction. (3) It is worthwhile to research the application of the discrete-time Zhang neural network to the nonsmooth LASSO problem [7] and the multi-augmented Sylvester matrix problem [9, 24].

Appendix

A. Zero-Stability and Consistency

In this appendix, we list three concepts about the zero-stability and consistency of discrete-time methods, which are excerpted from [19].

Concept 1. The zero-stability of an $N$-step discrete-time method:

can be checked by determining the roots of its characteristic polynomial $\rho(\zeta)$. If the roots of $\rho(\zeta)$ are such that (i) all roots lie in the closed unit disk, i.e., $|\zeta| \le 1$; and (ii) any roots on the unit circle (i.e., $|\zeta| = 1$) are simple (i.e., not multiple), then the $N$-step discrete-time method (A.1) is zero-stable.

Concept 2. An $N$-step discrete-time method is said to be consistent with order $p$ if its truncation error is $O(\tau^p)$ with $p \ge 1$ for the smooth exact solution.

Concept 3. For an -step discrete-time method, it is convergent, i.e., for all , as , where is the solution of the studied problem, if and only if such an algorithm is zero-stable and consistent (see Concepts 1 and 2). That is, zero-stability and consistency result in convergence. In particular, a zero-stable and consistent method converges with the order of its truncation error.
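Concepts 1–3 can be turned into a small numerical check. The sketch below tests zero-stability via `numpy.roots`; the tolerance handling is an implementation choice, and `is_zero_stable` is an illustrative helper name:

```python
import numpy as np

def is_zero_stable(rho_coeffs, tol=1e-6):
    """Concept 1: all roots of the characteristic polynomial lie in the
    closed unit disk, and any roots on the unit circle are simple."""
    roots = np.roots(rho_coeffs)
    if any(abs(r) > 1.0 + tol for r in roots):
        return False                              # a root outside the disk
    on_circle = [r for r in roots if abs(abs(r) - 1.0) <= tol]
    for i in range(len(on_circle)):
        for j in range(i + 1, len(on_circle)):
            if abs(on_circle[i] - on_circle[j]) <= tol:
                return False                      # multiple root on the circle
    return True

print(is_zero_stable([1.0, -1.0]))        # True:  rho(z) = z - 1 (Euler-type)
print(is_zero_stable([1.0, -2.0, 1.0]))   # False: (z - 1)^2, double root at 1
print(is_zero_stable([1.0, 0.0, -2.0]))   # False: a root at -sqrt(2), outside
```

Combined with a consistency check of the coefficients (Concept 2), this gives the convergence conclusion of Concept 3 without any symbolic work, which is handy for screening candidate ZeaD coefficient sets.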

B. Proof of Theorem 12

The Taylor series expansions of , , , , and at are

Substituting (B.1)–(B.6) into (19) with and , i.e.,

gives

where

Then, from (B.8), we have

Solving this system of linear equations, we get

Substituting (B.11) into (B.8), we get general six-step fourth-order ZeaD formula (21).

Now let us analyze the convergence of general six-step fourth-order ZeaD formula (21), which can be written as

where the truncation error is omitted. Its characteristic polynomial is

where satisfy (B.11). Substituting the bilinear transform into the above equation yields:

where

All numbers in Routh’s tableau of polynomial (B.14) are collected in the matrix . Then, according to Theorem 8, we have

and other elements do not exist. Then, by the Routh–Hurwitz stability criterion and Definitions 1–3 in [25], general six-step fourth-order ZeaD formula (21) is convergent if and only if and satisfy the six inequalities listed in (23).

Now, we are going to prove any six-step fifth-order ZeaD formula is divergent. In fact, the Taylor series expansions of and at are

Substituting (B.17)–(B.22) into (19) with and , i.e.,

gives

where are the same as the above, and

Then, from (B.24), we have

Solving this system of linear equations, we get that the coefficients satisfy the system (25). Substituting (25) into (B.23), we get the general six-step fifth-order ZeaD formula (24). Now let us analyze the convergence of general six-step fifth-order ZeaD formula (24), whose characteristic polynomial has the same form as that of (21), with coefficients satisfying (25). Similarly, substituting the bilinear transform into the characteristic equation yields an equation similar to (B.14) but with

Since , according to Theorem 8, the general six-step fifth-order ZeaD formula (24) is divergent. The proof is completed.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors thank the three anonymous reviewers for their valuable comments and suggestions, which have helped improve the paper. This research was partially supported by the Natural Science Foundation of Shandong Province (No. ZR2016AL05) and the Doctoral Foundation of Zaozhuang University.