Abstract

As a special kind of recurrent neural network, the Zhang neural network (ZNN) has been successfully applied to the solution of various time-variant problems. In this paper, we present three Zhang et al. discretization (ZeaD) formulas, including a special two-step ZeaD formula, a general two-step ZeaD formula, and a general five-step ZeaD formula, and prove that the special and general two-step ZeaD formulas are convergent while the general five-step ZeaD formula is not zero-stable and thus is divergent. Then, to solve time-varying nonlinear optimization (TVNO) in real time, based on the Taylor series expansion and the above two convergent two-step ZeaD formulas, we discretize the continuous-time ZNN (CTZNN) model of TVNO and thus obtain a special two-step discrete-time ZNN (DTZNN) model and a general two-step DTZNN model. Theoretical analyses indicate that the sequence generated by the first DTZNN model is divergent, while the sequence generated by the second DTZNN model is convergent. Furthermore, for the step-size of the second DTZNN model, its tight upper bound and the optimal step-size are also discussed. Finally, some numerical results and comparisons are provided and analyzed to substantiate the efficacy of the proposed DTZNN models.

1. Introduction

As a subcase of nonlinear programming, nonlinear optimization has been widely encountered in a variety of scientific and engineering applications, and many applications can be modeled or reformulated as nonlinear optimization, e.g., the Markowitz mean-variance model in finance, the transportation problem in management, the shortest-path problem in network models, the (non)convex separable optimization in image denoising, etc. [1, 2]. Due to its fundamental role, nonlinear optimization has been extensively studied by many researchers during the last several centuries, and many efficient algorithms have been developed and investigated in the literature [3-7], which can be classified into two categories. The first category includes the first-order iterative methods, which only use the first-derivative information of the objective function, such as the steepest descent method, the conjugate gradient method, and the memory gradient method. These methods are suitable for solving large-scale nonlinear optimization due to their simple structure and low storage requirements. Numerically, the conjugate gradient method and the memory gradient method usually perform better than the steepest descent method [6]. The second category includes the Newton method and its variants, e.g., the quasi-Newton BFGS and DFP methods, which need to compute the second derivative of the objective function or an approximation of it. Therefore, they are not suitable for solving large-scale nonlinear optimization, although they possess a locally fast convergence rate. To overcome this drawback of the quasi-Newton methods, Nocedal [7] designed a limited-memory BFGS method (L-BFGS) for nonlinear optimization, and numerical results indicate that the L-BFGS method is very efficient due to its low storage. In addition, during the last decades neural networks have drawn extensive attention of researchers and practitioners due to their nice properties, e.g., distributed storage, high-speed parallel processing, hardware implementability, and superior performance in large-scale online applications [8]. Some neural networks have been developed to solve nonlinear optimization during the last decades [9-11].

Most of the above algorithms are designed intrinsically for solving static nonlinear optimization (SNO); therefore they might not be effective enough for solving time-varying nonlinear optimization (TVNO), whose objective function, denoted by f(x(t), t), is a multivariate function with respect to the decision variable x(t) in R^n and the time variable t >= 0. The connection between SNO and TVNO is obvious: (1) SNO is a special case of TVNO, and when f(x(t), t) = f(x(t)) for any t >= 0, TVNO reduces to SNO; (2) the discrete-time form of TVNO, whose objective function is f(x_k, t_k) with t_k = k*tau and tau > 0 denoting the sampling gap, can be viewed as a sequence of SNO. At each single time instant t_k, TVNO can be viewed as a static nonlinear optimization, and consequently it can be solved by the above-mentioned algorithms. However, this treatment is not advisable for the following three reasons: (1) it is usually inefficient and of low precision, since it needs to solve a sequence of SNO [12]; (2) in the online solution process of discrete-time TVNO, the present and/or previous data with respect to t_k should be used sufficiently to generate the unknown decision variable x_{k+1}; (3) most importantly, at time instant t_k, we do not know the future information, such as the function value f(x, t_{k+1}), and only the current and past information for time instants t_i with i <= k can be used. Therefore, the conventional static methods and the neural networks in [9-11], which would require such future information, cannot solve TVNO in real time.

As a special recurrent neural network, Zhang neural network (ZNN), named after Chinese scholar Zhang Yunong, serves as a unified approach to solving various online time-varying problems, such as time-varying quadratic function minimization [13], future minimization [14], time-varying matrix pseudoinversion [8], and TVNO [12, 15, 16]. For example, based on ZNN, Jin et al. [15] presented a one-step discrete-time ZNN (DTZNN) model for TVNO, whose maximal residual error is theoretically O(tau^2). Subsequently, Guo et al. [12] proposed two DTZNN models for TVNO, which belong to the class of three-step DTZNN models with steady-state residual error (SSRE) changing in an O(tau^3) manner. Then, quite recently, Zhang et al. [16] presented a general four-step discrete-time derivative dynamics model and a general four-step DTZNN model for TVNO; both models contain a free parameter that can take any value in a certain open interval, and both are convergent with truncation error of O(tau^4).

Generally speaking, in the real-time solution process of TVNO, the more past data are utilized, the smaller the truncation error of the corresponding Zhang et al. discretization (ZeaD) formula is. For example, the truncation errors of the one-step DTZNN model in [15], the three-step DTZNN models in [12], the general three-step DTZNN models in [17], and the four-step DTZNN models in [16] are O(tau^2), O(tau^3), O(tau^3), and O(tau^4), respectively. The corresponding ZeaD formulas are summarized in Table 1, in which the effective domains of the free parameter in [17] and [16] are certain open intervals. Obviously, the general ZeaD formula in [17], for special values of its parameter, reduces to the two special ZeaD formulas in [12].

In this paper we further study the ZeaD formula, and three ZeaD formulas are presented, including a special two-step ZeaD formula with truncation error O(tau^2), a general two-step ZeaD formula with truncation error O(tau), and a general five-step ZeaD formula with truncation error O(tau^4). We prove that the first two ZeaD formulas are convergent while the third ZeaD formula is not zero-stable and thus is divergent. Then, based on the Taylor series expansion and the above two convergent ZeaD formulas, we discretize the continuous-time ZNN (CTZNN) model for TVNO and thus obtain a special two-step DTZNN model and a general two-step DTZNN model for TVNO. Theoretical analyses indicate that the first DTZNN model is divergent, while the second DTZNN model is convergent for any parameter a in (-1/2, 0) U (0, +inf) and step-size h in (0, 2 + 4a). In addition, the tight upper bound of the step-size and the optimal step-size are also discussed.

The rest of the paper is organized as follows. We first recall some basic definitions and results in Section 2, including the problem formulation of TVNO, the continuous-time ZNN (CTZNN) model for TVNO, and the general n-step ZeaD formula. In Section 3, a special two-step ZeaD formula with truncation error of O(tau^2) and a general two-step ZeaD formula with truncation error of O(tau) are presented and analyzed, and we also prove that the five-step ZeaD formula with truncation error of O(tau^5) or O(tau^4) is divergent. Furthermore, based on the two convergent ZeaD formulas, two discrete-time ZNN (DTZNN) models for TVNO are presented, and we prove that the first DTZNN model is not convergent and the second DTZNN model is convergent. Later, in Section 4, some numerical experiments are presented to illustrate and compare the performances of the convergent two-step DTZNN model with other variants. Finally, a concluding remark with future research directions is given in Section 5. The main contributions of this paper are summarized as follows.

(1) Three ZeaD formulas are presented, including a special two-step ZeaD formula, a general two-step ZeaD formula, and a general five-step ZeaD formula, whose convergence and stability are discussed in detail.

(2) Two DTZNN models are given to solve TVNO based on two convergent two-step ZeaD formulas, whose convergence is also studied in detail.

(3) The feasible region of the step-size in the convergent DTZNN model is studied, and its tight upper bound and the optimal step-size are also discussed.

(4) The high precision of the convergent DTZNN model is substantiated in numerical tests.

2. Preliminaries

In this section, the results in [12, 16] are summarized for the foundation of further discussion, including the problem formulation of TVNO, the CTZNN model for TVNO, and the general ZeaD formula.

Firstly, the problem formulation of the TVNO is as follows [16]:

min_{x(t) in R^n} f(x(t), t), t in [0, +inf), (1)

where the time-varying nonlinear function f(x(t), t): R^n x [0, +inf) -> R is second-order differentiable and bounded. Problem (1) aims to find x(t) in R^n such that the function f(x(t), t) achieves its minimum at any time t >= 0. Thus, in the subsequent analysis, we assume that the solution x*(t) of problem (1) exists at any time t >= 0.

It is well known that it is often hard to find the global optimal solution of time-invariant nonlinear optimization by traditional numerical algorithms [3, 6]. Therefore, researchers have resorted to finding the stationary point of time-invariant nonlinear optimization. We also transform problem (1) into finding its stationary point x*(t), which satisfies the following nonlinear equation:

grad f(x(t), t) := d f(x(t), t) / d x(t) = 0, (2)

where the symbol := denotes the computational assignment operation. In the following, we aim to find the solutions of problem (2) at any time t >= 0. Generally speaking, the solutions of problem (1) are the solutions of problem (2), but the converse may not hold; if, however, f(x(t), t) is convex with respect to x(t), the converse also holds [3].

Setting e(t) := grad f(x(t), t) in the following Zhang neural network (ZNN) design formula [18]

de(t)/dt = -lambda e(t), (3)

where lambda > 0 is a design parameter used to scale the convergence rate, we get the following continuous-time ZNN (CTZNN) model of problem (1) [15]:

dx(t)/dt = -H^{-1}(x(t), t) (lambda grad f(x(t), t) + grad f_t(x(t), t)), (4)

where grad f_t(x(t), t) is the partial derivative of the mapping grad f(x(t), t) with respect to its second variable t, i.e.,

grad f_t(x(t), t) = d grad f(x(t), t) / dt, (5)

and H(x(t), t) is the Hessian matrix of problem (1), i.e.,

H(x(t), t) = d^2 f(x(t), t) / (d x(t) d x(t)^T), (6)

which is assumed to be positive definite throughout the paper to ensure that the stationary point of problem (1) is also its solution.

Remark 1. The main difference between the CTZNN model (4) and the neural network models in [9-11] lies in that the former includes the information of the time derivative grad f_t(x(t), t) to get a fast and accurate solution of TVNO, while the motion equation in the latter neural network models for SNO and other static optimization problems, such as variational inequalities and complementarity problems, can be expressed by

dx(t)/dt = Phi(x(t)), (7)

where Phi(.) is a mapping with respect to the decision variable x(t) and is an implicit function with respect to t. Furthermore, it generally satisfies Phi(x*) = 0, where x* is a solution of the problem being solved. When f(x(t), t) = f(x(t)) for any t >= 0, that is to say, when TVNO only contains the decision variable and thus reduces to SNO, the CTZNN model (4) reduces to

dx(t)/dt = -lambda H^{-1}(x(t)) grad f(x(t)). (8)

Obviously dx(t)/dt = 0 for any x(t) being a solution of SNO; thus the CTZNN model (4) becomes a special neural network for SNO when f(x(t), t) = f(x(t)) for any t >= 0.

The general n-step ZeaD formula is defined as follows [19]:

dx_k/dt = (1/tau) sum_{i=0}^{n} a_i x_{k+1-i} + O(tau^p), (9)

where n >= 1 is the number of steps of the ZeaD formula (9); a_i (i = 0, 1, ..., n), with a_0 != 0, denote the coefficients; O(tau^p) denotes the truncation error; x_k is the value of x(t) at the time instant t = t_k := k*tau, i.e., x_k := x(t_k); and k denotes the updating index. Equation (9) with p >= 1 is termed an n-step pth-order ZeaD formula.
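As an editorial illustration of how the coefficients of (9) arise, the following Python sketch builds the Taylor moment conditions over the node offsets +1, 0, -1, ..., 1-n and solves them; the function name zead_coefficients and the matrix formulation are ours, not part of the original derivation.

```python
import numpy as np

def zead_coefficients(n, p):
    """Solve the Taylor moment conditions of the n-step ZeaD formula (9):
    sum_i a_i = 0, sum_i a_i d_i = 1, sum_i a_i d_i^m = 0 (m = 2..p),
    where d_i = 1 - i are the node offsets. For p = n the linear system
    is square and determines the coefficients uniquely."""
    d = 1.0 - np.arange(n + 1)                     # offsets +1, 0, -1, ..., 1-n
    A = np.vstack([d ** m for m in range(p + 1)])  # (p+1) x (n+1) moment matrix
    b = np.zeros(p + 1)
    b[1] = 1.0                                     # only the first moment is 1
    if p == n:
        return np.linalg.solve(A, b)
    return np.linalg.lstsq(A, b, rcond=None)[0]    # one particular solution

print(zead_coefficients(2, 2))  # [ 0.5  0.  -0.5]: the two-step formula below
print(zead_coefficients(5, 5))  # the five-step formula analyzed in Appendix B
```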

3. Multistep ZeaD Formulas and Discrete-Time Models

In this section, we first propose two two-step ZeaD formulas with truncation errors of O(tau^2) and O(tau), respectively, and prove that the five-step ZeaD formula with truncation error of O(tau^5) or O(tau^4) is not convergent. Then, two DTZNN models for TVNO are presented and analyzed subsequently.

3.1. Concepts of Convergence of Discrete-Time Models

The following concepts about zero-stability and consistency are used to analyze the theoretical results of our proposed discrete-time models [20].

Concept 1. The zero-stability of an n-step discrete-time method

sum_{j=0}^{n} alpha_j x_{k+j} = tau sum_{j=0}^{n} beta_j f(x_{k+j}, t_{k+j}) (10)

can be checked by determining the roots of the characteristic polynomial P_n(zeta) = sum_{j=0}^{n} alpha_j zeta^j. If the roots of P_n(zeta) = 0 are such that (i) all roots lie in the unit disk, i.e., |zeta| <= 1, and (ii) any roots on the unit circle (i.e., |zeta| = 1) are simple (i.e., not multiple), then the n-step discrete-time method is zero-stable.

Concept 2. An n-step discrete-time method is said to be consistent with order p if its truncation error is O(tau^p) with p >= 1 for the smooth exact solution.

Concept 3. For an n-step discrete-time method, it is convergent, i.e., x_k -> x(t_k) for all t_k in the time interval of interest as tau -> 0, if and only if such an algorithm is zero-stable and consistent (see Concepts 1 and 2). That is, zero-stability and consistency result in convergence. In particular, a zero-stable and consistent method converges with the order of its truncation error.
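Concept 1 can be checked mechanically. The minimal sketch below (ours; it assumes the characteristic polynomial is supplied highest-degree first) tests zero-stability with numpy.roots; the examples anticipate the two-step formulas derived next and the five-step formula of Appendix B.

```python
import numpy as np

def is_zero_stable(coeffs, tol=1e-9):
    """Concept 1 for a formula with coefficients [a_0, ..., a_n]: build
    P(z) = a_0 z^n + ... + a_n, then require all roots in the closed unit
    disk, with any roots on the unit circle simple."""
    roots = np.roots(coeffs)
    if np.any(np.abs(roots) > 1.0 + tol):
        return False
    for r in roots[np.abs(np.abs(roots) - 1.0) <= tol]:
        if np.sum(np.abs(roots - r) <= tol) > 1:   # multiple root on |z| = 1
            return False
    return True

print(is_zero_stable([0.5, 0.0, -0.5]))                 # formula (12) -> True
print(is_zero_stable([1.5, -2.0, 0.5]))                 # formula (13), a = 0.5 -> True
print(is_zero_stable([1/5, 13/12, -2, 1, -1/3, 1/20]))  # Appendix B -> False
```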

3.2. Multistep ZeaD Formulas

Based on the Taylor series expansion

x(t + delta) = sum_{i=0}^{m} (delta^i / i!) x^(i)(t) + O(delta^{m+1}), (11)

where m is a nonnegative integer, we can derive the two-step ZeaD formula, which is presented in the following lemma.

Lemma 2 (see [21]). The two-step ZeaD formula with truncation error of O(tau^2) can be expressed as

dx_k/dt = (x_{k+1} - x_{k-1}) / (2 tau) + O(tau^2), (12)

which is convergent, and the general two-step ZeaD formula with truncation error of O(tau) can be expressed as

dx_k/dt = (1/tau) ((1 + a) x_{k+1} - (1 + 2a) x_k + a x_{k-1}) + O(tau), (13)

which is convergent for any a in (-1/2, +inf).

Proof. The proof is presented in Appendix A.

After we received the reviewers' comments on the paper, our attention was brought to references [21, 22], which present rigorous proofs of Lemma 2 and of the following Lemma 5. However, for the completeness of the paper, we have decided to give the proofs in the Appendix.

Remark 3. If a = 0, then the general two-step ZeaD formula (13) reduces to the one-step ZeaD formula in [15]:

dx_k/dt = (x_{k+1} - x_k) / tau + O(tau). (14)

Then, in the following, the effective domain of a is set as (-1/2, 0) U (0, +inf).

Corollary 4. For any fixed and sufficiently small sampling gap tau, the truncation error of the general two-step ZeaD formula (13) is decreasing as the parameter a decreases to -1/2.

Proof. For any fixed and sufficiently small sampling gap tau, from the proof of Lemma 2, the truncation error is dominated by the term

((1 + 2a) / 2) tau (d^2 x_k / dt^2), (15)

which obviously becomes smaller as a decreases to -1/2.
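The following short numerical check (ours, written under the parameterization of (13) reconstructed above) illustrates Corollary 4 by differentiating x(t) = sin(t) at t = 1 and comparing with the exact derivative cos(1):

```python
import numpy as np

# The error should shrink roughly in proportion to |1 + 2a| as a -> -1/2,
# in line with the dominant term (15).
tau, tk = 0.01, 1.0
x = np.sin
for a in (0.5, 0.25, -0.25, -0.45, -0.499):
    approx = ((1 + a) * x(tk + tau) - (1 + 2 * a) * x(tk) + a * x(tk - tau)) / tau
    print(f"a = {a:+.3f}, |error| = {abs(approx - np.cos(tk)):.2e}")
```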

The following lemma reveals that the five-step ZeaD formula with truncation error of O(tau^5) or O(tau^4) is not convergent.

Lemma 5 (see [22]). The five-step ZeaD formula with truncation error of O(tau^5) or O(tau^4) is not convergent.

Proof. The proof is presented in Appendix B.

Remark 6. The polynomial (B.15) (see Appendix B) has five roots, which are denoted by zeta_i(a), i = 1, 2, ..., 5. Define the moduli maximum function as

r(a) := max_{1 <= i <= 5} |zeta_i(a)|. (16)

Figure 1 shows the graph of the function r(a), from which we observe that r(a) is always bigger than 1 except at a single value of a, at which the leading coefficient of the formula vanishes. However, from the definition of the general n-step ZeaD formula, we have that a_0 != 0.

3.3. Discrete-Time ZNN Models

In this subsection, two discrete-time ZNN (DTZNN) models are presented for TVNO based on the two convergent two-step ZeaD formulas (12) and (13).

Firstly, applying the two-step second-order ZeaD formula (12) to discretize the CTZNN model (4), we get the following DTZNN model for TVNO:

x_{k+1} = x_{k-1} - 2 H^{-1}(x_k, t_k) (h grad f(x_k, t_k) + tau grad f_t(x_k, t_k)), (17)

where the step-size h := lambda tau > 0.

Similarly, applying the general two-step first-order ZeaD formula (13) to discretize the CTZNN model (4), we get the following DTZNN model for TVNO:

x_{k+1} = (1/(1+a)) ((1 + 2a) x_k - a x_{k-1} - H^{-1}(x_k, t_k) (h grad f(x_k, t_k) + tau grad f_t(x_k, t_k))), (18)

where the step-size h := lambda tau > 0 and a in (-1/2, 0) U (0, +inf).
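For concreteness, here is a minimal Python sketch of iteration (18) as reconstructed above; the callables grad(x, t), grad_t(x, t), hess(x, t) and the function name dtznn_general are our own naming conventions, and the first iterate is produced by the one-step initialization (40) introduced later.

```python
import numpy as np

def dtznn_general(grad, grad_t, hess, x0, tau, lam, a, n_steps):
    """Sketch of the general two-step DTZNN model (18). grad(x, t) is the
    gradient of f, grad_t(x, t) its partial time derivative, and
    hess(x, t) the Hessian; all are supplied by the user."""
    h = lam * tau                         # step-size h = lambda * tau
    xs = [np.asarray(x0, dtype=float)]
    x1 = xs[0] - np.linalg.solve(         # one-step DTZNN initialization (40)
        hess(xs[0], 0.0), h * grad(xs[0], 0.0) + tau * grad_t(xs[0], 0.0))
    xs.append(x1)
    for k in range(1, n_steps):
        tk = k * tau
        step = np.linalg.solve(
            hess(xs[k], tk), h * grad(xs[k], tk) + tau * grad_t(xs[k], tk))
        xs.append(((1 + 2 * a) * xs[k] - a * xs[k - 1] - step) / (1 + a))
    return np.array(xs)
```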

Remark 7. (I) The DTZNN models (17) and (18) can also be used to solve SNO. In this case, grad f_t(x_k, t_k) = 0, and thus, as pointed out in [12], the DTZNN model (17) reduces to x_{k+1} = x_{k-1} - 2h H^{-1}(x_k) grad f(x_k). Furthermore, when x_{k-1} is replaced by x_k and h = 1/2, we have

x_{k+1} = x_k - H^{-1}(x_k) grad f(x_k), (19)

which is exactly the Newton algorithm for SNO [3]. Furthermore, in this case, the DTZNN model (18) reduces to

x_{k+1} = (1/(1+a)) ((1 + 2a) x_k - a x_{k-1} - h H^{-1}(x_k) grad f(x_k)), (20)

which can be viewed as a two-step iterative method for SNO when a != 0.
(II) The iterative schemes of the DTZNN models (17) and (18) are similar to those of the discrete neural network (DNN) models in [9-11], which are obtained by discretizing the motion equation (7) of the neural network by the Euler method and can be expressed by

x_{k+1} = x_k + h Phi(x_k), (21)

where h > 0 is a step-size. Generally, the sequence generated by the iterative scheme (21) converges globally to a solution of the static problem being solved, while in the following we shall prove that the sequence generated by the DTZNN model (18) is convergent in the sense that the sequence of steady-state residual errors (SSRE) ||grad f(x_k, t_k)|| converges to zero with order O(tau^2). In fact, the following Theorem 9 indicates that ||grad f(x_k, t_k)|| is bounded above by c tau^2 for some constant c > 0. Therefore, at the time instant t_{k+1} (for sufficiently large k), the generated iterate x_{k+1} can approximate the solution x*(t_{k+1}) of TVNO with high precision when the sampling gap tau is sufficiently small.

Theorem 8. Suppose that {x_k} is the sequence generated by the two-step DTZNN model (17), and let e_k := grad f(x_k, t_k) be the generated steady-state residual error (SSRE). Then the sequence {||e_k||} is divergent.

Proof. Let H_k := H(x_k, t_k) and grad f_{t,k} := grad f_t(x_k, t_k). Then, the proposed two-step DTZNN model (17) can be reformulated as

H_k (x_{k+1} - x_{k-1}) = -2 (h e_k + tau grad f_{t,k}). (22)

On the other hand, by the Taylor series expansion, we have

e_{k+1} = e_k + H_k (x_{k+1} - x_k) + tau grad f_{t,k} + O(tau^2) (23)

and

e_{k-1} = e_k + H_k (x_{k-1} - x_k) - tau grad f_{t,k} + O(tau^2), (24)

where the second-order derivative terms are absorbed into O(tau^2) as they are assumed to be of the same order of magnitude [20]. By the algebraic manipulation "(23)/2 - (24)/2," the following result can be obtained:

(e_{k+1} - e_{k-1}) / 2 = H_k (x_{k+1} - x_{k-1}) / 2 + tau grad f_{t,k} + O(tau^2), (25)

which together with (22) implies

(e_{k+1} - e_{k-1}) / 2 = -h e_k + O(tau^2), (26)

i.e.,

e_{k+1} = -2h e_k + e_{k-1} + O(tau^2). (27)

Setting r_k := ||e_k||, (27) can be written as

r_{k+1} = -2h r_k + r_{k-1} + O(tau^2). (28)

The characteristic equation of the difference equation (28) is

zeta^2 + 2h zeta - 1 = 0, (29)

which has two different real roots from the discriminant 4h^2 + 4 > 0. By [23], at least one root of the real quadratic equation (29) is greater than or equal to one in modulus. Thus, the sequence {r_k} is divergent, and so is the sequence {e_k}. The proof is completed.

Theorem 9. Suppose that {x_k} is the sequence generated by the two-step DTZNN model (18), and let e_k := grad f(x_k, t_k) be the generated steady-state residual error (SSRE). Then, for any a in (-1/2, 0) U (0, +inf) and step-size h in (0, 2 + 4a), we have that ||e_k|| is of order O(tau^2), and thus the sequence {||e_k||} converges to zero with order O(tau^2).

Proof. Let H_k := H(x_k, t_k) and grad f_{t,k} := grad f_t(x_k, t_k). Then, the two-step DTZNN model (18) can be reformulated as

H_k ((1 + a) x_{k+1} - (1 + 2a) x_k + a x_{k-1}) = -(h e_k + tau grad f_{t,k}). (30)

By the algebraic manipulation "(1+a) (23) + a (24)," we get the following equation:

(1 + a) e_{k+1} + a e_{k-1} = (1 + 2a) e_k + H_k ((1 + a) x_{k+1} - (1 + 2a) x_k + a x_{k-1}) + tau grad f_{t,k} + O(tau^2), (31)

which together with (30) implies

(1 + a) e_{k+1} + a e_{k-1} = (1 + 2a - h) e_k + O(tau^2), (32)

i.e.,

e_{k+1} = (1/(1+a)) ((1 + 2a - h) e_k - a e_{k-1}) + O(tau^2). (33)

Similarly, letting r_k := ||e_k||, (33) can be written as

(1 + a) r_{k+1} - (1 + 2a - h) r_k + a r_{k-1} = O(tau^2). (34)

The characteristic equation of the difference equation (34) is

(1 + a) zeta^2 - (1 + 2a - h) zeta + a = 0. (35)

By [23], the two roots of (35) are less than one in modulus if and only if

|a / (1 + a)| < 1 and |1 + 2a - h| < 1 + 2a. (36)

Obviously, the first inequality always holds for any a in (-1/2, 0) U (0, +inf); therefore we only analyze the second inequality, which is equivalent to

-(1 + 2a) < 1 + 2a - h < 1 + 2a, (37)

and thus

0 < h < 2 + 4a. (38)

So

h in (0, 2 + 4a). (39)

Then, the sequence {r_k} is convergent for any a in (-1/2, 0) U (0, +inf) and step-size h in (0, 2 + 4a), and so is the sequence {e_k}. The proof is completed.

Obviously, two initial states (i.e., x_0 and x_1) are needed to start the iteration of the DTZNN model (18). We use the following one-step DTZNN model [15] to initiate the iterative computation:

x_1 = x_0 - H^{-1}(x_0, t_0) (h grad f(x_0, t_0) + tau grad f_t(x_0, t_0)). (40)

Remark 10. The upper bound of the step-size h, that is, 2 + 4a, is an increasing function with respect to the parameter a, and when a tends to 1/2, it converges to 4.
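The step-size bound of Theorem 9 can be spot-checked numerically. The snippet below (ours, under the reconstruction of (35) given above) prints the spectral radius of the characteristic equation for step-sizes just inside and just outside (0, 2 + 4a):

```python
import numpy as np

# Roots of (35), (1+a) z^2 - (1+2a-h) z + a = 0, for a = -0.25, so that the
# upper bound is 2 + 4a = 1: the spectral radius crosses 1 exactly at h = 1.
a = -0.25
for h in (0.1, 0.5, 0.999, 1.001):
    roots = np.roots([1 + a, -(1 + 2 * a - h), a])
    print(f"h = {h:5.3f}, max |root| = {np.max(np.abs(roots)):.6f}")
```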

The following theorem shows that the upper bound 2 + 4a of the step-size h is tight.

Theorem 11. For any a in (-1/2, 0) U (0, +inf), if h = 2 + 4a, then the sequence {r_k} generated by the DTZNN model (18) does not converge to zero.

Proof. If h = 2 + 4a, then the characteristic equation of the difference equation (34) reduces to

(1 + a) zeta^2 + (1 + 2a) zeta + a = 0, (41)

which has two different real roots

zeta_1 = -1, zeta_2 = -a / (1 + a). (42)

Thus the general solution of the difference equation (34) is

r_k = c_1 (-1)^k + c_2 (-a / (1 + a))^k, (43)

where c_1, c_2 are two arbitrary constants which are determined by the two initial states r_0, r_1. So, the limit of the sequence {r_k} generally does not exist except for c_1 = 0, which indicates that the sequence {r_k} generally does not converge to zero. This completes the proof.

In the remainder of this subsection, let us investigate the optimal step-size h for given a. The discriminant of (35) is

Delta(h) = (1 + 2a - h)^2 - 4a(1 + a), (44)

where h in (0, 2 + 4a). Set phi(h) := (1 + 2a - h)^2 - 4a(1 + a) = h^2 - 2(1 + 2a)h + 1, which is a quadratic function with respect to h, and its discriminant is

4(1 + 2a)^2 - 4 = 16a(1 + a). (45)

The following analyses are divided into two cases according to the sign of a (note that 1 + a > 0).

(1) If a in (-1/2, 0), then 16a(1 + a) is less than zero, and thus phi(h) is positive for every h, which indicates that (35) has two different real roots, which are denoted by zeta_1, zeta_2. Therefore, it holds that

zeta_1 + zeta_2 = (1 + 2a - h) / (1 + a), zeta_1 zeta_2 = a / (1 + a), (46)

and the second equation indicates that zeta_1, zeta_2 have contrary signs; then we assume that zeta_2 < 0 < zeta_1 without loss of generality. Furthermore, for any a in (-1/2, 0) and step-size h in (0, 2 + 4a), from Theorem 9 we have |zeta_1| < 1 and |zeta_2| < 1. The general solution of (34) can be written as

r_k = c_1 zeta_1^k + c_2 zeta_2^k, (47)

which together with (46) results in the following model to determine the optimal step-size:

min_{h in (0, 2+4a)} max{|zeta_1|, |zeta_2|} s.t. (1 + a) zeta_i^2 - (1 + 2a - h) zeta_i + a = 0, i = 1, 2. (48)

However, the nonlinear optimization problem (48) is often difficult to solve, and in the following we give an intuitive analysis of the optimal step-size. Obviously, the smaller max{|zeta_1|, |zeta_2|} is, the smaller the term |c_1 zeta_1^k + c_2 zeta_2^k| is. Thus, under the constraint conditions of (48), we aim to minimize the term max{|zeta_1|, |zeta_2|} and equivalently minimize the term |zeta_1| + |zeta_2| = |zeta_1 - zeta_2| (recall that zeta_1 and zeta_2 have contrary signs), which can be written as

|zeta_1 - zeta_2| = sqrt((1 + 2a - h)^2 - 4a(1 + a)) / (1 + a). (49)

Obviously, (49) obtains its minimum value at h = 1 + 2a.

(2) If a in (0, +inf), then 16a(1 + a) is nonnegative, and the quadratic equation phi(h) = 0 has two real roots h_1 := 1 + 2a - 2 sqrt(a(1 + a)) and h_2 := 1 + 2a + 2 sqrt(a(1 + a)), which satisfy 0 < h_1 < h_2 < 2 + 4a. (I) For any h in (0, h_1) U (h_2, 2 + 4a), phi(h) > 0. Similar to the above analysis, the optimal step-size can be approximated by h_1, which belongs to the interval (0, 1). (II) For any h in [h_1, h_2], phi(h) <= 0, which indicates that (35) has a multiple root or two complex conjugate roots, which are denoted by zeta_1, zeta_2 again and satisfy |zeta_1| = |zeta_2| = sqrt(a / (1 + a)). Then, according to the general solution formula of the difference equation, we minimize max{|zeta_1|, |zeta_2|} to approximate the optimal step-size. Obviously max{|zeta_1|, |zeta_2|} = sqrt(a / (1 + a)), which is independent of the step-size h; therefore the optimal step-size can also be approximated by h = 1 + 2a in [h_1, h_2].

Overall, we get the following theorem.

Theorem 12. For any given a in (-1/2, 0) U (0, +inf), the optimal step-size of the DTZNN model (18) can be approximated by h_opt = 1 + 2a.
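Theorem 12 can be explored numerically. The sketch below (ours, based on the reconstruction of (35) above) scans the spectral radius over the admissible step-sizes; note that for a > 0 the radius is flat on an interval containing 1 + 2a, so the numerical argmin may land anywhere on that plateau.

```python
import numpy as np

def rho(a, h):
    """Spectral radius of the characteristic equation (35)."""
    return np.max(np.abs(np.roots([1 + a, -(1 + 2 * a - h), a])))

for a in (-0.3, -0.1, 0.5):
    hs = np.linspace(1e-3, 2 + 4 * a - 1e-3, 4001)
    h_star = hs[int(np.argmin([rho(a, h) for h in hs]))]
    print(f"a = {a:+.1f}: numerical argmin h = {h_star:.3f}, "
          f"1 + 2a = {1 + 2 * a:.3f}")
```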

4. Numerical Results

In this section, we present some numerical results to substantiate the efficiency and superiority of the proposed DTZNN model (18) (denoted by DTZNN-I) for TVNO, compared with the one-step DTZNN model based on (14) (denoted by DTZNN-II) in [15]. All the numerical experiments are performed on a ThinkPad laptop with an Intel Core 2 CPU at 2.10 GHz and 4.00 GB of RAM. All the programs are written in Matlab R2014a.

Consider the TVNO test problem from [16], referred to as problem (50) below; its stationary point x*(t) can be obtained by Matlab and is omitted due to its complicated expression. Now, we use DTZNN-I and DTZNN-II to solve problem (50), and the parameters are set as follows: two sampling gaps tau (in seconds) are tested, and the design parameter lambda (and hence the step-size h = lambda tau) and the free parameter a are fixed. The initial state vector x_0 is fixed, with the time duration being 10 s. The trajectories of the SSRE of the TVNO problem (50) generated by the two tested DTZNN models are depicted in Figure 2.
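Since the objective of problem (50) is given in [16], the following self-contained driver substitutes a simple hypothetical stand-in objective (f(x, t) = 0.5 ||x - c(t)||^2 with c(t) = (sin t, cos t), whose minimizer is x*(t) = c(t)) merely to show how iteration (18) with initialization (40) is run in practice; all names here are ours.

```python
import numpy as np

# Hypothetical stand-in objective (NOT problem (50)): grad f = x - c(t),
# its partial time derivative is -c'(t), and the Hessian is the identity.
c  = lambda t: np.array([np.sin(t), np.cos(t)])
dc = lambda t: np.array([np.cos(t), -np.sin(t)])

tau, a = 0.01, -0.25
h = 1 + 2 * a                  # optimal step-size of Theorem 12 (= 0.5 < 2 + 4a)
x_prev = np.zeros(2)                                          # x_0
x_curr = x_prev - (h * (x_prev - c(0.0)) - tau * dc(0.0))     # init (40)
for k in range(1, 1001):                                      # about 10 s
    tk = k * tau
    g, g_t = x_curr - c(tk), -dc(tk)
    x_prev, x_curr = x_curr, ((1 + 2 * a) * x_curr - a * x_prev
                              - (h * g + tau * g_t)) / (1 + a)
print("final SSRE:", np.linalg.norm(x_curr - c(1001 * tau)))
```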

Figure 2 illustrates that the performance of the DTZNN model (18) is better than that of the one-step DTZNN model, and both generated SSREs converge to zero in an O(tau^2) manner. So, when the sampling gap tau decreases, both SSREs can be made sufficiently small.

Figure 3 depicts the trajectories of the theoretical solution x*(t) of problem (50) and of the states generated by the DTZNN model (18), which shows that the numerical results generated by the DTZNN model (18) approximate the theoretical solution with high accuracy.

Figure 4 shows the trajectories of the SSRE ||grad f(x_k, t_k)|| generated by the two tested models, together with their differences, for a fixed sampling gap, from which we can find that the SSRE generated by the DTZNN-I model is generally smaller than that generated by the DTZNN-II model, which means that the former is more accurate than the latter.

Now, let us verify Theorem 11: we compare the numerical results generated by the DTZNN model (18) with a step-size h inside the interval (0, 2 + 4a) and those generated by the DTZNN model (18) with h equal to the upper bound 2 + 4a. The numerical results are depicted in Figure 5, from which we find that the generated sequence does not converge to zero when the step-size h is equal to the upper bound 2 + 4a, and this is consistent with Theorem 11.

In the remainder of this section, let us verify Theorem 12 by comparing a non-optimal step-size with the optimal step-size h_opt = 1 + 2a for fixed a. The numerical results are depicted in Figure 6, which shows that the performance of the DTZNN model with h = h_opt is better than that of the DTZNN model with the non-optimal step-size, and this is consistent with Theorem 12.

5. Conclusion

In this paper, we have investigated a convergent two-step ZeaD formula with truncation error of O(tau^2), a convergent general two-step ZeaD formula with truncation error of O(tau), and a general five-step ZeaD formula with truncation error of O(tau^5) or O(tau^4), which is not convergent. Then, based on the two convergent ZeaD formulas, we presented two DTZNN models for TVNO and proved that one is divergent and the other, with the free parameter a in (-1/2, 0) U (0, +inf) and step-size h in (0, 2 + 4a), is convergent. We also proved that 2 + 4a is the tight upper bound of the step-size h and that 1 + 2a is the optimal step-size. Numerical results illustrate that the proposed DTZNN model is efficient for solving TVNO.

In the future, the following two issues related to this paper deserve further study: (I) Theorem 12 only considers the optimal step-size h for any given a, and therefore we need to study the optimal parameter a for any given step-size h; (II) the general three-step DTZNN model and the general four-step DTZNN model proposed in [17] and [16], respectively, both do not give the relationship between the free parameter and the step-size, and therefore we are going to extend the technique used in Theorems 8 and 9 to study these two general multistep DTZNN models and explore the relationship between the parameter and the step-size.

Appendix

A. Proof of Lemma 2

Based on (11), the Taylor series expansions of x_{k+1} and x_{k-1} at t_k are given as

x_{k+1} = x_k + tau (dx_k/dt) + (tau^2/2) x_k^(2) + (tau^3/6) x_k^(3) + O(tau^4) (A.1)

and

x_{k-1} = x_k - tau (dx_k/dt) + (tau^2/2) x_k^(2) - (tau^3/6) x_k^(3) + O(tau^4). (A.2)

Substituting (A.1) and (A.2) into (9) with n = 2, i.e.,

dx_k/dt = (1/tau) (a_0 x_{k+1} + a_1 x_k + a_2 x_{k-1}) + O(tau^p), (A.3)

we get the following equation:

dx_k/dt = (1/tau) (c_0 x_k + c_1 tau (dx_k/dt) + c_2 (tau^2/2) x_k^(2) + c_3 (tau^3/6) x_k^(3)) + O(tau^3), (A.4)

where c_0 = a_0 + a_1 + a_2, c_1 = a_0 - a_2, c_2 = a_0 + a_2, and c_3 = a_0 - a_2. If p = 2, from Concept 2, to ensure that the two-step ZeaD formula (A.3) has a truncation error of O(tau^2), we only need to ensure that the following conclusions are satisfied:

c_0 = 0, c_1 = 1, c_2 = 0. (A.5)

Solving the above three linear equations, we get

a_0 = 1/2, a_1 = 0, a_2 = -1/2. (A.6)

Substituting (A.6) into (A.3), we get the two-step ZeaD formula (12) with truncation error of O(tau^2). Then, the characteristic polynomial of (12) is

P_2(zeta) = (1/2) zeta^2 - 1/2, (A.7)

whose two roots are zeta_1 = 1 and zeta_2 = -1. By Concept 1, the two-step ZeaD formula (12) is zero-stable and thus is convergent by Concept 3.

Similarly, if p = 1, to ensure that the two-step ZeaD formula (A.3) has a truncation error of O(tau), we only need to ensure that the following conclusions are satisfied:

c_0 = 0, c_1 = 1. (A.8)

Solving the above two linear equations and letting a_2 = a be a free parameter, we get

a_0 = 1 + a, a_1 = -(1 + 2a), a_2 = a. (A.9)

In this case, c_2 = a_0 + a_2 = 1 + 2a. Then, (A.4) is reformulated as

dx_k/dt = dx_k/dt + (1 + 2a) (tau/2) x_k^(2) + O(tau^2) = dx_k/dt + O(tau), (A.10)

which is obviously true. Substituting (A.9) into (A.3), we get the two-step ZeaD formula (13) with truncation error of O(tau). Then, the characteristic polynomial of (13) is

P_2(zeta) = (1 + a) zeta^2 - (1 + 2a) zeta + a. (A.11)

By adopting the bilinear transform zeta = (w + 1)/(w - 1) [24], we get the following equation:

2w + (2 + 4a) = 0 (A.12)

(the degree drops because zeta = 1, which maps to w = infinity, is always a root of a consistent formula). Then, according to Routh's stability criterion [25], the general two-step ZeaD formula is zero-stable if and only if

2 > 0 and 2 + 4a > 0, i.e., a > -1/2. (A.13)

Therefore the general two-step ZeaD formula (13) is convergent with truncation error of O(tau) if a in (-1/2, +inf). Since a = 0 reduces (13) to the one-step ZeaD formula (14), the effective domain of a is (-1/2, 0) U (0, +inf). This completes the proof.
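As a quick symbolic cross-check of (A.6) and (A.9) (an editorial addition, using our coefficient names a0, a1, a2), the moment conditions can be re-derived with sympy:

```python
import sympy as sp

a0, a1, a2, a = sp.symbols('a0 a1 a2 a')

# p = 2: conditions (A.5), unique solution -> the central-difference formula (12)
print(sp.solve([a0 + a1 + a2, a0 - a2 - 1, a0 + a2], [a0, a1, a2]))
# -> {a0: 1/2, a1: 0, a2: -1/2}

# p = 1: conditions (A.8) with a2 = a left free -> the general formula (13)
print(sp.solve([a0 + a1 + a2, a0 - a2 - 1, sp.Eq(a2, a)], [a0, a1, a2]))
# -> {a0: a + 1, a1: -2*a - 1, a2: a}
```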

B. Proof of Lemma 5

Based on (11), the Taylor series expansions of x_{k+1}, x_{k-1}, x_{k-2}, x_{k-3}, and x_{k-4} at t_k are given as

x_{k+j} = x_k + (j tau) (dx_k/dt) + ((j tau)^2/2) x_k^(2) + ((j tau)^3/6) x_k^(3) + ((j tau)^4/24) x_k^(4) + ((j tau)^5/120) x_k^(5) + O(tau^6), j = 1, -1, -2, -3, -4. (B.1)-(B.5)

Substituting (B.1)-(B.5) into (9) with n = 5, i.e.,

dx_k/dt = (1/tau) (a_0 x_{k+1} + a_1 x_k + a_2 x_{k-1} + a_3 x_{k-2} + a_4 x_{k-3} + a_5 x_{k-4}) + O(tau^p), (B.6)

we get the following equation:

dx_k/dt = (1/tau) (c_0 x_k + c_1 tau (dx_k/dt) + c_2 (tau^2/2) x_k^(2) + c_3 (tau^3/6) x_k^(3) + c_4 (tau^4/24) x_k^(4) + c_5 (tau^5/120) x_k^(5)) + O(tau^5), (B.7)

where

c_0 = a_0 + a_1 + a_2 + a_3 + a_4 + a_5, c_m = a_0 + (-1)^m a_2 + (-2)^m a_3 + (-3)^m a_4 + (-4)^m a_5, m = 1, 2, ..., 5. (B.8)

If p = 5, from Concept 2, to ensure that the five-step ZeaD formula (B.6) has a truncation error of O(tau^5), we only need to ensure that the following conclusions are satisfied:

c_0 = 0, c_1 = 1, c_2 = c_3 = c_4 = c_5 = 0. (B.9)

Solving the above six linear equations with respect to a_0, a_1, ..., a_5, we get

a_0 = 1/5, a_1 = 13/12, a_2 = -2, a_3 = 1, a_4 = -1/3, a_5 = 1/20. (B.10)

Substituting (B.10) into (B.6), we get a five-step ZeaD formula with truncation error of O(tau^5), whose characteristic polynomial is

P_5(zeta) = (1/5) zeta^5 + (13/12) zeta^4 - 2 zeta^3 + zeta^2 - (1/3) zeta + 1/20, (B.11)

of which the roots are 1, -6.9614, 0.26698, 0.13887 - 0.33945i, and 0.13887 + 0.33945i, with i denoting the imaginary unit. By Concept 1, the resulting five-step ZeaD formula is not zero-stable since the root -6.9614 lies outside the unit disk. Therefore the five-step ZeaD formula with truncation error of O(tau^5) is not zero-stable and thus is not convergent.

Similarly, if p = 4, to ensure that the five-step ZeaD formula (B.6) has a truncation error of O(tau^4), we only need to ensure that the first five linear equations of (B.9) hold; that is, the following conclusions are satisfied:

c_0 = 0, c_1 = 1, c_2 = c_3 = c_4 = 0. (B.12)

Solving the above five linear equations and letting a_5 = a be a free parameter, we have

(a_0, a_1, a_2, a_3, a_4, a_5) = (1/5, 13/12, -2, 1, -1/3, 1/20) + (a - 1/20) (-1, 5, -10, 10, -5, 1). (B.13)

In this case, c_5 = -120 (a - 1/20). Then, (B.7) is reformulated as

dx_k/dt = dx_k/dt - (a - 1/20) tau^4 x_k^(5) + O(tau^5) = dx_k/dt + O(tau^4), (B.14)

which is obviously true. Substituting (B.13) into (B.6), we can get a general five-step ZeaD formula with truncation error of O(tau^4), whose characteristic polynomial is

P_5(zeta) = a_0 zeta^5 + a_1 zeta^4 + a_2 zeta^3 + a_3 zeta^2 + a_4 zeta + a_5, with the a_i given by (B.13). (B.15)

Then, by adopting the bilinear transform again, we get the following equation:

b_4 w^4 + b_3 w^3 + b_2 w^2 + b_1 w + b_0 = 0, (B.16)

where the coefficients b_0, b_1, ..., b_4 depend affinely on the parameter a. Unfortunately, these coefficients cannot all be positive simultaneously for any value of a. Then, according to Routh's stability criterion again, the general five-step ZeaD formula with truncation error of O(tau^4) is not zero-stable and thus is divergent. This completes the proof.
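The roots quoted after (B.11) can be reproduced numerically; the short check below (ours) feeds the coefficients (B.10) to numpy.roots:

```python
import numpy as np

# Characteristic polynomial of the unique fifth-order five-step ZeaD formula,
# with the coefficients obtained from (B.9)-(B.10).
coeffs = [1 / 5, 13 / 12, -2, 1, -1 / 3, 1 / 20]
print(np.round(np.sort_complex(np.roots(coeffs)), 5))
# expected: 1, -6.9614, 0.26698, 0.13887 +/- 0.33945i; the root -6.9614
# lies outside the unit disk, so the formula is not zero-stable.
```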

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This research was partially supported by the National Natural Science Foundation of China and Shandong Province (Nos. 11671228, 11601475, and ZR2016AL05).