Stability and Boundedness of Solutions to Some Multidimensional Time-Varying Nonlinear Systems
Assessment of the degree of boundedness/stability of multidimensional nonlinear systems with time-dependent and nonperiodic coefficients is an important problem in various applied areas which has no adequate resolution yet. Most of the known techniques in this field provide computationally intensive and conservative stability criteria and frequently fail to estimate the regions of boundedness/stability of solutions to the corresponding systems. Recently, we outlined a new approach to this problem based on the analysis of a scalar auxiliary equation whose solutions bound from above the time histories of the norms of solutions to the original system. This paper develops a novel technique casting the auxiliary equation in a modified form which extends the application domain and reduces the computational burden of our prior approach. Consequently, we develop more general boundedness/stability criteria and estimate trapping/stability regions for some multidimensional nonlinear systems with nonperiodic time-dependent coefficients that are common in various application domains. This allows us to assess in target simulations the extent of boundedness/stability of some multidimensional, nonlinear, and time-varying systems which were intractable with our prior technique.
1. Introduction and Motivating Example
Analysis of the boundedness and stability of nonlinear systems with variable and nonperiodic coefficients plays a vital role in various engineering and natural science problems which, for instance, are concerned with the design of controllers and observers. It appears that currently there are no verifiable necessary and sufficient conditions for local stability of the trivial solution to homogeneous systems of this kind (see, e.g., [1–7]). It was shown by Lyapunov that, under some additional conditions, the trivial solution to a homogeneous system with time-varying coefficients is asymptotically stable if the linearization of this system at zero is regular and its maximal Lyapunov exponent is negative (see also a contemporary review of this subject). Yet, the confirmation of the former condition presents a considerable problem in applications. Subsequently, it was demonstrated by Perron that arbitrarily small perturbations can reverse the sign of the Lyapunov exponents of a linearized system with time-varying coefficients and alter its stability. This attested that Lyapunov's regularity condition is essential. Subsequent works focused on examining the Lyapunov stability conditions through a review of the stability of Lyapunov exponents for linearized nonautonomous systems. Finally, the necessary and sufficient conditions for the stability of Lyapunov exponents of solutions to the linearized system were given in [10, 11]. However, it turns out that verification of these conditions requires knowledge of the fundamental set of solutions to the underlying systems, which is rarely attainable in applications.
Application of the concept of generalized exponents, which was introduced in , provides sufficient conditions for the stability of the trivial solution to time-varying nonlinear systems which, in principle, can be checked with the aid of numerical simulations. Still, the upper generalized exponent is larger than or equal to the maximal Lyapunov exponent, which heightens the conservatism of this more robust approach.
To derive a simple sufficient stability condition within the latter approach, we first write the following equation with a marked linearization:

$$\dot{x} = A(t)x + f(x, t), \quad x(t_0) = x_0, \tag{1}$$

where the vector function $f(x,t)$ and the matrix $A(t)$ are continuous, $t \ge t_0$, $x \in D$, and $D$ is a neighborhood of $x = 0$.
Here $\mathbb{R}^n$ is the real $n$-dimensional Euclidean space, $\|\cdot\|$ stands for the induced 2-norm of a matrix or the 2-norm of a vector, and $x(t)$ is a solution to (1). To shorten the notation, we will write below that $x(t) := x(t; t_0, x_0)$ and assume that (1) possesses a unique solution for every $x_0 \in D$.
Due to the continuity of the right side of (1), the last assumption implies that $x(t)$ is a continuous and continuously differentiable function that is bounded on any finite interval. Consequently, the remainder of this paper focuses on the behavior of $x(t)$ as $t \to \infty$.
We will also assess the stability of the trivial solution to the following homogeneous equation:

$$\dot{x} = A(t)x + f(x, t), \quad f(0, t) = 0, \tag{2}$$

and its linear counterpart:

$$\dot{x} = A(t)x. \tag{3}$$
Next, we write the Lipschitz continuity condition for $f$ as follows:

$$\|f(x, t) - f(y, t)\| \le l(t)\|x - y\|, \quad x, y \in \bar{D}, \tag{4}$$

where $\bar{D}$ is a bounded neighborhood of $x = 0$ and $l(t)$ is a continuous function. In turn, let us assume that conditions (5) and (6) hold.
A more general but less tractable condition on the norm of the transition matrices was presented in [1, 2] in the form of (7), where it was shown that (4), (7), and the following condition (8) ensure the asymptotic stability of the trivial solution to (2). More compelling stability conditions for (2) were given in , yet verification of these conditions in applications can be challenging as well.
In the control literature, the analysis of the stability of nonautonomous nonlinear systems has frequently been aided by the application of the Lyapunov function method [15–23]. Nonetheless, the application of this methodology is strenuous in this area since adequate Lyapunov functions are rarely known for multidimensional and nonlinear systems with nonperiodic coefficients.
The problem of estimating the stability regions of autonomous nonlinear systems has attracted numerous publications in the last few decades [24–38], but these techniques fail for systems with time-varying coefficients. To our knowledge, the problem of estimating the trapping regions for nonautonomous nonlinear systems has virtually not been addressed in the current literature.
Assessment of the stability of nonlinear systems based on the analysis of the convergence of adjacent trajectories was developed in . However, to our knowledge, this approach rarely leads to computationally sound stability conditions for multidimensional systems.
In , we developed a novel technique for estimating upper bounds on the norms of solutions to (1) or (2) and used it to establish boundedness/stability criteria and estimate the trapping/stability regions for these systems. Consequently, under the normalizing condition , we derived the inequality $\|x(t)\| \le u(t)$, where $x(t)$ is a solution to (1) and $u(t)$ ( is the set of nonnegative real numbers) is a solution to a scalar equation we derived in :
In turn, Pinsky and Koblik  also introduced a nonlinear extension of the Lipschitz continuity condition, which was presented as follows: (12), where the majorant is a continuous function in its arguments, locally Lipschitz in the state variable, and defined on a bounded neighborhood of zero. Note that Pinsky and Koblik  defined (12) in closed form if the nonlinearity is either a piecewise polynomial function or can be approximated by such a function with a bounded error term, which, for instance, can be written in Lagrange form. In the former case, (12) holds exactly, and in the latter case, it holds if the error term is also globally bounded. Thus, this condition holds for a large set of nonlinear systems emerging in science and engineering applications, and we are going to use it in the remainder of this paper.
Let us illustrate how to define  for a simple but representative vector function. Assume that  and a vector function is defined, e.g., as follows: . Then, , where we use that , and  is the set of positive integers. Note that additional and more complex examples of this kind are provided in Section 5 (see also [13, 14]).
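The construction of such a scalar majorant can be verified numerically. The following is a minimal sketch with our own illustrative choice (not the example elided above): the quadratic field $f(x) = (x_1 x_2, x_1^2)^T$, for which $\|f(x)\| = |x_1|\,\|x\| \le \|x\|^2$, so $\eta(u) = u^2$ serves as the majorant:

```python
import numpy as np

def f(x):
    # Illustrative quadratic vector field (our own example): f(x) = (x1*x2, x1^2)
    return np.array([x[0] * x[1], x[0] ** 2])

def eta(u):
    # Candidate scalar majorant: ||f(x)|| <= eta(||x||) with eta(u) = u^2,
    # since ||f(x)|| = |x1| * sqrt(x2^2 + x1^2) = |x1| * ||x|| <= ||x||^2.
    return u ** 2

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-5.0, 5.0, size=2)
    assert np.linalg.norm(f(x)) <= eta(np.linalg.norm(x)) + 1e-12
print("majorant bound holds on all samples")
```

A random sweep like this is, of course, only a sanity check of the algebraic bound, not a proof.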
To reduce the notation, we adopt in (13) and throughout this paper that .
It turned out that (13) provides sound estimates of the trapping/stability regions under the assumption that only the bound of  is known—a frequent assumption in the control literature [3–5]. Nonetheless, such estimates may turn overly conservative if  is defined explicitly. Consequently, we refined this methodology in , where (13) was used to estimate the error of successive approximations of solutions to (1) or (2) stemming from the trapping/stability regions of these systems. This modified approach enhanced our boundedness/stability criteria and delivered approximations that successively increased the accuracy of estimation of the boundaries of the corresponding trapping/stability regions.
Nonetheless, the methodologies developed in [13, 14] work under the condition that , which considerably limits the scope of their applications. Indeed, it follows from (11) that frequently  even if  is a stable and time-invariant matrix.
The current paper lifts this limitation for a practically important class of nonlinear systems with time-varying and nonperiodic coefficients. Its main contribution is the development of a modified auxiliary equation with  and  under some conditions that are frequently met in various applications. This new technique prompts the development of novel criteria of boundedness/stability and the estimation of boundedness/stability regions for a wide class of systems that were intractable to our former methodology  and obviates the elaborate simulations of , , and  that were required previously.
To make this paper more self-contained, we present the definitions of some standard notions of stability theory which are going to be used in the remainder of this paper. Note that the standard definitions of Lyapunov stability and asymptotic stability for time-varying nonlinear systems that are accepted below can be found, e.g., in [1–5]. In the remainder of this paper, we will call these properties simply either stability or asymptotic stability.
For (2), the Lyapunov exponents are defined as $\lambda[x] = \limsup_{t \to \infty} t^{-1} \ln \|x(t)\|$, where $x(t)$ is a solution to (2). The maximal Lyapunov exponent, $\lambda_{\max}$, measures the maximal rate of exponential growth/decay of the corresponding solutions as $t \to \infty$ and bears a pivotal role in stability theory. For a linear system (3), the Lyapunov exponents are defined as $\lambda_i = \limsup_{t \to \infty} t^{-1} \ln \sigma_i(X(t))$, where $\sigma_i$ are the singular values of the fundamental solution matrix $X(t)$ of (3), and for linear systems, $\lambda_{\max} = \max_i \lambda_i$. This allows us to bound the transition matrix of (3) as follows:

$$\|X(t)\| \le C_\varepsilon e^{(\lambda_{\max} + \varepsilon)t}, \tag{14}$$

where $\varepsilon$ is a small positive number and $C_\varepsilon \ge 1$.
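The singular-value definition above can be applied numerically. The sketch below (our own illustration, under the assumption of a simple Euler propagation of the fundamental matrix) estimates the maximal Lyapunov exponent of a linear system; for the time-invariant test matrix with eigenvalues $-1$ and $-2$, the estimate should be close to $-1$:

```python
import numpy as np

def max_lyapunov_exponent(A, t_final=50.0, h=1e-3):
    """Estimate the maximal Lyapunov exponent of x' = A(t) x by propagating
    the fundamental matrix X(t) (X(0) = I) with explicit Euler steps and
    returning (1/t) * ln(sigma_max(X(t))) at t = t_final."""
    n = A(0.0).shape[0]
    X = np.eye(n)
    t, log_scale = 0.0, 0.0            # log_scale accumulates renormalizations
    for _ in range(int(t_final / h)):
        X = X + h * (A(t) @ X)
        t += h
        s = np.linalg.norm(X, 2)
        if s > 1e6:                    # renormalize to avoid overflow for unstable systems
            X /= s
            log_scale += np.log(s)
    sigma_max = np.linalg.svd(X, compute_uv=False)[0]
    return (log_scale + np.log(sigma_max)) / t

# Time-invariant test case (our assumption): eigenvalues -1 and -2.
A = lambda t: np.array([[-1.0, 0.0], [0.0, -2.0]])
lam = max_lyapunov_exponent(A)
print(round(lam, 2))
```

For genuinely time-varying coefficients the finite-time estimate only approximates the limsup, so `t_final` must be chosen with care.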
Next, let us recall the definition of the comparison principle , which is frequently used below. Consider a scalar differential equation

$$\dot{u} = g(u, t), \quad u(t_0) = u_0, \tag{15}$$

where the function $g(u, t)$ is continuous in $t$ and locally Lipschitz in $u$ for $t \ge t_0$. Suppose that the solution to this equation is $u(t)$. Next, consider a differential inequality

$$D^{+}v(t) \le g(v(t), t), \quad v(t_0) \le u_0, \tag{16}$$

where $D^{+}v$ denotes the upper right-hand derivative of $v$ in $t$. Then, $v(t) \le u(t)$ for $t \ge t_0$.
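The comparison principle can be illustrated numerically. In this minimal sketch (the toy right-hand sides are our own choice) we take $g(u) = -u + u^2$ and a second function $v$ with $\dot v = -2v + v^2 \le g(v)$ for $v \ge 0$; with equal initial values, $v(t)$ must stay below $u(t)$:

```python
g = lambda u: -u + u**2          # right-hand side of the comparison equation

h, steps = 1e-3, 5000            # Euler integration on [0, 5]
u, v = 0.5, 0.5                  # v(0) <= u(0)
for _ in range(steps):
    u += h * g(u)                # u' = g(u)
    v += h * (-2.0 * v + v**2)   # v' = -2v + v^2 <= g(v) for v >= 0
    assert v <= u + 1e-12        # comparison principle: v(t) <= u(t)
print("v(t) <= u(t) along the whole trajectory")
```

The discrete inequality is preserved here because the Euler map $u \mapsto u + h\,g(u)$ is monotone for the chosen step size, mirroring the continuous-time argument.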
Now we present for convenience some conventional definitions of the trapping/stability regions as follows.
Definition 1. A connected and compact set of all initial vectors, , is called a trapping region of equation (1) if condition implies that .
Clearly, this definition implies that  is an invariant set of (1) containing zero.
Definition 2. A connected and open set of all initial vectors, , that includes zero vector, is called a region of stability of the trivial solution to (2) if condition implies that is stable.
Definition 3. A connected and open set of all initial vectors, , that includes zero vector, is called a region of asymptotic stability of the trivial solution to (2) if condition implies that .
1.2. Motivating Example
The first part of this section derives a novel auxiliary equation with and for a simple planar system. The second part infers stability and estimates the stability region of the derived auxiliary equation and extends these inferences to the stability assessment of our planar model (2).
Let us first apply our former methodology  to a simple case, where , . Obviously, in this case , which implies that . Thus,  and  if , which makes our prior estimate of , based on (13), overconservative for large values of . Nonetheless, if  are complex conjugates, then , and our prior inferences hold in this case.
Let us break down our current approach into a sequence of straightforward steps for a simple version of system (1), where , , , , and . Assume also that the matrix  is diagonalizable and that the eigenvalue and eigenvector matrices of  are , , where , , .
Equation (13) can be applied to the stability analysis of such a planar system if (11) implies that , which, in general, is difficult to guarantee in advance. Thus, we outline a different technique recasting the auxiliary equation in the form , where  and .

(1) First, let us write our planar model of system (1) in the eigenbasis of  as follows: , where ,  is the two-dimensional space of complex numbers, , , and .

(2) Next, we rewrite the last equation as follows: , where  is an identity matrix, , , and  is going to be determined subsequently. In turn, let us select a linear subsystem of (19) with the underlined diagonal matrix as . For this subsystem, a diagonal fundamental solution matrix can be written as follows: , which implies that  since . The same reasoning yields that , which shows that , and due to (10), . Thus, if we use (18) as the underlying linear system, then the first term on the right side of the auxiliary equation (13), which is derived for (17), is .

(3) Further application of (13) to (17) brings the second term on the right side of our modified auxiliary equation in the following form: . Then, the use of a standard inequality yields , where  if  and  if . This allows us to write (13) as follows: , where .

(4) Lastly, we select  to maximize the degree of stability of a linear equation, which minimizes the conservatism of our estimates and reduces to the following condition: , since  is a diagonal matrix with equal eigenvalues. Resolving (24) yields , which casts (23) in the following form:
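The effect of passing to the eigenbasis in step (1) can be checked numerically. In the sketch below (the sample matrix with complex-conjugate eigenvalues $-0.5 \pm 2i$ is our own), the propagator is diagonal in the eigenbasis, so its 2-norm equals $e^{\mu t}$ exactly with $\mu = \max \operatorname{Re}\lambda$, while in the original basis it is bounded by $\operatorname{cond}(T)\, e^{\mu t}$:

```python
import numpy as np

# Sample planar matrix (an assumption of this sketch): eigenvalues -0.5 +/- 2i.
A = np.array([[-0.5, 2.0], [-2.0, -0.5]])
lam, T = np.linalg.eig(A)          # A = T diag(lam) T^{-1}
mu = lam.real.max()                # mu = max Re(lambda) = -0.5
Tinv = np.linalg.inv(T)
kappa = np.linalg.norm(T, 2) * np.linalg.norm(Tinv, 2)   # cond_2(T)

for t in np.linspace(0.1, 5.0, 25):
    expAt = (T @ np.diag(np.exp(lam * t)) @ Tinv).real   # e^{At} via the eigenbasis
    # In the eigenbasis the propagator is diagonal, so its 2-norm is exactly
    # exp(mu*t); back in the original basis it is bounded by cond(T)*exp(mu*t).
    assert abs(np.linalg.norm(np.diag(np.exp(lam * t)), 2) - np.exp(mu * t)) < 1e-12
    assert np.linalg.norm(expAt, 2) <= kappa * np.exp(mu * t) + 1e-9
print("eigenbasis propagator norm equals exp(mu*t); original-basis norm is bounded")
```

This is exactly why the eigenbasis form removes the conservatism caused by oscillatory (imaginary) parts of the spectrum: only the real parts enter the scalar bound.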
For further reference, we write a homogeneous counterpart to (25) as follows:
To further simplify the stability analysis of the trivial solution to (26), we are going to develop for this equation its linear, scalar, and thus integrable upper bound. Due to the comparison principle , solutions to such equations resolve the essential boundedness/stability properties of equation (26). In general, the Lipschitz continuity condition (4) can be used to develop such a linear equation. Nonetheless, in our case, we apply a less conservative bound, . Application of the last inequality brings the following linear equation: , where . Note that solutions to (27) can be used to estimate solutions to (26) if .
In turn, to simplify the assessment of stability of (27), we set that , where we assumed that , and .
The last assessment can be drawn through a more general line of reasoning which is applied repeatedly in this paper. In fact, we define a comparison equation to (27) as follows:
Let us show that if and (28) holds. Clearly, under our assumption. Assume, on the contrary, that is the smallest value of for which . Then, (30) holds for , and thus . Hence, if (28) holds and . Consequently, the trivial solutions to (25) and, in turn, to our planar model of equation (2) are asymptotically stable.
Furthermore, solving the equation  yields an estimate of the range of values of  which assure the asymptotic stability of (29), i.e., the values of , where
The last formula leads to the estimation of the stability regions for (26) and the corresponding model of equation (2). Indeed, (28) and, subsequently, (30) hold if . Hence, the last inequality implies that (29) and, in turn, (27) are asymptotically stable under that condition, which, in turn, yields that the trivial solution to (26) is asymptotically stable as well, and a solution to (26) approaches zero as  if . In turn, this condition ensures the asymptotic stability of the trivial solutions to our simplified model of (2) which relates to (26). Hence, due to the second relation in (26), the region of asymptotic stability of the trivial solution to our planar model of (2) includes the interior points of the ellipsoid which is defined as follows: .
2. Modified Auxiliary Equation
This section derives a modified auxiliary equation with  and  for a broad class of nonlinear systems that are frequently found in applications. Solutions to the auxiliary equation bound from above the matching solutions of (1) or (2) under some conditions which we specify below in a theorem that encapsulates our inferences and the underlying assumptions.

(1) At this point, we assume that the average of  exists and is defined as follows: . Additionally, we assume that  is a nonzero and diagonalizable matrix except possibly for some isolated values of . This last condition is met for any generic set of matrices which depends upon a scalar parameter, i.e., such a set of matrices that is structurally stable under small perturbations (see, e.g., , chapter 6, pp. 235–256 for more details). In turn, we set that the complex conjugate eigenvalues of  are , , and the real eigenvalues of  are , , where  and , where  is the set of positive numbers. Additionally, we presume that  and also define a square diagonal matrix  with , , .

(2) Next, we write (1) as follows: , where  is a zero-mean and continuous matrix. Note that the immediate application of (13) to (32), which is based on the fundamental matrix of solutions to the equation , i.e., , frequently fails since in this case  and  if . To avoid this shortcoming, we rewrite (32) in the eigenbasis of  as follows: , where  is the eigenvector matrix of ,  is the -dimensional space of complex numbers, , , and . To reduce the notation, we adopted in (33) that . Subsequently, we rewrite (33) as , where  is defined below. Then, we select as our underlying linear equation the one with the diagonal matrix  and write the fundamental matrix of solutions to this equation as follows: , where  is the identity matrix. Hence, , since . In turn, application of (10) and (11) yields that  and  if  is defined by (35).

(3) Next, adopting (35), we write equation (13) for (34) as follows: , where , ,  is a bounded neighborhood of , and  is continuous in  and  and locally Lipschitz in  with .
Note that this function can be developed through the application of inequality (12) to the function . Hence, (12) in this case takes the following form: . For developing  for polynomial vector fields, see the examples in Sections 1.1 and 1.2 and the more inclusive examples in Section 5, as well as the examples in [13, 14]. As mentioned earlier, inequality (37), for instance, holds if  is a piecewise polynomial function or can be approximated by such a function with a bounded Lagrange-type error term. In the former case, , and in the latter case,  if the error term is bounded in . To streamline further references, we assume in the remainder of this paper that .

(4) In the sequel, we define  using the following condition: , which maximizes the degree of stability of a scalar linear equation, . Since  is a diagonal matrix, , which yields that . Clearly, the last condition yields that  and, consequently,  (see the proof in the appendix of this paper). This allows us to write (36) as follows:

(5) To develop a less conservative version of (39), we set that the matrix , , , and , and rewrite (34) as follows:
The fundamental matrix of solutions for the equation  can again be written as follows: , since  is a diagonal matrix. As before, this implies that , , , and . Lastly, a less conservative counterpart of (39) can be written as follows:
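The construction of steps (1)–(5) can be exercised end to end on a toy system. The sketch below uses our own sample data (the matrices, the quadratic nonlinearity with $\|f(x)\| \le L\|x\|^2$, and all parameter values are assumptions, not taken from the paper): we split $A(t) = \bar{A} + \tilde{A}(t)$ with a zero-mean $\tilde{A}$, pass to the eigenbasis $y = T^{-1}x$, integrate the scalar auxiliary equation $u' = (\mu + b(t))u + c\,u^2$ alongside the full system, and check $\|y(t)\| \le u(t)$:

```python
import numpy as np

# --- sample data (assumptions of this sketch) ---
Abar = np.array([[-1.0, 1.0], [0.0, -2.0]])      # average matrix, eigenvalues -1, -2
M    = 0.2 * np.array([[0.0, 1.0], [1.0, 0.0]])  # A~(t) = sin(t) * M, zero mean
L    = 0.1                                       # ||f(x)|| <= L * ||x||^2
f    = lambda x: L * np.array([x[0]**2, x[0]*x[1]])

lam, T = np.linalg.eig(Abar)
Tinv   = np.linalg.inv(T)
mu     = lam.real.max()                          # mu = max Re(lambda) = -1
c      = L * np.linalg.norm(Tinv, 2) * np.linalg.norm(T, 2)**2  # ||T^{-1} f(T y)|| <= c ||y||^2

h, t = 1e-3, 0.0
x = np.array([0.3, 0.2])
u = np.linalg.norm(Tinv @ x)                     # u(0) = ||y(0)||
for _ in range(10000):                           # Euler integration up to t = 10
    b = abs(np.sin(t)) * np.linalg.norm(Tinv @ M @ T, 2)
    x = x + h * ((Abar + np.sin(t) * M) @ x + f(x))
    u = u + h * ((mu + b) * u + c * u**2)        # scalar auxiliary equation
    t += h
    assert np.linalg.norm(Tinv @ x) <= u + 1e-9  # ||y(t)|| <= u(t)
print("auxiliary solution bounds ||y(t)|| on [0, 10]")
```

Note how only $\mu$, the norm of the transformed zero-mean part, and the scalar majorant enter the auxiliary equation; no fundamental matrix of the full time-varying system is ever simulated.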
Next, we encapsulate our derivations and the underlying assumptions in the following theorem.
Theorem 1. Assume that  is a continuous matrix and that the average of , i.e., the matrix , exists and is diagonalizable except possibly for some isolated values of . Also, assume that the function  is continuous, the function  is continuous in both variables, and inequality (37) holds with . Lastly, we assume that equations (1), (39), and (41) possess unique solutions for , , and . Then, the norms of solutions to equation (1) are bounded by the matching solutions to the scalar equations (39) or (41) as follows:
Proof. The proof of this statement directly follows from the above assumptions and the inferences made earlier in this section. In fact, our first set of conditions allows us to set up equations (32)–(35). The next set of conditions implies inequality (37) with  and, in turn, equations (36), (39), and (41). The last assumption assures that the functions  and  exist for  and the specified initial values.
Note that the above inferences naturally filter out the effect of the imaginary components of the matrix  on the evolution of . This equates the degree of stability of the equation , which is measured by the maximal real part of the eigenvalues of , with that of the equation . Furthermore, (41) is now defined explicitly, whereas the definition of its counterpart given in  involves the numerical evaluation of , , and .
Although the current version of the auxiliary equation gains computational efficiency and widens the scope of relevant applications, its former counterpart can provide sharper estimates in the common application domain. In fact, our former technique  simulates  for equation (3), which encapsulates the effects of the matrices  and  on the time histories of the corresponding solutions. In contrast, our current technique treats  as a conservative perturbation.
To streamline further referencing, we write a homogeneous counterpart of (41) as follows: . The analysis of the boundedness/stability of solutions to (41) or (43) is simplified under a condition that yields their linear upper bounds, which are defined using the following inequality: , where we assume that the function  is continuous in  and bounded in . For instance, (44) can be developed via the application of the Lipschitz continuity condition to a scalar nonnegative function . Note that the definition of  is greatly simplified in this scalar case.
Furthermore, the definition of  is also simplified if  is a polynomial or piecewise polynomial function which, in the former case, can be written as follows: , where we assume that . In this case,  since .
Clearly, in some cases, may merely depend upon or be a scalar parameter which, however, complies with our further inferences that prompt closed-form boundedness/stability criteria and estimation of boundedness/stability regions for (1) and (2).
3. Boundedness/Stability of Nonautonomous Nonlinear Systems via Application of Modified Linear Auxiliary Equation
To shorten the notation, we adopted in (45), and frequently, but not always, will set below, that .
Thus, under the conditions of Theorem 1, (45) admits the following solution: , where the function  is continuous in  and , , , and . As before, we frequently, but not always, are going to use the reduced notations, i.e.,  and . Clearly,  and . Thus, the last inequality yields that  as well. We are going to use these inequalities in the proof of Lemma 2.
3.1. Stability Criteria and Estimation of Stability Region
To simplify the subsequent references, we will write a homogeneous counterpart to (45) as follows:
Necessary and sufficient conditions for the stability or asymptotic stability of a linear system directly follow from assessing the behavior of its fundamental solution matrix (see, e.g., a recent paper , Lemma 1, and additional references therein). For (47), with a function  that is continuous in  and bounded in , these conditions can be formulated as follows. Equation (47) is stable if and only if (48) holds and asymptotically stable if and only if (49) holds, where  and  are some values of  for which either (48) or (49) holds.
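For a scalar linear equation $u' = p(t)u$ (the generic form of such a homogeneous equation), the solution is $u(t) = u_0 \exp(P(t))$ with $P(t) = \int_{t_0}^{t} p(s)\,ds$, so stability amounts to $P$ being bounded above and asymptotic stability to $P \to -\infty$. The sketch below (the coefficient $p$ is our own sample) checks both conditions numerically:

```python
import numpy as np

# u' = p(t) u  =>  u(t) = u0 * exp(P(t)),  P(t) = integral of p on [0, t].
# Stability:            sup_t P(t) < infinity.
# Asymptotic stability: P(t) -> -infinity as t -> infinity.
p = lambda t: -0.5 + np.cos(t)       # sample coefficient; P(t) = -0.5*t + sin(t)

t = np.linspace(0.0, 200.0, 200001)
# cumulative trapezoidal integral of p
P = np.concatenate(([0.0], np.cumsum((p(t[1:]) + p(t[:-1])) / 2 * np.diff(t))))

bounded_above = P.max() < 2.0        # sup_t P(t) is finite (about 0.34 here)
tends_to_minus_inf = P[-1] < -50.0   # P(200) is about -100
print(bounded_above, tends_to_minus_inf)
```

For this sample both flags are true, so the trivial solution of the scalar equation is asymptotically stable despite the oscillating coefficient.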
In turn, due to the comparison principle, the stability of the linear scalar equation (47) implies the stability of the trivial solutions to the nonlinear equation (43) and, in turn, to (2) if , since compliance with this condition enables the linearization of (43) via the application of (44). The next lemma provides an upper bound on  which, subsequently, aids the estimation of  and, in turn, .
Theorem 2. Assume that the assumptions of Theorem 1 are met,  is a function continuous in  and bounded in , and equation (47) is stable for some . Then, the trivial solution to (2) is stable and inequality (42) takes the form (50), where , , and  are solutions to equations (47), (43), and (2), respectively, and the region of stability of the trivial solution to (2) includes all values of  in the interior of the ellipsoid which is defined as follows:
Proof. Let us show that, under the conditions of this theorem,  if . Suppose, on the contrary, that  is the smallest value of  such that  under the prior conditions. Then, (44) and, thus, equation (47) hold for , which, due to the comparison principle  and (50), implies that . This contradiction shows that , which, in turn, enables the linearization of (43) by (44) and brings a linear equation (47) that enables our inferences. In turn, the application of the comparison principle prompts that , which implies that the trivial solution to (43) is stable.
The last inequality lets us write (42) as (51), which shows that the stability of (47) implies the stability of the trivial solutions to (43) and (2) under the conditions of this theorem.
Let us now estimate the regions of stability of the trivial solutions to (2) and (43). The solution to (43) is stable for all values of  which are constrained by the following inequality: . Subsequently, since , the region of stability of the trivial solution to (2) includes all values of  meeting condition (52).
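Estimates of this kind describe an ellipsoidal set of admissible initial vectors of the form $\{x_0 : \|T^{-1}x_0\| \le r\}$, where $T$ is the eigenvector matrix and $r$ the radius bound; the set shape and all names below are our own illustrative assumptions. A membership check is then a one-liner:

```python
import numpy as np

def in_ellipsoidal_region(x0, Tinv, r):
    """Check x0 against an ellipsoidal stability-region estimate
    of the assumed form {x : ||T^{-1} x|| <= r}."""
    return np.linalg.norm(Tinv @ x0) <= r

# Sample data (assumptions of this sketch): axis-aligned ellipse with
# semi-axes r and r/2, i.e., T^{-1} = diag(1, 2).
Tinv = np.diag([1.0, 2.0])
r = 1.0
print(in_ellipsoidal_region(np.array([0.5, 0.2]), Tinv, r))   # inside
print(in_ellipsoidal_region(np.array([0.0, 0.8]), Tinv, r))   # outside
```

Such a check is useful in simulations for screening candidate initial conditions before integrating the full system.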
Let us recall that (52) depends upon —a characteristic property of time-dependent dynamic systems.
Example 1. Let us apply inequality (52) to the estimation of the stability basin of the planar, nonlinear, and nonautonomous equation considered in Section 1.2. The homogeneous counterpart of this planar equation corresponds to the nonlinear auxiliary equation (26), which, in turn, relates to its linear complement (27). For this last equation, we derived in Section 1.2 that , which implies that .
Hence, setting in (52) that