Abstract

Estimation of solution norms and stability for time-dependent nonlinear systems is ubiquitous in numerous engineering, natural science, and control problems. Yet practically valuable results are rare in this area. This paper develops a novel approach that bounds the solution norms, derives the corresponding stability criteria, and estimates the trapping/stability regions for some nonautonomous and nonlinear systems arising in various application domains. Our inferences rest on deriving a scalar differential inequality for the norms of solutions to the initial systems. Application of the Lipschitz inequality linearizes the associated auxiliary differential equation and yields both upper bounds for the norms of solutions and the relevant stability criteria. To refine these inferences, we introduce a nonlinear extension of the Lipschitz inequality, which improves the developed bounds and enables estimation of the stability/trapping regions for the corresponding systems. Finally, we confirm the theoretical results in representative simulations.

1. Introduction

This paper derives a scalar differential inequality and a corresponding first-order nonlinear auxiliary equation that bound in norm the solutions of the nonautonomous nonlinear system (1), where the functions , and the matrix are continuous, and , and is a scalar. Note also that throughout this paper the norm symbol stands for the 2-norm unless indicated otherwise. To simplify notation, we will write , where is a solution to the initial value problem for (1), i.e., . We assume below that is uniquely defined for and , where is a neighborhood of containing . The pertinent conditions can be found, e.g., in [1, 2].
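Since the displayed equations are referenced by number throughout, it may help to keep one representative structure in mind. The block below is only an assumed illustration, consistent with the variation-of-parameters derivation in Section 2 (a time-varying linear part A(t), a nonlinearity f(t,x), a forcing term g(t), and a scalar ε); it is not a quotation of the paper's equations (1) and (2), and the tags (1') and (2') are used to avoid confusion with the paper's own numbering.

```latex
% Illustrative (assumed) forms of the nonhomogeneous system and its homogeneous counterpart:
\begin{align}
  \dot{x} &= A(t)\,x + f(t,x) + \varepsilon\, g(t), \qquad x(t_0) = x_0, \tag{1'}\\
  \dot{x} &= A(t)\,x + f(t,x). \tag{2'}
\end{align}
```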

We also examine the solutions to the homogeneous counterpart of (1):

Development of efficient stability criteria for the trivial solution to (2) is essential in numerous applied and control problems. For instance, such criteria enable the design and performance analysis of robust controllers and observers [3].

There are two main approaches to this problem: the Lyapunov function method (see, for instance, [2, 3]) and the first approximation methodology (see, e.g., reviews [4, 5] as well as [6, 7] for additional references and historical perspectives). The former approach is widespread in the control literature; see [2, 8–16] and additional references therein. However, adequate Lyapunov functions are rare for time-dependent and nonlinear systems.

The latter approach delivers a sufficient stability criterion under the following conditions; see [6, 7]. The first is the Lipschitz condition (3), where is a bounded subset in containing and the function is continuous, and . The second condition,

bounds the growth rate of the transition matrix, , where is the fundamental solution matrix of the linearized equation (2):

Inequality (4) provides necessary and sufficient conditions for asymptotic/exponential stability of (5); see, e.g., [3, 6]. Consequently, it was shown that the trivial solution to (2) is exponentially stable if (3) and (4), and the following condition,

holds [6, 7]. A somewhat more flexible condition on the growth of was introduced in [5]; see also [4]. It is given by (7), where is an integrable function. Clearly, (7) reduces to (4) for . In turn, (3) and (7) provide asymptotic stability of the trivial solution to (2) if [4]

While the existence of (4) is acknowledged under some broad conditions [6], to our knowledge, there have been no attempts to adequately define the function in (7) and to apply either criterion to the stability analysis of practically relevant systems. Furthermore, it was shown, e.g., in [17], that the time-histories of different estimates of the Euclidean norm of the second-order fundamental matrix, i.e., , , can diverge from each other and from the exact values of . This raises concerns about the practical value of the above-listed sufficient stability criteria. Furthermore, in Section 3, we show that (4) and (7) can be viewed as conservative versions of the estimate of the norm of the transition matrix that follows from our approach.

An attempt to avoid the use of prior bounds on in the stability analysis of (2) was undertaken in [18]. However, verification of the developed stability conditions for relatively complex systems can present a challenging task for this approach as well.

The problem of estimating the norms of solutions to (1) subject to (3) and (4) was reviewed in [6, 7].

The problem of estimating the states of linear and nonlinear systems was considered in [19–23].

The contribution of this paper is twofold. Firstly, it is methodological. This paper derives a novel scalar differential inequality for the norms of solutions to a practically important class of systems governed by equations (1) or (2), which collapses the dimension of the original estimation problem to one. Due to the comparison principle [3], solutions to this inequality are bounded from above by the solutions to the auxiliary scalar first-order linear or nonlinear equations with variable coefficients, which are devised and analyzed in this paper. The linear auxiliary equation is obtained via application of the Lipschitz condition, whereas the nonlinear auxiliary equation is devised through application of a nonlinear version of the Lipschitz condition, which is also derived in this paper. The second contribution is the application of the conceived methodology to various local and nonlocal estimation and stability problems. This includes the use of the linear auxiliary equation in derivations of relaxed and more general local boundedness and stability criteria, as well as the application of the nonlinear auxiliary equation to the estimation of solution bounds and trapping/stability regions of solutions to the original systems. Our approach bypasses the Lyapunov function method. The conceived approach enhances stability criteria (i.e., (6) and (8)) that were devised in the context of Lyapunov's first approximation methodology and develops novel stability and boundedness criteria.

Our inferences are validated in simulations of a Van der Pol-like model, which includes a time-dependent linear block and an oscillatory external force.

This paper is organized as follows. The next section derives the pivotal differential inequality and the pertinent auxiliary equation. The subsequent section linearizes the auxiliary equation using the Lipschitz inequality and develops the corresponding solution bounds and stability criteria. Section 4 introduces a nonlinear extension of the Lipschitz inequality and develops its various applications, Section 5 presents the simulation results, and Section 6 concludes this study.

2. Differential Inequality for Solution Norms

This section derives the pivotal scalar differential inequality for the 2-norms of solutions to (1) or (2), which is analyzed subsequently in this paper. Note that an attempt to derive directly from (1) a scalar differential equation governing the evolution of only fails. In fact, if (1) is written in spherical coordinates, then the equation for the radial variable, , also includes the angle variables, which cannot be discounted in general.

Instead, for a broad class of nonlinear systems, we derive below the initial value problem comprising a scalar differential inequality for and the matching initial condition for this function. Using the comparison principle [3], we bound a solution to this problem from above by the matching solution to the associated initial value problem for the auxiliary scalar differential equation. Finally, this last solution bounds in norm from above the solution to (1) with a consistent initial value. This allows us to collapse the dimension and drastically simplify the problem of estimating the time-histories of the actual norms of solutions to equations (1) or (2).

In fact, the application of variation of parameters lets us derive from (1) equation (9) (see, e.g., [3]), where is frequently normalized to satisfy the condition , where is the identity matrix. In Section 5, we present a normalization of , which is more natural for our studies and, hence, used subsequently in our simulations. Presently, we only assume that . The last equation leads to the following inequality:

Next, we attempt to match the solutions to (10) with the solution to the initial value problem for a scalar inequality that can be written as (11), where is a continuous function used to mimic in (10), is the upper right-hand Dini derivative [3], the functions and map to and to , respectively, is a nonnegative scalar, and the function is a solution to (9). Note that the functions and , and the initial value , are uniquely defined below via matching the solutions to (11) with the right-hand side of (10).

Due to the comparison principle [3], pp. 102–104, solutions of inequality (11) are bounded from above by solutions of the matching differential equation:

Hence, .

Then, the application of variation of parameters to (12) yields (13), where and .

Consequently, we determine , , and by matching the right-hand sides of (10) and (13). Comparison of the first terms in the right-hand sides of (10) and (13), i.e., and , yields

Next, from (14), . Equating the last terms in the right-hand sides of (10) and (13) and multiplying and dividing the former function by yields

The last relation yields that is the running condition number of , where and are the maximal and minimal singular values of . Note that and , since is a nonsingular matrix, and is continuous since both and are continuous functions due to our initial condition on the matrix .
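As an illustration of how the running condition number can be obtained numerically, the following sketch integrates the matrix equation Φ' = A(t)Φ, Φ(0) = I, for a hypothetical 2×2 time-varying matrix and evaluates the ratio of the extreme singular values by SVD. The matrix A(t), the time grid, and the tolerances are placeholders, not values taken from this paper.

```python
# Minimal sketch (assumed A(t)): running condition number of the fundamental matrix.
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # hypothetical time-varying linear part
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.3*np.sin(t), -0.5]])

def phi_rhs(t, phi_flat):
    Phi = phi_flat.reshape(2, 2)
    return (A(t) @ Phi).ravel()

t_grid = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(phi_rhs, (t_grid[0], t_grid[-1]), np.eye(2).ravel(),
                t_eval=t_grid, rtol=1e-8, atol=1e-10)

kappa = []
for col in sol.y.T:                      # each column of sol.y is Phi(t) flattened
    s = np.linalg.svd(col.reshape(2, 2), compute_uv=False)
    kappa.append(s[0] / s[-1])           # sigma_max / sigma_min
kappa = np.array(kappa)                  # running condition number kappa(t)
```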

We will assume throughout this paper that is unique . It follows from (14) that is continuous since is unique.

Hence, our definition of and implies that solutions to (10) and (11) corresponding to the same are equal to each other and that , where is a solution to (1) or (2).

Next, multiplication of (14) by and the use of a standard norm inequality let us rewrite (14) in the following form, where we use that since . Next, it follows from (10) that and from (15) that . Hence,

Subsequent multiplication of (18) by yields the relation below, where is a solution to the linear equation (5).

Next, we examine the relation between the above formulas and assumptions (4) and (7), which were previously used in the stability theory of the corresponding systems. It follows from (18) that the scalars and in (4) can be interpreted as follows, where we assume that both and . Furthermore, the scalar in (7) can also be interpreted as above, whereas the unknown function can be interpreted as . Hence, our approach allows us to explicate the unspecified parameters and function in (4) and (7) and to disclose the conservative nature of these inequalities. Consequently, we show below that stability conditions (6) and (8) can be interpreted as conservative versions of those derived in Section 3 of this paper; see Remarks 1 and 2.

To write (12) in the standard form, we introduce a nonlinear extension of the Lipschitz inequality, (22), where is a continuous function of both variables and , and is a bounded subset in containing . Apparently, for polynomial and some other vector fields, ; see Section 4.1, which derives (22) for some sets of vector functions. Clearly, (22) reduces to (3) if is linear in .

Next, let us assume that , where is an open ball centered at with a conceivably small radius . Then, due to continuity in , , where is a conceivably small value, is a solution to (2), and is an open ball centered at with radius . In turn, the solutions to the nonhomogeneous equation (1) if both , and is a sufficiently small value. Clearly, the latter condition is an implication of the continuity of solutions to (1) in both and the parameter .

Under this last condition, application of (22) to (12) yields the differential inequality (23), where the continuous , , and is a sufficiently small value. Clearly, if , then the condition can be voided, and (23) holds and .

Note also that (23) is defined , whereas its relation to (12) and, in turn, to (1) is established only for .

In turn, due to the comparison principle [3], solutions of (23) are bounded by the consistent solutions to the associated differential equation (24), where . Let us assume that the initial value problem for (24) possesses a unique solution for and denote . Note that, in general, can be infinite. In Section 4, we assume that , where is a ball with radius . This condition implies that solutions to (24) bound in norm from above the solutions to (12) and, in turn, the solutions to (1) for .
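A minimal sketch of how an auxiliary scalar equation of this type can be handled numerically is given below. The right-hand side, r' = −a(t) r + b(t) L(r) r + ε c(t), and all coefficient functions are placeholders assumed for illustration; in the paper the coefficients of (24) are defined through the running condition number, the nonlinear Lipschitz function, and the norms of the forcing.

```python
# Sketch: integrate an assumed instance of the scalar auxiliary equation (24).
import numpy as np
from scipy.integrate import solve_ivp

def a(t):  return 0.5 + 0.1*np.cos(t)        # placeholder decay-rate coefficient
def b(t):  return 1.2                        # placeholder condition-number factor
def L(r):  return 0.3*r**2                   # placeholder nonlinear Lipschitz function L(r)
def c(t):  return np.abs(np.sin(2.0*t))      # placeholder norm of the forcing
eps = 0.05

def rhs(t, r):
    return -a(t)*r + b(t)*L(r)*r + eps*c(t)

r0 = 0.4                                     # must bound the initial norm ||x0|| from above
bound = solve_ivp(rhs, (0.0, 30.0), [r0], dense_output=True, rtol=1e-8)
# bound.sol(t)[0] is the scalar upper estimate for ||x(t)|| while the comparison
# argument applies, i.e., while the solution stays in the admissible region.
```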

In the following section, we use (3) instead of (22) to linearize (24) in the neighborhood . This subsequently leads to a scalar, linear, and integrable auxiliary equation, which is defined, in general, on a short time interval. Next, we formulate conditions assuring that the solutions of the corresponding equation remain in and derive some explicit upper bounds for solutions to equation (1) and the corresponding stability criteria for the trivial solution to (2). Finally, we show that stability conditions (6) and (8) can be regarded as conservative counterparts of those outlined below.

3. Linearization of Auxiliary Equation via Application of Lipschitz Inequality

In analogy with Section 2, we assume in this section that , where is an open ball centered at with a conceivably small radius and is a sufficiently small number. Then, due to continuity in , , where is a solution to (1), is a sufficiently small number, and is an open ball centered at with radius . Next, application of (3) to (11) and use of the comparison principle [3] let us replace (24) by a scalar linear equation (25), where the continuous functions , . Let us also define . Clearly, is infinite if the homogeneous counterpart of (25) is unstable. Some inferences utilizing this quantity are drawn in Remark 4.

Note that due to our previous assumptions, , and are continuous functions, which implies that (25) possesses a unique solution for . However, this solution may not bound in norm the corresponding solution to (1) if it exits for some . These inferences are given as follows.

Theorem 1. Assume that , , , and are conceivably sufficiently small, , inequality (3) holds, and, due to our previous assumptions, is unique and is continuous. Then, (26) holds, where is a solution to (1), is the solution of (25) given by (27) and (28), and the transition function .

Proof. Clearly, if , then, due to continuity in , , and solutions to (1) are bounded in norm from above by the consistent solutions to (25) on the corresponding time interval. The latter linear equation admits a unique solution, which is defined by (26)–(28). Due to the continuity of the underlying functions, the integrals in the last formulas are defined for .
Hence, the problems of assessing the asymptotic/exponential stability of the trivial solution to (2) or the boundedness of solutions to (1) are simplified and reduce to the evaluation of the matching problems for the auxiliary linear first-order homogeneous/nonhomogeneous equations and to assuring that the solutions to (1) or (2) satisfy . The latter, in turn, can be inferred under some conditions listed below.
Note that the necessary and sufficient conditions for various types of stability of a scalar linear equation are known, e.g., [6], and were recently reviewed in [23], where additional references can be found. Application of these conditions to our first-order linear auxiliary equation facilitates the development of matching stability criteria for the trivial solution to the nonlinear equation (2). Below we present only some of the most explicit boundedness/stability conditions for equations (1) and (2), which directly follow from [22–24].
Note that the subsequent Corollaries 1–4 assume that the conditions of Theorem 1 and formulas (26)–(28) hold and include only the additional conditions that are essential to the specific statement listed below.
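Before turning to the corollaries, the following sketch illustrates how an explicit bound of the Theorem 1 type can be evaluated on a grid. It assumes the generic structure u' = p(t)u + q(t), u(0) = u0, of a scalar linear auxiliary equation, with the standard closed form u(t) = e^{∫₀ᵗ p} u0 + ∫₀ᵗ e^{∫ₛᵗ p} q(s) ds; the coefficient functions and the initial value are placeholders.

```python
# Sketch: closed-form solution of a scalar linear auxiliary equation u' = p(t)u + q(t).
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 30.0, 3001)
p = -0.4 + 0.2*np.sin(t)          # placeholder coefficient p(t)
q = 0.05*np.abs(np.cos(3.0*t))    # placeholder forcing term q(t)
u0 = 0.3

P = cumulative_trapezoid(p, t, initial=0.0)        # P(t) = int_0^t p(s) ds
transition = np.exp(P)                             # transition function e^{P(t)}
# convolution term: int_0^t e^{P(t) - P(s)} q(s) ds
inner = cumulative_trapezoid(q*np.exp(-P), t, initial=0.0)
u = transition*u0 + transition*inner               # candidate upper bound for ||x(t)||
```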

Corollary 1. Assume that , , and the equality in the last formula can be attained only for some isolated values of . Then, the trivial solution to (2) is asymptotically stable.

Proof. In fact, since and, due to (27), for sufficiently small . Differentiating (27) in implies that for either or for some isolated values of , which yields that monotonically with . Hence, and can be made arbitrarily small for sufficiently small .
Next, application of (26) yields that . Then, due to the continuity of in , (26) can be further applied for , where is a conceivably small number. In turn, . Hence, (26) can be further extended for with conceivably small .
Let us show that repetition of these steps yields that if . In fact, if, on the contrary, , then and belongs to the boundary of . However, due to the continuity of in , . This contradiction implies that if . Consequently, our last relation yields that .

Corollary 2. Assume that . Then, the trivial solution of (2) is exponentially stable. If, in addition, both and are sufficiently small and , then the corresponding solutions to (1) are bounded in norm and .

Proof. Assume firstly that and let . Due to Theorem 1, (26) holds, . Then application of (14) and replacement of by in (27) yields that . Next, we recall that and since is continuous in . Thus, if . Due to the continuity of in , this last inequality lets us extend the application of (26) for with sufficiently small and . Hence, . Replication of these steps yields that . Next, following the steps used in Corollary 1, we infer that , which, in turn, yields that exponentially with and proves the first part of this statement.
Let us assume next that and that both and are sufficiently small. Due to this assumption, . Thus, and .
Next, due to the first part of this statement, and . This implies that , where can be made arbitrarily small by the appropriate choice of and .
Henceforth, due to Theorem 1, for sufficiently small and , application of (26) implies that . Thus, due to the continuity of in , the application of (26) can be extended for with sufficiently small . Consequently, and can be made arbitrarily small if both and are sufficiently small. As in Corollary 1, repetition of these steps implies that with and, consequently, for sufficiently small and .

Remark 1. Clearly, stability condition (6) can be considered as a conservative version of the conditions of the last statement. The former can be derived from the latter condition by application of (21) and setting and . In fact, exponential stability of the trivial solution to (2) is assured if , but a somewhat more conservative condition implies uniform exponential stability of the trivial solution to (2). Additionally, the condition of the above statement is evoked only for . This discards the behavior of solutions on the initial time interval, where they can diverge from the fixed solution, a common thesis in stability theory.
To formulate less conservative stability criteria, we evoke the definitions of the characteristic and Lyapunov exponents, which determine the fate of solutions to (2) or (5) if ; see, e.g., [4, 20]. The characteristic exponents assess the rate of exponential growth/decay of if . For a linear system (5), the Lyapunov exponents are defined as , where are the singular values of .
Let us also recall that, for linear systems, the maximal characteristic and Lyapunov exponents are matched; see, e.g., [4].
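A standard numerical estimate of the maximal Lyapunov exponent of the linear part is λ ≈ (1/t) log σ_max(Φ(t)) for large t. The sketch below reuses the fundamental-matrix integration from the earlier sketch; A(t) and the horizon are placeholders, not quantities from this paper.

```python
# Sketch: estimate the maximal Lyapunov exponent of x' = A(t)x from Phi(t).
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.3*np.sin(t), -0.5]])   # placeholder A(t)

def phi_rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

T = 200.0                                              # large horizon
sol = solve_ivp(phi_rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-8, atol=1e-10)
Phi_T = sol.y[:, -1].reshape(2, 2)
sigma_max = np.linalg.svd(Phi_T, compute_uv=False)[0]
lyap_max = np.log(sigma_max) / T                       # approximates the maximal exponent
```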
Firstly, we notice the following relation, where is the maximal Lyapunov exponent of solutions to (5). Hence, (5) is asymptotically stable if ; see, e.g., [24], p. 94.
Next, using (27), we calculate the characteristic exponent of as follows: Consequently, we set , where is a nonlinear correction to the maximal Lyapunov exponent of (5). Afterward, we state the following.

Corollary 3. Assume that . Then the trivial solution to (2) is asymptotically stable.

Proof. In fact, it follows from (27) that . Due to our assumption, and is an arbitrarily small value; see, e.g., [24], pp. 18 and 93-94. Next, since , and exponentially with . As in the first part of Corollary 2, (26) implies that and can be made arbitrarily small for sufficiently small . Due to the continuity of in , we can extend the application of (26) to the adjacent time interval , which, in turn, implies that . Replication of the arguments used in the first part of Corollary 2 yields that and , which, in turn, yields that exponentially with .

Remark 2. Condition (8) can be viewed as a conservative version of the above statement if we assume that , and . In this case, the left-hand side of (8) presents a conservative upper bound for the maximal characteristic exponent of (2). Corollary 3 enhances this bound and discloses its underlying logic.
Let us assume next that and is given by (24) and . Then, , where both and are defined above; see [24], p. 101. In turn, we assume below that . This yields the following.

Corollary 4. Assume that , and both and are sufficiently small values, and . Then, solutions of (1) are bounded in norm and , where is a solution to (1), , , is an arbitrarily small value, and is a constant.

Proof. In fact, due to (28), and . Next, due to both Corollary 3 and the analogy with the second part of Corollary 2, we infer that , where the scalar can be made arbitrarily small for sufficiently small and . Henceforth, due to (26), . Thus, due to the continuity of in , the application of (26) can be extended for with sufficiently small . As in the second part of Corollary 2, repetition of such extensions yields that for sufficiently small and .

Remark 3. We notice that the application of the stability criteria developed in [19] for a scalar linear system to our auxiliary equation (25) might lead to somewhat less conservative stability criteria for the nonlinear equation (2). Such augmentations of the above statements are left outside the scope of this paper.

Remark 4. The proofs of Corollaries 1–4 are simplified under a rather more conservative condition on the solutions to (25), i.e., , where is the ball with radius . In turn, this condition implies that the solutions to (1) with corresponding initial vectors remain in . Note that a similar condition facilitates our inferences in Section 4.2.

4. Nonlinear Extension of Lipschitz Inequality and Its Applications

4.1. Extended Lipschitz Inequality

Though the application of the Lipschitz inequality is widespread in stability and control theories, e.g., [3–5, 13, 14], its use frequently leads to overconservative inferences, which also involve the dependence of the Lipschitz constant upon the size of the pertinent neighborhood, i.e., . A rigorous assessment of this last relation can present a challenging task, which is frequently avoided. Yet, this can affect the accuracy of the pertinent results. Additionally, admission of (3) linearizes (12) and impairs the representation of intrinsically nonlocal nonlinear phenomena such as the estimation of trapping/stability regions for the corresponding systems.

To temper these problems, we introduce in this paper a nonlinear extension of the Lipschitz inequality, i.e., (22). In principle, a relatively conservative form of (22) can be readily derived for various commonly used functions. For instance, (22) converts to a global inequality, i.e., , for polynomial vector fields or for those that can be presented as a vector Taylor polynomial with a globally bounded Lagrange error term. In these cases, (22) can be attained in a more conservative but polynomial form, e.g., by successive applications of the following inequalities: and , where is the -th component of the vector and is the absolute value.

For instance, let and ; then .

If the error term in the polynomial approximation of is bounded for , then (22) is validated in the same neighborhood. Yet, such a nonlinear inequality frequently appears to be less conservative than (3) in extended neighborhoods of and allows a better representation of the underlying behavior of nonlinear systems.
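To make the polynomial construction concrete, the following sketch uses a hypothetical cubic nonlinearity of Van der Pol type, f(x) = (0, −x1²x2)ᵀ, and assumes that (22) is read in the form ‖f(t,x)‖ ≤ L(t,‖x‖)‖x‖ (the precise form used in the paper is given by (22) itself). Component-wise application of |x_i| ≤ ‖x‖ then yields ‖f(x)‖ ≤ ‖x‖²·‖x‖, i.e., L(r) = r².

```python
# Sketch: a polynomial extended Lipschitz bound for a hypothetical cubic nonlinearity
# f(x) = (0, -x1**2 * x2), assuming (22) is read as ||f(t,x)|| <= L(t,||x||)*||x||.
import numpy as np

def f(x):
    return np.array([0.0, -x[0]**2 * x[1]])

def L(r):
    # ||f(x)|| = |x1|**2 * |x2| <= ||x||**2 * ||x||; factoring out the trailing ||x||
    # leaves L(r) = r**2 as the nonlinear Lipschitz function, valid globally.
    return r**2

# spot check on random points
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    r = np.linalg.norm(x)
    assert np.linalg.norm(f(x)) <= L(r)*r + 1e-12
```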

4.2. Solution Bounds and Estimation of Trapping/Stability Regions

This section bounds the norms of solutions to either (1) or (2) and estimates the trapping/stability regions for these equations, respectively. Frequently used definitions of these sets of initial vectors are presented below for convenience.

Definition 1. A compact set of initial vectors that includes the zero vector is called a trapping region for (1) if the condition implies that .

Definition 2. An open set of initial vectors that includes the zero vector is called a stability region of the trivial solution to (2) if the condition implies that .
Utility of the extended Lipschitz inequality (22) frequently sharpens the estimates of the norms of solutions and lessens their dependence upon the size of the pertinent neighborhood, , but leads to the analysis of the solution to the initial value problem for a nonlinear scalar equation with variable coefficients, i.e., (24), which has closed-form solutions only in some special cases, e.g., if (1) or (2) is an autonomous system. Still, the qualitative analysis and numerical simulation of solutions to a scalar equation are significantly simpler and offer compelling inferences on the behavior of solutions to the multidimensional systems (1) or (2).
Note that the last two terms in the right-hand side of (24) are nonnegative, whereas can be either positive or negative or switch sign for certain values of . This will be used in the further analysis of (24).
For this reason, using solutions to the scalar equation (24) for the estimation of the trapping/stability regions for the multidimensional equations (1) or (2) requires relating the one-dimensional and -dimensional initial data sets for the corresponding equations. To this end, we define a closed set of initial vectors for either (1) or (2) as follows, where the set is bounded by the ellipsoid , which is centered at . Let the open set . Due to (19), , where is a ball with radius . Note also that , where is a ball with radius . This leads to the following.

Theorem 2. Assume that , equation (24) possesses a unique and bounded solution for , and , where is the ball with radius ; see Section 2 for the definition of . Then, (33) holds, where and are solutions to either (1) or (2) and to (24), respectively.

Proof. Clearly, since , the inequality with holds for . Assume now that . Then, , since, due to uniqueness, the solution curves of the scalar equation (24) do not intersect, which implies (33).

Remark 5. It follows from Theorem 2 that increases in . This simplifies simulation of (24).
Inequality (33) enables numerical estimation of the trapping/stability regions for (24), which, in turn, leads to estimation of the corresponding regions for the systems (1) or (2).
Note that, in the subsequent statement, we assume without repetition that conditions of Theorem 2 hold and include only the additional conditions related to this statement.

Corollary 5. Assume that solutions to (24) are subjected to one of the following conditions:
(1) and for some . Then, the trivial solution to (2) is asymptotically stable and is enclosed in its stability region.
(2) and . Then, , where is a solution to (1), i.e., is enclosed in the trapping region of solutions to (1).

Proof. This corollary directly follows from Theorem 2. In fact, assume firstly that and . Then, since and , the inequality holds for and, due to our assumption, yields .
Next, let . Then, .
Clearly, the best estimates of the trapping/stability regions are given by the maximal admissible values of . These values can be readily assessed in simulations of the scalar equation (24), especially since is an increasing function of ; see the sketch below.
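A minimal sketch of this numerical assessment follows, using the same placeholder coefficients as in the earlier sketch of (24): sweep the scalar initial value, integrate, and record the largest value for which the solution stays below a chosen threshold over a finite horizon. The coefficients, threshold, and horizon are assumptions for illustration only.

```python
# Sketch: estimate the maximal admissible r0 for an assumed instance of (24) by simulation.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, r, eps=0.05):
    a = 0.5 + 0.1*np.cos(t)              # placeholder coefficients
    L = 0.3*r**2
    c = np.abs(np.sin(2.0*t))
    return -a*r + 1.2*L*r + eps*c

def stays_bounded(r0, horizon=50.0, blow_up=10.0):
    sol = solve_ivp(rhs, (0.0, horizon), [r0], rtol=1e-7, max_step=0.1)
    return sol.success and np.max(sol.y) < blow_up

r_max = 0.0
for r0 in np.linspace(0.05, 3.0, 60):    # coarse ascending sweep; refine by bisection if needed
    if stays_bounded(r0):
        r_max = r0                       # largest initial value found to remain bounded
print("estimated admissible r0:", r_max)
```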
Below, we formulate two complementary analytical approaches for estimating such values of , which also enhance comprehension of the qualitative structure of solutions to (24). Consequently, the next two subsections outline techniques that bound or approximate (24) by its autonomous and integrable counterparts. The first replaces all time-dependent coefficients in (24) by their superior bounds. This yields an autonomous and integrable counterpart of (24) whose solutions bound from above the solutions of (24).
The second technique averages the time-dependent coefficients in (24), which yields an autonomous equation approximating (24) under certain conditions. Both techniques lead to explicit solution bounds and boundedness/stability criteria and allow estimation of the trapping/stability regions for both autonomous and time-dependent systems.

4.3. Reduction of Auxiliary Equation to Autonomous Form

Taking the superior bounds of all time-dependent functions in the right-hand side of (24) yields its autonomous and integrable counterpart (34), where the coefficients are defined by (35) and . Let us assume that , , and is a continuous function, and that (34) admits a unique solution, . Clearly, .

Then, under assumptions , we infer that , where is a solution to (1).

As is known, the nonnegative roots of the algebraic equation (36), i.e., , split the initial values of solutions to (34) into subsets, which give rise to solutions with different behavior on long time intervals. Subsequently, equation (37) links these point-wise boundaries to -dimensional ellipsoids in the phase space of (1) or (2), i.e., , which, in turn, can be used for the estimation of the trapping/stability regions for the corresponding systems if . The latter condition can be assured under the additional assumption , where and is the ball with radius centered at . Note also that .

Below, we review the application of this procedure to some characteristic, but relatively simple cases.

Let us recall that all terms in (34), except are nonnegative scalars, whereas can be either positive or negative.

Firstly, we assume for simplicity that and . Then, and monotonically with if either and or and . Yet, in both cases, the norms of the corresponding solutions to (1) or (2) can approach positive infinity, approach zero, or remain bounded.

Assume next that , , and that (36) has one simple root, . Such a fixed solution to (34) can be either stable or unstable. This yields the following.

Theorem 3. Assume that , , , and that is a unique fixed solution to (34) corresponding to a simple root of (36).

If this solution is unstable, then the trivial solution to (2) is asymptotically stable. Furthermore, and , i.e., is enclosed in the stability region of the trivial solution to (2).

If this solution is stable, then , i.e., is enclosed in the trapping region of the trivial solution to (2), and , where is a solution to (2).

Proof. The proof of this statement immediately follows from (33) and the assessment of the behavior of solutions to the scalar and autonomous nonlinear equation (34) in these two cases. In fact, assume that and that is a unique unstable fixed solution to (34). Then, is a stable solution to (34) that attracts all solutions of this equation with initial values , which monotonically approach zero. Next, since , the inequality holds for , where is a solution to (2). In turn, this implies that and that . This proves the first part of the above statement.
Assume next that is a unique stable fixed solution to (34). Then, is the unstable solution to this scalar equation, which implies that both and . Then, since , application of the inequality implies that and, in turn, that .
Next, we assume that , , and that (36) has two positive simple roots, , which can be either equal or distinct. Let ; then we can readily show that and are the unstable and stable fixed solutions to (34), respectively. In fact, in this case, . Hence, is continuous. Since is a simple root of (36) corresponding to the attractive fixed solution to (34), is consequently the repelling solution to this equation. This yields the following.

Theorem 4. Let , , , and let be a solution to (1). In addition, assume that one of the following two conditions holds:

Equation (36) has only two simple roots, corresponding to the unstable and stable fixed solutions to (34), respectively, and . Then, Let ; then .

Proof. The proof of this statement immediately follows from (33) and the assessment of the behavior of solutions to the scalar equation (34) in the corresponding cases. In fact, as before, the condition implies that , where is a solution to (1). Next, assume firstly that (36) has only two simple roots, corresponding to the unstable and stable fixed solutions to (34). Then, since solutions to (34) do not intersect due to uniqueness, if and . This, with (33), yields the first part of the above statement.
Assume next that . In this case, , which, with (33), implies the second part of this theorem.

Obviously, (36) can admit more than two positive solutions if , and, in addition, the corresponding fixed solutions to (34) can bifurcate under variation of the parameters of this equation. Yet, the corresponding analysis can be extended to these more complex cases as well.
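As an illustration of the fixed-point classification underlying Theorems 3 and 4, the following sketch assumes an autonomous right-hand side φ(r) = −a r + b k r³ + c with a polynomial nonlinear Lipschitz function L̄(r) = k r² and placeholder constants; it finds the nonnegative roots of φ(r) = 0 and classifies them by the sign of φ'(r). The constants and the resulting roots are hypothetical.

```python
# Sketch: roots and stability of fixed points of an assumed autonomous counterpart (34),
#   r' = phi(r) = -a*r + b*k*r**3 + c,   with L_bar(r) = k*r**2 (placeholder constants).
import numpy as np

a, b, k, c = 0.8, 1.2, 0.3, 0.02
coeffs = [b*k, 0.0, -a, c]                 # b*k*r^3 - a*r + c, highest degree first
roots = np.roots(coeffs)
fixed = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real >= 0.0)

def dphi(r):                               # phi'(r) = 3*b*k*r**2 - a
    return 3.0*b*k*r**2 - a

for r_star in fixed:
    kind = "stable" if dphi(r_star) < 0 else "unstable"
    print(f"fixed solution r* = {r_star:.4f}: {kind}")
# In this assumed example, the smaller root is attracting (it traps nearby solutions),
# while the larger root is repelling and marks the radius used to estimate the
# trapping/stability region.
```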

4.4. Approximation of Auxiliary Equation Using Averaging Technique

For systems with a time-dependent linear part, frequently can be regarded as a highly oscillatory function, i.e., , , where is a characteristic time scale. Let us also assume for simplicity that , , , , and introduce the fast time , transforming (24) into (39), where . Since (39) is a replica of (24), it admits a unique solution due to our prior assumption on (24). Furthermore, if , solutions to (39) with bound in norm from above the corresponding solutions to either (1) or (2), respectively.

Next, application of the averaging technique to (39) yields an autonomous approximation to this equation, (40), where . We assume that the first three limits exist and that the last limit exists uniformly in .
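A minimal sketch of this averaging step under assumed placeholder coefficients: the rapidly oscillating coefficients of an instance of the auxiliary equation are replaced by long-run means computed on a grid, and the resulting autonomous equation is integrated alongside the original one for comparison.

```python
# Sketch: average the time-dependent coefficients of an assumed instance of the
# auxiliary equation and compare trajectories of the original and averaged forms.
import numpy as np
from scipy.integrate import solve_ivp

def a(t):  return 0.5 + 0.1*np.cos(5.0*t)      # rapidly oscillating placeholder
def c(t):  return np.abs(np.sin(10.0*t))       # rapidly oscillating placeholder
b, k, eps = 1.2, 0.3, 0.05

tau = np.linspace(0.0, 200.0, 20001)
a_bar = np.mean(a(tau))                         # time-average of a(t)
c_bar = np.mean(c(tau))                         # time-average of c(t)

def rhs_full(t, r): return -a(t)*r + b*k*r**3 + eps*c(t)
def rhs_avg(t, r):  return -a_bar*r + b*k*r**3 + eps*c_bar

r0, T = 0.4, 60.0
full = solve_ivp(rhs_full, (0.0, T), [r0], rtol=1e-8, dense_output=True)
avg  = solve_ivp(rhs_avg,  (0.0, T), [r0], rtol=1e-8, dense_output=True)
# For small oscillation periods the two solutions should remain close on long intervals.
```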

Sufficient conditions for the closeness of some solutions of the averaged and initial equations on large and infinite time-intervals can be found in [3, 6, 25, 26] and references therein. For instance, for , the following conditions imply closeness of some solutions to (39) and (40); see [6], section 7.7.

Proposition. Let equation (40) possess a positive fixed solution, and be a ball with radius , which is centered at . Assume that function in (39) admits the following conditions:
, and the limit, , exists uniformly, and , and .
and are continuous functions.
are uniformly defined for .

Then, for sufficiently small , there is such that, for , (39) admits a unique solution , which obeys the following inequality and is a stable/unstable solution to (39) if is a stable/unstable solution to (40).

Yet, under the assumptions of the Proposition, positive solutions belong to the -neighborhoods of the fixed solutions to (40), , and assume their stability/instability properties. This lets us use the fixed solutions for the estimation of the trapping/stability regions of (39) in nearly the same way as was done previously with the fixed solutions to (34). Subsequent inferences on the behavior of solutions to (1) or (2) can be made under the additional condition , where is a ball with radius centered at zero, and . Consequently, we note that .

Next, due to (42), equation (37) should be adjusted to the following more conservative form:

Hence, the ellipsoids estimate the sets of the corresponding initial vectors for solutions to (1) or (2), which are included in the trapping/stability regions of these equations if . This lets us adjust the statements of Theorems 3 and 4 as follows.

Theorem 5. Let be a unique solution to (40), , , and let the functions and admit the conditions of the Proposition. Assume also that , , and that (40) possesses a unique unstable fixed solution . Then, the trivial solution to (2) is asymptotically stable. In addition, , and .

If, in turn, is a unique stable fixed solution to (40), then and , where is a solution to (2).

If , , and (40) possesses unstable and stable simple fixed solutions and , then,

If , , and (40) possesses a repeated fixed solution , then , where is a solution to (1).

Proof. The proof of this theorem immediately follows from (43), which implies the corresponding modifications of Theorems 3 and 4 in the considered cases.

We notice that theoretical estimates for admissible values of and turn out to be quite conservative [13], but more accurate estimates frequently can be obtained in numerical simulations.

Remark 6. Note that Theorem 5 offers the most lucid application of the averaging approach to the analysis of solutions to (24). Yet, application of the averaging technique to (24) with two significantly different time scales yields an equation possessing only the slow time; see [25, 26] and further references therein. It was shown in [15] that, under some conditions, stability of the system averaged over the fast time implies the stability of the original system with two time scales. These inferences can be applied to (24) in the corresponding cases. Moreover, after averaging over the fast time, slowly varying coefficients in (24) frequently can be effectively bounded, which allows an efficient conversion of (24) to a more conservative but time-invariant and integrable form.

Remark 7. Equation (24) turns into the integrable Bernoulli equation [27] if and obeys Hölder's inequality, i.e., , , which streamlines the stability analysis and the estimation of solution bounds for such equations.

Remark 8. For the sake of completeness, we briefly compare the application of the Lyapunov function methodology aided by the one-sided Lipschitz condition with the approach developed above. The former methodology can offer less conservative stability and stabilization conditions for an equilibrium of nonlinear systems than its classical counterpart, since the Lipschitz constant is larger than or equal to its one-sided analog; see [28–31] and additional references therein. For some functions, the one-sided Lipschitz constant can assume zero or negative values, which, in principle, can significantly reduce the conservatism of the underlying methodology. Nonetheless, the choice of the Lyapunov functions also affects the outcomes of such a combined approach. However, efficient Lyapunov functions are rarely available for nonautonomous and nonlinear systems. In contrast, our methodology does not rest on the use of Lyapunov functions.
It was shown in [29, 30] that the application of the one-sided Lipschitz condition to the Lyapunov-aided design of nonlinear observers is simplified under the additional so-called quadratic inner-boundedness condition, which is enforced on the nonlinear components of the underlying systems. Utility of both conditions decreases the efficacy of this approach and involves the estimation of three parameters that depend upon the size of the equilibrium's neighborhood. The computational burden of such a task quickly increases in higher dimensions. In contrast, our nonlinear version of the Lipschitz inequality can be readily devised in higher dimensions for polynomial vector fields or for those that can be represented by vector polynomials with locally/globally bounded error terms. Additionally, the extended Lipschitz inequality becomes global if the error terms in the polynomial approximations of the vector field are globally bounded.
Furthermore, our approach naturally enables the estimation of trapping/stability regions for time-varying nonlinear systems, whereas the methodology based on the one-sided Lipschitz condition, like its classical counterpart, has been primarily used in local stability analysis and stabilization problems.
Finally, we contrast the application of both methodologies in some standard cases. Assume, for instance, that . Then, . Hence, in this case, (22) is a global and sharp inequality. In turn, the one-sided Lipschitz inequality returns . The last inequality fails if . Additionally, let and . Then, the one-sided Lipschitz inequality yields that and, subsequently, . The last inequality is sharp if ; otherwise, it is inferior to (22).
The above inferences can be extended to polynomial vector fields. Apparently, the less conservative one-sided Lipschitz inequality bears some of the shortcomings of its classical counterpart. Nonetheless, the one-sided Lipschitz inequality can deliver superior estimates in some application domains.

5. Simulations

This section initially applies the methodology developed above to estimating the solution norms as well as the trapping/stability regions of a Van der Pol-like model with both a time-varying linear part and an external time-dependent perturbation. The system is written in dimensionless variables as (45), where , , and , , , . In all further simulations, we set and . The fundamental solution matrix of the linearized system (45), in general, cannot be found in closed form. Consequently, we gauge the running condition number of such a matrix in simulations, which show that oscillates within a certain fixed interval about its mean value; see, e.g., Figure 1.
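For orientation only, a sketch of a Van der Pol-like system with a time-varying linear part and an oscillatory forcing term is given below. The concrete coefficients and parameter values used in the paper's figures are not reproduced here; everything in the sketch, including the form of the damping modulation, is a placeholder.

```python
# Sketch of a Van der Pol-like model with time-varying linear part and oscillatory
# forcing (placeholder coefficients; not the exact system simulated in the figures).
import numpy as np
from scipy.integrate import solve_ivp

mu, eps, omega = 0.5, 0.1, 2.0

def vdp(t, x):
    x1, x2 = x
    damping = mu*(1.0 + 0.3*np.sin(t))           # time-dependent linear block
    return [x2,
            -x1 - damping*(1.0 - x1**2)*x2 + eps*np.cos(omega*t)]

sol = solve_ivp(vdp, (0.0, 100.0), [0.5, 0.0], rtol=1e-8, max_step=0.05)
norms = np.linalg.norm(sol.y, axis=0)            # time-history of ||x(t)||, to be compared
                                                 # with the scalar bounds obtained from (24)
```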

Firstly, we notice that the time-histories of , which are ubiquitous in our analysis, are affected by the normalization of . For instance, we tested two different normalizations: (1) and (2) , where is the fundamental matrix of solutions of the system with and . In Figure 2, black and blue lines plot the time-histories of corresponding to the first and second normalizations of , respectively. The red line plots the running time-average of for the first normalization of , i.e., . It follows from Figure 2 that the first normalization yields a widely oscillatory , whereas the second yields , with . Further simulations show that the second normalization also reduces the variability of if assumes relatively small or intermediate values. Hence, the second normalization is adopted throughout our simulations.

The estimates of the norms of solutions to (45) are shown in Figures 3(a) and 3(b), where red, blue, and black lines plot the time-histories of the actual norms of solutions and their two upper bounds, which are obtained using either (3) or the nonlinear extension of the Lipschitz inequality, i.e., , respectively. Hence, for system (45), . Yet, the value of the Lipschitz constant in (3) depends upon attained in these simulations. This value is estimated in our simulations using the energy integral for the linearized, time-invariant, and homogeneous model of (45).

The plots on Figures 3(a) and 3(b) correspond to ; and in Figure 3(a), and in Figure 3(b).

Clearly, the time-histories of the solution bounds obtained with the nonlinear extension of the Lipschitz inequality outperform those utilizing (3) everywhere except on a small initial time interval, where the latter is somewhat more accurate than the former. Both bounds provide superior accuracy on the initial time intervals, which, however, decreases as time elapses. Application of the nonlinear extension of the Lipschitz inequality delivers tolerable accuracy on extended time intervals for the homogeneous system. Yet, the estimation accuracy declines for the nonhomogeneous system.

We notice that the task of finding a suitable Lipschitz constant turns out to be rather challenging for systems in higher dimensions. In contrast, devising a global extended Lipschitz inequality, i.e., (22), is effortless for polynomial vector fields.

In turn, let us recall that the behavior of solutions to the nonautonomous equation (45) naturally unfolds in the 3D space , where the trajectories of this system do not intersect due to uniqueness, but projections of these trajectories on the plane can intersect themselves and each other. To picture the boundary of the stability/trapping region in space, we plot in Figure 4 the trajectories of system (45) computed in reverse time for different values of this system's parameters. All trajectories in these figures emanate from points located in the interior of the stability/trapping regions of (45) and approach cylinders with axes parallel to the -axis that encircle the corresponding regions in space. Every other trajectory of system (45), simulated in reverse time, approaches this cylindrical boundary as well. In Figures 4–6, we set and . Additionally, in Figure 4(a), , whereas in Figure 4(b), , and in Figure 4(c), . Clearly, in the first case, the projection of the corresponding trajectory on the plane yields an oval-like curve (Figure 5(a)), whereas in the latter two cases, such projections yield fairly irregular bundles; see Figures 5(c) and 5(e). In each of these cases, the interior regions of these bundles/curves shape the sets of initial vectors, i.e., , which are included in the stability/trapping regions of equation (45).

Two estimates of these regions, defined by application of the formula , are shown by dotted-blue and magenta lines in Figures 5(a), 5(c), and 5(e). Firstly, is approximated in simulations of the corresponding equation (24). Secondly, is defined as a positive unstable fixed solution of (34) or (40). The second estimate in Figure 5(a) is defined through the use of (34), whereas in Figures 5(c) and 5(e), such an estimate is obtained using the averaged equation (40).

Figures 5(b), 5(d), and 5(f) plot trajectories of (45) emanating from initial vectors that fit within the estimated boundaries of the stability/trapping regions displayed in Figures 5(a), 5(c), and 5(e), respectively.

Clearly, simulations of (24) yield a central part of the actual stability/trapping regions for this equation. We notice that the analytical approximations, which define the pertinent initial values of through the use of (34), are close to those obtained in direct simulations of the corresponding equation (24) if is relatively small and . If , the analytical approximations remain intact if , since in this case the positive roots of (36) depend sensitively upon and vanish as increases. Figures 5(a) and 5(b) show that, under these last conditions, the pertinent values of , which are defined either by (34) (blue line) or in direct simulations of (24) (magenta line), are close to each other.

In turn, the averaging technique leads to tolerable analytical estimates of the trapping/stability regions for larger values of both and . Figures 5(c)–5(f) compare numerical and analytical estimates that are developed in simulations of either (24) or (40) corresponding to (45). Clearly, the former two estimates, i.e., the magenta and blue lines, are sufficiently close to each other for and determine the central part of the actual trapping/stability regions for (45).

Figure 1 plots time-histories of and and their running time-averages in blue, yellow, red, and magenta lines, respectively. Both functions notably oscillate, but their running time-averages quickly approach some constant values, which yield the principal contribution to the solutions to (24).

Finally, we apply our methodology to estimating the bounds of solutions to (45) with and . Such a Duffing-like equation is frequently used in simplified modeling of vibrations of elastic structures forced by oscillations of the system's loads [32]. Interestingly, in this case, matches the estimate of used previously for the Van der Pol-like model. Hence, the auxiliary equations (24) for the Duffing-like and Van der Pol-like models are the same. Figure 6 plots, in blue and red lines, the time-histories of the norms of solutions for the Duffing-like and Van der Pol-like models, respectively, emanating from the same initial vector. The black line in this figure plots the estimate of the norms of the corresponding solutions delivered by simulations of (24). Note that the values of the parameters used in Figures 3 and 6 are identical.

6. Conclusions and Future Work

This paper presents a novel approach to the estimation of solution bounds and trapping/stability regions of nonautonomous nonlinear systems. This approach is based on the development of a pivotal differential inequality for the norms of solutions to the initial systems and the subsequent analysis of the associated first-order auxiliary differential equation. The solutions of this auxiliary equation bound in norm from above the corresponding solutions of the initial systems.

We cast the auxiliary equation in the standard form by using either the Lipschitz condition or its nonlinear extension. The use of the Lipschitz condition linearizes the auxiliary equation and yields the corresponding solution bounds and stability criteria for the conforming nonlinear and nonautonomous systems in the local neighborhoods of the phase space where the Lipschitz inequality holds. We show that the developed stability criteria turn out to be less conservative than the known ones.

In turn, we developed a nonlinear extension of the Lipschitz inequality and applied it to recast the auxiliary equation in a more accurate but nonlinear and nonautonomous form that, in general, does not admit a closed-form solution. Yet, for autonomous and some other nonlinear systems, the solutions of the auxiliary equation can be written in closed form.

We formulate the characteristic properties simplifying the numerical estimation of the trapping/stability regions of the nonlinear auxiliary equation and consequently apply them to the estimation of the corresponding regions for the initial systems. Next, we introduce two approximations reducing the nonlinear auxiliary equation to its autonomous and integrable forms. Analysis of solutions to these autonomous counterparts of the auxiliary equation yields explicit estimates of the trapping/stability regions for the corresponding initial systems, which are contrasted in simulations.

Our theoretical inferences are validated in comprehensive numerical simulations that are partly presented in this paper. The simulations show that the accuracy of our estimates inversely correlates with the magnitudes of and , since the auxiliary equation includes only the norms of these perturbations. Hence, the precision of the developed estimates turns out to be adequate if only the upper bounds on and are known, a frequent premise in the theory of systems under uncertainties. However, our approach can yield rather conservative estimates if both and are defined precisely.

Yet, the developed approach can be combined with some successive approximations yielding bilateral bounds for the norms of solutions that approach the norms of the exact solutions under some broad conditions. Application of such a refined methodology will be the topic of our subsequent paper.

Data Availability

No data were used to support this study.

Conflicts of Interest

There are no conflicts of interest concerning this paper.

Acknowledgments

This paper was developed in collaboration between its coauthors. The second coauthor developed the programs and contributed to the interpretation of the simulation results, whereas the first coauthor developed the underlying methodology, simulated the pertinent systems, and drafted the paper.