Abstract

When dealing with practical control systems, it is equally important to establish the controllability of the system under study and to find the corresponding control functions explicitly. The most challenging problem in this path is the rigorous analysis of the state constraints, which can be especially sophisticated in the case of nonlinear systems. However, heuristic considerations related to physical, mechanical, or other aspects of the problem may suggest specific hierarchic controls containing a set of free parameters. Such an approach reduces the computational complexity of the problem by transforming the nonlinear state constraints into nonlinear algebraic equations with respect to the free parameters. This paper is devoted to the heuristic determination of control functions providing exact and approximate controllability of dynamic systems with nonlinear state constraints. Using the recently developed approach based on Green’s function method, the controllability analysis of nonlinear dynamic systems is, in general, reduced to nonlinear integral constraints with respect to the control function. We construct parametric families of control functions having certain physical meanings, which reduce the nonlinear integral constraints to a system of nonlinear algebraic equations. Time-harmonic, switching, impulsive, and optimal stopping regimes are considered. Two concrete examples arising in engineering reveal the advantages and drawbacks of the technique.

1. Introduction

The ability of a controlled system to attain a required state at a given instant by means of attached controllers is called controllability. It is one of the most crucial properties of applied control systems, along with stability and reliability. Generally, two main types of controllability are considered for deterministic systems: exact and approximate. If, by a specific choice of admissible controls, the system can be transferred from a given state to a required state exactly within a finite amount of time, it is called exactly controllable. If the system is not exactly controllable by any choice of admissible control, but the state implemented at the required instant by at least one admissible control is “sufficiently” close to the designated terminal state, it is called approximately controllable. Evidently, exact controllability implies approximate controllability with arbitrarily small accuracy. However, in general, approximate controllability does not imply exact controllability. For further introduction to the concept of controllability with diverse applications, refer to the major contributions [1–10] and the references therein. The concept of controllability with applications in engineering has been studied by many authors, a part of which can be found in [11–16] and in the references therein. The results obtained in this paper can be used in relevant applied studies in engineering, e.g., [17–20], for the derivation of diverse control regimes.

Let us describe the aforesaid mathematically. Assume that the state of a dynamic system is characterized by the vector-function , , satisfying some constraints, e.g., the governing equation, boundary, and initial conditions, among other possible constraints (hereinafter, all these constraints are referred to as state constraints). Then, mathematically, the controllability of the system is verified by evaluating the mismatch between the designated state, , and the state implemented by a specific choice of admissible controls at the given instant , i.e., the residue (1). Thus, if (2) holds for at least one admissible control , then the system is exactly controllable. If this is not the case, but (3) holds for at least one admissible control  and a given precision , then the system is approximately controllable.

Here is the control vector-function, is the set of admissible controls, is the control time, is the space of terminal states (an appropriate Hilbert space), and is the norm in . The subscript on the residue is a short form denoting its obvious dependence .
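For illustration, a minimal sketch of the residue test (2)-(3) in a discretized setting is given below; the routine producing the terminal state and all names and tolerance values used here are hypothetical and only show the bookkeeping, not the method of any particular reference.

```python
# A minimal sketch of the residue test, assuming the terminal state x(., T; u)
# and the designated state are available as samples on a uniform spatial grid.
import numpy as np

def residue_norm(x_at_T, x_target, dx):
    """Discrete L2 norm of the mismatch between implemented and designated states."""
    return np.sqrt(np.sum((x_at_T - x_target) ** 2) * dx)

def classify_control(x_at_T, x_target, dx, eps, zero_tol=1e-12):
    """Classify a single admissible control by the size of its residue."""
    r = residue_norm(x_at_T, x_target, dx)
    if r <= zero_tol:
        return "exact"          # (2) holds (up to round-off)
    if r <= eps:
        return "approximate"    # (3) holds with precision eps
    return "not resolving"      # this particular control resolves neither
```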

Hereinafter, we assume that the set of admissible controls has the following general form [10]: where is the support of and is an appropriate Hilbert space. Note that can be complemented by other possible constraints on . For instance, in the case of boundary controls, if the required state transition is necessarily time-continuous, then is complemented by the compatibility conditions of the initial and boundary conditions (see Section 3 below). The admissible controls providing (2) or (3) are called resolving controls (in the corresponding sense). The set of all resolving controls is denoted by . Obviously, . If , lack of controllability occurs (see [21–25] and the references therein).

The analysis of controllability for a particular control system can be quite sophisticated and may require burdensome computational costs. This may happen in the case of systems with singularities/discontinuities, uncertain systems, systems with nonlinear state constraints, and so on. However, the evaluation of (1) on can be made less costly if the state vector , that is, the solution of the state constraints, is found explicitly. Unfortunately, no unified technique exists that allows solving general state constraints explicitly. However, there are powerful techniques, such as the Adomian decomposition method [26], the homotopy analysis method [27], and the nonlinear Green’s function method [28, 29], for finding approximate analytical solutions to general types of nonlinear differential equations. Nevertheless, even when the explicit dependence is available, finding resolving controls that provide (2) or (3) explicitly remains a very challenging problem.

In this paper we develop a systematic algorithm for the heuristic determination of explicit expressions for resolving controls providing exact or approximate controllability at the required instant (i.e., (2) or (3)) of systems with nonlinear state constraints. Parametric hierarchies of functions are constructed to satisfy the resolving systems derived by the recently developed Green’s function approach [10]. In the case of exact controllability, the resolving system is expanded into series of orthogonal functions. The considered controls include trigonometric, piecewise-constant, impulsive, piecewise-continuous, constant, quasi-polynomial, trigonometric polynomial, and stopping regimes. Both distributed and boundary controls are covered. A rigorous comparison of our approach with the well-known moments problem approach shows that, under proper restrictions, the known explicit solutions obtained by the moments problem approach coincide with our solutions. We test the technique on the examples of a finite elastic beam subjected to an external force with an uncertainty and the viscous nonlinear Burgers’ equation. Both cases have diverse applications in modern engineering.

The paper is organized as follows. In Section 2 we outline the two most widely used methods for the analysis of exact and approximate controllability of nonlinear dynamic systems. Then, in Section 3 we derive diverse hierarchies of resolving controls and the corresponding constraints on the free parameters for both exact and approximate controllability analysis. In Section 4 we perform a comparative analysis between the controls derived by our approach and those derived by the moments problem approach. Finally, we demonstrate how the derived explicit expressions of resolving controls can be used to ensure exact or approximate controllability of particular dynamic systems.

2. Existing Approaches

Verifying (2) or (3) is sufficient only for establishing whether a particular system is exactly or approximately controllable. However, when dealing with real-life control systems, from the practical implementation point of view, it is equally important to find admissible controls providing either type of controllability explicitly, which would significantly reduce controller performance costs. There exist several efficient techniques for analyzing the exact or approximate controllability of particular systems. In this section we outline the two most commonly used ones.

The first technique involves a norm-minimizing algorithm to minimize (1) (see [6]). In the general statement, the problem is formulated as a constrained minimization problem: where is subject to the state constraints. If the minimum is attained and equals zero, then provides exact controllability of the system at . If the minimum is not equal to zero but remains smaller than the required precision, then the system is approximately controllable. Otherwise, we arrive at the lack of controllability. In other words, following this approach, a single numerical procedure identifies whether the system is exactly controllable, approximately controllable, or not controllable at all. The disadvantage of this approach is the burdensome computational cost (machine time) required to run the corresponding numerical scheme in the case of nonlinear state constraints.
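The following sketch shows how such a constrained minimization could be set up numerically once the control is parametrized by a finite vector; `terminal_state`, the admissibility bound, and all names are assumptions of the sketch, not the exact formulation of [6].

```python
# A sketch of the norm-minimization approach for a finitely parametrized
# control u_p; terminal_state(p) is a hypothetical routine integrating the
# state constraints and returning x(., T; u_p) on a grid with spacing dx.
import numpy as np
from scipy.optimize import minimize

def residue(p, terminal_state, x_target, dx):
    return np.sqrt(np.sum((terminal_state(p) - x_target) ** 2) * dx)

def minimize_residue(terminal_state, x_target, dx, p0, control_norm, norm_bound):
    # admissibility ||u_p|| <= norm_bound enters as an inequality constraint
    cons = [{"type": "ineq", "fun": lambda p: norm_bound - control_norm(p)}]
    res = minimize(residue, p0, args=(terminal_state, x_target, dx),
                   method="SLSQP", constraints=cons)
    # res.fun ~ 0: exact; res.fun <= eps: approximate; otherwise not resolved
    return res.x, res.fun
```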

The second approach (see [10]) is based on the use of the system which is equivalent to the exact controllability of the system according to the definition of the norm. Here is an open subset of occupied by the system. Then the set of resolving controls is defined as . In some exceptional cases, is found from the state constraints explicitly, so that (7) becomes an explicit constraint on . Then, Newton-Raphson iterations (or similar methods) can be employed to determine resolving controls approximately with the required precision. The disadvantage of this approach is that the rigorous derivation of from the state constraints is complicated and, in some cases, impossible. Another disadvantage may be the necessity of using derivatives of (7) in the numerical procedure.
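When (7) does become an explicit finite-dimensional constraint, a basic Newton-type iteration with a finite-difference Jacobian suffices in principle; the sketch below is generic and assumes nothing about the particular form of (7).

```python
# A generic Newton-Raphson sketch for a resolving system F(p) = 0 in a finite
# parameter vector p, with a forward-difference Jacobian; F is a placeholder
# for the explicit form of (7).
import numpy as np

def newton_solve(F, p0, tol=1e-10, max_iter=50, h=1e-7):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        f = np.atleast_1d(F(p))
        if np.linalg.norm(f) < tol:
            return p
        J = np.empty((f.size, p.size))
        for j in range(p.size):              # finite-difference Jacobian column
            dp = np.zeros_like(p); dp[j] = h
            J[:, j] = (np.atleast_1d(F(p + dp)) - f) / h
        p = p - np.linalg.lstsq(J, f, rcond=None)[0]
    return p                                  # may not have converged
```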

3. Heuristic Determination of Resolving Controls

In this section we describe a method of heuristic determination of resolving controls providing exact or approximate controllability of dynamic systems with nonlinear state constraints. Using Green’s function approach [10], the state constraints are reduced to nonlinear integral constraints on the admissible controls. Parametric solutions to the reduced constraints are constructed explicitly. Eventually, a system of nonlinear algebraic equations for the parameters is derived. For the sake of simplicity, hereinafter we restrict ourselves to the one-dimensional case. All the derivations and transformations can be straightforwardly generalized to the case of higher dimensions.

3.1. Exact Controllability

In the general statement, the exact controllability of a particular system in one space dimension is equivalent to an equality-type constraint of the form [10] where and are given functions. The subscript indicates that the corresponding quantity depends on .

The explicit determination of resolving controls from (9) by direct methods can be quite complicated, since both of its sides depend on , while the control depends only on . This means that (9) cannot be considered as a Fredholm integral equation of the first kind. Nevertheless, in cases when for an integrable function , (9) becomes a Fredholm integral equation of the first kind, which can be solved efficiently for generic kernels (refer to [30] for details).

An efficient way of solving (9) in general can be developed using the expansion of and into series of orthogonal functions. Let be a family of orthogonal functions in (in some specific cases a family of functions orthogonal with respect to some weight can also be involved); that is, where is the Kronecker delta: Denote by and the expansion coefficients of the functions and , respectively, so that (in real problems, and are expressed in terms of the Green’s function of the system under study and its integral [10], so their expansions are convergent) Then, since the functions are orthogonal, (9) is equivalent to the infinite system
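A small sketch of this projection step is given below; `f_vals` and `phi_vals` stand for samples of the expanded function and of the orthogonal family, which are problem-specific quantities not reproduced here.

```python
# Sketch of computing expansion coefficients against an orthogonal family by
# quadrature: coeff_n = <f, phi_n> / <phi_n, phi_n>, optionally with a weight.
import numpy as np

def expansion_coefficients(f_vals, phi_vals, x, weight=None):
    """phi_vals: array of shape (N, len(x)), row n holding phi_n sampled on x."""
    w = np.ones_like(x) if weight is None else weight
    norms = np.trapz(phi_vals ** 2 * w, x, axis=1)
    return np.trapz(phi_vals * f_vals * w, x, axis=1) / norms
```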

Note that (16) can be treated in different ways. It is an infinite system of Fredholm integral equations of the first kind, which can be efficiently solved numerically [30, 31]. On the other hand, it can be treated as an infinite-dimensional problem of moments (see Section 4 below). In specific problems, depending on, for example, the required accuracy of computations, the infinite system (16) is truncated and considered for some finite .

System (16) can also be solved heuristically. Namely, based on some considerations, say, the physical treatment of the problem, the controller capabilities, and so forth, the control function is chosen to have a specific form containing a set of free parameters. Then, this function is substituted into system (16), or its truncated version, and a discrete system of, in general, nonlinear algebraic equations is derived with respect to the free parameters.

Consider some specific solutions. Let the infinite system (16) be truncated at some finite . Then, the control function can be sought, for instance, in the form of a trigonometric polynomial where and , , and are free parameters determined appropriately to satisfy (16) exactly. Substituting (17) into the truncated part of system (16), the system of nonlinear equations is obtained. Here and the superscript T denotes transposition.
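A sketch of this substitution step follows; the kernels and right-hand sides of the truncated system (16) appear as placeholder callables, and the ansatz with amplitudes, frequencies, and phases as free parameters is an assumed concrete form of the trigonometric polynomial.

```python
# Sketch: substitute a trigonometric-polynomial control with 3*M free
# parameters (amplitudes A, frequencies w, phases phi) into the truncated
# system and solve the resulting nonlinear algebraic equations; `kernels`
# and `b` are placeholders for the data of (16).
import numpy as np
from scipy.optimize import fsolve

def control(t, params, M):
    A, w, phi = np.split(np.asarray(params), 3)
    return sum(A[k] * np.sin(w[k] * t + phi[k]) for k in range(M))

def residuals(params, kernels, b, T, M, nt=400):
    t = np.linspace(0.0, T, nt)
    u = control(t, params, M)
    return np.array([np.trapz(K(t) * u, t) for K in kernels]) - b

# usage with hypothetical data: p = fsolve(residuals, p0, args=(kernels, b, T, M))
```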

Consider also the piecewise-constant regime corresponding to a finite jump from the constant regime to when switches from to . Here and are free parameters. The instants satisfy the inequality-type constraints precluding the overlap of different regimes. Here is the Heaviside function:

In this case, (16) is reduced to with
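Under the same placeholder notation, the reduction for the piecewise-constant regime amounts to integrating each kernel over the switching intervals, as the sketch below illustrates.

```python
# Sketch: for a piecewise-constant control with levels u_j on the ordered
# switching intervals between 0 and T, each equation of (16) collapses to a
# weighted sum of kernel integrals over those intervals.
import numpy as np

def piecewise_constant_lhs(levels, switch_times, kernel, T, nq=200):
    edges = np.concatenate(([0.0], np.sort(switch_times), [T]))
    lhs = 0.0
    for u_j, a, b in zip(levels, edges[:-1], edges[1:]):
        t = np.linspace(a, b, nq)
        lhs += u_j * np.trapz(kernel(t), t)
    return lhs   # to be equated with the corresponding right-hand side of (23)
```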

Further, consider the impulsive regime formally describing instantaneous impacts at . Here is the Dirac delta function:

In this case the free parameters are also and . Substituting (25) into (16) leads to with . Moreover, here also the instants satisfy
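The sifting property of the delta makes this reduction purely algebraic, as in the following sketch with the same placeholder kernels and right-hand sides.

```python
# Sketch: for the impulsive regime u(t) = sum_j a_j * delta(t - t_j), the
# integrals in (16) become pointwise kernel evaluations, so (27) is the
# algebraic system sum_j a_j * K_n(t_j) = b_n.
import numpy as np

def impulsive_residuals(amplitudes, instants, kernels, b):
    lhs = np.array([np.sum([a * K(tj) for a, tj in zip(amplitudes, instants)])
                    for K in kernels])
    return lhs - np.asarray(b)
```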

In many applications the piecewise-continuous control is considered, where is the characteristic function of the interval .

This regime corresponds to switching between the time-dependent regimes . Here, any of the regimes (17), (20), and (25) can be used as . In this case the resolving equation is reduced to with

In the case of boundary control, is linear in ; that is, for given functions and .

In addition, satisfies the boundary conditions which follow from the compatibility of the given boundary and initial data.

Then, (18), (23), and (27) are reduced to linear systems for . Indeed, in the case of, for example, (25), the resulting system for the free parameters becomes where Here and are the expansion coefficients of and into series of , respectively. In addition, (35) implies further restrictions on : when and , then . Otherwise, (25) is applied in cases when .
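When the impulse instants are prescribed and only the amplitudes remain unknown, the resulting system is linear and can be solved directly; the sketch below assumes, as an illustration, that the matrix entries are kernel evaluations as in the impulsive case.

```python
# Sketch of the linear case: with the impulse instants prescribed, the system
# for the amplitudes is linear; a square solve (or a minimum-norm least-squares
# solve when the system is not square) yields the free parameters.
import numpy as np

def solve_amplitudes(kernels, instants, b):
    A = np.array([[K(tj) for tj in instants] for K in kernels])
    b = np.asarray(b, dtype=float)
    if A.shape[0] == A.shape[1]:
        return np.linalg.solve(A, b)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```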

Eventually, numerical methods of different orders can be employed to solve approximately the nonlinear systems derived above for the free parameters; see, for instance, [32, 33].

Remark 1. In general, the -dimensional system (18) contains 3 unknowns; therefore it might be unsolvable. Nevertheless, if some of the free parameters, say and/or , , are prescribed, then (18) may become solvable. Moreover, in the case of boundary control, (18) is reduced to a linear system for . Therefore, if and are prescribed and , then are found straightforwardly. Otherwise, that is, when , techniques of nonlinear programming [34] must be involved for finding specific solutions.
The same reasoning also applies to systems (23) and (27), which contain, in general, 2 unknowns.

Remark 2. Any solution derived from the truncated version of the infinite-dimensional system (16) is approximate, which means that (9), in general, is satisfied approximately.

3.2. Approximate Controllability

Assume that the system under study is linear in the control. Then, its approximate controllability is reduced to the evaluation of an integral equation of the form [10] for , a bounded kernel , and . In the case of boundary control, the control function is additionally constrained by the discrete constraints (35).

It is easy to see that one of the obvious solutions to (39) is the constant regime leading (39) to When the control is carried out through the boundary data, this regime is applicable only in the case when .

In vibration control problems, time-harmonic controls of the form are usually considered, leading (39) to the nonlinear equation with respect to the free parameters , , and . In the case of boundary control, (35) provides the additional constraint
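A sketch of this parameter search is given below; the scalar kernel and right-hand side of (39) are placeholders, and the ansatz u(t) = A·sin(ωt + φ) is an assumed concrete form of the time-harmonic regime.

```python
# Sketch: substitute the time-harmonic ansatz u(t) = A*sin(w*t + phi) into the
# scalar constraint (39) and search the three free parameters so that the
# mismatch drops below the prescribed precision eps; K and g are placeholders.
import numpy as np
from scipy.optimize import minimize

def harmonic_mismatch(p, K, g, T, nt=400):
    A, w, phi = p
    t = np.linspace(0.0, T, nt)
    return abs(np.trapz(K(t) * A * np.sin(w * t + phi), t) - g)

def find_harmonic_control(K, g, T, p0, eps):
    res = minimize(harmonic_mismatch, p0, args=(K, g, T), method="Nelder-Mead")
    return (res.x, res.fun) if res.fun <= eps else (None, res.fun)
```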

A particular solution of (39), (35) can be constructed using the quasi-polynomial control where and are free parameters satisfying system (39) (note that, under a proper treatment, and may also take negative values). Assume, for simplicity, that . Then, (39) provides for the determination of , , and . Here Evidently, when and are prescribed, (46) becomes a linear equation for .

In the case of boundary control, in order to satisfy (35) by the quasi-polynomial regime, the term must be added to (45). Then, (46) is reduced to

A fast verification of approximate controllability is provided by the trigonometric regime where , , , and are the free parameters to satisfy (39). Assuming that , the nonlinear equation is derived for the determination of the free parameters, where In the case of boundary control, the term above should be added to the control.

Other particular solutions appropriate to the physical treatment of the problem are also possible. In many applied problems it becomes necessary to involve sliding modes [35]. As an example, consider the switching or piecewise-constant control regime subject to the inequality-type constraints Here , , and are free parameters. Note that piecewise-continuous regimes with are also often considered. In such cases, any of the continuous regimes above can be used as .

Assuming that , (39) can be reduced to the nonlinear equation where
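Treating the reduced problem as a small nonlinear program keeps the ordering constraints explicit, as in the sketch below; `mismatch` is a placeholder for the scalar residual of (55).

```python
# Sketch of the switching regime as a nonlinear program: the parameter vector
# p stacks the m levels followed by the m switching instants, which must stay
# ordered inside [0, T]; mismatch(p) is a placeholder for the residual of (55).
import numpy as np
from scipy.optimize import minimize

def solve_switching(mismatch, m, T, p0):
    cons = [{"type": "ineq", "fun": lambda p, j=j: p[m + j + 1] - p[m + j]}
            for j in range(m - 1)]                         # t_1 <= ... <= t_m
    cons += [{"type": "ineq", "fun": lambda p: p[m]},              # t_1 >= 0
             {"type": "ineq", "fun": lambda p: T - p[2 * m - 1]}]  # t_m <= T
    return minimize(lambda p: mismatch(p) ** 2, p0,
                    method="SLSQP", constraints=cons)
```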

Note that in the case of boundary control, the additional constraints are derived when . Otherwise, should be added to the right-hand side of (53).

Another application of sliding mode control is the so-called optimal stopping regime, usually given by and (39) must be satisfied by an appropriate choice of and . In addition, satisfies the inequality-type constraint In this case, (39) is reduced to the nonlinear equation where Therefore, if is prescribed and , then In the case of boundary control, if , only the first condition in (35) can be satisfied by , and must necessarily be zero. Otherwise, both conditions must be satisfied by .
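With the amplitude prescribed, the stopping time can be found by a bracketed scalar root search, as the sketch below shows; K and g again stand for the kernel and right-hand side of (39), and the particular form u(t) = u0 for t ≤ θ, u(t) = 0 afterwards, is an assumed concrete instance of the stopping regime.

```python
# Sketch: for a prescribed amplitude u0, the stopping constraint becomes a
# scalar equation in theta, solved by a bracketed root finder (this assumes
# the residual changes sign on (0, T]).
import numpy as np
from scipy.optimize import brentq

def stopping_residual(theta, u0, K, g, nt=400):
    t = np.linspace(0.0, theta, nt)
    return u0 * np.trapz(K(t), t) - g   # the control contributes nothing after theta

def find_stopping_time(u0, K, g, T):
    return brentq(stopping_residual, 1e-9, T, args=(u0, K, g))
```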

The set of reachable terminal states can be significantly extended by complementing with impulsive actions of the form subject to the inequality-type constraints The free parameters are , , and . If, in addition, all , then (39) is reduced to the nonlinear constraint where

In the case of boundary control, if and , (35) is satisfied only when Otherwise, (63) is applicable if and only if respectively.

Remark 3. In all cases above, together with the constraints derived above, the free parameters must also satisfy, according to the definition of , the inequality-type constraint . This means that, most likely, techniques of nonlinear programming must be involved.

Remark 4. As noted above, the derived systems of nonlinear algebraic equations can be approximated efficiently by existing powerful numerical techniques. In applied problems, the derivation of (9) and (39) remains the most difficult step towards the application of this method.

4. Comparison with the Moments Problem Approach

One of the well-developed and widely used approaches that allows finding resolving controls explicitly is Krasovskii’s moments problem approach. It was initially developed in [36] for the linear finite-dimensional moments problem; see also [2] for more details. Later, the algorithm was extended to nonlinear finite-dimensional systems in [37], related to applications in mobile control problems [38]. Using an integral representation formula for the general solution of the state constraints (equivalent to the Green’s function solution) and satisfying the terminal conditions, the explicit determination of the resolving controls is reduced to a moments problem. For concentrated parameter systems, the resulting moments problem is finite-dimensional, while for distributed parameter systems it is infinite-dimensional.

When (16) is linear in , its treatment as a problem of moments has several advantages, including the availability of explicit -optimal solutions for and of necessary and sufficient conditions for the existence of the optimal solution. Let us formulate the following theorem giving the explicit form of the -optimal (for ) solution of (16) linear in (see [39] for details). For the nonlinear case, see [37].

Theorem 5. The general solution , , of the linear finite-dimensional moments problem exists, is unique, and reads as where and the Lagrange multipliers are determined from the following two equivalent problems of conditional extremum: (i) find under the condition (ii) find under the condition Moreover, the minimal value of the norm is equal to In the special case when , the resolving admissible controls are determined as a weak limit of (71) as follows: where are determined from and from (70) with (77) substituted into it.

Thus, (71) and (77) are the solutions of (70) minimizing

It is easy to see that, under some restrictions on the parameters and , the control regime (25) obtained above heuristically coincides with the -optimal solution (77). It is noteworthy that when and is a set of sine functions or quasi-polynomials, then, under proper constraints on the free parameters, the heuristically determined solutions (17) and (45) also coincide with the corresponding -optimal solutions. Finally, when , the solution of the moments problem (70) is represented in the form of a switching regime, which, under proper constraints, coincides with (20).

The explicit solution of the general finite-dimensional nonlinear moments problem is not known. It is hard to obtain explicit solutions even for some simple forms of [40]. Therefore, the explicit expressions of the controls heuristically determined above can help applied mathematicians and engineers directly identify control regimes suitable for their studies of nonlinear control problems.

5. Examples

In this section we demonstrate how the heuristically determined controls can be used in practical cases.

5.1. Beam Subjected to Load with Uncertainty

Consider an elastic beam of finite length subjected to a concentrated dynamical load of intensity . The exact location of the load application, denoted by , is not known. However, it is given that for given and . Assume that the end of the beam is simply supported, while at only the bending moment is fixed and the transverse deflection is controlled. The aim is to find boundary controls providing controllability of the beam in a given finite time . Suppose that the load vanishes at some , . Since there is an uncertainty in the system, it is hardly possible to provide exact controllability in finite time [41]; therefore, the approximate controllability of the beam is studied.

Assume also that the beam is sufficiently thin and that the load intensity varies in such a range that the beam undergoes only infinitesimal strains. In that case, the Euler-Bernoulli assumptions can be invoked to derive the mathematical model of the beam. Then, restricting consideration to linear elasticity, the transverse deflection of the beam, denoted here by , satisfies the fourth-order differential equation subject to the boundary conditions Here is Young’s modulus, is the density, is the cross-sectional moment of inertia, and is the cross-sectional area of the beam. The quantity measures the resistance of the beam against bending loads and is referred to as the bending stiffness.

At the initial time instant the state of the beam is known: Mathematically, the problem is to find a boundary control function that provides the inequality in a required finite with the desired precision . The functions are given. As in many applications, it is assumed here that ; i.e., it is required to suppress the vibrations of the beam.

The initial and boundary data are supposed to be consistent: Then, the set of admissible controls is considered to be

For further analysis it is convenient to use the dimensionless quantities for the coordinate, time, deflection, and load intensity, respectively. Here is the speed of elastic wave propagation in the beam: For convenience of reading, new symbols for these quantities are not introduced. Then, according to the Green’s representation formula, the general solution of (80), (81), and (82) is given by Here [42] Obviously, , so that the residue (83) is well-defined.

Evaluating the expression (88) at and substituting it into the residue (83), by virtue of the triangle and Cauchy-Schwarz inequalities, the following estimate is derived: where

Furthermore, direct integration leads to It is evident from the expression for that when or , then becomes independent of . It is also obvious that the “worst” influence of on occurs when , a consequence which should be expected.

However, computations show that when , the elastic displacements of the beam are of order when the load is active and of order when it vanishes (see Figure 1).

Therefore the required precision must be at least . On the other hand, the residual axial stresses arising in the beam after the load vanishes have larger values and can serve as a controllability criterion. Thus, as a residue we consider the quantity where is the axial stress of the beam evaluated at and is a prescribed threshold value. Thus, evaluates how close the axial stress is to a given threshold at the instant .

For a specific analysis, let us return to dimensional quantities. Consider a beam of length m and of square cross-section with dimensions 0.1 m × 0.1 m; therefore m⁴. Let the beam be made of copper; i.e., N/m², kg/m³. Limit the computations to the case when . Assume that with s. Set s.
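For orientation, a short sketch of the derived quantities entering such a simulation is given below; the copper constants are illustrative values, not necessarily those used for the figures, and the wave speed formula c = sqrt(E/ρ) is an assumption consistent with the nondimensionalization above.

```python
# Illustrative computation of the derived beam quantities (assumed copper
# constants; the exact values behind the figures are not reproduced here).
import numpy as np

E   = 1.1e11   # Young's modulus of copper, N/m^2 (assumed)
rho = 8.9e3    # density of copper, kg/m^3 (assumed)
a   = 0.1      # side of the square cross-section, m

I  = a ** 4 / 12.0     # second moment of area of the square section, ~8.33e-6 m^4
A  = a ** 2            # cross-sectional area, m^2
EI = E * I             # bending stiffness, N*m^2
c  = np.sqrt(E / rho)  # elastic wave speed used in the nondimensionalization, m/s
```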

Note that the maximal value of N/m² (see Figure 2(b)).

In order to reduce this value, the threshold is set to N/m² and the optimal stopping boundary control regime is chosen to ensure the inequality for (93) with the precision . Above, , , , and are free parameters. Here, is constrained by . Then, inequality (97) holds with when Moreover, Figure 3 shows that, besides ensuring (97), the considered control regime reduces the total displacement of the beam by almost a factor of two.

Consider now the switching regime where , , , , and are free parameters. Let the threshold value in this case be N/m². Then, provide (97) with . Note that the total displacement in this case is also reduced by almost a factor of two (see Figure 4).

5.2. Burgers’ Equation

Consider the nonlinear Burgers’ equation where is a positive constant, arising in fluid mechanics, nonlinear acoustics, traffic flow, etc. Using the well-known Hopf-Cole transformation it is reduced to the linear heat equation The controllability of the Burgers’ equation has been studied separately by many authors, e.g., in [43–46]; see also the related references therein.
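A minimal sketch of the inverse Hopf-Cole map is given below, written in the form u = -2ν φ_x/φ used for u_t + u u_x = ν u_xx; the finite-difference gradient and the positivity of φ on the grid are assumptions of the sketch.

```python
# Sketch of recovering the Burgers velocity from a heat-equation solution via
# the Hopf-Cole map u = -2*nu*phi_x/phi (requires phi > 0 on the grid).
import numpy as np

def hopf_cole_inverse(phi, x, nu):
    """phi: samples of the heat solution on the grid x; returns u on the same grid."""
    phi_x = np.gradient(phi, x)
    return -2.0 * nu * phi_x / phi
```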

Our aim is to determine a distributed control regime governing the heat equation that provides, at a given instant, the terminal condition exactly or approximately for the viscous Burgers’ flow (102). Here is a given nonnegative function, and is a given function.

In terms of the residue the problem is to determine admissible controls ensuring at the given instant the equality or the inequality with a given accuracy .

From (102) it becomes obvious that as soon as holds with then (108) holds as well. Similarly, it can be shown that (108) implies (110) as a particular case. Therefore, in the case of exact controllability, (108) and (110) are equivalent. This means that the controls satisfying (110) are resolving for (108).

Assume that . Then, on the basis of the Green’s function method [42], (110) is equivalent to the equality where Therefore, (112) can be expanded into series of orthogonal functions, as was done in Section 3.1 above.
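For reference, a sketch of the ingredients entering such a representation on the whole line is given below: the free-space heat kernel and the free-evolution term, computed by simple quadrature on a truncated interval (the truncation relies on the exponential decay discussed next; the control contribution enters through the same kernel integrated additionally in time and is omitted here for brevity).

```python
# Sketch of the kernel behind (112)-(113): the free-space heat kernel of
# phi_t = nu * phi_xx and the contribution of the initial data to the
# terminal state, computed by quadrature on a truncated interval.
import numpy as np

def heat_kernel(x, xi, t, nu):
    """Fundamental solution G(x - xi, t) of the heat equation on the real line."""
    return np.exp(-(x - xi) ** 2 / (4.0 * nu * t)) / np.sqrt(4.0 * np.pi * nu * t)

def free_evolution(x, T, nu, phi0, xi_max=20.0, nxi=2001):
    """Quadrature for int G(x - xi, T) * phi0(xi) dxi on [-xi_max, xi_max]."""
    xi = np.linspace(-xi_max, xi_max, nxi)
    G = heat_kernel(x[:, None], xi[None, :], T, nu)
    return np.trapz(G * phi0(xi), xi, axis=1)
```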

Consider the initial state for a given . Let and be positive. Then, both and decay exponentially in , meaning that we can consider (112) in , . Expanding and into Fourier series and taking into account the fact that both functions are even, we obtain The resolving system is formed by equating the coefficients of the cosines for corresponding on both sides of the equation above:

As in Section 3.1, truncate system (116) at a finite . In order to derive a consistent system of algebraic equations, consider where are unknown constants to be determined. Substituting into the resolving system, the following system of linear algebraic equations is derived: where

Remark 6. The unknown coefficients can be found efficiently from the linear system (118). Nevertheless, because of the truncation, the control (117) ensures only approximate controllability of the Burgers’ equation.

Now let us consider the case of approximate controllability. Obviously, the equivalence established above for exact controllability does not hold here. Therefore, we need to verify (109) directly.

Consider a flow governed by (102) in a thin infinite layer. Assume that the flow source is located at the section of the layer and generates a harmonic flux which vanishes when . Therefore, the Burgers’ equation must be complemented by the condition Let, in particular, The problem is to find an admissible control that ensures with . Obviously, the desired must satisfy , where satisfies Restricting consideration to , it is obtained that (see Figure 5).

Using the distributed control regime (42), it is possible to achieve approximate null-controllability of the flow for when , , and (see Figure 6). Furthermore, it is evident from Figure 7 that when , , and , the flow is approximately null-controllable at .
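The kind of numerical check behind these figures can be sketched as follows: integrate the controlled heat equation by an explicit finite-difference scheme on a truncated interval, map back through Hopf-Cole, and compare the maximum of |u(·, T)| with the required precision. The control shape, domain size, and all numerical values below are illustrative assumptions, not the data of Figures 6-9.

```python
# Verification sketch (illustrative data): explicit finite differences for
# phi_t = nu*phi_xx + f(x)*q(t) on a truncated interval with frozen boundary
# values, followed by the Hopf-Cole map to check approximate null-controllability.
import numpy as np

def simulate(phi0, f, q, nu, L=10.0, T=2.0, nx=201):
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    nt = int(np.ceil(T / (0.4 * dx ** 2 / nu)))   # explicit-scheme stability bound
    dt = T / nt
    phi = phi0(x).astype(float)
    for n in range(nt):
        lap = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
        phi[1:-1] += dt * (nu * lap + f(x[1:-1]) * q(n * dt))
    return x, phi

def terminal_burgers_max(phi, x, nu):
    u = -2.0 * nu * np.gradient(phi, x) / phi     # Hopf-Cole inverse (phi > 0)
    return np.max(np.abs(u))                      # compare with the precision eps
```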

On the other hand, the same accuracy can be achieved using a single impulsive action where . Indeed, Figure 8 shows that when and , the flow is approximately null-controllable at .

Furthermore, using the switching regime (53), it is possible to achieve approximate null-controllability of the flow for smaller . Specifically, choosing approximate null-controllability is achieved for (see Figure 9).

6. Conclusions

A systematic technique is proposed for heuristically deriving resolving controls for the exact and approximate controllability analysis of systems with nonlinear state constraints. Representing the solution of the state constraints in terms of Green’s function, exact and approximate controllability are reduced to nonlinear integral-type constraints. The exact solution of these constraints is highly complicated, so numerical schemes are usually applied. The proposed technique allows representing some of the resolving controls explicitly and provides nonlinear algebraic constraints on the unknown coefficients. The hierarchy of the heuristic controls includes (i) polynomial, (ii) time-harmonic, (iii) switching or piecewise-continuous, (iv) optimal stopping, and (v) impulsive regimes. The corresponding constraints are derived straightforwardly. The derived regimes show a good correspondence with the -optimal solutions, , of the linear moments problem derived rigorously earlier. The technique is tested on two specific examples: a finite elastic beam bent under an external force with uncertainty and a viscous flow governed by the nonlinear Burgers’ equation. The technique is efficiently applied in both cases. Numerical simulations confirm the theoretical predictions.

New studies on the practical implementation of the technique in other problems of mechanical engineering have been initiated. In forthcoming papers we plan to report some possibilities for determining resolving controls that extremize cost functionals of specific importance in the context of the considered problems.

Data Availability

No data were used to support this study.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

Some specific ideas of the manuscript were raised in intensive discussions with Professor Ara S. Avetisyan (Institute of Mechanics, National Academy of Sciences of Armenia), to whom the author is heartily thankful.