Journal of Control Science and Engineering

Volume 2018, Article ID 3298286, 21 pages

https://doi.org/10.1155/2018/3298286

## Approximate Prediction-Based Control Method for Nonlinear Oscillatory Systems with Applications to Chaotic Systems

^{1}Universidade Estadual de Santa Cruz (UESC), 45662-900 Ilhéus, BA, Brazil
^{2}Lab. J.-L. Lions, UMR 7598 CNRS, Inria, UPMC University Paris 06, Sorbonne Universités, Paris, France
^{3}Fundação Getúlio Vargas, Rio de Janeiro, RJ, Brazil
^{4}Instituto Tecnológico de Aeronáutica (ITA), 12228-900 São José dos Campos, SP, Brazil

Correspondence should be addressed to Thiago P. Chagas; thchagas@gmail.com

Received 22 October 2017; Revised 15 December 2017; Accepted 19 December 2017; Published 1 March 2018

Academic Editor: Sundarapandian Vaidyanathan

Copyright © 2018 Thiago P. Chagas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The approximate Prediction-Based Control method (aPBC) is the continuous-time version of the well-known Prediction-Based Chaos Control method, applied to stabilize periodic orbits of nonlinear dynamical systems. The method estimates future states of the free system response of continuous-time systems in real time, using the solution of an implicit Runge-Kutta method. Several aspects of aPBC are evaluated in the present work; in particular, its robustness to low-precision future-state estimation is demonstrated.

#### 1. Introduction

Oscillatory systems are typical in many problems of engineering, the biological sciences, physics, economics, and other areas [1–7]. In general, oscillations must be damped to reduce their amplitude, or suppressed altogether, in order to avoid damage, reduce costs, and increase precision [1–4]. These oscillations are mainly divided into periodic and aperiodic. Chaotic aperiodic oscillations are related to unpredictability, disorder, and instability, while periodic oscillations are associated with order [8, 9].

Chaotic sets are composed of an infinite number of unstable periodic orbits (UPOs) [10, 11], and the stabilization of one of these orbits leads to periodic oscillation, possibly with reduced amplitude. Different chaos control methods have been developed aiming at stabilizing these orbits with low control effort, exploiting the main characteristics of chaos [9].

One classical chaos control method is the Delayed Feedback Control (DFC) proposed by Pyragas (1992) [12]. This is a state-feedback method whose control signal is computed from the difference between the delayed and the current measured system states. DFC was initially proposed for continuous-time systems and has many applications to discrete-time systems [13]. Nevertheless, DFC has a well-known limitation, proved for discrete-time systems: the odd-number limitation [14–17], which means that DFC cannot stabilize orbits with an odd number of Floquet multipliers larger than 1. For continuous-time systems, this limitation has been questioned in the literature by means of counterexamples [18–20].

Different modifications of DFC have been proposed to overcome the odd-number limitation [13], and one of the most interesting is the Prediction-Based Control (PBC) [21], developed for discrete-time systems. Instead of the delayed state, it uses as reference for the control signal the future state, one period of the target UPO ahead, computed along the trajectories of the free system response. Developments of PBC have so far been proposed only for discrete-time systems; for example, in [22] a method is proposed for tuning the control gain without previous knowledge of the target UPO, leading to a fast convergence rate of trajectories to the stabilized periodic orbit; in [23] it is proved that UPOs can be stabilized by a pulsed control signal, reducing the control effort required for stabilization; and in [24] equilibrium points (or period-1 UPOs) are stabilized in the presence of multiplicative or additive noise.
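To make the discrete-time PBC law concrete, the short sketch below (ours, not taken from [21]; the logistic map, gain value, and all names are illustrative assumptions) stabilizes the fixed point of the chaotic logistic map with the prediction-based signal $u_n = K(f(x_n) - x_n)$; note that the control vanishes once the orbit is reached, since $f(x^*) = x^*$.

```python
# Illustrative sketch: discrete-time Prediction-Based Control (PBC)
# stabilizing the fixed point (period-1 UPO) of the chaotic logistic map.
# The control u_n = K*(f(x_n) - x_n) uses the predicted free-system state
# f(x_n) one period ahead. Parameter and gain values are assumptions.

MU = 3.9          # logistic parameter in the chaotic regime
K = -0.6          # control gain (stabilizing for this fixed point)

def f(x):
    """Free logistic map."""
    return MU * x * (1.0 - x)

def pbc_step(x):
    """One closed-loop step x_{n+1} = f(x_n) + u_n."""
    u = K * (f(x) - x)          # prediction-based control signal
    return f(x) + u, u

x = 0.3
for _ in range(200):
    x, u = pbc_step(x)

x_star = 1.0 - 1.0 / MU         # fixed point of the free map
print(x, x_star, u)             # x converges to x_star, u vanishes
```

The closed-loop multiplier at the fixed point is $(1+K)f'(x^*) - K = -0.16$ for these values, so convergence is fast and the steady-state control effort is zero, the advantage highlighted by the PBC literature.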

The literature provides some results on PBC-like strategies applied to continuous-time systems. In [25] it is proposed to use the system Jacobian matrix at each point of the trajectory, instead of the system future states, for the stabilization of equilibrium points. The results showed that this strategy does not necessarily lead to the stabilization of free system equilibrium points or UPOs, and that the control effort does not vanish in steady state; these are the two main advantages of chaos control as proposed by Ott et al. [26]. In [27], the strategy proposed in [25] is extended using neural networks, also for the stabilization of equilibrium points.

In fact, PBC has a practical limitation for continuous-time systems if it is applied as proposed for discrete-time systems [21]: the need for future state values in real time. In [28], we proposed the approximate Prediction-Based Control (aPBC), a methodology based on the implicit Runge-Kutta method and state estimation, applied to predict future states of the free system in real time from the system model. The authors claim that aPBC is the version of PBC applicable in continuous time because it uses prediction and stabilizes free system UPOs, ideally with vanishing steady-state control effort. On the other hand, predicting future states has some drawbacks: the proposed prediction scheme increases the closed-loop system order and, consequently, the computational power required for numerical integration. Moreover, its application is also subject to mismatch between the prediction model and the real system, and to the precision of the integration method.

Continuing the development and study of aPBC, one of the main questions left open in [28] concerns the robustness of the method to low future state estimation precision. The forced van der Pol (vdP) oscillator (nonautonomous) and the Rössler system (autonomous) are used in the present work for numerical examples of the aPBC robustness and of the trade-off between estimation accuracy and computational cost. We present a methodology and performance indexes that help to find this trade-off and show results evidencing that it is possible to find a lower bound for the precision. Besides that, this work presents a UPO of the forced van der Pol oscillator, with one of its Floquet multipliers larger than 1, that is stabilized by the aPBC but could not be stabilized by DFC with constant control gain following the procedures proposed in [18–20]. The optimization procedure proposed in [29] was also applied to DFC without success. This is evidence that the aPBC maintains the advantage of discrete-time PBC of not being subject to the odd-number limitation.

The paper is organized as follows. The aPBC is reviewed in Section 3, generalizing its original formulation. In Section 4, the aPBC application is presented using the orthogonal collocation method [30, 31] as the implicit Runge-Kutta method. In Section 5, the forced van der Pol (vdP) oscillator and the Rössler system are presented, together with their chaotic behavior. Both systems are used for numerical examples of the aPBC robustness and of the trade-off between estimation accuracy and computational cost in Section 6. The example of a UPO of the forced vdP oscillator that could not be stabilized by DFC but is stabilized by aPBC is shown in Section 6.1.4.

#### 2. Problem Statement

Consider the following continuous-time dynamical system:

$$\dot{x}(t) = f\big(t, x(t)\big) + u(t), \tag{1}$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^n$ is the control input, $t \ge t_0$, $f : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$, and $f$ is a $T$-periodic function with respect to time $t$; that is, by definition,

$$f(t + T, x) = f(t, x), \quad \forall t, \ \forall x. \tag{2}$$

Moreover, we assume the existence of a $T$-periodic solution $x^*(t)$ to the free system (1), which is the system obtained by setting $u(t) = 0$, $\forall t$. In other words,

$$x^*(t + T) = x^*(t), \quad \forall t. \tag{3}$$

We assume that this periodic solution is unstable and can be stabilized by the continuous-time version of the PBC with feedback law defined as

$$u(t) = K\big(x_p(t) - x(t)\big), \tag{4}$$

where $u(t)$ is the control signal, $K$ is the control gain, and $x_p(t)$ is the value at time $t + T$ of the state of (1) with $u(\tau) = 0$, $\tau \in [t, t+T]$, and initial condition $x(t)$. In other words, $x_p(t)$ is the value at time $t + T$ of the state along the trajectory of the free system ($u \equiv 0$) departing from $x(t)$ at time $t$.
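As a minimal illustration of the prediction term (not the aPBC real-time scheme itself, which is developed in Section 3), the sketch below computes $x_p$ by integrating a free system one period ahead with classical RK4; the linear test system and all names are our own assumptions, chosen so the prediction can be checked against the closed-form flow.

```python
# Hedged sketch: the PBC prediction term x_p(t) is the free-system state
# one period T ahead of x(t). Here it is approximated with a fixed-step
# RK4 integrator and checked against the exact flow of xdot = a*x.

import math

def rk4_flow(f, t0, x0, T, steps=200):
    """Approximate the free-system state at t0 + T, starting from x0 at t0."""
    h = T / steps
    t, x = t0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

a, T = -0.7, 2.0
x_now = 1.5
x_pred = rk4_flow(lambda t, x: a * x, 0.0, x_now, T)   # prediction x_p
x_exact = x_now * math.exp(a * T)                      # exact flow e^{aT} x

K = 0.5
u = K * (x_pred - x_now)        # PBC signal u = K (x_p - x), as in (4)
print(x_pred, x_exact, u)
```

Doing this integration off-line is easy; the difficulty addressed by the aPBC is that (4) needs $x_p(t)$ at every instant $t$, in real time.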

For an appropriate control gain $K$, $x^*(t)$ is a stable solution of the closed-loop system

$$\dot{x}(t) = f\big(t, x(t)\big) + K\big(x_p(t) - x(t)\big), \tag{5}$$

and the control signal ideally verifies

$$u(t) = 0 \quad \text{whenever } x(t) = x^*(t). \tag{6}$$

The general formulation of the continuous-time PBC method can be written as a partial differential equation: the solution of system (5) with control signal (4) is the solution of the PDE (7a) and (7b). Consider

$$\dot{x}(t) = f\big(t, x(t)\big) + K\big(y(t, T) - x(t)\big), \tag{7a}$$

$$\frac{\partial y(t, s)}{\partial s} = f\big(t + s, y(t, s)\big), \quad s \in [0, T]. \tag{7b}$$

The function $y(t, s)$ is such that $y(t, 0) = x(t)$ and $y(t, T) = x_p(t)$.

Condition (6) ensures zero control effort when the trajectory is on the unstable periodic solution of the free system. However, applying (4) to real systems requires the exact prediction and real-time computation of the future states $x_p(t)$. Due to these practical constraints, we are alternatively also interested in stabilizing an orbit *close to* the UPO of (1), resulting in a low control effort. This task is solved using the aPBC, reviewed in the sequel.

#### 3. Principles of the Approximate Prediction-Based Control Method (aPBC)

Computing the future state $x_p(t)$ of the prediction term requires solving the free system ODE at each time $t$, from time $t$ to $t + T$. This cannot be done exactly in real time, and overcoming this difficulty is the basis of the aPBC. The task is divided into two steps: the first consists in approximating the solution of (7b) by an implicit Runge-Kutta (R-K) ODE integration method, and the second in expressing the solution of the R-K method as a state observer that can be integrated by any explicit method in real time.

##### 3.1. Approximation of the Prediction Term: 1st Step

The first step consists in approximating the solution of (7b) by an implicit R-K ODE integration method [32], in order to estimate the prediction term, that is, the terminal value

$$x_p(t) = y(t, T) = x(t) + \int_0^T f\big(t + s, y(t, s)\big)\, ds. \tag{9}$$

In order to estimate $x_p(t)$ given by (9), the state transition map of the free system is first approximated by the operator $\hat{x}_p(t)$ defined by

$$\hat{x}_p(t) = x(t) + T \sum_{j=1}^{N} b_j\, k_j(t), \tag{10a}$$

$$k_i(t) = f\Big(T_i,\; x(t) + T \sum_{j=1}^{N} a_{ij}\, k_j(t)\Big), \quad i = 1, \ldots, N, \tag{10b}$$

where $b_j$ and $a_{ij}$ are weights chosen according to the implicit method used [32]. The approximation of $y(t, s)$, $s \in [0, T]$, is calculated at the discretization points $T_i = t + c_i T$, $i = 1, \ldots, N$. $N$ shall be chosen such that the sums in (10a) and (10b) lead to an adequate approximation of the integral in (9) for the given $T$.

For simplicity, (10b) is written in the vector form

$$k(t) = F\big(t, x(t), k(t)\big), \tag{11}$$

where $k(t) = \big[k_1(t)^\top \cdots k_N(t)^\top\big]^\top \in \mathbb{R}^{nN}$ and $F$ is defined by stacking the $N$ right-hand sides of (10b).

In order to compute $\hat{x}_p(t)$ through (10a), it is necessary to solve the algebraic system (11) with unknown $k(t)$.
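A hedged sketch of what solving this algebraic system involves is given below: a single 2-stage Gauss implicit R-K step for a scalar linear test equation, with the stage system resolved by plain fixed-point iteration (a Newton solve would typically be used in practice). The Butcher coefficients are the standard 2-stage Gauss ones; everything else is an illustrative assumption, not the paper's implementation.

```python
# Sketch of the 1st step: a 2-stage Gauss implicit R-K step approximating
# the free-system flow over a window of length h. The stage derivatives
# solve the implicit algebraic system of (10b); here a fixed-point
# iteration suffices because h is small.

import math

S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0], [0.25 + S3 / 6.0, 0.25]]   # Butcher matrix
B = [0.5, 0.5]                                           # weights b_j
C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]                     # Gauss nodes c_i

def gauss2_step(f, t, x, h, iters=50):
    """Advance xdot = f(t, x) from t to t + h with the 2-stage Gauss method."""
    K = [f(t, x), f(t, x)]          # stage derivatives, initial guess
    for _ in range(iters):          # fixed-point iteration on the stages
        K = [f(t + C[i] * h, x + h * (A[i][0] * K[0] + A[i][1] * K[1]))
             for i in range(2)]
    return x + h * (B[0] * K[0] + B[1] * K[1])    # terminal update

a, h, x0 = -1.0, 0.1, 1.0
x1 = gauss2_step(lambda t, x: a * x, 0.0, x0, h)
print(x1, math.exp(a * h))   # the method is order-4 accurate
```

The fixed-point iteration converges here because $h$ times the Lipschitz constant of $f$ is small; for the full prediction window of length $T$ this is exactly the algebraic system that the aPBC avoids solving directly, via the observer of the next subsection.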

Writing $u(t) = K\big(\hat{x}_p(t) - x(t)\big)$ with $\hat{x}_p(t)$ given by (10a), and closing (5) by this control, yield the differential algebraic equation (DAE)

$$\dot{x}(t) = f\big(t, x(t)\big) + K\,T \sum_{j=1}^{N} b_j\, k_j(t), \tag{16a}$$

$$k(t) = F\big(t, x(t), k(t)\big). \tag{16b}$$

The real-time solution of the DAE (16a) and (16b) requires the computation of its algebraic term at each time $t$. We therefore introduce an observer equation in the sequel, to transform the controlled system into a system of ODEs.

##### 3.2. Approximation of the Prediction Term: 2nd Step

We now approximate (16b) by solving the $nN$-dimensional ODE (17), whose solution $\hat{k}(t)$ is an estimation of $k(t)$:

$$\frac{d}{dt}\Big[\hat{k}(t) - F\big(t, x(t), \hat{k}(t)\big)\Big] = -\eta\,\Big[\hat{k}(t) - F\big(t, x(t), \hat{k}(t)\big)\Big]. \tag{17}$$

The initial value $\hat{k}(t_0)$ is intended to be (precisely) computed off-line to provide good tracking quality for $t \ge t_0$.

The scalar gain $\eta$ is chosen positive in order that the solution $\hat{k}(t)$ of (17) tends asymptotically towards the solution of (16b) when $t \to \infty$, and typically in such a way that the estimator dynamics are faster than the controlled system dynamics. If the evolution of $\hat{k}(t)$ may indeed be chosen in order to fulfill (17), convergence does occur.
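The observer principle can be illustrated on a scalar toy problem (a generic tracker of our own, not the paper's equations): an ODE with a positive gain drives the residual of an algebraic constraint exponentially to zero, so its state tracks the constraint's time-varying root in real time.

```python
# Toy illustration of the 2nd-step idea: a fast stable ODE whose state
# tracks the root z*(t) of an algebraic constraint g(z, t) = 0, here
# g(z, t) = z^3 + z - t. Choosing zdot so that d/dt g = -eta*g keeps the
# residual near zero; names and gain value are illustrative assumptions.

ETA = 50.0                 # observer gain (larger => faster tracking)

def g(z, t):
    return z**3 + z - t

def zdot(z, t):
    dg_dz = 3.0 * z**2 + 1.0      # always positive, hence invertible
    dg_dt = -1.0
    return (-ETA * g(z, t) - dg_dt) / dg_dz

# Forward-Euler integration from t = 0 (where the exact root is z = 0)
z, t, h = 0.0, 0.0, 1e-4
while t < 2.0 - 1e-12:
    z += h * zdot(z, t)
    t += h

# At t = 2 the exact root of z^3 + z = 2 is z = 1
print(z)
```

The same mechanism, applied blockwise to the residual of (16b), is what turns the DAE into the system of ODEs used by the aPBC; invertibility of the constraint Jacobian plays the role of `dg_dz > 0` here.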

From (17), we obtain

$$\Big[I_{nN} - \frac{\partial F}{\partial k}\Big]\,\dot{\hat{k}}(t) = \frac{\partial F}{\partial t} + \frac{\partial F}{\partial x}\,\dot{x}(t) - \eta\,\Big[\hat{k}(t) - F\big(t, x(t), \hat{k}(t)\big)\Big], \tag{18}$$

where $\partial F / \partial k$ is the partial derivative of $F$ with respect to its third variable. $I_{nN}$ is the identity matrix and $\mathbf{1}_N$ is the column vector of dimension $N$ with all elements equal to 1; $\otimes$ is the Kronecker product, used to express the block structure of these partial derivatives.

By determining $\dot{\hat{k}}(t)$ from (18) we get

$$\dot{\hat{k}}(t) = \Big[I_{nN} - \frac{\partial F}{\partial k}\Big]^{-1} \Big[\frac{\partial F}{\partial t} + \frac{\partial F}{\partial x}\,\dot{x}(t) - \eta\,\big(\hat{k}(t) - F\big)\Big]. \tag{19}$$

Clearly, solving (19) in order to obtain (17) requires the invertibility of the first factor.

We then define the approximate prediction term

$$\hat{x}_p(t) = x(t) + T \sum_{j=1}^{N} b_j\, \hat{k}_j(t). \tag{20}$$

From (16a), (16b), (17), and (20), and denoting $\hat{k}_j(t)$ as the components of $\hat{k}(t)$, the control law $u(t) = K\big(\hat{x}_p(t) - x(t)\big)$ yields the following closed-loop system of ODEs:

$$\begin{aligned} \dot{x}(t) &= f\big(t, x(t)\big) + K\,T \sum_{j=1}^{N} b_j\, \hat{k}_j(t), \\ \dot{\hat{k}}(t) &= \Big[I_{nN} - \frac{\partial F}{\partial k}\Big]^{-1} \Big[\frac{\partial F}{\partial t} + \frac{\partial F}{\partial x}\,\dot{x}(t) - \eta\,\big(\hat{k}(t) - F\big)\Big]. \end{aligned} \tag{21}$$

The solution of (21) is an approximation of the solution of the PDE given in (7a) and (7b), and therefore only an approximate stabilization of the initial orbit is expected, or rather the stabilization of an orbit close to the initial one.

The ODE (21) has two types of state components, corresponding to the controlled system dynamics and to the dynamical state controller. Since $\hat{k}(t)$ stands for a set of unmeasured state variable components, (17) can be interpreted as a state observer. Notice that this estimator introduces a dynamical feedback whose state has dimension equal to the number of points of the adopted R-K method multiplied by the dimension of the initial system to be controlled ($nN$).

#### 4. aPBC Application Issues

Two aspects of the aPBC shall be considered for application. One is the choice of an implicit R-K method to solve (10a) and (10b), and the other is the tuning of the control gain for stabilization. Here, we use the orthogonal collocation method with Lagrange polynomials and a constant control gain computed through the closed-loop monodromy matrix, as described in the sequel.

##### 4.1. Orthogonal Collocation Method with Lagrange Polynomials

The implicit R-K method given in (10a) and (10b) is a general formulation used for the integration of differential equations, whose application depends on the choice of a specific implementation. Herein, following the results in [28], we apply orthogonal collocation [30–32] as the implicit R-K method.

Collocation methods amount to approximating the prediction term $x_p(t)$ by $P(t + T)$, where the collocation polynomial $P$ is defined on the whole interval $[t, t + T]$ by

$$P(t + sT) = x(t) + T \sum_{j=1}^{N} \hat{k}_j(t) \int_0^s \ell_j(\tau)\, d\tau, \quad s \in [0, 1], \tag{22}$$

where $P(t + sT)$ is a column vector. The functions $\ell_j$ are the Lagrange polynomials

$$\ell_j(\tau) = \prod_{m \ne j} \frac{\tau - c_m}{c_j - c_m}, \tag{23}$$

attached to the choice of the points $c_1, \ldots, c_N$.

The link with the implicit R-K method is as follows.

Theorem 1 (adapted from a theorem in [32]). *The collocation method (22) with Lagrange polynomials is equivalent to the $N$-stage implicit Runge-Kutta method (10a) and (10b) with coefficients*

$$a_{ij} = \int_0^{c_i} \ell_j(\tau)\, d\tau, \qquad b_j = \int_0^1 \ell_j(\tau)\, d\tau. \tag{24}$$

It is possible to choose the collocation points in order to fulfill the orthogonality relations

$$\int_0^1 \tau^{q-1} \prod_{j=1}^{N} (\tau - c_j)\, d\tau = 0, \quad q = 1, \ldots, N. \tag{25}$$

Note that $\ell_j(c_i) = \delta_{ij}$ (with $\delta_{ij}$ the Kronecker delta symbol). A characteristic of the orthogonal collocation method is that each $P(t + c_i T)$ is an approximation of the state, which differs from the $k_i(t)$ in (10a) and (10b), which are approximations of the derivatives.
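A small numerical check of these collocation ingredients (illustrative, not from the paper) is given below: the Lagrange polynomials of (23) at the two Gauss points satisfy the Kronecker-delta property at the collocation points, and the Theorem 1 weights $b_j = \int_0^1 \ell_j$ come out as the Gauss weights $1/2, 1/2$.

```python
# Sketch: Lagrange basis at the 2 Gauss (shifted-Legendre) collocation
# points, and the induced implicit R-K weights b_j = integral of l_j on
# [0, 1]. The trapezoid quadrature is exact here since each l_j is linear.

import math

C = [0.5 - math.sqrt(3.0) / 6.0, 0.5 + math.sqrt(3.0) / 6.0]  # Gauss nodes

def lagrange(j, s):
    """Lagrange polynomial attached to collocation point C[j]."""
    out = 1.0
    for m, cm in enumerate(C):
        if m != j:
            out *= (s - cm) / (C[j] - cm)
    return out

# Kronecker-delta property l_j(c_i) = delta_ij
vals = [[lagrange(j, C[i]) for j in range(2)] for i in range(2)]

def integral01(func, n=2000):
    """Composite trapezoid rule on [0, 1]."""
    s = 0.5 * (func(0.0) + func(1.0))
    s += sum(func(kk / n) for kk in range(1, n))
    return s / n

b = [integral01(lambda s, j=j: lagrange(j, s)) for j in range(2)]
print(vals, b)
```

The same recipe (roots of a shifted orthogonal polynomial, then integrals of the Lagrange basis) produces the $a_{ij}$ and $b_j$ for any number of stages $N$.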

Equation (21) is obtained by applying the substitutions provided by (22) and Theorem 1, with the interpolating times obtained by solving (25).

##### 4.2. Closed-Loop Monodromy Matrix

The aPBC was developed for a general control gain $K(t, x)$ in system (21). However, methods to choose a time- and state-dependent control gain have not yet been studied. Moreover, the control gain is tuned for the PBC method and applied to the aPBC, which is justified by the precision of the estimation of the future states [28]. For simplicity, a constant control gain $K$ is applied, whose tuning depends upon the ability to compute the closed-loop monodromy matrix of $x^*(t)$. The computation of this matrix requires the integration of the closed-loop system and of its variational equation (26) along a trajectory in the vicinity of $x^*(t)$ (linearised dynamics around the periodic orbit) [33, Appendix B]. To integrate this trajectory, the initial condition is chosen close to $x^*(0)$. Integrating (26) over a period yields the corresponding closed-loop monodromy matrix:

$$\dot{\Psi}(t) = \Big[\frac{\partial f}{\partial x}\big(t, x(t)\big) + K\Big(\frac{\partial x_p}{\partial x}(t) - I_n\Big)\Big]\,\Psi(t), \qquad \Psi(0) = I_n, \tag{26}$$

$$\frac{\partial x_p}{\partial x}(t) = \Phi(t + T, t), \tag{27}$$

where $\Phi(t + T, t)$ is the free system monodromy matrix.

Using (26) and (27), we compute the closed-loop monodromy matrix for a given gain $K$. The Floquet multipliers are then computed to measure the local stability of the controlled orbit for the chosen $K$.

In practice, we fix $K$ and compute the monodromy matrix by integrating (26) with an explicit R-K method; the free system monodromy matrix in (27) is computed by integrating the free system over a period at each step of the integration of (26). After that, we obtain the corresponding Floquet multipliers of the closed-loop system.
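The core of this procedure, integrating a variational equation over one period and reading the Floquet multipliers off the resulting matrix, can be sketched on a system whose answer is known in closed form (our own check, not the paper's systems): for the harmonic oscillator the monodromy matrix over one period is the identity and both multipliers equal 1.

```python
# Hedged sketch: compute a monodromy matrix by integrating the variational
# equation Psi' = J Psi, Psi(0) = I, over one period with classical RK4.
# For the harmonic oscillator x'' = -x (period 2*pi, constant Jacobian J)
# the monodromy matrix is the identity.

import math

J = [[0.0, 1.0], [-1.0, 0.0]]            # Jacobian of the harmonic oscillator

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def axpy(X, Y, h):                        # elementwise X + h*Y
    return [[X[i][j] + h * Y[i][j] for j in range(2)] for i in range(2)]

def rk4_step(Psi, h):                     # one RK4 step of Psi' = J Psi
    k1 = matmul(J, Psi)
    k2 = matmul(J, axpy(Psi, k1, h / 2))
    k3 = matmul(J, axpy(Psi, k2, h / 2))
    k4 = matmul(J, axpy(Psi, k3, h))
    return [[Psi[i][j] + h / 6 * (k1[i][j] + 2*k2[i][j] + 2*k3[i][j] + k4[i][j])
             for j in range(2)] for i in range(2)]

steps = 2000
h = 2.0 * math.pi / steps
Psi = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(steps):
    Psi = rk4_step(Psi, h)
print(Psi)   # close to the identity; Floquet multipliers are its eigenvalues
```

For the aPBC case the Jacobian is time-varying along the controlled trajectory, and each evaluation additionally requires the free-system monodromy term of (27), which is what makes the gain-tuning computation expensive.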

#### 5. Case Studies

Two continuous-time systems are used as case studies for the aPBC: the forced van der Pol oscillator and the Rössler system. In this section we provide a brief analysis of both, evidencing their chaotic behavior through bifurcation diagrams and chaotic attractors.

##### 5.1. Forced van der Pol Oscillator

The forced van der Pol (vdP) oscillator is described by the nonlinear state space model (28):

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = \mu\,(1 - x_1^2)\,x_2 - x_1 + A \cos(\omega t), \tag{28}$$

where $x = [x_1\ x_2]^\top$ and $\mu$, $A$, $\omega$ are constant parameters. In the present work we fix the parameter values and provide bifurcation diagrams by the attractor Poincaré map (Figure 1(a)) and by the largest nonzero Lyapunov exponent (Figure 1(b)) to evidence the route to chaos. The Poincaré section is defined, as usual for nonautonomous systems, using time; in this case the section is taken at $t \bmod T_f = 0$, where $T_f = 2\pi/\omega$ is the forcing period and $\bmod$ is the modulo operation, that is, the remainder of the division of $t$ by $T_f$.
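A simulation of this setup can be sketched as below (our own illustration: the parameter values are assumptions and are not the ones used in the paper, and are not necessarily in the chaotic regime); the Poincaré samples are taken at multiples of the forcing period, exactly as the time-based section above prescribes.

```python
# Illustrative simulation of the forced van der Pol oscillator with a
# time-based Poincare section at t mod TF = 0. Parameters MU, AMP, OMEGA
# are assumed values for demonstration only.

import math

MU, AMP, OMEGA = 1.0, 0.5, 1.1         # illustrative parameters
TF = 2.0 * math.pi / OMEGA             # forcing period

def vdp(t, x):
    """Right-hand side of the forced vdP state space model."""
    return [x[1],
            MU * (1.0 - x[0]**2) * x[1] - x[0] + AMP * math.cos(OMEGA * t)]

def rk4_step(t, x, h):
    k1 = vdp(t, x)
    k2 = vdp(t + h/2, [x[i] + h/2 * k1[i] for i in range(2)])
    k3 = vdp(t + h/2, [x[i] + h/2 * k2[i] for i in range(2)])
    k4 = vdp(t + h, [x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

steps_per_period = 200
h = TF / steps_per_period
t, x = 0.0, [0.1, 0.0]
section = []                            # Poincare samples at t mod TF = 0
for _ in range(100):                    # integrate 100 forcing periods
    for _ in range(steps_per_period):
        x = rk4_step(t, x, h)
        t += h
    section.append(tuple(x))
print(section[-5:])
```

Plotting `section` for a sweep of one parameter produces the bifurcation diagram of Figure 1(a); the trajectory remains bounded near the vdP limit-cycle amplitude for these assumed values.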