Probability-Weighted Optimal Control for Nonlinear Stochastic Vibrating Systems with Random Time Delay
A probability-weighted optimal control strategy for nonlinear stochastic vibrating systems with random time delay is proposed. First, by modeling the random delay as a finite-state Markov process, the optimal control problem is converted into one for a Markov jump system with a finite number of modes. Then, based on the limiting averaging principle, the optimal control force is approximately expressed as the probability-weighted sum of the control forces associated with the different modes of the system. The control force for each mode is then readily obtained by using the stochastic averaging method and the dynamical programming principle. To illustrate the effectiveness of the proposed control, the stochastic optimal control of a two-degree-of-freedom nonlinear stochastic system with random time delay is worked out as an example.
1. Introduction
In many complicated control systems, such as manufacturing plants, vehicles, aircraft, and spacecraft, a communication network is used to gather sensor data and send control signals. Time delays, which are usually random, occur while exchanging data among these devices [1–3]. Time delay, particularly random time delay, can dramatically degrade the performance of a control system and even destabilize it. Therefore, time delay should be taken into account when designing a control strategy.
The control problem of linear or nonlinear systems with constant time delay has been examined in the literature [4–7]. Far less is known about systems with time-varying delay [8, 9], especially random time delay. A key difficulty is how to model the random time delay. A simple method is to regard the delay time as a constant. However, such a constant must cover the worst case and is therefore much longer than the actual delay most of the time, which usually results in poor performance. The randomness of the delay time was taken into account by Nilsson et al. To keep the model simple for analysis, the current delay was assumed to be independent of previous delays. In a real system, however, the current delay is usually correlated with the last one. Thus, a more reasonable way is to model the random delay time as a Markov jump process. Based on this model, the finite-horizon optimal control of linear systems with randomly varying time delay has been studied. Zhang et al. investigated the feedback stabilization of linear systems with random time delay. Wu et al. analyzed the stability of stochastic linear systems with time-varying delay. Wang et al. investigated the problem of robust fault detection for networked Markov jump systems with random time delay. The problem of robust H∞ control for a class of uncertain stochastic systems with random delay against actuator failures was studied by Sakthivel et al. Huan et al. investigated the dynamics of nonlinear stochastic systems with random time delay. These previous works focused mainly on linear systems; far less is known about nonlinear systems.
In this paper, we present a probability-weighted optimal control strategy for nonlinear stochastic systems with random time delay. The organization of this paper is as follows: in Section 2, such a system is converted into a Markov jump system by modeling the random delay as a Markov process. Section 3 converts the system into one with the time delay as a parameter. The probability-weighted optimal control law is derived in Section 4. In Section 5, the application and effectiveness of the proposed procedure are demonstrated by an example. A summary of findings is given in Section 6.
2. Formulation of Problem
2.1. Modeling of Random Delay
For many complex systems, e.g., networked control systems, the random delay time usually has a finite number of levels due to different network traffic conditions, and the delay may switch randomly from one level to another with time. Usually, the switching between different delay levels follows a Markov chain. The random time delay $\tau(t)$ can then be modeled in the following Markov jump form:
$$\tau(t) = \tau_{s(t)},$$
where $s(t)$ is a finite-state Markov process indicating the mode of the time delay. $s(t)$ takes discrete values from a finite set $\{1, 2, \ldots, l\}$ with the following mode transition probability [18, 19]:
$$P\{s(t+\Delta)=j \mid s(t)=i\} =
\begin{cases}
\lambda_{ij}\Delta + o(\Delta), & i \neq j,\\
1 + \lambda_{ii}\Delta + o(\Delta), & i = j,
\end{cases}$$
where $\Delta$ is a sufficiently small time interval; $i, j \in \{1, 2, \ldots, l\}$; $\lambda_{ij}\Delta + o(\Delta)$ denotes the transition probability from mode $i$ to mode $j$; $\lambda_{ij} \geq 0$ ($i \neq j$) is the transition rate; and $\lambda_{ii} = -\sum_{j \neq i} \lambda_{ij}$.
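To make the delay model concrete, the following sketch simulates such a finite-state Markov jump delay process by drawing exponential holding times and jump targets from the transition rates. The three-mode rate matrix and the delay levels are hypothetical illustrative values, not parameters from this paper.

```python
import numpy as np

# Hypothetical transition-rate matrix for a three-mode delay process s(t):
# entry [i, j] (i != j) is the rate of jumping from mode i to mode j;
# each row sums to zero, as required for a generator matrix.
LAMBDA = np.array([[-0.4,  0.3,  0.1],
                   [ 0.2, -0.5,  0.3],
                   [ 0.1,  0.2, -0.3]])
TAU = np.array([0.01, 0.05, 0.10])  # illustrative delay levels (seconds)

def simulate_delay_modes(lam, t_end, s0=0, rng=None):
    """Gillespie-style simulation of the Markov jump process s(t)."""
    rng = np.random.default_rng(rng)
    t, s = 0.0, s0
    times, modes = [0.0], [s0]
    while t < t_end:
        rate = -lam[s, s]                       # total exit rate from current mode
        t += rng.exponential(1.0 / rate)        # exponential holding time
        probs = lam[s].clip(min=0.0) / rate     # jump probabilities to other modes
        s = int(rng.choice(len(lam), p=probs))  # pick the next mode
        times.append(t)
        modes.append(s)
    return np.array(times), np.array(modes)

times, modes = simulate_delay_modes(LAMBDA, t_end=100.0, rng=1)
delays = TAU[modes]  # realized piecewise-constant delay trajectory
```

The holding time in each mode is exponential with rate equal to the negative diagonal entry, which is exactly the small-interval transition probability stated above, accumulated over time.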
2.2. Nonlinear Stochastic System with Random Delay
Consider a multi-degree-of-freedom (MDOF) nonlinear stochastic system with random time delay, whose motion equation is of the following form: here, the terms denote, respectively, the nonlinear restoring forces; a small parameter; the coupled nonlinear damping; the coefficients of the random excitations, which are Gaussian white noises in the sense of Stratonovich with zero mean and given correlation functions; and the feedback control forces with random time delay. By using Equation (1), system (3) can be rewritten in the following Markov jump form:
Note that by modeling the random delay time as a finite-state Markov process, system (3) has been formulated as a Markov jump system with a finite number of modes. The optimal feedback control problem of system (3) can then be studied in the framework of Markov jump systems.
The purpose of the present study is to derive an optimal feedback control law that minimizes the response of system (3) or (4). For control over a semi-infinite time interval, the performance index can be expressed as the time average of a cost function over a terminal time tending to infinity:
3. Converted System with Time Delay as a Parameter
For the case of small time delay, it has been proved that the following approximate expressions hold:
$$Q_i(t-\tau) \approx Q_i(t)\cos(\bar{\omega}_i\tau) - \frac{\dot{Q}_i(t)}{\bar{\omega}_i}\sin(\bar{\omega}_i\tau), \qquad \dot{Q}_i(t-\tau) \approx \dot{Q}_i(t)\cos(\bar{\omega}_i\tau) + \bar{\omega}_i Q_i(t)\sin(\bar{\omega}_i\tau),$$
where $\bar{\omega}_i$ is the averaged frequency, i.e., the average of the instantaneous frequency $\omega_i(t)$ with respect to time over one quasi-period.
According to expression (7), the time-delayed control forces can be approximately expressed in terms of the current state variables with the time delay as a parameter.
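A quick numerical check of this small-delay approximation: for a purely harmonic signal $x(t)=A\cos(\omega t+\varphi)$, expressing the delayed displacement and velocity through the current state via $x(t-\tau)\approx x\cos\omega\tau-(\dot x/\omega)\sin\omega\tau$ and $\dot x(t-\tau)\approx \dot x\cos\omega\tau+\omega x\sin\omega\tau$ reproduces the exact delayed values. The sketch below (with arbitrary parameter values) verifies this:

```python
import numpy as np

def delayed_state_approx(x, xdot, w, tau):
    """Small-delay expressions: state at t - tau from the state at t."""
    c, s = np.cos(w * tau), np.sin(w * tau)
    return x * c - (xdot / w) * s, xdot * c + w * x * s

# Arbitrary harmonic test signal x(t) = A cos(w t + phi)
w, tau, A, phi = 1.7, 0.05, 2.0, 0.3
t = np.linspace(0.0, 10.0, 200)
x, xdot = A * np.cos(w * t + phi), -A * w * np.sin(w * t + phi)

x_d, xdot_d = delayed_state_approx(x, xdot, w, tau)
x_exact = A * np.cos(w * (t - tau) + phi)          # exact delayed displacement
xdot_exact = -A * w * np.sin(w * (t - tau) + phi)  # exact delayed velocity
```

For a harmonic signal the relations are exact; for a weakly perturbed quasi-periodic response they hold to first order in the perturbation, which is what the averaging argument requires.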
Then, Equation (4) becomes
4. Probability-Weighted Optimal Control Law
According to the limiting averaging principle [20, 21], the solution of Equation (9) converges in probability, as the small parameter tends to zero, to the solution of probability-weighted averaged equations in which the probability-weighted control force is
$$\bar{u}_i = \sum_{s=1}^{l} P_s\, u_i^{(s)},$$
where $u_i^{(s)}$ is the control force when $s(t)$ is fixed at mode $s$, and $P_s$ is the stationary probability of mode $s$, which satisfies
$$\sum_{k=1}^{l} \lambda_{ks} P_k = 0, \qquad s = 1, 2, \ldots, l,$$
with $\lambda_{ks}$ the transition rates given in Equation (2). Combined with the normalizing condition $\sum_{s=1}^{l} P_s = 1$, the stationary probability distribution can be calculated.
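In matrix form, the stationary probabilities solve the rate-balance equations together with the normalization condition, where the matrix of transition rates plays the role of a generator. A minimal sketch, using a hypothetical three-mode rate matrix:

```python
import numpy as np

def stationary_distribution(lam):
    """Solve p @ lam = 0 subject to sum(p) = 1 for a generator matrix lam."""
    n = lam.shape[0]
    A = np.vstack([lam.T, np.ones(n)])  # rate balance plus normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Illustrative (hypothetical) three-mode transition-rate matrix
LAMBDA = np.array([[-0.4,  0.3,  0.1],
                   [ 0.2, -0.5,  0.3],
                   [ 0.1,  0.2, -0.3]])
p = stationary_distribution(LAMBDA)
```

The stacked system is overdetermined but consistent for an irreducible rate matrix, so the least-squares solve returns the exact stationary distribution.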
Owing to the relationship in Equation (11), the optimal control force for a fixed mode should be determined first. Letting the Markov jump process be arbitrarily fixed at one mode, Equation (9) can be rewritten in the fixed-mode quasi-Hamiltonian form. The Hamiltonian system associated with Equation (13) is fully integrable, and its Hamiltonian is expressed in terms of the first integrals.
According to the stochastic averaging method for quasi-integrable Hamiltonian systems, the partially averaged Itô equations associated with system (13) are obtained, with averaged drift and diffusion coefficients given by
Accordingly, the performance index in Equation (5) becomes
Based on the stochastic dynamical programming principle, the following dynamical programming equation (DPE) can be established for the averaged system (15) and the performance index (17), in which the value function and the optimal average cost are to be determined. The optimal control force can then be obtained by minimizing the right-hand side of Equation (18), i.e.,
Suppose that the value function has the form given in Equation (20), which involves a positive-definite symmetric matrix. Substituting Equation (20) into Equation (19) and exchanging the order of differentiation and averaging, the optimal control force for a fixed mode of system (13) can be obtained, with an as-yet-undetermined coefficient. Inserting this control force into Equation (18) in place of the control yields the final DPE, from which the undetermined coefficient is found.
Note that the optimal control force in Equation (21) is a function of the current system states. In system (4), however, the delayed states are the ones observed. Thus, the optimal control force should be expressed as a function of the delayed states by using the approximate relationship in Equation (7):
It is seen that the optimal control force is the probability-weighted sum of the control forces associated with the different modes of the system, weighted by the stationary probabilities. The obtained optimal control force is continuous and independent of the Markov jump parameter, and thus can be easily executed by actuators.
Inserting the control force in Equation (23) into system (15) in place of the unweighted control, the optimally controlled system is obtained. The response of the optimally controlled system can be predicted by solving the associated Fokker–Planck–Kolmogorov (FPK) equation or by Monte Carlo simulation applied directly to the original system (4). The control effectiveness and control efficiency can then be evaluated (see the following example, i.e., Equations (32) and (33)).
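As a schematic illustration of the Monte Carlo route, the sketch below integrates a single-DOF Duffing-type oscillator under Gaussian white noise with a probability-weighted velocity-feedback control, using the Euler–Maruyama scheme. The oscillator, the per-mode gains, and the mode probabilities are hypothetical stand-ins, not the paper's optimal control law:

```python
import numpy as np

def simulate(p, gains, D=0.02, dt=1e-3, n_steps=100_000, seed=0):
    """Euler-Maruyama integration of a Duffing oscillator
    x'' + 0.05 x' + x + 0.5 x^3 = u + sqrt(2 D) W(t),
    with probability-weighted control u = -(sum_l p_l g_l) x'."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    weighted_gain = sum(pl * gl for pl, gl in zip(p, gains))
    for k in range(n_steps):
        u = -weighted_gain * v              # weighted feedback force
        a = -x - 0.5 * x**3 - 0.05 * v + u  # total drift acceleration
        x += v * dt
        v += a * dt + np.sqrt(2.0 * D) * rng.normal(0.0, np.sqrt(dt))
        xs[k] = x
    return xs

p = [0.5, 0.3, 0.2]                    # assumed stationary mode probabilities
xs_unc = simulate(p, [0.0, 0.0, 0.0])  # uncontrolled response
xs_con = simulate(p, [0.8, 0.6, 0.4])  # controlled response
```

With the same noise seed, the root-mean-square displacement of the controlled run is markedly smaller than that of the uncontrolled one, mirroring the effectiveness measure used in the example below.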
5. Numerical Example
Consider two oscillators coupled by linear and polynomial-type nonlinear damping, subject to external random excitations and randomly time-delayed feedback control. The motion equations of the system are of the following form, where the excitations are uncorrelated Gaussian white noises with given intensities and the feedback control forces carry random time delay.
A three-level time delay is considered here, which means the delay takes values from a three-element set: a small delay for low network load, a medium delay for medium network load, and a large delay for high network load. System (24) is then converted into a Markov jump system of form (4) with three modes. The transition matrix of the Markov process is prescribed, and the stationary probabilities of the three modes can be calculated from Equation (12).
By using the approximation in Equation (7) and applying the stochastic averaging method, the partially averaged Itô stochastic differential equations for a fixed mode can be obtained in the form of Equation (15), with drift and diffusion coefficients
Based on the dynamical programming principle, the DPE is derived in the form of Equation (18), from which the optimal control force for a fixed mode can be obtained in the form of Equation (21), i.e.,
To solve for the value function, it is assumed to have the following polynomial form:
The corresponding quantity in Equation (20) is then specified by
To evaluate the proposed control strategy, the control effectiveness and control efficiency are proposed:
They are, respectively, the relative reduction in the root-mean-square displacement and the ratio of the control effectiveness to the normalized root-mean-square control force. Obviously, higher values of both quantities indicate a better control strategy.
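Those verbal definitions can be sketched directly; the normalization of the control force in the efficiency measure is an assumption here (the paper's Equations (32) and (33) fix it precisely):

```python
import numpy as np

def rms(a):
    """Root-mean-square value of a sampled signal."""
    return np.sqrt(np.mean(np.asarray(a, dtype=float) ** 2))

def control_effectiveness(x_unc, x_con):
    """K: relative reduction in RMS displacement due to control."""
    return (rms(x_unc) - rms(x_con)) / rms(x_unc)

def control_efficiency(x_unc, x_con, u, norm):
    """mu: effectiveness divided by the RMS control force normalized
    by `norm` (the normalizing quantity is an assumption here)."""
    return control_effectiveness(x_unc, x_con) / (rms(u) / norm)
```

A perfect control with vanishing force would drive the effectiveness toward 1 and the efficiency toward infinity, so both quantities reward response reduction achieved with small control effort.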
Three special Markov jump rules are considered, with
Observe that under the first rule, the system is more likely to take the low-time-delay mode; under the second rule it favors the medium-time-delay mode, and under the third it favors the large-time-delay mode.
The stationary joint probability densities of the first oscillator of the uncontrolled system and of the optimally controlled system are shown in Figures 1(a) and 1(c), respectively. Obviously, the density of the optimally controlled system has a much larger mode and a smaller dispersion around the equilibrium than that of the uncontrolled system. This implies that the proposed control law is highly effective in attenuating the system's response. The results of direct Monte Carlo simulation of system (24) are also obtained and shown in Figures 1(b) and 1(d). The favorable agreement between the two sets of results demonstrates the validity of the proposed method.
Figure 2 shows the stationary probability density of the displacement of the first oscillator of the optimally controlled system for the different transition rules. It can be seen that the density has the largest mode under the first rule, and the mode decreases as the system moves to the second and third rules. This implies that a higher probability of the system working in a large-time-delay mode leads to worse performance of the control force. In Figure 2, the lines are obtained by the proposed method, while the dots are obtained by direct simulation of system (24). The dots match closely with the corresponding lines, and the same observation can be made in Figure 3.
In Figure 4, the variation of the control effectiveness and control efficiency with the excitation intensity is displayed for the different transition rules. It is seen that the proposed control strategy maintains high control effectiveness and control efficiency over varying excitation intensities. It is also seen that both quantities decrease monotonically from the first transition rule to the third; this observation can be explained in the same fashion as for Figure 2. Figure 5 shows the control effectiveness and control efficiency for varying excitation intensity.
Finally, sample time histories of the displacement of the controlled system, compared with those of the uncontrolled system, are displayed in Figure 6.
6. Conclusion
In this paper, a probability-weighted optimal control strategy for nonlinear systems with random time delay has been proposed. Based on the random switch model of the time delay, the optimal control problem of such a system was converted into one in the framework of Markov jump systems. Based on the limiting averaging principle, the optimal control force was approximately expressed as the probability-weighted sum of the control forces associated with the different modes of the system. The proposed optimal control force is continuous and independent of the Markov jump parameter, and thus can be easily executed by actuators. The feasibility and effectiveness of the proposed optimal control strategy were demonstrated on an example of a two-DOF nonlinear stochastic system with random time delay.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of China through grants 11432012, 11602216, 11772293, and 11621062. The opinions, findings, and conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.
References
J. D. Cao, R. Rakkiyappan, K. Maheswari, and A. Chandrasekar, “Exponential H∞ filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities,” Science China Technological Sciences, vol. 59, no. 3, pp. 387–402, 2016.
J. Nilsson and B. Bernhardsson, “LQG control over a Markov communication network,” in Proceedings of the 36th Conference on Decision & Control, San Diego, CA, USA, December 1997.
L. Q. Zhang, Y. Shi, T. W. Chen, and B. Huang, “A new method for stabilization of networked control systems with random delays,” IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1177–1181, 2005.
A. V. Skorokhod, Asymptotic Methods of the Theory of Stochastic Differential Equations, American Mathematical Society, Providence, RI, USA, 1989.
Y. Tsarkov, “Asymptotic methods for stability analysis of Markov impulse dynamical systems,” Nonlinear Dynamics and Systems Theory, vol. 1, pp. 103–115, 2002.