Discrete Dynamics in Nature and Society
Volume 2016, Article ID 3460492, 8 pages
http://dx.doi.org/10.1155/2016/3460492
Research Article

Organization Learning Oriented Approach with Application to Discrete Flight Control

1School of Humanities, Economics and Law, Northwestern Polytechnical University, Xi’an 710072, China
2Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Received 25 December 2015; Accepted 28 February 2016

Academic Editor: Driss Boutat

Copyright © 2016 Lin Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In nature and society there exist many learning modes; the goal of this paper is to incorporate features of social organizations to improve the learning of intelligent systems. Inspired by future prediction, at the high level the discrete dynamics is rewritten into an equivalent prediction model that provides a bridge from the present to the future. At the low level, efficiency is improved by means of group learning. This philosophy is integrated into discrete neural flight control, where the cascade dynamics is written in prediction form and the minimal-learning-parameter technique is designed for parameter learning. The effectiveness of the proposed method is verified by simulation.

1. Introduction

Optimization and control exist everywhere in nature and society. Human beings try their best to learn from nature, observing biological processes to see how optimization unfolds. The genetic algorithm [1] was proposed using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. By mimicking how ants find food via pheromone trails, ant colony optimization [2] has been widely studied. Similarly, there exist many other evolutionary algorithms, such as particle swarm optimization [3] and the estimation of distribution algorithm [4].

During controller design, different systems are analyzed, such as strict-feedback systems [5], pure-feedback systems [6], networked systems [7], and multiagent systems [8]. One main topic is dealing with uncertainty. For example, unknown parameters exist widely in many industrial processes, and intelligent control has therefore been an important area for several decades. To handle the unknown dynamics, fuzzy logic systems (FLSs) [9, 10] and neural networks (NNs) [11–14] are widely employed for function approximation. In view of the approximation role, in the indirect design [9, 15] the functions are approximated separately and then the controller is constructed, while in the direct design the desired control input is approximated by the NN [16–18]. While many papers use the backstepping scheme [19] to deal with complex dynamics, an interesting line of designs [20–22] avoids backstepping by transforming the dynamics into a new form. Adaptive dynamic programming [23–26] and reinforcement learning [27–29] are also gaining more and more attention since optimal performance is expected.

In nature and society, discrete signals are everywhere; population statistics and migration, for example, are recorded in the discrete-time domain. With new developments in hardware, applications are implemented on digital computers or microprocessors. As a result, controller design in the discrete case is widely studied [30–33]. In particular, for flight vehicles and robotic systems [34, 35], the online algorithm must run on a digital computer, and one concern is the computational burden. How to define an efficient learning algorithm is crucial for online application.

A social organization is a very complex system. In such a system, agents learn by themselves or from other agents to improve their ability to adapt to different situations. For the learning process, the goal should be clear and the rules specific. Furthermore, there should be a mechanism that motivates different agents to learn from each other and to share knowledge and experience. Also, the vision of a social organization should reach far enough to lead the group to become more intelligent.

In this paper, we try to construct a new learning scheme from an analysis of social organization. To make the idea more intuitive, the flight dynamics [36] is considered as an example. First, the discrete dynamics is obtained by Euler approximation. Then, with the idea of future prediction, the equivalent prediction model is obtained, and in this way the control input is designed according to the future output. During the backstepping design, in each step the learning algorithm uses group learning and the computational burden is greatly reduced.

This paper is organized as follows. The social organization is briefly discussed in Section 2. Section 3 describes the longitudinal dynamics of the flight vehicle. Section 4 presents the dynamics transformation and adaptive neural controller design. The simulation result is included in Section 5. Section 6 presents several comments and final remarks.

2. Organization Learning

With dramatic changes in the external environment and the continuous development of information technology, optimization problems exist both in nature and in the learning of human social organizations. However, most organizational structures are overstaffed pyramid hierarchies, which seriously affect the efficiency of the organization.

Under tense global competition, the explosive growth of technology and the expanding market bring increasing complexity and uncertainty, so modern organizations in a turbulent business environment constantly seek new sources of competitive advantage. Theorists and organizational leaders increasingly consider learning the most critical factor in achieving sustainable development, competitiveness, and excellent organizational performance, which requires the continuous generation, dissemination, and integration of new knowledge. Thus, terms such as "organizational learning" and "learning organization" have drawn the attention of both academics and organizational practitioners. A reasonable explanation is that organizational learning is often regarded as a solution to problems caused by the hierarchy and bureaucracy of organizations.

The advent of the information age and the knowledge economy requires a more flexible organization with a flat structure, in order to meet market demand faster and improve organizational efficiency. In the current volatile environment, the importance of organizational learning and the learning organization has been increasingly recognized, and research on these issues has produced corresponding results. Several important questions remain: how to establish an intrinsic logic model that can explain and predict changes in the fluctuating environment while the organization retains the ability to survive and develop sustainably; how to establish a set of learning methods that influence and promote organizational learning so as to continuously improve organizational performance; and how to build the mechanisms of individual self-learning, learning from others, and team learning, together with the transformations between these three levels of learning.

The concept of organizations as learning systems has undergone a continuous process of development and evolution. A learning organization is one with the ability to consciously, systematically, and consistently create, accumulate, and use knowledge resources, and to change or redesign itself to adapt to the changing external environment, so as to maintain a sustainable competitive advantage. Organizational learning refers to the process by which members of an organization continue to gain knowledge, improve their behavior, and optimize the organizational system, so as to maintain the sustainable, healthy, and harmonious development of the organization under a changing external environment.

Organizational learning capacity refers to the ability of the members of an organization to keep the organization as a whole developing sustainably and healthily. These learning abilities can be distilled from existing experience or history through self-innovation, and they can also be drawn from the experience of others through self-integration. Through these ways of thinking, the birth of new ideas becomes possible.

In organization learning, two important features should be considered. As demonstrated in Figure 1, learning should come with future vision, meaning that, according to experience, one should be able to predict what will happen in the future; with that prediction, one can then decide how to act now. Moreover, there are usually too many things to learn and the burden is huge. Group learning is therefore an efficient approach, since the number of parameters is greatly reduced. For example, for a large matrix it is complicated to compute the inverse; however, if the matrix can be divided into several small parts, each of which is easy to handle, then the inverse is much easier to obtain. A similar idea exists in organization learning.
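The matrix analogy above can be made concrete. The following is a minimal sketch (the block sizes and numerical values are arbitrary choices for illustration): inverting a block-diagonal matrix reduces to inverting each small block "group by group," which is far cheaper than inverting the full matrix at once.

```python
import numpy as np

# A 6x6 block-diagonal matrix built from three 2x2 blocks (arbitrary values).
blocks = [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[4.0, 0.0], [1.0, 2.0]]),
          np.array([[1.0, 0.5], [0.0, 1.0]])]

A = np.zeros((6, 6))
for i, B in enumerate(blocks):
    A[2*i:2*i+2, 2*i:2*i+2] = B

# Invert block by block ("group by group") ...
A_inv_blocks = np.zeros((6, 6))
for i, B in enumerate(blocks):
    A_inv_blocks[2*i:2*i+2, 2*i:2*i+2] = np.linalg.inv(B)

# ... which agrees with inverting the whole matrix at once.
assert np.allclose(A_inv_blocks, np.linalg.inv(A))
```

Each 2x2 inversion is trivial, while the cost of a dense inversion grows cubically with the matrix size; this is the sense in which splitting a large learning problem into groups reduces the burden.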

Figure 1: Organization learning.

3. Problem Formulation

Hypersonic flight is a key technology that has gained increasing attention recently [37–39]. Different from traditional flight vehicles, the flight condition of high Mach number and high altitude makes the control system extremely sensitive to changes in atmospheric conditions as well as physical and aerodynamic parameters. Controller design for this class of vehicles is widely studied, covering system uncertainty [40, 41], actuator constraints [42], fault-tolerant control [43, 44], and non-minimum-phase behavior [45]. Accordingly, robust control and adaptive control are designed for the dynamics. In [46], recent progress in hypersonic flight is reviewed in detail.

To keep the procedure clear, this paper considers only the altitude subsystem; the velocity subsystem is not considered. The altitude dynamics is as follows: More details of the dynamics can be found in [41].

Following the design [36], define , , , , , , and .

The strict-feedback form is obtained as where , , , , , , , and .

4. Backstepping Design for the Altitude Subsystem

Lemma 1. Given the function , there exists an ideal weight vector such that the smooth function can be approximated by an ideal NN on a compact set:where is the input to , is the node number, is the bounded approximation error, and is the supremum of .
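Lemma 1 is the standard universal-approximation property of neural networks. The sketch below illustrates the idea with a radial-basis-function network; the target function, node centers, and width are arbitrary choices, and the ideal weight vector is obtained here by least squares rather than by any adaptive law from the paper.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis functions phi_i(x) = exp(-(x - c_i)^2 / width^2)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / width**2)

# A smooth target function on the compact set [-1, 1] (arbitrary example).
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(3 * x) + 0.5 * x**2

# Fit an "ideal" weight vector W* by least squares for a fixed node number.
centers = np.linspace(-1.0, 1.0, 15)          # 15 RBF nodes
Phi = rbf_features(x, centers, width=0.3)
W, *_ = np.linalg.lstsq(Phi, f, rcond=None)

# The residual plays the role of the bounded approximation error epsilon.
eps = np.max(np.abs(Phi @ W - f))
print(f"max approximation error with {len(centers)} nodes: {eps:.2e}")
```

Increasing the node number shrinks the attainable error bound on the compact set, which is exactly the trade-off the lemma formalizes.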

4.1. Equivalent Prediction Model

In Section 2, the prediction feature was discussed. The flight dynamics has a cascade structure, and in this part we show how to obtain the prediction model. From (2), it is observed that is dependent on , , while is governed by . To clearly reveal the relationship among the system states, by Euler approximation with sample time , system (2) can be approximated by the discrete-time model where are the state variables, is the sample period, , , , , and .
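The Euler step used here is standard: each continuous derivative is replaced by a forward difference over the sample time. A minimal sketch, with a placeholder cascade system standing in for the flight model (whose details are in [41]):

```python
import numpy as np

def euler_step(f, x, u, T):
    """One-step-ahead Euler model: x_{k+1} = x_k + T * f(x_k, u_k)."""
    return x + T * f(x, u)

# Placeholder cascade dynamics (NOT the flight model): each state is
# driven by the next one, and the last state by the input u.
def f(x, u):
    return np.array([x[1], x[2], u])

T = 0.01                       # sample period (illustrative value)
x = np.array([0.0, 1.0, 0.0])  # initial state
for _ in range(100):           # simulate 1 s of the discrete model
    x = euler_step(f, x, u=0.5, T=T)
```

The cascade structure is visible in the code: the input u reaches the first state only after propagating through the chain, which is precisely why the one-step-ahead model must be rewritten into a prediction form before the controller can connect the current input to the future output.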

Remark 2. In the one-step ahead model (4), is controlled by while is controlled by . It is difficult to see the future dynamics since , , is not directly connected to . Thus, additional efforts should be made to transform the dynamics into another prediction form.

Omitting the detailed analysis, the original system can be expressed as the following equivalent prediction model [36]:

Remark 3. The philosophy behind (5) is important, since it reveals the transfer of the dynamics: actually governs rather than . Since this paper focuses on using the new idea from social organization to construct novel NN approximation and learning, further detail is not presented here.

Remark 4. With the equivalent prediction model in (5), there is a mapping between and . The future output can be deduced from the current control input; conversely, the current control input can be determined from the future reference.

4.2. Discrete Control Design

From Figure 1, it is important to construct a new learning scheme that reduces the online computational burden. This is especially true for hypersonic flight control, since the system changes fast and requires timely learning. The focus now is on how to develop a more efficient learning approach.

For neural approximation, is bounded and unknown. Let where is an unknown constant.

With (6), inspired by the social organization, we try to update the system signals in batches instead of one by one.

Define signals with the following form: where , , and are signals to be constructed.

For the first equation, the virtual control is proposed as where is the design parameter.

Remark 5. One item is included in the controller to approximate the system uncertainty. It is interesting that is a scalar, so this item is easy to calculate.

To update , we have where , , and .

Remark 6. Note that the update of is simple, since the number of online parameters is reduced to one; in this way the computational burden is greatly decreased. The main idea taken from social organization is that, to improve efficiency, updating should proceed group by group instead of one by one.
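The specific update law (10) is not reproduced in the source; the sketch below shows only the general minimal-learning-parameter pattern it belongs to, with invented gains and signals. Instead of adapting the full NN weight vector (one parameter per node), a single scalar bounding the weight norm is adapted, and a leakage term keeps the estimate bounded.

```python
import numpy as np

# Illustrative gains only; the paper's actual values are not given here.
gamma, sigma = 0.1, 0.01   # adaptation gain and sigma-modification leakage

def mlp_update(theta, e, phi):
    """One discrete minimal-learning-parameter update of the scalar
    estimate theta_hat, driven by the tracking error e and the RBF
    regressor vector phi; the -sigma*theta term adds robustness.
    """
    return theta + gamma * (abs(e) * np.linalg.norm(phi) - sigma * theta)

theta_hat = 0.0
for k in range(50):                        # toy closed-loop run
    phi = np.random.default_rng(k).normal(size=7)  # stand-in regressor
    e = np.exp(-0.1 * k)                   # decaying tracking error
    theta_hat = mlp_update(theta_hat, e, phi)
```

Whatever the node number of the network, only one scalar is stored and updated per step, which is the computational saving Remark 6 refers to.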

Remark 7. The subscript in (10) is instead of since the dynamics used for controller design is the equivalent prediction model.

For the second equation, the virtual control is designed as where is the design parameter and .

To update , we have where , , and .

For the third equation, the virtual control is designed as where is the design parameter and .

To update , we have where , , and .

For the fourth equation, the control input is designed as where is the design parameter and .

The robust updating law for NN weights is proposed as where , , and .

Remark 8. In (16), the subscript appears because there is actually no change in this equation compared with the one-step-ahead equation.

The following theorem is achieved.

Theorem 9. For system (4), if the signals (9), (11), (13), and (15) and the update laws (10), (12), (14), and (16) are designed, all the tracking errors are uniformly ultimately bounded.

The proof is similar to the procedure in [36] and is thus omitted here.

5. Simulation

The simulation is with initial states at  ft/s,  ft,  rad, , and  rad.

The tracking commands are given as  ft/s and as square signal with amplitude of 1000 ft and period 200 s. The filter is used to generate the reference signal where , , and .
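The filter parameters are omitted in the source above; a common choice is a second-order low-pass filter that smooths the square-wave altitude command into a trackable reference. A sketch with assumed natural frequency and damping (these values are illustrative, not the paper's):

```python
import numpy as np

# Assumed filter parameters: natural frequency, damping, sample time.
wn, zeta, T = 0.5, 0.9, 0.01

def filter_step(hr, hr_dot, hc):
    """Euler step of h_r'' + 2*zeta*wn*h_r' + wn^2*h_r = wn^2*h_c,
    turning the raw command hc into a smooth reference hr."""
    hr_ddot = wn**2 * (hc - hr) - 2 * zeta * wn * hr_dot
    return hr + T * hr_dot, hr_dot + T * hr_ddot

hr, hr_dot = 0.0, 0.0
for k in range(20000):          # 200 s: one full command period
    # Square wave: +1000 ft for the first half period, -1000 ft after.
    hc = 1000.0 if (k * T) % 200.0 < 100.0 else -1000.0
    hr, hr_dot = filter_step(hr, hr_dot, hc)
```

The filter removes the command's discontinuities, so the reference and its derivative stay bounded, which is what the backstepping design requires of the signal it tracks.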

For the velocity subsystem, the design in [36] is borrowed. For controller, we select , , , , and . For the updating law, we select and , . For simulation, the time interval is selected as  s.

From the altitude tracking depicted in Figure 2, it is observed that the controller tracks the reference signal very well. The elevator deflection and throttle setting are illustrated in Figures 3 and 4; at the beginning, the system responds from the trim state and there is some variation. From the responses of the system states under the proposed controller, the flight path angle in Figure 5, the pitch angle in Figure 6, and the pitch rate in Figure 7 track the virtual commands very well. The adaptive estimation for group learning is shown in Figure 8. It indicates that, motivated by the learning of social organization, the system works more efficiently: the tracking performance is retained while the online learning is much faster, since the number of parameters is reduced to one.

Figure 2: Altitude tracking.
Figure 3: Elevator deflection.
Figure 4: Throttle setting.
Figure 5: Flight path angle.
Figure 6: Pitch angle.
Figure 7: Pitch rate.
Figure 8: Adaptive estimation.

6. Conclusions and Future Work

Through an analysis of social organization, a novel learning scheme is proposed for the equivalent prediction model of the hypersonic flight dynamics. In this way, the online learning is much faster and the controller works more efficiently. The effectiveness of the proposed method is verified by simulation.

Regarding efficiency, sometimes the system learns too much, meaning that even with further updates the accuracy will not increase. In this case, a threshold should be included so that the system saves computation time by keeping the current parameters; in other words, the update is executed only when the tracking error exceeds the desired performance bound. Also, in a social organization one agent cannot possess all capabilities, so agents should combine with others to achieve more complex learning. This work could be further extended to multiagent systems [47, 48]. Composite learning [49] is also of great interest, since it can provide more accurate learning.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by Aeronautical Science Foundation of China (2015ZA53003), Natural Science Basic Research Plan in Shaanxi Province under Grant 2015JM6272, and Fundamental Research Funds for the Central Universities under Grants 3102015BJ008 and 3102015BJ(II)CG017.

References

1. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
2. M. Dorigo, M. Birattari, and T. Stützle, “Ant colony optimization: artificial ants as a computational intelligence technique,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.
3. J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning, pp. 760–766, Springer, 2010.
4. B. Jarboui, M. Eddaly, and P. Siarry, “An estimation of distribution algorithm for minimizing the total flowtime in permutation flowshop scheduling problems,” Computers & Operations Research, vol. 36, no. 9, pp. 2638–2646, 2009.
5. B. Xu, C. Yang, and Y. Pan, “Global neural dynamic surface tracking control of strict-feedback systems with application to hypersonic flight vehicle,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 10, pp. 2563–2575, 2015.
6. J. Na, X. Ren, and D. Zheng, “Adaptive control for nonlinear pure-feedback systems with high-order sliding mode observer,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 370–382, 2013.
7. R. Lu, Y. Xu, A. Xue, and J. Zheng, “Networked control with state reset and quantized measurements: observer-based case,” IEEE Transactions on Industrial Electronics, vol. 60, no. 11, pp. 5206–5213, 2013.
8. W. Chen, S. Hua, and S. S. Ge, “Consensus-based distributed cooperative learning control for a group of discrete-time nonlinear multi-agent systems using neural networks,” Automatica, vol. 50, no. 9, pp. 2254–2268, 2014.
9. B.-S. Chen, C.-H. Lee, and Y.-C. Chang, “H∞ tracking design of uncertain nonlinear SISO systems: adaptive fuzzy approach,” IEEE Transactions on Fuzzy Systems, vol. 4, no. 1, pp. 32–43, 1996.
10. Y.-J. Liu and S. Tong, “Adaptive fuzzy control for a class of unknown nonlinear dynamical systems,” Fuzzy Sets and Systems, vol. 263, pp. 49–70, 2015.
11. M. Chen and G. Tao, “Adaptive fault-tolerant control of uncertain nonlinear large-scale systems with unknown dead zone,” IEEE Transactions on Cybernetics, no. 99, p. 1, 2015.
12. B. Xu, Q. Zhang, and Y. Pan, “Neural network based dynamic surface control of hypersonic flight dynamics using small-gain theorem,” Neurocomputing, vol. 173, pp. 690–699, 2016.
13. Y. Xu, R. Lu, H. Peng, K. Xie, and A. Xue, “Asynchronous dissipative state estimation for stochastic complex networks with quantized jumping coupling and uncertain measurements,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–10, 2015.
14. M. Chen and S. S. Ge, “Adaptive neural output feedback control of uncertain nonlinear systems with unknown hysteresis using disturbance observer,” IEEE Transactions on Industrial Electronics, vol. 62, no. 12, pp. 7706–7716, 2015.
15. P. A. Phan and T. Gale, “Two-mode adaptive fuzzy control with approximation error estimator,” IEEE Transactions on Fuzzy Systems, vol. 15, no. 5, pp. 943–955, 2007.
16. S. S. Ge and C. Wang, “Direct adaptive NN control of a class of nonlinear systems,” IEEE Transactions on Neural Networks, vol. 13, no. 1, pp. 214–221, 2002.
17. C. Yang, S. S. Ge, C. Xiang, T. Chai, and T. H. Lee, “Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach,” IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1873–1886, 2008.
18. M. Chen and S. S. Ge, “Direct adaptive neural control for a class of uncertain nonaffine nonlinear systems based on disturbance observer,” IEEE Transactions on Cybernetics, vol. 43, no. 4, pp. 1213–1225, 2013.
19. Y. J. Liu, Y. Gao, S. Tong, and Y. Li, “Fuzzy approximation-based adaptive backstepping optimal control for a class of nonlinear discrete-time systems with dead-zone,” IEEE Transactions on Fuzzy Systems, vol. 24, no. 1, pp. 16–28, 2016.
20. J.-H. Park, S.-H. Kim, and C.-J. Moon, “Adaptive neural control for strict-feedback nonlinear systems without backstepping,” IEEE Transactions on Neural Networks, vol. 20, no. 7, pp. 1204–1209, 2009.
21. J. Na, Q. Chen, X. Ren, and Y. Guo, “Adaptive prescribed performance motion control of servo mechanisms with friction compensation,” IEEE Transactions on Industrial Electronics, vol. 61, no. 1, pp. 486–494, 2014.
22. B. Xu, Y. Fan, and S. Zhang, “Minimal-learning-parameter technique based adaptive neural control of hypersonic flight dynamics without back-stepping,” Neurocomputing, vol. 164, no. 1-2, pp. 201–209, 2015.
23. D. Liu and Q. Wei, “Finite-approximation-error-based optimal control approach for discrete-time nonlinear systems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 779–789, 2013.
24. D. Vrabie, O. Pastravanu, M. Abu-Khalaf, and F. Lewis, “Adaptive optimal control for continuous-time linear systems based on policy iteration,” Automatica, vol. 45, no. 2, pp. 477–484, 2009.
25. Y. Gao and Y.-J. Liu, “Adaptive fuzzy optimal control using direct heuristic dynamic programming for chaotic discrete-time system,” Journal of Vibration and Control, vol. 22, no. 2, pp. 595–603, 2016.
26. J. Na and G. Herrmann, “Online adaptive approximate optimal tracking control with simplified dual approximation structure for continuous-time unknown nonlinear systems,” IEEE/CAA Journal of Automatica Sinica, vol. 1, no. 4, pp. 412–422, 2014.
27. B. Xu, C. Yang, and Z. Shi, “Reinforcement learning output feedback NN control using deterministic learning technique,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 3, pp. 635–641, 2014.
28. Y.-J. Liu, L. Tang, S. Tong, C. L. P. Chen, and D.-J. Li, “Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 165–176, 2015.
29. F. Lewis and D. Vrabie, “Reinforcement learning and adaptive dynamic programming for feedback control,” IEEE Circuits and Systems Magazine, vol. 9, no. 3, pp. 32–50, 2009.
30. P. He and S. Jagannathan, “Reinforcement learning neural-network-based controller for nonlinear discrete-time systems with input constraints,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 37, no. 2, pp. 425–436, 2007.
31. S. Jagannathan and F. Lewis, “Multilayer discrete-time neural-net controller with guaranteed performance,” IEEE Transactions on Neural Networks, vol. 7, no. 1, pp. 107–130, 1996.
32. J. Sarangapani, Neural Network Control of Nonlinear Discrete-Time Systems, CRC Press Taylor & Francis Group, Boca Raton, Fla, USA, 2006.
33. Y. J. Liu, S. Tong, D. J. Li, and Y. Gao, “Fuzzy adaptive control with state observer for a class of nonlinear discrete-time systems with input constraint,” IEEE Transactions on Fuzzy Systems, 2015.
34. R. F. Stengel, J. R. Broussard, and P. W. Berry, “Digital controllers for VTOL aircraft,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-14, no. 1, pp. 54–63, 1978.
35. A. Chaudhuri and M. S. Bhat, “Output feedback-based discrete-time sliding-mode controller design for model aircraft,” Journal of Guidance, Control, and Dynamics, vol. 28, no. 1, pp. 177–181, 2005.
36. B. Xu and Y. Zhang, “Neural discrete back-stepping control of hypersonic flight vehicle with equivalent prediction model,” Neurocomputing, vol. 154, pp. 337–346, 2015.
37. F. R. Chavez and D. K. Schmidt, “Analytical aeropropulsive-aeroelastic hypersonic-vehicle model with dynamic analysis,” Journal of Guidance, Control, and Dynamics, vol. 17, no. 6, pp. 1308–1319, 1994.
38. D. K. Schmidt, “Optimum mission performance and multivariable flight guidance for airbreathing launch vehicles,” Journal of Guidance, Control, and Dynamics, vol. 20, no. 6, pp. 1157–1164, 1997.
39. D. O. Sigthorsson, P. Jankovsky, A. Serrani, S. Yurkovich, M. A. Bolender, and D. B. Doman, “Robust linear output feedback control of an airbreathing hypersonic vehicle,” Journal of Guidance, Control, and Dynamics, vol. 31, no. 4, pp. 1052–1066, 2008.
40. L. Fiorentini, A. Serrani, M. A. Bolender, and D. B. Doman, “Robust nonlinear sequential loop closure control design for an air-breathing hypersonic vehicle model,” in Proceedings of the American Control Conference (ACC '08), pp. 3458–3463, IEEE, Seattle, Wash, USA, June 2008.
41. H. Xu, M. D. Mirmirani, and P. A. Ioannou, “Adaptive sliding mode control design for a hypersonic flight vehicle,” Journal of Guidance, Control, and Dynamics, vol. 27, no. 5, pp. 829–838, 2004.
42. B. Xu, “Robust adaptive neural control of flexible hypersonic flight vehicle with dead-zone input nonlinearity,” Nonlinear Dynamics, vol. 80, no. 3, pp. 1509–1520, 2015.
43. S. Wang, Y. Zhang, Y. Jin, and Y. Zhang, “Neural control of hypersonic flight dynamics with actuator fault and constraint,” Science China Information Sciences, vol. 58, no. 7, pp. 1–10, 2015.
44. B. Xu, Y. Guo, Y. Yuan, Y. Fan, and D. Wang, “Fault-tolerant control using command-filtered adaptive back-stepping technique: application to hypersonic longitudinal flight dynamics,” International Journal of Adaptive Control and Signal Processing, 2015.
45. L. Fiorentini and A. Serrani, “Adaptive restricted trajectory tracking for a non-minimum phase hypersonic vehicle model,” Automatica, vol. 48, no. 7, pp. 1248–1261, 2012.
46. B. Xu and Z. Shi, “An overview on flight dynamics and control approaches for hypersonic vehicles,” Science China Information Sciences, vol. 58, no. 7, pp. 1–19, 2015.
47. W. Chen, S. Hua, and H. Zhang, “Consensus-based distributed cooperative learning from closed-loop neural control systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 2, pp. 331–345, 2015.
48. W. Chen and W. Ren, “Event-triggered zero-gradient-sum distributed consensus optimization over directed networks,” Automatica, vol. 65, pp. 90–97, 2016.
49. B. Xu, Z. Shi, C. Yang, and F. Sun, “Composite neural dynamic surface control of a class of uncertain nonlinear systems in strict-feedback form,” IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2626–2634, 2014.