Complexity
Volume 2018 (2018), Article ID 5950678, 15 pages
https://doi.org/10.1155/2018/5950678
Research Article

A Stable Distributed Neural Controller for Physically Coupled Networked Discrete-Time System via Online Reinforcement Learning

Jian Sun1,2 and Jie Li3

1School of Electronic and Information Engineering, Southwest University, Chongqing, China
2Chongqing University Key Laboratory of Networks and Cloud Computing Security, Chongqing, China
3State Grid Chongqing Electric Power Co. Electric Power Research Institute, Chongqing, China

Correspondence should be addressed to Jian Sun; cq_jsun@163.com

Received 28 July 2017; Revised 21 November 2017; Accepted 21 December 2017; Published 7 February 2018

Academic Editor: Christopher P. Monterola

Copyright © 2018 Jian Sun and Jie Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
