Complexity

Reinforcement Learning and Adaptive Optimisation of Complex Dynamic Systems and Industrial Applications


Publishing date
01 Dec 2020
Status
Closed
Submission deadline
07 Aug 2020

Lead Editor

1Anhui University, Anhui, China

2Jiangnan University, Jiangsu, China

3University of Kragujevac, Kraljevo, Serbia

4North Minzu University, Yinchuan, China

This issue is now closed for submissions.
More articles will be published in the near future.


Description

Reinforcement learning is a machine learning paradigm developed in the computational intelligence community. It addresses problems in which agents maximise returns or achieve specific goals by learning strategies through interaction with complex environments. The goal of reinforcement learning is to arrive at the best solution to the problem at hand through reward and punishment: good strategies are rewarded, bad strategies are punished, and this reinforcement process is repeated until the best solution is reached. To some extent, it has close connections to both adaptive control and optimisation.
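The reward-and-punishment loop described above can be sketched with tabular Q-learning on a toy problem. Everything in this example is an illustrative assumption rather than material from the issue: the 5-state chain environment, the +1 reward at the right end, and the hyperparameters are all chosen only to make the learning dynamics visible.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only): a 5-state chain
# where the agent earns +1 for reaching the rightmost state.

N_STATES = 5          # states 0..4; state 4 is the rewarded terminal state
ACTIONS = [-1, +1]    # move left or right along the chain
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(state, action):
    """Deterministic chain dynamics with a single reward at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, max_steps=100, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # epsilon-greedy: mostly exploit, occasionally explore;
            # ties are broken at random so early episodes still wander
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: (q[(s, b)], random.random()))
            nxt, r, done = step(s, a)
            # the temporal-difference update "rewards good strategy":
            # actions leading toward the goal accumulate higher Q-values
            target = r + GAMMA * max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = nxt
            if done:
                break
    return q

q = train()
# The learned greedy policy moves right (toward the reward) in every
# non-terminal state.
policy = {s: max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(N_STATES)}
```

The same reward-driven update underlies the deep and continuous-space variants relevant to the industrial settings of this issue; only the Q-table is replaced by a function approximator.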

More general scenarios for reinforcement learning and adaptive optimisation present a major challenge in complex dynamic systems. Controlling complex dynamic systems and industrial plants, or parts of them, involves a variety of challenging aspects that reinforcement learning algorithms need to tackle. The complexity of such industrial processes can involve computer communication, complex networks, continuous state and action spaces, high-dimensional dynamics, partially observable state spaces, randomness induced by heteroscedastic sensor noise and latent variables, time delays, and nonstationarity in the optimal steering, i.e. the optimal policy does not approach a fixed operating point.

The aim of this Special Issue is to bring together work on reinforcement learning and adaptive optimisation of complex dynamic systems and industrial applications. We invite authors to contribute original research articles as well as review articles on all aspects of reinforcement learning algorithms, complex dynamic modelling, optimisation theory, optimal control methods, signal processing, and practical applications. Of particular interest are papers devoted to the development of complex industrial applications. Papers presenting computational issues, search strategies, and modelling and solution techniques for practical industrial problems are also welcome.

Potential topics include but are not limited to the following:

  • Reinforcement learning algorithms in complex dynamics
  • Iterative learning and adaptive optimisation of complex systems
  • Decision optimisation in complex processes
  • Unmanned system control and computer communication
  • Multi-agent reinforcement learning and control
  • Neural network system and adaptive optimisation
  • Fuzzy dynamic systems and adaptive optimisation
  • Data-driven modelling, control, and optimisation
  • Signal processing and optimisation
  • Complex process control and optimisation
  • Complex industrial process and applications

Articles

  • Erratum to “Finite-Time Asynchronous Stabilization for Nonlinear Hidden Markov Jump Systems with Parameter Varying in Continuous-Time Case”
    Lianjun Xiao | Xiaofeng Wang | Lingling Gao
    Special Issue, Volume 2021, Article ID 4527878, Erratum

  • Positive Periodic Solutions for a Class of Strongly Coupled Differential Systems with Singular Nonlinearities
    Ruipeng Chen | Guangchen Zhang | Jiayin Liu
    Special Issue, Volume 2021, Article ID 5964540, Research Article

  • Fractional-Order Modeling and Analysis of a Variable Structure Hybrid Energy Storage System for EVs
    Jianlin Wang | Dan Xu | ... | Jinlu Mao
    Special Issue, Volume 2020, Article ID 7643812, Research Article

  • Stochastically Globally Exponential Stability of Stochastic Impulsive Differential Systems with Discrete and Infinite Distributed Delays Based on Vector Lyapunov Function
    Xiaoyan Liu | Quanxin Zhu
    Special Issue, Volume 2020, Article ID 7913050, Research Article

  • An Air Traffic Controller Action Extraction-Prediction Model Using Machine Learning Approach
    Duc-Thinh Pham | Sameer Alam | Vu Duong
    Special Issue, Volume 2020, Article ID 1659103, Research Article

  • Finite-Time Asynchronous Stabilization for Nonlinear Hidden Markov Jump Systems with Parameter Varying in Continuous-Time Case
    Lianjun Xiao | Xiaofeng Wang | Lingling Gao
    Special Issue, Volume 2020, Article ID 1208951, Research Article

  • Fuzzy Model-Based Asynchronous Control for Markov Switching Systems with Stochastic Fading Channels
    Fayuan Wu | Jinhui Tang | ... | Shuangsi Xue
    Special Issue, Volume 2020, Article ID 8840784, Research Article

  • Optimality Conditions and Scalarization of Approximate Quasi Weak Efficient Solutions for Vector Equilibrium Problem
    Yameng Zhang | Guolin Yu | Wenyan Han
    Special Issue, Volume 2020, Article ID 1063251, Research Article

  • A Penalized h-Likelihood Variable Selection Algorithm for Generalized Linear Regression Models with Random Effects
    Yanxi Xie | Yuewen Li | ... | Dongqing Luan
    Special Issue, Volume 2020, Article ID 8941652, Research Article

  • Convergence Analysis of Iterative Learning Control for Two Classes of 2-D Linear Discrete Fornasini–Marchesini Model
    Kai Wan
    Special Issue, Volume 2020, Article ID 6843730, Research Article
Journal metrics

Acceptance rate: 43%
Submission to final decision: 64 days
Acceptance to publication: 35 days
CiteScore: 3.300
Journal Citation Indicator: 0.690
Impact Factor: 2.833
