Abstract

Reinforcement learning (RL) methods can successfully solve complex optimization problems. This article gives a systematic overview of the major types of RL methods and their applications in the field of Industry 4.0 solutions, and it provides methodological guidelines for determining the approach that fits a given problem best; it can thus serve as a point of reference for R&D projects and further research.

1. Introduction

Reinforcement learning (RL) has a significant chance to revolutionize artificial intelligence (AI) applications by offering a novel approach to machine learning (ML) development that lets the user handle large-scale problems efficiently. These techniques, together with widespread Internet of things tools, have opened up new possibilities for optimizing complex systems in domains such as logistics, project planning, scheduling, and further industry-related fields. Exploiting this potential can result in fundamental progress of the Industry 4.0 transformation [1]. During this digital transformation, vertical and horizontal integration will be strengthened, flexibility must be raised, and human control and supervision need to remain in focus [2, 3]. Furthermore, the data produced by the integrated tools are increasing exponentially, which requires a higher level of autonomy in processes and decisions. Reinforcement learning can serve as a valuable tool in the development of self-optimising and self-organising Industry 4.0 solutions. The main challenge of developing such applications is that there are several methods and techniques and a wide range of parameters that need to be defined. As the definition of these parameters requires detailed knowledge of the nature of RL algorithms, the main goal of this paper is to provide a comprehensive overview of RL methods from the viewpoint of Industry 4.0 and smart manufacturing.

To the best of our knowledge, there is no similar overview article on reinforcement learning methods in Industry 4.0 applications. In addition to the fundamental book [4], there are several overviews of reinforcement learning methods from a theoretical point of view. A detailed semantic overview of Industry 4.0 frameworks [5] and a categorization of Industry 4.0 research fields have also been described. An overview of the key elements of Industry 4.0 research and several application scenarios [6] highlighted the wide scope of smart manufacturing. Although many authors noted a lack of extensive reviews of the Industry 4.0 revolution from different aspects, thanks to their persistent work, several articles are available on this topic today [7]. A survey on the applications of optimal control to scheduling in production, supply chain, and Industry 4.0 systems [8] focused on maximum principle-based studies. Most surveys and review articles on Industry 4.0 declare the importance of optimization, but mostly only general approaches are discussed and no detailed guidelines are extracted. A comprehensive survey in the field of Industry 4.0 and optimization [9] discussed recent developments in data fusion and machine learning for industrial prognosis, placing an emphasis on the identification of research trends, niches of opportunity, and unexplored challenges. Even though it considered several ML methods and algorithms, RL was mentioned only briefly, without extracting its key fundamentals.

The facts collected above strengthened our motivation to prepare a detailed overview of RL applications and methods used in the field of Industry 4.0. Our main goals are:
(i) Presenting a hands-on reference for researchers who are interested in RL applications
(ii) Giving compact descriptions of applicable RL methods
(iii) Providing a guideline that helps them easily identify the best-fitting subset of RL methods for their problems and hence lets them focus on the relevant part of the literature

Our systematic review is based on an examination of the literature available in Scopus, following PRISMA-P (Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols). The PRISMA-P workflow contains a 17-item checklist that facilitates the preparation and reporting of a robust protocol for systematic reviews in a standardized way. The literature source list was queried in February 2021 with the following keywords: TITLE-ABS-KEY (“reinforcement learning” AND (“smart factory” OR “IOT” OR “smart manufacturing” OR “industry 4.0” OR “CPS”)).

Both author keywords and index keywords were involved in the analysis. The keyword processing started with an extensive data cleansing process by:
(1) Building up a standardized keyword unit (SKU) list and splitting complex keywords into SKUs
(2) Assigning SKUs to one of the following keyword classification types:
(i) Principle captured
(ii) Industrial field of application
(iii) Application field of solution
(iv) Mathematical approach of application methodology
(3) Identifying major classification groups by classification types

A total of 781 articles were included in the analysis. Out of 14,035 original author and index keywords, 2,579 duplicates were filtered out. The remaining 11,456 keywords were sliced into 45,824 SKUs. Finally, 12,017 keywords were assigned to classification types that provide the major tendencies and relations of industrial applications of reinforcement learning methods. Figure 1 shows the change of the assessed literature size over the PRISMA steps.

Our article consists of the following major parts:
(i) First, in Section 2, we give a short general introduction to the reinforcement learning framework and summarize some major mathematical properties behind RL techniques. Furthermore, we present a classification of RL methods that gives the reader a map for the further discussions.
(ii) Next, in Sections 3.1–3.3, we present the key findings of the systematic review and a hands-on reference for further research.
(iii) Then, in Sections 3.4 and 3.5, we discuss the conclusions and give a detailed guideline to help the reader choose the most adequate RL method for different problems.
(iv) Finally, in Appendices A–H, we provide a compact overview of 18 different RL methods.

2. Theoretical Background of Reinforcement Learning

In this section, we will summarize the fundamental concept of reinforcement learning, and then we will present a general classification of RL methods.

There are three main paradigms in machine learning: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, a functional relationship of a regression model or a classifier is learnt from data that represent the input and output of the model. In unsupervised learning, the hidden structure of the data is explored, usually by clustering [9].

Reinforcement learning (RL) also refers to learning problems. As Figure 2 represents the process, an agent takes an observation of the environment; then, on the basis of that, it executes an action $A_t$. As a result of the action in the environment, the agent receives a reward, it can take a new observation from the environment, and the cycle is repeated. The problem is to let the agent learn so as to maximize the total reward. The reinforcement learning concept was introduced in ([4], Section 3.1). While in supervised and unsupervised learning the model fitting requires a complete set of observations, in reinforcement learning the learning process is sequential. Reinforcement learning is based on the reward hypothesis, which states that all goals can be described by the maximisation of expected cumulative rewards. Formally, the history is the sequence of observations, actions, and rewards: $H_t = O_1, R_1, A_1, \ldots, A_{t-1}, O_t, R_t$.

A state contains all the information to determine what happens next. Formally, the state is a function of the history: $S_t = f(H_t)$. Let $G_t$ denote the total discounted reward from time-step $t$: $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \ldots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$.

The state-value function $v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$ gives the expected total discounted return when starting from state $s$. A policy covers the agent's behaviour in all possible cases, so it is essentially a map from states to actions. There are two major categories: (1) deterministic policy: $a = \pi(s)$, (2) stochastic policy: $\pi(a \mid s) = P[A_t = a \mid S_t = s]$. The action-value function is the expected return starting from state $s$, taking action $a$, and then following policy $\pi$: $q_\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$.

Practically, the state-value function is a prediction of the expected present value (PV) of future rewards, which allows evaluating the goodness of states, so it is a map from states to scalars: $v_\pi : S \to \mathbb{R}$. The optimal state-value function is the maximum state-value function over all policies: $v_*(s) = \max_\pi v_\pi(s)$. It is easy to see that if an optimal state-value function is known, then an optimal action-value function and an optimal policy can be derived.

The reinforcement learning concept is based on stochastic processes and on Markov chains. The Markov property is fundamental to the mathematical basis of reinforcement learning methods. A state $S_t$ is Markov if and only if $P[S_{t+1} \mid S_t] = P[S_{t+1} \mid S_1, \ldots, S_t]$ holds. By definition, a Markov decision process (MDP) is a tuple $\langle S, A, P, R, \gamma \rangle$, where $S$ is a finite set of states, $A$ is a finite set of actions, $P$ is a state transition probability matrix, $P_{ss'}^a = P[S_{t+1} = s' \mid S_t = s, A_t = a]$, $R$ is a reward function, $R_s^a = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]$, $\gamma$ is a discount factor, $\gamma \in [0, 1]$, and time-steps are discrete. The Bellman equation practically states that the state-value function of an MDP can be decomposed into two parts: the immediate reward and the discounted value of the successor states: $v_\pi(s) = \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s]$.
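To make this notation concrete, the following minimal Python sketch (our own illustration with hypothetical numbers, not taken from the surveyed literature) encodes a two-state, two-action toy MDP and performs a single Bellman backup of the state-value function under a uniform random policy.

```python
import numpy as np

# Toy MDP (hypothetical example): 2 states, 2 actions.
# P[a][s][s'] = transition probability, R[a][s] = expected immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.0, 1.0]]])   # action 1
R = np.array([[1.0, 0.0],                  # action 0
              [2.0, -1.0]])                # action 1
gamma = 0.9
pi = np.array([[0.5, 0.5], [0.5, 0.5]])    # pi[s][a]: uniform random policy

v = np.zeros(2)                            # current value estimate v_k
# One Bellman backup: v_{k+1}(s) = sum_a pi(a|s) (R_s^a + gamma * sum_s' P_ss'^a v_k(s'))
v_next = np.array([sum(pi[s][a] * (R[a][s] + gamma * P[a][s] @ v)
                       for a in range(2)) for s in range(2)])
print(v_next)
```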

Environments can be distinguished by their observability. Let us denote $S_t^a$ as the agent's state at time-step $t$ and $S_t^e$ as the environment's state. The environment can be (1) fully observable if the agent directly observes all states of the environment, $O_t = S_t^a = S_t^e$, or (2) partially observable if the agent has only indirect observations, $S_t^a \neq S_t^e$.

Figure 3 summarizes a classification of reinforcement learning methods in a tree structure. Further details of the different RL methods are described in the Appendix.

3. Overview of the Industry 4.0 Relevant Applications

In this section, we present hands-on references in tabular format based on the results of our data cleansing process, together with some major results of the systematic literature analysis. These highlight general trends that can lead the reader to successfully applicable RL methods, preventing inappropriate trials and hence shortening development periods. In the final part of the section, we present a hands-on guideline that summarizes the key conclusions.

3.1. Classification of Applications by Principle Captured

The main goal of this section is to give an overview of the principle captured problem types that reinforcement learning has been applied to, to describe the major tools that delivered impressive performance for each problem category, and finally to highlight some typical issues that needed to be taken care of during implementation.

By performing SKU analysis, we identified the most relevant keywords that are assigned to a principle captured. In Table 1, the associated publications are listed by principle captured categories.

Furthermore, Figure 4 shows the principle captured classes by reinforcement learning methods. Although the related frequency table does not meet all the required criteria, a $\chi^2$-test calculation is presented in Table 2 by principle captured classes; it makes the identification of some significant deviations from the overall distribution of RL methods possible.
(i) In the class of prediction, forecasting, and estimation, planning, value function approximation methods, and Markov decision processes are over-represented. This lets us conclude that complex methods are less in focus, which is fully in line with the goal of better understanding the behaviour of the environment without strong optimization aims.
(ii) In the class of detection, recognition, prevention, avoidance, and protection, the policy gradient methods are over-represented, while MDPs are under-represented. This shows that researchers are more interested in complex models with a higher predictive performance than in basic solutions.
(iii) In the classes of evaluation and assessment and of allocation, assignment, and resource management, the multiagent methods are more in focus, which indicates that this field is on the way to distributing tasks to lower-level tools instead of centralized data processes. While in the first class the distribution of further RL methods follows the overall distribution, in the second class the policy gradient methods are over-represented, which comes from the fact that allocation-related problems prefer to create an optimal policy.
(iv) In the classes of classification, clustering, and decision making and of scheduling, queuing, and planning, the situation is the opposite: multiagent methods are under-represented, which means that research on these kinds of operations is still focused on centralized solutions.
(v) In the class of control, the temporal-difference methods, Markov decision process constructions, and multiagent methods are over-represented, while complex approaches, like policy gradient methods, are under-represented.
(vi) Discussions of specific parts of RL solution design problems occur in a smaller number of cases, but these kinds of publications demonstrate that constructing an appropriate RL application is not always trivial. We can highlight state space design [12, 25, 33, 107, 144, 179, 193, 208, 217, 220, 222, 224, 227, 266, 267], action space design [109, 220, 246, 268], reward construction [14, 76, 110, 199, 220, 226, 246, 269–273], and exploration strategy planning [86, 274], which can be determinants from the whole application point of view.

3.2. Classification of Publications by Industrial Field of Application

Similarly to Section 3.1, by performing the SKU analysis, we also identified the most relevant keywords that are assigned to industrial fields. In Table 3, the associated publications are listed by industrial field categories.

Similarly to the categories of principle captured, we also prepared Figure 5, which shows the industrial field classes by reinforcement learning methods. Although the related frequency table does not meet all the required criteria, a $\chi^2$-test calculation is presented in Table 4 by industrial field classes; it makes the identification of some significant deviations from the overall distribution of RL methods possible.
(i) In the class of energy, solar, power, and electric, the applications of Q-learning methods are over-represented, while more basic methods and policy gradient methods are under-represented
(ii) In the class of telecommunication, communication, networking, internet, 5G, Wi-Fi, and mobile, the policy gradient methods are over-represented and there is a strong focus on the applications of edge computing
(iii) In the class of wireless, radio, antenna, and signal, the applications of Markov decision processes are highlighted
(iv) Similarly, in the class of vehicle, unmanned aerial vehicle, drone, and aircraft, the applications of Markov decision processes are over-represented together with policy gradient methods, while multiagent solutions are less discussed
(v) In the classes of cyber-physical system and robot and of manufacturing and factory, the basic dynamic methods and Q-learning approaches are more popular
(vi) Finally, in the class of city and building, the multiagent methods are over-represented

3.3. Classification of Publications by Mathematical Approach of Application Methodology

Similarly to the previous sections, we also performed the SKU analysis for the third major dimension of keywords, which is the methodological approach of the solution. The most relevant keywords were identified, and in Table 5 the associated publications are listed by methodological approach categories.

Although it is not feasible to summarize all the different methodological approaches in detail, we would like to highlight some specialities of selected cases to demonstrate how widely RL approaches are used and to motivate researchers to find a solution to their problems from a new perspective.

As we described in Section 2, reinforcement learning methods are based on the Markov property, and hence it is fundamental to model the problems as Markov decision processes (MDPs), which is far from trivial in several cases. When formulating an MDP, we need to take care of the state space design, especially guaranteeing that a state representation contains all the relevant information needed to evaluate a situation; in other words, whenever the system is in the same particular state, the environment will respond with the same characteristics to a particular action [96, 104, 191, 203, 313, 346].

Actor-critic methods are model-free learning methods that learn both the optimal policy for taking an action and the value function for the most accurate evaluation of the current state. Most of the publications discuss mainly distributed autonomous IoT device networks. In these cases, the focus is shifted towards learning and knowledge transfer solutions:
(i) Stochastic model of cloud-based IoT for fog computing computation offload and radio resource allocation [97]
(ii) Centralized joint resource allocation solution for handling the shortage of frequency resources of cellular systems by using a neural network embedded reinforcement learning algorithm [176]
(iii) Determining the optimal sampling time for IoT devices for energy harvesting, saving batteries. Since the state space contains continuous quantities, a linear function approximation was used and a set of novel features was introduced to represent the large state space [349]
(iv) A bio-inspired RL modular architecture that is able to perform skill-to-skill knowledge transfer, called the transfer expert RL (TERL) model. Its architecture is based on an RL actor-critic model where both the actor and the critic have a hierarchical structure, inspired by the mixture-of-experts model [392]
(v) Deep reinforcement learning-based cooperative edge caching approach [338]
(vi) Multiple IoT devices send data in parallel, but in general they do not provide additional information to the existing knowledge, so it is not necessary to permanently send data. By using an actor-critic method, it can be determined which data packages need to be sent, preventing redundant or irrelevant communication [221]
(vii) Mobile edge computing and energy harvesting framework of centralized training with decentralized execution by adopting the MD-hybrid-AC method [120]
(viii) Asynchronous advantage actor-critic method for mobile edge computing, because computation offloading cannot perform well in many situations, but the optimal algorithm can be chosen to use on the IoT side [196]
(ix) Optimization of the robustness of IoT network topology with a scale-free network model which has good performance under random attacks. A deep deterministic learning policy (DDLP) is proposed to improve the stability for large-scale IoT applications [337]
(x) IoT devices lack storage capacity; therefore, a joint cache content placement and delivery policy for cache-enabled D2D networks was constructed [17]
(xi) A federated reinforcement learning architecture was presented where each agent working on its independent IoT device shares its learning experience (i.e., the gradient of the loss function) with the others [237]

By applying multiagent methods, there are multiple ways to organize learning:
(i) Local learning and no centralized knowledge (see Figure 6(a))
(ii) Local knowledge deployment, local learning, and central knowledge collection
(iii) Local knowledge deployment and local learning with knowledge transfer to close neighborhoods (see Figure 6(b))
(iv) Local knowledge deployment and centralized learning (see Figure 6(c))

3.3.1. Centralized and Federated Methods

As Internet of things (IoT) services and applications are growing rapidly, most of the current optimization-based methods lack a self-adaptive ability in dynamic environments. To handle these challenges, learning-based approaches are implemented generally in a centralized way. However, network resources may be over-consumed during the training and data transmission process. To solve the complex and dynamic control issues, a federated deep reinforcement learning-based cooperative edge caching (FADE) framework is presented. FADE enables base stations (BSs) to cooperatively learn a shared predictive model by considering the first-round training parameters of the BSs as the initial input of the local training and then uploads near-optimal local parameters to the BSs to participate in the next round of global training [16].

Although the first studies focused on designing learning algorithms with provable convergence time, other issues, such as incentive mechanisms, were explored later: a deep reinforcement learning-based incentive mechanism has been designed to determine the optimal pricing strategy for the parameter server and the optimal training strategies for edge nodes [147].

3.3.2. Hierarchical Methods

Hierarchical approaches are applied primarily to solve communication channel or information processing capacity issues. The model structure usually follows the structure of the information path. In a two-layer approach, a local IoT device needs to transfer information to a local hub, and then the local hub transmits the collected information to the central decision maker. In this case, separate models can be set up for both layers to find the optimal scheduling order for communication.

A new crowd sensing framework based on a hierarchical structure is introduced to organize different resources, and it is solved by using a deep reinforcement learning-based strategy to ensure quality of service [88]. A hierarchical correlated Q-learning (HCEQ) approach is presented to solve the dynamic optimization of generation command dispatch (GCD) for automatic generation control (AGC) [231]. An enhanced version of a bio-inspired reinforcement learning modular architecture is presented to perform skill-to-skill knowledge transfer, called the transfer expert RL (TERL) model. The TERL architecture is based on an RL actor-critic model where both the actor and the critic have a hierarchical structure, inspired by the mixture-of-experts model, formed by a gating network that selects experts specializing in learning the policies or value functions of different tasks [392]. A new cloud computing model is proposed that is hierarchically composed of two layers: a cloud control layer (CCL) and a user control layer (UCL). The CCL manages cloud resource allocation, service scheduling, service profile, and service adaptation policy from a system performance point of view. Meanwhile, the UCL manages end-to-end service connection and service context from a user performance point of view. The proposed model can support nonuniform service binding and its real-time adaptation using metaobjects by intelligent service-context management using a supervised and reinforcement learning-based machine learning framework [150]. A new cooperative resource allocation algorithm is presented which couples reinforcement learning networks and prediction neural networks for accurate mobile target tracking. Specifically, a hierarchical structure that performs collaborative computing is designed to alleviate the computing pressure of front-end devices, which are supported by edge servers [397]. A slightly different approach is applied to a resilient control problem studied for cyber-physical systems (CPSs) under denial-of-service (DoS) attacks. The term resilience is interpreted as the ability to be robust to physical layer external disturbances and to defend against cyber layer DoS attacks. The overall resilient control system is described by a hierarchical game, where the cyber security issue is modeled as a zero-sum matrix game, and the physical minimax control problem is described by a zero-sum dynamic game. By virtue of the reinforcement learning method, the defense/attack policy in the cyber layer can be obtained, and additionally, the physical layer control strategy can be obtained by using the dynamic programming method [398]. Further publications in hierarchical RL topics are related to balancing timeliness and criticality when gathering data from multiple sources [116], ubiquitous user connectivity, and collaborative computation offloading for smart cities [248].

3.3.3. Distributed and Parallel Methods

It can be stated with certainty that the biggest potential of industrial applications lies in intelligent devices. In this context, intelligence means some kind of ability to take decisions autonomously and, furthermore, to perform learning steps locally. Significant efforts have been made to develop functional solutions to reach this goal.

Computation offloading can provide a solution to the high computation requirements of resource-constrained mobile devices. The mobile cloud is the well-known existing offloading platform, which is usually a far-end network solution, but this can cause other issues, such as higher latency or network delay, which negatively affects real-time mobile Internet of things (IoT) applications. Therefore, a deep Q-learning-based autonomic management framework is proposed as a near-end network solution for computation offloading in the mobile edge [133].

Another way to extend single reinforcement learning applications is to handle multiple objectives. There are two major practices for solving such kinds of problems. The most obvious idea is to construct a mixed reward function that returns a combined result according to the different objectives [161, 259, 370]. Another possible way is to combine multiobjective ant colony optimization methods with RL techniques like deep reinforcement learning or double Q-learning algorithms [83, 142].

3.4. General Trends of RL Applications

Before the beginning of the Industry 4.0 revolution, the general methodology was based on centralized data collection, data processing, and predictive model development solutions. With the spread of Internet of things (IoT) devices, it has become possible to delegate more computational tasks to them. This potential is being exploited in reaction to another major issue, which is the lack of communication capability. On the one hand, communication between IoT devices and central servers or nodes is a relatively energy-intensive process; on the other hand, there are significant limitations on communication channels and frequencies.

Distributing computational tasks to IoT devices requires a fundamental change: it is not possible to assign as much human effort to the supervision of data processing and predictive model development as before, during the centralized era. This is the major reason for the appreciation of RL methods, because they provide a general self-learning framework that basically requires no manual or human interaction to maintain.

Early research focused on the applicability of reinforcement learning techniques with single agents. Then, more and more complex problems were solved, and multiagent solutions started to be analyzed. In recent years, the focus of researchers has been shifting to multiagent structures. The set-up of the agents and their goals or reward functions show very creative solutions. In a new wave of research, the agents are defined with different roles, often with attacker-defender objectives, and each agent is trained to follow an optimal strategy accordingly. Then, the stability and robustness of the system can be analyzed and the weakest items can be purposefully improved.

As Figure 7 demonstrates, the number of Industry 4.0-related reinforcement learning-based studies is dynamically increasing, and there is no sign of it slowing down.

3.5. Discussion and Guideline Process to Determine Appropriate RL Method to Use

On the basis of the previous section, it can be highlighted that there are several ways and methods in which reinforcement learning can be applied to Industry 4.0-related problems, and it is far from trivial which one can provide a successful solution.

We prepared a questionnaire and present it as a decision flow diagram in Figure 8. Our primary goal was to set up a method that helps readers formulate their RL tasks. The first questions of the questionnaire-based process verify whether the state and action spaces are appropriately defined and how the reward can be obtained. The further questions systematically narrow down the set of applicable RL methods. The possibility of using simulation or learning from own experience can determine the general learning mechanism, while the nature of reward propagation can determine a smaller subset of applicable RL methods. Even if the conclusions are soft-defined, a user with some basic knowledge of RL methods can easily interpret them, or they can be the basis of an RL method selector wizard. We believe that researchers will have fewer failed attempts by using our guideline, and the time-to-solution can be reduced significantly.

We should keep in mind that the whole reinforcement learning concept is based on Markov decision processes. A direct conclusion is that the state space should be constructed in a way that all potential states contain all the relevant information that can have any influence on the outcomes. Moreover, the action space should be constructed similarly: the effects of an action in a particular state should be based on the same deterministic or stochastic behaviour. This lets the RL agent learn the underlying effect mechanism.

Once the state and action spaces are defined, it needs to be investigated whether performing simulations is an option. If we are able to determine the environment's behaviour when an action is taken in a particular state, that is, to derive the reward value and the state transition, then an extensive learning process can be executed using model-based RL methods in a cost-efficient way, without significant risk from applying untrained agents. The general rule is also true in this case: the RL solution will only be as adequate as the simulation is. If there is an option to validate the simulation outcomes against the real environment, then this can help ensure the validity of the solution.

4. Conclusions

As we pointed out, reinforcement learning methods have a high potential in Industry 4.0 applications, which is commonly agreed upon by researchers; one of the biggest reasons behind this is that smart tools require a high level of optimization which cannot be satisfied by human interventions. This continuously raises the demand for self-learning solutions, and RL techniques have proven their efficiency in multiple fields. A major goal of our article was to give an overview of RL applications in the field of Industry 4.0. As a first step, we provided a high-level overview of the general RL framework and a classification of RL methods to let the reader easily see through the possibilities, while we also presented a more detailed summary of the most widely used RL methods of Industry 4.0 applications in the Appendix. Therefore, our publication can serve as a starting point for further research on RL applications.

We then highlighted the results of our systematic literature overview of reinforcement learning applications in the field of Industry 4.0. An extensive keyword analysis led us to identify some typical patterns in choosing an adequate RL method for particular combinations of principles captured and industrial fields. Although there is no uniquely optimal RL method, there are RL methods that provide efficient solutions for certain problems. Our summary can be used as a hands-on reference for further research and can help researchers shorten the preparation time for their studies.

Furthermore, we prepared a questionnaire that provides a methodology for setting up the reinforcement learning system in a proper way and for choosing an appropriate method for the learning problem that the researcher is facing. We believe that an extension of our questionnaire can be the basis of a wizard tool that enables the user to find the most fitting RL method for the learning task and guides them through the set-up process. On the other hand, by knowing the key properties of the different RL methods, it becomes faster to adopt an existing one or to modify it to fit specific needs and hence develop one's own RL method.

We hope that our article strengthens researchers in their decision to use RL methods for further applications, as numerous successful applications demonstrate their high efficiency.

Appendix

In the Appendix, we describe the major methods of reinforcement learning one by one, highlighting their properties and evolutionary stages, following David Silver's approach from the simplest methods to the more complex ones.

A. Dynamic Programming

Dynamic programming (DP) covers a decision process by breaking it down into a sequence of elementary decision steps over time. “Dynamic” refers to the sequential approach, while “programming” refers to its optimization objective.

In this section, all the methods work with the assumption that the environment is perfectly known. The iterative policy evaluation method is described for learning the state-value function $v_\pi$ of a given policy $\pi$; then the value iteration method is used to determine the optimal state-value function $v_*$ without maintaining an explicit policy; and last but not least, policy iteration is presented to derive an optimal policy for the environment.

In general, there is limited use of dynamic programming algorithms, both because of the assumption that the environment is perfectly known and because of their high computational requirements. On the other hand, dynamic programming methods provide the essence of the ideas that are used in advanced methods in an easily understandable form.

Iterative Policy Evaluation. Let us assume that a policy $\pi$ is given and actions are taken according to it. The goal is to determine the state-value function $v_\pi$ by iterative application of the Bellman backup: $v_1 \to v_2 \to \ldots \to v_\pi$. At each iteration step, the state-value function should be updated in the following way:
$$v_{k+1}(s) = \sum_{a \in A} \pi(a \mid s) \left( R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a v_k(s') \right).$$

The second term shows the cumulative reward from state $s$ when taking action $a$, obtained by applying a single Bellman decomposition, while the first term provides the probability of taking action $a$ when following policy $\pi$. It can be proven that, under weak conditions, the proposed state-value function update converges to $v_\pi$ ([4], Section 4.2).
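As an illustrative sketch (our own, with the array layout assumed as in the toy example of Section 2), the following Python function applies the backup above until convergence.

```python
import numpy as np

def iterative_policy_evaluation(P, R, pi, gamma=0.9, tol=1e-8):
    """Evaluate a fixed policy pi by repeated Bellman backups.

    P[a, s, s'] : transition probabilities, R[a, s] : expected rewards,
    pi[s, a]    : probability of taking action a in state s.
    Returns the state-value function v_pi (illustrative sketch only).
    """
    n_states = P.shape[1]
    v = np.zeros(n_states)
    while True:
        # v_{k+1}(s) = sum_a pi(a|s) * (R_s^a + gamma * sum_s' P_ss'^a v_k(s'))
        v_new = np.einsum('sa,as->s', pi, R + gamma * np.einsum('ast,t->as', P, v))
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```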

Value Iteration. The iterative policy evaluation method can be extended to find the optimal state-value function $v_*$. The main idea is that the iteration should be done by starting from the final reward and working backward. Let us assume that the solution $v_k(s')$ of a subproblem is known. Then the solution of the next iteration step, $v_{k+1}(s)$, can be found by a one-step look-ahead:
$$v_{k+1}(s) = \max_{a \in A} \left( R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a v_k(s') \right).$$

It can easily be seen that for a finite state space $S$, the determination of the optimal state-value function for all available states can be done in a finite number of steps ([4], Section 4.4).
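The corresponding sketch (same hypothetical array shapes as above) replaces the policy-weighted sum with a maximum over actions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """One-step look-ahead backups converging to the optimal value function v*.

    P[a, s, s'] : transition probabilities, R[a, s] : expected rewards.
    Illustrative sketch; shapes follow the policy-evaluation example above.
    """
    n_states = P.shape[1]
    v = np.zeros(n_states)
    while True:
        # q[a, s] = R_s^a + gamma * sum_s' P_ss'^a v(s')
        q = R + gamma * np.einsum('ast,t->as', P, v)
        v_new = q.max(axis=0)          # v_{k+1}(s) = max_a q[a, s]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```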

Policy Iteration. The iteratively learnt knowledge can be exploited by improving the policy, acting greedily with respect to $v_\pi$: $\pi'(s) = \arg\max_{a \in A} \left( R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a v_\pi(s') \right)$. This practically means picking the action from a particular state which maximizes the sum of the immediate reward and the discounted state-value of the successor state ([4], Section 4.6). The learning process of policy iteration is demonstrated in Figure 9.
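A minimal sketch of the greedy improvement step is given below (our own illustration); policy iteration then simply alternates policy evaluation and this improvement until the policy no longer changes.

```python
import numpy as np

def greedy_policy(P, R, v, gamma=0.9):
    """Extract a greedy (deterministic) policy with respect to a value function v.

    Returns pi'(s) = argmax_a (R_s^a + gamma * sum_s' P_ss'^a v(s')); sketch only.
    """
    q = R + gamma * np.einsum('ast,t->as', P, v)   # q[a, s]
    return q.argmax(axis=0)                        # best action per state
```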

B. Model-Free Prediction Methods

Unlike in dynamic programming, in model-free methods a perfectly known environment is not necessary; only experience samples are required, in other words just sequences of states, actions, and rewards, with no prior knowledge of the environment.

In this section, the Monte-Carlo learning method is presented for learning simply by averaging the experience; then the temporal-difference learning method is discussed, which lets the agent learn in more frequent but smaller steps by applying bootstrapping techniques; finally, the TD($\lambda$) learning method is described as an extension of the temporal-difference method's one-step learning to multiple-step learning.

Monte-Carlo Learning. A Monte-Carlo (MC) agent solves the reinforcement learning problem by applying the average sample return, so it learns from complete episodes. Hence, it needs to be guaranteed that episodes always terminate; otherwise, the learning process cannot be performed. MC uses the simplest idea by assigning the empirical mean of returns to a specific state ([4], Section 5.1). There are two major types of MC methods:
(i) First-visit MC: only the first visit of a state is involved in the calculation during an episode. Let us assume that state $s$ is visited for the first time at time period $t$. Let us denote $G_t$ as the total return from time period $t$ and $N(s)$ as the number of times that state $s$ has been visited, while $S(s)$ is the sum of returns up to the current episode. In this case, the state-value estimate is the empirical mean: $V(s) = S(s)/N(s)$. As experience grows, so does $N(s) \to \infty$, and the long-term mean converges to the state-value function: $V(s) \to v_\pi(s)$.
(ii) Every-visit MC: all visits of a state are involved in the calculation during an episode. Formally, the main difference from first-visit MC is that $N(s)$ is incremented at every time period whenever state $s$ is visited.

From a computational point of view, it is important to mention that the empirical mean is determined incrementally in practice. Let us denote $V(s)$ as the value-function estimate, $N(s)$ as the visit counter, and $G_t$ as the total return from time period $t$ when state $s$ is visited; then the incremental update after each counted visit is $N(S_t) \leftarrow N(S_t) + 1$ and $V(S_t) \leftarrow V(S_t) + \frac{1}{N(S_t)} \left( G_t - V(S_t) \right)$.

Figure 10 demonstrates the learning process of the Monte-Carlo method. As we can see, the learning step is performed at the end of an episode.
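A compact sketch of first-visit MC prediction with the incremental mean is shown below (our own illustration; the episode format is a hypothetical assumption).

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=0.9):
    """First-visit Monte-Carlo prediction with incremental mean updates.

    `episodes` is an iterable of trajectories, each a list of (state, reward)
    pairs generated by the evaluated policy (hypothetical input format).
    """
    V, N = defaultdict(float), defaultdict(int)
    for episode in episodes:
        G, returns = 0.0, {}
        # Walk backwards to accumulate discounted returns G_t for each step.
        for state, reward in reversed(episode):
            G = reward + gamma * G
            returns[state] = G          # overwriting keeps the FIRST visit's return
        for state, G_t in returns.items():
            N[state] += 1
            V[state] += (G_t - V[state]) / N[state]   # incremental mean
    return V
```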

Temporal-Difference Learning. A temporal-difference (TD) agent learns from incomplete episodes by applying bootstrapping. Compared to MC learning, TD uses the best guess of the total return, formally the TD target $R_{t+1} + \gamma V(S_{t+1})$ instead of the episodic return $G_t$, to calculate value function estimates: $V(S_t) \leftarrow V(S_t) + \alpha \left( R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \right)$. This single difference means that a TD agent can perform a learning step after each and every action ([4], Section 6.1), as Figure 11 shows. As a consequence, it can also be applied to never-ending episodes.
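The TD(0) update can be expressed as a one-line rule; the following sketch (our own, with a dictionary-based value table) shows a single learning step.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: V(S_t) += alpha * (R + gamma * V(S_{t+1}) - V(S_t)).

    V is a dict mapping states to value estimates; sketch of the update rule only.
    """
    td_target = r + gamma * V.get(s_next, 0.0)
    td_error = td_target - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * td_error
    return V
```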

TD($\lambda$) Learning. There are intermediate solutions between TD, which performs value function estimate updates after the 1-step return, and MC, which performs updates only at the end of an episode (practically the $\infty$-step return). The main idea is to apply a normalized geometric series for weighting the $n$-step returns ([4], Section 7.1). In this case, the value function estimate uses the weighted total return $G_t^\lambda = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}$. It can be shown that TD(1) is equivalent to every-visit MC learning and TD(0) is equivalent to the original one-step TD learning method. Furthermore, TD($\lambda$) methods can be applied both in a forward and in a backward view. The algorithms shown in this section can be used either
(i) in offline mode: value function estimate updates are accumulated within episodes but applied only at the end of the episode, or
(ii) in online mode: value function estimate updates are accumulated within episodes and can be applied immediately.

A unified view of model-free prediction techniques is shown in Figure 12. It was originally created by Richard Sutton, and this version was prepared by David Silver. It highlights the two most important dimensions of learning methods: the vertical dimension represents the depth of the updates, while the horizontal dimension represents the width of the updates.

C. Model-Free Control Methods

In the previous section, model-free prediction methods were summarized. These methods learn from experience while the acting policy is managed externally, which is called off-policy learning. In contrast, on-policy learning lets the algorithm take actions on the basis of its own policy. Hence, a major objective comes to the front: optimizing the policy.

In this section, $\epsilon$-greedy policy iteration is described, which combines exploitation of the current knowledge of optimal decisions with exploration of unknown new potentials. Furthermore, the on-policy temporal-difference control method known as the SARSA method is presented, which applies bootstrapping techniques to speed up the learning process.

$\epsilon$-Greedy Policy Iteration Control. $\epsilon$-Greedy policy iteration covers a combined solution. On the one hand, the MC method is applied to learn the action-value function $Q(s, a)$. On the other hand, the agent can act greedily, which means that it chooses the optimal action on the basis of the actual action-value function $Q$. This kind of action policy exploits only the current experience and does not support exploring alternatives. With a small change in the strategy, this issue can be solved: let the agent act randomly with probability $\epsilon$ and greedily with probability $1 - \epsilon$ ([4], Section 5.4):
$$\pi(a \mid s) = \begin{cases} \dfrac{\epsilon}{|A|} + 1 - \epsilon & \text{if } a = \arg\max_{a' \in A} Q(s, a'), \\ \dfrac{\epsilon}{|A|} & \text{otherwise.} \end{cases}$$
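The exploration strategy itself is a few lines of code; the following sketch (our own, with a dictionary-based Q-table) implements $\epsilon$-greedy action selection.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Pick a random action with probability epsilon, otherwise the greedy one.

    Q is a dict keyed by (state, action) pairs; sketch of the exploration strategy.
    """
    if random.random() < epsilon:
        return random.choice(actions)                            # explore
    return max(actions, key=lambda a: Q.get((state, a), 0.0))    # exploit
```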

On-Policy Temporal-Difference Control Method, Aka SARSA Method. Similar to model-free prediction methods, there is also an algorithm that lets the agent learn from incomplete episodes by applying bootstrapping ([4], Section 6.4). In this case, the $\epsilon$-greedy policy iteration method needs to be modified in the following way: instead of using the MC method, TD learning should be applied for learning the action-value function, which makes it possible to perform a learning step after each and every action and to act according to the most up-to-date action-value function, in a similar way as in $\epsilon$-greedy policy iteration. The SARSA name comes from the acronym of the quantities used: state $S$, action $A$, reward $R$, next state $S'$, next action $A'$. Following the SARSA method, the action-value function update is $Q(S, A) \leftarrow Q(S, A) + \alpha \left( R + \gamma Q(S', A') - Q(S, A) \right)$. It can be proved that under certain conditions, the SARSA action-value function converges to the optimal action-value function: $Q(s, a) \to q_*(s, a)$.
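The SARSA update, applied after every transition (S, A, R, S', A'), can be sketched as follows (our own illustration, reusing the dictionary-based Q-table).

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy SARSA update from the transition (S, A, R, S', A').

    Q is a dict keyed by (state, action) pairs; illustrative sketch only.
    """
    td_target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q
```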

D. Off-Policy Learning

There are several situations where the learning process is not based only on the agent's own experience. Formally, this means that the target policy $\pi$, the state-value function $v_\pi$, or the action-value function $q_\pi$ is determined by observing the results of an external behaviour policy $\mu$.

In this section, importance sampling is shown as a way to determine an accurate estimate of the learning objective, and then Q-learning is described as an effective alternative that provides value function iteration with a lower variance.

Importance Sampling. One possible way to handle the difference between the target and the behaviour policy is importance sampling, where a correction multiplier is applied when processing the observations ([4], Section 5.8). If MC learning is combined with importance sampling, then the value function update looks like $V(S_t) \leftarrow V(S_t) + \alpha \left( G_t^{\pi/\mu} - V(S_t) \right)$, where $G_t^{\pi/\mu} = \prod_{k=t}^{T} \frac{\pi(A_k \mid S_k)}{\mu(A_k \mid S_k)} \, G_t$. However, because the corrections are made at the end of an episode, the product of the multipliers can lead to a dramatically high variance, and hence MC learning is not suitable for off-policy learning.

Therefore, TD learning is much more adequate to combine with importance sampling, because the correction multiplier is applied to only a single step and not to a whole episode:
$$V(S_t) \leftarrow V(S_t) + \alpha \left( \frac{\pi(A_t \mid S_t)}{\mu(A_t \mid S_t)} \left( R_{t+1} + \gamma V(S_{t+1}) \right) - V(S_t) \right).$$

Q-Learning. Another possible way to handle the difference between the target and the behaviour policy is to modify the value function update logic, as Q-learning does ([4], Section 6.5). Assume that in state $S_t$, the very next action is derived by using the behaviour policy: $A_{t+1} \sim \mu(\cdot \mid S_t)$. By taking action $A_{t+1}$, the immediate reward $R_{t+1}$ and the next state $S_{t+1}$ are determined. For the value function update, however, let us consider an alternative successor action on the basis of the target policy: $A' \sim \pi(\cdot \mid S_{t+1})$. Therefore, importance sampling is not necessary, and the Q-learning value function update looks like $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( R_{t+1} + \gamma Q(S_{t+1}, A') - Q(S_t, A_t) \right)$.

In a special case, if the target policy is chosen as a pure greedy policy and the behaviour policy follows an $\epsilon$-greedy policy, then the so-called SARSAMAX update can be defined as follows: $Q(S, A) \leftarrow Q(S, A) + \alpha \left( R + \gamma \max_{a'} Q(S', a') - Q(S, A) \right)$. Last but not least, it has been proven that Q-learning control converges to the optimal action-value function: $Q(s, a) \to q_*(s, a)$.
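The following sketch (our own illustration) shows the SARSAMAX update; the behaviour policy only decides which transitions are observed, while the update always bootstraps from the greedy successor action.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Off-policy Q-learning (SARSAMAX) update: the target uses max_a' Q(S', a').

    Q is a dict keyed by (state, action) pairs; illustrative sketch only.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q
```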

E. Value Function Approximation

The reinforcement learning methods discussed in the previous sections represented value functions by lookup tables, but in practice it is not feasible to operate with state-level or state-action-level lookup tables. On the one hand, it would be very memory- and computation-intensive; on the other hand, the learning process would be too slow if the state and/or action spaces are large. The solution for large problems is to estimate the state-value and action-value functions with function approximation: $\hat{v}(s, \mathbf{w}) \approx v_\pi(s)$, and similarly, $\hat{q}(s, a, \mathbf{w}) \approx q_\pi(s, a)$.

There are many kinds of function approximation methods that can be applied: linear combinations of features, neural networks, decision trees, and Fourier bases. In this section, the first two types of methods are discussed. First, the gradient descent method is presented, which can be effectively combined with Monte-Carlo or temporal-difference methods for value function approximation, and then the deep Q-network is described, which provides a more sample-efficient way of learning.

Value Function Approximation by Gradient Descent. A well-known tool for function approximation is gradient descent ([4], Section 9.3). Let $J(\mathbf{w})$ be a differentiable function of the parameter vector $\mathbf{w}$. Define the gradient of $J(\mathbf{w})$ as $\nabla_{\mathbf{w}} J(\mathbf{w}) = \left( \frac{\partial J(\mathbf{w})}{\partial w_1}, \ldots, \frac{\partial J(\mathbf{w})}{\partial w_n} \right)^\top$. To find a local minimum of $J(\mathbf{w})$, the parameter needs to be adjusted in the direction of the negative gradient by $\Delta \mathbf{w} = -\frac{1}{2} \alpha \nabla_{\mathbf{w}} J(\mathbf{w})$, where $\alpha$ is the learning step-size parameter.

An effective solution is to use gradient descent with a linear combination of features, because in this case the formulas become much simpler. The value function representation looks like $\hat{v}(S, \mathbf{w}) = \mathbf{x}(S)^\top \mathbf{w}$, while the objective function that minimises the mean-squared error between the true value function and its approximation is $J(\mathbf{w}) = \mathbb{E}_\pi \left[ \left( v_\pi(S) - \mathbf{x}(S)^\top \mathbf{w} \right)^2 \right]$. It is proven that stochastic gradient descent with a linear combination of features converges to the global optimum. Furthermore, the update rule is quite simple: $\nabla_{\mathbf{w}} \hat{v}(S, \mathbf{w}) = \mathbf{x}(S)$, and then $\Delta \mathbf{w} = \alpha \left( v_\pi(S) - \hat{v}(S, \mathbf{w}) \right) \mathbf{x}(S)$. The result shows that the parameter adjustment consists of three components: learning step-size, prediction error, and feature value. In practice, the true value function is usually not known, but a noisy sample of it is available, depending on the method:
(i) For the MC method, the target is the return $G_t$, and hence the parameter update is $\Delta \mathbf{w} = \alpha \left( G_t - \hat{v}(S_t, \mathbf{w}) \right) \mathbf{x}(S_t)$.
(ii) For the TD(0) method, the target is the TD target $R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w})$, and the parameter update is $\Delta \mathbf{w} = \alpha \left( R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w}) \right) \mathbf{x}(S_t)$.
(iii) For TD($\lambda$), the target is the $\lambda$-return $G_t^\lambda$, and the parameter update is $\Delta \mathbf{w} = \alpha \left( G_t^\lambda - \hat{v}(S_t, \mathbf{w}) \right) \mathbf{x}(S_t)$.

Whichever method is chosen, the RL learning process needs to update the value function approximation with the same frequency as in the original method.
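As a minimal sketch of the TD(0) variant above (our own illustration with hypothetical feature vectors), one semi-gradient parameter update looks as follows.

```python
import numpy as np

def linear_td0_step(w, x_s, r, x_s_next, alpha=0.01, gamma=0.9):
    """Semi-gradient TD(0) step with a linear value function v(s, w) = x(s)^T w.

    x_s and x_s_next are the feature vectors of the current and next state;
    the parameter change is alpha * (TD target - estimate) * features.
    """
    v_s, v_s_next = x_s @ w, x_s_next @ w
    td_error = r + gamma * v_s_next - v_s
    return w + alpha * td_error * x_s     # delta_w = alpha * delta * x(S_t)

# Minimal usage: three features, one observed transition (hypothetical values).
w = np.zeros(3)
w = linear_td0_step(w, np.array([1.0, 0.0, 0.5]), 1.0, np.array([0.0, 1.0, 0.2]))
```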

Deep Q-Network. Even if gradient descent-based value function approximation methods can be very computation-efficient and the updates can be managed incrementally, they are less sample-efficient, which means that the information that could be extracted from an observation is not necessarily exploited.

There are batch methods that work with experience replay. First, all the observed experiences should be collected. Let us denote $\mathcal{D}$ as the experience consisting of state-value pairs: $\mathcal{D} = \{ (s_1, v_1^\pi), (s_2, v_2^\pi), \ldots, (s_T, v_T^\pi) \}$. Artificial observations can be generated by random sampling from the experience history: $(s, v^\pi) \sim \mathcal{D}$. Then, stochastic gradient descent can be applied to them: $\Delta \mathbf{w} = \alpha \left( v^\pi - \hat{v}(s, \mathbf{w}) \right) \nabla_{\mathbf{w}} \hat{v}(s, \mathbf{w})$. In this way, $\mathbf{w}$ converges to the least squares solution.

One of the most commonly used RL methods was born by combining experience replay and Q-learning with a periodically frozen target policy:
(1) By using the behaviour policy, action $a_t$ is taken according to the $\epsilon$-greedy policy
(2) Transitions are stored in the replay memory $\mathcal{D}$ as $(s_t, a_t, r_{t+1}, s_{t+1})$
(3) Random mini-batch samples of transitions are generated from $\mathcal{D}$
(4) On the basis of them, the Q-learning targets are determined by using the fixed parameters $\mathbf{w}^-$
(5) The mean-squared error between the Q-network and the Q-learning targets is minimised: $\mathcal{L}(\mathbf{w}) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{D}} \left[ \left( r + \gamma \max_{a'} Q(s', a'; \mathbf{w}^-) - Q(s, a; \mathbf{w}) \right)^2 \right]$
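The following sketch (our own illustration) shows the two DQN ingredients, experience replay and a frozen target network; a simple linear Q-function stands in for the deep network so that the example stays self-contained, whereas a real application would use a neural network.

```python
import random
from collections import deque
import numpy as np

class TinyDQN:
    """Minimal DQN-style sketch: replay memory plus periodically frozen targets."""

    def __init__(self, n_features, n_actions, gamma=0.99, alpha=1e-3,
                 buffer_size=10_000, batch_size=32):
        self.w = np.zeros((n_actions, n_features))       # online parameters w
        self.w_target = self.w.copy()                    # frozen parameters w^-
        self.memory = deque(maxlen=buffer_size)          # replay memory D
        self.gamma, self.alpha, self.batch_size = gamma, alpha, batch_size

    def q(self, x, w=None):
        return (self.w if w is None else w) @ x          # Q(s, .) for features x

    def store(self, x, a, r, x_next, done):
        self.memory.append((x, a, r, x_next, done))      # step (2): store transition

    def train_step(self):
        if len(self.memory) < self.batch_size:
            return
        batch = random.sample(list(self.memory), self.batch_size)   # step (3)
        for x, a, r, x_next, done in batch:
            # Step (4): Q-learning target with the frozen parameters w^-.
            target = r if done else r + self.gamma * self.q(x_next, self.w_target).max()
            td_error = target - self.q(x)[a]
            self.w[a] += self.alpha * td_error * x       # step (5): SGD on squared error

    def sync_target(self):
        self.w_target = self.w.copy()                    # periodic target freeze
```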

F. Policy Gradient

In contrast to value-based methods, where the optimal action is determined on the basis of the learnt value function in a particular state, policy gradient methods approximate the optimal policy directly: $\pi_\theta(a \mid s) = P[A_t = a \mid S_t = s, \theta]$.

An objective function $J(\theta)$ is necessary to measure how well the fitted policy approaches the optimal policy. In this case, policy-based RL becomes an optimization problem of finding the optimal $\theta$ according to $J(\theta)$. There are methods that use the gradient, such as gradient descent, conjugate gradient, or quasi-Newton methods, and there are methods that do not, such as hill climbing, simplex, or genetic algorithms. In general, these kinds of methods show better convergence properties, can work effectively with high-dimensional or continuous action spaces, and, last but not least, can learn stochastic policies. On the other hand, policy gradient methods typically converge to a local rather than a global optimum. It is important to highlight that value functions can also be used to learn the optimal parameter, but once it is learnt, the value functions are not necessary for selecting the optimal action.

Softmax. Let $J(\theta)$ be a policy objective function. Policy gradient algorithms search for a local optimum in $J(\theta)$ by ascending the gradient of the policy: $\Delta \theta = \alpha \nabla_\theta J(\theta)$. Assuming that the policy $\pi_\theta$ is differentiable and its gradient is $\nabla_\theta \pi_\theta(s, a)$, the likelihood ratio can be transformed into the following form: $\nabla_\theta \pi_\theta(s, a) = \pi_\theta(s, a) \frac{\nabla_\theta \pi_\theta(s, a)}{\pi_\theta(s, a)} = \pi_\theta(s, a) \nabla_\theta \log \pi_\theta(s, a)$, where $\nabla_\theta \log \pi_\theta(s, a)$ is called the score function.

The softmax policy method is based on the approach of weighting actions by using linear combinations of features $\phi(s, a)^\top \theta$ ([4], Section 13.2). Therefore, the probabilities of actions are proportional to the exponentiated weights: $\pi_\theta(s, a) \propto e^{\phi(s, a)^\top \theta}$. The score function is $\nabla_\theta \log \pi_\theta(s, a) = \phi(s, a) - \mathbb{E}_{\pi_\theta}[\phi(s, \cdot)]$.

Gaussian/Natural Policy Gradient. In continuous action spaces, a Gaussian policy is a natural option. In this case, the mean is a linear combination of features: $\mu(s) = \phi(s)^\top \theta$. By fixing the variance as $\sigma^2$, the policy will be Gaussian: $a \sim \mathcal{N}(\mu(s), \sigma^2)$. The score function is $\nabla_\theta \log \pi_\theta(s, a) = \frac{(a - \mu(s)) \phi(s)}{\sigma^2}$.

Monte-Carlo Policy Gradient Method Aka REINFORCE. The Monte-Carlo policy gradient method, better known as the REINFORCE algorithm, updates the parameter $\theta$ by using stochastic gradient ascent. It is strongly based on the policy gradient theorem, which generalizes the likelihood ratio approach to multistep MDPs by replacing the immediate reward $r$ with the long-term value $Q^{\pi_\theta}(s, a)$, with weak restrictions on the objective function. The key idea is that a locally optimal policy can be found by gradient ascent on the objective function as follows: $\Delta \theta_t = \alpha \nabla_\theta \log \pi_\theta(S_t, A_t) G_t$, where $G_t$ is an unbiased sample of $Q^{\pi_\theta}(S_t, A_t)$.
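A compact REINFORCE sketch with a softmax policy is shown below (our own illustration; the episode format, with a per-state feature matrix over all actions, is a hypothetical assumption).

```python
import numpy as np

def softmax_policy(theta, features):
    """pi_theta(a|s) proportional to exp(phi(s,a)^T theta).

    `features` has shape (n_actions, n_features)."""
    prefs = features @ theta
    prefs -= prefs.max()                       # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def reinforce_episode_update(theta, episode, alpha=0.01, gamma=0.99):
    """One REINFORCE update from a finished episode.

    `episode` is a list of (features, action, reward) tuples, where `features`
    is the phi(s, .) matrix for all actions in that state (hypothetical format).
    """
    G = 0.0
    for features, a, r in reversed(episode):
        G = r + gamma * G                      # return G_t
        probs = softmax_policy(theta, features)
        score = features[a] - probs @ features # grad log pi_theta(s, a)
        theta = theta + alpha * score * G      # stochastic gradient ascent
    return theta
```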

Actor-Critic Policy Gradient. In practice, REINFORCE still has a high variance. To handle this, the action-value function can also be estimated: $Q_{\mathbf{w}}(s, a) \approx Q^{\pi_\theta}(s, a)$. In this way, there are two sets of parameters:
(i) Critic: it updates the action-value function parameters $\mathbf{w}$
(ii) Actor: it updates the policy parameters $\theta$ in the direction suggested by the actual version of the critic

Updates should be done at each elementary step as follows:
(i) Sample a reward: $r = R_s^a$
(ii) Sample a transition: $s' \sim P_{s \cdot}^a$
(iii) Sample the next action: $a' \sim \pi_\theta(s', \cdot)$
Then the critic updates $\mathbf{w}$ using the TD error $\delta = r + \gamma Q_{\mathbf{w}}(s', a') - Q_{\mathbf{w}}(s, a)$, while the actor updates $\theta \leftarrow \theta + \alpha \nabla_\theta \log \pi_\theta(s, a) Q_{\mathbf{w}}(s, a)$.
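A single step of this basic action-value actor-critic scheme can be sketched as follows (our own illustration with a linear critic $Q_{\mathbf{w}}(s, a) = \phi(s, a)^\top \mathbf{w}$; all names and input formats are hypothetical).

```python
import numpy as np

def actor_critic_step(theta, w, phi_all, a, r, phi_next_all, policy_fn,
                      alpha=0.01, beta=0.01, gamma=0.99):
    """One step of a basic action-value actor-critic (QAC) scheme.

    phi_all      : feature matrix phi(s, .) for all actions in the current state,
    phi_next_all : the same for the successor state,
    policy_fn    : maps (theta, feature matrix) to action probabilities
                   (e.g. the softmax policy sketched above).
    """
    probs_next = policy_fn(theta, phi_next_all)
    a_next = np.random.choice(len(probs_next), p=probs_next)   # sample A'
    q_sa, q_next = phi_all[a] @ w, phi_next_all[a_next] @ w
    delta = r + gamma * q_next - q_sa                          # critic TD error
    probs = policy_fn(theta, phi_all)
    score = phi_all[a] - probs @ phi_all                       # grad log pi(s, a)
    theta = theta + alpha * score * q_sa                       # actor update
    w = w + beta * delta * phi_all[a]                          # critic update
    return theta, w, a_next
```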

G. Model-Based Methods

Model-free methods learn the value function and/or policy directly from their experience of a real environment. The accuracy of the RL agent's knowledge can be raised by extending the experience collection process. This can be achieved either by setting up an artificial virtual environment, by defining reward and state transition functions that describe the real environment well, or by building a model that approximates the real environment by learning from its history.

If it is assumed that the state space $S$ and the action space $A$ are known, then a model $\mathcal{M} = \langle P_\eta, R_\eta \rangle$ is a representation of the MDP if $S_{t+1} \sim P_\eta(S_{t+1} \mid S_t, A_t)$ and $R_{t+1} = R_\eta(S_t, A_t)$. Learning the model from experience is a supervised learning problem. Figure 13 presents the basic concept of model-based learning methods.

First, the model should be learnt, and thus an internal simulation environment can be defined. Then, using the model representation, the model-free RL methods can be applied. So, model-based techniques differ from model-free techniques by using an internal model representation to derive rewards and state transitions.
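As an illustration of this idea, the following Dyna-Q-style sketch (our own, not taken from the surveyed literature) learns a simple table model from real transitions and reuses it for additional simulated planning updates; `step_env` is a hypothetical environment interface.

```python
import random
from collections import defaultdict

def dyna_q(step_env, actions, steps=100, planning_steps=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Dyna-Q-style sketch: direct RL from real experience plus model-based planning.

    `step_env(s, a)` is a hypothetical interface returning (reward, next_state, done).
    """
    Q = defaultdict(float)
    model = {}                                     # learned model: (s, a) -> (r, s')
    s = 0                                          # assumed start state
    for _ in range(steps):
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda x: Q[(s, x)]))
        r, s_next, done = step_env(s, a)
        # Direct RL update from real experience (Q-learning).
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        model[(s, a)] = (r, s_next)                # supervised model learning
        # Planning: replay simulated transitions drawn from the learned model.
        for _ in range(planning_steps):
            (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
            ptarget = pr + gamma * max(Q[(ps_next, b)] for b in actions)
            Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
        s = 0 if done else s_next
    return Q
```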

H. Multiagent Learning Systems

In Industry 4.0 applications, usually not a single RL agent is set up, but multiple ones. The multiagent RL topic addresses the sequential decision-making problem of multiple autonomous agents that operate in a common or quite similar environment, each of which aims to optimize its own long-term return by interacting with the environment and a central system and/or other agents.

Markov Games. One way to generalize MDPs for multiple agents is Markov games (MG), also known as stochastic games. Formally, a Markov game can be defined as a tuple $\langle N, S, \{A^i\}_{i \in N}, P, \{R^i\}_{i \in N}, \gamma \rangle$, where $N = \{1, \ldots, n\}$ denotes the set of agents, $S$ denotes the state space of all the agents, and $A^i$ denotes the action space of agent $i$. By introducing the joint action space $A = A^1 \times \ldots \times A^n$, let $P : S \times A \times S \to [0, 1]$ be the transition probability function from any state $s \in S$ to a particular state $s' \in S$ for a joint action $a \in A$, while $R^i : S \times A \times S \to \mathbb{R}$ is the reward function that determines the immediate reward of agent $i$ when starting from state $s$, taking the joint action $a$, and moving to state $s'$. Last but not least, $\gamma \in [0, 1]$ is the discount factor. Figure 14 shows the general framework of Markov games.

MG problems can be classified by the knowledge sharing strategies between the agents and the central system and by their goals: whether they can learn from each other, whether it is worth sharing observations or policies with each other, or whether their goals are conflicting. The main categories are:
(i) Cooperative agents problem
(ii) Conflicting agents problem
(iii) Mixed problem

In a fully cooperative setting, all agents have the very same, identical reward function: $R^1 = R^2 = \ldots = R^n$. This is also referred to as a multiagent MDP (MMDP). With this approach, the state- and action-value functions are identical for all agents, which enables single-agent RL algorithms to be applied if all agents are coordinated as one decision maker. The global optimum for cooperation then constitutes a Nash equilibrium of the game.

A Nash equilibrium (NE) characterizes an equilibrium point $\pi_* = (\pi_*^1, \ldots, \pi_*^n)$ from which none of the agents has any incentive to deviate. As a standard learning goal for MARL, an NE always exists for discounted MGs, but it may not be unique in general. Most of the MARL algorithms are contrived to converge to such an equilibrium point.

We believe that our summary of the major reinforcement learning methods gives a useful and efficient overview of the concepts behind them. As our literature overview shows, there are numerous further modifications and extensions built on the basis of these methods. By following our questionnaire in Figure 8, it becomes easier to determine the relevant area of RL methods that can provide an appropriate solution fitted to a given learning problem.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the TKP2020-NKA-10 project financed under the 2020-4.1.1-TKP2020 Thematic Excellence Programme by the National Research, Development and Innovation Fund of Hungary.