Special Issue: Bio-Inspired Learning and Adaptation for Optimization and Control of Complex Systems
Research Article | Open Access
Lei Xi, Lang Liu, Yuehua Huang, Yanchun Xu, Yunning Zhang, "Research on Hierarchical and Distributed Control for Smart Generation Based on Virtual Wolf Pack Strategy", Complexity, vol. 2018, Article ID 2782314, 14 pages, 2018. https://doi.org/10.1155/2018/2782314
Research on Hierarchical and Distributed Control for Smart Generation Based on Virtual Wolf Pack Strategy
Haze pollution has become a serious social problem, and one significant remedy is the large-scale introduction of renewable energy. How to ensure that the power system adapts well to the integration and consumption of new energy has therefore become an important scientific issue. This study explores a smart generation control method called hierarchical and distributed control based on the virtual wolf pack strategy. The proposed method builds on the multiagent system stochastic consensus game principle and integrates a new win-lose judgment criterion and an eligibility trace. Simulations, conducted on a modified IEEE two-area load frequency control power system model and a model of the Hubei power grid in China, demonstrate that the proposed method obtains the optimal collaborative control of AGC units in a given regional power grid. Compared with several smart methods, the proposed one improves the closed-loop system performance and reduces carbon emission, while achieving faster convergence and stronger robustness.
In recent years, thermal power generation has aggravated environmental pollution, especially air pollution. Therefore, more and more clean energy sources such as wind and photovoltaics are continuously merged into the strongly coupled interconnected power grid. However, this brings new problems such as voltage limit violations, power fluctuations, and frequency instability [2–4], and the safe operation of the power grid is also affected. Traditional centralized automatic generation control (AGC) cannot achieve control performance comparable to decentralized AGC when energy sources are more dispersed. Research on decentralized AGC is therefore an inevitable trend for the future smart grid.
In recent years, many scholars have been devoted to the optimal control strategy of decentralized AGC [5–13]. One early study put forward the concept of optimal AGC by using the original dual transformation method based on optimal control theory. It showed that the dynamic equation and the constructed AGC control strategy of the interconnected system could realize multiarea decentralized optimal AGC control. However, that optimal AGC controller needed to feed back all the state variables, which are difficult to obtain directly in an actual system. Another study proposed a new method based on model predictive control, focusing on a decentralized optimal AGC control strategy for a cooperative synchronous power grid. However, the stability and robustness of the multivariable predictive control method, including its application in an actual AGC system, needed further study, and the method was computationally intensive and time-consuming. Yu et al. demonstrated that optimal AGC can be achieved when the number of agents is small; however, the algorithm is only applicable to systems with few agents, so its application is limited. In the same direction, the authors have previously studied decentralized control, namely, decentralized correlated equilibrium Q(λ)-learning (DCEQ(λ)) based on multiagent (MA) systems. It can handle the complex stochastic dynamic characteristics and optimal coordinated control of AGC after the access of distributed energy. Nevertheless, as the number of MAs increases, the search time for the MA equilibrium solution grows geometrically, which limits the application of DCEQ(λ) in larger systems. Therefore, the decentralized win or learn fast policy hill-climbing(λ) (DWoLF-PHC(λ)) based on MA was developed, which uses an average mixed strategy instead of an equilibrium strategy.
Thus, the dynamic characteristics of the system are effectively improved, and dynamic optimization control of the total power is also obtained. However, DWoLF-PHC(λ) still has a multisolution problem, which results in system instability when the number of MAs increases sharply.
The above studies share the limitation that they focus only on the control strategy for the total power in AGC; the dynamic optimal allocation of the total power is not involved. In fact, the modern power grid has gradually developed into a hierarchical and distributed control (HDC) structure that integrates large-scale new energy. For this reason, a single control strategy can hardly meet the requirements of control performance standards (CPS). Therefore, a hierarchical and distributed control based on the virtual wolf pack strategy (HDC-VWPS) is proposed in order to attenuate the stochastic disturbance problem caused by the massive integration of new energy into the power grid. The proposed strategy is based on the multiagent system stochastic consensus game (MAS-SCG) and is divided into two parts. The first part is an AGC optimal control method that combines a new win-lose judgment criterion, the policy hill-climbing (PHC) algorithm, and an eligibility trace; the new win-lose judgment criterion is named policy dynamics-based WoLF (PDWoLF). The resulting control method, called PDWoLF-PHC(λ), is based on multiagent system stochastic game (MAS-SG) theory. The second part is the collaborative consensus (CC) algorithm, which is based on multiagent system collaborative consensus (MAS-CC) theory and is used to distribute the total power dynamically and optimally. Consequently, the combination of AGC control and power distribution is realized, and intelligence from the whole down to each branch is truly obtained. The significant difference between smart generation control (SGC) and AGC is that the original proportional-integral (PI) control in AGC is replaced by smart control in SGC.
The rest of the paper is organized as follows. The SGC framework based on the HDC structure is proposed in Section 2. The HDC-VWPS is expounded in Section 3, and Section 4 presents the AGC design based on HDC-VWPS. Section 5 covers the case study, and Section 6 concludes the paper.
2. SGC Framework Based on HDC Structure
Hierarchical reinforcement learning (HRL) is a hierarchical control method that can effectively solve the "curse of dimensionality" in traditional reinforcement learning. A new method, namely, HDC-VWPS, is put forward to obtain the optimal total power and its optimal dispatch dynamically. The term "virtual wolf pack" denotes a generator set group (GSG) of a certain control area. The PDWoLF-PHC(λ) with the win or learn fast (WoLF) attribute, based on heterogeneous MAS-SG theory, is adopted to obtain the total power of each GSG. Meanwhile, the ramp time CC algorithm based on homogeneous MAS-CC theory is used to distribute the total power to each unit dynamically, in order to achieve the optimal coordinated control of each GSG. The "leader" of a virtual wolf pack is a new dispatcher who is responsible for communicating and cooperating with the leaders of the other GSGs and for sending instructions to each unit in its GSG. Each GSG has exactly one leader. The SGC framework based on the HDC structure is shown in Figure 1, where ΔP_tie is the tie-line exchange power, Δf is the interconnected power grid frequency error, ΔP_Gi is the total power of GSGi, and ΔP_Giu is the regulation power of the uth unit in GSGi.
3. HDC-VWPS Design
An HDC-VWPS is designed to coordinate and optimize the operation of GSGs in the SGC system with the HDC structure through the integration of MAS-SG and MAS-CC.
3.1. MAS-SG Framework
Based on the MAS-SG framework, a PDWoLF-PHC(λ) algorithm is proposed for the game among GSGs to obtain the total power command of each GSG.
The WoLF principle can meet the convergence requirement by varying the learning rate without sacrificing rationality, namely, learning quickly when losing and cautiously when winning. However, in games larger than 2 × 2, players cannot calculate the win-lose criterion exactly and can only rely on estimation. Therefore, an improved WoLF variant, PDWoLF, whose judgment criterion can be computed exactly in games larger than 2 × 2, was explored. It can also converge to a Nash equilibrium in games with more than two actions.
It has been shown that the PHC algorithm meets the rationality requirement. Therefore, PDWoLF-PHC satisfies the requirements of convergence and rationality at the same time, and it converges faster with a higher learning rate ratio. PDWoLF-PHC(λ) extends classical Q-learning by combining the multistep backtracking idea of SARSA(λ) to search the optimal action-value function dynamically through continuous trial and error. The parameter λ refers to the use of an eligibility trace, which solves the temporal credit assignment problem of time-delayed reinforcement learning. The optimal value function and strategy are as follows:

Q*(s, a) = E[R(s, s', a) + γ max_{a'∈A} Q*(s', a')],
π*(s) = argmax_{a∈A} Q*(s, a),

where A is the set of possible actions under state s.
The eligibility trace is updated by

e_k(s, a) = γλ e_{k−1}(s, a) + 1, if (s, a) = (s_k, a_k),
e_k(s, a) = γλ e_{k−1}(s, a), otherwise,

where e_k(s, a) denotes the eligibility trace at the kth iteration for state s and action a, γ is the discount factor, and λ is the trace-attenuation factor.
The Q-function is iteratively updated according to

δ_k = R(s_k, s_{k+1}, a_k) + γ Q_k(s_{k+1}, a_g) − Q_k(s_k, a_k),
Q_{k+1}(s, a) = Q_k(s, a) + α δ_k e_k(s, a),

where α is the Q-learning rate, R(s_k, s_{k+1}, a_k) is the reward function value for the transition from state s_k to s_{k+1} under the selected action a_k, Q_k(s_k, a_k) is the value function when executing action a_k in state s_k (implemented with a look-up table), and a_g is a greedy action. After sufficient trial-and-error iterations, the state-value function converges to the optimal matrix with probability one. Finally, an optimal control strategy, represented by the optimal Q-function (the Q* matrix), is obtained.
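As an illustration, the eligibility-trace backup described above can be sketched in a few lines of Python. The tabular layout, parameter values, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def q_lambda_step(Q, e, s, a, r, s_next, alpha=0.1, gamma=0.9, lam=0.9):
    """One Q(lambda)-style backup with an accumulating eligibility trace."""
    a_greedy = int(np.argmax(Q[s_next]))                # greedy action in s'
    delta = r + gamma * Q[s_next, a_greedy] - Q[s, a]   # TD error delta_k
    e[s, a] += 1.0                                      # accumulate trace
    Q += alpha * delta * e                              # update all traced pairs
    e *= gamma * lam                                    # decay all traces
    return Q, e

n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
e = np.zeros_like(Q)
Q, e = q_lambda_step(Q, e, s=0, a=1, r=1.0, s_next=2)
```

One call propagates the TD error to every state-action pair in proportion to its trace, which is exactly the multistep backtracking role of λ described above.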
The win-lose criterion of PDWoLF-PHC(λ) is determined by two parameters, the decision change rate Δ(s, a) and the decision space slope Δ²(s, a), for a given agent. The strategy π is updated for the state-action pair (s, a) according to

π(s, a) ← π(s, a) + Δ_sa,

where Δ_sa is the variable quantity of the updating strategy. The updating rule is described as follows:

Δ_sa = −δ_sa, if a ≠ argmax_{a'} Q(s, a'),
Δ_sa = Σ_{a'≠a} δ_sa', otherwise,

with δ_sa = min(π(s, a), δ/(m − 1)).
Here m is the number of possible actions, and δ is the variable learning rate with δ ∈ {δ_win, δ_lose} and δ_lose > δ_win; δ_lose/δ_win is defined as the variable learning rate ratio. The criterion parameters Δ(s, a) and Δ²(s, a) are updated at each iteration, and the agent is judged to be winning when their product Δ(s, a) · Δ²(s, a) is negative and losing otherwise, which selects the corresponding learning rate δ.
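A minimal sketch of this policy update for a single state is given below. The "win" branch uses the cautious rate and the "lose" branch the fast rate, following the PDWoLF sign criterion; all variable names and numeric values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def pdwolf_phc_update(pi, Q_s, d_change, d_slope,
                      delta_win=0.05, delta_lose=0.2):
    """One PHC step for state s with the PDWoLF win-lose criterion."""
    n = len(pi)
    # PDWoLF criterion: winning when change * slope < 0
    delta = delta_win if d_change * d_slope < 0 else delta_lose
    a_star = int(np.argmax(Q_s))                  # greedy action
    step = np.minimum(pi, delta / (n - 1))        # probability mass to move
    new_pi = pi - step                            # shrink non-greedy actions
    new_pi[a_star] = pi[a_star] + np.sum(step) - step[a_star]
    return new_pi / new_pi.sum()                  # keep a valid distribution

pi = np.array([1 / 3, 1 / 3, 1 / 3])
Q_s = np.array([0.0, 1.0, 0.5])
pi = pdwolf_phc_update(pi, Q_s, d_change=0.1, d_slope=-0.2)  # "winning" case
```

The updated mixed strategy shifts probability toward the greedy action while remaining a proper distribution, which is the rationality property the PHC step provides.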
3.2. MAS-CC Framework
The MAS-CC framework is introduced into HDC-VWPS to dynamically allocate the total power command to each unit.
3.2.1. Graph Theory
The topology of a MAS can be expressed as a directed graph G = (V, E, A) with a set of nodes V, edges E ⊆ V × V, and a weighted adjacency matrix A = [a_ij]. Node v_i denotes the ith agent, edge e_ij means the relationship between agents, and the constant a_ij is the weight factor between v_i and v_j. If there is a directed path between any two vertices, the graph is called strongly connected. The Laplacian matrix L = [l_ij] of graph G can be written as

l_ij = −a_ij, i ≠ j,
l_ii = Σ_{j≠i} a_ij,

where the matrix L reflects the topology of the MA network.
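The Laplacian construction above can be sketched directly; the 3-node directed ring used here is an assumed example topology, chosen because it is strongly connected.

```python
import numpy as np

def laplacian(A):
    """Directed-graph Laplacian: L = diag(row sums of A) - A."""
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Strongly connected ring: 1 -> 2 -> 3 -> 1
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
L = laplacian(A)
# Every row of L sums to zero, a defining property of the Laplacian.
```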
3.2.2. Collaborative Consensus
In a MAS, it is called collaborative consensus (CC) when an agent interacts with its adjacent agents to reach agreement. A MAS consisting of n autonomous agents is regarded as n nodes in a directed graph G. The purpose of CC is for each agent to reach consensus by updating its state in real time after communicating with neighboring agents. Considering the communication delay among agents, the first-order CC algorithm of a discrete system is chosen as

x_i[k+1] = Σ_{j=1}^{n} d_ij[k] x_j[k],

where x_i is the state of the ith agent, k represents the discrete time series, and d_ij[k] denotes the (i, j) entry of the row-stochastic matrix D at discrete time k, given by

d_ij[k] = |l_ij| / Σ_{j=1}^{n} |l_ij|.
Consensus can be achieved if and only if the directed graph is strongly connected, under continuous communication and a constant gain.
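The first-order discrete consensus iteration can be demonstrated with a small example. The row-stochastic matrix and the initial states are illustrative assumptions; the ring topology is strongly connected, so the states converge to a common value as stated above.

```python
import numpy as np

def consensus(x, D, steps=200):
    """Iterate x_i[k+1] = sum_j d_ij * x_j[k] for a fixed number of steps."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        x = D @ x
    return x

# Row-stochastic weights over a 3-agent strongly connected ring
D = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
x_final = consensus([1.0, 5.0, 9.0], D)
# All three agents end at the same value; since this D is also
# column stochastic, that value is the average of the initial states.
```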
3.2.3. Ramp Time Collaborative Consensus
The ramp time is chosen as the consensus variable among all units in a GSG; a unit with a higher ramp rate is allocated a larger share of the disturbance. The ramp time of the uth unit in GSGi is obtained as

T_iu = ΔP_Giu / R_iu,

where ΔP_Giu is the regulation power of the uth unit in GSGi and R_iu is the ramp rate of the unit, calculated as

R_iu = R_iu^up, if ΔP_Giu ≥ 0,
R_iu = R_iu^down, otherwise,

where R_iu^up and R_iu^down are the upper and lower bounds of the ramp rate, respectively.
The ramp time of the uth unit in GSGi is updated according to the first-order CC algorithm as

T_iu[k+1] = Σ_{j=1}^{n_i} d_uj[k] T_ij[k],

where n_i is the total number of units in GSGi and D = [d_uj] is the row-stochastic matrix.
Then the ramp time of the GSGi leader is updated as

T_i1[k+1] = Σ_{j=1}^{n_i} d_1j[k] T_ij[k] + ε_i ΔP_error-i,

where ε_i represents the GSGi adjustment factor of the power error and ΔP_error-i denotes the power error between the GSGi total power command and the total power of all units:

ΔP_error-i = ΔP_Gi − Σ_{u=1}^{n_i} ΔP_Giu.
When the total power command ΔP_Gi ≥ 0, the ramp time needs to be increased if ΔP_error-i > 0 and reduced otherwise. Conversely, when ΔP_Gi < 0, the ramp time is increased or decreased in the opposite manner.
As the ramp time CC algorithm is executed among units, the power of some units may exceed their maximum power; the smaller a unit's maximum ramp time, the sooner its power limit is reached. When the power limit is reached, the uth unit's power is fixed at the limit and its ramp time is recalculated accordingly, where ΔP_Giu^max and ΔP_Giu^min are the maximum and minimum reserve capacities of the uth unit in GSGi, respectively. Furthermore, if the power of the uth unit exceeds its limit, the corresponding weight factor a_uj is set so that the saturated unit no longer participates in the consensus, where A_i = [a_uj] is the weighted adjacency matrix of GSGi.
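A dispatch sketch in the spirit of this criterion is given below: units with higher ramp rates take larger shares of the total command, saturated units are clipped at their limits and drop out, and their surplus is redistributed. The ramp rates and capacity limits are assumed example values, not the paper's data.

```python
import numpy as np

def dispatch(total, ramp_rates, p_max):
    """Allocate `total` power by ramp rate, respecting per-unit limits."""
    rates = np.array(ramp_rates, dtype=float)
    p = np.zeros_like(rates)
    free = np.ones_like(rates, dtype=bool)     # units not yet saturated
    remaining = total
    while remaining > 1e-9 and free.any():
        share = remaining * rates * free / (rates * free).sum()
        p = np.minimum(p + share, p_max)       # clip units at their limits
        free = p < p_max - 1e-12               # saturated units drop out
        remaining = total - p.sum()            # redistribute the surplus
    return p

p = dispatch(total=100.0,
             ramp_rates=[4.0, 2.0, 2.0],
             p_max=np.array([40.0, 50.0, 50.0]))
```

Here the fastest unit is first assigned half the command, hits its 40 MW limit, and the remaining 10 MW is split between the other two units.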
4. AGC Design Based on HDC-VWPS
4.1. Reward Function Selection
The impact of the energy management system (EMS) on the environment is considered, and carbon emission (CE) is introduced as part of the reward function. In load frequency control (LFC), each regional power grid controls the generator sets in its area according to its own area control error (ACE), with the main goal that the ACE is zero at steady state. Therefore, the weighted sum of CE and ACE is taken as the objective in the reward function. The reward function of GSGi is defined as the negative weighted sum of the CE of its units and the instantaneous ACE, where ΔP_Giu(k) is the actual output power of the uth unit in GSGi at the kth iteration, ACE_i(k) indicates the instantaneous ACE value at the kth iteration, and η_1 and η_2 represent the weight factors of the controlled area's CE and ACE, respectively; here each weight equals 0.5. B_iu is the CE intensity coefficient of the uth unit in GSGi, in kg/kWh, and ΔP_Giu^max and ΔP_Giu^min are the upper and lower bounds of the uth unit's capacity in GSGi, respectively. The CE intensity coefficient of each type of generator set depends on the unit regulation power ΔP_Giu of the GSGi in MW.
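The weighted CE/ACE objective can be sketched as follows. The 0.5 weights follow the text; the use of a squared ACE penalty, the coefficient values, and the function names are assumptions for illustration.

```python
def reward(p_actual, ce_coeffs, ace, w_ce=0.5, w_ace=0.5):
    """Negative weighted sum of total carbon emission and the ACE penalty.

    p_actual  -- actual output power of each unit (MW)
    ce_coeffs -- CE intensity coefficient of each unit (kg/kWh)
    ace       -- instantaneous area control error (MW)
    """
    ce = sum(b * p for b, p in zip(ce_coeffs, p_actual))  # total CE proxy
    return -(w_ce * ce + w_ace * ace ** 2)  # penalize emission and ACE

# Two units: a coal-fired unit (0.9 kg/kWh) and a hydro unit (0 kg/kWh)
r = reward(p_actual=[50.0, 30.0], ce_coeffs=[0.9, 0.0], ace=2.0)
```

A cleaner unit or a smaller ACE both raise the reward, which is what steers the learning toward low-carbon, zero-ACE operation.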
4.2. Parameter Setting
A reasonable setting of six parameters (λ, γ, α, δ_win, δ_lose, and ε_i) is required in the design of the control system.
The trace-attenuation factor λ allocates credit among state-action pairs. The parameter lies between 0 and 1 and determines the convergence rate and the handling of non-Markov decision process (MDP) effects for large time-delay systems. Generally, λ can be interpreted as a time-scaling element in the backtracking: for Q-function errors, a small λ assigns little credit to historical state-action pairs, while a large λ assigns much. Trial and error shows that 0.7 < λ < 0.95 is acceptable, and a value in this range is selected here.
The discount factor γ lies between 0 and 1 and discounts future rewards in the Q-functions. A value close to 1 should be chosen because the latest rewards are the most important in the thermal-dominated LFC process. Experiments demonstrate that 0.6 < γ < 0.95 is proper, and a value in this range is chosen here.
The Q-learning rate α is set between 0 and 1 and weighs the convergence rate of the Q-functions, namely, algorithm stability. Note that a larger α accelerates learning, while a smaller α enhances system stability. In the prelearning process, the initial value of α is chosen as 0.1 to obtain an overall search; afterwards, it is reduced linearly to gradually increase the stability of the system.
The variable learning rate δ lies between 0 and 1 and derives an optimal policy by maximizing the action value. In particular, the algorithm degrades into Q-learning if δ equals 1, because a maximum-value action is then permanently executed in every iteration. For a fast convergence rate, a greedy strategy with a variable learning rate ratio is selected in the stochastic game. Trial and error shows that a suitable ratio obtains stable control characteristics.
The value of the power error adjustment factor ε_i in GSGi is related to the total power ΔP_Gi of GSGi in MW.
4.3. HDC-VWPS Procedure
The overall HDC-VWPS procedure is described in Algorithm 1.
5. Case Study
5.1. The Modified IEEE Two-Area LFC Power System Model
In order to test the control performance of the proposed strategy, a modified IEEE two-area LFC power system model is selected as the simulation object; its framework is shown in Figure 2. The system parameters are taken from the literature, and those of GSG1 and GSG2 are provided in Table 1.
The work cycle of the AGC is set to 4 s. Note that HDC-VWPS has to undergo sufficient prelearning through off-line trial and error before final online implementation. This includes extensive exploration of the CPS state space for the optimization of the Q-functions and state-value functions. Figure 3 presents the prelearning of each area produced by a continuous 10 min sinusoidal disturbance. It is obvious that HDC-VWPS can converge to the optimal strategy in each GSG with a qualified 10 min average CPS1 and 10 min average ACE.
(a) The average of 10-min CPS1
(b) The average of 10-min ACE
(c) The HDC-VWPS controller output
Furthermore, the 2-norm of the Q-matrix difference is used as the criterion for the prelearning termination of an optimal strategy: prelearning terminates when ‖Q_k − Q_{k−1}‖₂ falls below a specified positive constant. Both the Q value and the look-up table are automatically saved after the prelearning, such that HDC-VWPS can be applied to a real power system. The convergence result of the Q-function differences in each GSG during prelearning is given in Figure 4, which shows that HDC-VWPS accelerates the convergence rate by nearly 26.7%~40% over that of Q(λ).
(a) The convergence result of Q(λ)
(b) The convergence result of HDC-VWPS
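The termination test described above amounts to comparing a matrix 2-norm against a threshold; a minimal sketch follows, in which the threshold value is an assumption for illustration.

```python
import numpy as np

def converged(Q_new, Q_old, xi=0.1):
    """Stop prelearning when the 2-norm of the Q change drops below xi."""
    return np.linalg.norm(Q_new - Q_old, 2) < xi

Q_old = np.zeros((2, 2))
Q_new = np.full((2, 2), 0.01)       # small residual change
done = converged(Q_new, Q_old)
```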
In order to evaluate the robustness of each algorithm, the control performances of DWoLF-PHC(λ), Q(λ), and Q-learning are compared with that of HDC-VWPS under a step and a stochastic load disturbance in GSG1. The simulation results under a step load disturbance are shown in Figure 5: Figure 5(a) shows overshoots of around 6.3758%, 4.907%, 7.2614%, and 13.0435%, respectively; Figure 5(b) shows average ACE values of 0.1261 MW, 1.0682 MW, 1.2216 MW, and 1.0438 MW, respectively; and Figure 5(c) shows minimum CPS1 values of 189.6487%, 186.7696%, 189.6426%, and 190.1703%, respectively. The simulation results under a stochastic load disturbance are described in Figure 6: Figure 6(a) demonstrates that HDC-VWPS has the strongest robustness; Figure 6(b) shows average ACE values of 22.7175 MW, 45.1846 MW, 66.6484 MW, and 75.7486 MW, respectively; and Figure 6(c) shows minimum CPS1 values of 167.7471%, 159.4400%, 150.6757%, and 127.3168%, respectively. Therefore, HDC-VWPS provides better control performance for AGC units.
(a) Controller output
(a) Controller output
Stochastic white noise is used as the load disturbance after the prelearning process, and the control performance of each algorithm in each GSG is summarized in Figure 7, where CE, the average frequency deviation |Δf|, the average 1 min ACE, and CPS1 are averaged over 24 h. It can be seen from Figure 7 that, compared with the other methods, HDC-VWPS reduces CE by 1.21%~1.51%, |Δf| by 4.5647 × 10⁻⁴~7.5851 × 10⁻⁴ Hz, and ACE by 5.79%~44.22%, and increases CPS1 by 0.0007%~0.02%.
5.2. Four-Area Model of Hubei Power Grid
The four-area model of the Hubei power grid is shown in Figure 8. As shown in Figure 9, an AC/DC hybrid Hubei power grid model, which consists of 43 units in four GSGs, is analyzed in this paper. The control performance standard is CPS, and the work cycle of AGC is set to 4 s. The step load disturbance of the Hubei power grid model is 118 MW. In the model, each unit is described by its governor output, turbine output, governor time constant, turbine time constant, and the equivalent AC frequency response function. Generation rate constraint (GRC) is considered in this study; the GRC and all the other system parameters are given in Table 1.
The system includes coal-fired power plants, hydropower plants, and pumped storage power plants. The output of each plant is governed by its own governor, and the setting point of AGC is obtained according to the optimal dispatch. The long-term AGC control performance based on MA is evaluated by a statistical experiment with a 30-day stochastic load disturbance. Four types of controllers are simulated: Q-learning, Q(λ), DWoLF-PHC(λ), and HDC-VWPS. The statistical experiment results obtained under impulsive perturbations and stochastic white-noise load fluctuations are shown in Figures 10 and 11, respectively. Here |Δf| and ACE are the average values of the frequency deviation and ACE, while CPS1, CPS2, and CPS are the monthly compliance percentages. The same weight of HDC-VWPS in each GSG is chosen, which yields more effective joint cooperation than the other policies; as a result, higher scalability and self-learning efficiency are achieved.
Figure 10 shows that HDC-VWPS in GSG1, compared with the other methods, reduces CE by 11.48%~29.45%, |Δf| by 0.0237~0.0325 Hz, and ACE by 8.57%~90.37%, and increases CPS1 by 1.03%~29.9%.
Figure 11 shows that HDC-VWPS in GSG1, compared with the other methods, reduces CE by 0.17%~20.24%, |Δf| by 0.003~0.078 Hz, and ACE by 45%~94%, and increases CPS1 by 0.03%~4%. Similar results are obtained in the other GSGs.
The simulation results show that HDC-VWPS has stronger adaptability and better control performance than the other three methods. In each GSG area, the win-lose criterion of a unit depends on the sign of the product of the decision change rate Δ(s, a) and the decision space slope Δ²(s, a). By determining whether an agent "loses" or "wins," the corresponding variable learning rate is selected to obtain the optimal Q-function through dynamic updating of the Q values. Meanwhile, the variable quantity Δ_sa is determined in the mixed strategy update, and the optimal mixed strategy is gained through continuous dynamic updating. The results also demonstrate that the proposed strategy can effectively reduce CE and improve the utilization rate of new energy.
6. Conclusions
Based on the MAS-SCG theory, a novel HDC-VWPS method with a new win-lose judgment criterion and an eligibility trace is proposed to dynamically obtain the optimal total power and its optimal dispatch. It can also attenuate the stochastic disturbance caused by the massive integration of new energy into the power grid.
Based on MAS-SG, a PDWoLF-PHC(λ) algorithm is proposed to solve the universality problem that a strict knowledge system is usually required for agents under the traditional MAS-SG framework. It also solves the problem that agents cannot accurately calculate the judgment criterion and converge slowly to a Nash equilibrium in games larger than 2 × 2. Based on MAS-CC theory, the ramp time CC algorithm is used to allocate the total power command to each unit dynamically.
The simulation results on the modified IEEE two-area LFC power system model and the Hubei power grid model in China verify the effectiveness of the proposed strategy. Compared with the other smart methods, the proposed one satisfies the CPS requirements and improves the performance of the closed-loop system. It can also reduce CE and maximize the utilization rate of energy.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (51707102 and 61603212).
References
- O. Dobzhanskyi, P. Gottipati, E. Karaman, X. Luo, E. A. Mendrela, and A. M. Trzynadlowski, “Multilayer-winding versus switched-flux permanent-magnet AC machines for gearless applications in clean-energy systems,” IEEE Transactions on Industry Applications, vol. 48, no. 6, pp. 2296–2302, 2012.
- A. Milicua, G. Abad, and M. A. Rodriguez Vidal, “Online reference limitation method of shunt-connected converters to the grid to avoid exceeding voltage and current limits under unbalanced operation—part I: theory,” IEEE Transactions on Energy Conversion, vol. 30, no. 3, pp. 852–863, 2015.
- A. Noruzi, T. Banki, O. Abedinia, and N. Ghadimi, “A new method for probabilistic assessments in power systems, combining Monte Carlo and stochastic-algebraic methods,” Complexity, vol. 21, no. 2, 110 pages, 2015.
- Y. A.-R. I. Mohamed, “Suppression of low and high-frequency instabilities and grid-induced disturbances in distributed generation inverters,” IEEE Transactions on Power Electronics, vol. 26, no. 12, pp. 3790–3803, 2011.
- M. Mohammadi, A. Danandeh, H. Nasir Aghdam, and N. Ojaroudi, “Wavelet neural network based on islanding detection via inverter-based DG,” Complexity, vol. 21, no. 2, 324 pages, 2015.
- O. Elgerd and C. Fosha, “Optimum megawatt-frequency control of multi-area electric energy systems,” IEEE Transactions on Power Apparatus and Systems, vol. PAS-89, no. 4, pp. 556–563, 1970.
- N. Ghadimi, “An adaptive neuro-fuzzy inference system for islanding detection in wind turbine as distributed generation,” Complexity, vol. 21, no. 1, 20 pages, 2015.
- N. Atic, D. Rerkpreedapong, A. Hasanovic, and A. Feliachi, “NERC compliant decentralized load frequency control design using model predictive control,” in 2003 IEEE Power Engineering Society General Meeting (IEEE Cat. No.03CH37491), vol. 559, Toronto, Ontario, Canada, July 2003.
- O. Abedinia, N. Amjady, and A. Ghasemi, “A new metaheuristic algorithm based on shark smell optimization,” Complexity, vol. 21, no. 5, 116 pages, 2016.
- A. Ahadi, N. Ghadimi, and D. Mirabbasi, “An analytical methodology for assessment of smart monitoring impact on future electric power distribution system reliability,” Complexity, vol. 21, no. 1, 113 pages, 2015.
- T. Yu, J. Liu, K. W. Chan, and J. J. Wang, “Distributed multi-step Q(λ) learning for optimal power flow of large-scale power grids,” International Journal of Electrical Power & Energy Systems, vol. 42, no. 1, pp. 614–620, 2012.
- T. Yu, L. Xi, B. Yang, Z. Xu, and L. Jiang, “Multiagent Stochastic Dynamic Game for Smart Generation Control,” Journal of Energy Engineering, vol. 142, no. 1, article 04015012, 2016.
- L. Xi, T. Yu, B. Yang, and X. Zhang, "A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm for smart generation control of interconnected complex power grids," Energy Conversion and Management, vol. 103, pp. 82–93, 2015.
- M. Bowling and M. Veloso, “Multi-agent learning using a variable learning rate,” Artificial Intelligence, vol. 136, no. 2, pp. 215–250, 2002.
- R. S. Sutton and A. G. Barto, Reinforcement Learning: an Introduction, MIT Press, Cambridge, 1998.
- B. Banerjee and J. Peng, “Adaptive policy gradient in multi-agent learning,” in Proceedings of the second international joint conference on Autonomous agents and multiagent systems - AAMAS '03, pp. 686–692, New York, NY, USA, July 2003.
- X. Zhang, H. Xu, T. Yu, B. Yang, and M. Xu, “Robust collaborative consensus algorithm for decentralized economic dispatch with a practical communication network,” Electric Power Systems Research, vol. 140, pp. 597–610, 2016.
- M. Ghavamzadeh, S. Mahadevan, and R. Makar, “Hierarchical multi-agent reinforcement learning,” Autonomous Agents and Multi-Agent Systems, vol. 13, no. 2, pp. 197–229, 2006.
- C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3-4, pp. 279–292, 1992.
- S. K. Gupta, K. Kar, S. Mishra, and J. T. Wen, “Collaborative energy and thermal comfort management through distributed consensus algorithms,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1285–1296, 2015.
- G. Ray, A. N. Prasad, and G. D. Prasad, “A new approach to the design of robust load-frequency controller for large scale power systems,” Electric Power Systems Research, vol. 51, no. 1, pp. 13–22, 1999.
- O. I. Elgerd and H. H. Happ, “Electric energy systems theory: an introduction,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-2, no. 2, pp. 296-297, 1972.
- D. Ernst, M. Glavic, and L. Wehenkel, “Power systems stability control: reinforcement learning framework,” IEEE Transactions on Power Systems, vol. 19, no. 1, pp. 427–435, 2004.
- T. Yu, B. Zhou, K. W. Chan, L. Chen, and B. Yang, “Stochastic optimal relaxed automatic generation control in non-Markov environment based on multi-step Q(λ) learning,” IEEE Transactions on Power Systems, vol. 26, no. 3, pp. 1272–1282, 2011.
Copyright © 2018 Lei Xi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.