Mathematical Problems in Engineering

Volume 2014 (2014), Article ID 514608, 21 pages

http://dx.doi.org/10.1155/2014/514608

## Inverse Optimal Control with Speed Gradient for a Power Electric System Using a Neural Reduced Model

^{1}CUCEI, Universidad de Guadalajara, Apartado Postal 51-71, Col. Las Aguilas, 45079 Zapopan, JAL, Mexico

^{2}CINVESTAV, Unidad Guadalajara, Apartado Postal 31-438, Plaza La Luna, 45091 Guadalajara, JAL, Mexico

Received 5 November 2013; Revised 30 January 2014; Accepted 30 January 2014; Published 16 March 2014

Academic Editor: Hamid R. Karimi

Copyright © 2014 Alma Y. Alanis et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents an inverse optimal neural controller with speed gradient (SG) for discrete-time unknown nonlinear systems in the presence of external disturbances and parameter uncertainties, applied to a power electric system with different types of faults in the transmission lines, including load variations. The controller is based on a discrete-time recurrent high order neural network (RHONN) trained with an extended Kalman filter (EKF) based algorithm. Electric power grids are considered complex systems due to their interconnections and number of state variables; therefore, a reduced neural model for the synchronous machine is proposed and used to stabilize a nine-bus system in the presence of a fault in three different transmission-line cases.

#### 1. Introduction

Many physical systems, such as electric power grids, computer and communication networks, networked dynamical systems, and transportation systems, are complex large-scale interconnected systems [1]. To control such large-scale systems, centralized control schemes have been proposed in the literature, assuming that global information is available for the overall system. Another problem in complex large-scale interconnected systems is the effect of delays, which are typically unknown and time-varying [2, 3]. While centralized control has theoretical advantages, it is very difficult to apply to a complex large-scale system with interconnections due to technical and economic reasons [4]. Furthermore, centralized control designs depend on the system structure and cannot handle structural changes: if subsystems are added or removed, the controller for the overall system must be redesigned. Therefore, decentralized control for interconnected power systems has attracted considerable attention from researchers in the field of complex and large-scale systems, such as multiarea interconnected power systems. Besides, due to the physical configuration and high dimensionality of interconnected systems, centralized control is neither economically feasible nor even necessary. These facts motivate the design of decentralized controllers, using only local information while guaranteeing stability for the whole system [1].

The main issue addressed in this paper is the analysis of faults in different transmission lines of an electric power system. Recurrent high order neural networks (RHONN) allow the identification of nonlinear systems, and the resulting RHONN model can then be used for controller design. Recently, some works on synchronous generators have proposed reduced models able to reproduce the full-order generator dynamics [1, 5]. The system under study consists of three interconnected synchronous generators (the nine-bus system). In previous case studies of this power electric system, a three-phase fault is introduced at the end of line 7 [6]; in this paper, the analysis also considers faults at the ends of buses 8 and 9, proposed and tested via simulation, with the purpose of producing and distributing reliable and robust electric energy.

On the other hand, a discrete-time model has been proposed [7], in which a recurrent high order neural network is used to implement a control law; this reduced model allows stabilization through the inverse optimal SG control law. In this work, a neural model of the multimachine system is proposed, which is useful because it focuses on the state variables most relevant for this paper: rotor position, velocity, and rotor voltage [7]. Furthermore, the control law is implemented for the power electric system consisting of three interconnected synchronous generators. A solution is proposed for the destabilization problem of the multimachine power electric system in the presence of a fault in one of its transmission lines, occurring at 10 seconds of simulation. An identification of the complete multimachine power electric system model (nine-bus system) is presented through a reduced neural model, which allows the design of a neural inverse optimal SG control law. Finally, the obtained results show that the control law stabilizes the system in the three fault cases presented.

In the literature, there are works that report parameter identification of synchronous machines for full-order models [5] as well as for reduced-order ones [8]; however, these models are for nominal conditions, that is, they do not consider fault scenarios. In [1], a reduced-order neural model is considered; however, it is developed for continuous time, whereas the need for real-time implementation makes digital models necessary. Besides, in [9], a discrete-time neural controller is developed for a single-machine system. The main contributions of this paper can therefore be stated as follows: first, a RHONN is used to establish a discrete-time reduced-order mathematical model for a multimachine power electric system; then, this neural model is used to synthesize an inverse optimal SG control law to stabilize the system; finally, three fault scenarios are considered in order to illustrate the applicability of the proposed scheme.

#### 2. Mathematical Preliminaries

##### 2.1. Discrete-Time High Order Neural Networks

The use of multilayer neural networks is well known for pattern recognition and for static systems modelling. The NN is trained to learn an input-output map. Theoretical works have proven that, even with just one hidden layer, a NN can uniformly approximate any continuous function over a compact domain, provided that the NN has a sufficient number of synaptic connections [10]. To implement the neural network (NN) design, a RHONN is used [7]; this model turns out to be very flexible because it allows incorporating a priori information into the model:
$$x_i(k+1) = w_i^\top z_i(x(k), u(k)), \quad i = 1, \ldots, n,$$
where $x_i$ is the state of the $i$th neuron and $w_i$ is the respective online adapted weight vector. Now we define the vector
$$z_i(x(k), u(k)) = \begin{bmatrix} \prod_{j \in I_1} y_{i_j}^{d_{i_j}(1)} & \prod_{j \in I_2} y_{i_j}^{d_{i_j}(2)} & \cdots & \prod_{j \in I_{L_i}} y_{i_j}^{d_{i_j}(L_i)} \end{bmatrix}^\top,$$
where $L_i$ is the respective number of high-order connections, $\{I_1, I_2, \ldots, I_{L_i}\}$ is a collection of nonordered subsets of $\{1, 2, \ldots, n+m\}$, $n$ is the state dimension, $m$ is the number of external inputs, and the $d_{i_j}(l)$ are nonnegative integers. The input vector to the neural network is defined by
$$y_i = \begin{bmatrix} S(x_1) & \cdots & S(x_n) & u_1 & \cdots & u_m \end{bmatrix}^\top, \qquad S(\varsigma) = \frac{1}{1 + e^{-\beta \varsigma}}, \quad \beta > 0,$$
where $\varsigma$ is any real value variable.
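As a sketch of how such a network evaluates, the following minimal one-step predictor computes $x_i(k+1) = w_i^\top z_i(x(k), u(k))$; the encoding of the high-order terms as (indices, powers) pairs is an illustrative assumption, not the paper's notation.

```python
import numpy as np

def sigmoid(s, beta=1.0):
    # Bounded activation S(s) = 1 / (1 + exp(-beta * s))
    return 1.0 / (1.0 + np.exp(-beta * s))

def rhonn_step(x, u, weights, high_order_terms):
    """One-step RHONN prediction x_i(k+1) = w_i^T z_i(x(k), u(k)).

    high_order_terms[i] is a list of (indices, powers) pairs selecting
    products of entries of y = [S(x_1),...,S(x_n), u_1,...,u_m]
    (an assumed encoding of the nonordered index subsets I_j).
    """
    y = np.concatenate([sigmoid(x), u])
    x_next = np.empty(len(weights))
    for i, (w_i, terms) in enumerate(zip(weights, high_order_terms)):
        # z_i stacks the high-order products for neuron i
        z_i = np.array([np.prod(y[idx] ** np.asarray(pw)) for idx, pw in terms])
        x_next[i] = w_i @ z_i
    return x_next
```

For example, a two-neuron network where neuron 1 uses $S(x_1)$ and $u_1$ linearly and neuron 2 uses $S(x_2)^2$ is encoded by two weight vectors and the corresponding term lists.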

##### 2.2. The EKF Training Algorithm

The best-known training approach for recurrent neural networks (RNN) is backpropagation through time [11]. However, it is a first-order gradient descent method, and hence its learning speed can be very slow [12]. Recently, extended Kalman filter (EKF) based algorithms have been introduced to train neural networks [7, 9, 13, 14]. With the EKF-based algorithm, the learning convergence is improved [14]. The EKF training of neural networks, both feedforward and recurrent, has proven to be reliable and practical for many applications over the past years [14]. It is known that Kalman filtering (KF) estimates the state of a linear system with additive state and output white noises [15, 16]. For KF-based neural network training, the network weights become the states to be estimated. In this case, the error between the neural network output and the measured plant output can be considered as additive white noise. Since the neural network mapping is nonlinear, an EKF-type algorithm is required (see [17] and references therein). The training goal is to find the optimal weight values which minimize the prediction error. The EKF-based training algorithm is described by [15]
$$w_i(k+1) = w_i(k) + \eta_i K_i(k) e_i(k),$$
$$K_i(k) = P_i(k) H_i(k) M_i(k),$$
$$P_i(k+1) = P_i(k) - K_i(k) H_i^\top(k) P_i(k) + Q_i(k),$$
with
$$M_i(k) = \left[ R_i(k) + H_i^\top(k) P_i(k) H_i(k) \right]^{-1}, \qquad e_i(k) = x_i(k) - \hat{x}_i(k),$$
where $P_i$ is the prediction error associated covariance matrix, $w_i$ is the weight (state) vector, $x_i$ is the $i$th plant state component, $\hat{x}_i$ is the $i$th neural state component, $\eta_i$ is a design parameter, $K_i$ is the Kalman gain matrix, $Q_i$ is the state noise associated covariance matrix, $R_i$ is the measurement noise associated covariance matrix, and $H_i$ is a matrix for which each entry $H_{ij}$ is the derivative of one of the neural network outputs, $\hat{x}_i$, with respect to one neural network weight, $w_{ij}$, as follows:
$$H_{ij}(k) = \left[ \frac{\partial \hat{x}_i(k)}{\partial w_{ij}(k)} \right]^\top.$$
Usually $P_i$, $Q_i$, and $R_i$ are initialized as diagonal matrices, with entries $P_i(0)$, $Q_i(0)$, and $R_i(0)$, respectively.
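The EKF weight update above can be sketched for a single neuron with scalar output as follows; the default noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ekf_update(w, P, H, e, eta=1.0, Q=None, R=None):
    """One EKF-based weight update for neuron i (scalar-output case).

    w : (L,) weight vector, P : (L, L) prediction error covariance,
    H : (L,) derivatives d(x_hat_i)/d(w_ij), e : scalar identification
    error x_i(k) - x_hat_i(k); eta is the design learning parameter.
    Q and R are the (assumed) state and measurement noise covariances.
    """
    L = len(w)
    Q = np.eye(L) * 1e-4 if Q is None else Q
    R = 1e-2 if R is None else R
    M = 1.0 / (R + H @ P @ H)        # innovation covariance inverse M_i(k)
    K = P @ H * M                    # Kalman gain K_i(k), shape (L,)
    w_new = w + eta * K * e          # weight (state) update
    P_new = P - np.outer(K, H) @ P + Q   # covariance update
    return w_new, P_new
```

One such update is run per neuron per sample; the covariance `P` stays symmetric because the correction term `outer(K, H) @ P` is symmetric when `P` is.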

#### 3. Controller Design

Optimal control is related to finding a control law for a given system such that a performance criterion is minimized. This criterion is usually formulated as a cost functional, which is a function of the state and control variables. The optimal control problem can be solved using Pontryagin’s maximum principle (a necessary condition) [18] and the method of dynamic programming developed by Bellman [19, 20], which can lead to a nonlinear partial differential equation called the Hamilton-Jacobi-Bellman (HJB) equation (a sufficient condition); nevertheless, solving the HJB equation is not a feasible task [21, 22].

##### 3.1. Inverse Optimal Control via CLF

In this paper, the inverse optimal control approach and its solution by proposing a quadratic control Lyapunov function (CLF) are used [23]; the CLF depends on a fixed parameter in order to satisfy the stability and optimality conditions. A posteriori, the speed gradient algorithm is established to compute this CLF parameter, which is used to solve the inverse optimal control problem. Motivated by the favorable stability margins of optimal control systems, a stabilizing feedback control law is proposed, which is optimal with respect to a meaningful cost functional. At the same time, it is desirable to avoid the difficult task of solving the HJB partial differential equation. In the inverse optimal control problem, a candidate CLF is used to construct an optimal control law directly without solving the associated HJB equation [24]. Inverse optimality is selected because it avoids solving the HJB partial differential equation while still allowing Kalman-type stability margins to be obtained [21].

In contrast to the inverse optimal control via passivity approach, in which a storage function is used as a candidate CLF and the inverse optimal control law is selected as the output feedback, for the inverse optimal control via CLF, the control law is obtained as a result of solving the Bellman equation. Then, a candidate CLF for the obtained control law is proposed such that it stabilizes the system and a posteriori a meaningful cost functional is minimized.

In this paper, a quadratic candidate CLF is used to synthesize the inverse optimal control law. The following assumptions and definitions allow the inverse optimal control solution via the CLF approach.

The full state of system (8), $x(k+1) = f(x(k)) + g(x(k))\, u(k)$ with state $x(k) \in \mathbb{R}^n$ and input $u(k) \in \mathbb{R}^m$, is assumed to be measurable.

*Definition 1 (inverse optimal control law). *Let us define the control law [23]
$$u^*(k) = -\frac{1}{2} R^{-1} g^\top(x(k)) \frac{\partial V(x(k+1))}{\partial x(k+1)}$$
to be inverse optimal (globally) stabilizing if (1) it achieves (global) asymptotic stability of $x = 0$ for system (8); (2) $V(x(k))$ is a (radially unbounded) positive definite function such that the inequality
$$\bar{V} := V(x(k+1)) - V(x(k)) + u^{*\top}(k) R\, u^*(k) \le 0$$
is satisfied. When $l(x(k)) := -\bar{V}$ is selected, then $V(x(k))$ is a solution for the HJB equation
$$l(x(k)) + V(x(k+1)) - V(x(k)) + \frac{1}{4} \frac{\partial V^\top(x(k+1))}{\partial x(k+1)}\, g(x(k))\, R^{-1} g^\top(x(k))\, \frac{\partial V(x(k+1))}{\partial x(k+1)} = 0,$$
where $V(x(k+1))$ is evaluated along $x(k+1) = f(x(k)) + g(x(k))\, u^*(k)$.

It is possible to establish the main conceptual differences between optimal control and inverse optimal control as follows. (i) For optimal control, the meaningful cost indexes $l(x(k))$ and $u^\top(k) R\, u(k)$ are given a priori; then, they are used to calculate $u(k)$ and $V(x(k))$ by means of the HJB equation solution. (ii) For inverse optimal control, a candidate CLF $V(x(k))$ and the meaningful cost index $u^\top(k) R\, u(k)$ are given a priori, and then these functions are used to calculate the inverse control law $u^*(k)$ and the meaningful cost index $l(x(k))$, defined as $l(x(k)) := -\bar{V}$.

As established in Definition 1, the inverse optimal control problem is based on the knowledge of $V(x(k))$. Thus, a CLF $V(x(k))$ is proposed such that (1) and (2) are guaranteed. That is, instead of solving (11) for $V(x(k))$, a quadratic control Lyapunov function
$$V(x(k)) = \frac{1}{2} x^\top(k) P\, x(k), \qquad P = P^\top > 0, \tag{13}$$
is proposed for control law (9), in order to ensure stability of the equilibrium point of system (8), which will be achieved by defining an appropriate matrix $P$. Moreover, it will be established that control law (9) with (13), which is referred to as the inverse optimal control law, optimizes a meaningful cost functional of the form
$$J = \sum_{k=0}^{\infty} \left( l(x(k)) + u^\top(k) R\, u(k) \right). \tag{14}$$

Consequently, by considering $V(x(k))$ as in (13), the control law takes the following form:
$$u^*(k) = -\frac{1}{2} \left( R + \frac{1}{2}\, g^\top(x(k))\, P\, g(x(k)) \right)^{-1} g^\top(x(k))\, P\, f(x(k)), \tag{15}$$
where $P = P^\top > 0$ and $R = R^\top > 0$. It is worth pointing out that $P$ and $R$ are positive definite and symmetric matrices; thus, the existence of the inverse in (15) is ensured.
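A minimal numerical sketch of evaluating such a CLF-based inverse optimal law, assuming the quadratic CLF $V(x) = \frac{1}{2} x^\top P x$ and known drift $f$ and input matrix $g$:

```python
import numpy as np

def inverse_optimal_control(x, f, g, P, R):
    """Inverse optimal control law for x(k+1) = f(x) + g(x) u:
    u = -(1/2) (R + (1/2) g^T P g)^{-1} g^T P f(x).
    P and R positive definite and symmetric, so the inverse exists.
    """
    gx = g(x)                        # (n, m) input matrix
    fx = f(x)                        # (n,) drift
    M = R + 0.5 * gx.T @ P @ gx      # (m, m), positive definite
    return -0.5 * np.linalg.solve(M, gx.T @ P @ fx)
```

As a quick check, for the unstable scalar system $x(k+1) = 2x(k) + u(k)$ with $P = 1$, $R = 0.1$, the law gives a closed-loop factor of $1/3$, so the state contracts at every step.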

##### 3.2. Speed-Gradient SG Algorithm

Equation (15) is redefined as
$$u^*(k) = -\frac{1}{2} \left( R + \frac{1}{2}\, g^\top(x(k))\, P(k)\, g(x(k)) \right)^{-1} g^\top(x(k))\, P(k)\, f(x(k)), \tag{16}$$
where $P(k) = P^\top(k) > 0$. This will allow us to compute a time-variant value $p(k)$ for $P(k)$, which ensures stability of system (8) by means of the SG algorithm.

In [25], a discrete-time application of the SG algorithm is formulated to find a control law which ensures the control goal
$$Q(x(k)) \le \Delta \quad \text{for } k \ge k^*,$$
where $Q(x(k))$ is a control goal function, $\Delta$ is a constant, and $k^*$ is the time at which the control goal is achieved. $Q$ ensures stability if it is a positive definite function.

Based on the SG application proposed in [25], the control law given by (15) is considered, with the matrix in (16) treated as a state-dependent function.

Consider the control law redefined for the speed gradient algorithm, which at every time step depends on the matrix $P(k)$. Let us define this matrix at every time step as
$$P(k) = p(k)\, P', \tag{18}$$
where $P'$ is a given constant matrix and $p(k)$ is a scalar parameter to be adjusted by the SG algorithm. Then the control law is transformed as follows:
$$u^*(k) = -\frac{p(k)}{2} \left( R + \frac{p(k)}{2}\, g^\top(x(k))\, P'\, g(x(k)) \right)^{-1} g^\top(x(k))\, P'\, f(x(k)),$$
where $p(k) > 0$ for all $k$. The SG algorithm is now reformulated for the inverse optimal control problem.

*Definition 2 (SG goal function). *Consider a time-varying parameter $p(k)$ with $p(k) > 0$ for all $k$, where $\mathcal{P}$ is the set of admissible values for $p(k)$ [23]. A nonnegative function of the form
$$Q(x(k), p) = V(x(k+1)), \tag{20}$$
where $V(x(k)) = \frac{1}{2}\, p\, x^\top(k) P' x(k)$ and $x(k+1)$ is as defined in (8), is referred to as the SG goal function for system (8), with $p \in \mathcal{P}$.

*Definition 3 (SG control goal). *Consider a constant $\Delta > 0$. The SG control goal for system (8) with (18) is defined as finding $p(k)$, so that the SG goal function [23], as in (20), fulfills
$$Q(x(k), p(k)) \le \Delta \quad \text{for } k \ge k^*, \tag{21}$$
where $\Delta$ is the desired upper bound, with $P(k)$ and $p(k)$ as defined in (18); $k^*$ is the time at which the SG control goal is achieved.

The solution must guarantee that $p(k) > 0$ in order to obtain a positive definite matrix $P(k)$.

To conclude, the SG algorithm is used to calculate $p(k)$ in order to achieve the SG control goal defined above.

Proposition 4. *Consider a discrete-time nonlinear system of the form (8) with (18) as input [23]. Let $Q$ be a SG goal function as defined in Definition 2 and denoted by $Q(x(k), p)$. Let $\epsilon$ and $\Delta$ be positive constant values, let $Q^*$ be a positive definite function with $Q^*(0) = 0$, and let $\gamma_d$ be a sufficiently small positive constant. Assume the following.* (i) *There exist $p^*$ and $\epsilon > 0$ such that $Q(x(k), p^*) \le \epsilon$.* (ii) *For all $p \in \mathcal{P}$,*
$$(p - p^*)\, \nabla_p Q(x(k), p) \ge Q(x(k), p) - Q(x(k), p^*),$$
*where $\nabla_p Q$ denotes the gradient of $Q$ with respect to $p$. Then, for any initial condition $p(0) > 0$, there exists a $k^*$ such that the SG control goal (21) is achieved by means of the following dynamic variation of the parameter $p$:*
$$p(k+1) = p(k) - \gamma_d\, \nabla_p Q(x(k), p(k)), \tag{25}$$
*with $\gamma_d > 0$ sufficiently small. Finally, for $k \ge k^*$, $p(k)$ becomes a constant value denoted by $p^*$, and the SG algorithm is completed.*

With $Q$ as defined in (20), the dynamic variation of the parameter $p$ in (25) remains positive for all time if $p(0) > 0$ and $\gamma_d$ is sufficiently small. Therefore, positiveness of $p(k)$ is ensured, and the requirement $P(k) > 0$ with $P(k) = p(k) P'$ is guaranteed. When the SG control goal (21) is achieved, then $p(k) = p^*$ for $k \ge k^*$. Thus, the matrix $P(k)$ in (18) is considered constant, with $P = p^* P'$ and $P'$ a design positive definite matrix. Under these constraints, we obtain
$$u^*(k) = -\frac{1}{2} \left( R + \frac{1}{2}\, g^\top(x(k))\, P\, g(x(k)) \right)^{-1} g^\top(x(k))\, P\, f(x(k)),$$
where $P = P^\top > 0$ and $R = R^\top > 0$.
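The speed-gradient idea of adjusting the scalar parameter by descending a goal function can be sketched as follows; the finite-difference gradient and the positivity clipping threshold are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def sg_step(p, goal_fn, gamma=0.05, eps=1e-6, p_min=1e-3):
    """One speed-gradient descent step on a goal function Q(p):
    p(k+1) = p(k) - gamma * dQ/dp, clipped to stay positive so that
    P(k) = p(k) * P' remains positive definite.
    The central finite-difference gradient stands in for nabla_p Q.
    """
    dQ = (goal_fn(p + eps) - goal_fn(p - eps)) / (2 * eps)
    return max(p - gamma * dQ, p_min)
```

Iterating `sg_step` on a convex goal function drives `p` toward the minimizer, mirroring the convergence to the constant value $p^*$ once the control goal is met.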

##### 3.3. Tracking Reference

In the case of reference tracking, the control law is defined as follows [23]:
$$u^*(k) = -\frac{1}{2} \left( R + \frac{1}{2}\, g^\top(x(k))\, P\, g(x(k)) \right)^{-1} g^\top(x(k))\, P \left( f(x(k)) - x_\delta(k+1) \right),$$
where $x_\delta(k)$ is the desired reference trajectory and $P = p^* P'$.
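A sketch of a tracking variant, under the assumption that the drift term is replaced by the one-step mismatch $f(x(k)) - x_\delta(k+1)$, a form consistent with the regulation law when the reference is zero:

```python
import numpy as np

def tracking_control(x, x_ref_next, f, g, P, R):
    """Tracking variant of the inverse optimal law (sketch):
    the drift f(x(k)) is replaced by f(x(k)) - x_ref(k+1),
    an assumed form that reduces to regulation when x_ref = 0.
    """
    gx = g(x)
    mismatch = f(x) - x_ref_next
    M = R + 0.5 * gx.T @ P @ gx
    return -0.5 * np.linalg.solve(M, gx.T @ P @ mismatch)
```

With a small control weight `R`, the closed loop settles near a constant reference; the residual offset shrinks as `R` decreases, a familiar trade-off for this kind of penalized law.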

#### 4. Multimachine Power System Control

##### 4.1. Multimachine Power System Complete Model

In this work, the proposed decentralized identification and control scheme is tested with the Western System Coordinating Council (WSCC) 3-machine, 9-bus system [6, 26]. The differential and algebraic equations which represent the $i$th generator dynamics and the power flow constraints, respectively [1, 6], are expressed in the following variables: $\delta_i$ is the power angle of the $i$th generator in rad, $\omega_i$ is the rotating speed of the $i$th generator in rad/s, $E'_{qi}$ is the $q$-axis internal (transient) voltage of the $i$th generator in p.u., $E'_{di}$ is the $d$-axis internal voltage of the $i$th generator in p.u., $\psi_{di}$ and $\psi_{qi}$ are the $d$-axis and $q$-axis flux linkages of the $i$th generator in p.u., $E_{fdi}$ is the excitation control input, $\omega_s$ is the synchronous rotor speed in rad/s, and $i_{di}$ and $i_{qi}$ are the $d$-axis and $q$-axis currents of the $i$th generator in p.u. Besides, the generator dynamics (4.1) are complemented with algebraic relations whose coefficients are parameters of each synchronous generator. It is important to consider that each machine model is a flux-decay (one-axis) model given in [1, 6]; exciters and governors are not included in this model [1, 8].

##### 4.2. Reduced Neural Model of Multimachine Power System

The model mentioned above [1] is in continuous time; due to this fact, we proceed to discretize the states using the Euler method. With the state variables discretized, the reduced neural model is proposed [7], where $\hat{x}_i(k)$ estimates $x_i(k)$. Given the neural reduced model, the inverse optimal SG control law is applied to the reduced neural model of each synchronous generator, that is, in a decentralized way. Thus, the control law is established from (30), where the matrix $P'$ takes different values for each fault case: one set of values for the fault at the end of bus 7, another for the fault at the end of bus 8, and another for the fault at the end of bus 9, for generators 1, 2, and 3, respectively, each expressed as a scalar multiple of an identity matrix $I$ of appropriate dimension.

From (33), the control law for the neural network is defined as

It is important to note that [5] proves that low-order models are well suited for stability analysis and feedback control design for industrial power generators. Moreover, the use of neural networks allows modelling system interconnections using only local information, as well as dynamics not captured by the reduced model [1].
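The Euler discretization used to obtain the discrete-time model can be illustrated with the classical swing dynamics, a simplified stand-in for the one-axis model discretized in the paper; the parameter names and a sampling period in seconds are assumptions for this sketch.

```python
import numpy as np

def euler_discretize_swing(delta, omega, Pm, Pe, H, D, ws=2 * np.pi * 60, T=0.005):
    """Forward-Euler discretization of the classical swing dynamics:
      delta' = omega - ws
      omega' = (ws / (2 H)) * (Pm - Pe - D * (omega - ws))
    T is the sampling period (assumed in seconds), H the inertia
    constant, D the damping, Pm/Pe mechanical/electrical power (p.u.).
    """
    delta_next = delta + T * (omega - ws)
    omega_next = omega + T * (ws / (2 * H)) * (Pm - Pe - D * (omega - ws))
    return delta_next, omega_next
```

At equilibrium ($\omega = \omega_s$, $P_m = P_e$), the discretized state is a fixed point, which is a quick sanity check on the discretization.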

#### 5. Preliminary Calculations for Faults

For the design of the fault, system data preparation is required, and the following preliminary calculations are taken from [6], considering the parameters of the generators given in Tables 7 and 8.

(1) All system data are converted to a common base; a system base of 100 MVA is frequently used.

(2) The loads are converted to equivalent impedances or admittances. The needed data for this step are obtained from the load-flow study. Thus, if a certain load bus has voltage $\bar{V}_L$, power $P_L$, reactive power $Q_L$, and current $\bar{I}_L$ flowing into a load admittance $\bar{y}_L$, then
$$\bar{S}_L = P_L + j Q_L = \bar{V}_L \bar{I}_L^*.$$
The equivalent shunt admittance at that bus is given by
$$\bar{y}_L = \frac{P_L - j Q_L}{V_L^2}.$$

(3) The internal voltages of the generators are calculated from the load-flow data. These internal angles may be computed from the pretransient terminal voltages as follows. Let the terminal voltage be used temporarily as a reference, as shown in Figure 1. If the terminal voltage is $V \angle 0$ and the current is $\bar{I}$, then, from the relation
$$E \angle \delta' = \bar{V} + j x'_d \bar{I},$$
it is possible to obtain $\delta'$. The initial generator angle is then obtained by adding the pretransient voltage angle $\beta$ to $\delta'$, or
$$\delta_0 = \delta' + \beta.$$

(4) The $\bar{Y}$ matrix for each network condition is calculated. The following steps are usually needed. (a) The equivalent load impedances (or admittances) are connected between the load buses and the reference node; additional nodes are provided for the internal generator voltages (nodes in Figure 2), and the appropriate values of $x'_d$ are connected between these nodes and the generator terminal nodes. Also, simulation of the fault impedance is added as required, and the admittance matrix is determined for each switching condition. (b) All impedance elements are converted to admittances. (c) Elements of the $\bar{Y}$ matrix are identified as follows: $\bar{Y}_{ii}$ is the sum of all the admittances connected to node $i$, and $\bar{Y}_{ij}$ is the negative of the admittance between node $i$ and node $j$.

(5) Finally, all the nodes except for the internal generator nodes are eliminated, obtaining the $\bar{Y}$ matrix for the reduced network.
The reduction can be achieved by matrix operations, recalling that all the nodes have zero injection currents except for the internal generator nodes. This property is used to obtain the network reduction as shown below. Let
$$\bar{I} = \bar{Y} \bar{V},$$
where
$$\bar{I} = \begin{bmatrix} \bar{I}_n \\ 0 \end{bmatrix},$$
since the injection currents of all nodes other than the generator nodes are zero.

Now the matrices $\bar{Y}$ and $\bar{V}$ are partitioned accordingly to get
$$\begin{bmatrix} \bar{I}_n \\ 0 \end{bmatrix} = \begin{bmatrix} \bar{Y}_{nn} & \bar{Y}_{nr} \\ \bar{Y}_{rn} & \bar{Y}_{rr} \end{bmatrix} \begin{bmatrix} \bar{V}_n \\ \bar{V}_r \end{bmatrix},$$
where the subscript $n$ is used to denote generator nodes and the subscript $r$ is used for the remaining nodes. Thus, for the network in Figure 2, $\bar{V}_n$ has dimension 3 and $\bar{V}_r$ has dimension 9. Expanding (42),
$$\bar{I}_n = \bar{Y}_{nn} \bar{V}_n + \bar{Y}_{nr} \bar{V}_r, \qquad 0 = \bar{Y}_{rn} \bar{V}_n + \bar{Y}_{rr} \bar{V}_r,$$
from which we eliminate $\bar{V}_r$ to find
$$\bar{I}_n = \left( \bar{Y}_{nn} - \bar{Y}_{nr} \bar{Y}_{rr}^{-1} \bar{Y}_{rn} \right) \bar{V}_n.$$
The matrix $\bar{Y}_{nn} - \bar{Y}_{nr} \bar{Y}_{rr}^{-1} \bar{Y}_{rn}$ is the desired reduced matrix $\hat{Y}$, of dimension $n \times n$, where $n$ is the number of generators. The network reduction illustrated by (43)-(44) is a convenient analytical technique that can be used only when the loads are treated as constant impedances. If the loads are not considered to be constant impedances, the identity of the load buses must be retained. Network reduction can be applied only to those nodes that have zero injection current.
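The steps above amount to a constant-impedance load conversion followed by a Kron reduction, which can be sketched as follows; the small test network used below is illustrative, not the WSCC system.

```python
import numpy as np

def load_admittance(P, Q, V):
    # Constant-impedance load conversion: y_L = (P - jQ) / |V|^2
    return (P - 1j * Q) / abs(V) ** 2

def kron_reduce(Y, gen_idx):
    """Eliminate all zero-injection (non-generator) nodes:
    Y_hat = Y_nn - Y_nr @ inv(Y_rr) @ Y_rn.
    gen_idx lists the retained (internal generator) node indices.
    """
    n = Y.shape[0]
    rem = [i for i in range(n) if i not in gen_idx]
    Ynn = Y[np.ix_(gen_idx, gen_idx)]
    Ynr = Y[np.ix_(gen_idx, rem)]
    Yrn = Y[np.ix_(rem, gen_idx)]
    Yrr = Y[np.ix_(rem, rem)]
    # solve() avoids forming the explicit inverse of Y_rr
    return Ynn - Ynr @ np.linalg.solve(Yrr, Yrn)
```

Because the original admittance matrix is symmetric, the reduced matrix is symmetric as well, which provides a useful consistency check after the reduction.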

Once the preliminary calculations are made to obtain the $\bar{Y}$ matrix for each fault at the corresponding bus, the network reduction for each fault is applied. For the first case of the analysis, the fault occurs at bus 7; the corresponding $\bar{Y}$ matrices are shown in Tables 9, 10, and 11, included in the Appendix. Then the network reduction of the $\bar{Y}$ matrix is applied, and the reduced matrix is defined as in Table 1.

For the second case of the analysis, the fault occurs at bus 8; the corresponding $\bar{Y}$ matrices are shown in Tables 12, 13, and 14, included in the Appendix, after which the network reduction of the $\bar{Y}$ matrix is applied to obtain the reduced networks defined as in Table 2.

For the third case of the analysis, the fault occurs at bus 9; the corresponding $\bar{Y}$ matrices are shown in Tables 15, 16, and 17, included in the Appendix, after which the network reduction of the $\bar{Y}$ matrix is applied to obtain the reduced networks defined as in Table 3.

#### 6. Fault Simulation

The power electric system used in this paper is presented in Figure 3. It corresponds to the nine-bus system. Figure 3 also includes the bus interconnections and the related parameters of the transmission lines. Data for simulation are given in Tables 7 and 8 [6], where the modeling of the system is explained and the related parameters for each synchronous generator are described.

In this paper, the 18 state variables related to the 3 synchronous generators are stabilized using the neural reduced model [7], reaching stabilization of the system with the fault in three different transmission lines; for simulation, the sampling time is set to 0.005 ms.

There are three cases contemplated in the system simulation.

(1) The fault occurs near bus 7 at the end of lines 5–7. Results are depicted in Figure 4 for generator 1, Figure 5 for generator 2, and Figure 6 for generator 3.

(2) The fault occurs near bus 8 at the end of lines 8-9. Results are depicted in Figure 7 for generator 1, Figure 8 for generator 2, and Figure 9 for generator 3.

(3) The fault occurs near bus 9 at the end of lines 6–9. Results are depicted in Figure 10 for generator 1, Figure 11 for generator 2, and Figure 12 for generator 3.

For the cases mentioned above, the fault is applied at 10 seconds of simulation; thus, the system has a prefault state (before 10 seconds), a fault state (at 10 seconds), and a postfault state (after 10 seconds). The admittances for the loads are given in p.u. in Table 4.

The initial conditions for the system are given in Table 5.

It is important to note that the initial conditions of the generators are defined by their respective parameters [1]; however, in order to test the NN approximation capabilities, it is common to use signals that represent a wide range of frequencies; hence, plant signals can exhibit high-frequency behavior [10].

The control goal is to stabilize the power electric system; therefore, the references for each state variable of the neural reduced model of the multimachine system are proposed as in Table 6.

#### 7. Conclusions

In this paper, a SG discrete-time inverse optimal controller is synthesized for a reduced-order neural model to stabilize a multimachine power electric system in the presence of a fault at line 7, line 8, or line 9. Simulation results show that the proposed controller stabilizes the state efficiently in the three different cases, allowing system stabilization after the fault occurs. As future work, the authors are considering the stability analysis including the neural decentralized controller, as well as the analysis of control delay for the closed-loop system.

#### Appendix

In this appendix, the parameters used for simulations are presented. Tables 7 and 8 show the parameters for the generators and transmission lines, respectively. Tables 9, 10, and 11 display the $\bar{Y}$ matrix of the network with the fault near bus 7 for prefault, fault, and fault-cleared conditions. Tables 12, 13, and 14 show the $\bar{Y}$ matrix of the network with the fault near bus 8 for prefault, fault, and fault-cleared conditions. Tables 15, 16, and 17 present the $\bar{Y}$ matrix of the network with the fault near bus 9 for prefault, fault, and fault-cleared conditions.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors thank the support of CONACYT Mexico, through Projects 103191Y, 106838Y, and 156567Y. They also thank the very useful comments of the anonymous reviewers, which helped improve the paper. The first author also thanks the L'Oreal-AMC (Mexican Academy of Sciences) scholarship for women in science.

#### References

1. V. H. B. Benitez, *Decentralized Continuous Time Neural Control* [Ph.D. thesis], Unidad Guadalajara, Cinvestav, Mexico, 2010.
2. H. Li, X. Jing, and H. R. Karimi, "Output-feedback-based *H*_{∞} control for vehicle suspension systems with control delay," *IEEE Transactions on Industrial Electronics*, vol. 61, no. 1, pp. 436–446, 2014.
3. X.-L. Zhu, B. Chen, D. Yue, and Y. Wang, "An improved input delay approach to stabilization of fuzzy systems under variable sampling," *IEEE Transactions on Fuzzy Systems*, vol. 20, no. 2, pp. 330–341, 2012.
4. S. Huang, K. K. Tan, and T. H. Lee, "Decentralized control design for large-scale systems with strong interconnections using neural networks," *IEEE Transactions on Automatic Control*, vol. 48, no. 5, pp. 805–810, 2003.
5. M. A. Arjona, R. Escarela-Perez, G. Espinosa-Perez, and J. Alvarez-Ramirez, "Validity testing of third-order nonlinear models for synchronous generators," *Electric Power Systems Research*, vol. 79, no. 6, pp. 953–958, 2009.
6. P. M. Anderson and A. A. Fouad, *Power System Control and Stability*, IEEE Press, New York, NY, USA, 1994.
7. A. Y. Alanis, E. N. Sanchez, and F. O. Tellez, "Discrete-time inverse optimal neural control for synchronous generators," *Engineering Applications of Artificial Intelligence*, vol. 26, pp. 697–705, 2013.
8. T. Weckesser, H. Johannsson, and J. Østergaard, "Impact of model detail of synchronous machines on real-time transient stability assessment," in *Proceedings of the IREP Symposium - Bulk Power System Dynamics and Control*, 2013.
9. A. Y. Alanis, N. Arana-Daniel, and C. Lopez-Franco, "Neural-PSO second order sliding mode controller for unknown discrete-time nonlinear systems," in *Proceedings of the International Joint Conference on Neural Networks*, vol. 1, pp. 3065–3070, 2013.
10. S. Haykin, *Kalman Filtering and Neural Networks*, John Wiley & Sons, New York, NY, USA, 2001.
11. R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," *Neural Computation*, vol. 1, pp. 270–280, 1989.
12. C.-S. Leung and L.-W. Chan, "Dual extended Kalman filtering in recurrent neural networks," *Neural Networks*, vol. 16, no. 2, pp. 223–239, 2003.
13. A. Y. Alanis, M. Lopez-Franco, N. Arana-Daniel, and C. Lopez-Franco, "Discrete-time neural control for electrically driven nonholonomic mobile robots," *International Journal of Adaptive Control and Signal Processing*, vol. 26, no. 7, pp. 630–644, 2012.
14. L. A. Feldkamp, T. M. Feldkamp, and D. V. Prokhorov, "Neural network training with the nprKF," in *Proceedings of the International Joint Conference on Neural Networks (IJCNN '01)*, pp. 109–114, USA, July 2001.
15. R. Grover and P. Y. C. Hwang, *Introduction to Random Signals and Applied Kalman Filtering*, John Wiley & Sons, New York, NY, USA, 2nd edition, 1992.
16. Y. Song and J. W. Grizzle, "The extended Kalman filter as a local asymptotic observer for discrete-time nonlinear systems," *Journal of Mathematical Systems, Estimation, and Control*, vol. 5, no. 1, pp. 59–78, 1995.
17. A. S. Poznyak, E. N. Sanchez, and W. Yu, *Differential Neural Networks for Robust Nonlinear Control*, World Scientific, Singapore, 2001.
18. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, *The Mathematical Theory of Optimal Processes*, Interscience, New York, NY, USA, 1962.
19. R. Bellman, *Dynamic Programming*, Princeton University Press, Princeton, NJ, USA, 1957.
20. R. E. Bellman and S. E. Dreyfus, *Applied Dynamic Programming*, Princeton University Press, Princeton, NJ, USA, 1962.
21. M. Krstić and H. Deng, *Stabilization of Nonlinear Uncertain Systems*, Springer, Berlin, Germany, 1998.
22. R. Sepulchre, M. Janković, and P. V. Kokotović, *Constructive Nonlinear Control*, Springer, Berlin, Germany, 1997.
23. F. O. Ornelas-Tellez, *Inverse Optimal Control for Discrete-Time Nonlinear Systems* [Ph.D. thesis], Unidad Guadalajara, Cinvestav, Mexico, 2011.
24. R. A. Freeman and P. V. Kokotović, *Robust Nonlinear Control Design: State-Space and Lyapunov Techniques*, Birkhäuser, Boston, Mass, USA, 1996.
25. A. L. Fradkov and A. Yu. Pogromsky, *Introduction to Control of Oscillations and Chaos*, vol. 35, World Scientific, Singapore, 1998.
26. Electric Power Research Institute, "Power system dynamic analysis, phase I," EPRI Report EL-484, 1977.