ISRN Robotics

Volume 2013, Article ID 173703, 11 pages

http://dx.doi.org/10.5402/2013/173703

## Comparative Study between Robust Control of Robotic Manipulators by Static and Dynamic Neural Networks

Nadya Ghrab and Hichem Kallel

^{1}National Institute of Applied Science and Technology (INSAT), Northern Urban Center Mailbox 676, 1080 Tunis, Tunisia

^{2}Department of Physics and Electrical Engineering, National Institute of Applied Science and Technology (INSAT), Tunisia

Received 28 January 2013; Accepted 20 March 2013

Academic Editors: A. Bechar, A. Sabanovic, R. Safaric, K. Terashima, and C.-C. Tsai

Copyright © 2013 Nadya Ghrab and Hichem Kallel. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A comparative study between static and dynamic neural networks for robotic system control is considered. Two approaches of neural robot control were selected, presented, and compared. One uses a static neural network; the other uses a dynamic neural network. Both compensate for the nonlinearities and uncertainties of robotic systems. The first approach is direct; it approximates the nonlinearities and uncertainties by a static neural network. The second approach is indirect; it uses a dynamic neural network for the identification of the robot state. The neural network weight tuning algorithms for the two approaches are developed based on Lyapunov theory. Simulation results show that the system equipped with the dynamic neural network controller has better tracking performance, has a faster response time, and is more robust to disturbances and robotic uncertainties.

#### 1. Introduction

Several neural robot control approaches have been proposed in the literature. They are classified into two main classes: direct and indirect neural control. If an approach requires prior identification of the controlled process model, it is called indirect control; otherwise it is called direct control. For the direct class, many control architectures are mentioned in the literature [1–5]. For the indirect class, we cite neural control via dynamic neural networks [6, 7], Model Reference Adaptive Control (MRAC) [8–10], Internal Model Control (IMC) [11–13], and predictive neural control [14, 15]. Both control classes are robust thanks to their ability to overcome the nonlinearities and uncertainties in the robot dynamics.

In this paper, the aim is to compare the performance of static and dynamic neural networks in robotic system control. For this, two types of control among those already mentioned are selected, presented, and tested on a two-link robot. One uses a static neural network; the other uses a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it approximates the nonlinearities and uncertainties in the robot dynamics by a static neural network. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7], which uses a dynamic neural network for online identification of the robot state. Based on simulation results, a comparative study between these two approaches is presented using different performance criteria.

The rest of this paper is organized as follows. Section 2 presents the dynamic model of the robot manipulator. Section 3 describes the direct neural control proposed by Lewis [1]. Section 4 describes the indirect neural control proposed by Sanchez et al. [7]. Section 5 presents the simulation results and a comparative study between the two approaches described in Sections 3 and 4. Finally, Section 6 draws conclusions and sums up the paper.

#### 2. Dynamic Model of the Robot Manipulator

In this section, the dynamic model of the robot manipulator is presented. The equation of the robot dynamics is

$$M(q)\ddot{q} + V_m(q,\dot{q})\dot{q} + G(q) + F(\dot{q}) + \tau_d = \tau, \tag{1}$$

where $q$, $\dot{q}$, and $\ddot{q}$ denote the joint angle, the joint velocity, and the joint acceleration; $M(q)$ denotes the inertia matrix; $V_m(q,\dot{q})$ denotes the centrifugal and Coriolis force matrix; $G(q)$ denotes the gravitational force vector; $F(\dot{q})$ is the friction term, such that $F(\dot{q}) = F_c\,\mathrm{sgn}(\dot{q}) + F_v\dot{q}$, where $F_c$ is the Coulomb parameter and $F_v$ the viscous parameter; $\tau_d$ represents disturbances; and $\tau$ is the torque vector.
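As a quick numerical illustration of the friction model above, the Coulomb-plus-viscous term can be evaluated per joint. The sketch below is in Python; the coefficient values are hypothetical, not those used in the paper.

```python
import numpy as np

def friction(qdot, Fc, Fv):
    """Friction term F(qdot) = Fc*sgn(qdot) + Fv*qdot (Coulomb + viscous)."""
    return Fc * np.sign(qdot) + Fv * qdot

# Hypothetical per-joint coefficients for a two-joint arm
Fc = np.array([0.5, 0.5])     # Coulomb friction parameters
Fv = np.array([1.0, 1.0])     # viscous friction parameters
qdot = np.array([0.2, -0.1])  # joint velocities
print(friction(qdot, Fc, Fv))  # -> [ 0.7 -0.6]
```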

#### 3. Direct Neural Controller via Static Neural Network

In this section, the approach of direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1], is briefly presented. This approach approximates the nonlinearities and uncertainties in the robot dynamics by a static neural network.

To make the dynamics of the robot manipulator, defined in (1), follow a prescribed desired trajectory $q_d(t)$, the tracking error $e$ and the filtered tracking error $r$ are defined as follows:

$$e = q_d - q, \tag{2}$$

$$r = \dot{e} + \Lambda e, \tag{3}$$

where $\Lambda$ is a symmetric positive definite design parameter matrix. The dynamics of the robot (1), in terms of the filtered error (3), are as follows:

$$M\dot{r} = -V_m r - \tau + f(x) + \tau_d, \tag{4}$$

where the unknown nonlinear robot function is defined as

$$f(x) = M(q)\left(\ddot{q}_d + \Lambda\dot{e}\right) + V_m(q,\dot{q})\left(\dot{q}_d + \Lambda e\right) + G(q) + F(\dot{q}), \tag{5}$$

with $x = \left[e^T\ \dot{e}^T\ q_d^T\ \dot{q}_d^T\ \ddot{q}_d^T\right]^T$.

##### 3.1. Approximation of Nonlinearities and Uncertainties by a Static Neural Network

*The Universal Function Approximation Property* [16]. Let $f(x)$ be a general smooth function from $\mathbb{R}^n$ to $\mathbb{R}^m$. Then, it can be shown that, as long as $x$ is restricted to a compact set $S$ of $\mathbb{R}^n$, there exist weights and thresholds such that

$$f(x) = W^T\sigma\left(V^T x\right) + \varepsilon. \tag{6}$$

It is difficult to determine the ideal neural network weights, in the matrices $W$ and $V$, that are required to best approximate a given nonlinear function $f(x)$. However, all one needs to know for control purposes is that, for a specified bound $\varepsilon_N$ on the neural network approximation error,

$$\|\varepsilon\| \le \varepsilon_N, \tag{7}$$

some ideal approximating weights exist. Then, an estimate of $f(x)$ can be given by

$$\hat{f}(x) = \hat{W}^T\sigma\left(\hat{V}^T x\right). \tag{8}$$
The neural network architecture proposed for the approximation of nonlinearities and uncertainties in the robot dynamics is shown in Figure 1, where $\sigma(\cdot)$ denotes the activation functions and $N_h$ is the number of hidden-layer neurons. The first-layer interconnection weights are denoted by $V$ and the second-layer interconnection weights by $W$. The threshold offsets are denoted by $\theta$.
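The forward pass of such a static network is straightforward. The following Python sketch evaluates $\hat{f}(x) = \hat{W}^T\sigma(\hat{V}^T x)$, with the thresholds absorbed by a constant bias input; the layer sizes, random seed, and sigmoid choice are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_approx(x, V_hat, W_hat):
    """Static NN estimate f_hat(x) = W_hat.T @ sigmoid(V_hat.T @ x_aug),
    where x_aug prepends a constant 1 so the thresholds live in V_hat."""
    x_aug = np.concatenate(([1.0], x))  # bias input absorbs the thresholds
    hidden = sigmoid(V_hat.T @ x_aug)   # hidden-layer outputs
    return W_hat.T @ hidden             # network output

# Illustrative sizes: n inputs, N_h hidden neurons, m outputs
n, N_h, m = 10, 8, 2
rng = np.random.default_rng(0)
V_hat = rng.normal(size=(n + 1, N_h))
W_hat = rng.normal(size=(N_h, m))
x = rng.normal(size=n)
f_hat = nn_approx(x, V_hat, W_hat)
print(f_hat.shape)  # (2,)
```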

##### 3.2. Synthesis of the Control Law

A general sort of approximation-based controller is derived by setting

$$\tau = \hat{f}(x) + K_v r - v, \tag{9}$$

with $\hat{f}(x)$ being the approximation of $f(x)$ by the neural network, $K_v r$ an outer PD tracking loop, and $v$ an auxiliary signal to provide robustness. The proposed neural network control structure is shown in Figure 2.

Substituting the control law (9) into (4), the closed-loop error dynamics become as follows:

$$M\dot{r} = -\left(K_v + V_m\right)r + \tilde{f}(x) + \tau_d + v. \tag{10}$$

Let us define the functional approximation error

$$\tilde{f}(x) = f(x) - \hat{f}(x) \tag{11}$$

and the weight approximation errors

$$\tilde{W} = W - \hat{W}, \qquad \tilde{V} = V - \hat{V}. \tag{12}$$

The Lyapunov function proposed for the stabilization of the error dynamics is

$$L = \frac{1}{2}r^T M r + \frac{1}{2}\operatorname{tr}\left(\tilde{W}^T F^{-1}\tilde{W}\right) + \frac{1}{2}\operatorname{tr}\left(\tilde{V}^T G^{-1}\tilde{V}\right), \tag{13}$$

with $F$ and $G$ being any constant symmetric positive definite matrices.

The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{L} < 0$, are satisfied.

(i) The neural network weight tuning algorithms are

$$\dot{\hat{W}} = F\hat{\sigma}r^T - \kappa F\|r\|\hat{W}, \qquad \dot{\hat{V}} = G x\left(\hat{\sigma}'^T\hat{W}r\right)^T - \kappa G\|r\|\hat{V}, \tag{14}$$

with $\kappa$ being a small scalar design parameter and $\hat{\sigma}'$ the Jacobian of $\hat{\sigma}$.

(ii) The robustifying term is

$$v = -K_z\left(\left\|\hat{Z}\right\|_F + Z_B\right)r, \tag{15}$$

with $Z_B$, such that $\|Z\|_F \le Z_B$, being the bound of the ideal weights, and $K_z$ a positive scalar parameter.

(iii) The PD controller gain $K_v$ is sufficiently large. In practice, the tracking error can be kept as small as desired by increasing the gain $K_v$.
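A discrete-time sketch of tuning laws of this general form is given below in Python. The scalar gains, the Euler integration step, and the specific e-modification variant are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def tuning_step(W_hat, V_hat, x, r, F, G, kappa, dt):
    """One Euler step of e-modification-style tuning laws:
    W_hat_dot = F*sigma*r^T              - kappa*F*||r||*W_hat
    V_hat_dot = G*x*(sigma'^T W_hat r)^T - kappa*G*||r||*V_hat
    Here F and G are taken as scalar gains for simplicity."""
    sigma = 1.0 / (1.0 + np.exp(-(V_hat.T @ x)))  # hidden-layer outputs
    sigma_prime = np.diag(sigma * (1.0 - sigma))  # Jacobian of the sigmoid
    nr = np.linalg.norm(r)
    W_dot = F * (np.outer(sigma, r) - kappa * nr * W_hat)
    V_dot = G * (np.outer(x, sigma_prime.T @ W_hat @ r) - kappa * nr * V_hat)
    return W_hat + dt * W_dot, V_hat + dt * V_dot

# Illustrative shapes: n inputs, N_h hidden neurons, m joints
n, N_h, m = 4, 5, 2
rng = np.random.default_rng(1)
W_hat = np.zeros((N_h, m))        # zero initialization, as in Section 5
V_hat = rng.normal(size=(n, N_h))
x = rng.normal(size=n)
r = np.array([0.1, -0.2])         # filtered tracking error
W_hat, V_hat = tuning_step(W_hat, V_hat, x, r, F=10.0, G=10.0, kappa=0.01, dt=1e-3)
print(W_hat.shape, V_hat.shape)  # (5, 2) (4, 5)
```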

#### 4. Indirect Neural Control via High-Order Dynamic Neural Network

In this section, the second approach, an indirect neural control via a high-order dynamic neural network proposed by Sanchez et al. [7], is briefly presented. This approach uses a dynamic neural network for the online identification of the robot state.

The proposed neural network control structure is shown in Figure 3.

The equation of the robot dynamics, defined in (1), under state representation is

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = M^{-1}(x_1)\left[\tau - V_m(x_1,x_2)x_2 - G(x_1) - F(x_2) - \tau_d\right],$$

with $x_1 = q$ and $x_2 = \dot{q}$.

##### 4.1. Identification of the Robot State by the High-Order Dynamic Neural Network

The proposed neural network structure, for the identification of the robot state, is shown in Figure 4.

$A$ is the state matrix of the neural network, with $A = \operatorname{diag}(a_i)$ and $a_i < 0$ for $i = 1, \ldots, n$; $\hat{x}$ is the state vector of the neural network, and $\tau$ is the torque vector.

The dynamics of this neural network result from the state feedback around a neural structure formed by two static neural networks, RN1 and RN2, shown in Figure 5 [7, 17]. $W_1$ is the weights matrix of RN1.

$z(\cdot)$ is a nonlinear operator which defines the high-order connections; its components $z_j$, $j = 1, \ldots, L$, are called high-order connections, and $L$ is the number of high-order connections. Each $z_j$ is a product of powers of the inputs of RN1, indexed by a subset $I_j$, where the exponents $d_j(i)$ are positive integers. $S(\cdot)$ is the activation function of RN1, here selected as the hyperbolic tangent.

The outputs of RN1 are denoted by $z$. $\varphi(\cdot)$ is the activation function of RN2, here selected as the linear one.

$W_2$ is the weights matrix of RN2. The outputs of RN2 are denoted by $\hat{x}$. Let us denote by $\hat{W}_1$ and $\hat{W}_2$ the estimated values, respectively, of the unknown weight matrices $W_1$ and $W_2$.

The weight estimation errors are $\tilde{W}_1 = W_1 - \hat{W}_1$ and $\tilde{W}_2 = W_2 - \hat{W}_2$. The equation of this neural network's dynamics is defined by the state feedback around RN1 and RN2. The identification of the robot dynamics by the neural network is ensured by a pole placement condition, (23), which can be written equivalently in terms of the identification error.
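A minimal Python sketch of a high-order network state update is given below. The choice of index sets, the tanh/product structure of the high-order terms, and the single-weight-matrix form are simplifying assumptions, not the exact RN1/RN2 decomposition of the paper.

```python
import numpy as np

def high_order_terms(x, tau, index_sets, powers):
    """High-order connections z_j = prod_{i in I_j} y_i^{d_j(i)},
    with y = [tanh(x); tau] (hyperbolic-tangent first layer)."""
    y = np.concatenate([np.tanh(x), tau])
    return np.array([np.prod([y[i] ** d for i, d in zip(I, ds)])
                     for I, ds in zip(index_sets, powers)])

def rhonn_step(x_hat, x, tau, A, W_hat, index_sets, powers, dt):
    """One Euler step of x_hat_dot = A @ x_hat + W_hat @ z(x, tau)."""
    z = high_order_terms(x, tau, index_sets, powers)
    return x_hat + dt * (A @ x_hat + W_hat @ z)

# Illustrative setup: 4 states (2 joint angles + 2 velocities), 2 torques
index_sets = [[0], [1], [2], [3], [4], [5]]  # first-order connections only
powers     = [[1], [1], [1], [1], [1], [1]]
A = -2.0 * np.eye(4)                         # stable diagonal state matrix
W_hat = np.zeros((4, 6))
x = np.array([0.1, 0.2, 0.0, 0.0])
x_hat = np.zeros(4)
tau = np.array([0.5, -0.5])
x_hat = rhonn_step(x_hat, x, tau, A, W_hat, index_sets, powers, dt=1e-3)
print(x_hat.shape)  # (4,)
```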

##### 4.2. Synthesis of the Control Law

It is desired to design a robust controller which enforces asymptotic stability of the tracking error between the system and the reference signal. The equation of the reference signal dynamics is $\dot{x}_r = f_r(x_r, \tau_r)$, where $x_r$ is the reference signal state vector, $\tau_r$ is the desired torque vector, and $f_r$ is the vector field for the reference dynamics.

Let us denote the tracking error between the system and the reference signal by $e = x - x_r$ (26). To ensure the desired dynamics, the asymptotic stability of the tracking error must be ensured.

The time derivative of (26) is $\dot{e} = \dot{x} - \dot{x}_r$ (27). Suitable terms are then added and subtracted in (27) to make the neural network model appear, giving (28). It is assumed that there exists a function $\alpha(x)$ satisfying the resulting matching condition, computed using the pseudoinverse of the input matrix.

A further term is added and subtracted in (28) to obtain (30). Defining a modified error variable reduces (30) to (32). Adding and subtracting the weight estimation terms in (32) gives (33); then, by grouping the remaining terms, (33) is reduced to (35). The tracking problem is thus reduced to a stabilization problem of the error dynamics defined in (35).

The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is

$$V = \frac{1}{2}e^T e + \frac{1}{2}\operatorname{tr}\left(\tilde{W}_1^T \Gamma_1^{-1}\tilde{W}_1\right) + \frac{1}{2}\operatorname{tr}\left(\tilde{W}_2^T \Gamma_2^{-1}\tilde{W}_2\right),$$

where $\Gamma_1$ and $\Gamma_2$ are symmetric positive definite diagonal matrices.

Let us define a function $\phi$ and denote by $L_\phi$ its Lipschitz constant.

The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V} < 0$, are satisfied.

(i) The control law is optimal with respect to the cost functional of [18].

(ii) The neural network weight tuning algorithms are chosen so that the weight estimation error terms in $\dot{V}$ are canceled.

(iii) The parameter of the neural network state matrix satisfies an appropriate bound.

(iv) The parameter which manages the control law satisfies an appropriate bound.

#### 5. Simulation Results and Comparative Study on a Two-Link Robot

In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory tracking problem for a robot manipulator model is considered. The dynamics of a two-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written in the form of (1), with the standard two-link expressions for the inertia, Coriolis, gravity, and friction terms.
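The following Python sketch implements the widely used textbook form of such a planar two-link arm's dynamics (point masses at the link ends), with placeholder parameter values rather than those of Table 1, and with friction and disturbances omitted for brevity.

```python
import numpy as np

def two_link_accel(q, qd, tau, m1=1.0, m2=1.0, a1=1.0, a2=1.0, g=9.81):
    """Joint accelerations of a planar two-link arm with point masses at
    the link ends: solves M(q) qdd + V(q, qd) + G(q) = tau for qdd."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([
        [(m1 + m2) * a1**2 + m2 * a2**2 + 2 * m2 * a1 * a2 * c2,
         m2 * a2**2 + m2 * a1 * a2 * c2],
        [m2 * a2**2 + m2 * a1 * a2 * c2,
         m2 * a2**2]])
    V = np.array([
        -m2 * a1 * a2 * (2 * qd[0] * qd[1] + qd[1]**2) * s2,
         m2 * a1 * a2 * qd[0]**2 * s2])
    G = np.array([
        (m1 + m2) * g * a1 * np.cos(q[0]) + m2 * g * a2 * np.cos(q[0] + q[1]),
        m2 * g * a2 * np.cos(q[0] + q[1])])
    return np.linalg.solve(M, tau - V - G)

# Placeholder parameters and a short open-loop Euler simulation
q, qd, dt = np.zeros(2), np.zeros(2), 1e-3
for _ in range(1000):  # simulate 1 second
    qdd = two_link_accel(q, qd, np.zeros(2))
    qd = qd + dt * qdd
    q = q + dt * qd
print(np.round(q, 3))
```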

The robot model parameters are shown in Table 1.

The simulations of the variation of the positions and torques exerted at each of the two joints, as well as of the neural network weights, were carried out over a period of 10 seconds.

The initial conditions are selected as follows:

The reference signal is

The external disturbances are

A variation in the coefficients of viscous friction by an error of 10% at a given instant of the simulation.

A variation in the masses of the two bodies of the arm by an error of 10% at a given instant of the simulation.

The neural network controller parameters are selected, for each of the two approaches, as follows.

(i) For the first approach, the neural control for improvement of a classic proportional-derivative (PD) controller, we have the following.

After several simulation tests, we found suitable values for the initialization of the neural network weights and the various parameters, including the number of neurons in the hidden layer. The activation function is the sigmoid $\sigma(x) = 1/(1 + e^{-x})$, and its Jacobian is $\sigma'(x) = \sigma(x)\left(1 - \sigma(x)\right)$. No initial neural network training phase was needed; the neural network weights were arbitrarily initialized at zero in this simulation.

(ii) For the second approach, the neural control via a high-order dynamic neural network, we have the following.

The initial state vector of the neural network and the number of high-order connections $L$ are selected accordingly.

The activation function of the subnet RN1 is the hyperbolic tangent $S(x) = \tanh(x)$.

The high-order connections of the neural network are then defined from products of the outputs of RN1. The activation function of the subnet RN2 is the linear function $\varphi(x) = x$.

For this approach, the initialization of neural network weights is not arbitrary and a training phase is necessary.

After several simulation tests, we found suitable values for the initialization of the neural network weights and the various parameters. The suitable initial weight values are shown in Figure 11.

##### 5.1. Simulation Results of the First Approach of Control: The Neural Control for Improvement of a Classic Controller Proportional Derivative

Each line appearing in the two diagrams of Figure 8 represents the variation of one weight value in the updated weight matrix $\hat{W}$ or in the updated weight matrix $\hat{V}$.

*Analysis Results*. The analysis of the simulation results of the system response equipped with the NN controller for improvement of a classic PD controller, seen in Figure 6, shows that this control law can ensure the stability of the system despite the presence of disturbances and robotic uncertainties.

However, due to disturbances and robotic uncertainties, the peak in the torques response makes this control strategy unreliable (Figure 7).

##### 5.2. Simulation Results of the Second Approach of Control: The Neural Control via a High-Order Dynamic Neural Network


(i) *The Learning Step*. Each line appearing in the two diagrams of Figure 11 represents the variation of one weight value in the updated weight matrix $\hat{W}_1$ or in the updated weight matrix $\hat{W}_2$.

*Analysis Results*. The sharp peak in the torques and joint angles response, due to disturbances and robotic uncertainties, seen in Figures 9 and 10, was corrected thanks to the neural network adaptation.

At the end of this learning step, the best weight values, seen in Figure 11, are obtained.

(ii) *Final Simulation Results*. The best weight values, obtained at the end of the learning step, are used as initial values for this step.

Each line appearing in the two diagrams of Figure 14 represents the variation of one weight value in the updated weight matrix $\hat{W}_1$ or in the updated weight matrix $\hat{W}_2$.

*Analysis Results*. The analysis of the simulation results of the system response equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can ensure the stability of the system despite the presence of disturbances and robotic uncertainties.

The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values.

After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torques response, seen in Figure 13 (unlike the peak in the torques response seen in Figure 7).

In Figure 14, it is easy to see that the weights indeed reached their best values during the learning step, and they remain nearly constant.

##### 5.3. Comparative Study

The advantages and limitations of each approach, of neural network control, are presented in Table 2.

The run-time performance of each approach is presented in Table 3.

#### 6. Discussion and Conclusion

In this paper, two approaches of neural network robot control were selected, presented, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic system control; one of these two approaches uses a static neural network, and the other uses a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to compensate for the nonlinearities and uncertainties in the robot dynamics, thereby overcoming some limitations of the conventional PD controller and improving its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state and then synthesizes the control law from the information recovered by this identification.

Simulation results under MATLAB, for a two-link robot in a 2D environment, showed the performance differences between the two neural network control approaches studied. Compared to the control with the static neural network, the neural control via the dynamic neural network has significantly better tracking performance, has a faster response time, and is more robust to disturbances and robotic uncertainties. However, the indirect approach requires offline training to find suitable initial neural network weight values, unlike the direct one, in which the initialization of the neural network weights is arbitrary.

Although further experimentation clearly needs to take place, the simulation results presented here indicate that dynamic neural networks have very good potential for applications in closed-loop control of robot manipulators.

#### References

1. F. L. Lewis, "Neural network control of robot manipulators," *IEEE Expert*, vol. 11, no. 3, pp. 64–75, 1996.
2. W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in *Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10)*, pp. 884–888, May 2010.
3. Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," *Automatica*, vol. 36, no. 9, pp. 1355–1364, 2000.
4. Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in *Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10)*, pp. 932–935, September 2010.
5. D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," *WSEAS Transactions on Systems*, vol. 7, no. 6, pp. 632–641, 2008.
6. M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," *Automatica*, vol. 36, no. 1, pp. 5–22, 2000.
7. E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in *Recurrent Neural Networks*, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
8. F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," *Latin American Applied Research*, vol. 41, no. 2, 2010.
9. Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in *Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07)*, pp. 1383–1387, December 2007.
10. M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in *Proceedings of the International Conference on Machine Learning and Cybernetics*, pp. 3021–3025, August 2006.
11. H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," *IEEE Transactions on Neural Networks*, vol. 17, no. 3, pp. 659–670, 2006.
12. I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," *IEEE Transactions on Neural Networks*, vol. 11, no. 1, pp. 80–90, 2000.
13. C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," *Mathematics and Computers in Simulation*, vol. 51, no. 3-4, pp. 181–199, 2000.
14. M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in *Proceedings of the 27th Chinese Control Conference (CCC '08)*, pp. 430–434, July 2008.
15. K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in *Proceedings of the 17th IEEE International Conference on Electronics, Circuits, and Systems (ICECS '10)*, pp. 1104–1107, December 2010.
16. S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," *IEEE Transactions on Neural Networks*, vol. 16, no. 1, pp. 24–38, 2005.
17. E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," *Neural Networks*, vol. 10, no. 2, pp. 299–314, 1997.
18. E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in *Proceedings of the 15th World Congress of the International Federation of Automatic Control*, Barcelona, Spain, July 2002.