Mathematical Problems in Engineering

Volume 2015, Article ID 451049, 12 pages

http://dx.doi.org/10.1155/2015/451049

## Decentralized Identification and Control in Real-Time of a Robot Manipulator via Recurrent Wavelet First-Order Neural Network

^{1}Tecnológico Nacional de México, Instituto Tecnológico de La Laguna, Boulevard Revolución y Calzada Cuauhtémoc, 27000 Torreón, COAH, Mexico

^{2}Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Boulevard Marcelino García Barragán No. 1421, 44430 Guadalajara, JAL, Mexico

Received 27 October 2014; Revised 7 March 2015; Accepted 10 March 2015

Academic Editor: Fernando Torres

Copyright © 2015 Luis A. Vázquez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A decentralized recurrent wavelet first-order neural network (RWFONN) structure is presented. The use of a Morlet wavelet activation function allows proposing a continuous-time neural structure of a single layer and a single neuron that identifies online, in a series-parallel configuration using the filtered error (FE) training algorithm, the dynamic behavior of each joint of a two-degree-of-freedom (DOF) vertical robot manipulator whose parameters, such as friction and inertia, are unknown. Based on the RWFONN subsystem, a decentralized neural controller is designed via the backstepping approach. The performance of the decentralized wavelet neural controller is validated via real-time results.

#### 1. Introduction

The control of robotic manipulators is a mature yet fruitful field for research, development, and manufacturing. Industrial robots are basically positioning systems; therefore, the trajectory tracking control problem for this kind of robot manipulator has remained a challenge in recent years. In this respect, the field of artificial neural networks (ANNs) has been playing an interesting role, and its implementation in real-time still poses a great challenge. Some works have dealt with the design of neural control schemes in discrete time, where extended Kalman filtering has been used as the training algorithm for both recurrent high-order neural networks (RHONNs) [1–3] and high-order neural networks (HONNs) [4, 5]. Neural control schemes in continuous time have been proposed using the filtered error (FE) algorithm as the training law for a RHONN [6–8]. In all of the works referenced above, identification and control were carried out online at the same time.

The wavelet neural network (WNN) structure arises from combining the wavelet concept with the ANN approach in order to achieve better identification performance [9–13]. Unlike conventional ANNs, which use sigmoidal activation functions, the WNN employs a wavelet activation function. WNNs have proved to be better than ANNs in the sense that their structure can provide more potential to enrich the mapping relationship between inputs and outputs [9]. WNNs simultaneously possess the learning ability of ANNs and the decomposition ability of wavelets. WNN-based control systems have been widely adopted for the control of complex dynamical systems owing to their fast learning properties and good generalization capability [10, 11]. Moreover, recurrent wavelet neural networks (RWNNs), which combine properties such as the dynamic response of recurrent neural networks (RNNs) and the fast convergence of a WNN, have been proposed to identify and control nonlinear systems [14–18]. An intelligent control system using a recurrent wavelet-based Elman neural network for position control of a permanent-magnet synchronous motor servo drive was proposed in [19]. In [20, 21], a self-recurrent wavelet neural network structure was proposed in order to identify a synchronous generator and the nonlinearities introduced in the system due to actuator saturation. In [22], two types of Haar wavelet neural network, feedforward and RWNN, were used to model discrete-time nonlinear systems. Some research has been carried out in real-time with interesting results, such as in [23], where an adaptive RWNN uncertainty observer was proposed to estimate the required lumped uncertainty and the backstepping approach was used to control the motion of a linear synchronous motor drive system. A novel global PID control scheme for nonlinear MIMO systems was proposed and synthesized for a robot manipulator in [24].
Inverse dynamics identification was based on a radial basis neural network with daughter RASP1 wavelet activation functions, in cascade with an infinite impulse response filter at the output to prune irrelevant signals and nodes. In [25], an intelligent control system based on a four-layer RWNN with Gaussian wavelet functions, trained via the gradient descent method and using the sliding mode approach, was proposed to achieve trajectory tracking for a UAV.

The use of WNNs and RWNNs is not a new theme in the literature; however, reported implementations of these structures involve multiple layers with multiple neurons. The use of offline training algorithms and the scarcity of real-time applications have been major motivations for the development of this work. The decomposition of signals and images using a space of basis functions with local support in both the time and frequency domains has proven useful in signal and image processing. Such motivations across various fields have led to the development of wavelets. Some view wavelets as a new basis for representing functions. Where the Fourier series maps a one-dimensional function of a continuous variable into a one-dimensional sequence of coefficients, the wavelet expansion maps it into a two-dimensional array of coefficients that localizes the signal in both time and frequency simultaneously. Wavelet algorithms process data at different scales of resolution in both the time and frequency domains and are useful tools for multichannel signal processing, for example, estimation or filtering, and multiresolution signal analysis (MRA). MRA shows how orthonormal wavelet bases can be used as a tool to describe mathematically the increment of information needed to go from a coarse approximation to a higher-resolution approximation. For all of the above, we propose a continuous-time decentralized wavelet neural identification and control scheme based on the structure of a RHONN model, where the sigmoidal activation functions and the high-order connections between them are replaced by a (Morlet) wavelet activation function. The resulting structure is called a recurrent wavelet first-order neural network (RWFONN), from which the design of the controller is carried out using the backstepping approach.
The training of the RWFONN is performed online in a series-parallel configuration using the FE algorithm, shaping a new neural structure of a single layer and a single neuron, whose capacity to identify more complex dynamics is increased by the Morlet wavelet. The performance of the proposed trajectory tracking scheme is validated via experimental results on a two-DOF vertical robot manipulator with unknown parameters.
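The multiresolution idea sketched above, a coarse approximation plus the detail needed to recover finer resolution, can be illustrated with a one-level Haar decomposition. This is a minimal self-contained sketch for intuition only; the network proposed in this paper uses a Morlet wavelet, not Haar.

```python
# One-level Haar wavelet decomposition: coarse approximation + detail.
# Minimal illustration of multiresolution analysis (MRA); not the
# Morlet-based RWFONN proposed in this paper.

def haar_step(signal):
    """Split an even-length signal into approximation and detail."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Recover the original signal from approximation and detail."""
    signal = []
    for a, d in zip(approx, detail):
        signal.extend([a + d, a - d])
    return signal

x = [4.0, 2.0, 5.0, 7.0]
a, d = haar_step(x)          # a = [3.0, 6.0], d = [1.0, -1.0]
assert haar_inverse(a, d) == x
```

The approximation `a` is the coarse view; the detail `d` is exactly the increment of information needed to return to full resolution, which is the MRA property discussed above.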

This paper is organized as follows. The RWFONN model, its approximation properties, and the FE algorithm are described in Section 2. Section 3 presents the proposed RWFONN model as well as the decentralized neural controller design. Section 4 presents the real-time results, and final comments conclude the paper.

#### 2. Recurrent Wavelet First-Order Neural Network Model

Consider the RHONN model [26], where the state of each neuron is governed by a differential equation of the form

$$\dot{x}_i = -a_i x_i + b_i \left[ \sum_{k=1}^{L_i} w_{ik} \prod_{j \in I_k} y_j^{d_j(k)} \right], \tag{1}$$

where $x_i$ is the state of the $i$th neuron, $a_i$ and $b_i$ are positive real constants, $w_{ik}$ is the $k$th adjustable synaptic weight connecting the $k$th high-order connection to the $i$th neuron, $L_i$ represents the total number of weights used to identify the plant behavior, and $y_j$ is the activation function for each one of the connections. Each $y_j$ is either an external input or the state of a neuron passed through a sigmoidal function; that is, $y_j = S(x_j)$, where $S(\cdot)$ is a sigmoidal nonlinearity. In a recurrent second-order neural network, the total input to the neuron is not only a linear combination of the components $y_j$ but may also be the product of two or more of them, represented by triplets, quadruplets, and so forth. $\{I_1, I_2, \ldots, I_{L_i}\}$ is a collection of nonordered subsets of $\{1, 2, \ldots, m+n\}$, and the $d_j(k)$ are nonnegative integers. This class of neural networks forms a RHONN.

The input vector for each neuron is given by

$$y = \begin{bmatrix} y_1 & \cdots & y_n & y_{n+1} & \cdots & y_{m+n} \end{bmatrix}^\top = \begin{bmatrix} S(x_1) & \cdots & S(x_n) & u_1 & \cdots & u_m \end{bmatrix}^\top, \tag{2}$$

where $u = [u_1, u_2, \ldots, u_m]^\top$ is the vector of external control inputs to the network.

Introducing the $L_i$-dimensional vector $z = [z_1, z_2, \ldots, z_{L_i}]^\top$, defined as

$$z_k = \prod_{j \in I_k} y_j^{d_j(k)}, \quad k = 1, \ldots, L_i, \tag{3}$$

the RHONN model (1) can be rewritten as

$$\dot{x}_i = -a_i x_i + b_i \sum_{k=1}^{L_i} w_{ik} z_k = -a_i x_i + b_i w_i^\top z. \tag{4}$$

Replacing now the vector $z$ by a wavelet vector $\psi$, and considering that higher-order terms will not be used, the RHONN model (1) can be rewritten as

$$\dot{x}_i = -a_i x_i + b_i w_i^\top \psi. \tag{5}$$

The adjustable parameter vector is defined by $\theta_i = b_i w_i$, so (5) becomes

$$\dot{x}_i = -a_i x_i + \theta_i^\top \psi, \tag{6}$$

where the vectors $\theta_i$, for $i = 1, \ldots, n$, represent the adjustable weights of the network, while the coefficients $a_i$ and $b_i$, for $i = 1, \ldots, n$, are part of the underlying network architecture and are fixed during training. The structure of form (6) is here called a RWFONN.
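As a rough numerical illustration of the single-neuron dynamics (6), the scalar equation can be integrated with a forward Euler step. All numerical values below (the constant $a$, the weight, the wavelet parameterization, the input, and the step size) are illustrative assumptions, not values from the paper:

```python
import math

# Forward-Euler integration of a single RWFONN neuron (6):
#   x_dot = -a * x + theta^T psi
# All numerical values here are illustrative assumptions.

def morlet(x, a=1.0, b=5.0):
    # Real Morlet-type wavelet (assumed parameterization).
    return math.exp(-x * x / (2.0 * a)) * math.cos(b * x)

def simulate(x0=0.0, a=2.0, theta=(0.8,), dt=1e-3, steps=5000,
             u=lambda t: math.sin(t)):
    x = x0
    for k in range(steps):
        t = k * dt
        psi = (morlet(u(t)),)              # single first-order connection
        x_dot = -a * x + sum(th * p for th, p in zip(theta, psi))
        x += dt * x_dot                    # Euler step
    return x

x_final = simulate()
assert abs(x_final) < 1.0   # a > 0 keeps the neuron BIBO-stable
```

Since $a > 0$ and the wavelet output is bounded, the neuron state stays bounded for any bounded input, which is the BIBO property invoked in the approximation proof.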

This RWFONN, that is, a neuron with a single first-order connection, is the simplest structure for a RHONN and is an expansion of the first-order Hopfield [27] and Cohen-Grossberg [28] models. The sigmoidal activation function is replaced by the real version of the modified Morlet wavelet [29, 30], of the form $\psi(x) = \exp\left(-x^2/2a\right)\cos(bx)$, with parameters $a$ and $b$ representing expansion and dilation, respectively. Properties of wavelet functions are described in [31–35].
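The exact parameterization of the modified Morlet wavelet is stated here as an assumption; one common real form, with expansion $a$ and dilation $b$, can be checked numerically for the basic wavelet properties (near-zero mean and rapid decay) that distinguish it from a sigmoid:

```python
import math

# Real Morlet-type wavelet (assumed form): psi(x) = exp(-x^2/(2a)) * cos(b*x).
def morlet(x, a=1.0, b=5.0):
    return math.exp(-x * x / (2.0 * a)) * math.cos(b * x)

# Crude Riemann sum over [-8, 8]: the wavelet has (near-)zero mean,
# a basic admissibility-style property that sigmoids lack.
dx = 1e-3
mean = sum(morlet(-8.0 + k * dx) * dx for k in range(int(16.0 / dx)))
assert abs(mean) < 1e-3
assert abs(morlet(6.0)) < 1e-6   # rapid decay away from the origin
```

The oscillation from $\cos(bx)$ gives the zero-mean behavior, while the Gaussian envelope provides the local support in time that motivates wavelet activations.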

##### 2.1. Approximation Properties of the RWFONN

In the following, the problem of approximating a general nonlinear dynamical system by a RWFONN is described. The input-output behavior of the system to be approximated is given by

$$\dot{\chi} = F(\chi, u), \tag{7}$$

where $u \in \mathbb{R}^m$ is the input to the system, $\chi \in \mathbb{R}^n$ is the state of the system, and $F$ is a smooth vector field defined on a compact set $\mathcal{Y} \subset \mathbb{R}^{n+m}$. The approximation problem consists in determining, using a wavelet activation function in continuous time, whether there exist weights $\theta$ such that (6) approximates the input-output behavior of an arbitrary dynamical system of form (7). Assume that $F$ is continuous and satisfies a local Lipschitz condition such that (7) has a unique solution and $(\chi(t), u(t)) \in \mathcal{Y}$ for all $t$ in some time interval $J_T = \{t : 0 \le t \le T\}$, where $T$ represents the time period over which the approximation is performed. Based on these assumptions, the next theorem, which is strictly an existence result and does not provide any constructive method for obtaining the correct weights $\theta$, proves that if a sufficiently large number of weights is allowed in (6), then it is possible to approximate any dynamical system to any degree of accuracy.

Theorem 1. *Suppose that system (7) and the RWFONN model (6) are initially at the same state $x(0) = \chi(0)$. Then, for any $\varepsilon > 0$ and any finite $T > 0$, there exist an integer $L$ and a weight vector $\theta^*$ such that the state $x(t)$ of the RWFONN model (6), with $L$ weights whose values are $\theta = \theta^*$, satisfies*

$$\sup_{0 \le t \le T} \left| x(t) - \chi(t) \right| \le \varepsilon. \tag{8}$$

*Proof.* The proof proceeds along the same lines as the proof in [36]. The dynamic behavior of the RWFONN model is described by

$$\dot{x}_i = -a_i x_i + \theta_i^\top \psi(x, u). \tag{9}$$

Assuming that each $a_i$ is positive, the bounded-input bounded-output (BIBO) stability of each neuron is guaranteed. By adding and subtracting $a_i \chi_i$, system (7) can be rewritten as

$$\dot{\chi}_i = -a_i \chi_i + G_i(\chi, u), \tag{10}$$

where $G_i(\chi, u) = F_i(\chi, u) + a_i \chi_i$. Since $x(0) = \chi(0)$, the identification error $e_i = x_i - \chi_i$ satisfies

$$\dot{e}_i = -a_i e_i + \theta_i^\top \psi(x, u) - G_i(\chi, u), \tag{11}$$

with $e_i(0) = 0$.

By assumption, $(\chi(t), u(t)) \in \mathcal{Y}$ for all $t \in [0, T]$, where $\mathcal{Y}$ is a compact subset of $\mathbb{R}^{n+m}$.

Let

$$\mathcal{Y}_\varepsilon = \left\{ (y, u) \in \mathbb{R}^{n+m} : |y - \chi(t)| \le \varepsilon,\ (\chi(t), u) \in \mathcal{Y},\ t \in [0, T] \right\}. \tag{12}$$

It can be seen that $\mathcal{Y}_\varepsilon$ is also a compact subset of $\mathbb{R}^{n+m}$ and $\mathcal{Y} \subset \mathcal{Y}_\varepsilon$, where $\varepsilon$ is the required degree of approximation. Since $G_i$ is a continuous function, it satisfies a Lipschitz condition

$$\left| G_i(y^1, u) - G_i(y^2, u) \right| \le \lambda_i \left| y^1 - y^2 \right| \tag{13}$$

in the compact domain $\mathcal{Y}_\varepsilon$.

In what follows, it is shown that the function $\theta_i^\top \psi$ satisfies the conditions of the Stone-Weierstrass theorem and can therefore approximate, over a compact domain, any continuous function. The preprocessing of the input via a continuous invertible function does not affect the ability of a network to approximate continuous functions. Therefore, it can be shown that if $L$ is large enough, that is, if the number of weights is large enough, then there exist weight values $\theta_i^*$ such that $\theta_i^{*\top} \psi$ can approximate $G_i$, in a compact domain, to any degree of accuracy. Hence, there exists $\theta_i^*$ such that

$$\sup_{(y, u) \in \mathcal{Y}_\varepsilon} \left| \theta_i^{*\top} \psi(y, u) - G_i(y, u) \right| \le \delta, \tag{14}$$

where $\delta$ is a constant.

The solution to the differential equation (11) can be expressed as

$$e_i(t) = \int_0^t e^{-a_i (t - \tau)} \left[ \theta_i^{*\top} \psi(x, u) - G_i(\chi, u) \right] d\tau. \tag{15}$$

Since each $a_i$ is a positive real constant, there exists a positive constant $a_0 = \min_i a_i$ such that $a_i \ge a_0$ and $e^{-a_i t} \le e^{-a_0 t}$; let $\lambda = \max_i \lambda_i$ be defined analogously. Based on all of the above, let $\delta$ be chosen as

$$\delta = \varepsilon\, a_0\, e^{-\lambda T}. \tag{16}$$

Now, consider that $(x(t), u(t)) \in \mathcal{Y}_\varepsilon$ for $t \in [0, T]$. From (15), taking norms on both sides and using (13), (14), and (16), the following inequalities hold for $t \in [0, T]$:

$$|e(t)| \le \int_0^t e^{-a_0 (t - \tau)} \left( \delta + \lambda |e(\tau)| \right) d\tau \le \frac{\delta}{a_0} + \lambda \int_0^t e^{-a_0 (t - \tau)} |e(\tau)|\, d\tau. \tag{17}$$

Using the Bellman-Gronwall lemma, it can be easily shown that

$$|e(t)| \le \frac{\delta}{a_0}\, e^{\lambda t} \le \varepsilon. \tag{18}$$

Suppose that $(x(t), u(t))$ does not belong to $\mathcal{Y}_\varepsilon$ for all $t \in [0, T]$. Then, by the continuity of $x(t)$, there exists a $T'$, where $0 < T' < T$, such that $(x(T'), u(T')) \in \partial \mathcal{Y}_\varepsilon$, where $\partial \mathcal{Y}_\varepsilon$ denotes the boundary of $\mathcal{Y}_\varepsilon$. Furthermore, carrying out the same analysis for $t \in [0, T']$ results in $|e(T')| \le \varepsilon$. Hence, (18) holds for all $t \in [0, T]$.

##### 2.2. Filtered Error Training Algorithm

Under the assumption that the unknown system is exactly modeled by a RWFONN architecture of form (6), the weight adjustment law and the FE training algorithm for this RWFONN are summarized next. Under the assumption of no modeling error, there exist unknown weight vectors $\theta_i^*$ such that each state $\chi_i$ of the unknown dynamical system (7) satisfies

$$\dot{\chi}_i = -a_i \chi_i + \theta_i^{*\top} \psi(\chi, u), \quad \chi_i(0) = \chi_i^0, \tag{19}$$

where $\chi_i^0$ is the initial state of the system. As is standard in system identification procedures, it is assumed here that the input $u$ and the state $\chi$ remain bounded for all $t \ge 0$. Based on the definition given by (3), this implies that $\psi$ is also bounded. In the sequel, unless confusion arises, the arguments of the vector field will be omitted. Next, the approach for estimating the unknown parameters of the RWFONN model (19) is described.

Considering (19) as the differential equation describing the dynamics of the unknown system, the identifier structure is chosen with the same form as in (6), where $\theta_i$ is the estimate of the unknown weight vector $\theta_i^*$. From (6) and (19), the identification error $e_i = x_i - \chi_i$ satisfies

$$\dot{e}_i = -a_i e_i + \theta_i^\top \psi - \theta_i^{*\top} \psi, \tag{20}$$

which can be rewritten as

$$\dot{e}_i = -a_i e_i + \tilde{\theta}_i^\top \psi, \tag{21}$$

where $\tilde{\theta}_i = \theta_i - \theta_i^*$ denotes the parametric error [35]. The weights are adjusted according to the learning law

$$\dot{\theta}_i = -\Gamma_i \psi e_i, \tag{22}$$

where the adaptive gain $\Gamma_i$ is a positive definite matrix. Stability and convergence properties of the weight adjustment law given above are analyzed in [37]. The following theorem establishes that this identification scheme has convergence properties with the gradient method for adjusting the weights.
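To see the filtered-error mechanism at work, the sketch below identifies a scalar system of form (19) with a learning law of form (22). The plant constants, gain, input, and integration scheme are all illustrative assumptions:

```python
import math

# Filtered-error identification sketch for a scalar system (values assumed):
#   plant:      chi_dot   = -a*chi + theta_star * psi(u)
#   identifier: x_dot     = -a*x   + theta      * psi(u)
#   learning:   theta_dot = -gamma * psi * e,  e = x - chi   (cf. (21)-(22))

def morlet(x, a=1.0, b=5.0):
    # Real Morlet-type wavelet (assumed parameterization).
    return math.exp(-x * x / (2.0 * a)) * math.cos(b * x)

a, theta_star, gamma, dt = 2.0, 1.5, 50.0, 1e-4
chi = x = theta = 0.0
for k in range(200_000):                    # 20 s of simulated time
    t = k * dt
    psi = morlet(math.sin(0.7 * t))         # persistently exciting input
    e = x - chi
    chi += dt * (-a * chi + theta_star * psi)
    x += dt * (-a * x + theta * psi)
    theta += dt * (-gamma * psi * e)        # filtered-error weight update

assert abs(x - chi) < 1e-2                  # identification error converges
```

With persistent excitation the single weight also converges toward `theta_star`, consistent with the boundedness and error-convergence claims of Theorem 2.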

Theorem 2 (see [36]). *Consider the filtered error RWFONN model given by (21), whose weights are adjusted according to (22). Then, (1) $e_i, \tilde{\theta}_i \in L_\infty$ (i.e., $e_i$ and $\tilde{\theta}_i$ are uniformly bounded); (2) $\lim_{t \to \infty} e_i(t) = 0$.*

#### 3. Neural Backstepping Controller Design

Consider now the mathematical model of an $n$-DOF robot manipulator given by

$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + g(q) + f(\dot{q}) = \tau, \tag{23}$$

where $\tau \in \mathbb{R}^n$ is the input vector that represents the torques applied to each joint; $q$, $\dot{q}$, and $\ddot{q} \in \mathbb{R}^n$ are the states of the system corresponding to the position, velocity, and acceleration of each joint, respectively; $M(q)\ddot{q}$ represents the contribution of the inertial forces to the dynamical equation, so the matrix $M(q)$ represents the inertia matrix of the manipulator; $C(q, \dot{q})\dot{q}$ represents the Coriolis forces; $g(q)$ represents the gravitational forces; and $f(\dot{q})$ is a vector that combines both viscous and Coulomb friction terms, that is, the so-called friction vector [38]. In this work, as a continuation of [35], a decentralized RWFONN trained via the FE algorithm (21) with weight adjustment law (22) is proposed for the identification and control of a two-DOF vertical robot manipulator. From (6), the decentralized RWFONN model is given as follows:

$$\begin{aligned} \dot{x}_{i1} &= -a_{i1} x_{i1} + \theta_{i1} \psi_{i1}(\chi_{i1}) + x_{i2}, \\ \dot{x}_{i2} &= -a_{i2} x_{i2} + \theta_{i2} \psi_{i2}(\chi_{i2}) + u_i, \end{aligned} \tag{24}$$

with the Morlet wavelet $\psi_{ij}(\chi_{ij})$ for the activation term, where $i = 1, 2$ denotes the number of the joint; $j = 1, 2$ is the number of the state of the $i$th RWFONN model; $\chi_{i1}$ represents the measurable local angular position and $\chi_{i2}$ the calculated local angular velocity; and $u_i$ is the control input. It must be noticed that this decentralized neural scheme is in the form of a strict-feedback system, as that proposed in [8]; hence, the backstepping approach is still suitable for the design of the neural controller. For each $i$th joint, the identification error between the neural identifier and the joint variable is defined as $e_{i1} = x_{i1} - \chi_{i1}$ for the angular position and $e_{i2} = x_{i2} - \chi_{i2}$ for the angular velocity. To update the synaptic weights online, the adaptive learning laws are given by

$$\dot{\theta}_{i1} = -\gamma_{i1} \psi_{i1} e_{i1}, \qquad \dot{\theta}_{i2} = -\gamma_{i2} \psi_{i2} e_{i2}, \tag{25}$$

with $\gamma_{ij} > 0$ as the adaptive gain, for $i = 1, 2$ and $j = 1, 2$.

Next, our objective is to design a feedback control law that forces the system output to follow a desired trajectory. The decentralized neural control scheme is based on the following.

Denoting by $e_{i1}$ the identification error and by $z_{i1} = x_{i1} - \chi_{i1d}$ the trajectory tracking error between the state of the neural network for position and the desired trajectory $\chi_{i1d}$, the output tracking error is rewritten as

$$\epsilon_{i1} = \chi_{i1} - \chi_{i1d} = z_{i1} - e_{i1}, \tag{26}$$

where $e_{i1} = x_{i1} - \chi_{i1}$. Consequently, the error dynamics is given as

$$\dot{\epsilon}_{i1} = \dot{z}_{i1} - \dot{e}_{i1}. \tag{27}$$

Considering (21), (24), and

$$\dot{z}_{i1} = \dot{x}_{i1} - \dot{\chi}_{i1d},$$

the error dynamics is then described by

$$\dot{z}_{i1} = -a_{i1} x_{i1} + \theta_{i1} \psi_{i1} + x_{i2} - \dot{\chi}_{i1d}. \tag{28}$$

The decentralized wavelet neural controller design can be formulated as follows.

Theorem 3. *For a neural identifier in the strict-feedback form (24), the dynamics of (26) and of the neural network tracking error (28), under the control law (41) and with positive values for the gains $k_{i1}$ and $k_{i2}$, have an asymptotically stable equilibrium point, and the output tracking error (26) tends to zero.*

*Proof.* Assuming that the system is Lipschitz, we proceed by proposing the augmented Lyapunov-like function

$$V_{i1} = \frac{1}{2} z_{i1}^2 + \frac{1}{2} e_{i1}^2 + \frac{1}{2\gamma_{i1}} \tilde{\theta}_{i1}^2, \tag{31}$$

taken from [36].

Considering (21)-(22), the time derivative of (31) is then given by [36]

$$\dot{V}_{i1} = z_{i1} \dot{z}_{i1} - a_{i1} e_{i1}^2. \tag{32}$$

The time derivative of $V_{i1}$ along the solutions of (28) is

$$\dot{V}_{i1} = z_{i1} \left( -a_{i1} x_{i1} + \theta_{i1} \psi_{i1} + x_{i2} - \dot{\chi}_{i1d} \right) - a_{i1} e_{i1}^2. \tag{33}$$

In order to guarantee that (33) is negative definite, the desired dynamics for $x_{i2}$ is proposed as

$$x_{i2d} = -k_{i1} z_{i1} + a_{i1} x_{i1} - \theta_{i1} \psi_{i1} + \dot{\chi}_{i1d}; \tag{34}$$

thus,

$$\dot{V}_{i1} = -k_{i1} z_{i1}^2 - a_{i1} e_{i1}^2, \tag{35}$$

with $k_{i1}$ a positive real value.

Now, proposing the augmented Lyapunov-like function

$$V_{i2} = V_{i1} + \frac{1}{2} z_{i2}^2 \tag{36}$$

and introducing the error

$$z_{i2} = x_{i2} - x_{i2d}, \tag{37}$$

the time derivative of (36) is given by

$$\dot{V}_{i2} = \dot{V}_{i1} + z_{i2} \dot{z}_{i2} = -k_{i1} z_{i1}^2 - a_{i1} e_{i1}^2 + z_{i1} z_{i2} + z_{i2} \dot{z}_{i2}. \tag{38}$$

From (37) and (24), $\dot{z}_{i2}$ is rewritten as

$$\dot{z}_{i2} = -a_{i2} x_{i2} + \theta_{i2} \psi_{i2} + u_i - \dot{x}_{i2d}. \tag{39}$$

From all of the above, (38) takes the form

$$\dot{V}_{i2} = -k_{i1} z_{i1}^2 - a_{i1} e_{i1}^2 + z_{i2} \left( z_{i1} - a_{i2} x_{i2} + \theta_{i2} \psi_{i2} + u_i - \dot{x}_{i2d} \right). \tag{40}$$

In order for (40) to be negative definite, the control law is proposed as

$$u_i = -k_{i2} z_{i2} - z_{i1} + a_{i2} x_{i2} - \theta_{i2} \psi_{i2} + \dot{x}_{i2d}. \tag{41}$$

From (34), considering (28) and (24), we have

$$\dot{x}_{i2d} = -k_{i1} \dot{z}_{i1} + a_{i1} \dot{x}_{i1} - \dot{\theta}_{i1} \psi_{i1} - \theta_{i1} \dot{\psi}_{i1} + \ddot{\chi}_{i1d}. \tag{42}$$

Accordingly,

$$\dot{V}_{i2} = -k_{i1} z_{i1}^2 - a_{i1} e_{i1}^2 - k_{i2} z_{i2}^2, \tag{43}$$

with $k_{i2}$ a positive real value.
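The two-step construction above follows the standard backstepping pattern: a virtual control stabilizes the position error, and the actual input stabilizes the deviation from that virtual control. A minimal numerical sketch for a generic strict-feedback pair with known dynamics (a toy double integrator with assumed gains, not the neural identifier (24)) is:

```python
import math

# Generic backstepping for a strict-feedback pair (assumed toy dynamics):
#   x1_dot = x2,   x2_dot = u
# Track x1 -> xd(t). The virtual control alpha stabilizes z1 = x1 - xd;
# u then stabilizes z2 = x2 - alpha, mirroring the steps of (34)-(41).

k1, k2, dt = 4.0, 4.0, 1e-4
x1, x2 = 0.0, 0.0
for k in range(200_000):                         # 20 s of simulated time
    t = k * dt
    xd, xd_dot, xd_ddot = math.sin(t), math.cos(t), -math.sin(t)
    z1 = x1 - xd
    alpha = -k1 * z1 + xd_dot                    # virtual control for x2
    z2 = x2 - alpha
    alpha_dot = -k1 * (x2 - xd_dot) + xd_ddot    # analogue of (42)
    u = -k2 * z2 - z1 + alpha_dot                # cancels the z1*z2 cross term
    x1 += dt * x2                                # Euler integration
    x2 += dt * u

assert abs(x1 - math.sin(20.0)) < 1e-2           # tracking error has converged
```

The `-z1` term in `u` is what removes the cross term $z_1 z_2$ from the Lyapunov derivative, leaving a negative definite expression analogous to (43).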

#### 4. Real-Time Results

##### 4.1. Robot Description

In order to evaluate in real-time the performance of the proposed decentralized neural backstepping control scheme, it is implemented on a two-DOF robot manipulator of our own design with unknown parameters, shown in Figure 1, whose displacements take place in the vertical plane [38]. The robot manipulator consists of two rigid links; brushless direct-drive servos are used to drive the joints, with a 16:1 gear reduction for link 1 and a direct connection for link 2. The robot arm is built with the MTR-70-3S-AA and MTR-55-3S-AA servomotors, manufactured by FESTO Pneumatic AG, for link 1 and link 2, respectively. Incremental encoders embedded in the servomotors deliver information, via the RS-422 protocol, related to the angular displacements. Both servomotors exhibit a resolution of 4096 pulses/rev, that is, an accuracy of 0.00153398 rad/pulse. The angular velocities are computed via numerical differentiation of the angular position signals. According to the actuators' manufacturer, the direct-drive servomotors are able to supply torques within the following bounds:
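The quoted accuracy is simply one revolution, $2\pi$ rad, divided by the 4096 encoder pulses. The snippet below checks this figure and sketches a backward-difference velocity estimate; the sampling period used is an assumption for illustration:

```python
import math

# Encoder accuracy: 2*pi rad per revolution over 4096 pulses/rev.
rad_per_pulse = 2.0 * math.pi / 4096
assert abs(rad_per_pulse - 0.00153398) < 1e-8

# Angular velocity via backward-difference numerical differentiation of the
# sampled position signal (sampling period dt is an assumed value).
def backward_diff_velocity(q_now, q_prev, dt=0.001):
    return (q_now - q_prev) / dt
```

Note that differentiating a signal quantized at one pulse gives velocity steps of `rad_per_pulse / dt`, which is why the choice of sampling period matters in practice.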