Mathematical Problems in Engineering
Volume 2015, Article ID 347410, 12 pages
http://dx.doi.org/10.1155/2015/347410
Research Article

Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

Computer Science Department, CUCEI, University of Guadalajara, 44430 Guadalajara, JAL, Mexico

Received 1 November 2014; Revised 21 January 2015; Accepted 21 January 2015

Academic Editor: Luis Rodolfo Garcia Carrillo

Copyright © 2015 Carlos López-Franco et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN) trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

1. Introduction

Traditionally, robot motion control approaches rely on feedback provided by a tachometer or encoder, whose advantages are easy implementation and low cost. However, in mobile robotics such information is not accurate due to the slip phenomenon. One sensor that can be used to overcome this problem is the visual sensor. In this work, we use computer vision techniques to overcome such disadvantages.

Although a visual sensor is more accurate, it can suffer from unknown external disturbances due to the robot motion. In addition, the robot model is inaccurate and it suffers from parameter uncertainties. To overcome such problems, we propose the use of a discrete-time recurrent high order neural network (RHONN) trained with an extended Kalman filter with visual feedback.

The main goal of optimal control theory is to determine the control signals that force a process to satisfy physical constraints and, simultaneously, minimize a performance criterion [1]. In optimal control theory, a cost functional is defined as a function of the state and the control variables. Unfortunately, this requires solving the Hamilton-Jacobi-Bellman (HJB) equation, which is not an easy task. To avoid solving the HJB equation, an inverse optimal control approach can be used [2]. In inverse optimal control, we start with the definition of a stabilizing feedback control law and then show that it optimizes a cost functional.

In this work, the input of the inverse optimal controller is determined by visual feedback. The visual sensor is responsible for tracking the target and for estimating the robot velocities needed to achieve the desired task. In our case the task consists of moving the robot from an initial pose to a desired pose with respect to a target object.

1.1. State of the Art

An extensive class of controllers has been proposed for mobile robots [3–9]. Most of these references present only simulation results, and the controllers are implemented in continuous time. A common problem when applying standard control theory is that the required parameters are often either unknown beforehand or subject to change during operation. For example, the inertia of a robot as seen at the drive motor has many components, which might include the rotational inertia of the motor rotor, the inertia of gears and shafts, the rotational inertia of its tires, the robot's empty weight, and its payload. Worse yet, there are elements between these components, such as bearings, shafts, and belts, which may have spring constants and friction loads [10].

1.2. Main Contribution

The main contributions of this paper are as follows: (i) a controller for mobile robots that includes the robot dynamics and does not need previous knowledge of the robot parameters or model; (ii) the computation of the trajectory references for the controller in real time using visual data acquired from a camera mounted on the robot; (iii) the use of this visual data by the controller to drive the nonholonomic robot from its current pose toward a desired one; (iv) the real-time integration of visual servoing and an inverse optimal neural controller that allows nonholonomic mobile robots to perform autonomous navigation.

The rest of this work is organized as follows. First the problem formulation is presented in Section 2. Then the model of the mobile robot and the framework setup are described in Section 3. After that the camera model is presented in Section 4. Later, the visual feedback algorithm is introduced in Section 5. Section 6 provides an introduction to the neural identification. The inverse optimal control approach is presented in Section 7. The neural identification and control of the mobile robot are presented in Section 8. The simulation results are presented in Section 9. The experimental results are presented in Section 10. Finally, the conclusions are given in Section 11.

2. Problem Formulation

The main focus of this work is the navigation of a differential drive robot from its current pose to a desired pose by using visual feedback and a neural controller (Figure 1). The camera is mounted on the robot, and therefore the robot motion induces camera motion. Before the task begins, the desired feature is estimated by applying a segmentation algorithm to an image of the target. The same process is performed in real time to compute the current feature. When the algorithm begins, the desired feature and the current feature are compared to compute the error. Then, using this error, the controller estimates the velocities needed to complete the task.

Figure 1: Controller diagram.

3. Model Formulation

In this paper we consider a two-wheel differential drive mobile robot that moves on a plane. The motion of the mobile robot is defined with respect to an inertial frame fixed in the world. A frame is attached to the current camera pose, another to the desired camera pose, and a third to the mobile robot pose (Figure 2).

Figure 2: Coordinate frames of the robot, camera, and object.

The mobile robot used in this work is a differential drive mobile robot; its kinematic model is

\dot{x} = \nu \cos\theta, \quad \dot{y} = \nu \sin\theta, \quad \dot{\theta} = \omega,

where the inputs \nu and \omega represent the driving velocity and the steering velocity, respectively.
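The kinematic model above, discretized with the Euler method (as done later for the dynamic model), can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
import math

def unicycle_step(x, y, theta, v, w, dt):
    """One Euler step of the differential-drive (unicycle) kinematics:
    x' = v*cos(theta), y' = v*sin(theta), theta' = w."""
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * w)

# Driving straight along the x-axis for one step of 0.1 s at 1 m/s:
pose = unicycle_step(0.0, 0.0, 0.0, 1.0, 0.0, 0.1)  # -> (0.1, 0.0, 0.0)
```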

The mobile robot has two actuated wheels, and its dynamics can be expressed in the following state-space model [4, 11, 12], where each subsystem is defined in terms of the quantities below. One parameter represents half of the width of the mobile robot, another the radius of each wheel, and another the distance from the center of mass of the mobile robot to the midpoint between the right and left driving wheels. Two state values are the coordinates of the robot position and one is the heading angle of the mobile robot; two values represent the angular velocities of the right and left wheels, respectively, and two represent the motor currents of the right and left wheels, respectively. Two further parameters are the masses of the body and of a wheel with its motor, respectively. Three moments of inertia are those of the body about the vertical axis through the center of mass, of a wheel with a motor about the wheel axis, and of a wheel with a motor about the wheel diameter, respectively. Two positive terms are the damping coefficients, and a vector collects disturbances, including unmodeled dynamics. The electrical parameters are the motor torque constant, the input voltage, the resistance, the inductance, the back electromotive force coefficient, and the gear ratio. Model (2) is discretized using the Euler method.

4. Camera Model

In this paper the camera used is a traditional perspective camera. In this section we describe the perspective model; this model describes the relationship between a 3D point and its projection. A world point P_w is first mapped into the camera frame by

P_c = T P_w, \quad T = \begin{bmatrix} R & t \\ 0^{\top} & 1 \end{bmatrix},

where T is a rigid transformation that relates the world coordinate frame to the camera coordinate frame, R is a rotation matrix, and t is a translation vector; both terms represent the extrinsic camera parameters.

The point can then be projected into the image plane as

p \simeq K P_c, \quad K = \begin{bmatrix} f & s & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix},

where the matrix K represents the intrinsic camera parameters, f is the focal length in pixels, (u_0, v_0) are the coordinates of the principal point (in pixels), and s represents the skew factor.
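The projection pipeline of this section (extrinsic transformation, intrinsic matrix, perspective division) can be sketched as follows; the numeric entries of K are illustrative placeholders, not the calibration used in the paper:

```python
import numpy as np

def project(P_w, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole camera.
    P_c = R @ P_w + t (extrinsics), then p ~ K @ P_c up to the depth."""
    P_c = R @ np.asarray(P_w, dtype=float) + t
    p = K @ P_c
    return p[:2] / p[2]          # perspective division by the depth Z

# Placeholder intrinsics: f = 500 px, principal point (320, 240), zero skew.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point on the optical axis projects to the principal point.
p = project([0.0, 0.0, 2.0], K, np.eye(3), np.zeros(3))  # -> (320.0, 240.0)
```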

5. Visual Based Feedback

The use of visual feedback to control a robot is commonly termed visual servoing or visual control [13–16]. In this work we assume that a monocular camera is mounted directly on the mobile robot, in which case motion of the robot induces camera motion.

The objective of the visual feedback is the minimization of the error

e = s - s^{*},

where s denotes the features extracted from the current pose and s^{*} denotes the features extracted from the desired pose. In our case the features of error (9) are two-dimensional vectors given in normalized image coordinates [17].

During the motion of the robot the camera undergoes a velocity v_c = (\upsilon_c, \omega_c), where \upsilon_c is the linear velocity and \omega_c is the angular velocity. The relationship between this velocity and the time variation of the features is \dot{s} = L_x v_c.

The relationship between the camera velocity and the time variation of the error can be determined with (9) and (10); that is, \dot{e} = L v, where the interaction matrix can be defined as [15]

L = L_x V,

where V is a motion transformation matrix defined as [18]

V = \begin{bmatrix} R & [t]_{\times} R \\ 0 & R \end{bmatrix},

where R is the rotation matrix that relates the camera and robot frames and [t]_{\times} represents the skew-symmetric matrix associated with the translation vector t.

The interaction matrix of an image point can be defined as [13]

L_x = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^{2}) & y \\ 0 & -1/Z & y/Z & 1+y^{2} & -xy & -x \end{bmatrix},

where Z represents the depth of the feature relative to the camera frame. The values x, y are computed from the pixel coordinates u, v of the tracked feature and the camera calibration matrix as x = (u - u_0)/f and y = (v - v_0)/f.

We can note from (14) that each feature point provides two equations; therefore, for a 6-DOF problem we need at least three features. However, since we are dealing with a mobile robot with only 2 controllable DOF, a single feature is enough.

From (11) and (12) we can define the velocity input to the robot as [15]

v = -\lambda L^{+} e,

where L^{+} is the pseudoinverse of the matrix L and \lambda is a positive gain.

From the above expression, we can observe that velocities (16) can be used as reference for the neural controller.
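The velocity computation above can be sketched for a single point feature as follows, under the simplifying assumption that the full 6-DOF interaction matrix is used directly (the paper additionally applies the motion transformation matrix between camera and robot frames); the gain value and all names are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one image point with normalized coordinates
    (x, y) and depth Z, as in the classic IBVS formulation [13]."""
    return np.array([[-1.0/Z, 0.0,    x/Z, x*y,        -(1.0 + x*x), y],
                     [0.0,    -1.0/Z, y/Z, 1.0 + y*y,  -x*y,        -x]])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """IBVS law v = -lam * L^+ (s - s*), with L^+ the Moore-Penrose
    pseudoinverse of the interaction matrix."""
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ e
```

When the current feature coincides with the desired one the error is zero, and the commanded velocity vanishes, which is the stopping condition of the task.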

6. Neural Identification

In this section, we present the neural identification process. First, let us consider a MIMO nonlinear system

x(k+1) = F(x(k), u(k)),

where x is the state of the system, u is the control input, and F is a nonlinear function.

System (17) is identified by a discrete-time recurrent high order neural network (RHONN), defined as

x_i(k+1) = w_i^{\top} z_i(x(k), u(k)), \quad i = 1, \ldots, n,

where x_i is the state of the ith neuron, w_i is the respective online adapted weight vector, n is the state dimension, and z_i is given by

z_i(x(k), u(k)) = \begin{bmatrix} \prod_{j \in I_1} y_j^{d_j(1)} & \cdots & \prod_{j \in I_{L_i}} y_j^{d_j(L_i)} \end{bmatrix}^{\top},

where L_i is the respective number of high-order connections, \{I_1, \ldots, I_{L_i}\} is a collection of nonordered subsets of \{1, \ldots, n+m\}, m is the number of external inputs, the d_j(\cdot) are nonnegative integers, and y is defined as follows.

In (20), y is the input vector of the neural network, defined by

y = \begin{bmatrix} S(x_1) & \cdots & S(x_n) & u_1 & \cdots & u_m \end{bmatrix}^{\top}, \quad S(\varsigma) = \frac{1}{1 + e^{-\beta \varsigma}},

where \varsigma is any real-valued variable and \beta > 0.

Based on the structure of the discrete-time RHONN series-parallel representation [19], we propose a neural network model in which the adjustable weight matrices are adapted online, other matrices contain fixed parameters, and a linear function of the states or of the external inputs is used, corresponding to the plant structure or to the external inputs to the network, respectively.

The EKF-based training algorithm is described by [20]

w_i(k+1) = w_i(k) + \eta_i K_i(k) e_i(k),
K_i(k) = P_i(k) H_i(k) \left[ R_i(k) + H_i^{\top}(k) P_i(k) H_i(k) \right]^{-1},
P_i(k+1) = P_i(k) - K_i(k) H_i^{\top}(k) P_i(k) + Q_i(k),

with

e_i(k) = x_i(k) - \hat{x}_i(k),

where e_i is the identification error, P_i is the state estimation prediction error covariance matrix, w_i is the ith weight vector at step k, \eta_i is a design parameter such that 0 \le \eta_i \le 1, x_i is the ith plant state, \hat{x}_i is the ith neural network state, n is the number of states, K_i is the Kalman gain matrix, R_i is the measurement noise covariance matrix, Q_i is the state noise covariance matrix, and H_i is a matrix in which each entry is the derivative of the ith neural network state with respect to each adjustable weight, H_{ij}(k) = \partial \hat{x}_i(k) / \partial w_{ij}(k).

Usually P_i, Q_i, and R_i are initialized as diagonal matrices [21]. It is important to note that H_i(k), K_i(k), and P_i(k) for the EKF are bounded [22].
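The EKF-based weight update described above can be sketched as follows for a single neuron with a scalar output; treating the derivative vector as a column and the innovation covariance as a scalar are implementation choices for this sketch, not prescribed by the paper:

```python
import numpy as np

def ekf_update(w, P, H, e, Q, R, eta=1.0):
    """One EKF-based weight update for one RHONN neuron:
    K = P H (R + H^T P H)^-1,  w <- w + eta*K*e,  P <- P - K H^T P + Q."""
    H = np.asarray(H, dtype=float).reshape(-1, 1)  # derivatives as a column
    S = R + float(H.T @ P @ H)                     # scalar innovation covariance
    K = (P @ H) / S                                # Kalman gain (column vector)
    w_new = w + eta * K.flatten() * e              # weight correction
    P_new = P - K @ (H.T @ P) + Q                  # covariance update
    return w_new, P_new
```

In practice P, Q, and R are initialized as diagonal matrices (design parameters), and the update runs once per sampling step during the online training.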

It is possible to identify (17) by (18) due to the following theorem.

Theorem 1 (see [23]). The RHONN (18) trained with the EKF-based algorithm [20] to identify the nonlinear plant (17) ensures that the identification error is semiglobally uniformly ultimately bounded (SGUUB); moreover, the RHONN weights remain bounded.

7. Inverse Optimal Control

Let us consider a nonlinear affine system

x(k+1) = f(x(k)) + g(x(k)) u(k),

where x is the state of the system at time k, u is the control input, and f, g are smooth and bounded mappings. We assume f(0) = 0; k ranges over the set of nonnegative integers. The following meaningful cost functional is associated with the trajectory tracking problem for system (26):

J(z(k)) = \sum_{n=k}^{\infty} \left( l(z(n)) + u^{\top}(n) R u(n) \right),

where z(k) = x(k) - x_{\delta}(k), with x_{\delta}(k) the desired trajectory for x(k); l(z) is a positive semidefinite function and R is a real symmetric positive definite weighting matrix. The entries of R can be fixed or can be functions of the system state in order to vary the weighting on control efforts according to the state value [1].

For solving the trajectory tracking optimal control problem, it is necessary to solve the associated HJB equation, which is a challenging task. To overcome this problem, we propose to solve the inverse optimal control problem instead.

Definition 2. Consider the tracking error z(k) = x(k) - x_{\delta}(k), with x_{\delta}(k) the desired trajectory for x(k), and let us define the control law (29).
It will be inverse optimal (globally) stabilizing along the desired trajectory if (i) it achieves (global) asymptotic stability of z = 0 for system (26) along the reference x_{\delta}(k); (ii) V(z) is a (radially unbounded) positive definite function such that the required inequality is satisfied.
With this selection, a solution for (28) is obtained and cost functional (27) is minimized.

As established in Definition 2, the inverse optimal control law for trajectory tracking is based on knowledge of V(z). Then, a control Lyapunov function (CLF) is proposed such that (i) and (ii) are guaranteed. Hence, instead of solving (28), a quadratic candidate CLF is proposed for (29) with the form

V(z) = \frac{1}{2} z^{\top} P z, \quad P = P^{\top} > 0,

in order to ensure stability of the tracking error z(k), where z(k) = x(k) - x_{\delta}(k).

The control law (29) with (31), which is referred to as the inverse optimal control law, optimizes the meaningful cost functional of the form (27). Consequently, by considering V(z) as in (31), control law (29) takes the form given in (34).

P and R are positive definite and symmetric matrices; thus, the existence of the inverse in (34) is ensured.
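As a point of reference, with a quadratic CLF of the form (31), discrete-time inverse optimal tracking laws of this family (cf. [2] and related inverse optimal control literature) are commonly written as follows; this expression is a standard reconstruction stated under the notation z(k) = x(k) - x_{\delta}(k) of this section, not a verbatim copy of (34):

```latex
u(k) = -\frac{1}{2}\left( R + \frac{1}{2}\, g^{\top}(x(k))\, P\, g(x(k)) \right)^{-1}
       g^{\top}(x(k))\, P \left( f(x(k)) - x_{\delta}(k+1) \right)
```

Since P and R are symmetric positive definite, the matrix being inverted is positive definite as well, which guarantees that the law is well defined for every state.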

8. Neural Identification and Control of the Mobile Robot

In this section we describe the neural identification and neural control for the nonholonomic mobile robot; in the next section we will describe the reference for this controller. The reference velocities will be estimated using visual feedback.

8.1. Neural Identification Design

The physical parameters for the mobile robot simulations are selected as follows.

It is important to note that these parameters are considered unknown for the controller design and are included only for simulation purposes. Then, we apply the neural identifier developed in Section 6 to obtain a discrete-time neural model for the electrically driven nonholonomic mobile robot (2), trained with the EKF [20], as follows: two neural states identify the x and y coordinates, respectively; one identifies the robot angle; two identify the angular velocities of the right and left wheels, respectively; finally, two identify the motor currents of the right and left wheels, respectively. The NN training is performed online, and all of its states are initialized in a random way. The RHONN parameters are heuristically selected.

It is important to consider that for the EKF-learning algorithm the covariances are used as design parameters [21, 24].

8.2. Control Synthesis

In order to facilitate the controller synthesis, we rewrite neural network (36) in a block structure form, with all vectors and matrices of appropriate dimensions according to (38).

The goal is to force the state to track a desired reference signal . This is achieved by designing a control law as described in Section 7. First the tracking error is defined as

Then, using (38) and introducing the desired dynamics for the first error block, the desired value for the first pseudocontrol input is calculated from (40). At the second step, we introduce a new error variable; using (38) and introducing the desired dynamics for this variable, the desired value for the second pseudocontrol input is calculated from (43). At the third step, we introduce a final error variable; taking one step ahead yields the dynamics from which the control law is defined, with the controller parameters selected heuristically.

9. Simulation Results

This section presents the simulation results of the proposed approach. The simulations have been performed using Simulink. In the simulation the robot moves under the action of the proposed controller; the controller uses as references the velocities provided by the visual feedback. The robot starts from an initial pose, and the desired pose differs from it by a rotation about the vertical axis and a translation. The camera frame is related to the robot frame by a fixed rotation and translation. A fixed sampling time was used for the simulation.

In Figure 3 we present the camera motion in 3D space. Figure 4 shows the identification performance for the x-axis, y-axis, and angle. Figure 5 shows the trajectory tracking results. In Figure 6 we present the tracking errors. In Figure 7 we show the applied control signal for the left and right wheels. Figure 8 presents the current identification for simulation in the left and right wheels. Finally, Figure 9 shows the angular velocity identification for simulation in the left and right wheels.

Figure 3: Camera trajectory in 3D space.
Figure 4: x-axis identification (a), y-axis identification (b), and angle (c); plant signal in solid line and neural signal in dashed line.
Figure 5: Trajectory tracking result for simulation (plant signal in solid line and neural signal in dashed line).
Figure 6: Tracking errors: x-axis (a), y-axis (b), and angle (c).
Figure 7: Applied control signal for the left and right wheels, respectively.
Figure 8: Current identification for simulation in left and right wheels, respectively (plant signal in solid line and neural signal in dashed line).
Figure 9: Angular velocity identification for simulation in left and right wheels, respectively (plant signal in solid line and neural signal in dashed line).
9.1. Comparison

In order to compare the proposed control scheme with previous works, such as those presented in [3], Table 1 is included. The controllers used in this comparison are the Neural Backstepping Controller (NBC) [3], the High Order Sliding Mode (HOSM) controller [3], and the inverse optimal neural controller (IONC) proposed in this paper.

Table 1: Comparison between inverse optimal neural controller (IONC) with respect to Neural Backstepping Controller (NBC) and High Order Sliding Mode (HOSM) controller.

10. Experimental Results

In this section, we present the experimental results obtained by applying the proposed approach. The mobile robot and the visual sensor used in the tests are shown in Figure 10. The sensor used in this work was a Kinect sensor; it has a perspective camera, an IR emitter, and an IR depth sensor, which are used to estimate the depth of the feature.

Figure 10: Differential drive robot, with Kinect sensor.

The camera calibration matrix of our Kinect perspective camera is

The visual feature used in this work is the center of a circle. The target object contains one circle, which is segmented using an HSV segmentation; its centroid is then used as the feature for the task (Figure 11).

Figure 11: Feature tracking process. (a) HSV conversion, (b) HSV segmentation, and (c) detected feature (centroid of circle).
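The centroid step of the feature-tracking process in Figure 11 can be sketched as follows; only the centroid computation from the binary mask (image moments m10/m00 and m01/m00) is shown here, and the HSV conversion and thresholding that produce the mask are assumed to be done upstream (e.g., with OpenCV's cvtColor and inRange):

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (u, v), in pixels, of a binary segmentation mask.
    Equivalent to the image moments ratio (m10/m00, m01/m00)."""
    vs, us = np.nonzero(mask)        # row (v) and column (u) indices of the blob
    if us.size == 0:
        return None                  # target not visible in this frame
    return float(us.mean()), float(vs.mean())

# A small square blob centered at column 4, row 5:
mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 3:6] = True
c = mask_centroid(mask)              # -> (4.0, 5.0)
```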

The initial and desired pose of the robot are shown in Figure 12.

Figure 12: Initial pose (a), desired pose (b).

Figure 13 shows the identification performance for the x-axis, y-axis, and angle. In Figure 14 we present the tracking errors. We note that the convergence time is determined by the distance to the target and by the linear velocity of the robot.

Figure 13: x-axis identification (a), y-axis identification (b), and angle (c); plant signal in solid line and neural signal in dashed line.
Figure 14: Tracking errors: x-axis (a), y-axis (b), and angle (c).

In Figure 15 we show the applied control signal to the left and right wheels. Figure 16 presents the current identification of the left and right wheels.

Figure 15: Applied control signal to the left and right wheels, respectively.
Figure 16: Current identification of the left and right wheels, respectively (plant signal in solid line and neural signal in dashed line).

Finally, Figure 17 shows the angular velocity identification of the left and right wheels.

Figure 17: Angular velocity identification of the left and right wheels, respectively (plant signal in solid line and neural signal in dashed line).

It is important to note that the differences between simulation and experimental results are due to factors encountered in real-time implementations, such as parametric uncertainties, external disturbances, unmodeled dynamics, actuator limits, delays, and differences in processing times, among other circumstances that make real-time implementation a truly difficult task, especially when integrating techniques such as visual servoing, automatic control, and neural identification.

11. Conclusions

In this work, an inverse optimal neural controller for model identification and control of mobile robots with nonholonomic constraints and visual feedback has been presented. The reference velocities for the neural controller were computed from a desired target acquired from the visual sensor and tracked by a visual algorithm. From the simulations and real-time experiments we can observe that the proposed approach effectively drives the nonholonomic mobile robot from its current pose to the desired pose. In addition, the neural identifier is able to cope with unknown external disturbances and parameter uncertainties.

In the future we want to test the proposed approach with different computer vision algorithms. We would like to extend the proposed approach to omnidirectional vision systems due to their large field of view. We also want to extend the proposed approach to different robotic platforms, such as UAVs.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank CONACYT, CUCEI, and the University of Guadalajara. This work has been partially supported by the CONACYT Projects CB-156567, CB-106838, and INFR-229696.

References

  1. D. E. Kirk, Optimal Control Theory: An Introduction, Dover Publications, 2004.
  2. M. Krstic, P. V. Kokotovic, and I. Kanellakopoulos, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, NY, USA, 1st edition, 1995.
  3. A. Salome, A. Y. Alanis, and E. N. Sanchez, “Discrete-time sliding mode controllers for nonholonomic mobile robots trajectory tracking problem,” in Proceedings of the 8th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE '11), pp. 1–6, October 2011.
  4. K. D. Do, Z. P. Jiang, and J. Pan, “Simultaneous tracking and stabilization of mobile robots: an adaptive approach,” IEEE Transactions on Automatic Control, vol. 49, no. 7, pp. 1147–1151, 2004.
  5. R. Fierro and F. L. Lewis, “Control of a nonholonomic mobile robot using neural networks,” IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 589–600, 1998.
  6. Z.-P. Jiang and H. Nijmeijer, “A recursive technique for tracking control of nonholonomic systems in chained form,” IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 265–279, 1999.
  7. K. K. Kumbla and M. Jamshidi, “Neural network based identification of robot dynamics used for neuro-fuzzy controller,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '97), vol. 2, pp. 1118–1123, April 1997.
  8. V. Raghavan and M. Jamshidi, “Sensor fusion based autonomous mobile robot navigation,” in Proceedings of the IEEE International Conference on System of Systems Engineering (SOSE '07), April 2007.
  9. J.-M. Yang and J.-H. Kim, “Sliding mode control for trajectory tracking of nonholonomic wheeled mobile robots,” IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 578–587, 1999.
  10. J. Holland, Designing Autonomous Mobile Robots: Inside the Mind of an Intelligent Machine, Newnes, Melbourne, Australia, 2003.
  11. T. Das and I. N. Kar, “Design and implementation of an adaptive fuzzy logic-based controller for wheeled mobile robots,” IEEE Transactions on Control Systems Technology, vol. 14, no. 3, pp. 501–510, 2006.
  12. B. S. Park, S. J. Yoo, J. B. Park, and Y. H. Choi, “A simple adaptive control approach for trajectory tracking of electrically driven nonholonomic mobile robots,” IEEE Transactions on Control Systems Technology, vol. 18, no. 5, pp. 1199–1206, 2010.
  13. S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
  14. B. Espiau, F. Chaumette, and P. Rives, “A new approach to visual servoing in robotics,” IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 313–326, 1992.
  15. F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
  16. F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches,” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007.
  17. E. Malis, F. Chaumette, and S. Boudet, “2 1/2D visual servoing,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238–250, 1999.
  18. R. Paul, Robot Manipulators: Mathematics, Programming and Control, MIT Press, Cambridge, Mass, USA, 1982.
  19. G. A. Rovithakis and M. A. Christodoulou, Adaptive Control with Recurrent High-Order Neural Networks, Springer, Berlin, Germany, 2000.
  20. R. Grover and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, John Wiley & Sons, New York, NY, USA, 1992.
  21. S. Haykin, Kalman Filtering and Neural Networks, John Wiley & Sons, New York, NY, USA, 2001.
  22. Y. Song and J. W. Grizzle, “Extended Kalman filter as a local asymptotic observer for nonlinear discrete-time systems,” in Proceedings of the American Control Conference, pp. 3365–3369, June 1992.
  23. A. Y. Alanis, M. Lopez-Franco, N. Arana-Daniel, and C. Lopez-Franco, “Discrete-time neural control for electrically driven nonholonomic mobile robots,” International Journal of Adaptive Control and Signal Processing, vol. 26, no. 7, pp. 630–644, 2012.
  24. L. A. Feldkamp, D. V. Prokhorov, and T. M. Feldkamp, “Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks,” Neural Networks, vol. 16, no. 5-6, pp. 683–689, 2003.