Journal of Control Science and Engineering

Research Article | Open Access

Volume 2015 | Article ID 739894 | 10 pages | https://doi.org/10.1155/2015/739894

Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition

Academic Editor: James Lam
Received: 07 Apr 2015
Revised: 07 Aug 2015
Accepted: 13 Aug 2015
Published: 07 Sep 2015

Abstract

For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's position extrinsic parameters are unknown. Firstly, the particular properties of the homography induced by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, together with a single feature point of the desired view, are used to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient but also highly precise. Meanwhile, the designed control law achieves position and orientation regulation of the mobile robot despite the lack of depth information and of the camera's position extrinsic parameters.

1. Introduction

Visual sensors have the advantage of sensing rich information at low cost. Meanwhile, image processing and analysis techniques have improved considerably with the popularization of visual sensors. Therefore, using visual sensors on robots as a visual servo system, in order to significantly improve their environmental sensing ability and level of intelligence, has become one of the active research topics in the robotics community.

Because robot arms are unconstrained systems, their visual servo regulation problems are relatively simple and have been well studied in previous work. References [1–3] comprehensively reviewed the visual servo regulation algorithms of robot arms. The key point in robot arm visual servo regulation is the selection of an appropriate visual characteristic set and the determination of the interaction matrix (also known as the image Jacobian matrix) that characterizes the nonlinear mapping between the characteristic set and the camera motion. The major difficulty is how to deal with the image's depth-of-field information missing from the interaction matrix. To solve this problem, Malis et al. [4, 5] proposed a 2.5D hybrid visual servo method, in which the selected visual characteristic set includes both 3D position information and 2D image information. The homography matrix is also used to process the unknown depth information in the interaction matrix. Benhimane and Malis [6] proposed a homography-based visual servo method, which directly uses the homography matrix and the desired characteristic points to establish the visual characteristic set, in order to derive an interaction matrix that does not include time-varying depth information. Piepmeier et al. [7] proposed a typical quasi-Newton-based online estimation of the interaction matrix and successfully applied it to uncalibrated visual servoing.

Unlike robot arms, the mobile robot is a typical system with nonholonomic constraints. The existence of nonholonomic constraints makes the visual servo regulation of mobile robots more difficult than that of robot arms. The visual servoing of mobile robots includes visual tracking and visual regulation. References [8–10] conducted in-depth research on visual tracking. However, in the system stability analysis, the speed and path of the robot need to be constrained. Therefore, these methods can hardly be applied directly to visual regulation.

Solving the mobile robot visual servo regulation problem requires combining visual servo regulation techniques with regulation controller design techniques for nonholonomically constrained systems. Fang et al. [11] studied the mobile robot visual regulation problem for the situation where the coordinate frame of the camera completely overlaps that of the robot. They derived the analytical expression of the homography matrix in this specific situation and decomposed it to obtain the angular difference and the imaging depth ratio between the current pose and the desired pose. They then designed an adaptive regulation control law using the Lyapunov direct method. Based on [11], Zhang et al. [12] further studied the mobile robot visual regulation problem for the situation where only a pure translation extrinsic parameter exists between the camera and robot frames. First, the angular difference signal between the current pose and the desired pose was extracted using the Faugeras homography decomposition method [13] or other methods [14]; then, it was combined with a 2D image error signal to obtain the system open-loop error function; finally, an adaptive control law that compensates for the unknown translational displacement and the desired imaging depth was designed using Lyapunov's direct method.

It is obvious that the studies in [11, 12] are both based on the condition that the optical axis of the camera and the moving direction of the mobile robot coincide. They are not applicable when there are pose differences between the camera and robot frames. Moreover, [13] and other general homography decomposition methods [15, 16] are computationally complex and yield nonunique decomposition results, so extra a priori knowledge is needed to eliminate the wrong solutions. Based on the analytical expression of the homography matrix proposed in [11], [14] proposed a fast decomposition method for this type of homography matrix. However, it is only applicable to the cases where the camera and robot frames completely overlap or differ only by a translation. Inspired by [6, 11, 12, 14], this research extends their work and studies the mobile robot visual servo regulation problem when the camera and robot frames differ by both a position and an orientation transformation. A fast homography decomposition method applicable to this broader scope is also proposed. To be specific, first, the fast homography decomposition method is used to determine the scale factor of the homography matrix and the angular difference between the current pose and the desired pose; then, combined with the characteristic points of the desired view, a group of error functions is constructed, from which the open-loop error function of the system is obtained; finally, using Lyapunov's direct method, an adaptive regulation control law capable of simultaneously compensating for the translation extrinsic parameter and the desired imaging depth is designed and experimentally verified.

2. Fast Homography Decomposition

2.1. Unique Properties of the Euclidean Homography Matrix

Figure 1 shows the projective model of the homography matrix.

The current and desired camera coordinate frames are denoted by F and F*, respectively, and π and π* are the corresponding image planes of the two cameras. The position-orientation relationship between the two frames can be described as P = R P* + t, where R and t are the relative orientation and translation parameters between F and F*, respectively. Let the distances between the origins of F and F* and the target plane be d and d*, respectively, and let the normal direction of the 3D scene plane be written as n and n* in F and F*, respectively. Let the coordinates of a 3D point on the scene plane be P and P* in F and F*, respectively; then m and m*, the normalized projective coordinates of the point on the image planes π, π*, are m = K⁻¹p and m* = K⁻¹p*, where K is the camera intrinsic matrix and p, p* collect the horizontal and vertical pixel coordinates of the point in the current and desired cameras, respectively. Then, the Euclidean homography H satisfying m ∝ H m* can be expressed as H = R + (t/d*) n*ᵀ. According to (4), the Euclidean homography matrix is a rank-1 modification of a rotation matrix. Its determinant can be written as det(H) = d/d*. Property 1 can be derived from (5).
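These relations can be checked numerically. The following sketch (Python/NumPy; the pose and plane values are chosen arbitrarily for illustration, and the symbols R, t, n*, d* follow the standard homography notation rather than the paper's typesetting) builds H = R + (t/d*)n*ᵀ and verifies both the point-transfer property and the determinant relation of Property 1:

```python
import numpy as np

def rot_y(theta):
    """Rotation about the y-axis (the robot's rotation axis in Figure 2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Example pose and plane (assumed values, not from the paper)
R = rot_y(0.3)                      # relative rotation between frames
t = np.array([0.2, 0.0, 0.1])       # relative translation
n_star = np.array([0.0, 0.0, 1.0])  # plane normal in the desired frame
d_star = 2.0                        # desired camera's distance to the plane

# Euclidean homography: a rank-1 update of the rotation matrix
H = R + np.outer(t, n_star) / d_star

# A 3D point on the plane n*^T P* = d*, expressed in both frames
P_star = np.array([0.4, -0.3, d_star])
P = R @ P_star + t

# H transfers normalized coordinates: H m* is parallel to m
m_star, m = P_star / P_star[2], P / P[2]
transfer = H @ m_star
assert np.allclose(np.cross(transfer / transfer[2], m), 0)

# Property 1: det(H) = d / d*, the ratio of plane distances (> 0)
d = d_star + n_star @ R.T @ t       # current camera's distance to the plane
print(np.linalg.det(H), d / d_star)
```

The determinant and the distance ratio printed at the end agree, and both are positive, as Property 1 requires.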

Property 1. The determinant of the Euclidean homography matrix equals the ratio between the distances from the optical centers of the current and desired cameras to the scene plane. This ratio must be greater than zero.

Let R be the rotation matrix obtained by rotating through an angle θ around the unit vector u; the general (Rodrigues) formula of this rotation transformation is R = cos θ I + (1 − cos θ) u uᵀ + sin θ [u]ₓ, where [u]ₓ is the skew-symmetric matrix of u. Based on (6), the following equations hold: R u = u and Rᵀ u = u.
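The rotation formula referred to above is the standard Rodrigues formula. A minimal sketch, with an example axis and angle of our choosing, verifying that the axis is invariant under the rotation:

```python
import numpy as np

def rodrigues(u, theta):
    """Rotation matrix for angle theta about unit axis u (Rodrigues):
    R = cos(theta) I + (1 - cos(theta)) u u^T + sin(theta) [u]_x
    """
    u = np.asarray(u, dtype=float)
    ux = np.array([[0, -u[2], u[1]],
                   [u[2], 0, -u[0]],
                   [-u[1], u[0], 0]])  # skew-symmetric cross-product matrix
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(u, u)
            + np.sin(theta) * ux)

u = np.array([0.0, 1.0, 0.0])
R = rodrigues(u, 0.5)
# The axis is invariant: R u = u and R^T u = u
assert np.allclose(R @ u, u) and np.allclose(R.T @ u, u)
# R is a proper rotation: R^T R = I, det R = 1
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```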

Figure 2 shows the coordinate frames of the mobile robot and the onboard camera.

Let the position-orientation relationship between the robot frame and the camera frame be given by the orientation and translation extrinsic parameters. If, after a movement, the robot arrives at a new position-orientation from its original position-orientation, then the position-orientation relationship of the camera before and after the movement follows, where (9) is derived from (7) and the Nanson formula.

With the calibrated orientation extrinsic parameter of the camera, (10) can be used to solve for the a priori solution of the onboard camera rotation axis. Obviously, when the optical axis is perpendicular to the rotation axis of the robot, this expression simplifies. According to (10) and (11), (12) holds, where the relations established above are used. Property 2 can be derived from (9) and (12).

Property 2. The rotation angle of the rotation matrix decomposed from the onboard homography has the same magnitude as, but the opposite direction of, the rotation angle of the robot. In addition, the rotation axis of the decomposed rotation matrix and the decomposed translational displacement are mutually orthogonal.

According to (4), (8), and (12), the onboard homography satisfies (13). Property 3 can be further derived from (13).

Property 3. The rotation axis of the rotation matrix decomposed from the onboard homography equals the unit eigenvector corresponding to the real eigenvalue at unity.

2.2. Fast Decomposition
2.2.1. Calculation of Rotation Axis

After calibration of the onboard camera extrinsic parameters, (11) can first be used to solve for the a priori rotation axis. Then, with the known normalized projective coordinates of the four groups of coplanar matching points, the direct linear transformation (DLT) can be used to solve for the scaled homography satisfying the stated constraints. After that, the real eigenvalue of the estimated matrix and its corresponding unit eigenvector can be obtained. The constant axis of the rotation matrix then follows, where the sign function is used to fix the sign of the axis. Once the scale factor is obtained, the Euclidean homography matrix is recovered by dividing out this scale.
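Assuming the planar-motion case of Properties 2 and 3 (rotation axis orthogonal to the translation), the axis and scale can be recovered from a scale-ambiguous DLT estimate G = λH by taking the generically unique real eigenvalue of Gᵀ and its unit eigenvector; the transpose appears here because, with t ⊥ u and Ru = u, u is a fixed vector of Hᵀ. The sketch below uses invented ground-truth values purely to illustrate this idea, not the paper's exact routine:

```python
import numpy as np

def rot_axis_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Ground-truth planar motion: rotation about u = e_y, translation t ⊥ u
u_true = np.array([0.0, 1.0, 0.0])
R = rot_axis_y(0.4)
t = np.array([0.10, 0.0, 0.05])
n_star = np.array([0.2, 0.1, 1.0]); n_star /= np.linalg.norm(n_star)
d_star = 2.0
H_true = R + np.outer(t, n_star) / d_star
G = 2.5 * H_true                      # DLT output: homography up to scale

# Since t ⊥ u and R u = u, we have H^T u = u, so u is an eigenvector of
# G^T whose eigenvalue equals the unknown scale factor.
w, V = np.linalg.eig(G.T)
k = np.argmin(np.abs(w.imag))         # the (generically unique) real eigenvalue
scale = w[k].real
u = V[:, k].real
u /= np.linalg.norm(u)
H = G / scale                         # recovered Euclidean homography

assert np.allclose(H, H_true)
# Eigenvectors carry a sign ambiguity; compare up to sign
assert min(np.linalg.norm(u - u_true), np.linalg.norm(u + u_true)) < 1e-9
```

No singular value decomposition is needed at this stage, which is the source of the method's computational savings.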

At this point, both the homography matrix and the rotation axis of the rotation matrix are obtained. It is worth noting that the rotation axis is constant in most cases, so it only needs to be solved for once.

2.2.2. Calculation of Rotation Angle

As shown in Figure 2, when the optical axis of the camera is perpendicular to the rotation axis of the mobile robot and the angle between the vertical axis of the image plane and the rotation axis of the robot is zero, with the camera frame as the reference frame, the rotation axis can be written in canonical form. In this case, the homography matrix takes a special form and can be decomposed more easily. On this basis, this section provides a new constructive method to solve for the rotation angle.

First, construct a rotation matrix that aligns the known rotation axis with the canonical axis; then the alignment equation must hold. Left multiply and right multiply both sides of (4) by this matrix and its transpose, respectively, to obtain the transformed homography. Because the factors involved are orthogonal, the transformed rotation and translation retain their orthogonality; in particular, the corresponding component of the transformed translation vanishes. Writing the element in the ith row and jth column of the transformed homography elementwise, (18) can be expanded accordingly. Equating the elements on both sides of the equality and rearranging gives (21)–(23). According to (21) and (23), (24) holds, with the auxiliary quantities defined accordingly. Two groups of possible solutions for the rotation angle can then be arrived at in (28).
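The first step above — constructing a rotation R_c that maps the known axis u onto the canonical axis (0, 1, 0)ᵀ, so that the transformed homography R_c H R_cᵀ takes the special, easy-to-decompose form — can be sketched as follows (the axis values are invented, and R_c is built with the Rodrigues formula):

```python
import numpy as np

def rodrigues(axis, theta):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def axis_aligner(u):
    """Rotation R_c with R_c @ u = (0, 1, 0)^T."""
    e2 = np.array([0.0, 1.0, 0.0])
    a = np.cross(u, e2)
    if np.linalg.norm(a) < 1e-12:      # u already (anti)parallel to e2
        return np.eye(3) if u[1] > 0 else rodrigues([1.0, 0, 0], np.pi)
    return rodrigues(a, np.arccos(np.clip(u @ e2, -1.0, 1.0)))

# Example: a slightly tilted rotation axis (assumed values)
u = np.array([0.1, 0.98, -0.05]); u /= np.linalg.norm(u)
R_c = axis_aligner(u)
assert np.allclose(R_c @ u, [0, 1, 0])

# For any rotation R about u, R_c R R_c^T is a rotation about (0, 1, 0):
R = rodrigues(u, 0.7)
Rp = R_c @ R @ R_c.T
assert np.allclose(Rp @ np.array([0.0, 1.0, 0.0]), [0, 1, 0])
```

Conjugating the homography by R_c in the same way reduces the general planar-motion case to the canonical one treated in this subsection.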

When the relevant coefficient in (24) is nonzero, substitute (28) into (22)-(23), and substitute the results back into (24) for verification, so as to determine the unique angle solution. When the coefficient is zero, the unique solution cannot be determined directly from (25); a priori knowledge, such as movement continuity or the scene plane normal, must then be used to eliminate the false solution.

When the corresponding elements of the homography matrix vanish, the associated component of the transformed quantity is zero. This corresponds to the situation where the scene plane is perpendicular to the floor.

2.2.3. Calculation of Translation and Scene Plane Normal

After determining the rotation angle of the rotation matrix, substitute it into (20); the transformed translation and normal can then be obtained. Right multiplying both sides of (29) gives a further relation, and equations (30) and (31) can then be used to calculate the remaining unknowns, so that the rotation, translation, and normal in (4) can be solved from (19). According to Figure 1, the projection of the desired image plane point on the scene plane normal must be smaller than zero, which introduces a sign constraint; that is, when the relevant coefficient in (24) is nonzero, a unique homography decomposition solution is obtained. When it is zero, two groups of possible decomposition solutions are obtained; in this case, a priori information about the plane normal must be used to eliminate the erroneous solution.

According to the homography decomposition process described in this section, this decomposition method avoids the singular value decomposition of the matrix. At the same time, it also avoids solving a cubic equation in one unknown when determining the scale factor. Therefore, it is a homography decomposition method that is both efficient and easy to implement.

3. Controller Design

As shown in Figure 3, the main purpose of monocular mobile robot visual servo regulation control is to construct an appropriate visual characteristic set from the image signals provided by the visual sensor, through characteristic extraction and position-orientation estimation; then to construct the error signal and thereby obtain the open-loop error function of the system; and finally to design an appropriate adaptive control law that compensates for the translation extrinsic parameter and the desired imaging depth of the target, while controlling the mobile robot to move from the current position-orientation to the desired one. The main steps are image characteristic extraction and visual position-orientation estimation, visual characteristic set construction and open-loop error function derivation, and adaptive control law design.

3.1. Problem Formulation

According to (3), the image error signal can be defined accordingly, where the vector involved is the coordinate of the spatial 3D point in an auxiliary coordinate frame. The origin of this frame coincides with that of the onboard camera frame; its axes are parallel with the corresponding axes of the robot coordinate frame.

The linear and angular velocities of this coordinate frame resulting from the movement of the robot can be solved readily using rigid-body kinematics. The linear velocity of the 3D point resulting from the frame motion described by (36) and (37) is given by (38). Substituting (35)–(37) into (38) and rearranging yields (39), where the terms in (40) are the components of the corresponding vector.

Define the orientation error signal in terms of the angle solved from (20). According to (9), the mapping relationship between its rate of change and the rotational angular velocity of the robot is given by (42).

With suitable definitions, (39) and (42) can finally be written in the compact form (44). Equation (44) is the open-loop error function of the system. It is easy to see that when all the constant extrinsic terms are zero, this becomes the error function described in [11], and when only the orientation-related extrinsic terms are zero, it becomes the error function described in [12]. Therefore, this open-loop error function is more general.

3.2. Adaptive Controller Design

When the error signals in the error function (44) are all zero, the current and desired coordinate frames of the mobile robot coincide, which means the regulation control of the robot is complete. Therefore, this section designs an adaptive regulation control law that compensates for the unknown depth and the camera translation extrinsic parameter, while controlling the mobile robot to achieve asymptotic regulation.

Reference [11] provides a classic controller design method for nonholonomically constrained systems. Inspired by this method, an auxiliary signal is first designed. With the corresponding definitions, the differential of the auxiliary signal can be calculated according to (44). The first control input can then be written in the form of (49), in which the gain is a positive real number and an estimate of the unknown parameter appears explicitly. The designed auxiliary signal in (50) contains an estimate of the remaining unknown parameter, and the second control input can be written in the form of (51).

Substituting the control inputs (49) and (51) into (44) and (48), we obtain the closed-loop error function (52), in which the lumped terms are defined accordingly.

In order to obtain dynamic estimates of the unknown parameters, define the Lyapunov function as in (53). Taking its derivative, substituting in (52), and rearranging gives (54); the dynamic parameter estimates are therefore chosen as (55). Substituting (55) into (54) yields (56).
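The parameter-update design in this subsection follows a standard Lyapunov pattern: augment the squared error with the squared parameter-estimation error, then pick the update law that cancels the indefinite term in the derivative. A minimal scalar analogue (not the paper's actual control law; the plant, gains, and unknown parameter are invented for illustration) shows the pattern:

```python
import numpy as np

# Scalar error dynamics with one unknown constant parameter theta:
#   e_dot = -k*e + phi*(theta - theta_hat)
# Lyapunov function V = e^2/2 + (theta - theta_hat)^2/(2*gamma);
# choosing theta_hat_dot = gamma*phi*e gives V_dot = -k*e^2 <= 0,
# and Barbalat's lemma then yields e -> 0.
k, gamma, phi = 2.0, 5.0, 1.0
theta = 3.0                 # unknown true parameter
e, theta_hat = 1.0, 0.0     # initial error and initial parameter guess

dt = 1e-3
for _ in range(int(10.0 / dt)):          # simulate 10 s with Euler steps
    e_dot = -k * e + phi * (theta - theta_hat)
    theta_hat += dt * gamma * phi * e    # adaptive update law
    e += dt * e_dot

print(e, theta_hat)   # e is driven toward 0; theta_hat approaches theta
```

The same cancellation argument, carried out with the vector error of (44) and the two unknown parameters, produces the update laws of this subsection.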

3.3. Stability Analysis

According to (53) and (56), the error signals and the parameter estimates are bounded. Also, according to (47), (49), (50), and (51), the control inputs are bounded. Finally, according to (44), (50), and (55), the error derivatives are bounded.

Consider an appropriate function and take its derivative; then, according to Barbalat's lemma, the corresponding signal converges to zero. Consequently, according to (52), (49), (51), and (55), the related signals also converge. According to (52), combined with (59) and (60), the next limit is obtained. Because the relevant signals are uniformly continuous and convergent, according to the extended Barbalat lemma, (64) holds. Taking the derivative and rearranging gives (66), where the auxiliary terms are defined accordingly. Combining (59), (60), and (64), it can be derived from (66) that (67) holds. Because the signal in question is uniformly continuous and bounded, according to (67) and the extended Barbalat lemma, (69) holds. According to (50), (59), and (69), (70) follows. Finally, according to (45), (69), and (70), (71) follows. From (69)–(71), it can be concluded that, under the designed adaptive regulation control law, the robot asymptotically converges to the desired position-orientation.

4. Experiments and Analysis

Let the position-orientation relationship between the current and the desired coordinate systems of the robot, the extrinsic parameters between the mobile robot and the camera, and the scene plane in the desired camera frame be set as follows.

Then the images of the square on the scene plane in the current and desired views are shown in Figure 4.

4.1. Fast Decomposition Experiment of the Homography Matrix

Apply different levels (0–2 pixels) of Gaussian noise to the feature points (the vertices of the square) in the two images. For each noise level, randomly generate 1000 groups of noise and apply them to the image points. Use DLT to estimate the homography matrix, and decompose the matrix with the traditional SVD-based decomposition method [13] and with the proposed fast decomposition algorithm, respectively. The decomposition results and the errors between those results and the true values are then computed.

Taking the average value of the 1000 groups of errors, we can have the error maps in Figure 5.

According to Figure 5(a), the rotation angles calculated by the two methods have almost identical accuracy, with individual cases where the proposed method even outperforms the traditional one. On the other hand, Figures 5(b) and 5(c) show that the accuracies of the camera translation and the scene plane normal obtained by the proposed method are clearly higher than those of the traditional algorithm. Moreover, the proposed algorithm avoids the matrix SVD computation entirely: it requires fewer operations and is easy to program, while achieving higher decomposition accuracy.
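In outline, the Monte-Carlo procedure of this experiment can be reproduced as follows: perturb the matched points with Gaussian noise, re-estimate the homography by DLT, and accumulate the estimation error. The sketch below uses an invented ground-truth homography, a single noise level, and middle-singular-value normalization (a common convention); the decomposition step itself is omitted:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H with dst_i ~ H @ src_i from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)       # null-space vector of the design matrix

def normalize(H):
    """Scale by the middle singular value and fix the overall sign."""
    H = H / np.linalg.svd(H, compute_uv=False)[1]
    return H * np.sign(H[2, 2])

def project(H, pts):
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:]

# Assumed ground-truth homography and a unit square of feature points
H_true = np.array([[0.95, 0.02, 0.10],
                   [-0.02, 1.00, 0.05],
                   [0.01, 0.00, 1.00]])
square = np.array([[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]])

rng = np.random.default_rng(0)
errs = []
for _ in range(1000):                 # 1000 random noise draws at one level
    noisy = project(H_true, square) + rng.normal(0.0, 1e-3, (4, 2))
    errs.append(np.linalg.norm(normalize(dlt_homography(square, noisy))
                               - normalize(H_true)))
print(np.mean(errs))                  # average estimation error over trials
```

Averaging such errors over the trials, for each noise level and for each decomposition method, produces error curves of the kind plotted in Figure 5.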

4.2. Experiment of Adaptive Hybrid Visual Servo Regulation for Mobile Robot

In order to verify the performance of the proposed algorithm, the mobile robot system was simulated in a MATLAB environment. From the simulation results, it can be seen that the proposed algorithm is capable of making the robot gradually converge to the desired location and has good control performance.

The gains of the controller and of the adaptive parameter update are set as follows.

With initial guesses set for the unknown parameters, the evolution of the adaptive parameter estimates is illustrated in Figure 6.

It can be seen from Figure 6 that, given arbitrary initial guesses of the unknown parameters, the estimates gradually stabilize towards the true values.

The control inputs are illustrated in Figure 7.

The curves of system error changes are shown in Figure 8.

It can be observed that the control law designed in this study controls the robot to gradually stabilize at the desired position-orientation, despite the unknown translation extrinsic parameter of the camera.

5. Conclusion

This paper studied an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method, for the case of missing target depth information and an unknown translation extrinsic parameter of the onboard camera. By constructing an auxiliary rotation matrix, a general homography decomposition problem was transformed into a special type of homography decomposition problem, which effectively reduces the decomposition complexity. Compared with traditional homography decomposition methods, this decomposition algorithm has higher accuracy and robustness. Meanwhile, the designed adaptive visual controller provides online compensation for the unknown imaging depth and translation extrinsic parameter, enabling the mobile robot to move gradually from the original position-orientation to the desired position-orientation with good performance.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This project was funded by the National Natural Science Foundation of China (61375084), the Science and Technology Project of the Fujian Education Department (JK2014049), and the Natural Science Foundation of Fujian Province (2015J01268).

References

  1. S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
  2. F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
  3. F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches,” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007.
  4. E. Malis, F. Chaumette, and S. Boudet, “2 1/2 D visual servoing,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238–250, 1999.
  5. E. Malis and F. Chaumette, “2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement,” International Journal of Computer Vision, vol. 37, no. 1, pp. 79–97, 2000.
  6. S. Benhimane and E. Malis, “Homography-based 2D visual tracking and servoing,” The International Journal of Robotics Research, vol. 26, no. 7, pp. 661–676, 2007.
  7. J. A. Piepmeier, G. V. McMurray, and H. Lipkin, “Uncalibrated dynamic visual servoing,” IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 143–147, 2004.
  8. J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, “Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension,” IEEE Transactions on Control Systems Technology, vol. 13, no. 5, pp. 814–825, 2005.
  9. J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntyre, “Homography-based visual servo tracking control of a wheeled mobile robot,” IEEE Transactions on Robotics, vol. 22, no. 2, pp. 406–415, 2006.
  10. G. Hu, N. Gans, N. Fitz-Coy, and W. Dixon, “Adaptive homography-based visual servo tracking control via a quaternion formulation,” IEEE Transactions on Control Systems Technology, vol. 18, no. 1, pp. 128–135, 2010.
  11. Y. Fang, W. E. Dixon, D. M. Dawson, and P. Chawda, “Homography-based visual servo regulation of mobile robots,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 5, pp. 1041–1050, 2005.
  12. X.-B. Zhang, Y.-C. Fang, and X. Liu, “Adaptive visual servo regulation of mobile robots,” Control Theory and Applications, vol. 27, no. 9, pp. 1123–1130, 2010.
  13. O. Faugeras and F. Lustman, “Motion and structure from motion in a piecewise planar environment,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 2, no. 3, pp. 485–508, 1988.
  14. X. Zhang, Y. Fang, B. Ma, X. Liu, and M. Zhang, “A fast homography decomposition technique for visual servo of mobile robots,” in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 404–409, Beihang University Press, Kunming, China, July 2008.
  15. Z. Zhang and A. R. Hanson, “3D reconstruction based on homography mapping,” in Proceedings of the ARPA Image Understanding Workshop, pp. 1007–1012, Palm Springs, Calif, USA, 1996.
  16. E. Malis and M. Vargas, “Deeper understanding of the homography decomposition for vision-based control,” Research Report, INRIA, Sophia Antipolis, France, 2007.

Copyright © 2015 Chunfu Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
