Journal of Sensors
Volume 2016 (2016), Article ID 8594096, 16 pages
http://dx.doi.org/10.1155/2016/8594096
Research Article

Vision-Based Autonomous Underwater Vehicle Navigation in Poor Visibility Conditions Using a Model-Free Robust Control

1CONACYT-Instituto Politécnico Nacional-CITEDI, 22435 Tijuana, BC, Mexico
2Robotics and Advanced Manufacturing Group, CINVESTAV Campus Saltillo, 25900 Ramos Arizpe, COAH, Mexico

Received 25 March 2016; Revised 5 June 2016; Accepted 6 June 2016

Academic Editor: Pablo Gil

Copyright © 2016 Ricardo Pérez-Alcocer et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as sensor feedback for control is commonplace. However, vision-based robotic tasks for underwater applications are still not widely considered, as the images captured in this type of environment tend to be blurred and/or color depleted. To tackle this problem, we have adapted the lαβ color space to identify features of interest in underwater images even in extreme visibility conditions. To guarantee the stability of the vehicle at all times, a model-free robust control is used. We have validated the performance of our visual navigation system in real environments, showing the feasibility of our approach.

1. Introduction

The development of research in autonomous underwater vehicles (AUVs) began approximately four decades ago. Since then, a considerable amount of research has been presented. In particular, the localization and navigation problems represent a challenge in AUV development due to the unstructured and hazardous conditions of the environment and the complexity of determining the global position of the vehicle. An extensive review of the research related to this topic is presented in [1–4].

Sensor systems play a relevant role in the development of AUV navigation systems, as they provide information about the system status and/or environmental conditions. Several sensors provide relevant and accurate information [5–7]. However, global or local pose estimation of underwater vehicles is still an open problem, especially when a single sensor is used. Typically, underwater vehicles use multisensor systems to estimate their position and to determine the location of objects in their workspace. Inertial measurement units (IMUs), pressure sensors, compasses, and global positioning systems (GPS) are commonly used [8]. Note that even though GPS devices are widely used for localization, they perform poorly in underwater environments. Therefore, data fusion is needed to increase the accuracy of the pose estimation (for a review of sensor fusion techniques, see [9]).

Vision-based systems are a good choice because they provide high resolution images with high acquisition speed at low cost [10]. However, in aquatic environments color attenuation produces poor visibility as distance increases. In contrast, at short distances the visibility may be good enough and the measurement accuracy higher than that of other sensors. Therefore, tasks in which visual information is used are limited to object recognition and manipulation, vehicle docking [11], reconstruction of the ocean floor structure [12], and underwater inspection and maintenance [13]. In [14], the authors discuss how visual systems can be used in underwater vehicles, and they present a vision system which obtains depth estimates based on camera data. In [10], a visual system called Fugu-f was introduced; it was designed to provide visual information in submarine tasks such as navigation, surveying, and mapping, and it is robust in both its mechanical structure and software components. Localization has also been addressed with vision systems. In [15], a vision-based localization system for an AUV with limited sensing and computation capabilities was presented. The vehicle pose is estimated using an Extended Kalman Filter (EKF) and a visual odometer. The work in [16] presents a vision-based underwater localization technique in a structured underwater environment. Artificial landmarks are placed in the environment and a visual system is used to identify the known objects. Additionally, a Monte Carlo localization algorithm estimates the vehicle position.

Several works on visual feedback control of underwater vehicles have been developed [17–28]. In [17], the authors present a Boosting algorithm used to identify features based on color. This method takes images in the RGB color space as input; a set of classifiers is trained offline in order to segment the target object from the background, and the visual error is defined as an input signal for a PID controller. In a similar way, a color-based classification algorithm is presented in [18]. This classifier was implemented using the JBoost software package in order to identify buoys of different colors. Both methods require an offline training process, which is a disadvantage when the environment changes. In [19], an adaptive neural network image-based visual servo controller is proposed; this control scheme allows placing the underwater vehicle in a desired position with respect to a fixed target. In [20], a self-triggered position-based visual servo scheme for the motion control of an underwater vehicle was presented. The visual controller is used to keep the target in the center of the image under the premise that the target always remains inside the camera field of view. In [21], the authors present an evolved stereo-SLAM procedure implemented on two underwater vehicles. They computed the pose of the vehicle using a stereo visual system, and the navigation was performed following a dynamic graph. A visual guidance and control methodology for a docking task is presented in [22]. Only one high-power LED light was used for AUV visual guidance, without distance estimation. The visual information and a PID controller were employed to regulate the AUV attitude. In [23], a robust visual controller for an underwater vehicle is presented. The authors implemented genetic algorithms in a stereo visual system for real-time pose estimation, which was tested in environments under air bubble disturbance. In [24], the development and testing of a visual system for buoy detection is presented. This system used the HSV color space and the Hough transform in the detection process. These algorithms require adjusting internal parameters depending on the working environment, which is a disadvantage. In general, the visual systems used in these papers were configured for a particular environment, and when the environmental characteristics change, it is necessary to readjust some parameters. In addition, robust control schemes were not proposed for attitude regulation.

In this work, a novel navigation system for an autonomous underwater vehicle is presented. The navigation system combines a visual controller with an inertial controller in order to define the AUV behavior in a semistructured environment. The AUV dynamic model is described, and a robust control scheme is experimentally validated for attitude and depth regulation tasks. An important feature of the controller is that it can be easily implemented on the experimental platform. The main characteristics affecting images taken underwater are described, and an adapted version of the perceptually uniform lαβ color space is used to find the artificial marks in a poor-visibility environment. The exact positions of the landmarks in the vehicle workspace are not known, but approximate knowledge of their localization is available.

The main contributions of this work include (i) the development of a novel visual system for the detection of artificial landmarks underwater in poor visibility conditions, which does not require the adjustment of internal parameters when environmental conditions change, and (ii) a new, simple visual navigation approach which does not require keeping the objects of interest in the field of view of the camera at all times, considering that only their approximate localization is given. In addition, a robust controller guarantees the stability of the AUV.

The remainder of this paper is organized as follows. In Section 2 the visual system is introduced. The visual navigation system and details of the controller are presented in Section 3. Implementation details and experimental results are presented in Section 4. Finally, Section 5 concludes this work.

2. The Visual System

Underwater visibility is poor due to the optical properties of light propagation, namely, absorption and scattering, which are involved in the image formation process. Although a large body of research has focused on using mathematical models for image enhancement and restoration [25, 26], the main challenge remains the highly dynamic environment; that is, the limited number of parameters that are typically considered cannot represent all the actual variables involved in the process. Furthermore, for effective robot navigation, the enhanced images are needed in real time, which not all approaches can achieve. For that reason, we decided to explore the direct use of perceptually uniform color spaces, in particular the lαβ color space. In the following sections, we describe the integrated visual framework proposed for detecting artificial marks in aquatic environments, in which the lαβ color space was adapted for underwater imagery.

2.1. Color Discrimination for Underwater Images Using the lαβ Color Space

Three main problems are observed in underwater image formation [26]. The first is known as disturbing noise, which is due to suspended matter in the water, such as bubbles, small particles of sand, and small fish or plants that inhabit the aquatic ecosystem. These particles block light and generate noisy images with distorted colors. The second problem is related to the refraction of light. When a camera setup and objects are placed in two environments with different refractive indices, the objects in the picture are distorted differently in each environment, and therefore the position estimate is not the same in both environments. The third problem in underwater images is light attenuation. The light intensity decreases as the distance to the objects increases; this is due to the attenuation of light as a function of its wavelength. The effect is that the colors of observed underwater objects look different from those perceived in air. Figure 1 shows two images of the same set of different colored objects taken underwater and in air. In these images, it is possible to see the characteristics of underwater images mentioned above.

Figure 1: Photographs with multicolored objects taken underwater and in air.

A color space is a mathematical model through which color perceptions are represented. The color space selection is an important decision in the development of an image processing algorithm, because it can dramatically affect the performance of the vision system. We selected the lαβ color space [27] because it has features that simplify the analysis of data coming from underwater images. In underwater images, the background color (sea color) is usually blue or green; these colors correspond to the limits of the α and β channels, respectively, and therefore identifying objects whose colors contrast with blue or green becomes much easier. A modification of the original transformation from RGB to the lαβ color space was made: the logarithm operation was removed from the transformation, reducing the processing time while keeping the color distribution. Thus, the mapping between RGB and the modified lαβ color space is expressed as a linear transformation, where l is the achromatic channel which determines the luminance value, α is the yellow-blue opponent channel, and β is the red-cyan channel with a significant influence of green. The data in these channels can represent a wide variety of colors; however, the information in aquatic images is contained in a very narrow interval. Figure 2 shows an underwater image and the frequency histogram for each channel of the lαβ color space. In this image, the data of the objects are concentrated in a small interval.

Figure 2: Frequency histograms for each channel of the lαβ color space.
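As a concrete illustration, the sketch below applies the modified (log-free) transformation described above. The RGB-to-LMS and LMS-to-lαβ matrices are the ones reported by Ruderman et al. [27]; using these exact coefficients, and the helper name itself, are assumptions on our part, since the numerical values are not reproduced in this text.

```python
import numpy as np

# RGB -> LMS matrix (coefficients assumed from Ruderman et al. [27]).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

# LMS -> l-alpha-beta decorrelating transform (also from [27]),
# applied here WITHOUT the logarithm step, as described in the text.
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1,  1,  1],
                    [1,  1, -2],
                    [1, -1,  0]])

def rgb_to_lab_linear(img_rgb):
    """Map an HxWx3 RGB image to the modified (log-free) l-alpha-beta space."""
    h, w, _ = img_rgb.shape
    rgb = img_rgb.reshape(-1, 3).astype(np.float64).T   # 3 x N column vectors
    lab = LMS2LAB @ (RGB2LMS @ rgb)                     # purely linear mapping
    return lab.T.reshape(h, w, 3)
```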

Therefore, in order to increase the robustness of the identification method, new limits for each channel are established. These values help to increase the contrast between the objects and the background in the image. The new limits are calculated using the frequency histogram of each channel: the limits are taken as the extreme histogram values whose frequency is higher than a threshold. The advantage of using the frequency histogram, rather than simply the minimum and maximum values, is that this approach eliminates outliers.

Finally, a data normalization procedure is performed using the new interval in each channel of the lαβ color space. After this, it is possible to obtain a clear segmentation of the objects whose colors lie at the end values of the channels. Figure 3 shows the result of applying the proposed algorithm to the l, α, and β channels. It can be observed that some objects are significantly highlighted against the greenish background; in particular, the red circle in the β channel presents a high contrast.

Figure 3: Result of converting the input image to the lαβ color space after adjusting the range of values.
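A minimal sketch of this histogram-based range adjustment is shown below; the number of bins and the frequency threshold are illustrative assumptions, since the paper does not report the values it uses.

```python
import numpy as np

def rescale_channel(channel, bins=256, freq_threshold=50):
    """Re-normalize one l-alpha-beta channel to [0, 1] using histogram-based
    limits: the new limits are the extreme bin edges whose frequency exceeds a
    threshold, which discards isolated outliers (bin count and threshold are
    illustrative values, not taken from the paper)."""
    hist, edges = np.histogram(channel, bins=bins)
    valid = np.nonzero(hist > freq_threshold)[0]
    if valid.size == 0:                       # fall back to plain min-max scaling
        lo, hi = channel.min(), channel.max()
    else:
        lo, hi = edges[valid[0]], edges[valid[-1] + 1]
    out = np.clip(channel, lo, hi)
    return (out - lo) / (hi - lo + 1e-12)
```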
2.2. Detection of Artificial Marks in Aquatic Environments

The localization problem for underwater vehicles requires identifying specific objects in the environment. Our navigation system relies on a robust detection of the artificial marks placed in the environment. Artificial red balls were selected as the known marks in the aquatic environment. Moreover, circular tags of a different color were attached to each sphere in order to determine which section of the sphere is being observed.

Detection of circles in images is an important and frequent problem in image processing and computer vision. A wide variety of applications, such as quality control, classification of manufactured products, and iris recognition, use circle detection algorithms. The most popular techniques for detecting circles are based on the Circle Hough Transform (CHT) [28]. However, this method is slow, demands a considerable amount of memory, and identifies many false positives, especially in the presence of noise. Furthermore, it has many parameters that must be selected beforehand by the user. This last characteristic limits its use in underwater environments, since ambient conditions are constantly changing. For this reason, a circle detection algorithm with a fixed set of internal parameters is desirable, one that requires no adjustment whether small or large circles must be identified or the ambient light changes. The circle detection algorithm presented by Akinlar and Topal [29] provides the desired properties. We have evaluated its performance on aquatic images with good results. Specifically, we applied the algorithm to the β-channel image resulting from the procedure described in the previous section. As mentioned, the β channel presents the highest contrast between red objects and the background color of underwater images. This enables the detection algorithm to find circular shapes in the field of view with more precision. This is an important finding; to the best of our knowledge, this is the first time that this color space model is used in underwater images for this purpose.
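The sketch below shows where the re-scaled β channel enters the detection pipeline, reusing the helper functions sketched above. The paper uses the parameter-free EDCircles detector [29]; since that detector is not part of core OpenCV, cv2.HoughCircles is used here purely as a stand-in, so the detector call and its parameters should be read as placeholders rather than as the authors' method.

```python
import cv2
import numpy as np

def find_marks(img_bgr):
    """Detect circular marks on the beta channel of the modified l-alpha-beta
    image. Stand-in detector: cv2.HoughCircles replaces the EDCircles algorithm
    [29] used in the paper, only to illustrate the data flow."""
    lab = rgb_to_lab_linear(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    beta = rescale_channel(lab[:, :, 2])              # beta: red vs. green/cyan
    beta8 = (beta * 255).astype(np.uint8)
    beta8 = cv2.medianBlur(beta8, 5)                  # suppress particle noise
    circles = cv2.HoughCircles(beta8, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=40, param1=100, param2=30,
                               minRadius=5, maxRadius=200)
    return [] if circles is None else circles[0]      # (x, y, r) per circle
```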

Figure 4 shows the obtained results. The images are organized as follows: the first column shows the original input image; the second column corresponds to the graphical representation of the β channel; and the third column displays the circles detected in the original image. The rows in the figure present the results obtained under different conditions. The first experiment analyzes a picture taken in a pool with clear water. Although the spheres are not close to the camera, they can be easily detected by our visual system. The second row is also a photograph taken in the pool, but in this case the visibility was poor; nevertheless, the method works appropriately and detects the circle. Finally, the last row shows the results obtained from a scene taken in a marine environment, in which visibility is poor. In this case, the presence of the red object in the image is almost imperceptible to the human eye; however, the detector identifies the circle successfully.

Figure 4: Example results of applying the circle detection algorithm using the lαβ color space in underwater images with different visibility conditions.

The navigation system proposed in this work is the integration of the visual system, described above, with a novel control scheme that defines the behavior of the vehicle based on the available visual information. The block diagram in Figure 5 shows the components of the navigation system and the interaction between them.

Figure 5: Block diagram of the proposed navigation system for autonomous underwater vehicle.

3. Visual Navigation System

In this section, the navigation system and its implementation on our robotic platform are presented. The autonomous underwater vehicle, called Mexibot (see Figure 6), is part of the Aqua robot family [30] and an evolution of the RHex platform [31]. The Aqua robots are amphibious, with the ability to work both on land and in water. The underwater vehicle has a pair of embedded computers; one computer is used for the visual system and for other tasks such as data logging, and the second computer is used for the low-level control. An important feature is that both computers are connected via Ethernet, so they can exchange data and instructions. The control loop of the robot runs under real-time constraints; for this reason, the QNX operating system is installed on the control computer. The vision computer, on the other hand, runs Ubuntu; high-level applications are developed on it using the Robot Operating System (ROS). In addition, the vehicle has an IMU, which provides the attitude and angular velocity of the vehicle, a pressure sensor used to estimate the depth of the robot, and a set of three cameras, two at the front of the robot and one at the back.

Figure 6: Our underwater vehicle Mexibot.
3.1. Model-Free Robust Control

The visual navigation system requires a control scheme to regulate the depth and attitude of the underwater vehicle. In this subsection, the underwater vehicle dynamics is analyzed and the controller used to achieve the navigation objective is presented. Knowledge of the dynamics of underwater vehicles and their interaction with the environment plays a vital role in the vehicle's performance. The underwater vehicle dynamics includes hydrodynamic parametric uncertainties and is highly nonlinear, coupled, and time varying. The AUV is a rigid body moving in 3D space. Consequently, the AUV dynamics can be represented with respect to the inertial reference frame or with respect to the body reference frame. Figure 7 presents the AUV reference frames and their movements.

Figure 7: AUV representation including the inertial and body reference frame.

In [32], Fossen describes the method to obtain the underwater vehicle dynamics using Kirchhoff's laws. Including fluid damping, gravity-buoyancy, and all external forces, a representation of the following standard form is obtained:

$M\dot{\nu} + C(\nu)\,\nu + D(\nu_r)\,\nu_r + g(\eta) = \tau + \tau_d$,  (2)

$\nu = J_\nu(\eta)\,\dot{\eta}$,  (3)

where η is the pose of the vehicle and ν is the twist of the vehicle, composed of the linear velocity v and the angular velocity ω, both expressed in the body reference frame. M is the constant, symmetric, and positive definite inertia matrix, which includes the rigid-body inertia and the added mass matrix. C(ν) is the skew-symmetric Coriolis matrix, and D(ν_r) is the positive definite dissipative matrix, which depends on the magnitude of the relative fluid velocity ν_r, defined from the vehicle twist and the fluid velocity expressed in the inertial reference frame. g(η) is the potential wrench vector, which includes gravitational and buoyancy effects. τ is the vector of external forces, expressed in the vehicle frame and produced by the vehicle thrusters, J_ν(η) is the operator that maps the generalized velocity $\dot{\eta}$ to the vehicle twist ν, and τ_d is the external disturbance wrench produced by fluid currents.

Consider the control law proposed in [33], whose gains are constant positive definite matrices together with a positive scalar, and which is driven by the pose error. From the pose error, an extended (tracking) error is defined; expressing this extended error as a velocity error with respect to an artificial reference velocity yields the vehicle's twist reference. This control scheme ensures stability for tracking tasks despite inaccuracies in the dynamic parameters of the vehicle and perturbations in the environment [33]. Therefore, this control law can be used to define the behavior of both the inertial and the visual servoing modes of the underwater vehicle.

It is also important to highlight that this control law can be implemented easily, because it requires only measurements of the vehicle state and rough estimates of a small number of model quantities.
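For illustration only, the following sketch shows a regulator built on an extended error of the kind described above. It is not the authors' control law (4), which is not reproduced in this text; the choice of reference velocity, the PD gains, and the smooth switching term are assumptions.

```python
import numpy as np

def extended_error_control(eta, eta_d, nu, lam, Kd, Ks, eps=0.05):
    """Illustrative set-point regulator built on an extended (tracking) error,
    in the spirit of the structure described above. This is NOT the exact
    control law (4) of [33]; gains and the switching term are assumptions."""
    d_eta = eta - eta_d                  # pose error with respect to the set point
    nu_ref = -lam * d_eta                # artificial reference twist for regulation
    s = nu - nu_ref                      # extended error: twist plus scaled pose error
    # PD-like action plus a smooth switching term for robustness to bounded disturbances
    tau = -Kd @ s - Ks @ np.tanh(s / eps)
    return tau
```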

3.1.1. Stability Analysis

Model (2)-(3) is also known as the quasi-Lagrangian formulation, since the velocity vector ν is a quasi-Lagrangian velocity. The Lagrangian formulation upon which the stability analysis relies is obtained by substituting (3) and its time derivative into (2) and premultiplying the resulting equation by the transpose of the velocity operator [34]; in this formulation, all the terms are bounded by nonnegative constants. Control law (4) then adopts an equivalent form in the Lagrangian space, from which a corresponding property of the closed loop follows.

Now consider that the left-hand side of the Lagrangian formulation (9) can be expressed in a regression-like form, where the regressor is constructed from known nonlinear functions of the generalized coordinates and their first and second time derivatives, and multiplies a vector of unknown parameters.

Then, for an arbitrary smooth (at least once differentiable) signal, there exists a modified regressor that satisfies the analogous relationship.

The difference between the estimated and the real parameters produces a parameter estimation error which, after the above equivalence, is properly bounded. The closed-loop dynamics is then found by substituting control law (11) into the open-loop Lagrangian expression (9).

Now consider a Lyapunov candidate function defined in terms of the extended error and a constant vector. The time derivative of this candidate function along the trajectories of the closed-loop system (18), after property (13) and proper simplifications, can be bounded: under the stated boundedness assumptions, the last term in (20) is bounded in terms of the extended error, and (20) as a whole is then bounded in terms of the largest eigenvalue of the corresponding gain matrix. The conditions to be satisfied reduce to conditions on the choice of the control law gains.

Under these conditions, the time derivative of the Lyapunov candidate function is negative definite and the extended error is asymptotically stable.

Finally, from definition (6), the convergence of the extended error implies the convergence of the pose error, which means that the vehicle reaches the set point. Therefore, the stability of the system is proved. A detailed explanation and analysis of the controller can be found in [33].

The implementation of the controller does not require knowledge of the dynamic model parameters; hence it is robust with respect to fluid disturbances and to uncertainty in the dynamic parameters. However, it is necessary to know the relationship between the control input and the actuators.

3.2. Thruster Force Distribution

The propulsion forces of Mexibot are generated by a set of six fins which move along a sinusoidal path whose parameters are the angular position of the fin, the amplitude of motion, the period of each cycle, the central angle of the oscillation, and the phase offset between the different fins of the robot.
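Since the original expression (26) is not reproduced in this text, one plausible form of the sinusoidal fin trajectory, consistent with the parameter list above (the symbols below are ours, not the paper's), is

$\varphi_i(t) = A \sin\left(\frac{2\pi}{T} t + \delta_i\right) + \varphi_{0,i}$,

where $\varphi_i$ is the angular position of the $i$th fin, $A$ is the amplitude of motion, $T$ is the period of each cycle, $\varphi_{0,i}$ is the central angle of the oscillation, and $\delta_i$ is the phase offset of that fin.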

Both Georgiades [35] and Plamondon [36] present models for the thrust generated by the symmetric oscillation of the fins used in the Aqua robot family. Plamondon presents a relationship between the thrust generated by the fins and the parameters describing the motion in (26). Thus, the magnitude of the force generated by each fin under the sinusoidal movement (26) is determined by an equation involving the dimensions of the fins, the density of the water, the amplitude, and the period of oscillation. The magnitude of the force generated by the robot fins can therefore be set at runtime as a function of the period and the amplitude of the fin oscillation. Figure 8 shows the force produced by the fins, where the fin angle defines the direction of the force vector expressed in the body reference frame. In addition, the kinematic characteristics of the vehicle impose a constraint on the generated force. Therefore, the vector of forces and moments generated by the actuators is defined accordingly.

Figure 8: Diagram of forces generated by the fin movements, where the angle establishes the direction of the force.

Consider the fin numbering shown in Figure 9; then the following relationships hold between the force components generated by each fin and the total wrench vector, where the relevant distances are the coordinates of the ith fin joint with respect to the vehicle's center of mass, as shown in Figure 9. Note that the symmetry of the vehicle establishes equalities among these distances.

Figure 9: Fins distribution in the underwater vehicle.

System (30a), (30b), (30c), (30d), (30e), and (30f) has five equations with twelve independent variables. Among all possible solutions, the one presented in this work arises after imposing a set of additional constraints, from which one solution of the system is obtained.

Now, the oscillation amplitude of the ith fin is computed from (27) using a fixed oscillation period, with the corresponding thrust given by the force distribution above. Finally, the central angle of oscillation is computed from the direction of the required force.

3.3. Desired Signals Computation

In this navigation system, the controller performs a set-point task. The desired values are computed based on the visual information. Due to the underactuated nature of the vehicle and sensor limitations, only the attitude and depth of the vehicle can be controlled. The desired depth value is a constant and the desired roll angle is held constant. Since constraint C6 has been considered, the depth is controlled indirectly by modifying the desired pitch angle. This desired orientation angle is calculated as proportional to the depth error, with a positive constant gain.
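A minimal sketch of this proportional depth-to-pitch mapping is shown below; the saturation limit is an assumption we add for safety, and the sign convention depends on the vehicle's frame definition, which the text above does not fix.

```python
import numpy as np

def desired_pitch(z, z_d, k_z, theta_max=np.deg2rad(30)):
    """Desired pitch angle proportional to the depth error, as described above.
    theta_max is an assumed saturation limit; the paper only states that the
    gain k_z is a positive constant. Sign depends on the frame convention."""
    return float(np.clip(k_z * (z_d - z), -theta_max, theta_max))
```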

The visual system defines the desired yaw angle. Images from the left camera are processed in a ROS node with the algorithms described in Section 2.1 in order to determine the presence of a sphere in the field of view. When visual information is not available, this angle remains constant at its initial value or at the last computed value. However, if a single circle with a radius bigger than a certain threshold is found, the new desired yaw angle is computed from the visual error, defined as the distance in pixels between the center of the image and the position along the horizontal axis of the detected circle. The new desired yaw angle is then obtained from the actual yaw angle, the visual error in the horizontal axis, the image dimensions, and the radius of the circle. This desired yaw angle is proportional to the visual error, but it also depends on the radius of the detected circle: when the object is close to the camera, the sphere radius is larger, and therefore the change in the desired yaw also increases. Given the resolution of the image provided by the vehicle's camera, the gain used to define the reference yaw angle in (37) was set to 300; this value was obtained experimentally, through a trial-and-error procedure. We note that the update of the desired yaw angle is necessary only when the visual error is larger than 7 pixels; for this reason, when the visual error is smaller than this threshold, the reference signal keeps its previous value.
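Since expression (37) is not reproduced in this text, the following sketch implements one functional form consistent with the behavior just described. Only the gain of 300 and the 7-pixel deadband are taken from the paper; the normalization by the image width (assumed to be 640 pixels here) and the overall shape of the update are assumptions.

```python
def desired_yaw(psi, e_u, radius, img_width=640, gain=300.0, deadband_px=7):
    """Update of the desired yaw angle from the horizontal visual error e_u:
    no update inside the deadband, and a correction that grows with both the
    visual error and the detected radius. Assumed form, not the paper's (37)."""
    if abs(e_u) <= deadband_px:
        return psi                                  # keep the previous reference
    # normalized error scaled by the apparent size of the detected mark
    return psi + (e_u / img_width) * (radius / gain)
```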

Finally, when a circle inside another circle is found, it means that the underwater vehicle is close to the mark, and a direction change is performed. The desired yaw angle is set to the actual yaw value plus an increment related to the location of the next sphere. This integration of the visual system and the controller results in an autonomous navigation system for the underwater vehicle which is able to track the marks placed in a semistructured environment.

4. Experimental Results

In order to evaluate the performance of the visual navigation system, two experimental results are presented in this section. Two red spheres were placed in a pool with turbid water. An example of the type of view in this environment is shown in Figure 10. This environment is semistructured because the floor is not natural and also because of the lack of currents; however, the system is subjected to disturbances produced by the movement of the swimmers who closely follow the robot. As mentioned before, the exact position of the spheres is unknown; only the approximate angle relating the positions of the marks is available. Figure 11 shows a diagram with the distribution of the artificial marks. The underwater vehicle starts by swimming towards one of the spheres. Although the circle detection algorithm includes functions for selecting the circle of interest when more than one is detected, in this first experiment no more than one visual mark is within the field of view of the camera at the same time.

Figure 10: Example of the type of view in the experimental environment.
Figure 11: Diagram to illustrate the approximate location of visual marks.

The attitude and depth control was implemented on a computer running the QNX real-time operating system, with a controller sample time of 1 ms. This controller accesses the inertial sensors in order to regulate the depth and orientation of the vehicle. The reference signal for the yaw angle was set to the initial orientation of the robot and is updated by the visual system when a sphere is detected. The visual system was implemented on a computer running Ubuntu and ROS, with an approximate sample time of 33 ms when a visual mark is present. The parameters used in the implementation are presented in Table 1 and were obtained from the manufacturer. A nominal value of the water density was used, assuming that the controller is able to handle the inaccuracy with respect to the real value.

Table 1: Parameters of our AUV used in the experimental test.
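The sketch below illustrates how the vision side of this architecture might be organized as a ROS node that consumes the left camera stream and publishes a yaw reference for the control computer. Topic names, message types, and the way the current yaw is obtained are hypothetical; the node reuses the detector and yaw-update helpers sketched earlier.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import Float64
from cv_bridge import CvBridge

class VisualNavNode(object):
    """Minimal vision-side node: detect the mark and publish a desired yaw.
    All topic names below are assumptions; the paper does not specify them."""
    def __init__(self):
        self.bridge = CvBridge()
        self.psi = 0.0                       # current yaw; would come from the IMU
        self.pub = rospy.Publisher('/mexibot/yaw_desired', Float64, queue_size=1)
        rospy.Subscriber('/mexibot/left_camera/image_raw', Image,
                         self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        circles = find_marks(frame)          # beta-channel detector sketched above
        if len(circles) == 1:
            x, _, r = circles[0]
            e_u = frame.shape[1] / 2.0 - x   # horizontal visual error in pixels
            self.pub.publish(Float64(desired_yaw(self.psi, e_u, r)))

if __name__ == '__main__':
    rospy.init_node('visual_navigation')
    VisualNavNode()
    rospy.spin()
```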

The control gains in (4) and (36) were established through a trial-and-error process. The nonlinear and strongly coupled nature of the vehicle dynamics means that small variations in the control gains considerably affect the performance of the controller. For the experimental validation, we first tuned the gains of the attitude controller, one at a time, and then selected the parameter of the depth controller; with this procedure, the control gains were set.

In the first experiment, the navigation task considers the following scenario. A single red sphere is placed in front of the vehicle at a distance of approximately 8 meters. The time evolution of the depth coordinate and the attitude signals is shown in Figure 12, where the corresponding depth and attitude reference signals are also plotted. The first 20 seconds correspond to the start-up period of the navigation system. After that, the inertial controller ensures that the vehicle moves in the same direction until the navigation system receives visual feedback. This feedback occurs after about thirty seconds, and the desired value for the yaw angle starts to change in order to follow the red sphere. Notice that the reference signal for the pitch angle presents continuous changes after the initialization period. This is because the depth control is performed indirectly by modifying the desired pitch with (36). In addition, the initial desired yaw value is not relevant, because this value is updated after the inertial navigation system starts. The corresponding depth and attitude error signals are depicted in Figure 13; all of these errors have considerably small magnitude, remaining bounded by small values in both depth and attitude.

Figure 12: Navigation experiment when tracking one sphere. Desired values in red and actual values in black. (a) Depth; (b) roll; (c) pitch; (d) yaw.
Figure 13: Navigation tracking errors for one sphere when using controller (4). (a) Depth; (b) roll; (c) pitch; (d) yaw.

The time evolution of the visual error along the horizontal axis is depicted in Figure 14. Again, the first thirty seconds do not show relevant information because no visual feedback is obtained. Later, the visual error is reduced to a value within an acceptable interval, represented by the red lines. This interval represents the values for which the desired yaw angle does not change, even when the visual system is not detecting the sphere. As mentioned before, the experiments show that when the visual error lies within this interval, the AUV can achieve the assigned navigation task. Finally, a disturbance, generated by nearby swimmers as they displace water, moves the vehicle and the error increases, but the visual controller acts to reduce this error.

Figure 14: Navigation tracking of one sphere. Visual error obtained from the AUV navigation experiment.

The previous results show that the proposed controller (4), under the thruster force distribution (32), provides good behavior in the set-point control of the underwater vehicle, with small depth and attitude error values. This performance enables the visual navigation system to track the artificial marks placed in the environment.

The navigation task assigned to the underwater vehicle in the second experiment includes the two spheres with the distribution shown in Figure 11. In this experiment the exact positions of the spheres are unknown; only the approximate relative orientation and distance between them are known. The first sphere was positioned in front of the AUV at a known approximate distance. When the robot detects that the first ball is close enough, it changes the desired yaw angle by a predefined increment in order to find the second sphere. Figure 16 shows the time evolution of the depth coordinate, the attitude signals, and the corresponding reference signals during the experiment. As in the previous experiment, the actual depth, roll angle, and pitch angle remain close to their desired values, even when small ambient disturbances are present. The yaw angle plot shows the different stages of the system. The desired value during the start-up period is an arbitrary value with no relation to the vehicle state. After the initialization period, a new desired value for the yaw angle is set, and this angle remains constant as long as the visual system does not provide information. When the visual system detects a sphere, the navigation system generates a smooth desired signal allowing the underwater vehicle to track the artificial mark. When the circle inside the other circle was detected, the change in direction was applied. This reference value was held fixed until the second sphere was detected, after which a new desired signal with small changes was generated. Finally, the inner circle of the second sphere was detected, another change of direction was performed, and the desired value remained constant until the end of the experiment.

Figure 17 shows the depth and attitude error signals. As in the first experiment, the magnitude of these errors remains small for the depth and attitude, except for the yaw angle, which presents larger values produced by the direction changes. Note that, in this experiment, a significant amount of the error was produced by environmental disturbances.

Finally, the time evolution of the visual error is depicted in Figure 15. It can be observed that, at the beginning, while the robot was moving forward, the error remained constant because the system was unable to determine the presence of the artificial mark in the environment. At a given time, the visual system detected the first sphere with a small estimated radius. Then, as the robot got closer to the target, the visual error began to decrease due to the improvement in visibility, and the radius of the sphere increased. When the radius became bigger than a given threshold, a change-of-direction action was triggered in order to avoid a collision and to search for the second sphere; all variables were then reset. Once again, the error remained constant at the beginning due to the lack of visual feedback. In this experiment, when the second mark was identified, the visual error was initially large, but it rapidly decreased to the desired interval. At the end of the experiment, another change of direction was generated and the error remained constant, because no other sphere was detected in the environment.

Figure 15: Navigation tracking of two spheres. Visual error obtained from the AUV navigation experiment.
Figure 16: Navigation tracking of two spheres. Desired values in red and actual values in black. (a) Depth; (b) roll; (c) pitch; (d) yaw.
Figure 17: Navigation errors when tracking two spheres when using controller (4). (a) Depth; (b) roll; (c) pitch; (d) yaw.

5. Conclusion

In this paper, a vision-based controller to guide the navigation of an AUV in a semistructured environment using artificial marks was presented. The main objective of this work is to provide an aquatic robot with the capability of moving in an environment where visibility conditions are far from ideal and artificial landmarks are placed with an approximately known distribution. A robust control scheme, applied under a given thruster force distribution and combined with a visual servoing control, was implemented. Experimental evaluations of the navigation system were carried out in an aquatic environment with poor visibility. The results show that our approach was able to detect the visual marks and perform the navigation satisfactorily. Future work includes the use of natural landmarks and the relaxation of some restrictions, for example, allowing more than one visual mark to be present in the field of view of the robot.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors thank CONACYT, México, for its financial support.

References

  1. J. J. Leonard, A. A. Bennett, C. M. Smith, and H. J. S. Feder, "Autonomous underwater vehicle navigation," in Proceedings of the IEEE ICRA Workshop on Navigation of Outdoor Autonomous Vehicles, 1998.
  2. J. C. Kinsey, R. M. Eustice, and L. L. Whitcomb, "A survey of underwater vehicle navigation: recent advances and new challenges," in Proceedings of the IFAC Conference of Manoeuvering and Control of Marine Craft, vol. 88, 2006.
  3. L. Stutters, H. Liu, C. Tiltman, and D. J. Brown, "Navigation technologies for autonomous underwater vehicles," IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews, vol. 38, no. 4, pp. 581–589, 2008.
  4. L. Paull, S. Saeedi, M. Seto, and H. Li, "AUV navigation and localization: a review," IEEE Journal of Oceanic Engineering, vol. 39, no. 1, pp. 131–149, 2014.
  5. A. Hanai, S. K. Choi, and J. Yuh, "A new approach to a laser ranger for underwater robots," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '03), pp. 824–829, October 2003.
  6. F. R. Dalgleish, F. M. Caimi, W. B. Britton, and C. F. Andren, "An AUV-deployable pulsed laser line scan (PLLS) imaging sensor," in Proceedings of the MTS/IEEE Conference (OCEANS '07), pp. 1–5, Vancouver, Canada, September 2007.
  7. A. Annunziatellis, S. Graziani, S. Lombardi, C. Petrioli, and R. Petroccia, "CO2Net: a marine monitoring system for CO2 leakage detection," in Proceedings of the OCEANS 2012, pp. 1–7, IEEE, Yeosu, Republic of Korea, 2012.
  8. G. Antonelli, Underwater Robots-Motion and Force Control of Vehicle-Manipulator System, Springer, New York, NY, USA, 2nd edition, 2006.
  9. T. Nicosevici, R. Garcia, M. Carreras, and M. Villanueva, "A review of sensor fusion techniques for underwater vehicle navigation," in Proceedings of the MTTS/IEEE TECHNO-OCEAN '04 (OCEANS '04), vol. 3, pp. 1600–1605, IEEE, Kobe, Japan, 2004.
  10. F. Bonin-Font, G. Oliver, S. Wirth, M. Massot, P. L. Negre, and J.-P. Beltran, "Visual sensing for autonomous underwater exploration and intervention tasks," Ocean Engineering, vol. 93, pp. 25–44, 2015.
  11. K. Teo, B. Goh, and O. K. Chai, "Fuzzy docking guidance using augmented navigation system on an AUV," IEEE Journal of Oceanic Engineering, vol. 40, no. 2, pp. 349–361, 2015.
  12. R. B. Wynn, V. A. I. Huvenne, T. P. Le Bas et al., "Autonomous Underwater Vehicles (AUVs): their past, present and future contributions to the advancement of marine geoscience," Marine Geology, vol. 352, pp. 451–468, 2014.
  13. F. Bonin-Font, M. Massot-Campos, P. L. Negre-Carrasco, G. Oliver-Codina, and J. P. Beltran, "Inertial sensor self-calibration in a visually-aided navigation approach for a micro-AUV," Sensors, vol. 15, no. 1, pp. 1825–1860, 2015.
  14. J. Santos-Victor and J. Sentieiro, "The role of vision for underwater vehicles," in Proceedings of the IEEE Symposium on Autonomous Underwater Vehicle Technology (AUV '94), pp. 28–35, IEEE, Cambridge, Mass, USA, July 1994.
  15. A. Burguera, F. Bonin-Font, and G. Oliver, "Trajectory-based visual localization in underwater surveying missions," Sensors, vol. 15, no. 1, pp. 1708–1735, 2015.
  16. D. Kim, D. Lee, H. Myung, and H.-T. Choi, "Artificial landmark-based underwater localization for AUVs using weighted template matching," Intelligent Service Robotics, vol. 7, no. 3, pp. 175–184, 2014.
  17. J. Sattar and G. Dudek, "Robust servo-control for underwater robots using banks of visual filters," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 3583–3588, Kobe, Japan, May 2009.
  18. C. Barngrover, S. Belongie, and R. Kastner, "Jboost optimization of color detectors for autonomous underwater vehicle navigation," in Computer Analysis of Images and Patterns, pp. 155–162, Springer, 2011.
  19. J. Gao, A. Proctor, and C. Bradley, "Adaptive neural network visual servo control for dynamic positioning of underwater vehicles," Neurocomputing, vol. 167, pp. 604–613, 2015.
  20. S. Heshmati-Alamdari, A. Eqtami, G. C. Karras, D. V. Dimarogonas, and K. J. Kyriakopoulos, "A self-triggered visual servoing model predictive control scheme for under-actuated underwater robotic vehicles," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '14), pp. 3826–3831, Hong Kong, June 2014.
  21. P. L. N. Carrasco, F. Bonin-Font, and G. O. Codina, "Stereo graph-slam for autonomous underwater vehicles," in Proceedings of the 13th International Conference on Intelligent Autonomous Systems, pp. 351–360, 2014.
  22. B. Li, Y. Xu, C. Liu, S. Fan, and W. Xu, "Terminal navigation and control for docking an underactuated autonomous underwater vehicle," in Proceedings of the IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER '15), pp. 25–30, Shenyang, China, June 2015.
  23. M. Myint, K. Yonemori, A. Yanou, S. Ishiyama, and M. Minami, "Robustness of visual-servo against air bubble disturbance of underwater vehicle system using three-dimensional marker and dual-eye cameras," in Proceedings of the MTS/IEEE Washington (OCEANS '15), pp. 1–8, IEEE, Washington, DC, USA, 2015.
  24. B. Sütő, R. Dóczi, J. Kalló et al., "HSV color space based buoy detection module for autonomous underwater vehicles," in Proceedings of the 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI '15), pp. 329–332, IEEE, Budapest, Hungary, November 2015.
  25. M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. B. Williams, "True color correction of autonomous underwater vehicle imagery," Journal of Field Robotics, 2015.
  26. A. Yamashita, M. Fujii, and T. Kaneko, "Color registration of underwater images for underwater sensing with consideration of light attenuation," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 4570–4575, Roma, Italy, April 2007.
  27. D. L. Ruderman, T. W. Cronin, and C.-C. Chiao, "Statistics of cone responses to natural images: implications for visual coding," Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 15, no. 8, pp. 2036–2045, 1998.
  28. T. D'Orazio, C. Guaragnella, M. Leo, and A. Distante, "A new algorithm for ball recognition using circle hough transform and neural classifier," Pattern Recognition, vol. 37, no. 3, pp. 393–408, 2004.
  29. C. Akinlar and C. Topal, "EDCircles: a real-time circle detector with a false detection control," Pattern Recognition, vol. 46, no. 3, pp. 725–740, 2013.
  30. G. Dudek, P. Giguere, C. Prahacs et al., "AQUA: an amphibious autonomous robot," Computer, vol. 40, no. 1, pp. 46–53, 2007.
  31. U. Saranli, M. Buehler, and D. E. Koditschek, "RHex: a simple and highly mobile hexapod robot," International Journal of Robotics Research, vol. 20, no. 7, pp. 616–631, 2001.
  32. T. I. Fossen, Guidance and Control of Ocean Vehicles, John Wiley & Sons, 1994.
  33. R. Pérez-Alcocer, E. Olguín-Díaz, and L. A. Torres-Méndez, "Model-free robust control for fluid disturbed underwater vehicles," in Intelligent Robotics and Applications, C.-Y. Su, S. Rakheja, and H. Liu, Eds., vol. 7507 of Lecture Notes in Computer Science, pp. 519–529, Springer, Berlin, Germany, 2012.
  34. E. Olguín-Díaz and V. Parra-Vega, "Tracking of constrained submarine robot arms," in Informatics in Control, Automation and Robotics, vol. 24, pp. 207–222, Springer, Berlin, Germany, 2009.
  35. C. Georgiades, Simulation and control of an underwater hexapod robot [M.S. thesis], Department of Mechanical Engineering, McGill University, Montreal, Canada, 2005.
  36. N. Plamondon, Modeling and control of a biomimetic underwater vehicle [Ph.D. thesis], Department of Mechanical Engineering, McGill University, Montreal, Canada, 2011.