Abstract

In the ultra-close approaching phase of a tethered space robot, highly stable self-attitude control is essential. However, due to the field-of-view limitation of cameras, typical point features are difficult to extract, so the commonly adopted position-based visual servoing is no longer valid. To provide the robot's position and attitude relative to the target, we propose a monocular visual servoing control method using only the edge lines of satellite brackets. First, real-time detection of the edge lines is achieved based on image gradients and region growing. Then, we build an edge-line-based model to estimate the relative position and attitude between the robot and the target. Finally, we design a visual servoing controller combined with a PD controller. Experimental results demonstrate that our algorithm extracts edge lines stably and adjusts the robot's attitude to satisfy the grasping requirements.

1. Introduction

The increasing amount of space debris poses a serious threat to the safety of space activities. Although many debris removal methods have been proposed [1–3], such as the electrodynamic tether [4] and the foam method [5], the majority of them have only been examined in limited ground tests and are far from practical. In practice, the traditional "space platform + multi-DOF manipulator arm" system has been the preferred solution; however, it has several disadvantages, such as a small operating range, complicated control, and being applicable only to capturing cooperative targets, whose attitude and movement are stable. Therefore, developing on-orbit service technologies, especially for noncooperative targets, is critical. In order to overcome the problems caused by the manipulator, Huang et al. [6, 7] proposed the tethered space robot (TSR) system for noncooperative target capturing and detumbling [8, 9]. TSR is a new type of space robot, which comprises a platform, a space tether, and an operational robot, as shown in Figure 1. The operational robot is released by the platform through the space tether to approach the target and accomplish an on-orbit task. Compared with the manipulator arm, TSR has two main merits: a large operational radius (up to 1,000 m) and enhanced flexibility.

The flow of providing an on-orbit service can be divided into three sequential stages [10]:

(a) The platform distinguishes the target from the space environment and gradually flies towards it from a far distance.

(b) The platform flies around the target and inspects the corresponding regions to identify a suitable position from which to release the operational robot. The operational robot is then launched and flies freely towards the target ahead. Meanwhile, it keeps detecting the suitable grasping region of the target to guide the end-effector in precise manipulation.

(c) The operational robot grasps the target once it reaches the appointed position (usually less than 0.15 m away) and eliminates the tumbling of the robot-target combination with its own propellants. It then provides on-orbit services, such as injecting propellants and dragging the target into a graveyard orbit.

In the third stage, the operational robot needs to capture the target spacecraft through a docking system (such as a docking ring) in order to provide on-orbit services. However, docking systems usually work for cooperative targets only. For noncooperative targets, any device or structure mounted on them is unknown in advance. To facilitate the determination of the grasping position, we analyze common spacecraft structures and select the satellite bracket as a suitable location for grasping, since it is usually structurally robust and widely present on spacecraft [11].

During the approaching process, it is necessary for TSR to recognize the grasping position and measure the relative position and attitude between itself and the target. The calculated information is fed into the control system to adjust TSR's self-attitude and guide its approaching path. Though microwave radar and laser radar have been used for relative navigation, we select charge-coupled device (CCD) cameras mounted on the operational robot to provide vision-based measurements, considering their low mass and rich view information [12].

Providing the on-orbit service involves many key techniques, one of which is the control of the relative position and attitude between TSR and the target.

Yuga et al. [13] proposed a coordinated control method using tension and thrusters. Wang et al. [14, 15] showed that, by altering the position of the connecting point between the tether and the robot, the tether tension force can achieve orbit and attitude control simultaneously. Mori and Matunaga [16] designed a tether tension/tether length controller and realized attitude control; experiments showed that this controller could save propellant. But these methods all require that the relative positions between the space platform, the tethered robot, and the target be known. In order to measure the relative position and orientation between them, the visual servoing control problem has been widely studied. Xu et al. [17] proposed a stereo-vision-based method to measure the relative position and orientation, whose basic idea is to locate the feature points of the target in 3D space. Thienel et al. [18] presented a nonlinear method to estimate and track the attitude of the target spacecraft, but it assumed that the target attitude had already been estimated by a vision method. Hafez et al. [19] proposed a visual servoing control method for a dual-arm space robot. In [20], Dong and Zhu used an extended Kalman filter to estimate the pose and motion of a noncooperative target in real time. Cai et al. [21] proposed a monocular visual servoing control system for the tethered space robot, which realized real-time tracking of a noncooperative target satellite in complex scenes. For these position and orientation measurement methods, the key technology is the detection and recognition of the characteristics of the target spacecraft. Considering the wide existence of circle-shaped devices, Hough Transform methods [22] are generally adopted to detect space targets in autonomous rendezvous and docking. Casonato and Palmerini [23] used the Hough Transform to identify basic geometric shapes, such as circles and straight lines, in rendezvous monitoring. The work in [24] also used the Hough Transform to convert the image from the spatial domain to a parameter space, facilitating the detection of launch bolt holes. In [25], two marks, an external circle and an internal circle, are laid on different planes of the target to indicate the relative attitude and position; they can also be easily detected by Hough Transform methods. In addition, Ramirez et al. [26] proposed a quadrilateral recognition method, which can be used to recognize a spacecraft's solar panels. Grompone von Gioi et al. [27] proposed a line segment detection algorithm based on image gradients, which achieves rapid detection of line segments and has been applied to detect the edge lines of a spacecraft's solar panels. Wang et al. [28, 29] proposed a new approach to dynamic eye-in-hand visual tracking for robot manipulators in constrained environments with an adaptive controller.

The above methods generally aim at detecting the characteristics of the target from a relatively far distance, where the point features, linear features, or geometric features of the target can be fully detected [30], as shown in Figures 2(a) and 2(b). But at extremely close distance (usually less than 1 m), these features are generally difficult to extract and are incomplete due to the limited field of view (FOV) of the cameras. The brackets supporting the solar panels are an element present on the majority of spacecraft, excluding the small cube satellites, which often accommodate body-mounted panels. These elements can easily be recognized by cameras using their edge lines (Figure 2(c)). But for the above-mentioned methods, the detected edge line features alone are inadequate and the methods tend to fail [31].

In this paper, we propose a visual servoing control method based on the detection of the edge lines of the satellite brackets. First, a gradient-based edge line detection method is proposed to extract the edges of the satellite brackets. Then, we construct a model to acquire the relative position and attitude between TSR and the target. Lastly, we integrate a PD controller to design a new visual servoing controller, which is able to control TSR's attitude to satisfy the grasping requirements. This method is appropriate for addressing the relative position and attitude problem at extremely close distance when TSR approaches the target satellite.

The remainder of this paper is organized as follows. In Section 2, we introduce the projection model of edge lines. Detailed descriptions of the line detection algorithm and the servoing control method are presented in Sections 3 and 4, respectively. Section 5 illustrates and analyzes the experiments, and we conclude our work in Section 6.

2. Projection Model of Edge Lines

2.1. Definition of Coordinate Systems

The observing system is composed of the operational robot and the satellite bracket. In the extremely close stage, TSR moves towards the grasping position and only edge information of the brackets is available. In order to facilitate the relative measurement and later derivations, the coordinate systems used in this paper are defined as follows:

(1) The coordinate system of the operational robot body $O_b x_b y_b z_b$: the origin is located at the centroid of the robot; the $x_b$ axis points in the direction of flight; the $y_b$ axis points to the geocenter; the $z_b$ axis is determined by the right-hand rule.

(2) The coordinate system of the capture point $O_g x_g y_g z_g$: the origin is located at the capture point on the target panel bracket; the directions of the $x_g$ axis and the $x_b$ axis are the same; the $y_g$ axis is along the grasping pole of the panel bracket; the $z_g$ axis is determined by the right-hand rule.

(3) Camera coordinate system $O_c x_c y_c z_c$: the origin is at the optical center of the camera, the $z_c$ axis is the optical axis, and the $x_c$ and $y_c$ axes are the horizontal and vertical axes parallel to the image plane.

(4) Image coordinate system $oxy$: its coordinate plane lies on the camera's CCD imaging plane, the origin is the intersection of the optical axis and the imaging plane, and the $x$ and $y$ axes are parallel to the $x_c$ and $y_c$ axes.

(5) Computer image coordinate system $Ouv$: the origin is located at the upper left corner of the digital image, and $u$ and $v$ denote the column and row of a pixel in the digital image.

The relationships of the aforementioned coordinate systems are illustrated in Figure 3.

Besides, we adopt the widely used pinhole model to represent the camera imaging relationship [32]. Let the coordinates of a space point $P$ in the camera coordinate system be $(x_c, y_c, z_c)$; then the imaging model of the point is represented as

$$u = u_0 + \frac{f}{dx}\,\frac{x_c}{z_c}, \qquad v = v_0 + \frac{f}{dy}\,\frac{y_c}{z_c}, \tag{1}$$

where $(u, v)$ are the coordinates of point $P$ in the computer image coordinate system, $(u_0, v_0)$ is the principal point of the camera, $f$ is the focal length of the camera, and $dx$ and $dy$ are a pixel's physical sizes in the horizontal and vertical directions. Generally, $dx$ is equal to $dy$.
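For illustration, the following Python sketch evaluates (1) for a camera-frame point; the intrinsic parameter values are placeholders for illustration only, not the calibration of the camera used in the paper.

```python
# Illustrative intrinsic parameters (not calibration values from
# the paper): f/dx, f/dy, and the principal point (u0, v0) in pixels.
FX, FY = 800.0, 800.0
U0, V0 = 352.0, 288.0

def project(x_c, y_c, z_c):
    """Pinhole projection of eq. (1): camera-frame point -> (u, v)."""
    u = U0 + FX * x_c / z_c
    v = V0 + FY * y_c / z_c
    return u, v

print(project(0.1, -0.05, 1.5))   # -> (u, v) in pixels
```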

If two points $p_1 = (u_1, v_1)$ and $p_2 = (u_2, v_2)$ in the computer image coordinate system $Ouv$ are considered, then the line $l_{12}$ through them can be expressed as

$$l_{12}:\ u = kv + b, \qquad k = \frac{u_2 - u_1}{v_2 - v_1}, \qquad b = u_1 - kv_1. \tag{2}$$
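A minimal helper that recovers $k$ and $b$ of (2) from two detected pixels; the degenerate case of a horizontal line ($v_1 = v_2$) is left unhandled in this sketch.

```python
def line_from_points(p1, p2):
    """Slope/intercept of the image line through two pixels,
    parameterized as u = k*v + b (u: column, v: row), as in eq. (2)."""
    (u1, v1), (u2, v2) = p1, p2
    k = (u2 - u1) / (v2 - v1)   # assumes the line is not horizontal
    b = u1 - k * v1
    return k, b

print(line_from_points((300.0, 0.0), (310.0, 480.0)))  # near-vertical edge
```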

2.2. Camera Imaging Model

Denote the rotation matrix and translation vector from the grasping point coordinate system to the camera coordinate system as $R$ and $t$, respectively [33]. The transformation matrix from the grasping point coordinate system to the camera coordinate system is

$$T = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}. \tag{3}$$

Therefore, the transformational relationship of a space point between its homogeneous coordinates $(x_g, y_g, z_g, 1)^{T}$ in the grasping point coordinate system and its homogeneous coordinates $(x_c, y_c, z_c, 1)^{T}$ in the camera coordinate system is

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = T \begin{bmatrix} x_g \\ y_g \\ z_g \\ 1 \end{bmatrix}. \tag{4}$$
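The transformation (3)-(4) can be sketched as follows; the rotation and translation values are illustrative, not measured poses.

```python
import numpy as np

def grasp_to_camera(R, t, p_g):
    """Apply eq. (4): map a point from the grasping point frame to the
    camera frame via the homogeneous transformation T of eq. (3)."""
    T = np.eye(4)
    T[:3, :3] = R                 # rotation R
    T[:3, 3] = t                  # translation t
    return (T @ np.append(p_g, 1.0))[:3]

# Illustrative pose: 10 deg rotation about the optical axis,
# grasp point 1.2 m in front of the camera.
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([0.0, 0.0, 1.2])
print(grasp_to_camera(R, t, np.array([0.05, 0.0, 0.0])))
```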

3. Line Segment Detection

3.1. Image Gradients

Image gradients reflect the intensity changes between adjacent pixels in the image. A small gradient value corresponds to a region with smooth gray levels, while a large gradient value corresponds to a sharp edge region. Pixels along a consistent gradient direction are likely to belong to the same edge. Thus, calculating image gradients is important for edge structure detection. For an image $I$, the gradients at pixel $(x, y)$ are defined as

$$g_x(x, y) = I(x + 1, y) - I(x, y), \qquad g_y(x, y) = I(x, y + 1) - I(x, y), \tag{5}$$

where $I(x, y)$ represents the intensity value of pixel $(x, y)$, and $g_x$ and $g_y$ are the horizontal and vertical gradients of pixel $(x, y)$. Then, we calculate

$$G(x, y) = \sqrt{g_x^2(x, y) + g_y^2(x, y)}, \qquad \theta(x, y) = \arctan\frac{g_y(x, y)}{g_x(x, y)}, \tag{6}$$

where $G(x, y)$ and $\theta(x, y)$ represent the gradient magnitude and gradient orientation of pixel $(x, y)$, respectively.

To speed up the edge structure detection and exclude the monotonous regions of the image, we first preprocess the image by thresholding the gradient magnitude of each pixel. Pixels whose gradient magnitude is less than a predefined threshold are more likely to come from nonedge regions and are removed directly. Besides, we define the pixel consistency between two adjacent pixels as the difference of their gradient orientations, $\Delta\theta = |\theta_i - \theta_j|$. If $\Delta\theta < \tau$, the orientations of the adjacent pixels are considered consistent and the pixels should lie on the same straight line, where $\tau$ is a human-defined parameter. The determination of $\tau$ is described in Section 3.3.
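A compact NumPy sketch of (5)-(6) together with the gradient-magnitude preprocessing; the threshold value g_thresh is an assumed placeholder.

```python
import numpy as np

def gradient_map(img, g_thresh=5.0):
    """Gradient magnitude/orientation via forward differences
    (eqs. (5)-(6)); pixels below g_thresh are masked out, as in the
    preprocessing step. The threshold value is illustrative."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]    # horizontal gradient g_x
    gy[:-1, :] = img[1:, :] - img[:-1, :]    # vertical gradient g_y
    mag = np.hypot(gx, gy)                   # G(x, y)
    ori = np.arctan2(gy, gx)                 # theta(x, y)
    valid = mag >= g_thresh                  # exclude monotonous regions
    return mag, ori, valid
```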

3.2. Pseudo-Sorting Method

A line support area is composed of pixels with similar gradient directions in the image, as shown in Figure 4. In general, we use the external rectangle, for example, the red one in Figure 4, to approximate the support area.

The support area of a straight line segment is generated using the region growing method [34, 35]. It first selects a small set of pixels as seeds. Then, it iteratively integrates neighboring pixels with similar orientations, so the seed region gradually grows. The detailed region growing strategy is introduced in Section 3.3. The performance of the region growing method is sensitive to the selection of seed pixels. One way to determine the seeds is to sort image pixels according to their gradient magnitudes and select the pixels with larger values as seeds. However, the computational cost of sorting these pixels is rather high due to the large number of pixels contained in the image. Even when standard sorting methods (e.g., bubble sort, quicksort) are used, real-time performance is still hard to obtain. Here, we adopt a pseudo-sorting method to order the pixels (Figure 5).

The pseudo-sorting method roughly arranges pixels in order according to their gradient magnitudes. First, a set of linear intervals is generated according to the maximum and minimum values of the image gradients, each interval representing a range of gradient values. Pixels are placed into the corresponding intervals according to their gradient magnitudes. As the number of intervals increases, the pseudo-sorting result becomes more accurate while the algorithmic efficiency becomes lower. Pixels in the same interval have the same possibility of being selected as seeds.

To determine the number of intervals, we conducted several experiments and conclude that, for a common 8-bit image, setting the interval number to 1024 achieves a good tradeoff between performance and time complexity.
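The pseudo-sorting step can be sketched as follows, assuming the mag and valid arrays produced by the gradient code above; pixels within one interval keep an arbitrary mutual order, which is exactly the approximation the method accepts.

```python
import numpy as np

def pseudo_sort(mag, valid, n_bins=1024):
    """Pseudo-sorting: bucket valid pixels into n_bins linear gradient
    intervals and return (y, x) seed candidates, strongest bins first."""
    ys, xs = np.nonzero(valid)
    vals = mag[ys, xs]
    edges = np.linspace(vals.min(), vals.max(), n_bins + 1)
    bins = np.clip(np.digitize(vals, edges) - 1, 0, n_bins - 1)
    order = []
    for b in range(n_bins - 1, -1, -1):          # descending gradient
        idx = np.nonzero(bins == b)[0]
        order.extend(zip(ys[idx], xs[idx]))
    return order
```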

3.3. Region Growing Method

According to the results of the pseudo-sorting, pixels are selected as seed points for region growing in descending order of gradient magnitude [36]. First, we define the direction of the line support area as

$$\theta_{\mathrm{region}} = \arctan\left(\frac{\sum_i \sin\theta(x_i, y_i)}{\sum_i \cos\theta(x_i, y_i)}\right), \tag{7}$$

where $\theta_{\mathrm{region}}$ represents the angle of the support area and $\theta(x_i, y_i)$ represents the line direction angle of a pixel $(x_i, y_i)$ in the area. Let the seed point of region growing be $(x_0, y_0)$; then the direction of the support area at the beginning is the direction of the seed point; namely, $\theta_{\mathrm{region}} = \theta(x_0, y_0)$.

Let $x = x_0$ and $y = y_0$. For each point $(x, y)$ in the support area, we visit its 8 neighboring pixels, $(x-1, y-1)$, $(x, y-1)$, $(x+1, y-1)$, $(x-1, y)$, $(x+1, y)$, $(x-1, y+1)$, $(x, y+1)$, and $(x+1, y+1)$, and calculate the difference between their line direction angles and $\theta_{\mathrm{region}}$. If the difference is less than $\tau$, they are considered to have the same direction, and the corresponding pixel is added to the straight line support area. Then, we update the direction of the support area using

$$\theta_{\mathrm{region}} = \arctan\left(\frac{\sum_j \sin\theta(x_j, y_j)}{\sum_j \cos\theta(x_j, y_j)}\right), \tag{8}$$

where $(x_j, y_j)$ runs over the pixels currently in the support area, including the newly selected neighbors of $(x, y)$. The visited pixels are removed from the candidate list and not accessed again. The above steps are repeated until no new pixel can be added to the line support area, at which point the region growing algorithm terminates.
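A sketch of the region growing loop, reusing the 22.5° threshold chosen in this section; ori is the orientation map from the gradient code above and used marks pixels already consumed.

```python
import numpy as np

def grow_region(seed, ori, used, tau=np.deg2rad(22.5)):
    """Grow a line support area from a seed pixel (eqs. (7)-(8)):
    absorb 8-neighbors whose orientation agrees with the running
    region direction within tau, updating the direction each time."""
    h, w = ori.shape
    region = [seed]
    used[seed] = True
    sx, sy = np.cos(ori[seed]), np.sin(ori[seed])
    theta_region = ori[seed]          # initialised from the seed
    stack = [seed]
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and not used[ny, nx]:
                    d = abs(ori[ny, nx] - theta_region) % np.pi
                    if min(d, np.pi - d) < tau:   # orientations mod pi
                        used[ny, nx] = True
                        region.append((ny, nx))
                        stack.append((ny, nx))
                        sx += np.cos(ori[ny, nx])
                        sy += np.sin(ori[ny, nx])
                        theta_region = np.arctan2(sy, sx)  # eq. (8)
    return region
```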

During the region growing, the selection of the threshold value $\tau$ is key and influences the final results of the algorithm. Intuitively, when a small $\tau$ is adopted, only pixels with almost identical orientations can be included in the line support area, and the generated support area may be too short to provide complete edge information. As $\tau$ increases, more nonedge pixels tend to be covered by the support area, which increases its length and width and causes many false positive detections. In order to make a tradeoff between deficient edge information and a high false positive rate, we evaluate the performance under different $\tau$ and select $\tau$ to be 22.5° by trial and error.

The region growing results under different $\tau$ are illustrated in Figure 6. Besides 22.5°, two typical thresholds, one smaller and one larger than 22.5°, are selected to provide visual examples.

3.4. Straight Line Determination

According to the aforementioned region growing algorithm, the support areas of straight line segments are generated. The next step is to produce the corresponding line segment from each support area. First, in order to reduce the computational burden, support areas with extremely few pixels, which are likely caused by isolated noise, are excluded. Excluding areas with few pixels can be regarded as a type of erosion operation in image processing; both aim to eliminate small regions in the image. Specifically, we can use a hollow circle template with a small radius and slide it over the image. Areas with few pixels are totally enclosed by the template and hence are discarded, while support areas, due to their length, intersect with the circle and are retained.

In order to determine the straight line segment, an external rectangle is used to describe its support area. The support area is scored by summing up the gradient magnitudes of the pixels contained in it. The support area centroid $(c_x, c_y)$ can then be calculated by the following formula:

$$c_x = \frac{\sum_{(x,y) \in A} G(x, y)\, x}{\sum_{(x,y) \in A} G(x, y)}, \qquad c_y = \frac{\sum_{(x,y) \in A} G(x, y)\, y}{\sum_{(x,y) \in A} G(x, y)}, \tag{9}$$

where $G(x, y)$ is the gradient magnitude of pixel $(x, y)$ in the support area $A$, and $x$ and $y$ are the coordinates of the pixel. The centroid $(c_x, c_y)$ is chosen as the center of the external rectangle, the long axis of the rectangle is chosen as the direction of the support area, and the short axis is vertical to it.
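The centroid of (9) and the rectangle axes can be estimated from gradient-weighted second moments; the paper does not spell out the axis computation, so the eigen-decomposition below is an assumed standard choice.

```python
import numpy as np

def rect_from_region(region, mag):
    """Gradient-weighted centroid (eq. (9)) and principal axis of a
    support area; the eigenvector with the largest second moment
    approximates the long axis of the external rectangle."""
    ys = np.array([p[0] for p in region], dtype=float)
    xs = np.array([p[1] for p in region], dtype=float)
    g = mag[ys.astype(int), xs.astype(int)]
    cx = np.sum(g * xs) / np.sum(g)
    cy = np.sum(g * ys) / np.sum(g)
    # Weighted second moments around the centroid.
    mxx = np.sum(g * (xs - cx) ** 2) / np.sum(g)
    myy = np.sum(g * (ys - cy) ** 2) / np.sum(g)
    mxy = np.sum(g * (xs - cx) * (ys - cy)) / np.sum(g)
    evals, evecs = np.linalg.eigh(np.array([[mxx, mxy], [mxy, myy]]))
    long_axis = evecs[:, np.argmax(evals)]       # line direction
    return (cx, cy), long_axis
```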

Since not all straight segment support areas correspond to a straight line segment model, they need to be further examined after obtaining the external rectangle of the support area. In this paper, we develop two criteria to achieve this.

(1) The ratio of the long and short axes of the external rectangle should be larger than a set threshold (typically 4 to 7).

Due to the existence of noise, pixels in monotonous regions may be wrongly selected as seeds of the region growing method, leading to support areas like the one shown in Figure 7(b). Besides, noise is also likely to form some small line segments that contain enough pixels to survive the pixel-count filter above, as shown in Figure 7(c). By thresholding the ratio between the long and short axes, both kinds of support areas can be eliminated.

(2) Among the pixels contained in the external rectangle, those whose direction is consistent with the direction of the long axis of the rectangle (direction deviation less than $\tau$) are counted; their proportion to the total number of pixels contained in the rectangle should be larger than a predefined threshold (typically 50% to 70%).

Determining the straight lines with the long-short axis ratio alone may retain some low-curvature curves, such as circular edges with a large radius, as shown in Figure 7(d). It can be observed that, although such a curve has a large long-short axis ratio, it produces an increase in the support region area. Hence, by thresholding the proportion of the aligned pixel number to the support region area, these regions can be eliminated.

The support areas that satisfy these two criteria correspond to the straight line segment model, and the principal axis passing through the center of the external rectangle is taken as the straight line segment. As shown in Figure 8, the principal axis of the external rectangle, that is, the blue line segment, is the detected straight segment.
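The two criteria combine into a simple predicate; the threshold values below are picked from the ranges quoted above and are to be tuned in practice.

```python
def is_line_segment(long_len, short_len, aligned_pixels,
                    ratio_thresh=5.0, density_thresh=0.6):
    """The two validation criteria of Section 3.4; thresholds are
    taken from the quoted ranges (4-7 and 50%-70%)."""
    if short_len <= 0:
        return False
    if long_len / short_len < ratio_thresh:      # criterion (1)
        return False
    rect_area = long_len * short_len             # external rectangle area
    return aligned_pixels / rect_area >= density_thresh  # criterion (2)
```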

4. Servoing Controller Design

Through the above algorithm, we can detect the edge lines of the satellite brackets in the image. These operations are all based on the computer image coordinates. Hence, the two detected edge lines can be expressed as

$$l_i:\ u = k_i v + b_i, \qquad i = 1, 2, \tag{10}$$

where $k_i$ and $b_i$ are the slope and intercept of edge line $l_i$ in the computer image coordinate system.

According to (10), we design a controller to control the relative attitude of the robot in the ultra-close approaching stage.

4.1. Edge Lines Analysis

The capturing device of TSR is usually installed on the front plate of the robot, close to the camera. Hence, we assume that the capture device coordinate system coincides with the camera coordinate system $O_c x_c y_c z_c$, and we denote the Euler angles between the camera coordinate system and the target coordinate system as $\theta_x$, $\theta_y$, and $\theta_z$, respectively [37]. It is easy to analyze that, for different relative attitudes of the robot and the target, the imaging of the bracket edge lines in the camera image plane is as shown in Figure 9.

The imaging result in Figure 9(a) shows the final state of an ideal controller. It is requested that the Euler angles $\theta_x$ and $\theta_z$ be zero and that the relative position deviation along the $x_c$ direction be zero. The Euler angle about the $y_c$ axis can be any angle. Figure 9(b) is the case where the position deviation along $x_c$ is not zero. Figure 9(c) shows that the Euler angle $\theta_z$ is not zero and there exists rotation about the axis $z_c$. The edge lines in Figure 9(d) are not parallel ($k_1 \neq k_2$), which reveals that the robot rotates about the axis $x_c$.
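Under the line parametrization $u = kv + b$ introduced in (2), the observations of Figure 9 map to three scalar error signals, sketched below; the image width value is an assumed placeholder.

```python
def servo_errors(k1, b1, k2, b2, width=704.0):
    """Error signals read from the two edge lines u = k_i*v + b_i,
    following the analysis of Figure 9. The image width is an
    assumed placeholder value."""
    e_roll = k1 + k2             # rotation about the optical axis z_c
    e_pitch = k1 - k2            # rotation about x_c (lines converge)
    e_pos = b1 + b2 - width      # lateral offset along x_c
    return e_roll, e_pitch, e_pos
```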

4.2. Controller Design

From the above analysis, we can conclude that the relative attitude angle $\theta_z$ can be measured by summing up the slopes of the two edge lines, the relative attitude angle $\theta_x$ can be measured by subtracting the slopes of the two edge lines, and the position deviation along the axis $x_c$ can be measured using the sum of $b_1$ and $b_2$. Therefore, we refer to the PD controller and design our controller as follows [38]:

$$T_z = k_{p1}(k_1 + k_2) + k_{d1}\,\frac{d(k_1 + k_2)}{dt},$$
$$T_x = k_{p2}(k_1 - k_2) + k_{d2}\,\frac{d(k_1 - k_2)}{dt},$$
$$F_x = k_{p3}(b_1 + b_2 - W) + k_{d3}\,\frac{d(b_1 + b_2)}{dt}, \tag{11}$$

where $k_{pi}$ and $k_{di}$ ($i = 1, 2, 3$) are the controller parameters and should be adjusted in practice, and $W$ is the image width. $T_z$ and $T_x$ are the control torques about the axes $z_c$ and $x_c$, and $F_x$ is the control force along the axis $x_c$. The stability of the proposed controller can be proved by the Lyapunov first method [39, 40].
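A minimal discrete-time sketch of (11), with one PD channel per controlled degree of freedom; all gains and the control period are placeholders to be tuned on the physical system.

```python
class PDChannel:
    """One channel of the PD visual servoing law (eq. (11)). Gains
    and the 0.25 s control period are illustrative values."""
    def __init__(self, kp, kd, dt=0.25):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev = None

    def update(self, error):
        """Return the PD control output for the current error."""
        de = 0.0 if self.prev is None else (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.kd * de

# One channel each for T_z (roll), T_x (pitch), and F_x (lateral force).
torque_z = PDChannel(kp=0.8, kd=0.2)
torque_x = PDChannel(kp=0.8, kd=0.2)
force_x = PDChannel(kp=0.5, kd=0.1)
```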

We have to note that the attitude angle $\theta_y$ is not controlled by the controller, because only the pole structure of the bracket is imaged at ultra-close distance (Figure 9(a)) and the image is insensitive to attitude changes about the axis $y_c$. But in practice $\theta_y$ should be restricted during the approaching process using additional sensors, for example, a global camera, to keep TSR's self-attitude stable. The flowchart of our method is shown in Figure 10.

5. Experiments and Results

5.1. Experiment Conditions

In order to verify the effectiveness of the proposed control algorithm, we set up a visual servoing control system, as shown in Figure 11. It mainly consists of a visual perception subsystem, a control subsystem, a movement platform, and the target satellite model. The composition of the visual perception subsystem is listed in Table 1. Since this paper mainly focuses on visual servoing control at ultra-close distance, we adopt a single-pole model to substitute for the satellite bracket model. In the experiment, we use a 6-DOF movement platform to simulate TSR's movement in space. The visual perception subsystem and the target model are mounted on the movement platform to control their relative position and attitude.

5.2. Results Analysis

In order to evaluate the edge line detection performance during the approaching process, we run a simulation and move TSR from the initial position (about 3 m from the target) to the grasping position (about 0.2 m from the target). Meanwhile, the single-pole model swings with an initial rate of about 10°/s. Figure 12 qualitatively demonstrates that, despite the change of relative position and attitude between TSR and the target model, our proposed algorithm is always able to detect the edge lines.

Figure 13 quantitatively shows the measured slope and intercept of the detected edge lines in every frame. In (a), the initial slope is −0.45, which means TSR's self-attitude needs further adjustment. As TSR approaches the target, the slopes of the two detected lines gradually converge to about 0, which means TSR reaches the ideal grasping attitude. Meanwhile, the intercepts of the detected lines converge to about 400 and 300, respectively. It is easy to find that their sum is rather close to the image width $W$. This indicates that the two edge lines are horizontally symmetric about the image center and TSR has reached the ideal grasping position.

In Figure 14, we give the time consumption of the proposed method. The average time for processing one image is about 0.115 seconds. Considering that the control period of the controller is 0.25 seconds, our method satisfies the real-time requirement.

5.3. Limitations

As mentioned in Section 4.2, TSR's rotation about the axis $y_c$ is not taken into consideration in this paper. But in practice this rotation will lead to TSR's deviation from the approaching path and increase the possibility of collision with the target. Hence, in future work, control of the attitude angle $\theta_y$ should be added to the controller.

Although our proposed method works successfully, the number of frames it needs to offset the relative position and attitude is somewhat high. This can be ascribed to the low frame rate (4 Hz) of the visual camera: the time span between adjacent frames leads to inconsistency between the actual measurement values and the control parameters. To solve this, in the future we will consider increasing the frame rate of the cameras, further reducing the complexity of our method, and introducing a motion prediction mechanism for TSR to offset the attitude change of the target.

6. Conclusions

This paper presents a novel monocular visual servoing control method using only edge lines. It is designed to measure the relative position and attitude between TSR and the target when the target is not fully within the field of view, which is intractable for traditional feature-based visual servoing methods.

The novelty of this paper lies in the following three aspects: (1) we propose a novel edge line detection method, which is based on image gradients and region growing, stably extracts lines in the image, and meets the real-time requirement of the controller; (2) we build a model and analyze the relationship between the edge line parameters and the relative attitude; (3) we design a PD-based visual servoing controller for the ultra-close approaching process, which requires only the two detected edge line parameters.

Experiments on the semiphysical simulation system indicate that our method is invariant to rotation and scale change.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research is sponsored by the National Natural Science Foundation of China (Grant nos. 11272256, 60805034) and the Fundamental Research Funds for the Central Universities (Grant no. 3102016BJJ03).