Abstract

Two important applications of monocular vision navigation in aerospace are spacecraft ground calibration tests and spacecraft relative navigation. Whether calibrating the attitude of a ground turntable or navigating between two spacecraft, attitude estimation usually requires four noncollinear feature points. In this paper, a vision navigation system based on the minimum number of feature points is designed to cope with faulty or unidentifiable feature points. An iterative algorithm based on feature point reconstruction is proposed for the system. Simulation results show that the attitude calculation of the designed vision navigation system converges quickly, which improves the robustness of spacecraft vision navigation.

1. Introduction

Monocular vision navigation can be applied to spacecraft ground physical simulation platforms and spacecraft relative navigation [1–3]. The three-degree-of-freedom air bearing table is key equipment for spacecraft ground tests [4–6], as it can simulate attitude motion about three axes. Air bearing tables are used to demonstrate and validate key technologies for space missions, such as spacecraft attitude determination, control actuator ground verification, control software development, autonomous rendezvous and docking, space target detection and identification, target proximity and tracking control, precise pointing control of laser communication devices, relative attitude cooperative control, microsatellite formation flying initialization, cooperative control, and autonomous operation. The three-degree-of-freedom air bearing table offers high relative accuracy but lacks an initial attitude reference; attitude determination is therefore a precondition for its use. The work in [7] proposed a turntable attitude determination algorithm based on monocular vision navigation, which requires four or more noncollinear feature points. If any feature point cannot be identified, the attitude solution fails.

For spacecraft relative navigation, the principle of the monocular vision navigation system is the same as that of air bearing table calibration in ground simulation: it also requires at least four noncollinear feature points for attitude determination. However, unlike ground laboratory testing, observation of the target spacecraft is uncontrollable. Damaged feature points can hardly be repaired, and a software failure in the recognition algorithm can leave feature points unrecognized; both would compromise the spacecraft relative navigation task. The work in [8–10] studied vision navigation for spacecraft and analyzed its failure modes, but did not consider how the system could continue the task with the minimum number of feature points. The work in [11] studied the conditions for a unique solution with three feature points, giving conditions and algorithms for different modes. Nonetheless, its pattern constraint is very hard to apply to attitude determination, and the system may converge to incorrect solutions.

To improve the robustness and practicality of monocular vision navigation, this paper studies a system that recognizes only three feature points. We propose an algorithm that reconstructs the missing feature point without any pattern constraint and effectively determines the attitude from only three feature points.

This paper is organized into four sections. Section 1 introduces the research background and significance. Section 2 describes the problem, including the definition of the coordinate frames, the camera model, and the Haralick iterative algorithm. Section 3 presents the monocular vision navigation algorithm without pattern constraints, in which the minimum number of feature points is used. After analyzing how to detect the loss of a feature point, a feature point reconstruction algorithm is proposed and the uniqueness of its solution is analyzed; a simulation validates the algorithm. Finally, the last section concludes the paper.

2. Problem Description

2.1. Coordinate Frames

This paper mainly introduces three coordinate frames: the camera coordinate frame, the image coordinate frame, and the target coordinate frame, as shown in Figure 1.

The image plane coordinate system is as follows: the image plane is perpendicular to the optical axis. It has two forms: image pixel coordinates \((u, v)\) and image physical coordinates \((x, y)\). The origin of the image pixel coordinates is located at the upper left corner of the pixel image plane, and the \(u\)-axis and \(v\)-axis correspond to the rows and columns of the image plane. The origin of the image physical coordinates is located at the intersection of the image plane and the optical axis, and the \(x\)-axis and \(y\)-axis are parallel to and lie within the image plane.
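For reference, the two forms are related through the pixel dimensions and the principal point. Assuming pixel sizes \(d_x\), \(d_y\) and principal point \((u_0, v_0)\), which are standard calibration parameters not given explicitly in this paper, the conversion is
\[ u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0 . \]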

The camera coordinate frame is as follows: the origin is located at the optical center of the camera. The \(z_c\)-axis is perpendicular to the image plane and points toward the target; the \(x_c\)-axis and \(y_c\)-axis are parallel to and lie within the image plane. The distance from the \(x_c O y_c\) plane to the image plane is the focal length \(f\) of the camera.

The target coordinate frame is as follows: the target coordinate system is fixed to the target, with its origin located at the center of the target. The \(z_t\)-axis is perpendicular to the flotation platform and points toward the ground; the \(x_t\)-, \(y_t\)-, and \(z_t\)-axes satisfy the right-hand rule.

2.2. Camera Model

The transformation from the target in three-dimensional space to its two-dimensional image is called the forward transform. Conversely, recovering the target motion information in three-dimensional space from the two-dimensional image is called the inverse transform. Vision navigation, which solves for relative pose information from images, is an inverse transform.

A pinhole camera model is selected as the imaging model. Let the coordinates of target feature point \(P_i\) be \((x_{ti}, y_{ti}, z_{ti})\) in the target coordinate frame and \((x_{ci}, y_{ci}, z_{ci})\) in the camera coordinate frame, and let the coordinates of its image point be \((X_i, Y_i)\) in the image coordinate frame. With \(R\) and \(T\) denoting the attitude rotation matrix and the translation vector from the target coordinate frame to the camera coordinate frame, the general equation of rigid motion is
\[ \begin{bmatrix} x_{ci} \\ y_{ci} \\ z_{ci} \end{bmatrix} = R \begin{bmatrix} x_{ti} \\ y_{ti} \\ z_{ti} \end{bmatrix} + T . \]

The conversion between the target and the camera coordinate frame is given by the pinhole imaging model as
\[ X_i = f\,\frac{x_{ci}}{z_{ci}}, \qquad Y_i = f\,\frac{y_{ci}}{z_{ci}}, \]
where \(f\) is the focal length. Assuming that the camera has been calibrated, the key issue of vision navigation is solving for the relative pose parameters \(R\) and \(T\) from the coordinates of the feature points in the camera coordinate frame and the target coordinate frame (Figure 2).
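As an illustration, the forward transform above can be expressed as a short routine. The following is a minimal sketch (the function and variable names are ours, not from the paper), projecting target-frame feature points into the image plane given \(R\), \(T\), and \(f\):

```python
import numpy as np

def project_points(P_t, R, T, f):
    """Pinhole forward transform: target-frame points -> image points.
    P_t: (n, 3) feature point coordinates in the target frame.
    R:   (3, 3) rotation matrix, target frame to camera frame.
    T:   (3,) translation vector.  f: focal length."""
    P_c = P_t @ R.T + T                # rigid motion into the camera frame
    X = f * P_c[:, 0] / P_c[:, 2]      # X_i = f * x_ci / z_ci
    Y = f * P_c[:, 1] / P_c[:, 2]      # Y_i = f * y_ci / z_ci
    return np.column_stack([X, Y])
```

Inverting this transform, that is, recovering \(R\) and \(T\) given only the 2D image points, is exactly the navigation problem addressed in the remainder of the paper.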

2.3. Haralick Iterative Algorithm

Every target feature point corresponds to a ray departing from the camera projection center, passing through the image point, and pointing toward the target. Since the direction of this ray is opposite to that of the projection line of the target feature point, the ray is called the inverse projection line. Its unit vector can be denoted by
\[ v_i = \frac{1}{\sqrt{X_i^2 + Y_i^2 + f^2}} \begin{bmatrix} X_i \\ Y_i \\ f \end{bmatrix} . \]

Ideally, the camera-frame position of feature point \(P_i\) should lie on the inverse projection line of its image point and satisfy
\[ R \begin{bmatrix} x_{ti} \\ y_{ti} \\ z_{ti} \end{bmatrix} + T = d_i v_i , \]
where \(d_i\) is the distance from the target feature to the camera's optical center, also known as the depth of field. Hence
\[ d_i = v_i^T \left( R \begin{bmatrix} x_{ti} \\ y_{ti} \\ z_{ti} \end{bmatrix} + T \right) . \]

The depth of field is unknown, so we must determine the depth scale factor before applying an absolute attitude determination method. Generally an iterative estimation method is used: it first estimates the depth \(d_i^{(k)}\), then solves for the estimated attitude parameters \(R^{(k)}\) and \(T^{(k)}\) by substituting the estimate into the camera model, and then uses these estimates to calculate the new depth \(d_i^{(k+1)}\). Haralick proposed an iterative algorithm that calculates the pose and the depth of field of a target simultaneously, computing the relative pose by an eigenvalue decomposition. The depth update is
\[ d_i^{(k+1)} = v_i^T \left( R^{(k)} \begin{bmatrix} x_{ti} \\ y_{ti} \\ z_{ti} \end{bmatrix} + T^{(k)} \right) . \]
The convergence condition of the iteration is
\[ \left| d_i^{(k+1)} - d_i^{(k)} \right| < \varepsilon . \]
The iteration is repeated until this condition holds and the error is minimized (Figure 3).
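For concreteness, the iteration can be sketched as follows. This is a minimal implementation under our own assumptions: we use an SVD-based absolute orientation step, which is a standard equivalent of the eigenvalue decomposition mentioned above, and all names are illustrative rather than the paper's.

```python
import numpy as np

def absolute_orientation(P_t, Q_c):
    """Least-squares R, T such that Q_c ~ R @ P_t + T (SVD/Kabsch form,
    equivalent to the eigenvalue decomposition used by Haralick)."""
    ct, cc = P_t.mean(axis=0), Q_c.mean(axis=0)
    H = (P_t - ct).T @ (Q_c - cc)                        # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cc - R @ ct

def haralick_pose(P_t, img_pts, f, eps=1e-8, max_iter=200):
    """Iterative pose and depth-of-field estimation.
    P_t: (n, 3) feature points in the target frame.
    img_pts: (n, 2) image points (X_i, Y_i).  f: focal length."""
    rays = np.column_stack([img_pts, np.full(len(img_pts), f)])
    v = rays / np.linalg.norm(rays, axis=1, keepdims=True)  # unit inverse projection lines
    d = np.full(len(P_t), 1.0)                              # initial depth guess
    for _ in range(max_iter):
        R, T = absolute_orientation(P_t, d[:, None] * v)    # pose from current depths
        d_new = np.einsum('ij,ij->i', v, P_t @ R.T + T)     # d_i = v_i^T (R p_ti + T)
        if np.max(np.abs(d_new - d)) < eps:                 # convergence condition
            d = d_new
            break
        d = d_new
    return R, T, d
```

The absolute orientation step could equally be implemented with the quaternion eigenvalue method; the SVD form is used here only for brevity.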

3. Monocular Vision Navigation Algorithm with Minimum Feature Points

For a vision navigation system, the solution depends entirely on the feature points captured by the camera. PnP refers to the Perspective-n-Point problem, where n is the number of feature points captured. Normally, if there are fewer than four feature points, the relative pose cannot be uniquely determined. P3P is the form of the PnP problem with the fewest feature points, so how to solve the relative pose with only three feature points is worth considering.

In spacecraft ground physical simulation platform applications, the relative position of the camera and the platform can be designed to satisfy the P3P mode constraints, so that the relative position and attitude can be calculated. In actual applications, however, redesigning the layout is complex and the mode constraint has its limitations. A monocular vision navigation algorithm that works with the minimum number of feature points under ordinary circumstances is therefore needed.

3.1. Reconstruction of Missing Feature Points

As shown in Figure 4, four noncollinear feature points \(P_1, P_2, P_3, P_4\) are designed in the same plane within the target coordinate frame; any three of them form a distinct triangle.

Under normal circumstances all of the feature points can be identified on the image plane, but a feature point is lost if it is damaged or the recognition algorithm fails. If only three feature points can be identified, say \(P_1, P_2, P_3\) corresponding to the image points \(p_1, p_2, p_3\) in Figure 5, the pose cannot be solved when the mode constraint conditions are not met. If we can reconstruct the missing feature point \(P_4\) and its corresponding image point \(p_4\), the problem can be solved properly. (The missing feature point could be any one of the four.)

In ground turntable applications, it is easy to determine which feature point has been lost. In general, concentric feature points may be used for identification. The loss can also be detected by matching the image plane before the feature point disappears against the one after. Furthermore, spacecraft gyro outputs and navigation filter outputs can be used to distinguish identified feature points from missing ones. Figure 5 shows the missing feature point and its reconstruction.

In the target coordinate frame, let \(S_{123}\) denote the area of the triangle formed by \(P_1 P_2 P_3\), \(S_{124}\) the area of the triangle formed by \(P_1 P_2 P_4\), \(S_{134}\) the area of the triangle formed by \(P_1 P_3 P_4\), and \(S_{234}\) the area of the triangle formed by \(P_2 P_3 P_4\). In the image plane, let \(s_{123}\), \(s_{124}\), \(s_{134}\), and \(s_{234}\) denote the areas of the corresponding triangles formed by \(p_1 p_2 p_3\), \(p_1 p_2 p_4\), \(p_1 p_3 p_4\), and \(p_2 p_3 p_4\). According to the properties of the affine transformation, the ratio of two triangle areas is affine invariant; then
\[ \frac{s_{124}}{s_{123}} = \frac{S_{124}}{S_{123}}, \qquad \frac{s_{134}}{s_{123}} = \frac{S_{134}}{S_{123}}, \qquad \frac{s_{234}}{s_{123}} = \frac{S_{234}}{S_{123}} . \]
Since \(p_1, p_2, p_3\) are identified, \(s_{123}\) can be calculated directly; from the area ratios above, the estimates of \(s_{124}\), \(s_{134}\), \(s_{234}\) are
\[ \hat{s}_{124} = \frac{S_{124}}{S_{123}}\, s_{123}, \qquad \hat{s}_{134} = \frac{S_{134}}{S_{123}}\, s_{123}, \qquad \hat{s}_{234} = \frac{S_{234}}{S_{123}}\, s_{123} . \]
According to the triangle area formula, the three areas involving the unknown image point \(p_4 = (x_4, y_4)\) are
\[ s_{124} = \tfrac{1}{2}\left[ (x_2 - x_1)(y_4 - y_1) - (y_2 - y_1)(x_4 - x_1) \right], \]
and similarly for \(s_{134}\) and \(s_{234}\). Ideally \(\hat{s}_{124} = s_{124}\), \(\hat{s}_{134} = s_{134}\), and \(\hat{s}_{234} = s_{234}\); these three equations are linear in \((x_4, y_4)\) and can be written in matrix form as
\[ A \begin{bmatrix} x_4 \\ y_4 \end{bmatrix} = b . \]
Measurement errors, imaging errors, calculation errors, and other factors mean that the equations are not absolutely accurate; in practice
\[ A \begin{bmatrix} x_4 \\ y_4 \end{bmatrix} - b = e \neq 0 . \]
The coordinates minimizing \(\|e\|^2\) are taken as the reconstructed image point:
\[ \begin{bmatrix} \hat{x}_4 \\ \hat{y}_4 \end{bmatrix} = (A^T A)^{-1} A^T b . \]
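The reconstruction step can be sketched numerically as follows. This is a minimal illustration under our own naming; signed areas with a consistent vertex orientation are assumed, so that convexity fixes the sign of each ratio.

```python
import numpy as np

def signed_area(a, b, c):
    """Signed area of the triangle (a, b, c) in the plane."""
    return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def reconstruct_p4(p1, p2, p3, S):
    """Least-squares reconstruction of the missing image point p4.
    p1, p2, p3: identified 2D image points.
    S: signed target-frame areas {(1,2,3): S123, (1,2,4): S124,
       (1,3,4): S134, (2,3,4): S234}, known from the feature layout."""
    s123 = signed_area(p1, p2, p3)
    rows, rhs = [], []
    for (a, b), key in [((p1, p2), (1, 2, 4)),
                        ((p1, p3), (1, 3, 4)),
                        ((p2, p3), (2, 3, 4))]:
        s_hat = S[key] / S[(1, 2, 3)] * s123       # affine-invariant area ratio
        # 2 * signed_area(a, b, p4) = 2 * s_hat, linear in (x4, y4):
        rows.append([-(b[1]-a[1]), b[0]-a[0]])
        rhs.append(2*s_hat + (b[0]-a[0])*a[1] - (b[1]-a[1])*a[0])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol                                     # reconstructed (x4, y4)
```

With \(p_4\) reconstructed, all four point correspondences can again be fed to the iterative algorithm of Section 2.3 to recover \(R\) and \(T\).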

3.2. Reconstruction Uniqueness Analysis

Let \(\hat{s}_{124}\) be the calculated area value; then, according to the triangle area formula, we have the following (see the derivation after item (c)). (a) The reconstructed image point \(p_4\) must lie on a line parallel to the line \(p_1 p_2\) at a distance of \(2\hat{s}_{124}/|p_1 p_2|\), where \(|p_1 p_2|\) is the segment length. Two parallel lines on the image plane satisfy this condition. But since an affine transformation does not change the convexity of a polygon, the reconstructed image point can lie only on one half-plane, as in Figure 6, so only one of the two lines meets the requirement.

Similarly, we have the following. (b) The reconstructed image point \(p_4\) must lie on a line parallel to the line \(p_1 p_3\) at a distance of \(2\hat{s}_{134}/|p_1 p_3|\), where \(|p_1 p_3|\) is the segment length. (c) The reconstructed image point \(p_4\) must lie on a line parallel to the line \(p_2 p_3\) at a distance of \(2\hat{s}_{234}/|p_2 p_3|\), where \(|p_2 p_3|\) is the segment length.
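Each of these constraints follows from the base-times-height form of the triangle area; for item (a), for example,
\[ \hat{s}_{124} = \tfrac{1}{2}\,|p_1 p_2|\,h \quad \Longrightarrow \quad h = \frac{2\hat{s}_{124}}{|p_1 p_2|}, \]
where \(h\) is the distance from \(p_4\) to the line \(p_1 p_2\).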

As shown in Figure 6, the intersection of the three straight lines is the image point to be reconstructed, and more than one solution cannot exist: in the ideal error-free case the solution is unique. In practice, however, the three lines may not intersect at a single point because of imaging errors, calculation errors, measurement errors, and other factors. The least-squares solution is the linear optimal estimate and is used to determine the point.

3.3. Simulation Results Analysis

A laptop with 32-bit Windows 7, Intel Core i5 processor, and 4 GB memory has been used to carry out the simulation.

The coordinates of the feature points relative to the target coordinate frame are  mm,  mm,  mm, and  mm. The camera focal length is  mm,  mm, and  mm².

Figure 7 shows the relative attitude curves, assuming that one image point is lost. Figure 8 shows the attitude error curve after feature point reconstruction. Attitude errors decrease clearly after the reconstruction, and the system keeps the attitude accuracy within 0.4 degrees. Figure 9 shows the computation time, which is no more than 0.1 seconds.

4. Conclusions

A vision navigation algorithm based on the minimum number of feature points is proposed to improve the reliability of monocular vision navigation systems for aerospace. In this paper, a vision navigation system based on three feature points is designed to deal with faulty or unidentifiable feature points, and an iterative vision navigation algorithm based on feature point reconstruction is proposed.

The simulation results show that the attitude calculation of the vision navigation system designed in this paper converges quickly. Using the reconstruction algorithm, the navigation accuracy is better than 0.4 degrees. The method improves the robustness of spacecraft vision navigation and reduces the required number of target feature points.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by Beijing University of Aeronautics and Astronautics Open Funding Project of State Key Laboratory of Virtual Reality Technology and Systems under Grant nos. BUAA-VR-14KF-06 and BUAA-VR-14KF-03; National Natural Science Foundation of China (nos. 61203188 and 61403197); Jiangsu Provincial Natural Science Foundation (no. BK20140830); China Postdoctoral Science Foundation (no. 2013M531352).