Abstract

Autonomous on-orbit servicing is expected to play an important role in future space activities. Acquiring the relative pose information and inertial parameters of the target is one of the key technologies for autonomous capture. In this paper, an estimation method of relative pose based on stereo vision is presented for the final phase of the rendezvous and docking of noncooperative satellites. The proposed method utilizes a sparse stereo vision algorithm instead of a dense stereo algorithm. The method consists of three parts: (1) body frame reestablishment, which establishes the body-fixed frame of the target satellite using natural features on its surface and measures the relative attitude based on TRIAD and QUEST; (2) translational parameter estimation, which designs a standard Kalman filter to estimate the translational states and the location of the mass center; (3) rotational parameter estimation, which designs an extended Kalman filter and an unscented Kalman filter to estimate the rotational states and all the moment-of-inertia ratios. Compared with the dense stereo algorithm, the proposed method avoids degeneracy when the target has a high degree of axial symmetry and reduces the number of sensors required. The validity of the proposed method is verified by numerical simulations.

1. Introduction

Autonomous on-orbit servicing is expected to be one of the most challenging and exciting space activities of the future [1, 2]. In the past, many expensive satellites went out of service in orbit because of failures such as undeployed solar panels and gyro malfunctions, even though most of their other subsystems remained functional [3, 4]. On-orbit servicing, that is, the execution of repair, refueling, orbit maintenance, and reorbiting, can extend the life of malfunctioning satellites and save a large amount of expense.

Therefore, increasing attention has been paid to the technologies of autonomous on-orbit servicing, and several projects, including on-orbit servicing demonstration experiments and conceptual on-orbit servicing systems, have been carried out [5, 6].

As malfunctioning satellites are generally noncooperative targets tumbling in space, with no equipment that can be used for relative pose measurement [7], it is necessary to develop relative pose measurement methods that do not require the cooperation of the target satellite. For example, the purpose of the Spacecraft for the Universal Modification of Orbits (SUMO) program is to demonstrate the integration of machine vision, robotics, mechanisms, and autonomous control algorithms to accomplish autonomous rendezvous and grapple of a variety of interfaces traceable to future spacecraft servicing operations [8]. The main concept of SUMO is to capture an unaided target satellite, that is, a target equipped with no special facilities, such as grapple fixtures or reflectors, compatible with SUMO. SUMO was later renamed the Front-End Robotics Enabling Near-term Demonstration (FREND) to develop a series of key components, including a seven degree-of-freedom (DOF) flight robotic arm that was demonstrated in a test bed with a stereo photogrammetric imaging system [9]. Similarly, the DEOS (Deutsche Orbitale Servicing Mission) project focuses on finding and evaluating procedures and techniques for the rendezvous, capture, and deorbiting of a noncooperative target satellite [10]. In the automatic mode, a number of images are taken and downlinked to the ground segment, and the target motion is then estimated by the control software in the ground station.

The relative pose information of the two involved satellites is of vital importance for the end-effector to successfully grasp the tumbling satellite. If the dynamic model of the target is known, a filter can be designed to suppress the measurement noise and estimate the states that cannot be measured directly. Furthermore, the dynamic model can be used to predict the motion of the target satellite so that optimal path planning can guide a robotic manipulator to capture the tumbling target at a rendezvous point with matched velocity [11]. Vision systems are increasingly utilized for relative pose measurement in rendezvous missions because of the low cost, low mass, and low power requirements of CCD/CMOS cameras [12, 13]. Nagamatsu et al. proposed a 12th-order extended Kalman filter to estimate the position and attitude of a torque-free target satellite by using a simplified dynamical model [14]. NASA investigated the feasibility of robotic autonomous servicing of the Hubble Space Telescope (HST); a nonlinear estimator was developed to estimate the HST body rates by using vision-based sensors that output the relative quaternion [15, 16]. NASA/GSFC went on to develop the Goddard Natural Feature Image Recognition (GNFIR) algorithm, and 6-DOF pose estimation was demonstrated in the Robotic Refueling Mission to augment the traditional overlays [17]. Tweddle described a new approach to solving a SLAM problem for unknown and uncooperative objects spinning about an arbitrary axis; an experimental test bed known as “Goggles,” with computer-vision-based navigation using stereo cameras, was used to test the algorithm [18]. Lichter and Dubowsky proposed an important and effective architecture for estimating the dynamic states, geometric shape, and model parameters of objects in orbit [19], in which a team of cooperating 3D vision sensors captures sequences of range images to determine rough target poses before filtering.
However, the relative positions and orientations between sensors mounted on different satellites must be known with high accuracy, which is a potential cost and operational burden for on-orbit servicing. A model-based pose refinement algorithm was proposed to perform the relative pose estimation of a floating object from the visual information of a stereo vision system [20]. Du et al. proposed a method based on two collaborative cameras sharing the recognition task to determine the pose of a large noncooperative target [21], and the details of the image processing were also discussed.

In this paper, the problem of relative pose estimation for the final phase of the rendezvous and docking of noncooperative satellites is investigated. Instead of the dense stereo algorithm presented in the literature [17], the relative pose estimation method here is based on a sparse stereo algorithm. The body-fixed coordinate system of the target satellite is reestablished by utilizing natural features on the target surface, and two relative attitude measurement methods, based on TRIAD and QUEST, are presented. A standard Kalman filter is then designed to estimate the translational states and the location of the mass center. Finally, an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) are designed to estimate the rotational states and the moment-of-inertia ratios, including the products-of-inertia ratios, and their performances are compared.

This paper is structured as follows. Section 2 defines the coordinate systems and transformation matrices used in the relative pose estimation. Section 3 presents two algorithms for relative attitude measurement, including the scheme for updating the target feature coordinate system while the target is tumbling. An approach based on a Kalman filter to estimate the location of the mass center and the translation parameters is proposed in Section 4. In Section 5, an EKF and a UKF are designed to estimate the moment-of-inertia ratios and orientation parameters. Numerical simulations are conducted in Section 6 to verify the proposed algorithms, and the results are discussed there as well. The last section contains the conclusions.

2. Coordinate Systems and Transformation Matrices

As the various physical quantities are usually defined in different coordinate systems, the coordinate systems and coordinate transformation matrices are defined to illustrate the relative geometrical relationship between the target and the chaser. The relative geometrical relationship and the coordinate systems are illustrated in Figure 1.

2.1. Inertial Coordinate System ()

The origin of the inertial coordinate system is centered on the Earth, the axis is aligned with the Earth's rotation axis, and the axis points toward the vernal equinox. The axis completes the right-handed orthogonal coordinate system. The attitudes of the target satellite and the service satellite can be described as rotations from the inertial frame to the respective body frames.

2.2. Local Orbital Coordinate System ( & )

The origin of the local orbital coordinate system () lies at the mass center of the service satellite. The axis points away from the Earth's center, the axis is in the flight direction, and the axis, which completes the right-handed orthogonal coordinate system, is in the direction of the orbital angular momentum.

Similarly, the local orbital coordinate system is attached to the target satellite. The rotation matrix between the two satellites' orbital frames is denoted as . As the two satellites are almost in the same orbit and the service satellite is very close to the target, is nearly an identity matrix. For instance, if the target is the leading satellite in a circular orbit at 700 km altitude and the chaser is 100 m behind it in the same orbit, the rotation angle between the two orbital frames is only about .
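As a back-of-the-envelope check, the misalignment between the two orbital frames follows from the geometry quoted above; this is a hypothetical sketch using a nominal Earth radius, not a computation from the paper:

```python
import math

# Assumed geometry from the text: circular orbit at 700 km altitude,
# chaser trailing the target by 100 m in the same orbit.
R_EARTH = 6378.137e3                # nominal equatorial radius [m]
r = R_EARTH + 700e3                 # orbit radius [m]
d = 100.0                           # along-track separation [m]

theta = d / r                       # small-angle rotation between frames [rad]
theta_arcsec = math.degrees(theta) * 3600.0
```

The resulting angle is on the order of a few arcseconds, which justifies treating the inter-orbital-frame rotation as an identity matrix.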

2.3. Chaser Body-Fixed Coordinate System ()

The origin of the chaser body-fixed coordinate system lies at the mass center of the service satellite, and the three body axes of symmetry are taken as the three coordinate axes , , and . The orientation of relative to or is determined by the attitude and orbit determination system, which is equipped with gyros, star sensors, a GNSS receiver, and so on. The coordinate transformation matrix from the inertial coordinate system to is denoted as , and that from to is denoted as .

2.4. Stereovision-Fixed Coordinate System ()

The origin of the stereovision-fixed coordinate system lies in the stereo vision system. The axis is parallel to the optical axis of the cameras, the axis is perpendicular to the optical axis, and the direction of the axis obeys the right-hand rule. The measurement information (the coordinates of feature points) of the stereo vision system is described in this coordinate system.

The rotation matrix and the translation vector from to can be acquired by calibration on the ground. Therefore, the relationship between the coordinates of a feature point in and can be described as where are the coordinates of the th feature point in the frame and are those in .

2.5. Target Feature Coordinate System ()

As there are no artificial markers that can provide effective cooperative information to the service satellite, natural features on the target surface are utilized to establish the target feature coordinate system . The natural features usually lie on the frameworks of the antenna backboard, the solar panels, and so on. The frame is applied to determine the relative pose between the target and chaser satellites. The methods used to establish the frame are described in Section 3.

2.6. Target Body-Fixed Coordinate System ()

The origin of the target body-fixed coordinate system lies at the mass center of the target satellite, and the three coordinate axes , , and are set parallel to , , and , respectively. Obviously, the axes , , and are generally not aligned with the three body axes of symmetry of the target.

2.7. Target Nominal Body-Fixed Coordinate System ()

The origin of the target nominal body-fixed coordinate system lies at the approximate mass center of the target satellite, and the three coordinate axes , , and are parallel to , , and , respectively. The approximate location of the mass center is estimated by ground staff via a teleoperation loop or estimated automatically by the service spacecraft; for example, the approximate mass center can be assumed to lie at the geometric centroid of the target. Then the vector is known, and the offset, denoted as , will be estimated.

3. Algorithms of Relative Attitude Measurement

Natural feature points are utilized to establish the target feature coordinate system . As image processing, including image filtering, edge detection, feature point recognition, and matching, has been discussed in the literature [19], it is assumed in this paper that the coordinates of the natural feature points in the frame are output by the stereo vision system.

3.1. Attitude Determination Based on TRIAD

Several feature points are tracked by the stereo vision system, as illustrated in Figure 1. Three points , , and are selected, according to the geometry and imaging quality of the natural feature points, to establish the frame by the right-handed rule. The point is the origin of , the axis is in the direction of the vector , and the axis is normal to the plane containing the three points. The direction of the axis obeys the right-hand rule. The rotation matrix from the frame to can be determined by the direction cosine matrix as follows: where denotes the magnitude and denotes the vector cross product. The error of TRIAD can be described as a rotation angle about the Euler axis, where is the angle between and ; the error is minimized when this angle is 90°. and are the pointing errors of and , respectively, and these pointing errors decrease as the line segments lengthen. It follows that, in practice, the three points enclosing the largest area should be selected to obtain better accuracy.
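The TRIAD-style frame construction above can be sketched as follows; the function name and axis ordering (x along the first edge, z normal to the plane, y completing the right-handed set) follow the description in the text, but the implementation itself is a hypothetical sketch:

```python
import numpy as np

def triad(p1, p2, p3):
    """Build feature-frame axes from three non-collinear points.
    Returns R with the axes as rows, so R maps stereo-frame
    coordinates into the feature frame."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    v12 = p2 - p1
    v13 = p3 - p1
    x = v12 / np.linalg.norm(v12)           # x axis along p1 -> p2
    z = np.cross(v12, v13)                  # z axis normal to the plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                      # y completes the right-handed triad
    return np.vstack([x, y, z])
```

By construction the rows are orthonormal and right-handed, so the returned matrix is always a proper rotation regardless of the triangle's shape, although numerical conditioning degrades as the three points approach collinearity (consistent with the largest-area selection rule above).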

3.2. Attitude Determination Based on QUEST

As there are often more than three feature points available in the view of the stereo vision system, it is better to utilize all available points (their number is denoted as ) to improve the accuracy of the relative attitude determination. The coordinates of all feature points except in the frame can then be calculated as follows: where is the vector inner product. As the measurement is repeated, averaging is applied and more accurate coordinates are determined in the frame . The components of are described as where is the number of measurements.

A set of observation unit vectors are defined as

Similarly, a set of reference unit vectors are defined as

According to the QUEST algorithm, an attitude matrix can be found that minimizes the loss function where is the weight of the th vector pair, which is defined as

The optimal processing algorithm for (8) is presented in the literature [22].
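QUEST solves this Wahba-type loss function through a quaternion eigenproblem, detailed in [22]. For illustration only, the same loss function can be minimized directly with a singular value decomposition (Markley's SVD solution to Wahba's problem); this is a substitute technique, not the paper's QUEST implementation, and it assumes unit observation and reference vectors:

```python
import numpy as np

def wahba_svd(obs, ref, weights):
    """Minimize sum_i w_i * ||b_i - R @ r_i||^2 over rotation matrices R.
    obs: observed unit vectors b_i; ref: reference unit vectors r_i."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, obs, ref))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    # Force det(R) = +1 so the result is a proper rotation, not a reflection.
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

Like QUEST, this uses all tracked vector pairs with their weights, which is why multi-point methods outperform the three-point TRIAD construction.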

3.3. Strategy for Updating the Target Feature Coordinate System

As the target satellite is tumbling in space, some feature points will be lost and new feature points will appear in the view of the stereo vision system. A new target feature coordinate system is therefore established dynamically according to the measurement requirements. To keep the pose estimation continuous, it is necessary to determine the offset and orientation between the frames and ( is the original target feature coordinate system, and the rotation angle between the frame and is 0).

According to (4) and (5), accurate coordinates of the feature points in are obtained. Three new feature points , , and are selected from the feature point set to establish , with as the new origin. The offset of the origin of the frame in is , denoted as . Similar to (2), the orientation of the frame relative to can be described as

In general, the relationship between the coordinates of the feature points in and can be described as where is the index of the established target feature coordinate system, are the coordinates of the th feature point in the frame , are the coordinates in , and is the translation vector between and .

As the rotation matrix and translation vector between the frames and can be derived according to (11), the coordinates of any natural feature point in the frame can be obtained even if the feature points used to establish the frame are lost from view.
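The handover described above can be sketched as follows: three surviving points, whose averaged coordinates in the old feature frame are already known, define the new frame, and the resulting pair (R, t) maps any old-frame coordinate into the new frame. Function and variable names are hypothetical:

```python
import numpy as np

def feature_frame(p1, p2, p3):
    """TRIAD-style frame from three points: x along p1 -> p2, z normal to
    the plane of the points, y completing the right-handed set.
    Rows of the returned matrix are the new axes in the old frame."""
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    z = np.cross(p2 - p1, p3 - p1)
    z /= np.linalg.norm(z)
    return np.vstack([x, np.cross(z, x), z])

def handover(p1, p2, p3):
    """Return (R, t) such that p_new = R @ (p_old - t) for any point
    whose coordinates p_old are known in the previous feature frame."""
    R = feature_frame(p1, p2, p3)
    t = p1                      # new origin, expressed in the old frame
    return R, t
```

Because the map is rigid, distances between feature points are preserved, so previously measured points remain usable after the frame switch.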

4. Estimation of the Location of Mass Center and Translation Parameters

In order to estimate the location of the mass center of the target satellite and the relative position and velocity between the chaser and the target, a standard Kalman filter is designed to process the coordinates of the feature points, with the CW equations as the dynamic model.

4.1. CW Equations

When two satellites are nearly in the same circular orbit and close to each other, the CW equations accurately describe the relative translational dynamics, where , , and arise from the effect of orbital mechanics, and are identity and zero matrices, respectively, is the matrix dimension, and is the force disturbance on a unit mass. is the displacement vector from the target mass center to the chaser mass center, expressed in the target orbital frame , and is the relative velocity between the target and the chaser.
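Since the coefficient matrices are not reproduced above, the continuous-time CW system can be written out explicitly and propagated numerically; this is a generic sketch (x radial, y along-track, z cross-track, n the orbital rate; variable names are mine, not the paper's):

```python
import numpy as np

def cw_matrix(n):
    """Continuous-time CW dynamics x_dot = A @ x for the state
    [x, y, z, vx, vy, vz]: x_dd = 3 n^2 x + 2 n vy, y_dd = -2 n vx,
    z_dd = -n^2 z (torque-free, unforced form)."""
    A = np.zeros((6, 6))
    A[0:3, 3:6] = np.eye(3)
    A[3, 0] = 3 * n**2
    A[3, 4] = 2 * n
    A[4, 3] = -2 * n
    A[5, 2] = -n**2
    return A

def rk4_step(A, x, dt):
    """One fixed-step RK4 integration step of the linear system."""
    k1 = A @ x
    k2 = A @ (x + 0.5 * dt * k1)
    k3 = A @ (x + 0.5 * dt * k2)
    k4 = A @ (x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

A useful sanity check is the closed-relative-orbit condition: a radial offset x0 paired with an along-track velocity of -2 n x0 yields a trajectory that repeats every orbital period.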

4.2. Measurement Model

The estimation of the relative position and velocity is also the process of locating the target mass center. When a series of feature points is continuously tracked, the coordinates of the feature points are output in the frame , and the relative attitude between the target satellite and the stereo vision system is determined according to the methods in Section 3. According to the geometrical relationship, the displacement vector is described as where are the coordinates of the th natural feature point in the frame , is the location error of the mass center, illustrated as in Figure 1, is the vector from the origin of the frame to the th feature point, expressed in the frame , and is the rotation matrix from the frame to . As the target is a noncooperative, malfunctioning satellite, it provides no information, including attitude parameters, to the service spacecraft. An available way to measure the target attitude is to combine the relative attitude with the service spacecraft attitude. It is written as where is the rotation matrix from the frame to the frame , which is provided by the attitude and orbit control system (AOCS), is ignored as it is nearly a 3 × 3 identity matrix, is defined in Section 2.4, is determined by the methods described in Sections 3.1 and 3.2, and is rewritten as

can be described as where are the coordinates of the th natural feature point in the frame , measured by the stereo vision system, and and are defined in Section 2.4.

There are several natural feature points on the surface of the target spacecraft. If the stereo vision system tracks these feature points simultaneously during the relative pose estimation, better measurements can be achieved by averaging. Substituting (14) into (13), the observation equations can be rewritten as where is the measurement noise. If the stereo vision system is carefully calibrated, the coordinate error of a feature point can be approximately regarded as white noise. Therefore, is approximately modeled as a zero-mean white Gaussian process with covariance matrix . Obviously,

4.3. Filter Design

In order to suppress the measurement noise and estimate the states that cannot be measured directly, a standard Kalman filter is designed.

Define state variables as follows:

Then, the relative position and velocity estimation dynamic equations can be concluded as follows:

The mathematical model for estimating the mass center location and the translational states is a linear system according to (17) and (20). Putting the linear system (20) into the standard state-space form , the corresponding state transition matrix and the covariance matrix of the discrete-time process noise are as follows:

Equation (17) is rewritten as measurement equations:

Initialization is

Time update is

Measurement update is
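The predict/update cycle above follows the textbook linear Kalman filter; a generic sketch (with the usual names for the transition matrix, process noise, measurement matrix, and measurement noise, since the paper's explicit matrices are not reproduced here) is:

```python
import numpy as np

def kf_predict(x, P, Phi, Q):
    """Time update: propagate state and covariance through the
    discrete-time transition matrix Phi with process noise Q."""
    return Phi @ x, Phi @ P @ Phi.T + Q

def kf_update(x, P, z, H, R):
    """Measurement update: fuse measurement z with model H and noise R."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the filter of this section, the state stacks the relative position, relative velocity, and the constant mass-center offset, and the feature-point observations enter through the measurement matrix.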

5. Estimation of the Moment-of-Inertia Ratios and Orientation Parameters

As a noncooperative target satellite is often a malfunctioning satellite tumbling freely with unknown rates in space, no control torque acts on it, and the moment-of-inertia ratios are estimated instead of the moments of inertia themselves, whose absolute scale is unobservable from torque-free attitude motion. Because the axes of the frame are not aligned with the three body axes of symmetry, the products of inertia cannot be neglected.

5.1. Measurement Model

The AOCS of the service spacecraft can also output its attitude relative to the inertial frame. A measurement of the target attitude is then provided by combining the relative attitude and the service spacecraft attitude. Then, where is the attitude of the service satellite relative to the frame , is described in Section 2, and is , as described in (14). The rotation matrix can be rewritten in quaternion form and can be expressed as where is the target quaternion measurement, is the quaternion defining the rotation from the inertial frame to the target body frame, and is the measurement noise. The symbol designates quaternion multiplication.

5.2. System Dynamic Equations

The time derivative of the quaternion can be expressed as follows: where , and is the angular velocity of the target satellite with respect to inertial space, resolved in the target body-fixed coordinate system .

According to Euler's equation, the rotational dynamics of a rigid target satellite can be expressed as where is the disturbing torque vector and is the inertia matrix,

We define the element as 1 and then obtain the moment-of-inertia ratio matrix as

Then, (29) can be rewritten as where and the elements , , , , and are constants. Because the dynamic equations form a nonlinear system, an EKF and a UKF will be employed to estimate the moment-of-inertia ratios and the relative attitude.
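The two steps above, normalizing the inertia matrix and rewriting the torque-free Euler equation in terms of the ratio matrix, can be sketched together. The numeric inertia values in the test are placeholders, not the paper's:

```python
import numpy as np

def inertia_ratios(I):
    """Divide the inertia matrix by its (1,1) element so that element
    becomes 1; only these ratios are observable under torque-free motion."""
    I = np.asarray(I, float)
    return I / I[0, 0]

def euler_rates(J, w):
    """Torque-free Euler equation: w_dot = -J^{-1} (w x (J w)).
    Any positive scaling of J cancels out, so J may equally well be
    the full inertia matrix or the ratio matrix."""
    return -np.linalg.solve(J, np.cross(w, J @ w))
```

The scale invariance of `euler_rates` is exactly why the absolute moments of inertia cannot be recovered from the attitude motion alone and only the ratios enter the filter state.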

5.3. EKF Filter Design

Define the state variables as follows: where . When the dynamic equations are linearized, is ignored, as it is not an independent variable and its variations are only of second order. The error-state vector is then described as where the components of are defined as follows: where .

The dynamic equations can be given as

For simplicity, the process equations (36) and the measurement equations (27) are rewritten as where the state noise and the measurement noise are assumed to be mutually independent white noise.

Define the state transition matrix where is the Jacobian matrix of partial derivatives of with respect to the state variable ; that is, where
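The closed-form partial derivatives are lengthy; as a sanity check on them, or as a drop-in substitute, the Jacobian can be obtained by finite differences. This generic sketch is not the paper's analytic matrix:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of a vector-valued f at x.
    Returns J with J[i, j] = d f_i / d x_j, approximated to O(eps)."""
    x = np.asarray(x, float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J
```

Evaluating this at the current state estimate gives the matrix needed to build the EKF transition matrix by first-order discretization.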

If (27) is quaternion-multiplied by on the left, the linearized measurement equation is obtained by ignoring the higher-order terms,

For simplicity, (41) is rewritten as

Initialization is

Time update is

Measurement update is

Therefore, the estimated states , , and can be updated immediately after the measurement update according to (35).

5.4. UKF Filter Design

Although the EKF is a widely used estimation algorithm for nonlinear systems, the UKF is more accurate when the model is highly nonlinear [23, 24]. Therefore, a UKF is designed to achieve a better estimation.

The initialization of the UKF is the same as that of the EKF. For , calculate the sigma points: where is the number of states and is the scaling factor for the sigma points, which is calculated as where is the scaling parameter that determines the spread of the sigma points around and is usually set to a small positive value (), and is a secondary scaling parameter that is usually set to 0.

The time update is where and are the weighted coefficients, defined as follows: where is a scaling parameter used to incorporate prior knowledge of the distribution of (for a Gaussian distribution, is optimal).
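The sigma-point generation and weight definitions above can be sketched as follows, using the standard unscented-transform parameters alpha, beta, and kappa (a generic sketch, with the conventional defaults mentioned in the text):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points of the unscented transform,
    together with the mean weights Wm and covariance weights Wc."""
    x = np.asarray(x, float)
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    # Columns of the Cholesky factor of (n + lam) P are the point offsets.
    S = np.linalg.cholesky((n + lam) * P)
    X = np.vstack([x, x + S.T, x - S.T])          # shape (2n+1, n)
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return X, Wm, Wc
```

A defining property of the construction is that the weighted sample mean and covariance of the sigma points reproduce the input mean and covariance exactly, which is what lets the UKF avoid the Jacobians required by the EKF.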

Measurement update is where , , , and are calculated by using the same method described in Section 5.3.

6. Simulation and Evaluation

A computer simulation is developed to verify the validity of the algorithms proposed in this paper. The simulation software integrates the orbital and attitude dynamics, the camera photograph model, and the relative pose estimation algorithms. The simulation is realized in MATLAB, and Figure 2 shows the simulation blocks in the MATLAB environment.

It is assumed that the service satellite and the malfunctioned satellite are in the same orbit with different orbital phases. The initial orbit elements of the target and chaser are given in Table 1.

The orbit altitude is therefore about 300 km, and the service satellite holds a position about 10 m behind the target satellite.

The target satellite undergoes torque-free tumbling motion. It is initially aligned with its orbital frame, and its initial angular velocity is assumed to be . The moment-of-inertia matrix in the target body-fixed coordinate system () is set as

Therefore, the corresponding moment-of-inertia ratios matrix is as follows:

As image processing, including image filtering, edge detection, and feature matching, has been discussed in many papers, it is treated as a solved problem here, and the coordinates of the natural feature points in the image plane are regarded as available for the estimation processing. The specifications of the feature points in the frame are shown in Table 2.

The two cameras are parallel to each other, and the baseline is 0.5 m. The focal length of the cameras is set as 2.5 cm. The two cameras of the stereo vision system are mounted at  m and  m in the frame . The CCD is assumed to have 2048 × 2048 pixel elements with a pixel size of 3.2 μm × 3.2 μm. The coordinates of the feature points in the image plane are corrupted by white Gaussian noise of 0.5 pixels () to simulate the image processing error.
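With these parameters, the expected ranging precision of the stereo pair can be estimated from the standard first-order depth-error formula for a parallel configuration, dZ ≈ Z² / (b·f) · σ_d, where σ_d is the disparity noise expressed in metric sensor units. This back-of-the-envelope sketch uses the stand-off distance quoted earlier in this section:

```python
# Assumed stereo parameters from the simulation setup.
b = 0.5                      # baseline [m]
f = 0.025                    # focal length [m]
pixel = 3.2e-6               # pixel pitch [m]
Z = 10.0                     # approximate stand-off range [m]

d_disp = 0.5 * pixel         # 0.5 px disparity noise in metric units [m]
dZ = Z**2 / (b * f) * d_disp # first-order depth (optical-axis) error [m]
```

The result is on the order of a centimeter, which is consistent with the observation below that the errors along the optical-axis direction dominate the stereo positioning error.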

The location errors of the mass center are set as  mm, and the initial values of the corresponding states are assumed to be  mm. The initial relative position errors are set as  mm, and the initial relative velocity errors are  mm/s. The initial relative attitude errors and angular velocity errors are given as and , respectively.

The first three points listed in Table 2 are selected to establish the frame by TRIAD, with point 1 used as the origin. Assume that point 1 is lost at 10 s and points 2–4 are then selected to establish the new frame , with point 4 as the origin. According to (10) and (15), the target attitude with respect to the chaser (the orientation of relative to ) can be measured continuously. The attitude determination errors are plotted in Figure 3. The errors in the period 10 s–300 s vary similarly to those in the period 0 s–10 s, which shows that the method described in Section 3.3 is feasible.

The attitude determination errors using QUEST are shown in Figure 4. The QUEST algorithm clearly achieves a higher precision than TRIAD because all of the tracked feature points are utilized. The standard deviations of TRIAD are 0.27°, 0.11°, and 0.26°, while those of QUEST are 0.19°, 0.04°, and 0.18°. The TRIAD algorithm is applied in the following simulations.

The standard Kalman filter for estimating the location of the mass center and the translation parameters is tuned. Filter parameters are defined as , , and . A series of simulation figures are shown from Figure 5 to Figure 11.

The estimation errors of the relative position and relative velocity are plotted in Figures 5 and 6, respectively, and the estimation of the mass center location is displayed in Figure 7. The convergence time for identifying the mass center location and estimating the relative position and velocity is less than 20 s. The accuracy of the relative navigation is fairly high, with triaxial relative position errors of less than 2.5 mm, 5.5 mm, and 3.0 mm, respectively, and triaxial relative velocity errors of less than 0.2 mm/s in each axis. The location errors of the mass center are less than 2.0 mm, 6.0 mm, and 1.1 mm in the three axes. It should also be noted that the errors in one axis are larger than those in the other two; the reason is that the positioning errors of the stereo vision system are larger along the optical-axis direction.

The EKF and UKF are utilized to estimate the moment-of-inertia ratios and the orientation parameters. In order to compare the performances of the two filters, their parameters are set identically as , , and . The remaining parameters of the UKF are tuned as , , and .

The differences between the reference attitude of the target and the attitudes estimated by the EKF and UKF, that is, the attitude estimation errors, are plotted together in Figure 8. The simulation results show that both estimation algorithms converge. Comparing the UKF output errors with those of the EKF, the UKF is the more accurate of the two. In the period 50 s–300 s, the attitude determination errors of the UKF are less than 0.08°, 0.12°, and 0.08°, while those of the EKF are less than 0.38°, 0.52°, and 0.34°.

Figure 9 shows the estimation errors of the angular velocity. The estimation performance of the two algorithms for angular velocity is similar to that for attitude. In the same time period, the estimation errors of the UKF are less than 0.01°/s, 0.025°/s, and 0.01°/s, while those of the EKF are less than 0.038°/s, 0.11°/s, and 0.038°/s.

The estimation errors of the moment-of-inertia ratios are shown in Figures 10 and 11. Although is a linear model, the convergence time of the UKF is shorter than that of the EKF because the UKF estimates the angular velocity more accurately. However, the UKF and EKF achieve similar accuracy in estimating the moment-of-inertia ratios in the period 100 s–300 s. It is also noticed that estimating the moment-of-inertia ratios takes longer to converge than estimating the attitude and angular velocity. The reason is that the moment-of-inertia ratios cannot be observed directly, so more time is required to accumulate the attitude errors produced by the errors in the ratios. The estimation errors, taken from the UKF errors in the period 200 s–250 s, are displayed in Table 3.

7. Conclusions

In order to capture a malfunctioning satellite, this paper proposes an algorithm for estimating the relative pose and inertial parameters of the target satellite by using a single stereo vision system. Compared with the dense stereo algorithm presented in the literature [17], the proposed method avoids degeneracy when the target has a high degree of axial symmetry and reduces the number of sensors and the complexity of operation. As the UKF achieves better performance than the EKF, the UKF is recommended for the rotational estimation. The simulation results demonstrate that the developed algorithm can estimate the relative pose, the location of the mass center, and the moment-of-inertia ratios with high precision. However, the proposed method can also degenerate: if there are no distinct features on the target surface, it is better to use the two methods jointly in practice. Experimental verification of the proposed algorithm should be conducted before practical application, which will be a focus of further study.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant no. 61203197), the Aerospace Science and Technology Innovation Fund of the China Aerospace Science and Technology Corporation, and the Innovation Fund of Shanghai Aerospace Science and Technology (SAST 201308). The authors fully appreciate the financial support.