International Journal of Aerospace Engineering
Volume 2019, Article ID 6895628, 15 pages
https://doi.org/10.1155/2019/6895628

Research Article | Open Access

Pose Determination for Malfunctioned Satellites Based on Depth Information

Feng Yu, Yi Zhao, and Yanhua Zhang

Academic Editor: Enrico C. Lorenzini
Received: 09 Nov 2018 | Revised: 22 Feb 2019 | Accepted: 01 Apr 2019 | Published: 11 Jun 2019

Abstract

Autonomous on-orbit servicing is a future space activity that can be used to extend satellite lifetimes. Relative pose estimation for a malfunctioned satellite is one of the key technologies for achieving robotic on-orbit servicing. In this paper, a relative pose determination method using point clouds is presented for the final phase of rendezvous and docking with malfunctioned satellites. The method consists of three parts: (1) planes are extracted from the point cloud by the random sample consensus algorithm; (2) the eigenvector matrix and the diagonal eigenvalue matrix are calculated by decomposing the point cloud distribution matrix of each extracted plane, where the eigenvalues are utilized to recognize rectangular planes and the eigenvector matrix is the attitude rotation matrix from the sensor to the plane, and a solution to the resulting multisolution problem is also presented; and (3) an extended Kalman filter is designed to estimate the translational states, the rotational states, the location of the mass center, and the moment-of-inertia ratios. Because the method utilizes only local features and does not require observing the whole satellite, it is suitable for the final phase of rendezvous and docking. The algorithm is validated by a series of mathematical simulations.

1. Introduction

On-orbit servicing technology, a very challenging research topic, is widely regarded as a means of extending the life of malfunctioned satellites. Although several on-orbit servicing demonstrations have been performed [1], an autonomous on-orbit servicing operation under realistic conditions has not yet been carried out in space. A series of technologies, including autonomous navigation, control, and robotic manipulation, are still immature and should be developed further.

Because relative pose measurement for a malfunctioned satellite is a precondition of on-orbit servicing [2], many researchers have worked on this important problem. The relative navigation of TECSAS/DEOS, studied at the robotics institute of the German Aerospace Center (DLR), is realized by a stereo vision system: in the automatic mode, a number of images are taken, downlinked to the ground segment, and then processed to estimate the relative position and the target motion [3]. The Argon system utilizes two cameras with different fields of view and a flash LIDAR; complex image processing techniques were developed to measure the relative position and attitude of noncooperative targets [4, 5]. Volpe et al. proposed on-orbit relative pose and shape estimation using a monocular camera and a distance sensor [6, 7]. In China, Caixiu proposed a single-camera method that measures the relative pose from a solar panel triangle structure and a circular docking ring [8]. Miao et al. utilized solar panel components as the identification objects for noncooperative spacecraft pose determination [9]. Zhang et al. presented a relative pose measurement algorithm using a circle and a noncoplanar feature point [10]. Huang et al. proposed a monocular visual servoing control method using only the edge lines of satellite brackets [11]. According to the above research, visual navigation is the main relative pose determination approach for malfunctioned satellites [12]. However, vision systems sometimes suffer poor imaging quality in space due to the special optical environment, which complicates subsequent processing. Worse, the surface of a malfunctioned satellite is often covered with multilayer insulation (MLI) material for thermal protection; optical features such as texture therefore change with the lighting and viewing directions, and the image features may not be reliably identified [13].

Therefore, other image processing methods or sensors have been utilized by some researchers. Terui et al. used binocular stereo vision to compute dense 3D points on the target satellite [13], and the relative pose was obtained by using the Iterative Closest Point (ICP) algorithm. LIDAR has been employed for full 6DOF pose estimation [14]. Gómez Martínez et al. proposed a pose estimation method for a rocket body in orbit using a time-of-flight (TOF) camera [15]. Hillenbrand and Lampariello realized motion estimation and model identification of free-floating targets by utilizing range data measurements [16]. These studies intrinsically exploit the target's geometrical shape by processing a point cloud of 3D coordinates mapping the sensed target surface, and the ICP method is commonly employed to calculate the target pose parameters.

Alternatively, Lichter and Dubowsky proposed a method to estimate the state, shape, and inertia parameters of noncooperative targets based on a series of range images [17]. This method includes three parts: firstly, the coarse target pose is estimated by condensing the range data of the target; secondly, the full dynamic states and inertial parameters are estimated by using Kalman filters; and finally, the shape of the target is estimated by using the estimated pose information and the raw range data. The method is effective only when the whole target is observed, but it is difficult to obtain a complete range image of the target at close distance. Therefore, the authors advised that a team of range imagers should be distributed uniformly to capture the whole target surface, and the relative poses between sensors mounted on different satellites must be known with high accuracy. The method of [15] identified the rocket's geometrical properties, including the cylindrical body, nozzle, side tanks, and fairing, from the point cloud during target pose initialization.

Inspired by [15, 17], this paper proposes a new method to estimate the pose of a malfunctioned satellite based on rectangular features extracted from a point cloud. A complete range image of the target is not necessary for this method; only local rectangular features are extracted to measure the relative position and attitude. Simulation results show the effectiveness of the proposed method.

2. Rectangular Plane Extraction, Recognition, and Pose Determination

2.1. Plane Extraction from Point Cloud

Because satellites are man-made objects, they carry regular components such as the main body, solar panels, and docking ring. Although these components are not designed specifically for measuring the relative pose of malfunctioned satellites, they have obvious geometric features that can be recognized and utilized for relative navigation. The rectangle is one of the typical geometric shapes on malfunctioned satellites; for example, the main bodies and solar panels of most satellites are composed of a variety of rectangles. Therefore, this paper selects rectangular planes as the identification objects.

The random sample consensus (RANSAC) algorithm is utilized to extract planes from the point cloud dataset. The algorithm can be described as follows: firstly, select a mathematical model; secondly, draw a minimal sample set from the original point cloud and estimate the model parameters from it; thirdly, check the consistency of the remaining points with the estimated model and add the qualifying points to the consensus set; and finally, reestimate the model parameters using all the points that conform to the model.

According to the aforementioned RANSAC algorithm, the specific steps for extracting the plane from the point cloud are proposed as follows (a PCL-based sketch follows the list):

(1) Select three points $p_1$, $p_2$, and $p_3$ randomly in the point cloud dataset.

(2) Calculate the parameters $(a, b, c, d)$ of the plane $ax + by + cz + d = 0$ according to $p_1$, $p_2$, and $p_3$.

(3) Suppose the plane thickness is $d_{th}$, and calculate the distance from any point $p_i = (x_i, y_i, z_i)$ in the point cloud to the plane:

$$d_i = \frac{\left|ax_i + by_i + cz_i + d\right|}{\sqrt{a^2 + b^2 + c^2}}. \tag{1}$$

(4) Find out how many points satisfy $d_i \le d_{th}$.

(5) Repeat step 1~step 4 $K$ times, and select the plane that includes the most points, where $K$ can be calculated by the following formula:

$$K = \frac{\log(1 - P)}{\log\left(1 - (1 - \varepsilon)^3\right)}, \tag{2}$$

where $\varepsilon$ is the proportion of outer points in the point cloud and $P$ is the probability that the optimal plane can be found after $K$ calculations.
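Section 4 states that the PCL (Point Cloud Library) is employed for plane extraction, and the steps above map directly onto PCL's RANSAC segmentation. A minimal sketch might look as follows (the function and variable names are illustrative; the threshold and iteration count mirror the values used in Section 4):

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Extract the dominant plane from a point cloud with RANSAC (steps 1-5 above).
// On success, 'inliers' holds the indices of the points within the plane
// thickness and 'plane' holds (a, b, c, d) of a*x + b*y + c*z + d = 0.
bool extractPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
                  double thickness,    // plane thickness threshold of step 3
                  int iterations,      // repeat count K of step 5
                  pcl::PointIndices& inliers,
                  pcl::ModelCoefficients& plane)
{
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);  // step 2: plane from 3 sampled points
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(thickness);    // steps 3-4: d_i <= d_th test
    seg.setMaxIterations(iterations);       // step 5: repeat, keep the best plane
    seg.setInputCloud(cloud);
    seg.segment(inliers, plane);            // the plane with the most inliers wins
    return !inliers.indices.empty();
}
```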

2.2. Rectangular Plane Recognition

Some components of a malfunctioned satellite are composed of rectangular planes, and the dimensions of these rectangular planes are known for a specific malfunctioned satellite. After a plane is extracted from the point cloud, the point cloud distribution matrix is utilized to identify whether it is a rectangular plane and to determine its dimensions.

Before calculating the point cloud distribution matrix, the location of the geometric centroid of the extracted plane can be determined as

$$\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i, \tag{3}$$

where $n$ is the number of points in the point cloud, $p_i$ is the position of the $i$-th point in the sensor coordinate system, and $\bar{p}$ is the centroid location of the extracted plane.

In literature [17], the attitude is determined from the second moment information of the point cloud, called the geometric moment matrix. In this paper, the point cloud distribution matrix is proposed instead, and it is decomposed to calculate the attitude and identify the shape. In essence, both the geometric moment matrix and the point cloud distribution matrix reflect the shape information of the point cloud, but the latter requires less processing power. The point cloud distribution matrix is computed as

$$M = \frac{1}{n}\sum_{i=1}^{n}\left(p_i - \bar{p}\right)\left(p_i - \bar{p}\right)^T. \tag{4}$$

The eigenvector matrix and the diagonal eigenvalue matrix can be obtained by utilizing a matrix decomposition operation as

$$M = C \Lambda C^T, \tag{5}$$

where $\Lambda$ is the diagonal eigenvalue matrix of $M$ and $C$ is the eigenvector matrix.
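Equations (3)-(5) translate into a few lines of C++. The sketch below assumes the Eigen library (a choice not stated by the authors); because $M$ is symmetric, a self-adjoint eigensolver returns real eigenvalues and an orthonormal eigenvector matrix:

```cpp
#include <vector>
#include <Eigen/Dense>

// Centroid (eq. (3)), point cloud distribution matrix M (eq. (4)), and the
// decomposition M = C * Lambda * C^T (eq. (5)) of an extracted plane.
void decomposeDistribution(const std::vector<Eigen::Vector3d>& pts,
                           Eigen::Vector3d& centroid,
                           Eigen::Vector3d& eigenvalues,    // diagonal of Lambda
                           Eigen::Matrix3d& eigenvectors)   // columns of C
{
    const double n = static_cast<double>(pts.size());
    centroid.setZero();
    for (const auto& p : pts) centroid += p;
    centroid /= n;

    Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        const Eigen::Vector3d d = p - centroid;
        M += d * d.transpose();
    }
    M /= n;

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(M);
    eigenvalues  = es.eigenvalues();    // sorted in increasing order
    eigenvectors = es.eigenvectors();
}
```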

The eigenvalue matrix reflects the point cloud distribution in the body coordinate frame of the extracted plane, which is described in Figure 1. The origin of the coordinate system is the center of the extracted plane, and the $x_b$-, $y_b$-, and $z_b$-axes are parallel to the principal geometric axes. If the extracted plane is a rectangle, the $x_b$-axis and $y_b$-axis are parallel to the short and long sides of the rectangular plane, respectively, and the corresponding side lengths can be denoted as $a$ and $b$, respectively. The eigenvalue matrix $\Lambda$ reflects the dimension parameters of the extracted plane. Therefore, $\Lambda$ is invariant for a specific plane; that is to say, it does not change with the relative pose.

Similarly, in the body coordinate system, the eigenvalue matrix can be defined as

$$\Lambda = \frac{1}{n}\sum_{i=1}^{n}\left(\tilde{r}_i - r_0\right)\left(\tilde{r}_i - r_0\right)^T, \tag{6}$$

where $n$ is the number of points in the point cloud, $\tilde{r}_i$ is the measured position of the $i$-th point in the body coordinate system, and $r_0$ is the origin of the body coordinate system, which also can be defined as

$$r_0 = \frac{1}{n}\sum_{i=1}^{n} \tilde{r}_i. \tag{7}$$

It is supposed that the feature points obey a uniform distribution in the feature plane coordinate system and that the measurement noise of the point cloud is white Gaussian noise. Therefore, $\tilde{r}_i$ can be rewritten as

$$\tilde{r}_i = r_i + n_i, \tag{8}$$

where $r_i$ is the true position of the $i$-th point in the body coordinate system and $n_i$ is the noise.

Therefore, equation (6) can also be rewritten componentwise (taking $r_0 \approx 0$, since the centroid of the body frame lies at the origin) as

$$\lambda_1 = \frac{1}{n}\sum_{i=1}^{n}\tilde{x}_i^2,\qquad \lambda_2 = \frac{1}{n}\sum_{i=1}^{n}\tilde{y}_i^2,\qquad \lambda_3 = \frac{1}{n}\sum_{i=1}^{n}\tilde{z}_i^2, \tag{9}$$

where $(\tilde{x}_i, \tilde{y}_i, \tilde{z}_i)$ are the three components of $\tilde{r}_i$.

Suppose that the number of points is large enough, and then

$$\lambda_1 = E\left(\tilde{x}^2\right), \tag{10}$$

where $E(\cdot)$ is the mathematical expectation of a random variable.

Substitute equation (8) into (10), and then equation (10) can be expressed as

$$\lambda_1 = E\left(x^2\right) + 2E\left(x n_x\right) + E\left(n_x^2\right), \tag{11}$$

where $(x, y, z)$ and $(n_x, n_y, n_z)$ denote the three components of the true position and of the noise, respectively.

If it is supposed that the extracted plane is a rectangular plane and the number of points is large enough, the following equations can hold:

$$E\left(x^2\right) = \frac{a^2}{12}, \tag{12}$$

$$E\left(y^2\right) = \frac{b^2}{12}, \tag{13}$$

$$E\left(x n_x\right) = E\left(y n_y\right) = E\left(z n_z\right) = 0, \tag{14}$$

$$E\left(n_x^2\right) = E\left(n_y^2\right) = E\left(n_z^2\right) = \sigma^2, \tag{15}$$

$$z = 0, \tag{16}$$

where $\sigma$ is the mean square deviation of the noise and $a$ is the side length of the edge which is parallel to the $x_b$-axis.

Equations (12), (13), (14), (15), and (16) are substituted into equation (11), and then

$$\lambda_1 = \frac{a^2}{12} + \sigma^2. \tag{17}$$

Similarly, $\lambda_2 = \dfrac{b^2}{12} + \sigma^2$ and $\lambda_3 = \sigma^2$.

The cross term in equation (11) vanishes for the following reason. Consider the product of the true position component $x$ and the noise component $n_x$; its expectation factors as

$$E\left(x n_x\right) = E(x)E\left(n_x\right). \tag{18}$$

It is regarded that $x$ and $n_x$ are independent of each other and that the noise is zero-mean. Therefore, the following equation holds:

$$E\left(x n_x\right) = E(x)E\left(n_x\right) = 0. \tag{19}$$

Similarly, $E(y n_y) = 0$ and $E(z n_z) = 0$, which justifies equation (14).

It can be seen from the above derivation that the eigenvalue matrix of the rectangular point cloud plane is related only to the size of the rectangular plane and the noise of the sensor. If the mean square deviation $\sigma$ of the sensor noise is known, the parameters can be estimated by inverting equation (17): $a = \sqrt{12(\lambda_1 - \sigma^2)}$ and $b = \sqrt{12(\lambda_2 - \sigma^2)}$. Therefore, the extracted rectangular surface of the satellite can be identified by calculating the eigenvalue matrix $\Lambda$ and comparing the recovered side lengths with the known dimensions.
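As a hypothetical illustration of this recognition check (the function name and tolerance are assumptions, not taken from the paper), the side lengths can be recovered from the two nonzero eigenvalues and matched against the known rectangle:

```cpp
#include <algorithm>
#include <cmath>

// Recover side lengths from the eigenvalues of the distribution matrix,
// lambda = L^2/12 + sigma^2  =>  L = sqrt(12 * (lambda - sigma^2)),
// and check them against the known rectangle dimensions a x b.
bool isKnownRectangle(double lambdaShort, double lambdaLong, double sigma,
                      double a, double b, double tol)
{
    const double s2 = sigma * sigma;
    const double aEst = std::sqrt(12.0 * std::max(0.0, lambdaShort - s2));
    const double bEst = std::sqrt(12.0 * std::max(0.0, lambdaLong  - s2));
    return std::fabs(aEst - a) < tol && std::fabs(bEst - b) < tol;
}
```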

2.3. Coarse Pose Determination

The matrix decomposition operation presented in equation (5) provides the attitude rotation matrix: the eigenvector matrix $C$ is the attitude rotation matrix from the sensor coordinate system to the body coordinate system of the extracted rectangular plane. It is assumed in this paper that the sensor coordinate system coincides with the chaser body coordinate system.

Thus, the quaternion of the body coordinate system with respect to the chaser body coordinate system, which is denoted as $q_m = [q_0, q_1, q_2, q_3]^T$, can be calculated as

$$q_0 = \frac{1}{2}\sqrt{1 + C_{11} + C_{22} + C_{33}},\quad q_1 = \frac{C_{23} - C_{32}}{4q_0},\quad q_2 = \frac{C_{31} - C_{13}}{4q_0},\quad q_3 = \frac{C_{12} - C_{21}}{4q_0}, \tag{20}$$

where $C_{11}, C_{12}, \ldots, C_{33}$ are the elements of the attitude rotation matrix $C$.
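In an implementation, this conversion is usually delegated to a library; a sketch using Eigen (an assumed choice) follows. It also flips one eigenvector column when the decomposition returns a left-handed triad, since equation (20) is valid only for a proper rotation matrix with determinant +1:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Convert the eigenvector matrix C from eq. (5) into the attitude quaternion
// of eq. (20). An eigenvector matrix is orthogonal but may have det = -1;
// negating one column restores a right-handed rotation matrix.
Eigen::Quaterniond attitudeQuaternion(Eigen::Matrix3d C)
{
    if (C.determinant() < 0.0) C.col(2) = -C.col(2);
    return Eigen::Quaterniond(C).normalized();
}
```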

If the eigenvalues are arranged as the middle, the largest, and the smallest along the diagonal of $\Lambda$, there are 4 cases for establishing the body coordinate system, depending on the axial directions, because of the symmetrical structure of the extracted rectangular plane. The 4 cases are shown in Figure 2.

It is supposed that case (1) is the coordinate system of the present time (see Figure 2). Any one of the 4 cases of the body coordinate system may appear during the next matrix decomposition operation. The relationship between the target attitude and the measured attitude quaternion can be described as

$$q_t = q_c \otimes q_m \otimes q_{bt}, \tag{21}$$

where $q_c$ is the attitude quaternion of the chaser satellite and $q_{bt}$ is the quaternion from the body coordinate system of the extracted rectangular plane to the target body coordinate system, which is known.

Because the target attitude quaternion can be predicted in the filter designed in Section 3, the predicted $\hat{q}_m$ can be derived as

$$\hat{q}_m = q_c^{-1} \otimes \hat{q}_t \otimes q_{bt}^{-1}. \tag{22}$$

The predicted $\hat{q}_m$ can be utilized to solve the multisolution problem presented in Figure 2. Define the difference quaternion between the predicted $\hat{q}_m$ and the measured $q_m$ as

$$\delta q = \hat{q}_m^{-1} \otimes q_m. \tag{23}$$

If the scalar part of $\delta q$ is close to 1, case (1) occurs. If the scalar part of $\delta q$ is close to 0, any one of cases (2)~(4) occurs, and the body coordinate system of the extracted plane needs to be changed to case (1) as

$$q'_m = q_m \otimes q_e, \tag{24}$$

where $q'_m$ is the adjusted quaternion and $q_e$ is the quaternion which is utilized to adjust the attitude quaternion $q_m$, satisfying

$$q_e = \begin{bmatrix} \cos(\theta/2) \\ e \sin(\theta/2) \end{bmatrix}, \tag{25}$$

where $e$ is the direction of the Euler axis and $\theta$ is equal to 180 degrees.

To change cases (2), (3), and (4) to case (1), the coordinate system should be rotated 180 degrees around the $x_b$-, $y_b$-, and $z_b$-axes, respectively. The corresponding quaternions are $[0, 1, 0, 0]^T$, $[0, 0, 1, 0]^T$, and $[0, 0, 0, 1]^T$, respectively, as implemented in the sketch below.
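A sketch of the adjustment in equations (23)-(25), again assuming Eigen quaternions (scalar-first constructor); the 0.5 threshold separating "close to 1" from "close to 0" is an assumption:

```cpp
#include <cmath>
#include <Eigen/Geometry>

// Resolve the 4-fold ambiguity: compare the measured plane attitude with the
// filter prediction (eq. (23)) and, if a 180-degree flip is detected, rotate
// the measured frame back to case (1) (eqs. (24)-(25)).
Eigen::Quaterniond resolveAmbiguity(const Eigen::Quaterniond& measured,
                                    const Eigen::Quaterniond& predicted)
{
    const Eigen::Quaterniond dq = predicted.conjugate() * measured;  // eq. (23)
    if (std::fabs(dq.w()) > 0.5) return measured;  // scalar near 1: case (1)

    // 180-degree rotations about the x_b-, y_b-, and z_b-axes (eq. (25)).
    const Eigen::Quaterniond flips[3] = {{0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};

    // Keep the flip whose adjusted quaternion agrees best with the prediction.
    Eigen::Quaterniond best = measured;
    double bestScore = std::fabs(dq.w());
    for (const auto& f : flips) {
        const Eigen::Quaterniond adjusted = measured * f;            // eq. (24)
        const double score = std::fabs((predicted.conjugate() * adjusted).w());
        if (score > bestScore) { best = adjusted; bestScore = score; }
    }
    return best;
}
```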

If there are 2 or more extracted rectangular planes in a single measurement, a similar problem exists. Suppose that there are 2 extracted rectangular planes whose attitude quaternions are denoted as $q_{m1}$ and $q_{m2}$, respectively. The corresponding target attitudes $q_{t1}$ and $q_{t2}$ can be derived according to equation (21), and the difference quaternion between $q_{t1}$ and $q_{t2}$ can be obtained by using equation (23). If the scalar part of the difference quaternion is close to 0, the operation presented in equations (24) and (25) should be performed.

3. Estimation of the Rotational and Translational Parameters

3.1. State Equations

When two satellites are nearly in the same circular orbit at a close distance, the Hill equations can accurately describe the translational motion dynamics as

$$\ddot{x} - 2\omega_0\dot{y} = w_x,\qquad \ddot{y} + 2\omega_0\dot{x} - 3\omega_0^2 y = w_y,\qquad \ddot{z} + \omega_0^2 z = w_z, \tag{26}$$

where $\omega_0$ is the average orbital angular rate of the malfunctioned satellite, $\rho = [x, y, z]^T$ and $\dot{\rho}$ are the relative position and velocity between the target and the chaser, described in the target orbital frame, and $w = [w_x, w_y, w_z]^T$ is the disturbance force on a unit mass, which is modeled as Gaussian white noise.
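For reference, one explicit-Euler propagation step of equation (26) can be sketched as follows (the step size, names, and integrator choice are illustrative; a practical filter could equally use the closed-form Clohessy-Wiltshire transition matrix):

```cpp
#include <Eigen/Dense>

// One explicit-Euler step of the Hill equations (26). r and v are the
// relative position and velocity in the target orbital frame, n0 is the
// orbital angular rate, and w is the disturbance acceleration.
void propagateHill(Eigen::Vector3d& r, Eigen::Vector3d& v,
                   double n0, const Eigen::Vector3d& w, double dt)
{
    const Eigen::Vector3d a(
         2.0 * n0 * v.y() + w.x(),
        -2.0 * n0 * v.x() + 3.0 * n0 * n0 * r.y() + w.y(),
        -n0 * n0 * r.z() + w.z());
    r += dt * v;
    v += dt * a;
}
```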

As the target satellite is malfunctioned, its angular velocity cannot be obtained directly. The attitude kinematics and dynamics can be written as

$$\dot{q}_t = \frac{1}{2}\, q_t \otimes \begin{bmatrix} 0 \\ \omega_t \end{bmatrix}, \tag{27}$$

$$\dot{\omega}_t = K \begin{bmatrix} \omega_{ty}\,\omega_{tz} \\ \omega_{tz}\,\omega_{tx} \\ \omega_{tx}\,\omega_{ty} \end{bmatrix} + d, \tag{28}$$

where $q_t$ is the target attitude quaternion, $\omega_t = [\omega_{tx}, \omega_{ty}, \omega_{tz}]^T$ is the target angular velocity, and $K$ is the moment-of-inertia ratio matrix, which can be described as

$$K = \mathrm{diag}\!\left(\frac{I_y - I_z}{I_x},\ \frac{I_z - I_x}{I_y},\ \frac{I_x - I_y}{I_z}\right), \tag{29}$$

and $d$ is the ratio of the disturbing torque to the moment of inertia, which is modeled as Gaussian white noise.
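A matching sketch for the rotational states integrates equations (27) and (28) with a simple Euler step (the integrator and step size are assumptions; the paper does not specify its propagation scheme):

```cpp
#include <Eigen/Geometry>

// One Euler step of the attitude kinematics (27) and ratio-form Euler
// dynamics (28). k holds the diagonal of the inertia ratio matrix K:
// kx = (Iy - Iz)/Ix, ky = (Iz - Ix)/Iy, kz = (Ix - Iy)/Iz.
void propagateRotation(Eigen::Quaterniond& q, Eigen::Vector3d& w,
                       const Eigen::Vector3d& k, double dt)
{
    // q_dot = 0.5 * q (x) [0, w]  (quaternion product)
    const Eigen::Quaterniond wq(0.0, w.x(), w.y(), w.z());
    const Eigen::Quaterniond qdot = q * wq;
    q.coeffs() += 0.5 * dt * qdot.coeffs();
    q.normalize();

    const Eigen::Vector3d wdot(k.x() * w.y() * w.z(),
                               k.y() * w.z() * w.x(),
                               k.z() * w.x() * w.y());
    w += dt * wdot;
}
```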

3.2. Measurement Equations

The geometrical relationship between the two satellites is described in Figure 3. $O_c x_c y_c z_c$ is the chaser body coordinate system. $P_i^c$ is the location of the geometric centroid of an extracted rectangular plane, expressed in the chaser body coordinate system, where $i$ denotes the sequence number of the extracted rectangular plane. $O_n x_n y_n z_n$ is the target nominal body coordinate system, and $P_i^t$ is the location of the geometric centroid of the extracted rectangular plane in the frame $O_n x_n y_n z_n$. Because the target is a malfunctioned satellite, its structure model is known; therefore, the relative pose between the extracted rectangular plane and the frame $O_n x_n y_n z_n$ is known, so $P_i^t$ is a known value. Because the mass center changes due to fuel consumption, the location error of the mass center is denoted as $\rho_e$. $O_t x_t y_t z_t$ is the target body coordinate system, whose origin is at the true mass center and whose axes $x_t$, $y_t$, and $z_t$ are parallel to the axes $x_n$, $y_n$, and $z_n$, respectively. $\rho$ is the displacement vector from the target mass center to the chaser mass center, expressed in the target orbital frame. The origin of the orbital frame lies at the mass center of the service satellite; the $x_o$ axis is in the flight direction, the $y_o$ axis points toward the Earth center, and the $z_o$ axis, which completes the right-handed orthogonal coordinate system, is opposite to the direction of the orbital angular momentum.

According to the geometrical relationship presented in Figure 3, the displacement $\rho$ can be described as

$$\rho = A_t\left(P_i^t - \rho_e\right) - A_{to} A_c\left(P_i^c + v_i\right), \tag{30}$$

where $A_t$ is the attitude rotation matrix from the frame $O_t x_t y_t z_t$ to the target orbital frame, $A_{to}$ is the rotation matrix from the chaser orbital frame to the target orbital frame, $A_c$ is the rotation matrix from the frame $O_c x_c y_c z_c$ to the chaser orbital frame, which can be determined by the chaser attitude determination system, and $v_i$ is the observation noise of the geometric centroid of the $i$-th extracted rectangular plane.

Because the two satellites are nearly in the same circular orbit at a close distance, the rotation matrix $A_{to}$ can be regarded as the identity matrix [18].

Therefore, equation (30) can be rewritten as

$$\rho = A_o A_{ti}^{T}\left(P_i^t - \rho_e\right) - A_c\left(P_i^c + v_i\right), \tag{31}$$

where $A_o$ is the rotation matrix from the inertial frame to the chaser orbital frame, which can be determined from the orbit information, and $A_{ti}$ is the target attitude matrix from the inertial frame to the target body frame.

It is obvious that $\rho_e$ is a constant vector and can be modeled as

$$\dot{\rho}_e = 0. \tag{32}$$
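Under the reconstruction of equation (31) above, the predicted centroid displacement for the $i$-th feature could be evaluated as follows (a sketch; all names are illustrative):

```cpp
#include <Eigen/Dense>

// Predicted displacement of eq. (31): A_o rotates inertial -> chaser orbital,
// A_ti is the target attitude matrix (inertial -> target body), A_c rotates
// chaser body -> chaser orbital, Pt is the feature centroid in the nominal
// target frame, rho_e the mass-center location error, and Pc the measured
// centroid in the chaser body frame.
Eigen::Vector3d predictedDisplacement(const Eigen::Matrix3d& A_o,
                                      const Eigen::Matrix3d& A_ti,
                                      const Eigen::Matrix3d& A_c,
                                      const Eigen::Vector3d& Pt,
                                      const Eigen::Vector3d& rho_e,
                                      const Eigen::Vector3d& Pc)
{
    return A_o * A_ti.transpose() * (Pt - rho_e) - A_c * Pc;
}
```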

As the target is a malfunctioned satellite, which is noncooperative, no information, including the attitude parameters, can be provided to the service spacecraft. An available way to measure the target attitude is to combine the relative attitude and the chaser attitude. According to equation (21), the attitude measurement is

$$z_q = q_c \otimes q_m \otimes q_{bt} \otimes q_v, \tag{33}$$

where $q_v$ is the observation noise in quaternion form.

A set of feature rectangles is tracked by the observer. As the target tumbles in orbit, new rectangular features are observed and must be added to the state vector, while other features disappear and can only be propagated in the filter. This strategy has been proven effective in [6, 7, 19].

3.3. EKF Filter Design

Define the state variables as

$$X = \left[\rho^T,\ \dot{\rho}^T,\ q_t^T,\ \omega_t^T,\ k^T,\ \rho_{e,1}^T,\ \ldots,\ \rho_{e,N}^T\right]^T,$$

where $k$ is the vector of moment-of-inertia ratios on the diagonal of $K$ and $\rho_{e,1}, \ldots, \rho_{e,N}$ denote the location errors of all the tracked feature rectangles.

The dynamic equations are made up of equations (26), (27), (28), and (32). Due to the nonlinearity of the attitude kinematics and dynamics, a linearization operation is performed. The new error-state vector is described as

$$\Delta X = \left[\Delta\rho^T,\ \Delta\dot{\rho}^T,\ \Delta q_v^T,\ \Delta\omega_t^T,\ \Delta k^T,\ \Delta\rho_{e,1}^T,\ \ldots,\ \Delta\rho_{e,N}^T\right]^T,$$

where $\Delta q_v$ is the vector part of the error quaternion. Because the scalar part $\Delta q_0$ is not an independent variable and it has only second-order variations, $\Delta q_0$ is ignored in the linearization procedure.

Define the state transition matrix

$$\Phi_k = I + F\,T_s,$$

where $T_s$ is the discrete time step and $F$ is the Jacobian matrix of partial derivatives of the dynamic equations with respect to the state variables $X$; in $F$, $\omega_0$ and $K$ are defined in Section 3.1, and $N$ is the number of the tracked features.

Similarly, the measurement equations (31) and (33) are linearized about the predicted state.

Therefore, the observation matrix $H$ can be defined as the Jacobian of the linearized measurement equations with respect to the error-state vector $\Delta X$.

The detailed procedure of the EKF can be summarized as follows:

(1) Initialization:

$$\hat{X}_0 = E\left[X_0\right],\qquad P_0 = E\left[\left(X_0 - \hat{X}_0\right)\left(X_0 - \hat{X}_0\right)^T\right].$$

(2) Time update:

$$\hat{X}_k^- = \Phi_k \hat{X}_{k-1},\qquad P_k^- = \Phi_k P_{k-1} \Phi_k^T + Q_k.$$

(3) Measurement update:

$$K_k = P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1},\qquad \hat{X}_k = \hat{X}_k^- + K_k\left(z_k - H_k \hat{X}_k^-\right),\qquad P_k = \left(I - K_k H_k\right) P_k^-.$$
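In compact matrix form, these three stages are the standard EKF recursions; the sketch below uses generic Eigen matrices (dimensions and names are illustrative, and the real filter fills $\Phi$ and $H$ with the Jacobians described above):

```cpp
#include <Eigen/Dense>

using Mat = Eigen::MatrixXd;
using Vec = Eigen::VectorXd;

struct Ekf {
    Vec x;  // state estimate
    Mat P;  // estimation covariance
    Mat Q;  // process noise covariance
    Mat R;  // measurement noise covariance

    // Time update through the state transition matrix Phi = I + F * Ts.
    void predict(const Mat& Phi) {
        x = Phi * x;
        P = Phi * P * Phi.transpose() + Q;
    }

    // Measurement update with observation matrix H and measurement z.
    void update(const Mat& H, const Vec& z) {
        const Mat S = H * P * H.transpose() + R;        // innovation covariance
        const Mat K = P * H.transpose() * S.inverse();  // Kalman gain
        x += K * (z - H * x);
        P = (Mat::Identity(P.rows(), P.cols()) - K * H) * P;
    }
};
```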

The estimated states $\rho$, $\dot{\rho}$, $q_t$, $\omega_t$, $k$, and $\rho_{e,i}$ are renewed right after the measurement update.

4. Simulation and Evaluation

In order to evaluate the effectiveness of extracting a point cloud plane, a rectangular point cloud plane is generated by the simulation software. The point cloud is randomly drawn from a uniform distribution on a rectangular plane that ranges from -5 cm to 5 cm along the x-axis and from -9 cm to 9 cm along the y-axis and is fixed at 6.5 cm on the z-axis. Clouds of 2500 points including 100, 500, and 1250 outliers, respectively, are also simulated (see the generator sketch below). The results are shown in Figures 4-7.
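A hypothetical reconstruction of this test-data generator is sketched below (the authors' VC++ tool is not available, and the depth spread of the outliers is an assumption):

```cpp
#include <random>
#include <vector>
#include <Eigen/Dense>

// Simulate the test rectangle: inlier points uniform on [-5, 5] x [-9, 9] cm
// at z = 6.5 cm, plus off-plane outliers scattered over an assumed depth range.
std::vector<Eigen::Vector3d> simulateRectangle(int inliers, int outliers,
                                               unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> ux(-5.0, 5.0);
    std::uniform_real_distribution<double> uy(-9.0, 9.0);
    std::uniform_real_distribution<double> uz(0.0, 13.0);  // assumed outlier depth

    std::vector<Eigen::Vector3d> cloud;
    cloud.reserve(inliers + outliers);
    for (int i = 0; i < inliers; ++i)
        cloud.emplace_back(ux(gen), uy(gen), 6.5);
    for (int i = 0; i < outliers; ++i)
        cloud.emplace_back(ux(gen), uy(gen), uz(gen));     // off-plane points
    return cloud;
}
```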

In this simulation, the number of RANSAC iterations is set to 300, and the distance threshold is 0.005. The extraction results are shown in Table 1.


Table 1: Plane extraction results for different numbers of outliers.

Outliers    Number of extracted points    Time (ms)
0           2500                          20
100         2400                          20
500         2000                          30
1250        1252                          70

It can be seen from Table 1 that the plane can be extracted exactly from a point cloud containing outliers. However, the plane extraction slows down as the number of outliers increases.

The number of points in the cloud declines as the distance between the target and the sensor increases. In order to validate the effectiveness of the rectangular feature recognition formula as the number of points varies, the number of points in the rectangular plane is set to 10000, 5000, 2500, 1250, and 625, respectively. The geometric centroid and the point cloud distribution matrix of the extracted plane are calculated, and the three eigenvalues are obtained by the matrix decomposition operation. The theoretical value of the geometric centroid is (0, 0, 6.5), and the theoretical eigenvalues are 8.3333, 27, and 0, respectively. The computed geometric centroids and eigenvalues are presented in Table 2. It can be seen that the geometric centroid and the eigenvalues are not obviously affected by the number of points.


Table 2: Geometric centroid and eigenvalues for different numbers of points.

Number of points    Geometric centroid (cm)       Eigenvalues
10000               (0.0154, -0.0390, 6.5000)     (8.4102, 26.7072, 0)
5000                (-0.0079, -0.0090, 6.5000)    (8.3106, 27.3320, 0)
2500                (-0.0801, -0.0033, 6.5000)    (8.5595, 26.7923, 0)
1250                (0.1542, -0.0438, 6.5000)     (8.4415, 26.4924, 0)
625                 (0.0906, -0.3373, 6.5000)     (8.2372, 27.7920, 0)

Then, Gaussian random noise with a standard deviation of 0.01 cm, 0.1 cm, and 1 cm is added to the rectangular point cloud plane containing 2500 points. It can be seen from Table 3 that the location error does not increase obviously as the added random noise grows, because the number of points is large enough.


Table 3: Geometric centroid error under different noise levels.

Noise (cm)    Geometric centroid (cm)       Location error (cm)
0             (-0.0801, -0.0033, 6.5000)    (-0.0801, -0.0033, 0)
0.01          (0.0817, -0.0833, 6.4999)     (0.0817, -0.0833, 0.0001)
0.1           (0.0541, 0.0661, 6.4984)      (0.0541, 0.0661, 0.0016)
1             (-0.0261, 0.0210, 6.5254)     (-0.0261, 0.0210, 0.0254)

The theoretical eigenvalues and the eigenvalues computed by the matrix decomposition operation are presented in Table 4. The computed eigenvalues are close to the theoretical ones. Therefore, the effectiveness of the proposed recognition algorithm is validated by Tables 3 and 4.


Table 4: Simulated and theoretical eigenvalues under different noise levels.

Noise (cm)    Eigenvalues (simulation)     Eigenvalues (theory)
0             (8.5595, 26.7923, 0)         (8.3333, 27, 0)
0.01          (8.5898, 26.9769, 0.0001)    (8.3334, 27.0001, 0.0001)
0.1           (8.3124, 26.9526, 0.0103)    (8.3433, 27.01, 0.01)
1             (9.4664, 27.7494, 0.9765)    (9.3333, 28, 1)

In order to validate the effectiveness of the relative navigation method, digital simulation software was written in VC++. In this simulation, it is assumed that the target and the chaser are in the same orbit with different orbital phases, and the chaser is located about 25 m behind the target satellite. The inertia matrix of the target satellite is set as . The initial angular velocity of the target is °/s. The point cloud data are randomly simulated using a uniform distribution on the target surface with added Gaussian noise. The number of points is 1800, and the mean square deviation of the Gaussian error is  m. The true location error of the mass center is set as  m.

The PCL (Point Cloud Library) is employed to extract planes. Then, the proposed method is utilized to recognize the feature rectangular plane and to determine the relative position and attitude between the feature rectangular plane and the chaser. The relative position and attitude errors are shown in Figures 8-10.

It can be seen from Figure 8 that the accuracy of the relative position is better than 0.01 m, with triaxial maximum errors of about 0.01 m on each axis. The attitude error with the multisolution problem is shown in Figure 9, and the attitude error after removing the multisolution problem is shown in Figure 10. When the multisolution problem occurs, the attitude error jumps to about 180°; there is no such phenomenon in Figure 10. Therefore, the method proposed in this paper solves the attitude multisolution problem caused by the symmetrical rectangular plane. The attitude error presented in Figure 10 is better than 0.2°, and the triaxial maximum errors are about 0.13°, 0.12°, and 0.16°.

In the relative navigation filter, the initial relative position is set as the first observation of the relative position, and the initial relative velocity is set as 0. The initial attitude is set as the first observation of the target attitude, and the initial angular velocity is set to be the target angular velocity corrupted by initial errors. The initial covariance matrix $P_0$ is block diagonal over the position, velocity, attitude, angular velocity, moment-of-inertia ratio, and mass-center location states. The system noise is Gaussian noise with zero mean, with system noise variance matrix $Q$.

It can be seen from Figures 11-13 that not only the relative position and velocity but also the mass center location error of the malfunctioned satellite can be estimated. According to the simulation curves, the relative position and velocity errors converge in a very short time, and the fluctuation after convergence is very small. The accuracy of the three axis positions is better than 1.4 mm, 3.2 mm, and 1.4 mm, respectively. The triaxial velocity error is better than 0.01 mm/s on each axis, and the estimation error of the mass center location is less than 1 mm on each axis.

The estimation errors of the attitude, the angular velocity, and the moment-of-inertia ratios of the target satellite are presented in Figures 14-16. There are some fluctuations in the estimated attitude of the malfunctioned satellite, but the accuracy is better than 0.1°, and the angular velocity error is better than 0.01°/s. The moment-of-inertia ratios also converge to the true values over time.

5. Conclusion

A new method is proposed in this paper to estimate the target pose by utilizing local rectangular feature planes when the whole satellite cannot be observed. The plane extraction method, the rectangular feature recognition method, and the solution of the multisolution problem are presented. Finally, an extended Kalman filter is designed to estimate the relative position and attitude of the target satellite, together with its mass center location and moment-of-inertia ratios. This paper therefore provides a feasible algorithm for relative navigation during rendezvous and docking with malfunctioned satellites.

Data Availability

The data used to support the findings of this study are included within the article. The data are generated by the simulation software described in Section 4 (Simulation and Evaluation), and the detailed simulation conditions are presented in the manuscript. The simulation software is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the “Fundamental Research Funds for the Central Universities,” No. NS2016084. The authors fully appreciate the financial support.

References

  1. G. Roesler, P. Jaffe, and G. Henshaw, "Inside DARPA's mission to send a repair robot to geosynchronous orbit," IEEE Spectrum, vol. 8, 2017.
  2. B. Liang, X. Du, C. Li, and W. Xu, "Advances in space robot on-orbit servicing for non-cooperative spacecraft," Robot, vol. 34, no. 2, pp. 242–256, 2012.
  3. K. Landzettel, "Ongoing space robotics missions: TECSAS/DEOS," http://www.dlr.de/rm-neu/en/desktopdefault.aspx/tabid-3825/5963_read-8759.
  4. J. Galante, J. Van Eepoel, M. Strube et al., "Pose measurement performance of the Argon relative navigation sensor suite in simulated-flight conditions," in AIAA Guidance, Navigation, and Control Conference, pp. 27–42, Minneapolis, Minnesota, 2012.
  5. S. D'Amico, J. S. Ardaens, G. Gaias, H. Benninghoff, B. Schlepp, and J. L. Jørgensen, "Noncooperative rendezvous using angles-only optical navigation: system design and flight results," Journal of Guidance, Control, and Dynamics, vol. 36, no. 6, pp. 1576–1595, 2013.
  6. R. Volpe, M. Sabatini, and G. B. Palmerini, "Pose and shape reconstruction of a noncooperative spacecraft using camera and range measurements," International Journal of Aerospace Engineering, vol. 2017, Article ID 4535316, 13 pages, 2017.
  7. R. Volpe, G. B. Palmerini, and M. Sabatini, "A passive camera based determination of a non-cooperative and unknown satellite's pose and shape," Acta Astronautica, vol. 151, pp. 805–817, 2018.
  8. C. Caixiu, Pose On-Orbit Measurement of Non-Cooperative Spacecraft, Beijing University of Posts and Telecommunications, Beijing, China, 2015.
  9. X. Miao, F. Zhu, Y. Hao, Q. Wu, and R. Xia, "Vision pose measurement for non-cooperative space vehicles based on solar panel component," Chinese High Technology Letters, vol. 23, no. 4, pp. 400–406, 2013.
  10. L.-m. Zhang, F. Zhu, Y.-m. Hao, and M.-y. Wang, "Pose measurement based on a circle and a non-coplanar feature point," Acta Photonica Sinica, vol. 44, no. 11, article 1112002, 2015.
  11. P. Huang, L. Chen, B. Zhang, Z. Meng, and Z. Liu, "Autonomous rendezvous and docking with nonfull field of view for tethered space robot," International Journal of Aerospace Engineering, vol. 2017, Article ID 3162349, 11 pages, 2017.
  12. S. Segal, A. Carmi, and P. Gurfil, "Stereovision-based estimation of relative dynamics between noncooperative satellites: theory and experiments," IEEE Transactions on Control Systems Technology, vol. 22, no. 2, pp. 568–584, 2014.
  13. F. Terui, H. Kamimura, and S. Nishida, "Motion estimation to a failed satellite on orbit using stereo vision and 3D model matching," in 2006 9th International Conference on Control, Automation, Robotics and Vision, pp. 1–8, Singapore, 2006.
  14. J. A. Christian and S. Cryan, "A survey of LIDAR technology and its use in spacecraft relative navigation," in AIAA Guidance, Navigation, and Control (GNC) Conference, pp. 1–7, Boston, MA, USA, 2013.
  15. H. Gómez Martínez, G. Giorgi, and B. Eissfeller, "Pose estimation and tracking of non-cooperative rocket bodies using time-of-flight cameras," Acta Astronautica, vol. 139, pp. 165–175, 2017.
  16. U. Hillenbrand and R. Lampariello, "Motion and parameter estimation of a free-floating space object from range data for motion prediction," in The 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space, pp. 1–10, Munich, Germany, 2005.
  17. M. D. Lichter and S. Dubowsky, "Estimation of state, shape, and inertial parameters of space objects from sequences of range images," in Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision, pp. 194–205, Providence, RI, USA, 2003.
  18. F. Yu, Z. He, B. Qiao, and X. Yu, "Stereo-vision-based relative pose estimation for the rendezvous and docking of noncooperative satellites," Mathematical Problems in Engineering, vol. 2014, Article ID 461283, 12 pages, 2014.
  19. T. Bailey, Mobile Robot Localization and Mapping in Extensive Outdoor Environments, Ph.D. thesis, University of Sydney, 2002.

Copyright © 2019 Feng Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
