Research Article  Open Access
Error Modeling in Distance and Rotation for Self-Calibration of Space Robots on Orbit
Abstract
Vibration and impact during launch, the pressure difference between the inside and outside of the space capsule, and thermal deformation of the capsule will change the transformation between the pose measurement system and the space robot base. Measuring and calculating this transformation accurately is complicated, even infeasible. Therefore, an error modeling method considering both the distance error and the rotation error of the end-effector is proposed for self-calibration of the space robot on orbit, in order to avoid the drawback of frame transformation. Moreover, according to the linear correlation of the columns of the identification matrix, unrecognizable parameters in the distance and rotation error model are removed to eliminate singularity in robot kinematic calibration. Finally, simulation tests on a 7-DOF space robot are conducted to verify the effectiveness of the proposed method.
1. Introduction
Space robots can assist astronauts to reach and expand their maintenance work areas, improving operational efficiency and safety [1], and can even complete on-orbit missions such as spacecraft rendezvous and docking, satellite fault maintenance, and satellite capture independently [2]. Some of these precise space missions require the space robot to have high end positioning accuracy. Nevertheless, defects in manufacturing and assembly lead to differences between the actual kinematic parameters and the nominal ones, generally regarded as systematic errors. Additionally, the end positioning accuracy is also affected by random errors, such as environmental changes, gear transmission, and mechanical deformation. Calibration on the ground can remedy the positioning deficiencies caused by these inherent kinematic errors [3, 4]. However, in contrast to traditional industrial robots, space robots are subjected to strong vibration and impact during the launch of the spacecraft and are then confronted with extreme temperatures on orbit. These factors inevitably cause the kinematic parameters of the space robot to change, resulting in a decrease in the end positioning accuracy. Therefore, it is necessary to perform on-orbit kinematic calibration [5].
The actual pose of the space robot end-effector can hardly be measured by an external measuring device in the extreme orbital environment, so the internal sensing system mounted on its end-effector is adopted for measurement during self-calibration. Many researchers have devoted efforts to kinematic self-calibration of robot manipulators. Angulo and Torras [6] developed a neural-network method to automatically recalibrate a commercial robot manipulator after wear or damage, which has been applied to the REIS robot included in the space station mockup at Daimler-Benz Aerospace. Liang et al. [7] developed an adaptive self-calibration of hand-eye systems in which a visual-feedback-based self-learning process dynamically and continuously learns the hand-eye transformation through repetitive operation trials. Liu et al. [8] proposed a self-calibration method based on hand-eye vision, which establishes the relative pose error model of the space robot and uses the particle swarm optimization algorithm to identify the kinematic parameters. Yin et al. [9] proposed a vision-based robot self-calibration method, eliminating the need for robot-base-frame and hand-to-eye calibrations. Du et al. [10–13] introduced an inertial measurement unit to estimate the end posture, attached a position marker to the end-effector to measure the actual position, and identified kinematic parameters with different filters to overcome the impact of sensing noise. In particular, to obtain more accurate and reliable estimates from the sensors, various filter tools such as the Kalman filter and the particle filter are used in the estimation process, and position estimation is usually combined with orientation estimation [14–16]. Zhang et al. [17] realized kinematic calibration based on the local product-of-exponentials formula by measuring the end position of the robot with a fixed camera and a plane mark mounted on the end-effector.
Works on self-calibration above adopted the absolute pose/position error model for kinematic calibration. They have to describe the end-effector pose errors in the robot base frame, making it inevitable to identify the transformation matrix between the measurement system frame and the robot base frame before calibration. However, this transformation matrix is very complicated to measure and calculate accurately, and is hardly obtainable in unmanned environments such as on orbit [18]. To avoid this drawback of frame transformation, the distance error between any two positions in the robot workspace is applied to calibrate the robot position accuracy indirectly [19, 20]. Roning and Korzun [21] used the criterion of equal distances between points in the robot space and the task space to perform calibration on the GM Fanuc S10 robot. Gong et al. [22] used a hybrid non-contact optical sensor mounted on the end-effector, calibrating a 6-degree-of-freedom (DOF) robot based on distance error. Tan et al. [23] made use of screw theory and the distance error model, considering the initial orientation errors. Gao et al. [24] obtained the linearized equation relating the positioning errors to the kinematic errors by differentiating the kinematic equation. Zhang et al. [25] derived a linear model from link parameter errors to the squared range difference of the robot end-effector. Zhang et al. [26] proposed a method to directly establish parameter error equations based on relative distance error and identified the parameter errors by employing a hybrid genetic algorithm. Mu et al. [27] synthesized the hand-eye transformation parameters and the robot kinematic parameters during calibration of the system parameters of a flexible measurement system based on spheres' centre-to-centre distance errors. Shi et al. [28] established the distance error model connecting the robot distance error and the absolute positioning error and validated it on a handling robot. Li et al. [29] used two error models, the position error model and the distance error model, for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots. Yao et al. [30] measured the distance between two points in space to conduct kinematic calibration on a service robot. As for measurement of the distance errors, a laser tracker is usually used [31–33], while a CMM (coordinate measuring machine) can also be employed [34]. Joubair and Bonev [35] developed a kinematic calibration method to improve the accuracy of a six-axis serial industrial robot in a specific target workspace, using distance error and sphere constraints. The works above obtained the distance error model by ignoring the linearization error, but neglected the rotation error of the robot end-effector, which degrades calibration performance to a certain extent.
In this paper, we propose an error modeling method considering both the distance and the rotation error of the end-effector of the space robot. The remainder of this paper is organized as follows. In Section 2, the kinematic self-calibration system for a 7-DOF space robot is elaborated. In Section 3, based on the absolute pose error model, we build the mappings from the kinematic errors to the rotation error of the robot end-effector, obtaining the distance and rotation error model. Then, the redundant parameters in this error model are analysed theoretically in Section 4. Finally, Sections 5 and 6 give the results of our experiments and conclude this paper, respectively.
2. Kinematic Self-Calibration System
Unlike robot manipulators calibrated on the ground, the space robot has a large structural size and an extreme working environment, making it impossible to measure its end pose with external measuring equipment, so its own hand-eye vision system [36–38] with a checkerboard calibration plate is adopted for pose measurement. As shown in Figure 1, a 7-DOF space robot is mounted on the outside of the space capsule, with a hand-eye camera attached to its end. A checkerboard calibration plate is placed on the outside of the space capsule, away from the space robot base, as a target for the hand-eye camera.
As illustrated in Figure 2, a base frame is assigned to the space robot and a reference frame is attached to each joint. The end-effector reference frame coincides exactly with the reference frame of the last joint. The hand-eye camera frame, namely, the measurement system frame, is related to the base frame by an assumed transformation matrix. Therefore, once the poses of the checkerboard in the hand-eye camera frame are measured, the transformation matrix of the end-effector frame with respect to the hand-eye camera frame is obtained. Further, the transformation matrix from the base frame to the end-effector frame is calculated as
However, vibration and impact during launch, the pressure difference between the inside and outside of the space capsule, and thermal deformation of the capsule will change the pose of the checkerboard with respect to the robot base frame, making it complicated to measure and calculate the transformation matrix between the camera frame and the robot base frame accurately. Therefore, the relative errors, including the distance and rotation errors of the robot end-effector, are analysed instead.
3. Error Modeling in Distance and Rotation
3.1. Distance Error Modeling
As the basis of error modeling, kinematic modeling aims to describe the relation between any two adjoining link coordinate frames with as few parameters as possible. However, inappropriate kinematic parameters may fail to meet three fundamental principles, namely, completeness, model continuity, and minimality [39]. Various methods of kinematic modeling for robot manipulators have been proposed; e.g., the classic DH method [40] uses only 4 parameters to describe the relation between any two adjoining link coordinate frames, while the MDH (modified DH) method [41] uses 5 parameters. Other methods include the CPC (complete and parametrically continuous) method [42] and the zero reference model method [43].
Without loss of generality, we assume that the number of kinematic parameters used to describe the relation between any two adjoining link coordinate frames is , and the DOF of the space robot is , then all the kinematic parameters of the space robot can be recorded as , while one robot configuration is denoted as .
Definition 1. Kinematic model: the kinematic model of the space robot relates the configuration and the kinematic parameters to the endeffector pose through a function as
The robot configurations are not exactly known due to sensor noise. Similarly, defects in manufacturing and assembly result in kinematic errors. The difference between a nominal value and the exact one is regarded as its error. It is noteworthy that errors in the measured configurations can also be included in the kinematic errors.
Proposition 1. Absolute pose error model [39]: the differential error of the end-effector pose is the product of the generalized Jacobian matrix and the kinematic errors. The generalized Jacobian matrix, also termed the identification matrix, is determined by the kinematic parameters and the configuration. The differential error of the end-effector pose is composed of 3 differential translations along the x-, y-, and z-axes and 3 differential rotations around them, respectively.
In Figure 3, as the robot configuration changes from to , and are the corresponding nominal and actual displacements, respectively, whereas and are the end-effector position offset displacements under these two configurations. According to the absolute pose error model in Equation (3), where is the position-related part of the identification matrix.
Proposition 2. Distance error model [20]: as the robot configuration changes from to , the measurable distance error scalar is the product of the identification matrix and the kinematic errors, such as where denotes the measurable distance error scalar. and are the norms of the vectors and , respectively, the former of which is calculated by nominal forward kinematics, while the latter can be measured by the hand-eye camera system. Both values are independent of the reference frames in which they are expressed. The identification matrix of the distance error model is obtained as .
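As a minimal numerical sketch (with hypothetical positions, not the paper's data), the distance error scalar can be computed from two nominal and two measured end-effector positions; since vector norms are invariant under rigid frame changes, the same scalar is obtained in the camera frame and in the base frame, which is exactly why the base-camera transformation need not be identified:

```python
import numpy as np

def distance_error(p_nom_a, p_nom_b, p_meas_a, p_meas_b):
    """Measurable distance error scalar between two end-effector positions.

    The nominal displacement norm comes from forward kinematics and the
    measured one from the hand-eye camera; both norms are invariant under a
    rigid change of reference frame, so no base-camera transform is needed.
    """
    l_nom = np.linalg.norm(np.asarray(p_nom_b) - np.asarray(p_nom_a))
    l_meas = np.linalg.norm(np.asarray(p_meas_b) - np.asarray(p_meas_a))
    return l_meas - l_nom

# Hypothetical positions: the same scalar results in two different frames.
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
t = np.array([0.3, -0.2, 1.0])
pa, pb = np.array([0.1, 0.2, 0.3]), np.array([0.4, 0.0, 0.7])        # nominal
qa, qb = np.array([0.12, 0.19, 0.31]), np.array([0.41, 0.02, 0.69])  # "measured"
e1 = distance_error(pa, pb, qa, qb)
e2 = distance_error(R @ pa + t, R @ pb + t, R @ qa + t, R @ qb + t)
assert abs(e1 - e2) < 1e-12
```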
3.2. Propositions of Differential Rotation
Definition 2. General rotation transformation: suppose that is a unit vector through the origin; the rotation matrix around with the angle can be obtained as where , , and .
Proposition 3. Equivalent angle and axis of rotation [40]: any rotation matrix can be expressed as a rotation around a certain axis with a certain angle . The axis is termed as equivalent axis of rotation while is the equivalent angle of rotation.
It should be noted that the equivalent angle and axis of rotation have the following three important properties.
(a) For a certain rotation matrix , there may be more than one set of equivalent angle and axis of rotation. Actually, is equivalent to and even to , . Therefore, the value of is always forced to lie in .
(b) When is small, the rotation axis is hard to compute because of singularity.
(c) , where denotes the transpose of . The proof is simple and omitted.
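The extraction of the equivalent angle and axis, together with properties (a) and (c), can be sketched as follows (an illustrative script, not the paper's implementation; `rot` implements the general rotation transformation of Definition 2, and the chosen axis and angle are arbitrary):

```python
import numpy as np

def rot(k, theta):
    """General rotation about unit axis k by angle theta (Definition 2)."""
    k = np.asarray(k, dtype=float)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def equivalent_angle_axis(R):
    """Extract (theta, k) with theta forced into (0, pi).

    The axis is ill-conditioned as theta -> 0 (property (b)), so a
    near-identity rotation is rejected.
    """
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        raise ValueError("axis undefined for a near-identity rotation")
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta, k

k, th = np.array([0.0, 0.6, 0.8]), 0.5
R = rot(k, th)
th2, k2 = equivalent_angle_axis(R)
assert abs(th2 - th) < 1e-12 and np.allclose(k2, k)
# Transposing R inverts the rotation: with theta kept in (0, pi), the
# extracted angle is unchanged and the axis is negated.
th3, k3 = equivalent_angle_axis(R.T)
assert abs(th3 - th) < 1e-12 and np.allclose(k3, -k)
```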
Proposition 4. Differential rotation by equivalent angle and axis of rotation: any differential rotation transformation can always be regarded as a differential rotation around a certain axis with the rotation angle , such as
Proof. is so small that , , and . Then, Equation (7) can be obtained by substituting and into Equation (6).
Proposition 5. Differential rotation by 3-dimensional differential rotation angles [44]: any differential rotation transformation can always be regarded as differential rotations around the axes , , and in turn. Suppose that the 3-dimensional differential rotation angles are , then where denotes a 3 × 3 identity matrix, and the function is used to create a skew-symmetric matrix, which is defined as
Proposition 6. If the equivalent angle and axis of rotation of a certain differential rotation matrix is and , and its 3dimensional differential rotation angles are , then
Proof. It can be obtained by Equations (7) and (8) as
Considering that is a unit vector, then
Thus, Equation (10) can be obtained by
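Proposition 6 can be checked numerically: for a small rotation, the 3-dimensional differential rotation angles read off the skew part of the matrix agree with the product of the equivalent angle and the axis (an illustrative sketch with an arbitrarily chosen unit axis):

```python
import numpy as np

def rot(k, theta):
    """General rotation about unit axis k by angle theta (Definition 2)."""
    k = np.asarray(k, dtype=float)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

k = np.array([1.0, 2.0, 2.0]) / 3.0      # unit axis, arbitrarily chosen
dtheta = 1e-4                            # small equivalent angle
R = rot(k, dtheta)
# 3-dimensional differential rotation angles from the skew part (cf. Proposition 5)
delta = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / 2.0
# Proposition 6: delta equals dtheta * k to first order
assert np.allclose(delta, dtheta * k, atol=1e-10)
# and since k is a unit vector, the norm of delta recovers dtheta
assert abs(np.linalg.norm(delta) - dtheta) < 1e-10
```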
Proposition 7. Differential rotation by the rotation matrix [44]: if the rotation matrix is the rotation matrix transformed by a differential rotation , then
It should be noted that is calculated with respect to the reference frame of , while the differential rotation with respect to the frame of , termed , can be obtained as
The two kinds of differential rotation matrix satisfy
3.3. Rotation Error Modeling
We can avoid identifying the transformation matrix between the measurement system frame and the robot base frame by using the distance error. The same holds when the equivalent angle of rotation is used to describe the variation of the end-effector orientation.
As shown in Figure 3, the matrices , , , and denote the rotations between any two of the frames , , , and . Since the kinematic errors are small, and are very close to each other. In other words, meets the definition of differential rotation.
Proposition 8. Suppose that , , and represent 3dimensional differential rotation angles of , , and , respectively, then
Proof. According to Proposition 7, we can obtain the differential rotation matrix as
Then, the differential rotation matrix between and can be calculated as
Equation (17) can be obtained by substituting , , and into Equation (19) and then simplifying it.
Suppose that the equivalent angle and axis of rotation corresponding to are and , while those of are and . Both and are vectors.
According to Equation (3), we can obtain where is the orientation-related part of the identification matrix. It should be noted that the identification matrix in Equation (20) is calculated with respect to the end-effector reference frame.
Proposition 9. Rotation error model: as the robot configuration changes from to , the measurable rotation error scalar is the product of the identification matrix and the kinematic errors, such as where is the measurable rotation error scalar, and the identification matrix of the rotation error model is .
Proof. Substituting Equation (20) into Equation (17), we can obtain
Similar to the derivation of the distance error, we make the rotational axis coincide with and the starting point of the rotation matrix coincide with that of as shown in Figure 4, where the point is the projection of onto the plane , while the point satisfies . The measurable equivalent rotation error can be obtained as
is the projection of onto the plane , which is actually the equivalent angle of rotation of the differential rotation around the axis , such as . Ignoring the linearizing error, we can obtain , and then
According to Proposition 6, substitute Equation (22) into (24), then
According to the third property of the equivalent angle and axis of rotation,
Finally, the rotation error model in Equation (21) can be obtained by substituting Equation (26) into (25).
3.4. Distance and Rotation Error Modeling
In summary, the distance and rotation error model is obtained by Equations (5) and (21) as
So we can obtain the identification matrix of the distance and rotation error model as
When there are redundant parameters in the error model, the identification matrix is rank-deficient, and measurement noise will seriously affect the accuracy and robustness of parameter identification, which makes the removal of redundant parameters necessary. Redundant parameters in the error model are discussed in the next section.
4. Parameter Independence Analysis
4.1. Kinematic Model by the MDH Method
A modified DH method, termed the MDH method [41], is used for kinematic modeling in this paper; it describes the deviation between two adjacent parallel joint axes with an additional rotation transformation to remedy the incompleteness. The MDH method is a good choice for verifying the proposed error model, because its modeling process is simple and it partially overcomes the singularity.
The coordinate frames are established by the MDH method as shown in Figure 5. The MDH method uses five parameters (let ) including to describe the two adjoining link coordinate frames, and the homogeneous transformation between them is shown as follows: where and denote the translation and rotation matrices, respectively, and denotes the homogeneous transformation of joint with respect to joint . means and means . The lengths of the links shown in Figure 2 are listed in Table 1.

According to the modeling rules of the MDH method, we obtain the nominal kinematic parameters of the 7DOF space robot as shown in Table 2.

Since the relation between the camera frame and the end-effector has been calibrated on the ground and is assumed to be unchanged on orbit, the coordinate frame of the end-effector coincides with that of the last joint. Therefore, the end-effector pose of a robot manipulator with n DOFs can be calculated as where the function denotes the transformation from a homogeneous matrix to its corresponding 3-dimensional position and ZYX Euler angles.
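A sketch of the MDH forward kinematics is given below. Since the exact parameter order of the paper's transformation is not reproduced here, the common convention Rz(theta)·Tz(d)·Tx(a)·Rx(alpha)·Ry(beta) is assumed, and the two-link check values are illustrative rather than the space robot's parameters:

```python
import numpy as np

def mdh_transform(theta, d, a, alpha, beta):
    """Homogeneous transform between adjoining MDH frames.

    The order Rz(theta)·Tz(d)·Tx(a)·Rx(alpha)·Ry(beta) is one common
    convention for the 5-parameter MDH model; the extra Ry(beta) handles
    nearly parallel joint axes that make the classic DH model singular.
    """
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rz = np.array([[ct, -st, 0, 0], [st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    T = np.eye(4)
    T[:3, 3] = [a, 0.0, d]                       # Tz(d)·Tx(a) combined
    Rx = np.array([[1, 0, 0, 0], [0, ca, -sa, 0], [0, sa, ca, 0], [0, 0, 0, 1]])
    Ry = np.array([[cb, 0, sb, 0], [0, 1, 0, 0], [-sb, 0, cb, 0], [0, 0, 0, 1]])
    return Rz @ T @ Rx @ Ry

def forward_kinematics(q, params):
    """End-effector pose (position + ZYX Euler angles) for joint values q and
    per-joint MDH rows (theta0, d, a, alpha, beta)."""
    T = np.eye(4)
    for qi, (theta0, d, a, alpha, beta) in zip(q, params):
        T = T @ mdh_transform(theta0 + qi, d, a, alpha, beta)
    R, p = T[:3, :3], T[:3, 3]
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.concatenate([p, [roll, pitch, yaw]])

# Illustrative 2-link planar arm (hypothetical parameters, not the 7-DOF robot's):
params = [(0.0, 0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0, 0.0)]
pose = forward_kinematics([np.pi / 2, -np.pi / 2], params)
assert np.allclose(pose[:3], [1.0, 1.0, 0.0]) and np.allclose(pose[3:], 0.0)
```

With beta = 0 every joint degenerates to the classic 4-parameter DH transform, which is the sanity check exercised above.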
4.2. Absolute Pose Error Model and Its Identifiability
It should be noted that nonsingularity of an error model indicates that its identification matrix has full column rank, that is, the columns of the identification matrix are linearly independent. The kinematic parameters of the robot can be sorted into three groups by their corresponding columns in the identification matrix.
(a) Independent parameters, whose corresponding columns are linearly independent of all others
(b) Relative parameters, whose corresponding column is linearly dependent on another one
(c) Ineffective parameters, whose corresponding column is the zero vector, indicating that they have no effect on the pose error of the robot end-effector
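The three groups can be detected numerically from any stacked identification matrix: a zero column marks an ineffective parameter, and a pair of nonzero columns whose two-column rank is 1 marks relative parameters (a sketch on a toy matrix, not the robot's actual identification matrix):

```python
import numpy as np

def classify_columns(J, tol=1e-9):
    """Sort identification-matrix columns into ineffective / relative / independent."""
    n = J.shape[1]
    labels = {}
    norms = np.linalg.norm(J, axis=0)
    for i in range(n):
        if norms[i] < tol:
            labels[i] = "ineffective"   # zero column: no effect on the pose error
    for i in range(n):
        if i in labels:
            continue
        for j in range(i + 1, n):
            if j in labels:
                continue
            # two nonzero columns are linearly dependent iff their rank is 1
            if np.linalg.matrix_rank(J[:, [i, j]], tol=tol) == 1:
                labels.setdefault(i, f"relative to {j}")
                labels.setdefault(j, f"relative to {i}")
    for i in range(n):
        labels.setdefault(i, "independent")
    return [labels[i] for i in range(n)]

# Toy identification matrix: column 1 = 2 * column 0, column 3 = 0
J = np.array([[1.0, 2.0, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [2.0, 4.0, 0.0, 0.0]])
assert classify_columns(J) == ["relative to 1", "relative to 0",
                               "independent", "ineffective"]
```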
We differentiate Equation (30) with respect to the kinematic parameters to obtain the absolute pose error model as where denotes the kinematic parameter errors of the joint, and denotes its corresponding identification matrix.
Definition 3. Local transfer matrix of the MDH method [44]: the local transfer matrix as illustrated in Figure 6 is used to transfer to the local pose error in its own frame . It can be calculated as
Definition 4. Global transfer matrix [44]: then, the local pose error is passed to the next coordinate frame until the last one, and we obtain the pose error of the endeffector with respect to the last coordinate frame. The global transfer matrix from the frame to the one is written as where denotes the rotation matrix in the homogeneous matrix and is its translation vector. The error transferring matrix between the two adjoining coordinate frames satisfies
Parameter independence in the error model can be determined just by analysing the redundant parameters of adjacent joints [42]. In other words, the matrix is required to be full column rank to ensure that all parameters are identifiable.
Proposition 10. The full column rank of is equivalent to the full column rank of .
Proof. According to Equations (32), (33), and (34), we can obtain the identification matrix with respect to and as
Since has nothing to do with and , the full column rank of is equivalent to the full column rank of according to Equation (35). Considering that the expression of is simpler and depends only on and , it becomes easier to analyse the parameter independence in the error model.
Suppose and then
Considering that and are variables, and the initial value of and is generally set as zero, we can obtain by Equation (36)
Next, three typical singularities are discussed to analyse the parameter independence between and .
(1) The two adjacent joints are parallel but not collinear, indicating that . can be rewritten as
According to Equations (37) and (38), , indicating that there is a linear interrelationship between and .
(2) The two adjacent joints are parallel and collinear, indicating that . can be rewritten as
According to Equations (37) and (39), and , indicating that there is a linear interrelationship between and , and the same is true for and .
(3) The two adjacent joints are orthogonal with . can be rewritten as
According to Equations (37) and (40), , so is linearly related to .
Based on the analysis above, the identifiability of kinematic parameters in the absolute pose error model of the 7DOF space robot can be obtained and shown in Table 3.

Only the relative parameters and all the independent parameters can be identified.
4.3. Parameter Independence Analysis of the Distance and Rotation Error Model
Equation (28) shows that the identification matrix of the distance and rotation error model is calculated from that of the absolute pose error model. Specifically, the distance-related identification matrix is based on the position-related one, whereas the rotation-related one is based on the orientation-related one. However, it should be noted that the two parts have different coefficients, i.e., and , and that the position-related identification matrix is expressed in the end-effector frame while the orientation-related one is expressed in the robot base frame. Accordingly, we discuss the two kinds of identification matrix separately below.
(1) The distance-related identification matrix: we can obtain the distance-related identification matrix with respect to by Equations (28) and (33) as where denotes the first three rows of , and is used to convert the reference frame of the position-related identification matrix from the end-effector frame to the robot base frame.
The position of the endeffector can be written as
For the first joint, , such as
Substituting Equation (44) into Equation (43), the result can be rewritten as where can be calculated by Equation (29) and is independent of .
We assume that the space robot moves from the configuration to , and the first joint rotates from to , i.e., from to . The end-effector position changes from to . Using Wolfram Mathematica, we can obtain the distance-related identification matrix as
Obviously, the kinematic parameters are ineffective, because their corresponding columns in Equation (46) are zero.
For the last joint, and , such as where depends on all the kinematic parameters, and so does the coefficient , which indicates that the identifiability of depends on . Obviously, from Equation (37), the kinematic parameters are ineffective.
For the joint , the coefficients in Equation (28) and in Equation (41) both depend on all the kinematic parameters, so the parameter identifiability of the distance error model is the same as that of the position error model.
(2) The rotation-related identification matrix: we can obtain the rotation-related identification matrix by Equations (28) and (33) as
For the first joint, ,such as where
By Equations (28), (49) and (50), we can obtain where depends on all the kinematic parameters, and the kinematic parameters are ineffective.
For the last joint, , such as
By Equations (28) and (52), we can obtain
By Equation (53), the kinematic parameters are ineffective. Actually, if both the distance and rotation error are taken into consideration, the ineffective parameters are .
For the joint (), the coefficients in Equation (28) and in Equation (48) both depend on all the kinematic parameters, so the parameter identifiability of the rotation error model is the same as that of the orientation error model.
(3) The distance and rotation error model: in summary, the identifiability of kinematic parameters in the distance and rotation error model of the 7-DOF space robot is obtained as shown in Table 4.

Since is linearly related to , and is ineffective, is also ineffective. Therefore, only the relative parameters and all the independent parameters can be identified.
5. Method Verification
The process of the calibration simulation is shown in Figure 7. Firstly, the measurement configurations are selected from the permissible operating range of the joints with the given number of configurations; then, the actual end-effector poses are calculated with the actual kinematic parameters; finally, taking the encoder noise and the measurement noise into account, the least squares method is adopted to identify the kinematic parameters. It is worth noting that the actual parameters are obtained by artificially adding parameter errors to the nominal parameters, so all these parameters are known for analysis and comparison.
Whether for the absolute pose error model or the proposed distance and rotation error model, the least squares method is a powerful tool for identifying the kinematic errors against sensor noise. The specific application of the least squares method is as where , denotes the total number of iterations, and is the initial nominal kinematic parameters. is the Moore-Penrose inverse of a matrix. The process is iterated until the update converges below a small threshold. Finally, the identified kinematic parameters are obtained as
For the distance and rotation error model, is the measured value of the distance and rotation of the space robot, so it is inaccurate due to sensor noise. Reference [39] indicates that a sufficient number of measurements can guarantee the convergence of the above process. For the distance and rotation error model of the space robot, the minimum number of measurements equals the number of identifiable kinematic parameters.
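The iterative identification described above can be sketched generically; the toy residual and Jacobian below are stand-ins (assumptions for illustration) for the robot's measured distance and rotation scalars and its identification matrix:

```python
import numpy as np

def identify(params0, residual, jacobian, tol=1e-10, max_iter=50):
    """Iterative least-squares identification: repeatedly solve J·dx = e with
    the Moore-Penrose pseudoinverse until the update is negligible."""
    x = np.asarray(params0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.pinv(jacobian(x)) @ residual(x)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy stand-in for the robot (hypothetical model): three measured scalars
# depending nonlinearly on two parameters, with an overdetermined Jacobian.
true = np.array([0.30, -0.10])
def measure(x):
    return np.array([np.sin(x[0]) + x[1], x[0] * x[1], x[0] + np.cos(x[1])])
meas = measure(true)                      # noise-free "measurements"
residual = lambda x: meas - measure(x)
def jacobian(x):
    return np.array([[np.cos(x[0]), 1.0],
                     [x[1], x[0]],
                     [1.0, -np.sin(x[1])]])
est = identify([0.0, 0.0], residual, jacobian)
assert np.allclose(est, true, atol=1e-6)
```

With noise-free measurements the iteration recovers the true parameters; with sensor noise, stacking more measurement rows averages the noise out, which is why a sufficient number of measurements guarantees convergence.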
5.1. Selection of Measurement Configurations
The end-effector poses of the space robot are measured by a hand-eye camera, so the camera has to point to the target checkerboard. Besides, the end-effector of the space robot under the selected measurement configurations has to be close to the checkerboard to ensure measurement accuracy.
For the sake of convenience and economy, we select one configuration for comparison as and another 51 configurations as . The end-effector position corresponding to the configuration is located 0.5 m above the centre of the checkerboard, while the end-effector positions corresponding to the other 51 configurations are distributed uniformly on a hemispherical surface whose centre is the end-effector position of the configuration and whose radius is 0.5 m. Additionally, the axis of the coordinate frame corresponding to each of these end-effector poses points to the centre of the checkerboard, as shown in Figure 8.
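The hemispherical distribution of measurement positions can be sketched with a Fibonacci spiral, a hypothetical construction since the paper does not specify how the 51 points are spread beyond uniformity (pointing the camera axis at the checkerboard is omitted here):

```python
import numpy as np

def hemisphere_targets(center, radius=0.5, n=51):
    """Near-uniform points on the upper hemisphere around `center`
    (Fibonacci-spiral construction, assumed for illustration)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))
    pts = []
    for i in range(n):
        z = i / (n - 1)                  # cosine of the polar angle, upper half
        r = np.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * i
        pts.append(center + radius * np.array([r * np.cos(phi),
                                               r * np.sin(phi), z]))
    return np.array(pts)

center = np.array([0.0, 0.0, 0.5])       # 0.5 m above the checkerboard centre
pts = hemisphere_targets(center)
assert pts.shape == (51, 3)
# every target lies on the 0.5 m sphere, on or above the centre plane
assert np.allclose(np.linalg.norm(pts - center, axis=1), 0.5)
assert np.all(pts[:, 2] >= center[2] - 1e-12)
```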
5.2. Sensor Noises and Kinematic Parameter Errors
In practical applications, the actual end-effector pose of a robot can be obtained by the hand-eye camera, while the actual robot configurations can be measured by the encoders. In the simulation, however, measurement noise and encoder noise are added according to their respective distributions to imitate the influence of sensor errors. These noises follow the normal distributions shown in Table 5.
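Injecting the sensor noise in simulation amounts to adding zero-mean Gaussian samples with the standard deviations of Table 5; the configuration and noise level below are illustrative values, not the table's:

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for reproducibility

def add_noise(values, sigma):
    """Zero-mean Gaussian noise imitating encoder or camera measurement errors."""
    values = np.asarray(values, dtype=float)
    return values + rng.normal(0.0, sigma, size=values.shape)

# Hypothetical 7-DOF configuration and noise level (not the values of Table 5):
q_true = np.array([0.1, -0.4, 0.8, 0.0, 1.2, -0.3, 0.5])
q_meas = add_noise(q_true, sigma=1e-4)    # encoder noise, rad
assert q_meas.shape == q_true.shape
assert not np.allclose(q_meas, q_true, atol=0.0)
```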

According to Section 4, the transformation matrix between the robot base frame and the frame cannot be identified, so we assume that this transformation matrix is unchanged on orbit or can be calibrated in another way. Therefore, errors are added to all kinematic parameters but . All the kinematic parameter errors are shown in Table 6.

5.3. Result and Analysis
The purpose of robot calibration is to obtain accurate kinematic parameters representing the robot structure and an exact estimate of the end-effector pose. The least squares method is adopted to obtain the calibrated kinematic parameters , and then the calibration residuals of the different error models are calculated as
The calibration residuals by the distance error model, by the distance and rotation error model, and by the absolute pose error model are shown in Tables 7–9.



Some of the calibration residuals in Tables 7–9 are relatively large; these correspond to relative parameters, so their calibration residuals should be summed up accordingly.
For the distance and rotation error model,
For the absolute pose error model,
Equations (57) and (58) illustrate that, once summed up, the calibration residuals of these relative parameters counteract each other, indicating that only some of the relative parameters need to be identified. This is consistent with the analysis of parameter independence, which confirms the correctness of the analysis results.
Moreover, we conclude that the distance and rotation error model outperforms the distance error model in identification accuracy of the kinematic parameters, though it is slightly inferior to the absolute pose error model.
Finally, 500 robot configurations are selected randomly to serve as the validation group. The end-effector position estimate errors of the validation configurations are calculated with the different error models, and the maximum and average of these errors are analysed to compare calibration performance.
Figure 9 gives the end-effector position estimate errors of the validation configurations before calibration and after calibration by the three kinds of error models. Figure 10 gives the corresponding histograms of these position errors. The maximum and average of these 500 position errors are shown in Table 10.

Both the identification accuracy of the kinematic parameters shown in Tables 7–9 and the statistical analysis of the validation tests in Figures 9 and 10 and Table 10 verify the effectiveness of the proposed distance and rotation error model. The rotation error of the robot end-effector is included so as to improve calibration performance. It should be pointed out, however, that due to the lack of an absolute reference, errors in the kinematic parameters of the first (root) joint of the space robot cannot be identified.
6. Conclusions
This paper proposes an error model involving both the distance error and the rotation error of the space robot end-effector. The model avoids identifying the transformation matrix between the measurement system frame and the robot base frame, making it suitable for self-calibration of the space robot on orbit. In addition, the identifiable parameters in the distance and rotation error model are determined to eliminate singularity in robot kinematic calibration. Finally, calibration simulations are conducted to compare the calibration performance of the models. Statistical results indicate that the proposed error model achieves better accuracy than the distance-only error model in both the end-effector position estimate and the kinematic parameter identification. Several issues remain for future work. The observability of the distance and rotation error model should be studied as an indicator for measurement configuration optimization, which would significantly reduce the number of configurations required for calibration. Information fusion provides a powerful tool to deal with uncertainty and external disturbance in pose measurement, so the application of filtering algorithms to robot calibration deserves attention. From an operational point of view, the lighting conditions under which the calibration process is carried out should also be taken into account. In summary, challenges remain before the proposed method can be adopted in practical applications.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61573066 and 61327806).
References
C. Sallaberger, “Canadian space robotic activities,” Acta Astronautica, vol. 41, no. 4–10, pp. 239–246, 1997.
P. J. Staritz, S. Skaff, C. Urmson, and W. Whittaker, “Skyworker: a robot for assembly, inspection and maintenance of large scale orbital facilities,” in Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation, vol. 4, pp. 4180–4185, Seoul, South Korea, May 2001.
Z. Roth, B. Mooring, and B. Ravani, “An overview of robot calibration,” IEEE Journal on Robotics and Automation, vol. 3, no. 5, pp. 377–385, 1987.
G. Chen, T. Li, M. Chu, J. Q. Xuan, and S. H. Xu, “Review on kinematics calibration technology of serial robots,” International Journal of Precision Engineering and Manufacturing, vol. 15, no. 8, pp. 1759–1774, 2014.
R. P. Judd and A. B. Knasinski, “A technique to calibrate industrial robots with experimental verification,” IEEE Transactions on Robotics and Automation, vol. 6, no. 1, pp. 20–30, 1990.
V. R. Angulo and C. Torras, “Self-calibration of a space robot,” IEEE Transactions on Neural Networks, vol. 8, no. 4, pp. 951–963, 1997.
P. Liang, Y. L. Chang, and S. Hackwood, “Adaptive self-calibration of vision-based robot systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 4, pp. 811–824, 1989.
Y. Liu, H. Liu, F. L. Ni, and W. F. Xu, “New self-calibration approach to space robots based on hand-eye vision,” Journal of Central South University, vol. 18, no. 4, pp. 1087–1096, 2011.
S. Yin, Y. Ren, J. Zhu, S. Yang, and S. Ye, “A vision-based self-calibration method for robotic visual inspection systems,” Sensors, vol. 13, no. 12, pp. 16565–16582, 2013.
G. Du and P. Zhang, “Online robot calibration based on vision measurement,” Robotics and Computer-Integrated Manufacturing, vol. 29, no. 6, pp. 484–492, 2013.
G. Du and P. Zhang, “IMU-based online kinematic calibration of robot manipulator,” The Scientific World Journal, vol. 2013, Article ID 139738, 10 pages, 2013.
G. Du and P. Zhang, “Online serial manipulator calibration based on multisensory process via extended Kalman and particle filters,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 6852–6859, 2014.
G. Du, P. Zhang, and D. Li, “Online robot calibration based on hybrid sensors using Kalman filters,” Robotics and Computer-Integrated Manufacturing, vol. 31, pp. 91–100, 2015.
G. Du, P. Zhang, and D. Li, “Human–manipulator interface based on multisensory process via Kalman filters,” IEEE Transactions on Industrial Electronics, vol. 61, no. 10, pp. 5411–5418, 2014.
G. Du and P. Zhang, “A markerless human–robot interface using particle filter and Kalman filter for dual robots,” IEEE Transactions on Industrial Electronics, vol. 62, no. 4, pp. 2257–2264, 2015.
G. Du, P. Zhang, and X. Liu, “Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter,” IEEE Transactions on Industrial Informatics, vol. 12, no. 2, pp. 694–704, 2016.
X. Zhang, Y. Song, Y. Yang, and H. Pan, “Stereo vision based autonomous robot calibration,” Robotics and Autonomous Systems, vol. 93, pp. 43–51, 2017.
Y. J. Ren, Z. Jigui, Y. Xueyou, and Y. Shenghua, “Measurement robot calibration model and algorithm based on distance accuracy,” Acta Metrologica Sinica, vol. 3, no. 29, pp. 198–202, 2008.
Z. Xuecai, Z. Qixian, and Z. Shixiong, “A new model with compensation algorithm for distance errors of robot mechanisms,” Robot, vol. 3, no. 1, 1991.
X. Zhou and Q. Zhang, “Distance error model in the study on the positioning accuracy of robots,” Robot, vol. 17, no. 1, 1995.
J. Roning and A. Korzun, “A method for industrial robot calibration,” in Proceedings of International Conference on Robotics and Automation, vol. 4, pp. 3184–3190, Albuquerque, NM, USA, April 1997.
C. Gong, J. Yuan, and J. Ni, “Nongeometric error identification and compensation for robotic system by inverse calibration,” International Journal of Machine Tools and Manufacture, vol. 40, no. 14, pp. 2119–2137, 2000.
Y. Tan, H. Sun, and Z. Shao, “New manipulator calibration method based on screw theory and distance error,” Journal of Beijing University of Aeronautics and Astronautics, vol. 32, no. 9, pp. 1104–1108, 2006.
W. Gao, H. Wang, Y. Jiang, and X. A. Pan, “Kinematic calibration method of robots based on distance error,” Robot, vol. 35, no. 5, p. 600, 2013.
T. Zhang, X. Dai, and D. Liang, “Robot error calibration based on distance measurement with parameter selection,” Journal of Beijing University of Aeronautics and Astronautics, vol. 40, no. 5, pp. 585–590, 2014.
Y. G. Zhang and H. Zhang, “An approach of robotic kinematics parameters calibration,” Advanced Materials Research, vol. 655–657, pp. 1023–1028, 2013.
N. Mu, K. Wang, Z. Xie, and P. Ren, “Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor,” Optical Engineering, vol. 56, no. 5, article 054103, 2017.
Y. Shi, J. Fang, and Z. Weng, “Research on kinematic parameter calibration of handling robot,” in 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), pp. 224–228, Yangzhou, China, 2017.
X. Li, H. Hu, and W. Ding, “Two error models for calibrating SCARA robots based on the MDH model,” MATEC Web of Conferences, vol. 95, article 08008, 2017.
X. Yao, W. Shi, L. Zhang, D. Xu, and J. Zuo, “Research on kinematic calibration of service robot based on distance error,” Modern Manufacturing Engineering, vol. 9, no. 1, 2017.
I.-C. Ha, “Kinematic parameter calibration method for industrial robot manipulator using the relative position,” Journal of Mechanical Science and Technology, vol. 22, no. 6, pp. 1084–1090, 2008.
W. Zhenhua, X. Hui, C. Guodong, S. Rongchuan, and L. Sun, “A distance error based industrial robot kinematic calibration method,” Industrial Robot: An International Journal, vol. 41, no. 5, pp. 439–446, 2014.
M. John, “Kinematic calibration of Delta robot using distance measurements,” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 39, no. 8, pp. 55–60, 2015.
T. Zhang, “Kinematic calibration of robot based on distance error,” Journal of South China University of Technology, vol. 39, no. 11, pp. 98–103, 2011.
A. Joubair and I. A. Bonev, “Kinematic calibration of a six-axis serial robot using distance and sphere constraints,” International Journal of Advanced Manufacturing Technology, vol. 77, no. 1–4, pp. 515–523, 2015.
G. Flandin, F. Chaumette, and E. Marchand, “Eye-in-hand/eye-to-hand cooperation for visual servoing,” in Proceedings 2000 ICRA. IEEE International Conference on Robotics and Automation, vol. 3, pp. 2741–2746, San Francisco, CA, USA, April 2000.
M. Sabatini, R. Monti, P. Gasbarri, and G. B. Palmerini, “Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator,” Acta Astronautica, vol. 83, pp. 65–84, 2013.
M. Carpentiero, M. Sabatini, and G. B. Palmerini, “Capabilities of stereo vision systems for future space missions,” in Proceedings of the 67th International Astronautical Congress, Guadalajara, Mexico, 2016.
K. Schröer, S. L. Albright, and M. Grethlein, “Complete, minimal and model-continuous kinematic models for robot calibration,” Robotics and Computer-Integrated Manufacturing, vol. 13, no. 1, pp. 73–85, 1997.
J. Denavit and R. S. Hartenberg, “A kinematic notation for lower-pair mechanisms based on matrices,” ASME Journal of Applied Mechanics, vol. 22, pp. 215–221, 1955.
S. A. Hayati, “Robot arm geometric link parameter estimation,” in The 22nd IEEE Conference on Decision and Control, pp. 1477–1483, San Antonio, TX, USA, December 1983.
H. Zhuang, Z. S. Roth, and F. Hamano, “A complete and parametrically continuous kinematic model for robot manipulators,” IEEE Transactions on Robotics and Automation, vol. 8, no. 4, pp. 451–463, 1992.
B. W. Mooring and G. R. Tang, “An improved method for identifying the kinematic parameters in a six-axis robot,” in Computers in Engineering, Proceedings of the International Computers in Engineering Conference and Exhibit, vol. 1, pp. 79–84, Chicago, IL, USA, 1984.
Y. L. Xiong, Robotics, China Machine Press, 1993.
Copyright
Copyright © 2019 Qingxuan Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.