#### Abstract

Vibration and impact during launch, the pressure difference between the inside and outside of the space capsule, and thermal deformation of the capsule will change the transformation between the pose measurement system and the space robot base. Measuring and calculating this transformation accurately is complicated, sometimes infeasible. Therefore, an error modeling method considering both the distance error and the rotation error of the end-effector is proposed for on-orbit self-calibration of the space robot, avoiding the drawback of frame transformation. Moreover, according to the linear correlation of the columns of the identification matrix, unrecognizable parameters in the distance and rotation error model are removed to eliminate singularity in robot kinematic calibration. Finally, simulation tests on a 7-DOF space robot are conducted to verify the effectiveness of the proposed method.

#### 1. Introduction

Space robots can assist astronauts by reaching and expanding their maintenance work areas, improving operational efficiency and safety [1], and can even complete on-orbit missions such as spacecraft rendezvous and docking, satellite fault maintenance, and satellite capture independently [2]. Some of these precise space missions require the space robot to have high end positioning accuracy. Nevertheless, defects in manufacturing and assembly lead to differences between the actual kinematic parameters and the nominal ones, generally regarded as systematic errors. Additionally, the end positioning accuracy is also affected by random errors, such as environmental changes, gear transmission, and mechanical deformation. Calibration on the ground can remedy the positioning deficiencies caused by these inherent kinematic errors [3, 4]. However, in contrast to traditional industrial robots, space robots are subjected to strong vibration and impact during spacecraft launch and are then confronted with extreme temperatures on orbit. These factors inevitably cause the kinematic parameters of the space robot to change, resulting in a decrease in end positioning accuracy. Therefore, it is necessary to perform on-orbit kinematic calibration [5].

The actual pose of the space robot end-effector can hardly be measured by an external measuring device due to the extreme orbital environment, so the internal sensing system mounted on its end-effector is adopted for measurement during self-calibration. Many researchers have devoted efforts to kinematic self-calibration of robot manipulators. Angulo and Torras [6] developed a neural-network method to automatically recalibrate a commercial robot manipulator after it undergoes wear or damage, which has been applied to the REIS robot included in the space station mock-up at Daimler-Benz Aerospace. Liang et al. [7] developed an adaptive self-calibration of hand-eye systems in which a visual-feedback-based self-learning process dynamically and continuously learns the hand-eye transformation through repetitive operation trials. Liu et al. [8] proposed a self-calibration method based on hand-eye vision, which establishes the relative pose error model of the space robot and uses the particle swarm optimization algorithm to identify the kinematic parameters. Yin et al. [9] proposed a vision-based robot self-calibration method, eliminating the need for robot-base-frame and hand-to-eye calibrations. Du et al. [10–13] introduced an inertial measurement unit to estimate the end posture, attached a position marker to the end-effector to measure the actual position, and identified kinematic parameters with different filters to overcome the impact of sensing noise. In particular, to obtain more accurate and reliable estimates from the sensors, various filtering tools such as the Kalman filter and the particle filter are used in the estimation process, and position estimation is always combined with orientation estimation [14–16]. Zhang et al. [17] realized kinematic calibration based on the local product-of-exponentials formula by measuring the end position of the robot with a fixed camera and a plane mark mounted on the end-effector.

Works on self-calibration above adopted the absolute pose/position error model for kinematic calibration. They have to describe the end-effector pose errors under the robot base frame, making it inevitable to identify the transformation matrix between the measurement system frame and the robot base frame before calibration. However, this transformation matrix is very complicated to measure and calculate accurately, even hardly possible to obtain in unmanned environments such as on orbit [18]. To avoid the drawback of frame transformation, the distance error of any two positions in robot workspace is applied to calibrate the robot position accuracy indirectly [19, 20]. Roning and Korzun [21] used the criteria of equal distances between the points in the robot space and the task space to perform calibration on the GM Fanuc S-10 robot. Gong et al. [22] used a hybrid noncontact optical sensor mounted on the end-effector, calibrating the 6 degree-of-freedom (DOF) robot based on distance error. Tan et al. [23] made use of the screw theory and the distance error model, considering the initial orientation errors. Gao et al. [24] obtained the linearized equation describing the relationship between the positioning errors and the kinematic errors by differentiating the kinematic equation. Zhang et al. [25] derived a linear model from link parameter errors to squared range difference of the robot end-effector. Zhang et al. [26] proposed a method to directly establish parameter error equations based on relative distance error and identified the parameter errors by employing a hybrid genetic algorithm. Mu et al. [27] synthesized the hand-eye transformation parameters and the robot kinematic parameters during calibration of the system parameters of a flexible measurement system based on spheres’ centre-to-centre distance errors. Shi et al. 
[28] established the distance error model to connect the robot distance error and the absolute positioning error and validated the error model on a handling robot. Li et al. [29] used two error models, the position error model and the distance error model, for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots. Yao et al. [30] measured the distance between two points in space to conduct kinematic calibration on a service robot. As for measurement of the distance errors, a laser tracker was typically used [31–33], while a CMM (coordinate measuring machine) could also be employed [34]. Joubair and Bonev [35] developed a kinematic calibration method to improve the accuracy of a six-axis serial industrial robot in a specific target workspace, using distance error and sphere constraints. The works above obtained the distance error model by ignoring the linearization error; however, they neglect the rotation error of the robot end-effector, which degrades calibration performance to a certain extent.

In this paper, we propose an error modeling method considering both the distance and the rotation error of the end-effector of the space robot. The remainder of this paper is organized as follows. In Section 2, the kinematic self-calibration system for a 7-DOF space robot is elaborated. In Section 3, based on the absolute pose error model, we build the mappings from the kinematic errors to the rotation error of the robot end-effector, obtaining the distance and rotation error model. Then, the redundant parameters in this error model are analysed theoretically in Section 4. At last, Sections 5 and 6 give the results of our experiments and conclude this paper, respectively.

#### 2. Kinematic Self-Calibration System

Different from kinematic calibration of robot manipulators on the ground, the space robot has a large structural size and works in an extreme environment, making it impossible to measure its end pose using external measuring equipment; therefore, its own hand-eye vision system [36–38] with a checkerboard calibration plate is adopted for pose measurement. As shown in Figure 1, a 7-DOF space robot is mounted on the outside of the space capsule, with a hand-eye camera attached to its end. A checkerboard calibration plate is placed on the outside of the space capsule, away from the space robot base, as a target for the hand-eye camera.

As illustrated in Figure 2, denotes the base frame of the space robot, and denotes the reference frame attached to each joint, respectively. is the end-effector reference frame, which coincides exactly with the reference frame . is the hand-eye camera frame, namely, the measurement system frame, and its transformation matrix with respect to the base frame is assumed to be . Therefore, once the poses of the checkerboard in the hand-eye camera frame are measured, the transformation matrix of the end-effector frame with respect to the hand-eye camera frame is obtained, denoted as . Further, the transformation matrix from the base frame to the end-effector frame is calculated as

However, vibration and impact of launching, inner and outer pressure difference, and thermal deformation of the space capsule will change the pose of the checkerboard with respect to the robot base frame, making it complicated to measure and calculate the transformation matrix between the camera frame and the robot base frame accurately. Therefore, the relative errors including distance and rotation errors of the robot end-effector are analysed.

#### 3. Error Modeling in Distance and Rotation

##### 3.1. Distance Error Modeling

As the basis of error modeling, kinematic modeling aims to describe the relation between any two adjoining link coordinate frames with as few parameters as possible. However, inappropriate kinematic parameters might not meet three fundamental principles, namely, completeness, model continuity, and minimality [39]. Various methods of kinematic modeling for robot manipulators have been proposed; e.g., the classic DH method [40] uses only 4 parameters to describe the relation between any two adjoining link coordinate frames, while the MDH (modified DH) method [41] uses 5 parameters. Other examples include the CPC (complete and parametrically continuous) method [42] and the zero reference model method [43].

Without loss of generality, assume that the number of kinematic parameters used to describe the relation between any two adjoining link coordinate frames is and that the DOF of the space robot is ; then all the kinematic parameters of the space robot can be recorded as , while one robot configuration is denoted as .

*Definition 1. *Kinematic model: the kinematic model of the space robot relates the configuration and the kinematic parameters to the end-effector pose through a function as

The robot configurations are not exactly known due to the existence of sensor noise. Similarly, defects in manufacturing and assembly result in kinematic errors. The difference between the nominal value and the exact one is assumed to be , such as . It is noteworthy that errors in can be also included in kinematic errors .

Proposition 1. *Absolute pose error model [39]: the differential error of the end-effector pose is the product of the generalized Jacobian matrix and the kinematic errors, such as
where is the generalized Jacobian matrix, also termed the identification matrix, determined by the kinematic parameters and the configuration . denotes the differential error of the end-effector pose, composed of 3 differential translations along the x-, y-, and z-axes and 3 differential rotations around them, respectively.*

In Figure 3, as the robot configuration changes from to , and are the corresponding nominal and actual displacements, respectively, whereas and are the end-effector position offset displacements under these two configurations. According to the absolute error model in Equation (3), where is the position-related part of the identification matrix.

Proposition 2. *Distance error model [20]: as the robot configuration changes from to , the measurable distance error scalar is the product of the identification matrix and the kinematic errors, such as
where denotes the measurable distance error scalar. and are the norms of the vectors and , respectively; the former is calculated by nominal forward kinematics, while the latter can be measured by the hand-eye camera system. Neither value depends on the reference frame. The identification matrix of the distance error model is obtained as .*
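
The frame independence that motivates the distance error model can be illustrated with a short numerical sketch (assuming numpy; the point coordinates and the rigid camera-frame transform below are hypothetical examples, not values from the paper):

```python
import numpy as np

def distance_error(p_nom_i, p_nom_j, p_meas_i, p_meas_j):
    """Measurable distance-error scalar: measured inter-point distance
    minus nominal inter-point distance.  Both norms are invariant to
    the reference frame, so no base-to-camera transformation is needed."""
    d_nom = np.linalg.norm(np.asarray(p_nom_j, float) - np.asarray(p_nom_i, float))
    d_meas = np.linalg.norm(np.asarray(p_meas_j, float) - np.asarray(p_meas_i, float))
    return d_meas - d_nom
```

Because only norms of relative vectors enter the scalar, expressing the measured points in an arbitrarily rotated and translated camera frame leaves the error unchanged.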

##### 3.2. Propositions of Differential Rotation

*Definition 2. *General rotation transformation: suppose that is a unit vector over origin, the rotation matrix around with the angle can be obtained as
where , , and .

Proposition 3. *Equivalent angle and axis of rotation [40]: any rotation matrix can be expressed as a rotation around a certain axis with a certain angle . The axis is termed as equivalent axis of rotation while is the equivalent angle of rotation.*

It should be noted that the equivalent angle and axis of rotation have the following three important properties. (a) A given rotation matrix may have more than one set of equivalent angle and axis of rotation. Actually, is equivalent to and even to , . Therefore, the value of is always constrained to lie in . (b) When is small, the rotation axis is hard to compute because of singularity. (c) , where denotes the transpose of . The proof is simple and omitted.
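
These definitions and properties can be checked numerically. The sketch below (assuming numpy; the axis and angle are arbitrary examples) builds a rotation from an axis-angle pair via the general rotation transformation of Definition 2 and recovers the equivalent angle and axis; the trace-based extraction makes property (c) immediate, since trace(R) = trace(Rᵀ):

```python
import numpy as np

def rot_axis_angle(k, theta):
    """General rotation transformation (Rodrigues): angle theta about unit axis k."""
    k = np.asarray(k, float) / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def equivalent_axis_angle(R):
    """Equivalent angle and axis of a rotation matrix, theta forced into
    [0, pi].  Near theta = 0 the axis is numerically ill-defined
    (property (b)); this sketch does not handle that case."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return k, theta
```

Property (a) can also be observed directly: rotating about the negated axis by the negated angle reproduces the same matrix.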

Proposition 4. *Differential rotation by equivalent angle and axis of rotation: any differential rotation transformation can always be regarded as a differential rotation around a certain axis with the rotation angle , such as
*

*Proof. * is so small that , , and . Then, Equation (7) can be obtained by substituting and into Equation (6).

Proposition 5. *Differential rotation by 3-dimensional differential rotation angles [44]: any differential rotation transformation can always be regarded as differential rotation around the axes , , and in turn. Suppose that the 3-dimensional differential rotation angles are , then
where denotes an identity matrix, and the function is used to create a skew-symmetric matrix, which is defined as
*

Proposition 6. *If the equivalent angle and axis of rotation of a certain differential rotation matrix is and , and its 3-dimensional differential rotation angles are , then
*

*Proof. *It can be obtained by Equations (7) and (8) as

Considering that is a unit vector, then

Thus, Equation (10) can be obtained by

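Propositions 5 and 6 can be verified numerically with a small sketch (assuming numpy; the differential angles below are arbitrary small values chosen for illustration):

```python
import numpy as np

def skew(v):
    """S(v): the skew-symmetric matrix such that S(v) @ x = cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Proposition 5: a differential rotation is, to first order, I + S(delta),
# where delta stacks the three differential angles about x, y, z.
# Proposition 6: delta = theta * k, i.e. theta = ||delta||, k = delta / theta.
delta = 1e-4 * np.array([0.3, -0.5, 0.4])
dR_lin = np.eye(3) + skew(delta)

# Exact rotation about k = delta/||delta|| by theta = ||delta|| (Rodrigues),
# for comparison: the two agree up to second order in theta.
theta = np.linalg.norm(delta)
k = delta / theta
K = skew(k)
dR_exact = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

The discrepancy between `dR_lin` and `dR_exact` is of order theta squared, which is exactly the linearization error the propositions neglect.
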
Proposition 7. *Differential rotation by the rotation matrix [44]: if the rotation matrix is the rotation matrix transformed by a differential rotation , then
*

It should be noted that is calculated with respect to the reference frame of . And the differential rotation with respect to the frame of , termed as , can be obtained as

The two kinds of differential rotation matrix satisfy

##### 3.3. Rotation Error Modeling

Using the distance error, we can avoid identifying the transformation matrix between the measurement system frame and the robot base frame. The same holds when the equivalent angle of rotation is used to describe the variation of the end-effector orientation.

As shown in Figure 3, the matrices , , , and denote the rotations between any two of the frames , , , and . Since the kinematic errors are small, and are very close to each other. In other words, meets the definition of a differential rotation.

Proposition 8. *Suppose that , , and represent 3-dimensional differential rotation angles of , , and , respectively, then
*

*Proof. *According to Proposition 7, we can obtain the differential rotation matrix as

Then, the differential rotation matrix between and can be calculated as

Equation (17) can be obtained by substituting , , and into Equation (19) and then simplifying it.

Suppose that the equivalent angle and axis of rotation corresponding to are and , while those of are and . Both and are vectors.

According to Equation (3), we can obtain where is the orientation-related part of the identification matrix. It should be noted that the identification matrix in Equation (20) is calculated with respect to the end-effector reference frame.

Proposition 9. *Rotation error model: as the robot configuration changes from to , the measurable rotation error scalar is the product of the identification matrix and the kinematic errors, such as
where is the measurable rotation error scalar, and the identification matrix of the rotation error model is .*

*Proof. *Substituting Equation (20) into Equation (17), we can obtain

Similar to the derivation of the distance error, we make the rotational axis coincide with and the starting point of the rotation matrix coincide with that of , as shown in Figure 4, where the point is the projection of onto the plane , while the point satisfies . The measurable equivalent rotation error can be obtained as

is the projection of onto the plane , which is actually the equivalent angle of rotation of the differential rotation around the axis , such as . Ignoring the linearization error, we can obtain , and then

According to Proposition 6, substitute Equation (22) into (24), then

According to the third property of the equivalent angle and axis of rotation,

Finally, the rotation error model in Equation (21) can be obtained by substituting Equation (26) into (25).
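
The resulting rotation error scalar can be sketched numerically (assuming numpy; the rotations below are arbitrary examples, and the sketch takes the scalar as the difference of the equivalent angles of the measured and nominal relative rotations). Since the equivalent angle depends only on the trace, and trace(Q R Qᵀ) = trace(R), the scalar is unaffected by the unknown camera-to-base transform:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def equivalent_angle(R):
    """Equivalent rotation angle of R, in [0, pi]."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def rotation_error(R_nom_i, R_nom_j, R_meas_i, R_meas_j):
    """Hedged sketch of the measurable rotation-error scalar: equivalent
    angle of the measured relative rotation minus that of the nominal one.
    The relative rotations make any fixed measurement-frame offset cancel."""
    theta_n = equivalent_angle(R_nom_i.T @ R_nom_j)
    theta_a = equivalent_angle(R_meas_i.T @ R_meas_j)
    return theta_a - theta_n
```

Expressing both measured orientations in a different frame (left-multiplying by the same fixed rotation) leaves the scalar unchanged, which is what lets the model bypass the base-to-camera calibration.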

##### 3.4. Distance and Rotation Error Modeling

In summary, the distance and rotation error model is obtained by Equations (5) and (21) as

So we can obtain the identification matrix of the distance and rotation error model as

When there are redundant parameters in the error model, the identification matrix is rank-deficient, and measurement noise will seriously affect the accuracy and robustness of parameter identification, which implies the necessity of removing redundant parameters. Redundant parameters in the error model are discussed in the next section.

#### 4. Parameter Independence Analysis

##### 4.1. Kinematic Model by the MDH Method

A modified DH method, termed the MDH method [41], is used for kinematic modeling in this paper; it describes the deviation between two adjacent parallel joint axes with an additional rotation transformation to remedy the incompleteness. The MDH method is a good choice for verifying the proposed error model, because its modeling process is simple and it partially overcomes the singularity.

The coordinate frames are established by the MDH method as shown in Figure 5. The MDH method uses five parameters (let ) including to describe the relation between two adjoining link coordinate frames, and the homogeneous transformation between them is shown as follows: where and denote the translation and rotation matrices, respectively, and denotes the homogeneous transformation of joint with respect to joint . means and means . The lengths of the links shown in Figure 2 are listed in Table 1.
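
A minimal sketch of the MDH link transformation (assuming numpy, and one common form of the convention, Rz(θ)·Tz(d)·Tx(a)·Rx(α)·Ry(β); the exact operation order in the paper's equation may differ):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def mdh_transform(theta, d, a, alpha, beta):
    """Homogeneous transform between adjoining MDH link frames.  The extra
    rotation beta about y (Hayati's modification) repairs the classic DH
    singularity for nearly parallel joint axes."""
    return rot_z(theta) @ trans(0, 0, d) @ trans(a, 0, 0) @ rot_x(alpha) @ rot_y(beta)
```
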

According to the modeling rules of the MDH method, we obtain the nominal kinematic parameters of the 7-DOF space robot as shown in Table 2.

Since the relation between the camera frame and the end-effector has been calibrated on the ground and is assumed to be unchanged on orbit, the coordinate frame of the end-effector coincides with that of the last joint. Therefore, the end-effector pose of a robot manipulator with n DOFs can be calculated as where the function denotes the transformation from a homogeneous matrix to its corresponding 3-dimensional position and Z-Y-X Euler angles.
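
A sketch of the forward kinematics chain and the pose-extraction function (assuming numpy; the function names are hypothetical, and the Z-Y-X Euler extraction assumes the pitch stays away from ±π/2, where these angles are singular):

```python
import numpy as np

def pose_from_homogeneous(T):
    """3-D position plus Z-Y-X Euler angles (yaw, pitch, roll) extracted
    from a 4x4 homogeneous transform."""
    R = T[:3, :3]
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.concatenate([T[:3, 3], [yaw, pitch, roll]])

def forward_kinematics(link_transforms):
    """Chain per-link homogeneous transforms into the end-effector pose."""
    T = np.eye(4)
    for Ti in link_transforms:
        T = T @ Ti
    return pose_from_homogeneous(T)
```
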

##### 4.2. Absolute Pose Error Model and Its Identifiability

It should be noted that nonsingularity of an error model indicates that its identification matrix has full column rank, that is, the columns of the identification matrix are linearly independent. The kinematic parameters of the robot can be sorted into three groups by their corresponding columns in the identification matrix: (a) independent parameters, whose corresponding columns are linearly independent of each other; (b) relative parameters, whose corresponding column is linearly dependent on another one; (c) ineffective parameters, whose corresponding column is the zero vector, indicating that they have no effect on the pose error of the robot end-effector.

We differentiate Equation (30) with respect to the kinematic parameters to obtain the absolute pose error model as where denotes the kinematic parameter errors of the joint, and denotes its corresponding identification matrix.

*Definition 3. *Local transfer matrix of the MDH method [44]: the local transfer matrix as illustrated in Figure 6 is used to transfer to the local pose error in its own frame . It can be calculated as

*Definition 4. *Global transfer matrix [44]: then, the local pose error is passed to the next coordinate frame until the last one, and we obtain the pose error of the end-effector with respect to the last coordinate frame. The global transfer matrix from the frame to the one is written as
where denotes the rotation matrix in the homogeneous matrix and is its translation vector. The error transferring matrix between the two adjoining coordinate frames satisfies

Parameter independence in the error model can be determined just by analysing the redundant parameters of adjacent joints [42]. In other words, the matrix is required to be full column rank to ensure that all parameters are identifiable.
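
The rank analysis can also be performed numerically: stack the identification matrix over several configurations and inspect its singular values. The toy planar 2R model below (hypothetical, assuming numpy; it stands in for the space robot) includes a deliberately redundant base-offset parameter whose column coincides with that of the first joint offset, mirroring the "relative parameters" above:

```python
import numpy as np

def fk(p, q):
    """Toy planar 2R arm.  Parameters: link lengths L1, L2, a base angular
    offset o0, and joint offsets o1, o2.  o0 and o1 enter only through
    their sum, so they are deliberately redundant."""
    L1, L2, o0, o1, o2 = p
    a1 = q[0] + o0 + o1
    a2 = a1 + q[1] + o2
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2),
                     L1 * np.sin(a1) + L2 * np.sin(a2)])

def stacked_jacobian(p, qs, eps=1e-7):
    """Finite-difference identification matrix stacked over configurations."""
    rows = []
    for q in qs:
        base = fk(p, q)
        J = np.zeros((2, len(p)))
        for j in range(len(p)):
            dp = np.array(p, float)
            dp[j] += eps
            J[:, j] = (fk(dp, q) - base) / eps
        rows.append(J)
    return np.vstack(rows)

rng = np.random.default_rng(1)
qs = rng.uniform(-np.pi, np.pi, (20, 2))
A = stacked_jacobian([1.0, 0.8, 0.0, 0.0, 0.0], qs)
sv = np.linalg.svd(A, compute_uv=False)
# One singular value collapses: the o0/o1 pair is unidentifiable and one
# of the two must be removed before identification.
```
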

Proposition 10. *The full column rank of is equivalent to the full column rank of .*

*Proof. *According to Equations (32), (33), and (34), we can obtain the identification matrix with respect to and as

Since has nothing to do with and , the full column rank of is equivalent to the full column rank of according to Equation (35). Considering that the expression of is simpler and depends only on and , it becomes easier to analyze the parameter independence in the error model.

Suppose and then

Considering that and are variables, and the initial values of and are generally set to zero, we can obtain by Equation (36)

Next, three typical singularities are discussed to analyze the parameter independence between and . (1) The two adjacent joints are parallel but not collinear, indicating that . can be rewritten as

According to Equations (37) and (38), , indicating that there is a linear interrelationship between and . (2) The two adjacent joints are parallel and collinear, indicating that . can be rewritten as

According to Equations (37) and (39), and , indicating that there is a linear interrelationship between and , and the same holds for and . (3) The two adjacent joints are orthogonal with . can be rewritten as

According to Equations (37) and (40), , so is linearly related to .

Based on the analysis above, the identifiability of kinematic parameters in the absolute pose error model of the 7-DOF space robot can be obtained and shown in Table 3.

Only the relative parameters and all the independent parameters can be identified.

##### 4.3. Parameter Independence Analysis of the Distance and Rotation Error Model

Equation (28) shows that the identification matrix of the distance and rotation error model is calculated from that of the absolute pose error model. Specifically, the distance-related identification matrix is based on the position-related one, whereas the rotation-related one is based on the orientation-related one. However, it should be noted that the two parts have different coefficients, i.e., and , and that the position-related identification matrix is expressed in the end-effector frame while the orientation-related one is expressed in the robot base frame. Accordingly, we discuss the two kinds of identification matrix separately in the following.

(1) The distance-related identification matrix: we can obtain the distance-related identification matrix with respect to by Equations (28) and (33) as where denotes the first three rows of , and is used to convert the related frame of the position-related identification matrix from the end-effector frame to the robot base frame.

The position of the end-effector can be written as

For the first joint, , such as

Substituting Equation (44) into Equation (43), the latter can be rewritten as where can be calculated by Equation (29) and is independent of .

We assume that the space robot moves from the configuration to , and the first joint rotates from to , i.e., from to . The end-effector position changes from to . Using the Wolfram Mathematica, we can obtain the distance-related identification matrix as

Obviously, the kinematic parameters are ineffective, because their corresponding columns in Equation (46) are zero.

For the last joint, and , such as where depends on all the kinematic parameters, and so does the coefficient , which indicates that the identifiability of depends on . Obviously, from Equation (47), the kinematic parameters are ineffective.

For the joint , the coefficients in Equation (28) and in Equation (41) both depend on all the kinematic parameters, so the parameter identifiability of the distance error model is the same as that of the position error model.

(2) The rotation-related identification matrix: we can obtain the rotation-related identification matrix by Equations (28) and (33) as

For the first joint, , such as where

By Equations (28), (49) and (50), we can obtain where depends on all the kinematic parameters, and the kinematic parameters are ineffective.

For the last joint, , such as

By Equations (28) and (52), we can obtain

By Equation (53), the kinematic parameters are ineffective. Actually, if both the distance and rotation error are taken into consideration, the ineffective parameters are .

For the joint (), the coefficients in Equation (28) and in Equation (48) both depend on all the kinematic parameters, so the parameter identifiability of the rotation error model is the same as that of the orientation error model.

(3) The distance and rotation error model: in summary, the identifiability of kinematic parameters in the distance and rotation error model of the 7-DOF space robot can be obtained as shown in Table 4.

Since is linearly related to , and is ineffective, is also ineffective. Therefore, only the relative parameters and all the independent parameters can be identified.

#### 5. Method Verification

The process of the calibration simulation is shown in Figure 7. First, the measurement configurations are selected from the permissible operating range of the joints with the given number of configurations; then, the actual end-effector poses are calculated with the actual kinematic parameters; finally, taking the encoder noise and the measurement noise into account, the least squares method is adopted to identify the kinematic parameters. It is worth noting that the actual parameters are the nominal parameters with artificially added parameter errors, so all these parameters are known for analysis and comparison.

Whether it is the absolute pose error model or the proposed distance and rotation error model, the least squares method is a powerful tool for identifying the kinematic errors against sensor noise. The least squares update is applied as where , denotes the total number of iterations, and is the initial nominal kinematic parameters. is the Moore-Penrose inverse of a matrix. The process is iterated until falls below a small threshold. Finally, the identified kinematic parameters are obtained as

For the distance and rotation error model, is the measured value of the distance and rotation of the space robot, so it is inaccurate due to the existence of sensor noises. The work [39] indicates that a sufficient number of measurements can guarantee the convergence of the above process. For the distance and rotation error model of the space robot, the least number of measurements is the number of all the identifiable kinematic parameters.
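
The iterative least squares identification can be sketched on a toy model (assuming numpy; the planar 2R arm, parameter values, and configuration count are hypothetical stand-ins for the 7-DOF robot, and measurements are noiseless here for clarity):

```python
import numpy as np

def fk(p, q):
    """Toy stand-in for the robot model: planar 2R arm whose link lengths
    and joint offsets play the role of the kinematic parameters."""
    L1, L2, o1, o2 = p
    a1 = q[0] + o1
    a2 = a1 + q[1] + o2
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2),
                     L1 * np.sin(a1) + L2 * np.sin(a2)])

rng = np.random.default_rng(0)
p_true = np.array([1.02, 0.79, 0.012, -0.02])   # "actual" parameters
p = np.array([1.00, 0.80, 0.0, 0.0])            # nominal initial guess
qs = rng.uniform(-np.pi, np.pi, (30, 2))
meas = np.array([fk(p_true, q) for q in qs])    # measured end positions

for _ in range(8):                              # iterate to convergence
    pred = np.array([fk(p, q) for q in qs])
    r = (meas - pred).ravel()                   # stacked residuals
    J = np.zeros((r.size, p.size))              # identification matrix
    for j in range(p.size):
        dp = p.copy()
        dp[j] += 1e-7                           # finite-difference column
        J[:, j] = (np.array([fk(dp, q) for q in qs]) - pred).ravel() / 1e-7
    p = p + np.linalg.pinv(J) @ r               # Moore-Penrose update
```

With enough well-spread configurations the update converges and `p` recovers the true parameters; with sensor noise, more measurements are needed, as noted above.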

##### 5.1. Selection of Measurement Configurations

The end-effector poses of the space robot are measured by a hand-eye camera, so the camera has to point to the target checkerboard. Besides, the end-effector of the space robot under the selected measurement configurations has to be close to the checkerboard in order to ensure measurement accuracy.

For the sake of convenience and economy, we select one configuration for comparison as and another 51 configurations as . Actually, the corresponding end-effector position of the configuration is located 0.5 m above the centre of the checkerboard, while the corresponding end-effector positions of the other 51 configurations are distributed uniformly on the hemispherical surface, whose centre is located at just the corresponding end-effector position of the configuration , and its radius is 0.5 m. Additionally, the -axis of the coordinate frames corresponding to all these end-effector poses points to the centre of the checkerboard, as shown in Figure 8.
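
One plausible way to realize such a hemispherical distribution of viewpoints (assuming numpy; a Fibonacci lattice stands in for whatever uniform spreading the paper uses, and the inverse kinematics needed to turn poses into joint configurations is not shown):

```python
import numpy as np

def hemisphere_viewpoints(center, radius=0.5, n=51):
    """Spread n camera positions over the hemisphere of the given radius
    above `center` (Fibonacci lattice) and aim each viewpoint's z-axis
    (the camera boresight) back at the center."""
    center = np.asarray(center, float)
    golden = np.pi * (3.0 - np.sqrt(5.0))       # golden-angle increment
    positions, boresights = [], []
    for i in range(n):
        h = (i + 0.5) / n                       # height fraction in (0, 1)
        r = np.sqrt(1.0 - h * h)                # ring radius on unit sphere
        p = center + radius * np.array([r * np.cos(golden * i),
                                        r * np.sin(golden * i),
                                        h])
        z = (center - p) / np.linalg.norm(center - p)
        positions.append(p)
        boresights.append(z)
    return np.array(positions), np.array(boresights)
```

Every viewpoint then lies at the same stand-off distance from the checkerboard centre, with its z-axis pointing at the target.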

##### 5.2. Sensor Noises and Kinematic Parameter Errors

In practical applications, the actual end-effector pose of a robot can be obtained by the hand-eye camera, while the actual robot configurations can be measured by the encoders. In simulation, however, measurement noise and encoder noise are added according to their respective distributions to imitate the influence of sensor errors. These noises follow the normal distributions shown in Table 5.

According to Section 4, the transformation matrix between the robot base frame and the frame cannot be identified, so we have to assume that this transformation matrix will not change on orbit or can be calibrated in another way. Therefore, errors are added to all kinematic parameters except . All the kinematic parameter errors are shown in Table 6.

##### 5.3. Result and Analysis

The purpose of robot calibration is to obtain the accurate kinematic parameters representing the robot structure and the exact estimation of the end-effector pose. The least squares method is adopted to obtain the calibrated kinematic parameters , and then the calibration residuals by different kinds of error models are calculated as

The calibration residuals by the distance error model, by the distance and rotation error model, and by the absolute pose error model are shown in Tables 7-9.

Some of the calibration residuals in Tables 7-9 are relatively large; these correspond to relative parameters, so their calibration residuals should be summed up accordingly.

For the distance and rotation error model,

For the absolute pose error model,

Equations (57) and (58) illustrate that, once summed up, the calibration residuals of these relative parameters counteract each other, indicating that only some of the relative parameters need to be identified. This is consistent with the analysis of parameter independence, which confirms the correctness of the analysis results.

Moreover, we conclude that the distance and rotation error model outperforms the distance error model in identification accuracy of the kinematic parameters, though it is slightly weaker than the absolute pose error model.

Finally, 500 robot configurations are selected randomly to serve as the validation group. The end-effector position estimate errors of the validation configurations are calculated with each error model, and the maximum and average of these errors are analysed to compare calibration performance.

Figure 9 gives the end-effector position estimate errors of validation configurations before calibration and those after calibration by three kinds of error models. Figure 10 gives the corresponding histograms of these position errors. The maximum and average of these 500 position errors are shown in Table 10.

Both the identification accuracy of the kinematic parameters shown in Tables 7-9 and the statistical analysis of the validation tests in Figures 9 and 10 and Table 10 verify the effectiveness of the proposed distance and rotation error model. Including the rotation error of the robot end-effector improves calibration performance. It should be pointed out, however, that because the model lacks absolute pose information, errors in the kinematic parameters of the first (root) joint of the space robot cannot be identified.

#### 6. Conclusions

Proposed in this paper is an error model involving both the distance and the rotation error of the space robot end-effector. The error model avoids identifying the transformation matrix between the measurement system frame and the robot base frame, making it suitable for self-calibration of the space robot. Besides, the identifiable parameters in the distance and rotation error model are confirmed to eliminate singularity in robot kinematic calibration. Finally, we conduct the calibration simulation and compare the calibration performance of these models. Statistical results indicate that the proposed error model achieves better accuracy in the robot end-effector position estimate and in kinematic parameter identification than the distance-only error model. Future work should study the observability of the distance and rotation error model as an indicator for measurement configuration optimization, which would significantly reduce the number of configurations required for calibration. Besides, information fusion provides a powerful tool to deal with uncertainty and external disturbance in pose measurement, and the application of filtering algorithms in robot calibration is worthy of attention. From the operational point of view, the lighting conditions for carrying out the calibration process should also be taken into consideration. In summary, there remain future works and challenges in adopting the proposed method in practical applications.

#### Data Availability

The data used to support the findings of this study are included within the article.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (61573066 and 61327806).