Abstract

Measurements with high-resolution noncontact instrumentation readily show that most robots do not reach the desired endpoint (Tool Center Point (TCP)) exactly because of manufacturing and assembly errors, transmission system errors, and mechanical wear. This paper presents a robot calibration solution that changes the endpoint trajectories while keeping the robot's control system and usage habits unchanged. Two independent systems measure the endpoint positions, the robot encoders and a noncontact measuring system with a high-resolution camera, and their readings are compared to determine the endpoint errors. A new trajectory built from the measured errors then replaces the original trajectory. The results show that the proposed method can significantly reduce errors; moreover, it is a low-cost solution, easy to apply in practice, and the calibration can be repeated cyclically. The only requirement of this method is a high-resolution noncontact measuring device located independently of the robot being calibrated.

1. Introduction

Actuators, and robots in particular, develop kinematic errors after a long working time. Over the years, researchers have proposed various software-based compensation methods to overcome this problem.

Ali et al. [1] used an IMU and a position sensor rigidly attached to the robot tool to measure the robot position automatically during operation. Ultrasonic triangulation sensors used for position tracking increased processing speed and reduced the computational load, allowing the robot to react faster. The kinematic parameters are then identified by an Extended Kalman Filter (EKF) after the robot positions are estimated. Although this method can reduce parameter errors, constant adjustment of the robot joints is necessary to reach the target point; it is therefore time-consuming, and the manipulator accuracy depends on the feedback errors. Wang et al. [2] calibrated a 5500 kg 8-DOF (degree-of-freedom) engineering robot based on joint angle division and an artificial neural network (ANN). To reduce the influence of alignment errors on positioning accuracy, the joint angle workspace is divided into several local regions, each with its own set of DH parameters. An ANN model then compensates for the remaining errors instead of a complex model of nongeometric errors. The results showed that the average position error was reduced from 17 cm to 4.5 cm after compensation. Wu et al. [3] used a new industry-oriented performance measure to evaluate calibration plan quality through the manipulator positioning accuracy after geometric error compensation and studied the industrial requirements of the prescribed manufacturing task. It is proven that the suggested performance measure can be expressed as the weighted trace of the associated covariance matrix, in which the weighting coefficients are determined by the corresponding test position. An advanced partial position measurement method is the basis for a dedicated algorithm for geometric parameter identification, using only direct position measurements from an external device for many reference endpoints. The user can enhance the basic parameter identification accuracy and avoid further calculations of the end-effector orientation components, which may lead to inconsistencies in the relevant identification equations. The manipulator's geometric parameters were determined with an average accuracy of 0.15 mm in position and 0.01° in angle, yielding a manipulator positioning accuracy of 0.17 mm, 5.5 times better than that of the uncalibrated robot. Moreover, some researchers mounted a camera on the robot's TCP and used the closed-loop "virtual closed kinematic chain" method recommended in [4–7], relying on the joint angle measurements already available in the robot control software; this is also known as self-calibration. In addition, Hans and Beno [8] presented a computer-vision 3D model-based method to track every position of a six-DOF robot in real time through a combination of textured model projection and optical flow. Hsiao et al. [9] used a robot hand with tactile sensors to localize an object on a table and finally reach the targeted position. Qu et al. [10] presented a closed-loop tracking system based on a laser sensor that reduces the robot's positioning error to less than 0.2 mm and ±1″ during drilling in robot-assisted aircraft assembly. Nevertheless, these methods have restrictions, as they require complex steps such as camera calibration, angle detection, and laser alignment. Laser-based methods require a broad, open-sided space, and the laser beam is easily occluded during manipulation.
These operations are difficult, time-consuming, and impossible for several applications. Du and Zhang [11–13] used an IMU rigidly attached to the TCP to evaluate the robot position in real time. To reduce the influence of noise and increase accuracy, a method linking the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) was proposed to estimate the orientation of the IMU. Finally, an EKF is applied to estimate the differential errors of each kinematic parameter. The mean positioning error is reduced by about 0.51 mm. A significant benefit of this method is that no additional movement is needed for data acquisition: after the robot executes a command, it stops, and the system collects static measurement data from the IMU. Liu et al. [14] developed a method to improve robot control accuracy using a multiple-sensor combination measuring system (MCMS) comprising a visual sensor, an angle sensor, and a serial robot. The visual sensor measured the manipulator position in real time, and the angle sensor was rigidly attached to the manipulator to determine its orientation. Two data fusion techniques, the Kalman Filter (KF) and the multisensor optimal information fusion algorithm (MOIFA), were used to combine the manipulator's position and orientation data. The test results showed that the highest accuracy of the photogrammetry system was at the center of the field of view, in the range of 1 × 0.8 × 1 m to 2 × 0.8 × 1 m. The robot manipulator's position error was less than 1 mm after calibration. Švaco et al. [15] used a noncontact stereo vision system mounted on a KUKA KR 6 R900 robot to define calibration points represented as spheres in the workspace. Each robot configuration gives different sphere center coordinates. The position error was reduced from 3.63 mm to 1.29 mm after calibration. The accuracy around the calibration points was further improved to a tolerance of 0.74 mm by optimizing the offset values of the original robot kinematic model. However, the calibrated parameters could not be imported directly into the robot controller, and a new kinematic solver was required to handle the new model definition. Furthermore, this tolerance was only a simulated result from MATLAB and still needed to be confirmed in real experiments. Barati et al. [16] used five different algorithms (least squares, genetic algorithms, particle swarm optimization, QPSO, and Sa-PSO) to identify and calibrate the positioning errors of a 3-DOF manipulator caused by inaccuracy of the geometric parameters. The advantage of this method was that only an encoder was needed to measure the joint angles, and a graduated plate of known accuracy was used to determine the endpoint's position. The results showed that 87% of the positioning errors were compensated. However, some error sources, such as thermal errors, joint transducer errors, and steady-state errors in the joint positions, were not included in the calibration model.

In this paper, we introduce a new method to calibrate a robot's accuracy using a noncontact measuring device whose resolution matches the required accuracy. The measured TCP errors are converted backward to find a replacement point that reduces the kinematic error. The technique is inexpensive and easy to apply in industry, which makes it well suited to calibration during robot maintenance cycles.

2. Coordinate Transformation in the Workspace

2.1. Coordinate Transformation between Two Endpoints

P1 is the desired position that the TCP must reach according to the technological requirements. Because of actuator errors, the position actually reached is P2. Both points are represented in the same coordinate system O0, which is attached to the first link of the investigated robot. The relationship between the two points is given by equation (1):

$$P_2 = T\,P_1. \quad (1)$$

Here, T is the general transformation matrix; it can include the orientation of the TCP at the survey point, or not, depending on the technological requirements.

If only position compensation is required and the orientation compensation at the work point can be neglected, T reduces to a pure translation:

$$T = \begin{pmatrix} I_{3\times 3} & \Delta p \\ 0 & 1 \end{pmatrix}, \quad \Delta p = p_2 - p_1. \quad (2)$$

If it is necessary to compensate for both the position and the orientation of the work point, T takes the general form

$$T = \begin{pmatrix} R & \Delta p \\ 0 & 1 \end{pmatrix}, \quad (3)$$

where R is the rotation matrix describing the orientation deviation between the two poses.
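As an illustration, the following Python sketch builds the transition matrix T for both cases above. It uses NumPy; the representation of poses as 4 × 4 homogeneous matrices and all function names are our assumptions, not the authors' code.

import numpy as np

def position_only_T(p1, p2):
    # Pure translation transition matrix (cf. equation (2)):
    # identity rotation, translation equal to the measured offset p2 - p1.
    T = np.eye(4)
    T[:3, 3] = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    return T

def general_T(P1, P2):
    # General transition matrix (cf. equations (1) and (3)) between two
    # 4x4 homogeneous poses P1 and P2, so that P2 = T @ P1.
    return P2 @ np.linalg.inv(P1)

# Example: a desired point and a slightly erroneous measured point (mm).
p1 = [400.0, 0.0, 300.0]
p2 = [400.3, -0.2, 299.9]
T = position_only_T(p1, p2)
assert np.allclose(T @ np.append(p1, 1.0), np.append(p2, 1.0))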

2.2. Convert the Measured Data by Camera

The data received by the camera are converted into the O0 frame by a transition matrix C determined from the camera calibration itself. Equation (1) is thus extended by one more step:

$$C\,{}^{c}P_2 = T\,P_1, \quad (4)$$

where ${}^{c}P_2$ is the actual point measured in the camera frame Oc.
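A minimal sketch of this conversion, under the same homogeneous-coordinate assumption as above (the function name is hypothetical):

import numpy as np

def camera_to_robot(C, p_cam):
    # Express a camera measurement in the robot base frame O0 (cf. equation (4)).
    # C: 4x4 transition matrix from the camera frame Oc to O0 (from calibration).
    # p_cam: 3D point measured in Oc.
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous form
    return (C @ p_h)[:3]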

3. The Basis of the Alternative Trajectory

3.1. Reversing One Point of the Real Trajectory into an Alternative Point

Consider P1 and P2 as shown in Figure 1, where P1 is the desired position of TCP and P2 is the actual position of TCP.

Although the deviation between P1 and P2 is quite small, it is possible to accurately determine the general transition (3) between them, which consists of a translational and a rotational motion. Assuming that the matrix T according to (4) is determined correctly, the following inverse relationship can be established:

$$P_1 = T^{-1} P_2. \quad (5)$$

In practice, the camera only shows P2 and does not show P1; P1 is determined according to (5) or provided by the robot's encoders. If the displacement is assumed to be small enough, the transformation acting at P3 is similar to the transformation from P1 to P2, because the points are quite close to each other in the workspace; P3 is then determined by (6):

$$P_3 = T^{-1} P_1. \quad (6)$$

If P3 is taken as an alternative target for the desired point P1, the above transition rule (4) brings the TCP to a point very close to P1:

$$T\,P_3 = T\,T^{-1} P_1 \approx P_1. \quad (7)$$

If a sequence of appropriate key points is created, the set of points P3 forms the alternative trajectory corresponding to the set of interpolated points P1. It should be noted that each transformation Ti is valid only in the neighborhood of its point P1i. Thus, to complete the trajectory, a corresponding set of transitions Ti is needed; these data are obtained in the robot calibration step when applying the alternative trajectory method.
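A short sketch of this replacement step, again assuming 4 × 4 homogeneous poses (names are hypothetical):

import numpy as np

def alternative_point(T, P1):
    # Replacement target (cf. equation (6)): command P3 = T^-1 @ P1 so that,
    # once the robot's systematic error T acts on it, the TCP lands near P1.
    return np.linalg.inv(T) @ P1

# Sanity check of the idea behind equation (7): applying T to P3 recovers P1.
T = np.eye(4)
T[:3, 3] = [0.3, -0.2, -0.1]              # a small measured error
P1 = np.array([400.0, 0.0, 300.0, 1.0])
P3 = alternative_point(T, P1)
assert np.allclose(T @ P3, P1)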

3.2. Reversing All Actual Trajectories into an Alternative Trajectory

P1i is the set of key points of the desired trajectory, which is given in advance, and P2i is the set of actual points that the camera measures in the test run. These data are combined to determine the parameters of each Ti as follows:

$$P_{2i} = T_i\,P_{1i}. \quad (8)$$

If only the displacement between two survey points is considered, the matrix Ti has the following form:

$$T_i = \begin{pmatrix} I_{3\times 3} & \Delta p_i \\ 0 & 1 \end{pmatrix}, \quad \Delta p_i = p_{2i} - p_{1i}. \quad (9)$$

The conversion is determined entirely by the measurement results, with the data processing of (9) performed at each measurement point. As shown in Figure 1, any nominal destination point in the test run provides two pieces of information for (9):

(i) Point P1, displayed on the robot control screen, is measured by the encoders attached to the robot's joints.

(ii) Point P2 is recorded by the camera and automatically converted into the O0 frame according to (4).

Instead of converting each point individually by (6), the alternative trajectory equation is expressed in full form as in (10):

$$P_{3i} = T_i^{-1} P_{1i}, \quad i = 1, \ldots, n. \quad (10)$$

Note that (10) is the trajectory equation in the workspace. The conversion to the trajectory equation in joint space still follows the normal sequence used when controlling the robot. The whole algorithm is shown in Figure 2.
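The loop of Figure 2 can be summarized with the following Python sketch. It is our reconstruction under the translation-only assumption of (9); measure_actual stands for a camera measurement already converted to O0, and all names, the tolerance, and the pass limit are hypothetical:

import numpy as np

def build_alternative_trajectory(P1s, measure_actual, tol=0.2, max_passes=3):
    # Iteratively replace each key point (cf. equations (9) and (10)):
    # measure the actual TCP position, form a translation-only Ti from the
    # error, and command Ti^-1 applied to the current target. Stop when
    # every point is within `tol` (mm) or after `max_passes` passes.
    commands = [np.array(P, dtype=float) for P in P1s]  # homogeneous 4-vectors
    for _ in range(max_passes):
        worst = 0.0
        for i, P1 in enumerate(P1s):
            P2 = measure_actual(commands[i])            # camera reading in O0
            dp = P2[:3] - np.asarray(P1)[:3]            # endpoint error
            worst = max(worst, float(np.linalg.norm(dp)))
            Ti = np.eye(4)
            Ti[:3, 3] = dp                              # translation-only Ti, eq. (9)
            commands[i] = np.linalg.inv(Ti) @ commands[i]
        if worst <= tol:
            break
    return commands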

If the trajectory substitution is performed more than once, the points with the smallest error in each compensation pass are selected and regrouped to form the alternative trajectory. In other words, if $P_{3i}^{j}$ denotes point number i at the j-th trajectory displacement and $\varepsilon_i^j$ denotes the trajectory error of this point, these errors must be smaller than a given tolerance $[\varepsilon]$, which means

$$\varepsilon_i^j \le [\varepsilon]. \quad (11)$$

Therefore, to choose the key points of the alternative trajectory, the following selection is required:

$$P_{3i} = P_{3i}^{j^*}, \quad j^* = \arg\min_j \varepsilon_i^j. \quad (12)$$
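A sketch of this selection rule, under a hypothetical data layout of one list of commanded points and one list of errors per pass:

def select_best_key_points(candidates, errors):
    # For each key point i, keep the command from the pass j with the
    # smallest recorded error (cf. equation (12)).
    # candidates[j][i]: commanded point i at pass j; errors[j][i]: its error.
    best = []
    for i in range(len(candidates[0])):
        j_star = min(range(len(candidates)), key=lambda j: errors[j][i])
        best.append(candidates[j_star][i])
    return best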

4. Case Study

In this study, we use a collaborative 6-DOF robot and a Leica camera in combination with the AT960-MR probe, as shown in Figure 3, to measure the errors of the survey points. The errors are obtained by comparing the robot and camera data after both are referred to the robot's base frame. These errors are then used to form the alternative trajectory, and the change of trajectory is verified by the camera. The experiment was conducted at the School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, Guangdong, China.

The measuring device used in the experiment is a Swiss Leica absolute laser coordinate measuring device, and the probe used is the AT960-MR [17]. The parameters of the measuring equipment are shown in Table 1.

The DH parameters of the experimental robot are shown in Figure 4 and Table 2.

After the camera calibration is completed, two measurement channels with the same original frame O0 are established. The relationship between the O0 frame attached to the first link of the robot and the Oc frame of the camera is shown in Table 3.

The relationship between the OTCP coordinate system (i.e., the coordinate system of the AT960-MR probe mounted on the last link) and the O6 coordinate system is shown in Table 4.

An RPY matrix is used to describe the orientation between two frames, and the transition between them can be written as

$${}^{0}P = T\,{}^{c}P,$$

where ${}^{0}P$ is the coordinate vector of the TCP expressed in the robot frame, ${}^{c}P$ is the coordinate vector of the TCP seen by the camera in the camera's frame of reference, and T is the transition matrix describing the relative position between the camera and the experimental robot; in this case, T takes the numerical form determined by the calibration data in Table 3.
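For illustration, the sketch below builds such a transition matrix from RPY angles and a translation; the Z-Y-X rotation order is our assumption, since the paper does not state its convention:

import numpy as np

def rpy_transition(roll, pitch, yaw, t):
    # 4x4 transition matrix from RPY angles (radians) and a translation t,
    # e.g. the camera-to-robot relationship of Table 3. Rotation order
    # assumed here: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T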

Performing the trajectory replacement three times in a row gives the errors after each pass shown in Table 5.

The lowest-error results from the three compensation passes are combined to form the alternative trajectory, as shown in Figure 5.

Figure 5 shows that, with a mean error of 0.39 mm, it was necessary to change the trajectory three times. The best results of the three attempts give the final alternative trajectory, whose error graph is the orange (lowest) line in Figure 5. Statistical calculations show that, after three compensations, the best trajectory obtained from the three-compensation data reduces the average error from 0.39 mm to 0.16 mm, meaning that the accuracy increased by 58% compared with the original trajectory. When the trajectory is changed, the errors of all points decrease sharply; no point has a higher or unchanged error.

5. Conclusions

There are several ways to calibrate robots; if the motion errors can be measured, the method proposed in this paper is of particular value for several reasons. First, the trajectory is changed without any software or hardware interference. Second, the operators' established way of using the robot is preserved, and production is not affected because calibration takes place in a fixed cycle. Third, although the alternative trajectory can be built from one or more replacement passes, the trajectory needs to be changed only once for the error to drop to the required level. Once the specific errors of the mathematical model have been measured during calibration, the measuring device (camera) data remain in use until the next calibration. The approach therefore adjusts the robot's accuracy through its operating procedure alone, without interfering with the robot's hardware or software, so it causes no unwanted technical problems, and switching to the new trajectory does not take long.

Experiments performed on the 6-DOF collaborative robot showed a 58% increase in the robot's trajectory control accuracy, which proves that the method is highly feasible. In practice, this approach should be readily accepted in mass production, for example in welding, automated assembly, and electronic circuit board assembly with different robot systems, with calibration taking place only during equipment maintenance.

Future work will focus on the following issues. Besides compensating robot errors, this method can be applied to machine tools and multiaxis CNC machining centers, especially machining centers that use parallel robot structures and require precision machining. Additionally, the method is currently implemented offline, that is, errors are measured and compensation parameters are calculated in advance; therefore, further work is needed to prove the online capability of the method for performing error compensation in real time. To further improve the control accuracy, especially for curved trajectories, the analytic geometry method proposed by Božek et al. [18] can be used to calculate the alternative trajectories, or inertial sensors can be integrated into the robot joints [19] to obtain more accurate data.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Thai Nguyen University of Technology (TNUT), Thai Nguyen Province, Vietnam.