Abstract

There are few universal sensors that can handle a complex surface inspection task accurately and efficiently. The prevailing solution is to integrate multiple sensors and exploit their complementary strengths. A key prerequisite is the extrinsic parameter calibration (global calibration) of the multiple sensors before measurement. This paper proposes an optimal extrinsic calibration method for a structured light sensor (SLS) and a conoscopic holography sensor (CHS). In this method, a common planar calibration board is placed with different poses in front of the multisensor system, and the extrinsic calibration problem is solved through a three-dimensional reconstruction of the calibration board and the geometric constraints relating the views of the SLS and CHS. The calibration method is simple, requiring only the planar calibration board. Physical experiments demonstrate that the proposed method is robust and accurate in calibrating multiple inhomogeneous optical sensors for the measurement of complex surfaces.

1. Introduction

Advanced optical sensing technology is increasingly studied and applied [1]; in modern industries in particular, such as the aerospace, automobile, and shipbuilding industries, there are ever higher requirements for measurement accuracy, measurement efficiency, and data integrity [2–4]. However, an individual sensor can neither accurately provide holistic information of a part nor provide spatial and temporal coverage with small measurement uncertainty. There is an increasing need for the development of more effective measurement methods that allow high-speed, high-accuracy, flexible, and holistic inspection [5, 6]. Multiple sensors are therefore employed both to achieve holistic geometrical measurement information and to improve the reliability or reduce the uncertainty of measurement data. Multiple physical sensor configurations can be roughly classified into three categories: complementary, competitive, and cooperative configurations [7]. In a complementary configuration, the sensors work independently, but the acquired data are combined to give more complete information of the measured object. In a competitive configuration, sensors independently measure the same feature to reduce measurement uncertainty. In a cooperative configuration, the information provided by two or more independent inhomogeneous sensors is used to derive data that would not be available from any sensor individually. In precision metrology and reverse engineering, the requirements on the cooperative integration of inhomogeneous sensors are increasing in terms of accuracy, flexibility, and automation of the whole measurement process.

Usually, each sensor employed in a multisensor system has its own coordinate system. An important task is to calibrate the transformation relationship of all sensors to ensure that the measurement data captured by the sensors can be aligned and merged into a common coordinate system and that a complete model can be obtained. At present, there are two types of methods for calibration of the global coordinate system. One adopts a unified calibration object according to the different types of sensors and realizes the global calibration by measuring the calibration object with each sensor [8–10]. The other adopts the numerical calculation of the rigid-body transformation matrix by extracting three-dimensional data characteristics of the same object measured by each sensor to achieve global calibration [11, 12].

In the case of a multisensor system that comprises a contact probe and an optical scanning probe, geometric invariants of the unified target are usually adopted to achieve global calibration. Targets in these methods include a standard sphere [10, 13], standard sphere gauge [14, 15], polyhedron [9, 16], and other special standard objects [17]. Fernández et al. [18] analyzed and compared multisensor calibration methods using polyhedral and spherical artefacts and finally adopted a sphere-based method to calibrate their integrated system. A ball–plate standard has been used to calibrate a fringe projection sensor and CMM integration system [15]. These methods suffer from difficult manufacturing and the high cost of the calibration artefact.

The second type of method calibrates the global coordinate system by obtaining measurement data of the same part. It has the advantages of low cost, flexibility, and easy realization of full automation. However, the calibration accuracy is limited by the environment and the surface characteristics of the measured object. These methods are widely used in unmanned driving and robot navigation [19, 20] and are less used in the precise measurement of the geometry of parts. To reduce the effect of data noise, Huang et al. [17] presented an iterative registration and fusion approach for multisensor calibration. They used surfaces reconstructed from multiple point clouds and Kalman filtering to fuse data and enhance the registration accuracy and robustness. A modified iterative closest-point algorithm using curvature information has been proposed to improve the registration performance for multisensor coordinate metrology [21]. Zhao et al. [8] adopted separate standards measured by a video camera sensor and a tactile probe sensor; a common reference coordinate system was created using a multistep registration method to match the datasets of the two inhomogeneous sensors. Shaw et al. [11] adopted a two-step method comprising coarse and fine registration with some features of the measured parts and then calculated the common reference coordinates automatically.

It is obvious that all of the above calibration methods and theories require a sufficient number of coincident points or common standards and features. The calculation of coincident points from measured parts is not possible when the parts are two-sided and very thin, such as aeroengine blades. It is therefore important to have exact knowledge of the position of each sensor in the measurement setup when the coincident points are calculated using standard artefacts, such as spheres, polyhedra, and special artefacts. Owing to the different measuring techniques and working principles of heterogeneous sensors, the calibration process is time-consuming, and an accurate alignment cannot be ensured.

The calibration of such spatial relationships among different sensors includes two tasks: intrinsic calibration, where internal sensor parameters are determined, and extrinsic calibration, where the position and orientation of a sensor relative to a given coordinate system are determined. This paper presents a simple and optimal extrinsic calibration method for two inhomogeneous optical sensors in a three-dimensional (3D) measurement system: a conoscopic holography sensor (CHS) and a structured light sensor (SLS). In this method, a common planar calibration board is used in the intrinsic calibration of the SLS with different poses in front of the multisensor system. The extrinsic calibration problem is solved through a 3D reconstruction of the calibration board and the geometric constraints relating the views of the SLS and CHS.

The remainder of the paper is organized as follows. Section 2 introduces the mechanical structure and measuring principle of the optical inspection device. Section 3 presents the proposed calibration method. Section 4 reports on a calibration experiment and complex surface inspection experiment and analyzes the results. Section 5 gives the conclusions drawn from the results of the study.

2. System Setup and Measuring Principle

2.1. System Setup

The system setup and mechanical structure of the measurement system are shown in Figure 1. The multisensor system mainly comprises two inhomogeneous optical sensors (i.e., the CHS and SLS), a four-axis motion platform with three linear axes and a rotating shaft, and an antivibration platform. The two sensors are fixed rigidly on the Z-axis and are movable in the X-, Y-, and Z-directions. The measured object is mounted on the rotation platform using a special fixture.

The SLS has a structured light projection unit and an acquisition system with two cameras, one on each side. Each camera captures at a frame rate of 60 frames per second. The SLS is characterized by a resolution of 0.05 mm and an accuracy of 0.02 mm, tested according to the VDI/VDE 2634 guideline for optical scanning systems [22].

The CHS is a laser displacement sensor based on conoscopic holography interference. Its modular setup with interchangeable objective lenses enables various standoffs and working ranges with the same sensor. By principle, the CHS collects only one-dimensional measuring data (the distance from the transmitter to the projection of the laser beam on the material surface); the three-dimensional data are obtained by combining these distances with the encoder positions of the motion platform. When integrated in measurement devices, conoscopic technology offers major benefits over classical triangulation, such as collinear measurement, low sensitivity to electronic noise, and multiple standoffs. The probe reads the pulse value of the grating encoder directly; the measurement accuracy of the CHS is therefore not affected by the motion accuracy of the mechanical platform. Table 1 shows the main characteristics of this CHS for the 50 mm lens.

2.2. Coordinate Systems

The SLS and CHS are mounted on the four-axis platform so that they can be displaced along the X- (200 mm), Y- (200 mm), and Z-directions (300 mm) to acquire data. Based on the configuration of the fusion noncontact measuring system, four different coordinate systems are defined, as shown in Figure 2.

(i) The machine coordinate system (MCS) of the motion platform
(ii) The workpiece coordinate system (WCS)
(iii) The SLS coordinate system (SLCS)
(iv) The intrinsic coordinate system of the CHS (CHCS)

Supposing a point P on the measured surface, we have

P_M = P_S + T_S, (1)

P_M = P_C + T_C, (2)

where P_M denotes the coordinates of P in the MCS, P_S denotes the coordinates in the SLCS, P_C denotes the coordinates in the CHCS, and T_S and T_C are vectors expressing the positions of the sensors (SLS and CHS) in the MCS.

Combining equations (1) and (2) yields

P_S = P_C + T, T = T_C − T_S, (3)

where T is the translation vector between the SLCS and the CHCS.

Furthermore, from the viewpoint of coordinate transformation, equation (3) can be rewritten as

R_S P_S + t_S = R_C P_C + t_C, (4)

where R_S and R_C are, respectively, the matrices of rotation from the SLCS and CHCS to the WCS, while t_S and t_C are, respectively, the translations from the SLCS and CHCS to the WCS. In the integrated system, the SLCS is taken as the WCS, so R_S = I and t_S = 0, where I is the identity matrix, and equation (4) reduces to P_S = R_C P_C + t_C. Our goal in this paper is to develop a calibration method to acquire the rotation matrix R_C and the translation vector t_C.
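As a concrete illustration of the rigid transformation above, the following minimal Python sketch (not from the paper; the rotation and translation values are purely hypothetical) maps a CHS point into the SLS coordinate system:

```python
import math

def transform_point(R, T, p):
    """Map a point p from the CHS frame into the SLS frame: p' = R p + T."""
    return [sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3)]

# Hypothetical calibration result: a 90-degree rotation about Z and a small offset.
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]
T = [10.0, -5.0, 2.0]

p_chs = [1.0, 0.0, 0.0]   # a point measured in the CHS frame
p_sls = transform_point(R, T, p_chs)
```

Once the extrinsic parameters are calibrated, every CHS point is mapped this way so that both sensors' data share the SLS coordinate system.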

3. Calibration Method

The flowchart in Figure 3 illustrates all steps of our method. To begin with, a planar calibration board is placed in front of our system in different positions and orientations, such that it is visible to both the SLS and CHS. For each pose, two images are captured by the cameras of the SLS, and the center points are extracted automatically. The 3D positions of the center points are then calculated in the coordinate system of the SLS, and the CHS is used to obtain a point cloud on the calibration board for each pose. A least-squares fitting- (LSF-) based method is then used to best fit a 3D plane to the reconstructed points of the calibration board in the SLS coordinate system. Finally, a nonlinear optimization is adopted to calculate the extrinsic parameters, using the constraint that all points of the CHS must lie on the calibration plane estimated by the SLS. The calibration process uses only the planar calibration board, and the extrinsic parameters of the SLS and CHS are thus calibrated at the same time as the intrinsic/extrinsic parameters of the SLS.

3.1. SLS Calibration

Calibration of the two cameras is crucial to the SLS, since it determines the reconstruction accuracy of the three-dimensional points of the calibration board [23]. The camera calibration process requires that the control points be perfectly positioned on the planar calibration board to ensure calibration accuracy. However, it is very difficult and expensive to manufacture a high-precision calibration board. To cope with this problem, a nonlinear optimization method called bundle adjustment (BA) is used in the stereo vision system [24, 25]. It adjusts the coordinates of the benchmarks and thereby estimates the calibration parameters more accurately even with an imperfect target, as demonstrated both theoretically and experimentally in [24]. The overall calibration process is briefly described as follows.

3.1.1. Initial Parameter Calibration

The intrinsic parameters of the left and right cameras are calculated using Zhang’s method to obtain initial values for the two cameras [26]. Zhang’s method is the most popular camera calibration method using a flat checkerboard. It is an easy-to-use and flexible technique that only requires the camera to observe a planar pattern at a few (at least two) different orientations. This is a conventional calibration process, which cannot eliminate the accuracy loss caused by the manufacturing error of the target.

3.1.2. Parameter Optimization Using BA

The bundle adjustment method is used to optimize the intrinsic parameters of a single camera, and the world coordinates of the control points are treated as unknowns to be optimized at the same time. To obtain the optimal estimation of the parameters, we define the following cost function using the reprojection error:

min Σ_j Σ_i d( m_ij , p(K, R_j, t_j, X_i) )^2, (5)

where m_ij denotes the image coordinates of the i-th benchmark observed in the j-th pose, K is the camera intrinsic parameter matrix, (R_j, t_j) are the camera extrinsic parameters in the j-th pose, X_i is the i-th benchmark, p(K, R_j, t_j, X_i) is the reprojection of the i-th benchmark in the j-th pose, and d(·, ·) is the Euclidean distance between two image points.
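The reprojection-error cost at the heart of the bundle adjustment can be sketched as follows; the pinhole projection, the intrinsic values, and the toy data are illustrative assumptions, not the paper's actual parameters:

```python
def project(K, R, t, X):
    """Project a 3D benchmark X into the image with intrinsics K and pose (R, t)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

def reprojection_cost(observations, K, poses, benchmarks):
    """Sum of squared Euclidean distances between observed and reprojected points."""
    cost = 0.0
    for (i, j, u_obs, v_obs) in observations:
        u, v = project(K, poses[j][0], poses[j][1], benchmarks[i])
        cost += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return cost

# Toy example: one benchmark, one identity pose, simple intrinsics (all hypothetical).
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
poses = [(I3, [0.0, 0.0, 0.0])]
benchmarks = [[0.1, 0.2, 2.0]]
obs = [(0, 0, 360.0, 320.0)]  # (benchmark index, pose index, u, v)
cost = reprojection_cost(obs, K, poses, benchmarks)
```

In the BA step, a nonlinear solver varies K, the poses, and the benchmark coordinates jointly to drive this cost to its minimum.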

3.1.3. Optimization of Structure Parameters

The intrinsic parameters obtained from Steps 1 and 2 are fixed, and only the structure parameters R and t of the left and right cameras of the SLS are optimized. The cost function is written as

min Σ_{k=1}^{N} ( x_{r,k}^T [t]_× R x_{l,k} )^2, (6)

where x_{l,k} denotes the (normalized homogeneous) image coordinates of the left camera, x_{r,k} is the corresponding point of the right camera, N is the total number of corresponding points over all views of the calibration board, and [t]_× is the antisymmetric matrix of the translation vector t. In the optimization process, the coordinate system of the left camera is taken as the reference world coordinate system.
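The residual minimized in this step can be sketched as below, assuming the standard essential-matrix form built from the antisymmetric matrix of t (an assumption consistent with, but not spelled out by, the text); all numeric values are illustrative:

```python
def skew(t):
    """Antisymmetric (skew-symmetric) matrix [t]x of a translation vector t."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def epipolar_residual(xl, xr, R, t):
    """Residual xr^T [t]x R xl for normalized homogeneous image points."""
    E = matmul(skew(t), R)
    Exl = [sum(E[i][j] * xl[j] for j in range(3)) for i in range(3)]
    return sum(xr[i] * Exl[i] for i in range(3))

# Pure sideways translation, no rotation: corresponding points on the same
# scanline satisfy the epipolar constraint, so the residual is zero.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
res = epipolar_residual([0.0, 0.0, 1.0], [0.2, 0.0, 1.0], I3, [1.0, 0.0, 0.0])
```

The structure-parameter optimization sums the squares of such residuals over all N correspondences and minimizes over R and t.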

3.2. Center Point Triangulation and 3D Plane Fitting
3.2.1. Point Triangulation

Triangulation is a process of reconstructing the 3D structure of a scene from corresponding points. We can compute the 3D position of the corresponding points using the line-of-sight method when the configuration parameters of the cameras are known. Unfortunately, correspondence detection uncertainty is inevitable in practice, and the corresponding lines of sight may therefore not meet in the scene. Thus, a middle-point method is commonly adopted. However, this simple method does not give the best results. Kanatani et al. [27] proposed an optimal correction strategy for triangulation, which is fast and robust. In this paper, we apply a subpixel center point extraction method to obtain the corresponding center points x_l and x_r and Kanatani’s method to compute the optimally corrected center point pairs x̂_l and x̂_r.

Let P_l = K_l [I | 0] be the projection matrix of the left camera; P_r = K_r [R | t] has a form similar to that of P_l. For the corrected point pair x̂_l and x̂_r, the homogeneous 3D point X satisfies [23]

x̂_l × (P_l X) = 0 and x̂_r × (P_r X) = 0,

where × denotes the cross product of homogeneous image coordinates, and X is obtained as the least-squares solution of the stacked linear equations.

The 3D coordinates are acquired by triangulating all center point pairs extracted from the left and right camera images. The next step is to fit a 3D plane using all reconstructed 3D points.
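For intuition, the simple middle-point triangulation mentioned above (not Kanatani's optimal correction) can be sketched as follows; the camera centers and ray directions in the example are hypothetical:

```python
def midpoint_triangulation(c1, d1, c2, d2):
    """Midpoint of the common perpendicular of two lines of sight.
    Each line of sight is c + s * d (camera center c, ray direction d)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Minimize |(c1 + s1*d1) - (c2 + s2*d2)|^2 over s1, s2 (closed form).
    r = [c1[i] - c2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # nonzero for non-parallel rays
    s1 = (b * e - c * d) / denom
    s2 = (a * e - b * d) / denom
    p1 = [c1[i] + s1 * d1[i] for i in range(3)]  # closest point on ray 1
    p2 = [c2[i] + s2 * d2[i] for i in range(3)]  # closest point on ray 2
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

# Two cameras 1 unit apart whose rays intersect exactly at (0.5, 0, 2).
X = midpoint_triangulation([0.0, 0.0, 0.0], [0.5, 0.0, 2.0],
                           [1.0, 0.0, 0.0], [-0.5, 0.0, 2.0])
```

When detection noise makes the rays skew, the returned midpoint is the point equidistant from both lines of sight, which is exactly the weakness Kanatani's correction addresses.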

3.2.2. 3D Plane Fitting

Using point clouds to evaluate the best-fitting plane is a classic linear regression problem in mathematics. Either LSF or principal component analysis can effectively solve this problem. We choose the LSF method for its simplicity; detailed comparisons can be found in [28]. A plane in 3D Euclidean space is formulated as

a x + b y + c z + d = 0. (7)

The distance from a point (x_i, y_i, z_i) to the plane is expressed as

d_i = |a x_i + b y_i + c z_i + d| / sqrt(a^2 + b^2 + c^2). (8)

The best-fitting condition of minimizing the squared sum of the distances is then

min Σ_i d_i^2. (9)

Applying the described method, the 3D plane fitting is performed for various positions and orientations of the calibration board. The results of fitting seven 3D planes to the reconstructed 3D points are shown in Figure 4. The colored frames and points, respectively, represent the estimated planes and the reconstructed 3D center points.
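A simplified LSF plane fit can be sketched as follows; for brevity it minimizes residuals along z (fitting z = a x + b y + c) rather than the orthogonal distances above, which is a reasonable approximation when the board is far from vertical, and the sample points are hypothetical:

```python
def fit_plane_lsf(points):
    """Fit z = a*x + b*y + c to (x, y, z) points by linear least squares."""
    # Accumulate the normal equations M * [a, b, c]^T = v with rows [x, y, 1].
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    v = [Sxz, Syz, Sz]
    # Solve the symmetric positive-definite 3x3 system by Gaussian elimination.
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f * M[i][k]
            v[j] -= f * v[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (v[i] - sum(M[i][k] * coef[k] for k in range(i + 1, 3))) / M[i][i]
    return coef

# Hypothetical reconstructed center points lying exactly on z = 2x - y + 3.
points = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane_lsf(points)
```

The orthogonal-distance fit of the equations above would instead take the plane normal as the smallest-eigenvalue eigenvector of the points' covariance matrix.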

3.3. Path Planning and Automatic Measurement by CHS

The CHS is a point-based laser sensor in our multisensor inspection system. One of the drawbacks of the sensor is its limited working range. The best accuracy is therefore achieved when the measured surface lies at the center of the working range. This condition implies that the sensor maintains a constant standoff with the part’s surface. In the calibration process, the calibration board is placed in different positions and orientations. Planning of the measurement path (i.e., controlling the constant standoff) is therefore required for obtaining points on the calibration board in a highly precise manner. A computer-aided design- (CAD-) independent sensor standoff control method, which does not require manual intervention or a fixture, has been proposed [28]. However, this method is not suitable for this work. On the basis of our calibration principle, a simple method is presented here to acquire sufficient points on the calibration board. Figure 5 is a schematic diagram of the improved method.

The scanning process is described in more detail as follows:

Step 1. Initial phase of measurements: first, the relative positions of the CHS and calibration board are adjusted to maintain the standoff and acquire the first point. Then, the CHS is moved with a small step size Δx in the X-direction while a feedback control moves the CHS in the Z-direction; this ensures that the plane variations of the calibration board are not so large that the surface falls out of the working range of the CHS. The CHS adjusts itself in the Z-direction so that the calibration-board plane lies at the center of the working range for the next measurement. Another three or four points are obtained by repeating the above process. The same method is used in the Y-direction to obtain four or more points.

Step 2. Measurement point planning: this step uses the points obtained in Step 1 to fit two lines as the scanning paths in the X- and Y-directions to obtain more accurate points on the calibration board. These points are fitted by a plane using the method described in Section 3.2.2. After obtaining the plane, we capture the points that are used for the optimal estimation of the extrinsic parameters of the SLS and CHS according to our calibration requirements.
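The path-planning idea in Steps 1 and 2 can be sketched as follows; fitting the scan line in the X–Z plane, the function names, and the standoff value are illustrative assumptions:

```python
def fit_scan_line(samples):
    """Least-squares line z = m*x + b through probe samples (x, z) from Step 1."""
    n = float(len(samples))
    sx = sum(x for x, _ in samples)
    sz = sum(z for _, z in samples)
    sxx = sum(x * x for x, _ in samples)
    sxz = sum(x * z for x, z in samples)
    m = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    b = (sz - m * sx) / n
    return m, b

def plan_standoff_positions(m, b, xs, standoff):
    """Z command for each planned X so the board stays at the working-range center."""
    return [(x, m * x + b + standoff) for x in xs]

# Hypothetical Step-1 samples on a slightly tilted board (z = 0.1*x + 5).
samples = [(0.0, 5.0), (1.0, 5.1), (2.0, 5.2), (3.0, 5.3)]
m, b = fit_scan_line(samples)
path = plan_standoff_positions(m, b, [0.0, 4.0], 40.0)
```

The same fit is repeated in the Y-direction, and the planned positions keep the CHS at a constant standoff while denser, more accurate points are collected.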

3.4. Optimal Estimation of the Rigid Transformation between the SLS and CHS

This section provides details on how to effectively solve the extrinsic calibration problem for the system of SLS and CHS. A nonlinear optimization with outlier detection is proposed to refine the extrinsic parameters.

Our proposed calibration method is to place a planar calibration board in front of the system such that it is visible to both the SLS and CHS. The points on the calibration board are obtained with the CHS, adopting the method described in Section 3.3. Images of the calibration board in various positions and orientations are gathered to calculate the intrinsic parameters of the SLS. In an ideal situation, the CHS points lie on the calibration plane estimated from the SLS, and we have a geometric constraint on the rigid transformation between the CHCS and SLCS. The geometric constraint is expressed as [19]

(n^T (H p̃) + d) / ||n|| = 0, (11)

with

H = [R | t], (12)

where n is the normal vector of the plane, d = −n^T q, q is a known point on the plane, p̃ and P̃ are, respectively, the normalized homogeneous coordinates of a CHS point p and of its image P in the SLCS, H is a 3 × 4 matrix, and the left-hand side of equation (11) gives the Euclidean distance between a laser point and the calibration plane. Given different poses of the calibration board, we define an error function as the sum of such squared distances for each CHS point p_jk and the corresponding estimated 3D plane Π_k:

E(H) = Σ_k Σ_j r_jk^2, (13)

r_jk = (n_k^T (H p̃_jk) + d_k) / ||n_k||, (14)

where Π_k is the estimated 3D plane of the calibration board placed in the k-th pose. In equation (14), r_jk represents the distance from a point measured by the CHS to the fitted plane measured by the SLS, and it can be used to indicate the registration error between the measurement coordinate systems of the SLS and CHS. In theory, this error would be zero if the two measurement coordinate systems were aligned perfectly. We minimize (13) as a nonlinear optimization problem using the Levenberg–Marquardt method. Once H is determined, we can estimate the relative orientation and position as

R = [h_1 h_2 h_3], t = h_4, (15)

where h_i is the i-th column of the matrix H.
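The point-to-plane residuals driving the Levenberg–Marquardt step can be sketched as follows; the plane representation (normal n and offset d with n·x + d = 0) and all numeric values are illustrative assumptions:

```python
import math

def point_plane_residuals(R, T, planes, chs_points):
    """Distances from CHS points, mapped into the SLCS by (R, T), to the
    calibration planes estimated by the SLS. The extrinsic calibration
    minimizes the sum of squares of these residuals (e.g., with a
    Levenberg-Marquardt solver)."""
    res = []
    for (n, d), pts in zip(planes, chs_points):
        norm = math.sqrt(sum(c * c for c in n))
        for p in pts:
            # Rigid transform of the CHS point into the SLS frame.
            q = [sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3)]
            # Signed point-to-plane distance for plane n.x + d = 0.
            res.append((sum(n[i] * q[i] for i in range(3)) + d) / norm)
    return res

# One hypothetical pose: the SLS-estimated plane is z = 0; a CHS point that
# should lie on it ends up 0.5 away under a deliberately wrong translation.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
planes = [((0.0, 0.0, 1.0), 0.0)]
chs_points = [[[1.0, 2.0, 0.0]]]
res = point_plane_residuals(I3, [0.0, 0.0, 0.5], planes, chs_points)
```

An optimizer perturbs R and T until all residuals approach zero, at which point the two measurement coordinate systems are aligned.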

4. Experimental Results

The proposed algorithm was implemented in C++ and tested through calibration experiments and the measurement of a complex surface on our developed multisensor measuring system to verify its performance.

4.1. Calibration Experiments

In the calibration experiments, the temperature and relative humidity varied in the ranges of 20.5–21.5°C and 39%–42%, respectively. The accuracy of the two sensors is described in Section 2. The calibration board had a point array with a center distance of 8 mm, and the center point error was less than 5 μm, as verified by a high-accuracy image measuring instrument. The image of the calibration board was captured by the two cameras of the SLS, and the surface of the calibration board was detected by the CHS for seven poses. The measurement process is shown in Figure 6. Ten independent trials were carried out in the working volume, and the RMS errors of the parameters were computed to evaluate the robustness of the calibration results. Meanwhile, the same experiment was carried out by measuring a standard ball. The point clouds of the standard ball acquired by the SLS and CHS are shown in Figures 7(a) and 7(b), respectively. The calibration parameters can be estimated by the ICP algorithm using the Matlab function pcregrigid. The comparison results are shown in Table 2. In the experiment, because the ground truth of the transformation parameters could not be obtained, each estimated rigid transformation parameter was compared with the first one to test the repeatability of the calibration results. Errors are computed as

e_R = ||R_1 − R_i|| and e_T = ||T_1 − T_i||,

where R_1 and T_1, the first parameters to be measured, are taken as ground-truth values, while R_i and T_i are the estimated values.
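Since the repeatability metrics compare each trial against the first, a common concrete choice (an assumption here, as the paper's exact error formulas are not reproduced) is the relative rotation angle and the translation distance:

```python
import math

def rotation_error_deg(R_ref, R_est):
    """Angle (degrees) of the relative rotation R_ref^T * R_est."""
    Rt = [[R_ref[j][i] for j in range(3)] for i in range(3)]  # transpose
    Rrel = [[sum(Rt[i][k] * R_est[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
    tr = Rrel[0][0] + Rrel[1][1] + Rrel[2][2]
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp for numerical safety
    return math.degrees(math.acos(c))

def translation_error(T_ref, T_est):
    """Euclidean distance between two estimated translation vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(T_ref, T_est)))

# Hypothetical check: a second trial rotated 1 degree about Z from the first.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
th = math.radians(1.0)
Rz = [[math.cos(th), -math.sin(th), 0.0],
      [math.sin(th),  math.cos(th), 0.0],
      [0.0, 0.0, 1.0]]
err_rot = rotation_error_deg(I3, Rz)
err_trans = translation_error([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])
```

Computing these deviations over the ten trials and taking their RMS gives the repeatability figures compared in Table 2.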

It can be concluded that the rotation and translation errors of the proposed method are smaller than those obtained using the standard ball. This is mainly because, limited by the depth of field of the CHS, only a small part of the ball crown can be collected. Moreover, the ball data acquired by the SLS are affected by phase noise and are not as accurate as the reconstructed data of the calibration board. Besides its higher calibration accuracy, the proposed method also calibrates the SLS synchronously, which improves both calibration accuracy and calibration efficiency. Considering the accuracy of the machine and the sensors and the requirements for inspecting the surfaces of complex parts, the deviation is deemed acceptable.

4.2. Measurement of Complex Parts and Accuracy Analyses

A complex part machined on a computer-numerical-control five-axis machine tool was measured to test the calibration error and thus confirm the validity of the present method. The real part and its CAD model are shown in Figures 8(a) and 8(b). Firstly, the complex surface part was measured by a CMM to obtain the machining error of the part. The point cloud and the difference between the measurements and the CAD model are shown in Figures 9(a) and 9(b). The figure shows that the standard deviation of the difference for the part is 0.0084 mm, which is mainly the machining error. The SLS was then used to capture the fringe images, and the point cloud of the part was reconstructed. The CHS was moved along the X-, Y-, and Z-directions, as shown in Figure 10, and 57 sections of the target were selected for scanning to obtain the point cloud. The SLS and CHS point clouds are, respectively, shown in Figures 11(a) and 11(b). The transformation of the two point clouds of the part based on the calibration parameters is shown in Figure 11(c), and the result of the accuracy analysis is shown in Figure 11(d). The deviation of the complex surface part is 0.011 mm. This deviation mainly depends on the system calibration error and the sensor and platform accuracy, and it can be considered the measurement accuracy of the developed device. This measurement accuracy is suitable for the inspection of most complex surface parts.

5. Conclusions

We presented a simple and robust method of calibrating the extrinsic parameters of a multisensor measurement system integrating an SLS and a CHS. With this method, all measured points of a complex part derived from the different optical sensors are represented in a common coordinate system. The proposed method uses only a planar calibration board for the calibration of the SLS and measures a few poses of the board with the SLS and CHS to obtain points on a plane. A geometric constraint between the plane reconstructed from the SLS points and the points of the CHS is then used to calculate the extrinsic parameters. This extrinsic calibration method is simple and does not require special artefacts. Results of a calibration experiment showed that the calibration process is robust. In further measurement experiments, the transformation results for a complex surface part showed that the calibration parameters are sufficient as initial values for part-data registration in a multiple-optical-sensor system. The proposed calibration method can be applied to the multiple-optical-sensor inspection of complex surface parts.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the Ordinary Colleges and Universities Key Field Special Project of Guangdong Province (2020ZDZX2039) and the National Natural Science Foundation of China (grant numbers 51505134 and 51405138). We thank Liwen Bianji, Edanz Editing China (http://www.liwenbianji.cn/ac), for editing a draft of this manuscript.