Abstract

Lens distortion is practically unavoidable in a real optical imaging system; it causes nonuniform geometric distortion in the images and gives rise to additional errors in vision measurement. In this paper, a planar-dimensions vision measurement method is proposed by improving camera calibration, in which the lens distortion is corrected on the pixel plane of the image. The method can be divided into three steps: firstly, the feature points, taken only from the small central region of the image, are used to obtain a more accurate perspective projection model; secondly, rather than defining a uniform model, a smoothing spline function is used to describe the lens distortion in the measurement region of the image, and two correction functions are obtained by fitting two deviation surfaces; finally, a measurement method for planar dimensions is proposed, in which an accurate magnification factor of the imaging system is obtained by using the correction functions. The effectiveness of the method is demonstrated by applying it to the measurement of shaft diameter. Experimental data prove that accurate planar-dimensions measurements can be performed with the proposed method even when the images are deformed by lens distortion.

1. Introduction

Image measurement has the advantages of noncontact operation, fast speed, and high precision, and it has applications in industry, medicine, and other fields [1–3]. Because the dimensions of most mechanical parts lie in a plane, planar-dimensions measurement (2D measurement) has been widely used in industrial measurement [4, 5]. Compared with 3D vision measurement, it has several advantages, for example, (a) lower cost (only one camera is involved) and (b) higher precision (no cross-camera matching and triangulation). Since no triangulation procedures are involved in 2D measurement, the camera calibration is often omitted and lens distortions are also left uncorrected [6].

In fact, for real camera lenses, such as a fixed focal length lens, a zoom lens, or even an expensive high-quality telecentric lens, image distortions unavoidably exist due to lens aberrations and misalignment of optical elements [7]. Because of the nonuniform characteristics of lens distortion, the image of a mechanical part on the sensor plane may be warped. These distortions are picked up by sub-pixel detection algorithms and hence corrupt the apparent size of mechanical parts, reducing the accuracy of 2D measurement. As precision requirements become higher, the errors due to lens distortion are well worth considering and should be eliminated. Although various powerful camera calibration techniques [8–10] have been developed in the field of computer vision and have been successfully applied to 3D measurement, these techniques are too complicated for 2D measurement, because they correct the lens distortion on a hypothetical image plane in the calibration model rather than on the pixel plane of the real image [11]. A simple, easy-to-implement, yet effective lens distortion correction method is therefore necessary.

In practice, lens distortion can be regarded as a systematic error since the camera and lens are fixed. Therefore, it is not necessary to use a uniform model of image distortion for camera calibration [12, 13]. Since the influence of image distortion depends on the position with respect to the principal point (close to the image center), the central region of the image, rather than the whole image, is generally used for measurement to ensure accuracy. In the present work, a planar-dimensions vision measurement method is therefore proposed in which lens distortion is corrected in a region of interest (the measurement region) on the pixel plane. Firstly, a linear camera model and the feature points near the image center are used to calibrate the perspective projection; then, the image distortion in the measurement region, which is larger than the central calibration area, is corrected using a smoothing spline function; finally, a method of measuring planar dimensions is proposed by means of the correction functions. The accuracy and performance of the method are tested in a measurement experiment.

The remainder of the paper is organized as follows: Section 2 presents a linear calibration model. The distortion correction function is proposed in Section 3. A planar-dimensions vision measurement method is given and an experiment of measuring shaft diameter is carried out in Section 4. Finally, conclusions are drawn in Section 5.

2. The Calibration of Perspective Projection

At present, the pinhole projective model is commonly calibrated by mapping 3D scenes to the 2D camera image plane [10]. The mapping from a world point to its image point can be expressed as

$s\,\tilde{m} = A\,[R \;\; t]\,\tilde{M}$, (1)

where $\tilde{m} = [u, v, 1]^{T}$ is the homogeneous image point in pixel coordinates, $\tilde{M} = [X, Y, Z, 1]^{T}$ is the homogeneous world point, $A$ is the intrinsic parameter matrix, $R$ and $t$ are the rotation matrix and translation vector of the extrinsic parameters, and $s$ is an arbitrary scale factor. The calibration is completed using the Levenberg-Marquardt algorithm, and the radial distortion of the camera is corrected by a model of fixed form,

$\breve{u} = u + (u - u_{0})\bigl[k_{1}(x^{2}+y^{2}) + k_{2}(x^{2}+y^{2})^{2}\bigr], \quad \breve{v} = v + (v - v_{0})\bigl[k_{1}(x^{2}+y^{2}) + k_{2}(x^{2}+y^{2})^{2}\bigr]$, (2)

where $(u_{0}, v_{0})$ is the principal point, $(x, y)$ are the ideal normalized image coordinates, and $k_{1}$, $k_{2}$ are the radial distortion coefficients. However, in a real imaging system the distribution of distortion in the image is not uniform, and experience shows that the distortion near the projection center is very small. Therefore, it is not necessary to use (2) to correct the lens distortion in the central area. For this reason, this work puts forward a method of calibrating the perspective projection accurately, in which a pinhole model and only the feature points near the projection center are used to calibrate the camera without considering image distortion. Assuming the planar target lies on the plane $Z = 0$ of the world coordinate system, so that

$\tilde{M} = [X, Y, 1]^{T}$, (3)

a linear calibration model is obtained from (1) and (3) while ignoring (2):

$s\,\tilde{m} = H\,\tilde{M}, \quad H = A\,[r_{1} \;\; r_{2} \;\; t]$, (4)

where $r_{i}$ is the $i$th column vector of the rotation matrix, $t$ is the translation vector, $A$ is the intrinsic parameter matrix, and the homography $H$ is a $3 \times 3$ matrix.
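For illustration, the linear model (4) could be estimated from detected corner points roughly as in the following sketch. This is not the paper's implementation: it assumes NumPy arrays of planar world coordinates and detected pixel coordinates, uses OpenCV's least-squares homography estimation as a stand-in for the calibration step, and the function and parameter names (e.g., calibrate_central_homography, radius_px) are illustrative.

```python
import numpy as np
import cv2

def calibrate_central_homography(world_pts, image_pts, center, radius_px):
    """Estimate the homography H of (4) using only the corner points that lie
    within radius_px of the projection center, where lens distortion is
    assumed to be negligible."""
    world_pts = np.asarray(world_pts, dtype=np.float64)   # (N, 2) planar world coords, Z = 0
    image_pts = np.asarray(image_pts, dtype=np.float64)   # (N, 2) detected pixel coords
    # keep only the corners close to the assumed projection center
    mask = np.linalg.norm(image_pts - np.asarray(center, dtype=np.float64), axis=1) < radius_px
    # plain least-squares estimation (method=0), no distortion term in the model
    H, _ = cv2.findHomography(world_pts[mask], image_pts[mask], method=0)
    return H
```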

In the experiment, nine patterns of a checkerboard are acquired by the camera with a 25 mm fixed lens, and the image resolution is pixels, as shown in Figure 1. The size of the grids on the board is mm, and the corner points of the patterns are detected using the method of [14]. In this paper, only the corner points in the central region of pixels around the projection center of the patterns are used in (4) to calibrate the camera, as shown in Figure 2. The residual of the calibration in world coordinates can be calculated by the following formula:

$e_{j} = \frac{1}{N}\sum_{i=1}^{N}\bigl\| M_{i} - \hat{M}_{i} \bigr\|$, (5)

where $e_{j}$ is the mean residual of the $j$th pattern in metric units, $N$ is the number of corner points used on one pattern, $M_{i}$ denotes the world coordinate of the $i$th corner point, and $\hat{M}_{i}$ denotes the world coordinate of that corner point calculated by model (4). In this way, the residuals of the corner points in the central region of each pattern are calculated and listed in Table 1. For comparison, Table 1 also gives the residuals of the central region calculated with Zhang's model, which is calibrated using all the corner points of the patterns.
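A minimal sketch of how the residual (5) could be evaluated is given below, assuming the homography from the previous sketch; mapping detected pixel corners back to the world plane through the inverse homography is one plausible reading of "the world coordinate calculated by the model (4)", and the function name mean_world_residual is illustrative.

```python
import numpy as np

def mean_world_residual(H, world_pts, image_pts):
    """Mean residual e_j of one pattern in metric units, as in (5): detected
    pixel corners are mapped back to the world plane through H^-1 and compared
    with the known grid coordinates M_i."""
    Hinv = np.linalg.inv(H)
    uv1 = np.column_stack([image_pts, np.ones(len(image_pts))])  # homogeneous pixel coords
    XYw = (Hinv @ uv1.T).T
    XY = XYw[:, :2] / XYw[:, 2:3]                                # inhomogeneous world coords (M_hat_i)
    return float(np.mean(np.linalg.norm(XY - np.asarray(world_pts, dtype=np.float64), axis=1)))
```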

The data in Table 1 show that the present method achieves high calibration accuracy in the central region of pixels. This is because the distortion of the points near the image center is very small, so an accurate perspective projection can be obtained. Since Zhang's model is strongly influenced by the points away from the central region, the calibration accuracy of the central region is inevitably sacrificed.

3. Smoothing Spline Distortion Model

In practice, the image region used for measurement is usually larger than that used for calibration. This may affect the measurement accuracy since the imaging system is not perfect, in particular because of lens distortion. The deviation between the corner positions on the board pattern and those calculated by (4) is therefore considered, as shown in Figure 3. The deviation values in the part of the measurement region outside the calibration region are large, since the homography matrix in (4) is calibrated only with points in the central region. The prediction given by (4) outside the central region is therefore corrected using the checkerboard. The correction can be expressed by

$\Delta u = u_{d} - u_{u}, \quad \Delta v = v_{d} - v_{u}$, (6)

where $u_{u}$ and $v_{u}$ are the undistorted image coordinates on the pixel plane projected by (4), $u_{d}$ and $v_{d}$ are the distorted image coordinates extracted from the board patterns, and $\Delta u$ and $\Delta v$ are the deviations of the points along the $u$ and $v$ coordinate axes. The correction of (6) can then conveniently be treated as a surface fitting problem:

$\Delta u = f(u_{u}, v_{u}), \quad \Delta v = g(u_{u}, v_{u})$. (7)
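As a sketch of how the deviations of (6) could be collected in practice, the world corners can be projected through the calibrated homography and compared with the detected corner positions; the function name pixel_deviations and the array layout are assumptions for illustration.

```python
import numpy as np

def pixel_deviations(H, world_pts, image_pts_detected):
    """Deviations (delta_u, delta_v) of (6): detected (distorted) corner
    positions minus the positions predicted by the linear model (4)."""
    world_pts = np.asarray(world_pts, dtype=np.float64)
    image_pts_detected = np.asarray(image_pts_detected, dtype=np.float64)
    XY1 = np.column_stack([world_pts, np.ones(len(world_pts))])  # homogeneous world coords
    uvw = (H @ XY1.T).T
    uv_undist = uvw[:, :2] / uvw[:, 2:3]                         # (u_u, v_u) from the linear model
    du = image_pts_detected[:, 0] - uv_undist[:, 0]              # delta_u = u_d - u_u
    dv = image_pts_detected[:, 1] - uv_undist[:, 1]              # delta_v = v_d - v_u
    return uv_undist, du, dv
```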

Regarding the correction, previous papers [8–10] used a uniform mathematical model to describe the radial, decentering, and prism distortions over the whole image. However, the distortions depend on the specific imaging system and cannot be fully represented by such a uniform model [12, 13]. Therefore, (7) uses a union of spline functions to describe the distortion only in the local region, and the surface fitting of (7) is performed by the smoothing spline algorithm [15, 16].

As an example, the deviation in the central region of pixels is corrected based on (7) by the smoothing spline algorithm. Since the homography matrix has been calibrated based on (4) using the points in the central area, the world coordinates of the corner points of the nine patterns can be projected onto pixel coordinates. In this way, the total deviation distribution along the $u$ and $v$ axes of the image is acquired by (6). Using the smoothing spline algorithm, two distortion correction functions are obtained; the two fitted deviation surfaces are shown in Figures 4 and 5. As can be observed in Figure 6, the deviation is very small after correction.
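The surface fitting of (7) could be sketched as follows with SciPy's SmoothBivariateSpline standing in for the smoothing spline algorithm of [15, 16]; the smoothing factor, the choice of evaluating the fitted deviation at the measured point location, and the function names fit_correction_surfaces and correct_points are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_correction_surfaces(uv_undist, du, dv, smoothing=None):
    """Fit the two correction functions of (7) as bivariate smoothing splines
    over the measurement region; `smoothing` trades fidelity against noise."""
    u, v = uv_undist[:, 0], uv_undist[:, 1]
    f_u = SmoothBivariateSpline(u, v, du, s=smoothing)  # delta_u = f(u, v)
    f_v = SmoothBivariateSpline(u, v, dv, s=smoothing)  # delta_v = g(u, v)
    return f_u, f_v

def correct_points(points_uv, f_u, f_v):
    """Remove the fitted deviation from measured (distorted) pixel points;
    evaluating the surfaces at the measured location is an approximation
    that is reasonable when the deviations are small."""
    u, v = points_uv[:, 0], points_uv[:, 1]
    return np.column_stack([u - f_u.ev(u, v), v - f_v.ev(u, v)])
```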

4. Planar-Dimensions Vision Measurement Method

In this section, a planar-dimensions measurement method based on the distortion correction functions is proposed and, as an example, applied to the measurement of shaft diameter. Although a shaft is a 3D object, the measurement of its diameter can be treated as a 2D measurement when the optical axis of the imaging lens is approximately perpendicular to the center line of the shaft.

It can be seen in Figure 7 that the central region of pixels contains the main portion of the shaft, whose diameter is about 40 mm. That is to say, this region can be used as the measurement region for this lens, and the measurement range is expected to be about 40 mm. The measurement is then carried out in the following steps:
(1) using the proposed calibration method, the central image region of pixels is calibrated and two distortion correction functions are obtained along the $u$ and $v$ axes of the image;
(2) the two edges of the shaft are detected using a sub-pixel edge detection method [17–19], and the pixel coordinates of the edge points are corrected by the distortion correction functions;
(3) two parallel lines are fitted to the corrected points on the pixel plane, and the pixel distance between the two lines is obtained.
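Steps (2) and (3) could be sketched as below, building on the correct_points helper from the previous sketch. The sub-pixel edge points are assumed to be already available as arrays, and fitting a common edge direction by a principal-axis (SVD) decomposition is one simple way to impose parallelism; the paper does not specify the line-fitting details, so this is illustrative only.

```python
import numpy as np

def shaft_pixel_width(edge1, edge2, f_u, f_v):
    """Steps (2)-(3): correct the two sub-pixel edge point sets with the
    fitted correction functions, fit a common edge direction, and return the
    pixel distance between the two parallel edge lines."""
    e1 = correct_points(np.asarray(edge1, dtype=np.float64), f_u, f_v)
    e2 = correct_points(np.asarray(edge2, dtype=np.float64), f_u, f_v)
    # common direction: principal axis of the centred, pooled edge points
    pooled = np.vstack([e1 - e1.mean(axis=0), e2 - e2.mean(axis=0)])
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)
    direction = vt[0]                                  # unit vector along the edges
    normal = np.array([-direction[1], direction[0]])   # common unit normal of both lines
    # signed offsets of the two parallel lines along the common normal
    return abs(float(np.mean(e1 @ normal) - np.mean(e2 @ normal)))
```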

First, a four-segment shaft with known diameters is measured using the above procedure, as shown in Figure 8. In this way, the metric length per pixel in the measurement region is obtained. Then the other shafts, shown in Figures 9, 10, and 11, are measured by the same operation with this known per-pixel length, and the measured values are listed in Table 2. For comparison, Table 2 also gives the values measured using edge points without correction. By comparison with the real diameters (measured with an electronic digital outside micrometer), the mean absolute error of the proposed method is 0.0045 mm, and both the mean absolute error and the variance are smaller than those of the measurements without correction. The proposed method therefore improves 2D measurement accuracy effectively.
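The final scale calibration and measurement could be expressed as follows, continuing the previous sketches; averaging the metric-per-pixel ratio over the known segments is an assumption about how the single magnification factor is formed, and the function names mm_per_pixel and measure_diameter are illustrative.

```python
import numpy as np

def mm_per_pixel(known_diameters_mm, measured_pixel_widths):
    """Magnification factor (metric length per pixel) estimated from the
    reference four-segment shaft with known diameters (Figure 8)."""
    return float(np.mean(np.asarray(known_diameters_mm, dtype=np.float64) /
                         np.asarray(measured_pixel_widths, dtype=np.float64)))

def measure_diameter(edge1, edge2, f_u, f_v, scale_mm_per_px):
    """Diameter of an unknown shaft from its corrected pixel width."""
    return shaft_pixel_width(edge1, edge2, f_u, f_v) * scale_mm_per_px
```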

5. Conclusion

This study develops a machine vision method for high-precision 2D measurement. In the method, a novel algorithm is proposed by improving the calibration model. In this way, the lens distortion can be corrected on the pixel plane before measuring, and an accurate magnification factor of the imaging system can be obtained. Experimental results indicate that the proposed method achieves a precision of about 0.005 mm when measuring shaft diameters of about 40 mm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The work described in this paper is partially supported by the National Natural Science Foundation of China under Grant nos. 61201084 and 11226335.