Qiucheng Sun, Yueqian Hou, Qingchang Tan, Guannan Li, "A Planar-Dimensions Machine Vision Measurement Method Based on Lens Distortion Correction", The Scientific World Journal, vol. 2013, Article ID 963621, 6 pages, 2013. https://doi.org/10.1155/2013/963621
A Planar-Dimensions Machine Vision Measurement Method Based on Lens Distortion Correction
Abstract
Lens distortion is practically unavoidable in a real optical imaging system; it causes nonuniform geometric distortion in the images and gives rise to additional errors in vision measurement. In this paper, a planar-dimensions vision measurement method is proposed by improving camera calibration, in which the lens distortion is corrected on the pixel plane of the image. The method can be divided into three steps: firstly, the feature points, taken only from a small central region of the image, are used to obtain a more accurate perspective projection model; secondly, rather than defining a uniform model, a smoothing spline function is used to describe the lens distortion in the measurement region of the image, and two correction functions are obtained by fitting two deviation surfaces; finally, a measurement method for planar dimensions is proposed, in which an accurate magnification factor of the imaging system can be obtained by using the correction functions. The effectiveness of the method is demonstrated by applying it to the measurement of shaft diameters. Experimental data prove that accurate planar-dimensions measurements can be performed with the proposed method even when images are deformed by lens distortion.
1. Introduction
Image measurement has the advantages of noncontact operation, fast speed, and high precision, and it has applications in industry, medicine, and other fields [1–3]. Because most mechanical part dimensions lie in a plane, planar-dimensions measurement (2D measurement) has been widely used in industrial metrology [4, 5]. Compared to 3D vision measurement, it offers (a) reduced cost (only one camera is involved) and (b) higher precision (no cross-camera matching and triangulation). Since no triangulation procedures are involved in 2D measurement, the camera calibration is often omitted and lens distortions are also left uncorrected [6].
In fact, for real camera lenses, such as a fixed-focal-length lens, a zoom lens, or even an expensive high-quality telecentric lens, image distortions unavoidably exist due to lens aberrations and misalignment of optical elements [7]. Because lens distortion is nonuniform, the image of a mechanical part on the sensor plane may be warped. These distortions are picked up by subpixel detection algorithms and hence corrupt the apparent size of mechanical parts, reducing the accuracy of 2D measurements. As precision requirements grow, the errors due to lens distortion become significant and should be eliminated. Although various powerful camera calibration techniques [8–10] have been developed in computer vision and successfully used in 3D measurement, they are overly complicated for 2D measurement, because they correct the lens distortion on a hypothetical image plane in the calibration model rather than on the pixel plane of the real image [11]. A simple, easy-to-implement, yet effective lens distortion correction method is therefore necessary.
In practice, lens distortion can be regarded as a systematic error once the camera and lens are fixed. Therefore, a uniform model of image distortion is not necessary for the camera calibration [12, 13]. Since the influence of image distortion depends on the position with respect to the principal point (close to the image center), the central region of the image, rather than the whole image, is generally used for measurement to ensure accuracy. In the present work, a planar-dimensions vision measurement method is therefore proposed by correcting lens distortion in a region of interest (the measurement region) on the pixel plane. Firstly, a linear camera model and the feature points near the image center are used to calibrate the perspective projection; then, the image distortion in the measurement region, which is larger than the central calibration area, is corrected using a smoothing spline function; finally, a method of measuring planar dimensions is proposed by means of these functions. The accuracy and performance of the method are verified by a measurement experiment.
The paper is organized as follows: Section 2 presents a linear calibration model. The distortion correction function is proposed in Section 3. A planar-dimensions vision measurement method is given and an experiment of measuring shaft diameter is carried out in Section 4. Finally, conclusions are drawn in Section 5.
2. The Calibration of Perspective Projection
At present, the pinhole projective model is calibrated by mapping 3D scenes to the 2D camera image plane, as in the literature [10]. The mapping from a world point M = (X, Y, Z)^T to an image point m = (u, v)^T can be expressed as

s [u, v, 1]^T = A [R t] [X, Y, Z, 1]^T, (1)

where s is an arbitrary scale factor, R is the rotation matrix, and t is the translation vector. The calibration is finished using the Levenberg-Marquardt algorithm. Here, the radial distortion of the camera is corrected by (2), whose functional form is fixed in advance:

x_d = x (1 + k1 r^2 + k2 r^4), y_d = y (1 + k1 r^2 + k2 r^4), r^2 = x^2 + y^2. (2)

However, in a real imaging system the distribution of distortion over the image is not uniform, and past experience shows that the distortion near the projection center is very small. Therefore, it is not necessary to use (2) to correct the lens distortion in the central area. Accordingly, this work puts forward a method of calibrating the perspective projection accurately: a pinhole model and only the feature points near the projection center are used to calibrate the camera, without considering image distortion. For a planar target the world coordinate system can be chosen so that Z = 0, reducing (1) to

s [u, v, 1]^T = A [r1 r2 t] [X, Y, 1]^T. (3)

In this way, a linear calibration model is obtained by using (1) and (3) and ignoring (2):

s [u, v, 1]^T = H [X, Y, 1]^T, H = A [r1 r2 t], (4)

where r_i is the i-th column vector of the rotation matrix, t is the translation vector, A is the intrinsic parameter matrix, and the homography H is a 3 × 3 matrix.
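The linear model above can be illustrated with a short sketch: the homography H is estimated from four world-to-pixel correspondences by the standard direct linear transformation (DLT), fixing h33 = 1. The function names, the Gaussian-elimination solver, and the four-point setup are illustrative assumptions, not the authors' implementation (the paper calibrates from many corner points near the image center).

```python
# DLT sketch: estimate the 3x3 homography H mapping planar world points
# (X, Y) to pixel points (u, v), with h33 fixed to 1. Pure-Python illustration.

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_homography(world_pts, pix_pts):
    """Four (X, Y) -> (u, v) correspondences give an 8x8 linear system in h."""
    A, b = [], []
    for (X, Y), (u, v) in zip(world_pts, pix_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
    h = solve_linear(A, b) + [1.0]          # append h33 = 1
    return [h[0:3], h[3:6], h[6:9]]

def project(H, X, Y):
    """Apply the homography with homogeneous normalization."""
    w = H[2][0] * X + H[2][1] * Y + H[2][2]
    return ((H[0][0] * X + H[0][1] * Y + H[0][2]) / w,
            (H[1][0] * X + H[1][1] * Y + H[1][2]) / w)
```

In the paper's setting, only corner points from the small central region would be fed to such an estimator, since there the pinhole model holds almost exactly.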
In the experiment, nine patterns of a checkerboard are acquired by the camera with a 25 mm fixed lens, as shown in Figure 1. The grid size on the board is known, and the corner points of the patterns are detected using the method in reference [14]. In this paper, only the corner points in a small pixel region around the projection center of the patterns are used in (4) to calibrate the camera, as shown in Figure 2. The residual of the calibration in world coordinates can be calculated by the following formula:

e_i = (1/n) * sum_{j=1}^{n} || P_j − P̂_j ||, (5)

where e_i is the mean residual of the i-th pattern in metric units, n is the number of corner points used on one pattern, P_j denotes the world coordinate of a corner point, and P̂_j denotes the world coordinate of the same corner point calculated by model (4). In this way, the residuals of the corner points in the central region of each pattern are calculated and listed in Table 1. For comparison, Table 1 also gives the residuals of the central region calculated by means of Zhang's model, which is calibrated using all the corner points of the patterns.

(a) The small central region in the image
(b) The local region
The data in Table 1 show that the present method achieves high calibration accuracy in the small central region. This is because the distortion of the points near the image center is very small, so an accurate perspective projection can be obtained. Because Zhang's model is influenced strongly by the points away from the central region, calibration accuracy in the central region is inevitably sacrificed.
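The per-pattern residual of (5) reduces to a few lines of code: the mean Euclidean distance between the known world coordinates of one pattern's corners and the world coordinates recovered through the calibrated model. The function name and sample values below are illustrative assumptions.

```python
# Per-pattern calibration residual (5): mean Euclidean deviation, in metric
# units, between true corner coordinates and model-recovered coordinates.
from math import hypot

def mean_residual(true_pts, model_pts):
    """Mean Euclidean deviation over one pattern's corner points."""
    n = len(true_pts)
    return sum(hypot(x - xm, y - ym)
               for (x, y), (xm, ym) in zip(true_pts, model_pts)) / n
```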
3. Smoothing Spline Distortion Model
In practice, the image region used for measurement is usually larger than that used for calibration. This may affect the measurement accuracy because the imaging system is not perfect, for example, due to lens distortion. Now, the deviation between the corner positions on the board pattern and those calculated by (4) is considered, as shown in Figure 3. The deviation values outside the calibration region are large, since the homography matrix in (4) is calibrated only with points near the image center. So the prediction given by (4) outside the calibration region is corrected with the help of the checkerboard. The correction can be expressed as

Δu = u_d − u_p, Δv = v_d − v_p, (6)

where u_p and v_p are the undistorted image coordinates on the pixel plane projected by (4), u_d and v_d are the distorted image coordinates extracted from the board patterns, and Δu and Δv are the deviations of the points along the u and v coordinate axes. The correction of (6) can then be conveniently treated as a surface fitting problem:

Δu = f_u(u_p, v_p), Δv = f_v(u_p, v_p). (7)
Regarding this correction, previous papers [8–10] used a uniform mathematical model to describe the radial, decentering, and prism distortions over the whole image. However, the distortions are specific to a given imaging system and cannot be represented by such a uniform model [12, 13]. Therefore, (7) uses a union of spline functions to describe the distortion only in the local region, and the surface fitting of (7) is carried out by the smoothing spline algorithm [15, 16].
As an example, the deviation in the central measurement region is corrected based on (7) by the smoothing spline algorithm. Since the homography matrix has been calibrated based on (4) using only the points near the image center, the world coordinates of the corner points in the nine patterns can be projected onto pixel coordinates. In this way, the total deviation distributions along the u and v axes of the image are acquired by (6). Using the smoothing spline algorithm, two distortion correction functions are obtained; the two deviation surfaces are shown in Figures 4 and 5. It can be observed in Figure 6 that the deviation is very small after correction.
(a) The deviation surface (in pixels)
(b) Total deviation distribution in the u coordinate
(a) The deviation surface (in pixels)
(b) Total deviation distribution in the v coordinate
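The use of the fitted deviation surfaces can be sketched as follows. The paper fits smoothing splines to Δu and Δv; as a simpler stand-in, this sketch stores deviation samples on a regular grid and evaluates them by bilinear interpolation. The grid layout, step size, and function names are assumptions for illustration, not the paper's spline implementation.

```python
# Correct a measured (distorted) pixel coordinate using tabulated deviation
# surfaces. A regular grid plus bilinear interpolation stands in for the
# paper's smoothing-spline evaluation of f_u and f_v in (7).

def bilinear(grid, x0, y0, step, x, y):
    """Interpolate a deviation surface sampled on a regular grid.

    grid[i][j] holds the deviation at (x0 + i*step, y0 + j*step)."""
    i = min(int((x - x0) // step), len(grid) - 2)
    j = min(int((y - y0) // step), len(grid[0]) - 2)
    tx = (x - x0) / step - i
    ty = (y - y0) / step - j
    return ((1 - tx) * (1 - ty) * grid[i][j] + tx * (1 - ty) * grid[i + 1][j]
            + (1 - tx) * ty * grid[i][j + 1] + tx * ty * grid[i + 1][j + 1])

def correct_point(u_d, v_d, du_grid, dv_grid, x0, y0, step):
    """Subtract the local deviation (6) from a measured edge point."""
    du = bilinear(du_grid, x0, y0, step, u_d, v_d)
    dv = bilinear(dv_grid, x0, y0, step, u_d, v_d)
    return u_d - du, v_d - dv
```

A smoothing spline additionally suppresses noise in the fitted deviations, which is why the paper prefers it over a plain interpolant; the lookup structure is otherwise the same.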
4. Planar-Dimensions Vision Measurement Method
A planar-dimensions measurement method is proposed by means of the distortion correction functions and is used in this section to measure a shaft diameter as an example. Although the shaft is a 3D object, the measurement of its diameter can be treated as a 2D measurement when the optical axis of the imaging lens is approximately perpendicular to the center line of the shaft.
It can be seen in Figure 7 that the central region of the image contains the main portion of the shaft, whose diameter is about 40 mm. That is to say, this region can be used as the measurement region for this lens, and the measurement range is about 40 mm. The measurement is then carried out in the following steps:
(1) using the proposed calibration method, the central image region is calibrated and two distortion correction functions are obtained along the u and v axes of the image;
(2) the two edges of the shaft are detected using a subpixel edge detection method [17–19], and the pixel coordinates of the edge points are corrected by the distortion correction functions;
(3) two parallel lines are fitted to the corrected points in the pixel plane, and the pixel distance between the two lines is obtained.
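Steps (2)–(3) above can be sketched as follows: each edge's direction is estimated from the scatter of its corrected points, the two directions are averaged into a common axis, and the pixel distance is the separation of the two centroids along the normal to that axis. This simple averaging scheme and all names are illustrative assumptions, not the authors' exact fitting procedure.

```python
# Fit two (approximately) parallel edge-point sets and return the pixel
# distance between them: estimate each set's orientation from its second
# moments, average the orientations, then project the centroid offset onto
# the common normal. Illustrative sketch only.
from math import atan2, cos, sin

def orientation(pts):
    """Principal-axis angle and centroid of a near-collinear point set."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - cx) ** 2 for p in pts)
    syy = sum((p[1] - cy) ** 2 for p in pts)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in pts)
    return atan2(2 * sxy, sxx - syy) / 2, (cx, cy)

def parallel_line_distance(edge1, edge2):
    """Pixel distance between two parallel edges fitted to corrected points."""
    th1, c1 = orientation(edge1)
    th2, c2 = orientation(edge2)
    th = (th1 + th2) / 2          # naive common direction; fine for close angles
    nx, ny = -sin(th), cos(th)    # unit normal to that direction
    return abs((c2[0] - c1[0]) * nx + (c2[1] - c1[1]) * ny)
```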
A four-segment shaft with known diameters is first measured using the above procedure, as shown in Figure 8. In this way, the metric length per pixel in the measurement region is obtained. Then the other shafts, shown in Figures 9, 10, and 11, are measured through the same operation with this known per-pixel length, and the measured values are listed in Table 2. For comparison, Table 2 also gives the values measured using edge points without correction. Comparing with the real diameters (measured by an electronic digital outside micrometer), the mean absolute error of the proposed method is 0.0045 mm, and both the error and its variance are smaller than those of the measurements without correction. The proposed method therefore improves 2D measurement accuracy effectively.
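The scale transfer just described reduces to two lines: the known-diameter shaft fixes the metric length per pixel, which then converts the pixel distance measured on any other shaft in the same region. The numeric values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Calibrate the magnification factor from a reference shaft of known diameter,
# then convert a measured pixel distance into millimeters. Values hypothetical.

def mm_per_pixel(known_diameter_mm, measured_pixels):
    """Metric length per pixel in the (distortion-corrected) region."""
    return known_diameter_mm / measured_pixels

def diameter_mm(pixel_distance, scale):
    """Convert a corrected pixel distance between the two edges to mm."""
    return pixel_distance * scale

scale = mm_per_pixel(40.000, 800.0)   # hypothetical reference measurement
d = diameter_mm(790.0, scale)         # another shaft imaged with the same setup
```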

5. Conclusion
This study develops a machine vision method for high-precision 2D measurement. In the method, a novel algorithm is proposed by improving the calibration model: the lens distortion is corrected on the pixel plane before measuring, and an accurate magnification factor of the imaging system is obtained. Experimental results indicate that the proposed method achieves a precision of about 0.005 mm when measuring a shaft diameter of about 40 mm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work described in this paper is partially supported by the National Natural Science Foundation of China under Grant nos. 61201084 and 11226335.
References
[1] X. Su and Q. Zhang, "Dynamic 3D shape measurement method: a review," Optics and Lasers in Engineering, vol. 48, no. 2, pp. 191–204, 2010.
[2] C. L. Phillips, D. A. T. Silver, P. J. Schranz, and V. Mandalia, "The measurement of patellar height: a review of the methods of imaging," Journal of Bone and Joint Surgery B, vol. 92, no. 8, pp. 1045–1053, 2010.
[3] B. Pan, K. Qian, H. Xie, and A. Asundi, "Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review," Measurement Science and Technology, vol. 20, no. 6, Article ID 062001, 2009.
[4] C. Liguori, A. Paolillo, and A. Pietrosanto, "An on-line stereo-vision system for dimensional measurements of rubber extrusions," Measurement, vol. 35, no. 3, pp. 221–231, 2004.
[5] L. Angrisani, P. Daponte, A. Pietrosanto, and C. Liguori, "Image-based measurement system for the characterisation of automotive gaskets," Measurement, vol. 25, no. 3, pp. 169–181, 1999.
[6] P. Lava, W. V. Paepegem, and S. Coppieters, "Impact of lens distortions on strain measurements obtained with 2D digital image correlation," Optics and Lasers in Engineering, vol. 51, pp. 576–584, 2013.
[7] B. Pan, L. P. Yu, and D. F. Wu, "Systematic errors in two-dimensional digital image correlation due to lens distortion," Optics and Lasers in Engineering, vol. 51, pp. 140–147, 2013.
[8] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
[9] J. Heikkilä, "Geometric camera calibration using circular control points," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1066–1076, 2000.
[10] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
[11] M. A. Sutton, J. J. Orteu, and H. W. Schreier, Image Correlation for Shape, Motion and Deformation Measurements, Springer, New York, NY, USA, 2009.
[12] R. V. Carlos and J. S. Antonio, "Correcting non-linear lens distortion in cameras without using a model," Optics and Laser Technology, vol. 42, no. 4, pp. 628–639, 2012.
[13] R. V. Carlos and J. S. Antonio, "Using the camera pinhole model restrictions to calibrate the lens distortion model," Optics and Laser Technology, vol. 43, no. 6, pp. 996–1005, 2011.
[14] J.-Y. Bouguet, Pyramidal Implementation of the Lucas-Kanade Feature Tracker: Description of the Algorithm, 2000, http://www.vision.caltech.edu/bouguetj/index.html.
[15] C. H. Reinsch, "Smoothing by spline functions," Numerische Mathematik, vol. 10, no. 3, pp. 177–183, 1967.
[16] E. R. Cook and K. Peters, "The smoothing spline: a new approach to standardizing forest interior tree-ring width series for dendroclimatic studies," Tree-Ring Bulletin, vol. 41, pp. 45–53, 1981.
[17] E. P. Lyvers, O. R. Mitchell, M. L. Akey, and A. P. Reeves, "Subpixel measurements using a moment-based edge operator," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 12, pp. 1293–1309, 1989.
[18] J. Ye, G. Fu, and U. P. Poudel, "High-accuracy edge detection with blurred edge model," Image and Vision Computing, vol. 23, no. 5, pp. 453–467, 2005.
[19] A. J. Tabatabai and O. R. Mitchell, "Edge location to subpixel values in digital imagery," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 2, pp. 188–201, 1984.
Copyright
Copyright © 2013 Qiucheng Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.