Review Article  Open Access
Review of Calibration Methods for Scheimpflug Camera
Abstract
The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of the Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case, because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods of Scheimpflug cameras. This paper presents a survey of recent calibration methods of Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are essentially equal, while the accuracy of the GNIM is slightly lower than that of the three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays a foundation for further research on Scheimpflug cameras.
1. Introduction
The Scheimpflug camera, which adopts the Scheimpflug condition by tilting the lens with respect to the sensor [1, 2], can optimize the distribution of depth of field (DOF) without stopping down the aperture, and has been widely used in the fields of PIV [3–5], line structured light [6], and portable 3D laser scanning [7, 8]. It has also been introduced into the medical field, greatly facilitating cataract and other ophthalmic surgery [9, 10]. To date, more and more scholars have devoted themselves to related research [6, 7, 11–18].
The Scheimpflug principle, traditionally credited to Theodor Scheimpflug in 1902, states that the object plane (the plane that is in focus), the thin lens's plane, and the image plane must all meet in a single line, the Scheimpflug line. The principle is applicable to both thin-lens and thick-lens models, which only need to be modified accordingly [1, 2]. In a real optical system, different object points see different f-numbers and different aberrations; thus, the image generated by a real optical system will not be completely sharp [7]. Yet this influence can usually be ignored in Scheimpflug camera research.
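The geometry of the principle can be checked numerically with a 2D thin-lens model. The sketch below is illustrative only; the focal length f = 50 mm, the object distance 1000 mm, and the object-plane slope 0.5 are made-up values, not from any system in this paper. It images three points of a tilted object plane through a thin lens and verifies that the image points are collinear and that the object plane, the lens plane (x = 0), and the image plane cross the lens plane at the same height, i.e., meet in one line:

```python
import math

def thin_lens_image(p, f):
    """Image a 2D object point p = (x, y), x < 0, through a thin lens at
    the origin (lens plane = y-axis, optical axis = x-axis)."""
    u = -p[0]                        # object distance (positive)
    v = 1.0 / (1.0 / f - 1.0 / u)    # thin-lens equation 1/u + 1/v = 1/f
    return (v, -p[1] * v / u)        # lateral magnification is -v/u

def scheimpflug_demo(f=50.0, u0=1000.0, slope=0.5):
    # Tilted object plane through (-u0, 0) with the given slope.
    obj = [(-u0 + a, a * slope) for a in (-100.0, 0.0, 100.0)]
    (x1, y1), (x2, y2), (x3, y3) = [thin_lens_image(p, f) for p in obj]

    # Collinearity of the three image points (they define the image plane).
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

    # Heights at which the object plane and the image plane cross the lens
    # plane x = 0; the Scheimpflug condition says they must coincide.
    y_obj_at_lens = u0 * slope
    m = (y2 - y1) / (x2 - x1)
    y_img_at_lens = y1 - m * x1
    return cross, y_obj_at_lens, y_img_at_lens
```

Running the demo with the assumed numbers, the object plane and the fitted image plane both cross the lens plane at the same height, as the principle predicts.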
Scheimpflug camera calibration is a necessary preliminary step to ensure high-quality measurement. Conventional calibration methods are not valid in this case because the assumptions used by classical calibration methodologies no longer hold for cameras satisfying the Scheimpflug condition [19–22]. Legarda et al. [23], Nocerino et al. [16], and Cornic et al. [17] point out that when the lens tilt angle is small (≤6°), existing calibration methods can, to a certain extent, compensate the error generated by tilting the sensor via the tangential distortion parameters. However, when the tilt angle is larger, conventional camera calibration methods perform poorly.
Therefore, more and more researchers have carried out related research [14, 17, 18, 23–33], and a variety of methods have been proposed to accommodate various applications. Generally, these calibration methods can be classified, according to the imaging model, into parametric methods (defined by several intrinsic parameters) [14, 17, 18, 23–33] and general nonparametric methods [33–35]. In fact, the kernels of the parametric methods are basically identical: they first introduce an ideal image plane parallel to the lens plane as a bridge, and then establish different forms of the relationship between the ideal and actual image planes according to the different models. As the projection between the ideal image plane and the calibration checkerboard can be easily obtained via the conventional imaging model, the projection between the tilted image plane and the calibration checkerboard is thereby established, on the basis of which the calibration algorithm is developed [14, 17, 18, 23–31].
Generally, the parametric calibration methods of the Scheimpflug camera fall into two categories according to the dimension of the tilt angle: the literature [11, 24, 25] represents the one-dimensional-angle calibration models, and the literature [13–15, 17, 18, 23, 27, 28, 30, 31] the two-dimensional-angle calibration models. Further, as the lens is supposed to be planar and symmetric about the optical axis, two angle parameters should be sufficient to express the homography between the tilted image plane and the ideal image plane [23, 27].
Furthermore, the parametric calibration methods with two-dimensional angles can be divided into two main categories: (1) the modified pinhole imaging model (MPIM), which, as the name implies, is developed on the basis of the conventional pinhole imaging model while taking the imaging characteristics of the Scheimpflug camera into account; see literature [14, 15, 17, 23, 27, 28, 30, 31] to cite a few. (2) The pupil-centric imaging model (PCIM), which models the different ray angles in object and image space, where the actual projection centers are the centers of the entrance and exit pupils. A mapping from pupil-centric imaging to a geometrically equivalent pinhole imaging exists; thus, Scheimpflug camera calibration can be carried out in a conventional framework; see literature [13, 18, 29, 36] to cite a few.
In general, the mainstream parametric camera calibration algorithms reduce to a nonlinear parameter optimization problem, and appropriate initialization is the key to fast convergence and reaching the global minimum. The initialization acquisition methods can be classified into four categories: (1) taking the nominal parameters of the camera as the initial values of the optimization algorithm [14, 25], (2) adopting the results of Zhang et al.'s calibration algorithm as initial values [17, 23, 27, 30], (3) obtaining the initial intrinsic parameters of the Scheimpflug camera by means of auxiliary tools, for example, adopting a collimator to obtain the principal point [31], and (4) initializing by a complex analytic solution, as illustrated in literature [13, 29, 32, 33].
While perspective projection serves as the dominant imaging model in today's computer vision, conventional camera calibration techniques are tailor-made for specific camera models and may not suffice for an unknown imaging system (a black box). Thus, general imaging models accommodating a wide range of devices have been proposed [33–35]. The idea behind the general imaging model is that all types of imaging systems perform a mapping from scene rays to a set of associated image pixels, and the image pixels measure light traveling along the associated half-rays from the scene in various directions. As illustrated in literature [33–35], calibration under general imaging models simply refers to the computation of the projection between the pixels and their associated 3D rays.
Aiming to offer a comprehensive review that provides an insight into recent calibration methods of Scheimpflug cameras with perspective lenses, this paper presents a survey of recent calibration methods and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Besides, the general nonparametric imaging model is newly introduced to solve the problem. Furthermore, real-data experiments are performed to validate the performance of the different calibration models, which lays a foundation for further research on Scheimpflug cameras.
2. Modified Pinhole Imaging Model
The modified pinhole imaging model [14, 17, 23, 26–31, 37], as the name implies, is developed on the basis of the conventional pinhole imaging model, with the imaging characteristics of the Scheimpflug camera taken into account. Generally, the projection center of the MPIM is supposed to be the optical center.
According to the different ways of describing the tilt effect of the lens, the MPIM can be divided into three categories: (1) the extended distortion model (EDM), which extends the pinhole imaging model by considering the inclination of the sensor as an additional distortion [14, 26, 29]; (2) the rotation matrix model (RMM), which models the lens-sensor configuration by an explicit rotation matrix about the optical axis and includes it as a part of the intrinsic calibration parameter set [17, 30, 31, 38]; and (3) the point-line vector model (PLVM), in which the transformation between the tilted and ideal image planes is established via the intersections of the light ray, determined by the optical center and the spatial point, with the two image planes [23, 27, 28].
2.1. Extended Distortion Model (EDM)
The extended distortion model considers the inclination of the sensor as an additional distortion and develops a Scheimpflug camera calibration model based on the classical pinhole imaging model [14, 26]. As the work of Wang et al. [26] is only useful for very small tilt angles, Peng et al. [14] extend it to larger tilt angles based on geometric optics theory. As illustrated in Figure 1, OX_{C}Y_{C}Z_{C} is the camera coordinate frame with the origin at the aperture stop. Z_{C} is the optical axis of the lens, and I_{0} and I_{S} denote the ideal and the real tilted image plane, respectively. The symbols α and β indicate the angles between the image plane I_{S} and the x-axis and y-axis, respectively, while the symbol d represents the distance from the origin O_{C} to the image plane I_{0}. The 3D world point P_{W}(X_{W},Y_{W},Z_{W}) projects onto I_{0} at point p_{i}, and the light ray intersects I_{S} at point p_{S}.
The section views of the camera geometry when the image plane is tilted by α and β with respect to the x-axis and y-axis, respectively, are illustrated in Figure 2, where A(x,y) is an arbitrary point on I_{0} and point B is the back projection of A onto I_{S}. The symbols θ_{x} and θ_{y} denote the projections of the angle between the light ray and the optical axis.
According to the derivation in literature [14], the increments (∆x_{α},∆y_{α}) and (∆x_{β},∆y_{β}) along the x-axis or y-axis caused by rotating through the angles α and β can be expressed as follows:
Then, on the basis of the conventional distortion model [39], the extended distortion model can be obtained as follows, where (∆x,∆y) are the total distortions caused by the Scheimpflug camera, (x_{n},y_{n})^{T} are the normalized image coordinates, and r^{2} = x_{n}^{2} + y_{n}^{2}. (k_{1},k_{2}) are the radial distortion coefficients. It is worth noting that the tangential distortion coefficients are ignored here, as in many of the Scheimpflug camera calibration methods [11, 13, 23, 25, 27–29, 40].
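The conventional radial part of this distortion model can be sketched as follows; the tilt-induced increments (∆x_{α},∆y_{α}) and (∆x_{β},∆y_{β}) from literature [14], whose expressions this section elides, would simply be added on top of these radial terms:

```python
def radial_distort(xn, yn, k1, k2):
    """Two-term radial distortion applied to normalized image coordinates:
    (xd, yd) = (xn, yn) * (1 + k1*r^2 + k2*r^4), with r^2 = xn^2 + yn^2."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale
```

With k_{1} = k_{2} = 0 the mapping is the identity; a positive k_{1} pushes points radially outward, as expected of pincushion-type distortion.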
The tangential distortion mainly consists of decentering distortion and thin prism distortion. The thin prism distortion usually results from defects in lens design and manufacture and from a slight tilt of the sensor plane [26]. This implicitly confirms the conclusion in literature [16, 17, 23] that conventional calibration methods can, to a certain extent, compensate the error generated by tilting the sensor via the tangential distortion parameters.
In conclusion, the calibration method based on the EDM is logically explicit, and the model is simple and easy to operate, as it can completely inherit the existing calibration process of a standard camera [21]. However, although Peng et al. [14] have extended the model to larger tilt angles, its range of application is still limited to relatively small tilt angles. Also, the model described in this section is mainly based on the assumption of a telecentric lens [14] and cannot be applied directly to the standard perspective lens.
Moreover, the model illustrated here requires that the length of the lens be provided in advance [14], which in practice is not easy to obtain, and the nominal parameters of the lens might not be accurate enough. Thus, the method needs further improvement.
2.2. Rotation Matrix Model (RMM)
The rotation matrix model describes the lens-sensor configuration by an explicit rotation matrix about the optical axis and includes it as a part of the intrinsic calibration parameter set [17, 30, 31, 38]. One of the most representative methods is that of Kumar and Ahuja [29], who introduce the rotation transformation between the lens and the image plane and extend the RAC (radial alignment constraint) [19] to accommodate the Scheimpflug camera, yielding what is defined as the gRAC (generalized radial alignment constraint).
As illustrated in Figure 3(a), O_{W}X_{W}Y_{W}Z_{W}, O_{L}X_{L}Y_{L}Z_{L}, and O_{S}X_{S}Y_{S}Z_{S} denote the world, lens, and imaging plane coordinate frames, respectively. The lens coordinate frame is assumed to be parallel to the image plane coordinate frame with a common Z-axis. O_{S} is the principal point, and P_{u}(x_{u},y_{u}) and P_{d}(x_{d},y_{d}) indicate the ideal and the distorted image points of the 3D spatial point P_{W}, respectively. The RAC [29] points out that, assuming only radial lens distortion exists, the position vector from the principal point O_{S} to the distorted image point P_{d} should be aligned with the position vector from the foot of the perpendicular dropped from P_{W} onto the optical axis to the point P_{W}; that is, the two position vectors are parallel.
Nevertheless, as shown in Figure 3(b), the triplet O_{S}, P_{u}, and P_{d} might not be collinear due to lens misalignment or intentional sensor tilt, which makes the RAC difficult to apply directly under the Scheimpflug condition [29]. Thus, a more generic imaging model is proposed in which the nonalignment of the lens and the image plane is explicitly modeled via a rotation matrix.
As illustrated in Figure 4, Kumar and Ahuja [29] derive a rotation matrix R(ρ,σ) which aligns the coordinate frame of the image plane with that of the lens by two angles (ρ,σ), indicating clockwise rotations about its x- and y-axis, respectively. Considering the coordinate frames described in Figure 3, the 3D world point P_{W} is projected onto the image plane at point P_{S}, and the ray of light intersects the ideal image plane at point P_{N}. Let the rotation matrix be R, wherein r_{ij} represents the entry in the ith row and jth column of R. Given that the relative rotation between the two planes is known, the relationship between the image points in the tilted image plane and the ideal image plane can be obtained according to the derivation in literature [29], where (x_{N},y_{N})^{T} and (x_{S},y_{S})^{T} denote the image point in the ideal image plane and the tilted image plane, respectively, and λ indicates the distance between the lens and the ideal image plane.
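The mapping between the tilted and the ideal image plane implied by such a rotation matrix can be sketched as a plane-to-plane projection through the optical center. In the sketch below, the tilted plane's origin is placed on the optical axis at distance λ; this placement is our simplifying assumption for illustration, not the exact parameterization of literature [29]:

```python
import math

def rot_xy(rho, sigma):
    """R(rho, sigma): rotation by rho about x, then sigma about y."""
    cr, sr = math.cos(rho), math.sin(rho)
    cs, ss = math.cos(sigma), math.sin(sigma)
    Rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]
    Ry = [[cs, 0, ss], [0, 1, 0], [-ss, 0, cs]]
    return [[sum(Ry[i][k] * Rx[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def tilted_to_ideal(pS, R, lam):
    """Project a tilted-plane point (x_S, y_S), with the tilted plane's
    origin on the optical axis at distance lam, onto the ideal plane z = lam."""
    q = matvec(R, [pS[0], pS[1], 0.0])
    q = [q[0], q[1], q[2] + lam]
    return (lam * q[0] / q[2], lam * q[1] / q[2])

def ideal_to_tilted(pN, R, lam):
    """Inverse map: intersect the ray through (x_N, y_N, lam) with the
    tilted plane and express the hit point in the tilted plane's frame."""
    d = [pN[0], pN[1], lam]
    Rt = transpose(R)
    c = [0.0, 0.0, lam]
    t = matvec(Rt, c)[2] / matvec(Rt, d)[2]
    a = matvec(Rt, [t * d[0] - c[0], t * d[1] - c[1], t * d[2] - c[2]])
    return (a[0], a[1])
```

With zero tilt the two planes coincide and the map is the identity; for nonzero (ρ,σ) the forward and inverse maps round-trip exactly, which is the invertible homography the RMM relies on.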
The world point P_{W} is projected onto the optical axis, and it is supposed that the projected ideal image point P_{N} aligns with the world point P_{W}, which means that the corresponding location vectors should be coplanar and parallel to each other. Thus, we can obtain the radial alignment constraint as follows:
The gRAC with the rotation matrix can be established by combining formulas (4) and (5), on the basis of which a two-step calibration algorithm is developed. The algorithm proposes a new analytical solution to the gRAC for a subset of the calibration parameters, followed by nonlinear refinement of the calibration parameters [29].
In our view, the calibration method based on the RMM is flexible, accurate, and robust, as well as easy to operate and initialize. Inevitably, there are still some drawbacks. The analytical solution proposed in literature [29] is tedious and involves considerable sign-ambiguity estimation, and the computational efficiency of the calibration algorithm could be further improved.
2.3. PointLine Vector Model (PLVM)
In the point-line vector model, the transformation between the tilted image plane and the ideal image plane is established on the basis of the intersections of the light ray with the two image planes (the ideal image plane and the real tilted image plane); see literature [23, 27, 28] to cite a few.
As illustrated in Figure 5, the symbol O represents the optical center and OX_{C}Y_{C}Z_{C} is the coordinate frame of the Scheimpflug camera. The planes S and P denote the tilted image plane and the ideal image plane, respectively. The origins of the ideal and real tilted image planes are assumed to be the same, as the ideal image plane can be arbitrarily placed [23, 27].
The symbol r indicates a straight ray which goes from the 3D world point P_{W}(X_{W},Y_{W},Z_{W}) to the optical center O. r intersects the tilted image plane S and the ideal image plane P at the points P_{S} and P_{P}, respectively. (m,n) are the base vectors in the plane S, and the vectors m and n lie in the planes XOZ and YOZ, respectively. Further, according to the definition in literature [23], m = (cosβ, 0, sinβ), and the vector n is perpendicular to m and makes an angle α with the Y-axis; hence, n = (−sinαsinβ, cosα, sinαcosβ). The formula of the line r is
The symbol A^{T} denotes the transpose of the vector A; the following transposed vectors are noted the same way, and λ here is an arbitrary nonzero scale factor. The intersections of r with the two image planes can be expressed as formula (7). Substituting formula (7) into (6) yields formula (8). Assume that (C_{Px},C_{Py}) and (C_{Sx},C_{Sy}) represent the principal points in planes P and S, respectively, that (dx,dy) denote the pixel sizes on the image plane, and that (Fx,Fy) are the equivalent focal lengths in pixel units. Then, formula (6) can be converted to (9), where (u,v) and (u_{S},v_{S}) denote the image points on planes P and S, respectively.
Without loss of generality, assuming that the calibration target plane is at Z ≡ 0 of the world coordinate frame, we have formula (11), where s is an arbitrary scale factor, (R,T) represent the rotation and translation which relate the world coordinate frame to the ideal image plane coordinate frame, and r_{1} and r_{2} are the first two columns of the rotation matrix R. The homography between the tilted image plane and the calibration target plane can be obtained by combining formulas (9) and (11).
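The PLVM projection can be sketched directly from the quantities defined above: intersect the ray from P_{W} through O with the tilted plane spanned by m and n. The sketch below assumes the tilted plane passes through (0, 0, f) on the optical axis (an illustrative placement), solves the resulting 3 × 3 linear system by Cramer's rule, and returns the coordinates (a, b) of P_{S} in the (m, n) basis; conversion to pixels via (dx, dy) and the principal point is omitted:

```python
import math

def plvm_project(PW, alpha, beta, f):
    """Intersect the ray from world point PW through the optical center O
    with the tilted plane through (0, 0, f) spanned by m and n."""
    m = (math.cos(beta), 0.0, math.sin(beta))
    n = (-math.sin(alpha) * math.sin(beta), math.cos(alpha),
         math.sin(alpha) * math.cos(beta))
    # Solve a*m + b*n - t*PW = (0, 0, -f) for (a, b, t) by Cramer's rule.
    A = [[m[i], n[i], -PW[i]] for i in range(3)]
    rhs = [0.0, 0.0, -f]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    D = det3(A)
    sol = []
    for j in range(2):                      # only a and b are needed
        Aj = [[rhs[i] if k == j else A[i][k] for k in range(3)]
              for i in range(3)]
        sol.append(det3(Aj) / D)
    return sol[0], sol[1]
```

When α = β = 0, the tilted plane reduces to the ideal plane and the result collapses to the pinhole projection (fX/Z, fY/Z), which is a quick consistency check on the construction.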
To summarise, the calibration method based on the PLVM has the advantage of a robust and simple model, as well as a convenient calibration process. However, the application of the model in literature [27] still strongly depends on the quality of the calibration target. In addition, the initial estimate of the calibration parameters is obtained using Zhang's method [21] directly, rather than by taking the unique imaging characteristics of the Scheimpflug camera into account.
3. Pupil Centric Imaging Model (PCIM)
The imaging models described above assume that the angle of the chief ray in object space and the angle of the chief ray in image space are identical, which is incorrect. This difference exerts a significant influence on the Scheimpflug camera, whereas it can generally be ignored in the pinhole imaging model. Kumar and Ahuja [13] therefore propose a generalized PCIM which models the exact relationship between these rays. Besides, Steger [18] reduces the number of model parameters and develops a simpler PCIM.
Figure 6 shows the classical geometry of a thick lens: a ray enters the principal plane P and exits the principal plane P′ at the same position with respect to the optical axis (however, generally not at the same angle) [18]. The ray geometry of the real Scheimpflug camera model is illustrated in Figure 7.
First, the object points are projected to the ideal image plane lying at a distance c from the projection center O, where the ray angles in object space and image space are identical (ω = ω′). Then, the ideal image plane is moved to a distance d, resulting in a difference of angles (ω ≠ ω′). Next, the real tilted image plane is obtained by tilting the ideal image plane at the correct distance d by the angle τ.
Consider the definition of coordinate frames in literature [18], as shown in Figure 8. Ox_{u}y_{u}z_{u} and Ox_{t}y_{t}z_{t} denote the coordinate frames of the ideal image plane and the tilted image plane, respectively. n is the rotation axis in a plane orthogonal to the optical axis, forming an angle ρ with the axis x_{u}. A new coordinate frame Ox_{S}y_{S}z_{S} is constructed in the tilted image plane, parallel to the coordinate frame Ox_{t}y_{t}z_{t}, whose origin lies at the perpendicular projection of the projection center. The coordinate frames Ox_{u}y_{u}z_{u} and Ox_{S}y_{S}z_{S} can be considered as two cameras that are rotated around their common projection center O. Hence, the transformation from Ox_{u}y_{u}z_{u} to Ox_{S}y_{S}z_{S} can be given by K_{S}RK_{u}^{−1}, in which K_{S} and K_{u} are the calibration matrices corresponding to the two cameras, respectively, and R denotes the rotation matrix that relates the two cameras [18]. It is apparent that there is a relationship between Ox_{S}y_{S}z_{S} and Ox_{t}y_{t}z_{t} defined by a translation matrix T. Thus, the complete projection can be expressed as follows, where r_{ij} indicates the entry in the ith row and jth column of R, with c_{θ} = cosθ, s_{θ} = sinθ, and 0 ≤ τ < π/2. Thus, the projection relation between the coordinate frames Ox_{u}y_{u}z_{u} and Ox_{t}y_{t}z_{t} can be expressed as
Furthermore, the projection from the world coordinate frame to the Ox_{u}y_{u}z_{u} can be easily obtained via perspective projection, so that the complete projection from the world frame to Ox_{t}y_{t}z_{t} is established.
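The core mapping K_{S}RK_{u}^{−1} can be sketched as a homography acting on homogeneous image points. In the sketch below, R is built with Rodrigues' formula for a rotation by τ about the in-plane axis n = (cos ρ, sin ρ, 0); this is a plausible reading of the rotation in literature [18], not its exact parameterization, and the translation T is omitted. All numeric values in the check are made up:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def K_mat(fx, fy, cx, cy):
    return [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

def K_inv(fx, fy, cx, cy):
    # Closed-form inverse of the upper-triangular calibration matrix.
    return [[1.0 / fx, 0.0, -cx / fx],
            [0.0, 1.0 / fy, -cy / fy],
            [0.0, 0.0, 1.0]]

def rot_tilt(rho, tau):
    """Rodrigues rotation by tau about the in-plane axis (cos rho, sin rho, 0)."""
    k = [math.cos(rho), math.sin(rho), 0.0]
    c, s = math.cos(tau), math.sin(tau)
    K_cross = [[0.0, 0.0, k[1]], [0.0, 0.0, -k[0]], [-k[1], k[0], 0.0]]
    return [[c * (1.0 if i == j else 0.0) + s * K_cross[i][j]
             + (1.0 - c) * k[i] * k[j] for j in range(3)] for i in range(3)]

def map_pixel(H, u, v):
    w = [H[i][0] * u + H[i][1] * v + H[i][2] for i in range(3)]
    return (w[0] / w[2], w[1] / w[2])
```

With τ = 0 the rotation is the identity and, for equal calibration matrices, the homography leaves every pixel fixed; a nonzero τ visibly displaces off-center pixels.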
In comparison with the aforementioned three calibration models, the PCIM proposed in literature [13, 18] gives full consideration to the difference between the ray angles in object space and image space, and is thus more closely related to the actual imaging. In addition, the model allows the lens to be tilted in an arbitrary direction, which is much more flexible and has a more explicit physical meaning than the above three. The calibration method based on the PCIM in literature [18] is accurate, easy to initialize, and fast to converge. However, the model inevitably has some imperfections: the PCIM cannot be directly applied when the tilt is in the horizontal or vertical direction, and it requires the pixel aspect ratio to be known in advance [18].
4. General Nonparametric Imaging Model (GNIM)
Great progress has been made during the last decade in exploiting the general nonparametric imaging model for camera calibration [32, 33, 35, 41–43]. In comparison with the perspective imaging model, the GNIM is rather flexible and accommodates a wide range of imaging devices. Moreover, as has been demonstrated, when the distortion is quite high, the GNIM may offer better calibration results than the parametric methods [44].
As proposed in literature [33], Ramalingam and Sturm model any camera as a set of image pixels and their associated camera rays in space, where the image pixels measure light traveling along the associated half-rays from the scene in various directions. Therefore, calibration under the GNIM simply refers to the computation of the projection between the pixels and their associated 3D rays. The idea behind GNIM calibration is quite simple: as illustrated in Figure 9, three (or more) different points corresponding to the same single pixel p_{i}(u,v) on the image plane, as sampled on three (or more) different calibration checkerboards with unknown relative positions, must be collinear. Hence, the relative positions among the calibration checkerboards can be obtained on the basis of the collinearity constraint of the checkerboard points. Then, the calibration can be completed by computing the rays going through the associated sampled checkerboard points.
We consider here a Scheimpflug camera with a perspective lens, to validate the performance of the GNIM in Scheimpflug camera calibration. As shown in Figure 9, let us denote by homogeneous coordinates the points on the different planar calibration checkerboards corresponding to the same image pixel. The unknown poses with respect to the different grids can be expressed as rotation matrices R_{K} and translation vectors T_{K}. Thus, a point expressed in the corresponding local coordinate frame can be projected to the global reference frame via
Furthermore, the optical center of the camera is defined as O(O_{1}, O_{2}, O_{3}, 1), and without loss of generality, the first calibration checkerboard is adopted as the global coordinate reference frame.
According to the collinearity constraint proposed in literature [32, 33], the checkerboard points corresponding to one single pixel, after being transformed into the global reference coordinate frame via formula (16), must be collinear. Thus, the matrix comprising the coordinates of the collinear checkerboard points, given in formula (17), must be of rank smaller than 3, where R_{ij} indicates the entry in the ith row and jth column of R. Therefore, the determinants of all 3 × 3 submatrices of formula (17) must vanish. Hence, a new submatrix is constructed by selecting the first column and any other two columns, and the bilinear equations relating the checkerboard points can be obtained as follows, where M^{jk} represents the bifocal matching tensor with respect to the specific submatrix associated with the optical center and the checkerboard poses. Following the approach proposed in literature [32, 33], the optical center and the checkerboards' poses can be determined from at least three different views. Then, we must fit the camera rays to the single optical center O. Denoting the direction of the camera ray passing through a checkerboard point, and letting λ_{i} represent the parameter corresponding to the closest point on the camera ray to the given checkerboard point, the ray directions can be calculated by minimizing the following cost function through nonlinear optimization algorithms.
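The collinearity constraint at the heart of the GNIM can be sketched numerically: transform the checkerboard points seen by one pixel into the global frame and check that the residual of the collinearity test vanishes. The poses and ray direction below are made-up synthetic values used purely for illustration:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def rot_z(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def to_global(p_local, R, T):
    """Formula (16)-style transform of a checkerboard point into the
    global reference frame."""
    q = matvec(R, p_local)
    return [q[i] + T[i] for i in range(3)]

def collinearity_residual(p1, p2, p3):
    """Norm of (p2 - p1) x (p3 - p1); zero iff the three points are collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    c = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    return math.sqrt(sum(x * x for x in c))

# Synthetic setup: one camera ray through the optical center O = (0, 0, 0)
# hits three checkerboards at z = 300, 400, 500, each rotated about z.
d = [0.1, -0.05, 1.0]                       # made-up ray direction
poses = [(rot_z(0.1 * k), [0.0, 0.0, 300.0 + 100.0 * k]) for k in range(3)]
local_pts = []
for R, T in poses:
    t = T[2] / d[2]                         # intersection with plane z = T[2]
    p = [t * d[0], t * d[1], t * d[2]]      # global hit point on the ray
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    local_pts.append(matvec(Rt, [p[i] - T[i] for i in range(3)]))
global_pts = [to_global(pl, R, T) for pl, (R, T) in zip(local_pts, poses)]
```

Once the local points are mapped back to the global frame with the true poses, the residual is zero; perturbing any point (i.e., using wrong poses) makes it strictly positive, which is exactly the signal the GNIM pose estimation exploits.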
On the whole, as has been noted, the general nonparametric imaging model is rather flexible and accommodates a wider range of imaging devices than specific parametric imaging models. In the case of Scheimpflug camera calibration, no matter whether the lens is tilted or not, the Scheimpflug camera can be treated in the same way as an ordinary camera under the general nonparametric imaging model. Moreover, the concise GNIM avoids the problem that the parameters in conventional camera calibration methods easily converge to local optima.
Nonetheless, the nonparametric nature of the GNIM results in an obscure physical meaning of the model. Moreover, the major drawback of the generic calibration model, as presented in literature [32, 33, 35], is that the same motion variables can be computed from two different coupled variables, which leads to discrepancies in the computation of the motion variables, so some form of averaging must be used to obtain a consistent solution. Besides, the calibration method involves solving a large number of linear equations, which is much more complicated than the former calibration methods.
5. Experiment
5.1. Experimental Apparatus
Two Scheimpflug imaging systems are calibrated to demonstrate the performance of the different calibration models. One imaging system is a DSLR camera (Nikon D300s, 4288 × 2848 pixels, pixel size of 5.5 μm) with a tilt-shift lens (Nikon PC-E Micro NIKKOR 45 mm f/2.8D ED). Owing to the fact that the tilt range of the first imaging system is limited to ±8.5°, we construct a custom-made Scheimpflug imaging system on the basis of an MVID2048 camera (2048 × 2048 pixels) with a perspective lens (Kowa LM25JC5M2, 25 mm), in which the sensor is intentionally tilted with respect to the lens by ≈10°. A ceramic checkerboard with 9 × 12 squares, each with dimensions of 10 × 10 mm, is employed to provide known control points, and 15 calibration images are taken at different locations and from different angles.
As for the first imaging system, two sets of experiments are carried out to quantify the performance of the calibration models. In one set, the tilt-shift lens is adjusted to an angle of approximately 0°, in which case the Scheimpflug camera model reduces to the standard camera model. In the other set, the tilt-shift lens is rotated to an angle of approximately 7°. In view of the fact that both Scheimpflug imaging systems employed here rotate about one single axis only, the tilt angle about the other axis is assumed to be 0°.
5.2. Results and Discussion
Tables 1 and 2 show the calibration results of the first imaging system's intrinsic parameters and reprojection errors using the different calibration models with tilt angles of 0° and 7°, respectively. The calibration results of the second imaging system are given in Table 3. In calibration models (1), only the first two radial distortion coefficients (k_{1},k_{2}) are considered, while calibration models (2) include both the radial and tangential distortion coefficients (k_{1},k_{2},k_{3},k_{4}). As illustrated in the tables, (α,β) represent the two-dimensional tilt angles corresponding to the specific calibration models, (Cx,Cy) and (fx,fy) denote the principal point and focal length, respectively, and the reprojection error, denoted RMSE, is the root-mean-square error, in pixels, between the detected image corners and the projected ones.



From the calibration results of the conventional pinhole model in the above three tables, it can be seen that the pinhole model works well at a tilt angle of approximately 0°; yet when the tilt angle increases by several degrees (such as 7° and 10°), its reprojection errors increase significantly, whereas those of the aforementioned models remain almost constant. This is consistent with the conclusion in literature [23] that when the lens tilt angle is greater than 7°, the tangential distortion cannot compensate the error generated by tilting the sensor.
In terms of the tilt angles in the above tables, the calibration results of RMM (1), PLVM (1), and PCIM (1) are quite close to the corresponding tilt angles measured manually on the adapter, in spite of minor deviations resulting from measurement noise. Furthermore, comparing the calibration results of models (1) and models (2), it is obvious that the former give better results and smaller angle errors than the latter. A more comprehensive distortion model in RMM (2), PLVM (2), and PCIM (2) does not necessarily yield better calibration results; together with the previous analysis, it can be concluded that the parameters of tangential distortion are likely coupled with the tilt angles of the image plane in the RMM, PLVM, and PCIM.
It is difficult to obtain a high-accuracy ground truth to serve as an absolute reference for the Scheimpflug camera's calibration results. Hence, this paper borrows the idea from literature [19–22] of assessing the accuracy of the calibration results via the uncertainties of the calibration parameters.
Suppose m = f(P,Q) indicates the projection from the point set Q(X_{i},Y_{i},Z_{i}) in the world to the corresponding image feature points from n different views, where the parameter set P consists of the intrinsic parameter set P_{A} = {α,β,fx,fy,Cx,Cy,k_{1},k_{2}} and the extrinsic parameter set P_{B} = {om^{k},T^{k}} of the Scheimpflug camera. Considering the fact that a nonlinear least squares problem such as camera calibration is generally solved by an iterative method (e.g., the Levenberg-Marquardt method), the linear iteration of the parameters can be modeled as P^{i+1} = P^{i} + ΔP^{i}. For the minor deviation ΔP, we have the following, where J denotes the Jacobian matrix of f with respect to P.
And the subJacobian matrix and are, respectively, defined as follows:
Assuming that Λ_{m} refers to the covariance matrix of m which can be defined as Λ_{m} = σ^{2}I (σ^{2} is the standard deviation of m, and in order to simplify the analysis, it can be assumed that each feature point is independently observed). Hence, the best linear unbiased estimate of ΔP can be calculated via
Thus, we can calculate the covariance matrix of the parameter set P by
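Under the assumption Λ_{m} = σ²I, the parameter covariance reduces to σ²(JᵀJ)⁻¹, and the parameter uncertainties are the square roots of its diagonal. A minimal numerical sketch follows; the 8-column size merely mirrors the intrinsic set P_{A}, and the Jacobian here is random, standing in for the one a Levenberg-Marquardt solver would return from a real calibration.

```python
import numpy as np

# Synthetic Jacobian J (2n residuals x p parameters); in a real
# calibration J = dm/dP comes from the Levenberg-Marquardt solver.
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 8))
sigma = 0.1  # standard deviation of the feature-point noise (pixels)

# Lambda_P = sigma^2 (J^T J)^{-1}; the per-parameter uncertainties
# are the square roots of its diagonal entries.
cov_P = sigma**2 * np.linalg.inv(J.T @ J)
uncertainty = np.sqrt(np.diag(cov_P))
```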
Therefore, the uncertainties of the second imaging system's calibration parameters using the four kinds of models (1) can be obtained via formula (25), and the results are shown in Table 4.

As shown in Table 4, the uncertainties of the calibration parameters recovered from RMM, PLVM, and PCIM are obviously smaller than those from the conventional pinhole imaging model. Besides, the uncertainties of the parameters recovered from RMM, PLVM, and PCIM are almost consistent with one another, which means the accuracies of the three models are approximately equal.
Moreover, the experiments examine the performance of the different calibration models with respect to the number of planes used to recover the camera parameters. To facilitate comparison, assuming that the tilt angles around the two axes are 10° and 0°, respectively, the tilt-angle results of the different models are transformed into deviations with respect to these reference values. Figure 10 shows the second Scheimpflug imaging system's intrinsic parameters versus the number of images for the different calibration models.
From Figures 10(a) and 10(d), it can be seen that the three calibration models quickly converge to the global optimum; the tilt angles α recovered from the three models are almost consistent, with an accuracy of about 1°, while the tilt angles β vary from one model to another, with accuracies of approximately 1°, 0.5°, and 1.5°, respectively. As far as this experiment is concerned, the PCIM gives slightly better tilt-angle results than the other two models.
As Figures 10(b) and 10(e) illustrate, the focal-length results of the three Scheimpflug calibration models are consistent with one another. Choosing the results of the conventional pinhole imaging model as the reference, it is evident that the fx recovered from the three Scheimpflug calibration models is larger, while the fy is smaller. Besides, it is revealed that when the lens is tilted with respect to the sensor, the effective focal length increases accordingly, which accords with the conclusion in literature [1, 2].
Figures 10(c) and 10(f) show the principal-point results of the four different models. It is evident that the results of Cx and Cy from the three Scheimpflug calibration models are quite similar to one another. It is also worth noting that the Cx derived from the Scheimpflug calibration models and the pinhole imaging model is almost the same, with little difference, whereas the Cy recovered from the two main types of calibration models differs considerably. This is expected: since the lens is tilted about only a single axis, the principal point is supposed to move in one direction on the image plane.
In addition, as depicted in Figure 11, the calibration models undergoing the Scheimpflug condition give smaller RMSEs than the pinhole imaging model. Although full calibration of a Scheimpflug camera using a checkerboard mathematically requires at least three different views, at least four views are required in practice to obtain accurate and robust results. Together with the above calibration results, it can be concluded that the calibration models undergoing the Scheimpflug condition describe Scheimpflug camera imaging better.
Because the GNIM has no explicit intrinsic parameters to compare, and owing to the page limit, we choose the calibration results of the motion parameters between any two checkerboards, as illustrated in Table 5, to verify the performance of the GNIM to a certain extent. To facilitate comparison, the rotation matrices are expressed in Rodrigues parameter form.
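Expressing a rotation matrix in Rodrigues parameter form, as done for Table 5, amounts to the SO(3) logarithm (axis-angle) map. A minimal sketch is given below; the helper name is ours, and the degenerate case θ = π is deliberately not handled.

```python
import numpy as np

def rodrigues_from_matrix(R):
    """Convert a rotation matrix to its Rodrigues (axis-angle) vector,
    assuming the rotation angle is strictly less than pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # The rotation axis comes from the skew-symmetric part of R.
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

# A 30-degree rotation about the z-axis maps to (0, 0, pi/6).
t = np.pi / 6
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
rvec = rodrigues_from_matrix(Rz)
```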

As shown in Table 5, the motion-parameter calibration results of the GNIM are consistent with those of the other three models despite minor deviations, which indirectly proves the effectiveness of the GNIM in Scheimpflug camera calibration. That is, the GNIM accommodates a wide range of imaging systems, including the Scheimpflug imaging system.
Furthermore, the 3D coordinates and structure of the checkerboards are reconstructed with the help of the calibrated parameters and the obtained image points. As the experimental results indicate, the reconstruction results of the checkerboard points using the four different Scheimpflug calibration models are basically in accord with one another, as illustrated in Figure 12. Figure 12(a) depicts the reconstructed 3D checkerboard points and the fitted plane, while the error distribution of the reconstructed checkerboard points is shown in Figure 12(b). Besides, the RMSEs of the distances from the reconstructed points to the fitted plane using the four models are all about 0.0300 mm, despite slight fluctuations, as given in Table 6.

According to the reconstruction results in Figure 12, the reconstructed 3D points agree well with the fitted plane, in spite of minor deviations, which might result from the following factors: (1) the distortion parameters do not describe the lens distortion with absolute accuracy, and (2) the image point coordinates obtained by corner extraction inevitably contain errors. Consequently, the errors in the image points are enlarged in the process of 3D reconstruction.
As shown in Figure 12(b), the errors of the reconstructed points are approximately symmetrical about the plane center, while the errors of the reconstructed points at the edges are more significant. Nonetheless, the above reconstruction errors (about 0.030 mm) are still very small compared to the depth of the checkerboard.
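The point-to-plane RMSE reported in Table 6 can be computed by a total-least-squares plane fit. A minimal sketch under that interpretation follows; the function name and the synthetic grid are illustrative, not the paper's data.

```python
import numpy as np

def plane_fit_rmse(points):
    """Fit a plane to Nx3 points by total least squares (SVD) and
    return the RMSE of the point-to-plane distances."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector associated with
    # the smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dist = (points - centroid) @ normal  # signed distances to the plane
    return np.sqrt(np.mean(dist**2))

# Noise-free coplanar points give a numerically zero RMSE.
grid = np.array([[x, y, 2.0 * x - y + 3.0]
                 for x in range(5) for y in range(5)], dtype=float)
rmse = plane_fit_rmse(grid)
```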
To further verify the accuracy, another two sets of experiments are carried out with the help of a two-axis electric rotary table (SLT2MA) and a three-axis electric translation table (ZG14TA), as shown in Figures 13 and 14, respectively. The cooperative mark is mounted successively on the two-axis electric rotary table and the three-axis electric translation table; therefore, the motions of the cooperative mark are pure rotations and pure translations, respectively, in the two sets of experiments.
In the first experiment, the two-axis electric rotary table is controlled to rotate no less than 5° about the two axes for each rotation. In the second experiment, the three-axis electric translation table is controlled to translate more than 10 mm for each movement. Both operations are repeated ten times.
Meanwhile, we estimate the pose change of the cooperative mark via the first Scheimpflug imaging system with the different imaging models. In this way, determining the spatial relationship between the coordinate frames of the checkerboard and the two-axis electric rotary table or three-axis electric translation table can be converted into the hand-eye calibration problem [45–48], which already has very mature solutions. Hence, the reference pose of the cooperative mark can be obtained via the two auxiliary devices. The pose estimation errors with respect to the reference values in the two sets of experiments are shown in Table 7.

It can be seen from Table 7 that the pose estimation errors vary little among the first three parametric Scheimpflug imaging models and are slightly smaller than those of the GNIM, regardless of whether the cooperative mark undergoes pure rotation or pure translation. As revealed in Table 7, the measurements of rotation and translation reach accuracies of about 0.040° and 1.5 mm, respectively. Given that the camera employed here is an ordinary DSLR camera rather than a high-precision, robust industrial camera, the pose estimation errors of the two sets of experiments are relatively small, which also validates the accuracy and effectiveness of the calibration models.
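As an illustration of the hand-eye formulation AX = XB used above, the sketch below solves the rotation by Kabsch alignment of the rotations' axis-angle (log-map) vectors, in the spirit of the closed-form methods of [45–48], and the translation by linear least squares. All function names and the synthetic setup are our own assumptions, not code from the surveyed papers.

```python
import numpy as np

def log_so3(R):
    """Axis-angle vector of a rotation matrix (angle < pi assumed)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def exp_so3(w):
    """Rotation matrix from an axis-angle vector (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if np.isclose(theta, 0.0):
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for X = (R, t), given 4x4 motion pairs."""
    # Rotation: R_A = R R_B R^T implies alpha_i = R beta_i for the
    # log-map vectors, so align beta_i -> alpha_i with Kabsch.
    alpha = np.stack([log_so3(A[:3, :3]) for A in As])
    beta = np.stack([log_so3(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(alpha.T @ beta)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    # Translation: stack (R_A - I) t = R t_B - t_A and solve in LS.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    return R, t
```

At least two motion pairs with non-parallel rotation axes are needed for the rotation to be determined, which mirrors the view-count requirement noted for the calibration itself.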
6. Conclusion
This paper presents a comprehensive survey of recent calibration methods for the Scheimpflug camera with a perspective lens. The general nonparametric imaging model is also newly employed to deal with the problem. All the calibration models are briefly recalled and compared in detail with respect to one another, with highlights on their respective advantages and drawbacks. Real-data experiments, including calibrations, reconstructions, and measurements, are performed to validate the performance of the calibration models.
As the experimental results indicate, compared with the classic pinhole imaging model, the models undergoing the Scheimpflug condition describe Scheimpflug camera imaging better, especially when the tilt angle is greater than 7°. Moreover, although the imaging models and parameter forms vary, the accuracies of the four calibration models (RMM, PLVM, PCIM, and GNIM) are basically equal, while the accuracy of the GNIM is slightly lower than that of the other three parametric models in view of the errors in reconstruction and pose estimation. Given that the PLVM and PCIM require the pixel aspect ratio to be known in advance, the RMM is the more flexible calibration model. Besides, the experimental results reveal that the parameters of tangential distortion are likely coupled with the tilt angles of the image plane in the RMM, PLVM, and PCIM. In an actual calibration task, the appropriate calibration model should be chosen according to the specific implementation conditions.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
The research was supported by the National Natural Science Foundation of China (no. 51509251) and National Key Scientific Instrument and Equipment Development Project of China (no. 2013YQ140517).
References
[1] H. M. Merklinger, Focusing the View Camera: A Scientific Way to Focus the View Camera and Estimate Depth of Field, Canada, 2010.
[2] P. Berliner, "The Scheimpflug principle," Projection Lights & Staging News, 2010.
[3] A. K. Prasad and K. Jensen, "Scheimpflug stereocamera for particle image velocimetry in liquid flows," Applied Optics, vol. 34, no. 30, pp. 7092–7099, 1995.
[4] D. Calluaud and L. David, "Stereoscopic particle image velocimetry measurements of the flow around a surface-mounted block," Experiments in Fluids, vol. 36, no. 1, pp. 53–61, 2004.
[5] S. Walker, "Two-axis Scheimpflug focusing for particle image velocimetry," in ICIASF 2001 Record, 19th International Congress on Instrumentation in Aerospace Simulation Facilities (Cat. No. 01CH37215), pp. 114–124, Cleveland, OH, USA, 2001.
[6] J. Li, Y. Guo, J. Zhu et al., "Large depth-of-view portable three-dimensional laser scanner and its segmental calibration for robot vision," Optics and Lasers in Engineering, vol. 45, no. 11, pp. 1077–1087, 2007.
[7] A. Miks, J. Novak, and P. Novak, "Analysis of imaging for laser triangulation sensors under Scheimpflug rule," Optics Express, vol. 21, no. 15, pp. 18225–18235, 2013.
[8] Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, "Fringe projection 3D microscopy with the general imaging model," Optics Express, vol. 23, no. 5, article 6846, 2015.
[9] D. S. Grewal and S. P. S. Grewal, "Clinical applications of Scheimpflug imaging in cataract surgery," Saudi Journal of Ophthalmology, vol. 26, no. 1, pp. 25–32, 2012.
[10] Y. Hon and A. K. Lam, "Corneal deformation measurement using Scheimpflug noncontact tonometry," Optometry and Vision Science, vol. 90, no. 1, pp. e1–e8, 2013.
[11] H. Louhichi, T. Fournel, J. M. Lavest, and H. B. Aissia, "Camera self-calibration in Scheimpflug condition for air flow investigation," in Advances in Visual Computing. ISVC 2006, vol. 4292 of Lecture Notes in Computer Science, pp. 891–900, Springer, Berlin, Heidelberg, 2006.
[12] Y. Saito, Y. Arai, and W. Gao, "Investigation of an optical sensor for small tilt angle detection of a precision linear stage," Measurement Science and Technology, vol. 21, no. 5, article 54006, 2010.
[13] A. Kumar and N. Ahuja, "Generalized pupil-centric imaging and analytical calibration for a non-frontal camera," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3970–3977, Columbus, OH, USA, 2014.
[14] J. Peng, M. Wang, D. Deng, X. Liu, Y. Yin, and X. Peng, "Distortion correction for microscopic fringe projection system with Scheimpflug telecentric lens," Applied Optics, vol. 54, no. 34, pp. 10055–10062, 2015.
[15] T. Sun, J. Fang, D. Zhao, X. Liu, and Q. X. Tong, "A novel multi-digital camera system based on tilt-shift photography technology," Sensors, vol. 15, no. 12, pp. 7823–7843, 2015.
[16] E. Nocerino, F. Menna, F. Remondino, J. A. Beraldin, L. Cournoyer, and G. Reain, "Experiments on calibrating tilt-shift lenses for close-range photogrammetry," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B5, pp. 99–105, 2016.
[17] P. Cornic, C. Illoul, A. Cheminet et al., "Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras," Measurement Science and Technology, vol. 27, no. 9, article 094004, 2016.
[18] C. Steger, "A comprehensive and versatile camera model for cameras with tilt lenses," International Journal of Computer Vision, vol. 123, no. 2, pp. 121–159, 2016.
[19] R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal on Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
[20] J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1106–1112, San Juan, PR, USA, 1997.
[21] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
[22] P. F. Sturm and S. J. Maybank, "On plane-based camera calibration: a general algorithm, singularities, applications," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149), pp. 432–437, Fort Collins, CO, USA, 1999.
[23] A. Legarda, A. Izaguirre, N. Arana, and A. Iturrospe, "Comparison and error analysis of the standard pinhole and Scheimpflug camera calibration models," in 2013 IEEE 11th International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics, pp. 1–6, Toulouse, France, 2013.
[24] T. Fournel, H. Louhichi, C. Barat, and J. F. Menudet, "Scheimpflug self-calibration based on tangency points," in The 12th International Symposium on Flow Visualization, pp. 1–10, Göttingen, Germany, September 2006.
[25] H. Louhichi, T. Fournel, J. M. Lavest, and H. B. Aissia, "Self-calibration of Scheimpflug cameras: an easy protocol," Measurement Science and Technology, vol. 18, no. 8, pp. 2616–2622, 2007.
[26] J. Wang, F. Shi, J. Zhang, and Y. Liu, "A new calibration model of camera lens distortion," Pattern Recognition, vol. 41, no. 2, pp. 607–615, 2008.
[27] A. Legarda, A. Izaguirre, N. Arana, and A. Iturrospe, "A new method for Scheimpflug camera calibration," in 2011 10th International Workshop on Electronics, Control, Measurement and Signals, pp. 1–5, Liberec, Czech Republic, 2011.
[28] S. Hamrouni, H. Louhichi, H. B. Aissia, and M. Elhajem, "A new method for stereo-cameras self-calibration in Scheimpflug condition," in 15th International Symposium on Flow Visualization, pp. 1–10, Minsk, Belarus, 2012.
[29] A. Kumar and N. Ahuja, "Generalized radial alignment constraint for camera calibration," in 2014 22nd International Conference on Pattern Recognition, pp. 184–189, Stockholm, Sweden, 2014.
[30] P. Fasogbon, L. Duvieubourg, P.-A. Lacaze, and L. Macaire, "Intrinsic camera calibration equipped with Scheimpflug optical device," in Twelfth International Conference on Quality Control by Artificial Vision 2015, Le Creusot, France, 2015.
[31] X. Zhang and T. Zhou, "Generic Scheimpflug camera model and its calibration," in 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2264–2270, Zhuhai, China, 2015.
[32] S. Ramalingam, Generic Imaging Models: Calibration and 3D Reconstruction Algorithms, [Ph.D. thesis], Institut National Polytechnique de Grenoble, Grenoble, France, 2006.
[33] S. Ramalingam and P. Sturm, "A unifying theory for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 7, pp. 1309–1319, 2016.
[34] M. D. Grossberg and S. K. Nayar, "A general imaging model and a method for finding its parameters," in Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, vol. 2, pp. 108–115, Vancouver, BC, Canada, 2001.
[35] P. Sturm and S. Ramalingam, "A generic concept for camera calibration," in Computer Vision – ECCV 2004, T. Pajdla and J. Matas, Eds., vol. 3022 of Lecture Notes in Computer Science, pp. 1–13, Springer, Berlin, Heidelberg, 2004.
[36] A. Kumar and N. Ahuja, "On the equivalence of moving entrance pupil and radial distortion for camera calibration," in 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2345–2353, Santiago, Chile, 2015.
[37] D. Li and J. Tian, "An accurate calibration method for a camera with telecentric lenses," Optics and Lasers in Engineering, vol. 51, no. 5, pp. 538–541, 2013.
[38] O. Albers, A. Poesch, and E. Reithmeier, "Flexible calibration and measurement strategy for a multi-sensor fringe projection unit," Optics Express, vol. 23, no. 23, pp. 29592–29607, 2015.
[39] D. C. Brown, "Decentering distortion of lenses," Photogrammetric Engineering, vol. 32, pp. 444–462, 1966.
[40] M. Naderan, S. Shoar, M. Naderan, M. A. Kamaleddin, and M. T. Rajabi, "Comparison of corneal measurements in keratoconic eyes using rotating Scheimpflug camera and scanning-slit topography," International Journal of Ophthalmology, vol. 8, no. 2, pp. 275–280, 2015.
[41] P. Miraldo, H. Araujo, and J. Queiro, "Point-based calibration using a parametric representation of the general imaging model," in 2011 International Conference on Computer Vision, pp. 2304–2311, Barcelona, Spain, 2011.
[42] P. Miraldo and H. Araujo, "Calibration of smooth camera models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2091–2103, 2013.
[43] G. H. Lee, F. Fraundorfer, and M. Pollefeys, "Motion estimation for self-driving cars with a generalized camera," in 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2746–2753, Portland, OR, USA, 2013.
[44] A. K. Dunne, J. Mallon, and P. F. Whelan, "Efficient generic calibration method for general cameras with single centre of projection," Computer Vision and Image Understanding, vol. 114, no. 2, pp. 220–233, 2010.
[45] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, 1989.
[46] Y. C. Shiu and S. Ahmad, "Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB," IEEE Transactions on Robotics and Automation, vol. 5, no. 1, pp. 16–29, 1989.
[47] F. Dornaika and R. Horaud, "Simultaneous robot-world and hand-eye calibration," IEEE Transactions on Robotics and Automation, vol. 14, no. 4, pp. 617–622, 1998.
[48] F. C. Park and B. J. Martin, "Robot sensor calibration: solving AX = XB on the Euclidean group," IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 717–721, 2002.
Copyright
Copyright © 2018 Cong Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.