Journal of Sensors
Volume 2018, Article ID 5351863, 12 pages
https://doi.org/10.1155/2018/5351863
Research Article

Practical In Situ Implementation of a Multicamera Multisystem Calibration

1Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB, Canada T2N 1N4
2Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Dr West, Lafayette, IN 47907-2051, USA

Correspondence should be addressed to Ivan Detchev; i.detchev@ucalgary.ca

Received 24 May 2017; Revised 4 October 2017; Accepted 31 October 2017; Published 7 February 2018

Academic Editor: Marco Scaioni

Copyright © 2018 Ivan Detchev et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Consumer-grade cameras are generally low-cost and available off-the-shelf, so having multicamera photogrammetric systems for 3D reconstruction is both financially feasible and practical. Such systems can be deployed in many different types of applications: infrastructure health monitoring, cultural heritage documentation, biomedicine, as-built surveys, and indoor or outdoor mobile mapping, for example. A geometric system calibration is usually necessary before a data acquisition mission in order for the results to have optimal accuracy. A typical system calibration must address the estimation of both the interior and the exterior, or relative, orientation parameters for each camera in the system. This article reviews different ways of performing a calibration of a photogrammetric system consisting of multiple cameras. It then proposes a methodology for the simultaneous estimation of both the interior and the relative orientation parameters which can work in several different types of scenarios, including a multicamera multisystem calibration. A rigorous in situ system calibration was successfully implemented and tested. The same algorithm is able to handle both the equivalent of a traditional-style bundle adjustment, that is, a network solution without constraints, for single- or multicamera calibration, and the proposed bundle adjustment with built-in relative orientation constraints for the calibration of one or multiple systems of cameras.

1. Introduction

A photogrammetric system consists of one or multiple digital cameras. In the case of a single camera [1], it would have to be moving or sequentially occupying varying camera stations. This scenario would only work for objects that remain shape-invariant throughout the image recording session due to the time lapse between the varying camera station exposures. Preferably, an array or a cluster of cameras should be used in the scenarios where the shape of the object of interest may be changing with time [2]. The availability of inexpensive digital cameras has made the use of such multisensor systems more and more common. Their employment in mobile mapping applications [3, 4], dense matching of imagery [5, 6], biomedical and motion-capture metric applications [2, 7], infrastructure health monitoring [8, 9], and the generation of photo scenes from multiple sensors [10] has become a frequent occurrence.

Sensor calibration is known to be a critical quality assurance measure to maximize photogrammetric accuracy. This is even more so in the case of a multicamera system. A correct system calibration is essential for the accurate reconstruction of 3D object space required in photogrammetric applications. For this purpose, a mathematical model based on built-in relative orientation constraints (ROCs) is reviewed and further improved by modifying it to handle both single and multiple reference camera(s).

The next section summarizes the different types of system calibrations depending on whether the cameras are precalibrated or calibrated in situ and whether the estimation of the relative orientation parameters is performed in a two- or a one-step procedure. The preferred system calibration method for the simultaneous estimation of all system calibration parameters is proposed and tested.

2. System Calibration Methodology

The geometric calibration of a system comprising multiple cameras has two components: a camera calibration of each camera in the system and an estimation of the position and orientation of the cameras involved in the system with respect to a reference camera. The next subsections discuss various options for accomplishing the estimation of a system calibration.

2.1. Solution for the Interior Orientation Parameters

The camera calibration necessitates estimating the interior orientation parameters (IOPs) of each camera, which include the principal distance, the principal point offset, and any necessary distortion or additional parameters. This has been heavily addressed in the photogrammetric literature. For consumer-grade digital cameras that can be purchased off-the-shelf, the preferred procedure for estimating the IOPs is a bundle adjustment with self-calibration [11, 12]. The distortion models are found in Brown [13] and Brown [14], while the analytical basis for the adjustment is published in Kenefick et al. [15] and Granshaw [16]. Clarke and Fryer [17] and Remondino and Fraser [18] include recommendations on how to carry out the calibration procedure for a single camera. In addition, Chandler et al. [19] and Fraser [20] show examples for the calibration of digital cameras that are specifically low-cost/off-the-shelf.

The calibration process can be performed in a specialized laboratory or on the job (i.e., in situ). Which type of calibration should be chosen depends on the project specifics. In the case of a photogrammetric system comprising a few cameras (e.g., two to three), the calibration process can successfully be carried out individually for each camera before the commencement of any data collection. Yet, if many cameras are involved in the system in question (e.g., four or more), precalibrating each one of them individually might be too time-consuming. Additionally, it may not be desirable to dismount and then remount the cameras from the system platform every time they need to be recalibrated. In such circumstances, performing the camera calibration on the job or in situ may be more practical and/or feasible. The challenge of such an in situ system calibration is the first-order network design problem, that is, having a network geometry that will produce a solution for the unknown parameters with an acceptable variance-covariance matrix. For instance, a sufficient number of target points with a well-distributed spread within the image format must be present for each camera. At the same time, multistation convergent images must be taken such that isotropic coordinate precision for the object space reconstruction is achieved. For a stationary camera system, this network configuration can be emulated by conducting numerous translations and rotations of a portable test field within the field of view of the cameras in the system [21–25]. It is worth clarifying that even though the camera system is physically stationary and the test field is the one translated and rotated, the adjustment is handled inversely, as if the test field were kept stationary and the camera system were the one moving.
In this way, for each instance of translation and rotation, the number of exterior orientation parameters (EOPs) added to the adjustment is smaller than the number of 3D coordinates for the object space target points, and the total number of unknowns in the adjustment is thus minimized. Note that ideally this portable test field should be 3D so that any projective compensation or high correlations within and between the IOPs and EOPs can be decoupled [15, 22–25].

2.2. Solution for the Camera Mounting Parameters

Before the beginning of a data collection campaign, the position and the orientation of the cameras in the system must be estimated in addition to the IOPs. This can be done with respect to a reference camera or some other type of reference frame (e.g., an IMU body frame). These parameters are referred to as the relative orientation parameters (ROPs) or the camera mounting parameters (CMPs), that is, the parameters describing how each camera is attached to the system platform. Assuming that the CMPs are defined relative to a reference camera, they consist of positional, $\mathbf{r}^{c_r}_{c_j}$, and rotational, $\mathbf{R}^{c_r}_{c_j}$, offsets between each camera $c_j$ and the reference camera $c_r$. These components can also be referred to as the lever arm (baseline) and (angular) boresight, respectively. The estimation of the CMPs can be done in a two-step or a one-step process [26]. Both types of process are reviewed in the next two subsections.

2.2.1. Two-Step CMP Estimation

The first step in the two-step procedure for providing a solution for the CMPs is estimating the EOPs for each of the cameras in the system. A traditional-style bundle adjustment, that is, a network solution based on the collinearity equations and without any constraints, is normally used for the purpose of completing this first step (see (1) and (2)). Note that time dependency is assumed here in order to have a general model, that is, a model that can handle both stationary and moving sensors or objects:

$$\mathbf{r}^{m}_{I} = \mathbf{r}^{m}_{c_j}(t) + \lambda_{i}\,\mathbf{R}^{m}_{c_j}(t)\,\mathbf{r}^{c_j}_{i} \quad (1)$$

where $\mathbf{r}^{m}_{I}$ contains the coordinates of object space point $I$ with respect to the mapping frame $m$; $\mathbf{r}^{m}_{c_j}(t)$ and $\mathbf{R}^{m}_{c_j}(t)$ are the time-dependent positional and rotational parameters or the EOPs of camera $c_j$ with respect to the mapping frame at time $t$; $\lambda_{i}$ is the image to object space scale; and $\mathbf{r}^{c_j}_{i}$ contains the distortion-free coordinates of image space point $i$, or the distortion-free projection of point $I$ in the frame of camera $c_j$:

$$\mathbf{r}^{c_j}_{i} = \begin{bmatrix} x_{i} - x_{p} - \Delta x_{i} \\ y_{i} - y_{p} - \Delta y_{i} \\ -c \end{bmatrix} \quad (2)$$

where $(x_{i}, y_{i})$ are the observed or distorted image coordinates for point $i$; $(x_{p}, y_{p})$ is the principal point offset; $c$ is the principal distance; and $(\Delta x_{i}, \Delta y_{i})$ are the image space distortions for point $i$.
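The forward collinearity mapping described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation; the rotation-angle order in `rot_xyz` and all function names are assumptions.

```python
import numpy as np

def rot_xyz(omega, phi, kappa):
    """Build a camera-to-mapping-frame rotation as Rx(omega) @ Ry(phi) @ Rz(kappa),
    angles in radians. The rotation order is an assumption; conventions vary."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(point_m, r_cam, R_cam, c, xp=0.0, yp=0.0):
    """Distortion-free image coordinates of an object space point:
    rotate the camera-to-point vector into the camera frame and scale
    it so that its third component equals -c (the principal distance)."""
    p = R_cam.T @ (np.asarray(point_m, float) - np.asarray(r_cam, float))
    scale = -c / p[2]
    return xp + scale * p[0], yp + scale * p[1]
```

For example, a camera at the origin with an identity rotation and c = 25 (arbitrary units) images the point (1, 0, −10) at x = 2.5, y = 0.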

The second step in this process uses the estimated EOPs at time $t$ to compute the CMPs using (3) and (4) [4]:

$$\mathbf{r}^{c_r}_{c_j}(t) = \mathbf{R}^{m}_{c_r}(t)^{T}\,\left[\mathbf{r}^{m}_{c_j}(t) - \mathbf{r}^{m}_{c_r}(t)\right] \quad (3)$$

where $\mathbf{r}^{c_r}_{c_j}(t)$ is the time-dependent 3D lever arm/positional offset or translation between camera $c_j$ and the reference camera $c_r$; $\mathbf{r}^{m}_{c_r}(t)$ and $\mathbf{R}^{m}_{c_r}(t)$ are the time-dependent positional and rotational parameters or the EOPs of the reference camera with respect to the mapping frame $m$; and $\mathbf{r}^{m}_{c_j}(t)$ is the time-dependent positional component of the EOPs of camera $c_j$.

$$\mathbf{R}^{c_r}_{c_j}(t) = \mathbf{R}^{m}_{c_r}(t)^{T}\,\mathbf{R}^{m}_{c_j}(t) \quad (4)$$

where $\mathbf{R}^{c_r}_{c_j}(t)$ is the time-dependent 3D boresight/rotational offset between camera $c_j$ and the reference camera $c_r$, which is a function of $\mathbf{R}^{m}_{c_r}(t)$ and $\mathbf{R}^{m}_{c_j}(t)$, and $\mathbf{R}^{m}_{c_j}(t)$ is the time-dependent rotational component of the EOPs of camera $c_j$.
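The two-step computation of the lever arm and boresight from a shared-epoch pair of EOPs amounts to a change of frame, sketched below with numpy. The function name is hypothetical, and the rotation matrices are assumed to map camera-frame vectors into the mapping frame.

```python
import numpy as np

def mounting_params(r_ref, R_ref, r_j, R_j):
    """Lever arm and boresight of camera j with respect to the reference
    camera, computed from EOPs estimated at the same epoch."""
    lever = R_ref.T @ (r_j - r_ref)   # baseline expressed in the reference camera frame
    bore = R_ref.T @ R_j              # relative rotation between the two cameras
    return lever, bore
```

Reapplying the reference EOPs to the mounting parameters reproduces the EOPs of camera j, which serves as a quick consistency check.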

If the EOPs are estimated in a single observation epoch, there would accordingly be a single set of computed CMPs. If the EOPs are, however, estimated in two or more observation epochs, the resultant redundant sets of time-dependent CMPs can be averaged and their standard deviations can be calculated [4, 22–25]. Note, however, that rotation averaging should not be performed with Euler angles [27]. The point of averaging is to compute the best estimate of a random variable while minimizing the sum of squared errors. Averaging Euler angles, that is, sequential rotational parameters, does not minimize a meaningful cost function. Instead, for sound rotation averaging, quaternions or the angle-axis representation must be used, as they can minimize several cost functions, which are listed in Hartley et al. [27]. In addition, the rotational and positional parameters are often correlated. It should also be highlighted that if the reference camera does not observe the test field in certain observation epochs, these observation epochs cannot contribute to the estimation of the CMPs. Moreover, if the field of view of a particular camera does not overlap with the one for the reference camera (i.e., the two cameras cannot observe the test field simultaneously in any of the observation epochs), the CMPs of the camera in question cannot be directly estimated as in (3) and (4). A work-around procedure exploiting the overlap with other cameras in the system must then be implemented.
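A minimal quaternion-averaging sketch in the spirit of the methods surveyed in Hartley et al. [27]: this particular formulation, via the eigenvector belonging to the largest eigenvalue of the summed outer products, is one common choice and is not claimed to be the one used by the authors.

```python
import numpy as np

def average_quaternions(quats):
    """Average unit quaternions (w, x, y, z) via the eigenvector of the
    largest eigenvalue of the sum of outer products. The outer product
    makes the result invariant to the sign ambiguity q ~ -q."""
    M = np.zeros((4, 4))
    for q in quats:
        q = np.asarray(q, float)
        q = q / np.linalg.norm(q)
        M += np.outer(q, q)
    _, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return vecs[:, -1]            # eigenvector of the largest eigenvalue
```

Averaging the identity rotation with a 90° rotation about the z-axis yields a 45° rotation about the z-axis, as expected.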

Using constraint equations for the EOPs in the network solution in order to enforce an invariant geometrical relationship between the cameras at different times [10, 28–33] may mitigate some of the mentioned problems. The benefit of using EOP constraint equations in the bundle adjustment is that it is not necessary to perform any averaging. This is the case since no matter which observation epoch is used, the same values for the CMPs would be computed with (3) and (4). The disadvantages of this method are that it is still technically a two-step process, a work-around procedure is necessary for computing the CMPs of any cameras which do not overlap with the reference camera in any of the observation epochs, and the complexity of the implementation procedure grows with the number of cameras in the system and the number of observation epochs [4]. Given these drawbacks, especially the complexity consideration, this type of network solution is not implemented as part of this research.

2.2.2. One-Step CMP Estimation

A one-step procedure for estimating the CMPs is desired in order to avoid a separate estimation step for the CMPs and potential work-around procedures for addressing situations where a camera does not have any overlap with the reference camera in any observation epoch. This can be accomplished by directly incorporating ROCs among all cameras and the reference camera in the collinearity-equation-based network solution [4, 29, 30].

$$\mathbf{r}^{m}_{I} = \mathbf{r}^{m}_{c_r}(t) + \mathbf{R}^{m}_{c_r}(t)\,\mathbf{r}^{c_r}_{c_j} + \lambda_{i}\,\mathbf{R}^{m}_{c_r}(t)\,\mathbf{R}^{c_r}_{c_j}\,\mathbf{r}^{c_j}_{i} \quad (5)$$

The CMPs, $\mathbf{r}^{c_r}_{c_j}$ and $\mathbf{R}^{c_r}_{c_j}$, in (5) are now explicitly treated as time-independent parameters. In other words, it is assumed that the CMPs for the system of multiple cameras remain stable during all observation epochs within a given system calibration campaign. Also, the EOPs of the reference camera, $\mathbf{r}^{m}_{c_r}(t)$ and $\mathbf{R}^{m}_{c_r}(t)$, now represent the EOPs of the system platform. This model is relatively straightforward to implement, as it preserves its simplicity regardless of the number of cameras employed or the number of observation epochs used. As the number of cameras and the number of observation epochs increase, it significantly reduces the number of unknowns to solve for compared to the traditional-style bundle adjustment. It should be noted that when the observation equations for the reference camera, $c_r$, are established, (5) reduces to (6), because the lever arm vector is set to zero, $\mathbf{r}^{c_r}_{c_r} = \mathbf{0}$, and the boresight rotation matrix is set to identity, $\mathbf{R}^{c_r}_{c_r} = \mathbf{I}$:

$$\mathbf{r}^{m}_{I} = \mathbf{r}^{m}_{c_r}(t) + \lambda_{i}\,\mathbf{R}^{m}_{c_r}(t)\,\mathbf{r}^{c_r}_{i} \quad (6)$$

The difference between the mathematical models for the traditional-style bundle adjustment (1) and the one with built-in ROCs (5) is visually summarized in Figure 1.
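The reduction in unknowns can be illustrated with a small counting sketch. The function and the 30-epoch figure below are hypothetical and only illustrate the bookkeeping, not the actual adjustment.

```python
def unknown_eop_count(n_cameras, n_epochs, with_rocs):
    """Count EOP/CMP unknowns only; the IOPs and object point coordinates
    are common to both models. Traditional adjustment: 6 EOPs per camera
    per epoch. ROC model: 6 platform EOPs per epoch plus 6
    time-independent CMPs per non-reference camera."""
    if with_rocs:
        return 6 * n_epochs + 6 * (n_cameras - 1)
    return 6 * n_cameras * n_epochs
```

For instance, with 8 cameras observed over 30 epochs, the traditional model carries 1440 such unknowns against only 222 for the ROC model.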

Figure 1: Mathematical model for 3D reconstruction using the traditional-style collinearity-equation-based bundle adjustment with no constraints (a) versus the one using built-in ROCs (b); (0,0,0) denotes the origin of the mapping frame; $c_1$, $c_2$, and $c_3$ are example cameras 1, 2, and 3, respectively; and $c_r$ is the reference camera.
2.3. Simultaneous IOP and CMP Estimation

There are four possible types of calibration of a system with multiple cameras, given the options for estimating the IOPs and the CMPs. Recall that individual cameras can be either precalibrated or multiple cameras can be calibrated in situ, and that the CMPs can be either estimated via the EOPs in a two-step process or directly in a one-step process. The four scenarios are summarized in Table 1 and explained in more detail as follows:
(1) The first system calibration scenario involves precalibrating all cameras individually and computing the CMPs in a two-step procedure via the EOPs. It is not desirable, because it may not be practical to precalibrate many cameras and/or the IOPs of the precalibrated cameras may not be stable. In addition, in situations where the reference camera does not observe the test field in certain observation epochs, or certain cameras do not have any overlap with the reference camera, the CMP estimation may be ambiguous or not even possible.
(2) The second system calibration scenario involves calibrating all cameras in situ and computing the CMPs in a two-step procedure via the EOPs. Note that the IOPs and the EOPs here are estimated simultaneously. As long as the necessary network configuration can be provided, this procedure is more practical in terms of the camera calibration aspect. It is, however, still not a preferred approach due to the drawbacks listed with regard to the two-step CMP estimation.
(3) The third system calibration scenario involves precalibrating all cameras individually and estimating the CMPs in a single step. This procedure may be acceptable for a system with a low number of cameras (e.g., one to three) as long as the individual camera IOPs remain stable from the point in time when the IOPs are estimated to the point in time when the cameras are set up for the CMP estimation.
(4) The fourth system calibration scenario involves calibrating all cameras in situ and estimating the CMPs in a single step. Note that the IOPs and the CMPs here are estimated simultaneously. Again, as long as the necessary network configuration can be provided, this procedure is the most desirable one because all unknown parameters are estimated simultaneously in a single step.

Table 1: Possible types of system calibrations.
2.4. Multicamera Multisystem Calibration

The ability to handle multiple reference cameras in the bundle adjustment with built-in relative orientation constraints is a valuable contribution of the proposed procedure for system calibration. Essentially, all cameras participating in the system calibration adjustment are assigned to or selected as a reference camera, and there can be as many reference cameras as the total number of participating cameras. Thus, the adjustment is not limited to a single reference camera. This is significant, because it provides the flexibility of performing the following types of calibrations within the same algorithm:
(i) Calibration of a single camera—the camera in question is selected as a reference camera; this scenario reduces to a regular single camera calibration, where the unknowns are the IOPs of the camera, the EOPs for each image, and the X,Y,Z coordinates for all the target points.
(ii) Calibration of a single system of multiple cameras—all the available cameras are assigned to the same reference camera; other than the IOPs for all the cameras and the X,Y,Z coordinates for all the target points, the unknowns here are the EOPs for the one reference camera and the CMPs for all other cameras with respect to the reference camera.
(iii) Calibration of multiple systems with multiple cameras within the same adjustment—the cameras are divided into groups, where each group is assigned a different reference camera; note that in the most general case, each group corresponds to a physically different system. Other than the IOPs and X,Y,Z coordinates already mentioned, the unknowns in this scenario are the EOPs for the multiple reference cameras and the CMPs for the remaining cameras in each system with respect to their corresponding reference camera; note that the observations coming from a particular camera affect the EOPs of only its corresponding reference camera.
(iv) Calibration of multiple cameras without any ROCs within the same adjustment—if multiple cameras need to be calibrated, but they are not employed within a stable system, each camera is selected as a reference camera; this scenario reduces to a regular multiple camera calibration, and it is basically a special case of the previous one where each camera is treated as its own system.
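The reference-camera assignments behind these four scenarios can be captured with a simple camera-to-reference mapping. The sketch below is purely illustrative; the camera labels and the helper function are invented for this example.

```python
def reference_cameras(assignment):
    """Cameras that serve as their own reference: platform EOPs are
    estimated for these, and CMPs for every other camera are estimated
    relative to its assigned reference."""
    return {cam for cam, ref in assignment.items() if cam == ref}

# Hypothetical assignments illustrating the four calibration scenarios:
single_camera = {"c1": "c1"}
single_system = {"c1": "c1", "c2": "c1", "c3": "c1"}
multi_system  = {"c1": "c1", "c2": "c1", "c3": "c3", "c4": "c3"}
no_rocs       = {"c1": "c1", "c2": "c2", "c3": "c3"}
```

In the multisystem case, two reference cameras ("c1" and "c3") carry platform EOPs, while "c2" and "c4" carry CMPs relative to their group's reference; in the no-ROC case, every camera is its own one-camera system.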

3. Experiment Setup

A multicamera system was employed in a structures laboratory for the purposes of this research initiative. This system is described here, as are both a newly designed calibration test field and a routine for acquiring calibration data.

3.1. Example System Setup

The photogrammetric system employed in this research work consisted of eight digital cameras. The camera bodies used were Canon EOS 1000D/Rebel XS DSLRs with Canon EF-S 18–55 mm f/3.5–5.6 zoom kit lenses. A 22.2 mm × 14.8 mm complementary metal oxide semiconductor (CMOS) solid state sensor was present in each camera body [34]. An image resolution of 10.1 megapixels (i.e., 3888 pixels along the image width and 2592 pixels along the image height) was possible. A square pixel size with a nominal dimension of 5.71 μm was assumed. The pixel size was computed by dividing the dimensions of the effective sensor size by the number of pixels in the corresponding direction. For this research project, the cameras were mounted on a steel frame via tripod heads with three degrees of freedom. The tripod heads were necessary to point the cameras towards the specimen of interest. The cameras were configured and synchronized so that nonblurred images of both stationary/static and kinematic/dynamic objects could be taken. After the cameras were focused on the specimen surface, the focus and zoom rings of the lenses were locked with electrical tape, and the auto-focus, the vibration reduction, and the sensor cleaning functions of the cameras were disabled in order to minimize any potential camera instability as much as possible.
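As a quick check of the quoted pixel size, dividing the effective sensor dimensions by the pixel counts in each direction reproduces the nominal 5.71 μm pitch in both axes:

```python
# Effective sensor dimensions (mm) and pixel counts from the camera specification
sensor_w_mm, sensor_h_mm = 22.2, 14.8
cols, rows = 3888, 2592

pitch_x_um = sensor_w_mm / cols * 1000.0   # mm -> micrometres
pitch_y_um = sensor_h_mm / rows * 1000.0   # both come out to ~5.71 um
```

The two values agree because the 22.2:14.8 sensor aspect ratio equals the 3888:2592 pixel ratio (both 3:2), confirming the square-pixel assumption.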

The camera system was suspended from an overhanging metal frame (see Figure 2). This configuration was chosen so that the system could observe the top surface of a concrete beam while it was being deformed by a hydraulic actuator. The array of cameras was set up in such a way that the optical axes of the cameras were converging as much as possible at the object of interest. The convergence angle between consecutive cameras varied from 0° to 20° and was approximately 70° between the first and last cameras (see Figure 3). This convergent network geometry allows for near-isotropic coordinate precision to be achieved. The nominal principal distance of the cameras varied between 22 mm and 28 mm. Principal distances on the lower end were used for the central cameras, while ones on the higher end were set for the end cameras.

Figure 2: Suspended setup showing the hydraulic actuator, the concrete beam specimen (a), and a close-up of the camera system (b); note that the circular targets on the floor and the circular checkerboard targets on the actuator frame were placed for another project and are not relevant for this research.
Figure 3: Camera configuration for the suspended system setup; the camera stations are shown as black circles, and the convention used for the camera axes is as follows: x-axis (red) is along the object of interest, y-axis (not shown) is across the object of interest, and z-axis (blue) is away from the object of interest.
3.2. Portable Calibration Test Field and Routine for In Situ System Calibration

Before the commencement of any actual experiments, in situ calibration data had to be collected by the system. The time allocated for the collection of the calibration data was approximately 15 to 30 minutes. The in situ calibration data collection involved translating and rotating a portable test field above the object(s)/surface(s) of interest. The test field used in this research was a 2D reinforced plywood sheet with a four by three grid of coded targets and an eleven by eight grid of checkerboard targets (see Figure 4(a)). While a 3D test field would have been preferred, the use of a 2D test field was more practical given the laboratory conditions and the object of interest at hand. The coded targets were only used for automating the target labelling and solving the image-point correspondence problem. The target coding system and the software used to automatically identify the codes were created in-house. The image point measurements for the checkerboard targets were first made with the Harris corner detector. Then, a subpixel optimization algorithm in OpenCV [35] was used to further refine them. The centre of the test field was the origin of the local coordinate system. If the test field was placed in front of the system, the orientation of the X-, Y-, and Z-axes was chosen so that the Z-axis would point in the general direction of the z-axis of the reference camera (see Figure 4(b)). Additionally, a piece of the test field was cut out by design. This allowed for the test field to fit around the piston of the hydraulic actuator at the cost of losing only three checkerboard targets (see Figure 5).
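The subpixel refinement idea can be illustrated with a simplified quadratic peak fit. This is a generic stand-in for, not a reimplementation of, the OpenCV routine used by the authors; the function name and the parabola-fit formulation are assumptions.

```python
import numpy as np

def subpixel_peak(resp, r, c):
    """Refine an integer-valued peak location (r, c) in a 2D response
    array by fitting a parabola through the three samples along each
    axis and returning the vertex position."""
    def offset(a, b, cc):
        # Vertex of the parabola through (-1, a), (0, b), (1, cc)
        denom = a - 2.0 * b + cc
        return 0.0 if denom == 0.0 else 0.5 * (a - cc) / denom
    dr = offset(resp[r - 1, c], resp[r, c], resp[r + 1, c])
    dc = offset(resp[r, c - 1], resp[r, c], resp[r, c + 1])
    return r + dr, c + dc
```

On a synthetic Gaussian response peaked at (10.3, 7.6), the refined estimate lands within a few hundredths of a pixel of the true location, while the integer argmax is off by up to half a pixel.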

Figure 4: The portable calibration test field showing the origin and the orientation of the local coordinate system (a) and the coordinate system used for a particular camera (b).
Figure 5: Sample placement of the portable test field around the piston of the hydraulic actuator.

A series of images of the test field was taken simultaneously from as many cameras as possible. Given that the test field employed was 2D, care was taken to partially mitigate any projective compensation or high correlations between the interior and exterior orientation parameters: namely, a convergent network geometry and rolls of the test field were implemented. In each observation epoch, the position and orientation of the test field with respect to the reference camera were altered. For example, with the test field in "landscape" mode, it was translated under each camera with the ω and φ rotations being nominally 0°. Four more translation rounds were repeated where the test field was rotated around the X- and Y-axes in such a way that the range in ω and φ was between 30° and 65°. The test field was then rotated by ±90° around its Z-axis (i.e., into "portrait" mode), and five more similar translation rounds were performed.

In order to evenly fill out the entire usable image format of each camera with targets, the described translations and rotations of the test field were performed under each camera. This target distribution was required so that the lens distortion coefficients for each camera could be estimated reliably. Figure 6 shows two examples of all the image coordinate measurements acquired within the format of a particular camera superimposed on a photograph taken by that camera.

Figure 6: Examples of an entire image format (a) and of a usable portion of an image format (b) for different cameras being filled with targets.

4. Experimental Results

The experimental results related to the proposed multicamera multisystem calibration methodology are presented next. The tests for estimating the IOPs, the CMPs, or the IOPs and CMPs simultaneously are as follows:
(i) Estimation of IOPs in an individual camera calibration versus calibration of multiple cameras simultaneously with no ROCs versus calibration of multiple cameras as a system with built-in ROCs
(ii) Estimation of CMPs in a two-step calibration versus a one-step calibration with or without prior knowledge of the IOPs
(iii) Estimation of all system calibration parameters (i.e., IOPs and CMPs) simultaneously for individual multicamera systems versus for multiple multicamera systems within the same adjustment

Comparison of the various calibration parameter outcomes is also performed using a previously developed system stability analysis tool [26].

Note that for all the bundle adjustments run in the experimental results, the coordinate frame was always the same. The datum was defined by fixing six coordinates and by including observations for the six spatial distances between the four outermost targets on the test field. Thus, the datum definition was minimally constrained (with redundancy for the scale). Except for the six fixed coordinates, all coordinates for the targets on the board were treated as unknowns in the adjustments. Finally, note that while not all targets were observed in each acquired image, all targets were used in all solutions.

4.1. IOP Estimation Test

After an in situ system calibration data set was acquired using the portable test field, the data were processed in three different ways in order to assess the estimation quality of the IOPs:
(i) A separate bundle adjustment was run for each camera in order to calibrate each camera individually.
(ii) A single bundle adjustment without using any ROCs was run with all cameras in order to calibrate all the cameras simultaneously.
(iii) A single bundle adjustment with a reference camera and built-in ROCs was run in order to calibrate the cameras as a system.

The results from the IOP estimation test are summarized in Table 2. A "per camera normalization" was applied to some of the adjustment quantities in the multiple camera calibration and system calibration columns; that is, the quantity in question was divided by the number of cameras for a more objective comparison. Note that even though some variation in the quality of the image coordinate measurements was present in the different adjustments, a thorough investigation as to whether it is necessary to perform any reweighting of the measurements has not yet been conducted. While the three types of adjustments had similar overall image coordinate measurement precision (RMSE_xy), the range of the IOP standard deviation values improved for the multiple camera calibration and for the system calibration. The extra intersecting light rays, coming from the increased number of image point observations for the same object space targets, increased the redundancy and strengthened the adjustment geometrically [36]. Figure 7 shows a comparison between the network geometry for a single camera calibration versus all cameras involved in the system. Additionally, the decrease in the number of unknowns for the system calibration further improved the redundancy compared to the multiple camera calibration. It can thus be concluded that, in terms of the IOP estimation, the proposed system calibration yielded the strongest solution.

Table 2: Results for the IOP estimation test of the cameras belonging to the system.
Figure 7: Example network geometry for the calibration of a single camera (a) versus the simultaneous calibration of all cameras involved in the system (b). The camera stations are shown as black circles, the camera x-axis is in red, its y-axis is in green, and its z-axis is in blue. The checkerboard targets on the portable test field are shown as black crosses, and the magenta lines indicate distance measurements used for scale definition.

The same additional parameter (AP) model was used in all three types of bundle adjustments for the conducted IOP test. Four lens distortion coefficients were considered sufficient to adequately model the systematic error present in the image coordinate observations for the cameras involved in the system. Without these APs, the size of some of the residuals was several times the size of a pixel, which was deemed unacceptably large.

4.2. CMP Estimation Tests

The quality of the CMP estimation was also tested with the in situ system calibration data set. Four system calibration scenarios were tested:
(1) A traditional-style bundle adjustment where the IOPs for all the cameras were taken from the individual camera calibrations in the previous subsection and were kept as fixed constants
(2) A self-calibrating traditional-style bundle adjustment
(3) The proposed bundle adjustment with a reference camera and built-in ROCs where the IOPs for all cameras were again taken from the individual camera calibrations in the previous subsection and were kept as fixed constants
(4) The proposed bundle adjustment with a reference camera and built-in ROCs in self-calibrating mode

Note that these four system calibration scenarios correspond to the types of system calibrations described in Section 2.3 and listed in Table 1. The results from the CMP estimation test are summarized in Table 3. As previously explained, the former two approaches require two steps for the CMP estimation. For this system, there were many observation epochs in which the reference camera did not observe the test field, and the majority of the cameras did not have any overlap with the reference camera in any of the observation epochs; in fact, the test field was only seen by two to four cameras at a time. Thus, none of the CMPs could be derived for all observation epochs, and the CMPs of some cameras could only be derived through a daisy chain with one or more intermediate cameras. Some sort of weighted averaging or an additional higher-level adjustment would have to be performed in order to avoid an arbitrary solution, that is, to achieve a proper CMP estimation. Such a solution was, however, not implemented for this paper.

Table 3: Results for the CMP estimation test.

The latter two approaches, as previously explained, were able to solve for the CMPs in one step. It should be emphasized that while the method using precalibrated cameras seemed to yield better CMP standard deviations, especially for the rotational CMPs, the method with self-calibration is still the preferred one. This is because the system consists of eight cameras, and it is not practical to run nine different adjustments in order to solve for all the IOPs and CMPs. Also, if the cameras were truly precalibrated (i.e., they were calibrated in a different location prior to being installed in the structures laboratory), there would be no guarantee that the previously estimated IOPs would still be valid.

In addition to analyzing the standard deviations of the estimated CMPs, the two one-step system calibration approaches can also be compared using a method for system stability analysis referred to as “object space parallax in image space units” [26]. While the issue here is not a matter of actual (in)stability, this system stability analysis tool can be used to check the compatibility between the two different system calibration approaches. The method provides a numerical measure of the differences between two sets of calibration parameters based on a number of image and object space simulations. The results are shown in Table 4. Note that Habib et al. [26] output an image space RMSE only, while here an object space RMSE is also shown. The object space RMSE is computed by scaling the image space RMSE by the ratio of the object-to-camera depth over the average value of the principal distance. Since the object-to-camera depth could vary, reasonable “near field” and “far field” values are picked and a range of object space RMSEs is reported. The total image space RMSE value for any camera pair was one pixel or less, while the total object space RMSE value ranged from 0.11 mm to 0.44 mm depending on the camera pair and the camera-to-object distance. Since the 3D reconstruction for the beam in this research initiative was based on pixel level image matching, and the sought-after object space precision was 0.5 mm, the compatibility between the two types of system calibration solutions was considered satisfactory.
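The depth-scaling step described above can be sketched as follows. The pixel pitch and the sample values in the test are placeholders for illustration, not the actual parameters of the cameras used here.

```python
def object_space_rmse(rmse_px, pixel_pitch_mm, depth_mm, principal_distance_mm):
    """Scale an image space RMSE (in pixels) to object space (in mm)
    by the ratio of the object-to-camera depth to the principal
    distance, after converting pixels to mm via the pixel pitch."""
    rmse_img_mm = rmse_px * pixel_pitch_mm
    return rmse_img_mm * depth_mm / principal_distance_mm
```

Evaluating this at a "near field" and a "far field" depth yields the reported range of object space RMSE values.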

Table 4: Check for the image and object space compatibility between the two system calibration approaches (i.e., calibration scenarios three and four).

4.3. Individual versus Multisystem Calibration Tests

Additional system calibration data sets were acquired during a multiday beam deflection experiment. Each calibration data set was collected before any actual deflection observations were made on that particular day. Three calibration data sets, referred to as Day 1, Day 2, and Day 3, were processed in two different ways:
(i) As three individual multicamera system calibrations, where each calibration data set was treated as a separate system
(ii) As a multicamera multisystem calibration, where the three calibration data sets were combined in a single adjustment, but the data sets from the different days were treated as separate systems

The results from the individual system calibrations can be seen under the Day 1, Day 2, and Day 3 columns of Table 5. The proposed multicamera system calibration was applied, where the IOPs and CMPs for all the cameras in the system were estimated simultaneously. Since the network geometry of the different data sets was kept consistent, the standard deviations for the estimated parameters had similar ranges. The only noticeable differences in the adjustment quantities were in the total number of image points. This was due to the slight difference in the number of observation epochs and thus the number of images.

Table 5: Results for the individual versus multisystem calibration test.

The result from the multisystem calibration can be seen under the Days 1 + 2 + 3 column of Table 5. Note that a “per system normalization” was applied to some of the adjustment quantities in the last column of the table, that is, some of the quantities were divided by the number of systems in order to make the comparisons more objective. It should also be noted that the object space coordinates of the portable test field were what tied the three systems together in this adjustment. Thus, the assumption for this type of solution was that the test field did not deform during the multiday experiment. In the individual versus multicamera calibration test in Section 4.1, there was an improvement in both the network geometry and the redundancy of the bundle adjustment. Since the network geometry between the different systems was similar, in this individual versus multisystem calibration test, the improvement was only in the redundancy. Due to the greater redundancy, the precision values for the IOPs and CMPs would be expected to improve for the Days 1 + 2 + 3 calibration compared to the calibrations for the individual days. According to Table 5, however, this was not the case. One possible explanation is that the test field experienced some level of deformation during the multiday experiment. Nevertheless, the multisystem calibration is still a practical option, as multiple data sets can be processed in the same adjustment (i.e., one adjustment run instead of three in this case). Also, since all the data sets share the same object space, a potentially more realistic system stability analysis can be performed.
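The redundancy argument can be illustrated with a rough bookkeeping sketch. This is only a simplified model under stated assumptions (two observations per image point; 6 EOPs per observation epoch; 6 CMPs per nonreference camera per system; a hypothetical 7-parameter IOP set per camera per system when self-calibrating); the actual adjustment's unknown count may differ.

```python
def redundancy(n_img_pts, n_cams, n_epochs, n_obj_pts,
               n_systems=1, iop_per_cam=7, self_calibrating=True):
    """Approximate redundancy of the constrained bundle adjustment:
    observations minus unknowns under the simplifying assumptions
    stated in the lead-in."""
    observations = 2 * n_img_pts                 # x and y per image point
    unknowns = 3 * n_obj_pts                     # object point coordinates
    unknowns += 6 * n_epochs                     # EOPs per epoch
    unknowns += n_systems * 6 * (n_cams - 1)     # CMPs per system
    if self_calibrating:
        unknowns += n_systems * iop_per_cam * n_cams
    return observations - unknowns
```

Under this model, merging data sets that share the same object points grows the observation count faster than the unknown count, which is the source of the expected (though, per Table 5, not observed) precision gain.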

In addition to analyzing the adjustment quantities from the two system calibration approaches, the results from the individual versus multisystem calibrations can be compared again via a system stability analysis method. More specifically, the Day 1 individual versus Day 1 multisystem, Day 2 individual versus Day 2 multisystem, and Day 3 individual versus Day 3 multisystem comparisons are listed in Table 6. The total image space RMSE value for any camera pair was under one pixel, while the total object space RMSE ranged from 0.05 mm to 0.44 mm. Again, the compatibility between the two types of system calibration solutions was considered acceptable for the work in this research initiative.

Table 6: Check for image and object space compatibility between the individual versus multisystem calibration approaches.

5. Conclusions

A mathematical model for a more straightforward-to-implement multicamera system calibration was described in this paper. The model was based on the use of a single or multiple reference camera(s) and built-in relative orientation constraints, where the IOPs and the CMPs for all the cameras were explicitly estimated simultaneously in a single step. The complexity of the system calibration model is not affected by the number of cameras in the system or the number of observation epochs. The implemented adjustment for this model was able to handle an individual camera calibration, a calibration of multiple nonconstrained cameras, an individual multicamera system calibration, and a multicamera multisystem calibration. Moreover, an in situ multicamera system calibration routine was carried out, where a newly designed portable calibration test field was used. A notable feature of the test field was that it had a piece cut out of it in order for it to fit around otherwise obstructing objects. This routine fulfilled the requirements for a successful calibration, that is, it ensured that a suitable network geometry was present and that the usable image format for all cameras involved in the system had an even distribution of targets. It also showed that the multicamera system or multicamera multisystem calibration yielded the most practical and the strongest adjustment solutions due to the increased number of observations, the reduced number of unknowns, and the improved network geometry.

While the precision achieved by the system was considered sufficient for the 3D reconstruction in this research work, there are a few aspects of the calibration that can be improved:
(i) Explore different AP models in order to further optimize the precision of the results.
(ii) Investigate whether the image coordinate measurement precision differs between cameras or images.
(iii) Experiment with modifying the test field in order to turn it from a 2D one into a 3D one, so as to reduce the level of projective compensation or high correlations between and within the interior and exterior orientation parameters, and also to make it less prone to any deformations.
(iv) Perform RMSE analysis for the 3D object space coordinates of the target points or distances between them; the ground truth values should be provided by a coordinate measuring machine or laser interferometry, that is, techniques which can achieve accuracy at the micron level.
(v) Test whether the calibration procedure can be run on a divergent multicamera system; this is important as some multicamera systems are being set up with divergent geometry due to a preference for coverage over network strength.
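For item (iv), the RMSE analysis against ground-truth coordinates could be computed along these lines; `rmse_3d` is a hypothetical helper for illustration, not part of the described software.

```python
import numpy as np

def rmse_3d(estimated, ground_truth):
    """Per-axis and total RMSE between estimated and reference 3D
    target coordinates, each given as an (n, 3) array."""
    d = np.asarray(estimated, dtype=float) - np.asarray(ground_truth, dtype=float)
    per_axis = np.sqrt(np.mean(d ** 2, axis=0))          # RMSE in X, Y, Z
    total = np.sqrt(np.mean(np.sum(d ** 2, axis=1)))     # 3D point RMSE
    return per_axis, total
```

The same function applied to inter-target distances instead of coordinates would make the check independent of the datum definition.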

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

The authors would like to thank Jeremy Steward and Dr. Hervé Lahamy for their help during the data collection and Dr. Zahra Lari for her development of the coded targets. In addition, the authors are grateful to Dr. Mamdouh El-Badry, his students, and the civil engineering support staff for the opportunity to work in the structures laboratory at the University of Calgary. Portions of this paper come from the doctoral dissertation of the first author [37].

References

  1. F. Remondino, “3-D reconstruction of static human body shape from image sequence,” Computer Vision and Image Understanding, vol. 93, no. 1, pp. 65–85, 2004.
  2. N. D’Apuzzo, “Surface measurement and tracking of human body parts from multi-image video sequences,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 56, no. 5-6, pp. 360–375, 2002.
  3. C. Ellum and N. El-Sheimy, “Land-based mobile mapping systems,” Photogrammetric Engineering & Remote Sensing, vol. 68, no. 1, pp. 13–28, 2002.
  4. J. Y. Rau, A. F. Habib, A. P. Kersting et al., “Direct sensor orientation of a land-based mobile mapping system,” Sensors, vol. 11, no. 12, pp. 7243–7261, 2011.
  5. F. Remondino, S. F. El-Hakim, A. Gruen, and L. Zhang, “Turning images into 3-D models,” IEEE Signal Processing Magazine, vol. 25, no. 4, pp. 55–65, 2008.
  6. F. Remondino and S. El-Hakim, “Image-based 3d modelling: a review,” The Photogrammetric Record, vol. 21, no. 115, pp. 269–291, 2006.
  7. I. Detchev, A. Habib, and Y. C. Chang, “Image matching and surface registration for 3d reconstruction of a scoliotic torso,” Geomatica, vol. 65, no. 2, pp. 175–187, 2011.
  8. I. Detchev, A. Habib, and M. El-Badry, “Dynamic beam deformation measurements with off-the-shelf digital cameras,” Journal of Applied Geodesy, vol. 7, no. 3, pp. 147–157, 2013.
  9. E. Kwak, I. Detchev, A. Habib, M. El-Badry, and C. Hughes, “Precise photogrammetric reconstruction using model-based image fitting for 3d beam deformation monitoring,” Journal of Surveying Engineering, vol. 139, no. 3, pp. 143–155, 2013.
  10. A. M. Tommaselli, M. Galo, M. V. De Moraes, J. Marcato, C. R. Caldeira, and R. F. Lopes, “Generating virtual images from oblique frames,” Remote Sensing, vol. 5, no. 4, pp. 1875–1893, 2013.
  11. C. S. Fraser, “Digital camera self-calibration,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 52, no. 4, pp. 149–159, 1997.
  12. A. F. Habib and M. F. Morgan, “Automatic calibration of low-cost digital cameras,” Optical Engineering, vol. 42, no. 4, pp. 948–955, 2003.
  13. D. C. Brown, “Decentering distortion of lenses,” Photogrammetric Engineering & Remote Sensing, vol. 32, no. 3, pp. 444–462, 1966.
  14. D. C. Brown, “Close-range camera calibration,” Photogrammetric Engineering & Remote Sensing, vol. 37, no. 8, pp. 855–866, 1971.
  15. J. F. Kenefick, M. S. Gyer, and B. F. Harp, “Analytical self-calibration,” Photogrammetric Engineering & Remote Sensing, vol. 38, no. 11, pp. 1117–1126, 1972.
  16. S. I. Granshaw, “Bundle adjustment methods in engineering photogrammetry,” The Photogrammetric Record, vol. 10, no. 56, pp. 181–207, 1980.
  17. T. A. Clarke and J. G. Fryer, “The development of camera calibration methods and models,” The Photogrammetric Record, vol. 16, no. 91, pp. 51–66, 1998.
  18. F. Remondino and C. Fraser, “Digital camera calibration methods: considerations and comparisons,” in ISPRS Archives, vol. XXXVI-5, pp. 266–272, Dresden, Germany, 2006.
  19. J. H. Chandler, J. G. Fryer, and A. Jack, “Metric capabilities of low-cost digital cameras for close range surface measurement,” The Photogrammetric Record, vol. 20, no. 109, pp. 12–26, 2005.
  20. C. S. Fraser, “Automatic camera calibration in close range photogrammetry,” Photogrammetric Engineering & Remote Sensing, vol. 79, no. 4, pp. 381–388, 2013.
  21. I. Detchev, M. Mazaheri, S. Rondeel, and A. Habib, “Calibration of multi-camera photogrammetric systems,” in ISPRS Archives, vol. XL-1, pp. 101–108, Denver, Colorado, 2014.
  22. E. Harvey and M. Shortis, “A system for stereo-video measurement of sub-tidal organisms,” Marine Technology Society Journal, vol. 29, no. 4, pp. 10–22, 1996.
  23. E. S. Harvey and M. R. Shortis, “Calibration stability of an underwater stereo-video system: implications for measurement accuracy and precision,” Marine Technology Society Journal, vol. 32, no. 2, pp. 3–17, 1998.
  24. M. R. Shortis and E. S. Harvey, “Design and calibration of an underwater stereo-video system for the monitoring of marine fauna populations,” in ISPRS Archives, vol. XXXII-5, pp. 792–799, Hakodate, Japan, 1998.
  25. M. R. Shortis, S. Miller, E. S. Harvey, and S. Robson, “An analysis of the calibration stability and measurement accuracy of an underwater stereo-video system used for shell surveys,” Geomatics Research Australasia, vol. 73, pp. 1–24, 2000.
  26. A. Habib, I. Detchev, and E. Kwak, “Stability analysis for a multi-camera photogrammetric system,” Sensors, vol. 14, no. 8, pp. 15084–15112, 2014.
  27. R. Hartley, J. Trumpf, Y. Dai, and H. Li, “Rotation averaging,” International Journal of Computer Vision, vol. 103, no. 3, pp. 267–305, 2013.
  28. G. He, K. Novak, and W. Feng, “Stereo camera system calibration with relative orientation constraints,” in SPIE Videometrics, vol. 1820, pp. 2–8, 1993.
  29. B. King, “Methods for the photogrammetric adjustment of bundles of constrained stereopairs,” in ISPRS Archives, vol. XXX, pp. 473–480, 1994.
  30. B. King, “Bundle adjustment of constrained stereo pairs - mathematical models,” Geomatics Research Australasia, vol. 63, pp. 67–91, 1995.
  31. J. L. Lerma, S. Navarro, M. Cabrelles, and A. E. Seguí, “Camera calibration with baseline distance constraints,” The Photogrammetric Record, vol. 25, no. 130, pp. 140–158, 2010.
  32. D. D. Lichti, G. B. Sharma, G. Kuntze, B. Mund, J. E. Beveridge, and J. L. Ronsky, “Rigorous geometric self-calibrating bundle adjustment for a dual fluoroscopic imaging system,” IEEE Transactions on Medical Imaging, vol. 34, no. 2, pp. 589–598, 2015.
  33. S. Zheng, R. Huang, B. Guo, and K. Hu, “Stereo-camera calibration with restrictive constraints,” Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, vol. 41, no. 6, pp. 877–885, 2012.
  34. Canon Inc., EOS Rebel XS/EOS 1000D Instruction Manual, 2008.
  35. OpenCV.org, “Feature detection—OpenCV 2.4.13.0 documentation,” 2014, http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html.
  36. C. S. Fraser, M. R. Shortis, and G. Ganci, “Multisensor system self-calibration,” in SPIE Videometrics IV, vol. 2598, pp. 2–18, 1995.
  37. I. Detchev, “Image-based Fine-scale Infrastructure Monitoring,” PhD dissertation, UCGE report #20474, Department of Geomatics Engineering, University of Calgary, Calgary, Alberta, Canada, 2016.