Journal of Sensors

Volume 2018, Article ID 5351863, 12 pages

https://doi.org/10.1155/2018/5351863

## Practical In Situ Implementation of a Multicamera Multisystem Calibration

^{1}Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB, Canada T2N 1N4

^{2}Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Dr, West Lafayette, IN 47907-2051, USA

Correspondence should be addressed to Ivan Detchev; i.detchev@ucalgary.ca

Received 24 May 2017; Revised 4 October 2017; Accepted 31 October 2017; Published 7 February 2018

Academic Editor: Marco Scaioni

Copyright © 2018 Ivan Detchev et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Consumer-grade cameras are generally low-cost and available off-the-shelf, so multicamera photogrammetric systems for 3D reconstruction are both financially feasible and practical. Such systems can be deployed in many different types of applications: for example, infrastructure health monitoring, cultural heritage documentation, biomedicine, as-built surveys, and indoor or outdoor mobile mapping. A geometric system calibration is usually necessary before a data acquisition mission in order for the results to have optimal accuracy. A typical system calibration must address the estimation of both the interior and the exterior, or relative, orientation parameters for each camera in the system. This article reviews different ways of performing a calibration of a photogrammetric system consisting of multiple cameras. It then proposes a methodology for the simultaneous estimation of both the interior and the relative orientation parameters which can work in several different types of scenarios, including a multicamera multisystem calibration. A rigorous in situ system calibration was successfully implemented and tested. The same algorithm is able to handle the equivalent of a traditional-style bundle adjustment, that is, a network solution without constraints, for single- or multicamera calibrations, as well as the proposed bundle adjustment with built-in relative orientation constraints for the calibration of one or multiple systems of cameras.

#### 1. Introduction

A photogrammetric system consists of one or multiple digital cameras. In the case of a single camera [1], it would have to be moving or sequentially occupying varying camera stations. This scenario would only work for objects that remain shape-invariant throughout the image recording session due to the time lapse between the varying camera station exposures. Preferably, an array or a cluster of cameras should be used in scenarios where the shape of the object of interest may be changing with time [2]. The availability of inexpensive digital cameras has made the use of such multisensor systems more and more common. Their employment in mobile mapping applications [3, 4], dense matching of imagery [5, 6], biomedical and motion-capture metric applications [2, 7], infrastructure health monitoring [8, 9], and the generation of photo scenes from multiple sensors [10] has become a frequent occurrence.

Sensor calibration is known to be a critical quality assurance measure to maximize photogrammetric accuracy. This is even more so in the case of a multicamera system. A correct system calibration is essential for the accurate reconstruction of 3D object space required in photogrammetric applications. For this purpose, a mathematical model based on built-in relative orientation constraints (ROCs) is reviewed and further improved by modifying it to handle both single and multiple reference camera(s).

The next section summarizes the different types of system calibrations depending on whether the cameras are precalibrated or calibrated in situ and whether the estimation of the relative orientation parameters is performed in a two- or a one-step procedure. The preferred system calibration method for the simultaneous estimation of all system calibration parameters is proposed and tested.

#### 2. System Calibration Methodology

The geometric calibration of a system comprising multiple cameras has two components: a camera calibration of each camera in the system and an estimation of the position and orientation of the cameras involved in the system with respect to a reference camera. The next subsections discuss various options for accomplishing the estimation of a system calibration.

##### 2.1. Solution for the Interior Orientation Parameters

The camera calibration necessitates estimating the interior orientation parameters (IOPs) of each camera, which include the principal distance, the principal point offset, and any necessary distortion or additional parameters. This has been heavily addressed in the photogrammetric literature. For consumer-grade digital cameras that can be purchased off-the-shelf, the preferred procedure for estimating the IOPs is a bundle adjustment with self-calibration [11, 12]. The distortion models are found in Brown [13] and Brown [14], while the analytical basis for the adjustment is published in Kenefick et al. [15] and Granshaw [16]. Clarke and Fryer [17] and Remondino and Fraser [18] include recommendations on how to carry out the calibration procedure for a single camera. In addition, Chandler et al. [19] and Fraser [20] show examples for the calibration of digital cameras that are specifically low-cost/off-the-shelf.
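To make the distortion parameters mentioned above concrete, the following sketch applies a Brown-style correction with two radial (k1, k2) and two decentering (p1, p2) coefficients. This coefficient set is a common illustrative choice; the exact additional-parameter set used in any given calibration may differ.

```python
import numpy as np

def brown_correction(x_obs, y_obs, xp, yp, k=(0.0, 0.0), p=(0.0, 0.0)):
    """Remove radial (k1, k2) and decentering (p1, p2) lens distortion.

    Returns distortion-free image coordinates reduced to the principal
    point (xp, yp). Coefficient names follow Brown's model; this is an
    illustrative parameterization, not necessarily the paper's exact one.
    """
    x_bar = x_obs - xp              # coordinates relative to principal point
    y_bar = y_obs - yp
    r2 = x_bar**2 + y_bar**2        # squared radial distance
    radial = k[0] * r2 + k[1] * r2**2
    dx = x_bar * radial + p[0] * (r2 + 2 * x_bar**2) + 2 * p[1] * x_bar * y_bar
    dy = y_bar * radial + p[1] * (r2 + 2 * y_bar**2) + 2 * p[0] * x_bar * y_bar
    return x_bar - dx, y_bar - dy
```

With all coefficients at zero, the function simply reduces the observed coordinates to the principal point.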

The calibration process can be performed in a specialized laboratory or on the job (i.e., in situ). Which type of calibration should be chosen depends on the project specifics. In the case of a photogrammetric system comprising a few cameras (e.g., two to three), the calibration process can successfully be carried out individually for each camera before the commencement of any data collection. Yet, if many cameras are involved in the system in question (e.g., four or more), precalibrating each one of them individually might be too time-consuming. Additionally, it may not be desirable to dismount and then remount the cameras from the system platform every time they need to be recalibrated. In such circumstances, performing the camera calibration on the job or in situ may be more practical and/or feasible. The challenge of such an in situ system calibration is the first-order network design problem, that is, having a network geometry that will produce a solution for the unknown parameters with an acceptable variance-covariance matrix. For instance, a sufficient number of target points with a well-distributed spread within the image format must be present for each camera. At the same time, multistation convergent images must be taken such that isotropic coordinate precision for the object space reconstruction is achieved. For a stationary camera system, this network configuration can be emulated by conducting numerous translations and rotations of a portable test field within the field of view of the cameras in the system [21–25]. It is worth clarifying that even though the camera system is physically stationary and the test field is the one translated and rotated, the adjustment is handled inversely, as if the test field were kept stationary and the camera system were the one moving. In this way, for each instance of translation and rotation, the number of exterior orientation parameters (EOPs) added to the adjustment is smaller than the number of 3D coordinates for the object space target points, and the total number of unknowns in the adjustment is thus minimized. Note that ideally this portable test field should be 3D so that any projective compensation or high correlations within and between the IOPs and EOPs can be decoupled [15, 22–25].
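The bookkeeping behind this inverse treatment can be made concrete with a small counting sketch. The 6-parameter EOP set (three positions, three rotations) and a rigid test field are assumptions of the illustration.

```python
def added_unknowns_per_epoch(n_targets):
    """Unknowns added by one placement (epoch) of the portable test field.

    Inverse treatment: the field is held fixed in the adjustment and the
    physically stationary camera rig is modelled as moving, so each epoch
    adds only one set of 6 EOPs. Re-estimating the field instead would add
    3 coordinates (X, Y, Z) per target for every placement.
    Counts assume a rigid test field and a 6-parameter EOP set.
    """
    eop_unknowns = 6                # 3 positions + 3 rotations per epoch
    point_unknowns = 3 * n_targets  # what re-estimating the field would cost
    return eop_unknowns, point_unknowns
```

For example, with a 30-target field, each placement adds 6 unknowns rather than 90, which is why the inverse treatment keeps the adjustment small.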

##### 2.2. Solution for the Camera Mounting Parameters

Before the beginning of a data collection campaign, the position and the orientation of the cameras in the system must be estimated in addition to the IOPs. This can be done with respect to a reference camera or some other type of a reference frame (e.g., an IMU body frame). These parameters are referred to as the relative orientation parameters (ROPs) or the camera mounting parameters (CMPs), that is, the parameters describing how each camera is attached to the system platform. Assuming that the CMPs are defined relative to a reference camera, they consist of positional, $r_c^{c_r}$, and rotational, $R_c^{c_r}$, offsets between each camera, $c$, and the reference camera, $c_r$. These components can also be referred to as the lever arm (baseline) and (angular) boresight, respectively. The estimation of the CMPs can be done in a two-step or a one-step process [26]. Both types of processes are reviewed in the next two subsections.

###### 2.2.1. Two-Step CMP Estimation

The first step in the two-step procedure for providing a solution for the CMPs is estimating the EOPs for each of the cameras in the system. A traditional-style bundle adjustment, that is, a network solution based on the collinearity equations and without any constraints, is normally used for the purpose of completing this first step (see (1) and (2)). Note that time dependency is assumed here in order to have a general model, that is, a model that can handle both stationary and moving sensors or objects.

$$r_I^m = r_c^m(t) + \lambda(i, I, c, t)\, R_c^m(t)\, r_i^c \tag{1}$$

where $r_I^m$ contains the coordinates of object space point, $I$, with respect to the mapping frame $m$; $r_c^m(t)$ and $R_c^m(t)$ are the time-dependent positional and rotational parameters or the EOPs of camera $c$ with respect to the mapping frame at time $t$; $\lambda(i, I, c, t)$ is the image to object space scale; and the expression $r_i^c$ contains the distortion-free coordinates of image space point, $i$, or the distortion-free projection of point $I$ in the frame of camera $c$:

$$r_i^c = \begin{bmatrix} x_i - x_p - \mathrm{dist}_x \\ y_i - y_p - \mathrm{dist}_y \\ -c \end{bmatrix} \tag{2}$$

where $(x_i, y_i)$ are the observed or distorted image coordinates for point $i$; $(x_p, y_p)$ is the principal point offset; $c$ is the principal distance; and $(\mathrm{dist}_x, \mathrm{dist}_y)$ are the image space distortions for point $i$.
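A minimal sketch of the collinearity projection used in this first step, assuming an omega-phi-kappa rotation parameterization and omitting the lens distortion terms for brevity:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Camera-to-mapping rotation from omega-phi-kappa angles (radians)."""
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(phi), 0.0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0, 0.0, 1.0]])
    return Rx @ Ry @ Rz

def project(point_m, cam_pos_m, R_c_m, principal_distance, xp=0.0, yp=0.0):
    """Collinearity projection of an object point into one camera.

    point_m: object point in the mapping frame; cam_pos_m, R_c_m: the
    camera's EOPs. Lens distortions are omitted in this sketch.
    """
    # Transform the object point into the camera frame.
    p_c = R_c_m.T @ (np.asarray(point_m) - np.asarray(cam_pos_m))
    # Collinearity condition: perspective division by the depth component.
    x = xp - principal_distance * p_c[0] / p_c[2]
    y = yp - principal_distance * p_c[1] / p_c[2]
    return x, y
```

In the bundle adjustment, equations of this form are linearized and solved iteratively for the EOPs (and IOPs); the sketch only shows the forward model.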

The second step in this process uses the estimated EOPs at time $t$ to compute the CMPs using (3) and (4) [4]:

$$r_c^{c_r}(t) = R_m^{c_r}(t)\left(r_c^m(t) - r_{c_r}^m(t)\right) \tag{3}$$

where $r_c^{c_r}(t)$ is the time-dependent 3D lever arm/positional offset or translation between camera $c$ and the reference camera $c_r$; $r_{c_r}^m(t)$ and $R_{c_r}^m(t)$ are the time-dependent positional and rotational parameters or the EOPs of the reference camera with respect to the mapping frame $m$; and $r_c^m(t)$ is the time-dependent positional component of the EOPs of camera $c$.

$$R_c^{c_r}(t) = R_m^{c_r}(t)\, R_c^m(t) \tag{4}$$

where $R_c^{c_r}(t)$ is the time-dependent 3D boresight/rotational offset between camera $c$ and the reference camera $c_r$, which is a function of $R_m^{c_r}(t) = \left(R_{c_r}^m(t)\right)^{\mathsf{T}}$ and $R_c^m(t)$, and $R_c^m(t)$ is the time-dependent rotational component of the EOPs of camera $c$.
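This second step amounts to a few lines of linear algebra. A sketch, assuming the rotational EOPs are passed in directly as rotation matrices:

```python
import numpy as np

def mounting_parameters(r_c, R_c, r_ref, R_ref):
    """Lever arm and boresight of camera c relative to the reference camera.

    r_c, R_c     : position and camera-to-mapping rotation of camera c
                   at one epoch.
    r_ref, R_ref : the reference camera's EOPs at the same epoch.
    Computes the lever arm as R_m^{cr} (r_c^m - r_cr^m) and the
    boresight as R_m^{cr} R_c^m, with R_m^{cr} = (R_cr^m)^T.
    """
    R_m_cr = R_ref.T                                     # mapping-to-reference rotation
    lever_arm = R_m_cr @ (np.asarray(r_c) - np.asarray(r_ref))
    boresight = R_m_cr @ R_c
    return lever_arm, boresight
```

Repeating this per epoch yields the redundant sets of time-dependent CMPs discussed next.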

If the EOPs are estimated in a single observation epoch, there would accordingly be a single set of computed CMPs. If the EOPs are, however, estimated in two or more observation epochs, the resultant redundant sets of time-dependent CMPs can be averaged and their standard deviations can be calculated [4, 22–25]. Note, however, that rotation averaging should not be performed with Euler angles [27]. The point of averaging is to compute the best estimate of a random variable while minimizing the sum of squared errors. Averaging Euler angles, that is, sequential rotational parameters, does not minimize a meaningful cost function. Instead, for sound rotation averaging, quaternions or the angle-axis representation must be used, as they can minimize several cost functions, which are listed in Hartley et al. [27]. In addition, the rotational and positional parameters are often correlated. It should also be highlighted that if the reference camera does not observe the test field in certain observation epochs, these observation epochs cannot contribute to the estimation of the CMPs. Moreover, if the field of view of a particular camera does not overlap with the one for the reference camera (i.e., the two cameras cannot observe the test field simultaneously in any of the observation epochs), the CMPs of the camera in question cannot be directly estimated as in (3) and (4). A work-around procedure exploiting the overlap with other cameras in the system must then be implemented.
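As one sound quaternion-based option for this averaging step, the eigenvector method minimizes the sum of squared chordal distances, a meaningful cost function, unlike a naive average of Euler angles. This specific method is an illustrative choice among those surveyed in the rotation-averaging literature.

```python
import numpy as np

def average_rotations(quats):
    """Average unit quaternions via the eigenvector method.

    Builds the 4x4 accumulator matrix M = sum(q q^T); its dominant
    eigenvector is the quaternion minimizing the sum of squared chordal
    distances. Because q and -q enter M identically, the method is
    immune to quaternion sign ambiguity.
    """
    Q = np.asarray(quats, dtype=float)
    Q /= np.linalg.norm(Q, axis=1, keepdims=True)  # enforce unit norm
    M = Q.T @ Q                                    # sign-invariant accumulator
    eigvals, eigvecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    return eigvecs[:, -1]                          # dominant eigenvector
```

Note that even two quaternions of opposite sign (the same physical rotation) average correctly, which a componentwise mean would not do.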

Using constraint equations for the EOPs in the network solution in order to enforce an invariant geometrical relationship between the cameras at different times [10, 28–33] may mitigate some of the mentioned problems. The benefit of using EOP constraint equations in the bundle adjustment is that it is not necessary to perform any averaging. This is the case since, no matter which observation epoch is used, the same values for the CMPs would be computed with (3) and (4). The disadvantages of this method are that it is still technically a two-step process, a work-around procedure is necessary for computing the CMPs of any cameras that do not overlap with the reference camera in any of the observation epochs, and the complexity of the implementation procedure intensifies with the increase of the number of cameras in the system and the number of observation epochs [4]. Given these drawbacks, especially the complexity consideration, this type of network solution is not implemented as part of this research.

###### 2.2.2. One-Step CMP Estimation

A one-step procedure for estimating the CMPs is desired in order to avoid a separate estimation step for the CMPs and potential work-around procedures for addressing situations where a camera does not have any overlap with the reference camera in any observation epoch. This can be accomplished by directly incorporating ROCs among all cameras and the reference camera in the collinearity-equation-based network solution [4, 29, 30].

The CMPs, $r_c^{c_r}$ and $R_c^{c_r}$, in (5) are now explicitly treated as time-independent parameters. In other words, it is assumed that the CMPs for the system of multiple cameras remain stable during all observation epochs within a given system calibration campaign. Also, the EOPs of the reference camera, $r_{c_r}^m(t)$ and $R_{c_r}^m(t)$, now represent the EOPs of the system platform.

$$r_I^m = r_{c_r}^m(t) + R_{c_r}^m(t)\, r_c^{c_r} + \lambda(i, I, c, t)\, R_{c_r}^m(t)\, R_c^{c_r}\, r_i^c \tag{5}$$

This model is relatively straightforward to implement, as it preserves its simplicity regardless of the number of cameras employed or the number of observation epochs used. As the number of cameras and the number of observation epochs increases, it significantly reduces the number of unknowns to solve for compared to the traditional-style bundle adjustment. It should be noted that when the observation equations for the reference camera, $c_r$, are established, (5) reduces to (6), because the lever arm vector is set to zero, $r_{c_r}^{c_r} = \mathbf{0}$, and the boresight rotation matrix is set to identity, $R_{c_r}^{c_r} = \mathbf{I}$.

$$r_I^m = r_{c_r}^m(t) + \lambda(i, I, c_r, t)\, R_{c_r}^m(t)\, r_i^{c_r} \tag{6}$$

The difference between the mathematical models for the traditional-style bundle adjustment (1) and the one with built-in ROCs (5) is visually summarized in Figure 1.
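A sketch of how the ROC model composes each camera's pose from the platform (reference-camera) EOPs and its time-independent mounting parameters before projecting; distortions are again omitted, and the principal point is taken as zero for brevity:

```python
import numpy as np

def project_with_rocs(point_m, r_ref_m, R_ref_m, lever_arm, boresight,
                      principal_distance):
    """Image coordinates of an object point under the ROC model.

    The camera's pose is composed from the platform EOPs (r_ref_m,
    R_ref_m) and its time-independent CMPs (lever_arm, boresight), so
    only one set of platform EOPs is unknown per epoch regardless of
    the number of cameras.
    """
    # Compose the camera pose from platform EOPs and mounting parameters.
    r_c_m = np.asarray(r_ref_m) + R_ref_m @ np.asarray(lever_arm)
    R_c_m = R_ref_m @ boresight
    # Standard collinearity projection with the composed pose.
    p_c = R_c_m.T @ (np.asarray(point_m) - r_c_m)
    x = -principal_distance * p_c[0] / p_c[2]
    y = -principal_distance * p_c[1] / p_c[2]
    return x, y
```

For the reference camera itself, passing a zero lever arm and an identity boresight reproduces the unconstrained collinearity projection, mirroring the reduction described above.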