International Journal of Optics


Research Article | Open Access

Volume 2019 | Article ID 8561380 | 10 pages | https://doi.org/10.1155/2019/8561380

Geometric Evaluation of Mobile-Phone Camera Images for 3D Information

Academic Editor: E. Bernabeu
Received: 19 Mar 2019
Revised: 11 Aug 2019
Accepted: 07 Sep 2019
Published: 30 Sep 2019

Abstract

This study investigated the usability of smartphone camera images in 3D positioning applications with photogrammetric techniques. The investigation was performed in two stages. In the first stage, the cameras of five smartphones and a digital compact camera were calibrated using a calibration reference object with signalized points having known three-dimensional (3D) coordinates. In the calibration process, the self-calibration bundle adjustment method was used. To evaluate the metric performance of the cameras, geometric accuracy tests in the image and object spaces were performed and the test results were compared. In the second stage, a 3D mesh model of a historical cylindrical structure (height = 8 m and diameter = 5 m) was generated using the Structure-from-Motion and Multi-View Stereo (SfM-MVS) approach. The images were captured with the Galaxy S4 smartphone camera, which produced the best results among the smartphone cameras in the geometric accuracy tests. Accuracy tests were also applied to the generated 3D model to examine the 3D object reconstruction capabilities of imaging with this device. The results demonstrate that smartphone cameras can readily be used as image acquisition tools for many photogrammetric applications.

1. Introduction

In the early 1980s, the advent of digital cameras had a striking and positive impact on close-range photogrammetry, immediately expanding the scope of applications and providing facilities for full measurement automation. The development of digital cameras has provided a substantial acceleration in the processing steps for offline measurement tasks. In addition, subpixel image operators such as centering or template matching provided a level of image measurement accuracy that was routinely better than 0.1 pixel [1].

Digital cameras can be divided into two categories with respect to their metric properties: comparatively low-cost, low-resolution amateur digital cameras and comparatively expensive, high-resolution professional digital cameras. Professional cameras offer features such as good lens quality, a robust structure, a large sensor with high resolution and sensitivity, and interchangeable lenses, whereas amateur cameras may offer only some of these features. The main difference between amateur and professional cameras is the lower geometric stability of the former. Among amateur cameras, smartphone cameras are the most interesting option because mobile phones are light, portable, inexpensive, and equipped with high-resolution digital cameras [2, 3].

Camera calibration is a process required to extract accurate and reliable 3D information from images. Various algorithms have been proposed for camera calibration in photogrammetry and computer vision, generally based on the perspective camera model [3]. The self-calibration approach was initially developed by Brown [4] in the early 1970s and has since been routinely used as an efficient technique in photogrammetry [1].

Photogrammetric camera calibration has been the subject of many publications. These studies draw on the experience gained over the years in which digital cameras have been used for photogrammetric measurement, and they suggest particular parameter sets, assessments of parameter stability, camera configurations, and analysis techniques [5–7].

Mobile-phone cameras have been used in many research studies and commercial applications. One of the most prominent applications is the use of smartphones for recognizing the two-dimensional (2D) barcodes. Smartphone cameras are used for capturing and decoding the barcodes of devices. After decoding, an encoded URL, which automatically directs the users to the source website for further information, is obtained [8].

One of the first applications in the field of photogrammetry using mobile-phone cameras was presented in the work by Akca and Gruen [9]. They investigated the geometric and radiometric evaluation of the low-resolution mobile-phone cameras. Furthermore, Azhar and Ahmad [10] carried out the same tests for a low-resolution mobile-phone camera.

With the advent of high-resolution mobile-phone cameras in the 2010s, these devices have begun to be used as imaging tools in photogrammetric tasks. A few studies have focused on the use of smartphone camera images for pure photogrammetric processes, e.g., in the work of El-Ashmawy et al. [2]. The authors used mobile-phone camera images to photogrammetrically determine the displacements of signalized points on a beam under loading.

3D reconstruction procedures using smartphones have initiated research in this field [11–13]. Recently, image-based 3D reconstruction techniques combining computer vision and photogrammetric algorithms have become a robust and efficient solution [14]. The basic concept of these methods is to apply automatic image orientation by SfM followed by dense image matching. The measured point cloud is then converted into a triangular mesh or textured surface that represents the object surface shape. Depending on the computational load and the desired reliability level, the 3D reconstruction process can be performed directly on a mobile phone, on a cloud-based server, or on a PC using appropriate software. In the PC- and server-based solutions, the mobile phone is used only as an imaging device to capture images of the scene of interest. There is a wide variety of open-source solutions (VisualSFM, Bundler, etc.) and free web-based services (e.g., Photosynth, 123D Catch) implementing the SfM-MVS approach [15]. Although these methods provide automation and convenient services for data processing, they do not guarantee accuracy and robustness in the final results, as they lack a georeferencing process and spatial data creation. On the commercial side, many effective software packages have also emerged on the market (e.g., Pix4D, Agisoft PhotoScan), providing 3D reconstruction of objects from image data [14, 16]. In the literature, there are a few studies using these software packages to process smartphone camera images. In one of these studies, Kim et al. [17] investigated the possibilities of using smartphone cameras in photogrammetric UAV systems. In another, Micheletti et al. [18] explored the possibility of obtaining high-resolution topographic and terrain data from a set of low-resolution smartphone camera images; their solution was implemented on a smartphone, on a server, and with a commercial software package.

The geometric accuracy of high-resolution smartphone camera images and their 3D object reconstruction capabilities have not been sufficiently researched in the literature. Herein, the usability of smartphone cameras in photogrammetric applications has been investigated. For this purpose, geometric accuracy tests for five smartphone cameras and one compact camera were performed using a 3D reference field, followed by a comparison of the test results. In the second stage, a 3D model of a historical structure was reconstructed with the SfM-MVS approach using images from the smartphone camera that produced the best results in the geometric accuracy tests (Galaxy S4), and geometric accuracy tests were performed on the model.

In the next section, the process of geometric accuracy tests for the cameras is introduced. Subsequently, the details of 3D modeling performed using the images captured by Galaxy S4 are provided. Finally, the test results are summarized.

2. Geometric Performance Tests

2.1. Cameras

Today, users purchasing a smartphone can choose from a wide variety of brands and models. In our research, we studied five smartphones that were widely used during the period of the study and one digital compact camera. Table 1 lists the technical specifications of the compact camera and the smartphone cameras.


Table 1: Technical specifications of the cameras.

Camera             Sensor type  Sensor size (mm)  Pixel size (μm)  Image format          Focal length (mm)
Apple iPhone 5     BSI-CMOS     4.54 × 3.42       1.40             3264 × 2448 (8 MP)    4
Nokia C7           CMOS         4.56 × 3.41       1.40             3264 × 2448 (8 MP)    4
Samsung Galaxy S3  BSI-CMOS     4.56 × 3.41       1.40             3264 × 2448 (8 MP)    4
Samsung Galaxy S4  BSI-CMOS     4.69 × 3.52       1.14             4128 × 3096 (13 MP)   4
Sony Xperia S      BSI-CMOS     4.80 × 3.60       1.20             4000 × 3000 (12 MP)   4
Canon IXUS 960 IS  CCD          7.44 × 5.58       1.85             4000 × 3000 (12 MP)   8–28

2.2. Comparison of Cameras

In total, 25 images of the calibration reference object with 80 targets comprising a white dot on a black background (Figure 1) from approximately the same locations were captured using the compact camera and each smartphone camera. 3D coordinates of the marked points installed on a transparent glass plate (60 × 60 cm) at different heights were measured previously with high accuracy. The images were taken from a distance of ∼70 cm. To avoid correlations between the parameters of interior orientation (IO), exterior orientation, and the coordinates of the object point, the cameras of the smartphones were rotated 90° to the left and right around the optical axis in eight positions while capturing the images.

For photogrammetric evaluation, the Australis photogrammetric software package (version 6.06; Photometrix, 2012) was used. It can perform least-squares adjustment of photogrammetric bundles with photogrammetric-only data; alternatively, it uses combined adjustments with either known camera parameters or self-calibration. The self-calibrating bundle adjustment method used herein is the most versatile and accurate photogrammetric positioning and calibration method. The mathematical model of this method is based on the following collinearity condition, which is implicit in the perspective transformation between the image and object spaces [19, 20]:

x = x0 − c · (U / W) + Δx
y = y0 − c · (V / W) + Δy

with

(U, V, W)ᵀ = R · (X − X0, Y − Y0, Z − Z0)ᵀ

where x and y are the image coordinates of the point; x0, y0, and c are the IO parameters; X, Y, and Z are the object coordinates of the point; X0, Y0, and Z0 are the object coordinates of the perspective center; R is the orthogonal rotation matrix built from the three rotation angles (ω, φ, and κ) of the camera; and Δx and Δy are functions of a set of additional parameters (APs) that account for the departures from collinearity due to lens distortion and focal plane distortions [19, 20].
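In code, the collinearity projection can be sketched as follows. This is a minimal sketch: the ω–φ–κ rotation order and sign conventions are assumptions on our part, since packages such as Australis each use their own conventions.

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Orthogonal rotation matrix R from angles omega, phi, kappa (radians).
    The axis order used here is an assumption; conventions differ by package."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,                cp * sk,                -sp],
        [so * sp * ck - co * sk, so * sp * sk + co * ck, so * cp],
        [co * sp * ck + so * sk, co * sp * sk - so * ck, co * cp],
    ]

def project(X, Y, Z, X0, Y0, Z0, c, x0, y0, R, dx=0.0, dy=0.0):
    """Collinearity condition: map object point (X, Y, Z) to image point (x, y).
    dx, dy are the additional-parameter corrections (zero by default)."""
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    U = R[0][0] * dX + R[0][1] * dY + R[0][2] * dZ
    V = R[1][0] * dX + R[1][1] * dY + R[1][2] * dZ
    W = R[2][0] * dX + R[2][1] * dY + R[2][2] * dZ
    x = x0 - c * U / W + dx
    y = y0 - c * V / W + dy
    return x, y
```

With zero rotation angles, R reduces to the identity and the projection is a simple perspective division by the depth W.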

The most common set of APs employed to compensate for systematic errors in digital cameras is the 8-term “physical” model originally formulated by Brown [4]. This model includes the 3D position of the perspective center in image space (principal distance and principal point), three coefficients of radial distortion, and two of decentering distortion. The model can be extended by two further parameters to account for affinity and shear within the sensor system. Although a large number of additional parameter sets have been published in the literature [21], this model has become an accepted standard for digital camera calibration [6]. In this study, the standard 10-term “physical” calibration model described by the following equations has been used to investigate the geometric accuracy potential of the cameras:

Δx = x̄(k1r² + k2r⁴ + k3r⁶) + p1(r² + 2x̄²) + 2p2x̄ȳ + b1x̄ + b2ȳ
Δy = ȳ(k1r² + k2r⁴ + k3r⁶) + 2p1x̄ȳ + p2(r² + 2ȳ²)

with

x̄ = x − x0,  ȳ = y − y0,  r² = x̄² + ȳ²

where k1, k2, and k3 are the first three parameters of radial symmetric distortion; p1 and p2 are the first two parameters of decentering distortion; and b1 and b2 represent terms for differential scaling and nonorthogonality between the x- and y-axes [22].
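The 10-term correction can be sketched directly from these equations. Note that coefficient scaling conventions differ between software packages, so the values passed in would have to match the package that produced them.

```python
def brown_correction(x, y, x0, y0, k1, k2, k3, p1, p2, b1, b2):
    """Delta-x, Delta-y of the 10-term 'physical' model: radial symmetric
    distortion (k1..k3), decentering distortion (p1, p2), and differential
    scaling / nonorthogonality (b1, b2)."""
    xb, yb = x - x0, y - y0              # coordinates reduced to the principal point
    r2 = xb * xb + yb * yb               # squared radial distance
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    dx = xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb + b1 * xb + b2 * yb
    dy = yb * radial + 2 * p1 * xb * yb + p2 * (r2 + 2 * yb * yb)
    return dx, dy
```

At the principal point the reduced coordinates vanish, so every term of the correction is zero there, as the model requires.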

The computation is iteratively performed using the Gauss–Markov least-squares model. The results of the self-calibrating bundle adjustment process comprise the 3D object space coordinates of unknown points and the camera parameters.
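A minimal one-parameter illustration of the iterative Gauss–Markov least-squares update is given below. The real bundle adjustment solves simultaneously for thousands of object coordinates and camera parameters; the model function here is a toy stand-in with assumed values (c = 4, Z = 10), not the full collinearity model.

```python
def gauss_newton_1p(observations, f, dfdp, p0, iters=10):
    """Iterative least squares for a single unknown p:
    p <- p + (A^T A)^-1 A^T v, where A holds the partial derivatives
    (design matrix) and v the residuals (observed minus computed)."""
    p = p0
    for _ in range(iters):
        num = den = 0.0
        for t, obs in observations:
            a = dfdp(p, t)        # partial derivative df/dp at the current p
            v = obs - f(p, t)     # residual
            num += a * v
            den += a * a
        p += num / den            # normal-equation solution for one unknown
    return p

# Toy stand-in model: image coordinate x = -c * X / (Z - Z0), solving for the
# camera station coordinate Z0 with c = 4 and Z = 10 held fixed (assumed values).
f = lambda p, X: -4.0 * X / (10.0 - p)
dfdp = lambda p, X: -4.0 * X / (10.0 - p) ** 2
obs = [(X, -4.0 * X / 8.0) for X in range(1, 6)]   # simulated with true Z0 = 2
```

Starting from p0 = 0, the iteration converges to the simulated value Z0 = 2 within a few steps.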

The results of the self-calibrating bundle adjustments for all cameras are summarized in Table 2. In the bundle adjustment process, 10 of the 80 test field points with known 3D coordinates were used as control points, and the 3D coordinates of the remaining points (checkpoints) were calculated. To evaluate the effect of the APs on positioning accuracy, we also performed a photogrammetric bundle adjustment without APs for each smartphone camera. In addition, another calibration procedure was performed for each camera using free-network bundle adjustment; in this case, the control points were not used as constraints but were included in the adjustment as checkpoints.


Table 2: Results of the self-calibrating bundle adjustments. For each configuration, values are given first for the control points (gcp) and then for the checkpoints (chk).

iPhone 5
  gcp = 10, ap = 0, chk = 70, r = 2902, σ0 = 4.76 μm (3.40 pixel)
    gcp: σX/σY/σZ = 0.166/0.166/0.224, μX/μY/μZ = 0.211/0.158/0.437
    chk: σX/σY/σZ = 0.232/0.231/0.373, μX/μY/μZ = 0.541/0.542/1.198
  gcp = 10, ap = 10, chk = 70, r = 2874, σ0 = 0.55 μm (0.39 pixel)
    gcp: σX/σY/σZ = 0.019/0.019/0.026, μX/μY/μZ = 0.011/0.028/0.030
    chk: σX/σY/σZ = 0.027/0.027/0.042, μX/μY/μZ = 0.055/0.038/0.079
  Free network, ap = 10, chk = 80, r = 2853, σ0 = 0.53 μm (0.38 pixel)
    chk: σX/σY/σZ = 0.023/0.023/0.035, μX/μY/μZ = 0.044/0.042/0.075

Nokia C7
  gcp = 10, ap = 0, chk = 70, r = 2980, σ0 = 6.09 μm (4.35 pixel)
    gcp: σX/σY/σZ = 0.219/0.219/0.281, μX/μY/μZ = 0.232/0.183/0.285
    chk: σX/σY/σZ = 0.292/0.291/0.406, μX/μY/μZ = 0.818/0.835/1.040
  gcp = 10, ap = 10, chk = 70, r = 3016, σ0 = 0.56 μm (0.40 pixel)
    gcp: σX/σY/σZ = 0.019/0.019/0.025, μX/μY/μZ = 0.013/0.018/0.042
    chk: σX/σY/σZ = 0.027/0.026/0.036, μX/μY/μZ = 0.046/0.040/0.075
  Free network, ap = 10, chk = 80, r = 2987, σ0 = 0.55 μm (0.39 pixel)
    chk: σX/σY/σZ = 0.023/0.022/0.029, μX/μY/μZ = 0.047/0.040/0.079

Galaxy S3
  gcp = 10, ap = 0, chk = 70, r = 3090, σ0 = 3.14 μm (2.24 pixel)
    gcp: σX/σY/σZ = 0.109/0.110/0.153, μX/μY/μZ = 0.148/0.113/0.165
    chk: σX/σY/σZ = 0.160/0.160/0.310, μX/μY/μZ = 0.598/0.600/0.719
  gcp = 10, ap = 10, chk = 70, r = 3218, σ0 = 0.49 μm (0.35 pixel)
    gcp: σX/σY/σZ = 0.017/0.017/0.024, μX/μY/μZ = 0.011/0.014/0.013
    chk: σX/σY/σZ = 0.026/0.025/0.048, μX/μY/μZ = 0.043/0.032/0.071
  Free network, ap = 10, chk = 80, r = 3195, σ0 = 0.49 μm (0.35 pixel)
    chk: σX/σY/σZ = 0.023/0.023/0.041, μX/μY/μZ = 0.078/0.058/0.216

Galaxy S4
  gcp = 10, ap = 0, chk = 70, r = 3172, σ0 = 6.71 μm (5.89 pixel)
    gcp: σX/σY/σZ = 0.229/0.230/0.301, μX/μY/μZ = 0.213/0.174/0.441
    chk: σX/σY/σZ = 0.315/0.318/0.436, μX/μY/μZ = 0.512/0.837/1.242
  gcp = 10, ap = 10, chk = 70, r = 3216, σ0 = 0.27 μm (0.24 pixel)
    gcp: σX/σY/σZ = 0.009/0.009/0.012, μX/μY/μZ = 0.012/0.015/0.018
    chk: σX/σY/σZ = 0.013/0.013/0.018, μX/μY/μZ = 0.026/0.022/0.031
  Free network, ap = 10, chk = 80, r = 3197, σ0 = 0.27 μm (0.24 pixel)
    chk: σX/σY/σZ = 0.011/0.011/0.014, μX/μY/μZ = 0.028/0.028/0.035

Xperia S
  gcp = 10, ap = 0, chk = 70, r = 3262, σ0 = 13.22 μm (11.02 pixel)
    gcp: σX/σY/σZ = 0.447/0.448/0.623, μX/μY/μZ = 0.338/0.248/0.917
    chk: σX/σY/σZ = 0.607/0.607/1.120, μX/μY/μZ = 1.479/1.550/3.176
  gcp = 10, ap = 10, chk = 70, r = 3112, σ0 = 0.57 μm (0.48 pixel)
    gcp: σX/σY/σZ = 0.019/0.019/0.027, μX/μY/μZ = 0.013/0.018/0.016
    chk: σX/σY/σZ = 0.027/0.027/0.048, μX/μY/μZ = 0.048/0.047/0.052
  Free network, ap = 10, chk = 80, r = 3095, σ0 = 0.57 μm (0.48 pixel)
    chk: σX/σY/σZ = 0.024/0.024/0.041, μX/μY/μZ = 0.051/0.043/0.053

IXUS 960 IS
  gcp = 10, ap = 10, chk = 70, r = 2742, σ0 = 0.63 μm (0.34 pixel)
    gcp: σX/σY/σZ = 0.018/0.019/0.026, μX/μY/μZ = 0.026/0.022/0.027
    chk: σX/σY/σZ = 0.021/0.021/0.029, μX/μY/μZ = 0.045/0.051/0.040
  Free network, ap = 10, chk = 80, r = 2761, σ0 = 0.66 μm (0.36 pixel)
    chk: σX/σY/σZ = 0.017/0.017/0.022, μX/μY/μZ = 0.046/0.050/0.039

gcp/chk: number of control points/independent checkpoints, respectively. ap: number of additional parameters. r: redundancy. σ0: a posteriori standard deviation of image observations. σX/σY/σZ: average theoretical precision of the gcp/chk coordinates (mm). μX/μY/μZ: empirical accuracies of the gcp/chk coordinates (mm).

Compared with the other cameras, the Galaxy S4 camera exhibited the best performance. According to the results of the bundle adjustment with external constraints using 10 control points, the triangulation misclosure (RMS of image coordinate residuals), used as a precision indicator for internal accuracy, was computed as 0.27 μm for the Galaxy S4, whereas the values for the four other smartphone cameras and the Canon compact camera were in the range 0.49–0.63 μm. The relative precision, i.e., the ratio of the mean target coordinate precision to the largest span of the target array, was determined as 1 : 40000 for the Galaxy S4; for the other cameras, this ratio ranged from 1 : 18000 to 1 : 25000.

With regard to relative accuracy (the mean RMS coordinate error divided by the largest span of the target array), the most accurate result in the horizontal was 1 : 25000 for the Galaxy S4, while the other four smartphone cameras and the compact camera achieved an average of 1 : 13000 in the horizontal. The best result in the depth direction was 1 : 22000 for the Galaxy S4, whereas those for the iPhone 5, Nokia C7, Sony Xperia S, Galaxy S3, and Canon IXUS 960 IS were computed as 1 : 9000, 1 : 9000, 1 : 13000, 1 : 10000, and 1 : 17000, respectively. The best agreement between theoretical precision and empirical accuracy was obtained for the Galaxy S4.

Without APs, the accuracy in the image space for the smartphone cameras ranged from 2.24 to 11.02 pixels. A comparison of the bundle adjustments with and without APs showed that the accuracy in the image space improved on average by a factor of 15; the greatest improvement was by a factor of 25 for the Galaxy S4, and the smallest by a factor of 6 for the Galaxy S3. In the object space, an average improvement in accuracy by a factor of 24 was observed.

With free adjustments, the photogrammetric network was not affected by the probable discrepancies between the reference points. The object coordinate residuals were only affected by photogrammetric measurements and model quality. Free-network adjustment thus provides optimal internal precision [23]. A comparison of the results obtained from free-network adjustment and from adjustment with control points showed that the values of the camera parameters are considerably similar. As expected, the accuracy and precision values resulting from free-network adjustment were smaller than those resulting from adjustment with control points.

The Gaussian radial distortion profiles recovered for each camera are shown in Figure 2. The profiles were obtained using the following equation:

dr = k1r³ + k2r⁵ + k3r⁷

where dr is the radial distortion and r is the radial distance [24]. The plots were derived using a free-network self-calibrating bundle adjustment and have been plotted only up to the maximum radial distance encountered in the self-calibration.
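Evaluating and sampling this odd-order polynomial profile can be sketched as follows; the coefficients k1, k2, and k3 would come from the self-calibration, and the values used below are illustrative placeholders.

```python
def radial_profile(r, k1, k2, k3):
    """Gaussian radial distortion dr at radial distance r:
    dr = k1*r^3 + k2*r^5 + k3*r^7."""
    return k1 * r**3 + k2 * r**5 + k3 * r**7

def profile_curve(k1, k2, k3, r_max, n=50):
    """Sample the profile from 0 up to the maximum radial distance
    encountered in the self-calibration (as plotted in Figure 2)."""
    return [(i * r_max / n, radial_profile(i * r_max / n, k1, k2, k3))
            for i in range(n + 1)]
```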

The lowest values for radial distortion (∼9 μm at the corners) were obtained for Galaxy S3, whereas the highest value was obtained for Canon IXUS 960 IS (∼180 μm at the corners). The obtained decentering lens distortion value was less than 1 μm for the cameras, except for Sony Xperia S. This value for Sony Xperia S was calculated as 3 μm at the corners. The calculated principal point locations and their standard deviations are listed in Table 3. The highest values for the principal point position were obtained for Galaxy S4 and Canon IXUS 960 IS. Imperfect mounting of the lenses may have led to these high values.


Table 3: Principal point locations and their standard deviations (μm).

Camera             Adjustment type   x0       σx0    y0       σy0
iPhone 5           Free              −2.70    0.76    2.60    0.72
Nokia C7           Free              −4.30    0.79   −3.30    0.80
Galaxy S3          Free             −10.97    0.52  −53.93    0.53
Galaxy S4          Free              64.60    0.36  −22.20    0.38
Sony Xperia S      Free              24.20    0.91  −35.80    0.93
Canon IXUS 960 IS  Free              39.00    1.16   57.40    1.24

3. Image-Based 3D Modeling Tests

The integration of photogrammetric methods and computer vision algorithms is leading to attractive procedures which have increasingly automated the whole image-based 3D modeling process [24]. Recently, automatic solutions based on SfM-MVS techniques have been extensively used in image-based 3D reconstruction tasks [16, 25, 26]. The process principally involves image orientation and dense model reconstruction with a high level of automation.

3.1. Structure from Motion

Structure from motion can be described as the simultaneous determination of the orientation parameters and the 3D scene model. Traditionally, the SfM pipeline consists of two main stages. First, a set of point correspondences between image sequences is detected through feature detection and image matching. Second, SfM is applied to determine the orientation parameters and the scene structure [27].

The aim of the correspondence estimation phase is to obtain sets of matching pixel positions between image sequences, where each set ideally represents a single point in 3D space. Currently, scale-invariant operators such as SIFT or SURF provide the state-of-the-art methodology for extracting point features from images. The features in each image are invariant with respect to image scaling, translation, and rotation, and partially invariant to illumination changes. For each feature, the detector also computes a “signature” for its neighborhood, known as a feature descriptor [27]. These descriptors are distinctive enough to allow features to be matched in large datasets [28].

Next, for each pair of images, a set of matching features is determined. The matches are generally obtained with a kd-tree procedure based on approximate nearest neighbours [29].
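The matching step can be sketched with a brute-force nearest-neighbour search plus a distance-ratio test. This is a simplification: in practice a kd-tree with approximate nearest neighbours replaces the brute-force loop, and the 0.8 ratio threshold is an assumed value.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping the match only if the nearest distance is clearly smaller than
    the second-nearest (the ratio test). Returns (index_a, index_b) pairs."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Ambiguous features, whose two closest candidates are nearly equidistant, are discarded rather than matched, which is what makes the descriptors usable in large datasets.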

After matching features for an image pair, the fundamental matrix is robustly estimated for the pair using RANSAC with the eight-point algorithm [30, 31]. After finding a set of geometrically consistent matches between each image pair, the matches are organized into tracks, where a track is a connected set of matching key points across multiple images [32, 33].
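The robust-estimation loop can be sketched generically. For brevity, a two-point line model stands in for the eight-point fundamental-matrix estimator (which needs an SVD); the real estimator would plug into the same `fit`/`residual` slots.

```python
import random

def ransac(data, fit, residual, sample_size, threshold, iters=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a random minimal sample,
    keep the model with the largest consensus set, then refit on all inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit(rng.sample(data, sample_size))
        if model is None:
            continue  # degenerate sample
        inliers = [d for d in data if residual(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:
        best_model = fit(best_inliers)
    return best_model, best_inliers

# Illustrative 2-point line model y = a*x + b (refit uses the endpoint pair).
def fit_line(pts):
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    if x1 == x2:
        return None
    a = (y2 - y1) / (x2 - x1)
    return (a, y1 - a * x1)

def line_residual(model, pt):
    a, b = model
    return abs(pt[1] - (a * pt[0] + b))
```

Gross outliers never enter the winning consensus set, which is exactly why RANSAC is used before organizing the matches into tracks.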

The second stage of the pipeline comprises determining a set of camera parameters and a 3D position for each track. The recovered parameters should be coherent, in that the reprojection error is minimized. This minimization problem is considered as a nonlinear least-squares problem and solved using bundle adjustment [33].

3.2. Dense Scene Reconstruction

SfM is capable of constructing only a sparse geometric structure consisting of the 3D positions of matched image features. While this is adequate for some applications, such as image-based visualization, the reconstruction of a highly detailed and accurate 3D model demands a dense point cloud, which requires applying dense image matching methods to the oriented images [27, 34].

The image matching problem is generally resolved by utilizing stereo pairs or via determination of correspondences in multiple images. Stereo methods can be local or global. Local methods determine the disparity at a given point depending only on intensity values within a local window, while global methods make explicit smoothness assumptions and then solve an optimization problem over a global cost function. Most of these procedures apply consistency measures only to single stereo pairs. On the other hand, geometric constraints are applied only during the fusion of the point clouds derived by the stereo pairs [24, 35].
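A minimal sketch of the local strategy is given below on toy integer images, assuming rectified image rows so that matching reduces to a horizontal search; the window size and disparity range are illustrative.

```python
def disparity_sad(left, right, x, y, window=1, max_disp=4):
    """Local stereo matching at pixel (x, y): choose the disparity d that
    minimises the sum of absolute differences (SAD) of intensities over a
    (2*window+1)^2 neighbourhood between the left image and the right image
    shifted by d."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = 0
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                cost += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

A global method would instead optimize a cost function over all pixels at once with an explicit smoothness term, rather than deciding each pixel independently from its local window.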

The last steps of the 3D reconstruction process are meshing and texturing. Various approaches can be used to derive a photorealistic 3D model from a dense point cloud. Remondino and El-Hakim [36] noted that polygonal meshing is generally the most efficient solution to accurately represent the results of 3D measurements, providing an optimal surface description. One of the most popular polygonal meshing algorithms is Delaunay triangulation. In general, these methods require a starting point such as the visual hull model, the calculation of additional information such as vertex normals, and a sufficient number of points [34].

3.3. 3D Modeling and Performance Testing

A second application was implemented to examine the potential use of mobile-phone images in 3D modeling. Here, a 3D model of a historical cylindrical structure (height = 8 m and diameter = 5 m) was first generated with SfM-MVS techniques using images captured with the Galaxy S4, and accuracy tests were then conducted on the 3D model.

The historical building, which is said to have been built in the mid-14th century, is known as Sircali cupola. Today, this cupola is located in the garden of Kayseri Technical and Industrial Vocational High School in Kayseri (Figure 3).

For 3D modeling, a closed polygon (traverse) network with five stations was first established, and the coordinates of 96 signalized points on the historic cupola were measured with high accuracy using a reflectorless total station. In addition to the signalized points, 280 natural points, clearly identifiable and well distributed on the cupola, were also measured. We then took 69 overlapping images of the cupola from distances of 5–20 m; the average shooting distance was ∼8 m, corresponding to an average ground resolution of 2.06 mm/pixel.

The Agisoft PhotoScan Professional software package (version 1.4.3; Agisoft, 2018) was used to generate a 3D photorealistic model of the historical structure based on SfM and MVS algorithms. The software workflow for 3D modeling comprises four primary steps: (1) photo alignment, (2) optimization, (3) dense point cloud generation, and (4) surface reconstruction (3D polygonal model). The photo alignment step consists of detecting common tie points, matching them across the images, and estimating the camera poses. In our study, the alignment accuracy was set to “high” (the software then uses the photos at their original size). To optimize performance, the number of matching points for each image was limited to the default value of 4000. The outputs of the alignment process are the position and orientation parameters for each image, the camera calibration parameters, and a sparse point cloud model.

For the optimization step, the 3D coordinates of the marked points were first added to the input data set. Twenty-eight of these points were used as ground control points (GCPs), while the remaining sixty-eight were used as checkpoints. The coordinates of the marks were manually associated with the corresponding marker centers, which georeferenced the sparse point cloud model. Nonlinear deformations of the model can be eliminated by optimizing the computed sparse point cloud and camera parameters based on the known control point coordinates. During this optimization, the software updates the estimated point coordinates and camera parameters by minimizing the sum of the reprojection error and the reference coordinate misalignment error.

The third step involved the generation of a dense point cloud from the estimated camera positions and the images. The software calculated a depth map for each image and combined them into a final dense point cloud. To obtain a more detailed and accurate geometry, the reconstruction quality was set to “high.”

The final step in the process was the reconstruction of the 3D polygonal model representing the surface of the object based on the dense point cloud. The surface type was selected as “arbitrary,” which was suitable for modeling any object. The user could also determine the maximum number of polygons in the final mesh. This parameter was set to “high” to optimize the number of polygons for a mesh with the corresponding level of detail [37].

From the optimization step, the average precision of the coordinates of the control points was calculated as 0.59 mm and 0.71 mm for the in-plane (x-z) and out-of-plane (y) components, respectively. The empirical accuracy in the object space calculated from the checkpoints was found to be 1.04 mm and 1.33 mm for the in-plane and out-of-plane components, respectively. The RMS of reprojection error (residuals of image coordinates) was 0.54 pixels for all tie points.

The SfM-MVS process produced a 3D dense point cloud containing 21.8 million points (with an average density of 17 points/cm2). This point cloud was triangulated to create a mesh model with 4.3 million faces and 2.18 million vertices (Figure 4).

To assess the final positional accuracy of the geometric model, 68 signalized checkpoint coordinates were measured within the 3D point cloud. These measurements were performed by measuring the closest points to the center of the targets. RMSEs at checkpoint coordinates were calculated as 2.05 mm and 2.38 mm for the in-plane and out-of-plane components, respectively.

Besides the point measurements at the checkpoints, positional accuracy was assessed by measuring the spatial differences between the mesh model and a point cloud consisting of both the checkpoints and the natural points, using the cloud-to-mesh distance function of the CloudCompare software package. This function is considered more robust to local noise. The distance to the nearest triangle was calculated as follows: if the orthogonal projection of the point fell inside the triangle, the distance was defined as the orthogonal distance from the point to the triangle plane; if the projection fell outside the triangle, the distance to the nearest edge was used. Figure 5 illustrates the results of the distance computation on a color scale.
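The nearest-triangle rule just described can be sketched with plain 3D vector helpers; CloudCompare's actual implementation is more elaborate (it searches many triangles through an octree), but the per-triangle distance logic is the same.

```python
def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (projection clamped to [0, 1])."""
    ab, ap = _sub(b, a), _sub(p, a)
    t = max(0.0, min(1.0, _dot(ap, ab) / _dot(ab, ab)))
    closest = (a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2])
    d = _sub(p, closest)
    return _dot(d, d) ** 0.5

def point_triangle_dist(p, a, b, c):
    """Cloud-to-mesh rule from the text: orthogonal distance to the plane if
    the projection of p falls inside triangle abc, else distance to the
    nearest edge."""
    n = _cross(_sub(b, a), _sub(c, a))          # triangle normal
    inside = all(                               # edge-sign test of the projection
        _dot(_cross(_sub(q2, q1), _sub(p, q1)), n) >= 0.0
        for q1, q2 in ((a, b), (b, c), (c, a))
    )
    if inside:
        return abs(_dot(_sub(p, a), n)) / _dot(n, n) ** 0.5
    return min(point_segment_dist(p, q1, q2) for q1, q2 in ((a, b), (b, c), (c, a)))
```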

Each surface deviation depiction is accompanied by a graph showing the frequency of occurrence of the deviation distances, along with the mean distance and standard deviation (σ) of the measured checkpoints [38]. It is worth noting that the comparison resulted in Gaussian-like distributions. The mean distance between the 3D mesh model and the checkpoints was 0.12 mm, and the standard deviation was 3.83 mm.

3.4. Automatic Camera Calibration

Currently, convenient, stand-alone targetless camera calibration is achievable via a process that combines SfM methods with rigorous photogrammetric orientation and self-calibration [39]. An attempt was made to evaluate the performance of the targetless camera calibration method based on the SfM algorithm using Galaxy S4 camera images. A total of 40 images of the cupola were used, and the camera calibration parameters were calculated automatically in the Agisoft PhotoScan software. For comparison with the targetless approach, the camera calibration parameters were recalculated in the Australis software using the same images and signalized points with known 3D coordinates; twenty-five of the signalized points were used as ground control points, while the remaining 50 were used as checkpoints. The self-calibration results are given in Table 4.


Table 4: Self-calibration results for the target-based and targetless cases (standard errors in parentheses).

Target-based case: c = 4.2100 mm (±0.0010); x0 = 0.0623 mm (±0.0018); y0 = −0.0295 mm (±0.0016); Δr = −15.82 μm at r = 1.50 mm and −26.09 μm at r = 2.50 mm; P(r) = 0.32 μm at r = 1.50 mm and 0.88 μm at r = 2.50 mm; RMS = 0.33 pixel; 75 object points.

Targetless case: c = 4.2120 mm (±0.0005); x0 = 0.0613 mm (±0.0003); y0 = −0.0283 mm (±0.0003); Δr = −14.91 μm at r = 1.50 mm and −22.01 μm at r = 2.50 mm; P(r) = 0.38 μm at r = 1.50 mm and 1.07 μm at r = 2.50 mm; RMS = 0.60 pixel; 32,000 object points.

The calibrated values of the focal length (c) and the principal point offsets (x0, y0) are listed in the table, together with their estimated standard errors, the radial distortion correction values (Δr) at two selected radial distances, and the decentering distortion profile values P(r) at the same radial distances. The RMS value of the image coordinate residuals and the number of object points for each case are also shown. The repeatability between the target-based and targetless cases is high for the interior orientation and lens distortion parameters. Regarding the internal accuracy indicators, there was, as expected, roughly a twofold discrepancy in image coordinate measurement accuracy between the two cases, with RMS values of 0.33 pixel for the target-based case and 0.67 pixels for the targetless case. Accuracy tests using the checkpoints were also performed for the target-based case; as a result of the improved image coordinate measurement accuracy, the relative object point accuracy was determined as 1 : 11000 in-plane and 0.012% of the average depth.

4. Conclusions

We investigated the usability of mobile-phone camera images in photogrammetric applications in two stages. In the first stage, five different mobile-phone cameras and a digital compact camera were compared in terms of their accuracy and precision. For this purpose, both self-calibration with control points and free-network bundle adjustments were performed for each camera. With external constraints using 10 control points, the accuracy in the image space was calculated to be ∼1/4 pixel for the Galaxy S4; for the other cameras, this value was in the range 1/2–1/2.9 pixel. The relative accuracy in the object space for the Galaxy S4 was 1 : 25000 in-plane and 0.004% of the average depth, whereas these ratios for the other cameras ranged from 1 : 10000 to 1 : 13000 in-plane and from 0.01% to 0.006% of the average depth. The 3D object point accuracy was less than 0.1 mm for all cameras. The best results in all evaluations were obtained for the Galaxy S4 smartphone camera. For the mobile-phone cameras used in the study, we believe that the principal influences on accuracy are the resolution of the camera sensor and the pixel size. Luhmann et al. [1] stated that the anticipated accuracy of image coordinate measurement is in the range of 0.03–0.1 pixels for automatically measured targets, and that the quality of the camera calibration can provide a relative accuracy of 1 : 50000, provided that a strong multi-image geometry is used. There was thus about a twofold discrepancy between the accuracy values obtained for the Galaxy S4 and those recommended for digital cameras, in both image and object space. On the other hand, in an earlier study using test field data, Akca and Gruen [9] demonstrated that relative accuracies of 1 : 8000 in-plane and 0.03% of the average depth can be achieved with low-resolution mobile-phone cameras. It can therefore be said that there has been an appreciable improvement in relative accuracy as a result of developments in image resolution and mobile-phone technology.

In the second stage, we used SfM-MVS techniques to reconstruct a 3D model of the historical Sircali cupola from images captured with the Galaxy S4 smartphone camera, which yielded the best performance in the geometric accuracy tests. After the aerotriangulation block adjustment, the relative accuracy was 1 : 7500 for the in-plane component and 1 : 6000 for the out-of-plane component. SfM-based approaches pose two main problems for measurement applications. First, the imperative of avoiding wide baselines results in a weaker network geometry. Second, descriptor-based feature-point matching leads to lower-accuracy image measurements [39]. The disparity in accuracy is likely related to these disadvantages of the SfM approach, the lower image scale, and the manual image measurements of the targets. Indeed, as stated by Fraser and Shortis [40], the accuracy of vision metrology systems based on digital cameras depends on the image resolution, image scale, image measurement precision, and a number of other factors, such as network design. The problems of the SfM approach were also apparent in the automatic calibration process: although the camera calibration parameters agreed closely between the target-based and targetless cases, there was an approximately twofold discrepancy in the accuracy of the image coordinate measurements.

The final positional accuracy tests of the geometric model showed reasonable (millimeter-level) accuracy for the dense point cloud and the resulting mesh model. The data evaluation phase demonstrated that it is possible to obtain high-quality results from numerous images captured by smartphone cameras using appropriate software solutions. Furthermore, the generated 3D model demonstrated the feasibility of SfM-MVS approaches for low-budget digitization or documentation projects.

Consequently, with an appropriate imaging configuration, careful calibration, and capable data-processing software, these devices can be used in a variety of photogrammetric measurement applications demanding high accuracy. This option is attractive because mobile-phone cameras offer good resolution and are economical and flexible. Moreover, in line with technological advancements, the quality and performance of mobile-phone cameras will continue to improve, along with their built-in image processing functions. It will therefore become possible to obtain high-quality results for 3D modeling applications in which these devices serve as both photogrammetric data acquisition and processing tools, at least for small projects.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. T. Luhmann, C. Fraser, and H.-G. Maas, “Sensor modelling and camera calibration for close-range photogrammetry,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 115, pp. 37–46, 2015.
  2. K. El-Ashmawy, C. J. Bellmana, S. Robson, G. J. Johnston, and G. W. Johnson, “Using smart phones for deformations measurements of structures,” Geodesy and Cartography, vol. 43, no. 2, pp. 66–72, 2017.
  3. M. R. Shortis et al., “Stability of zoom and fixed lenses used with digital SLR cameras,” in Proceedings of the ISPRS Commission V Symposium of Image Engineering and Vision Metrology, pp. 285–290, Dresden, Germany, September 2006.
  4. D. C. Brown, “Close-range camera calibration,” Photogrammetric Engineering, vol. 37, no. 8, pp. 855–866, 1971.
  5. J. Fryer, “Camera calibration,” in Close Range Photogrammetry and Machine Vision, K. Atkinson, Ed., Whittles Publishing, Caithness, UK, 1996.
  6. F. Remondino and C. Fraser, “Digital camera calibration methods: considerations and comparisons,” in Proceedings of the ISPRS Commission V Symposium “Image Engineering and Vision Metrology”, vol. 36, pp. 266–272, Dresden, Germany, September 2006.
  7. T. Luhmann, J. Boehm, S. Kyle, and S. Robson, Close-Range Photogrammetry and 3D Imaging, De Gruyter Textbook, Berlin, Germany, 2nd edition, 2013.
  8. C. Chen, A. C. Kot, and H. Yang, “A two-stage quality measure for mobile phone captured 2D barcode images,” Pattern Recognition, vol. 46, no. 9, pp. 2588–2598, 2013.
  9. D. Akca and A. Gruen, “Comparative geometric and radiometric evaluation of mobile phone and still video cameras,” The Photogrammetric Record, vol. 24, no. 127, pp. 217–245, 2009.
  10. N. A. Azhar and A. Ahmad, “Comparative geometric and radiometric evaluation of mobile phone, compact and DSLR cameras,” in Proceedings of the 2013 IEEE 9th International Colloquium on Signal Processing and its Applications, Kuala Lumpur, Malaysia, March 2013.
  11. P. Tanskanen, K. Kolev, L. Meier, F. Camposeco, O. Saurer, and M. Pollefeys, “Live metric 3D reconstruction on mobile phones,” in Proceedings of the 2013 IEEE International Conference on Computer Vision, pp. 65–72, Sydney, Australia, December 2013.
  12. K. Kolev, P. Tanskanen, P. Speciale, and M. Pollefeys, “Turning mobile phones into 3D scanners,” in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3946–3953, Columbus, OH, USA, June 2014.
  13. O. Muratov, Y. Slynko, V. Chernov, M. Lyubimtseva, A. Shamsuarov, and V. Bucha, “3DCapture: 3D reconstruction for a smartphone,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 75–82, Las Vegas, NV, USA, June-July 2016.
  14. D. Gonzalez, L. López-Fernández, P. Rodriguez-Gonzalvez et al., “GRAPHOS—open-source software for photogrammetric applications,” The Photogrammetric Record, vol. 33, no. 161, pp. 11–29, 2018.
  15. E. Nocerino, F. Poiesi, A. Locher et al., “3D reconstruction with a collaborative approach based on smartphones and a cloud-based server,” ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-2/W8, pp. 187–194, 2017.
  16. F. Remondino, S. Del Pizzo, T. P. Kersten, and S. Troisi, “Low-cost and open-source solutions for automated image orientation—a critical overview,” in Proceedings of the 4th International Conference, EuroMed 2012 (LNCS 7616), Limassol, Cyprus, October-November 2012.
  17. J. Kim, S. Lee, H. Ahn, D. Seo, S. Park, and C. Choi, “Feasibility of employing a smartphone as the payload in a photogrammetric UAV system,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 79, pp. 1–18, 2013.
  18. N. Micheletti, J. H. Chandler, and S. N. Lane, “Investigating the geomorphological potential of freely available and accessible structure-from-motion photogrammetry using a smartphone,” Earth Surface Processes and Landforms, vol. 40, pp. 473–486, 2015.
  19. C. S. Fraser, “Digital camera self-calibration,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 52, no. 4, pp. 149–159, 1997.
  20. F. Yilmazturk, “Full-automatic self-calibration of color digital cameras using color targets,” Optics Express, vol. 19, no. 19, pp. 18164–18174, 2011.
  21. S. Abraham and T. Hau, “Towards autonomous high precision calibration of digital cameras,” in Proceedings of SPIE Videometrics V, S. El-Hakim, Ed., vol. 3174, pp. 82–93, San Diego, CA, USA, July 1997.
  22. C. S. Fraser and K. L. Edmundson, “Design and implementation of a computational processing system for off-line digital close-range photogrammetry,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 55, no. 2, pp. 94–104, 2000.
  23. T. Luhmann, S. Robson, S. Kyle, and I. Harley, Close Range Photogrammetry: Principles, Techniques and Applications, Whittles Publishing, Caithness, Scotland, 2006.
  24. F. Remondino, M. G. Spera, E. Nocerino, F. Menna, and F. Nex, “State of the art in high density image matching,” The Photogrammetric Record, vol. 29, no. 146, pp. 144–166, 2014.
  25. M. J. Westoby, J. Brasington, N. F. Glasser, M. J. Hambrey, and J. M. Reynolds, “Structure-from-motion photogrammetry: a low-cost, effective tool for geoscience applications,” Geomorphology, vol. 179, pp. 300–314, 2012.
  26. Á. Gómez-Gutiérrez, J. de Sanjosé-Blasco, J. de Matías-Bejarano, and F. Berenguer-Sempere, “Comparing two photo-reconstruction methods to produce high density point clouds and DEMs in the Corral del Veleta rock glacier (Sierra Nevada, Spain),” Remote Sensing, vol. 6, no. 6, pp. 5407–5427, 2014.
  27. N. Snavely, I. Simon, M. Goesele, R. Szeliski, and S. M. Seitz, “Scene reconstruction and visualization from community photo collections,” Proceedings of the IEEE, vol. 98, no. 8, pp. 1370–1390, 2010.
  28. D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  29. L. Barazzetti, F. Remondino, and M. Scaioni, “Automation in 3D reconstruction: results on different kinds of close-range blocks,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, pp. 55–61, 2010.
  30. C. Orrite and E. Pollo, “Feature-based scaffolding for object tracking,” in Pattern Recognition and Image Analysis. IbPRIA. Lecture Notes in Computer Science, L. Alexandre, J. Salvador Sánchez, and J. Rodrigues, Eds., vol. 10255, Springer, Cham, Switzerland, 2017.
  31. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, New York, NY, USA, 2nd edition, 2003.
  32. N. Snavely, S. M. Seitz, and R. Szeliski, “Modeling the world from internet photo collections,” International Journal of Computer Vision, vol. 80, no. 2, pp. 189–210, 2008.
  33. N. Snavely, S. M. Seitz, and R. Szeliski, “Photo tourism: exploring photo collections in 3D,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 835–846, 2006.
  34. B. S. A. Alsadik, “Guided close range photogrammetry for 3D modelling of cultural heritages sites,” Dissertation, ITC Faculty of Geo-Information Science and Earth Observation, University of Twente, Enschede, The Netherlands, 2014.
  35. D. A. Altantawy and S. Kishk, “Non-local versus bilateral: multi-adapting disparity map estimation framework,” in Proceedings of the Computer Engineering Conference (ICENCO), pp. 10–15, Cairo, Egypt, December 2014.
  36. F. Remondino and S. El-Hakim, “Image-based 3D modelling: a review,” The Photogrammetric Record, vol. 21, no. 115, pp. 269–291, 2006.
  37. M. Jaud, S. Passot, R. L. Bivic, C. Delacourt, P. Grandjean, and N. L. Dantec, “Assessing the accuracy of high resolution digital surface models computed by PhotoScan® and MicMac® in sub-optimal survey conditions,” Remote Sensing, vol. 8, no. 6, p. 465, 2016.
  38. A. Koutsoudis, B. Vidmar, G. Ioannakis, F. Arnaoutoglou, G. Pavlidis, and C. Chamzas, “Multi-image 3D reconstruction data evaluation,” Journal of Cultural Heritage, vol. 15, no. 1, pp. 73–79, 2014.
  39. C. S. Fraser, “Advances in close-range photogrammetry,” in Proceedings of the 57th Photogrammetric Week, Stuttgart, Germany, September 2015.
  40. C. S. Fraser and M. R. Shortis, “Metric exploitation of still video imagery,” Photogrammetric Record, vol. 15, no. 85, pp. 107–122, 1995.

Copyright © 2019 Ferruh Yilmazturk and Ali Ersin Gurbak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

