Abstract

The construction of three-dimensional (3D) structures from video sequences has wide applications in intelligent video analysis. This paper summarizes the key issues of the theory and surveys recent advances in the state of the art. Reconstruction of a scene or object from video sequences typically follows the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes representative solutions and algorithms for modeling complex three-dimensional structures. Open problems are also suggested for further study.

1. Introduction

Over the past two decades, many researchers have sought to reconstruct models of three-dimensional (3D) scene structure and camera motion from video sequences taken with an uncalibrated camera, or from unordered photo collections gathered from the Internet. Traditionally, depth measurement and 3D metric reconstruction were performed from two uncalibrated stereo images [1]. Nowadays, reconstructing a 3D scene from a moving camera is one of the most important problems in computer vision. The task is challenging because of the competing demands of computational efficiency, generality, and accuracy. In this paper, we aim to show the development and current status of 3D reconstruction algorithms on this topic.

The basic concepts of the problem can be found in the fundamentals of multiview geometry, through books and theses such as Multiple View Geometry in Computer Vision [2], The Geometry of Multiple Images [3], and Triangulation [4], as well as some typical publications [5–8], each of which is independently useful for implementing an entire system. Multiview geometry is fundamental in computer vision, and structure-from-motion algorithms build on perspective, affine, and Euclidean geometry. For the simultaneous computation of 3D points and camera positions, there is a linear algorithmic framework for Euclidean structure recovery from a scaled orthographic view and perspective views, based on having a reference plane visible in all views [9]. There is also an affine framework for perspective views, captured by a single extremely simple equation based on a viewer-centered invariant called relative affine structure [10]. A comprehensive method estimates scene structure and camera motion from an image sequence taken by affine cameras and can incorporate point, line, and conic features in a unified manner [11]. Another class of approaches computes the cameras together with the 3D points, relying only on established correspondences between the observed images. These systems and their improvements are covered in many publications [2, 6, 12–15], which give a compact yet accessible overview of a complete reconstruction system.

For multiview modeling of a rigid scene, an approach is presented in [16] that merges traditional feature-based reconstruction with modeling via user-provided geometry. It proceeds in steps: extract features for a first guess of the structure and motion, fit geometric primitives, correct the structure so that the reconstructed features lie exactly on the geometric primitives, and optimize both structure and motion in a bundle adjustment manner. A nonlinear least-squares algorithm is presented in [17] for recovering 3D shape and motion from image streams.

Sparse 3D measurements of real scenes are readily estimated from N-view image sequences using structure-from-motion techniques. A fast algorithm for rigid structure from image sequences is given in [18]. Hilton presents a geometric theory for the reconstruction of surface models from sparse 3D data captured from N camera views [19], and 3D shape can also be reconstructed by using vanishing points [20]. Relative affine structure provides a canonical model for 3D-from-2D geometry and its applications [10].

The work in [12] describes progress in automatically recovering 3D scene structure together with 3D camera positions from a sequence of images acquired by an unknown camera undergoing unknown movement. The main departure from previous structure-from-motion strategies is that the processing is not sequential; instead, a hierarchical approach is employed, building from image triplets and associated trifocal tensors. A method is presented in [21] for dealing with hundreds of images without precise calibration knowledge. Optimizing just over the motion unknowns is fast, and given the recovered motion, the optimal structure can be recovered algebraically for two images [4].

In fact, reconstruction of nonrigid scenes is very important in structure from motion. The recovery of 3D structure and camera motion for nonrigid scenes from single-camera video footage is a key problem in computer vision. For an implicit imaging model of nonrigid scenes, there is a nonrigid structure-from-motion algorithm based on computing matching tensors over subsequences; each nonrigid matching tensor, along with the rank of the subsequence, is computed using a robust estimator that incorporates a model selection criterion to detect erroneous image points [22]. Uncalibrated motion capture can exploit articulated structure constraints, for example for human subjects [23]; the technique shows promise as a means of creating 3D animations of dynamic activities such as sports events. For the problem of 3D reconstruction of nonrigid objects from uncalibrated image sequences, under the assumption of an affine camera and a nonrigid object composed of a rigid part and a deformation part, a stratification approach can recover the structure by first reconstructing it in affine space and then upgrading it to Euclidean space [24]. In addition, a general framework of locally rigid motion for solving the M-point, N-view structure-from-motion problem for unknown bodies deforming under orthography is presented in [25]. An incremental approach is presented in [26], where a new framework for nonrigid structure from motion simultaneously addresses three significant challenges: severe occlusion, perspective camera projection, and large nonlinear deformation.

With the development of structure-from-motion algorithms, geometric constraints and optimization are necessary for reconstructing a good 3D model of an object or scene, and many researchers have proposed useful approaches. For example, a technique is proposed in [27] for estimating piecewise planar models of objects from their images and geometric constraints, and [28] recovers 3D structure from a single calibrated view using distance constraints. Marques and Costeira present an approach to estimating 3D shape from degenerate sequences with missing data [29]. Going beyond the epipolar constraint also improves the results of structure from motion [30].

3D affine measurements may be computed from a single perspective view of a scene given only minimal geometric information determined from the image, typically the vanishing line of a reference plane and a vanishing point for a direction not parallel to the plane. Without camera parameters, Criminisi et al. [31] show how to (i) compute the distance between planes parallel to the reference plane; (ii) compute area and length ratios on any plane parallel to the reference plane; and (iii) determine the camera's location. Direct estimation recovers scene structure and camera motion from a sequence of images without computing optical flow or feature correspondences [32]. A good critique of structure-from-motion algorithms can be found in [33] by Oliensis.

The remainder of this paper is organized as follows. Section 2 briefly gives some typical applications of structure from video sequences. Section 3 introduces the general reconstruction principle of structure from video sequences and unstructured photo collections. Section 4 outlines the methods for structure and motion estimation. Section 5 discusses relevant algorithms for each step of the pipeline. We offer our impressions of current and future trends in the topic and conclude in Sections 6 and 7.

2. Typical Applications

2.1. Modeling and Reconstruction of 3D Buildings or Landmarks

For 3D reconstruction of an object or building, Pollefeys et al. present a complete system for building visual models with a hand-held camera [6]. There is also a system for photorealistic 3D reconstruction from hand-held cameras [34]. Sinha et al. [35] present an algorithm for building interactive 3D architectural models from unordered photo collections. A fully automated 3D reconstruction and visualization system for architectural scenes, including interiors and exteriors, is given in [36]; the system combines structure from motion, multiview stereo, and a stereo algorithm.

3D models of historical relics and buildings, for example, Emperor Qin's Terra-cotta Warriors and the Piazza San Marco, are of great value to archaeologists. A system that can match and reconstruct 3D scenes from extremely large collections of photographs has been developed by Agarwal et al. [37]. A method for enabling existing multiview stereo algorithms to operate on extremely large unstructured photo collections has been contrived by Furukawa et al. [38]; the approach decomposes the collection into a set of overlapping sets of photos that can be processed in parallel and then merges the resulting reconstructions [38]. People who want to sightsee famous buildings or landscapes over the Internet can tour the world through a web-scale landmark recognition engine [39].

Modeling and recognizing landmarks at world scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks, obtaining reliable visual models for each landmark can pose problems, and efficiency is another challenge for such a large-scale system. Zheng et al. leverage the vast amount of multimedia data on the web, the availability of Internet image search engines, and advances in object recognition and clustering techniques to address these issues [39].

2.2. Urban Reconstruction

Modeling the world and reconstructing a city present many challenges for a visualization system in computer vision; products such as Google Earth and Google Maps illustrate the demand. For instance, Pollefeys et al. [40] present a system for automatic, georegistered, real-time multiview stereo 3D reconstruction from long image sequences of urban scenes. The system collects video streams, as well as GPS and inertial measurements, in order to obtain georegistered coordinates for the 3D models [40]. Faugeras et al. [41] address the problem of recovering a realistic textured model of a scene from a sequence of images, without any prior knowledge about the parameters of the cameras or about their motion.

2.3. Navigation

Once the world's model or the city's reconstruction is complete, we can obtain the relative locations of buildings and find related views for navigation by robots or other vision systems. Photo Tourism enables full 3D navigation and exploration of a set of images and world geometry, along with auxiliary information such as overhead maps [14]. It provides several modes of navigation, including free-flight navigation, moving between related views, object-based navigation, and creating stabilized slideshows. The system by Pollefeys et al. also provides navigation functions [40]. Supplying realistically textured 3D city models at ground level promises to be useful for previsualizing upcoming traffic situations in car navigation systems [42].

2.4. Visual Servoing

In the literature, there are applications that employ SfM algorithms successfully in practical engineering. For instance, a visual servoing system based on structure from controlled motion, or on robust statistics, is presented in [43]. A general-purpose image understanding system built around a control structure is designed by Marengoni et al. [44], and 3D video compression via topology matching is described in [45]. More applications are being developed by researchers and engineers in the community.

2.5. Scene Recognition and Understanding

3D reconstruction is an important element of face recognition, facial expression analysis, and related tasks. Fidaleo and Medioni [46] design a model-assisted system for the reconstruction of 3D faces from a single consumer-quality camera using a structure-from-motion approach. Park and Jain [47] present an algorithm for 3D-model-based face recognition in video.

Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television [48]. Nedovic et al. propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem [48].

3. Information Organization

The goal of structure from motion is the automatic recovery of camera motion and scene structure from two or more images. The problem of using pixel correspondences or tracked points to determine camera and point geometry in this manner is known as structure from motion. It is a self-calibration technique, also called automatic camera tracking or match moving. We must consider several questions:

(1) Correspondence (feature extraction and tracking or matching): given a point in one image, how does it constrain the position of the corresponding point in other images?

(2) Scene geometry (structure): given point matches in two or more images, where are the corresponding points in 3D?

(3) Camera geometry (motion): given a set of corresponding points in two or more images, what are the camera matrices for these views?

Based on these questions, we can give the 3D reconstruction pipeline as in Figure 1. The goal of correspondence is to build a set of matching 2D pixel coordinates across the video sequences. It is a significant step in the structure-from-motion pipeline, and correspondence has always been a challenging task in computer vision. So far, many researchers have developed practical and robust algorithms. Given a video sequence of a scene, how can we find matching points?

Firstly, there are some well-known algorithms for image sequences or videos; one popular method is the KLT tracker [49–51]. It provides an integrated system that automatically detects KLT feature points and tracks them. However, it cannot handle situations with wide baselines, illumination changes, scale variation, duplicate or similar structures, occlusion, noise, image distortion, and so on. Generally speaking, for video sequences the KLT tracker performs well. Figures 2 and 3 show examples of the feature points output by the KLT detector, with example images from http://www.ces.clemson.edu/~stb/klt/.
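As an illustration, the following is a minimal sketch of KLT-style tracking using OpenCV's pyramidal Lucas-Kanade implementation rather than the original Birchfield code linked above; the video filename and parameter values are illustrative assumptions, and a production tracker would also maintain track identities and re-detect features as they are lost.

    import cv2

    cap = cv2.VideoCapture("scene.mp4")          # hypothetical input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Detect initial corners (Shi-Tomasi "good features to track").
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade flow from the previous frame to this one.
        nxt, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep surviving tracks
        prev_gray = gray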

In the KLT tracker [49–51], if the time interval between two frames of video is sufficiently short, we can suppose that the positions of feature points move but their intensities do not change; that is,

\[ I(\mathbf{x}, t + \tau) = I(\boldsymbol{\delta}(\mathbf{x}), t), \]

where \(\mathbf{x}\) is the position of a feature point and \(\boldsymbol{\delta}\) is a transformation function.

In the papers of Lucas and Kanade [49], Tomasi and Kanade [50], and Shi and Tomasi [51], the authors made an important hypothesis that, for high enough frame rates, \(\boldsymbol{\delta}\) can be approximated with a displacement vector \(\mathbf{d}\):

\[ \boldsymbol{\delta}(\mathbf{x}) = \mathbf{x} - \mathbf{d}. \]

Then the symmetric definition of the dissimilarity \(\epsilon\) between two windows, one in image \(I\) and one in the subsequent image \(J\), is

\[ \epsilon = \iint_{W} \left[ J\!\left(\mathbf{x} + \frac{\mathbf{d}}{2}\right) - I\!\left(\mathbf{x} - \frac{\mathbf{d}}{2}\right) \right]^{2} w(\mathbf{x}) \, d\mathbf{x}, \]

where \(w(\mathbf{x})\) is the weighting function, usually set to the constant 1. The algorithm calculates the vector \(\mathbf{d}\) that minimizes \(\epsilon\). Utilizing the first-order Taylor expansions of \(I\) and \(J\) to truncate \(\epsilon\) to the linear term and setting the derivative of \(\epsilon\) with respect to \(\mathbf{d}\) to \(0\) gives the linear equation

\[ Z \mathbf{d} = \mathbf{e}, \]

where \(Z\) is the following \(2 \times 2\) matrix:

\[ Z = \iint_{W} \mathbf{g}(\mathbf{x}) \, \mathbf{g}^{T}(\mathbf{x}) \, w(\mathbf{x}) \, d\mathbf{x}, \]

and \(\mathbf{e}\) is the following \(2\)-vector:

\[ \mathbf{e} = 2 \iint_{W} \left[ I(\mathbf{x}) - J(\mathbf{x}) \right] \mathbf{g}(\mathbf{x}) \, w(\mathbf{x}) \, d\mathbf{x}, \]

where \(\mathbf{g}(\mathbf{x}) = \partial \left[ \left( I(\mathbf{x}) + J(\mathbf{x}) \right) / 2 \right] / \partial \mathbf{x}\).
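The linearized step above translates directly into code. The following NumPy sketch solves \(Z\mathbf{d} = \mathbf{e}\) for a single pair of feature windows under the stated assumptions (small displacement, constant weighting by default); a full tracker iterates this update and embeds it in an image pyramid.

    import numpy as np

    def klt_step(I, J, w=None):
        """One Lucas-Kanade update for two grayscale windows I and J."""
        I = I.astype(np.float64)
        J = J.astype(np.float64)
        if w is None:
            w = np.ones_like(I)            # weighting function w(x) = 1
        gy, gx = np.gradient((I + J) / 2)  # g = spatial gradient of (I + J)/2
        Z = np.array([[np.sum(w * gx * gx), np.sum(w * gx * gy)],
                      [np.sum(w * gx * gy), np.sum(w * gy * gy)]])
        e = 2 * np.array([np.sum(w * (I - J) * gx),
                          np.sum(w * (I - J) * gy)])
        return np.linalg.solve(Z, e)       # displacement d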

On the other hand, for a completely unorganized set of images, the tracker becomes invalid. Another popular algorithm in the computer vision area is the scale-invariant feature transform (SIFT) [52]. It is effective for feature detection and matching across a wide class of image transformations, including rotations, scale changes, and changes in brightness or contrast, and for recognizing panoramas [53]. Figures 4 and 5 show examples of the feature points output by SIFT, with example images from http://www.cs.ubc.ca/~lowe/keypoints/.
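For unordered image pairs, a minimal sketch of SIFT detection and matching with OpenCV might look like the following; the image filenames are placeholders, and Lowe's ratio test is used to discard ambiguous matches, which matters for the duplicate-structure problems mentioned above.

    import cv2

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match each descriptor to its two nearest neighbors in the other image.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)

    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])  # Lowe's ratio test rejects ambiguous matches
    print(len(good), "putative correspondences")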

4. Structure and Motion Estimation

Assume that we have obtained a set of correspondences between images of a video sequence; we then use this set to reconstruct the 3D position of each corresponding point and to recover the motion of the camera. This task is called structure from motion. The problem has been an active research topic in computer vision since the Longuet-Higgins eight-point algorithm [54], which focused on reconstructing geometry from two views. In the literature [2], several different approaches to the structure-from-motion problem are given.
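As a concrete two-view sketch in the spirit of the eight-point formulation, OpenCV can estimate the essential matrix from calibrated correspondences, recover the relative pose, and triangulate the points; here pts1 and pts2 (matched pixel coordinates) and the intrinsic matrix K are assumed to be given by earlier stages.

    import cv2
    import numpy as np

    # pts1, pts2: N x 2 float arrays of matched pixels (assumed given);
    # K: 3 x 3 intrinsic matrix (assumed known).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # relative camera motion

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4 x N
    X = (X_h[:3] / X_h[3]).T                             # Euclidean 3D points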

4.1. Factorization

There is a popular factorization algorithm for image streams under orthography, using many images and tracking many feature points to obtain highly redundant feature position information; it was first developed by Tomasi and Kanade [55] in the 1990s. The main idea of this algorithm is to factorize the tracking matrix into structure and motion matrices simultaneously via the singular value decomposition (SVD) with a low-rank approximation, taking advantage of the linear algebraic properties of orthographic projection.

However, an orthographic formulation limits the range of motions the method can accommodate. Paraperspective projection is a projection model that closely approximates perspective projection by modeling several effects not modeled under orthographic projection, while retaining linear algebraic properties [56, 57]. Poelman and Kanade [56] have developed a paraperspective factorization method that can be applied to a much wider range of motion scenarios, including image sequences containing motion toward the camera and aerial image sequences of terrain taken from a low-altitude airplane.

With the development of the factorization method, a factorization-based algorithm for multi-image projective structure and motion was developed by Sturm and Triggs [57]. This technique is a practical approach for the recovery of scaled feature points, using the fundamental matrix and epipoles estimated from the image sequences.

Because matrix factorization is a key component of solving several computer vision problems, Tardif et al. have proposed batch algorithms for matrix factorization [58] based on closure and basis constraints, which handle the missing or erroneous data that often arise in structure from motion.

In the mathematical expression of the factorization algorithm, assume that the tracked points are \((u_{fp}, v_{fp})\), for frames \(f = 1, \dots, F\) and points \(p = 1, \dots, P\). The algorithm defines the \(2F \times P\) measurement matrix \(W = \begin{bmatrix} U \\ V \end{bmatrix}\), where \(U = [u_{fp}]\) and \(V = [v_{fp}]\). The rows of \(U\) and \(V\) are then registered by subtracting from each entry the mean of the entries in that row:

\[ \tilde{u}_{fp} = u_{fp} - \frac{1}{P} \sum_{q=1}^{P} u_{fq}, \qquad \tilde{v}_{fp} = v_{fp} - \frac{1}{P} \sum_{q=1}^{P} v_{fq}, \]

yielding the registered measurement matrix \(\widetilde{W}\). The goal of the Tomasi-Kanade algorithm [55] is to factorize \(\widetilde{W}\) into two matrices as follows:

\[ \widetilde{W} = R S, \]

where \(R\), named the motion matrix, is a \(2F \times 3\) matrix that represents the camera rotation in each frame, and \(S\), named the structure matrix, is a \(3 \times P\) matrix that denotes the positions of the feature points in object space. So, in the absence of Gaussian noise, \(\operatorname{rank}(\widetilde{W}) \leq 3\).

Then we can compute the SVD of \(\widetilde{W}\) to obtain the factorization:

\[ \widetilde{W} = O_{1} \Sigma O_{2}^{T}, \]

where \(\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \dots)\) contains the singular values of \(\widetilde{W}\) in decreasing order. Keeping only the three largest singular values, with \(\Sigma' = \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3)\) and \(O_1'\), \(O_2'\) the corresponding columns of \(O_1\) and \(O_2\), we can take

\[ \hat{R} = O_{1}' \left( \Sigma' \right)^{1/2}, \qquad \hat{S} = \left( \Sigma' \right)^{1/2} O_{2}'^{T}. \]

The method can also obtain a full solution from a partially filled-in measurement matrix, which occurs when features appear and disappear in the video due to occlusions or tracking failures [55]. This method gives accurate results and does not introduce smoothing into structure and motion. Using the above method, the problem can be solved for video of general scenes such as buildings and sculptures (Figure 6).
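The core of the Tomasi-Kanade factorization above reduces to a few lines of NumPy, as in the sketch below; it omits the subsequent metric upgrade, which enforces orthonormality constraints on the rows of the motion matrix, and assumes a complete measurement matrix.

    import numpy as np

    def factorize(W):
        """Rank-3 factorization of a 2F x P measurement matrix W."""
        W_tilde = W - W.mean(axis=1, keepdims=True)  # register each row
        U, s, Vt = np.linalg.svd(W_tilde, full_matrices=False)
        S3 = np.diag(np.sqrt(s[:3]))   # square root of top singular values
        R_hat = U[:, :3] @ S3          # 2F x 3 affine motion matrix
        S_hat = S3 @ Vt[:3]            # 3 x P affine structure matrix
        return R_hat, S_hat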

4.2. Bundle Adjustment

Bundle adjustment is a significant component of most structure-from-motion systems. It is the joint nonlinear refinement of camera and point parameters, so it can consume a large amount of time for large problems. Unfortunately, the optimization underlying structure from motion involves a complex, nonlinear objective function with no closed-form solution, due to nonlinearities in perspective geometry. Most modern approaches use nonlinear least-squares algorithms [17] to minimize this objective function, a process known as bundle adjustment [53]; the basic mathematics of the bundle adjustment problem is well understood [59]. Generally speaking, bundle adjustment is a global algorithm, but it consumes much time and cannot run in real time. Mouragnon et al. [60] propose an approach for generic and real-time structure from motion using local bundle adjustment, which allows 3D points and camera poses to be refined simultaneously through the image sequence. Zhang et al. [61] apply bundle optimization to further improve consistent depth maps recovered from a video sequence.
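To make the objective concrete, the following SciPy sketch minimizes the total squared reprojection error over camera poses (a Rodrigues rotation vector plus translation per camera) and 3D points, assuming known intrinsics K; real systems exploit the sparsity of the Jacobian, which this illustration omits, and all variable names are illustrative.

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs):
        """Reprojection residuals; each camera is a 6-vector (rvec, t)."""
        cams = params[:n_cams * 6].reshape(n_cams, 6)
        pts3d = params[n_cams * 6:].reshape(n_pts, 3)
        res = []
        for c, p, uv in zip(cam_idx, pt_idx, obs):
            R, _ = cv2.Rodrigues(cams[c, :3])        # rotation of camera c
            x = K @ (R @ pts3d[p] + cams[c, 3:])     # project point p
            res.append(x[:2] / x[2] - uv)            # pixel reprojection error
        return np.concatenate(res)

    # x0 stacks initial camera parameters and 3D points from earlier stages:
    # sol = least_squares(residuals, x0, method="trf",
    #                     args=(n_cams, n_pts, K, cam_idx, pt_idx, obs))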

4.3. Self-Calibration

To upgrade a projective or affine reconstruction to a metric one (i.e., determined up to an arbitrary Euclidean transformation and a scale factor), calibration techniques such as those described in [2, 6, 9, 15, 62] can be applied. This can be done by imposing constraints on the intrinsic camera parameters, an approach called self-calibration that has received a lot of attention in recent years. Through self-calibration, the ambiguity of the reconstruction is reduced from projective to metric [6]. Most self-calibration algorithms are concerned with unknown but constant intrinsic camera parameters [2, 4, 12]. The problem of 3D Euclidean reconstruction of structured scenes from uncalibrated images, based on the properties of vanishing points, is addressed in [63]; the authors propose a multistage linear approach, with a structure-from-motion technique based on point and vanishing-point matches in images [64].
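As one concrete instance of calibration from scene constraints, the following sketch recovers K from three mutually orthogonal vanishing points, under the illustrative assumptions of zero skew and square pixels; the orthogonality constraint \(\mathbf{v}_i^T \omega \mathbf{v}_j = 0\) on the image of the absolute conic \(\omega = (K K^T)^{-1}\) is then linear in the four unknown entries of \(\omega\).

    import numpy as np
    from itertools import combinations

    def calibrate_from_vps(v1, v2, v3):
        """K from three orthogonal vanishing points (homogeneous pixels)."""
        A = []
        for vi, vj in combinations([v1, v2, v3], 2):
            # vi^T omega vj = 0, with omega = [[w1,0,w2],[0,w1,w3],[w2,w3,w4]].
            A.append([vi[0]*vj[0] + vi[1]*vj[1],
                      vi[0]*vj[2] + vi[2]*vj[0],
                      vi[1]*vj[2] + vi[2]*vj[1],
                      vi[2]*vj[2]])
        _, _, Vt = np.linalg.svd(np.array(A))
        w1, w2, w3, w4 = Vt[-1]                      # null vector of A
        u0, v0 = -w2 / w1, -w3 / w1                  # principal point
        f = np.sqrt(w4 / w1 - u0**2 - v0**2)         # focal length
        return np.array([[f, 0, u0], [0, f, v0], [0, 0, 1]])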

4.4. Correlative Improvement

Traditional SfM algorithms using just two images often produce inaccurate 3D reconstructions, mainly due to incorrect estimation of the camera's motion. Thomas and Oliensis [65] present a practical algorithm that can deal with noise in multiframe structure from motion; it is a new incremental algorithm for reconstructing structure from multi-image sequences that estimates and corrects for the error in the computed camera motion. Research on structure from motion has shown great progress over several decades, but the algorithms still exhibit shortcomings, and their results remain unsatisfactory in many situations. Many researchers have therefore presented improvements, such as the dual computation of projective shape and camera positions from multiple images [66].

In contrast to incremental algorithms that solve progressively larger bundle adjustment problems, Crandall et al. present an alternative formulation for structure from motion based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement [67].

For time efficiency, Havlena et al. present a method for efficient structure from motion by graph optimization [68]. Gherardi et al. improve efficiency with hierarchical structure and motion [69].

For duplicate or similar structure, Roberts et al. couple an expectation-maximization (EM) algorithm with structure from motion for scenes with large duplicate structures [70]. A hierarchical framework resamples reconstructed 3D points to reduce the time and memory cost of very-large-scale structure from motion [71]. Savarese and Bao propose a formulation called semantic structure from motion (SSFM), which takes advantage of both the semantic and geometric properties associated with objects in the scene [72].

5. Relevant Algorithms

5.1. Features

(1) Line
For the problem of camera motion and 3D structure reconstruction from line correspondences across multiple views, there is a triangulation algorithm that outperforms standard linear and bias-corrected quasi-linear algorithms; bundle adjustment using an orthonormal line representation yields results similar to the standard maximum likelihood trifocal tensor algorithm, while being usable for any number of views [73]. Spetsakis and Aloimonos [74] present a system for structure from motion using line correspondences. The recovery algorithm of [75] is formulated in terms of an objective function that measures the total squared distance in the image plane between the observed edge segments and the projections of the reconstructed lines. A linear method has been developed for reconstruction using lines and points simultaneously [76].

(2) Curve
Tubic et al. [77] present an approach for reconstructing a surface from a set of arbitrary, unorganized, intersecting curves. There is an approach for reconstructing open surfaces from image data [78]. Kaminski and Shashua [79] introduce a number of new results on multiview geometry of general algebraic curves, starting with the recovery of camera geometry from matching curves. Berthilsson et al. present a method for the reconstruction of general curves using factorization and bundle adjustment [80].

(3) Silhouette
Liang and Wong [81] develop an approach that produces relatively complete 3D models, similar to volumetric approaches, with the topology conforming to what is observed from the silhouettes; in addition, the method neither assumes nor depends on the spatial order of viewpoints. Hartley and Kahl give critical configurations for projective reconstruction from multiple views in [82]. Joshi et al. design an algorithm for structure and motion estimation from dynamic silhouettes under perspective projection [83]. Liu et al. present a shape-from-silhouette method based on an adaptive dandelion model [84]. Yemez and Wetherilt develop a volumetric fusion technique for surface reconstruction from silhouettes and range data [85].

5.2. Other Aspects

(1) Multiview Stereo
Multiview stereo (MVS) techniques take as input a set of images with known camera parameters (i.e., position and orientation of the camera, focal length, and image distortion parameters) [38, 53, 86]. We refer the reader to [87] for a classification and evaluation of recent MVS techniques.

(2) Clustering
There are clustering techniques to partition the image set into groups of related images, based on the visual structure represented in the image connectivity graph for the collection [88, 89].

6. Existing Problems and Future Trends

While structure-from-motion algorithms have been developed for 3D reconstruction in many applications, some problems in reconstructing geometry from video sequences remain open in computer vision and photography. Many researchers are still making efforts to improve the methods, mainly in the following aspects.

6.1. Feature Tracking and Matching

Zhang et al. give a robust and efficient algorithm for nonconsecutive feature tracking for structure from motion via two main steps, namely, consecutive point tracking and nonconsecutive track matching [90]. They improve the KLT tracker with invariant feature points and a two-pass matching strategy to significantly extend track lifetimes and to reduce the sensitivity of feature points to scale variation, duplicate and similar structure, noise, and image distortion. The results can be found at http://www.cad.zju.edu.cn/home/gfzhang/.

6.2. Active Vision

One method is based on structure from controlled motion, which constrains camera motions to obtain an optimal estimation of the 3D structure of a geometric primitive [91]. Stereo geometry can be acquired from 3D egomotion streams [92], and wide-area egomotion can be estimated from known 3D structure [93]. Work on estimating the surface reflectance properties of a complex scene under captured natural illumination can be found in [94]. Other algorithms attempt to mimic the selective attention of human eyes.

6.3. Unorganized Images

To solve the resulting large-scale nonlinear optimization, the scene is reconstructed incrementally, starting from a single pair of images, then adding new images and points in rounds, and running a global nonlinear optimization after each round [53]. Structure from motion can thus be applied to photos found in the wild, reconstructing scenes from several large Internet photo collections [14]. The large redundancy in online photo collections means that a small fraction of the images may be sufficient to produce high-quality reconstructions; investigations have begun to explore this by extracting image skeletons from large collections [95]. Perhaps the most important challenge is to find ways to effectively parallelize all the steps of the reconstruction pipeline to take advantage of multicore architectures and cloud computing [37, 38, 53, 89].
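The incremental strategy can be summarized by the following high-level sketch; every helper function here is a hypothetical placeholder for the corresponding pipeline stage described above, not a real library call.

    def incremental_sfm(images, matches):
        # Seed with a wide-baseline, well-matched image pair.
        i, j = best_initial_pair(images, matches)
        reconstruction = initialize_two_view(i, j, matches)
        remaining = set(images) - {i, j}
        while remaining:
            batch = select_next_images(remaining, reconstruction)  # one round
            for img in batch:
                register_image(img, reconstruction)     # PnP against 3D points
                triangulate_new_points(img, reconstruction)
                remaining.discard(img)
            bundle_adjust(reconstruction)               # global refinement
        return reconstruction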

7. Conclusion

This paper has summarized the recent development of structure-from-motion algorithms that can metrically reconstruct complex scenes and objects. Wide applications in the computer vision area have been addressed. Typical contributions are introduced for feature point detection, tracking, matching, factorization, bundle adjustment, multiview stereo, self-calibration, line detection and matching, modeling, and so forth. Representative works are listed to give readers a general overview of the state of the art. Finally, a summary of existing problems and future trends in structure modeling is given.

Acknowledgments

This work was supported by the National Natural Science Foundation of China and Microsoft Research Asia (NSFC nos. 61173096, 60870002, and 60802087), Zhejiang Provincial S&T Department (2010R10006, 2010C33095), and Zhejiang Provincial Natural Science Foundation (R1110679).