Mathematical Problems in Engineering
Volume 2018, Article ID 4908273, 10 pages
https://doi.org/10.1155/2018/4908273
Research Article

Scene Flow Estimation Based on Adaptive Anisotropic Total Variation Flow-Driven Method

School of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China

Correspondence should be addressed to Xuezhi Xiang; xiangxuezhi@hrbeu.edu.cn

Received 5 December 2017; Accepted 27 February 2018; Published 20 May 2018

Academic Editor: Francesco Marotti de Sciarra

Copyright © 2018 Xuezhi Xiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Scene flow estimation based on disparity and optical flow is a challenging task. We present a novel adaptive anisotropic total variation flow-driven method for scene flow estimation from a calibrated stereo image sequence. The basic idea is that diffusion of the flow field proceeds at different rates in different directions, which allows the total variation and the anisotropic diffusion to be computed automatically. Brightness constancy and gradient constancy constraints are employed to establish the data term, and an adaptive anisotropic flow-driven penalty is employed to establish the smoothness term. As in optical flow estimation, large displacements are also a problem in scene flow estimation; we address this by introducing a hierarchical computing optimization. The proposed method is verified on a synthetic dataset and on real scene image sequences, and the experimental results show the effectiveness of the proposed algorithm.

1. Introduction

Scene flow was first introduced by Vedula et al. [1] in 1999. It is defined as a dense 3D motion field that describes the motion of points on surfaces in the scene, and it can be estimated from a calibrated stereo image sequence. Such a sequence can be obtained with a binocular stereo vision sensor, such as the BumblebeeX3. Optical flow can be seen as the projection of a moving object onto the retina or a camera's sensor, so scene flow can be regarded as a 3D extension of optical flow. A reliable 3D motion field is useful in many applications: Menze and Geiger [2] focus on autonomous driving with 3D scene flow estimation, Lenz et al. [3] used scene flow for moving object detection in urban environments, and Waizenegger et al. [4] presented a real-time approach for 3D structure estimation of human bodies. In general, scene flow estimation techniques can be divided into two categories. One is variational minimization of the scene flow based on the optical flow and the disparity from a stereo image sequence [5]. The other is 3D point cloud parametrization based on the 3D structure, which allows the desired unknowns to be estimated directly [6]. Under these two frameworks, much research on optical flow estimation can be carried over to scene flow estimation.

In the estimation of optical flow, many regularization approaches have been derived to solve this ill-posed problem. These methods minimize an energy function of the form

$E(u, v) = \int_{\Omega} \Psi_D\big(E_{\text{data}}(u, v)\big) + \alpha\, \Psi_S\big(|\nabla u|^2 + |\nabla v|^2\big)\, d\mathbf{x},$

where the second term is a regularization term which penalizes variations of the flow components $u$ and $v$, and $\Omega$ refers to the domain of the image plane. $\Psi_D$ and $\Psi_S$ are convex penalty functions used to guarantee convexity and differentiability of the energy function.
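As an illustration, a discrete version of such an energy can be evaluated in a few lines of NumPy. The nearest-neighbour warp, the quadratic penalties, and the weight `alpha` are simplifications chosen for this sketch, not the paper's actual discretization.

```python
import numpy as np

def energy(I0, I1, u, v, alpha=0.1):
    """Evaluate a discretized optical-flow energy: data term + smoothness term."""
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Warp the target coordinates by the (rounded) flow, clamped to the border.
    xw = np.clip(xs + np.round(u).astype(int), 0, w - 1)
    yw = np.clip(ys + np.round(v).astype(int), 0, h - 1)
    data = np.sum((I1[yw, xw] - I0) ** 2)            # brightness constancy
    gu = np.sum(np.diff(u, axis=0) ** 2) + np.sum(np.diff(u, axis=1) ** 2)
    gv = np.sum(np.diff(v, axis=0) ** 2) + np.sum(np.diff(v, axis=1) ** 2)
    return data + alpha * (gu + gv)                  # quadratic smoothness penalty

# A bright column shifted right by one pixel: the true flow (u=1, v=0)
# should give a lower energy than the zero flow.
I0 = np.zeros((8, 8)); I0[:, 3] = 1.0
I1 = np.zeros((8, 8)); I1[:, 4] = 1.0
E_true = energy(I0, I1, np.ones((8, 8)), np.zeros((8, 8)))
E_zero = energy(I0, I1, np.zeros((8, 8)), np.zeros((8, 8)))
```

Minimizing this energy over all candidate flow fields recovers the shift, which is exactly what the variational methods below do in continuous form.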

Several families of smoothness regularization can be characterized as follows. Homogeneous regularization is named by Bruhn et al. [7], because the approach is the linear diffusion equation defined by Horn and Schunck [8]. Inspired by image filtering, Alvarez and Esclarin [9] use the squared magnitude of the image gradient to control the smoothness. Perona and Malik [10] propose an anisotropic smoothness method, which preserves flow edges where the image gradient is large and reduces the smoothing strength in consistent areas. Nagel and Enkelmann [11] only smooth the flow along image edges and reduce the smoothing strength in the direction perpendicular to an image edge; in other words, this is an "oriented smoothness" method driven by the image gradient. Weickert and Schnörr [12] propose a systematic classification of rotation-invariant convex regularizers, which provides a unifying framework for image-driven and flow-driven methods, as well as isotropic and anisotropic regularizers.
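The Perona-Malik style of anisotropic diffusion referenced above can be sketched as follows. The exponential edge-stopping function and the parameters `kappa` and `dt` are common illustrative choices, not values from the paper.

```python
import numpy as np

def perona_malik_step(I, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik diffusion step: large gradients diffuse less."""
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    deltas = []
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = np.roll(I, shift, axis=axis) - I  # difference toward one neighbour
        # zero the wrapped-around border difference introduced by np.roll
        idx = [slice(None)] * 2
        idx[axis] = 0 if shift == 1 else -1
        d[tuple(idx)] = 0.0
        deltas.append(g(np.abs(d)) * d)
    return I + dt * sum(deltas)
```

Because the neighbour fluxes are antisymmetric, the total intensity is conserved, and flat regions are left unchanged while sharp edges are smoothed only weakly.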

Inspired by these optical flow regularizers, we propose a scene flow estimation method which takes advantage of both anisotropic total variation and an adaptive regularizer. Unlike structure-tensor-based approaches, our anisotropic diffusion applies a robust penalty function to each directional component of the gradient. A weight function used by Blomgren and Chan [13] for image restoration is introduced to reduce the diffusion of the flow field in large-gradient areas.

2. Related Work

A binocular camera can be used to build a stereo vision system that provides stereo image sequences. Li and Sclaroff [14] first estimated scene flow in a binocular setting as a joint framework combining optical flow estimation and stereo matching. Huguet and Devernay [5] proposed a common variational framework for scene flow estimation by coupling optical flow and stereo matching; a rotating sphere dataset with ground truth was released publicly with that paper, which allows the accuracy of the estimation to be evaluated quantitatively. Later, the KITTI benchmark was published for autonomous driving [2], and scene flow estimation can also be evaluated on this dataset: 400 dynamic scenes were annotated with detailed 3D CAD models, revealing great challenges for scene flow algorithms. Wedel et al. [15] modified Huguet and Devernay's method [5] by decoupling the stereo and motion estimation: optical flow consistency in both views and stereo consistency over time form the data term, while the smoothness of the disparity change is separated from the regularization term. With GPU acceleration, they achieved promising accuracy at a frame rate of 5 fps. Basha et al. [6] proposed a 3D point cloud parametrization which enforces multiview geometric consistency based on brightness constancy and a piecewise smoothness assumption. Menze and Geiger [2] introduced a novel model and dataset for scene flow estimation, which represents each element in the scene by its rigid motion parameters, where each element is obtained by superpixel segmentation. Vogel et al. [16] proposed a piecewise rigid model, which combines a rigidly-moving-3D-planes constraint with segmentation regularization and yields better results.

3. Proposed Method of Scene Flow Estimation

3.1. Variational Scene Flow Framework

Our goal is to unify the data term and the smooth term in a variational framework and to obtain the scene flow and the 3D structure. Our method follows the variational formulation proposed by Huguet and Devernay [5]; it has the common form

$E = E_{\text{data}} + E_{\text{smooth}}.$

3.2. Data Term

We first obtain a set of stereo image sequences whose disparity lies only in the horizontal direction. The two image sequences between two time steps are shown in Figure 1. According to the time and space constraints from the 4 images, the data term can be divided into 4 parts:

$E_{\text{data}} = \int_{\Omega} \beta_{fl}\, E_{fl} + \beta_{fr}\, E_{fr} + \beta_{d}\, E_{d} + \beta_{d'}\, E_{d'}\, d\mathbf{x},$

where $\Omega$ refers to the domain of the image plane and each $\beta$ is an occlusion factor, which equals 0 when the pixel is occluded and equals 1 otherwise. Each part contains a brightness constancy assumption and a gradient constancy assumption to overcome nonlinear illumination changes.

Figure 1: Data term in the two-view case. Motion of a single pixel point in two consecutive frames of the image sequences. Brightness and gradient can be considered equal approximately in every image.

In other words, the brightness and the gradient of a pixel are nearly constant over a short time and across views. Writing the left and right images at times $t$ and $t+1$ as $I_l^t$, $I_r^t$, $I_l^{t+1}$, and $I_r^{t+1}$, and the scene flow unknowns as the optical flow $(u, v)$ and the disparities $d$ and $d'$, the brightness and gradient constancy assumptions are given by

$I_l^t(x, y) = I_l^{t+1}(x+u, y+v), \qquad I_l^t(x, y) = I_r^t(x+d, y),$

$I_l^{t+1}(x+u, y+v) = I_r^{t+1}(x+u+d', y+v),$

together with the same relations for the gradient, where $\nabla = (\partial_x, \partial_y)^\top$ is the gradient operator. We abbreviate the corresponding residuals as $E_{fl}$, $E_{fr}$, $E_d$, and $E_{d'}$ in the 4 terms below.

The 4 components of the data term can be written as

$E_{fl} = \Psi\big((I_l^{t+1}(x+u, y+v) - I_l^t(x, y))^2 + \gamma\,|\nabla I_l^{t+1}(x+u, y+v) - \nabla I_l^t(x, y)|^2\big),$

$E_{fr} = \Psi\big((I_r^{t+1}(x+u+d', y+v) - I_r^t(x+d, y))^2 + \gamma\,|\nabla I_r^{t+1}(x+u+d', y+v) - \nabla I_r^t(x+d, y)|^2\big),$

$E_{d} = \Psi\big((I_r^{t}(x+d, y) - I_l^t(x, y))^2 + \gamma\,|\nabla I_r^{t}(x+d, y) - \nabla I_l^t(x, y)|^2\big),$

$E_{d'} = \Psi\big((I_r^{t+1}(x+u+d', y+v) - I_l^{t+1}(x+u, y+v))^2 + \gamma\,|\nabla I_r^{t+1}(x+u+d', y+v) - \nabla I_l^{t+1}(x+u, y+v)|^2\big),$

where $E_{fl}$ and $E_{fr}$ are the left and right data term energies of the optical flow, respectively; $E_d$ is the disparity term that represents the stereo matching at time $t$, and $E_{d'}$ is the disparity term at time $t+1$. The penalty function $\Psi$ is introduced to guarantee convexity and differentiability of the whole data term energy and, at the same time, to suppress outliers, i.e., pixels affected by noise, illumination change, and occlusion. The gradient weight $\gamma$ can be determined according to the intensity of illumination changes, or simply from experience.
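A discretized sketch of this 4-part data term follows, using brightness constancy only and a Charbonnier-type penalty. The nearest-neighbour sampling, the sign convention for the disparities, and the helper names are assumptions of this sketch.

```python
import numpy as np

def charbonnier(x2, eps=1e-3):
    """Robust convex penalty Psi(s^2) = sqrt(s^2 + eps^2)."""
    return np.sqrt(x2 + eps ** 2)

def sample(I, x, y):
    """Nearest-neighbour lookup with border clamping (bilinear in practice)."""
    h, w = I.shape
    xi = np.clip(np.round(x).astype(int), 0, w - 1)
    yi = np.clip(np.round(y).astype(int), 0, h - 1)
    return I[yi, xi]

def data_term(Il0, Ir0, Il1, Ir1, u, v, d, d1):
    """Sum the 4 penalized residuals E_fl, E_fr, E_d, E_d' over the image."""
    h, w = Il0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    E_fl = charbonnier((sample(Il1, xs + u, ys + v) - Il0) ** 2)
    E_fr = charbonnier((sample(Ir1, xs + u + d1, ys + v)
                        - sample(Ir0, xs + d, ys)) ** 2)
    E_d  = charbonnier((sample(Ir0, xs + d, ys) - Il0) ** 2)
    E_d1 = charbonnier((sample(Ir1, xs + u + d1, ys + v)
                        - sample(Il1, xs + u, ys + v)) ** 2)
    return (E_fl + E_fr + E_d + E_d1).sum()
```

With identical images and zero flow and disparity, every residual vanishes and the value reduces to the small Charbonnier offset per pixel and term, which is the minimum of the joint data energy.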

It should be noted that an error in any one of the terms $E_{fl}$, $E_{fr}$, $E_d$, and $E_{d'}$ would propagate to the others. We therefore minimize the 4 data terms jointly to prevent such error propagation.

3.3. Smooth Term Based on Anisotropic Total Variation

Huguet and Devernay [5] used flow-driven regularization, with a robust convex function $\Psi$ that keeps the smooth term convex and differentiable:

$E_{\text{smooth}} = \int_{\Omega} \Psi\big(|\nabla u|^2 + |\nabla v|^2 + |\nabla d|^2 + |\nabla d'|^2\big)\, d\mathbf{x}.$

Here $\Psi$ is used to suppress outliers and preserve the discontinuities of the flow field. The magnitude of the gradient is

$|\nabla u| = \sqrt{u_x^2 + u_y^2}.$

We derive its scene flow form by minimizing the following problem; to simplify the notation, we only consider the 2D case:

$\min_u \int_{\Omega} \sqrt{u_x^2 + u_y^2}\, d\mathbf{x}.$

It corresponds to the diffusion equation

$\frac{\partial u}{\partial t} = \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right).$

In the $x$ and $y$ directions the weights are the same and are related to the sum of the squares of the gradient components. We propose a new scheme in which the regularization includes information about the direction of the flow; we define it as the anisotropic total variation smoothness term. In the 2D case,

$\int_{\Omega} |u_x| + |u_y|\, d\mathbf{x}.$

For outliers, the robustness function $\Psi$ should still be applied. Here we show that this scheme is anisotropic. Similarly, in the 2D case we define the following equation:

$\int_{\Omega} \Psi(u_x^2) + \Psi(u_y^2)\, d\mathbf{x}.$

Minimizing it, we obtain

$\frac{\partial u}{\partial t} = \partial_x\big(\Psi'(u_x^2)\, u_x\big) + \partial_y\big(\Psi'(u_y^2)\, u_y\big).$

Obviously, each weight depends only on the gradient component in its own direction, which can be regarded as anisotropic diffusion of the flow field. In this way we can control the rate of diffusion according to the gradient of the flow field, so that sharp edges are preserved in nonflat areas, while in flat regions the smoothing strength is unchanged. A new smooth term can then be defined as

$E_{\text{smooth}} = \int_{\Omega} \lambda \sum_{w \in \{u, v, d'\}} \big(\Psi(w_x^2) + \Psi(w_y^2)\big) + \mu\,\big(\Psi(d_x^2) + \Psi(d_y^2)\big)\, d\mathbf{x},$

where $\lambda$ and $\mu$ are empirical parameters.
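One explicit diffusion step of this anisotropic scheme can be sketched as follows, using the common Charbonnier choice $\Psi(s^2) = \sqrt{s^2 + \varepsilon^2}$, so that $\Psi'(s^2) = 1/(2\sqrt{s^2+\varepsilon^2})$; the time step and $\varepsilon$ are illustrative values.

```python
import numpy as np

def aniso_tv_step(u, dt=0.1, eps=1e-3):
    """One explicit step of du/dt = d/dx(Psi'(u_x^2) u_x) + d/dy(Psi'(u_y^2) u_y).

    Each axis gets its own diffusivity, computed only from the gradient
    component along that axis -- this is the anisotropy of the scheme.
    """
    out = u.astype(float).copy()
    for axis in (0, 1):
        g = np.diff(u, axis=axis)                       # forward difference u_x (or u_y)
        flux = g / (2.0 * np.sqrt(g ** 2 + eps ** 2))   # Psi'(g^2) * g
        div = np.zeros_like(out)
        # backward difference of the flux = discrete divergence along this axis
        if axis == 0:
            div[:-1, :] += flux
            div[1:, :] -= flux
        else:
            div[:, :-1] += flux
            div[:, 1:] -= flux
        out += dt * div
    return out
```

The scheme is conservative (the mean of the field is preserved) and leaves flat regions untouched, matching the behaviour described above.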

3.4. Smooth Term Based on Adaptive Anisotropic Total Variation Flow-Driven Method

A major deficiency of total variation, as Rudin et al. [17] pointed out, is staircasing in the steady-state solution. Marquina and Osher [18] proposed a modified method, but it only relieves this phenomenon. Unlike total variation, the $L^2$ norm has no such problem. Blomgren and Chan [13] used the following norm in image restoration:

$\int_{\Omega} |\nabla u|^{p(|\nabla u|)}\, d\mathbf{x},$

where the function $p$ monotonically decreases and meets the following requirements. At edge positions, we have

$\lim_{|\nabla u| \to \infty} p(|\nabla u|) = 1,$

and in flat regions, we have

$\lim_{|\nabla u| \to 0} p(|\nabla u|) = 2.$

In other words, this norm can switch automatically between $L^1$ and $L^2$ according to the gradient magnitude. To achieve this, a rational form can be chosen, for instance

$p(s) = 1 + \frac{1}{1 + k s^2},$

with a constant $k > 0$. The Euler-Lagrange equation of this norm has the form

$\operatorname{div}\!\big(p(|\nabla u|)\,|\nabla u|^{p(|\nabla u|) - 2}\,\nabla u\big) = 0.$

Defining a function $g$ to represent the factor in front of $\nabla u$,

$g(|\nabla u|) = p(|\nabla u|)\,|\nabla u|^{p(|\nabla u|) - 2},$

the equation simplifies to

$\operatorname{div}\!\big(g(|\nabla u|)\,\nabla u\big) = 0.$

In smooth regions of the flow this regularization behaves like linear smoothing, while close to flow edges it approaches the total variation model. Combining the robust function $\Psi$ with the anisotropic scheme of Section 3.3, our scene flow smooth term applies the adaptive weight $g$ to each directional gradient component of $u$, $v$, $d$, and $d'$.
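A rational exponent with the required limits can be sketched as below; the particular form and the constant `k` are hypothetical choices of this sketch, since the paper's exact expression is not reproduced in the extracted text.

```python
import numpy as np

def p_exponent(grad_mag, k=10.0):
    """Adaptive norm exponent (hypothetical rational form):
    p(0) = 2 -> L2 (linear) smoothing in flat regions,
    p -> 1 as |grad| -> inf -> TV behaviour at edges.
    Monotonically decreasing in the gradient magnitude."""
    return 1.0 + 1.0 / (1.0 + k * grad_mag ** 2)
```

Any monotone decreasing function with these two limits would serve; the rational form is cheap to evaluate and differentiable, which matters when it appears inside the Euler-Lagrange equations.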

3.5. Optimization
3.5.1. Euler-Lagrange Equations

Before minimizing the energy function, we introduce the following abbreviations for the left image (similar abbreviations apply to the right image):

$I_x = \partial_x I_l^{t+1}(x+u, y+v), \qquad I_y = \partial_y I_l^{t+1}(x+u, y+v), \qquad I_z = I_l^{t+1}(x+u, y+v) - I_l^t(x, y),$

with analogous second-order abbreviations for the gradient constancy terms. We also use abbreviations for the partial derivatives of the data term. It should be noted that the regularization term we use is anisotropic, so the Euler-Lagrange equations are not the same as in the isotropic case. Considering the anisotropic total variation of an individual variable,

$\int_{\Omega} \Psi(u_x^2) + \Psi(u_y^2)\, d\mathbf{x},$

we get the Euler equation

$\partial_x\big(\Psi'(u_x^2)\, u_x\big) + \partial_y\big(\Psi'(u_y^2)\, u_y\big) = 0.$

Defining the function

$g(s^2) = \Psi'(s^2),$

the equation simplifies to

$\partial_x\big(g(u_x^2)\, u_x\big) + \partial_y\big(g(u_y^2)\, u_y\big) = 0.$

According to the variational principle, we minimize the full energy function $E = E_{\text{data}} + E_{\text{smooth}}$, which yields 4 Euler equations. Setting the partial derivative of $E$ with respect to $u$ equal to 0 gives

$\frac{\partial E_{\text{data}}}{\partial u} - \alpha\Big[\partial_x\big(g(u_x^2)\, u_x\big) + \partial_y\big(g(u_y^2)\, u_y\big)\Big] = 0.$

The partial derivatives of $E$ with respect to $v$, $d$, and $d'$ produce similar equations.

3.5.2. Multiresolution Strategy

In order to handle large displacements in the scene and to avoid local minima, we introduce a multiresolution method. Following Brox et al. [19], the sampling factor is set to 0.9 to avoid a large span between adjacent layers. At the same time, scene flow estimation poses a particular problem: if the computation starts from the lowest-resolution layer, false disparities may be passed up to the high-resolution layers. For this reason, we start computing the scene flow from a middle-resolution layer. A schematic diagram of the pyramid hierarchy is given in Figure 2.
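The level sizes of such a pyramid with sampling factor 0.9 can be sketched as follows; the minimum level size and the choice of the middle starting level are illustrative assumptions of this sketch.

```python
import numpy as np

def pyramid_shapes(h, w, factor=0.9, min_size=16):
    """Compute the per-level image sizes of a multiresolution pyramid."""
    shapes = [(h, w)]
    while True:
        h2, w2 = int(round(h * factor)), int(round(w * factor))
        if min(h2, w2) < min_size:
            break
        shapes.append((h2, w2))
        h, w = h2, w2
    return shapes

levels = pyramid_shapes(480, 640)
# As described in the text, computation starts at a middle level rather than
# the coarsest one, so that false disparities estimated at very coarse levels
# are not propagated upward.
start_level = len(levels) // 2
```

Because 0.9 is close to 1, adjacent levels differ only slightly in resolution, which keeps the per-level flow increments small.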

Figure 2: Multiresolution strategy for scene flow computation. The left side pyramid is the image at time $t$. The right side pyramid is the image at time $t+1$.

In each level of the pyramid we use a warping method. At each level, the second frame is resampled according to the sampling factor, and a motion-compensated (warped) version of it is obtained using the flow estimated at the coarser level. We then only need to minimize the residual of the scene flow at each layer, which avoids the error caused by large displacements. The evaluation of the scene flow requires solving the Euler-Lagrange equations derived above. We use the nested fixed-point iteration scheme suggested by Brox et al. [19] and linearize the equations in each layer of the pyramid: the inner loop linearizes the brightness constancy assumption, and the outer loop linearizes the robust penalty function $\Psi$. We use SOR (successive over-relaxation) as the iterative solver. The full algorithm for scene flow estimation is given in the algorithm scheme (see Algorithm 1).
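A minimal SOR solver of the kind used for the linearized equations at each level might look like this; the relaxation factor and iteration count are illustrative, and the actual system matrices would come from the discretized Euler-Lagrange equations.

```python
import numpy as np

def sor(A, b, omega=1.5, iters=200):
    """Successive over-relaxation for A x = b (A symmetric positive definite).

    Each sweep updates x[i] in place, blending the Gauss-Seidel update with
    the previous value via the relaxation factor omega (0 < omega < 2).
    """
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```

For the sparse, diagonally dominant systems produced by the linearization, SOR with omega between 1 and 2 typically converges much faster than plain Gauss-Seidel.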

Algorithm 1: Scene flow estimation.

4. Experiments and Results

In our method, the scene flow is represented by the optical flow $(u, v)$ and the disparities $d$ and $d'$. We used datasets from the Middlebury website to evaluate the accuracy of scene flow estimation. The datasets are captured by 8 calibrated cameras in each scene. The images from cameras 2 and 6 are taken as the stereo pair at time $t$, while the images from cameras 4 and 8 are taken as the corresponding stereo pair at time $t+1$. We evaluate the accuracy by computing the RMS errors of the optical flow, the disparity map at time $t$, and the disparity map at time $t+1$, as well as the average angular error (AAE).
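The two error measures can be sketched as follows; extending the flow vectors by a third unit component in the AAE follows the usual optical flow evaluation convention.

```python
import numpy as np

def rms_error(flow_est, flow_gt):
    """Root-mean-square endpoint error of a 2-channel flow field (H, W, 2)."""
    diff = flow_est - flow_gt
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=-1)))

def aae(flow_est, flow_gt):
    """Average angular error in degrees between space-time vectors (u, v, 1)."""
    u1, v1 = flow_est[..., 0], flow_est[..., 1]
    u2, v2 = flow_gt[..., 0], flow_gt[..., 1]
    num = u1 * u2 + v1 * v2 + 1.0
    den = np.sqrt((u1 ** 2 + v1 ** 2 + 1.0) * (u2 ** 2 + v2 ** 2 + 1.0))
    # clip guards against values slightly outside [-1, 1] from round-off
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

The same RMS measure applies to the single-channel disparity maps by treating them as one-component fields.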

We compared our results with those of Huguet and Devernay [5] and Basha et al. [6] using Venus, Cones, and Teddy from the Middlebury datasets, as shown in Figures 3, 4, and 5. Table 1 summarizes the RMS errors of the optical flow, the disparity at time $t$, and the disparity at time $t+1$ to measure the deviation from the ground truth. AAE errors are computed to measure the deviation from the ground truth flow field.

Table 1: Comparison of scene flow on Middlebury data set.
Figure 3: Venus of the Middlebury stereo datasets. (a) and (b) are the left and right images at time $t$. (c) and (d) are the ground truth disparity maps. (e) and (f) are the left and right images at time $t+1$. (g) is the ground truth of the optical flow in the horizontal direction. (h) is the ground truth of the optical flow in the vertical direction.
Figure 4: Cones of the Middlebury stereo datasets. (a) and (b) are the left and right images at time $t$. (c) and (d) are the ground truth disparity maps. (e) and (f) are the left and right images at time $t+1$. (g) is the ground truth of the optical flow in the horizontal direction. (h) is the ground truth of the optical flow in the vertical direction.
Figure 5: Teddy of the Middlebury stereo datasets. (a) and (b) are the left and right images at time $t$. (c) and (d) are the ground truth disparity maps. (e) and (f) are the left and right images at time $t+1$. (g) is the ground truth of the optical flow in the horizontal direction. (h) is the ground truth of the optical flow in the vertical direction.

RMS errors of the optical flow (O.F.), the disparity at time $t$ (Disp. $t$), and the disparity at time $t+1$ (Disp. $t+1$), as well as AAE errors of the optical flow, are computed to evaluate the accuracy of our method.

Table 1 shows that our method is more accurate than Huguet and Devernay [5] in the RMS errors of the optical flow and the disparity. Compared with Basha et al. [6], the AAE errors of our method are lower. For the RMS error of the disparity at time $t+1$, our method outperforms both Huguet and Devernay's and Basha et al.'s methods. This benefits from the proposed smooth term based on the adaptive anisotropic total variation method.

Figure 6 shows the performance on the Venus dataset qualitatively. In terms of the U component, our method preserves both smoother regions and sharper boundaries compared with Huguet and Devernay [5].

Figure 6: Comparison of our U component and Huguet and Devernay’s [5]. (a) is the U component of Huguet and Devernay. (b) is our U component.

The Cones, Teddy, and Venus datasets contain complex edges. Traditional scene flow methods suffer from isotropic diffusion and achieve lower flow accuracy. The proposed adaptive anisotropic diffusion smooth term smooths the scene flow along the direction of motion edges and chooses between $L^1$ and $L^2$ regularization automatically, which yields high flow accuracy at the positions of edges while preserving motion boundaries.

In order to compare our method with other scene flow methods, we used a rotating sphere dataset which exhibits relatively complex motion: the two hemispheres of the sphere rotate in opposite directions.

Figure 7 shows the left and right views of the sphere. The 3D motion and disparity are also shown for comparison. Figure 8 shows results computed by our method. Table 2 summarizes the RMS errors of sphere sequence.

Table 2: 2D errors for rotating sphere sequence.
Figure 7: Rotating sphere whose two hemispheres rotate in opposite directions. (a) and (b) are the left and right images at time $t$. (e) and (f) are the left and right images at time $t+1$. (c), (d), (g), and (h) are the ground truth of $u$, $v$, $d$, and $d'$, respectively.
Figure 8: Rotating sphere whose two hemispheres rotate in opposite directions. (a) and (b) are $u$ and $v$ computed by our method. (c) and (d) are $d$ and $d'$ computed by our method.

Table 2 shows that the RMS error of our method in the optical flow is the same as that of Vogel et al. [16] and better than the others. For the RMS error of the disparity at time $t+1$, our method produces a preferable result.

The rotating sphere dataset contains motion in opposite directions, which is very challenging. The proposed method can smooth the flow field along motion edges while also reducing the rate of diffusion at the positions of motion edges. In this way, our method preserves motion edges better.

5. Conclusion

In this article, we proposed an adaptive anisotropic total variation flow-driven regularization method in the variational framework for scene flow estimation. According to the corresponding relationships of the stereo image sequence, the data term is divided into 4 parts, each satisfying the brightness and gradient constancy assumptions. The regularization term has the advantage of anisotropic smoothing, applying different weights in different directions. The $L^1$ and $L^2$ norms are chosen automatically according to the gradient of the flow field, which preserves the boundaries of the scene flow and reduces the staircase effect at the same time. By minimizing the energy function, we obtain a dense optical flow and disparity map. A pyramid optimization method and a warping technique are also employed to deal with large displacements.

In the experiments, the RMS errors of our method on the Middlebury datasets Cones and Teddy are better than those of Huguet and Devernay [5] and close to those of Basha et al. [6]. On the rotating sphere dataset, our method's result is better than those of Huguet and Devernay [5] and Wedel et al. [15] and has the same accuracy as that of Vogel et al. [16]. The experimental results show the effectiveness of our method.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by projects of National Natural Science Foundation of China (61401113) and Natural Science Foundation of Heilongjiang Province of China (LC201426).

References

  1. S. Vedula, S. Baker, P. Rander, R. Collins, and T. Kanade, “Three-dimensional scene flow,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 722–729, Corfu, Greece, September 1999.
  2. M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 3061–3070, Boston, Mass, USA, June 2015.
  3. P. Lenz, J. Ziegler, A. Geiger, and M. Roser, “Sparse scene flow segmentation for moving object detection in urban environments,” in Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV '11), pp. 926–932, Baden-Baden, Germany, June 2011.
  4. W. Waizenegger, I. Feldmann, O. Schreer, and P. Eisert, “Scene flow constrained multi-prior patch-sweeping for real-time upper body 3D reconstruction,” in Proceedings of the 20th IEEE International Conference on Image Processing (ICIP '13), pp. 2086–2090, Melbourne, Australia, September 2013.
  5. F. Huguet and F. Devernay, “A variational method for scene flow estimation from stereo sequences,” in Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, October 2007.
  6. T. Basha, Y. Moses, and N. Kiryati, “Multi-view scene flow estimation: a view centered variational approach,” International Journal of Computer Vision, vol. 101, no. 1, pp. 6–21, 2013.
  7. A. Bruhn, J. Weickert, T. Kohlberger, and C. Schnörr, “A multigrid platform for real-time motion computation with discontinuity-preserving variational methods,” International Journal of Computer Vision, vol. 70, no. 3, pp. 257–277, 2006.
  8. B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
  9. L. Alvarez and J. Esclarin, “A PDE model for computing the optical flow,” in Proceedings of the CEDYA XVI, pp. 1349–1356, 1999.
  10. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
  11. H.-H. Nagel and W. Enkelmann, “An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 5, pp. 565–593, 1986.
  12. J. Weickert and C. Schnörr, “A theoretical framework for convex regularizers in PDE-based computation of image motion,” International Journal of Computer Vision, vol. 45, no. 3, pp. 245–264, 2001.
  13. P. Blomgren and T. F. Chan, “Color TV: total variation methods for restoration of vector-valued images,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 304–309, 1998.
  14. R. Li and S. Sclaroff, “Multi-scale 3D scene flow from binocular stereo sequences,” in Proceedings of the IEEE Workshop on Motion and Video Computing (MOTION 2005), pp. 147–153, USA, January 2005.
  15. A. Wedel, C. Rabe, T. Vaudrey, T. Brox, U. Franke, and D. Cremers, “Efficient dense scene flow from sparse or dense stereo data,” in Computer Vision – ECCV 2008, vol. 5302 of Lecture Notes in Computer Science, pp. 739–751, 2008.
  16. C. Vogel, K. Schindler, and S. Roth, “Piecewise rigid scene flow,” in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1377–1384, Sydney, NSW, Australia, December 2013.
  17. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
  18. A. Marquina and S. Osher, “A new time dependent model based on level set motion for nonlinear deblurring and noise removal,” in Proceedings of the International Conference on Scale-Space Theories in Computer Vision, pp. 429–434, 1999.
  19. T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Computer Vision – ECCV 2004, vol. 3024 of Lecture Notes in Computer Science, pp. 25–36, Springer, Berlin, Germany, 2004.