International Journal of Biomedical Imaging
Volume 2019, Article ID 9435163, 14 pages
https://doi.org/10.1155/2019/9435163
Research Article

An Optical Flow-Based Approach for Minimally Divergent Velocimetry Data Interpolation

1School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, 85281, USA
2Department of Computer Science, Technical University of Munich, Munich, 80333, Germany
3School of Biological and Health Systems Engineering, Arizona State University, Tempe, 85281, USA
4School of Arts, Media and Engineering, Arizona State University, Tempe, 85281, USA

Correspondence should be addressed to Berkay Kanberoglu; bkanbero@asu.edu

Received 18 July 2018; Revised 4 December 2018; Accepted 10 December 2018; Published 3 February 2019

Academic Editor: A. K. Louis

Copyright © 2019 Berkay Kanberoglu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Three-dimensional (3D) biomedical image sets are often acquired with in-plane pixel spacings that are far less than the out-of-plane spacings between images. The resultant anisotropy, which can be detrimental in many applications, can be decreased using image interpolation. Optical flow and/or other registration-based interpolators have proven useful in such interpolation roles in the past. When acquired images are comprised of signals that describe the flow velocity of fluids, additional information is available to guide the interpolation process. In this paper, we present an optical-flow based framework for image interpolation that also minimizes resultant divergence in the interpolated data.

1. Introduction

Image interpolation is a fundamental problem encountered in many fields [1–9]. There are countless scenarios wherein images are acquired at resolutions that are suboptimal for the needs of specific applications. For example, biomedical images spanning a three-dimensional (3D) volume are often acquired with in-plane pixel spacings far less than the out-of-plane spacings between images. This can be the case with clinical images (e.g., from computed tomography (CT) and/or magnetic resonance (MR) imaging) as well as with in vitro images acquired with modalities such as particle image velocimetry (PIV) [10–17]. In cases where motion estimation and registration are parts of an interpolation framework, hardware-based approaches can offer solutions as well [18–24].

However, when acquired images are comprised of signals that describe the flow velocity of fluids, additional information is available to guide the interpolation process. Specifically, the flows of an incompressible fluid into and out of an interrogation volume must be equal according to conservation of mass [25]. Quantifying the deviation from zero net flow that is entering (or alternatively leaving) an interrogation volume (i.e., divergence) thus provides a means to direct interpolation in such a way as to reconstruct more physically accurate data.

Optical flow and/or other registration-based interpolators have proven useful in interpolating velocimetry data in the past [26–37]. Particle image velocimetry (PIV) is a technique that measures a velocity field in a fluid volume with the help of tracer particles in the fluid and specialized cameras [38, 39]. The standard technique to determine the velocity field from the raw PIV data is a correlation analysis between two frames that were acquired by the cameras [40]. This technique can be extended to 3D as well. Optical flow-based approaches have been widely used in computer vision [41–44], and they have been appealing to researchers because of the flexibility of variational approaches. Regularizers can be used for different constraints in the energy functional to be minimized. In the conventional optical flow method there are two constraints, brightness and smoothness [45]. Optical flow-based methods have been promising in the area of fluid flow estimation in PIV [46–51]. For example, in [47], incompressibility of the flow is added as a constraint in the optical flow minimization problem. In [48], the vorticity transport equation, which describes the evolution of the fluid's vorticity over time, is used in physically consistent spatio-temporal regularization to estimate fluid motion.

Divergence and curl (vorticity) have been used in estimating optical flow previously [52–55]. In [52], the smoothness constraint is decomposed into two parts, divergence and vorticity; in this way, the smoothness properties of the optical flow can be tuned. In [56], both incompressibility and divergence-free constraints are used in the ill-posed minimization problem to calculate a 3D velocity field from 3D cine CT images. In [54], a second-order div-curl spline smoothness condition is employed in order to compute a 3D motion field. In [55], a data term based on the continuity equation of fluid mechanics [25] and a second-order div-curl regularizer are employed to calculate fluid flow.

Here we present an optical-flow based framework for image interpolation that also minimizes resultant divergence in the interpolated data. That is, the divergence constraint attempts to minimize divergence in the interpolated velocimetry data, not the divergence of the optical flow field. To our knowledge, using divergence in this way as a constraint in an optical-flow framework for image interpolation has not been investigated prior to the preliminary work presented in [57]. The method is applied to PIV, computational fluid dynamics (CFD), and analytical data, and results indicate that the trade-off between minimizing errors in velocity magnitude values and errors in divergence can be managed such that both are decreased below levels observed for standard truncated sinc function-based interpolators, as well as pure optical flow-based interpolators. The proposed method thus has potential to provide an improved basis for interpolating velocimetry data in applications where isotropic flow velocity volumes are desirable, but out-of-plane data (i.e., data in different images spanning a 3D volume) cannot be resolved as highly as in-plane data.

The remainder of this paper is structured as follows. In Section 2.1, a definition of the term optical flow will be given and a canonical optical flow method will be briefly described. This will provide a basis for the following sections, as most of the work described in this paper builds on that method. In Section 2.2, an optical flow-based framework for interpolating minimally divergent velocimetry data is described. The new method uses flow velocity data to guide the interpolation toward lesser divergence in the interpolated data. In Section 3, the performance of the proposed technique is presented with experiments and simulations on real and analytical data. The results and the performance of the proposed method are discussed in Section 4, which concludes the paper.

2. Methods

2.1. Optical Flow

Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. In one canonical optical flow paper [45], two kinds of constraints are introduced in order to estimate the optical flow: the smoothness constraint and the brightness constancy constraint. In this section, we give a brief overview of the original optical flow algorithm (Horn-Schunck method) and the modified algorithm that was used in this project.

Optical flow methods estimate the motion between two consecutive image frames that were acquired at times $t$ and $t + \delta t$. A flow vector is calculated for every pixel. The vectors represent approximations of image motion that are based in large part on local spatial derivatives. Since the flow velocity has two components, two constraints are needed to solve for it.

2.1.1. The Brightness Constancy Constraint

The brightness constancy constraint assumes that the brightness of a small area in the image remains constant as the area moves from image to image. Image brightness at the point $(x, y)$ in the image at time $t$ is denoted here as $E(x, y, t)$. If the point moves by $\delta x$ and $\delta y$ in time $\delta t$, then according to the brightness constancy constraint

E(x, y, t) = E(x + \delta x, y + \delta y, t + \delta t).  (1)

This can also be stated as

E(x + \delta x, y + \delta y, t + \delta t) - E(x, y, t) = 0.  (2)

If we expand the first term on the left side of (2) with a Taylor series expansion, then

E(x, y, t) + \frac{\partial E}{\partial x}\,\delta x + \frac{\partial E}{\partial y}\,\delta y + \frac{\partial E}{\partial t}\,\delta t + \cdots - E(x, y, t) = 0,  (3)

where the ellipsis denotes higher order terms in the expansion. After canceling $E(x, y, t)$ and neglecting the higher order terms,

\frac{\partial E}{\partial x}\,\delta x + \frac{\partial E}{\partial y}\,\delta y + \frac{\partial E}{\partial t}\,\delta t = 0.  (4)

We can divide this equation by $\delta t$, which leads to

\frac{\partial E}{\partial x}\,\frac{\delta x}{\delta t} + \frac{\partial E}{\partial y}\,\frac{\delta y}{\delta t} + \frac{\partial E}{\partial t} = 0.  (5)

Substituting

u = \frac{\delta x}{\delta t}, \quad v = \frac{\delta y}{\delta t},  (6)

the brightness constraint can be written in a more compact form:

E_x u + E_y v + E_t = 0,  (7)

where $E_x = \partial E/\partial x$, $E_y = \partial E/\partial y$, and $E_t = \partial E/\partial t$. In this form $u$ and $v$ represent the image velocity components and $E_x$, $E_y$, $E_t$ represent the brightness gradients.
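As a concrete numerical illustration of the compact constraint (7), the following sketch (written in Python/NumPy for illustration, not the MATLAB implementation described later in Section 2.3) estimates the brightness gradients with finite differences and evaluates the residual of (7) for a candidate flow field; the test frames, the one-pixel shift, and the unit time step are hypothetical.

    import numpy as np

    def brightness_gradients(frame1, frame2):
        # Finite-difference estimates of E_x, E_y, and E_t between two frames,
        # assuming a unit time step (delta t = 1).
        avg = 0.5 * (frame1 + frame2)          # center spatial gradients in time
        Ey, Ex = np.gradient(avg)              # np.gradient returns d/drow, d/dcol
        Et = frame2 - frame1
        return Ex, Ey, Et

    def constancy_residual(frame1, frame2, u, v):
        # Residual of E_x*u + E_y*v + E_t = 0 for a candidate flow (u, v).
        Ex, Ey, Et = brightness_gradients(frame1, frame2)
        return Ex * u + Ey * v + Et

    # Hypothetical example: a random pattern shifted by one pixel along x.
    f1 = np.random.rand(64, 64)
    f2 = np.roll(f1, 1, axis=1)
    u = np.ones_like(f1)                       # candidate flow: 1 pixel/frame in x
    v = np.zeros_like(f1)
    print(np.abs(constancy_residual(f1, f2, u, v)).mean())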

2.1.2. The Smoothness Constraint

Fortunately, points from an object that is imaged in temporally adjacent frames usually have similar velocities, which results in a smooth velocity field. Leveraging this property, we can express a reasonable smoothness constraint by minimizing the sums of squares of the Laplacians of the velocity components $u$ and $v$. The Laplacians are

\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}, \quad \nabla^2 v = \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}.  (8)

2.1.3. Minimization

Optical flow assumes constant brightness and smooth velocity over the whole image. The two constraints described above are used to formulate an energy functional to be minimized:

\mathcal{E} = \iint \left[ (E_x u + E_y v + E_t)^2 + \alpha^2 \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \right] dx\, dy,  (9)

where $\alpha$ weights the smoothness term. Using variational calculus, the Euler-Lagrange equations can be determined for this problem. Those equations need to be solved for each pixel in the image. Iterative methods are suitable for solving the equations since it can be very costly to solve them simultaneously. The iterative equations that minimize (9) are

u^{n+1} = \bar{u}^{\,n} - \frac{E_x (E_x \bar{u}^{\,n} + E_y \bar{v}^{\,n} + E_t)}{\alpha^2 + E_x^2 + E_y^2}, \quad v^{n+1} = \bar{v}^{\,n} - \frac{E_y (E_x \bar{u}^{\,n} + E_y \bar{v}^{\,n} + E_t)}{\alpha^2 + E_x^2 + E_y^2},  (10)

where $n$ denotes the iteration number and $\bar{u}$ and $\bar{v}$ denote neighborhood averages of $u$ and $v$. More detailed information on the method can be found in [45].
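A minimal sketch of the iterative update (10) follows, again in Python/NumPy rather than the authors' MATLAB code; the averaging kernel follows the weights suggested in [45], and the default smoothness weight and iteration count are placeholders rather than the exact values used for the results in Section 3.

    import numpy as np
    from scipy.ndimage import convolve

    # Weights for the neighborhood averages u_bar and v_bar (as suggested in [45]).
    AVG_KERNEL = np.array([[1/12, 1/6, 1/12],
                           [1/6,  0.0, 1/6 ],
                           [1/12, 1/6, 1/12]])

    def horn_schunck(Ex, Ey, Et, alpha=1.0, n_iter=2000):
        # Classical Horn-Schunck iteration; Ex, Ey, Et are brightness gradients.
        u = np.zeros_like(Ex)
        v = np.zeros_like(Ex)
        for _ in range(n_iter):
            u_bar = convolve(u, AVG_KERNEL, mode='nearest')
            v_bar = convolve(v, AVG_KERNEL, mode='nearest')
            # Common factor shared by the two update equations in (10).
            frac = (Ex * u_bar + Ey * v_bar + Et) / (alpha**2 + Ex**2 + Ey**2)
            u = u_bar - Ex * frac
            v = v_bar - Ey * frac
        return u, v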

2.2. Optical Flow with Divergence Constraint
2.2.1. Continuity Equation

According to the continuity equation in fluid dynamics, the rate of mass entering a system is equal to the rate of mass leaving the system [25]. The differential form of the equation is

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,  (11)

where $\rho$ is the fluid density, $t$ is time, and $\mathbf{v}$ is the velocity vector field. In the case of incompressible flow, $\rho$ becomes constant and the continuity equation takes the form

\nabla \cdot \mathbf{v} = 0.  (12)

This means that the divergence of the velocity field is zero in the case of incompressible flow. Figure 1 shows the change in flow velocity of a voxel.

Figure 1: Change in flow velocity of a sample voxel.
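To make (12) concrete for sampled data, the sketch below computes the discrete divergence of a 3D velocity field with central differences; the grid spacings and the choice of central differences are assumptions, and this generic discretization is not necessarily the one used for the results in Section 3.

    import numpy as np

    def divergence_3d(vx, vy, vz, dx=1.0, dy=1.0, dz=1.0):
        # Central-difference divergence of a sampled velocity field; the
        # component arrays are indexed as (x, y, z). For incompressible flow
        # the result should be close to zero everywhere.
        dvx_dx = np.gradient(vx, dx, axis=0)
        dvy_dy = np.gradient(vy, dy, axis=1)
        dvz_dz = np.gradient(vz, dz, axis=2)
        return dvx_dx + dvy_dy + dvz_dz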
2.2.2. Symmetric Setup

For the new method, a symmetric interpolation setup is proposed as shown in Figure 2. In the figure, the upper and lower slices are from the dataset and the interpolated slice is in the middle. In this section, separate symbols denote the velocity magnitude image and the velocity vector components. Approximating the corresponding expressions with Taylor expansions yields (14a) and (14b). After substituting (14a) and (14b) into (13), the terms can be arranged to obtain the new brightness constraint (15).

Figure 2: Illustration of the symmetric interpolation setup.

In the next step, we aim to minimize the divergence of the interpolated slice. Ideally, the divergence equation of the interpolated slice (16) should be used. Since this information is unavailable, to generate the middle slice with as little divergence as possible, we can use the relation (17), which leads to the constraint (18) by using the divergence expressions of the two outer slices. Using a Taylor expansion on (18) yields (19). In (19), derivatives in the z-direction appear, and calculating these derivatives would require additional outer slices. To simplify this requirement, we can expand the corresponding terms and obtain (20a) and (20b).

Using (20a) and (20b) in (18), we obtain the new divergence constraint (21), which does not require additional slices for the z-direction derivatives. Combining (15), (21), and the optical flow smoothness constraint, we obtain the new energy functional that needs to be minimized. Using variational calculus, the Euler-Lagrange equations can be determined for this problem. They need to be solved for each pixel in the image. The iterative equations that minimize the energy functional are given by (24a) and (24b), where the superscript denotes the iteration number and the barred quantities denote neighborhood averages of the velocity components, as in the Horn-Schunck solution. The coefficient expressions in (24a) and (24b) follow from the same derivation. The numerical scheme to solve the Euler-Lagrange equations is based on the solution laid out in [45]. More detailed information on the steps of the derivation can be found in the appendix.
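The structure of the resulting interpolation can be illustrated with the simplified sketch below, which covers only the symmetric setup of Figure 2 and not the divergence-dependent coefficients of (24a) and (24b): a flow field is estimated between the two outer slices, each outer slice is warped halfway toward the middle, and the two warped slices are averaged. The functions brightness_gradients and horn_schunck refer to the sketches in Section 2.1, and the bilinear warping and sign conventions are assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(image, u, v, scale):
        # Sample `image` at positions displaced by scale*(u, v) (bilinear).
        rows, cols = np.indices(image.shape, dtype=float)
        return map_coordinates(image, [rows + scale * v, cols + scale * u],
                               order=1, mode='nearest')

    def interpolate_middle_slice(lower, upper, alpha=1.0, n_iter=2000):
        # Estimate the flow from the lower to the upper slice, then warp both
        # outer slices halfway toward the middle and average them.
        Ex, Ey, Et = brightness_gradients(lower, upper)   # sketch from Section 2.1.1
        u, v = horn_schunck(Ex, Ey, Et, alpha, n_iter)    # sketch from Section 2.1.3
        from_lower = warp(lower, u, v, -0.5)
        from_upper = warp(upper, u, v, +0.5)
        return 0.5 * (from_lower + from_upper)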

There have been several studies that attempt to improve the performance of optical flow techniques and computation schemes [41, 44, 58–64]. For example, in [59] nonlinear convex penalty functions are used for the constraints in the optical flow energy functional. The approach uses numerical approximations to obtain a sparse linear system of equations from the highly nonlinear Euler-Lagrange equations. The resulting linear system of equations can be solved with numerical methods like Gauss-Seidel, which is similar to the Jacobi method, or successive over-relaxation (SOR), which is a Gauss-Seidel variant; a structural sketch is given below. Another improvement to variational optical flow computation is presented in [60]. The approach uses a multigrid numerical optimization method and, because of its speed, can be used in real time. After all these advances, it was argued in [58] that the typical formulation of optical flow has changed little and that most of the advances have mainly been numerical optimization and implementation techniques and robustness functions. This is true of the proposed method as well: the optical flow portion of this interpolation framework is closely related to the Horn-Schunck method, and the derived numerical scheme is straightforward and simple to implement. For example, setting the divergence coefficient to 0 in (24a) and (24b) reduces the solutions to the Horn-Schunck solutions. The numerical scheme is also sufficient for velocimetry data because, unlike in many other types of images, stark discontinuities are not expected in velocimetry images at Reynolds numbers on the order of those in biomedical flows.
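As a structural illustration of the solvers mentioned above, the sketch below implements successive over-relaxation for a generic dense linear system Ax = b; it is a toy version only, since the sparse systems arising from the Euler-Lagrange equations would in practice be solved matrix-free or with multigrid methods as in [60].

    import numpy as np

    def sor_solve(A, b, omega=1.5, n_iter=200):
        # Successive over-relaxation for A x = b; omega = 1 recovers Gauss-Seidel.
        x = np.zeros_like(b, dtype=float)
        for _ in range(n_iter):
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        return x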

2.3. PIV Setup

The testing datasets were acquired using particle image velocimetry, an optical experimental flow measurement technique. PIV data acquisition and processing generally consist of the following steps: (1) computational modeling, (2) physical model construction, (3) particle image acquisition, (4) PIV processing, and (5) data analysis. The testing datasets were acquired for an in vitro model of a cerebral aneurysm. Patient-specific computed tomography (CT) images were first segmented and reconstructed to obtain the computational cerebral aneurysm model shown in Figure 3. The computational model was then translated into an optically clear, rigid urethane model using a lost-core manufacturing methodology. The physical model was connected to a flow loop consisting of a blood analog solution seeded with 8 μm fluorescent microspheres. Fluid flow through the physical model was controlled at specific flow rates (3, 4, and 5 mL/s). PIV was performed using a FlowMaster 3D Stereo PIV system (LaVision, Ypsilanti, MI), where the fluorescent particles were illuminated with a 532 nm dual-pulsed Nd:YAG laser at a controlled rate, while two CCD cameras captured the images across seven parallel planes (or slices) within the aneurysmal volume. A distance of 1 mm separated the planes. Two hundred image pairs, at each flow rate and slice, were acquired at 5 Hz. The image pairs were processed using a recursive cross-correlation algorithm in the DaVis software (LaVision, Ypsilanti, MI) to calculate the velocity vectors within the region of interest (i.e., the aneurysm). Initial and final interrogation window sizes of 32 by 32 pixels and 16 by 16 pixels, respectively, were used. A detailed explanation of the experimental process can be found in [65]. A sample experimental model is shown in Figure 4.

Figure 3: Dimensions of the aneurysm.
Figure 4: Example flow slice from the PIV experiments.
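To make the correlation-based PIV processing step concrete, the sketch below estimates the integer-pixel displacement of one interrogation window pair with FFT-based cross-correlation; it illustrates the general principle only and is not the recursive multi-pass algorithm implemented in the DaVis software.

    import numpy as np

    def window_displacement(win_a, win_b):
        # Integer-pixel displacement of win_b relative to win_a via circular
        # FFT cross-correlation of the mean-subtracted windows.
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap peak indices beyond the midpoint back to negative displacements.
        dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, a.shape)]
        return dx, dy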

The proposed algorithm was developed in MATLAB (MathWorks, Inc.). Since the proposed algorithm has two separate terms for divergence and smoothness, different combinations of coefficients can be used for the terms. However, in order to get a clear idea about the performance of the method, only one set of parameters was used in the simulations. The divergence term's coefficient was set to 150. From previous tests, it was seen that the proposed method performed better when a relatively large divergence coefficient was used while keeping the smoothness coefficient small. The smoothness coefficient was set to 1. The same smoothness coefficient was also used for the Horn-Schunck based method. The number of iterations for both methods was set to 2000. Each PIV dataset used in testing had 7 slices. The slices were originally 154×121 pixels; they were cropped and zero-padded to reach 128×128. The size of the region where MSE and divergence were calculated is 110×110. Even though there are 7 slices in each dataset, only 3 slices were reconstructed from the datasets: slices 3, 4, and 5. Two different spacing steps were used between the slices. The first is Δz = 2, where the neighboring slices z−1 and z+1 were used to reconstruct the middle slice. The second is Δz = 4, where slices z−2 and z+2 were used for the interpolation; e.g., slices 1 and 5 were used to reconstruct slice 3. The method was tested against linear interpolation and an implementation of Horn-Schunck optical flow-based interpolation.
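The evaluation metrics described above can be computed along the following lines; the centered 110×110 window mirrors the description in the text, while the exact cropping and divergence stencil used for the published numbers are not specified here and should be treated as assumptions (divergence_3d refers to the sketch in Section 2.2).

    import numpy as np

    def central_crop(image, size=110):
        # Centered evaluation window (e.g., 110x110 from a 128x128 slice).
        r0 = (image.shape[0] - size) // 2
        c0 = (image.shape[1] - size) // 2
        return image[r0:r0 + size, c0:c0 + size]

    def slice_mse(reconstructed, reference, size=110):
        # Mean squared error over the central evaluation region.
        diff = central_crop(reconstructed, size) - central_crop(reference, size)
        return np.mean(diff ** 2)

    def mean_abs_divergence(vx, vy, vz, size=110):
        # Mean absolute divergence over the central region of the middle slice,
        # using the divergence_3d sketch from Section 2.2.
        div = divergence_3d(vx, vy, vz)
        mid = div.shape[2] // 2
        return np.abs(central_crop(div[:, :, mid], size)).mean()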

2.4. Analytical Datasets

The method was tested with a 3D divergence-free analytical dataset and a CFD dataset with turbulent flow. The analytical dataset is a closed-form, divergence-free velocity field. The out-of-plane slice distance was kept much larger than the in-plane resolution. In order to assess the robustness of the proposed method, each velocity field was perturbed by Gaussian noise. The noise had zero mean and a standard deviation of 10% of the maximum velocity in each velocity field.
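As an illustration of the noise model only, the sketch below perturbs a hypothetical divergence-free field (a Taylor-Green-like pattern chosen for this example; it is not the analytical dataset used in the paper) with zero-mean Gaussian noise whose standard deviation is 10% of the maximum velocity magnitude.

    import numpy as np

    # Hypothetical divergence-free field used only to illustrate the noise model.
    x, y, z = np.meshgrid(np.linspace(0, np.pi, 64),
                          np.linspace(0, np.pi, 64),
                          np.linspace(0, np.pi, 8), indexing='ij')
    vx = np.sin(x) * np.cos(y)
    vy = -np.cos(x) * np.sin(y)
    vz = np.zeros_like(vx)

    # Zero-mean Gaussian noise, standard deviation 10% of the maximum velocity.
    v_max = np.sqrt(vx**2 + vy**2 + vz**2).max()
    rng = np.random.default_rng(0)
    vx_noisy = vx + rng.normal(0.0, 0.1 * v_max, vx.shape)
    vy_noisy = vy + rng.normal(0.0, 0.1 * v_max, vy.shape)
    vz_noisy = vz + rng.normal(0.0, 0.1 * v_max, vz.shape)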

2.5. Computational Fluid Dynamics (CFD) Simulations

The original computational aneurysm model was imported into ANSYS ICEM (ANSYS, Canonsburg, PA), where the inlet and outlets of the aneurysm model were extruded. After meshing was performed to discretize the blood volume into tetrahedrons, the final mesh was imported into ANSYS Fluent, where the blood volume was modeled as an incompressible fluid with the same density and viscosity as the blood analog solution used in the experiments. The vessel wall was assumed to be rigid, and a no-slip boundary condition was applied at the walls. A steady flat 4 mL/s flow profile was applied at the inlet of the model, and zero pressure boundary conditions were imposed at the outlets. The overall CFD approach has been described previously in [65, 66].

3. Results

Figure 5 shows divergence and MSE comparison graphs when Δz = 2. The proposed method consistently achieves lower divergence values than the Horn-Schunck-based interpolation, whereas the MSE values are sometimes better and sometimes worse. On average, divergence values were 11% lower than those of the Horn-Schunck-based interpolation. In some cases, the proposed method achieves up to 20% lower divergence values.

Figure 5: Divergence and MSE comparisons when slice distance is 2 mm.

Figure 6 shows divergence and MSE comparison graphs when Δz = 4. In this case, the proposed method consistently achieves lower divergence and MSE values than the other tested methods.

Figure 6: Divergence and MSE comparisons when slice distance is 4 mm.

Figure 7 shows original, noisy, and interpolated slices from the analytical dataset for comparison. In the figure, only the in-plane velocity components were plotted to show the effect of the divergence term. In Figure 8, it can be seen that the proposed algorithm reduces divergence while the MSE increases for the CFD dataset.

Figure 7: Plotted in-plane components of the 3D analytical divergence-free vector field. (a) Original, (b) Gaussian noise added, (c) linear interpolation, (d) Horn-Schunck based interpolation, and (e) proposed method. Note that the proposed method is able to achieve a smoother velocity field in the corners of the interpolated data.
Figure 8: Divergence and MSE comparisons for the CFD dataset.

The graphs in Figure 9 show the behavior of the proposed method as the divergence coefficient increases linearly. In this simulation, the smoothness coefficient was kept constant at 1. The graphs are taken from the PIV dataset. The divergence graph profiles were consistent across different images and the three datasets. The MSE graph profiles may differ slightly from the divergence graph profiles across different datasets, but MSE always increased with an increasing divergence coefficient. The coefficient values tested ranged from 0 to 2000. The profiles shown in the figure indicate that there needs to be a balance between the divergence and smoothness terms. The graphs in the figure are consistent with profiles of other published regularization methods [67, 68]. Figure 10 shows the behavior of the proposed method as both coefficients increase linearly.

Figure 9: Divergence and MSE profiles of the proposed method as the divergence coefficient is increased linearly while the smoothness coefficient is held constant at 1.
Figure 10: Divergence and MSE profiles of the proposed method as the divergence and smoothness coefficients are increased linearly from 0.1 to 2500.

The computational cost of obtaining flow vectors with the proposed method is similar to that of the Horn-Schunck approach. Even though the iterative solutions of the proposed method employ several terms, these need to be computed only once and can be reduced to a simpler form similar to the Horn-Schunck solutions. Both approaches took around 0.1 seconds to obtain an optical flow field on a single core of an Intel dual-core CPU (i7-6500U @ 2.50 GHz).

Another parameter that could affect the divergence, MSE, and computational cost was the iteration number. We ran simulations with different iteration numbers and noted that the divergence and MSE results seemed to stabilize after 200 iterations. Higher iteration numbers mostly affected only the computation time.
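One simple way to monitor this behavior is to track the largest per-iteration change in the flow components and stop once it falls below a tolerance, as in the sketch below; the step function stands for a hypothetical single-iteration update (e.g., one pass of the loop in the Section 2.1.3 sketch), and the tolerance value is an assumption.

    import numpy as np

    def run_until_stable(step, u0, v0, tol=1e-4, max_iter=2000):
        # Iterate a flow update step(u, v) -> (u, v) until the largest change
        # in either component drops below tol or max_iter is reached.
        u, v = u0, v0
        for it in range(1, max_iter + 1):
            u_new, v_new = step(u, v)
            delta = max(np.abs(u_new - u).max(), np.abs(v_new - v).max())
            u, v = u_new, v_new
            if delta < tol:
                break
        return u, v, it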

4. Discussion and Conclusions

A new optical flow-based framework for image interpolation that also reduces divergence is proposed. The new method uses flow velocity data to guide the interpolation toward lesser divergence in the interpolated data. In addition to the brightness and smoothness terms of the canonical optical flow formulation, the method introduces a new divergence term into the energy functional and employs a symmetric interpolation setup. The method was applied to PIV, analytical, and CFD data, and it was tested against linear interpolation and the Horn-Schunck optical flow method, since it uses a formulation similar to that of the Horn-Schunck method.

In order to test the effects of the divergence term, the same smoothness coefficient was used for both the Horn-Schunck and proposed methods. When tested on the noisy analytical data, the proposed method achieved a smoother and less noisy interpolated velocity field.

The proposed method was also applied to the PIV data with different values of the smoothness and divergence term coefficients. Results indicate that the trade-off between minimizing errors in velocity magnitude values and errors in divergence can be managed such that both are decreased below levels observed for standard truncated sinc function-based interpolators as well as pure optical flow-based interpolators. The divergence term coefficient needs to be large enough to reduce divergence in the interpolated data, but not so large as to dominate the energy functional and introduce errors into the final interpolated velocity field. The effect of the iteration number on the divergence and MSE values was found to be minimal after 200 iterations. The computational cost of the method was similar to that of the Horn-Schunck-based approach.

The method uses a numerical scheme that is well known and straightforward. A more recent optical flow computation scheme may lead to gains in quality and/or speed. Methods presented in [59, 60] have become popular because of their speed, simplicity, and flexibility. Adoption of recent numerical optimization and implementation techniques will be explored in future research.

The proposed method has potential to improve the interpolation of velocimetry data when it is difficult to achieve an out-of-plane resolution close to the in-plane resolution. The results also indicate that the effect of the new divergence term in the optical flow functional can be appreciated better as the distance between the interpolated slice and the neighboring slices increases. The proposed method outperformed the other tested methods in both divergence and MSE values when the slice distance was increased. When the slice distance is small, the proposed method achieves lower divergence than the other methods while achieving similar MSE values.

Appendix

The energy functional can be minimized by solving the associated Euler-Lagrange equations,

\frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\,\frac{\partial L}{\partial u_x} - \frac{\partial}{\partial y}\,\frac{\partial L}{\partial u_y} = 0, \quad \frac{\partial L}{\partial v} - \frac{\partial}{\partial x}\,\frac{\partial L}{\partial v_x} - \frac{\partial}{\partial y}\,\frac{\partial L}{\partial v_y} = 0,

where $L$ is the integrand of the energy functional. After rearranging the terms, the Laplacians of $u$ and $v$ are approximated as $\nabla^2 u \approx \kappa(\bar{u} - u)$ and $\nabla^2 v \approx \kappa(\bar{v} - v)$, where $\kappa$ is a proportionality constant and $\bar{u}$ and $\bar{v}$ are local averages. These approximations are substituted for the Laplacians and the terms in the equations are rearranged. Determinants can be used to solve the resulting linear equations at each pixel.
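For reference, in the purely Horn-Schunck case (i.e., with the divergence coefficient set to zero) this substitution leads to the standard per-pixel 2×2 linear system, restated here with the proportionality constant absorbed into the smoothness weight $\alpha^2$:

(\alpha^2 + E_x^2)\,u + E_x E_y\,v = \alpha^2 \bar{u} - E_x E_t,
E_x E_y\,u + (\alpha^2 + E_y^2)\,v = \alpha^2 \bar{v} - E_y E_t,

whose determinant is $\alpha^2 (\alpha^2 + E_x^2 + E_y^2)$. The divergence-dependent terms of the proposed method modify these coefficients, as reflected in (24a) and (24b).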

Data Availability

The PIV data used to support the findings of this study are available from the corresponding author upon request. Sample code can be found at https://github.com/berk-github/OF_interp.

Disclosure

A portion of this work was presented as an abstract at the 2017 IEEE International Symposium on Biomedical Imaging (ISBI 2017).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Science Foundation.

References

1. O. Oktay, W. Bai, M. Lee et al., "Multi-input Cardiac Image Super-Resolution Using Convolutional Neural Networks," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, vol. 9902 of Lecture Notes in Computer Science, pp. 246–254, Springer International Publishing, Cham, Switzerland, 2016.
2. Y. Liu, X. Chen, Z. Wang, Z. J. Wang, R. K. Ward, and X. Wang, "Deep learning for pixel-level image fusion: Recent advances and future prospects," Information Fusion, vol. 42, pp. 158–173, 2018.
3. R. S. Alves and J. M. Tavares, "Computer Image Registration Techniques Applied to Nuclear Medicine Images," in Computational and Experimental Biomedical Sciences: Methods and Applications, vol. 21 of Lecture Notes in Computational Vision and Biomechanics, pp. 173–191, Springer International Publishing, Cham, Switzerland, 2015.
4. W. Witwit, Y. Zhao, K. Jenkins, and Y. Zhao, "Satellite image resolution enhancement using discrete wavelet transform and new edge-directed interpolation," Journal of Electronic Imaging, vol. 26, no. 2, 2017.
5. L. Roszkowiak, A. Korzynska, J. Zak, D. Pijanowska, Z. Swiderska-Chadaj, and T. Markiewicz, "Survey: interpolation methods for whole slide image processing," Journal of Microscopy, vol. 265, no. 2, pp. 148–158, 2017.
6. J. Titus and S. George, "A comparison study on different interpolation methods based on satellite images," IJERT – International Journal of Engineering Research & Technology, vol. 2, 2013.
7. K. K. Teoh, H. Ibrahim, and S. K. Bejo, "Investigation on several basic interpolation methods for the use in remote sensing application," in Proceedings of the 2008 IEEE Conference on Innovative Technologies in Intelligent Systems and Industrial Applications, CITISIA, pp. 60–65, July 2008.
8. S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, 2003.
9. T. M. Lehmann, C. Gönner, and K. Spitzer, "Survey: interpolation methods in medical image processing," IEEE Transactions on Medical Imaging, vol. 18, no. 11, pp. 1049–1075, 1999.
10. G. Liberman, E. Solomon, M. Lustig, and L. Frydman, "Multiple-coil k-space interpolation enhances resolution in single-shot spatiotemporal MRI," Magnetic Resonance in Medicine, vol. 79, no. 2, pp. 796–805, 2018.
11. N. Karani, C. Tanner, S. Kozerke, and E. Konukoglu, "Reducing Navigators in Free-Breathing Abdominal MRI via Temporal Interpolation Using Convolutional Neural Networks," IEEE Transactions on Medical Imaging, vol. 37, no. 10, pp. 2333–2343, 2018.
12. Y. Chen, Y. Xie, Z. Zhou, F. Shi, A. G. Christodoulou, and D. Li, "Brain MRI super resolution using 3D deep densely connected neural networks," in Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 739–742, Washington, DC, USA, April 2018.
13. C.-H. Pham, A. Ducournau, R. Fablet, and F. Rousseau, "Brain MRI super-resolution using deep 3D convolutional networks," in Proceedings of the 14th IEEE International Symposium on Biomedical Imaging, ISBI 2017, pp. 197–200, April 2017.
14. Q. Lin, Q. Zhang, and T. Li, "Slice interpolation in MRI using a decomposition-reconstruction method," in Proceedings of the 4th International Conference on Information Science and Control Engineering, ICISCE 2017, pp. 678–681, July 2017.
15. N. H. L. C. de Hoon, A. C. Jalba, E. Eisemann, and A. Vilanova, "Temporal interpolation of 4D PC-MRI blood-flow measurements using bidirectional physics-based fluid simulation," in Eurographics Workshop on Visual Computing for Biology and Medicine, The Eurographics Association, 2016.
16. C. Ozturk, J. A. Derbyshire, and E. R. McVeigh, "Estimating motion from MRI data," Proceedings of the IEEE, vol. 91, no. 10, pp. 1627–1647, 2003.
17. D. H. Frakes, C. P. Conrad, T. M. Healy et al., "Application of an adaptive control grid interpolation technique to morphological vascular reconstruction," IEEE Transactions on Biomedical Engineering, vol. 50, no. 2, pp. 197–206, 2003.
18. G. Pastuszak and M. Trochimiuk, "Architecture design of the high-throughput compensator and interpolator for the H.265/HEVC encoder," Journal of Real-Time Image Processing, vol. 11, no. 4, pp. 663–673, 2016.
19. G. Botella, A. García, M. Rodríguez-Álvarez, E. Ros, U. Meyer-Baese, and M. C. Molina, "Robust bioinspired architecture for optical-flow computation," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 18, no. 4, pp. 616–629, 2010.
20. G. Botella, U. Meyer-Baese, A. García, and M. Rodríguez, "Quantization analysis and enhancement of a VLSI gradient-based motion estimation architecture," Digital Signal Processing, vol. 22, no. 6, pp. 1174–1187, 2012.
21. D. González, G. Botella, C. García, M. Prieto, and F. Tirado, "Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor," EURASIP Journal on Advances in Signal Processing, vol. 2013, no. 1, article 118, pp. 1–20, 2013.
22. A. H. Nguyen, M. R. Pickering, and A. Lambert, "The FPGA implementation of a one-bit-per-pixel image registration algorithm," Journal of Real-Time Image Processing, vol. 11, no. 4, pp. 799–815, 2016.
23. Z. Xinchen, A. I. Haojun, H. Ruimin, and L. Deren, "A novel algorithm for sub-pixel block motion estimation [video compression applications]," in Proceedings of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 587–590, October 2004.
24. D.-X. Li, W. Zheng, and M. Zhang, "Architecture design for H.264/AVC integer motion estimation with minimum memory bandwidth," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1053–1060, 2007.
25. J. Pedlosky, Geophysical Fluid Dynamics, Springer, New York, NY, USA, 1987.
26. G. Glomb, G. Świrniak, and J. Mroczka, "An optical flow-based method for velocity field of fluid flow estimation," in Proceedings of SPIE 10329, Optical Measurement Systems for Industrial Inspection X, vol. 10329, 2017.
27. A. Baghaie and Z. Yu, "Curvature-Based Registration for Slice Interpolation of Medical Images," in Computational Modeling of Objects Presented in Images: Fundamentals, Methods, and Applications, vol. 8641 of Lecture Notes in Computer Science, pp. 69–80, Springer International Publishing, 2014.
28. A. This, L. Boilevin-Kayl, H. G. Morales et al., "One Mesh to Rule Them All: Registration-Based Personalized Cardiac Flow Simulations," in Functional Imaging and Modelling of the Heart, vol. 10263 of Lecture Notes in Computer Science, pp. 441–449, Springer International Publishing, 2017.
29. G. Glomb and G. Świrniak, "Particle image models for optical flow-based velocity field estimation in image velocimetry," in Proceedings of SPIE 10679, Optics, Photonics, and Digital Technologies for Imaging Applications V, vol. 10679, 2018.
30. D. H. Frakes, K. Pekkan, L. P. Dasi et al., "Modified control grid interpolation for the volumetric reconstruction of fluid flows," Experiments in Fluids, vol. 45, no. 6, pp. 987–997, 2008.
31. G. P. Penney, J. A. Schnabel, D. Rueckert, M. A. Viergever, and W. J. Niessen, "Registration-based interpolation," IEEE Transactions on Medical Imaging, vol. 23, no. 7, pp. 922–926, 2004.
32. L. D. C. Casa and P. S. Krueger, "Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data," Measurement Science and Technology, vol. 24, no. 6, p. 065304, 2013.
33. F. Brunet, E. Cid, A. Bartoli, E. Bouche, F. Risso, and V. Roig, "Image registration algorithm for molecular tagging velocimetry applied to unsteady flow in Hele-Shaw cell," Experimental Thermal and Fluid Science, vol. 44, pp. 897–904, 2013.
34. C. J. Elkins and M. T. Alley, "Magnetic resonance velocimetry: Applications of magnetic resonance imaging in the measurement of fluid motion," Experiments in Fluids, vol. 43, no. 6, pp. 823–858, 2007.
35. D. Frakes, M. Smith, D. de Zélicourt, K. Pekkan, and A. Yoganathan, "Three-dimensional velocity field reconstruction," Journal of Biomechanical Engineering, vol. 126, no. 6, pp. 727–735, 2004.
36. D. P. Melnikov and V. M. Shevtsova, "Liquid particles tracing in three dimensional buoyancy driven flows," Fluid Dynamics and Materials Processing, vol. 1, pp. 189–199, 2005.
37. D. Heitz, E. Mémin, and C. Schnörr, "Variational fluid flow measurements from image sequences: Synopsis and perspectives," Experiments in Fluids, vol. 48, no. 3, pp. 369–393, 2010.
38. M. Raffel, C. E. Willert, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide, Springer, 2013.
39. R. J. Adrian and J. Westerweel, Particle Image Velocimetry, Cambridge University Press, 2011.
40. F. Scarano, "Iterative image deformation methods in PIV," Measurement Science and Technology, vol. 13, no. 1, pp. R1–R19, 2002.
41. D. Fortun, P. Bouthemy, and C. Kervrann, "Optical flow modeling and computation: A survey," Computer Vision and Image Understanding, vol. 134, pp. 1–21, 2015.
42. D. Sun, S. Roth, and M. J. Black, "A quantitative analysis of current practices in optical flow estimation and the principles behind them," International Journal of Computer Vision, vol. 106, no. 2, pp. 115–137, 2014.
43. A. Dosovitskiy, P. Fischer, E. Ilg et al., "FlowNet: Learning optical flow with convolutional networks," in Proceedings of the 15th IEEE International Conference on Computer Vision, ICCV 2015, pp. 2758–2766, December 2015.
44. G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, Springer, New York, NY, USA, 2006.
45. B. K. P. Horn and B. G. Schunck, "Determining optical flow," A.I. Memo 572, Massachusetts Institute of Technology, 1980.
46. L. Alvarez, C. Castaño, M. García et al., "A variational approach for 3D motion estimation of incompressible PIV flows," in Scale Space and Variational Methods in Computer Vision, F. Sgallari, A. Murli, and N. Paragios, Eds., pp. 837–847, Springer, Berlin, Germany, 2007.
47. L. Alvarez, C. A. Castaño, M. García et al., "A new energy-based method for 3D motion estimation of incompressible PIV flows," Computer Vision and Image Understanding, vol. 113, no. 7, pp. 802–810, 2009.
48. P. Ruhnau, A. Stahl, and C. Schnörr, "Variational estimation of experimental fluid flows with physics-based spatio-temporal regularization," Measurement Science and Technology, vol. 18, no. 3, pp. 755–763, 2007.
49. P. Ruhnau and C. Schnörr, "Optical Stokes flow estimation: an imaging-based control approach," Experiments in Fluids, vol. 42, no. 1, pp. 61–78, 2006.
50. I. Herlin, D. Béréziat, N. Mercier, and S. Zhuk, "Divergence-free motion estimation," in Computer Vision – ECCV 2012, A. Fitzgibbon, S. Lazebnik, P. Perona et al., Eds., pp. 15–27, Springer, Berlin, Germany, 2012.
51. Q. Zhong, H. Yang, and Z. Yin, "An optical flow algorithm based on gradient constancy assumption for PIV image processing," Measurement Science and Technology, vol. 28, no. 5, p. 055208, 2017.
52. D. Suter, "Motion estimation and vector splines," in Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 939–942, June 1994.
53. S. N. Gupta and J. L. Prince, "Stochastic models for DIV-CURL optical flow methods," IEEE Signal Processing Letters, vol. 3, no. 2, pp. 32–34, 1996.
54. S. N. Gupta and J. L. Prince, "On div-curl regularization for motion estimation in 3-D volumetric imaging," in Proceedings of the 3rd IEEE International Conference on Image Processing, vol. 1, pp. 929–932, 1996.
55. T. Corpetti, D. Heitz, G. Arroyo, E. Mémin, and A. Santa-Cruz, "Fluid experimental flow estimation based on an optical-flow scheme," Experiments in Fluids, vol. 40, no. 1, pp. 80–97, 2006.
56. S. M. Song and R. M. Leahy, "Computation of 3-D velocity fields from 3-D cine CT images of a human heart," IEEE Transactions on Medical Imaging, vol. 10, no. 3, pp. 295–306, 1991.
57. B. Kanberoglu, P. Nair, and D. Frakes, "An optical flow-based approach for the interpolation of minimally divergent velocimetry data," in Proceedings of the 14th IEEE International Symposium on Biomedical Imaging, ISBI 2017, pp. 538–542, April 2017.
58. D. Sun, S. Roth, and M. J. Black, "Secrets of optical flow estimation and their principles," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2432–2439, IEEE, San Francisco, Calif, USA, June 2010.
59. T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in Computer Vision – ECCV 2004, vol. 3024 of Lecture Notes in Computer Science, pp. 25–36, Springer, Berlin, Germany, 2004.
60. A. Bruhn, J. Weickert, T. Kohlberger, and C. Schnörr, "A multigrid platform for real-time motion computation with discontinuity-preserving variational methods," International Journal of Computer Vision, vol. 70, no. 3, pp. 257–277, 2006.
61. A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods," International Journal of Computer Vision, vol. 61, no. 3, pp. 211–231, 2005.
62. A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnörr, "Variational optical flow computation in real time," IEEE Transactions on Image Processing, vol. 14, no. 5, pp. 608–615, 2005.
63. P. Gwosdek, H. Zimmer, S. Grewenig, A. Bruhn, and J. Weickert, "A highly efficient GPU implementation for variational optic flow based on the Euler-Lagrange framework," in Trends and Topics in Computer Vision, vol. 6554 of Lecture Notes in Computer Science, pp. 372–383, Springer, Berlin, Heidelberg, 2012.
64. H. A. Rashwan, M. A. Garcia, and D. Puig, "Variational optical flow estimation based on stick tensor voting," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2589–2599, 2013.
65. B. N. Roszelle, P. Nair, L. F. Gonzalez, M. H. Babiker, J. Ryan, and D. Frakes, "Comparison among different high porosity stent configurations: Hemodynamic effects of treatment in a large cerebral aneurysm," Journal of Biomechanical Engineering, vol. 136, no. 2, Article ID 021013, 2014.
66. M. H. Babiker, L. F. Gonzalez, J. Ryan et al., "Influence of stent configuration on cerebral aneurysm fluid dynamics," Journal of Biomechanics, vol. 45, no. 3, pp. 440–447, 2012.
67. C. B. Shaw and P. K. Yalavarthy, "Effective contrast recovery in rapid dynamic near-infrared diffuse optical tomography using l1-norm-based linear image reconstruction method," Journal of Biomedical Optics, vol. 17, no. 8, Article ID 086009, 2012.
68. C. Habermehl, J. Steinbrink, K.-R. Müller, and S. Haufe, "Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography," Journal of Biomedical Optics, vol. 19, no. 9, Article ID 096006, 2014.