Electromagnetic Tissue Properties Imaging for Biomedical Applications
Research Article | Open Access
Robust Myocardial Motion Tracking for Echocardiography: Variational Framework Integrating Local-to-Global Deformation
Abstract
This paper proposes a robust real-time myocardial border tracking algorithm for echocardiography. Commonly, after an initial contour of the LV border is traced at one or two frames of the cardiac cycle, LV contour tracking is performed over the remaining frames. Among a variety of tracking techniques, the optical flow method is the most widely used for motion estimation of moving objects. However, when echocardiography data is heavily corrupted in some local regions, the errors drive the tracking points off the endocardial border, resulting in distorted LV contours. This shape distortion often occurs in practice, since the data acquisition is affected by ultrasound artifacts, dropouts, or shadowing phenomena of the cardiac walls. The proposed method is designed to deal with this shape distortion problem by integrating local optical flow motion and global deformation into a variational framework. The proposed descent method lets the individual tracking points follow the local motions of a specific speckle pattern, while their overall motion is confined to a global motion constraint that is approximately an affine transform of the initial tracking points. Many real experiments show that the proposed method achieves better overall performance than conventional methods.
1. Introduction
In company with the development of real-time three-dimensional echocardiography (RT3DE), the demand for automated analysis methods for left ventricle (LV) assessment, such as ejection fraction, motion analysis, and strain analysis, is rapidly increasing. Nevertheless, most analysis methods are still based on measurements in a few two-dimensional (2D) slices, because these are what is available in clinical practice [1, 2]. In general, the quantitative assessment of heart function is performed by manually tracing the endocardial border in some 2D slices of different views at frames selected from the entire cardiac cycle (such as end-systole (ES) or end-diastole (ED) frames) and automatically tracking the traced LV contour over the remaining frames [3, 4]. The motion tracking of the LV is carried out by observing the speckle pattern associated with the deforming tissue. The speckle pattern is an inherent appearance in ultrasound imaging, and its local brightness reflects the local echogenicity of the underlying scatterers. Since it is difficult to automatically track the motion of the endocardial border in ultrasound images due to ultrasound artifacts, dropouts or shadowing phenomena, low contrast, and so on, some user intervention is required for stable and successful tracking of the endocardial border.
In recent decades, there have been numerous studies on tracking LV wall motion, such as tracking methods using deformable models [5–8], active shape models [9–11], and optical flow methods [2, 12–15]. These methods have limitations in the practical application of endocardial border motion tracking. Deformable models are relatively time consuming due to iterative contour evolution with stopping criteria and often need preprocessing for speckle reduction before wall motion tracking. Active shape models are statistical methods based on datasets of training images, so they require the additional effort of training on many images. Both deformable models and active shape models provide the motion information of the LV border and enable the user to measure the volume inside the LV, but they are somewhat inadequate for strain analysis related to the motion and deformation of the heart, because they are shape-based tracking methods rather than speckle-tracking-based methods providing motion information of local regions of the myocardium.
On the other hand, optical flow methods, which use the assumption that the intensity of a moving object is constant over time, provide local motion information of the myocardium. They are capable of measuring the LV volume as well as supporting myocardial wall motion analysis or strain analysis to detect LV abnormalities. After an initial contour of the endocardial border is traced, each point on the contour tracks a specific intensity and speckle pattern through the sequential images. However, it is problematic to track the endocardial border in ultrasound images with unclear speckle patterns or weak signals. In practical environments, there often exist some incorrectly tracked points due to ultrasound artifacts, dropouts, or shadowing phenomena of the cardiac wall [16]. When edge dropout or an indistinguishable speckle pattern is present in a local neighborhood of a tracking point, the errors drive the tracking point off the endocardial border, resulting in distorted LV contours throughout the entire cardiac cycle, as shown in Figure 1, or irregular distances between the tracked points, as in Figure 2. These distorted results affect LV volume measurement and strain analysis.
In order to cope with these problems, we develop a new optical flow method equipped with a global motion constraint that is designed to prevent each tracking point from leaving the endocardial border. In the proposed model, the Lucas-Kanade (LK) optical flow method [17] and a global motion constraint, which is approximately an affine transformation of the initial tracking points, are incorporated into a variational framework. Thus the individual tracking points follow their corresponding speckle patterns, while their overall motion is confined by the global motion constraint. The global motion constraint is based on the results [18, 19] that heart motion can be regarded as a nonrigid motion composed of rotation, contraction/expansion, and shear. Typically, nonrigid motion consists of global deformation and local deformation; the global deformation is modeled by an affine transformation, while the local deformation is described by a free-form deformation.
The proposed algorithm is capable of tracking the LV border in real time, since the movement is computed directly from the difference between two sequential images via a simple matrix multiplication. For performance evaluation, we carried out various real experiments in collaboration with the Samsung Medison R&D Center (http://www.samsungmedison.com/). Numerous experiments show the better performance of the proposed tracking method compared to conventional tracking methods.
2. Methods
2.1. Conventional Optical Flow Tracking Methods
Let $I(\mathbf{x}, t)$ represent the intensity of the echocardiography image at location $\mathbf{x} = (x, y)$ and time $t$. Optical flow tracking methods are based on the assumption that the intensity of a moving object is constant over time, so that the noisy time-varying images approximately satisfy
$$\frac{\partial I}{\partial t}(\mathbf{x}, t) + \mathbf{v} \cdot \nabla I(\mathbf{x}, t) \approx 0, \tag{1}$$
where $\mathbf{v} = (v_1, v_2)$ is the velocity vector to be estimated. Based on (1), numerous approaches for estimating the velocity vector have been proposed, and these were applied to LV border tracking in echocardiography [2, 13–15].
Horn and Schunck [20] proposed an optical flow technique incorporating the smoothness of the motion vector over the entire image as a global constraint. In their model, the velocity $\mathbf{v}$ at each time is determined by minimizing the energy functional
$$E(\mathbf{v}) = \int_{\Omega} \left( \frac{\partial I}{\partial t} + \mathbf{v} \cdot \nabla I \right)^2 + \alpha \left( |\nabla v_1|^2 + |\nabla v_2|^2 \right) d\mathbf{x}, \tag{2}$$
where $\Omega$ is the image domain and $\alpha$ is a regularization parameter which controls the balance between the optical flow term and the smoothness of $\mathbf{v}$. The velocity at each time can be computed by solving the corresponding Euler-Lagrange equation, which is a reaction-diffusion equation. In [21], it has been observed that this global method with the global smoothness constraint is significantly more sensitive to noise than the local method used by Lucas and Kanade [17].
Lucas and Kanade [17] used the assumption of locally constant motion to compute the velocity at a target location $\mathbf{x}_0$ and time $t$ by forcing constant velocity in a local neighborhood of the point $\mathbf{x}_0$, denoted by $\mathcal{N}(\mathbf{x}_0)$. Following Lucas and Kanade, Barron et al. [21] estimated the velocity by minimizing a weighted least squares criterion over the neighborhood $\mathcal{N}(\mathbf{x}_0)$:
$$\mathbf{v}(\mathbf{x}_0, t) = \arg\min_{\mathbf{v}} \int_{\mathcal{N}(\mathbf{x}_0)} W(\mathbf{x} - \mathbf{x}_0) \left( \frac{\partial I}{\partial t}(\mathbf{x}, t) + \mathbf{v} \cdot \nabla I(\mathbf{x}, t) \right)^2 d\mathbf{x}, \tag{3}$$
where $W$ is a weight function that gives more relevance to central terms than to those in the periphery. Here, "arg min" stands for the argument of the minimum, that is, the vector $\mathbf{v}$ for which the integral attains its minimum value. Since this method determines $\mathbf{v}$ at each location $\mathbf{x}_0$ by combining information from all pixels in the neighborhood of $\mathbf{x}_0$, it is reasonably robust against image noise. We use (3) as the Lucas-Kanade method, because this weighted-window LK method is essentially the same as the original LK method; indeed, when the weight function is uniform, (3) reduces exactly to the Lucas and Kanade formulation.
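As an illustration of (3), the sketch below solves the weighted least-squares system for a single point given two frames. It is a minimal sketch, not the paper's implementation; the window size, the Gaussian weight, and the finite-difference gradients are illustrative assumptions.

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, half=10, sigma=5.0):
    """Estimate the velocity at pixel (x, y) between frames I0 and I1 by
    solving the weighted least-squares problem of Eq. (3) in closed form."""
    # Spatial gradients (central differences); np.gradient returns
    # derivatives along axis 0 (y) first, then axis 1 (x).
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)  # temporal difference

    ys = slice(y - half, y + half + 1)
    xs = slice(x - half, x + half + 1)
    gx = Ix[ys, xs].ravel()
    gy = Iy[ys, xs].ravel()
    gt = It[ys, xs].ravel()

    # Gaussian weights emphasizing the window center.
    u = np.arange(-half, half + 1)
    wy, wx = np.meshgrid(u, u, indexing="ij")
    w = np.exp(-(wx**2 + wy**2) / (2.0 * sigma**2)).ravel()

    # Normal equations (A^T W A) v = -A^T W b with A = [gx gy], b = gt.
    A = np.stack([gx, gy], axis=1)
    AtWA = A.T @ (w[:, None] * A)
    AtWb = A.T @ (w * gt)
    return np.linalg.solve(AtWA, -AtWb)  # (vx, vy)
```

Applied to two frames of a smoothly translating pattern, the solver recovers the translation at any point with sufficient intensity gradient in its window.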
As we mentioned in Section 1, there often exist some incorrectly tracked points due to weak signals on the cardiac wall, since echocardiography data is acquired by transmitting and receiving ultrasound signals between the ribs, causing considerable shadowing of the cardiac wall [16]. Due to these incorrectly tracked points, the LK method may produce a significantly distorted LV shape.
Recently, Sühling et al. [13] improved the weighted-window LK method (3) by introducing a linear model for the velocity along the time direction: the displacement is obtained by evaluating a velocity that is linear in time, $\mathbf{v}(\mathbf{x}_0, \tau) = \mathbf{v} + (\tau - t)\,\mathbf{a}$, such that $\mathbf{v}$ and the linear coefficient $\mathbf{a}$ minimize the spatiotemporal energy functional
$$\iint_{\mathcal{N}(\mathbf{x}_0)} W(\mathbf{x} - \mathbf{x}_0, \tau - t) \left( \frac{\partial I}{\partial \tau}(\mathbf{x}, \tau) + \mathbf{v}(\mathbf{x}_0, \tau) \cdot \nabla I(\mathbf{x}, \tau) \right)^2 d\mathbf{x}\, d\tau, \tag{4}$$
where $W$ is a symmetric window function which gives more weight to constraints at the center of the local spatiotemporal region than to those at the periphery. Since this method uses multiple frames centered around the time $t$, it is more robust than the LK method (3), which uses only the single frame at $t$. However, the same problem of LV shape distortion as in the LK method still remains.
Compared with the approaches based on the LK method, Duan et al. [15] used a region-based tracking method (also known as block matching or pattern matching) with the cross-correlation coefficient as a similarity measure. Given two consecutive images at times $t$ and $t + \Delta t$, the velocity vector for each pixel is estimated by maximizing the cross-correlation coefficient:
$$\mathbf{v}(\mathbf{x}_0, t) = \arg\max_{\mathbf{v}} \frac{\sum_{\mathbf{x} \in \mathcal{N}(\mathbf{x}_0)} I(\mathbf{x}, t)\, I(\mathbf{x} + \mathbf{v}, t + \Delta t)}{\sqrt{\sum_{\mathbf{x} \in \mathcal{N}(\mathbf{x}_0)} I(\mathbf{x}, t)^2} \sqrt{\sum_{\mathbf{x} \in \mathcal{N}(\mathbf{x}_0)} I(\mathbf{x} + \mathbf{v}, t + \Delta t)^2}}. \tag{5}$$
Instead of maximizing the cross-correlation coefficient, the velocity vector can be estimated by minimizing the sum-of-squared difference (SSD) [21] as follows:
$$\mathbf{v}(\mathbf{x}_0, t) = \arg\min_{\mathbf{v}} \sum_{\mathbf{x} \in \mathcal{N}(\mathbf{x}_0)} \left( I(\mathbf{x} + \mathbf{v}, t + \Delta t) - I(\mathbf{x}, t) \right)^2. \tag{6}$$
Block matching methods use similarity measures that are relatively insensitive to noise and tolerant of fast motion, potential occlusions, and discontinuities [15].
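The SSD criterion (6) amounts to an exhaustive search over a small displacement window. The sketch below is a minimal illustration of this idea; the block half-width and search radius are illustrative assumptions, not values from the paper.

```python
import numpy as np

def block_match_ssd(I0, I1, x, y, half=8, search=5):
    """Estimate the integer displacement of the block centered at (x, y)
    between frames I0 and I1 by minimizing the sum-of-squared differences
    over a (2*search+1)^2 candidate window."""
    ref = I0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best, best_v = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = I1[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1].astype(float)
            ssd = np.sum((cand - ref) ** 2)
            if ssd < best:
                best, best_v = ssd, (dx, dy)
    return best_v  # (dx, dy)
```

For subpixel motion, practical implementations interpolate around the SSD minimum; the exhaustive integer search above only conveys the basic mechanism.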
The above three local methods share a drawback in dealing with the problem of contour shape distortion in the presence of locally weak signals corrupted by rib shadowing and other factors. Hence, we need to develop a method that alleviates shape distortion.
2.2. Proposed Method
The proposed method uses an affine transformation to describe a global motion that is synthesized by integrating local deformations. We denote the endocardial border traced at the initially selected frame (e.g., the end-systole or end-diastole frame) by a parametric contour $C_0$ that can be identified with its tracking points $\mathbf{p}_1(0), \dots, \mathbf{p}_N(0)$. Let $C(t)$ be the contour deformed from $C_0$ at time $t$. The motion of the contour is determined by an appropriately chosen velocity $\mathbf{v}_j(t)$ indicating the time change of the tracking points $\mathbf{p}_j(t)$:
$$\frac{d}{dt}\mathbf{p}_j(t) = \mathbf{v}_j(t), \quad \mathbf{p}_j(0) \in C_0, \quad j = 1, \dots, N. \tag{7}$$
Here, we identify the contour $C(t)$ with its tracking points $\mathbf{p}_1(t), \dots, \mathbf{p}_N(t)$.
In our method, the velocity $(\mathbf{v}_1, \dots, \mathbf{v}_N)$ for each time $t$ is a minimizer of the following energy functional reflecting local-to-global deformation:
$$E(\mathbf{v}_1, \dots, \mathbf{v}_N) = \sum_{j=1}^{N} \int_{\mathcal{N}(\mathbf{p}_j)} W(\mathbf{x} - \mathbf{p}_j) \left( \frac{\partial I}{\partial t} + \mathbf{v}_j \cdot \nabla I \right)^2 d\mathbf{x} + \lambda \sum_{j=1}^{N} \left| \mathbf{p}_j + \mathbf{v}_j - \left( A(t)\,\mathbf{p}_j(0) + \mathbf{b}(t) \right) \right|^2, \tag{8}$$
where $\lambda$ is a nonnegative parameter, $W$ is the weight function as used in the LK method, and the affine coefficients $A(t) \in \mathbb{R}^{2 \times 2}$ and $\mathbf{b}(t) \in \mathbb{R}^2$ at time $t$ are given by the least-squares fit
$$\left( A(t), \mathbf{b}(t) \right) = \arg\min_{A, \mathbf{b}} \sum_{j=1}^{N} \left| \mathbf{p}_j(t) - \left( A\,\mathbf{p}_j(0) + \mathbf{b} \right) \right|^2. \tag{9}$$
The first term in (8) controls the individual tracking points to follow the local motions of a specific speckle pattern, while the second term confines their overall motion to the global motion constraint, which is approximately an affine transform of the initial tracking points.
The first term in (8) is the well-known LK optical flow term (3), which probes local motions using the blood-to-tissue intensity contrast.
The second term measures the misfit between the estimated tracking points and their projection onto the space $\mathcal{S}$ of affine transforms of the initial tracking points, given by
$$\mathcal{S} = \left\{ \left( A\,\mathbf{p}_1(0) + \mathbf{b}, \dots, A\,\mathbf{p}_N(0) + \mathbf{b} \right) : A \in \mathbb{R}^{2 \times 2},\; \mathbf{b} \in \mathbb{R}^2 \right\}. \tag{11}$$
To be precise, a careful computation yields an explicit formula for this projection in terms of the initial tracking points. Hence, the second term in (8), with this identity, reflects a global motion involving contraction, expansion, translation, and rotation.
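The projection onto the space of affine transforms of the initial tracking points can be computed by an ordinary least-squares fit. The sketch below is a minimal illustration of this projection, assuming 2D points stacked in (N, 2) arrays; it is not the paper's closed-form expression.

```python
import numpy as np

def affine_projection(p0, p):
    """Project the current points p onto the space of affine transforms of
    the initial points p0 (both of shape (N, 2)). The best-fit affine map
    is the least-squares solution of p ~ A p0 + b."""
    N = p0.shape[0]
    # Design matrix with rows [x0, y0, 1], so that M @ coef is affine in p0.
    M = np.hstack([p0, np.ones((N, 1))])
    coef, *_ = np.linalg.lstsq(M, p, rcond=None)  # (3, 2) affine coefficients
    return M @ coef  # projected points, shape (N, 2)
```

When the current points already lie in the affine space (e.g., a pure contraction of the initial contour), the projection returns them unchanged; otherwise it removes the non-affine residual that the second term of the energy penalizes.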
To compute the minimizer of the energy functional (8), we derive the Euler-Lagrange equation, which is obtained by taking the partial derivatives of the energy with respect to each velocity component; this yields the linear system (13), whose coefficients are the components of the matrix defined in (14).
The derivation of the EulerLagrange equation is given in the appendix.
For the numerical algorithm, we replace the integral over $\mathcal{N}(\mathbf{p}_j)$ in (13) by a summation over the pixels around $\mathbf{p}_j$. Assuming that the neighborhood $\mathcal{N}(\mathbf{p}_j)$ consists of $n$ pixels, (13) becomes the discrete linear system (15), in which the derivatives of $I$ are evaluated at those pixels.
For notational simplicity, let the time $t$ be fixed. Collecting the per-point optical flow terms and the affine constraint terms into block matrices, the system (15) can be written concisely as the system of linear equations (19) in block matrix form. Therefore, we can directly compute the movement $(\mathbf{v}_1, \dots, \mathbf{v}_N)$ from the formula (20), because the column vectors of the block matrix are linearly independent.
For the parameter $\lambda = 0$, the displacements given by (15) are exactly the same as those given by the LK optical flow. For $\lambda > 0$, however, (20) is also capable of controlling the global shape: the larger the parameter $\lambda$ is, the more strongly the shape constraint is imposed. The LK optical flow plays the role of the local deformation, subject to the global shape constraint represented by the relationship among all tracking points. Therefore, each point tracks efficiently while maintaining the global shape of the initial LV contour.
2.3. Heuristic Choice of the Parameter
For a heuristic choice of the parameter $\lambda$ in (8), we use various datasets of LV borders manually delineated by clinical experts. With the manually defined tracking points and displacements in a given image sequence, we define the parameter at each time as the value for which the displacements computed by (15) best match the manual displacements.
We should note that if the manual displacements exactly satisfy (15) for all tracking points and times, then this parameter is a constant independent of time.
From numerous experiments, we observed that the parameter tends to depend mainly on the contrast of the image, while its dependency on time is relatively small. We found an approximately linear relationship between the parameter and the overall tissue/blood intensity ratio.
To investigate the behavior of the parameter, we generate synthetic speckle images consisting of tissue and blood regions and test them under changing conditions, including the tissue/blood contrast, as shown in Figure 3. We use the apical long-axis view template shown in Figure 4(a). When the synthetic images are generated, it is assumed that the speckle is fully developed, so that the statistics of the echo envelope $R$ follow the Rayleigh distribution ([22, 23]) and, after log-compression $D = a \ln R + b$, the distribution of the intensities becomes the Fisher-Tippett distribution ([24, 25])
$$p(D) = \frac{2}{a} \cdot \frac{e^{2(D - b)/a}}{2\sigma^2} \exp\!\left( -\frac{e^{2(D - b)/a}}{2\sigma^2} \right),$$
where $\sigma$ is the distribution parameter of the Rayleigh distribution and $a$, $b$ are the predetermined system parameters for log-compression of the echo envelope. Finally, the synthetic images are smoothed by a low-pass filter (see Figures 4(b), 4(c), and 4(d), resp.).
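The Rayleigh/log-compression speckle model above can be sketched as follows; the helper name and the compression constants `a` and `b` are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

def synthetic_speckle(echogenicity, a=30.0, b=20.0, seed=0):
    """Generate a log-compressed speckle image from an echogenicity map.
    Fully developed speckle: the echo envelope R is Rayleigh distributed,
    with its scale set by the local echogenicity; log-compression
    D = a*ln(R) + b then yields Fisher-Tippett distributed intensities."""
    rng = np.random.default_rng(seed)
    # Rayleigh envelope with a per-pixel scale parameter.
    envelope = rng.rayleigh(scale=np.maximum(echogenicity, 1e-6))
    return a * np.log(envelope + 1e-12) + b
```

Filling the tissue region of a view template with a high echogenicity value and the blood pool with a low one, then low-pass filtering the result, reproduces the kind of synthetic frames described above.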
For modeling of the heart motion, we simulate a heart with nonrigid motion integrating global and local deformations. Figure 5(a) illustrates the deformation of the LV in four simulated images. The LV contours are represented by 13 tracking points and a natural cubic spline connecting them. Their coordinates and the displacements from the previous tracking points to the next tracking points are listed in Table 1. For convenience of computation, it is assumed that the global deformation is modeled by an affine transformation, with six fixed coefficients illustrating a contraction, and the local deformation is modeled by a free-form deformation with 0.1% perturbations of the global deformation. In the first row of Figure 5(a), the blue solid lines and the red asterisks show the LV contour and tracking points given by the defined heart motion, respectively; the green lines and asterisks indicate the LV contour at the previous frame. To generate the sequential images indicating the heart deformation, we also generate the tracking points of the epicardial contours, so that the wall thickness between the two contours changes from 20 to 25 pixels over the sequential images. Using node points consisting of the endocardial and epicardial tracking points, Delaunay triangulation meshes are generated, and the sequential images are filled using a linear spatial transformation from each mesh in the previous image to the corresponding mesh in the next image (second row).
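The per-triangle linear spatial transformation used to fill the simulated frames amounts to solving for the affine map that carries each source triangle onto its deformed counterpart. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def triangle_affine(src, dst):
    """Return the affine map sending the source triangle onto the target
    triangle. src, dst: (3, 2) arrays of vertex coordinates. Three vertex
    correspondences determine the six affine coefficients exactly."""
    M = np.hstack([src, np.ones((3, 1))])  # rows [x, y, 1] per vertex
    coef = np.linalg.solve(M, dst)         # (3, 2) affine coefficients
    return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ coef
```

Applying the map of each mesh triangle to the pixels it covers warps the previous image onto the next one, which is the "linear spatial transformation" step described above.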
(a) Tracking points
(b) Displacements
We first test the dependency of the parameter on the time $t$. For the given sequential synthetic images and the tracking points at each time step, we compute and plot the change of the parameter with time. The parameter varies within the range of 250 to 350, as shown in Figure 5(b). Using the mean value of the computed parameters, we again compute (20) and obtain displacements whose errors are within 1 pixel of the reference displacements in Table 1(b). In this test, we use a two-dimensional Gaussian weight function over a square neighborhood with side length 21 pixels. From this test, we observe that the dependency of the parameter on time is negligibly small.
Next, we test the dependency of the parameter on the tissue/blood intensity ratio. We generate two consecutive images by varying the intensity of the tissue, as described for Figure 3, and evaluate the change of the parameter with respect to the tissue/blood intensity ratio. Figure 6 shows that the relationship between the image intensity contrast and the parameter is approximately linear. This linear relationship provides an effective way of choosing the parameter depending on the tissue/blood intensity ratio.
3. Experimental Results
We test the proposed algorithm in a clinical setting using many real datasets. We compare the performance of the proposed algorithm with some widely used tracking algorithms, including the block matching tracking methods using the sum-of-squared difference (SSD) and the cross-correlation coefficient, and the LK optical flow. For the experiments, we use 35 cases of 2D echocardiography data acquired using a Samsung Medison V10 ultrasound system (Seoul, Republic of Korea) and a phased array transducer (2–4 MHz). We use 19 tracking points to track the endocardial border and form the LV contour by connecting the points with a natural cubic spline. All the experiments were conducted using MATLAB 7.5 on a laptop computer (Intel U7300 processor at 1.3 GHz and 1 GB RAM), and the computation time was about 40 milliseconds per frame.
3.1. Assessment of LV Border Tracing
A quantitative evaluation of the performance of the proposed tracking algorithm is done on real 2D image sequences. For the computation of the image derivatives, we use the standard finite difference method. We use the Hausdorff distance [26, 27] to compare the automated LV contours produced by the algorithms with the contours manually traced by a clinical expert. Here, the Hausdorff distance between two contours $C_1$ and $C_2$ is given by
$$d_H(C_1, C_2) = \max\left\{ \sup_{\mathbf{x} \in C_1} \inf_{\mathbf{y} \in C_2} |\mathbf{x} - \mathbf{y}|,\; \sup_{\mathbf{y} \in C_2} \inf_{\mathbf{x} \in C_1} |\mathbf{x} - \mathbf{y}| \right\}.$$
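For contours represented by sampled points, the Hausdorff distance above reduces to a max-min computation over the pairwise distance matrix. A minimal sketch (the discrete approximation ignores the spline interpolation between tracking points):

```python
import numpy as np

def hausdorff_distance(C1, C2):
    """Symmetric Hausdorff distance between two sampled contours, each an
    (N, 2) array of points: the largest distance from any point on one
    contour to its nearest point on the other contour."""
    # D[i, j] = Euclidean distance between C1[i] and C2[j].
    D = np.linalg.norm(C1[:, None, :] - C2[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Because it takes a maximum over worst-case points, this metric is sensitive to a single badly tracked point, which makes it a suitable measure of the contour distortion discussed above.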
For two representative cases among the 35 cases of 2D echocardiography data, the LV tracking results of the proposed method and the conventional methods are shown in Figures 7 and 8, respectively. For convenience, we refer to them as Cases I and II.
In Figure 7, the first row shows LV contours manually traced by a clinical expert on images at the ED, ES, and next ED frames of the entire cycle. The next two rows are the results of the two block matching tracking methods using the different similarity measures, sum-of-squared difference (SSD) and cross-correlation. The fourth and fifth rows are obtained by the LK optical flow and the proposed method, respectively. The three conventional methods produce distorted LV contours due to a few incorrect tracking points alienated from the real LV border. In contrast, the proposed method successfully follows the local speckle patterns without distorting the whole LV shape.
For the initial 10 sequential images, we compute the parameter by manually identifying the position corresponding to each tracking point in each image. From the computed parameters, the parameter is set to 120 according to the selection method described in Section 2.3.
In Figure 8, we test on real images having indistinguishable speckle patterns near the endocardial border. Due to the presence of these indistinguishable speckle patterns, the three conventional methods produce an irregular distribution of tracking points, as shown in the second, third, and fourth rows of Figure 8. The proposed method keeps a regular distribution of tracking points and successfully tracks the local speckle patterns. For Case II, the parameter is set to 100.
Figure 9 shows the comparison of the four different methods using the Hausdorff distance between the manually drawn contours and the automatically generated contours over the entire cycle, from an ED frame to the next ED frame. The proposed method yields the smallest errors in the final tracking results for both Cases I and II.
(a) Case I
(b) Case II
3.2. Assessment of Individual Tracking Point Errors
For the performance evaluation of the proposed algorithm, we propose an additional assessment regarding the repeatability of the local tracking points over the forward and backward cardiac cycle. Let $\{\mathbf{p}_j(t_0)\}$ be the set of initial tracking points on a manually delineated contour (see the images in the left column of Figure 7). Let $[t_0, t_1]$ be the time interval of a one-cycle image sequence between an ED frame and the next ED frame. Using the one-cycle sequence $I$, we generate a forward-backward image sequence $\tilde{I}$ defined by
$$\tilde{I}(\cdot, t) = \begin{cases} I(\cdot, t), & t_0 \le t \le t_1, \\ I(\cdot, 2t_1 - t), & t_1 < t \le 2t_1 - t_0. \end{cases}$$
Applying an automated tracking algorithm to this forward-backward sequence $\tilde{I}$, we obtain the returning tracking position $\tilde{\mathbf{p}}_j(2t_1 - t_0)$ of each point at the final time. The local tracking point assessment, called the forward-backward tracking error (FBTE), is obtained by estimating the distance between the initial position and the corresponding returning position:
$$\mathrm{FBTE}_j = \left| \mathbf{p}_j(t_0) - \tilde{\mathbf{p}}_j(2t_1 - t_0) \right|.$$
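The forward-backward assessment can be sketched generically for any point tracker; the `track` callback and the frame representation below are illustrative assumptions:

```python
import numpy as np

def forward_backward_error(track, frames, points):
    """Forward-backward tracking error (FBTE): track the points through the
    sequence and then back through the time-reversed sequence, and measure
    how far each point lands from where it started.
    track(I0, I1, pts) is any tracker returning updated (N, 2) points;
    frames is a list of frames forming one cardiac cycle."""
    fb = frames + frames[-2::-1]   # forward then backward, middle not repeated
    start = np.array(points, float)
    pts = start.copy()
    for I0, I1 in zip(fb[:-1], fb[1:]):
        pts = track(I0, I1, pts)
    return np.linalg.norm(pts - start, axis=1)  # per-point FBTE
```

A perfect tracker returns every point to its starting position (zero FBTE); any drift, including drift along the border that a contour-level metric would miss, accumulates into a nonzero per-point error.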
For the previous two representative cases, Cases I and II, Table 2 shows the comparison results of the proposed method with the conventional methods using the FBTE.

Table 3 shows the mean and standard deviation of the forwardbackward point tracking errors of the results obtained by the three conventional tracking methods and the proposed method. Tables 2 and 3 reveal that the proposed method provides improved performance compared with the conventional tracking methods.

4. Discussion and Conclusion
The proposed method controls the individual tracking points to follow the optical flow while confining their overall motion, by penalizing the misfit between the estimated tracking points and their projection onto the space of affine transforms of the initial tracking points given in (11).
We have experimentally demonstrated that the proposed method is capable of robust real-time LV border tracking even in the presence of indistinguishable portions of the LV walls in echocardiography data. In practice, echocardiography data often contains edge dropout or indistinguishable speckle patterns in a local neighborhood of a tracking point, which may drive the tracking point off the endocardial border, resulting in distorted LV contours. The proposed method effectively deals with these problems by taking advantage of an LV shape space describing a global motion that is synthesized by integrating local deformations governed by the LK optical flow model. Various experiments show that the proposed method achieves better overall performance than the widely used conventional methods, including the block matching tracking methods using the sum-of-squared difference (SSD) and cross-correlation, and the LK optical flow.
The proposed method performs LV border tracking by directly computing the displacements between two sequential images via a simple matrix multiplication. The computation time is affected by the size of the matrix, which depends on the number of tracking points.
We also proposed a new performance evaluation method for LV tracking based on the forward-backward tracking error estimation described in Section 3.2. The conventional evaluation of global tracking performance using the delineated LV contours has limitations in estimating the errors of individual tracking points: when tracking points erroneously slide along the LV border, the LV contour connecting them cannot reveal those individual tracking errors. The forward-backward point tracking error estimation provides a better assessment of local tracking performance over the whole cycle.
The proposed technique can be extended to three dimensions by using a 3D affine transformation as the global deformation.
Appendix
Derivation of the Euler-Lagrange Equation (13)
In this appendix, we derive the Euler-Lagrange equation (13) from (8). Differentiating the affine coefficient formulas (9) with respect to each velocity component yields the identities (A.1), and from the first and fourth identities of (A.1) we obtain (A.2).
Using (A.1) and (A.2), the partial derivatives of the energy functional with respect to the components of each velocity $\mathbf{v}_j$ are computed as follows.
Simple rearrangement of the above identities leads to the Euler-Lagrange equation in an intermediate form. The Euler-Lagrange equation (13) can then be obtained by a careful rearrangement of this intermediate form using the identity below.
Hence, it remains to prove the above identity; it suffices to prove the three identities of (A.7). To simplify the computation, denote the relevant sums as follows.
Then the left-hand side can be simplified as follows.
This completes the first identity of (A.7). Similarly, we obtain the remaining two identities of (A.7).
Acknowledgments
The author would like to thank the Samsung Medison R&D Center for providing the ultrasound data and for valuable suggestions. This work was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (no. 201281798 and no. 201181782).
References
V. Mor-Avi, L. Sugeng, L. Weinert et al., "Fast measurement of left ventricular mass with real-time three-dimensional echocardiography: comparison with magnetic resonance imaging," Circulation, vol. 110, no. 13, pp. 1814–1818, 2004.
F. Veronesi, C. Corsi, E. G. Caiani, A. Sarti, and C. Lamberti, "Tracking of left ventricular long axis from real-time three-dimensional echocardiography using optical flow techniques," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 174–181, 2006.
J. Hansegård, S. Urheim, K. Lunde, S. Malm, and S. I. Rabben, "Semi-automated quantification of left ventricular volumes and ejection fraction by real-time three-dimensional echocardiography," Cardiovascular Ultrasound, vol. 7, no. 1, article 18, 2009.
K. Y. E. Leung and J. G. Bosch, "Automated border detection in three-dimensional echocardiography: principles and promises," European Journal of Echocardiography, vol. 11, no. 2, pp. 97–108, 2010.
E. D. Angelini, A. F. Laine, S. Takuma, J. W. Holmes, and S. Homma, "LV volume quantification via spatiotemporal analysis of real-time 3D echocardiography," IEEE Transactions on Medical Imaging, vol. 20, no. 6, pp. 457–469, 2001.
M. M. Nillesen, R. G. P. Lopata, I. H. Gerrits et al., "Segmentation of the heart muscle in 3D pediatric echocardiographic images," Ultrasound in Medicine and Biology, vol. 33, no. 9, pp. 1453–1462, 2007.
O. I. I. Soliman, B. J. Krenning, M. L. Geleijnse et al., "Quantification of left ventricular volumes and function in patients with cardiomyopathies by real-time three-dimensional echocardiography: a head-to-head comparison between two different semiautomated endocardial border detection algorithms," Journal of the American Society of Echocardiography, vol. 20, no. 9, pp. 1042–1049, 2007.
E. O. Chukwu, E. Barasch, D. G. Mihalatos et al., "Relative importance of errors in left ventricular quantitation by two-dimensional echocardiography: insights from three-dimensional echocardiography and cardiac magnetic resonance imaging," Journal of the American Society of Echocardiography, vol. 21, no. 9, pp. 990–997, 2008.
J. Hansegård, F. Orderud, and S. I. Rabben, "Real-time active shape models for segmentation of 3D cardiac ultrasound," in Computer Analysis of Images and Patterns, vol. 4673 of Lecture Notes in Computer Science, pp. 157–164, Springer, New York, NY, USA, 2007.
F. Orderud and S. I. Rabben, "Real-time 3D segmentation of the left ventricle using deformable subdivision surfaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
M. Ma, M. van Stralen, J. H. C. Reiber, J. G. Bosch, and B. P. F. Lelieveldt, "Model driven quantification of left ventricular function from sparse single-beat 3D echocardiography," Medical Image Analysis, vol. 14, no. 4, pp. 582–593, 2010.
P. Baraldi, A. Sarti, C. Lamberti, A. Prandini, and F. Sgallari, "Evaluation of differential optical flow techniques on synthesized echo images," IEEE Transactions on Biomedical Engineering, vol. 43, no. 3, pp. 259–272, 1996.
M. Sühling, M. Arigovindan, C. Jansen, P. Hunziker, and M. Unser, "Myocardial motion analysis from B-mode echocardiograms," IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 525–536, 2005.
M. G. Linguraru, N. V. Vasilyev, G. R. Marx, W. Tworetzky, P. J. del Nido, and R. D. Howe, "Fast block flow tracking of atrial septal defects in 4D echocardiography," Medical Image Analysis, vol. 12, no. 4, pp. 397–412, 2008.
Q. Duan, E. D. Angelini, S. L. Herz et al., "Region-based endocardium tracking on real-time three-dimensional ultrasound," Ultrasound in Medicine and Biology, vol. 35, no. 2, pp. 256–265, 2009.
K. Y. E. Leung, M. G. Danilouchkine, M. van Stralen, N. de Jong, A. F. W. van der Steen, and J. G. Bosch, "Left ventricular border tracking using cardiac motion models and optical flow," Ultrasound in Medicine and Biology, vol. 37, no. 4, pp. 605–616, 2011.
B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI'81), vol. 2, pp. 121–130, 1981.
A. A. Amini, R. W. Curwen, and J. C. Gore, "Snakes and splines for tracking non-rigid heart motion," in Computer Vision—ECCV'96, vol. 1065 of Lecture Notes in Computer Science, pp. 249–261, Springer, New York, NY, USA, 1996.
M. Li, C. Kambhamettu, and M. Stone, "Nonrigid motion recovery for 3D surfaces," Image and Vision Computing, vol. 25, no. 3, pp. 250–261, 2007.
B. Horn and B. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 2, pp. 185–203, 1981.
J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
E. Brusseau, C. L. de Korte, F. Mastik, J. Schaar, and A. F. W. van der Steen, "Fully automatic luminal contour segmentation in intracoronary ultrasound imaging—a statistical approach," IEEE Transactions on Medical Imaging, vol. 23, no. 5, pp. 554–566, 2004.
A. Sarti, C. Corsi, E. Mazzini, and C. Lamberti, "Maximum likelihood segmentation of ultrasound images with Rayleigh distribution," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 52, no. 6, pp. 947–960, 2005.
D. Kaplan and Q. Ma, "On the statistical characteristics of log-compressed Rayleigh signals: theoretical formulation and experimental results," Journal of the Acoustical Society of America, vol. 95, no. 3, pp. 1396–1400, 1994.
G. Slabaugh, G. Unal, M. Wels, T. Fang, and B. Rao, "Statistical region-based segmentation of ultrasound images," Ultrasound in Medicine and Biology, vol. 35, no. 5, pp. 781–795, 2009.
D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850–863, 1993.
E. Belogay, C. Cabrelli, U. Molter, and R. Shonkwiler, "Calculating the Hausdorff distance between curves," Information Processing Letters, vol. 64, no. 1, pp. 17–22, 1997.
Copyright
Copyright © 2013 Chi Young Ahn. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.