Journal of Healthcare Engineering
Volume 2017 (2017), Article ID 6506049, 11 pages
https://doi.org/10.1155/2017/6506049
Research Article

An Improved Random Walker with Bayes Model for Volumetric Medical Image Segmentation

1Department of Mathematics and Computer Science, Fort Valley State University, Fort Valley, GA, USA
2College of Computer Science and Technology, Zhejiang University, Hangzhou, China
3Radiology Department, Sir Run Run Shaw Hospital, Medical School of Zhejiang University, Hangzhou, China
4Graduate School of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan

Correspondence should be addressed to Yen-Wei Chen

Received 24 February 2017; Accepted 23 April 2017; Published 23 October 2017

Academic Editor: Pan Lin

Copyright © 2017 Chunhua Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The random walk (RW) method has been widely used to segment organs in volumetric medical images. However, because the number of graph nodes equals the number of voxels, it leads to a very large-scale graph, and inaccurate segmentation can result when appropriate initial seed points are unavailable. In addition, the classical RW algorithm was designed for a user to mark a few pixels with an arbitrary number of labels, regardless of the intensity and shape information of the organ. Hence, we propose a prior knowledge-based Bayes random walk framework that segments the volumetric medical image in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the shape and intensity knowledge of the target organ for the adjacent slice. Based on this prior knowledge, the object/background seed points are dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and an organ model with a Gaussian process. Finally, a high-quality segmentation is achieved automatically using the Bayes RW algorithm. Compared with the conventional RW and state-of-the-art interactive segmentation methods, our results show an improvement in accuracy for liver segmentation.

1. Introduction

Segmentation of organs from CT volumes is an important prerequisite for computer-aided surgery, computer-assisted intervention, and image-guided surgery. Accurate segmentation of an organ from clinical CT images is considered a challenging task: large variations in shape make accurate segmentation difficult, and existing lesions (e.g., tumors) introduce considerable variation in the organ's anatomical structure. To accurately segment an organ, various approaches have been proposed in the literature [1–8], such as intensity-based [9–11], classification-based [12, 13], clustering-based [14–18], statistical shape model- (SSM-) based [19, 20], probabilistic atlas- (PA-) based [21–25], active contour- (AC-) based [26, 27], and watershed-based [28, 29] segmentation methods. However, the main challenge for the abovementioned methods is the fast and efficient segmentation of large image data. This is observed particularly in medical applications, where the resolution of three-dimensional CT and MRI body scans is constantly increasing.

Recently, interactive graph-based image segmentation algorithms such as graph cut (GC) [30–36] and random walker (RW) [37–41] have attracted growing interest. The random walker algorithm represents a noteworthy recent development among weighted graph-based interactive segmentation methods. With user interaction, this technique is well suited to volumetric medical images, meeting the demands of reliability, accuracy, and speed.

However, because the classical RW algorithm is defined on a weighted graph, for a high-resolution volumetric medical image the RW method must construct a correspondingly large-scale graph and solve the resulting sparse linear system, which leads to high computation cost: long computation time and high memory usage. Hence, over the past years, a large amount of research has been conducted to extend and enhance the random walker algorithm. Grady et al. [40] extended the classical RW segmentation approach by incorporating regional intensity priors. The sparse linear equations can be solved by the preconditioned conjugate gradient method to achieve acceptable memory consumption and easy parallelization. In [41], the computational demands of RW are alleviated by introducing an "offline" precomputation before the user interacts with RW in real-time "online" operation. Using a similar principle, an offline precomputation was used to further speed up the online segmentation in [42]. Both methods used "offline" and "online" strategies to minimize the time spent waiting. In addition, Goclawski et al. [43] proposed a superpixel-based random walker method to reduce the graph size, although the computation time increases linearly with the number of superpixels, and the accuracy of the superpixels plays a directly decisive role in the organ segmentation process.

To resolve these limitations, in our previous research [44], we proposed a knowledge-based segmentation framework for the volumetric medical image in a slice-by-slice manner based on the classical random walker. This algorithm employs the previously segmented slice as prior knowledge for automatically setting the object/background seed points of the adjacent slices. It reduces the graph scale and significantly speeds up the optimization procedure on the graph. However, the classical RW algorithm was designed as a general-purpose interactive segmentation method, such that a user could mark a few pixels with an arbitrary number of labels and expect a quality result, regardless of the data set or the segmentation goal. Such a general-purpose formulation ignores the absolute intensity and shape information of the medical image itself. If a consistent intensity and shape profile characterizes an object of interest, then this information should be incorporated into the RW segmentation process.

Taking these points into consideration, in our study, we extended the classical random walker algorithm by incorporating prior (shape and intensity) knowledge into the optimization of the sparse linear system. The objective of our work is to combine the prior knowledge with the spatial cohesion of the random walker algorithm in a principled way that produces the correct result. Based on this extended random walker, we applied a knowledge-based segmentation framework to the volumetric medical image in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the prior (shape and intensity) knowledge of the target organ for the adjacent slice. With a small number of user-defined seed points, we obtain the segmentation result of the start slice in the volume, which is then used as the prior knowledge of the target organ. According to this prior knowledge, the object/background seed points are automatically defined and the corresponding Bayes model is generated. By integrating this Bayes model into the RW sparse system, the organ is automatically segmented in the adjacent slice.

The remainder of this paper is organized as follows. Section 2 presents a brief recapitulation of the random walker algorithm and then extends to incorporate the prior (shape and intensity) knowledge. Section 3 elaborates our proposed knowledge-based framework using the extended RW with the Bayes model. Section 4 contains experimental work, and Section 5 discusses the implementation of our method, followed by the conclusion (Section 6).

2. Development

The random walk algorithm treats image segmentation as an optimization problem on a weighted graph, where each node represents a pixel or voxel. Therefore, we first define the graph that we are working on and fix the notation used in the rest of the paper. Given an image, a graph G = (V, E) consists of vertices (nodes) v ∈ V and edges e ∈ E. Each node v_i in V uniquely identifies an image pixel. An edge spanning two vertices v_i and v_j is denoted by e_ij. A weighted graph assigns a weight to each edge: the weight of edge e_ij, denoted by w_ij, represents the similarity between the two neighboring nodes v_i and v_j. The degree of a vertex v_i is d_i = Σ_j w_ij over all edges e_ij incident on v_i.

2.1. Review of Random Walker Method

The random walker segmentation algorithm of [37] computes, for each pixel, the probability that a random walker leaving that pixel first arrives at a foreground seed before arriving at a background seed. It was shown in [37] that these probabilities can be calculated analytically by solving a linear system of equations with the graph Laplacian matrix. The Laplacian matrix L, indexed by vertices v_i and v_j, is defined as

L_ij = d_i, if i = j;
L_ij = −w_ij, if v_i and v_j are adjacent nodes;
L_ij = 0, otherwise.

The edge weight is set to w_ij = exp(−β(g_i − g_j)^2), where g_i and g_j indicate the image intensity at vertices v_i and v_j, respectively, and β is a tuning constant chosen by the user.
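As a concrete illustration of the definitions above, the Gaussian-weighted Laplacian of a 2D slice can be assembled sparsely. The following Python sketch is our own (the function name, the 4-connected grid neighborhood, and the default β are illustrative choices, not values from the paper):

```python
import numpy as np
from scipy.sparse import csr_matrix

def grid_laplacian(img, beta=90.0):
    """Combinatorial Laplacian of a 4-connected 2D grid graph with
    Gaussian intensity weights w_ij = exp(-beta * (g_i - g_j)^2)."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    g = img.astype(float).ravel()
    # Horizontal and vertical neighbor pairs of the grid.
    edges = np.concatenate([
        np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1),
        np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1),
    ])
    wts = np.exp(-beta * (g[edges[:, 0]] - g[edges[:, 1]]) ** 2)
    # Symmetric adjacency matrix W, then L = D - W.
    i = np.concatenate([edges[:, 0], edges[:, 1]])
    j = np.concatenate([edges[:, 1], edges[:, 0]])
    v = np.concatenate([wts, wts])
    W = csr_matrix((v, (i, j)), shape=(h * w, h * w))
    d = np.asarray(W.sum(axis=1)).ravel()
    n = h * w
    L = csr_matrix((d, (np.arange(n), np.arange(n))), shape=(n, n)) - W
    return L
```

Each row of L sums to zero and L is symmetric, as required for the Dirichlet formulation that follows.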

Given a weighted graph, a set of marked (labeled) nodes V_M, and a set of unmarked nodes V_U, such that V_M ∪ V_U = V and V_M ∩ V_U = ∅, we would like to label each node with a label s ∈ {F, B}, where F stands for the foreground and B stands for the background. Assuming that each marked node has been assigned a label, we can compute the probabilities x_i that a random walker leaving node v_i first arrives at a foreground-marked node by minimizing

E(x) = x^T L x = Σ_{e_ij ∈ E} w_ij (x_i − x_j)^2.

All nodes are divided into two sets: the marked (prelabeled) nodes V_M and the unlabeled (i.e., free) nodes V_U. Ordering the nodes so that the marked nodes come first, the Laplacian can be partitioned as

L = [ L_M  B ; B^T  L_U ],

and the above function can be reformulated as

E(x) = x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U.

Minimizing (3) with respect to x_U, the random walker problem can be solved via the following system of equations:

L_U x_U = −B^T x_M.

The variable x_U represents the set of probabilities corresponding to unmarked nodes; x_M is the set of probabilities corresponding to marked nodes (i.e., "1" for foreground nodes and "0" for background nodes). By virtue of being a probability, 0 ≤ x_i ≤ 1, and the foreground and background probabilities at each node sum to one.

The random walk algorithm is explained in detail elsewhere [37]. Next, we present how incorporating the Bayes model into the above framework yields a segmentation algorithm.
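The linear system above can be solved directly with a sparse solver. The sketch below is a minimal illustration of the partitioned solve L_U x_U = −B^T x_M (the function name and seed encoding are our own; a production implementation would use a preconditioned iterative solver as discussed in the introduction):

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def rw_probabilities(L, seed_idx, seed_vals):
    """Foreground probabilities for all nodes, given the graph Laplacian L,
    marked node indices seed_idx, and their labels seed_vals
    (1.0 = foreground seed, 0.0 = background seed)."""
    n = L.shape[0]
    marked = np.zeros(n, dtype=bool)
    marked[seed_idx] = True
    unmarked = ~marked
    L = L.tocsr()
    L_U = L[unmarked][:, unmarked]
    B_T = L[unmarked][:, marked]      # B^T in the block form of L
    x_M = np.asarray(seed_vals, dtype=float)
    x_U = spsolve(L_U.tocsc(), -(B_T @ x_M))
    x = np.empty(n)
    x[marked] = x_M
    x[unmarked] = x_U
    return x
```

On a 3-node path graph with unit weights and the two end nodes seeded as foreground and background, the middle node gets probability 0.5, as expected by symmetry.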

2.2. Random Walker with Bayes Model

According to the above prior knowledge, we can calculate the posterior probability that node v_i belongs to label s. Assuming that each label is equally likely a priori, Bayes' theorem gives the probability that a node belongs to label s as

λ_i^s = P(s | g_i) = P(g_i | s) M_s(v_i) / Σ_{q ∈ {F, B}} P(g_i | q) M_q(v_i),

where P(g_i | s) is the likelihood map for the organ and M_s is the shape map for the targeted organ. M_s can be obtained by dilating the targeted organ region in the previously segmented slice, and P(g_i | s) can be estimated from the previously segmented slice of the organ. F is the foreground, and B is the background.

Equation (6) can also be written in vector notation using Λ^s, a diagonal matrix with the values of λ_i^s on the diagonal.

According to (6), the minimum energy distribution for the external function is obtained by minimizing

E_prior(x) = x^T Λ^B x + (1 − x)^T Λ^F (1 − x),

whose unconstrained minimizer is x_i = λ_i^F / (λ_i^F + λ_i^B).

To incorporate the posterior probability function (external term) into the RW algorithm (internal term), we optimize the following energy:

E(x) = x^T L x + γ [ x^T Λ^B x + (1 − x)^T Λ^F (1 − x) ].

The first term is the driving force behind the spatial cohesion of the random walker algorithm. The second term is a Bayes penalty term whose weight γ is used to guarantee robustness to small disconnected pieces. The Bayes model is generated from the prior knowledge of the organ: its shape and intensity. In this work, the weight γ was set empirically.

The minimum energy of the above equation is obtained when x satisfies

(L + γ(Λ^F + Λ^B)) x = γ λ^F.

Restricted to the unmarked nodes, optimizing this energy leads to the system of linear equations

(L_U + γ(Λ_U^F + Λ_U^B)) x_U = γ λ_U^F − B^T x_M.

The usage of the proposed Bayes-based RW algorithm is strongly limited by the enormous size of the graph representing a 3D volumetric medical image and the necessity of solving a huge sparse linear system: relative to a 2D image, the number of unlabeled nodes increases substantially. Hence, in order to estimate the probability of each unlabeled node, the extended RW algorithm must solve a correspondingly larger linear system (equivalently, apply the inverse of a larger matrix), which leads to high computation cost: long computation time and high memory usage. We therefore integrated our extended RW algorithm into a knowledge-based framework to make it more suitable and workable for our application. The following section details this knowledge-based framework and our results.
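The prior-augmented system above changes the plain RW solve only by a diagonal term and a prior-driven right-hand side. The following sketch assumes the reconstructed system form (which follows Grady's prior-model RW); the function name, the seed encoding, and the per-node prior vectors lam_fg/lam_bg supplied as inputs are our own illustrative choices:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def rw_bayes_probabilities(L, seed_idx, seed_vals, lam_fg, lam_bg, gamma=0.05):
    """Solve (L_U + gamma*(Lam_F + Lam_B)) x_U = gamma*lam_F_U - B^T x_M,
    where lam_fg/lam_bg hold the per-node posterior values lambda_i^F,
    lambda_i^B and gamma weights the Bayes penalty term."""
    n = L.shape[0]
    marked = np.zeros(n, dtype=bool)
    marked[seed_idx] = True
    unmarked = ~marked
    L = L.tocsr()
    L_U = L[unmarked][:, unmarked]
    B_T = L[unmarked][:, marked]
    x_M = np.asarray(seed_vals, dtype=float)
    prior = diags(gamma * (lam_fg[unmarked] + lam_bg[unmarked]))
    rhs = gamma * lam_fg[unmarked] - B_T @ x_M
    x_U = spsolve((L_U + prior).tocsc(), rhs)
    x = np.empty(n)
    x[marked] = x_M
    x[unmarked] = x_U
    return x
```

With γ = 0 this reduces exactly to the classical RW solve; as γ grows, the solution is pulled toward the Bayes posterior regardless of the seeds.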

3. Knowledge-Based Framework

Our knowledge-based strategy employs the previously segmented slice as the prior (shape and intensity) knowledge of the target organ for automatic segmentation of the adjacent slice. Using a small number of user-defined seed points, we obtain the segmentation result of the start slice of the volume, which serves as the prior knowledge of the target organ. According to this prior knowledge, the object/background seed points can be dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and the organ model with a Gaussian process. Meanwhile, the corresponding Bayes model can be generated. Finally, the extended Bayes-based random walker algorithm is applied to automatically segment the whole volume in a slice-by-slice manner. In our work, "object" means the target organ to be segmented and "background" means all other tissues. The whole procedure of the proposed approach is shown in Figure 1. The method is a three-step pipeline:

(1) Selecting and segmenting the start slice, as shown in the middle part of Figure 1: (a) manually defining the object/background seed points; (b) generating a Gaussian model (GM) using the seed points; (c) segmenting the organ ("Candidate Pixels" for the liver) using the classical RW method.

(2) Segmenting the adjacent slice, as shown in the upper and bottom parts of Figure 1: (a) generating a Gaussian model (GM) from the previously segmented organ (intensity knowledge); (b) automatically setting the object/background seeds based on the region restricted by morphological operations on the previously segmented organ (shape knowledge); (c) refining the seed points based on NBT; (d) segmenting the organ using our proposed Bayes-based RW method. Thus, the whole organ in the remaining slices is segmented automatically based on the updated prior knowledge of the organ.

(3) Smoothing the boundary of the whole volume: finally, the boundary of the output volume is smoothed by the Fourier transform, forming the final organ surface.

Figure 1: The whole procedure of our knowledge-based method.

In the following sections, we introduce the start slice segmentation, the GM generation, and the automatic seed point selection, which together integrate the prior intensity and shape knowledge of the previously segmented organ.

3.1. Interactive Segmentation of the Start Slice

Our proposed segmentation is a slice-by-slice method with two main steps. The first step is to segment the start slices interactively, and the second is to segment the remaining slices automatically based on the segmented start slices. The aim of the first step (interactive segmentation of the start slices) is to find the initial region of the target organ (liver) so that it can be used as the prior (intensity and shape) knowledge of the organ in the subsequent automatic segmentation steps.

The process of the interactive segmentation of the start slices is shown in Figure 2 and involves four steps: (1) manually select one axial start slice by scanning the input CT volume along the axial axis to find a slice in which the organ has a relatively large cross section in the axial plane; (2) manually define the object/background seeds on this start slice; (3) automatically generate the thresholded image based on a Gaussian model (GM) constructed from these seeds: to remove the intercostal muscles and other nonobject parts, the object seeds are used to construct an approximate intensity model of the organ, and after estimating this statistical intensity model, it is thresholded to find "Candidate Pixels" for the organ; (4) automatically segment the thresholded image: the final step is to segment the thresholded image, restricted to the "Candidate Pixels," using the classical RW method.

Figure 2: Interactive segmentation of the start slice in CT image.
3.2. Automatic Segmentation of the Adjacent Slice
3.2.1. GM for Generation of the Thresholded Image

Constructing a Gaussian model (GM) [45] aims to estimate a new preprocessed image of the target organ so that the difference between the target organ and other tissues is easier to distinguish. As explained in the last section, the initially segmented slice can be used to estimate the statistical parameters of the liver model for the current slice. Because of the large number of liver pixels, the estimates of the statistical parameters can be trusted. A Gaussian model is employed to estimate the intensity distribution of the liver:

P(g_i | s) = (1 / (√(2π) σ_s)) exp(−(g_i − μ_s)^2 / (2σ_s^2)),

where the mean μ_s and variance σ_s^2 can be estimated from the marked seed points or from the previously segmented slice of the organ, g_i indicates the image intensity at node v_i, s = F is the object, and s = B is the background.

The intensity models are automatically determined for each slice according to the segmented organ in the previous slice. Furthermore, in order to remove some nonobject parts and obtain an accurate result, we threshold the output of this intensity model by discarding probabilities less than 0.5, thereby generating a likelihood map of the object. Comparison of the original CT image (Figure 3(b)) with the corresponding intensity model (Figure 3(c)) reveals that the liver can be distinguished from other tissues much more easily. For the background, however, the likelihood map keeps the original probability values without thresholding.

Figure 3: Steps of the RWBayes method. (a) The segmented liver (red) of the previous slice; (b) the current slice; (c) candidate pixel by thresholding the GM; (d) the rough object (red) and background (green) seed points; (e) the fine seed points using a NBT method; (f) the initial segmentation result by RWBayes; (g) smoothing the boundary by Fourier transform; and (h) visualisation of the segmented liver volume.
3.2.2. Automatic Setting of Seed Points

The main assumption in our method is that the approximate prior (shape and intensity) knowledge of the organ can be determined. Because a slice-by-slice technique is applied to segment the organ, the user segments one slice in the volume to define this prior knowledge, which is then automatically updated for the nearby slices. In this approach, assuming that consecutive slices of the same patient are highly correlated, the boundary of the organ in the next slice does not move far from its border in the previous slice. Thus, shape constraints defined from the previous slice can be used to roughly select the object/background seed points for the adjacent slice.

Assume the cross section of the liver in the ith slice is divided into parts and that the region of the organ for each part is known. For the corresponding part in the (i+1)th slice, the object and background seeds can then be defined by morphological operations on M_j, the mask of the organ corresponding to the jth part in slice i: the object seeds are taken from the erosion of M_j, and the background seeds from outside its dilation. The structuring elements used for dilation and erosion are empirically selected disks, with radii chosen according to the expected slice-to-slice variation of the boundary.

The background seed points are selected directly in the current slice, in the region that can be considered reliably outside the liver's boundary. However, as shown in Figure 3(d), many false positives (other tissues) remain among the object seeds despite eroding the liver region of the previous slice, because the previous slice cannot be segmented perfectly and the liver shape still varies between slices.
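The rough seed construction described above can be sketched with standard binary morphology. The disk radii below are purely illustrative (the paper's actual values are not reproduced here), and the function name is our own:

```python
import numpy as np
from scipy import ndimage

def rough_seeds(prev_mask, r_erode=5, r_dilate=15):
    """Rough seeds for the adjacent slice from the previous organ mask:
    object seeds inside the eroded mask, background seeds outside the
    dilated mask."""
    def disk(r):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return x * x + y * y <= r * r
    m = prev_mask > 0
    obj = ndimage.binary_erosion(m, structure=disk(r_erode))
    bkg = ~ndimage.binary_dilation(m, structure=disk(r_dilate))
    return obj, bkg
```

By construction the two seed sets are disjoint, and the unlabeled band between them is where the RW solve decides the boundary.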

3.2.3. Refinement of Seed Points

As explained above, the parameters of the GM model can be dynamically updated for the following slices. Given the liver intensity model parameters μ and σ, we can threshold within the narrow region to find the fine seed points corresponding to candidate liver pixels.

We empirically found that the appropriate narrow-band width differs between low-contrast and high-contrast datasets.

In addition, as can be seen from Figure 3(d), since the defined region may include nonliver parts (such as vessels), we threshold within the narrow band to obtain more accurate object seeds (Figure 3(e)). Thus, a pixel located in the object seed region is considered an object seed only if its intensity value lies within the narrow range around the liver model mean. After estimating the "Candidate Pixels" and the fine object/background seeds for the current slice, the Bayes-based RW algorithm is applied to segment the liver (Figure 3(f)).
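The narrow-band refinement amounts to an intensity gate on the rough object seeds. In this sketch the band is written as [μ − kσ, μ + kσ] with k an assumed width parameter (the paper's actual band limits are not reproduced), and the function name is our own:

```python
import numpy as np

def refine_object_seeds(curr_slice, rough_obj, mu, sigma, k=1.5):
    """Keep only rough object seeds whose intensity lies in the narrow
    band [mu - k*sigma, mu + k*sigma] of the liver intensity model."""
    g = curr_slice.astype(float)
    band = (g >= mu - k * sigma) & (g <= mu + k * sigma)
    return rough_obj & band
```

Seeds falling on vessels or other tissues whose intensity is outside the band are dropped, which is exactly the effect visible between Figures 3(d) and 3(e).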

3.3. Smoothing the Boundary of the Whole Volume

However, the boundary of the segmented object obtained in the last step is not smooth, as shown in Figure 3(f): when the coordinates of the boundary points are analyzed by the Fourier transform (FT), they contain a significant number of high-frequency components. Writing each boundary point as a complex number s(n) = x(n) + jy(n), the coordinates are transformed from the spatial domain into the frequency domain as

F(u) = Σ_{n=0}^{N−1} s(n) exp(−j2πun/N), u = 0, 1, …, N − 1,

where N is the number of boundary points, usually greater than 100. The boundary is smoothed by removing the high-frequency components while retaining the useful (information-bearing) low-frequency components. Hence, the first 15 components in the frequency domain are kept and then transformed back into the spatial domain (Figure 3(g)).
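The Fourier-descriptor smoothing can be sketched as below. One assumption on our part: "first 15 components" is interpreted as the 15 lowest-frequency coefficients, which for a real closed contour means keeping matching positive and negative frequencies of the FFT; the function name is our own:

```python
import numpy as np

def smooth_boundary(points, n_keep=15):
    """Smooth a closed boundary: FFT of the complex signal x + i*y,
    keep only the n_keep lowest-frequency coefficients (DC plus matching
    positive/negative frequencies), zero the rest, inverse FFT."""
    z = points[:, 0] + 1j * points[:, 1]
    Z = np.fft.fft(z)
    keep = np.zeros_like(Z)
    half = n_keep // 2
    keep[:half + 1] = Z[:half + 1]    # DC and low positive frequencies
    if half > 0:
        keep[-half:] = Z[-half:]      # matching negative frequencies
    sm = np.fft.ifft(keep)
    return np.stack([sm.real, sm.imag], axis=1)
```

On a noisy circular contour, the jitter lives in the discarded high frequencies, so the smoothed boundary stays close to the underlying circle.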

4. Results

4.1. Database

Our dataset included 26 CT images of the abdominal region with a resolution of 0.683 × 0.683 × 1 mm3 and a size of 512 × 512 × (159–263) pixels. All of the data were stored in DICOM image format with a depth of 12 bits per pixel. The data were acquired by GE LightSpeed Ultra scanners with eight detectors. The large variation of liver images was an important feature in the evaluation of our segmentation method; hence, data were acquired from normal and pathological cases aged between 20 and 75 years. The sample contained 20 normal cases and 6 pathological cases: no. 1 to no. 20 were normal cases, and no. 21 to no. 26 were pathological cases. The patients (pathological cases) were those suspected of having a disease, such as chronic liver disease, who were scanned in the course of diagnosis. To enable a quantitative evaluation of our proposed method, the liver was segmented manually in each image (i.e., subject) as the ground truth. The segmentation was performed under the guidance of a physician in order to obtain accurate liver volumes. This study was conducted with the approval of the institutional ethics committee, and all subjects provided written informed consent.

The proposed algorithm was implemented on a Mac OS-based personal computer (Intel Core i7, 2.5 GHz, 16 GB DRAM). The algorithm was coded in the MATLAB environment. Visualization of the shapes was performed using VTK [46] in C++.

4.2. Quantitative Measurement

To measure the accuracy of our method, we compared it with the conventional RW method and state-of-the-art interactive segmentation algorithms using two metrics.

4.2.1. Dice Coefficient (Dice)

The Dice coefficient is one of the most popular metrics for evaluating segmentation accuracy. It is given in percent and based on the voxels of two binary 3D volumes, with A as the manually and B as the automatically segmented organ:

Dice(A, B) = 2|A ∩ B| / (|A| + |B|) × 100%.

4.2.2. Volumetric Overlap Error (VOE)

The volumetric overlap error between two sets of voxels A and B is given in percent as

VOE(A, B) = (1 − |A ∩ B| / |A ∪ B|) × 100%.

The underlying overlap ratio |A ∩ B| / |A ∪ B| is also known as the Tanimoto or Jaccard coefficient.
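Both metrics reduce to simple voxel counts on the two binary volumes; a minimal sketch (the function name is our own):

```python
import numpy as np

def dice_voe(a, b):
    """Dice coefficient (%) and volumetric overlap error (%)
    between two binary volumes of equal shape."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 100.0 * 2.0 * inter / (a.sum() + b.sum())
    voe = 100.0 * (1.0 - inter / union)
    return dice, voe
```

For two half-overlapping masks of equal size, Dice is 50% while VOE is about 66.7%, illustrating that the two metrics penalize disagreement on different scales.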

4.3. Quantitative Validation of Liver Segmentation

To investigate the performance of our proposed segmentation method, we applied the proposed RWBayes method to the 26 clinical CT volumes described in the previous section. The segmentation results of two typical cases are shown in Figure 4. The results in Figure 4 show that the RWBayes method segments the livers accurately. A common difficulty for computer-aided liver segmentation is the erroneous inclusion of heart volumes, which our method robustly avoided. This confirms the ability of our method to segment the livers precisely.

Figure 4: Comparison of the manual segmentation (blue) with the segmentation results of our method (red). The first row is the segmentation result in case 9. The second row is the segmentation result of pathological case with the unusual liver shape in case 22.

Additional challenges come from enlarged livers, whose large shape variations make them very difficult to segment. Taking this limitation into consideration, our technique was evaluated on 26 CT scans combining normal cases and pathological cases with large morphological variations. Figure 4 shows the liver segmentation result for one pathological case, demonstrating that our proposed algorithm is robust for segmenting the liver in pathological cases with large morphological variations.

Apart from visual inspection, a quantitative evaluation was conducted. Figure 5 gives a clearer depiction of the accuracy over the 26 cases. The first 20 data points correspond to normal cases (average Dice 0.946), and the remaining 6 data points are pathological cases (average Dice 0.930). From these results, we conclude that our proposed method is robust in addressing liver segmentation (average Dice similarity coefficient 0.942). Future research will include applying our method to more datasets in order to evaluate its performance more thoroughly.

Figure 5: Our technique performed on 26 CT scans with Dice measurement. The first 20 data points are normal cases, and the remaining 6 data points are pathological cases.
4.4. Qualitative Comparison of Interactive Segmentation Methods

To evaluate the effectiveness of the proposed method (RWBayes), it was compared with the classical random walker applied directly in 3D (RW3D) [37]. Considering the memory demands of applying the RW3D algorithm, we resized all of our datasets to a lower resolution. Moreover, we also compared our proposed method with a knowledge-based framework using the classical random walker and narrow band threshold (RWNBT) [44], in which no thresholded image is generated from the Gaussian model constructed from the previously segmented liver.

Quantitative and comparative results from applying the RW3D, RWNBT, and RWBayes methods to liver segmentation are presented in Figure 6. To allow an intuitive comparison between our proposed RWBayes and RW3D methods, showing only one start slice with its segmentation result is insufficient; different slices from one dataset are therefore shown. The red regions are segmented liver slices overlaid on the original CT slices. The experiments verify that RWBayes performs significantly better than the RW3D and RWNBT methods for segmenting the liver.

Figure 6: Comparison of the liver segmentation results with RWBayes method, RWNBT method, and RW3D method in case 6.

To compare against state-of-the-art interactive segmentation algorithms, we also evaluated the graph cut algorithm (GC) [34] and the interactive K-means algorithm (IKM) [14]. Table 1 depicts the merits of our method by listing the comparative results, with the averages of Dice, VOE, and runtime between the automated and manual segmentations for all 26 test CT scans. Computation time is an important metric for evaluating a segmentation algorithm. The basis of the classical RW method is a large, sparsely occupied linear system whose size corresponds to the number of voxels in the 3D image; hence, it is slow for 3D image segmentation. A significant reduction in runtime using RWBayes-based segmentation compared with RW3D was confirmed. Meanwhile, RWBayes achieved significantly better Dice/VOE than the state-of-the-art interactive segmentation methods. For the statistical significance analysis, the p value is the probability of obtaining a test statistic at least as extreme as the one actually observed; these statistical tests demonstrated that our proposed RWBayes approach yields high-precision results relative to the conventional RW3D method.

Table 1: Segmentation accuracy obtained by the state-of-the-art methods for the liver on 26 CT scans.

5. Discussion

This paper introduced a new knowledge-based framework for organ segmentation using the RWBayes method. The proposed method segments an organ based on a set of prior knowledge: the approximate shape of the organ (shape knowledge) and the statistical parameters of the organ's intensities (intensity knowledge). Based on this prior knowledge of the organ, the proper selection of object/background seeds allows our method to accurately segment the organ from the CT image.

The basic idea of the proposed method is the high correlation between adjacent slices. Seed points for the current slice are automatically generated according to the prior knowledge from the segmented organ region of the previous slice. As shown in Figure 5, precise results were achieved in our experiments, as we used high-resolution data.

In practical clinics, however, CT images exist in various resolutions. In general, thin slices (high resolution) correspond to strong correlation while thick slices (low resolution) correspond to weak correlation.

To verify the effect of resolution on our RWBayes method, a typical CT image was resampled into 7 different resolutions along the axial (z) axis and then segmented with the same seed points. Figure 7 indicates that our proposed technique can handle CT scans with large resolution variations: regardless of image resolution, satisfactory segmentation results were achieved. In conclusion, our RWBayes method is robust in segmenting livers from CT images of various resolutions.

Figure 7: Effect of resolution on segmentation accuracy for case 1.

6. Conclusion

In this paper, we proposed a novel knowledge-based framework for organ segmentation using the RWBayes algorithm. The prior knowledge of the previously segmented organ is integrated into our strategy and has the following benefits: (1) a small-scale graph; (2) automation of object/background seed setting according to the prior knowledge of the already segmented slices; and (3) a robust segmentation technique obtained by combining a Bayes model of the organ into the sparse system to calculate the probability of each unmarked node. The evaluation of the results demonstrated the high precision of the proposed approach. Compared with the conventional RW and state-of-the-art interactive segmentation methods, our proposed method significantly improves segmentation accuracy. In future applications, the proposed method can be extended to segment other organs.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Army Research Office under Award no. W911NF-15-1-0521 from the USA and in part by the MEXT Support Program for the Strategic Research Foundation at Private Universities (2013–2017) from Japan.
