Research Article  Open Access
Jinke Wang, Haoyan Guo, "Automatic Approach for Lung Segmentation with JuxtaPleural Nodules from Thoracic CT Based on Contour Tracing and Correction", Computational and Mathematical Methods in Medicine, vol. 2016, Article ID 2962047, 13 pages, 2016. https://doi.org/10.1155/2016/2962047
Automatic Approach for Lung Segmentation with JuxtaPleural Nodules from Thoracic CT Based on Contour Tracing and Correction
Abstract
This paper presents a fully automatic framework for lung segmentation, in which the juxtapleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connected-region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans in terms of volume difference (VD, in cm³), volume overlap error (VOE, in %), average surface distance (ASD, in mm), root mean square distance (RMSD, in mm), and maximum symmetric absolute surface distance (MSD, in mm), with an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxtapleural nodules.
1. Introduction
Multidetector CT enables chest imaging with high resolution and submillimeter isotropic voxels, which greatly promotes automatic analysis techniques in medical imaging. Precise segmentation of the pulmonary parenchyma is regarded as a critical step for automatic detection of various lung diseases. However, accurate lung segmentation often fails when abnormalities are present: abnormal tissue may be missed, or other tissues that do not belong to the lungs may be included. Thus, conventional segmentation techniques are often insufficient for segmenting the pulmonary parenchyma from chest CT datasets.
Previous work on lung segmentation can be roughly classified into two categories. The first category is threshold-based methods, which depend on the different attenuations between the lung parenchyma and its surrounding tissues [1–9]. The main limitation of these methods is that their accuracy is badly influenced by pleural abnormality or artifacts, often resulting in oversegmentation. Most threshold-based methods are two-dimensional approaches that process each axial section separately. Although this is a reasonable choice for thick-slice CT, a three-dimensional approach is preferable when isotropic data are available, since inconsistency between slices can then be avoided.
Sun et al. [1] proposed a fully three-dimensional lung segmentation and visualization technique. Firstly, in the preprocessing phase, isotropic filtering is used to improve the signal-to-noise ratio; then, wavelet-transform-based interpolation is applied to reconstruct the 3D voxels. Finally, the lung region is extracted using region growing with homogeneity and gradient features. Brown et al. [2, 3] also suggested a system framework based on 3D region growing and morphological smoothing; moreover, they proposed a semantic-network anatomical model. On the basis of attenuation threshold, shape, adjacency properties, volume, and relative position, the model can simulate the chest wall, mediastinum, bronchial tree, and left and right lungs in order to distinguish the different anatomical structures. Sun et al. [4] developed a threshold-based segmentation method addressing the missed diagnosis of large tumors. First, a normal lung shape model is constructed by training on 41 segmented datasets; second, a rib-based matching algorithm is used to initialize the contour. Since the shape model cannot capture the details of the border, a graph-cut method is then applied to recover them.
The second category is specific-abnormality-based methods, which focus on specific abnormal diseases [10–15]. Due to their specificity to particular cases, they are not applicable for routine testing of large-scale datasets.
Sofka et al. [10] from Siemens used the visible structural knowledge of chest CT to present a multistage learning method. Firstly, the method identifies the spine and the trachea; secondly, a hierarchical network is used to predict the pose parameters of the left and right lungs; thirdly, using the landmarks near the ribs and spine, a shape model is initialized and followed by a transformation operation to achieve the refinement. Korfiatis et al. [11] proposed a texture-classification-based method for interstitial lung disease. The method uses intensity-based k-means clustering for initialization, and for the pixels around the initial contour, the statistical features of intensity and wavelet coefficients are calculated for support vector classification. In order to compensate for lost juxtapleural nodules and ensure the smoothness of the lung boundary, several methods have been proposed to correct the lung contours [12, 13]. Yim and Hong [12] proposed a new curvature-based method for correcting the segmented lung boundary, in which a 3D branch-based region growing algorithm with adaptive growing conditions was utilized to segment the trachea and the left and right bronchi. Pu et al. [13] developed a lung segmentation method for reducing errors resulting from juxtapleural tumors in traditional thresholding approaches. The proposed method begins by segmenting the lung contour with thresholding and smoothing and then flooding the nonlung region of each slice; in this way, the initial border of the lung is tracked, and an adaptive border marching algorithm is utilized for reincluding the juxtapleural tumor.
In addition to the abovementioned studies, a few algorithms focusing on diverse lung scans with dense pathologies have been proposed. Sluimer et al. [16] proposed an atlas-based technique for lung segmentation with severe lesions. By registering 15 sets of chest CT to a reference lung atlas, a probability atlas is constructed, and elastic registration is then used to map the probability atlas to new scans for initialization and transformation. Finally, the trained lung border is utilized for refining the segmented lung border.
The existing methods either do not take juxtapleural tumors into consideration or are too specialized to qualify for large-scale testing. Alleviating these difficulties is exactly what we are concerned with in this paper. We developed a fully automatic framework to segment the pulmonary parenchyma with juxtapleural nodules from chest CT. It starts from skin boundary detection with maximum-connected-component analysis; then, rough segmentation of the lung contour is implemented by diagonal-based tracing, followed by the separation of the left and right lungs with a maximum cost path algorithm. The final segmentation of the pulmonary parenchyma is achieved by arc-based smoothing and concave-based correction. Our scheme is evaluated on 45 sets of CT scans, and its results are compared with a state-of-the-art method, validated against the manual segmentation standard of a radiologist.
2. Methods
In this section, the proposed framework is described in detail. It is a multistep approach that gradually accumulates information until the final result is obtained. We depict the flowchart of the framework in Figure 1. It is subdivided into three phases: skin boundary detection, contour segmentation, and parenchyma refinement. In the remainder of this section, we describe each individual step and explain how to segment the pulmonary parenchyma automatically from chest CT.
2.1. Skin Boundary Detection
Skin boundary detection is the foundation of lung segmentation. In view of the high contrast between the chest and the background, a threshold-based method is utilized for segmentation. In this section, firstly, principal-component-based image aligning is implemented to correct tilted scans; secondly, mathematical morphology operations are applied for noise reduction; and finally, by maximum-connected-region analysis, the chest mask is extracted.
2.1.1. Principal-Component-Based Image Aligning
The contour detection algorithm assumes that all patients have the same pose. In particular, it assumes that they lie upright and on their back in the scanner. This assumption is in most cases true due to the standardized CT scanning protocol. However, there are some rare cases in which the patients lie on their side, as shown in Figure 2(a). Because the border detection algorithm cannot directly handle such scans, as the starting point would be missed, an algorithm has been developed which automatically identifies scans in which patients lie on their side and rotates them accordingly.
In this paper, we limit the inclination angle to the axial plane and use the rotation method proposed in [17] for alignment. Firstly, principal component analysis [18] is applied to all the bone voxels on the axial plane to extract the first principal component v1; then v1 is mapped to the positive direction of the x axis to generate the rotation matrix with rotation angle θ, where cos θ is the projection of v1 on the x axis and sin θ is the projection of v1 on the y axis. It is assumed that v1 is orthogonal to the patient's sagittal plane and tangential to the coronal plane. It is further assumed that the angle θ between v1 and the positive x axis lies between −90° and 90°; if this is not the case, the direction of v1 is inverted by multiplying by −1.
A diagonal-based border detection algorithm is utilized in the subsequent section. From experience, the aligning algorithm needs to be applied only when θ falls outside a small range around zero; without alignment in such cases, the initial point of the lung border could be missed. When θ lies within this range, the influence on the boundary tracking algorithm can be eliminated. As shown in Figure 2, by rotating the image around its center by θ degrees, the tilted image is aligned.
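The alignment step can be sketched in a few lines of NumPy/SciPy; this is an illustrative sketch rather than the authors' implementation, and the bone threshold of 300 HU is an assumed value.

```python
import numpy as np
from scipy import ndimage

def align_axial_slice(slice_hu, bone_threshold=300.0):
    """Rotate a tilted axial slice upright using the first principal
    component of its bone voxels (bone_threshold in HU is assumed)."""
    rows, cols = np.nonzero(slice_hu > bone_threshold)
    if len(rows) < 2:
        return slice_hu  # not enough bone voxels to estimate orientation
    coords = np.column_stack([cols, rows]).astype(float)  # (x, y) pairs
    coords -= coords.mean(axis=0)
    cov = np.cov(coords, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v1 = eigvecs[:, np.argmax(eigvals)]   # first principal component
    if v1[0] < 0:
        v1 = -v1                          # keep theta within (-90, 90]
    theta = np.degrees(np.arctan2(v1[1], v1[0]))  # angle to the +x axis
    # rotating by theta degrees maps v1 onto the positive x axis
    return ndimage.rotate(slice_hu, theta, reshape=False, order=1)
```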
2.1.2. Mathematical Morphology-Based Denoising
The main problem in skin boundary detection is the existence of various external noises, including patient appendages, the bed sheet, and the CT scanner itself (Figure 3(a)). To eliminate these noises, firstly, Otsu thresholding [19] is used for binarization (Figure 3(b)); secondly, by a morphological opening operation, salt noise in the CT scan, the bed sheet, and the scanner itself are removed (Figure 3(c)). Finally, by connected-region analysis, the chest mask is determined (Figure 3(d)), and by masking the original chest scan, the final chest region is obtained (Figure 3(e)).
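The denoising pipeline above can be sketched as follows. This is an illustrative sketch, not the authors' code: Otsu's threshold is computed directly from the histogram to keep the example self-contained, and the opening size is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Otsu's threshold, computed from the grey-level histogram by
    maximizing the between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    hist = hist.astype(float)
    w0 = np.cumsum(hist)                 # class-0 weight up to each bin
    w1 = w0[-1] - w0                     # class-1 weight above each bin
    s0 = np.cumsum(hist * centers)
    valid = (w0 > 0) & (w1 > 0)
    mu0 = np.where(valid, s0 / np.where(w0 > 0, w0, 1.0), 0.0)
    mu1 = np.where(valid, (s0[-1] - s0) / np.where(w1 > 0, w1, 1.0), 0.0)
    between = np.where(valid, w0 * w1 * (mu0 - mu1) ** 2, 0.0)
    return centers[int(np.argmax(between))]

def extract_chest_mask(slice_img, opening_iterations=2):
    """Binarize with Otsu, remove small noise by morphological opening,
    and keep the largest connected component as the chest mask."""
    binary = slice_img > otsu_threshold(slice_img)
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3), bool),
                                    iterations=opening_iterations)
    labels, n = ndimage.label(opened)
    if n == 0:
        return opened
    sizes = ndimage.sum(opened, labels, index=np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```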
2.2. Rough Segmentation of Lung Contour
After skin boundary detection, we step into lung parenchyma segmentation. In this section, two procedures are applied: (1) diagonal-tracing-based lung contour initialization; (2) maximum-cost-path-based lung separation.
2.2.1. Diagonal Tracing-Based Lung Contour Initialization
A diagonal-tracing-based method is proposed for lung contour initialization; the detailed steps are as follows.
Step 1. Define the major diagonal as the searching path (Figure 4).
Step 2. Search along the path for the first occurrence of three consecutive “0” pixels, which marks the start point of the left lung.
Step 3. 8-neighborhood-based boundary tracing is utilized for boundary extraction of the left lung; denote the traced boundary point set by P.

Step 4. Once an overlap between the final two traced points and the initial two points of P is found, the algorithm ends.
In this way, the boundary of the left lung is obtained. Similarly, by searching for the start point along the minor diagonal and applying the same boundary tracing algorithm, the boundary of the right lung is obtained. Thus the initialization of the lung contour is fulfilled.
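The diagonal search and the 8-neighborhood tracing above can be sketched as follows. The sketch assumes a binary chest mask in which tissue is 1 and air is 0; skipping leading background pixels until the search has entered the body is our assumption, as is starting the trace from a pixel whose west neighbor lies outside the region.

```python
import numpy as np

# 8 neighbours of a pixel, in clockwise order starting from the west
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def find_diagonal_start(chest_mask):
    """Walk along the major diagonal of a binary chest mask (1 = tissue)
    and return the first of three consecutive 0 (air) pixels encountered
    after entering the body, taken as the start point of the left lung."""
    n = min(chest_mask.shape)
    inside_body, run = False, 0
    for i in range(n):
        if chest_mask[i, i]:
            inside_body, run = True, 0
        elif inside_body:
            run += 1
            if run == 3:
                return (i - 2, i - 2)
    return None

def trace_boundary(region, start):
    """Moore-neighbour (8-neighbourhood) boundary tracing of a binary
    region, starting from a border pixel whose west neighbour is outside."""
    rows, cols = region.shape

    def inside(p):
        return 0 <= p[0] < rows and 0 <= p[1] < cols and region[p]

    boundary = [start]
    prev = (start[0], start[1] - 1)  # backtrack pixel (outside the region)
    current = start
    while True:
        i = OFFSETS.index((prev[0] - current[0], prev[1] - current[1]))
        nxt = None
        for k in range(1, 9):  # scan neighbours clockwise from the backtrack
            dr, dc = OFFSETS[(i + k) % 8]
            cand = (current[0] + dr, current[1] + dc)
            if inside(cand):
                pr, pc = OFFSETS[(i + k - 1) % 8]
                prev, nxt = (current[0] + pr, current[1] + pc), cand
                break
        if nxt is None or nxt == start:
            return boundary
        boundary.append(nxt)
        current = nxt
```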
2.2.2. Maximum-Cost-Path-Based Lung Separation
The separation of the left and right lungs is a necessary step for accurate lung segmentation. In [20, 21], 2D edge tracking was used to find the boundaries of the left and right lungs. Hu et al. [22] separated the left and right lungs by identifying the anterior and posterior junctions using dynamic programming. In this paper, we use the dynamic programming algorithm of [22] for the separation. The algorithm is applied to each slice containing a single connected component; its target is to locate the junction of the left and right lungs and reseparate them (see Figure 5). In this method, a weight map proportional to the intensity level is used for searching the maximum cost path, which corresponds to the separation line of the left and right lungs.
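The maximum cost path search can be sketched as a simple dynamic program over the weight map; the move set of one column left, straight, or right per row is an assumption of this sketch.

```python
import numpy as np

def maximum_cost_path(weights):
    """Dynamic-programming search for the top-to-bottom path with maximum
    cumulative weight, moving at most one column sideways per row.  The
    returned column indices trace the left/right lung separation line."""
    rows, cols = weights.shape
    cost = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    cost[0] = weights[0]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            prev = int(np.argmax(cost[r - 1, lo:hi])) + lo
            cost[r, c] = weights[r, c] + cost[r - 1, prev]
            back[r, c] = prev
    path = [int(np.argmax(cost[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]
```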
Once a single connected area is found, a 2D erosion process is applied for separation, while a dilation process with constraint is used for reconstructing the original borderline. Supposing L is the original set of lung pixels, iterative erosion is adopted to calculate a new set L′ of separated lungs:

L′ = L ⊖ B ⊖ ⋯ ⊖ B (n iterations),

where ⊖ indicates binary morphological erosion and B is a binary diamond-shaped structuring element. By iterative erosion with B, L is separated into two components, and the number of iterations is denoted by n.
For the reconstruction of the lung border, iterative dilation with a constraint is used:

L_{k+1} = (L_k ⊕ B) ∩ L, with L_0 = L′,

where ⊕ represents morphological dilation and the intersection with L constrains the dilation to the original lung region, while each L_{k+1} keeps the same number of components as L′. This iteration is implemented until the result is stable or the number of components changes. Figure 5 illustrates the reconstruction process.
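The erosion/constrained-dilation procedure above can be sketched as follows, with the diamond structuring element B taken as the 4-connected structure (an assumption of this sketch).

```python
import numpy as np
from scipy import ndimage

DIAMOND = ndimage.generate_binary_structure(2, 1)  # diamond-shaped B

def separate_by_erosion(lung):
    """Erode with B until the lung mask splits into two components;
    returns the eroded mask L' and the number of iterations n."""
    eroded, n = lung.copy(), 0
    while ndimage.label(eroded)[1] < 2:
        nxt = ndimage.binary_erosion(eroded, DIAMOND)
        if not nxt.any():
            break  # mask vanished before splitting
        eroded, n = nxt, n + 1
    return eroded, n

def constrained_dilation(separated, lung):
    """Constrained reconstruction L_{k+1} = (L_k dilate B) AND L, stopped
    when the result is stable or the component count would change."""
    current = separated
    target = ndimage.label(separated)[1]
    while True:
        nxt = ndimage.binary_dilation(current, DIAMOND) & lung
        if ndimage.label(nxt)[1] != target or np.array_equal(nxt, current):
            return current
        current = nxt
```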
2.3. Pulmonary Parenchyma Refinement
In this section, two successive phases are implemented to refine the rough lung contour. We will describe the details step by step until the final pulmonary parenchyma is achieved.
2.3.1. Arc-Reconstruction-Based Border Smoothing
Many jagged edges are generated after the rough segmentation of the lungs, as shown in Figure 6. In order to smooth the contour and reduce the impact of gradient mutations, a curve smoothing method is used. Local arc coefficients are produced from multiple points, and through an appropriate number of smoothing iterations, the optimum result is obtained. Since any curve on a plane can be parameterized by its arc length s as (x(s), y(s)), the edge of the lung parenchyma can be denoted in the same form (Equation (4)). Smoothing is essentially a resampling process, with the convergence condition given by (5). When the start point and the last two points constitute an 8-neighborhood relation, a closed contour is determined. In this paper, cubic spline interpolation [23] is used for constructing the new smoothed border, and the detailed algorithm is described below.
Step 1. Resample the initial contour with a fixed step size, so that the arc length between two adjacent points is constant.

Step 2. For each point p on the border, take its adjacent points on either side. With the coordinates of each neighbor and its arc length to p, fit local polynomials in the arc length; the least squares method [24] is utilized to obtain the coefficient series of the polynomials.

Step 3. Substitute the arc lengths into the fitted polynomials to generate a new smoothed location for p.

Step 4. Repeat Steps 2 and 3 to obtain a new border set until convergence.

Step 5. Set a threshold for perimeter convergence, iterating from Steps 1 to 4 until the change in perimeter falls below the threshold.
The effect of the smoothing on the jagged border is shown in Figure 6, with the tested parameter settings provided in Table 1, from which the values used in this paper are selected.
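The smoothing loop above can be approximated in a few lines. Here a periodic smoothing spline stands in for the paper's local least-squares polynomial fit, so the `smoothing` and `n_samples` parameters are assumptions of this sketch rather than the paper's parameters.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_closed_contour(points, smoothing=10.0, n_samples=200):
    """Smooth a closed 2-D contour (N x 2 array) with a periodic cubic
    smoothing spline and resample it uniformly in the curve parameter."""
    closed = np.vstack([points, points[:1]])  # repeat first point: closed
    tck, _ = splprep([closed[:, 0], closed[:, 1]], s=smoothing, per=1)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```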

2.3.2. Concave-Discrimination-Based Border Correction
After border smoothing, appropriate detection and correction are required to solve the undersegmentation problem caused by juxtapleural nodules. The following approach aims at this target.
A concave area is delimited by the line between the start point and the rightmost or leftmost point within the step size. To determine the orientation, the right-hand rule [25] is used, and for detecting all the possible concave areas, the adaptive border marching (ABM) algorithm [13] is utilized. We have developed a model with two parameters (Figure 8) for the determination of boundary refinement: one parameter is d, the Euclidean distance between two consecutive points after the marching operation, and the other is h, the maximum height perpendicular to this connecting line segment. We define a threshold on the length-width ratio of d and h; for any concave region whose ratio exceeds this threshold, the concave area is replaced with a straight line. The ABM algorithm involves five consecutive points, as shown in Figure 7. Choose the first point as the start point and the direction toward the next point as the reference direction (indicated by the red line); because the following point is found on the right of this direction, the direction is updated accordingly. Since all the remaining points lie on the left side, a new concave point is detected (indicated by the green line). Then a new round with the new point is continued until a closed path is achieved.
Once concave region detection is completed, we step into the correction phase. On the one hand, concave area correction can reduce the missed-diagnosis rate of juxtapleural nodules; on the other hand, excessive correction will undoubtedly result in more oversegmentation errors. Therefore, a length-width-ratio-based threshold is proposed for solving this problem. The main procedure of this algorithm is described below.
Step 1. Calculate the perimeter of the lung border set.

Step 2. For all the concave points on the border set, calculate the length-width ratio d/h (see Figure 8).

Step 3. For any concave point whose ratio exceeds the ratio-based threshold, substitute the concave area with a straight line.

Step 4. Recalculate the new lung border set and its perimeter.

Step 5. Set a threshold for perimeter convergence, iterating from Steps 1 to 4 until the change in perimeter falls below the threshold.
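The ratio test in Steps 2 and 3 can be sketched as follows; the direction of the threshold test and the threshold value 4.0 are assumptions of this sketch, since the paper's exact setting is not reproduced here.

```python
import numpy as np

def ratio_d_over_h(segment):
    """Length-width ratio d/h: chord length d between the endpoints of a
    border segment over the maximum height h of the segment off the chord."""
    p0, p1 = segment[0], segment[-1]
    chord = p1 - p0
    d = float(np.hypot(chord[0], chord[1]))
    if d == 0.0:
        return np.inf
    rel = segment - p0
    # perpendicular distance of every segment point from the chord line
    heights = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / d
    h = float(heights.max())
    return np.inf if h == 0.0 else d / h

def correct_concavity(contour, i, j, ratio_threshold=4.0):
    """Replace the sub-contour between indices i and j with a straight
    line when its d/h ratio exceeds `ratio_threshold`."""
    segment = contour[i:j + 1]
    if ratio_d_over_h(segment) <= ratio_threshold:
        return contour                   # deep concavity: keep the border
    out = contour.copy()
    out[i:j + 1] = np.linspace(contour[i], contour[j], len(segment))
    return out
```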
In this paper, a fixed convergence threshold is used, and Figure 9 depicts an undersegmentation case resolved via concave correction.
3. Experimental Results and Discussion
3.1. Quality and Accessibility of Datasets
A total of 45 sets of chest CT scans from Weihai Municipal Hospital are used in the experiment, of which 20 groups were generated by a Somatom Sensation 64 scanner from Siemens Medical Systems and the other 25 groups by a Brilliance 64 scanner from Philips Medical Systems, with pixel sizes from 0.625 mm to 0.742 mm and slice thicknesses from 0.55 mm to 1.0 mm. Table 2 presents detailed information about the quality of the images, and Table 3 provides a detailed description of the juxtapleural tumors used in our experiment. All 45 groups were manually segmented by a radiologist as the gold standard. Experiments were performed in Matlab R2010a, on a quad-core Intel Core i5-4590 CPU with 8 GB of memory.


The ground truth segmentations used in this paper were obtained by the radiologists of the cooperating hospital using the manual segmentation software MITK [26], an open-source tool with a graphical user interface developed by the German Cancer Research Center. The general procedure for ground truth segmentation is as follows: first, a smoothing operation is applied to reduce the noise in the images; second, a three-dimensional region growing method is used to obtain an initial segmentation; finally, the rough segmentation results are further refined by the radiologists on the cross-sectional, sagittal, or coronal slices until the final segmentation results are satisfactory.
3.2. Evaluation Metrics and Criteria
To evaluate the segmentation performance, seven error metrics are used in this paper, which are often utilized for evaluating accuracy and complexity [6, 12]: volume difference (VD), volume overlap error (VOE), relative volume difference (RVD), average surface distance (ASD), root mean square distance (RMSD), maximum symmetric absolute surface distance (MSD), and processing time. For an automatic segmentation volume Va and a manual segmentation volume Vm, the volume metrics are defined as

VD = |Va − Vm|,  VOE = (1 − |Va ∩ Vm| / |Va ∪ Vm|) × 100%,  RVD = (Va − Vm) / Vm × 100%,

and ASD, RMSD, and MSD are defined by (7), (8), and (9), respectively, where d(v, S) represents the shortest Euclidean distance from a surface voxel v of one segmentation result to the surface of the other segmentation result S.
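The volume and surface metrics can be computed as follows for binary masks; extracting the surface by erosion is a common approximation, not necessarily the paper's exact definition, and `voxel_volume` and `spacing` are assumed parameters.

```python
import numpy as np
from scipy import ndimage

def volume_metrics(auto, manual, voxel_volume=1.0):
    """VD, VOE, and RVD between binary automatic and manual segmentations;
    `voxel_volume` converts voxel counts to physical volume."""
    a, m = auto.astype(bool), manual.astype(bool)
    va, vm = a.sum() * voxel_volume, m.sum() * voxel_volume
    inter = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    vd = abs(va - vm)
    voe = 100.0 * (1.0 - inter / union)
    rvd = 100.0 * (va - vm) / vm
    return vd, voe, rvd

def surface_distances(auto, manual, spacing=(1.0, 1.0, 1.0)):
    """Symmetric surface distances (ASD, RMSD, MSD).  Surfaces are
    approximated by the border voxels left after eroding each mask."""
    def border(mask):
        return mask & ~ndimage.binary_erosion(mask)
    a, m = border(auto.astype(bool)), border(manual.astype(bool))
    # distance of every voxel to the nearest surface voxel of the other mask
    dt_m = ndimage.distance_transform_edt(~m, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    d = np.concatenate([dt_m[a], dt_a[m]])
    return d.mean(), np.sqrt((d ** 2).mean()), d.max()
```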
Similar to the criteria used in [13], oversegmentation and undersegmentation rates are also considered as criteria for the comparative study. The oversegmentation rate is defined from the volume that is regarded as lung tissue by our method but not by the ground truth, and vice versa for the undersegmentation rate. We use the cumulative distribution to demonstrate the fit between the lung surfaces obtained by the proposed method and those of the ground truth, calculated from the shortest distance between each point on the lung surfaces obtained by the proposed method and the lung surfaces of the ground truth.
3.3. Accuracy Analysis
Table 4 shows the experimental results on the 45 chest scans, comparing the proposed method against the gold standard in terms of VD, VOE, ASD, RMSD, and MSD. In clinical practice, a VOE within 5% is generally considered acceptable, and therefore the proposed method is capable of providing clinical assistance.

The automatic segmentation results were compared with the manual segmentation results of the radiologist. Whether a juxtapleural nodule was correctly included was determined by a radiologist, who checked whether there were obvious defects in the segmentation due to juxtapleural nodules. Figure 10 shows the three-dimensional view before and after border correction; it can be seen that the juxtapleural nodules are reincluded after the correction operation. However, to a certain extent, oversegmentation error is inevitable due to overcorrection; thus, most of the VD values in Figure 11 are positive, appearing above the axis and denoting oversegmentation. Oversegmentation and undersegmentation occur in 29 and 16 sets, respectively; in other words, oversegmentation is almost twice as likely as undersegmentation.
In order to study the average distance of the segmentation error, we depict the cumulative probability distribution for under- and oversegmentation in Figure 12. The metric RVD (relative volume difference) is used to determine whether a result counts as over- or undersegmentation on the whole: a positive RVD is regarded as oversegmentation, and a negative RVD as undersegmentation.

The cumulative distance distribution, which demonstrates the fit between the lung surfaces obtained by our method and the manual segmentation standard, is formed by calculating the ASD (average surface distance) between our automatic results and the manual standard.
In Figure 12, the cumulative probabilities for over- and undersegmentation within 1 mm are 70% and 60%, respectively, while the maximum distance errors are 1.1 mm and 1.2 mm, respectively, confirming the higher probability of segmentation errors generated by oversegmentation.
3.4. Complexity Analysis
Figure 13 shows the time consumption of the main processing phases: skin boundary detection, rough segmentation of the lung parenchyma, and refinement of the lung parenchyma. Skin boundary detection accounts for 4.66% of the whole time, rough segmentation of the lung parenchyma for 19.03%, and refinement for 76.31%. The time complexity of these phases depends chiefly on the number of iterations of the maximum cost path search and on the number of iterations of the reconstruction and concave determination loops.
It can be seen from Figure 13 that the proposed scheme spends most of its time on the smoothing and correction process, in which iterative convergence accounts for a large proportion. On average, the processing time for each image is 2 seconds, while a radiologist needs about 1 minute per image for manual segmentation, which demonstrates the efficiency of the proposed scheme.
3.5. Comparison with a State-of-the-Art Method
To evaluate the performance of our method, the proposed method was compared with the state-of-the-art method proposed by Pu et al. [13]. In Pu's method, an adaptive border marching (ABM) algorithm was proposed to segment the lung and correct the segmentation defects caused by juxtapleural nodules while minimizing undersegmentation and oversegmentation relative to the true lung border. The primary emphasis and distinguishing characteristic of our proposed method is robustly correcting missed juxtapleural nodules.
Table 5 presents the lung segmentation results of our proposed method on the 45 datasets, compared to the conventional ABM-based method (Pu's method). For the final segmentation results, our method yields a mean VOE of 3.51% and ASD of 0.79 mm, while the conventional ABM-based method yields a mean VOE of 3.86% and ASD of 0.83 mm. Our method thus outperforms the conventional method by 0.35% and 0.04 mm on average in terms of VOE and ASD, respectively.

In addition, similar results were obtained in the comparative boxplot study between Pu's method and ours in Figure 14. For the ASD, there is no abnormal point in either result; meanwhile, the p value is less than 0.05 by t-test, indicating the significantly better accuracy of our method.
Table 6 lists the comparative RVD results of both methods. For the RVD of oversegmentation, our method and the conventional ABM yield means of 1.87% and 1.92%, respectively, while for the RVD of undersegmentation, they yield means of −1.58% and −1.65%, respectively. Our method outperforms the conventional ABM by 0.05% and 0.07% on average in terms of RVD on oversegmentation and undersegmentation, respectively.

Therefore, for lung tissue with juxtapleural nodules, our proposed method achieves more accurate and robust segmentation results than the conventional method. The main reason is that the lung separation operation in our method improves the accuracy of the lung contour segmentation. It can thus be deployed for accurate and robust lung segmentation with juxtapleural nodules.
For our study, the main target is to solve the segmentation problems caused by juxtapleural nodules; thus the method is not a generic tool for lungs containing other pathological lesions or abnormalities, especially near the pleura. However, GGO (ground-glass opacity) nodules are also present in our scans, some of which are attached to the pleura. Although GGO often shows an irregular shape and low intensity, and its irregular shape usually does not fit the regular concave-area detection algorithm, its low contrast with the lung parenchyma helps obtain the correct lung contour. In Figure 15(a) the lung segmentation is performed correctly, owing to the border tracing algorithm, even when a GGO nodule is attached to the pleura, whereas conventional region-growing-based methods often need further processing because of the inhomogeneity between GGO and the lung parenchyma.
We recognize that the proposed scheme still needs further improvement. As the failure case in Figure 16(a) demonstrates, oversegmentation occurred around the trachea close to the parenchyma as a result of overcorrection. In Figure 16(b), when a big vessel is located on the edge of the lung parenchyma, undersegmentation occurred because of undercorrection. It is difficult to overcome this dilemma on 2D slices, since border correction is a trade-off problem. Nevertheless, the segmentation error generated by juxtapleural nodules can be reduced significantly with an appropriate length-width ratio. Further study on how to reduce the errors caused by the trachea and vessels could help alleviate the abovementioned dilemma.
4. Conclusion
In this paper, a fully automatic lung segmentation framework for chest CT with juxtapleural nodules is proposed via five main procedures, including chest segmentation, lung border tracing, left and right lung separation, lung border smoothing, and border correction, focusing on the undersegmentation problem caused by juxtapleural nodules. Compared with manual segmentation, the volume overlap error of our approach is less than 5%, which meets clinical requirements. In addition, the time cost is about 2 seconds per image, which is more efficient than the manual cost of 1 minute per image. However, the proposed scheme tends to produce some undersegmentation, especially around the area close to the mediastinum, where the dense tracheas are located. Therefore, the border correction algorithm still needs further improvement, especially for irregular lung shapes caused by abnormal lesions. Nevertheless, compared with the traditional method, our proposed scheme achieves clear advantages in accuracy and time complexity, which indicates a potential tool for lung segmentation with juxtapleural nodules.
Consent
Informed consent was obtained from all patients for being included in the study.
Competing Interests
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of Heilongjiang Province of China (no. QC2016090) and the Science and Technology Planning Fund of Colleges and Universities of Shandong Province of China (no. J16LN60).
References
X. Sun, H. Zhang, and H. Duan, “3D computerized segmentation of lung volume with computed tomography,” Academic Radiology, vol. 13, no. 6, pp. 670–677, 2006.
M. S. Brown, M. F. McNitt-Gray, N. J. Mankovich et al., “Method for segmenting chest CT image data using an anatomical model: preliminary results,” IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 828–839, 1997.
M. S. Brown, J. G. Goldin, M. F. McNitt-Gray et al., “Knowledge-based segmentation of thoracic computed tomography images for assessment of split lung function,” Medical Physics, vol. 27, no. 3, pp. 592–598, 2000.
S. Sun, C. Bauer, and R. Beichel, “Automated 3-D segmentation of lungs with lung cancer in CT data using a novel robust active shape model approach,” IEEE Transactions on Medical Imaging, vol. 31, no. 2, pp. 449–460, 2012.
S. Ukil and J. M. Reinhardt, “Smoothing lung segmentation surfaces in three-dimensional X-ray CT images using anatomic guidance,” Academic Radiology, vol. 12, no. 12, pp. 1502–1511, 2005.
E. M. van Rikxoort, B. de Hoop, M. A. Viergever, M. Prokop, and B. van Ginneken, “Automatic lung segmentation from thoracic computed tomography scans using a hybrid approach with error detection,” Medical Physics, vol. 36, no. 7, pp. 2934–2947, 2009.
A. Hasegawa, S.-C. B. Lo, J.-S. Lin, M. T. Freedman, and S. K. Mun, “A shift-invariant neural network for the lung field segmentation in chest radiography,” Journal of Signal Processing Systems, vol. 18, no. 3, pp. 241–250, 1998.
J. K. Leader, B. Zheng, R. M. Rogers et al., “Automated lung segmentation in X-ray computed tomography: development and evaluation of a heuristic threshold-based scheme,” Academic Radiology, vol. 10, no. 11, pp. 1224–1236, 2003.
Y. Yang, S. Zhou, P. Shang, E. Qi, S. Wu, and Y. Xie, “Contour propagation using feature-based deformable registration for lung cancer,” BioMed Research International, vol. 2013, Article ID 701514, 8 pages, 2013.
M. Sofka, J. Wetzl, N. Birkbeck et al., “Multi-stage learning for robust lung segmentation in challenging CT volumes,” in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2011: 14th International Conference, Toronto, Canada, September 18–22, 2011, Proceedings, Part III, vol. 6893 of Lecture Notes in Computer Science, pp. 667–674, Springer, Berlin, Germany, 2011.
P. Korfiatis, C. Kalogeropoulou, A. Karahaliou, A. Kazantzi, S. Skiadopoulos, and L. Costaridou, “Texture classification-based segmentation of lung affected by interstitial pneumonia in high-resolution CT,” Medical Physics, vol. 35, no. 12, pp. 5290–5302, 2008.
Y. Yim and H. Hong, “Correction of segmented lung boundary for inclusion of pleural nodules and pulmonary vessels in chest CT images,” Computers in Biology and Medicine, vol. 38, no. 8, pp. 845–857, 2008.
J. Pu, J. Roos, C. A. Yi, S. Napel, G. D. Rubin, and D. S. Paik, “Adaptive border marching algorithm: automatic lung segmentation on chest CT images,” Computerized Medical Imaging and Graphics, vol. 32, no. 6, pp. 452–462, 2008.
S. Zhou, Y. Cheng, and S. Tamura, “Automated lung segmentation and smoothing techniques for inclusion of juxtapleural nodules and pulmonary vessels on chest CT images,” Biomedical Signal Processing and Control, vol. 13, no. 1, pp. 62–70, 2014.
M. N. Prasad, M. S. Brown, S. Ahmad et al., “Automatic segmentation of lung parenchyma in the presence of diseases based on curvature of ribs,” Academic Radiology, vol. 15, no. 9, pp. 1173–1180, 2008.
I. Sluimer, M. Prokop, and B. van Ginneken, “Toward automated segmentation of the pathological lung in CT,” IEEE Transactions on Medical Imaging, vol. 24, no. 8, pp. 1025–1038, 2005.
M. Kirschner, The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation, Ph.D. thesis, TU Darmstadt, 2013.
 I. T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 2002.
 N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
 W. A. Kalender, H. Fichte, W. Bautz, and M. Skalej, “Semiautomatic evaluation procedures for quantitative CT of the lung,” Journal of Computer Assisted Tomography, vol. 15, no. 2, pp. 248–255, 1991.
 L. W. Hedlund, R. F. Anderson, P. L. Goulding, J. W. Beck, E. L. Effmann, and C. E. Putman, “Two methods for isolating the lung area of a CT scan for density information,” Radiology, vol. 144, no. 2, pp. 353–357, 1982.
 S. Hu, E. A. Hoffman, and J. M. Reinhardt, “Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images,” IEEE Transactions on Medical Imaging, vol. 20, no. 6, pp. 490–498, 2001.
 H. Späth, Spline Algorithms for Curves and Surfaces, Utilitas Mathematica Publishing, 1974.
 E. Hinton and J. S. Campbell, “Local and global smoothing of discontinuous finite element functions using a least squares method,” International Journal for Numerical Methods in Engineering, vol. 8, no. 3, pp. 461–480, 1974.
 P. Bose, P. Morin, I. Stojmenović, and J. Urrutia, “Routing with guaranteed delivery in ad hoc wireless networks,” Wireless Networks, vol. 7, no. 6, pp. 609–616, 2001.
 I. Wolf, M. Vetter, I. Wegner et al., “The medical imaging interaction toolkit (MITK): a toolkit facilitating the creation of interactive software by extending VTK and ITK,” in Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, vol. 5367 of Proceedings of SPIE, May 2004.
Copyright
Copyright © 2016 Jinke Wang and Haoyan Guo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.