Complexity / 2020 / Research Article | Open Access
Special Issue: Solving Engineering and Science Problems Using Complex Bio-inspired Computation Approaches

Kun Zhang, JunHong Fu, Liang Hua, Peijian Zhang, Yeqin Shao, Sheng Xu, Huiyu Zhou, Li Chen, Jing Wang, "Multiple Morphological Constraints-Based Complex Gland Segmentation in Colorectal Cancer Pathology Image Analysis", Complexity, vol. 2020, Article ID 6180457, 16 pages, 2020. https://doi.org/10.1155/2020/6180457

Multiple Morphological Constraints-Based Complex Gland Segmentation in Colorectal Cancer Pathology Image Analysis

Guest Editor: Zhile Yang
Received: 11 May 2020
Accepted: 23 Jun 2020
Published: 28 Jul 2020

Abstract

Histological assessment of glands is one of the major concerns in colon cancer grading. Because poorly differentiated colorectal glands cannot be accurately segmented by existing methods, we propose an approach for segmenting glands in colon cancer images based on the characteristics of lumens and rough gland boundaries. First, we use a U-net for stain separation to obtain H-stain, E-stain, and background stain intensity maps. Subsequently, epithelial nuclei are identified on the histopathology images, and lumen segmentation is performed on the background intensity map. Then, we use similar triangles built on the axis of least inertia as the spatial characteristics of lumens and epithelial nuclei, and a triangle membership function is used to select glandular contour candidates from the epithelial nuclei. By connecting lumens and epithelial nuclei, more accurate gland segmentation is performed based on the rough gland boundary. The proposed stain separation approach is unsupervised; it makes the category information contained in the H&E image easy to identify and copes with uneven stain intensity and inconspicuous stain differences. In this project, we use deep learning to achieve stain separation by predicting the stain coefficients. Within the deep learning framework, we design a stain coefficient interval model to improve stain generalization performance. A further innovation is that we combine the internal lumen contour of the adenoma with the outer contour of the epithelial cells to obtain a precise gland contour. We compare the performance of the proposed algorithm against several state-of-the-art methods on publicly available datasets. The results show that the segmentation approach combining the characteristics of lumens and rough gland boundaries achieves better segmentation accuracy.

1. Introduction

Colon cancer usually arises from the epithelium (the tissue lining the lumens of blood vessels, organs, and body surfaces); such tumors are called adenocarcinomas (malignant tumors formed by gland structures in epithelial tissue) [1]. The disease alters both the distribution of cells and the structure of glands, and pathologists are able to accurately detect such small abnormalities in a biopsy [2–4].

With the increasing popularity of histopathology images, digital pathology provides a viable solution to the detection problem. Histopathology image analysis can extract quantitative morphological features and can be used for computer-assisted cancer grading [5]. In histopathology, thin sections of potentially diseased tissue are fixed on a glass slide and stained to show specific structural or functional details [6, 7]. By scanning the entire slide with a scanner, digitized images of those slides can be obtained, making histopathology suitable for image analysis [8, 9].

Colon histopathology image analysis is the basis of the primary detection of colon lesions [10]. The gland structure is shown in Figure 1(a). A typical colon gland histopathology image contains four tissue components: lumen, cytoplasm, epithelial cells, and stroma (connective tissue, blood vessels, nerve tissue, etc.). The lumen is surrounded by oval structures, the epithelial cells [11, 12]. The whole structure is bounded by a thick line of epithelial cell nuclei.

In clinical practice, pathologists use glands as the objects of interest, examining their structural morphology and gland formation [13, 14]. In particular, automated gland segmentation in H&E images allows pathologists to extract important morphological features to determine prognosis and plan treatment for individual patients [15]. However, digital histopathology images contain noise and homogeneous regions that hinder gland detection and segmentation. Zoltan et al. [16] developed two diagnostic modules, one for gland detection and the other for nuclei detection; in gland detection, the HSV and LAB color spaces are used for color segmentation, and glands are identified using connected component analysis. Because of large differences between tissue preparation protocols, staining programs, and scanning characteristics, stain normalization of histopathology images provides a tool to ensure the efficiency and stability of such systems. Daniel et al. [17] used a normalization technique that matches the mean and standard deviation of each channel of the target tissue image to those of a template image through a set of linear transformations in the LAB color space. In order to segment a large number of color images into meaningful structures, Banwari et al. [18] proposed a thresholding approach based on image intensity. These approaches rely on differences in tissue structure and color and are not suitable for segmenting adherent glands or glands mixed with stroma, which require complex correction algorithms to obtain accurate results. The active contour segmentation approach proposed by Cohen [19] relies on the characteristics of the gland structures; however, variations in tissue slice thickness and stain fading change the color distribution of the tissue image, and the gland model is not suitable for glands with incomplete boundaries.
The above conventional approaches mainly rely on glands' appearance and contour features. The appearance features derive from the nucleus, cytoplasm, and epithelial cells: Sirinukunwattana et al. [20], Jacobs et al. [21], and others used low-level features such as color, texture, and edges to identify glands. The contour features are based on the gland structure being surrounded by epithelial cells: Sirinukunwattana et al. [22] and Fu et al. [23] showed that spatial random field models segment benign gland contours well but are not suitable for segmenting malignant, diseased glands.

With the recent development of deep learning in this field, it has become possible to apply deep learning to histopathology images. Roth et al. [24] proposed a multilevel deep convolutional neural network for automated pancreatic segmentation. Ronneberger et al. [25] proposed using U-net for histopathology image segmentation. The deep contour-aware network proposed by Chen et al. [26] illustrates that contours play an important role in gland segmentation. The double parallel branch deep neural network proposed by Wang et al. [27] combines contours and other features to accurately segment glands. In addition, Xu et al. [28] proposed fusing complex multichannel region and boundary patterns for gland instance segmentation with side supervision. This work was extended in the study of Xu et al. [29], which included additional information to enhance performance. Raza et al. [30] proposed a multi-input multi-output network (MIMO-Net) for gland segmentation and achieved state-of-the-art performance. All of the above approaches require a large number of manual annotations, but labeling a large number of histopathology images is very difficult. Zhang et al. [31] used a deep adversarial network for unannotated images, achieving consistently good segmentation performance.

Although the previous approaches have achieved certain promising results in gland segmentation, automated segmentation of glands is still a challenging task due to the complexity of histopathology images and the diversity of gland morphology, especially for the gland lesions shown in Figure 1(b). For normal glands, epithelial cells can be clearly distinguished from the surrounding environment [32]. For malignant glands, epithelial cells are usually intermingled with the stroma, the epithelial nuclei are not easily distinguished from stromal nuclei [33], and glands may even be attached to each other. In this situation, we note that the lumen is a defining structure of the gland. This structure can help decision-making because its presence and morphology indicate the grade of cancers [34]. It is observed that the lumen of a gland and the gland boundary have certain similarities in shape, and the lumen can be segmented more accurately than other structures of the gland. We therefore propose a gland segmentation approach based on the correlation between the lumen and the gland boundary.

Our proposed approach first uses a U-net for stain separation to obtain H-, E-, and background stain intensity maps. Subsequently, epithelial nuclei are identified on the histopathology image. Taking into account that the lumen is similar to the background, as shown in Figure 2, the histopathology image is then used as the input of the framework proposed in [35] to obtain the rough gland boundary and epithelial nuclei, and the lumen is segmented using the improved SPF approach reported in [36]. Finally, based on the correlation between the lumen and the gland boundary, we select the best gland contour from the candidate contours so as to segment glands attached to each other. One innovation of our method is that we are the first to use deep learning to achieve stain separation: deep learning predicts the stain coefficients, and within this framework we design a stain coefficient interval model using Gaussian distributions, so that we obtain an interval for each coefficient rather than a single value, improving stain generalization performance. Another innovation is that we use multiple morphological constraints to find the optimal tumor contour based on the internal (lumen) and external (epithelium) contours. The proposed approach is evaluated on the 2015 MICCAI GlaS Challenge dataset and a colon adenocarcinoma dataset, yielding satisfactory segmentation outcomes.

2. Materials and Methods

2.1. Histopathology Image Stain Separation-Based Deep Learning Framework

The proposed stain separation framework is shown in Figure 3. Gaussian U-net Stain Separation (GUSS) makes the information contained in H&E images easy to identify, thus overcoming the influence of uneven staining intensity or large differences between H&E images. Traditional stain separation methods require a manually set standard stain matrix and cannot separate multiple stains at the same time; we use deep learning to achieve this function. First, the histopathology image is used as the input of the model, and a U-shaped encoder-decoder model is constructed for stain separation. The network is composed of three parts, contracting, bridge, and expanding paths, to complete the stain separation of the H (hematoxylin), E (eosin), and B (background) channels. The contracting path reduces the spatial dimension of the feature maps while increasing their number layer by layer [37–40], compressing the input image into a compact feature. The bridge connects the contracting and expanding paths. This U-shaped encoder-decoder model is extended to a multitask model: besides the output of the U-net, we also use the most compact features to predict the stain color matrices, which comprise the mean and variance of the stain color values of the hematoxylin, eosin, and background paths. The expanding path gradually recovers the details of the target and the corresponding spatial dimensions, and its output is used for the prediction of the pixelwise intensity map. The network is divided into ten residual branches. Prior to each residual branch of the expanding path, the upsampled lower-level feature maps are concatenated with the feature maps from the corresponding contracting path. The residual units effectively avoid the problem of vanishing gradients during backpropagation [41].
In addition, each residual branch includes Convolution, Max Pooling, BN (Batch Normalization), and ReLU (Rectified Linear Unit) layers, which effectively accelerate convergence [42].

The model is trained by minimizing the reconstruction loss between the input image and each reconstructed outcome; the original image goes into a 10-branch network, F1–F10, for stain separation. The contracting path is composed of branches F1–F4; the fifth branch, F5, is the bridge connecting the contracting and expanding paths and implements the stain color matrix prediction. The expanding path consists of branches F6–F9, and the output of the tenth branch, F10, is used for stain intensity matrix prediction. In the stain color matrix prediction, the F5 features are first flattened into a vector, and two fully connected layers are deployed, with an intermediate layer of 500 nodes and an output layer of 9 nodes, representing the R, G, and B values of the three stain channels. During training, the proposed model predicts the stain concentrations for each pixel as well as the parameters (mean and variance) of a series of Gaussian distributions that are sampled to form an estimate of the stain matrix.
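As an illustration of the stain color matrix prediction, the sampling step can be sketched as follows. This is a minimal NumPy sketch, not the authors' code: the function names, the unit-normalization of each stain vector, and the example mean values are our assumptions; it only shows how a 3 × 3 stain matrix (rows: H, E, background; columns: R, G, B) is drawn from the 9 predicted means and standard deviations.

```python
import numpy as np

def sample_stain_matrix(mu, sigma, rng):
    """Draw one stain color matrix from the predicted Gaussians.

    mu, sigma: length-9 arrays (a mean and a standard deviation per
    stain-matrix entry, as predicted by the network head).
    Rows of the result: H, E, background; columns: R, G, B.
    """
    eps = rng.standard_normal(9)       # reparameterized Gaussian noise
    matrix = (mu + sigma * eps).reshape(3, 3)
    # Normalize each stain vector to unit length (a common convention
    # for stain color matrices; an assumption, not stated in the paper).
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix / np.maximum(norms, 1e-8)

rng = np.random.default_rng(0)
mu = np.array([0.65, 0.70, 0.29,     # hematoxylin RGB (typical values)
               0.07, 0.99, 0.11,     # eosin RGB
               0.27, 0.57, 0.78])    # background RGB
sigma = np.full(9, 0.01)             # low std: samples stay close to mu
M = sample_stain_matrix(mu, sigma, rng)
```

With a small predicted standard deviation, each sampled stain vector stays close to the (normalized) predicted mean, which is exactly the behavior the interval model relies on.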

Figure 4 shows an example of this process.

For each of the stains contained in the image, the proposed method predicts three distributions, one for each of the RGB color channels. One probability distribution may, for instance, represent the red value of the hematoxylin stain; a value sampled from it forms the estimate of that red value. This process is repeated for each of the distributions, which are combined to form the estimated stain matrix. The mean of each distribution represents the value to which the model has assigned the most probability, while the standard deviation describes how certain the model is that a value sampled from the distribution will result in a low reconstruction error.

Continuing the example above, assume a given distribution represents the red value of hematoxylin; if its mean is 0.5 and its standard deviation is low, then a value sampled from it has a high chance of being close to 0.5. If the true red value of hematoxylin is close to 0.5, the sampled value results in a low reconstruction loss; conversely, if the true red value is far from 0.5, the sampled value results in a very high reconstruction loss. If the model predicts a large standard deviation, the sampled value will vary greatly and produce a large reconstruction loss even if the mean is correct. At the optimum, therefore, each predicted mean is close to the true value and the standard deviations are low.

For the stain separation task, in order to assess the separation quality, the following reconstruction loss is defined:

L_rec = (1 / (MN)) Σ_{m=1}^{M} Σ_{n=1}^{N} ‖ I_{m,n} − Î_{m,n} ‖²,

where I_{m,n} represents the nth pixel of the mth image and Î_{m,n} represents the corresponding predicted (reconstructed) pixel.
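The reconstruction loss can be illustrated as follows, assuming (as in most stain separation work) a linear Beer–Lambert style mixing model in optical-density space; the function names and array shapes here are our choices, not the paper's.

```python
import numpy as np

def reconstruct(concentrations, stain_matrix):
    """Reconstruct the image from per-pixel stain intensities.

    concentrations: (num_pixels, 3) predicted intensity of H, E, and
    background at each pixel; stain_matrix: (3, 3) sampled color matrix.
    Assumes linear (Beer-Lambert style) mixing, our assumption here.
    """
    return concentrations @ stain_matrix

def reconstruction_loss(image, concentrations, stain_matrix):
    """Mean squared error between the input and its reconstruction."""
    diff = image - reconstruct(concentrations, stain_matrix)
    return float(np.mean(diff ** 2))

# Toy check: a perfect factorization reconstructs with zero loss.
rng = np.random.default_rng(1)
M = rng.random((3, 3))                 # stain color matrix
C = rng.random((16, 3))                # stain intensities for 16 pixels
I = C @ M                              # "observed" image
loss = reconstruction_loss(I, C, M)
```

A stain matrix sampled far from the true one (or a large predicted variance) inflates this loss, which is the signal that drives both the means and the standard deviations during training.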

Features are extracted from the histopathology image by the network described above and then passed to a number of subbranches that predict the stain intensity of each pixel and the parameters (mean and variance) of a series of Gaussian distributions. For each pixel in the image, the intensities of the three stain channels (hematoxylin, eosin, and background) are predicted.

2.2. Segmentation of Lumens from the Background Channel Based on the SPF-Level Set Method

Considering that lumens are one of the key components distinguishing glands, we segment lumens from the background channel after the stain separation. The SPF (Signed Pressure Force) function is constructed using the statistical information of the image, so that it maintains or even enhances the prominent foreground target. Similar to the classical C–V model [43], the contour C divides the image I into two regions, inside(C) and outside(C), and the global intensity distribution of the image is used to construct the SPF. The stain intensity distribution functions of the regions inside(C) and outside(C) are represented by P1 and P2:

P_i(I(x)) = (1 / (√(2π) σ_i)) exp(−(I(x) − μ_i)² / (2σ_i²)), i = 1, 2,

where μ_i and σ_i are the mean and standard deviation of the Gaussian distribution of the stain intensity in the corresponding region. In the level set approach, the contour C is embedded as the zero level set {x : φ(x) = 0} of a level set function φ, with φ > 0 inside C and φ < 0 outside C. We can use the above stain intensity distribution functions to construct the following new SPF function:

spf(I(x)) = (P1(I(x)) − P2(I(x))) / max_x |P1(I(x)) − P2(I(x))|.

The level set equation is

∂φ/∂t = spf(I(x)) · α |∇φ|,    (4)

where α is a constant balloon force parameter controlling the evolution speed.

Algorithm implementation is as follows:
(i) Step 1. Initialize the level set function φ, and set the parameter α.
(ii) Step 2. Calculate the SPF function spf(I(x)).
(iii) Step 3. Evolve the level set function through (4).
(iv) Step 4. Binarize φ (set φ = 1 if φ > 0 and φ = −1 otherwise), and apply Gaussian filtering to smooth the curve.
(v) Step 5. Examine whether the level set function has converged; if not, return to Step 2.

The lumen segmentation process is shown in Figure 5. The lumen contour is obtained from the background channel by the above algorithm.
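The SPF level set algorithm above can be sketched on a toy image as follows. This is an illustrative NumPy implementation, not the authors' code: the SPF here is normalized by P1 + P2 (a bounded variant of the formulation in the text, chosen for numerical stability on this toy example), and all parameters are arbitrary.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Gaussian density used for the region statistics P1 and P2."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def gaussian_smooth(a, sigma):
    """Separable Gaussian filtering (the regularization in Step 4)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    a = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, a)

def spf_level_set(image, phi, iters=20, alpha=2.0, sigma=1.0):
    """SPF-driven level set evolution on a toy image (illustrative)."""
    for _ in range(iters):
        inside, outside = image[phi > 0], image[phi <= 0]
        p1 = gaussian_pdf(image, inside.mean(), inside.std() + 1e-8)
        p2 = gaussian_pdf(image, outside.mean(), outside.std() + 1e-8)
        spf = (p1 - p2) / (p1 + p2 + 1e-8)   # bounded SPF in [-1, 1]
        phi = phi + alpha * spf              # Step 3: evolve the curve
        phi = np.where(phi > 0, 1.0, -1.0)   # Step 4: binarize ...
        phi = gaussian_smooth(phi, sigma)    # ... and smooth
    return phi

# Toy example: a bright "lumen" square on a darker background.
rng = np.random.default_rng(0)
img = 0.2 + 0.02 * rng.standard_normal((32, 32))
img[8:24, 8:24] = 0.8 + 0.02 * rng.standard_normal((16, 16))
phi0 = -np.ones((32, 32))
phi0[12:20, 12:20] = 1.0                     # small initial contour
mask = spf_level_set(img, phi0) > 0
```

Starting from a small seed inside the bright region, the contour expands until the inside/outside Gaussian statistics stabilize around the square.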

We use the spatially constrained CNN (SC-CNN) for nucleus detection and the softmax CNN for nucleus classification [35]. We use the H-stain intensity map obtained from the stain separation as the input of SC-CNN to locate the nuclei. Since the detected nuclei include both epithelial and stromal nuclei, nucleus classification is required. For classification, the morphology (shape, size, color, and texture) of the nucleus is employed; therefore, the original RGB histopathology image is selected as the input of the softmax CNN, and the resulting pixel set E_p represents the epithelial cell nuclei. We select the epithelial nuclei closest to the stromal nuclei as rough gland boundary pixels so as to obtain the rough gland boundary N.

2.3. Lumen and Rough Gland Boundary Feature Representation Based on the ALI (Axis of Least Inertia)

The axis of least inertia is the line that minimizes the integral of the squared distances to all points on the shape boundary; its physical meaning is that the moment of inertia of the shape about this axis is the smallest. It is a unique reference line for representing the shape of the target, and, from its physical definition, it must pass through the centroid of the shape. Mathematically, let the line be Ax + By + C = 0; then, the axis of least inertia minimizes

I = Σ_{(x,y)∈Ω} (Ax + By + C)² / (A² + B²),

where Ω is the set of edge points. Using the condition that the axis of least inertia passes through the centroid (x̄, ȳ), that is, A x̄ + B ȳ + C = 0, the coefficients B and C can be obtained. To describe the outline of the shape, a structure-based shape descriptor commonly used in boundary description is the chain code: it is a widely used descriptor that represents the target contour by a sequence of straight lines in given directions. Matching with a chain code depends on the choice of the first boundary pixel in the sequence. From a selected start point, a chain sequence is generated using a 12-direction chain code (the number of directions is chosen based on our experience).
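The computation of the axis of least inertia can be sketched as follows; this is a small NumPy illustration (names our own) using the standard second-moment formulation, in which the axis passes through the centroid and points along the principal eigenvector.

```python
import numpy as np

def axis_of_least_inertia(points):
    """Axis of least inertia of a 2D point set.

    The axis passes through the centroid and points along the
    eigenvector of the second-moment matrix with the LARGEST
    eigenvalue: spread is maximal along the axis, so the summed
    squared perpendicular distances to it are minimal.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    moments = centered.T @ centered             # 2x2 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(moments)
    direction = eigvecs[:, np.argmax(eigvals)]  # unit vector along axis
    return centroid, direction
```

For points sampled from the line y = 2x, for example, the returned direction is parallel to (1, 2).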

As shown in Figure 6(a), the axis of least inertia is used as the reference axis, and a coordinate system is established together with its perpendicular line, with the lumen centroid O as the origin. Following the direction chain, each of the four quadrants of the coordinate system is equally divided into three sectors, generating a chain sequence with 12 directions: the 0-direction is perpendicular to the axis of least inertia on the side of the point closest to the lumen, and each subsequent direction, from the 0- to the 11-direction, is rotated counterclockwise by 30°. The 12 rays with the point O as vertex intersect the lumen contour at points l_0, l_1, ..., l_11 (the intersection of the lumen outline with each direction is unique); these 12 points constitute the chain code representing the lumen contour, and, similarly, the intersections of these rays with the epithelial nucleus set E_p represent candidate contour chain codes. In Figure 6(b), l_0 and l_1 are the intersections of the 0- and 1-directions with the lumen contour, respectively, and the triangle formed by the three points O, l_0, and l_1 is a characteristic triangle of the lumen. e_0 and e_1 are intersections of the 0- and 1-directions with the epithelial nucleus set E_p, respectively, and the triangles formed by these points represent the gland's candidate region; note that there may be multiple epithelial nuclei in each direction. The similarity measure is performed using a triangular membership function. For each feature triangle, let α, β, and γ be its inner angles, for which

α + β + γ = π.

Then, the triangular membership function is computed from the inner angles α, β, and γ and from d, the Euclidean distance between the vertices of the feature triangle. Comparing the membership value of a feature triangle of the lumen with the membership value of the corresponding feature triangle of the gland candidate region gives the similarity between them.

The overall similarity is the sum of the individual triangle similarities over all K characteristic triangles; if this total similarity exceeds a given threshold, the two contours are considered similar.
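The feature triangle comparison can be illustrated as follows. Since the exact membership formula is not recoverable here, the sketch uses a simple angle-ratio membership as a stand-in; only the inner-angle computation and the similarity-as-membership-difference structure follow the text.

```python
import numpy as np

def inner_angles(a, b, c):
    """Inner angles (alpha, beta, gamma) of triangle abc, law of cosines."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    la = np.linalg.norm(b - c)     # side opposite vertex a
    lb = np.linalg.norm(a - c)     # side opposite vertex b
    lc = np.linalg.norm(a - b)     # side opposite vertex c
    alpha = np.arccos((lb ** 2 + lc ** 2 - la ** 2) / (2 * lb * lc))
    beta = np.arccos((la ** 2 + lc ** 2 - lb ** 2) / (2 * la * lc))
    gamma = np.pi - alpha - beta   # the angles always sum to pi
    return alpha, beta, gamma

def membership(a, b, c):
    """Stand-in membership value in (0, 1]: smallest over largest
    inner angle (equal to 1 for an equilateral triangle)."""
    angles = inner_angles(a, b, c)
    return min(angles) / max(angles)

def triangle_similarity(t1, t2):
    """Similarity of two feature triangles via membership difference."""
    return 1.0 - abs(membership(*t1) - membership(*t2))
```

Two congruent (or similarly shaped) triangles yield a similarity near 1, which is the property the contour matching relies on.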

The proposed approach finds an accurate gland outline based on the following two constraints.

The target contour based on the epithelial nucleus set E_p should be similar to both the lumen contour and the rough gland boundary N, which gives a feature similarity constraint.

The target contour should also be close to the rough gland boundary N; thus, we have a distance constraint, in which i represents the direction index, J_i represents the number of epithelial nuclei in the i-direction, b_i represents the intersection of the i-direction with the rough gland boundary N, and e_{i,j} represents the epithelial nuclei in the i-direction. Taking the 0-direction as the start direction, the similarity of the feature triangles in each direction is evaluated counterclockwise. Taking Figure 6(b) as an example, the candidate feature triangles are first compared with the lumen feature triangle and the outer contour L, and the candidate contour point in direction 1 is determined by constraint condition equations (11) and (14). Similarly, the candidate contour point in direction 1 is used as the reference starting point to determine the candidate contour point in direction 2. After sequentially determining candidate contour points in the 12 directions, a candidate contour chain is formed. Assuming that there are J candidate points in the starting reference direction 0, J candidate contours are formed according to the above method. In Figure 6(c), brownish yellow marks the lumen contour, orange marks the lumen contour feature triangle, and red marks one of the obtained candidate contours. The optimal gland contour is determined from the candidates according to constraint equation (15), and finally, the gland contour is smoothed by cubic spline interpolation.
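The direction-by-direction candidate selection can be sketched as a greedy search. This is a simplified stand-in for constraint equations (11), (14), and (15): the score that trades feature similarity against distance to the rough boundary, and all names, are our assumptions.

```python
import numpy as np

def select_contour(candidates, boundary_pts, similarity_fn, w=0.5):
    """Greedy candidate-contour selection (illustrative only).

    candidates:   list of 12 arrays; candidates[i] holds the epithelial
                  nucleus positions found along direction i
    boundary_pts: (12, 2) array of rough-boundary intersection points
    similarity_fn(i, p): feature-triangle similarity of candidate p in
                  direction i (higher is better)
    w: weight trading similarity against distance to the rough boundary
    """
    contour = []
    for i, cands in enumerate(candidates):
        cands = np.asarray(cands, dtype=float)
        dists = np.linalg.norm(cands - boundary_pts[i], axis=1)
        dists = dists / (dists.max() + 1e-8)     # normalize to [0, 1]
        scores = [w * similarity_fn(i, p) - (1 - w) * d
                  for p, d in zip(cands, dists)]
        contour.append(cands[int(np.argmax(scores))])
    return np.asarray(contour)
```

With equal similarity scores, the candidate closest to the rough boundary wins in every direction, mirroring the distance constraint.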

2.4. Experiment Results and Discussion
2.4.1. Data

The image dataset comprises the Gland Segmentation (GlaS) Challenge dataset organized for MICCAI 2015 together with our own dataset. Our own dataset includes 100 calibrated pathological images of benign and malignant colon adenocarcinoma, taken from 34 H&E-stained pathological sections of colon adenocarcinoma with cancer stage T3 or T4. The slices belong to different patients and were processed in different laboratory environments, so the dataset is highly diverse in both staining distribution and tissue structure. The pathological slides were scanned whole to obtain digital images with a pixel resolution of 0.465 microns; the full-frame images were resampled to a pixel resolution of 0.620 microns (equivalent to 20x magnification). We then crop them randomly to a size of 128 × 128 and augment them to 22,000 patches for training and validation of the models. The nuclei were manually annotated by an experienced pathologist; since this study needs to identify epithelial nuclei, the annotations distinguish epithelial nuclei from others.
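The random cropping and augmentation step can be sketched as follows; the flip/rotation augmentations are our assumption (the paper states only that patches are cropped and augmented to 22,000 pieces).

```python
import numpy as np

def random_patches(image, n, size=128, rng=None, augment=True):
    """Randomly crop n size x size patches with flip/rotation augmentation."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    out = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)       # random top-left corner
        x = rng.integers(0, w - size + 1)
        patch = image[y:y + size, x:x + size].copy()
        if augment:
            patch = np.rot90(patch, k=int(rng.integers(4)))  # 0-270 deg
            if rng.integers(2):
                patch = patch[:, ::-1]          # horizontal flip
        out.append(patch)
    return np.stack(out)
```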

2.4.2. Stain Separation

The dataset consists of 22,000 histopathology patches of size 128 × 128 each. This work employs the ADAM optimizer, and the initial learning rate of 1e-3 is gradually reduced at the end of each epoch. Figure 7 shows the H&E image stain separation result. The results indicate that the background, H-, and E-stains of histopathology images can be successfully separated while the structure of the tissue is retained.

The pathological images containing the complete glandular structure are cropped without overlap to a size of 128 × 128, and insufficient areas are zero-padded. Figure 8 shows the H- and E-stain separation results for pathological images from two different datasets. The results show that, for pathological images with different sources and large staining differences, the deep learning stain separation method successfully separates the H- and E-stains, and the separation results are consistent while the tissue structure is maintained.

After stain separation, the H- and E-stains can be distinguished. We have no ground truth with which to quantitatively evaluate the separation, but the blue-violet staining characteristic of nuclei can be visualized. H-stain maps are obtained by the two traditional stain separation methods mentioned in [44, 45] and by the deep learning-based method, and, on the basis of the stain separation, nucleus segmentation is used to evaluate the separation quality. Figure 9 shows the process of segmenting nuclei on the H-stain images: first, the H-stain image is converted into a grayscale image and then into a binary image serving as a nucleus segmentation mask; finally, the segmentation mask is overlaid on the original pathological image so that we can analyze the quality of the nucleus segmentation.
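The grayscale-and-binarize step can be sketched as follows, using Otsu thresholding as an assumed binarization method (the paper does not name the thresholding technique).

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a grayscale image with values in [0, 1]."""
    hist, edges = np.histogram(gray, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class probability below t
    mu = np.cumsum(p * centers)             # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    between[~np.isfinite(between)] = 0.0    # ignore degenerate splits
    return centers[int(np.argmax(between))]

def nucleus_mask(h_stain_rgb):
    """H-stain image -> grayscale -> binary nucleus mask.

    Nuclei carry more hematoxylin and appear darker, so the mask keeps
    pixels BELOW the Otsu threshold.
    """
    gray = h_stain_rgb.mean(axis=2)         # simple grayscale conversion
    return gray < otsu_threshold(gray)
```

The resulting boolean mask can be overlaid on the original RGB image for the visual comparison described above.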

A singular value decomposition method based on optical density and an independent component analysis method in the wavelet domain (the two traditional methods [44, 45]) and the deep learning method proposed here are used to separate the same pathological tissue image, and the H-stain image is used for nucleus segmentation. Figures 9(a)–9(c) are the H-stain images obtained by the three stain separation methods, Figures 9(d)–9(f) are their corresponding grayscale images, Figures 9(g)–9(i) are their corresponding binary images, and Figures 9(j)–9(l) are the outcomes after overlaying the binary segmentation masks on the original image. Comparing the results in Figures 9(j)–9(l), it can be seen that the poorer stain separation in Figures 9(a) and 9(b) leads to oversegmentation or undersegmentation of the nuclei.

Figure 10 shows the comparison results of the different methods. The Mikto method was designed for cell segmentation, so it can only be used to isolate the H-stain. Color deconvolution (CD) is a classic stain separation method, but manual intervention is required to determine the optimal stain matrix; the CD method preserves the structure but cannot separate the background color well. SDSA is the latest method, separating stains using statistical analysis of multiresolution stain data. It can be seen that SDSA successfully isolates the H-stain, but when there are more than two stains in the image, its separation outcome is poor.

2.4.3. Lumen Segmentation

The segmentation of glands depends on the interaction between the rough gland boundary and the lumen, so the lumen must be segmented accurately. In the experiment, the segmentation results of the original SPF approach and the improved SPF approach on the lumens are compared. In the level set framework, with binary initialization and Gaussian filter regularization, the SPF yields satisfactory segmentation.

Since the improved SPF approach is based on statistical information, and the lumen and background probabilities tend to be consistent in the background channel obtained from stain separation, some small background blocks in the image are also segmented. These small targets are removed from the segmented image, and the final segmentation result is shown in Figure 11(d).

Multiple segmentation techniques (e.g., DRLSE, LBF, LGDF, and LIF) are used to segment the glandular cavity. As shown in Figure 12, the DRLSE model produces incomplete segmentations, while the LGDF model can separate the cavity from other areas; the LIF and LBF models are not suitable for segmenting the gland cavity. These models also suffer from longer running times and more iterations. The new SPF-level set segmentation method overcomes these shortcomings: the comparison shows that the proposed model is easy to implement, and its computation time is only 21 s, lower than that of the other active contour models.

2.4.4. Gland Segmentation

This work is evaluated on the public GlaS dataset and compared with the other methods in the GlaS competition. We use 100 images for training and 65 for testing, where 45 test images belong to test set A and the remaining 20 belong to test set B. For quantitative analysis, we use the F1 score, object Dice, and object Hausdorff. For the Hausdorff distance, lower values are better; for the other measures, higher values are better. Table 1 shows the quantitative results; the proposed method produces competitive results compared with the algorithms presented in the competition. The proposed-N approach is based only on the rough gland boundary obtained from the epithelial nuclei, and the proposed-N + L approach is based on both the rough gland boundary N and the lumen contour L. The algorithm first uses the deep learning method to perform stain separation; different targets, such as lumens, epithelial cells, and nuclei, can then be accurately segmented on this basis. On test set A, the proposed algorithm performs slightly worse on the F1 score and object Dice but performs best on test set B. In terms of shape similarity measured via object Hausdorff, the lower scores indicate that, in malignant cases, the method benefits from the morphological features of the lumens, so the results have higher shape similarity to the ground truth.


Table 1: Quantitative comparison on the GlaS Challenge dataset (test sets A and B).

| Method | F1 score (A) | F1 score (B) | Object Dice (A) | Object Dice (B) | Object Hausdorff (A) | Object Hausdorff (B) |
| --- | --- | --- | --- | --- | --- | --- |
| Proposed-N + L | 0.901 | 0.851 | 0.893 | 0.842 | 44.125 | 94.528 |
| Proposed-N | 0.886 | 0.816 | 0.886 | 0.823 | 45.236 | 103.686 |
| CUMedVision1 | 0.868 | 0.769 | 0.867 | 0.800 | 74.596 | 153.646 |
| CUMedVision2 | 0.912 | 0.716 | 0.897 | 0.781 | 45.418 | 160.347 |
| ExB1 | 0.891 | 0.703 | 0.882 | 0.786 | 57.413 | 145.575 |
| ExB2 | 0.892 | 0.686 | 0.884 | 0.754 | 54.785 | 187.442 |
| ExB3 | 0.896 | 0.719 | 0.886 | 0.765 | 57.350 | 159.873 |
| Freiburg1 | 0.834 | 0.605 | 0.875 | 0.783 | 57.194 | 146.607 |
| Freiburg2 | 0.870 | 0.695 | 0.876 | 0.786 | 57.093 | 148.463 |
| CVML | 0.652 | 0.541 | 0.644 | 0.654 | 155.433 | 176.244 |
| LIB | 0.777 | 0.306 | 0.781 | 0.617 | 112.706 | 190.447 |
| Vision4GlaS | 0.635 | 0.527 | 0.737 | 0.610 | 107.491 | 210.105 |
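The evaluation measures can be illustrated with minimal NumPy implementations; these compute a plain Dice overlap and a symmetric Hausdorff distance between point sets, whereas the GlaS object-level measures additionally match segmented objects to ground-truth objects before averaging.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two 2D point sets:
    the largest distance from any point in one set to its nearest
    neighbor in the other set."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Dice rewards area overlap, while Hausdorff penalizes the worst boundary deviation, which is why it captures shape similarity for distorted malignant glands.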

We compare the proposed approach with the state-of-the-art algorithms [17, 22, 27, 46, 47] on our independent dataset. The relevant measurement indicators are shown in Table 2. It can be seen from Table 2 that the proposed approach produces the best segmentation results. Figure 13 shows the ROC curves of the different algorithms.


Table 2: Comparison with state-of-the-art algorithms on our independent dataset.

| Method | Accuracy (%) median | Accuracy (%) mean | Accuracy (%) std | Dice median | Dice mean | Dice std |
| --- | --- | --- | --- | --- | --- | --- |
| Bassem et al. [17] | 78.56 | 77.32 | 9.12 | 0.763 | 0.750 | 0.120 |
| Sirinukunwattana et al. [22] | 80.51 | 79.14 | 8.36 | 0.780 | 0.770 | 0.098 |
| Linbo et al. [27] | 81.11 | 80.48 | 6.52 | 0.801 | 0.795 | 0.089 |
| Kainz et al. [46] | 82.53 | 81.08 | 6.13 | 0.825 | 0.815 | 0.076 |
| Guannan et al. [47] | 85.32 | 83.60 | 5.32 | 0.841 | 0.832 | 0.062 |
| Proposed | 88.34 | 86.91 | 3.72 | 0.874 | 0.869 | 0.047 |

As can be seen from Table 2, the proposed segmentation approach based on lumens and rough gland boundaries improves the average pixel accuracy by at least 3%, and the Dice similarity coefficient improves by 0.033. At the same time, the standard deviations of the pixel accuracy and Dice are low, indicating that the segmentation approach is relatively stable and can effectively handle abnormal glands. Figure 14 shows the segmentation results for multiple instances in our independent dataset, where green marks the manually annotated contour and yellow marks the segmentation contour produced by the different methods.

As can be seen from Figure 14, gland segmentation methods based only on epithelial cell nuclei, such as the one proposed in [17] and our proposed-N, rely too heavily on the accuracy of nucleus recognition: inaccurate nucleus recognition directly leads to inaccurate gland segmentation. The polygonal approximation method, such as the one proposed in [12], cannot detect the external contour of the gland. The double-parallel-branch method [27], which combines the gland interior and its contour, segments the gland contour more accurately, but it sometimes fails to separate adhering glands. In summary, for malignant and complex tumor images, our proposed method produces better segmentation results.

3. Conclusions

Histological assessment of glands is one of the challenges in colon cancer grading. Analysis of histological slides stained with hematoxylin and eosin is considered the "gold standard" in histological diagnosis. However, relying on visual analysis alone is time-consuming and laborious, as pathologists need to thoroughly examine each case to ensure an accurate diagnosis. To improve the diagnostic ability of automated approaches, we proposed an approach for accurately segmenting glands in colon histopathology images based on the characteristics of lumens and gland boundaries. First, this work constructed a U-net for stain separation of H&E images to obtain H-, E-, and background stain intensity maps. Subsequently, the epithelial nuclei are identified on the histopathology images, and lumen segmentation is performed on the background intensity map. Then, the axis of least inertia and the chain code are used to represent the lumen and gland boundary features. Based on the detected lumens and epithelial nuclei, more accurate gland segmentation is performed using the rough gland boundary.
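For context, the classical model that a stain separation network learns to approximate is Beer-Lambert unmixing in optical density space. The sketch below uses the widely cited Ruifrok-Johnston H&E stain vectors as a fixed stain matrix; it is a hand-rolled baseline, not the per-pixel coefficients predicted by the U-net in this work.

```python
import numpy as np

# Rows are unit optical-density vectors for hematoxylin, eosin,
# and a residual/background channel (standard Ruifrok-Johnston values).
STAINS = np.array([[0.65, 0.70, 0.29],
                   [0.07, 0.99, 0.11],
                   [0.27, 0.57, 0.78]])

def separate(rgb):
    """Classical stain unmixing for an RGB image with values in (0, 1].
    Returns per-pixel stain concentrations, shape (H, W, 3)."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))   # Beer-Lambert optical density
    conc = od.reshape(-1, 3) @ np.linalg.pinv(STAINS)
    return conc.reshape(rgb.shape)

# A pixel holding one unit of pure hematoxylin is recovered as ~[1, 0, 0].
pixel = np.exp(-STAINS[0]).reshape(1, 1, 3)
print(separate(pixel)[0, 0])
```

A learned separation replaces the fixed pseudo-inverse with predicted coefficients, which is what allows it to cope with uneven stain intensity across slides.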

The main contributions of the approach include three points. First, a new unsupervised stain separation approach was proposed, which makes the category information contained in the H&E image easier to identify and handles uneven stain intensity and inconspicuous stain differences; experiments demonstrated the superiority of this stain separation approach. Second, this work developed and combined a new set of features for gland segmentation that considers the morphological characteristics of the internal lumen of the gland structure. During carcinogenesis, the lumen of the gland usually undergoes obvious distortion, which makes the surrounding epithelial cells irregularly arranged, but most remain distributed around the lumen. Therefore, the axis of least inertia was used to represent the characteristics of lumens and gland boundaries. Since lumens are more independent and easier to segment than epithelial cells, the lumen-based segmentation can separate glands attached to each other; the results showed that the proposed approach improved segmentation accuracy. Finally, this work presented a feature representation of lumens and gland boundaries, and we will continue to study its application to benign and malignant feature extraction of tumors.
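As an illustration of the shape feature, the axis of least inertia of a segmented region can be computed from second-order central moments. This is a generic sketch under the usual moment formulation, not the exact feature pipeline of this work.

```python
import numpy as np

def axis_of_least_inertia(mask):
    """Centroid and orientation (radians, measured from the x-axis)
    of the axis of least inertia of a binary region."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()            # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

# An elongated horizontal blob has its axis along the x-axis (theta ~ 0).
m = np.zeros((20, 40), dtype=bool)
m[8:12, 5:35] = True
(cx, cy), theta = axis_of_least_inertia(m)
print(round(theta, 4))  # 0.0
```

Anchoring a lumen's orientation this way gives a rotation-tolerant reference for relating the lumen to the surrounding epithelial nuclei.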

Data Availability

The data used to support the findings of this study were supplied by the Nantong University under license and so cannot be made freely available. Requests for access to these data should be made to Kun Zhang at zhangkun_nt@163.com.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by Invest NI, the National Natural Science Foundation of China (no. 61671255), the Natural Science Foundation of Jiangsu Province, China (Grant no. BK20170443), and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province, China (Grant nos. 17KJB520030, 18KJB510038, and 19KJA350002) and in part by the program for “Qing Lan Project” of Colleges and Universities in Jiangsu Province (Grant no. XNY-039).

References

  1. R. Peyret, A. Bouridane, F. Khelifi et al., "Automatic classification of colorectal and prostatic histologic tumor images using multiscale multispectral local binary pattern texture features and stacked generalization," Neurocomputing, vol. 275, pp. 83–93, 2018.
  2. R. Farjam, H. Soltanian-Zadeh, K. Jafari-Khouzani et al., "An image analysis approach for automatic malignancy determination of prostate pathological images," Cytometry Part B: Clinical Cytometry, vol. 72B, no. 4, pp. 227–240, 2007.
  3. T. Gultekin, C. F. Koyuncu, C. Sokmensuer, and C. Gunduz-Demir, "Two-tier tissue decomposition for histopathological image representation and classification," IEEE Transactions on Medical Imaging, vol. 34, no. 1, pp. 275–283, 2015.
  4. R. Awan, K. Sirinukunwattana, D. Epstein et al., "Glandular morphometrics for objective grading of colorectal adenocarcinoma histology images," Scientific Reports, vol. 7, no. 1, 2017.
  5. R. Awan, S. Al-Maadeed, and R. Al-Saady, "Using spectral imaging for the analysis of abnormalities for colorectal cancer: when is it helpful?" PLoS One, vol. 13, no. 6, Article ID e0197431, 2018.
  6. A. Chaddad, C. Desrosiers, A. Bouridane et al., "Multi texture analysis of colorectal cancer continuum using multispectral imagery," PLoS One, vol. 11, no. 2, Article ID e0149893, 2016.
  7. K. Sirinukunwattana, J. P. Pluim, H. Chen et al., "Gland segmentation in colon histology images: the GlaS challenge contest," Medical Image Analysis, vol. 35, pp. 489–502, 2017.
  8. E. Ahn, A. Kumar, M. Fulham et al., "Convolutional sparse kernel network for unsupervised medical image analysis," Medical Image Analysis, vol. 56, pp. 140–151, 2019.
  9. Z. Ding, G. Fleishman, X. Yang et al., "Fast predictive simple geodesic regression," Medical Image Analysis, vol. 56, pp. 193–209, 2019.
  10. H. Sokooti, G. Saygili, B. Glocker et al., "Quantitative error prediction of medical image registration using regression forests," Medical Image Analysis, vol. 56, pp. 110–121, 2019.
  11. L. Chen, P. Bentley, K. Mori et al., "Self-supervised learning for medical image analysis using image context restoration," Medical Image Analysis, vol. 58, p. 101539, 2019.
  12. X. Yi, E. Walia, and P. Babyn, "Generative adversarial network in medical imaging: a review," Medical Image Analysis, vol. 58, p. 101552, 2019.
  13. P. Kainz, "Semantic segmentation of colon glands with deep convolutional neural networks and total variation segmentation," PeerJ, vol. 5, p. e3874, 2017.
  14. O. Ronneberger, "U-net: convolutional networks for biomedical image segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Munich, Germany, October 2015.
  15. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
  16. K. Zoltan, T. Zoltan, S. Sandor et al., "Colon cancer diagnosis on digital tissue images," in Proceedings of the 2013 IEEE 9th International Conference on Computational Cybernetics (ICCC), Tihany, Hungary, July 2013.
  17. B. B. Daniel, "A structure-based approach for colon gland segmentation in digital pathology," in Biomedical Optics and Imaging—Proceedings of SPIE, vol. 9791, San Diego, CA, USA, January 2016.
  18. A. Banwari, N. Sengar, M. K. Dutta et al., "Automated segmentation of colon gland using histopathology images," in Proceedings of the 2016 Ninth International Conference on Contemporary Computing (IC3), Noida, India, August 2016.
  19. A. Cohen, E. Rivlin, I. Shimshoni et al., "Memory based active contour algorithm using pixel-level classified images for colon crypt segmentation," Computerized Medical Imaging and Graphics, vol. 43, pp. 150–164, 2015.
  20. K. Sirinukunwattana, D. R. Snead, and N. M. Rajpoot, "A novel texture descriptor for detection of gland structures in colon histopathology images," in Biomedical Optics and Imaging—Proceedings of SPIE, vol. 9420, 2015.
  21. J. G. Jacobs, E. Panagiotaki, and D. C. Alexander, "Gleason grading of prostate tumours with max-margin conditional random fields," in International Workshop on Machine Learning in Medical Imaging, Springer, Berlin, Germany, 2014.
  22. K. Sirinukunwattana, D. R. Snead, and N. M. Rajpoot, "A stochastic polygons model for gland structures in colon histopathology images," IEEE Transactions on Medical Imaging, vol. 34, no. 11, pp. 2366–2378, 2015.
  23. H. Fu, G. Qiu, J. Shu et al., "A novel polar space random field model for the detection of gland structures," IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 764–776, 2014.
  24. H. R. Roth, L. Lu, A. Farag et al., "DeepOrgan: multi-level deep convolutional networks for automated pancreas segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Munich, Germany, October 2015.
  25. O. Ronneberger, P. Fischer, and T. Brox, "U-net: convolutional networks for biomedical image segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Munich, Germany, October 2015.
  26. H. Chen, X. Qi, L. Yu et al., "DCAN: deep contour-aware networks for accurate gland segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, June 2016.
  27. L. Wang, H. Zhen, X. Fang et al., "A unified two-parallel-branch deep neural network for joint gland contour and segmentation learning," Future Generation Computer Systems, vol. 100, pp. 316–324, 2019.
  28. Y. Xu, Y. Li, M. Liu et al., "Gland instance segmentation by deep multichannel side supervision," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, October 2016.
  29. Y. Xu, Y. Li, Y. Wang et al., "Gland instance segmentation using deep multichannel neural networks," IEEE Transactions on Biomedical Engineering, vol. 64, no. 12, pp. 2901–2912, 2017.
  30. S. E. A. Raza, L. Cheung, D. Epstein et al., "Gland segmentation using multi-input-multi-output convolutional neural network," in Proceedings of the Annual Conference on Medical Image Understanding and Analysis, Springer, Edinburgh, Scotland, July 2017.
  31. Y. Zhang, L. Yang, J. Chen et al., "Deep adversarial networks for biomedical image segmentation utilizing unannotated images," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Quebec City, QC, Canada, September 2017.
  32. N. Alsubaie, N. Trahearn, S. E. A. Raza et al., "Stain deconvolution using statistical analysis of multi-resolution stain colour representation," PLoS One, vol. 12, no. 1, Article ID e0169875, 2017.
  33. F. Bray, J. Ferlay, I. Soerjomataram et al., "Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries," CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394–424, 2018.
  34. H. Greenspan, B. Van Ginneken, and R. M. Summers, "Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1153–1159, 2016.
  35. K. Sirinukunwattana, R. Ahmed, E. Shan et al., "Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histopathology images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1196–1206, 2016.
  36. C. Li, J. Su, L. Yu et al., "A variational level set approach image segmentation model with application to intensity inhomogeneous magnetic resonance imaging," Multimedia Tools and Applications, vol. 77, no. 23, pp. 30703–30727, 2018.
  37. G.-G. Carlos, S. Luisa, C.-R. J. Luis et al., "Multi-GPU development of a neural networks based reconstructor for adaptive optics," Complexity, vol. 2018, Article ID 5348265, 9 pages, 2018.
  38. C. Xin, Y. Zuyuan, Y. Chao et al., "Sparse gene coexpression network analysis reveals EIF3J-AS1 as a prognostic marker for breast cancer," Complexity, vol. 2018, Article ID 1656273, 12 pages, 2018.
  39. G. Tatjana, S. Zoran, M. Branislav et al., "Follow-up and risk assessment in patients with myocardial infarction using artificial neural networks," Complexity, vol. 2017, Article ID 8953083, 8 pages, 2017.
  40. R. V. Sole, R. Ferrer-Cancho, J. M. Montoya et al., "Selection, tinkering, and emergence in complex networks," Complexity, vol. 8, no. 1, pp. 20–23, 2002.
  41. Z. Han, B. Wei, Y. Zheng et al., "Breast cancer multi-classification from histopathological images with structured deep learning model," Scientific Reports, vol. 7, no. 1, 2017.
  42. S. Hussain, S. Saxena, S. Shrivastava et al., "Multiplexed autoantibody signature for serological detection of canine mammary tumours," Scientific Reports, vol. 8, no. 1, 2018.
  43. T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
  44. M. Macenko, M. Niethammer, J. S. Marron et al., "A method for normalizing histology slides for quantitative analysis," in Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1107–1110, Boston, MA, USA, July 2009.
  45. J. Fu, K. Zhang, and P. Zhang, "Poorly differentiated colorectal gland segmentation approach based on internal and external stress in histology images," in Proceedings of the 2020 5th International Conference on Computer and Communication Systems (ICCCS), pp. 338–342, Shanghai, China, May 2020.
  46. K. Philipp, P. Michael, and U. Martin, "Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization," PeerJ, vol. 5, p. e3874, 2017.
  47. G. Li and A. Raza, "Multiresolution cell orientation congruence descriptors for epithelium segmentation in endometrial histopathology images," Medical Image Analysis, vol. 37, pp. 91–100, 2017.

Copyright © 2020 Kun Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

