Abstract
Histological assessment of glands is one of the major concerns in colon cancer grading. Considering that poorly differentiated colorectal glands cannot be accurately segmented, we propose an approach for segmentation of glands in colon cancer images based on the characteristics of lumens and rough gland boundaries. First, we use a U-net for stain separation to obtain H-stain, E-stain, and background stain intensity maps. Subsequently, epithelial nuclei are identified on the histopathology images, and lumen segmentation is performed on the background intensity map. Then, we use axis-of-least-inertia-based similar triangles as the spatial characteristics of lumens and epithelial nuclei, and a triangle membership is used to select glandular contour candidates from the epithelial nuclei. By connecting lumens and epithelial nuclei, more accurate gland segmentation is performed based on the rough gland boundary. The proposed stain separation approach is unsupervised; it makes the category information contained in the H&E image easy to identify and handles uneven stain intensity and inconspicuous stain differences. In this work, we use deep learning to achieve stain separation by predicting the stain coefficients, and within the deep learning framework, we design a stain coefficient interval model to improve stain generalization performance. Another innovation is that we combine the internal lumen contour of the adenoma with the outer contour of the epithelial cells to obtain a precise gland contour. We compare the performance of the proposed algorithm against that of several state-of-the-art technologies on publicly available datasets. The results show that the segmentation approach combining the characteristics of lumens and the rough gland boundary achieves better segmentation accuracy.
1. Introduction
Colon cancer usually arises from the epithelium (the tissue lining the lumens of blood vessels, organs, and body surfaces) and is then called adenocarcinoma (a malignant tumor formed from glandular structures in epithelial tissue) [1]. It affects the distribution of cells and also changes the structure of glands. Pathologists are able to accurately detect small abnormalities in a biopsy [2–4].
With the increasing availability of histopathology images, digital pathology provides a viable solution to the detection problem. Histopathology image analysis can help us extract quantitative morphological features and can be used for computer-assisted cancer grading [5]. Histopathology involves fixing thin sections of potentially diseased tissue on a glass slide and staining them to show specific structural or functional details [6, 7]. By scanning the entire slide with a scanner, digitized images of those slides can be obtained, making histopathology suitable for image analysis [8, 9].
Colon histopathology image analysis is the basis of the primary detection of colon lesions [10]. The gland structure is shown in Figure 1(a). A typical colon gland histopathology image contains four tissue components: lumen, cytoplasm, epithelial cells, and stroma (connective tissue, blood vessels, nerve tissue, etc.). The lumen area is surrounded by an oval structure formed by epithelial cells [11, 12], and the whole structure is bounded by a thick line of epithelial cell nuclei.

In clinical practice, pathologists use glands as the objects of interest, including their structural morphology and gland formation [13, 14]. In particular, when automated gland segmentation is performed in H&E images, pathologists can extract important morphological features to determine prognosis and plan treatments for individual patients [15]. However, digital histopathology images contain noise and homogeneous regions that hinder gland detection and segmentation. For example, Zoltan et al. [16] developed two diagnostic modules, one for gland detection and the other for nuclei detection. In gland detection, the HSV and LAB color spaces are used for color segmentation, and glands can be identified using connected components. Due to large differences between tissue preparation protocols, staining programs, and scanning characteristics, stain normalization of histopathology images provides a tool to ensure the efficiency and stability of the system. Daniel et al. [17] used a normalization technique to match the mean and standard deviation of each channel of the target tissue image with those of a template image through a set of linear transformations in the LAB color space. In order to segment a large number of color images into meaningful structures, Banwari et al. [18] proposed a thresholding approach based on image intensity. These approaches rely on differences in tissue structure and color and are not suitable for segmenting adherent glands or glands mixed with stroma, which require complex correction algorithms to obtain accurate results. The active contour segmentation approach proposed by Cohen [19] relies on the characteristics of the gland structures; however, the thickness of the tissue slice and the fading of the stain change the color distribution of the tissue image, and the gland model is not suitable for glands with incomplete boundaries. The above conventional approaches mainly used the glands' appearance characteristics and contour features. The appearance characteristics are composed of the nucleus, cytoplasm, and epithelial cells: Sirinukunwattana et al. [20], Jacobs et al. [21], and others used low-level features such as color, texture, and edges to identify glands. The contour features are based on a gland structure surrounded by epithelial cells: Sirinukunwattana et al. [22] and Fu et al. [23] showed that spatial random field models segment benign gland contours well but are not suitable for segmenting malignant, diseased glands.
With the recent development of deep learning in the field, it has become possible to apply deep learning to histopathology images. Roth et al. [24] proposed a multilevel deep convolutional neural network for automated pancreatic segmentation. Ronneberger et al. [25] proposed using U-net for histopathology image segmentation. The deep contour-aware network proposed by Chen et al. [26] illustrates that contours play an important role in gland segmentation. The double parallel branch deep neural network proposed by Wang et al. [27] combined contours and other features to accurately segment glands. In addition, Xu et al. [28] proposed a fusion of complex multichannel regions and boundary modes for segmentation of gland instances by side supervision. This work was extended in the study of Xu et al. [29], which included additional information to enhance performance. Raza et al. [30] proposed a multi-input multi-output network (MIMO-Net) for gland segmentation and achieved state-of-the-art performance. All of the above approaches require a large number of manual annotations, but it is very difficult to label a large number of histopathology images. Zhang et al. [31] used a deep adversarial network for unannotated images, achieving consistently good segmentation performance.
Although the previous approaches have achieved certain promising results in gland segmentation, automated segmentation of glands is still a challenging task due to the complexity of histopathology images and the diversity of gland morphology, especially the gland lesions shown in Figure 1(b). For normal glands, epithelial cells can be clearly distinguished from the surrounding environment [32]. For malignant glands, epithelial cells are usually intermingled with the stroma, the epithelial nuclei are not easily distinguished from stromal nuclei [33], and glands may even be attached to each other. In this situation, we consider the lumen to be a defining structure of the gland. This structure can aid decision-making because its presence and morphology indicate the grade of the cancer [34]. It is observed that the lumen of the gland and the gland boundary have certain similarities in shape, and the lumen can be segmented accurately compared with the other structures of the gland. Accordingly, a gland segmentation approach based on the correlation between the lumen and the gland boundary is proposed.
Our proposed approach first uses a U-net for stain separation to obtain H-, E-, and background stain intensity maps. Subsequently, epithelial nuclei are identified on the histopathology image. Taking into account that the lumen is similar to the background, as shown in Figure 2, the histopathology image is then used as the input of the framework proposed in [35] to obtain the rough gland boundary and epithelial nuclei, and the lumen is segmented based on the improved SPF approach reported in [36]. Finally, based on the correlation between the lumen and the gland boundary, we select the best gland contour from the candidate contours so as to achieve the segmentation of glands attached to each other. One innovation of our method is that we are the first to use deep learning to achieve stain separation: deep learning is used to predict the stain coefficients, and within the deep learning framework, we design a stain coefficient interval model using Gaussian distributions, so that we obtain an interval for each coefficient instead of a single stain coefficient, improving stain generalization performance. Another innovation is that we use multiple morphological constraints to find the optimal tumor contour based on the internal (lumen) and external (epithelium) contours. The proposed approach is evaluated on the 2015 MICCAI GlaS Challenge dataset and a colon adenocarcinoma dataset, producing satisfactory segmentation outcomes.

2. Materials and Methods
2.1. Histopathology Image Stain Separation-Based Deep Learning Framework
The proposed stain separation framework is shown in Figure 3. Gaussian U-net Stain Separation (GUSS) makes the information contained in H&E images easy to identify, thus overcoming the influence of uneven staining intensity or large stain differences between H&E images. Traditional stain separation methods require a manually set standard stain matrix and cannot separate multiple stains at the same time; here, we use deep learning to achieve this function. First, the histopathology image is used as the input of the model, and a U-shaped encoder-decoder model is constructed for stain separation. The network consists of three parts, the contracting, bridge, and expanding paths, to complete the stain separation of the H (hematoxylin), E (eosin), and B (background) channels. The contracting path reduces the spatial dimension of the feature maps while increasing their number layer by layer [37–40], compressing the input image into a compact feature representation. The bridge connects the contracting and expanding paths. This U-shaped encoder-decoder model is extended into a multitask model: besides the output of the U-net, we also use the most compact features to predict the stain color matrix, which combines the means and variances of the stain color values of the hematoxylin, eosin, and background paths. The expanding path gradually recovers the details of the target and the corresponding spatial dimensions, and its output is used for the prediction of the pixelwise intensity map. The network is divided into ten residual branches. Prior to each residual branch of the expanding path, the upsampled lower-level feature maps are concatenated with the feature maps from the corresponding contracting path. The residual units effectively avoid the problem of vanishing gradients during backpropagation [41]. In addition, each residual branch includes convolution, max pooling, BN (batch normalization), and ReLU (rectified linear unit) layers, which effectively accelerate convergence [42]. A minimal sketch of this architecture is given below.
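The following PyTorch sketch illustrates the U-shaped encoder-decoder with residual branches and skip connections described above. The channel widths, depth, and block layout are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the U-shaped encoder-decoder; widths/depths are assumed.
import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Conv-BN-ReLU block with a residual (skip) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, 1)
    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class StainSeparationUNet(nn.Module):
    def __init__(self, n_stains=3):
        super().__init__()
        widths = [32, 64, 128, 256]
        self.encoders = nn.ModuleList()
        in_ch = 3
        for w in widths:                       # contracting path (F1-F4)
            self.encoders.append(ResidualBranch(in_ch, w))
            in_ch = w
        self.pool = nn.MaxPool2d(2)
        self.bridge = ResidualBranch(widths[-1], 512)   # bridge (F5)
        self.decoders = nn.ModuleList()
        in_ch = 512
        for w in reversed(widths):             # expanding path (F6-F9)
            # input = upsampled features concatenated with encoder skip
            self.decoders.append(ResidualBranch(in_ch + w, w))
            in_ch = w
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)
        # F10: per-pixel stain intensity map, one channel per stain (H, E, B)
        self.head = nn.Conv2d(widths[0], n_stains, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        bridge = self.bridge(x)                # most compact features
        x = bridge
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = torch.cat([self.up(x), skip], dim=1)
            x = dec(x)
        intensities = torch.relu(self.head(x))  # H, E, B intensity maps
        return intensities, bridge
```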

The model is trained by minimizing the reconstruction loss between the input image and each reconstructed outcome; the original image passes through a 10-branch network, denoted F1–F10, for stain separation. The contracting path is composed of branches F1–F4; the fifth branch (F5) is the bridge connecting the contracting and expanding paths and implements the stain color matrix prediction. The expanding path consists of branches F6–F9, and the output of the tenth branch (F10) is used for stain intensity map prediction. In the stain color matrix prediction, the F5 features are first flattened into a vector, and two fully connected layers are deployed, with an intermediate size of 500 and an output size of 9, representing the R, G, and B distributions of the three stain channels. During training, the proposed model predicts the stain concentrations for each pixel as well as the parameters (mean and variance) of a series of Gaussian distributions that are sampled to form an estimate of the stain matrix, as sketched below.
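The sketch below shows one way to realize the Gaussian stain color matrix head (flatten F5, hidden size 500, 9 outputs) and the image reconstruction used by the loss. The reparameterized sampling and the Beer-Lambert-style recombination are our assumptions for illustration; names are hypothetical.

```python
# Sketch of the Gaussian stain-matrix head and reconstruction (assumed forms).
import torch
import torch.nn as nn

class GaussianStainHead(nn.Module):
    def __init__(self, feat_dim, hidden=500, n_entries=9):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden)
        self.mu = nn.Linear(hidden, n_entries)       # means of 9 entries
        self.logvar = nn.Linear(hidden, n_entries)   # log-variances

    def forward(self, feats):
        h = torch.relu(self.fc1(feats.flatten(1)))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        sample = mu + torch.randn_like(std) * std    # reparameterized sample
        # rows: H, E, B stains; columns: R, G, B values
        return sample.view(-1, 3, 3), mu, logvar

def reconstruct(intensities, stain_matrix):
    """Recombine per-pixel stain intensities (b, 3, h, w) with a sampled
    stain matrix (b, 3, 3) via optical-density mixing (Beer-Lambert)."""
    od = torch.einsum('bshw,bsc->bchw', intensities, stain_matrix)
    return torch.exp(-od)              # back to RGB transmittance
```

Training would then minimize the pixelwise reconstruction loss between `reconstruct(...)` and the input image, so gradients shape both the means and the variances of the sampled stain matrix.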
Figure 4 shows an example of this process.

For each of the stains contained in the image, the proposed method predicts 3 distributions, one for each of the RGB color channels. For example, one distribution, say $D_{H,R}$, may represent the red value of the hematoxylin stain, and a value sampled from $D_{H,R}$ forms an estimate of the red value of hematoxylin. This process is repeated for each of the distributions, which are combined to form the estimated stain matrix $M$. The mean of each distribution represents the value around which our model has assigned the most probability, while the standard deviation describes how certain the model is that a value sampled from the distribution will result in a low reconstruction error.
Taking the example above, we again assume that $D_{H,R}$ is the distribution representing the red value of hematoxylin; if its mean is 0.5 and the standard deviation is low, then the value we sample from $D_{H,R}$ has a high chance of being close to 0.5. If the true red value of hematoxylin is close to 0.5, then the sampled value results in a reduced reconstruction loss; conversely, if the true red value is far from 0.5, then the sampled value will result in a very high reconstruction loss. If the model predicts a large standard deviation, the sampled value will vary greatly and produce a large reconstruction loss even if the mean is correct. Hence, to find the optimal values for $M$, each of the mean values must be close to the true values and the standard deviations must be low.
For the stain separation task, in order to evaluate the separation quality, the following reconstruction loss is defined:
$$\mathcal{L}_{rec} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left\lVert I_{m,n} - \hat{I}_{m,n}\right\rVert^{2},$$
where $I_{m,n}$ represents the $n$th pixel of the $m$th image and $\hat{I}_{m,n}$ represents the corresponding predicted (reconstructed) pixel.
Features are extracted from the histopathology image by the network described above and then passed to a number of subbranches that predict the stain intensity of each pixel and the parameters (mean and variance) of a series of Gaussian distributions. For each pixel in the image, the intensities of the three stain channels (hematoxylin, eosin, and background) are predicted, and for each channel, the R, G, and B color values are estimated.
2.2. Segmentation of Lumens from the Background Channel Based on the SPF-Level Set Method
Considering that lumens are one of the key components to distinguish glands, we segment lumens from the background channel after the stain separation. The SPF (signed pressure force) function is constructed using the statistical information of the image, so that it can maintain or even enhance the prominent foreground target. Similar to the classical C-V model [43], a contour divides the image $I$ into two regions, the inner region $\Omega_1$ and the outer region $\Omega_2$, and the global intensity distribution of the image is used to construct the SPF function. The stain intensity distribution functions of regions $\Omega_1$ and $\Omega_2$ are represented by $P_1$ and $P_2$:
$$P_i(I(x)) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(I(x)-\mu_i)^2}{2\sigma_i^2}\right),\quad i = 1, 2,$$
where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the Gaussian distribution of the stain intensity in region $\Omega_i$, respectively. In the level set approach, the level set function $\phi$ is embedded, assuming $\phi > 0$ inside the contour and $\phi < 0$ outside, and the corresponding contour can be represented by the zero level set $\{x : \phi(x) = 0\}$. We can use the above stain intensity distribution functions to construct the following new SPF function:
$$\mathrm{spf}(I(x)) = \frac{P_1(I(x)) - P_2(I(x))}{\max\left(\lvert P_1(I(x)) - P_2(I(x))\rvert\right)}.$$
The level set evolution equation is
$$\frac{\partial \phi}{\partial t} = \mathrm{spf}(I(x))\,\alpha\,\lvert\nabla\phi\rvert, \tag{4}$$
where $\alpha$ is a constant controlling the propagation speed.
Algorithm implementation is as follows:
(i) Step 1. Initialize the level set function $\phi$ and set the parameter $\alpha$.
(ii) Step 2. Calculate $\mathrm{spf}(I(x))$.
(iii) Step 3. Evolve the curve according to (4).
(iv) Step 4. Apply Gaussian filtering to smooth the curve, $\phi = G_\sigma * \phi$.
(v) Step 5. Check whether the level set function has converged; if not, return to Step 2.
A sketch of this loop is given after the list.
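The following NumPy/SciPy sketch implements Steps 1–5 on a grayscale background-channel image, under our reconstruction of the SPF function above; the initialization, parameter values, and binary re-initialization are illustrative assumptions.

```python
# Minimal sketch of the SPF-level set loop (Steps 1-5); parameters assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_level_set(image, n_iter=200, alpha=20.0, dt=0.1, sigma=1.5):
    # Step 1: initialize phi as a binary step function (inside > 0)
    phi = -np.ones_like(image, dtype=float)
    phi[10:-10, 10:-10] = 1.0
    for _ in range(n_iter):
        inside, outside = image[phi > 0], image[phi <= 0]
        # Gaussian intensity models P1, P2 of the two regions
        mu1, s1 = inside.mean(), inside.std() + 1e-8
        mu2, s2 = outside.mean(), outside.std() + 1e-8
        p1 = np.exp(-(image - mu1)**2 / (2*s1**2)) / (np.sqrt(2*np.pi)*s1)
        p2 = np.exp(-(image - mu2)**2 / (2*s2**2)) / (np.sqrt(2*np.pi)*s2)
        # Step 2: signed pressure force, normalized to [-1, 1]
        spf = (p1 - p2) / (np.abs(p1 - p2).max() + 1e-8)
        # Step 3: evolve phi along |grad(phi)| (equation (4))
        gy, gx = np.gradient(phi)
        phi += dt * alpha * spf * np.sqrt(gx**2 + gy**2)
        # Step 4: Gaussian filtering regularizes (smooths) the curve
        phi = gaussian_filter(phi, sigma)
        # Binary selection step (selective binary regularization)
        phi = np.where(phi > 0, 1.0, -1.0)
    return phi > 0   # lumen mask
```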
The lumen segmentation process is shown in Figure 5. The lumen contour is obtained from the background channel by the above algorithm.

We use the spatially constrained CNN (SC-CNN) for nucleus detection and the softmax CNN for nucleus classification [35]. We use the H-stain intensity map obtained from the stain separation as the input of SC-CNN to locate the nuclei. Since the detected nuclei include both epithelial and stromal nuclei, a classification step is applied. In classification, the morphology (shape, size, color, and texture) of the nucleus is exploited; therefore, the original RGB histopathology image is selected as the input of the softmax CNN, and the resulting pixel set represents the epithelial cell nuclei. We select the epithelial nuclei closest to the stromal nuclei as the rough gland boundary pixels, so as to obtain the rough gland boundary $N$, as sketched below.
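A simple way to realize this selection, sketched below, is a nearest-neighbor query from epithelial to stromal nucleus centroids; the distance threshold is an illustrative assumption, not a value from the paper.

```python
# Sketch: epithelial nuclei closest to stromal nuclei form the rough boundary.
import numpy as np
from scipy.spatial import cKDTree

def rough_gland_boundary(epithelial_xy, stromal_xy, max_dist=30.0):
    """epithelial_xy, stromal_xy: (K, 2) arrays of nucleus centroids (pixels).
    Returns the epithelial nuclei within max_dist of a stromal nucleus,
    i.e., the candidate rough-boundary pixel set N."""
    tree = cKDTree(stromal_xy)
    d, _ = tree.query(epithelial_xy, k=1)  # distance to nearest stromal nucleus
    return epithelial_xy[d <= max_dist]
```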
2.3. Lumen and Rough Gland Boundary Feature Representation Based on the ALI (Axis of Least Inertia)
The axis of least inertia (ALI) is the line that minimizes the integral of the squared distances to all the points on the shape boundary; physically, the moment of inertia of the shape about this axis is the smallest, and it is a unique reference line for representing the shape of the target. From this physical definition, the axis of least inertia must pass through the centroid of the shape. Mathematically, let the line be $Ax + By + C = 0$; then, the axis of least inertia is
$$\arg\min_{A,B,C}\ \sum_{(x_i, y_i)\in E} \frac{(Ax_i + By_i + C)^2}{A^2 + B^2},$$
where $E$ is the set of edge points. Using the condition that the axis of least inertia passes through the centroid $(\bar{x}, \bar{y})$, i.e., $A\bar{x} + B\bar{y} + C = 0$, the coefficients $B$ and $C$ can be obtained. In order to describe the outline of a shape, the structure-based descriptor commonly used in boundary description is the chain code, a widely used descriptor that encodes the outline as a sequence of straight lines in given directions. When a chain code is used for matching, the result depends on the choice of the first boundary pixel in the sequence. Starting from a selected point, a chain sequence is generated using an $x$-direction chain ($x = 12$, based on our experience).
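The ALI orientation has a standard closed form in terms of second-order central moments, which the sketch below uses; the point-set interface is our assumption.

```python
# Sketch: axis of least inertia from second-order central moments.
import numpy as np

def axis_of_least_inertia(points):
    """points: (K, 2) array of boundary (x, y) coordinates.
    Returns the centroid and the ALI orientation angle theta (radians):
    the line through the centroid minimizing summed squared distances."""
    centroid = points.mean(axis=0)
    x, y = (points - centroid).T            # centered coordinates
    mu20, mu02, mu11 = (x*x).sum(), (y*y).sum(), (x*y).sum()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return centroid, theta
```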
As shown in Figure 6(a), the axis of least inertia is used as the reference axis, and a coordinate system is established from it and its perpendicular line, with the lumen centroid $O$ as the origin. Each of the four quadrants of this coordinate system is then equally divided into three sectors, generating a chain sequence with 12 directions. The direction perpendicular to the axis of least inertia and closest to the lumen is the 0-direction, and each counterclockwise rotation of 30° gives the next direction, from the 0- to the 11-direction. The 12 rays with the point $O$ as the vertex intersect the lumen contour at the points $p_0, p_1, \ldots, p_{11}$; these 12 points constitute the chain code representing the lumen contour, and similarly, the intersection points of these rays with the epithelial nucleus set represent candidate contour chain codes. In Figure 6(b), $p_0$ and $p_1$ are the intersections of the 0- and 1-directions with the lumen contour $L$, respectively, and the triangle formed by the three points $O$, $p_0$, and $p_1$ is a characteristic triangle of the lumen (the point of the lumen outline in each direction is unique). $q_0$ and $q_1$ are the intersections of the 0- and 1-directions with the epithelial nucleus set, respectively, and the triangles formed by these points represent the gland's candidate region; there are multiple epithelial nuclei in each direction. The similarity measure is performed using a trigonometric membership function. For each feature triangle, let $\theta_1$, $\theta_2$, and $\theta_3$ be the inner angles of the triangle, for which
$$\theta_1 + \theta_2 + \theta_3 = \pi.$$
A sketch of the chain-code and triangle construction follows.
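The sketch below builds the 12-direction chain points around a lumen and the inner angles of a characteristic triangle. Picking the contour point whose polar angle is closest to each ray is our simplification of the ray-contour intersection; function names are hypothetical.

```python
# Sketch: 12-direction chain points and characteristic-triangle angles.
import numpy as np

def chain_points(contour, centroid, theta0, n_dirs=12):
    """contour: (K, 2) points. Returns one point per direction: the contour
    point whose polar angle (relative to the ALI-based 0-direction theta0)
    is closest to each of the 12 ray directions (30-degree steps)."""
    v = contour - centroid
    ang = (np.arctan2(v[:, 1], v[:, 0]) - theta0) % (2 * np.pi)
    dirs = np.arange(n_dirs) * (2 * np.pi / n_dirs)
    idx = [np.argmin(np.abs((ang - d + np.pi) % (2 * np.pi) - np.pi))
           for d in dirs]                       # wraparound-safe difference
    return contour[idx]                         # p_0 ... p_11

def inner_angles(O, p, q):
    """Inner angles of the characteristic triangle (O, p, q); they sum to pi."""
    a = np.linalg.norm(p - q)                   # side opposite O
    b = np.linalg.norm(O - q)                   # side opposite p
    c = np.linalg.norm(O - p)                   # side opposite q
    A = np.arccos(np.clip((b*b + c*c - a*a) / (2*b*c), -1, 1))
    B = np.arccos(np.clip((a*a + c*c - b*b) / (2*a*c), -1, 1))
    return A, B, np.pi - A - B                  # law of cosines
```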

Then, a triangular membership value is computed for each characteristic triangle from its inner angles and $d$, the Euclidean distance between the vertices of the feature triangle. Denoting the membership value of the $i$th feature triangle of the lumen by $\mu_{L,i}$ and that of the corresponding feature triangle of the gland candidate region by $\mu_{G,i}$, the similarity between them is
$$s_i = 1 - \lvert \mu_{L,i} - \mu_{G,i} \rvert.$$
The similarity over all the feature values is
$$S = \frac{1}{n}\sum_{i=1}^{n} s_i.$$
If the total similarity $S$ exceeds a preset threshold, the two contours are regarded as similar, where $n$ represents the number of the characteristic triangles.
The proposed approach finds an accurate gland outline based on the following two constraints. First, the target contour based on the epithelial nucleus set should be similar to the lumen contour $L$ and the rough gland boundary $N$, which yields a feature similarity constraint on the membership values of the corresponding feature triangles. Second, the target contour should be close to the rough gland boundary $N$, which yields a distance constraint between each candidate epithelial nucleus in the $i$-direction and $b_i$, the intersection of the $i$-direction with the rough gland boundary $N$, where $i$ indexes the sequence of directions and $J_i$ represents the number of epithelial nuclei in the $i$-direction. Taking the 0-direction as the start direction, the similarity of the feature triangles in each direction is evaluated counterclockwise. Taking Figure 6(b) as an example, the candidate features and the lumen features are first compared along the contour $L$, and the candidate contour point in direction 1 is determined by the constraint conditions in equations (11) and (14). Similarly, the candidate contour point in direction 1 is used as the reference starting point to determine the candidate contour point in direction 2. After sequentially determining candidate contour points in all 12 directions, a candidate contour chain is formed. Assuming that there are $J$ candidate points in the starting reference direction 0, $J$ candidate contours are formed according to the above method. In Figure 6(c), brownish yellow marks the lumen contour, orange marks the lumen contour feature triangle, and red marks one of the candidate contours obtained. The optimal gland contour is determined from the candidate contours according to the constraint in equation (15), and finally, the gland contour is smoothed by cubic spline interpolation. A simplified sketch of this selection step follows.
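The sketch below shows one plausible per-direction selection combining the two constraints; the linear scoring and the weights are illustrative assumptions, not the exact forms of equations (11), (14), and (15).

```python
# Sketch: per-direction candidate selection under similarity + distance
# constraints (assumed linear scoring; weights are illustrative).
import numpy as np

def select_contour(candidates, lumen_mu, boundary_pts, w_sim=1.0, w_dist=0.05):
    """candidates[i]: list of (point_xy, membership) tuples for the epithelial
    nuclei in direction i. lumen_mu[i]: membership of the lumen feature
    triangle in direction i. boundary_pts[i]: intersection b_i of direction i
    with the rough gland boundary N."""
    contour = []
    for i, cands in enumerate(candidates):
        scores = [w_sim * abs(mu - lumen_mu[i])
                  + w_dist * np.linalg.norm(np.asarray(p) - boundary_pts[i])
                  for p, mu in cands]
        contour.append(cands[int(np.argmin(scores))][0])  # best point in dir i
    return np.asarray(contour)   # 12-point candidate contour chain
```

In practice, one chain would be built per candidate starting point in direction 0, and the best-scoring chain would be kept as the optimal gland contour before spline smoothing.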
2.4. Experiment Results and Discussion
2.4.1. Data
The image dataset is the Gland Segmentation (GlaS) Challenge dataset organized for MICCAI 2015, in addition to our own dataset. Our own dataset includes 100 calibrated pathological images of benign and malignant colon adenocarcinoma, taken from 34 H&E-stained pathological sections of colon adenocarcinoma at cancer stage T3 or T4. The slices belong to different patients and were processed in different laboratory environments, so the dataset is highly diverse in both staining distribution and tissue structure. The pathological slices are scanned as whole slides to obtain digital images with a pixel resolution of 0.465 microns. The whole-slide images are resampled to a pixel resolution of 0.620 microns (equivalent to 20x magnification). Then, we crop them randomly to a size of 128 × 128 and augment them to 22,000 patches for training and validation of the models. The nuclei were manually annotated by an experienced pathologist; since this study needs to identify epithelial nuclei, the nuclear annotations are divided into epithelial nuclei and others.
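A minimal sketch of the random-crop augmentation described above; the flip and rotation operations are our assumptions for the augmentation step.

```python
# Sketch: random 128x128 crop with simple augmentation (ops assumed).
import numpy as np

def random_patch(image, size=128, rng=None):
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    patch = image[y:y + size, x:x + size]
    patch = np.rot90(patch, rng.integers(0, 4))   # random 90-degree rotation
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                    # random horizontal flip
    return patch
```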
2.4.2. Stain Separation
The dataset consists of 22,000 histopathology patches, each of size 128 × 128. This work employs the Adam optimizer, and the initial learning rate of 1e-3 is gradually reduced at the end of each epoch. Figure 7 shows the H&E image stain separation result. The results indicate that the background, H-stain, and E-stain of histopathology images can be successfully separated while the structure of the tissue is retained.
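A minimal PyTorch sketch of this training configuration, reusing the `StainSeparationUNet` class from the architecture sketch above; the per-epoch decay factor and epoch count are illustrative assumptions.

```python
# Sketch: Adam with per-epoch learning-rate decay (gamma is assumed).
import torch

model = StainSeparationUNet()       # defined in the architecture sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(50):
    # ... iterate over 128x128 patches, compute the reconstruction loss,
    # then optimizer.zero_grad(); loss.backward(); optimizer.step() ...
    scheduler.step()                # reduce the learning rate after each epoch
```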

The pathological images containing the complete glandular structure are cropped without overlap to a size of 128 × 128, and insufficient areas are zero-padded. Figure 8 shows the H- and E-stain separation results for pathological images from two different datasets. The results show that, for pathological images with different sources and large staining differences, the deep learning stain separation method can successfully separate the H- and E-stains, and the separation result is consistent while the tissue structure is maintained.

After stain separation, the H- and E-stains can be distinguished. We do not have ground truth to quantitatively evaluate the separation effect, but we can visualize the blue-violet characteristics. H-stain maps are obtained by the two traditional stain separation methods reported in [44, 45] and by the proposed deep learning-based method, and on the basis of the stain separation, nucleus segmentation is used to evaluate the quality of the separation. Figure 9 shows the process of segmenting nuclei on the H-stain images: first, the H-stain image is converted into a grayscale image and then thresholded into a binary image serving as a nuclear segmentation mask; finally, the segmentation mask is overlaid on the original pathological image so that we can analyze the quality of the nuclear segmentation, as sketched below.
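The pipeline in Figure 9 can be sketched with scikit-image as follows; Otsu thresholding is our assumption for the binarization step, and the overlay color is illustrative.

```python
# Sketch: grayscale -> binary nucleus mask -> overlay (Figure 9 pipeline).
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

def nucleus_mask(h_stain_rgb):
    gray = rgb2gray(h_stain_rgb)            # H-stain image -> grayscale
    return gray < threshold_otsu(gray)      # dark (stained) pixels = nuclei

def overlay(original_rgb, mask, color=(0, 255, 0)):
    out = original_rgb.copy()
    out[mask] = color                       # paint mask onto original image
    return out
```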

Two traditional methods, a singular value decomposition method based on optical density and an independent component analysis method in the wavelet domain [44], and the deep learning method proposed here are used to separate the same pathological tissue image, and the resulting H-stain images are used for nucleus segmentation. Figures 9(a)–9(c) are the H-stain images obtained by the three stain separation methods, Figures 9(d)–9(f) are their corresponding grayscale images, Figures 9(g)–9(i) are their corresponding binary images, and Figures 9(j)–9(l) are the outcomes after the binary segmentation masks are overlaid on the original image. Comparing the results in Figures 9(j)–9(l), it can be found that the poorer stain separation in Figures 9(a) and 9(b) leads to oversegmentation or undersegmentation of the nuclei.
Figure 10 shows the comparison results of the different methods. The Mikto method is designed for cell segmentation, so it can only be used to isolate the H-stain. Color deconvolution (CD) is a classic stain separation method, but manual intervention is required to determine the optimal stain matrix; the CD method preserves the structure but cannot separate the background color well. SDSA is the latest method, separating stains using statistical analysis of multiresolution stain data. It can be seen that SDSA successfully extracts the H-stain, but when there are more than two stains in the image, the separation outcome is poor.

2.4.3. Lumen Segmentation
The segmentation of glands depends on the interaction between the rough gland boundary and the lumen, so it is necessary to segment the lumen accurately. In the experiment, the segmentation results of the SPF approach and the improved SPF approach on the lumens are compared. In the level set approach, owing to the selective binary processing and Gaussian filter regularization, the SPF can produce satisfactory segmentation.
Since the improved SPF approach is based on statistical information, and in the background channel obtained from the stain separation the lumen and background probabilities tend to be consistent, some small background blocks in the image are also segmented. These small targets are removed from the segmented image, and the final segmentation result is shown in Figure 11(d).

Multiple segmentation techniques (e.g., DRLSE, LBF, LGDF, and LIF) are used to segment the glandular cavity. As shown in Figure 12, the DRLSE model produces incomplete segmentations, while the LGDF model can separate the cavity from the other areas; the LIF and LBF models are not suitable for segmentation of the gland cavity. These models also suffer from longer run times and more iterations. The new SPF-level set segmentation method overcomes these shortcomings. The comparison results show that the proposed model is easy to implement, and its computation time is only 21 s, lower than that of the other active contour models.

2.4.4. Gland Segmentation
This work is evaluated on the public GlaS dataset and compared with the other methods in the GlaS competition. We use 100 images for training and 65 for testing, where 45 test images belong to test set A and the remaining 20 belong to test set B. For quantitative analysis, we use the F1 score, object Dice, and object Hausdorff. For the Hausdorff distance, lower values are better; for the other measures, higher values are better. Table 1 shows the quantitative results: the proposed method produces competitive results compared with the algorithms presented in the competition. The proposed-N approach is based only on the rough gland boundary N obtained from the epithelial nuclei, whereas the proposed-N + L approach is based on the rough gland boundary N and the lumen contour L. The algorithm first uses the deep learning method to perform stain separation; different targets, such as lumens, epithelial cells, and nuclei, can then be accurately segmented on the basis of the stain separation. On test set A, the proposed algorithm performs relatively poorly on F1 and object Dice but performs better on test set B. In terms of shape similarity measured by object Hausdorff, the lower score indicates that, in malignant cases, the method benefits from the morphological features of lumens, so the results have a higher shape similarity to the ground truth. A sketch of the metric computations is given below.
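The per-mask versions of the evaluation metrics can be computed as sketched below; the object-level matching used in the GlaS challenge is simplified here to a single predicted/ground-truth pair.

```python
# Sketch: Dice coefficient and symmetric Hausdorff distance for one mask pair.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff(pred_pts, gt_pts):
    """Symmetric Hausdorff distance between two boundary point sets (K, 2)."""
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])
```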
We compare the proposed approach with the state-of-the-art algorithms [17, 22, 27, 46, 47] on our independent dataset. The relevant measurement indicators are shown in Table 2. It can be seen from Table 2 that the proposed approach produces the best segmentation results. Figure 13 shows the ROC curves of the different algorithms.

As can be seen from Table 2, the proposed segmentation approach based on the lumen and rough gland boundary improves the average pixel precision by at least 3%, and the Dice similarity coefficient improves by 0.033. At the same time, the standard deviations of the pixel precision and Dice are low, indicating that the segmentation approach is relatively stable and can effectively handle abnormal gland segmentation. Figure 14 shows the segmentation results for multiple instances in our independent dataset, where green is the manually annotated contour and yellow is the segmentation contour produced by the different methods.

It can be seen from Figure 14 that gland segmentation methods based only on the epithelial cell nuclei, such as the one proposed in [17] and our proposed-N, rely too much on the accuracy of nucleus recognition: inaccurate nucleus recognition directly leads to inaccurate gland segmentation. The polygonal approximation method, such as the one proposed in [12], cannot detect the external contour of the gland. The double parallel structure method [27], which combines the inside of the gland and the contour, can segment the gland contour more accurately, but sometimes it cannot separate adhering glands. In summary, for malignant and complex tumor images, our proposed method produces better segmentation results.
3. Conclusions
Histological assessment of glands is one of the challenges in colon cancer grading. Analysis of histological slides stained with hematoxylin and eosin is considered the “gold standard” in histological diagnosis. However, relying on manual visual analysis is time-consuming and laborious, as pathologists need to thoroughly examine each case to ensure an accurate diagnosis. In order to improve the diagnostic ability of automated approaches, we here proposed an approach for accurately segmenting glands in colon histopathology images based on the characteristics of lumens and gland boundaries. First, this work constructed a U-net for stain separation of H&E images to obtain H-, E-, and background stain intensity maps. Subsequently, the epithelial nuclei are identified on the histopathology images, and the segmentation of lumens is performed on the background intensity map. Then, the axis of least inertia and the chain code are used to represent the lumen and gland boundary features. Based on the detection of lumens and epithelial nuclei, more accurate gland segmentation is performed using the rough gland boundary.
The main contribution of the approach includes three points. First, a new unsupervised stain separation approach was proposed, which makes the information contained in the H&E image easy to identify and handles uneven stain intensity and inconspicuous stain differences; the superiority of the proposed stain separation approach was demonstrated. Second, this work developed and combined a new set of features for segmentation of glands, considering the morphological characteristics of the internal lumen of the gland structure. During carcinogenesis, the lumen of the gland usually undergoes obvious distortion, which makes the surrounding epithelial cells irregularly arranged, but most are still distributed around the lumen. Therefore, an approach based on the axis of least inertia was proposed to represent the characteristics of lumens and gland boundaries. Since lumens are more independent and easier to segment than the epithelial cells, the lumen-based segmentation approach can be used to achieve the segmentation of glands attached to each other. The results showed that the proposed approach improved the segmentation accuracy. Finally, this work presented a feature representation of lumens and gland boundaries, and we will continue to study the application of this approach to benign and malignant feature extraction of tumors.
Data Availability
The data used to support the findings of this study were supplied by the Nantong University under license and so cannot be made freely available. Requests for access to these data should be made to Kun Zhang at [email protected].
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was financially supported by Invest NI, the National Natural Science Foundation of China (no. 61671255), the Natural Science Foundation of Jiangsu Province, China (Grant no. BK20170443), and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province, China (Grant nos. 17KJB520030, 18KJB510038, and 19KJA350002) and in part by the program for “Qing Lan Project” of Colleges and Universities in Jiangsu Province (Grant no. XNY-039).