Mathematical Problems in Engineering

Volume 2014, Article ID 501206, 13 pages

http://dx.doi.org/10.1155/2014/501206
Research Article

An Automatic Indirect Immunofluorescence Cell Segmentation System

1Department of Management Information Systems, National Chung Hsing University, Taichung 402, Taiwan

2Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan

3Department of Science and Biotechnology, China Medical University, Taichung 402, Taiwan

4Department of Computer Science, University of Münster, 48149 Münster, Germany

Received 26 February 2014; Accepted 24 April 2014; Published 22 May 2014

Academic Editor: Her-Terng Yau

Copyright © 2014 Yung-Kuan Chan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Indirect immunofluorescence (IIF) with HEp-2 cells has been used for the detection of antinuclear autoantibodies (ANA) in systemic autoimmune diseases. ANA testing allows a broad range of autoantibody entities to be screened and described by distinct fluorescence patterns. Automatic inspection of the fluorescence patterns in an IIF image can assist physicians without relevant experience in making a correct diagnosis. Segmenting the cells from an IIF image is therefore essential to developing an automatic inspection system for ANA testing. This paper focuses on cell detection and segmentation; an efficient method is proposed for automatically detecting cells with a fluorescence pattern in an IIF image. Cell culture is a process in which cells grow under controlled conditions. Cell counting technology plays an important role in measuring the cell density in a culture tank. Moreover, assessing medium suitability, determining population doubling times, and monitoring cell growth in cultures all require a means of quantifying the cell population. The proposed method can also be used to count the cells in an image taken under a fluorescence microscope.

1. Introduction

IIF with HEp-2 cells has been frequently employed to detect ANA in systemic autoimmune diseases [1]. ANA testing can be used to screen a broad range of autoantibody entities and to describe them by distinct fluorescence patterns. The fluorescence patterns are usually identified by a physician manually inspecting the slides under a microscope. However, owing to the lack of satisfactory automation and a low level of standardization, this procedure still requires a highly specialized and experienced technician or physician to reach a diagnostic result. Automatic inspection of the fluorescence patterns in an IIF image may therefore assist physicians without relevant experience in making a correct diagnosis. As ANA testing becomes more widely used, a functional automatic inspection system is essential and its clinical application becomes more urgent [2].

Roughly, IIF with HEp-2 cells can be classified into six main patterns: diffuse, peripheral, coarse speckled, fine speckled, discrete speckled, and nucleolar. Figure 1 shows the six distinct autoantibody fluorescence patterns in IIF images. From an image-processing point of view, a fluorescent cell belonging to the diffuse, peripheral, coarse speckled, or fine speckled pattern normally comprises only one connected region. In contrast, cells with the discrete speckled and nucleolar patterns contain a nuclear mass with a number of bright spots [3].

501206.fig.001
Figure 1: Six main patterns of IIF images.

Ideally, a fluorescent image [4] shows only the structure of interest that is labelled with fluorescent dye, while the responses of unstained cells remain unobserved. Specificity of dyes with respect to cell types is sufficient for identifying supporting cells and receptor cells. The application of an array of fluorochromes has made it possible to identify cells and submicroscopic cellular components with a high degree of specificity amid nonfluorescing material. Many different fluorescent dyes can be used to stain different structures or chemical compounds. The fluorescence microscope is capable of revealing the presence of a single molecule. Through multiple fluorescent labelling, different probes can simultaneously identify several target molecules.

Segmenting cells from an IIF image plays an important role in developing an automatic ANA inspection system. Many cell segmentation methods [1, 5–7] are available to separate cells from an image, but they cannot extract cells cleanly from a fluorescence image. Most of them adopt a global threshold to convert a gray-level image into a binary image, yet the intensities of cells in a fluorescence image may span an extremely wide range, and it is very difficult to detect all cells using a single global threshold. Hence, the methods above often fail to give a good segmentation result for fluorescence images.

An automatic ANA inspection system can be divided into three phases: HEp-2 cell segmentation, fluorescence pattern classification, and computer-aided diagnosis. This study focuses on the first phase: developing an effective method for automatically cutting cells with a fluorescence pattern out of IIF images.

In this study, a color selector is presented to transform a color IIF image into a gray-level image, a run length enhancer is provided to suppress the hazy fluorescent halo, an adaptive filter is proposed to smooth the cell surfaces as well as fill in the holes on the cells, and a gradient computing method is given to effectively compute the gradients of an image. These proposed techniques can also be applied to segment objects from other kinds of images. In addition, watershed and distance transform techniques are used to split overlapping objects. Since the characteristics of the six cell patterns in IIF images are extremely different, a rough classifier is proposed to decide which set of parameter values should be employed by the proposed segmentation method when segmenting cells from IIF images with different patterns.

Cell culture [8] is the process by which prokaryotic, eukaryotic, or plant cells are cultivated under controlled conditions [9]. This process allows individual cells to act as independent units. In cell culture, the size and quantity of cells increase during cell division, and the growth of cells is constrained by culture variables such as nutrient depletion. Culture techniques have become an important and extensive part of biotechnology in applied research areas such as genetics, cytology, and pathology. Cell culture also brings huge economic benefits and prospects to agriculture, crop genetics, crop breeding, and food additives.

Since growing cells must be separated into two or more dishes to avoid excessive growth in cell culture, biologists need to know the precise number or density of cells, and quantity measurement is an important task in cell culture. In addition, assessing medium suitability, determining population doubling times, and monitoring cell growth in culture all require a means of quantifying the cell population. Cell quantification also allows standardization for manipulations such as transfection or cell fusion.

Generally, biologists count the cells under a microscope one by one or utilize counting chambers. The most widely used chamber is the hemocytometer, a device designed for estimating the number of cells in a given volume under a microscope. This device divides the whole-cell sample into several large, equal squares, each of which contains many small squares. Counting the cells in several small or large squares provides an estimate of the total number of cells. However, manually counting the cells in an image that contains an extremely large number of cells is a time-consuming, arduous, and inaccurate task. An automatic software system that counts the cells in an image taken under a microscope would save time and frustration and improve accuracy.
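The arithmetic behind chamber counting can be sketched as follows. This is a minimal Python sketch in which the function name, the sample counts, and the 1e-4 mL volume over one large square (the value for a standard Neubauer chamber) are our assumptions, not details from the paper.

```python
def estimate_cell_density(counts_per_square, dilution_factor=1, square_volume_ml=1e-4):
    """Estimate cells/mL from hemocytometer counts.

    counts_per_square: cell counts observed in each sampled large square.
    square_volume_ml: sample volume over one large square; 1e-4 mL
    (1 mm x 1 mm x 0.1 mm) is the standard Neubauer value, assumed here.
    """
    avg = sum(counts_per_square) / len(counts_per_square)
    return avg * dilution_factor / square_volume_ml

# Hypothetical counts of 40, 45, 38, 42 in four squares at 1:2 dilution.
density = estimate_cell_density([40, 45, 38, 42], dilution_factor=2)
```

An automatic counting system replaces the manual `counts_per_square` with counts obtained from segmented images, while the volume arithmetic stays the same.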

The cells in an IIF image generally span a wide range of gray-level intensities; some cells are even too faint for human eyes to find. The purpose of this paper is to develop a system for automatically segmenting the cells in an indirect immunofluorescence (IIF) image. We call it the automatic indirect immunofluorescence cell segmentation (AIICS) system; it enhances the cell contours and cuts the cells out of an IIF image. The AIICS system can also be used to count the cells in the IIF image.

In the AIICS system, a color selector is proposed to transform a color IIF image into a gray-level image; a run length enhancer and an adaptive filter remove noise and highlight the contours of objects in the IIF image; a new gradient computing method computes the gradient of object contours; and an adaptive thresholding method determines the most appropriate thresholds for detecting the cell contours in an IIF image. Moreover, a rough classifier is provided to classify IIF images, and a genetic algorithm based parameter detector (GAPD) decides the most suitable parameters used in the AIICS system.

2. Related Works

In this section, we will briefly review some cell segmentation methods [1, 5–7], the performances of which will be compared with the performance of the AIICS system in cell segmentation and counting. In addition, some measurements of object segmentation and counting errors [10] will be briefly reviewed as well.

2.1. Reviews

Eddins [5] presented an image segmentation strategy for isolating objects from an image. The objects could be anything: blood cells, stars, toner spots on a printed page, DNA microarray elements, or even quantum semiconductor dots. First, the strategy converts the image into a gray-scale one and uses a morphological top-hat operator with a disk-shaped structuring element to smooth out the uneven illumination. Second, Otsu’s thresholding method [11] is employed to determine a good threshold for converting the image to a binary one. Finally, this strategy computes the distance transform of the complement of the binary image, modifies it to force the background to be its own catchment basin, and then uses the watershed transform to extract objects.
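Otsu's thresholding step of this strategy can be sketched in Python. This is a minimal histogram-based version; the function name is ours, and production code would normally call a library routine.

```python
def otsu_threshold(gray_values):
    """Pick the threshold that maximizes between-class variance (Otsu's method).

    gray_values: iterable of integer intensities in 0..255.
    Returns the threshold t; pixels with intensity > t are foreground.
    """
    hist = [0] * 256
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]          # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a strongly bimodal intensity distribution, the returned threshold separates the two modes, which is exactly the property the strategy relies on before the distance and watershed transforms.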

Althoff et al. [1] provided a cell contour segmentation method to cut the neural stem cells out of a time-lapse image sequence. The method is an iterative process applied to every image in the sequence and can be described as follows.
(i) Use multiscale Laplacian of Gaussian filters to separate the image background from cell regions.
(ii) Select the centroids of the blobs that are most likely to be cells.
(iii) Use dynamic programming to segment the cells.

Tang and Ewert [6] also offered a method for severing neural stem cells from a sequence of images. This method first applies a Gaussian filter to remove noise from the images. It then performs fuzzy thresholding as follows: all pixels with intensity below a lower threshold t_low are set to 0, and all pixels with intensity above a higher threshold t_high are set to 1. The gray-level intensities between t_low and t_high are linearly rescaled to the range [0, 1]. The lower threshold t_low is chosen from the mean value and the standard deviation of the background intensity, while the higher threshold t_high is set high enough to guarantee that the pixels brighter than t_high lie well inside the cells. Through the use of a fuzzy thresholding approach, the method becomes less sensitive to the exact values of these threshold levels than it would be with a standard crisp threshold. Next, a fuzzy gray-weighted distance transform and the watershed transform are used to separate cells from the images.
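The fuzzy thresholding rule can be sketched as a membership function. This is a minimal Python sketch with the two thresholds passed in as parameters; their exact derivation from the background statistics follows [6] and is not reproduced here.

```python
def fuzzy_threshold(intensity, t_low, t_high):
    """Map an intensity to a fuzzy membership value in [0, 1].

    Below t_low -> 0, above t_high -> 1, linear in between, so small
    changes in the thresholds only slightly change the output.
    """
    if intensity <= t_low:
        return 0.0
    if intensity >= t_high:
        return 1.0
    return (intensity - t_low) / (t_high - t_low)
```

A crisp threshold would jump from 0 to 1 at a single intensity; the linear ramp is what makes the method robust to the exact threshold values.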

In 2008, Yan et al. [7] proposed a method for segmenting the nuclei of cells from a genome-wide RNAi screening image. The method converts an image into a gray-scale one and applies Otsu’s thresholding method [11] to decide a suitable threshold for converting the gray-scale image to a binary image. It then computes the distance transform of the complement of the binary image, employs an enhanced watershed algorithm to detect the nucleus contours of cells, and removes the objects with small areas.

2.2. Classification and Segmentation Measures

Positive predictive value and sensitivity are the two most frequently used criteria for evaluating the effectiveness of a classifier. Positive predictive value can be seen as a measure of exactness or fidelity, whereas sensitivity is a measure of completeness. Table 1 lists all possible classification conditions in a binary classification. With TP, FP, and FN denoting the numbers of true positives, false positives, and false negatives, the positive predictive value PPV and sensitivity SEN are defined as follows [10]:

PPV = TP / (TP + FP),  SEN = TP / (TP + FN).

tab1
Table 1: Classification condition.

There may be a trade-off between positive predictive value and sensitivity: increasing one tends to decrease the other. The F-measure is the harmonic mean of PPV and SEN and takes both measures into account [12]:

F = 2 · PPV · SEN / (PPV + SEN).

Appropriate validation of segmentation is important for clinical acceptance of a segmentation method. The positive predictive value, sensitivity, and detection accuracy (DACC) are also often employed to measure the segmentation accuracy of a segmentation method. DACC can be defined as follows:

DACC = N_TP / (N_TP + N_FP + N_FN).

Here, N_TP is the number of objects which are detected by a segmentation method and appear in the ground truth; N_FP is the number of objects that are detected by the segmentation method but do not appear in the ground truth; N_FN is the number of objects that are not detected by the segmentation method but appear in the ground truth. In this paper, we take PPV, SEN, and DACC to measure the performance of the AIICS system.
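For concreteness, the four measures can be written as small Python helpers. The function names are ours; the inputs are the true-positive, false-positive, and false-negative counts from Table 1.

```python
def ppv(tp, fp):
    """Positive predictive value: exactness of the detections."""
    return tp / (tp + fp)

def sen(tp, fn):
    """Sensitivity: completeness of the detections."""
    return tp / (tp + fn)

def f_measure(p, s):
    """Harmonic mean of PPV and SEN."""
    return 2 * p * s / (p + s)

def dacc(tp, fp, fn):
    """Detection accuracy over detected and ground-truth objects."""
    return tp / (tp + fp + fn)
```

For example, 8 correctly detected cells with 2 spurious detections and 2 missed cells give PPV = SEN = 0.8 but DACC ≈ 0.67, since DACC penalizes both error types at once.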

3. AIICS System

The AIICS system contains two stages: preprocessing and cell segmenting. The preprocessing stage enhances cell contours and suppresses noise contours to facilitate further cell segmenting. The cell segmenting stage cuts the cells out of an IIF image. The AIICS system takes different parameters for segmenting cells in different types of IIF images. Figure 2 shows the framework of the AIICS system.

501206.fig.002
Figure 2: The framework of the AIICS system.
3.1. Preprocessing

In the preprocessing stage, the AIICS system first transforms a color IIF image into a gray-level image and then employs a run length enhancer and an adaptive filter to eliminate noise and highlight the contours of objects in the gray-level image.

3.1.1. Color Selector

Since antibodies bind stably and specifically to their corresponding antigens, they are invaluable as probes to identify a particular molecule in cells, tissues, or biological fluids. Antibody molecules can be used with a variety of different labelling techniques to locate their target molecules accurately in single cells or in tissue sections. When the antibody itself, or the anti-immunoglobulin antibody that detects it, is labelled with a fluorescent dye, the technique is known as immunofluorescence microscopy. Under indirect immunofluorescence microscopy, the chosen dye-antibody complex binds only to specific proteins in the cell. Cells may be stained to different colors by different dyes. Commonly available fluorescent labels include red, green, blue, cyan, and yellow fluorescent proteins.

To remove the effect of dye color on cell segmentation, the AIICS system transforms each color IIF image into a gray-level image. The AIICS system computes the color histogram of the image with 6 bins, taking black (the background color), red, green, blue, cyan, and yellow as the representative colors of the six bins. Each pixel, according to its three color components, is thrown into the bin whose representative color is closest to it.

Let n_i be the number of pixels thrown into bin i; the nonbackground bin receiving the most pixels determines the dominant dye color. The AIICS system then projects each pixel onto the line passing through black and the representative color of that bin, and assigns the value obtained by this projection to the gray-level intensity of the corresponding pixel of the gray-level image. Let max and min be the maximal and minimal gray-level intensities of all the pixels in the image. The AIICS system then changes the gray-level intensity I of each pixel into 255 · (I − min) / (max − min). This operation stretches the contrast of the image from 0 to 255.
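A rough sketch of the color selector's main steps, under our reading of the description above; the representative RGB colors, the nearest-color bin assignment, and the function names are assumptions for illustration.

```python
# Hypothetical representative colors for the six histogram bins (RGB).
REPS = {"black": (0, 0, 0), "red": (255, 0, 0), "green": (0, 255, 0),
        "blue": (0, 0, 255), "cyan": (0, 255, 255), "yellow": (255, 255, 0)}

def nearest_bin(rgb):
    """Assign a pixel to the bin whose representative color is closest."""
    return min(REPS, key=lambda k: sum((a - b) ** 2 for a, b in zip(rgb, REPS[k])))

def project_on_color_axis(rgb, rep):
    """Project a pixel onto the line from black to the representative color;
    the projection length serves as the pixel's gray-level intensity."""
    norm = sum(c * c for c in rep) ** 0.5
    if norm == 0:
        return 0.0
    return sum(a * b for a, b in zip(rgb, rep)) / norm

def stretch(values):
    """Linearly stretch a list of intensities to the 0..255 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0 for _ in values]
    return [255 * (v - lo) / (hi - lo) for v in values]
```

A mostly red pixel lands in the red bin, its projection onto the black-to-red axis measures how strongly it fluoresces, and the final stretch normalizes the whole image's dynamic range.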

3.1.2. Run Length Enhancer

In IIF images, cells are bright objects protruding from a uniform dark background. However, some cells are often surrounded by a hazy fluorescent halo, such as the image in Figure 3(a). The hazy fluorescent halo makes cell segmentation more difficult. Therefore, this paper proposes a run length enhancer to suppress the hazy fluorescent halo. For each pixel p of the gray-level image, the run length enhancer draws eight line segments, all of which pass through p, each at a different orientation with respect to the x-axis. Let W be a square window centered at p; we call W the related window of p. Each line segment passes through two pixels on the boundary of W. Figure 4 illustrates the related window of p, marked in red. If the gray-level intensity of p is greater than the average gray-level intensity of all the pixels in W, the run length enhancer raises the intensity of p based on the intensities of the pixels along the line segments; otherwise, it lowers the intensity of p accordingly.

fig3
Figure 3: The result of preprocessing stage.
501206.fig.004
Figure 4: The eight line segments in run length enhancer.

The run length enhancer then stretches the gray-level contrast of the image from 0 to 255. Figure 3(b) shows the result of executing the run length enhancer on the image in Figure 3(a) and demonstrates that the run length enhancer can suppress the hazy fluorescent halo.

3.1.3. Adaptive Filter

Figure 3(b) shows that some cell surfaces are uneven and there may exist holes on the cells, as indicated by a red arrow; therefore, the AIICS system adopts an adaptive filter to smooth the surfaces and to fill in the holes on the cells. For each pixel p, let W be the related window of p, and consider the average and minimal gray-level intensities of all the pixels in W. Since cell pixels are generally brighter than background pixels, the adaptive filter assigns each pixel a new intensity computed from these local statistics. After that, the adaptive filter stretches the pixel contrast of the image from 0 to 255. Figure 3(c) shows the result of executing the adaptive filter on the image in Figure 3(b); the adaptive filter smooths the surfaces and fills in the holes on the cells.

3.2. Cell Segmenting

This stage cuts the cells out of an IIF image. It consists of three steps: gradient computing, contour extracting, and overlapping cell splitting.

3.2.1. Gradient Computing

The gradients of pixels in an image can provide an abundance of object contour information. Let ∇f = (∂f/∂x, ∂f/∂y) denote the gradient vector of the image f at a pixel, where ∇ is the vector differential operator; the components of ∇f are the partial derivatives of f. The magnitude of the vector is given by

|∇f| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2).

Writing f_x = ∂f/∂x and f_y = ∂f/∂y, let g_xx = f_x^2, g_xy = f_x f_y, and g_yy = f_y^2, and let u = (cos θ, sin θ) be a unit vector in the image plane. We define the local contrast of f in the direction of u as the quadratic form

C(θ) = uᵀ M u, where M = [[g_xx, g_xy], [g_xy, g_yy]].  (9)

It is well known that the quadratic form in equation (9) has a maximal and a minimal value for varying θ. These extreme values coincide with the eigenvalues of the matrix M, and they are attained when u is the corresponding eigenvector [13]. The extreme values can be obtained by

λ = (g_xx + g_yy)/2 ± sqrt(((g_xx − g_yy)/2)^2 + g_xy^2).  (10)

For each pixel, a pair (λ1, λ2) is obtained, where λ1 is the greatest gradient magnitude of the pixel in a certain gradient direction, and λ2 is the gradient in the direction perpendicular to it (the included angle between the two directions is 90°).

After computing λ1 and λ2 for all the pixels, the λ1 and λ2 values are each linearly rescaled using their respective maxima and minima over all pixels. Figures 5(b) and 5(c) demonstrate the λ1 and λ2 images obtained by the gradient computing from the image shown in Figure 5(a).

fig5
Figure 5: The example results obtained by the gradient computing.

From Figure 5, one can observe that λ1 and λ2 effectively describe the gradient of the image, but the object contours in the λ1 image (resp. λ2 image) are expanded (resp. shrunk) compared with the real object contours. Hence, λ1 and λ2 are integrated into a single gradient image by a ratio formula in which 1 is added to the denominator to ensure that the denominator is never zero.

Next, the contrasts of all the pixels in the integrated gradient image are stretched to range from 0 to 255. Figure 5(d) shows the result of executing the gradient computing approach on the image in Figure 5(a) and demonstrates that the approach provides an impressive gradient of an image.
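As a sketch of the per-pixel eigenvalue computation described above, here is a minimal Python version using the closed form for a symmetric 2x2 matrix; the partial derivatives fx and fy would come from some finite-difference operator, which is an assumption, and the function name is ours.

```python
def tensor_eigenvalues(fx, fy):
    """Eigenvalues of the 2x2 matrix M = [[fx*fx, fx*fy], [fx*fy, fy*fy]]
    built from the partial derivatives fx, fy at one pixel.

    Returns (lambda1, lambda2) with lambda1 >= lambda2, i.e. the maximal
    and minimal local contrast over all directions.
    """
    gxx, gxy, gyy = fx * fx, fx * fy, fy * fy
    mean = (gxx + gyy) / 2.0
    dev = ((gxx - gyy) ** 2 / 4.0 + gxy * gxy) ** 0.5
    return mean + dev, mean - dev
```

For a single gray-level image the matrix has rank one, so lambda1 equals the squared gradient magnitude fx^2 + fy^2 and lambda2 is 0; the decomposition becomes more informative when the per-channel tensors of a color image are summed.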

3.2.2. Contour Extracting

Since the gradients of some object contours are indistinct, such as the gradient indicated by a red arrow in Figure 5(d), the AIICS system takes an adaptive thresholding approach to transform the gradient image into a binary image in which a 1-bit (resp. 0-bit) represents a white pixel (resp. black pixel). The adaptive thresholding approach first uses Otsu's thresholding method [11] to partition all the pixels into two groups according to their gray-level intensities, then computes the average gray-level intensity of the pixels in each group, and takes the smaller of the two averages to be the gray-level intensity of the background pixels.

For each pixel, let W be its related window, let μ_W be the average gray-level intensity of all the pixels in W, and let σ_W be the standard deviation of the gray-level intensities of all the pixels in W. Two thresholds are then set from μ_W and σ_W, and the binary value of the pixel is determined by comparing its gray-level intensity against these thresholds.

Figure 6(a) illustrates the generated by the adaptive thresholding approach from the image in Figure 5(d).

fig6
Figure 6: The example results obtained by the thinning and spur pruning operations.

Since the contrast of the pixels in the vicinity of a cell contour or of noise is generally high, it may cause a false contour with a thickness of more than one pixel, whereas the contour of an object should be one pixel thick. The AIICS system adopts a hit-and-miss transform-based skeletonization (HMTS) algorithm [14] to thin the contour of an object to a thickness of one pixel. We refer to the eliminated candidate contour pixels as redundant-contour pixels and the remaining contour pixels as true-contour pixels. We call this algorithm the thinning operation.

Let W be the related window of a contour pixel. The thinning operation compares W with each of the eight structuring elements shown in Figure 7, where the gray pixels stand for do-not-care pixels (a do-not-care pixel may be a 1-bit pixel or a 0-bit pixel). The thinning operation considers W matched if the positions and values of the 1- and 0-bits in one structuring element are exactly the same as those in W, regardless of the do-not-care pixels. If W is matched, the central pixel is removed (set to 0); otherwise, it is retained. The algorithm repeats this procedure until no more thinning can be performed. The thinning algorithm thus cuts off the redundant-contour pixels, so that the contours have a thickness of only one pixel. Figure 6(b) shows the result after running the thinning operation on the image in Figure 6(a).

501206.fig.007
Figure 7: Eight structuring elements for thinning.
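The matching rule of the thinning operation can be sketched as follows. This is a minimal Python version in which the example structuring element is hypothetical (the real eight elements are those of Figure 7), windows are 3x3 nested lists of 0/1, and None marks a do-not-care pixel.

```python
DONT_CARE = None  # corresponds to the gray pixels in Figure 7

def matches(window, element):
    """True if the 3x3 binary window matches the structuring element,
    ignoring don't-care positions."""
    for wrow, erow in zip(window, element):
        for w, e in zip(wrow, erow):
            if e is not DONT_CARE and w != e:
                return False
    return True

def thin_pixel(window, elements):
    """One thinning decision: a central 1-pixel matching any element
    is removed (set to 0); otherwise it is kept."""
    if window[1][1] == 1 and any(matches(window, e) for e in elements):
        return 0
    return window[1][1]

# A hypothetical element: background above, foreground below, sides free.
ELEM = [[0, 0, 0],
        [DONT_CARE, 1, DONT_CARE],
        [1, 1, 1]]
```

Repeating this decision over every contour pixel until nothing changes yields the one-pixel-thick skeleton; spur pruning reuses the same machinery with the elements of Figure 8.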

However, uneven object contours tend to leave small spurs on the skeleton obtained by the thinning operation. These spurs are unwanted, so a spur pruning operation is required to remove them. The procedure of the spur pruning operation [14] is exactly the same as the thinning operation, except that the eight structuring elements in Figure 7 are replaced by the eight structuring elements in Figure 8. Figure 6(c) displays the result image after running the spur pruning operation on the image shown in Figure 6(b).

501206.fig.008
Figure 8: Eight structuring elements for spur pruning.

Next, some false contours should be removed. Let p and q be two pixels on the contour of the ith object O_i, and let d(p, q) be the distance between them. The maximal d(p, q) over all pixel pairs on the contour of O_i is called the major length of O_i. If the major length of O_i is less than one-third of the average major length of all the object contours, then O_i is considered to be noise and removed. After that, the image in Figure 6(c) is changed into the one in Figure 6(d).

3.2.3. Overlapping Cells Splitting

Some juxtaposed objects may share overlapping regions, which makes cell distinguishing and counting imprecise. We call an area containing touching or overlapping cells an overlapping area. The watershed algorithm is commonly used to split overlapping objects [15–18]. Watershed-based segmentation can give good results for gray-level images with distinct minima and catchment basins. For a binary image, however, there are only two gray levels, 0 and 1, standing for background and cell, respectively. If two cells are connected in the binary image, only one minimum and catchment basin will be formed in the topographic surface. When employing the watershed algorithm to segment connected cells, the distance transform [19] is often adopted to preprocess the image and make it more suitable for watershed segmentation, by assigning the catchment basins in the overlapping area.

The distance transform [19, 20] labels each pixel in an overlapping area with the distance from that pixel to the nearest pixel outside the overlapping area. For each pixel in an overlapping area, the distance transform can be determined by the following three steps.
(1) Attach an infinite label to each pixel in the overlapping area and a label of 0 to each pixel outside the overlapping area.
(2) Travel through the image pixel by pixel, replacing the label of each pixel in the overlapping area with the minimum of its own label and the labels of its neighbors plus one.
(3) Repeat step (2) until all labels have been converted to finite values.
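The iterative distance transform can be sketched directly. This is a minimal Python version assuming 8-connected neighbours with unit step cost (the paper's exact neighbour weights may differ), with the mask encoding the overlapping area as 1 and the outside as 0.

```python
INF = float("inf")

def distance_transform(mask):
    """Iterative distance transform of a binary mask.

    Pixels inside the overlapping area start at infinity, pixels outside
    at 0; each pass replaces a label with the minimum of itself and each
    8-neighbour's label plus one, until no label changes.
    """
    h, w = len(mask), len(mask[0])
    dist = [[INF if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                best = dist[y][x]
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            best = min(best, dist[ny][nx] + 1)
                if best < dist[y][x]:
                    dist[y][x] = best
                    changed = True
    return dist
```

The resulting distance map has one local maximum near the centre of each touching cell, which is what gives the watershed algorithm separate catchment basins to flood.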

The most intuitive way to explain watershed-based segmentation is to imagine that a hole is drilled in each minimum of the surface and water floods into the different catchment basins through the holes. If the water of different catchment basins would merge under further immersion, a dam is built to prevent the merging. This flooding process eventually reaches a stage at which only the tops of the dams (the watershed lines) are visible above the water line.

For the ith object O_i, let L be the line segment connecting the two contour pixels that realize the major length of O_i, that is, the longest distance between any two pixels on the contour of O_i. Consider the line segment that passes through the midpoint of L and is perpendicular to L; the distance between the two contour pixels it meets is called the minor length of O_i.

Let A_i be the area of O_i and let A_avg be the average area of all the objects in the image. The AIICS system considers O_i to be an overlapping area, and takes the watershed algorithm [17] to split O_i into more objects, only if A_i is sufficiently large relative to A_avg.

After that, the binary image is converted into another binary image. Figure 9(b) demonstrates the result obtained by running this watershed algorithm on the image in Figure 9(a), where only the overlapping areas surrounded by white closed curves in Figure 9(b) are further split by the watershed algorithm.

fig9
Figure 9: The results of overlapping cells splitting.

Some cells may still be split into more than one region, and the AIICS system tries to recombine them. Let μ_A and σ_A be the average and standard deviation of the areas of all the object regions. If the area of an object region R is less than a threshold derived from μ_A and σ_A, the AIICS system merges R into the adjacent object region that shares the longest common edge with R. Figure 9(c) displays the result obtained from the image in Figure 9(b) by this region combination.

4. Rough Classifier

The six distinct autoantibody fluorescence patterns have quite different fluorescence characteristics. To effectively sever the cells from IIF images, the authors roughly classify IIF images into two categories: a uniform pattern category and a fleck pattern category. Most of the cells in an image in the uniform pattern category exhibit the coarse speckled, diffuse, fine speckled, or peripheral pattern, while most cells in an image in the fleck pattern category exhibit the discrete speckled or nucleolar pattern. The nuclei of the cells in the fleck pattern category have bright spots. Since the two image categories have significantly different properties, the AIICS system uses a rough classifier to classify IIF images and uses distinct parameter values to segment the cells from the images of each category.

Given an IIF image, the rough classifier first applies Otsu's thresholding method [11] to determine a threshold for converting the IIF image into a binary image. A pixel is considered to be an object pixel if its gray-level intensity is greater than the threshold; otherwise, it is considered to be a background pixel. If the average area of all the cell regions is less than a given area threshold, the image is classified into the fleck pattern category; otherwise, it goes to the uniform pattern category. The AIICS system then takes different parameter values to segment cells in images from different categories. In this paper, we use a genetic algorithm to compute the most suitable values of these parameters.
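A sketch of the rough classifier's decision, assuming the binary image has already been produced by thresholding; the region labeling uses 4-connectivity, and the function names, connectivity choice, and threshold value are our assumptions.

```python
def average_region_area(binary):
    """Average area of 4-connected foreground regions in a binary image
    (nested lists of 0/1), found by iterative flood fill."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return sum(areas) / len(areas) if areas else 0.0

def rough_classify(binary, area_threshold):
    """Fleck-pattern images break up into many small bright spots, so a
    small average region area maps to the fleck category."""
    return "fleck" if average_region_area(binary) < area_threshold else "uniform"
```

The actual area threshold would be one of the parameters trained by the genetic algorithm described in the next section.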

5. Genetic Algorithm Based Parameter Detector (GAPD)

According to the experimental results, the AIICS system gives better results when different parameter values are used for segmenting cells from fleck pattern category images and from uniform pattern category images. Hence, one set of parameter values is used for fleck pattern category images and another for uniform pattern category images.

The performance of the AIICS system in segmenting the cells in an IIF image is significantly affected by the values of its six parameters. In this paper, a genetic algorithm based parameter detector (GAPD) is provided to determine the most suitable values of these parameters. GAPD uses a binary string consisting of six substrings, one per parameter, to represent a chromosome Ch. For each chromosome Ch, each parameter is decoded from the number of 1-bits in its corresponding substring.
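The decoding of a chromosome can be sketched as follows; a minimal Python version in which the substring lengths are passed in, and the raw popcount stands in for whatever final mapping the paper applies to each parameter (an assumption).

```python
def decode_chromosome(bits, lengths):
    """Split a chromosome bit string into substrings of the given lengths
    and decode each parameter as the number of 1-bits in its substring."""
    params, pos = [], 0
    for n in lengths:
        params.append(bits[pos:pos + n].count("1"))
        pos += n
    return params

# Hypothetical 7-bit chromosome with two substrings of lengths 3 and 4.
params = decode_chromosome("1101001", [3, 4])
```

Under this encoding, a parameter with an l-bit substring can take any value from 0 to l, and the mutation operation (flipping one bit per substring) moves each parameter by at most one step.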

GAPD applies the accumulated historical IIF images to train the most appropriate parameter values using a genetic algorithm. The manually drawn cell regions are taken as a collection of ground truths. The relative distance error (RDE) [21] is often adopted to measure the segmentation errors of a segmentation method; GAPD uses RDE as the fitness measure of Ch, based on the parameter values that Ch encodes.

Initially, GAPD randomly generates a population of chromosomes. To evolve the best solution, the genetic algorithm repeatedly executes the mutation, crossover, and selection operations until the relative fitness values of the reserved chromosomes are very similar to one another or the number of iterations reaches MAXRUN.

In the mutation operation, for each of the reserved chromosomes, GAPD uses a random number generator to select one bit from each of the six substrings. Each selected bit is then replaced by its complement (logical NOT) to generate a new chromosome.

In the crossover operation, GAPD uses a random number generator to designate pairs of chromosomes from the reserved chromosomes and a crossover position within the bit string. For each chromosome pair, the pieces before and after the crossover position are exchanged and concatenated into two new chromosomes.

In the selection operation, the best chromosomes are selected, according to their fitness, from the chromosomes reserved in the previous iteration together with the chromosomes created in the mutation and crossover operations. GAPD continuously performs the three operations (mutation, crossover, and selection) until the relative fitness values of the reserved chromosomes are very close. Finally, GAPD uses the chromosome with the best fitness among the reserved chromosomes to determine the parameter values for segmenting cells from IIF images. Figure 10(a) shows an example chromosome and the parameter values it encodes; Figure 10(b) displays a new chromosome derived from it by the mutation operation, where the underlined bits are the randomly selected bits; and Figure 10(c) displays two new chromosomes generated from chromosomes Ch1 and Ch2 by the crossover operation.

fig10
Figure 10: Example for GAPD.

6. Experiments

This section explores, through experiments, the performance of the AIICS system in segmenting and counting the cells in an IIF image. In these experiments, 195 IIF images, 160 in the uniform pattern category and 35 in the fleck pattern category, are used as test images. Figure 1 shows some of the test images; one can clearly observe that their characteristics differ markedly. The AIICS system therefore employs the rough classifier to assign an IIF image to the fleck or uniform pattern category and uses different parameter values when segmenting the cells in each category.

The first experiment computes the most suitable values of the six parameters. In this experiment, 17 of the 160 uniform pattern images and 3 of the 35 fleck pattern images are randomly selected to train the parameters via GAPD. The experimental results show the parameter values with which the AIICS system gives the best segmentation results; in all of the following experiments, the six parameters are set to the values obtained here.

The purpose of the second experiment is to probe the performance of the gradient computing method proposed in this paper. In this experiment, the AIICS system is applied to segment the cells in 20 IIF images, with the proposed gradient computing method replaced in turn by the Sobel [22], Prewitt [22], Roberts [22], and Laplacian [22] operators. The 20 IIF images, containing 1242 cells, are randomly selected from the 195 test images. Misclassification error (ME) [23], relative area error (RAE) [23], modified Hausdorff distance (MHD) [23], and relative distance error (RDE) [21] are often adopted to measure segmentation errors; here they are used to evaluate the Sobel, Prewitt, Roberts, and Laplacian operators and the proposed gradient computing method. Table 2 shows the results of this experiment, which indicate that our method computes the gradient of an image considerably better than the Sobel, Prewitt, Roberts, and Laplacian methods.
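Two of the region-based measures above can be stated concretely. The sketch below follows the definitions surveyed by Sezgin and Sankur [23]: ME is the fraction of pixels whose foreground/background assignment disagrees with the ground truth, and RAE is the area difference normalized by the larger of the two areas. The masks and values here are illustrative, not the paper's data; the contour-distance measures MHD and RDE are omitted.

```python
import numpy as np

def misclassification_error(gt, seg):
    # gt, seg: boolean masks, True = cell (foreground) pixel.
    fg = np.logical_and(gt, seg).sum()    # foreground pixels agreed on
    bg = np.logical_and(~gt, ~seg).sum()  # background pixels agreed on
    return 1.0 - (fg + bg) / gt.size

def relative_area_error(gt, seg):
    a_gt, a_seg = gt.sum(), seg.sum()
    if a_seg < a_gt:
        return (a_gt - a_seg) / a_gt
    return (a_seg - a_gt) / a_seg

gt = np.zeros((8, 8), bool);  gt[2:6, 2:6] = True   # 16-pixel square
seg = np.zeros((8, 8), bool); seg[2:6, 2:5] = True  # 12-pixel estimate
# ME: 4 disagreeing pixels out of 64 -> 0.0625
# RAE: (16 - 12) / 16 -> 0.25
```

Both measures lie in [0, 1], with 0 indicating a perfect match, so lower table entries mean better segmentation.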

tab2
Table 2: The segmentation errors obtained in the second experiment.

The third experiment explores the performance of the AIICS system and of Eddins' [5], Althoff et al.'s [1], Tang and Ewert's [6], and Yan et al.'s [7] methods in segmenting the cells in an IIF image. The 20 test images employed in the second experiment are used again as test images, and ME, RAE, MHD, and RDE are employed to evaluate the segmentation errors. Table 3 shows the results obtained in this experiment.

tab3
Table 3: The segmentation errors obtained in the third experiment.

The fourth experiment scrutinizes the performance of the AIICS system and compares it with Eddins' [5], Althoff et al.'s [1], Tang and Ewert's [6], and Yan et al.'s [7] methods in counting the cells in an IIF image after segmenting them out. Tables 4, 5, and 6 show the experimental results obtained by using all 195 IIF images, only the 160 uniform pattern images, and only the 35 fleck pattern images as the test data, respectively. The results indicate that the AIICS system is clearly superior to the other four methods in counting the cells in an IIF image.

tab4
Table 4: The results of experiment 4 by using all 195 IIF images as test data.
tab5
Table 5: The results of experiment 4 by using 160 uniform pattern images as test data.
tab6
Table 6: The results of experiment 4 by using 35 fleck pattern images as test data.

The fifth experiment investigates the effect of the rough classifier. In this experiment, the 160 uniform pattern images and 35 fleck pattern images are used as test data. Table 7 shows the results of classifying the IIF images into the uniform or fleck pattern category with the rough classifier, together with the classification measures obtained in this experiment.

tab7
Table 7: The classification results obtained by the rough classifier.

Table 7 shows that four images in the fleck pattern category are erroneously classified into the uniform pattern category. The AIICS system is therefore used to separate the cells in these four images with the parameter values of each category in turn. The experimental results in Table 8 indicate that, for cell segmentation by the AIICS system, it is in fact more appropriate to assign the four images to the uniform pattern category.
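The measures for a two-class experiment of this kind are conventionally computed from the confusion counts, in the style of Makhoul et al. [12]. The sketch below treats "uniform pattern" as the positive class; the counts are illustrative assumptions built from the statement above (4 fleck images misread as uniform, all uniform images assumed correct), not the paper's reported figures.

```python
def classification_measures(tp, fp, fn, tn):
    # Standard two-class measures from a confusion matrix.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical counts: 160 uniform images all classified correctly (tp),
# 4 fleck images misclassified as uniform (fp), 31 fleck images correct (tn).
p, r, a = classification_measures(tp=160, fp=4, fn=0, tn=31)
```

With these assumed counts the classifier would reach a precision of 160/164, a recall of 1.0, and an accuracy of 191/195.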

tab8
Table 8: The segmentation results of the erroneously classified images obtained by the AIICS system with the parameter values of each category.

7. Conclusions

In this paper, the AIICS system is proposed to automatically segment and count the cells in an IIF image. The experimental results show that the AIICS system segments and counts the cells effectively, even though the characteristics of the IIF images are extremely different. A color selector, a run length enhancer, and an adaptive filter are provided to remove noise and to enhance the contours of the cells in an image. To separate the cells from an image effectively, a new gradient computing method is proposed to compute edge gradients, and an adaptive threshold method is given to determine the most suitable thresholds for detecting the cells. In addition, a rough classifier is presented to classify IIF images, and GAPD is presented to determine the most suitable parameter values used in the AIICS system. These techniques can also be applied to segmenting objects from other kinds of images.

Conflict of Interests

The authors declare that they have no conflict of interests regarding the publication of this paper.

References

  1. K. Althoff, J. Degerman, and T. Gustavsson, “Combined segmentation and tracking of neural stem-cells,” in Image Analysis, vol. 3540 of Lecture Notes in Computer Science, pp. 282–291, 2005.
  2. U. Sack, S. Knoechner, H. Warschkau, U. Pigla, F. Emmrich, and M. Kamprad, “Computer-assisted classification of HEp-2 immunofluorescence patterns in autoimmune diagnostics,” Autoimmunity Reviews, vol. 2, no. 5, pp. 298–304, 2003.
  3. Y.-L. Huang, Y.-L. Jao, T.-Y. Hsieh, and C.-W. Chung, “Adaptive automatic segmentation of HEp-2 cells in indirect immunofluorescence images,” in Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC '08), pp. 418–422, June 2008.
  4. J. H. Price and D. A. Gough, “Comparison of phase-contrast and fluorescence digital autofocus for scanning microscopy,” Cytometry, vol. 16, no. 4, pp. 283–297, 1994.
  5. S. Eddins, “The watershed transform: strategies for image segmentation,” MATLAB News & Notes, February 2002, http://www.mathworks.com/company/newsletters/articles/the-watershed-transform-strategies-for-image-segmentation.html.
  6. C. Tang and B. Ewert, “Automatic tracking of neural stem cells,” in Proceedings of the APRS Workshop on Digital Image Computing (WDIC '05), pp. 61–66, Brisbane, Australia, February 2005.
  7. P. Yan, X. Zhou, M. Shah, and S. T. C. Wong, “Automatic segmentation of high-throughput RNAi fluorescent cellular images,” IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 1, pp. 109–117, 2008.
  8. Chaudry, “Cell Culture,” http://www.scq.ubc.ca/cell-culture/.
  9. S. Cooper, “A unifying model for the G1 period in prokaryotes and eukaryotes,” Nature, vol. 280, no. 5717, pp. 17–19, 1979.
  10. D. L. Olson and D. Delen, Advanced Data Mining Techniques, Springer, New York, NY, USA, 1st edition, 2008.
  11. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  12. J. Makhoul, F. Kubala, R. Schwartz, and R. Weischedel, “Performance measures for information extraction,” in Proceedings of the DARPA Broadcast News Workshop, pp. 249–252, Herndon, Va, USA, February 1999.
  13. A. Cumani, “Edge detection in multispectral images,” Graphical Models and Image Processing, vol. 53, no. 1, pp. 40–51, 1991.
  14. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, USA, 2002.
  15. D. Hagyard, M. Razaz, and P. Atkin, “Analysis of watershed algorithms for grayscale images,” in Proceedings of the IEEE International Conference on Image Processing, pp. 41–44, March 1996.
  16. K. Karantzalos and D. Argialas, “Improving edge detection and watershed segmentation with anisotropic diffusion and morphological levellings,” International Journal of Remote Sensing, vol. 27, no. 24, pp. 5427–5434, 2006.
  17. C. L. Orbert, E. W. Bengtsson, and B. G. Nordin, “Watershed segmentation of binary images using distance transformations,” Proceedings of SPIE: Nonlinear Image Processing IV, vol. 1902, pp. 159–170, 1993.
  18. H. Sun, J. Yang, and M. Ren, “A fast watershed algorithm based on chain code and its application in image segmentation,” Pattern Recognition Letters, vol. 26, no. 9, pp. 1266–1274, 2005.
  19. F. Meyer, “Topographic distance and watershed lines,” Signal Processing, vol. 38, no. 1, pp. 113–125, 1994.
  20. G. Borgefors, “Distance transformations in digital images,” Computer Vision, Graphics, and Image Processing, vol. 34, no. 3, pp. 344–371, 1986.
  21. S.-F. Yang-Mao, Y.-K. Chan, and Y.-P. Chu, “Edge enhancement nucleus and cytoplast contour detector of cervical smear images,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 38, no. 2, pp. 353–366, 2008.
  22. E. Davies, Machine Vision: Theory, Algorithms and Practicalities, chapter 5, Academic Press, 1990.
  23. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 146–168, 2004.