Mathematical Problems in Engineering
Volume 2012, Article ID 194953, 20 pages
Research Article

A Combined Approach on RBC Image Segmentation through Shape Feature Extraction

1College of Computer Science, Chongqing University, Chongqing 400030, China
2Department of Science and Technology, Chongqing University of Arts and Sciences, Chongqing 402160, China

Received 11 November 2011; Accepted 26 December 2011

Academic Editor: Ming Li

Copyright © 2012 Ruihu Wang and Bin Fang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The classification of erythrocytes plays an important role in clinical diagnosis. Because the shape deformability of red blood cells makes automatic detection and recognition more difficult, we believe that a recovered 3D surface shape carries more information than traditional 2D intensity-based image processing methods. This paper proposes a combined approach for segmenting the complex surface of red blood cells based on the shape-from-shading technique and multiscale surface fitting. By means of the image irradiance equation under SEM imaging conditions, the 3D height field is recovered from the varied shading. The depth map of each surface point is then used to calculate the Gaussian and mean curvatures, which produce a surface-type label image. Finally, the surface is segmented into different parts through multiscale bivariate polynomial fitting. Experimental results show that this approach is easy to implement and promising.

1. Introduction

Erythrocyte shape deformability is critical to the filterability of blood, and it has drawn considerable attention in pathology research on clinically relevant blood diseases. Unfortunately, diagnosis is usually performed by a human expert, with drawbacks such as high time cost and inaccuracy. Conventionally, experts approach the erythrocyte image segmentation problem with 2D gray-scale images. However, to obtain satisfactory performance, classification and recognition should be based on the real shape of RBCs. In fact, the shape features of red blood cells provide more useful information for accurate diagnosis than intensity-level images, so it is necessary to take shape information into consideration in this problem. For our experiments, we use a database of 100 RBC images obtained by a scanning electron microscope (SEM) rather than an optical imaging system. In [1], Egerton elaborated the operating principle of the SEM, whose images are particularly suitable here because the brightness at each point is a function of the slope of the specimen at that point and thus forms a varied shading image. This is unlike optical and transmission electron microscopes, whose brightness depends instead on the thickness and the optical or electron density.

In [2], Russ noted that many two-dimensional images are sections through three-dimensional structures. This is especially true in the various types of microscopy, where either polished flat planes or cut thin sections are needed in order to form the images in the first place. But the specimens thus sampled are three-dimensional, and the goal of microscopists is to understand the three-dimensional structure. Some work has been done on optical blood cell images with traditional 2D methods [3–5]. In order to detect and classify malaria parasites in images of Giemsa-stained blood slides, Di Ruberto et al. proposed a morphological approach to evaluate the parasitaemia of the blood: they first segmented the cells (red and white) from the background and then detected and classified the parasites infecting them [3, 6]. Also addressing malaria parasite detection, Kumarasamy et al. presented an automated method for the robust analysis of RBC images via Gestalt laws [4], and Mandal et al. presented a segmentation method for blood smear images using normalized cuts [5]. Since noise estimation is a challenging problem for images with complex structures, Liao et al. presented a method that determines neighborhoods of image pixels automatically for adaptive denoising and estimates noise for a single-slice sinogram of low-dose CT based on homogeneous patches centered at a special pixel; in their method, the noisy image is viewed as an observation of a nonlinear time series (NTS), and the true state of the NTS must be recovered from the observation to realize image denoising [7, 8]. Furthermore, Hu et al. proposed image smoothing using nonlinear anisotropic diffusion [9], suggesting that the diffusion should be performed over both the time variable and the spatial variables.

Figure 1 shows a typical example of the kind of red blood cell image captured by SEM with which we deal, obtained at 600× magnification. As shown in Figure 1, the image has some notable characteristics that make our problem both significant and challenging. On the one hand, the image is of very good quality, with varied shading produced by the illumination. On the other hand, some light gridlines, added for the sake of manual counting, are superimposed on the image. In addition, the cells' shapes exhibit many irregular deformations, which is the primary problem we must solve in order to segment effectively. Since image segmentation is the bridge to proper classification, we aim to develop a satisfactory algorithm to classify red blood cells into different groups accurately, and we believe conventional segmentation methods based on gray value are unsuitable in this case. In this paper we propose a new strategy to segment RBC images based on surface feature extraction. First, we estimate the distribution of erythrocyte shapes from the SEM image. Then each cell's three-dimensional shape is reconstructed as a 3D height field using the shape-from-shading technique. Lastly, we apply a multiscale surface fitting segmentation algorithm to partition the cells based on the depth data acquired in the previous steps.

Figure 1: A typical SEM image of red blood cells.

This paper is organized as follows. The preliminary work, including the system framework and image preprocessing, is introduced in Section 2. In Section 3, a guided contour tracing method is used to extract boundary and center-point information; from this, all pixels of each cell located on top of the overlapped cells can be obtained, and their intensities are treated as shading information. The 3D reconstruction of each cell is introduced in Section 4, where we derive an image irradiance equation under SEM imaging conditions and apply shape from shading with linear approximation. Section 5 divides the surface into several different types after computing the mean and Gaussian curvatures. The multiscale surface fitting segmentation algorithm, which involves seed extraction, region growing, and so on, is proposed in Section 6. Finally, Section 7 draws conclusions and outlines future work.

2. Image Preprocessing and System Framework

As shown in Figure 2(a), some bright gridlines are superimposed on the original image, and they have side effects on the subsequent work. As mentioned before, apart from these lines, which are used for manual counting and classification, the images are of very good quality. The system developed here is intended to relieve humans of this exhausting manual work and to run automatically. Moreover, since the shading information is critical in our case, we regard the lines as noise and must remove them before recovering the 3D shape from the gray-tone image.

Figure 2: RBC image preprocessing.
2.1. System Framework

We describe the system framework in Algorithm 1.

Algorithm 1:

2.2. Image Processing Using Median Filtering Locally

As time consumption is critical for RBC classification, we use a median filter to remove the gridlines. The median filter is a smoothing technique that causes minimal edge blurring; it replaces the pixel value at each point in an image by the median of the pixel values in a neighborhood about that point:

g(x, y) = median_{(s,t) ∈ W} f(x + s, y + t),

where f is the original image, g is the median-filtered image, and W is a 2-dimensional 7×7 template.
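The filtering step above can be sketched as follows; this is a minimal illustration of the operation, not the paper's implementation, with the 7×7 window matching the template size in the text and edge replication as an assumed border policy:

```python
import numpy as np

def median_filter(img, size=7):
    """Replace each pixel by the median of its size x size neighborhood.

    Minimal sketch of the smoothing step; the paper uses a 7x7 template.
    Border pixels are handled by edge replication.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # median over the window centered at (y, x)
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out
```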

Figure 2(a) shows a scaled original RBC image with four white gridlines superimposed on it. The denoised image after median filtering the whole image directly is presented in Figure 2(c), in which the lines are removed successfully. However, the edges of the cell image are blurred and the brightness is changed at the same time. This change would introduce inaccuracy, because the recovered shape relies mostly on the irradiance. Fortunately, we can detect the exact positions of the lines by horizontal and vertical projection,

P_v(x) = Σ_y b(x, y),  P_h(y) = Σ_x b(x, y),

where b(x, y) marks only those pixels whose gray value is approximately equal to that of the pixels in the white gridlines; summing over y counts pixels column by column, and summing over x counts them row by row. Experimental results show that, when determining the exact positions of the vertical lines, only the points located in narrow bands around the projection peaks need to be considered, and only those points are processed when median filtering is imposed locally.

As a result, this local median filtering leads to a handy approach, namely local filling combined with median filtering using a 7×7 structuring element. The improved result is shown in Figure 2(b), in which all the cells keep the same shading information as the original image and the gridlines have been removed successfully.
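The projection-based localization of the gridlines can be sketched as follows. The brightness threshold and the column-coverage fraction are illustrative assumptions, since the paper's exact values are not given:

```python
import numpy as np

def gridline_columns(img, bright=200, frac=0.8):
    """Locate vertical gridlines by horizontal projection.

    b marks near-white pixels; a column is flagged as a gridline when
    most of its pixels are bright. Threshold values are illustrative.
    """
    b = (img >= bright).astype(int)
    proj = b.sum(axis=0)            # sum over y: one count per column
    return np.where(proj >= frac * img.shape[0])[0]
```

The same projection along the other axis locates horizontal gridlines; the flagged bands are then the only pixels passed to the local median filter.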

3. Tracing Contour and Cell Extraction Individually

3.1. Guided Contour Tracing

Vromen and McCane [10] proposed a contour-tracing-based approach to the problem of automatically finding the boundaries of red blood cells in a scanning electron microscope image. As shown in Figure 1, there are considerable overlapped cells. We are only interested in estimating the distribution of different erythrocyte shapes from the SEM image, rather than in an accurate count, so it makes sense to assume that the distribution of the top-level cells is identical to the overall distribution. Consequently, only those top-level cells need to be detected and recognized. At each step, the most probable direction is chosen by taking the prior tracing information into account. Since the cell contours are roughly elliptical, it is reasonable to fit a conic shape to the path: a parameterized second-degree polynomial, fitted over the last few traced points, models the local curvature.

The best-fitting polynomial is calculated over a number of data points using least squares. To increase accuracy and decrease computation, only directions with angles within a certain window around the predicted direction are considered. This is represented by a set of unit vectors d_i = (cos θ_i, sin θ_i), where θ is the angle of the predicted tangent. In our application, θ_i is uniformly sampled within a window centered on θ.
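A possible sketch of the direction prediction: fit second-degree polynomials x(t) and y(t) to the most recent boundary points by least squares and take the tangent angle at the newest point. The function name and the parameterization by point index are assumptions for illustration:

```python
import numpy as np

def predict_direction(points):
    """Predict the next tracing direction from the last traced points.

    Fits second-degree polynomials x(t), y(t) by least squares and
    returns the tangent angle at the newest point; candidate unit
    vectors are then sampled in a window around this angle.
    """
    pts = np.asarray(points, dtype=float)
    t = np.arange(len(pts))
    cx = np.polyfit(t, pts[:, 0], 2)   # x(t) = a t^2 + b t + c
    cy = np.polyfit(t, pts[:, 1], 2)
    dx = np.polyval(np.polyder(cx), t[-1])
    dy = np.polyval(np.polyder(cy), t[-1])
    return np.arctan2(dy, dx)          # predicted tangent angle
```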

Figure 3(a) shows the scaled 256×256 RBC image, and the contours after guided tracing are presented in Figure 3(b). There are 7 traced contours in Figure 3(b), located at the top level of the overlapped cell image. All of the contour information is stored as attributes in an XML file, which also contains the locus of each point on the boundary.

Figure 3: Cell image extracted using contour tracing.
3.2. Cell Extraction Individually

As shown in Figure 4, there is a break point, marked by a white circle, in each traced boundary. Such break points would result in wrong regions during growing, so we must fill them to form a complete boundary before growing. If the number of neighbors around a predicted boundary point is less than three under 8-connectivity, we consider it a break point. The relationships between two break points and the ways in which they break are shown in Figure 5.

Figure 4: Break point in cells boundary.
Figure 5: Five different types of break points.

According to the extracted cell contour information, we can grow each cell region starting from its center point to obtain the entire cell image. The algorithm is described in Algorithm 2.

Algorithm 2: Region growing algorithm.
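A minimal sketch of the region-growing step, assuming the traced contour is available as a binary mask and the seed is the cell's center point; this illustrates the idea rather than reproducing Algorithm 2:

```python
from collections import deque

import numpy as np

def region_grow(boundary, seed):
    """Grow a region from the cell's center point, stopping at the
    traced contour. `boundary` is a binary mask of the closed contour
    and `seed` must lie inside it; growth uses 8-connectivity.
    """
    h, w = boundary.shape
    region = np.zeros_like(boundary, dtype=bool)
    q = deque([seed])
    region[seed] = True
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                # accept in-bounds pixels not yet visited and not on the contour
                if (0 <= ny < h and 0 <= nx < w
                        and not region[ny, nx] and not boundary[ny, nx]):
                    region[ny, nx] = True
                    q.append((ny, nx))
    return region
```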

The resulting image of region growing subject to the contour boundary is shown in Figure 3(c). After growing regionally, we obtain the pixels that make up each cell and their gray-level values. In Table 1, 7 cell contours have been extracted altogether, where CENTER_X and CENTER_Y are the cell's center-point coordinates and PIXELS denotes the number of pixels in each cell.

Table 1: Contours’ information of traced cells.

4. Shape from Shading Using Linear Approximation

4.1. Shape from Shading and PDE

The “shape from shading” (SFS) problem is to recover the 3D shape of a surface from a gray-level monochrome image. In the 1970s, Horn first proposed an approach to reconstruct the original shape from a varied shading image, which amounts to solving a nonlinear first-order partial differential equation (PDE), the brightness equation. Since then, a large number of articles have proposed various methods for applying this technique to real or synthetic images.

This PDE arises from the brightness equation

R(p(x, y), q(x, y)) = I(x, y),

where (x, y) are the coordinates of a point in the image; the brightness equation connects the reflectance map R to the brightness image I. Almost all shape-from-shading methods, with the exception of an extremely small number of papers [11–13], assume that the scene model is Lambertian. The reflectance map is then the cosine of the angle between the light vector and the normal vector to the surface:

R(p, q) = (1 + p p_s + q q_s) / (√(1 + p² + q²) √(1 + p_s² + q_s²)),

where p = ∂z/∂x, q = ∂z/∂y, and (p_s, q_s) depends on the light direction [14].

Shape from shading is a fundamental issue in computer vision, and considerable research has been devoted to solving it [15, 16], including in medical image processing. In [17], the authors applied their method to an endoscopic image of a normal stomach and showed results obtained by a generic algorithm in the perspective case with the light source at the optical center, which is not suitable for SEM. Tankus et al. [18–20] proposed reconstruction algorithms under the assumption of perspective shape from shading. Deguchi and Okatani [21] accomplished shape reconstruction from an endoscope image by a shape-from-shading technique for a point light source at the projection center.

4.2. Reflectance Map under SEM Imaging Condition

Jones and Taylor [22] observed that the SEM imaging process is particularly appropriate for SFS, since it allows the simplifying assumptions that the projection is orthographic and the "light source" is at infinity. The Lambertian reflectance function is given by

R(n) = ρ (n · s),

where n is the unit surface normal, s is a unit vector in the direction of the light source, and ρ is the surface albedo.

The SEM reflectance function is based on the theoretical prediction that the number of electrons emitted from a surface in the SEM is proportional to the secant of the angle between the illumination direction and the surface normal. The reflectance function is thus [22]

R(n) = ρ / (n · s),

that is, the reciprocal of the Lambertian reflectance.

In this paper, we assume that the image is formed by orthographic projection, because the specimen examined by the SEM is very small compared with its distance from the light source.

4.3. Linear Approximation

In [23], the authors argued that linearizing the reflectance map in the depth Z, instead of in the gradients p and q, is more appropriate in some cases. They presented a method for computing depth from a single shaded image by employing discrete approximations of p and q using finite differences and linearly approximating the reflectance in Z. It gives good results for spherical surfaces and can be applied to any reflectance function.

In this paper we aim to recover the shape of red blood cells from an image captured by SEM based on this linear approximation. The method is extended to our problem by assuming orthographic projection and deriving the implementation equations with a reflectance function that is the reciprocal of the Lambertian one.

The image irradiance equation (IRE) expresses the relationship between the reflectance function and the image irradiance. The recovered shape can be represented by a depth map Z(x, y), normals n(x, y), or surface gradients (p, q). The radiance of a surface patch depends on its gradient, the light source location, and the reflectance properties. Under the Lambertian model, the gray level of a pixel in the image is determined by the light direction and the normal vector, which is expressed by the IRE

I(x, y) = R(p(x, y), q(x, y)),

where I(x, y) is the gray level at pixel (x, y), p = ∂Z/∂x and q = ∂Z/∂y are the surface gradients, and s is the illumination direction.

Approximating p and q discretely by backward differences,

p = Z(x, y) − Z(x − 1, y),  q = Z(x, y) − Z(x, y − 1),

the reflectance function can be rewritten as a function of depth values only, and the IRE becomes

0 = f(Z(x, y), Z(x − 1, y), Z(x, y − 1)) = I(x, y) − R(Z(x, y) − Z(x − 1, y), Z(x, y) − Z(x, y − 1)).

Assuming the point (x, y) and the image are given, the linear approximation of f with respect to Z(x, y), solved by the Jacobi iteration method, yields the nth iterative result

Z^n(x, y) = Z^{n−1}(x, y) − f(Z^{n−1}(x, y)) / (df/dZ(x, y))(Z^{n−1}(x, y)).

As mentioned previously, the reflectance function is the reciprocal of the Lambertian model under SEM imaging. With the illumination direction along the viewing axis, the IRE becomes

0 = f = I(x, y) − √(1 + p² + q²).

Computing the partial derivatives of p and q with respect to Z(x, y) gives ∂p/∂Z(x, y) = ∂q/∂Z(x, y) = 1, so

df/dZ(x, y) = −(p + q) / √(1 + p² + q²),

which is substituted into the iteration above.

We use the shape-from-shading method with linear approximation to reconstruct the red blood cell's 3D shape, as shown in Figure 6.

Figure 6: Reconstructed RBC 3D shape.
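The linear-approximation scheme above can be sketched as follows, under the simplifying assumption that the illumination is along the viewing axis, so the SEM reflectance reduces to R(p, q) = √(1 + p² + q²). This is an illustrative sketch rather than the authors' implementation:

```python
import numpy as np

def sfs_linear(I, iters=50):
    """Recover a height field from shading by Tsai-Shah linear
    approximation, sketched for the SEM reflectance R = sec(theta)
    with the source along the viewing axis: R(p, q) = sqrt(1+p^2+q^2).
    I should be normalized so that flat regions have intensity 1.
    """
    Z = np.zeros_like(I, dtype=float)
    for _ in range(iters):
        # backward differences approximate the surface gradient
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        R = np.sqrt(1.0 + p**2 + q**2)
        f = I - R                      # brightness equation residual
        df = -(p + q) / R              # d f / d Z(x, y)
        ok = np.abs(df) > 1e-8         # skip the degenerate flat case
        denom = np.where(ok, df, 1.0)
        Z = Z - np.where(ok, f / denom, 0.0)
    return Z
```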

5. Curvature Calculation

There are 8 different surface types altogether: peak, pit, ridge, valley, flat, minimal surface, saddle ridge, and saddle valley. The surface type of each data point on a scene object is uniquely designated by the signs of the mean curvature and Gaussian curvature, both of which can be calculated by local convolution [24, 25]. Each data point in a given N × N window is associated with a 2-dimensional position (u, v) from the set U × U, where U = {−M, …, −1, 0, 1, …, M}, N = 2M + 1, and N is odd.

The following discrete orthogonal polynomials provide local biquadratic surface-fitting capability:

φ₀(u) = 1,  φ₁(u) = u,  φ₂(u) = u² − M(M + 1)/3,

where u ∈ U. The biquadratic is the minimal-degree polynomial surface needed to estimate the first and second partial derivatives. A corresponding set of functions is the normalized version of the orthogonal polynomials, given by b_i(u) = φ_i(u)/P_i, where the P_i are normalizing constants. The three normalization constants are given by

P₀ = 2M + 1,  P₁ = M(M + 1)(2M + 1)/3,  P₂ = M(M + 1)(2M + 1)(4M² + 4M − 3)/45.

We then define a set of surfaces over the (u, v) window, each of which can be parameterized by the biquadratic fit

ẑ(u, v) = Σ_{i+j ≤ 2} a_ij φ_i(u) φ_j(v).

Before implementing the surface-fitting segmentation algorithm, the surface types must first be determined, based on the mean and Gaussian curvatures. For a digital surface these curvatures are approximated from partial derivative estimates, which are calculated via the appropriate 2D image convolutions (denoted by *):

f_u = D_u * (S * f),  f_v = D_v * (S * f),  f_uu = D_uu * (S * f),  f_vv = D_vv * (S * f),  f_uv = D_uv * (S * f),

where the derivative masks D are separable windows built from the normalized orthogonal polynomials above and S is a binomial smoothing window.

The mean curvature H and Gaussian curvature K are then calculated from the partial derivatives as

H = ((1 + f_v²) f_uu + (1 + f_u²) f_vv − 2 f_u f_v f_uv) / (2 (1 + f_u² + f_v²)^{3/2}),

K = (f_uu f_vv − f_uv²) / (1 + f_u² + f_v²)².

Table 2 shows the average and variance of the mean and Gaussian curvatures of the 7 extracted cells in Figure 3.

Table 2: Average and variance of mean curvature and Gaussian curvature.
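The curvature computation can be sketched with finite-difference derivative estimates standing in for the orthogonal-polynomial convolution windows (a common substitute; the paper's window-based estimates differ in detail):

```python
import numpy as np

def curvatures(Z):
    """Mean (H) and Gaussian (K) curvature of a depth map, using
    finite-difference derivative estimates in place of the paper's
    orthogonal-polynomial convolution windows.
    """
    fv, fu = np.gradient(Z)            # first partials: rows (v), cols (u)
    fvv, fvu = np.gradient(fv)         # second partials of fv
    fuv, fuu = np.gradient(fu)         # second partials of fu
    g = 1.0 + fu**2 + fv**2
    H = ((1 + fv**2) * fuu + (1 + fu**2) * fvv
         - 2 * fu * fv * fuv) / (2 * g**1.5)
    K = (fuu * fvv - fuv**2) / g**2
    return H, K
```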

6. Experiment

6.1. Surface-Type-Based Image Segmentation

The fundamental formulation of region-based image segmentation is defined as follows:
(1) ⋃_{i=1}^{n} R_i = R;
(2) each R_i is a connected region, i = 1, 2, …, n;
(3) R_i ∩ R_j = ∅ for all i ≠ j;
(4) P(R_i) = TRUE for i = 1, 2, …, n;
(5) P(R_i ∪ R_j) = FALSE when i ≠ j and R_i is adjacent to R_j,

where P is a uniformity predicate defined on groups of connected pixels. Each R_i is grown regionally via the 8-connected neighborhood. All points in a region satisfy the same surface function; different regions are fitted by different surface functions.

The segmentation procedure is divided into two main parts. First we compute the surface-type label image by

T(x, y) = 1 + 3 (1 + sgn_ε(H(x, y))) + (1 − sgn_ε(K(x, y))),

where T denotes the surface type, ranging from 1 to 9 as shown in Table 3.

Table 3: Surface type defined by mean and Gaussian curvature.
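The surface-type labeling can be sketched as below, following the Besl-Jain convention for the label formula; the epsilon thresholds are illustrative placeholders:

```python
import numpy as np

def surface_type(H, K, eps_h=1e-3, eps_k=1e-3):
    """Surface-type label image from curvature signs, via the
    Besl-Jain labeling T = 1 + 3*(1 + sgn(H)) + (1 - sgn(K)),
    yielding labels 1..9 (e.g. peak = 1, flat = 5).
    """
    def sgn(x, eps):
        # thresholded sign: +1, 0, or -1 depending on eps
        return np.where(x > eps, 1, np.where(x < -eps, -1, 0))
    return 1 + 3 * (1 + sgn(H, eps_h)) + (1 - sgn(K, eps_k))
```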

We then define the root mean square error (RMSE),

RMSE = √( (1/n) Σ_{(x,y) ∈ R} (ẑ(x, y) − z(x, y))² ),

which measures whether the difference between the fitted values and the original depths is confined to a preset threshold, as in Algorithm 3.

Algorithm 3: Algorithm of multiscale segmentation through mapped depth.

6.2. Algorithm
6.2.1. Experimental Result

In our experiment, the RMS fit error is compared with a threshold derived from the noise variance to decide whether a pixel is compatible with the approximating surface function. If the magnitude of the difference between the function value and the digital surface value is less than the allowed tolerance w, the pixel is added to the set of compatible pixels C(R), which are compatible with the surface fitted to the region R; otherwise, the pixel is incompatible and discarded. The result of this process is the compatible pixel list

C(R) = { (x, y) : |ẑ(x, y) − z(x, y)| ≤ w }.

The threshold values are chosen experimentally.
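The compatibility test can be sketched as follows; the callable surface model, the argument names, and the tolerance rule are illustrative, since the paper's exact constants are chosen experimentally:

```python
import numpy as np

def compatible_pixels(region_pts, fit, Z, tol):
    """Collect pixels compatible with a fitted surface: a pixel joins
    the region when |fit(x, y) - Z[y, x]| <= tol. `fit` is any
    callable surface model (e.g. a fitted biquadratic).
    """
    keep = []
    for (y, x) in region_pts:
        # compare fitted depth against the digital surface value
        if abs(fit(x, y) - Z[y, x]) <= tol:
            keep.append((y, x))
    return keep
```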

If we threshold the mean and Gaussian curvatures with appropriate values, only three different surface types remain among all the cells. In our experiment, the thresholds for the mean and Gaussian curvatures are chosen separately. The function sgn_ε in (6.1) is defined as

sgn_ε(x) = +1 if x > ε,  0 if |x| ≤ ε,  −1 if x < −ε.

The three kinds of surface type are flat, pit, and valley, respectively, as shown in Figure 7.

Figure 7: Three surface-type components.

Figure 8 shows the process of extracting seed regions through an erosion (contraction) operation. In order to implement the surface-fitting segmentation algorithm, the seed regions must first be obtained by erosion. After several iterations, the numbers of pixels in the remaining seed regions are 32, 34, 45, and 6 for the four regions, respectively.

Figure 8: Contraction process to extract seed region of red blood cell.

In Figure 9, the cell is segmented into three isolated parts perfectly, which are obtained through fitting based on surface type.

Figure 9: The segmentation result using surface fitting method.
6.3. Evaluation

In order to evaluate the proposed combined algorithm, we used a dataset containing 800 SEM images, each with a resolution of 1024 × 768 pixels. For the evaluation, we ran the algorithm on 100 randomly selected images.

We divided the cells into different categories according to their distributions of surface types. Table 4 reports the segmentation accuracy for each category.

Table 4: Accuracy of segmentation.

7. Conclusion and Future Work

This paper has shown how to reconstruct the 3D shape of red blood cells from gray-tone scanning electron microscope images using the shape-from-shading technique combined with linear approximation. The recovered cell surface shape is given as a height field. Our algorithm can be easily adapted to various reflectance models. In Figure 7, the surface-type label image is given with cell numbers added manually; mainly three types of surface remain after thresholding. The distribution of the counts of each surface type in every cell provides useful information for correct classification and will be used as training input. In future work, we aim to construct a classifier based on a cascaded SVM architecture to recognize whether a red blood cell is normal or not.


This work is sponsored by the National Key Technology R&D Program under Grant no. 2012BAI06B01, Science and Technology Foundations of Chongqing Municipal Education Commission under Grant no. KJ091216, and also by Excellent Science and Technology Program for Overseas Studying Talents of Chongqing Municipal Human Resources and Social Security Bureau under Grant no. 09958023.


  1. R. F. Egerton, Physical Principles of Electron Microscopy, Springer, 2005.
  2. J. C. Russ, The Image Processing Handbook, CRC Press, Boca Raton, Fla, USA, 2007.
  3. C. Di Ruberto, A. Dempster, S. Khan, and B. Jarra, “Analysis of infected blood cell images using morphological operators,” Image and Vision Computing, vol. 20, no. 2, pp. 133–146, 2002.
  4. S. K. Kumarasamy, S. H. Ong, and K. S. W. Tan, “Robust contour reconstruction of red blood cells and parasites in the automated identification of the stages of malarial infection,” Machine Vision and Applications, vol. 22, no. 3, pp. 461–469, 2011.
  5. S. Mandal, A. Kumar, J. Chatterjee, M. Manjunatha, and A. K. Ray, “Segmentation of blood smear images using normalized cuts for detection of malarial parasites,” in Proceedings of the Annual IEEE India Conference: Green Energy, Computing and Communication (INDICON '10), pp. 1–4, Kolkata, India, 2010.
  6. C. Di Ruberto, A. Dempster, S. Khan, and B. Jarra, “Segmentation of blood images using morphological operators,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), pp. 397–400, IEEE, Barcelona, Spain, 2000.
  7. Z. Liao, S. Hu, and W. Chen, “Determining neighborhoods of image pixels automatically for adaptive image denoising using nonlinear time series analysis,” Mathematical Problems in Engineering, vol. 2010, Article ID 914564, 14 pages, 2010.
  8. Z. Liao, S. Hu, M. Li, and W. Chen, “Noise estimation for single-slice sinogram of low-dose X-ray computed tomography using homogenous patch,” Mathematical Problems in Engineering, vol. 2012, Article ID 696212, 16 pages, 2012.
  9. S. Hu, Z. Liao, D. Sun, and W. Chen, “A numerical method for preserving curve edges in nonlinear anisotropic smoothing,” Mathematical Problems in Engineering, vol. 2011, Article ID 186507, 14 pages, 2011.
  10. J. Vromen and B. McCane, “Red blood cell segmentation using guided contour tracing,” in Proceedings of the 18th Annual Colloquium of the Spatial Information Research Centre (SIRC '06), Dunedin, New Zealand, November 2006.
  11. S. Bakshi and Y. Yang, “Shape from shading for non-Lambertian surfaces,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '94), vol. 94, pp. 130–134, Austin, Tex, USA, 1994.
  12. K. M. Lee and C. C. J. Kuo, “Shape from shading with a generalized reflectance map model,” Computer Vision and Image Understanding, vol. 67, no. 2, pp. 143–160, 1997.
  13. H. Ragheb and E. Hancock, “A probabilistic framework for specular shape-from-shading,” SIAM Journal of Numerical Analysis, vol. 29, no. 3, pp. 867–884, 1992.
  14. E. Prados and O. Faugeras, “Shape from shading,” in Handbook of Mathematical Models in Computer Vision, pp. 375–388, Springer, 2006.
  15. B. K. P. Horn and M. J. Brooks, “The variational approach to shape from shading,” Computer Vision, Graphics and Image Processing, vol. 33, no. 2, pp. 174–208, 1986.
  16. R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape from shading: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 8, pp. 690–706, 1999.
  17. E. Prados, F. Camilli, and O. Faugeras, “A unifying and rigorous shape from shading method adapted to realistic data and applications,” Journal of Mathematical Imaging and Vision, vol. 25, no. 3, pp. 307–328, 2006.
  18. A. Tankus, N. Sochen, and Y. Yeshurun, “Perspective shape-from-shading by fast marching,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. I43–I49, July 2004.
  19. A. Tankus, N. Sochen, and Y. Yeshurun, “A new perspective on shape-from-shading,” in Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV '03), vol. 2, pp. 862–869, October 2003.
  20. A. Tankus, N. Sochen, and Y. Yeshurun, “Reconstruction of medical images by perspective shape-from-shading,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 778–781, August 2004.
  21. K. Deguchi and T. Okatani, “Shape reconstruction from an endoscope image by shape-from-shading technique for a point light source at the projection center,” in Proceedings of the Workshop on Mathematical Methods in Biomedical Image Analysis, pp. 290–298, IEEE, San Francisco, Calif, USA, June 1996.
  22. A. G. Jones and C. J. Taylor, “Robust shape from shading,” Image and Vision Computing, vol. 12, no. 7, pp. 411–421, 1994.
  23. P.-S. Tsai and M. Shah, “Shape from shading using linear approximation,” Image and Vision Computing, vol. 12, no. 8, pp. 487–498, 1994.
  24. P. J. Besl and R. C. Jain, Surfaces in Range Image Understanding, Springer, New York, NY, USA, 1988.
  25. P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 167–192, 1988.