Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 494761, 20 pages
http://dx.doi.org/10.1155/2012/494761
Research Article

A New and Fast Multiphase Image Segmentation Model for Color Images

Yunyun Yang and Boying Wu

Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China

Received 10 October 2011; Accepted 27 December 2011

Academic Editor: Pedro Ribeiro

Copyright © 2012 Yunyun Yang and Boying Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a new and fast multiphase image segmentation model for color images. We propose our model by incorporating the globally convex image segmentation method and the split Bregman method into the piecewise constant multiphase Vese-Chan model for color images. We have applied our model to many synthetic and real color images. Numerical results show that our model can segment color images with multiple regions and represent boundaries with complex topologies, including triple junctions. Comparison with the Vese-Chan model demonstrates the efficiency of our model. Moreover, our model does not require an a priori denoising step and is robust with respect to noise.

1. Introduction

Image segmentation [1–8] is an important technique for detecting objects and analyzing images in computer vision and image processing. It is concerned with partitioning a given image into different classes or regions that correspond to the different objects and the background in the image.

Mumford and Shah [9] formulated the image segmentation problem and proposed the famous Mumford-Shah model. For a special case of the Mumford-Shah model, Chan and Vese [1] proposed the famous Chan-Vese model without using the image gradient. Chan and Vese gave the two-phase level set formulation for gray images in [1]. Then, the authors extended their scalar Chan-Vese algorithm to the vector-valued case in [2]. There are many other image segmentation models for color images, and we mention the works in [10, 11].

Both the scalar Chan-Vese algorithm in [1] and the Chan-Vese model for vector-valued images in [2] are mainly designed for images with two phases. To deal with images with multiple regions, Vese and Chan extended their original two-phase model [1, 2] to a multiphase model by using a multiphase level set formulation in [3]. In their multiphase Vese-Chan model [3], multiple regions are represented by multiple level set functions. Vese and Chan described their model in two cases: the piecewise constant case and the piecewise smooth case. In this paper, we mainly focus on the piecewise constant multiphase Vese-Chan model.

In fact, Zhao et al. [12], Samson et al. [13], and Paragios and Deriche [14] had already proposed several multiphase image segmentation models for images with multiple regions before the multiphase Vese-Chan model. However, these models suffer from the inherent problems of vacuum and overlap. Compared with these models, the multiphase Vese-Chan model has several advantages. First, it automatically avoids the problems of vacuum and overlap by construction. Second, it needs fewer level set functions to represent the same number of phases in the piecewise constant case. Finally, it can represent boundaries with complex topologies, including triple junctions.

Recently, the split Bregman method has been applied to solve the image segmentation problem more efficiently. The efficiency of the split Bregman method for image segmentation has been demonstrated in [15–18].

In this paper, we propose a fast multiphase image segmentation model in a variational level set formulation for color images. Our model is an extension of our previous model for gray images proposed in [19]. Our model has been applied to many synthetic and real color images. Numerical results show that our model has the advantages of the original Vese-Chan model [3] for multiphase image segmentation, but our model is much more efficient. The robustness of our model to noise has also been demonstrated by the numerical results.

The outline of the paper is as follows. We review our improved multiphase image segmentation model for gray images in Section 2.1. In Section 2.2, we extend our model to work for color images. Then, we apply the split Bregman method to solve our proposed minimization problem more efficiently in Section 2.3. Section 3 gives the algorithm and numerical results of our model. We conclude this paper in Section 4.

2. Our Model

2.1. Our Model for Gray Images

In our previous work [19], we proposed an improved active contour model for multiphase image segmentation based on the piecewise constant multiphase Vese-Chan model [3], the globally convex image segmentation method [20], and the split Bregman method [15–17]. The model proposed in [19] is mainly for gray images. For simplicity, we only review the four-phase model in this section.

Let Ω be the image domain and I : Ω → R be a given gray-level image. Our proposed model is written in the level set formulation (2.1), in which φ1 and φ2 are two level set functions restricted to take values in [0, 1] and μ is a positive constant. The two regularization terms in (2.1) are weighted length terms built from a nonnegative edge detector function g [17, 18, 21]; g is defined in (2.3), where β is a parameter that determines the detail level of the segmentation. The two data fitting functions r1 and r2 in (2.1) are defined in (2.4), where c = (c11, c10, c01, c00) is the constant vector of averages of I in the four phases.

Given c, once the minimization problem (2.1) is solved, the image domain Ω can be partitioned into four regions by thresholding the level set functions as in (2.5), using a threshold α ∈ (0, 1) that is fixed throughout this paper. This is different from [3], where the zero level set is used to identify the boundary. We use the α-level set to identify the boundary because we have restricted φ1 and φ2 to [0, 1]. The average vector c can then be obtained by averaging I over each of the four regions.
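
As an illustration of the thresholding step in (2.5) and the subsequent update of the average vector c, a minimal sketch is given below. The variable names and the default threshold value alpha = 0.5 are our own assumptions for illustration; the text only states that the threshold lies in (0, 1).

```python
import numpy as np

def partition_and_averages(I, phi1, phi2, alpha=0.5):
    """Partition the image domain into four regions by thresholding phi1 and
    phi2 (cf. (2.5)) and compute the average of I over each region (the vector
    c).  The default alpha = 0.5 is an assumed value of the threshold."""
    m11 = (phi1 > alpha) & (phi2 > alpha)      # both level sets above the threshold
    m10 = (phi1 > alpha) & ~(phi2 > alpha)
    m01 = ~(phi1 > alpha) & (phi2 > alpha)
    m00 = ~(phi1 > alpha) & ~(phi2 > alpha)
    masks = (m11, m10, m01, m00)
    # Region averages; empty regions are guarded to avoid division by zero.
    c = np.array([I[m].mean() if m.any() else 0.0 for m in masks])
    return masks, c
```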

2.2. Our Model for Color Images

In this section, we extend the model of Section 2.1 from gray images to color images. Let I be a given color image and I1, I2, and I3 the three channels of I. For each channel, we define an average vector that collects the averages of that channel in the four phases.

The extension of our model to color images keeps the same level set formulation as (2.1); we refer to the resulting minimization problem as (2.7).

The difference between (2.7) and (2.1) lies in the definition of the fitting functions r1 and r2. In our model for color images (2.7), r1 and r2 are defined differently from (2.4): the single-channel fitting terms are replaced by terms that combine all three channels, as given in (2.8).
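
Since (2.8) is not reproduced here, the sketch below shows one plausible form of r1 and r2, obtained by summing the squared channel residuals in the spirit of the vector-valued Chan-Vese fidelity [2] and linearizing the four-phase fidelity with respect to one level set function while the other is held fixed. It reuses the partition_and_averages sketch from Section 2.1; the exact formula in (2.8) may differ.

```python
import numpy as np

def compute_fitting_terms(I, phi1, phi2, alpha=0.5):
    """A plausible reconstruction of the color fitting functions r1 and r2
    (cf. (2.8)); this is an assumption, not the paper's exact formula.
    e["kl"] sums the squared residuals of the three channels against the
    channel averages of region (k, l)."""
    masks, _ = partition_and_averages(I[..., 0], phi1, phi2, alpha)
    e = {}
    for key, mask in zip(("11", "10", "01", "00"), masks):
        c = [I[..., ch][mask].mean() if mask.any() else 0.0 for ch in range(I.shape[2])]
        e[key] = sum((I[..., ch].astype(float) - c[ch]) ** 2 for ch in range(I.shape[2]))
    # Linearization of the four-phase fidelity in phi1 (phi2 held fixed) and vice versa.
    r1 = (e["11"] - e["01"]) * phi2 + (e["10"] - e["00"]) * (1.0 - phi2)
    r2 = (e["11"] - e["10"]) * phi1 + (e["01"] - e["00"]) * (1.0 - phi1)
    return r1, r2
```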

Another difference is the computation of the edge detector function g. According to the definition of g in (2.3), we need to compute the gradient magnitude |∇I| when computing g.

When I is a gray image, |∇I| at each pixel is the usual gradient magnitude computed from the two partial derivatives of I.

When I is a color image, |∇I| is defined by combining the gradient magnitudes of the three channels.
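
To make the last two definitions concrete, here is a small sketch of how |∇I| and g could be computed. The specific form g = 1/(1 + β|∇I|²) and the channel-wise combination for color images follow common choices in the edge-detector literature [17, 21]; both are assumptions about the exact formulas in (2.3) and the color-gradient definition.

```python
import numpy as np

def edge_detector(I, beta=1.0):
    """Compute an edge detector of the assumed form g = 1 / (1 + beta*|grad I|^2).
    For a color image the squared gradient magnitudes of the three channels are
    summed; both choices are assumptions based on standard practice."""
    if I.ndim == 2:                              # gray image
        gy, gx = np.gradient(I.astype(float))
        grad_sq = gx ** 2 + gy ** 2
    else:                                        # color image: combine the channels
        grad_sq = np.zeros(I.shape[:2])
        for ch in range(I.shape[2]):
            gy, gx = np.gradient(I[..., ch].astype(float))
            grad_sq += gx ** 2 + gy ** 2
    return 1.0 / (1.0 + beta * grad_sq)
```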

Similarly to the case of gray images, once the minimization problem (2.7) is solved, the image domain Ω can also be partitioned into four regions by (2.5). Then, the average vector of each channel can be updated by averaging that channel over each of the four regions.

Let u be the fitting image of the given color image I and u1, u2, and u3 the three channels of u. Each channel of the fitting image is piecewise constant: over each of the four regions it takes the average value of the corresponding channel of I on that region.

Thus, the fitting image u is obtained by stacking the three fitted channels u1, u2, and u3.

We denote the four segmented phases of I as phase 1, phase 2, phase 3, and phase 4; each phase is obtained by restricting each of the three channels of I to the corresponding region.

In fact, u can also be obtained directly by summing, over the four regions, the constant channel averages multiplied by the corresponding region indicators.
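
The following sketch assembles the fitting image u channel by channel from the region masks and channel averages, reusing the partition_and_averages sketch above; the names are illustrative assumptions.

```python
import numpy as np

def fitting_image(I, phi1, phi2, alpha=0.5):
    """Build the piecewise constant fitting image u of a color image I: on each
    of the four regions, each channel of u takes the average value of the
    corresponding channel of I over that region."""
    masks, _ = partition_and_averages(I[..., 0], phi1, phi2, alpha)  # regions depend only on phi1, phi2
    u = np.zeros(I.shape, dtype=float)
    for ch in range(I.shape[2]):
        for mask in masks:
            if mask.any():
                u[mask, ch] = I[..., ch][mask].mean()
    return u
```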

Remark 2.1. The multiphase image segmentation model we propose differs from the piecewise constant multiphase Vese-Chan model in [3] for the following reasons. First, our energy functional in (2.1) or (2.7) is different from that in [3]. Second, we incorporate edge information into the energy functional through a nonnegative edge detector function, which makes boundaries easier to detect. Third, we apply the split Bregman method, and our model is much more efficient than the model in [3]. Last, we do not need to reinitialize the level set functions to signed distance functions of their zero-level curves as in [1–3]; we simply initialize each level set function as a binary step function that takes the value 1 inside a region and 0 outside. Since φ1 and φ2 are restricted to [0, 1], we do not need the Heaviside function or its smooth approximations as in [3].

2.3. Application of the Split Bregman Method to Update φ1 and φ2

To minimize the energy functional in (2.1) or (2.7) with respect to φ1 and φ2, one traditional approach is to apply the standard gradient descent method directly [1–3]. In this section, we apply the split Bregman method to solve the proposed minimization problem (2.7) more efficiently. First, we introduce two auxiliary variables d1 and d2 for the gradients of φ1 and φ2. Then, we add two quadratic penalty functions to weakly enforce the resulting equality constraints and obtain an unconstrained problem, in which λ is a positive penalty constant.

We then strictly enforce the constraints by applying Bregman iteration with two Bregman variables b1 and b2, which results in the optimization problem (2.17).

Keeping d1, b1, and φ2 fixed, the Euler-Lagrange equation of the optimization problem (2.17) with respect to φ1 is (2.18).

Note that, from the definition of r1 in (2.8), r1 is a function of φ2. When we minimize (2.17) with respect to φ1, we simply treat r1 as a constant. The numerical results in the next section show that this simplification does not affect the performance of our model. Similarly, when we minimize (2.17) with respect to φ2 for fixed d2, b2, and φ1, we also treat r2 as a constant and obtain the Euler-Lagrange equation (2.19).

For (2.18) and (2.19), the central difference is used for the Laplace operator and the backward difference for the divergence operator. The resulting numerical scheme updates φ1 and φ2 pointwise, with one scheme corresponding to (2.18) and one to (2.19).
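
Since the discrete scheme is not reproduced here, the sketch below follows the pointwise relaxation used for the analogous φ-subproblem in [17]; the coefficient 1/4, the clipping to [0, 1], and the names mu (μ) and lam (λ) are assumptions about the exact scheme. A vectorized Jacobi-style sweep is shown for brevity, whereas a Gauss-Seidel sweep visits pixels sequentially.

```python
import numpy as np

def update_phi(phi, r, dx, dy, bx, by, mu, lam, n_sweeps=1):
    """Relaxation sweeps for the phi-subproblem, approximately solving
    lam * Laplacian(phi) = mu * r + lam * div(d - b) and then clipping phi
    to [0, 1].  Periodic boundaries (np.roll) are used for simplicity."""
    for _ in range(n_sweeps):
        # div_term equals -div(d - b), using backward differences for the divergence.
        div_term = (np.roll(dx - bx, 1, axis=1) - (dx - bx)
                    + np.roll(dy - by, 1, axis=0) - (dy - by))
        neighbors = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
                     + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1))
        phi = 0.25 * (neighbors - (mu / lam) * r + div_term)
        phi = np.clip(phi, 0.0, 1.0)   # keep phi in [0, 1]
    return phi
```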

Finally, we minimize (2.17) with respect to d1 (respectively d2) for fixed φ1 and b1 (respectively φ2 and b2) and obtain a closed-form update given by a generalized shrinkage (soft-thresholding) formula.
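
A minimal sketch of this shrinkage step, together with the usual refresh of the Bregman variables, is given below; the g-weighted threshold g/λ and the forward-difference gradient are assumptions following the standard split Bregman updates in [15, 17].

```python
import numpy as np

def update_d_and_b(phi, bx, by, g, lam):
    """Closed-form update of the auxiliary variable d = (dx, dy) by generalized
    shrinkage with threshold g/lam, followed by the usual Bregman update
    b <- b + grad(phi) - d.  Forward differences with periodic boundaries are
    used for the gradient of phi."""
    gx = np.roll(phi, -1, axis=1) - phi
    gy = np.roll(phi, -1, axis=0) - phi
    sx, sy = gx + bx, gy + by
    norm = np.sqrt(sx ** 2 + sy ** 2)
    scale = np.maximum(norm - g / lam, 0.0) / np.maximum(norm, 1e-12)
    dx, dy = scale * sx, scale * sy
    bx, by = bx + gx - dx, by + gy - dy          # Bregman variable update
    return dx, dy, bx, by
```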

3. The Algorithm and Numerical Results

3.1. The Algorithm

The split Bregman algorithm for the proposed minimization problem (2.7), described in Section 2.3, is summarized in Algorithm 1.

Algorithm 1
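
Algorithm 1 itself is not reproduced in the text; the sketch below outlines one possible organization of the overall loop, reusing the helper sketches from Section 2 (partition_and_averages, compute_fitting_terms, edge_detector, update_phi, update_d_and_b). The seed rectangles, iteration counts, and stopping test are illustrative assumptions.

```python
import numpy as np

def segment_four_phase(I, mu, lam, beta, alpha=0.5, n_iters=200, tol=1e-4):
    """A possible outline of the overall algorithm: alternate the fitting-term
    update with one split Bregman step for each level set function, then
    threshold phi1 and phi2 to obtain the four phases."""
    h, w = I.shape[:2]
    # Binary step initialization: 1 inside an assumed rectangular seed, 0 outside.
    phi1 = np.zeros((h, w)); phi1[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    phi2 = np.zeros((h, w)); phi2[h // 3: 2 * h // 3, w // 3: 2 * w // 3] = 1.0
    dx1, dy1, bx1, by1 = (np.zeros((h, w)) for _ in range(4))
    dx2, dy2, bx2, by2 = (np.zeros((h, w)) for _ in range(4))
    g = edge_detector(I, beta)
    for _ in range(n_iters):
        phi1_old, phi2_old = phi1.copy(), phi2.copy()
        r1, r2 = compute_fitting_terms(I, phi1, phi2, alpha)
        phi1 = update_phi(phi1, r1, dx1, dy1, bx1, by1, mu, lam)
        dx1, dy1, bx1, by1 = update_d_and_b(phi1, bx1, by1, g, lam)
        phi2 = update_phi(phi2, r2, dx2, dy2, bx2, by2, mu, lam)
        dx2, dy2, bx2, by2 = update_d_and_b(phi2, bx2, by2, g, lam)
        if max(np.abs(phi1 - phi1_old).max(), np.abs(phi2 - phi2_old).max()) < tol:
            break
    return phi1 > alpha, phi2 > alpha
```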

3.2. Numerical Results

In this section, our model is applied to synthetic and real color images with multiple regions. The level set functions φ1 and φ2 are simply initialized as binary step functions that take the value 1 inside a region and 0 outside. The advantage of using binary step functions as the initial level set functions is that new contours can emerge easily and the curve evolution is significantly faster than evolution from signed distance functions. Fixed values of λ and μ are used for all images. Unless otherwise specified, one value of β is chosen for all synthetic color images and another for all real images. The size of each image is specified in the corresponding figure caption.

Figure 1 shows the application of our model to a synthetic color image with a triple junction. Most existing models [12, 13] need three level set functions to represent the triple junction; here, we need only two. The level sets of the final φ1 and φ2 are shown in (d) and (h), and they necessarily overlap along one segment of the triple junction.

Figure 1: Results for a synthetic image with our model. (a)–(c): The active contour evolving process from the initial contour to the final contour; (e)–(g): The corresponding fitting images at different iterations; (d) and (h): The level sets of the final φ1 and φ2; (i)–(l): The final four phases; size = 256 × 256.

We then apply our model to another synthetic image in Figures 2 and 3. In Figure 2, we show the results for the clean image with three different initial conditions. When we seed with small initial contours as in (a) and (b), our model works well: both the final contours and the final fitting images are correct. However, when we use two disjoint circles as the initial curves, our model is trapped in a local minimum and the pink object is missed, as shown in (f) and (i). We then add random noise with standard deviation 30.0 to the clean image. Figure 3 shows the results for the noisy version; our model works well even though the noise level is high. In the following experiments, the added noise is always random noise with standard deviation 30.0.

Figure 2: Results for a synthetic image with different initial conditions; (a)–(c): The original image with three different initial contours; (d)–(f): The corresponding final contours; (g)–(i): The corresponding final fitting images; size = 256 × 256.
Figure 3: Application of our model to the noisy version of the image from Figure 2. (a)–(c): The curve evolution process; (e)–(g): The corresponding fitting images; (d) and (h): The level sets of the final φ1 and φ2; (i)–(l): The final four phases.

The results for another synthetic color image, without and with noise, are shown in Figures 4 and 5. The results for the clean image with two different initial conditions are shown in Figure 4. When we use two disjoint initial curves as in (a), both the final contour and the final fitting image are good. When using two joint initial curves as in (g), we obtain the right final contour, shown in (h), but the final fitting image, shown in (i), is incorrect: the two bottom objects are assigned the same average value. The results for the noisy version are shown in Figure 5.

Figure 4: Results for a synthetic image with two different initial conditions. (a) and (g): The two different initial contours; (b) and (h): The corresponding final contours; (c) and (i): The corresponding final fitting images; (d) and (j): The level sets of the final φ1 and φ2 with the initial condition (a); (i)–(l): The final four phases with the initial condition (a); size = 240 × 110.
Figure 5: Application of our model to the noisy version of the image from Figure 4. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images.

Figures 6, 7, and 8 show the application of our model to two more synthetic images and their noisy versions. Both images contain three objects with different shapes that may be difficult to detect. Nevertheless, our model identifies the objects correctly in both the clean images and their noisy versions.

Figure 6: Results for a synthetic image without noise and with noise. (a)–(c) and (e)–(g): The initial contour, final contour, and final fitting image for the clean and noisy images; (d), (h), and (i)–(l): The level sets of the final φ1 and φ2 and the final four phases for the noisy version; size = 200 × 200.
Figure 7: Results for a clean synthetic image with our model. (a)–(c): The curve evolution process; (e)–(g): The corresponding fitting images; (d) and (h): The level sets of the final φ1 and φ2; (i)–(l): The final four phases; size = 256 × 256.
Figure 8: Application of our model to the noisy version of the image from Figure 7. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images.

As stated above, the standard deviation of the random noise added to the preceding synthetic images is 30.0. We now increase the noise level and show the results of our model in Figures 9 and 10. For the above five synthetic images, when the noise level is increased to 50.0, our model still gives the correct final contours and final fitting images, as can be clearly seen in Figure 9. Even when the noise is increased to the much higher level of 80.0, our model still detects the object boundaries of the synthetic images and gives good fitting images; the corresponding initial contours, final contours, and final fitting images are shown in Figure 10. For the noisy versions in Figures 9 and 10, we only need to adjust the parameter that controls the detail level of the segmentation so that the high-level noise is not detected. This demonstrates the robustness of our model to noise.

Figure 9: Results for the synthetic images corrupted by random noise with standard deviation 50.0. (a)–(e): The initial contours; (f)–(j): The final contours; (k)–(o): The final fitting images.
Figure 10: Results for the synthetic images corrupted by random noise with standard deviation 80.0. (a)–(d): The initial contours; (e)–(h): The final contours; (i)–(l): The final fitting images.

We then apply our model to three real color images; the results are shown in Figures 11, 12, 13, 14, and 15. We seed with small initial curves for all of these real images. Figures 11 and 12 show the results for an image of flowers, while Figures 13 and 14 give the results for a four-color image. We can observe that our model segments these color images well. In Figure 15, we apply our model to an image with contours without gradient (cognitive contours). From the final fitting image shown in (d), it can be seen that our model can also detect "contours without edges".

Figure 11: Application of our model to a real color image of flowers. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; size = 160 × 240.
Figure 12: Final four phases of the flowers image.
Figure 13: Application of our model to a four-color image. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; size = 185 × 136.
Figure 14: Final four phases of the four-color image.
Figure 15: Application of our model to a real color image. (a): The original image; (b): The initial contour; (c): The final contour; (d): The final fitting image; size = 160 × 240.

Figures 16 and 17 show the results of our model for two real color images from the Berkeley image set. In Figures 16 and 17, we give the curve evolution process in Row 1 and the corresponding fitting images in Row 2, while the final four phases are shown in Row 3. It can be observed that our model can handle these two real color images very well.

Figure 16: Results for a real color image with our model. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; (i)–(l): The final four phases; size = 160 × 240.
Figure 17: Results for a real color image with our model. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; (i)–(l): The final four phases; size = 160 × 240.

Our model can also be applied to real medical MRI images. We apply it to two real brain MRI images in Figures 18 and 19. As before, the curve evolution process, the corresponding fitting images, and the final four phases are shown in Row 1, Row 2, and Row 3, respectively. We can see that our model identifies the gray matter, the white matter, and other structures quite well.

Figure 18: Results for a brain MRI image with our model. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; (i)–(l): The final four phases; size = 120 × 117.
Figure 19: Results for a brain MRI image with our model. (a)–(d): The curve evolution process; (e)–(h): The corresponding fitting images; (i)–(l): The final four phases; size = 125 × 160.

To demonstrate the efficiency of our model, we compare the computation (CPU) time of the Vese-Chan model [3] and our model on the above five synthetic images in Tables 1 and 2. In Table 1, we report the CPU time for all the clean synthetic images used in this paper. For Table 2, we use the noisy versions of the other four synthetic images; the image from Figure 1 is excluded. From Tables 1 and 2, we can see that, by applying the split Bregman method, our model is much more efficient than the Vese-Chan model. The CPU times were recorded from experiments with Matlab codes run on an ACPI multiprocessor PC with an Intel(R) Core(TM)2 Quad CPU Q8200 at 2.33 GHz and 2 GB RAM, using Matlab R2010a on Windows XP.

Table 1: The CPU time (in seconds) of the Vese-Chan model and our model for the clean synthetic images from Figures 1, 2, 4, 6, and 7. The sizes of the images are 256 × 256, 256 × 256, 240 × 110, 200 × 200, and 256 × 256, respectively.
Table 2: The CPU time (in seconds) of the Vese-Chan model and our model for the noisy synthetic images from Figures 3, 5, 6, and 8. The sizes of the images are 256 × 256, 240 × 110, 200 × 200, and 256 × 256, respectively.

4. Conclusion

In this paper, we have proposed a new and fast multiphase image segmentation model for color images. Our model is an extension of the model for gray images proposed in [19]. We have tested our model on many synthetic and real color images. Numerical results show that our model has all the benefits of the Vese-Chan model [3], including robustness to noisy data and automatic detection of interior contours, while being much more efficient thanks to the split Bregman method. Our model also has limitations. It is a piecewise constant multiphase segmentation model and mainly targets homogeneous multiphase images; because it considers only the global information of the given image, it cannot handle images with intensity inhomogeneity. In future work, we will extend our model to inhomogeneous multiphase images by taking both local and global information into consideration.

Acknowledgment

This work was supported by the Natural Sciences Foundation of Heilongjiang Province (A200909).

References

1. T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
2. T. F. Chan, B. Y. Sandberg, and L. A. Vese, "Active contours without edges for vector-valued images," Journal of Visual Communication and Image Representation, vol. 11, no. 2, pp. 130–141, 2000.
3. L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, no. 3, pp. 271–293, 2002.
4. M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: active contour models," International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
5. V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," International Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
6. S. Osher and J. A. Sethian, "Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations," Journal of Computational Physics, vol. 79, no. 1, pp. 12–49, 1988.
7. R. Malladi, J. A. Sethian, and B. C. Vemuri, "Shape modeling with front propagation: a level set approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 2, pp. 158–175, 1995.
8. H.-C. Hsin, "Texture segmentation in the joint photographic expert group 2000 domain," IET Image Processing, vol. 5, no. 6, pp. 554–559, 2011.
9. D. Mumford and J. Shah, "Optimal approximations by piecewise smooth functions and associated variational problems," Communications on Pure and Applied Mathematics, vol. 42, no. 5, pp. 577–685, 1989.
10. G. Sapiro, "Color snakes," Computer Vision and Image Understanding, vol. 68, no. 2, pp. 247–253, 1997.
11. J. Shah, "Curve evolution and segmentation functionals: application to color images," in Proceedings of the IEEE International Conference on Image Processing (ICIP '96), vol. 1, pp. 461–464, Lausanne, Switzerland, 1996.
12. H.-K. Zhao, T. Chan, B. Merriman, and S. Osher, "A variational level set approach to multiphase motion," Journal of Computational Physics, vol. 127, no. 1, pp. 179–195, 1996.
13. C. Samson, L. Blanc-Féraud, G. Aubert, and J. Zerubia, "A variational model for image classification and restoration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 5, pp. 460–472, 2000.
14. N. Paragios and R. Deriche, "Geodesic active regions and level set methods for supervised texture segmentation," International Journal of Computer Vision, vol. 46, no. 3, pp. 223–247, 2002.
15. T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
16. W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for L1-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, 2008.
17. T. Goldstein, X. Bresson, and S. Osher, "Geometric applications of the split Bregman method: segmentation and surface reconstruction," Journal of Scientific Computing, vol. 45, no. 1–3, pp. 272–293, 2010.
18. Y. Yang, C. Li, C.-Y. Kao, and S. Osher, "Split Bregman method for minimization of region-scalable fitting energy for image segmentation," in Proceedings of the 6th International Symposium on Visual Computing, vol. 6454 of Lecture Notes in Computer Science, pp. 117–128, Springer, Berlin, Germany, 2010.
19. Y. Yang and B. Wu, "Improved active contour model for multiphase image segmentation based on the Vese-Chan model," submitted to Journal of Applied Mathematics.
20. T. F. Chan, S. Esedoglu, and M. Nikolova, "Algorithms for finding global minimizers of image segmentation and denoising models," SIAM Journal on Applied Mathematics, vol. 66, no. 5, pp. 1632–1648, 2006.
21. X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher, "Fast global minimization of the active contour/snake model," Journal of Mathematical Imaging and Vision, vol. 28, no. 2, pp. 151–167, 2007.