Journal of Applied Mathematics
Volume 2014 (2014), Article ID 614613, 11 pages
http://dx.doi.org/10.1155/2014/614613
Research Article

Unsupervised Texture Segmentation Using Active Contour Model and Oscillating Information

1College of Information Engineering, Qingdao University, Qingdao 266071, China
2The Affiliated Hospital of Medical College, Qingdao University, Qingdao 266003, China

Received 20 March 2014; Revised 28 May 2014; Accepted 6 June 2014; Published 26 June 2014

Academic Editor: Peter G. L. Leach

Copyright © 2014 Guodong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Textures often occur in real-world images and may cause considerable difficulties in image segmentation. In order to segment texture images, we propose a new segmentation model that combines an image decomposition model with an active contour model. The former is capable of separating the structural and oscillating components of a texture image, and the latter provides a smooth segmentation contour. Specifically, we replace the data term of the piecewise constant/smooth approximation in the CCV (convex Chan-Vese) model with that of the VO (Vese-Osher) image decomposition model. Therefore, our proposed model can estimate both the structural and oscillating components of texture images and segment the textures simultaneously. In addition, we design a fast Split-Bregman algorithm for the proposed model. Finally, the performance of our method is demonstrated by segmenting several synthetic and real texture images.

1. Introduction

Unsupervised texture segmentation is a popular topic in image processing and an important technique for image analysis and understanding. Texture images are difficult to segment because the internal edges of a texture can look very similar to the object boundary, and the strong contrast of the texture often leads to wrong segmentation results.

Active contour models provide a very good framework for variational image segmentation. Some active contour models use the gradient information of the image to detect edges, but this may lead to unclear boundaries between two textures; for instance, several active contour models based on an edge stopping function are described in [1–9]. However, edge-based segmentation methods are limited in many applications where the objects have no clear edges, such as medical images. To deal with this problem, many features such as the mean value, variance, and probability density function (pdf) have been incorporated into segmentation models. The Chan-Vese model [10] is a variational image segmentation model based on estimating the different means of different regions. Other statistical features such as the variance and a pdf estimator are used to carry out the segmentation task in [11]. Zhu in [12] and Paragios et al. in [13, 14] used mixtures of Gaussians to approximate the pdf. Herbulot et al. in [15–17] carried out the segmentation task by updating the pdf of the object and the background at every iteration. The aforementioned models are not suitable for general texture segmentation because they rely on specific features and prior estimation models that do not fit textures.

Most active contour models for texture segmentation contain two steps. First, different features of the different textures are analyzed and modeled. Second, the modeled features are embedded into a general active contour framework. Statistical models are often used under the assumption that the statistical features of each texture are stationary [18, 19]. These methods can only deal with regular textures and often fail on real textures; in many cases, the original image must first be transformed before such models can be applied. Doretto et al. [20] used a Gauss-Markov model to capture the relationships among pixels in different regions. They also proposed that nonparametric statistics and higher-order statistical models can be used for feature-space classification.

Different filters are also used to assist texture segmentation. The commonly used filters are Gabor and wavelet filters, which decompose the image into a set of different subbands [6, 21–23]. The responses of Gabor or wavelet filters differ between textures at different scales and orientations, and summing the filter responses can synthesize a new image without texture, which is the basic idea of texture segmentation based on filter theory. In the filtering process, the edges are also blurred, so this type of active contour model must rely on edge detection methods, and the texture segmentation results therefore also depend on the quality of the edge detection. Moreover, filter-based texture segmentation methods are usually very time consuming because many auxiliary images are generated by different filters at different scales and orientations.

Some sophisticated feature descriptors are also used in texture segmentation. Bigun et al. in [24] dealt with texture segmentation by making use of structure tensors [25], which form a matrix of partial derivatives smoothed by a Gaussian kernel. The structure tensor contains information on both the scalar value and the texture orientation, which are good properties for texture discrimination.

Recently, the LBP (local binary pattern) method, an excellent operator for texture analysis, has been used in active contour models for texture segmentation [26–28]. LBP features achieve higher classification accuracy with a lower computational burden than Gabor and wavelet features [29–31]. LBP-guided active contour models [26–28] use regional information of LBP distributions, which is estimated by means of log-likelihood statistics.

All the segmentation methods mentioned above can only find local minimizers of the segmentation problem, which means that the quality of the segmentation results depends on the choice of the initial condition. To solve the problem of local minimization, the authors of [32, 33] reformulated the Chan-Vese model as a convex one, called the CCV (convex Chan-Vese) model, which leads to a global minimizer. Fast methods such as the dual method and splitting techniques have also been designed to accelerate the segmentation process, for example, the unsupervised segmentation method based on the Kullback-Leibler distance and nonparametric pdf estimation [34]. The Split-Bregman algorithm [35, 36], originally proposed for solving L1-regularized problems, has also been introduced into convexified active contour models and is faster than [32]. The Split-Bregman algorithm, which is equivalent to the augmented Lagrangian method [37, 38], is superior to the Graph Cut method in accuracy and efficiency. In this paper, we also use the Split-Bregman algorithm to accelerate the minimization of our proposed model.

All the methods mentioned above for texture segmentation mainly rely on statistical or filter theory. In this paper, we propose a new method for texture segmentation that needs no additional filtering or statistical steps; this is the main difference between our model and many other texture segmentation models. In our model, we only use the structural and oscillating components of the image, which can be estimated via a variational image decomposition model [39]. The texture is treated as the oscillating component and the structure is treated as the piecewise constant/smooth component, and these two features are used to distinguish the different regions in textural segmentation. The active contour model proposed in this paper does not use the gradient to detect boundaries because the variational decomposition model already incorporates the edge information.

The main contributions of the paper are as follows. First, we propose a new active contour model coupling image decomposition; in other words, our textural segmentation method uses the information of the oscillating and structural components. Because the active contour model and the image decomposition method are both variational, they can be combined naturally. Second, for ease of implementation, the fast and simple Split-Bregman method is applied to solve the model.

The outline of the paper is as follows. The next section introduces the VO model for image decomposition. In Section 3, we propose our new active contour model coupling image decomposition and design the corresponding Split-Bregman algorithms. Section 4 demonstrates the effectiveness of the proposed model for texture segmentation on a variety of synthetic and real textured images. The last section concludes the paper.

2. Image Decomposition and VO Model

Many papers [39–44] are devoted to variational models that decompose an image f into a structural component u and a textural component v; that is, f = u + v. The component u is the well-structured part, which is the main part of the image and carries its main geometric features. The component v contains the oscillating patterns (both textures and noise). Meyer [40] established the oscillation-function modeling theory of textured images based on the ROF model [45], using the space of oscillating functions (the space G) as the function space of the textured component. As Meyer mentioned, the space G is essentially the dual of the space of functions of bounded variation (BV) [41]; BMO [42] denotes the space of functions of bounded mean oscillation. However, Meyer did not give a practical numerical method in [40]. Le and Vese [42] described the oscillating part of the image in div(BMO), extending the space G. Aujol et al. [43] defined an energy functional on G by building on the ROF model and Meyer's theory. Vese and Osher [39] proposed the VO model, which approximates Meyer's theoretical model; they gave an approximation to the G-norm, the corresponding Euler-Lagrange equations, and numerical methods, but the computation is slow. Osher et al. (OSV) [44] extended the VO model and presented a variational model for image decomposition based on the total variation and the H^{-1} norm. The authors showed that this new model is simpler than the VO model; however, a fourth-order nonlinear PDE arises from its Euler-Lagrange equations, whose finite-difference scheme is complex and computationally inefficient. In this paper, we couple the VO model with our active contour model because the VO model has the ability to decompose large textures.

In [40], the Banach space G proposed by Meyer is defined as

G = { v : v = div(g), g = (g_1, g_2), g_1, g_2 \in L^\infty(\Omega) },        (1)

induced by the norm

\| v \|_G = \inf \{ \| \sqrt{g_1^2 + g_2^2} \|_{L^\infty(\Omega)} : v = div(g) \}.        (2)

Here, \Omega \subset R^2 is an open and bounded domain. Given an image function f defined on \Omega, Meyer's decomposition model is as follows:

\inf_u \{ E(u) = \int_\Omega |\nabla u| dx + \lambda \| v \|_G : f = u + v \}.        (3)

In this model, u is the structural component of the image and v is the oscillating component constituted by texture or noise information; in practice, however, model (3) is difficult to implement because of the nature of the G-norm. Vese and Osher [39] were the first to overcome this difficulty by proposing an approximation to the G-norm in the following energy:

G_p(u, g_1, g_2) = \int_\Omega |\nabla u| dx + \lambda \int_\Omega |f - u - div(g)|^2 dx + \mu [ \int_\Omega ( \sqrt{g_1^2 + g_2^2} )^p dx ]^{1/p},        (4)

where \lambda, \mu > 0, p \geq 1, and g = (g_1, g_2); v = div(g) is the oscillating component of the image. In their paper, the authors use the value p = 1 and claim that there is no obvious difference for different values of p with 1 \leq p \leq 10. In this paper, we also set p = 1 for convenience.
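To make the roles of the two components concrete, the following short numpy sketch minimizes a smoothed variant of energy (4) by plain gradient descent (p = 2 instead of p = 1, epsilon-regularized norms, periodic boundaries via np.roll). It is an illustration only, not the numerical scheme of [39], and the parameter values lam, mu, dt, eps, and n_iter are assumptions.

import numpy as np

def dxf(a): return np.roll(a, -1, axis=1) - a   # forward difference in x
def dyf(a): return np.roll(a, -1, axis=0) - a   # forward difference in y
def dxb(a): return a - np.roll(a, 1, axis=1)    # backward difference in x
def dyb(a): return a - np.roll(a, 1, axis=0)    # backward difference in y

def vo_decompose(f, lam=0.1, mu=0.01, dt=0.01, eps=1e-2, n_iter=500):
    # Gradient descent on a smoothed VO-type energy; returns the cartoon
    # part u and the oscillating part v = div(g). Illustrative values only.
    u = f.copy()
    g1 = np.zeros_like(f)
    g2 = np.zeros_like(f)
    for _ in range(n_iter):
        r = f - u - (dxb(g1) + dyb(g2))          # residual of f = u + div(g)
        ux, uy = dxf(u), dyf(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        curv = dxb(ux / mag) + dyb(uy / mag)     # div(grad u / |grad u|)
        u = u + dt * (curv + 2.0 * lam * r)      # descent step for u
        gmag = np.sqrt(g1 ** 2 + g2 ** 2 + eps)
        g1 = g1 - dt * (2.0 * lam * dxf(r) + mu * g1 / gmag)
        g2 = g2 - dt * (2.0 * lam * dyf(r) + mu * g2 / gmag)
    return u, dxb(g1) + dyb(g2)

For a texture image f normalized to [0, 1] and stored as a float array, vo_decompose(f) returns a cartoon image and an oscillating residual, which is the kind of two-channel information the segmentation model below operates on.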

3. Active Contour Model Coupling Image Decomposition

The oscillating and structural components decomposed from the original textural image are important information for image analysis; they are the main features distinguishing different textural parts. The VO model contains edge and oscillating information, so we incorporate it into an active contour model for textural segmentation, obtaining the new model (5) that couples image decomposition with the active contour model. In (5), \Omega is the image domain, the remaining coefficients are positive parameters, f is the original texture image, the structural parts and the oscillating components of the different classes are unknowns, and \phi plays the role of the level set function. The oscillating components suffice to discriminate the textures. This is a global minimization problem because the constraint set \phi \in [0, 1] is convex. Chan et al. [33] transformed the original active contour model into a convex minimization problem by relaxing \phi \in {0, 1} to \phi \in [0, 1] and showed that thresholding the minimizer at almost any level in (0, 1) yields a characteristic function that is a global minimizer. We mainly use the form of (5) because the texture in different areas is not always the same; it is a natural extension of the piecewise constant CV model [10] and the piecewise smooth VC model [46].
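Because the display of (5) is referenced only in the text, the following LaTeX fragment records one plausible two-phase form of such a coupling for the reader's orientation: the CCV labeling term plus one VO-type data term per class, weighted by the relaxed label. The symbols \nu, \lambda_i, \mu_i and the exact grouping of terms are assumptions and do not reproduce the authors' exact formulation.

% Hypothetical sketch, not the exact model (5) of the paper.
\min_{\phi\in[0,1],\,u_i,\,\mathbf{g}_i}\;
\nu\int_{\Omega}|\nabla\phi|\,dx
+\sum_{i=1}^{2}\left(
\int_{\Omega}|\nabla u_i|\,dx
+\lambda_i\int_{\Omega}\bigl(f-u_i-\operatorname{div}\mathbf{g}_i\bigr)^{2}\,r_i(\phi)\,dx
+\mu_i\int_{\Omega}\sqrt{g_{i,1}^{2}+g_{i,2}^{2}}\,dx\right),
\qquad r_1(\phi)=\phi,\; r_2(\phi)=1-\phi.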

Equation (5) can be divided into the following three subproblems of minimization.

Holding the other groups of unknowns fixed, minimizing (5) with respect to each group of variables in turn yields the three energy functions (6), (7), and (8); in (8), the remaining region term can be written out explicitly, which gives (9). In what follows, we will solve (6), (7), and (8) using the Split-Bregman method.

The Split-Bregman method [37] is used to solve (6) by introducing an auxiliary vector variable and a Bregman iterative parameter and transforming (6) into the equivalent minimization problem (10)-(11); there, the auxiliary variable takes the place of the gradient appearing in the total variation term.

Alternating optimization of (11) results in the subproblems (12). The minimization problem for the auxiliary variable can be solved explicitly using the generalized shrinkage formula [36], as in (13), and the remaining unknowns of (6) are solved separately, as in (14); the procedure for solving (7) is the same as for solving (6).
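For reference, the generalized (isotropic) shrinkage step that appears in Split-Bregman schemes of this kind [36] can be written in a few lines of numpy; the names d, b, and the threshold gamma below are illustrative assumptions rather than the exact quantities of (13).

import numpy as np

def shrink(x, y, gamma):
    # isotropic soft-thresholding of a vector field (x, y):
    # (x, y) <- (x, y) / |(x, y)| * max(|(x, y)| - gamma, 0)
    mag = np.sqrt(x ** 2 + y ** 2)
    scale = np.maximum(mag - gamma, 0.0) / np.maximum(mag, 1e-12)
    return scale * x, scale * y

# typical use inside the iteration: d1, d2 = shrink(ux + b1, uy + b2, gamma)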

Equation (8) has the same form as (6), so it can be solved using the same framework (see (10)–(14)). In what follows, we will solve (8) using the Split-Bregman method. Equation (8) can be cast to the minimization problem (16), and alternating optimization of (16) results in the following procedures.

First, fix the auxiliary variable and the Bregman parameter to solve for the labeling function. By the variational method, the Euler-Lagrange equation of (17) is (18), and using the gradient descent method we obtain the update (19), in which the time step is kept fixed throughout the paper. Then, we construct the feasible labeling function by projecting the update onto [0, 1]; that is, (20). Next, fix the labeling function to solve for the auxiliary variable: the Euler-Lagrange equation of (21) is (22), and the auxiliary variable is again obtained using the generalized shrinkage formula [36], as in (23). The initialization is also important; the auxiliary variables, the Bregman parameters, and the model parameters are all assigned fixed initial values before the iteration starts.
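The gradient-descent and projection steps (19) and (20), together with the final thresholding, amount to a few numpy operations. In the sketch below, dE_dphi stands for the discretized Euler-Lagrange gradient of (17), and the helper names relax_phi and threshold as well as the values dt = 0.1 and alpha = 0.5 are assumptions for illustration.

import numpy as np

def relax_phi(phi, dE_dphi, dt=0.1):
    # one gradient-descent step followed by projection onto [0, 1], cf. (19)-(20)
    return np.clip(phi - dt * dE_dphi, 0.0, 1.0)

def threshold(phi, alpha=0.5):
    # final segmentation: texture region where the relaxed label exceeds alpha
    return phi > alpha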

At the end of the computation, we threshold the relaxed labeling function to obtain the final partition; thus, the texture and the background can be separated. Because the constraint set [0, 1] is convex, our new model is also globally convex; that is to say, the position of the initial contour need not be chosen carefully.

4. Numerical Experiments

To verify the effectiveness of the proposed method, we test it on a variety of synthetic and real textured images. Figure 1 shows the simplest synthetic textured image, in which the texture is the same over the whole image. Figure 1(a) is the original texture image; Figures 1(b) and 1(c) are the structure and texture parts obtained by the VO model. Figure 1 demonstrates that the initial textured image is easy to segment once the texture is removed. Note that our model does not perform segmentation after decomposition; rather, the two processes are coupled together.

Figure 1: Image for segmentation: (a) original image, (b) the structure part of the image using VO model, (c) the texture part of the image, and (d) the segmentation result.

Figure 2 shows an image with a “chirp-like” brick-wall background and a Brodatz texture object, where the textures in different areas are different. The background texture is not uniform along its gradient direction, and the object carries another texture mode. The algorithm gives a good result, although in the upper-right and in several other parts the contour is slightly concave. These two synthetic images demonstrate the effectiveness of our proposed segmentation method. To further verify the effectiveness of our method, we also use real-world images to compare segmentation performance.

Figure 2: An image of a “chirp-like” brick-wall background and a Brodatz texture object. (a) Original image and (b) the segmentation result.

The next two examples are zebra images that are often used to test texture segmentation. The texture is very typical: the stripes are small in some places and large in others, and their orientation varies, which makes these images a demanding test for any segmentation algorithm. We use Figure 3 to show how the contour evolves from the start to the end of the process. Figure 3(a) is the original image; Figures 3(b), 3(c), 3(d), and 3(e) are the intermediate results at steps 20, 40, 60, and 80; and Figure 3(f) is the final result. Figure 3(g) is the texture part and Figure 3(h) is the structural part of the zebra. From Figure 3(g), we can see that the texture part is a natural feature of the image, while the structural part in Figure 3(h) guides the contour toward the correct region. Figure 4 is another zebra image, different from Figure 3: in some places the gap between the stripes is very large. The algorithm is also successful on this image. In order to further demonstrate the superiority of the proposed technique, a number of comparison experiments are shown in Figure 5 using images from the well-known Berkeley Segmentation Dataset (BSDS) [47]. In Figure 5, we compare our method with the state-of-the-art methods in [22, 34]; the comparison demonstrates that our method achieves the best accuracy on these examples.

Figure 3: Image for segmentation: (a) original zebra image; (b) the intermediate result of step 20; (c) the intermediate result of step 40; (d) the intermediate result of step 60; (e) the intermediate result of step 80; (f) the final segmentation result; (g) the textural part of the zebra; (h) the structural part of zebra.
Figure 4: Image for segmentation: (a) original zebra image; (b) the segmentation result; (c) the textural part of the image; (d) the structural part of image.
Figure 5: Comparison results between our method and state-of-the-art methods in [22, 34]: (a) shows the original natural textural images; (b) shows the results using methods in [34]; (c) shows the results using methods in [22]; (d) shows the results of our method.

In Figure 6, we also give quantitative evaluations and comparisons with state-of-the-art methods on textural images. The compared methods are the JSEG algorithm [48], a standard color-texture segmentation benchmark, NCut [49], the CTM algorithm [50], and the MAP-ML estimation algorithm [51]. The quantitative comparison uses three segmentation performance measures: the F-measure [52], the probabilistic Rand index (PRI) [53], and the variation of information (VoI) [54]. These three measures are often used in performance comparisons of image segmentation. The values of the F-measure and PRI fall in [0, 1], and larger values correspond to better results; the value of VoI lies in [0, +∞), and smaller values correspond to better results. From the results we can see that our method achieves the best performance in most cases. The methods we compare against are all based on color and texture information, whereas our method does not use color information; we only use the oscillating and structural information of the images.
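For readers who wish to reproduce the quantitative comparison, the two region-based measures can be computed from a pair of integer label maps as sketched below. This is a simplified version in which the PRI is evaluated against a single ground truth (so it reduces to the Rand index), the VoI uses base-2 logarithms, and the boundary-based F-measure [52] is not reproduced; all function names are illustrative.

import numpy as np

def _contingency(a, b):
    # joint histogram of two non-negative integer label maps of equal shape
    a, b = a.ravel(), b.ravel()
    table = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(table, (a, b), 1.0)
    return table

def variation_of_information(a, b):
    # VoI = H(A) + H(B) - 2 I(A; B), from the joint label distribution
    p = _contingency(a, b) / a.size
    px, py = p.sum(axis=1), p.sum(axis=0)
    def h(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    mi = h(px) + h(py) - h(p)
    return h(px) + h(py) - 2.0 * mi

def rand_index(a, b):
    # fraction of pixel pairs on which the two partitions agree
    n = a.size
    t = _contingency(a, b)
    disagree = 0.5 * (np.sum(t.sum(axis=1) ** 2) + np.sum(t.sum(axis=0) ** 2)) - np.sum(t ** 2)
    return 1.0 - disagree / (n * (n - 1) / 2.0)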

Figure 6: Segmentation results of some texture images with PRI, VoI, and F-measure values presented in parentheses: (a) is the results by the JSEG method, (b) is the results by the NCut method, (c) is the results by the MAP-ML method, (d) is the results by the CTM method, and (e) is the results by our proposed method.

To evaluate the speedup obtained by adopting the Split-Bregman algorithm, we report the time consumption for Figure 6 in Table 1. All experiments are performed using Matlab 2010b on a Windows 7 platform with an Intel CPU at 3.2 GHz and 16 GB of memory. In Table 1, "first", "second", "third", "fourth", and "fifth" correspond to the image columns of Figure 6. In every iteration, the Split-Bregman method is more efficient than the traditional gradient descent method for energy minimization because it uses a simple soft-thresholding step. The total time consumed by the Split-Bregman method is much less than that of the traditional gradient descent method because the Split-Bregman method converges much more quickly.

Table 1: Time consumption comparison of the Split-Bregman method with traditional gradient descent method.

Because noise also belongs to the oscillating part, the image decomposition can separate it as well [39, 55]. Thus, our model can also deal with images corrupted by noise of different levels. Figure 7 shows an example tested on synthetic images corrupted by Gaussian noise of different levels. In Figure 7(a), there are two areas corrupted by different noise levels; in Figure 7(c), there are four parts with four different noise levels. These images cannot be segmented by the CV model [10]. In [13], the authors use the covariance in addition to the mean of the gray values to separate regions corrupted by noise with the same mean but different covariances. Our model can separate them because noise can be regarded as part of the oscillating component. From the experiments, we can see that our model segments the differently noised areas into different classes.

fig7
Figure 7: Images with different noise level for segmentation: (a) original image, (b) the segmentation result, (c) original image, and (d) the segmentation result.

5. Conclusion

In this paper, we propose a new segmentation method for images containing texture. The new model does not require additional tools for texture modeling. To ease the implementation, we also give a Split-Bregman algorithm for the new model. Our method can also deal with images containing different kinds of noise. Qualitative and quantitative results show that the proposed method performs well. Future research will address coupling the active contour model with newly proposed decomposition models for texture segmentation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61305045, 61303079, and 61170106), the National “Twelfth Five-Year” Development Plan of Science and Technology (no. 2013BAI01B03), and the Qingdao Science and Technology Development Project (no. 13-1-4-190-jch).

References

  1. V. Caselles, F. Catté, T. Coll, and F. Dibos, “A geometric model for active contours in image processing,” Numerische Mathematik, vol. 66, no. 1, pp. 1–31, 1993.
  2. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
  3. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
  4. S. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi, “Gradient flows and geometric active contour models,” in Proceedings of the 5th International Conference on Computer Vision (ICCV '95), pp. 810–815, Cambridge, Mass, USA, June 1995.
  5. R. Malladi, J. A. Sethian, and B. C. Vemuri, “Topology-independent shape modeling scheme,” in Geometric Methods in Computer Vision II, Proceedings of SPIE, pp. 246–258, July 1993.
  6. N. Paragios and R. Deriche, “Geodesic active contours for supervised texture segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 699–706, June 1999.
  7. K. Siddiqi, Y. B. Lauzière, A. Tannenbaum, and S. W. Zucker, “Area and length minimizing flows for shape segmentation,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 433–443, 1998.
  8. C. Xu and J. L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 359–369, 1998.
  9. D. Mumford and J. Shah, “Optimal approximations by piecewise smooth functions and associated variational problems,” Communications on Pure and Applied Mathematics, vol. 42, no. 5, pp. 577–685, 1989.
  10. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
  11. Y. Chen, H. D. Tagare, S. Thiruvenkadam et al., “Using prior shapes in geometric active contours in a variational framework,” International Journal of Computer Vision, vol. 50, no. 3, pp. 315–328, 2002.
  12. S. C. Zhu, “Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 9, pp. 884–900, 1996.
  13. N. Paragios and R. Deriche, “Geodesic active regions: a new framework to deal with frame partition problems in computer vision,” Journal of Visual Communication and Image Representation, vol. 13, no. 1-2, pp. 249–268, 2002.
  14. M. Rousson and R. Deriche, “A variational framework for active and adaptative segmentation of vector valued images,” in Proceedings of the Workshop on Motion and Video Computing, pp. 56–61, 2002.
  15. A. Herbulot, S. Jehan-Besson, M. Barlaud, and G. Aubert, “Shape gradient for multi-modal image segmentation using mutual information,” in Proceedings of the International Conference on Image Processing (ICIP '04), pp. 2729–2732, October 2004.
  16. S. Jehan-Besson, M. Barlaud, and G. Aubert, “DREAM2S: deformable regions driven by an Eulerian accurate minimization method for image and video segmentation,” International Journal of Computer Vision, vol. 53, no. 1, pp. 45–70, 2003.
  17. G. Aubert, M. Barlaud, and O. Faugeras, “Image segmentation using active contours: calculus of variations or shape gradients?” SIAM Journal on Applied Mathematics, vol. 63, no. 6, pp. 2128–2154, 2003.
  18. P. C. Chen and T. Pavlidis, “Segmentation by texture using correlation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 5, no. 1, pp. 64–69, 1983.
  19. J. Mao and A. K. Jain, “Texture classification and segmentation using multiresolution simultaneous autoregressive models,” Pattern Recognition, vol. 25, no. 2, pp. 173–188, 1992.
  20. G. Doretto, D. Cremers, P. Favaro, and S. Soatto, “Dynamic texture segmentation,” in Proceedings of the International Conference on Computer Vision, pp. 1236–1242, October 2003.
  21. J. Aujol, G. Aubert, and L. Blanc-Féraud, “Wavelet-based level set evolution for classification of textured images,” IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1634–1641, 2003.
  22. C. Sagiv, N. A. Sochen, and Y. Y. Zeevi, “Integrated active contours for texture segmentation,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1633–1646, 2006.
  23. L. Li, L. Jin, X. Xu, and E. Song, “Unsupervised color-texture segmentation based on multiscale quaternion Gabor filters and splitting strategy,” Signal Processing, vol. 93, no. 9, pp. 2559–2572, 2013.
  24. J. Bigun, G. H. Granlund, and J. Wiklund, “Multidimensional orientation estimation with applications to texture analysis and optical flow,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 775–790, 1991.
  25. A. R. Rao and B. G. Schunck, “Computing oriented texture fields,” CVGIP: Graphical Models and Image Processing, vol. 53, no. 2, pp. 157–185, 1991.
  26. M. A. Savelonas, D. K. Iakovidis, D. Maroulis, and S. Karkanis, “An active contour model guided by LBP distributions,” in Proceedings of the 8th International Conference on Advanced Concepts for Intelligent Vision Systems, pp. 197–207, 2006.
  27. M. A. Savelonas, D. K. Iakovidis, and D. E. Maroulis, “An LBP-based active contour algorithm for unsupervised texture segmentation,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 279–282, August 2006.
  28. M. A. Savelonas, D. K. Iakovidis, and D. Maroulis, “LBP-guided active contours,” Pattern Recognition Letters, vol. 29, no. 9, pp. 1404–1415, 2008.
  29. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on feature distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
  30. P. Paclic, R. Duin, G. V. Kempen, and R. Kohlus, “Supervised segmentation of textures in backscatter images,” in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 490–493, 2002.
  31. D. Iakovidis, D. Maroulis, and S. Karkanis, “A comparative study of color-texture image features,” in Proceedings of the 12th International Workshop on Systems, Signals and Image Processing (IWSSIP '05), pp. 203–209, Chalkida, Greece, September 2005.
  32. X. Bresson, S. Esedoglu, P. Vandergheynst, J. Thiran, and S. Osher, “Fast global minimization of the active contour/snake model,” Journal of Mathematical Imaging and Vision, vol. 28, no. 2, pp. 151–167, 2007.
  33. T. F. Chan, S. Esedoglu, and M. Nikolova, “Algorithms for finding global minimizers of image segmentation and denoising models,” SIAM Journal on Applied Mathematics, vol. 66, no. 5, pp. 1632–1648, 2006.
  34. N. Houhou, J. Thiran, and X. Bresson, “Fast texture segmentation model based on the shape operator and active contour,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
  35. T. Goldstein, X. Bresson, and S. Osher, “Geometric applications of the split Bregman method: segmentation and surface reconstruction,” Journal of Scientific Computing, vol. 45, no. 1–3, pp. 272–293, 2010.
  36. J. Yang, W. Yin, Y. Zhang, and Y. Wang, “A fast algorithm for edge-preserving variational multichannel image restoration,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 569–592, 2009.
  37. S. Setzer, “Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage,” in Scale Space and Variational Methods in Computer Vision: Proceedings of the 2nd International Conference, SSVM 2009, Voss, Norway, June 1–5, 2009, vol. 5567 of Lecture Notes in Computer Science, pp. 464–476, 2009.
  38. X. Tai and C. Wu, “Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model,” in Proceedings of the International Conference on Scale Space Methods and Variational Methods in Computer Vision (SSVM '09), pp. 502–513, 2009.
  39. L. A. Vese and S. J. Osher, “Modeling textures with total variation minimization and oscillating patterns in image processing,” Journal of Scientific Computing, vol. 19, no. 1–3, pp. 553–572, 2003.
  40. Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, American Mathematical Society, Boston, Mass, USA, 2001.
  41. H. Koch and D. Tataru, “Well-posedness for the Navier-Stokes equations,” Advances in Mathematics, vol. 157, no. 1, pp. 22–35, 2001.
  42. T. M. Le and L. A. Vese, “Image decomposition using total variation and div(BMO),” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 390–423, 2005.
  43. J. Aujol, G. Aubert, L. Blanc-Féraud, and A. Chambolle, “Image decomposition into a bounded variation component and an oscillating component,” Journal of Mathematical Imaging and Vision, vol. 22, no. 1, pp. 71–88, 2005.
  44. S. Osher, A. Solé, and L. Vese, “Image decomposition and restoration using total variation minimization and the H^{-1} norm,” Multiscale Modeling & Simulation, vol. 1, no. 3, pp. 349–370, 2003.
  45. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
  46. L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” International Journal of Computer Vision, vol. 50, no. 3, pp. 271–293, 2002.
  47. D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision, pp. 416–423, July 2001.
  48. Y. Deng and B. S. Manjunath, “Unsupervised segmentation of color-texture regions in images and video,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800–810, 2001.
  49. J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
  50. A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
  51. S. Chen, L. Cao, Y. Wang, J. Liu, and X. Tang, “Image segmentation by MAP-ML estimations,” IEEE Transactions on Image Processing, vol. 19, no. 9, pp. 2254–2264, 2010.
  52. D. R. Martin, C. C. Fowlkes, and J. Malik, “Learning to detect natural image boundaries using local brightness, color, and texture cues,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 530–549, 2004.
  53. R. Unnikrishnan, C. Pantofaru, and M. Hebert, “Toward objective evaluation of image segmentation algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
  54. M. Meila, “Comparing clusterings: an axiomatic view,” in Proceedings of the 22nd International Conference on Machine Learning, pp. 577–584, Bonn, Germany, August 2005.
  55. G. Wang, Z. Pan, Z. Zhao, and X. Sun, “The split Bregman method of image decomposition model for ultrasound image denoising,” in Proceedings of the 3rd International Conference on BioMedical Engineering and Informatics (BMEI '10), pp. 644–647, October 2010.