Abstract
The segmentation of weak boundaries is still a difficult problem: it is especially sensitive to noise, which often leads to segmentation failure. Building on previous work, we propose a new convergent variational model obtained by incorporating the boundary indicator function into the regularization norm. A novel strategy for images with weak boundaries is presented. The existence of a minimizer for our model is established, and the alternating direction method of multipliers (ADMM) is used to solve the model. Experiments show that our new method is robust in segmenting objects in a range of images with noise, low contrast, and directional texture.
1. Introduction
Image segmentation [1–6] is the process of separating objects of interest from each other or from the background in order to find object boundaries. It has become increasingly important in the last decade, owing to a fast-expanding range of applications in image analysis and computer vision. Moreover, it is a fundamental problem in computer vision, because recognition and reconstruction often rely on this information [7, 8]. Many factors affect segmentation results, such as noise, weak edges, and directional texture information, and these effects make segmentation a typical structurally ill-posed problem. Many approaches exist to deal with this problem, including histogram analysis, region growing, and edge detection [9, 10]. Although many segmentation methods already exist, they still lack universality. Over the last decade, researchers have constantly explored new segmentation methods to improve segmentation results, and variational methods and partial differential equation (PDE) based techniques have been introduced to image segmentation. In this paper, we present a classical variational treatment of these problems. Specifically, segmentation methods can be divided into three classes: threshold-based segmentation [11–13], region-based segmentation [8, 14], and edge-based segmentation [7, 15, 16]. Here, we mainly consider the region-based and edge-based methods, which are among the most common segmentation approaches.
The edge-based models such as the GAC model [15–18] mainly utilize gradient information to drive the contours toward the boundaries of the desired objects and thereby obtain the segmented image. Consequently, this class of models is very sensitive to noise and has difficulty detecting weak boundaries. That is to say, when there is strong noise in the image, the boundary between the target and the background may not be distinct enough, and the segmentation fails. One way to overcome this drawback for a noisy image is to add a smoothing step prior to segmentation, but doing so also smooths image edges. Moreover, the segmentation result generated by these models depends strongly on the initial contour placement, owing to the nonconvexity of the underlying model. The region-based models [8, 14] mainly incorporate region information so that the image within each region has uniform characteristics such as intensity and texture. These models are therefore more suitable for segmenting images with weak boundaries or noise, and they are also less sensitive to the location of the initial contour. The well-known Mumford–Shah (MS) variational model [7] can achieve both goals simultaneously by using a piecewise smooth representation of an image [19, 20] and separating different regions by a set of closed contours (curves in 2D and surfaces in 3D). However, due to the unknown set of boundaries and its nonconvexity, numerical treatment of this model is very hard. Under the assumption that the segmentation regions are piecewise constant, the Mumford–Shah model can be simplified to the Chan–Vese (CV) model [8, 21, 22]:
$$E(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \lambda_1 \int_{\mathrm{inside}(C)} |f(x) - c_1|^2\,dx + \lambda_2 \int_{\mathrm{outside}(C)} |f(x) - c_2|^2\,dx, \tag{1}$$
where $\mu$, $\lambda_1$, and $\lambda_2$ are positive parameters, $c_1$ and $c_2$ are the intensity averages of the original image $f$ inside and outside the contour $C$, respectively, $dx$ is the region element, and the length term regularizes the contour $C$. The original CV model (1) considers only two phases, the foreground and the background; it was later extended to the multiphase segmentation problem [23]. The CV model can generally produce satisfactory results for images with intensity homogeneity, and the level set method was proposed to solve problem (1) in [8]. However, due to the nonconvexity of problem (1), we can only obtain local minima, which may produce wrong levels of detail and scale and make the result sensitive to the placement of the initial contour. Hence, several global minimization active contour models have been proposed to avoid this problem when the intensity averages $c_1$ and $c_2$ are fixed. Chan et al. [14] proposed a convex relaxation of the CV model. Bresson et al. [24] further established theorems on the existence of global minimizers of the active contour/snake model. Wang et al. [25] proposed a novel global minimization hybrid active contour model. By designing a convex energy functional and a dual algorithm, Xu et al. [26] proposed a global and local active contour model.
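For reference, the edge indicator used by GAC-type models is close to 0 on strong edges and close to 1 in flat regions. A common choice is $g(x) = 1/(1 + \beta|\nabla f(x)|^2)$; since the text does not display its exact form, the following minimal sketch is an illustrative assumption only:

```python
import numpy as np

def edge_indicator(f, beta=1.0):
    # Common GAC-style edge indicator g = 1 / (1 + beta * |grad f|^2);
    # beta is an assumed tuning parameter, not fixed by the paper.
    gy, gx = np.gradient(f)
    return 1.0 / (1.0 + beta * (gx ** 2 + gy ** 2))
```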
Motivated by the Euler–Lagrange equation of problem (1) with a level set function, Chan et al. [14] instead considered the following global segmentation model:
$$\min_{0 \le u \le 1} \int_\Omega |\nabla u|\,dx + \lambda \int_\Omega \big[(c_1 - f(x))^2 - (c_2 - f(x))^2\big]\, u(x)\,dx, \tag{2}$$
where $\lambda$ is an arbitrary positive parameter, $\Omega$ is an open set representing the image domain, and $\int_\Omega |\nabla u|\,dx$ is the total variation norm of the function $u$. Although this model has the ability of global segmentation, it fails for gray-scale inhomogeneous images. Later, a large number of scholars made improvements to the model, but problems with segmenting images of uneven gray scale remain. Recently, Bresson et al. [24] proposed to determine a global minimum of the snake model by enhancing model (2). The enhancement unifies it with the classic GAC model [27], leading to the model
$$\min_{0 \le u \le 1} \int_\Omega g(x)|\nabla u|\,dx + \lambda \int_\Omega r(x)\,u(x)\,dx, \qquad r(x) = (c_1 - f(x))^2 - (c_2 - f(x))^2, \tag{3}$$
where $g$ is the edge indicator function as defined above. Compared with model (2), model (3) segments the image more robustly owing to the weighted term $\int_\Omega g|\nabla u|\,dx$. To ease computation, similarly to [28], Bresson et al. [24] transform the above constrained problem into an unconstrained one in their Theorem 3, which requires minimizing the function
$$\min_u \int_\Omega g|\nabla u|\,dx + \lambda \int_\Omega r(x)\,u\,dx + \alpha \int_\Omega \nu(u)\,dx, \tag{4}$$
where $\nu(\xi) = \max\{0, 2|\xi - 1/2| - 1\}$ is an exact penalty function provided that the constant $\alpha$ is chosen large enough compared to $\lambda$, e.g., $\alpha > \tfrac{\lambda}{2}\|r\|_{L^\infty(\Omega)}$. Because problem (4) is unconstrained but includes a nonsmooth term and a nonlinear term, directly applying numerical methods is difficult. One efficient approach is to introduce a splitting variable $v$ and obtain the following form based on the penalty method:
$$\min_{u,v} \int_\Omega g|\nabla u|\,dx + \frac{1}{2\theta}\|u - v\|^2 + \int_\Omega \big(\lambda r(x)\,v + \alpha\,\nu(v)\big)\,dx, \tag{5}$$
where $\theta > 0$ is the penalty parameter. Furthermore, to overcome the numerical difficulties generated by the $L^1$-norm term, an alternating minimization scheme with auxiliary variables was introduced in [24]. However, a convergence analysis of this scheme is missing.
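As a small worked illustration of the exact penalty in (4) (a sketch, not the paper's code), $\nu$ vanishes exactly on $[0,1]$ and grows linearly outside, which is what allows the box constraint to be dropped:

```python
import numpy as np

def nu(xi):
    # Exact penalty nu(xi) = max(0, 2|xi - 1/2| - 1) from [24]:
    # zero for xi in [0, 1], linear growth outside.
    return np.maximum(0.0, 2.0 * np.abs(xi - 0.5) - 1.0)

assert nu(np.array([0.0, 0.5, 1.0])).max() == 0.0  # feasible points unpenalized
assert nu(np.array([-0.5, 1.5])).min() > 0.0       # infeasible points penalized
```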
In this paper, based on (5), let $\Omega$ be a bounded open set with Lipschitz boundary and $f$ be a given image. We propose a new model based on Bresson's method [24] as follows, where the parameters are as defined in (1) and $\nu$ is as defined in (4). Also,
The energy function to be minimized is as follows. Our novel contribution is to incorporate an edge indicator function into the regularization norm, and a novel strategy for images with weak boundaries is presented. The existence of a minimizer for (8) is given as follows.
The alternating scheme to solve minimization model (8) is as follows:
(1) Choose initial values for the variables, and set k = 0.
(2) While the stopping criterion is not satisfied, compute the following iteration:
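The following Python skeleton sketches this alternating structure. The three solver callbacks and the stopping rule are assumptions standing in for the subproblem solvers derived in Sections 2 and 3, not the paper's actual code:

```python
import numpy as np

def alternating_minimization(f, u0, v0, update_averages, solve_u, solve_v,
                             tol=1e-4, max_iter=500):
    # Outer loop of the alternating scheme for model (8); the user supplies
    # the three sub-solvers (hypothetical interfaces).
    u, v = u0.copy(), v0.copy()
    for k in range(max_iter):
        c1, c2 = update_averages(f, u)      # Section 2.2.1
        u_new = solve_u(f, v, c1, c2)       # problem (19), via ADMM
        v_new = solve_v(u_new, c1, c2)      # problem (20), projection step
        # Stop when the relative change of u is small (assumed criterion).
        converged = np.linalg.norm(u_new - u) <= tol * max(np.linalg.norm(u), 1e-12)
        u, v = u_new, v_new
        if converged:
            break
    return u, v
```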
The rest of this paper is organized as follows: In Section 2, the general framework of the proposed method and the concrete solution to each subproblem are presented. In Section 3, the numerical algorithm and experimental results illustrating the effectiveness of our model in image segmentation are given. Finally, we conclude the paper in Section 4.
2. The Weighted Chan–Vese Model
In this section, we mainly present the related definitions and properties and solve each subproblem in model (8).
2.1. Basic Notations
Without loss of generality, we represent a gray image as an $m \times n$ matrix, that is, $f \in \mathbb{R}^{m \times n}$. The Euclidean space $\mathbb{R}^{m \times n}$ is denoted as $W$; let $V = W \times W$, and define the gradient operator $\nabla : W \to V$ by $(\nabla u)_{i,j} = ((\nabla_x u)_{i,j}, (\nabla_y u)_{i,j})$ for $u \in W$, where $\nabla_x$ and $\nabla_y$ denote the discrete partial difference operators defined below.
Let $W$ be a finite-dimensional vector space equipped with the standard scalar product: for $u, v \in W$,
$$\langle u, v \rangle_W = \sum_{i=1}^{m}\sum_{j=1}^{n} u_{i,j}\, v_{i,j}.$$
For the discretization of the gradient operator $\nabla$, we use standard finite difference operators with periodic boundary conditions as follows:
$$(\nabla u)_{i,j} = \big((\nabla_x u)_{i,j}, (\nabla_y u)_{i,j}\big),$$
where
$$(\nabla_x u)_{i,j} = \begin{cases} u_{i+1,j} - u_{i,j}, & 1 \le i < m, \\ u_{1,j} - u_{m,j}, & i = m, \end{cases} \qquad (\nabla_y u)_{i,j} = \begin{cases} u_{i,j+1} - u_{i,j}, & 1 \le j < n, \\ u_{i,1} - u_{i,n}, & j = n. \end{cases}$$
Furthermore, we define the discrete divergence operator $\mathrm{div} : V \to W$ by using the divergence theorem [27]:
$$\langle \nabla u, p \rangle_V = -\langle u, \mathrm{div}\,p \rangle_W$$
for $u \in W$ and $p \in V$, where $\mathrm{div} = -\nabla^*$ and $\nabla^*$ denotes the adjoint operator of $\nabla$; then, we have
$$(\mathrm{div}\,p)_{i,j} = \big(p^1_{i,j} - p^1_{i-1,j}\big) + \big(p^2_{i,j} - p^2_{i,j-1}\big),$$
with indices understood periodically (i.e., $p^1_{0,j} = p^1_{m,j}$ and $p^2_{i,0} = p^2_{i,n}$).
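A minimal NumPy sketch of these periodic difference operators (an illustration, not the paper's code), including a numerical check of the adjoint relation $\langle \nabla u, p \rangle = -\langle u, \mathrm{div}\,p \rangle$:

```python
import numpy as np

def grad(u):
    # Forward differences with periodic boundary conditions.
    gx = np.roll(u, -1, axis=0) - u
    gy = np.roll(u, -1, axis=1) - u
    return gx, gy

def div(px, py):
    # Discrete divergence chosen as the negative adjoint of grad.
    dx = px - np.roll(px, 1, axis=0)
    dy = py - np.roll(py, 1, axis=1)
    return dx + dy

# Sanity check of <grad u, p> = -<u, div p>:
rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8))
px, py = rng.standard_normal((2, 8, 8))
gx, gy = grad(u)
assert np.isclose((gx * px + gy * py).sum(), -(u * div(px, py)).sum())
```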
2.2. The Minimization Problem
In this section, we first review some definitions and facts which will be used in our convergence analysis of Algorithm 1. These related contents can be found in the books [29, 30]. We use ADMM to solve every subproblem.
2.2.1. Energy Minimization with respect to $c_1$ and $c_2$
We fix $u$ and $v$ in order to determine exact solutions for $c_1$ and $c_2$; then, the minimization of the energy with respect to $c_1$ and $c_2$ is as follows:Setting the corresponding derivatives to zero, we can obtain
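In the relaxed setting of [14, 24], these updates are the region averages of $f$ weighted by $u$ and $1-u$; the sketch below states that closed form explicitly (a standard formula, given here as an assumption about the stripped display):

```python
import numpy as np

def update_averages(f, u, eps=1e-8):
    # c1: mean of f weighted by u; c2: mean of f weighted by (1 - u).
    # eps guards against an empty region; u is assumed to lie in [0, 1].
    c1 = (f * u).sum() / (u.sum() + eps)
    c2 = (f * (1.0 - u)).sum() / ((1.0 - u).sum() + eps)
    return c1, c2
```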
2.2.2. Energy Minimization with respect to $u$ and $v$
For any given $c_1$ and $c_2$, we have the problem
Problem (18) is convex, although not strictly convex jointly in $u$ and $v$, so any local minimizer is also a global minimizer. One effective approach is to transform it into an unconstrained minimization problem according to the theorems in [14, 24].
It is easy to see that the objective function in problem (18) is strictly convex and coercive, so it admits a unique solution. Formally, problem (18) couples the two variables $u$ and $v$, and the classical method is to split it into two subproblems. Thus, we alternately consider the following minimization problems:
Theorem 1. The following assertions hold:
(1) The solution of problem (19) is given by where is the projection operator and is defined by
(2) The solution of problem (20) satisfies where .
Remark 1. Problem (19) is a generalized ROF model obtained by adding a weight function to the regularization term, so its solution can be computed by the schemes in [27, 31, 32]. Problem (20) is similar to the situation in [28]; we notice that it is equivalent to the following minimization problem:where is the indicator function defined bywith . Thus, we can easily find that the solution of problem (20) satisfies condition (23).
Definition 1. The proximal operator of a proper, convex, lower semicontinuous function on is defined for bywhere $B$ is a bounded linear operator. Furthermore, when $B$ is the identity, it reduces to the classical proximal operator defined in [33].
Definition 2. Let be a real-valued convex function on . The subdifferential of at is defined byThe elements of the subdifferential are called subgradients. In particular, the subdifferential reduces to the gradient when the function is differentiable.
Definition 3. An operator $T$ is firmly nonexpansive if it satisfies one of the following equivalent conditions for all $x, y$:
(i) $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$
(ii) $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(x - Tx) - (y - Ty)\|^2$
Definition 4. An operator $T$ is nonexpansive if for any $x, y$, we have $\|Tx - Ty\| \le \|x - y\|$.
It follows that a firmly nonexpansive operator is nonexpansive. Based on the proof for the proximal operator in [34], we obtain a similar result for the proximal operator defined in Definition 1.
Lemma 1. The proximal operator defined in Definition 1 is firmly nonexpansive.
Proof. Following the definition of the proximal operator in Definition 1 and the chain rule, we can getWe obtainwhich implies the firm nonexpansiveness of the operator.
Corollary 1. The operator defined in minimization problem (19) is nonexpansive.
Proof. Based on the above definitions, set ; then, the operator is the proximal operator of . Furthermore, we get . According to Lemma 1, the operator is nonexpansive.
Lemma 2. Assume that $C$ is a closed convex set and $P_C$ is the projection operator onto $C$; then, the projector $P_C$ is firmly nonexpansive.
Proof. Let $p = P_C(x)$ and $q = P_C(y)$. By the variational characterization of the projection, $\langle x - p, z - p \rangle \le 0$ for all $z \in C$. Taking $z = q$ here, and $z = p$ in the analogous inequality for $y$, and adding the two inequalities, we can get $\|p - q\|^2 \le \langle x - y, p - q \rangle$. This yields condition (i) of Definition 3, so the projector is firmly nonexpansive.
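As a quick numerical illustration of Lemma 2 (an illustration only), the projection onto the box $[0,1]^n$, i.e., componentwise clipping, satisfies condition (i) of Definition 3:

```python
import numpy as np

# Check ||Tx - Ty||^2 <= <x - y, Tx - Ty> for T = projection onto [0, 1]^n.
rng = np.random.default_rng(1)
x, y = rng.uniform(-2, 3, 100), rng.uniform(-2, 3, 100)
Tx, Ty = np.clip(x, 0.0, 1.0), np.clip(y, 0.0, 1.0)
assert np.dot(Tx - Ty, Tx - Ty) <= np.dot(x - y, Tx - Ty) + 1e-12
```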
Corollary 2. The operator defined in minimization problem (20) is firmly nonexpansive.
Proof. From Theorem 1 and Remark 1, we derive the representation below; it then follows from Lemma 2 that the operator is firmly nonexpansive.
In view of the above results, we know that and are firmly nonexpansive.
Lemma 3. Let and be generated by (19) and (20), respectively. Then, and are convergent.
Proof. Denote and . We haveHence,Consider the Taylor series expansion of in the second variable, i.e.,Here, denotes the transpose. We notice that is quadratic in the second variable. Then, where is the identity matrix. Moreover, since is a convex function, we getCombining (36), (37), and (38), we obtainThe subdifferential of with respect to equals the sum of the subdifferentials of and , i.e.,Since is the minimizer of , we havethat isTherefore, the first term on the right-hand side of (39) is zero. When we solve the successive minimization problems (19) and (20), we note that . Hence, we getIt follows that the partial sums of the series are bounded. Thus, the infinite series is convergent.
Let . By a similar argument, we can prove that is convergent.
Definition 5. An operator $T$ is asymptotically regular if, for any $x$, the sequence $T^{n+1}x - T^n x$ tends to zero as $n \to \infty$.
Based on Lemma 3, we get the following result.
Lemma 4. For any initial values , assume and are generated by (19) and (20), respectively; then, and are asymptotically regular.
Proof. According to Lemma 3, we obtainAs and , by using the recurrence method, we haveTherefore, we getThis indicates that and are asymptotically regular.
Lemma 5. Suppose the unique minimizer of is . Then, and are the unique fixed points of and , respectively.
Proof. Since is differentiable with respect to and separately, we obtainThis implies thatWe easily get and . Therefore, and are the corresponding fixed points of and .
On the other hand, we note that is strictly convex and differentiable in each variable separately. Therefore, the fixed points of and are minimizers of . By the uniqueness of the minimizer of , each of and has a unique fixed point. That is to say, and are the unique fixed points of and , respectively.
According to Theorem 1 in [7], we get the following result.
Theorem 2. The sequence generated by Algorithm 1 converges to the solution of problems (19) and (20).
Proof. We first notice the fact thatso we can getIn particular, the energy sequence is nonincreasing.
Theorem 3. For any initial values , assume and are generated by (19) and (20), respectively; then, the iterates converge to the corresponding fixed points of and ; i.e., the sequence converges to the unique minimizer of as .
Proof. In view of Lemmas 1–5, we know that the operators are nonexpansive, asymptotically regular mappings with fixed points. By Theorem 1 in [7], the iterates converge, respectively, to the fixed points of and ; i.e., the sequence converges to the unique minimizer of as .
3. General Framework of the Proposed Model
Now, we discuss the solution for each variable separately; without loss of generality, we use ADMM to solve minimization problem (18).
Then, in the following sections, we obtain the solutions to problems (51) and (52).
3.1. Algorithm for Solving (51)
In this subsection, we solve problem (51) with respect to ; because it is not easy to solve directly, we use ADMM. We introduce two auxiliary variables in our model and obtain the following constrained problem:where
By using the augmented Lagrangian method to turn the constrained problem into an unconstrained one, we obtain the following augmented Lagrangian function:where and are Lagrangian multipliers, which can alternatively be regarded as the dual variables of the problem, and and are penalty parameters, constants that should be chosen properly. We convert it into a max–min problem and simplify:
In the following, we use ADMM to solve each subproblem of (56). First, the minimization subproblem is as follows:
The maximization subproblem is as follows:
Then, we solve the minimization and maximization problems by using ADMM.
(1) Subminimization with respect to the first auxiliary variable. We fix the remaining variables in order to determine an exact solution; then, the corresponding minimization is as follows:In order to solve it, we use the soft-thresholding operator defined in [35] and obtain
For the corresponding Lagrangian multiplier , we update it by using the gradient ascent method:
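A sketch of this pair of updates in NumPy (the variable names d and b and the penalty gamma are assumptions, since the paper's displayed updates were not recoverable):

```python
import numpy as np

def shrink(x, tau):
    # Soft-thresholding operator from [35]: sign(x) * max(|x| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_d_and_b(grad_u, b, weight, gamma):
    # d-update by shrinkage, then gradient ascent on the multiplier b.
    d = shrink(grad_u + b / gamma, weight / gamma)
    b = b + gamma * (grad_u - d)
    return d, b
```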
(2) Subminimization with respect to the second auxiliary variable. We fix the remaining variables in order to determine an exact solution; the corresponding minimization is as follows:Then, we solve the resulting equation and obtain
Simplifying equation (63), we can obtain
For the corresponding Lagrangian multiplier , we update it by using the gradient ascent method:
(3) Subminimization with respect to . We fix the remaining variables in order to determine an exact solution of ; we begin by considering the minimization of (55) with respect to , and then,Using the divergence theorem in the resulting equation, we obtain
Similarly, because periodic boundary conditions are imposed, we can solve the resulting linear system efficiently by the fast Fourier transform (FFT). This yields the following solution:where denotes the FFT operator and its complex conjugate. In (68), "∘" denotes componentwise multiplication, and the division is also componentwise.
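A minimal sketch of an FFT-based solve under periodic boundary conditions (the left-hand side gamma·I − Δ and the parameter gamma are assumptions; the paper's equation (68) fixes the actual system):

```python
import numpy as np

def solve_periodic(rhs, gamma):
    # Solve (gamma * I - Laplacian) u = rhs with periodic boundaries by
    # diagonalizing the discrete Laplacian in the Fourier basis.
    m, n = rhs.shape
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n) - 2.0
    denom = gamma - (wx[:, None] + wy[None, :])  # strictly positive for gamma > 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```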
3.2. Algorithm for Solving (52)
In this subsection, we solve problem (52) with respect to the remaining variable:
The details are as follows. Using the indicator function introduced in (23) and the constraint on the variable, we solve the minimization problem and getthat is,Thus, this minimization can be achieved through the following update:
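In the spirit of [24], this update is a pointwise clip onto $[0,1]$; the sketch below assumes the combination $u - \theta\lambda r$ inside the clip, which is the form used by Bresson et al. (the paper's own display was not recoverable):

```python
import numpy as np

def update_v(u, r, lam, theta):
    # Projection-type v-update: clip the unconstrained minimizer onto [0, 1].
    return np.clip(u - theta * lam * r, 0.0, 1.0)
```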
Then, the framework of the algorithm which solves problem (52) can be illustrated as follows.
In this section, we give experimental justification of our proposed model and compare it with the ACWE model [8] and the Bresson method [24]. These experiments show that our method is robust in segmenting objects in a range of images that have noise, have low contrast, and are directional. The proposed model was implemented in MATLAB 7 on a computer with an Intel Core 2 Duo 2.2 GHz CPU, 2 GB RAM, and the Windows XP operating system. In the numerical implementation, we only use proposed model (52) for the basic image segmentation problems. Our model is applied to both synthetic and real images in this section.
3.3. Experimental Results of Real Images
In this subsection, the effectiveness of the segmentation models and algorithms is verified on several real images. For each original image, we generate four test images: the original and three noisy versions obtained by adding 0.1, 0.15, and 0.2 random noise. The computer runs Windows 7, a 64-bit operating system. The four original images with their three noise levels have been processed, and the results are illustrated in Figures 1–4. Slight tuning of the model parameters is necessary between images (but not between models for a single image).




Figure 5 gives the four original images to be segmented. Figures 1–4 show the segmentation results of the proposed method and the related methods. The first row of Figure 1 shows the segmentation of the original picture, and the second and third rows show the segmentation results after adding 0.1 and 0.15 noise to the original image, respectively; we can see that our method successfully segments the bottom part of the image, while the other two methods fail. In the fourth row, although the segmentation is not complete, ours is fuller at several locations in the lower part of the image.

Figure 2 shows that our method is successful when the image has no noise or 0.1 random noise. When 0.15 or 0.2 random noise is added, all three methods fail, but compared with the other two, our method preserves more detail in the lion's neck. In Figure 3, our method has a clear advantage on the arc of the aircraft's tail and on its head, and the letter A on the tail is clearer than in the competing results. From Figure 4, we can again see that our method is better.
3.4. Experimental Results of Synthetic Images
We give three synthetic images and add three different degrees of random noise to each in this part; the effectiveness of the proposed segmentation model and algorithm is verified by comparison with the related models. Figure 6 shows the synthetic images, whose pixel values are only 0.4 and 0.8.
Figure 6: the three synthetic test images, panels (a)–(c).
Before presenting these experiments, we make the following explanations:
(1) For the above three synthetic images, there are two pixel values, 0.4 and 0.8.
(2) We use the error rate (err) index to measure the similarity between the initial image and the segmentation result; it is defined as follows, where the arguments are the test (segmented) image and the initial image:
From the definition, the closer the value of err is to 0, the closer the segmented image is to the initial image (Figures 7–9).
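Since the displayed formula for err was not recoverable, the sketch below uses the standard definition, the fraction of misclassified pixels, as an assumption:

```python
import numpy as np

def error_rate(test, reference):
    # err = (number of pixels where the segmentation disagrees with the
    # initial image) / (total number of pixels); assumed standard form.
    return float(np.mean(test.astype(bool) != reference.astype(bool)))
```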



The error rates for Figure 7, obtained by using Algorithm 2, are shown in Table 1.
The error rates for the synthetic image in Figure 8 are shown in Table 2.
The error rates for the synthetic image in Figure 9 are shown in Table 3.
The closer the value of err is to 0, the better the quality of the segmentation. The err values in Tables 1–3, where all three models are tested on simple synthetic images, quantitatively validate the outstanding performance of the proposed model in comparison with the Chan–Vese model and the Bresson method.
4. Discussion
In this paper, we have proposed a new variational model suitable for segmenting a range of images with blurred edges, a certain degree of noise, and directional texture. The norm acts on the boundary indicator function, and different weights are assigned to the two coordinate directions of the boundary at each location in the image. Experimental results on both real and synthetic images demonstrate that our method is robust and efficient. We will further extend the proposed model to color images in future work.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by Henan Province Natural Science Foundation Project (212 300 410 320).