Abstract

This paper presents a novel Markov random field (MRF) and adaptive regularization embedded level set model for robust image segmentation and uses graph cuts optimization to solve it numerically. Firstly, a special MRF-based energy term in the form of a level set formulation is constructed for strong local neighborhood modeling. Secondly, a regularization constraint with adaptive properties is imposed on the proposed model with the following purposes: reducing the influence of noise, forcing the power exponent of the regularization process to change adaptively with the image coordinates, and ensuring that the active contour does not pass through weak object boundaries. Thirdly, graph cuts optimization is used to implement the numerical solution of the proposed model to obtain extremely fast convergence. The extensive and promising experimental results on a wide variety of images demonstrate the excellent performance of the proposed method in both segmentation accuracy and convergence rate.

1. Introduction

Image segmentation is the technology and process of dividing an image into several specific regions with unique properties. In the field of computer vision, it is a key step from image processing to image analysis. Researchers in the image processing community have proposed a large number of segmentation algorithms; however, none of them can solve the segmentation problems of all image modalities, so research on image segmentation continues to develop. Among the family of image segmentation algorithms, one class occupies a dominant position: the geometric active contour models based on the level set representation. Such methods have a rigorous mathematical foundation and can handle topological changes of the contour freely, which is difficult for parametric active contour models; thus, they can simultaneously segment multiple targets and produce smooth, closed target contours. Without loss of generality, we can roughly classify the existing level set models (LSMs) into the following two categories: the edge-based models [1, 2] and the region-based models [3–8].

The edge-based models use the gradient information of the image to construct the driving force required for the evolution process. The geodesic active contour (GAC) model [1] proposed by Caselles et al. is a typical representative, and it is also one of the most successful and popular edge-based models; the driving force of its segmentation is derived from the intrinsic geometric measure of the input image. In order to completely eliminate the time-consuming reinitialization operation, Li et al. [2] presented a new level set regularization (LSR) energy term, which is used to penalize the deviation of the current level set function from the signed distance function, and embedded it into a variational level set model with the edge-based stopping energy term.

The region-based models construct the driving force needed for the evolution process from the regional statistical information of the image. The popular Chan–Vese (CV) model [3] was proposed under the assumption that the image intensity is statistically homogeneous (roughly constant) in each region. By transforming the problem of curve evolution into simple integer operations between two linked lists, Shi and Karl [4] constructed a fast two-cycle (FTC) algorithm for approximating level-set-based curve evolution. In order to solve the problem of segmenting inhomogeneous images, Li et al. [5] proposed a variational level set model based on local binary fitting (LBF) energy. Lankton and Tannenbaum [6] proposed a natural framework called localized active contours (LAC), which allows any region-based level set segmentation model to be reformulated in a local way. Ding et al. [7] designed a local prefitting energy- (LPFE-) based active contour model for fast image segmentation, which achieves relatively fast segmentation and relatively robust initialization. In addition, relying on reciprocal cross-entropy theory, Ni and Wu [8] introduced an active contour model for image segmentation based on a novel fitting term (NFT).

The aforementioned two broad categories of methods can obtain ideal segmentation results on high-quality images; however, their segmentation performance drops sharply when the noise interference in the test image is obvious. The reason is that the existing methods usually assume that the pixels in each region are independent of each other when constructing the energy functionals; such an assumption makes the curve evolution process very sensitive to noise. In addition, most of them use traditional finite difference schemes to discretize the segmentation models. However, the internal logic and execution mechanism of these finite difference schemes make them very time consuming; thus, this type of numerical strategy is unsuitable for image segmentation applications with high real-time requirements.

To solve the above problems, this paper presents a novel MRF [9] and adaptive regularization embedded level set model for robust image segmentation and uses graph cuts to solve it numerically. Firstly, a special MRF-based energy term in the form of a level set formulation is constructed for strong local neighborhood modeling. With the help of MRF modeling, adjacent groups of pixels are preferentially assigned to the same region, which can be guaranteed even in the presence of noise interference; thus, the model has natural robustness to noise. Secondly, a regularization constraint with adaptive properties is imposed on the proposed model with the purpose of reducing the influence of noise and ensuring that the active contour does not pass through weak object boundaries. The adaptive regularization here is an extension and optimization of the method proposed by Zhou and Mu [10]. After the extension, the proposed model overcomes the problem that Zhou and Mu’s method can only take a fixed constant exponent and cannot adapt to the pixel position. Thirdly, graph cuts optimization [11] is used to implement the numerical solution of the proposed model. With its help, our model obtains very fast convergence performance, which is confirmed by the quantitative data in the experimental section.

The remainder of this paper is organized as follows: Section 2 is a brief description of the related background knowledge. Section 3 presents the proposed model and its corresponding implementation strategies. Section 4 validates the proposed model by extensive experiments and discussions on a wide variety of images. Last, conclusions are drawn in Section 5.

2. Background

The LSM was originally proposed by Osher and Sethian [12] in 1988 to model the evolution of flame fronts governed by thermodynamic equations. Because of the high dynamism of flames and the variability of their topological structure, describing such changes with parametric curves or surfaces inevitably encounters great difficulties; however, the problem can be handled well by introducing the level set framework.

The core idea of level set image processing is to represent an active contour as the zero level set of an implicit function (called level set function) which is one dimension higher than it. Figure 1 shows an example of a level set function and its corresponding zero level set.

When dealing with the evolution of planar curves, the LSM does not try to track the location of the evolving curve but instead follows a certain law (usually a gradient descent flow equation); in the two-dimensional Cartesian coordinate system, the level set function (LSF) is updated continuously so as to evolve the closed curve that is hidden in the LSF. The biggest advantage of this style of curve evolution is that the LSF remains a valid function even if a topological change (splitting or merging) occurs in the closed curve hidden within it. Figure 2 shows an example of level set evolution (LSE); the first row represents three states in the LSE process, and the second row shows the zero level set curves corresponding to the first row. From this diagram, we can see that topological changes of the evolving curve (here of the splitting type) are handled well by the level set representation.

Below we give a mathematical description of the level set function: the LSM represents the moving curve (or active contour) by the zero level set of a Lipschitz function φ(x, y), i.e., the set of points where φ(x, y) = 0. The evolution equation of the level set function can be written in the following general form: ∂φ/∂t + F|∇φ| = 0, where “∇” is the gradient operator and F represents the speed of the level set evolution process. For image segmentation applications, the speed function F is usually determined jointly by the image data and the level set function φ.
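To make this general form concrete, the following is a minimal sketch of an explicit update of the evolution equation, assuming a user-supplied speed field and a simple central-difference gradient; it only illustrates the general equation above and is not the numerical scheme used in this paper (which relies on graph cuts, Section 3.2).

```python
import numpy as np

def evolve_level_set(phi, F, dt=0.1, n_iter=100):
    """Explicit update of the general evolution equation
    d(phi)/dt = -F * |grad(phi)| on a pixel grid.

    phi : 2-D array, the level set function
    F   : 2-D array, the speed at each pixel (assumed to be given)
    """
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)              # central differences
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        phi = phi - dt * F * grad_norm         # move the zero level set along its normal
    return phi
```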

3. The Proposed Model

In this section, we present the segmentation model proposed in this paper, whose core ideas are MRF modeling and adaptive regularization. Similar to most popular LSMs, we express the energy functional framework of the proposed segmentation model as the sum of two terms: a regularization term, which implements the regularization operation, and an MRF term, which models the relationship between each pixel and its neighborhood under the Markov random field framework. At a macro level, the regularization term is equivalent to the internal energy, and its function is to modify the properties of the level set function itself; the MRF term is equivalent to the external energy, and its role is to drive the level set function toward the foreground.

Below, we first construct the energy terms on which the proposed model relies and then give a detailed solution process based on graph cuts optimization.

3.1. Energy Terms
3.1.1. Regularization Term

In order to enhance the smoothness of the level set evolution process, we introduce an adaptive regularization constraint that is directly applied to the zero level set, the effect of which is to reduce the influence of noise and ensure that the active contour does not pass through the weak object boundaries.

In order to solve the problem of regularizing the zero level curve, Zhou and Mu [10] proposed an efficient regularization scheme based on a weighted Dirichlet integral. Although this method achieves good results, its constant exponent brings the following problem: it cannot effectively reflect the local characteristics of the image, and therefore the regularization scheme cannot let the exponent automatically adapt to the image data. In order to enhance the adaptability of the constraint to local image information, we propose a weighted Dirichlet integral regularization with variable exponent, in which the image is first convolved with a Gaussian filter kernel (with standard deviation σ), “∇” denotes the gradient operator, and the exponent is a strictly monotone function satisfying two limiting properties at its extremes. By matching these two features, we can construct the following simple function:

Below, we briefly analyze the macroscopic properties of the exponent function:
(i) When the exponent equals 1, the regularization term degenerates to a simple form which, by comparison with the C-V model, is exactly the curve length constraint used in the C-V model; that is, it computes the length of the zero level curve of the level set function.
(ii) When the exponent is a constant greater than 1, the regularization term simplifies to the geometric constraint for the zero level curve used in [10].

For most images, the intensity of the same foreground area is not strictly uniform; that is to say, different local areas within the foreground have different local characteristics. Given this, if the above exponent is set to a constant, the local characteristics of the image cannot be correctly reflected; in other words, the exponent cannot automatically match the image data. Therefore, it is unreasonable to set the exponent to a constant. In view of this, we take a different approach: in our framework, the exponent is directly related to the local intensity information of the image. Our approach ensures that weak regularization is performed within regions with almost constant intensity values (where the image gradient is almost zero); as a result, it can effectively avoid the disappearance of weak boundaries. In contrast, in other areas, a strong regularization operation is applied to force the removal of false contours. Therefore, the variable exponent makes our regularization scheme adaptively choose between weak and strong regularization. In the experimental results section, we use a simple example to intuitively illustrate the effect of the variable exponent strategy.
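For illustration, the sketch below shows one possible exponent map consistent with the behavior just described (values close to 1 where the smoothed gradient is nearly zero, growing monotonically toward a maximum elsewhere). The exact exponent function of the paper is not reproduced here, so the functional form, the maximum value p_max, and the parameter names are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_exponent(image, sigma=1.5, p_max=2.0):
    """Hypothetical adaptive exponent for the weighted Dirichlet regularizer:
    close to 1 (weak regularization) in nearly constant regions and approaching
    p_max (strong regularization) where the smoothed gradient is large."""
    smoothed = gaussian_filter(image.astype(float), sigma)  # Gaussian-smoothed image
    gy, gx = np.gradient(smoothed)
    g = np.hypot(gx, gy)
    g = g / (g.max() + 1e-12)                               # normalize to [0, 1]
    return 1.0 + (p_max - 1.0) * g                          # strictly monotone in g
```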

3.1.2. MRF-Based External Energy Term

The traditional level set segmentation models ignore the coupling relationship between neighboring pixels, so the evolution curve often converges to the wrong noise position. In order to improve the overall antinoise performance of the level set segmentation model, we introduce the idea of MRF to model the coupling relationship between the pixels and their neighborhoods.

Firstly, we introduce the square neighborhood of each pixel required by the MRF model. In order to match the requirements of MRF, we first construct a coupled random field that defines our variables and notation; it consists of the intensity random field and the label random field (also known as the characteristic random field), the latter of which is used to distinguish the foreground (F) from the background (B).

In addition, MRF theory tells us that an MRF-based energy functional finds the optimal category (foreground or background) label by using the neighborhood information of each pixel. Usually, we achieve this goal by maximizing the a posteriori (MAP) probability distribution. With the help of Bayes’ theorem, the segmentation (posterior) probability can be expressed in terms of the prior segmentation probability, the conditional segmentation probability, and the image probability. When the image data are given, the image probability is a constant; therefore, we can remove it from the expression without affecting the proportional relationship contained in the equation. We thus obtain the following proportional relationship:
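Spelled out in standard notation (the paper’s own symbols are omitted in the text above, so ω for the label field and f for the observed image are placeholders of ours), the relation being used is

```latex
P(\omega \mid f) \;=\; \frac{P(f \mid \omega)\, P(\omega)}{P(f)} \;\propto\; P(f \mid \omega)\, P(\omega),
```

since P(f) is a constant once the image is given.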

The existing level set segmentation models generally assume that the pixels in the foreground and background regions are independent of each other. However, such an assumption is not reasonable and directly makes their segmentation results vulnerable to noise interference. In order to overcome this shortcoming, we introduce MRF theory to model the two probability components in the above expression; that is to say, when we construct the probability expressions, we incorporate the neighborhood relationships into our formulation. After the introduction of MRF, the values of the prior and conditional probabilities at each pixel depend on the neighborhood of that pixel.

By assuming that the pixels in the foreground and in the background both obey a Gaussian distribution, we can construct an expression for the conditional probability in terms of the means and standard deviations of the pixel values in the foreground and background regions. By using the level set function and instantiating the label as foreground (F) or background (B), we can give their calculation formulas:
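A minimal sketch of these region statistics, assuming the common convention that the foreground corresponds to φ > 0; the paper’s exact sign convention and its (possibly smoothed) Heaviside function are not reproduced here.

```python
import numpy as np

def region_statistics(image, phi):
    """Means and standard deviations of the pixel values inside (foreground)
    and outside (background) the zero level set, using H(phi) as a mask."""
    H = (phi > 0)              # sharp Heaviside; a smoothed H could be used instead
    fg, bg = image[H], image[~H]
    return (fg.mean(), fg.std()), (bg.mean(), bg.std())
```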

As for the prior probability, we assume that the label of the current pixel is only associated with its neighborhood, which is reasonable because it is highly consistent with the Markov property. Therefore, based on the Hammersley–Clifford theorem [13], the prior can be expressed as a Gibbs density built from a normalization constant, the set of all possible cliques, and a clique energy function. The clique energy depends on the absolute value of the intensity difference between the center pixel and each of its 8 neighborhood pixels, as well as on a Gibbsian parameter, which usually needs to be set to an appropriate constant. From this calculation formula, we can clearly see that its value is directly related to the local intensity information of the image; therefore, the prior segmentation probability adapts strongly to the local image data. This is an important reason why the segmentation model of this paper achieves excellent segmentation performance.
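The sketch below illustrates a contrast-sensitive clique energy of the kind described, in which neighbors carrying a different label contribute a penalty that decays with the absolute intensity difference; the precise potential and the value of the Gibbsian parameter used in the paper are not reproduced, so this particular form is an assumption.

```python
import numpy as np

# 8-neighborhood offsets (dy, dx) around the center pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def clique_energy(image, labels, y, x, beta=1.0):
    """Illustrative clique energy at pixel (y, x): each 8-neighbor with a
    different label adds beta * exp(-d), where d is the absolute intensity
    difference, so the prior adapts to the local image data."""
    h, w = image.shape
    energy = 0.0
    for dy, dx in OFFSETS:
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != labels[y, x]:
            d = abs(float(image[y, x]) - float(image[ny, nx]))
            energy += beta * np.exp(-d)   # beta plays the role of the Gibbsian parameter
    return energy
```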

According to optimization theory, the MAP problem is equivalent to an energy minimization problem; therefore, we can restate the MAP problem in the following form:

By instantiating the label as foreground (F) or background (B) and, at the same time, introducing the level set function and the Heaviside function, we can construct the MRF energy functional, which has the following linear combination form:

3.2. Gradient Descent Flow and Graph Cuts Optimization-Based Numerical Implementation
3.2.1. Gradient Descent Flow

By combining the aforementioned subenergy terms together, we obtain the total energy functional corresponding to the proposed segmentation model:

In order to derive the Euler–Lagrange equation corresponding to the level set function, we use slightly regularized (smoothed) forms of the Heaviside function and its derivative, the Dirac function; their formulas are as follows:
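For reference, the sketch below gives the smoothed Heaviside/Dirac pair widely used for this purpose in variational level set models (e.g., the C-V model); whether the paper uses exactly this pair and this value of the smoothing parameter is an assumption.

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Smoothed Heaviside: H_eps(z) = 0.5 * (1 + (2/pi) * arctan(z/eps))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac_eps(z, eps=1.0):
    """Smoothed Dirac delta, the derivative of H_eps:
    delta_eps(z) = eps / (pi * (eps**2 + z**2))."""
    return eps / (np.pi * (eps ** 2 + z ** 2))
```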

Next, by introducing an artificial time variable and minimizing the energy functional with respect to the level set function, we obtain the following gradient descent flow equation, whose left-hand side is the partial derivative of the level set function with respect to time. Based on this equation, we can iteratively update the level set function and thereby realize the dynamic evolution of the curve.

If traditional numerical discretization methods, such as the finite difference schemes used in our previous conference paper [14], are used to solve the gradient descent equation shown in equation (15), the time consumption of the evolution process will be very large, which is not feasible for real-time applications. In the next section, by transforming the problem into a graph cuts optimization problem, we design a fast and efficient numerical solution.

3.2.2. Graph Cuts Optimization-Based Numerical Implementation

In this section, we will efficiently solve the proposed level set model from the perspective of graph cuts optimization. Because graph cuts algorithms have two notable features, rapidity (the time consumption of the iteration process is small) and globality (the iteration process does not get stuck in a local optimum), numerical implementation schemes based on such frameworks can greatly improve the speed of the curve evolution process and the accuracy of the segmentation results, and the final segmentation results are only weakly correlated with the initial position of the active contour (a benefit of globality).

Suppose G = ⟨V, E, W⟩ is a graph, where V is the set of all vertices, E is the set of all connected edges, and W is the set of weights (greater than or equal to 0) associated with the edges. In order to intuitively express the segmentation and classification process on the graph, it is usually necessary to set two special terminal nodes: s (the source node) and t (the sink node). If, for each vertex in V, there is a path starting from s, passing through that vertex, and ending at t, then the graph is called a flow network. In this type of network, the edges starting from s and the edges entering t are called t-links, and the edges connecting vertices other than s and t are called n-links. A cut of the graph divides its vertices into two disjoint subsets S and T, with s ∈ S and t ∈ T. For a given cut, its cost is defined as the sum of the weights w(u, v) of all edges whose endpoints u and v lie in different subsets:
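Under this definition, the cost of a cut is simply the sum of the weights of the edges whose endpoints end up on different sides; a toy computation on a hand-made graph (node names and weights are arbitrary):

```python
def cut_cost(weights, side):
    """Cost of a cut.

    weights : dict mapping an edge (u, v) to its nonnegative weight
    side    : dict mapping each node to 'S' (source side) or 'T' (sink side)
    """
    return sum(w for (u, v), w in weights.items() if side[u] != side[v])

# Tiny example: three pixel nodes a, b, c plus the terminals s and t
weights = {('s', 'a'): 3.0, ('s', 'b'): 1.0, ('a', 'b'): 2.0,
           ('b', 'c'): 2.0, ('c', 't'): 3.0, ('a', 't'): 0.5}
side = {'s': 'S', 'a': 'S', 'b': 'T', 'c': 'T', 't': 'T'}
print(cut_cost(weights, side))   # cut edges (s,b), (a,b), (a,t): 1.0 + 2.0 + 0.5 = 3.5
```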

How to effectively estimate the length of the contour within the graph cuts framework is a critical step for our algorithm. Fortunately, this issue has been thoroughly studied by Kolmogorov and Boykov [14–16]. They construct a grid graph and assign weight values to its connection edges so that the cost of a cut can be intuitively associated with the length of a contour with the help of the Cauchy–Crofton formula. The Cauchy–Crofton formula states that, by drawing a sufficiently large number of straight lines in all directions from 0 to π and counting the intersection points between the lines and the contour of interest, an estimate of the length of the contour can be obtained. Then, by a reasonable partitioning of the directions, a discrete formula can be used to approximate the length of the contour. Its ingredients are M, the total number of directions of the neighborhood system used (for the 4-neighborhood system shown in Figure 3, M = 2; for the 8-neighborhood and 16-neighborhood systems shown in Figure 3, M = 4 and M = 8, respectively), the angle between two adjacent directions, the spacing of the discrete grid, the length of each edge, and the total number of intersections of the active contour with all lines in each direction (see Figure 3). In order to estimate the number of intersections, we introduce a metric function that determines whether the line segment connecting two neighboring pixels intersects the active contour or not. It is clear that the segment intersects the contour only when the two pixels have different label values. In view of this, the metric function can be defined as follows:

Then, we have

If we choose the same edge weight for the straight line group with the same direction, then

Combining equation (17) with equation (20), the contour length can be rewritten as follows:

(1) Discrete Representation of the Energy Functional. The energy functional shown in equation (13) is defined on a continuous domain. In order to associate it with the aforementioned grid graph, we first need to discretize it. Suppose we are given the set of pixel grid points of the image under consideration, with a horizontal and vertical spacing of 1 between grid points; the total number of grid points then equals the number of pixels in the image. To facilitate the representation of a partition of the grid graph, we define a binary flag function similar to that used in [17]; its definition, in the form of a simple expression, is as follows:

Based on the abovementioned flag function, the first term of equation (13) can be discretized as the following expression:where and .

Following the same processing logic, we can obtain the discrete forms of the second to fourth terms based on the above flag function:where .

By combining the above various subitems, a total discretization form corresponding to equation (13) can be obtained in the following form:

(2) Energy Minimization Using Graph Cuts Optimization. According to the function classification rules provided in [18], it is easy to see that equation (25) is a typical function that can be optimized within the graph cuts framework. Suppose the flow network under consideration has a source node s and a sink node t. By treating the image pixels as the nodes of the grid graph, we can closely associate the image with the flow network. For each pixel, there are two t-links, one to the source and one to the sink, with correspondingly defined weights. In addition, each pair of pixels in the neighborhood system is connected by an n-link with its own connection weight. By analyzing equation (25) and combining the similar items in it, we can obtain the formulas for calculating the various weights as follows:

After obtaining the weight coefficients given by equations (26)–(28), we can obtain the minimum cut of the graph with a typical maximum-flow/minimum-cut algorithm [19–21]. The principles of graph cuts tell us that the minimum cut of the graph corresponds exactly to the minimum value of the energy functional shown in equation (25). After completing each iteration, we obtain the updated binary labels and the updated level set function.
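A compact sketch of this step using the PyMaxflow package, assuming the two t-link weight arrays and the n-link weight have already been computed from equations (26)–(28); the function and variable names below are placeholders, not the paper’s.

```python
import numpy as np
import maxflow   # PyMaxflow, assumed to be installed

def graph_cut_step(w_source, w_sink, w_pair):
    """One graph cuts update.

    w_source, w_sink : 2-D arrays of t-link weights (cf. Eqs. (26)-(27))
    w_pair           : scalar or array of n-link weights (cf. Eq. (28))
    Returns a boolean label field (True = source-side pixels).
    """
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(w_source.shape)
    # 4-connected n-links by default; an 8-connected structure can be passed instead
    g.add_grid_edges(nodeids, weights=w_pair, symmetric=True)
    g.add_grid_tedges(nodeids, w_source, w_sink)
    g.maxflow()
    sink_side = g.get_grid_segments(nodeids)   # True = sink segment in PyMaxflow's convention
    return np.logical_not(sink_side)
```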

3.3. Algorithm Steps

The detailed execution steps of the proposed algorithm are as follows:
(1) Place (manually or automatically via a computer program) an initial curve on the image to be processed and calculate the corresponding signed distance function, which serves as the initial level set function required by the evolution process; its sign distinguishes points inside the initial curve, points outside it, and points on it.
(2) Compute the adaptive exponent according to equation (4).
(3) Compute the foreground and background statistics according to equation (11).
(4) Set the binary labels corresponding to the foreground and background to 1 and 0, respectively.
(5) Use equations (26)–(28) to construct the grid graph and apply a graph cuts algorithm to derive its minimum cut, which yields a set of updated binary labels.
(6) Repeat step (5) until the graph cuts optimization process converges; the region where the binary label equals 1 is the final segmentation result.
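Putting the steps together, the overall iteration could be organized as in the driver below. Here variable_exponent, region_statistics, and graph_cut_step refer to the earlier sketches, build_weights is a hypothetical placeholder for equations (26)–(28), and the sign convention (positive inside the initial curve) is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def initial_sdf(mask):
    """Signed distance function of a region mask (True = inside the curve)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside          # > 0 inside, < 0 outside, 0 on the curve

def segment(image, init_mask, n_iter=20):
    phi = initial_sdf(init_mask)                               # step (1)
    labels = init_mask.copy()
    for _ in range(n_iter):
        p = variable_exponent(image)                           # step (2), cf. Eq. (4)
        stats = region_statistics(image, phi)                  # step (3), cf. Eq. (11)
        w_s, w_t, w_n = build_weights(image, labels, p, stats) # Eqs. (26)-(28), placeholder
        new_labels = graph_cut_step(w_s, w_t, w_n)             # step (5)
        if np.array_equal(new_labels, labels):                 # step (6): convergence
            break
        labels = new_labels
        phi = initial_sdf(labels)                              # re-embed labels as an SDF
    return labels
```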

4. Experimental Results and Discussions

In this section, we demonstrate the experimental results of our MRF and adaptive regularization embedded level set algorithm for image segmentation. The experiments are implemented in Matlab R2012a on a computer with a 2.3 GHz Intel Core i7 CPU, 8 GB RAM, and the Windows 7 operating system. For the following parameters, we use the same values in all experiments, i.e., , , and . The standard deviation of the Gaussian filter kernel changes from image to image; that is to say, its value has a certain degree of image dependency. The parameter in the regularization term follows the same rule. In order to rigorously test and validate the proposed model, we compare it with the following methods: GAC [1], LSR [2], CV [3], FTC [4], LBF [5], LAC [6], LPFE [7], and NFT [8], all of which were outlined in the Introduction. When conducting comparison experiments, we follow the same rule that all models are purely data-driven and do not incorporate any type of prior knowledge. In addition, all methods use the same initial curve, and the control parameters of each segmentation model are set strictly according to the selection criteria given in the corresponding literature.

In order to test the generalization ability of the proposed model, we randomly selected the test images. Some of these images come from the image segmentation literature, and some come from the internet; the images are only weakly related to each other.

Here, we use four region overlap metrics to compare the performances of the models quantitatively. Their definitions involve the intersection and union of two regions, the output region of the segmentation algorithm, the ground truth, the common part of the two regions, and the number of pixels in each set. Obviously, the closer the JS [22] and DSC [23] values are to 1, and the closer the FPR and FNR values are to 0, the better the segmentation results.
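A sketch of the four metrics on binary masks. JS and DSC follow the standard Jaccard and Dice definitions; the normalization used below for FPR and FNR (by the size of the ground truth) is one common convention and may differ from the paper’s exact equations.

```python
import numpy as np

def overlap_metrics(seg, gt):
    """seg, gt : boolean masks (algorithm output and ground truth)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    js = inter / union                               # Jaccard similarity
    dsc = 2.0 * inter / (seg.sum() + gt.sum())       # Dice similarity coefficient
    fpr = np.logical_and(seg, ~gt).sum() / gt.sum()  # false positive ratio (one convention)
    fnr = np.logical_and(~seg, gt).sum() / gt.sum()  # false negative ratio
    return js, dsc, fpr, fnr
```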

In order to test the antinoise performance of the proposed model, we apply it to a set of noisy images, which are obtained by adding Gaussian noise of different variances to a clean 128 × 128 synthetic image (located at the upper left corner of Figure 4). Figure 4 shows the binary segmentation results of the different methods, where the first column contains the input images and the initial contours required by the level set-based segmentation methods, and the second to the tenth columns are the segmentation results of GAC, LSR, CV, FTC, LBF, LAC, LPFE, NFT, and our method, respectively; the red and black regions represent the foreground and background, respectively. It can be seen from the experimental results that the superiority of the proposed method over GAC, LSR, CV, FTC, LBF, LAC, LPFE, and NFT in terms of visual segmentation accuracy is very obvious, and our method outputs the correct segmentation results at all noise levels. In the absence of noise, all compared methods output the correct segmentation results; under noise interference, only the LBF model outputs the correct result at a noise level of 0.01, and beyond that their outputs are either completely wrong or contain a lot of noise. We also give the corresponding quantitative assessment indicators, computed with the four region overlap metrics defined above and shown in Table 1. By comparing these quantitative data, we find that the segmentation performance of our method is the best.

Next, we apply our model to the task of medical image segmentation. It is well known that medical image segmentation is a basic task in the field of medical image analysis. Accurate, robust, and rapid segmentation of the target area is an important guarantee for subsequent quantitative analysis, three-dimensional visualization, etc.; it also lays a solid foundation for clinical applications such as image-guided surgery, radiotherapy planning, and treatment evaluation. However, medical image data generally contain noise interference, which undoubtedly places high demands on the robustness of segmentation algorithms. Here, we apply the different level set segmentation methods to a set of medical images acquired in several imaging bands. In addition to the noise interference mentioned above, these images share a common feature: their target contours are relatively weak. The parameters associated with this set of experiments are as follows: and . A common characteristic of the initial curves in this set of experiments is that the target areas lie completely within large rectangular curves, so there is necessarily a lot of background information within the active contours; this places high demands on the background suppression ability of the segmentation model, and, for those segmentation models that depend on the initial properties (position, shape, etc.) of the curve, such an initialization inevitably affects their output accuracy. The comparison segmentation results are shown in Figure 5, and the corresponding quantitative assessment indicators, computed with the aforementioned region overlap metrics, are shown in Table 2. From the visual comparison in Figure 5, we can clearly see that the proposed model outputs accurate segmentation results on all medical images, while the methods involved in the comparison either undersegment or oversegment, or divide the target area into a large number of fragments; in short, they all output wrong segmentation results. The quantitative data in Table 2 further confirm these visual comparison conclusions.

In order to further verify the segmentation performance of the proposed model, we apply it to the task of segmenting real infrared sea surface targets. It is well known that infrared images with a sea background are notorious for their low signal-to-noise ratio; therefore, it is often very difficult to segment infrared sea surface targets with traditional methods. Figure 6 summarizes the segmentation results of all the compared methods, and Table 3 shows the corresponding quantitative indicators. For initialization, we use the segmentation contour produced by the Otsu [24] algorithm as the initial curve of the level set evolution process. From this set of experiments, we can clearly see that the proposed method outputs the best segmentation results on all infrared sea surface target images; the reason is that the MRF-based level set model and the adaptive regularization strategy for the zero level curve together form a strong background suppression force, and under their joint action the proposed model can completely suppress the background interference. In contrast, the other compared methods do not effectively model the neighborhood relationships between pixels, and the regularization strategies they adopt are simple; thus, they output very mediocre segmentation results.

In the next set of comparative experiments, we give the convergence rate metrics for the various methods, which are usually expressed as the number of iterations and the CPU time required to complete the entire evolution process. It is well known that a comparison of convergence rate metrics is meaningful only when the segmentation results of all the compared models are correct. In view of this, we configure the initial level set function and the input parameters of each model as favorably as possible. Here, we adopt the same comparison methods as in the three sets of experiments above. Figure 7 shows the results of our model and the other comparison models on the same images with the same initial contours, where the first row shows the original images (a set of simple images with uniform background and target) along with the initial contours, and the second to the tenth rows show the segmentation results of GAC, LSR, CV, FTC, LBF, LAC, LPFE, NFT, and our method, respectively. From the final convergence curves, we can clearly see that all models output almost the same correct segmentation results (there are only very small differences between them). The corresponding convergence rate metrics for this set of experiments are shown in Table 4, where the sizes of the images are also listed under the image numbers. After analyzing the convergence rate metrics in Table 4, we find that our model achieves the best convergence performance because the very fast graph cuts optimization method is adopted in its numerical solution process. FTC achieves the second-best performance overall and is the best among the comparison models that use traditional numerical discretization; the reason is that it adopts a special element switching mechanism between two linked lists, which makes it unnecessary to solve a partial differential equation. The other comparison models follow; because they adopt more complex internal or external energy terms, their convergence rates are slower.

Through the above four sets of comparison experiments, we clearly see that the proposed model has the best performance in both segmentation accuracy and convergence rate under different image modalities, which is mainly attributed to the following factors:
(1) The proposed level set model possesses a strong neighborhood modeling ability; thus, it can effectively suppress noise interference.
(2) This paper adopts an adaptive regularization strategy designed specifically for the zero level curve, which further enhances the noise suppression capability of the proposed model and, at the same time, gives it a strong ability to detect weak edges.
(3) Since the graph cuts optimization method is used for the numerical solution, the convergence rate of our model is greatly improved.

5. Conclusions

In this paper, we designed a robust MRF and adaptive regularization embedded level set segmentation model and solved it by graph cuts optimization. The MRF component gives the proposed method a strong neighborhood modeling ability, which in turn gives it strong noise suppression performance. In addition, to further improve the noise suppression capability of the proposed model and to equip it with a strong ability to detect weak edges, we introduced a regularization strategy with adaptive characteristics. In the numerical calculation stage, we used the extremely fast graph cuts optimization to realize the iterative solution of our model. To fully test the proposed model's segmentation ability, we applied it to segmentation tasks on different image modalities (including noisy synthetic images, medical images, infrared images, and clean synthetic images). Under the combined effect of the aforementioned three factors, the proposed model achieves the best performance in both segmentation accuracy and convergence rate across these image modalities.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with this work.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities of China under Grant No. ZYGX2018J079.