Review Article  Open Access
Mohammed M. Abdelsamea, Giorgio Gnecco, Mohamed Medhat Gaber, Eyad Elyan, "On the Relationship between Variational Level Set-Based and SOM-Based Active Contours", Computational Intelligence and Neuroscience, vol. 2015, Article ID 109029, 19 pages, 2015. https://doi.org/10.1155/2015/109029
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Abstract
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs and their relationship, and we review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses.
1. Introduction
Image segmentation is the problem of partitioning the domain Ω of an image I(x), where x is the pixel location within the image, into different subsets Ω_i, where each subset has a different characterization in terms of color, intensity, texture, and/or other features used as similarity criteria. Segmentation is a fundamental component of image processing and plays a significant role in computer vision, object recognition, and object tracking.
Traditionally, image segmentation methods can be classified into five categories. The first category is made up of threshold-based segmentation methods [1]. These methods are pixel-based and usually divide the image into two subsets, that is, the foreground and the background, using a threshold on the value of some feature (e.g., gray level or color value). These methods assume that the foreground and the background in the image have different ranges for the values of the features to be thresholded. Over the years, many different thresholding techniques have been developed, including minimum-error thresholding, moment-preserving thresholding, and Otsu's thresholding, just to mention a few. The most popular thresholding method, Otsu's algorithm [2], improves the image segmentation performance over other threshold-based segmentation methods in the following way. The threshold used in Otsu's algorithm is chosen in such a way as to optimize a trade-off between the maximization of the inter-class variance (i.e., between pairs of pixels belonging to the foreground and the background, resp.) and the minimization of the intra-class variance (i.e., between pairs of pixels belonging to the same region). Otsu's thresholding algorithm and its extension to the case of multiple thresholds [3] are good for thresholding an image whose intensity histogram is either bimodal or multimodal (e.g., they provide a satisfactory solution in the case of the segmentation of large objects with nearly uniform intensities, significantly different from the intensity of the background). However, they lack the ability to segment images with a unimodal distribution (e.g., images containing small objects with different intensities), and their outputs are sensitive to noise. Thus, post-processing operations are usually required to obtain a final satisfactory segmentation.
The second category of methods is called boundary-based segmentation [4]. These methods detect boundaries and discontinuities in the image based on the assumption that the intensity values of the pixels linking the foreground and the background are distinct. The first/second order derivatives of the image intensity are usually used to highlight those pixels (e.g., this is the case of the Sobel and Prewitt edge detectors [4] as first-order methods, and of the Laplace edge detector [1] as a second-order method, resp.). The difference between first- and second-order methods is that the latter can also localize the local displacement and orientation of the boundary. By far the most accurate technique for detecting boundaries and discontinuities in an image is the Canny edge detector [5]. The Canny edge detector is less sensitive to noise than other edge detectors, as it convolves the input image with a Gaussian filter, the result being a slightly blurred version of the input image. This method is also very easy to implement. However, it is still sensitive to noise and often leads to a segmentation result characterized by a discontinuous detection of the object boundaries.
The third category of methods is called region-based segmentation [6]. Region-based segmentation techniques divide an image into subsets based on the assumption that all neighboring pixels within one subset have a similar value of some feature, for example, the image intensity. Region growing [7] is the most popular region-based segmentation technique. In region growing, one has to identify at first a set of seeds as initial representatives of the subsets. Then, the features of each pixel are compared to the features of its neighbor(s). If a suitable predefined criterion is satisfied, then the pixel is classified as belonging to the same subset associated with its "most similar" seed. Accordingly, region growing relies on the prior information given by the seeds and on the predefined classification criterion. A second popular region-based segmentation method is region "splitting and merging." In such a method, the input image is first divided into several small regions. Then, a series of splitting and merging operations are performed on the regions, controlled by a suitable predefined criterion. As region-based segmentation is an intensity-based method, the segmentation result in general has a nonsmooth and badly shaped boundary for the segmented object.
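The seed-driven process described above can be made concrete with a minimal sketch (our own illustrative code, with a simple absolute-difference criterion as the "predefined criterion"; a 4-connected breadth-first growth from a single seed):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`: accept 4-connected neighbors whose
    intensity differs from the seed intensity by at most `tol`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(image[seed])          # seed intensity as the region representative
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy image: a bright 3x3 square on a dark background.
img = np.zeros((7, 7)); img[2:5, 2:5] = 100.0
grown = region_grow(img, seed=(3, 3), tol=10.0)
```

Note how the result depends entirely on the seed location and the tolerance: with a different seed or a looser criterion, the recovered subset changes, which is exactly the reliance on prior information mentioned above.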
The fourth category of methods is learning-based segmentation [8]. There are two general strategies for developing learning-based segmentation algorithms, namely, generative learning and discriminative learning. Generative learning [9] utilizes a data set of examples to build a probabilistic model, by finding the best estimate of its parameters for some prespecified parametric form of a probability distribution. One problem with these methods is that the best estimate of the parameters may not provide a satisfactory model, because the parametric model itself may not be correct. Another problem is that the classification/clustering framework associated with a parametric probabilistic model may not provide an accurate description of the data due to the limited number of parameters in the model, even in the case in which its training is well performed. Techniques following the generative approach include K-means [10], the Expectation-Maximization algorithm [11], and Gaussian Mixture Models [12]. Discriminative learning [13, 14] ignores probability and attempts to construct a good decision boundary directly. Such an approach is often extremely successful, especially when no reasonable parametric probabilistic model of the data exists. Discriminative learning assumes that the decision boundary comes from a class of nonparametric solutions and chooses the best element of that class according to a suitable optimality criterion. Techniques following the discriminative approach include Linear Discriminant Analysis [15], Neural Networks [16], and Support Vector Machines [17]. The main problems with the application of these methods to image segmentation are their sensitivity to noise and the discontinuity of the resulting object boundaries.
The last category of methods is energy-based segmentation [18, 19]. This class of methods is based on an energy functional and deals with the segmentation problem as a functional optimization problem, whose goal is to partition the image into regions based on the maximization/minimization of the energy functional. (Loosely speaking, a functional is defined as a function of a function; that is, a functional takes a function as its input argument and returns a scalar.) The most well-known energy-based segmentation techniques are called "active contours" or Active Contour Models (ACMs). The main idea of active contours is to choose an initial contour inside the image domain to be segmented and then make such a contour evolve through a series of shrinking and expanding operations. One advantage of active contours over the aforementioned methods is that topological changes of the objects to be segmented can often be handled implicitly. More importantly, complex shapes can be modeled without the need for prior knowledge about the image. Finally, rich information can be inserted into the energy functional itself (e.g., boundary-based and region-based information).
More specifically, ACMs usually deal with the segmentation problem as an optimization problem, formulated in terms of a suitable "energy" functional, constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. Starting from an initial contour, the optimization is performed iteratively, evolving the current contour with the aim of approximating better and better the actual object boundary (hence the denomination "active contour" models, which is used also for models that evolve the contour but are not based on the explicit minimization of a functional [20]). In order to guide the evolution of the current contour efficiently, ACMs make it possible to integrate various kinds of information inside the energy functional, such as local information (e.g., features based on spatial dependencies among pixels), global information (e.g., features which are not influenced by such spatial dependencies), shape information, prior information, and a posteriori information learned from examples. (Due to the possible lack of precise prior information on the shape of the objects to be segmented, in this respect most ACMs make only the assumption that it is preferable to have a smooth boundary [21]. This goal is achieved by incorporating a suitable regularization term into their energy functionals [19].) As a consequence, depending on the kind of information used, one can divide ACMs into several categories, for example, edge-based ACMs [22–25], global region-based ACMs [26, 27], edge/region-based ACMs [28–30], local region-based ACMs [31–33], and global/local region-based ACMs [34, 35]. In particular, edge-based ACMs make use of an edge detector (in general, the gradient of the image intensity) to stop the evolution of the active contour on the true boundaries of the objects of interest.
Instead, region-based ACMs use, for the same purpose, statistical information about the regions to be segmented (e.g., intensity, texture, and color distribution). Depending on how the active contour is represented, one can also distinguish between parametrized [36] and variational level set-based ACMs [26]. One important advantage of the latter is that they can implicitly handle topological changes of the objects to be segmented.
Although ACMs often provide an effective and efficient means to extract smooth and well-defined contours, trapping into local minima of the energy functional may still occur, because such a functional may be constructed on the basis of simplified assumptions on properties of the images to be segmented (e.g., the assumption of Gaussian intensity distributions for the foreground and the background in the case of the Chan-Vese active contour model [21, 26]). Motivated by this observation and by the specific ability of SOMs to learn, via their topology preservation property [37], information about the edge map of the image (i.e., the set of points obtained by an edge-detection algorithm), a new class of ACMs, named SOM-based ACMs [38, 39], has been proposed with the aim of modeling and controlling effectively the evolution of the active contour by a Self-Organizing Map (SOM), often without relying on an explicit energy functional to be minimized. In this paper, we review some concepts of ACMs with a focus on SOM-based ACMs, illustrating both their strengths and limitations. In particular, we focus on variational level set-based ACMs and SOM-based ACMs, and on their relationship. The paper is a substantial extension of the short survey about SOM-based ACMs that we presented in [40]. A summary of the main strengths and drawbacks of the ACMs presented in the survey is reported in Table 1. Illustrating the motivations for such strengths and drawbacks is the main focus of this paper.

The paper is organized as follows. Section 2 provides a summary of variational level set-based ACMs. In Section 3, we review the state of the art of SOM-based ACMs not used in combination with variational level set methods. Section 4 describes a recent class of SOM-based ACMs combined with such methods. Finally, Section 5 provides some conclusions.
2. Variational Level Set-Based ACMs
To build an active contour, there are mainly two methods. The first one is an explicit or Lagrangian method, which results in parametric active contours, also called “Snakes” from the name of one of the models that use such a kind of parametrization [19]. The second one is an implicit or Eulerian method, which results in geometric active contours, known also as variational level set methods.
In parametric ACMs, the contour C (see Figure 1) is represented as C(s) = (x(s), y(s)), where x(s) and y(s) are functions of the scalar parameter s. A representative parametric ACM is the Snakes model, proposed by Kass et al. [19] (see also [36] for successive developments).
The main drawbacks of parametric ACMs are the frequent occurrence of local minima in the image energy functional to be optimized (which is mainly due to the presence of a gradient energy term inside such a functional), and the fact that topological changes of the objects (e.g., merging and splitting) cannot be handled during the evolution of the contour.
The difference between parametric and geometric (or variational level set-based) Active Contour Models is that in geometric active contours the contour is implemented via a variational level set method. Such a representation was first proposed by Osher and Sethian [41]. In such methods, the contour C (see Figure 2) is implicitly represented by a function φ(x), called "level set function," where x is the pixel location inside the image domain Ω. The contour C is then defined as the zero level set of the function φ, that is, C = {x ∈ Ω : φ(x) = 0}.
A common and simple expression for φ, which is used by most authors, is φ(x) = ρ for x inside the contour C and φ(x) = −ρ for x outside it, where ρ is a positive real number (possibly dependent on x and C, in which case it is denoted by ρ(x)).
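This piecewise-constant ("binary") level set representation is straightforward to construct from a region mask; the following is a minimal illustrative sketch (our own code, with a boolean mask standing in for the interior of the contour):

```python
import numpy as np

def binary_level_set(mask, rho=1.0):
    """Binary level set function: +rho inside the contour, -rho outside.
    The contour itself is recovered as the zero level set, i.e. where
    phi changes sign."""
    return np.where(mask, rho, -rho).astype(float)

# A square foreground region inside a 5x5 domain.
mask = np.zeros((5, 5), dtype=bool); mask[1:4, 1:4] = True
phi = binary_level_set(mask, rho=2.0)
inside = phi > 0   # recovering the region from the level set function
```

The key property is that the contour never needs to be stored explicitly: splitting or merging of regions only changes the sign pattern of φ, which is why topological changes are handled implicitly.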
In the variational level set method, expressing the contour in terms of the level set function φ, the energy functional to be minimized can be expressed as follows: E(φ) = E_inside(φ) + E_outside(φ) + E_contour(φ), where E_inside(φ) and E_outside(φ) are integral energy terms inside and outside the contour, and E_contour(φ) is an integral energy term for the contour itself. More precisely, the three terms are defined as E_inside(φ) = ∫_Ω L(I(x)) H(φ(x)) dx, E_outside(φ) = ∫_Ω L(I(x)) (1 − H(φ(x))) dx, and E_contour(φ) = ∫_Ω δ(φ(x)) |∇φ(x)| dx, where L is a suitable loss function, and H and δ are, respectively, the Heaviside function and the Dirac delta distribution, that is, H(z) = 1 for z ≥ 0, H(z) = 0 for z < 0, and δ(z) = dH(z)/dz.
Accordingly, the evolution of the level set function φ provides the evolution of the contour C. In the variational level set framework, the (local) minimization of the energy functional E(φ) can be obtained by evolving the level set function according to the following Euler-Lagrange Partial Differential Equation (in the following, when writing partial differential equations, in general we do not write explicitly the arguments of the involved functions, which are described either in the text or in the references from which such equations are reported) (PDE): ∂φ/∂t = −∂E(φ)/∂φ, where φ is now considered as a function of both the pixel location x and the time t, and the term ∂E(φ)/∂φ denotes the functional derivative of E with respect to φ (i.e., loosely speaking, the generalization of the gradient to an infinite-dimensional setting). So, (7) represents the application to the present functional optimization problem of an extension to infinite dimension of the classical gradient method for unconstrained optimization. According to the specific kind of PDE (see (7)) that models the contour evolution, variational level set methods can be divided into several categories, such as Global Active Contour Models (GACMs) [42–46], which use global information, and Local Active Contour Models (LACMs) [47–51], which use local information.
2.1. Unsupervised Models
In order to guide the evolution of the current contour efficiently, ACMs make it possible to integrate various kinds of information inside the energy functional, such as local information (e.g., features based on spatial dependencies among pixels), global information (e.g., features that are not influenced by such spatial dependencies), shape information, prior information, and also a posteriori information learned from examples. As a consequence, depending on the kind of information used, one can further divide ACMs into several subcategories, for example, edge-based ACMs [22–25, 52, 53], global region-based ACMs [26, 27, 45, 54, 55], edge/region-based ACMs [28, 30, 56–58], and local region-based ACMs [34, 35, 59–62].
In particular, edge-based ACMs make use of an edge detector (in general, the gradient of the image intensity) to try to stop the evolution of the active contour on the true boundaries of the objects of interest. One of the most popular edge-based active contours is the Geodesic Active Contour (GAC) model [24], which is described in the following.
Geodesic Active Contour (GAC) Model [24]. The level set formulation of the GAC model can be described as follows: ∂φ/∂t = g(|∇I|) |∇φ| (div(∇φ/|∇φ|) + α) + ∇g · ∇φ, where φ is the level set function, ∇ is the gradient operator, div is the divergence operator, α is a "balloon" force term (controlling the rate of expansion of the level set function), and g is an Edge Stopping Function (ESF), defined as follows: g(|∇I|) = 1/(1 + |∇(G_σ * I)|²), where G_σ is a Gaussian kernel function with width σ, * is the convolution operator, and I is the image intensity. Hence, the ESF g provides information related to the gradient of the image intensity.
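The ESF defined above can be sketched directly in NumPy. The code below is an illustrative implementation (our own helper names; Gaussian smoothing is built from separable 1-D convolutions so that no external library is needed): g is close to 1 in flat regions and drops toward 0 on strong edges, which is what halts the contour evolution there.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing via per-row/per-column 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    return out

def edge_stopping_function(img, sigma=1.0):
    """g = 1 / (1 + |grad(G_sigma * I)|^2): ~1 in flat areas, ~0 on edges."""
    smoothed = gaussian_blur(img, sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

# Vertical step edge: g should be much smaller on the step than far from it.
img = np.zeros((20, 20)); img[:, 10:] = 10.0
g = edge_stopping_function(img)
```

Smoothing with G_σ before differentiating is what gives the GAC its partial robustness to noise: without it, isolated noisy pixels would produce spurious low values of g.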
For images with a high level of noise, the presence of the Edge Stopping Function may not be enough to stop the contour evolution at the right boundaries. Motivated by this issue, a novel edge-based ACM has been proposed in [63] with the aim of improving the robustness of the segmentation to noise. This has been achieved by regularizing the Laplacian of the image through an anisotropic diffusion term, which also preserves edge information.
Since edge-based models make use of an edge detector to stop the evolution of the initial guess of the contour on the actual object boundaries, they can handle only images with well-defined edge information. Indeed, when images have ill-defined edges, the evolution of the contour typically does not converge to the true object boundaries.
An alternative solution consists in using statistical information about a region (e.g., intensity, texture, and color) to construct a stopping functional that is able to stop the contour evolution on the boundary between two different regions, as happens in region-based models (see also the survey paper [64] for the recent state of the art of region-based ACMs) [26, 27]. An example of a region-based model is illustrated in the following.
Chan-Vese (CV) Model [26]. The CV model is a well-known representative global region-based ACM (at the time of writing, it is among the most cited ACMs, according to Scopus). After its initialization, the contour in the CV model is evolved iteratively in an unsupervised fashion with the aim of minimizing a suitable energy functional, constructed in such a way that its minimum is achieved in correspondence with a close approximation of the actual boundary between two different regions. The energy functional of the CV model for a scalar-valued image has the expression E_CV(C, c1, c2) = μ Length(C) + ν Area(Ω_1) + λ1 ∫_{Ω_1} |I(x) − c1|² dx + λ2 ∫_{Ω_2} |I(x) − c2|² dx, where C is a contour, I(x) denotes the intensity of the image indexed by the pixel location x in the image domain Ω, μ is a regularization parameter which controls the smoothness of the contour, Ω_1 (foreground) and Ω_2 (background) represent the regions inside and outside the contour, respectively, and ν is another regularization parameter, which penalizes a large area of the foreground. Finally, c1 and c2, which are defined, respectively, as c1 = (∫_{Ω_1} I(x) dx)/|Ω_1| and c2 = (∫_{Ω_2} I(x) dx)/|Ω_2|, represent the mean intensities of the foreground and the background, respectively, and λ1 and λ2 are parameters which control the influence of the two image energy terms, respectively, inside and outside the contour. The functional is constructed in such a way that, when the regions Ω_1 and Ω_2 are smooth and "match" the true foreground and the true background, respectively, E_CV reaches its minimum.
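The CV energy of a candidate segmentation can be evaluated directly, which makes the "minimum at the true boundary" property easy to check numerically. The sketch below is our own illustrative code (not from [26]); the contour is represented as a boolean foreground mask and its length is crudely approximated by the perimeter of the mask:

```python
import numpy as np

def cv_energy(img, mask, mu=0.1, nu=0.0, lam1=1.0, lam2=1.0):
    """Chan-Vese energy of a contour given as a boolean foreground mask."""
    c1 = img[mask].mean() if mask.any() else 0.0        # mean intensity inside
    c2 = img[~mask].mean() if (~mask).any() else 0.0    # mean intensity outside
    fit = lam1 * ((img - c1) ** 2)[mask].sum() \
        + lam2 * ((img - c2) ** 2)[~mask].sum()
    # crude contour-length estimate: sign changes along rows and columns
    m = mask.astype(int)
    length = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    return mu * length + nu * mask.sum() + fit

# Piecewise-constant test image: a bright square on a dark background.
img = np.zeros((10, 10)); img[3:7, 3:7] = 1.0
true_mask = img > 0.5
wrong_mask = np.zeros_like(true_mask); wrong_mask[0:4, 0:4] = True
```

On a piecewise-constant image, the true mask yields zero fitting energy (each region exactly matches its mean), so any misplaced contour has strictly higher energy, which is what drives the iterative evolution.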
Following [65], in the variational level set formulation of (10), the contour C is expressed as the zero level set of an auxiliary function φ: C = {x ∈ Ω : φ(x) = 0}. Note that different functions φ can be chosen to express the same contour C. For instance, denoting by d(x, C) the infimum of the Euclidean distances of the pixel x to the points on the curve C, φ can be chosen as a signed distance function, defined as follows: φ(x) = d(x, C) for x inside C, φ(x) = 0 for x on C, and φ(x) = −d(x, C) for x outside C. This variational level set formulation has the advantage of being able to deal directly with the case of a foreground and a background that are not necessarily connected internally.
After replacing C with φ and highlighting the dependence of c1 and c2 on φ, in the variational level set formulation of the CV model the (local) minimization of the cost (10) is performed by applying the gradient-descent technique in an infinite-dimensional setting (see (7) and also the reference [26]), leading to the following PDE, which describes the evolution of the contour: ∂φ/∂t = δ(φ) [μ div(∇φ/|∇φ|) − ν − λ1 (I − c1)² + λ2 (I − c2)²], where δ is the Dirac generalized function. The first term in μ of (14) keeps the level set function smooth, the second one in ν controls the propagation speed of the evolving contour, while the third and fourth terms in λ1 and λ2 can be interpreted, respectively, as internal and external forces that drive the contour toward the actual object boundary. Then, (14) is solved iteratively in [26] by replacing the Dirac delta δ with a smooth approximation and using a finite difference scheme. Sometimes, a reinitialization step is also performed, in which the current level set function is replaced by its binarization (i.e., for a constant ρ > 0, a level set function of the form (3), representing the same current contour).
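Several regularized versions of the Heaviside and Dirac functions are used in practice; the arctangent-based pair below is one common choice (an assumption here, as [26] and later implementations use more than one variant). The smooth Dirac delta is exactly the derivative of the smooth Heaviside, so the two stay mutually consistent in the discretized PDE:

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Smooth Heaviside: H_eps(z) = 1/2 * (1 + (2/pi) * arctan(z/eps))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac_eps(z, eps=1.0):
    """Derivative of H_eps: delta_eps(z) = (1/pi) * eps / (eps^2 + z^2)."""
    return (eps / np.pi) / (eps**2 + z**2)

z = np.linspace(-5, 5, 1001)
H = heaviside_eps(z)
d = dirac_eps(z)
```

As eps shrinks, δ_eps concentrates around the zero level set, so the update in (14) acts only in a narrow band around the current contour; a larger eps spreads the update over the whole image, which helps avoid some local minima at the cost of precision.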
The CV model can also be derived, in a Maximum Likelihood setting, by making the assumption that the foreground and the background follow Gaussian intensity distributions with the same variance [21]. Then, the model approximates globally the foreground and background intensity distributions by the two scalars c1 and c2, respectively, which are their mean intensities. Similarly, Leventon et al. proposed in [66] to use Gaussian intensity distributions with different variances inside a parametric density estimation method. Also, Tsai et al. in [67] proposed to use instead uniform intensity distributions to model the two intensity distributions. However, such models are known to perform poorly in the case of objects with inhomogeneous intensities [21].
Compared to edgebased models, regionbased models usually perform better in images with blurred edges and are less sensitive to the contour initialization.
Hybrid models that combine the advantages of both edge and region information are able to control the direction of evolution of the contour better than the previously mentioned models. For instance, the Geodesic-Aided Chan-Vese (GACV) model [28] is a popular hybrid model, which includes both region and edge information in its formulation. Another example of a hybrid model is the following one.
Selective Binary and Gaussian Filtering Regularized (SBGFRLS) Model [68]. The SBGFRLS model combines the advantages of both the CV and GAC models. It utilizes the statistical information inside and outside the contour to construct a region-based Signed Pressure Force (SPF) function, which is used in place of the Edge Stopping Function (ESF) used in the GAC model (recall (9)). The SPF function is so called because it tends to make the contour shrink when it is outside the object of interest and expand otherwise. The evolution of the contour in the variational level set formulation of the SBGFRLS model is described by the following PDE: ∂φ/∂t = spf(I(x)) α |∇φ|, where α is a balloon force parameter and the SPF function spf is defined as spf(I(x)) = (I(x) − (c1 + c2)/2) / max_x |I(x) − (c1 + c2)/2|, where c1 and c2 are defined as in the CV model above. One can observe that, compared to the CV model, in (14) the Dirac delta term δ(φ) has been replaced by |∇φ| which, according to [68], has an effective range on the whole image, rather than the small range of the former. Also, the bracket in (14) is replaced by the function spf defined in (16). To regularize the curve, the authors of [68] (following the practice consolidated in other papers, e.g., [22, 68, 69]), rather than relying on the computationally costly curvature term, convolve the level set function with a Gaussian kernel G_σ, that is, φ ← G_σ * φ, where the width σ of the Gaussian has a role similar to the one of μ in (14) of the CV model. The value of σ trades off the sensitivity of the level set function to noise against its ability to flow into the narrow regions of the object to be segmented.
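The sign behavior that gives the SPF function its name is easy to verify numerically. The following is an illustrative sketch (our own code, with the contour again represented by a boolean mask): intensities above the midpoint (c1 + c2)/2 get positive pressure (expansion), intensities below it get negative pressure (shrinkage), and the division normalizes the values to [−1, 1]:

```python
import numpy as np

def spf(img, mask):
    """Signed Pressure Force of the SBGFRLS model:
    spf = (I - (c1 + c2)/2) / max|I - (c1 + c2)/2|."""
    c1 = img[mask].mean()      # mean intensity inside the current contour
    c2 = img[~mask].mean()     # mean intensity outside
    d = img - (c1 + c2) / 2.0
    return d / np.abs(d).max()

# Bright 4x4 object; the current contour (mask) is placed outside the object,
# so the SPF should be positive on the object and negative on the background.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
mask = np.zeros((8, 8), dtype=bool); mask[1:7, 1:7] = True
s = spf(img, mask)
```

Because s is recomputed from the current c1 and c2 at every iteration, the pressure field adapts as the contour moves, shrinking it where it overshoots the object and expanding it where it undershoots.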
Overall, this model is faster and computationally more efficient than the conventional CV model, and performs better, as pointed out in [68]. However, it still has drawbacks similar to those of the CV model, such as its inefficiency in handling images with several intensity levels, its sensitivity to the contour initialization, and its inability to handle images with intensity inhomogeneity (arising, e.g., as an effect of slow variations in object illumination, possibly occurring during the image acquisition process).
In order to deal with images with intensity inhomogeneity, several authors have introduced in the SPF function terms that relate to local and global intensity information [34, 35, 59, 70]. However, these models are still sensitive to contour initialization and additive noise. Furthermore, when the contour is close to the object boundary, the influence of the global intensity force may distract the contour from the real object boundary, leading to object leaking [31], that is, the presence of a final blurred contour.
In general, global models cannot segment successfully objects that are constituted by more than one intensity class. On the other hand, sometimes this is possible by using local models, which rely on local information as their main component in the associated variational level set framework. However, such models are still sensitive to the contour initialization and may lead to object leaking. Some examples of such local regionbased ACMs are illustrated in the following.
Local Binary Fitting (LBF) Model [71]. The evolution of the contour in the LBF model is described by the following PDE: ∂φ/∂t = −δ_ε(φ)(λ1 e1 − λ2 e2) + ν δ_ε(φ) div(∇φ/|∇φ|) + μ (∇²φ − div(∇φ/|∇φ|)), where λ1, λ2, ν, and μ are nonnegative constants, ∇² is the Laplacian operator, and the functions e1 and e2 are defined as follows: e1(x) = ∫_Ω K_σ(y − x) |I(x) − f1(y)|² dy and e2(x) = ∫_Ω K_σ(y − x) |I(x) − f2(y)|² dy, where f1 and f2 are, respectively, internal and external gray-level fitting functions, and K_σ is a Gaussian kernel function of width σ. Also, for ε > 0, δ_ε is a suitable regularized Dirac delta function, defined as follows: δ_ε(z) = (1/π) ε/(ε² + z²). In more detail, the functions f1 and f2 are defined as f1(x) = [K_σ * (H_ε(φ) I)](x) / [K_σ * H_ε(φ)](x) and f2(x) = [K_σ * ((1 − H_ε(φ)) I)](x) / [K_σ * (1 − H_ε(φ))](x).
In general, the LBF model can produce good segmentations of objects with intensity inhomogeneities. Furthermore, it has a better performance than the well-known Piecewise Smooth (PS) model [33, 72] for what concerns segmentation accuracy and computational efficiency. However, the LBF model only takes into account the local gray-level information. Thus, the model is easily trapped into a local minimum of the energy functional, and it is also sensitive to the initial location of the active contour. Finally, oversegmentation problems may occur.
Local Image Fitting (LIF) Energy Model [32]. Zhang et al. proposed in [32] the LIF energy model to insert local image information in their energy functional. The evolution of the contour in the LIF model is described by the following PDE: ∂φ/∂t = (I − I_LFI)(m1 − m2) δ_ε(φ), where the intensity I_LFI of the local fitted image (LFI) is defined as follows: I_LFI = m1 H_ε(φ) + m2 (1 − H_ε(φ)), where m1 and m2 are the average local intensities inside and outside the contour, respectively.
The main idea of this model is to use the local image information to construct an energy functional, which takes into account the difference between the fitted image and the original one to segment an image with intensity inhomogeneities. The complexity analysis and experimental results showed that the LIF model is more efficient than the LBF model, while yielding similar results.
However, the models above are still sensitive to the contour initialization, and to high levels of additive noise. Compared to the two abovementioned models, a model that has shown higher accuracy when handling images with intensity inhomogeneity is the following one.
Local Region-Based Chan-Vese (LRCV) Model [31]. The LRCV model is a natural extension of the already-mentioned Chan-Vese (CV) model. Such an extension is obtained by inserting local intensity information into the objective functional. This is the main feature of the LRCV model, which gives it the capability of handling images with intensity inhomogeneity, which is instead missing in the CV model.
The objective functional of the LRCV model has the expression E_LRCV(C, c1, c2) = λ1 ∫_{Ω_1} |I(x) − c1(x)|² dx + λ2 ∫_{Ω_2} |I(x) − c2(x)|² dx, where c1(x) and c2(x) are functions which represent the local weighted mean intensities of the image around the pixel x, assuming that it belongs, respectively, to the foreground/background: c1(x) = [g_k * (H(φ) I)](x) / [g_k * H(φ)](x) and c2(x) = [g_k * ((1 − H(φ)) I)](x) / [g_k * (1 − H(φ))](x), where g_k is a Gaussian kernel function with width k.
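The local weighted means can be computed with two Gaussian convolutions each (one for the weighted intensity, one for the normalizing weight). The sketch below is our own illustrative code, using a hard indicator H in place of a regularized Heaviside and a small constant to avoid division by zero where a region is locally absent:

```python
import numpy as np

def gauss_blur(a, sigma=2.0):
    """Separable Gaussian convolution (edge-padded), used as the kernel g_k."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    p = np.pad(a, r, mode="edge")
    p = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, p)

def local_means(img, H, sigma=2.0, tiny=1e-8):
    """Local weighted mean intensities of the LRCV model:
    c1 = g_k*(H I) / g_k*H,  c2 = g_k*((1-H) I) / g_k*(1-H)."""
    c1 = gauss_blur(H * img, sigma) / (gauss_blur(H, sigma) + tiny)
    c2 = gauss_blur((1 - H) * img, sigma) / (gauss_blur(1 - H, sigma) + tiny)
    return c1, c2

# Step image: background 0 on the left, foreground 4 on the right.
img = np.zeros((16, 16)); img[:, 8:] = 4.0
H = (img > 2.0).astype(float)     # current foreground indicator
c1, c2 = local_means(img, H)
```

Unlike the global constants c1, c2 of the CV model, c1(x) and c2(x) vary across the image, which is precisely what lets the LRCV model track slowly varying illumination within a single object.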
The evolution of the contour in the LRCV model is described by the following PDE: ∂φ/∂t = δ(φ) [−λ1 (I − c1(x))² + λ2 (I − c2(x))²].
Equation (26) can be solved iteratively by replacing the Dirac delta δ with a smooth approximation and using a finite difference scheme. Moreover, one can also perform a regularization step, in which the current level set function is replaced by its convolution with a Gaussian kernel of suitable width σ.
A drawback of the LRCV model is that it relies only on the local information coming from the current location of the contour, so it is sensitive to the contour initialization.
Locally Statistical Active Contour Model (LSACM) [73]. This model has been proposed with the aims of handling images characterized by intensity inhomogeneity and of being robust to the contour initialization. It can be considered as a generalization of the Local Intensity Clustering (LIC) model proposed in [74], which is applicable to both simultaneous segmentation and bias correction.
The evolution of the level set function in the LSACM model is controlled by a gradient descent formulation involving two functions (one related to the foreground, the other one to the background) having suitable integral representations. Due to this fact, the LSACM model is able to combine the information about the spatial dependencies between pixels belonging to the same class, and yields a soft segmentation. However, like the previous model, this one is also characterized by a high computational cost, in addition to the limitation of relying on a particular probabilistic model.
2.2. Supervised Models
From a machine learning perspective, ACMs for image segmentation can use both supervised and unsupervised information. Both kinds of ACMs rely on parametric and/or nonparametric density estimation methods to approximate the intensity distributions of the subsets to be segmented (e.g., foreground/background). Often, in such models one makes statistical assumptions on the image intensity distribution, and the segmentation problem is solved by a Maximum Likelihood (ML) or Maximum A-Posteriori (MAP) probability approach. For instance, for scalar-valued images, in both parametric/nonparametric region-based ACMs, the objective energy functional has usually an integral form (see, e.g., [75]), whose integrands are expressed in terms of functions having the form e_i(x) = −log p_i(I(x)), i = 1, …, N, where N is the number of objects (subsets) to be segmented. Here, p_i(I(x)) is the conditional probability density of the image intensity I(x), conditioned on the subset Ω_i, so the log-likelihood term log p_i(I(x)) quantifies how likely an image pixel x is to be an element of the subset Ω_i. In the case of supervised ACMs, the models p_i are estimated from a training set, one for each subset Ω_i. Similarly, for a vector-valued image I(x) with d components, the terms have the form e_i(x) = −log p_i(I(x)), where i = 1, …, N.
Now, we briefly discuss some supervised ACMs, which take advantage of the availability of labeled training data. As an example, Lee et al. proposed in [75] a supervised ACM, which is formulated in a parametric form. In the following, we refer to such a model as a Gaussian Mixture Model (GMM) based ACM, since it exploits supervised training examples to estimate the parameters of multivariate Gaussian mixture densities. In such a model, the level set evolution PDE, given in [75] for the case of multispectral images, combines two region terms with a regularization parameter multiplying the average curvature of the level set function.
The two region terms in (30) are then expressed in [75] as Gaussian mixtures of the form $\sum_{j=1}^{N} \pi_j\, G(\mathbf{I}(\mathbf{x});\, \boldsymbol{\mu}_j, \Sigma_j)$, where $N$ is the number of computational units, the $G(\cdot\,; \boldsymbol{\mu}_j, \Sigma_j)$ are Gaussian functions with centers $\boldsymbol{\mu}_j$ and covariance matrices $\Sigma_j$, and the $\pi_j$'s are the coefficients of the linear combination. All the parameters ($N$, $\pi_j$, $\boldsymbol{\mu}_j$, $\Sigma_j$) are then estimated from the training examples. Besides GMM-based ACMs, nonparametric Kernel Density Estimation (KDE) based models with Gaussian computational units have also been proposed in [76, 77] with the same aim. In the case of scalar images, they estimate the foreground/background intensity densities as $p_{in/out}(I(\mathbf{x})) = \frac{1}{|S_{in/out}|} \sum_{\mathbf{x}_j \in S_{in/out}} K_\sigma\big(I(\mathbf{x}) - I(\mathbf{x}_j)\big)$, where the training pixels $\mathbf{x}_j$ belong, respectively, to given sets $S_{in}$ and $S_{out}$ of pixels inside the true foreground/background (with cardinalities $|S_{in}|$ and $|S_{out}|$, resp.), $K_\sigma$ is a Gaussian kernel, and $\sigma$ is its width. Of course, such models can be extended to the case of vector-valued images (in particular, replacing $\sigma$ by a covariance matrix).
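A minimal sketch of the nonparametric density estimate used by such KDE-based models; the sample intensities and kernel width below are illustrative:

```python
import numpy as np

def kde_density(intensity, samples, sigma):
    """Nonparametric Gaussian KDE of p(I | region), built from the
    intensities of supervised training pixels (illustrative sketch)."""
    samples = np.asarray(samples, float)
    z = (np.asarray(intensity, float)[..., None] - samples) / sigma
    k = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return k.mean(axis=-1)  # average of kernels centered at the samples
```

A pixel is then attracted to the region (foreground or background) under whose estimated density its intensity is more likely.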
2.3. Other Variational Level Set-Based ACMs
Supervised Boundary-Based GAC (sBGAC) Model [78]. The sBGAC model is a supervised level set-based ACM, which was proposed by Paragios and Deriche with the aim of providing a boundary-based framework, derived from the GAC, for texture image segmentation. Its main contribution is the connection between the minimization of a GAC objective and a contour propagation method for supervised texture segmentation. However, sBGAC is still limited to boundary-based information, which results in a high sensitivity to the noise and to the initial contour.
Geodesic Active Region Model (GARM) [79]. GARM was proposed with the aim of reducing the sensitivity of sBGAC to the noise and to the contour initialization, by integrating region-based information along with the boundary information. GARM is a supervised texture segmentation ACM implemented by a variational level set method.
The inclusion of supervised examples in ACMs can significantly improve their performance by constructing a Knowledge Base (KB), to be used as a guide in the evolution of the contour. However, state-of-the-art supervised ACMs often make strong statistical assumptions on the image intensity distribution of each subset to be modeled. So, the evolution of the contour is driven by probability models constructed from given reference distributions. Therefore, the applicability of such models is limited by how accurate the probability models are.
3. SOM-Based ACMs
Before discussing SOM-based ACMs, we briefly review the use of SOMs as a tool in pattern recognition (hence, in image segmentation as a particular case).
3.1. Self-Organizing Maps (SOMs)
The SOM [37], which was proposed by Kohonen, is an unsupervised neural network whose neurons concurrently update their weights in a self-organizing manner, in such a way that, during the learning process, the weights of the neurons evolve adaptively into specific detectors of different input patterns. A basic SOM is composed of an input layer, an output layer, and an intermediate connection layer. The input layer contains a unit for each component of the input vector. The output layer consists of neurons that are typically located either on a one-dimensional or a two-dimensional grid, and are fully connected with the units in the input layer. The intermediate connection layer is composed of weights (also called prototypes) connecting the units in the input layer and the neurons in the output layer (in practice, one has one weight vector associated with each output neuron, where the dimension of the weight vector is equal to the dimension of the input). The learning algorithm of the SOM can be summarized by the following steps:
(1) Initialize randomly the weights of the neurons in the output layer, and select a suitable learning rate and neighborhood size around a "winner" neuron.
(2) For each training input vector, find the winner neuron, also called Best Matching Unit (BMU) neuron, using a suitable rule.
(3) Update the weights on the selected neighborhood of the winner neuron.
(4) Repeat Steps (2)-(3) above, selecting another training input vector, until learning is accomplished (i.e., a suitable stopping criterion is satisfied).
More precisely, after its random initialization, the weight $w_i(t)$ of each neuron $i$ is updated at each iteration through the following self-organization learning rule:
$$w_i(t+1) = w_i(t) + \eta(t)\, h_{b,i}(t)\, \big(x(t) - w_i(t)\big), \quad (35)$$
where $x(t)$ is the input of the SOM at time $t$, $\eta(t)$ is a learning rate, and $h_{b,i}(t)$ is a neighborhood kernel around the BMU neuron $b$ (i.e., the neuron whose weight vector is the closest to the input $x(t)$). Both functions $\eta(t)$ and $h_{b,i}(t)$ are designed to be time-decreasing, in order to stabilize the weights for $t$ sufficiently large. Usual choices of the functions above are
$$\eta(t) = \eta_0 \exp\left(-\frac{t}{\tau}\right),$$
where $\eta_0$ is the initial learning rate and $\tau$ is a time constant, and
$$h_{b,i}(t) = \exp\left(-\frac{d^2(b,i)}{2\sigma^2(t)}\right), \quad (36)$$
where $d(b,i)$ is the distance between the neurons $b$ and $i$, and $\sigma(t)$ is a suitable choice for the width of the Gaussian function in (36).
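The learning rule and the decay schedules above can be sketched as follows; the grid size, decay constants, and the shared exponential decay of $\eta(t)$ and $\sigma(t)$ are illustrative choices:

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), n_iters=500,
              eta0=0.5, tau=200.0, sigma0=3.0, seed=0):
    """Minimal 2-D SOM trained with the self-organizing rule
    w_i(t+1) = w_i(t) + eta(t) * h_{b,i}(t) * (x(t) - w_i(t))."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid_shape
    dim = data.shape[1]
    # Step 1: random weight initialization.
    weights = rng.random((n_rows * n_cols, dim))
    # Grid coordinates of each neuron, used by the neighborhood kernel.
    coords = np.array([(r, c) for r in range(n_rows)
                       for c in range(n_cols)], float)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]                     # Step 2: input
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # BMU
        # Time-decreasing learning rate and neighborhood width.
        eta = eta0 * np.exp(-t / tau)
        sigma = sigma0 * np.exp(-t / tau)
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel around BMU
        # Step 3: cooperative weight update.
        weights += eta * h[:, None] * (x - weights)
    return weights
```

After training, the weight vectors act as prototypes arranged so that neighboring neurons on the grid respond to similar inputs, which is the topology preservation property exploited by SOM-based ACMs.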
SOMs have been used extensively for image segmentation, but often not in combination with ACMs [80, 81]. In the following subsection, we briefly review some of the existing SOM-based segmentation models which are not related to ACMs.
3.1.1. SOM-Based Segmentation Models Not Related to ACMs
In [82], a SOM-based clustering technique was used as a thresholding technique for image segmentation. The idea was to feed a SOM with the intensity histogram of the image, so that the SOM divides the histogram into regions. Huang et al. in [83] proposed to use a two-stage SOM system for segmenting multispectral images (specifically, made of three components, or channels). In the first stage, the goal was to identify a large initial set of color classes, while the second stage aimed to identify a final set of segmented clusters. In [84], Jiang et al. used SOMs to segment multispectral images (specifically, made of five components), by clustering the pixels based on their color and on other spatial features. Then, the clustered regions were merged into a predefined number of regions by the application of some morphological operations. Concluding, SOMs have been extensively used in the field of segmentation, and, as stated in [85–90], the SOM-based segmentation models proposed in the literature yielded improved segmentation results compared to the direct application of the classical SOM.
Although SOMs are traditionally associated with unsupervised learning, supervised SOMs also exist in the literature. A representative model of a supervised SOM is the Concurrent Self-Organizing Map (CSOM) [91], which combines several SOMs to deal with the pattern classification problem (hence, with the image segmentation problem as a particular case) in a parallel processing way, with the aim of minimizing a suitable objective function, usually the quantization error of the maps. In a CSOM, each SOM is constructed and trained individually on a subset of examples coming only from its associated class. The aim of this training is to increase the discriminative capability of the system. So, the training of the CSOM is supervised for what concerns the assignment of the training examples to the various SOMs, but each individual SOM is trained with the SOM-specific self-organizing (hence, unsupervised) learning rule.
We conclude by mentioning that, when SOMs are used as supervised/unsupervised image segmentation techniques, the application of the resulting model usually produces segmented objects characterized by disconnected boundaries, and the segmentation result is often sensitive to the noise.
3.1.2. SOM-Based Segmentation Models Related to ACMs
In order to improve the robustness of edge-based ACMs to blur and to ill-defined edge information, SOMs have also been used in combination with ACMs, with the explicit aim of modelling the active contour and controlling its evolution, adopting a learning scheme similar to Kohonen's learning algorithm [37], resulting in SOM-based ACMs [38, 39] (which belong, in the case of [38, 39], to the class of edge-based ACMs). The evolution of the active contour in a SOM-based ACM is guided by the feature space constructed by the SOM when learning the weights associated with the neurons of the map. Other kinds of neural networks have also been used with the aim of approximating the edge map: for example, multilayer perceptrons [92]. One reason to prefer SOMs over other neural network models is the specific ability of SOMs to learn, for example, the edge-map information via their topology preservation property. A review of SOM-based ACMs belonging to the class of edge-based ACMs is provided in the two following subsections, whereas Section 4 presents a more recent class of SOM-based ACMs combined with variational level set methods.
3.2. An Example of a SOM-Based ACM Belonging to the Class of Edge-Based ACMs
The basic idea of existing SOM-based ACMs belonging to the class of edge-based ACMs is to model and implement the active contour using a SOM, relying in the training phase on the edge map of the image to update the weights of the neurons of the SOM, and consequently to control the evolution of the active contour. The points of the edge map act as inputs to the network, which is trained in an unsupervised way (in the sense that no supervised examples belonging, respectively, to the foreground/background are provided). As a result, during training the weights associated with the neurons in the output map move toward the points belonging to the nearest salient contour. In the following, we illustrate the general ideas of using a SOM to model the active contour, by describing a classical example of a SOM-based ACM belonging to the class of edge-based ACMs, which was proposed in [38] by Venkatesh and Rishikesh.
Spatial Isomorphism Self-Organizing Map (SISOM) Based ACM [38]. This is the first SOM-based ACM which appeared in the literature. It was proposed with the aim of localizing the salient contours in an image, using a SOM to model the evolving contour. The SOM is composed of a fixed number of neurons (and consequently a fixed number of "knots" or control points for the evolving curve) and has a fixed structure. The model requires a rough approximation of the true boundary as an initial contour. Its SOM network is constructed and trained in an unsupervised way, based on the initial contour and the edge-map information. The contour evolution is controlled by the edge information extracted from the image by an edge detector. The main steps of the SISOM-based ACM can be summarized as follows:
(1) Construct the edge map of the image to be segmented.
(2) Initialize the contour to enclose the object of interest in the image.
(3) Obtain the horizontal and vertical coordinates of the edge points, to be presented as inputs to the network.
(4) Construct a SOM with a number of neurons equal to the number of edge points of the initial contour, and two scalar weights associated with each neuron; the points on the initial contour are used to initialize the SOM weights.
(5) Repeat the following steps for a fixed number of iterations:
(a) Select randomly an edge point, and feed its coordinates to the network.
(b) Determine the best-matching neuron.
(c) Update the weights of the neurons in the network by the classical unsupervised learning scheme of the SOM [37], which is composed of a competitive phase and a cooperative one.
(d) Compute a neighborhood parameter for the contour, according to the updated weights and a threshold.
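The steps above can be sketched as a one-dimensional ring SOM whose weights are the contour points, fed with randomly selected edge points. This is a simplified sketch; the linear decay schedules and parameter values are illustrative assumptions, not those of [38]:

```python
import numpy as np

def evolve_contour(contour, edge_points, n_iters=1500,
                   eta0=0.2, sigma0=2.0, seed=0):
    """Edge-map-driven SOM contour: one neuron per contour point,
    arranged on a ring, trained on edge-point coordinates."""
    rng = np.random.default_rng(seed)
    w = np.array(contour, float)   # weights = current contour points
    n = len(w)
    idx = np.arange(n)
    for t in range(n_iters):
        x = edge_points[rng.integers(len(edge_points))]  # random edge point
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # competitive phase
        eta = eta0 * (1.0 - t / n_iters)                 # decreasing rate
        sigma = max(sigma0 * (1.0 - t / n_iters), 0.5)
        # Ring (circular) distance between neuron indices.
        d = np.minimum(np.abs(idx - bmu), n - np.abs(idx - bmu))
        h = np.exp(-d ** 2 / (2 * sigma ** 2))
        w += eta * h[:, None] * (x - w)                  # cooperative phase
    return w
```

Feeding edge points of a circle to a smaller concentric initial contour makes the ring of prototypes expand toward the circle while preserving the ordering of the control points.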
Figure 3 illustrates the evolution procedure of the SISOM-based ACM. On the left-hand side of the figure, the neurons of the map are represented by gray circles, while the black circle represents the winner neuron associated with the current input to the map (in this case, the gray circle on the right-hand side of the figure, which is connected by the gray segments to all the neurons of the map). On the right-hand side, instead, the positions of the white circles represent the initial prototypes of the neurons, whereas the positions of the black circles represent their final values, at the end of learning. The evolution of the contour is controlled by the learning algorithm above, which guides the evolution of the prototypes of the neurons of the SOM (hence, of the active contour), using the points of the edge map as inputs to the SOM learning algorithm. As a result, the final contour is represented by a series of prototypes of neurons located near the actual boundary of the object to be segmented.
We conclude by mentioning that, in order to produce good segmentations, the SISOMbased ACM requires the initial contour (which is used to initialize the prototypes of the neurons) to be very close to the true boundary of the object to be extracted, and the points of the initial contour have to be assigned to the neurons of the SOM in a suitable order: if such assumptions are satisfied, then the contour extraction process performed by the model is generally robust to the noise. Moreover, differently from other ACMs, this model does not require a particular energy functional to be optimized.
3.3. Other SOM-Based ACMs Belonging to the Class of Edge-Based ACMs
In this subsection, we describe other SOM-based ACMs also belonging to the class of edge-based ACMs, and highlight their advantages and disadvantages.
Time Adaptive Self-Organizing Map (TASOM) Based ACM [39]. The TASOM-based ACM was proposed by Shah-Hosseini and Safabakhsh as a development of the SISOM-based ACM, with the aim of inserting neurons incrementally into the SOM map, or deleting them incrementally, thus determining automatically the required number of control points of the extracted contour. The addition and deletion processes are based on the closeness of any two adjacent neurons $i$ and $j$. More precisely, if the distance between the corresponding weights $w_i$ and $w_j$ is smaller than a given threshold $\theta_1$, then the two neurons are merged, whereas a new neuron is inserted between the two neurons if that distance is larger than another given threshold $\theta_2$. Moreover, at each time $t$, each neuron $i$ is provided with its specific dynamic learning rate $\eta_i(t)$, defined in (37) as a sigmoid-like function of the distance between the input $x(t)$ at time $t$ and the weight $w_i(t)$; its definition involves a constant, a positive constant which controls the slope of the sigmoid, and a suitable scaling function, which makes the SOM network invariant to scaling transformations. Finally, at each time $t$, each neuron is also associated with a neighborhood function, which has the form of (36).
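The insertion/deletion step can be sketched as follows; the threshold values and the merge-to-midpoint rule are illustrative assumptions:

```python
import numpy as np

def adapt_neurons(weights, theta_merge=0.05, theta_split=0.5):
    """TASOM-style topology adaptation on a closed contour: merge
    adjacent neurons closer than theta_merge, insert a midpoint neuron
    between adjacent neurons farther apart than theta_split."""
    w = [np.asarray(p, float) for p in weights]
    out = []
    n = len(w)
    skip = False
    for i in range(n):
        if skip:              # this neuron was merged into the previous one
            skip = False
            continue
        a, b = w[i], w[(i + 1) % n]   # adjacent pair on the closed contour
        d = np.linalg.norm(a - b)
        if d < theta_merge:
            out.append((a + b) / 2)   # merge the two close neurons
            skip = True
        else:
            out.append(a)
            if d > theta_split:
                out.append((a + b) / 2)  # insert a new neuron between them
    return np.array(out)
```

This keeps the density of control points roughly proportional to the local spacing of the contour, so the number of neurons need not be fixed in advance as in the SISOM-based ACM.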
The TASOM-based ACM can overcome one of the main limitations of the SISOM-based ACM, that is, its sensitivity to the contour initialization, in the sense that, for a successful segmentation, the initial guess of the contour in the TASOM-based ACM can even be far from the actual object boundary. As in the case of the SISOM-based ACM, however, topological changes of the objects (e.g., splitting and merging) cannot be handled by the TASOM-based ACM, since both models rely completely on the edge information (instead of on regional information) to drive the contour evolution.
Batch Self-Organizing Map (BSOM) Based ACM [20, 93]. This model is a modification of the TASOM-based ACM, and was proposed by Venkatesh et al. with the aim of dealing better with the leaking problem (i.e., the presence of a final blurred contour), which often occurs when handling images with ill-defined edges. Such a problem is due to the explicit use, by the TASOM-based ACM, of only edge information to model and control the evolution of the contour. The BSOM-based ACM, instead, relies on the fact that the image intensity variation inside a local region can be used to increase the robustness of the model during the movements of the contour. As a consequence, the BSOM-based ACM associates a region boundary term with each neuron $i$, in order to better control the movements of the neurons. Such a term is defined as
$$R_i = \sum_{k=1}^{M} \operatorname{sgn}\big(I(v^{out}_{i,k}) - I(v^{in}_{i,k})\big), \quad (38)$$
where $M$ is the number of neighborhood points of the neuron $i$ that are taken into account for the local analysis of the region boundary, $I$ is the image intensity function, sgn is the signum function, and $v^{out}_{i,k}$, $v^{in}_{i,k}$ are suitable neighborhood points of the neuron $i$, outside and inside the contour, respectively. Now, the sign of the difference in (38) between the image intensities at the points $v^{out}_{i,k}$ and $v^{in}_{i,k}$ should be the same for all $k$ if the neuron $i$ is near a true region boundary. In this way, the robustness of the model in handling images with blurred edges is increased. At the same time, the BSOM-based ACM is less sensitive to the initial guess of the contour, when compared to parametric ACMs like Snakes, and to the SOM-based ACMs described above. However, like all such models, the BSOM-based ACM does not have the ability to handle topological changes of the objects to be segmented. An extension of the BSOM-based ACM was proposed in [94, 95] and applied therein to the segmentation of pupil images. Such a modified version of the basic BSOM-based ACM increases the smoothness of the extracted contour, and prevents the extracted contour from extending over the true boundaries of the object.
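A sketch of the sign-consistency idea behind the region boundary term; the paired outside/inside neighborhood points are supplied explicitly here for illustration:

```python
import numpy as np

def boundary_consistency(image, pts_out, pts_in):
    """Normalized version of the BSOM region-boundary idea: the signs
    of the intensity differences between paired outside/inside points
    agree (value close to 1) only near a true region boundary."""
    diffs = [image[r1, c1] - image[r2, c2]
             for (r1, c1), (r2, c2) in zip(pts_out, pts_in)]
    return abs(np.mean(np.sign(diffs)))  # 1 = fully consistent signs
```

On a step image, point pairs straddling the intensity step give a consistent sign (value 1), while pairs inside a homogeneous region do not, which is exactly the cue the BSOM-based ACM uses to slow neurons down near true boundaries.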
Fast Time Adaptive Self-Organizing Map (FTASOM) Based ACM [96]. This is another modification of the TASOM-based ACM, and it was proposed by Izadi and Safabakhsh with the aim of decreasing the computational complexity of the method, by using an adaptive speed parameter instead of the fixed one used in [39]. Such an adaptive speed parameter was also proposed with the aim of increasing the speed of convergence and the accuracy. The FTASOM-based ACM is also based on the observation that choosing the learning rates of the prototypes of the neurons of the SOM in such a way that they are large when the prototypes are far from the boundary, and small when they are near the boundary, can lead to a significant increase in the convergence speed of the active contour. Accordingly, in each iteration, the FTASOM-based ACM finds the minimum distance of each neuron from the boundary, then it sets the associated learning rate as a fraction of that distance.
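The distance-proportional learning rate can be sketched as follows; the fraction and the cap are illustrative assumptions, not the values used in [96]:

```python
import numpy as np

def adaptive_rate(dist, frac=0.05, eta_max=0.5):
    """FTASOM-style rule: the learning rate is a fraction of the
    neuron's distance to the boundary, capped at eta_max, so neurons
    far from the boundary move fast and nearby ones move slowly."""
    return np.minimum(frac * np.asarray(dist, float), eta_max)
```

Because distant neurons take large steps while neurons already near the boundary take small ones, the contour converges in fewer iterations without overshooting the boundary.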
Coarse to Fine Boundary Location Self-Organizing Map (CFBLSOM) Based ACM [97]. The above SOM-based ACMs work in an unsupervised way, as the user is required only to provide an initial contour, which is then evolved automatically. In [97], Zeng et al. proposed the CFBLSOM-based ACM as the first supervised SOM-based ACM, that is, a model in which the user is allowed to provide supervised points (supervised "seeds") from the desired boundaries. Starting from this coarse information, the SOM neurons are then employed to evolve the contour toward the desired boundaries in a "coarse-to-fine" approach. So, an advantage of the CFBLSOM-based ACM over the SOM-based ACMs described above is that it allows one to integrate prior knowledge about the desired boundaries of the objects to be segmented, which comes from the user interaction with the SOM-based ACM segmentation framework. When compared to such SOM-based ACMs, this property provides the CFBLSOM-based ACM with the ability to handle objects with more complex shapes, inhomogeneous intensity distributions, and weak boundaries.
Figure 4 illustrates the evolution procedure of the CFBLSOM-based ACM, which is similar to that of the SISOM-based ACM. The only difference is represented by the dashed circles, which are used as supervised pixels to increase the robustness of the model to the initialization of the contour. For this reason, for a successful segmentation, the white circles in Figure 4 can be initialized even far away from the actual boundary of the object, differently from Figure 3. Finally, due to the presence of the supervision, this method also allows one to handle more complex images.
Conscience, Archiving and Mean-Movement Mechanisms Self-Organizing Map (CAMSOM) Based ACM [98]. The CAMSOM-based ACM was proposed by Sadeghi et al. as an extension of the BSOM-based ACM, obtained by introducing three mechanisms called Conscience, Archiving, and Mean-movement. The main achievement of the CAMSOM-based ACM is to allow more complex boundaries (such as concave boundaries) to be captured, and to provide a reduction of the computational cost. By the Conscience mechanism, the neurons are not allowed to "win" too frequently, which makes the capture of more complex boundaries possible. The Archiving mechanism allows a significant reduction in the computational cost: by such a mechanism, neurons whose prototypes are close to the boundary of the object to be segmented, and whose values have not changed significantly in the last iterations, are archived and eliminated from subsequent computations. Finally, in order to ensure a continuous movement of the active contour toward concave regions, the Mean-movement mechanism is used in each epoch to force the winner neuron to move toward the mean of a set of feature points, instead of a single feature point. Together, the Conscience and Mean-movement mechanisms prevent the contour evolution from stopping at the entrance of object concavities.
Extracting Multiple Objects. The main limitation of many of the SOM-based ACMs reviewed above is their inability to detect multiple contours and to recognize multiple objects. A similar problem arises in parametric ACMs such as Snakes. To deal with the multiple contour extraction problem, Venkatesh et al. proposed in [93] the use of a splitting criterion. However, if the initial contour is outside the objects, contours inside an object still cannot be extracted by using such a criterion. Sadeghi et al. proposed in [98] another splitting criterion (to be checked at each epoch), such that the main contour can be divided into several subcontours whenever the criterion is satisfied. The process is repeated until each of the subcontours encloses one single object. However, the merging process is still not handled implicitly by the model, which reduces its scope, especially when handling images containing multiple objects in the presence of noise or ill-defined edges. Moreover, Ma et al. proposed in [99] the use of a SOM to classify the edge elements in the image. This model relies first on detecting the boundaries of the objects. Then, for each edge pixel, a feature vector is extracted and normalized. Finally, a SOM is used as a clustering tool to detect the object boundaries when the feature vectors are supplied as inputs to the map. As a result, multiple contours can be recognized. However, the model shares the limitations of other models that use a SOM as a clustering tool for image segmentation [80, 100, 101], resulting in disconnected boundaries and a high sensitivity to the presence of noise.
4. SOM-Based ACMs Combined with Variational Level Set Methods
Recently, a new class of SOM-based ACMs combined with variational level set methods has been proposed in [102–105], with the aim of taking advantage of both SOMs and variational level set methods, in order to handle images presenting challenges in computer vision in an efficient, effective, and robust way. In this section, we describe the main contributions of such approaches, by comparing them with the above-mentioned active contour models.
Concurrent Self-Organizing Map-Based Chan-Vese (CSOM-CV) Model. CSOM-CV [102] is a novel regional ACM, which relies on a CSOM made of two SOMs to approximate the foreground and background image intensity distributions in a supervised fashion, and to drive the evolution of the active contour accordingly. The model integrates such information inside the framework of the Chan-Vese (CV) model, hence the name Concurrent Self-Organizing Map-based Chan-Vese (CSOM-CV) model. The main idea of the CSOM-CV model is to concurrently integrate the global information extracted by a CSOM from a few supervised pixels into the level set framework of the CV model, to build an effective ACM. The proposed model integrates the advantages of the CSOM as a powerful classification tool, and of the CV model as an effective tool for the optimization of a global energy functional. The evolution of the contour in the CSOM-CV model (which is a variational level set method) is driven by two energy terms $e_{in}$ and $e_{out}$, which determine the forces acting inside and outside the contour, respectively. They are defined, respectively, as $e_{in}(\mathbf{x}) = \big(I(\mathbf{x}) - w_{in}\big)^2$ and $e_{out}(\mathbf{x}) = \big(I(\mathbf{x}) - w_{out}\big)^2$, where $w_{in}$ is the prototype of the neuron of the first SOM that is the BMU neuron to the mean intensity inside the current contour, while $w_{out}$ is the prototype of the neuron of the second SOM that is the BMU neuron to the mean intensity outside it.
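The online update can be sketched as follows: the BMU prototypes for the mean intensities inside/outside the current contour define the two energy terms, whose difference acts as the regional force. This is a simplified sketch; the Dirac and regularization factors of the full level set PDE are omitted:

```python
import numpy as np

def csom_cv_force(image, phi, fg_prototypes, bg_prototypes):
    """Regional force of a CSOM-CV-style update (simplified): select
    the BMU prototype of each trained SOM for the mean intensities
    inside/outside the contour, then compare the two energy terms."""
    inside = phi > 0
    m_in = image[inside].mean()          # mean intensity inside the contour
    m_out = image[~inside].mean()        # mean intensity outside it
    # BMU selection: prototype closest to each mean intensity.
    w_in = fg_prototypes[np.argmin(np.abs(fg_prototypes - m_in))]
    w_out = bg_prototypes[np.argmin(np.abs(bg_prototypes - m_out))]
    e_in = (image - w_in) ** 2           # foreground energy term
    e_out = (image - w_out) ** 2         # background energy term
    return e_out - e_in  # positive where a pixel fits the foreground better
```

Pixels where the force is positive are pulled inside the contour and vice versa, so the zero level set settles where the two BMU prototypes explain the data equally well.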
Figure 5 illustrates the offline (i.e., training) and online sessions of the CSOM-CV model. In the offline session, the foreground supervised pixels are represented in light gray, while the background ones are represented in dark gray. The first SOM is trained using the intensity of the foreground supervised pixels, whereas the second one is trained using the intensity of the background supervised pixels. In such a session, the neurons of the two SOMs are arranged in such a way that the topological structures of the foreground and background intensity distributions are preserved. Finally, in the online session, the learned prototypes of the "foreground" and "background" neurons associated, respectively, with the two SOMs (represented in light and dark gray, resp., in Figure 5) are used implicitly to control the evolution of the contour toward the true object boundary.
Self-Organizing Active Contour (SOAC) Model [103]. Like the CSOM-CV model, the SOAC model also combines a variational level set method with the prototypes associated with the neurons of a SOM, which are learned during the offline phase. Then, in the online phase, the contour evolution is implicitly controlled by the minimization of the quantization error of the organized neurons. The SOAC model can handle images with multiple intensity classes, intensity inhomogeneity, and complex distributions with a complicated foreground and background overlap. Compared to CSOM-CV, the SOAC model makes the following important improvement: its regional descriptors (which are used in a similar way as the corresponding ones in (4) and (41), resp.) depend on the pixel location $\mathbf{x}$, while CSOM-CV uses regional descriptors which are constant functions. So, CSOM-CV is a global ACM (i.e., the spatial dependencies of the pixels are not taken into account in such a model, since it considers only the average intensities inside and outside the contour), whereas the SOAC model also makes use of local information, which provides it with the ability to handle more complex images. Finally, the experimental results reported in [103] have shown the higher accuracy of the segmentation results obtained by SOAC on several synthetic and real images, compared to some well-known ACMs.
SOM-Based Chan-Vese (SOM-CV) Model [104]. This model is similar to the CSOM-CV model, with the difference that its training is completely unsupervised, differently from the two previous models. As for the CSOM-CV model, the prototypes of the trained neurons encode global intensity information in this case, too. The SOM-CV model can handle images with many intensity levels and complex intensity distributions, and it is robust to additive noise. Experimental results reported in [104] have shown the higher accuracy of the segmentation results obtained by the SOM-CV model on several synthetic and real images, when compared to the CV active contour model. A significant difference with respect to the CSOM-CV model is that the intervention of the final user is significantly reduced in the SOM-CV model, since no supervised information is used. Finally, SOM-CV has a Self-Organizing Topology Preservation (SOTP) property, which allows it to preserve the topological structures of the foreground/background intensity distributions during the active contour evolution. Indeed, SOM-CV relies on a set of self-organized neurons, automatically extracting the prototypes of selected neurons as global regional descriptors and iteratively integrating them, in an unsupervised way, in the evolution of the contour.
SOM-Based Regional Active Contour (SOM-RAC) Model [105]. Finally, like the SOM-CV model, the SOM-RAC model also relies on the global information coming from selected prototypes associated with a SOM, which is trained offline in an unsupervised way to model the intensity distribution of an image, and used online to segment an identical or similar image. In order to improve the robustness of the model, global and local information are combined in the online phase, differently from the three models above. The main motivation for the SOM-RAC model is to deal with the sensitivity of local ACMs to the contour initialization (which arises, e.g., when intensity inhomogeneity and additive noise occur in the images), through the combination of global and local information by a SOM-based approach. Indeed, global information plays an important role in improving the robustness of ACMs against the contour initialization and the additive noise but, if used alone, it is usually not sufficient to handle images containing intensity inhomogeneity. On the other hand, local information allows one to deal effectively with the intensity inhomogeneity but, if used alone, it usually produces ACMs that are very sensitive to the contour initialization. The SOM-RAC model combines both kinds of information, relying on global regional descriptors (i.e., suitably selected weights of a trained SOM) selected on the basis of local regional descriptors (i.e., the local weighted mean intensities). In this way, the SOM-RAC model is able to integrate the advantages of global and local ACMs by means of a SOM.
5. Conclusions and Future Research Directions
In this paper, a survey has been provided about the current state of the art of Active Contour Models (ACMs), with an emphasis on variational level set-based ACMs, Self-Organizing Map (SOM) based ACMs, and their relationships (see Figure 6).
Variational level set-based ACMs have been proposed in the literature with the aim of handling implicitly topological changes of the objects to be segmented. However, such methods usually get trapped into local minima of the energy functional to be minimized. SOM-based ACMs have then been proposed with the aims of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of reducing the occurrence of local minima of the functional to be minimized, which is also typical of parametric ACMs such as Snakes. This is partly due to the fact that such SOM-based ACMs do not rely on an explicit gradient energy term. Although SOM-based ACMs belonging to the class of edge-based ACMs can effectively outperform other ACMs in handling complex images, most of such SOM-based ACMs are still sensitive to the contour initialization, compared to variational level set-based ACMs, especially when handling complex images with ill-defined edges. Moreover, such SOM-based ACMs do not usually have the ability to handle topological changes of the objects. For this reason, we have concluded the paper by presenting a recently proposed class of SOM-based ACMs, which takes advantage of both SOMs and variational level set methods, with the aims of preserving topologically the intensity distribution of the foreground and background in a supervised/unsupervised way and, at the same time, of allowing topological changes of the objects to be handled implicitly.
Among future research directions, we mention the following: the possibility of combining, inside SOM-based ACMs, other advantages of variational level set methods in handling topological changes, in order to obtain a new class of models able to handle topological changes implicitly and, at the same time, to avoid getting trapped into local minima; the development of more sophisticated supervised/semi-supervised SOM-based ACMs based, for example, on the use of Concurrent Self-Organizing Maps (CSOMs) [91], relying on region-based information (e.g., local/global statistical information about the intensity, texture, and color distributions) to guide the evolution of the active contour in a more robust way; the possibility of extending current SOM-based ACMs in such a way that the underlying neurons are incrementally added/removed in an automatic way, and suitably trained, with the aims of overcoming the limitation of manually adapting the topology of the network, and of reducing the sensitivity of the model to the choice of its parameters; the inclusion of other kinds of prior information (e.g., shape information) in the models reviewed in the paper, with the aim of handling complex images presenting challenging problems such as occlusion; and possible further developments of the machine-learning components of the reviewed models from a streaming-learning perspective, which could lead to a better understanding of video contents through real-time segmentations. Such developments could be obtained by integrating streaming-learning algorithms into the segmentation framework of SOM-based ACMs.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] S. Raut, M. Raghuvanshi, R. Dharaskar, and A. Raut, “Image segmentation—a state-of-art survey for prediction,” in Proceedings of the IEEE International Conference on Advanced Computer Control (ICACC '09), pp. 420–424, IEEE, Singapore, January 2009.
[2] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[3] P.-S. Liao, T.-S. Chen, and P.-C. Chung, “A fast algorithm for multilevel thresholding,” Journal of Information Science and Engineering, vol. 17, no. 5, pp. 713–727, 2001.
[4] Z. Musoromy, S. Ramalingam, and N. Bekooy, “Edge detection comparison for license plate detection,” in Proceedings of the 11th International Conference on Control, Automation, Robotics and Vision (ICARCV '10), pp. 1133–1138, December 2010.
[5] J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[6] I. Karoui, R. Fablet, J.-M. Boucher, and J.-M. Augustin, “Variational region-based segmentation using multiple texture statistics,” IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3146–3156, 2010.
[7] R. Adams and L. Bischof, “Seeded region growing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641–647, 1994.
[8] M. P. Pathegama and O. Gol, “Edge-end pixel extraction for edge-based image segmentation,” International Journal of Computer, Information, Systems and Control Engineering, vol. 1, no. 2, pp. 434–437, 2007.
[9] L. E. Baum and T. Petrie, “Statistical inference for probabilistic functions of finite state Markov chains,” The Annals of Mathematical Statistics, vol. 37, pp. 1554–1563, 1966.
[10] J. B. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297, 1967.
[11] D. M. Titterington, A. F. Smith, and U. E. Makov, Statistical Analysis of Finite Mixture Distributions, vol. 198 of Wiley Series in Probability and Mathematical Statistics, Wiley, New York, NY, USA, 1985.
[12] A. Declercq and J. H. Piater, “Online learning of Gaussian mixture models—a two-level approach,” in Proceedings of the 3rd International Conference on Computer Vision Theory and Applications (VISAPP '08), pp. 605–611, January 2008.
[13] P. Singla and P. Domingos, “Discriminative training of Markov logic networks,” in Proceedings of the 20th National Conference on Artificial Intelligence, pp. 868–873, AAAI Press, July 2005.
[14] A. Y. Ng and M. I. Jordan, “On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes,” in Advances in Neural Information Processing Systems 14 (NIPS '01), 2001.
[15] W. H. Highleyman, “Linear decision functions, with application to pattern recognition,” Proceedings of the IRE, vol. 50, pp. 1501–1514, 1962.
[16] W. S. McCulloch and W. Pitts, Neurocomputing: Foundations of Research, MIT Press, 1988.
[17] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pp. 144–152, July 1992.
[18] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1222–1239, 2001.
[19] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
[20] Y. V. Venkatesh, S. K. Raja, and N. Ramya, “A novel SOM-based approach for active contour modeling,” in Proceedings of the Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP '04), pp. 229–234, December 2004.
[21] S. Chen and R. J. Radke, “Level set segmentation with both shape and intensity priors,” in Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV '09), pp. 763–770, IEEE, Kyoto, Japan, October 2009.
[22] G. Zhu, S. Zhang, Q. Zeng, and C. Wang, “Boundary-based image segmentation using binary level set method,” Optical Engineering, vol. 46, no. 5, Article ID 050501, 2007.
[23] W. Kim and C. Kim, “Active contours driven by the salient edge energy model,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1667–1673, 2013.
[24] V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
[25] S. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi Jr., “Conformal curvature flows: from phase transitions to active vision,” Archive for Rational Mechanics and Analysis, vol. 134, no. 3, pp. 275–301, 1996.
[26] T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
[27] M. F. Talu, “ORACM: online region-based active contour model,” Expert Systems with Applications, vol. 40, no. 16, pp. 6233–6240, 2013.
[28] L. Chen, Y. Zhou, Y. Wang, and J. Yang, “GACV: geodesic-aided C-V method,” Pattern Recognition, vol. 39, no. 7, pp. 1391–1395, 2006.
[29] M. M. Abdelsamea and S. A. Tsaftaris, “Active contour model driven by globally signed region pressure force,” in Proceedings of the 18th International Conference on Digital Signal Processing (DSP '13), pp. 1–6, July 2013.
[30] Y. Tian, F. Duan, M. Zhou, and Z. Wu, “Active contour model combining region and edge information,” Machine Vision and Applications, vol. 24, no. 1, pp. 47–61, 2013.
[31] S. Liu and Y. Peng, “A local region-based Chan-Vese model for image segmentation,” Pattern Recognition, vol. 45, no. 7, pp. 2769–2779, 2012.
[32] K. Zhang, H. Song, and L. Zhang, “Active contours driven by local image fitting energy,” Pattern Recognition, vol. 43, no. 4, pp. 1199–1206, 2010.
[33] A. Tsai, A. Yezzi, and A. S. Willsky, “Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1169–1186, 2001.
[34] P. Wang, K. Sun, and Z. Chen, “Local and global intensity information integrated geodesic model for image segmentation,” in Proceedings of the International Conference on Computer Science and Electronics Engineering (ICCSEE '12), vol. 2, pp. 129–132, IEEE, Hangzhou, China, March 2012.
[35] T.-T. Tran, V.-T. Pham, Y.-J. Chiu, and K.-K. Shyu, “Active contour with selective local or global segmentation for intensity inhomogeneous image,” in Proceedings of the 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT '10), vol. 1, pp. 306–310, July 2010.
[36] C. Xu and J. L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 359–369, 1998.
[37] T. Kohonen, “Essentials of the self-organizing map,” Neural Networks, vol. 37, pp. 52–65, 2013.
[38] Y. V. Venkatesh and N. Rishikesh, “Self-organizing neural networks based on spatial isomorphism for active contour modeling,” Pattern Recognition, vol. 33, no. 7, pp. 1239–1250, 2000.
[39] H. Shah-Hosseini and R. Safabakhsh, “A TASOM-based algorithm for active contour modeling,” Pattern Recognition Letters, vol. 24, no. 9–10, pp. 1361–1373, 2003.
[40] M. M. Abdelsamea, G. Gnecco, and M. M. Gaber, “A survey of SOM-based active contours for image segmentation,” in Advances in Self-Organizing Maps and Learning Vector Quantization: Proceedings of the 10th International Workshop, WSOM 2014, Mittweida, Germany, July 2–4, vol. 295 of Advances in Intelligent Systems and Computing, pp. 293–302, Springer, Berlin, Germany, 2014.
[41] S. Osher and J. A. Sethian, “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations,” Journal of Computational Physics, vol. 79, no. 1, pp. 12–49, 1988.
[42] X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher, “Fast global minimization of the active contour/snake model,” Journal of Mathematical Imaging and Vision, vol. 28, no. 2, pp. 151–167, 2007.
[43] L. D. Cohen and R. Kimmel, “Global minimum for active contour models: a minimal path approach,” International Journal of Computer Vision, vol. 24, no. 1, pp. 57–78, 1997.
[44] L. D. Cohen and I. Cohen, “Finite-element methods for active contour models and balloons for 2-D and 3-D images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1131–1147, 1993.
[45] A. Yezzi Jr., A. Tsai, and A. Willsky, “A fully global approach to image segmentation via coupled curve evolution equations,” Journal of Visual Communication and Image Representation, vol. 13, no. 1–2, pp. 195–216, 2002.
[46] A. Myronenko and X. B. Song, “Global active contour-based image segmentation via probability alignment,” in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 2798–2804, June 2009.
[47] Y. Gan and Q. Zhao, “An effective defect inspection method for LCD using active contour model,” IEEE Transactions on Instrumentation and Measurement, vol. 62, no. 9, pp. 2438–2445, 2013.
[48] H. Yu, L. Li, W. Xu, and W. Liu, “A multiscale approach to mass segmentation using active contour models,” in Medical Imaging: Image Processing, vol. 7623 of Proceedings of SPIE, p. 8, February 2010.
[49] A. Vard, K. Jamshidi, and N. Movahhedinia, “Small object detection in cluttered image using a correlation based active contour model,” Pattern Recognition Letters, vol. 33, no. 5, pp. 543–553, 2012.
[50] Y.-M. Cheung, X. Liu, and X. You, “A local region based approach to lip tracking,” Pattern Recognition, vol. 45, no. 9, pp. 3336–3347, 2012.
[51] Y. Gu, V. Kumar, L. O. Hall et al., “Automated delineation of lung tumors from CT images using a single click ensemble segmentation approach,” Pattern Recognition, vol. 46, no. 3, pp. 692–702, 2013.
[52] P. Marquez-Neila, L. Baumela, and L. Alvarez, “A morphological approach to curvature-based evolution of curves and surfaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, pp. 2–17, 2014.
[53] W. Wang, L. Zhu, J. Qin, Y.-P. Chui, B. N. Li, and P.-A. Heng, “Multiscale geodesic active contours for ultrasound image segmentation using speckle reducing anisotropic diffusion,” Optics and Lasers in Engineering, vol. 54, pp. 105–116, 2014.
[54] Z. Li, Z. Liu, and W. Shi, “A fast level set algorithm for building roof recognition from high spatial resolution panchromatic images,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 4, pp. 743–747, 2014.
[55] W. Wen, C. He, and M. Li, “Transition region-based active contour model for image segmentation,” Journal of Electronic Imaging, vol. 22, no. 1, Article ID 013021, 2013.
[56] J. Yin and J. Yang, “A modified level set approach for segmentation of multiband polarimetric SAR images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 11, pp. 7222–7232, 2014.
[57] V. Estellers, D. Zosso, X. Bresson, and J.-P. Thiran, “Harmonic active contours,” IEEE Transactions on Image Processing, vol. 23, no. 1, pp. 69–82, 2014.
[58] B. Wang, X. Gao, D. Tao, and X. Li, “A nonlinear adaptive level set for image segmentation,” IEEE Transactions on Cybernetics, vol. 44, no. 3, pp. 418–428, 2014.
[59] U. Vovk, F. Pernuš, and B. Likar, “A review of methods for correction of intensity inhomogeneity in MRI,” IEEE Transactions on Medical Imaging, vol. 26, no. 3, pp. 405–421, 2007.
[60] S. Balla-Arabe, X. Gao, B. Wang, F. Yang, and V. Brost, “Multikernel implicit curve evolution for selected texture region segmentation in VHR satellite images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 8, pp. 5183–5192, 2014.
[61] Q. Zheng, E. Dong, Z. Cao, W. Sun, and Z. Li, “Active contour model driven by linear speed function for local segmentation with robust initialization and applications in MR brain images,” Signal Processing, vol. 97, pp. 117–133, 2014.
[62] X. Xie, C. Wang, A. Zhang, and X. Meng, “A robust level set method based on local statistical information for noisy image segmentation,” Optik, vol. 125, no. 9, pp. 2199–2204, 2014.
[63] H. Song, “Active contours driven by regularised gradient flux flows for image segmentation,” Electronics Letters, vol. 50, no. 14, pp. 992–994, 2014.
[64] F. Lecellier, S. Jehan-Besson, and J. Fadili, “Statistical region-based active contours for segmentation: an overview,” IRBM, vol. 35, no. 1, pp. 3–10, 2014.
[65] H.-K. Zhao, T. Chan, B. Merriman, and S. Osher, “A variational level set approach to multiphase motion,” Journal of Computational Physics, vol. 127, no. 1, pp. 179–195, 1996.
[66] M. E. Leventon, W. E. L. Grimson, and O. Faugeras, “Statistical shape influence in geodesic active contours,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), pp. 316–323, June 2000.
[67] A. Tsai, A. Yezzi Jr., W. Wells et al., “A shape-based approach to the segmentation of medical imagery using level sets,” IEEE Transactions on Medical Imaging, vol. 22, no. 2, pp. 137–154, 2003.
[68] K. Zhang, L. Zhang, H. Song, and W. Zhou, “Active contours with selective local or global segmentation: a new formulation and level set method,” Image and Vision Computing, vol. 28, no. 4, pp. 668–676, 2010.
[69] Y. Shi and W. C. Karl, “Real-time tracking using level sets,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 34–41, June 2005.
[70] L. Wang, C. Li, Q. Sun, D. Xia, and C.-Y. Kao, “Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation,” Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 520–531, 2009.
[71] C. Li, C.-Y. Kao, J. C. Gore, and Z. Ding, “Implicit active contours driven by local binary fitting energy,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, June 2007.
[72] L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” International Journal of Computer Vision, vol. 50, no. 3, pp. 271–293, 2002.
[73] K. Zhang, L. Zhang, K.-M. Lam, and D. Zhang, “A level set approach to image segmentation with intensity inhomogeneity,” IEEE Transactions on Cybernetics, 2015.
[74] K. Zhang, Q. Liu, H. Song, and X. Li, “A variational approach to simultaneous image segmentation and bias correction,” IEEE Transactions on Cybernetics, no. 99, 2014.
[75] C. P. Lee, W. Snyder, and C. Wang, “Supervised multispectral image segmentation using active contours,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '05), pp. 4242–4247, April 2005.
[76] D. Cremers, S. J. Osher, and S. Soatto, “Kernel density estimation and intrinsic alignment for knowledge-driven segmentation: teaching level sets to walk,” International Journal of Computer Vision, vol. 69, no. 3, pp. 335–351, 2004.
[77] D. Cremers and M. Rousson, “Efficient kernel density estimation of shape and intensity priors for level set segmentation,” in Deformable Models, Topics in Biomedical Engineering International Book Series, pp. 447–460, Springer, New York, NY, USA, 2007.
[78] N. Paragios and R. Deriche, “Geodesic active contours for supervised texture segmentation,” in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 2422–2427, June 1999.
[79] N. Paragios and R. Deriche, “Geodesic active regions and level set methods for supervised texture segmentation,” International Journal of Computer Vision, vol. 46, no. 3, pp. 223–247, 2002.
[80] S. R. Vantaram and E. Saber, “Survey of contemporary trends in color image segmentation,” Journal of Electronic Imaging, vol. 21, no. 4, Article ID 040901, 28 pages, 2012.
[81] S. Skakun, “A neural network approach to flood mapping using satellite imagery,” Computing and Informatics, vol. 29, no. 6, pp. 1013–1024, 2010.
[82] J. Lázaro, J. Arias, J. L. Martín, A. Zuloaga, and C. Cuadrado, “SOM segmentation of gray scale images for optical recognition,” Pattern Recognition Letters, vol. 27, no. 16, pp. 1991–1997, 2006.
[83] H.-Y. Huang, Y.-S. Chen, and W.-H. Hsu, “Color image segmentation using a self-organizing map algorithm,” Journal of Electronic Imaging, vol. 11, no. 2, pp. 136–148, 2002.
[84] Y. Jiang, K. Chen, and Z. Zhou, “SOM based image segmentation,” in Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, G. Wang, Q. Liu, Y. Yao, and A. Skowron, Eds., vol. 2639 of Lecture Notes in Computer Science, pp. 640–643, Springer, Berlin, Germany, 2003.
[85] G. Dong and M. Xie, “Color clustering and learning for image segmentation based on neural networks,” IEEE Transactions on Neural Networks, vol. 16, no. 4, pp. 925–936, 2005.
[86] N. C. Yeo, K. H. Lee, Y. V. Venkatesh, and S. H. Ong, “Colour image segmentation using the self-organizing map and adaptive resonance theory,” Image and Vision Computing, vol. 23, no. 12, pp. 1060–1079, 2005.
[87] A. R. F. Araújo and D. C. Costa, “Local adaptive receptive field self-organizing map for image color segmentation,” Image and Vision Computing, vol. 27, no. 9, pp. 1229–1239, 2009.
[88] A. Soria-Frisch, “Unsupervised construction of fuzzy measures through self-organizing feature maps and its application in color image segmentation,” International Journal of Approximate Reasoning, vol. 41, no. 1, pp. 23–42, 2006.
[89] M. Y. Kiang, M. Y. Hu, and D. M. Fisher, “An extended self-organizing map network for market segmentation—a telecommunication example,” Decision Support Systems, vol. 42, no. 1, pp. 36–47, 2006.
[90] D. Tian and L. Fan, “A brain MR images segmentation method based on SOM neural network,” in Proceedings of the 1st International Conference on Bioinformatics and Biomedical Engineering (ICBBE '07), pp. 686–689, IEEE, Wuhan, China, July 2007.
[91] V.-E. Neagoe and A.-D. Ropot, “Concurrent self-organizing maps for pattern classification,” in Proceedings of the 1st IEEE International Conference on Cognitive Informatics (ICCI '02), pp. 304–312, Calgary, Canada, 2002.
[92] I. Middleton and R. I. Damper, “Segmentation of magnetic resonance images using a combination of neural networks and active contour models,” Medical Engineering and Physics, vol. 26, no. 1, pp. 71–86, 2004.
[93] Y. V. Venkatesh, S. K. Raja, and N. Ramya, “Multiple contour extraction from gray-level images using an artificial neural network,” IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 892–899, 2006.
[94] G. S. Vasconcelos, C. A. C. M. Bastos, T. I. Ren, and G. D. C. Cavalcanti, “BSOM network for pupil segmentation,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '11), pp. 2704–2709, August 2011.
[95] C. A. C. M. Bastos, I. R. Tsang, G. S. Vasconcelos, and G. D. C. Cavalcanti, “Pupil segmentation using pulling & pushing and BSOM neural network,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '12), pp. 2359–2364, October 2012.
[96] M. Izadi and R. Safabakhsh, “An improved time-adaptive self-organizing map for high-speed shape modeling,” Pattern Recognition, vol. 42, no. 7, pp. 1361–1370, 2009.
[97] D. Zeng, Z. Zhou, and S. Xie, “Coarse-to-fine boundary location with a SOM-like method,” IEEE Transactions on Neural Networks, vol. 21, no. 3, pp. 481–493, 2010.
[98] F. Sadeghi, H. Izadinia, and R. Safabakhsh, “A new active contour model based on the conscience, archiving and mean-movement mechanisms and the SOM,” Pattern Recognition Letters, vol. 32, no. 12, pp. 1622–1634, 2011.
[99] Y. Ma, X. Gu, and Y. Wang, “Contour detection based on self-organizing feature clustering,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), vol. 2, pp. 221–226, August 2007.
[100] W.-G. Teng and P.-L. Chang, “Identifying regions of interest in medical images using self-organizing maps,” Journal of Medical Systems, vol. 36, no. 5, pp. 2761–2768, 2012.
[101] Z. Yang, Z. Bai, J. Wu, and Y. Chen, “Target region location based on texture analysis and active contour model,” Transactions of Tianjin University, vol. 15, no. 3, pp. 157–161, 2009.
[102] M. M. Abdelsamea, G. Gnecco, and M. M. Gaber, “A concurrent SOM-based Chan-Vese model for image segmentation,” in Advances in Intelligent Systems and Computing, vol. 295, pp. 199–208, Springer, Berlin, Germany, 2014.
[103] M. M. Abdelsamea, G. Gnecco, and M. M. Gaber, “An efficient Self-Organizing Active Contour model for image segmentation,” Neurocomputing, vol. 149, pp. 820–835, 2015.
[104] M. M. Abdelsamea, G. Gnecco, and M. M. Gaber, “A SOM-based Chan-Vese model for unsupervised image segmentation,” IMT Technical Report, 2014.
[105] M. M. Abdelsamea and G. Gnecco, “Robust local-global SOM-based ACM,” Electronics Letters, vol. 51, no. 2, pp. 142–143, 2015.
Copyright
Copyright © 2015 Mohammed M. Abdelsamea et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.