Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 638563, 12 pages
Research Article

Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors

1Communications Engineering Department, University of Malaga, 29004 Malaga, Spain
2Department of Signal Theory, Communications and Networking, University of Granada, 18060 Granada, Spain

Received 12 February 2013; Accepted 15 April 2013

Academic Editor: Anke Meyer-Baese

Copyright © 2013 Andrés Ortiz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65,536 intensity levels. Thus, image processing techniques which aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of partitioning the image to delimit the different neuroanatomical tissues present in the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of the partial volume effect (PVE) and has been assessed using real brain images from the Internet Brain Segmentation Repository (IBSR).

1. Introduction

Recent advances in medical imaging systems make it possible to acquire high resolution images with high tissue contrast. Moreover, these systems provide images of up to 16-bit depth, corresponding to 65,536 intensity levels. On the other hand, the human visual system is not able to distinguish more than a few tens of gray levels. Thus, image processing techniques are necessary to exploit the information contained in medical images so that they can be successfully used in CAD systems. In addition, computer-aided tools can analyze the volume image in a reasonable amount of time. These are valuable tools for diagnosing neurological disorders such as schizophrenia, multiple sclerosis, Alzheimer's disease [1], or other types of dementia. Image segmentation consists of parcelling or delimiting the image into different regions according to some properties or features describing those regions. In brain magnetic resonance imaging (MRI), segmentation consists of delimiting the neuroanatomical tissues present in a healthy brain: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). All nonrecognized tissues or fluids may be classified as suspected of being pathological. The segmentation process can be addressed in two ways. The first consists of manual delineation of the structures, usually performed by experts; the second uses automatic or semiautomatic techniques based on statistical features that describe the different regions of the image. Some of these techniques use the image histogram to define the different tissues by applying a threshold, under the assumption that a tissue is characterized by an intensity level or by intensity levels within an interval [2, 3]. In the ideal case, three different image intensities should be found in the histogram, corresponding to GM, WM, and CSF, assuming the resolution to be high enough to ensure that each voxel represents a single tissue type.
Nevertheless, variations in the contrast of the same tissue are found in an image due to RF noise or shading effects caused by magnetic field variations. These variations, which affect the tissue homogeneity in the image, are a source of errors for automatic segmentation methods. Other approaches model the intensity histogram as probability distributions [4–6] or by a set of model vectors computed by vector quantization techniques [7, 8], reducing the segmentation problem to modelling the peaks and valleys present in the histogram. There are other histogram-based segmentation approaches that take into account the relative position of the peaks and valleys or other statistics extracted from the histogram [8–10]. However, histogram-based techniques usually do not take into account the spatial information contained in the image, and different images may have similar histogram profiles. On the other hand, segmentation has been addressed in other works by means of contour detection techniques [11, 12], region-based techniques [13], or other approaches that search for voxels belonging to an initial class following specific geometrical models [14]. Segmentation may also be addressed as a classification task, which can be accomplished by supervised or unsupervised learning. Clustering techniques group similar voxels in an unsupervised way, according to a similarity criterion [15], while statistical classifiers may use the expectation-maximization (EM) algorithm [12, 16, 17], maximum likelihood (ML) estimation, or Markov random fields [18]. In addition, fuzzy variants of the k-means algorithm have also been widely used, as they avoid abrupt transitions in the classification process [19] and address the PVE issue (i.e., voxels can contain signal from several tissues at the same time due to limited image resolution). In this paper we propose a segmentation method based on first and second order statistical features extracted from the image.
There are many segmentation approaches designed for 2D-MRI data. However, as MRI is 3D in nature, we use 3D statistical features extracted from overlapping cubes moving through the image to accomplish a 3D segmentation approach. Moreover, local and nonlocal statistical descriptors extracted from the image are modelled in an unsupervised way using a self-organizing map (SOM), computing a reduced number of prototypes that represent all the voxels in the image. In addition, the degree to which a voxel is modelled by an SOM prototype is computed by clustering the SOM units with the FCM algorithm [20]. After this introduction, the rest of the paper is organized as follows. Section 2 presents the database used to evaluate our proposal and introduces the image preprocessing stage and the main techniques used, such as SOM and FCM for modelling the data and clustering the SOM, respectively. Section 3 describes the segmentation approach proposed in this paper, and the results obtained using the images from the IBSR database are presented in Section 4. Finally, conclusions are drawn in Section 5.

2. Materials and Methods

This section consists of seven subsections which explain in detail the segmentation method presented in this paper, summarized in the block diagram shown in Figure 1. Moreover, the database used to evaluate the performance of the proposed algorithm and the metric applied for quantitative assessment of the results are also described in the following subsections.

Figure 1: Block diagram of the segmentation method.

2.1. Databases

The performance of our proposal has been evaluated, in comparison with other methods, using the Internet Brain Segmentation Repository (IBSR) from the Massachusetts General Hospital [23]. This database provides 18 T1-weighted volumetric images of 256 × 256 × 128 voxels, with voxel dimensions between 0.84 × 0.84 × 1.5 mm³ and 1 × 1 × 1.5 mm³, corresponding to subjects with ages between 7 and 71 years. The images are spatially normalized into the Talairach orientation and processed by the Center for Morphometric Analysis (CMA) at Massachusetts General Hospital with the biasfield autoseg routines for nonuniformity correction. In addition, the IBSR database also provides manual segmentation references performed by expert radiologists. These segmented images were used as a reference to test our approach, as is usual in other works using the IBSR 2.0 database [21, 22].

2.2. Image Preprocessing

Although the images in the database are already registered, they contain nonbrain structures such as the scalp and skull, which have to be removed before segmentation. In our case, nonbrain structures were removed using the BET 2.0 tool [24] from the FSL package [25], running two iterations on every subject. We ran BET twice on each subject since nonbrain material remains in the MRI after one iteration, while three iterations tend to remove parts of the brain; thus, running BET twice was found to be optimal. Figure 1 shows the entire process, from the original images to the segmented ones.

2.3. Feature Extraction

Unlike other MRI segmentation approaches which use 2D statistical descriptors [22], we use 3D statistical descriptors extracted from overlapping cubes slid across the volumetric image. A similar method to extract 3D features is applied in [26] for 3D computerized tomography (CT) images. These descriptors include first and second order features, as well as (local) histogram features computed from each cube. First order features are the intensity level of the central voxel and the mean and variance of the intensity of the voxels in the window. Second order statistics aim to describe the texture of the image, as they take into account the relationships among voxels in a window. In this way, Haralick et al. [27] proposed 14 features computed using the gray level cooccurrence matrix (GLCM), which is a structure that describes the cooccurring intensity values at a given offset. In other words, the GLCM provides information on how often a gray level occurs at different directions. Usually, four directions are considered in the 2D case: 0°, 45°, 90°, and 135°. However, Haralick suggests using the mean value of the features computed for the four directions to guarantee rotation invariance. Moreover, the symmetric GLCM (i.e., taking into account voxels separated by d and −d) is a common choice in image analysis [27]. The structure of the 2D-GLCM is shown in Figure 2, where each entry is the number of cooccurrences of gray levels i and j at a distance d in a specific direction. Thus, the GLCM is a square matrix of size Ng × Ng, where Ng is the total number of gray levels in the window, so that entry (i, j) represents the number of cooccurrences of gray levels i and j for voxels separated by a distance d in a given direction.
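As a concrete illustration of this structure, the following sketch builds a symmetric 2D GLCM for a single offset; the function name and the toy image are illustrative, not taken from the paper.

```python
import numpy as np

def glcm_2d(img, offset, levels):
    """Symmetric 2D gray level cooccurrence matrix for one offset.

    img    : 2D integer array with values in [0, levels).
    offset : (dy, dx) displacement between cooccurring pixels.
    """
    dy, dx = offset
    h, w = img.shape
    glcm = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[img[y, x], img[y2, x2]] += 1
    # Symmetric GLCM: each pair is counted for both d and -d.
    return glcm + glcm.T

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
g = glcm_2d(img, (0, 1), levels=3)  # 0 degree direction, distance 1
```

Averaging the features computed from the 0°, 45°, 90°, and 135° GLCMs then gives the rotation-invariant descriptors Haralick suggests.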

Figure 2: 2D gray level cooccurrence matrix (GLCM) computation. The main directions (0°, 45°, 90°, and 135°) are used, and the average value is associated with the central voxel.

In the case of the 3D-GLCM, cubes instead of 2D square windows have to be considered, and the windowing process is performed by computing the slices contained in each window. The choice of the window size plays an important role in the classification process, as it may determine the discrimination capability of the extracted features. Small windows reduce the computational burden and preserve resolution, but they may not be able to capture the texture. On the other hand, large windows capture textural properties, but they increase memory and processing requirements and may result in resolution loss. For this reason, the window size is chosen as a trade-off between performance and resolution. This process is depicted in Figure 3.

Figure 3: (a) 3D overlapped window extraction process. Note that overlapping is shown as color mixture. (b) Cube vectorizing example. The cube is split into slices depending on the z coordinate, and the values in these slices are stacked into a column vector.

While four independent directions exist in 2D for GLCM calculation, 13 independent directions are found in 3D, and the GLCM computation can be generalized as

C_Δ(i, j) = Σ_{x,y,z} [I(x, y, z) = i and I(x + Δx, y + Δy, z + Δz) = j],  i, j = 1, …, Ng,  (1)

where Ng is the number of gray levels present in the image or subimage considered for GLCM calculation, (x, y, z) is the position of the voxel, and Δ = (Δx, Δy, Δz) contains the distances in each direction. Figure 4 shows a calculation example of the 3D GLCM for one direction and Ng gray levels.

Figure 4: 3D GLCM calculation in one direction. Different slices correspond to different values of the z coordinate. Arrows show the relationship between voxels for computing the cooccurrence value in the direction indicated.

In this way, the 3D GLCM is computed through an offset (Δx, Δy, Δz), where Δx and Δy correspond to the 2D offset (as in the 2D GLCM) and Δz indicates the displacement in the z coordinate. As the offset can be applied along the x, y, or z axes, there are 27 possible offsets (26 excluding the null offset). However, due to symmetry only 13 independent directions remain, as indicated in (1). This yields 13 3D-GLCMs computed from each cube.
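The 13 independent directions can be enumerated by keeping exactly one offset from each opposite pair {d, −d}, for example the lexicographically positive one; a brief sketch (the enumeration strategy is ours, the count comes from the text):

```python
import itertools

# All 26 nonzero offsets towards the neighbours of a voxel.
offsets = [d for d in itertools.product((-1, 0, 1), repeat=3)
           if d != (0, 0, 0)]

# Keep the offset whose first nonzero component is positive,
# discarding its mirror image -d: 13 independent directions remain.
independent = [d for d in offsets if d > (0, 0, 0)]
```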

Regarding implementation details, cubes are vectorized as shown in Figure 3 to speed up the process in MATLAB [28]. Image vectorization converts the 3D image into a matrix containing one column per extracted cube. Computations with these structures are considerably faster in MATLAB.
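The paper's MATLAB code is not given; as a rough NumPy analogue of the vectorization step, each overlapping cube can be flattened into one column of a matrix (the function name, stride, and toy volume are illustrative):

```python
import numpy as np

def extract_cubes(vol, size, step):
    """Collect overlapping cubes from a 3D volume and vectorize each
    one into a column, yielding a (size**3, n_cubes) matrix."""
    cols = []
    for z in range(0, vol.shape[0] - size + 1, step):
        for y in range(0, vol.shape[1] - size + 1, step):
            for x in range(0, vol.shape[2] - size + 1, step):
                cube = vol[z:z + size, y:y + size, x:x + size]
                cols.append(cube.ravel())  # slice-by-slice flattening
    return np.stack(cols, axis=1)

vol = np.arange(4 ** 3).reshape(4, 4, 4)
M = extract_cubes(vol, size=3, step=1)  # 2 * 2 * 2 = 8 overlapping cubes
```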

Once the 3D GLCM has been defined, Haralick's textural features can be computed as in the 2D case, but using the 3D GLCM defined in (1). Mathematical details on the Haralick textural features used in this work are provided in the appendix and can also be found in [8, 26, 27].

In addition to the 3D Haralick features, we extract local histogram-based features from each 3D window. These features include the maximum probability, mean, variance, skewness, entropy, energy, and kurtosis of the local intensity histogram [15].
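A sketch of how these histogram descriptors might be computed for one window; the function name and the 256-level assumption are ours, and the moment definitions are the standard ones:

```python
import numpy as np

def histogram_features(window, levels=256):
    """First order / histogram descriptors of a local 3D window."""
    v = window.ravel().astype(float)
    p = np.bincount(window.ravel(), minlength=levels) / v.size
    mean, var = v.mean(), v.var()
    std = np.sqrt(var) if var > 0 else 1.0  # avoid 0/0 on flat windows
    nz = p[p > 0]
    return {
        "max_prob": p.max(),
        "mean": mean,
        "variance": var,
        "skewness": np.mean(((v - mean) / std) ** 3),
        "kurtosis": np.mean(((v - mean) / std) ** 4),
        "entropy": -np.sum(nz * np.log2(nz)),
        "energy": np.sum(p ** 2),
    }
```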

Moreover, intensity probability in terms of the entire image is also included in the feature set. Thus, the entire feature set extracted from the image is summarized in Table 2.

The features computed from each window (cube) are associated with its central voxel, which is thus described by 23 features (i.e., the feature space is composed of 23-dimensional feature vectors).

2.4. Background in SOM

The self-organizing map (SOM) [29] is a well-known bioinspired clustering algorithm which aims to discover the most representative and most economical representation of the data and its relationships [29, 30]. An SOM consists of a number of units arranged in a two- or three-dimensional lattice, each of them storing a prototype vector. During the training stage, the prototypes retain the most representative part of the input data, while units on the output space holding similar prototypes (in terms of Euclidean distance) are moved closer together. In other words, units on the output space are close when their prototypes are similar, and units far apart on the output space hold different prototypes. Thus, some important features of the input space can be inferred from the output space [30].

(1) Input space modelling: the prototypes ω_i computed during the SOM training provide an approximation to the input space, as each prototype models a part of it.
(2) Topological order: units on the output map are arranged into a 2D or 3D lattice, and their positions depend on specific features of the input space.
(3) Density distribution: the SOM reveals statistical variations in the distribution of the input data. A higher density on the output space corresponds to a higher density on the input space.
(4) Feature selection: the prototypes computed from the input data space represent the data manifold. Thus, the algorithm reduces the input space to a set of prototype vectors.

The process mentioned above is performed in a competitive way, in which only one neuron wins for each input data instance (i.e., the one whose prototype vector is most similar to that instance). Let the SOM units be linearly indexed. The best matching unit (BMU) for input x_k is computed as

b(x_k) = arg min_{i ∈ Ω} ||x_k − ω_i||,  (2)

where Ω is the output space of an SOM composed of M units and x_k is the kth input. Moreover, the prototypes of units belonging to the neighbourhood of the winning unit (the BMU) are also updated according to

ω_i(t + 1) = ω_i(t) + α(t) h_{bi}(t) (x_k − ω_i(t)),  (3)

where α(t) is the learning factor and h_{bi}(t) is the neighbourhood function defining the units surrounding the BMU b. Both α(t) and the width of h_{bi}(t) decrease exponentially with t. Thus, the prototype vectors quantize the data manifold and represent the cluster centers of the data mapped on each BMU.
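The competitive loop can be sketched as follows, assuming a 2D lattice, a Gaussian neighbourhood function, and exponential decay of both the learning factor and the neighbourhood radius; all parameter values here are illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), iters=500, alpha0=0.5, sigma0=3.0):
    """Minimal SOM: prototypes on a 2D lattice, updated towards each
    sample with an exponentially shrinking Gaussian neighbourhood."""
    rows, cols = grid
    units = np.array([(r, c) for r in range(rows) for c in range(cols)],
                     dtype=float)                      # lattice positions
    proto = rng.random((rows * cols, data.shape[1]))   # prototype vectors
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(proto - x, axis=1))  # winning unit
        alpha = alpha0 * np.exp(-t / iters)     # learning factor decay
        sigma = sigma0 * np.exp(-t / iters)     # neighbourhood radius decay
        d2 = np.sum((units - units[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian neighbourhood
        proto += alpha * h[:, None] * (x - proto)
    return proto
```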

2.5. SOM Clustering Using FCM

The SOM can be seen as a clustering method, as it quantizes the feature space by a number of prototypes, and each prototype can be considered the most representative vector of a class. In addition, the prototypes are projected onto a two- or three-dimensional space while topology (i.e., the distribution of SOM units in the projection space) is preserved. In that sense, the SOM assumes that each map unit acts as a single cluster. In this work, the input space is composed of feature vectors whose coordinates each represent a different feature, as presented in Section 2.3. Thus, each voxel is represented by a 23-dimensional vector, and the SOM is used to group these vectors (i.e., voxels) into different classes. In this way, the SOM performs hard clustering over the feature vectors which describe the image voxels, and the cluster a voxel belongs to is represented by the BMU corresponding to its feature vector. In other words, the SOM quantizes the feature space. However, this simple model that considers each unit as a different cluster does not exploit valuable information contained in the SOM related to its topology preservation property (i.e., nearby units in the map model similar data [29]). Thus, the SOM provides extra advantages over classical clustering algorithms if more than one unit represents the same class, and a range of SOM units act as BMUs for a subset of the data manifold. This adds flexibility to the clustering algorithm and allows computing a set of prototype vectors for the same class. Nevertheless, since each cluster can be prototyped by a set of model vectors, grouping the SOM units is necessary to define cluster borders [8]. SOM clustering can be addressed by specific clustering algorithms such as CONN linkage [31, 32], which implements a hierarchical agglomerative technique using the topological information contained in the map and the relationships between the SOM layer and the data manifold to build clusters.
Nevertheless, taking into account the membership probability of a voxel to a cluster requires fuzzy or probabilistic clustering techniques. Hence, we used the fuzzy c-means (FCM) algorithm, as a voxel can belong to more than one cluster at the same time according to a certain membership measure [20, 33, 34]. FCM for the SOM can be formulated as follows. Let x_k be the feature vector representing the kth voxel of the image and ω_i the prototype associated with the ith unit on the SOM. An objective function can be defined as

J_m = Σ_{k=1}^{N} Σ_{i=1}^{M} u_{ik}^m ||x_k − c_i||²,  (4)

where N is the number of data samples (voxels), M is the number of SOM units, and u_{ik} is the membership function defined as

u_{ik} = 1 / Σ_{j=1}^{M} (||x_k − c_i|| / ||x_k − c_j||)^{2/(m−1)},  (5)

where c_i is the center of cluster i, defined as

c_i = Σ_{k=1}^{N} u_{ik}^m x_k / Σ_{k=1}^{N} u_{ik}^m.  (6)

Fuzzy clustering is carried out by optimizing the objective function (4) until

max_{i,k} |u_{ik}^{(t+1)} − u_{ik}^{(t)}| < ε,  (7)

where ε ∈ (0, 1) is a termination threshold and t represents the iteration step. This iterative procedure converges to a local minimum of J_m [33].
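The alternating update of memberships and centers can be sketched as follows; the initialization, the fuzzifier m = 2, and the tolerance are illustrative choices, not the paper's settings.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, eps=1e-5, max_iter=300, seed=0):
    """Fuzzy c-means: alternate center and membership updates until
    the largest membership change falls below eps."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]            # cluster centers
        D = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m - 1))
        U_new = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (2 / (m - 1)),
                             axis=2)
        converged = np.max(np.abs(U_new - U)) < eps
        U = U_new
        if converged:
            break
    return U, C
```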

Once the SOM is trained and clustered, each voxel (described by its corresponding feature vector) is mapped to a cluster, so that it belongs to a specific tissue with a given probability. Figure 5 shows the membership value assigned to each SOM unit for the three clusters (WM, GM, and CSF), and Figure 6 shows the projection of the SOM prototypes in 3D, where the units have been colored according to the maximum membership criterion using the probabilities computed by FCM clustering. However, this method amounts to hard clustering and does not take into account the partial volume effect (PVE) [22, 35, 36]. In order to take PVE into account in our implementation, we introduced a thresholding parameter which determines whether a BMU is included in two classes at the same time: a BMU is assigned to two clusters simultaneously when its membership probabilities for these clusters differ by less than the threshold.

Figure 5: SOM units membership probability for WM (a), GM (b), and CSF (c) clusters according to FCM clustering for IBSR volume 7.
Figure 6: 3D projection of the SOM prototypes for IBSR volume 7. Units have been colored according to the maximum membership criterion. Maroon, orange, and green represent WM, GM, and CSF prototypes, respectively.

Thus, voxels whose BMU fulfills this constraint are included in both clusters (i.e., they belong to two tissues simultaneously).

In order to identify the tissue corresponding to each cluster, we use the fact that, due to the T1-weighted MRI acquisition process, CSF voxels usually present lower intensity values than GM, while WM voxels present the highest intensity values. In this way, the cluster with the lowest mean intensity value is associated with CSF voxels and the cluster with the highest mean intensity value is associated with WM voxels.

2.6. Feature Selection

As shown in Section 2.3, first order, second order (textural), and histogram-based features are extracted from each cube. However, using all the features does not provide the best results, as some of them may not be discriminative enough for the three tissues. Moreover, using nondiscriminative features can deteriorate the clustering results. Thus, a feature selection stage was performed using a subset of 5 training images to compute the most discriminative features. This is addressed by a genetic algorithm (GA), using the parameters described in [8], which evolves an initial population of solutions (permutations of the features) aiming to minimize the fitness function defined as

f = 1 − J̄,  (8)

where J̄ is the mean Jaccard coefficient [7, 15, 23] for a subset of 5 images (volumes no. 7, no. 8, no. 9, no. 10, and no. 11). It measures the average overlap between the segmented image and the segmentation reference provided by the database. This metric is used in many works [21–23, 37] to assess the performance of segmentation algorithms and can be defined as

J(A, B) = |A ∩ B| / |A ∪ B|,  (9)

where |·| represents the cardinality of a set.
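The Jaccard coefficient for one tissue label can be sketched as follows (the toy label arrays are illustrative):

```python
import numpy as np

def jaccard(seg, ref, label):
    """Jaccard coefficient |A intersect B| / |A union B| for one label."""
    a, b = (seg == label), (ref == label)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

seg = np.array([0, 1, 1, 2, 2])  # segmentation outcome
ref = np.array([0, 1, 2, 2, 2])  # reference (ground truth)
```

Averaging the per-tissue coefficients over the five training volumes gives the mean overlap on which the GA fitness is built.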

Figure 7 shows the fitness value for 60 generations, which are enough for the GA to converge. The optimized feature set is shown in Table 3.

Figure 7: Fitness function in the optimization process. Convergence is achieved in 50 generations.

Thus, these features have been used to process the images on the IBSR database and to provide the segmentation outcomes based on the Jaccard index as shown in Section 4.

2.7. SOM Topology Selection

The number of SOM units usually determines the performance of the clustering. Figure 8 shows the quantization error as a function of the number of units used in the model, computed as

E_q = (1/N) Σ_{k=1}^{N} ||x_k − ω_{b(x_k)}||,  (10)

where ω_{b(x_k)} is the prototype of the BMU corresponding to data sample x_k and N is the number of data samples in the dataset.
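This error measure can be sketched directly (prototypes are assumed to be stored as the rows of a matrix):

```python
import numpy as np

def quantization_error(X, prototypes):
    """Mean distance from each sample to the prototype of its BMU."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None], axis=2)
    return d.min(axis=1).mean()
```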

Figure 8: Mean quantization error as a function of the number of SOM units.

The quantization error in this figure is a measure of the reconstruction error, which tends to stabilize from 64 units onwards. Thus, we chose a 10 × 10 unit map as a trade-off between quantization error and performance. Moreover, a 3D SOM layer with a hexagonal lattice was used, as it yields better segmentation results.

3. Segmentation Procedure

Unsupervised segmentation using the SOM requires clustering the SOM units after training. This can be addressed using a standard clustering algorithm or a specific algorithm developed to cluster the SOM layer, as shown in Section 2.5. The overall process is summarized in Algorithm 1.

Algorithm 1: SOM-FCM segmentation algorithm for 3D MRI.

Cluster assignment of SOM units can be addressed in two ways. Since FCM computes membership probabilities, a unit can be assigned to the cluster providing the maximum probability. This method uses the maximum membership criterion and assumes that a voxel belongs to a single cluster only; it does not address the partial volume effect (PVE) (i.e., a voxel can contain signal from different tissues due to the limited resolution of the acquisition process). The second approach assigns a voxel to different clusters simultaneously if the membership probability is above a predefined threshold. Both approaches have been implemented, and experimental results are provided with and without PVE correction. A thresholding parameter has been used in the experiments performed in Section 4, meaning that voxels whose membership probabilities for two different clusters differ by less than the threshold are assigned to both clusters (tissues) simultaneously.
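Both assignment strategies can be sketched with one helper: with eta = 0 it reduces to the maximum membership criterion, while eta > 0 assigns a voxel to every cluster whose membership lies within eta of the maximum (the membership values and the threshold below are illustrative).

```python
import numpy as np

def assign_clusters(U, eta=0.0):
    """Defuzzify FCM memberships. Each voxel joins every cluster whose
    membership is within eta of its maximum; eta = 0 keeps only the
    maximum membership cluster (no PVE handling)."""
    labels = []
    for u in U:
        labels.append(tuple(np.flatnonzero(u.max() - u <= eta)))
    return labels

U = np.array([[0.70, 0.25, 0.05],   # clearly one tissue
              [0.45, 0.40, 0.15]])  # mixed voxel (PVE)
```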

4. Experimental Results

Numerous experiments were carried out to assess the performance of the proposed algorithm using the IBSR2 database, as it provides real brain MRIs. Figure 9 shows the segmentation results for some slices of the axial and coronal planes for IBSR volume 7, using the maximum membership criterion to defuzzify the clustering result. In this case, no PVE correction is applied, as each voxel belongs to a single cluster only.

Figure 9: Segmentation results for the IBSR volume 7. Some slices of the axial and coronal planes are shown in (a) and (b), respectively. Segmentation performed by experts is shown in (c) and (d) for the axial and coronal planes, respectively.

Thus, in Figure 9 each tissue is shown in a different color: CSF, GM, and WM are shown in green, orange, and maroon, respectively. In addition, Figures 9(c) and 9(d) show the segmentation references from the IBSR database (i.e., manual segmentations by expert radiologists, or ground truth). It is worth noting that the expert segmentations included in the IBSR database do not include internal CSF spaces, whereas our approach also delineates sulcal CSF. This is the main source of difference between our segmentation outcomes and the ground truth; thus, the Jaccard index for CSF is not as high as in the WM or GM case.

In order to show segmentation outcomes, Figure 10 presents a slice of the original IBSR2 volume no. 7 after brain extraction and the segmented tissues. In this figure, CSF, GM, and WM are shown in Figures 10(b), 10(c), and 10(d), respectively. Coronal plane is shown in Figure 12.

Figure 10: Axial slice from volume no. 7 (a). Segmentation results show CSF (b), WM (c), and GM (d).

In addition, PVE correction is applied as explained in Section 2.5, and the results are presented in Figures 11, 13, and 14. As shown in these figures, PVE correction improves the results, especially for GM delineation. Figure 14 presents the Jaccard index for the 18 volumes in the IBSR2 database, together with a comparison with other segmentation techniques. Our method outperforms the FCM method applied over the voxel intensity levels and also performs better for some volumes than other methods combining fuzzy clustering and intensity inhomogeneity compensation techniques. Our method tends to delineate WM better and also delineates CSF correctly. Moreover, the brain extraction stage may cause differences in the final segmentation results in terms of the Jaccard index, as the number of voxels in the segmentation references may differ depending on the brain extraction technique (e.g., brain extraction by manual delineation).

Figure 11: Axial slice from volume no. 7 (a). Segmentation results show CSF (b), WM (c), and GM (d) with PVE correction.
Figure 12: Coronal slice from volume no. 7 (a). Segmentation results show CSF (b), WM (c), and GM (d).
Figure 13: Coronal slice from volume no. 7 (a). Segmentation results show CSF (b), WM (c), and GM (d) with PVE correction.
Figure 14: Jaccard index calculated throughout the images in the IBSR2 database.

5. Conclusions

In this paper, we have presented an MRI segmentation method using 3D statistical features (Tables 1 and 4). These features include first and second order statistics computed using overlapping cubes moving across the image. In addition, local histogram features computed from each cube are used to compose the feature space. The feature vectors associated with each nonbackground voxel are modelled in an unsupervised way by an SOM, reducing the feature space to a number of prototypes, each of them representing a set of voxels. These prototypes are grouped to define the cluster borders in the SOM layer using FCM, allowing a specific prototype to model voxels belonging to different tissues simultaneously. In this way, PVE correction is incorporated into the segmentation algorithm. The algorithm has been assessed by experiments regarding its modelling capabilities using the proposed feature set and the proposed clustering technique combining SOM and FCM. Moreover, a GA-based feature selection stage is used over a subset of images to compute the most discriminative features. Segmentation results using the selected features show improvements over other segmentation methods, depending on the specific volume, and clearly outperform FCM. In addition, results using the maximum membership criterion to defuzzify the clustering result and results using fuzzy clusters (i.e., allowing a voxel to be assigned to two tissues at the same time) are shown. The first approach does not correct PVE, while the latter does. Segmentation techniques could help to find causes of brain disorders such as Alzheimer's disease (AD). In fact, the segmentation algorithm presented in this paper is part of a larger study performed by the authors on tissue distribution for the characterization of neurological disorders and the early diagnosis of AD.

Table 1: 3D directions in spherical coordinates and offset coding. Offset (d) indicates radius value.
Table 2: Feature set extracted from 3D image.
Table 3: Optimized feature set.
Table 4: Mean and standard deviation of the Jaccard index for the segmentation methods in Figure 14.


Appendix: Haralick's Textural Features Computed Using the 3D GLCM





Angular second moment (ASM, energy):

ASM = Σ_i Σ_j p(i, j)².

Inverse difference moment (homogeneity):

IDM = Σ_i Σ_j p(i, j) / (1 + (i − j)²).

Sum average:

SA = Σ_{k=2}^{2Ng} k · p_{x+y}(k).

Cluster shade:

CS = Σ_i Σ_j (i + j − μ_x − μ_y)³ p(i, j).

Cluster prominence:

CP = Σ_i Σ_j (i + j − μ_x − μ_y)⁴ p(i, j).

Maximum probability:

MP = max_{i,j} p(i, j).

Difference variance:

DV = Σ_{k=0}^{Ng−1} (k − μ_{x−y})² p_{x−y}(k).

Notations
(i) Ng is the number of distinct gray levels in the window.
(ii) p(i, j) corresponds to the (i, j)th entry in the normalized 3D GLCM.
(iii) p_x and p_y are the marginal probability distributions, p_x(i) = Σ_j p(i, j) and p_y(j) = Σ_i p(i, j).
(iv) μ_x, μ_y and σ_x, σ_y are, respectively, the means and standard deviations of the partial probability density functions p_x and p_y. In the case of the variance calculation, μ represents the mean of the values within the 3D GLCM. The symmetric GLCM yields μ_x = μ_y and σ_x = σ_y.
(v) p_{x+y}(k) = Σ_i Σ_j p(i, j), with i + j = k, for k = 2, 3, …, 2Ng.
(vi) p_{x−y}(k) = Σ_i Σ_j p(i, j), with |i − j| = k, for k = 0, 1, …, Ng − 1; μ_{x−y} is its mean.


Acknowledgments

This work was partly supported by the MICINN under the TEC2012-34306 project and by the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Projects P09-TIC-4530 and P11-TIC-7103.


  1. T. Kapur, W. Eric, L. Grimson, W. M. Wells, and R. Kikinis, “Segmentation of brain tissue from magnetic resonance images,” Medical Image Analysis, vol. 1, no. 2, pp. 109–127, 1996. View at Google Scholar · View at Scopus
  2. D. N. Kennedy, P. A. Filipek, and V. S. Caviness, “Anatomic segmentation and volumetric calculations in nuclear magnetic resonance imaging,” IEEE Transactions on Medical Imaging, vol. 8, no. 1, pp. 1–7, 1989. View at Publisher · View at Google Scholar · View at Scopus
  3. P. Sahoo, S. Soltani, A. Wong, and Y. Chen, “A survey on thresholding tecniques,” Computer Vision, Graphics Image Process, vol. 41, pp. 233–260, 2000. View at Google Scholar
  4. D. Salas-González, E. E. Kuruoglu, and D. P. Ruiz, “Finite mixture of α-stable distributions,” Digital Signal Processing, vol. 19, no. 2, pp. 250–264, 2009. View at Publisher · View at Google Scholar · View at Scopus
  5. D. Salas-González, J. M. Górriz, D. J. Ramírez, M. Schloegl, E. W. Lang, and A. Ortiz, “Parametrization of the distribution of white and grey matter in mri using an alpha-stable mixture model,” Computers in Biology and Medicine, vol. 43, no. 5, pp. 559–567, 2013. View at Google Scholar
  6. J. M. Górriz, F. Segovia, J. Ramírez, A. Lassl, and D. Salas-González, “GMM based SPECT image classification for the diagnosis of Alzheimer’s disease,” Applied Soft Computing, vol. 11, pp. 2313–2325, 2011. View at Google Scholar
  7. A. Ortiz, J. M. Górriz, J. Ramírez et al., “Improving MRI segmentation with probabilistic GHSOM and multiobjective optimization,” Neurocomputing, 2013. View at Publisher · View at Google Scholar
  8. A. Ortiz, J. M. Górriz, J. Ramírez, D. Salas-González, and J. M. Llamas-Elvira, “Two fully-unsupervised methods for mr brain segmentation using som-based strategies,” Applied Soft Computing, vol. 13, no. 5, pp. 2668–2682, 2013. View at Publisher · View at Google Scholar
  9. R. Momenan, D. Hommer, R. Rawlings, U. Ruttimann, M. Kerich, and D. Rio, “Intensity-adaptive segmentation of single-echo T1-weighted magnetic resonance images,” Human Brain Mapping, vol. 5, no. 3, pp. 194–205, 2002.
  10. N. Kovacevic, N. J. Lobaugh, M. J. Bronskill, B. Levine, A. Feinstein, and S. E. Black, “A robust method for extraction and automatic segmentation of brain images,” NeuroImage, vol. 17, no. 3, pp. 1087–1100, 2002.
  11. R. Eisner, R. Brummer, M. Mersereau, and R. Lewine, “Automatic detection of brain contours in MRI data sets,” in Information Processing in Medical Imaging, vol. 511 of Lecture Notes in Computer Science, pp. 188–204, 1991.
  12. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
  13. W. M. Wells III, W. E. L. Grimson, R. Kikinis, and F. A. Jolesz, “Adaptive segmentation of MRI data,” IEEE Transactions on Medical Imaging, vol. 15, no. 4, pp. 429–442, 1996.
  14. J. F. Brinkley, “Hierarchical geometric constraint networks as a representation for spatial structural knowledge,” in Proceedings of the 16th Annual Symposium on Computer Application in Medical Care, pp. 140–144, 1992.
  15. S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, Salt Lake City, Utah, USA, 2009.
  16. Y. F. Tsai, I. J. Chiang, Y. C. Lee, C. C. Liao, and K. L. Wang, “Automatic MRI meningioma segmentation using estimation maximization,” in Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society (EMBS '05), pp. 3074–3077, September 2005.
  17. J. Xie and H. T. Tsui, “Image segmentation based on maximum-likelihood estimation and optimum entropy-distribution (MLE-OED),” Pattern Recognition Letters, vol. 25, no. 10, pp. 1133–1141, 2004.
  18. Y. Zhang, M. Brady, and S. Smith, “Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm,” IEEE Transactions on Medical Imaging, vol. 20, no. 1, pp. 45–57, 2001.
  19. M. Singh, P. Patel, D. Khosla, and T. Kim, “Segmentation of functional MRI by K-means clustering,” IEEE Transactions on Nuclear Science, vol. 43, no. 3, pp. 2030–2036, 1996.
  20. R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley-Interscience, New York, NY, USA, 2nd edition, 2001.
  21. B. Caldairou, N. Passat, P. Habas, C. Studholme, and F. Rousseau, “A non-local fuzzy segmentation method: application to brain MRI,” Tech. Rep., Université de Strasbourg, CNRS, Strasbourg, France, 2010.
  22. D. L. Pham and J. L. Prince, “Adaptive fuzzy segmentation of magnetic resonance images,” IEEE Transactions on Medical Imaging, vol. 18, no. 9, pp. 737–752, 1999.
  23. Center for Morphometric Analysis, Internet Brain Segmentation Repository (IBSR), Massachusetts General Hospital, 2012.
  24. FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Brain Extraction Tool (BET), 2012.
  25. FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, FMRIB Software Library (FSL), 2012.
  26. L. Tesař, A. Shimizu, D. Smutek, H. Kobatake, and S. Nawano, “Medical image analysis of 3D CT images based on extension of Haralick texture features,” Computerized Medical Imaging and Graphics, vol. 32, no. 6, pp. 513–520, 2008.
  27. R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
  28. Mathworks, “MATLAB—The Language of Technical Computing,” 2005.
  29. T. Kohonen, Self-Organizing Maps, Springer, Berlin, Germany, 2001.
  30. S. Haykin, Neural Networks, Prentice-Hall, Upper Saddle River, NJ, USA, 2nd edition, 1999.
  31. K. Taşdemir, P. Milenov, and B. Tapsall, “Topology-based hierarchical clustering of self-organizing maps,” IEEE Transactions on Neural Networks, vol. 22, no. 3, pp. 474–485, 2011.
  32. K. Taşdemir and E. Merényi, “Exploiting data topology in visualization and clustering of self-organizing maps,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 549–562, 2009.
  33. J. C. Dunn, “A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters,” Journal of Cybernetics, vol. 3, no. 3, pp. 32–57, 1973.
  34. J. C. Bezdek, “Fuzzy cluster analysis in very large data sets,” 2009, abstract.
  35. D. L. Pham, “Spatial models for fuzzy clustering,” Computer Vision and Image Understanding, vol. 84, no. 2, pp. 285–297, 2001.
  36. M. E. Brandt, T. P. Bohan, L. A. Kramer, and J. M. Fletcher, “Estimation of CSF, white and gray matter volumes in hydrocephalic children using fuzzy clustering of MR images,” Computerized Medical Imaging and Graphics, vol. 18, no. 1, pp. 25–34, 1994.
  37. H. Greenspan, A. Ruf, and J. Goldberger, “Constrained Gaussian mixture model framework for automatic segmentation of MR brain images,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1233–1245, 2006.