Advances in Artificial Neural Systems
Volume 2012 (2012), Article ID 457590, 7 pages
Unsupervised Neural Techniques Applied to MR Brain Image Segmentation
1Department of Communication Engineering, University of Malaga, 29071 Malaga, Spain
2Department of Signal Theory, Networking and Communications, University of Granada, 18071 Granada, Spain
Received 17 February 2012; Accepted 14 April 2012
Academic Editor: Anke Meyer-Baese
Copyright © 2012 A. Ortiz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The primary goal of brain image segmentation is to partition a given brain image into different regions representing anatomical structures. Magnetic resonance image (MRI) segmentation is especially interesting, since accurate segmentation into white matter, grey matter, and cerebrospinal fluid provides a way to identify many brain disorders such as dementia, schizophrenia, or Alzheimer's disease (AD). Image segmentation is therefore a valuable tool for neuroanatomical analyses. In this paper we present three alternative MR brain image segmentation algorithms, with the Self-Organizing Map (SOM) as their core. The devised procedures do not use any a priori knowledge about voxel class assignment, resulting in fully unsupervised methods for MRI segmentation that make it possible to automatically discover the different tissue classes. Our algorithms have been tested on images from the Internet Brain Segmentation Repository (IBSR), outperforming existing methods with average overlap metric values of 0.7 for white and grey matter and 0.45 for cerebrospinal fluid. Furthermore, they also provide good results for high-resolution MR images provided by the Nuclear Medicine Service of the “Virgen de las Nieves” Hospital (Granada, Spain).
Nowadays, magnetic resonance imaging (MRI) systems provide excellent spatial resolution as well as high tissue contrast. Nevertheless, while current MRI systems can acquire 16-bit depth images corresponding to 65536 gray levels, the human eye cannot distinguish more than a few tens of gray levels. Moreover, MRI systems deliver images as slices which compose a 3D volume. Thus, computer-aided tools are necessary to exploit all the information contained in an MRI, and they are becoming very valuable for diagnosing brain disorders such as Alzheimer's disease [1–5]. In addition, modern computers, equipped with large amounts of memory and several processing cores, have enough processing power to analyze an MRI in a reasonable time.
Image segmentation consists of partitioning an image into different regions. In MRI, it means partitioning the image into different neuroanatomical structures which correspond to different tissues. By analyzing these structures and the distribution of tissues in the image, brain disorders or anomalies can be detected. Hence, the importance of having effective tools for grouping and recognizing different anatomical tissues, structures, and fluids grows as medical imaging systems improve. These tools are usually trained to recognize the three basic tissue classes found in a healthy brain MR image: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Any nonrecognized tissue or fluid is classified as suspected of being pathological.
The segmentation process can be performed in two ways. The first consists of manual delineation of the structures present in the image by an expert. The second consists of using an automatic segmentation technique. As noted above, computer image processing techniques make it possible to exploit all the information contained in an MRI.
There are several automatic segmentation techniques. Some of them use the information contained in the image histogram [6–11]: since areas of different contrast should correspond to different tissues, the image histogram can be used to partition the image. Nevertheless, variations in the contrast of the same tissue occur within an image owing to RF noise or shading effects caused by magnetic field variations, resulting in tissue misclassification. Other methods use statistical classifiers based on expectation-maximization (EM) algorithms [12–14], maximum likelihood (ML) estimation, or Markov random fields [16, 17]. Further segmentation techniques are based on artificial neural network classifiers [8, 18–21] such as self-organizing maps (SOMs) [18, 19, 21–23].
In this paper we present three segmentation alternatives based on SOMs, which provide good results on the Internet Brain Segmentation Repository (IBSR) images.
2. SOM Algorithm
The SOM is an unsupervised classifier proposed by Kohonen, and it has been used in a large number of classification and modelling applications. The self-organizing process is based on computing the distance (usually the Euclidean distance) between each training sample and all the units on the map as part of a competitive learning process. Several issues, such as the map topology, the number of units on the map, the initialization of the weights, and the training process, are decisive for classification quality. Regarding the topology, a 2D hexagonal grid was selected since it fit the feature space better, as shown in the experiments.
The SOM algorithm can be summarized as follows. Let $X$ be the data manifold. In each iteration, the winning unit is computed according to

$c(t) = \arg\min_i \lVert \mathbf{x}(t) - \boldsymbol{\omega}_i(t) \rVert \quad (1)$

where $\mathbf{x}(t) \in X$ is the input vector at time $t$ and $\boldsymbol{\omega}_i(t)$ is the prototype vector associated with the unit $i$. The unit closest to the input vector is referred to as the winning unit, and the associated prototype is updated. To complete the adaptive learning process on the SOM, the prototypes of the units in the neighborhood of the winning unit are also updated according to

$\boldsymbol{\omega}_i(t+1) = \boldsymbol{\omega}_i(t) + \alpha(t)\, h_{ci}(t) \left( \mathbf{x}(t) - \boldsymbol{\omega}_i(t) \right) \quad (2)$

where $\alpha(t)$ is the exponentially decaying learning factor and $h_{ci}(t)$ is the neighborhood function associated with the unit $i$. Both the learning factor and the neighborhood function decay with time; thus the adaptation of the prototypes becomes slower as the neighborhood of the winning unit contains fewer units:

$h_{ci}(t) = \exp\left( -\frac{\lVert r_c - r_i \rVert^2}{2\sigma^2(t)} \right) \quad (3)$

$\sigma(t) = \sigma_0 \exp\left( -\frac{t}{\tau} \right) \quad (4)$

Equation (3) shows the neighbourhood function, where $r_i$ represents the position of unit $i$ on the output space and $\lVert r_c - r_i \rVert$ is the distance between the winning unit $c$ and the unit $i$ on the output space. The neighbourhood is defined by a Gaussian function which shrinks in each iteration, as shown in (4). In this competitive process, the winning unit is named the best matching unit (BMU). The parameter $\tau$ controls the reduction of the Gaussian neighborhood in each iteration; it is a time constant which depends on the number of iterations and the map radius, computed as $\tau = \text{number\_of\_iterations}/\text{map\_radius}$, and $\sigma_0$ is the initial neighborhood radius.
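The competitive learning loop in (1)–(4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the rectangular grid, map size, and the decay schedule chosen for the learning factor are assumptions of the sketch (the paper uses a hexagonal grid).

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, n_iters=200, alpha0=0.5, seed=0):
    """Minimal SOM trainer: winner selection (1), prototype update (2),
    Gaussian neighborhood (3), and shrinking radius (4)."""
    rng = np.random.default_rng(seed)
    n_units, dim = grid_h * grid_w, data.shape[1]
    # Unit positions on the output grid and random initial prototypes.
    pos = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    w = rng.random((n_units, dim))
    sigma0 = max(grid_h, grid_w) / 2.0   # initial map radius
    tau = n_iters / sigma0               # tau = number_of_iterations / map_radius
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        c = int(np.argmin(np.linalg.norm(x - w, axis=1)))  # winning unit, eq. (1)
        sigma = sigma0 * np.exp(-t / tau)                  # shrinking radius, eq. (4)
        d2 = np.sum((pos - pos[c]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))               # neighborhood, eq. (3)
        alpha = alpha0 * np.exp(-t / n_iters)              # decaying learning factor
        w += alpha * h[:, None] * (x - w)                  # prototype update, eq. (2)
    return w, pos
```

After training, each prototype row of `w` models a region of the input manifold, and `pos` gives the unit's coordinates on the output grid.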
The quality of the trained map can be assessed by means of two measures: the quantization error $q_e$, which determines the average distance between each data vector and its best matching unit (BMU), and the topological error $t_e$, which measures the proportion of all data vectors for which the first and second BMUs are not adjacent units. They are defined as follows:

$t_e = \frac{1}{N} \sum_{k=1}^{N} u(\mathbf{x}_k) \quad (5)$

$q_e = \frac{1}{N} \sum_{k=1}^{N} \lVert \mathbf{x}_k - \boldsymbol{\omega}_{b(\mathbf{x}_k)} \rVert \quad (6)$

In (5), $N$ is the total number of data vectors, and $u(\mathbf{x}_k)$ is 1 if the first and second BMUs for $\mathbf{x}_k$ are nonadjacent and 0 otherwise. In (6), $\mathbf{x}_k$ is the $k$th data vector on the input space and $\boldsymbol{\omega}_{b(\mathbf{x}_k)}$ is the weight (prototype) associated with the best matching unit for the data vector $\mathbf{x}_k$. Lower values of $q_e$ and $t_e$ imply better topology preservation, which is equivalent to a better clustering result; that is to say, the lower the quantization error $q_e$ and the topological error $t_e$, the better the goodness of the SOM [25, 26]. In this paper, the SOM Toolbox has been used to implement the SOM.
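Given a trained map, both quality measures can be computed directly from the data and the prototypes. The sketch below assumes a rectangular lattice with 8-neighborhood adjacency; the paper's hexagonal grid would use a different adjacency test.

```python
import numpy as np

def som_quality(data, w, pos):
    """Quantization error q_e (6) and topological error t_e (5) for a
    trained SOM with prototypes `w` and unit grid positions `pos`."""
    # Distances from every data vector to every prototype.
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    bmu1, bmu2 = order[:, 0], order[:, 1]
    q_e = d[np.arange(len(data)), bmu1].mean()          # eq. (6)
    # First and second BMUs count as adjacent if their grid distance is
    # at most sqrt(2) (8-neighborhood; an assumption of this sketch).
    grid_dist = np.linalg.norm(pos[bmu1] - pos[bmu2], axis=1)
    t_e = (grid_dist > np.sqrt(2) + 1e-9).mean()        # eq. (5)
    return q_e, t_e
```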
3. MR Image Segmentation with SOM
In this section we present two image segmentation algorithms based on unsupervised SOMs. The first uses the histogram to segment the whole volume (i.e., classify all the voxels in the volumetric image). The second extracts a set of features from each image slice and uses an SOM to classify the feature vectors into clusters by means of the devised entropy-gradient clustering method. Figure 1 shows the block diagram of the presented segmentation algorithms.
3.1. Image Preprocessing
Once the MR image has been acquired, preprocessing is performed in order to remove noise and homogenize the image background. Brain extraction, which removes undesired structures (i.e., skull and scalp), can be done at this stage. Several algorithms exist for this purpose, such as the brain surface extractor (BSE), the brain extraction tool (BET), the Minneapolis consensus strip (McStrip), and the hybrid watershed algorithm (HWA). Since IBSR 1.0 images have these undesired structures already removed, brain extraction is not required for them. However, IBSR 2.0 images are distributed with the scalp/skull still present; for these images, the brain has been extracted in the preprocessing stage using BET.
3.2. Segmentation Using the Volume Image Histogram (HFS-SOM)
The first step after preprocessing consists of computing the volume image histogram, which describes the probability of occurrence of voxel intensities in the volume image and provides information regarding the different tissues. A common approach to avoid processing the large number of voxels present in MR images consists of modelling the intensity values with a finite number of prototypes, which improves computational efficiency. After computing the histogram, bin 0 is removed since it contains all the background voxels. Thus, only information corresponding to the brain is retained.
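The histogram-based feature vectors can be assembled as follows; the number of bins used here is an assumption of this sketch, not a value from the paper.

```python
import numpy as np

def histogram_features(volume, n_bins=256):
    """Build HFS-SOM-style feature vectors from the volume image histogram:
    each vector pairs a bin's occurrence probability with its bin index.
    Bin 0 (background voxels) is discarded, as described in the text."""
    counts, _ = np.histogram(volume.ravel(), bins=n_bins, range=(0, n_bins))
    counts = counts[1:]                 # drop bin 0 (background)
    probs = counts / counts.sum()       # intensity occurrence probabilities
    bins = np.arange(1, n_bins)         # relative position (bin number)
    return np.column_stack([probs, bins])
```

The resulting `(n_bins - 1, 2)` array is what the SOM is trained on, so the map models a few hundred histogram bins instead of millions of voxels.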
Figure 2 shows the rendered brain surface from the IBSR volume 12, and its histogram.
Histogram data, including the intensity occurrence probabilities and the relative positions (bin numbers), are used to compose the feature vectors to be classified by the SOM.
On a trained SOM, the output layer is composed of a reduced number of prototypes (one per unit on the output layer) modelling the input data manifold. In addition, the most similar prototypes are located close together on the output map, while the most dissimilar ones are located far apart. Nevertheless, since every unit has an associated prototype, it is necessary to cluster the SOM in order to define the borders between clusters; in other words, each prototype is grouped so that it belongs to a cluster. Thus, the k-means algorithm is used to cluster the SOM, grouping the prototypes into a number of different classes, and the Davies-Bouldin index (DBI), which gives lower values for better clustering results, is computed for different values of k to provide a measurement of clustering validity.
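This prototype-clustering step can be sketched with scikit-learn; the range of k values tried here is an assumption of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_prototypes(prototypes, k_range=range(2, 8), seed=0):
    """Cluster SOM prototypes with k-means, choosing the number of
    clusters k that minimizes the Davies-Bouldin index (lower is
    better), as done in the HFS-SOM method."""
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(prototypes)
        dbi = davies_bouldin_score(prototypes, labels)
        if best is None or dbi < best[0]:
            best = (dbi, k, labels)
    return best  # (dbi, chosen k, cluster label per prototype)
```

Each voxel then inherits the cluster label of its BMU's prototype, which defines the segments.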
The clusters on the SOM group the units so that they belong to a specific class. As each of these units will be the BMU of a specific set of voxels, the clusters define different voxel classes. This way, each voxel is labeled as belonging to a class (i.e., segment).
3.3. MR Image Segmentation with SOM and the Entropy-Gradient Algorithm (EGS-SOM)
The method described in this section is also based on an SOM for voxel classification, but the histogram information from the image volume is replaced by a set of computed features, from which the most discriminant ones are selected. After that, SOM clustering is performed by the EGS-SOM method described hereinafter, which allows us to obtain higher-resolution images while providing good segmentation results, as shown in the experiments.
3.3.1. Feature Extraction and Selection
In this stage, significant features are extracted from the MR image to be subjected to classification. As noted above, we process the image slice by slice on each plane. Thus, feature extraction is carried out using an overlapping, sliding window of pixels on each slice of a specific plane.
In the feature extraction process, the window size plays an important role, since smaller windows are not able to capture second-order features, that is, texture information, while larger windows result in a loss of resolution. The chosen size therefore provides a good trade-off between complexity and performance.
In this paper we use first- and second-order statistical features. The first-order features we extract from the image are intensity, mean, and variance. The intensity refers to the gray level of the center pixel of the window, while the mean and variance are calculated from the gray levels present within the window. In addition, we use second-order, textural features. Haralick et al. proposed a set of 14 features for image classification, computed using the gray level co-occurrence matrix (GLCM) method. The second-order features we have used are energy, entropy, contrast, angular second moment (ASM), sum average, autocorrelation, correlation, inverse difference moment, maximum probability, cluster prominence, cluster shade, dissimilarity, and second-order variance, as well as moment invariants.
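The sliding-window extraction can be sketched for a few of the features named above (intensity, mean, variance, plus GLCM energy, entropy, and contrast). The gray-level quantization, window size, and single horizontal GLCM offset are assumptions of this illustration, not the paper's exact settings.

```python
import numpy as np

def window_features(win, levels=8):
    """First- and second-order features for one window: intensity of
    the center pixel, mean, variance, and GLCM-derived energy, entropy,
    and contrast for a horizontal offset of 1."""
    feats = {
        "intensity": win[win.shape[0] // 2, win.shape[1] // 2],
        "mean": win.mean(),
        "variance": win.var(),
    }
    # Quantize to a few gray levels and build the co-occurrence matrix.
    if win.max() > 0:
        q = np.minimum((win / win.max() * levels).astype(int), levels - 1)
    else:
        q = np.zeros_like(win, int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    i, j = np.indices(p.shape)
    feats["energy"] = (p ** 2).sum()
    feats["entropy"] = -(nz * np.log2(nz)).sum()
    feats["contrast"] = (p * (i - j) ** 2).sum()
    return feats

def extract_features(slice_img, size=7, step=1):
    """Slide an overlapping size x size window over a 2-D slice, stack
    the per-window feature vectors, and record each window's center so
    feature vectors can later be mapped back to pixels."""
    rows, centers = [], []
    for r in range(0, slice_img.shape[0] - size + 1, step):
        for c in range(0, slice_img.shape[1] - size + 1, step):
            f = window_features(slice_img[r:r + size, c:c + size])
            rows.append(list(f.values()))
            centers.append((r + size // 2, c + size // 2))
    return np.array(rows), np.array(centers)
```

The `centers` array plays the role of the coordinate matrix mentioned in Section 3.3, linking each feature vector back to an image pixel.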
In order to select the most discriminant features, a genetic algorithm is used to minimize the topological and quantization errors of the SOM through the fitness function shown in (7).
The feature selection process is summarized in Figure 3.
The stop criterion is reached when the performance of the proposed solutions does not improve significantly (by less than 1%) or when the maximum number of generations (500) is reached.
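A generic sketch of such a GA-driven selection loop over binary feature masks follows. The population size, operators, and the fitness callable are illustrative stand-ins: the paper's actual fitness combines the SOM quantization and topological errors via eq. (7), so here any callable to minimize can be passed in.

```python
import numpy as np

def ga_select(features, fitness, pop=20, gens=50, p_mut=0.05, seed=0):
    """GA over binary feature masks. `fitness(mask)` returns a value to
    minimize (in the paper's setting, a combination of the SOM errors)."""
    rng = np.random.default_rng(seed)
    n = features.shape[1]
    popu = rng.integers(0, 2, (pop, n))
    popu[popu.sum(axis=1) == 0, 0] = 1            # avoid empty masks
    for _ in range(gens):
        scores = np.array([fitness(m.astype(bool)) for m in popu])
        elite = popu[np.argsort(scores)[: pop // 2]]   # keep the best half
        # Uniform crossover between random elite parents, then mutation.
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        cross = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        flip = rng.random(cross.shape) < p_mut
        popu = np.vstack([elite, np.where(flip, 1 - cross, cross)])
        popu[popu.sum(axis=1) == 0, 0] = 1
    scores = np.array([fitness(m.astype(bool)) for m in popu])
    return popu[np.argmin(scores)].astype(bool)
```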
Once the dimension of the feature space has been reduced, we use the vectors of this space to train an SOM. The topology of the map and the number of units on it are decisive for SOM quality. We use a hexagonal grid since it allows the prototypes to fit the feature space vectors better. Each BMU on the SOM has an associated pixel in the image. This association is made through a matrix, computed during the feature extraction phase, which stores the coordinates of the central pixel of each window, allowing a feature vector to be associated with an image pixel.
Nevertheless, these clusters roughly define the different areas (segments) on the image, and a further fine-tuning phase is required. This fine-tuning phase is accomplished by the entropy-gradient method.
The procedure devised consists of using the feature vectors associated with each BMU to compute a similarity measurement between the vectors belonging to each BMU and the vectors associated with every other BMU. Next, the BMUs are sorted in ascending order of contrast. Finally, the feature vectors of each BMU are included in a cluster. For each map unit, we compute the accumulated entropy

$H_m = -\sum_{k=1}^{n_m} p_k \log(p_k) \quad (8)$

where $m$ is the map unit index and $n_m$ is the number of pixels belonging to map unit $m$ in the classification process; that is, unit $m$ has $n_m$ associated pixels, and $p_k$ denotes the occurrence probability of the $k$th of them. Since the output layer of the SOM is a two-dimensional space, we calculate the entropy-gradient vector at each map unit from (8) and move in the opposite direction for clustering.
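One possible reading of this step in code: compute a per-unit entropy from the intensities of the pixels each unit wins, arrange it on the 2-D output grid, and take its gradient. The histogram-based entropy estimate and the use of `np.gradient` are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def unit_entropy_map(bmu_index, pixel_intensities, grid_shape, n_bins=16):
    """Accumulated entropy per SOM unit (in the spirit of eq. (8)): for
    each unit, take the intensities of the pixels it won and compute the
    Shannon entropy of their normalized histogram. Returns the entropy
    arranged on the 2-D output grid plus its gradient components, whose
    opposite direction guides the EGS-SOM cluster growth."""
    H = np.zeros(grid_shape[0] * grid_shape[1])
    for m in range(H.size):
        vals = pixel_intensities[bmu_index == m]
        if vals.size:
            counts, _ = np.histogram(vals, bins=n_bins)
            p = counts[counts > 0] / vals.size
            H[m] = -(p * np.log2(p)).sum()
    H = H.reshape(grid_shape)
    gy, gx = np.gradient(H)   # entropy-gradient vector field on the map
    return H, gy, gx
```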
4. Results and Discussion
In this section we show the segmentation results obtained using real MR brain images from two different sources. One of these sources is the IBSR database  in two versions, IBSR and IBSR 2.0.
Figures 4(a) and 4(b) show the segmentation results for the IBSR volume 100_23 using the HFS-SOM algorithm and the EGS-SOM algorithm, respectively. In these images, WM, GM, and CSF are shown for slices 120, 130, 140, 150, 160, and 170 on the axial plane. Expert segmentation from IBSR database is shown in Figure 4(c).
Figure 5(a) shows the segmentation results for the IBSR 2.0 volume 12 using the fast volume segmentation algorithm. In this figure, each row corresponds to a tissue and each image column corresponds to a different slice. In the same way, Figure 5(b) shows the same slices as Figure 5(a), but with the segmentation performed using the EGS-SOM algorithm. Figure 5(c) shows the segmentation performed by expert radiologists, provided by the IBSR database (ground truth).
Visual comparison between the automatic segmentations and the ground truth shows that the EGS-SOM method outperforms the fast volume segmentation method.
This fact is also shown in Figure 6, where Tanimoto's index is given for different segmentation algorithms: SSOM corresponds to our entropy-gradient algorithm, BMAP is biased map, AMAP is adaptive map, MAP is maximum a posteriori probability, MLC is maximum likelihood, FUZZY is fuzzy k-means, and TSKMEANS is tree-structured k-means. The performance of the presented segmentation techniques has been evaluated by computing the average overlap rate through Tanimoto's index, which has been widely used by other authors to compare the segmentation performance of their proposals [13, 16, 17, 21, 26, 37–41]. Tanimoto's index can be defined as

$T(S, G) = \frac{|S \cap G|}{|S \cup G|}$

where $S$ is the segmentation set and $G$ is the ground truth.
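For binary tissue masks, Tanimoto's index is straightforward to compute:

```python
import numpy as np

def tanimoto(seg, gt):
    """Average overlap (Tanimoto's index) between a binary segmentation
    mask and the ground-truth mask: |S intersect G| / |S union G|."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union if union else 1.0
```

For a multitissue evaluation, the index is computed per class (WM, GM, CSF) against the corresponding ground-truth mask and then averaged.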
In this paper we presented fully unsupervised segmentation methods for MR images based on hybrid artificial intelligence techniques for improving the feature extraction process and on self-organizing maps for pixel classification. The use of a genetic algorithm provides a way to train the self-organizing map used as a classifier in the most efficient way, because the dimension of the training samples (feature vectors) is reduced so that they are sufficiently discriminant but not redundant. As a result, the number of units (neurons) on the map is optimized, as is the classification process. Thus, we take advantage of the competitive learning model of the SOM, which groups the pixels into clusters. This competitive process discovers similarities among the pixels, resulting in an unsupervised way to segment the image. Moreover, the cluster borders are refined by the entropy-gradient method presented in this paper. The whole process makes it possible to discover the segments present in the image without using any a priori information.
The results shown in Section 4 have been compared with the segmentations provided by the IBSR database; they outperform the results obtained by other algorithms such as k-means or fuzzy k-means. The number of segments or different tissues found in an MR image is determined automatically, making it possible to find tissues which could be identified with a pathology.
This work was partly supported by the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Projects TIC-02566 and TIC-4530.
- I. A. Illán, J. M. Górriz, J. Ramírez et al., “18F-FDG PET imaging analysis for computer aided Alzheimer's diagnosis,” Information Sciences, vol. 181, no. 4, pp. 903–916, 2011.
- I. A. Illán, J. M. Górriz, M. M. López et al., “Computer aided diagnosis of Alzheimer's disease using component based SVM,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2376–2382, 2011.
- J. M. Górriz, F. Segovia, J. Ramírez, A. Lassl, and D. Salas-Gonzalez, “GMM based SPECT image classification for the diagnosis of Alzheimer's disease,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2313–2325, 2011.
- M. Kamber, R. Shinghal, D. L. Collins, G. S. Francis, and A. C. Evans, “Model-based 3-D segmentation of multiple sclerosis lesions in magnetic resonance brain images,” IEEE Transactions on Medical Imaging, vol. 14, no. 3, pp. 442–453, 1995.
- J. Ramírez, J. M. Górriz, D. Salas-Gonzalez, et al., “Computer-aided diagnosis of Alzheimer's type dementia combining support vector machines and discriminant set of features,” Information Sciences. In press.
- D. N. Kennedy, P. A. Filipek, and V. S. Caviness, “Anatomic segmentation and volumetric calculations in nuclear magnetic resonance imaging,” IEEE Transactions on Medical Imaging, vol. 8, no. 1, pp. 1–7, 1989.
- A. Khan, S. F. Tahir, A. Majid, and T. S. Choi, “Machine learning based adaptive watermark decoding in view of anticipated attack,” Pattern Recognition, vol. 41, no. 8, pp. 2594–2610, 2008.
- Z. Yang and J. Laaksonen, “Interactive retrieval in facial image database using self-organizing maps,” in Proceedings of the MVA, 2005.
- M. García-Sebastián, E. Fernández, M. Graña, and F. J. Torrealdea, “A parametric gradient descent MRI intensity inhomogeneity correction algorithm,” Pattern Recognition Letters, vol. 28, no. 13, pp. 1657–1666, 2007.
- E. Fernández, M. Graña, and J. R. Cabello, “Gradient based evolution strategy for parametric illumination correction,” Electronics Letters, vol. 40, no. 9, pp. 531–532, 2004.
- M. García-Sebastián, A. Isabel González, and M. Graña, “An adaptive field rule for non-parametric MRI intensity inhomogeneity estimation algorithm,” Neurocomputing, vol. 72, no. 16-18, pp. 3556–3569, 2009.
- T. Kapur, L. Grimson, W. M. Wells, and R. Kikinis, “Segmentation of brain tissue from magnetic resonance images,” Medical Image Analysis, vol. 1, no. 2, pp. 109–127, 1996.
- Y. F. Tsai, I. J. Chiang, Y. C. Lee, C. C. Liao, and K. L. Wang, “Automatic MRI meningioma segmentation using estimation maximization,” in Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS '05), pp. 3074–3077, September 2005.
- J. Xie and H. T. Tsui, “Image segmentation based on maximum-likelihood estimation and optimum entropy-distribution (MLE-OED),” Pattern Recognition Letters, vol. 25, no. 10, pp. 1133–1141, 2004.
- Y. Zhang, M. Brady, and S. Smith, “Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm,” IEEE Transactions on Medical Imaging, vol. 20, no. 1, pp. 45–57, 2001.
- N. A. Mohamed, M. N. Ahmed, and A. Farag, “Modified fuzzy c-mean in medical image segmentation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), pp. 3429–3432, March 1999.
- W. M. Wells III, W. E. L. Grimson, R. Kikinis, and F. A. Jolesz, “Adaptive segmentation of MRI data,” IEEE Transactions on Medical Imaging, vol. 15, no. 4, pp. 429–442, 1996.
- D. Tian and L. Fan, “A brain MR images segmentation method based on SOM neural network,” in Proceedings of the 1st International Conference on Bioinformatics and Biomedical Engineering (ICBBE '07), pp. 686–689, July 2007.
- I. Güler, A. Demirhan, and R. Karakiş, “Interpretation of MR images using self-organizing maps and knowledge-based expert systems,” Digital Signal Processing, vol. 19, no. 4, pp. 668–677, 2009.
- P. K. Sahoo, S. Soltani, and A. K. C. Wong, “A survey of thresholding techniques,” Computer Vision, Graphics and Image Processing, vol. 41, no. 2, pp. 233–260, 1988.
- W. Sun, “Segmentation method of MRI using fuzzy Gaussian basis neural network,” Neural Information Processing, vol. 8, no. 2, pp. 19–24, 2005.
- J. Alirezaie, M. E. Jernigan, and C. Nahmias, “Automatic segmentation of cerebral MR images using artificial neural networks,” IEEE Transactions on Nuclear Science, vol. 45, no. 4, pp. 2174–2182, 1998.
- A. Ortiz, J. M. Górriz, J. Ramírez, and D. Salas-Gonzalez, “MR brain image segmentation by hierarchical growing SOM and probability clustering,” Electronics Letters, vol. 47, no. 10, pp. 585–586, 2011.
- T. Kohonen, Self-Organizing Maps, Springer, 2001.
- E. Arsuaga and F. Díaz, “Topology preservation in SOM,” International Journal of Mathematical and Computer Sciences, vol. 1, no. 1, pp. 19–22, 2005.
- K. Taşdemir and E. Merényi, “Exploiting data topology in visualization and clustering of self-organizing maps,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 549–562, 2009.
- E. Alhoniemi, J. Himberg, J. Parhankangas, and J. Vesanto, “SOM Toolbox for Matlab v2.0,” 2005, http://www.cis.hut.fi/projects/somtoolbox.
- M. O. Stitson, J. A. E. Weston, A. Gammerman, V. Vovk, and V. Vapnik, “Theory of support vector machines,” Tech. Rep. CSD-TR-96-17, Department of Computer Science, Royal Holloway College, University of London, 1996.
- M. Nixon and A. Aguado, Feature Extraction and Image Processing, Academic Press, 2008.
- R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
- M. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, pp. 179–187, 1962.
- Internet Brain Segmentation Repository (IBSR), Massachusetts General Hospital, Center for Morphometric Analysis, 2010, http://www.cma.mgh.harvard.edu/ibsr/data.html.
- J. C. Rajapakse and F. Kruggel, “Segmentation of MR images with intensity inhomogeneities,” Image and Vision Computing, vol. 16, no. 3, pp. 165–180, 1998.
- J. L. Marroquin, B. C. Vemuri, S. Botello, F. Calderon, and A. Fernandez-Bouzas, “An accurate and efficient Bayesian method for automatic segmentation of brain MRI,” IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 934–945, 2002.
- J. C. Bezdek, L. O. Hall, and L. P. Clarke, “Review of MR image segmentation techniques using pattern recognition,” Medical Physics, vol. 20, no. 4, pp. 1033–1048, 1993.
- L. P. Clarke, R. P. Velthuizen, M. A. Camacho et al., “MRI segmentation: methods and applications,” Magnetic Resonance Imaging, vol. 13, no. 3, pp. 343–368, 1995.
- C. T. Su and H. C. Lin, “Applying electromagnetism-like mechanism for feature selection,” Information Sciences, vol. 181, no. 5, pp. 972–986, 2011.
- K. Tan, E. Khor, and T. Lee, Multiobjective Evolutionary Algorithms and Applications, Springer, 1st edition, 2005.
- T. Tasdizen, S. P. Awate, R. T. Whitaker, and N. L. Foster, “MRI tissue classification with neighborhood statistics: a nonparametric, entropy-minimizing approach,” in Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI '05), 2005.
- I. Usman and A. Khan, “BCH coding and intelligent watermark embedding: employing both frequency and strength selection,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 332–343, 2010.
- Y. Wang, T. Adali, S. Y. Kung, and Z. Szabo, “Quantification and segmentation of brain tissues from MR images: a probabilistic neural network approach,” IEEE Transactions on Image Processing, vol. 7, no. 8, pp. 1165–1181, 1998.