Advances in Artificial Neural Systems
Volume 2013 (2013), Article ID 278241, 18 pages
http://dx.doi.org/10.1155/2013/278241
Research Article

Novel Discrete Compactness-Based Training for Vector Quantization Networks: Enhancing Automatic Brain Tissue Classification

Ricardo Pérez-Aguila

Computer Engineering Institute, The Technological University of the Mixteca (UTM), Carretera Huajuapan-Acatlima Km 2.5, 69004 Huajuapan de León, OAX, Mexico

Received 27 June 2013; Revised 19 September 2013; Accepted 18 November 2013

Academic Editor: Juan Ignacio Arribas

Copyright © 2013 Ricardo Pérez-Aguila. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An approach for nonsupervised segmentation of Computed Tomography (CT) brain slices, based on the use of Vector Quantization Networks (VQNs), is described. Images are segmented via a VQN in such a way that tissue is characterized according to its geometrical and topological neighborhood. The main contribution arises from the proposal of a similarity metric based on Discrete Compactness (DC), a factor that provides information about the shape of an object. One of its main strengths lies in its low sensitivity to variations in the shape of an object due to noise or capture defects. We present, compare, and discuss some examples of segmentation networks trained under Kohonen's original algorithm and also under our similarity metric. Some experiments are established in order to measure the effectiveness and robustness, in our application of interest, of the proposed networks and similarity metric.

1. Introduction

1.1. Problem Statement

The main objective of this work is the description of our methodology for the automatic classification of brain tissues. Concretely, the use of Vector Quantization Networks (VQNs) for the automatic nonsupervised characterization of tissue in the human head is proposed. It is expected, by means of nonsupervised classification, that brain regions presenting similar features are properly grouped. In this sense, the idea to be developed here considers the application of a similarity metric grounded in the use of Discrete Compactness (DC). DC is a factor that provides information about the shape of an object. It was proposed originally by Bribiesca [1] and is inspired by the familiar Shape Compactness of an object. However, it has greater robustness because it has low sensitivity to variations in the shape of an object produced by noise or capture defects. The original specification of Kohonen's training algorithm considered the use of the Euclidean Distance as the similarity metric for identifying the so-called Winner Neuron. Several works also mention that other metrics can be used to achieve this task: the Canberra Distance [2], the Sup Distance (a special case of the Minkowski Distance, also known as Chebyshev Distance) [3], the Manhattan Distance [4], and the Dot Product [2]. In this regard, we have done some experiments of this kind; see [5]. The metric to be used depends on the specific characteristics of the classification task to be performed by a VQN. In our case we aim to characterize cerebral tissue by taking into account the geometry and topology surrounding a given pixel. But we also need capture errors, due to noise, for example, not to affect the final classification. For this reason, DC is a good option for comparing and classifying brain tissue.
Although this work considers a training set obtained from 2D cerebral images, we will see that computing DC first requires a 2D-to-3D mapping of the regions to be characterized. From a 2D region a 3D object is obtained, for which its DC is computed. The same applies to the Weights Vectors in the neurons that model our VQNs: they will also be mapped to 3D objects in order to compute their DCs.

1.2. Article’s Organization

This work is structured as follows. Section 2 describes the basis of and relation between 1-Dimensional Kohonen Self-Organizing Maps (KSOMs) and VQNs, while Section 3 describes the fundamentals behind DC. Section 4 summarizes a method, originally described in [6], for achieving nonsupervised characterization of tissue in Computed Tomography (CT) brain slices. This characterization is based on Kohonen's original training algorithm, applying the Euclidean Distance as the similarity metric. Two network topologies and some results of brain tissue characterization will be described. Section 5 presents our method for mapping 2D brain regions into 3D objects described through voxelization. Section 6 describes the implementation of our proposed similarity metric and its integration into the VQN model. This metric was also discussed in [7]. Via some examples, some topological properties of the produced segmentations will be identified. In Section 7 the robustness of the proposed networks is determined when a particular evaluation set is considered. From this point of view, the results generated with the proposed DC-based similarity metric will be compared with those of the networks trained under the Euclidean Distance. Finally, Section 8 summarizes the obtained results, and some conclusions and future perspectives of research are presented.

1.3. Related Works

The work described in [6] represents our first experience related to the potential use of 1D-KSOMs in the automatic nonsupervised classification of tissue in the human brain. In that paper we addressed an idea which has been present in some of our previous works and in the development described here (as seen in Section 4): to take into account the neighborhood around a given pixel, because it complements the information about the tissue to be identified. Although each pixel has an intensity which is associated with a particular tissue, it is important to consider the pixels that surround it together with their intensities. In [6], once training, based only on the Euclidean Distance, has been achieved, the neurons' Weights Vectors are seen as 2D grayscale images. In this way, it is possible to observe the representation of the classes achieved by the training procedures. These representations are the parameters used by the networks in order to classify a given image region and therefore its associated tissue. The main contribution presented in [6] is the emergence of an attractive property: from a visual point of view, it was possible to identify some pairs of representations that appeared to be symmetrical. Specifically, we detected the presence of symmetries whose nature is mirror, rotational, reflective, or a composition of them. In consequence, it is suggested that the tissue classification also provides insights about location, by considering the left/right hemispheres and the anterior/posterior cranium. These observations provided us with arguments to sustain that the proposed networks take into account, during the classification process, the topology, geometry, and also the spatial location of tissue.

Another previous work to mention is the one described in [5], where we go one step further by considering the Image Classification process. The networks defined in [6] provide segmentations for a set of CT brain images. The whole set of segmented images is then used as a training set for new KSOMs whose objective is to group them into classes in such a way that the members of a class are expected to share common and useful properties. As we will see in Section 2, Kohonen's classical model establishes the use of the Euclidean metric in order to determine the similarity between an input vector and a Weights Vector. But, as mentioned in Section 1.1, other distance functions can be considered for the purpose of identifying a Winner Neuron. In [5], the training of networks for Image Classification based on the use of metrics such as Manhattan, Canberra, Sup, and Pérez-Aguila (which has an important role here, to be described in Section 6) was considered. The idea was to obtain the classifications produced when these functions were used during the training process and to determine the impact on the way the networks classify and distribute the elements in the training set. The results were compared with those generated using the traditional Euclidean Distance. It was experimentally concluded that the Euclidean Distance produced much better classifications of the segmented images than those produced by the other considered functions. Moreover, once the members of each class were obtained, the patient/segmented-image relation was analyzed. This led us to report, in [5], another interesting observation: we found a network that grouped the segmented images in the majority of its available classes, but each used class contained images that belonged to just one patient. Other networks that exhibited this characteristic in the majority of their available classes were identified, but some of their classes had images that belonged to two or more patients.

The two mentioned previous works, [5, 6], have provided us with results for continuing the study of using VQNs in the brain tissue characterization task. Positioning ourselves with respect to [5, 6], here we deal with the Image Segmentation task, but based on the use of a similarity metric distinct from the Euclidean Distance.

2. One-Dimensional Kohonen Self-Organizing Maps (1D-KSOMs) and Vector Quantization Networks (VQNs)

A Kohonen Network with N inputs and m neurons may be used to classify points embedded in an N-Dimensional space into m categories [8, 9]. Input points have the form x = (x_1, x_2, ..., x_N). Each neuron j, 1 ≤ j ≤ m, has associated an N-Dimensional Weights Vector w_j which describes a representation of its corresponding class. All these vectors have the form w_j = (w_{j,1}, w_{j,2}, ..., w_{j,N}), 1 ≤ j ≤ m.

A set of S training points is presented to the network T times. According to [10], all values of the Weights Vectors should be randomly initialized. In the t-th presentation, 1 ≤ t ≤ S·T, the neuron whose Weights Vector w_j, 1 ≤ j ≤ m, is the most similar to the input point x is chosen as Winner Neuron. In the model proposed by Kohonen, such selection is based on the Squared Euclidean Distance. The selected neuron c is the one with the minimal distance between its Weights Vector and the input point x:

c = arg min_j ||x − w_j||², 1 ≤ j ≤ m.    (1)

Once the Winner Neuron c in the t-th presentation has been identified, each one of the network's Weights Vectors is updated according to

w_j ← w_j + [1/(t + 1)] · h(c, j) · (x − w_j),    (2)

where the term 1/(t + 1) is the learning coefficient and h(c, j) is a neighborhood function that depends on the distance between the Winner Neuron c and the neuron j. For neurons close enough to the Winner Neuron, h(c, j) should be a value near to 1. On the other hand, h(c, j) is close to zero for those neurons characterized as distant to the Winner Neuron.

When all S·T presentations have been completed, the values of the Weights Vectors correspond to the coordinates of the "gravity centers" of the clusters of the m classes.

According to (2) we must define a neighborhood function. In this work we will use the following rule:

h(c, j) = 1 if j = c, and h(c, j) = 0 otherwise.    (3)

That is, once the Winner Neuron has been identified, only its weights are updated as established in (2). The use of (3) clearly implies that we are only considering the competitive aspect of KSOMs: the so-called "The Winner Takes All" principle [9]. Hence, this gives rise to a Vector Quantization Network (VQN) or Simple Vector Quantizer [11]. In the current context, our objective is to see first how the networks' neurons behave as isolated elements. We are ignoring the cooperative aspect of KSOMs because we are interested only in each neuron's specialization and its contribution to correct tissue classification. We are aware that, under our network specification, the removal of any neuron would possibly lose all the information concerning its corresponding class, because of its independence with respect to the other neurons [11]. However, in Section 8 we establish, as one of the future lines of research, the analysis of the behavior of our networks when cooperation is taken into account by using different neighborhood functions. It is worth mentioning that purely competitive networks, using (3), are apparently not excluded from being called KSOMs. See, for example, [12], where a network also based on null neighborhoods is described; it is cited in the report by Pöllä et al. presented in [13]. Nevertheless, although our main source of inspiration comes from Kohonen SOM theory, and for the sake of positioning our work in the proper study area, we preserve the denomination VQN for all the networks presented here.
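The training loop just described can be sketched in a few lines. The following Python sketch (function and variable names are ours, not from the paper) implements purely competitive training with the Squared Euclidean Distance of (1), the 1/(t + 1) learning coefficient of (2), and the null neighborhood of (3):

```python
import random

def squared_euclidean(x, w):
    # Squared Euclidean Distance between an input point and a Weights Vector.
    return sum((xi - wi) ** 2 for xi, wi in zip(x, w))

def train_vqn(points, m, presentations, seed=0):
    """Purely competitive training ("The Winner Takes All"): only the
    Winner Neuron's Weights Vector is updated, with learning
    coefficient 1/(t + 1) at the t-th presentation."""
    rng = random.Random(seed)
    n = len(points[0])
    # Weights Vectors are randomly initialized, as recommended in [10].
    weights = [[rng.random() for _ in range(n)] for _ in range(m)]
    t = 0
    for _ in range(presentations):
        for x in points:
            # Winner Neuron: minimal squared Euclidean distance, as in (1).
            c = min(range(m), key=lambda j: squared_euclidean(x, weights[j]))
            alpha = 1.0 / (t + 1)
            # Null neighborhood (3): h(c, j) = 1 iff j = c.
            weights[c] = [wi + alpha * (xi - wi)
                          for xi, wi in zip(x, weights[c])]
            t += 1
    return weights
```

Note that with a single neuron the 1/(t + 1) coefficient makes the Weights Vector the exact running mean of the presented points, which illustrates the "gravity center" interpretation mentioned above.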

3. Discrete Compactness (DC)

In the Image Processing, Pattern Recognition, and Computer Vision fields it is sometimes required to characterize, for a given object, its topological and geometrical factors. They have a paramount role in more elaborate tasks such as those related to classification, indexing, or comparison. One of the most used factors for describing the shape of an object is the Shape Compactness (SC) [14]. The SC refers to a measure between a given object and an ideal object [15]. In the 2D Euclidean Space, SC is usually computed via the well-known ratio

SC = P² / (4πA),

where P is the perimeter of an object and A its area. Such ratio has its origins in the isoperimetric inequality

P² ≥ 4πA.

It is actually the solution to the Isoperimetric Problem, which asks for the simple closed curve that maximizes the area of its enclosed region [16]. The equality is obtained when the considered curve is a circle. Hence, as pointed out by [14], the ratio for SC is in effect comparing an object with a circle. In the 3D space the isoperimetric inequality is given by

A³ ≥ 36πV²,

where A is the area of the boundary of a 3D object, while V is its volume. Hence, the ratio A³/(36πV²) denotes the SC of a 3D object, and it effectively compares such object with a sphere.

As [14, 17] point out, these classical ratios are very sensitive to variations in the shape of an object. Moreover, when the above definitions are applied to objects defined via pixelization (in the 2D case) or voxelization (3D case), small changes in the final object's boundary produce even more important variations in the computed values. Consider, for example, the sets of boxes presented in Figure 1. The polygon described by the union of the boxes shown in Figure 1(a) has a perimeter of 32 units, while its area is 48 square units. Figure 1(b) shows a polygon that can be seen as a modified version (because of noise, artifacts, digitalization scheme, etc.) of the previous one. Its perimeter is given by 58 units. Both polygons have the same area, but their shapes have some slight differences. SC for the first polygon is given by 1.6976, while for the second it is 5.5770. These values are significantly distinct, and by considering SC as a rule for classification, this could imply that they are very different objects.
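These SC values can be reproduced directly. A small Python check, assuming (as the numbers above indicate) the normalized ratio P²/(4πA):

```python
import math

def shape_compactness_2d(perimeter, area):
    # SC = P^2 / (4*pi*A): equals 1 for a circle (isoperimetric equality)
    # and grows as the shape departs from a circle.
    return perimeter ** 2 / (4 * math.pi * area)

# The two polygons of Figure 1: same area (48), different perimeters.
sc_a = shape_compactness_2d(32, 48)  # matches the reported 1.6976 (up to rounding)
sc_b = shape_compactness_2d(58, 48)  # matches the reported 5.5770 (up to rounding)
```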

Figure 1: Polygons defined by the union of 48 unitary boxes.

In order to provide a solution to the above problem, Bribiesca, in [1, 14], defined the Discrete Compactness (DC). It is based on the notion of counting the number of edges (in the 2D case) or faces (in the 3D case) which are shared between the pixels or voxels, according to the case, that define an object. Then, DC is given by the following expression [1, 14]:

DC = (L − L_min) / (L_max − L_min),

where (i) L is the number of shared edges (faces) within an object consisting of n pixels (voxels), (ii) L_max is the maximum number of shared edges (faces) achievable with an object consisting of n pixels (voxels), (iii) L_min is the minimum number of shared edges (faces) achievable with an object consisting of n pixels (voxels), and (iv) L_min ≤ L ≤ L_max.

In [14], for the 2D case, L_max = 2(n − √n) and L_min = n − 1 are used, which, respectively, describe the maximum and minimum number of internal contacts (shared edges) between the n pixels forming an object. L_max is reached when the object corresponds to a square of sides √n, and L_min when it corresponds to a rectangle with base of length 1 and height n. In [17], L_min = 0 is established. Hence, if L = 0 then the object corresponds to a chain of pixels such that no edges, and only vertices, are shared. Reconsidering the polygons presented in Figure 1, we have L = 80 for that shown in Figure 1(a), while L = 67 for the polygon in Figure 1(b). In both cases n = 48; hence L_max = 2(48 − √48) ≈ 82.1436. By using L_min = 0, the DC for the polygons in Figures 1(a) and 1(b) is 0.9739 and 0.8156, respectively. In both cases, it is clear that DC provides us with a more robust criterion for comparison/classification/description of shapes, under the advantage that it is much less sensitive to variations in shape. For the 3D case, in [14] L_max = 3(n − n^(2/3)) is used. If n is a perfect cube, then this L_max provides the number of shared faces in an array of voxels that corresponds to a cube with edges of length n^(1/3). By using L_min = n − 1, a stack of n voxels is defined [14].
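The figures above can be verified computationally. In the following Python sketch the number of shared edges is recovered from the perimeter via 4n = P + 2L (each of the 4n pixel edges is either on the boundary or shared by exactly two pixels), an identity we introduce here for convenience; L_max = 2(n − √n) and L_min = 0 are taken as stated above:

```python
import math

def shared_edges(n, perimeter):
    # Each pixel contributes 4 edges; every shared edge is counted twice
    # and every boundary edge once: 4n = P + 2L.
    return (4 * n - perimeter) // 2

def discrete_compactness(L, n, L_min=0.0):
    # DC = (L - L_min) / (L_max - L_min), with L_max = 2(n - sqrt(n))
    # for an object of n pixels [14] and L_min = 0 following [17].
    L_max = 2 * (n - math.sqrt(n))
    return (L - L_min) / (L_max - L_min)

n = 48  # both polygons of Figure 1 consist of 48 unitary boxes
dc_a = discrete_compactness(shared_edges(n, 32), n)  # ≈ 0.9739, Figure 1(a)
dc_b = discrete_compactness(shared_edges(n, 58), n)  # ≈ 0.8156, Figure 1(b)
```

Note how the two DC values stay close to each other, whereas the corresponding SC values (1.6976 versus 5.5770) differ by more than a factor of three.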

4. Previous Work: Nonsupervised Tissue Characterization

Nonsupervised characterization of normal and pathological tissue types has great potential in clinical practice. But, as Abche et al. [18] point out, the automatic segmentation of medical images is a complex task for two reasons:
(i) human anatomy varies from one subject to another;
(ii) the images' acquisition process can introduce noise and artifacts which are difficult to correct.

As commented in Section 1.1, the problem to be addressed here is the automatic nonsupervised characterization of brain tissue. It is expected that the proposed networks identify, during their training processes, the proper representations for a previously established number of classes of tissue. Hence, a CT brain slice can be segmented in such a way that each type of tissue is appropriately identified. Many methods for description, object recognition, or indexing rely on a preprocessing step based on automatic segmentation [19, 20]. This section summarizes our methodology, established originally in [6].

A point to be considered with respect to the training sets is that a first approach could suggest that the grayscale intensity of each pixel, in each brain slice, can be seen as an input vector (formally, an input scalar). However, as discussed in [12], our networks would then be biased towards a characterization based only on grayscale intensities. It is clear that each pixel has an intensity which is understood to capture, or be associated with, a particular tissue; however, it is important to also consider the pixels around it. The neighborhood surrounding a given pixel complements the information about the tissue to be identified [6].

Let p be a pixel in an image. Given a radius r, it is possible to build a subimage by taking those pixels inside a squared neighborhood of radius r centered at p. Pixel p together with its neighboring pixels will be called a mask. See Figure 2.

Figure 2: Example of two masks in a brain slice image.

Our experiments were based on a set of 340 8-bit grayscale images corresponding to CT brain slices. They are a series of axial images of the whole head of 5 patients. All the 512 × 512 images were captured by the same tomography scanner and under the same contrast and configuration conditions.

The networks' training sets are composed of all the masks that can be generated in each one of the 340 available images. A VQN expects as input a vector, or point, embedded in the N-Dimensional Space. A mask is seen as a matrix, but by stacking its columns on top of one another a vector is obtained. This straightforward procedure linearizes a mask, making it a suitable input for the network.
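As a sketch, mask extraction and column-stacking linearization can be written as follows (the function names and the list-of-rows image representation are our own illustration, not part of the paper):

```python
def extract_mask(image, cx, cy, r):
    """Return the (2r+1) x (2r+1) mask centered at pixel (cx, cy).
    `image` is a list of rows; the mask is assumed to fit inside it."""
    return [row[cx - r: cx + r + 1] for row in image[cy - r: cy + r + 1]]

def linearize(mask):
    # Stack the mask's columns on top of one another to obtain the
    # input vector expected by the network.
    size = len(mask)
    return [mask[row][col] for col in range(size) for row in range(size)]
```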

Two VQNs with different topologies and training conditions were implemented:
(i) first network topology:
 (a) mask size: 5 × 5 pixels,
 (b) inputs: 25,
 (c) neurons (classes): 10,
 (d) presentations: see [5];
(ii) second network topology:
 (a) mask size: 4 × 4 pixels,
 (b) inputs: 16,
 (c) neurons (classes): 10,
 (d) presentations: see [5].

These two topologies were previously designed for performing some experiments discussed in [5].

Table 1 shows the segmentation obtained for three brain slices at distinct positions of the head. The segmented images are presented in false color.

Table 1: Tissue characterization of three selected brain slices via the two network topologies.

5. Computing the 3D Discrete Compactness (DC) of a 2D Mask

As seen previously, a mask of radius r is a portion of an image which is centered at a given pixel. Therefore, a mask is a grayscale subimage whose size is defined by its radius. In Section 3 we introduced DC as a factor that describes the geometry and topology of an object represented through a pixelization (2D) or a voxelization (3D). Our current objective is to show how DC can be computed for our previously defined masks. Remember that these masks define the training sets for the VQNs described in the previous section. We have to take into account that our masks are grayscale 2D subimages. For this reason, first of all, we have to consider a conversion process so that DC can be properly computed. The methodology to be described is based on some aspects originally presented in [21, 22]. In Figure 3 an example of a grayscale mask of radius 4 is presented.

Figure 3: Example of a mask of radius 4.

Suppose that the intensity values of the pixels are in the set {1/256, 2/256, ..., 256/256}, where the value 1/256 is associated with black, while the value 1 corresponds to white. Then we are considering 256 possible intensities. At this point it is convenient to clarify that other color scales can be easily adapted to the procedure we are describing; see, for example, [22], where images under 32-bit RGB are used. Because we are working with a set of CT brain images under 8-bit grayscale, our description is oriented towards this specific scale.

Each one of the mask's pixels will be extruded towards the third dimension, where its grayscale intensity value determines its z coordinate, while its x and y coordinates correspond to the original pixel's coordinates [21, 22]. See Figure 4.

Figure 4: The 3D space defined for the extrusion of grayscale 2D pixels.

A pixel's extrusion towards the third dimension depends on its intensity value I. Because we are considering 256 possible intensities, for a given pixel we always obtain a stack composed of 256·I voxels. All of these voxels are located at the same x and y coordinates as the original pixel. The stack's height is precisely 256·I. See Figure 5.

Figure 5: The voxelization resulting from the extrusion of grayscale 2D pixels. (a) The resulting 3D object with grayscale elements for referencing with Figure 4. (b) The white 3D object: this is the real one because the resulting extrusion consists no longer of grayscale elements.

In this way, given a mask of radius r we obtain a 3D object expressed as a set of voxels; see Figure 5(a). The number of such voxels corresponds to the sum of the intensities of each one of the pixels in the mask, multiplied by 256. Our process can be understood as a mapping of a 2D grayscale mask into a 3D monochrome object; see Figure 5(b). The information contained in the pixels' original intensities is preserved thanks to the use of the third dimension. Clearly, given a set of masks, it is possible to compute the DC of their corresponding 3D objects. However, we have to know in advance the values L_max and L_min. In the case of L_max we have to consider that all our masks, of radius r, have size (2r + 1) × (2r + 1). Because the maximum intensity value of a pixel is 1, the number of voxels required to represent a mask where all pixels are white is given by 256·(2r + 1)². The obtained 3D object corresponds to a prism with a squared base of side 2r + 1 and height 256. This object is the one that characterizes our value L_max. The specific value of L_max depends on the size of the masks to be processed. On the other hand, we simply establish that L_min = 0.
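Putting the pieces of this section together, the following Python sketch extrudes a grayscale mask into voxel stacks, counts shared faces (within each stack, and between 4-neighboring stacks as the overlap of their heights, a counting scheme we assume from the voxelization described above), and normalizes by the L_max of the all-white prism; all names are our own illustration:

```python
def extrude(mask):
    # Map each grayscale intensity I (in {1/256, ..., 1}) to a stack of
    # 256*I voxels; the stack keeps the pixel's (x, y) coordinates.
    return [[round(256 * i) for i in row] for row in mask]

def shared_faces(heights):
    # Faces shared inside each stack, plus faces shared between stacks
    # that are 4-neighbors (both stacks rest on the same base plane,
    # so their overlap is min of the two heights).
    rows, cols = len(heights), len(heights[0])
    L = 0
    for y in range(rows):
        for x in range(cols):
            h = heights[y][x]
            L += h - 1                       # within the stack itself
            if x + 1 < cols:
                L += min(h, heights[y][x + 1])   # right neighbor
            if y + 1 < rows:
                L += min(h, heights[y + 1][x])   # bottom neighbor
    return L

def l_max(side):
    # All-white mask: a prism with side x side base and height 256.
    # Internal faces of an a x b x c voxel box:
    #   a*b*(c-1) + a*(b-1)*c + (a-1)*b*c.
    a = b = side
    c = 256
    return a * b * (c - 1) + a * (b - 1) * c + (a - 1) * b * c

def mask_dc(mask):
    heights = extrude(mask)
    return shared_faces(heights) / l_max(len(mask))  # L_min = 0
```

By construction, an all-white mask maps exactly to the reference prism, so its DC is 1.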

6. Discrete Compactness-Based Training

6.1. The Proposed Metric

At this point we have all the elements for mapping a mask into a 3D object in order to compute its corresponding DC. Our intention is to incorporate DC into the training mechanism of a VQN. We assume, as we did in Section 4, that our networks receive as input the vectors that correspond to linearizations of masks, which in turn are taken from our set of 340 images. The size of the vectors is defined by the mask's size. This implies that our Weights Vectors also have that same number of components. The Weights Vectors describe the representations used by the network for characterizing the elements in the training set. Such representations in turn can be considered as grayscale images whose size is the same as that of the masks that compose our training set. In consequence, it is possible to compute the DC of each Weights Vector. Then, let DC_x and DC_w be the values of the DCs of the 3D representations for an input vector x and a Weights Vector w, respectively. A similarity metric for determining the likeness, from a geometrical and topological point of view, between x and w is then defined. We make use of the Pérez-Aguila Metric, which is effectively a metric over the reals, as proved in [12]. It is clear that if the scalars DC_x and DC_w are very close then its value approaches 0. The range of the values for DC is [0, 1]; therefore the metric's values are bounded accordingly (as also discussed in [7]). This is the only change to be applied to Kohonen's Training Process: the computation of the Euclidean Distance is substituted by the computation of the Pérez-Aguila Metric between the DCs of the 3D representations of a Weights Vector and an input vector. All of this is in order to determine the Winner Neuron. The remainder of the training procedure, as described in Section 2, suffers no changes, and (3) will also be used as the neighborhood function.
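The modified winner selection then reduces to comparing two scalars. In the sketch below we deliberately use the absolute difference of the DC values as a stand-in, since the exact formula of the Pérez-Aguila Metric is given in [12] and is not reproduced here; the stand-in shares the qualitative properties just described (it vanishes when the DCs coincide and stays within [0, 1] for DC values in [0, 1]):

```python
def dc_winner(input_dc, weight_dcs):
    """Pick the Winner Neuron by comparing the scalar DC of the input
    vector's 3D representation against each Weights Vector's DC.
    The absolute difference used here is a hedged stand-in for the
    actual Perez-Aguila Metric of [12]."""
    return min(range(len(weight_dcs)),
               key=lambda j: abs(input_dc - weight_dcs[j]))
```

Everything else in the training loop of Section 2, including the null neighborhood (3), is left untouched; only the distance computation inside the winner search changes.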

6.2. Visualizing the New Training Sets

We will use the same two network topologies described in Section 4. In order to distinguish them from the topologies trained under the Euclidean Distance, we will denote the DC-trained versions separately. We commented in Section 5 that the specific value for L_max depends on the masks' sizes. Because the masks to use have sizes 5 × 5 and 4 × 4, we have, in Table 2, the corresponding values for L_max.

Table 2: L_max values achieved with an object that defines a prism of squared base and height 256. The base's lengths correspond to the mask sizes used in the two network topologies.

We know that every mask is defined by its radius r and also by the coordinates, with respect to the original image, of its central pixel p. Then, given an image, the intensity value of each pixel p can be replaced by the DC value of the mask with radius r centered precisely at p. This means that we can map a grayscale image into a false color image whose intensities correspond to the DC values for all the masks that can be derived. For example, Figure 6(a) shows one of our original CT brain slices. In Figure 6(b) all the DC values when the size of the mask is 5 × 5 can be appreciated. Figure 6(c) shows the corresponding visualization with a mask size of 4 × 4. These mappings are useful in the sense that they allow us to understand what information is going to be used by the VQN, during its training process, in order to adjust its Weights Vectors. It can be noted in Figures 6(b) and 6(c) how regions with similar DCs can be appreciated and, moreover, how these regions share the same type of tissue. See, for example, the green regions: they correspond to bone tissue. Nevertheless, the training procedure is the one that will determine the proper classes of tissue. However, it is clear how using DC as a measure of similarity leads us to some early encouraging observations.
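The false color mapping of Figure 6 can be sketched as a generic transformation: every pixel whose mask fits inside the image is replaced by the DC of its surrounding mask. The helper below takes the DC computation as a parameter (`mask_dc`), since any suitable variant could be plugged in; all names are our own illustration:

```python
def dc_image(image, r, mask_dc):
    """Replace each pixel's intensity by the DC of the mask of radius r
    centered there (border pixels whose mask would exceed the image are
    skipped). `mask_dc` computes the DC of a square grayscale mask."""
    rows, cols = len(image), len(image[0])
    out = []
    for y in range(r, rows - r):
        out.append([
            mask_dc([row[x - r: x + r + 1]
                     for row in image[y - r: y + r + 1]])
            for x in range(r, cols - r)
        ])
    return out
```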

Figure 6: False color images whose intensities correspond to the DC values for all the masks that can be derived from (a). (b) Mask size: 5 × 5. (c) Mask size: 4 × 4.
6.3. Segmentation Based on Discrete Compactness

Table 3 shows some segmentation results, in false color, obtained by applying the two DC-trained networks over our set of 340 images. Both networks were also used in a study presented in [7].

Table 3: Tissue characterization of three selected brain slices via the two network topologies trained under the DC-based metric.

Now we address the question of the benefits obtained when DC, together with the Pérez-Aguila Metric, is used as the similarity metric. Momentarily, we only discuss the segmentations produced by the two networks that use 5 × 5 masks. Both networks grouped into 10 classes a training set composed of 258,064 masks. We analyze the segmentation results when the image from Figure 6(a) is sent to both networks. Classes in each segmentation were sorted decreasingly with respect to the number of their members. In Table 4 the members of the first five classes are shown. Both networks have a class where regions corresponding to empty space are grouped. In fact, these classes have the maximal number of members: 131,955 in the case of the first network and 132,534 for the second. The sum of the members in classes 1, 5, 2, and 6, under the first network, is 118,144, and class 1 groups approximately 71.16% of such members. On the other hand, for the second network, the sum of the members in classes 9, 0, 1, and 7 is 91,533. In this case the class with the most members is 9 (38,052), which means that it contains 41.57% of the previous sum. Then, it is clear that the second network has given us a more equitable distribution of the masks among the classes. Also of interest is class 1 under the second network: it corresponds mainly to bone tissue. It can be observed that no classes under the first topology present this characteristic (see Tables 4 and 5). Class 1 of the first network contains bone tissue, but brain tissue, among others, is also present. In this last sense, classes 9 and 0, from the second network, describe brain tissue but without the presence of bone tissue.

Table 4: Classification of points in the image from Figure 6(a). The five classes with the most members are presented (sorted decreasingly).

Table 5: Classification of points in the image from Figure 6(a). The five classes with the fewest members are presented (sorted decreasingly).

Now see Table 5. At this point we have other important differences between the classifications provided by the two networks. One point to stand out is the delimitation of tissue provided by the second network: its classes 2, 4, 8, 5, and 6 can be seen as boundaries for the different regions of tissue lying in the remaining classes. Clearly, in these cases, the first network produces a classification where boundaries are not continuous. In fact, in its classes 7 and 0 it is not possible to appreciate components of enough size and visual description of the tissue to which they are associated. In the second network, class 6 has the minimal number of members: 2,337. But it is clear how the tissue it describes can be seen as the one that separates bone tissue (class 1) from the remaining types.

7. Robustness Evaluation

The results presented in the previous section are encouraging in the sense that they support the observation that tissue classification based on DC separates brain tissue in a more proper way. Furthermore, as seen previously, the identified regions have better cohesion. Recall that our networks were trained using a set composed of 340 images which are associated with 5 real patients. The results of the previous section are based precisely on those images. In this section we will consider a new Evaluation Set in order to measure the performance and robustness of our networks. The main goal is to compare, from an objective point of view, the efficiency of the networks trained using Kohonen's original rule against the networks trained using our proposed rule, which, as we know, is based on DC. The objectivity rests on the fact that we consider a set of images for which the types of tissue and the regions they form are well identified. More specifically, 362 images generated by the well-known online tool BrainWeb will be used [23–27]. In our situation we have considered the generation of images associated with a normal brain. One of the characteristics of BrainWeb is the generation of images using an MRI (Magnetic Resonance Imaging) Simulator [26, 27]. This point is important because our networks were trained with images generated via CT. This implies that we are also evaluating the performance of our networks when they are presented with images obtained by a different technique.

7.1. Generating the Evaluation Sets

The 362 evaluation images were generated by configuring the MRI parameters of BrainWeb with different values. Specifically, we generated 2 sets of 181 images. They differ mainly with respect to the values used for noise, modality (signal weighting), and intensity nonuniformity [23]. The corresponding configurations are presented in Table 6. Table 7 shows visualizations of slices 30, 90, and 120 according to the configuration used. It can be noted that the head’s orientation is opposite to that present in the images used in the original training set. The only modification applied to the evaluation images was a scaling such that the new size with which they are presented to our networks is . From this point on we will refer to the evaluation images as the first set and the second set, understanding that one set differs from the other by the generation parameters presented in Table 6.
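The scaling step described above can be sketched as follows. This is a minimal nearest-neighbor rescaling in NumPy; the target size used by the paper is elided in this extraction, so the dimensions below are purely illustrative assumptions.

```python
import numpy as np

def rescale_nearest(img, out_h, out_w):
    """Rescale a 2D grayscale slice via nearest-neighbor sampling."""
    in_h, in_w = img.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[np.ix_(rows, cols)]

# hypothetical toy slice; real slices would be BrainWeb grayscale images
slice_ = np.arange(16).reshape(4, 4)
scaled = rescale_nearest(slice_, 2, 2)
```

Nearest-neighbor sampling is chosen here only because it preserves the original grayscale values; the paper does not specify its interpolation method.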

tab6
Table 6: The two BrainWeb MRI configurations used for generation of evaluation sets (see text for details).
tab7
Table 7: Three selected brain slices generated by online tool BrainWeb. Three different MRI configurations were applied (see Table 6 for details).
7.2. Segmenting the Images in the Evaluation Sets

Table 8 presents the segmentations produced by networks and for slices 30, 90, and 120 in the first and second sets. Table 9 presents the segmentations for the same slices in the same sets, but produced by networks and .

tab8
Table 8: Tissue characterization of three selected brain slices (evaluation first and second sets) via network topologies and .
tab9
Table 9: Tissue characterization of three selected brain slices (evaluation first and second sets) via network topologies and .
7.3. Generation of If-Then Classification Rules Based on BrainWeb Tissue Characterization

BrainWeb’s authors have made available the anatomical models used for generating their simulations [25]. These models are represented as voxelizations, and the tissue that occupies each voxel, completely or with major presence, is identified. In this sense, we have considered these tissue characterizations. As mentioned previously, we will use them as a means of objective comparison in order to determine whether our contribution, classification based on DC, effectively allows obtaining better segmentations than those produced by networks trained with the original rule based on Euclidean Distance. Table 10 shows the 10 distinct tissues that can be identified in the images generated by BrainWeb. Each tissue is associated with an index in .

tab10
Table 10: The ten types of cerebral tissue that are present in the anatomical model used in BrainWeb for generating MRI simulations.

As previously established, each one of the evaluation images is presented to each one of our networks. The produced segmentation is analyzed in order to determine the different tissues that were grouped in each class. Then, for each class, a count is made to identify its Winner Tissue. For example, see Table 11, which presents the results corresponding to slice 90, in the second set, when it was presented to network . It can be observed that class 1 has 32,874 members, all of them corresponding to tissue type 0, that is, Image Background (see Table 10).

tab11
Table 11: Brain tissues collected in each class of network topology for brain slice 90 (second set).

As another example, class 10 has 34,046 members. These members are associated with tissue types 1, 2, 3, 4, 5, 6, 8, and 9; however, the Winner Tissue is type 2: Grey Matter. On the other hand, it is clear that there are classes without members, such as class 0. Using each evaluation image, the Winner Tissue for each one of the classes that compose a network is determined. Later on, once all the evaluation images in the same set have been used, a count of all the Winner Tissues for each class is performed.
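The winner-tissue determination just described can be sketched as a simple counting procedure. This is an illustrative implementation under the assumption that, for each image element, we have its assigned network class and its ground-truth BrainWeb tissue index; the toy data below are hypothetical.

```python
from collections import Counter

def winner_tissue_per_class(class_labels, true_tissues):
    """For each network class, count the ground-truth tissue types of its
    members and pick the most frequent one (the Winner Tissue)."""
    counts = {}
    for c, t in zip(class_labels, true_tissues):
        counts.setdefault(c, Counter())[t] += 1
    return {c: cnt.most_common(1)[0][0] for c, cnt in counts.items()}

# hypothetical toy data: class 1 groups only tissue 0 (Image Background),
# class 10 groups mostly tissue 2 (Grey Matter)
classes = [1, 1, 1, 10, 10, 10, 10]
tissues = [0, 0, 0, 2, 2, 2, 3]
winners = winner_tissue_per_class(classes, tissues)
```

Classes with no members simply do not appear in the resulting dictionary, matching the empty classes noted in the text.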

For example, Table 12 shows the results once all images in the second set were presented to network . It can be seen, in one case, that at the end class 0 groups only two types of tissue: 0 and 4 (Image Background and Fat Tissue, resp.). Then, we identify the Winner Tissue for a class, but unlike the per-image analysis, this class/Winner-Tissue relation is now applied to all the evaluation images in the same set. For example, reconsidering class 0 in network , the corresponding tissue is type 4: Fat Tissue.

tab12
Table 12: Winner brain tissues collected in each class of network topology . The results group all counts for all the evaluation images in the second set.

The relation between classes and Winner Tissues just defined is used for establishing simple If-Then classification rules. These rules associate, in a direct way and according to the network’s Winner Neuron, the type of tissue that corresponds to a given region of an evaluation image. For example, according to Table 12 we know that the Winner Tissue associated with class 15 is type 6: Skin Tissue. Therefore, we just have to apply the following rule when an evaluation image is presented to network :

“If the Winner Class is class 15 then the corresponding region is characterized as Skin Tissue (type 6)”

In other words, given a network, we have a number of rules equal to the number of its classes. By applying these rules a segmentation of the input evaluation image is obtained, but each region now has an associated specific tissue type. See Tables 13 and 14, where the classification rules for the classes in networks and are presented, respectively. It should be noted that we have defined sets of rules according to the evaluation set presented to the networks.
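Applying the If-Then rules amounts to a lookup from Winner Class to Winner Tissue over the segmented image. The sketch below uses a hypothetical rule table (including the "class 15 → Skin Tissue (type 6)" example from the text) and a toy 2×2 segmentation.

```python
def apply_rules(segmentation, rules):
    """Map each Winner Class in a segmented image to its Winner Tissue
    via the If-Then rules, expressed as a class -> tissue dictionary."""
    return [[rules[c] for c in row] for row in segmentation]

# hypothetical rule set: "If the Winner Class is 15 then Skin Tissue (6)",
# "If the Winner Class is 1 then Image Background (0)"
rules = {15: 6, 1: 0}
seg = [[15, 1],
       [1, 15]]
tissue_map = apply_rules(seg, rules)
```

In this form, the number of entries in `rules` equals the number of classes in the network, exactly as the text states.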

tab13
Table 13: Associations between winner classes and tissue characterizations for network topology . There is a set of If-Then rules for each evaluation set.
tab14
Table 14: Associations between winner classes and tissue characterizations for network topology . There is a set of If-Then rules for each evaluation set.
7.4. Verifying Robustness

It should be clear that the If-Then classification rules defined in the previous subsection can assign an incorrect tissue type to certain regions, because at the end we are considering a single Winner Tissue whose designation is based on the counting commented on previously. This situation is also related to the weights’ configurations in our networks, since they are responsible for initially characterizing the regions in the images. The good news is that we have the anatomical model used in BrainWeb for generating the simulations; moreover, we have the correct characterizations of tissue. Hence, all the evaluation images are presented to our two networks; the classification rules that correspond to each network and each set of evaluation images are applied, which produces segmentations; and finally, a simple count of hits and misses is performed by comparing the tissue assignment produced by the networks/classification rules against the correct assignment in the original anatomical model. In consequence, the total number of hits and misses is used as a comparison point in order to measure the performance and robustness of our proposed networks and classification rule. See Table 15. For example, network has more than 19,000,000 hits and it failed in 8,241,906 characterizations. Our current point of interest lies in verifying that the classification produced by the networks trained using DC effectively has more hits than the classification achieved by the networks based on Euclidean Distance. Observing Table 15, we can appreciate that in fact networks and have more hits than those obtained by networks and . In the case of versus , and using the first set, outnumbers network by 1,142,878 hits. There is one exception: versus when the second set is used. In this situation, network has 168,379 more hits than network . However, this difference represents 0.6% of the total of the evaluations.
This could indicate that this case represents a situation where it is not possible to differentiate the performance of such networks. It should be remembered, though, that this pair of networks was used in Section 6 for presenting our first results related to the produced segmentations. Tables 4 and 5 provided empirical evidence with respect to : it separates brain tissue more properly by generating regions with better cohesion, and it provides a more equitable distribution of the masks between the classes.
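The hits/misses comparison described above reduces to an element-wise match between the predicted tissue map and the anatomical model. A minimal sketch, with hypothetical toy data in place of the real BrainWeb labels:

```python
def score(predicted, ground_truth):
    """Count hits and misses between the tissue map produced by the
    network plus If-Then rules and the reference anatomical model."""
    hits = sum(p == g for p, g in zip(predicted, ground_truth))
    return hits, len(predicted) - hits

# hypothetical flattened tissue maps (one tissue index per element)
pred = [6, 0, 2, 2, 3]
truth = [6, 0, 2, 1, 3]
hits, misses = score(pred, truth)
accuracy = hits / (hits + misses)
```

Summing these counts over every image in an evaluation set yields the totals reported in Table 15.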

tab15
Table 15: Total number of hits and misses when an evaluation set is presented to network topologies , , , and (see text for details).

In conclusion, and by the above remarks, the results of our experiment allow us to establish that better classification results are effectively obtained when our proposal of classification based on DC is used. Furthermore, the robustness of our networks is validated because they have been capable of correctly classifying, in the best case given by , 19,621,234 masks (approximately 72%; see Table 15) in the evaluation input slices. We recall that the networks were trained using slices generated via CT, while the evaluation set, generated via BrainWeb, corresponds to MRI simulation. Finally, the head’s orientation is opposite to that used during the networks’ training phase.

8. Conclusions and Future Work

In this work we have described a similarity metric for the identification of the Winner Neuron in VQN training. It is well known that medical reasoning is mainly based on the information and knowledge acquired from previous closed cases [28]. The contributions presented here are part of a main objective that aims at the generation of a database composed of a set of images that correspond to patients with well-specified diagnoses and the medical procedures followed. Images are to be grouped in classes in such a way that those included in a class share anatomical characteristics. The automatic classification of previously addressed clinical situations, expressed via images, has great potential for physicians because it could make it possible (1) to index an image corresponding to a new case in an appropriate class and (2) to use the associated closed case of each member of such a class in order to build a suggestion of the diagnosis and procedures to apply [29].

We have seen how the substitution of the classical rule by the proposed one, based on DC and the Pérez-Aguila Metric, has yielded some interesting results for the characterization of tissue in CT brain slices. We recall that, in order to compute the DC of the masks that compose an image, and also of the Weights Vectors in the networks’ neurons, a 2D-3D mapping is required that, on one side, preserves information regarding the grayscale intensity of the original pixels. On the other side, this 3D representation also expresses geometric and topological information, which is then used by the network in its training process. According to Tables 4 and 5, we can appreciate that the final segmentation groups the elements in an image in a more coherent way, providing a clear identification of the tissue described by each class. The results of Table 5 also lead us to understand that segmentation supported by DC is directed in such a way that the formed regions are as well delimited as possible. The experiments described in Section 7 have allowed us to conclude that better classification results are obtained by considering our proposed similarity metric. As seen previously, these results arise from using images generated by the well-known online tool BrainWeb, for which the tissues and the regions they form are well identified. Moreover, by using BrainWeb-generated images we were able to validate the robustness of our networks, because they were capable of processing MRI simulations and successfully classifying the majority of the brain tissues.
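For reference, a 2D version of Discrete Compactness can be sketched as follows. This follows the common Bribiesca-style normalization of the contact perimeter [1, 14]; note that the paper’s actual computation operates on the 2D-3D mapped representation, so this 2D sketch is only illustrative, and the min/max normalization bounds used here are an assumption.

```python
import math
import numpy as np

def discrete_compactness(mask):
    """Normalized 2D Discrete Compactness of a binary mask.
    L_c is the contact perimeter: the number of edges shared by pairs of
    filled cells. It is normalized between a line of n cells (minimum,
    n - 1 contacts) and a square block (maximum, 2(n - sqrt(n)) contacts)."""
    mask = np.asarray(mask, dtype=bool)
    n = int(mask.sum())
    if n < 2:
        return 0.0
    # count horizontal and vertical neighbor pairs of filled cells
    lc = int((mask[:, :-1] & mask[:, 1:]).sum() +
             (mask[:-1, :] & mask[1:, :]).sum())
    lc_min = n - 1
    lc_max = 2 * (n - math.sqrt(n))
    return (lc - lc_min) / (lc_max - lc_min)

square = [[1, 1], [1, 1]]   # compact block: maximal contact perimeter
line = [[1, 1, 1, 1]]       # thin line: minimal contact perimeter
```

Because DC depends on shared edges rather than the boundary itself, small boundary perturbations (noise, capture defects) change it far less than they change the classical perimeter-based Shape Compactness, which is the robustness property the paper exploits.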

In [7] we present another point of view to sustain the benefits of the use of Discrete Compactness. Specifically, it is studied how the similarity metric, for segmentation purposes, impacts other processes such as the classification of segmented images. In that work we built additional 1D-KSOMs whose objective is to classify segmented images. The impact is measured in two ways: (a) with respect to the differences existing between segmented images in the same class and (b) with respect to the way a representative relates to the members of its class. In both cases, we are assisted by some simple error functions. Summarizing, the obtained results tell us that images segmented under Discrete Compactness are distributed in classes in a better way than under the classical rule based on Euclidean Distance. In our experiments we observe that images in the same class are closer in terms of their common characteristics, and their representatives, the Weights Vectors, describe them in a much better way. See [7] for more specific details and results with respect to the experiments developed there.

Continuing the line of research we have followed here, we can see, according to (3), that we considered null neighborhoods when we trained the networks presented in this work. We proceeded in that way because, by introducing our similarity metric, we wanted to observe how the networks’ neurons behaved as isolated elements. However, it is well known that one of the paramount fundamentals of Kohonen Networks is that updating rule (2) should act over all the neurons inside the Winner Neuron’s neighborhood. As part of our future work we will incorporate different neighborhood functions in order to observe in detail the influence of the Winner Neuron over other neurons when our proposed similarity metric is used.

We close this work by establishing another line of future research. The only change applied to Kohonen’s training procedure was the one related to the similarity metric. The updating rule (2) modifies the weights of the Winner Neuron in terms of the input vector and the current learning coefficient. Geometrically, the current Weights Vector, , is translated by means of the input vector . The resulting vector is then scaled by the factor defined by the learning coefficient and the neighborhood function. This implies that only spatial relationships are taken into account in order to obtain a new Weights Vector. We have seen what happens when other types of relations are taken into consideration when a network is trained: we took the essence of one of Kohonen’s learning rules and proposed one based on Discrete Compactness and the Pérez-Aguila Metric. Hence, we can establish a new line of future research in the sense that Kohonen’s learning rules can be seen as starting points for defining analogous updating rules that take into account well-known operators such as Boolean Regularized Operations and Morphological Operators. On the other side, it is possible to use other geometrical and topological factors in order to determine similarity. In concrete terms, we propose, as future work, the specification of a Nonsupervised Classifier based on Kohonen’s learning rules where the Winner Neuron and its updating are determined according to one or various geometrical and topological interrogations and operations. We are optimistic that the results to be obtained will be as encouraging as the ones presented here.

References

  1. E. Bribiesca, “Measuring 2-D shape compactness using the contact perimeter,” Computers and Mathematics with Applications, vol. 33, no. 11, pp. 1–9, 1997.
  2. N. B. M. Yusof, Multilevel Learning in Kohonen SOM Network for Classification Problems, Universiti Teknologi Malaysia, 2006.
  3. R. Kamimura, S. Aida-Hyugaji, and Y. Maruyama, “Information-theoretic self-organizing maps with Minkowski distance,” in Proceedings of the 17th IASTED International Conference on Artificial Intelligence and Soft Computing, pp. 15–20, ASC, July 2003.
  4. M. Porrmann, M. Franzmeier, H. Kalte, U. Witkowski, and U. A. Rückert, “Reconfigurable SOM hardware accelerator,” in Proceedings of the 10th European Symposium on Artificial Neural Networks (ESANN '02), pp. 337–342, Bruges, Belgium, April 2004.
  5. R. Pérez-Aguila, “Automatic segmentation and classification of computed tomography brain images: an approach using one-dimensional Kohonen networks,” IAENG International Journal of Computer Science, vol. 37, no. 1, pp. 27–35, 2010.
  6. R. Pérez-Aguila, “Brain tissue characterization via non-supervised one-dimensional Kohonen networks,” in Proceedings of the 19th International Conference on Electronics, Communications and Computers (CONIELECOMP '09), pp. 197–201, IEEE Computer Society, Puebla, México, February 2009.
  7. R. Pérez-Aguila, “Enhancing brain tissue segmentation and image classification via 1D Kohonen networks and discrete compactness: an experimental study,” Engineering Letters, vol. 21, no. 4, pp. 171–180, 2013.
  8. E. Davalo and P. Naïm, Neural Networks, The Macmillan Press, 1992.
  9. H. Ritter, T. Martinetz, and K. Schulten, Neural Computation and Self-Organizing Maps: An Introduction, Addison-Wesley, 1992.
  10. J. Hilera and V. Martínez, Redes Neuronales Artificiales, Alfaomega, México, 2000, (Spanish).
  11. D. Niebur, “An example of unsupervised networks Kohonen’s self-organizing feature map,” Technical Report, Jet Propulsion Laboratory & California Institute of Technology, 1995, http://trs-new.jpl.nasa.gov/dspace/handle/2014/30739.
  12. R. Pérez-Aguila, P. Gómez-Gil, and A. Aguilera, “Non-supervised classification of 2D color images using Kohonen networks and a novel metric,” in Progress in Pattern Recognition, Image Analysis and Applications, vol. 3773 of Lecture Notes in Computer Science, pp. 271–284, Springer, Berlin, Germany, 2005.
  13. M. Pöllä, T. Honkela, and T. Kohonen, “Bibliography of self-organizing map (SOM) papers: 2002–2005 addendum,” TKK Reports in Information and Computer Science TKK-ICS-R23, Department of Information and Computer Science, Faculty of Information and Natural Sciences, Helsinki University of Technology, 2009, http://lib.tkk.fi/Reports/2009/isbn9789522482532.pdf.
  14. E. Bribiesca and R. S. Montero, “State of the art of compactness and circularity measures,” International Mathematical Forum, vol. 4, no. 27, pp. 1305–1335, 2009.
  15. S. Marchand-Maillet and Y. M. Sharaiha, Binary Digital Image Processing: A Discrete Approach, Academic Press, 2000.
  16. R. Osserman, “The isoperimetric inequality,” Bulletin of the American Mathematical Society, vol. 84, no. 6, pp. 1182–1238, 1978.
  17. J. Einenkel, U.-D. Braumann, L.-C. Horn et al., “Evaluation of the invasion front pattern of squamous cell cervical carcinoma by measuring classical and discrete compactness,” Computerized Medical Imaging and Graphics, vol. 31, no. 6, pp. 428–435, 2007.
  18. A. B. Abche, A. Maalouf, and E. Karam, “A hybrid approach for the segmentation of MRI brain images,” in Proceedings of the IEEE 13th International Conference on Systems, Signals and Image Processing, September 2006.
  19. F. Peng, K. Yuan, S. Feng, and W. Chen, “Pre-processing of CT brain images for content-based image retrieval,” in Proceedings of the 1st International Conference on BioMedical Engineering and Informatics (BMEI '08), pp. 208–212, Sanya, China, May 2008.
  20. M. Berthod, Z. Kato, S. Yu, and J. Zerubia, “Bayesian image classification using Markov random fields,” Image and Vision Computing, vol. 14, no. 4, pp. 285–295, 1996.
  21. R. Pérez-Aguila, Orthogonal polytopes: study and application [Ph.D. thesis], Universidad de las Américas Puebla (UDLAP), 2006, http://catarina.udlap.mx/u_dl_a/tales/documentos/dsc/perez_a_r/.
  22. R. Pérez-Aguila, “Representing and visualizing vectorized videos through the extreme vertices model in the n-dimensional space (nD-EVM),” Journal Research in Computer Science, vol. 29, pp. 65–80, 2007.
  23. BrainWeb: Simulated Brain Database, 2013, http://www.bic.mni.mcgill.ca/brainweb/.
  24. C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, and A. C. Evans, “BrainWeb: online interface to a 3D MRI simulated brain database,” NeuroImage, vol. 5, no. 4, part 2, article S425, 1997.
  25. D. L. Collins, A. P. Zijdenbos, V. Kollokian et al., “Design and construction of a realistic digital brain phantom,” IEEE Transactions on Medical Imaging, vol. 17, no. 3, pp. 463–468, 1998.
  26. R. K.-S. Kwan, A. C. Evans, and G. B. Pike, “An extensible MRI simulator for post-processing evaluation,” in Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 135–140, Springer, 1996.
  27. R. K.-S. Kwan, A. C. Evans, and B. Pike, “MRI simulation-based evaluation of image-processing and classification methods,” IEEE Transactions on Medical Imaging, vol. 18, no. 11, pp. 1085–1097, 1999.
  28. F. S. McDonald, P. S. Mueller, and G. Ramakrishna, Mayo Clinic Images in Internal Medicine, Informa HealthCare, 1st edition, 2004.
  29. R. Pérez-Aguila, Una Introducción al Cómputo Neuronal Artificial, El Cid Editor, Argentina, 2012, (Spanish).