Abstract

Several segmentation methods are implemented and applied to segment the facial masseter tissue from magnetic resonance images. The common idea of all methods is to take advantage of prior information from MR images of other individuals when segmenting a test MR image. Standard atlas-based segmentation methods and probabilistic segmentation methods based on Markov random fields use labeled prior information. In this study, a new approach is also proposed in which unlabeled prior information from a set of MR images is used to segment the masseter tissue in a probabilistic framework. The proposed method uses only a seed point that indicates the target tissue and performs automatic segmentation of the selected tissue without using a labeled training set. The segmentation results of all methods are validated and compared, and the influences of labeled or unlabeled prior information and of initialization are discussed in particular. It is shown that, with appropriate modeling, there is no need for labeled prior information. The best accuracy is obtained by the proposed approach, which uses unlabeled prior information.

1. Introduction

Recent advances in medical imaging have enabled the derivation of useful information about different body parts and tissues. Two major imaging modalities, computed tomography (CT) and magnetic resonance imaging (MRI), are commonly used as sources for extracting anatomical structures. While CT is preferred for hard tissues such as bone, MR images are commonly used for evaluating the presence and extent of soft tissue volumes such as the brain and heart.

Nowadays, doctors and clinical specialists take advantage of imaging modalities to gather anatomical information about a patient and use this information in diagnosis and prognosis. The next step is to involve artificial intelligence to automate this diagnosis/prognosis process by segmenting the target tissues.

Currently, most of the automatic soft tissue segmentation methods in the literature consider tissues like the brain, heart, and lung as target tissues, and there are very few works on facial soft tissue (FST) (e.g., facial muscle) segmentation. Considering the key role of the face in human life and the large increase in craniofacial surgeries around the world, FST segmentation has become increasingly important. Planning a facial surgery by performing the modifications virtually prior to the actual operation [1] is very important for increasing the overall success of the operation. In addition, for patients seeking surgical treatment, it would be very beneficial to have a means of predicting the postsurgical appearance of their face. The first step toward this goal is to obtain an anatomic model of the patient's face. Such a complex computer model should include segmented hard (i.e., skull) and soft tissues (i.e., muscles, skin, and fat). Moreover, each FST (e.g., a muscle) should be segmented from the others whenever the operation affects that tissue.

Soft tissue segmentation is very complicated because soft tissues do not have a constant shape. Moreover, segmentation becomes even harder when soft tissues blend into each other, which is always the case for FSTs. To address these problems, prior information is commonly used in various ways to improve segmentation quality.

By prior information, we mean knowledge extracted from a set of MRI scans of other individuals, which can be considered the training set and used to determine prior shapes and locations of the target tissues. This is much like the way a specialist doctor delineates a target tissue in a new image based on his/her past experience of viewing thousands of similar images. The standard method is to manually label the training data and construct an atlas from it [2]. The labeled atlas is then registered to the test MRI set, and the labeling is mapped to the test set using the transformation computed in the registration process. The atlas can also be used as prior labeling information in a Markov random field (MRF) statistical model to optimize the segmentation [3].

To date, these methods have been applied to soft tissues other than FST. We implemented representative examples of these methods and compared them for the purpose of segmenting the masseter muscle.

Moreover, we propose a new segmentation approach that requires very little user interaction. Instead of manually labeled atlases, unlabeled training images are used as hidden atlases in order to evaluate the effect of unlabeled prior information. The main reason for using unlabeled prior information is that manual labeling of tens of medical image data sets is a complicated, time-consuming, and error-prone task.

In our previous work [4], we introduced a new neighborhood model that takes advantage of unlabeled prior information in the form of registered training images, and we used a modified region growing algorithm to segment the masseter tissue. In the current study, we use the same neighborhood idea in a newly proposed probabilistic framework.

The unlabeled prior knowledge is incorporated into our MRF structure, and the segmentation result is optimized iteratively with the expectation maximization (EM) algorithm. Finally, we compare our method, which uses unlabeled prior information, with the previously mentioned methods that use labeled prior information, and we evaluate the advantages and disadvantages of all methods.

2. Literature Survey

2.1. Soft Tissue Segmentation

Although there are plenty of methods that perform soft tissue segmentation in the literature, facial soft tissue (FST) segmentation has received relatively little attention. In principle, given the similar appearance of FST and other soft tissues such as the brain, the same segmentation process could be applied. However, FSTs have a more complicated and intertwined structure than other soft tissues, so more precise and robust segmentation methods are needed for FST segmentation.

Facial soft tissues are usually small and surrounded by other tissues that share the same intensity values [5]. In some cases, different but neighboring tissues blend into each other, making tissue detection difficult even for a specialist. Moreover, unlike tissues such as the brain that have a characteristic shape, FSTs do not have a specific shape and may vary in shape across individuals. All these difficulties make purely data-driven segmentation nearly impossible, and thus additional information is required. In this section, we discuss soft tissue segmentation studies related to our work.

Purely intensity-based segmentation and classification methods assign a label to each pixel and require only the intensity information generated by the MR imaging device. However, in medical image segmentation, different anatomical structures may have the same intensity values or intensity distributions and thus cannot be distinguished from each other. In such cases, extra information should be included in the segmentation process. Spatial information, such as neighborhood relationships between pixels, can be very useful in segmenting individual tissues. In addition to geometrical constraints, relationships between several different but similar data sets can also be exploited. The additional data used in a segmentation process is called the prior information. Soft tissue segmentation methods usually use prior information in various ways to improve segmentation accuracy. The prior information is included mostly in the form of single or multiple atlases. An atlas can be a single manually segmented data set (a 2D image, a 3D voxel volume, or a 2D/3D sequence) or can be formed from multiple manually segmented data sets [2]. For example, 70 infant brain MRI sets [6], 275 brain data sets [7], and 14 cardiac image sequences [8] have been used to construct atlases.

As the number of fused atlases increases, the average segmentation accuracy increases [9], and fusing a large number of atlases is more likely to produce a smooth estimate of the structure. However, constructing a multiatlas is very laborious because it requires manual segmentation of tens of data sets. In addition, the increased computational cost of registering large numbers of atlases to the query image is an immediate practical problem. Some solutions have been proposed for this problem in the literature. In [10], an adaptive multiatlas approach is proposed in which local atlas-based operations are performed; the algorithm automatically selects the most appropriate atlases for a target image and stops registering atlases when no further improvement is expected. In [11], an appropriate atlas is selected based on the scale resemblance between the atlas and the query data.

Atlases must be registered to the query data before the segmentation process; the segmentations in the atlases are then transformed to the query data and subsequently fused or combined. One form of atlas-based segmentation transforms the atlas segments to the test data using nearest-neighbor interpolation so that each atlas provides a discrete labeling for each voxel, and the final label is decided by "majority vote" [12]. Nonrigid registration is also used for segmentation purposes in [13], where the atlas is used as a guide to perform population segmentation through population deformable registration: the atlas is registered to all of the test sets, and the sets are deformed toward the atlas to achieve population segmentation.

Another way to incorporate the atlas into the segmentation process is to use MRF (Markov random field) or HMRF (hidden Markov random field) models. MRF models are commonly used for unsupervised segmentation of medical data since a smoothness constraint can easily be incorporated into the model through neighboring relations among the pixels to be segmented. Early studies of brain segmentation use the basic HMRF formulation, where smoothness is defined based on the resemblance of neighbors [14, 15], and iterative methods like ICM (iterated conditional modes) are then used to find the most probable labeling. In soft tissue segmentation, standard MRF modeling cannot be applied directly since the parameters of the model need to be tuned for each new image. To improve standard MRF models, segmentation and registration are combined in [15]; this method aims to improve segmentation and registration accuracy by incorporating registered MRI sets in a combined MRF model and estimating the labels together with a registration criterion. It is shown that this combination reduces the computational cost of registration and brings a sizable improvement in the segmentation of the human brain and mouse heart. However, the method needs the initial prior models to be set precisely.

In [16], distributed MRF segmentation is proposed to cope with spatially varying intensity distributions. Three different distribution classes are defined for MRI brain segmentation. The main problem in this approach is to find a partition that only includes these three classes.

However, the usual way of improving MRF performance in segmentation is to use a parametric model whose parameters are learned from the image, usually by the EM (expectation maximization) algorithm [17–19]. An HMRF model is developed in [20] to segment brain MR images, where the EM algorithm is used to estimate the HMRF model parameters by solving a maximum likelihood (ML) problem. Since no prior information is used in this method, the algorithm is highly sensitive to noise and therefore not robust. A commonly preferred way to incorporate prior information into MRF models is to register the atlas to the test image and to define the initial segment labels of the test image from the transformed atlases.

In [21], each tissue type is labeled based on the transformed atlas to obtain the probability of each tissue type for each voxel. The initial class labels are assigned by choosing the tissue type with the maximum probability. A classification algorithm is then used to locally maximize the mutual information by changing the class of each voxel, where the mutual information is defined based on the model probability density function (PDF). The initial class labels are used as the prior probabilities of the labels for brain segmentation.

A manually constructed probabilistic atlas is used in [8] to estimate the initial model parameters, which are used as the prior information in the classification process. The segmentation algorithm incorporates spatial and temporal contextual information by using 4D Markov random fields. Finally, the expectation maximization (EM) algorithm is used to perform segmentation on cardiac MR images.

In the literature, using the atlas as the prior probability of the labels is the most common way of incorporating prior information into the segmentation. However, this requires manually segmented atlases to be prepared. In this study, we propose an alternative in which no manually labeled atlas is required.

2.2. Facial Soft Tissue Segmentation

All methods mentioned above perform segmentation of soft tissues such as the brain, lungs, and heart. Very few studies have considered facial soft tissue (FST) segmentation in MR images.

In the literature, FST segmentation is mostly done for clinical purposes with manual or other simple segmentation methods that require human interaction. Manual segmentation can also be assisted by segmentation tools, as in [22, 23], where a finite element model (FEM) of the face is constructed from facial MRI scans. In [24], a clinical study is presented that performs manual segmentation to investigate the differences in facial soft tissues between MuSK-MG patients and healthy people.

Anatomical visualization is another application of FST segmentation. In [25], one observer performs semiautomatic segmentation using the editor module of the 3D Slicer software [26] to segment lip muscles and reconstruct 3D models.

Besides manual methods, a few automatic or semiautomatic methods have been studied for FST segmentation. The main problem with classification algorithms in FST segmentation is the presence of several tissue types in one MRI slice. These tissue types may differ in the corresponding slices of different individuals. Therefore, the segmentation results may be poor, or many manual interactions may be needed.

Ng et al. [27–29] have tested several methods for FST segmentation using prior knowledge. The process starts with manual segmentation of the training sets, followed by registration of the training sets to the test set. The training images are transformed according to the difference between the head shape and the target tissue in each image and according to tissue surface similarity. A tissue template is defined from the transformed labelings, and morphological operators are applied to the muscle template to obtain an initial estimate of the muscle boundary. This boundary then serves as the input contour to a gradient vector flow snake, which iterates to the final segmentation. An improved method is proposed in [28], in which shape-determinative slices are used as a guide in 3D segmentation. A similar method is used in [27] with a new way of determining the dominant slices of three human masticatory muscles (masseter, lateral, and medial pterygoids). In [30, 31], the authors proposed a novel segmentation method based on a 3D statistical model built from manual labelings of the masseter tissue; they show that clinically acceptable results can be achieved by using prior shape knowledge.

All these methods need user interaction in several steps of the segmentation process. In addition, a manual thresholding step is used to exclude bone and fat, which makes the methods less automatic.

The complete and automatic segmentation of facial soft tissues remains an unsolved problem. In this work, we investigate some of the methods that have been tested in the segmentation of other soft tissues and modify them for use in FST segmentation.

3. Methods

3.1. Overview

Our aim in this study is to investigate the role of labeled and unlabeled prior information in facial soft tissue segmentation. For this purpose, we apply several existing two dimensional (2D) segmentation methods to the target facial soft tissue. These methods are chosen because they are representative of the previous literature and use prior information in one way or another. A comparison between these methods will clarify different aspects of prior knowledge-based segmentation. The methods are as follows.
Method a: atlas-based segmentation.
Method b: MRF-based segmentation where the initialization is provided by a region growing segmentation started from a seed point.
Method c: MRF-based segmentation where the initialization is provided by a modified region growing segmentation that uses unlabeled training images.
Method d: Bayesian-EM based segmentation using a labeled atlas.

Then our newly proposed segmentation approach, MRF-based segmentation using unlabeled prior information (Method e), will be introduced and applied to the same image sets for 2D segmentation.

The masseter muscle is selected as the target tissue in this study. The masseter is a strong, large muscle responsible for jaw motion. An axial view of both right and left masseter muscles in an MR image is shown in Figure 1, with the muscle borders marked in green.

All images used in this work are whole head and neck 3D MRI sets obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) [32]. All sets are axial T1-weighted with 1.2 mm slice thickness, and each set contains 256 slices with 256 × 217 pixel resolution. Ten different sets were randomly selected as the experimental data. In each experiment, the leave-one-out technique is used; that is, each set is selected in turn as the test set and the remaining 9 sets are used as the training set. This process is repeated for all sets.
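For illustration, the leave-one-out protocol described above can be organized as in the following minimal Python sketch; segment_method and dice_score are hypothetical placeholders for one of the segmentation methods of Section 3 and the evaluation metric of Section 4.

import numpy as np

def leave_one_out(image_sets, manual_labels, segment_method, dice_score):
    # image_sets: list of 10 registered MRI sets; manual_labels: ground-truth masks
    scores = []
    for i, test_set in enumerate(image_sets):
        # the remaining 9 sets serve as the (labeled or unlabeled) training data
        training = image_sets[:i] + image_sets[i + 1:]
        result = segment_method(test_set, training)
        scores.append(dice_score(result, manual_labels[i]))
    return float(np.mean(scores))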

A block diagram is given in Figure 2 to explain the methods and their relationships. More details about each block are provided in Section 3.3.

3.2. Implemented Methods

Method a. Atlas-based segmentation is one of the most popular methods in medical image analysis, especially in brain soft tissue segmentation [2, 16, 33]. We implemented the same method for facial soft tissue segmentation. In this method, affine registration and histogram equalization are applied to the images, and then a transform from each training image to the test image is obtained by nonrigid registration. The manual labels are transformed by the computed transformations, and the final segmentation result is obtained by averaging all of the transformed labels. The method includes blocks (i), (ii), (iii), (v), and (viii) as shown in Figure 2. The details of each step are explained in Section 3.3.

Method b. In this method, we perform MRF-based segmentation without using any prior information. This helps us understand the basic MRF segmentation (which is considered a baseline) and the effect of prior information. The segmentation process follows the method used in [20] for brain MR images. The performance of this basic MRF approach is poor because the convergence of the EM algorithm strongly depends on the initial labeling and parameters. Thus, unlike [20], we use a region growing algorithm to obtain an initial segmentation starting from a seed point. The result of this algorithm provides the initial labels, and the initial parameters are also computed from this initial segmentation. As expected, this leads to better performance in the final segmentation. This method includes blocks (i), (ii), (iv), (vi), and (ix) as shown in Figure 2 and explained in detail in Section 3.3.

Method c. This method is similar to Method b except that the region growing algorithm (explained in part (vii) of Section 3.3) is a modified version of the basic form described in part (vi). In basic region growing, only the neighboring relations within the slice are considered; the modified region growing method also uses unlabeled training data. This modified algorithm is used for the initialization of the segmentation, both to investigate the effect of the initial estimate in MRF-based segmentation and to allow a fair comparison between the MRF-EM method and our proposed approach, which uses the same initialization. Hence this method cannot be called a prior-free method. It includes blocks (i), (ii), (iv), (vii), and (ix) as shown in Figure 2.

Method d. The importance of using prior information in medical image segmentation was discussed above. In this method, the prior information is in the form of labeled atlases, and the EM algorithm is used to estimate the model parameters. The initial labels are estimated by constructing a probabilistic atlas from the binary outputs of block (iii) in Figure 2.

The method is similar to [21], in which a probabilistic atlas is used in MRF-EM segmentation and initialization. This method includes blocks (i), (ii), (iii), and (x) as shown in Figure 2 and explained in detail in Section 3.3.

Method e. This method is our proposed approach, which takes advantage of unlabeled prior information. The prior information is used in the initialization as described in part (vii) of Section 3.3, and we introduce a new formulation to include unlabeled prior information in MRF-EM based segmentation, as explained in part (xi). Instead of considering only whether pixels are neighbors, as in standard MRF approaches, we also include the intensity differences between neighbors. Using this feature, the average difference between the target tissue distribution and the training images is computed. This prior information is then used in the Bayesian framework to compute the posterior probability, and the model parameters and the labeling are estimated iteratively by the EM algorithm. This method includes blocks (i), (ii), (iv), (vii), and (xi) as shown in Figure 2 and explained in detail in Section 3.3.

3.3. Explanation of the Blocks

(i) Affine Registration. All data sets are registered three dimensionally by an affine registration with 9 degrees of freedom (3 for rotation, 3 for scaling, and 3 for translation) so that the slices roughly correspond to each other. Normalized mutual information is used as the similarity measure in the registration process. The registration is performed automatically using the Amira software [34].

(ii) Histogram Equalization. To mitigate the intensity bias field problem, a simple histogram equalization method is used, as explained in [35]. In this method, the histograms of all images are calculated and the average histogram is equalized. The intensity values of the pixels in each slice are then remapped to the new intensity values.
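The following Python sketch illustrates one way such an average-histogram normalization could be implemented with numpy; it is a sketch of the general idea only, not the exact procedure of [35], and it assumes integer intensities in the range [0, 255].

import numpy as np

def match_to_average_histogram(images, n_bins=256):
    # average the histograms of all images and use its CDF as the common reference
    hists = [np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))[0] for img in images]
    ref_cdf = np.cumsum(np.mean(hists, axis=0))
    ref_cdf = ref_cdf / ref_cdf[-1]

    remapped = []
    for img in images:
        hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
        cdf = np.cumsum(hist) / hist.sum()
        # map each gray level to the reference level with the closest CDF value
        lut = np.searchsorted(ref_cdf, cdf).clip(0, n_bins - 1)
        remapped.append(lut[img.astype(int).clip(0, n_bins - 1)])
    return remapped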

(iii) Manual Segmentation. In this step, the masseter muscle is segmented manually in the selected slices by an expert. The time required for labeling is long and depends on the expert's ability. The output of this block is a set of binary images indicating the masseter area in each image.

(iv) Seed Point and Threshold Selection. In this step, a seed point inside the target tissue and a threshold are selected by the user for each image. These values are used for the initial segmentation in the subsequent steps. Since the histograms of the test images have already been equalized, the threshold is kept constant for all test images. Thus, seed point marking and threshold selection are done only once.

(v) Nonrigid Registration. All 9 unlabeled training images are registered nonrigidly to the test set by the demons registration method [33]. Through the nonrigid registration, the target tissue in each training set tends to deform toward the shape of the tissue in the test set. The process is fully automatic but highly time-consuming.
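The paper performs this step with the demons algorithm; the sketch below shows how a comparable deformation could be obtained with SimpleITK (our assumption, not the authors' implementation), assuming the volumes have already been affinely aligned and intensity-normalized, and using hypothetical file names.

import SimpleITK as sitk

fixed = sitk.ReadImage("test_set.nii", sitk.sitkFloat32)       # test volume
moving = sitk.ReadImage("training_set.nii", sitk.sitkFloat32)  # one training volume

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)     # illustrative setting
demons.SetStandardDeviations(1.0)     # Gaussian smoothing of the displacement field
displacement = demons.Execute(fixed, moving)

# warp the training volume (or, for Method a, its manual labels) into test space
transform = sitk.DisplacementFieldTransform(displacement)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)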

(vi) Region Growing (RG). The seed point and threshold values from (iv) are used in this step to perform the initial labeling. The region growing method uses the seed point and threshold to segment the target region in a 2D image based on intensity information only. The result of this step is a binary image containing target tissue pixels (labeled 1) and background pixels (labeled 0).
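A minimal 2D region growing sketch is given below; it grows over 8-connected neighbors whose intensity stays within the preset threshold of the seed intensity, which is one common variant (the exact acceptance rule used in the paper may differ).

import numpy as np
from collections import deque

def region_growing(image, seed, threshold):
    # seed: (row, col) inside the target tissue; threshold: maximum intensity deviation
    h, w = image.shape
    segmented = np.zeros((h, w), dtype=np.uint8)
    seed_val = float(image[seed])
    queue = deque([seed])
    segmented[seed] = 1
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not segmented[ny, nx]:
                    if abs(float(image[ny, nx]) - seed_val) < threshold:
                        segmented[ny, nx] = 1
                        queue.append((ny, nx))
    return segmented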

(vii) Modified Region Growing (Region Growing with Prior Information). The region growing algorithm in this step is a modified version of the basic method, adapted to take advantage of prior information. In this case, region growing considers not only the neighboring pixels in the same slice but also the corresponding pixels in the other data sets, that is, the training sets, even though they are not segmented a priori. Since the training sets are registered, they roughly share the same coordinates and, ideally, the same locations for the target structures.

We assume that pixels are connected to each other through the neighboring system shown in Figure 3. The current pixel is connected to the corresponding pixels in the upper and lower slices and in the 9 other training images, as well as to its 8 nearest neighbors in the same slice. These 19 neighboring pixels affect the classification of the current pixel. A new criterion is defined for growing the region as follows:

$$C(p) = w_1\,\bigl|d_p - \mu_R\bigr| + w_2\,\frac{1}{|N_p|}\sum_{q \in N_p} \bigl|d_q - \mu_R\bigr|.$$
Here, $p$ is the current pixel, $q$ is a neighbor pixel from the neighboring set $N_p$, and $\mu_R$ is the intensity mean of the pixels of the already segmented region at the current step. The first term compares only the intensity value of the current pixel with $\mu_R$, while the second term represents the influence of the neighboring pixels. The two parameters $w_1$ and $w_2$ control the effect of the two terms; they are set manually and kept constant throughout the experiments. The algorithm thus checks not only the pixel's own intensity but requires the combined criterion to be lower than a preset threshold, and otherwise proceeds as in the original RG algorithm. If the neighbors of a pixel have values similar to the mean of the already segmented tissue, the probability that the current pixel is included in the region increases.
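The acceptance rule described above can be sketched as follows; the weight names w1 and w2 and the averaging over the neighbor set are illustrative assumptions, not the exact formula of the paper.

import numpy as np

def acceptance_cost(pixel_val, neighbor_vals, region_mean, w1, w2):
    # weighted sum of the pixel's own deviation from the region mean and the
    # average deviation of its extended neighbors (in-slice, adjacent slices,
    # and the 9 registered training images of Figure 3)
    own_term = abs(pixel_val - region_mean)
    neighbor_term = float(np.mean(np.abs(np.asarray(neighbor_vals) - region_mean)))
    return w1 * own_term + w2 * neighbor_term

# a candidate pixel is accepted when acceptance_cost(...) is below the preset
# threshold, and growth then proceeds as in the basic RG loop above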

(viii) Averaging. To obtain a single segmentation of the test set, an average image is computed from the transformed labels produced in (v) for the training images. By performing a majority voting procedure on this average, we select the pixels that are labeled in more than 4 of the 9 images. The output of this step is a binary image, which is the final segmentation result of Method a.
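A direct reading of this fusion step is sketched below; the vote threshold of 4 out of 9 is taken from the description above.

import numpy as np

def fuse_transformed_labels(transformed_labels):
    # transformed_labels: list of 9 binary label images warped into test space
    stack = np.stack(transformed_labels, axis=0)
    votes = stack.sum(axis=0)
    return (votes > 4).astype(np.uint8)   # keep pixels labeled in more than 4 images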

(ix) MRF-EM with Smoothness Term as the Prior Information. In this part we briefly discuss the mathematical aspects of the MRF-EM based method implemented in this study. Let $S$ be a rectangular lattice for a 2D image of size $N \times M$:
$$S = \{(i, j) \mid 1 \le i \le N,\ 1 \le j \le M\}.$$

Each element of $S$ corresponds to a pixel whose location in the image space is specified by the indices $i$ and $j$. In MRF models, sites are normally treated as an unordered set, but when a 2D image is modeled the sites are ordered pixel locations, $S = \{1, 2, \ldots, n\}$, where $n$ is the number of pixels in the image and is equal to $N \times M$. An observation $d = \{d_1, d_2, \ldots, d_n\}$ is a rectangular array of pixel values in low-level vision problems. In this case, each pixel $d_s$ in the observation set takes a value from the set of possible intensity values.

Let $L$ be a discrete set of labels,
$$L = \{l_1, l_2, \ldots, l_m\};$$
in our binary segmentation problem, $L = \{0, 1\}$.

The segmentation process is defined as assigning a unique label to each site in $S$ in such a way that the whole domain of $L$ is supported. It is therefore a mapping from $S$ to $L$:
$$f: S \to L, \qquad s \mapsto x_s.$$

Then the labeling of all sites in $S$ is denoted by $x = \{x_1, x_2, \ldots, x_n\}$.

We call the family $X = \{X_1, X_2, \ldots, X_n\}$ a random field. $X_s = x_s$ refers to the event in which $X_s$ takes the value $x_s$. The probability that the random variable $X_s$ takes the value $x_s$ is abbreviated as $P(x_s)$, and the joint probability $P(X_1 = x_1, \ldots, X_n = x_n)$ is abbreviated as $P(x)$. The random field $X$ is said to be an MRF on $S$ with respect to the neighborhood system $N$ if and only if
$$P(x) > 0 \ \ \forall x, \qquad P(x_s \mid x_{S \setminus \{s\}}) = P(x_s \mid x_{N_s}).$$

A set of random variables $X$ is said to be a Gibbs random field (GRF) with respect to $N$ if its distribution takes the following form:
$$P(x) = \frac{1}{Z} \exp\!\left(-\frac{U(x)}{T}\right),$$
where $T$ is a constant named temperature, $U(x)$ is the energy function, and $Z$ is the normalization term (partition function). The energy function of the Gibbs distribution can be expressed as the sum of several terms, each described by the cliques of a certain size:
$$U(x) = \sum_{c \in C} V_c(x).$$

The Hammersley-Clifford theorem [36] gives necessary and sufficient conditions under which the equivalence of MRF and GRF models can be achieved.

Then, the conditional probability $P(x_s \mid x_{N_s})$ can be written as follows:
$$P(x_s \mid x_{N_s}) = \frac{\exp\!\bigl(-\sum_{t \in N_s} V(x_s, x_t)\bigr)}{\sum_{l \in L} \exp\!\bigl(-\sum_{t \in N_s} V(l, x_t)\bigr)}.$$

Here, the model includes only pairwise clique potentials. The smoothness term can be defined by the pairwise clique potentials as follows:
$$V(x_s, x_t) = \beta\,\bigl(1 - \delta(x_s, x_t)\bigr),$$
where $\delta(\cdot,\cdot)$ is the Kronecker delta.

Here, $\beta$ is the smoothing parameter and $x_s$ defines the class label (i.e., 1 for the segmented tissue and 0 for the other tissues).
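For illustration, the smoothness prior derived from such pairwise potentials can be computed from the current binary labeling as in the sketch below; a 4-connected clique system is assumed here for simplicity, while the paper's clique definition may differ.

import numpy as np

def smoothness_prior(labels, beta=1.0):
    # energy of each candidate label = beta * number of disagreeing 4-neighbors
    padded = np.pad(labels, 1, mode="edge")
    neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]], axis=0)
    energy_fg = beta * np.sum(neighbors != 1, axis=0)   # cost of assigning label 1
    energy_bg = beta * np.sum(neighbors != 0, axis=0)   # cost of assigning label 0
    p_fg = np.exp(-energy_fg)
    p_bg = np.exp(-energy_bg)
    return p_fg / (p_fg + p_bg)          # P(x_s = 1 | neighborhood labels)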

By using Bayes estimation, the posterior probability $P(x \mid d) \propto P(d \mid x)\,P(x)$ can be computed from the prior distribution $P(x)$ (i.e., the smoothness term in the MRF literature) and the likelihood $P(d \mid x)$, where $P(d \mid x)$ is the conditional pdf of the observations and $P(x)$ is the prior probability of the labeling $x$. In standard MRF modeling, the initial labeling is chosen randomly. For the Gaussian MRF model, the class-conditional intensity distribution is
$$P(d_s \mid x_s = l) = \frac{1}{\sqrt{2\pi\sigma_l^2}} \exp\!\left(-\frac{(d_s - \mu_l)^2}{2\sigma_l^2}\right).$$

Here, the parameter set is $\theta = \{\mu_l, \sigma_l \mid l \in L\}$.

Minimizing the Bayes risk is equivalent to maximizing the posterior probability, and the expectation maximization (EM) algorithm is employed for this maximization. In this iterative algorithm, the posterior probability for step $t+1$ is computed in the expectation step by using the model parameters $\theta^{(t)}$ and the labeling at iteration $t$:
$$P^{(t+1)}(x_s = l \mid d_s) = \frac{P(d_s \mid x_s = l;\, \theta^{(t)})\, P^{(t)}(x_s = l)}{\sum_{l' \in L} P(d_s \mid x_s = l';\, \theta^{(t)})\, P^{(t)}(x_s = l')},$$
where $P(d_s \mid x_s = l;\, \theta^{(t)})$ is the Gaussian distribution for class label $l$ at step $t$ and $P^{(t)}(x_s = l)$ is the smoothness-based prior probability of $x_s$ at step $t$.

Then the model parameters are updated in the maximization step as follows:
$$\mu_l^{(t+1)} = \frac{\sum_{s} P^{(t+1)}(x_s = l \mid d_s)\, d_s}{\sum_{s} P^{(t+1)}(x_s = l \mid d_s)}, \qquad
\bigl(\sigma_l^{(t+1)}\bigr)^2 = \frac{\sum_{s} P^{(t+1)}(x_s = l \mid d_s)\,\bigl(d_s - \mu_l^{(t+1)}\bigr)^2}{\sum_{s} P^{(t+1)}(x_s = l \mid d_s)}.$$

This process is repeated until the likelihood difference between two consecutive iterations becomes less than a threshold; the threshold value is kept at 0.001 in this study. The labels are then assigned to the pixels according to the maximum posterior probability.
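The overall MRF-EM loop of this block can be summarized in the following sketch, which reuses the smoothness_prior function above; the soft parameter updates follow the maximization step described in the text, while the stopping rule on the mean log-likelihood is a simplifying assumption.

import numpy as np

def mrf_em_segment(image, init_labels, beta=1.0, tol=1e-3, max_iter=50):
    d = image.astype(float)
    labels = init_labels.astype(int)
    mu = np.array([d[labels == l].mean() for l in (0, 1)])          # initial parameters
    sigma = np.array([d[labels == l].std() + 1e-6 for l in (0, 1)])
    prev_ll = -np.inf
    for _ in range(max_iter):
        lik = np.stack([np.exp(-(d - mu[l]) ** 2 / (2.0 * sigma[l] ** 2)) /
                        (np.sqrt(2.0 * np.pi) * sigma[l]) for l in (0, 1)])
        p_fg = smoothness_prior(labels, beta)                        # MRF prior for label 1
        prior = np.stack([1.0 - p_fg, p_fg])
        post = lik * prior                                           # E-step
        post = post / (post.sum(axis=0, keepdims=True) + 1e-12)
        for l in (0, 1):                                             # M-step
            w = post[l]
            mu[l] = (w * d).sum() / (w.sum() + 1e-12)
            sigma[l] = np.sqrt((w * (d - mu[l]) ** 2).sum() / (w.sum() + 1e-12)) + 1e-6
        labels = post.argmax(axis=0)
        ll = np.log((lik * prior).sum(axis=0) + 1e-12).mean()
        if abs(ll - prev_ll) < tol:                                  # 0.001 in the paper
            break
        prev_ll = ll
    return labels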

(x) Bayesian-EM with Probabilistic Atlas. This part is like (ix) but differs in that the prior probability is not the smoothness term; instead, a probabilistic atlas is introduced in the expectation step of the EM algorithm as follows:
$$P^{(t+1)}(x_s = l \mid d_s) = \frac{P(d_s \mid x_s = l;\, \theta^{(t)})\, P_{\mathrm{atlas}}(x_s = l)}{\sum_{l' \in L} P(d_s \mid x_s = l';\, \theta^{(t)})\, P_{\mathrm{atlas}}(x_s = l')}.$$

Here, $P_{\mathrm{atlas}}(x_s = l)$ is the prior probability, taken directly from the probabilistic atlas: it is the probability that pixel $s$ has label $l$, computed from the manually segmented training images obtained in (iii) (i.e., the fraction of registered training labelings in which the pixel carries label $l$). This probability is kept constant throughout the segmentation.
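Constructing such a probabilistic atlas amounts to averaging the registered manual labelings, as in the sketch below (a direct reading of the description above).

import numpy as np

def probabilistic_atlas(transformed_manual_labels):
    # transformed_manual_labels: binary masseter labelings of the training images,
    # already transformed into the coordinate system of the test image
    stack = np.stack(transformed_manual_labels, axis=0).astype(float)
    p_fg = stack.mean(axis=0)              # P(x_s = 1) per pixel
    return np.stack([1.0 - p_fg, p_fg])    # priors for labels 0 and 1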

(xi) MRF-EM with Unlabeled Prior Information. In this part, we incorporate the prior information into the MRF-EM framework, not by using a labeled atlas, but by using the original unlabeled images in the training set, which can be called a "latent atlas." A new definition is proposed for the prior probability that uses this unlabeled prior information. In this way, the coupling between the latent atlas and the model is updated and learned through the EM steps until convergence.

Unlike other methods that perform a MAP estimation of the labeling and use it in the pairwise clique potential computation, we define the prior probability without using labels. To take advantage of the unlabeled training images, we compute the difference between the mean of each class at the current step and the intensity value of the corresponding pixel in each training image of the neighboring set. We prefer pixels with a smaller difference to have higher clique potentials, so we subtract the difference from 1, which is the maximum value the difference can take for intensities normalized to $[0, 1]$. By summing over all training images, the overall prior probability for pixel $s$ is computed as follows:
$$P^{(t)}(x_s = l) \propto \sum_{r \in N_s^{\mathrm{train}}} \bigl(1 - \bigl|\mu_l^{(t)} - d_r\bigr|\bigr),$$
where $N_s^{\mathrm{train}}$ is the part of the neighboring system shown in Figure 3 that consists of the training sets, and $l$ is the class label (i.e., 1 for the segmented tissue and 0 for the other tissues). Figure 4 shows this prior information for a sample image.
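A sketch of this unlabeled prior is given below; the per-pixel normalization of the summed similarities is our assumption for illustration, and intensities are assumed to be scaled to [0, 1].

import numpy as np

def unlabeled_prior(class_means, training_pixels):
    # class_means: current intensity mean of each class (background, masseter)
    # training_pixels: array of shape (K, H, W) with the corresponding pixels of
    # the K registered but unlabeled training images (part of the Figure 3 neighborhood)
    prior = []
    for mu_l in class_means:
        closeness = 1.0 - np.abs(mu_l - training_pixels)   # larger when intensities agree
        prior.append(closeness.sum(axis=0))
    prior = np.stack(prior)
    return prior / (prior.sum(axis=0, keepdims=True) + 1e-12)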

As can be observed from the image, the prior information gives a good estimate of the pixels that may belong to the target tissue. This image is like the mental image a specialist may have after seeing thousands of MR images.

The important point about this image is that the target tissue appears fully disconnected from the neighboring tissues. This feature greatly helps in the segmentation of FSTs, which are generally connected to neighboring tissues.

4. Results and Discussion

4.1. Evaluation

The validation of the segmentation methods was done by comparing the automatic segmentation results with manual segmentations. For this purpose, the target tissue was segmented manually in each slice by an expert. This process was repeated for all 10 experimental sets, and these manual segmentations were used only as the ground truth.

We used the Dice metric [37] to evaluate the correspondence between the segmentation result and the ground truth. The metric is defined as
$$D(A, B) = \frac{2\,|A \cap B|}{|A| + |B|},$$
where $A$ is the segmented area and $B$ is the ground truth.
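In code, the metric reduces to the following minimal implementation.

import numpy as np

def dice_score(segmentation, ground_truth):
    # 2 * |A intersection B| / (|A| + |B|) for binary masks A and B
    a = segmentation.astype(bool)
    b = ground_truth.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / float(a.sum() + b.sum())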

4.2. Results

The overall accuracy results of 2D segmentation for all methods for all sets are shown in Table 1 and Figure 6 for better visualization.

Atlas-based segmentation is known to be successful in brain tissue classification, but as can be seen in Table 1 (Method a), the results are not very good for the masseter tissue. This is because the shape of the human brain is largely similar across individuals, whereas facial tissues like the masseter may have quite different shapes in different people.

This method depends entirely on the atlas; when the shape and position of the tissue in the atlas differ from those in the test data, the registration may produce a grossly incorrect result.

As can be seen in Table 1, the segmentation result is very poor for set 4 due to the difference between the tissue and head shapes of the atlas and data set 4. If set 4 is excluded, the average accuracy increases by about 5%, to 77.66%.

This problem can be solved either by selecting the experimental data similar to the atlas or by increasing the number of training images in a way that covers all possible shapes, which is not very realistic.

The segmentation result for Method b is very good in most cases, such as sets 1, 2, 3, and 8, but in some cases, such as sets 4 and 5, the results are poor. To investigate this issue, we checked the initial labeling for the worst result (set 5) and the best result (set 8). The region growing outcomes for sets 5 and 8 are shown on the original images in Figures 5(a) and 5(b). As can be seen, the initial labeling for set 5 is so poor that it leads to a poor overall segmentation, whereas set 8 starts with a good estimate and results in more than 92% accuracy.

In Method c, we address the initialization problem by using the modified region growing algorithm (vii) for initialization. As seen in Table 1, there is about a 12% improvement in the average segmentation accuracy. This shows the importance of the initial labeling in MRF-EM segmentation and of using prior information in the region growing algorithm; the prior information used here consists of unlabeled raw training images. Although there is an overall improvement in the segmentation performance, in some cases the accuracy decreases. For example, for the previously examined sets 5 and 8, the accuracy improves by about 59% for set 5 but decreases by about 17% for set 8.

The initial labelings obtained with the modified RG for sets 5 and 8 are shown in Figures 5(c) and 5(d). The improvement for set 5 and the degradation for set 8 are clearly visible. The prior information brings an improvement for set 5, where there is an intensity inhomogeneity, but it is not useful for set 8, whose shape differs from the training sets. Nevertheless, the overall improvement is noticeable and there is no large decline for any case, as can be seen in Figure 5.

The segmentation performance of Method d is close to that of the MRF-based segmentation with the modified region growing initialization (Method c), but it is about 1% lower. Despite the large amount of manual interaction required to provide the prior information for the MRF-EM part, this method shows lower accuracy than the previous method. This is mostly because the initial estimate, the probabilistic atlas prior, is the same constant prior for all images.

Finally, our proposed method (Method e) shows the best overall performance among all tested methods. In 6 out of 10 data sets, the accuracy of this method is over 90%. The worst results are obtained for sets 6 and 7, which also give poor results with the MRF-EM method of Method c, so we can conclude that poor initialization is the problem in these cases. However, this explanation does not hold for the lower accuracies of sets 2 and 4.

The main problem is finding a generic solution that gives good accuracy for all of the images. This requires the training set to be large enough to cover all possible shapes. Since manual labeling is not required for Method e, using many data sets as prior information is feasible. Another source of error may be the affine registration, which can sometimes cause poor initialization.

Method c, which uses the MRF model with the modified RG initialization, performs better than the methods that use labeled prior information, and when unlabeled prior information is also used inside the MRF model, Method e performs better than all other methods. It is important to note that the only manual interaction is the selection of a seed point and a threshold for the region growing algorithm; the threshold value is kept constant thanks to the previously applied histogram equalization. Since Method e does not use any labeled training images, the selection of a seed point to indicate the target tissue is unavoidable, and the rest of the method is fully automatic. The segmentation results of all methods for a test image are shown in Figure 6. Segmentation of the top and bottom slices of the masseter muscle is more challenging because of the small dimensions of the tissue there. The results of our proposed method for images in the bottom slices are shown in Figure 7.

The average accuracy of the proposed method is about 3.5% lower than the results of our previous work [4], where test images which have similar masseter shape were selected manually. However, in this study the images were selected randomly to provide a fair comparison between implemented methods.

5. Conclusion

In this study, we tested four different state-of-the-art methods for facial soft tissue segmentation on magnetic resonance images, in addition to our proposed approach. Each method incorporates prior information into the segmentation process in a different way: some use labeled data as the prior information, some use labeled data only as an initial estimate of the segment, and some do not use prior information at all.

Our main interest in this work was to investigate the role of prior information in FST segmentation using different methods. We applied all of these methods to 10 MRI data sets belonging to different individuals, aiming to segment the masseter muscle in each. The experimental MRI sets were registered three dimensionally before the segmentation so that the slices roughly corresponded to each other.

Method a is fully based on the registration of the labeled training images to the test image. The average accuracy of this method is 72.92%. Method b is an MRF-EM based segmentation method where initial segment estimation is obtained by region growing which starts from a seed point. No prior information is used in this method and the acquired average accuracy is 71.92%.

According to our results, although Method a uses labeled prior information, the accuracy of Method b is very close to it. This shows that atlas-based methods are not as successful as expected in segmentation of FSTs. The most important reasons for this failure are the variation of the tissue shape among the sets and the existence of similar tissues in the neighborhood of the target tissue.

Method c is similar to Method b, except that the initial segment estimate is obtained by the modified region growing algorithm, which uses prior information from the unlabeled training set. The accuracy is improved by 12%, which emphasizes the importance of the initial estimate in the MRF-EM process and of using prior information.

In Method d, a similar MRF-EM framework is used, but this time the labeled training images are used for both the initialization and the MRF model implementation. The method reaches 82.54% accuracy, slightly below that of Method c, which does not use any manual labeling. We may conclude that indicating the target tissue with a seed point and a threshold (as in Method c) is more informative for the MRF-EM framework than labeled atlases.

Finally, Method e uses unlabeled prior information both in the initial estimation and during the MRF-EM optimization. The average accuracy of this method is 86.52%, the best result among the tested methods. The proposed approach starts from the same initial estimates as Method c, but it also uses prior information inside the MRF-EM process, which brings about a 4% improvement in the final segmentation accuracy. The importance of prior information is shown even more clearly when Method b is compared with the proposed method: prior information brings about a 15% improvement.

In a previous study, Ng et al. [27] performed masseter segmentation with an average accuracy of 91.6%. Manually segmented training images are used in their study, which is difficult and time consuming. In addition, they use a criterion to select only the training images that are similar to the test image, which reduces the robustness of the method. Our proposed approach achieves accuracy close to their result without using additional manual information or selection criteria.

We also believe that increasing the number of training sets, even though they are unlabeled, will further improve the accuracy of the method.

The final goal of this study is to segment the masseter tissue three dimensionally and use the results to construct a realistic biomechanical face model for any individual.

Conflict of Interests

The authors do not have any conflict of interest with the content of the paper.

Acknowledgments

Data used in the preparation of this article were obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (http://www.loni.ucla.edu/ADNI). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. Complete listing of ADNI investigators is available at http://www.loni.ucla.edu/ADNI/Collaboration/ADNI_Manuscript_Citations.pdf.