Abstract

Within the learning framework of Maximum Weighted Likelihood (MWL) proposed by Cheung (2004, 2005), this paper develops a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering, provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm in Cheung (2004, 2005), this batch RPEM, analogous to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), need not assign a learning rate, but still preserves the capability of automatic model selection. Further, the convergence speed of this batch RPEM is faster than that of the EM and the adaptive RPEM in general. The experiments show the superior performance of the proposed algorithm on synthetic data and color image segmentation.

1. Introduction

As a typical statistical technique, clustering analysis has been widely applied to a variety of scientific areas such as data mining [1], vector quantization [2, 3], image processing [4–7], and so forth. In particular, clustering analysis provides a useful tool to solve several computer vision problems formulated in the feature-space paradigm, for example, multithresholding of gray-level images, analysis of the Hough space, and range image segmentation [8]. In general, one kind of clustering analysis can be formulated as density mixture modeling, in which each mixture component represents the density distribution of a data cluster. The task of clustering analysis is then to identify the dense regions of the input (also called observation interchangeably) densities in a mixture. Such a clustering is therefore called density mixture clustering.

In general, the Expectation-Maximization (EM) algorithm [9, 10] provides a general solution for the parameter estimation of a density mixture model. Nevertheless, it needs to preassign an appropriate number of density components, that is, the number of clusters. Roughly, the mixture may overfit the data if too many components are utilized, whereas a mixture with too few components may not be flexible enough to approximate the true underlying model. Subsequently, the EM almost always leads to a poor estimate if the number of components is misspecified. Unfortunately, from the practical viewpoint, it is hard or even impossible to know the exact cluster number in advance. In the literature, one promising way is to develop a clustering algorithm that is able to perform a correct clustering without preassigning the exact number of clusters. Such algorithms include the RPCL algorithm [11] and its improved version, namely RPCCL [12]. More recently, Cheung [13, 14] has proposed a general learning framework, namely Maximum Weighted Likelihood (MWL), through which an adaptive Rival Penalized EM (RPEM) algorithm has been developed for density mixture clustering. The RPEM learns the density parameters by making the mixture components compete with each other at each time step. Not only are the parameters of the winning density component updated to adapt to an input, but also all rivals' parameters are penalized with a strength proportional to the corresponding posterior density probabilities. This intrinsic rival-penalization mechanism therefore enables the RPEM to automatically select an appropriate number of densities by gradually fading out the redundant densities from a density mixture. Furthermore, a simplified version of the RPEM includes RPCL and RPCCL as its special cases with some new extensions.

In the papers [13, 14], the RPEM algorithm learns the parameters via a stochastic gradient ascent method; that is, the parameters are updated immediately and adaptively once the current observation is available. In general, this adaptiveness makes the RPEM more applicable to environments that change over time. Nevertheless, the convergence speed of the RPEM relies on the value of the learning rate. Often, by a rule of thumb, we arbitrarily set the learning rate at a small positive constant. If the learning rate is assigned too small a value, the algorithm converges at a very slow speed; on the contrary, if it is too large, the algorithm may even oscillate. In general, it is a nontrivial task to assign an appropriate value to the learning rate, although one can make extra efforts to let the learning rate change dynamically over time, for example, see [15].

In this paper, we further study the MWL learning framework and accordingly develop a batch RPEM algorithm, provided that all observations are available before the learning process. Compared to the adaptive RPEM, this batch one, analogous to the EM, need not assign a learning rate, but still preserves the capability of automatic model selection. Further, the convergence speed of this batch RPEM is faster than that of the EM and the adaptive RPEM in general. The experiments have shown the superior performance of the proposed algorithm on synthetic data and color image segmentation.

The remainder of this paper is organized as follows. Section 2 reviews the MWL learning framework. In Section 3, we present the batch RPEM algorithm in detail, in which the weights involve a coefficient $\varepsilon$. We will therefore further explore the assignment of $\varepsilon$ in Section 4. Section 5 shows the detailed experimental results. Finally, we draw a conclusion in Section 6.

2. Overview of Maximum Weighted Likelihood (MWL) Learning Framework

Suppose that an input $\mathbf{x}$ comes from the following density mixture model:

$$p(\mathbf{x} \mid \Theta) = \sum_{j=1}^{k} \alpha_j \, p(\mathbf{x} \mid \theta_j), \qquad \alpha_j > 0, \qquad \sum_{j=1}^{k} \alpha_j = 1, \qquad (1)$$

where $\Theta = \{\alpha_j, \theta_j\}_{j=1}^{k}$ is the parameter set of the mixture. Furthermore, $k$ is the number of components, $\alpha_j$ is the mixture proportion of the $j$th component, and $p(\mathbf{x} \mid \theta_j)$ is a multivariate probability density function (pdf) of $\mathbf{x}$ parameterized by $\theta_j$. As long as we know the value of $\Theta$, an input $\mathbf{x}$ can be classified into a certain cluster via its posterior probability

$$h(j \mid \mathbf{x}, \Theta) = \frac{\alpha_j \, p(\mathbf{x} \mid \theta_j)}{p(\mathbf{x} \mid \Theta)} \qquad (2)$$

using the winner-take-all rule, that is, assigning an input $\mathbf{x}$ to Cluster $c$ with $c = \arg\max_j h(j \mid \mathbf{x}, \Theta)$, or taking its soft version, which assigns $\mathbf{x}$ to Cluster $j$ with the probability $h(j \mid \mathbf{x}, \Theta)$. Therefore, how to estimate the parameter set $\Theta$, particularly without knowing the correct value of $k$ in advance, is a key issue in density mixture clustering.
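As a concrete illustration of (1) and (2), the following short sketch (an illustrative NumPy/SciPy implementation with our own function and variable names, not code from [13, 14]) evaluates the posterior probabilities of a Gaussian mixture and assigns inputs by the winner-take-all rule.

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior(X, alphas, means, covs):
    """Compute h(j | x, Theta) of (2) for every input x and component j."""
    # weighted component densities alpha_j * p(x | theta_j), shape (N, k)
    weighted = np.column_stack([
        a * multivariate_normal.pdf(X, mean=m, cov=S)
        for a, m, S in zip(alphas, means, covs)
    ])
    mixture = weighted.sum(axis=1, keepdims=True)  # p(x | Theta) of (1)
    return weighted / mixture                      # h(j | x, Theta)

# toy example with k = 2 bivariate Gaussian components
X = np.array([[0.0, 0.0], [3.0, 3.0]])
h = posterior(X,
              alphas=[0.5, 0.5],
              means=[np.zeros(2), 3.0 * np.ones(2)],
              covs=[np.eye(2), np.eye(2)])
labels = h.argmax(axis=1)  # winner-take-all cluster assignment
```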

In the MWL learning framework [13, 14], the parameter set $\Theta$ is learned via maximizing the following Weighted Likelihood (WL) cost function:

$$Q(\Theta) = \int \mathcal{M}(\Theta; \mathbf{x}) \, dF(\mathbf{x}) \qquad (3)$$

with

$$\mathcal{M}(\Theta; \mathbf{x}) = \sum_{j=1}^{k} g(j \mid \mathbf{x}, \Theta) \ln\!\big[\alpha_j \, p(\mathbf{x} \mid \theta_j)\big] - \sum_{j=1}^{k} g(j \mid \mathbf{x}, \Theta) \ln h(j \mid \mathbf{x}, \Theta), \qquad (4)$$

where the $g(j \mid \mathbf{x}, \Theta)$'s are the designable weights satisfying the two conditions: (i) $\sum_{j=1}^{k} g(j \mid \mathbf{x}, \Theta) = 1$, and (ii) $g(j \mid \mathbf{x}, \Theta) = 0$ if $h(j \mid \mathbf{x}, \Theta) = 0$.

Suppose that a set of $N$ i.i.d. observations, denoted as $\mathcal{X}_N = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}$, comes from the density mixture model in (1). The empirical WL function of (3), written as $\hat{Q}(\Theta; \mathcal{X}_N)$, can be given as

$$\hat{Q}(\Theta; \mathcal{X}_N) = \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta) \ln\!\big[\alpha_j \, p(\mathbf{x}_t \mid \theta_j)\big] - \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta) \ln h(j \mid \mathbf{x}_t, \Theta) \qquad (5)$$

with

$$h(j \mid \mathbf{x}_t, \Theta) = \frac{\alpha_j \, p(\mathbf{x}_t \mid \theta_j)}{p(\mathbf{x}_t \mid \Theta)}. \qquad (6)$$

Moreover, the weights $g(j \mid \mathbf{x}_t, \Theta)$'s have been generally designed as [13, 14]

$$g(j \mid \mathbf{x}_t, \Theta) = (1 + \varepsilon_t)\, I(j \mid \mathbf{x}_t, \Theta) - \varepsilon_t \, h(j \mid \mathbf{x}_t, \Theta), \qquad (7)$$

where $\varepsilon_t$ is a coefficient varying with the time step $t$ in general. Please note that the $g(j \mid \mathbf{x}_t, \Theta)$'s in (7) can be negative as well as positive. For simplicity, we hereinafter set $\varepsilon_t$ as a constant, denoted as $\varepsilon$. Moreover, $I(j \mid \mathbf{x}_t, \Theta)$ is an indicator function with

$$I(j \mid \mathbf{x}_t, \Theta) = \begin{cases} 1, & \text{if } j = c = \arg\max_{1 \le i \le k} h(i \mid \mathbf{x}_t, \Theta), \\ 0, & \text{otherwise.} \end{cases} \qquad (8)$$
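The weight design of (7) and (8) can be computed directly from the posterior probabilities. The sketch below is our own illustration (variable names ours); it assumes the posteriors have already been arranged in an N-by-k array, one row per observation.

```python
import numpy as np

def weights(h, eps):
    """Weight design of (7): g = (1 + eps) * I - eps * h, where the indicator
    I of (8) picks the winning component c = argmax_j h(j | x, Theta)."""
    N, k = h.shape
    I = np.zeros_like(h)
    I[np.arange(N), h.argmax(axis=1)] = 1.0
    return (1.0 + eps) * I - eps * h

# example: posteriors for two inputs and k = 3 components
h = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])
g = weights(h, eps=-0.8)
print(g.sum(axis=1))  # each row of g still sums to 1
```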

Subsequently, under a specific weight design, the papers [13, 14] have presented the adaptive RPEM, which learns $\Theta$ via maximizing the WL function of (5) using a stochastic gradient ascent method and is able to gradually fade out the redundant densities from a density mixture. Consequently, it can automatically select an appropriate number of density components in density mixture clustering. Interested readers may refer to the paper [14] for more details. We summarize the main steps of the adaptive RPEM in Algorithm 1. Although the experiments have shown the superior performance of the adaptive RPEM on automatic model selection, its convergence speed, however, relies on the value of the learning rate. Under the circumstances, we will present a batch version without the learning rate in the next section.

Initialization: Given a specific $k$ ($k \ge k^*$, where $k^*$ is the true number of clusters), initialize the parameter set $\Theta^{(0)}$.
Step 1: Given the current input $\mathbf{x}_t$ and the parameter estimate, written as $\Theta^{(n)}$, compute the $h(j \mid \mathbf{x}_t, \Theta^{(n)})$'s and $g(j \mid \mathbf{x}_t, \Theta^{(n)})$'s via (2) and (7), respectively.
Step 2: Given the $h$'s and $g$'s, we update $\Theta$ by
$\Theta^{(n+1)} = \Theta^{(n)} + \eta \, \dfrac{\partial q(\Theta; \mathbf{x}_t)}{\partial \Theta}\Big|_{\Theta = \Theta^{(n)}}$,
with
$q(\Theta; \mathbf{x}_t) = \sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta^{(n)}) \ln\!\big[\alpha_j \, p(\mathbf{x}_t \mid \theta_j)\big]$,
where $\eta$ is a small positive learning rate.
Step 3: Let $n = n + 1$, and go to Step 1 for the next iteration until $\Theta$ is converged.
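To make Step 2 of Algorithm 1 concrete, here is a simplified sketch (our own illustration, not the authors' implementation) of one adaptive update restricted to the means of Gaussian components with the covariances held fixed; it uses the gradient $\partial q / \partial \mathbf{m}_j = g_j \Sigma_j^{-1}(\mathbf{x}_t - \mathbf{m}_j)$, and a full update would treat the mixing proportions and covariances analogously.

```python
import numpy as np
from scipy.stats import multivariate_normal

def adaptive_mean_update(x, alphas, means, covs, eps=-0.8, eta=0.05):
    """One stochastic-gradient step of the adaptive RPEM on the means only."""
    dens = np.array([a * multivariate_normal.pdf(x, m, S)
                     for a, m, S in zip(alphas, means, covs)])
    h = dens / dens.sum()                 # posterior of (2)
    I = np.zeros_like(h)
    I[h.argmax()] = 1.0                   # indicator of (8)
    g = (1.0 + eps) * I - eps * h         # weights of (7)
    # gradient ascent on q(Theta; x_t) with respect to each mean m_j
    return [m + eta * gj * np.linalg.solve(S, x - m)
            for gj, m, S in zip(g, means, covs)]

# the winner's mean is pulled towards x with a weight close to 1, while a
# rival's move is down-weighted (or reversed when its weight g_j is negative)
x = np.array([1.0, 1.0])
new_means = adaptive_mean_update(x, [0.5, 0.5],
                                 [np.zeros(2), np.array([0.5, 0.5])],
                                 [np.eye(2), np.eye(2)])
```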

3. Batch RPEM Algorithm

To estimate the parameter set $\Theta$ within the MWL framework, we have to maximize the empirical WL function $\hat{Q}(\Theta; \mathcal{X}_N)$ in (5). In general, we update the parameters via maximizing the first term of (5), that is, $\frac{1}{N}\sum_{t=1}^{N}\sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta) \ln[\alpha_j \, p(\mathbf{x}_t \mid \theta_j)]$, by fixing the second term of (5). Subsequently, we need to solve the following nonlinear optimization problem:

$$\max_{\Theta} \; \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta^{\mathrm{old}}) \ln\!\big[\alpha_j \, p(\mathbf{x}_t \mid \theta_j)\big] \qquad (9)$$

subject to the constraints shown in (1). We adopt the Lagrange method analogous to the EM by introducing a Lagrange multiplier $\lambda$ into the Lagrange function. Subsequently, we have

$$\mathcal{L}(\Theta, \lambda) = \frac{1}{N} \sum_{t=1}^{N} \sum_{j=1}^{k} g(j \mid \mathbf{x}_t, \Theta^{\mathrm{old}}) \ln\!\big[\alpha_j \, p(\mathbf{x}_t \mid \theta_j)\big] + \lambda \Big(1 - \sum_{j=1}^{k} \alpha_j\Big). \qquad (10)$$

In this paper, we concentrate on the Gaussian mixture model only; that is, each component in (1) is a Gaussian density. We then have

$$p(\mathbf{x} \mid \theta_j) = \frac{1}{(2\pi)^{d/2} |\Sigma_j|^{1/2}} \exp\!\Big[-\frac{1}{2} (\mathbf{x} - \mathbf{m}_j)^{T} \Sigma_j^{-1} (\mathbf{x} - \mathbf{m}_j)\Big], \qquad (11)$$

where $\theta_j = \{\mathbf{m}_j, \Sigma_j\}$, $d$ is the dimension of $\mathbf{x}$, and $\mathbf{m}_j$ and $\Sigma_j$ are the mean (also called seed point interchangeably) and the covariance of the $j$th density, respectively.

Through optimizing (10), we finally obtain the batch RPEM algorithm as shown in Algorithm 2. If a covariance matrix $\Sigma_j$ is singular, it indicates that the corresponding $j$th density component is degenerate and can simply be discarded without being learned any more in the subsequent iterations. In this case, we have to normalize the remaining $\alpha_j$'s so that their sum is always kept to 1.
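The discard-and-renormalize rule just described can be implemented as a small housekeeping step between iterations. The sketch below is our own illustration, with a numerical condition-number test standing in for an exact singularity check.

```python
import numpy as np

def prune_degenerate(alphas, means, covs, cond_max=1e12):
    """Drop components whose covariance matrix is numerically singular and
    renormalize the remaining mixing proportions so that they sum to 1."""
    conds = [np.linalg.cond(S) for S in covs]
    keep = [j for j, c in enumerate(conds) if np.isfinite(c) and c < cond_max]
    kept_alphas = np.asarray(alphas, dtype=float)[keep]
    return (kept_alphas / kept_alphas.sum(),
            [means[j] for j in keep],
            [covs[j] for j in keep])
```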

Initialization: Given a specific $k$ ($k \ge k^*$), initialize the parameter set $\Theta^{(0)}$.
Step 1: Given $\Theta^{(n)}$, we compute the $h(j \mid \mathbf{x}_t, \Theta^{(n)})$'s and $g(j \mid \mathbf{x}_t, \Theta^{(n)})$'s for all $\mathbf{x}_t$'s via (2) and (7), respectively.
Step 2: Fixing the $h$'s and $g$'s, we update $\Theta$ by
$\alpha_j^{(n+1)} = \dfrac{\zeta_j}{N}$,
$\mathbf{m}_j^{(n+1)} = \dfrac{1}{\zeta_j} \sum_{t=1}^{N} g(j \mid \mathbf{x}_t, \Theta^{(n)}) \, \mathbf{x}_t$,
$\Sigma_j^{(n+1)} = \dfrac{1}{\zeta_j} \sum_{t=1}^{N} g(j \mid \mathbf{x}_t, \Theta^{(n)}) \big(\mathbf{x}_t - \mathbf{m}_j^{(n+1)}\big)\big(\mathbf{x}_t - \mathbf{m}_j^{(n+1)}\big)^{T}$,
where $\zeta_j = \sum_{t=1}^{N} g(j \mid \mathbf{x}_t, \Theta^{(n)})$.
Step 3: Let $n = n + 1$, and go to Step 1 for the next iteration until $\Theta$ is converged.
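Below is a minimal NumPy sketch of one batch RPEM iteration (Steps 1 and 2) for a Gaussian mixture, written with our own variable names; it is an illustration rather than the authors' implementation. In practice the step is repeated until $\Theta$ converges, discarding degenerate components along the way as described earlier.

```python
import numpy as np
from scipy.stats import multivariate_normal

def batch_rpem_step(X, alphas, means, covs, eps=-0.8):
    """One batch RPEM iteration: compute h and g for all observations,
    then re-estimate (alpha_j, m_j, Sigma_j) in closed form."""
    N = X.shape[0]
    # Step 1: posteriors h of (2) and weights g of (7) for all observations
    dens = np.column_stack([a * multivariate_normal.pdf(X, m, S)
                            for a, m, S in zip(alphas, means, covs)])
    h = dens / dens.sum(axis=1, keepdims=True)
    I = np.zeros_like(h)
    I[np.arange(N), h.argmax(axis=1)] = 1.0
    g = (1.0 + eps) * I - eps * h
    # Step 2: closed-form re-estimation via weighted sample statistics
    zeta = g.sum(axis=0)                    # total weight of each component
    new_alphas = zeta / N
    new_means = (g.T @ X) / zeta[:, None]
    new_covs = []
    for j in range(len(alphas)):
        diff = X - new_means[j]
        new_covs.append((g[:, j, None] * diff).T @ diff / zeta[j])
    return new_alphas, new_means, new_covs
```

Note that, unlike in the EM, the weights $g$ need not be nonnegative, which is precisely why the choice of $\varepsilon$ discussed in the next section matters.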

In the above batch RPEM, the capability of automatic model selection is controlled by the weight functions $g(j \mid \mathbf{x}_t, \Theta)$'s, which further rely on the parameter $\varepsilon$ as shown in (7). Subsequently, a new question arises: how do we assign an appropriate value of $\varepsilon$? The next section will answer this question.

4. How to Assign the Parameter $\varepsilon$?

To deal with how to assign an appropriate value of $\varepsilon$, we rewrite (7) in the following form:

$$g(j \mid \mathbf{x}_t, \Theta) = h(j \mid \mathbf{x}_t, \Theta) + (1 + \varepsilon)\big[I(j \mid \mathbf{x}_t, \Theta) - h(j \mid \mathbf{x}_t, \Theta)\big]. \qquad (12)$$

Intuitively, the term $(1 + \varepsilon)[1 - h(c \mid \mathbf{x}_t, \Theta)]$ can be regarded as the award received by the winning density component (i.e., the density with $I(c \mid \mathbf{x}_t, \Theta) = 1$), and meanwhile the term $-(1 + \varepsilon) h(j \mid \mathbf{x}_t, \Theta)$ is the penalty of the rival components (i.e., those densities with $I(j \mid \mathbf{x}_t, \Theta) = 0$). In general, it is expected that the award is positive and the penalty is negative; that is, $\varepsilon$ should be greater than −1. Otherwise, as $\varepsilon < -1$, we meet an awkward situation: the amount of award becomes negative and the penalty becomes positive. This implies that we would penalize the winner and award the rivals, which evidently violates our expectations. Furthermore, as $\varepsilon = -1$, both the award and the penalty amounts become zero. In this special case, the batch RPEM actually degenerates into the EM without the property of automatic model selection. As a result, $\varepsilon$ is required to be greater than −1. In addition, $\varepsilon$ in the batch RPEM should be a negative value. Otherwise, the weights of the rival components, $g(j \mid \mathbf{x}_t, \Theta) = -\varepsilon h(j \mid \mathbf{x}_t, \Theta)$, become negative, resulting in some $\alpha_j$'s eventually becoming negative. Hence, an appropriate selection of $\varepsilon$ in the batch RPEM is a negative value greater than −1; that is, $\varepsilon$ should fall into the range $(-1, 0)$.
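A quick numerical check of this reasoning (our own illustration; the posterior values are arbitrary) shows how the signs of the award and the penalty change with $\varepsilon$.

```python
def award_and_penalty(h_winner, h_rival, eps):
    """Award to the winner and penalty to a rival under the rewriting (12):
    award = (1 + eps) * (1 - h_winner), penalty = -(1 + eps) * h_rival."""
    return (1 + eps) * (1 - h_winner), -(1 + eps) * h_rival

for eps in (-1.5, -1.0, -0.8, 0.5):
    a, p = award_and_penalty(h_winner=0.7, h_rival=0.2, eps=eps)
    print(f"eps={eps:+.1f}: award={a:+.3f}, penalty={p:+.3f}")
# eps in (-1, 0) gives a positive award, a negative penalty, and nonnegative
# rival weights g = -eps * h_rival; eps = -1 reduces both amounts to zero (EM)
```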

Furthermore, our empirical studies have found that a smaller $\varepsilon$ (i.e., one closer to −1) leads the batch RPEM algorithm to a more robust performance, especially when the value of $k$ is large and the data are overlapped considerably. In other words, the algorithm has a poor capability of automatic model selection if $\varepsilon$ is close to zero. To illustrate this scenario, we utilized two synthetic data sets generated from two bivariate three-Gaussian mixtures individually, as shown in Figures 1(a) and 1(b), where both data sets were generated with the same three true mixture proportions. The true means and covariances of data set 1 are shown in Figure 1(a), while the true parameters of data set 2 are shown in Figure 1(b). It can be seen that the clusters in data set 1 are well separated, whereas the clusters in data set 2 are overlapped considerably.

For each data set, we conducted three experiments under different settings of $k$ and $\varepsilon$. Also, all $\alpha_j$'s were initialized equally, and all $\Sigma_j$'s were initialized at the identity matrix. During the learning process, we discarded those densities whose covariance matrices $\Sigma_j$'s were singular. Table 1 shows the performance of the batch RPEM over the parameter $\varepsilon$. We found that, for the settings of $k$ considered, all values of $\varepsilon$ we tried from −0.9 to −0.1 led to good performance of the algorithm on data set 1. For example, with $k = 8$, we randomly initialized the eight seed points in the input space as shown in Figure 2(a). After all the parameters had converged, 2 of the 8 density components had been discarded; among the mixture proportions of the remaining components, the three principal mixing proportions well estimated the true ones, while the other proportions were inclined to zero. As shown in Figure 2(b), the corresponding three converged $\mathbf{m}_j$'s have successfully stabilized at the corresponding cluster centers, meanwhile the other three redundant seed points have been pushed away and stably located at the boundary of the clusters. That is, the redundant densities have been faded out through the learning; thus the batch RPEM can select the model automatically as well as the adaptive version.

Nevertheless, when $k$ is set at a large value, for example, $k = 20$, it is found that the proposed algorithm cannot fade out the redundant density components from a mixture if $\varepsilon$ is close to 0. Instead, we should set $\varepsilon$ at a value close to −1. For example, with $k = 20$ and $\varepsilon$ close to −1, we ran the proposed algorithm, and 13 of the 20 seed points were maintained after discarding those whose covariance matrices $\Sigma_j$'s were singular. Among the converged mixture proportions of the remaining components, the three principal mixing proportions again well estimated the true ones, while the other proportions tended to zero. As shown in Figure 3(a), the learned $\mathbf{m}_j$'s are correctly allocated at the centers of the three clusters, and the other redundant seed points were driven away to the boundaries of the clusters. Hence, the batch algorithm performed a good model selection by assigning $\varepsilon$ close to −1. In contrast, if we assign $\varepsilon$ a value close to zero, the algorithm leads to a poor model selection: only 1 of the 20 components was discarded, and none of the remaining 19 converged $\alpha_j$'s tends to zero. As shown in Figure 3(b), all the converged positions have a bias from the cluster centers. In other words, the algorithm has a poor performance as $\varepsilon$ gets close to zero. Hence, if $k$ is large, it is better to choose a relatively smaller value of $\varepsilon$ between −1 and 0.

In addition, we also investigated the assignment of $\varepsilon$ on data set 2, where the data are considerably overlapped. With $\varepsilon$ set close to −1, the converged positions of the seed points are shown in Figure 4(a): the learned positions converged to the cluster centers while the redundant seed points were driven to the boundaries of the clusters. That is, the proposed batch algorithm can work quite well with such a setting. Also, we ran the algorithm again for comparison with $\varepsilon$ close to zero. As shown in Figure 4(b), the converged positions of the seed points have a bias from the cluster centers. This implies that values of $\varepsilon$ close to zero cannot work well in this case. More examples can be found in Table 1. It can be seen that the feasible region of $\varepsilon$ shrinks with the overlap level of the data. For example, the appropriate values of $\varepsilon$ lie in a narrower range when using data set 2 with the larger settings of $k$, whereas $\varepsilon$ is feasible in the full range we have tried so far when data set 1 is used. Hence, by a rule of thumb, we should choose a value of $\varepsilon$ close to −1. Nevertheless, we have also noted that it is not a good choice if $\varepsilon$ is too close to −1: the proposed algorithm gradually degenerates to the EM as $\varepsilon$ tends to −1, and its capability of model selection is correspondingly reduced. To sum up, the empirical studies suggest choosing $\varepsilon$ well inside the interval $(-1, 0)$, close to −1 but not too close. In the next section, we therefore set $\varepsilon$ at −0.8.

5. Experimental Results

To evaluate the performance of the batch RPEM algorithm, we have conducted the following three experiments.

5.1. Experiment 1: Batch RPEM on Synthetic Data with $k = k^*$

This experiment evaluated the convergence speed of the batch RPEM. We utilized data points generated from a mixture of three bivariate Gaussian densities. We let $k = 3$, which is equal to the true number of mixture components. The three seed points were randomly allocated in the observation space as shown in Figure 5(a), where the data are considerably overlapped. Moreover, all $\alpha_j$'s were initialized equally, and all $\Sigma_j$'s were initialized at the identity matrix. Figure 5(d) shows the positions of the three converged seed points, which are all stably located at the corresponding cluster centers. For comparison, we also implemented the EM under the same experimental environment. Figure 5(b) shows that the EM successfully located the three seed points as well as the batch RPEM did.
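The observations in this experiment are drawn from the mixture model (1). The sketch below illustrates such sampling with placeholder parameter values of our own choosing (the paper's exact values are not reproduced here); it is meant only to show the generation mechanism, not to recreate the actual data set.

```python
import numpy as np

def sample_gaussian_mixture(n, alphas, means, covs, rng):
    """Draw n observations from (1): sample a component label with
    probabilities alpha_j, then sample from that Gaussian component."""
    labels = rng.choice(len(alphas), size=n, p=alphas)
    X = np.array([rng.multivariate_normal(means[j], covs[j]) for j in labels])
    return X, labels

# placeholder parameters for a three-component bivariate mixture
rng = np.random.default_rng(0)
alphas = [0.3, 0.4, 0.3]
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0]), np.array([4.0, 0.0])]
covs = [0.5 * np.eye(2)] * 3
X, y = sample_gaussian_mixture(1000, alphas, means, covs, rng)
```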

Nevertheless, as shown in Figures 6(c) and 7, the batch RPEM converges within 20 epochs, while the EM needs 60 epochs as shown in Figure 6(a). That is, the convergence speed of the batch RPEM is significantly faster than that of the EM. This indicates that the intrinsic rival-penalization scheme of the batch RPEM, analogous to that of the RPCL [11], RPCCL [12], and the adaptive RPEM [14], is able to drive the rival seed points away so that they move more quickly towards the other cluster centers. As a result, the batch RPEM converges much faster than the EM. Moreover, we also compared it with the adaptive RPEM, in which the learning rate $\eta$ was set at a small constant. Figure 5(c) shows the converged positions of the seed points. It can be seen that the adaptive RPEM works quite well in this case, but it needs around 70 epochs as shown in Figure 6(b). Actually, the adaptive RPEM can be further sped up if an appropriate learning rate is adopted, which, however, is not a trivial task.

5.2. Experiment 2: Batch RPEM on Synthetic Data with $k > k^*$

This experiment investigated the performance of the batch RPEM as $k > k^*$. We generated observations from a mixture of five bivariate Gaussian densities, and 15 seed points ($k = 15$) were initialized in the input space arbitrarily. During the learning, three density components were discarded because their corresponding covariances became singular. As a result, 12 components remained; among their converged proportions, the five principal values were estimated well, while the others were learned towards zero. As shown in Figure 8, the corresponding five seed points have been successfully allocated at the cluster centers, meanwhile the batch RPEM drove the redundant seed points to the boundaries of the clusters.

5.3. Experiment 3: Batch RPEM on Color Image Segmentation

This experiment further investigated the batch RPEM algorithm on color image segmentation in comparison with the EM algorithm. We performed the image segmentation in the red-green-blue (RGB) color space, which represents each pixel in an image by a three-color vector. We conducted color image segmentation on a hand image and a house image as shown in Figures 9(a) and 10, respectively. For the former, we initially assigned 10 seed points randomly. After the algorithms converged, a snapshot of their segmentation results is shown in Figures 9(b) and 9(c). It can be seen that the tiny blue swim-ring-shaped region segmented by the batch RPEM is smoother than that produced by the EM. Further, the tiny nail regions have been partitioned by the batch RPEM but not by the EM. In other words, the batch RPEM algorithm performs better than the EM algorithm.
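The pixel-vector formulation is straightforward to set up in code. The sketch below is our own illustration: `fit_mixture` is a placeholder for any Gaussian-mixture estimator returning the mixing proportions, means, and covariances (for instance, an iterated version of the batch RPEM step sketched in Section 3); the routine reshapes an RGB image into three-color vectors, clusters them, and paints every pixel with its cluster mean.

```python
import numpy as np
from scipy.stats import multivariate_normal

def segment_rgb(image, fit_mixture, k=10):
    """Segment an RGB image by clustering its pixels in color space.
    `fit_mixture(X, k)` is any routine returning (alphas, means, covs)."""
    H, W, _ = image.shape
    X = image.reshape(-1, 3).astype(float)   # one three-color vector per pixel
    alphas, means, covs = fit_mixture(X, k)
    # assign each pixel to the component with the largest posterior, as in (2)
    dens = np.column_stack([a * multivariate_normal.pdf(X, m, S)
                            for a, m, S in zip(alphas, means, covs)])
    labels = dens.argmax(axis=1)
    segmented = np.asarray(means)[labels].reshape(H, W, 3)
    return labels.reshape(H, W), segmented
```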

For the house image, we initially assigned 80 seed points. A snapshot of the converged segmentation results of the EM and the batch RPEM is shown in Figure 11. It can be seen that the texture on the red wall and the green lawn is no longer maintained after the segmentation process by either the EM or the batch RPEM. However, the small white window regions on the red wall, as well as the triangular shadow area on the wall, disappeared under the EM. In contrast, the batch RPEM algorithm partitioned these regions well, as shown in Figure 11(b). Actually, the batch RPEM has driven the redundant seed points far away and maintained the principal components correctly, which therefore leads to a better performance in color image segmentation.

6. Conclusion

In this paper, we have developed a batch RPEM algorithm based on the MWL learning framework for Gaussian mixture clustering. Compared to the adaptive RPEM, this new one need not select the value of the learning rate. As a result, it learns faster in general and still preserves the capability of automatic model selection analogous to the adaptive one. We have evaluated the proposed batch RPEM algorithm on both synthetic data and color image segmentation. The numerical results have shown the efficacy of the proposed algorithm.

Acknowledgments

This work was jointly supported by grants from the Research Grants Council of the Hong Kong SAR with Project Code HKBU 210309, the Natural Science Foundation of China (60974077), the Natural Science Foundation of Guangdong Province (s2011010005075), Guangzhou Technology Projects (11c42110781), Grants 60403011 and 60973154 from the NSFC, and NCET-07-0338 from the Ministry of Education, China. This work was also partially supported by the Fundamental Research Funds for the Central Universities (HUST: 2010ZD025) and the Hubei Provincial Science Foundation under Grant 2010CDA006, China.