Abstract

Regression clustering is a statistical learning and data mining method that combines unsupervised and supervised learning and is found in a wide range of applications, including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to least squares and robust statistical methods. We also provide a model-selection-based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method.

1. Introduction

Regression and clustering are probably two of the most important statistical data mining methods used in practice, including in artificial intelligence and neuroscience. However, regression clustering, a data mining method integrating the two, has rarely been studied as a single entity despite its great potential for practical use. The intention of this paper is therefore to focus on statistical estimation, selection, and computing for regression clustering. In this section, we briefly review cluster analysis, but not the familiar regression analysis, and then introduce the regression clustering problem.

(1) Cluster Analysis. Cluster analysis is an important unsupervised statistical learning and data mining technique for clustering homogeneous observations from data. Its main objective is to divide a collection of data points, often of multivariate nature, into subsets or “clusters” such that observations within one cluster are more “similar” (homogeneous) to each other than to observations in different clusters. Cluster analysis is usually used in situations where clustering information is not observed on the data points and one wants to get this information from the data to explicitly group them.

Many approaches have been developed in cluster analysis, which in general fall into two categories: hierarchical and partitive. A hierarchical approach proceeds by either a sequence of “agglomerative” stages or a sequence of “divisive” ones. At each agglomerative stage, clusters are produced by merging or retaining the clusters produced at the immediate previous stage, where clusters at the initial stage may simply be taken to be those individual data points. Contrarily, at each divisive stage, clusters are produced by splitting or retaining the clusters produced at the immediate previous stage, where one may assume a single cluster containing all the data points at the initial stage. The key feature of a hierarchical approach is that clusters obtained at one stage are derived from those in the immediate previous stage. On the other hand, partitive approaches refer to those nonhierarchical ones which may be further classified according to other features of clustering.

The outcome of a hierarchical clustering is often represented by a graph called a dendrogram, in which each stage of merging or splitting is determined by optimizing some similarity or dissimilarity criterion. A significant drawback of hierarchical clustering methods is that the divisions or fusions, once made, are irrevocable. That is, when an agglomerative algorithm has joined two objects into one cluster, they cannot subsequently be separated, and when a divisive algorithm has made an unwanted split, the objects involved can no longer be recombined into one cluster. Kaufman and Rousseeuw [1] comment on this as follows: “A hierarchical method suffers from the defect that it can never repair what was done in previous steps.”

In contrast, a partitive clustering constructs a fixed number of clusters, often by an iterative procedure. It imposes two requirements in the procedure: (i) each cluster must contain at least one object and (ii) each object must belong to exactly one cluster. In addition, the number of clusters constructed stays fixed during the iterations, and an initial partition is required to start the iteration. At each iteration, a tentative partition is constructed by relocating the data points to optimize a conditional criterion. This procedure continues until certain convergence or stability of the partition occurs. Commonly used partitive clustering approaches include the k-means type of methods: k-means, k-modes, k-medians, and k-medoids [2, 3]. New developments in this regard can be found in Hastie et al. [4, Section 14.3] and Clarke et al. [5, Chapter 8], for example.

We present an example here to illustrate the use of the k-means method for clustering. The example uses the well-known Iris data from Anderson [6], which were analyzed by Fisher [7] and many others. The data give the measurements in centimeters of the variables sepal length and width and petal length and width, respectively, for 50 flowers from each of the 3 species of Iris: setosa, versicolor, and virginica. The data, which can be retrieved from the statistics package R [8], are displayed in Figure 1, where we see the data of sepal length and width and petal length and width distributed in clusters. So we use the k-means algorithm of Hartigan and Wong [3] to find a partition of 3 clusters for the data and compare the partition with the species information given. The computing is done in R with a random initial partition determined by set.seed(123). The result is summarized in Table 1, from which we see a perfect match between cluster 1 and species setosa and some mismatch between clusters 2 and 3 and species versicolor and virginica.
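For readers who wish to reproduce this kind of analysis, a minimal R sketch is given below; it assumes only base R and the built-in iris data set, and the cross-tabulation at the end corresponds to the type of summary reported in Table 1.

```r
# k-means clustering of the four Iris measurements into 3 clusters.
data(iris)
set.seed(123)  # random initial partition
fit <- kmeans(iris[, 1:4], centers = 3, algorithm = "Hartigan-Wong")

# Cross-tabulate the cluster labels against the known species
# (the kind of comparison summarized in Table 1).
table(Cluster = fit$cluster, Species = iris$Species)
```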

(2) Regression Clustering. In this paper, we will focus on regression clustering, a data mining method which iteratively partitions the data into clusters according to the available regression patterns and then updates the regression fit within each cluster, until equilibrium is attained. It is commonly known that regression is for studying the relationship between a dependent variable and a set of explanatory variables which have observations on a sample of objects. If the sampled objects come from different populations and the variable indexing the populations also has an effect on the dependent variable, the regression should be performed on the individual populations separately through the corresponding observed subsamples, or by including the population effect in the model, in order to make valid or more reliable statistical inference. However, the population indexing variable sometimes is not observed or is unobservable. In such situations, it is necessary to cluster the sample objects so that they conform to their respective populations as much as possible and then apply regression to each cluster. We refer to this procedure as regression clustering if our focus is clustering the data points or as cluster regression if it is studying the unobserved regression patterns in the data.

Before getting into the details of regression clustering, we review various measures of similarity or dissimilarity used in general cluster analysis. Note that to identify possible clusters of observations in data it is essential to be able to measure how close or how far individual data objects are to/from each other. Common measures include the single linkage (nearest neighbour) and complete linkage (furthest neighbour) distances (cf. [1]) and the k-means criterion. These are usually considered descriptive since they do not involve any probability distribution and use only descriptive statistics as the measures of similarity or dissimilarity between observations. An obvious disadvantage of using a descriptive measure is that one cannot make statistical inference on the results of clustering; thus, one is not able to assess the variability involved in the results. To enable statistical inference, probability distributions or models are postulated for the clusters of data, and it is deemed that data in the same cluster have the same probability distribution. Hence, the similarity or dissimilarity measures to be used are assigned a probability distribution, and the significance and variability of clustering can be readily derived. Probability model based approaches can be applied in both hierarchical and partitive types of clustering. We choose to use the probability model for partitive regression clustering here.

Note that there is no absolute boundary between descriptive and probability model based clustering methods. Some clustering methods were heuristically motivated, but later on, statisticians studied their performance from a probabilistic perspective. For instance, MacQueen [2] and Pollard [9] studied the asymptotic behaviour of k-means using a probability model based approach; Hartigan [10] and Wong [11] investigated the mathematical relationship between high-density clusters and the single-linkage clustering method.

Consider a finite set of $n$ objects together with data $(x_i, y_i)$, $i = 1, \dots, n$, being the observations of these objects. The problem of regression clustering is to recover a latent partition $G = \{G_1, \dots, G_k\}$ of $\{1, \dots, n\}$ so that the relationship between $y$ and $x$ can be studied by regressions on $G_1, \dots, G_k$ separately. A probability model based clustering approach assumes that the observed data are a sample of the respective random vectors, which belong to a set of populations indexed by $j = 1, \dots, k$. Thus, those $(x_i, y_i)$ with $i \in G_j$ have the same probability distribution. The specification of the probability distributions can be either parametric or nonparametric.

(3) Organization of the Paper. In Section 2 we provide a detailed formulation of regression clustering including modeling, parameter estimation, and partition determination. In Section 3 we present two procedures for estimating the number of clusters in cluster linear regression. In Section 4, a pointwise iterative assessing algorithm is developed for implementing the regression clustering procedures. A simulation study and an example are presented in Section 5. A real data example on RGB cell marking clustering is analyzed in Section 6. The paper ends with a Conclusion section.

2. Regression Clustering Model and Optimization

Regression clustering becomes very useful when one intends to recover or estimate the unobserved class-specific regression hyperplanes based on the sample data of dependent and explanatory variables. Note that the notion of hyperplane used here is a generic one, which means it does not necessarily pass through the origin in the space. It should be more correctly called an affine set. But we do not distinguish them in this paper.

For the cluster regression or regression clustering problem, the data have the form $(x_i, y_i)$, $i = 1, \dots, n$, where $x_i$ is a $p \times 1$ explanatory column vector and $y_i$ is a random dependent variable for the $i$th object. The probability distribution of $x_i$ does not provide any information on the regression hyperplanes; thus, our statistical inference will be made conditional on the observed $x_i$. In other words, we can simply treat $x_i$ as nonrandom. As in the general setting of probability model based cluster analysis, there are two different approaches for regression clustering. One is the random partition or soft partition approach in which each data point is assigned a nonzero probability of falling into any of the clusters or, equivalently, follows a mixture probability distribution. The discussion can be found in DeSarbo and Cron [12] and Quandt and Ramsey [13], among others. The other is the fixed partition or hard partition approach in which each data point is assigned a cluster membership or label through a certain optimization procedure, so a data point belongs to only one cluster. As discussed in Bock [14, 15] and Späth [16, 17], the probability distribution or classification likelihood function of a data point in a fixed partition approach to regression clustering, with an unknown partition $G = \{G_1, \dots, G_k\}$ of $\{1, \dots, n\}$, is of the form
$$f(y_i \mid x_i) = f_j(y_i \mid x_i), \quad i \in G_j,\; j = 1, \dots, k, \tag{1}$$
where in many situations we can assume $f_j$ to be a normal density with mean $x_i^{\top}\beta_j$ and variance $\sigma_j^2$. This is equivalent to describing the data by a group of linear models:
$$y_i = x_i^{\top}\beta_j + \varepsilon_i, \quad \varepsilon_i \sim N\bigl(0, \sigma_j^2\bigr), \quad i \in G_j,\; j = 1, \dots, k. \tag{2}$$
Since the partition $G$ is unknown and the number of possible such partitions depends on $n$, the model (2) is a nonparametric one. Actually, it can be proved that the total number of nondegenerate partitions of the form $G = \{G_1, \dots, G_k\}$ is equal to the Stirling number of the second kind $S(n, k)$; confer Tomescu [18]. Also, the linear regression function in (2) can be extended to a nonlinear one, including spline and local polynomial regression and so forth, under this regression clustering setting. This extension will not be pursued in this paper. Further, the true distribution of $\varepsilon_i$ need not be the normal. Namely, we use the normal only as a “working” distribution for $\varepsilon_i$. Then the corresponding least squares or maximum likelihood approach becomes the quasi-likelihood one, which still possesses many optimality properties (cf. Chapter 9 of [19]). We resort to using a robust approach in this paper instead to deal with violation of the normality assumption.

Given the regression clustering model introduced above, we need to estimate the parameters and find the best partition, together with the number of clusters $k$, for application. Optimal parameter estimation and partitioning can be achieved using the maximum likelihood principle, while finding the optimal $k$ can be done based on an information criterion. The latter will be explained in the next section. Now, we proceed to parameter estimation and partitioning.

Under the fixed partition model (2), the log-likelihood function is given by
$$\ell\bigl(\beta_1, \dots, \beta_k, \sigma_1^2, \dots, \sigma_k^2, G\bigr) = \sum_{j=1}^{k} \sum_{i \in G_j} \log \phi\bigl(y_i; x_i^{\top}\beta_j, \sigma_j^2\bigr), \tag{3}$$
where $\phi(\cdot\,; \mu, \sigma^2)$ denotes the $N(\mu, \sigma^2)$ density. It is clear that the best estimates of the parameters and the partition should be those maximizing the log-likelihood (3) for a given $k$. However, the number of possible partitions $S(n, k)$ is astronomical even for moderate $n$ and $k$, since it grows at the rate of $k^{n}/k!$ as $n$ increases. Therefore, it is almost impossible to find the global optimal partition by enumeration. Here, we propose an iterative estimation method to find local optimal estimates of $\beta_j$, $\sigma_j^2$, and $G$ for a given $k$. This method extends the exchange method of Späth [16, 17].

When fixing the parameters at given estimates $\hat\beta_j$ and $\hat\sigma_j^2$, (3) achieves its maximum if each data point $i$ belongs to the cluster
$$j(i) = \arg\max_{1 \le j \le k} \log \phi\bigl(y_i; x_i^{\top}\hat\beta_j, \hat\sigma_j^2\bigr). \tag{4}$$
At a given partition $G = \{G_1, \dots, G_k\}$, (3) is the sum of the usual log-likelihood functions for homogeneous linear regressions within clusters. Hence, it is maximized at the least squares estimates obtained based on the data points within $G_j$, namely
$$\hat\beta_j = \Bigl(\sum_{i \in G_j} x_i x_i^{\top}\Bigr)^{-1} \sum_{i \in G_j} x_i y_i, \qquad \hat\sigma_j^2 = \frac{1}{n_j} \sum_{i \in G_j} \bigl(y_i - x_i^{\top}\hat\beta_j\bigr)^2, \quad j = 1, \dots, k, \tag{5}$$
where $n_j$ is the number of data points in $G_j$. Then, (3) is monotonically increased if the steps (4) and (5) are carried out alternately. This procedure leads to a local maximum in finitely many steps. It is expected to be a good approximation of the global maximum if an initial partition is properly chosen. In practice, we often assume that the variance parameters $\sigma_j^2$, $j = 1, \dots, k$, have a common value $\sigma^2$ and estimate $\sigma^2$ by a pooled estimator. This modification tends to return a more robust partition than otherwise.
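The alternation between (4) and (5) can be sketched in R as follows; this is a minimal illustration assuming a single covariate, a common error variance, clusters that stay non-empty, and a hypothetical data frame dat with columns y and x together with an initial label vector g.

```r
# Alternating estimation for LS regression clustering:
# step (5) refits least squares within each cluster,
# step (4) reassigns each point to the cluster with the smallest squared residual.
rc_ls <- function(dat, g, k, max_iter = 50) {
  for (it in seq_len(max_iter)) {
    fits <- lapply(seq_len(k), function(j) lm(y ~ x, data = dat[g == j, ]))   # step (5)
    res2 <- sapply(fits, function(f) (dat$y - predict(f, newdata = dat))^2)   # n x k residuals
    g_new <- max.col(-res2)                                                   # step (4): arg min
    if (all(g_new == g)) break   # partition stable: a local maximum of (3) is reached
    g <- g_new
  }
  list(labels = g, fits = fits)
}
```

A call such as rc_ls(dat, g0, k = 2), with g0 produced by the initial partition algorithm of Section 4, would return the kind of local optimum described above.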

Note that the work in this section so far can be extended to multivariate regression clustering without any theoretical difficulty. The essential difference between multivariate regression and multiple regression is that the former has a vector response variable while the latter has a univariate one. Hence, (3) to (5) and the relevant expressions in the rest of the paper can be easily modified to incorporate a vector response variable, from which one can readily perform multivariate regression clustering. We will not go into the technical details involved but will provide a real data example in Section 6 in which multivariate regression clustering is performed.

It is well-known that the least squares method is very sensitive to outliers and to violation of the normality assumption in the data. Robust methods can be developed to overcome this vulnerability. Among them, procedures based on M-estimation are considered here. M-estimation can be regarded as a generalization of maximum likelihood estimation. A particular case is maximum likelihood estimation based on Huber's least favourable distribution, whose density function behaves like the normal around the origin and like the exponential in the tails. Using Huber's M-estimation method, we can drop the normality assumption in (2) and estimate $\beta_j$ by minimizing $\sum_{i \in G_j} \rho\bigl(y_i - x_i^{\top}\beta_j\bigr)$ for a given partition $G$. Here, $\rho$ is Huber's discrepancy function defined as
$$\rho(t) = \begin{cases} t^2/2, & |t| \le c, \\ c\,|t| - c^2/2, & |t| > c, \end{cases} \tag{6}$$
where $c$ is determined by the scale parameter in Huber's least favourable distribution. We find that assuming a constant scale parameter across all clusters tends to give better robust results, so we adopt this assumption in this paper. Now, for given estimates $\hat\beta_j$, each data point $i$ is assigned or reassigned to the cluster $j(i) = \arg\min_{1 \le j \le k} \rho\bigl(y_i - x_i^{\top}\hat\beta_j\bigr)$. At this point, it can be seen that, instead of the log-likelihood (3), the objective function $-\sum_{j=1}^{k}\sum_{i \in G_j} \rho\bigl(y_i - x_i^{\top}\beta_j\bigr)$ will be monotonically increased if the above two M-estimation steps are carried out alternately. This gives a robust counterpart of the likelihood-based local optimal estimation and selection introduced earlier in this section.
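A robust variant of the previous sketch could be written as below, using MASS::rlm for the within-cluster Huber fits; the tuning constant 1.345 is rlm's default for Huber's psi, not necessarily the value used in the paper, and dat and g are the same hypothetical objects as before.

```r
# Robust (Huber M-estimation) counterpart of the alternating procedure.
library(MASS)

huber_rho <- function(t, cval = 1.345) {
  ifelse(abs(t) <= cval, t^2 / 2, cval * abs(t) - cval^2 / 2)   # Huber's rho as in (6)
}

rc_rm_step <- function(dat, g, k) {
  # M-estimation within each current cluster (rlm's default Huber tuning, 1.345,
  # matches the default of huber_rho above).
  fits <- lapply(seq_len(k), function(j)
    rlm(y ~ x, data = dat[g == j, ], psi = psi.huber))
  # Reassign each point to the cluster giving it the smallest Huber loss.
  loss <- sapply(fits, function(f) huber_rho(dat$y - predict(f, newdata = dat)))
  list(labels = max.col(-loss), fits = fits)
}
```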

To conclude this section, note that the fixed partition approach has a particular advantage over the random partition one in the context of regression clustering or cluster regression. As observed by Hennig [20], the mixture probability model involved in random partitioning implicitly presumes an assignment independence of each object to clusters with respect to the covariate vectors $x_i$. That is, the clusters keep the same proportions for every fixed covariate vector $x$. In other words, the probability of a point being generated by cluster $j$ is independent of its covariate value $x$. This is generally not true, as shown in Figure 2, which is adapted from Hennig [20]. On the other hand, the fixed partition model (2) supposes that the cluster memberships of the objects, or the cluster labels, are explicitly parameterized and are determined by the estimation of $\beta_j$, $\sigma_j^2$, and $G$ through the points $(x_i, y_i)$. Hence, the fixed partition model does take care of the problem of possible assignment dependence between the $i$th object and the associated covariate $x_i$. In principle, the random partition approach can be generalized to account for the assignment dependence, for example, by allowing the mixing proportions to depend on $x$. But the resultant probability model would be much more difficult to analyze both algebraically and numerically, and no such study can be found in the literature so far to our knowledge.

3. Estimating the Number of Clusters

The number of clusters to be used in regression clustering is normally unknown, so it should also be estimated. In this section we provide two procedures for estimating the number of clusters, one based on least squares estimation and the other on robust M-estimation.

We use a more detailed notation to denote the $n$ data objects, which have observations $(x_i, y_i)$ as described in the previous sections. Recall that these objects are assumed to be a random sample coming from a structured population, which consists of a fixed (but unknown) number, say $k_0$, of subpopulations, each of which is characterized by a regression hyperplane with class-specific unknown parameters. Therefore, for the $n$ observations from this population, there exists an underlying partition $G^0 = \{G_1^0, \dots, G_{k_0}^0\}$, and by (2) each cluster follows the regression model
$$Y_j = X_j \beta_j + \varepsilon_j, \quad \varepsilon_j \sim N\bigl(0, \sigma_j^2 I_{n_j}\bigr), \quad j = 1, \dots, k_0, \tag{7}$$
where $Y_j$ is the $n_j$-vector of responses, $X_j$ is an $n_j \times p$ design matrix in the cluster $G_j^0$, $\varepsilon_j$ is an $n_j$-vector of random errors, $I_{n_j}$ is an $n_j \times n_j$ identity matrix, and $n_j$ is the number of observations in $G_j^0$ for $j = 1, \dots, k_0$. Here, $\beta_1, \dots, \beta_{k_0}$ are unknown parameter vectors and are assumed to be distinct from one another. It is clear that $n_1 + \cdots + n_{k_0} = n$. In the following, we assume that $k_0 \le K$, where $K$ is a known positive integer. Note that in (7) we have suppressed the dependence of $Y_j$ and $X_j$ on the partition $G^0$ for simplicity of presentation. Also note that the normality assumption for the random errors $\varepsilon_j$, although reasonable in many situations, is just a “working” distribution and not really required for applying the least squares method.

In order to estimate $k_0$, we fit a regression clustering model to the data for each $k = 1, \dots, K$ using the methods developed in Section 2. A criterion function of $k$ can be obtained from the cluster regression fitting. Then $k_0$ is estimated as the minimizer of the criterion function. Shao and Wu [21] have used this idea to develop an information-based criterion for estimating $k_0$. Let $G = \{G_1, \dots, G_k\}$ be an arbitrary $k$-cluster partition of the $n$ observations. Shao and Wu's information criterion is defined as
$$C_n(G, k) = \sum_{j=1}^{k} \sum_{i \in G_j} \bigl\| y_i - x_i^{\top}\hat\beta_j \bigr\|^2 + \lambda_n\, g(k), \tag{8}$$
where $g(k)$ is a strictly increasing positive function of $k$, $\{\lambda_n\}$ is a sequence of positive constants, $\hat\beta_j$ are the least squares estimators, and $\|\cdot\|$ is the Euclidean norm. Typically, $g(k)$ is taken proportional to $k$ and $\lambda_n$ is taken to grow slowly with $n$ (see also Section 5.2). Then $\hat k$, the estimate of $k_0$, is the integer that minimizes this criterion, that is,
$$(\hat G, \hat k) = \arg\min_{G,\; 1 \le k \le K} C_n(G, k). \tag{9}$$
It can be seen that in (8) the first term is the sum of residual squares, which measures the goodness of fit of the model, and the second term is the penalty for overfitting. Moreover, the criterion (9) shows that one determines the optimal number of clusters and the corresponding partitioning simultaneously. We shall call (8) together with (9) Criterion LS-C in the sequel, which stands for clustering by the LS method.
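A sketch of how (8) and (9) might be evaluated in R is given below; it assumes fitted objects of the form returned by the rc_ls() sketch above, and it uses the illustrative choices g(k) = k and lambda_n = log n rather than any particular setting from the paper.

```r
# Criterion LS-C in (8): within-cluster residual sum of squares plus a penalty.
# `fit` is a list with components `labels` and `fits` (one lm per cluster),
# e.g. as returned by rc_ls(); g(k) = k and lambda = log(n) are illustrative.
ls_c <- function(dat, fit, lambda = log(nrow(dat))) {
  k <- length(fit$fits)
  rss <- sum(sapply(seq_len(k), function(j) {
    sub <- dat[fit$labels == j, ]
    sum((sub$y - predict(fit$fits[[j]], newdata = sub))^2)
  }))
  rss + lambda * k
}

# The estimate (9) minimizes LS-C over candidate k; `fit_for_k` is a hypothetical
# routine returning a fitted regression clustering for each k = 1, ..., K.
# k_hat <- which.min(sapply(1:K, function(k) ls_c(dat, fit_for_k(k))))
```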

Under some mild conditions, it is shown in Shao and Wu [21] that the proposed Criterion LS-C selects the true number of regression hyperplanes with probability one among all class-growing sequences of classifications, when the number of observations from the population increases to infinity.

Concerning the robustness of regression clustering, one can use a robust criterion to estimate the underlying number of clusters $k_0$, where we assume that each cluster is characterized by a linear model
$$y_i = x_i^{\top}\beta_j + \varepsilon_i, \quad i \in G_j^0,\; j = 1, \dots, k_0, \tag{10}$$
with the random error $\varepsilon_i$ not following any specific distribution, contrary to that in the linear model (7). In particular, Rao et al. [22] have developed the following robust information criterion function for estimating $k_0$:
$$R_n(G, k) = \sum_{j=1}^{k} \sum_{i \in G_j} \rho\bigl(y_i - x_i^{\top}\tilde\beta_j\bigr) + \lambda_n\, g(k), \tag{11}$$
where $\rho$ is Huber's discrepancy function and $\tilde\beta_j$ are the M-estimators described in Section 2, or equivalently satisfying
$$\tilde\beta_j = \arg\min_{\beta} \sum_{i \in G_j} \rho\bigl(y_i - x_i^{\top}\beta\bigr), \quad j = 1, \dots, k. \tag{12}$$
It can be seen that, similar to (8), the first term in (11) is a generalization of a minimum negative log-likelihood function derived from Huber's least favourable distribution, and the second term is the penalty for overfitting.

Using (11), the estimate of the underlying number of clusters is the one satisfying
$$(\tilde G, \tilde k) = \arg\min_{G,\; 1 \le k \le K} R_n(G, k). \tag{13}$$
We shall call (11) together with (13) Criterion RM-C, which stands for the clustering based on robust M-estimation. Similar to Criterion LS-C, Criterion RM-C implies that one determines the optimal number of clusters and the corresponding partitioning simultaneously.
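For completeness, an RM-C counterpart of the ls_c() sketch simply replaces squared residuals by Huber's rho; as before, the penalty choice and the tuning constant are illustrative assumptions rather than the paper's exact settings.

```r
# Criterion RM-C in (11): Huber losses within clusters plus a penalty.
# `fit` is assumed to contain robust fits (e.g., from rc_rm_step()).
rm_c <- function(dat, fit, lambda = log(nrow(dat)), cval = 1.345) {
  rho <- function(t) ifelse(abs(t) <= cval, t^2 / 2, cval * abs(t) - cval^2 / 2)
  k <- length(fit$fits)
  rrss <- sum(sapply(seq_len(k), function(j) {
    sub <- dat[fit$labels == j, ]
    sum(rho(sub$y - predict(fit$fits[[j]], newdata = sub)))
  }))
  rrss + lambda * k   # minimize over candidate k to obtain the estimate in (13)
}
```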

In Rao et al. [22], it is shown that the true clustering and the associated regression hyperplanes are attained with probability 1 by RM-C when $n$ increases to infinity and under certain mild conditions. In particular, the normal distribution assumption is not required for the random errors in each regression cluster.

4. Pointwise Iterative Algorithms for Regression Clustering Estimation, Partition, and Selection

Computing algorithms can be written to implement the regression clustering methods described in Sections 2 and 3. Recall that in these methods we first estimate the optimal partition and the regression parameters simultaneously by minimizing a certain within-cluster sum of residual squares sums (RSS) or the like for each fixed $k$. The quantity to be minimized is equivalent to
$$\mathrm{RSS}(G, k) = \sum_{j=1}^{k} \sum_{i \in G_j} \bigl(y_i - x_i^{\top}\hat\beta_j\bigr)^2 \tag{14}$$
for LS regression clustering, or to the sum of robust residual squares sums (RRSS)
$$\mathrm{RRSS}(G, k) = \sum_{j=1}^{k} \sum_{i \in G_j} \rho\bigl(y_i - x_i^{\top}\tilde\beta_j\bigr) \tag{15}$$
for an M-estimation based robust regression clustering. Only local minimization results can be guaranteed here. We perform this local minimization for each candidate $k$ and use Criterion LS-C or RM-C to determine the best $k$. The whole procedure can be accomplished according to the following algorithm (a sketch of it in R is given after the list):

(i) Label all the observations from 1 to $n$ (order does not matter). Given an initial partition $G = \{G_1, \dots, G_k\}$ of $\{1, \dots, n\}$, fit a regression model (or a robust regression model with the $\rho$ function for the RM-C criterion) in each of the $k$ clusters and obtain the sum of the residual squares sums RSS (or RRSS) for this partition. Let $i = 0$.

(ii) Set $i = i + 1$, and reset $i = 1$ if $i > n$. Identify the cluster $G_j$ such that $i \in G_j$. Then move observation $i$ into $G_l$ with $l \ne j$, $l = 1, \dots, k$, respectively. For each of these relocations, refit the model by regression clustering (or robust regression clustering) and calculate the sum of the residual squares sums RSS (or RRSS) accordingly. Denote the smallest one by RSS* (or RRSS*). If RSS* < RSS (or RRSS* < RRSS in the robust procedure), make the corresponding relocation permanent by redefining $G_j$ and $G_l$, and set RSS = RSS* (or RRSS = RRSS*). Otherwise, return to the beginning of (ii).

(iii) Repeat (ii) until the objective function (14) or (15) does not decrease any further, which means that no observation relocation is necessary and the optimal clustering is achieved for this $k$.

(iv) Proceed with (i) to (iii) for each candidate $k = 1, \dots, K$ and use Criterion LS-C or RM-C to find $\hat k$, the optimal number of clusters.
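A condensed R sketch of the relocation steps (i)-(iii) for the LS case follows; it assumes a single covariate, the hypothetical data frame dat with columns y and x, an initial label vector g, and clusters that are kept non-empty.

```r
# Pointwise relocation for a fixed k (steps (i)-(iii)), minimizing the RSS in (14).
total_rss <- function(dat, g, k) {
  sum(sapply(seq_len(k), function(j) sum(resid(lm(y ~ x, data = dat[g == j, ]))^2)))
}

iparc_ls <- function(dat, g, k) {
  best <- total_rss(dat, g, k)
  repeat {
    improved <- FALSE
    for (i in seq_len(nrow(dat))) {              # step (ii): visit each observation in turn
      for (l in setdiff(seq_len(k), g[i])) {     # try relocating observation i to cluster l
        g_try <- g
        g_try[i] <- l
        if (min(tabulate(g_try, nbins = k)) == 0) next   # keep every cluster non-empty
        rss_try <- total_rss(dat, g_try, k)
        if (rss_try < best) { g <- g_try; best <- rss_try; improved <- TRUE }
      }
    }
    if (!improved) break                         # step (iii): no relocation reduces (14)
  }
  list(labels = g, rss = best)
}
```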

It is important to use a good initial partition of the data in running steps (i) to (iii) so that the global minimum of (14) or (15), or a good approximation of it, can be achieved. We propose to generate the initial partition of a dataset $D$ using the following algorithm, which we find works well in practice (a sketch of it in R is given after the steps).

Consider the linear model
$$y_i = x_i^{\top}\beta + \varepsilon_i, \quad i = 1, \dots, n. \tag{16}$$

Step 1. Based on the whole dataset $D$, estimate $\beta$ in (16) by a robust method, for example, the least median of squares or the least trimmed squares method [23]. Note that a random seed is implicitly used in such robust methods.

Step 2. Put into a set $D_1$ those data points whose distances to the regression hyperplane estimated in Step 1 are less than a predetermined number, say $d$. If $D_1$ and its complementary set both contain more than a predetermined number of points, say $m$, set $t = 1$ and go to the next step; otherwise, set $t = 0$ and go to Step 5.

Step 3. Based on the points not yet assigned to any of $D_1, \dots, D_t$, estimate $\beta$ in (16) by the same robust method used in Step 1.

Step 4. Put into $D_{t+1}$ those unassigned points whose distances to the regression hyperplane estimated in Step 3 are less than $d$. If $D_{t+1}$ and the set of remaining unassigned points both contain more than $m$ points, set $t = t + 1$ and repeat Step 3; otherwise, go to Step 5.

Step 5. The initial partition is $\{D_1, \dots, D_t, R\}$, where $R$ is the set of remaining unassigned points, if $t \ge 1$, or just the whole dataset itself if $t = 0$.

One can adjust the values of $d$ and $m$, either in advance or adaptively, to get an initial partition of $k$ clusters for any given $k$; for example, one may set $m$ to a fraction of $n$ determined by $k$ and tune $d$ so that each step yields a binary split into two sufficiently large groups.
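One way to realize Steps 1-5 in R is sketched below, using MASS::lqs for the resistant (least trimmed squares) fits; the thresholds d and m and the data frame dat with columns y and x are the same kind of hypothetical inputs as in the earlier sketches.

```r
# Initial partition by iterated robust binary splits (Steps 1-5).
library(MASS)

initial_partition <- function(dat, d, m, seed = 123456) {
  set.seed(seed)                          # the resistant fit resamples internally
  labels <- rep(NA_integer_, nrow(dat))   # NA = not yet assigned
  t <- 0
  repeat {
    idx <- which(is.na(labels))
    if (length(idx) <= m) break
    fit <- lqs(y ~ x, data = dat[idx, ], method = "lts")            # Steps 1 and 3
    close <- abs(dat$y[idx] - predict(fit, newdata = dat[idx, ])) < d
    if (sum(close) <= m || sum(!close) <= m) break                  # Steps 2 and 4 checks
    t <- t + 1
    labels[idx[close]] <- t                                         # peel off cluster D_t
  }
  labels[is.na(labels)] <- t + 1L          # Step 5: remaining points form the last cluster
  labels
}
```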

The above initial partition algorithm is essentially an iterated hierarchical binary clustering method, where each binary split is realized through resistant regression such as least median of squares regression. The resistant regression is robust, with a high breakdown point; thus, although not fully efficient, it is highly likely to produce a reasonable initial partition through its iterated executions.

The two algorithms, consisting of Steps 1 to 5 and steps (i) to (iv), may together be named IPARC to reflect the iterative pointwise assessing nature of the regression clustering procedure.

5. Example and Simulation Study

In this section, we first apply regression clustering to the Iris data and provide a brief guideline on when to use the method properly. We then present a simulation study to assess the finite sample performance of Criteria LS-C and RM-C.

5.1. The Iris Data Example

Recall the Iris data that we analyzed using the k-means method in Section 1. Now we want to use the regression relationship between the sepal length and sepal width variables to partition the 150 observations of sepal length and width and petal length and width into 3 clusters. The statistics package R is used to implement our IPARC procedure, where we fix the tuning constants of the initial partition step (with the distance threshold taken as 0.2 or a similar small value), determine the initial random seed by set.seed(123456), and use only the least squares estimation in this example. The partition result and its comparison with the species information are summarized in Table 2. Comparing Tables 1 and 2, we see that the cluster information revealed by the cluster regression sepal.length ~ sepal.width is very much the same as that by the k-means and conforms with the species information.

When we use the cluster regression sepal.length ~ sepal.width + petal.length + petal.width to partition the data into 3 clusters, we get a result, summarized in Table 3, which is very different from Tables 1 and 2. This confirms that the cluster label information obtained from regression clustering has a different interpretation from that obtained from the k-means. The former tells us how differently the regression performs across the clusters, while the latter tells us how the distances among the data observations themselves behave differently across the clusters. Fitting this regression clustering to the data gives coefficients of determination of 0.972, 0.958, and 0.971, respectively, for the 3 regression hyperplanes. On the other hand, when we fit the same regression model to the 3 clusters determined by the k-means, we get coefficients of determination of 0.575, 0.525, and 0.578. Similar results are obtained if the same regression model is fit to the 3 clusters determined by the species variable. Therefore, the regression clustering method is fundamentally different from general cluster analysis methods such as the k-means. One should use regression clustering if partitioning the data to conform to the regression pattern is of interest.

5.2. Simulation Study

We use simulated data sets to assess the finite sample performance of Criteria LS-C and RM-C for regression clustering. Two factors are considered in this simulation: the number of clusters (2 or 3) and the error distribution (standard normal, Student's $t$, or Cauchy), so there are in total 6 cases of data to be considered, which are summarized in Table 4. There is only one covariate involved in the regression clustering, and its values are randomly generated. The parameters used for each case are given in Table 5. Then, the fixed partition regression clustering model $y_i = x_i^{\top}\beta_j + \varepsilon_i$, $i \in G_j$, $j = 1, \dots, k$, is applied to generate the response values $y_i$, where $\varepsilon_i$ is a random number generated from the standard normal, Student's $t$, or Cauchy distribution, and the first element of $x_i$ is the constant 1 corresponding to the intercept term in the model.
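The following R sketch generates one such data set; the regression coefficients, the uniform covariate distribution, and the degrees of freedom of the $t$ errors are illustrative placeholders rather than the actual settings of Tables 4 and 5.

```r
# Generate one simulated data set from the fixed partition regression clustering model
# (placeholder coefficients; the parameter values actually used are those in Table 5).
sim_rc_data <- function(n_per = 100, betas = list(c(1, 2), c(5, -2)),
                        error = c("normal", "t", "cauchy"), df = 3) {
  error <- match.arg(error)
  k <- length(betas)                              # number of clusters
  n <- n_per * k
  x <- runif(n, -5, 5)                            # single covariate (placeholder distribution)
  g <- rep(seq_len(k), each = n_per)              # true cluster labels
  eps <- switch(error,
                normal = rnorm(n),
                t      = rt(n, df = df),
                cauchy = rcauchy(n))
  y <- mapply(function(xi, gi) betas[[gi]][1] + betas[[gi]][2] * xi, x, g) + eps
  data.frame(y = y, x = x, cluster = g)
}

dat <- sim_rc_data(error = "cauchy")              # e.g., one heavy-tailed case
```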

Figures 3 and 4 give an intuition of what the data typically look like for Cases 1–6 with normal, $t$, or Cauchy errors. These figures show that the groupings of the linear patterns are visible with standard normal random errors and become less distinct with $t$ random errors. The groupings are hard to see with Cauchy random errors.

In this study, we set $g(k) = pk$, where $p$ is the number of regression coefficients in each cluster model and $k$ is the unknown number of clusters that we are seeking, and we take $\lambda_n = c \log n$ with a constant $c$. It is noted that, in an information model selection criterion, the penalty factor $\lambda_n$ in (8) or (11) is usually chosen proportional to $\log n$ or to $\sqrt{n}$. In light of the fact that $pk$ is the total number of regression coefficients under a $k$-cluster model, this penalty charges each additional cluster for the $p$ extra parameters it introduces.

The $\rho$ function we employed for M-estimation is $\rho(t) = t^2/2$ if $|t| \le c$ and $\rho(t) = c|t| - c^2/2$ otherwise, that is, Huber's $\rho$ in (6). In the following, when we state the simulation results, Criterion RM-C means the M-estimation based regression clustering procedure with Huber's $\rho$ exclusively.

For each of the six cases, we conduct 1000 simulations using Criteria LS-C and RM-C separately. To apply the algorithm IPARC, the tuning constants $m$ and $d$ of the initial partition algorithm are fixed in advance. The algorithm given in the previous section is then used to estimate the number of clusters in cluster linear regression. In Tables 6 and 7, we summarize the results from the simulation study, where each number represents the relative frequency of selecting the corresponding number of clusters out of 1000 replications.

It is clear that Criterion LS-C performs almost perfectly in Cases 1 and 4, where the errors are standard normally distributed. However, when there are outliers in the data set or the normality of the data is violated, Criterion LS-C performs poorly. On the contrary, as shown in Tables 6 and 7, Criterion RM-C does nearly as perfect a job as Criterion LS-C in Cases 1 and 4; at the same time, neither outliers nor non-normality has much effect on its ability to detect the underlying true number of regression hyperplanes in the data.

In addition to the robustness shown above in selecting the number of clusters, the M-estimation based regression clustering procedure is also robust in partitioning the data. Table 8 presents the estimates of the regression parameters obtained by applying LS-C and RM-C to the data shown in Figures 3 and 4. From the table, it can be seen that when the errors are $t$ or Cauchy distributed, the LS regression clustering method is not able to capture the underlying groupings, while the M-estimation based regression clustering method detects the true linear patterns in the data, in spite of the non-normality of the data.

6. Analysis of RGB Cell Tracking Data

Recently, a new technique called RGB marking has been introduced to facilitate the identification of individual cell clones in both in vivo and in vitro experiments [24]. RGB marking introduces into individual cells three lentiviral vectors encoding the basic colors red, green, and blue. Raw image data representing 128 colorectal cancer cells are shown in Figure 5; the same data are analyzed in detail in this section. Since the colored cells are easily identifiable within whole organ structures, scientists can track the cells and determine their role during processes such as organ regeneration, malignant outgrowth, or immune responses.

To this end, scientists are required to cluster cell types according to some basic color combinations. Due to the variability of the vector insertion, however, single RGB-marked cells express fluorescent proteins at different and very characteristic levels. The underlying principle of additive color mixing, similar to that in computer or TV screens, generates different color combinations that can be used to discriminate individual cell clones. The main difficulty in this kind of data is that the intrinsic variability of the underlying biological mechanisms makes the actual number of distinguishable colors generated by RGB marking in a tissue difficult to predict. In addition, cell intensities for different colors are known to vary depending on the cell area, which is an indicator of cell morphology.

The data set analyzed in this section consists of measurements on colorectal cancer cell lines expressing various quantities of three different fluorescent proteins: Cerulean (blue), Venus (yellow/green), and mCherry (red). The genes coding for the fluorescent proteins were transferred into the cells via lentivirus-mediated transduction at a less than 100% efficiency so that most cells expressed different quantitative combinations of all three fluorescent proteins as described by Weber et al. [24]. The cells were imaged on a high-content imager (Operetta, Perkin Elmer). The final data consisted of fluorescent intensities of red, blue, and green color channels (electromagnetic wavelength in nanometers, nm), morphology parameters including cell areas, and spatial coordinates for 128 cells.

Figure 5 shows the original data and the clustering obtained by the LS regression clustering approach defined in (14), using multivariate regression with the color intensities as the response vector, with the morphological predictor (cell area) used in panel (c) and without any predictors in panel (d). The clustering results are relatively robust to the initial random seed (here we used set.seed(111)) in both cases. When the cell area predictor is included, the resulting clustering changes, suggesting that the cell morphology information (cell area) plays a role in separating different cell types. In Table 9, we summarize the outcome of this LS regression clustering.

To select the optimal number of clusters, we used the information criterion function (8) for LS and (11) for RM, with the penalty specified as in Section 5.2, where $k$ is the unknown number of clusters that we are seeking. Figure 6 shows the optimal numbers of clusters using the two criterion settings (C1) and (C2) for both clustering approaches. Robust clustering is carried out using Huber's discrepancy function (6) with a chosen tuning constant $c$. The resulting optimal number of clusters based on C1 is 5 by both the LS and RM regression clustering criteria, which is compatible with biological considerations.

Finally, we assess the performances of the LS and RM regression clustering and compare them with that of the -means method. The prediction strength (PS) statistic introduced by Tibshirani and Walther [25] is used for the assessment.

For a candidate number of clusters $k$ ($k = 5$ in our case), let $A_1, \dots, A_k$ denote the partition of the test set resulting from regression clustering on all the data. Let $n_1, \dots, n_k$ be the numbers of observations in these clusters. Let $B_1, \dots, B_k$ be the partition of the test set resulting from regression clustering on the training set. In particular, in the latter case each data point in the test set is clustered using (4) with the estimates $\hat\beta_j$ and $\hat\sigma_j^2$ produced by the training set.

Following the notation of Tibshirani and Walther [25], denote by $D$ the comembership matrix, with $(i, i')$th element $D_{ii'} = 1$ if a pair of observations $i$ and $i'$ that belong to the same cluster in $\{A_1, \dots, A_k\}$ (i.e., $i, i' \in A_j$ for some $j$) also fall into the same cluster in $\{B_1, \dots, B_k\}$, and $D_{ii'} = 0$ otherwise. The prediction strength statistic can be written as
$$\mathrm{PS}(k) = \min_{1 \le j \le k} \frac{1}{n_j (n_j - 1)} \sum_{i \ne i' \in A_j} D_{ii'}.$$
Therefore, the prediction strength is the proportion of observation pairs in the worst performing test cluster whose clustering results remain unchanged when clustering them by the training set clustering rule. Clearly, a regression clustering result has higher predictive power if the associated PS is higher.
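A small R sketch of this computation is given below; it takes two label vectors for the test set, one from clustering all the data (defining the $A_j$) and one from assigning the test points by the training-set rule (defining the $B_j$), both of which would come from the regression clustering routines sketched earlier.

```r
# Prediction strength of Tibshirani and Walther for a test set, given
# `lab_all`   : cluster labels of the test points from clustering on all the data (A_j),
# `lab_train` : labels obtained by assigning the test points with the training-set rule (B_j).
prediction_strength <- function(lab_all, lab_train) {
  ps_by_cluster <- sapply(unique(lab_all), function(j) {
    idx <- which(lab_all == j)
    nj <- length(idx)
    if (nj < 2) return(NA_real_)                          # a single point carries no pairs
    same <- outer(lab_train[idx], lab_train[idx], "==")   # comembership under the training rule
    (sum(same) - nj) / (nj * (nj - 1))                    # proportion of preserved pairs, i != i'
  })
  min(ps_by_cluster, na.rm = TRUE)                        # worst performing test cluster
}
```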

For our data, we assess the clustering performance by cross-validation using 4 random partitions of our sample. The cross-validated prediction strength values for the k-means, LS, and RM regression clustering methods are 0.44, 0.80, and 0.66, respectively. This suggests that the LS regression clustering is superior to the k-means. Moreover, due to the absence of strong deviations from the multivariate normal model for these data, the out-of-sample prediction strength of the LS regression clustering is larger than that of the robust RM regression clustering approach.

7. Conclusion

In this paper, we have reviewed general cluster analysis methods and then focused on regression clustering, which uses the model based fixed partition method and clusters the data based on the dependence between the response and explanatory variables. We have provided both least squares based and robust M-estimation based methods for estimating parameters, partitioning the data, and selecting the optimal number of clusters in regression clustering. Algorithms have been developed to implement these methods. The example and simulation study demonstrate satisfactory finite sample performance of the algorithms. Applying the developed method to regression clustering of the RGB cell tracking data gives results compatible with biological considerations. It is known that the methods can only provide a local optimization solution and are computationally intensive, especially when the sample size is large. Currently, we are investigating these issues and expect to provide an improved solution to be reported elsewhere in the near future.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors would like to thank Christina Mølck from the Department of Pathology at the University of Melbourne for carrying out the RGB cell marking experiment and Cameron Nowell from the Institute of Pharmaceutical Sciences at Monash University for doing the high-content imaging and producing the raw data that is analyzed in Section 6. Further, the authors would like to thank Louis Vermeulen and Maartje van der Heijden (University of Amsterdam, the Netherlands) for their generous gift of DLD1-LeGO cells as used in Section 6.