Research Article | Open Access

Yongkai Ye, Xinwang Liu, Qiang Liu, Xifeng Guo, Jianping Yin, "Incomplete Multiview Clustering via Late Fusion", Computational Intelligence and Neuroscience, vol. 2018, Article ID 6148456, 11 pages, 2018. https://doi.org/10.1155/2018/6148456

Incomplete Multiview Clustering via Late Fusion

Academic Editor: Giosuè Lo Bosco
Received: 19 May 2018
Revised: 1 August 2018
Accepted: 14 August 2018
Published: 1 October 2018

Abstract

In real-world applications of multiview clustering, some views may be incomplete due to noise, sensor failure, etc. Most existing studies in the field of incomplete multiview clustering have focused on early fusion strategies, for example, learning subspace from multiple views. However, these studies overlook the fact that clustering results with the visible instances in each view could be reliable under the random missing assumption; accordingly, it seems that learning a final clustering decision via late fusion of the clustering results from incomplete views would be more natural. To this end, we propose a late fusion method for incomplete multiview clustering. More specifically, the proposed method performs kernel k-means clustering on the visible instances in each view and then performs a late fusion of the clustering results from different views. In the late fusion step of the proposed method, we encode each view’s clustering result as a zero-one matrix, of which each row serves as a compressed representation of the corresponding instance. We then design an alternate updating algorithm to learn a unified clustering decision that can best group the visible compressed representations in each view according to the k-means clustering objective. We compare the proposed method with several commonly used imputation methods and a representative early fusion method on six benchmark datasets. The superior clustering performance observed validates the effectiveness of the proposed method.

1. Introduction

The term “multiview data” refers to a collection of different data sources or modalities that describe the same samples. For example, clinical text and images serve as two views of a patient’s diagnosis file, and an image on a webpage may be described by both its pixel data and the surrounding text. Clustering is an unsupervised learning task that divides samples into disjoint sets, revealing the intrinsic structure of the data [1–3]. Multiview clustering aims to utilize the information from various views to achieve better clustering performance. A number of studies have explored multiview clustering; they can be roughly divided into two categories. Methods in the first category fuse the multiview information at an early stage and then perform clustering [4–6]. Methods in the second category group samples in each view and then perform a late fusion of the clustering results from different views to obtain the final clustering decision [7, 8].

However, in real-world applications of multiview clustering, incomplete views often exist. For example, in patient grouping [9], patients often undergo various tests, but some patients may fail to undergo particular tests due to poor health or the high costs involved. Alternatively, in user grouping for a recommendation system [10], a user’s multiview data consists of transaction histories, social network information, and credit records from different systems; however, it is not guaranteed that all users will have complete information from all systems.

A straightforward strategy for handling incomplete multiview clustering is to first fill in the missing entries of each incomplete view and then apply a common multiview clustering algorithm. Widely used filling algorithms include zero filling, mean value filling, and k-nearest-neighbor filling.

In addition to simple filling methods, a few early fusion methods have been proposed for incomplete multiview clustering. In [11], a method was proposed to deal with cases where one view is complete and the other is incomplete. The kernel matrix of the incomplete view is imputed following Laplacian regularization from the complete view. Kernel canonical correlation analysis is then performed to ascertain the projected space that maximizes the correlation between the corresponding projected instances across the two views. Based on this work, a method was proposed to solve the problem when the two views are incomplete [10]. This method iteratively updates the kernel matrix of one view using Laplacian regularization from the other view. Using this work as a foundation, Zhao et al. [12] added global graph regularization of the samples to guide the learning of the subspace. A similar work proposed to integrate the feature learning process without the nonnegative constraints on the data [13]. However, all of the above works are either limited to two views or hard to adapt to more than two views. Recently, Shao et al. [14] proposed a multiview clustering method not limited to two views. The proposed method learns the latent representations in subspace for all views, then produces a consensus representation that minimizes the difference between views, after which clustering is performed on the consensus representation.

What these studies overlook is that the clustering results from the incomplete views could be reliable under a random missing assumption. Most studies on incomplete multiview clustering are based on this assumption, which holds that whether an instance in a view is missing is independent of the corresponding sample’s cluster label. Under this assumption, the missing ratios of each cluster should be approximately equal; therefore, the overall cluster structure can be preserved in an incomplete view.

Accordingly, we build a toy dataset consisting of three Gaussian distributions to illustrate how the cluster structure can be maintained under random missing conditions. We randomly delete instances at different ratios and perform kernel k-means on the visible instances. From Figure 1, it can be observed that the clustering accuracy (ACC) of the visible instances is stable as the missing ratio increases; moreover, the cluster centroids of the visible instances under random missing stay near the cluster centroids of the complete view. We also repeat the random missing procedure 100 times at each missing ratio. As shown in Figure 2, the average ACC of the visible instances also remains stable, and the cluster centroids of the visible instances stay around those of the complete view.
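The toy experiment can be reproduced with a short script. The sketch below is our own illustration: the three Gaussian components, their parameters, and the plain (nonkernel) k-means with farthest-point initialization are assumptions for demonstration, not the paper's exact setup.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def kmeans(X, k, iters=100):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    idx = [0]
    for _ in range(k - 1):
        d = ((X[:, None, :] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    C = X[idx]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        newC = np.array([X[lab == c].mean(0) if (lab == c).any() else C[c]
                         for c in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return lab

def acc(true, pred, k):
    """Clustering accuracy: best match over label permutations."""
    return max((pred == np.array([p[t] for t in true])).mean()
               for p in permutations(range(k)))

# three Gaussian clusters (means and spread are illustrative choices)
X = np.vstack([rng.normal(m, 1.0, (200, 2)) for m in [(0, 0), (6, 0), (3, 6)]])
y = np.repeat(np.arange(3), 200)

for ratio in (0.0, 0.3, 0.6):
    keep = rng.random(len(X)) >= ratio      # random missing mask
    print(f"missing={ratio:.1f}  ACC={acc(y[keep], kmeans(X[keep], 3), 3):.3f}")
```

With well-separated components, the accuracy on the visible instances stays essentially flat as the missing ratio grows, which is the behavior Figures 1 and 2 report.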

Since the clustering results from incomplete views can thus be reliable, we propose a late fusion method for incomplete multiview clustering, whereas most previous studies have focused on early fusion methods. First, we perform kernel k-means clustering on the visible instances in each view. The clustering result of each view is encoded as a zero-one indicator matrix, each row of which contains the label information of the corresponding instance. Since some instances may be missing in some views, the corresponding rows of those matrices may also be missing. These indicator matrices can be considered as compressed representations of the different views. Second, to fuse the clustering results from different views, we develop an algorithm that finds a clustering decision that groups each view’s visible compressed representations well according to the k-means objective. Figure 3 presents the process of the proposed method along with a brief example. Compared with several imputation-based methods and a representative early fusion method, the proposed method achieves superior clustering performance.

We conclude this section by highlighting the main contributions of this work, as follows: (1) We propose a late fusion method for incomplete multiview clustering, while most previous studies have concentrated on early fusion methods. Experimental results validate the effectiveness of the proposed method. (2) In the second step of the proposed method, we design an alternate updating algorithm with proven convergence to learn the clustering decision that achieves the best k-means objective values with the visible instances in each view. (3) We provide some practical advice on initializing the clustering decision by analyzing the results of comprehensive experiments.

2. Preliminary

In this section, we introduce some preliminary knowledge to facilitate better understanding of our proposed method. We first outline the notations used in this paper, after which k-means clustering and kernel k-means clustering are briefly reviewed, since these methods will be used in the proposed late fusion method.

2.1. Notation

Suppose the incomplete multiview data have $N$ samples and $P$ views. Each sample should have at least one visible view. A sample’s representation in a view, which is a row vector, is called an instance. Suppose the instances in view $p$ are row vectors of length $d_p$, which means the instances in view $p$ have $d_p$ features. Thus, the instances in view $p$ form an $N \times d_p$ matrix, denoted as $X^{(p)}$. Accordingly, we use $x_i^{(p)}$ to denote the instance for sample $i$ in view $p$. An $N \times P$ zero-one matrix $S$ stores the view missing information, where $s_{ip} = 1$ indicates that view $p$ for sample $i$ is available; otherwise, the view is missing. Assume that the actual number of clusters, denoted as $K$, is already known. We can thus perform clustering in each view $p$. An $N \times K$ indicator matrix $H^{(p)}$ is used to store the clustering result. If the instance of sample $i$ is missing in view $p$, the $i$th row of $H^{(p)}$ is all zero; otherwise, if sample $i$ belongs to cluster $c$ in view $p$, we have $h_{ic}^{(p)} = 1$ and $h_{ic'}^{(p)} = 0$ for $c' \neq c$. The goal of incomplete multiview clustering is to find a clustering decision from all views. Similarly, we use an $N \times K$ zero-one matrix $F$ to store the clustering decision.

2.2. k-Means Clustering

The idea behind k-means clustering is to find a clustering assignment and a set of cluster centroids that bring the samples in each cluster close to the corresponding centroid. The sum-of-squares loss is minimized to achieve this goal. Assume that $\{x_i\}_{i=1}^{N}$ is the sample set and $H \in \{0,1\}^{N \times K}$ is the unknown cluster indicator matrix, where $h_{ic} = 1$ means that sample $i$ belongs to cluster $c$, and $c_c$ is the centroid of cluster $c$. The objective function of k-means is

$$\min_{H,\,\{c_c\}} \sum_{i=1}^{N} \sum_{c=1}^{K} h_{ic} \left\| x_i - c_c \right\|^2. \qquad (1)$$

An alternate updating algorithm is designed to solve this problem. Firstly, the centroids of the clusters are initialized. The cluster assignment is then updated by assigning the cluster label of each sample according to the closest centroid. Next, the centroids are updated by calculating the average of the samples in each cluster. The centroids and the cluster assignment are alternately updated until the cluster assignment no longer changes.

2.3. Kernel k-Means Clustering

Kernel k-means clustering is the kernel version of k-means clustering [15]. The objective is to find a cluster assignment that minimizes the sum-of-squares loss between the samples and the corresponding centroids in the kernel space. Let $\phi(\cdot)$ denote the kernel mapping from the input space to a reproducing kernel Hilbert space. The objective of kernel k-means clustering is as follows:

$$\min_{H} \sum_{i=1}^{N} \sum_{c=1}^{K} h_{ic} \left\| \phi(x_i) - \mu_c \right\|^2, \qquad (2)$$

where $\mu_c = (1/n_c) \sum_{j=1}^{N} h_{jc}\, \phi(x_j)$ is the centroid of cluster $c$ and $n_c = \sum_{i=1}^{N} h_{ic}$ is the number of samples in cluster $c$.

Define the kernel matrix $\mathbf{K}$ with $\mathbf{K}_{ij} = \phi(x_i)^{\top} \phi(x_j)$ and let $L = \operatorname{diag}(n_1^{-1}, \ldots, n_K^{-1})$; $H$ satisfies $H \in \{0,1\}^{N \times K}$ and $H 1_K = 1_N$. Here $\operatorname{Tr}(\cdot)$ is the trace operator and $1_N$ is an all-one column vector with length $N$. The equivalent matrix form of Equation (2) is

$$\min_{H} \operatorname{Tr}(\mathbf{K}) - \operatorname{Tr}\!\left( L^{1/2} H^{\top} \mathbf{K} H L^{1/2} \right). \qquad (3)$$

However, the problem in Equation (3) is difficult to solve due to the discrete constraint on the variable $H$. Accordingly, we may instead solve an approximated problem where $H$ is relaxed to real values. Letting $Z = H L^{1/2}$, so that $Z^{\top} Z = I_K$, leaves us with the following problem:

$$\max_{Z \in \mathbb{R}^{N \times K}} \operatorname{Tr}\!\left( Z^{\top} \mathbf{K} Z \right), \quad \text{s.t. } Z^{\top} Z = I_K, \qquad (4)$$

where the constant $\operatorname{Tr}(\mathbf{K})$ is removed. The optimal $Z$ is found by calculating the eigenvectors that correspond to the $K$ largest eigenvalues of $\mathbf{K}$. Since $Z$ can serve as a projection of the samples into the relaxed space, k-means clustering is performed on the rows of $Z$ to obtain the final cluster assignment.
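The relaxation above can be sketched in a few lines of NumPy: take the top-$K$ eigenvectors of the kernel matrix and run ordinary k-means on the resulting embedding. This is an illustrative implementation, not the authors' code; we scale the eigenvectors by the square roots of their eigenvalues (a common variant) and use a deterministic farthest-point initialization for the k-means step.

```python
import numpy as np

def kernel_kmeans(K_mat, k):
    """Spectral relaxation of kernel k-means: embed the samples with the
    top-k eigenvectors of the kernel matrix, then run Lloyd's algorithm."""
    w, v = np.linalg.eigh(K_mat)                      # eigenvalues ascending
    Z = v[:, -k:] * np.sqrt(np.maximum(w[-k:], 0))    # eigenvalue-scaled embedding
    # farthest-point initialization (deterministic)
    idx = [0]
    for _ in range(k - 1):
        d = ((Z[:, None, :] - Z[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    C = Z[idx]
    for _ in range(100):                              # Lloyd's updates
        lab = ((Z[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        newC = np.array([Z[lab == c].mean(0) if (lab == c).any() else C[c]
                         for c in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return lab

# illustrative data: a linear kernel over two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels = kernel_kmeans(X @ X.T, 2)
```

On this toy kernel the two blobs are cleanly separated in the spectral embedding, so the final assignment recovers them exactly.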

3. The Proposed Method

In a departure from conventional subspace methods, we develop a late fusion method for incomplete multiview clustering. This method performs kernel k-means clustering in each incomplete view and then finds a consensus cluster according to each view’s clustering result. The first step of the late fusion method, which is easy to understand, will be introduced only briefly. We will focus primarily on the second step to explain how a fusion of the incomplete clustering results from different views might be created. The overall algorithm is then presented and its complexity analyzed.

3.1. Clustering with Visible Instances in Each View

In line with most previous research on incomplete multiview clustering, we also assume that the instances in each view satisfy the random missing assumption. Although there are missing instances in an incomplete view, a common clustering method can be applied directly to the visible instances. As pointed out in the introduction, the clustering results in each view are reliable, which makes the late fusion of these results promising. In this paper, we perform kernel k-means on each incomplete view, since the multiview datasets are kernel data. Other clustering methods could also be used in this step. It should be noted that while different clustering methods may differ in their robustness to random missing conditions, an investigation of this is beyond the scope of this paper. The clustering results are encoded as zero-one matrices $H^{(1)}, \ldots, H^{(P)}$, as described in the Notation section.
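As a concrete illustration of this encoding, the helper below builds the zero-one indicator matrix of one view, leaving all-zero rows for missing instances. The function name and the toy labels are our own, not from the paper.

```python
import numpy as np

def encode_indicator(labels, visible, k):
    """Build a view's N-by-k zero-one indicator matrix H: row i is the
    one-hot cluster label of sample i, or all zero if the instance is
    missing in this view."""
    n = len(visible)
    H = np.zeros((n, k))
    for i in range(n):
        if visible[i]:
            H[i, labels[i]] = 1
    return H

# 5 samples, 2 clusters; sample 3 is missing in this view
visible = np.array([True, True, True, False, True])
labels = np.array([0, 1, 1, -1, 0])   # -1 marks "no label" for the missing row
H = encode_indicator(labels, visible, 2)
```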

3.2. The Proposed Late Fusion Objective

To create a fusion of the clustering results , we consider these clustering results as compressed representations in each view. Each row of the matrix can also serve as a compressed representation of the corresponding instance. The aim is to find a final clustering decision that can adequately group the compressed representations in each view. For the incomplete view, it is natural to expect that the remaining visible parts of the view can also be grouped well according to the final clustering decision.

For view $p$, we use $h_i^{(p)}$ to denote the $i$th row of $H^{(p)}$; the position of its nonzero entry is the cluster label of the $i$th instance in view $p$. At the same time, $h_i^{(p)}$ can serve as a compressed representation of the $i$th instance. When performing clustering on $H^{(p)}$, suppose the cluster indicator matrix is $F$ and the centroid of cluster $c$ is $c_c^{(p)}$. The objective function for performing k-means clustering with the visible compressed representations in view $p$ is thus

$$\min_{F,\, \{c_c^{(p)}\}} \sum_{i=1}^{N} \sum_{c=1}^{K} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2, \qquad (5)$$

where $s_{ip}$ is used to select the visible parts following the description in the Notation section.

For the multiview situation, we wish to find a consistent clustering decision that groups each view’s visible compressed representations adequately. Thus, we propose to minimize the sum of the k-means objective values of all views with the visible compressed representations. The proposed objective function is as follows:

$$\min_{F,\, \{c_c^{(p)}\}_{p=1}^{P}} \sum_{p=1}^{P} \sum_{i=1}^{N} \sum_{c=1}^{K} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2. \qquad (6)$$

3.3. Optimization of the Late Fusion Objective

Similar to k-means clustering, we iteratively update $F$ and $\{c_c^{(p)}\}$ to solve the problem in Equation (6).

(1) Updating $F$: when $\{c_c^{(p)}\}$ are fixed, the optimization problem is

$$\min_{F} \sum_{p=1}^{P} \sum_{i=1}^{N} \sum_{c=1}^{K} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2, \quad \text{s.t. } f_{ic} \in \{0,1\},\ \sum_{c=1}^{K} f_{ic} = 1. \qquad (7)$$

The updating of $F$ is similar to that of k-means clustering:

$$f_{ic} = \begin{cases} 1, & \text{if } c = \arg\min_{c'} \sum_{p=1}^{P} s_{ip} \left\| h_i^{(p)} - c_{c'}^{(p)} \right\|^2, \\ 0, & \text{otherwise.} \end{cases} \qquad (8)$$

Lemma 1. Equation (8) is the optimal solution for the optimization problem in Equation (7).

Proof. Minimizing Equation (7) is equivalent to minimizing the following subproblem for each sample $i$ separately:

$$\min_{f_i} \sum_{c=1}^{K} f_{ic} \sum_{p=1}^{P} s_{ip} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2, \quad \text{s.t. } f_{ic} \in \{0,1\},\ \sum_{c=1}^{K} f_{ic} = 1. \qquad (9)$$

Denoting $e_{ic} = \sum_{p=1}^{P} s_{ip} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2$, we then have

$$\sum_{c=1}^{K} f_{ic}\, e_{ic} \geq \min_{c} e_{ic}. \qquad (10)$$

When $f_i$ follows Equation (8), according to Equation (10), the subproblem in Equation (9) reaches its minimum.

(2) Updating $\{c_c^{(p)}\}$: when $F$ is fixed, the optimization problem is

$$\min_{\{c_c^{(p)}\}} \sum_{p=1}^{P} \sum_{i=1}^{N} \sum_{c=1}^{K} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2. \qquad (11)$$

By setting the derivative of Equation (11) with respect to $c_c^{(p)}$ to 0, we can obtain the updated $c_c^{(p)}$ as

$$c_c^{(p)} = \frac{\sum_{i=1}^{N} s_{ip}\, f_{ic}\, h_i^{(p)}}{\sum_{i=1}^{N} s_{ip}\, f_{ic}}. \qquad (12)$$

Lemma 2. Equation (12) is the optimal solution for the optimization in Equation (11).

Proof. Equation (11) is equivalent to

$$\min_{\{c_c^{(p)}\}} \sum_{p=1}^{P} \sum_{c=1}^{K} \left( \sum_{i=1}^{N} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2 \right). \qquad (13)$$

Therefore, minimizing Equation (11) is equivalent to minimizing the term for each cluster $c$ and view $p$ separately. The subproblem is as follows:

$$\min_{c_c^{(p)}} \sum_{i=1}^{N} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2. \qquad (14)$$

The derivative of Equation (14) with respect to $c_c^{(p)}$ is as follows:

$$\frac{\partial}{\partial c_c^{(p)}} \sum_{i=1}^{N} s_{ip}\, f_{ic} \left\| h_i^{(p)} - c_c^{(p)} \right\|^2 = -2 \sum_{i=1}^{N} s_{ip}\, f_{ic} \left( h_i^{(p)} - c_c^{(p)} \right), \qquad (15)$$

which equals 0 when $c_c^{(p)}$ is set as in Equation (12). Because Equation (14) is convex, it reaches its minimum at this point. Therefore, each subproblem reaches its minimum, meaning that Equation (11) also reaches its minimum.

3.4. Convergence of the Alternate Optimization

Theorem 1. The alternate updating of and converges.

Proof. According to Lemma 1 and Lemma 2, the objective value is nonincreasing in the updating of both $F$ and $\{c_c^{(p)}\}$. Moreover, because $s_{ip} \geq 0$, $f_{ic} \geq 0$, and $\| h_i^{(p)} - c_c^{(p)} \|^2 \geq 0$, the objective value is lower bounded by 0. As a result, the alternate updating procedure converges.

3.5. Initialization for $F$

For the alternate optimization, $F$ should be initialized in order to begin the optimizing process. The initialization of $F$ is an important factor in the performance of the final clustering decision. In order to obtain better performance, the initialization is not random. Instead, we use a basic method for incomplete multiview clustering to obtain an initial indicator matrix $F_0$. For example, we can first fill the incomplete data with a filling method such as zero filling and then perform multiple kernel k-means clustering to obtain $F_0$. Selecting a suitable method to obtain $F_0$ is crucial for the proposed method; we explore this through a number of experiments in the Experiments section.

3.6. The Proposed Algorithm and Complexity Analysis

The overall algorithm is summarized in Algorithm 1. As discussed above, the initial clustering decision $F_0$ is obtained from another basic solution of incomplete multiview clustering, and the centroids are initialized by calculating Equation (12) with $F_0$.

Input:
 Incomplete multiview data: $X^{(1)}, \ldots, X^{(P)}$ and the missing information matrix $S$
 The number of clusters: $K$
 The initial clustering decision: $F_0$
Output:
 The final clustering decision: $F$
(1) Perform kernel k-means with the visible instances in each view with cluster number $K$ to get the clustering results $H^{(1)}, \ldots, H^{(P)}$
(2) Initialize the centroids $\{c_c^{(p)}\}$ with $F_0$ according to Equation (12)
(3) repeat
(4)  Update $F$ with Equation (8)
(5)  Update $\{c_c^{(p)}\}$ according to Equation (12)
(6) until $F$ does not change
(7) return $F$

Eigenvector decomposition is applied to solve the kernel k-means problem. The time complexity of eigenvector decomposition using the most popular QR algorithm is $O(N^3)$ [16]. For all views, the complexity is $O(PN^3)$. Assume that the alternate updating procedure iterates $R$ times. For each iteration, the complexity of updating $F$ is $O(NPK^2)$, while according to Equation (12), the complexity of updating the centroids is $O(NPK)$. Accordingly, the overall complexity of the proposed late fusion method is $O(PN^3 + RNPK^2)$.
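Algorithm 1 can be sketched directly in NumPy. The function below is our illustrative implementation under the notation of this paper: `Hs[p]` is the indicator matrix of view p, `S` the visibility matrix, and `F0` the initial clustering decision; the small epsilon guarding empty clusters is our own addition.

```python
import numpy as np

def late_fusion(Hs, S, F0, max_iter=100):
    """A sketch of Algorithm 1. Hs[p]: N-by-K zero-one indicator of view p
    (all-zero rows for missing instances); S: N-by-P visibility matrix;
    F0: initial N-by-K clustering decision."""
    P = len(Hs)
    N, K = F0.shape
    F = F0.copy()
    for _ in range(max_iter):
        # centroid update, Equation (12): weighted mean of the visible
        # compressed representations assigned to each cluster
        C = []
        for p in range(P):
            w = S[:, p:p + 1] * F                    # N x K weights
            cnt = np.maximum(w.sum(0), 1e-12)        # guard empty clusters
            C.append((w.T @ Hs[p]) / cnt[:, None])   # K x K centroid matrix
        # decision update, Equation (8): assign each sample to the cluster
        # minimizing the distance summed over its visible views
        D = np.zeros((N, K))
        for p in range(P):
            d = ((Hs[p][:, None, :] - C[p][None]) ** 2).sum(-1)
            D += S[:, p:p + 1] * d
        newF = np.zeros_like(F)
        newF[np.arange(N), D.argmin(1)] = 1
        if np.array_equal(newF, F):
            break
        F = newF
    return F

# tiny example: 2 views, 4 samples, 2 clusters; sample 3 is missing in
# view 2 and F0 mislabels sample 1; the fusion recovers {0,1} vs {2,3}
H1 = np.array([[1., 0], [1, 0], [0, 1], [0, 1]])
H2 = np.array([[1., 0], [1, 0], [0, 1], [0, 0]])
S = np.array([[1., 1], [1, 1], [1, 1], [1, 0]])
F0 = np.array([[1., 0], [0, 1], [0, 1], [0, 1]])
F = late_fusion([H1, H2], S, F0)
```

Note how the visibility matrix masks the missing row of view 2 out of both the centroid and the assignment updates, so no imputation is needed.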

4. Experiments

4.1. Datasets

Experimental comparisons are conducted on six multiple kernel learning benchmark datasets. In these datasets, each kernel serves as a view.

4.1.1. Caltech102

A precomputed kernel dataset from [17], which is generated from the object categorization dataset Caltech101. This dataset can be downloaded from http://files.is.tue.mpg.de/pgehler/projects/iccv09/#download.

4.1.2. CCV

A consumer video analysis benchmark dataset proposed in [18]. The original dataset can be downloaded from http://www.ee.columbia.edu/ln/dvmm/CCV/. We compute three linear kernels on its MFCC, SIFT, and STIP features and three Gaussian kernels on the same features, with the kernel widths set to the mean of the pairwise sample distances.

4.1.3. Digital

Handwritten numerals (0–9) dataset from UCI Machine Learning Repository. The original dataset consists of 6 feature sets and can be downloaded from http://archive.ics.uci.edu/ml/datasets/Multiple+Features. Following the settings in [6], we select 3 of 6 feature sets (Fourier feature set, pixel averages feature set, and morphological feature set) to generate 3 kernels.

4.1.4. Flower17

17 category flower dataset from Visual Geometry Group. The original dataset can be downloaded from http://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html.

4.1.5. Flower102

102 category flower dataset from Visual Geometry Group. The original dataset can be downloaded from http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html.

4.1.6. ProteinFold

A fold recognition dataset consisting of 694 proteins from 27 SCOP folds [19]. Following the settings in [19], we generate 10 second-order polynomial kernels and two inner product kernels. The Matlab file of the kernel data can be downloaded from https://github.com/HoiYe/MKL_datasets/blob/master/proteinFold_Kmatrix.mat.

The basic information of these datasets is summarized in Table 1.


Table 1: Basic information of the benchmark datasets.

Dataset       Sample number   Kernel number   Cluster number
Caltech102    1530            25              102
CCV           6773            6               20
Digital       2000            3               10
Flower17      1360            7               17
Flower102     8189            4               102
ProteinFold   694             12              27

4.2. Compared Methods

The proposed method is compared with several imputation methods and a representative early fusion method. Moreover, the best result of a single view is also provided as a baseline.

4.2.1. Best Result of a Single View (BS)

The best clustering result among the single views. We select the view that achieves the highest clustering performance with its visible instances. If this view is incomplete, we assign the missing instances random labels and then report the performance.

4.2.2. Zero Filling Plus Multiple Kernel k-Means (ZF)

The missing kernel entries are filled by zero, after which multiple kernel k-means clustering is applied.

4.2.3. Mean Filling Plus Multiple Kernel k-Means (MF)

The missing kernel entries are filled by the average value of the corresponding visible entries in other views. Multiple kernel k-means clustering is then applied.

4.2.4. k-Nearest Neighbor Filling Plus Multiple Kernel k-Means (KNN)

The incomplete kernels are filled using the k-nearest neighbor imputation algorithm, after which multiple kernel k-means is applied.

4.2.5. Alignment-Maximization Filling Plus Multiple Kernel k-Means (AF)

The alignment-maximization filling proposed in [11] is a simple yet efficient kernel imputation method. A complete kernel is generated by averaging the zero-filled kernels of each view, after which each incomplete kernel is filled with this complete kernel according to the algorithm in [11]. Multiple kernel k-means clustering is applied after filling the incomplete kernels.

4.2.6. Partial View Clustering (PVC)

This subspace method, proposed in [20], tries to learn a subspace where two views’ instances of the same sample are similar. It is a representative early fusion method for incomplete multiview clustering.

4.3. Experimental Setting

In our experiments, the number of clusters is considered as prior knowledge. Base kernels are centralized and scaled during the preprocessing procedure following the suggestion in [21].

Since the base kernels are complete in the original datasets, the incomplete kernels need to be generated manually. We assume that the ratio of samples with missing views (the incomplete sample ratio) is ϵ. To generate the missing view information matrix $S$, we randomly select $\epsilon N$ samples. The missing probability of each view is $p_m$. For each selected sample, a random vector $r \in [0,1]^P$ is generated, and the $p$th view is marked missing for this sample if $r_p < p_m$. Since at least one view should exist for every sample, we generate a new random vector until at least one view for the sample is present. In our experiments, ϵ varies from 0.1 to 0.9 to demonstrate how the performance of different methods varies with respect to ϵ, while $p_m$ is fixed at 0.5. Normalized mutual information (NMI) is applied to evaluate the clustering performance.
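The missing-pattern generation described above can be sketched as follows (the function name and defaults are our own):

```python
import numpy as np

def make_missing_mask(n, n_views, eps, p_m, seed=None):
    """Generate an n-by-p zero-one visibility matrix S: a fraction eps of
    samples get views dropped with probability p_m each, and every sample
    keeps at least one visible view."""
    rng = np.random.default_rng(seed)
    S = np.ones((n, n_views), dtype=int)
    incomplete = rng.choice(n, size=int(round(eps * n)), replace=False)
    for i in incomplete:
        row = (rng.random(n_views) >= p_m).astype(int)
        while row.sum() == 0:            # resample until one view survives
            row = (rng.random(n_views) >= p_m).astype(int)
        S[i] = row
    return S

S = make_missing_mask(1000, 6, eps=0.5, p_m=0.5, seed=0)
```

Resampling all-zero rows slightly biases the effective per-view missing rate below p_m, which is inherent to the "at least one visible view" constraint.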

4.4. Experimental Results
4.4.1. Late Fusion Performance with Different Initializations

The proposed method requires an initial clustering decision for the late fusion process. In this paper, the clustering decisions of commonly used imputation methods are employed for initialization. We expect a performance improvement after late fusion compared with the initial clustering decision. In Table 2, we compare the performance of each initial method and the corresponding late fusion result on six benchmark datasets with different incomplete sample ratios. It can be observed that improvements are evident in most situations under late fusion. On ProteinFold, Flower17, Caltech102, and Digital, a consistent boost with late fusion can be achieved; for example, late fusion performance is about 27% higher than the BS initial result when 80% of samples are incomplete on Digital. The reason for this performance boost is that the late fusion step considers the consistency between views and leverages the information from all views to revise the initial clustering. However, there are some exceptions. On CCV, the late fusion result is worse than AF when 20% of samples are incomplete. We suggest that this result emerges for the following reasons: first, AF can achieve a fairly good imputation on CCV when the incomplete sample ratio is 20%; second, the views of CCV may not be highly consistent, which could hurt the effectiveness of the late fusion step. When the incomplete sample ratio is 80%, late fusion fails to improve the performance for three of five methods on Flower102. This indicates that late fusion also suffers when the percentage of incomplete samples is high. Because the late fusion method is based on the consistency assumption, we can assume that the inclusion of some noisy views, due to the high incomplete sample ratio on Flower102, has attenuated the performance of the late fusion method.
However, in most cases, the late fusion procedure improves performance relative to the initial method. Exceptions occur when the consistency between the views of the dataset is not strong or the initial method is already highly effective. It is noteworthy that the late fusion procedure can be viewed as a refinement process for the initial method’s clustering decision.


Table 2: NMI (%) of the initial methods (Init) and the corresponding late fusion results (Late) on the six benchmark datasets.

        ProteinFold     Flower17        Caltech102      Digital         CCV             Flower102
        Init   Late     Init   Late    Init   Late    Init   Late    Init   Late    Init   Late

Incomplete sample ratio 20%
BS      33.53  36.93    37.27  44.26   52.70  55.11   57.24  66.08   15.14  17.45   43.05  46.30
ZF      34.79  37.96    41.95  46.39   57.21  58.86   44.85  54.98   13.06  13.09   39.90  40.50
MF      35.27  37.87    42.45  47.23   57.07  58.84   44.56  51.60   13.23  13.36   39.83  40.40
KNN     35.19  37.94    42.60  46.42   57.23  58.92   65.59  71.30   12.59  13.34   40.04  40.44
AF      37.14  38.89    43.87  46.88   58.16  59.30   48.08  54.40   13.82  13.20   40.35  40.51

Incomplete sample ratio 50%
BS      28.27  34.20    39.52  43.64   48.62  52.96   45.03  63.84   10.94  16.84   35.74  43.06
ZF      30.05  34.14    38.28  44.54   53.35  56.17   41.34  51.87    8.88  12.82   37.04  38.02
MF      31.10  34.15    37.45  44.02   53.26  56.33   40.17  49.91    8.97  12.76   36.98  38.00
KNN     33.73  35.83    38.74  44.42   54.86  56.80   65.90  69.19    9.53  12.73   37.52  38.09
AF      33.94  35.68    42.24  43.80   57.41  58.22   47.35  53.98   10.92  12.64   38.08  38.18

Incomplete sample ratio 80%
BS      24.63  32.58    22.18  42.51   46.87  51.36   35.28  62.27    7.61  15.45   29.66  39.49
ZF      26.03  30.29    33.38  42.79   50.99  54.04   39.06  51.51    8.76  12.62   35.21  35.23
MF      27.50  30.92    33.28  42.63   50.53  53.75   35.38  48.74    9.01  12.98   35.27  35.17
KNN     32.92  33.99    34.18  41.64   53.14  54.95   58.95  62.43    8.40  12.59   35.90  35.34
AF      32.73  33.73    40.02  43.51   56.21  56.84   46.07  52.58   10.61  12.93   36.53  35.66

4.4.2. Choosing Initialization Method

Although the experimental results in the previous section show that improvement can be obtained using the late fusion method, the question of how to choose a suitable initialization to ensure the best final performance remains unsolved. In this section, we conduct some empirical studies to determine the relationship between the initialization method and the final late fusion performance.

For each dataset, we calculate the mean NMI over the different incomplete sample ratios for the late fusion of each initialization. We then rank the performance on each dataset to see which initialization achieves the best final performance, as shown in Table 3. Late fusion based on KNN ranks first on ProteinFold and Digital, while on Flower17 and Caltech102, late fusion based on AF achieves the best performance. On the two relatively large datasets, that is, CCV and Flower102, late fusion based on BS is most suitable. In the last two columns of Table 3, “Rank score” denotes the average rank over the six datasets, while “Overall” denotes the rank of “Rank score.” The “Overall” column indicates that AF may be a good choice for the best final fusion performance over the six datasets.


Table 3: Mean NMI (%) over the incomplete sample ratios and the corresponding ranks of late fusion with different initializations.

      ProteinFold    Flower17      Caltech102    Digital       CCV           Flower102     Rank score  Overall
      Mean   Rank    Mean   Rank   Mean   Rank   Mean   Rank   Mean   Rank   Mean   Rank

BS    34.62  3       43.67  5      53.02  5      64.28  2      16.78  1      42.89  1      3.33        3
ZF    34.18  5       44.70  2      56.22  3      52.82  4      12.80  5      37.95  4      4.83        4
MF    34.50  4       44.65  3      56.19  4      50.26  5      12.91  3      37.91  5      5.00        5
KNN   35.93  1       44.28  4      56.87  2      68.63  1      12.80  4      37.97  3      3.17        2
AF    35.90  2       45.02  1      58.17  1      53.77  3      13.11  2      38.14  2      2.50        1

However, as shown in Table 2, it is possible for the late fusion performance to be worse than the initial result. Therefore, we also investigate the relative performance changes of different initializations after late fusion to see which initial methods are boosted less. Similarly, for each dataset, we calculate the mean NMI change over the different incomplete sample ratios after late fusion for each initialization. Table 4 shows that BS, ZF, and MF benefit substantially from late fusion; for example, when using BS as the initialization, there is an 18.09% boost on Digital. In contrast, late fusion yields only a modest boost for AF, such as 6.47% on Digital.


Table 4: Mean NMI changes (%) after late fusion and the corresponding ranks for different initializations.

      ProteinFold     Flower17       Caltech102     Digital        CCV            Flower102      Rank score  Overall
      Change  Rank    Change  Rank   Change  Rank   Change  Rank   Change  Rank   Change  Rank

BS    5.31    1       14.20   1      3.78    1      18.09   1      4.93    1       6.73   1      1.00        1
ZF    3.86    2       7.29    2      2.48    3      10.87   2      2.52    2       0.55   2      2.17        2
MF    3.16    3       7.17    3      2.56    2      10.14   3      2.51    3       0.51   3      2.83        3
KNN   2.11    4       5.93    4      1.76    4      4.62    5      2.30    4       0.14   4      4.50        4
AF    1.47    5       3.02    5      0.89    5      6.47    4      1.18    5      −0.17   5      5.00        5

In short, it may be impossible to find a universally best initialization technique for the proposed late fusion method. However, the empirical results allow us to draw some conclusions regarding the choice of initialization. (1) If we have strong prior knowledge to decide which view is most important, BS may be a suitable initialization, since BS can be substantially boosted by late fusion (overall rank 1, as shown in Table 4) and achieves relatively good final performance (rank 1 on CCV and Flower102, overall rank 3, as shown in Table 3). (2) Although AF is a very good initialization that leads to the best late fusion performance (overall rank 1, as shown in Table 3), there is a risk that the late fusion process may not give better results than the initial one. (3) ZF and MF are not recommended as initializations, due to their poor final late fusion performance.

4.4.3. Comparisons between the Best Late Fusion and the Basic Methods

Figure 4 shows that, with the best initialization, the proposed late fusion method always achieves the best NMI on the six benchmark datasets compared with the basic methods. For example, on the challenging dataset CCV, late fusion with the best initialization outperforms the other methods at all incomplete sample ratios. More specifically, when the incomplete sample ratio is 0.9, the late fusion method significantly outperforms the second best method by around 5%. The results in Figure 4 indicate that the proposed late fusion method can benefit from a suitable initialization to achieve better performance than the commonly used imputation methods.

4.4.4. Comparisons with Early Fusion Method for Two Views

In this section, we compare the proposed method with partial view clustering (PVC), a representative early fusion method proposed in [20]. PVC was originally designed for two views and is difficult to adapt to more than two views. Therefore, we compare the performance on pairs of views selected from Digital. According to the experimental results presented in Table 3, KNN is the best initialization on Digital; we thus compare the performance of PVC with late fusion using KNN as the initial method. Moreover, we examine the result of late fusion with the PVC initialization to determine whether the late fusion method can boost the performance of the initial PVC clustering decision. From Figure 5, we can observe that the late fusion step improves upon PVC as the initial method, since PVC + late fusion always performs better than PVC. On view 1 and view 2, the performance of PVC + late fusion is comparable with that of KNN + late fusion. The results for view 1 and view 3 and for view 2 and view 3 show that KNN + late fusion has the best performance and significantly outperforms PVC. Overall, the results on Digital indicate that the proposed late fusion method can improve the PVC clustering decision and can also significantly outperform PVC with a suitable initialization. Of particular interest, these results indicate that the proposed late fusion process can refine the results of an early fusion method.

5. Conclusion

In this paper, we propose a novel late fusion method that learns a consensus clustering decision from the clustering results of incomplete views without imputation. To learn the consensus clustering decision, we design an alternate updating algorithm and prove its convergence theoretically. Moreover, we perform comprehensive experiments to carefully study how initialization affects the final performance of the proposed method. Although no single initialization is best in all situations, we suggest that the clustering result of the best single view is an effective initialization. With suitable initialization, the proposed method outperforms the commonly used imputation methods and a representative early fusion method.
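As a rough illustration of the alternate updating idea, the following NumPy sketch is our simplified reconstruction, not the paper's exact optimization; the function and parameter names are ours. It encodes each view's clustering result as a zero-one matrix and then alternates between recomputing per-view centroids from the visible rows and reassigning a consensus label to each instance under the k-means objective.

```python
import numpy as np

def late_fusion(view_labels, view_masks, k, init=None, n_iters=50, seed=0):
    """Learn a consensus clustering from per-view clustering results.

    view_labels[v] holds the cluster labels of the instances visible in
    view v; view_masks[v] is a boolean array marking which of the n
    instances are visible in that view.
    """
    rng = np.random.default_rng(seed)
    n = len(view_masks[0])
    # Encode each view's clustering result as a zero-one (one-hot) matrix;
    # rows of instances missing from the view remain all-zero.
    encodings = []
    for labels, mask in zip(view_labels, view_masks):
        H = np.zeros((n, k))
        H[np.flatnonzero(mask), labels] = 1.0
        encodings.append(H)
    consensus = np.asarray(init) if init is not None else rng.integers(0, k, n)
    for _ in range(n_iters):
        # Step 1: per view, recompute centroids from visible rows only.
        centroids = []
        for H, mask in zip(encodings, view_masks):
            C = np.zeros((k, k))
            for c in range(k):
                rows = H[mask & (consensus == c)]
                if len(rows) > 0:
                    C[c] = rows.mean(axis=0)
            centroids.append(C)
        # Step 2: reassign each instance to the cluster minimizing its
        # summed squared distance over the views where it is visible.
        cost = np.zeros((n, k))
        for H, C, mask in zip(encodings, centroids, view_masks):
            d = ((H[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
            cost += np.where(mask[:, None], d, 0.0)
        new_consensus = cost.argmin(axis=1)
        if np.array_equal(new_consensus, consensus):
            break  # the objective can no longer decrease
        consensus = new_consensus
    return consensus
```

Consistent with the suggestion above, `init` can be set to the clustering result of the best single view rather than left random.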

Although the proposed method demonstrates the effectiveness of the late fusion strategy for incomplete multiview clustering, several promising directions remain for further research. The first is to generate clusters automatically without fixing their number. In many real-world clustering applications, the number of clusters is unknown, in which case the proposed method cannot be applied. Instead of kernel k-means, other density-based clustering methods could produce the single-view clustering results [22], after which a new method would be needed to integrate the information across views; integrating density-based clustering results is a challenging problem. The second direction is to apply deep learning techniques for better late fusion results: since 3D ConvNets have achieved great success in feature learning [23], performing late fusion after feature learning with 3D ConvNets may improve the final clustering performance. The third direction is to investigate how the clustering method used in each single view affects late fusion performance. In this paper, we perform kernel k-means clustering in each incomplete view, but other advanced clustering methods are also available [24–27]; which kinds of methods are best suited to late fusion for incomplete multiview clustering remains an open question.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Key R&D Program of China (No. 2018YFB1003203), the National Natural Science Foundation of China (Nos. 61672528, 61403405, and 61702593), and Hunan Provincial Natural Science Foundation of China (No. 2018JJ3611).

References

  1. Z. Halim and Uzma, “Optimizing the minimum spanning tree-based extracted clusters using evolution strategy,” Cluster Computing, vol. 21, no. 1, pp. 377–391, 2017.
  2. X. Gu, P. P. Angelov, D. Kangin, and J. C. Principe, “A new type of distance metric and its use for clustering,” Evolving Systems, vol. 8, no. 3, pp. 167–177, 2017.
  3. R. Hyde, P. Angelov, and A. MacKenzie, “Fully online clustering of evolving data streams into arbitrarily shaped clusters,” Information Sciences, vol. 382-383, pp. 96–114, 2017.
  4. Q. Yin, S. Wu, R. He, and L. Wang, “Multi-view clustering via pairwise sparse subspace representation,” Neurocomputing, vol. 156, pp. 12–21, 2015.
  5. Y. Ye, X. Liu, J. Yin, and E. Zhu, “Co-regularized kernel k-means for multi-view clustering,” in Proceedings of the International Conference on Pattern Recognition, Cancun, Mexico, December 2016.
  6. X. Liu, Y. Dou, J. Yin, L. Wang, and E. Zhu, “Multiple kernel k-means clustering with matrix-induced regularization,” in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, February 2016.
  7. E. Bruno and S. Marchand-Maillet, “Multiview clustering: a late fusion approach using latent models,” in Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2009), pp. 736-737, Boston, MA, USA, July 2009.
  8. S. F. Hussain, M. Mushtaq, and Z. Halim, “Multi-view document clustering via ensemble method,” Journal of Intelligent Information Systems, vol. 43, no. 1, pp. 81–99, 2014.
  9. M. Xu, T. Wong, and K. S. Chin, “A medical procedure-based patient grouping method for an emergency department,” Applied Soft Computing, vol. 14, pp. 31–37, 2014.
  10. W. Shao, X. Shi, and S. Y. Philip, “Clustering on multiple incomplete datasets via collective kernel learning,” in Proceedings of the 2013 IEEE 13th International Conference on Data Mining, pp. 1181–1186, Dallas, TX, USA, December 2013.
  11. P. Rai, A. Trivedi, H. Daumé III, and S. L. DuVall, “Multiview clustering with incomplete views,” in Proceedings of the NIPS Workshop on Machine Learning for Social Computing, Whistler, Canada, October 2010.
  12. H. Zhao, H. Liu, and Y. Fu, “Incomplete multi-modal visual data grouping,” in Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, July 2016.
  13. Q. Yin, S. Wu, and L. Wang, “Incomplete multi-view clustering via subspace learning,” in Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM ’15), pp. 383–392, Melbourne, Australia, October 2015.
  14. W. Shao, L. He, and S. Y. Philip, “Multiple incomplete views clustering via weighted nonnegative matrix factorization with regularization,” in Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 318–334, Porto, Portugal, September 2015.
  15. I. S. Dhillon, Y. Guan, and B. Kulis, “Kernel k-means: spectral clustering and normalized cuts,” in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 551–556, Seattle, WA, USA, August 2004.
  16. V. Y. Pan and Z. Q. Chen, “The complexity of the matrix eigenproblem,” in Proceedings of the ACM Symposium on Theory of Computing, pp. 507–516, Atlanta, GA, USA, May 1999.
  17. P. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 221–228, September 2009.
  18. Y.-G. Jiang, G. Ye, S.-F. Chang, D. Ellis, and A. C. Loui, “Consumer video understanding: a benchmark database and an evaluation of human and machine performance,” in Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR), Trento, Italy, April 2011.
  19. T. Damoulas and M. A. Girolami, “Probabilistic multi-class multi-kernel learning: on protein fold recognition and remote homology detection,” Bioinformatics, vol. 24, no. 10, pp. 1264–1270, 2008.
  20. S. Li, Y. Jiang, and Z. Zhou, “Partial multi-view clustering,” in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec, Canada, July 2014.
  21. C. Cortes, M. Mohri, and A. Rostamizadeh, “Algorithms for learning kernels based on centered alignment,” Journal of Machine Learning Research, vol. 13, pp. 795–828, 2012.
  22. Z. Halim and J. H. Khattak, “Density-based clustering of big probabilistic graphs,” Evolving Systems, pp. 1–18, 2018.
  23. I. Ullah and A. Petrosino, “Spatiotemporal features learning with 3DPyraNet,” in Advanced Concepts for Intelligent Vision Systems, pp. 638–647, Springer International Publishing, Cham, Switzerland, 2016.
  24. Z. Halim, M. Waqas, A. R. Baig, and A. Rashid, “Efficient clustering of large uncertain graphs using neighborhood information,” International Journal of Approximate Reasoning, vol. 90, 2017.
  25. Z. Halim, M. Atif, A. Rashid, and C. A. Edwin, “Profiling players using real-world datasets: clustering the data and correlating the results with the big-five personality traits,” IEEE Transactions on Affective Computing, 2017, in press.
  26. Z. Halim, M. Waqas, and S. F. Hussain, “Clustering large probabilistic graphs using multi-population evolutionary algorithm,” Information Sciences, vol. 317, pp. 78–95, 2015.
  27. C. G. Bezerra, B. S. J. Costa, L. A. Guedes, and P. P. Angelov, “A new evolving clustering algorithm for online data streams,” in Proceedings of the 2016 IEEE Conference on Evolving and Adaptive Intelligent Systems, pp. 162–168, Natal, Brazil, May 2016.

Copyright © 2018 Yongkai Ye et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

