Special Issue: Developments in Mobile Multimedia Technologies
Research Article | Open Access
Shicheng Li, Qinghua Liu, Jiangyan Dai, Wenle Wang, Xiaolin Gui, Yugen Yi, "Adaptive-Weighted Multiview Deep Basis Matrix Factorization for Multimedia Data Analysis", Wireless Communications and Mobile Computing, vol. 2021, Article ID 5526479, 12 pages, 2021. https://doi.org/10.1155/2021/5526479
Adaptive-Weighted Multiview Deep Basis Matrix Factorization for Multimedia Data Analysis
Feature representation learning is a key issue in artificial intelligence research. Multiview multimedia data can provide rich information, which makes feature representation one of the current research hotspots in data analysis. Recently, a large number of multiview feature representation methods have been proposed, among which matrix factorization shows excellent performance. Therefore, we propose an adaptive-weighted multiview deep basis matrix factorization (AMDBMF) method that integrates matrix factorization, deep learning, and view fusion. Specifically, we first perform deep basis matrix factorization on the data of each view. Then, all views are integrated to complete the multiview feature learning procedure. Finally, we propose an adaptive weighting strategy to fuse the low-dimensional features of each view so that a unified feature representation can be obtained for multiview multimedia data. We also design an iterative update algorithm to optimize the objective function and justify the convergence of the optimization algorithm through numerical experiments. We conducted clustering experiments on five multiview multimedia datasets and compared the proposed method with several state-of-the-art methods. The experimental results demonstrate that the clustering performance of the proposed method is better than that of the comparison methods.
With the rapid development of computer technology, the multimedia data collected in many research fields, such as computer vision, image processing, and natural language processing, often have high-dimensional features and complex structures. These high-dimensional data not only provide abundant information but also bring problems such as the “curse of dimensionality” [1, 2]. Therefore, how to effectively deal with high-dimensional data has become a widespread concern. Dimensionality reduction is an efficient way to address this issue: it maps the original data to a low-dimensional space and obtains a low-dimensional representation derived from the hidden information in the original data.
In recent years, many dimensionality reduction methods have been proposed for multimedia data. Matrix factorization has become one of the research hotspots owing to its simple theoretical basis and easy implementation. Principal component analysis (PCA), independent component analysis (ICA), and vector quantization (VQ) are well-known matrix factorization methods that obtain a low-rank approximation by decomposing a high-dimensional data matrix, and they can effectively extract a low-dimensional representation from high-dimensional data. However, these methods place no constraints on the matrix elements during decomposition, so the results may contain negative elements, which causes the low-dimensional representations to lose physical meaning. To solve this problem, Lee et al. added nonnegativity constraints to matrix decomposition and proposed the nonnegative matrix factorization (NMF) method. The low-dimensional feature representations obtained by NMF are part-based and therefore highly interpretable. Consequently, NMF has attracted wide attention from researchers, and a large number of improved algorithms based on NMF have emerged, achieving great success in computer vision, natural language processing, speech recognition, DNA sequence analysis, and other areas [10–13].
NMF decomposes the original nonnegative data matrix into the product of a nonnegative basis matrix and a nonnegative coefficient matrix (also called the low-dimensional feature matrix). Each original sample can be expressed as a linear combination of the basis vectors, and the combination coefficients form the coefficient matrix. Since NMF uses nonnegativity constraints, it reflects the intuitive notion of combining parts to form a whole and has better interpretability than other methods. Experimental results indicate that NMF achieves good performance on image and document clustering tasks. Nevertheless, traditional NMF considers only the nonnegativity of the elements, which may leave the obtained basis matrix with poor sparseness and independence. To solve these problems, researchers have imposed additional constraints on the basis matrix or the coefficient matrix and proposed a series of improved methods. For instance, Hoyer designed a sparsity measurement criterion and proposed an NMF variant with sparsity constraints (NMF-SC). Moreover, to enhance the independence of the obtained basis matrices and low-dimensional representations, Choi proposed orthogonal nonnegative matrix factorization (ONMF), which imposes orthogonality constraints on the basis matrix and the coefficient matrix. However, the above methods require the original data to be nonnegative, which limits the applicability of these NMF-based algorithms. Therefore, Ding et al. proposed semi-nonnegative matrix factorization (SNMF), which relaxes the restrictions on the original data and the coefficient matrix and imposes a nonnegativity constraint only on the basis matrix. The methods mentioned above have better feature extraction capabilities than their predecessors and achieve better results in real-world tasks, but they extract only shallow features.
In recent years, deep learning has exhibited outstanding performance in feature representation tasks [18–20]. Therefore, many researchers have introduced deep learning into matrix factorization and proposed a large number of deep feature representation methods [21–27]. Ahn et al. proposed multilayer nonnegative matrix factorization (MNMF). Different from traditional NMF-based approaches, MNMF decomposes the coefficient matrix several times to obtain an underlying part-based representation, which can extract deep hierarchical features from the original data. In addition, to expand the application scope, Trigeorgis et al. integrated deep factorization with semi-NMF and proposed a deep semi-nonnegative matrix factorization (deep semi-NMF) method. However, both MNMF and deep semi-NMF consider only the deep decomposition of the coefficient matrix for the training data. For new test data, the basis matrix is used to obtain the deep low-dimensional representation, so the basis matrix directly affects the quality of that representation. To obtain a more accurate deep low-dimensional representation of the original data matrix, Zhao et al. applied deep factorization to the basis matrix and proposed a deep NMF method based on basis image learning.
With the rapid development of the Internet and data collection technology, a large amount of multiview multimedia data can be easily acquired [28–30]. For example, an object can be photographed from different views, and an image can be described with different types of features such as color, texture, and shape. Multiview multimedia data provide different information in each view but also contain potential correlations among the views, and they carry more information than single-view data. Although multiview data can simply be concatenated into single-view data, doing so ignores the differences and potential correlations between the various views [28–30].
Consequently, extensive multiview dimensionality reduction methods have been proposed [31–33]. Liu et al. proposed a multiview NMF (multi-NMF) method, which establishes relationships between different views by learning a coefficient matrix common to all views. Subsequently, Chang et al. introduced a new regularization term into multi-NMF and used it for clothing image clustering. Inspired by ONMF, Liang et al. proposed NMF with coorthogonal constraints (NMFCC) for multiview multimedia data clustering. Additionally, to consider the correlations between multiple views, Zhan et al. jointly optimized the graph matrix and the concept factorization process and proposed an adaptive structure concept factorization (ASCF) method for multiview clustering. Although the above methods can handle multiview multimedia data well, they still belong to the class of feature representation methods based on shallow factorization [38, 39], so the underlying deep features of the multiview data remain unexploited. Therefore, Zhao et al. maximized the mutual information between views, which forces the nonnegative representations of the last layer in each view to be as similar as possible, and applied the deep semi-NMF method to multiview multimedia data clustering. Differently, Huang et al. introduced an adaptive-weighted framework into multiview deep semi-NMF and proposed an adaptive-weighted multiview clustering method based on deep matrix factorization. Unlike the method of , it can adaptively assign weights to different views during the multiview deep feature representation. However, these methods still consider only the deep decomposition of the coefficient matrix.
Therefore, an adaptive-weighted multiview deep basis matrix factorization (AMDBMF) method is proposed for multimedia data clustering in this paper. Different from the above methods, AMDBMF first decomposes the basis matrix in a deep manner on the data of each view and then integrates the low-dimensional features of all views through an adaptive weighting mechanism to extract more accurate multiview deep low-dimensional representations. The flowchart of the proposed AMDBMF approach is shown in Figure 1. Finally, we perform extensive experiments on five publicly available multiview multimedia datasets. The experimental results show that the proposed AMDBMF approach outperforms existing related approaches.
The remainder of this paper is organized as follows. “Related Works” describes the related algorithms including NMF and deep semi-NMF briefly. “Adaptive-Weighted Multiview Deep Basis Matrix Factorization” introduces the adaptive-weighted multiview deep basis matrix factorization (AMDBMF) algorithm in detail. The experimental results and analysis are discussed in “Experiments and Analysis.” Finally, the conclusions are given in “Conclusions and Future Work.”
2. Related Works
2.1. Nonnegative Matrix Factorization
Suppose that the given multimedia data can be represented as $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$, where $m$ is the dimensionality of the data and $n$ is the number of samples. Each sample can be represented as an $m$-dimensional feature vector $x_i$. NMF aims to find two low-rank nonnegative matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ that fulfill $X \approx WH$. After obtaining $W$ and $H$, each sample can be expressed as $x_i \approx W h_i$, that is, a linear combination of the columns of $W$ with coefficient vector $h_i$. Therefore, the matrices $W$ and $H$ are called the basis matrix and coefficient matrix, respectively. The objective function of NMF is defined as follows:

$$\min_{W, H} \|X - WH\|_F^2 \quad \text{s.t. } W \ge 0, \; H \ge 0,$$

where $\|\cdot\|_F$ is the Frobenius norm.
According to the Karush-Kuhn-Tucker (KKT) conditions, the update formulas for the variables $W$ and $H$ are as follows:

$$W \leftarrow W \odot \frac{X H^T}{W H H^T}, \qquad H \leftarrow H \odot \frac{W^T X}{W^T W H},$$

where $\odot$ and the fraction bar denote element-wise multiplication and division, respectively.
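As a concrete illustration, the multiplicative update rules above can be sketched in a few lines of NumPy. This is a minimal, self-contained sketch on random toy data (the rank and iteration counts are arbitrary), not the authors' implementation:

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10, seed=0):
    """Basic NMF, X ≈ W @ H, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # coefficient update
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # basis update
    return W, H

# toy data: the Frobenius reconstruction error shrinks as iterations proceed
X = np.abs(np.random.default_rng(1).random((20, 30)))
W1, H1 = nmf(X, r=5, n_iter=1)
W, H = nmf(X, r=5, n_iter=200)
err_start = np.linalg.norm(X - W1 @ H1)
err_end = np.linalg.norm(X - W @ H)
```

The small `eps` in the denominators guards against division by zero; the updates keep all factors nonnegative because they only multiply nonnegative quantities.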
2.2. Deep Nonnegative Matrix Factorization
The traditional NMF method can remove redundant information and reveal hidden semantic features of multimedia data, but a single-layer factorization cannot capture the hierarchical structure of the data. For example, a facial image contains various changes such as posture, lighting, and expression changes. Therefore, Trigeorgis et al. pointed out that the coefficient matrix, as a low-dimensional representation of high-dimensional data, should be further decomposable so that more abstract low-dimensional features can be obtained. These deep factorization processes are defined as

$$X \approx W_1 H_1, \quad H_1 \approx W_2 H_2, \quad \ldots, \quad H_{l-1} \approx W_l H_l,$$

where $W_i$ and $H_i$ represent the factorization results of the $i$-th layer. It can be seen from Eq. (3) that deep NMF performs a matrix factorization at each layer and uses the decomposed coefficient matrix as the input data of the next layer. Consequently, the deep matrix factorization performed on the data is expressed as

$$X \approx W_1 W_2 \cdots W_l H_l.$$

The objective function of deep NMF is defined as follows:

$$\min \|X - W_1 W_2 \cdots W_l H_l\|_F^2 \quad \text{s.t. } H_i \ge 0, \; i = 1, \ldots, l.$$

Similar to those of NMF, the update formulas can be defined as follows:

$$W_i = \Psi^{\dagger} X \tilde{H}_i^{\dagger}, \qquad H_i = H_i \odot \sqrt{\frac{[\Psi^T X]^{+} + [\Psi^T \Psi]^{-} H_i}{[\Psi^T X]^{-} + [\Psi^T \Psi]^{+} H_i}},$$

where $\Psi = W_1 W_2 \cdots W_{i-1}$, $\tilde{H}_i$ denotes the reconstruction of the $i$-th layer's feature matrix, $\dagger$ denotes the Moore-Penrose pseudoinverse, and the symbol $\odot$ represents the element-wise product of matrices. $[\cdot]^{+}$ represents the matrix operation that sets all negative elements to zero and keeps the positive elements unchanged. On the contrary, $[\cdot]^{-}$ sets the positive elements to zero and replaces the negative elements with their absolute values.
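The greedy layer-wise pretraining used by deep matrix factorization methods can be sketched as follows. For simplicity, this sketch applies plain nonnegative multiplicative updates at every layer instead of the semi-NMF updates above, and the layer dimensions are arbitrary toy values:

```python
import numpy as np

def layer_nmf(A, r, n_iter=100, eps=1e-10, seed=0):
    """One NMF layer, A ≈ W @ H, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], r)) + eps
    H = rng.random((r, A.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_pretrain(X, dims):
    """Greedy layer-wise factorization X ≈ W1 W2 ... Wl Hl:
    each layer re-factorizes the previous coefficient matrix."""
    Ws, H = [], X
    for r in dims:
        W, H = layer_nmf(H, r)
        Ws.append(W)
    return Ws, H

X = np.abs(np.random.default_rng(2).random((40, 60)))
Ws, Hl = deep_pretrain(X, dims=[20, 10, 5])
recon = Ws[0] @ Ws[1] @ Ws[2] @ Hl  # approximate reconstruction of X
```

In practice, such pretrained factors are then fine-tuned jointly, as in the deep semi-NMF method.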
3. Adaptive-Weighted Multiview Deep Basis Matrix Factorization
First, an adaptive-weighted multiview deep basis matrix factorization (AMDBMF) method is proposed, which incorporates the nonnegative matrix factorization and deep learning into a unified framework. Next, an optimization algorithm with an iterative updating rule is designed to solve the objective function of AMDBMF. Then, an adaptive-weighted fusion mechanism is provided. Finally, we provide the complexity analysis of the proposed algorithm.
Suppose that $X = \{X^{(1)}, X^{(2)}, \ldots, X^{(V)}\}$ denotes a multimedia dataset that contains $n$ samples, each described by $V$ views. The $v$-th view's features of the $i$-th sample can be represented as $x_i^{(v)} \in \mathbb{R}^{m_v}$, and the features of all samples in this view can be represented as $X^{(v)} = [x_1^{(v)}, x_2^{(v)}, \ldots, x_n^{(v)}] \in \mathbb{R}^{m_v \times n}$.
3.1. Objective Function
First, matrix factorization is performed on the features in each view of the multimedia data, and the objective function can be defined as

$$\min \sum_{v=1}^{V} \|X^{(v)} - W^{(v)} H^{(v)}\|_F^2 \quad \text{s.t. } W^{(v)} \ge 0, \; H^{(v)} \ge 0,$$

where $W^{(v)}$ and $H^{(v)}$ denote the basis matrix and the coefficient matrix of the $v$-th view's features, respectively.
Then, deep factorization is performed on the basis matrix $W^{(v)}$. The process is defined as follows:

$$W^{(v)} \approx W_1^{(v)} H_1^{(v)}, \quad W_1^{(v)} \approx W_2^{(v)} H_2^{(v)}, \quad \ldots, \quad W_{l-1}^{(v)} \approx W_l^{(v)} H_l^{(v)},$$

where $W_i^{(v)}$ and $H_i^{(v)}$ denote the basis matrices and coefficient matrices of each layer in the $v$-th view, respectively.
Finally, to fuse the data from multiple views, the final objective function is defined as

$$\min \sum_{v=1}^{V} \left\| X^{(v)} - W_l^{(v)} H_l^{(v)} H_{l-1}^{(v)} \cdots H_1^{(v)} H^{(v)} \right\|_F^2 \quad \text{s.t. } W_l^{(v)} \ge 0, \; H_i^{(v)} \ge 0, \; H^{(v)} \ge 0.$$
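For concreteness, the per-view reconstruction errors that appear in the objective (one Frobenius-norm term per view) can be computed as below. The view shapes and factor ranks are arbitrary toy values, not those used in the paper:

```python
import numpy as np

def view_losses(Xs, Ws, Hs):
    """Squared Frobenius reconstruction error of each view, X(v) ≈ W(v) H(v)."""
    return [float(np.linalg.norm(X - W @ H, 'fro') ** 2)
            for X, W, H in zip(Xs, Ws, Hs)]

# three hypothetical views with different dimensionalities but a shared sample count
rng = np.random.default_rng(3)
Xs = [np.abs(rng.random((m, 25))) for m in (10, 15, 8)]
Ws = [np.abs(rng.random((m, 4))) for m in (10, 15, 8)]
Hs = [np.abs(rng.random((4, 25))) for _ in range(3)]
losses = view_losses(Xs, Ws, Hs)
```

These per-view errors are exactly the quantities that the adaptive weighting step later turns into view weights.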
3.2. Optimization

From Eq. (10), we can see that the objective function is nonconvex in all variables jointly but convex in each variable individually. Therefore, we design an iterative update algorithm to find a local optimal solution of the objective function: one variable is updated while the other variables are fixed. The detailed updating rules are described as follows.
The optimization subproblem for the variables $W_i^{(v)}$ and $H_i^{(v)}$ of the $i$-th layer, with all other factors fixed, can be defined as

$$\min_{W_i^{(v)}, H_i^{(v)}} \left\| W_{i-1}^{(v)} - W_i^{(v)} H_i^{(v)} \right\|_F^2 \quad \text{s.t. } W_i^{(v)} \ge 0, \; H_i^{(v)} \ge 0,$$

where $W_0^{(v)} = W^{(v)}$.
Let $A = W_{i-1}^{(v)}$, and then Eq. (11) can be simplified as

$$\min_{W_i^{(v)}, H_i^{(v)}} \left\| A - W_i^{(v)} H_i^{(v)} \right\|_F^2 \quad \text{s.t. } W_i^{(v)} \ge 0, \; H_i^{(v)} \ge 0.$$

The Lagrangian function of Eq. (12) is expressed as

$$\mathcal{L} = \left\| A - W_i^{(v)} H_i^{(v)} \right\|_F^2 + \operatorname{tr}\!\left(\alpha \, (W_i^{(v)})^T\right) + \operatorname{tr}\!\left(\beta \, (H_i^{(v)})^T\right),$$

where $\alpha$ and $\beta$ are Lagrange multipliers.
Taking the partial derivatives of Eq. (13) with respect to $W_i^{(v)}$ and $H_i^{(v)}$, and setting these derivatives to zero, we have

$$\frac{\partial \mathcal{L}}{\partial W_i^{(v)}} = -2 A (H_i^{(v)})^T + 2 W_i^{(v)} H_i^{(v)} (H_i^{(v)})^T + \alpha = 0,$$
$$\frac{\partial \mathcal{L}}{\partial H_i^{(v)}} = -2 (W_i^{(v)})^T A + 2 (W_i^{(v)})^T W_i^{(v)} H_i^{(v)} + \beta = 0.$$

According to the KKT conditions $\alpha \odot W_i^{(v)} = 0$ and $\beta \odot H_i^{(v)} = 0$, the update rules of the variables $W_i^{(v)}$ and $H_i^{(v)}$ are as follows:

$$W_i^{(v)} \leftarrow W_i^{(v)} \odot \frac{A (H_i^{(v)})^T}{W_i^{(v)} H_i^{(v)} (H_i^{(v)})^T}, \qquad H_i^{(v)} \leftarrow H_i^{(v)} \odot \frac{(W_i^{(v)})^T A}{(W_i^{(v)})^T W_i^{(v)} H_i^{(v)}},$$

where the symbol $\odot$ represents the element-wise product of matrices and the division is element-wise.
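Putting the layer-wise updates together, the deep basis factorization of a single view can be sketched as below: the data matrix is factorized once, and the resulting basis matrix is then repeatedly decomposed. This is an illustrative sketch with arbitrary toy dimensions, not the authors' code:

```python
import numpy as np

def factorize(A, r, n_iter=150, eps=1e-10, seed=0):
    """One nonnegative factorization A ≈ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], r)) + eps
    H = rng.random((r, A.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_basis_factorize(X, r0, dims):
    """Deep *basis* factorization of one view: first X ≈ W @ H, then the
    basis is repeatedly decomposed, W ≈ W1 H1, W1 ≈ W2 H2, ..."""
    W, H = factorize(X, r0)
    Hs = []
    for r in dims:
        W, Hi = factorize(W, r)
        Hs.append(Hi)
    return W, Hs, H

X = np.abs(np.random.default_rng(5).random((30, 50)))
Wl, Hs, H = deep_basis_factorize(X, r0=10, dims=[8, 5])
# the first-layer basis is recovered as Wl @ Hs[-1] @ ... @ Hs[0]
basis = Wl @ Hs[1] @ Hs[0]
```

Note the contrast with the earlier deep NMF sketch: here the basis side of the factorization is decomposed layer by layer, while the coefficient matrix H of each view is kept as the low-dimensional representation.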
3.3. Feature Fusion
After obtaining the basis matrix and coefficient matrix of each layer for each view through the optimization algorithm, an adaptive-weighted fusion mechanism is adopted to obtain a unified low-dimensional representation of the multiview data. The weight of the $v$-th view is calculated from its reconstruction error:

$$\alpha^{(v)} = \frac{1}{\left\| X^{(v)} - W^{(v)} H^{(v)} \right\|_F^2 + \varepsilon},$$

where $\varepsilon$ is a small constant, so views that are reconstructed more accurately receive larger weights.
Then, $\alpha^{(v)}$ is normalized by Eq. (17):

$$\tilde{\alpha}^{(v)} = \frac{\alpha^{(v)}}{\sum_{u=1}^{V} \alpha^{(u)}}.$$

Finally, since the low-dimensional representation of each view is expressed as $H^{(v)}$, the fusion of the low-dimensional features derived from the multiview data can be expressed as

$$H^{*} = \sum_{v=1}^{V} \tilde{\alpha}^{(v)} H^{(v)}.$$
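A minimal sketch of the adaptive weighting and fusion step: each view is weighted inversely to its reconstruction error, the weights are normalized to sum to one, and the per-view representations are averaged. The inverse-error weighting and the toy values here are illustrative assumptions:

```python
import numpy as np

def fuse_views(Hs, losses, eps=1e-8):
    """Weight each view inversely to its reconstruction error, normalize the
    weights to sum to 1, and average the per-view representations."""
    w = np.array([1.0 / (l + eps) for l in losses])
    w /= w.sum()                                  # normalized weights
    fused = sum(wi * H for wi, H in zip(w, Hs))   # weighted average
    return fused, w

# two hypothetical views: the better-reconstructed one (loss 2.0) gets weight 0.75
Hs = [np.full((4, 25), 1.0), np.full((4, 25), 3.0)]
fused, w = fuse_views(Hs, losses=[2.0, 6.0])
```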
3.4. Complexity Analysis
Clearly, the proposed algorithm can be divided into two stages: pretraining and fine-tuning. For convenience, suppose that the number of iterations is $t$, $V$ is the number of data views, and $l$ is the number of layers. Let $m$ denote the maximum number of features over all views and $r$ the maximum dimension of the low-dimensional representations over all layers. In the pretraining process, the complexity for a single view is $O(tlmnr)$. Therefore, the complexity of the whole pretraining process is $O(tlVmnr)$. For the fine-tuning part, the main computational cost comes from updating $W_i^{(v)}$, $H_i^{(v)}$, and $H^{(v)}$, which require $O(tlVmnr)$, $O(tlVmnr)$, and $O(tVmnr)$ complexity, respectively. Since $r \ll \min(m, n)$, the total computational complexity of the proposed algorithm is $O(tlVmnr)$.
4. Experiments and Analysis
Five commonly used multiview multimedia datasets from the Internet are used in the experiments to verify the effectiveness of the proposed method.
4.1. Datasets
4.1.1. 3Sources
This dataset includes a collection of 416 news events and 948 related news reports, published from February to April 2009 by three well-known news outlets: BBC, Reuters, and the Guardian. In the experiments, the 169 news stories reported by all three outlets are used. These news stories cover six categories: business, entertainment, health, politics, sports, and technology (http://mlg.ucd.ie/datasets/3sources.html).
4.1.2. BBC 
This dataset contains 685 news articles collected from the BBC News Network between 2004 and 2005. Each article is divided into four parts, and the data consist of five kinds of news: business, entertainment, politics, sport, and technology (http://mlg.ucd.ie/datasets/segment.html).
4.1.3. BBC Sport 
This dataset includes 737 news articles from the BBC Sport website from 2004 to 2005. These articles cover five sports: athletics (track and field), cricket, football, rugby, and tennis (http://mlg.ucd.ie/datasets/segment.html).
4.1.4. Reuters 
This dataset includes 1200 English articles from six categories, and each article has been translated into French, German, Italian, and Spanish (http://lig-membres.imag.fr/grimal/data.html).
4.1.5. Wikipedia 
This dataset consists of specific Wikipedia material with 2669 articles in 29 categories (http://www.svcl.ucsd.edu/projects/crossmodal/). In the experiments, we select a subset of the 10 most popular categories containing a total of 693 samples. The detailed statistical information about the different datasets is given in Table 1.
4.2. Evaluation Metrics
In the experiments, we select three commonly used clustering evaluation indicators: accuracy (ACC), normalized mutual information (NMI), and purity to evaluate the performance of the proposed method.
Assuming that the clustering result for sample $x_i$ is $c_i$ and that the corresponding true label is $g_i$, the clustering accuracy (ACC) is defined as

$$\mathrm{ACC} = \frac{\sum_{i=1}^{n} \delta\big(g_i, \mathrm{map}(c_i)\big)}{n},$$

where the function $\delta(x, y)$ is defined as follows: $\delta(x, y) = 1$ if $x = y$, and $\delta(x, y) = 0$ otherwise.
The function $\mathrm{map}(\cdot)$ maps each cluster label to the corresponding true label. The Kuhn-Munkres algorithm is employed to find the best mapping.
Assume that $C$ and $C'$ are the clustering result and the true label set, respectively. The mutual information (MI) between them is defined as

$$\mathrm{MI}(C, C') = \sum_{c_i \in C, \, c'_j \in C'} p(c_i, c'_j) \log \frac{p(c_i, c'_j)}{p(c_i)\, p(c'_j)},$$

where $p(c_i)$ and $p(c'_j)$ represent the probabilities that a randomly selected sample belongs to $c_i$ and $c'_j$, respectively, and $p(c_i, c'_j)$ represents the joint probability that a randomly selected sample belongs to both $c_i$ and $c'_j$. Let $H(C)$ and $H(C')$ represent the entropies of $C$ and $C'$, respectively. Since the value of mutual information ranges between 0 and $\max\big(H(C), H(C')\big)$, the normalized mutual information (NMI) is defined as

$$\mathrm{NMI}(C, C') = \frac{\mathrm{MI}(C, C')}{\max\big(H(C), H(C')\big)}.$$
Purity is a straightforward and transparent evaluation method that is defined as follows:

$$\mathrm{Purity} = \frac{1}{n} \sum_{k=1}^{K} n_k^{\max},$$

where $K$ represents the number of clusters, $n_k^{\max}$ is the number of elements of the most numerous category in cluster $k$, and $n$ is the total number of samples.
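ACC and purity can be computed as in the sketch below. For simplicity, the ACC mapping is found by brute force over label permutations, which is adequate for a handful of clusters (the Kuhn-Munkres algorithm scales to larger numbers):

```python
from itertools import permutations

def clustering_acc(true, pred):
    """ACC with the best cluster-to-class mapping found by brute force."""
    cluster_ids = sorted(set(pred))
    best = 0
    for perm in permutations(sorted(set(true)), len(cluster_ids)):
        mapping = dict(zip(cluster_ids, perm))
        best = max(best, sum(mapping[p] == t for p, t in zip(pred, true)))
    return best / len(true)

def purity(true, pred):
    """Fraction of samples falling in their cluster's majority class."""
    correct = 0
    for c in set(pred):
        members = [t for t, p in zip(true, pred) if p == c]
        correct += max(members.count(t) for t in set(members))
    return correct / len(true)
```

For example, a clustering that is a pure relabeling of the ground truth achieves ACC = 1, while purity penalizes only the minority members of each cluster.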
4.3. Experimental Results and Analysis
In the first experiment, to test the influence of the parameters on the proposed method, we vary the number of factorization layers and the feature dimension of each layer, and we adopt a grid search to find the optimal parameter values. In the experiment, the low-dimensional features obtained by the proposed algorithm are clustered by the k-means algorithm. Since the initialization of k-means affects the clustering results, we repeat the random initialization 10 times and report the mean value. First, the optimal feature dimension of each layer is fixed, and the number of layers is varied. As shown in Figure 3, in most cases, the result of each measure is poorest when the number of layers is set to 1, and the performance improves as the number of layers increases. This shows that deep factorization helps to improve the performance of the proposed method.
Then, the number of layers is fixed, and the feature dimension is varied. The results are shown in Figure 4. It can be seen that as the dimensionality increases, the clustering performance also improves in most cases. However, this trend is not always maintained: once the performance reaches its optimal level, it decreases or remains stable as the dimensionality increases further. The details of the optimal parameter groups for our proposed algorithm are listed in Table 2.
The second experiment is conducted to verify that the fusion of multiview information is beneficial for improving the clustering performance of the proposed method. First, we perform traditional NMF and deep basis matrix factorization (DBMF) on the data of each view. Then, we obtain the low-dimensional features of the multiview data by fusing the features of the different views with equal weights. Finally, the proposed AMDBMF method is compared with the above two methods. The comparison results are listed in Tables 3–5. According to the tables, the performance of the DBMF method is better than that of the traditional NMF method, which indicates that more abstract features can be obtained through deep factorization. The performance of the proposed AMDBMF method is better than that of the DBMF method, which verifies that the adaptive fusion of different views is beneficial for extracting more robust low-dimensional features from multiview data.
The third experiment compares the performance of the proposed AMDBMF method with those of some currently popular multiview algorithms, including MVCF, DeepMVC, GMC, and NMFCC. MVCF utilizes the correlation information between views obtained by jointly optimizing the graph matrix of each view's data. DeepMVC uses a nonparameterized adaptive learning method to obtain the weights between views. NMFCC introduces orthogonality constraints on the basis matrix and coefficient matrix. The best results yielded by the different multiview learning methods on the different datasets are shown in Tables 6–8. It can be seen that the performance of the proposed method is significantly better than that of the other comparison methods in most cases. Since these methods use different mechanisms to fuse multiview information, they present different performances on different databases. Therefore, how to effectively integrate fusion mechanisms is still an open problem.
The final experiment verifies the convergence of the proposed optimization algorithm. The convergence curves of the proposed method on the different datasets are given in Figure 5. As seen from the figures, the iterative update rules in Algorithm 1 monotonically decrease the objective function value. Moreover, our proposed method converges very quickly on these datasets.
5. Conclusions and Future Work
To efficiently learn feature representations of multiview multimedia data, this paper proposes a new deep nonnegative matrix factorization method with multiview learning. Unlike traditional methods, the proposed method deeply decomposes the basis matrix, so it not only learns the part-based representation of the original data but also learns more abstract deep features. Furthermore, to effectively fuse the available multiview information, this paper introduces an adaptive feature fusion mechanism.
To address the shortcomings of information fusion for multiview data, a large number of fusion mechanisms have been proposed, and they achieve different performances on different datasets. Therefore, how to effectively integrate different mechanisms to improve the feature representation ability of a given approach is one of the key research tasks to be addressed in the future. Moreover, we will apply our method to other fields such as medical image processing and medical text analysis.
Data Availability

The data are derived from public domain resources.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments

This research is supported by the National Natural Science Foundation of China under grant nos. 62062040, 61962026, 62006174, and 71762018, the Chinese Postdoctoral Science Foundation (grant no. 2019M661117), the Provincial Key Research and Development Program of Jiangxi under grant nos. 20192ACBL21031 and 20202BABL202016, the Science and Technology Research Project of Jiangxi Provincial Department of Education (grant nos. GJJ191709 and GJJ191689), the Fundamental Research Funds for the Central Universities under grant no. 2412019FZ049, the Graduate Innovation Foundation Project of Jiangxi Normal University under grant no. YJS2020045, and the Young Talent Cultivation Program of Jiangxi Normal University.

References
- Y. Yi, Y. Chen, J. Wang, G. Lei, J. Dai, and H. Zhang, “Joint feature representation and classification via adaptive graph semi-supervised nonnegative matrix factorization,” Signal Processing: Image Communication, vol. 89, article 115984, 2020.
- Y. Yi, J. Wang, W. Zhou, C. Zheng, J. Kong, and S. Qiao, “Non-negative matrix factorization with locality constrained adaptive graph,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 2, pp. 427–441, 2020.
- G. T. Reddy, M. P. K. Reddy, K. Lakshmanna et al., “Analysis of dimensionality reduction techniques on big data,” IEEE Access, vol. 8, pp. 54776–54788, 2020.
- Y. Yi, J. Wang, W. Zhou, Y. Fang, J. Kong, and Y. Lu, “Joint graph optimization and projection learning for dimensionality reduction,” Pattern Recognition, vol. 92, pp. 258–273, 2019.
- S. Ayesha, M. K. Hanif, and R. Talib, “Overview and comparative study of dimensionality reduction techniques for high dimensional data,” Information Fusion, vol. 59, pp. 44–58, 2020.
- H. Abdi and L. J. Williams, “Principal component analysis,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 4, pp. 433–459, 2010.
- A. Hyvärinen and E. Oja, “Independent component analysis: algorithms and applications,” Neural Networks, vol. 13, no. 4-5, pp. 411–430, 2000.
- Y. Linde, A. Buzo, and R. Gray, “An algorithm for vector quantizer design,” IEEE Transactions on Communications, vol. 28, no. 1, pp. 84–95, 1980.
- D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
- Y. X. Wang and Y. J. Zhang, “Nonnegative matrix factorization: a comprehensive review,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 6, pp. 1336–1353, 2012.
- A. A. Jamali, A. Kusalik, and F. X. Wu, “MDIPA: a microRNA–drug interaction prediction approach based on non-negative matrix factorization,” Bioinformatics, vol. 36, no. 20, pp. 5061–5067, 2020.
- P. Chalise, Y. Ni, and B. L. Fridley, “Network-based integrative clustering of multiple types of genomic data using non-negative matrix factorization,” Computers in Biology and Medicine, vol. 118, p. 103625, 2020.
- M. Hou, J. Li, and G. Lu, “A supervised non-negative matrix factorization model for speech emotion recognition,” Speech Communication, vol. 124, pp. 13–20, 2020.
- P. O. Hoyer, “Non-negative matrix factorization with sparseness constraints,” Journal of Machine Learning Research, vol. 5, no. 9, pp. 1457–1469, 2004.
- S. Choi, “Algorithms for orthogonal nonnegative matrix factorization,” in 2008 IEEE International Joint Conference on Neural Networks, pp. 1828–1832, Hong Kong, China, June 2008.
- C. H. Q. Ding, T. Li, and M. I. Jordan, “Convex and semi-nonnegative matrix factorizations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 45–55, 2008.
- J. Fan and J. Cheng, “Matrix completion by deep matrix factorization,” Neural Networks, vol. 98, pp. 34–41, 2018.
- Y. Bengio, A. Courville, and P. Vincent, “Representation learning: a review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
- Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
- G. Zhong, L. N. Wang, X. Ling, and J. Dong, “An overview on data representation learning: from traditional feature learning to recent deep learning,” The Journal of Finance and Data Science, vol. 2, no. 4, pp. 265–278, 2016.
- J. H. Ahn, S. Kim, J. H. Oh, and S. Choi, “Multiple nonnegative-matrix factorization of dynamic PET images,” in Proceedings of Asian Conference on Computer Vision, pp. 1009–1013, Jeju, Korea, 2004.
- G. Trigeorgis, K. Bousmalis, S. Zafeiriou, and B. W. Schuller, “A deep matrix factorization method for learning attribute representations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 3, pp. 417–429, 2016.
- Y. Zhao, H. Wang, and J. Pei, “Deep non-negative matrix factorization architecture based on underlying basis images learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 6, pp. 1897–1913, 2021.
- Y. Meng, R. Shang, F. Shang, L. Jiao, S. Yang, and R. Stolkin, “Semi-supervised graph regularized deep NMF with bi-orthogonal constraints for data representation,” IEEE transactions on neural networks and learning systems, vol. 31, no. 9, pp. 3245–3258, 2020.
- M. Tong, Y. Chen, L. Ma, H. Bai, and X. Yue, “NMF with local constraint and deep NMF with temporal dependencies constraint for action recognition,” Neural Computing and Applications, vol. 32, no. 9, pp. 4481–4505, 2020.
- J. Li, G. Zhou, Y. Qiu, Y. Wang, Y. Zhang, and S. Xie, “Deep graph regularized non-negative matrix factorization for multi-view clustering,” Neurocomputing, vol. 390, pp. 108–116, 2020.
- Z. Shu, X. Wu, C. Hu, C. Z. You, and H. H. Fan, “Deep semi-nonnegative matrix factorization with elastic preserving for data representation,” Multimedia Tools and Applications, vol. 80, no. 2, pp. 1707–1724, 2021.
- J. Zhao, X. Xie, X. Xu, and S. Sun, “Multi-view learning overview: recent progress and new challenges,” Information Fusion, vol. 38, pp. 43–54, 2017.
- Y. Li, M. Yang, and Z. Zhang, “A survey of multi-view representation learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 10, pp. 1863–1883, 2018.
- T. Hussain, K. Muhammad, W. Ding, J. Lloret, S. W. Baik, and V. H. C. de Albuquerque, “A comprehensive survey of multi-view video summarization,” Pattern Recognition, vol. 109, p. 107567, 2021.
- X. Xu, Y. Yang, C. Deng, and F. Nie, “Adaptive graph weighting for multi-view dimensionality reduction,” Signal Processing, vol. 165, pp. 186–196, 2019.
- R. Zhang, F. Nie, X. Li, and X. Wei, “Feature selection with multi-view data: a survey,” Information Fusion, vol. 50, pp. 158–167, 2019.
- P. Luo, J. Peng, Z. Guan, and J. Fan, “Dual regularized multi-view non-negative matrix factorization for clustering,” Neurocomputing, vol. 294, pp. 1–11, 2018.
- J. Liu, C. Wang, J. Gao, and J. Han, “Multi-view clustering via joint nonnegative matrix factorization,” in Proceedings of the 2013 SIAM International Conference on Data Mining, pp. 252–260, Austin, TX, USA, 2013.
- W. Y. Chang, C. P. Wei, and Y. C. F. Wang, “Multi-view nonnegative matrix factorization for clothing image characterization,” in 2014 22nd International Conference on Pattern Recognition, pp. 1272–1277, Stockholm, Sweden, August 2014.
- N. Liang, Z. Yang, Z. Li, W. Sun, and S. Xie, “Multi-view clustering by non-negative matrix factorization with co-orthogonal constraints,” Knowledge-Based Systems, vol. 194, p. 105582, 2020.
- K. Zhan, J. Shi, J. Wang, H. Wang, and Y. Xie, “Adaptive structure concept factorization for multiview clustering,” Neural Computation, vol. 30, no. 4, pp. 1080–1103, 2018.
- S. Wei, J. Wang, G. Yu, C. Domeniconi, and X. Zhang, “Multi-view multiple clusterings using deep matrix factorization,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 4, pp. 6348–6355, 2020.
- W. Zhao, C. Xu, Z. Guan, and Y. Liu, “Multiview concept learning via deep matrix factorization,” IEEE transactions on neural networks and learning systems, vol. 32, no. 2, pp. 814–825, 2021.
- H. Zhao, Z. Ding, and Y. Fu, “Multi-view clustering via deep matrix factorization,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
- S. Huang, Z. Kang, and Z. Xu, “Auto-weighted multi-view clustering via deep matrix decomposition,” Pattern Recognition, vol. 97, article 107015, 2020.
- D. Greene and P. Cunningham, “Practical solutions to the problem of diagonal dominance in kernel document clustering,” in Proceedings of the 23rd international conference on Machine learning, pp. 377–384, New York, NY, USA, 2006.
- G. Bisson and C. Grimal, “An architecture to efficiently learn co-similarities from multi-view datasets,” in Neural Information Processing. ICONIP 2012. Lecture Notes in Computer Science, vol 7663, T. Huang, Z. Zeng, C. Li, and C. S. Leung, Eds., pp. 184–193, Springer, Berlin, Heidelberg, 2012.
- L. Fu, P. Lin, A. V. Vasilakos, and S. Wang, “An overview of recent multi-view clustering,” Neurocomputing, vol. 402, pp. 148–161, 2020.
- Y. Yi, Y. Shi, H. Zhang, J. Wang, and J. Kong, “Label propagation based semi-supervised non-negative matrix factorization for feature extraction,” Neurocomputing, vol. 149, pp. 1021–1037, 2015.
- H. Zhu and M. C. Zhou, “Efficient role transfer based on Kuhn–Munkres algorithm,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 42, no. 2, pp. 491–496, 2011.
- H. Wang, Y. Yang, and B. Liu, “GMC: graph-based multi-view clustering,” IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 6, pp. 1116–1129, 2019.
- X. Huang, B. Zhong, Y. Cao, Y. Yi, and M. Gu, “Chest X-ray lung Chinese description generation based on semantic labels and hierarchical LSTM,” in 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1020–1023, Seoul, Korea (South), December 2020.
Copyright © 2021 Shicheng Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.