Research Article  Open Access
Minggang Du, ShanWen Zhang, Hong Wang, "Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA", Advances in Bioinformatics, vol. 2009, Article ID 926450, 9 pages, 2009. https://doi.org/10.1155/2009/926450
Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA
Abstract
Motivation. Independent Component Analysis (ICA) maximizes the statistical independence of the representational components of a training gene expression profile (GEP) ensemble, but it cannot distinguish relations between different factors, or different modes, and it is not applicable to high-order GEP data mining. In order to generalize ICA, we introduce Multilinear ICA and apply it to tumor classification using high-order GEP. Firstly, we introduce the basic concepts and operations of tensors and describe the Support Vector Machine (SVM) classifier and Multilinear ICA. Secondly, the higher-score genes of the original high-order GEP are selected using t-statistics and tabulated as tensors. Thirdly, Multilinear ICA is performed on the tensors. Finally, the SVM is used to classify the tumor subtypes. Results. To show the validity of the proposed method, we apply it to tumor classification using high-order GEP. Though we only use three datasets, the experimental results show that the method is effective and feasible. Through this study, we hope to gain some insight into the problem of high-order GEP tumor classification, in aid of further developing more effective tumor classification algorithms.
1. Introduction
In the past several years, DNA microarray technology has attracted tremendous interest in both the scientific community and industry. DNA microarray experiment technology allows the recording of expression levels of thousands of genes simultaneously [1]. Such massive gene expression data gives rise to a number of computational challenges. With the wealth of gene expression profiles (GEP), more and more new prediction, clustering, and classification algorithms have been proposed and used for GEP analysis [2, 3]. Up to now, many tumor classification methods using GEP have been proposed, and many studies have reported the application of GEP for molecular classification of tumors [4–6]. In GEP data mining, Principal Component Analysis (PCA) is a classic and effective tool for analyzing large-scale GEP [7, 8]. However, PCA is sensitive only to second-order relationships of the data and ignores all higher-order data relationships, that is, the higher-order statistical dependencies. Independent Component Analysis (ICA) is a useful extension of PCA that was developed in the context of blind separation of independent sources from their linear mixtures [8, 9]. The ICA model usually leaves some freedom of scaling and sorting: by convention, the independent components are scaled to unit deviation, while their signs and orders can be chosen arbitrarily. In general, the number of independent components is equal to the number of observational variables. Whereas the goal of PCA is to minimize the reprojection error from compressed data, the goal of ICA is to minimize the statistical dependence between the basis vectors. ICA learns a set of statistically independent components by analyzing the higher-order dependencies in the training samples in addition to the correlations.
Such blind separation techniques have been popularly used, for example, in various applications of auditory signal separation, medical signal processing, and so on [10–12]. ICA is capable of extracting biologically relevant gene expression features from microarray data, and a number of tumor classification applications of ICA have been proposed. Gen et al. introduced an ICA-based gene classification method and validated it using the yeast GEP during sporulation. Liebermeister [11] applied ICA to microarray data to find independent modes of gene expression. Zhang et al. [13] devised a pattern recognition procedure based on ICA, which is suitable for the identification of diagnostic expression patterns for other human cancers and demonstrates the feasibility of simple and accurate molecular cancer diagnostics for clinical implementation. Zheng et al. [8] performed ICA on a GEP dataset preprocessed by t-statistics; the outputs of ICA were then classified using a support vector machine (SVM). Frigyesi et al. [14] applied iterated ICA to three different gene expression datasets to obtain reliable components and found that many of the low-ranking components may show strong biological coherence and hence be of biological significance. Kong et al. [15] described theoretical frameworks of ICA to further illustrate its feature extraction function in GEP analysis. Biswas et al. [16] applied ICA to gene expression traits derived from a cross between two strains of Saccharomyces cerevisiae and decomposed the data into a set of metatraits, which are linear combinations of all the expression traits.
However, ICA cannot distinguish between GEP that arise from different experiments, different times, or different studies. Such GEP are called high-order GEP. In practice, the structure of GEP integrated from different studies is of an order higher than that of a matrix; these datasets can be tabulated as a tensor. If we deal with these GEP separately or unfold the tensor into a matrix, these degrees of freedom are lost and much of the information in the data tensor might also be lost. This problem is addressed by a multilinear framework. Whereas ICA employs linear (matrix) algebra, the Multilinear ICA model exploits tensor algebra [17, 18]. Multilinear ICA is able to learn the interactions of multiple samples (genes) inherent to high-order dataset formation and separately encode the higher-order statistics of each of these factors. It has been widely used in image recognition [17–20]. Omberg et al. [21] described a multilinear high-order SVD, reformulated to decompose a data tensor into a linear superposition of rank-1 subtensors, and provided an integrative framework for high-order GEP analysis from different studies, where significant subtensors represent independent biological programs or experimental phenomena. A quick survey of the biological literature shows that Multilinear ICA is still seldom used in bioinformatics. In this paper, we apply Multilinear ICA to tumor classification using high-order GEP.
This paper is organized as follows. Section 2 briefly discusses some mathematical background, including tensors, the Multilinear ICA model, and the SVM classifier, and introduces the gene selection strategy based on t-statistics and the Multilinear ICA model of a high-order GEP dataset. In Section 3, a classification method using Multilinear ICA is proposed, and the prediction results of applying the method to high-order GEP are given. Some concluding remarks and future work are included in Section 4.
2. Methods
In recent years, tensor analysis has attracted more and more attention in pattern recognition and other areas. A tensor is a multidimensional or multimode array. Often the data have a multidimensional structure, and it is then somewhat unnatural to organize them as matrices or vectors. As a simple example, each GEP is a two-dimensional data array, that is, a matrix. Many GEP from different studies then form a three-dimensional data array, which can easily be expressed by a third-order tensor. Though tensor analysis has been used for a long time in many areas, it is seldom used in GEP analysis, so it is necessary to introduce the basic concepts and operations of tensors [22, 23].
2.1. Mathematical Background of Tensor
A tensor is a multidimensional array. Roughly speaking, a scalar is a 0th-order tensor, a vector $a \in \mathbb{R}^{I}$ is a 1st-order tensor of size $I$, and a matrix $A \in \mathbb{R}^{I_1 \times I_2}$ is a 2nd-order tensor of size $I_1 \times I_2$. An $N$th-order tensor, denoted as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, is a generalization of these algebraic objects to one with $N$ indices, where $a_{i_1 i_2 \cdots i_N}$ denotes its generic element. The dimension of $\mathcal{A}$ along mode $n$ is given by $I_n$ ($1 \le n \le N$). Tensors are often found in differential geometry, where they most of the time (if not exclusively) represent multilinear operators. A third-order tensor, denoted as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, has three indices, as shown in Figure 1.
The starting point of the derivation of a multilinear SVD is to consider an appropriate generalization of the link between the column (row) vectors and the left (right) singular vectors of a matrix. To be able to formalize this idea, we introduce "tensor unfolding." There are several ways to do so; to avoid confusion, we will stick to one particular ordering of the column (row) vectors. One particular type of tensor unfolding will prove to be particularly useful, namely, the matrix representation of a given tensor in which all its column (row) vectors are simply stacked one after another. Simply, for a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the mode-$n$ unfolding $A_{(n)}$ stacks the mode-$n$ vectors of $\mathcal{A}$ as the columns of a matrix:
$$A_{(1)} \in \mathbb{R}^{I_1 \times I_2 I_3}, \qquad A_{(2)} \in \mathbb{R}^{I_2 \times I_3 I_1}, \qquad A_{(3)} \in \mathbb{R}^{I_3 \times I_1 I_2}.$$
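As an illustration (not from the paper), a mode-$n$ unfolding can be sketched in NumPy. The exact ordering of the stacked columns is a convention; we assume the ordering produced by moving the unfolded mode to the front:

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: bring axis `mode` to the front, then flatten the
    # remaining axes, so the mode-n vectors become the columns of a matrix.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A third-order tensor of size 2 x 3 x 4 and the shapes of its unfoldings.
A = np.arange(24, dtype=float).reshape(2, 3, 4)
print(unfold(A, 0).shape)  # (2, 12)
print(unfold(A, 1).shape)  # (3, 8)
print(unfold(A, 2).shape)  # (4, 6)
```

Each unfolding is an ordinary matrix, so matrix tools (such as the SVD used below) apply to it directly.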
For two tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, their inner product, denoted as $\langle \mathcal{A}, \mathcal{B} \rangle$, is defined in a straightforward way as
$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} a_{i_1 i_2 \cdots i_N} b_{i_1 i_2 \cdots i_N}.$$
The norm of a tensor $\mathcal{A}$ is defined as
$$\|\mathcal{A}\| = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}.$$
Two tensors are called orthogonal when their inner product is 0. The tensor distance between $\mathcal{A}$ and $\mathcal{B}$ is expressed as follows:
$$d(\mathcal{A}, \mathcal{B}) = \|\mathcal{A} - \mathcal{B}\|.$$
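These three operations translate directly into NumPy; a minimal sketch (the tensor sizes are arbitrary illustrations):

```python
import numpy as np

def inner(A, B):
    # <A, B>: sum of elementwise products over all N indices
    return float(np.sum(A * B))

def norm(A):
    # ||A|| = sqrt(<A, A>), the Frobenius-type norm of the tensor
    return float(np.sqrt(inner(A, A)))

def distance(A, B):
    # tensor distance induced by the norm
    return norm(A - B)

A = np.ones((2, 3, 4))
B = 2 * np.ones((2, 3, 4))
print(inner(A, B))     # 48.0
print(norm(A))         # sqrt(24) ~ 4.899
print(distance(A, B))  # equals norm(A) here, since B - A = A
```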
2.2. Mathematical Background of Multilinear ICA
Independent component analysis (ICA) is a valid data analysis technique for uncovering independent components which underlie the observational data (Lieven et al., 2000). This technique seeks a linear transformation of the original data that yields a mutually independent representation. ICA is a linear analysis method, which can remove all linear correlations, but it is not well suited to the representation of high-order GEP ensembles. To remedy this shortcoming, we introduce Multilinear ICA as follows [19, 24, 25].
Recall the classical SVD of a matrix,
$$A = U \Sigma V^T.$$
Since $U$, $\Sigma$, and $V$ are matrices, they can also be regarded as 2nd-order tensors, and it is not hard to verify the following representation by tensor product. We can express the SVD in terms of the $n$-mode product,
$$A = \Sigma \times_1 U \times_2 V.$$
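This identity is easy to check numerically. The sketch below assumes a generic $n$-mode product implemented with NumPy's `tensordot`; the matrix and its dimensions are arbitrary illustrations:

```python
import numpy as np

def mode_product(T, M, mode):
    # n-mode product T x_n M: multiply every mode-n vector of T by M
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# classical SVD A = U Sigma V^T, rewritten with 1-mode and 2-mode products
A_rebuilt = mode_product(mode_product(np.diag(s), U, 0), Vt.T, 1)
print(np.allclose(A_rebuilt, A))  # True
```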
Naturally, for a general tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the high-order SVD [26, 27] is obtained by decomposing the tensor as the tensor product of an $N$th-order tensor $\mathcal{S}$ and a series of matrices $U^{(n)}$ ($1 \le n \le N$), written as follows:
$$\mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)},$$
where $\mathcal{S}$ is called the core tensor and $U^{(n)}$ ($1 \le n \le N$) is a mode matrix spanning the column space of $A_{(n)}$, the mode-$n$ flattening of $\mathcal{A}$.
The core tensor $\mathcal{S}$ is analogous to the diagonal singular value matrix in the conventional matrix SVD (although it does not have a simple, diagonal structure). The core tensor governs the interaction between the mode matrices $U^{(n)}$, which contain the orthonormal vectors spanning the column space of the matrix $A_{(n)}$ resulting from the mode-$n$ flattening of tensor $\mathcal{A}$.
For a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the decomposition can be written as the product
$$\mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}$$
with the following properties.
(i) $U^{(1)}$, $U^{(2)}$, and $U^{(3)}$ are orthogonal matrices.
(ii) $\mathcal{S}$ is a real tensor of the same dimensions as $\mathcal{A}$ and is all-orthogonal, that is, its slices along any mode are mutually orthogonal: $\langle \mathcal{S}_{i_n = \alpha}, \mathcal{S}_{i_n = \beta} \rangle = 0$ for $\alpha \neq \beta$, $n = 1, 2, 3$.
(iii) The mode-$n$ singular values $\sigma_i^{(n)}$ are the singular values of the matricized tensor $A_{(n)}$, and the norms of the slices of $\mathcal{S}$ along every mode are ordered: $\sigma_1^{(n)} \ge \sigma_2^{(n)} \ge \cdots \ge \sigma_{I_n}^{(n)} \ge 0$.
For the 1-mode singular values of the matricized tensor $A_{(1)}$, we have $\sigma_i^{(1)} = \|\mathcal{S}_{i_1 = i}\|$, $1 \le i \le I_1$. The ordering property implies that, loosely speaking, the "energy" or "mass" of the core tensor is concentrated in the vicinity of the entry $(1, 1, 1)$. This property makes it possible to use the high-order SVD for data compression.
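A minimal HOSVD sketch in NumPy (an illustration, not the authors' code) that verifies both the exact reconstruction and property (iii) for mode 1:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    # Mode matrices: left singular vectors of each mode-n unfolding.
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
          for n in range(T.ndim)]
    # Core tensor: project T onto the mode matrices.
    S = T
    for n, U in enumerate(Us):
        S = mode_product(S, U.T, n)
    return S, Us

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
S, Us = hosvd(A)

# exact reconstruction: A = S x1 U1 x2 U2 x3 U3
R = S
for n, U in enumerate(Us):
    R = mode_product(R, U, n)
print(np.allclose(R, A))  # True

# the 1-mode singular values equal the norms of the core slices along mode 1
sv = np.linalg.svd(unfold(A, 0), compute_uv=False)
slice_norms = np.sqrt((S ** 2).sum(axis=(1, 2)))
print(np.allclose(sv, slice_norms))  # True
```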
In fact, we can compute the Multilinear ICA by the following two steps.
(1) For each $n$, compute $U^{(n)}$ by computing the SVD of the flattening $A_{(n)}$.
(2) Solve for the core tensor as
$$\mathcal{S} = \mathcal{A} \times_1 U^{(1)T} \times_2 U^{(2)T} \cdots \times_N U^{(N)T}.$$
In ICA, there is a corresponding strategy for Multilinear ICA. The architecture results in a factorial code, where each set of coefficients that encodes samples of tumor, genes, spanning data sources, and so forth is statistically independent. Flattening the data tensor in the $n$th mode and computing the ICA, we obtain
$$A_{(n)} = U^{(n)} \Sigma^{(n)} V^{(n)T} = \left(U^{(n)} W_n^{-1}\right)\left(W_n \Sigma^{(n)} V^{(n)T}\right),$$
where the mode matrices are given by $C^{(n)} = U^{(n)} W_n^{-1}$ and $W_n$ is the invertible transformation computed by ICA.
The architecture results in a set of basis vectors that are statistically independent across the different modes. We can derive the relationship between mode-$n$ ICA and mode-$n$ SVD in the context of this architecture as follows:
$$\mathcal{A} = \mathcal{Z} \times_1 C^{(1)} \times_2 C^{(2)} \cdots \times_N C^{(N)},$$
where the core tensor is $\mathcal{Z} = \mathcal{S} \times_1 W_1 \times_2 W_2 \cdots \times_N W_N$.
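The relationship between the mode-$n$ SVD and the mode-$n$ ICA bases can be checked numerically. In the sketch below, random invertible matrices $W_n$ stand in for the unmixing matrices an actual ICA algorithm (e.g., FastICA on each flattening) would estimate; the algebraic identity holds for any invertible $W_n$:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))

# HOSVD: A = S x1 U1 x2 U2 x3 U3
Us = [np.linalg.svd(unfold(A, n), full_matrices=False)[0] for n in range(3)]
S = A
for n, U in enumerate(Us):
    S = mode_product(S, U.T, n)

# Multilinear ICA rotates each mode matrix: C_n = U_n W_n^{-1}, with the
# core compensating as Z = S x1 W1 x2 W2 x3 W3.  Random invertible W_n
# stand in for ICA unmixing matrices here.
Ws = [rng.standard_normal((U.shape[1], U.shape[1])) for U in Us]
Cs = [U @ np.linalg.inv(W) for U, W in zip(Us, Ws)]
Z = S
for n, W in enumerate(Ws):
    Z = mode_product(Z, W, n)

# the representation is unchanged: A = Z x1 C1 x2 C2 x3 C3
R = Z
for n, C in enumerate(Cs):
    R = mode_product(R, C, n)
print(np.allclose(R, A))  # True
```

The point of the rotation is that the rows of each $W_n V^{(n)T}$ become statistically independent sources, while the multilinear structure of the decomposition is preserved.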
2.3. Mathematical Background of Gene Selection Strategy
Among the large number of genes in a GEP, only a small part may benefit the correct classification of tumor subtypes. Most of the remaining genes have little impact on the classification; even worse, some genes may act as "noise" and depress the classification accuracy. To obtain higher classification accuracy, we need to pick out a gene subset that benefits the tumor classification most.
The t-statistic is a statistical measure of how large the difference is between the distributions of two groups of samples. For a single gene, if it shows a larger distinction between two groups, it is more important for the classification of the two groups. To find the genes that contribute most to the classification, t-statistics has been used for gene selection in recent years [8, 28].
Selecting important genes using t-statistics involves three steps. Firstly, a score based on the t-statistic (the t-score) is calculated for each gene $g$ by the following (11):
$$s(g) = \frac{\left|\mu_1(g) - \mu_2(g)\right|}{\sqrt{\sigma_1^2(g)/n_1 + \sigma_2^2(g)/n_2}}. \qquad (11)$$
This step finds the important genes that help to discriminate between two classes by calculating a score for each gene based on the means $\mu_1(g)$ and $\mu_2(g)$ and the standard deviations $\sigma_1(g)$ and $\sigma_2(g)$ of the two classes of samples, which contain $n_1$ and $n_2$ samples, respectively.
Secondly, all the genes are rearranged according to their score. The gene with the largest score is put in the first place of the ranking list, followed by the gene with the second largest score, and so on.
Finally, only some top genes in the list are used for classification. We select the set of genes corresponding to the top-ranked scores to be used as initial informative genes. The standard t-statistic is only applicable to measuring the difference between two groups; therefore, when the number of classes is more than two, we need to modify the standard t-statistic.
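The three steps above can be sketched as follows; the score formula follows (11), and the synthetic data (genes as rows, samples as columns) is an illustrative assumption:

```python
import numpy as np

def t_scores(X, y):
    # X: genes x samples expression matrix; y: binary class labels (0/1).
    # Score per gene: |mu1 - mu2| / sqrt(s1^2/n1 + s2^2/n2).
    X1, X2 = X[:, y == 0], X[:, y == 1]
    n1, n2 = X1.shape[1], X2.shape[1]
    num = np.abs(X1.mean(axis=1) - X2.mean(axis=1))
    den = np.sqrt(X1.var(axis=1, ddof=1) / n1 + X2.var(axis=1, ddof=1) / n2)
    return num / den

def top_genes(X, y, k):
    # rank genes by score, largest first, and keep the top k
    return np.argsort(t_scores(X, y))[::-1][:k]

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)           # 40 samples in two classes
X = rng.standard_normal((100, 40))  # 100 genes of background noise
X[0, y == 1] += 5.0                 # gene 0 is strongly differential
print(top_genes(X, y, 3)[0])  # 0
```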
2.4. Mathematical Background of SVM Classifier
The support vector machine (SVM) is an area of statistical learning that has been subject to extensive research [29]. The SVM is based on the principle of structural risk minimization and thus provides good generalization control, which allows one to work with datasets that contain many irrelevant and noisy features. Using nonlinear kernels, the SVM can model nonlinear dependencies between the features and the target, which may prove advantageous for classification problems. When the SVM is used for tumor gene classification, it separates a given set of binary-labeled training data with a hyperplane that is maximally distant from them (the maximal margin hyperplane) [8, 30].
Because only a few samples of GEP are generally available, we use the SVM [5] as the classifier in our feature selection study, as it has been proven to be very useful for classifying gene expression data.
A MATLAB toolbox implementing the SVM is freely available for academic purposes and can be downloaded from http://www.isis.ecs.soton.ac.uk/resources/svminfo/.
2.5. MultilinearICA Model of High Order GEP Dataset
The structure of GEP integrated from different studies or experiments is of an order higher than that of a matrix; we generally call such data spanning datasets. Now let the tensor $\mathcal{D}$ denote the GEP of a spanning dataset, of size sample $\times$ gene $\times$ dataset, where $d_{ijk}$ is the expression level of the $j$th gene in the $i$th sample of the $k$th dataset. Each column vector of tensor $\mathcal{D}$ lists the GEP measured under the $j$th gene and the $k$th dataset. The row vectors list the GEP measured for the $i$th sample under the $k$th dataset across all genes, and under the $j$th gene across all datasets, respectively. We suppose that all the data have already been preprocessed and normalized, that is, every gene of the GEP has mean zero and standard deviation one.
The following discusses computational methods for the best multilinear rank approximation problem:
$$\min_{\hat{\mathcal{D}}} \left\| \mathcal{D} - \hat{\mathcal{D}} \right\|,$$
where $\mathcal{D}$ is a given GEP-tensor and $\hat{\mathcal{D}}$ is the unknown approximating GEP-tensor.
Our goal is to seek the best low multilinear-rank approximation tensor $\hat{\mathcal{D}}$. This is a generalization of the best low-rank matrix approximation problem. It is well known that for a matrix the solution is given by truncating the singular values in the SVD of the matrix, but for a tensor, in general, the truncated tensor SVD does not give an optimal approximation.
A third-order GEP-tensor $\hat{\mathcal{D}}$ with multilinear rank $(R_1, R_2, R_3)$ can be written as the product
$$\hat{\mathcal{D}} = \mathcal{S} \times_1 U \times_2 V \times_3 W,$$
where $\mathcal{S}$ is an $R_1 \times R_2 \times R_3$ tensor and $U$, $V$, and $W$ are matrices with orthonormal columns. The approximation problem is equivalent to a nonlinear optimization problem defined on a product of Grassmann manifolds.
In the rank-1 case, we want to find a tensor of the form $\hat{\mathcal{D}} = \sigma\, u_1 \circ u_2 \circ u_3$. The algorithm fixes all vectors except one, solves for the optimal vector, and does likewise for the others, cycling through the indices until the specified number of iterations is exhausted. These steps [31] are explained as follows.
In: GEP-tensor $\mathcal{D}$.
Out: GEP-tensor $\hat{\mathcal{D}}$, an estimate of the best rank-1 approximation of $\mathcal{D}$.
(1) Compute initial values: let $u_n^{(0)}$ be the dominant left singular vector of the flattening $D_{(n)}$, $n = 1, 2, 3$.
(2) For $k = 1, 2, \ldots$ (until converged), update each vector in turn by contracting $\mathcal{D}$ with the other two current vectors and normalizing the result.
(3) Let $\sigma = \mathcal{D} \times_1 u_1^{(K)T} \times_2 u_2^{(K)T} \times_3 u_3^{(K)T}$, where $K$ is the index of the final iteration of step (2).
(4) Set $\hat{\mathcal{D}} = \sigma\, u_1^{(K)} \circ u_2^{(K)} \circ u_3^{(K)}$.
This algorithm is important in practical applications, for the different rank-1 terms can often be related to different "mechanisms" that have contributed to the higher-order tensor; in addition, sufficiently mild uniqueness conditions enable the actual computation of these components (without imposing orthogonality constraints, as in the matrix case).
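A NumPy sketch of this alternating rank-1 procedure (an illustration of the algorithm as described, not the authors' implementation); the sanity check uses an exactly rank-1 tensor, for which the approximation should be exact:

```python
import numpy as np

def best_rank1(T, iters=30):
    # Higher-order power method for the best rank-1 approximation
    # sigma * (u1 o u2 o u3) of a third-order tensor T.
    # Step (1): initialize each u_n with the dominant left singular
    # vector of the mode-n unfolding.
    us = [np.linalg.svd(np.moveaxis(T, n, 0).reshape(T.shape[n], -1),
                        full_matrices=False)[0][:, 0] for n in range(3)]
    # Step (2): fix all vectors but one, solve for it, and cycle.
    for _ in range(iters):
        for n in range(3):
            lo, hi = [m for m in range(3) if m != n]
            v = np.tensordot(T, us[hi], axes=(hi, 0))  # contract higher mode first
            v = np.tensordot(v, us[lo], axes=(lo, 0))
            us[n] = v / np.linalg.norm(v)
    # Steps (3)-(4): recover the scale and assemble the rank-1 tensor.
    sigma = np.einsum('ijk,i,j,k->', T, *us)
    return sigma * np.einsum('i,j,k->ijk', *us)

# sanity check on an exactly rank-1 tensor
u = np.array([3.0, 4.0]) / 5.0
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0, 0.0])
T = 2.5 * np.einsum('i,j,k->ijk', u, v, w)
print(np.allclose(best_rank1(T), T))  # True
```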
In general, the number of genes in a single sample is in the thousands, so the above procedure can be used to compress the high-order GEP.
2.6. Tumor Classification Method
To simplify the computation, we normalize the expression values for each of the genes such that each sample has zero mean and unit variance. We choose the larger-score genes from all GEP datasets using the method described in Section 2.3. We divide each dataset into two parts, a training subdataset and a testing subdataset, and tabulate two tensors, a training tensor and a testing tensor, respectively. We perform Multilinear ICA on the training tensor to produce a core tensor $\mathcal{Z}$ and three mode matrices $C^{(1)}$, $C^{(2)}$, and $C^{(3)}$ such that
$$\mathcal{D}_{\mathrm{train}} = \mathcal{Z} \times_1 C^{(1)} \times_2 C^{(2)} \times_3 C^{(3)}.$$
Here, the core tensor obtained from the training tensor, denoted $\mathcal{Z}$, contains the coefficients (representations) of the multilinear combination of statistically independent sources that comprise the training tensor. From the testing tensor $\mathcal{D}_{\mathrm{test}}$ and the mode matrices $C^{(2)}$ and $C^{(3)}$ of the gene and dataset modes, we can obtain the testing core tensor $\mathcal{Z}_{\mathrm{test}}$ by the following equation:
$$\mathcal{Z}_{\mathrm{test}} = \mathcal{D}_{\mathrm{test}} \times_2 C^{(2)-1} \times_3 C^{(3)-1}. \qquad (16)$$
After achieving the representations of the training and test data using t-statistics and Multilinear ICA, the final step is to classify the dataset. We unfold the training and testing core tensors along the sample mode, obtain two matrices, and truncate them. We then use the training matrix and its corresponding labels to train the SVM classifier. Finally, we feed the testing matrix to the SVM and compare the predicted labels with the true ones to assess the performance.
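The whole pipeline can be sketched on synthetic data. The tensor sizes, the projection onto the gene and dataset mode subspaces, and the nearest-centroid classifier (a lightweight stand-in for the SVM step in the paper) are all illustrative assumptions:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(1)
# synthetic "spanning" GEP tensors: samples x genes x datasets, two classes
n_genes, n_sets = 50, 3
y_train = np.repeat([0, 1], 10)
y_test = np.repeat([0, 1], 5)

def simulate(y):
    D = rng.standard_normal((len(y), n_genes, n_sets))
    D[y == 1, :10, :] += 3.0   # class 1 over-expresses the first 10 genes
    return D

D_train, D_test = simulate(y_train), simulate(y_test)

# mode matrices of the gene and dataset modes from the training tensor
U_gene = np.linalg.svd(unfold(D_train, 1), full_matrices=False)[0][:, :8]
U_set = np.linalg.svd(unfold(D_train, 2), full_matrices=False)[0]

def core_features(D):
    # project onto the training subspaces and unfold along the sample mode
    S = mode_product(mode_product(D, U_gene.T, 1), U_set.T, 2)
    return unfold(S, 0)

F_train, F_test = core_features(D_train), core_features(D_test)

# nearest-centroid classifier as a stand-in for the SVM step
c0 = F_train[y_train == 0].mean(axis=0)
c1 = F_train[y_train == 1].mean(axis=0)
pred = (np.linalg.norm(F_test - c1, axis=1)
        < np.linalg.norm(F_test - c0, axis=1)).astype(int)
acc = (pred == y_test).mean()
print(acc)  # expected to be high on this well-separated synthetic data
```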
3. Results
To verify the classification ability of the proposed algorithm, experimental results are presented in this section. t-statistics is first used to select genes with high scores, the Multilinear ICA model is then applied to the chosen training GEP-tensor to extract independent eigenarrays, and finally the SVM is applied to classify the tumor samples using their representations corresponding to the independent eigenarrays.
3.1. Datasets
Three GEP datasets are available: two leukemia datasets and one lung tumor dataset. The two publicly available leukemia datasets were downloaded from http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi and http://www.genome.wi.mit.edu/MPR. The lung tumor dataset can be downloaded from http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi. The descriptions of the three datasets are shown in Table 1.

Because we have not obtained more available high-order GEP from public web sites, in order to validate the proposed algorithm we divide a large tumor dataset into smaller parts, which are regarded as datasets obtained from different experiments or different studies.
All the datasets are normalized so that they have zero mean and unit standard deviation. After normalization, the genes in the GEP are ranked by their t-statistic scores. The score distribution of every gene on leukemia dataset 1 is shown in Figure 2.
From Figure 2, we can see that the number of genes with very small scores is very large. That is to say, the vast majority of genes make little or no contribution to tumor classification. We therefore simply select the 200 top-ranked genes from each of the three datasets for Multilinear ICA.
3.2. Two-Fold Cross-Validation on Leukemia Datasets
In order to build up a tensor of spanning datasets, we randomly chose 38 samples from dataset 2, as many as there are in dataset 1. We design an experiment on all samples using two-fold cross-validation to evaluate the classification model. From these datasets, we chose the 200 larger-score genes from the two datasets using the t-statistics described in Section 2.3, constituted the training and testing tensors, and assigned all data samples to a training set or a testing set, as shown in Table 2.

By the above analysis, the original training set and testing set are both tensors. We performed Multilinear ICA on the training tensor to obtain a core tensor and three mode matrices, shown in Figures 3 and 4. From Figure 4, we find that many elements of the core tensor are very small or zero.
From the testing tensor and the three mode matrices, we can obtain the testing core tensor by (16). Then we unfold the two core tensors, obtaining two matrices. We use the training matrix and its corresponding labels to train the SVM classifier with Gaussian kernels and finally use the testing matrix and its corresponding labels to assess the performance. The statistical mean correct classification rate is 99%.
3.3. LOOCV on Two Leukemia Datasets
Because of the small number of datasets at hand, we repeated the same experiment using leave-one-out cross-validation (LOOCV); that is, the training set and the testing set are again tensors, with one sample left out for testing. The classification process is in principle similar to the one described above. The statistical mean correct classification rate is 99.80%.
3.4. LOOCV on “Three” Leukemia Datasets
We divide the above dataset 2 into two parts, each with 38 samples; note that 4 samples appear in both parts. Now we have three datasets and assign them to a training set and a testing set. We performed Multilinear ICA on the training tensor and obtained a core tensor and three mode matrices, as shown in Figures 6, 7, and 8. The testing core tensor can then be obtained from (16). After unfolding the training and testing core tensors as matrices and achieving the representations of the training and test data, we use the training matrix and its corresponding labels to train the SVM and finally use the testing matrix and its corresponding labels to assess the performance. The classification process is in principle similar to the ones described above. The statistical mean correct classification rate is 99.54%. We find that this result is slightly smaller than the result in Section 3.3. The reason is that leukemia dataset 2 is divided into two parts.
From the above experimental results, we can see that even as the number of genes of the spanning datasets used in Multilinear ICA decreases, the classification accuracy on the spanning leukemia datasets remains high, which means that the method is effective.
3.5. LOOCV on "Three"-Order Lung Datasets
Similar to Section 3.4, we first chose 200 genes using the t-statistics described in Section 2.3, then divided 180 samples of the lung GEP dataset 3 into three parts to form the training set, each part having 60 samples, while one remaining sample served as the test set. After performing Multilinear ICA on the training tensor, a core tensor and three mode matrices are obtained, as shown in Figure 9; the testing core tensor can then be obtained from (16).
After unfolding the training and testing core tensors as matrices and achieving the representations of the training and test data, we use the training matrix and its corresponding labels to train the SVM and finally use the testing matrix and its corresponding labels to assess the performance. The statistical mean correct classification rate is 90.26%.
To show the efficiency and feasibility of the proposed method, we compare it with two other methods, SVM and ICA + SVM [8]. The classification results are listed in Table 3 for comparison.

From Table 3, it is found that the result of ICA + SVM is slightly better than that of the proposed method on the lung datasets. The reason is again that the complete lung dataset is divided into three parts.
The experimental results demonstrate that our method achieves a good classification rate. When the microarray data are high-order, integrated from different studies, unfolding the data into a matrix breaks the structure of the data, and much of the information in the tensor data might be lost. The proposed method can analyze and process the high-order GEP datasets synchronously, and its superiority shows on high-order data.
4. Conclusions
Tumor classification based on GEP is a challenging task in bioinformatics. DNA microarray experiment technology has resulted in the expression levels of thousands of genes being recorded over a relatively small number of samples. ICA is a useful tool for a single GEP, but it is not applicable to high-order datasets integrated from different studies or different experimental settings. Considering the biological significance, we think that classification using a relatively large number of genes from spanning datasets may be more reasonable. For this reason, a new classification scheme for high-order GEP is proposed. The method involves gene selection using t-statistics, dimension reduction of the high-order GEP using Multilinear ICA, and classification using SVM. The experimental results show that our proposed method is effective. The method only provides an integrative framework for higher-order tumor classification using high-order GEP; there is still a great amount of work to be done in order to achieve the goal of tumor classification on spanning datasets. Further work is needed to apply our method to other high-order GEP of tumors that are hard to classify.
Acknowledgments
This work was supported by grants from the National Science Foundation of China (nos. 30570368 and 30700161) and by a grant from the National Basic Research Program of China (973 Program, no. 2007CB311002).
References
D. Shalon, S. J. Smith, and P. O. Brown, "A DNA microarray system for analyzing complex DNA samples using two-color fluorescent probe hybridization," Genome Research, vol. 6, no. 7, pp. 639–645, 1996.
M. Granzow, D. Berrar, W. Dubitzky, A. Schuster, F. Azuaje, and R. Eils, "Tumor classification by gene expression profiling: comparison and validation of five clustering methods," ACM SIGBIO Newsletter, vol. 21, pp. 16–22, 2001.
L. Wang, F. Chu, and W. Xie, "Accurate cancer classification using expressions of very few genes," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 1, pp. 40–53, 2007.
U. Alon, N. Barkai, D. A. Notterman et al., "Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays," Proceedings of the National Academy of Sciences of the United States of America, vol. 96, no. 12, pp. 6745–6750, 1999.
T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer, and D. Haussler, "Support vector machine classification and validation of cancer tissue samples using microarray expression data," Bioinformatics, vol. 16, no. 10, pp. 906–914, 2000.
H. Wang, D.-S. Huang, X.-M. Zhao, and X. Huang, "A novel clustering analysis based on PCA and SOMs for gene expression patterns," in Advances in Neural Networks, vol. 317 of Lecture Notes in Computer Science, pp. 476–481, Springer, Berlin, Germany, 2004.
J. J. Dai, L. Lieu, and D. Rocke, "Dimension reduction for classification with gene expression microarray data," Statistical Applications in Genetics and Molecular Biology, vol. 5, no. 1, pp. 1–21, 2006.
C.-H. Zheng, D.-S. Huang, and L. Shang, "Feature selection in independent component subspace for microarray data classification," Neurocomputing, vol. 69, no. 16–18, pp. 2407–2410, 2006.
A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis, John Wiley & Sons, New York, NY, USA, 2001.
G. Hori, M. Inoue, S. I. Nishimura et al., "Blind gene classification: an ICA-based gene classification/clustering method," RIKEN BSI BSIS Technical Report 025,112, 2002.
W. Liebermeister, "Linear modes of gene expression determined by independent component analysis," Bioinformatics, vol. 18, no. 1, pp. 51–60, 2002.
D.-S. Huang and C.-H. Zheng, "Independent component analysis-based penalized discriminant method for tumor classification using gene expression data," Bioinformatics, vol. 22, no. 15, pp. 1855–1862, 2006.
X. W. Zhang, Y. L. Yap, D. Wei, F. Chen, and A. Danchin, "Molecular diagnosis of human cancer type by gene expression profiles and independent component analysis," European Journal of Human Genetics, vol. 13, no. 12, pp. 1303–1311, 2005.
A. Frigyesi, S. Veerla, D. Lindgren, and M. Höglund, "Independent component analysis reveals new and biologically significant structures in micro array data," BMC Bioinformatics, vol. 7, article 290, 2006.
W. Kong, C. R. Vanderburg, H. Gunshin, J. T. Rogers, and X. Huang, "A review of independent component analysis application to microarray gene expression data," BioTechniques, vol. 45, no. 5, pp. 501–520, 2008.
S. Biswas, J. D. Storey, and J. M. Akey, "Mapping gene expression quantitative trait loci by singular value decomposition and independent component analysis," BMC Bioinformatics, vol. 9, article 244, 2008.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear analysis of image ensembles: TensorFaces," in Proceedings of the 7th European Conference on Computer Vision (ECCV '02), pp. 447–460, Copenhagen, Denmark, 2002.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear image analysis for facial recognition," in Proceedings of the International Conference on Pattern Recognition (ICPR '02), vol. 2, pp. 511–514, 2002.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear subspace analysis of image ensembles," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 93–99, 2003.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear independent component analysis," in Learning, Snowbird, Utah, USA, 2004.
L. Omberg, G. H. Golub, and O. Alter, "A tensor higher-order singular value decomposition for integrative analysis of DNA microarray data from different studies," Proceedings of the National Academy of Sciences of the United States of America, vol. 104, no. 47, pp. 18371–18376, 2007.
X. F. He, D. Cai, and P. Niyogi, "Tensor subspace analysis," in Advances in Neural Information Processing Systems 18, Vancouver, Canada, 2005.
T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SANDIA Report SAND2007-6702, 2007, Unlimited Release, http://www.osti.gov/bridge.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear independent components analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 547–553, 2005.
M. A. O. Vasilescu and D. Terzopoulos, "Multilinear analysis of image ensembles: TensorFaces," in Proceedings of the 7th European Conference on Computer Vision (ECCV '02), pp. 447–460, Copenhagen, Denmark, 2002.
L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
V. G. Tusher, R. Tibshirani, and G. Chu, "Significance analysis of microarrays applied to the ionizing radiation response," Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 9, pp. 5116–5121, 2001.
V. N. Vapnik, Statistical Learning Theory, Wiley-Interscience, New York, NY, USA, 1998.
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
B. W. Bader and T. G. Kolda, "Algorithm 862: MATLAB tensor classes for fast algorithm prototyping," ACM Transactions on Mathematical Software, vol. 32, no. 4, pp. 635–653, 2006.
Copyright
Copyright © 2009 Minggang Du et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.