Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 206251, 12 pages
Manifold Adaptive Kernel Semisupervised Discriminant Analysis for Gait Recognition
School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
Received 28 August 2013; Revised 14 October 2013; Accepted 28 October 2013
Academic Editor: Emanuele Zappa
Copyright © 2013 Ziqiang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A manifold adaptive kernel semisupervised discriminant analysis algorithm for gait recognition is proposed in this paper. Motivated by the fact that the nonlinear structure captured by data-independent kernels (such as the Gaussian kernel, polynomial kernel, and Sigmoid kernel) may not be consistent with the discriminative information and the intrinsic manifold structure information of gait images, we construct two graph Laplacians from two nearest neighbor graphs (i.e., an intrinsic graph and a penalty graph) to model the discriminative manifold structure. We then incorporate these two graph Laplacians into the kernel deformation procedure, which leads to a discriminative manifold adaptive kernel space. Finally, the discrepancy-based semisupervised discriminant analysis is performed in the manifold adaptive kernel space. Experimental results on the well-known USF HumanID gait database demonstrate the efficacy of our proposed algorithm.
In the past two decades, gait recognition has become a hot research topic in pattern recognition and computer vision, owing to its wide applications in areas such as information surveillance, identity authentication, and human-computer interaction. While many algorithms have been proposed for gait recognition [1–5], the most successful and popular approaches to date are the average silhouette-based methods with subspace learning. The common goal of these approaches is to find a compact and representative low-dimensional feature subspace for gait representation, so that the intrinsic characteristics of the original gait images are well preserved. Representative algorithms include principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projection (LPP), and marginal Fisher analysis (MFA).
PCA aims to generate a set of orthonormal basis vectors under which the samples have the minimum reconstruction error. Since PCA is an unsupervised method, it is optimal in terms of reconstruction, but not for discriminating one class from others. LDA is a supervised subspace learning approach that seeks the projection directions that maximize the interclass scatter while minimizing the intraclass scatter. When label information is available, LDA usually outperforms PCA for pattern classification tasks. However, LDA has a critical drawback: its available feature dimension is limited by the number of classes in the data. To overcome this problem, Tao et al. proposed the general averaged divergence analysis (GADA) framework and the maximization of the geometric mean of all divergences (MGMD) method, respectively. In addition, in order to efficiently and robustly estimate the low-rank and sparse structure of high-dimensional data, Zhou and Tao developed the "Go Decomposition" (GoDec) method and analyzed its asymptotic properties and convergence speed. While these algorithms have attained reasonably good performance in gait recognition, face recognition, and object classification, they are designed for discovering only the global Euclidean structure, whereas the local manifold structure is ignored.
Recently, various studies have shown that images may reside on a nonlinear submanifold [12–14]. Therefore, gait image representation is fundamentally related to manifold learning, which aims to derive a compact low-dimensional embedding that preserves the local geometric properties of the underlying high-dimensional data. The most representative manifold learning algorithm is locality preserving projection (LPP), which seeks the optimal linear approximation to the eigenfunctions of the Laplace-Beltrami operator on the data manifold. Since LPP is originally unsupervised and does not take class label information into account, it does not necessarily work well in supervised dimensionality reduction scenarios. By jointly considering the local manifold structure and the class label information, as well as characterizing the separability of different classes with the margin criterion, marginal Fisher analysis (MFA) delivers reasonably good performance in many pattern classification applications. While the motivations of these methods differ, they can be interpreted within a general graph embedding framework (GEF) or the patch alignment framework (PAF). In addition, the discriminative information preservation (DIP) algorithm was also proposed based on PAF. Although the above vector-based dimensionality reduction algorithms have achieved great success in image analysis and computer vision, they destroy the intrinsic tensor structure of high-order data. To overcome this issue, Tao et al. [17, 18] generalized vector-based learning to tensor-based learning and proposed the supervised tensor learning (STL) framework. More recently, it has been shown that slow feature analysis (SFA) can extract useful motion patterns and improve recognition performance.
In general, supervised dimensionality reduction approaches are suitable for pattern classification tasks when sufficient labeled data are available. Unfortunately, in many practical applications of pattern classification, one often faces a lack of sufficient labeled data, since the labeling process usually requires much human labor. Meanwhile, in many cases, large numbers of unlabeled data are easier to obtain. To effectively utilize the labeled and unlabeled data simultaneously, semisupervised learning was proposed and introduced into the dimensionality reduction process. The motivation behind semisupervised learning is to employ a large number of unlabeled data to help build more accurate models from the labeled data. In the last decade, many semisupervised learning methods have been proposed, such as transductive SVM (TSVM), cotraining, and graph-based semisupervised learning algorithms. In addition, motivated by recent progress in Hessian eigenmaps, Tao et al. introduced Hessian regularization into SVM for semisupervised learning and mobile image annotation on the cloud. All these algorithms consider only the classification problem, either transductive or inductive. Semisupervised dimensionality reduction has also been considered recently; the most representative algorithm is semisupervised discriminant analysis (SDA), which aims to extract discriminative features while preserving the geometrical information of both labeled and unlabeled data. While SDA has achieved reasonably good performance in face recognition and image retrieval, some problems remain that have not been properly addressed. (1) The original SDA is still a linear technique in nature. It can only extract linear features of the input patterns and fails to capture nonlinear features, so SDA is inadequate to describe the complexity of real gait images caused by variations in viewpoint, surface, and carrying condition. (2) The original SDA suffers from the singular (small sample size) problem, which arises in high-dimensional pattern recognition tasks such as gait recognition, where the dimension of the samples is much larger than the number of available samples.
To address the above issues, we propose a novel manifold adaptive kernel semisupervised discriminant analysis (MAKSDA) algorithm for gait recognition in this paper. First, we reformulate the optimal objective function of SDA using the discrepancy criterion rather than the ratio criterion, so that the singular problem is avoided. Second, the discrepancy-based SDA is extended to the nonlinear case through the kernel trick. Meanwhile, a discriminative manifold adaptive kernel function is proposed to enhance the learning capability of MAKSDA. Finally, experimental results on gait recognition are presented to demonstrate the effectiveness of the proposed algorithm.
In summary, the contributions of this paper are as follows. (1) We propose the MAKSDA algorithm, which integrates the discriminative information obtained from the labeled gait images and the manifold adaptive kernel function explored by the unlabeled gait images to form the low-dimensional feature space for gait recognition. (2) In order to avoid the singular problem, we adopt the discrepancy criterion rather than the ratio criterion in the kernel space. (3) We analyze different parameter settings of the MAKSDA algorithm, including the kernel function type, the nearest neighbor size in the intrinsic graph, and the nearest neighbor size in the penalty graph.
The remainder of this paper is organized as follows. Section 2 describes how to extract the Gabor-based gait representation. Section 3 briefly reviews SDA. In Section 4, we propose the MAKSDA algorithm for gait recognition. The experimental results are reported in Section 5. Finally, we provide the concluding remarks and suggestions for future work in Section 6.
2. Gabor Feature Representation of Gait Image
The effective representation of the gait image is a key issue in gait recognition. In the following, we employ the averaged gait image as the appearance model [1, 2], since it provides a compact representation that characterizes the motion patterns of the human body. In addition, the Gabor wavelets, whose kernels are similar to the 2D receptive field profiles of mammalian cortical simple cells, exhibit desirable characteristics of spatial locality and orientation selectivity. Therefore, it is reasonable to use Gabor functions to model averaged gait images. Particularly, the averaged gait image is first decomposed using Gabor filters; we then combine the decomposed images into a new Gabor feature representation, which has been demonstrated to be an effective feature for gait recognition.
The Gabor filters are the product of an elliptical Gaussian envelope and a complex plane wave, and can be defined as follows:

ψ_{u,v}(z) = (‖k_{u,v}‖² / σ²) exp(−‖k_{u,v}‖² ‖z‖² / (2σ²)) [exp(i k_{u,v} · z) − exp(−σ²/2)], (1)

where u and v define the orientation and scale of the Gabor filters, respectively, z = (x, y), ‖·‖ denotes the norm operator, and the wave vector k_{u,v} is defined as follows:

k_{u,v} = k_v e^{iφ_u}, (2)

where

k_v = k_max / f^v  and  φ_u = πu/8. (3)

k_max represents the maximum frequency, and its value is usually set as k_max = π/2; f denotes the spacing factor between kernels in the frequency domain, and its value is usually set as f = √2.
The Gabor filters defined in (1) are all self-similar, since they can be generated from one filter (the mother wavelet) by scaling and rotation via the wave vector k_{u,v}. The term exp(−σ²/2) is subtracted in order to make the kernel DC-free. Thus, a bank of Gabor filters is generated by a set of various scales and rotations.
In this paper, following the conventional settings, we use Gabor filters at five scales v ∈ {0, 1, 2, 3, 4} and eight orientations u ∈ {0, 1, ..., 7} with the parameter σ = 2π. We thus have 40 Gabor kernel functions from five scales and eight orientations. Figures 1 and 2 show the real part of the Gabor filters used in this paper and their magnitude, respectively. As can be seen, the Gabor filters demonstrate desirable properties of spatial localization, orientation selectivity, and frequency selectivity.
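As a concrete illustration, such a filter bank can be sketched in NumPy. The 32×32 kernel size is an illustrative choice; the parameter values (k_max = π/2, f = √2, σ = 2π) follow the conventional settings described above.

```python
import numpy as np

def gabor_kernel(v, u, size=32, sigma=2 * np.pi, k_max=np.pi / 2, f=np.sqrt(2)):
    """One complex Gabor kernel at scale v and orientation u, per Eq. (1)."""
    k = k_max / (f ** v)                    # radial frequency k_v for scale v
    phi = np.pi * u / 8                     # orientation angle for u in 0..7
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    z2 = x ** 2 + y ** 2                    # squared distance from the center
    envelope = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * z2 / (2 * sigma ** 2))
    # complex plane wave minus the DC offset, making the kernel DC-free
    wave = np.exp(1j * k * (x * np.cos(phi) + y * np.sin(phi))) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

# 5 scales x 8 orientations = 40 filters, as in the conventional setting
bank = [gabor_kernel(v, u) for v in range(5) for u in range(8)]
```

The real parts of these 40 kernels correspond to the filters visualized in Figure 1.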
The Gabor feature representation of a gait image is obtained by convolving the Gabor filters with the averaged gait image. Let I(z) be the averaged gait image; the convolution of the gait image I and the Gabor filters ψ_{u,v} can be defined as follows:

O_{u,v}(z) = I(z) * ψ_{u,v}(z), (4)

where z = (x, y), * represents the convolution operator, and O_{u,v}(z) is the convolution result corresponding to the Gabor filter at orientation u and scale v. As a result, the set {O_{u,v}(z) : u ∈ {0, ..., 7}, v ∈ {0, ..., 4}} forms the Gabor feature representation of the gait image. For each averaged gait image, we thus obtain 40 Gabor-filtered images after convolving it with the Gabor filters. In addition, as suggested in [28, 29], in order to encompass different spatial frequencies (scales), spatial localities, and orientation selectivities, we concatenate all these results to derive the final Gabor feature vector. Before the concatenation, each O_{u,v}(z) is downsampled by a factor ρ to reduce the space dimension and normalized to zero mean and unit variance; a vector is then constructed from the downsampled O_{u,v}(z) by concatenating its rows (or columns). Let O_{u,v}^{(ρ)} represent the normalized vector constructed from O_{u,v}(z); the final Gabor feature vector χ^{(ρ)} can then be defined as follows:

χ^{(ρ)} = ((O_{0,0}^{(ρ)})^T, (O_{0,1}^{(ρ)})^T, ..., (O_{4,7}^{(ρ)})^T)^T. (5)

Consequently, the vector χ^{(ρ)} serves as the Gabor feature representation of the averaged gait image for gait recognition.
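The convolve-downsample-normalize-concatenate pipeline described above can be sketched as follows. Circular convolution via the FFT and the downsampling factor rho are illustrative simplifications, not necessarily the paper's exact implementation.

```python
import numpy as np

def gabor_feature(img, bank, rho=4):
    """Gabor feature of an averaged gait image: convolve with every filter,
    downsample the magnitude by rho, z-score normalize, and concatenate."""
    F = np.fft.fft2(img)
    feats = []
    for g in bank:
        G = np.fft.fft2(g, s=img.shape)          # zero-pad the filter to image size
        mag = np.abs(np.fft.ifft2(F * G))        # circular convolution magnitude
        vec = mag[::rho, ::rho].ravel()          # downsample by a factor rho
        vec = (vec - vec.mean()) / (vec.std() + 1e-12)  # zero mean, unit variance
        feats.append(vec)
    return np.concatenate(feats)
```

For a 32×32 image and 40 filters with rho = 4, this yields a 40 × 64 = 2560-dimensional feature vector.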
3. Brief Review of SDA
Given a set of l labeled samples {x_1, ..., x_l}, each of which has a class label, and u unlabeled samples {x_{l+1}, ..., x_{l+u}} with unknown class labels, let m = l + u and X = [x_1, ..., x_m]. The optimal objective function of SDA is defined as follows:

a_opt = arg max_a (a^T S_b a) / (a^T S_t a + α J(a)), (6)

where S_b and S_t denote the between-class scatter matrix and the total scatter matrix, respectively, and α ≥ 0 is a regularization parameter. According to the graph perspective of LDA in [7, 8], S_b and S_t can be defined as follows:

S_b = X W X^T,  S_t = X X^T, (7)

where the data samples are assumed to be centered and the weight matrix W is defined as follows:

W_ij = 1/l_c if x_i and x_j both belong to the c-th class, and W_ij = 0 otherwise, (8)

where l_c denotes the total number of data samples belonging to the c-th class.

In addition, the regularizer term J(a) is used to model the manifold structure. Using the local-invariance idea of manifold learning, J(a) is defined as follows:

J(a) = Σ_ij (a^T x_i − a^T x_j)² S_ij (9)

= 2 a^T X L X^T a, (10)

where L = D − S is the graph Laplacian, D is a diagonal matrix given by D_ii = Σ_j S_ij, and S denotes the following weight matrix:

S_ij = 1 if x_i is among the k nearest neighbors of x_j or x_j is among the k nearest neighbors of x_i, and S_ij = 0 otherwise. (11)

Then, the projection vector a is given by the maximum eigenvalue solution to the generalized eigenvalue problem:

S_b a = λ (S_t + α X L X^T) a. (12)
Although SDA exploits both discriminant and geometrical information for dimensionality reduction and has achieved reasonably good performance in many fields, some problems remain that have not been properly addressed. (1) SDA suffers from the singular problem in gait recognition, since the number of gait images is much smaller than the dimension of each gait image. (2) SDA is a linear technique in nature, so it is inadequate to describe the complexity of real gait images. Although a nonlinear extension of SDA through the kernel trick has been proposed, it still has two shortcomings: (1) it suffers from the singular problem and (2) it adopts data-independent kernels, which may not be consistent with the intrinsic manifold structure revealed by the labeled and unlabeled data samples.
To fully address the above issues, we propose a novel manifold adaptive kernel semisupervised discriminant analysis (MAKSDA) algorithm for gait recognition in the following section.
4. Manifold Adaptive Kernel SDA (MAKSDA) Algorithm
Although SDA can produce linear discriminating features, a numerical computation problem for gait recognition still exists; that is, the matrix S_t + α X L X^T in (12) may be singular. In this paper, the discrepancy criterion [30–32] is adopted as an alternative way to avoid the singular problem of SDA, since the ratio criterion can be well approximated by the discrepancy criterion. The discrepancy-based SDA can then be defined as follows:

w_opt = arg max_w  w^T S_b w − μ w^T (S_t + α X L X^T) w, (13)

where μ > 0 is a balancing parameter. Maximizing (13) is equivalent to maximizing w^T S_b w and minimizing w^T (S_t + α X L X^T) w simultaneously, which is consistent with the ratio criterion of the original SDA.

Since we can freely multiply w by any nonzero constant, we assume w^T w = 1. The maximization problem in (13) can then be equivalently transformed into the following Lagrange function:

L(w, λ) = w^T (S_b − μ(S_t + α X L X^T)) w − λ(w^T w − 1). (14)

Setting ∂L(w, λ)/∂w = 0, we obtain

(S_b − μ(S_t + α X L X^T)) w = λ w. (15)

Then, the SDA problem is transformed into finding the leading eigenvectors of the matrix S_b − μ(S_t + α X L X^T). Since no matrix inverse operation needs to be computed, the discrepancy-based SDA successfully avoids the singular problem of the original SDA.

Let the column vectors w_1, ..., w_d be the solutions of (15), ordered according to their eigenvalues. The SDA embedding is then given by y = P^T x, where y denotes the lower-dimensional feature representation of x and P = [w_1, ..., w_d] is the optimal projection matrix of SDA.
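A minimal sketch of the discrepancy-based embedding, assuming the scatter matrices and the regularizer matrix X L X^T have already been formed; the balancing parameters mu and alpha below are illustrative defaults.

```python
import numpy as np

def discrepancy_sda(Sb, St, XLX, mu=1.0, alpha=0.1, dim=2):
    """Projection matrix from the discrepancy criterion of Eq. (15):
    leading eigenvectors of Sb - mu*(St + alpha*X L X^T); no inverse needed."""
    D = Sb - mu * (St + alpha * XLX)
    D = (D + D.T) / 2                      # symmetrize against round-off
    vals, vecs = np.linalg.eigh(D)         # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:dim]]
```

Because `eigh` never inverts a matrix, this sketch remains well defined even when S_t + α X L X^T is singular.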
Although the above discrepancy-based SDA algorithm avoids the singular problem of the original SDA algorithm, it is still a linear algorithm. It may fail to discover the nonlinear geometric structure when the gait images are highly nonlinear. Thus, in order to solve the nonlinear problem, the discrepancy-based SDA needs to be generalized to its nonlinear version via the kernel trick. The main idea of the kernel trick is to map the input data into a feature space with a nonlinear mapping function, where inner products in the feature space can be computed through a kernel function without knowing the nonlinear mapping explicitly. In the following, we discuss how to perform the discrepancy-based SDA in a reproducing kernel Hilbert space (RKHS) and how to produce a manifold adaptive kernel function that is consistent with the intrinsic manifold structure, which gives rise to the manifold adaptive kernel SDA (MAKSDA).
To extend SDA to MAKSDA, let φ: R^n → F be a nonlinear mapping from the input space to a high-dimensional feature space F. The idea of MAKSDA is to perform the discrepancy-based SDA in the feature space F instead of the input space. For a properly chosen φ, an inner product ⟨·,·⟩ can be defined on F, which makes F a so-called RKHS. More specifically, ⟨φ(x_i), φ(x_j)⟩ = k(x_i, x_j) holds, where k(·,·) is a positive semidefinite kernel function.
Let S_b^φ, S_t^φ, and J^φ(w) denote the between-class scatter matrix, the total scatter matrix, and the regularizer term in the feature space, respectively. According to (7) and (9), we can obtain

S_b^φ = φ(X) W φ(X)^T, (16)

S_t^φ = φ(X) φ(X)^T, (17)

J^φ(w) = 2 w^T φ(X) L φ(X)^T w, (18)

where φ(X) = [φ(x_1), ..., φ(x_m)] denotes the data matrix in F and the definition of W is the same as in Section 3.

Then, according to (13), the optimal objective function of MAKSDA in the feature space F can be defined as follows:

J(w) = w^T S_b^φ w − μ w^T (S_t^φ + α φ(X) L φ(X)^T) w, (19)

with the constraint

w^T w = 1. (20)

To solve the above optimization problem, we introduce the following Lagrangian multiplier method:

L(w, λ) = w^T (S_b^φ − μ(S_t^φ + α φ(X) L φ(X)^T)) w − λ(w^T w − 1), (21)

with the multiplier λ.

Setting ∂L(w, λ)/∂w = 0, we obtain

(S_b^φ − μ(S_t^φ + α φ(X) L φ(X)^T)) w = λ w. (22)

Since any solution w ∈ F must be a linear combination of φ(x_1), ..., φ(x_m), there exist coefficients β_1, ..., β_m such that

w = Σ_{i=1}^m β_i φ(x_i) = φ(X) β, (23)

where β = (β_1, ..., β_m)^T. Substituting (23) into (22) and left-multiplying both sides by φ(X)^T yields the following generalized eigenproblem on the kernel matrix K = φ(X)^T φ(X):

(K W K − μ(K² + α K L K)) β = λ K β. (24)

Thus, the MAKSDA problem is transformed into finding the leading eigenvectors of (24). Since no matrix inverse operation needs to be computed, MAKSDA successfully avoids the singular problem. Meanwhile, each eigenvector β gives a projective function in the feature space. For a new test sample x, its low-dimensional embedding can be computed as y = B^T k_x, where B = [β_1, ..., β_d] collects the leading eigenvectors and k_x = (k(x_1, x), ..., k(x_m, x))^T is the kernel vector of x.
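Embedding a new test sample thus needs only its kernel evaluations against the training set. The Gaussian kernel below is one illustrative choice of base kernel, and the helper names are hypothetical.

```python
import numpy as np

def embed_test_sample(x, X_train, B, kernel):
    """Low-dimensional embedding of a new sample: y = B^T k_x,
    where k_x[i] = kernel(x_train_i, x) and B stacks the leading eigenvectors."""
    k_x = np.array([kernel(xi, x) for xi in X_train])
    return B.T @ k_x

# example base kernel: Gaussian with bandwidth sigma (an illustrative choice)
gauss = lambda a, b, sigma=1.0: np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))
```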
From the above derivation, we can observe that the kernel function k(·,·) plays an important role in the MAKSDA algorithm. Traditional kernel-based methods commonly adopt data-independent kernels, such as the Gaussian kernel, polynomial kernel, and Sigmoid kernel. However, the nonlinear structure captured by these data-independent kernels may not be consistent with the discriminative information and the intrinsic manifold structure. To address this issue, in the following, we discuss how to design the discriminative manifold adaptive kernel function of MAKSDA, which fully takes into account the discriminative information and the intrinsic manifold structure, thus leading to much better performance.
Let H be a linear space of functions with a positive semidefinite inner product (quadratic form) ⟨·,·⟩_H, and let V: H → R^m be a bounded linear operator. In addition, we define H̃ to be the space of functions from H with the modified inner product

⟨f, g⟩_H̃ = ⟨f, g⟩_H + ⟨Vf, M Vg⟩, (25)

where M is a positive semidefinite matrix. Sindhwani et al. have proved that H̃ is still an RKHS.

Given the data examples x_1, ..., x_m, let S: H → R^m be the evaluation map

S(f) = (f(x_1), ..., f(x_m))^T. (26)

Taking V = S, we can obtain

⟨f, g⟩_H̃ = ⟨f, g⟩_H + S(f)^T M S(g). (27)

Let k_x denote the kernel vector

k_x = (k(x_1, x), ..., k(x_m, x))^T. (28)

Sindhwani et al. have shown that the reproducing kernel k̃ in H̃ is

k̃(x, z) = k(x, z) − k_x^T (I + MK)^{-1} M k_z, (29)

where K denotes the kernel matrix with K_ij = k(x_i, x_j) and I is an identity matrix. The key issue now is the choice of M, so that the deformation of the kernel induced by the data-dependent norm is aligned with the discriminative information and the intrinsic manifold structure of the gait images.
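Evaluated on the training points themselves, this deformation reduces to a closed-form update of the Gram matrix. The sketch below assumes K and M are given as dense NumPy arrays and uses a linear solve in place of an explicit inverse.

```python
import numpy as np

def deform_kernel(K, M):
    """Deformed Gram matrix on the training points:
    K_tilde = K - K (I + M K)^{-1} M K."""
    n = K.shape[0]
    correction = np.linalg.solve(np.eye(n) + M @ K, M @ K)  # (I + MK)^{-1} MK
    return K - K @ correction
```

Setting M = 0 recovers the original kernel, which makes the role of M as the sole source of deformation explicit.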
In order to model the discriminative manifold structure, we construct two nearest neighbor graphs, that is, an intrinsic graph G_I and a penalty graph G_P. For each data sample x_i, the intrinsic graph G_I is constructed by finding its k_1 nearest neighbors among the data samples sharing the class label of x_i and putting an edge between x_i and each of these neighbors. The weight matrix W^I on the intrinsic graph G_I is defined as follows:

W^I_ij = 1 if x_j ∈ N_{k_1}(x_i) or x_i ∈ N_{k_1}(x_j), and W^I_ij = 0 otherwise, (30)

where N_{k_1}(x_i) denotes the set of the k_1 nearest neighbors of x_i that are in the same class.

Similarly, for each data sample x_i, the penalty graph G_P is constructed by finding its k_2 nearest neighbors among the data samples whose class labels differ from that of x_i and putting an edge between x_i and each of these neighbors. The weight matrix W^P on the penalty graph G_P is defined as follows:

W^P_ij = 1 if (i, j) ∈ P_{k_2}(c_i) or (i, j) ∈ P_{k_2}(c_j), and W^P_ij = 0 otherwise, (31)

where P_{k_2}(c) denotes the set of the k_2 nearest data pairs among the pairs {(i, j) : x_i belongs to class c, x_j does not}.

To encode the discriminative information, we maximize the margins between different classes. The between-class separability is modeled by the graph Laplacian defined on the penalty graph:

Σ_ij (f_i − f_j)² W^P_ij = 2 f^T L^P f, (32)

where L^P = D^P − W^P is the Laplacian matrix of the penalty graph, the i-th element of the diagonal matrix D^P is D^P_ii = Σ_j W^P_ij, and f = (f_1, ..., f_m)^T denotes the evaluations of a function f on the data samples.

To encode the intrinsic manifold structure, the graph Laplacian provides the following smoothness penalty on the intrinsic graph:

Σ_ij (f_i − f_j)² W^I_ij = 2 f^T L^I f, (33)

where L^I = D^I − W^I is the Laplacian matrix of the intrinsic graph and the i-th element of the diagonal matrix D^I is D^I_ii = Σ_j W^I_ij.

We minimize (33) to retain the intrinsic manifold structure information and maximize (32) to make the data samples in different classes separable. Thus, by combining the discriminative information and the intrinsic manifold structure information, we can set M in (29) as

M = L^I − γ L^P, (34)

where γ > 0 balances the two terms. Substituting (34) into (29) and evaluating the deformed kernel on the training samples yields the discriminative manifold adaptive kernel matrix

K̃ = K − K(I + MK)^{-1} MK. (35)
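The two graph constructions can be sketched as follows, assuming brute-force neighbor search and binary edge weights; the combination weight gamma and the helper names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def knn_graph_laplacian(X, labels, k, same_class=True):
    """Graph Laplacian L = D - W of a binary kNN graph whose candidate
    neighbors are restricted to the same class (intrinsic graph) or to
    the other classes (penalty graph)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        mask = (labels == labels[i]) if same_class else (labels != labels[i])
        mask = mask.copy()
        mask[i] = False                                   # never self-neighbor
        cand = np.where(mask)[0]
        W[i, cand[np.argsort(d2[i, cand])[:k]]] = 1.0     # k nearest candidates
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(axis=1)) - W

def deformation_matrix(X, labels, k1=6, k2=15, gamma=1.0):
    """Intrinsic Laplacian minus a gamma-weighted penalty Laplacian."""
    return (knn_graph_laplacian(X, labels, k1, True)
            - gamma * knn_graph_laplacian(X, labels, k2, False))
```

Row sums of each Laplacian are zero by construction, which is a convenient sanity check on the graph-building step.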
As can be seen, the main idea of constructing the discriminative manifold adaptive kernel is to incorporate the discriminative information and the intrinsic manifold structure information into the kernel deformation procedure simultaneously. Thus, the resulting kernel can take advantage of information from both labeled and unlabeled data. When an input initial kernel is deformed according to (35), the resulting manifold adaptive kernel function may achieve much better performance than the original input kernel. In this paper, we simply use the Gaussian kernel as the input initial kernel.
In summary, combining the above discussions, the proposed manifold adaptive kernel SDA (MAKSDA) algorithm can be outlined as follows. (1) Calculate the initial kernel matrix K in the original data space. Construct the intrinsic graph G_I with the weight matrix W^I defined in (30) and calculate its graph Laplacian L^I. Construct the penalty graph G_P with the weight matrix W^P defined in (31) and calculate its graph Laplacian L^P. Calculate M and the discriminative manifold adaptive kernel matrix K̃ in terms of (34) and (35), respectively. (2) Replace K in (24) with K̃ defined in (35) to obtain the following generalized eigenproblem:

(K̃ W K̃ − μ(K̃² + α K̃ L K̃)) β = λ K̃ β. (36)

(3) Compute the eigenvectors and eigenvalues of the generalized eigenproblem (36). Let the column vectors β_1, ..., β_d be the solutions of (36), ordered according to their eigenvalues. The MAKSDA embedding of a sample x can then be computed as follows:

y = B^T k̃_x, (37)

where B = [β_1, ..., β_d] and k̃_x = (k̃(x_1, x), ..., k̃(x_m, x))^T is the discriminative manifold adaptive kernel vector.
Now, we obtain the low-dimensional representations of the original gait images with (37). In the reduced feature space, images belonging to the same class are close to each other, while images belonging to different classes are far apart. Thus, traditional classifiers can be applied to distinguish different gait images. In this paper, we apply the nearest neighbor classifier for its simplicity, with the Euclidean metric as the distance measure.
The time complexity of MAKSDA is outlined as follows, where m denotes the number of samples and n the input dimension. Computing the input initial kernel matrix takes O(m²n). Constructing the intrinsic graph and the penalty graph takes O(m²(n + k_1)) and O(m²(n + k_2)), respectively. Computing the discriminative manifold adaptive kernel matrix and solving the generalized eigenproblem each take O(m³). Projecting an original image into the lower-dimensional feature space takes O(md). Thus, the total computational complexity of MAKSDA is O(m³), which is of the same order as the traditional kernel SDA algorithm in the kernel space.
5. Experimental Results
In this section, we report experimental results on the well-known USF HumanID gait database  to investigate the performance of our proposed MAKSDA algorithm for gait recognition.
The system performance is compared with kernel PCA (KPCA), kernel LDA (KLDA), kernel LPP (KLPP), kernel MFA (KMFA), and kernel SDA (KSDA), five of the most popular nonlinear methods in gait recognition. We adopt the commonly used Gaussian kernel k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)) as the kernel function of these five algorithms, where σ is chosen relative to the standard deviation of the data set. We report the best result of each algorithm from among the 21 experiments. There are two important parameters in our proposed MAKSDA algorithm, that is, the number of nearest neighbors k_1 in the intrinsic graph and the number of nearest neighbors k_2 in the penalty graph. We empirically set them to 6 and 15, respectively. In the following section, we discuss the effect of different values of k_1 and k_2 on the recognition performance. In addition, since the original SDA algorithm is robust to the regularization parameter α, we simply use the same value of α in KSDA and MAKSDA for fair comparison.
We carried out all of our experiments on the USF HumanID gait database, which consists of 1870 sequences from 122 subjects (people). As suggested in , the whole sequence is partitioned into several subsequences according to the gait period length, which is provided by Sarkar et al. . Then, the binary images within one gait cycle are averaged to obtain several gray-level average silhouette images as follows:

A_k(z) = (1/T) Σ_{t=1}^{T} B_{(k−1)T+t}(z),  k = 1, ..., ⌊N/T⌋, (38)

where B_1, ..., B_N represent the binary images for one sequence with N frames, T is the gait period length, and ⌊·⌋ denotes the largest integer less than or equal to its argument. Some original binary images and the average silhouettes of two different people are shown in Figure 3, where the first seven images and the last image in each row denote the binary silhouette images and the average silhouette image, respectively. As can be seen, different individuals have different average silhouette images.
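The cycle-averaging step above can be sketched as follows, assuming the binary frames are stacked in a NumPy array and incomplete trailing cycles are discarded.

```python
import numpy as np

def average_silhouettes(frames, T):
    """Gray-level average silhouettes: mean of the binary frames in each
    complete gait cycle of length T (an incomplete trailing cycle is dropped)."""
    n_cycles = len(frames) // T
    return np.stack([frames[k * T:(k + 1) * T].mean(axis=0)
                     for k in range(n_cycles)])
```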
In this paper, to perform gait recognition, the averaged gait image is decomposed by the Gabor filters introduced in Section 2. We combine the decomposed images into the new Gabor feature representation defined in (5), which is suitable for gait recognition. Our use of the Gabor-based feature representation for averaged gait-image-based recognition is based on the following considerations: (1) Gabor functions provide a favorable tradeoff between spatial resolution and frequency resolution, which can be controlled through the scale and orientation parameters; (2) Gabor kernels are thought to be similar to the 2D receptive field profiles of mammalian cortical simple cells; and (3) Gabor-function-based representations have been successfully employed in many machine vision applications, such as face recognition, scene classification, and object recognition.
In short, the gait recognition algorithm has three steps. First, we calculate the Gabor feature representation of the averaged gait image. Then, the Gabor feature representations are projected into the lower-dimensional feature space via our proposed MAKSDA algorithm. Finally, the nearest neighbor classifier is adopted to identify different gait images. As suggested in [1–3], the distance measure between the gallery sequence and the probe sequence adopts the following median operator, since it is more robust to noise than the traditional minimum operator:

Dist(P, G) = Median_{i=1,...,N_P} ( min_{j=1,...,N_G} ‖p_i − g_j‖ ), (39)

where p_i, i = 1, ..., N_P, and g_j, j = 1, ..., N_G, are the lower-dimensional feature representations from one probe sequence and one gallery sequence, respectively, and N_P and N_G denote the total numbers of average silhouette images in the probe sequence and the gallery sequence, respectively.
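A sketch of the median-based sequence distance, assuming each sequence is represented by the stacked low-dimensional features of its average silhouettes; taking the median over probe frames of the nearest-gallery distance is one plausible reading of the median operator described above.

```python
import numpy as np

def sequence_distance(probe, gallery):
    """Median over the probe's feature vectors of the distance to the
    nearest gallery vector; the median is more outlier-robust than the min."""
    d = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=-1)
    return float(np.median(d.min(axis=1)))
```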
Three metrics (the Rank-1, Rank-5, and Average recognition accuracies) are used to measure the recognition performance. Rank-1 means that the correct subject is ranked as the top candidate, Rank-5 means that the correct subject is ranked among the top five candidates, and Average denotes the recognition accuracy over all the probe sets, that is, the ratio of correctly recognized persons to the total number of persons in all the probe sets. Tables 1 and 2 show the best results obtained by KPCA, KLDA, KLPP, KMFA, KSDA, and MAKSDA. From the experimental results, we can make the following observations. (1) Our proposed MAKSDA algorithm consistently outperforms the KPCA, KLDA, KLPP, KMFA, and KSDA algorithms, which implies that extracting discriminative features from both labeled and unlabeled data and explicitly using the discriminative manifold adaptive kernel function achieves the best gait recognition performance. (2) KPCA obtains the worst performance on the USF HumanID gait database even though it is a kernel-based method. The possible reason is that it is unsupervised and adopts only a data-independent kernel function, which is not necessarily useful for discriminating the gait images of different persons. (3) The average performances of KLDA and KLPP are similar. For some probe sets, KLPP outperforms KLDA, while KLDA is better than KLPP for other probe sets. This indicates that it is hard to determine whether the manifold structure or the class label information is more important, which is consistent with existing studies [37, 38]. (4) KMFA is superior to KLDA and KLPP, which demonstrates that KMFA can effectively utilize the local manifold structure as well as the class label information for gait recognition. (5) The semisupervised kernel algorithms (i.e., KSDA and MAKSDA) consistently outperform the purely unsupervised kernel algorithms (i.e., KPCA and KLPP) and the purely supervised kernel algorithms (i.e., KLDA and KMFA).
This observation demonstrates that semisupervised learning can effectively utilize both labeled and unlabeled data to improve gait recognition performance. (6) Although KSDA and MAKSDA are both nonlinear extensions of SDA via the kernel trick, MAKSDA performs better than KSDA. The main reason can be attributed to the following facts. First, MAKSDA avoids the numerical computation problem because no matrix inverse is computed. Second, KSDA adopts the commonly used data-independent kernel functions, so the nonlinear structure captured by these kernels may not be consistent with the intrinsic manifold structure of the gait images. In contrast, MAKSDA adopts the discriminative manifold adaptive kernel function, so the nonlinear structure captured by this data-adaptive kernel is consistent with the discriminative information revealed by the labeled data as well as the intrinsic manifold structure information revealed by the unlabeled data. (7) MAKSDA obtains the best recognition performance in all the experiments, which implies that both the discriminant and the geometrical information contained in the kernel function are important for gait recognition.
We also conduct an in-depth investigation of the performance of the Gabor-based feature with respect to different parameters, such as the number of scales and orientations of the Gabor filters. In this study, the default setting has 40 Gabor kernel functions from five scales and eight orientations. Using the MAKSDA algorithm, we test the performance of the Gabor-based feature under different parameters, still adopting the nearest neighbor classifier with the distance defined in (39) for fair comparison. The average Rank-1 and Rank-5 recognition accuracies of the Gabor-based feature with different parameters are shown in Table 3. As can be seen, the recognition accuracies using the default parameter setting (i.e., five scales and eight orientations) are unanimously better than those using other parameter settings, which is consistent with recent studies from several other research groups.
The construction of the kernel function is one of the key points in our proposed MAKSDA algorithm. MAKSDA adopts the discriminative manifold adaptive kernel function to capture both the discriminative information and the intrinsic manifold structure information. Of course, other traditional kernel functions, such as the Gaussian kernel, polynomial kernel, and Sigmoid kernel, could be used instead. To illustrate the superiority of our proposed discriminative manifold adaptive kernel function, we test the average Rank-1 and Rank-5 recognition accuracies under different kernel functions. The experimental results are shown in Table 4. As can be seen, our proposed discriminative manifold adaptive kernel function achieves the best performance, while the remaining kernel functions achieve comparable performance to one another. The superiority of the discriminative manifold adaptive kernel is due to the fact that it is a data-dependent kernel, so the nonlinear structure it captures is consistent with the intrinsic manifold structure of the gait images, which has been shown to be very useful for improving learning performance in many previous studies. In contrast, the Gaussian, polynomial, and Sigmoid kernels are all data-independent common kernels, which might not be optimal for discriminating gait images with different semantics. This also demonstrates that simultaneously considering the local manifold structure and the discriminative information is essential when designing kernel SDA algorithms for gait recognition.
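The three data-independent baselines compared in Table 4 can be written down compactly. This is a generic sketch of the standard kernel formulas; the hyperparameter defaults (`sigma`, `degree`, `c`, `alpha`) are illustrative assumptions, since the paper does not list the values used in its experiments.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def polynomial_kernel(X, Y, degree=2, c=1.0):
    # k(x, z) = (<x, z> + c)^degree
    return (X @ Y.T + c) ** degree

def sigmoid_kernel(X, Y, alpha=0.01, c=0.0):
    # k(x, z) = tanh(alpha <x, z> + c)
    return np.tanh(alpha * (X @ Y.T) + c)
```

None of these depends on the data distribution beyond the two points being compared, which is exactly the property the discriminative manifold adaptive kernel is designed to overcome.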
In addition, our proposed MAKSDA algorithm has two essential parameters: the number of nearest neighbors in the intrinsic graph and the number of nearest neighbors in the penalty graph. We empirically set the former to 6 and the latter to 15 in the previous experiments. In this section, we investigate the influence of different choices of these two parameters by varying one while fixing the other. Figures 4 and 5 show the performance of MAKSDA as a function of each parameter in turn. As can be seen, the performance of MAKSDA is very stable with respect to the two parameters; it achieves much better performance than the other kernel algorithms when the intrinsic-graph neighborhood size varies from 4 to 8 and the penalty-graph neighborhood size varies from 10 to 20. Therefore, parameter selection is not a critical problem in our proposed MAKSDA algorithm.
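The two graphs governed by these parameters can be sketched as follows. This is an illustrative construction in the spirit of marginal-Fisher-style graphs, not the paper's exact code: the intrinsic graph links each sample to its nearest same-class neighbors and the penalty graph to its nearest other-class neighbors, with defaults matching the settings used in the experiments (6 and 15).

```python
import numpy as np

def build_graphs(X, y, k_intrinsic=6, k_penalty=15):
    # Returns the Laplacians of the intrinsic graph (same-class neighbors)
    # and the penalty graph (other-class neighbors) used to deform the kernel.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_int = np.zeros((n, n))
    W_pen = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        same = [j for j in order if j != i and y[j] == y[i]][:k_intrinsic]
        diff = [j for j in order if y[j] != y[i]][:k_penalty]
        W_int[i, same] = 1.0
        W_pen[i, diff] = 1.0
    W_int = np.maximum(W_int, W_int.T)     # symmetrize both adjacencies
    W_pen = np.maximum(W_pen, W_pen.T)
    L_int = np.diag(W_int.sum(1)) - W_int  # L = D - W
    L_pen = np.diag(W_pen.sum(1)) - W_pen
    return L_int, L_pen
```

The reported stability over neighborhood sizes 4-8 and 10-20 suggests the deformed kernel is not overly sensitive to the exact sparsity of either graph.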
We have introduced a novel manifold adaptive kernel semisupervised discriminant analysis (MAKSDA) algorithm for gait recognition. It makes use of both labeled and unlabeled gait images to learn a low-dimensional feature space for gait recognition. Unlike traditional kernel-based SDA algorithms, MAKSDA not only avoids the singularity problem by not computing a matrix inverse, but also explores the data-dependent nonlinear structure of the gait images through the discriminative manifold adaptive kernel function. Experimental results on the widely used USF HumanID gait database demonstrate the efficacy of the proposed approach.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by NSFC (Grant no. 70701013), the National Science Foundation for Post-Doctoral Scientists of China (Grant no. 2011M500035), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant no. 20110023110002).
References
- S. Sarkar, P. J. Phillips, Z. Liu, I. R. Vega, P. Grother, and K. W. Bowyer, “The humanID gait challenge problem: data sets, performance, and analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 162–177, 2005.
- J. Han and B. Bhanu, “Individual recognition using gait energy image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 316–322, 2006.
- D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and Gabor features for gait recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700–1715, 2007.
- L. Wang, T. Tan, H. Ning, and W. Hu, “Silhouette analysis-based gait recognition for human identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1505–1518, 2003.
- Z. Liu and S. Sarkar, “Improved gait recognition by gait dynamics normalization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 6, pp. 863–876, 2006.
- R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, Hoboken, NJ, USA, 2nd edition, 2000.
- X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, “Face recognition using Laplacianfaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328–340, 2005.
- S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: a general framework for dimensionality reduction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
- D. Tao, X. Li, X. Wu, and S. J. Maybank, “General averaged divergence analysis,” in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 302–311, October 2007.
- D. Tao, X. Li, X. Wu, and S. J. Maybank, “Geometric mean for subspace selection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 260–274, 2009.
- T. Zhou and D. Tao, “GoDec: randomized low-rank & sparse matrix decomposition in noisy case,” in Proceedings of the 28th International Conference on Machine Learning (ICML '11), pp. 33–40, USA, July 2011.
- D. Cai, X. He, J. Han, and H.-J. Zhang, “Orthogonal Laplacianfaces for face recognition,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3608–3614, 2006.
- D. Xu, Y. Huang, Z. Zeng, and X. Xu, “Human gait recognition using patch distribution feature and locality-constrained group sparse representation,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 316–326, 2012.
- X. Li, S. Lin, S. Yan, and D. Xu, “Discriminant locally linear embedding with high-order tensor data,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 2, pp. 342–352, 2008.
- T. Zhang, D. Tao, X. Li, and J. Yang, “Patch alignment for dimensionality reduction,” IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1299–1313, 2009.
- D. Tao and L. Jin, “Discriminative information preservation for face recognition,” Neurocomputing, vol. 91, pp. 11–20, 2012.
- D. Tao, X. Li, W. Hu, S. Maybank, and X. Wu, “Supervised tensor learning,” in Proceedings of the 5th IEEE International Conference on Data Mining (ICDM '05), pp. 450–457, November 2005.
- D. Tao, X. Li, X. Wu, W. Hu, and S. J. Maybank, “Supervised tensor learning,” Knowledge and Information Systems, vol. 13, no. 1, pp. 1–42, 2007.
- Z. Zhang and D. Tao, “Slow feature analysis for human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 436–450, 2012.
- X. Zhu, “Semi-supervised learning literature survey,” Tech. Rep. 1530, Computer Science Department, University of Wisconsin, Madison, Wis, USA, 2008.
- V. N. Vapnik, O. Chapelle, and J. Weston, “Transductive inference for estimating values of functions,” in Advances in Neural Information Processing Systems, pp. 421–427, 1999.
- A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT '98), pp. 92–100, July 1998.
- M. Belkin, P. Niyogi, and V. Sindhwani, “Manifold regularization: a geometric framework for learning from labeled and unlabeled examples,” Journal of Machine Learning Research, vol. 7, pp. 2399–2434, 2006.
- D. Tao, L. Jin, W. Liu, and X. Li, “Hessian regularized support vector machines for mobile image annotation on the cloud,” IEEE Transactions on Multimedia, vol. 15, no. 4, pp. 833–844, 2013.
- D. Cai, X. He, and J. Han, “Semi-supervised discriminant analysis,” in Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV '07), pp. 1–7, Rio de Janeiro, Brazil, October 2007.
- V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
- T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 10, pp. 959–971, 1996.
- C. Liu and H. Wechsler, “Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition,” IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.
- C. Liu, “Gabor-based kernel PCA with fractional power polynomial models for face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.
- H. Li, T. Jiang, and K. Zhang, “Efficient and robust feature extraction by maximum margin criterion,” IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 157–165, 2006.
- T. Zhang, D. Tao, and J. Yang, “Discriminative locality alignment,” in Computer Vision—ECCV 2008, vol. 5302 of Lecture Notes in Computer Science, pp. 725–738, 2008.
- W. Zhang, Z. Lin, and X. Tang, “Learning semi-Riemannian metrics for semisupervised feature extraction,” IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 4, pp. 600–611, 2011.
- V. Sindhwani, P. Niyogi, and M. Belkin, “Beyond the point cloud: from transductive to semi-supervised learning,” in Proceedings of the 22nd International Conference on Machine Learning, pp. 824–831, August 2005.
- B. Schölkopf, A. Smola, and K.-R. Müller, “Nonlinear component analysis as a kernel eigenvalue problem,” Neural Computation, vol. 10, no. 5, pp. 1299–1319, 1998.
- G. Baudat and F. Anouar, “Generalized discriminant analysis using a kernel approach,” Neural Computation, vol. 12, no. 10, pp. 2385–2404, 2000.
- X. He and P. Niyogi, “Locality preserving projections,” in Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS '03), pp. 585–591, 2003.
- D. Xu, S. Yan, D. Tao, S. Lin, and H.-J. Zhang, “Marginal Fisher analysis and its variants for human gait recognition and content-based image retrieval,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2811–2821, 2007.
- M. Sugiyama, “Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis,” Journal of Machine Learning Research, vol. 8, pp. 1027–1061, 2007.