International Journal of Optics


Research Article | Open Access

Volume 2015 |Article ID 763908 | 5 pages | https://doi.org/10.1155/2015/763908

Gait Recognition Using GEI and AFDEI

Academic Editor: Giulio Cerullo
Received: 24 Jun 2015
Revised: 22 Sep 2015
Accepted: 27 Sep 2015
Published: 11 Oct 2015

Abstract

The gait energy image (GEI) preserves both the dynamic and the static information of a gait sequence. The static information includes the appearance and shape of the human body, while the dynamic information includes the variation of frequency and phase. However, the GEI does not consider the time at which each normalized silhouette occurs. To address this problem, this paper proposes the accumulated frame difference energy image (AFDEI), which reflects these temporal characteristics. The fusion of the moment invariants extracted from the GEI and the AFDEI is selected as the gait feature, and gait recognition is performed with a nearest neighbor classifier based on the Euclidean distance. Finally, to verify its performance, the proposed algorithm is compared with GEI + 2D-PCA and SFDEI + HMM on the CASIA-B gait database. The experimental results show that the proposed algorithm outperforms both GEI + 2D-PCA and SFDEI + HMM and meets real-time requirements.

1. Introduction

As one of the biometrics, gait recognition identifies an individual from image sequences that capture the person's walk. Compared with traditional biometrics, gait has several advantages: it is nonintrusive, contact-free, easy to collect, and hard to hide or camouflage. These advantages give gait recognition a wide range of applications in intelligent monitoring and medical diagnosis, and more and more researchers are paying attention to it.

At present, gait recognition techniques can be divided into two categories: model-based [1–3] and silhouette-based [4–8] approaches. Model-based approaches generally establish a static or dynamic prior model; the model parameters are then estimated by matching against the motion sequence and are used to identify individuals. Zeng et al. [1] presented a model-based approach for human gait recognition in the sagittal plane via deterministic learning (DL) theory. Side-silhouette lower-limb joint angles, extracted from the phase portrait of joint angles versus angular velocities, are chosen as the gait feature; the knowledge of the approximated gait system dynamics is stored in constant RBF networks, and a bank of estimators built from these networks accomplishes the recognition. Kovač and Peer [2] proposed a skeleton-model-based gait recognition system that models gait dynamics and eliminates the influence of the subject's appearance on recognition; they also tackled walking speed variation, proposing a space transformation and feature fusion that mitigate its influence on recognition performance. Zhang et al. [3] proposed a model-based approach employing a five-link biped locomotion human model: gait features are extracted from image sequences with the Metropolis-Hastings method, and Hidden Markov Models trained on these feature trajectories perform the recognition. Silhouette-based approaches, in contrast, extract static or dynamic characteristics of the gait silhouette directly from the motion information of the image sequences, without establishing a prior model. Lee et al. [4] computed the binomial distribution of every pixel over a gait cycle and assembled the distributions of all pixels into a gait probability image (GPI).
The symmetric Kullback-Leibler divergence is then used to measure the information-theoretic distance between gait signatures; this method is robust to slight variations in walking speed. Jeong and Cho [5] proposed a gait recognition method based on multilinear tensor analysis: accumulated silhouettes formed from the gait image sequences are described as a tensor, a multilinear tensor analysis computes the core tensor governing the interaction between the factors of the original tensor, and similarity under the Euclidean distance identifies the individual. Li et al. [6] regarded the line quality vector of the frame difference energy image as the gait feature and used a continuous hidden Markov model (CHMM) for recognition. Li and Wu [7] extracted unified Hu moments from the gait sequence as gait features and used an SVM for training and recognition. Liu [8] proposed a feature fusion method: the static characteristics are obtained from the Procrustes mean shape, the Fan-Beam transform is applied to the GEI, the dynamic characteristics are obtained with two-dimensional principal component analysis to reduce the dimensionality of the feature space, and the two features are fused to produce the final recognition result. Since the GEI does not consider the time at which each normalized silhouette occurs, this paper proposes a new class energy image, the accumulated frame difference energy image (AFDEI). The fusion of the moment invariants extracted from the GEI and the AFDEI is selected as the gait feature, and the individual is recognized with a nearest neighbor classifier based on the Euclidean distance.

2. Gait Feature Extraction and Recognition

This paper combines the gait energy image (GEI) with the accumulated frame difference energy image (AFDEI). Moment invariants are extracted from both the GEI and the AFDEI and fused into the gait feature, and a nearest neighbor classifier then performs the gait recognition.

2.1. Gait Energy Image (GEI)

The gait energy image reflects the gait sequence of one cycle in a single energy image using the weighted average method. The gait sequences in a cycle are first processed to align the binary silhouettes. If the gait cycle image sequence is $B_t(x, y)$, $t = 1, 2, \ldots, N$, the gait energy image can be calculated by the following formula:

$$G(x, y) = \frac{1}{N} \sum_{t=1}^{N} B_t(x, y), \tag{1}$$

where $B_t(x, y)$ is the $t$-th binary silhouette of the gait cycle image sequence, $N$ is the number of frames in a gait sequence of a cycle, and $t$ is the frame number. Figure 1(a) shows the gait images of a cycle and Figure 1(b) shows the corresponding GEI.

The color or luminance of the pixels in the figure indicates how much the corresponding body parts move while the person is walking. White pixels represent parts that move only slightly, such as the head and trunk; gray pixels represent parts that move significantly, such as the legs and arms. The gait energy image thus retains both the static and the dynamic characteristics of human walking, while greatly reducing the amount of computation in image processing.
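The GEI is simply the pixelwise mean of the aligned binary silhouettes of one cycle. A minimal sketch in NumPy (the function name and the toy data are illustrative, not from the paper):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute the GEI as the pixelwise average of aligned binary silhouettes.

    silhouettes: array of shape (N, H, W) with values in {0, 1},
    one aligned binary silhouette per frame of a gait cycle.
    """
    silhouettes = np.asarray(silhouettes, dtype=np.float64)
    # G(x, y) = (1/N) * sum_t B_t(x, y)
    return silhouettes.mean(axis=0)

# Toy example: three 2x2 "silhouettes"
frames = np.array([[[1, 0], [1, 1]],
                   [[1, 0], [0, 1]],
                   [[1, 0], [1, 1]]])
gei = gait_energy_image(frames)
# Pixels present in every frame -> 1.0; pixels absent in every frame -> 0.0
print(gei)
```

Bright (near-white) values mark nearly static body parts; intermediate gray values mark parts present in only some frames, i.e., the moving limbs.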

2.2. Accumulated Frame Difference Energy Image (AFDEI)

As the description above shows, the GEI does not take time into consideration. In order to improve the accuracy of gait recognition, this paper proposes the accumulated frame difference energy image (AFDEI).

Firstly, the frame difference energy image must be calculated. The frame difference image is obtained by combining the forward frame difference image with the backward frame difference image. Formula (2) gives the forward frame difference image, formula (3) the backward frame difference image, and formula (4) their combination:

$$F_t^{+}(x, y) = \left| B_{t+1}(x, y) - B_t(x, y) \right|, \tag{2}$$

where $F_t^{+}(x, y)$ is the forward frame difference image;

$$F_t^{-}(x, y) = \left| B_t(x, y) - B_{t-1}(x, y) \right|, \tag{3}$$

where $F_t^{-}(x, y)$ is the backward frame difference image;

$$D_t(x, y) = F_t^{+}(x, y) \vee F_t^{-}(x, y), \tag{4}$$

where $D_t(x, y)$ is the frame difference image.

By the weighted average method, the accumulated frame difference energy image, which reflects the time characteristic, is obtained. Formula (5) shows how to calculate the accumulated frame difference energy image:

$$A(x, y) = \frac{1}{M} \sum_{t=1}^{M} D_t(x, y), \tag{5}$$

where $M$ is the number of frame difference images obtained from the information frames.

Because the purpose of the accumulated frame difference energy image is to reflect the time characteristic of human walking, and in order to reduce the amount of computation, the first six frames of a gait cycle are chosen as the information frames from which the frame difference energy images are obtained. Since the AFDEI reflects the time information of the gait pattern, the binary silhouettes are deliberately not aligned, so that the dynamic changes of the gait silhouette are preserved. Figure 2(a) shows the information frames, Figure 2(b) shows the frame difference energy image, and Figure 2(c) shows the accumulated frame difference energy image.

As shown in Figure 2(c), because six frames of the gait sequence are chosen as the information frames, each AFDEI covers the same span of time. The extent of the human body and the brightness of the pixels in the image reflect temporal characteristics such as walking speed, cycle, stride length, and arm movement.
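The frame-difference and accumulation steps can be sketched as follows. The pixelwise-OR combination of forward and backward differences and the uniform weights follow the description above; the array shapes, function names, and toy data are illustrative assumptions:

```python
import numpy as np

def frame_difference_images(silhouettes):
    """Frame difference image D_t for each interior frame.

    silhouettes: (N, H, W) binary array. For t = 1..N-2:
      forward  F+_t = |B_{t+1} - B_t|
      backward F-_t = |B_t - B_{t-1}|
      D_t = F+_t OR F-_t  (pixelwise union, an assumption here)
    """
    b = np.asarray(silhouettes, dtype=np.int8)
    fwd = np.abs(b[2:] - b[1:-1])
    bwd = np.abs(b[1:-1] - b[:-2])
    return np.logical_or(fwd, bwd).astype(np.float64)

def afdei(silhouettes, n_frames=6):
    """Accumulated frame difference energy image: uniform average of the
    frame difference images of the first n_frames information frames."""
    d = frame_difference_images(silhouettes[:n_frames])
    return d.mean(axis=0)

# Toy data: eight random 4x4 binary "silhouettes"
rng = np.random.default_rng(0)
frames = (rng.random((8, 4, 4)) > 0.5).astype(np.int8)
a = afdei(frames)
print(a.shape)
```

Note that, unlike the GEI, the silhouettes fed to this computation are not aligned, so the spatial spread of bright pixels in the result encodes how far the body moves over the six information frames.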

2.3. Feature Extraction and Fusion
2.3.1. Invariant Moments

M. K. Hu proposed the definition of invariant moments in 1962: based on the central moments, he derived 7 moment invariants that are invariant to rotation, translation, and scale. For a 2D gray image $f(x, y)$, the raw moment $m_{pq}$ and the central moment $\mu_{pq}$ are given by the following formulas:

$$m_{pq} = \sum_{x} \sum_{y} x^{p} y^{q} f(x, y),$$

$$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x, y),$$

where $m_{pq}$ is the raw moment, $\mu_{pq}$ is the central moment, $(\bar{x}, \bar{y})$ is the centroid of the image with $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$, and $p, q = 0, 1, 2, \ldots$.

The normalized central moment is given by the following formula:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\rho}},$$

where $\rho = (p + q)/2 + 1$.

The 7 invariant moments, built from the normalized second-order and third-order central moments, are as follows:

$$\phi_1 = \eta_{20} + \eta_{02},$$
$$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$$
$$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,$$
$$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,$$
$$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right],$$
$$\phi_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),$$
$$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right].$$

Tables 1 and 2 list the 7 invariant moments of the gait energy image and of the accumulated frame difference energy image, respectively, for 10 subjects.


Table 1: The 7 invariant moments of the gait energy image (GEI) for 10 subjects.

Subject   | φ1     | φ2      | φ3      | φ4      | φ5      | φ6      | φ7
People 1  | 5.6978 | 12.1752 | 19.2912 | 21.4845 | 42.0029 | 27.5867 | 42.9403
People 2  | 5.6737 | 12.0661 | 19.0844 | 21.2725 | 41.5830 | 27.3200 | 42.5932
People 3  | 5.6831 | 12.1756 | 19.1241 | 21.3183 | 41.6719 | 27.4184 | 42.6749
People 4  | 5.7001 | 12.1525 | 19.1837 | 21.3632 | 41.7604 | 27.4494 | 42.9425
People 5  | 5.4612 | 11.6566 | 18.4638 | 20.6589 | 40.3637 | 26.5031 | 41.2689
People 6  | 5.5771 | 11.8401 | 18.8310 | 21.0200 | 41.0658 | 26.9412 | 43.1517
People 7  | 5.5253 | 11.7031 | 18.7145 | 20.9205 | 40.8594 | 26.7729 | 42.7540
People 8  | 5.7285 | 12.2278 | 19.1922 | 21.3840 | 41.7933 | 27.5037 | 43.1992
People 9  | 5.7400 | 12.2895 | 19.3175 | 21.5112 | 42.0498 | 27.6624 | 43.2840
People 10 | 5.4819 | 11.6552 | 18.4758 | 20.6750 | 40.3744 | 26.5043 | 42.1441


Table 2: The 7 invariant moments of the accumulated frame difference energy image (AFDEI) for 10 subjects.

Subject   | φ1     | φ2      | φ3      | φ4      | φ5      | φ6      | φ7
People 1  | 5.2087 | 11.3558 | 17.7099 | 19.8934 | 38.9438 | 25.6156 | 39.1890
People 2  | 5.1354 | 11.1477 | 17.6732 | 19.8781 | 38.8320 | 25.4711 | 39.4322
People 3  | 5.0948 | 11.1671 | 17.4012 | 19.5974 | 38.3150 | 25.2106 | 38.6742
People 4  | 5.4593 | 11.8511 | 18.8485 | 21.0651 | 41.2348 | 27.0347 | 41.5007
People 5  | 5.1118 | 11.1024 | 17.5218 | 19.7356 | 38.5254 | 25.2955 | 39.3415
People 6  | 5.2337 | 11.2428 | 18.0426 | 20.2363 | 39.6024 | 25.9055 | 39.9232
People 7  | 5.0243 | 10.8038 | 17.6149 | 19.8094 | 38.7565 | 25.2518 | 39.0545
People 8  | 5.2980 | 11.5294 | 18.1738 | 20.3654 | 39.8461 | 26.1669 | 40.2077
People 9  | 5.1388 | 11.3557 | 17.3186 | 19.5129 | 38.1326 | 25.2105 | 38.6015
People 10 | 4.9771 | 10.7559 | 17.2560 | 19.4504 | 37.9787 | 24.8398 | 38.7457
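The moment computations of Section 2.3.1 can be sketched directly in NumPy. The function below implements the raw, central, and normalized central moments and the 7 Hu invariants; the coordinate convention and the toy image are illustrative assumptions:

```python
import numpy as np

def hu_moments(img):
    """Compute Hu's 7 invariant moments of a 2D gray image."""
    img = np.asarray(img, dtype=np.float64)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    def m(p, q):                      # raw moment m_pq
        return np.sum((x ** p) * (y ** q) * img)

    xc, yc = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)   # centroid

    def mu(p, q):                     # central moment mu_pq
        return np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)

    def eta(p, q):                    # normalized: eta_pq = mu_pq / mu_00^rho
        return mu(p, q) / (m(0, 0) ** ((p + q) / 2 + 1))

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    phi = np.empty(7)
    phi[0] = n20 + n02
    phi[1] = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi[2] = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi[3] = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi[4] = ((n30 - 3 * n12) * (n30 + n12)
              * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
              + (3 * n21 - n03) * (n21 + n03)
              * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi[5] = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
              + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi[6] = ((3 * n21 - n03) * (n30 + n12)
              * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
              - (n30 - 3 * n12) * (n21 + n03)
              * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return phi

# Sanity check: Hu invariants are unchanged by translation
img = np.zeros((32, 32))
img[8:20, 10:18] = 1.0
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))
print(np.allclose(hu_moments(img), hu_moments(shifted)))
```

The tabulated values above are much larger than raw Hu invariants typically are, which suggests a log-scale transform such as $-\ln|\phi_i|$ was applied before tabulation; that scaling is an inference, not stated in the text.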

2.3.2. Feature Fusion

Analysis of the gait pattern shows that the static and dynamic characteristics of a gait sequence are captured by the GEI, while the time characteristics are captured by the AFDEI, so better recognition results can be obtained by combining the two kinds of energy image. Although some dynamic features are also expressed in the AFDEI, it ignores the static characteristics entirely, which would inevitably hurt identification accuracy if used alone. To fully exploit the advantages of multifeature fusion, this paper determined the optimal weights of the AFDEI and GEI features by trial and error over a large number of experiments. In this way, the useful information is retained while the interference of useless information is avoided.
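The weighted fusion and the nearest-neighbor matching described above can be sketched as follows. The weight value, the function names, and the toy feature vectors are hypothetical; the paper determines the actual weight experimentally:

```python
import numpy as np

def fuse(gei_feat, afdei_feat, w=0.6):
    """Weighted concatenation of GEI and AFDEI moment features.

    w weights the GEI part and (1 - w) the AFDEI part; w = 0.6 is an
    illustrative value, not the weight found in the paper."""
    return np.concatenate([w * np.asarray(gei_feat, dtype=np.float64),
                           (1 - w) * np.asarray(afdei_feat, dtype=np.float64)])

def nearest_neighbor(probe, gallery):
    """Index of the gallery feature closest to the probe (Euclidean)."""
    gallery = np.asarray(gallery)
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))

# Toy gallery: three subjects, each a fused 14-D feature (7 + 7 moments)
rng = np.random.default_rng(1)
gallery = [fuse(rng.random(7), rng.random(7)) for _ in range(3)]
probe = gallery[2] + 0.01            # slightly perturbed copy of subject 3
print(nearest_neighbor(probe, gallery))
```

A probe feature is thus assigned the identity of its nearest gallery feature under the Euclidean distance, which is exactly the classification rule used in the experiments.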

3. Experiment and Analysis

In this paper, the CASIA-B database provided by the Institute of Automation, Chinese Academy of Sciences, was used. The database contains 124 subjects; each subject was recorded from 11 viewing angles, with three kinds of gait at each angle: six normal walking sequences, two walking sequences wearing a coat, and two carrying a bag. We choose one view and divide its sequences into three experimental groups: set A contains the normal walking sequences, set B the coat-wearing sequences, and set C the bag-carrying sequences. The principle of gait recognition is shown in Figure 3.

The experiments were run in MATLAB 2012a on a computer with a 2.5 GHz CPU and 4 GB of memory. The proposed method was compared with GEI + 2D-PCA [9] and SFDEI + HMM [10]; Table 3 shows the results. The experimental results show that the proposed algorithm outperforms the other algorithms and meets real-time requirements, and that recognition is greatly improved compared with algorithms that consider only static or only dynamic features. Especially in the coat-wearing and bag-carrying cases, where the shape of the human body is altered, the benefit of the time factor is clearer. However, even slight changes in walking speed influence the recognition result, so the most important future work is to handle the speed variation that degrades recognition performance.


Table 3: Recognition rates of the compared methods.

Method           | Normal | Wearing | Packaging
GEI + 2D-PCA [9] | 80.6%  | 86.7%   | 85.9%
SFDEI + HMM [10] | 87.4%  | 90.7%   | 89.1%
This paper       | 88.7%  | 91.9%   | 89.9%

4. Conclusion

The gait energy image (GEI) preserves the dynamic and static information of a gait sequence; however, it does not take time into consideration. To address this problem, this paper proposed the accumulated frame difference energy image (AFDEI) to reflect the time characteristics. Moment invariants were extracted from the GEI and the AFDEI and then fused, which yields better recognition results than a single feature, and a nearest neighbor classifier was used to perform the gait recognition. To verify the proposed algorithm, experiments were carried out on the CASIA-B database provided by the Institute of Automation, Chinese Academy of Sciences. The experimental results show that the proposed algorithm achieves a high recognition rate, which has reference value for the study of gait recognition.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous reviewers who made constructive comments. This work was supported by the National Natural Science Foundation of China (nos. 61203302 and 61403277) and by the Tianjin Research Program of Application Foundation and Advanced Technology (no. 14JCYBJC18900).

References

  1. W. Zeng, C. Wang, and Y. Li, “Model-based human gait recognition via deterministic learning,” Cognitive Computation, vol. 6, no. 2, pp. 218–229, 2014.
  2. J. Kovač and P. Peer, “Human skeleton model based dynamic features for walking speed invariant gait recognition,” Mathematical Problems in Engineering, vol. 2014, Article ID 484320, 15 pages, 2014.
  3. R. Zhang, C. Vogler, and D. Metaxas, “Human gait recognition at sagittal plane,” Image and Vision Computing, vol. 25, no. 3, pp. 321–330, 2007.
  4. C. P. Lee, A. W. C. Tan, and S. C. Tan, “Gait probability image: an information-theoretic model of gait representation,” Journal of Visual Communication and Image Representation, vol. 25, no. 6, pp. 1489–1492, 2014.
  5. S. Jeong and J. Cho, “A framework for online gait recognition based on multilinear tensor analysis,” Journal of Supercomputing, vol. 65, no. 1, pp. 106–121, 2013.
  6. R. Li, Y. Cheng, and L. Yu, “Gait recognition algorithm based on the line quality vector of FDEI,” Computer Application, vol. 34, no. 5, pp. 1364–1368, 2014.
  7. M. Li and Q. Wu, “Gait recognition based on unified Hu and SUV,” Micro Computer Information, vol. 13, pp. 196–198, 2010.
  8. F. Liu, Research on Multi View Gait Recognition Method Based on Feature Fusion, Northeast Dianli University, 2014.
  9. K. Wang, L. Liu, W. Zhai, and W. Cheng, “Gait recognition based on GEI and 2D-PCA,” Chinese Journal of Image and Graphics, vol. 14, no. 12, pp. 2503–2509, 2009.
  10. Q. Yang and D. Xue, “Gait recognition based on sparse representation and segmented frame difference energy image,” Information and Control, vol. 42, no. 1, pp. 27–32, 2013.

Copyright © 2015 Jing Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

