Research Article  Open Access
Louis Asiedu, Felix O. Mettle, Joseph A. Mensah, "Recognition of Reconstructed Frontal Face Images Using FFTPCA/SVD Algorithm", Journal of Applied Mathematics, vol. 2020, Article ID 9127465, 8 pages, 2020. https://doi.org/10.1155/2020/9127465
Recognition of Reconstructed Frontal Face Images Using FFTPCA/SVD Algorithm
Abstract
Face recognition has gained prominence among the various biometric-based methods (such as fingerprint and iris) due to its noninvasive characteristics. Modern face recognition modules/algorithms have been successful in many application areas (access control, entertainment/leisure, security systems based on biometric data, and user-friendly human-machine interfaces). In spite of these achievements, the performance of current face recognition algorithms/modules is still inhibited by varying environmental constraints such as occlusions, expressions, varying poses, illumination, and ageing. This study assessed the performance of Principal Component Analysis with Singular Value Decomposition using Fast Fourier Transform preprocessing (FFTPCA/SVD) as a face recognition algorithm on left and right reconstructed face images. The study found that the average recognition rates for the FFTPCA/SVD algorithm were 95% and 90% when the left and right reconstructed face images, respectively, were used as test images. The result of the paired sample test revealed that the average recognition distances for the left and right reconstructed face images are not significantly different when FFTPCA/SVD is used for recognition. FFTPCA/SVD is therefore recommended as a viable algorithm for recognition of left and right reconstructed face images.
1. Introduction
Recognizing people by their face images has gained prominence among the various biometric-based methods (such as fingerprint and iris) due to its comparative advantage of being nonintrusive and requiring less cooperation from the subjects under study. This task is easily carried out by humans. The design of machine-based face recognition systems (that mimic humans' recognition prowess) and their underlying algorithms that give optimal classification or recognition rates, however, has been and continues to be challenging, especially when face images are obtained under uncontrolled environments (poor illumination conditions, varying poses, expressions, and occlusions) [1]. Thus, there is a growing interest in this field of research.
In the case of partially occluded faces (resulting from the wearing of masks and sunglasses, blockage by external objects, images captured at an angle, etc.), occlusion-insensitive, local matching, and reconstruction methods have been used for recognition [2]. Performing face recognition using half-face images can be considered a special case of partially occluded faces, where either the left or right face is occluded and segmented away and the remaining (nonoccluded) half face is used for recognition [3]. Bilateral symmetry is a property of many natural objects, including the human face [4]. Leveraging this property, the performances of holistic face representation-based algorithms have been evaluated on the left, right, and average half faces based on symmetry scores [5, 6].
Singh and Nandi [6] applied PCA to the full face and to the left and right half faces and measured the performance of their algorithm in terms of recognition rate and accuracy. Their results showed no difference in recognition rates between the left and right half faces but higher accuracy for the left half face. They also reported no difference in accuracy between the full face and the half faces. However, the recognition rate for half faces was half that for full faces.
Harguess and Aggarwal [5] evaluated the recognition rate of full faces and average half faces using eigenfaces and the nearest neighbor as a classifier. They reported a significantly higher recognition rate using the average half face for both men and women compared to the full face. Asiedu et al. [7] applied the PCA/SVD algorithm on full faces under varying facial expressions. They concluded that the algorithm was most consistent and efficient under varying expressions and achieved appreciable performance with an average recognition rate of 92.86%.
Avuglah et al. [8] also applied the FFTPCA/SVD algorithm on face images under angular constraints and statistically evaluated the performance of the algorithm. They found that the algorithm perfectly recognizes head tilts that are 24° or less. They concluded that the algorithm has an appreciable performance with an average recognition rate of 92.5% in recognition of face images under angular constraints. The question of whether this algorithm performs well on partial and reconstructed frontal face images, however, has not been explored. This paper, therefore, intends to assess the performance of the FFTPCA/SVD algorithm on partial and reconstructed frontal face images based on bilateral symmetry.
2. Materials and Methods
2.1. Source of Data
Frontal face images from the Massachusetts Institute of Technology (MIT) (2003-2005) database and the Japanese Female Facial Expressions (JAFFE) database were used to benchmark the face recognition algorithm. The train-image database contains twenty frontal face images: ten 0° straight-pose images from the MIT (2003-2005) database and ten neutral-pose face images from JAFFE. The images captured into the train-image database are denoted as train images and are used to train the algorithm.
Twenty images reconstructed from the half-face images (created through vertical segmentation) of the train images were captured into the test-image database. The images captured into the test-image database are called the test images and are used for testing the recognition algorithm.
To keep the data uniform, the captured images were digitized into grayscale precision and resized to common dimensions, and the data types were changed into double precision for preprocessing. This made the images (matrices) conformable and facilitated easy computation. Figures 1 and 2 show subjects in the train-image database.
2.2. Image Reconstruction
The left segmented half-face images were reconstructed using the following steps: (1) rotate the left segmented half-face image through 270° and denote it as $L_{270}$; (2) rotate the left segmented half-face image through 180° and denote it as $L_{180}$; (3) concatenate $L_{270}$ and $L_{180}$ to form the reconstructed image $L_r = [L_{270} \mid L_{180}]$.
The right segmented half-face images were reconstructed analogously: (1) rotate the right segmented half-face image through 270° and denote it as $R_{270}$; (2) rotate the right segmented half-face image through 180° and denote it as $R_{180}$; (3) concatenate $R_{270}$ and $R_{180}$ to form $R_r = [R_{270} \mid R_{180}]$.
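Interpreted literally, the steps above can be sketched with NumPy. This is a sketch under assumptions: the half-face arrays are taken as square so the two rotated copies are conformable for side-by-side concatenation, and the function name and concatenation axis are illustrative, not from the study.

```python
import numpy as np

def reconstruct_from_half(half):
    """Apply the three reconstruction steps from the text: rotate the
    half-face through 270 deg and 180 deg, then concatenate the two
    rotated copies side by side."""
    rot270 = np.rot90(half, k=3)  # step 1: 270 deg rotation
    rot180 = np.rot90(half, k=2)  # step 2: 180 deg rotation
    # Step 3: concatenate the two rotated halves into one image
    return np.concatenate([rot270, rot180], axis=1)
```

For a 3×3 half-image this yields a 3×6 reconstructed image, i.e., a full-width face of the same height.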
Figure 3 contains the original full images, the left and right half-images, and their reconstructed images used in the test-image database.
2.3. Research Design
Figure 4 shows a design of the recognition module.
2.4. Preprocessing
Preprocessing is an important phase of image processing in which the quality of the images is enhanced. The image acquisition process introduces various forms of noise. Preprocessing techniques are used to denoise the images, making them better conditioned for recognition.
2.4.1. Fast Fourier Transform
Fast Fourier Transform (FFT) was adopted as the noise reduction mechanism. According to Glynn [9], the FFT algorithm reduces the computational burden of the transform from $O(N^2)$ to $O(N \log N)$ arithmetic operations.
The DFT of a column vector $f_j$ is represented mathematically as
$$F_j(u) = \sum_{x=0}^{N-1} f_j(x)\, e^{-i 2\pi u x / N}, \quad u = 0, 1, \ldots, N-1,$$
where $N$ is the length of the column and $f_j$ is the $j$th column of the image matrix, $j = 1, 2, \ldots, n$ [8].
Due to the Gaussian nature of illumination variations, a Gaussian filter is adopted for filtering the face images after the discrete Fourier transformation. After filtering, the inverse discrete Fourier transformation (IDFT) was performed to reconstruct the images into their original forms. The IDFT is given by
$$f_j(x) = \frac{1}{N} \sum_{u=0}^{N-1} F_j(u)\, e^{i 2\pi u x / N}, \quad x = 0, 1, \ldots, N-1.$$
The real components of the inverse transformations are extracted for the feature extraction stage whereas the imaginary component is discarded as noise. Figure 5 shows the stages in FFT preprocessing of an image (Avuglah et al. [8]).
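A minimal sketch of this preprocessing chain with NumPy, assuming a Gaussian low-pass filter applied in the centred frequency domain; the filter width `sigma` is an illustrative value, not one reported in the study.

```python
import numpy as np

def fft_denoise(img, sigma=30.0):
    """DFT, Gaussian filtering in the frequency domain, then IDFT;
    the real part is kept and the imaginary residue discarded."""
    F = np.fft.fftshift(np.fft.fft2(img))      # centred 2-D DFT
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2   # squared distance from centre
    G = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian low-pass filter
    filtered = np.fft.ifft2(np.fft.ifftshift(F * G))
    return np.real(filtered)                   # imaginary part treated as noise
```

A very large `sigma` leaves the image essentially unchanged, since the filter then passes all frequencies.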
2.5. Feature Extraction
The face image space is very large, so its storage requires reducing the dimensions of the original images using feature extraction methods. This study adopted Principal Component Analysis (PCA), proposed by Kirby and Sirovich [10], for feature extraction. PCA extracts the most significant components, that is, those components which are more informative and less redundant, from the original data. According to Shlens [11], PCA computes the most meaningful basis to re-express a noisy, garbled dataset. The rationale behind this new basis is to filter out the noise and reveal hidden dynamics in the dataset.
2.6. Implementation of FFTPCA/SVD Algorithm
As noted earlier, the images used were extracted from the Massachusetts Institute of Technology (MIT) (2003-2005) database and the Japanese Female Facial Expressions (JAFFE) database. The study considered subjects captured under a 0° pose from the MIT database and neutral facial expressions from JAFFE. Twenty face images, ten each from the MIT database and JAFFE, were used for training, and their reconstructions from the left and right segmented half-images were used for testing the recognition algorithm.
The FFTPCA/SVD algorithm was used to train the image database. Unique face features of the training set are extracted and stored in memory during the training phase.
Now, given the sample $X = \{x_1, x_2, \ldots, x_M\}$, whose elements are the vectorised forms of the individual images in the study, the mean-centering preprocessing mechanism is performed by subtracting the mean image from the individual images under study. The mean image is given by
$$\bar{x} = \frac{1}{M} \sum_{i=1}^{M} x_i,$$
where $M$ is the number of train images and each $x_i$ has length $N$ (the number of pixels in the image data).
According to Avuglah et al. [8], the variance-covariance matrix of the image space is given as
$$C = \frac{1}{M} A A^{T},$$
where the mean-centred matrix is $A = [\,x_1 - \bar{x},\; x_2 - \bar{x},\; \ldots,\; x_M - \bar{x}\,]$.
The eigenvalues and their corresponding eigenvectors are extracted from the singular value decomposition (SVD) of the matrix $C$. This decomposes the covariance matrix into two orthogonal matrices $U$ and $V$ and a diagonal matrix $\Lambda$:
$$C = U \Lambda V^{T},$$
where $u_k$ is the $k$th column vector of $U$.
From the training set, the principal components are extracted as $\omega_k = u_k^{T}(x_i - \bar{x})$ and $\Omega_i = [\omega_1, \omega_2, \ldots, \omega_M]^{T}$, $i = 1, 2, \ldots, M$.
When a new face (test image) $x_t$ is passed through the recognition module, its unique features are extracted as $\omega_k^{t} = u_k^{T}(x_t - \bar{x})$ and $\Omega_t = [\omega_1^{t}, \omega_2^{t}, \ldots, \omega_M^{t}]^{T}$.
The recognition distances (Euclidean distances) are computed as
$$d_i = \lVert \Omega_t - \Omega_i \rVert, \quad i = 1, 2, \ldots, M.$$
The minimum Euclidean distance, $d^{*} = \min_i d_i$, $i = 1, 2, \ldots, M$, is chosen as the recognition distance.
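The training and recognition steps above can be sketched compactly in NumPy. This is a sketch, not the study's implementation: the function names are illustrative, and the SVD is taken of the mean-centred matrix $A$ directly, which yields the same eigenvectors $U$ as decomposing the covariance matrix.

```python
import numpy as np

def train(images):
    """Vectorise the train images (one column per face), mean-centre,
    and project onto the eigenvectors obtained by SVD."""
    X = np.column_stack([im.ravel().astype(float) for im in images])
    mean = X.mean(axis=1, keepdims=True)
    A = X - mean                              # mean-centred matrix
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    weights = U.T @ A                         # features of each train image
    return mean, U, weights

def recognise(test_im, mean, U, weights):
    """Project a test image into the same eigenspace and return the
    index of the train image at minimum Euclidean distance."""
    w = U.T @ (test_im.ravel().astype(float)[:, None] - mean)
    d = np.linalg.norm(weights - w, axis=0)   # recognition distances
    return int(np.argmin(d)), float(d.min())
```

Passing a train image back through `recognise` returns its own index with an essentially zero recognition distance, which is the behaviour the minimum-distance rule above prescribes.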
3. Results and Discussion
Figures 6 and 7 present the recognition matches and distances of the left and right reconstructed face images. It can be seen in Figure 6 that there were two mismatches when the right reconstructed images were used as test images for recognition, and one mismatch when the left reconstructed images were used. All recognitions on the MIT (2003-2005) database gave correct matches, as is evident from Figure 7.
3.1. Numerical Evaluations
The average recognition rate was the numerical assessment criterion used in this study. The average recognition rate, $R$, of an algorithm is defined as
$$R = \frac{1}{n} \sum_{i=1}^{n} \frac{r_i}{N} \times 100\%,$$
where $n$ is the number of experimental runs, $r_i$ is the number of correct recognitions in the $i$th run, and $N$ is the total number of faces being tested in each run [7]. The average error rate, $E$, is defined as $E = 100\% - R$.
Using the left reconstructed face images as test images, the total number of correct recognitions for the study algorithm is 19, the total number of experimental runs is $n = 1$, and the total number of images in a single experimental run is $N = 20$. The average recognition rate of the study algorithm (FFTPCA/SVD) is therefore
$$R = \frac{19}{20} \times 100\% = 95\%,$$
and the average error rate is $E = 100\% - 95\% = 5\%$.
Using the right reconstructed half-face images as test images, the total number of correct recognitions is 18, so the average recognition rate of the study algorithm (FFTPCA/SVD) is
$$R = \frac{18}{20} \times 100\% = 90\%,$$
and the average error rate is $E = 100\% - 90\% = 10\%$.
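The rate arithmetic can be checked with a short computation; the correct-recognition counts (19 for the left and 18 for the right reconstructed images, out of 20) follow from the rates reported in this study, and the helper name is illustrative.

```python
# Average recognition rate R = (1/n) * (r/N) * 100 and error rate
# E = 100 - R, with n = 1 experimental run and N = 20 test images.
def avg_recognition_rate(correct, runs=1, per_run=20):
    return 100.0 * correct / (runs * per_run)

left_rate = avg_recognition_rate(19)    # left reconstructed test images
right_rate = avg_recognition_rate(18)   # right reconstructed test images
left_error = 100.0 - left_rate
right_error = 100.0 - right_rate
```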
3.2. Statistical Evaluation
The paired sample test is usually used to evaluate measurements of the same individuals/units recorded under different environmental conditions. The paired responses may then be analysed by computing their differences, thereby eliminating much of the influence of extraneous unit-to-unit variation (Johnson and Wichern [12]). Let $X_{1i}$ denote the recognition distance recorded using the left reconstructed images as test images and $X_{2i}$ denote the recognition distance using the right reconstructed half-face images as test images for the $i$th individual; then the paired differences $d_i = X_{1i} - X_{2i}$ should reflect the differential effects of the treatments. Given that the differences $d_i$, $i = 1, 2, \ldots, n$, are independent observations from an $N(\delta, \sigma_d^2)$ distribution, the statistic
$$t = \frac{\bar{d} - \delta}{s_d / \sqrt{n}},$$
where $\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i$ and $s_d^2 = \frac{1}{n-1}\sum_{i=1}^{n} (d_i - \bar{d})^2$, has a $t$ distribution with $n - 1$ degrees of freedom. Consequently, an $\alpha$-level test of the hypothesis
$H_0\colon \delta = 0$ (mean difference of recognition distances from left and right reconstructed images is zero) against
$H_1\colon \delta \neq 0$
is conducted by comparing $|t|$ with $t_{n-1}(\alpha/2)$, the upper $100(\alpha/2)$th percentile of the $t$ distribution with $n - 1$ degrees of freedom. A $100(1-\alpha)\%$ confidence interval for the mean difference in recognition distance is constructed as
$$\bar{d} \pm t_{n-1}(\alpha/2)\, \frac{s_d}{\sqrt{n}}.$$
This test is sensitive to the assumption that the paired differences come from a univariate normal population. The Shapiro-Wilk test on the observed differences gave a test statistic of 0.980 with a p value of 0.933. This shows that the distribution of the observed differences is consistent with the expected (normal) distribution. The plot shown in Figure 8 confirms that the observed differences are normally distributed.
The paired sample correlation between the distances for the left and right reconstructed face images is 0.858 with a p value of 0.000. This indicates that there exists a strong positive linear relationship between the Euclidean distances for the left and right reconstructed face images, and the p value of 0.000 shows that this relationship is significant. The results of the paired sample test are shown in Table 1.

From Table 1, the average difference between the recognition distances for the left and right reconstructed face images (LRI and RRI, respectively) is 42.5865. The test statistic from the paired sample test is 0.928, corresponding to a p value of 0.365. It can be inferred from the p value that there is no significant difference between the average recognition distances for the left and right reconstructed face images. This means that the reconstructed images have the same average recognition distances at the 5% level of significance when used as test images.
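The paired-sample statistic used above can be sketched directly in NumPy. The differences below are illustrative values, not the study's recognition distances, and the function name is an assumption.

```python
import numpy as np

def paired_t(d):
    """Paired-sample t statistic: d holds the per-subject differences
    between the left and right recognition distances. Returns the
    statistic and its degrees of freedom (n - 1)."""
    d = np.asarray(d, dtype=float)
    n = d.size
    dbar = d.mean()         # mean difference
    sd = d.std(ddof=1)      # sample standard deviation of the differences
    return dbar / (sd / np.sqrt(n)), n - 1
```

For the study's twenty subject pairs, $|t|$ would be compared with the upper 2.5% point of the $t$ distribution with 19 degrees of freedom at the 5% level.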
4. Conclusion and Recommendation
The average recognition rates for the FFTPCA/SVD algorithm were 95% and 90% when left and right reconstructed face images are used as test images, respectively. It could be inferred from the above results that the recognition algorithm has relatively higher performance when left reconstructed images are used as test images. The statistical assessment revealed that there is no significant difference between the average recognition distances for the left and right reconstructed images.
It can therefore be concluded that the average recognition distance for left reconstructed images is not significantly different from the average recognition distance for right reconstructed images when FFTPCA/SVD is used for recognition. These results are consistent with the findings of Singh and Nandi [6]. It is worthy of note that apart from the numerical evaluations which are mostly adopted in literature, this study used a more datadriven approach or a statistical approach to also assess the performance of the recognition algorithm on left and right reconstructed face image databases. The performances of FFTPCA/SVD on both databases are viable. FFTPCA/SVD is therefore recommended for recognition of left and right reconstructed face images.
Data Availability
The image database supporting this research article is from previously reported studies and datasets, which have been cited. The processed data are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there is no conflict of interest.
References
 M. Turk and A. Pentland, "Face recognition using eigenfaces," pp. 586-587.
 X. Wei, Unconstrained Face Recognition with Occlusions, Ph.D. thesis, University of Warwick, 2014, http://go.warwick.ac.uk/wrap/66778.
 H. Jia and A. M. Martinez, "Support vector machines in face recognition with occlusions," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 136-141, Miami, FL, USA, 2009.
 D. O'Mara and R. Owens, "Measuring bilateral symmetry in digital images," in Proceedings of Digital Processing Applications (TENCON'96), pp. 151-156, Perth, WA, Australia, 1996.
 J. Harguess and J. K. Aggarwal, "Is there a connection between face symmetry and face recognition?" in CVPR 2011 Workshops, pp. 66-73, 2011.
 A. K. Singh and G. C. Nandi, "Face recognition using facial symmetry," pp. 550-554.
 L. Asiedu, A. Adebanji, F. Oduro, and F. O. Mettle, "Statistical evaluation of face recognition techniques under variable environmental constraints," International Journal of Statistics and Probability, vol. 4, pp. 93-111, 2015.
 R. K. Avuglah, L. Asiedu, E. N. Nortey, and F. N. Yirenkyi, "Recognition of face images under varying head poses using FFTPCA/SVD algorithm," Far East Journal of Mathematical Sciences (FJMS), vol. 103, no. 11, pp. 1769-1788, 2018.
 E. Glynn, Fourier Analysis and Image Processing, Scientific Programmer, Bioinformatics, Stowers Institute for Medical Research, 2007.
 M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
 J. Shlens, "A tutorial on principal component analysis: derivation, discussion and singular value decomposition," 2003.
 R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, 5th edition, Prentice Hall, Upper Saddle River, NJ, 2002.
Copyright
Copyright © 2020 Louis Asiedu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.