Louis Asiedu, Bernard O. Essah, Samuel Iddi, K. Doku-Amponsah, Felix O. Mettle, "Evaluation of the DWTPCA/SVD Recognition Algorithm on Reconstructed Frontal Face Images", Journal of Applied Mathematics, vol. 2021, Article ID 5541522, 8 pages, 2021. https://doi.org/10.1155/2021/5541522
Evaluation of the DWTPCA/SVD Recognition Algorithm on Reconstructed Frontal Face Images
Abstract
The face is the second most important biometric part of the human body, next to the fingerprint. Recognition of a face image with partial occlusion (a half image) is an intractable exercise, as occlusions degrade the performance of the recognition module. To this end, occluded images are sometimes reconstructed or completed with an imputation mechanism before recognition. This study assessed the performance of the principal component analysis with singular value decomposition algorithm, using the discrete wavelet transform as the preprocessing mechanism (DWTPCA/SVD), on a reconstructed face image database. The reconstruction of the half face images leveraged the property of bilateral symmetry of frontal faces. Numerical assessment of the adopted recognition algorithm gave average recognition rates of 95% and 75% when left and right reconstructed face images, respectively, were used for recognition. It was evident from the statistical assessment that the DWTPCA/SVD algorithm gives a relatively lower average recognition distance for the left reconstructed face images. DWTPCA/SVD is therefore recommended as a suitable algorithm for recognizing face images under partial occlusion (half face images). The algorithm performs relatively better on left reconstructed face images.
1. Introduction
The heightened interest of researchers in face recognition is mainly due to the various application areas of efficient and resilient face recognition modules. These include bankcard identification, security monitoring, access control, and surveillance control systems. All these applications are vital for effective and efficient communication and interaction among people.
According to Galton [1], the traditional way of classifying faces is by collecting facial profiles such as curves, finding their norms, and classifying other profiles by their deviation from the norm.
Recent rapid advances in face recognition modules can be attributed to the active development of algorithms, the accessibility of larger face recognition databases, and the statistical or numerical techniques used for evaluating the performance of facial recognition algorithms.
According to Turk and Pentland [2], face recognition algorithms’ performances are restricted by constrained environments. Some of these constraints are illumination, ageing, occlusion of the face, varying head tilt, and facial expressions. In the case of partially occluded faces, occlusion-insensitive, local matching, and reconstruction techniques have been used for identification [3]. A special case of partially occluded faces occurs where either the left or right face is occluded or segmented, and the remaining half (the nonoccluded part) is used for recognition. This can be regarded as performing face recognition using half face images [4].
Singh and Nandi [5] assessed the performance of PCA on full, left half, and right half face images. They reported no difference in recognition rates between the left and right half faces but achieved higher accuracy for the left half face. They also found no difference in accuracy rates between the full face and half face images, although the recognition rate for half faces was half that of the full face images. It was evident from their study that the performance of their algorithm was challenged by intense occlusions.
Asiedu et al. [6] evaluated the performance of principal component analysis with singular value decomposition, using the fast Fourier transform as the preprocessing mechanism (FFTPCA/SVD), on a reconstructed face database. They found that the recognition rates of the FFTPCA/SVD algorithm in the recognition of left and right reconstructed face images were 95% and 90%, respectively.
However, the statistical evaluation of the algorithm’s performance showed that the average recognition distances for the left and right reconstructed face images were not significantly different. They recommended the FFTPCA/SVD face recognition algorithm as viable for the recognition of partially occluded face images, although its performance was somewhat hindered by the occlusion constraint.
The performance of the DWTPCA/SVD face recognition algorithm on varying head tilt/pose was evaluated by Asiedu et al. [7]. Their study revealed that the recognition rate of the DWTPCA/SVD algorithm declines for head poses greater than 20°. The algorithm gave a perfect recognition rate when used to recognize face images captured under angular constraints less than or equal to 20°. They recommended the discrete wavelet transform (DWT) as a viable noise reduction mechanism.
It can be inferred from the above literature and current advances that the performances of face recognition algorithms are still hindered by occlusion of the face images. In this study, we leveraged the property of bilateral symmetry of frontal faces to reconstruct half face images (partially occluded faces) and assessed the performance of the DWTPCA/SVD face recognition algorithm on the reconstructed face image database.
2. Material and Methods
2.1. Data Acquisition
The Massachusetts Institute of Technology (MIT) (2003–2005) frontal face image database and the Japanese Female Facial Expressions (JAFFE) database were adopted to benchmark the face recognition algorithm (DWTPCA/SVD). We selected ten face images captured under a 0° straight pose from the MIT (2003–2005) database and ten neutral-pose face images from JAFFE. Altogether, twenty images were captured into the train image database for training the algorithm.
Twenty images reconstructed from the half face images (created through vertical segmentation) of the train images were captured into the test image database. These images were used to test the recognition algorithm.
The captured images were digitized into grayscale, resized to a uniform dimension, and converted to double precision for preprocessing. This kept the images uniform, made the matrices conformable, and eased computation. The subjects in the train image database are shown in Figures 1 and 2.
2.2. Image Reconstruction
According to Asiedu et al. [6], the left segmented half images can be reconstructed using the following steps: (i) rotate the left segmented half face image through 270° and denote it as L1; (ii) rotate the left segmented half face image through 180° and denote it as L2; (iii) concatenate L1 and L2 to obtain the left reconstructed face image.
Similarly, from Asiedu et al. [6], the right segmented half images can be reconstructed using the following steps: (i) rotate the right segmented half face image through 270° and denote it as R1; (ii) rotate the right segmented half face image through 180° and denote it as R2; (iii) concatenate R1 and R2 to obtain the right reconstructed face image.
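Under the bilateral symmetry assumption, the net effect of the steps above is to complete the face by mirroring the available half about the vertical midline. The sketch below is a minimal NumPy illustration of that idea; the function names are ours, and it is not claimed to be pixel-for-pixel identical to the rotation sequence of Asiedu et al. [6].

```python
import numpy as np

def reconstruct_from_left(left_half):
    """Complete a face from its left half (H x W/2 array) by
    appending the half's mirror image about the vertical midline."""
    return np.hstack([left_half, np.fliplr(left_half)])

def reconstruct_from_right(right_half):
    """Complete a face from its right half by prepending its mirror."""
    return np.hstack([np.fliplr(right_half), right_half])
```

For an upright frontal face, the reconstructed image has the same height as the half image and twice its width.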
In Figure 3, we present a sample of the original full image, left and right half images, and their reconstructed images used as the test images in this study.
2.3. Research Design
The first stage in the recognition process is to preprocess the train images using the adopted preprocessing mechanisms (mean centering and the discrete wavelet transform (DWT)). After preprocessing, unique face features are extracted using the PCA/SVD algorithm and stored in the system’s memory as created knowledge for recognition.
The performance of the study algorithm (DWTPCA/SVD) was assessed on two test image databases: left reconstructed face images (test image database 1) and right reconstructed face images (test image database 2). As stated earlier, samples of these test image databases are shown in Figure 3. The test images are also preprocessed using the mean centering and discrete wavelet transform (DWT) mechanisms.
Their unique features are also extracted using PCA/SVD for recognition. These features are then passed to the classifier where they are matched with the train image features stored in memory. It is important to note that only one test image database (left reconstructed face images or right reconstructed face images) is used in the face recognition module along with the train image database at a time. Figure 4 shows a design of the study recognition module.
2.4. Preprocessing Stage
Preprocessing is an effective method for suppressing unwanted image feature distortion before further processing. It helps to reduce acquired noise and improve the quality of the image for recognition. Face image preprocessing also makes the estimation process simpler and better conditioned for recognition. In this study, we adopt mean centering and the discrete wavelet transform (DWT) as the preprocessing mechanisms. According to Li et al. [8], the DWT can also be used in image encryption applications, although a watermarking algorithm based on the DWT alone is not robust to geometric attacks; refer to Li et al. [8] for a robust double-encrypted watermarking algorithm for image encryption.
2.4.1. Discrete Wavelet Transform (DWT)
Basically, the DWT is a technique that transforms image pixels into wavelet coefficients for wavelet-based compression and coding. According to Kociolek et al. [9], the DWT is a linear transformation that operates on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length. It provides a principled way of downsizing images and captures both frequency and location information.
Now, in the frequency domain, the distribution of frequencies is transformed at each step. Let L denote a low-frequency band and H a high-frequency band, so that one decomposition level yields the four subbands LL, LH, HL, and HH. The LL subband represents the lower-resolution estimate of the original image, while the mid-frequency and high-frequency detail subbands HL, LH, and HH represent horizontal, vertical, and diagonal edge details, respectively [7]. Most of the energy is concentrated in the low-frequency subband, which is why the LL subband (the approximate coefficients of the decomposition) is the only one of the four subbands used to produce the next level of decomposition. Moreover, the LL subband contains only the low-frequency components of the image and is therefore relatively free of noise.
The facial expression features are captured in the HL subband, whereas the face pose features are captured in LH subband (the vertical features of outline). The subband HH is the unstable band in all subbands because it is easily disturbed by noises, expressions, and poses, whereas the subband LL is the most stable subband [7].
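The one-level four-subband split described above can be sketched with Haar filter pairs applied first to the columns and then to the rows. This is a minimal NumPy sketch; note that the orientation labels attached to LH and HL vary between references.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands.
    img: 2-D array with even numbers of rows and columns."""
    x = np.asarray(img, dtype=float)
    # filter the columns: sums (low-pass) and differences (high-pass)
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2.0)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2.0)
    # then filter the rows of each intermediate result
    LL = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2.0)
    LH = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2.0)
    HL = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2.0)
    HH = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2.0)
    return LL, LH, HL, HH
```

Each subband is a quarter-size array; iterating the function on LL yields the next decomposition level, as described above.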
The DWT refers to a set of transforms, each with a different set of wavelet basis functions. The Haar and Daubechies families are the two most common wavelets. Other wavelet families include the Morlet, Coiflets, Biorthogonal, Mexican Hat, and Symlets.
In this study, we adopt the Haar wavelet transform because it is the simplest wavelet transform and efficiently serves the interest of the study. The Haar wavelet applies a pair of low-pass and high-pass filters to the image, first along the columns and then along the rows independently. It is also worth noting that, in the transformation process described above, we rely on the convolution theorem proposed by Wei and Li [10], which states that “a modified ordinary convolution in time domain is equivalent to simple multiplication operations for Offset Linear Canonical Transform (OLCT) and Fourier transform.” According to Asiedu et al. [7], if we consider a vectorized image x = (x_1, x_2, …, x_N) of even dimension N, then the single-level Haar transform decomposes x into two signals of length N/2: the mean (approximation) coefficient vector a, with components a_k = (x_{2k−1} + x_{2k})/√2, and the detail coefficient vector d, with components d_k = (x_{2k−1} − x_{2k})/√2, for k = 1, …, N/2.
We concatenate a and d into another vector y = (a, d), which can be regarded as a linear matrix transformation of x.
We then filter the transformed vector with a Gaussian filter, because Gaussian noise is the default noise acquired due to illumination variations.
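This smoothing step can be sketched as an ordinary convolution with a normalized Gaussian kernel. The sketch below is a minimal 1-D illustration; the kernel radius and sigma are illustrative choices, not values taken from the study.

```python
import numpy as np

def gaussian_smooth(x, sigma=1.0, radius=3):
    """Convolve a 1-D signal with a normalized Gaussian kernel
    to attenuate additive Gaussian noise."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()          # unit gain for constant signals
    return np.convolve(x, kernel, mode="same")
```

Because the kernel is normalized to sum to one, a constant signal passes through (away from the boundaries) unchanged, while high-frequency noise is attenuated.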
The DWT is invertible, so that the original signal can be completely recovered from its DWT representation [11].
The transformed vector y is inverted back to x with components x_{2k−1} = (a_k + d_k)/√2 and x_{2k} = (a_k − d_k)/√2, for k = 1, …, N/2.
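The single-level decomposition and its inverse can be sketched directly from these componentwise formulas (a minimal NumPy sketch for a vector of even length):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar transform: mean (approximation) and
    detail coefficient vectors, each of length N/2."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # mean coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Invert the single-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x
```

Round-tripping any even-length vector through haar_dwt and haar_idwt recovers it up to floating-point error, reflecting the invertibility of the DWT [11].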
Figure 5 shows the DWT cycle using the Haar wavelet.
2.5. Implementation of the DWTPCA/SVD Algorithm
The DWTPCA/SVD algorithm was adopted as the recognition algorithm for this study. We motivate the mathematical foundation of the algorithm as follows. Define the sample matrix X whose columns x_1, x_2, …, x_M are the vectorized forms of the M individual images in the study.
Let μ_i be the mean of the i-th vectorized image; then, the mean centering of the i-th image is given by y_i = x_i − μ_i.
The dispersion matrix of the vectorized image matrix is given as C = YY^T, where Y = [y_1, …, y_M] is the mean-centered matrix.
We now perform a singular value decomposition (SVD) of the dispersion matrix C to obtain the eigenvalues and their corresponding eigenvectors. The SVD yields two orthogonal matrices U and V and a diagonal matrix Σ of singular values, with C = UΣV^T.
The eigenfaces are then obtained from the columns of U, where u_j is the j-th column vector of the orthogonal matrix U.
The principal components extracted from the training set are given as ω_i = U^T y_i for i = 1, …, M. These are stored in memory as created knowledge for recognition.
We now consider test images from the two test image databases (left reconstructed face images and right reconstructed face images) described above (Section 2.3).
When an unknown face (test image) is passed through the recognition system, its unique features are extracted as ω = U^T y, where ω is the principal component (extracted feature) vector of the mean-centered test image y.
The recognition distances are computed as ε_i = ‖ω − ω_i‖ for i = 1, …, M. The minimum Euclidean distance ε* = min_i ε_i is selected as the recognition distance, and the train image attaining it is returned as the closest match.
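The training and matching steps just described can be sketched end to end as follows. This is a minimal NumPy sketch of a PCA/SVD eigenface pipeline: the function names and the one-image-per-row layout are our own conventions, centering is done against the mean face (one common convention), and the DWT preprocessing stage is omitted for brevity.

```python
import numpy as np

def train(images):
    """images: (M, N) array, one vectorized train image per row.
    Returns the mean face, the eigenface basis U, and the
    principal components (projections) of the train images."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    Y = X - mean                                       # mean centering
    U, _, _ = np.linalg.svd(Y.T, full_matrices=False)  # eigenfaces as columns of U
    proj = Y @ U                                       # train image features
    return mean, U, proj

def recognize(test_image, mean, U, proj):
    """Project a test image onto the eigenface basis and return the
    index of the closest train image and its recognition distance."""
    w = (np.asarray(test_image, dtype=float) - mean) @ U
    dists = np.linalg.norm(proj - w, axis=1)   # Euclidean recognition distances
    best = int(np.argmin(dists))
    return best, float(dists[best])
```

Passing a train image back through recognize returns its own index with a recognition distance of (numerically) zero, which is a useful sanity check on an implementation.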
3. Results and Discussion
Figure 6 shows the left and right reconstructed face images, their recognition distances, and the corresponding train images selected as the closest matches in the recognition exercise. The images in Figure 6 are from the MIT (2003–2005) database. It is evident from Figure 6 that the study recognition algorithm (DWTPCA/SVD) correctly recognized all the left reconstructed images from the MIT database. There were, however, two mismatches when the right reconstructed face images from the MIT database were used as test images for recognition.
Similarly, Figure 7 contains the left and right reconstructed face images, their recognition distances, and the corresponding train images selected as the closest matches. The images in Figure 7 are from the Japanese Female Facial Expressions (JAFFE) database. It is seen from Figure 7 that the study recognition algorithm (DWTPCA/SVD) recorded one mismatch when the left reconstructed images from the JAFFE database were used as test images, and three mismatches when the right reconstructed face images were used.
Overall (considering both the MIT and JAFFE databases), the DWTPCA/SVD algorithm recorded two mismatches when the left reconstructed face images were used for recognition and five mismatches when the right reconstructed face images were used as test images.
3.1. Numerical Assessment of the DWTPCA/SVD Algorithm
The main numerical performance metrics adopted for assessing the study algorithm (DWTPCA/SVD) were the average recognition rate and the computational time (runtime of the algorithm). According to Asiedu et al. [6], the average recognition rate R of an algorithm is given as R = (1/n) ∑_{i=1}^{n} (c_i / t) × 100%, where n is the number of times the algorithm is executed (number of experimental runs), c_i is the number of correct matches recorded in the i-th run of the algorithm, and t is the number of test images in a single run of the algorithm.
The average error rate, E = 100% − R, accounts for the proportion of wrong matches (mismatches) when the study algorithm (DWTPCA/SVD) is adopted for recognition using the specified test image databases.
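With the match counts reported above (one run of 10 test images per source database), the rate computation reduces to simple arithmetic:

```python
def average_recognition_rate(correct_per_run, tests_per_run):
    """Average recognition rate (in percent) over n experimental runs:
    R = (1/n) * sum_i (c_i / t) * 100%."""
    n = len(correct_per_run)
    return 100.0 * sum(c / tests_per_run for c in correct_per_run) / n

# Left reconstructed images: 10/10 correct (MIT) and 9/10 correct (JAFFE).
left_rate = average_recognition_rate([10, 9], 10)
# Right reconstructed images: 8/10 correct (MIT) and 7/10 correct (JAFFE).
right_rate = average_recognition_rate([8, 7], 10)
```

These counts reproduce the 95% (left) and 75% (right) average recognition rates reported in the study.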
Now, when the left reconstructed face images are used as test images in the recognition module, the algorithm is executed once per source database, so n = 2; the numbers of correct matches are c_1 = 10 (MIT) and c_2 = 9 (JAFFE), and the number of test images in a single run is t = 10. Therefore, the average recognition rate of the DWTPCA/SVD algorithm is R = (1/2)(10/10 + 9/10) × 100% = 95%, and the average error rate is E = 100% − 95% = 5%.
Similarly, when the right reconstructed face images are used as test images in the recognition module, n = 2 and the numbers of correct matches are c_1 = 8 (MIT) and c_2 = 7 (JAFFE). Here again, the number of test images in a single experimental run is t = 10. The average recognition rate of the study algorithm (DWTPCA/SVD) is then R = (1/2)(8/10 + 7/10) × 100% = 75%, and the average error rate is E = 100% − 75% = 25%.
The average computational time of the algorithm was about 2 seconds for the recognition of 20 face images in a test image database.
3.2. Statistical Evaluation of the DWTPCA/SVD Algorithm
Table 1 contains some summary statistics of the recognition distances shown in Figures 6 and 7. From Table 1, the average recognition distance of the study algorithm when the left reconstructed images are used as test images is 482.0342 with a standard error of 70.5521. Also, the average recognition distance of the study algorithm when the right reconstructed images are used as test images is 529.7775 with a standard error of 87.5666. It can be inferred from Table 1 that the study algorithm (DWTPCA/SVD) performs better when the left reconstructed images are used as test images. This is because a relatively lower recognition distance is always preferred as it signifies a closer match. This is consistent with the results from the numerical assessment of the study algorithm.

Table 2 shows the sample correlation of 0.869 between the recognition distances for the left reconstructed images and the right reconstructed images, together with its corresponding p value (p ≤ 0.001). This signifies a strong positive linear relationship between the two sets of recognition distances, and the p value indicates that the relationship is statistically significant at the 0.001 level.
4. Conclusion and Recommendation
The study used the DWTPCA/SVD face recognition algorithm for recognition on the left and right reconstructed face image databases. Reconstruction of the face images becomes necessary in the presence of partial occlusion. We leveraged the property of bilateral symmetry of the human face to reconstruct the faces from left and right half images.
The results of the recognition exercise revealed that the average recognition rates of the study algorithm (DWTPCA/SVD) are 95% and 75% when the left and right reconstructed face images are used as test images, respectively. It is therefore evident from the numerical assessment that the DWTPCA/SVD face recognition algorithm performs relatively better when the left reconstructed images are used as test images for recognition.
Evidence from the statistical evaluation also shows that the DWTPCA/SVD algorithm gives relatively lower average recognition distance (482.0342 with a standard error of 70.5521) when the left reconstructed face images are used as test images. This makes the left reconstruction of the face images preferred to the right reconstruction of the face images.
The findings of the study are consistent with those of Asiedu et al. [6] and Singh and Nandi [5]. The DWTPCA/SVD algorithm is recommended as a suitable algorithm for face image recognition under partial occlusion (half face images). The algorithm has a remarkable performance when used for recognition of left reconstructed face images.
Data Availability
The image data supporting this study are from previously reported studies and datasets, which have been cited. The processed data are available upon request from the corresponding author.
Conflicts of Interest
The authors declare that there is no conflict of interest.
References
[1] F. Galton, “Personal identification and description,” Journal of the Royal Anthropological Institute of Great Britain and Ireland, vol. 18, pp. 177–191, 1889.
[2] M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[3] X. Wei, Unconstrained Face Recognition with Occlusions [PhD thesis], University of Warwick, 2014.
[4] H. Jia and A. M. Martinez, “Support vector machines in face recognition with occlusions,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 136–141, Miami, FL, USA, 2009.
[5] A. K. Singh and G. C. Nandi, “Face recognition using facial symmetry,” in International Conference on Computational Science, Engineering and Information Technology, pp. 550–554, Coimbatore, India, 2012.
[6] L. Asiedu, F. O. Mettle, and J. A. Mensah, “Recognition of reconstructed frontal face images using FFTPCA/SVD algorithm,” Journal of Applied Mathematics, vol. 2020, Article ID 9127465, 8 pages, 2020.
[7] L. Asiedu, F. O. Mettle, E. N. Nortey, and E. S. Yeboah, “Recognition of face images under angular constraints using DWTPCA/SVD algorithm,” Far East Journal of Mathematical Sciences, vol. 102, no. 11, pp. 2809–2830, 2017.
[8] Y.-M. Li, D. Wei, and L. Zhang, “Double-encrypted watermarking algorithm based on cosine transform and fractional Fourier transform in invariant wavelet domain,” Information Sciences, vol. 551, pp. 205–227, 2021.
[9] M. Kociolek, A. Materka, M. Strzelecki, and P. Szczypiński, “Discrete wavelet transform-derived features for digital image texture analysis,” in International Conference on Signals and Electronic Systems, pp. 99–104, Lodz, Poland, 2001.
[10] D. Wei and Y.-M. Li, “Convolution and multichannel sampling for the offset linear canonical transform and their applications,” IEEE Transactions on Signal Processing, vol. 67, no. 23, pp. 6009–6024, 2019.
[11] A. Bultheel, “Learning to swim in a sea of wavelets,” Bulletin of the Belgian Mathematical Society - Simon Stevin, vol. 2, pp. 1–45, 1995.
Copyright
Copyright © 2021 Louis Asiedu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.