Systems using biometric authentication offer greater security than traditional textual and graphical password-based systems for granting access to information systems. Although biometric-based authentication has its benefits, it can be vulnerable to spoofing attacks. These vulnerabilities are inherent to any biometric-based subsystem, including face recognition systems. The problem of spoofing attacks on face recognition systems is addressed here by integrating a newly developed image encryption model into the principal component analysis pipeline. The new image encryption model is based on a cellular automaton and Gray Code. The model is integrated into the face recognition system’s authentication pipeline by encrypting the entire ORL faces dataset. For the system to grant authenticity, input face images must be encrypted with the correct key before being classified, since the entire feature database is encrypted with the same key. The face recognition model correctly identified test encrypted faces from an encrypted features database with 92.5% accuracy. Encryption performance was tested on randomly chosen samples from the ORL dataset. The results showed that encrypted and original ORL faces have different histograms and weak correlations. On the tested encrypted ORL face images, NPCR values exceeded 99%, minimum MAE scores exceeded 40, and GDD values exceeded 0.92. The key space is determined by the size of the original scrambling lattice A0 and by the set of unique (k, n) pairs derived from the encryption key variables. In addition, an NPCR test was performed between images encrypted with slightly different keys to test key sensitivity; the NPCR values were above 96% in all cases.

1. Introduction

Face recognition systems identify human faces and differentiate between them by processing and storing visual patterns in visual data [1]. A facial recognition system provides users with a number of advantages, including passive authentication [2], by which authenticity can be established simply by being present. Video surveillance, access control, forensics, and social media are some of the security-related applications of facial recognition [3–11]. According to [12], facial recognition systems proceed through the following stages. First, a preprocessing stage detects faces in the visual input and aligns the region of interest. Second, face features are extracted from the preprocessed input. Third, the extracted features are compared against a database of features to determine whether a face matches. Based on the matching results, a specific target can be verified or a face identified. Figure 1 shows a diagram of the facial recognition process.

In terms of biometric authentication, face recognition shares many advantages and disadvantages with other biometric methods. Biometric authentication generally provides greater security than traditional passwords [13]. Every individual has unique biometric traits, which make biometric forgeries difficult [14], and it also prevents false authentication because the registered person must be present to verify authenticity [15]. For example, a biometric authentication system could be used to protect the integrity of results obtained by studies that involve analysis of medical patterns obtained from samples taken from a predetermined set of subjects such as in the works of [16, 17]. Subjects involved in such studies can be identified and verified with biometric authentication systems before proceeding with subsequent medical analysis procedures thereby ensuring that the subject does belong to the main group under study or any subset of that group that requires special procedures. The disadvantage of biometric authentication systems is that they are vulnerable to attacks involving deep learning and machine learning models that spoof the biometrics [13].

In spoofing attacks, attackers submit falsified biometric traits to gain authenticity [18]. Such attacks include artificial synthesis, wolf attacks [13], and replay attacks [19]. Machine learning and deep learning models are additionally susceptible to adversarial attacks [13] and poisoning attacks [20].

This paper presents a method for preventing spoofing attacks on facial recognition systems by integrating an image encryption model into the process. Preprocessed face images are encrypted with the image encryption model before being used to train and test a face recognition model based on principal component analysis (PCA). The extracted features enrolled in the features database are encrypted, so an input face image must be encrypted with the same key in order to be correctly identified or verified. To succeed, attackers must not only copy and submit images of authenticated individuals but also encrypt them with the correct key, which minimizes the effectiveness of spoofing attacks.

In addition to offering high encryption performance and resistance to brute-force attacks, the image encryption model provides an added layer of security. The image encryption model developed here for use in the recognition process is based on outer totalistic cellular automata (OTCA) and Gray Code. Pixels are substituted with Gray Code, while images are scrambled using CA. Pixel substitution changes pixel values through reversible mathematical operations [21]. Image scrambling moves pixels around the image in order to break the high correlation between adjacent pixels [22]. CA is an excellent choice for image scrambling applications [23], since it can generate complex structures from simple ones. The remainder of this paper is organized as follows: Section 2 reviews related work in image scrambling, Section 3 describes the methodology, Section 4 presents the results, and the final section presents the conclusions.

2. Related Work

CAs consist of an infinite array of discretely updating cells, with each cell changing its state according to a universal rule depending on its present state and the states of its neighbors. According to [24], CA-based image encryption operates directly on the pixels of the image. CA image encryption has several advantages, including ease of implementation and high security [24]. Several CA-based image scrambling methods have been developed [25–30]; they show that CA scrambling can break highly correlated pixels while remaining high-performing and resistant to a variety of attacks.

A proposal in [26] uses CA for watermarking and scrambling. CA rules are studied using fractal box dimensions in order to determine their chaotic characteristics. A specific initial lattice is evolved over a certain number of generations under the selected CA rule, and the image is scrambled according to the evolved lattice. This method of scrambling images was first used for watermarking; with this scheme, watermarked images are more resistant to attacks such as noise, cropping, and JPEG compression. A gray image encryption scheme using 2D CA was developed by [30]. The image is decomposed into eight binary bit-plane images, one per bit, which serve as initial configuration lattices evolved with the B3/S1234 CA rule. As the eight binary lattices are evolved from the eight binary images of the original image, both the values and positions of pixels are changed simultaneously.

A study by [27] investigated how 2D-OTCA rules other than Game of Life perform when scrambling images. By using the Von Neumann neighborhood configuration instead of the Moore neighborhood, the authors reduce the rule space and computation time. Scrambling performance of the OTCA rules is evaluated with the gray difference degree (GDD) across different numbers of generations and boundary conditions. In the proposed method, a random lattice is generated and evolved k times. The subject image is scrambled based on the evolved lattice: starting from an empty lattice, pixels of the original image are copied from the top-leftmost cell onward to the locations of the live cells, and the remaining pixels are then copied in row-major order to the locations corresponding to dead cells. This technique achieved the highest GDD on Rule 171 when compared to other proposed techniques, and it was also much faster in computation time than the other methods.

Image scrambling was achieved using 2D CA in [25]. The authors examined how different configurations, such as the number of evolved generations, neighborhood configuration, boundary conditions, and rules with lambda values close to critical values, affect scrambling performance. The image is scrambled using all the lattices evolved from the initial lattice. The scrambled lattice is initially empty; then, starting with the top-leftmost cell, pixel values from the original subject image are copied to the pixel locations corresponding to the live cells in the first scrambling matrix. For subsequent scrambling matrices, locations that have already been filled are skipped. Pixels corresponding to the dead cells of the scrambling matrix are copied from the original image in row-major order. Compared to scrambling with fewer generations, scrambling with more generations results in a better GDD. The Moore configuration with periodic boundary conditions could also be used to increase GDD. The lambda values tested ranged from 0.20703 to 0.41404. According to the image tests, Rule 224 (Game of Life) achieved the highest GDD value.

A 2D CA image scrambling technique proposed by [25] was modified by [28] to achieve better GDD scrambling. Similarly, scrambling uses all the evolved lattices. A row-major lattice is constructed from the pixels in the original image that correspond to the live cells in the scrambling matrix, and the remaining pixels are copied to the remaining row-major locations. The same procedure is repeated if there are more scrambling matrices. Scrambled lattices are created by applying Rule 224 (Game of Life). The combination of periodic boundary conditions, the Moore neighborhood, and eight generations produced the highest GDD of 0.9551.

According to [31], images can be scrambled with ECA. In this study, scrambling performance was tested on ECA rules of classes 3 and 4. The original images were converted into 1D vectors for scrambling. During the scrambling process, a randomly generated 1D lattice is evolved for k generations. A 1D vector is created by copying pixels from the original image to the locations of live cells on the scrambling lattice. For subsequent scrambling lattices, locations that have already been filled are skipped. The remaining pixels of the original image are then copied to the locations of dead cells, and the 1D vector is transformed back into a 2D matrix. ECA scrambling can achieve GDD values comparable to 2D CA and, in some cases, even higher. Combining Rule 22 with the tested boundary conditions and ten generations gave a high GDD. Compared to the tested class 3 ECA rules (22, 30, 126, 150, and 182), the class 4 rule 110 achieved a higher GDD.

In [29], an image encryption method based on 2D OTCA is proposed. When rules 534 and 816 are applied to the original image, pixel values and locations are changed simultaneously. The method’s robustness is demonstrated by histograms and entropies, and it has a large key space and high key sensitivity. On all test images, the NPCR was about 100%, the entropy was more than 7.2, and the correlation was almost zero. Encrypted images cannot be identified from original images using histogram analysis.

There are many linear techniques for face recognition, but PCA is one of the most widely adopted [32]. According to [33], PCA is used to recognize faces. PCA transforms data linearly into a new coordinate system aligned with the directions of maximum variance [34]. The application of PCA to face recognition is described in [32]. PCA models face images to extract features, creating eigen faces based on eigenvectors. The largest eigenvalues and their corresponding eigenvectors are determined from the covariance matrix of the face data vectors.

Using PCA and ANFIS (adaptive neuro-fuzzy inference system), an efficient pose-invariant face recognition system was developed in [35]. An ANFIS classifier is used to recognize images from a variety of poses using PCA as a feature extractor. Data sets of training images of faces are scored by PCA algorithms to enable classification. Correct recognition rates are greatly improved by using ANFIS.

Based on PCA and logistic regression, [36] proposed a face recognition system. In the face recognition pipeline, PCA reduces the dimensionality of the extracted features, and a logistic regression classifier is used for accurate face recognition. Two datasets were used to evaluate the classification efficiency.

An automatic attendance system was created by [37] to track and record the attendance of individuals within an organization. By eliminating the need to manually take attendance, automated attendance systems enable organizations to optimize their processes. In order to implement the system, a face recognition system is used. As part of the system, Haar cascades are used to detect faces. In order to test the system’s capability to recognize faces, PCA and LDA were applied to the Olivetti dataset.

Few studies have investigated integrating image encryption into the face recognition process; most research instead focuses on improving recognition accuracy or deploying face recognition effectively in a wide variety of applications. This work integrates a new image encryption scheme based on OTCA and Gray Code into the face recognition pipeline, so that the recognition system can correctly identify encrypted faces.

3. Methodology

This section describes the methods and configurations used in the image encryption scheme. We then briefly describe the PCA face recognition algorithm and how the image encryption scheme is integrated into the recognition process.

3.1. Image Encryption Scheme

The encryption scheme consists of Gray Code-based pixel substitution followed by 2D OTCA-based pixel scrambling. In the substitution step, pixel values are replaced with their Gray Code representations. In the scrambling step, a random lattice evolved under the OTCA rule of Conway’s Game of Life is used to scramble the pixels.

3.1.1. Gray Code Pixels Substitution

This phase replaces each pixel at coordinates (i, j) with the integer corresponding to the Gray Code of its value. To generate the Gray Code of a binary string, the first (most significant) bit is kept, and each following bit is obtained by applying an XOR operation to each pair of adjacent bits in the binary representation. Images can be formatted with different bit depths. The bit depth is the number of bits contained in each pixel; therefore, it equals the length n of the binary string used to represent pixels. The higher an image’s bit depth (or the length n of the binary string), the more space (file size) is required to store the image. Note that bit depth does not affect an image’s resolution. Algorithm 1 is applied to the binary pixel values of an image to convert them to their Gray Code representations.

Input: binary string with length n.
Output: Gray Code representation of binary string .

To perform pixel substitution, each pixel of the original image at position (i, j) is replaced by the Gray Code equivalent of its gray level.
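The substitution step can be expressed compactly in code. The sketch below (not the paper’s reference implementation) uses the bitwise identity g = b XOR (b >> 1), which is equivalent to keeping the first bit and XOR-ing each pair of adjacent bits:

```python
def binary_to_gray(b: int) -> int:
    """Gray Code of an integer: keep the MSB, XOR each pair of adjacent bits."""
    return b ^ (b >> 1)

def substitute_pixels(image):
    """Replace every 8-bit grayscale pixel value with its Gray Code equivalent."""
    return [[binary_to_gray(p) for p in row] for row in image]

# Pixel value 5 = 0b101 maps to Gray Code 0b111 = 7.
```

Because the mapping is a bijection on n-bit integers, the substitution is fully reversible, which the decryption phase relies on.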

3.1.2. Image Scrambling with CA

After the Gray Code pixel substitution in phase one, the image is scrambled using an evolved 2D-OTCA lattice. The type of 2D CA used is the outer totalistic cellular automaton (2D OTCA). OTCA rules update a cell based on its current state and the states of its neighboring cells, as described in [23]. Under an OTCA rule, the new state of a cell can be described by a transition function of the form s(i, j, t + 1) = f(s(i, j, t), Σ s(i′, j′, t)), where the sum runs over the cells (i′, j′) in the neighborhood of cell (i, j).

(1) Neighborhood Configuration. Once the OTCA transition function is defined, the neighborhood configuration needs to be specified for any cell in the 2D lattice. The Von Neumann neighborhood scheme and the Moore neighborhood scheme [38] are popular neighborhood schemes, depicted in Figure 2. In the Von Neumann neighborhood, a cell at coordinates (i′, j′) is a neighbor of cell (i, j) if it is adjacent in one of the four directions north, east, west, or south of the central cell (i, j). The range of a cell’s neighborhood can be extended given a radius r: a cell at coordinates (i′, j′) is a neighbor of cell (i, j) at radius r if it satisfies |i′ − i| + |j′ − j| ≤ r.

As for the Moore neighborhood (NM), a cell at coordinates (i′, j′) is a neighbor of cell (i, j) if it is adjacent in one of the four cardinal directions or in a diagonal direction relative to the central cell (i, j). Similarly, the neighborhood range of a cell at coordinates (i, j) can be extended for a given radius r: a cell at coordinates (i′, j′) is a neighbor of cell (i, j) at radius r if it satisfies max(|i′ − i|, |j′ − j|) ≤ r.
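The two membership rules amount to a Manhattan-distance check (Von Neumann) and a Chebyshev-distance check (Moore); a minimal sketch:

```python
def is_von_neumann_neighbor(i, j, i2, j2, r=1):
    """Von Neumann: Manhattan distance at most r (excluding the cell itself)."""
    d = abs(i2 - i) + abs(j2 - j)
    return 0 < d <= r

def is_moore_neighbor(i, j, i2, j2, r=1):
    """Moore: Chebyshev distance at most r (excluding the cell itself)."""
    d = max(abs(i2 - i), abs(j2 - j))
    return 0 < d <= r
```

At r = 1 the diagonal cell (i + 1, j + 1) is a Moore neighbor but not a Von Neumann neighbor, which is exactly the difference between the two schemes.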

(2) Boundary Conditions. As 2D lattices under CA rules are finite, the neighbors of cells at the lattice bounds must be specified. A closed boundary condition (CBC) or a periodic boundary condition (PBC) can be applied to cells at the extremes [40]. In CBC, cells beyond the extreme cells are treated as dead (zero) [41]. In PBC, the cells at the extremes become adjacent to one another, so in a 2D rectangular lattice the leftmost cells are next to the rightmost cells, the top row cells are next to the bottom row, and the corner cells are also adjacent, thus forming a toroid-shaped lattice [25].

(3) Conway’s Game of Life. Conway’s Game of Life (CGL), invented by John Conway in 1970, is the most famous universal automaton [42]. As an OTCA, it describes a cell’s state based on its current state and its eight nearby cells: CGL applies the Moore neighborhood configuration to its OTCA transition function. According to [43], in the CA GoL (Game of Life) rules (CGL and other discovered GoL rules), neighboring cells are cells that directly touch a candidate cell. Therefore, the CGL neighborhood configuration is strictly confined to the Moore neighborhood on a square grid. Other grid shapes such as hexagonal, pentagonal, and triangular grids have been investigated and may or may not have their own sets of discovered GoL rules [44], but CGL does not satisfy the requirements to be a GoL rule on such grids. In CGL, cells can be either alive or dead, and the transition of cells between states is governed by the following rules (see Figure 3 for an example): (i) a live cell remains alive if it has two or three live neighbors; otherwise it dies; (ii) a dead cell becomes alive if it has exactly three live neighbors; otherwise it remains dead.
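One CGL generation under PBC can be sketched with NumPy, where np.roll provides the toroidal wrap-around of the lattice:

```python
import numpy as np

def cgl_step(lattice: np.ndarray) -> np.ndarray:
    """One Conway's Game of Life generation with periodic boundary conditions."""
    # Count the eight Moore neighbors by rolling the lattice in each direction;
    # np.roll wraps around the edges, giving the toroidal (PBC) topology.
    n = sum(np.roll(np.roll(lattice, di, axis=0), dj, axis=1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    # Survival: live cell with 2 or 3 live neighbors. Birth: dead cell with 3.
    return (((lattice == 1) & ((n == 2) | (n == 3))) |
            ((lattice == 0) & (n == 3))).astype(np.uint8)
```

A quick sanity check of the rule: a vertical three-cell “blinker” flips to horizontal after one generation.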

3.1.3. Scrambling Algorithm

In the proposed scrambling algorithm, the original image is first transformed with Gray Code and then scrambled using a 2D lattice evolved with the CGL OTCA rule under PBC. The following sections describe the encryption and decryption steps.

(1) Encryption Process. (1) Convert the original image to its grayscale version I, and then transform every pixel in I to its corresponding Gray Code integer equivalent, yielding I′. (2) Generate a random lattice A0 with exactly the same width and height as I′. Cells of the lattice can be either 1 (alive) or 0 (dead). (3) Apply the CGL OTCA transition function on A0 with the Moore neighborhood and PBC for k generations, yielding Ak. (4) Combine Ak and an intermediate lattice An on an initially empty lattice to form the scrambling lattice Z. (5) Transform I′ into a stack S such that the elements in S from top to bottom are the pixel values of I′ in row-major order. (6) Search Z in column-major order; for every live cell at coordinates (i, j), pop an element from the top of S into the initially empty scrambled image SI at the same coordinates (i, j). After this search is complete, search Z again, this time in row-major order, and for every remaining (dead) cell pop an element from the top of S into SI at the same coordinates.

The encryption key for the algorithm thus consists of the CA rule used for evolving the randomly generated A0, the number of generations k used to yield Ak, and a chosen value n (0 < n < k) determining the lattice An that is combined with Ak to generate the scrambling lattice Z. Generation of the scrambled image SI can be expressed as an encryption function e that maps I′ and Z to SI.
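The encryption steps can be sketched as follows. This is a sketch under assumptions: cgl_step implements one CGL generation with PBC, and the rule combining Ak and An, which is not fully specified here, is assumed to be a cell-wise OR; the paper’s exact combination rule may differ.

```python
import numpy as np

def cgl_step(lattice):
    """One CGL generation with periodic boundary conditions (np.roll wraps)."""
    n = sum(np.roll(np.roll(lattice, di, 0), dj, 1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    return (((lattice == 1) & ((n == 2) | (n == 3))) |
            ((lattice == 0) & (n == 3))).astype(np.uint8)

def scramble(i_prime: np.ndarray, a0: np.ndarray, k: int, n: int) -> np.ndarray:
    """Scramble the Gray-coded image i_prime using lattices evolved from a0.
    The Ak/An combination is assumed to be a cell-wise OR (hypothetical)."""
    lattices = [a0]
    for _ in range(k):
        lattices.append(cgl_step(lattices[-1]))
    z = lattices[k] | lattices[n]              # assumed combination of Ak and An
    stack = list(i_prime.ravel())              # row-major; index 0 is the top
    si = np.zeros_like(i_prime)
    # First pass: live cells of Z in column-major order.
    for j in range(z.shape[1]):
        for i in range(z.shape[0]):
            if z[i, j] == 1:
                si[i, j] = stack.pop(0)
    # Second pass: remaining (dead) cells in row-major order.
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            if z[i, j] == 0:
                si[i, j] = stack.pop(0)
    return si
```

Since the two passes together visit every cell of Z exactly once, the scramble is a pure permutation of the pixels of I′, which is what makes it invertible given the same key.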

(2) Illustration of the Encryption Algorithm. Assume Ak and An are evolved from the same initial lattice A0. According to the algorithm above, Z is then generated as shown in Figure 4, for instance.

Scrambling I′ with Z gives SI, as illustrated in Figure 5, where the pixel values of I′ are represented as different colors.

(3) Decryption Process. The decryption algorithm involves regenerating the scrambling lattice from the provided keys. The steps for decryption are as follows: (1) Generate the scrambling lattice Z from the provided keys. (2) Search Z in column-major order; for every live cell at (i, j), add the pixel SI(i, j) to a stack S. (3) Search Z in row-major order; for every dead cell at (i, j), add the pixel SI(i, j) to S. (4) Reverse S, then pop its elements into an initially empty lattice in row-major order, regenerating I′.

The decryption algorithm can be expressed as a function d that maps SI and Z back to I′.

After obtaining I′, it must be transformed back into the original grayscale image I. Recreating the original binary bits from Gray Code is not as straightforward as the generation process. Since grayscale pixels can assume values ranging from 0 to 255, any value in that range can be expressed in at most 8 bits, and the same is true for its corresponding Gray Code version. Algorithm 2 demonstrates the steps required for converting Gray Code back to the original binary values.

Input: Gray Code string with length n.
Output: Original Binary string
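Algorithm 2 can be implemented as a cumulative XOR over the Gray bits from the most significant bit down; a minimal sketch, shown here with the encoder for a round-trip check:

```python
def binary_to_gray(b: int) -> int:
    """Encoder from phase one: g = b XOR (b >> 1)."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Each original bit is the XOR of the Gray bit and all higher Gray bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

For 8-bit pixels the decoder inverts the encoder exactly, which is what allows the original grayscale image I to be recovered from the decrypted Gray-coded pixels of I′.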
3.2. Face Recognition with PCA

The objective of PCA is to express points in a higher-dimensional space in a lower-dimensional subspace [45]. According to [46], the goals of PCA are extraction of the most important information from data, compression resulting from that extraction, simplification of the data description, and analysis of the structure of observations and variables. The steps for PCA feature extraction with a Euclidean distance classifier are elaborated by [47] as follows: (1) Convert the 2D face image data into a set of vectors as training data {F1, F2, …, FN}. (2) Find the average of the training data: μ = (1/N) Σ Fi. (3) Determine the covariance matrix: C = (1/N) Σ (Fi − μ)(Fi − μ)^T. (4) Find the eigenvectors corresponding to the eigenvalues by solving C·v = λ·v, where λ is an eigenvalue and v is the corresponding eigenvector. (5) Project images into the eigenspace: ω = V^T (F − μ), where V is the matrix of selected eigenvectors. (6) Project the test image as in step 5 and classify it based on the measured distance; this distance measures the similarity between the test image and the faces database. (7) Here the Euclidean distance is used as the classifier for the projected data: d(ω, ωi) = ‖ω − ωi‖.
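The steps above can be sketched with NumPy. As an implementation choice (not the paper’s code), the eigenvectors of the covariance matrix are obtained here via SVD of the centered data, which yields the same principal axes:

```python
import numpy as np

def train_pca(faces: np.ndarray, n_components: int):
    """faces: (N, d) matrix with one flattened face vector per row."""
    mean = faces.mean(axis=0)                       # step 2: average face
    centered = faces - mean
    # Steps 3-4: principal axes = right singular vectors of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T               # step 5: projections
    return mean, eigenfaces, weights

def classify(test_face, mean, eigenfaces, weights, labels):
    """Steps 6-7: nearest training face in eigenspace by Euclidean distance."""
    w = (test_face - mean) @ eigenfaces.T
    return labels[int(np.argmin(np.linalg.norm(weights - w, axis=1)))]
```

The same pipeline applies unchanged whether the face vectors are plain or encrypted; only the input images differ.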

3.3. Encrypted Face Recognition with PCA

PCA is used to recognize encrypted faces: features are extracted with PCA from the encrypted face images used for training and testing. Using encrypted images for face recognition prevents the authentication process from being compromised by credential spoofing. The pipeline of the face recognition scheme is therefore revised to include the OTCA CGL and Gray Code image encryption model. The pipeline of the proposed scheme is shown in Figure 6. First, a face detection and alignment step preprocesses the image. The face image is then encrypted with the same key used to encrypt the stored features. The PCA feature extractor then processes the encrypted image for verification or identification, and classification decisions are made using the Euclidean distance classifier.

4. Results and Discussion

Experiments with the proposed OTCA CGL Gray Code image encryption technique and encrypted-face PCA recognition were implemented in Python 3 on a laptop with 8 GB RAM and an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz running Microsoft Windows 10 Home 64-bit.

The face recognition model uses the ORL faces dataset for training and testing. Training uses 80% of the database, and 20% is used for testing. The ORL dataset is already preprocessed; that is, faces are already detected and aligned, so the preprocessing step is skipped in the model implementation. The faces archive is encrypted with the same key using the proposed OTCA CGL Gray Code image encryption model. Using the same key simplifies key management and reduces the computational time of dataset encryption. Using the same key rather than different keys neither eases nor increases the difficulty of distinguishing encrypted images, owing to the high key sensitivity shown by the NPCR evaluations. However, a single key must be kept secure; otherwise, the integrity of the entire dataset could be compromised.

4.1. Face Archive Encryption

The entire ORL faces database is encrypted with the proposed image encryption method, using a single encryption key for all face images. Figure 7 shows some of the faces encrypted with the proposed method.


The performance of the image encryption scheme is evaluated in terms of histogram, correlation, number of pixels change rate (NPCR), mean absolute error (MAE), and gray difference degree (GDD). In addition, a key analysis of the proposed encryption scheme is performed. Testing is carried out on ORL faces encrypted with the same key as in Figure 7.

4.1.1. Histogram

A histogram shows how pixel intensities are distributed in an image and reveals its statistical characteristics [48–50]. An encrypted image should have a histogram different from that of the original image [51]; having different histograms prevents identification of the original image from its scrambled version. Since Gray Code pixel substitution is applied before scrambling, the histograms of the original image and the scrambled image differ. This eliminates the possibility of identifying an original image from the scrambled image’s histogram, even if a database of original and scrambled images became available to an attacker. Figure 8 compares the histograms of the original and scrambled images for a randomly selected subject from the ORL face dataset.

4.2. Correlation

Correlation indicates the similarity between the original image and its scrambled version. In Table 1, correlation is calculated with Karl Pearson’s formula [29]: r = cov(x, y) / (σx · σy), where cov(x, y) = E[(x − E(x))(y − E(y))], x and y are the pixel values of the original and scrambled images, and σx and σy are their standard deviations.

Correlation coefficient values are in the range of −1 to 1 inclusively. Having 0 correlation is a good indication for encryption robustness as it means there is no correlation between the scrambled image and the original image. Correlation values of plain images are usually closer to 1 (strong positive correlation). Value of −1 means a strong negative correlation in variables.
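A direct implementation of Pearson’s coefficient for two equally sized images:

```python
import numpy as np

def correlation(img1: np.ndarray, img2: np.ndarray) -> float:
    """Karl Pearson correlation coefficient between two images."""
    x = img1.astype(float).ravel()
    y = img2.astype(float).ravel()
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(cov / (x.std() * y.std()))
```

An image correlates perfectly with itself (+1) and with its negative at −1; a well-scrambled image should score near 0 against its original.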

4.2.1. Number of Pixels Change Rate

NPCR (number of pixels change rate) measures the percentage of differing pixels between two encrypted images whose corresponding original images differ in one pixel only [24]. The higher the NPCR value, the more resilient the encryption is against differential attacks [52]. The ideal NPCR value is 99.6094% [53]. NPCR is found by [54–56] NPCR = (Σ D(i, j) / (W × H)) × 100%, where D(i, j) = 0 if SI1(i, j) = SI2(i, j) and D(i, j) = 1 otherwise; here SI1 is the encrypted image of plain image I1 and SI2 is the encrypted image of plain image I2, which differs from I1 in one pixel only, and the same key is used for both encryptions [57]. However, the proposed scrambling algorithm contains no actual diffusion stage. A diffusion stage establishes a dependence between the encrypted image and the original image in a complicated manner, whereby a change in one pixel of the original image changes the encrypted image almost entirely [58]. Therefore, in the NPCR evaluation a slight change is made to the encryption key instead, such that the values of k, n, or both are changed, and NPCR is computed between SIk, the image encrypted with the original key, and SIk′, the image encrypted with the modified key. NPCR is also utilized for finding the change rate between the original image and its scrambled version, as in the work of [29, 59], by computing NPCR between I and SI.

NPCR is thus used both to measure the percentage of pixels that change between the original and encrypted images and to test the sensitivity of the algorithm to slight changes in keys. In the first case, NPCR between the original image and its encrypted version should be large. In the second case, a slight change is made to the encryption key, and two images encrypted from the same original image are tested for differences. Tables 2 and 3 show NPCR values between original and encrypted images and NPCR values for the sensitivity test, respectively.
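Both uses of NPCR reduce to the same computation, the fraction of positions at which two images differ:

```python
import numpy as np

def npcr(img1: np.ndarray, img2: np.ndarray) -> float:
    """Number of pixels change rate, as a percentage of all pixel positions."""
    return float(100.0 * np.mean(img1 != img2))
```

Passing the original and its encrypted version measures scrambling strength; passing two images encrypted under slightly different keys measures key sensitivity.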


4.2.2. Mean Absolute Error

MAE (mean absolute error) is used to determine how different the encrypted image is from the original image [60]. It is calculated as [61] MAE = (1/(W × H)) Σ |SI(i, j) − I(i, j)|, where W and H are the image width and height, SI is the encrypted image, and I is the original image.

The values of MAE lie in the range [0, 2^N − 1], where N is the number of bits per pixel. A higher MAE indicates more difference between the encrypted image and the original image, which is a desirable trait for encryption robustness. Table 4 shows MAE values for randomly selected encrypted images.
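MAE as defined above is a one-liner:

```python
import numpy as np

def mae(encrypted: np.ndarray, original: np.ndarray) -> float:
    """Mean absolute error between encrypted and original images."""
    return float(np.mean(np.abs(encrypted.astype(float) - original.astype(float))))
```

For 8-bit images the maximum possible value is 255, attained when every pixel differs by the full dynamic range.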

4.2.3. Gray Difference Degree

The gray difference degree (GDD) measures the scrambling performance on an original image. Introduced by [26], GDD is calculated using the following steps: (1) For each interior pixel P(i, j), find the gray difference (GD) by GD(i, j) = (1/4) Σ [P(i, j) − P(i′, j′)]², where (i′, j′) ∈ {(i − 1, j), (i + 1, j), (i, j − 1), (i, j + 1)}. (2) Find the average neighborhood GD over all pixels of the original image, E(GD), using the GDs calculated in step 1. (3) Repeat steps 1 and 2 for the scrambled image to obtain E′(GD). (4) Compute GDD = (E′(GD) − E(GD)) / (E′(GD) + E(GD)).
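The four steps can be sketched as below, assuming the four-cell (Von Neumann) neighborhood stated in step 1 and averaging over interior pixels only:

```python
import numpy as np

def mean_gray_difference(img: np.ndarray) -> float:
    """Average gray difference E(GD) over interior pixels (4-neighborhood)."""
    p = img.astype(float)
    h, w = p.shape
    gd = np.zeros((h - 2, w - 2))
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        # Squared difference between each interior pixel and one neighbor.
        gd += (p[1:-1, 1:-1] - p[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]) ** 2
    return float(np.mean(gd / 4.0))

def gdd(original: np.ndarray, scrambled: np.ndarray) -> float:
    """(E'(GD) - E(GD)) / (E'(GD) + E(GD)); undefined if both images are flat."""
    e, e_s = mean_gray_difference(original), mean_gray_difference(scrambled)
    return (e_s - e) / (e_s + e)
```

GDD ranges over [−1, 1]; values close to 1 indicate that scrambling has strongly broken the smoothness of the original image.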

Table 5 shows obtained GDD values for randomly selected subject images from ORL dataset.

4.2.4. Key Analysis

Encryption keys are fundamental components for implementing encryption on subject images, and their resistance to attacks should be high. Encryption keys should have a large key space and high sensitivity [62]. Keys with a larger key space are more resistant to brute-force attacks [63]; resisting brute-force attacks requires the key space to exceed 2^100 [64]. As for the key sensitivity test, a small change in the encryption key should produce a large difference in the generated encrypted image [65]. NPCR is utilized in the key analysis test in a manner similar to the work of [66].

To be effective against brute-force attacks, the key space of the image encryption must be sufficiently large [29]. For the proposed algorithm, the key is composed of an initial lattice A0 of size width × height, the number of generations k, and a value n such that 0 < n < k. Since n is selected based on k, and each cell of the initial lattice A0 can assume one of two states (alive or dead), the key space is 2^(W × H) × |U|, where |U| = k(k − 1)/2 is the number of unique (k, n) pairs. The key space is exceptionally wide, and a large enough value of k (which in turn increases |U|) can be selected to effectively encrypt images of smaller size. At minimum, for an image of size 10 × 10, the key space is at least 2^100, which meets the brute-force resistance limit.
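Under the formula in the text, the key space in bits for a W × H lattice with generation count k is:

```python
import math

def key_space_bits(width: int, height: int, k: int) -> float:
    """log2 of the key space: 2**(W*H) possible initial lattices times
    the k*(k-1)/2 unique (k, n) pairs stated in the text."""
    pairs = k * (k - 1) // 2
    return width * height + math.log2(pairs)
```

Even the minimal 10 × 10 case with k = 2 already reaches the 100-bit brute-force threshold, and the lattice term grows with the image area.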

To test key sensitivity, a randomly selected encrypted subject image is decrypted using Ak and An only; then keys with different values of k and n are used to attempt decryption. The results in Figure 9 show that decryption is only possible with the correct key. Even given A0, decrypting an image with Ak or An alone yields no useful information, and the same holds for different values of k and n. In addition, the NPCR test in Table 3 shows a high change rate between images encrypted with slightly different keys (above 96% at minimum), which demonstrates the high sensitivity of the encryption keys.

4.3. PCA Implementation

The ORL faces database contains 400 images of 40 different subjects, with 10 face images per subject in various poses. Face images are thus classified into 40 classes in the ORL database and labeled accordingly. Eight images from each subject are used to train the PCA model, and the remaining two are reserved for testing.

Training face images are converted to vectors, which are then processed with PCA to reduce the feature dimensionality and generate the principal components that explain most of the variance. The original number of features in the grayscale training face images is 12,544, which is reduced according to a selected number of principal components. Figure 10 plots the number of generated principal components against the explained variance. The first few principal components explain most of the variation in the encrypted facial features, and the variance explained by each subsequent component decreases as their number grows. From Figure 10, 250 principal components are selected, accounting for 96% of the variance in the extracted features. These 250 components are used to create the eigenfaces that distinguish encrypted faces. Figure 11 shows the resulting eigenfaces.
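The PCA step above can be sketched with a minimal SVD-based implementation. Random data stands in for the real encrypted ORL faces (a smaller feature dimension than 12,544 keeps the sketch fast); the selection of the component count by cumulative explained variance mirrors the 96% criterion:

```python
import numpy as np

# Minimal PCA via SVD; random data replaces the encrypted training faces.
rng = np.random.default_rng(1)
X = rng.random((320, 784))               # 40 subjects x 8 flattened images

mean_face = X.mean(axis=0)
Xc = X - mean_face                       # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = (S ** 2) / np.sum(S ** 2)    # per-component variance ratio
cum = np.cumsum(explained)
k = int(np.searchsorted(cum, 0.96)) + 1  # smallest count reaching 96%
eigenfaces = Vt[:k]                      # principal axes ("eigenfaces")
features = Xc @ eigenfaces.T             # projected training features
print(features.shape)                    # (320, k)
```

On the real encrypted ORL features this criterion yields the 250 components reported above; on the random stand-in data the count differs, since the variance structure is different.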

4.3.1. Classification Results

A Euclidean distance classifier is used to classify test encrypted face images into the correct class labels. Table 6 shows the classification results for testing data on extracted features.

Table 6 provides a classification report in which 74 out of 80 classifications were correct; the model thus identified 92.5% of the encrypted faces in the database correctly.
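The Euclidean distance classifier used here can be sketched as a nearest-neighbor lookup in the projected feature space. This is a generic illustration with toy two-dimensional features, not the paper's code:

```python
import numpy as np

def nearest_class(query: np.ndarray, db: np.ndarray, labels: np.ndarray):
    """Assign the label of the database feature vector closest to the
    query in Euclidean distance (minimum-distance classifier)."""
    dists = np.linalg.norm(db - query, axis=1)
    return labels[int(np.argmin(dists))]

# Toy example: two well-separated classes in feature space.
db = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(nearest_class(np.array([4.9, 5.2]), db, labels))   # -> 1
```

In the actual system, `db` would hold the PCA projections of the encrypted training faces and `query` the projection of an encrypted test face.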

In the second test, the same classification was performed using decrypted images. The objective is to demonstrate that correct facial recognition requires the subject image to be encrypted with the correct key; otherwise, the result is faulty. Table 7 shows the classification results for decrypted face images taken from the encrypted images database.

The classification report in Table 7 shows only 3 correct identifications out of 80, an accuracy of only 3.75%. This highlights the purpose of encrypted face recognition with PCA: a method for protecting authentication processes against spoofing attacks, in which identification requires encrypting the input image with the correct key before it can be recognized correctly.

4.3.2. Execution Time

To measure the identification time for a single face, a python3 script was run with a single face query. Using Python's timeit method, identification time was measured over 10,000 iterations, which took 28.0455 seconds in total. The average execution time for a single iteration is therefore 0.0028045 seconds, or 2.8045 milliseconds.
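The timing methodology can be sketched as follows. This is a generic illustration of timing a single-query identification with `timeit`; the random feature database and the distance-based lookup are stand-ins for the real encrypted features and classifier:

```python
import timeit
import numpy as np

# Stand-in encrypted feature database: 320 training faces x 250 components.
rng = np.random.default_rng(2)
db = rng.random((320, 250))
query = rng.random(250)

def identify() -> int:
    """One identification: index of the nearest database feature vector."""
    return int(np.argmin(np.linalg.norm(db - query, axis=1)))

n = 1000  # the paper uses 10,000 iterations; fewer keeps the sketch quick
total = timeit.timeit(identify, number=n)
print(f"avg per query: {total / n * 1000:.4f} ms")
```

Dividing the total by the iteration count, exactly as done above, gives the per-query latency; absolute numbers depend on hardware and will differ from the paper's 2.8045 ms.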

5. Conclusion

In conclusion, biometric authentication systems suffer from vulnerabilities that render them insecure against spoofing attacks, and those vulnerabilities extend to facial recognition systems as well. To address the spoofing issue, a new image encryption model was developed and integrated into the recognition pipeline of the PCA-based face recognition system. The image encryption model was used to encrypt the ORL face dataset used to train and test the model. As a result, correct identification of a face image requires encrypting the input face image with the same key used to encrypt the features database.

Encryption performance was tested on randomly selected encrypted samples from the ORL dataset. Results showed weak correlation, different histograms, NPCR values >99%, MAE scores >40 at minimum, and GDD values exceeding 0.92 on the tested ORL face images. As for the key space, it exceeds the brute-force attack resistivity limit (2^100), since the key space provided by the method depends on the image size. Key sensitivity was tested as well, with an NPCR test performed between face images encrypted with slightly different keys; in all cases, NPCR values were >96%.

After encrypting the ORL faces dataset, 80% of the data was used to train the PCA model and the remaining 20% was reserved for testing. The model achieved 92.5% accuracy in identifying encrypted test images from the encrypted features database. The same test was then repeated with test face images that were fully decrypted. In this second case, the system had an accuracy of only 3.75% in identifying decrypted images against the database of encrypted features. This shows the system's ability to withstand spoofing attacks, as the submitted input image must be encrypted with the correct key before it can be correctly recognized.

Data Availability

Data is available upon reasonable request from the corresponding authors. Please contact Basil Ibrahim at [email protected], or Dr. Eimad Abusham at [email protected].

Conflicts of Interest

The authors declare that there are no conflicts of interest.


Acknowledgments

This research was supported by Sohar University. You may contact the corresponding authors for any inquiries.