Abstract

Biometric identification has proven effective in personal identity verification because of its uniqueness and noninvasiveness. In this research, we apply the detection of biometric information to a remote sensing system for security area monitoring. Our system collects signals from approaching individuals via remote measurement, and both kinds of data are detected to determine the identity. Specifically, the measurement of gait signals and facial images is integrated to improve the detection accuracy and robustness. In addition, the fuzzy association rule (FAR) is employed to analyse and combine the outcomes of the different methods. As such, the signals are integrated and transmitted for further processing and remote identification. Experiments are conducted to demonstrate the capability of the proposed system. As the training data increase, a high detection accuracy of 95.2% is obtained, which makes the system a promising basis for the realization of remote identity verification.

1. Introduction

Automatic verification of an individual's identity has become a practical and creative tool in a wide range of applications, especially for access to highly restricted environments [1]. From the perspective of certification, identity verification means confirming that an identity is real and that the individual claiming the identity is entitled to it [2]. In accordance with the Good Practice Guide (GPG), the verification of personal identity generally starts with collecting information to pick a person out of the population of interest [3]. Rather than using magnetic cards or PIN numbers, current studies pay more attention to more convenient, easy, and remote sensing methods for practical use [4, 5]. Specifically, for places with high security and secrecy demands, such as government agencies, scientific laboratories, archives, or national borders, the need for an optimal remote identity verification method with high accuracy is most pronounced [6]. For these reasons, research is still ongoing to develop identity verification devices and strategies.

Notably, biometrics has been used for decades as one of the strongest methods to identify and authenticate individual identity [7]. Explicitly, biometrics is the technical term for human body measurements and calculations; it refers to metrics related to human characteristics. Biometric technology, which is widespread owing to its high accuracy and user friendliness, is currently employed for recognizing individuals via measurable physiological or behavioural properties [8]. Typically, the physiological properties include the fingerprint [9], iris [10], and face [11], while the behavioural properties include gait [12], signature [13], and voice [14]. By transforming these biometric properties into electric signals, we are able to build systems that use biometrics to describe individuals [15].

Despite the restrictions imposed by daily life circumstances, remote access to biometric parameters has progressed greatly with advances in signal collection and processing devices. Biometric measurement can be attached to real personal activities over a long distance, so that physiological or behavioural traits can be recorded. Nevertheless, one major problem with biometric detection is that the physical appearance of a person may vary with time and environment [16], and a 100% accurate measurement rate cannot be achieved by sensing only a single kind of biometric parameter. For this reason, a possible resolution is to integrate several biometrics into one system. Since multiple biometric parameters are involved in the identity verification, we employ fuzzy association rules (FAR), which are widely used in data mining and decision support applications, for the final determination in this paper. As such, the sensed signals can be applied to remote identification. The measurement devices, which are suitable for daily ambulatory signal collection, are designed to meet this requirement for personal identity verification in security and secrecy areas [17].

In this research, we aim to establish a remote identity verification strategy for high-security monitoring areas based on daily life activities. Concerning the analysis of biometric properties, we measure facial images and gait data while the individual walks towards the target place, owing to their high precision and easy implementation. To this end, the task is addressed by integrating a set of data analysis algorithms for real-time detection and description of individual biometrics.

The remaining part of this paper is organized as follows. Related work on gait analysis, face recognition, and the FAR is reviewed in Section 2. Section 3 illustrates the design of the remote signal acquisition system as well as its working procedure. In Section 4, the measurement mechanism and the data processing methodologies for feature extraction and individual recognition are presented. The working performance of the proposed approach is verified by experiments in Section 5. Concluding remarks are given in Section 6.

2. Related Work

2.1. Gait Analysis

Walking is characterized by gait [18]. Gait analysis was initially proposed for precisely quantifying functional integration during walking by using various strategies [19, 20]. For instance, as one of the commercial walkway devices, the GAITRite system has distinguished itself by providing a reliable approach for detecting both averaged and individual gait parameters of the elderly [21, 22].

Gait analysis is typically carried out in one of two ways: visual observation or motion laboratory devices [23]. The former observes the subject's locomotion via cameras during walking, while the latter is equipped with sensors for analyzing gait variation. Nonetheless, the outcomes of the observation approach are qualitative and unreliable. In most cases, the motion measurement system provides precise results based on highly accurate sensing setups such as electromyography systems and joint accelerometers [24, 25]. From a biomechanical point of view, ground reaction forces can be explained by Newton's third law: the ground exerts a force that is equal and opposite to the pressure applied by the human foot, which offers a way of describing human gait [26]. State-of-the-art studies find that the ground reaction force supports the evaluation of muscle forces, joint torques, and the stiffness and damping associated with leg-surface contact [27]. Specifically, the ground reaction force (GRF) measurement is noninvasive and easy to implement. In this way, GRF is one of the distinctive, measurable characteristics used to label and describe individuals.

2.2. Face Recognition

Face recognition refers to detecting a face in a cluttered background and recognizing it against a database of facial characteristics [28]. Typically, the face image of a person is obtained from a video source. The development of face recognition technology largely depends on the flourishing of machine learning and deep learning algorithms [29]. There are multiple ways in which face recognition systems work, but in general, they compare selected facial features to determine whether two face images belong to the same person [30]. To this end, face recognition can be carried out via the following steps: (i) face detection: first, the variation of colors in the input image is categorized to find the location and size of the faces in the image [31]; (ii) feature extraction: this most critical procedure takes the biometric features, such as the color of the eyes and the width of the nose, to construct a feature representation of the target face [32]; (iii) facial recognition: the feature representation of a face, commonly in the form of a matrix, is taken as the input of the facial recognition system [33]; and (iv) individual tagging: the person's identity is determined by searching the current database, and a name is tagged to the person whose image has been captured [34].

To keep the method user-friendly, the four steps are taken in real time [35]. In this way, with a camera employed for image collecting, the face recognition process can be carried out for person identity verification.
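As an illustration of these four steps, the following Python sketch chains a detector, a simplified feature extractor, and a nearest-neighbour matcher into one pipeline. The OpenCV Haar cascade, the resized-pixel feature, and the distance threshold are illustrative stand-ins and are not the components used in this paper.

# A minimal sketch of the four-stage pipeline (detection, feature extraction,
# recognition, tagging); detector, feature, and threshold are illustrative.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_gray):
    # Step (i): locate the largest face region in a grayscale frame.
    boxes = detector.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    return frame_gray[y:y + h, x:x + w]

def extract_features(face_img):
    # Step (ii): a crude fixed-length representation (resized pixel vector).
    return cv2.resize(face_img, (32, 32)).astype(np.float32).ravel()

def tag_person(feature, database):
    # Steps (iii)-(iv): nearest neighbour over enrolled feature vectors;
    # `database` maps names to enrolled feature vectors.
    name, dist = min(((n, np.linalg.norm(feature - f)) for n, f in database.items()),
                     key=lambda t: t[1])
    return name if dist < 2000.0 else "unknown"   # threshold is illustrative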

2.3. Fuzzy Association Rules (FAR)

Theoretically, fuzzy association rules are utilized to capture the correlations between low-level features and high-level essence of the target [36]. We now give a brief description of this principle.

Let $D = \{t_1, t_2, \ldots, t_n\}$ be the initial database, with $I = \{i_1, i_2, \ldots, i_m\}$ the set of all characteristics in $D$.

Specifically, a fuzzy set is associated with each characteristic [37]. For a characteristic $i_j \in I$, the fuzzy set $f_j$ indicating the internal property of $i_j$ is given as follows:
$$ f_j = \left\{ \big(x, \mu_{f_j}(x)\big) \mid x \in D,\ \mu_{f_j}(x) \in [0, 1] \right\}. $$

For the purpose of biometric parameter identification, we define $X$ as the gait feature and $Y$ as the facial image feature. Suppose that $A$ and $B$ refer to the fuzzy sets of the corresponding properties of the gait and face features, respectively. Each pair $\langle X, A \rangle$ or $\langle Y, B \rangle$ is called an item, and each set of pairs $\langle X, A \rangle$ or $\langle Y, B \rangle$ is an itemset.

Hence, in order to find the internal correlation, the rule $\langle X, A \rangle \Rightarrow \langle Y, B \rangle$ can be derived from $D$ based on FAR whenever its votes are greater than the specified threshold over the dataset. In other words, the probability that $\langle Y, B \rangle$ occurs given that $\langle X, A \rangle$ has occurred is defined as the confidence of the rule [38, 39]. In this way, the significance factor of $\langle X, A \rangle \Rightarrow \langle Y, B \rangle$ is presented as follows:
$$ S\big(\langle X, A \rangle \Rightarrow \langle Y, B \rangle\big) = \frac{\sum_{t_p \in D} \mu_A\big(t_p[X]\big) \cdot \mu_B\big(t_p[Y]\big)}{|D|}, $$
where $D$ is the dataset of individual identity features whose fuzzy sets are $A$ and $B$. Mathematically, $\mu_A(\cdot) \in [0, 1]$ and $\mu_B(\cdot) \in [0, 1]$. On the other hand, the certainty factor for supporting the rule is given as follows:
$$ C\big(\langle X, A \rangle \Rightarrow \langle Y, B \rangle\big) = \frac{\sum_{t_p \in D} v\big(\mu_A(t_p[X]) \cdot \mu_B(t_p[Y])\big)}{\sum_{t_p \in D} \mu_A\big(t_p[X]\big)}, $$
where the function $v(\cdot)$ computes the voting by comparison with the threshold value, and $t_p$ indicates a person within the permitted individual dataset.
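To make the two factors concrete, the short Python sketch below evaluates them for a toy set of records; the membership values, the product t-norm, and the hard voting threshold are illustrative assumptions rather than the exact scheme used in this paper.

# A toy evaluation of the significance and certainty factors defined above.
# Each record holds the membership degrees of the gait item <X, A> and the
# face item <Y, B>; values, t-norm (product), and threshold are assumptions.
def significance(records):
    # Fuzzy support: combined membership averaged over the whole dataset D.
    return sum(mx * my for mx, my in records) / len(records)

def certainty(records, threshold=0.5):
    # Fuzzy confidence: thresholded votes relative to the antecedent membership.
    votes = sum(1.0 for mx, my in records if mx * my >= threshold)
    antecedent = sum(mx for mx, _ in records)
    return votes / antecedent if antecedent > 0 else 0.0

records = [(0.9, 0.8), (0.7, 0.6), (0.2, 0.1)]   # (mu_A, mu_B) per record
print(significance(records), certainty(records))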

3. System Architecture

Considering the biometric parameter measurement, a system is established to verify the person's identity from biometric characteristics for security monitoring. The system consists of two parts, a gait analysis part and a face recognition part. The outcomes of the two parts are integrated to determine the identity of the approaching person (Figure 1). The deployment of both parts is illustrated as follows.

3.1. GRF Testing Setup

In this paper, a strain gauge force platform is employed to collect the GRFs of both feet on the laboratory walkway during walking. In general, the more sensing elements are placed, the higher the precision with which GRF can be measured [40]. Within this force plate, 256 vertical electrode sheets and 128 horizontal electrode sheets are employed for sensing. The force platform is calibrated with the calibration matrix provided by the manufacturer before being put into practical use. The detection of GRF is easy to implement, as shown in Figure 2. Once the force plate is set up to capture walking foot falls, data are recorded automatically when someone walks over it at free speed. The GRF signals are converted into voltage values with respect to the force applied on the sensor surface. The sensed data are then sent to a signal acquisition module (NI FD-11637) with a maximum sampling rate of 100 kS/s [41]. The force signals are sampled at a frequency of 500 Hz in this system. The acquisition device provides signal amplification as well as 24-bit analog-to-digital conversion. Moreover, a host computer is connected directly to the signal acquisition hardware via USB. As such, the GRF outcomes can subsequently be stored and provided for further analysis. Thereby, this data collection device, on the basis of GRF sensing, can be attached to the individual verification system whenever required.

There are three components measured by using the force plate, which are as follows: (i) $F_z$: vertical component along the $z$ axis; (ii) $F_y$: horizontal component along the $y$ axis (anterior-posterior); (iii) $F_x$: horizontal component along the $x$ axis (medial-lateral).

We thus have $\vec{F}_{\mathrm{GRF}} = \vec{F}_z + \vec{F}_y + \vec{F}_x$. The total GRF can therefore be represented by a vector that is the sum of the vertical load and the shear forces in the two orthogonal horizontal directions. The components of GRF are illustrated in Figure 3.
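The following numerical sketch illustrates how sampled plate voltages could be converted into the three force components and the resultant GRF vector; the calibration matrix entries and the channel layout are placeholder assumptions, not the manufacturer's calibration.

# Illustrative conversion of raw bridge voltages to GRF components and the
# resultant force magnitude; calibration values are made up for this sketch.
import numpy as np

fs = 500                                   # sampling frequency used here (Hz)
calib = np.array([[240.0, 0.2, 0.1],       # hypothetical N/V calibration matrix;
                  [0.3, 118.0, 0.4],       # rows map voltage channels to Fz, Fy, Fx
                  [0.1, 0.2, 120.0]])

volts = np.random.rand(fs, 3) * 0.01       # one second of raw channel data (V)
F = volts @ calib.T                        # per-sample [Fz, Fy, Fx] in newtons
F_total = np.linalg.norm(F, axis=1)        # magnitude of the resultant GRF vector
print(F.shape, round(F_total.max(), 2))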

3.2. Face Recognition Setup

For the purpose of image detection, a visual sensing module integrates cameras to collect time-series scenes and to characterize and identify intrusion targets [42, 43]. In this paper, since the individual's digital face image acquired from the sensor is compared with the face images in the database, a Panasonic WV-CW590CH detection camera with a color resolution of 650 TV lines is deployed in front of the security door. This instrument is able to detect objects within a range of 10 meters and is hard-wired to a DC-regulated power supply. Typically, the best records of individual faces are obtained from 3D face images, with a pose variation of ±45° rotation in depth and ±10° rotation in the image plane [44, 45]. In this way, a face image can be captured in different surroundings (Figure 4).

In consideration of the security and secrecy needs, no wireless transmission network is allowed within the target area. As a result, the camera is connected to the signal processing module on the host computer with a low-noise electric cable. In addition, the images of permitted individuals as well as the face recognition programs are collected beforehand and stored as a dataset for real-time calling. When the face image of an approaching person is detected, the data are transferred from the camera to the host computer, and the face recognition algorithms are run to process the image signals. The person's identity is then verified by matching the face against the database. The users can further process the images by linking to the host computer.

4. Identity Verification Algorithm

The GRF signals and the facial images are collected through the aforementioned system while the person is walking towards the security area. For individual verification, we fuse these data for processing and analysis. The primary procedures of data processing are exhibited in Figure 5.

4.1. GRF Testing Algorithm

During the gait analysis phase, the ground reaction forces are collected for identification; the schematic diagram of the system is presented in Figure 6. To start with, the GRF signals are conditioned into processable data for further investigation. A sliding window is employed to segment the GRF data for continuous identification. Specifically, the data are filtered to remove the random noise from the sensor outputs. Furthermore, an extractor is used for gait feature extraction, deriving the internal characteristics of the gait in time sequence. Mathematically, the features compactly characterize the data from every single analysis window. In this research, four different features are extracted to describe the GRF variation: maximum value, average value, standard deviation, and kurtosis. Accordingly, the representation of individual gait is sent to the characteristic matcher to authenticate a walking person. The identification of GRF is conducted by using the SVM (support vector machine) classifier, which handles high-dimensional data as presented in [46]. The RBF (radial basis function) kernel is taken for data training and testing. At this stage, the gait pattern is compared with the template of a single person, which is picked from the database stored in the host computer. The force is first matched to a prespecified kind of GRF. Thereafter, the features are compared with the database of the four feature parameters and their values. The identification process is exhibited in Algorithm 1 with $\theta$ as a preset threshold. As such, the gait representation is compared with that of all subjects within the storage. The output of the verification system is either the identity of a previously enrolled individual or an alert message.

1 Initialize feature extractor.
2 Initialize SVM-based classifier.
3 while run do
4  Perform GRF detector to get gait set G.
5  Perform feature extractor to get gait feature set F.
6  Perform classifier on detected gait feature set F and get the probability p.
7  if p ≥ θ then
8   Accept the subject as the target person with probability p.
9  else
10   Output alarm information.
11  end if
12 end while
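The following Python sketch illustrates the feature extraction and SVM matching loop of Algorithm 1 on segmented GRF windows. The synthetic training data, the window length, and the threshold value are placeholders for illustration only.

# A sketch of Algorithm 1: four gait features per window and an RBF-kernel SVM
# matcher; training data, window length, and threshold are placeholders.
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC

def gait_features(window):
    # Four features used in this paper: maximum, mean, standard deviation, kurtosis.
    return np.array([window.max(), window.mean(), window.std(), kurtosis(window)])

rng = np.random.default_rng(0)
X_train = np.vstack([gait_features(rng.normal(size=250)) for _ in range(40)])
y_train = rng.integers(0, 2, size=40)            # toy subject labels

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

theta = 0.8                                      # preset acceptance threshold
window = rng.normal(size=250)                    # one segmented GRF window
p = clf.predict_proba(gait_features(window).reshape(1, -1)).max()
print("accept" if p >= theta else "alarm", p)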
4.2. Face Recognition Algorithm

According to the aforementioned steps, a novel model is built up as shown in Figure 7. Originally, an $m \times n$ pixel image represents a face as a vector in an $mn$-dimensional space. Dimensionality reduction is therefore a procedure that transforms a high-dimensional data set into a low-dimensional representation which retains most of the information [47]. In most cases, dimensionality reduction serves as a preprocessing step before feature extraction. For face recognition, we use the MPCA (multilinear principal component analysis) method to reduce the feature projection matrix of face images; MPCA is a widely used multilinear algorithm that extracts features from multidimensional objects [48]. The outputs of the MPCA are sent to the feature extractor. In this paper, we adopt the LBP operator following [49]. Since the LBP operator has a 3×3 window, the gray values of the adjacent 8 pixels are compared with the center value: if a neighboring pixel value is greater than or equal to the center pixel value, that pixel is set to 1; otherwise, it is set to 0. Consequently, the LBP eigenvalues of each pixel can be obtained to compose a feature spectrum. In order to obtain a real-time outcome, a similarity distance measure, which is a simple but effective recognition approach, is employed. Specifically, the L2 similarity distance, which computes the L2 distance between the target face and the faces in the database, is used to determine whether two faces belong to the same person. A face dataset is created to store the predefined individual facial images, which is the foundation of identity matching. Similar to the GRF analysis procedures, the face recognition algorithm is presented in Algorithm 2.

 Initialize pre-processor.
 Initialize feature extractor.
 Initialize LBP operator.
while run do
 Perform camera to get facial image I.
 Perform pre-processor on I to reduce its high dimensionality.
 Perform feature extractor to get feature set F.
 Perform LBP operator on image feature set F and get the maximal confidence c.
 if c ≥ θ then
  Accept the subject as the target person with maximal confidence c.
 else
  Output alarm information.
 end if
end while
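As a concrete illustration of the LBP feature spectrum and L2 matching in Algorithm 2, the sketch below implements a basic 3×3 LBP operator and histogram comparison; it omits the MPCA step, and the matching threshold is an assumed value.

# A sketch of the LBP feature spectrum and L2 matching; MPCA is omitted and
# the threshold is illustrative.
import numpy as np

def lbp_image(gray):
    # Basic 3x3 LBP: compare each pixel's 8 neighbours with its centre value.
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(gray):
    # Feature spectrum: normalised histogram of the 256 possible LBP codes.
    h, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
    return h / h.sum()

def l2_match(probe, gallery, threshold=0.05):
    # Return the enrolled identity with the smallest L2 distance, or None.
    name, dist = min(((n, np.linalg.norm(probe - f)) for n, f in gallery.items()),
                     key=lambda t: t[1])
    return name if dist < threshold else None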
4.3. Decision-Making

Each subsystem provides a local identification outcome. Compared with the identities in the database, the output value is normalized to either 0 or 1, where 0 and 1 indicate rejection and acceptance, respectively. The global decision, however, is computed by integrating the two outcomes. Commonly, the two outcomes are combined by a weighted average algorithm, so that the final decision is made on the basis of the probability of the personal identity. As mentioned in Section 2, we employ the FAR for data integration in this paper. For every single feature, the fuzzy sets can be built up, and we can therefore measure the significance of the rule. Further, a FAR base is established from which all the frequent fuzzy itemsets can be obtained.

The fuzzy decision is made based on the FAR; the obtained signals need not match the standard personal pattern exactly. To estimate the individual identity, we employ the Gaussian curve membership function for data characterization. The membership within the fuzzy set is given as follows:
$$ \mu(x) = \exp\!\left(-\frac{(x - c)^2}{2\sigma^2}\right), $$
where $c$ is the data center and $\sigma^2$ is the variance. Moreover, the fuzzy matching degree for individual identity is computed from these membership values for each permitted individual.

The identity with the maximum fuzzy matching degree can therefore be found. In terms of identity verification, this maximizer corresponds exactly to the fuzzy decision.
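A minimal sketch of this decision step is given below, assuming the Gaussian membership above with per-subject centres and variances; the enrolled values, the fused score, and the acceptance threshold are illustrative assumptions.

# A sketch of the fuzzy decision: Gaussian memberships per enrolled subject,
# then the maximizer is accepted if its degree exceeds an assumed threshold.
import numpy as np

def membership(x, c, var):
    # Gaussian membership of observation x in the fuzzy set centred at c.
    return np.exp(-((x - c) ** 2) / (2.0 * var))

subjects = {"A": (0.9, 0.01), "B": (0.5, 0.02)}    # (centre, variance) per subject
fused_score = 0.86                                  # score from the two subsystems

degrees = {name: membership(fused_score, c, v) for name, (c, v) in subjects.items()}
best = max(degrees, key=degrees.get)
print(best if degrees[best] >= 0.6 else "reject", degrees)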

5. Experiments

The working performance of the proposed method for individual verification is evaluated. The equipment is deployed in front of a laboratory entrance at a college in the city of Shanghai. The current security area allows access to only 26 staff members. We collect GRF signals from individuals wearing different types of shoes and facial images with different expressions and angles. In this way, 936 pieces of data are generated. The dataset is divided into three parts: 60% training data, 20% validation data, and 20% testing data. The training procedure is performed on the training set, followed by evaluation on the validation set. Once the expected results are obtained, the testing data are sent to the detector for individual identity verification. With the power supplied, GRF signals are recorded by the sensors on the force plate, while the face images are captured via the video camera. Both kinds of signals are transmitted to the computer through the electric cable in real time. The data processing procedure is implemented on a Dell server with two GTX-1080 GPUs running a 64-bit Ubuntu 16.04 operating system. Examples of the GRF waveforms and face images of one individual are presented in Figures 8 and 9. We verify the identity of these volunteers by the proposed method.

Initially, we train our method on the training data in the dataset. The accuracy of identification, given in Equation (9), is determined from the number of correctly recognized class instances (TP, true positives), the number of correctly recognized instances that do not belong to the class (TN, true negatives), and the instances that either were incorrectly assigned to the class (FP, false positives) or were not recognized as class instances (FN, false negatives) [50]:
$$ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}. \quad (9) $$
The model is fine-tuned during validation and evaluated during testing.
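As a small check, the helper below computes Equation (9) from the four counts; the example counts are placeholders.

# Accuracy per Equation (9); the counts below are illustrative placeholders.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=45, tn=130, fp=3, fn=9))   # ≈ 0.9358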

The reported outcomes facilitate the evaluation of the proposed method. At the first stage, the decisions based on gait analysis and face recognition are illustrated in Table 1. We can see that, for the gait analysis outcome, all the errors result from rejected genuine users. In contrast, most errors of face recognition are caused by accepted impostors.

The detection accuracy increases with the number of training samples (Figure 10). The proposed method is tested on these inputs. For comparison, we first present the outcomes obtained using only one kind of biometric parameter detection. Moreover, the integration of data using the weighted average based on the outcomes in Table 1 is given. The FAR-based determination is then applied to the two detection outcomes for decision-making. The detection errors of the weighted average and FAR algorithms are given in Table 2 and Figure 11. According to the error analysis of Table 1, the weighted average algorithm has a fairly even error on rejected genuine users and accepted impostors. However, for the FAR determination, the errors caused by rejected genuine users are four times those caused by accepted impostors. Experimental results on the testing data are shown in Table 3.

6. Conclusion

In this paper, we focus on a remote personal verification approach in the specific condition of security area monitoring. The integration of biometric parameters provides an opportunity to meet the demands of noninvasiveness, high accuracy, and remote sensing while maintaining user acceptability. This paper presents the following contributions to the application of personal verification systems.

Firstly, the biometric parameters, i.e., GRF signals and facial images, are directly collected and transmitted for remote measurement through the design and deployment of the detection system.

Secondly, both kinds of signals are processed by using state-of-the-art algorithms. The signal detectors are established, and the signals are transmitted to identify the individual characteristics over a long distance.

Thirdly, the FAR algorithm is adopted to better address the identification problem. An even higher detection accuracy is obtained owing to the integration of the biometric signals.

Future work will focus on extending and validating the current system with more biometrics. In addition, the signal integration strategy will be further explored, with the aim of improving the personal verification accuracy.

Data Availability

The gait data and facial images used to support the findings of this study have not been made available because of the identity confidentiality of the users.

Conflicts of Interest

The authors declare that they have no conflicts of interest.