International Journal of Optics


Research Article | Open Access


Mohammed Ahmed Talab, Suryanti Awang, Mohd Dilshad Ansari, "A Novel Statistical Feature Analysis-Based Global and Local Method for Face Recognition", International Journal of Optics, vol. 2020, Article ID 4967034, 17 pages, 2020. https://doi.org/10.1155/2020/4967034

A Novel Statistical Feature Analysis-Based Global and Local Method for Face Recognition

Academic Editor: Paramasivam Senthilkumaran
Received: 10 Jul 2019
Revised: 07 Mar 2020
Accepted: 04 May 2020
Published: 01 Jun 2020

Abstract

Face recognition from images and video has been a fast-growing area in the research community, and a sizeable number of face recognition techniques based on texture analysis have been developed in the past few years. These techniques work well on gray-scale and colour images, but very few deal with binary and low-resolution images. As the binary image is becoming the preferred format for low-resolution face analysis, further studies are needed to provide a complete solution for image-based face recognition with a higher accuracy rate. To overcome the limitation of existing methods in extracting distinctive features from low-resolution images, caused by the low contrast between the face and the background, we propose a statistical feature analysis technique to fill this gap. To achieve this, the proposed technique integrates the binary-level occurrence matrix (BLCM) and the fuzzy local binary pattern (FLBP), named FBLCM, to extract global and local face features from binary and low-resolution images. The purpose of FBLCM is to improve the edge sharpness between black and white pixels in the binary image and to extract significant data relating to the features of the face pattern. Experimental results on the Yale and FEI datasets validate the superiority of the proposed technique over other top-performing feature analysis methods. The developed technique achieves an accuracy of 94.54% with a random forest classifier, outperforming techniques such as the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP).

1. Introduction

Face recognition is among the most important applications of pattern recognition and image processing. Research on face recognition has increased due to its significance and potential to be deployed in threat-based applications [1]. In terms of biometric characteristics, face recognition is universally accepted by humans in allowing a system to recognize them using their face. Thus, face recognition has been deployed in surveillance applications as well as human-computer interaction (HCI) [2]. However, development in the face recognition domain has been limited by issues such as the fast computation required to deploy a face recognition system in surveillance operations [3]. This issue has limited the potential of deploying face recognition in real-time environments, where systems may behave differently than expected under test database conditions [4]. In this paper, we do not discuss this issue further, since our aim is not to reduce computational time.

Feature extraction techniques have been actively explored in the face recognition field, because the feature extraction phase is essential in determining face recognition performance [5]. Some researchers designed feature extraction techniques to exploit available information from existing co-occurrence matrices or to extract face features based on matrix-oriented features [6–8]. However, feature extraction from an image is more efficient when a technique based on distinctive matching patterns is taken into consideration [9, 10]. For that reason, global and local feature extraction techniques for the face are widely implemented by researchers, and both kinds of features have advantages. The advantage of global features is that the extraction approach entails complete or subarea assessment of the original image [11]. Thus, the method is deployed as a texture measurement method that collects the global characteristics of a particular picture and utilizes these overall characteristics in face identification. The method performs recognition based on characteristics derived from the entire face. Overall, global feature extraction involves the shape of the faces and how the faces are able to lessen vagueness by highlighting all likely options of the symmetrical area of the images.

Recently, Karczmarek et al. [12] introduced a new, generalized form of the Choquet integral within multicriteria decision theory and applied it, in particular, to the problem of face classification based on the aggregation of classifiers. Such a function may be constructed by a simple replacement of the product used under the Choquet integral sign by any t-norm. Bao et al. [13] proposed a novel quaternion-based colour model with enhanced fuzzy parameterized discriminant analysis to perform face recognition. The proposed method represented and classified colour images by using an improved fuzzy quaternion-based discriminant (FQD) model, which is effective for colour image feature representation, extraction, and classification. Agarwal and Bhanot [14] proposed a technique which uses the firefly algorithm to obtain natural subclusters of training face images formed due to variations in pose, illumination, expression, occlusion, etc. The movement of fireflies in a hyperdimensional input space is controlled by tuning the parameter gamma of the firefly algorithm, which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time, and recognition accuracy. Karczmarek et al. [15] proposed a linguistic descriptor-based approach to the problem of face identification realized by both humans and computers. This approach is motivated by the evident observation that linguistic descriptors offer an ability to formalize and exploit important pieces of knowledge describing a human face. Yadav and Vishwakarma [16] proposed a new, efficient, and advanced method inspired by the interval type-II fuzzy membership concept of fuzzy logic. The motivation is to exploit the benefit of an extended interval type-II membership function, a newer concept in fuzzy logic, in collaboration with kernel-based sparse representation for random forest.
Ramalingam [17] described an application of multicriteria decision-making (MCDM) for multimodal fusion of features in a 3D face recognition system. A decision-making process is outlined that is based on the performance of multimodal features in a face recognition task involving a set of 3D face databases. Roh et al. [18] proposed a face recognition method based on fuzzy transform and radial basis function neural networks, and to reduce the dimensionality and extract the important features of face images, they used the fuzzy transform with fuzzy partition techniques. Zhi and Liu [19] established an effective face recognition model based on principal component analysis, genetic algorithm, and support vector machine, in which principal component analysis is used to reduce feature dimension, the genetic algorithm is used to optimize the search strategy, and the support vector machine is used to realize classification.

Local features have their own advantages: the extraction technique involves divided portions of the picture, each representing a distinct area of the image. The extracted local features are curved and mostly aligned to the boundary and local points of the image being extracted, although the features are usually vague due to poor image conditions [20, 21]. Global and local feature extraction techniques are therefore combined to fully utilize both advantages: the local feature extraction technique entails detached portions of the picture, such as shapes, outlines, corners, and boundaries of the face, whilst the global feature extraction technique entails complete or subarea assessment of the original image [11]. Thus, the global method is deployed as a texture measurement method that collects the global characteristics of a particular picture, which are utilized as overall characteristics in the face recognition process.

Most of the existing global and local feature extraction techniques use texture analysis to extract the features. However, these techniques are not compatible with binary and low-resolution face images, and many face recognition methods have been implemented to overcome these disadvantages. The methods include the discrete wavelet transform (DWT) and principal component analysis (PCA) [22, 23], as well as the discrete cosine transform (DCT), while the Gabor wavelet-based approach provides the best resolution [24]. Other existing works have also implemented GLCM-based methods to overcome these disadvantages [25–27].

These methods have found application on both colour and gray-scale images. Meanwhile, in pattern recognition, binary images are considered a standard image format, as they comprise distinct shapes and textures [28]. The advantage of this format is that it reduces cost and makes a framework able to deal with a wide range of applications. The outstanding statistical feature analysis methods in face recognition are based on GLCM, with enhancements such as GLDM and GLRLM [29, 30]. The latest GLCM-based work extracts different features of the face [31, 32]. However, the existing methods have some drawbacks. The main problem is that they use gray-level images, in which pixel values are duplicated; therefore, the methods are unable to extract robust face features. In addition, the methods lead to slow processing and skip the structure of the face. In general, there is no available statistical feature analysis method specialized for the textures of binary face images. Binary images have the advantage of containing only black and white pixels; thus, the extracted features are robust, with no pixel duplication, and the pixel locations are transparent. Therefore, the face features will be sharpened, which increases the accuracy performance.

The main difference between deep learning and machine learning lies in the way data are presented to the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of artificial neural networks (ANNs). Practically, deep learning is a subset of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones [33, 34].

However, in most complex algorithms, a simple method is used as an algorithmic step [35, 36]. Complex methods can solve most of the complex binarization problems, but they often depend on the performance of other techniques and as such are often complex to design and costly to develop. In binary image representation, only two values per pixel (black and white) exist, which keeps the representation simple and well suited to binary images.

In face recognition, it is important to deal with binary and low-resolution face images, since binary image representation is one of the essential preparation formats in low-resolution image analysis and recognition. However, the simple thresholding methods available at present are not applicable to many binarization problems. Moreover, in many deployments, such as in the security area, the face image has low resolution, and most real environments that involve face recognition have problems with low-resolution face images. Therefore, the technique proposed in this study is based on the analysis of statistical features, like the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP).

This method is applied to extract local and global face features from binary and low-resolution images. The technique statistically analyzes the relationships between the edge pixels in binary images. After extracting the edge between the black and white levels, a set of values representing the relationships between the edge pixels is determined; these values show that the statistical feature analysis method is better than other techniques at establishing the relationship between pixels. However, the existing statistical feature analysis techniques have some drawbacks: for example, they lead to slow processing, skip the structure of the face, treat binary images as colour images, and produce poor results for binary images compared to natural texture images. In general, there is no available texture analysis method specialized for the textures of binary or textual images.

For these reasons, we propose an enhanced technique that combines the binary-level occurrence matrix (BLCM) and the fuzzy local binary pattern (FLBP), named FBLCM. This technique develops a co-occurrence matrix extraction approach for face recognition based on binary image pixels and extracts the texture of face features, enabling the global feature to capture the facial shape. The local feature extraction approach deals with disconnected image parts such as lines, edges, corners, and shapes, whereas the global approach deals with an entire or subregion analysis of a natural image. FLBP was combined with BLCM due to its capability of performing statistical analysis of the behavior of binary picture pixels. This method was selected because binary image representation is the preferred textual image format. Meanwhile, the accuracy of a binarization process depends on the performance of the subsequent steps during document analysis. Due to the absence of a texture analysis method for gray-level images, the use of binary images necessitated this combination in a bid to obtain more descriptive features, which can give better recognition and higher accuracy. Some factors (such as the size of the binarization window) are often user defined, and this is not practical for all images, especially those with different sizes and features. In local binarization methods, the window size is a critical factor: small binarization windows are suitable for noise removal, while large windows are ideal for image preservation. Small windows can damage large texts, while large windows are not good at noise removal. The FLBP is more applicable to descriptor images but less effective for shape images, whereas the BLCM is more effective with shape images. By combining FLBP (for descriptor classification) and BLCM (for shape classification), we are able to achieve better recognition performance than other related techniques.

To explain the details of our work, we divide this paper into several sections. Section 2 presents the related work, focusing on statistical feature analysis techniques for extracting global and local features of the face image. Section 3 describes the methodology of our proposed technique. Section 4 presents the experiments and results. Section 5 is the discussion, and finally, the conclusion is presented in Section 6.

2. Related Work

In this section, we analyze statistical feature analysis techniques that have been implemented to extract global and local features of the face. The sections below are divided into global feature extraction and local feature extraction, and we discuss the related techniques in the respective sections.

2.1. Global Feature Extraction

The gray-level co-occurrence matrix (GLCM) technique is an important numerical scheme in the category of texture analysis methods. The GLCM method extracts statistical values based on the spatial distributions of gray-level values within a picture [37]. Besides, the GLCM method is based on second-order features, which makes this approach robust for face recognition. Second-order statistics are characterized by the GLCM and the gray-level difference method (GLDM) [38].

Technically, these two approaches are similar, but the GLCM method is more often adopted in prior studies. The GLCM approach as initiated in [39] consists of a matrix that describes the distribution of co-occurring gray-level values in a picture. Thus, the GLCM method presents the number of occurrences of each gray-level pair and their specific spatial relationships.

To calculate features, statistical equations are applied to the values of the P(i, j) matrices. Accordingly, Haralick et al. [39] recommended several statistical equations to compute texture features in the GLCM approach, as presented in equations (1)–(7), where P(i, j) is the normalized co-occurrence value and N_g is the number of gray levels.

The GLCM has been deployed in prior studies for document analysis based on the global analysis approach, such as in iris recognition, fingerprint classification, and Chinese sign language recognition [40, 41], among other related research. Correspondingly, Figure 1 presents how to derive the four co-occurrence matrices by means of the offsets [0, 1], [−1, 1], [−1, 0], and [−1, −1], each defined as one adjacent pixel along one of the four possible axes.

It can be seen that the adjacent pixel pair (2, 1) of the input picture is recorded in the co-occurrence matrix as 3, since there are three places where a pixel of intensity 2 and a pixel of intensity 1 are contiguous in the input image. The neighboring pair (1, 2) likewise occurs 3 times, which makes the matrix symmetric. The other three matrices are measured in the same way.
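As an illustration of the counting procedure above and of the Haralick statistics referenced in equations (1)–(7), the following is a minimal NumPy sketch; the function names and the toy image are ours, not part of the original method:

```python
import numpy as np

def glcm(image, offset, levels):
    """Build a normalized gray-level co-occurrence matrix P for one offset.

    image  : 2-D array of integer gray levels in [0, levels).
    offset : (di, dj) displacement to the neighboring pixel, e.g. (0, 1).
    """
    P = np.zeros((levels, levels), dtype=float)
    di, dj = offset
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                P[image[i, j], image[ni, nj]] += 1
    return P / P.sum()  # normalize so entries behave like probabilities

def haralick_features(P):
    """A few of the Haralick statistics commonly computed from a GLCM."""
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()                            # angular second moment
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    contrast = ((i - j) ** 2 * P).sum()
    return asm, homogeneity, contrast

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 1]])
P = glcm(img, (0, 1), levels=3)   # offset [0, 1]: the right-hand neighbor
print(haralick_features(P))
```

Each valid pixel pair increments one cell of P, exactly as in the (2, 1) counting example above; the statistics are then plain sums over the normalized matrix.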

Face recognition is utilized in several systems such as surveillance, verification, and security. However, it has become a very perplexing domain due to the orientation of face images, illumination, and variations in facial expressions, and developing an autonomous face recognition system remains a difficult task [42]. Texture-oriented extraction techniques can aid in extracting roughness, randomness, uniformity, coarseness, regularity, lightness, density, directionality, linearity, phase, quality, smoothness, etc., to enhance the texture of the image [43]. The GLCM method, as a texture feature extraction procedure, can be deployed to extract texture features from black-and-white binary images to improve the recognition rate; however, the GLCM method incurs a high processing cost.

Several studies have suggested statistical feature extraction methods, one of which was proposed in [44]. A hybrid feature extraction method combining geometrical and structural features with the edge direction matrix (EDMS) was presented in [22] for online Arabic character recognition; that research mainly contributes a rule-based method for Arabic character recognition based on EDMS and a geometrical feature extraction method. The GLCM (gray-level co-occurrence matrix) and EDMS (edge direction matrices) were also presented as feature extraction techniques for character recognition in [45]. The GLCM was applied in Arabic and English character recognition but was not ideal for low-resolution characters. We define a binary image texture as black values with spatial distribution properties on a white background. Hence, statistical methods are suitable because they depend on the spatial distribution of the image values: they describe the pixel distribution and the relationships between pixels in the image. Statistical methods are also attractive due to their ease of implementation.

2.2. Local Feature Extraction

Local feature extraction with the local binary pattern (LBP) was first introduced by Ojala et al. [46] and was later improved by Ahonen et al. [47], where the authors applied LBP to facial identification. The LBP method converts the gray-scale appearance into a vector of digital codes by comparing the gray-value changes among the adjoining pixels in a circular, symmetric neighborhood. Thus, if the exterior parameters do not alter the negative and positive relations between the central pixel and its neighbors, the LBP representation of the appearance is unchanged. In a pixel neighborhood, the LBP value of an image pixel can be computed by

LBP_{P,R} = sum_{p=0}^{P−1} s(g_p − g_c) · 2^p,

where g_c and g_p denote the gray values of the center pixel and the adjoining pixels in a circular neighborhood of radius R, and s is the threshold function

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise.
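The thresholded-neighborhood coding just described can be sketched in a few lines of Python; the 3 × 3 toy image and the clockwise bit ordering are our assumptions (implementations differ in where bit 0 starts):

```python
import numpy as np

def lbp_value(image, i, j):
    """Basic 8-neighbor LBP code for interior pixel (i, j) of a 3x3 neighborhood."""
    gc = image[i, j]
    # clockwise neighbor offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for p, (di, dj) in enumerate(offsets):
        s = 1 if image[i + di, j + dj] >= gc else 0   # threshold function s(x)
        code |= s << p                                # weight the bit by 2**p
    return code

img = np.array([[6, 5, 2],
                [7, 6, 1],
                [9, 8, 7]])
print(lbp_value(img, 1, 1))   # → 241 with this bit ordering
```

Each neighbor contributes one bit, so the code ranges over 0–255 for an 8-pixel neighborhood.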

As seen in equation (11), each pixel in the image is arranged into an 8-bit LBP code. Instead of hard coding the pattern difference, a probability evaluation is used in fuzzy-LBP to characterize the prospect of a pattern difference being coded as “0” or “1,” for instance via a piecewise linear fuzzy membership function or a Gaussian-like membership function, as mentioned in [48]. By infusing the fuzzy set into LBP as FLBP, a minor image variation changes the fuzzy-LBP histogram only marginally compared to the original LBP histogram. Nonetheless, the membership is a function of the pixel difference, whose magnitude may easily be altered by noise; thus, fuzzy-LBP remains slightly sensitive to noise.

In contrast to the original fuzzy-LBP, which employs both the sign and the magnitude of the pixel difference, this study determines the fuzzy membership based only on the sign of the pixel difference. Thus, even when a pixel difference is altered by noise, its magnitude may change considerably, but the sign hardly changes, and its membership function remains stable. Therefore, the suggested method is more robust to noise than fuzzy-LBP. In addition, FLBP partially addresses the many subimages handled mainly by SIFT procedures, which are then utilized to train Gaussian-integrated approaches in analyzing and improving Fisher vectors for the face verification noise problem, by introducing fuzzification in the LBP conversion process [49]. In place of the hard-coded pixel difference shown in equation (10), a fuzzy membership function is utilized to characterize the probability of coding “0” or “1.” Several membership functions were suggested in prior studies [48, 50, 51]. Among these, the piecewise linear fuzzy membership function recommended in [50] is the most common, as seen in the following equations:

m_1(x) = 0 if x < −d, (d + x)/(2d) if −d ≤ x ≤ d, and 1 if x > d,

m_0(x) = 1 − m_1(x).

Here, m_1 and m_0 are the probabilities with which a pixel difference is coded as “1” and “0,” respectively. The parameter d controls the amount of fuzzification. The benefit of the local pattern is its robustness to variations, as it encodes only the sign of the pixel difference; nevertheless, it is prone to noise that may alter the pattern. Fuzzy-LBP addresses the noise problem by employing a fuzzy membership of the pixel difference, so that a small image difference alters the fuzzy-LBP histogram only slightly. But a membership function that exploits magnitude leaves the size of a pixel difference susceptible to noise, irrespective of the fuzzy-LBP variant proposed, which is less sensitive to noise.
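A minimal sketch of the piecewise linear fuzzy membership pair described above, assuming the common form with fuzzification parameter d (function names are ours):

```python
def m1(x, d):
    """Probability that the pixel difference x is coded as '1'.

    Piecewise linear: 0 below -d, 1 above d, linear in between.
    d > 0 controls the amount of fuzzification.
    """
    if x < -d:
        return 0.0
    if x > d:
        return 1.0
    return 0.5 + x / (2.0 * d)

def m0(x, d):
    """Probability that the difference is coded as '0' (complement of m1)."""
    return 1.0 - m1(x, d)

# Near a zero difference, both codes are almost equally likely:
print(m1(0, d=5), m0(0, d=5))   # → 0.5 0.5
```

Larger d spreads each pixel's contribution over more codes, trading discriminability for noise robustness.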

To resolve noise alterations, configuration approaches were proposed for the face recognition problem, as previously adopted in [52, 53].

Findings from a prior study [54] reveal that the utilization of fuzzy and LBP features in texture extraction can improve robustness to noise [50, 55–57]. Yet, these research studies are considered preliminary, since they comprise only a few validations and only suggest general approaches for the generation of fuzzy-based texture methods [51, 58]. Thus, it is evident that the fuzzy-oriented texture approach is more robust to noise. The applicability of the fuzzy binary pattern for texture representation was reviewed through fuzzification of a diversity of methodologies. Lastly, the texture representations attained need to be confirmed with a systematic and comprehensive study on natural scenes. We propose a novel statistical method, BLCM-FLBP, to encode local and global feature descriptors by incorporating binary image theory into the representation of local and global texture patterns in images.

3. Methodology

3.1. Developed Model

This section presents the face descriptor based on FLBP and further aims to develop the novel binary statistical feature for face recognition. The proposed model defines a procedure for the extraction of numerical characteristics based on the behavior of pixels at the margins between black and white, facilitated by the fuzzy local binary pattern (FLBP). Accordingly, Figures 2 and 3 depict the flowchart and the developed model, which comprise a preprocessing stage in which a Laplacian filter is used to convert images from grayscale to binary black and white. After that, we proceed to extract the features in our proposed model, as shown in Figure 2.

In addition, the result of the Laplacian filter entails a white and dark background. Figure 2 depicts the flowchart of the novel model, which describes the phases of face recognition: (a) the application of the Laplacian filter to improve the edges between the white and black nodes; (b) the application of a low-resolution face in the feature extraction stage; followed by (c) feature extraction; then (d) training and (e) testing; next, (f) the classification phase; and lastly, (g) the output as the final outcome. Figure 4 shows how the developed model employs the Laplacian process to filter the original image.
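One plausible reading of this preprocessing step is sketched below; the standard 3 × 3 Laplacian kernel and the simple zero threshold are our assumptions, since the paper does not specify either:

```python
import numpy as np

# Standard 4-connected 3x3 Laplacian kernel
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_binarize(gray, threshold=0.0):
    """Apply a 3x3 Laplacian, then threshold the response to black and white.

    gray: 2-D array of gray levels. Border pixels are left as 0 for simplicity.
    Returns a uint8 image where 1 marks edge (white) pixels.
    """
    h, w = gray.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = (patch * LAPLACIAN).sum()   # second-derivative response
    return (out > threshold).astype(np.uint8)
```

A flat region yields zero response everywhere, so only gray-level transitions survive as white pixels, which is the edge-sharpening effect the model relies on.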

Figure 3 depicts the developed model proposed in this study for the binary statistical feature for face recognition. The developed model comprises three stages: the first stage entails the input binary image; the second stage computes the angular second moment, homogeneity, organization angle, correlation, and probability value; and the last stage comprises the 21 image features.

As presented in Figure 5, our proposed model uses the equations below to extract 21 features from the binary image values, calculating the image angular second moment, correlation, homogeneity, organization angle, and probability value (as shown in Table 1).


Type of feature          Number of features

Angular second moment    4
Homogeneity              4
Organization angle       8
Correlation              4
Probability value        1

3.2. Angular Second Moment

This feature is an assessment of the consistency of an image. A homogeneous scene usually contains only a few distinct levels but moderately high values of ASM, as computed by

ASM_θ = Σ_i Σ_j P_θ(i, j)²,

where θ characterizes 0°, 45°, 90°, and 135° and P_θ(i, j) represents the value at the relative location (i, j) in the matrix.

3.3. Correlation

This feature evaluates the binary pixel-level linear dependency among the pixels at the specified positions relative to one another, as presented in the following equation:

Corr_θ = Σ_i Σ_j (i − μ_i)(j − μ_j) P_θ(i, j) / (σ_i σ_j),

where θ signifies 0°, 45°, 90°, and 135°, μ_i, μ_j, σ_i, and σ_j are the means and standard deviations of the row and column marginals of P_θ, and P_θ(i, j) represents the value at the relative point (i, j).

3.4. Homogeneity

Homogeneity is the feature that describes the proportions of the associations and measures each connection in comparison with the relationships from all angles. The following equation defines the pixel regularity:

H_θ = Σ_i Σ_j P_θ(i, j) / (1 + |i − j|),

where θ denotes 0°, 45°, 90°, and 135° and P_θ(i, j) represents the value at the comparative location (i, j).

3.5. Organization Angle

The organization angle measure indicates the axis of each pixel’s associations, characterizing the percentage for each axis in comparison with the aggregate number of edge associations, as shown in the following equation:

OA_θ = E_θ / E,

where θ represents 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°, E_θ is the number of associations along axis θ, and E (the scoped pixel count) represents the aggregate number over all axes in a particular image.

3.6. Probability Value

The probability value is the feature that describes the visual main axis of the picture by depicting the principal axis of the source image. The direction is calculated by locating the state with the highest number of associations, based on the following equation:

PV = argmax_θ E_θ,

where E_θ is the number of associations along axis θ.

Therefore, based on Figure 5, the recommended approach consists of two designated steps. The first step involves the formation of the original image and the application of the Laplacian filter to mine the edges between the white and black states of the image. In this step, a 5 × 5 neighborhood is represented by a set of 25 elements. Moreover, P33 characterizes the intensity value of the main pixel, and the residual values are those of the neighboring pixels, as shown in Figure 5.

Similarly, the second step involves the formation of a GLCM-like matrix based on the black-and-white binary image. In this step, this study derives a new formation method, called BLCM, to handle the binary image by replacing the gray levels of the image with binary levels. Thus, Figure 6 shows the neighborhood properties of the image, which also include edge information.

Figure 5 depicts how the neighborhood extracts image properties, including edge information. Accordingly, in our proposed approach, the 8 position values are employed as events, and the number of occurrences in the edge image is deposited in the connected cell of the binary-level occurrence matrix (BLCM). Our proposed model is based on a numerical exploration of the associations among the structure border edge pixels in digital images. Thus, each pixel in the picture is associated with an eight-pixel neighborhood within a 5 × 5 matrix. As illustrated in Figure 7, the middle point is associated with the eight pixels at the positions C(i − 1, j − 1), C(i, j − 1), C(i + 1, j − 1), C(i − 1, j), C(i + 1, j), C(i − 1, j + 1), C(i, j + 1), and C(i + 1, j + 1). Each position of the neighboring pixels represents an association axis with the focus pixel of the image.

Figure 7 shows how the eight neighboring pixels are associated for image extraction based on BLCM values, which are obtained in two steps: first edge detection and then the second-order association.
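The paper gives no pseudocode for BLCM; one plausible reading of the description above, a 2 × 2 co-occurrence count per direction over the binary image, can be sketched as follows (names and the toy image are ours):

```python
import numpy as np

OFFSETS = {  # the eight neighbor directions around C(i, j)
    0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1),
    180: (0, -1), 225: (1, -1), 270: (1, 0), 315: (1, 1),
}

def blcm(binary, angle):
    """2x2 co-occurrence matrix of a binary (0/1) image for one direction.

    M[a, b] counts how often a pixel of value a has a neighbor of value b
    along the given angle; one matrix is built per direction.
    """
    di, dj = OFFSETS[angle]
    M = np.zeros((2, 2), dtype=int)
    rows, cols = binary.shape
    for i in range(rows):
        for j in range(cols):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                M[binary[i, j], binary[ni, nj]] += 1
    return M

b = np.array([[0, 1],
              [1, 1]])
print(blcm(b, 0))   # pair counts along the 0-degree (rightward) axis
```

Because the image is binary, each direction yields only four counts (black-black, black-white, white-black, white-white), which avoids the pixel duplication the paper attributes to gray-level matrices.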

3.7. Application of FLBP for Face Representation

Unlike most local descriptors that deploy vector quantization, in the LBP method a small change in the input image results in only a minor alteration in the output; however, the hard thresholding makes the operator sensitive to noise. Thus, to improve the rigidity of the operator, the thresholding function is replaced by two fuzzy membership functions, as suggested in [50]. To optimize the LBP approach in resolving the uncertainty caused by noise, researchers in [51, 58] assimilated the fuzzy approach to aid in computing the binary structure, addressing vagueness and improving the judgment capability of the LBP approach under noise. Therefore, we follow the suggestion in [51, 58] in presenting FLBP for face representation in this section. The suggested FLBP approach is integrated in our proposed model to help handle indecision arising from vague information [28, 59–61].

Thus, we opted for FLBP for face recognition using intuitionistic fuzzy sets (IFS), as shown below.

Let U be the universal set of an n-pixel neighborhood. Let A be the set of all pixels with gray value greater than or equal to g_c and B be the set of all pixels with gray value smaller than g_c, under the universal set U. An IFS A over the universal set U is specified by

A = {(u, μ_A(u), ν_A(u)) | u ∈ U},

with μ_A : U → [0, 1] and ν_A : U → [0, 1] such that 0 ≤ μ_A(u) + ν_A(u) ≤ 1.

The numbers μ_A(u) and ν_A(u) represent the degrees of membership and nonmembership with which a pixel u has a higher or lower gray value than g_c, respectively. For each pixel u ∈ U, the amount π_A(u) = 1 − μ_A(u) − ν_A(u) is called the degree of indecision: the extent to which it is uncertain whether u belongs to A or not. The membership and nonmembership functions of the IFS can be illustrated as shown in the following equations:

For an neighborhood, the contribution of each FLBP code in a particular bin of the FLBP structure is estimated by the association and nonassociation method and as presented in the following equationfd21:where is the integer of neighboring contributing in the FLBP code calculation, symbolizes the synchronization of the picture, and symbolizes the mathematical symbol of the bit of the digital illustration of . The comprehensive FLBP structure is calculated by adding the effects of all FLBP codes in the inserted picture as presented in the following equation:
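A minimal sketch of the per-neighborhood contribution computation follows. It uses the classical FLBP membership pair m0 = 1 − m1 (the intuitionistic variant additionally reserves a hesitation margin), and the fuzzification parameter T = 5 is an assumed value; neither the function names nor the parameter come from the paper.

```python
import numpy as np

T = 5  # fuzzification parameter (assumed value)

def m1(d):
    """Membership 'neighbour is brighter than or equal to the centre':
    ramps linearly from 0 at d = -T to 1 at d = +T."""
    return np.clip((T + d) / (2.0 * T), 0.0, 1.0)

def flbp_contrib(center, neighbours):
    """Contribution of every 2^P code for one P-neighbour neighbourhood."""
    d = np.asarray(neighbours, dtype=float) - center
    mu1 = m1(d)          # fuzzy bit = 1
    mu0 = 1.0 - mu1      # fuzzy bit = 0 (classical FLBP pairing)
    P = len(neighbours)
    contrib = np.zeros(2 ** P)
    for code in range(2 ** P):
        bits = [(code >> i) & 1 for i in range(P)]
        contrib[code] = np.prod([mu1[i] if b else mu0[i] for i, b in enumerate(bits)])
    return contrib

c = flbp_contrib(100, [98, 103, 100, 95, 107, 100, 99, 102])
print(round(c.sum(), 6))  # 1.0: one neighbourhood distributes unit mass over all codes
```

Because each neighbor's two memberships sum to one, every neighborhood contributes a total weight of exactly 1 to the histogram, spread across several codes instead of a single crisp one.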

4. Experiments and Results

In this section, the developed binary statistical feature method for face recognition is evaluated through experiments on two datasets, the Yale B and FEI facial databases [62, 63]. The first, the Yale Face Database, comprises 165 gray-scale images of 15 subjects [62, 63], with 11 images per subject, each showing a different facial appearance, as presented in Figure 6.

Furthermore, the second dataset, the FEI face database [64, 65], comprises face images of 200 individuals (100 males and 100 females). Fourteen images were collected for each individual, giving a total of 2,800 colour face images, covering up to 180 degrees of rotation. The subjects are mainly 19–40 years old, with varying appearance, hairstyles, and adornments, as shown in Figure 6. Hence, the dataset is tested at different angles and with different expressions, supporting the extraction of the image features used to evaluate the performance of our binary statistical feature method for face recognition.

Figure 8 shows examples of face images in the second (FEI) dataset at different angles, covering 180 degrees of rotation. Since this paper aims to propose an improved global statistical feature method combined with a local feature based on the fuzzy approach, and to find the most effective classifier for this approach, we applied random forest and neural network classifiers. We then assessed the efficacy of the proposed approach against the gray-level co-occurrence matrix (GLCM) approach, evaluating the angular second moment (ASM), homogeneity, contrast, variance, and correlation (see Table 2). To this end, the evaluation was deployed at angles of 0°, 45°, 90°, and 135° for FLBP and bag of words (BOW).


Table 2: Number of features per feature type for the proposed method and GLCM.

Type of feature | Proposed method | GLCM

Angular second moment | 4 | 4
Homogeneity | 4 | 4
Organization angle | 8 | —
Contrast | — | 4
Entropy | — | 4
Variance | — | 4
Correlation | 4 | 4
Probability value | 1 | —
Total | 21 | 24
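The GLCM features listed in Table 2 can be computed from a normalized co-occurrence matrix; the following NumPy sketch handles one displacement (angle 0°, distance 1) with formulas following Haralick's standard definitions. This is an illustration of the comparison baseline, not the paper's code.

```python
import numpy as np

def glcm(img, di, dj, levels):
    """Normalised grey-level co-occurrence matrix for one displacement (di, dj)."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for i in range(max(0, -di), min(h, h - di)):
        for j in range(max(0, -dj), min(w, w - dj)):
            M[img[i, j], img[i + di, j + dj]] += 1
    return M / M.sum()

def features(P):
    """Angular second moment, contrast, homogeneity, entropy of a normalised GLCM."""
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)
    contrast = np.sum((i - j) ** 2 * P)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    return asm, contrast, homogeneity, entropy

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
P = glcm(img, 0, 1, 4)   # angle 0 degrees, distance 1
print(features(P))
```

Averaging each feature over the four displacements (0°, 45°, 90°, 135°) yields the rotation-pooled values typically reported for GLCM baselines.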

In our experiment, the dataset was divided into training and testing sets, with the training portion ranging from about 60% to 70% of the data. Results from the test indicate that our approach achieves higher correct-classification rates than the GLCM. Moreover, different split proportions were evaluated to identify the best performance; our results suggest that the random forest performed best with a 63% training split, outperforming GLCM.
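The split search described above can be sketched as follows. Synthetic data stands in for the face-feature vectors (the datasets are not bundled here), and the exact set of fractions tried is an assumption; only the 60–70% range and the random-forest classifier come from the text.

```python
# Try training fractions from 60% to 70% and keep the best random-forest accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the extracted BLCM-FLBP feature vectors and identity labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

results = {}
for frac in (0.60, 0.63, 0.65, 0.68, 0.70):
    Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    results[frac] = clf.score(Xte, yte)   # accuracy on the held-out part

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

Fixing the random seeds makes the comparison across split fractions repeatable, which matters when the differences between splits are only a few percentage points.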

Figure 9 shows the evaluation results of the gray-level co-occurrence matrix (GLCM), bag of words (BOW), fuzzy local binary pattern (FLBP), and our proposed approach for training splits from 60% to 70% of the FEI dataset, together with the results of the neural network and random forest classifiers. Likewise, Figure 10 depicts the corresponding results for the Yale B dataset.

Results from Figures 9 and 10 indicate that our proposed approach achieved its best accuracy with the random forest at a 63% training split. In addition, results from Figures 11 and 12 reveal that the proposed approach reaches about 94.54% accuracy, compared with 90.77% for the BLCM method alone with the random forest classifier (Table 3).


Table 3: Mean accuracy (%) and standard deviation of BLCM over five runs.

Methods | Mean | Standard deviation

RF/FEI | 90.77 | 0.65
RF/Yale B | 90.44 | 0.79
NN/FEI | 92.09 | 0.55
NN/Yale B | 91.22 | 0.74

Next, the test was repeated for five iterations. The results indicate that the mean accuracy of the proposed approach is higher than the means for the binary-level co-occurrence matrix (BLCM), gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP). Moreover, the proposed approach obtains a standard deviation lower than those of the other methods (BLCM, GLCM, BOW, and FLBP).
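The five-iteration protocol behind Tables 3–6 amounts to repeating one train/evaluate cycle with different seeds and reporting the mean and sample standard deviation. A stdlib-only sketch, where `run_experiment` is a hypothetical stand-in returning an accuracy in percent:

```python
import statistics

def run_experiment(seed):
    """Stand-in for one train/evaluate cycle; returns a hypothetical accuracy (%)."""
    return 90.0 + (seed * 37 % 10) / 10.0

scores = [run_experiment(s) for s in range(5)]   # five iterations
mean = statistics.mean(scores)
std = statistics.stdev(scores)                   # sample standard deviation
print(round(mean, 2), round(std, 2))
```

Reporting the standard deviation alongside the mean, as the tables do, shows whether a method's advantage is larger than its run-to-run variability.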

In Table 4, the mean accuracy of FLBP on the FEI dataset was 85.09% (RF) and 89.72% (NN), and on the Yale B dataset 87.10% (RF) and 86.44% (NN). The FLBP method obtained standard deviations of 0.68 (RF) and 0.47 (NN) on FEI, and 0.73 (RF) and 0.76 (NN) on Yale B. Five experiments were performed for each method with each classifier. Figure 13 shows the average values for each classifier.


Table 4: Mean accuracy (%) and standard deviation of FLBP over five runs.

Methods | Mean | Standard deviation

RF/FEI | 85.09 | 0.68
RF/Yale B | 87.10 | 0.73
NN/FEI | 89.72 | 0.47
NN/Yale B | 86.44 | 0.76

The dataset was partitioned into a training set and a testing set, with the training set comprising about 60% to 70% of the original data, as shown in Figure 14.

The random forest, followed by the neural network (NN), produced acceptable accuracy rates. The experiment continued by analyzing the consistency of the random forest results, which were clearly better than those of BLCM, GLCM, BOW, and FLBP under each classifier. The FLBP was implemented as described above. The RF is an ensemble learning method for classification; in this study, the number of trees was fixed at 47 when using RF.
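The 47-tree configuration stated above maps directly onto the scikit-learn estimator; the tiny dataset here is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature matrix and labels, standing in for the face-feature vectors.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 5)
y = np.array([0, 1, 1, 0] * 5)

# Ensemble of 47 trees, as stated in the text.
rf = RandomForestClassifier(n_estimators=47, random_state=0).fit(X, y)
print(len(rf.estimators_))  # 47
```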

In Table 5, the mean accuracy of GLCM on the Yale B dataset was 42.29% (RF) and 39.21% (NN), and on the FEI dataset 39.97% (RF) and 46.32% (NN). The mean for BOW on Yale B was 70.62% (RF) and 76.18% (NN), and on FEI 82.21% (RF) and 86.42% (NN). GLCM obtained standard deviations of 0.87 (RF) and 0.95 (NN) on Yale B, and 0.68 (RF) and 0.95 (NN) on FEI; for BOW, the standard deviations were 0.90 (RF) and 0.85 (NN) on Yale B, and 0.73 (RF) and 0.88 (NN) on FEI. Five experiments were performed for each method with each classifier. Figure 15 shows the average values for each classifier.


Table 5: Mean accuracy (%) and standard deviation of GLCM and BOW over five runs.

Methods | Yale B mean | Yale B std. dev. | FEI mean | FEI std. dev.

GLCM/RF | 42.29 | 0.87 | 39.97 | 0.68
BOW/RF | 70.62 | 0.90 | 82.21 | 0.73
GLCM/NN | 39.21 | 0.95 | 46.32 | 0.95
BOW/NN | 76.18 | 0.85 | 86.42 | 0.88

Figures 16 and 17 present the results achieved with the FEI and Yale B datasets when 60% to 70% of the original data were used for training with the GLCM and BOW methods. Table 6 shows the descriptive analysis of the five tests for the proposed method, BLCM, GLCM, FLBP, and BOW with NN and RF.


Table 6: Descriptive analysis of the five tests: accuracy (%) and standard deviation.

Methods | Yale B accuracy | Yale B std. dev. | FEI accuracy | FEI std. dev.

Proposed method/random forest | 94.54 | 0.71 | 93.16 | 0.62
GLCM/random forest | 42.29 | 0.87 | 39.97 | 0.68
BOW/random forest | 70.62 | 0.90 | 82.21 | 0.73
FLBP/random forest | 87.10 | 0.73 | 85.09 | 0.68
Proposed method/NN | 93.61 | 0.44 | 95.27 | 0.45
GLCM/NN | 39.21 | 0.95 | 46.32 | 0.95
BOW/NN | 75.42 | 0.85 | 86.42 | 0.88
FLBP/NN | 86.44 | 0.76 | 89.72 | 0.47

Furthermore, the experimental results are compared with those of several prior studies (see Table 7). Among these, Patil and Talbar [66] designed a facial identification system integrating LBP, Gabor features, and their fusion, applying principal component analysis (PCA) to reduce image dimensionality. Dong [67] designed an effective theoretical method for face identification based on LBP smoothness and PCA to minimize the effects of illumination conditions and changing facial expressions; the model was tested for robustness to rotation variations, illumination, etc.


Table 7: Comparison with prior studies.

Method reference | Technique | Accuracy (%)

[66] | Gabor wavelet, LBP | 92.00
[67] | PCA + LBP | 92.88
[68] | ALBP with BCD | 86.45
[69] | Fusion of Gabor, LBP, PCA | 94.00
Proposed method | FLBP-BLCM | 94.54

An innovative face recognition approach applicable in unconstrained settings was proposed in [68]. It is based on the LBP of the facial image and measures the difference between faces using the Bray–Curtis dissimilarity benchmark. The authors further propose a technique called the amplified binary pattern, which combines the areas of nonstructured and structured patterns to improve the LBP surface texture, using integrated structured uniform patterns that extract meaningful data associated with local descriptors. NN-based methods have also been applied for feature extraction from facial images; such methods dynamically group neurons into higher orders and are hence robust against environmental variations [70, 71]. An advantage of NNs is their high accuracy, but only when a large database is used for training; their disadvantages are the long training time and the need for a very large database to reach high accuracy.

A random forest is constructed as a set of decision trees with controlled variation; it applies the random subspace method, selecting features from random subsets, an implementation of stochastic discrimination. The comparison between the two methods (NN and random forest) gives more confidence and satisfactory results because both methods are widely used.

5. Discussion

This paper discusses a novel binary statistical feature for face recognition; tests were deployed to evaluate the proposed model and to compare its performance with existing approaches. The experiments assessed the performance of the BLCM and FLBP procedures both separately and integrated, based on the classification accuracy of several classifiers: random forest and backpropagation neural network classifiers were applied in the evaluation. To carry out the test, the dataset was split into training and testing sets, and different proportions were evaluated to identify the best-performing configuration, with training sets ranging between 60% and 70% of the data. The results indicate that the proposed BLCM-FLBP obtained higher correct-classification rates than the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP), as presented in Figures 16 and 17, respectively.

In addition, the results reveal that the proposed method performs better, in terms of both mean and standard deviation, than the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP) in all cases. Our results also suggest that the random forest with a 63% training split obtained the best performance; by comparison, the correct-classification rate of GLCM was 46.32%. Based on the analysis presented in Figure 10, the proposed approach achieved its maximum efficacy at 95.27%, while the GLCM approach obtained 42.29%, the BOW method 82.21%, and the FLBP method 85.09% in the same case.

Furthermore, the experiment was repeated over several iterations: the random forest and backpropagation neural network with a 68% training split were run five times each for GLCM, BOW, FLBP, and BLCM-FLBP. The descriptive t-test statistics for the Yale dataset (Table 4) indicate that the mean of the proposed approach, 94.54%, was higher than the means for GLCM (42.29%), BOW (70.62%), and FLBP (87.10%). Likewise, the results for the FEI dataset (Table 8) show that the proposed method obtained 93.16%, higher than GLCM (39.97%), BOW (82.21%), and FLBP (85.09%).


Table 8: Mean accuracy (%) and standard deviation of the proposed BLCM-FLBP over five runs.

Methods | Mean | Standard deviation

RF/FEI | 93.16 | 0.62
RF/Yale B | 94.54 | 0.71
NN/FEI | 95.27 | 0.45
NN/Yale B | 93.61 | 0.44

In addition, the results show that the proposed approach attained a standard deviation of 0.71 on the Yale dataset, lower than GLCM (0.87), BOW (0.90), and FLBP (0.73). Similarly, on the FEI dataset, our approach obtained a standard deviation of 0.62, lower than GLCM (0.68), BOW (0.73), and FLBP (0.68). The results thus confirm that BLCM-FLBP achieved the highest accuracy in all experiments compared with the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP).

Besides, the analysis indicates that the classifiers are more effective with the proposed features than with those of the other approaches. This is evident in the results presented in Figures 9 and 10, where the classification methods perform better with BLCM-FLBP than with GLCM, BOW, and FLBP. Lastly, the experimental results indicate that the proposed method is less sensitive than the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP) to the different percentages used for separating the training and testing datasets.

The top-performing existing techniques all use the gray-level image to extract facial features, extracting both local and global features of the face image. However, these techniques have several drawbacks: low resolution, illumination effects, reliance on gray images, and pixel duplication make the edge of the face image difficult to determine. Because such a face image does not describe the spatial distribution of values well, descriptive facial features cannot be extracted, which degrades accuracy. In addition, some existing works rely only on global facial features, without considering local facial features in their accuracy evaluation. To address these problems, a new combined local and global feature extraction technique is proposed. To overcome the limitations of gray-level face images in GLCM-based methods, GLCM is enhanced into the binary-level co-occurrence matrix (BLCM) to extract global facial features. To capture more distinctive features based on the shape of the face image, a local feature extraction technique, the fuzzy local binary pattern (FLBP), is combined with the BLCM. The resulting combined technique is called BLCM-FLBP.
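The combination step can be sketched as a simple concatenation of the global BLCM descriptor and the local FLBP histogram into one feature vector per face. The per-part normalization shown here is an assumption, as the paper does not specify the fusion rule.

```python
import numpy as np

def fuse(blcm_counts, flbp_hist):
    """Normalise the global (BLCM) and local (FLBP) parts separately,
    then concatenate them into a single feature vector."""
    g = np.asarray(blcm_counts, dtype=float)
    l = np.asarray(flbp_hist, dtype=float)
    g /= g.sum() if g.sum() > 0 else 1.0
    l /= l.sum() if l.sum() > 0 else 1.0
    return np.concatenate([g, l])

# 8 BLCM direction counts + a 16-bin FLBP histogram -> 24-dimensional vector.
v = fuse([0, 1, 0, 1, 1, 0, 1, 0], np.ones(16))
print(v.shape)
```

Normalizing each part before concatenation keeps the global and local components on a comparable scale, so neither dominates the distance computations inside the classifier.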

6. Conclusions and Future Scope

The study has developed an improved global feature extraction technique, based on the numerical analysis of the behavior of edge pixels in images, combined with a fuzzy local feature; the proposed method is applied to low-resolution face datasets. The results suggest that, after formulating and normalizing the dataset, a set of association degrees among the image pixels can be obtained. This information is retained in matrices referred to as binary-level co-occurrence matrices (BLCM), while the intuitionistic fuzzy feature approach is employed to encode texture by incorporating the intuitionistic fuzzy set (IFS) procedure into the representation of image structure. Furthermore, the proposed approach is an extended version of the FLBP approach based on IFS theory, contributing to the dissemination of FLBP.

We applied the values of the FLBP matrices in the proposed approach and then tested it against the gray-level co-occurrence matrix (GLCM), bag of words (BOW), and fuzzy local binary pattern (FLBP) using multilayer neural network and random forest classifiers. The findings indicate that the proposed approach outperforms GLCM, BOW, and FLBP, presenting a maximum accuracy of 94.54% with the random forest at 63% training on the Yale B data and 92.98% with the neural network at 68% training on the FEI data. Future work will involve an improved approach able to find and process effective image representations of the occurrences in images. Secondly, the improved approach will be enabled for texture discrimination, dividing the image into local and global feature extraction stages toward developing different system applications in the pattern recognition domain.

Data Availability

The Yale B dataset and the FEI face dataset used to support face recognition are from the previously reported studies and datasets, which have been cited in the relevant places. The process data (Yale B face recognition and FEI face dataset) are available freely at http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html and https://fei.edu.br/∼cet/facedatabase.html, respectively.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their gratitude to the Ministry of Higher Education of Malaysia, under the Fundamental Research Grant Scheme (vote number RDU1901103), for supporting this study.

References

  1. A. M. Shabat and J.-R. Tapamo, “Angled local directional pattern for texture analysis with an application to facial expression recognition,” IET Computer Vision, vol. 12, no. 5, 2018.
  2. N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, “Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.
  3. J. Qian, J. Yang, N. Zhang, and Z. Yang, “Histogram of visual words based on locally adaptive regression kernels descriptors for image feature extraction,” Neurocomputing, vol. 129, pp. 516–527, 2014.
  4. C.-X. Ren, D.-Q. Dai, X.-X. Li, and Z.-R. Lai, “Band-reweighed gabor kernel embedding for face image representation and recognition,” IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 725–740, 2014.
  5. S. Awang, J. Sulaiman, N. K. M. Noor, and L. Bayuaji, “Comparison of accuracy performance based on normalization techniques for the features fusion of face and online signature,” Advanced Science Letters, vol. 23, no. 11, pp. 11233–11236, 2017.
  6. A. Gelzinis, A. Verikas, and M. Bacauskiene, “Increasing the discrimination power of the co-occurrence matrix-based features,” Pattern Recognition, vol. 40, no. 9, pp. 2367–2372, 2007.
  7. N. Jhanwar, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, “Content based image retrieval using motif cooccurrence matrix,” Image and Vision Computing, vol. 22, no. 14, pp. 1211–1220, 2004.
  8. R. F. Walker, P. T. Jackway, and D. Longstaff, “Genetic algorithm optimization of adaptive multi-scale GLCM features,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 17, no. 1, pp. 17–39, 2003.
  9. A. Busch, W. W. Boles, and S. Sridharan, “Texture for script identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1720–1732, 2005.
  10. S. Awang and N. M. A. Nikazmi, “Automated toll collection system based on vehicle type classification using sparse-filtered convolutional neural networks with layer-skipping strategy (sf-cnnls),” Journal of Physics: Conference Series, vol. 1061, no. 1, Article ID 012009, 2018.
  11. X. Jiang, “Feature extraction for image recognition and computer vision,” in Proceedings of the 2009 2nd IEEE International Conference on Computer Science and Information Technology, IEEE, Beijing, China, August 2009.
  12. P. Karczmarek, A. Kiersztyn, and W. Pedrycz, “Generalized choquet integral for face recognition,” International Journal of Fuzzy Systems, vol. 20, no. 3, pp. 1047–1055, 2018.
  13. S. Bao, X. Song, G. Hu, X. Yang, and C. Wang, “Colour face recognition using fuzzy quaternion-based discriminant analysis,” International Journal of Machine Learning and Cybernetics, vol. 10, no. 2, pp. 385–395, 2019.
  14. V. Agarwal and S. Bhanot, “Radial basis function neural network-based face recognition using firefly algorithm,” Neural Computing and Applications, vol. 30, no. 8, pp. 2643–2660, 2018.
  15. P. Karczmarek, A. Kiersztyn, W. Pedrycz, and M. Dolecki, “Linguistic descriptors in face recognition,” International Journal of Fuzzy Systems, vol. 20, no. 8, pp. 2668–2676, 2018.
  16. S. Yadav and V. P. Vishwakarma, “Extended interval type-II and kernel based sparse representation method for face recognition,” Expert Systems with Applications, vol. 116, pp. 265–274, 2019.
  17. S. Ramalingam, “Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition,” Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.
  18. S.-B. Roh, S.-K. Oh, J.-H. Yoon, and K. Seo, “Design of face recognition system based on fuzzy transform and radial basis function neural networks,” Soft Computing, vol. 23, no. 13, pp. 4969–4985, 2019.
  19. H. Zhi and S. Liu, “Face recognition based on genetic algorithm,” Journal of Visual Communication and Image Representation, vol. 58, pp. 495–502, 2019.
  20. M. A. Talab, S. N. H. S. Abdullah, and M. H. A. Razalan, “Edge direction matrixes-based local binary patterns descriptor for invariant pattern recognition,” in Proceedings of the 2013 International Conference on Soft Computing and Pattern Recognition, Hanoi, Vietnam, March 2015.
  21. Y. Xu, G. Yu, Y. Wang, X. Wu, and Y. Ma, “A hybrid vehicle detection method based on viola-jones and HOG + SVM from UAV images,” Sensors, vol. 16, no. 8, p. 1325, 2016.
  22. N. N. Mohammed, M. I. Khaleel, M. Latif, and Z. Khalid, “Face recognition based on PCA with weighted and normalized mahalanobis distance,” in Proceedings of the International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), IEEE, Bangkok, Thailand, November 2018.
  23. L. Cao, H. Li, H. Guo, and B. Wang, “Robust PCA for face recognition with occlusion using symmetry information,” in Proceedings of the IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), pp. 323–328, IEEE, Banff, Canada, June 2019.
  24. Y. Duan, J. Lu, J. Feng, and J. Zhou, “Context-aware local binary feature learning for face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 5, pp. 1139–1153, 2018.
  25. M. A. Al-Abaji and M. M. Salih, “The using of PCA, wavelet and GLCM in face recognition system, a comparative study,” Journal of University of Babylon for Pure and Applied Sciences, vol. 26, no. 10, pp. 131–139, 2018.
  26. S. Yahia, Y. B. Salem, and M. N. Abdelkrim, “3D face recognition using local binary pattern and grey level co-occurrence matrix,” in Proceedings of the 17th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), pp. 328–338, IEEE, Sousse, Tunisia, June 2017.
  27. R. Y. Dillak, S. Dana, and M. Beily, “Face recognition using 3D GLCM and elman levenberg recurrent neural network,” in Proceedings of the International Seminar on Application for Technology of Information and Communication (ISemantic), pp. 152–156, IEEE, Semarang, Indonesia, August 2016.
  28. M. D. Ansari, S. P. Ghrera, and A. R. Mishra, “Texture feature extraction using intuitionistic fuzzy local binary pattern,” Journal of Intelligent Systems, vol. 29, no. 1, pp. 19–34, 2016.
  29. I. Kitanovski, B. Jankulovski, I. Dimitrovski, and S. Loskovska, “Comparison of feature extraction algorithms for mammography images,” in Proceedings of the 4th International Congress on Image and Signal Processing, vol. 2, pp. 888–892, IEEE, Shanghai, China, October 2011.
  30. W. Lu, Z. Li, and J. Chu, “A novel computer-aided diagnosis system for breast MRI based on feature selection and ensemble learning,” Computers in Biology and Medicine, vol. 83, pp. 157–165, 2017.
  31. Z. Abbas, M.-U. Rehman, S. Najam, and S. D. Rizvi, “An efficient gray-level co-occurrence matrix (GLCM) based approach towards classification of skin lesion,” in Proceedings of the Amity International Conference on Artificial Intelligence (AICAI), pp. 317–320, IEEE, Dubai, UAE, April 2019.
  32. K. G. Krishnan, P. Vanathi, and R. Abinaya, “Performance analysis of texture classification techniques using shearlet transform,” in Proceedings of the International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 1408–1412, IEEE, Chennai, India, September 2016.
  33. X. Wu, R. He, Z. Sun, and T. Tan, “A light cnn for deep face representation with noisy labels,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 11, pp. 2884–2896, 2018.
  34. O. M. Parkhi, A. Vedaldi, and A. Zisserman, Deep Face Recognition, University of Oxford, Oxford, UK, 2015.
  35. X. Chen, L. Lin, and Y. Gao, “Parallel nonparametric binarization for degraded document images,” Neurocomputing, vol. 189, pp. 43–52, 2016.
  36. B. Bataineh, S. N. H. S. Abdullah, and K. Omar, “An adaptive local binarization method for document images based on a novel thresholding method and dynamic windows,” Pattern Recognition Letters, vol. 32, no. 14, pp. 1805–1813, 2011.
  37. A. Eleyan and H. Demirel, “Co-occurrence based statistical approach for face recognition,” in Proceedings of the 2009 24th International Symposium on Computer and Information Sciences, pp. 611–615, IEEE, Guzelyurt, Cyprus, October 2009.
  38. R. W. Conners and C. A. Harlow, “A theoretical comparison of texture algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 3, pp. 204–222, 1980.
  39. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 6, pp. 610–621, 1974.
  40. M. Yazdi and K. Gheysari, “A new approach for the fingerprint classification based on gray-level co-occurrence matrix,” International Journal of Computer and Information Science and Engineering, vol. 2, no. 3, pp. 171–174, 2008.
  41. Q. Yang, J. Peng, and L. Yulong, “Chinese sign language recognition based on gray-level co-occurrence matrix and other multi-features fusion,” in Proceedings of the 2009 4th IEEE Conference on Industrial Electronics and Applications, pp. 1569–1572, IEEE, Xi’an, China, June 2009.
  42. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
  43. A. Materka and M. Strzelecki, Texture Analysis Methods: A Review, Lodz University of Technology, Łódź, Poland, 1998.
  44. S. M. Ismail and S. Abdullah, “Geometrical-matrix feature extraction for on-line handwritten characters recognition,” Journal of Theoretical and Applied Information Technology, vol. 49, no. 1, pp. 86–93, 2013.
  45. M. Naeimizaghiani, S. N. H. S. Abdullah, B. Bataineh, and F. PirahanSiah, “Character recognition based on global feature extraction,” in Proceedings of the 2011 International Conference on Electrical Engineering and Informatics, IEEE, Bandung, Indonesia, September 2011.
  46. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
  47. T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: application to face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
  48. N. Tan, L. Huang, and C. Liu, “A new probabilistic local binary pattern for face verification,” in Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, November 2009.
  49. F. Perronnin, J. Sánchez, and T. Mensink, “Improving the fisher kernel for large-scale image classification,” in Proceedings of the European Conference on Computer Vision, pp. 143–156, Springer, Crete, Greece, September 2010.
  50. T. Ahonen and M. Pietikäinen, “Soft histograms for local binary patterns,” in Proceedings of the Finnish Signal Processing Symposium, Oulu, Finland, 2007.
  51. D. K. Iakovidis, E. G. Keramidas, and D. Maroulis, “Fuzzy local binary patterns for ultrasound texture characterization,” in Proceedings of the International Conference Image Analysis and Recognition, pp. 750–759, Springer, Halifax, Canada, July 2008.
  52. Z. Li, J.-I. Imai, and M. Kaneko, “Robust face recognition using block-based bag of words,” in Proceedings of the International Conference on Pattern Recognition, pp. 1285–1288, IEEE, Istanbul, Turkey, October 2010.
  53. Y.-S. Wu, H.-S. Liu, G.-H. Ju, T.-W. Lee, and Y.-L. Chiu, “Using the visual words based on affine-sift descriptors for face recognition,” in Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 1–5, IEEE, Hollywood, CA, USA, December 2012.
  54. A. Barcelo, E. Montseny, and P. Sobrevilla, “Fuzzy texture unit and fuzzy texture spectrum for texture characterization,” Fuzzy Sets and Systems, vol. 158, no. 3, pp. 239–252, 2007.
  55. E. G. Keramidas, “Ultrasound image processing and analysis framework,” MSc thesis, University of Athens, Athens, Greece, 2007.
  56. E. G. Keramidas, D. K. Iakovidis, and D. Maroulis, “Noise-robust statistical feature distributions for texture analysis,” in Proceedings of the 16th European Signal Processing Conference, pp. 1–5, IEEE, Lausanne, Switzerland, August 2008.
  57. E. G. Keramidas, D. K. Iakovidis, D. Maroulis, and N. Dimitropoulos, “Thyroid texture representation via noise resistant image features,” in Proceedings of the 21st IEEE International Symposium on Computer-Based Medical Systems, pp. 560–565, IEEE, Jyvaskyla, Finland, July 2008.
  58. E. Keramidas, D. Iakovidis, and D. Maroulis, “Fuzzy binary patterns for uncertainty-aware texture representation,” ELCVIA Electronic Letters on Computer Vision and Image Analysis, vol. 10, no. 1, pp. 63–78, 2012.
  59. M. D. Ansari and S. P. Ghrera, “Intuitionistic fuzzy local binary pattern for features extraction,” International Journal of Information and Communication Technology, vol. 13, no. 1, pp. 83–98, 2018.
  60. M. D. Ansari and S. P. Ghrera, “Feature extraction method for digital images based on intuitionistic fuzzy local binary pattern,” in Proceedings of the 2016 International Conference System Modeling & Advancement in Research Trends (SMART), pp. 345–349, IEEE, Moradabad, India, November 2016.
  61. M. D. Ansari, V. K. Koppula, and S. P. Ghrera, “Fuzzy and entropy based approach for feature extraction from digital image,” Pertanika Journal of Science & Technology, vol. 27, no. 2, pp. 829–846, 2019.
  62. C. E. Thomaz and G. A. Giraldi, “A new ranking method for principal components analysis and its application to face image analysis,” Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.
  63. E. Z. Tenorio and C. E. Thomaz, “Analise multilinear discriminante de formas frontais de imagens 2D de face,” in Proceedings of the X Simpósio Brasileiro de Automação Inteligente SBAI, São João del Rei, Brazil, September 2011. View at: Google Scholar
  64. D. Cai, X. He, J. Han, and H.-J. Zhang, “Orthogonal laplacianfaces for face recognition,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3608–3614, 2006. View at: Publisher Site | Google Scholar
  65. D. Cai, X. He, Y. Hu, J. Han, and T. Huang, “Learning a spatially smooth subspace for face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE Minneapolis, MN, USA, July 2007. View at: Publisher Site | Google Scholar
  66. J. H. Patil and S. N. Talbar, “Face recognition using gabor wavelet, LBP and its variants,” 2013. View at: Google Scholar
  67. E. Dong, Y. Fu, and J. Tong, “Face recognition by PCA and improved LBP fusion algorithm,” Applied Mechanics & Materials, vol. 734, 2014. View at: Publisher Site | Google Scholar
  68. R. Shyam and Y. N. Singh, “Face recognition using augmented local binary pattern and bray curtis dissimilarity metric,” in Proceedings of the 2nd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 779–784, IEEE, Noida, India, February 2015. View at: Publisher Site | Google Scholar
  69. B. Ameur, S. Masmoudi, A. G. Derbel, and A. B. Hamida, “Fusing gabor and LBP feature sets for KNN and SRC-based face recognition,” in Proceedings of the 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), pp. 453–458, IEEE, Monastir, Tunisia, March 2016.
  70. Y. Wang and M. Kosinski, “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images,” Journal of Personality and Social Psychology, vol. 114, no. 2, pp. 246–257, 2018.
  71. A. Tefas, C. Kotropoulos, and I. Pitas, “Variants of dynamic link architecture based on mathematical morphology for frontal face authentication,” in Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Santa Barbara, CA, USA, June 1998.

Copyright © 2020 Mohammed Ahmed Talab et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

