Abstract

Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study, we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, the AR, and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.

1. Introduction

Owing to the rapid development of network and computer technologies, face recognition (FR) plays an important role in many applications, such as video surveillance, human-machine interfaces, and digital entertainment. Many FR methods have been developed over the past two decades [1–5]. Fundamentally, FR is a classification problem.

In a typical FR system, besides face detection and face alignment, there are two main stages. The first is feature extraction, which obtains a set of relevant information from a face image for further classification. Because face images are large, it is desirable to extract features of lower dimension that still facilitate recognition. Many feature extraction methods have been proposed, such as PCA [3, 6], LPP [5], and LDA [7]. The second stage is classification, which builds a classification model and assigns a label to a test face image. There are many classification algorithms; typical ones include nearest neighbor (NN) [8], nearest subspace (NS) [9], and support vector machine (SVM) [10].

Recently, Wright et al. proposed a novel FR method called sparse representation classification (SRC) [11]. In this method, the face images in the training set form a dictionary matrix (each face image is vectorized and becomes a column of the dictionary matrix), and a vectorized test face image is then represented under this dictionary matrix. The representation coefficients provide hints for recognition: if a test face image and a training face image belong to the same subject, then the representation coefficients of the vectorized test face image under the dictionary matrix are sparse (or compressible); that is, most coefficients are zero (or close to zero). For each class (i.e., the columns of the dictionary matrix associated with one subject), one can calculate the reconstruction error of the vectorized test face image using those columns and the associated representation coefficients; the test face image is then assigned to the class with the minimum reconstruction error. More often, one uses a feature vector extracted from a face image, instead of the original vectorized face image, in this method. SRC is robust, achieving good performance even under occlusion and noise.

Following the idea of SRC, a number of SRC-related recognition methods have been proposed. Gao et al. extended the basic SRC method to a kernel version [12]. Yang et al. proposed a face recognition method via sparse coding which is much more robust than SRC in occlusion, corruption, and disguise environments [13]. Some other works improved the basic SRC method using weighted sparse representations [14], Gabor feature-based sparse representations [15], dimensionality reduction [16], locally adaptive sparse representations [17], and supervised sparse representations [18].

Recently, it has been found that algorithms based on a block sparse model [19], instead of algorithms based on the basic sparse representation model, can achieve higher recognition rates in face recognition [20]. However, these algorithms ignore the intrablock correlation in representation coefficients. This correlation exists because the training face images of the same class as a test face image are all correlated with the test face image, so the representation coefficients associated with those training face images are not independent. In sparse reconstruction scenarios, it has been shown that exploiting the intrablock correlation can significantly improve algorithmic performance [21].

In this study, we use block sparse Bayesian learning (BSBL) [21] to estimate the representation coefficients. BSBL has many advantages over existing block-sparse-model-based algorithms, most notably its ability to exploit the intrablock correlation in representation coefficients for better algorithmic performance. Experimental results on the Extended Yale B, the AR, and the CMU PIE databases show that BSBL achieves better results than state-of-the-art SRC algorithms in most cases.

The rest of this paper is organized as follows. Section 2 gives a brief review of the original face recognition method via sparse representation. Section 3 introduces sparse Bayesian learning. The block sparse Bayesian learning approach to face recognition is proposed in Section 4. Experimental results are reported in Section 5. Conclusions are drawn in the last section.

2. Face Recognition via Sparse Representation

We first describe the basic SRC method [11] for face recognition. Given training faces of all $k$ subjects, a dictionary matrix is formed as follows:
\[
\mathbf{A} = [\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_k], \tag{1}
\]
where $\mathbf{A}_i = [\mathbf{v}_{i,1}, \ldots, \mathbf{v}_{i,n_i}]$ and $\mathbf{v}_{i,j}$ is the $j$th face of the $i$th subject. (For simplicity, we describe $\mathbf{v}_{i,j}$ as a vectorized face image. But in practice, $\mathbf{v}_{i,j}$ is a feature vector extracted from the face image, as done in our experiments.) Then, a vectorized test face $\mathbf{y}$ is represented under the dictionary matrix as follows:
\[
\mathbf{y} = \mathbf{A}\mathbf{x}, \tag{2}
\]
where $\mathbf{x}$ is the representation coefficient vector. In the basic SRC method, it is suggested that if the new test face $\mathbf{y}$ belongs to a subject in the training set, say the $i$th subject, then, under a sparsity constraint on $\mathbf{x}$, only the coefficients associated with $\mathbf{A}_i$ are significantly nonzero, while the other coefficients, that is, those associated with $\mathbf{A}_j$ ($j \neq i$), are zero or close to zero.
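For concreteness, the following is a minimal sketch (not the authors' code) of how such a dictionary can be formed with NumPy. The input `faces`, a list holding one list of feature-extracted face arrays per subject, is a hypothetical structure introduced here for illustration; column normalization is a common convention in SRC implementations, assumed rather than prescribed by the paper.

```python
import numpy as np

def build_dictionary(faces):
    """Form A = [A_1, ..., A_k] by stacking vectorized training faces column-wise.

    `faces[i]` is assumed to hold the training images (2-D arrays) of subject i.
    Returns the dictionary matrix and the block size n_i of each subject.
    """
    columns, block_sizes = [], []
    for subject_faces in faces:                    # the columns of one subject form one block A_i
        for img in subject_faces:
            v = img.reshape(-1).astype(float)      # vectorize the face image
            columns.append(v / np.linalg.norm(v))  # unit-norm columns (common SRC convention)
        block_sizes.append(len(subject_faces))
    return np.column_stack(columns), block_sizes
```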

Mathematically, the previous idea can be described as the following sparse representation problem:
\[
\min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{subject to} \quad \mathbf{y} = \mathbf{A}\mathbf{x}, \tag{3}
\]
where $\|\mathbf{x}\|_0$ counts the number of nonzero elements in the vector $\mathbf{x}$. Once we have obtained the solution $\hat{\mathbf{x}}$, the class label of $\mathbf{y}$ can be found by
\[
\operatorname{label}(\mathbf{y}) = \arg\min_i \|\mathbf{y} - \mathbf{A}\,\delta_i(\hat{\mathbf{x}})\|_2, \tag{4}
\]
where $\delta_i(\cdot)$ is the characteristic function which maintains the elements of $\hat{\mathbf{x}}$ associated with the $i$th class while setting all other elements of $\hat{\mathbf{x}}$ to zero.
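A sketch of rule (4), again only illustrative: `x_hat` is the estimated coefficient vector and `block_sizes` records how many training faces each class contributes.

```python
import numpy as np

def classify(y, A, x_hat, block_sizes):
    """Assign y to the class whose coefficients give the smallest residual (rule (4))."""
    residuals, start = [], 0
    for size in block_sizes:
        delta_i = np.zeros_like(x_hat)
        delta_i[start:start + size] = x_hat[start:start + size]  # keep only class-i coefficients
        residuals.append(np.linalg.norm(y - A @ delta_i))        # reconstruction error of class i
        start += size
    return int(np.argmin(residuals))
```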

However, finding the solution to (3) is NP-hard [22]. Recent theories in compressed sensing [23, 24] show that if the true solution is sparse enough, then under some mild conditions it can be found by solving the following convex-relaxation problem:
\[
\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \mathbf{y} = \mathbf{A}\mathbf{x}. \tag{5}
\]
Further, to deal with small dense model noise, the problem (5) can be changed to the following one:
\[
\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \leq \varepsilon, \tag{6}
\]
where $\varepsilon$ is a noise-tolerance constant. Many $\ell_1$-minimization algorithms can be used to find the solution to (5) or (6), such as LASSO [25] and Basis Pursuit Denoising [26].
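In practice, (6) is often solved in its equivalent penalized (LASSO) form. As a sketch under that assumption, scikit-learn's `Lasso` can be used, where the penalty weight `alpha` (a placeholder value below) plays the role of the noise-tolerance trade-off and must be tuned:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(y, A, alpha=1e-3):
    """Solve the penalized form of (6): min ||y - Ax||_2^2 / (2m) + alpha * ||x||_1."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A, y)      # the columns of A act as regressors
    return lasso.coef_   # sparse estimate of x
```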

In a practical face recognition problem, the coefficient vector $\mathbf{x}$ not only is sparse but also is block sparse. To see this, we can rewrite the sparse representation problem (2) as follows:
\[
\mathbf{y} = [\mathbf{A}_1, \ldots, \mathbf{A}_k]
\begin{bmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_k \end{bmatrix}, \tag{7}
\]
where $\mathbf{x}_i$ is the coefficient block associated with the $i$th class and $\mathbf{x} = [\mathbf{x}_1^T, \ldots, \mathbf{x}_k^T]^T$. When a test face $\mathbf{y}$ belongs to the $i$th class, ideally only the elements in $\mathbf{x}_i$ are significantly nonzero; in other words, only the block $\mathbf{x}_i$ has significantly nonzero $\ell_2$ norm. Clearly, this is a canonical block sparse model [19, 27]. Many algorithms for the block sparse model can be used here. For example, in [20] it is suggested to use the following algorithm:
\[
\min_{\mathbf{x}} \sum_{i=1}^{k} \|\mathbf{x}_i\|_2 \quad \text{subject to} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \leq \varepsilon. \tag{8}
\]
This is a natural extension of basic $\ell_1$-minimization algorithms, which imposes the $\ell_2$ norm on the elements within each block and then the $\ell_1$ norm over blocks. It has been shown that exploiting the block structure can largely improve the estimation quality of $\mathbf{x}$ [27–29].
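The program (8) can likewise be solved in penalized form. The sketch below uses proximal gradient descent (ISTA) with block soft-thresholding; the penalty `lam` and the iteration count are hypothetical tuning parameters, not values from the paper.

```python
import numpy as np

def block_l2_l1(y, A, blocks, lam=0.1, n_iter=500):
    """Minimize ||y - Ax||_2^2 / 2 + lam * sum_i ||x_i||_2; `blocks` = index arrays per class."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
    for _ in range(n_iter):
        u = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        for idx in blocks:
            norm = np.linalg.norm(u[idx])
            scale = max(0.0, 1.0 - lam / (L * norm)) if norm > 0 else 0.0
            x[idx] = scale * u[idx]          # block soft-thresholding (prox of the l2 norm)
    return x
```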

However, one should note that when the test face belongs to the $i$th class, not only is the representation coefficient block $\mathbf{x}_i$ a nonzero block, but its elements are also correlated in amplitude. The correlation arises because the faces of the $i$th class in the training set are all correlated with the test face, and thus the elements in $\mathbf{x}_i$ are mutually dependent. It has been shown that exploiting the correlation within blocks can further improve the estimation quality of $\mathbf{x}$ beyond what exploiting the block structure alone achieves [21, 30].

Therefore, in this study, we propose to use block sparse Bayesian learning (BSBL) [21] to estimate $\mathbf{x}$ by exploiting both the block structure and the correlation within blocks. In the next section, we first briefly introduce sparse Bayesian learning (SBL) and then introduce BSBL.

3. SBL and BSBL

SBL [31] was initially proposed as a machine learning method, but it has since been shown to be a powerful method for sparse representation, sparse signal recovery, and compressed sensing.

3.1. Advantages of SBL

Compared to LASSO-type algorithms (such as the original LASSO algorithm, Basis Pursuit Denoising, Group LASSO, Group Basis Pursuit, and other algorithms based on $\ell_1$-minimization), SBL has the following advantages [32, 33].

(1) Its recovery performance is robust to the characteristics of the matrix $\mathbf{A}$, while that of most other algorithms is not. For example, it has been shown that when the columns of $\mathbf{A}$ are highly coherent, SBL still maintains good performance, while algorithms such as LASSO degrade seriously [34]. This advantage is very attractive for sparse representation and related applications, since in these applications the matrix $\mathbf{A}$ is not a random matrix and its columns are highly coherent.

(2) SBL has a number of desirable properties over many popular algorithms in terms of local and global convergence. It can be shown that SBL provides a sparser solution than LASSO-type algorithms. In particular, in noiseless situations and under certain conditions, the global minimum of the SBL cost function is unique and corresponds to the true sparsest solution, while the global minimum of the cost function of LASSO-type algorithms is not necessarily the true sparsest solution [35]. These properties imply that SBL is a better choice for feature selection via sparse representation [36].

(3) Recent works in SBL [21, 37] provide robust learning rules for automatically estimating the value of its regularizer (related to the noise variance) such that SBL algorithms achieve good performance. In contrast, LASSO-type algorithms generally require users to choose the value of such a regularizer, often by cross-validation. This takes a long time on large-scale datasets, which is inconvenient and even impossible in some scenarios.

3.2. Introduction to BSBL

BSBL [21] is an extension of the basic SBL framework which exploits a block structure and intrablock correlation in the coefficient vector $\mathbf{x}$. It is based on the assumption that $\mathbf{x}$ can be partitioned into $g$ nonoverlapping blocks:
\[
\mathbf{x} = [\mathbf{x}_1^T, \ldots, \mathbf{x}_g^T]^T. \tag{9}
\]
Among these blocks, only a few are nonzero. Each block $\mathbf{x}_i$ is assumed to satisfy a parameterized multivariate Gaussian distribution:
\[
p(\mathbf{x}_i; \gamma_i, \mathbf{B}_i) = \mathcal{N}(\mathbf{0}, \gamma_i \mathbf{B}_i), \quad i = 1, \ldots, g, \tag{10}
\]
with the unknown parameters $\gamma_i$ and $\mathbf{B}_i$. Here, $\gamma_i$ is a nonnegative parameter controlling the block-sparsity of $\mathbf{x}$: when $\gamma_i = 0$, the $i$th block becomes zero. During the learning procedure most $\gamma_i$ tend to zero, due to the mechanism of automatic relevance determination [31]; thus, sparsity at the block level is encouraged. $\mathbf{B}_i$ is a positive definite and symmetric matrix, capturing the intrablock correlation of the $i$th block. Under the assumption that the blocks are mutually uncorrelated, the prior of $\mathbf{x}$ is $p(\mathbf{x}) = \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_0)$, where $\boldsymbol{\Sigma}_0 = \operatorname{diag}(\gamma_1 \mathbf{B}_1, \ldots, \gamma_g \mathbf{B}_g)$ is a block diagonal matrix whose principal diagonal blocks are $\gamma_i \mathbf{B}_i$. To avoid overfitting, all the $\mathbf{B}_i$ are subject to some constraints and their estimates are further regularized. The model noise is assumed to satisfy $p(\mathbf{n}) = \mathcal{N}(\mathbf{0}, \lambda \mathbf{I})$, where $\lambda$ is a positive scalar to be estimated. Based on these probability models, one can obtain a closed-form posterior distribution. Therefore, the estimate of $\mathbf{x}$ can be obtained by Maximum-A-Posteriori (MAP) estimation, provided that all the parameters have been estimated.
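As a sketch, the block diagonal prior covariance $\boldsymbol{\Sigma}_0$ can be assembled as follows. Modeling each $\mathbf{B}_i$ as a first-order autoregressive (AR) correlation matrix with a single coefficient `r` is one regularized parameterization used in [21], though the value of `r` below is only a placeholder.

```python
import numpy as np
from scipy.linalg import block_diag, toeplitz

def prior_covariance(gammas, block_sizes, r=0.9):
    """Build Sigma_0 = diag(gamma_1 B_1, ..., gamma_g B_g) with AR(1)-structured B_i."""
    diag_blocks = []
    for gamma, d in zip(gammas, block_sizes):
        B = toeplitz(r ** np.arange(d))   # intrablock correlation matrix B_i
        diag_blocks.append(gamma * B)     # gamma_i = 0 zeroes out the whole block
    return block_diag(*diag_blocks)
```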

To estimate the parameters, one can use the type II maximum likelihood method [31, 38]. This is equivalent to minimizing the following cost function:
\[
\mathcal{L}(\Theta) = \log\left|\lambda \mathbf{I} + \mathbf{A} \boldsymbol{\Sigma}_0 \mathbf{A}^T\right| + \mathbf{y}^T \left(\lambda \mathbf{I} + \mathbf{A} \boldsymbol{\Sigma}_0 \mathbf{A}^T\right)^{-1} \mathbf{y}, \tag{11}
\]
where $\Theta$ denotes all the parameters; that is, $\Theta = \{\lambda, \{\gamma_i\}_{i=1}^g, \{\mathbf{B}_i\}_{i=1}^g\}$. There are several optimization methods to minimize this cost function, such as the expectation-maximization method, the bound-optimization method, and the duality method. This framework is called the BSBL framework.
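For illustration, the cost (11) is straightforward to evaluate numerically for a given parameter set, which is useful for monitoring the convergence of whichever optimizer is used; a minimal sketch:

```python
import numpy as np

def bsbl_cost(y, A, Sigma0, lam):
    """Evaluate L(Theta) = log|C| + y^T C^{-1} y with C = lam*I + A Sigma0 A^T."""
    C = lam * np.eye(A.shape[0]) + A @ Sigma0 @ A.T  # marginal covariance of y
    _, logdet = np.linalg.slogdet(C)                 # numerically stable log-determinant
    return logdet + y @ np.linalg.solve(C, y)
```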

BSBL not only has the advantages of the basic SBL listed in Section 3.1, but also has two further advantages.

(1) BSBL provides large flexibility to model and exploit correlation structures in signals, such as the intrablock correlation [21, 30]. By exploiting these correlation structures, recovery performance is significantly improved.

(2) BSBL has the unique ability to find less-sparse [39] and nonsparse [30] true solutions with very small errors. (Note that for an underdetermined inverse problem $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x}$, where $\boldsymbol{\Phi}$ is a single matrix or the product of a sensing matrix and a dictionary matrix as used in compressed sensing, one cannot find the true solution without any error if the true solution is nonsparse.) This is attractive for practical use, since in practice the true solutions may not be very sparse, and existing sparse signal recovery algorithms generally fail in this case.

Therefore, BSBL is promising for pattern recognition. In the following, we use BSBL for face recognition. Among a number of BSBL algorithms, we choose the bound-optimization-based BSBL algorithm [21], denoted by BSBL-BO. (The BSBL-BO code can be downloaded at http://dsp.ucsd.edu/~zhilin/BSBL.html.)

4. Face Recognition via BSBL

As stated in Section 2, we use BSBL-BO to estimate $\mathbf{x}$ in (2), obtaining $\hat{\mathbf{x}}$, and then use rule (4) to assign a test face to a class.

In practice, a test face $\mathbf{y}$ may contain some outliers; that is, $\mathbf{y} = \tilde{\mathbf{y}} + \mathbf{e}$, where $\tilde{\mathbf{y}}$ is the outlier-free face image and $\mathbf{e}$ is a vector whose every nonzero entry is an outlier. Generally, the number of outliers is small, and thus $\mathbf{e}$ is sparse. Addressing the outlier issue is important to a practical face recognition system. In [11], an augmented sparse model was used to deal with this issue. We now extend this method to our block sparse model and use BSBL-BO to estimate the solution. In particular, we adopt the following augmented block sparse model:
\[
\mathbf{y} = [\mathbf{A}, \mathbf{I}] \begin{bmatrix} \mathbf{x} \\ \mathbf{e} \end{bmatrix} + \mathbf{n} \triangleq \mathbf{B}\mathbf{w} + \mathbf{n}, \tag{12}
\]
where $\mathbf{n}$ is a vector modeling dense Gaussian noise, $\mathbf{B} \triangleq [\mathbf{A}, \mathbf{I}]$, and $\mathbf{w} \triangleq [\mathbf{x}^T, \mathbf{e}^T]^T$. Here, $\mathbf{I}$ is an identity matrix of dimension $m \times m$, where $m$ is the dimension of $\mathbf{y}$. Clearly, $\mathbf{w}$ is also a block sparse vector, whose first $k$ blocks are the blocks of $\mathbf{x}$ and whose last $m$ elements are blocks of size 1. (In experiments we found that treating these last $m$ elements as one big block resulted in similar performance while significantly speeding up the algorithm.) Thus, (12) is still a block sparse model and can be solved by BSBL-BO. Once BSBL-BO obtains the solution, denoted by $\hat{\mathbf{w}}$, its first $k$ blocks (denoted by $\hat{\mathbf{x}}$) and its last $m$ elements (denoted by $\hat{\mathbf{e}}$) are used to assign $\mathbf{y}$ to a class according to
\[
\operatorname{label}(\mathbf{y}) = \arg\min_i \|\mathbf{y} - \mathbf{A}\,\delta_i(\hat{\mathbf{x}}) - \hat{\mathbf{e}}\|_2. \tag{13}
\]
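A sketch of the augmented model (12) and decision rule (13) follows; `solve_bsbl` stands in for BSBL-BO (or any block-sparse solver) and is a hypothetical callable taking the observation, the dictionary, and the list of block sizes.

```python
import numpy as np

def classify_with_outliers(y, A, block_sizes, solve_bsbl):
    """Estimate w = [x; e] from y = [A, I] w + n and apply rule (13)."""
    m = A.shape[0]
    B = np.hstack([A, np.eye(m)])          # augmented dictionary [A, I]
    # one block per class, plus the outlier part treated as one big block
    w_hat = solve_bsbl(y, B, list(block_sizes) + [m])
    x_hat, e_hat = w_hat[:-m], w_hat[-m:]  # representation coefficients and outliers
    residuals, start = [], 0
    for size in block_sizes:
        delta_i = np.zeros_like(x_hat)
        delta_i[start:start + size] = x_hat[start:start + size]
        residuals.append(np.linalg.norm(y - A @ delta_i - e_hat))  # rule (13)
        start += size
    return int(np.argmin(residuals))
```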

We now take the Extended Yale B database [40] as an example to show how our method works. As in SRC [11], we randomly select half of the total 2414 faces (i.e., 1207 faces) as the training set and the rest as the testing set. Each face is downsampled from $192 \times 168$ to $24 \times 21$, that is, to a 504-dimensional vector. The training set contains 38 subjects, each with about 32 faces. Therefore, in our model there are $k = 38$ blocks, each of size about 32. The matrix $\mathbf{A}$ has size $504 \times 1207$, and thus the matrix $\mathbf{B} = [\mathbf{A}, \mathbf{I}]$ has size $504 \times 1711$.

The procedure is illustrated in Figure 1. Figure 1(a) shows that a test face (belonging to Subject 4) can be linearly combined from a few training faces. Most of the coefficients estimated by BSBL-BO (i.e., $\hat{\mathbf{x}}$) are zero or near zero, and only those associated with the test face's subject are significantly nonzero. Figure 1(b) shows the residuals for the 38 classes. The residual for Subject 4 is 0.0008, while the residuals for all other subjects are close to 1, which makes it easy to assign the test face to Subject 4. See Section 5.1.1 for more details.

5. Experimental Results

To demonstrate the superior performance of BSBL, we performed experiments on three widely used face databases: the Extended Yale B [40], AR [41], and CMU PIE [42] databases. The face images in these three databases were captured under varying lighting, pose, or facial expression. The AR database also contains occluded face images for testing the robustness of face recognition algorithms. Section 5.1 reports experimental results on face images without occlusion, and Section 5.2 reports results on face images with three kinds of occlusion.

5.1. Face Recognition without Occlusion

For the experiments on face images without occlusion, we used downsampling, Eigenfaces [3, 6], and Laplacianfaces [5] to reduce the dimensionality of the original faces. We compared our method with three classical methods: nearest neighbor (NN) [8], nearest subspace (NS) [9], and support vector machine (SVM) [10]. We also compared our method with recently proposed sparse-representation-based classification methods: the basic sparse representation classifier (SRC) [11] and block-sparse recovery via convex optimization (BSCO) [20]. For NS, the subspace dimension was fixed at 9. For BSCO, we used the algorithm in [20] that has been shown to be the best among all the structured-sparsity-based classifiers proposed in that work.

5.1.1. Extended Yale B Database

The Extended Yale B database [40] consists of 2414 frontal-face images of 38 subjects (each subject has about 64 images). In the experiment, we used the cropped face images captured under various lighting conditions [43]. Two subjects are shown in Figure 2 for illustration (for each subject, only 10 face images are shown). We randomly selected half of the face images of each subject as the training set and the rest as the testing set. We used downsampling, Eigenfaces, and Laplacianfaces to extract features from the face images, with feature dimensions of 30, 56, 120, and 504.

Experimental results are shown in Figure 3, where we can see that our method uniformly outperformed the other algorithms regardless of the features used. The superiority of our method was clearest when the feature dimension was small and Laplacianfaces were used. For example, when the feature dimension was 56, our method achieved the highest rate of 98.9%, while NN, NS, SVM, SRC, and BSCO achieved 83.5%, 90.4%, 85.0%, 91.7%, and 79.4%, respectively. High performance with low-dimensional features is attractive for recognition, since a lower feature dimension generally implies a lower computational load.

5.1.2. AR Database

The AR database [41] consists of more than 4000 frontal-face images of 126 subjects. Each subject has 26 images taken in two separate sessions, as shown in Figure 4. This database includes more variation in facial expression and disguise. We chose 100 subjects (50 male and 50 female) for this experiment. For each subject, seven face images with varying illumination and facial expression (i.e., the first 7 images of each subject) in Session 1 were selected for training, and the first 7 images of each subject in Session 2 were used for testing. All the images were converted to grayscale and resized. Downsampled faces, Eigenfaces, and Laplacianfaces were applied with dimensions of 30, 54, 130, and 540. Experimental results are shown in Figure 5.

From Figure 5(a), we can see that our algorithm significantly outperformed the other classifiers when using downsampled features. However, our method did not achieve the highest rate when using Eigenfaces and Laplacianfaces. This might be due to the small block size in this experiment (each subject contributes only 7 training faces, so each block has only 7 coefficients). Although our method did not uniformly outperform the other algorithms across the different face features, the recognition rate achieved by our method using downsampled faces (96.7%) was not exceeded by any other algorithm using any face feature.

5.1.3. CMU PIE Database

The CMU PIE database [42] consists of 41,368 face images of 68 subjects under different poses, illumination conditions, and expressions. We chose one subset (C29), which includes 1632 face images of 68 subjects (24 images per subject), for this experiment. The first subject in this subset is shown in Figure 6; the images vary in pose, illumination, and expression. All the images were cropped and resized. For each subject, we randomly selected 10 images for training and the rest (14 images per subject) for testing. Downsampled faces, Eigenfaces, and Laplacianfaces were applied with four dimensions: 36, 64, 144, and 256. Experimental results are shown in Figure 7.

From Figure 7(a), we can see that the sparse-representation-based classifiers usually outperformed the classical ones on this dataset. Among the sparse-representation-based classifiers, BSBL and BSCO achieved higher recognition rates than SRC. Comparing the two, BSBL slightly outperformed BSCO with downsampled faces and Laplacianfaces, while BSCO outperformed BSBL with Eigenfaces. Specifically, the best rate in each feature space was achieved by BSBL with downsampled faces (95.80%) and with Laplacianfaces (94.12%), and by BSCO with Eigenfaces (98.42%, the highest in this experiment). Nevertheless, BSBL outperformed BSCO in 8 out of 12 combinations of dimension and feature.

5.2. Face Recognition with Occlusion

For the experiments on face images with occlusion, we used downsampling to reduce the size of face images and compared our method with NN [8], SRC [11], and BSCO [20].

5.2.1. Face Recognition with Pixel Corruption

We tested face recognition with pixel corruption on 3 subsets of the Extended Yale B database: 719 face images with normal-to-moderate lighting conditions from Subsets 1 and 2 were used for training, and 455 face images with more extreme lighting conditions from Subset 3 were used for testing. For each test image, we first replaced a certain percentage (0%–50%) of its original pixels with gray values drawn uniformly from [0, 255]. Both the gray values and their locations were random and hence unknown to the algorithms. We then downsampled all the images to four different sizes. Two corrupted face images are shown in Figures 8(a)-8(b).
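For reproducibility, a sketch of this corruption protocol (assuming 8-bit grayscale images) is given below; the block-occlusion experiment in the next subsection can be implemented analogously by overwriting a random square region with values from an unrelated image.

```python
import numpy as np

def corrupt_pixels(img, fraction, rng=None):
    """Replace a given fraction of pixels with uniform gray values in [0, 255]."""
    if rng is None:
        rng = np.random.default_rng()
    corrupted = img.astype(float).copy()
    n_corrupt = int(fraction * img.size)                       # e.g., fraction = 0.5 for 50%
    idx = rng.choice(img.size, size=n_corrupt, replace=False)  # random, unknown locations
    corrupted.flat[idx] = rng.uniform(0, 255, size=n_corrupt)  # random gray values
    return corrupted
```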

Results are shown in Table 1. In almost all combinations of dimension and corruption level, BSBL achieved the highest recognition rate compared with NN and SRC, and the performance gap between our algorithm and the compared algorithms was very large. For example, at one dimension with 50% of the pixels corrupted, BSBL achieved a recognition rate of 67.25%, while SRC achieved only 46.37%. Meanwhile, BSBL outperformed BSCO in 17 out of 24 combinations of dimension and corruption level. Figure 9(a) shows the recognition rates of the four algorithms at different pixel corruption levels for one fixed dimension.

5.2.2. Face Recognition with Block Occlusion

In this experiment, we used the same training and testing images as in the pixel corruption experiment. For each test image, we replaced a randomly located square block with an unrelated image (the baboon image used in SRC [11]), occluding 0%–50% of the original test image. We then downsampled all the images to the same four sizes as in the previous experiment. Two occluded face images are shown in Figures 8(c)-8(d).

Table 2 shows the recognition rates of NN, SRC, BSCO, and BSBL for different dimensions and percentages of occlusion. Again, BSBL outperformed the compared algorithms in most cases. For example, when the occlusion percentage ranged from 10% to 50%, BSBL achieved recognition rates about 8.35%–13.19% higher than BSCO at the dimension shown in Figure 9(b).

5.2.3. Face Recognition with Real Face Disguise

We used a subset of the AR database to test the performance of our method on face recognition with disguise. We chose 799 images of various facial expressions without occlusion (i.e., the first 4 face images in each session, excluding a corrupted image named “W-027-14.bmp”) for training. We formed two separate testing sets of 200 images each. The images in the first set show the neutral expression with sunglasses (the 8th image in each session), which cover roughly 20% of the face, while those in the second set show the neutral expression with scarves (the 11th image in each session), which cover roughly 40% of the face. All the images were resized to four different sizes.

Results are shown in Table 3. In the case of the neutral expression with sunglasses, both SRC and NN achieved higher recognition rates than BSCO and BSBL. However, in the case of the neutral expression with scarves, BSBL significantly outperformed NN, SRC, and BSCO. Overall, BSBL achieved the highest recognition rates for the two testing sets, 72.50% and 74.50%, at two of the four dimensions, while SRC achieved the highest rates, 28.25% and 44.00%, at the other two dimensions.

6. Conclusions

Classification via sparse representation is a popular methodology in face recognition and other classification tasks. Recently, it was found that using block-sparse representation, instead of the basic sparse representation, can yield better classification performance. In this paper, by introducing a recently proposed block sparse Bayesian learning (BSBL) algorithm, we showed that BSBL is a better framework than the basic block-sparse representation framework, owing to its various advantages over the latter. Experiments on common face databases confirmed that BSBL is a promising sparse-representation-based classifier.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant no. 60903128) and also by the Fundamental Research Funds for the Central Universities (Grants nos. JBK130142 and JBK130503).