Mathematical Problems in Engineering


Research Article | Open Access

Volume 2013 | Article ID 695976 | 13 pages | https://doi.org/10.1155/2013/695976

Robust Face Recognition via Block Sparse Bayesian Learning

Academic Editor: Hai-Feng Huo
Received: 27 May 2013
Accepted: 10 Sep 2013
Published: 27 Nov 2013

Abstract

Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study, we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, the AR, and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.

1. Introduction

Owing to the rapid development of network and computer technologies, face recognition (FR) plays an important role in many applications, such as video surveillance, human-machine interfaces, and digital entertainment. Many FR methods have been developed over the past two decades [1-5]. Fundamentally, FR is a classification problem.

In a typical FR system, besides face detection and face alignment, there are two main stages. One is feature extraction, which obtains a set of relevant information from a face image for further classification. Because face images are high dimensional, it is desirable to extract lower-dimensional features from each face image that facilitate recognition. Many feature extraction methods have been proposed, such as PCA [3, 6], LPP [5], and LDA [7]. The other stage is classification, which builds a classification model and assigns a label to a test face image. There are many classification algorithms; typical ones include nearest neighbor (NN) [8], nearest subspace (NS) [9], and support vector machine (SVM) [10].

Recently, Wright et al. proposed a novel FR method called sparse representation classification (SRC) [11]. In this method, the face images in the training set form a dictionary matrix (each face image is vectorized and forms a column of the dictionary matrix), and a vectorized test face image is then represented under this dictionary matrix. The representation coefficients provide hints for recognition: if a test face image and a training face image belong to the same subject, then the representation coefficients of the vectorized test face image under the dictionary matrix are sparse (or compressive); that is, most coefficients are zero (or close to zero). For each class (i.e., the columns in the dictionary matrix associated with one subject), one can calculate the reconstruction error of the vectorized test face image using these columns and the associated representation coefficients. The test face image is assigned to the class with the minimum reconstruction error. More often, one uses a feature vector extracted from a face image, instead of the original vectorized face image, in this method. SRC is robust, achieving good performance even under occlusion and noise.

Following the idea of SRC, a number of SRC-related recognition methods have been proposed. Gao et al. extended the basic SRC method to a kernel version [12]. Yang et al. proposed a face recognition method via sparse coding which is much more robust than SRC in occlusion, corruption, and disguise environments [13]. Some other works improved the basic SRC method using weighted sparse representations [14], Gabor feature-based sparse representations [15], dimensionality reduction [16], locally adaptive sparse representations [17], and supervised sparse representations [18].

Recently, it has been found that algorithms based on a block sparse model [19], instead of the basic sparse representation model, can achieve higher recognition rates in face recognition [20]. However, these algorithms ignore the intrablock correlation in representation coefficients. This correlation exists because the training face images of the same class as a test face image are all correlated with the test face image, and thus the representation coefficients associated with those training face images are not independent. In sparse reconstruction scenarios, it has been shown that exploiting the intrablock correlation can significantly improve algorithmic performance [21].

In this study, we use block sparse Bayesian learning (BSBL) [21] to estimate the representation coefficients. BSBL has many advantages over existing block-sparse-model-based algorithms, especially the fact that it has the ability to exploit the intrablock correlation in representation coefficients for better algorithmic performance. Experimental results on the Extended Yale B, the AR, and the CMU PIE databases show that BSBL achieves better results than state-of-the-art SRC algorithms in most cases.

The rest of this paper is organized as follows. Section 2 gives a brief review of the original face recognition method via sparse representation. Section 3 introduces sparse Bayesian learning. The block sparse Bayesian learning approach for face recognition is presented in Section 4. Experimental results are reported in Section 5. Conclusions are drawn in the last section.

2. Face Recognition via Sparse Representation

We first describe the basic SRC method [11] for face recognition. Given the training faces of all k subjects, a dictionary matrix is formed as follows:

    A = [A_1, A_2, ..., A_k],    A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}],    (1)

where v_{i,j} is the j-th face of the i-th subject. (For simplicity, we describe v_{i,j} as a vectorized face image. But in practice, v_{i,j} is a feature vector extracted from the face image, as done in our experiments.) Then, a vectorized test face y is represented under the dictionary matrix as follows:

    y = A x,    (2)

where x is the representation coefficient vector. In the basic SRC method, it is suggested that if the new test face belongs to a subject in the training set, say the i-th subject, then, under a sparsity constraint on x, only the coefficients associated with A_i are significantly nonzero, while the other coefficients are zero or close to zero.
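As a concrete illustration, the dictionary construction above can be sketched in a few lines of NumPy. The subject count, faces per subject, feature length, and random data below are toy values for illustration, not the paper's settings.

```python
# Sketch of the dictionary construction A = [A_1, ..., A_k]:
# one block of columns per subject, unit-norm columns as in SRC.
import numpy as np

rng = np.random.default_rng(0)
k, n_i, feature_dim = 3, 4, 30                     # toy sizes

blocks = []
for i in range(k):
    A_i = rng.standard_normal((feature_dim, n_i))  # stand-in face features
    A_i /= np.linalg.norm(A_i, axis=0)             # normalize each column
    blocks.append(A_i)
A = np.hstack(blocks)                              # A = [A_1, ..., A_k]

assert A.shape == (feature_dim, k * n_i)           # (30, 12)
```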

Mathematically, the previous idea can be described as the following sparse representation problem:

    min_x ||x||_0    subject to    y = A x,    (3)

where ||x||_0 counts the number of nonzero elements in the vector x. Once we have obtained the solution x̂, the class label of y can be found by

    identity(y) = argmin_i ||y - A δ_i(x̂)||_2,    (4)

where δ_i is the characteristic function which keeps the elements of x̂ associated with the i-th class and sets the other elements of x̂ to zero.
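The classification rule above (keep one class's coefficients via the characteristic function, then pick the class with the smallest residual) can be sketched as follows; the data and class layout are synthetic and illustrative.

```python
# Sketch of the residual-based classification rule: delta_i keeps the
# coefficients of class i and zeroes the rest; smallest residual wins.
import numpy as np

def classify(A, y, x_hat, class_index):
    """class_index[j] is the class label of column j of A."""
    labels = np.unique(class_index)
    residuals = []
    for i in labels:
        delta_i_x = np.where(class_index == i, x_hat, 0.0)   # delta_i(x_hat)
        residuals.append(np.linalg.norm(y - A @ delta_i_x))
    return labels[int(np.argmin(residuals))]

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 12))
class_index = np.repeat([0, 1, 2], 4)            # 3 classes, 4 faces each
x = np.zeros(12)
x[4:8] = rng.standard_normal(4)                  # only class 1 is active
y = A @ x
assert classify(A, y, x, class_index) == 1       # residual for class 1 is zero
```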

However, finding the solution to (3) is NP-hard [22]. Recent theories in compressed sensing [23, 24] show that if the true solution is sparse enough, then under some mild conditions it can be found by solving the following convex-relaxation problem:

    min_x ||x||_1    subject to    y = A x.    (5)

Further, to deal with small dense model noise, problem (5) can be changed to the following one:

    min_x ||x||_1    subject to    ||y - A x||_2 ≤ ε,    (6)

where ε is a noise-tolerance constant. Many ℓ1-minimization algorithms can be used to find the solution to (5) or (6), such as LASSO [25] and Basis Pursuit Denoising [26].
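Many off-the-shelf solvers handle such ℓ1 problems. As an illustrative sketch only (not one of the solvers cited above), here is a minimal iterative soft-thresholding (ISTA) solver for the Lagrangian form of the problem; the penalty weight and iteration count are arbitrary choices.

```python
# Minimal ISTA sketch for  min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1.
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=1000):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step on data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 50]] = [1.0, -2.0]                      # 2-sparse ground truth
x_hat = ista_l1(A, A @ x_true)
# The two truly active coefficients dominate the recovered vector.
assert set(np.argsort(np.abs(x_hat))[-2:]) == {3, 50}
```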

In a practical face recognition problem, the coefficient vector x not only is sparse but also is block sparse. To see this, we can rewrite the sparse representation problem (2) as follows:

    y = A x = [A_1, ..., A_k] [x_1^T, ..., x_k^T]^T = Σ_{i=1}^{k} A_i x_i,    (7)

where x_i is the coefficient block associated with the i-th class. When a test face belongs to the i-th class, ideally only the elements in x_i are significantly nonzero. In other words, only the block x_i has significantly nonzero ℓ2 norm. Clearly, this is a canonical block sparse model [19, 27]. Many algorithms for the block sparse model can be used here. For example, in [20], it is suggested to use the following algorithm:

    min_x Σ_{i=1}^{k} ||x_i||_2    subject to    ||y - A x||_2 ≤ ε.    (8)

This is a natural extension of basic ℓ1-minimization algorithms, which imposes the ℓ2 norm on the elements within each block and then the ℓ1 norm over the blocks. It has been shown that exploiting the block structure can largely improve the estimation quality of x [27-29].
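The ℓ2/ℓ1 block-sparse program described above can be sketched in its Lagrangian form with a block variant of ISTA (group soft thresholding); again this is an illustrative toy solver, and the block layout and parameters are assumptions, not the paper's setup.

```python
# Block ISTA sketch: shrink the l2 norm of each block, driving whole
# blocks toward zero instead of individual coefficients.
import numpy as np

def block_ista(A, y, blocks, lam=0.1, n_iter=500):
    """blocks: list of index arrays, one per coefficient block x_i."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        for idx in blocks:                          # shrink each block's l2 norm
            nrm = np.linalg.norm(g[idx])
            scale = max(0.0, 1.0 - (lam / L) / nrm) if nrm > 0 else 0.0
            x[idx] = scale * g[idx]
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 60))
blocks = [np.arange(i, i + 10) for i in range(0, 60, 10)]   # 6 blocks of size 10
x_true = np.zeros(60)
x_true[10:20] = rng.standard_normal(10)                     # only block 1 active
x_hat = block_ista(A, A @ x_true, blocks)
energies = [np.linalg.norm(x_hat[idx]) for idx in blocks]
assert int(np.argmax(energies)) == 1        # the active block dominates
```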

However, one should note that when the test face belongs to the i-th class, the representation coefficient block x_i is not only a nonzero block, but its elements are also correlated in amplitude. The correlation arises because the faces of the i-th class in the training set are all correlated with the test face, and thus the elements in x_i are mutually dependent. It has been shown that exploiting the correlation within blocks can further improve the estimation quality of x [21, 30], compared with exploiting the block structure alone.

Therefore, in this study, we propose to use block sparse Bayesian learning (BSBL) [21] to estimate x by exploiting both the block structure and the correlation within blocks. In the next section, we first briefly introduce sparse Bayesian learning (SBL) and then introduce BSBL.

3. SBL and BSBL

SBL [31] was initially proposed as a machine learning method, but it has since been shown to be a powerful method for sparse representation, sparse signal recovery, and compressed sensing.

3.1. Advantages of SBL

Compared to LASSO-type algorithms (such as the original LASSO algorithm, Basis Pursuit Denoising, Group LASSO, Group Basis Pursuit, and other algorithms based on ℓ1-minimization), SBL has the following advantages [32, 33].

(1) Its recovery performance is robust to the characteristics of the dictionary matrix A, while most other algorithms are not. For example, it has been shown that when the columns of A are highly coherent, SBL still maintains good performance, while algorithms such as LASSO have seriously degraded performance [34]. This advantage is very attractive for sparse representation and other applications, since in these applications the matrix A is not a random matrix and its columns are highly coherent.

(2) SBL has a number of desirable properties over many popular algorithms in terms of local and global convergence. It can be shown that SBL provides a sparser solution than LASSO-type algorithms. In particular, in noiseless situations and under certain conditions, the global minimum of the SBL cost function is unique and corresponds to the true sparsest solution, while the global minimum of the cost function of LASSO-type algorithms is not necessarily the true sparsest solution [35]. These advantages imply that SBL is a better choice for feature selection via sparse representation [36].

(3) Recent works on SBL [21, 37] provide robust learning rules for automatically estimating the value of its regularizer (related to the noise variance) such that SBL algorithms achieve good performance. In contrast, LASSO-type algorithms generally need the user to choose the regularizer value, often by cross-validation. Cross-validation takes a long time on large-scale datasets, which is inconvenient and even infeasible in some scenarios.

3.2. Introduction to BSBL

BSBL [21] is an extension of the basic SBL framework, which exploits the block structure and the intrablock correlation in the coefficient vector x. It is based on the assumption that x can be partitioned into g nonoverlapping blocks:

    x = [x_1^T, x_2^T, ..., x_g^T]^T.    (9)

Among these blocks, only a few are nonzero. Each block x_i is assumed to satisfy a parameterized multivariate Gaussian distribution:

    p(x_i; γ_i, B_i) = N(0, γ_i B_i),    i = 1, ..., g,    (10)

with the unknown parameters γ_i and B_i. Here, γ_i is a nonnegative parameter controlling the block sparsity of x. When γ_i = 0, the i-th block becomes zero. During the learning procedure most γ_i tend to zero, due to the mechanism of automatic relevance determination [31]. Thus, sparsity at the block level is encouraged. B_i is a positive definite and symmetric matrix, capturing the intrablock correlation of the i-th block. Under the assumption that the blocks are mutually uncorrelated, the prior of x is p(x) = N(0, Σ_0), where Σ_0 = diag{γ_1 B_1, ..., γ_g B_g} is a block diagonal matrix whose principal diagonal blocks are γ_i B_i. To avoid overfitting, constraints are imposed on all the B_i and their estimates are further regularized. The model noise n is assumed to satisfy p(n) = N(0, λ I), where λ is a positive scalar to be estimated. Based on the above probability models, one can obtain a closed-form posterior distribution. Therefore, the estimate of x can be obtained by maximum a posteriori (MAP) estimation, provided all the parameters are estimated.
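Given fixed hyperparameters, the Gaussian model above yields the closed-form posterior-mean estimate x̂ = Σ_0 A^T (λI + A Σ_0 A^T)^{-1} y. The sketch below evaluates this formula with toy hyperparameters chosen by hand; learning the hyperparameters (as BSBL does) is not shown here.

```python
# Posterior-mean (MAP) estimate under the BSBL prior with known
# hyperparameters gamma_i, B_i, lam. All sizes and values are toy choices.
import numpy as np

rng = np.random.default_rng(4)
m, bs, g = 20, 4, 5                          # measurements; g blocks of size bs
A = rng.standard_normal((m, bs * g))
gammas = [1.0, 0.0, 0.0, 0.5, 0.0]           # only blocks 0 and 3 are "on"
B = np.eye(bs)                               # intrablock correlation (identity)

Sigma0 = np.zeros((bs * g, bs * g))          # block diagonal prior covariance
for i, gam in enumerate(gammas):
    Sigma0[bs * i:bs * (i + 1), bs * i:bs * (i + 1)] = gam * B

lam = 1e-4                                   # noise variance
x_true = np.zeros(bs * g)
x_true[:bs] = rng.standard_normal(bs)        # block 0 carries the signal
y = A @ x_true

x_hat = Sigma0 @ A.T @ np.linalg.solve(lam * np.eye(m) + A @ Sigma0 @ A.T, y)
# Blocks whose gamma_i is zero are forced exactly to zero by the prior.
assert np.allclose(x_hat[bs * 1:bs * 3], 0.0)
assert np.allclose(x_hat[bs * 4:], 0.0)
```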

To estimate the parameters, one can use the Type II maximum likelihood method [31, 38]. This is equivalent to minimizing the following cost function:

    L(Θ) = log|λ I + A Σ_0 A^T| + y^T (λ I + A Σ_0 A^T)^{-1} y,    (11)

where Θ denotes all the parameters; that is, Θ = {γ_i, B_i, λ}. There are several optimization methods to minimize this cost function, such as the expectation-maximization method, the bound-optimization method, and the duality method. This framework is called the BSBL framework.
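A small numeric check of this Type II cost, L(Θ) = log|λI + AΣ_0A^T| + y^T(λI + AΣ_0A^T)^{-1}y, illustrates why minimizing it selects the right blocks: hyperparameters that switch on the truly active block yield a lower cost than hyperparameters that switch everything off. Sizes and values are toy choices.

```python
# Evaluating the Type II likelihood cost for two hyperparameter settings.
import numpy as np

def bsbl_cost(A, y, gammas, bs, lam):
    n = A.shape[1]
    Sigma0 = np.zeros((n, n))                # block diagonal prior covariance
    for i, gam in enumerate(gammas):
        Sigma0[bs * i:bs * (i + 1), bs * i:bs * (i + 1)] = gam * np.eye(bs)
    C = lam * np.eye(len(y)) + A @ Sigma0 @ A.T
    _, logdet = np.linalg.slogdet(C)
    return logdet + y @ np.linalg.solve(C, y)

rng = np.random.default_rng(5)
m, bs, g = 20, 4, 5
A = rng.standard_normal((m, bs * g))
x = np.zeros(bs * g)
x[:bs] = rng.standard_normal(bs)             # block 0 is active
y = A @ x + 0.01 * rng.standard_normal(m)

good = bsbl_cost(A, y, [1, 0, 0, 0, 0], bs, lam=1e-2)   # matches the truth
bad = bsbl_cost(A, y, [0, 0, 0, 0, 0], bs, lam=1e-2)    # all blocks off
assert good < bad       # the correct support gives a lower cost
```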

BSBL not only has the advantages of the basic SBL listed in Section 3.1, but also has another two advantages.

(1) BSBL provides large flexibility to model and exploit correlation structure in signals, such as intrablock correlation [21, 30]. By exploiting the correlation structure, recovery performance is significantly improved.

(2) BSBL has the unique ability to find less-sparse [39] and nonsparse [30] true solutions with very small errors. (Note that for an underdetermined inverse problem y = Φx, where Φ is a single matrix or the product of a sensing matrix and a dictionary matrix as used in compressed sensing, one cannot find a nonsparse true solution without any error.) This is attractive for practical use, since in practice the true solutions may not be very sparse, and existing sparse signal recovery algorithms generally fail in this case.

Therefore, BSBL is promising for pattern recognition. In the following, we use BSBL for face recognition. Among a number of BSBL algorithms, we choose the bound-optimization-based BSBL algorithm [21], denoted by BSBL-BO. (The BSBL-BO code can be downloaded at http://dsp.ucsd.edu/~zhilin/BSBL.html.)

4. Face Recognition via BSBL

As stated in Section 2, we use BSBL-BO to estimate x, denoted by x̂, and then use rule (4) to assign a test face to a class.

In practice, a test face may contain some outliers; that is, y = ỹ + e, where ỹ is the outlier-free face image and e is a vector whose nonzero entries are outliers. Generally, the number of outliers is small, and thus e is sparse. Addressing the outlier issue is important for a practical face recognition system. In [11], an augmented sparse model was used to deal with this issue. We now extend this method to our block sparse model and use BSBL-BO to estimate the solution. In particular, we adopt the following augmented block sparse model:

    y = A x + e + n = [A, I] [x^T, e^T]^T + n = B w + n,    (12)

where n is a vector modeling dense Gaussian noise, B = [A, I], and w = [x^T, e^T]^T. Here, I is an identity matrix of dimension m × m, where m is the length of y. Clearly, w is also a block sparse vector, whose first k blocks are the blocks of x and whose last m elements are blocks with block size 1. (In experiments we found that treating these last m elements as one big block resulted in similar performance, while significantly speeding up the algorithm.) Thus, (12) is still a block sparse model and can be solved by BSBL-BO. Once BSBL-BO obtains the solution, denoted by ŵ, its first k blocks (denoted by x̂) and its last m elements (denoted by ê) are used to assign y to a class according to

    identity(y) = argmin_i ||y - A δ_i(x̂) - ê||_2.    (13)
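The augmented dictionary construction above can be sketched directly: a solution w = [x; e] of y = [A, I]w splits into representation coefficients x and outliers e, so subtracting the estimated outliers recovers the clean model. Sizes below are toy values.

```python
# Sketch of the augmented dictionary B = [A, I] used to absorb sparse outliers.
import numpy as np

rng = np.random.default_rng(6)
m, n = 50, 30
A = rng.standard_normal((m, n))
B = np.hstack([A, np.eye(m)])            # B has shape (m, n + m)
assert B.shape == (m, n + m)

w = rng.standard_normal(n + m)           # a candidate solution [x; e]
x_part, e_part = w[:n], w[n:]
y = B @ w                                # y = A x + e by construction
assert np.allclose(y - e_part, A @ x_part)
```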

We now take the Extended Yale B database [40] as an example to show how our method works. As in SRC [11], we randomly select half of the total 2414 faces (i.e., 1207 faces) as the training set and the rest as the testing set. Each face is downsampled from 192 × 168 to 24 × 21 = 504 pixels. The training set contains 38 subjects, and each subject has about 32 faces. Therefore, in our model k = 38, and each block has about 32 columns. The matrix A has the size 504 × 1207, and thus the matrix B = [A, I] has the size 504 × 1711.
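The dimensions in this setup follow directly from the numbers quoted above, as a quick arithmetic check confirms:

```python
# Checking the dimensions of the Extended Yale B setup:
# 1207 training faces of length 24 * 21 = 504, augmented with the identity.
feature_dim = 24 * 21
n_train = 1207
assert feature_dim == 504
A_shape = (feature_dim, n_train)                  # A is 504 x 1207
B_shape = (feature_dim, n_train + feature_dim)    # B = [A, I] is 504 x 1711
assert B_shape == (504, 1711)
```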

The procedure is illustrated in Figure 1. Figure 1(a) shows that a test face (belonging to Subject 4) can be linearly combined from a few training faces. Most of the coefficients estimated by BSBL-BO (i.e., x̂) are zero or near zero, and only those associated with the test face's subject are significantly nonzero. Figure 1(b) shows the residuals for the 38 classes. The residual for Subject 4 is 0.0008, while the residuals for the other subjects are all close to 1, which makes it easy to assign the test face to Subject 4. See Section 5.1.1 for more details.

5. Experimental Results

To demonstrate the superior performance of BSBL, we performed experiments on three widely used face databases: Extended Yale B [40], AR [41], and CMU PIE [42] face databases. The face images of these three databases were captured under varying lighting, pose, or facial expression. The AR database also has occluded face images for the test of robustness of face recognition algorithms. Section 5.1 shows experimental results on face images without occlusion, and Section 5.2 shows experimental results on face images with three kinds of occlusion.

5.1. Face Recognition without Occlusion

For the experiments on face images without occlusion, we used downsampling, Eigenfaces [3, 6], and Laplacianfaces [5] to reduce the dimensionality of original faces. We compared our method with three classical methods, including nearest neighbor (NN) [8], nearest subspace (NS) [9], and support vector machine (SVM) [10]. We also compared our method with recently proposed sparse-representation-based classification methods, including the basic sparse-representation classifier (SRC) [11] and the block-sparse recovery algorithm via convex optimization (BSCO) [20]. For NS, the subspace dimension was fixed to 9. For BSCO, we used the algorithm [20] which has been shown to be the best one among all the structured sparsity-based classifiers proposed in that work.

5.1.1. Extended Yale B Database

The Extended Yale B database [40] consists of 2414 frontal-face images of 38 subjects (each subject has about 64 images). In the experiment, we used the cropped face images, which were captured under various lighting conditions [43]. Two subjects are shown in Figure 2 for illustration (for each subject, only 10 face images are shown). We randomly selected half of the face images of each subject as the training set and the rest as the testing set. We used downsampling, Eigenfaces, and Laplacianfaces to extract features from the face images, with feature dimensions of 30, 56, 120, and 504.

Experimental results are shown in Figure 3, where we can see that our method uniformly outperformed the other algorithms regardless of the features used. In particular, our method performed best when using Laplacianfaces, and its superiority was clearest at small feature dimensions. For example, when the feature dimension was 56, our method achieved the highest rate of 98.9%, while NN, NS, SVM, SRC, and BSCO achieved rates of 83.5%, 90.4%, 85.0%, 91.7%, and 79.4%, respectively. High performance with low-dimensional features is attractive for recognition, since a lower feature dimension generally implies a lower computational load.

5.1.2. AR Database

The AR database [41] consists of more than 4000 frontal-face images of 126 subjects. Each subject has 26 images taken in two separate sessions, as shown in Figure 4. This database includes more facial expressions and facial disguises. We chose 100 subjects (50 males and 50 females) for this experiment. For each subject, seven face images with different illumination and facial expressions (i.e., the first 7 images of each subject) in Session 1 were selected for training, and the first 7 images of each subject in Session 2 were used for testing. All the images were converted to grayscale and resized. Downsampled faces, Eigenfaces, and Laplacianfaces were applied with dimensions of 30, 54, 130, and 540. Experimental results are shown in Figure 5.

From Figure 5(a), we can see that our algorithm significantly outperformed the other classifiers when using downsampled features. However, our method did not achieve the highest rate when using Eigenfaces and Laplacianfaces. This might be due to the small block size in this experiment (only 7 training faces per subject). Although our method did not uniformly outperform the other algorithms across face features, the recognition rate achieved by our method using downsampled faces (96.7%) was not exceeded by any other algorithm using any face feature.

5.1.3. CMU PIE Database

The CMU PIE database [42] consists of 41368 frontal-face images of 68 subjects under different poses, illuminations, and expressions. We chose one subset (C29), which includes 1632 face images of 68 subjects (24 images per subject), for this experiment. The first subject in this subset is shown in Figure 6, varying in pose, illumination, and expression. All the images were cropped and resized. For each subject, we randomly selected 10 images for training and the rest (14 images per subject) for testing. Downsampled faces, Eigenfaces, and Laplacianfaces were applied with four dimensions, that is, 36, 64, 144, and 256. Experimental results are shown in Figure 7.

From Figure 7(a), we can see that the sparse-representation-based classifiers usually outperformed the classical ones on this dataset. Among the sparse-representation-based classifiers, BSBL and BSCO achieved higher recognition rates than SRC. Between the two, BSBL slightly outperformed BSCO with downsampled faces and Laplacianfaces, while BSCO outperformed BSBL with Eigenfaces. Specifically, BSBL achieved the highest recognition rates of 95.80% with downsampled faces and 94.12% with Laplacianfaces, while BSCO achieved 98.42% with Eigenfaces, the highest rate in this experiment. Nevertheless, BSBL outperformed BSCO in 8 out of 12 combinations of dimensions and features.

5.2. Face Recognition with Occlusion

For the experiments on face images with occlusion, we used downsampling to reduce the size of face images and compared our method with NN [8], SRC [11], and BSCO [20].

5.2.1. Face Recognition with Pixel Corruption

We tested face recognition with pixel corruption on 3 subsets of the Extended Yale B database: 719 face images with normal-to-moderate lighting conditions from Subsets 1 and 2 for training, and 455 face images with more extreme lighting conditions from Subset 3 for testing. For each test image, we first replaced a certain percentage (0%-50%) of its original pixels with uniformly distributed random gray values. Both the gray values and the locations were random and hence unknown to the algorithms. We then downsampled all the images to four sizes, corresponding to the four dimension rows per method in Table 1. Two corrupted face images are shown in Figures 8(a)-8(b).
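The corruption procedure above can be sketched as follows; the image size and the gray range [0, 255] are assumptions for illustration, not taken from the paper.

```python
# Replace a given percentage of randomly chosen pixels with random gray values.
import numpy as np

def corrupt_pixels(img, percent, rng):
    out = img.astype(np.float64).copy()
    flat = out.ravel()                                      # view into out
    n_bad = int(round(percent / 100.0 * flat.size))
    idx = rng.choice(flat.size, size=n_bad, replace=False)  # random locations
    flat[idx] = rng.uniform(0.0, 255.0, size=n_bad)         # random gray values
    return out

rng = np.random.default_rng(7)
img = np.full((24, 21), 128.0)
bad = corrupt_pixels(img, 30, rng)
changed = int(np.sum(bad != img))
assert abs(changed - 0.3 * img.size) <= 2       # about 30% of pixels differ
```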

Results are shown in Table 1. It can be seen that in almost all combinations of dimension and corruption level, BSBL achieved a higher recognition rate than NN and SRC, and the performance gap was often very large. For example, at the third feature dimension with 50% of the pixels corrupted, BSBL achieved a recognition rate of 67.25%, while SRC only achieved 46.37%. Meanwhile, BSBL outperformed BSCO in 17 out of 24 combinations of dimension and corruption level. Figure 9(a) shows the recognition rates of the four algorithms at different pixel corruption levels for one fixed dimension.


Table 1: Recognition rates (%) under different levels of pixel corruption. For each method, the four rows correspond to the four feature dimensions, from smallest to largest.

Method    Percent corrupted (%)
          0        10       20       30       40       50

NN        36.92    42.42    49.67    46.15    28.35    14.95
          48.79    54.95    60.00    59.34    40.88    20.22
          67.25    75.17    79.56    74.73    58.02    35.39
          87.25    93.19    94.95    92.53    76.48    56.04

SRC       54.51    44.62    50.55    46.59    32.75    21.76
          82.64    61.32    66.59    63.52    49.23    28.79
          98.02    85.06    85.28    83.96    71.87    46.37
          100.00   98.24    98.24    97.14    92.09    73.63

BSCO      87.48    83.52    63.30    39.12    23.30    14.29
          98.02    96.04    90.99    75.60    49.45    30.33
          99.34    99.12    97.58    93.85    82.20    58.68
          100.00   100.00   99.12    98.68    97.14    96.70

BSBL      87.25    85.71    68.79    51.43    30.99    19.56
          94.29    92.97    86.15    72.53    59.12    39.34
          99.56    99.34    97.80    92.31    84.18    67.25
          100.00   100.00   99.78    99.12    97.58    89.01

5.2.2. Face Recognition with Block Occlusion

In this experiment, we used the same training and testing images as in the previous pixel corruption experiment. For each test image, we replaced a randomly located square block with an unrelated image (the baboon image used in SRC [11]), which occluded 0%-50% of the original testing image. We then downsampled all the images to four sizes, corresponding to the four dimension rows per method in Table 2. Two occluded face images are shown in Figures 8(c)-8(d).
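The occlusion procedure above can be sketched as follows; a constant patch stands in for the baboon image, and the image size is an illustrative assumption.

```python
# Occlude a randomly located square block covering a given image fraction.
import numpy as np

def occlude_block(img, fraction, rng, fill=0.0):
    h, w = img.shape
    side = int(round(np.sqrt(fraction * h * w)))  # side of the square block
    r = int(rng.integers(0, h - side + 1))        # random top-left corner
    c = int(rng.integers(0, w - side + 1))
    out = img.copy()
    out[r:r + side, c:c + side] = fill
    return out

rng = np.random.default_rng(8)
img = np.ones((24, 21))
occluded = occlude_block(img, 0.3, rng)
covered = np.sum(occluded == 0.0) / img.size
assert abs(covered - 0.3) < 0.05                  # roughly 30% occluded
```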

Table 2 shows the recognition rates of NN, SRC, BSCO, and BSBL for different dimensions and percentages of occlusion. Again, BSBL outperformed the compared algorithms in most cases. For example, when the occlusion percentage ranged from 10% to 50% at the third feature dimension, BSBL achieved 8.35%-13.19% higher recognition rates than BSCO, as shown in Figure 9(b).


Table 2: Recognition rates (%) under different levels of block occlusion. For each method, the four rows correspond to the four feature dimensions, from smallest to largest.

Method    Percent occluded (%)
          0        10       20       30       40       50

NN        36.92    34.29    27.69    24.40    20.44    15.17
          48.79    44.84    38.68    32.09    21.54    18.46
          67.25    64.18    52.09    45.71    30.33    22.64
          87.25    85.50    76.92    67.25    52.31    37.14

SRC       54.51    36.26    28.13    22.64    17.36    14.29
          82.64    50.99    39.56    31.65    20.66    17.36
          98.02    75.39    59.78    48.57    30.33    20.88
          100.00   96.48    89.23    72.31    54.29    35.17

BSCO      87.48    30.11    14.51    8.57     5.49     3.96
          98.02    51.65    32.53    18.02    13.41    8.35
          99.34    71.87    49.23    32.09    20.88    14.73
          100.00   99.56    92.97    80.88    63.74    45.93

BSBL      87.25    46.59    28.35    18.68    11.65    10.55
          94.29    66.59    40.88    26.59    20.22    12.53
          99.56    83.30    60.00    45.28    33.19    23.08
          100.00   96.92    92.31    75.60    56.48    42.64

5.2.3. Face Recognition with Real Face Disguise

We used a subset of the AR database to test the performance of our method on face recognition with disguise. We chose 799 images of various facial expressions without occlusion (i.e., the first 4 face images in each session, except a corrupted image named "W-027-14.bmp") for training. We formed two separate testing sets of 200 images each. The images in the first set show the neutral expression with sunglasses (the 8th image in each session), which cover roughly 20% of the face, while the ones in the second set show the neutral expression with scarves (the 11th image in each session), which cover roughly 40% of the face. All the images were resized to four dimensions, corresponding to the four rows per method in Table 3.

Results are shown in Table 3. In the case of the neutral expression with sunglasses, both SRC and NN achieved higher recognition rates than BSCO and BSBL. However, in the case of the neutral expression with scarves, BSBL outperformed NN, SRC, and BSCO significantly. Overall, BSBL achieved the highest total recognition rates of 72.50% and 74.50% at the two largest dimensions, while SRC achieved the highest total rates of 28.25% and 44.00% at the two smallest dimensions.


Table 3: Recognition rates (%) on the two disguise testing sets (sunglasses and scarves). For each method, the four rows correspond to the four feature dimensions, from smallest to largest.

Method    Sunglasses   Scarves   Total

NN        35.00        6.50      20.75
          48.00        7.00      27.50
          65.50        9.50      37.50
          68.00        11.50     39.75

SRC       46.50        10.00     28.25
          72.00        16.00     44.00
          83.00        21.50     52.25
          89.00        37.00     63.00

BSCO      14.50        9.50      12.00
          35.00        19.50     27.25
          68.00        44.00     56.00
          76.00        50.00     63.00

BSBL      22.00        23.00     22.50
          40.50        46.00     43.25
          64.00        81.00     72.50
          65.50        83.50     74.50

6. Conclusions

Classification via sparse representation is a popular methodology in face recognition and other classification tasks. Recently, it was found that using block-sparse representation, instead of the basic sparse representation, can yield better classification performance. In this paper, by introducing a recently proposed block sparse Bayesian learning (BSBL) algorithm, we showed that BSBL is a better framework than the basic block-sparse representation framework, owing to its various advantages over the latter. Experiments on common face databases confirmed that BSBL is a promising sparse-representation-based classifier.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant no. 60903128) and also by the Fundamental Research Funds for the Central Universities (Grants nos. JBK130142 and JBK130503).

References

  1. A. M. Martinez and A. C. Kak, “PCA versus LDA,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228–233, 2001.
  2. P. J. Phillips, P. J. Flynn, T. Scruggs et al., “Overview of the face recognition grand challenge,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 947–954, San Diego, Calif, USA, June 2005.
  3. M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '91), pp. 586–591, Maui, Hawaii, USA, June 1991.
  4. R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042–1052, 1993.
  5. X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang, “Face recognition using Laplacianfaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328–340, 2005.
  6. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
  7. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces versus fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
  8. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley & Sons, New York, NY, USA, 2012.
  9. J. Ho, M. Yang, J. Lim, K. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '03), vol. 1, pp. 1–11, Madison, Wis, USA, June 2003.
  10. V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 2000.
  11. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
  12. S. Gao, I. W. H. Tsang, and L. T. Chia, “Kernel sparse representation for image classification and face recognition,” in Computer Vision-ECCV 2010, pp. 1–14, Springer, New York, NY, USA, 2010.
  13. M. Yang, L. Zhang, J. Yang, and D. Zhang, “Robust sparse coding for face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 625–632, Colorado Springs, Colo, USA, June 2011.
  14. C. Lu, H. Min, J. Gui, L. Zhu, and Y. Lei, “Face recognition via weighted sparse representation,” Journal of Visual Communication and Image Representation, vol. 24, no. 2, pp. 111–116, 2013.
  15. M. Yang and L. Zhang, “Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary,” in Computer Vision-ECCV 2010, pp. 448–461, Springer, New York, NY, USA, 2010.
  16. L. Zhang, M. Yang, Z. Feng, and D. Zhang, “On the dimensionality reduction for sparse representation based face recognition,” in Proceedings of the International Conference on Pattern Recognition (ICPR '10), pp. 1237–1240, Istanbul, Turkey, August 2010.
  17. Y. Chen, T. T. Do, and T. D. Tran, “Robust face recognition using locally adaptive sparse representation,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 1657–1660, Hong Kong, China, September 2010.
  18. Y. Xu, W. Zuo, and Z. Fan, “Supervised sparse representation method with a heuristic strategy and face recognition experiments,” Neurocomputing, vol. 79, pp. 125–131, 2012.
  19. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society B, vol. 68, no. 1, pp. 49–67, 2006.
  20. E. Elhamifar and R. Vidal, “Block-sparse recovery via convex optimization,” IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4094–4107, 2012.
  21. Z. Zhang and B. D. Rao, “Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation,” IEEE Transactions on Signal Processing, vol. 61, no. 8, pp. 2009–2015, 2013.
  22. B. K. Natarajan, “Sparse approximate solutions to linear systems,” SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
  23. D. L. Donoho, “For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.
  24. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  25. R. Tibshirani, “Regression shrinkage and selection via the Lasso,” Journal of the Royal Statistical Society B, vol. 58, no. 1, pp. 267–288, 1996.
  26. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
  27. R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1982–2001, 2010.
  28. J. Huang and T. Zhang, “The benefit of group sparsity,” The Annals of Statistics, vol. 38, no. 4, pp. 1978–2004, 2010. View at: Publisher Site | Google Scholar
  29. N. Rao, B. Recht, and R. Nowak, “Universal measurement bounds for structured sparse signal recovery,” in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS '12), pp. 942–950, Canary Islands, Spain, April 2012. View at: Google Scholar
  30. Z. Zhang, T. P. Jung, S. Makeig, and B. D. Rao, “Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 2, pp. 300–309, 2013. View at: Google Scholar
  31. M. E. Tipping, “Sparse bayesian learning and the relevance vector machine,” The Journal of Machine Learning Research, vol. 1, no. 3, pp. 211–244, 2001. View at: Publisher Site | Google Scholar
  32. Z. Zhang, Sparse signal recovery exploiting spatiotemporal correlation [Ph.D. thesis], University of California, San Diego, Calif, USA, 2012.
  33. D. P. Wipf, Bayesian methods for finding sparse representations [Ph.D. thesis], University of California, San Diego, Calif, USA, 2006.
  34. D. P. Wipf, “Sparse estimation with structured dictionaries,” in Advances in Neural Information ProcessIng Systems, vol. 24, pp. 2016–2024, NIPS, 2011. View at: Google Scholar
  35. D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004. View at: Publisher Site | Google Scholar
  36. J. Wan, Z. Zhang, J. Yan et al., “Sparse Bayesian multi-task learning for predicting cognitive outcomes from neuroimaging measures in Alzheimer's disease,” in Proceedings of the IEEE Conference on Computer Visual and Pattern Recognition (CVPR '12), pp. 940–947, Providence, RI, USA, June 2012. View at: Google Scholar
  37. Z. Zhang and B. D. Rao, “Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning,” IEEE Journal on Selected Topics in Signal Processing, vol. 5, no. 5, pp. 912–926, 2011. View at: Publisher Site | Google Scholar
  38. D. MacKay, “The evidence framework applied to classification networks,” Neural Computation, vol. 4, no. 5, pp. 720–736, 1992. View at: Google Scholar
  39. Z. Zhang, T. P. Jung, S. Makeig, and B. D. Rao, “Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 1, pp. 221–224, 2013. View at: Google Scholar
  40. A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001. View at: Publisher Site | Google Scholar
  41. A. Martinez, “The ar face database,” CVC 24, 1998. View at: Google Scholar
  42. T. Sim, S. Baker, and M. Bsat, “The cmu pose, illumination, and expression (pie) database,” in Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition (FG '02), pp. 46–51, Washington, DC, USA, May 2002. View at: Google Scholar
  43. K. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005. View at: Publisher Site | Google Scholar

Copyright © 2013 Taiyong Li and Zhilin Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
