Abstract

Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, almost all existing NMF-based methods suffer from two major drawbacks. One is that the computational cost of decomposing a large matrix is expensive. The other is that learning must be repeated from scratch whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus enjoys several good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.

1. Introduction

Face recognition has been one of the most challenging problems in computer science and information technology since the 1990s [1, 2]. Face recognition approaches can be mainly categorized into two groups, namely geometric feature-based and appearance-based [3]. Geometric features are based on the short-range phenomena of face images, such as the eyes, eyebrows, nose, and mouth. These local facial features are learnt to form a geometric feature vector for face recognition. The appearance-based approach relies on global facial features, which generate an entire facial feature vector for face classification. Nonnegative matrix factorization (NMF) [4, 5] belongs to the geometric feature-based category, while principal component analysis (PCA) [6] is based on holistic facial features. Both NMF and PCA are unsupervised learning methods for face recognition. The basic idea of both approaches is to find basis images under different criteria, such that all face images can be reconstructed from the basis images. The basis images of PCA are called eigenfaces, which are the eigenvectors corresponding to the large eigenvalues of the total scatter matrix. NMF performs a nonnegative decomposition of the training image matrix $V$ such that $V \approx WH$, where $W$ and $H$ are the basis image matrix and the coefficient matrix, respectively. The local image features are learnt and stored in $W$ as column vectors. Following the success of applying NMF to learning the parts of objects [4], many researchers have conducted in-depth investigations of NMF, and different NMF-based approaches have been developed [7-19]. Li et al. proposed a local NMF method [7] by adding spatial constraints. Wild et al. [8] utilized spherical K-means clustering to produce a structured initialization for NMF. Buciu and Pitas [9] presented a DNMF method for learning facial expressions in a supervised manner. However, DNMF does not guarantee convergence to a stationary limit point. Kotsia et al. [15] thus presented a modified DNMF method using projected gradients. Similar supervised techniques have been incorporated into NMF to enhance its classification power [11-13, 19]. Hoyer [10] added sparseness constraints to NMF to find solutions with desired degrees of sparseness. Lin [16, 17] modified the traditional NMF updates using the projected gradient method and discussed their convergence. Recently, Zhang et al. [18] proposed a topology structure preservation constraint in NMF to improve its performance.

However, to the best of our knowledge, almost all existing NMF-based approaches encounter two major problems, namely the time-consuming problem and the incremental learning problem. In most cases, the training image matrix V is very large, which leads to expensive computational cost for NMF-based schemes. Also, when the training samples or classes are updated, NMF must repeat the learning from scratch. These drawbacks greatly restrict the practical application of NMF-based methods to face recognition. To avoid these two problems, this paper, motivated by our previous work on incremental learning [19], proposes a supervised incremental NMF (INMF) approach under a novel constraint NMF criterion, which aims to cluster within-class samples tightly and simultaneously augment the between-class distances. Our incremental strategy utilizes the supervised local features, regarded as the short-range phenomena of face images, for face classification. Two publicly available face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Experimental results show that our INMF method outperforms the PCA [6], NMF [4], and BNMF [19] approaches in both nonincremental and incremental learning for face recognition.

The rest of this paper is organized as follows: Section 2 briefly reviews the related works. Theoretical analysis and INMF algorithm design are given in Section 3. Experimental results are reported in Section 4. Finally, Section 5 draws the conclusions.

2. Related Works

This section briefly introduces the PCA [6], NMF [4], and BNMF [19] methods. Details are as follows.

2.1. PCA

Principal component analysis (PCA), also called the eigenface method, is a popular statistical appearance-based linear method for dimensionality reduction in face recognition. PCA is based on the Karhunen-Loeve transform. It performs an eigenvalue decomposition of the total scatter matrix $S_t$ and then selects the leading principal components (eigenfaces) to account for most of the data variance. All face images can be expressed as linear combinations of these basis images (eigenfaces). However, PCA is not able to exploit all of the class-discriminative information, and how to choose the principal components remains an open problem. Therefore, PCA often cannot give satisfactory performance in pattern recognition tasks.
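To make the procedure concrete, the following is a minimal NumPy sketch of eigenface computation; the function name and the layout of X (one vectorized face per column) are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch: eigenfaces via eigen-decomposition of the total
# scatter matrix St. X holds one vectorized face image per column.
import numpy as np

def pca_eigenfaces(X, k):
    mean_face = X.mean(axis=1, keepdims=True)
    Xc = X - mean_face                    # center the training images
    St = Xc @ Xc.T                        # total scatter matrix
    vals, vecs = np.linalg.eigh(St)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # keep the k largest eigenvalues
    return vecs[:, order], mean_face      # eigenfaces as columns, plus mean
```

In practice, when the image dimension m is much larger than the number of images n, the smaller n-by-n Gram matrix is decomposed instead; the sketch keeps the direct form for clarity.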

2.2. NMF

NMF aims to find nonnegative matrices $W$ and $H$ such that

$V_{m \times n} \overset{\text{NMF}}{\approx} W_{m \times r} H_{r \times n},$ (2.1)

where $V$ is a nonnegative matrix generated from the $n$ training images. Each column of $W$ is called a basis image, while $H$ is the coefficient matrix. The basis number $r$ is usually chosen to be less than $n$ for dimensionality reduction. The divergence between $V$ and $WH$ is defined as

$D(V \,\|\, WH) = \sum_{i,j} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right).$ (2.2)

NMF (2.1) is equivalent to the following optimization problem:

$\min_{W,H} D(V \,\|\, WH), \quad \text{s.t.} \;\; W \ge 0, \; H \ge 0, \; \sum_i W_{ia} = 1 \;\; \forall a.$ (2.3)

The minimization problem (2.3) can be solved using the following iterative formulae, which converge to a local minimum:

$H_{a\mu} \leftarrow H_{a\mu} \sum_i W_{ia} \frac{V_{i\mu}}{(WH)_{i\mu}}, \quad W_{ia} \leftarrow W_{ia} \sum_\mu H_{a\mu} \frac{V_{i\mu}}{(WH)_{i\mu}}, \quad W_{ia} \leftarrow \frac{W_{ia}}{\sum_j W_{ja}}.$ (2.4)
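As an illustration of the updates (2.4), the following hedged NumPy sketch alternates the multiplicative rules with column normalization of W; the function name, initialization, and iteration count are assumptions made for the example.

```python
# Minimal sketch of the multiplicative updates (2.4) for divergence (2.2).
import numpy as np

def nmf_divergence(V, r, n_iter=200, eps=1e-9):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    W /= W.sum(axis=0, keepdims=True)          # enforce sum_i W_ia = 1
    for _ in range(n_iter):
        Q = V / (W @ H + eps)                  # V_ij / (WH)_ij
        H *= W.T @ Q                           # H update in (2.4)
        Q = V / (W @ H + eps)
        W *= Q @ H.T                           # W update in (2.4)
        W /= W.sum(axis=0, keepdims=True)      # renormalize columns of W
    return W, H
```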

2.3. BNMF

The basic idea of BNMF is to perform NMF on $c$ small matrices $V^{(i)} \in \mathbb{R}^{m \times n_0}$ $(i = 1, 2, \ldots, c)$, namely

$(V^{(i)})_{m \times n_0} \overset{\text{NMF}}{\approx} (W^{(i)})_{m \times r_0} (H^{(i)})_{r_0 \times n_0}, \quad i = 1, 2, \ldots, c,$ (2.5)

where $V^{(i)}$ contains the $n_0$ training images of the $i$th class and $c$ is the number of classes. BNMF is obtained from (2.5) as follows:

$V_{m \times n} \overset{\text{BNMF}}{\approx} W_{m \times r} H_{r \times n},$ (2.6)

where $r = cr_0$, $V_{m \times n} = [V^{(1)}\, V^{(2)} \cdots V^{(c)}]$, $W_{m \times r} = [W^{(1)}\, W^{(2)} \cdots W^{(c)}]$, $H_{r \times n} = \mathrm{diag}(H^{(1)}, H^{(2)}, \ldots, H^{(c)})$, and $n\,(= cn_0)$ is the total number of training images.
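The block structure of (2.5)-(2.6) can be sketched as below, reusing the nmf_divergence helper from the previous sketch; scipy's block_diag assembles the block-diagonal coefficient matrix.

```python
# Illustrative sketch of BNMF (2.5)-(2.6): per-class NMF, then assembly.
import numpy as np
from scipy.linalg import block_diag

def bnmf(class_matrices, r0):
    # class_matrices: list of c nonnegative m x n0 arrays, one per class
    Ws, Hs = [], []
    for Vi in class_matrices:
        Wi, Hi = nmf_divergence(Vi, r0)   # sketch defined above
        Ws.append(Wi)
        Hs.append(Hi)
    W = np.hstack(Ws)                     # m x (c*r0) basis matrix
    H = block_diag(*Hs)                   # (c*r0) x (c*n0), zero off-blocks
    return W, H, Ws, Hs
```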

3. Proposed INMF

To overcome the drawbacks of existing NMF-based methods, this section proposes a novel incremental NMF (INMF) approach, which is based on a new constraint NMF criterion and our previous block technique [19]. Details are discussed below.

3.1. Constraint NMF Criterion

The objective of our INMF is to impose supervised class information on NMF such that between-class distances increase while within-class distances simultaneously decrease. To this end, we define the within-class scatter matrix $S_w^{(i)}$ of the $i$th coefficient matrix $H^{(i)} \in \mathbb{R}^{r_0 \times n_0}$ as

$S_w^{(i)} = \frac{1}{n_0} \sum_{j=1}^{n_0} \left( h_j^{(i)} - \bar h^{(i)} \right) \left( h_j^{(i)} - \bar h^{(i)} \right)^T,$ (3.1)

where $\bar h^{(i)} = (1/n_0) \sum_{j=1}^{n_0} h_j^{(i)}$ is the mean column vector of the $i$th class. The within-class samples of the $i$th class cluster tightly as $\mathrm{tr}(S_w^{(i)})$ becomes small.
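As a small illustration of (3.1), the trace of the within-class scatter of one coefficient block can be computed as follows (the helper name is ours):

```python
# Illustrative: tr(S_w) of one class's r0 x n0 coefficient block (3.1).
import numpy as np

def within_scatter_trace(Hi):
    n0 = Hi.shape[1]
    D = Hi - Hi.mean(axis=1, keepdims=True)   # deviations from class mean
    return np.trace(D @ D.T) / n0             # small value = tight cluster
```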

Assume $\tilde{\bar h}^{(i)}$ is an enlarged version of $\bar h^{(i)}$, that is, $\tilde{\bar h}^{(i)} = (1+\alpha)\, \bar h^{(i)}$ with $\alpha > 0$. Then we have

$\left\| \bar h^{(i)} - \bar h^{(j)} \right\| < (1+\alpha) \left\| \bar h^{(i)} - \bar h^{(j)} \right\| = \left\| \tilde{\bar h}^{(i)} - \tilde{\bar h}^{(j)} \right\|.$ (3.2)

Inequality (3.2) implies that the between-class distances increase as the mean vectors of the classes in $H$ are enlarged.

Based on the above analysis, we define a constraint divergence criterion function for the $k$th class as follows:

$F^{(k)} = \sum_{i,j} \left( V^{(k)}_{ij} \log \frac{V^{(k)}_{ij}}{(WH)^{(k)}_{ij}} - V^{(k)}_{ij} + (WH)^{(k)}_{ij} \right) + \beta\, \mathrm{tr}\!\left( S_w^{(k)} \right) - \gamma \left\| \bar h^{(k)} \right\|_2^2,$ (3.3)

where the parameters $\beta, \gamma > 0$ and $k = 1, 2, \ldots, c$.

Our entire INMF criterion function is then designed as follows:

$F = \sum_{k=1}^{c} F^{(k)} = \sum_k \left\{ \sum_{i,j} \left( V^{(k)}_{ij} \log \frac{V^{(k)}_{ij}}{(WH)^{(k)}_{ij}} - V^{(k)}_{ij} + (WH)^{(k)}_{ij} \right) + \beta\, \mathrm{tr}\!\left( S_w^{(k)} \right) - \gamma \left\| \bar h^{(k)} \right\|_2^2 \right\}.$ (3.4)

Based on criterion (3.4), the following constraint NMF (CNMF) update rules (3.5)–(3.7) will be derived in the next subsection. We can show that these iterative formulae also converge to a local minimum:

$W^{(k)}_{ia} \leftarrow W^{(k)}_{ia} \sum_\mu H^{(k)}_{a\mu} \frac{V^{(k)}_{i\mu}}{(W^{(k)} H^{(k)})_{i\mu}},$ (3.5)

$W^{(k)}_{ia} \leftarrow \frac{W^{(k)}_{ia}}{\sum_j W^{(k)}_{ja}},$ (3.6)

$H^{(k)}_{a\mu} \leftarrow \frac{-b + \sqrt{b^2 - 4ac}}{2a},$ (3.7)

where $a = (2\beta - \gamma)/n_0^2$, $b = 1 - \big( (2\beta + \gamma)/n_0 \big)\, \bar h^{(k)}_a$, $c = -H^{(k)}_{a\mu} \sum_i W^{(k)}_{ia} V^{(k)}_{i\mu} / (W^{(k)} H^{(k)})_{i\mu}$, and $\bar h^{(k)}_a$ is the $a$th entry of the vector $\bar h^{(k)}$, $a = 1, 2, \ldots, r_0$.
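A hedged sketch of one CNMF iteration on a single class block follows; note that the quadratic coefficients use the reconstruction of (3.7) given above, which should be verified against the original derivation before serious use.

```python
# Sketch of one CNMF iteration (3.5)-(3.7) on a class block; the a, b
# coefficients are assumed from the reconstructed (3.7), not verified.
import numpy as np

def cnmf_step(V, W, H, beta, gamma, eps=1e-9):
    r0, n0 = H.shape
    Q = V / (W @ H + eps)
    W = W * (Q @ H.T)                          # (3.5) multiplicative update
    W = W / W.sum(axis=0, keepdims=True)       # (3.6) column normalization
    Q = V / (W @ H + eps)
    S = H * (W.T @ Q)                          # enters the constant term c
    h_bar = H.mean(axis=1, keepdims=True)      # class-mean coefficients
    a = (2.0 * beta - gamma) / n0**2           # assumes 2*beta > gamma
    b = 1.0 - (2.0 * beta + gamma) / n0 * h_bar
    c = -S
    disc = np.maximum(b * b - 4.0 * a * c, 0.0)   # guard the square root
    H = (-b + np.sqrt(disc)) / (2.0 * a)       # (3.7) positive root
    return W, H
```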

The entire INMF is then performed as follows:

$\left[ V^{(1)}\, V^{(2)} \cdots V^{(c)} \right] \overset{\text{INMF}}{\approx} \left[ W^{(1)}\, W^{(2)} \cdots W^{(c)} \right] \mathrm{diag}\!\left( H^{(1)}, H^{(2)}, \ldots, H^{(c)} \right),$ (3.8)

where

$(V^{(i)})_{m \times n_0} \overset{\text{CNMF}}{\approx} (W^{(i)})_{m \times r_0} (H^{(i)})_{r_0 \times n_0}, \quad i = 1, 2, \ldots, c.$ (3.9)

3.2. Convergence of Proposed Constraint NMF

This subsection shows how to derive the iterative formulae (3.5)–(3.7) and discusses their convergence under the constraint NMF criterion (3.3).

Definition 3.1 (see [5]). $G(h, h')$ is called an auxiliary function for $F(h)$ if

$G(h, h') \ge F(h), \quad G(h, h) = F(h),$ (3.10)

where $h$ and $h'$ are matrices of the same size.

Lemma 3.2 (see [5]). If $G(h, h')$ is an auxiliary function for $F(h)$, then $F(h)$ is a nonincreasing function under the update rule

$h^{t+1} = \arg\min_h G(h, h^t).$ (3.11)

To obtain the iterative rule (3.7) and prove its convergence, one first constructs an auxiliary function for $F$ with $W$ fixed.

Theorem 3.3. If $F^{(k)}(H^{(k)})$ is the value of criterion function (3.3) with $W^{(k)}$ fixed, then $G^{(k)}(H^{(k)}, H'^{(k)})$ is an auxiliary function for $F^{(k)}(H^{(k)})$, where

$G^{(k)}(H^{(k)}, H'^{(k)}) = \sum_{i,j} \left( V^{(k)}_{ij} \log V^{(k)}_{ij} - V^{(k)}_{ij} + (WH)^{(k)}_{ij} \right) - \sum_{i,j,a} \frac{W^{(k)}_{ia} H'^{(k)}_{aj}}{\sum_b W^{(k)}_{ib} H'^{(k)}_{bj}}\, V^{(k)}_{ij} \left( \log W^{(k)}_{ia} H^{(k)}_{aj} - \log \frac{W^{(k)}_{ia} H'^{(k)}_{aj}}{\sum_b W^{(k)}_{ib} H'^{(k)}_{bj}} \right) + \beta\, \mathrm{tr}\!\left( S_w^{(k)} \right) - \gamma \left\| \bar h^{(k)} \right\|_2^2.$ (3.12)

Proof. It can be directly verified that $G^{(k)}(H^{(k)}, H^{(k)}) = F^{(k)}(H^{(k)})$, so we need only show that $G^{(k)}(H^{(k)}, H'^{(k)}) \ge F^{(k)}(H^{(k)})$. To this end, we use the convexity of $y = -\log x$: for all $\alpha_{ija} \ge 0$ with $\sum_a \alpha_{ija} = 1$, it holds that

$-\log \left( \sum_a W^{(k)}_{ia} H^{(k)}_{aj} \right) \le -\sum_a \alpha_{ija} \log \frac{W^{(k)}_{ia} H^{(k)}_{aj}}{\alpha_{ija}}.$ (3.13)

Substituting $\alpha_{ija} = W^{(k)}_{ia} H'^{(k)}_{aj} / \sum_b W^{(k)}_{ib} H'^{(k)}_{bj}$ into the above inequality, we have

$-\log \left( \sum_a W^{(k)}_{ia} H^{(k)}_{aj} \right) \le -\sum_a \frac{W^{(k)}_{ia} H'^{(k)}_{aj}}{\sum_b W^{(k)}_{ib} H'^{(k)}_{bj}} \left( \log W^{(k)}_{ia} H^{(k)}_{aj} - \log \frac{W^{(k)}_{ia} H'^{(k)}_{aj}}{\sum_b W^{(k)}_{ib} H'^{(k)}_{bj}} \right).$ (3.14)

Therefore, $G^{(k)}(H^{(k)}, H'^{(k)}) \ge F^{(k)}(H^{(k)})$, which concludes the theorem immediately.

Obviously, the function $G(H, H') = \sum_k G^{(k)}(H^{(k)}, H'^{(k)})$ is also an auxiliary function for the entire constraint NMF criterion $F(H) = \sum_k F^{(k)}(H^{(k)})$. Lemma 3.2 indicates that $F(H)$ is nonincreasing under the update rule (3.11). Setting $\partial G(H, H') / \partial H^{(k)}_{aj} = 0$, we have

$\frac{\partial G(H, H')}{\partial H^{(k)}_{aj}} = -\sum_i \frac{W^{(k)}_{ia} H'^{(k)}_{aj}}{\sum_b W^{(k)}_{ib} H'^{(k)}_{bj}}\, V^{(k)}_{ij}\, \frac{1}{H^{(k)}_{aj}} + \sum_i W^{(k)}_{ia} + \frac{2\beta}{n_0} \left( H^{(k)}_{aj} - \bar h^{(k)}_a \right) - \frac{2\gamma}{n_0}\, \bar h^{(k)}_a = 0.$ (3.15)

This equation directly induces the iterative formula (3.7), and Lemma 3.2 demonstrates that (3.7) converges to a local minimum. For the update rules (3.5)-(3.6), the proof is similar to that of (3.7), using the following auxiliary function with $H$ fixed:

$G(W, W') = \sum_k \left\{ \sum_{i,j} \left( V^{(k)}_{ij} \log V^{(k)}_{ij} - V^{(k)}_{ij} + (WH)^{(k)}_{ij} \right) - \sum_{i,j,a} \frac{W'^{(k)}_{ia} H^{(k)}_{aj}}{\sum_b W'^{(k)}_{ib} H^{(k)}_{bj}}\, V^{(k)}_{ij} \left( \log W^{(k)}_{ia} H^{(k)}_{aj} - \log \frac{W'^{(k)}_{ia} H^{(k)}_{aj}}{\sum_b W'^{(k)}_{ib} H^{(k)}_{bj}} \right) + \beta\, \mathrm{tr}\!\left( S_w^{(k)} \right) - \gamma \left\| \bar h^{(k)} \right\|_2^2 \right\}.$ (3.16)

3.3. Incremental Learning

From the above analysis, our incremental learning algorithm is designed as follows:

(i) Sample incremental learning. When a new training sample $x_0$ of the $i$th class is added to the training set, we write $\tilde V^{(i)} = [V^{(i)}, x_0]$. Thus the training image matrix becomes

$\tilde V = \left[ V^{(1)} \cdots \tilde V^{(i)} \cdots V^{(c)} \right].$ (3.17)

In this case, we only need to perform CNMF on the matrix $\tilde V^{(i)}$, that is, $\tilde V^{(i)} \overset{\text{CNMF}}{\approx} \tilde W^{(i)} \tilde H^{(i)}$. The remaining decompositions $V^{(k)} \overset{\text{CNMF}}{\approx} W^{(k)} H^{(k)}$ $(k \ne i)$ need not be recomputed. So sample incremental learning is performed as follows:

$\tilde V = \left[ V^{(1)} \cdots \tilde V^{(i)} \cdots V^{(c)} \right] \overset{\text{INMF}}{\approx} \left[ W^{(1)} \cdots \tilde W^{(i)} \cdots W^{(c)} \right] \mathrm{diag}\!\left( H^{(1)}, \ldots, \tilde H^{(i)}, \ldots, H^{(c)} \right).$ (3.18)

(ii) Class incremental learning. When a new class, denoted by the matrix $V^{(c+1)}$, is added to the current training set, it forms a new training image matrix

$\tilde V = \left[ V^{(1)} \cdots V^{(c)} \,\middle|\, V^{(c+1)} \right].$ (3.19)

The setting is similar to item (i): none of the decompositions $V^{(k)} \overset{\text{CNMF}}{\approx} W^{(k)} H^{(k)}$ $(k = 1, 2, \ldots, c)$ needs to be computed again. We only need to perform CNMF on the matrix $V^{(c+1)}$, that is, $V^{(c+1)} \overset{\text{CNMF}}{\approx} W^{(c+1)} H^{(c+1)}$. Hence, class incremental learning is implemented as below:

$\tilde V = \left[ V^{(1)} \cdots V^{(c)} \,\middle|\, V^{(c+1)} \right] \overset{\text{INMF}}{\approx} \left[ W^{(1)} \cdots W^{(c)}\, W^{(c+1)} \right] \mathrm{diag}\!\left( H^{(1)}, \ldots, H^{(c)}, H^{(c+1)} \right).$ (3.20)
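An illustrative sketch of the class-incremental step (3.19)-(3.20) follows: only the new class is factorized, and the stored per-class factors are reassembled. The cnmf helper (a loop over cnmf_step from the earlier sketch) is hypothetical.

```python
# Illustrative class-incremental learning (3.19)-(3.20).
import numpy as np
from scipy.linalg import block_diag

def add_class(Ws, Hs, V_new, r0, beta, gamma):
    # Ws, Hs: lists of per-class factors kept from previous training
    W_new, H_new = cnmf(V_new, r0, beta, gamma)   # factorize new block only
    Ws.append(W_new)
    Hs.append(H_new)
    W = np.hstack(Ws)        # m x ((c+1) * r0) basis matrix
    H = block_diag(*Hs)      # block-diagonal coefficient matrix
    return W, H
```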

3.4. INMF Algorithm Design

Based on the above discussions, this subsection gives a detailed design of our INMF algorithm for face recognition. The algorithm involves two stages, namely the training stage and the recognition stage. Details are as follows.

Training stage
Step 1. Perform CNMF (3.9) on the matrices $(V^{(i)})_{m \times n_0}$, $i = 1, 2, \ldots, c$, namely

$(V^{(i)})_{m \times n_0} \overset{\text{CNMF}}{\approx} (W^{(i)})_{m \times r_0} (H^{(i)})_{r_0 \times n_0}, \quad i = 1, 2, \ldots, c.$ (3.21)
Step 2. INMF is obtained as

$V_{m \times n} \overset{\text{INMF}}{\approx} W_{m \times r} H_{r \times n},$ (3.22)

where $n = cn_0$, $r = cr_0$, and

$W_{m \times r} = \left[ W^{(1)}\, W^{(2)} \cdots W^{(c)} \right], \quad H_{r \times n} = \mathrm{diag}\!\left( H^{(1)}, H^{(2)}, \ldots, H^{(c)} \right).$ (3.23)
If a new training sample or class is added to the current training set, then the incremental learning algorithm presented in Section 3.3 is applied at this stage.

Recognition stage
Step 3. Calculate the coordinates of a testing sample $v$ in the feature space $\mathrm{span}\{w_1, w_2, \ldots, w_r\}$ by $\hat h = W^+ v$, where $W^+$ is the Moore-Penrose inverse of $W$.
Step 4. Compute the mean column vector $\bar v_i$ of class $i$ and its coordinate vector $\bar h_i = W^+ \bar v_i$ $(i = 1, 2, \ldots, c)$. The testing image $v$ is classified to class $k$ if $d(\hat h, \bar h_k) = \min_{1 \le i \le c} d(\hat h, \bar h_i)$, where $d(\hat h, \bar h_i)$ denotes the Euclidean distance between the vectors $\hat h$ and $\bar h_i$.
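Steps 3-4 can be sketched as follows; the function and variable names are illustrative.

```python
# Illustrative recognition stage: pseudo-inverse projection (Step 3)
# and nearest class-mean classification (Step 4).
import numpy as np

def classify(v, W, class_means):
    W_pinv = np.linalg.pinv(W)                      # Moore-Penrose inverse W+
    h = W_pinv @ v                                  # coordinates of test image
    h_means = [W_pinv @ vm for vm in class_means]   # class-mean coordinates
    dists = [np.linalg.norm(h - hm) for hm in h_means]
    return int(np.argmin(dists))                    # index of the nearest class
```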

3.5. Sparseness of Coefficient Matrix

For $h \in \mathbb{R}^n$, define the sparseness function with the $L_1$ and $L_2$ norms [10] by

$\mathrm{sparse}(h) = \frac{\sqrt{n} - \| h \|_1 / \| h \|_2}{\sqrt{n} - 1}.$ (3.24)

It can be seen that the sparseness function $\mathrm{sparse}: \mathbb{R}^n \to \mathbb{R}$ has range $[0, 1]$.

For the INMF method, we have the following theorem for each column $h_i$ of $H$.

Theorem 3.4. The sparseness of each column $h_i$ of $H$ in INMF satisfies the following estimation:

$\frac{\sqrt{cr_0} - \sqrt{r_0}}{\sqrt{cr_0} - 1} \le \mathrm{sparse}(h_i) \le 1.$ (3.25)

Proof. Suppose $h_i$ belongs to class $k$ in $H$. By the block-diagonal structure of $H$,

$h_i = \left( 0, \ldots, 0, h^{(k)}_{i1}, \ldots, h^{(k)}_{ir_0}, 0, \ldots, 0 \right)^T \in \mathbb{R}^r, \quad \tilde h_i = \left( h^{(k)}_{i1}, \ldots, h^{(k)}_{ir_0} \right)^T \in \mathbb{R}^{r_0}.$ (3.26)

Moreover,

$\| h_i \|_1 = \| \tilde h_i \|_1, \quad \| h_i \|_2 = \| \tilde h_i \|_2.$ (3.27)

So we have

$1 \le \frac{\| h_i \|_1}{\| h_i \|_2} \le \sqrt{r_0}.$ (3.28)

It follows, for $r = cr_0$, that

$\frac{\sqrt{r} - \sqrt{r_0}}{\sqrt{r} - 1} \le \frac{\sqrt{r} - \| h_i \|_1 / \| h_i \|_2}{\sqrt{r} - 1} = \mathrm{sparse}(h_i) \le 1.$ (3.29)

In the experimental section, the parameters are selected as $c = 120$ and $r_0 = 4$ for INMF on the FERET database. It can be calculated that

$0.9522 \le \mathrm{sparse}(h_i) \le 1.$ (3.30)

On the CMU PIE database, we select $c = 68$ and $r_0 = 4$ and calculate that

$0.9355 \le \mathrm{sparse}(h_i) \le 1.$ (3.31)

These bounds demonstrate that each column of $H$ in INMF is highly sparse. Moreover, the coefficient column vectors of different classes in $H$ are automatically orthogonal, since their nonzero supports lie in disjoint blocks.
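The two numerical bounds can be reproduced directly from (3.25); this short check uses the paper's settings.

```python
# Verify the lower bounds (3.30)-(3.31) from Theorem 3.4.
import numpy as np

def sparse_lower_bound(c, r0):
    r = c * r0
    return (np.sqrt(r) - np.sqrt(r0)) / (np.sqrt(r) - 1)

print(round(sparse_lower_bound(120, 4), 4))   # 0.9522 (FERET, c = 120)
print(round(sparse_lower_bound(68, 4), 4))    # 0.9355 (CMU PIE, c = 68)
```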

3.6. Computational Complexity

This section discusses the computational complexity of our proposed INMF approach. Each iteration of the proposed INMF consists of two parts, namely the updates of $W^{(i)}$ and $H^{(i)}$ over the $c$ class blocks. Counting multiplications over all blocks, the total per-iteration cost of our INMF is

$T_{\text{INMF}} = c \left( 2mn_0r_0^2 + 4mn_0r_0 + 2mn_0 + 10r_0n_0 \right) = \frac{2mnr^2}{c^2} + \frac{4mnr}{c} + \frac{10rn}{c} + 2mn.$ (3.32)

Similarly, we can obtain the per-iteration multiplication count of the NMF approach as

$T_{\text{NMF}} = 2mnr^2 + 4mnr + 2mn + 2rn.$ (3.33)

It can be seen that the computational complexity of our INMF method is much lower than that of NMF.
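Under the reconstructed counts above (treat the exact constants as assumptions), the speedup can be estimated numerically; for large r the dominant term gives a factor on the order of c squared.

```python
# Illustrative comparison of per-iteration multiplication counts.
def t_inmf(m, n0, r0, c):
    return c * (2*m*n0*r0**2 + 4*m*n0*r0 + 2*m*n0 + 10*r0*n0)

def t_nmf(m, n, r):
    return 2*m*n*r**2 + 4*m*n*r + 2*m*n + 2*r*n

m, n0, r0, c = 750, 3, 4, 120       # e.g., 30x25 wavelet faces, FERET-like
print(t_nmf(m, c*n0, c*r0) / t_inmf(m, n0, r0, c))  # speedup on the order of c^2
```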

4. Experimental Results

In this section, the FERET and CMU PIE databases are selected to evaluate the performance of our INMF method along with the BNMF, NMF, and PCA methods. All images in the two databases are aligned by the centers of the eyes and mouth and then normalized to a resolution of 112×92. The normalized images are reduced to wavelet feature faces of resolution 30×25 after two-level D4 wavelet decomposition. If a wavelet face contains negative pixels, we transform it into a nonnegative face by a simple translation. The nearest neighbor classifier with the Euclidean distance is exploited here. In the following experiments, the parameters are set to $r = 120$ for NMF, $r_0 = 4$ for BNMF and INMF, and $\beta = 10^{-4}$, $\gamma = 10^{-3}$ for INMF. The stopping condition of the iterative updates is

$\frac{F(n-1) - F(n)}{F(n)} \le \varepsilon,$ (4.1)

where $F(n)$ is the value of the criterion function defined in (3.3) after the $n$th update and $\varepsilon$ is the stopping threshold. We stop the iteration when condition (4.1) is met or after 1000 iterations.
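The relative-decrease test (4.1) is straightforward to implement; the default threshold below is an assumed placeholder, since the paper's exact setting did not survive extraction.

```python
# Illustrative stopping test (4.1); eps is an assumed placeholder value.
def should_stop(F_prev, F_curr, eps=1e-4):
    return (F_prev - F_curr) / max(F_curr, 1e-12) <= eps
```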

4.1. Face Databases

In the FERET database, we select 120 people, with 6 images for each individual. The six images are drawn from 4 different sets, namely Fa, Fb, Fc, and duplicate. Fa and Fb are sets of images taken with the same camera on the same day but with different facial expressions. Fc is a set of images taken with a different camera on the same day. Duplicate is a set of images taken around 6-12 months after the day the Fa and Fb photos were taken. Details of the characteristics of each set can be found in [3]. Images of one individual are shown in Figure 1.

The CMU PIE database includes 68 people in total. There are 13 pose variations ranging from full right profile to full left profile and 43 different lighting conditions (21 flashes with ambient light on or off). In our experiment, we select 56 images for each person, including 13 poses with neutral expression and 43 different lighting conditions in frontal view. Some images of one person are shown in Figure 2.

4.2. Basis Face Images

This section shows the basis images of the training set learnt by the PCA, NMF, BNMF, and INMF approaches. Figure 3 shows 25 basis images of each approach on the CMU PIE database. It can be seen that the bases of all methods except PCA are additive. PCA extracts holistic facial features, whereas INMF learns more local features than NMF and BNMF. Moreover, the greater the number of basis images, the more localized the features learnt by all NMF-based approaches.

4.3. Results on FERET Database

This section reports the experimental results with nonincremental and incremental learning on the FERET database. All methods use the same training and testing face images. The experiments are repeated 10 times, and the average accuracies under different training numbers, along with the mean running times, are recorded.

4.3.1. Nonincremental Learning

We randomly select $t$ $(t = 2, 3, 4, 5)$ images from each person for training, while the remaining $6 - t$ images of each individual are used for testing. The average accuracies for training numbers ranging from 2 to 5 are recorded in Table 1 and plotted in Figure 4(a). The recognition accuracies of INMF, BNMF, NMF, and PCA are 66.73%, 66.07%, 64.44%, and 34.33%, respectively, with 2 training images. The performance of each method improves as the number of training images increases. When the number of training images is equal to 5, the recognition accuracies of INMF, BNMF, NMF, and PCA are 83.08%, 81.67%, 80.25%, and 37.58%, respectively. In addition, Table 2 compares the average time consumption of the three NMF-based approaches. It can be seen that our INMF method gives the best performance in all cases of nonincremental learning on the FERET database.

4.3.2. Class Incremental Learning

For 119 people, we randomly select 3 images from each individual for training and then add a new class to the training set. NMF must conduct repeated learning, while BNMF and INMF need only perform incremental training on the newly added class. The average accuracies and the mean running times are recorded in Table 3 (plotted in Figure 6(a)) and Table 4, respectively.

Compared with the NMF and BNMF approaches, the proposed method gives around 5% and 1.5% accuracy improvements, respectively. The running time of INMF is around 2 times and 219 times faster than that of NMF with 119 individuals for training and 120 individuals after class-incremental learning, respectively. Overall, our INMF gives the best performance on the FERET database.

4.4. Results on CMU PIE Database

The experimental setting on the CMU PIE database is similar to that of the FERET database. It also includes two parts, namely nonincremental training and incremental learning. The experiments are repeated 10 times, and the average accuracies under different training numbers, along with the mean running times, are recorded for comparison. Details are as follows.

4.4.1. Nonincremental Learning

For each individual, $t$ $(t = 7, 14, 21, 28)$ images are randomly selected for training, while the remaining $56 - t$ images are used for testing. The average recognition rates and mean running times are tabulated in Table 5 (plotted in Figure 4(b)) and Table 6, respectively. It can be seen that the recognition accuracies of INMF, BNMF, NMF, and PCA are 68.91%, 68.58%, 66.21%, and 23.94%, respectively, with training number 7. When the number of training images is equal to 28, the recognition accuracies of INMF, BNMF, NMF, and PCA are 77.18%, 76.64%, 71.77%, and 27.51%, respectively. Compared with the PCA and NMF methods, the proposed method gives around 49% and 5% accuracy improvements, respectively. The performance of INMF is slightly better than that of BNMF; however, the computational efficiency of INMF greatly outperforms that of BNMF.

4.4.2. Sample Incremental Learning

We randomly select 7 images from each person for training and the remaining 49 images for testing. In the sample-incremental learning stage, 7, 14, and 21 images of the first individual are added to the training set, respectively, while the training images of the remaining individuals are kept unchanged. Table 7 (Figure 5) and Table 8 show the average recognition accuracies and the mean running times, respectively. Experimental results show that our INMF method gives the best performance in all cases.

4.4.3. Class Incremental Learning

For 67 people, we randomly select 7 images from each individual for training and then add a new class to the training set. NMF must conduct repetitive learning, while BNMF and INMF need only perform incremental learning on the newly added class. The average recognition rates and the mean running times are recorded in Table 9 (plotted in Figure 6(b)) and Table 10, respectively. Experimental results show that INMF outperforms BNMF and NMF in both recognition rate and computational efficiency.

5. Conclusions

This paper proposed a novel constraint INMF method to address the time-consuming problem and the incremental learning problem of existing NMF-based approaches for face recognition. INMF enjoys several good properties, such as low computational complexity, a sparse coefficient matrix, orthogonality between the coefficient column vectors of different classes in $H$, and, in particular, support for incremental learning. Experimental results on the FERET and CMU PIE face databases show that INMF outperforms the PCA, NMF, and BNMF approaches in both nonincremental and incremental learning.

Acknowledgments

This work was supported by the NSF of China (60573125, 60603028), in part by the Program for New Century Excellent Talents of the Educational Ministry of China (NCET-06-0762), and by the NSF of Chongqing CSTC (CSTC2007BA2003). The authors would like to thank the US Army Research Laboratory for contributing the FERET database and CMU for the CMU PIE database.