The Scientific World Journal
Volume 2014, Article ID 879031, 16 pages
http://dx.doi.org/10.1155/2014/879031
Research Article

A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

1Pattern Recognition Research Group, Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bandar Baru Bangi, Malaysia
2Department of Computer Science, Faculty of Education for Women, University of Kufa, Iraq
3Data Mining and Optimization Group, Faculty of Information System and Technology, Universiti Kebangsaan Malaysia, 43600 Bandar Baru Bangi, Malaysia

Received 11 February 2014; Accepted 12 July 2014; Published 6 August 2014

Academic Editor: Patricia Melin

Copyright © 2014 Mohammed Hasan Abdulameer et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features through highly accurate modeling of human faces under various physical and environmental conditions. However, fitting the model to an original image remains a challenging task. The state of the art shows that optimization methods are applicable to this problem, yet selecting and applying a suitable optimization method is itself a common difficulty. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, our own 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed technique performs effectively in terms of face recognition accuracy.

1. Introduction

Recently, biometric techniques have received significant attention due to their wide application in areas such as security. Many biometric modalities have been used, such as face, voice, iris, hand geometry, retina, DNA, and ear [1–4]. Face recognition is a crucial biometric technique that can be employed in several areas, such as identification at the front door for home protection, recognition at an ATM or in combination with a smart card for authentication, and video inspection for security [5–7]. Of late, the massive number of biometric applications has made face recognition one of the most dynamic research areas for computer and machine vision researchers [8]. It is noteworthy that face recognition is a difficult and challenging task, because it demands consideration of all possible variations in appearance caused by changes in illumination, facial characteristics, occlusions, and so forth. The major challenges in automatic facial recognition are face localization, feature extraction, and modeling [9–11].

The active appearance model (AAM) is one of the most prominent techniques [12] and has been widely used for feature extraction in many applications [13], including face modeling, the study of human behavior, medical imaging tasks such as segmentation of cardiac MRIs or the diaphragm in CT data, and registration in functional heart imaging [14]. The AAM describes the appearance of the face [15–18] and builds a statistical model of the shape and appearance of a given object. The model blends constraints on both shape and texture by learning statistical generative models for the shape of a face and the appearance of a face. After creating the model, it is essential to fit it to new images, which is important for finding accurate model parameters for an object [19]. Nevertheless, the deficiency of the fitting result is the biggest concern when matching the model with the original image [20]; moreover, selecting a fitting algorithm is a crucial issue in AAM [21]. The approaches employed for enhancing the fitting performance of AAM can be classified into four groups.

The first approach involves using different versions of the deformable model. Christoudias et al. [22, 23] have stated that the light field manipulates the appearance of the human face in real images; therefore, each prototype has been obtained from a specific view of the 2D shape of each face under a specific light field. To fit the light-field model to the input image, they first chose the particular view of the light-field model closest to the view of the input image and then used direct search to match the input image to a point in the light-field manifold. Based on manifold estimation in unified view-based feature spaces, [24] have proposed a technique for synthesizing faces across poses, which is capable of synthesizing unseen views even for large variations in pose. However, they tested their work on only twenty-five 3D models, which is not adequate to validate the proposed approach.

The second approach for enhancing the fitting performance of AAM combines existing models. In this context, [25] hybridized active shape models (ASMs) and active appearance models (AAMs) for reliable image interpretation. Moreover, [26] proposed the texture-constrained active shape model (TC-ASM), which stems from the local appearance model of the ASM, owing to its stability under diverse lighting conditions. They also assimilated the global texture model of AAM to constrain the shape and to provide an optimization viewpoint for identifying the shape parameters. The application of the texture-constrained shape enables the search method to escape from the local minima of the ASM search, which consequently generates enhanced fitting outcomes. Additionally, [27] hybridized ASMs with AAMs, in which the ASM attempts to identify appropriate landmark points using the local profile model. They derived a gradient-based iterative method by modifying the objective function of the ASM search, since the original objective function of the ASM search is not appropriate for combining these methods. Consequently, they proposed a new fitting method, which combines the objective functions of both ASM and AAM into a single objective function in a gradient-based optimization framework. The AAM uses gradient-based optimization techniques for model fitting and is therefore very sensitive to the initial model parameters. To address this issue, [28] hybridized active appearance models and cylinder head models (CHMs), where the global head motion parameters obtained from the CHMs are used as cues for the AAM parameters for good fitting or reinitialization.

The third approach for enhancing the matching performance of AAM is to use a specific AAM suited to different people and external variations. In this context, [29] used a person-specific AAM for tracking the subject and a generic AAM to compute subject-independent features. As the conventional AAM is generally viewpoint-specific, the resulting fitting accuracy is lower; to address this issue, a viewpoint-specific AAM has been proposed by [30] for robust facial point extraction under multiple viewpoints. Their method collects the training samples and divides them into different groups by viewpoint, then constructs an AAM for each training group; when a new test image is given as input, it is matched by all the trained AAMs, the fitting errors are computed, and the minimal one is selected as the final fitting result. When the input images contain differences in pose, expression, and illumination that were not included in the training set, the conventional AAM often fails to give precise outcomes. To overcome this limitation, [20] have proposed a tensor-based AAM, which applies multilinear algebra to the shape and appearance models of the conventional AAM and thereby enhances the effectiveness of fitting.

The fourth approach for enhancing the matching performance of AAM is to modify the fitting algorithm of AAM itself, by proposing a novel fitting algorithm or enhancing existing ones. In this perspective, [14] have proposed a fast AAM using canonical correlation analysis (CCA), which models the relation between image differences and the model parameters to improve the convergence speed of the fitting algorithm. Moreover, [31] have proposed an efficient AAM fitting algorithm based on the inverse compositional image alignment algorithm. This approach does not demand a linear relationship between the image differences and the model parameters, yet it outperforms the conventional AAM in fitting accuracy and also converges faster. Moreover, [32] have proposed an enhanced version of the 2D AAM fitting method, based on the inverse compositional image alignment algorithm, which has been employed for fitting 3D AAMs on short-axis cardiac MRI. Additionally, [33] have proposed a 2D + 3D AAM that has an additional 3D shape model, along with an efficient fitting algorithm that adds 3D constraints to the cost function of the 2D AAM. On the other hand, [34] have proposed another extension of the 2D + 3D AAM fitting algorithm, called the multiview AAM fitting (MVAAM) algorithm, which fits a single 2D + 3D AAM simultaneously to multiple view images obtained from multiple affine cameras. However, the evaluation process has not been adequate to demonstrate the effectiveness of that approach. Similarly, [35] have proposed a new fitting algorithm for 2D + 3D AAMs in a multiple-calibrated-camera system, called stereo AAM (STAAM), for increasing the fitting stability of 2D + 3D AAMs.
At present, metaheuristic optimization algorithms like the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC) have become a common solution to many optimization problems [36–38].

In this paper, we have used the artificial bee colony (ABC) algorithm [39] for solving the fitting problem in AAM. The ABC algorithm simulates the intelligent foraging behavior of a honey bee swarm [40]. Nevertheless, we have not used the conventional ABC algorithm directly in our research, due to a limitation in its neighborhood search for generating new food sources. This issue has been addressed by introducing an adaptive ABC algorithm to solve the fitting problem in AAM. The rest of this paper is organized as follows. Firstly, the problem statement of AAM is highlighted in Section 2. Next, we discuss the conventional and proposed methods, namely, the adaptive artificial bee colony, in Section 3. The proposed face recognition technique is explained in Section 4. In Section 5 we present the evaluation results of our proposed method. Finally, we summarize our work in the last section.

2. The Problem Statement

Face recognition methods mostly use the AAM for the feature extraction and recognition process, which has weaknesses in its optimization process [20]. Therefore, it is essential to enhance the optimization process in AAM by including an optimization algorithm. Of late, a lot of optimization algorithms have been utilized in many research fields, but it is crucial to pick the right algorithm for the present study; consequently, we have utilized the ABC algorithm [40]. As discussed earlier, we have not used the conventional ABC algorithm directly in our face recognition technique, as it has a drawback in its neighborhood search for generating new food sources. In the neighborhood search, the standard ABC algorithm randomly chooses the food position used to generate the new food sources. This random generation process influences the accuracy, and the lowered accuracy affects recognition performance. Consequently, to address this problem we have proposed an adaptive ABC algorithm that enhances the conventional ABC. Earlier studies related to the active appearance model have claimed that fitting the model to the target image is a very challenging task; therefore, soft computing techniques, mostly evolutionary algorithms, have been utilized for resolving the issue. Nevertheless, past studies lack documentation of recent developments in this domain. According to [42], the recently developed artificial bee colony (ABC) algorithm has performed well in the majority of applications. Even though [43] have already proposed an adaptive ABC algorithm, it fails to consider the quality of the current food source when attempting to find a new food source. Hence, in the present study we propose an enhanced adaptive ABC algorithm, which locates new food sources based on the quality of the previous food source. Moreover, in face recognition the fitting has to be done rapidly to handle large databases.
Hence, in this paper, we have proposed a face recognition technique based on the active appearance model.

3. Artificial Bee Colony Algorithm

Swarm intelligence is an active research field motivated by the collective acumen of insect or animal swarms. Over time, a number of algorithms have been proposed that imitate the biological behaviors of insect or animal swarms for addressing different kinds of problems; among these, the artificial bee colony algorithm [42] is the most recent, mimicking the foraging behavior of honey bee colonies. The artificial bee colony (ABC) has been widely employed for addressing multimodal and multidimensional numerical optimization problems. Generally, an artificial bee colony is composed of employed bees, onlookers, and scouts. An onlooker waits on the dance area to obtain information about food sources, an employed bee travels to a food source, and a scout carries out a random search. The location of a food source signifies a possible solution to the optimization problem, and the amount of nectar in a food source symbolizes the quality of the related solution.

Initially, a distributed population is produced at random. It is noteworthy that each food source is assigned exactly one employed bee; therefore the number of employed bees and the number of food sources are always equal. Afterwards, the locations (solutions) are continuously modified until the optimum solution is achieved or the stop conditions are fulfilled. Fundamentally, an employed bee has the capacity to remember its earlier best location in its memory and generates a new location in its neighborhood. Based on the greedy criterion, the employed bee updates its food source: when the newly identified food source is superior, the old location is replaced with the new one. Once all employed bees complete their search, they communicate with onlookers to share information about the route and distance to food sources and the amounts of nectar; the information is communicated by means of a waggle dance in the dancing area. By monitoring the waggle dance, each onlooker selects a food source based on the probability value associated with it and searches the area within its neighborhood to generate a new candidate solution. Later, the greedy criterion is applied again, just as for the employed bees. However, if a position cannot be improved after a prespecified number of cycles, the position is abandoned and the corresponding employed bee turns into a scout. The abandoned position is replaced with a new randomly generated food source [40].
The main steps can be described as follows.

(1) Initialize the bee colony {x_1, x_2, ..., x_SN}, where SN denotes the population size and x_i is the i-th bee.

(2) According to the fitness function, calculate the fitness of each employed bee and record the maximum nectar amount as well as the corresponding food source.

(3) Each employed bee produces a new solution v_i in the neighborhood of the solution x_i in its memory by v_ij = x_ij + φ_ij (x_ij − x_kj), where k ≠ i is a randomly chosen index and φ_ij is a random real number in [−1, 1].

(4) Use the greedy criterion to update x_i: compute the fitness of v_i; if v_i is superior to x_i, x_i is replaced with v_i; otherwise x_i remains.

(5) According to the fitness of x_i, compute the probability value p_i via p_i = fit_i / Σ_{n=1}^{SN} fit_n.

(6) Depending on the probabilities p_i, onlookers choose food sources, search their neighborhoods to generate candidate solutions, and calculate their fitness.

(7) Use the greedy criterion to update the food sources.

(8) Memorize the best food source and nectar amount achieved.

(9) Check whether there are abandoned solutions. If so, replace them with new randomly generated solutions by x_ij = min_j + rand(0, 1)(max_j − min_j), where min_j and max_j stand for the lower and upper bounds of possible solutions, respectively.

(10) Repeat steps (3)–(9) until the maximum number of iterations is reached or the stop conditions are satisfied.

As discussed above, the fitness function is an essential aspect of the ABC algorithm, since it determines the foraging quality of the colony, that is, the precision of probable solutions. Apart from this, a few control parameters have to be designated, such as the number of employed and onlooker bees, the trial limit for abandonment, and the maximum number of cycles or stop conditions, all of which directly affect the pace and stability of convergence.

Pseudocode of the ABC algorithm is given as in Algorithm 1.

alg1
Algorithm 1
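The steps above can be sketched in Python as follows. This is a minimal illustration of the standard ABC loop on De Jong's sphere function (the test function used later in Section 4.2), not the authors' implementation; the colony size, trial limit, and cycle count are illustrative choices.

```python
import random

def sphere(x):
    # De Jong's type I function: f(x) = sum of x_i^2, minimum 0 at the origin
    return sum(v * v for v in x)

def abc_minimize(f, dim, lo, hi, n_food=10, limit=20, max_iter=200, seed=0):
    rng = random.Random(seed)
    # Step (1): one employed bee per food source, initialized at random
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        # Step (3): v_ij = x_ij + phi * (x_ij - x_kj), phi in [-1, 1], k != i
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        return v

    def greedy(i, v):
        # Steps (4)/(7): keep the new source only if it improves the cost
        fv = f(v)
        if fv < fits[i]:
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                       # employed-bee phase, steps (3)-(4)
            greedy(i, neighbor(i))
        weights = [1.0 / (1.0 + ft) for ft in fits]   # step (5): lower cost, higher weight
        for _ in range(n_food):                       # onlooker phase, steps (6)-(7)
            i = rng.choices(range(n_food), weights=weights)[0]
            greedy(i, neighbor(i))
        for i in range(n_food):                       # scout phase, step (9)
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

best_x, best_f = abc_minimize(sphere, dim=3, lo=-5.12, hi=5.12)
print(best_f)  # a small value near 0
```

Note the greedy criterion resets the trial counter on improvement; the counter is what eventually converts an exhausted employed bee into a scout.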

3.1. The Proposed Adaptive ABC Algorithm

Like other evolutionary algorithms, the ABC faces some inherent problems, such as slow or premature convergence, particularly on multimodal problems. In its neighborhood search, the standard ABC algorithm randomly chooses the food position used to generate new food sources. This random generation process generally yields lower accuracy, which negatively impacts recognition performance. To avoid these drawbacks of the standard method, in this study we have derived an adaptive expression, which analyzes the nature of the solution space and generates a neighborhood position according to the nectar amount of the current position.

The adaptive ABC algorithm is initiated by generating new food sources x_i, i = 1, 2, ..., SN, one for every employed bee, where each x_i is generated using

x_ij = min_j + rand(0, 1)(max_j − min_j), (1)

where min_j and max_j stand for the lower and upper bounds of possible solutions, respectively. In the adaptive ABC, the neighborhood selection is done by determining the new food positions as follows:

v_ij = x_ij + 1/x_ij if x_ij < x_best,j; v_ij = x_ij − 1/x_ij if x_ij > x_best,j; v_ij = x_ij if x_ij = x_best,j, (2)

where v_i are the new food positions of the employed bee, found through neighborhood search. Onlooker bees evaluate the best position of every employed bee and go to those positions for neighborhood search as in (2). The new food sources are computed from the nectar values of the previous food sources. Initially, the nectar values of the current food sources are determined, and subsequently the best food source, the one with the highest nectar value among them, is identified. The best nectar value is compared with the other food sources' nectar values: where the best food source's value exceeds that of a given food source, the corresponding position is increased; otherwise it is decreased, as shown in (2); if the two values are equal, the position remains unchanged. In other words, among all the food sources the best one is taken, and its food position is denoted x_best.

This means that the other food positions could be neighbors of the best food position, so the neighborhood search has to be carried out in such a way that these neighbor positions are considered. To achieve this, in the proposed adaptive ABC the current food position is increased, decreased, or kept unchanged by a factor based on the deviation between the current food position and the best food position. If the current food position value is less than the best food position, the natural decision is to increase the current food position so as to approach the best food position, and so the first criterion of (2) applies. If the current food position value is greater than the best food position, the current food position has to be decreased to approach the best food position, and so the second criterion of (2) applies. No change is needed if the current position equals the best position. The increment/decrement factor is determined by the current position, namely, its reciprocal. Using the reciprocal avoids skipping over optimal solutions; that is, if the current position has an extremely low or high value, further food positions are likely to lie in its neighborhood, and taking the reciprocal enables a small position change instead of a large one. The worst positions are forgotten, and the scout bees are sent to new random positions as generated in (3). The process is repeated iteratively until the maximum number of cycles is reached, as shown in Figure 1.

879031.fig.001
Figure 1: Adaptive ABC algorithm.

In addition, the pseudocode of the proposed adaptive ABC approach is given in Algorithm 2.

alg2
Algorithm 2
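A minimal sketch of the adaptive neighborhood update described above follows. Since the paper's equation (2) is not reproduced in the extracted text, the exact reciprocal step (and the small eps guard against division by zero) is an assumption based on the verbal description.

```python
def adaptive_update(position, best_position, eps=1e-9):
    """Move each coordinate of a food source toward the best source.

    Following the text: increase a coordinate when it is below the best,
    decrease it when above, and leave equal values unchanged. The step
    size is the reciprocal of the current value, so extreme values move
    gently. The exact form of equation (2) is not reproduced in the
    extracted text, so this factor is an illustrative assumption.
    """
    new = []
    for cur, best in zip(position, best_position):
        step = 1.0 / (abs(cur) + eps)  # reciprocal factor; eps avoids division by zero
        if cur < best:
            new.append(cur + step)     # move up toward the best position
        elif cur > best:
            new.append(cur - step)     # move down toward the best position
        else:
            new.append(cur)            # already matches the best value
    return new

print(adaptive_update([2.0, 0.5, 3.0], [3.0, 0.5, 1.0]))
```

The reciprocal makes the step shrink as the magnitude of the current position grows, which matches the text's rationale of preferring small moves near promising regions.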

4. The Proposed Face Recognition Technique

The proposed recognition technique utilizes AAM for extracting the shape and appearance features from the database images. Nevertheless, it is crucial to have appropriate fitting for extracting AAM-based features. Consequently, the fitting performance has been enhanced by presenting a new adaptive ABC algorithm, in which the search is accelerated by considering the quality of the best food source. Depending on that quality, new food sources are generated by the neighborhood search of the algorithm. The proposed technique mainly comprises (1) feature extraction using AAM modeling, (2) fitting using the adaptive ABC, and (3) a recognition phase. The three phases are discussed in Sections 4.1, 4.2, and 4.3, respectively.

4.1. Feature Extraction Using AAM Modeling

Given a set of M training images I_1, I_2, ..., I_M, where each image I_i is of size m × n. In the training images, the active portions have been manually labelled for extracting the parameters of the shape and appearance models.

In a 2D image, n landmark points can be represented as a 2n-dimensional shape vector s = (x_1, ..., x_n, y_1, ..., y_n)^T. The set of shape vectors is normalized to a common reference frame; hence a shape can be represented by applying PCA:

s = s̄ + P_s b_s,

where s represents the synthesized shape in the normalized frame, s̄ is the mean shape in the normalized frame, P_s is the matrix of eigenvectors extracted from the training shapes, and b_s is a set of shape parameters. After acquiring the shape model, every training image is warped so that its control points match the mean shape. Next, the texture information is sampled from the shape-normalized image enclosed by the mean shape to form a texture vector g. A texture model is constructed by applying PCA to the normalized texture vectors:

g = ḡ + P_g b_g,

where g is the synthesized texture in the normalized frame, ḡ is the mean texture in the normalized frame, P_g is the matrix containing the texture eigenvectors as columns, and b_g is a set of texture parameters. An example of shape and texture can thus be synthesized from b_s and b_g. Since there are correlations between shape and texture variations, a weight matrix W_s, a diagonal scaling matrix, should be established for the shape parameters. Then a concatenated vector can be generated:

b = (W_s b_s, b_g)^T.

An appearance model is then set up by a further PCA:

b = Q c,

where Q is the matrix containing the eigenvectors as columns and c is a set of appearance parameters. Now the shape and texture models can be expressed by the appearance parameter c through the shape and texture parts Q_s and Q_g of Q.
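The shape-model construction described above can be sketched as a PCA over aligned landmark vectors. The helper below is an illustrative implementation, not the authors' code; the 95% retained-variance threshold is an assumed modeling choice.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """Fit the linear shape model s ≈ mean + P @ b from aligned shapes.

    `shapes` is an (N, 2n) array of aligned landmark vectors
    (x1..xn, y1..yn). Returns the mean shape, the eigenvector matrix P,
    and a function mapping a shape vector to its parameter vector b.
    """
    shapes = np.asarray(shapes, dtype=float)
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # PCA via SVD of the centred data matrix; rows of Vt are eigenvectors
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    P = Vt[:k].T                     # (2n, k): modes kept to explain var_kept
    to_params = lambda s: P.T @ (np.asarray(s, dtype=float) - mean)
    return mean, P, to_params

# Toy example: five "shapes" with 4 landmarks (8 coordinates) each
rng = np.random.default_rng(0)
base = rng.normal(size=8)
shapes = [base + 0.1 * rng.normal(size=8) for _ in range(5)]
mean, P, to_params = build_shape_model(shapes)
b = to_params(shapes[0])             # shape parameters b_s for the first shape
recon = mean + P @ b                 # approximate reconstruction of the shape
```

The texture and combined appearance models follow the same pattern: a second PCA over texture vectors, and a final PCA over the concatenated, weighted parameter vectors.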

4.2. Fitting Using Adaptive ABC Algorithm

The fitting process in AAM modeling can be optimized by several existing optimization algorithms. Owing to its efficiency, the ABC algorithm has been extensively used in a number of applications [44, 45] for resolving difficult optimization problems [46]. In the traditional ABC algorithm [42], new food positions are identified by a predefined static expression. The static expression becomes inefficient when searching a huge search space, as in face recognition systems, because it degrades the performance of the algorithm. Consequently, for enhancing the efficiency of the technique, an adaptive expression has been derived. The adaptive expression analyzes the nature of the solution space and generates a neighborhood position according to the amount of nectar in the current position (as described in the example below). The parameters used in the adaptive ABC algorithm are listed in Table 1.

tab1
Table 1: Parametric values used for adaptive ABC.

The steps involved in adaptive ABC algorithm for fitting optimization are described as follows.

The adaptive ABC algorithm is initiated by generating new food sources x_i, i = 1, 2, ..., SN, one for every employed bee, where each x_i is generated using

x_ij = min_j + rand(0, 1)(max_j − min_j),

where min_j and max_j stand for the lower and upper bounds of possible solutions, respectively. For every food source, the nectar amount is computed by the following steps.

Firstly, generate an image model as

x_model = x̄ + Φ c,

where Φ represents the eigenvectors of the image model and c is a vector of appearance parameters. Next, determine the nectar as

E_i = ‖x_i − x_model‖²,

where E_i is the nectar amount of the i-th image. An optimal c producing minimal nectar is selected, giving the best food source x_best among all food sources, and the neighborhood selection is done by determining new food positions as follows:

v_ij = x_ij + 1/x_ij if x_ij < x_best,j; v_ij = x_ij − 1/x_ij if x_ij > x_best,j; v_ij = x_ij if x_ij = x_best,j, (12)

where v_i are the new food positions of the employed bee, found through neighborhood search. Onlooker bees evaluate the best position of every employed bee and go to those positions for neighborhood search as done in (12). The worst positions are forgotten and the scout bees are sent to new random positions. The process is repeated iteratively until the maximum number of cycles is reached. Once the termination criterion is met, the best weights for every image are stored in the database. The process is explained briefly by the following example.

Example. We have randomly generated four food sources, namely, x_1, x_2, x_3, and x_4, each with three food positions.

Here, we calculate the nectar values for the generated food sources using De Jong's type I function [29]; the results are given in Table 2. De Jong's type I function is defined as

f(x) = Σ_{i=1}^{n} x_i², (14)

where in (14) the range is −5.12 ≤ x_i ≤ 5.12.

tab2
Table 2: Computed nectar values.
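As a concrete check of the nectar computation, De Jong's type I function can be evaluated over a few food sources. The four sources below are hypothetical values, not the ones used for Table 2.

```python
def de_jong_1(x):
    # De Jong's type I function: f(x) = sum_i x_i^2, each x_i in [-5.12, 5.12]
    assert all(-5.12 <= v <= 5.12 for v in x)
    return sum(v * v for v in x)

# Four hypothetical food sources, each with three food positions
foods = {
    "x1": [1.0, -2.0, 0.5],
    "x2": [3.0, 1.0, -1.0],
    "x3": [-0.5, 0.5, 0.5],
    "x4": [2.0, 2.0, 2.0],
}
nectar = {name: de_jong_1(pos) for name, pos in foods.items()}
print(nectar)  # lower nectar cost means a better food source under this measure
```

Under this cost interpretation, x3 would be the best food source of the four, since its squared coordinates sum to the smallest value.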

Conventional ABC Algorithm. New solutions are generated for the above food sources by the conventional ABC neighborhood search, whose formula is

v_ij = x_ij + φ_ij (x_ij − x_kj),

where k ≠ i is a randomly chosen index and φ_ij is a random real number in [−1, 1]. The computed nectar values of the generated food sources are given in Table 2. The nectar values of the food sources generated by the conventional ABC algorithm are greater than the nectar values of the initially generated food sources.

Adaptive ABC Algorithm. For the given initial food sources, the new solutions are generated by our proposed adaptive ABC algorithm using (12).

In Table 2, each entry is the nectar value of the corresponding food source. As can be seen from Table 2, our proposed adaptive ABC algorithm has generated new food sources with minimum nectar values. Based on the above example, the performance of the conventional and proposed adaptive ABC algorithms is given in Figure 2.

879031.fig.002
Figure 2: Adaptive and conventional ABC algorithms performance in terms of their nectar values.

Figure 2 presents the nectar cost when the conventional and adaptive ABCs are applied. One can see from the graph that the bees of the conventional ABC take longer to find rich food positions, at a higher cost, than the bees of the proposed adaptive ABC.

4.3. Recognition Phase

The recognition system establishes the identity of an image by comparing its parameters with those of the images in the database. The presence of a matching image in the database confirms the identity, and its absence rejects it. The test image is subjected to the extraction of shape and appearance parameters in order to perform recognition. The recognition system performs the similarity measure using the distance between the test parameter vector c_test and a database parameter vector c_db,

D = ‖c_test − c_db‖, (18)

and makes its decision as follows: if the minimum distance D over the database does not exceed a predefined threshold, the decision-making system outputs the person ID of the matching image; otherwise it outputs Unidentified.
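The recognition decision described above can be sketched as a nearest-neighbour search over stored parameter vectors. The Euclidean distance, the threshold value, and the toy database below are illustrative assumptions, since the paper's equations (18) and (19) are not reproduced in the extracted text.

```python
import math

def euclidean(a, b):
    # Distance between two AAM parameter vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(test_params, database, threshold):
    """Return the person ID of the closest enrolled image, or 'Unidentified'.

    `database` maps person IDs to stored AAM parameter vectors. The
    decision rule follows the text: accept only if the best distance
    falls under the (illustrative) threshold.
    """
    best_id, best_d = None, float("inf")
    for person_id, params in database.items():
        d = euclidean(test_params, params)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else "Unidentified"

db = {"person_A": [0.1, 0.9, 0.3], "person_B": [0.8, 0.2, 0.5]}
print(recognize([0.12, 0.88, 0.31], db, threshold=0.2))  # → person_A
print(recognize([5.0, 5.0, 5.0], db, threshold=0.2))     # → Unidentified
```

The threshold trades sensitivity against specificity: raising it accepts more genuine matches but also more impostors.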

5. Experimental Results

The performance of our proposed method has been analyzed in three evaluation steps: (1) evaluating the recognition results; (2) evaluating the fitting solution of the adaptive ABC against the conventional ABC; and (3) evaluating the proposed AAM method using fitting errors. The proposed recognition system has been implemented in MATLAB 7.12 on a system with an i5 CPU @ 3.19 GHz and 4 GB RAM; evaluation has been done using the CASIA-Face V5 database [47] and our own 2.5D face dataset collected by our cybersecurity group. Moreover, the UBIRIS dataset [48] has also been used to validate the stability of the proposed method on another biometric category. Figure 3 shows some examples from the three datasets: CASIA, 2.5D, and UBIRIS, respectively. From the CASIA database, 500 images have been used, divided into five parts for experimentation. In each part, there are 100 images at five different environments of pose and illumination variations. For our own 2.5D dataset and the UBIRIS dataset, 121 images are utilized in training and 6 images in testing. The performance of the technique is analyzed by conducting n-fold (for all datasets, n = 10) cross validation over all datasets, and the corresponding statistical performance measures are determined. To perform n-fold cross validation, ten folds of training and testing datasets are generated by a folding operation.
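The folding operation for the tenfold protocol can be sketched as follows; the interleaved split shown here is one reasonable way to generate the folds, not necessarily the authors' exact procedure.

```python
def k_folds(items, k=10):
    """Partition a dataset into k folds for cross validation.

    Each round uses one fold for testing and the remaining k-1 folds
    for training, as in the tenfold protocol described above.
    """
    folds = [items[i::k] for i in range(k)]  # interleaved assignment to folds
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

images = list(range(100))  # stand-in for 100 image indices
rounds = list(k_folds(images, k=10))
print(len(rounds), len(rounds[0][0]), len(rounds[0][1]))  # 10 90 10
```

Each image appears in exactly one test fold across the ten rounds, so the averaged measures cover the whole dataset.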

fig3
Figure 3: Sample images from (a) CASIA face dataset, (b) 2.5D face dataset, and (c) UBIRIS iris dataset.
5.1. Performance Evaluation Using Recognition Results

The performance of the proposed technique has been analyzed by conducting n-fold (for our datasets, n = 10) cross validation over all datasets, and the corresponding statistical performance measures are determined. The comparison is done with the conventional AAM and with the adaptive AAM [41]. The cross validation results for 1 : N recognition over the three datasets are tabulated in Tables 3, 4, 5, 6, 7, 8, 9, 10, and 11. The statistical and average recognition performance over the three datasets is illustrated in Figure 4. The performance measures accuracy, sensitivity, and specificity are defined as follows.

Accuracy: the degree of closeness of measurements of a quantity to its authentic (true) value.

Sensitivity: the percentage of recognized face images that are correctly identified as recognized face images.

Specificity: the percentage of nonrecognized face images that are correctly identified as nonrecognized face images.
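Under the usual confusion-matrix convention, the three measures can be computed as below; the counts in the example are illustrative, not values from the paper's tables.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Standard definitions of the three reported measures."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts, not values from the paper's experiments
acc, sens, spec = confusion_metrics(tp=90, tn=85, fp=15, fn=10)
print(acc, sens, spec)  # 0.875 0.9 0.85
```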

tab3
Table 3: Cross validation results for CASIA face dataset in terms of accuracy for conventional AAM, adaptive AAM, and the proposed AAM.
tab4
Table 4: Cross validation results for 2.5D face dataset in terms of accuracy for conventional AAM, adaptive AAM, and the proposed AAM.
tab5
Table 5: Cross validation results for UBIRIS dataset in terms of accuracy for conventional AAM, adaptive AAM, and the proposed AAM.
tab6
Table 6: Cross validation results for CASIA face dataset in terms of sensitivity for conventional AAM, adaptive AAM, and the proposed AAM.
tab7
Table 7: Cross validation results for 2.5D face dataset in terms of sensitivity for conventional AAM, adaptive AAM, and the proposed AAM.
tab8
Table 8: Cross validation results for UBIRIS dataset in terms of sensitivity for conventional AAM, adaptive AAM, and the proposed AAM.
tab9
Table 9: Cross validation results for CASIA face dataset in terms of specificity for conventional AAM, adaptive AAM, and the proposed AAM.
tab10
Table 10: Cross validation results for 2.5D dataset in terms of specificity for conventional AAM, adaptive AAM, and the proposed AAM.
tab11
Table 11: Cross validation results for UBIRIS in terms of specificity for conventional AAM, adaptive AAM, and the proposed AAM.
fig4
Figure 4: Average recognition performance over the ten cross validation rounds for the three datasets in terms of (a) accuracy, (b) sensitivity, and (c) specificity, for the conventional AAM, adaptive AAM, and the proposed AAM.
5.2. Fitting Optimization: ABC versus Adaptive ABC

In order to evaluate the efficiency of the proposed adaptive ABC, the three datasets (CASIA, 2.5D, and UBIRIS) have been tested and evaluated over 10 rounds. The proposed adaptive ABC outperformed the conventional ABC in the optimization process of fitting the model to the original image. The performance of the adaptive ABC against the conventional ABC over the three datasets is tabulated in Table 12 and illustrated in Figures 5 and 6, respectively.
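For reference, the conventional ABC neighborhood search that the adaptive variant modifies generates a candidate food source as v_ij = x_ij + φ(x_ij − x_kj), where x_k is a randomly chosen neighbor and φ is drawn uniformly from [−1, 1] [39]. A minimal Python sketch of that standard step (the function name and interface are illustrative; the paper's adaptive selection rule itself is not reproduced here):

```python
import random

def abc_candidate(x, population, rng=random):
    """Standard ABC neighborhood search: perturb one randomly chosen
    dimension j of food source x toward/away from a random neighbor,
    v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1)."""
    k = rng.randrange(len(population))   # random neighbor index
    j = rng.randrange(len(x))            # single dimension to perturb
    phi = rng.uniform(-1.0, 1.0)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - population[k][j])
    return v

pop = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]    # toy food sources
candidate = abc_candidate(pop[0], pop, rng=random.Random(1))
```

In the AAM setting each food source encodes candidate shape/appearance parameters, and the fitness is the fit of the synthesized model to the target image.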

tab12
Table 12: Mean time (in seconds) and its standard deviation for the three datasets taken by conventional and adaptive ABC algorithm for the best fitting process.
fig5
Figure 5: Average time taken to fit the model to the original image by the conventional and adaptive ABC algorithms over the three datasets.
fig6
Figure 6: Standard deviation of the fitting time for the conventional and adaptive ABC algorithms over the three datasets.

Moreover, the best food sources obtained from different iterations are illustrated in Figure 8.

Discussion. Figure 7 depicts the image outputs of the fitting performance under different iterations. When the results of the ABC algorithm are compared with those of the adaptive ABC algorithm over the three datasets, each with ten random runs, the t-test shows that the proposed method is statistically significant and has outperformed the standard method (p values of …, 0.0024, and 0.0016, respectively, for the three datasets). The adaptive ABC algorithm takes relatively less time than the conventional ABC algorithm, except in the third cross validation round of the third dataset. To draw a conclusion on overall performance, the mean value has been taken over all rounds and plotted in Figure 5. Based on these results, it is evident that the proposed adaptive ABC has outperformed the standard ABC in fitting efficiency, as the performance deficit in the third round of the third dataset is negligible compared to its gains over the other rounds. Comparing across the datasets, the mean values clearly show that the adaptive ABC consumes less time than the conventional ABC algorithm in fitting the model. The standard deviation shows a similar result, except for the UBIRIS dataset. Overall, the adaptive ABC is more efficient than the conventional ABC when dealing with the AAM fitting problem.
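The significance claim above rests on a t-test over matched runs of the two algorithms. The paired t statistic can be computed as in the following generic sketch (not the paper's own test code; the sample times are made up, and a p value would then be read from a t table with n − 1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """Paired t statistic for matched samples a and b, e.g. per-run
    fitting times of the conventional vs. adaptive ABC on the same folds."""
    d = [x - y for x, y in zip(a, b)]        # per-run differences
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical per-run fitting times (seconds) for two algorithms:
t = paired_t_statistic([3.0, 5.0, 8.0], [1.0, 2.0, 4.0])
```

A large positive t (here about 5.2) with a small p value indicates the second algorithm is systematically faster than the first across the matched runs.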

fig7
Figure 7: Example of fitting a face model at (a) iteration 1, (b) iteration 2, and (c) iteration 3; (d) the final fitted model; (e) the standard AAM image; and (f) the original image.
fig8
Figure 8: Graphical representation of the best food sources against the number of iterations.
5.3. Evaluating the Proposed AAM Method Using Fitting Errors

The mean square error (MSE) is the average of the squared errors between the target image and the estimated model readings in a data sample: MSE = (1/n) Σᵢ (xᵢ − x̂ᵢ)², where xᵢ is the target reading and x̂ᵢ the corresponding model estimate. For example, suppose 100 landmarks have been fitted over the target image in one round and 95 landmark points land exactly on their target positions. The remaining 5 landmarks represent error, because the fitting has not converged until the error goes to zero. If 5 out of 100 points fail to reach their target coordinates exactly, the fitting error is 5/100 = 0.05. To analyze and evaluate the performance using MSE, 10 rounds of experiments have been conducted over the three datasets. The fitting error values of the 10 rounds of experiments for the conventional AAM, adaptive AAM, and proposed AAM techniques are presented in Table 13 and Figure 9, respectively. In addition, Table 14 shows the t-test results for the proposed AAM against the conventional AAM and the adaptive AAM, in terms of accuracy and fitting error, over the three datasets.
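The fitting-error computation and the 5-of-100 worked example above can be expressed as follows (illustrative Python; `fitting_error` is our own name, and per-landmark scalar errors are used for simplicity):

```python
def fitting_error(fitted, target):
    """Mean squared error between fitted landmark readings and the
    target readings: (1/n) * sum of squared differences."""
    return sum((f - t) ** 2 for f, t in zip(fitted, target)) / len(fitted)

# The worked example: 95 of 100 landmarks fit exactly (zero error) and
# 5 miss by a unit distance, giving a fitting error of 5/100 = 0.05.
err = fitting_error([0.0] * 95 + [1.0] * 5, [0.0] * 100)
```

The error is zero only when every landmark reaches its target coordinates, which is the convergence condition referred to above.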

tab13
Table 13: Fitting errors of the proposed and conventional AAM techniques for (a) the CASIA face image dataset, (b) the 2.5D face dataset, and (c) the UBIRIS dataset.
tab14
Table 14: t-test values of the proposed AAM versus the conventional AAM and the proposed AAM versus the adaptive AAM.
fig9
Figure 9: Average fitting error over 10 rounds of fitting the model to the original image by the conventional AAM, adaptive AAM, and the proposed AAM.

6. Conclusion

In this paper, a face recognition technique has been proposed based on AAM-extracted features. The AAM fitting problem has been solved by introducing a new adaptive ABC algorithm, in which the neighborhood selection is accelerated by considering the nature of the current food position. The adaptive ABC speeds up the AAM fitting and hence improves the efficiency of recognition without compromising recognition performance. The performance of the technique has been analyzed on three datasets: the CASIA face database version 5, our proprietary 2.5D face dataset, and the UBIRIS dataset. The efficiency of recognition has been evaluated through experiments on the 1 : N face recognition problem. The experimental results show that the proposed technique is statistically significant in terms of the aforesaid measures. In addition, the fitting error between the generated model and the target image shows that the proposed AAM is more efficient than conventional AAM approaches. Moreover, the graphical illustration of the fitting efficiency of the adaptive ABC over the conventional ABC reveals an improvement in recognition efficiency. We conclude that the adaptive ABC improves recognition efficiency without compromising accuracy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research has been funded by the Ministry of Science, Technology and Innovation (MOSTI) through ERGS/1/2011/STG/UKM/2/48 (TK) under the title of 2D-3D Hybrid Face Matching via Fuzzy Bees Algorithm for Forensic Identification. The authors would also like to thank CyberSecurity Malaysia and the Royal Malaysia Police's Forensics Lab for their support of this research.

References

  1. D. Sánchez and P. Melin, “Optimization of modular granular neural networks using hierarchical genetic algorithms for human recognition using the ear biometric measure,” Engineering Applications of Artificial Intelligence, vol. 27, pp. 41–56, 2014.
  2. M.-G. Kim, H.-M. Moon, Y. Chung, and S. B. Pan, “A survey and proposed framework on the soft biometrics technique for human identification in intelligent video surveillance system,” Journal of Biomedicine and Biotechnology, vol. 2012, Article ID 614146, 7 pages, 2012.
  3. A. I. Fuente, L. D. V. Puente, J. J. V. Calvo, and M. R. Mateos, “Optimization of a biometric system based on acoustic images,” The Scientific World Journal, vol. 2014, Article ID 780835, 13 pages, 2014.
  4. H. Benaliouche and M. Touahria, “Comparative study of multimodal biometric recognition by fusion of iris and fingerprint,” The Scientific World Journal, vol. 2014, Article ID 829369, 13 pages, 2014.
  5. A. K. Jain and A. Kumar, Biometrics of Next Generation: An Overview, Second Generation Biometrics, Springer, 2012.
  6. J. Huang, B. Heisele, and V. Blanz, “Component-based face recognition with 3D morphable models,” in Audio- and Video-Based Biometric Person Authentication, vol. 2688 of Lecture Notes in Computer Science, pp. 27–34, 2003.
  7. W. Xia, S. Yin, and P. Ouyang, “A high precision feature based on LBP and Gabor theory for face recognition,” Sensors, vol. 13, no. 4, pp. 4499–4513, 2013.
  8. E. Reza, A. Jahani, A. Amiri, and M. Nazari, “Expression-independent face recognition using biologically inspired features,” Indian Journal of Computer Science and Engineering, vol. 2, no. 3, pp. 492–499, 2011.
  9. M. S. Ahuja and S. Chhabra, “Effect of distance measures in PCA based face recognition,” International Journal of Enterprise Computing and Business Systems, vol. 1, no. 2, 2011.
  10. M. Balasubramanian, S. Palanivel, and V. Ramalingam, “Fovea intensity comparison code for person identification and verification,” Engineering Applications of Artificial Intelligence, vol. 23, no. 8, pp. 1277–1290, 2010.
  11. K. Niinuma, H. Han, and A. K. Jain, “Automatic multi-view face recognition via 3D model based pose regularization,” in Proceedings of the IEEE 6th International Conference on Biometrics: Theory, Applications and Systems (BTAS ’13), pp. 1–8, Arlington, Va, USA, September–October 2013.
  12. G. G. Gordon, “Face recognition based on depth and curvature features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’92), pp. 808–810, Champaign, Ill, USA, June 1992.
  13. A. Sethuram, K. Ricanek, and E. Patterson, “A comparative study of active appearance model annotation schemes for the face,” in Proceedings of the 7th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP '10), pp. 367–374, ACM, New York, NY, USA, December 2010.
  14. R. Donner, M. Reiter, G. Langs, P. Peloschek, and H. Bischof, “Fast active appearance model search using canonical correlation analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1690–1694, 2006.
  15. H. Gao, H. K. Ekenel, M. Fischer, and R. Stiefelhagen, “Boosting pseudo census transform features for face alignment,” in Proceedings of the British Machine Vision Conference (BMVC '11), pp. 1–11, Dundee, UK, 2011.
  16. S. J. Lee, K. R. Park, and J. Kim, “A comparative study of facial appearance modeling methods for active appearance models,” Pattern Recognition Letters, vol. 30, no. 14, pp. 1335–1346, 2009.
  17. D. Govindaraj, “Application of active appearance model to automatic face replacement,” Eurographics, pp. 15–16, 2011.
  18. P. Sauer, T. Cootes, and C. Taylor, “Accurate regression procedures for active appearance models,” in Proceedings of the British Machine Vision Conference (BMVC '11), pp. 30.1–30.11, BMVA Press, September 2011.
  19. X. Gao, Y. Su, X. Li, and D. Tao, “A review of active appearance models,” IEEE Transactions on Systems, Man and Cybernetics C: Applications and Reviews, vol. 40, no. 2, pp. 145–158, 2010.
  20. H.-S. Lee and D. Kim, “Tensor-based AAM with continuous variation estimation: application to variation-robust face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 6, pp. 1102–1116, 2009.
  21. J. Peyras, A. Bartoli, and S. Khoualed, Pools of AAMs: Towards Automatically Fitting any Face Image, British Machine Vision Association, 2008.
  22. C. M. Christoudias and T. Darrell, “Light field appearance manifolds,” in Computer Vision—ECCV 2004, vol. 3024 of Lecture Notes in Computer Science, pp. 481–493, 2004.
  23. C. M. Christoudias, L. Morency, and T. Darrell, “Non-parametric and light-field deformable models,” Computer Vision and Image Understanding, vol. 104, no. 1, pp. 16–35, 2006.
  24. X. Huang, J. Gao, S.-C. S. Cheung, and R. Yang, Manifold Estimation in View-based Feature Space for Face Synthesis across Poses, vol. 5994 of Lecture Notes in Computer Science, 2010.
  25. W. Wang, S. Shan, W. Gao, and B. Yin, “Combining active shape models and active appearance models for accurate image interpretation,” in Proceedings of the 39th International Conference on Acoustics, Speech and Signal Processing (ICASSP ’14).
  26. S. Yan, C. Liu, S. Z. Li, H. Zhang, H.-Y. Shum, and Q. Cheng, “Texture-constrained active shape models,” in Proceedings of the International Workshop on Generative Model Based Vision, 2002.
  27. J. Sung, T. Kanade, and D. Kim, “A unified gradient-based approach for combining ASM into AAM,” International Journal of Computer Vision, vol. 75, no. 2, pp. 297–309, 2007.
  28. J. Sung, T. Kanade, and D. Kim, “Pose robust face tracking by combining active appearance models and cylinder head models,” International Journal of Computer Vision, vol. 80, no. 2, pp. 261–272, 2008.
  29. S. Lucey, I. Matthews, C. Hu, Z. Ambadar, F. De La Torre, and J. Cohn, “AAM derived face representations for robust facial action recognition,” in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR '06), pp. 155–162, April 2006.
  30. Y. Kawarazaki, G. Duan, T. Shinkawa, and Y.-W. Chen, “Viewpoint-specific active appearance model for robust feature point extraction,” in Proceedings of the 6th International Conference on Computer Sciences and Convergence Information Technology (ICCIT '11), pp. 883–886, Seogwipo, Republic of Korea, December 2011.
  31. J. Matthews and S. Baker, “Active appearance models revisited,” International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.
  32. A. Andreopoulos and J. K. Tsotsos, “A novel algorithm for fitting 3-D active appearance models: applications to cardiac MRI segmentation,” in Image Analysis, vol. 3540 of Lecture Notes in Computer Science, pp. 729–739, Springer, 2005.
  33. J. Xiao, S. Baker, I. Matthews, and T. Kanade, “Real-time combined 2D+3D active appearance models,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. 535–542, July 2004.
  34. C. Hu, J. Xiao, I. Matthews, S. Baker, J. Cohn, and T. Kanade, “Fitting a single active appearance model simultaneously to multiple images,” in Proceedings of the British Machine Vision Conference, September 2004.
  35. J. Sung and D. Kim, “STAAM: fitting a 2D+3D AAM to stereo images,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2781–2784, Atlanta, Ga, USA, October 2006.
  36. D. Sanchez, P. Melin, O. Castillo, and F. Valdez, “Modular granular neural networks optimization with multi-objective hierarchical genetic algorithm for human recognition based on iris biometric,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 772–778, Cancún, México, June 2013.
  37. P. Melin, F. Olivas, O. Castillo, F. Valdez, J. Soria, and M. Valdez, “Optimal design of fuzzy classification systems using PSO with dynamic parameter adaptation through fuzzy logic,” Expert Systems with Applications, vol. 40, no. 8, pp. 3196–3206, 2013.
  38. M. H. Abdulameer, S. N. H. Sheikh Abdullah, and Z. A. Othman, “Support vector machine based on adaptive acceleration particle swarm optimization,” The Scientific World Journal, vol. 2014, Article ID 835607, 8 pages, 2014.
  39. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  40. D. Karaboga and C. Ozturk, “A novel clustering approach: artificial Bee Colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 652–657, 2011.
  41. X. Liu, “Video-based face model fitting using adaptive active appearance model,” Image and Vision Computing, vol. 28, no. 7, pp. 1162–1172, 2010.
  42. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  43. K. Sushil, K. S. Tarun, P. Millie, and A. K. Ray, “Adaptive artificial bee colony for segmentation of CT lung images,” in Proceedings of the International Conference on Recent Advances and Future Trends in Information Technology, pp. 1–5, 2012.
  44. A. Baykasoğlu, L. Özbakır, and P. Tapkan, “Artificial bee colony algorithm and its application to generalized assignment problem,” in Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, F. T. Chan and M. K. Tiwari, Eds., pp. 113–144, Itech Education and Pub., Vienna, Austria, 2007.
  45. G. Yan and C. Li, “An effective refinement Artificial Bee Colony optimization algorithm based on chaotic search and application for PID control tuning,” Journal of Computational Information Systems, vol. 7, no. 9, pp. 3309–3316, 2011.
  46. M. Ma, J. Liang, M. Guo, Y. Fan, and Y. Yin, “SAR image segmentation based on artificial bee colony algorithm,” Applied Soft Computing Journal, vol. 11, no. 8, pp. 5205–5214, 2011.
  47. Chinese Academy of Sciences, CASIA Face Database V5, http://biometrics.idealtest.org/.
  48. H. Proença and L. A. Alexandre, UBIRIS Iris Image Database, 2004, http://iris.di.ubi.pt/.