Abstract

The key to improving the image recognition rate lies in the extraction of image features. In this paper, a feature extraction method is proposed for images with similar features against a strong noise background: two-dimensional principal component analysis (2DPCA) combined with wavelet theory and frame theory. Considering that images are affected by man-made and environmental noise, the proposed algorithm combines improvements to several existing algorithms. First, the images are preprocessed by feature-enhancement-based image enhancement, in which the images are processed by the wavelet transform. Then, the preprocessed image matrices are used to obtain the eigenvectors, and additional vectors are interpolated between the eigenvectors according to frame theory, which provides more sufficient information in the frame space and extracts the image features better. Finally, the algorithm is compared with other algorithms on the standard ORL face recognition database. The recognition rate and recognition time are compared through simulation experiments in order to demonstrate the validity of the proposed algorithm.

1. Introduction

Image recognition is an important area of artificial intelligence, and the accuracy of image recognition keeps improving. Principal component analysis (PCA) is a common linear transformation method for extracting features in image recognition, and it has been developed very well. However, its computational cost is large. In face recognition, the one-dimensional PCA algorithm needs to transform the two-dimensional image matrix into a one-dimensional vector. If the resolution of the image is 112 × 92, the dimension of the image vector is as high as 10304, and the larger the data set, the larger the resulting data matrix. When the image set contains 100000 samples, the data matrix is of size 10304 × 100000, and applying one-dimensional PCA directly to such a matrix is computationally expensive. This leads to a high-dimensional space and a relative increase in computational complexity, and the high dimensionality of the small-sample problem also makes the image lose its structural information, which is not conducive to accurate detection and recognition. To address the defects of one-dimensional PCA, [1] proposes a face recognition algorithm based on 2DPCA, a linear unsupervised statistical method. In general, the dimension of a face image is large, so the computation involved in face image processing is heavy; using the one-dimensional PCA algorithm increases the computational complexity and the processing time, so 2DPCA is introduced to deal with the images [2]. The 2DPCA algorithm is a feature extraction method that operates directly on the image matrix and avoids converting the two-dimensional image matrix into a one-dimensional vector as one-dimensional PCA does, which greatly reduces the amount of computation. 2DPCA also makes use of the differences between samples, effectively preserves the sample structure information, adds discriminative information, and has become a new research hot spot [3]. Reference [4] explains the application of linear transformation in matrix theory. It uses 2DPCA to find the feature vectors and then uses classical one-dimensional PCA to compress them further, so that the dimension is reduced. The result shows that the covariance matrix can be obtained directly from the image, which is more effective in terms of recognition rate. In [5–9], the classical 2DPCA algorithm is improved, but the intra-class feature vectors are not fully considered. Image recognition algorithms are constantly being updated and optimized; the classical PCA algorithm, improved PCA and 2DPCA algorithms, the SVM algorithm, and convolutional neural network algorithms can all be used for face recognition. References [10–12] first partition the image into blocks, then use the 2DPCA algorithm to extract features from each block, and finally use an information fusion method to complete the feature extraction. These algorithms use only local information and easily lose the information between blocks of the original face image, so the extracted information is not complete enough. In [13], a face recognition method based on the average partition of 2DPCA is proposed.
In this method, the image matrix is first divided into blocks, and the intra-class normalized blocks are used to construct the overall scatter matrix; the projection is then carried out, which reduces the feature dimension, avoids the use of singular value decomposition, and reduces the within-class distance between samples. The experimental result shows that the recognition rate of this method is higher than that of the 2DPCA algorithm. However, the above algorithms do not apply a wavelet transform; they process the images directly with the 2DPCA algorithm and cannot effectively handle external influences (such as the changes of expression and pose in the ORL face database), so the accuracy of the extracted features is not high. In [14], a face recognition algorithm combining the advantages of the wavelet transform (WT) and 2DPCA is presented. First, the first-order wavelet transform is used to decompose the image, reduce the noise, increase the feature information, and suppress external influences (such as the changes of expression and pose in the ORL face database). Then the 2DPCA algorithm is used to reduce the image dimensions and extract the features. The result shows that the image recognition rate is improved after using the wavelet transform. However, after the image is processed by the wavelet, the unimproved 2DPCA algorithm does not use the redundant information between the eigenvectors; it is difficult to obtain the maximum projection value, and the extracted information is not accurate enough. Therefore, this paper proposes an image recognition method based on 2DPCA combined with wavelet theory and frame theory, which can fully exploit the feature information and improve the recognition rate.

In summary, although the recognition rates of these improved algorithms are slightly higher than that of the classical 2DPCA face recognition algorithm, the recognition effect is still not very good for similar features. The analysis shows that none of these algorithms use the redundant information between the feature vectors, so it is difficult to obtain the maximum projection value, and the extracted information is not accurate enough. This project builds on wavelet decomposition and denoising and adopts an improved 2DPCA dimensionality reduction: the orthogonal principal component space is expanded into a (non-orthogonal) frame principal component space, so as to obtain more sufficient information in the frame space. The algorithm is compared with other algorithms on the standard ORL face recognition database. The recognition rate and recognition time are compared through simulation experiments, so as to demonstrate the effectiveness of image recognition by two-dimensional principal component analysis combined with wavelet theory and frame theory.

2. Image Preprocessing Based on Feature Enhancement

For detecting and recognizing small-target images against a strong noise background, processing the original image directly degrades the detection results. Therefore, preprocessing the image helps to extract the image features and thus improves the detection accuracy and recognition rate. In the ORL face database, the images are affected by the similarity of features such as pose, so the feature information can be enhanced by the wavelet transform to improve the recognition rate.

This section reviews the idea of the first-order wavelet transform, whose basic idea is illustrated in Figure 1. After an image is processed by the wavelet, the image information is decomposed into subband image signals with different spatial resolutions, frequency characteristics, and directional characteristics. In this way, wavelet decomposition provides good local information. At each level of the wavelet transform, the image is divided into one low frequency component and three high frequency components (corresponding to the horizontal, vertical, and diagonal detail components, respectively) [15].

Given an image A, the result of its first-order wavelet decomposition is shown in Figure 1. Among them, LL is the low frequency component of the image, which is a smoothed version of the original image; HL represents the horizontal high frequency component of the image, LH represents the vertical high frequency component, and HH represents the diagonal high frequency component.
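As an illustration of this decomposition, the following minimal sketch uses PyWavelets (an assumed tool; the paper does not name its wavelet implementation or wavelet basis) to split a grayscale image into the four subbands described above.

import numpy as np
import pywt

# Placeholder 112 x 92 grayscale image standing in for an ORL face image.
image = np.random.rand(112, 92)

# First-order 2D wavelet decomposition: one approximation and three detail subbands.
# The 'haar' basis is an illustrative assumption.
LL, (HL, LH, HH) = pywt.dwt2(image, 'haar')

# LL: low frequency (smoothed image); HL/LH/HH: horizontal/vertical/diagonal details.
print(LL.shape, HL.shape, LH.shape, HH.shape)  # each subband is roughly half-size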

Let A_i denote the training sample images, i = 1, 2, ..., N, where N denotes the number of training samples. First, all the training images are processed in order: each one passes through the first-order wavelet decomposition, and the resulting low-frequency component and high-frequency components are then processed. The low-frequency component and the high-frequency components are the subband images after wavelet decomposition. In order to match them with the training sample images, they are expanded by padding with zero matrices, so that the padded matrices LL, HL, LH, and HH have the same size as the original images.

After wavelet reconstruction, the enhanced image is obtained from the weighted subbands (formula (2)), in which the low frequency component is multiplied by a coefficient α and the high frequency components by a coefficient β. Since the main energy of the noise component is generally concentrated in the detail components of the wavelet decomposition, the effect of the noise can be reduced by attenuating the high frequency components, and the feature information can be enhanced by amplifying the low frequency component. Accordingly, the range of the low frequency coefficient α is 1.2-1.5 and the range of the high frequency coefficient β is 0.9-1.0; if the values of α and β exceed these ranges, the image will be distorted.
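A minimal sketch of this feature-enhancement step, again assuming PyWavelets and a Haar basis, and assuming that the reconstruction simply applies the inverse wavelet transform to the scaled subbands (the exact form of formula (2) is not reproduced here):

import numpy as np
import pywt

def enhance_image(image, alpha=1.3, beta=0.95, wavelet='haar'):
    # Decompose, scale the approximation by alpha (1.2-1.5) and the details
    # by beta (0.9-1.0), then reconstruct with the inverse transform.
    LL, (HL, LH, HH) = pywt.dwt2(image, wavelet)
    enhanced = pywt.idwt2((alpha * LL, (beta * HL, beta * LH, beta * HH)), wavelet)
    # idwt2 may return one extra row/column for odd-sized inputs; crop back.
    return enhanced[:image.shape[0], :image.shape[1]]

# The experiments in Section 4 use alpha = 1.5 and beta = 1.0.
preprocessed = enhance_image(np.random.rand(112, 92), alpha=1.5, beta=1.0)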

3. The 2DPCA Algorithm of the Frame Theory

3.1. The 2DPCA Algorithm

For a given image A of size m × n, the preprocessed image is obtained by the wavelet transformation described in Section 2. Let X be an n-dimensional unit column vector. The image is then transformed by the linear transformation Y = AX.

The image A is directly projected onto X to obtain Y = AX, which is called the projection feature vector of the image A. The optimal projection space can be determined from the scatter of the projection feature vectors Y.

Let S_x denote the covariance matrix of the projection feature vectors of the training samples, and let tr(S_x) represent the trace of S_x. When tr(S_x) takes its maximum value, its physical meaning is as follows: find a projection axis X such that the overall scatter of the projection feature vectors of all training samples on this axis is maximized. tr(S_x) can be written as tr(S_x) = X^T G_t X, where G_t = (1/N) Σ_{j=1}^{N} (A_j − Ā)^T (A_j − Ā) is the image scatter (covariance) matrix and Ā is the mean image of the training samples (4). By (4), we can get the criterion J(X) = X^T G_t X (5).

U = [x_1, x_2, ..., x_n] is the orthogonal matrix composed of the eigenvectors of G_t corresponding to its eigenvalues. Let the eigenvalues of the covariance matrix G_t be denoted λ_1 ≥ λ_2 ≥ ... ≥ λ_n ≥ 0, with corresponding orthonormal eigenvectors x_1, x_2, ..., x_n, so that G_t x_i = λ_i x_i. The spectral decomposition of the matrix G_t is therefore G_t = Σ_{i=1}^{n} λ_i x_i x_i^T. Substituting this into formula (5), we obtain J(X) = Σ_{i=1}^{n} λ_i (X^T x_i)^2.
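A short NumPy sketch of these two steps, the image scatter matrix and its eigendecomposition (the variable names and the random placeholder data are illustrative, not taken from the paper):

import numpy as np

def image_scatter_matrix(images):
    # images: array of shape (N, m, n); returns the n x n matrix
    # G_t = (1/N) * sum_j (A_j - mean)^T (A_j - mean).
    centered = images - images.mean(axis=0)
    n = images.shape[2]
    G = np.zeros((n, n))
    for A in centered:
        G += A.T @ A
    return G / len(images)

images = np.random.rand(200, 112, 92)               # placeholder training set
G_t = image_scatter_matrix(images)
eigvals, eigvecs = np.linalg.eigh(G_t)              # eigh returns ascending eigenvalues
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # reorder from large to small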

The feature subspace is constructed from the eigenvectors corresponding to the first d largest eigenvalues, X_opt = [x_1, x_2, ..., x_d], where the x_i are orthonormal.

In this case, J(X) reaches its maximum only when X is the eigenvector corresponding to the largest eigenvalue of G_t. The projection onto this eigenvector has the largest scatter, so choosing X as this eigenvector maximizes the criterion.

The physical meaning is that the overall dispersion of the projection feature vectors obtained by projecting the image matrices onto this space is the largest. The optimal projection axis is the eigenvector corresponding to the largest eigenvalue of the overall image scatter matrix G_t, and the vectors in the optimal projection space are orthonormal vectors that maximize J(X).

When the eigenvalues of G_t are arranged from large to small, the orthonormal eigenvectors corresponding to the first d eigenvalues are x_1, x_2, ..., x_d.

The feature matrix X = [x_1, x_2, ..., x_d] can be used to extract features: a given image sample A is projected onto X, such that Y_k = A x_k, k = 1, 2, ..., d.

In this way, we obtain a set of projection feature vectors Y_1, Y_2, ..., Y_d, which are called the principal component vectors of the image A. A suitable value of d is then chosen to form a matrix with dimensions m × d, which is called the feature image of the image A, that is, B = [Y_1, Y_2, ..., Y_d] = A[x_1, x_2, ..., x_d].

B is called the characteristic matrix or characteristic image of A.
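Putting the pieces of Section 3.1 together, the following self-contained sketch extracts the 2DPCA feature image of one sample (d = 10 is an arbitrary illustrative choice, not a value prescribed by the paper):

import numpy as np

def feature_image(A, images, d=10):
    # Image scatter matrix of the training set (shape (N, m, n)).
    centered = images - images.mean(axis=0)
    G_t = sum(M.T @ M for M in centered) / len(images)
    # Leading d orthonormal eigenvectors, largest eigenvalues first.
    _, eigvecs = np.linalg.eigh(G_t)
    X_opt = eigvecs[:, ::-1][:, :d]
    # Feature image B = A @ [x_1, ..., x_d], of size m x d.
    return A @ X_opt

train = np.random.rand(200, 112, 92)   # placeholder wavelet-enhanced samples
B = feature_image(train[0], train)     # 112 x 10 characteristic image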

3.2. The 2DPCA Algorithm of the Frame Theory

For small targets against a strong noise background, where some features are similar or the extracted information is incomplete, the 2DPCA algorithm based on frame theory makes the extracted features more accurate.

The mathematical idea of this algorithm is as follows. After the image has been preprocessed as described above, the frame theory proposed in this paper is applied, which mainly modifies the projection space used when the features are extracted. In mathematics, the eigenvectors corresponding to different eigenvalues are mutually orthogonal, so the eigenvectors are processed further: for the d eigenvalues there are d mutually orthogonal eigenvectors, and their arbitrary combinations give 2^d possible subsets, any one of which can be used to build the projection space for extracting image features. In this paper, one such combination is used: a largest projection axis is found between every two adjacent eigenvectors (this projection axis is not orthogonal to the other eigenvectors). For d eigenvalues, the number of resulting projection vectors is 2d − 1.

The specific algorithm of the frame-based 2DPCA is as follows. After 2DPCA, the extracted projection axes are x_1, x_2, ..., x_d. A new vector is then inserted between the projection axes x_k and x_{k+1} (k = 1, 2, ..., d − 1), and so on, so that a vector is inserted between every two adjacent feature vectors. In this way, 2d − 1 feature vectors are obtained, which can be used to extract the image features. In this project, the inserted vector is the mean value of the two adjacent eigenvectors, which yields a series of non-orthogonal basis vectors. For a given image A, the image is projected onto the new projection space X_F formed by these 2d − 1 vectors: B_F = A X_F.

In this way, we obtain a new set of projection feature vectors of A, which are also called the principal component vectors of A. A suitable value of d is then chosen to form the matrix B_F, which is called the feature image of the image A.

B_F is the feature image of the image A extracted under the 2DPCA algorithm of the frame theory.
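A sketch of one plausible reading of this insertion step, given the orthonormal 2DPCA eigenvectors as the columns of an n × d matrix (whether the paper normalizes the inserted mean vectors is not stated, so the normalization below is an assumption):

import numpy as np

def frame_projection_space(X_opt):
    # X_opt: n x d matrix of orthonormal 2DPCA eigenvectors.
    # Insert the mean of every pair of adjacent eigenvectors between them,
    # producing an n x (2d - 1) non-orthogonal (frame) projection space.
    columns = []
    d = X_opt.shape[1]
    for k in range(d - 1):
        x_k, x_next = X_opt[:, k], X_opt[:, k + 1]
        mid = (x_k + x_next) / 2.0
        mid /= np.linalg.norm(mid)   # assumed normalization of the inserted axis
        columns.extend([x_k, mid])
    columns.append(X_opt[:, -1])
    return np.column_stack(columns)

# Frame-based feature image: B_F = A @ frame_projection_space(X_opt), size m x (2d - 1).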

Finally, the above feature images are used for recognition.

After the image samples have been wavelet transformed, they are processed by the 2DPCA algorithm of the frame theory, and the characteristic matrix of each image is obtained. The nearest neighbor criterion is then used for recognition. The characteristic matrix of the i-th training sample is denoted B_i, and the characteristic matrix of a testing sample is denoted B.

The distance can be obtained as d(B, B_i) = ||B − B_i||, where ||·|| represents the Euclidean distance between B and B_i, i = 1, 2, ..., N. The total number of training samples is N, and the test sample is finally assigned to the class of its nearest training sample according to the nearest neighbor criterion.
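A minimal sketch of this nearest-neighbor step (the distance is taken here as the Frobenius norm of the difference between feature matrices, which matches the Euclidean distance described above):

import numpy as np

def nearest_neighbor_label(B, train_features, train_labels):
    # B: feature matrix of the test sample; train_features: list of B_i;
    # train_labels: class label of each training sample.
    distances = [np.linalg.norm(B - B_i) for B_i in train_features]
    return train_labels[int(np.argmin(distances))]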

4. Simulation Experiment

In this section, a simulation experiment is used to demonstrate the effectiveness of the method proposed in this paper. The experimental data set is the ORL face database [16], created by the AT&T laboratory at the University of Cambridge; it contains 400 images of 40 people, some of whom were photographed with changes in pose, expression, and facial ornaments. This face database has often been used in early face recognition research.

4.1. Experimental Conditions

To verify the validity of image recognition by two-dimensional principal component analysis combined with wavelet theory and frame theory, this project is compared with the classical 2DPCA algorithm, the 2DPCA algorithm with wavelet transform, and the 2DPCA algorithm based on frame theory without wavelet processing. The test object is the ORL face database [16]. The ORL face database contains 40 people, each with 10 images of different poses and expressions, for a total of 400 images. The size of each face image is 112 × 92 pixels, and the gray level is 256. Facial expressions (eyes open or closed, smiling or not smiling) and facial details (wearing glasses or not) all vary across the ORL face database. Figure 2 shows samples of the first person in the ORL face database. The first 5 images of each person are selected as the training set, 200 images in total, and the last 5 images of each person are selected as the testing set, also 200 images in total. In this project, the coefficients α and β of formula (2) are set to 1.5 and 1.0, respectively, in the wavelet processing, and the reconstructed sample images are obtained after the wavelet transform. In the frame-based 2DPCA algorithm and the classical 2DPCA algorithm, the eigenvectors corresponding to the larger eigenvalues of the covariance matrix are selected as the best projection directions. Since the choice of projection axes affects the correct face recognition rate, the experiment examines how the 2DPCA algorithm, the wavelet-decomposed 2DPCA algorithm, the frame-based 2DPCA algorithm, and the algorithm of this project behave as the number of projection axes changes, in terms of correct recognition rate and time on the ORL face database. The simulation results are shown in Figure 1 and Tables 1 and 2.
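For reference, a sketch of this train/test split, assuming the ORL images have been loaded into an array grouped by person (the loading step itself is not shown and the layout is an assumption):

import numpy as np

def split_orl(faces):
    # faces: array of shape (40, 10, 112, 92), the ORL images grouped by person.
    # First 5 images per person -> training set; last 5 -> testing set.
    train = faces[:, :5].reshape(-1, 112, 92)
    test = faces[:, 5:].reshape(-1, 112, 92)
    labels = np.repeat(np.arange(40), 5)   # class label for each of the 200 images
    return train, test, labels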

4.2. Results Analysis

Table 1 shows how the correct recognition rate of the 2DPCA algorithm combined with wavelet theory and frame theory changes with the number of projection axes on the ORL face database, and Table 2 shows the corresponding recognition time. By comparison with the other algorithms considered in this paper, the proposed algorithm improves the recognition rate and reduces the recognition time.

The flow chart of the proposed algorithm is shown in Figure 3.

5. Conclusion

In this paper, a feature extraction method is proposed for images with similar features against a strong noise background. Image preprocessing based on feature enhancement is first applied, which reduces the influence of noise on the image. Then, a 2DPCA algorithm based on frame theory is proposed to extract the face features. The experimental results show that the algorithm of this project not only improves the face recognition rate but also shortens the recognition time. The algorithm applies frame theory to the eigenvectors corresponding to the eigenvalues, so that the redundant information of the image can be used to extract the feature information more effectively when identifying similar image features.

Data Availability

(1) The source of the experimental data set is the ORL face database (created by the AT&T laboratory at the University of Cambridge; it contains 400 images of 40 people, some of whom were photographed with changes in pose, expression, and facial ornaments; the face database has often been used in early face recognition research). The data set used in this work may be small, but the ultimate goal of the experiment is to verify image recognition and classification based on two-dimensional principal component analysis combined with wavelet theory and frame theory. In future research, we will carry out multiple experiments and extend them to different data sets collected by ourselves. (2) First of all, each image is processed by wavelet threshold denoising. This is the image preprocessing required when applying different data sets; it helps to extract the image features and thus improves the detection accuracy and classification recognition rate. (3) After the image has been preprocessed, the frame theory proposed in this paper is adopted, which mainly modifies the projection space used when the features are extracted. In mathematics, the eigenvectors corresponding to different eigenvalues are mutually orthogonal, so the eigenvectors are processed further: for the d eigenvalues there are d different orthogonal eigenvectors, and their arbitrary combinations give 2^d possible subsets, any one of which can be used to build the projection space for extracting the image features. In this experiment, one of the combinations is used: a largest projection axis is found between every two adjacent eigenvectors (this projection axis is not orthogonal to the other eigenvectors). For d eigenvalues, the number of projection vectors is 2d − 1. ORL Database of Faces (ORL): http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant nos. 61605026, U1504616, and 61503123, the Program for Science & Technology Innovation Talents in Universities of Henan Province under Grant no. 17HASTIT021, the Basic Research Project of the Henan Education Department under Grant no. 13A510188, and the Scientific Research Foundation of Henan University of Technology under Grant no. 2015RCJH14.